Threads for erock

    1. 12

      TypeScript's usage plateaued around 2020 (at a little over a third of the JS ecosystem having adopted it), and in the past year it has finally shown its first signs of decline as it is replaced with better, simpler alternatives like eslint-plugin-jsdocs (get started here), and a potential comment-based ES202X official type system that will likely lead to a rapid decline in the need for creating .ts files. With these potholes filled, TypeScript will need to either find new potholes to try to smooth out and justify its existence, or accept that its job is done and slowly fade away.

      As an outside observer (with a partner who is a front-end dev) I've not seen this at all. From what I've seen, TS remains extremely popular. Anyone in the space seen otherwise?

      1. 5

        Nope. I’ve just seen a bunch of frontend developers who were initially skeptical and now their code looks like Haskell code golf.

        They may be going overboard now but as new converts to the wonderful world of typing they are enthusiastic.

        I honestly think the TypeScript type system is just very well designed and ergonomic. I say that as someone who writes a lot more Go than TypeScript. I also lived through the days of half-assed type documentation in comments in JavaScript and Python. I believe the current state is better.

        1. 3

          haskell code golf

          What does this mean? Lots of interfaces? Weird infix operators?

          1. 2

            This is probably a reference to the common convention in TypeScript to give type variables single letter names. That combined with complex types (e.g. those using ternaries, infer, etc.) tends to be very hard to read. This is mitigated somewhat by using more descriptive type names. For example:

            type Reverse<T> = T extends []
              ? []
              : T extends [ infer X, ...infer Y ]
                ? [ ...Reverse<Y>, X ]
                : never
            

            …vs…

            type ReverseTuple<Tuple> = Tuple extends []
              ? []
              : Tuple extends [ infer Head, ...infer Tail ]
                ? [ ...ReverseTuple<Tail>, Head ]
                : never
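
            Both spellings compute the same thing; for example:

            // quick check of what the type evaluates to
            type R1 = Reverse<[1, 2, 3]>       // [3, 2, 1]
            type R2 = ReverseTuple<[1, 2, 3]>  // [3, 2, 1]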
            
          2. 2

            I was making a joke more about very complicated higher order type programming. Although maybe C++ template abuse would be a better joke. Stuff like this:

            type GetFieldType<T, P> = P extends `${infer Left}.${infer Right}`
                ? Left extends keyof T
                    ? FieldWithPossiblyUndefined<T[Left], Right>
                    : Left extends `${infer FieldKey}[${infer IndexKey}]`
                    ? FieldKey extends keyof T
                        ? FieldWithPossiblyUndefined<
                              GetIndexedField<Exclude<T[FieldKey], undefined>, IndexKey> | Extract<T[FieldKey], undefined>,
                              Right
                          >
                        : undefined
                    : undefined
                : P extends keyof T
                ? T[P]
                : P extends `${infer FieldKey}[${infer IndexKey}]`
                ? FieldKey extends keyof T
                    ? GetIndexedField<Exclude<T[FieldKey], undefined>, IndexKey> | Extract<T[FieldKey], undefined>
                    : undefined
                : undefined;
            
      2. 4

        Anecdotally, this is certainly not the case for me. I was very skeptical for years. I thought it’d be another CoffeeScript and kept waiting for an eslint-plugin-jsdocs-like solution to come along. Now that I’ve been using TypeScript regularly for a couple of years, I find myself starting all new projects in it and even slowly porting old projects to it as well. I would even plead guilty to occasionally going overboard with types, but overall, I’ve found it to be possible to get near Elm-levels of runtime errors (close to none). Overhauling data structures and doing major refactoring is a relative breeze now. Full “erasability” is turning out to be an excellent design goal and should make it possible for JS runtimes to run TypeScript without a build step.

      3. 2

        It's ubiquitous, with few signs of decline. Further, the TC39 proposal referenced in the quote is nowhere close to being accepted (if ever): https://github.com/tc39/proposal-type-annotations

    2. 3

      This is great for me because I’m working on a static site that needs search this week. :-)

      1. 1

        If you’re interested, I’d love to get your feedback on a service I created recently: https://pgs.sh

    3. 12

      The one feature I really crave to see on git forges is the ability to create a PR/MR without forking the project locally on that instance (the target instance). It would be nice to not need an account on the target instance at all, but the important need is to create a PR/MR without forking there. It should be simple to do this: someone with an account could just send a .patch file in git format-patch format. I regret that the design chosen makes that feature basically the very last thing, after things nobody cares about (following projects on other instances, big meh).

      Note: the problem with forking a project is that it consumes resources on the server. If you make forking avoidable for graceful interaction, you can let people create accounts with limited rights, and not worry too much about their resource consumption.

      1. 3

        Note: the problem with forking a project is that it consumes resources on the server.

        What resources? Not disk space (at least on GitHub): https://github.blog/2015-09-22-counting-objects/#your-very-own-fork-of-rails

        1. 3

          If you let people fork on your server, you can do copy-on-write optimizations at the time of the fork, but obviously you also need to let them push new content to your server, and basically you are acting like a data-storage service for them. It is then very difficult to distinguish between malicious usage of your storage resources and expected usage. For example, if the original repository has large-ish binary blobs stored in them, reasonable commits pushed to the new fork may contain many new versions of those large binary blobs.

          1. 4

            Oh, I see. So you’re basically worried about abusive users using repos as object storage? So the problem isn’t really forks, it’s allowing repo creation at all (but if you disable that, now suddenly nobody can send MRs because they can’t fork)?

            1. 4

              Yes. This is a concrete problem for me because some instance operators disable repo creation for non-trusted accounts for this reason, breaking the expected interaction patterns for contributions from non-trusted users.

        2. 2

          That’s really interesting to know, thanks. I had assumed for some reason that it was making a deep copy.

          For me then, it's a small cognitive load to fork. I now have this copy of the project under my account, that maybe receives a commit or two and then is usually abandoned. I can delete it to clean up the clutter, but what if I get on a roll and contribute again soon? I refork and do the whole dance again.

          It'd be nice to avoid all that and send a patch.

      2. 1

        This is the big problem. PRs are not part of git but have become integral because of its UX.

        There really isn’t a good solution here: either send patches via email or via “side” proprietary APIs.

        Git appraise looks interesting but outside collaboration is not possible.

        I could see an ActivityPub implementation of PRs being interesting, but you still need an account somewhere just to send a patch.

      3. 1

        It would be awesome to be able to just copy/paste a diff or patch.

        1. 2

          Yes, and a service that would accept such patches (for example in the git format-patch format that makes it easy and convenient to send a series of commits with commit messages, etc.) through a standard web API, for example a POST request, could easily be connected with another server/instance that would submit PRs/MRs on their user's behalf using this mechanism. I work on my favorite GitLab instance and can use it to "send an MR" to another instance: my instance computes the patch file (trivial), sends the POST request on my behalf (I provided my username on the target instance in a form field), and I get a login screen to authenticate my account on the receiving instance, which then creates the MR. This is easy to do with very simple and standard web interaction patterns, open to either manual or automated use.
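
          As a rough sketch of the submitting side (everything here is hypothetical: the endpoint path, the form field names, and the hostname are made up for illustration):

          // Hypothetical sketch: POST a `git format-patch` series to a target
          // forge's patch-submission endpoint. Runs on Node 18+ (built-in
          // fetch/FormData) or Deno.
          import { readFile } from "node:fs/promises";

          async function submitMergeRequest(patchPath: string): Promise<void> {
            const form = new FormData();
            form.set("username", "alice"); // your account name on the target instance
            form.set("patch", await readFile(patchPath, "utf8"));
            const res = await fetch("https://git.example.org/api/merge-requests", {
              method: "POST",
              body: form,
            });
            if (!res.ok) throw new Error(`submission failed: ${res.status}`);
            // The receiving instance would then ask the user to authenticate
            // and turn the patch series into an MR.
          }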

          1. 1

            100% want this.

    4. 26

      Do people actually like these landing pages?
      When I end up on a page that looks like the “…into this” example I immediately look for the Github/Gitlab/etc link. I find the boring old Github-standard source tree with the rendered README.md to be a much more useful starting point than these landing pages.

      1. 13

        I did some research on this for ggez once upon a time, reading every project page I could and trying to figure out what made me like or not like them, and comparing them with others. The loose recipe I developed is more or less these things, in this order:

        • name and one-sentence description of what the hell your tool is
        • one-paragraph introduction/pitch
        • a few bullet points for features and non-features
        • the actual meat, while the reader is still interested: example code, deeper discussions of the neat parts – though I find it difficult not to go TOO deep here; refer to other docs as much as possible, give the user entry points into the most useful things
        • Community/contact info
        • License

        This is far from the only recipe of course, and you always gotta modify it based on what you’re actually presenting. But that’s the format that tends to have all the stuff I actually want to see.

      2. 8

        These pages almost never have what I’m looking for, which is example code. I actively search “[unknown project] github” to avoid their homepage

        1. 1

          Hmm, if I’m looking for a binary download then example code isn’t exactly on the list

          1. 2

            Why would you want to download a binary for a tool you don’t know how to use?

        2. 1

          I've been playing around with oranda and IMO the primary target audience for this sort of tool is authors of command-line tools, not libraries. I find GitHub's "source-first" layout suboptimal here since I'm mostly interested in installation options, including "curl|sh". GitHub Releases isn't very friendly either.

      3. 5

        I find the boring old Github-standard source tree with the rendered README.md to be a much more useful starting point than these landing pages.

        Me too. I think there might be value in making a website instead, if it is well structured. But the whole idea of just presenting the exact same thing but with slicker visuals sounds silly to me.

        1. 3

          Besides the content, the standard layout is a great feature too. With a GitHub repo view my eyes are automatically drawn to certain areas because I know where to look for them. The "About" blurb, the number of commits, certain files in the repo (for example I notice the Cargo.toml in this case). And then I scroll down and the README.md will be waiting for me, which will be nice, boring (meaning easy to read) Markdown, no distracting colors or fonts.

      4. 4

        Especially since the important information (repo.description) is missing on the landing page.

        Landing pages do work but the recipe is primarily designed for non-engineers.

        We have a different landing page recipe which usually involves code

      5. 2

        I do tend to avoid them. Specifically because my brain doesn’t like the overly big “install” and “source” part. It first wants to have something like

        • what is this about
        • are there releases
        • how old are those releases
        • (for) which OS / language / framework
        • what do the issues look like (anything catching my eye)

        And most of them can be gleaned faster from the GitHub page. The above example could definitely be changed to include those things, but most of the time these websites are just SEO/PR fluff and have no content.

        But I'm also not an "end user" – probably more of a power user.

      6. 2

        When I’m not looking specifically for the source, landing pages have more info like a link to docs, community stuff, etc. There’s also some room for positive design personality that you won’t get being embedded inside a forge’s chrome—it doesn’t have to be the obnoxious, full-screen-banner-followed-by-3-callouts sort of layout.

        What I don't like is bloating a README to hell & back with far more than it needs—stuff like a massive image list of sponsors, badges for CI results, heck, basically any sort of multimedia that isn't a screenshot, is often better elsewhere. Ideally the README shouldn't have to be a RENDERME just to be readable, either.

        Basically I think the landing pages are often too fluffy in design, but so are the READMEs. Many ‘contemporary’ READMEs should be the landing page, and the README should be cut down to be plain-text readable.

    5. 12

      GitHub Pages is another easy way that's free. I've had plenty of posts do just fine hosted on GitHub Pages making their way to HN.

      1. 8

        Or gitlab, or sourcehut.

      2. 3

        A simple golang server does just fine for all my blog posts that hit FP.

        1. 3

          Depending on your hosting provider, that may have the same bandwidth cost issues as this post discusses.

          However, GitHub Pages is free regardless of the bandwidth (or they may just limit things themselves secretly at some point, I don’t know).

      3. 1

        Agreed, this or similar service by other code forges is a good option for open source websites.

        1. 3

          good option for open source websites.

          GitHub Pages doesn’t require the repo to be public to work fwiw!

          TigerBeetle.com is in a private GitHub repo hosted by GitHub Pages.

          1. 1

            Nice! Didn’t know that.

    6. 2

      I love the idea behind nixos but hate the implementation.

    7. 30

      Tailwind & co. kinda target something the author seems to forget: many web developers today are deeply entrenched in "component-based frameworks" that kinda prevent such repetition of "atomic CSS snippets", since, well, you develop a "component" that gets repeated as needed.

      Classic class-based CSS already does this, of course, but only for CSS + HTML; when you try to bring this component/class system to JavaScript, you often end up with something like React or Vue or whatever's trendy today.

      And then you have two competing "component" systems: CSS classes and "JS components".

      Tailwind kinda allows you to merge them again, reducing overhead when having to work with those JS systems.

      1. 6

        I personally prefer the CSS Modules approach to solve this problem, rather than bringing in a whole framework like Tailwind. CSS Modules give you the isolation of components but don’t force you to leave CSS itself behind, and when compiled correctly can result in a much smaller CSS payload than something hand-written. Sure, you lose the descriptive and semantically-relevant class names, but if you’re building your whole app with components, you really don’t need them.
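
        A minimal sketch of the pattern (the component and class names are made up; assumes a bundler with CSS Modules support, e.g. Vite or webpack's css-loader):

        // Button.tsx
        // Button.module.css defines a `.primary` class; the bundler rewrites it
        // to a unique generated name, so it can't collide with a `.primary`
        // declared in some other component's stylesheet.
        import styles from "./Button.module.css";

        export function Button(props: { label: string; onClick: () => void }) {
          return (
            <button className={styles.primary} onClick={props.onClick}>
              {props.label}
            </button>
          );
        }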

        That said, if I didn’t use something like React, or I just needed CSS that followed a similar modular approach, I guess I would reach for Tailwind. But realistically, CSS is so lovely these days that you really don’t need much of a framework at all.

        1. 3

          I find tailwind much easier to use than css modules when you stick to the defaults.

        2. 3

          CSS Modules is an abstraction at a lower level than Tailwind. The former can do everything Tailwind can do in terms of the end result. The latter provides really nice defaults/design tokens/opinions.

          1. 2

            CSS Modules is an abstraction at a lower level than Tailwind.

            Definitely, and that’s why I prefer it. The way I normally organize my applications is by using components, so I don’t really need a whole system for keeping all the styles in order. Ensuring that styles don’t conflict with one another when defined in different components is enough for me. But if I was doing something with just HTML and I didn’t have the power of a component-driven web framework, I probably would try out Tailwind or something like it.

    8. 2

      Continuing work on services for pico. We're ramping up to build a lot more services and are excited to see where we end up.

      https://blog.pico.sh

    9. 1

      Working on a microblog for lists that I launched a week ago. Specifically adding subdomain support.

      https://lists.sh

    10. 5

      It’s hard enough to ensure that you’re only using the correctly coloured function in the right place, until you consider that one of the main advantages of this sort of framework is sharing code across the server and the client.

      Hmm, I'm not sure this is the only benefit with SSR frameworks. A huge benefit is collocation of server-colored functions next to the client-colored functions they're coupled to. Take Remix, for example: the corresponding controller function (the loader) is in the same file as the view function (the page React component that generates HTML for the route). The route is generated automatically based on the path and file name.

      The bundler (esbuild) “smartly” figures out what should be run on the server (route, controller, view), the client (route and view), and then creates separate bundles as well as adds logic to automatically call the controller when a user navigates to the route.

      It’s true that sharing the view on both client and server is important here, but the real benefit of this framework is collocation and automatically linking a route with a controller and view.
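
      Roughly what that collocation looks like in a single Remix route file (a sketch from memory; exact imports and file-naming conventions vary between Remix versions):

      // app/routes/posts.$id.tsx: the file path defines the route
      import { json, type LoaderFunctionArgs } from "@remix-run/node"; // server-only module
      import { useLoaderData } from "@remix-run/react";

      const posts: Record<string, { title: string }> = { "1": { title: "Hello" } };

      // controller ("loader"): runs only on the server and is stripped
      // from the client bundle
      export async function loader({ params }: LoaderFunctionArgs) {
        return json({ post: posts[params.id ?? ""] ?? { title: "Not found" } });
      }

      // view: server-rendered on the first request, hydrated on the client;
      // Remix calls the loader for you on client-side navigations
      export default function PostRoute() {
        const { post } = useLoaderData<typeof loader>();
        return <article>{post.title}</article>;
      }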

    11. 27

      Getting rightfully shredded as closed-source spyware over at HN: https://news.ycombinator.com/item?id=30921231

      1. 7

        Also being prodded for using the name "Warp" (the name of a popular crate) and for trading on Rust's name for marketing.

      2. 4

        Yea they are roasting the CEO alive and rightfully so.

    12. 8

      I have not explored Vim9 script, so I don’t know how hopeful or how sad to be about the language itself. The documentation promises potentially significant performance improvements: “An increase in execution speed of 10 to 100 times can be expected.” (That said, like many people, I would much rather write scripts for Vim in Lua or Python. But maybe Vim9 script will improve the syntax as well as the performance?)

      But I do worry about this causing a significant rift in plugin development, since Neovim lists support for Vim9 script as a non-goal.

      1. 6

        The rift is already there. In the latest release Lua is pretty much first class and many plugins have already jumped ship and become Neovim-only. I don't expect Vim9 to open the gap much wider than it already is, and if it does (for example if Vim9-only plugins start having hot stuff people don't want to live without) it would not be surprising to see that non-goal be removed. After all, they have kept up pretty well with vimscript support and porting Vim patches in general.

        1. 6

          Agreed. After Neovim 0.5, I would need a really good set of arguments to move away from Neovim and the thriving plugin ecosystem using Lua.

        2. 2

          I could see pressure growing for vim9script support, but on the other hand, many may just author stuff in the legacy scripting for cross-compatibility because neither vim9script nor lua are necessary.

          I do hate to see this rift for code that needs the performance or flexibility though. It’s been pretty annoying for years where the core of an addon will be implemented in a pluggable scripting language and you have to make sure that’s feature-enabled and available and they’re all picking different languages. I’m disappointed that vim9script is becoming just another one of these, just without the external library dependency, and for now, definitely not available on nvim. It sounds like enough of a pain that I’d stay legacy, or do an IPC model like LSP, for compatibility, or just decide compatibility isn’t my problem.

          I think if vim9script takes off it will be through the sheer weight of vim's popularity compared to nvim and people not concerned about compatibility, or willing to maintain two or more copies of the same logic. But I'm also not sure it'll take off, and I would've liked to see first-class lua in vim too. Just statically linked and guaranteed in the build would've been enough for me!

          Anyway, maybe it’s sad-but-okay if it’s just time to start saying vim and nvim are becoming too different. Clearly that’s happened with lua-only plugins.

    13. 1

      I am surprised that remote development only caught on again recently. When I was at Facebook a few years back (~2012), everyone had a remote machine doing all development work. I have done the same myself ever since. The portability is unparalleled. My laptop effectively only runs the windowing system and drives the monitors.

      Of course, that also means for the longest time, your “IDE” options are limited to Vim or Emacs.

      1. 4

        Like the article points out, though, this alignment of incentives is easy to achieve when you are just supporting employees, not so much otherwise. Every time I’ve seen someone try to provide this to a broader audience it’s become some unpalatable combination of rent-seeking, vendor lock in, and/or data harvesting.

      2. 2

        Same; it’s really nice to be able to do all my work from my personal machine without having to keep any of the “sensitive” work codebases checked out on “untrusted” hardware. You can always SSH into more RAM or CPUs; you can’t SSH into a better keyboard or display.

        Especially when pairing with other teammates, tmate is so much more pleasant than pairing over zoom. Plus it got me to start using the CLI for Jira, which makes it twenty times faster to look things up. (granted I could have done this before but I just didn’t think of it.)

        1. 1

          using the CLI for Jira

          Is this the pip-installed CLI for Jira?

          1. 1

            It’s been so long I forgot how I installed it, but it’s the one from Netflix.

        2. 1

          Does the JIRA CLI load results much faster than the web UI or something?

          1. 1

            Yes, 20x was not an exaggeration. Loading new pages in Jira is intolerably slow on a fast machine, and it’s much worse on my primary box.

            1. 2

              Ah hmm I may have to try this then. I had blithely assumed without checking that the slow part of using Jira in a browser would probably be the actual backend. Thanks!

            2. 1

              Have you compared with the (admittedly lacking) Mac app, which seems to use Catalyst?

              1. 1

                No, I don’t have a Mac.

      3. 1

        It’s all I’ve been doing since 2013. I recently tried out jetbrains but was not keen on any of the remote workflows so I do local dev with syncthing to the remote end. Need to decide if I’m going to keep doing this past the trial period…

      4. 1

        This is what I do for personal development and love it. I set up an old gaming rig with Arch, ZeroTier, and mosh. I use Neovim as my editor and it works really well for me. Now with Neovim 0.5, syntax highlighting and autocomplete are best in class thanks to LSPs.

    14. 9

      What an embarrassing thing to publish. What not-strawmen/real-world programmers is DHH referring to?

      1. 3

        This was my reaction as well. I was looking for some content or impassioned speech about being our best selves with some philosophical underpinnings. Instead we ended up with a tweet and a meme at the end.

    15. 5

      Consumer systems need ECC RAM as much as servers!

      I haven’t used ZFS, is it good for a laptop as well as a server? Should I install ZFS on all my systems?

      1. 8

        I haven’t used ZFS, is it good for a laptop as well as a server? Should I install ZFS on all my systems?

        Generally, the rule of thumb for ZFS is that you should have 1 GiB of RAM per 1 TiB of storage, multiplied by 2-4 if you are turning on deduplication. You can get away with a bit less if the storage is SSD than if it's spinning rust - ZFS does a lot of buffering to mitigate fragmentation. I've not had a laptop that didn't meet those requirements, and the advantages of ZFS on a laptop over anything other than APFS are huge (relative to APFS they're slightly less large):

        • O(1) snapshots, so you're protected against accidental deletion. There are some nice tools that manage snapshots and let you do the gradual-decay thing. You can also do the NetApp-style thing with snapshots and have them automatically mounted inside the .zfs directory of the filesystem, so as a user without the ability to run ZFS commands you can still copy accidentally deleted files back out of a snapshot.
        • Fast filesystem creation if you want individual units of checkpointing / restoring or different persistency guarantees (for example, I turn off sync on a filesystem mounted on ~/build so that it lies to my compiler / linker about persistence of data written there - if my machine crashes I may lose data there, but there's nothing there I can't recreate by simply blowing away the build and running it again, so I don't care).
        • Delegated administration, so you can do the things above without elevating privilege, but only for the filesystems owned by you.
        • Easy backups with zfs send (if your NAS also uses ZFS then replicating your local disk, including snapshots, to the NAS is really easy).
        • Per-block checksums so you can detect on-disk corruption (doesn’t help against in-memory corruption, sadly, as this article points out).

        I haven't used anything other than ZFS by choice for over 10 years on everything from a laptop with 20 GiB of disk and 1 GiB of RAM to servers with 512 GiB of RAM and many TiBs of disk. If you're doing docker / containerd things, there's a nice ZFS snapshotter that works really nicely with ZFS's built-in snapshots.

        TL;DR: Yes.

      2. 3

        I stumbled upon this rant by Torvalds about ECC and how it faded away in the consumer space. DDR5 will help somewhat, by requiring an on-chip ECC system. Problem is, it's not helping with the data<->CPU bus, and it looks like you will also not get reports in your OS about ECC errors, as you would with normal ECC.

      3. 1

        I would also like to know this, as I'm about to install it on my laptop.

    16. 6

      What would an ideal JavaScript dependency management system look like?

      1. 6

        It’s a good question. I’m not sure that npm is all that different from most other dependency managers. My feeling is that it’s more cultural than anything – why do JS developers like to create such small packages, and why do they use so many of them? The install script problem is exacerbated because of this, but really the same issue applies to RubyGems, PyPI, etc.

        There are some interesting statistics in Veracode’s State of Software Security - Open Source Edition report (PDF link). Especially the chart on page 15!

        Deno’s use of permissions looks very interesting too, but I haven’t tried it myself.
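
        For context, the model is that a script gets no network/filesystem/env access unless you grant it on the command line; a tiny sketch (the domain is just an example):

        // fetch_status.ts: fails at runtime unless run with something like
        //   deno run --allow-net=example.com fetch_status.ts
        // so a compromised dependency can't silently phone home to other hosts.
        const res = await fetch("https://example.com/");
        console.log(res.status);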

        1. 9

          I’m not sure that npm is all that different from most other dependency managers. My feeling is that it’s more cultural than anything – why do JS developers like to create such small packages, and why do they use so many of them?

          I thought this was fairly well understood; certainly it's been discussed plenty: JS has no standard library, and so it has been filled in over many years by various people. Some of these libraries are really quite tiny, because someone was scratching their own itch and published the thing to npm to help others. Sometimes there are multiple packages doing essentially the same thing, because people had different opinions about how to do it, and no canonical std lib to refer to. Sometimes it's just that the original maintainers gave up, or evolved their package in a way that people didn't like, and other packages moved in to fill the void.

          I'm also pretty sure most people developing applications rather than libraries aren't directly using massive numbers of dependencies, and the ones they're pulling in aren't "small". Looking around at some projects I'm involved with, the common themes are libraries like react, lodash, typescript, tailwind, material-ui, ORMs, testing libraries like Cypress or enzyme, client libraries e.g. for Elasticsearch or AWS, etc… The same stuff you find in any language.

          1. 4

            It's more than just library maintainers wanting to "scratch their own itch." Users must download the JS code over the wire every time they navigate to a website. Small bundle size is a unique problem that only JS and embedded systems need to worry about. Large utility libraries like lodash are not preferred without tree-shaking — which is easy to mess up and non-trivial.

            People writing Python code don't have to worry about numpy being 30 MB; they just install it and move on with their lives. Can you imagine if a website required 30 MB for a single library? There would be riots.
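
            To make the lodash point concrete (a sketch; the size figures are rough ballparks, not measurements):

            // pulls the whole library into the bundle (tens of KB min+gzip);
            // tree-shaking only helps if you use the ES-module build (lodash-es)
            // and the bundler is configured for it, which is the easy-to-mess-up part
            import _ from "lodash";
            _.debounce(() => console.log("resize"), 200);

            // importing a single function keeps the payload small regardless
            import debounce from "lodash/debounce";
            debounce(() => console.log("resize"), 200);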

            I wrote more about it in a blog article:

            https://erock.io/2021/03/27/my-love-letter-to-front-end-web-development.html

            1. 1

              Sure, but that’s just the way it is? There is no standard library available in the browser, so you have to download all the stuff. It’s not the fault of JS devs, and it’s not a cultural thing. At first people tried to solve it with common CDNs and caching. Now people use tree-shaking, minification, compression etc, and many try pretty hard to reduce their bundle size.

        2. 3

          I was thinking about Deno as well. The permission model is great. I’m less sure about URL-based dependencies. They’ve been intentionally avoiding package management altogether.

      2. 2

        It’s at least interesting to consider that with deno, a package might opt to require limited access - and the installer/user might opt to invoke (a hypothetical js/deno powered dependency resolver/build system) with limited permissions. It won’t fix everything, but might at least make it easier for a package to avoid permissions it does not need?

      3. 0

        hideous, I assume

      4. 1

        What would an ideal JavaScript dependency management system look like?

        apt

        1. 4

          apt also has install scripts

          1. 1

            with restrictions to ensure they are from a trusted source

            1. 4

              You mean policy restrictions? Because that only applies if you don’t add any repos or install random downloaded Debs, both of which many routinely do

              1. 1

                yeah

          2. 1

            Yes, but when you use Debian you know packages go through some sort of review process.

    17. 4

      I develop for Linux on the server and (mostly) enjoy it. I’ve tried using it on the desktop twice, once about 2 years ago and another time a decade before that, and gave up after a few days each time. If you come from an environment where everything more or less “just works” (MacOS in my case, although quality is palpably declining in recent years), it’s borderline incomprehensible why people put up with such a buggy and user-hostile environment. I’ll bet my company wastes at least 30 minutes each week on screwed-up video calls thanks to buggy audio and video hardware support in desktop Linux.

      1. 8

        I honestly don’t experience this: I run Linux as a dev environment & everything just works!

        If anything, the user experience for USB devices is better under Linux than Windows - stuff just seems to be supported OOB & I don’t even need to go hunting for drivers these days.

        It’s possible that I have been lucky with hardware choices, but I do find it quite weird that my experience is so out of line with the rants I see about it online from time to time.

      2. 3

        If you come from an environment where everything more or less “just works”

        Perhaps I'm affected by having used Linux since 1996, but it seems to me that Linux is the environment where everything just works. With the exception of exotic hardware, but those cases are relatively easy to work around these days with a bit of planning.

      3. 3

        it’s borderline incomprehensible why people put up with such a buggy and user-hostile environment.

        While this is true on average, it's not like Mac or Windows are strictly better. There certainly are reasons to prefer Linux. Mine are (in comparison to Windows and Mac circa 2014; not sure what the current state is):

        • Setting up dev environments. Installing Python on Windows was a nightmare. Homebrew sort-of works, but you are still fighting the system.
        • Installing software in general. If I need a thing, I just type "install thing" in the terminal, and it just works. I don't need to manually install each piece of software or babysit the updates. I update the system whenever I find it convenient, the whole process is fast, and I can still use my device while the update is in progress. As the sibling comment mentions, no futzing with drivers either, like you have to do on Windows.
        • I personally don't like the Mac's GUI. Un-disablable animations, the dock eating screen space, and the window management don't work for me. I much prefer the Windows way, where win+arrow tiles the windows and win+number launches the pinned app. It's much easier to get that behavior in Linux, and, with some tweaking, it is optimizable further.
        • Modern Windows tries to stuff a lot of things from the Internet into your attention, with suggestions, news, weather and the like. On Linux, you generally use only what you've configured yourself.
        1. 2

          You can hide the dock on a Mac, and it no longer “eats screen space”. You can also trivially install an app to do window snapping. I love my Linux desktop but there’s no way I’d say it’s easier to set up window management in it.

      4. 2

        I do video calls on my phone. It has a better camera and far superior microphones. And, it just works. On my desktop, the issue is usually with really poorly done end user software. So, the exception is Google Meet since it is browser based. I’ve just come to realize that the different devices I own are good at different things.

      5. 2

        If you come from an environment where everything more or less “just works” (MacOS in my case, although quality is palpably declining in recent years), it’s borderline incomprehensible why people put up with such a buggy and user-hostile environment.

        Curiously, I use Linux for exactly the same reason: it “just works” without faffing about, whereas I never had this experience with Windows, or with my (brief) exposure to macOS. I don’t know if this is different expectations or different skill-set or something else 🤷

        Then again, I also just have a simple Nokia as I feel smartphones are hard-to-use difficult user-hostile devices that never seem to do what I bloody want, and everyone thinks I’m an oddball for that, so maybe I’m just weird.

        1. 3

          It’s not that Linux “just works”, or that any OS “just works”, for me. It’s that I have a strong likelihood when using Linux that there will be a usable error message, a useful log, and somewhere on the Net, people discussing the code.

          So debugging user problems is much much easier on Linux (or *BSD) than it is with Windows or MacOS.

      6. 2

        As someone who is slowly migrating to a linux desktop, I agree.

        I keep reading online about bluetooth, fingerprint, suspend/hibernate, multi-monitor scaling issues plaguing linux and these are things I just never have to worry about on my mbp.

        Aside: I will say though that it seems like the Mac is the only OS able to get bluetooth right. On my Windows machine it barely works, and every once in a while I have to re-pair.

        Linux has come a long way since I first started using it 20 years ago but you really need to enjoy tinkering to get it right.

        1. 2

          Somehow, Android gets bluetooth right, on more-or-less the same kernel that Linux desktops run on. But I have never seen bluetooth work reliably on a Linux desktop. Intermittently, yes, reliably no.

    18. 7

      The first was about the plans for Prodkernel: will there still be the giant, two-year rebase? Hatch said that for now Icebreaker and Prodkernel would proceed in parallel. Delgadillo noted that Icebreaker is new, and has not necessarily worked out all of its kinks. He also said that while Icebreaker is meant to be functionally equivalent to Prodkernel, it may not be at parity performance-wise. It is definitely a goal to run these kernels in production, but that has not really happened yet.

      Now they have two separate Linux kernel projects that they have to maintain. That sounds pretty brutal. If they can't switch Prodkernel over to use their new Icebreaker then it kind of sounds like Icebreaker is going to fail.

    19. 10

      I’m convinced it’s because you can’t run adblock on the mobile app.

      1. 2

        My guess is push notifications, plus you always have your phone, so users are more likely to open the app when it's on one of their home screens.

      2. 1

        Fortunately I can run adblock on my router.

        1. 1

          You can’t. You can only run a host blocker, not a content blocker, and that first one is fairly easy to get around.

          Wake me up if someone invents an actual content blocker on a router; I'll be the first one to test it.