Threads for erock

  1. 30

    Tailwind and its ilk target something the author seems to forget: many of today’s web developers are deeply entrenched in “component-based frameworks”, which already prevent that repetition of “atomic CSS snippets”, since you develop a “component” that gets repeated as needed.

    Classic class-based CSS already does this, of course, but only for CSS + HTML. When you try to bring this component/class system to JavaScript, you often end up with something like React or Vue or whatever’s trendy today.

    And then you have two competing “component” systems: CSS classes and “JS components”.

    Tailwind kinda lets you merge them again, reducing overhead when you have to work with those JS systems.
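
    To make that concrete, here’s a minimal sketch in React + Tailwind (the component and its classes are made up): the component is the only unit of reuse, so the utility classes live inline in it and never get hand-copied around the codebase.

      // Alert.tsx: a hypothetical component; the Tailwind utilities stay inside it
      export function Alert(props: { message: string }) {
        return (
          <p className="rounded border border-red-300 bg-red-50 p-4 text-red-800">
            {props.message}
          </p>
        );
      }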

    1. 6

      I personally prefer the CSS Modules approach to solve this problem, rather than bringing in a whole framework like Tailwind. CSS Modules give you the isolation of components but don’t force you to leave CSS itself behind, and when compiled correctly can result in a much smaller CSS payload than something hand-written. Sure, you lose the descriptive and semantically-relevant class names, but if you’re building your whole app with components, you really don’t need them.
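
      For anyone who hasn’t used them, a minimal sketch of the pattern (file and class names are hypothetical):

        /* Button.module.css: compiled to hashed, file-scoped class names */
        .primary { background: navy; color: white; }

        // Button.tsx
        import styles from "./Button.module.css";

        export function Button(props: { label: string }) {
          // styles.primary resolves to something like "Button_primary__ab12c",
          // so it can't collide with a .primary defined in another component.
          return <button className={styles.primary}>{props.label}</button>;
        }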

      That said, if I didn’t use something like React, or I just needed CSS that followed a similar modular approach, I guess I would reach for Tailwind. But realistically, CSS is so lovely these days that you really don’t need much of a framework at all.

      1. 3

        I find Tailwind much easier to use than CSS Modules when you stick to the defaults.

        1. 3

          CSS Modules is an abstraction at a lower level than Tailwind. The former can do everything Tailwind can do in terms of the end result. The latter provides really nice defaults/design tokens/opinions.

          1. 2

            CSS Modules is an abstraction at a lower level than Tailwind.

            Definitely, and that’s why I prefer it. The way I normally organize my applications is by using components, so I don’t really need a whole system for keeping all the styles in order. Ensuring that styles don’t conflict with one another when defined in different components is enough for me. But if I was doing something with just HTML and I didn’t have the power of a component-driven web framework, I probably would try out Tailwind or something like it.

      1. 2

        Continuing work on services for pico. We’re ramping up to build a lot more services and are excited to see where we end up.

        https://blog.pico.sh

        1. 1

          Working on a microblog for lists that I launched a week ago. Specifically adding subdomain support.

          https://lists.sh

          1. 5

            It’s hard enough to ensure that you’re only using the correctly coloured function in the right place, until you consider that one of the main advantages of this sort of framework is sharing code across the server and the client.

            Hmm, I’m not sure this is the only benefit of SSR frameworks. A huge benefit is colocation of server-colored functions next to their coupled client-colored functions. Take Remix, for example: the corresponding controller function (the loader) lives in the same file as the view function (the React page component that generates HTML for the route). The route is generated automatically from the path and file name.

            The bundler (esbuild) “smartly” figures out what should be run on the server (route, controller, view), the client (route and view), and then creates separate bundles as well as adds logic to automatically call the controller when a user navigates to the route.

            It’s true that sharing the view between client and server is important here, but the real benefit of this framework is colocation, and automatically linking a route with a controller and view.
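
            A sketch of that colocation, assuming a recent Remix version (the route and the data layer are hypothetical):

              // app/routes/posts.$id.tsx: the file name generates the /posts/:id route
              import { json, type LoaderFunctionArgs } from "@remix-run/node";
              import { useLoaderData } from "@remix-run/react";
              import { db } from "~/db.server"; // hypothetical server-only data layer

              // Server-colored: the controller, stripped from the client bundle.
              export async function loader({ params }: LoaderFunctionArgs) {
                return json(await db.getPost(params.id));
              }

              // The view: server-rendered on the first request, hydrated on the client;
              // client-side navigations call the loader over HTTP automatically.
              export default function PostPage() {
                const post = useLoaderData<typeof loader>();
                return <h1>{post.title}</h1>;
              }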

            1. 27

              Getting rightfully shredded as closed-source spyware over at HN: https://news.ycombinator.com/item?id=30921231

              1. 7

                Also being prodded for using the name “Warp” (the name of a popular Rust crate) and for trading on Rust’s name in its marketing.

                1. 4

                  Yea they are roasting the CEO alive and rightfully so.

                1. 8

                  I have not explored Vim9 script, so I don’t know how hopeful or how sad to be about the language itself. The documentation promises potentially significant performance improvements: “An increase in execution speed of 10 to 100 times can be expected.” (That said, like many people, I would much rather write scripts for Vim in Lua or Python. But maybe Vim9 script will improve the syntax as well as the performance?)

                  But I do worry about this causing a significant rift in plugin development, since Neovim lists Vim9 script support as a non-goal.

                  1. 6

                    The rift is already there. In the latest release Lua is pretty much first class, and many plugins have already jumped ship and become Neovim-only. I don’t expect Vim9 to widen the gap much beyond that, and if it does (for example, if Vim9-only plugins start having hot stuff people don’t want to live without), it would not be surprising to see that non-goal removed. After all, they have kept up pretty well with Vim script support and with porting Vim patches in general.

                    1. 6

                      Agreed. After Neovim 0.5, I would need a really good set of arguments to move away from Neovim and its thriving Lua plugin ecosystem.

                      1. 2

                        I could see pressure growing for vim9script support, but on the other hand, many may just author stuff in legacy Vim script for cross-compatibility, because neither vim9script nor Lua is necessary.

                        I do hate to see this rift for code that needs the performance or flexibility, though. For years it’s been pretty annoying that the core of an addon will be implemented in a pluggable scripting language, so you have to make sure that language is compiled in and available, and every addon picks a different one. I’m disappointed that vim9script is becoming just another one of these, only without the external library dependency, and, for now, definitely not available on nvim. It sounds like enough of a pain that I’d stay with legacy script for compatibility, or do an IPC model like LSP, or just decide compatibility isn’t my problem.

                        I think if vim9script takes off it will be through the sheer weight of vim’s popularity compared to nvim, and through people either not being concerned about compatibility or being willing to maintain two or more copies of the same logic. But I’m also not sure it’ll take off, and I would’ve liked to see first-class Lua in vim too. Just statically linked, guaranteed in the build, would’ve been enough for me!!

                        Anyway, maybe it’s sad-but-okay if it’s just time to start saying vim and nvim have become too different. Clearly that’s already happened with Lua-only plugins.

                    1. 1

                        I am surprised that remote development only caught on again recently. When I was at Facebook a few years back (~2012), everyone had a remote machine doing all development work. I’ve done the same myself ever since. The portability is unparalleled. My laptop effectively just runs the windowing system and drives the monitors.

                        Of course, that also meant that for the longest time, your “IDE” options were limited to Vim or Emacs.

                      1. 4

                        Like the article points out, though, this alignment of incentives is easy to achieve when you are just supporting employees, not so much otherwise. Every time I’ve seen someone try to provide this to a broader audience it’s become some unpalatable combination of rent-seeking, vendor lock in, and/or data harvesting.

                        1. 2

                          Same; it’s really nice to be able to do all my work from my personal machine without having to keep any of the “sensitive” work codebases checked out on “untrusted” hardware. You can always SSH into more RAM or CPUs; you can’t SSH into a better keyboard or display.

                            Especially when pairing with other teammates, tmate is so much more pleasant than pairing over Zoom. Plus it got me to start using the CLI for Jira, which makes it twenty times faster to look things up. (Granted, I could have done this before, but I just didn’t think of it.)

                          1. 1

                              using the CLI for Jira

                              Is this the pip-installed CLI for Jira?

                            1. 1

                              It’s been so long I forgot how I installed it, but it’s the one from Netflix.

                            2. 1

                              Does the JIRA CLI load results much faster than the web UI or something?

                              1. 1

                                Yes, 20x was not an exaggeration. Loading new pages in Jira is intolerably slow on a fast machine, and it’s much worse on my primary box.

                                1. 2

                                  Ah hmm I may have to try this then. I had blithely assumed without checking that the slow part of using Jira in a browser would probably be the actual backend. Thanks!

                                  1. 1

                                    Have you compared with the (admittedly lacking) Mac app, which seems to use Catalyst?

                                    1. 1

                                      No, I don’t have a Mac.

                              2. 1

                                  It’s all I’ve been doing since 2013. I recently tried out JetBrains but was not keen on any of the remote workflows, so I do local dev with Syncthing to the remote end. Need to decide if I’m going to keep doing this past the trial period…

                                1. 1

                                    This is what I do for personal development, and I love it. I set up an old gaming rig with Arch, ZeroTier, and mosh. I use Neovim as my editor and it works really well for me. Now with Neovim 0.5, syntax highlighting and autocomplete are best in class thanks to LSP.

                                1. 9

                                      What an embarrassing thing to publish. Which non-strawman, real-world programmers is DHH referring to?

                                  1. 3

                                    This was my reaction as well. I was looking for some content or impassioned speech about being our best selves with some philosophical underpinnings. Instead we ended up with a tweet and a meme at the end.

                                  1. 5

                                    Consumer systems need ECC RAM as much as servers!

                                    I haven’t used ZFS, is it good for a laptop as well as a server? Should I install ZFS on all my systems?

                                    1. 8

                                      I haven’t used ZFS, is it good for a laptop as well as a server? Should I install ZFS on all my systems?

                                        Generally, the rule of thumb for ZFS is that you should have 1 GiB of RAM per 1 TiB of storage, multiplied by 2-4 if you are turning on deduplication. You can get away with a bit less if the storage is SSD rather than spinning rust, since ZFS does a lot of buffering to mitigate fragmentation. I’ve not had a laptop that didn’t meet those requirements, and the advantages of ZFS on a laptop over anything other than APFS are huge (relative to APFS they’re somewhat smaller):

                                        • O(1) snapshots, so you’re protected against accidental deletion. There are some nice tools that manage snapshots and let you do the gradual-decay thing. You can also do the NetApp-style thing where snapshots are automatically mounted inside the .zfs directory of the filesystem, so even as a user without the ability to run ZFS commands you can still copy back files that you accidentally deleted.
                                        • Fast filesystem creation, if you want individual units of checkpointing / restoring, or different persistence guarantees (for example, I turn off sync on a filesystem mounted on ~/build so that it lies to my compiler / linker about persistence of data written there; if my machine crashes I may lose data there, but there’s nothing I can’t recreate by simply blowing away the build and running it again, so I don’t care).
                                        • Delegated administration, so you can do the things above without elevating privilege, but only for the filesystems owned by you.
                                        • Easy backups with zfs send (if your NAS also uses ZFS, then replicating your local disk, including snapshots, to the NAS is really easy).
                                        • Per-block checksums, so you can detect on-disk corruption (which doesn’t help against in-memory corruption, sadly, as this article points out).

                                        I haven’t used anything other than ZFS by choice for over 10 years, on everything from a laptop with 20 GiB of disk and 1 GiB of RAM to servers with 512 GiB of RAM and many TiBs of disk. If you’re doing docker / containerd things, there’s a ZFS snapshotter that works really nicely with ZFS’s built-in snapshots.

                                      TL;DR: Yes.

                                      1. 3

                                          I stumbled upon this rant by Torvalds about ECC and how it faded away in the consumer space. DDR5 will help somewhat, by requiring on-die ECC. The problem is, that doesn’t help with the data<->CPU bus, and it looks like you also won’t get ECC error reports in your OS, as you would with regular ECC.

                                        1. 1

                                          I would also like to know this, as I’m about to install it on my laptop.

                                        1. 6

                                          What would an ideal JavaScript dependency management system look like?

                                          1. 6

                                            It’s a good question. I’m not sure that npm is all that different from most other dependency managers. My feeling is that it’s more cultural than anything – why do JS developers like to create such small packages, and why do they use so many of them? The install script problem is exacerbated because of this, but really the same issue applies to RubyGems, PyPI, etc.

                                            There are some interesting statistics in Veracode’s State of Software Security - Open Source Edition report (PDF link). Especially the chart on page 15!

                                            Deno’s use of permissions looks very interesting too, but I haven’t tried it myself.
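
                                            For context, Deno makes I/O capabilities opt-in per invocation; a minimal sketch (file names are made up):

                                              // fetch_notes.ts: Deno runs TypeScript directly, no build step
                                              const res = await fetch("https://example.com/api");   // needs --allow-net
                                              const notes = await Deno.readTextFile("./notes.txt"); // needs --allow-read
                                              console.log(res.status, notes.length);

                                              // Each call fails with PermissionDenied unless granted at run time:
                                              //   deno run --allow-net=example.com --allow-read=. fetch_notes.ts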

                                            1. 9

                                              I’m not sure that npm is all that different from most other dependency managers. My feeling is that it’s more cultural than anything – why do JS developers like to create such small packages, and why do they use so many of them?

                                              I thought this was fairly well understood; certainly it’s been discussed plenty: JS has no standard library, and so one has been filled in over many years by various people. Some of these libraries are really quite tiny, because someone was scratching their own itch and published the thing to npm to help others. Sometimes there are multiple packages doing essentially the same thing, because people had different opinions about how to do it and no canonical stdlib to refer to. Sometimes it’s just that the original maintainers gave up, or evolved their package in a way that people didn’t like, and other packages moved in to fill the void.

                                              I’m also pretty sure most people developing applications rather than libraries aren’t directly using massive numbers of dependencies, and the ones they’re pulling in aren’t “small”. Looking around at some projects I’m involved with, the common themes are libraries like react, lodash, typescript, tailwind, material-ui, ORMs, testing libraries like Cypress or enzyme, client libraries e.g. for Elasticsearch or AWS, etc… The same stuff you find in any language.

                                              1. 4

                                                It’s more than just library maintainers wanting to “scratch their own itch.” Users must download the JS code over the wire every time they navigate to a website. Small bundle size is a problem that pretty much only JS and embedded systems need to worry about. Large utility libraries like lodash are avoided unless you can rely on tree-shaking, which is easy to mess up and non-trivial.
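
                                                To make the contrast concrete, a small sketch (the save handler is hypothetical; the size is approximate):

                                                  // Per-method import: only debounce and its internal helpers end up in
                                                  // the bundle, even if the bundler's tree-shaking is misconfigured.
                                                  import debounce from "lodash/debounce";

                                                  // The whole-library alternative, `import _ from "lodash"`, ships all of
                                                  // lodash (roughly 70 KB minified) to every visitor unless tree-shaking
                                                  // works perfectly.

                                                  declare function save(): void; // hypothetical handler
                                                  const onInput = debounce(save, 300);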

                                                People writing Python code don’t have to worry about numpy being 30MB; they just install it and move on with their lives. Can you imagine if a website required 30MB for a single library? There would be riots.

                                                I wrote more about it in a blog article:

                                                https://erock.io/2021/03/27/my-love-letter-to-front-end-web-development.html

                                                1. 1

                                                  Sure, but that’s just the way it is? There is no standard library available in the browser, so you have to download all the stuff. It’s not the fault of JS devs, and it’s not a cultural thing. At first people tried to solve it with common CDNs and caching. Now people use tree-shaking, minification, compression etc, and many try pretty hard to reduce their bundle size.

                                              2. 3

                                                I was thinking about Deno as well. The permission model is great. I’m less sure about URL-based dependencies. They’ve been intentionally avoiding package management altogether.

                                              3. 2

                                                It’s at least interesting to consider that with Deno, a package might opt to require limited access, and the installer/user might opt to invoke (a hypothetical JS/Deno-powered dependency resolver/build system) with limited permissions. It won’t fix everything, but it might at least make it easier for a package to avoid permissions it does not need?

                                                1. 0

                                                  hideous, I assume

                                                  1. 1

                                                    What would an ideal JavaScript dependency management system look like?

                                                    apt

                                                    1. 4

                                                      apt also has install scripts

                                                      1. 1

                                                        with restrictions to ensure they are from a trusted source

                                                        1. 4

                                                          You mean policy restrictions? Because those only apply if you don’t add any repos or install random downloaded .debs, both of which many people routinely do.

                                                          1. 1

                                                            yeah

                                                        2. 1

                                                          Yes, but when you use Debian you know packages go through some sort of review process.

                                                    1. 4

                                                      I develop for Linux on the server and (mostly) enjoy it. I’ve tried using it on the desktop twice, once about 2 years ago and another time a decade before that, and gave up after a few days each time. If you come from an environment where everything more or less “just works” (MacOS in my case, although quality is palpably declining in recent years), it’s borderline incomprehensible why people put up with such a buggy and user-hostile environment. I’ll bet my company wastes at least 30 minutes each week on screwed-up video calls thanks to buggy audio and video hardware support in desktop Linux.

                                                      1. 8

                                                        I honestly don’t experience this: I run Linux as a dev environment & everything just works!

                                                        If anything, the user experience for USB devices is better under Linux than Windows - stuff just seems to be supported OOB & I don’t even need to go hunting for drivers these days.

                                                        It’s possible that I have been lucky with hardware choices, but I do find it quite weird that my experience is so out of line with the rants I see about it online from time to time.

                                                        1. 3

                                                          If you come from an environment where everything more or less “just works”

                                                            Perhaps I’m biased by having used Linux since 1996, but it seems to me that Linux is the environment where everything just works. The exception is exotic hardware, but that’s relatively easy to work around these days with a bit of planning.

                                                          1. 3

                                                            it’s borderline incomprehensible why people put up with such a buggy and user-hostile environment.

                                                            While this is true on average, it’s not like Mac or Windows are strictly better. There certainly are reasons to prefer Linux. Mine are (in comparison to Windows and Mac circa 2014; I’m not sure of the current state):

                                                            • Setting up dev environments. Installing Python on Windows was a nightmare. Homebrew sort-of works, but you are still fighting the system.
                                                            • Installing software in general. If I need a thing, I just type “install thing” in the terminal and it just works. I don’t need to manually install each piece of software or babysit the updates. I update the system whenever I find it convenient; the whole process is fast and I can still use my device while the update is in progress. As the sibling comment mentions, there’s no futzing with drivers either, like you have to do on Windows.
                                                            • I personally don’t like the Mac GUI. Un-disablable animations, the dock eating screen space, and the window management don’t work for me. I much prefer the Windows way, where Win+arrow tiles the windows and Win+number launches a pinned app. It’s much easier to get that behavior in Linux, and, with some tweaking, it can be optimized further.
                                                            • Modern Windows tries to stuff a lot of things from the Internet into your attention: suggestions, news, weather and the like. On Linux, you generally use only what you’ve configured yourself.
                                                            1. 2

                                                              You can hide the dock on a Mac, and it no longer “eats screen space”. You can also trivially install an app to do window snapping. I love my Linux desktop but there’s no way I’d say it’s easier to set up window management in it.

                                                            2. 2

                                                              I do video calls on my phone. It has a better camera and far superior microphones, and it just works. On my desktop, the issue is usually really poorly done end-user software. So the exception is Google Meet, since it is browser-based. I’ve just come to realize that the different devices I own are good at different things.

                                                              1. 2

                                                                If you come from an environment where everything more or less “just works” (MacOS in my case, although quality is palpably declining in recent years), it’s borderline incomprehensible why people put up with such a buggy and user-hostile environment.

                                                                Curiously, I use Linux for exactly the same reason: it “just works” without faffing about, whereas I never had this experience with Windows, or with my (brief) exposure to macOS. I don’t know if this is different expectations or different skill-set or something else 🤷

                                                                Then again, I also just have a simple Nokia as I feel smartphones are hard-to-use difficult user-hostile devices that never seem to do what I bloody want, and everyone thinks I’m an oddball for that, so maybe I’m just weird.

                                                                1. 3

                                                                  It’s not that Linux “just works”, or that any OS “just works”, for me. It’s that I have a strong likelihood when using Linux that there will be a usable error message, a useful log, and somewhere on the Net, people discussing the code.

                                                                  So debugging user problems is much much easier on Linux (or *BSD) than it is with Windows or MacOS.

                                                                2. 2

                                                                  As someone who is slowly migrating to a Linux desktop, I agree.

                                                                  I keep reading online about Bluetooth, fingerprint, suspend/hibernate, and multi-monitor scaling issues plaguing Linux, and these are things I just never have to worry about on my MacBook Pro.

                                                                  Aside: I will say, though, that it seems like the Mac is the only OS able to get Bluetooth right. On my Windows machine it barely works, and every once in a while I have to re-pair.

                                                                  Linux has come a long way since I first started using it 20 years ago, but you really need to enjoy tinkering to get it right.

                                                                  1. 2

                                                                    Somehow, Android gets Bluetooth right, on more-or-less the same kernel that Linux desktops run on. But I have never seen Bluetooth work reliably on a Linux desktop. Intermittently, yes; reliably, no.

                                                                1. 7

                                                                  The first was about the plans for Prodkernel: will there still be the giant, two-year rebase? Hatch said that for now Icebreaker and Prodkernel would proceed in parallel. Delgadillo noted that Icebreaker is new, and has not necessarily worked out all of its kinks. He also said that while Icebreaker is meant to be functionally equivalent to Prodkernel, it may not be at parity performance-wise. It is definitely a goal to run these kernels in production, but that has not really happened yet.

                                                                    Now they have two separate Linux kernel projects to maintain. That sounds pretty brutal. If they can’t switch Prodkernel over to their new Icebreaker, then it kind of sounds like Icebreaker is going to fail.

                                                                  1. 10

                                                                    I’m convinced it’s because you can’t run adblock on the mobile app.

                                                                    1. 2

                                                                        My guess is push notifications, plus you always have your phone, so users are more likely to open the app when it’s on one of their home screens.

                                                                      1. 1

                                                                        Fortunately I can run adblock on my router.

                                                                        1. 1

                                                                            You can’t. You can only run a host blocker, not a content blocker, and the first one is fairly easy to get around.

                                                                            Wake me up if someone invents an actual content blocker on a router; I’ll be the first one to test it.

                                                                      1. 14

                                                                        I’m probably an outlier too, but I won’t install the app either. Very little content is so important that I need to give up my personal data to access it.

                                                                        1. 9

                                                                          Count me as an outlier too. And it is not even about giving up my personal data. It’s more like: don’t treat me like an idiot if you want me to read/view your stuff.

                                                                          • If the web experience is so bad, why did you even bother to build a site then?
                                                                          • Exactly what part of the experience is better in the app? Do you think I am not able to judge what I find comfortable myself?
                                                                          • Even if somehow it really is better, did you factor in the trouble and annoyance of installing that app and finding the content a second time in the comparison?

                                                                              Really, I keep getting amazed at the blind spot that marketing and ‘engagement’ people have when it comes to banners like this. They care so much about brand and experience, but are stumped when you ask them how they think all these hurdles they put between me and the content affect the impression I have of their company. (Yes, I have a habit of asking that at the companies I work for.)

                                                                          1. 4

                                                                                Y’all aren’t missing anything. Invariably, when I give up and install an app, it’s a worse experience than the main site. In fact, the app’s usability is often the direct inverse of their insistence that you try it.

                                                                            1. 2

                                                                                  GitHub is a great example of this. Their iOS app doesn’t have all the features the website has. It also has a crappier navigation experience, and overall I get annoyed with it quickly. I continue to use it because it seems to preserve my login better, and browsing code in it is decent.

                                                                            2. 3

                                                                              I’m really curious about this trend. Google published some research 10 or so years ago that said that an extra 100ms of loading time had a measurable impact on whether users would stay on your page. Requiring an app download adds many seconds to the load time even if you’re willing to download the app. I’d expect that to see orders of magnitude more people give up than an extra 100ms delay.

                                                                              The only reason I can see for this is that companies have determined that there’s something like a bimodal distribution of visitors and losing the casual users to focus on the ones with long-term engagement is a solid decision. Well, okay, that’s the only rational explanation I can see. The most likely one is that CxOs have heard that apps are cool and want to be ‘app-first’ and ‘native-first’ because Forbes or Gartner said that’s what all the cool kids are doing.

                                                                            1. 6

                                                                              It seems to me that if one is going to go that far off the beaten path (i.e. not just running “docker build”), then it would also be worth looking into Buildah, a flexible image build tool from the same group as Podman. Have you looked into Buildah yet? I haven’t yet used it in anger, but it looks interesting.

                                                                              1. 6

                                                                                +1000 for Buildah.

                                                                                No more dind crap in your CI.

                                                                                Lets you export your image in OCI format for, among other useful purposes, security scanning before pushing, etc.

                                                                                Overall much better than Docker’s build. Highly recommend you try it.

                                                                                1. 3

                                                                                      Added looking into it to my todo list. Thanks for the suggestions, @mwcampbell and @ricardbejarano.

                                                                                  1. 2

                                                                                        I’m intrigued: what do you use for security-scanning the image?

                                                                                    1. 4

                                                                                      My (GitLab) CI for building container images is as follows:

                                                                                      • Stage 1: lint Dockerfile with Hadolint.
                                                                                      • Stage 2: perform static Dockerfile analysis with Trivy (in config mode) and TerraScan.
                                                                                      • Stage 3: build with Buildah, export to a directory in the OCI format (buildah push myimage oci:./build, last time I checked, you can’t do this with the Docker CLI), pass that as an artifact for the following stages.
                                                                                      • Stage 4a: look for known vulns within the contents of the image using Trivy (this time in image mode) and Grype.
                                                                                      • Stage 4b: I also use Syft to generate the list of software in the image, along with their version numbers. This has been useful more times than I can remember, for filing bug reports, comparing a working and a broken image, etc.
                                                                                      • Stage 5: if all the above passed, grab the image back into Buildah (buildah pull oci:./build, can’t do this with Docker’s CLI either) and push it to a couple of registries.

                                                                                          The tools in stage 2 pick up most of the “security bad practices”. The tools in stage 4 give me the list of known vulnerabilities in the image’s contents, along with their CVE, severity, and whether there’s a fix in a newer release or not.

                                                                                      Having two tools in both stages is useful because it increases coverage, as some tools pick up vulns that others don’t.

                                                                                      Scanning before pushing lets me decide whether I want the new, surely vulnerable image over the old (which may or may not be vulnerable as well). I only perform this manual intervention on severities high and critical, though.

                                                                                      1. 1

                                                                                            Thanks for the response. What are your thoughts on https://github.com/quay/clair, which seems to replace both Grype and Trivy?

                                                                                        1. 1

                                                                                          I haven’t used it, can’t judge.

                                                                                          Thanks for showing it to me.

                                                                                    2. 1

                                                                                      I’ve never used dind, but have only used Jenkins and GitHub Actions. Is that a common thing?

                                                                                      1. 1

                                                                                        IIRC GitHub Actions already has a Docker daemon accessible from within the CI container. So you’re already using Docker in Whatever on your builds.

                                                                                        There are many problems with running the Docker daemon within the build container, and IMO it’s not “correct”.

                                                                                        A container image is just a filesystem bundle. There’s no reason you need a daemon for building one.

                                                                                    3. 4

                                                                                      I have not looked at it, but my understanding is that Podman’s podman build is a wrapper around Buildah. So as a first pass I assume podman build has similar features. It does actually have at least one feature that docker build doesn’t, namely volume mounts during builds.

                                                                                      1. 2

                                                                                            If I remember correctly, the Buildah documentation specifies that while, yes, podman build is basically a wrapper around Buildah, it doesn’t expose the full functionality of Buildah, trying instead to be a simple wrapper for people coming from Docker. I can’t recall what specific functionality was hidden from the user, but it was listed in the docs.

                                                                                    1. 10

                                                                                      Q: Why choose Docker or Podman over Nix or Guix?

                                                                                          Edit with some rephrasing: why run containers over a binary cache? Both can do somewhat similar things in creating a reproducible build (so long as you aren’t apt upgrade-ing in your container’s config file) and in laying out how to glue your different services together, but is there a massive advantage of one over the other?

                                                                                      1. 28

                                                                                        I can’t speak for the OP, but for myself there are three reasons:

                                                                                        1. Docker for Mac is just so damn easy. I don’t have to think about a VM or anything else. It Just Works. I know Nix works natively on Mac (I’ve never tried Guix), but while I do development on a Mac, I’m almost always targeting Linux, so that’s the platform that matters.

                                                                                        2. The consumers of my images don’t use Nix or Guix, they use Docker. I use Docker for CI (GitHub Actions) and to ship software. In both cases, Docker requires no additional effort on my part or on the part of my users. In some cases I literally can’t use Nix. For example, if I need to run something on a cluster controlled by another organization there is literally no chance they’re going to install Nix for me, but they already have Docker (or Podman) available.

                                                                                        3. This is minor, I’m sure I could get over it, but I’ve written a Nix config before and I found the language completely inscrutable. The Dockerfile “language”, while technically inferior, is incredibly simple and leverages shell commands I already know.

                                                                                        1. 15

                                                                                          I am not a nix fan, quite the opposite, I hate it with a passion, but I will point out that you can generate OCI images (docker/podman) from nix. Basically you can use it as a Dockerfile replacement. So you don’t need nix deployed in production, although you do need it for development.

                                                                                          1. 8

                                                                                                As someone who is about to jump into NixOS, I’d love to read more about why you hate Nix.

                                                                                            1. 19

                                                                                              I’m not the previous commenter but I will share my opinion. I’ve given nix two solid tries, but both times walked away. I love declarative configuration and really wanted it to work for me, but it doesn’t.

                                                                                              1. the nix language is inscrutable (to use the term from a comment above). I know a half dozen languages pretty well and still found it awkward to use
                                                                                              2. in order to make package configs declarative the config options need to be ported to the nix language. This inevitably means they’ll be out of date or maybe missing a config option you want to set.
                                                                                              3. the docs could be much better, but this is typical. You generally resort to looking at the package configs in the source repo
                                                                                                  4. Nix packages, because of the design of the system, have no connection to real package versions. This is the killer for me, since the rest of the world works on those version numbers. If I want to upgrade from v1.0 to v1.1, there is no direct correlation in Nix except for a SHA. How do you find that out? Look at the source repo again.
                                                                                              1. 4

                                                                                                This speaks to my experience with Nix too. I want to like it. I get why it’s cool. I also think the language is inscrutable (for Xooglers, the best analogy is borgcfg) and the thing I want most is to define my /etc files in their native tongue under version control and for it all to work out rather than depend on Nix rendering the same files. I could even live with Nix-the-language if that were the case.

                                                                                                1. 3

                                                                                                  I also think the language is inscrutable (for Xooglers, the best analogy is borgcfg)

                                                                                                  As a former Google SRE, I completely agree—GCL has a lot of quirks. On the other hand, nothing outside Google compares, and I miss it dearly. Abstracting complex configuration outside the Google ecosystem just sucks.

                                                                                                  Yes, open tools exist that try to solve this problem. But only gcl2db can load a config file into an interactive interface where you can navigate the entire hierarchy of values, with traces describing every file:line that contributed to the value at a given path. When GCL does something weird, gcl2db will tell you exactly what happened.

                                                                                                2. 2

                                                                                                      Thanks for the reply. I’m actually not a huge fan of DSLs, so this might be swaying me away from setting up NixOS. I have a VM set up with it, and tbh the thought of trawling through Nix docs to figure out the magical phrase to do what I want does not sound like much fun. I’ll stick with Arch for now.

                                                                                                  1. 6

                                                                                                    If you want the nix features but a general purpose language, guix is very similar but uses scheme to configure.

                                                                                                    1. 1

                                                                                                          I would love to use Guix, but the lack of nonfree is a killer, as getting Steam running is a must. There’s no precedent for it in the jamming communities I participate in, whereas Nix has a sizable following.

                                                                                                      1. 2

                                                                                                            So use Ubuntu as the host OS for Guix if you need Steam to work. Guix runs well on many OSes.

                                                                                                3. 10

                                                                                                  Sorry for the very late reply. The problem I have with nixos is that it’s anti-abstraction in the sense that I elaborated on here. Instead it’s just the ultimate wrapper.

                                                                                                  To me, the point of a distribution is to provide an algebra of packages that’s invariant in changes of state. Or to reverse this idea, an instance of a distribution is anything with a morphism to the category of packages.

                                                                                                  Nix (and nixos) is the ultimate antithesis of this idea. It’s not a morphism, it’s a homomorphism. The structure is algebraic, but it’s concrete, not abstract.

                                                                                                    People claim that “declarative” configuration is good, and it’s hard to attack such a belief, but people don’t really agree on what it really means. In Haskell it means that expressions have referential transparency, which is a good thing, but in other contexts when I hear people talk about declarative stuff I immediately shiver, expecting the inevitable pain. You can “declare” anything if you are precise enough, and that’s what nix does; it’s very precise. But what matters is not the declarations, it’s the interactions, and in nix interaction means copying sha256 hashes in an esoteric programming language. This is painful and as far away from abstraction as you can get.

                                                                                                  Also notice that I said packages. Nix doesn’t have packages at all. It’s a glorified build system wrapper for source code. Binaries only come as a side effect, and there are no first class packages. The separation between pre-build artefacts and post-build artefacts is what can enable the algebraic properties of package managers to exist, and nix renounces this phase distinction with prejudice.

                                                                                                  To come to another point, I don’t like how Debian (or you other favorite distribution) chooses options and dependencies for building their packages, but the fact that it’s just One Way is far more important to me than a spurious dependency. Nix, on the other hand, encourages pets. Just customize the build options that you want to get what you want! What I want is a standard environment, customizability is a nightmare, an anti-feature.

                                                                                                  When I buy a book, I want to go to a book store and ask for the book I want. With nix I have to go to a printing press and provide instructions for printing the book I want. This is insanity. This is not progress. People say this is good because I can print my book into virgin red papyrus. I say it is bad exactly for the same reason. Also, I don’t want all my prints to be dated January 1, 1970.

                                                                                              2. 8

                                                                                                  For me personally, I never chose Docker; it was chosen for me by my employer. I could maybe theoretically replace it with podman because it’s compatible with the same image format, which Guix (which is much better designed overall) is not. (But I don’t use the desktop Docker stuff at all, so I don’t really care that much; mostly I’d like to switch away from docker-compose, and I have no idea whether podman can replace it.)

                                                                                                1. 3

                                                                                                  FWIW Podman does have a podman-compose functionality but it works differently. It uses k8s under the hood, so in that sense some people prefer it.

                                                                                                2. 2

                                                                                                    This sums it up quite nicely for me 😄, and more eloquently than I could have put it.

                                                                                                  1. 2

                                                                                                    If you’re targeting Linux why aren’t you using a platform that supports running & building Linux software natively like Windows or even Linux?

                                                                                                    1. 12

                                                                                                      … to call WSL ‘native’, compared to running containers etc. via VMs on non-Linux OSes, is a bit weird.

                                                                                                      1. 11

                                                                                                        I enjoy using a Mac, and it’s close enough that it’s almost never a problem. I was a Linux user for ~15 years and I just got tired of things only sorta-kinda working. Your experiences certainly might be different, but I find using a Mac to be an almost entirely painless experience. It also plays quite nicely with my iPhone. Windows isn’t a consideration, every time I sit down in front of a Windows machine I end up miserable (again, YMMV, I know lots of people who use Windows productively).

                                                                                                        1. 3

                                                                                                          Because “targeting Linux” really just means “running on a Linux server, somewhere” for many people, and they’re not writing specifically-Linux code. I spend all day writing Go on a Mac that will eventually run on a Linux box, but there’s absolutely nothing Linux-specific about it. Why would I need Linux to do that?

                                                                                                          1. 2

                                                                                                            WSL2-based containers run a lightweight Linux install on top of Hyper-V. Docker for Mac runs a lightweight Linux install on top of xhyve. I guess you could argue that this is different because Hyper-V is a type-1 hypervisor, whereas xhyve is a type-2 hypervisor using the hypervisor framework that macOS provides, but I’m not sure that either really counts as more ‘native’.

                                                                                                            If your development is not Linux-specific, then XNU provides a more complete and compliant POSIX system than WSL1, which are the native kernel POSIX interfaces for macOS and Windows, respectively.

                                                                                                        2. 9

                                                                                                          Prod runs containers, not Nix, and the goal is to run the exact same build artifacts in Dev that will eventually run in Prod.

                                                                                                          1. 8

                                                                                                            Lots of people distribute dockerfiles and docker-compose configurations. Podman and podman-compose can consume those mostly unchanged. I already understand docker. So I can both use things other people make and roll new things without using my novelty budget for building and running things in a container, which is basically a solved problem from my perspective.

                                                                                                            Nix or Guix are new to me and would therefore consume my novelty budget, and no one has ever articulated how using my limited novelty budget that way would improve things for me (at least not in any way that has resonated with me).

                                                                                                            Anyone else’s answer is likely to vary, of course. But that’s why I continue to choose dockerfiles and docker-compose files, whether it’s with docker or podman, rather than Nix or Guix.

                                                                                                            1. 5

                                                                                                              Not mentioned in other comments, but you also get process / resource isolation by default on docker/podman. Sure, you can configure service networking, cgroups, and namespaces on Nix yourself, just like on any other system, and set up the relevant network proxying. But getting that prepackaged and on by default is very handy.

                                                                                                              1. 2

                                                                                                                You can get a good way there without much fuss by using the declarative NixOS containers feature (which uses systemd-nspawn under the hood).
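
                                                                                                                For anyone curious, a minimal sketch of what that looks like in the host’s configuration.nix (container name, addresses, and service are made up):

                                                                                                                ```nix
                                                                                                                { config, pkgs, ... }:
                                                                                                                {
                                                                                                                  containers.database = {
                                                                                                                    autoStart = true;
                                                                                                                    privateNetwork = true;            # gets its own network namespace
                                                                                                                    hostAddress = "192.168.100.10";   # hypothetical addresses
                                                                                                                    localAddress = "192.168.100.11";
                                                                                                                    # the guest is just another NixOS configuration
                                                                                                                    config = { config, pkgs, ... }: {
                                                                                                                      services.postgresql.enable = true;
                                                                                                                      system.stateVersion = "23.11";
                                                                                                                    };
                                                                                                                  };
                                                                                                                }
                                                                                                                ```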

                                                                                                              2. 4

                                                                                                                I’m not very familiar with Nix, but I feel like a Nix-based option could do for you what a single container could do, giving you the reproducibility of environment. What I don’t see how to do is something comparable to creating a stack of containers, such as you get from Docker Compose or Docker Swarm. And that’s considerably simpler than the kinds of auto-provisioning and wiring up that systems like Kubernetes give you. Perhaps that’s what Nix Flakes are about?

                                                                                                                That said, I definitely feel like Docker for reproducible developer environments is very heavy, especially on Mac. We spend a significant amount of time rebuilding containers due to code changes. Nix would probably be a better solution for this, since there’s not an entire virtual machine and assorted filesystem-layering technology between us and the code we’re trying to run.
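
                                                                                                                For the dev-environment case specifically, the Nix alternative can be as small as this sketch (packages are placeholders for whatever the project needs); you enter it with `nix-shell`, and code changes need no image rebuild:

                                                                                                                ```nix
                                                                                                                # shell.nix: run `nix-shell` in the project root
                                                                                                                { pkgs ? import <nixpkgs> {} }:
                                                                                                                pkgs.mkShell {
                                                                                                                  packages = [ pkgs.nodejs pkgs.postgresql ];
                                                                                                                }
                                                                                                                ```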

                                                                                                                1. 3

                                                                                                                  Is Nix a container system…? I thought it was a package manager?

                                                                                                                  1. 3

                                                                                                                    It’s not, but I understand the question as: “you can run a well-defined Nix configuration which includes your app, or a container with your app; they’re both reproducible, so why choose one over the other?”

                                                                                                                  2. 1

                                                                                                                    It’s possible to generate Docker images using Nix, at least, so you could use Nix for that if you wanted (and users won’t know that it’s Nix).
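
                                                                                                                    A rough sketch with nixpkgs’ dockerTools (image name and contents arbitrary); `nix-build` produces a tarball that `docker load < result` imports:

                                                                                                                    ```nix
                                                                                                                    { pkgs ? import <nixpkgs> {} }:
                                                                                                                    pkgs.dockerTools.buildLayeredImage {
                                                                                                                      name = "hello";
                                                                                                                      tag = "latest";
                                                                                                                      contents = [ pkgs.hello ];
                                                                                                                      config.Cmd = [ "${pkgs.hello}/bin/hello" ];
                                                                                                                    }
                                                                                                                    ```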

                                                                                                                    1. 1

                                                                                                                      These aren’t mutually exclusive. I run a few Nix VMs for self-hosting various services, and a number of those services are docker images provided by the upstream project that I use Nix to provision, configure, and run. Configuring Nix to run an image with hash XXXX from Docker registry YYYY and such-and-such environment variables doesn’t look all that different from configuring it to run a non-containerized piece of software.
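
                                                                                                                      For reference, the NixOS side of that looks roughly like this (keeping XXXX and YYYY as the placeholders from above, and a made-up service name):

                                                                                                                      ```nix
                                                                                                                      {
                                                                                                                        virtualisation.oci-containers = {
                                                                                                                          backend = "podman";                      # or "docker"
                                                                                                                          containers.myservice = {
                                                                                                                            image = "YYYY/myservice@sha256:XXXX"; # registry + pinned hash
                                                                                                                            environment = { SOME_VAR = "such-and-such"; };
                                                                                                                            ports = [ "8080:80" ];
                                                                                                                          };
                                                                                                                        };
                                                                                                                      }
                                                                                                                      ```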

                                                                                                                    1. 1

                                                                                                                      I’ve been writing blog articles and thinking about a new product idea around project management.

                                                                                                                      I’m also working on a module for listifi.app that will allow users to scrape a web page and turn it into a list.

                                                                                                                      1. 21

                                                                                                                        I’d like a much smaller version of the web platform, something focused on documents rather than apps. I’m aware of a few projects in that direction but none of them are in quite the design space I’d personally aim for.

                                                                                                                        1. 6

                                                                                                                          Well, “we” tried that with PDF, and it still got infected with featureitis; Acrobat Reader is yet another web browser. Perhaps not surprising considering Adobe’s track record, but if you factor in their proprietary extensions (there’s JavaScript in there, 3D models, there used to be Flash and probably still is somewhere..), it followed the same general trajectory and timeline as the W3C soup. Luckily much of that failed to get traction (tooling, proprietary formats, and the web’s network effect all spoke against it), and thus a PDF is still mostly thought of as “a document”.

                                                                                                                          1. 20

                                                                                                                            This is another example of “it’s not the tech, it’s the economy, stupid!” The modern web isn’t an adware-infested cesspool because of HTML5, CSS, and JavaScript; it’s a cesspool because (mis)using these tools makes people money.

                                                                                                                            1. 5

                                                                                                                              Yeah, exactly. For some examples: Twitter recently stopped working without JS (which I assume was a purposeful decision). Then I noticed Medium doesn’t work either; it no longer shows you the whole article without JS. And Reddit has absolutely awful JS that obscures the content.

                                                                                                                              All of this was done within the web platform. It could have been good, but they decided to make it bad on purpose. And at least in the case of Reddit, it used to be good!

                                                                                                                              Restricting or rewriting the platform doesn’t solve that problem – they are pushing people to use their mobile apps and sign in, etc. They will simply use a different platform.

                                                                                                                              (Also note that these platforms somehow make themselves available to crawlers, so I use https://archive.is/, ditto with the NYTimes and so forth. IMO search engines should not jump through special hoops to see this content; conversely, if they make their content visible to search engines, then it’s fair game for readers to see.)

                                                                                                                              1. 4

                                                                                                                                I’ll put it like this: I expect corporate interests to continue using the most full-featured platforms available, including the web platform as we know it today. After all, those features were mostly created for corporate interests.

                                                                                                                                That doesn’t mean everybody else has to build stuff the same way the corps do. I think we can and should aspire for something better - where by better in this case I mean less featureful.

                                                                                                                                1. 4

                                                                                                                                  That doesn’t mean everybody else has to build stuff the same way the corps do. I think we can and should aspire for something better - where by better in this case I mean less featureful.

                                                                                                                                  The trick here is to make sure people use it, for a large value of “people”. I was pretty interested in Gemini from the beginning and wrote some stuff on the network (including an HN mirror), and I found that pushing back against markup languages, uploads, and some form of in-band signaling (compression etc.) ends up creating a narrower community than I’d like. I fully acknowledge this might just be a “me thing” though.

                                                                                                                                  EDIT: I also think you’ve touched upon something a lot of folks are interested in right now as evidenced by both the conversation here and the interest in Gemini as a whole.

                                                                                                                                  1. 3

                                                                                                                                    I appreciate those thoughts, for sure. Thank you.

                                                                                                                                  2. 2

                                                                                                                                    That doesn’t mean everybody else has to build stuff the same way the corps do.

                                                                                                                                    I agree, and you can look at https://www.oilshell.org/ as a demonstration of that (both the site and the software). But all of that is perfectly possible with existing platforms and tools. In fact it’s greatly aided by many old and proven tools (shell, Python) and some new-ish ones (Ninja).

                                                                                                                                    There is value in rebuilding alternatives to platforms for sure, but it can also be overestimated (e.g. fragmenting ecosystems, diluting efforts, what Jamie Zawinski calls CADT, etc.).


                                                                                                                                    Similar to my “alternative shell challenges”, I thought of a “document publishing challenge” based on my comment today on a related story:

                                                                                                                                    The challenge is if the platform can express a widely praised, commercial multimedia document:

                                                                                                                                    https://ciechanow.ski/gears/

                                                                                                                                    https://ciechanow.ski/js/gears.js (source code is instructive to look at)

                                                                                                                                    https://news.ycombinator.com/item?id=22310813 (many appreciative comments)

                                                                                                                                    1. 2

                                                                                                                                      Yeah, there are good reasons this is my answer to “if you could” and not “what are your current projects”. :)

                                                                                                                                      I like the idea of that challenge. I don’t actually know whether my ideal platform would make that possible or not, but situating it with respect to the challenge is definitely useful for thinking about it.

                                                                                                                                      1. 1

                                                                                                                                        Oops, I meant NON-commercial! That was of course the point.

                                                                                                                                        There is non-commercial content that makes good use of recent features of the web.

                                                                                                                                  3. 4

                                                                                                                                    Indeed: tech isn’t the blocker to fixing this problem. The tools get misused because the economic incentives overpower those of the intended use. Sure, you can nudge development in a certain direction by providing references, templates, frameworks, documentation, what have you; but whatever replacement emerges also needs to provide enough economic incentives to minimise the appeal of abuse. Worse still, it needs to be deployed at a tipping point where the value added exceeds the inertia and network effect of the current Web.

                                                                                                                                    1. 2

                                                                                                                                      I absolutely believe that the most important part of any effort at improving the situation has to be making the stuff you just said clear to everyone. It’s important to make it explicit from the start that the project’s view is that corporate interests shouldn’t have a say in the direction of development, because the default is that they do.

                                                                                                                                      1. 2

                                                                                                                                        I think the interests of a corporation should be expressible and considered through some representative, but given the natural advantage an aggregate has in terms of resources, influence, “network effect”, and so on, they should also be subject to scrutiny and transparency that match their relative advantage over other participants. Since that rarely happens, the effect instead seems to be that the Pareto Principle sets in and the corporation becomes the authority in ‘appeal to authority’. They can then lean back and cash in with less effort than anyone else. Those points are moot, though, if the values of the intended tool/project/society aren’t even expressed, agreed upon, or enforced.

                                                                                                                                        1. 1

                                                                                                                                          Yes, I agree with most of that, and the parts I don’t agree with are quite defensible. Well said.

                                                                                                                                  4. 2

                                                                                                                                    Yes, I agree. I do think that this is largely a result of PDF being a corporate-driven project rather than a grassroots one. As somebody else said in the side discussion about Gemini, that’s not the only source of feature creep, but I do think it’s the most important factor.

                                                                                                                                  5. 5

                                                                                                                                    I’m curious what direction that is, too. I’ve been using and enjoying the Gemini protocol, and I think it’s fantastic.

                                                                                                                                    Even the TLS seems great, since it would allow some simple form of client authentication, but in a very anonymous way.

                                                                                                                                    1. 7

                                                                                                                                      I do like the general idea of Gemini. I’m honestly still trying to put my thoughts together, but I’d like something where it’s guaranteed to be meaningful to interact with it offline, and ideally with an experience that looks, you know… more like 2005 than 1995 in terms of visual complexity, if you see what I mean. I don’t think we have to go all the way back to unformatted text; it just needs to be a stable target. The web as it exists right now seems like it’s on a path to keep growing in technical complexity forever, with no upper bound.

                                                                                                                                      1. 9

                                                                                                                                        I have some thoughts in this area:

                                                                                                                                        • TCP/IP/HTTP is fine (I disagree with Gemini there). It’s HTML/CSS/JS that are impossible to implement on a shoestring.

                                                                                                                                        • The web’s core value proposition is documents with inline hyperlinks. Load all resources atomically, without any privacy-leaking dependent loads.

                                                                                                                                        • Software delivery should be out of scope. It’s only needed because our computers are too complex to audit, and the programs we install keep exceeding their rights. Let’s solve that problem at the source.

                                                                                                                                        I’ve thought about this enough to make a little prototype.

                                                                                                                                        1. 5

                                                                                                                                          It’s of course totally fine to disagree, but I genuinely believe it will be impossible to ever avoid fingerprinting with HTTP. I’ve seen stuff, not all of which I’m at liberty to talk about. So from a privacy standpoint I am on board with a radically simpler protocol for that layer. TCP and IP are fine, of course.

                                                                                                                                          I agree wholeheartedly with your other points.

                                                                                                                                          That is a really cool project! Thank you for sharing it!

                                                                                                                                          1. 4

                                                                                                                                            Sorry, I neglected to expand on that bit. My understanding is that the bits of HTTP that can be used for fingerprinting require client (browser) support. I was implicitly assuming that we’d prune those bits from the browser while we’re reimplementing it from scratch anyway. Does that seem workable? I’m not an expert here.

                                                                                                                                            1. 6

                                                                                                                                              I’ve been involved with Gemini since the beginning (I wrote the very first Gemini server) and I was at first amazed at just how often people push to add HTTP features back into Gemini. A little feature here, a little feature there, and pretty soon it’s HTTP all over again. Prune all you want, but people will add those features back if it’s at all possible. I’m convinced of that.

                                                                                                                                              1. 4

                                                                                                                                                So you’re saying that a new protocol didn’t help either? :)

                                                                                                                                                1. 4

                                                                                                                                                  Pretty much. At least Gemini drew a hard line in the sand and didn’t try to prune an existing protocol. But people like their uploads and markup languages.

                                                                                                                                                  1. 2

                                                                                                                                                    Huh. I guess the right thing to do, then, is design the header format with attention to minimizing how many distinguishing bits it leaks.

                                                                                                                                              2. 1

                                                                                                                                                Absolutely. There is nothing very fingerprintable in a minimal valid HTTP request.
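
                                                                                                                                                For reference, a complete, valid HTTP/1.1 request can be as small as this (path and host are placeholders; the trailing blank line terminates the headers), and nothing in it distinguishes one client from another:

                                                                                                                                                ```
                                                                                                                                                GET /page.html HTTP/1.1
                                                                                                                                                Host: example.com
                                                                                                                                                Connection: close

                                                                                                                                                ```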

                                                                                                                                          2. 5

                                                                                                                                            …but I’d like something where it’s guaranteed to be meaningful to interact with it offline

                                                                                                                                            This is where my interest in store-and-forward networks lies. I find that most of what I do on the internet is pulling content down (reading threads, comments, articles, documentation); I push content (responding to things, uploading, etc.) much less frequently. For that situation (which I realize is fairly particular to me), a store-and-forward network would make offline-first interaction a first-class citizen.

                                                                                                                                            I distinguish this from IM (like Matrix, IRC, Discord, etc) which is specifically about near instant interaction.

                                                                                                                                            1. 1

                                                                                                                                              I agree.

                                                                                                                                        2. 2

                                                                                                                                          Have you looked at the gemini protocol?

                                                                                                                                          1. 2

                                                                                                                                            I have, see my other reply.

                                                                                                                                        1. 3

                                                                                                                                          I’ve never gotten the feedback during the last ten years: “It’s a nice app, but it would be better if it were a native app”. Not once.

                                                                                                                                          That’s my hunch as well. The only time I hear this complaint is on HN.

                                                                                                                                          1. 16

                                                                                                                                            This doesn’t really touch on the economic factors behind why Electron is a thing: it’s easy to find JS devs off the street for cheap, not so much Win32 or Cocoa devs; or, for that matter, to find them at all. (edit: Or hiring both at the same time. Why bother when you can (seemingly) do the same with one?)

                                                                                                                                            1. 4

                                                                                                                                              Yeah, it would have been a much better article if he had touched on the hireability aspect. OP is clearly a product guy; it seems like he has no clue there are a lot of good cross-platform frameworks out there these days.

                                                                                                                                              1. 2

                                                                                                                                                [citation required].

                                                                                                                                                1. 1

                                                                                                                                                  It depends on who you are.

                                                                                                                                              2. 2

                                                                                                                                                Not sure where you got your data for JS dev pay. That might have been true a decade ago, but the pay gap has shrunk, especially for experienced devs.

                                                                                                                                                You’re right about one thing: targeting multiple platforms natively isn’t economical, especially nowadays with mobile platforms.