This week I started working on some patches to git to explore some ideas around replacing git’s filesystem interactions when working with loose blobs and pack files in the
.git directory. This weekend I plan on continuing to work on these patches, while also thinking of a name for the project, as I fully expect the patches to never land upstream.
Last year I wrote my own lisp by going through the mal documentation over the course of a week. This article is a good overview of the process, but if you’re interested in doing something similar, I recommend checking out mal and its test suite.
I worked on mal in a language I already knew, and one that was already in the corpus. It was pretty interesting comparing my implementation with the one already written. It’s on my bucket list to write one in a language I’m learning.
The title of this article had me do a double-take: C and C++ development on Windows is great. No sanity is needed.
But that’s not what the article is about. What the article is about is that the C runtime shim that ships with Visual Studio defaults to using the ANSI API calls without supporting UTF-8, goes on to identify this as “almost certainly political, originally motivated by vendor lock-in” (which it’s transparently not), and then talks about how Windows makes it impossible to port Unix programs without doing something special.
I half empathize. I’d empathize more if (as the author notes) you couldn’t just use MinGW for ports, which has the benefit that you can just use GCC the whole way down and not deal with VC++ differences, but I get that, when porting very small console programs from Unix, this can be annoying. But when it comes to VC++, the accusations of incompetence and whatnot are just odd to me. Microsoft robustly caters to backwards compatibility. This is why the app binaries I wrote for Windows 95 still run on my 2018 laptop. There are heavy trade-offs with that approach which in general have been endlessly debated, one of which is definitely how encodings work, but they’re trade-offs. (Just like how Windows won’t allow you to delete or move open files by default, which on the one hand often necessitates rebooting on upgrades, and on the other hand avoids entire classes of security issues that the Unix approach has.)
But on the proprietary interface discussion that comes up multiple times in this article? Windows supports file system transactions, supports opting in to a file being accessed by multiple processes rather than advisory opt-out, has different ideas on what’s a valid filename than *nix, supports multiple data streams per file, has an entirely different permission model based around ACLs, etc., and that’s to say nothing of how the Windows Console is a fundamentally different beast than a terminal. Of course those need APIs different from the C runtime, and it’s entirely reasonable that you might need to look at them if you’re targeting Windows.
Windows won’t allow you to delete or move open files by default
Windows lets the file opener specify whether it supports concurrent delete or move via FILE_SHARE_DELETE, which is badly named and badly understood.
I think the bigger issue, which comes back to the spirit of this article, is what to do when a program doesn’t use a Windows API that can specify this behavior: when I last looked the C runtime library didn’t let programs specify this - _SH_DENYNO is for read and write only. So there’s a lot of people who think Windows doesn’t allow deletes or moves of opened files, because they’re running on an abstraction layer that doesn’t allow it.
Yeah, the entire thing leaves a sour taste in the mouth; portability shouldn’t have to mean “it’s just a different variant on Unix”.
Hell, I actually prefer developing on Windows with the caveat that you aren’t trying to develop Unix applications on Windows. Of course you’d have a bad time. (Though I do wish the narrow Win32 APIs supported UTF-8 as a system codepage… I think Windows 10 finally fixed this.)
(Though I do wish the narrow Win32 APIs supported UTF-8 as a system codepage… I think Windows 10 finally fixed this.)
Yeah, they do; that’s mentioned in the article. I agree that probably ought to have been done earlier, but the sheer level to which normalized UTF-16 is baked into Win32 means it’s usually less mental gymnastics for me to just convert to and from at the API boundary and use the wide APIs.
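The convert-at-the-boundary approach is easy to demonstrate outside Windows; here is a sketch using iconv (assuming it’s installed) of the round trip that converting to and from the wide APIs implies:

```shell
# Round-trip a UTF-8 string through UTF-16LE and back: the same conversion a
# Win32 program performs at the API boundary when it sticks to the wide (W) calls.
printf 'héllo wörld' \
  | iconv -f UTF-8 -t UTF-16LE \
  | iconv -f UTF-16LE -t UTF-8
```

The conversion is lossless both ways, which is why keeping UTF-8 internally and converting only at the boundary works so well.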
I opted for using the UTF-8 codepage so I don’t have to think about converting, especially with all the places the application I inherited touches the Win32 APIs. If the API boundary were contained in one unit, and converting a MultiByte application to UTF-16 weren’t so painful, I might have decided on a different path.
I did file a Wine bug however.
I think there’s a reason why languages like PHP once were created, despite having such a capable language as C/C++ widely available.
It goes something like:
You can write a fast program in C, but in perl/python/php you can write a program fast
I don’t completely disagree, but I think there are also other reasons PHP picked up. For example, it provided the ’90s version of serverless, and no compilation was required.
On top of that, approaches to get closer to natural language (in one way or another) and text processing were big goals, as was easy interfacing with databases in the standard library.
Times change, and we see trends in the opposite direction. Ambiguity and expressiveness over simplicity are no longer the goal; people want things to look familiar, trading writing a few more well-understood constructs for simplicity. People prefer being explicit instead of implicit.
An example of that is the trend away from duck typing. Once that was considered a good thing, because it’s closer to natural language and means less writing; just like with natural language, you could be shorter when many things were implied.
Then we had a rise of Java-style languages, and now the interesting new languages are the ones copying much of the philosophy that people associate with C.
Not saying people should use C or BCHS, but that a language isn’t bad just because people follow different trends, learn other languages in school, or similar. Popular languages tend to keep evolving and their ecosystems keep maturing.
Of course that also means legacy accumulates, as in “failed attempts”, old interfaces (see Go getting new network address types), unmaintained libraries, etc., and I have to admit the lack of these is what’s really exciting about new languages. There’s usually also just less bad, ugly, buggy, unidiomatic code in very new languages.
However, times have changed, and given that there isn’t much that’s very similar to BCHS, and that both the world and the technologies it depends on were different back then, I don’t think the existence of PHP is a good argument against BCHS.
Again, not saying you should use it, or that PHP is worse or better. Just that such general rules usually aren’t the best helpers for practical decisions.
It has full access to the kernel’s system calls and a massive set of development tools and libraries.
I don’t think C the programming language has anything to do with syscalls. Which ones you have access to instead depends on which ones your standard library decides to implement wrappers for, same as in many other languages. Granted, on BSD this is likely most of them, but it certainly isn’t guaranteed.
Let’s pretend I replied one level up, or that I was also unpacking that section of the website. :)
Sorry, I missed that you were referring to content on the same site. The phrase “non-mustachioed” was a thought terminator.
I think the entire site is meant to be read as satire. “The internet is unsafe” - granted, but recommending C in that case is not best practices.
httpd(8): OpenBSD’s home-grown web server. If you need more features for your web application, submit a patch.
It’s not as if you can’t use a sane language like Perl on OpenBSD, it’s included in base for building.
I wouldn’t get good GPS reception, don’t have a great way to run cabling for an antenna, and if I’m going through that expense, I might as well spend the few dollars on an RTC hat instead of a software solution.
systemd compares the system time to a builtin epoch, usually the release or build date of systemd. If it finds the system time is before this epoch, it resets the clock to the epoch
I think this is a great example of why systemd has so many features. You could totally let this be done by a different service, but then you’ll have to re-introduce this kind of trigger into systemd - in a way that it runs before all the things that need DNS. Otherwise your init system plays “crash everything” on bootup, which is totally worthless.
This can easily run as your own unit though. There is no need for systemd to do it specifically. You can create a job like that and make sure it runs before network.target.
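As a sketch, such a unit might look like the following (the unit name and the epoch-file path are assumptions, not something from the thread):

```
# /etc/systemd/system/clock-epoch.service (hypothetical)
[Unit]
Description=Jump the clock forward to the epoch file's mtime if time is implausibly old
DefaultDependencies=no
Before=network.target
ConditionPathExists=/usr/lib/clock-epoch

[Service]
Type=oneshot
# %% escapes a literal % in unit files; date -s "@SECONDS" sets the clock.
ExecStart=/bin/sh -c 'epoch=$(date -r /usr/lib/clock-epoch +%%s); [ "$(date +%%s)" -lt "$epoch" ] && date -s "@$epoch" || true'

[Install]
WantedBy=sysinit.target
```

Unlike the built-in mechanism, a unit like this still runs after systemd’s own early startup, which is the limitation raised elsewhere in the thread.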
While true, you cannot run a unit as early in the boot process as this systemd method, which happens in main before targets and units are started. In most cases this doesn’t matter, but it’s nice to not have all my system logs starting 41 years ago.
I see their point; however, I prefer “obviously wrong time” to “maybe from the last boot or maybe the bootup was actually hanging for minutes” timestamps.
As you see in the article, that comes with some drawbacks and might require manual intervention, when a somewhat-wrong timestamp could be good enough for everything to come back up, at which point NTP will correct the timestamps.
I guess it’s something where you have to decide for each machine whether auditability of timestamps or resilience against manual intervention is more important.
It seems to me that what you really need, then, is a log message saying “time changed from x to y”? Does systemd’s implementation include such a message?
AFAICT, journald does not, but the NTP client I use does log when it adjusts time when it starts up, which is good enough.
As for “maybe from the last boot”, I usually invoke journalctl --boot=0 unless I’m specifically looking for logs from previous boots.
Ah, by “systemd’s implementation” I meant systemd-timesyncd. Though the journal noticing this would probably work too, now that I think about it
If you care about accurate time, don’t use systemd-timesyncd. It’s really not good. Chrony is the best choice for timekeeping in many applications.
Chrony’s solution for “there’s no RTC to set the clock” is to use the last timestamp on the drift file it writes during regular usage. https://chrony.tuxfamily.org/faq.html#_what_if_my_computer_does_not_have_an_rtc_or_backup_battery
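That fallback is easy to emulate by hand; here is a sketch (the drift-file path varies by distro, /var/lib/chrony/chrony.drift is a guess):

```shell
# Print the drift file's mtime as a Unix timestamp: an approximation of the
# last time chronyd was running, usable as a floor for the clock at boot.
DRIFT=${DRIFT:-/var/lib/chrony/chrony.drift}
if [ -f "$DRIFT" ]; then
  date -u -r "$DRIFT" +%s
fi
```

Since chronyd rewrites the drift file during normal operation, its mtime is never far behind the real time at the last shutdown.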
I reckon you didn’t read the article before writing that comment. Why? Because it doesn’t talk about systemd-timesyncd (this mechanism is core systemd behaviour: https://github.com/systemd/systemd/blob/b049b48c4b6e60c3cbec9d2884f90fd4e7013219/src/core/main.c#L1653), and because it does describe the same policy of creating and periodically updating an epoch file.
Ha, usually I’m the guy complaining folks didn’t RTFA. I certainly did, and I reckon you missed the section about systemd-timesyncd that was added. Or maybe I was responding to other comments in this discussion about systemd-timesyncd.
You are correct about it being core systemd behavior. But only the read part. The fact that I have to manually create a new unit file myself to periodically touch the timestamp file makes this pretty awkward. Also, as discussed, 250 has a different mechanism in systemd-timesyncd which applies later, maybe too late to be useful. It’s all a bit of a mess.
Chrony doesn’t really solve the very-early-in-bootstrap problem though; it’s only going to set the clock once it’s started. It’d be nice if Chrony touched the epoch file systemd is expecting. Although /usr/lib is a weird place to store dynamic state like this. Is /usr/lib/unix-epoch used for anything else? My brief searches suggest it’s not.
Although /usr/lib is a weird place to store dynamic state like this.
This awkwardness is because it’s intended for people making system images (like the SD card images for a Pi) to set the epoch without needing to rebuild systemd to set the built-in epoch. That it works here is a nice side effect.
It’d be nice if Chrony touched the epoch file systemd is expecting.
Since all systemd cares about is the mtime of the file, you might be able to point paths in chrony.conf at it. I’m not super familiar with chronyd, but maybe rtcfile would work? It would be great if it had a directive specifically for setting the mtime (or the ability to run a command after sync).
Yep, post only talks about NTP, and my edit in response to an earlier comment even specifically says I don’t use timesyncd.
Chronyd, and other NTP-like tools, tend to start very late in the boot process, as they want networking. By having core systemd set it so early in boot, before units run, the time delta they observe once NTP is running is much smaller. (No more logs saying my system booted “41 years ago”, for example.)
Great trick, I’ll be adding it to all my RPis!
For the future, the just-released systemd 250 integrates a similar method for maintaining a rough clock on RTC-less systems into systemd-timesyncd:
A new setting SaveIntervalSec= has been added to systemd-timesyncd, which may be used to automatically save the current system time to disk in regular intervals. This is useful to maintain a roughly monotonic clock even without RTC hardware and with some robustness against abnormal system shutdown.
I’ve added a note to the post linking here. I read the 250 release notes yesterday, which probably got the thought in my mind today, but then I totally forgot about this change. It’s even literally directly below the change I was looking at, clock-valid-range-usec-max. So frustrating that I missed it. :)
As I mention in the note, I don’t use timesyncd. Though if I did, it runs much later in the boot process than the clock-epoch, simultaneously with other services. I’d prefer having the clock reasonably set up before then.
Doesn’t the timer file need a service unit?
And shouldn’t the “enable and start” command be systemctl … rather than …? And shouldn’t there be a …?
Timer files do not need a unit defined. By default, they run the service unit that matches the same name, ignoring “.timer”. Yes, I’m not sure where my copy and paste of the
As for the commands, yes typos. I’ll fix. (edit, fixed! thanks)
That’s activating the timer to begin updating the file’s modification time, which was an arbitrary value after bootup. You can set it lower if you want, and if your reboot took longer than 17 minutes, it will run sooner anyway. The clock is set from the mtime well before that.
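For reference, the pair might look like this (unit names and the epoch-file path are assumptions):

```
# clock-epoch.timer — activates clock-epoch.service purely by name matching
[Timer]
OnBootSec=17min
OnUnitActiveSec=17min

[Install]
WantedBy=timers.target

# clock-epoch.service — no Unit= line needed in the timer above
[Service]
Type=oneshot
ExecStart=/usr/bin/touch /usr/lib/clock-epoch
```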
TLDR: If you assign cpu quotas to containers and you give them all the cores they ask for (thread pools), they’ll immediately get throttled due to the overall cpu time consumed by that. Instead give them only as many cores as is possible without throttling, so tail latency stays ok and you don’t overload node after node.
At $DAYJOB our advice has been “set GOMAXPROCS to some value close to the amount of CPU you’re asking for” (usually the same, since Go doesn’t count IO-blocked goroutines). Otherwise teams would ask for 1 or 2 CPUs, then fall over in latency due to having several times more goroutines.
Plus, it took kernel patches to achieve that, and a lot of other investigation worth a read, IMO.
investigation worth a read
True, but the article is really long with a lot of “what ifs” so you can get lost.
I’ll be continuing my work on modernizing Mixere, an audio mixer for live productions, to get a v1.2.0 released; Mixere’s last release from the original maintainer was in 2007. My focus for this weekend is to fix type warnings and to get 64-bit support.
Last weekend I released the source code to my first ClojureScript application, and wrote an outline for a programming-tutorial YouTube series. The goal this weekend is to get an episode recorded and released.
Finish up my first ClojureScript project, License Wizard 2000, by publishing the source code and setting up hosting. I’m considering turning it into a tutorial on YouTube.
This is awesome, and I hope to see some source (with the hope of porting this to some embedded device). There’s another, parallel effort which I hoped would’ve panned out by now, but it seems to have stalled, sadly.
PalmOS was my introduction to embedded computing (with the Palm V, an incredible piece of hardware in and of itself) and holds a special place in my heart, but remains one of the more thought-out and consistent user experiences on a hand-held device to date.
My understanding is shortly after that post, Dmitry started a new job at Apple, which as you can imagine, might make it more difficult to work on reverse engineering side projects.
[Edit 3 hours later: over on the HN thread Dmitry confirms they’re still working on the project.]
Q: Why choose Docker or Podman over Nix or Guix?
Edit with some rephrasing: why run containers over a binary cache? They can both do somewhat similar things in creating a reproducible build (so long as you aren’t apt upgrade-ing in your container’s config file) and laying out how to glue your different services together, but is there a massive advantage of one over the other?
I can’t speak for the OP, but for myself there are three reasons:
Docker for Mac is just so damn easy. I don’t have to think about a VM or anything else. It Just Works. I know Nix works natively on Mac (I’ve never tried Guix), but while I do development on a Mac, I’m almost always targeting Linux, so that’s the platform that matters.
The consumers of my images don’t use Nix or Guix, they use Docker. I use Docker for CI (GitHub Actions) and to ship software. In both cases, Docker requires no additional effort on my part or on the part of my users. In some cases I literally can’t use Nix. For example, if I need to run something on a cluster controlled by another organization there is literally no chance they’re going to install Nix for me, but they already have Docker (or Podman) available.
This is minor, I’m sure I could get over it, but I’ve written a Nix config before and I found the language completely inscrutable. The Dockerfile “language”, while technically inferior, is incredibly simple and leverages shell commands I already know.
I am not a nix fan, quite the opposite, I hate it with a passion, but I will point out that you can generate OCI images (docker/podman) from nix. Basically you can use it as a Dockerfile replacement. So you don’t need nix deployed in production, although you do need it for development.
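A minimal sketch of that with nixpkgs’ dockerTools (image name and package choice are arbitrary):

```nix
# default.nix — `nix-build` produces a tarball you can `docker load < result`
{ pkgs ? import <nixpkgs> {} }:
pkgs.dockerTools.buildLayeredImage {
  name = "hello-image";
  tag = "latest";
  contents = [ pkgs.hello ];
  config.Cmd = [ "${pkgs.hello}/bin/hello" ];
}
```

The layered variant keeps each store path in its own image layer, which tends to make rebuilds and pushes cheap.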
I’m not the previous commenter but I will share my opinion. I’ve given nix two solid tries, but both times walked away. I love declarative configuration and really wanted it to work for me, but it doesn’t.
This speaks to my experience with Nix too. I want to like it. I get why it’s cool. I also think the language is inscrutable (for Xooglers, the best analogy is borgcfg) and the thing I want most is to define my /etc files in their native tongue under version control and for it all to work out rather than depend on Nix rendering the same files. I could even live with Nix-the-language if that were the case.
I also think the language is inscrutable (for Xooglers, the best analogy is borgcfg)
As a former Google SRE, I completely agree—GCL has a lot of quirks. On the other hand, nothing outside Google compares, and I miss it dearly. Abstracting complex configuration outside the Google ecosystem just sucks.
Yes, open tools exist that try to solve this problem. But only gcl2db can load a config file into an interactive interface where you can navigate the entire hierarchy of values, with traces describing every file:line that contributed to the value at a given path. When GCL does something weird, gcl2db will tell you exactly what happened.
Thanks for the reply. I’m actually not a huge fan of DSLs, so this might be swaying me away from setting up NixOS. I have a VM set up with it, and tbh the thought of trawling through nix docs to figure out the magical phrase to do what I want does not sound like much fun. I’ll stick with Arch for now.
If you want the nix features but a general purpose language, guix is very similar but uses scheme to configure.
I would love to use Guix, but the lack of nonfree is a killer, as getting Steam running is a must. There’s no precedent for it being used in the unjamming communities I participate in, whereas Nix has a sizable following.
Sorry for the very late reply. The problem I have with nixos is that it’s anti-abstraction in the sense that I elaborated on here. Instead it’s just the ultimate wrapper.
To me, the point of a distribution is to provide an algebra of packages that’s invariant in changes of state. Or to reverse this idea, an instance of a distribution is anything with a morphism to the category of packages.
Nix (and nixos) is the ultimate antithesis of this idea. It’s not a morphism, it’s a homomorphism. The structure is algebraic, but it’s concrete, not abstract.
People claim that “declarative” configuration is good, and it’s hard to attack such a belief, but people don’t really agree on what it really means. In Haskell it means that expressions have referential transparency, which is a good thing, but in other contexts when I hear people talk about declarative stuff I immediately shiver, expecting the inevitable pain. You can “declare” anything if you are precise enough, and that’s what nix does, it’s very precise, but what matters is not the declarations but the interactions, and in nix interaction means copying sha256 hashes in an esoteric programming language. This is painful and as far away from abstraction as you can get.
Also notice that I said packages. Nix doesn’t have packages at all. It’s a glorified build system wrapper for source code. Binaries only come as a side effect, and there are no first class packages. The separation between pre-build artefacts and post-build artefacts is what can enable the algebraic properties of package managers to exist, and nix renounces this phase distinction with prejudice.
To come to another point, I don’t like how Debian (or you other favorite distribution) chooses options and dependencies for building their packages, but the fact that it’s just One Way is far more important to me than a spurious dependency. Nix, on the other hand, encourages pets. Just customize the build options that you want to get what you want! What I want is a standard environment, customizability is a nightmare, an anti-feature.
When I buy a book, I want to go to a book store and ask for the book I want. With nix I have to go to a printing press and provide instructions for printing the book I want. This is insanity. This is not progress. People say this is good because I can print my book into virgin red papyrus. I say it is bad exactly for the same reason. Also, I don’t want all my prints to be dated January 1, 1970.
For me personally, I never chose Docker; it was chosen for me by my employer. I could maybe theoretically replace it with podman because it’s compatible with the same image format, which Guix (which is much better designed overall) is not. (But I don’t use the desktop docker stuff at all so I don’t really care that much; mostly I’d like to switch off docker-compose, which I have no idea whether podman can replace.)
FWIW Podman does have podman-compose functionality, but it works differently. It uses k8s under the hood, so in that sense some people prefer it.
If you’re targeting Linux why aren’t you using a platform that supports running & building Linux software natively like Windows or even Linux?
… to call WSL ‘native’ compared to running containers/etc via VMs on non-linux OS’s is a bit weird.
I enjoy using a Mac, and it’s close enough that it’s almost never a problem. I was a Linux user for ~15 years and I just got tired of things only sorta-kinda working. Your experiences certainly might be different, but I find using a Mac to be an almost entirely painless experience. It also plays quite nicely with my iPhone. Windows isn’t a consideration, every time I sit down in front of a Windows machine I end up miserable (again, YMMV, I know lots of people who use Windows productively).
Because “targeting Linux” really just means “running on a Linux server, somewhere” for many people and they’re not writing specifically Linux code - I spend all day writing Go on a mac that will eventually be run on a Linux box but there’s absolutely nothing Linux specific about it - why would I need Linux to do that?
WSL2-based containers run a lightweight Linux install on top of Hyper-V. Docker for Mac runs a lightweight Linux install on top of xhyve. I guess you could argue that this is different because Hyper-V is a type-1 hypervisor, whereas xhyve is a type-2 hypervisor using the hypervisor framework that macOS provides, but I’m not sure that either really counts as more ‘native’.
If your development is not Linux-specific, then XNU provides a more complete and compliant POSIX system than WSL1, which are the native kernel POSIX interfaces for macOS and Windows, respectively.
Prod runs containers, not Nix, and the goal is to run the exact same build artifacts in Dev that will eventually run in Prod.
Lots of people distribute dockerfiles and docker-compose configurations. Podman and podman-compose can consume those mostly unchanged. I already understand docker. So I can both use things other people make and roll new things without using my novelty budget for building and running things in a container, which is basically a solved problem from my perspective.
Nix or Guix are new to me and would therefore consume my novelty budget, and no one has ever articulated how using my limited novelty budget that way would improve things for me (at least not in any way that has resonated with me).
Anyone else’s answer is likely to vary, of course. But that’s why I continue to choose dockerfiles and docker-compose files, whether it’s with docker or podman, rather than Nix or Guix.
Not mentioned in other comments, but you also get process/resource isolation by default on docker/podman. Sure, you can configure service networking, cgroups, and namespaces on nix yourself, just like any other system, and set up the relevant network proxying. But getting that prepackaged and on by default is very handy.
You can get a good way there without much fuss by using the Declarative NixOS containers feature (which uses systemd-nspawn under the hood).
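A minimal sketch of such a container (the service choice is arbitrary):

```nix
# configuration.nix fragment: a declarative nspawn container running nginx
containers.webserver = {
  autoStart = true;
  config = { pkgs, ... }: {
    services.nginx.enable = true;
  };
};
```

Each attribute under containers becomes a systemd-nspawn machine that shares the host’s store but runs its own NixOS configuration.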
I’m not very familiar with Nix, but I feel like a Nix-based option could do for you what a single container could do, giving you the reproducibility of environment. What I don’t see how to do is something comparable to creating a stack of containers, such as you get from Docker Compose or Docker Swarm. And that’s considerably simpler than the kinds of auto-provisioning and wiring up that systems like Kubernetes give you. Perhaps that’s what Nix Flakes are about?
That said I am definitely feeling like Docker for reproducible developer environments is very heavy, especially on Mac. We spend a significant amount of time rebuilding containers due to code changes. Nix would probably be a better solution for this, since there’s not really an entire virtual machine and assorted filesystem layering technology in between us and the code we’re trying to run.
It’s not, but I understand the question as “you can run a well-defined nix configuration which includes your app, or a container with your app; they’re both reproducible, so why choose one over the other?”
It’s possible to generate Docker images using Nix, at least, so you could use Nix for that if you wanted (and users won’t know that it’s Nix).
These aren’t mutually exclusive. I run a few Nix VMs for self-hosting various services, and a number of those services are docker images provided by the upstream project that I use Nix to provision, configure, and run. Configuring Nix to run an image with hash XXXX from Docker registry YYYY and such-and-such environment variables doesn’t look all that different from configuring it to run a non-containerized piece of software.
What’s going on here? How did this get to the top of lobste.rs with 26 upvotes? I’m happy for the OP that they could get their system to work, but as far as I can tell, the story here is “package manager used to manage packages.” We have been doing that for decades. Is there any way the community can get a lever to push back on thin stories like this one?
Would it change your opinion if the article mentioned that the nix shell being used here is entirely disposable and this process leaves no mark in your OS setup? Also that even if this required some obscure versions of common system dependencies you could drop into such a shell without worrying about version conflicts or messing up your conventional package manager?
I agree that the article is thin on content, but I don’t think you can write this story off as “package manager used to manage packages.” I think nix shell is very magical in the package management world.
Yes, but then you’d be inside a container, so you’d have to deal with the complexities of that, like mounting drives, routing network traffic etc. With nix shell, you’re not really isolated, you’re just inside a shell session that has the necessary environment variables that provide just the packages you’ve asked for.
Aside from the isolation, the nix shell is also much more composable. It can drop you into a shell that simultaneously has a strange Java, python and Erlang environment all compiled with your personal fork of GCC, and you’d just have to specify your GCC as an override for that to happen.
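Concretely, a shell.nix along those lines might look like this (package attribute names are assumed from nixpkgs):

```nix
# shell.nix — `nix-shell` drops you into a shell with all three toolchains
{ pkgs ? import <nixpkgs> {} }:
pkgs.mkShell {
  buildInputs = [ pkgs.jdk pkgs.python3 pkgs.erlang ];
}
```

The custom-GCC part would be an overlay overriding stdenv, which is more involved, but the shell expression itself stays this small.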
I get that, but I have to go through the learning curve of nix-shell, while I already know docker, since I need it for my job anyway. I am saying that there are more ways to achieve what the article is talking about. It is fine that the author is happy with their choice of tools, but it is very unremarkable given the title and how many upvotes the article got.
Why not learn nix and then use it at work as well :) Nix knows how to package up a nix-defined environment into a docker container and produce very small images, and you don’t even need docker itself to do that. That’s what we do at work. I’m happy because as far as I’m concerned Nix is all there is and the DevOps folks are also happy because they get their docker images.
I work in a humongous company where the tools and things are less free to choose from atm, so even if I learned nix, it would be a very tough sell..
As someone who hasn’t used Docker, it would be nice to see what that looks like. I’m curious how the two approaches compare.
I think that the key takeaway is that with Docker, you’re actually running a container with a full-blown OS inside. I have a bias against it, which is basically just my opinion, so take it with a grain of salt.
I think that once the way to solve the problem of “I need to run some specific version of X” becomes “let’s just virtualize a whole computer and OS, because dependency handling is broken anyway”, we, as a category, simply gave up. It is side-stepping the problem.
Now, the approach with Nix is much more elegant. You have fully reproducible dependency graphs, and with nix-shell you can drop yourself in an environment that is suitable for whatever you need to run regardless of dependency conflicts. It is quite neat, and those shells are disposable. You’re not running in a container, you’re not virtualizing the OS, you’re just loading a different dependency graph in your context.
See, I don’t use Nix at all because I don’t have these needs, but I played with it and was impressed. I dislike our current approach of “just run a container”; it feels clunky to me. I think Docker has its place, especially in DevOps and stuff, but using it to solve “I need to run Python 2.x and stuff conflicts with my Python 3.x install” is not the way I’d like to see our ecosystem going.
In the end, from a very high-level, almost stratospheric point of view, both the Docker and nix-shell workflows amount to the developer typing some commands in the terminal and ending up with what they need running. So from the mechanical standpoint of needing to run something, they both solve the problem. I just don’t like that the evergreen “just virtualize it” is now the preferred solution.
Just be aware that this is an opinion from someone heavily biased against containers. You should play with both of them and decide for yourself.
This comment is a very good description of why I’ve never tried Docker (and – full disclosure – use Nix for things like this).
But what I’m really asking – although I didn’t make this explicit – is a comparison of the ergonomics. The original post shows the shell.nix file that does this (although as I point out in another comment, there’s a shell one-liner that gets you the same thing). Is there an equivalent Dockerfile?
I was surprised to see Docker brought up at all, because my (uninformed) assumption was that making a Docker image would be prohibitively slow or difficult for a one-off like this. I assumed it would be clunky to start a VM just to run a single script with a couple of dependencies. But the fact that it was offered as an alternative to nix-shell makes me think that I’m wrong, and that Docker might be appropriate for more ad-hoc things than I expected, which makes me curious what that looks like. It points out a gap in my understanding that I’d like to fill… with as little exertion of effort as possible. :)
But the fact that that was offered as an alternative to nix-shell makes me think that I’m wrong, and that Docker might be appropriate for more ad-hoc things than I expected, which makes me curious what that looks like. It points out a gap in my understanding that I’d like to fill… with as little exertion of effort as possible. :)
I think containers are a perfectly capable solution to this. The closest thing you could use would probably be toolbox. It would even let you provide a standardized environment decoupled from the deployment itself (if that makes sense). It also mounts $HOME.
I use Nix, but also have experience with Toolbox.
I would recommend most people use Toolbox over nix-shell. With toolbox you can create one-off containers in literally seconds (it’s two commands). After entering the container you can just dnf install whatever you need. Your home directory gets mounted, so you do not have to juggle volumes, etc. If you need to create the same environment more often, you can write a Dockerfile and build your toolbox containers with podman. The upstream containers that Fedora provides are also just built this way.
The post shows a simple use case, but if you want to do something less trivial, it often entails learning Nix the language and nixpkgs (and all its functions, idioms, etc.). And the Nix learning curve is steep (though it is much simpler if you are familiar with functional programming). This makes the toolbox approach orders of magnitude easier for most people - you basically need to know toolbox create and toolbox enter, and you can use all the knowledge that you already have.
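For reference, those two commands look roughly like this (the container name and package are arbitrary examples, not from the comment):

```shell
# Create a container matching your Fedora release and step inside it;
# $HOME is mounted automatically, so your files are just there.
#   toolbox create mydev
#   toolbox enter mydev

# Inside, the usual package manager works as normal:
#   sudo dnf install graphviz
```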
However, a very large shortcoming of toolbox/Dockerfiles/etc. is reproducibility. Sure, you can pass around an image and someone else will have the same environment. But Nix allows you to pin all dependencies plus the derivations (e.g. as a git SHA). You can give someone your Nix flake and they will have exactly the same dependency graph and build environment guaranteed.
Another difference is that once you know Nix, it is immensely powerful for defining packages. Nix is a Turing-complete functional language, so nixpkgs can provide a lot of powerful abstractions. I dread every time I have to create or modify an RPM spec file, because it is so primitive compared to writing a Nix derivation.
tl;dr: most people will want to use something like Toolbox; it is familiar and provides many of the same benefits as e.g. nix-shell (isolated, throw-away environments, with your home directory available). However, if you want strong reproducibility across systems and a more powerful packaging/configuration language, learning Nix is worth it.
A cool aspect of Docker is that it has a gazillion images already built and available for it. So depending on what you need, you’ll find a ready-made image you can put to good use with a single command. If there are no images that fill your exact need, then you’ll probably find an image that is close enough and can be customised. You don’t need to create images from scratch. You can remix what is already available. In terms of ergonomics, it is friendly and easy to use (for these simple cases).
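To make that concrete, here is a hedged sketch of a Dockerfile for the article’s Python 2.7 case (the base image tag and system packages are my assumptions; psycopg2 needs gcc and the libpq headers to build from source):

```shell
cat > Dockerfile <<'EOF'
FROM python:2.7-slim
RUN apt-get update && \
    apt-get install -y --no-install-recommends gcc libpq-dev graphviz && \
    rm -rf /var/lib/apt/lists/*
RUN pip install psycopg2 graphviz
WORKDIR /work
EOF
# Build once, then run a script inside the container:
#   docker build -t py27-env .
#   docker run --rm -it -v "$PWD":/work py27-env python script.py
```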
So, nixpkgs has a steeper learning curve than Dockerfiles. It might be simpler to just run Docker. What I don’t like is what is happening inside Docker, and how the solution to what look like simple problems involves running a whole OS.
I’m aware that you can have containers without an OS, as described in this thread, but that is not something I often see people using in the wild.
Nit-pick: AFAIK one doesn’t really need Alpine or any other distro inside the container. It’s “merely” for convenience. AFAICT it’s entirely possible to e.g. run a Go application in a container without any distro. See e.g. https://www.cloudbees.com/blog/building-minimal-docker-containers-for-go-applications
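The linked article’s technique, sketched as a multi-stage build (the Go version and paths are illustrative, not taken from the article):

```shell
cat > Dockerfile <<'EOF'
# Build stage: produce a statically linked binary.
FROM golang:1.21 AS build
WORKDIR /src
COPY . .
RUN CGO_ENABLED=0 go build -o /app .

# Final stage: no distro at all, just the one binary.
FROM scratch
COPY --from=build /app /app
ENTRYPOINT ["/app"]
EOF
# docker build -t tiny-app .
```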
nix shell is actual magic — like sourcerer level, wave my hand and airplanes become dragons (or vice versa) magic — well this article just demonstrated that immense power by pulling a coin out of a deeply uncomfortable kid’s ear while pulling on her nose.
I can’t speak for the previous comment’s author, but those extra details, or indeed any meat on the bones, would definitely help justify this article’s otherwise nonsensical ranking.
Yeah, I agree with your assessment. This article could just as well have the title “MacOS is so fragile, I consider this simple thing to be an issue”. The trouble with demonstrating nix shell’s power is that for all the common cases, you have a variety of ad-hoc solutions. And the truly complex cases appear contrived out of context (see my other comment, which you may or may not consider to be turning airplanes into dragons).
nix is not the first thing most devs would think of when faced with that particular problem, so it’s interesting to see reasons to add it to your toolbox.
Good, as it is not supposed to be the first thing. Learning a fringe system with a new syntax just to do something trivial is not supposed to be the first thing at all.
I also find it baffling that this story has more upvotes than the excellent and original code-visualization article currently also ranked very high. Probably some Nix upvote ring pushing this.
I didn’t think this article was amazing, but I found it more interesting than the code visualization one, which lost me at the first “From this picture, you can immediately see that X,” and I had to search around the picture for longer than it would have taken me to construct a find command to find the X it was talking about.
This article, at least, caused me to say, “Oh, that’s kind of neat, wouldn’t have thought of using that.”
This article is useless. It is way simpler (and the Python way) to just create a 2.7 virtualenv and run “pip install psycopg2 graphviz”. No need to write a Nix file, and then write a blog post to convince yourself you didn’t waste your time!
Considering all nix posts get upvoted regardless of content, it’s about time we have a “nix” tag added to the site.
This article is not useless just because you don’t see its value.
I work mainly with Ruby and have to deal with old projects. There are multiple instances where the Ruby way (using a Ruby version manager) did not work because it was unable to install an old Ruby version or gem on my new development machine. Using a nix-shell did the job every time.
just create a 2.7 virtualenv and run “pip install psycopg2 graphviz”
What do you do if this fails due to some obscure dependency problem?
What do you do if this fails due to some obscure dependency problem?
Arguably you solve it by pinning dependency versions in the pip install invocation or requirements.txt, as any Python developer not already using Nix would do.
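i.e. something along these lines (the version numbers are illustrative, not taken from the article):

```shell
# Pin exact versions so the environment is repeatable across machines.
cat > requirements.txt <<'EOF'
psycopg2==2.8.6
graphviz==0.16
EOF
# Then, inside a fresh virtualenv:
#   pip install -r requirements.txt
```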
This article is not useless just because you don’t see its value.
No, but it is fairly useless because it doesn’t do anything to establish that value, except to the choir.
In my experience there will be a point where your dependencies fail due to mismatched OpenSSL or glibc versions and so on. No amount of pinning dependencies will protect you against that. The only way out is to update the dependencies and the version of your language, but that either detracts from your goal of getting an old project to run or is straight up impossible.
Enter Nix: You pin the entire environment in which your program will run. In addition you don’t pollute your development machine with different versions of libraries.
Arguably that’s just shifting the burden of effort based on a value judgement. If your goal is to get an old project to run while emphasizing the value of incurring zero effort in updating it, then obviously Nix is a solution for you and you’ll instead put the effort into pinning its entire runtime environment. If, however, your value to emphasize is getting the project to run then it may well be a more fruitful choice to put the effort into updating the project.
The article doesn’t talk about any of the hairier details you’re speaking to, it just shows someone taking a slightly out of date Python project and not wanting to put any personal effort into updating it… but updating it by writing a (in this case relatively trivial) Python 3 version and making that publicly available to others would arguably be the “better” solution, at least in terms of the value of contributing back to the community whose work you’re using.
But ultimately my argument isn’t with the idea that Nix is a good solution to a specific problem, it’s that this particular article doesn’t really make that point and certainly doesn’t convincingly demonstrate the value of adding another complex bit of tooling to the toolkit. All the points you’ve raised would certainly help make that argument, but they’re not sadly not present in this particular article.
Just out of curiosity: I’m also dealing with ancient Ruby versions and use Nix at work, but I couldn’t figure out how to get old enough versions. Is there something that helps with that?
Do note this method will get you a ruby linked to dependencies from the same checkout. In many cases this is what you want.
If instead you want an older ruby but linked to newer libraries (eg, OpenSSL) there’s a few extra steps, but this is a great jumping off point to finding derivations to fork.
Do note this method will get you a ruby linked to dependencies from the same checkout. In many cases this is what you want.
Plus glibc, OpenSSL and other dependencies with many known vulnerabilities. This is fine for local stuff, but definitely not something you’d want to do for anything that is publicly visible.
Also, note that mixing different nixpkgs versions does not work when an application uses OpenGL, Vulkan, or any GPU-related drivers/libraries. The graphics stack is global state in Nix/NixOS and mixing software with different glibc versions quickly goes awry.
This comment mentions having done something similar with older versions by checking out an older version of the nixpkgs repo that had the version of the language that they needed.
Like others already said, you can just pin nixpkgs. Sometimes there is more work involved. For example, this is the current shell.nix for a Ruby on Rails project that hasn’t been touched for 5 years. I’m in the process of setting up a reproducible development environment to get development going again. As you can see, I have to jump through hoops to get Nokogiri to play nicely.
There is also a German blog post with shell.nix examples in case you need inspiration.
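A minimal sketch of what pinning nixpkgs looks like (the `<rev>` and `<hash>` values are placeholders you would fill in with a real commit and its hash):

```shell
cat > shell.nix <<'EOF'
{ pkgs ? import (fetchTarball {
    # Pin nixpkgs to an exact commit; <rev> and <hash> are placeholders.
    url = "https://github.com/NixOS/nixpkgs/archive/<rev>.tar.gz";
    sha256 = "<hash>";
  }) {} }:
pkgs.mkShell {
  buildInputs = [ pkgs.ruby ];
}
EOF
# nix-shell then builds against that snapshot, regardless of your channel.
```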
This example, perhaps. I recently contributed to a Python 2 code base and running it locally was very difficult due to C library dependencies. The best I could do at the time was a Dockerfile (which I contributed with my changes) to encapsulate the environment. However, even from the container standpoint, fetching dependencies is still just as nebulous as “just apt install xyz”. Changes to the base image, an ambiently available dependency, or the distro simply turning off package sources for unsupported versions will break the container build. In the Nix case, the user is more or less forced to spell out completely what the code needs; combine that with flakes and I have a lockfile not only for my Python dependencies, but effectively the entire shell environment.
More concretely, at work, the powers that be wanted to deploy Python to an old armv7 SoC running on a device. Some of the Python code requires C dependencies like OpenSSL, the protobuf runtime, and other things, and it was hard to cross-compile these for the target. Yes, for development it works as you describe: you just use a venv, pip install (pipenv, poetry, or whatever as well) and everything is peachy. Then comes deployment:
I was able to crap out a proof-of-concept in a small Nix expression that made a shell running the Python interpreter I wanted, with the Python dependencies needed, on both the host and the target, and I didn’t even have to think. Nixpkgs even gives you cross-compiling capabilities.
Your suggested plan is two years out of date, because CPython 2.7 is officially past its end of life and Python 2 packages are generally no longer supported by upstream developers. This is the power of Nix: Old software continues to be available, as if bitrot were extremely delayed.
CPython 2.7 is available in Debian stable (even testing and sid!), CentOS, and RHEL. Even on macOS it is still the default Python that ships with the system. I don’t know why you think it is no longer available in any distro other than Nix.
Possibly naive question: If your OS configuration is already reproducible and declarative because it is all managed by Puppet or Ansible (assuming you aren’t abusing those tools to run non-idempotent scripts), does NixOS bring a lot of extra value?
Using Puppet, Ansible, or Salt can sort of replicate the declarative nature of NixOS, but they fall short of duplicating all the behavior.
For example, if you’re managing a file or package with Salt, and you decide you don’t need it anymore, you have to remember to do package.absent or similar to actually remove it from running systems. In NixOS you can just remove the declaration, and it will be removed from the active system. We’ve had many incidents at work caused by someone forgetting to remove a configuration file or systemd unit when no longer needed.
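A sketch of the difference (Salt’s actual state is spelled pkg.absent; the package choice is arbitrary):

```shell
# Salt: removal must be stated explicitly, e.g.
#   htop:
#     pkg.absent

# NixOS: deleting the entry is enough; the next switch removes it.
cat > configuration-snippet.nix <<'EOF'
{ pkgs, ... }: {
  # Was: environment.systemPackages = [ pkgs.htop pkgs.git ];
  environment.systemPackages = [ pkgs.git ];
}
EOF
```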
Puppet/Ansible/Salt can still display a “hysteresis” effect: because they rely on e.g. apk, their reproducibility is kinda “best effort”. Notably, installing some package and then uninstalling it with them can leave your system (say, the /etc dir) in a different state than before, even if your “declarative” config is identical as it was before. With NixOS, identical config guarantees (via hashes and readonly mounts) exactly identical OS directory tree and to a much wider extent, including /etc, and fully accounting for removed files. There’s still a few things that are hard to avoid keeping modifiable (/var, $HOME, hardware, …), but other than that, it’s a very different level of immutability/reproducibility. That said, it necessarily makes NixOS more “total”, i.e. you either fully go with it, or not. (There could be ways to make this more gradual, but no such approach has been developed enough to be publicly usable by others yet AFAIK.)
To paraphrase a former coworker, Puppet needs to be run repeatedly until the log is all green. Nix only needs to be run once. (I will admit that systemd can get hung up and need multiple attempts to figure out its units, but Nix only needs one run to change which units systemd is working on.)
The main benefit of NixOS, if you’ve got everything else replicated in Ansible/Puppet (which is very hard), is that you can do atomic upgrades and switches between systems. Nix also allows you to take a system’s current configuration, copy it 1:1 to another machine, and switch to that atomically.
Once you’ve managed to declare all your system configuration with Nix, you can use the same expressions (and the already-generated files in your Nix store) to produce installer ISOs, VM images, … without much hassle. This goes as far as testing a new configuration of your system in a VM without building a full VM image: instead it can just mount the subset of paths required for the system into the VM. This allows you to iterate on the level of seconds instead of whatever Puppet/Vagrant allow you to do. In my experience the whole Puppet workflow takes at least tens of minutes and might then still install something different every time (due to timing, package sources being upgraded halfway through, …).
Don’t forget the developer goodies. I mostly use the Nix package manager on non-NixOS linuxes to quickly get to a project of mine and open a shell with all dependencies set just the way I need them to start working on that project. If you’re familiar with python’s virtualenv – it’s a similar experience but with all the C/system dependencies that you need.
With Puppet or Ansible I’d need to maintain separate setup scripts and documentation for dev and non-dev environments.
Not really, assuming of course that the Puppet or Ansible configuration is truly idempotent, which is a very hard thing to do. On the other hand, achieving idempotency with Nix is much easier.
I’ve got two things on my to-do list this weekend: