Threads for lilyball

    1. 4

      Bonus question: The above doesn’t compile if you substitute Box or Rc for BufReader. Why not? Something about the CoerceUnsized impls on those? I don’t know the answer.

      BufReader<dyn File> is an unsized type; Box<dyn File> and Rc<dyn File> aren’t. There isn’t a coercion defined from &Box<T> to &Box<dyn Trait>, which you can see by writing let r: &Box<dyn File> = &b as _;, where the compiler tells you that it’s a non-primitive cast. What you can do is write … = &(b as _);, which forces the coercion before wrapping the value in a reference (though it also moves the value).
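      A minimal sketch of both forms, with a hypothetical File trait standing in for the one from the article:

      trait File {}
      struct Disk;
      impl File for Disk {}

      fn main() {
          let b: Box<Disk> = Box::new(Disk);

          // Rejected: there is no coercion from &Box<Disk> to &Box<dyn File>,
          // so this is a non-primitive cast.
          // let r: &Box<dyn File> = &b as _;

          // Coercing the Box first and then taking a reference works,
          // though the cast moves `b`.
          let r: &Box<dyn File> = &(b as _);
          let _ = r;
      }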

      1. 10

        For the past 6 months we’ve been banging our heads against APFS running out of disk space on our CI while showing 60GB free space. And I mean, it runs out of disk space hard, can’t even save the zsh history.

        This started with Sonoma. Upgrading to the latest point release made no difference. The internet is littered with what looks like the same problem and the suggested fix is always to empty your trash (can’t you read, I still have tens, even hundreds of GB free, I shouldn’t need to clean my trash). My current theory is that APFS is not able to keep up with recycling some internal garbage and reports it as generic ENOSPC. Mac OS, at least from the CI perspective, is worse than Windows.

        P.S. If anyone has any clue what’s going on with APFS space issue, I am all ears.

        EDIT: Forgot to add, the bizarre thing is that once you log in to such a machine, it’s no longer out of space and you can write as much as you need. This is the reason for my “cannot recycle garbage fast enough” theory.

        1. 4

          I do a lot of low-level filesystem development and I’ve hit this frequently. It’s like it’s caching some global free count that gets desynced and freaks out. Very annoying.

              1. 4

                Nope, looked into this: there are no snapshots besides the standard few and backup is disabled. We’ve even monitored if any transient snapshots are being created during the CI run but didn’t see anything suspicious.

              2. 2

                I was going to suggest maybe the 10% buffer that many filesystems keep, but that wouldn’t explain a behavior change once you log in.

                1. 1

                  The only way I know of to run out of space when you have plenty of space is to run out of inodes. If logging in fixes the problem, that suggests some per-user process such as a LaunchAgent is doing the fixing. And in fact there is a LaunchAgent that deletes caches (/System/Library/LaunchAgents/com.apple.cache_delete.plist), so perhaps you’ve somehow accumulated so much cache that you’ve run out of inodes? I can’t imagine what might be making so many cache files though.

                  1. 1

                    Thanks for the information, much appreciated. I am not familiar with the cache files you are talking about (need to do some research) but I can tell you what we do: we compile (e.g., with Apple Clang) a large number of files and there are quite a few intermediate files being created and deleted in the process by the build system.

                    So we definitely don’t create any cache files ourselves but it’s possible Apple does automatically (maybe some kind of search indexing).

                    Another problem with the cache file theory is that we monitored disk space as the build progresses and there is always plenty of free space reported. But it’s possible APFS somehow does not count cache files towards usage.

                    1. 1

                      You monitored disk space, but did you monitor inode count? If you have plenty of disk space but are getting ENOSPC then that suggests you’ve run out of inodes.

                      1. 2

                        Yes, we need to look into this, thanks. The strange thing is, as I mentioned, it’s temporary, so maybe it can’t recycle inodes fast enough?

                2. 45

                  The qmark-noglob feature, introduced in fish 3.0, is enabled by default. That means ? will no longer act as a single-character glob.

                  Hell yeah! This is gonna make pasting links into the shell so much easier ^^

                  1. 30

                    I think I’ve used ? as a shell metacharacter on purpose about twice in my entire life. I strongly agree that dropping it is nice.

                    1. 3

                      I wonder if it would work well to use paste bracketing for something like this. If you are pasting something that is mostly text but has a wildcard or two, auto-escape it. I can also imagine similar features like if you type a quote, then paste, it will auto-escape any quotes in the pasted text.

                      It would probably do what you want most of the time but would probably have false positives commonly enough that it would be negative overall. But maybe there are specific cases that are clear enough to be handled (like pasting something that starts with “https:” or immediately after a single quote).

                      1. 11

                        I can also imagine similar features like if you type a quote, then paste, it will auto-escape any quotes in the pasted text.

                        Fish 3.7.1 already does this.

                        1. 2

                          Kitty just asks you every time.

                          But I think a control sequence which pastes raw would also work.

                        2. 2

                          Wouldn’t many URLs still contain the & character which would incorrectly break off the URL part-way and spawn background jobs that would almost certainly fail?

                          1. 7

                            No, echo a&b works for me in fish, as does echo a&b://%20?q. I think fish might require a space before and/or after the & for it to create a background process.

                            In bash, echo a&b does not work.

                        3. 2

                          This gives an example of a useful use of cat:

                          { foo; bar; cat mumble; baz } | whatever
                          

                          It’s always bugged me that bash can replace $(cat file) with $(< file) to read a file without any subprocesses, but if you just type < file at the prompt it doesn’t do anything (besides error if the file doesn’t exist).

                          1. 6

                            It’s a shame UNIX wasn’t designed as a capability system. In a capability system, the shell would be responsible for opening files and passing them as file descriptors to each invoked command. UNIX has an awkward mix where the shell is responsible for passing in three file descriptors but other files are expected to be opened by the command itself.

                            In my ideal world, programs would start in capsicum mode and you’d get a parallel array alongside argv that told you which file descriptor numbers any file or directory arguments had been passed as. If you did cat foo then argv[1] would be the string “foo” but fdv[1] would be 3, and the shell would pass in an open file handle for foo as file descriptor 3. You’d also then be able to tweak the command in the shell, to pass a read-only file descriptor instead of a read-write one (since cat doesn’t need write access to the input file).

                            Once you’re in that model, it’s easy to have a consistent concept that the shell is always the thing that does file name to file descriptor mappings and you can make that much more consistent in the shell syntax.

                            If you were to retrofit this to existing systems, you’d probably manage it via an environment variable rather than an additional argument to main.
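                            A toy sketch of that retrofit. The FDV variable name and its format are invented here purely for illustration:

                            use std::env;
                            use std::fs::File;
                            use std::io::{self, Read};
                            use std::os::fd::FromRawFd;

                            // Hypothetical convention: FDV="1=3,2=4" means the shell pre-opened
                            // argv[1] as fd 3 and argv[2] as fd 4 before exec'ing us.
                            fn preopened(argn: usize) -> Option<File> {
                                let fdv = env::var("FDV").ok()?;
                                for pair in fdv.split(',') {
                                    let (idx, fd) = pair.split_once('=')?;
                                    if idx.parse::<usize>().ok()? == argn {
                                        let fd: i32 = fd.parse().ok()?;
                                        // Safety: we trust the shell to have handed us a live descriptor.
                                        return Some(unsafe { File::from_raw_fd(fd) });
                                    }
                                }
                                None
                            }

                            fn main() -> io::Result<()> {
                                // cat-like: read argv[1] via the pre-opened descriptor if present.
                                let mut input = preopened(1).expect("no pre-opened fd for argv[1]");
                                let mut buf = String::new();
                                input.read_to_string(&mut buf)?;
                                print!("{buf}");
                                Ok(())
                            }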

                          2. 5

                            Hm, it appears this behavior is a GNUism. On macOS I can run cp -R src/ dest and it copies the files inside src, like rsync.

                            1. 3

                              That works the same on Linux, as opposed to cp -R src dest or cp -R src/* dest.

                              1. 5

                                No it doesn’t.

                                Linux:

                                > cp -R src/ dest
                                > eza -aT dest
                                dest/
                                └── src/
                                    ├── .hidden
                                    └── unhidden
                                

                                It only produces dest/.hidden if the dest folder didn’t exist yet, as compared with macOS, where the trailing slash on src/ makes cp copy the contents of the directory rather than the directory itself.

                                1. 1

                                  Right, I blame my lack of attention on skipping my pills today. Thanks for the clarification!

                            2. 4

                              I think it’s pretty neat that the cause of this bug ultimately turned out to be the sole unsafe block in the entire program.

                              1. 3

                                This is very bad for web openness and long term accessibility, much like the Rails browser version guard.

                                1. 11

                                    Why? Shorter expiry times don’t require any new browser support, 90-day certificates will continue to be available, shorter certs are opt-in, and other TLS certificate providers are available (even if your parameters are “free” and “supports ACME”).

                                  1. 17

                                    It puts a lot more centralized dependency on LetsEncrypt. If your site has to get a new cert every 6 days and something happens to LE, your site is now unusable without intervention.

                                      It’s not out of the realm of possibility that an attacker could force LE’s issuing/validating servers offline for 6 days (which is also the longest possible expiry in this scenario; there could be sites that have to renew the same day the outage starts).

                                    1. 9

                                      That explains why it introduces potential fragility but not why 6 day certs are bad for the open web and accessibility.

                                      1. 5

                                        The ACME client can implement multiple issuers and do some kind of load balancing or fallback between them, should one of them be inaccessible, like Caddy does.

                                  2. 4

                                    I get why for the browser guard, but why for this? If regular 90 day certificates are already working, then there is absolutely no reason that a 6 day one wouldn’t. Sure you might need to do some work on the backend to sort out the automation (though that is hopefully already being done with 90 day certs), but for the client side this should not matter whatsoever.

                                      Let’s Encrypt is great. HTTPS should not be reserved for companies which can afford to pay for certificates, which was what happened before, and it should not be difficult to set up, either. I don’t care what content you’re serving; plain HTTP (and others) should just not be used, it’s a big tracking and attack vector.

                                    1. 1

                                      The article explained why they want to start offering 6-day certificates. It is because if your private key leaks then anyone can impersonate your site until the certificate expires, unless you revoke the certificate with the leaked key. And certificate revocation is not reliable.

                                      I accept that certificate revocation is somewhat unreliable, but I will admit I am puzzled about just who it is that loses their private keys so frequently that they need a maximum of a 6-day period in which the leaked key could be used.

                                      1. 13

                                          I don’t get how “so frequently” comes into it. Even if you lose your key very, very rarely, don’t you still care about how long it could be misused?

                                        1. 2

                                          Any individual doesn’t, but the whole web does. And if let’s encrypt loses trust, then the whole web suffers.

                                            One key per site adds up to hundreds of millions of keys, and those hundreds of millions of keys do pose a risk to trust in Let’s Encrypt as a whole.

                                        2. 7

                                          You only need to lose your private keys once for the validity duration to matter.

                                      2. 3

                                        Unless you consider less than a year (the longest expiration in typical use, AFAIK) to be “long term”, I don’t get your point.

                                      3. 1

                                          This is a good writeup, but I’m a bit surprised at it being described as “This is the craziest kernel bug I have ever reported”. The cause is very simple (a nonatomic write where an atomic write was needed), and it’s hard to trigger, even harder to exploit.

                                        1. 32

                                          So many good changes in this edition, but I think my favorite is the block tail expression temporaries change, because it means you can finally wrap an expression in a block to drop any temporaries in that expression. This means code like

                                          match process_value(mutex.lock().unwrap()) {
                                              Ok(()) => { do_stuff(); }
                                              Err(e) => { log_err(e); }
                                          }
                                          

                                          can be fixed to stop holding the lock across the whole match using a simple block, like

                                          match { process_value(mutex.lock().unwrap()) } {
                                              Ok(()) => { do_stuff(); }
                                               Err(e) => { log_err(e); }
                                          }
                                          

                                          instead of having to extract the scrutinee out into a separate variable.

                                          1. 11

                                             Woooah, I didn’t grok that until you pointed it out. So this is at least a new option for working around https://fasterthanli.me/articles/a-rust-match-made-in-hell ?

                                            Edit: Indeed, this small example is a deadlock under 2021 Edition but not under 2024 Edition.
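                                             Something along these lines (a minimal sketch; it deadlocks or panics when built with edition 2021 and runs fine with 2024):

                                             use std::sync::Mutex;

                                             fn main() {
                                                 let mutex = Mutex::new(1);
                                                 // 2021: the temporary MutexGuard in the block's tail expression
                                                 // lives to the end of the match, so the lock() in the arm
                                                 // deadlocks (or panics). 2024: the guard drops at the block's end.
                                                 match { *mutex.lock().unwrap() } {
                                                     1 => {
                                                         let _guard = mutex.lock().unwrap();
                                                         println!("relocked without deadlock");
                                                     }
                                                     _ => {}
                                                 }
                                             }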

                                            1. 3

                                               YAAAAS!! The release notes prominently mention “if let temporary scope” (avoiding the same issue for if let by default, even without having to use a curly block), but it’s very nice to hear that there’s also a general solution for any expression now.

                                            2. 27

                                              It’s great that Nix works for you (and others) but in my experience Nix has got to be the single WORST example of UX of any technology on this planet. I say this as someone who has collectively spent weeks of time trying very hard (in 2020, 2022, and 2023) to use Nix for developer environment management and been left with nothing. I also used NixOS briefly in 2020.

                                              Trivial tasks like wanting to apply a custom patch file as part of the flake setup I could not figure out after hours of documentation reading and asking for help on IRC and Matrix afterwards. Sure, if I clone the remote repo, commit my patch file, and then have Nix use that as the package, it’s fine… but that’s a lot of work to replace a single patch shell command, and now I have to run an entire git server, or be forced to use a code forge, and then mess around with git settings so it doesn’t degrade local clones for security reasons.

                                              Nix’s documentation is incredibly verbose in the most useless places and also non-existent in the most critical. It is the only time I’ve ever felt like I was actively wasting my time reading a project’s documentation. If you already completely understand Nix then Nix documentation is great, for anyone else… I don’t know.

                                              Last I checked, flakes were still experimental after however many years it’s been, meaning the entire ecosystem built on top of them is unstable. They aren’t beta, or even alpha. A decision needs to be made on whether flakes come or go (maybe it has been now) because having your entire ecosystem built on quicksand doesn’t inspire confidence to invest the (considerable) time to learn Nix.

                                              Manually wrangling outdated dependencies when you work with software that is on a faster release cycle than nixpkgs checkpoints is painful, and unstable nixpkgs is just that: unstable and annoying to update. Also, cleaning orphaned leaves and the like is not trivial and has to be researched, versus just being a simple-to-understand (and documented) command.

                                              Things like devshell, nix-shell or whatever it’s called (I cannot remember anymore) are but various options one has to explore to get developer environments which are, for some reason, not a core part of Nix (since these 3rd-party flakes exist in the first place). Combine this with all the other little oddities for which there exist multiple choices, along with the uselessness of Nix’s documentation (i.e. you cannot form an understanding of Nix), and you’re suddenly in a situation where you’re adopting things whose consequences you have no idea of. Any problem you run into must be solved with either luck (that someone else has encountered it and you find a blog post, a GitHub issue, etc.) or brute-force guesswork; stabbing in the dark.

                                              The Nix language syntax is unreadable and the errors it outputs are undecipherable, to the point of the community making entire packages to display actually human-readable errors, or pages-long tutorials on how to read them.

                                              I wish I had been successful with Nix, clearly some other people are. Nix worked for me in trivial cases (and it is great when it does!) but the second I wanted to do something “non-trivial” (i.e. actually useful) it was like driving at 100 km/h into a brick wall. Maybe things will improve in the future but until then Podman and OCI containers or microvms are far, far superior to anything Nix can provide in my experience. I will die on this hill.

                                              Yes, they are not completely hermetic like Nix is, but I’ve never encountered a situation where you need a completely hermetic environment. I have no doubt these situations exist but I would (as an educated guess) argue they are needed far less often than people think.

                                              1. 9

                                                In my experience, happy nix user, nix should only be used if you have had to fight with the other package managers in anger to get something impossible done. You’ll only be motivated to push past the pain of learning it if you have enough anger about whatever you are already using.

                                                If you don’t have that anger it’s hard to push past the Nix learning curve. Which is a shame because it genuinely is a better package management/build/infra-as-code solution.

                                                1. 5

                                                  I guess I don’t see how you would patch an existing package in, say, Debian or Arch more easily than forking it and maintaining a patch…

                                                  Heck, I couldn’t even figure out how to make deb packages; Arch was much easier but still a huge pain. With NixOS I can apply patches to anything (albeit I am not using flakes, just patching nixpkgs where needed in my fork or using package overrides in my config). I’ve never felt quite this powerful at modifying core system components w/o breaking something or having to do a disk backup rollback.

                                                  Having the nixpkgs repo is better than the documentation IMO: just grep and look at usages. This doesn’t cover flakes, but I find the documentation and CLI help/man pretty good for flakes; and there are good examples in many projects to pull from.

                                                  Nothing compares to using home-manager for dotfiles and user-level config; I will never go back from that. There is no drift, all machines stay in sync, with everything in versioned files that can be modularized for re-use.

                                                  1. 4

                                                    An example of how to patch a package:

                                                      fixed-libuvc = pkgs.libuvc.overrideAttrs (final: attrs: {
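                                                        # Append to the package's fixupPhase: rewrite the doubled
                                                        # "$out//" prefix in the generated pkg-config file to "/".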
                                                        fixupPhase = ''
                                                          ${attrs.fixupPhase or ""}
                                                          sed -i "s#$out//#/#g" $out/lib/pkgconfig/libuvc.pc
                                                          '';
                                                      });
                                                    

                                                    https://github.com/NixOS/nixpkgs/issues/281478

                                                    1. 1

                                                      Pretty much pure syntax sugar and a custom DSL. I bet that sed is fed to a shell.

                                                    2. 3

                                                      Trivial tasks like wanting to apply a custom patch file as part of the flake setup

                                                      That’s not a trivial task. Flakes do not support patching the flake source. Nix makes it trivial to patch packages as they’re built, but patching Nix source code is not simple. More generally, if you want to patch Nix source code (whether it’s flakes or whether it’s via fetchTarball) you need to use IFD (Import From Derivation). https://wiki.nixos.org/wiki/Nixpkgs/Patching_Nixpkgs has a demonstration of how to use this to patch nixpkgs itself. In the case of an arbitrary flake, if the flake has a default.nix and importing that gets you what you want then you can do the exact same thing that URL does to patch it. If you need access to the patched flake’s outputs (e.g. if you’re patching a nixosModule) then I would look at using flake-compat to get at the outputs of the patched flake.

                                                      1. 2

                                                        The funniest thing to me is that 50% of people say “avoid flakes” and half of the rest say “I only managed to get something done in Nix because of flakes” (me included).

                                                        I’m still not sold overall.

                                                        1. 1

                                                          wanting to apply a custom patch file as part of the flake setup

                                                          I wanted to have a flake with one package at a different version than the release (or whatever), which was also super annoying.

                                                          brute force guesswork; stabbing in the dark

                                                          I thought it should be doable to, for instance, build a Node project; it turns out there are half a dozen unmaintained projects for this and no documentation, seemingly because an experienced Nix person can whip this out in two seconds so nobody bothers to document it.

                                                          but I’ve never seen nor encountered a situation where you need a completely hermetic environment

                                                          100% true

                                                          I think most people are better off using something like Nx or Buck to build their stuff.

                                                          1. 1

                                                            Yeah I dual-boot NixOS and Arch. For whatever I can use NixOS for without much trouble, I prefer it. However, it’s nice to be able to bail out into Arch when I run into something that will clearly take many more hours of my time to figure out in NixOS than I desire (lately, developing a mixed OCaml/C++ project). I symlink my home subdirectories so it’s easy to reboot and pick up where I left off (there are definitely still dev tools in 2025 that hate symlinked directories though, fair warning to anybody else who wants to try this).

                                                            1. 1

                                                              I think flakes complicated things a lot. I started using Nix pre-flakes and did not find it hard to pick up. The language is pretty familiar if you used Haskell or a comparable functional language at some point. The language, builders, etc. clicked for me after reading the Nix pills.

                                                              Flakes are quite great in production for reproducibility (though Niv provided some of the same benefits), but they add a layer that makes a lot of people struggle. They remove some of the ‘directness’ that Nix had, making it harder to quickly iterate on things. They also split up the docs and the community, and made a lot of historical posts/solutions harder to apply.

                                                              Trivial tasks like wanting to apply a custom patch file as part of the flake setup

                                                              Could you elaborate on what you mean by applying a custom patch? Do you want to patch an external flake itself, or a package from nixpkgs/a flake? Adding a patch to a package is pretty easy with overrideAttrs; I do this all the time and it’s a superpower of Nix, compared to other package managers where you have to basically fork and maintain packages.

                                                              1. 1

                                                                Yea, I agree. I investigated Nix a year or two ago when flakes were just starting to become popular and it was a total mess to figure out. Anything outside of the ordinary was a rabbit hole.

                                                                I think a better solution to the same problem is an immutable OS with distrobox. That solution leverages tech most of us already understand without the terrible programming language and fragmented ecosystem.

                                                                I ended up moving away from that setup because I need to actually work on projects instead of tinkering with my setup but I wrote a post about it: https://bower.sh/opensuse-microos-container-dev

                                                              2. 6

                                                                One thing that took me a long time to realize is that home-manager’s support for setting defaults is far better than nix-darwin’s, not sure if you’re aware.

                                                                1. 15

                                                                  The various Nix modalities (home-manager, nix-darwin, flakes, NixOS, whatever installing Nix with the determinate-systems installer gives you) are, I would say, the primary reason the Nix documentation has a reputation for being awful. Any given user will only use one modality, but the documentation tries to either serve all of them individually or be so general that it applies to all, and is thus incomprehensible unless you know how the modalities are implemented on top of Nix internals.

                                                                      1. 1

                                                                        I was not aware. Is there even an advantage to using nix-darwin at that point?

                                                                        1. 2

                                                                          Totally! They complement each other well. There are many Mac-specific things HM doesn’t do.

                                                                          1. 2

                                                                            nix-darwin + home-manager is a great setup, it means you can configure system-wide things in nix-darwin and user-specific things in home-manager, and it works exactly the same way that nixos + home-manager does, which means you can have a single configuration that’s shared across darwin and nixos machines if you want (since nix-darwin takes pains to match nixos modules whenever it can).

                                                                      2. 3

                                                                        There are some iterator combinators such as count that take an additional Self: Sized bound². But because trait objects are themselves sized, it all mostly works as expected:

                                                                        This isn’t quite right. Trait objects are not themselves sized. The subsequent code sample works because of the impl<I> Iterator for &mut I where I: Iterator + ?Sized implementation. The count method isn’t being directly invoked on the trait object, it’s being invoked on the &mut, which is a concrete iterator implementation.

                                                                        You can prove this yourself by making your own custom Iterator implementation that provides a custom implementation of the count() method, and then invoking it via a &mut dyn Iterator (Rust Playground). If you do that, your custom count() won’t be called; the default implementation on &mut I will be used instead.
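                                                                        A condensed version of that experiment, using a hypothetical Three iterator whose custom count() returns a sentinel value:

                                                                        // A tiny iterator yielding 3 items, with a custom count() that
                                                                        // returns a sentinel so we can see which implementation runs.
                                                                        struct Three(u32);

                                                                        impl Iterator for Three {
                                                                            type Item = u32;
                                                                            fn next(&mut self) -> Option<u32> {
                                                                                if self.0 < 3 { self.0 += 1; Some(self.0) } else { None }
                                                                            }
                                                                            fn count(self) -> usize
                                                                            where
                                                                                Self: Sized,
                                                                            {
                                                                                999 // sentinel: "my custom count ran"
                                                                            }
                                                                        }

                                                                        fn main() {
                                                                            // Direct call on the concrete type: the custom count() runs.
                                                                            assert_eq!(Three(0).count(), 999);

                                                                            // Through &mut dyn Iterator: the blanket impl for &mut I is the
                                                                            // concrete iterator here, so its default count() runs instead.
                                                                            let mut it = Three(0);
                                                                            let dyn_it: &mut dyn Iterator<Item = u32> = &mut it;
                                                                            assert_eq!(dyn_it.count(), 3);
                                                                        }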

                                                                        1. 1

                                                                          The count method isn’t being directly invoked on the trait object, it’s being invoked on the &mut which is a concrete iterator implementation

                                                                          Is this considered a bug? I understand how and why it works, but the code in your playground link is really counter-intuitive; it’s clearly a footgun. I wonder if this could be fixed?

                                                                        2. 1

                                                                          Semi-unrelated question:

                                                                          Why IntoIterator instead of Into<Iterator<…>>?

                                                                          1. 6

                                                                            IntoIterator has no type parameters. This means calling foo.into_iter() is unambiguous as to the type. Using Into<Iterator<…>> would allow for a single type to implement conversions into countless different iterators, and so the desugaring of for x in foo { would become ambiguous as to what concrete iterator type is to be used.
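                                                                            A sketch of the ambiguity, using two hand-written Into impls on a hypothetical Bag type:

                                                                            use std::ops::Range;
                                                                            use std::vec::IntoIter;

                                                                            struct Bag;

                                                                            // Nothing stops a type from converting into two different iterators…
                                                                            impl Into<IntoIter<u32>> for Bag {
                                                                                fn into(self) -> IntoIter<u32> { vec![1u32, 2].into_iter() }
                                                                            }

                                                                            impl Into<Range<u32>> for Bag {
                                                                                fn into(self) -> Range<u32> { 0..2 }
                                                                            }

                                                                            fn main() {
                                                                                // …so only an explicit target type disambiguates the call:
                                                                                let a: IntoIter<u32> = Bag.into();
                                                                                let b: Range<u32> = Bag.into();
                                                                                assert_eq!(a.count(), 2);
                                                                                assert_eq!(b.count(), 2);
                                                                                // let c = Bag.into(); // error: type annotations needed
                                                                            }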

                                                                            1. 1

                                                                              No less ambiguous than any other use of an iterator though? x’s type would be resolved at use or with a type annotation on the variable declaration.

                                                                              1. 4

                                                                                Even a type annotation on the variable declaration doesn’t tell you what the iterator is, just what the item type is. And since Iterator is a trait and not a type you can’t actually write Into<Iterator> at all, you’d have to use a concrete implementation of Iterator as your conversion, and so there’s no way to put a type annotation on the .into() call in a for … in desugaring to pick the iterator type. There’s nothing you can write that says “force type inference to pick some type that conforms to a given trait”. You can require the given trait by using something like fn assert_is_iterator<T: Iterator>(x: T) -> T { x } but writing assert_is_iterator(foo.into()) is not sufficient to resolve the type of the into() call.

                                                                                Meanwhile, normal uses of iterators are not ambiguous in the slightest. Any type that implements IntoIterator can only be converted into a single iterator type (because the trait has no type parameters and so can only be implemented once on a given type). So if you know the type of foo then you know the type of foo.into_iter().

                                                                          2. 17

                                                                            sounds like the kernel offers syscall ABI stability then?

                                                                            1. 17

                                                                              OpenBSD requires syscalls to go through libc. I would assume if you statically link libc then you need to recompile after kernel updates, though I’m finding it difficult to get a definitive answer with a very quick search.

                                                                              1. 4

                                                                                I don’t think it’s literally ABI changes (at least, almost never), but the backward compatibility guarantee is at the libc layer not the syscall layer. In other words, it’s considered OK to change syscall behavior if you put backward compatibility logic in libc for it. In any case, it’s not just a “guideline”, which is why Go switched to using libc rather than direct syscalls on BSD/MacOS.

                                                                                1. 18

                                                                                  The article describes linking against libc.a, a static library. That means if the syscall behavior changes, the backward compatibility logic in libc will not affect already-built executables. So those executables you have lying around will now have undefined behavior, unless the syscall ABI is stable.

                                                                                  1. 9

                                                                                    Yes, AFAIK in BSD world you shouldn’t static link libc unless your binary is part of the main tree, because the syscalls aren’t stable across major versions. There are “version symbols” to make the shared library work across versions, but no way to do that with static linking. So, definitely not a “guideline”.

                                                                                    1. 4

                                                                                      It depends on which BSD. They have different stability and compatibility guarantees and different technical mechanisms to implement those guarantees. FreeBSD, OpenBSD, and macOS are as different from each other as they are from Linux.

                                                                                      1. 1

                                                                                        Oh, interesting — I haven’t paid attention to OpenBSD for a while. So yeah, it’s just FreeBSD and MacOS where I’ve run into this. (And, I believe, Solaris, but that was long ago.)

                                                                                        So given all those BSDs and Windows, I had filed the “syscalls never break” rule as strictly a Linux thing.

                                                                                        1. 9

                                                                                          So yeah, it’s just FreeBSD and MacOS where I’ve run into this

                                                                                          FreeBSD has strong syscall compatibility guarantees. If a system call changes in an incompatible way, a new one is added and the old one renamed. The old one is gated behind a COMPAT_{version number} compile option in the kernel. The official kernel builds include everything from COMPAT_4 onwards (I think, possibly older) but if you’re building an appliance and know it will run only newer code then you can remove it.

                                                                                          Similarly, FreeBSD’s libc uses symbol versioning so that things that used old versions of system calls whose ABIs have changed (e.g. new versions of stat with newer fields) will call the syscall wrapper that calls the compat version, not the new one.

                                                                                          On recent FreeBSD, the system call wrappers are in libsyscalls, which libc links, so you can provide you own implementation of the system call layer (for example, in a sandboxed environment) but use all of the libc machinery.

                                                                                          macOS is completely different. libSystem is the stable system-call interface. If you want to talk to the kernel, you go via libSystem or you expect breakage. Apple reserves the right to change the system-call layer between minor revisions. A few years ago, they changed the signature of gettimeofday, which broke all Go programs on macOS (because Go implemented its own system-call layer, rather than using the supported one).

                                                                                          1. 1

                                                                                            Thank you for clearing that up! Of course FreeBSD has a sensible way to handle it, no surprise there. Though I’m now baffled as to why I thought this happened on FreeBSD. Maybe we were using a kernel that someone had “helpfully” changed the compat setting on to save space or something.

                                                                                            1. 2

                                                                                              Non-syscall ABIs can change across major revisions. For example, device ioctls may change (though FreeBSD ioctl numbers include the size, so usually there’s a new one added rather than an old one changing). This used to happen more but now the project uses old jails on new systems for package builds. This means that a lot of things end up with compatibility interfaces.

                                                                                  2. 12

                                                                                    Doesn’t OpenBSD specifically check syscalls to make sure they come from libc and not anywhere else? I may be misremembering.

                                                                                    1. 2

                                                                                      I don’t know about OpenBSD, but macOS and Windows both also have an equivalent policy (libSystem.dylib and ntdll.dll, respectively) and both have syscall numbers that are fairly unstable.

                                                                                      My understanding is that, for OSes other than Linux, it’s a common decision to treat the kernel and libc as two pieces of the same codebase which just happen to live on opposite sides of a kernelspace/userspace split, but still share an enum and care more about keeping the enum members alphabetized than about keeping their raw integer representations stable.

                                                                                      1. -6

                                                                                        This article aged like milk

                                                                                        1. 14

                                                                                          Since earlier today? Your milk should be lasting longer than that.

                                                                                        2. 22

                                                                                          I think the core assumption made by this post, namely

                                                                                          Panics [are] Bad For Systems Libraries

                                                                                          is entirely unfounded and is missing the point. In fact, taking the assumption at face value leads to a pretty egregious example of Goodhart’s Law.

                                                                                          That a panic is reachable is not in itself a problem. The panic is a symptom, and it’s a symptom of the program having entered a state in which some important invariant has been violated; so important that execution cannot reasonably continue.

                                                                                          Yes, it’s often possible to design the code in such a way that you can prove to the compiler that invariant violations are not possible. If you can do that, then great: but by nature, system programming often involves dealing with low-level details that can’t be so easily reasoned about during compilation. By chasing the ‘no panic’ goal to the point of ensuring that the library doesn’t even link against the panic handler, you’re encouraging:

                                                                                          • a mode of development in which invariants are deliberately left unchecked
                                                                                          • workarounds that amount to panicking-but-worse, such as branching to an infinite loop
                                                                                          • ‘sentinel’ error values that silently go unchecked (i.e: returning NULL or a negative number, as is overly common in C)

                                                                                          ‘Don’t even link to the panic handler’ is a destructively wide brush to paint with and not a goal worth pursuing for a serious project. State your priors: document what environment your library is expecting it to be born into and what invariants it expects to be upheld. And if those invariants are violated, for god’s sake, panic!
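                                                                                          For illustration, a tiny sketch of that philosophy: document the invariant as a precondition and assert it, rather than contorting the code to be provably panic-free:

                                                                                          /// Precondition (documented invariant): `len <= buf.len()`.
                                                                                          fn fill_prefix(buf: &mut [u8], len: usize, byte: u8) {
                                                                                              // If the caller violated the contract, execution cannot
                                                                                              // reasonably continue; make the violation loud.
                                                                                              assert!(len <= buf.len(), "fill_prefix: len {len} > {}", buf.len());
                                                                                              for b in &mut buf[..len] {
                                                                                                  *b = byte;
                                                                                              }
                                                                                          }

                                                                                          fn main() {
                                                                                              let mut buf = [0u8; 4];
                                                                                              fill_prefix(&mut buf, 2, 0xff);
                                                                                              assert_eq!(buf, [0xff, 0xff, 0, 0]);
                                                                                          }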

                                                                                          libc::printf(MSG.as_ptr() as *const _)

                                                                                          And checking on Godbolt, we see the small binary that confirms that this library is indeed no-panic:

                                                                                          As an aside, I find this quite funny: of course it’s going to be ‘no-panic’, because you’re deferring responsibility to the system’s libc. What’s the chance that your system libc doesn’t include any panics/aborts/crash paths? Zero, I’d bet. Invariant violations are a fact of interacting with an outside world that does not want to conform to the nice rules and abstractions we paint in code.

                                                                                          1. 7

                                                                                            I don’t agree that the post is missing the point.

                                                                                            Firstly, is panicking even a correct thing to do for a library? What happens if the user of the library is a C program, which has no stack unwinding?

                                                                                            Secondly, size matters in embedded. I’m writing C++ without exceptions for a job, and I can’t use std::vector there either.

                                                                                            1. 6

                                                                                              Firstly, is panicking even a correct thing to do for a library? What happens if the user of the library is a C program, which has no stack unwinding?

                                                                                              Deeepends. Panicking in general is a last resort and also does not necessarily include unwinding. You can compile the library with panic=abort, which is a common choice nowadays.

                                                                                              Secondly, size matters in embedded. I’m writing C++ without exceptions for a job, and I can’t use std::vector there either.

                                                                                              Side note: Khalil Estell makes good arguments that using exceptions on embedded hardware can make sense for size reasons. https://www.youtube.com/watch?v=bY2FlayomlE

                                                                                              It’s something I appreciate the C++ community for; Rust is not at that level of debate yet.

                                                                                              1. 6

                                                                                                The size argument doesn’t apply, anyone who cares about that can set panic=abort.

                                                                                              2. 5

                                                                                                While I agree with you that the position taken by this blog post is missing the point, I think there is an important distinction to be made between panics and assertions.

                                                                                                Assertions check for programming mistakes, in correct code assertions should never fail. Assertions are an extremely important part of writing safety-critical code, as they expose divergence between the mental model of the programmers and the model actually implemented by the code.

                                                                                                Panics on the other hand may be hit even if the code is perfect, due to violation of external constraints that make it incorrect/impossible for the program to continue execution.

                                                                                                1. 9

                                                                                                  I think in systems where panic-freedom matters, that distinction does not exist. Functions in Rust declare their panic conditions as preconditions to avoid, and panics are very often triggered by an internal assertion. You will rarely see panic! in code directly.

                                                                                                  I assume you have some experience in safety-critical work, so more for the casual reader: when talking about safety-critical code, it’s also untrue that those systems are never allowed to crash or to use panic as an error strategy. The important part is that this strategy is declared, checked and handled and does not happen in an uncontrolled fashion. You can totally have a supervisor (which is then itself not allowed to crash) that restarts your task. The problem with panics is that they are hidden control flow, hard to track in full completeness, and harder to test for than normal code, so panic-free programming may make things easier.
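                                                                                                  A minimal sketch of that supervisor pattern (assuming the default panic=unwind):

                                                                                                  use std::panic;

                                                                                                  // The task may panic as a declared error strategy; the supervisor
                                                                                                  // catches the unwind and restarts it.
                                                                                                  fn task(run: u32) {
                                                                                                      if run == 0 {
                                                                                                          panic!("invariant violated on first run");
                                                                                                      }
                                                                                                      println!("run {run} completed");
                                                                                                  }

                                                                                                  fn main() {
                                                                                                      for run in 0..3 {
                                                                                                          if panic::catch_unwind(|| task(run)).is_err() {
                                                                                                              eprintln!("task crashed; restarting");
                                                                                                          }
                                                                                                      }
                                                                                                  }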

                                                                                                  1. 5

                                                                                                    Any external constraint that fails should lead to an error being reported gracefully, not a crash.

                                                                                                  2. 4

                                                                                                    I think you are attacking a straw man. All the evidence in the post suggests that the author is just as concerned about avoiding those kinds of bugs as you are.

                                                                                                    I’ve belatedly realised that there’s a strong reason to prevent panics in the kind of library the author is working on, though it’s unstated in the article: when working with data from the network, if the code can panic then that’s probably a denial-of-service security vulnerability.

                                                                                                    The security advantage of no-panic Rust is that it forces programmers to use fallible APIs instead of hidden-panic APIs, so they can’t accidentally overlook a crash DoS bug. This leads to the starting point of the article: how to expose static invariants to the compiler so that it can see that fallible APIs cannot fail and drop the dead code for the never-taken branches.
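                                                                                                    A small sketch of the idea: encode the invariant in a type so the failing branch is statically dead:

                                                                                                    use std::num::NonZeroUsize;

                                                                                                    // chunk can never be zero, so the division has no zero-divisor
                                                                                                    // panic path; the compiler can drop that check entirely.
                                                                                                    fn chunk_count(len: usize, chunk: NonZeroUsize) -> usize {
                                                                                                        len.div_ceil(chunk.get())
                                                                                                    }

                                                                                                    fn main() {
                                                                                                        let chunk = NonZeroUsize::new(8).expect("nonzero");
                                                                                                        assert_eq!(chunk_count(20, chunk), 3);
                                                                                                    }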

                                                                                                    1. 2

                                                                                                      Your entire response makes no sense to me, given abort fulfills all the requirements you mention without the cost of an unwinding runtime, which is simply not needed for OP’s use case but, if necessary, can trivially be reintroduced by a consumer application by calling panic.

                                                                                                    2. 4

                                                                                                      This is honestly useful, not ‘cause I’m ever going to write code this way (I hope), but to see what is necessary to write code that is guaranteed panic-free. If you can get over the “do we need this, really?” feeling then it’s a fun ride.

                                                                                                      In general I think it’d be more reasonable to solve these problems with a custom panic handler, though they say in the footnotes:

                                                                                                      …we can mitigate this code size overhead by writing our own panic handler, which we could engineer to be much smaller than the std one. This does address the code size concern, but it does not compose well, as there can only be one panic handler for an entire binary, so it doesn’t make sense for a library to provide one.

                                                                                                      In which case I would like to suggest that the choice of panic handler and the size of its generated code is not the library’s problem. Putting lots of work in to understand the problem is reasonable; doing it to prevent 300 kb of panic code from being generated in the lib, when the application will include the exact same code, seems a bit silly.

                                                                                                      1. 3

                                                                                                        I got the impression from the first paragraph that they are thinking of writing libraries in Rust that can link into non-Rust programs without dragging in lots of unwanted support code.

                                                                                                        1. 2

                                                                                                          So either the Rust library is providing its own panic handler, in which case it can choose to use abort instead of the 300k of “polite” panic code, or it isn’t, in which case it’s not the library’s problem. No?

                                                                                                          I have been doing a lot of #[no_std] embedded code lately, so I know that can be very very tiny. But I haven’t tried linking a Rust library into a C program. I’m not sure how it works if you want to use std in a library but not link in std panic support code. Presumably a similar problem to wanting to use the C++ stdlib in a library but not link in the exception support code?

                                                                                                          1. 4

                                                                                                            Libraries aborting is also considered bad!

                                                                                                            1. 2

                                                                                                              If a library finds its internal state has gone sideways, I don’t know what it can properly do other than abort, or I guess call a hook the client provides that is its equivalent of a panic handler. The point of panicking is that the code has lost the plot and it’s not safe to continue.

                                                                                                              1. 3

                                                                                                                But UB should not have happened, so unless the “panic” was caused by cosmic rays flipping bits that shouldn’t be flipped, it absolutely is perfectly sane and sound to simply return. Like, this can panic: arr[i];. If we replace it with arr.get(i).ok_or(MyErr::Unrecoverable)?;, surely that should be fine? If the index value we got is out of bounds, we can’t continue with whatever we were doing with that array, but that doesn’t mean that there is nothing we can do at all!
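                                                                                                                 Spelled out as a sketch, with MyErr standing in for whatever error type the caller defines:

                                                                                                                 #[derive(Debug)]
                                                                                                                 enum MyErr {
                                                                                                                     Unrecoverable,
                                                                                                                 }

                                                                                                                 // arr[i] would panic on out-of-bounds; get() makes the failure a
                                                                                                                 // value the caller can handle (or propagate) instead of a crash.
                                                                                                                 fn nth(arr: &[u32], i: usize) -> Result<u32, MyErr> {
                                                                                                                     arr.get(i).copied().ok_or(MyErr::Unrecoverable)
                                                                                                                 }

                                                                                                                 fn main() {
                                                                                                                     let arr = [1, 2, 3];
                                                                                                                     assert_eq!(nth(&arr, 1).unwrap(), 2);
                                                                                                                     assert!(matches!(nth(&arr, 9), Err(MyErr::Unrecoverable)));
                                                                                                                 }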

                                                                                                                1. 2

                                                                                                                  Panics are for inexplicable errors, meaning logic errors (or, sure, cosmic rays) where you don’t know anymore what you can do and what you can’t. Making every single API in your library potentially return an “unrecoverable” error makes the API incoherent and is just an encouragement to not check return values. Errors are part of the API and should be meaningful. Besides, if it returns Unrecoverable, meaning “I have no idea what’s going on anymore, please don’t call me again or you may get garbage back”, what do you expect the caller to do?

                                                                                                            2. 1

                                                                                                              Just compile the library with panic=abort. A Rust panic can’t cross an extern "C" boundary anyway, if you really want that you have to declare your function as extern "C-unwind" instead.
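                                                                                                               A minimal sketch of the boundary rules; "C-unwind" is what opts a function in to unwinding across the FFI boundary:

                                                                                                               // A panic reaching a plain `extern "C"` function aborts the process;
                                                                                                               // declaring it `extern "C-unwind"` permits the unwind to cross.
                                                                                                               #[no_mangle]
                                                                                                               pub extern "C-unwind" fn lib_entry(x: i32) -> i32 {
                                                                                                                   assert!(x >= 0, "negative input"); // may unwind out to the caller
                                                                                                                   x * 2
                                                                                                               }

                                                                                                               fn main() {
                                                                                                                   assert_eq!(lib_entry(21), 42);
                                                                                                               }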

                                                                                                        2. 2

                                                                                                          Interesting.

So what is actually wrong with a program that has an Rc live across await points? That is, if we disabled the static checks and just ran the program (as a C compiler would), could there ever be thread contention and undefined behaviour? It seems like only one thread would use the Rc at a time… so an Arc seems like overkill at first blush.

Can the cores of a CPU really be so out of sync that the refcount set before the work stealing is not reflected correctly in the core that picks up the work? How would that happen? Maybe if it was stealing the work back and had an earlier version of the future in cache? If so, you’d have similar issues with the other plain old data in the future… so it can’t be that.

                                                                                                          I can’t tell if this is a compiler false positive (completeness issue) or saving us from actual UB.

                                                                                                          1. 4

                                                                                                            The compiler simply enforces what the type promises, and in this case Rc says that it isn’t safe to send elsewhere. You could have some kind of RcSendable type constructible from an Rc with a refcount of one, or you could have some kind of structure containing Rcs that guarantees that they can’t be leaked to some other part of the program, and have them be Send, but making Rc itself Send in a limited set of circumstances would be difficult, for questionable gain.

                                                                                                            Keep in mind that it’s impossible to make a compiler that allows all correct programs but rejects all incorrect programs. So since Rust wants to reject all programs that have UB, it must also reject some programs that don’t have UB. Efforts are ongoing to increase the number of correct programs that Rust allows, but adding special logic to the compiler to allow fresh Rcs to be Send seems not worth it.
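Something like this hypothetical wrapper, for instance (a sketch only, not vetted for every edge case):

    use std::rc::Rc;

    // Constructible only from an Rc with no other strong or weak
    // references, so sending it cannot race with a clone left behind.
    struct RcSendable<T>(Rc<T>);

    impl<T> RcSendable<T> {
        fn new(rc: Rc<T>) -> Result<Self, Rc<T>> {
            if Rc::strong_count(&rc) == 1 && Rc::weak_count(&rc) == 0 {
                Ok(RcSendable(rc))
            } else {
                Err(rc)
            }
        }

        fn into_inner(self) -> Rc<T> {
            self.0
        }
    }

    // SAFETY (sketch): the wrapped Rc is the only reference, so no other
    // thread can touch the refcount while it is in transit; T: Send is
    // required because the pointee moves threads too.
    unsafe impl<T: Send> Send for RcSendable<T> {}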

                                                                                                            1. 4

So what is actually wrong with a program that has an Rc live across await points?

                                                                                                              The problem is elsewhere.

                                                                                                              An Rc automatically frees its contents. It uses a refcount which is adjusted when the Rc is cloned or dropped. If Rc were Send then you could clone it into multiple threads. The refcount adjustments don’t use atomic instructions so they are likely to go wrong and cause use-after-free errors.
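To make that concrete, a simplified sketch of the difference (not the actual std source):

    use std::cell::Cell;
    use std::sync::atomic::{AtomicUsize, Ordering};

    // Rc-style count: a non-atomic read-modify-write. Two threads cloning
    // at once can both read N and both write N + 1, losing an increment;
    // the count then reaches zero early and frees memory still in use.
    // (Cell is !Sync, which is exactly what makes Rc neither Send nor Sync.)
    struct RcStyleCount(Cell<usize>);
    impl RcStyleCount {
        fn incr(&self) {
            self.0.set(self.0.get() + 1);
        }
    }

    // Arc-style count: an atomic read-modify-write, so every increment is
    // observed exactly once no matter which thread performs it.
    struct ArcStyleCount(AtomicUsize);
    impl ArcStyleCount {
        fn incr(&self) {
            self.0.fetch_add(1, Ordering::Relaxed);
        }
    }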

                                                                                                              1. 1

If I send an Rc that I own to another thread, won’t it be moved (neither cloned nor dropped, so the refcount stays constant)?

                                                                                                                (And then my question was if every last clone of a given Rc was sent/moved as part of a single owned value, in this case a future, to another thread, mightn’t that be technically valid?)

                                                                                                                1. 5

                                                                                                                  You can’t statically prove it’s the last reference though, so the type system has to disallow the general case where it might not be the last reference. If you only need the one reference, perhaps don’t use an Rc?
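For example, a sketch of that last suggestion: take the value out while still on the owning thread, and send the value itself (send_inner is a made-up helper):

    use std::rc::Rc;
    use std::sync::mpsc::Sender;

    // try_unwrap succeeds only if this is the last strong reference;
    // otherwise it hands the Rc back and nothing is sent.
    fn send_inner<T: Send>(rc: Rc<T>, tx: &Sender<T>) -> Result<(), Rc<T>> {
        let value = Rc::try_unwrap(rc)?;
        tx.send(value).ok(); // T itself is Send, so this is fine
        Ok(())
    }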

                                                                                                                  1. 6

                                                                                                                    As an aside, it would be nice to have functions on Rc<T>/Arc<T> to go between the two types, but only if their reference count is 1.

                                                                                                                    impl<T> Rc<T> {
                                                                                                                        fn into_arc(self) -> Result<Arc<T>, Self> { ... }
                                                                                                                    }
                                                                                                                    
                                                                                                                    impl<T> Arc<T> {
                                                                                                                        fn into_rc(self) -> Result<Rc<T>, Self> { ... }
                                                                                                                    }
                                                                                                                    

That would avoid the need to reallocate the inner value. It seems like the current implementations of the two have exactly the same memory representation (with the small exception that AtomicUsize can have more stringent alignment requirements than usize, although it should be trivial to make sure the RcInner struct is aligned properly).

                                                                                                                    1. 4

                                                                                                                      I believe system allocators frequently align to at least 8 bytes, which means that in practice the RcInner should already end up aligned suitably for ArcInner.

                                                                                                                      Given that, you could implement this yourself. This assumes of course that RcInner and ArcInner don’t ever change layouts (or if they do, that the layouts stay identical).

    use std::rc::Rc;
    use std::sync::Arc;
    use std::sync::atomic::AtomicUsize;

    fn rc_to_arc<T>(mut rc: Rc<T>) -> Result<Arc<T>, Rc<T>> {
        // first, check to make sure we have unique ownership
        if Rc::get_mut(&mut rc).is_none() {
            return Err(rc);
        }
        // next, grab the raw pointer value
        let p = Rc::into_raw(rc);
        // check to make sure it's aligned for AtomicUsize.
        // this pointer points to the value, not the header,
        // but the header's size is a multiple of the AtomicUsize
        // alignment and so if the pointer is aligned, so is
        // the header.
        if (p as *const AtomicUsize).is_aligned() {
            // the memory layout of RcInner and ArcInner is identical.
            Ok(unsafe { Arc::from_raw(p) })
        } else {
            Err(unsafe { Rc::from_raw(p) })
        }
    }
                                                                                                                      

                                                                                                                      That said, this is a very niche use-case.

                                                                                                                  2. 2

                                                                                                                    It’s pessimistic because it can’t prove at compile time that those things are safe at run time.

                                                                                                                    Moving an Rc across threads isn’t necessarily the problem, it’s what happens to the Rc before and after, how it is shared.

                                                                                                                2. 3

                                                                                                                  You definitely can have “memory value is V0, CPU 1 writes V1, CPU 2 reads V0”, and you’re exactly right that applies to any memory location.

If you want to ensure that writes made by one CPU are visible to another with certainty, you need to issue instructions for that. Otherwise your write may sit in a cache or store queue not yet visible to the other CPU, or your read may come from a stale cache.
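A minimal sketch of what those instructions look like from Rust, where a Release store on the writer pairs with an Acquire load on the reader:

    use std::sync::atomic::{AtomicBool, AtomicUsize, Ordering};

    static DATA: AtomicUsize = AtomicUsize::new(0);
    static READY: AtomicBool = AtomicBool::new(false);

    // Writer: the Release store guarantees the DATA store is visible to
    // any thread that later observes READY == true with Acquire.
    fn writer() {
        DATA.store(42, Ordering::Relaxed);
        READY.store(true, Ordering::Release);
    }

    // Reader: without the Acquire/Release pair, a reader could see
    // READY == true and still read a stale DATA.
    fn reader() -> Option<usize> {
        if READY.load(Ordering::Acquire) {
            Some(DATA.load(Ordering::Relaxed))
        } else {
            None
        }
    }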

                                                                                                                  1. 2

As far as I remember, Rust targets an abstract memory model (essentially the C++ atomics model) rather than the memory model of any particular hardware. This allows the compiler to make valid choices while checking the higher-level code (what LLVM does at the lower level is highly platform specific, though). That memory model is quite conservative, hence the errors. What would happen in reality for the potentially racy construct is UB, since it’s naturally not defined :)

                                                                                                                    1. 1

                                                                                                                      So any value that is Send and that is moved to another thread might have some instructions added so that it is read coherently?

That sounds fine in this specific case, so long as the same instructions were applied to RcInner. But that is pointed to by a NonNull, which the compiler doesn’t want to mess with and which isn’t Send.

                                                                                                                      Am I on the right track?

                                                                                                                      1. 6

                                                                                                                        You make this sound like it’s automatic. The guts of Rc use raw pointers, which aren’t Send, therefore Rc is not Send. If you take something like Arc, the guts are also not Send, but Arc (unsafely) implements Send explicitly as a manual promise. Arc’s methods are manually coded so that the overall type behaves in an atomic/coherent way.

                                                                                                                        The whole point of Rc is that it doesn’t go to all that trouble, which has a cost as well, but it still allows an object to be referenced from multiple locations which are accessible to only one thread. (Including objects of types which themselves are not Send.) Yes you could modify Rc to implement Send. Congratulations, you implemented Arc.

                                                                                                                        Many primitive types such as i32 are also Send, and so are types derived from them. Not because the compiler inserts any special instructions or something, but because the mechanism by which it’s moved from one thread to another is assumed to be safe (e.g. channel, mutex, etc. - safe here usually meaning it uses a memory barrier of some kind), and there’s nothing about the type itself that needs special treatment.
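A toy illustration of those mechanics (MyRc is made up; the unsafe impl mirrors what std declares for Arc):

    use std::ptr::NonNull;

    // Send is an auto trait: it is implemented automatically only when
    // every field is Send. NonNull (a raw pointer) is not Send, so this
    // type is not Send by default...
    struct MyRc<T> {
        ptr: NonNull<T>,
    }

    // ...unless the author unsafely promises that the type's methods make
    // cross-thread use sound, which is what std does for Arc (with
    // T: Send + Sync bounds) and deliberately does not do for Rc.
    unsafe impl<T: Send + Sync> Send for MyRc<T> {}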

                                                                                                                  2. 4

This is an interesting idea, but it does break the Rust stability guarantee, which says that updating the stable Rust compiler without updating any crates should not break compilation. I know this has been broken before, but preview crates would require you to update your crates when updating the compiler (assuming anything has changed with the preview feature).

                                                                                                                    The way preview crates require updating would also break with cargo -Z minimal-versions update.

                                                                                                                    1. 1

Yeah, that seems like the biggest issue with the idea IMO. I like the idea of leaning on crates.io for stats, but having the implementation actually live outside the compiler, and thus subject to Cargo.lock, seems too problematic.

                                                                                                                      Maybe you can get the best of both by making the external crate reexport the macro from std so what’s locked doesn’t matter. Basically the external crate could be something like:

    macro_rules! const_item {
      (...) => { std::_secret_sauce::const_item::v1!(...) };
    }
                                                                                                                      

                                                                                                                      And users will always use v1 until they update this crate.

                                                                                                                      1. 2

                                                                                                                        I think that’s basically what this part of the post is suggesting?

                                                                                                                        But I figure we still handle it by actually having the preview functionality exposed by crates in sysroot that are shipping along the compiler. These crates would not be directly usable except by our blessed crates.io crates, but they would basically just be shims that expose the underlying stuff.

                                                                                                                        1. 1

That’s not how I understand it: I think the “actual” implementation (referred to just above your quote) is the lang feature, but the crate would still contain the macro implementation.

                                                                                                                          The reason for my interpretation is the “release 2.0” paragraph:

                                                                                                                          No problem, we release a 2.0 version of the crate and we also rewrite 1.0 to take in the tokens and invoke 2.0 using the semver trick.

                                                                                                                          This is only required if the crate contains the macro implementation, which is equivalent to saying the macro is subject to the lockfile.

                                                                                                                          1. 2

                                                                                                                            I think Niko is just throwing a bunch of random ideas of how it might work and seeing what might stick. I mean, the part I quoted was under this question:

                                                                                                                            But would this actually work? What’s in that crate and what if it is not matched with the right version of the compiler?

And shipping sysroot crates with the compiler which actually use the compiler feature, while the crates on crates.io just wrap those sysroot crates, would in fact solve that problem, so I think that is what he means in that section.

                                                                                                                    2. 5

                                                                                                                      I agree with this, except for the end.

Everyone hates two-factor authentication, but speaking as someone with first-hand experience in security for higher education, I’ve seen even SMS-style 2fa almost completely stop account compromises.

I’m sure there’s a better way to do it, but I’m not sure what that better way is. I’m not sure passkeys are the answer, but that’s what everyone embraced. I feel like it’s going to be a nightmare to deal with when it’s time to recover an account though, and I’ve yet to try using it.

                                                                                                                      1. 9

In my experience, if you have user support, passkeys are pretty much a non-starter.

The big problem with passkeys from a user-support perspective is that nobody understands them and nobody has any good tooling for debugging them. I’ve put passkeys in the hands of our IT department as users and it’s been a complete disaster to debug and troubleshoot when things go wrong. And lots of things go wrong.

If you have user support, you can’t use passkeys (last I tried, about a year ago). Pretty much all the big tech companies pushing passkeys hard have essentially no user support, which makes it a lot easier for them to deploy passkeys.

The best I could do was make passkeys completely optional and, when users got outside the happy path, just delete all their passkeys and let them try to set everything up again after rebooting their computer. I.e., you don’t actually fix anything; you just hope whatever went wrong won’t go wrong a second time around.

                                                                                                                        We never got out of beta and have no current plans to ever try and actually deploy it. Maybe in another 5 years passkeys won’t be such a mess and we can actually try deploying them again. But maybe by then we will be on to try #4 at getting public/private keypairs across web infrastructure.

We currently use TOTP, and while it’s hard to implement, at least it’s doable. 95% of the problems with TOTP are clock-sync issues. We have special code at setup time that verifies the user’s clock and shows them how to turn on time sync for their devices. It’s amazing how many users have bad time sync, even though Android and iOS devices have had NTP turned on by default for years now. Even our tech-support people have trouble understanding and troubleshooting TOTP, because there are generally at least three devices involved (user machine, user phone, and the server), and that’s very complicated to reason about.
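The usual mitigation for small skew, sketched below (verify_totp and hotp are hypothetical names; hotp stands in for an RFC 4226-style code generator):

    // Accept the codes for the adjacent 30-second steps as well, so a
    // client clock that is off by up to ~30s still verifies.
    fn verify_totp(
        secret: &[u8],
        submitted: u32,
        now_unix: u64,
        hotp: impl Fn(&[u8], u64) -> u32,
    ) -> bool {
        let step = now_unix / 30;
        [step.wrapping_sub(1), step, step + 1]
            .iter()
            .any(|&s| hotp(secret, s) == submitted)
    }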

With current passkey implementations, there are generally at least four things involved: the user’s computer, their phone, the network/Bluetooth between the two user devices, and the server and its network. It’s a recipe for disasters and things going wrong.

                                                                                                                        1. 3

                                                                                                                          Recovering an account with a passkey is the exact same scenario as recovering an account with a 2fa that got lost.

                                                                                                                          1. 5

That’s the lazy way, and I agree it’s the only way I ever had any chance of getting passkeys deployed when I tried about a year ago. You can’t troubleshoot and debug passkeys well at all, last I tried. There are a lot of magic dances that have to happen just right, and the browsers and OSes just say “ERROR” and give up if anything goes wrong.

With current passkey implementations, there are generally at least four things involved: the user’s computer, their phone, the network/Bluetooth between the two user devices, and the server and its network. It’s a recipe for disasters and things going wrong.

                                                                                                                            Maybe in another 5 years passkeys won’t be such a mess and we can actually try deploying them again. But maybe by then we will be on to try #4 at getting public/private keypairs across web infrastructure.

                                                                                                                          2. 1

                                                                                                                            I feel like it’s going to be a nightmare to deal with when it’s time to recover an account though

                                                                                                                            Magic link should do the trick.

                                                                                                                          3. 3

                                                                                                                            It is not about Rust, it is not about Assembly, nor ARM64 vs. RISC-V…

                                                                                                                            Just:

                                                                                                                            $ man setenv
                                                                                                                            ...
                                                                                                                            ATTRIBUTES
                                                                                                                               For an explanation of the terms used in this section, see attributes(7).
                                                                                                                            
                                                                                                                               ┌─────────────────────┬───────────────┬─────────────────────┐
                                                                                                                               │Interface            │ Attribute     │ Value               │
                                                                                                                               ├─────────────────────┼───────────────┼─────────────────────┤
                                                                                                                               │setenv(), unsetenv() │ Thread safety │ MT-Unsafe const:env │
                                                                                                                               └─────────────────────┴───────────────┴─────────────────────┘
                                                                                                                            

                                                                                                                            n.b. MT-Unsafe

                                                                                                                            Environment variables are a simple interface between the parent process (the environment) and the program being executed. Not a dynamic global thread-safe map.
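In Rust terms, the safe shape of that interface is to set variables on the child process at spawn time rather than mutating this process’s global environment from some thread (a sketch; the choice of date/TZ is arbitrary):

    use std::process::Command;

    fn run_in_utc() -> std::io::Result<std::process::ExitStatus> {
        Command::new("date")
            .env("TZ", "UTC") // affects only the spawned child
            .status()
    }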

                                                                                                                            1. 5

                                                                                                                              Note that this is only true on the platform where you obtained that manual page. The people who wrote the implementation have made a choice not to make the interface safe.

                                                                                                                              If you look at https://illumos.org/man/3C/setenv ours is MT-Safe, and there’s really no reason that every other libc couldn’t make the same decision.

                                                                                                                              1. 3

                                                                                                                                Agree, thanks.

BTW: Does Illumos give each thread an independent environment, or is it shared by the whole process (and just thread-safe)? Either might be useful, or not… In the current state of things (the environment is usually not only shared but even MT-Unsafe), it is simply bad design if someone recklessly modifies the environment in a multi-threaded program, or if a library can be parametrized only through the environment and does not provide any per-thread configuration (like functions or thread-local variables).

                                                                                                                                1. 4

The environment is definitely per process, not per thread. FWIW, I don’t think setenv(3C) is a good interface, and I don’t think people should use it (there are vastly preferable alternatives for every legitimate use case in 2025); I just don’t think it should be unnecessarily thread/memory unsafe!

                                                                                                                                    1. 2

What’s the preferable alternative for “I’m using a library which changes behavior based on environment variables and I need to control that behavior”? Note that libc itself is a library that does this (e.g. the TZ env var).

                                                                                                                                      1. 4

I think in general the environment variables are intended to allow control from outside the process; this applies to the ambient timezone and locale stuff especially. If you override them within the process, you’re preventing those mechanisms from working as designed. In general, any library interface that can only be configured through the environment is not a good interface.

                                                                                                                                        In the case of locales, the newlocale(3C) routine was added to allow programs to look up a specific locale and get a handle to it, rather than use the ambient locale from LANG and LC_ALL in the environment. Probably we should be looking to add a newtimezone(3C) routine to allow a similar thing with timezones!