Threads for jamesw

    1. 3

      I was poking vtables this week so if anyone’s interested in their layout, it currently goes: pointer to the drop function, two usizes for size and alignment, then n pointers to the object’s methods.
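
      In (purely illustrative) struct form, that header might look like the following; the real layout is unstable and the field names here are made up:

      #[repr(C)]
      struct VtableHeader {
          drop_in_place: unsafe fn(*mut ()),
          size: usize,
          align: usize,
          // ...followed by one function pointer per trait method.
      }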

      I believe that with the new upcasting feature, the super-trait’s function pointers come first followed by the sub-trait’s function pointers. That way you can treat the vtable as matching either trait.
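
      A minimal example of the coercion that ordering enables (trait upcasting is stable in current Rust; the names here are arbitrary):

      trait Super { fn a(&self); }
      trait Sub: Super { fn b(&self); }

      struct S;
      impl Super for S { fn a(&self) {} }
      impl Sub for S { fn b(&self) {} }

      fn main() {
          let sub: &dyn Sub = &S;
          // Upcasting coercion: the same data pointer, with a vtable
          // that can also be read as a `dyn Super` vtable.
          let sup: &dyn Super = sub;
          sup.a();
      }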

      1. 1

        How does upcasting work (layout-wise) when a trait has multiple super-traits? I’d assume some redundancy is necessary in that case.

        1.  

          Hmm, I’m not sure actually. There’s a proposed layout in the original RFC in which some of a super-trait’s methods can come after the sub-trait’s, and sometimes a pointer to a new vtable is required.

      2. 15

        I don’t understand why Mozilla needs a license to do anything with my content. What is Mozilla’s role in this relationship? My computer is running a piece of software, I input some data into the software, I ask the software to send the data to servers of my choice (for example the lobste.rs servers, when I hit “Post” after typing this comment). What part of this process requires Mozilla to have a “nonexclusive, royalty-free, worldwide license” to that content? And why did they not need to have that “nonexclusive, royalty-free, worldwide license” to that content a week ago? I would get it if it only applied while using their VPN, but it’s for Firefox too?

        Why do I not need to accept a similar ToS to use e.g. Curl? My relationship with Curl is exactly the same as my relationship with Firefox: I enter some data into it (via a GUI in Firefox’s case, via command-line arguments in Curl’s case), Curl/Firefox makes a request towards the servers I asked it to with the data I entered, Curl/Firefox shows me whatever the server returned. Is it Mozilla’s view that Curl is somehow infringing on my intellectual property by not obtaining a license to the data I provide?

        1. 6

          Basically, they are trying to have some service to sell. Go to about:preferences#privacy and scroll down to “Firefox Data Collection and Use” and every section below there is about data that Firefox collects and sends to Mozilla so they can do something nominally-useful with it. In my version there’s also “Sync” and “More From Mozilla” tabs, which are even more of the same.

          Someone at Mozilla has decided that the fact you don’t want to buy the services is irrelevant, they’ll just sell all that juicy data produced as a side-effect to whoever wants it. More than they already were, anyway.

          1. 1

            I don’t understand why Mozilla needs a license to do anything with my content. What is Mozilla’s role in this relationship? My computer is running a piece of software, I input some data into the software, I ask the software to send the data to servers of my choice (for example the lobste.rs servers, when I hit “Post” after typing this comment).

            Maybe they only mean inputs into Firefox itself and not the sites that you visit with Firefox. Things like Pocket, the add-on store, the password manager, and the “report broken site” form. I’m sure they could make this clearer if it’s the case, but I’m personally willing to lean towards this.

            1. 19

              If that’s the case, it’s seriously impressive to be 2 “clarifications” in after the original announcement and still not have made that part clear. Anything that’s left unclear at this point is surely being left unclear intentionally.

            2. 1

              Why do I not need to accept a similar ToS to use e.g. Curl?

              Ha. I wish I’d thought of that question.

              Arguably you do have to agree to something to use curl, but it’s very minimal and certainly supports your point. Here is curl’s licence (which is not one of the standard ones), from https://curl.se/docs/copyright.html:

              COPYRIGHT AND PERMISSION NOTICE

              Copyright (c) 1996 - 2025, Daniel Stenberg, daniel@haxx.se, and many contributors, see the THANKS file.

              All rights reserved.

              Permission to use, copy, modify, and distribute this software for any purpose with or without fee is hereby granted, provided that the above copyright notice and this permission notice appear in all copies.

              THE SOFTWARE IS PROVIDED “AS IS”, WITHOUT WARRANTY OF ANY KIND, EXPRESS OR IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY, FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT OF THIRD PARTY RIGHTS. IN NO EVENT SHALL THE AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM, OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN THE SOFTWARE.

              Except as contained in this notice, the name of a copyright holder shall not be used in advertising or otherwise to promote the sale, use or other dealings in this Software without prior written authorization of the copyright holder.

              1. 6

                mdBook absolutely is one of the inspirations! In particular, the way we got rid of docusaurus metadata is by parsing the ToC information directly from markdown, the way mdBook does it.

                But I don’t think we’d be able to use it directly — we do need full control over the content. For example, we don’t want to use a SUMMARY.md file, and want to stick with README.md, as it is more conventional and handled specially by GitHub. Similarly, we want full control of the resulting HTML and CSS.

                To get that level of customizability, you might use some fully general SSG, like Jekyll or Docusaurus, but, at that point, you might as well just roll your own!

                1. 5

                  I totally agree mdBook probably wouldn’t suit your needs. I will say that until very recently I didn’t appreciate how customizable it truly is; I’ve been messing with the plugin system and it’s pretty cool.

                2. 4

                  We’re using mdBook for our documentation and there have been benefits and drawbacks.

                  We’ve customised things quite a lot with extra CSS, JS, and preprocessors, which has worked well. For example, the wavedrom preprocessor generates register diagrams and waveforms without us having to check in SVGs.

                  The main problem has been that we want to mix our documentation into the code. There are README.md files all over the repo and most modules have a doc/ directory. This means turning the whole repo into a book, which doesn’t seem like something mdBook was designed for. When you run mdbook build, it copies the entire repository (including build artifacts) to the output and seems to run it all through the search indexer.

                  There are also limits to how far we want to customise it. It would be nice to be able to search in only the hardware or software sections for example, but we don’t want to mess with the search bar too much. It would also be nice to have the doxygen/rustdoc easily accessible within the book, but we don’t want to mess with the navigation too much.

                  Soon we may need to add a way to navigate to slightly different variants/versions of our documentation and I think this might push mdbook a bit too far. The TigerBeetle docs here look great and I’ll think of them if we need a change. I also think the Pigweed docs are a good example, though I don’t remember if they’re custom-made.

                  1. 3

                    One useful heuristic here is that the tool should bend to the content, not vice versa. It’s easy to change content processing rules, but it is much harder to change the content to fit the rules.

                    That was actually one of our bigger gripes with docusaurus — it forced us to spill part of the information about the content into docusaurus-specific metadata.

                  2. 3

                    Why? The article explains many of their choices. TigerBeetle is written in Zig. mdBook is Rust.

                    1. 6

                      As a counter-example, we use Haskell’s pandoc for parsing markdown, not Zig’s SuperMD.

                      1. 2

                        Have you considered using cmark-gfm, a C library for parsing GFM (used by GitHub themselves)?

                        We use it to convert Markdown to XHTML which we then splice into the container document (used to display package README.md in our package repository web interface). Works reasonably well. If I were writing a Markdown-based document system from scratch, that’s what I would likely use.

                        1. 3

                          Yup! Our gut feeling is that pandoc should be more long-term stable, but this is mostly a coin toss. Using cmark-gfm would’ve also worked fine I suppose, but for our use case I don’t think it makes much of a difference.

                      2. 6

                        mdBook is a CLI tool. It doesn’t matter what language it’s implemented in. It would be like saying you can’t use grep in a Zig project.

                    2. 3

                      Cool, RDAP was a bit nicer to use.

                      When I last needed this in 2020, all gTLDs supported RDAP but few ccTLDs did. Based on this deployment tracker, 1/4 of ccTLDs now support it, with a further 1/4 supporting “stealth RDAP”, whatever that means (the link is broken).

                        1. 1

                          Of the domains I cared about, it looks like the ICANN lookup tool isn’t working for .al, .co, .de, .es, .me, .ng, .rs, .se, or .us, at least. I haven’t checked many domains.

                          It looks like “stealth rdap” means that the ccTLD has an rdap server, but it’s not set up for autodiscoverability. One attempt to probe for them found servers for .de and .us, from my list above.

                        2. 1

                          Brief text post: https://frame.work/gb/en/blog/highlights-from-the-framework-2nd-gen-event

                          I’m surprised by the 12-inch version. I’ve been reluctant to get a Framework laptop because the 13 seemed too small and the 16 too big. I was sort of hoping they’d introduce a third size but didn’t expect it to be even smaller. I guess the 2-in-1 feature is the important thing, not the size.

                          Not interested in the desktop or AI stuff and assume it’s an appeal to investors rather than customers.

                          More positively I think their previous products look great and I hope they keep improving them.

                          1. 6

                            This causes a bit of a headache for me. I doubled down on ring as the default backend for rustls when releasing ureq 3.0 just a couple of months ago. But this might mean I should switch to aws-lc-rs. Hopefully that doesn’t upset too many ureq users :/

                            1. 17

                              There’s been some positive momentum on the GitHub discussion since you posted. Namely the crates.io ownership has been transferred to the rustls people and 2 of them have explicitly said they’ll maintain it. They need to make a new release to reflect the change and then will be able to resolve the advisory.

                              1. 7

                                That does buy some time. It’s the same people stepping up who are writing/maintaining rustls, which makes me happy.

                              2. 3

                                Out of interest, what factors did you consider when choosing between aws-lc-rs and ring?

                                1. 8

                                  My dream for ureq is a Rust native library without C underpinnings. The very early releases of rustls made noises I interpreted as sharing that goal, even though they never explicitly stated it (and ring certainly isn’t Rust native like that). rustls picked ring, and ureq 1.x and 2.x used rustls/ring.

                                  As I was working on ureq 3.x, rustls advertised they were switching their default to aws-lc-rs. However the build requirements for aws-lc-rs were terrible – like requiring users on Windows to install nasm (this has since been fixed).

                                  One of ureq’s top priorities has always been to “just work”, especially for users new to Rust. I don’t want new users to face questions about which TLS backend to choose. Hence I stuck with rustls/ring for ureq 3.x.

                                  aws-lc-rs has improved, but it is still the case that ring compiles on more platforms. RISC-V is the one I keep hearing about.
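
                                  For reference, a sketch of how an application can pin the backend explicitly with the rustls 0.23-style CryptoProvider API (this assumes the ring cargo feature is enabled; exact paths may differ between versions):

                                  fn main() {
                                      // Install ring as the process-wide default crypto provider,
                                      // instead of relying on whichever backend cargo features picked.
                                      rustls::crypto::ring::default_provider()
                                          .install_default()
                                          .expect("another crypto provider was already installed");
                                  }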

                                  1. 3

                                    Wait does that mean the Rust ecosystem is moving towards relying on Amazon and AWS for its cryptography? That doesn’t sound great. Not that I believe Amazon would add backdoors or anything like that, but I expect them to maintain aws-lc and aws-lc-rs to suit their own needs rather than the needs of the community. It makes me lose some confidence in Rust for these purposes to be honest.

                                    1. 4

                                      I expect them to maintain aws-lc and aws-lc-rs to suit their own needs rather than the needs of the community

                                      What do you see as the conflict here, i.e. where would the needs differ for crypto libraries?

                                      I’d expect a corporate funded crypto project to be more likely to get paid audits, do compliance work (FIPS etc), and add performance changes for the hardware they use (AWS graviton processors I guess), but none of that seems necessarily bad to me.

                                      1. 6

                                        Things like maintaining API stability, keeping around ciphers and cryptographic primitives which AWS happens to not need, accepting contributions to add features which AWS doesn’t need or fix bugs that don’t affect AWS, and improving performance on platforms which AWS doesn’t use are all things that I wouldn’t trust Amazon for.

                                      2. 3

                                        Yeah. To AWS, the advantage is quantum-resistant crypto and FIPS. In a comment below I found there is another initiative, “graviola”, that seems promising.

                                        1. 1

                                          Wait does that mean the Rust ecosystem is moving towards relying on Amazon and AWS for its cryptography?

                                          There is also the RustCrypto set of projects.

                                          1. 1

                                            There is, but AIUI AWS-LC uses assembly code and thus can provide timing-safety, whereas RustCrypto is “written in pure Rust”.

                                  2. 10

                                    Big update, lots of good unsafe stuff in the new edition: setting env vars, extern blocks, and attributes are now unsafe; unsafe fn bodies no longer count as one big unsafe block; and references to static mut are banned altogether. All of these things make unsafe code easier, or clearer at least.
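
                                    For example, the env var change (a sketch; the variable name is made up). Under the 2024 edition, std::env::set_var is an unsafe fn, so the call has to be wrapped:

                                    fn main() {
                                        // Edition 2024: mutating the environment is unsafe because
                                        // other threads may be calling getenv concurrently.
                                        unsafe {
                                            std::env::set_var("MY_FLAG", "1");
                                        }
                                    }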

                                    Also a huge fan of formatting being tied to editions so it can improve without churn.

                                    It’s nice that the “breaking” changes made in editions tend to be quite small things which unblock larger features later or just make life a tiny bit easier. It seems unlikely for any of these changes to be difficult to accommodate and I imagine cargo fix will do most of the work.

                                    1. 9

                                      Authored 5 years ago… well done getting this in! 🥳

                                      1. 2

                                        There’s no code in there, just a giant xml file.

                                        1. 10

                                          That’s expected, it’s a protocol/API, not an implementation. To write code, you generate bindings for your language from the XML.
                                          If you look at the MR associated with the commit, it links a bunch of code that implements the protocol, both in servers and clients. The protocol being merged means those implementations will be able to be released.

                                          I’m glad that’s happening and that the Wayland project is focusing more on getting things into users’ hands than before!

                                          1. 5

                                            There’s a draft implementation for wlroots here if you’re interested in code:

                                            https://gitlab.freedesktop.org/wlroots/wlroots/-/merge_requests/4962

                                          2. 9

                                            The article “Developers shouldn’t distribute their own software” might be relevant (I don’t have an opinion).

                                            1. 15

                                              For very popular programs this is sensible, but for most software it’s basically just another way of saying your software shouldn’t be distributed, because unless you are very lucky, no one else will do it.

                                            2. 39

                                              The most egregious thing about this, other than the part where the devenv maintainer reverted the PR that disabled the telemetry in nixpkgs, is that it, for some reason, slurps up all of the files in your git repo. I’ve never heard of any telemetry that does this.

                                              1. 16

                                                I don’t use devenv, but the way I understand this, the file sending isn’t related to telemetry; it’s part of an explicit feature to generate an environment for a local project by sending it to an LLM.

                                                1. 12

                                                  IIUC, yes, this command sends your code to an AI service for config generation, and if you don’t opt out it also sends the entire git repo to a server and retains it, so they can keep improving their AI models…

                                                  1. 2

                                                    I am just thinking out loud - what if the current project is a subfolder in a large git repo, a single repo for the whole organization, and you invoke this command? Will this command send the whole repo to its LLM hosted somewhere?

                                                    1. 1

                                                      There is only one “sending files to the server” code path (there’s code in the original post; did you click the links?), and the difference is in whether that upload is annotated with the do-not-track flag or not. Based on Domen’s response on the nixpkgs commit, I’d also assume he actually does handle that flag correctly server-side.

                                                      There are deficiencies in this feature, yes; for example, if you’re working in a subproject in a monorepo the current logic doesn’t make any sense. But this is also obviously not nefarious.

                                                  2. 1

                                                    Are you referring to PR 541? I can’t find the commit where it was reverted by @domenkozar, and I don’t understand how it could get reverted if it was never merged?

                                                    1. 9

                                                      I think they mean the nixpkgs PR nixpkgs#381817 which was reverted in nixpkgs#381981.

                                                      Though, as the OP mentions, the nixpkgs PR doesn’t prevent data being sent; it only sends a DNT flag with it.

                                                      1. 2

                                                        I think they mean the nixpkgs PR nixpkgs#381817 which was reverted in nixpkgs#381981.

                                                        Honestly, that nixpkgs doesn’t seem to have any requirements around approvals is way more concerning to me than the actual revert (which is still concerning, but for a different reason). Anyone who has the ability to merge PRs can push code without any oversight.

                                                        Edit: fixed thought duplication, added sentence

                                                        Correction: I am incorrect. There is a GitHub Action which validates code ownership.

                                                    1. 1

                                                      Yeah, all those links are in the second sentence of the article.

                                                    2. 5

                                                      For more on register allocation design, I thought this article was really good: Cranelift, Part 4: A New Register Allocator (regalloc2).

                                                      I notice there’s now a regalloc3 as well, but I don’t know anything about it.

                                                      Aside: I don’t know how much time regalloc takes compared to optimisation, linking, etc. but I wonder if there’s a noticeable difference in effort between x86_64 (which has 16 general-purpose registers, some with fixed purposes) and RISC-V (which has 32 registers with no fixed purpose, modulo ABI and compressed instructions).

                                                      1. 4

                                                        I like the font, but I don’t like that they want you to give an email address and sign their “EULA” (which is just the SIL Open Font License?) to download from the website.

                                                        1. 3

                                                          If you have clang-tidy available, there’s a lint (bugprone-assignment-in-if-condition) to prevent assignments in if conditionals. Swapping the operands just looks too uncanny to me and you have to convince everyone to do it.

                                                          1. 5

                                                            Both GCC and clang have warnings for the assignment-in-conditional mistake (-Wparentheses), and I believe it’s included in -Wall. There really isn’t any need for this awkwardness.

                                                            One exception: I use reversed comparisons when the left side would otherwise be very long and potentially make the comparison hard to spot.

                                                            if (5 == function(lots, of, very + long + argument, expressions))
                                                            
                                                          2. 6

                                                            I like the idea but I don’t think that I could get used to the reversed argument order of the compress command.

                                                            1. 4

                                                              I think every time I’ve used tar I write the archive name last by mistake, so this way around suits me. For CLIs it feels more familiar with cp and mv to put the destination last, and less like assembly where the sources come last.

                                                              1. 4

                                                                It’s not just tar, though; all the archival commands I can think of that take a list of files put the archive name first: ar, pax, zip, rar, zpaq, pkzip, arj, lha, etc.
                                                                And as others have mentioned there are practical reasons to prefer this order.

                                                                1. 1

                                                                  Sure, but I use mv and cp every day and archive commands (which famously have clunky CLIs) every few months. My mental model comes from the former.

                                                                  A flag to specify the output would be helpful, though, to make xargs usage easier, as mentioned.

                                                                2. 2

                                                                  tar is one of those ancient unix utilities with a weirdly bespoke pre-getopt() syntax. But it does have some vague foreshadowing of getopt(), which (at least in my mind) helps to make sense of it.

                                                                  Its synopsis is roughly tar opts file… which in modern versions can also be written tar -opts file…. The flags are usually clustered, like abc or -abc instead of -a -b -c.

                                                                  The really weird thing is that the -f flag takes an argument, but unlike getopt the argument does not have to follow straight after the -f: there can be other clustered flags between the -f and the tar file name.

                                                                  And for weird historical reasons you basically always invoke tar with an f option – I don’t remember in 30 years of using unix ever telling tar to read the implicit default tape device even when I have been reading off an actual magnetic tape. So it’s easy to forget the f flag has a load-bearing meaning.

                                                                  So the actual synopsis is tar -opts -f tarfile file… or tar fopts tarfile file….

                                                                  The fun thing is comparing tar cf dest.tar file… with cp or mv file… destdir, which at first seems gratuitously inconsistent, but when you understand tar’s -f option, the differences become merely quaint, or perhaps obtusely preserved historical ceremony.

                                                                  1. 1

                                                                    With tar you can use any order you want:

                                                                    tar -cf foo.txt.tar foo.txt
                                                                    tar -c foo.txt -f foo.txt.tar
                                                                    
                                                                    1. 1

                                                                      Thanks, that is helpful, though I’ll probably forget it and default to tar czvf next time I need it.

                                                                    2. 1

                                                                      With cp and mv, there is a destination option for when using with xargs (-t for target).
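
                                                                      For example (GNU coreutils; the file names are made up):

                                                                      find . -name '*.bak' -print0 | xargs -0 mv -t backup/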

                                                                    3. 1

                                                                      Yeah, this doesn’t work well with xargs. I’m a very big fan of how systemctl switched to systemctl <command> <list of services>.

                                                                    4. 9

                                                                      As a Rust user who won’t give up Rust’s strong safety, I’m jealous. Yes, Rust has no-std, but it would be cool if Rust had full std without libc. Note: On Windows, that would mean not using the VC++ runtime and UCRT, while still using kernel32 and other Win32 APIs.
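
                                                                      For context, a minimal sketch of what the no-std route looks like today on a freestanding target (no libc, but also no std; you supply the panic handler and entry point yourself, and it needs a suitable target to actually link):

                                                                      #![no_std]
                                                                      #![no_main]

                                                                      use core::panic::PanicInfo;

                                                                      #[panic_handler]
                                                                      fn panic(_info: &PanicInfo) -> ! {
                                                                          loop {}
                                                                      }

                                                                      // With no libc there is no C runtime to call main(), so you
                                                                      // provide the entry point yourself (the symbol name depends
                                                                      // on the target).
                                                                      #[no_mangle]
                                                                      pub extern "C" fn _start() -> ! {
                                                                          loop {}
                                                                      }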

                                                                        1. 2

                                                                          FYI, this uses c-gull, which is a Rust implementation of libc. It might be cool to also have a std backend that skips the libc API and the C ABI entirely (maybe using this library does skip the C ABI, I’m not sure).

                                                                        2. 0

                                                                          Yes, Rust has no-std, but it would be cool if Rust had full std without libc.

                                                                          Contribute that? At the end of the day, wishes don’t do anything.

                                                                          https://doc.rust-lang.org/nightly/rustc/platform-support/x86_64-unknown-linux-none.html already exists, but is lacking std support.

                                                                          1. 1

                                                                            I got started on an attempt at this some years ago, but lost motivation early on when I realized that Rust’s std is quite enmeshed with libc. It’s 100% possible, but it’ll take more work than I myself can put in right now.

                                                                        3. 38

                                                                          Incidentally, I’m not a fan of languages going through the libc for interfacing with the OS. It makes cross compiling hell, while languages which just perform syscalls directly just work. I think operating systems should:

                                                                          1. Separate out a syscall wrapper library from their libc, so that non-C languages don’t have to link against a library whose main purpose is to contain a ton of utility functions meant for C programmers; and
                                                                          2. Define a stable ABI for that syscall wrapper library, so that languages can target that ABI instead of targeting “whatever libc the build machine happens to have installed”

                                                                          It’s frankly baffling to me that this hasn’t happened yet, especially in OpenBSD which is the main driver behind the “every interaction with the kernel must go through the libc” cult.
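
                                                                          To illustrate what “performing syscalls directly” means, a sketch for x86-64 Linux (the one mainstream kernel that promises a stable syscall ABI); the register conventions and syscall number below are from the Linux ABI:

                                                                          use std::arch::asm;

                                                                          // write(2), invoked directly with no libc involved.
                                                                          unsafe fn sys_write(fd: usize, buf: *const u8, len: usize) -> isize {
                                                                              let ret: isize;
                                                                              asm!(
                                                                                  "syscall",
                                                                                  inlateout("rax") 1isize => ret, // syscall number 1 = write
                                                                                  in("rdi") fd,
                                                                                  in("rsi") buf,
                                                                                  in("rdx") len,
                                                                                  // The kernel clobbers rcx and r11.
                                                                                  lateout("rcx") _,
                                                                                  lateout("r11") _,
                                                                                  options(nostack),
                                                                              );
                                                                              ret
                                                                          }

                                                                          fn main() {
                                                                              let msg = b"hello\n";
                                                                              unsafe { sys_write(1, msg.as_ptr(), msg.len()) };
                                                                          }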

                                                                          1. 26

                                                                            It makes cross compiling hell, while languages which just perform syscalls directly just work.

                                                                            I think Zig’s outright trolling here:

                                                                            • Zig’s stdlib by default doesn’t depend on glibc and does syscalls directly, so cross-compilation just works
                                                                            • But you also can use Zig with glibc, and the cross-compilation still just works! When you specify a Zig target, you can request a specific version of glibc, so you don’t need to compile on old Debian just to dodge symbol multiversioning. (Example below.)
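
                                                                            For example, something like this builds a C program against a chosen glibc version from any host (the version number is illustrative):

                                                                            zig cc -target x86_64-linux-gnu.2.28 -o hello hello.c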
                                                                            1. 27

                                                                              Hey I didn’t know that, that sounds excellent. Clearly, a lot of work has gone into this from Zig.

                                                                              However I maintain that operating systems are making this way harder than it needs to be. It’s ridiculous that every language has to bend over backwards to link against a gigantic C utility functions library with an incredibly unstable ABI and with no sort of ABI standardization across POSIX systems, just to access a few basic syscall wrapper functions.

                                                                              1. 11

                                                                                Fully agree with this, yeah!

                                                                                1. 3

                                                                                  That’s just the original sin of libc: it can’t decide if it’s the OS interface or the C standard library. There is no good reason to conflate the two.

                                                                                2. 6

                                                                                  When you specify a Zig target, you can request a specific version of glibc, so you don’t need to compile on old Debian just to dodge symbol multiversioning.

                                                                                  Which honestly is key. It’s always nice to say “my language does not need libc” but then in the real world you will need to interface with this anyways and you’re back to square one.

                                                                                  1. 4

                                                                                    you don’t need to compile on old Debian just to dodge symbol multiversioning.

                                                                                    The fact that you are basically forced to do that with C/C++ (and also Rust, I believe) is incredibly infuriating.

                                                                                    1. 2

                                                                                      You aren’t forced to do that with C/C++ if you use Zig as your C/C++ cross-platform build tool. That’s an advertised feature of Zig I haven’t tried yet, so I don’t know about the limitations.

                                                                                      1. 4

                                                                                        The main problem is that Zig’s cross-build cleverness stops at libc, so if you have dependencies on (say) curl and openssl, you (I mean me) are better off doing a native build on the target machine.

                                                                                          1. 5

                                                                                            But that’s not build cleverness, that’s a manually created build script. It’s nice that someone made that build script, but it’s dishonest to compare it to the automated smartness of Zig’s libc handling.

                                                                                            1. 5

                                                                                              It’s the same thing… In both cases I ported the build over to zig

                                                                                            2. 3

                                                                                              If I remember correctly I ran into problems trying to use Zig to cross-build some Rust bindings. This is supposed to work, but Zig was unable to find the cross-build dependencies.

                                                                                              I was trying out Zig as a cross-build compiler because I had read that it has a lot of cleverness so that the cross-build environment is built in. I hoped that would save me from lots of tedious setup. But I had forgotten about my C dependencies, and the error messages suggested to me that I would need to set up a cross-build environment for them. Which nullified the point of the exercise.

                                                                                              1. 3

                                                                                                Yes, I have run into this same issue as well (something to do with rodio) when trying to cross-compile rust with zig.

                                                                                                Granted I don’t know zig, but gave up and committed instead to a much more vanilla static build of rust with musl, and it worked fine.

                                                                                        1. 2

                                                                                          If you don’t mind setting it up, you could statically link to musl to avoid this. I think the rustc *-musl targets will do it automatically. (I agree it’s infuriating)

                                                                                        2. 2

                                                                                        Yes, I found that you could even target a particular version of libc. Earlier, a programme compiled on a system with a newer libc wouldn’t work on a system with a relatively older libc, and trying to install a Linux OS with an older libc would also fail, as it wouldn’t run the newer Makefile/build script; it just became a stupid game of whack-a-mole. Now I just cross-compile a lot of not-so-trivial Nim/C code, even from Windows, for any desired Linux distribution, and it just works.

                                                                                          1. 2

                                                                                          Is the Zig no-libc mode Linux-only?

                                                                                          Because, as the parent notes, there are a lot of operating systems that do not have a stable ABI for syscalls.

                                                                                            1. 2

                                                                                            The libc-less std is Linux-only AFAIK. There have been contributions to add libc-less FreeBSD support as well.

                                                                                              1. 1

                                                                                              I guess they need to parametrise that by major version, since I think the syscalls are only guaranteed stable within those.

                                                                                                1. 1

                                                                                                  Old FreeBSD binaries are supported when the kernel is compiled with the COMPAT options.

                                                                                          2. 5

                                                                                              I’m not a fan of languages going through the libc for interfacing with the OS. It makes cross compiling hell, while languages which just perform syscalls directly just work.

                                                                                            That sounds backwards to me. The C library is standardized; syscalls aren’t (especially if you want to target Windows or embedded platforms!)

                                                                                            But I come from a world (Darwin) where the C library is just a piece of a bigger libSystem that all system calls go through, so the idea of a separate / optional libc is kinda foreign. I must be missing something.

                                                                                            1. 6

                                                                                              I think the point is “we want to free ourselves from all artifacts that C has imposed on us.” When libc is the only way to make syscalls, you can’t do that.

                                                                                              A libsyscall library seems to make sense if the interface is stable (per OS), but if you adopt that approach you’re in some interesting conditional hell that libc papers over with the “standard” interface. Still, I’m not sure all software implementing syscalls itself makes sense…

                                                                                              1. 3

                                                                                                Libsyscall still imposes C constraints - it will require both a stack and a calling convention.

                                                                                                1. 3

                                                                                                  It means that you need to follow C conventions – for those functions in particular. A language wouldn’t have to know how to call C functions in general; it could have standard library functions implemented in assembly which call those functions using C stack and calling conventions. Or compiler intrinsics which produce the instruction sequence necessary to call those C functions.

                                                                                              2. 4

                                                                                                We’re not talking about the C standard library as defined by the C standard, but rather as defined by POSIX and implemented by UNIX-like systems.

                                                                                                Both glibc in GNU and libSystem in Darwin contain loads and loads of C utility functions meant for C programs, and they bring in things like the C locale model and the C file model. And at least glibc doesn’t have a stable ABI to link against; and I suspect Apple has similar mechanisms to GNU to allow them to break ABI. In either case, almost nothing of it is relevant for a language which just wants to do its own thing, but we still need to link against all of it just to perform syscalls.

                                                                                                If there was a standardized (by POSIX, perhaps?) system interface C library with a standardized and stable ABI, a language could implement cross compiling from any platform to any compliant platform mostly straightforwardly, with great UX. Today, it’s hell.

                                                                                                1. 1

                                                                                                  And at least glibc doesn’t have a stable ABI to link against; and I suspect Apple has similar mechanisms to GNU to allow them to break ABI.

                                                                                                  Apple’s libSystem is just a dynamic library that exports all the system calls and C library functions. You link to it like any other dylib, and you call it using regular C calling conventions. Apple couldn’t change that without breaking literally every program in existence!

                                                                                                  1. 2

                                                                                                    glibc is also just a dynamic library that exports all the system calls and C library functions, which you link to like any other .so and call using regular C calling conventions. Yet they change the ABI constantly in a way that’s horrible to deal with when cross compiling; they just have mechanisms in place to make that not break every program in existence.

                                                                                                    1. 1

                                                                                                      You could imagine Apple creating a libMach.dylib that exports only low-level API/ABI specific to macOS, leaving out libc/POSIX concepts like fopen that are implemented as wrappers on top of the native functionality, or cruft like errno that is inherently tied to the programming model of 1970s C.

                                                                                                      If libSystem.dylib depended on libMach.dylib then existing programs would continue working. Programs that want to avoid a dependency on libc (for example because they’re written in a non-C language with its own stdlib) could link libMach.dylib directly.

                                                                                                      1. 1

                                                                                                        They could, but what would be the point? It wouldn’t make anything easier, or more performant. You’d just be mapping fewer code pages into your address space.

                                                                                                  2. 1

                                                                                                    If you are building on system X and want to target system Y, to cross compile you often need access to libc from Y on X (this might be hard/impossible). Unless you are using a language that performs syscalls directly - in that case you don’t need the libc and cross compilation becomes simpler/possible.

                                                                                                    1. 2

                                                                                                      It sounds like the thing confusing me is that I’m thinking of dynamic libraries and you’re talking about static libraries. (Right?) If libc were dynamic your cross-compiler could just write the names of the library functions into the imports section of the binary, without having to have the actual libc.

                                                                                                      1. 6

                                                                                                        GNU libc is a dynamic library that has a dependency/compilation model similar to static libraries – the standard way to link against a specific GNU libc version is to have a chroot with that version installed. It’s not like macOS where you can compile on a v15.1 machine but target a minimum version of v14.0 (or whatever) via compiler flags.

                                                                                                        The header files have #define macros that unpack to implementation-defined pragmas to override linker settings, there’s linker scripts that do further manipulation of the symbols so that a call to stat() turns into a reference to xstat64/2.0 or whatever, the .so itself might require the binary to be linked to an auxiliary .a stub library of a forward- (but not backward-)compatible version. It’s not straightforward.

                                                                                                        Consequently, trying to cross-compile a Linux executable that links against GNU libc is a huge pain because you need to get your hands on a GNU libc binary build without going through the system package managers (that are often an assumed part of the development environment for the type of person who wants to use GNU libc).

                                                                                                        Other Linux libc implementations (read: musl) don’t have the same limitation because they don’t have the GNU project’s … idiosyncratic … philosophy about the Linux <-> libc <-> executable relationship.

                                                                                                  3. 5

                                                                                                    It’s frankly baffling to me that this hasn’t happened yet, especially in OpenBSD which is the main driver behind the “every interaction with the kernel must go through the libc” cult.

                                                                                                    Why are you surprised that this hasn’t happened in an OS where everything is C and the kernel and libc development are basically one and the same? As far as I can tell all it would do is add a layer of mapping they currently have no need for, what would the concrete benefit be from the project’s point of view?

                                                                                                    1. 7

                                                                                                      OpenBSD is obsessed with security, so you would think they would want to make it easier to write applications in memory-safe languages. But nope—your choices are either C, or some language that eventually pretends to be C.

                                                                                                      (I’m being a little snarky here. I’ve gotten the impression that OpenBSD’s interest in security does not extend as far as having the humility to try to migrate away from C. That’s fine; there are plenty of solid pragmatic reasons to make that decision—but in 2025 it’s not a recipe for security.)

                                                                                                      1. 4

                                                                                                        That seems like a better tack, but fundamentally I’m not sure it argues for the change in question? Would a raw assembly ABI for non-C languages to bind to be in any way safer or easier than a C-ABI one? And it’s not like you’d remove the dynamic binding, since OpenBSD is actively trying to remove raw syscalls with syscall origin verification.

                                                                                                      2. 3

                                                                                                        Because not everything is C. Even in the BSD world, people run lots of userspace code that’s not written in C.

                                                                                                      3. 3

                                                                                                        OpenBSD doesn’t promise any kind of API or ABI stability from release to release and I believe doing so is a non-goal. This got brought up the better part of a decade ago as an issue for Rust’s libc crate and they (apparently) haven’t decided what to do about it.

                                                                                                        For better and for worse, when OpenBSD breaks something, porters usually go through the ports tree and deal with the fallout. I realize that using ports@openbsd.org as your cross-compilation tool is extremely high latency but it’s pretty reliable.

                                                                                                        1. 2

                                                                                                          Has OpenBSD ever made libc changes which take a function which was POSIX-compliant and makes it non-compliant? I don’t believe it has, and it’s certainly not typical. That means that there is some level of implicitly promised API stability.

                                                                                                          1. 2

                                                                                                            Offhand I think they nuked gets() from orbit but that one function is so bad that it’s fair game.

                                                                                                            1. 2

                                                                                                              That was actually removed from the C standard in C11.

                                                                                                              1. 1

                                                                                                                Nice! I am glad to hear that. ❤️

                                                                                                                I am pretty sure OpenBSD nuked it from orbit before C11. ;)

                                                                                                                1. 2

                                                                                                                  Apparently not, I am slightly surprised to discover. Tho all the BSDs have had gets() print a noisy warning since the early 1990s, which was probably sufficient discouragement.

                                                                                                      4. 2

                                                                                                        Product announcement, but hopefully it being open source silicon is interesting enough.

                                                                                                        It’s a significant project in terms of scope. All the RTL, verification, firmware, and tests are on GitHub. There’s plenty of interesting stuff about the hardware design and security model in the documentation.

                                                                                                        It’s a root of trust: you can think of it as a 32-bit RISC-V microcontroller with peripherals, cryptography accelerators, and a bunch of mitigations against fault injection and side-channel attacks.

                                                                                                        (I work on OpenTitan but not at Google, and not on the product)

                                                                                                        1. 2

                                                                                                          I don’t keep up with rustc internals, but would it be possible (and helpful) to:

                                                                                                          • allow writing proc macros using const fns and run them with MIRI to avoid the separate crate and codegen stage.

                                                                                                          • parse dependent crates first and then only compile the items from dependencies that are used, potentially skipping some transitive dependencies entirely.