1. 96
  1.  

  2. 20

    The article ended up being more about build systems than about adding Zig code to C projects, but the parts about migrating gracefully to a new language reminded me a lot of my experience introducing Kotlin to an existing Java code base.

    Really good bidirectional interop with Java made it a total breeze to start small with an isolated section of the code we were comfortable throwing out and rewriting in Java if we decided Kotlin wasn’t for us. After we decided we liked it, it was painless to gradually convert existing code whenever we were making substantial changes. There never needed to be a “stop the world to convert to Kotlin and hope we got everything 100% correct” period on our roadmap; it just happened organically, bit by bit, as part of delivering features.

    1. 19

      Yesterday on Twitter, someone said:

      The success of docker was always based in the fact that the most popular web technologies of that day and age sucked so bad at dependency management that “just download a tarball of a stripped down os image” seemed like a reasonable solution.

      This is true, but it’s even more true that, as TFA says,

      The reason why we can often get away with using languages like Python or JavaScript to drive resource-intensive computations, is because under the hood somebody took years to perfect a C implementation of a key procedure and shared it with the world under a permissive license.

      And C/C++ have an ugly Makefile where an actual dependency manager should be, which makes Docker feel like a solution and not a bandaid.

      I think TFA is correct that moving forward, it’s not going to be possible to boil the ocean and throw out all existing unsafe software, but we can at least simplify things by using simpler and more direct dependency management in C/C++.

      1. 29

        And C/C++ have an ugly Makefile where an actual dependency manager should be, which makes Docker feel like a solution and not a bandaid.

        I completely disagree. Makefile/CMake/Meson/whatever are convoluted, difficult to learn, etc but they are fundamentally different from what docker gives you. They plug in to the existing ecosystem, they compose nicely with downstream packages, they’re amenable to distro packaging, they offer well-defined, stable, and standardized interfaces for consumption. They are part of an ecosystem and are great team players.

        A docker image says “f this, here’s everything and the kitchen sink in the exact version and configuration that worked for me, don’t change anything, and good luck keeping dependencies patched when we don’t bother to update them fast enough. Screw your system preferences for the behavior of dependency x, y, or z (which the developers rightly have no need to know about or concern themselves with - but the user very much has the right to care); this is what works for me and you’re on your own if you want to diverge in the slightest.”

        I write and maintain open source software (including things you might use). It’s hard to use system dependencies and abstract away our dependency on them behind well-defined boundaries. But it’s important because I respect that it’s not my machine the code will run under, it’s the users’.

        Docker - like Electron but let’s not get into that here - isn’t about what’s better in principle or even in practice - it’s solely about what’s easier. At some point, it was universally accepted that things should be easy for the user even if that makes the developer’s job a living hell. It’s what we do. Then sometime in the past ten years, it all became about what’s easiest and most pain-free for developers. Software development (don’t you dare say software engineering) became lazy.

        We can argue about the motives but I don’t blame the developers, I think they are following a path that was paved by corporations that realized users don’t know any better and developers were their only advocates. It was cheaper to invent these alternatives that let you push software out the door faster with greener and greener developers than to invest in the existing ecosystem and hold the industry to a higher standard. Users have no one advocating for them and they don’t even realize it.

        1. 4

          Software development (don’t you dare say software engineering) became lazy.

          This sentiment is as old as Unix: https://en.wikipedia.org/wiki/Worse_is_better

          1. 10

            Docker is neither simple nor correct nor consistent nor complete in either the New Jersey or MIT sense.

            I think that if the takeaway from reading Worse Is Better is that lazy software development is acceptable, then that is the incorrect takeaway. The essay is about managing complexity in order to get a rough fit sooner than a perfect fit perhaps too late to matter. Read the essay.

            1. 7

              I read the essay. The essay itself codifies a position that it opposes, based on the author’s observations about the state of the New Jersey/MIT split. It’s one person’s idea of what “Worse Is Better” means, with the essay created to criticize the self-defined argument, not the definitive idea. But we can argue the semantics of the essay some other time.

              When someone says that “software development has become lazy” and adds a bunch of supporting information around that for a specific situation, what I read is “I am frustrated with the human condition”. Software developers have been lazy, are lazy, and will continue to be lazy. Much like a long-standing bug becomes an expectation of functionality. Fighting human nature results in disappointment. To ignore the human factors around your software is to be willingly ignorant. Docker isn’t popular in a vacuum and there’s no grand capitalist conspiracy to convince you that Docker is the way to make software. Docker solves real problems with software distribution. It may be a hamfisted solution, but railing against the human condition and blaming corporate interests is not the way to understand the problems that Docker solves, it’s just an ill-defined appeal to a boogeyman.

              1. 8

                Docker isn’t popular in a vacuum and there’s no grand capitalist conspiracy to convince you that Docker is the way to make software.

                You, uh, you sure about that? Like, really sure?

                1. 3

                  Our community’s discourse is so dominated by cynicism that we need to find a way to de-escalate it, not add fuel to the fire. So the benefit of the doubt is more important now than ever. That means that whenever there’s a benign explanation for something, we should accept it.

                  1. 11

                    Our community is split into two groups:

                    • Those exploiting software and human labor for financial gain at the expense of the Earth and its inhabitants.
                    • Those engaging in craftsmanship and improving the state of technology for end users, by creating software you can love.

                    Think carefully before choosing to defend the former group.

                    1. 1

                      I don’t think it’s that simple. I definitely feel the pull of the second group and its ideals, but sometimes the practices of the first group can be put to good use to, as you say, improve the state of technology for end-users. Consider: if there’s an unsolved problem affecting end-users, e.g. one caused by the sudden changes that happened in response to the pandemic, and the most practical way to solve it is to develop and deploy a web application, and I spend time crafting an elegant, efficient solution that I would be proud to show to people here, then I’ve likely done the actual users a disservice, since I could have gotten the solution out to them sooner by taking the shortcuts of the first group. That’s why I defend those practices.

                      1. 3

                        This fast-to-market argument only has a home because the world is run so much by the former group.

                        Consider the case of email vs instant messaging. Email was standardized and made ubiquitous at a time before market forces had a chance to spoil it with vendor lock-in. Meanwhile, text messaging, and messaging in general, is incredibly user-hostile. But it didn’t have to be this way. If messaging were orchestrated by the second group, with the end-user experience in mind as the primary concern, we would have widely popular federated messaging with robust protocols. Further, many other technologies would exist this way, with the software of the world, in general, being more cooperative and reusable. In such a case, the total time to develop and deploy a web application would be lower than it is today, and furthermore the application would have more capabilities to aid the end-user.

                        All this “glue” code that needs to be written is not fundamentally necessary in a technical sense; it’s a direct result of the churn of venture capital.

                        1. 6

                          The friendliest ways of building websites, with the least amount of code, right now are things like Wix, Wordpress, cPanel, and so forth. These are all very much commercial ventures, squarely from the first camp.

                          Your example of messaging is also questionable, because the successful messaging stuff was taken over by the first camp while the second camp was screwing around with XMPP and IRCv3 and all the rest.

                          The killer advantage the first camp has over the craftsmen in the second camp is that they’re not worried about “quality” or “products people love”…they are worried about the more straightforward (and sustainable) goal of “the fastest thing we can put out, with the highest profit margin, that the most people want”.

                          I wish–oh how much do I wish!–that the second group was favored, but they aren’t as competitive as they need to be and they aren’t as munificent or excellent as they think they are.

                  2. 1
                    1. 5

                      In my eyes that’s proof that Docker failed to build a moat more than anything else, and in fact it is more likely to be evidence in support of friendlysock’s theory than the opposite: companies don’t go gently into the night, VC-funded ones especially, so you can be sure that those billions fueled pantagruelian marketing budgets in a desperate scramble to become the leading brand for deploying distributed systems.

                      Unfortunately for them the open source game didn’t play out in their favor.

                      1. 3

                        Unfortunately for them the open source game didn’t play out in their favor.

                        I don’t think there’s any actual disagreement here; just differences about how snarky we want to be when talking about the underlying reality. Yes, Docker is a company with VC cash that had an incentive to promote its core offering. But no, Docker can’t actually make the market accept its solutions, so e.g. Docker Swarm was killed by Kubernetes.

                        Okay, maybe you can say that Kubernetes was just promoted by Google, which is an even bigger capitalist nightmare - and okay, fine, that’s true - but at the end of the day, propaganda/capitalism/whatever you want to call it can only go so far. You can get to a certain point by just being big and hyped, but then if you aren’t actually any good, you’ll eventually end up crashing against reality, like Docker Swarm or Windows Mobile or XML or the Soviet Union or whoever else tries to substitute a marketing budget for reality.

                        1. 2

                          but at the end of the day, propaganda/capitalism/whatever you want to call it can only go so far.

                          I do agree that containers are a solution to a problem. An imperfect solution to a problem we should not have in the first place but, regardless, it’s true that they can be a useful tool in the modern development world. That said, I fear that it’s the truth that can only go so far, and that skilled use of a communication medium can produce a much bigger impact in the short to medium term.

                      2. 5

                        That article suggests they raised more than a quarter of a billion dollars, and then talks about how they lost to the even more heavily propagandized (by Google) Kubernetes meme when they couldn’t figure out how to monetize all the victims. Neither of those seems a clear counter to there being a vast capitalist conspiracy.

                        Like, devs get memed into dumb shit all the time by sales engineers.

                        If they didn’t there wouldn’t be devrel/devangelist positions.

                        Edit:

                        (and just to be clear…I’m not denying that Docker has some use cases. I myself like it for wrapping up the seeping viscera of Python projects. I’m just disagreeing that it was from some spontaneous outpouring of developer affection that it got where it is today. See also, Java and React.)

                        1. 4

                          Like, devs get memed into dumb shit all the time by sales engineers.

                          If they didn’t there wouldn’t be devrel/devangelist positions.

                          Yeah, true enough based on my experience as a former dev advocate.

                          1. 1

                            Neither of those seems a clear counter to there being a vast capitalist conspiracy.

                            There can’t be two vast capitalist conspiracies. If there are two, it’s not a vast conspiracy. Calling it a “capitalist conspiracy” either means that there is only one or that you like using snarky names for perfectly ordinary things.

                            1. 2

                              I would call a conspiracy of half the capitalists pretty vast, FWIW.

                  3. 2

                    Yes. But that was only the conclusion of my argument; I think it’s fair to say that the actual points I was making regarding dependencies are pretty objective/factual and specific to the docker situation.

                  4. 2

                    While I agree and am loath to defend Docker in any way, if instead of a Docker image we were talking about a Dockerfile, then that would be comparable to a build system that also declares dependencies.

                    1. 2

                      I completely disagree. Makefile/CMake/Meson/whatever are convoluted, difficult to learn, etc but they are fundamentally different from what docker gives you.

                      Agreed.

                      They plug in to the existing ecosystem, they compose nicely with downstream packages, they’re amenable to distro packaging, they offer well-defined, stable, and standardized interfaces for consumption.

                      I disagree. The interfaces aren’t stable or standardized at all. Distros put a huge amount of effort into trying to put fingers into the leaking dam, but the core problem is that Make is a Turing complete language with extreme late binding of symbols. The late binding makes it easy to write a Makefile that works on one machine but not another. Adding more layers of autoconf and whatnot does not really solve the core problem. The thing C/C++ are trying to do is… not actually that hard at all? It’s just linking and building files and trying to cache stuff along the way. Newer languages just include this as part of their core. But because every C/C++ project has its own Turing complete bespoke solution, they are incompatible and can’t be moved to new/different platforms without a ton of human effort. It’s a huge ongoing PITA for everyone.

                      The thing that would actually be good is to standardize a descriptive, non-Turing-complete configuration language that can just describe dependencies between files and version constraints. If you had that (big if!), then it wouldn’t be a big deal to move to new platforms, deal with platforms changing, etc. Instead we get duplication where every distro does its own work to fill in the gaps by being the package manager that C/C++ need.

                      1. 2

                        Sorry if I wasn’t clear: the abstracted interfaces I’m referring to aren’t provided by the Makefile or whatever. I meant standardized things like pkgconf definition files in their place, man files in their place, using the packages made available by the system package manager rather than bringing in your own deps, etc.

                  5. 9

                    Another language (albeit more high-level than Zig) that has a great C interop story is Nim. Nim makes it really simple to wrap C libraries for use in Nim.

                    1. 6

                      One thing that I really like about Zig is how good it is going the other way: making libraries that can be called from C easily (and thence other languages). How does Nim handle that case? It’s an often neglected case.

                      1. 4

                        I used Nim to write a library callable by JNI on Android (https://github.com/akavel/hellomello), totally fine. The current version of the project (unfortunately I believe it’s somewhat bitrotten now) is macroified, but an earlier iteration showed clearly how to do it by hand.

                        1. 2

                          While I doubt anyone uses it, my very fancy ‘ls’ has a shared library/.so extension system where you can build .so’s either in Nim or in C and either way load them up. A C program could similarly load them up no problemo with dlopen & dlsym. That extension/.so system may constitute a coding example for readers here.
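
                          Roughly, the C side of loading such an extension is just dlopen & dlsym, as in the minimal sketch below (the ./ext.so path and the ext_run entry point are hypothetical placeholders, not the actual names my tool uses; on most Linux systems you’d link the host with -ldl):

                          ```c
                          #include <dlfcn.h>
                          #include <stdio.h>

                          /* Minimal sketch: load a .so extension (built from C, Nim, or anything
                           * else exporting a plain C ABI) and call a function it agreed to export.
                           * "./ext.so" and "ext_run" are hypothetical names for illustration. */
                          int main(void) {
                              void *handle = dlopen("./ext.so", RTLD_NOW);
                              if (!handle) {
                                  fprintf(stderr, "dlopen: %s\n", dlerror());
                                  return 1;
                              }

                              int (*ext_run)(const char *) =
                                  (int (*)(const char *))dlsym(handle, "ext_run");
                              if (!ext_run) {
                                  fprintf(stderr, "dlsym: %s\n", dlerror());
                                  dlclose(handle);
                                  return 1;
                              }

                              int rc = ext_run("hello from the host");
                              dlclose(handle);
                              return rc;
                          }
                          ```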

                          1. 1

                            It should be fairly easy, though I can’t attest to that through personal experience. The GNUNet project calls Nim code from their C code IIRC.

                        2. 6

                          We have already seen how disruptive changing language can be when the Python cryptography package added a Rust dependency which in turn changed the list of supported platforms and caused a lot of community leaders to butt heads.

                          My understanding of the situation was that it only broke the package on unsupported platforms, that others were unofficially supporting downstream. Said others also missed the warning on the mailing list months in advance (AFAICT because they simply weren’t following it and/or it wasn’t loud enough), and frankly that’s kind of alarming given that it’s a security package.

                          Link to the previous discussion of this whole controversy: https://lobste.rs/s/f4chm2/dependency_on_rust_removes_support_for

                          1. 5

                            Since a few people on HN misinterpreted the purpose of mentioning that example, I’ll preemptively quote here my reasoning behind its addition to the article:

                            The point about the Python package example is not to say that Zig can get on platforms where Rust can’t, but rather that the C infrastructure that we all use is not that easy to replace and every time you touch something, regardless of how decrepit and broken it might have been, you will irritate and break someone else’s use case, which can be a necessary evil sometimes but not always.

                            1. 3

                              I’m still not sure you’ve chosen a good example – that Python cryptography package was already de facto broken on those unsupported platforms (in the sense that crypto that the developers have never even attempted to make work correctly with the build dependencies that were in use should never have been trusted to encrypt anything) long before Rust ever showed up on the scene. The entire controversy around that package was about people mistaking “can be coerced into compiling on” for “works correctly on”, and thereby assuming and foisting onto end users a lot of ill-conceived, dangerous risks.

                              “Zig would have allowed people to keep lying to themselves in this case” seems…uncompelling. There must be better examples & arguments that could be made here? Not “irritating and breaking” somebody’s broken, dangerous usecase is just not a selling point. It’s rather the opposite if you’re a package author who would prefer that people didn’t continue to build an unsupported footgun for users to shoot themselves with out of your work.

                              And this all misses teh_cyanz’s larger point, which is that the quote

                              changed the list of supported platforms

                              is simply incorrect. It continued to build on all supported platforms. It broke the build for some people on platforms that had never been supported (this was explicitly stated by the maintainer), but who had just happened to hack together builds that may or may not have ever even worked correctly.

                          2. 6

                            I agree with the sentiment that rewrite-it-in-X is not viable for many software projects, though I can’t fully agree with the reasoning for why Zig and not Rust. Rust’s support for interfacing with C is really good, and for C++ there is https://cxx.rs emerging. Don’t get me wrong, I like Zig and want it to succeed; it’s just that for that specific purpose Rust might be the better target because of the guaranteed absence of undefined behavior in safe Rust code. See e.g. https://daniel.haxx.se/blog/2020/10/09/rust-in-curl-with-hyper/ for how curl can be configured to use certain Rust-based components.

                            1. 15

                              The post is almost in its entirety about using Zig to compile (zig cc) and build (zig build) C/C++ projects. This is not something that Rust intends to offer and has very little to do with interfacing with C. It’s about being a C/C++ compiler.
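
                              To make that concrete, here’s a minimal sketch: a plain C file, no Zig source anywhere, built with Zig acting purely as a C (cross-)compiler. The file name and the target triple are just illustrative examples:

                              ```c
                              /* hello.c - plain C, no Zig code involved.
                               *
                               * Native build, with zig acting as the C compiler:
                               *   zig cc hello.c -o hello
                               *
                               * Cross-compile the same file (example triple, shown for illustration):
                               *   zig cc -target aarch64-linux-musl hello.c -o hello-arm64
                               */
                              #include <stdio.h>

                              int main(void) {
                                  printf("compiled by zig cc, written in C\n");
                                  return 0;
                              }
                              ```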

                              In the only part where I mention extending a C project with Zig, I also mention that you can do the same with Rust.

                              1. 8

                                Rust has a weaker build.rs and cc crate, but even this is often sufficient to throw away the C build system and Ship-Of-Theseus it in Rust.

                                1. 2

                                  Indeed, my comment was tangential and in regards to

                                  Instead of running away from the C/C++ ecosystem, we must find a way of moving forward that doesn’t start by throwing in the trash everything that we have built in the last 40 years.

                                  with which I very much agree. If you talk about this, then of course the question of C/C++ interop comes up. From my own cursory attempts at making zig talk to C libs, I ran into problems pretty quickly trying to consume Win32 APIs, as the header files could not be parsed. This of course can be worked around and fixed, but it is pretty central to the question. (And Rust’s cbindgen has its limits as well, of course.) My understanding is also that zig provides no direct way to call into C++, which is undoubtedly a tricky subject, but it is also quite central to the topic.

                                  But back to the build system: my very incomplete understanding of the redis command example that you provide is that it also required reworking some of the Makefiles. This is expected of course, but I doubt a little that you can just throw zig into a decades-old codebase of any size without issues. It might still be easier than integrating with Rust/cargo, but there is work there either way. And so far, having a custom build.rs that uses the cc and/or cmake crate has provided good build support for the things I attempted.

                                  And don’t get me wrong, I like zig and the approach it takes. The compile time meta programming is really cool and the focus on a fast and versatile toolchain is great. I definitely want it to succeed and to offer a real “systems programming” alternative.

                              2. 5

                                once you learn JS you can do […] video games (Unity)

                                Not true. Newer versions of Unity don’t have JavaScript support at all, and even in older versions that did support it, it was a weird non-standard offshoot of JavaScript instead of the kind you see in your browser.

                                1. 5

                                  This article’s argument falls to case analysis, although it seems solid at first. Simply ask: Is this C code going into the (Linux) kernel? If yes, then the toolchain must be able to compile the kernel, and zig cc can’t do that yet. If no, then Zig is only one of many languages with decent toolchains for emitting native code. It’s been known for a long time that the reason why UNIX is oriented around C is because UNIX kernels are written in C.

                                  This isn’t to say that C is good, merely that zig cc needs to be able to compile chimeric kernels of mixed C and Zig before we can seriously imagine a Zig-based UNIX.

                                  1. 5

                                    Maybe I just have a weird perspective, but “stuff that goes into the OS kernel” seems a pretty niche domain to me. (Not to mention that IMHO almost nothing should go into a kernel, but I realize today’s mainstream OS’s are nowhere near that ideal.)

                                    As for “Zig is only one of many…”, you, like others, have misread this post. It’s not about porting stuff to Zig, it’s about using Zig as a sane cross-platform build system for C/++ code.

                                    1. 2

                                      I see your second point. I should clarify the context on your first point, though.

                                      When not talking about unikernels, we want the syscall barrier to be impenetrable. This means that the vast majority of code is written only to run on one side of the syscall barrier. In turn, this means that systems programming should be thought of as coming in two distinct flavors on a UNIX system: the kernel and the userland.

                                      The takeaway for many of us, a long time ago, was that a language which replaces C must do so in both the kernel and userland niches. It would be easier to instead specialize on one niche, and popular userland languages like Python and ECMAScript are wholly dedicated to that path instead.

                                      1. 1

                                        (I was thinking of microkernels, but same difference.) I get your point, but I think outside-the-kernel systems programming is still a valid domain.

                                        How different is the kernel ABI on Linux? I guess I’m asking that rhetorically because I could just look it up if I really wanted to know; but I imagine it wouldn’t be too much work to add support for it to Zig. (…he said, in that hand-wavey way one assesses problems one isn’t familiar with.)

                                        1. 1

                                          How different is the kernel ABI on Linux?

                                          Not very. The big difference (with Linux and any other kernel except Solaris) is that the equivalent of libc is very different. Kernels typically start with a freestanding C environment and then build the rest of the environment on top. Memory allocation is the biggest reason that this is important. In userspace, malloc may end up calling mmap or platform equivalents, which will then acquire a bunch of locks to allocate address space (and possibly need to find physical pages, which may acquire more locks). This is fine because userspace can’t possibly be holding those locks. In the kernel, you need separate allocation paths for the cases where acquiring VM subsystem locks could deadlock. This is typically done with a flag indicating whether malloc should block, or return NULL immediately if it can’t satisfy the allocation from existing pools.
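
                                          As a rough sketch of that flag, using Linux’s kmalloc as the example (FreeBSD’s malloc(9) has the equivalent M_WAITOK / M_NOWAIT distinction):

                                          ```c
                                          /* Sketch of the two kernel allocation paths described above,
                                           * using Linux GFP flags for illustration. */
                                          #include <linux/slab.h>

                                          void *alloc_in_process_context(size_t n)
                                          {
                                                  /* May sleep while the VM subsystem finds memory: only safe
                                                   * when we cannot be holding the locks it needs. */
                                                  return kmalloc(n, GFP_KERNEL);
                                          }

                                          void *alloc_in_atomic_context(size_t n)
                                          {
                                                  /* Must not sleep (interrupt context, spinlock held, ...):
                                                   * returns NULL immediately if the existing pools can't
                                                   * satisfy the request. */
                                                  return kmalloc(n, GFP_ATOMIC);
                                          }
                                          ```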

                                          1. 1

                                            Makes sense. How does that affect the compiler, then? Isn’t it just a matter of linking against a different libc? (Unless the compiler generates code that bypasses libc, like Go; does Zig do that?)

                                            1. 2

                                              It doesn’t really impact the compiler, as long as your compiler can integrate with the provided functions and, importantly, doesn’t need to allocate memory for anything that isn’t explicit in the program. The Linux headers are an exciting C-like language that GCC can parse in some C modes but not C++, so accessing the kernel functionality may be a problem. Rust is effectively duplicating a load of the contents of the headers in Rust.

                                  2. 4

                                    C++ is still my daily driver, and I’m happy with it … except for the pain of either making my projects build on foreign platforms [read: without Xcode], or incorporating outside libraries. While CMake is a lot better than make/autoconf, that’s still like saying the flu is better than Ebola.

                                    In short I very much like the idea of using Zig as a sane build system for C/++. I’ve bookmarked those Redis examples for later perusal.

                                    1. 4

                                      Don’t rewrite it in Rust, unless you need to make major changes. Don’t distribute C/C++ source that nobody can easily build, either. We should be compiling legacy stuff to WASM, and running it sandboxed.

                                      1. 4

                                        WASM is less secure than native code along some important dimensions – it has a flat memory space and lacks protections on the stack and heap. See figure 1 in the paper for a summary.

                                        Many C libraries require many syscalls to be available to them (curl, etc.) or have transitive dependencies that do. The bigger the attack surface, the more important these issues are in practice.
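
                                        A tiny (hypothetical) example of the class of problem the paper describes: built natively with a stack protector, the overflow below typically aborts the process; compiled to WASM, the same write lands in the module’s linear-memory shadow stack, where there are no canaries or guard pages, so it can silently corrupt whatever data sits next to it.

                                        ```c
                                        #include <stdio.h>
                                        #include <string.h>

                                        /* Illustrative only: a classic stack buffer overflow. */
                                        static void greet(const char *name) {
                                            char buf[16];
                                            strcpy(buf, name);   /* overflows buf if name is longer than 15 chars */
                                            printf("hello %s\n", buf);
                                        }

                                        int main(void) {
                                            /* Natively, -fstack-protector or a guard page has a chance of
                                             * catching this; inside WASM linear memory it just overwrites
                                             * neighbouring bytes without faulting. */
                                            greet("AAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAA");
                                            return 0;
                                        }
                                        ```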

                                        https://www.usenix.org/conference/usenixsecurity20/presentation/lehmann

                                        https://news.ycombinator.com/item?id=24216764

                                        https://lobste.rs/s/fr8ki1/everything_old_is_new_again_binary

                                        1. 2

                                          WASM is still missing some major bits of functionality like C++ exceptions and tail-calls.

                                          Also, how much legacy stuff uses system APIs that aren’t available in WASM’s sandbox?

                                          1. 2

                                            Is WASM anywhere near the point where that’s feasible? Actual question. It seems like we’d need incredibly fast interpreters for various architectures and OSs and all of those take time to build.

                                            1. 3

                                              It’s there, it just doesn’t solve the problem @Sophistifunk is implying. It is easy to run a library in a sandboxed environment with or without WASM as long as the library is designed with that use case in mind. A lot of Windows libraries export their interfaces entirely in terms of COM objects with clearly-defined copy semantics for all buffers and no shared state. You can run these in an unprivileged DCOM server and have strong isolation. Unfortunately, the libraries where you actually want to run them sandboxed are not these ones, they’re the ones that are optimised for performance at the expense of everything else and have shared mutable state propagating across the library boundary in all sorts of places. WASM doesn’t help here at all because your WASM-compiled library has a different ABI to the code containing it and so can’t be used directly (except from a language like Verona that has first-class sandboxing of foreign code as part of its core type system).

                                              1. 1

                                                These libraries simply need to be abandoned. Slightly faster code at the cost of CVEs is not acceptable, and I look forward to the day that acting so negligently means your customers can sue you.

                                                1. 2

                                                  A back of the envelope calculation suggests that the cost of rewriting all of that code is on the order of $10T. That’s probably a lowball estimate and I wouldn’t be surprised if it’s low by 1-2 orders of magnitude. So, while I agree that we should get rid of it, it’s not going to happen any time soon.

                                              2. 2

                                                Firefox is already doing that with a few libraries (graphite, ogg), using RLBox + wasm2c (not lucet anymore).

                                                Also, article about easy wasm2c usage: https://kripken.github.io/blog/wasm/2020/07/27/wasmboxc.html

                                                1. 1

                                                  It’s very close. And no need to interpret it, it’s easy to compile. It’s also easy to inspect and verify, and the modules only have access to things you give them.

                                                  1. 2

                                                    At some point we have to commit to safety and correctness at the cost of a speed hit; otherwise the anti-safety crowd can always use the speed difference, no matter how minuscule, to prevent us from achieving safety.

                                                    We already take a huge hit with things like Java, Python, Node, etc. Given the variability between hardware platforms, and that single-core performance has been largely flat for years, the absolute speed cost of mitigations is a ruse. Somehow now, with this code running on the fastest processor in the world, we can’t sacrifice an XX% reduction in throughput compared to native, but the code running on the previous generation was ok?

                                                    Focusing on top-of-the-line speed above all else will enable the anti-safety folks to always be able to move the goal posts.

                                                    lol, edit, I see /u/unrelentingtech posted the wasmboxc link. it is very much what we are all looking for.

                                              3. 2

                                                Can you use tabs yet?

                                                1. 1

                                                  all but a solved problem.

                                                  Is this intended to mean ‘anything but a solved problem’?

                                                  1. 1

                                                    One thing that LLVM can’t do, is link MachO executables for Apple Silicon (the new Apple ARM chips)

                                                    Wait, what? I’ve not seen this mentioned anywhere else. I’m sure the XCode version of ld is customized by Apple, but I’d be amazed if vanilla lld doesn’t work on an M1…

                                                    1. 3

                                                      Apple doesn’t yet use the LLVM linker in the Apple-provided toolchains. Their own linker, ld64, does not yet have a completely finished drop-in replacement in LLVM, though one is underway. Every pair of binary format and architecture needs custom code in the linker. Apple’s linker is pretty good and so this hasn’t been a priority for anyone who works outside of Apple (just use ld64 - it works fine with LLVM LTO modes and it’s pretty fast, much faster than GNU BFD ld) or inside Apple (just use ld64, it’s their system linker). I believe the Apple folks would like to move to an lld-based linker at some point so that they’re not the only ones maintaining the linker but moving in that direction will always be lower priority than making sure that the system linker works well.

                                                      1. 1

                                                        When Jakub explained it to me, I too was amazed both at that fact, and at the horrible choice of names.

                                                      2. 1

                                                        I believe that “wrap the compiler” is the wrong approach, because what happens if you need to wrap the compiler twice? This can happen if you want to do FFI between two higher-level languages.

                                                        My gut instinct has been to recommend that tools provide information in a way that existing tooling can consume, such as .pc files to include FFI headers/RTS libs, and maybe a dependency tool that emits Makefile-format information. Is this wrong? I have only been on the user side of these discussions, never the toolchain developer.
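
                                                        As a concrete (entirely hypothetical) sketch of what I mean: if a toolchain for some higher-level language installs libfoo together with a foo.pc, a C consumer never needs to know how libfoo was built, because the existing tooling already understands pkg-config:

                                                        ```c
                                                        /* consumer.c - plain C, consuming a hypothetical libfoo produced
                                                         * by some other toolchain.
                                                         *
                                                         * The toolchain would install a foo.pc roughly like:
                                                         *   prefix=/usr/local
                                                         *   includedir=${prefix}/include
                                                         *   libdir=${prefix}/lib
                                                         *
                                                         *   Name: foo
                                                         *   Description: hypothetical library with a C FFI
                                                         *   Version: 1.2.3
                                                         *   Cflags: -I${includedir}
                                                         *   Libs: -L${libdir} -lfoo
                                                         *
                                                         * and any existing build system can then do, e.g.:
                                                         *   cc consumer.c $(pkg-config --cflags --libs foo) -o consumer
                                                         */
                                                        #include <foo.h>           /* hypothetical header shipped by libfoo */

                                                        int main(void) {
                                                            return foo_do_thing(); /* hypothetical entry point */
                                                        }
                                                        ```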