1. 43

    1. 65

      Counting lines of changelog as a proxy for stability and churn doesn’t work. Different projects have different verbosity in their announcements, and the impact of each feature or change can vary greatly.

      The article complains about an anemic standard library, but most of those dreaded changelog lines are adding functions to the stdlib.

      Rust can have a dozen releases in a row that add only tiny quality-of-life improvements, and once in a while drop a feature that has been 7 years in the making. That’s not related to release frequency. Things land when they’re ready.

      But the cherry on top is presenting Node as if it were more stable, with less churn. Node actually makes backwards-incompatible changes (both Node and Rust take semver-major seriously, but only Node ever bumps it). Node doesn’t have editions that keep old packages working. In a 3-year-old project you can easily find yourself choosing between a Node that is too new for your deps and a Node that is too old to work on your OS.

    2. 66

      There are a lot of reasons to not use Rust, but this post does not list them out. Speaking as someone who has used Rust professionally for four years, this is my take on these points:

      Rust is (over)hyped

      The rationale being that it’s Stack Overflow’s most loved language and 14th most used… It’s a growing language, seeing more and more industry adoption. Currently, due to posts like this, it is a hard sell to management for projects which aren’t low-level, despite developers loving using it (and I’ve been told multiple times that people feel far more comfortable with their Rust code, despite not being experts in the language). Rust might be overhyped, but the data provided to back up this claim is just not correct.

      Rust projects decay

      I know this person has written a book on Rust, but… I have to question what the hell they’re talking about here. The steady release cycle of the Rust compiler has never once broken my builds, not even slightly. In fact, Rust has an entire edition system which allows the compiler to make backwards-incompatible changes while still being able to compile old code.

      I mean, seriously, I genuinely don’t know how the author came to this conclusion based on the release cycle. Even recent releases don’t have many added features. Every project I have ever come across developed in Rust, I’ve been able to build with cargo build, and I’ve never once had to think about the version it was developed with or what I had in my toolchain. Python 3 has had a series of breaking changes fairly recently, and it’s being held up as a language doing it “better” because it has fewer releases.

      Rust is still beta (despite the 1.0)

      sigh. Because async traits aren’t stabilized? Even though there is a perfectly workable alternative, the async-trait crate, which simply makes some performance trade-offs? I’m excited for async traits being stabilized, and it’s been a bummer that we haven’t had them for so long, but that doesn’t make it beta.
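      For context, that workaround works by returning boxed futures from trait methods. Below is a std-only sketch of roughly the shape the async-trait crate lowers to (names here are illustrative, not the crate’s actual output), plus a tiny poll-once executor to drive it:

      ```rust
      use std::future::Future;
      use std::pin::{pin, Pin};
      use std::task::{Context, Poll, RawWaker, RawWakerVTable, Waker};

      // Roughly the shape async-trait lowers an `async fn` in a trait to:
      // a boxed, dynamically dispatched future. The trade-off is one heap
      // allocation and a vtable call per invocation.
      trait Fetch {
          fn fetch<'a>(&'a self) -> Pin<Box<dyn Future<Output = u32> + 'a>>;
      }

      struct Constant(u32);

      impl Fetch for Constant {
          fn fetch<'a>(&'a self) -> Pin<Box<dyn Future<Output = u32> + 'a>> {
              Box::pin(async move { self.0 })
          }
      }

      // Minimal executor: polls once, enough for a future that is immediately ready.
      fn block_on_ready<F: Future>(fut: F) -> F::Output {
          unsafe fn clone(_: *const ()) -> RawWaker {
              RawWaker::new(std::ptr::null(), &VTABLE)
          }
          unsafe fn nop(_: *const ()) {}
          static VTABLE: RawWakerVTable = RawWakerVTable::new(clone, nop, nop, nop);
          let waker = unsafe { Waker::from_raw(RawWaker::new(std::ptr::null(), &VTABLE)) };
          let mut cx = Context::from_waker(&waker);
          let mut fut = pin!(fut);
          match fut.as_mut().poll(&mut cx) {
              Poll::Ready(v) => v,
              Poll::Pending => panic!("future was not immediately ready"),
          }
      }
      ```

      The boxing is exactly the performance trade-off mentioned above; the trait itself stays object-safe and usable on stable.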

      The standard library is anemic

      This is just an opinion the author has that I strongly disagree with (and I imagine most Rust developers would). The standard library is small, this was/is a design decision with a number of significant benefits. And they do bring 3rd party libraries into the standard library once they are shown to be stable/widely used.

      async is hard

      To put this more accurately, Rust forces you to write correct async code, and it turns out correct async code is hard. This is an important distinction, because a language like Go makes it just as easy to write incorrect async code as it does correct async code. Having been bitten by enough data races and other undefined behavior in my lifetime, I love Rust’s stance on async code, which is to make it hard to do incorrectly with minimal runtime overhead.

      Frankly, the Rust compiler is some incredible engineering that is also pushing the bounds of what a programming language can do. I mean, seriously, as frustrating as async Rust can be to work with, it is an impressive feat of engineering which is only improving steadily. Async Rust is hard, but that is because async code is hard.

      [edit] Discussed below, but technically Rust just prevents data-races in async code, and does not force you to write code which is free from race-conditions or deadlocks (both of which are correctness issues). Additionally, the “async code” I’m talking about above is multi-threaded asynchronous code with memory sharing.

      Frankly, the points being made in this post are so shoddy I’m confused why this is so high on lobsters. The anti-Rust force is nearly as strong as the pro-Rust force, and neither really contributes to the dialog we have on programming languages, their feature-set and the future of what programming looks like.

      Use Rust, don’t use Rust, like Rust, don’t like Rust, this post is not worth reading.

      1. 18
        async is hard

        To put this more accurately, Rust forces you to write correct async code, and it turns out correct async code is hard. This is an important distinction, because a language like Go makes it just as easy to write incorrect async code as it does correct async code.

        I have not written any production Rust code (yet) but the “async is hard” resonates with me. I’ve wrestled with it in C, C++, Java, and Go and it’s easy to make a mistake that you don’t discover until it’s really under load.

        1. 19

          it’s easy to make a mistake that you don’t discover until it’s really under load.

          I think you really hit the nail on the head with this point. The particularly damning thing about data-race bugs is that they are probabilistic. So you can have latent code with a 0.0001% chance of having a data-race, which can go undetected until you reach loads which make it guaranteed to occur… And at that point you just have to hope you can (a) track it down (good luck figuring out how to recreate a 0.0001% chance event) and (b) it doesn’t corrupt customer data.

          There is a reason so many Rust users are so passionate, and it’s not because writing Rust is a lovely day in the park every day. It’s because you can finally rest at night.

        2. 7

          I’m in the process of switching to Rust professionally. I dabbled for years, and the biggest selling point of Rust is its ability to help me write working software with little or no undefined behaviour.

          Most languages let you build large applications that are plagued by undefined behaviour.

      2. 13

        Async Rust is hard, but that is because async code is hard.

        I think this is debatable. Async Rust introduces complexities that don’t exist in other async models which rival it in ease of use and efficiency. It has unintuitive semantics, like async fn f(&self) -> T and fn f(&self) -> impl Future<Output=T> subtly not being the same (the latter needs an explicit + 'self_lifetime bound on the Future). It also allows odd edge-cases that could’ve been banned to simplify the model:

        • Future::poll() can be spuriously called and must keep the most recently passed-in Waker stored. Being two words in size, Wakers can’t be atomically swapped out, so updating and notification require mutual exclusion. Completion-based async models don’t require this.
        • Waker is both Clone and Send, meaning it can outlive the task which owns/polls the Future. This results in the task having to be heap-allocated (plus reference-counted to track outstanding Wakers) if it’s spawned/scheduled generically. Contrast this with something like Zig async or rayon’s join!(), which allow stack-allocated structured concurrency. Waker could’ve been tied to the lifetime of the Future, forcing leaf futures to implement proper deregistration but reducing task constraints.

        The cancellation model is also equally simple, useful, and intricately limiting/error-prone:

        • Drop being cancellation allows you to select!() over any futures, not just ones that take CancellationTokens or similar as in Go/C# (neat). Unfortunately, it means everything is cancellable, so a().await; b().await is no longer atomic from the callee’s perspective (or “halt-safe”), whereas it is in other async models.
        • It also means you can’t do asynchronous cancellation, since Drop is synchronous. Futures which borrow memory and use completion-based APIs underneath (e.g. Overlapped IOCP, io_uring) are now unsound if cancelled, unless you 1) move ownership of the memory into the Futures (heap allocation, ref counting, locked memory) or 2) block in Drop until the async cancellation completes (which can deadlock the runtime: waiting to drive IO while holding the IO thread).

        Sure, async is hard. But it can be argued that “Rust async” is an additional type of hard.

        1. 2

          Async rust introduces complexities that don’t exist in other async models which rival in ease-of-use and efficiency.

          I don’t disagree that Rust introduces an additional kind of hard: async Python is much easier to use than async Rust. I wrote in another comment how it all comes down to trade-offs.

          I do agree with you that there are more sharp edges in async Rust than in normal Rust, but from my understanding of how other languages do it, no language has found a solution whose trade-offs would be acceptable for Rust’s design.

          Personally, I think async is the wrong paradigm, but also happens to be the best we have right now. Zig is doing interesting things to prevent having the coloring problem, but I don’t think any language is doing it perfectly.

        2. [Comment removed by author]

      3. 4

        Async Rust is hard, but that is because async code is hard.

        Any data to back that exact claim? I love Rust and I’ve been working professionally in it for the last few years, but I think I would still find the Erlang approach to async code easier.

        1. 11

          Fair point, and to really dive into that question we have to be more specific about what exactly we’re talking about. The specific thing that is hard is multi-threaded asynchronous code with memory sharing. To give examples of why this is hard, we can just look at the tradeoffs various languages have made:

          • Python and Node both opted to not have multi-threading at all, and their asynchronous runtimes are single-threaded. There is work to remove the GIL from Python (which I actually haven’t been following very closely), but in general, one option is to avoid the multi-threading part entirely.

          • Erlang/BEAM (which I do love) makes a different tradeoff, which is removing memory sharing. Instead, Erlang/BEAM processes are all about message-passing. Personally, I agree with you, and I think the majority of asynchronous/distributed systems can work this way effectively. However, that isn’t to say it is without tradeoffs: message passing is overhead.

          So essentially you have two options to avoid the dangerous shenanigans of multi-threaded asynchronous code with memory sharing, which is to essentially constrain one of the variables (multi-threading or memory sharing). Both have performance trade-offs associated with them, which may or may not be deal-breaking.

          Rust lets you write multi-threaded asynchronous code with memory sharing and write it correctly. In general though I agree with you about the Erlang approach, and there isn’t really anything stopping you from writing code in that way with Rust. I haven’t been following this project too closely, but Lunatic (https://github.com/lunatic-solutions/lunatic) is a BEAM alternative for Rust, and last I checked in with it they were making great progress.
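          To make the memory-sharing option concrete, here is a minimal std-only sketch. It compiles only because the shared state is wrapped in Arc and Mutex; dropping the Mutex and mutating the counter directly would be rejected at compile time rather than racing at runtime:

          ```rust
          use std::sync::{Arc, Mutex};
          use std::thread;

          // Shared mutable state across threads. This compiles only because
          // Arc gives shared ownership and Mutex serializes access; removing
          // the Mutex would be a compile error, not a latent data race.
          fn parallel_sum(chunks: Vec<Vec<u64>>) -> u64 {
              let total = Arc::new(Mutex::new(0u64));
              let mut handles = Vec::new();
              for chunk in chunks {
                  let total = Arc::clone(&total);
                  handles.push(thread::spawn(move || {
                      let sum: u64 = chunk.iter().sum();
                      *total.lock().unwrap() += sum; // lock guards the shared counter
                  }));
              }
              for h in handles {
                  h.join().unwrap();
              }
              let v = *total.lock().unwrap();
              v
          }
          ```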

          1. 4

            Yes, I can agree that “multi-threaded asynchronous code with memory sharing” is hard to write. That’s a much more reasonable claim.

            The only thing I would disagree with slightly is the assertion that Rust solves this problem. That’s not really completely true, since deadlocks are still just as easy to create as in C++. The only sort-of-mainstream solution for that I can think of is STM in Clojure (and maybe in Haskell?).

            1. 3

              Fair enough, it’s just a bit of a mouthful :)

              I hadn’t heard of STM, but that is a really cool concept, bringing DB transaction notions to shared memory. Wow, I need to read about this more! Though I don’t think it solves the deadlock problem globally: if we’re considering access to something that is not memory (e.g. the network), and thus not covered by STM, then we can still deadlock.

              From my understanding, solving deadlocks is akin to solving the halting problem; there simply isn’t a general way to avoid them. But you are right, Rust doesn’t solve deadlocks (nor race conditions in general), just data races. I’ll modify my original text to clarify this a bit.

              1. 6

                Bear in mind, though, that STM has been through a hype cycle and some people are claiming that, like String Theory, it’s in the “dead walking” phase rather than past the hype. For example, Bryan Cantrill touches on transactional memory in a post from 2008 named Concurrency’s Shysters.

                So fine, the problem statement is (deeply) flawed. Does that mean that the solution is invalid? Not necessarily — but experience has taught me to be wary of crooked problem statements. And in this case (perhaps not surprisingly) I take umbrage with the solution as well. Even if one assumes that writing a transaction is conceptually easier than acquiring a lock, and even if one further assumes that transaction-based pathologies like livelock are easier on the brain than lock-based pathologies like deadlock, there remains a fatal flaw with transactional memory: much system software can never be in a transaction because it does not merely operate on memory. That is, system software frequently takes action outside of its own memory, requesting services from software or hardware operating on a disjoint memory (the operating system kernel, an I/O device, a hypervisor, firmware, another process — or any of these on a remote machine). In much system software, the in-memory state that corresponds to these services is protected by a lock — and the manipulation of such state will never be representable in a transaction. So for me at least, transactional memory is an unacceptable solution to a non-problem.

                As it turns out, I am not alone in my skepticism. When we on the Editorial Advisory Board of ACM Queue sought to put together an issue on concurrency, the consensus was twofold: to find someone who could provide what we felt was much-needed dissent on TM (and in particular on its most egregious outgrowth, software transactional memory), and to have someone speak from experience on the rise of CMP and what it would mean for practitioners.

        2. 3

          I think you’ll find Erlang much harder tbh. Have you used it much? Erlang requires that you do a lot of ‘stitching up’ for async. In Rust you just write .await, in Erlang you need to send a message, provide your actor’s name so that a response can come back, write a timeout handler in case that response never comes back, handle the fact that the response may come back after you’ve timed out, decide how you can recover from that, manage your state through recursion, provide supervisor hierarchies, etc.

          1. 8

            Fortunately, almost all of that is abstracted away by gen_server, so in practice you don’t actually do all that boilerplate work yourself; you just take advantage of the solid OTP library that ships with Erlang.

          2. 3

            For sure I have way more experience with Rust, but I’m not really sure that everything you listed is a downside, or Erlang-specific. You also need to handle timeouts in Rust (e.g. tokio::time::timeout and something (a match?) to handle the result), and you might also need to handle the possibility that a future will be cancelled. Others, like recursion (which enables hot reloads) and supervisors, are not obvious negatives to me.

            1. 1

              Handling a timeout in Rust is pretty trivial. You can just say timeout(f, duration) and handle the Result right there. For an actor you have to write a generalized timeout handler and, as mentioned, deal with timeouts firing concurrent to the response firing back.

              I think for the most part handling cancellation isn’t too hard, at least not for most code. Manual implementors of a Future may have to worry about it, but otherwise it’s straightforward - the future won’t be polled, the state is dropped.
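              A std-only, thread-based sketch of that timeout(f, duration) shape (the async version lives in runtimes like tokio; this helper is hypothetical):

              ```rust
              use std::sync::mpsc;
              use std::thread;
              use std::time::Duration;

              // Hypothetical helper mirroring the async `timeout(f, duration)` shape,
              // built on std threads: run `f` on a worker and stop waiting after `limit`.
              // Returns None on timeout; the worker finishes in the background.
              fn timeout<T, F>(f: F, limit: Duration) -> Option<T>
              where
                  T: Send + 'static,
                  F: FnOnce() -> T + Send + 'static,
              {
                  let (tx, rx) = mpsc::channel();
                  thread::spawn(move || {
                      let _ = tx.send(f()); // receiver may already be gone; ignore the error
                  });
                  rx.recv_timeout(limit).ok()
              }
              ```

              The caller handles the result right where the call is made, with no timeout plumbing inside the callee.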

          3. 2

            In Rust you just write .await, in Erlang you need to send a message

            TBH I do not see difference between these two.

            provide your actor’s name so that a response can come back

            You can just add self() as a part of message.

            write a timeout handler in case that response never comes back,

            As simple as adding an after clause to the receive block.

            handle the fact that the response may come back after you’ve timed out

            Solved in OTP 24 with erlang:monitor(process, Callee, [{alias, reply_demonitor}]).

            decide how you can recover from that

            In most cases you simply do not try to recover from that and instead let the caller do that for you.

            The simplest async-like receive, from the docs, looks like:

            server() ->
                receive
                    {request, AliasReqId, Request} ->
                        Result = perform_request(Request),
                        AliasReqId ! {reply, AliasReqId, Result}
                end,
                server().

            client(ServerPid, Request, Timeout) ->
                AliasMonReqId = monitor(process, ServerPid, [{alias, reply_demonitor}]),
                ServerPid ! {request, AliasMonReqId, Request},
                %% Alias as well as monitor will be automatically deactivated if we
                %% receive a reply or a 'DOWN' message since we used 'reply_demonitor'
                %% as unalias option...
                receive
                    {reply, AliasMonReqId, Result} ->
                        Result;
                    {'DOWN', AliasMonReqId, process, ServerPid, ExitReason} ->
                        error(ExitReason)
                after Timeout ->
                    error(timeout)
                end.
            And that is all.

            1. 2

              TBH I do not see difference between these two.

              The difference is huge, and it’s kind of the whole selling point of actors. You cannot share memory across actors, meaning you cannot share state across actors. There is no “waiting” on an actor, for example, and there is no way to communicate “inline” with an actor. Instead you must send messages.

              You can just add self() as a part of message.

              Sure, I wasn’t trying to imply that this is complex. It’s just more. You can’t “just” write .await, it’s “just” add self() and “just” write a response handler and “just” write a timeout handler, etc etc etc. Actors are a very low level concurrency primitive.

              As simple as adding after block to the receive block.

              There’s a lot of “as simple as” and “just” to using an actor. With async/await there’s just .await. If you need a timeout, you can opt into one with a timer, and even that is simpler.

              The tradeoff is that you share state and couple your execution to the execution of other futures.

              Solved in OTP 24 with erlang:monitor(process, Callee, [{alias, reply_demonitor}]).

              It’s “solved” in that you have a way to handle it. In async/await it’s solved by not existing as a problem to begin with. And I say “problem” loosely - literally the point of Erlang is to expose all of these things, it’s why it’s so good for writing highly reliable systems, because it exposes the unreliability of a process.

              It takes all of this additional work and abstraction layering to give you what async/await has natively. And that’s a good thing - again, Erlang is designed to give you this foundational concurrent abstraction so that you can build up. But it doesn’t change the fact that in Rust it’s “just” .await.

              1. 4

                Sure, I wasn’t trying to imply that this is complex. It’s just more….Actors are a very low level concurrency primitive.

                Sure, if you pretend you have to raw-dog actors to do concurrency in Erlang, and that OTP doesn’t exist to take care of almost all the boilerplate in gen_server etc. We could also pretend that async/await syntax doesn’t exist in Rust and we need to use callbacks. Wow, complex!

      4. 3

        I am curious what the recent Python 3 breaking changes are.

        1. 3

          Perhaps the best example is Python 3.7 making async and await reserved keywords (and thus breaking code which used them as variable names; the syntax was introduced in 3.5, and using them as identifiers only warned in 3.6). In Rust, the equivalent change was made via the 2018 Edition using the edition system.

          The difference is that a Python 3.7+ interpreter can’t run older code using async/await as non-keywords, while Rust can compile a mix of 2015 Edition and 2018 Edition code, some of it using async/await as identifiers and some as keywords.
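          Concretely, the edition is a per-crate setting in Cargo.toml, so old and new crates link together in one build. A minimal, hypothetical manifest:

          ```toml
          [package]
          name = "legacy_crate"   # hypothetical crate still using `async` as an identifier
          version = "0.1.0"
          edition = "2015"        # per-crate opt-in; dependents can be on 2018 or later
          ```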

        2. 1

          It’s possible GP has reached the same age as me, where you mentally think something happened last year when it was like 3 years ago.

    3. 20

      So many of the changes to Rust are just new stdlib methods on various types. It’s not really churn, it’s nice lil things that come up in time. Major changes to the language are rare and happen in batches every couple of years.

      Rust was Stack Overflow’s most loved language for 7 years in a row (in 2023, they replaced loved/dreaded by admired/desired), and yet, was ranked as the 14th most used in 2022.

      All that means is that people are writing Rust at home, not at work. Like me.

      Related to the point above, Rust is actually still in the beta phase, with some important features such as async Traits still missing, which brings more churn to the ecosystem.

      Meh, async is not beta. I’ve written tens of thousands of lines of async Rust. It’s well beyond beta. And async traits are landing soon - not that we haven’t had a pretty trivial workaround, the async-trait crate, for years.

      I’m not a fan of this approach due to the big attack surface that it opens to supply chain attacks and backdoors,

      Meh, I disagree. I don’t really think it makes that big of a difference to security tbh. Barely.

      more difficult.

      TBH it makes it easier. In sync Rust you might ask “can I avoid a copy? can I share my stack?”, but in async Rust the answer is “just clone the value, don’t think about it”.

      Accidental blocking is absolutely a problem though, I hope we get a tool for that, maybe something like Go’s race detector? Dunno.

      Anyway, I disagree with the conclusions.

    4. 14

      I find it a little odd to recommend Rust as a C or assembly replacement (even more so for the latter). I 100% see Rust as a C++ replacement, in all the ways that could mean.

      Otherwise, yeah, like others have mentioned, the author is on the nose. I stopped using Rust after about 3 years when I realized the complexity is out the wazoo. The lack of a standard, the massive toolchain needed to build Rust itself, and the massive man-hours needed to create a Rust compiler are just way too much. In comparison, look at the number of C compilers over the years.

      It’s for this exact reason that I switched to Zig. If you want 100% memory safety, then Go (or any GC language) really is a great choice.

      1. 22

        Rust is a great C replacement. C features and design patterns map almost 1:1 to Rust. Writing of wrappers for C libraries is straightforward. There’s very accurate c2rust.

        OTOH Rust has plenty of impedance mismatches with C++. It doesn’t have enough OOP features to translate C++ interfaces to usable Rust. Can’t emulate copy constructors. Doesn’t have placement or RVO. Templates vs generics are all different except the superficial choice of angle brackets for their syntax.

        I know some people equate it like: C is small, C++ is big, Rust is big, therefore Rust == C++, but that is a false equivalence. They have their complexity for different reasons, and there’s only a small overlap in their features.

        1. 6

          Rust is a great C replacement. C features and design patterns map almost 1:1 to Rust. Writing of wrappers for C libraries is straightforward. There’s very accurate c2rust.

          Only if the ownership model of the library you’re wrapping lines up cleanly with Rust’s ideas, which is not always the case.

          A good write-up on this is the article Giving up on wlroots-rs by the developer behind the wlroots rust wrapper wlroots-rs, as well as the Way Cooler compositor.

          1. 5

            This is one famous example, but OTOH there are thousands of crates successfully wrapping C dependencies. A lot of libraries have boring init + free combos, sometimes an Rc or Cow equivalent. Some need an artisanal smart pointer.

        2. 5

          Rust is a C++ replacement, not a C one. Its concept of references and lifetimes is lifted from C++, and those concepts do not really exist in C (and where they do, they are described differently). Rust has an ML-style type system, but that doesn’t make it a child of C rather than of C++.

          1. 2

            Rust took move semantics from C++, but I don’t think “lifted a couple of features from” means it’s a suitable replacement for the whole language.

            I look at this from the perspective of Liskov substitutability. Rust is close to having a superset of C’s features, but only a subset of C++’s, and the seemingly common features (moves, references, dtors, overloading) have conflicting semantics.

            Ownership exists in C, because it’s inherent to having heap allocations and a free that must be called exactly once. References and lifetimes also exist when you can have pointers to the stack and non-GC pointers into the interiors of allocations. These things exist even if C doesn’t use explicit terminology for them.

            C doesn’t enforce lifetimes via the type system, but via threats of UB, and runtime crashes/corruption. By analogy, dynamically typed languages can’t prevent type errors at compile time, but that doesn’t mean they don’t have types or type errors.
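            A small Rust sketch of that same ownership idea, where the compiler rather than convention guarantees the “free” (here, Drop) runs exactly once (the DROPS counter is just instrumentation for illustration):

            ```rust
            use std::sync::atomic::{AtomicUsize, Ordering};

            // Instrumentation only: counts how many times "free" happened.
            static DROPS: AtomicUsize = AtomicUsize::new(0);

            // An owning wrapper around a heap allocation; Drop plays the role of free().
            struct Buffer {
                data: Vec<u8>,
            }

            impl Drop for Buffer {
                fn drop(&mut self) {
                    DROPS.fetch_add(1, Ordering::SeqCst); // the "free exactly once"
                }
            }

            fn use_buffer() {
                let b = Buffer { data: vec![0u8; 16] };
                let b2 = b;       // move: using `b` afterwards would be a compile error
                let _ = &b2.data; // borrows are checked against b2's scope
            }                     // b2 goes out of scope: exactly one drop, no double free
            ```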

            There’s also another way to look at it, putting language theory aside. Rust has the least amount of success in GUIs and game development, which happen to be C++’s strong domains. GTK in C has decent Rust bindings. Qt is unusable. Rust was accepted in Linux, which famously rejected C++.

        3. 1

          That’s really interesting to hear. Do you have a resource I (and readers) could check out regarding Rust-C equivalence? :o

          And yes, your last paragraph is more or less my reasoning. Well, that and my experience that Rust’s advanced features are not like C++’s, but are as powerful as the ones in C++ (unlike C, which lacks them).

    5. 11

      As far as I know, today, Rust is the only programming language (other than C) able to create efficient WebAssembly (wasm) libraries: no need to embed a big runtime or to use an over-complex compilation toolchain.

      Zig is pretty good for this too, with support for wasm and wasi targets in the compiler and no large runtime needed. Just compile your library with -target wasm32-freestanding or -target wasm32-wasi and you’re good to go.

      The language is pre-1.0 though, so it doesn’t have an advantage over Rust there.

    6. 8

      This article makes really weak/wrong points. I could see why someone might want to pick Go over Rust, but not for these reasons.

      I’d like to add that there is no requirement to use async Rust. Yes, it is harder, but 99% of the time it is unnecessary, and it can be isolated to the parts that need it (e.g. handling large numbers of connections).

    7. 6

      Exactly what I hear from peaceful crustaceans. Sad but true.

    8. 5

      Rust projects decay

      Between January 2020 and September 2023 Rust has seen 31 releases, which amounts to 4500 lines of changelog.

      Lots of people have already commented on this part of the article in this thread, but I want to add this really doesn’t line up with my experiences. I’ve a small library I started before Rust version 1 came out, and the only changes I’ve had to make to it have been for feature requests and churn in third-party libraries. I don’t think I’ve ever had to change any of the code so that it’d work on a newer version or edition of Rust (it still uses the 2018 edition!). I’ve not had a comparable experience in any other language I’ve worked in.

    9. 5

      I don’t agree with the point that the standard library must be batteries-included. Common Lisp has libraries that are used in lieu of stdlib implementations, such as Bordeaux-Threads. Having this would mitigate the first point (i.e. too many releases), as any functionality upgrade can be done independently of the language itself. And when you have a language like Rust, which tries to do so much with its design, where so much effort has to go into the language itself, letting libraries pick up the slack is a great idea.

      I do agree with the rest of the article though.

    10. 4

      Not sure if the number of releases is a huge deal, as Rust ships every 6 weeks (if I’m not mistaken), regardless of whether those releases contain a lot of changes or not. So the fact that it has had 31 releases since 2020 isn’t necessarily indicative that there are too many breaking changes to fix if you sat on your library for a couple of years.

      However, I have seen a theme where the language gets increasingly more complicated as time goes on. There are also a lot of features hidden in nightly. Maybe not all of them will make it in, but there seem to be way too many under consideration. It’s already a very information-dense language. As someone who just got into it, the complexity only seems to increase the more I learn about it.

    11. 3

      Can somebody give a short summary on why Rust has async? Why not just stick to threads?

      1. 9
        1. Rust has found itself competing in the “web service” space. In that space users very often want async for a number of reasons. Dealing with a Slow Loris attack without async, for example, is hard. Threads are also relatively expensive in terms of memory and creation cost. For a language that prioritizes “fast” and “light”, threads aren’t totally ideal if you want lots and lots of concurrency.

        2. Before async/await, things could be awkward. Writing code that you wanted to time out meant that your various network/IO clients had to expose a socket API at every level. It’s often desirable to place your concrete client impl behind a trait, but then your trait has to expose socket APIs, or timeouts have to be specified at a much higher level. With async that’s much less of a concern: you can handle timeouts pretty trivially, and those timeouts can apply to non-IO code. You could write a JSON-lines decoder that, after every 100 lines, yields back to the caller. This is really nice and trivial to express with async, and it doesn’t even have to do with performance or parallelism or IO.
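        That yielding-decoder idea can be sketched with a hand-written Future and a toy poll loop, using std only (to_uppercase stands in for real JSON decoding; the names are made up):

        ```rust
        use std::future::Future;
        use std::pin::{pin, Pin};
        use std::task::{Context, Poll, RawWaker, RawWakerVTable, Waker};

        // Processes `lines`, voluntarily yielding to the caller every `batch` lines.
        // Cooperative scheduling with no IO involved, expressed as a hand-written Future.
        struct BatchedDecoder {
            lines: Vec<String>,
            batch: usize,
            idx: usize,
            out: Vec<String>,
        }

        impl Future for BatchedDecoder {
            type Output = Vec<String>;
            fn poll(self: Pin<&mut Self>, cx: &mut Context<'_>) -> Poll<Self::Output> {
                let this = self.get_mut();
                let end = (this.idx + this.batch).min(this.lines.len());
                while this.idx < end {
                    // Stand-in for decoding one JSON line.
                    this.out.push(this.lines[this.idx].to_uppercase());
                    this.idx += 1;
                }
                if this.idx == this.lines.len() {
                    Poll::Ready(std::mem::take(&mut this.out))
                } else {
                    cx.waker().wake_by_ref(); // ask to be polled again soon
                    Poll::Pending
                }
            }
        }

        // Toy executor: polls in a loop until the future completes,
        // counting how many times the future was polled.
        fn run<F: Future>(fut: F) -> (F::Output, usize) {
            unsafe fn clone(_: *const ()) -> RawWaker {
                RawWaker::new(std::ptr::null(), &VTABLE)
            }
            unsafe fn nop(_: *const ()) {}
            static VTABLE: RawWakerVTable = RawWakerVTable::new(clone, nop, nop, nop);
            let waker = unsafe { Waker::from_raw(RawWaker::new(std::ptr::null(), &VTABLE)) };
            let mut cx = Context::from_waker(&waker);
            let mut fut = pin!(fut);
            let mut polls = 0;
            loop {
                polls += 1;
                if let Poll::Ready(v) = fut.as_mut().poll(&mut cx) {
                    return (v, polls);
                }
            }
        }
        ```

        With 250 lines and a batch size of 100, the future yields twice and completes on the third poll: the decoder shares the executor fairly with other tasks without any threads or IO in sight.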

      2. 3

        Sticking to threads is the right thing to do; async should be employed only when needed, and only in the little parts of the code where it is needed.

        Unfortunately, the Rust community jumped on the async bandwagon uncritically, because it is web-scale.

    12. 3

      You should try Zig. Dependent types included.

      1. 6

        Starting a new project in a memory unsafe language in 2023 is foolish

        So no, they definitely should not since Zig is not a memory safe language.

        1. 1

          if you compile with ReleaseSafe, the program will crash before anything unsafe happens

          1. 7

            No, not always and it might never become safe, see: https://github.com/ziglang/zig/issues/2301

      2. 4

        Zig doesn’t have dependent types (in the sense of Idris, Agda, Lean, Coq, etc), it has templates. The syntax is cool, but the static semantics of comptime instantiation is very different – much closer to C++ and D.

        1. 1

            A function in Zig can act like a type constructor for dependent types (see my article). It’s not as good as Idris etc., of course.

          1. 11

            They are more like macros, or untyped, memoized functions that are staged at compile time. In dependently typed languages functions are checked before application/instantiation, not after (like in Zig, D, C++, etc), with compile time evaluation carefully interleaved with type checking. I think it’s ok to say Zig’s comptime parameters are “a bit like dependent types if you squint”, but saying they “are dependent types” is muddying the terminology too much.


              True. The generated code is type-checked at compile time; good enough for me.


                Yeah, just please don’t continue to muddy things. There are important differences.

                The generated code is type-checked at compile time; good enough for me.

                The issue is that you can introduce bugs in library code that can break downstream consumers without knowing it. It’s the same sort of issue you run into with dynamically typed languages, just pushed back to compile time. This also leads to downstream users getting exposed to the guts of your library internals when they use an API wrong, which is not a great user experience.

                Now, I’m not going to claim that dependently typed languages have everything sorted out either with regards to modularity, etc. For example as a result of definitional equality, changing the internal implementation of functions can also break downstream consumers (some dependently typed languages let you make those internals private, but library authors are often pretty lax about this). But these are different issues to those surrounding template expansion, and I think it’s important to recognise these distinctions to help us better compare languages and their tradeoffs.