Graydon’s outlook here is really impressive.
He kicked off an incredibly influential project, then didn’t block other people from evolving it in a direction that wasn’t his original vision, and can talk about it so sensibly. Clearly he’s attached to some aspects of his initial vision (and he does sometimes argue that specific Rust features were mistakes/shouldn’t be adopted), but recognizes that actually existing Rust fills an important niche, that it probably couldn’t have filled while following his initial vision.
So many tech luminaries would be writing bitter posts about how much better everything would be if the project had just listened to them. Or they never would’ve stepped down in the first place, and the project would’ve stalled.
Yes definitely … and I was thinking about this a little more: What do Graydon-Rust and Rust-2023 actually have in common? The only things I can think of are:

- it’s an imperative ALGOL-like language that has algebraic data types (OCaml influence)
- it has the fn keyword
- it pushes the boundary on the type system, but even that is different – “typestate” vs. borrow checking

Almost everything else seems different?

- the syntax is more elaborate, as he says, and it has more traditional C-style keywords like break and continue
- integer types are different (see auto-bignum)
- container types are different (vec would be builtin, vs. library)
- the unit of code larger than a function would be different – ML-like modules vs. traits
- the type system would be very different – more structural than nominal, less inference
- memory management would be dynamic/GC, not static
- concurrency would be different – actors vs. async/await
- error handling would be different – “the result isn’t what I wanted at any point, and I don’t know where I would have gone with it”
- there would be more dynamic features – reflection, more dynamic dispatch
- metaprogramming would be different – he wanted quasiquotes
- it would have tail calls, but it would NOT have environment capture
- the implementation is different – OCaml rust-prehistory vs. self-hosted LLVM
That’s like EVERYTHING in a language !!! What did I miss?
It’s a bit shocking that it even worked, and ended up where it is today … it’s so wildly different.
It seems like Mozilla got together a bunch of talented language and compiler engineers around Graydon’s side project, and then produced something completely different.
As an aside, has anyone rendered the early doc links from this post (and published them)?
https://github.com/graydon/rust-prehistory/blob/df8cc964772b36fe120df51eb5ee408b6dc2953a/doc/rust.texi#L82-L137
OTOH the initial assessment of why Rust was needed was spot on, so I think it accomplished the goal, even if via a different path.
Yeah definitely, looking over it, I would add these stable design points: swap[T]() syntax. So in those senses it’s the same, but it still surprises me how different it turned out!
(Also fun to see the stack iterators and actors, re-reading that deck)
That’s a really neat artifact!

Very funny that he cautions against rewriting a major project in Rust. Truly the Rust he wanted had no future.

Interesting that he used square brackets for generics and the project later switched to angle brackets. Lots of people like to complain about the parsing ambiguity of angle bracket generics on the web.

Seeing a slide titled OMGWTFBBQ made me oddly nostalgic.
I had exactly the same thought while reading this. The other thought I had was that I would have very much preferred many of the ideas he suspects the reader would not. In many cases the choices he would have made perfectly match my retroactive wishes after having used rust for a while.
I can’t think of an example of a tech luminary who would be bitter. it might be i’m super skeptical about what constitutes a luminary, though.
Linus Torvalds would 100% say “it’s shit” if he had not been guiding Linux until recently.
I think this is a huge misread of Torvalds
He’s infamous for some abusive rants, but they’re all directed at Linux contributors, and not at people working on other open source projects, or their work
To me it’s not a coincidence that he created what’s arguably the most collaborative open source project of all time, and he gets emotional about what is let into the codebase.
It’s basically because of the large scope and highly collaborative nature of the project – that’s the only “control” he has
Most maintainers would try to review all patches, and they would drown in it, and lose contributors, and the project would end up smaller. But he doesn’t, so he fishes out “smells” and then blows up when he sees something he doesn’t like
I’m not defending it (I certainly wouldn’t want to work in that environment). But I’m saying it’s not coming from a petty or jealous place
And TBH the rants generally seem like they have a reasonable technical justification, and there could be something to learn. He just chose to make it profane and abusive so everybody remembers it … kind of a shitty tactic, but there’s a method to the madness
He’s infamous for some abusive rants, but they’re all directed at Linux contributors, and not at people working on other open source projects, or their work

The first Linus rant that comes to mind is the one where glibc replaced memcpy with one that complied with the spec and was faster, but broke programs that expected memcpy to behave like memmove. So I’m going to have to disagree with this as a statement of fact.
Sure, but I’d say Torvalds is angry because he has “skin in the game” … Not because he’s a petty or jealous person.
I mean Graydon quit and Torvalds didn’t until recently, and Linux is much bigger than Rust – it’s inherently a stressful position
I’ve corresponded with him directly many years ago, and he’s a very clear and effective communicator, always interested in the best solutions.
It’s unfortunate that he got what he wanted – people remember the 1% of his extreme anger – but 99% of the time he’s helpful and effective.
(And not to say I don’t have any technical problems with what they’re doing. Lots of Linux and git are simply incoherent. But I regard that as a side effect of scale. It’s literally parallel development where people don’t think about what others are doing. The contrast is something like https://www.oilshell.org/ where I try to keep the whole language in my head, and make it globally coherent, and it doesn’t scale development-wise. I think that’s mostly OK for what we’re doing, but it’s a tradeoff, and could be better.)
That particular change broke the closed source Flash player plugin, which basically broke the Web back then. Have to agree with Linus there.
Yeah, because that’s what he said about git after giving up his primary developer status to Junio C. Hamano? Oh, wait… https://www.linuxfoundation.org/blog/blog/10-years-of-git-an-interview-with-git-creator-linus-torvalds
Haha, don’t ruin my perfectly good slander with facts. :-)
It’s certainly an unusual stance, but I don’t know if it led to the best outcome vs. being BDFL and making Rust a better language.
Better for what and along what axis is the question.
The vision the community (and probably Mozilla) latched onto is clearly very different from Graydon’s, but is it worse for it? Would the world be better off with another efficient-ish applications language, and more importantly still without any sort of viable replacement or competitor to C++?
The performance aspect seems like the most important difference to me, and could be a deal breaker for many use cases. Wasn’t Go designed as a replacement for C++ though?
Kinda but not.
It was designed as a replacement for some C++ uses at the upper edge (or is it the bottom?): network services, daemons, some CLIs. But it never aimed to broadly replace C++ itself in the niches where it’s solid – rather, to take the space where C++ and scripting languages would meet.
Very interesting, having been aware of Rust since ~2010, and having read Graydon’s comments on language design over the years, I knew it was originally a much different and more dynamic language. But there are lots of interesting details here.
I’d prefer relaxing the C++ competitor / zero-cost constraint as well, but I agree that such a language wouldn’t have been as popular.
The way I think of Rust is that it has a MONOPOLY on the performance + memory safety combo.
If you’re the Linux kernel, Chrome, Firefox, or Android, and you want a memory safe language without any performance excuses NOT to adopt it (which people will bring up), then Rust is your ONLY option. Given the current state of systems languages, it appears it might be your only option in ten or more years too!
(I prefer GC for the things I’m working on, but working on a GC has taught me first-hand that the performance costs can be huge. On the other hand, a fast GC has a huge code complexity cost. Rust puts the complexity in the type system and arguably the user interface.)
Other points
Interesting comments regarding Rust’s Ruby-style “exterior” iterators and Python-style “interior” iterators. I usually reference this blog post on that issue, which calls them “external” and “internal”. I also think of it as “Python pull” vs. “Ruby/Rust push”.
Glad to find someone else who agrees on environment capture. Maybe it’s because I’ve been programming in Python/C++ for too long, but I don’t find implicit capture useful, or easy to reason about. I prefer explicit capturing of state, with “normal code”. (PHP actually has an interesting middle ground! You explicitly list the variables to capture.)
Missing auto-bignum. Very interesting! Some people took issue with my comments that Rust’s definition of unsafe is NOT the definition of unsafe in programming languages. Here’s a clear and useful definition:

A program fragment is safe if it does not cause untrapped errors to occur. Languages where all program fragments are safe are called safe languages. Therefore, safe languages rule out the most insidious form of execution errors: the ones that may go unnoticed.

– Luca Cardelli
So while Rust’s integer behavior is MUCH better than C/C++, it’s also unsafe. Graydon agrees:

Integers overflow and either trap or wrap. Great. Maybe in another decade we can collectively decide this is also an important enough class of errors to catch? (Swift at least traps in release by default – I wish Rust had chosen to).
On library-defined containers, iterations, smart pointers, and cross-crate monomorphization. I don’t know which side I fall on, but it’s interesting to me that this is still controversial! The fact that Go got generics ~10 years later is a useful data point.
Complex grammar – I get his point, and there are probably specific places where the syntax could have been simpler. But overall I’d say that humans are able to more easily recognize subtle syntax than computers, so human-friendly syntax makes sense. Conversely, humans are not just annoyed by redundant syntax, but it can make programs harder to read.

Reducing the labor of writing compiler front ends / linters / formatters is still something of an open problem. He points to some evidence, and I was surprised to learn recently that after ~12 years, Go’s influential AST module is not “settled”. But I think we should just figure out the canonical way to do that, e.g. exporting a semi-stable lossless syntax tree from the compiler. People want subtle syntax, and many tools need to understand that syntax.
Traits – I don’t have much experience or an opinion here, but his comments seem to somewhat contradict this recent good blog post by @matklad:
https://matklad.github.io/2023/03/28/rust-is-a-scalable-language.html

Cargo defines a rigid interface for a reusable piece of Rust code. Both producers and consumers must abide by these rules, there is no way around them. As a reward, they get a super-power of working together by working apart.
This seems borne out by evidence, although I don’t use Cargo. Maybe he thinks that there is a point in the design space that would have produced better composition?
Personally I use a more dynamic dependency inversion style in GC languages, where you glue all your components together with Turing-complete code in main(), not with Cargo. But that’s arguably at a smaller scale.
but his comments seem to somewhat contradict this recent good blog post by @matklad:

This is complicated :-)
The first thing is that, when we talk about modules, there are really two different language features (more on this):

- modules as in ML modules, a vehicle for polymorphism
- modules as in JavaScript or Rust modules — a container of things and a unit of distribution
Rust has an excellent module system in the second sense (crate/module separation; modules are nested bundles of things; crates are anonymous; crates have an explicit DAG of dependencies; a crate is a visibility boundary; the compiler supports linking two versions of a crate into a single binary; there’s only one way to build a crate (Cargo)), and it powers a lot of scalability.
When it comes to polymorphism, traits are better than Java-style inheritance (which rigidly couples type & interface) or C++ style duck typing + ADL (which doesn’t define an interface at all, and makes it hard to speak about semver), but are arguably worse than modules would have been. Some deficiencies of traits are papered over with conditional compilation, which is also an extra-linguistic wart.
Case study — there’s serde, which defines a serialization interface, and there’s time, which defines date-time types. With modules, we’d have a serde-time crate which depends on the two and defines serialization for date-time. With traits, one of them has to depend on the other. That obviously sucks, so we use conditional compilation to make time depend on serde only if that’s enabled at build time. That gets things done, but is horrible:

- conditional compilation exists outside of the type system, so all language tooling becomes heuristics-based
- as a library author, you now need to check a combinatorial explosion of build-flag combinations to make sure that everything works
- separate compilation goes out the window: you need to wait until serde compiles before you can compile time and things that transitively depend on time, as opposed to compiling serde and time separately, and only blocking for the “serialization of time” bit.
I’m not sure about your serde-time case study. I think that there are trait / type-class systems that would allow this, defining a separate third serde-time package. In Haskell, for example, those are called “orphan instances”, and most settings of GHC accept them by default (an instance of a class/trait for a type is not an “orphan” if it is defined in the same place as either the class/trait or the type itself). Accepting orphan instances has design downsides, as checking for coherence becomes non-local, but this is a well-known point in the trait/typeclass design space that is known to improve modularity.

Thanks for the response! I do wish ML “modules” were called something else – I never understood how OCaml modules were really that different from Java/C++-style objects. They both couple state and behavior, and provide polymorphism – maybe it’s a static vs. dynamic issue. I recall that I asked once on /r/programminglanguages, but I’ve forgotten since I don’t use OCaml. [1]
I think the conditional compilation issue kind of gets at my point. I think it’s just simpler if you stitch together “modules” (units of code) with Turing-complete code in main(). You always need something like that. This prevents a “shadow language” of modules. (This reminds me of Zig’s decision to represent files as structs, which I think of as a very interesting “Perlis-Thompson” solution.)
I use dynamic dispatch wherever necessary, and it’s not slow for anything I’m doing. But there should be a language where you can use a mix of dynamic and static dispatch in that style too. (Classes without virtual functions are a form of static dispatch, but they can’t do everything templates can.)
So I’m basically happy with using classes in a dependency inversion style WITHIN a single program. But inheritance across library boundaries is indeed a huge smell, often leading to brittle designs (though SerenityOS does it in the “classical” style I believe).
So I’ve always wondered about how to make something that scales to a wider ecosystem like Cargo or npm.
But the “smell” I see is the tall pyramids of dependencies. I do not think software should be structured like that :) This is after working with Google’s monorepo – the “default” is very much a pyramid, and it leads to enormous binaries and huge compile times – software dev where a build cluster is required. People complain about 1G Docker containers, but the equivalent where a normal statically linked binary is 200MB - 1G is just as bad.
You have all these unnecessary transitive dependencies because people haven’t organized their code in a sane way. It’s more economical to just add a function in any old place and add a dependency, than to structure your code.
npm suffers from this huge pyramid as well. The pyramid doesn’t just bloat binaries, but it causes bugs due to instability at the base.
The alternative is using dependency inversion, basically where the main() depends directly on 100 different modules, and then wires them up dynamically. I am not sure how this compares with Graydon’s view, but I find that it scales very well. You never really run into a “factoring problem” that can’t be solved easily.
I actually think the answer is separately distributed IDLs and ABIs – “header files” essentially. That is, runtime software composition (like Windows COM and shell) rather than compile-time. At a certain scale you need this kind of stability, and explicit protocol design. And then the internals can be rapidly refactored with a static type system.
(edit: just read over your comment again, and it seems like you may agree with this, e.g. with regard to .mli.)

I also re-read this after your comment and it was good - https://faultlore.com/blah/swift-abi/ (I have been brainstorming about turning Oils’ metalanguage into an actual language for users :) It doesn’t have modules yet.)
I would encourage writing up the linked comment on module system design … I definitely think FP vs. OOP is missing the point, and the relationship to modules is pretty interesting too. I agree with what you said about the smaller unit having cycles and the larger unit being acyclic.
[1] I googled and chimed in on this thread
https://old.reddit.com/r/ProgrammingLanguages/comments/nxumma/nuts_or_genius_modules_are_classesobjects/
and this is related
https://old.reddit.com/r/ProgrammingLanguages/comments/mee6zy/ocaml_modules_vs_cjava_oop/
Though their use in practice requires a bunch of real examples, e.g.
https://old.reddit.com/r/programming/comments/8uup0/xavier_leroy_objects_and_classes_versus_modules/
The most surprising one here is about traits. I share Graydon’s affinity towards ML-family languages, which means I’ve used module systems in the way he’s describing here. But I don’t know many people who prefer module systems, especially when contrasted with typeclass / trait implementations.
Programmers tend to prefer the implicit resolution of traits, meaning you can just write “.to_string()” and the compiler will know which implementation you mean, vs. writing Int.to_string(). That’s not to say that everything programmers prefer is what’s best for them :)
I actually wonder if Rust would have been able to become so popular without traits.
As somebody who used Rust from its early days, I think back then I would have much preferred traits and found modules confusing and clunky. It’s only these days that I now get Graydon’s preference for module systems, and really wish for them in Rust (not that they would ever be, or should ever be added to it at this stage).
I think Rust made the right decision for the time and place, but I’m hoping to see languages in the future that can figure out how to marry modules with implicits like in OCaml’s modular implicits proposal.
I read this as the inverse of the JavaScript origin story, i.e. one man toiling for a week or two and laying out a bunch of semi-OK ideas that the rest of the world has lived with and worked around for decades. Whereas the Rust story is more one man starting it, then allowing a team to slowly evolve it to 1.0 with practically nothing but technical battles and compromises to get each feature out. Oddly, I agree with him partially about traits and disagree about reflection: reflection is a cool feature, but not something I would want to see used in code I maintain.
Sounds like Rust is not the language he wanted it to be, but it is the language the world needed.
This is an abjectly fascinating post, I would love for some of these ideas to be picked up and explored more by other languages.
First-class &. I wanted & to be a “second-class” parameter-passing mode, not a first-class type, and I still think this is the sweet spot for the feature. In other words I didn’t think you should be able to return & from a function or put it in a structure. I think the cognitive load doesn’t cover the benefits.

I think this might be great for a lang with a Go or Swift-like level of abstraction, that uses borrow checking to assist GC or RC memory management. My general advice to new Rust programmers who ask “how do I put a reference into a structure” is “you don’t want to”, followed by “you really don’t want to” and “you miiiiiight want to if the performance is worth putting up with a lot of headache”. The main place I make structures with references in them in Rust is iterators, and one of his other points involves changing iterators, so.
On the flip side, I think that being able to return a borrowed value from a function is probably pretty useful, so that may be worth supporting.
…In general I think structural types are better…

This is funny ‘cause IMO the really strong nominal types are one of Rust’s greatest advantages. XD They are such a departure from C/C++/Java style “sure I guess this type looks like that type if you squint, go ahead and use it like one” that it changes how you think about code; always being 100% certain about what type something is at any time is one of those things that seems like a limitation but is actually very liberating.
Expressivity: Similarly I would have traded away so much expressivity that it would probably make most modern Rust programmers start speaking about the language the way its current critics do: it’d feel clunky and bureaucratic, a nanny-state language…

This is actually interesting ’cause it kinda suggests how lots of small incremental changes can drastically change the feel of a language. Rust has lots of work put into convenience, which is a double-edged sword, but also so many of those things are fairly small things. I wonder what Graydon thinks of Austral?
This is funny ‘cause IMO the really strong nominal types are one of Rust’s greatest advantages.

Can you talk more about this? I think of Rust as being more structurally typed…
Wait, I think I have them backwards! Nominative typing means that differently named types are always incompatible, even if the underlying types are the same, right?
Yes. Structural typing means types with the same structure are equivalent; nominative typing means the name is the only factor of identity.

OCaml has a structural object system; Go has structural interfaces.
(Just for completeness) OCaml also has structural typing of modules and polymorphic variants. Quite a good mix of nominal and structural.
I think most abstractions come with costs and tradeoffs, and I would have traded lots and lots of small constant performance costs for simpler or more robust versions of many abstractions. The resulting language would have been slower. It would have stayed in the “compiled PLs with decent memory access patterns” niche of the PL shootout, but probably be at best somewhere in the band of the results holding Ada and Pascal.

I have vague memories of Rust being initially influenced by Ruby and wanting to be a readable-but-fast language, with an emphasis on readable-first, fast-second. Not so “influenced” as Crystal (that started as an almost 100% source-compatible compiler for Ruby), but to the point of copying various pieces of syntax.
Is this a case of Mandela Effect? Or was Rust really influenced by Ruby?
https://doc.rust-lang.org/reference/influences/influences.html lists Ruby as inspiration for the closure & block syntax. That was probably more present in the earlier versions, before internal iteration was stripped out.
Though I don’t know whether the internal iteration itself came from Ruby or from functional languages.
On the other hand, a fair amount of early interest did come from prominent ruby people (wycats, steve), which may have coloured your view?
Iirc, from early on, Graydon was targeting a systems language (whatever that means), that had good performance, and modest overhead. That’s a space that Go lives in, to take one example (worth remembering here that early Rust actually made use of garbage collection, and I don’t think it was just reference counting https://news.ycombinator.com/item?id=5811854).
That later evolved into “no compromises, target is to be competitive with anything you can write in C or C++”.
Whether you’d call that original vision “readable-first, fast-second” or not is a squishy question (I lean no, but clearly “fast” is more emphasized than it used to be).
worth remembering here that early Rust actually made use of garbage collection, and I don’t think it was just reference counting

It was not intended to remain that, but it didn’t have the time to move from the RC impl before it was stripped out. You can find this note for 0.12 in the full changelog:

The old, reference counted GC type, Gc<T> which was once denoted by the @ sigil, has finally been removed. GC will be revisited in the future.
Note that Rc and Arc are independent of that (Rc was added in 0.7 and Arc predates that).
pattern-binding ambiguity

This? Or equivalently, matches!(n, 1 | 2).

Problem: As anyone who knows bitwise operators can tell you, 1|2 == 3. So that’s obviously equivalent to:

matches!(n, 3)

Right? The answer is of course no. But how can it be? Because the syntax of a match expression is not the normal Rust syntax, but its own language, where bitwise or doesn’t mean bitwise or anymore.
I wish they had invented a new operator for that syntax instead of overloading an existing one. Because code like this reads like one thing and does something else, and when wrapped in a macro, you can’t tell.
Rather, the | is well known for or-patterns in languages which describe matching – e.g., look at | in regular expressions. I think the mistake here was that Rust went a little too overboard with macros. Every little thing doesn’t need a macro or new syntax sugar. Plenty of things can be expressed with the existing match expression.

It sounds like he could make another language, and have it replace Go (rather than C++, as Rust is doing now).
Honestly, it sounds like he wanted OCaml with some perf optimizations and multicore. He even wrote his Rust compiler in OCaml. And OCaml is now heading towards that sweet spot fairly quickly.
Go and Graydon-Rust are such different languages that I have trouble understanding this sentiment. You are comparing a language that was reluctant to add generics and a language that wanted first-class modules.
i’m coming at it more like substitution, rather than comparison: i think you could use graydon-rust in a lot of places that go is currently the best choice, and not regret it. much like how rust isn’t that similar to C++, but you can use it instead for an increasing number of projects
as far as i can tell, the reasons for go’s existence are: cheap concurrency, compile speed, and simplicity (for humans)
all of which graydon-rust would do a pretty good job of, while being more expressive