The YouTube channel here seems to be a person who needs to be dramatic for views. I think the actual content, and the position of the Ghostty author here on this topic, is pretty mild.
An actual bit from the video:
Guest: “…I don’t know, I’m questioning everything about Go’s place in the stack because […reasonable remarks about design tradeoffs…]”
Host: “I love that you not only did you just wreck Go […]”
Aside… In the new year I’ve started reflexively marking videos from channels I follow as “not interested” when the title is clickbait rather than a succinct synopsis of what the video is about. I feel like clickbait and sensationalism on YouTube are out of control, even among my somewhat curated list of subscribed channels.
This is why I can’t stand almost any developer content on YouTube and similar platforms. They’re way too surface-level, weirdly obsessed with the inane horse race of finding the “best” developer tooling, and clickbait-y to a laughable degree. I have >20 years of experience, I’m not interested in watching someone blather on about why Go sucks when you could spend that time talking about the actual craft of building things.
But, no, instead we get an avalanche of beginner-level content that lacks any sort of seriousness.
This is why I really like the “Developer Voices” channel. Great host, calm and knowledgeable. Interesting guests and topics. Check it out if you don’t know it yet.
I’m in a similar boat. Have you found any decent channels that aren’t noob splooge? Sometimes I’ll watch Asahi Lina, but I haven’t found anything else that’s about getting stuff done. Also, non-OS topics would be nice additions as well.
7 (7!) years ago, LTT made a video about why their thumbnails are so… off-putting, and it essentially boiled down to “don’t hate the player; hate the game”. YouTube rewards that kind of content. There’s a reason why nearly every popular video these days is some variant of “I spent 50 HOURS writing C++” with the thumbnail having a guy throwing up. If your livelihood depends on YouTube, you’re leaving money on the table by not doing that.
It’s not just “YouTube rewards it”, it’s that viewers support it. It’s a tiny, vocal minority of people who reject those thumbnails. The vaaaaast majority of viewers see them and click.
I don’t think you can make a definitive statement either way because YouTube has its thumb on the scales. Their algorithm boosts videos on factors other than just viewer click-through or retention rates (this has also been a source of many superstitions held by YouTubers in the past), and the way the thumbnail, title, and content metas have evolved makes me skeptical that viewers as a whole support it.
What is the alternative? That they look at the image and go “does this person make a dumb face”? Or like “there’s lots of colors”? I think the simplest explanation is that people click on the videos a lot.
…or it’s just that both negative and positive are tiny slices compared to neutrals but the negative is slightly smaller than the positive.
(I use thumbnails and titles to evaluate whether to block a channel for being too clickbait-y or I’d use DeArrow to get rid of the annoyance on the “necessary evil”-level ones.)
I am quite happy to differ in opinion to someone who says ‘great content’ unironically. Anyway your response is obviously a straw man, I’m not telling Chopin to stop composing for a living.
Your personal distaste for modern culture does not make it any higher or lower than Chopin, nor does it invalidate the fact that the people who make it have every right to make a living off of it.
They literally don’t have a right to make a living from YouTube; this is exactly the problem. YouTube can pull the plug and demonetise them at any second and on the slightest whim, and they have absolutely no recourse. This is why relying on it to make a living is a poor choice. You couldn’t be more diametrically wrong if you tried. You have also once again made a straw man with the nonsense you invented about what I think about modern culture.
How’s that any different from the state of the media industry at any point in history? People have lost their careers for any reason in the past. Even if you consider tech or any other field, you’re always building a career on top of something else. YouTube has done more to let anyone make a living off content than any other stage in history, saying you’re choosing poorly to make videos for YouTube is stupid.
You have also once again made a straw man with the nonsense you invented about what I think about modern culture
You’re the one who brought it up:
I am quite happy to differ in opinion to someone who says ‘great content’ unironically
Isn’t this kind of a rigid take? Why is depending on YouTube a poor choice? For a lot of people, I would assume it’s that or working at a fast-food restaurant.
Whether that’s a good long-term strategy, or a benefit to humanity is a different discussion, but it doesn’t have to necessarily be a poor choice.
Not really?
I mean sure if you’ve got like 1000 views a video then maybe your livelihood depending on YouTube is a poor choice.
There’s other factors that come into this, but if you’ve got millions of views and you’ve got sponsors you do ad-reads for money/affiliate links then maybe you’ll be making enough to actually “choose” YouTube as your main source of income without it being a poor choice (and it takes a lot of effort to reach that point in the first place).
We’ve been seeing this more and more. You can, and people definitely do, make careers out of YouTube and “playing the game” is essential to that.
Heh - I had guessed who the host would be based on your comment before I even checked. He’s very much a Content Creator (with all the pandering and engagement-hacking that implies). Best avoided.
Your “ghostty author” literally built a multibillion dollar company writing Go for over a decade, so I’m pretty sure his opinion is not a random internet hot take.
Yup. He was generally complimentary of Go in the interview. He just doesn’t want to use it or look at it at this point in his life. Since the Lobsters community has quite an anomalous Go skew, I’m not surprised that this lack of positivity about Go would be automatically unpopular here.
And of course the title was click-baity – but what can we expect from an ad-revenue-driven talk show?
I was able to get the incremental re-builds down to 3-5 seconds on a 20kloc project with a fat stack of dependencies which has been good enough given most of that is link time for a native binary and a wasm payload. cargo check via rust-analyzer in my editor is faster and does enough for my interactive workflow most of the time.
Don’t be a drama queen ;-) You can, all you want. That’s what most people do.
The host actually really likes Go, and so does the guest. He built an entire company where Go was the primary (only?) language used. It is only natural to ask him why he picked Zig over Go for creating Ghostty, and it is only natural that the answer will contrast the two.
I didn’t see many specific comparisons between Go and Zig in the video, more like a personal vibe or opinion on Go’s relevance at this point.
I believe Go still holds its ground between Rust, Zig, C, and higher-level scripting languages. Maybe I’m wrong but I would bet a lot of the push for manual memory management might be more about fashion or premature optimization than actual necessity.
I think a lot of it also comes from people having the idea that you have to pick between “strongly typed” and “memory managed”. Go and Swift (and any ML language) are great examples that you can have both.
I believe Rust was also initially going to be strongly typed and memory managed!
A lot of people in the Rust community think “zero cost abstraction” is a core promise of the language. I would never have pitched this and still, personally, don’t think it’s good. It’s a C++ idea and one that I think unnecessarily constrains the design space. I think most abstractions come with costs and tradeoffs, and I would have traded lots and lots of small constant performance costs for simpler or more robust versions of many abstractions. The resulting language would have been slower. It would have stayed in the “compiled PLs with decent memory access patterns” niche of the PL shootout, but probably be at best somewhere in the band of the results holding Ada and Pascal.
Yes, there seems to be a conflation, which is rather new. Manual memory management of course has its place, but I suspect a huge percentage of the projects people make don’t really need it.
I think specifically Go offers a lot of power while also being the most productive language on the market for general purpose computing stuff. Like Rust and Zig are more powerful, but you also need to know a lot more in order to be productive. Go doesn’t have the super expressive type system or high performance or infinitely configurable build system, but you can push it a very long ways before you need to reach for one of those more powerful systems—if you’re in some niche like mobile or AI or high performance, Go may not suffice, but for most general tasks like server software or desktop daemons, Go will give more performance than the overwhelming majority of projects will ever require, and it will be vastly more productive, especially in a team setting.
I’ve been thinking that Go is in a weird halfway house position which maybe made sense for a lot of people on the fence, but I would not be sad if we could leave it in the past.
It is very natural given that Mitchell seems to like C-like languages and wrote a considerable amount of Ruby (Vagrant). So it is not like he is just picking these because they’re trending again.
I mean the whole trait disaster situation that he mentions in the block of Rust v Zig is very much on point. It’s just extremely un-fun. What’s the point?
It’s too bad this person’s editor has a broken implementation of “jump to definition,” but since Rust traits are coherent there is exactly one implementation of read which could apply to a given type and it should never be ambiguous. In contrast, if one were to use Zig’s comptime to implement polymorphism, the editor cannot actually determine dispatch outside of a specific monomorphic context because comptime just executes arbitrary Zig. This is not an argument Zig is actually winning.
But it’s also the case that today no one really writes polymorphic code in Zig at all because its faculties for it are so limited. So I guess the “point” is that a lot of people find polymorphism a useful language feature, even if their IDEs are currently unable to resolve traits for them.
I don’t think “jump to definition” is particularly material here. I obviously don’t have a problem with semantic jump to definition, but I still find myself regularly getting lost figuring out what actually calls what in some real-world Rust code bases. This might be due to the fact that Rust tooling is still pretty far from the ideal, but I don’t think so.
I had more or less the same navigation problem in a large Java codebase, where the “action at a distance” effect is created by inheritance. In Java, the problem is easier (as you can syntactically follow the specific chain of superclasses, instead of impls being anywhere in a trait-or-type defining crate), the solution is much better (IJ is pretty close to ideal with respect to semantic understanding), and still, at least for me personally, there’s a big mental overhead to piece together spatially disjoint fragments of code into a straight-line sequence of steps that actually happens at runtime.
I do think it is an unsolved problem in language design to combine expressive power of polymorphism with cognitive simplicity of straight-line non-generic code that just does things, one after another.
Practically, I solve it by not using traits, interfaces, inheritance, closures, etc, unless absolutely necessary (chiefly at the boundary between separately linked or at least separately developed components), but I feel that that’s an unorthodox position, and that the no-action default in Rust is to make much more things generic and indirect.
No informed opinion on how Go, Rust, and Zig compare for this particular aspect though, as I only have relevant experience of reading other people’s code in Rust.
I also barely used traits in the Rust that I write. I’m not sure if it’s really true that ordinary Rust users use tons of traits (certainly I’ve seen codebases that do use them in ridiculous ways), or if that impression largely derives from their use in exactly the way you describe: as parts of the interfaces of highly abstract open source libraries, which are separately developed components trying to play nicely with unknown peers.
And even then sometimes I look at the rustdoc output of some potential dependency and wonder what on earth the author was thinking! I’m not even talking about obviously bad examples like recent versions of the base64 crate.
Adding onto that navigation problem in Java are interfaces and DI. Every time I wanted to see a function definition I was taken to the interface instead, and I had to then puzzle out which of the five implementations was actually injected into this class by DI.
That almost seems to be the goal of DI, i.e. that you don’t know what the exact class is that you should expect?
Yes, and that’s one of its downsides, which only gets worse when you abuse DI where it shouldn’t be. It’s really annoying to have to run the program with a debugger to do something that should be entirely possible with static analysis.
That sounds nice, until you actually need to figure out which implementation is doing something unexpected.
Indirection is indispensable in large systems, but it also comes at an extremely high readability penalty. Using it lightly improves code immensely. Using it heavily makes code incomprehensible.
There are different flavors of DI. Java DI implementations in particular (e.g. Spring) often use DI for all wiring that DI can possibly be used for, which leads to these types of frustrations. I prefer to use DI only across container and module boundaries, and for security purposes (where the implementation class is likely to be opaque from the injection site’s POV). Obviously, this isn’t supported by Java …
Adding onto that navigation problem in Java are interfaces and DI. Every time I wanted to see a function definition I was taken to the interface instead, and I had to then puzzle out which of the five implementations was actually injected into this class by DI.
For what it’s worth, intellij has pretty decent support even for that.
Also, for spring specifically, Intellij can actually resolve it in most cases (I believe via executing the same rules as Spring would). Also, Spring started getting more aligned with the native compilation model, so they have some support for doing DI ahead of time, similarly to the more modern “competition” microservice frameworks.
In contrast, if one were to use Zig’s comptime to implement polymorphism, the editor cannot actually determine dispatch outside of a specific monomorphic context because comptime just executes arbitrary Zig.
True!
But it’s also the case that today no one really writes polymorphic code in Zig at all because its faculties for it are so limited.
Eh.. not really. The actual truth is that you can get a lot done without the need of creating a ton of polymorphic abstractions. Comptime is like salt: you just need to add a pinch of it to your program to bring out the umami flavor.
but since Rust traits are coherent there is exactly one implementation of read which could apply to a given type and it should never be ambiguous.
I think the idea was we have a fn parse(request: &dyn Read) -> Struct and he can only jump to the definition of Read::read and not the specific version of Read used (e.g. File::read) because the function doesn’t specify it… because the function doesn’t just work on File it also works on TcpStream and so on.
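A runnable sketch of the situation being described (the names are illustrative, not from the video; note that `read` takes the reader by mutable reference, so the parameter is `&mut dyn Read` here):

```rust
use std::io::Read;

// Dynamic dispatch: `parse` accepts any `Read` impl (a File, a TcpStream,
// an in-memory buffer, ...), so an editor's "jump to definition" on the
// read call can only land on the trait method `Read::read` -- the
// concrete impl isn't known until runtime.
fn parse(request: &mut dyn Read) -> std::io::Result<String> {
    let mut buf = String::new();
    request.read_to_string(&mut buf)?;
    Ok(buf)
}

fn main() -> std::io::Result<()> {
    // `&[u8]` implements `Read`, standing in for a File or TcpStream.
    let mut data: &[u8] = b"GET / HTTP/1.1";
    println!("{}", parse(&mut data)?);
    Ok(())
}
```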
Which is really more of a rejection of polymorphism (dynamic or static) than a complaint about IDE limitations. Which is.. a choice.
I suppose hypothetically an IDE with super good debugger integration might be able to pause execution and then let you jump to the actual implementation of the trait by the current object you are looking at.
Ooh, I never really thought about how poorly “Jump to Definition” works with the From / Into split. But that got me thinking that it would be really slick if IDE tooling could provide a better experience here…
Say I jump to definition on an .into() call that’s converting from Foo into Bar. It’d be really slick if Rust Analyzer jumped to the impl<T, U> Into<U> for T where U: From<T> block, but could fill in Foo for T and Bar for U. Then I could just jump again into the underlying implementation of impl From<Foo> for Bar
(Or bigger picture idea: probably the biggest challenge when navigating polymorphic code is going from a context with a specific type to context with a generic type, then back to a specific type. We’ve all learned to mentally track the specific type across that boundary, but there’s no reason we couldn’t have tooling that could do that for us, or at least make it easier)
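Concretely, the hop being described looks like this (`Foo` and `Bar` are throwaway stand-in types):

```rust
struct Foo(i32);
struct Bar(i32);

// The only impl we write is `From`; the `.into()` call below works via
// the standard library's blanket impl
//   impl<T, U> Into<U> for T where U: From<T>
// which is the extra layer jump-to-definition has to see through.
impl From<Foo> for Bar {
    fn from(f: Foo) -> Bar {
        Bar(f.0 * 2)
    }
}

fn main() {
    // Jump-to-definition on `into` resolves to the generic blanket impl,
    // not directly to the `impl From<Foo> for Bar` above.
    let b: Bar = Foo(21).into();
    assert_eq!(b.0, 42);
}
```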
Super interesting, traits were one of the big ideas that really sold me on Rust! Even when I first started learning the language, I felt like using Rustdocs to find which types implement a trait and what bounds I’d need to add to my generics was mostly intuitive to me.
I can definitely see why some folks would prefer the unchecked templating style that C++ offers (which I think is closer to how Zig’s comptime stuff works? but I’ve really only skimmed the beginner guides on comptime so could be very wrong here). But personally, I’ve been swayed by Rust’s model and I definitely prefer it over the “looser” style (i.e. I prefer getting errors on the definition of a generic function by default rather than getting errors where it’s invoked)
That said, there are definitely some real pain points around traits and trait bounds in practice. The biggest ones to me are around Send / Sync / 'static bounds for futures (e.g. prevalent in Tokio), as well as extremely generic code that you find in libraries like Axum and Diesel (both libraries I enjoy, but being generic-heavy definitely comes with a cognitive cost)
My philosophy these days is to use enums in most situations, and to only use traits when necessary (and if so, to try and use dyn Trait if possible, since passing around a concrete type beats propagating generic parameters everywhere). Even so, I’m still very “pro trait”, and common traits like Iterator, Read, and From / Into have been a joy to use
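A sketch of that philosophy with toy types (not from any real codebase): an enum for a closed set of cases, a trait object only where the set must stay open:

```rust
use std::fmt::Display;

// Closed set of variants: an enum makes every case visible at the match
// site, so there is no impl to chase down when navigating the code.
enum Shape {
    Circle { r: f64 },
    Square { side: f64 },
}

impl Shape {
    fn area(&self) -> f64 {
        match self {
            Shape::Circle { r } => std::f64::consts::PI * r * r,
            Shape::Square { side } => side * side,
        }
    }
}

// Open set: accept a `dyn Trait` instead of threading a generic
// parameter through every caller's signature.
fn describe(thing: &dyn Display) -> String {
    format!("got: {thing}")
}

fn main() {
    assert_eq!(Shape::Square { side: 2.0 }.area(), 4.0);
    assert_eq!(describe(&42), "got: 42");
}
```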
i wonder if i’ll live to see the day where we can talk about a language without putting a different language down
This is why I really like the “Developer Voices” channel. Great host, calm and knowledgeable. Interesting guests and topics. Check it out if you don’t know it yet.
Very nice channel indeed. Found it accidentally via this interview about Smalltalk and enjoyed it very much.
Do you have other channel recommendations?
I found Software Unscripted to be pretty good too. Not quite as calm as Developer Voices, but the energy is positive.
Thanks! Didn’t know Richard Feldman hosted a podcast, he’s a good communicator.
Signals and Threads is another great podcast, though it doesn’t seem to have a regular release schedule.
Thanks for the suggestion. I will check it out!
As someone else said, Developer Voices is excellent, and on the opposite end of the spectrum from OP.
Two more:
The Software Unscripted podcast publishes on YouTube too, and I enjoy it a fair bit, at least in the audio-only format.
Book Overflow, which focuses on reading a software book about once every two weeks and talking about it in depth.
7 (7!) years ago, LTT made a video about why their thumbnails are so… off-putting, and it essentially boiled down to “don’t hate the player; hate the game”. YouTube rewards that kind of content. There’s a reason why nearly every popular video these days is some variant of “I spent 50 HOURS writing C++” with the thumbnail having a guy throwing up. If your livelihood depends on YouTube, you’re leaving money on the table by not doing that.
then you have chosen poorly.
No, I think it’s okay for people to make great content for a living.
My experience is that Lobste.rs is way more Rust leaning than Go leaning, if anything.
We have more time to comment on Lobsters because our tools are better ;)
Waiting for compile to finish, eh?
Hahahahaha. Good riposte!
Yeah, Haskell is so superior to Rust that’s not even fun at this point.
It’s funny you say that because recently it seems we get a huge debate on any Go-related post :D
First thought was “I bet it’s The Primeagen.” Was not disappointed when I clicked to find out.
i can’t upvote this enough
I believe Rust was also initially going to be strongly typed and memory managed!
Source: Graydon Hoare
Last I checked, Rust’s type system is in fact quite strong?
This is in the context of parent comment listing languages that are both. Rust turned out strongly typed but not memory managed.
That… makes no sense? 90% of the PL research is done in managed languages.
Also “strongly typed” is essentially meaningless, I assume you mean “statically typed”.
I think “strongly typed” is the last thing I would consider Go.
“one of the most productive”, otherwise … fully agree!
Yes, my mistake. Was typing quickly!
I found it interesting how Mitchell was describing the ideal web backend as “honestly, maybe Rails or PHP”. Everything comes full circle.
It is very natural given that Mitchel seem to like C-like languages & wrote considerable amount of ruby (vagrant). So it is not like he is just picking these because it’s trending again.
I mean the whole trait disaster situation that he mentions in the block of Rust v Zig is very much on point. It’s just extremely un-fun. What’s the point?
It’s too bad this person’s editor has a broken implementation of “jump to definition,” but since Rust traits are coherent there is exactly one implementation of read which could apply to a given type and it should never be ambiguous. In contrast, if one were to use Zig’s comptime to implement polymorphism, the editor cannot actually determine dispatch outside of a specific monomorphic context because comptime just executes arbitrary Zig. This is not an argument Zig is actually winning.
But it’s also the case that today no one really writes polymorphic code in Zig at all because its faculties for it are so limited. So I guess the “point” is that a lot of people find polymorphism a useful language feature, even if their IDEs are currently unable to resolve traits for them.
I don’t think “jump to definition” is particularly material here. I obviously don’t have problem with semantic jump to definition, but I still find myself regularly getting lost figuring out what actually calls what in some real-world rust code bases. This might be due to the fact that rust-tooling is still pretty far from the ideal, but I don’t think so.
I had more or less the same navigation problem in a large Java codebase, where “action at a distance” effect is created by inheritance. In Java, the problem is easier (as you can syntactically follow the specific chain of superclasses, instead of impls being anywhere in a trait-or-type defining crate), the solution is much better (IJ is pretty close to ideal with respect to semantic understanding), and still, at lest for me personally, there’s a big mental overhead to piece together spatially disjoint fragments of code into a straight-line sequence of steps that actually happens at runtime.
I do think it is an unsolved problem in language design to combine expressive power of polymorphism with cognitive simplicity of straight-line non-generic code that just does things, one after another.
Practically, I solve it by not using traits, interfaces, inheritance, closures, etc., unless absolutely necessary (chiefly at the boundary between separately linked, or at least separately developed, components), but I feel that that’s an unorthodox position, and that the no-action default in Rust is to make many more things generic and indirect.
No informed opinion on how Go, Rust, and Zig compare on this particular aspect, though, as my only relevant experience is reading other people’s code in Rust.
I also barely use traits in the Rust that I write. I’m not sure if it’s really true that ordinary Rust users use tons of traits (certainly I’ve seen codebases that use them in ridiculous ways), or if that impression largely derives from their use in exactly the way you describe: as parts of the interfaces of highly abstract open-source libraries, which are separately developed components trying to play nicely with unknown peers.
And even then, sometimes I look at the rustdoc output of some potential dependency and wonder what on earth the author was thinking! And I’m not even talking about obviously bad examples like recent versions of the base64 crate.
Adding onto that navigation problem in Java are interfaces and DI. Every time I wanted to see a function definition I was taken to the interface instead, and I had to then puzzle out which of the five implementations was actually injected into this class by DI.
That almost seems to be the goal of DI, i.e. that you don’t know what the exact class is that you should expect?
Yes, and that’s one of its downsides, which only gets worse when you use DI in places it doesn’t belong. It’s really annoying to have to run the program under a debugger to do something that should be entirely possible with static analysis.
I’d agree that in most languages, if you know what the class will be, it really shouldn’t be using DI.
That sounds nice, until you actually need to figure out which implementation is doing something unexpected.
Indirection is indispensable in large systems, but it also comes at an extremely high readability penalty. Using it lightly improves code immensely. Using it heavily makes code incomprehensible.
There are different flavors of DI. Java DI implementations in particular (e.g. Spring) often use DI for all wiring that DI can possibly be used for, which leads to these types of frustrations. I prefer to use DI only across container and module boundaries, and for security purposes (where the implementation class is likely to be opaque from the injection site’s POV). Obviously, this isn’t supported by Java …
Yes, this is vile. I hate this so much.
For what it’s worth, IntelliJ has pretty decent support even for that.
For Spring specifically, IntelliJ can actually resolve it in most cases (I believe by executing the same rules Spring would). Spring has also started aligning with the native-compilation model, so it has some support for doing DI ahead of time, similar to the more modern “competition” microservice frameworks.
True!
Eh… not really. The actual truth is that you can get a lot done without creating a ton of polymorphic abstractions. Comptime is like salt: you just need a pinch of it in your program to bring out the umami flavor.
I think the idea was we have a `fn parse(request: &dyn Read) -> Struct` and he can only jump to the definition of `Read::read`, not the specific version of `Read` used (e.g. `File::read`), because the function doesn’t specify it… because the function doesn’t just work on `File`, it also works on `TcpStream` and so on.

Which is really more of a rejection of polymorphism (dynamic or static) than a complaint about IDE limitations. Which is… a choice.
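A hypothetical version of the signature being described might look like this (names and body are assumptions for illustration, not from the video):

```rust
use std::io::Read;

// The function only knows about the trait, so “jump to definition” on
// `read` can only land on `Read::read`, not on `File::read` or
// `TcpStream::read` — the concrete impl is unknowable at this point
// in the source.
fn parse(request: &mut dyn Read) -> std::io::Result<Vec<u8>> {
    let mut buf = Vec::new();
    // Dispatched through the vtable at runtime.
    request.read_to_end(&mut buf)?;
    Ok(buf)
}

fn main() -> std::io::Result<()> {
    // `&[u8]` is just one of many possible readers at this call site.
    let mut from_slice: &[u8] = b"GET /";
    assert_eq!(parse(&mut from_slice)?, b"GET /");
    Ok(())
}
```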
I suppose hypothetically an IDE with super good debugger integration might be able to pause execution and then let you jump to the actual implementation of the trait for the current object you’re looking at.
It could be much less esoteric:
`x.into()` easily brings you to `impl<T, U> Into<U> for T where U: From<T>`, which is simultaneously correct and useless.

Ooh, I never really thought about how poorly “Jump to Definition” works with the `From`/`Into` split. But that got me thinking that it would be really slick if IDE tooling could provide a better experience here…

Say I jump to definition on an `.into()` call that’s converting from `Foo` into `Bar`. It’d be really slick if Rust Analyzer jumped to the `impl<T, U> Into<U> for T where U: From<T>` block, but could fill in `Foo` for `T` and `Bar` for `U`. Then I could just jump again into the underlying implementation of `impl From<Foo> for Bar`.

(Or bigger-picture idea: probably the biggest challenge when navigating polymorphic code is going from a context with a specific type to a context with a generic type, then back to a specific type. We’ve all learned to mentally track the specific type across that boundary, but there’s no reason we couldn’t have tooling that does that for us, or at least makes it easier.)
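The mechanics in question, as a small sketch (`Foo` and `Bar` are placeholder types):

```rust
struct Foo(u32);
struct Bar(u32);

// The `From` impl is where the real logic lives; the matching `Into`
// comes for free from std’s blanket impl
// `impl<T, U> Into<U> for T where U: From<T>`.
impl From<Foo> for Bar {
    fn from(f: Foo) -> Bar {
        Bar(f.0 + 1)
    }
}

fn main() {
    // Jump-to-definition on `.into()` lands on the blanket impl, even
    // though the code that actually runs is `<Bar as From<Foo>>::from`.
    let bar: Bar = Foo(41).into();
    assert_eq!(bar.0, 42);
}
```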
Super interesting, traits were one of the big ideas that really sold me on Rust! Even when I first started learning the language, I felt like using Rustdocs to find which types implement a trait and what bounds I’d need to add to my generics was mostly intuitive to me.
I can definitely see why some folks would prefer the unchecked templating style that C++ offers (which I think is closer to how Zig’s comptime works? but I’ve really only skimmed the beginner guides on comptime, so I could be very wrong here). But personally, I’ve been swayed by Rust’s model and I definitely prefer it over the “looser” style (i.e. I prefer getting errors on the definition of a generic function by default, rather than where it’s invoked).
That said, there are definitely some real pain points around traits and trait bounds in practice. The biggest ones to me are around `Send`/`Sync`/`'static` bounds for futures (e.g. prevalent in Tokio), as well as the extremely generic code you find in libraries like Axum and Diesel (both libraries I enjoy, but being generic-heavy definitely comes with a cognitive cost).

My philosophy these days is to use enums in most situations, and to only use traits when necessary (and if so, to try and use `dyn Trait` if possible, since passing around a concrete type beats propagating generic parameters everywhere). Even so, I’m still very “pro-trait”, and common traits like `Iterator`, `Read`, and `From`/`Into` have been a joy to use.
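A sketch of that enum-first approach (the `Source` type and its variants here are illustrative, not from any particular codebase): a closed set of known implementations as an enum, with a `dyn Trait` variant only as the open escape hatch at the boundary.

```rust
use std::io::{self, Read};

// Closed set of readers the codebase actually uses, plus one boxed
// fallback for genuinely unknown implementations.
enum Source {
    Memory(io::Cursor<Vec<u8>>),
    Other(Box<dyn Read>),
}

impl Read for Source {
    fn read(&mut self, buf: &mut [u8]) -> io::Result<usize> {
        // Dispatch is a visible, jumpable match instead of a vtable
        // or a generic parameter propagated everywhere.
        match self {
            Source::Memory(c) => c.read(buf),
            Source::Other(r) => r.read(buf),
        }
    }
}

fn main() -> io::Result<()> {
    let mut src = Source::Memory(io::Cursor::new(b"ok".to_vec()));
    let mut buf = String::new();
    src.read_to_string(&mut buf)?;
    assert_eq!(buf, "ok");
    Ok(())
}
```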