I found the article interesting; having presented Go as both horrible and good, it reminds me of C a bit: a “quirky, flawed and enormous success” language. Perhaps it’s no coincidence, given that they share some of their designers :)
However, as someone who has written code in both Go and Rust, I couldn’t disagree more with “I think the reasons why Go’s popularity skyrocketed, will apply to Rust.” I think you’re missing one very important bit: Go is easy to write.
It may be stupid, it may be flawed, but you write your code quickly and it gets the job done. Go has succeeded in attracting Python programmers, because it also allowed them to build their programs quickly, with not much effort, and they ran quite a lot faster.
The barrier to entry for Rust is massive. Yes, there are obvious advantages to code that you’ve already written in it and made compile, but as far as development effort goes, Rust is not the kind of thing you choose if you want a thing done quickly.
I think Go’s success is more similar to JavaScript’s or Python’s than to Rust’s. It’s easy to pick up and good enough in practice. Rust goes for the opposite: it makes itself harder to learn and use, in exchange for a superior long-term benefit. I don’t think it’ll reach quite the same audience, or popularity.
+1. I feel like the reasoning in the article is a bit skewed: it considers programming languages to be formal artifacts and compares them on their technical merits. It is a perfectly valid thing to do and the analysis in the article is thoughtful.
Then it starts making predictions based on the assumption that technical merits of a language define its success, completely missing the wetware side of programming languages. Programming languages are made to be used by both humans and computers, and their human effects can be very subtle: even some stupid little thing like long compilation times or quirky syntax can be disruptive.
Go is good enough at the machine level (much better than Python/Ruby and the like), at the same time cutting many corners to be easy for humans (simple, minimal and familiar syntax, small number of concepts in the language, simple and unobtrusive type system, low-latency GC, good tooling, very fast compilation times and feedback loop, a simple but effective concurrency model, large and actually useful standard library). Sometimes Go feels almost like cheating: it is full of high-quality implementations of complex things with very simple/minimal/hidden human interfaces (GC, goroutines, the standard library). Go consistently makes it harder for humans to make wrong choices, compared to most other mainstream programming languages (one subtle example: structures are value-types which are copied by default, unlike pass-by-reference craze of Java/Python/Ruby, making unintended sharing harder and even alleviating absence of immutability to some degree).
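To make that value-semantics point concrete, a minimal sketch (the Point type is just for illustration):

```go
package main

import "fmt"

// Point is an illustrative struct type.
type Point struct{ X, Y int }

func main() {
	a := Point{X: 1, Y: 2}
	b := a                // assignment copies the whole struct
	b.X = 99              // mutates only the copy
	fmt.Println(a.X, b.X) // prints "1 99": no unintended sharing

	// Slices and maps are the exceptions: they share backing storage.
	s := []int{1, 2, 3}
	t := s
	t[0] = 99
	fmt.Println(s[0]) // prints "99": the backing array is shared
}
```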
Rust is excellent for machines, but its human side is much more uneven than Go’s. It is much better than Go at preventing humans from making mistakes in many areas. At the same time, it brings non-trivial, large, open-ended interfaces and does not hide implementation complexity from the programmer as well. It brings a huge learning curve and cognitive overhead. Implementation/language complexity can be a minefield in itself: humans might get confused, might miss a simpler way to do something, etc. Rust is designed for very patient and conscientious programmers who are willing to spend time and effort to get things right. Sadly, this is often not the recipe for success in many parts of the software industry.
I’d be happy to see a world where Go fills the high-level niche and Rust forms the systems foundation.
I think the trouble with discussions about a language’s “technical merits” is that somewhere along the way some people have lost sight of the purpose of programming languages: to act as an interface to make it easy for programmers to create software. Good languages remove resistance to getting programs written. Bad languages make it harder.
Go is very good at satisfying a particular niche - making it easy to write software without sacrificing much performance. I’d argue that this is a niche which is in high demand and that explains the popularity of Go.
Rust has a different niche - minimising memory access errors while providing sophisticated language features and having good performance. The trade-off is that the language is much harder to master than Go and programming is in general more difficult. Rust’s features are all laudable things, but given its lower popularity it seems like there’s just less demand for languages of this type.
This may sound defensive; if so, I apologize for my poor writing. What I want to suggest is that the entirety of the OP is written from the wrong mindset, and that the points below are specific reflections of that wrong mindset.
The article ignores the number one reason that Go was written: SPEED OF COMPILE TIMES! The article also ignores another very important reason that Go was written: It is for programmers, “not capable of understanding a brilliant language but we want to use them to build good software”. The quote is by Rob Pike.
The article places some importance on immutable types, but would the average programmer know how to leverage an immutable type to any benefit?
“The standard library of Rust is just as rich as that of go” - REALLY? Where is net/http? That absence alone makes this statement an outright lie. Looking for json, xml? Again not in the stdlib. Compression and archiving like tar, zip, bz2, lzw, gzip? Again in the Go stdlib, not in the Rust std. Cryptography including symmetric, asymmetric, and hashes? In the Go stdlib and absent from Rust std. I could go on, but I’d have to refer below the fold of the Go standard lib. Compare https://golang.org/pkg/ to https://doc.rust-lang.org/stable/std/ for yourself.
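To make the contrast concrete, here is a minimal sketch: a complete HTTP server that needs nothing outside Go’s standard library (the port and message are arbitrary):

```go
package main

import (
	"fmt"
	"net/http"
)

// A complete HTTP server with zero third-party dependencies.
func main() {
	http.HandleFunc("/", func(w http.ResponseWriter, r *http.Request) {
		fmt.Fprintln(w, "hello from the standard library")
	})
	// Error ignored for brevity; ListenAndServe blocks while serving.
	_ = http.ListenAndServe(":8080", nil)
}
```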
“The package ecosystem of Rust outmatches…” Maybe, but not in some important respects; compare https://github.com/search?q=language%3ARust+stars%3A%3E1000&type=Repositories with https://github.com/search?l=&p=1&q=language%3AGo+stars%3A%3E1000&ref=advsearch&type=Repositories&utf8=✓
“I think we could call Rust a superior language to Go in, quite literally, every single way possible.” Not in speed of compile time. Not in ease of use for the average and below-average developer. When these two points are your most important values, Rust does not look superior at all.
Regarding passing the critical point and being a mainstay, I absolutely agree. Rust is here to stay and I’m glad that it is. Regarding it being a better language than any other for most tasks, I absolutely disagree. Rust’s place is to replace C++. It is a simpler, saner language than C++, to be used in the same places, when that level of control is needed. For anything else, a simpler language with less mental load required and faster compile times is better suited to the task.
Finally, on the mindset and point of view: if “superior” does not take the human aspect into consideration at all, this post may have a lot of truth to it. However, code is written by humans, and humans have different needs than a bullet list of supported features. Keep in mind the goals of the Go programming language when it was written (from Rob Pike’s 2009 Google Tech Talk): type safety, memory safety, good support for concurrency, GC, and speed of compilation.
It is easy to forget that as projects grow, compile speeds become non-trivial. Many languages had tackled all of those things except the last. Go continues to put emphasis on this. When compile speed regressed badly with the 1.5 release (when the compiler was rewritten from C to Go), it was improved again over the next few releases until it was faster than it had been before. This is an important principle in developer productivity. If we stop valuing this, then one of the most important parts of Go isn’t valued. If you aren’t going to value that, then you must say so. It is, after all, one of the most important parts of the language.
Continuing on the mindset and point of view: the article places little value on the simplicity of Go. This is another one of Go’s greatest strengths. There is no doubt that generics and memory management in Rust make it more complex than Go. Go’s simplicity is such a huge strength that many developers do not want generics in the language. They don’t want that added complexity. To ignore this simplicity as a value is to ignore one of the most important parts of the language.
Given these additional things which we must value when comparing languages, the conclusions made in the article simply are not that simple. Yes, there is a place for Rust. There is also a place for Go. Should anything being written in Go be written in Rust instead, as the article suggests? Absolutely not.
“The standard library of Rust is just as rich as that of go” - REALLY?
Yeah, that’s a silly statement to make, given that Rust specifically tries to have a small stdlib and pushes non-essential things to the crates.io ecosystem. I think this is a trade-off that works in Rust’s favor in the long run, but I understand people who prefer the Go/Python philosophy.
“The standard library of Rust is just as rich as that of go” - REALLY? Where is net/http? That absence alone makes this statement an outright lie. Looking for json, xml? Again not in the stdlib. Compression and archiving like tar, zip, bz2, lzw, gzip? Again in the Go stdlib, not in the Rust std. Cryptography including symmetric, asymmetric, and hashes? In the Go stdlib and absent from Rust std. I could go on, but I’d have to refer below the fold of the Go standard lib. Compare https://golang.org/pkg/ to https://doc.rust-lang.org/stable/std/ for yourself.
That was my reaction too. The amount of stuff one can do with Go without having to choose between multiple similar but different libraries and without having to write basic stuff oneself is amazing and easy.
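As a small illustration (the Event type is made up): JSON handling ships in the box, so there is one obvious way to do it:

```go
package main

import (
	"encoding/json"
	"fmt"
)

// Event is an illustrative type; the struct tags drive the field names.
type Event struct {
	Name  string `json:"name"`
	Count int    `json:"count"`
}

func main() {
	// Marshal and unmarshal with the one obvious, built-in package.
	out, _ := json.Marshal(Event{Name: "build", Count: 3})
	fmt.Println(string(out)) // {"name":"build","count":3}

	var e Event
	_ = json.Unmarshal([]byte(`{"name":"deploy","count":7}`), &e)
	fmt.Println(e.Name, e.Count) // deploy 7
}
```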
“The article ignores the number one reason that Go was written: SPEED OF COMPILE TIMES! The article also ignores another very important reason that Go was written: It is for programmers, “not capable of understanding a brilliant language but we want to use them to build good software”. The quote is by Rob Pike.”
You nailed it right here. Two supporting points. First, Pike actually developed in Oberon-2 at one point, loving its fast flow and lack of segfaults in common uses. He wanted that experience in Go. Second, Google has to hire, train, and make productive a humongous number of people from many backgrounds in the shortest time possible. That’s at least their common case, which the second point optimizes for. If these aren’t necessary or are frowned upon, people evaluating languages in such situations might want something other than Go.
Also, I’ll add that we don’t inherently need Go or rapid compiles to achieve that. A language with long compiles might have a quick-but-good-enough mode for fast iterations, with the final result sent through the optimizing compiler. The second problem can be solved with a distinction between a simpler core and advanced stuff that is optional and layered on top. Coding guidelines, with tools that enforce them, can keep the simple core the default. Advanced stuff is allowed where it makes sense (e.g. macros eliminating boilerplate). I also thought it might be helpful to have a tool that converts those features into a human-readable, core form (preserving comments) so the less skilled can still maintain the code.
One thing I find interesting is that the original “blub” language was Java. Go shares the same goal of being accessible to a wide variety of programmers, yet takes an incredibly different tack from Java (embracing generics in 2.0 closes some of the gap. At a meta-level, I suppose you could say they took the same tack: release generics a decade after the language was released).
The “lacks” of Go in the article are highly opinionated and given without any context of what you’re trying to solve with the language.
Garbage collection is something bad? Can’t disagree harder.
The article ends with a bunch of extreme opinions like “Rust will be better than Go in every possible task”.
There are use cases for Go, use cases for Rust, for both, and for none of them. Just pick the right tool for your job and stop bragging about yours.
You love Rust, we get it.
Yes, I would argue GC is something that’s inherently bad in this context. Actually, I’d go as far as to say that a GC is bad for any statically typed language. And Go is, essentially, statically typed.
It’s inherently bad since GC dictates the lack of destruction mechanisms that can be reliably used when no references to the resource are left. In other words, you can’t have basic features like the C++ file streams that “close themselves” at the end of the scope, when they are destroyed.
That’s why Go has the “defer” statement: it’s there because of the GC. Otherwise, destructors could be used to run cleanup tasks at the end of a scope.
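A minimal sketch of the difference (the file path is illustrative): the cleanup is attached to the enclosing function by hand, and nothing runs automatically when the last reference disappears:

```go
package main

import (
	"fmt"
	"os"
)

// printHead shows the shape of cleanup in a GC language: no destructor
// will ever run when the last reference to f disappears, so the close
// is attached to the function scope explicitly with defer.
func printHead(path string) error {
	f, err := os.Open(path)
	if err != nil {
		return err
	}
	defer f.Close() // runs on every return path out of printHead

	buf := make([]byte, 64)
	n, err := f.Read(buf)
	if err != nil {
		return err
	}
	fmt.Printf("%s\n", buf[:n])
	return nil
}

func main() {
	if err := printHead("/etc/hostname"); err != nil { // illustrative path
		fmt.Println(err)
	}
}
```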
So that’s what makes a GC inherently bad.
A GC, however, is also bad because it “implies” the language doesn’t have good resource management mechanisms.
There was an article posted here about how Rust essentially has a “static GC”, since manual deallocation is almost never needed. The same goes for well-written C++: it behaves just like a garbage-collected language, no manual deallocation required; all of it is figured out at compile time based on your code.
So, essentially, a GC does what languages like C++ and Rust do at compile time… but it does it at runtime. Isn’t this inherently bad? Doing something at runtime that can be done at compile time? It’s bad from a performance perspective and also bad from a code-validation perspective. And it has essentially no upsides, as far as I’ve been able to tell.
As far as I can tell the main “support” for GC is that they’ve always been used. But that doesn’t automatically make them good. GCs seem to be closer to a hack for a language to be easier to implement rather than a feature for a user of the language.
Feel free to convince me otherwise.
It’s inherently bad since GC dictates the lack of destruction mechanisms that can be reliably used when no references to the resource are left.
Why do you think this would be the case? A language with GC can also have linear or affine types for enforcing that resources are always freed and not used after they’re freed. Most languages don’t go this route because they prefer to spend their complexity budgets elsewhere and defer/try-with-resources work well in practice, but it’s certainly possible. See ATS for an example. You can also use rank-N types to a similar effect, although you are limited to a stack discipline which is not the case with linear/affine types.
So, essentially, a GC does what languages like C++ and Rust do at compile time… but it does it at runtime. Isn’t this inherently bad?
No, not necessarily. Garbage collectors can move and compact data for better cache locality and elimination of fragmentation concerns. They also allow for much faster allocation than in a language where you’re calling the equivalent of malloc under the hood for anything that doesn’t follow a clean stack discipline. Reclamation of short-lived data is also essentially free with a generational collector. There are also garbage collectors with hard bounds on pause times which is not the case in C++ where a chain of frees can take an arbitrary amount of time.
Beyond all of this, garbage collection allows for a language that is both simpler and more expressive. Certain idioms that can be awkward to express in Rust are quite easy in a language with garbage collection precisely because you do not need to explain to the compiler how memory will be managed. Pervasive use of persistent data structures also becomes a viable option when you have a GC that allows for effortless and efficient sharing.
In short, garbage collection is more flexible than Rust-style memory management, can have great performance (especially for functional languages that perform a lot of small allocations), and does not preclude use of linear or affine types for managing resources. GC is hardly a hack, and its popularity is the result of a number of advantages over the alternatives for common use cases.
What idioms are unavailable in Rust or in modern C++ because of their lack of GC, but are available in a statically typed GC language?
I perfectly agree with GC allowing for more flexibility and more concise code as far as dynamic languages go, but that’s neither here nor there.
As for the theoretical performance benefits and real-time capabilities of a GCed language… I think the word “theoretical” is what I’d focus my counter upon, because they don’t actually exist. The GC overhead is too big, in practice, for those benefits to outshine languages without runtime memory management logic.
I’m not sure about C++, but there are functions you can write in OCaml and Haskell (both statically typed) that cannot be written in Rust because they abstract over what is captured by the closure, and Rust makes these things explicit.
The idea that all memory should be explicitly tracked and accounted for in the semantics of the language is perhaps important for a systems language, but to say that it should be true for all statically typed languages is preposterous. Languages should have the semantics that make sense for the language. Saying a priori that all languages must account for some particular feature just seems like a failure of the imagination. If it makes sense for the semantics to include explicit control over memory, then include it. If it makes sense for this not to be part of the semantics (and for a GC to be used so that the implementation of the language does not consume infinite memory), this is also a perfectly sensible decision.
Could you give me an example of this?
there are functions you can write in OCaml and Haskell (both statically typed) that cannot be written in Rust because they abstract over what is captured by the closure
As far as I understand and have been told by people who understand Rust quite a bit better than me, it’s not possible to re-implement this code in Rust (if it is, I would be curious to see the implementation!)
https://gist.github.com/dbp/0c92ca0b4a235cae2f7e26abc14e29fe
Note that the polymorphic variables (a, b, c) get instantiated with different closures in different ways, depending on what the format string is, so giving a type to them is problematic because Rust is explicit about typing closures (they have to talk about lifetimes, etc).
My God, that is some of the most opaque code I’ve ever seen. If it’s true Rust can’t express the same thing, then maybe it’s for the best.
If you want to understand it (not sure if you do!), the approach is described in this paper: http://www.brics.dk/RS/98/12/BRICS-RS-98-12.pdf
And probably the reason why it seems so complex is because CPS (continuation-passing style) is, in general, quite hard to wrap your head around.
I do think that the restrictions present in this example will show up in simpler examples (anywhere you are trying to quantify over different functions with sufficiently different memory usage, but the same type, in a GC’d functional language); this is just a particular thing that I have on hand, because I thought it would work in Rust but doesn’t seem to.
FWIW, I spent ~10 minutes trying to convert your example to Rust. I ultimately failed, but I’m not sure if it’s an actual language limitation or not. In particular, you can write closure types in Rust with 'static bounds which will ensure that the closure’s environment never borrows anything that has a lifetime shorter than the lifetime of the program. For example, Box<FnOnce(String) + 'static> is one such type.
So what I mean to say is that I failed, but I’m not sure if it’s because I couldn’t wrap my head around your code in a few minutes or if there is some limitation of Rust that prevents it. I don’t think I buy your explanation, because you should technically be able to work around that by simply forbidding borrows in your closure’s environment. The thing I actually got hung up on was the automatic currying that Haskell has. In theory, that shouldn’t be a blocker because you can just introduce new closures, but I couldn’t make everything line up.
N.B. I attempted to get any Rust program working. There is probably the separate question of whether it’s a roughly equivalent program in terms of performance characteristics. It’s been a long time since I wrote Haskell in anger, so it’s hard for me to predict what kind of copying and/or heap allocations are present in the Haskell program. The Rust program I started to write did require heap allocating some of the closures.
It’s inherently bad since GC dictates the lack of destruction mechanisms that can be reliably used when no references to the resource are left. In other words, you can’t have basic features like the C++ file streams that “close themselves” at the end of the scope, when they are destroyed.
Deterministic freeing of resources is not mutually exclusive with all forms of garbage collection. In fact, this is shown by Rust, where reference counting (Rc) does not exclude Drop. Of course, Drop may never be called when you create cycles.
(Unless you do not count reference counting as a form of garbage collection.)
Well… I don’t count shared pointers (or RC pointers or w/e you wish to call them) as garbage collected.
If, in your vocabulary, that is garbage collection then I guess my argument would be against the “JVM style” GC where the moment of destruction can’t be determined at compile time.
If, in your vocabulary, that is garbage collection
Reference counting is generally agreed to be a form of garbage collection.
I guess my argument would be against the “JVM style” GC where the moment of destruction can’t be determined at compile time.
In Rc or shared_ptr, the moment of the object’s destruction can also not be determined at compile time. Only the destruction of the Rc itself can; put differently, the reference count decrement can be determined at compile time.
I think your argument is against tracing garbage collectors. I agree that the lack of deterministic destruction is a large shortcoming of languages with tracing GCs. It effectively brings back a parallel to manual memory management through the backdoor: it requires manual resource management. You don’t have to convince me :). I once wrote a Go binding to Tensorflow. Since Tensorflow wants memory aligned on 32-byte boundaries on amd64 and Go allocates (IIRC) on 16-byte boundaries, you have to allocate memory in C-land. However, since finalizers are not guaranteed to run, you end up managing memory objects with Close() functions. This was one of the reasons I rewrote some fairly large Tensorflow projects in Rust.
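A sketch of the pattern described above, with a hypothetical wrapper rather than the real Tensorflow binding: the Go object is a few bytes, so the collector feels no pressure from the large C allocation behind it, and the finalizer is only a best-effort safety net behind the explicit Close():

```go
package main

/*
#include <stdlib.h>
*/
import "C"

import (
	"runtime"
	"unsafe"
)

// Tensor wraps a large C-side allocation behind a tiny Go object
// (hypothetical; not the real Tensorflow binding).
type Tensor struct {
	ptr unsafe.Pointer
}

// NewTensor allocates in C-land, where the Go GC cannot see the size.
func NewTensor(bytes int) *Tensor {
	t := &Tensor{ptr: C.malloc(C.size_t(bytes))}
	// Safety net only: this runs if and when the GC collects t, which a
	// small Go heap may never trigger before C memory runs out.
	runtime.SetFinalizer(t, (*Tensor).Close)
	return t
}

// Close frees the C memory deterministically; callers must remember it.
func (t *Tensor) Close() {
	if t.ptr != nil {
		C.free(t.ptr)
		t.ptr = nil
	}
}

func main() {
	t := NewTensor(1 << 20) // 1 MiB the Go GC never accounts for
	defer t.Close()
}
```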
However, since finalizers are not guaranteed to run, you end up managing memory objects with Close() functions.
Hmm. This seems a bit odd to me. As I understand it, Go code that binds to C libraries tends to use finalizers to free memory allocated by C. Despite the lack of a guarantee around finalizers, I think this has worked well enough in practice. What caused it to not work well in the Tensorflow environment?
When doing prediction, you typically allocate large tensors relatively rapidly in succession. Since the wrapping Go objects are very small, the garbage collector kicks in relatively infrequently, while you are filling memory in C-land. There are definitely workarounds to put bounds on memory use, e.g. by using an object pool. But I realized that what I really want is just deterministic destruction ;). But that may be my C++ background.
I have rewritten all that code probably around the 1.6-1.7 time frame, so maybe things have improved. Ideally, you’d be able to hint the Go GC about the actual object sizes including C-allocated objects. Some runtimes provide support for tracking C objects. E.g. SICStus Prolog has its own malloc that counts allocations in C-land towards the SICStus heap (SICStus Prolog can raise a recoverable exception when you use up your heap).
Interesting! Thanks for elaborating on that.
So Python, Swift, Nim, and others all have RC memory management … according to you these are not GC languages?
One benefit of GC is that the language can be way simpler than a language with manual memory management (either explicitly like in C/C++ or implicitly like in Rust).
This simplicity then can either be preserved, keeping the language simple, or spent on other worthwhile things that require complexity.
I agree that Go is bad, Rust is good, but let’s be honest, Rust is approaching a C++-level of complexity very rapidly as it keeps adding features with almost every release.
you can’t have basic features like the C++ file streams that “close themselves” at the end of the scope, when they are destroyed.
That is a terrible point. The result of closing the file stream should always be checked and reported, or you will have buggy code that can’t handle edge cases.
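For what it’s worth, the same concern applies to Go’s defer. The careful idiom looks something like this (an illustrative function, not from any particular library):

```go
package main

import "os"

// writeAll surfaces Close errors instead of discarding them: on some
// filesystems a write error is only reported at close time, so an
// unchecked Close can silently lose data.
func writeAll(path string, data []byte) (err error) {
	f, err := os.Create(path)
	if err != nil {
		return err
	}
	defer func() {
		if cerr := f.Close(); cerr != nil && err == nil {
			err = cerr // report the Close failure unless an earlier error wins
		}
	}()
	_, err = f.Write(data)
	return err
}

func main() {
	_ = writeAll("/tmp/example.txt", []byte("hello\n")) // illustrative path
}
```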
Is this actually used in any production code? To my knowledge it was meant to be more of a feature for debugging and for language developers, rather than a true GC-less option like the one a language such as D provides.
Just another “Why Rust should replace Language X” article without much depth. By now the greatest barrier to entry for Rust is the community’s attitude. Go lacks in a certain field that Rust does well? Let’s talk for hours about how Rust does it better. Rust lacks the async networking that Go was designed to do well? No problem, we’ll catch up for sure.
They can’t figure out why everyone doesn’t love their thing and why some like this other thing so much better while they are convinced that the other thing is inferior.
I don’t think this is true of the whole Rust community. However, when a language is in a period of exponential growth, there are a lot of newcomers who are intoxicated with the language. It usually takes a few years for a language community at large to get a better perspective on the role of their favorite language. It’s pretty much what happens during every language hype.
I also fundamentally disagree with the author. Go is not a bad language, it just has different design goals than Rust. I think the strongest criticism one can make is that it does not provide many improvements over modern Pascal, Modula-2, or Oberon-2. The Wirth languages are also very simple. But Go has clearly shown that rebooting a language philosophy with a new ecosystem and a more familiar (C-like) syntax can bring success.
Go creators seem to disagree, discarding almost FIFTY years of progress in language design.
Have you even considered that they simply don’t agree that fifty years of overcomplicated type system designs count as “progress”?
I think the article is good, but it overlooks a big factor that sets Go apart from Rust. To phrase it as a joke:
I’ve loved every programming language I’ve tried before I’ve used it in production.
A more thorough explanation starts with Google offering full-time production usage to a large number of developers, who subsequently churned into other startups. Go has some rough edges, but its core is easy to pick up. This means a single expert on a team can get several other developers productive within a few weeks. You just need one person to catch the gotchas, explain the idioms, and guide the general structure of the project.
So while I think the article describes a small flame of interest, there was a giant company pouring millions of dollars of gasoline onto the fire. Mozilla is great for supporting the development of Rust, but they don’t have the money to subsidize its commercial adoption. While I’d love for a language’s intrinsic properties to drive its success, the environment it is released into plays an outsized role.
edit: For the past few months I’ve played with Rust and love the language; I really hope its usage becomes more widespread. Until Rust has a wealthy benefactor churning out experienced developers who get hired for their ops/python/ML/etc. experience and then start introducing the language to production environments, its adoption will be slower than any of us want.
You know someone is over-hyping Rust (or is just misinformed) when you see statements like
Which means there’s no risk of concurrency errors no matter what data sharing mechanism you chose to use
The borrow checker prevents data races, which are only a subset of concurrency errors. Race conditions are still very possible, and not going away any time soon. This blog post does a good job explaining the difference.
Additionally, I have my worries about async/await in a language that is also intended to be used in places that need control over low-level details. One library that decides to use raw I/O syscalls on some unlikely task (like error logging) and, whoops, there goes your event loop. Bounded thread pools don’t solve this (what happens if you hit the max? It’s equivalent to a limited semaphore), virtual dispatch becomes more of a hazard (are you sure every implementation knows about the event loop? How can you be sure as a library author?), what if you have competing runtime environments (see twisted/gevent/asyncio/etc. in the Python community; this may arguably be more of a problem in Rust given its focus on programmer control), and the list goes on. In Go, you literally never have to worry about this, and it’s the greatest feature of the language.
That definition of “race condition minus data race” essentially refers to an operational logic error on the programmer’s side. As in, there’s no way to catch race conditions that aren’t data races via a compiler, unless you have a magical business-logic-aware compiler, at which point you wouldn’t need a programmer.
As far as the issues with async I/O go… well, yes. Asyncio wouldn’t solve everything.
But asyncio also wouldn’t necessarily have to be single-threaded. It could just mean that a multi-threaded networking application will now spend fewer resources on context switching between threads. But the parallelism of threads > cpu_count still comes in useful for the various blocking operations which may appear here and there.
As far as Go’s solution goes, their answer to the performance issue isn’t that good, since goroutines have significant overhead: much less than a native thread, but still considerably more than something like mio.
The issue you mentioned as an example, a hidden sync I/O syscall made by some library, can just as well happen in a function run on a goroutine; the end result will essentially be an OS-native thread being blocked, much like in Rust. At least, as far as my understanding of goroutines goes, that seems to be the case.
Granted, working with a “pool” of event loops representing multiple threads might be harder than just using goroutines, but I don’t see it as being that difficult.
That definition is the accurate, correct definition. It’s important to state that Rust helps with data races, and not race conditions in general. Even the rustonomicon makes this distinction clear.
The discussion around multiple threads seems like a non-sequitur to me. I’m fully aware that async/await works fine with multiple threads. I also don’t understand why the performance considerations of goroutines were brought into the picture. I’m not making any claims about performance, just ease of use and programmer model. (Though, I do think it’s important to respond that goroutines are very much low enough overhead for many common tasks. It also makes no sense to talk about performance and overhead outside of the context of a problem. Maybe a few nanoseconds per operation is important, and maybe it isn’t.)
The issue I mentioned does not happen in Go: all of the syscalls/locks/potentially blocking operations go through the runtime, and so it’s able to deschedule the goroutine and let others run. This is another great article about the topic.
It’s great that you’re optimistic about the future direction Rust is taking with its async story. I’m optimistic too, but that’s because I have great faith in the leadership and technical design skills of the Rust community to solve these problems. I’m just pointing out that they ARE problems that need to be solved, and the solution is not going to be better than Go’s solution in every dimension.
The issue I mentioned does not happen in Go: all of the syscalls/locks/potentially blocking operations go through the runtime, and so it’s able to deschedule the goroutine and let others run.
Ok, maybe I’m mistaken here but:
“Descheduling a goroutine”: when a function call is blocking, descheduling a goroutine has the exact same cost as descheduling a thread, which is huge.
Secondly, Go only uses non-blocking syscalls under the hood for networking I/O calls at the moment. So if I want to wait for an operation on some random file, or wait for an asynchronous prefetch call, I will be unable to do so; I have to actually block the underlying thread that the goroutine is using.
I haven’t seen any mention of all blocking syscalls being treated in an async manner. They go through the runtime, yes, but the runtime may just decide that it can do nothing about it other than let the thread be descheduled as usual. And, as far as I know, the runtime is only “smart” about networking I/O syscalls atm; the rest are treated like blocking operations.
descheduling a goroutine has the exact same cost as descheduling a thread, which is huge.
A goroutine being descheduled means it yields the processor and calls into the runtime scheduler, nothing more. What happens to the underlying OS threads is another matter entirely. This can happen at various points where things could block (e.g. chan send / recv, entering mutexes, network I/O, even regular function calls), but not at every such site.
the runtime is only “smart” about networking I/O syscalls atm
Yes, sockets and pipes are handled by the poller, but what else could it be smarter about? The situation may well be different on other operating systems, but at least on Linux, files on disk are always ready as far as epoll is concerned, so there is no need to go through the scheduler and poller for those. In that case, I/O blocks both the goroutine and the thread, which is fine for Go. For reference, in this situation, node.js uses a thread pool that it runs file I/O operations on, to avoid blocking the event loop. Go doesn’t really need to do this under the covers, though, because it doesn’t have the concept of a central event loop that must never be blocked waiting for I/O.
Descheduling a goroutine is much cheaper than descheduling a thread. Goroutines are cooperative with the runtime, so they ensure that there is minimal state to save when descheduling (no registers, for example). It’s on the order of nanoseconds vs microseconds. Preemptive scheduling helps in a number of ways, but typically causes context switching to be more expensive: you have to be able to stop/start at any moment.
Go has an async I/O loop, yes, but it runs in a separate managed thread by the runtime. When a goroutine would wait for async I/O, it parks itself with the runtime, and the thread the goroutine was running on can be used for other goroutines.
While the other syscalls do in fact take up a thread, critically, the runtime is aware when a goroutine is going to enter a syscall, and so it can know that the thread will be blocked, and allow other goroutines to run. Without that information, you would block up a thread and waste that extra capacity.
The runtime manages a threadpool and ensures that GOMAXPROCS threads are always running your code, no matter what syscalls or I/O operations you’re doing. This is only possible if the runtime is aware of every syscall or I/O operation, which is not possible if your language/standard library are not designed to provide that. Rust’s aren’t, for good reasons. It has trade-offs with respect to FFI speed, control, zero overhead, etc. They are different languages with different goals, and one isn’t objectively better than the other.
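A sketch of the programming model this buys, assuming an illustrative file path: each read below is a plain blocking call, and the runtime parks the goroutine entering the syscall so the others keep running:

```go
package main

import (
	"fmt"
	"os"
	"sync"
)

func main() {
	var wg sync.WaitGroup
	for i := 0; i < 100; i++ {
		wg.Add(1)
		go func() {
			defer wg.Done()
			// A plain blocking read: no callbacks, no event loop to starve.
			// The runtime sees the goroutine enter the syscall and hands the
			// processor to other goroutines in the meantime.
			_, _ = os.ReadFile("/etc/hosts") // illustrative path
		}()
	}
	wg.Wait()
	fmt.Println("all reads done")
}
```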
And, as far as I know, the runtime is only “smart” about networking I/O syscalls atm; the rest are treated like blocking operations.
Pretty much everything that could block goes through sockets and pipes though. The only real exception is file I/O, and file I/O being unable to be epolled in a reasonable way is a kernel problem not a Go problem.
I am happy about this article. While I’m a Go fan, it’s actually great that there are more languages. The main reason is not the competition (I don’t really see them competing anyway), but a problem that languages frequently face as they become popular: you get more and more programmers who use the language mostly or exclusively because of its success, while actually wanting to use a different language, either consciously or subconsciously. The outcome of subconsciously wanting to use a different language is that one bends the language, libraries, frameworks, and ecosystem into being like that other language.
The result of that is that big languages often become the same generic “all the hip features + all the used to be hip ten years ago features” style language.
This is not to say languages are never allowed to adapt or copy good ideas. It’s just that, with all these languages with biggish user bases, it would be sad if the differences ended up being merely cosmetic while the approaches to solving the problems at hand became pretty much equal.
Other folks have covered the important technical issues, but something I think is also missed is the weight of GOOG behind golang. The Rust evangelism strike force just isn’t as well-funded.
I see this a lot and I disagree with it pretty much completely. Google doesn’t care whether people outside Google use Go at all, really. They really aren’t pushing anyone to use it or pressuring anyone to use it. It’s not heavily advertised to new programmers in the way something like Java or C# is by the companies that back them. Google supports Go heavily, but despite that I’m pretty sure I see way more evangelism and advertisement for Rust, even though it has no big corporate sponsors to speak of.
I found the article interesting; having presented Go as both horrible and good, it reminds me of C a bit: a “quirky, flawed and enormous success” language. Perhaps it’s no coincidence given the fact that they share some of their designers :)
However, as someone who wrote some code in both Go and Rust, I couldn’t disagree more with “I think the reasons why Go’s popularity skyrocketed, will apply to Rust.” I think you’re missing one, very important bit: Go is easy to write. It may be stupid, it may be flawed, but you write your code quickly and it gets the job done. Go has succeeded in attracting Python programmers, because it also allowed them to build their programs quickly, with not much effort, and they ran quite a lot faster.
The barrier of entry to Rust is massive. Yes, there are obvious advantages to code that you’ve already written in it and made compile, but as far as development effort goes, Rust is not the kind of thing you choose if you want a thing done quickly.
I think Go’s success is more similar to Javascript’s or Python’s rather than Rust’s. It’s easy to pick up and good enough in practice. Rust goes for the opposite: it makes itself harder to learn and use, but for a superior long-term benefit. I don’t think it’ll reach quite the same audience, or popularity.
+1. I feel like the reasoning in the article is a bit skewed: it considers programming languages to be formal artifacts and compares them on their technical merits. It is a perfectly valid thing to do and the analysis in the article is thoughtful.
Then it starts making predictions based on the assumption that technical merits of a language define its success, completely missing the wetware side of programming languages. Programming languages are made to be used by both humans and computers, and their human effects can be very subtle: even some stupid little thing like long compilation times or quirky syntax can be disruptive.
Go is good enough at the machine level (much better than Python/Ruby and the like), at the same time cutting many corners to be easy for humans (simple, minimal and familiar syntax, small number of concepts in the language, simple and unobtrusive type system, low-latency GC, good tooling, very fast compilation times and feedback loop, a simple but effective concurrency model, large and actually useful standard library). Sometimes Go feels almost like cheating: it is full of high-quality implementations of complex things with very simple/minimal/hidden human interfaces (GC, goroutines, the standard library). Go consistently makes it harder for humans to make wrong choices, compared to most other mainstream programming languages (one subtle example: structures are value-types which are copied by default, unlike pass-by-reference craze of Java/Python/Ruby, making unintended sharing harder and even alleviating absence of immutability to some degree).
Rust is excellent for machines, but its human side is much more uneven than in Go. It is much better than Go in preventing humans from making mistakes in many areas. At the same time, it brings non-trivial, large, open-ended interfaces and does not hide implementation complexity as well from the programmer. It brings huge learning curve and cognitive overhead. Implementation/language complexity can be a minefield in itself: humans might get confused, might miss a simpler way to do something, etc. Rust is designed for very patient and conscientious programmers who are willing to spend time and efforts to get things right. Sadly, this is often not the recipe for success in many parts of the software industry.
I’d be happy to see a world where Go fills a high-level niche and Rust makes systems foundation.
I think the trouble with discussions about a language’s “technical merits” is that somewhere along the way some people have lost sight of the purpose of programming languages: to act as an interface to make it easy for programmers to create software. Good languages remove resistance to getting programs written. Bad languages make it harder.
Go is very good at satisfying a particular niche - making it easy to write software without sacrificing much performance. I’d argue that this is a niche which is in high demand and that explains the popularity of Go.
Rust has a different niche - minimising memory access errors while providing sophisticated language features and having good performance. The trade-off is that the language is much harder to master than Go and programming is in general more difficult. Rust’s features are all laudable things but given it’s lower popularity it seems like there’s just less demand for languages of this type.
This may sound defensive. I apologize for my poor writing. Instead, I want to suggest that the entirety of the OP is written from the wrong mindset and that the below points are specific inflections of that wrong mindset.
The article ignores the number one reason that Go was written: SPEED OF COMPILE TIMES! The article also ignores another very important reason that Go was written: It is for programmers, “not capable of understanding a brilliant language but we want to use them to build good software”. The quote is by Rob Pike.
The article places some importance on immutable types, would the average programmer know how to leverage an immutable type to any benefit from it?
“The standard library of Rust is just as rich as that of go” - REALLY? Where is net/http? That absence alone makes this statement an outright lie. Looking for json, xml? Again not in the stdlib. Compression and archiving like tar, zip, bz2, lzw, gzip? Again in the Go stdlib, not in the Rust std. Cryptography including symmetric, asymmetric, and hashes? In the Go stdlib and absent from Rust std. I could go on, but I’d have to refer below the fold of the Go standard lib. Compare https://golang.org/pkg/ to https://doc.rust-lang.org/stable/std/ for yourself.
“The package ecosystem of Rust outmatches…” Maybe, but not in some important aspects, consider https://github.com/search?q=language%3ARust+stars%3A%3E1000&type=Repositories vs. https://github.com/search?l=&p=1&q=language%3AGo+stars%3A%3E1000&ref=advsearch&type=Repositories&utf8=✓
“I think we could call Rust a superior language to Go in, quite literally, every single way possible.” Not in speed of compile time. Not in ease of use for the average and below average developer. When these two points are your most important values, Rust does not look superior at all.
Regarding passing the critical point and being a mainstay, I absolutely agree. Rust is here to stay and I’m glad that it is. Regarding it being a better language than any other for most tasks, I absolutely disagree. Rust’s place is to replace C++. It is a simpler, more sane, language than C++ to be used in the same places, when that level of control is needed. For anything else, a more simple language with less mental load required and faster compile times is better suited to the task.
Finally, on the mindset and point of view, if “superior” does not take the human aspect into consideration at all, this post may have a lot of truth to it, however, code is written by humans. Humans have different needs than a bullet lists of supported features. Keeping in mind the goals of the Go programming language when it was written (from Rob Pikes 2009 Google Tech Talks presentation): type safety, memory safety, good support for concurrency, GC, and speed to compile.
It is easy to forget that as projects grow, compile speeds become non-trivial. Many languages had tackled all of those things, except the last. Go continues to put emphasis on this. When compile speed was greatly slowed with the 1.4 release, it was increased greatly in the next few releases until it was faster than it had been before. This is an important principle in developer productivity. If we stop valuing this, then one of the most important parts of Go isn’t valued. If you aren’t going to value that, then you must say so. It is, after all, one of the most important parts of the language.
Continuing on the mindset and point of view: the article places little value on the simplicity of Go. This is another one of Go’s greatest strengths. There is no doubt that generics, and memory management in Rust make it more complex than Go. Go’s simplicity is such a huge strength that many developers do not want generics in the language. They don’t want that added complexity. To ignore this simplicity as a value is to ignore one of the most important part of the language.
Given these additional things which we must value when comparing things, the conclusions made in the article simply are not that simple. Yes, there is a place for Rust. There is also a place for Go. Should anything being written in Go be written in Rust instead as the article suggest? Absolutely not.
Yeah, that’s a silly statement to make, given that Rust specifically tries to have a small stdlib and pushes non-essential things to the Crates ecosystem. I think this is a trade-off that works for Rust’s favor in the long run, but I understand people who prefer the Go/Python philosophy.
That was my reaction too. The amount of stuff one can do with Go without having to choose between multiple similar but different libraries and without having to write basic stuff oneself is amazing and easy.
“The article ignores the number one reason that Go was written: SPEED OF COMPILE TIMES! The article also ignores another very important reason that Go was written: It is for programmers, “not capable of understanding a brilliant language but we want to use them to build good software”. The quote is by Rob Pike.”
You nailed it right here. Two, supporting points. First, Pike actually developed on Oberon-2 at one point loving its fast flow and lack of segfaults in common uses. He wanted that experience in Go. Second, Google has to hire, train, and make productive a humongous number of people from many backgrounds in shortest time possible. That’s at least their common case which the second point optimizes for. If these arent necessary or are frowned upon, people evaluating languages in such situations might want something other than Go.
Also, Ill add that we dont inherently need Go or rapid compiles to achieve that. A language with long compiles might have quick-good-enough mode for fast iterations with final result sent through optimizing compiler. The second problem can be solved with a distinction between a simpler core with advanced stuff optional and layered on top. Coding guidelines with tools that enforce it can keep the simple core the default. Advanced stuff is allowed where it makes sense (ie macros eliminating boilerplate). I also thought might be helpful to have tool that converts those features into human-readable, core form preserving comments so less skilled can still maintain it.
One thing I find interesting is that the original “blub” language was Java. Go shares the same goal of being accessible to a wide variety of programmers, yet takes an incredibly different tack from Java (embracing generics in 2.0 closes some of the gap. At a meta-level, I suppose you could say they took the same tack: release generics a decade after the language was released).
The “lacks” of Go in the article are highly opinionated and without any context of what you’re pretending to solve with the language.
Garbage collection is something bad? Can’t disagree harder.
The article ends with a bunch of extreme opinions like “Rust will be better than Go in every possible task
There’re use cases for Go, use cases for Rust, for both, and for none of them. Just pick the right tool for your job and stop bragging about yours.
You love Rust, we get it.
Yes, I would argue GC is something that’s inherently bad in this context. Actually, I’d go as far as to say that a GC is bad for any statically typed language. And Go is, essentially, statically typed.
It’s inherently bad since GC dictates the lack of destruction mechanisms that can be reliably used when no reference to the resource are left. In other words, you can’t have basic features like the C++ file streams that “close themselves” at the end of the scope, then they are destroyed.
That’s why Go has the “defer” statement, it’s there because of the GC. Otherwise, destructors could be used to defer cleanup tasks at the end of a scope.
So that’s what makes a GC inherently bad.
A GC, however, is also bad because it “implies” the language doesn’t have good resource management mechanisms.
There was an article posted here, about how Rust essentially has a “static GC”, since manual deallocation is almost never needed. Same goes with well written C++, it behaves just like a garbage collected language, no manual deallocation required, all of it is figured out at compile time based on your code.
So, essentially, a GC does what language like C++ and Rust do at compile time… but it does it at runtime. Isn’t this inherently bad ? Doing something that can be done at CT during runtime ? It’s bad from a performance perspective and also bad from a code validation perspective. And it has essentially no upsides, as far as I’ve been able to tell.
As far as I can tell the main “support” for GC is that they’ve always been used. But that doesn’t automatically make them good. GCs seem to be closer to a hack for a language to be easier to implement rather than a feature for a user of the language.
Feel free to convince me otherwise.
Why do you think this would be the case? A language with GC can also have linear or affine types for enforcing that resources are always freed and not used after they’re freed. Most languages don’t go this route because they prefer to spend their complexity budgets elsewhere and defer/try-with-resources work well in practice, but it’s certainly possible. See ATS for an example. You can also use rank-N types to a similar effect, although you are limited to a stack discipline which is not the case with linear/affine types.
No, not necessarily. Garbage collectors can move and compact data for better cache locality and elimination of fragmentation concerns. They also allow for much faster allocation than in a language where you’re calling the equivalent of malloc under the hood for anything that doesn’t follow a clean stack discipline. Reclamation of short-lived data is also essentially free with a generational collector. There are also garbage collectors with hard bounds on pause times which is not the case in C++ where a chain of frees can take an arbitrary amount of time.
Beyond all of this, garbage collection allows for a language that is both simpler and more expressive. Certain idioms that can be awkward to express in Rust are quite easy in a language with garbage collection precisely because you do not need to explain to the compiler how memory will be managed. Pervasive use of persistent data structures also becomes a viable option when you have a GC that allows for effortless and efficient sharing.
In short, garbage collection is more flexible than Rust-style memory management, can have great performance (especially for functional languages that perform a lot of small allocations), and does not preclude use of linear or affine types for managing resources. GC is hardly a hack, and its popularity is the result of a number of advantages over the alternatives for common use cases.
What idioms are unavailable in Rust or in modern C++, because of their lack of GC, but are available in a statically typed GC language ?
I perfectly agree with GC allowing for more flexibility and more concise code as far as dynamic language go, but that’s neither here nor there.
As for the theoretical performance benefits and real-time capabilities of a GCed language… I think the word theoretical is what I’d focus my counter upon there, because they don’t actually exist. The GC overhead is too big, in practice, to make those benefits outshine languages without runtime memory management logic.
I’m not sure about C++, but there are functions you can write in OCaml and Haskell (both statically typed) that cannot be written in Rust because they abstract over what is captured by the closure, and Rust makes these things explicit.
The idea that all memory should be explicitly tracked and accounted for in the semantics of the language is perhaps important for a systems language, but to say that it should be true for all statically typed languages is preposterous. Languages should have the semantics that make sense for the language. Saying a priori that all languages must account for some particular feature just seems like a failure of the imagination. If it makes sense for the semantics to include explicit control over memory, then include it. If it makes sense for this not to be part of the semantics (and for a GC to be used so that the implementation of the language does not consume infinite memory), this is also a perfectly sensible decision.
Could you give me an example of this ?
As far as I understand and have been told by people who understand Rust quite a bit better than me, it’s not possible to re-implement this code in Rust (if it is, I would be curious to see the implementation!)
https://gist.github.com/dbp/0c92ca0b4a235cae2f7e26abc14e29fe
Note that the polymorphic variables (a, b, c) get instantiated with different closures in different ways, depending on what the format string is, so giving a type to them is problematic because Rust is explicit about typing closures (they have to talk about lifetimes, etc).
My God, that is some of the most opaque code I’ve ever seen. If it’s true Rust can’t express the same thing, then maybe it’s for the best.
If you want to understand it (not sure if you do!), the approach is described in this paper: http://www.brics.dk/RS/98/12/BRICS-RS-98-12.pdf
And probably the reason why it seems so complex is because CPS (continuation-passing style) is, in general, quite hard to wrap your head around.
I do think that the restrictions present in this example will show up in simpler examples (anywhere where you are trying to quantify over different functions with sufficiently different memory usage, but the same type in a GC’d functional language), this is just a particular thing that I have on hand because I thought it would work in Rust but doesn’t seem to.
FWIW, I spent ~10 minutes trying to convert your example to Rust. I ultimately failed, but I’m not sure if it’s an actual language limitation or not. In particular, you can write closure types in Rust with
'static
bounds which will ensure that the closure’s environment never borrows anything that has a lifetime shorter than the lifetime of the program. For example,Box<FnOnce(String) + 'static>
is one such type.So what I mean to say is that I failed, but I’m not sure if it’s because I couldn’t wrap my head around your code in a few minutes or if there is some limitation of Rust that prevents it. I don’t think I buy your explanation, because you should technically be able to work around that by simply forbidding borrows in your closure’s environment. The actual thing where I got really hung up on was the automatic currying that Haskell has. In theory, that shouldn’t be a blocker because you can just introduce new closures, but I couldn’t make everything line up.
N.B. I attempted to get any Rust program working. There is probably the separate question of whether it’s a roughly equivalent program in terms of performance characteristics. It’s been a long time since I wrote Haskell in anger, so it’s hard for me to predict what kind of copying and/or heap allocations are present in the Haskell program. The Rust program I started to write did require heap allocating some of the closures.
It’s inherently bad since GC dictates the lack of destruction mechanisms that can be reliably used when no reference to the resource are left. In other words, you can’t have basic features like the C++ file streams that “close themselves” at the end of the scope, then they are destroyed.
Deterministic freeing of resources is not mutually exclusive with all forms of garbage collection. In fact, this is shown by Rust, where reference counting (`Rc`) does not exclude `Drop`. Of course, `Drop` may never be called when you create cycles.
(Unless you do not count reference counting as a form of garbage collection.)
Well… I don’t count shared pointers (or RC pointers or w/e you wish to call them) as garbage collected.
If, in your vocabulary, that is garbage collection then I guess my argument would be against the “JVM style” GC where the moment of destruction can’t be determined at compile time.
If, in your vocabulary, that is garbage collection
Reference counting is generally agreed to be a form of garbage collection.
I guess my argument would be against the “JVM style” GC where the moment of destruction can’t be determined at compile time.
In `Rc` or `shared_ptr`, the moment of the object’s destruction can also not be determined at compile time. Only the destruction of the `Rc` itself (put differently, the reference count decrement) can be determined at compile time.
I think your argument is against tracing garbage collectors. I agree that the lack of deterministic destruction is a large shortcoming of languages with tracing GCs. It effectively brings back a parallel to manual memory management through the backdoor: it requires manual resource management. You don’t have to convince me :). I once wrote a binding to Tensorflow for Go. Since Tensorflow wants memory aligned on 32-byte boundaries on amd64 and Go allocates (IIRC) on 16-byte boundaries, you have to allocate memory in C-land. However, since finalizers are not guaranteed to run, you end up managing memory objects with `Close()` functions. This was one of the reasons I rewrote some fairly large Tensorflow projects in Rust.
Hmm. This seems a bit odd to me. As I understand it, Go code that binds to C libraries tends to use finalizers to free memory allocated by C. Despite the lack of a guarantee around finalizers, I think this has worked well enough in practice. What caused it to not work well in the Tensorflow environment?
When doing prediction, you typically allocate large tensors relatively rapidly in succession. Since the wrapping Go objects are very small, the garbage collector kicks in relatively infrequently, while you are filling memory in C-land. There are definitely workarounds to put bounds on memory use, e.g. by using an object pool. But I realized that what I really want is just deterministic destruction ;). But that may be my C++ background.
I have rewritten all that code probably around the 1.6-1.7 time frame, so maybe things have improved. Ideally, you’d be able to hint the Go GC about the actual object sizes including C-allocated objects. Some runtimes provide support for tracking C objects. E.g. SICStus Prolog has its own malloc that counts allocations in C-land towards the SICStus heap (SICStus Prolog can raise a recoverable exception when you use up your heap).
Interesting! Thanks for elaborating on that.
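(To make the pattern discussed above concrete, here is a minimal Go sketch of a wrapper around C-allocated memory that relies on an explicit `Close()`, with a finalizer only as a best-effort safety net. The names `Tensor`, `cAlloc`, and `cFree` are hypothetical stand-ins for the real cgo calls.)

```go
package main

import "runtime"

// Tensor wraps memory allocated in C-land; the Go struct itself is tiny,
// so the Go GC has little pressure to collect it promptly.
type Tensor struct {
	ptr uintptr // handle to C-allocated, 32-byte-aligned memory
}

func NewTensor(size int) *Tensor {
	t := &Tensor{ptr: cAlloc(size)}
	// Best-effort safety net only: finalizers are not guaranteed to run,
	// and may run long after megabytes have piled up in C-land.
	runtime.SetFinalizer(t, func(t *Tensor) { t.Close() })
	return t
}

// Close releases the C memory deterministically; callers must invoke it.
func (t *Tensor) Close() {
	if t.ptr != 0 {
		cFree(t.ptr)
		t.ptr = 0
		runtime.SetFinalizer(t, nil) // the safety net is no longer needed
	}
}

// Hypothetical stand-ins for the real cgo allocation calls.
func cAlloc(size int) uintptr { return 1 }
func cFree(ptr uintptr)       {}

func main() {
	t := NewTensor(1 << 20)
	defer t.Close() // deterministic release, not left to the GC
}
```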
So Python, Swift, Nim, and others all have RC memory management … according to you these are not GC languages?
One benefit of GC is that the language can be way simpler than a language with manual memory management (either explicitly like in C/C++ or implicitly like in Rust).
This simplicity then can either be preserved, keeping the language simple, or spent on other worthwhile things that require complexity.
I agree that Go is bad and Rust is good, but let’s be honest: Rust is rapidly approaching C++ levels of complexity as it keeps adding features with almost every release.
That is a terrible point. The result of closing the file stream should always be checked and reported, or you will have buggy code that can’t handle edge cases.
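(A minimal Go sketch of that point, using only the standard library: the error from `Close` is surfaced instead of being silently dropped by a bare `defer f.Close()`.)

```go
package main

import (
	"log"
	"os"
)

// writeFile returns any error from Close as well: a failed close can
// mean buffered data never actually reached the disk.
func writeFile(path string, data []byte) (err error) {
	f, err := os.Create(path)
	if err != nil {
		return err
	}
	defer func() {
		// Don't let a bare `defer f.Close()` swallow the edge case.
		if cerr := f.Close(); cerr != nil && err == nil {
			err = cerr
		}
	}()
	_, err = f.Write(data)
	return err
}

func main() {
	if err := writeFile("example.txt", []byte("hello\n")); err != nil {
		log.Fatal(err)
	}
}
```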
You can turn off garbage collection in Go and manage memory manually, if you want.
It’s impractical, but possible.
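(For what it’s worth, a minimal sketch of what that looks like: `debug.SetGCPercent(-1)` is the documented way to disable collection from within the program, equivalent to running with GOGC=off.)

```go
package main

import (
	"runtime"
	"runtime/debug"
)

func main() {
	// Disable the collector from inside the program;
	// equivalent to running with the environment variable GOGC=off.
	debug.SetGCPercent(-1)

	// ... allocate as usual; nothing is reclaimed behind your back now ...

	// "Manual" management then amounts to triggering collections yourself:
	runtime.GC()         // run a collection at a point of your choosing
	debug.FreeOSMemory() // force a collection and return memory to the OS
}
```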
Is this actually used in any production code? To my knowledge it was meant to be more of a feature for debugging and for the language developers, rather than a true GC-less option like the one a language such as D provides.
Here is a shocking fact: For those of us who write programs in Go, the garbage collector is actually a wanted feature.
If you work on something where having a GC is a real problem, use another language.
Just another “why Rust should replace Language X” article without much depth. By now the greatest barrier to entry for Rust is the community’s attitude. Go lacks in a certain field that Rust does well? Let’s talk for hours about how Rust does it better. Rust lacks the async networking that Go was designed to do well? No problem, we’ll catch up for sure.
They can’t figure out why everyone doesn’t love their thing and why some like this other thing so much better while they are convinced that the other thing is inferior.
I don’t think this is true for the whole Rust community. However, when a language goes through a period of exponential growth, there are a lot of newcomers who are intoxicated with the language. It usually takes a few years for the community at large to get a better perspective on the role of their favorite language. It’s pretty much what happens during every language hype cycle.
I also fundamentally disagree with the author. Go is not a bad language; it just has different design goals than Rust. I think the strongest criticism one can make is that it does not provide many improvements over modern Pascal, Modula-2, or Oberon-2. The Wirth languages are also very simple. But Go has clearly shown that rebooting a language philosophy with a new ecosystem and a more familiar (C-like) syntax can bring success.
I think even Wirth would design a language in 2018 differently compared to his work in 1971.
Go creators seem to disagree, discarding almost FIFTY years of progress in language design.
They can do whatever they want, but please don’t drag Wirth through the mud.
I think even Wirth would design a language in 2018 differently compared to his work in 1971.
Oberon-2 is from 1991 and Wirth has updated its specification. Here is the 2013 revision of the spec:
https://www.inf.ethz.ch/personal/wirth/Oberon/Oberon07.Report.pdf
He also still seems to be working on the compiler:
https://www.inf.ethz.ch/personal/wirth/news.txt
It still looks as simple as it did when I wrote some Oberon in the early 00s.
That’s unsurprising and doesn’t detract from anything I wrote.
Have you even considered that they don’t necessarily agree, i.e. that fifty years of overcomplicated type system designs might not be considered “progress” by everyone?
No. Quite honestly, I believe they just haven’t bothered to look at anything they didn’t “invent” themselves.
I think the article is good, but it overlooks a big factor that sets Go apart from Rust. To phrase it as a joke:
I’ve loved every programming language I’ve tried before I’ve used it in production.
A more thorough explanation starts with Google offering full-time production usage to a large number of developers who subsequently churned into other startups. Go has some rough edges, but its core is easy to pick up. This means a single expert on a team can get several other developers productive within a few weeks. You just need one person to catch the gotchas, explain the idioms, and guide the general structure of the project.
So while I think the article describes a small flame of interest, there was a giant company pouring millions of dollars of gasoline onto the fire. Mozilla is great for supporting the development of Rust, but they don’t have the money to subsidize its commercial adoption. While I’d love for a language’s intrinsic properties to drive its success, the environment it is released into plays an outsized role.
edit: For the past few months I’ve played with Rust and I love the language; I really hope its usage becomes more widespread. Until Rust has a wealthy benefactor churning out experienced developers, who get hired for their ops/python/ML/etc. experience and then start introducing the language to production environments, its adoption will be slower than any of us want.
You know someone is over-hyping Rust (or is just misinformed) when you see statements like
The borrow checker prevents data races, which are only a subset of concurrency errors. Race conditions in general are still very possible, and not going away any time soon. This blog post does a good job explaining the difference.
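(A short Go sketch of the difference, since the same distinction exists there: every access below is mutex-protected, so there is no data race and the race detector stays quiet, yet the check-then-act gap is still a race condition.)

```go
package main

import (
	"fmt"
	"sync"
)

var (
	mu      sync.Mutex
	balance = 100
)

// withdraw is free of data races (all shared access is locked),
// but the gap between the check and the act is a race condition.
func withdraw(amount int) bool {
	mu.Lock()
	ok := balance >= amount // check
	mu.Unlock()
	// Another goroutine may withdraw here, between check and act.
	if ok {
		mu.Lock()
		balance -= amount // act
		mu.Unlock()
	}
	return ok
}

func main() {
	var wg sync.WaitGroup
	for i := 0; i < 2; i++ {
		wg.Add(1)
		go func() {
			defer wg.Done()
			withdraw(80)
		}()
	}
	wg.Wait()
	fmt.Println("balance:", balance) // can print -60: overdrawn
}
```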
Additionally, I have my worries about async/await in a language that is also intended to be used in places that need control over low-level details. One library that decides to use raw I/O syscalls for some unlikely task (like error logging) and, whoops, there goes your event loop. Bounded thread pools don’t solve this (what happens if you hit the max? It’s equivalent to a limited semaphore), virtual dispatch becomes more of a hazard (are you sure every implementation knows about the event loop? How can you be sure as a library author?), what if you have competing runtime environments (see twisted/gevent/asyncio/etc. in the Python community; this may arguably be more of a problem in Rust given its focus on programmer control), and the list goes on. In Go, you literally never have to worry about this, and it’s the greatest feature of the language.
It doesn’t help that they state (or did state until recently) on their website that Rust was basically immune to any kind of concurrency error.
That definition of “race condition minus data race” essentially refers to an operational logic error on the programmer’s side. As in, there’s no way to catch race conditions that aren’t data races via a compiler, unless you have a magical business-logic-aware compiler, at which point you wouldn’t need a programmer.
As far as the issues with async I/O go… well, yes, asyncio wouldn’t solve everything. But asyncio also wouldn’t necessarily have to be single-threaded. It could just mean that a multi-threaded networking application now spends fewer resources on context-switching between threads. And the parallelism of threads > cpu_count still comes in useful for the various blocking operations which may appear here and there.
As far as Go’s solution goes, their answer to the performance issue isn’t that good, since goroutines have significant overhead: much less than a native thread, but still considerably more than something like mio.
The issue you mentioned as an example, a hidden sync I/O syscall made by some library, can happen inside a goroutine just as well; the end result will essentially be an OS-native thread being blocked, much like in Rust. At least, as far as my understanding of goroutines goes, that seems to be the case.
Granted, working with a “pool” of event loops representing multiple threads might be harder than just using goroutines, but I don’t see it as being that difficult.
That definition is the accurate, correct definition. It’s important to state that Rust helps with data races, and not race conditions in general. Even the Rustonomicon makes this distinction clear.
The discussion around multiple threads seems like a non-sequitur to me. I’m fully aware that async/await works fine with multiple threads. I also don’t understand why the performance considerations of goroutines were brought into the picture. I’m not making any claims about performance, just ease of use and programmer model. (Though, I do think it’s important to respond that goroutines are very much low enough overhead for many common tasks. It also makes no sense to talk about performance and overhead outside of the context of a problem. Maybe a few nanoseconds per operation is important, and maybe it isn’t.)
The issue I mentioned does not happen in Go: all of the syscalls/locks/potentially blocking operations go through the runtime, and so it’s able to deschedule the goroutine and let others run. This article is another great treatment of the topic.
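(A minimal sketch of what “going through the runtime” buys you, using only the standard library: even restricted to a single processor, thousands of goroutines blocked on a channel all make progress, because blocking parks the goroutine in the scheduler rather than tying up the thread.)

```go
package main

import (
	"fmt"
	"runtime"
	"sync"
)

func main() {
	runtime.GOMAXPROCS(1) // one processor for all goroutines

	const n = 10000
	ch := make(chan int)
	var wg sync.WaitGroup

	for i := 0; i < n; i++ {
		wg.Add(1)
		go func() {
			defer wg.Done()
			<-ch // blocks the goroutine, not the underlying thread
		}()
	}

	for i := 0; i < n; i++ {
		ch <- i // each send wakes one parked goroutine
	}
	wg.Wait()
	fmt.Println("all", n, "goroutines ran on one processor")
}
```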
It’s great that you’re optimistic about the future direction Rust is taking with it’s async story. I’m optimistic too, but that’s because I have great faith in the leadership and technical design skills of the Rust community to solve these problems. I’m just pointing out that they ARE problems that need to be solved, and the solution is not going to be better than Go’s solution in every dimension.
Ok, maybe I’m mistaken here but:
“Descheduling a goroutine”: when a function call is blocking, descheduling a goroutine has the exact same cost as descheduling a thread, which is huge.
Secondly, Go only uses non-blocking syscalls under the hood for networking I/O calls at the moment. So if I want to wait for an operation on some random file, or wait for an asynchronous prefetch call, I’m unable to do so; I have to actually block the underlying thread that the goroutine is using.
I haven’t seen any mention of all blocking syscalls being treated in an async manner. They go through the runtime, yes, but the runtime may just decide that it can do nothing about them other than let the thread be descheduled as usual. And, as far as I know, the runtime is only “smart” about networking I/O syscalls at the moment; the rest are treated like blocking operations.
Please correct me if this is wrong.
A goroutine being descheduled means it yields the processor and calls into the runtime scheduler, nothing more. What happens to the underlying OS threads is another matter entirely. This can happen at various points where things could block (e.g. chan send / recv, entering mutexes, network I/O, even regular function calls), but not at every such site.
Yes, sockets and pipes are handled by the poller, but what else could it be smarter about? The situation may well be different on other operating systems, but at least on Linux, files on disk are always ready as far as epoll is concerned, so there is no need to go through the scheduler and poller for those. In that case, I/O blocks both the goroutine and the thread, which is fine for Go. For reference, in this situation, node.js uses a thread pool that it runs file I/O operations on, to avoid blocking the event loop. Go doesn’t really need to do this under the covers, though, because it doesn’t have the concept of a central event loop that must never be blocked waiting for I/O.
Descheduling a goroutine is much cheaper than descheduling a thread. Goroutines are cooperative with the runtime, so they ensure that there is minimal state to save when descheduling (no registers, for example). It’s on the order of nanoseconds vs microseconds. Preemptive scheduling helps in a number of ways, but typically causes context switching to be more expensive: you have to be able to stop/start at any moment.
Go has an async I/O loop, yes, but it runs in a separate managed thread by the runtime. When a goroutine would wait for async I/O, it parks itself with the runtime, and the thread the goroutine was running on can be used for other goroutines.
While the other syscalls do in fact take up a thread, critically, the runtime is aware when a goroutine is going to enter a syscall, and so it can know that the thread will be blocked, and allow other goroutines to run. Without that information, you would block up a thread and waste that extra capacity.
The runtime manages a thread pool and ensures that GOMAXPROCS threads are always running your code, no matter what syscalls or I/O operations you’re doing. This is only possible if the runtime is aware of every syscall or I/O operation, which it can’t be if your language and standard library aren’t designed to provide that awareness. Rust’s aren’t, for good reasons: there are tradeoffs with respect to FFI speed, control, zero overhead, etc. They are different languages with different goals, and one isn’t objectively better than the other.
Pretty much everything that could block goes through sockets and pipes, though. The only real exception is file I/O, and file I/O being unable to be epolled in a reasonable way is a kernel problem, not a Go problem.
I am happy about this article. While I’m a Go fan, it’s actually great that there are more languages. The main reason is not the competition (I don’t really see them competing anyway), but a problem that languages frequently face as they become popular: you get more and more programmers who use the language mostly or exclusively because of its success, while actually wanting to use a different language, either consciously or subconsciously. The outcome of subconsciously wanting to use a different language is that one bends the language, libraries, frameworks, and ecosystem into being like that other language.
The result of that is that big languages often become the same generic “all the hip features + all the used to be hip ten years ago features” style language.
This is not to say languages are never allowed to adapt or copy good ideas. It’s just that, now that we have all these languages with biggish user bases, it would be sad if the differences ended up being merely cosmetic while the approaches to solving the problems at hand became pretty much equal.
This is compiler dependent, not language dependent. Go is supported by GCC on equal footing with Ada and Fortran.
And the Go compiler from Google is both fast at compiling and produces programs with good performance.
“Go is very slow” is highly inaccurate.
Other folks have grabbed the important technical issues, but something I think is also missed is the weight of GOOG behind golang. The Rust evangelism strike force just isn’t as well-funded.
I see this a lot and I disagree with it pretty much completely. Google doesn’t care whether people outside Google use Go at all really. They really aren’t pushing anyone to use it or pressuring anyone to use it. It’s not heavily advertised to new programmers in the same way something like Java and C# are by the companies that back them. Google supports Go heavily, but despite that I’m pretty sure I see way more evangelism and advertisement for Rust despite it having no big corporate sponsors to speak of.