Generally I’ve found channels to be great for synchronization and a few other limited cases (i.e. the keyword “select” is used, or you’re capturing OS signals). Otherwise I just use mutexes and shared state. It’s easier for me to reason about somehow. Also, channels are surprisingly not particularly performant (this is borne out in a micro-benchmark where they’re used as a thread-safe queue).
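To make the comparison concrete, here’s a minimal sketch of the two queue styles being contrasted — a buffered channel used as a FIFO versus a mutex-guarded slice. The type names (`chanQueue`, `mutexQueue`) are hypothetical, not from any benchmark in the article:

```go
package main

import (
	"fmt"
	"sync"
)

// chanQueue uses a buffered channel as a thread-safe FIFO.
type chanQueue struct{ ch chan int }

func (q *chanQueue) push(v int) { q.ch <- v }
func (q *chanQueue) pop() int   { return <-q.ch }

// mutexQueue guards a plain slice with a mutex.
type mutexQueue struct {
	mu    sync.Mutex
	items []int
}

func (q *mutexQueue) push(v int) {
	q.mu.Lock()
	q.items = append(q.items, v)
	q.mu.Unlock()
}

func (q *mutexQueue) pop() (int, bool) {
	q.mu.Lock()
	defer q.mu.Unlock()
	if len(q.items) == 0 {
		return 0, false
	}
	v := q.items[0]
	q.items = q.items[1:]
	return v, true
}

func main() {
	cq := &chanQueue{ch: make(chan int, 2)}
	cq.push(1)
	cq.push(2)
	fmt.Println(cq.pop(), cq.pop()) // 1 2

	mq := &mutexQueue{}
	mq.push(1)
	v, _ := mq.pop()
	fmt.Println(v) // 1
}
```

Wrapped in `testing.B` benchmarks, the channel version typically pays extra for its internal locking and scheduling, which is the kind of result the micro-benchmark claim refers to.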
I worked at SpaceMonkey for a time. One of the big things I took away from working on that codebase is actually something JT doesn’t mention, but it kind of falls out of his points: channels make bad package APIs.
There’s a reason things like bufio.Scanner or sql.Rows don’t have a token channel. The power of channels, imo, comes from being able to use them with goroutines transparently behind a nice, synchronous-looking function-based API.
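A quick sketch of what that pattern looks like — a `bufio.Scanner`-style synchronous API where a goroutine and channel do the work internally but never appear in the package’s surface. The `lineScanner` type and its feeding logic are hypothetical, just to show the shape:

```go
package main

import (
	"fmt"
	"strings"
)

// lineScanner exposes a synchronous Scan/Text API (like bufio.Scanner)
// while a goroutine and channel operate behind the scenes.
type lineScanner struct {
	ch   chan string
	text string
}

func newLineScanner(s string) *lineScanner {
	ls := &lineScanner{ch: make(chan string)}
	go func() {
		for _, line := range strings.Split(s, "\n") {
			ls.ch <- line
		}
		close(ls.ch)
	}()
	return ls
}

// Scan blocks until the next line is available; the caller
// never sees the channel.
func (ls *lineScanner) Scan() bool {
	line, ok := <-ls.ch
	ls.text = line
	return ok
}

func (ls *lineScanner) Text() string { return ls.text }

func main() {
	sc := newLineScanner("alpha\nbeta")
	for sc.Scan() {
		fmt.Println(sc.Text())
	}
}
```

The caller gets an ordinary loop; the concurrency is an implementation detail, which is the point: the channel powers the API rather than being the API.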
Just like any tool, CSP works really well for some use cases and not so well for others. The author has picked a case where a mutex was a natural fit, but instead he used contorted methods to force it into a CSP model implemented with Go channels. It’s like posting an article saying that he’s implemented a vector using linked lists and then complaining that it didn’t work well.
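The contrast is easy to see with a toy example — simple shared state protected by a mutex versus the same state forced into a CSP shape, where a goroutine owns the value and serves requests over channels. This is an illustrative sketch, not the article’s actual code:

```go
package main

import (
	"fmt"
	"sync"
)

// Mutex version: the natural fit for simple shared state.
type counter struct {
	mu sync.Mutex
	n  int
}

func (c *counter) inc() {
	c.mu.Lock()
	c.n++
	c.mu.Unlock()
}

// Channel version: the same counter in CSP style -- a goroutine
// owns the state and serves increments and reads over channels.
func channelCounter() (inc func(), get func() int) {
	incCh := make(chan struct{})
	getCh := make(chan int)
	go func() {
		n := 0
		for {
			select {
			case <-incCh:
				n++
			case getCh <- n:
			}
		}
	}()
	return func() { incCh <- struct{}{} }, func() int { return <-getCh }
}

func main() {
	c := &counter{}
	c.inc()
	c.inc()
	fmt.Println(c.n) // 2

	inc, get := channelCounter()
	inc()
	inc()
	fmt.Println(get()) // 2
}
```

Both are correct, but the channel version needs a dedicated goroutine, two channels, and a select loop to do what three lines of mutex code do.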
I’ve noticed concern about allocations come up in a few Go articles I’ve read. Why is this the case? Is it because of GC pause latency? As a counterpoint, in Erlang, which has only message passing except in a few libraries, allocations are very rarely a concern. This is mostly because each process (roughly a goroutine) has its own heap, and doing a GC on a single process is very painless. GC pauses are virtually non-existent except in some pathologically bad code.
This was blocked at my work for pornography??? Looking at the title I should have known.
My last job used a McAfee gateway that would block pages based on sometimes seemingly random categories such as pornography, gambling, “hacking”, etc.
At first I tried to figure out why a page would be blocked, but after a while I just gave up and chalked it up to shitty software. This might be one of those cases for you.
weird. the only thing I can think of that might cause that is that this site is being served straight out of an S3 bucket. maybe they’re blocking S3-served sites?
I do the same thing.
voronoipotato, do you get the same thing with http://callcc.io?
Nope your site is just fine, loads perfectly.
Huh, if you do figure out what’s up, please let me know!
Weird, I would have guessed an overly-strict firewall was blocking the S3 IP range.
Really interesting article, but there was one tidbit at the end that confused me:
… goroutines are Go’s best feature (and incidentally one of the ways Go is better than Rust for some applications)
Is this implying that Go’s M:N threading makes it a fundamentally more “correct” choice for certain applications? My understanding is that goroutines, while more efficient than threads in Rust, are still subject to the same resource exhaustion issues. For example, you might be able to get away with 1M goroutines as opposed to 100k Rust threads, but that doesn’t mean you never have to think about the number of goroutines.
M:N threading can be “more convenient”.
At the risk of repeating the basics: using a full-size stack for every continuation when you have large numbers of almost-identical continuations is clearly horribly inefficient.

E.g. imagine you have a basic CRUD web backend that just takes an id, loads the relevant object from some datastore, transforms it into JSON and returns it to the client. While the request is being processed by the datastore, the app needs to store enough data to, when it receives the response from the datastore, know which client to send the rendered JSON back to. That only actually means storing two small integers - the (socket) id of the datastore connection and the id of the web connection - potentially as little as 8 bytes (or even 4 bytes on a 32-bit machine) in total.

But in traditional one-thread-per-request architectures each request would have its own thread, meaning its own stack, meaning at an absolute minimum one 4k physical page of memory. Whereas if you’re just storing a continuation for the datastore connection, that’s potentially just simple data, and the language runtime could potentially store the continuations for many requests in the same memory page.
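The “two small integers” version of that state can be sketched as a pending-request table in an event-loop style server. The `pendingTable` name and the ids are made up for illustration:

```go
package main

import "fmt"

// pendingTable maps a datastore request id to the client connection
// id -- the only per-request state this event loop needs (two small
// integers), instead of a whole stack per thread or per request.
type pendingTable map[uint32]uint32

func main() {
	pending := pendingTable{}

	// Client 42 issues a request; we fire off datastore request 7
	// and record just the pair of ids.
	pending[7] = 42

	// Later, the datastore response for request 7 arrives; look up
	// which client gets the rendered JSON, then clean up.
	client := pending[7]
	delete(pending, 7)
	fmt.Printf("send response to client %d\n", client)
}
```

Thousands of in-flight requests then cost a few bytes each in one map, versus a stack (at least one physical page) per thread in the one-thread-per-request design.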
So there’s at least a theoretical argument that non-native threads can be more efficient. AIUI (and I could easily be wrong here) M:N green threads (rather than full continuations) mostly make sense in the context of segmented stacks (i.e. variable stack sizes). Both Rust and Go started with segmented stacks and green threads and eventually abandoned the segmented stacks; Rust abandoned the green threads at least partly on the grounds that they no longer made sense. I would think the same reasoning may apply to Go, at least if it’s storing a full stack for each goroutine (maybe it isn’t?) - that now that segmented stacks have been abandoned there’s no value in M:N. But I could easily be missing something.
That is an interesting question. The best information I could find in a quick google search was that Go 1.2 increased the minimum stack size of a goroutine from 4KB to 8KB. It also looks like Rust allows you to set the stack size of a new thread. I’m not knowledgeable enough to know exactly what this means, but it’s certainly interesting.
Is 100k OS threads an actually reasonable number? Lightweight threads seem better for a web server that spawns a “thread” per request, but I could be wrong.
I was thinking of this comment thread, which has a lot of good discussion on the topic. This comment specifically has an interesting experiment that seems to imply 100k+ is feasible.