1. 112

  2. 47

    I found this article far more valuable than the title led me to expect. Thanks for taking the time to make it so concretely relatable.

    1. 30

      The example with Postgres is pretty interesting because it made me realize that there’s an entire generation of programmers who got exposed to async I/O before the threaded/synchronous style, via JavaScript!

      It makes sense, but I never thought about that. It's funny: I probably would have experienced threads as a convenient revelation too, if my initial thinking had revolved around async :) That said, there are plenty of downsides to threads in most environments; it does seem like you need a purpose-built runtime like Erlang / Go to make sure everything gets done right (timeouts, cancellation, easy-to-use queues / semaphores, etc.)

      It's similar to the recent post about Rust being someone's first experience with systems programming. That will severely affect your outlook, for better and worse. There are also a lot of people who learned C++ before C, and I can only imagine how bewildering an experience that is.

      1. 4

        Yeah, threads really are a “convenient revelation”! Aren’t OS-level threads implemented on top of CPU-level callbacks? https://wiki.osdev.org/Interrupt_Descriptor_Table

        1. 10

          I wouldn’t call CPU-level interrupt handlers “callbacks”. They’re too low-level of a concept for that. It’d be like calling an assembly-language JMP instruction a CPU-level case statement, just because case statements are ultimately implemented in terms of JMP or a similar CPU instruction.

          1. 4

            I got into programming when threading was current but still recent. This reminds me of the day I finally understood that multiprocessing was previously done by, you guessed it, multiple processes.

            1. 2

              I should have said “synchronous” or “straight line code” and not “threads”. There is a lot of overloading of terms in the world of concurrency which makes conversations confusing. See my other reply:

              https://lobste.rs/s/eaaxsb/i_finally_escaped_node_you_can_too#c_fg9k7y

              I agree with the other reply that the term “callbacks” is confusing here. Callbacks in C vs. JavaScript are very different things because of closures (and GC).

              I’d say if you want to understand how OS level threads are implemented, look up how context switches are implemented (which is very CPU specific). But I’m not a kernel programmer and someone else may give you a better pointer.

            2. 3

              Although there are plenty of downsides of threads in most environments

              How come? After all, threads are the basic multithreading building block exposed directly by the OS.

              1. 10

                I should have said synchronous / "straight line" code – saying "threads" sort of confuses the issue. You can have straight-line code with process-level concurrency (though no shared state, which is limiting for certain apps, but maybe not as much as you think).

                It’s very easy to make an argument that threads exposed by the OS (as opposed to goroutines or Erlang processes) are a big trash fire of design. Historically that’s true; it’s more a product of evolution than design.

                One reason is that global variables are idiomatic in C, and idiomatic in the C standard library (e.g. errno, which is now a thread local). Localization also uses global variables, which is another big trash fire I have been deep in: https://twitter.com/oilshellblog/status/1374525405848240130

                Another big reason is that when threads were added to Unix, syscalls and signals had to grow semantics with respect to threads. For example select() and epoll(). In some cases there is no way to reconcile it, e.g. fork() is incompatible with threading in fundamental ways.

                The other reason I already mentioned is that once you add threads, timeouts and cancellation should be handled with every syscall in order for you to write robust apps. (I think Go and node.js do a good job here. In C and C++ you really need layers on top; I think ZeroMQ gives you some of this.)
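
                As a concrete sketch of the "timeouts with every call" point: in JS you can bolt a timeout onto any promise-returning call with Promise.race. The helper name withTimeout below is my own, not a standard API:

```javascript
// Hypothetical helper: bolt a timeout onto any promise-returning call.
function withTimeout(promise, ms) {
  let timer;
  const timeout = new Promise((_, reject) => {
    timer = setTimeout(() => reject(new Error("timeout")), ms);
  });
  // Whichever settles first wins; always clear the timer so a pending
  // timeout doesn't keep the process alive.
  return Promise.race([promise, timeout]).finally(() => clearTimeout(timer));
}
```

                Note this only abandons the wait; it does not cancel the underlying operation, which is exactly the cancellation problem described above.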

                So basically when you add threads, EVERYTHING about the language has to change: data structures and I/O. And of course C didn't have threads originally. Neither did C++ for a long time; they have a portable threading API now (std::thread, since C++11), but few people use it.


                The original concurrency primitive exposed by Unix is processes, not threads. You can say that a thread is a process that allows you to write race conditions :)

                From the kernel point of view they’re both basically context switches, except that in the threading case you don’t change the address space. Thus you can race on the entire address space of the process, which is bad. It’s a mechanism that’s convenient for kernel implementers, but impoverished for apps.

                OS threads are pretty far from what you need for application programming. You need data structures and I/O too and that’s what Go, Erlang, Clojure, etc. provide. On the other hand, if your app can fit within the limitations of processes, then you can write correct / fast / low level code with just the OS. I hope to make some of that easier with Oil; I think process-level concurrency is under-rated and hard to use, and hence underused. Naive threading results in poor utilization on modern machines, etc.

                tl;dr Straight-line code is good; we should be programming concurrent applications with high level languages and (mostly) straight line code. OS threads in C or C++ are useful for systems programming but not most apps

                1. 1

                  Race conditions, data races, and deadlocks are a bit of an overstated problem. In 99% of cases, people are just waiting for IO; protecting shared data structures with locks is trivial, and taking a lock will often cost 3+ orders of magnitude less time than the IO itself. It is a non-issue, honestly.

                  Personally, I find the original P() and V() semantics introduced by Dijkstra to be the easiest concurrency idiom to reason about. All these newer/alternative semantics, be they promises, futures, deferreds, callbacks, run-to-completion, async keywords and what have you, feel like a hack compared to that. If you can spawn a new execution flow (for lack of a better name) without blocking your current one, and query it for completion, then you can do it with almost whatever construct you have. Including threads.
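
                  To make the comparison concrete, here is a minimal counting semaphore with Dijkstra's P and V, sketched on top of promises. This is a toy illustration for a single-threaded async runtime like Node, not production code:

```javascript
// A minimal counting semaphore with Dijkstra's P (acquire) and V (release).
class Semaphore {
  constructor(count) {
    this.count = count;   // available permits
    this.waiters = [];    // resolvers of blocked P() calls
  }
  // P: wait until a permit is available, then take it.
  async P() {
    if (this.count > 0) {
      this.count--;
      return;
    }
    await new Promise((resolve) => this.waiters.push(resolve));
  }
  // V: return a permit, waking one waiter if any.
  V() {
    const next = this.waiters.shift();
    if (next) next();     // hand the permit directly to a waiter
    else this.count++;
  }
}
```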

                  The case for threads is that you can share data, thus saving large amounts of memory.

                  In all seriousness, which percentage of people uses concurrency for other purposes than circumventing the need to wait for IO?

                2. 2

                  There are a lot of ways to shoot yourself in the foot with something like pthreads. The most common is probably trying to share memory between threads: something as simple as appending a value to a dynamic array fails spectacularly if two threads try to do it at the same time and there's no synchronization mechanism. The same applies to most of your go-to data structures.

                  1. 4

                    Shared memory has more to do with the language and its memory-management model than with sync vs. async, though. You can have an async runtime scheduled N:M where it's up to you to manage resource sharing.

                    That's the case if you use, for example, libuv in C with a threadpool for scheduling. On the other hand, Erlang, which does pretty much all communication asynchronously, would not have the same issue.

                    1. 1

                      What’s the problem with adding a semaphore right before adding the value? Is it not how everyone does it? (honest question)

                  2. 2

                    The example with Postgres is pretty interesting because it made me realize that there’s an entire generation of programmers who got exposed to async I/O before the threaded/synchronous style, via JavaScript!

                    Your comment made me realize that! Crazy, but very interesting…

                    I wonder if that has any impact on how good these new developers are or will be at parallel or synchronous programming.

                    The problem is that JavaScript is such a loosey-goosey language that I'm fairly convinced people are writing incorrect async code in JavaScript; it just works well enough that they don't notice, which might leave them worse off. Maybe I'm just being elitist, but I reviewed some of my own Node code recently and caught several mistakes in a "Notifier" object that had to manage its own state asynchronously. It never caused an issue "in the field", so I only noticed while refactoring to remove a deprecated dependency.

                    EDIT: Also, I'm one of those who learned C++ before C (and I don't claim that I "know" C by any stretch: I understand the differences between the languages in a technical sense, but I can't write or read idiomatic C in real code bases). But I learned C++ before C++11, so I think that might not be what you are talking about. Learning C++98 probably wasn't as bewildering as learning modern C++ would be, because we didn't have smart pointers, or ranges, or variants, etc. The weirdest thing at the time was probably the STL iterators-and-algorithms stuff, but all of that just felt like an "obvious" abstraction over pointers and for loops.

                    1. 2

                      Yeah, JS (and Node backend code) has really interesting asynchronous behaviour; when folks start using other languages with better computational concurrency/parallelism, a lot of things that they relied on will no longer be true. Easiest example is the fact that there’s only ever one JS “thread” executing at any given time, so function bodies that would have race conditions don’t (because the function is guaranteed to continue executing before a different one starts).
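
                      For example (a made-up check-then-act cache, names mine): each await is a point where other callers can run, so two concurrent calls can both see a miss and duplicate the work. The single-thread guarantee only covers the code between suspension points:

```javascript
// State shared across `await` points can still interleave, even with one
// JS thread. Both concurrent callers of getNaive() see a cache miss
// before either one stores, so the slow load runs twice.
const cache = new Map();
let loads = 0;

async function loadSlow(key) {
  loads++;
  await new Promise((resolve) => setTimeout(resolve, 10)); // simulated I/O
  return key.toUpperCase();
}

async function getNaive(key) {
  if (!cache.has(key)) {                // check
    const value = await loadSlow(key);  // suspension point: others run here
    cache.set(key, value);              // act
  }
  return cache.get(key);
}

// One fix: cache the promise itself, so concurrent callers share one load.
function getShared(key) {
  if (!cache.has(key)) cache.set(key, loadSlow(key));
  return cache.get(key);
}
```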

                  3. 13

                    And yet, it is still necessary to download one third of a gigabyte worth of Node modules to build WebPack, to be able to use Phoenix’s live view. Lovely article, but it does not fully cure my Javascript fatigue, haha

                    1. 9

                      I followed this very helpful post to replace webpack with snowpack, which uses esbuild (a bundler written in go) for super fast iterations and fewer dependencies: https://www.richardtaylor.dev/articles/replacing-webpack-with-snowpack-in-a-phoenix-application

                      1. 2

                        This is great! Thank you!

                      2. 4

                        Well, you aren't forced to use LiveView. Additionally, I think most of the deps come from Webpack, which you can replace with a "lighter" bundler if you want.

                        1. 2

                          I am using LiveView without webpack. I used symlinks:

                          [dmpk2k@bra js]$ ls -lah
                          [...]
                          -rw-r--r-- 1 dmpk2k dmpk2k 3.7K Feb  2 21:51 app.esm.js
                          lrwxrwxrwx 1 dmpk2k dmpk2k   50 Dec  4 22:49 phoenix.js -> ../../../../../deps/phoenix/priv/static/phoenix.js
                          lrwxrwxrwx 1 dmpk2k dmpk2k   70 Dec  4 22:49 phoenix_live_view.js -> ../../../../../deps/phoenix_live_view/priv/static/phoenix_live_view.js
                          [...]
                          

                          Inside the layout:

                          <script defer type="text/javascript" src="<%= Routes.static_path(@conn, "/js/phoenix.js") %>"></script>
                          <script defer type="text/javascript" src="<%= Routes.static_path(@conn, "/js/phoenix_live_view.js") %>"></script>
                          <script defer type="module" src="<%= Routes.static_path(@conn, "/js/app.esm.js") %>"></script>
                          

                          Inside app.esm.js:

                          let csrfToken = document.querySelector("meta[name='csrf-token']").getAttribute("content");
                          window.liveSocket = new phoenix_live_view.LiveSocket("/live", Phoenix.Socket, {
                            params: {_csrf_token: csrfToken},
                            hooks: Hooks
                          });
                          window.liveSocket.connect();
                          

                          Perhaps I'm missing something by going off the beaten path, but it works fine, and life is easier. At some point I'll make the build gzip the two files, but otherwise there isn't much more to be gained from a bundler.

                          Webpack boils an ocean to make a cup of tea.

                          1. 1

                            +10! There are lighter projects similar to LiveView: https://github.com/vindarel/awesome-no-js-web-frameworks/ & https://github.com/dbohdan/liveviews some without needing JS at all.

                            1. 4

                              The only way I see that working is via WebAssembly, which for me isn't much different from using JS.

                              EDIT: I have checked - the whole of Phoenix LiveView with its dependencies (the phoenix library for socket integration, and morphdom) is ~173 KB unminified and ungzipped. After minification and gzipping it will be almost negligible. Most of the "bloat" comes from Webpack (as mentioned earlier), which can be replaced without much trouble by any other build tool of your preference, even just cat to join all the files together.

                          2. 9

                            It looks like the complaint here is that the author doesn't really grok async processes, so they demand sync processes via threads. FYI, you can use worker threads in NodeJS, which run on separate threads.

                            I'm not sure what the data-structures non sequitur was about; you can write any data structures you need in JS if they aren't already in the language.

                            This article is all about personal preference, though. The author can’t remember the Promise API, but in the context of the post, it seems to mean they can’t remember how to write client.query(/*..*/).then() instead of using await client.query. Is it that abstracted for you or did you just never really use promises to begin with?

                            I've been with JavaScript for a long time (since roughly 1998) and I remember what it was like when it was pretty much something you used to add a little jazz-hands action to your site via DHTML. The evolution of JS (which is criticized by the author) is due to millions of eyes being on the language and adding improvements over time. JS naturally came out of an async ecosystem (the browser), so Node followed the same idea. Callbacks were a hassle to deal with, so we got Promises. Promise syntax is a bit unwieldy, so we switched to async/await. If someone comes up with an easier way to do it, they will. You can still write fully blocking functions if you want. You can also avoid your "red/blue" function thing by using Promises without marking a function async. Just use the old syntax.
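
                            For readers following along, the two styles in question are interchangeable. The client.query here is a stand-in for any promise-returning API (an assumption for illustration, not a real pg client):

```javascript
// A stand-in promise-returning API (hypothetical, for illustration).
const client = {
  query: (sql) => Promise.resolve([{ sql }]), // pretend result rows
};

// Promise chaining - the "old syntax":
function countChained() {
  return client.query("SELECT 1").then((rows) => rows.length);
}

// async/await - same behavior, different surface:
async function countAwaited() {
  const rows = await client.query("SELECT 1");
  return rows.length;
}
```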

                            I don’t primarily develop in Node, but I see a lot of misdirected anger or hate on the language and ecosystem because people just don’t understand how to use it. It’s different than a lot of stuff for sure, but it is a nice tool to have.

                            1. 4

                              Thank you! I’ve been writing JS since 1999 so I definitely relate to the DHTML days. For the last 8-10 years I’ve been writing Ruby professionally, and switched to JS (well, mostly TypeScript) just last year when I changed jobs. Gotta say, there’s been a ton of work done around developer ergonomics and making the language a bit less unwieldy.

                              1. 4

                                I’m not sure what the data structures non sequitur was about

                                I thought it tied in quite nicely to the part about Erlang and Elixir. Erlang was designed with good data structures around concurrency but Node’s data structures have been strained since callbacks and Promises are building on top of that abstraction.

                              2. 4

                                I saw the title and the first thing that came to my mind was “Well the only way to escape that hell is by going to Elixir/Erlang/OTP”.

                                I read the article and a smile came to my face. I'm happy that the OP escaped Node. I've been using Elixir since before it was even v1, and the design decisions in Elixir are made to help developers, not to solve a problem the language created in the first place.

                                During the last 3 months I started doing "front-end" programming (I've been doing low-level and systems engineering for 7 years) and it feels… incomplete… Still evolving, etc.

                                1. 3

                                  I dunno, maybe it’s because I started learning seriously with node (and fiddled with basic / java before) that my brain is wired to understand it and reason about it well, but I’ve never been in a nodeJS codebase with promises or async / await where I thought it was impossible or tough to reason about solely because of those two aspects. I also use Python and Go which are also colored, so maybe that’s why I didn’t share the author’s sentiment about how it’s harder to reason about… I can appreciate the notion of workers / tasks as he describes, but at the end of the day, it depends on the developer(s) and the broader eng org, right? There’s no one size fits all here?

                                  To each their own. It’s a useful language and there’s a time and place for it IMO!

                                  Also… about the REPL example: I think in Node 14 you can use await at the top level?

                                  https://v8.dev/features/top-level-await

                                  https://nodejs.org/en/blog/release/v14.8.0/

                                  EDIT: also, does node properly use cpu resources? I forget, but IIRC it used to under utilize, and required separate processes per core. Not sure if it’s better now.

                                  1. 4

                                    I also find the author’s confusion.. confusing. I think async/await is much easier to reason about than callbacks ever were, and promises are pretty obvious about how they work with the chained syntax of then/catch/finally functions. 🤷‍♀️

                                    1. 3

                                      Go which are also colored

                                      Are you sure about that? Because IIRC Go isn’t coloured at all.

                                      also, does node properly use cpu resources? I forget, but IIRC it used to under utilize, and required separate processes per core. Not sure if it’s better now.

                                      No, because JS is by definition single-threaded. External tools (for example C libraries) can use multiple cores (via threads), but IIRC Node itself is still single-threaded.

                                      1. 2

                                        Sorry I didn’t mean to include Go in that list, it was just there for context about what languages I use! On two hours of sleep :/.

                                        Right, re: your other point, yep makes sense.

                                    2. 3

                                      The thing I think people kinda forgot in the dark ages of the 2010s is that, like, it’s really nice just having APIs that pretend to be synchronous and then letting the runtime sort it out. Like, it’s worth so much. Dealing with Javascript putting all of that front-and-center, especially in a web context, is just uncomfortable.

                                      I’ve been programming in Elixir since 2014 and Javascript since..2010, call it…and before that C/C++/Java with all of the magic of multithreading. I really like Elixir for normal work, but if I have to code under pressure or just hack something together quickly Javascript is something I’ll reach for because I can keep the whole thing (ES5-style, at least) in my head and the language won’t fight me when I do strange, degenerate things. Go has won me over from Elixir for tooling.

                                      1. 2

                                        Finally! I think I started to use Erlang in 2011 after reading how async code works in it. The only problem with Erlang is that the average programmer gets hung up on the syntax (this is probably not a big deal with Elixir any more) and concludes that it is an esoteric language with no real use in everyday computing. The very same people then go ahead and write single-threaded, single-connection Java code and blame the network when it is slow (this literally happened to me). Other people embrace Node because it is async! Erlang's market share is nothing but proof that the average programmer has no idea about async in any meaningful depth.

                                        1. 2

                                          Something I see in common between Elixir (in this context) and Clojure (which I’ve been learning recently) is the emphasis on simplicity. I know very little about Elixir and I’ve only been learning Clojure for about a month, but I’ve grown to appreciate the burden that’s lifted when you can focus on one isolated procedure at a time while working on a larger program.

                                          1. 4

                                            Elixir borrowed a lot from Clojure - macros, build system (Mix team was using some insights from Leiningen authors), pipelines, etc.

                                            On the other hand I see that Clojure borrowed a little from Erlang, so this is like closed loop of good design inspirations.

                                          2. 1

                                            Few questions from someone not close to either the erlang world or the js world:

                                            1. The author uses Node and Javascript pretty interchangeably. Are all JS frameworks basically the same as far as concurrency goes?
                                            2. If my main erlang process needs to wait for the ps process to finish, what’s the advantage of having a separate process? isn’t the whole point of multithreading so that your main thread can do stuff while waiting for the DB to finish?
                                            1. 1

                                              Are all JS frameworks basically the same as far as concurrency goes?

                                              Pretty much. The only way to have concurrency in JavaScript is to use async. JS by definition is single-threaded with async capabilities.

                                              If my main erlang process needs to wait for the ps process to finish, what’s the advantage of having a separate process?

                                              Error handling, mostly. In Erlang, failure of one process does not propagate to others (unless explicitly requested).

                                              isn’t the whole point of multithreading so that your main thread can do stuff while waiting for the DB to finish?

                                              Erlang processes weren't created for "multithreading" but for error isolation; parallelisation was added later. Returning to your question about "doing other stuff while waiting for the DB to finish" - you still can, just spawn another process (it is super cheap, as Erlang processes are different from OS processes). You will still need some synchronisation at some point: in Erlang each process is synchronous and linear, and relies on messages for communication between processes. So "behind the scenes", the function Postgrex.query/2 (in Erlang, functions are identified by module name, function name, and arity; you can have multiple functions with the same name and different arities) does something like the following (quasi-simplified, as there is more behind the scenes - for example a connection pool - and most of the code is hidden behind functions, but at its core it reduces to this):

                                              def query(pid, query) do
                                                ref = generate_unique_reference()
                                              
                                                # Send message to process identified by `pid` with request to execute `query`
                                                # `ref` is used to differentiate between different queries. `self/0` returns the PID of
                                                # the current process; we need to send it so the Postgrex process knows whom to
                                                # send the response to.
                                                send(pid, {:execute, self(), ref, query})
                                              
                                                # Wait for the response from the Postgrex process
                                                receive do
                                                  {:success, ^ref, result} -> {:ok, result}
                                                  {:failure, ^ref, reason} -> {:error, reason}
                                                after
                                                  5000 -> throw TimeoutError # if we do not get response in 5s, then throw error
                                                end
                                              end
                                              

                                              So in theory, you could do other things while waiting for the response from the DB; this is for example how sockets are implemented in gen_tcp and gen_udp: they just send messages to the owning process, and the owning process can do other things "in the meantime". In this case, purely for the developer's convenience, all that message passing back and forth is hidden behind a utility function. But you could do it in a fully asynchronous way. This is almost literally how gen_server (short for generic server) works.

                                              1. 1

                                                So you’d spawn a new process that makes a DB call, then to handle the result that new process spawns other processes? And in the meantime the original process can keep going? What if you need the result in the original process? Or do you just not design your code that way?

                                                1. 1

                                                  So you’d spawn a new process that makes a DB call, then to handle the result that new process spawns other processes?

                                                  No and no. You are always working within the scope of some Erlang process, and Postgrex starts a pool of connections when you call Postgrex.start_link/1. So you do not need to create any process during a request; all the processes are already there when you make the request.

                                                  And in the meantime the original process can keep going?

                                                  You can do so, however there is rarely any other work for that process to do. When writing a network application with TCP connections, in most cases each connection is handled by a different Erlang process (in the case of Cowboy, each request is a different Erlang process). So in most cases you just block the current process, as there is no meaningful work for it anyway. That does not interfere with other processes, as from the programmer's viewpoint Erlang uses a preemptive scheduler (internally it is a cooperative scheduler).

                                                  What if you need the result in the original process?

                                                  My example above provides information in original process, it doesn’t spawn any new processes. It is all handled by message passing.

                                              2. 1
                                                1. Yep, Node is just a wrapper around V8 (the JavaScript engine from Chromium) plus an API that provides access to system resources (filesystem, networking, OpenSSL crypto, etc). The concurrency being discussed is common across all JavaScript runtimes and isn't specific to the API that Node provides. I don't know why the author says "Node" in the article; they're talking about JavaScript.
                                                2. No idea, I haven’t fully escaped JavaScript yet.
                                              3. 1

                                                I want to escape JavaScript in general. It pains me to no end :(

                                                What are your thoughts on io_uring and IOCP, which are used on Linux and Windows respectively?

                                                1. 1

                                                  I like Elixir and Node. I use Node on a daily basis and we use JS pretty much exclusively at my job. There’s been a bit of talk about moving some of our code into Elixir, but there are some drawbacks to doing so. Although Elixir programs are very fast and can handle many thousands of connections, the deployment and scaling story is…rough. Yeah, you might not need to auto-scale it at first, but you’re going to need to eventually, and it turns out that BEAM and Kubernetes (or practically any other container-based infra) don’t really play nice together.

                                                  Not sure how others are handling this, but it seems like a lot of extra work just to deploy the thing. I love Elixir, but you really need to know you need it before you embark.

                                                  1. 2

                                                    It is not much different from .NET or Java in that area. So in many cases it is unfamiliarity, not the complexity of the process itself. A lot of people come from Ruby/Python/Node and may feel that it is more complicated than it should be, but that is the nature of compiled languages.

                                                    1. 1

                                                      Not sure how large you’re planning on scaling it, but https://hexdocs.pm/libcluster/Cluster.Strategy.Kubernetes.html is worth a look. We’re not auto-scaling it right now exactly, but we’ve got a multi-node cluster in k8s and it seems to be handling it quite well.

                                                    2. 1

                                                      Has anyone here used Akka? Curious about the Erlang concurrency model running on other VMs.