1. 38
  1.  

    1. 10

      Not exactly a hot take, but I think virtual threads will largely spell the death of the reactive programming style in Java. Of course there are problem spaces where its conceptual model is a better fit than a thread-centric model (e.g., because backpressure is a first-class concept) but from what I’ve observed, the vast majority of people who are using reactive libraries are looking to support large numbers of concurrent clients and are tolerating the reactive model to achieve that goal. Virtual threads will be a much better fit for those people.

      Of course, this won’t happen overnight, but I’m guessing with the release of Java 21, we’ll see a sharp drop in the number of new reactive projects.
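      As a sketch of the migration path: plain blocking code, one virtual thread per client, no operators or callbacks (class and method names here are just illustrative, not from any real service):

      ```java
      import java.util.ArrayList;
      import java.util.List;
      import java.util.concurrent.ExecutorService;
      import java.util.concurrent.Executors;
      import java.util.concurrent.Future;

      public class VirtualThreadDemo {
          // Simulated blocking I/O call, standing in for e.g. an HTTP request.
          static String handleClient(int id) throws InterruptedException {
              Thread.sleep(10); // blocks only this virtual thread, not an OS thread
              return "client-" + id;
          }

          // One virtual thread per client; try-with-resources waits for all tasks.
          public static List<String> handleAll(int clients) throws Exception {
              try (ExecutorService exec = Executors.newVirtualThreadPerTaskExecutor()) {
                  List<Future<String>> futures = new ArrayList<>();
                  for (int i = 0; i < clients; i++) {
                      final int id = i;
                      futures.add(exec.submit(() -> handleClient(id)));
                  }
                  List<String> results = new ArrayList<>();
                  for (Future<String> f : futures) results.add(f.get());
                  return results;
              }
          }

          public static void main(String[] args) throws Exception {
              System.out.println(handleAll(10_000).size()); // 10k concurrent "clients"
          }
      }
      ```

      Requires Java 21; the same loop with platform threads would run into OS thread limits long before 10k.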

      1. 3

        Agreed. I maintain Manifold, an old streaming/CSP-style lib for Clojure, and a year ago, a fellow stream lib implementer and I were discussing the way forward with vthreads, what to do about backwards compatibility, etc.

        And yeah, one of my conclusions was that if vthreads existed then, Manifold would probably not have been written, it wouldn’t have made enough sense. Vthreads cover much of it, and structured concurrency will cover most of the remainder.

        Netty, and other event-driven servers have similar situations. Anything that revolves around managing a thread pool, really.

      2. 3

        Java, the language, is built for blocking I/O. This has never changed, and you can see it in how its syntax assumes blocking I/O. Examples:

        • checked exceptions;
        • try/finally or try-with-resources;
        • intrinsic locks, and the standard mutexes and semaphores.

        Java with reactive stuff is basically using a Java subset, and isn’t in the language’s character.
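        For instance, the canonical read with try-with-resources and a checked exception only makes sense when control flow is linear and blocking:

        ```java
        import java.io.BufferedReader;
        import java.io.IOException;
        import java.io.StringReader;

        public class BlockingStyle {
            // try-with-resources closes the reader exactly when control leaves the
            // block, and the checked IOException propagates up the call stack --
            // both assume a straight-line, blocking flow.
            static String firstLine(String contents) throws IOException {
                try (BufferedReader reader = new BufferedReader(new StringReader(contents))) {
                    return reader.readLine(); // a blocking call in the general case
                }
            }

            public static void main(String[] args) throws IOException {
                System.out.println(firstLine("hello\nworld")); // prints "hello"
            }
        }
        ```

        In a reactive pipeline, neither the resource scoping nor the checked exception maps cleanly onto the operator chain.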

        OTOH, “reactive” is really a euphemism for function composition, which will never go out of fashion, but which always strikes fear into the hearts of developers, especially when the word “monad” gets used.

      3. 3

        I agree with your general sentiment. This Loom feature brings Java closer to Erlang. Java will still be missing the monitors and the nodes (and thus Erlang’s OTP), but a VM where a function call can spawn a thread and have the function executed in that new thread will probably bring more Erlang-style idioms into Java going forward. And that’s a good thing.

        1. 3

          It’s not exactly the same, but with the structured concurrency API it gets even closer.
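          For example (StructuredTaskScope was still a preview API in Java 21, so this sketch approximates the idiom with the stable virtual-thread executor, whose try-with-resources block likewise won’t exit until every forked subtask has completed):

          ```java
          import java.util.concurrent.ExecutorService;
          import java.util.concurrent.Executors;
          import java.util.concurrent.Future;

          public class StructuredSketch {
              // Erlang-style "spawn and supervise within a scope": the parent
              // cannot leave the try block while children are still running.
              static int sum() throws Exception {
                  try (ExecutorService scope = Executors.newVirtualThreadPerTaskExecutor()) {
                      Future<Integer> a = scope.submit(() -> 1 + 2);
                      Future<Integer> b = scope.submit(() -> 3 + 4);
                      return a.get() + b.get();
                  } // close() joins all subtasks before returning
              }

              public static void main(String[] args) throws Exception {
                  System.out.println(sum()); // prints 10
              }
          }
          ```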

      4. [Comment removed by author]

    2. 5

      I don’t work with Java professionally (only as a hobbyist), but virtual threads are very exciting. They sit in a great niche in the tradeoff space of async programming:

      1. No function-coloring async/await
      2. No compile-time hell (looking at you C++ and Rust)
      3. Memory safe
      4. No callback hell
      5. Composes perfectly with pretty much all existing blocking code (basically everything that doesn’t use the synchronized keyword). They made the change down low in the runtime so that all code built on top of java.util.concurrent will just magically work with no changes. This one is the largest benefit in my opinion. No other ecosystem has been able to manage a sync -> async migration that composes perfectly with existing code.
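      The synchronized caveat in point 5 is about pinning: in Java 21, blocking while inside a synchronized block pins the virtual thread to its carrier OS thread, so the usual advice is to guard blocking sections with java.util.concurrent locks instead, which let the thread unmount. A minimal sketch:

      ```java
      import java.util.concurrent.locks.ReentrantLock;

      public class LockDemo {
          private static final ReentrantLock lock = new ReentrantLock();
          private static int counter = 0;

          // Blocking while holding a ReentrantLock lets the virtual thread
          // unmount; the same sleep inside a synchronized block would pin
          // the carrier OS thread for the duration.
          static void increment() throws InterruptedException {
              lock.lock();
              try {
                  Thread.sleep(1); // simulated blocking I/O under the lock
                  counter++;
              } finally {
                  lock.unlock();
              }
          }

          public static int run(int tasks) throws InterruptedException {
              counter = 0;
              Thread[] threads = new Thread[tasks];
              for (int i = 0; i < tasks; i++) {
                  threads[i] = Thread.ofVirtual().start(() -> {
                      try { increment(); } catch (InterruptedException ignored) {}
                  });
              }
              for (Thread t : threads) t.join();
              return counter;
          }

          public static void main(String[] args) throws InterruptedException {
              System.out.println(run(100)); // prints 100
          }
      }
      ```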

      The pattern matching stuff is interesting but I don’t see it getting too much more than the occasional use.

    3. 2

      I see a lot of new features that seem inspired by other languages that have been gaining popularity such as Elixir. Still, this should definitely be celebrated by anyone who is both doing Java for their day-job work and also has the ability to upgrade.

      Down the line, this may make other languages that compile to the JVM a lot more compelling, assuming these new features reach down to that level and aren’t just Java syntax sugar.

    4. 2

      “quite the update” might be a reaction from someone who does not realize that the green threads of Loom are very, very similar to the original Java threading, at least on Solaris.

      Meanwhile, I checked the JVM 21 spec, and you still cannot represent a uintN in N bits, backed by the hardware’s unsigned representation and its instruction set for operating on it directly.

      Why is this left out? There must be some reason, but I genuinely don’t know.

      1. 8

        Java’s virtual threads are not at all similar to the green threads Java originally had. They have nothing in common actually.

        First of all, that was N:1 multithreading (like Javascript), and it wasn’t meant to stay that way, being an implementation detail. And the “cooperation” happened via an explicit thread “yield”.

        Project Loom exposes M:N multithreading, meaning that many “virtual” threads get to be executed on multiple platform threads until an I/O boundary is hit. At that point the virtual thread gets suspended by the runtime, to be resumed later. They actually implemented continuations under the hood, and I hope some day they’ll expose continuations publicly as well. Also, when a virtual thread gets suspended, its callstack gets copied to heap memory, to be restored later. And they applied some interesting optimizations to make that efficient, in cooperation with the garbage collectors, which now need to support virtual threads too.

        Here’s a nice presentation about it: https://youtu.be/6nRS6UiN7X0?si=TSQIN8JiAmFy0p06
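        The effect of that suspend-and-copy machinery is easy to observe: ten thousand threads all “blocked” at once costs almost nothing (counts here are just illustrative):

        ```java
        import java.util.concurrent.CountDownLatch;

        public class ManyThreads {
            // Each virtual thread blocks in sleep(); at that point its stack is
            // parked on the heap and its carrier OS thread is freed to run others.
            public static long run(int count) throws InterruptedException {
                CountDownLatch done = new CountDownLatch(count);
                long start = System.nanoTime();
                for (int i = 0; i < count; i++) {
                    Thread.ofVirtual().start(() -> {
                        try { Thread.sleep(100); } catch (InterruptedException ignored) {}
                        done.countDown();
                    });
                }
                done.await();
                return (System.nanoTime() - start) / 1_000_000; // elapsed millis
            }

            public static void main(String[] args) throws InterruptedException {
                // 10,000 sleeping platform threads would need gigabytes of stack;
                // the virtual-thread version finishes in roughly the sleep time.
                System.out.println(run(10_000) + " ms");
            }
        }
        ```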

        1. 14

          The path here has been quite long.

          Originally, UNIX didn’t have any threading. People patched it on top by replacing blocking system calls in their userspace wrappers with non-blocking ones that yielded and using timer signals to do involuntary context switching. This was an N:1 threading model (and was quite fragile: if you did a blocking system call directly without going via a libc wrapper, it would stall all threads). This model worked moderately well on single-processor systems but was problematic with SMP and multicore because all threads for a process ran on a single core. It mattered less for the threads-for-I/O model, where only a small number of threads were typically runnable at a time and the rest were blocking waiting for I/O. It was typically fine on a dual-CPU system because you could run kernel threads on one core and userspace threads on another, so each blocking system call switched the userspace thread on one core and kicked off some in-kernel work on the other.

          SunOS introduced a lightweight process (LWP) model[1] that allowed two process-like things to share an address space, file descriptor table, and all other process state except a virtual CPU context. The threading libraries built on top of this put thread-specific state in userspace (on SPARC, I believe they reserved one general-purpose register for the thread pointer) and shared all kernel state between threads in the same process. This gave a 1:1 threading model: the kernel is responsible for scheduling all threads and any blocking call triggers a scheduler event. This worked well when you had a similar number of threads and cores but when the number of threads significantly exceeded the core count you started to see significant kernel resource consumption and scheduler overhead[2]. Most *NIX systems adopted the 1:1 model.

          Solaris introduced an N:M threading model. This used a userspace threading library similar to the one from N:1, where blocking system calls were replaced by non-blocking ones but were then multiplexed across multiple kernel threads. Both NetBSD and FreeBSD implemented N:M threading models and then gave up on them. They have a lot of problems. The kernel doesn’t know which userspace thread is running on a kernel-scheduled entity (KSE), and so per-thread priorities are hard, as are any of the bits of the *NIX system call interface where the kernel needs to understand which thread is running for the current system call (e.g. priority-propagating locks). The userspace scheduler doesn’t have any visibility into the kernel’s state and so can’t tell whether it’s scheduling a thread to run on a KSE that will run or is about to be preempted: it may pick a high-priority thread to run just before the kernel preempts it and runs another KSE for the same process that the userspace scheduler has put a low-priority thread on. Many of these problems have been reinvented on hypervisors over the last 15 years: it turns out that running one scheduler on top of another almost always leads to weird performance artefacts and no one knows how to do it well.

          As Matt Dillon pointed out, a lot of the problems with N:M threading are not actually problems with N:M threading, they’re problems with C/POSIX abstractions. They’re problematic as an OS abstraction because the lowest-level things in userspace sit in this abstract machine. They remain popular for language VMs, where raw system calls are typically not permitted and the language can happily multiplex things on kqueue / epoll with explicit yield and where all per-thread state is managed by the VM. Most actor-model language VMs provide an N-actors:M-threads model, with one thread per core (pinned to the core) and very large numbers of actors, for example.

          When Java was launched, it could use an N:1 threading model (the only option on Windows 3.1, which didn’t have preemptive threads and required explicit yielding) or 1:1. The N:1 model in the JVM hit the scalability problems that N:1 models always do but the 1:1 model was not ideal for Java’s threads-for-I/O-multiplexing design because they suffer when thread counts get very high.

          Some JVMs have implemented N:M threading internally for a while (I thought OpenJDK did this 15 years ago, but apparently not?). Unfortunately, this interacts very poorly with JNI because JNI code may stash things in thread-local storage and then find that, for the same Java thread, a second call is on a different OS thread. Oh, and preempting a thread in native code is expensive (requires a timer signal, which is far more expensive than an OS thread switch). It also has some drawbacks for compute-heavy threads, where you actually want OS-driven preemption and fairness.

          The key thing in the new proposal is that the programmer is in control. If you have compute-heavy threads or threads using a lot of JNI, you put an OS thread under them. If you have lightweight threads that are just blocking for I/O, you multiplex them. This should allow you to trade the advantages and disadvantages of 1:1 and N:M threading and pick the one that makes sense for a particular problem. There are still probably a lot of fun corner cases (I’m not sure what happens in OpenJDK if you hold a priority-propagating lock in a virtual thread, perform a blocking I/O operation, and have a real thread try to acquire the lock: do a bunch of unrelated virtual threads get a priority boost?).

          [1] I’m not sure it was first. AIX had a threading model at a similar time and I think Irix had its own threading model as well. POSIX threads came along a bit later to unify different threading implementations.

          [2] Most O(1) scheduler work came quite a long time after these initial implementations. Even with O(1) schedulers, this can suffer because a voluntary yield to another explicit userspace thread can be cheaper than a full OS context switch (compare setcontext performance to sched_yield sometime).

          1. 1

            Thank you kindly for the history lesson, I’m missing some of it, this is useful.

        2. 6

          The original author mentioned Solaris, so they’re probably referring to how “green threads” on Solaris meant M:N exactly the way you’re describing it. (Wouldn’t surprise me if they dumbed it down for other platforms, which were all pretty new to threading at the time.)

      2. 3

        I was also initially confused by the linked article presenting this as new, since it sounded a lot like green threads. The JEP itself does discuss the relationship though:

        Virtual threads are a lightweight implementation of threads that is provided by the JDK rather than the OS. They are a form of user-mode threads, which have been successful in other multithreaded languages (e.g., goroutines in Go and processes in Erlang). User-mode threads even featured as so-called “green threads” in early versions of Java, when OS threads were not yet mature and widespread. However, Java’s green threads all shared one OS thread (M:1 scheduling) and were eventually outperformed by platform threads, implemented as wrappers for OS threads (1:1 scheduling). Virtual threads employ M:N scheduling, where a large number (M) of virtual threads is scheduled to run on a smaller number (N) of OS threads.

        But as @robey points out, this seems not entirely true? Java threads on Solaris did do what they called “many-to-many” threading by default (you could force 1:1 or M:1, but it was not default).

        1. 3

          That still misses the forest for the trees - the actually impressive part of virtual threads is that they automagically replace blocking I/O calls on the VM side, making much higher I/O concurrency possible in the plain, old blocking code style.

          1. 5

            I don’t think that’s the novel bit. This is what most N:M threading implementations have done over the last 20+ years. They replace blocking calls with non-blocking ones and yield, and then poll on (userspace) context switch to see which have finished. That’s basically a necessity for any 1:N or N:M threading implementation.

            The interesting thing here is that they are exposing both a 1:1 and N:M threading model, with user control over which they use for any given thread. This lets them do things like have full OS scheduler priority support for real threads but also lightweight multiplexing for virtual threads, in the same program.

        2. 2

          Thank you; I was almost certain. Just because a Java enhancement proposal says X does not mean the author or reviewers vetted X. I had worked at Sun for a summer and fall, fresh out of grad school, on, of all things, Solaris internals (then switched to a Bell Labs team at Motorola Labs, where I did a bunch of C++ and then early green-threads-era Java, including with threads).

      3. 2

        Is uintN support one of the aims of project Valhalla?

        Hmm, a bit of searchengineering suggests not, but Java 8 introduced APIs for treating signed integers as unsigned. Which strikes me as a throwback to BCPL or assembly language…
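        Those Java 8 helpers reinterpret the bits of the existing signed types rather than adding new ones, e.g.:

        ```java
        public class UnsignedDemo {
            public static void main(String[] args) {
                int x = -1; // bit pattern 0xFFFFFFFF, i.e. 2^32 - 1 read as unsigned
                System.out.println(Integer.toUnsignedString(x));  // prints 4294967295
                System.out.println(Integer.toUnsignedLong(x));    // prints 4294967295
                System.out.println(Integer.divideUnsigned(x, 2)); // prints 2147483647
                // compareUnsigned treats both operands as unsigned: 4294967295 > 1
                System.out.println(Integer.compareUnsigned(x, 1) > 0); // prints true
            }
        }
        ```

        So it really is the assembly-language view: the representation is shared and the signedness lives in the operation, not the type.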