Threads for johnjoz

    1. 2

      “quite the update” might be a reaction from someone who does not realize the green threads of Loom were very, very similar to the original Java threading, at least on Solaris.

      Meanwhile, I checked the JVM 21 spec, and you still cannot represent a uintN in N bits, backed by the underlying hardware, with the hardware instruction set operating on it directly.

      Why is this left out? There must be some reason, but I genuinely don’t know.

      1. 8

        Java’s virtual threads are not at all similar to the green threads Java originally had. They have nothing in common actually.

        First of all, that was N:1 multithreading (like JavaScript), and it wasn’t meant to stay that way, being an implementation detail. And the “cooperation” happened via an explicit thread “yield”.

        Project Loom exposes M:N multithreading, meaning that many “virtual” threads get to be executed on multiple platform threads until an I/O boundary is hit. At that point the virtual thread gets suspended by the runtime, to be resumed later. They actually implemented continuations under the hood, and I hope some day they’ll expose continuations publicly as well. Also, when virtual threads get suspended, the thread’s callstack gets copied to heap memory, to be restored later. And they applied some interesting optimizations to make that efficient, in cooperation with the garbage collectors, which now need support for virtual threads too.
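
        For anyone who hasn’t played with it yet, a minimal sketch of what that looks like with the plain JDK 21 API (nothing here beyond the standard library; the 10,000-task count is just an illustration):

        import java.time.Duration;
        import java.util.concurrent.Executors;
        import java.util.stream.IntStream;

        public class VirtualThreadsSketch {
            public static void main(String[] args) {
                // One virtual thread per task; the JDK multiplexes them onto a
                // small pool of carrier (platform) threads.
                try (var executor = Executors.newVirtualThreadPerTaskExecutor()) {
                    IntStream.range(0, 10_000).forEach(i ->
                        executor.submit(() -> {
                            // Blocking here suspends only the virtual thread; its
                            // stack is parked on the heap until it can be resumed.
                            Thread.sleep(Duration.ofSeconds(1));
                            return i;
                        }));
                } // close() waits for the submitted tasks to finish
            }
        }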

        Here’s a nice presentation about it: https://youtu.be/6nRS6UiN7X0?si=TSQIN8JiAmFy0p06

        1. 13

          The path here has been quite long.

          Originally, UNIX didn’t have any threading. People patched it on top by replacing blocking system calls in their userspace wrappers with non-blocking ones that yielded and using timer signals to do involuntary context switching. This was an N:1 threading model (and was quite fragile: if you did a blocking system call directly without going via a libc wrapper, you would stall all threads). This model worked moderately well on single-processor systems but was problematic with SMP and multicore because all threads for a process ran on a single core. It mattered less for the threads-for-I/O model, where only a small number of threads were typically runnable at a time and the rest were blocking waiting for I/O. It was typically fine on a dual-CPU system because you could run kernel threads on one core and userspace threads on another, so each blocking system call switched the userspace thread on one core and kicked off some in-kernel work on the other.

          SunOS introduced a lightweight process (LWP) model[1] that allowed two process-like things to share an address space, file descriptor table, and all other process state except a virtual CPU context. The threading libraries built on top of this put thread-specific state in userspace (on SPARC, I believe they reserved one general-purpose register for the thread pointer) and shared all kernel state between threads in the same process. This gave a 1:1 threading model: the kernel is responsible for scheduling all threads and any blocking call triggers a scheduler event. This worked well when you had a similar number of threads and cores but when the number of threads significantly exceeded the core count you started to see significant kernel resource consumption and scheduler overhead[2]. Most *NIX systems adopted the 1:1 model.

          Solaris introduced an N:M threading model. This used a userspace threading library similar to the one from N:1, where blocking system calls were replaced by non-blocking ones but were then multiplexed across multiple kernel threads. Both NetBSD and FreeBSD implemented N:M threading models and then gave up on them. They have a lot of problems. The kernel doesn’t know which userspace thread is running on a kernel-scheduled entity (KSE) and so per-thread priorities are hard, as are any of the bits of the *NIX system call interface where the kernel needs to understand which thread is running for the current system call (e.g. priority-propagating locks). The userspace scheduler doesn’t have any visibility into the kernel’s state and so can’t tell whether it’s scheduling a thread to run on a KSE that will run or is about to be preempted: it may pick a high-priority thread to run just before the kernel preempts it and runs another KSE for the same process that the userspace scheduler has put a low-priority thread on. Many of these problems have been reinvented on hypervisors over the last 15 years: it turns out that running one scheduler on top of another almost always leads to weird performance artefacts and no one knows how to do it well.

          As Matt Dillon pointed out, a lot of the problems with N:M threading are not actually problems with N:M threading, they’re problems with C/POSIX abstractions. They’re problematic as an OS abstraction because the lowest-level things in userspace sit in this abstract machine. They remain popular for language VMs, where raw system calls are typically not permitted and the language can happily multiplex things on kqueue / epoll with explicit yield and where all per-thread state is managed by the VM. Most actor-model language VMs provide an N-actors:M-threads model, with one thread per core (pinned to the core) and very large numbers of actors, for example.

          When Java was launched, it could use an N:1 threading model (the only option on Windows 3.1, which didn’t have preemptive threads and required explicit yielding) or 1:1. The N:1 model in the JVM hit the scalability problems that N:1 models always do but the 1:1 model was not ideal for Java’s threads-for-I/O-multiplexing design because it suffers when thread counts get very high.

          Some JVMs have implemented N:M threading internally for a while (I thought OpenJDK did this 15 years ago, but apparently not?). Unfortunately, this interacts very poorly with JNI because JNI code may stash things in thread-local storage and then find that, for the same Java thread, a second call is on a different OS thread. Oh, and preempting a thread in native code is expensive (requires a timer signal, which is far more expensive than an OS thread switch). It also has some drawbacks for compute-heavy threads, where you actually want OS-driven preemption and fairness.

          The key thing in the new proposal is that the programmer is in control. If you have compute-heavy threads or threads using a lot of JNI, you put an OS thread under them. If you have lightweight threads that are just blocking for I/O, you multiplex them. This should allow you to trade the advantages and disadvantages of 1:1 and N:M threading and pick the one that makes sense for a particular problem. There are still probably a lot of fun corner cases (I’m not sure what happens in OpenJDK if you hold a priority-propagating lock in a virtual thread, perform a blocking I/O operation, and have a real thread try to acquire the lock: do a bunch of unrelated virtual threads get a priority boost?).
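
          To make that concrete, here’s a rough sketch of how that choice reads with the Java 21 thread builders (the two workloads are just stand-ins):

          public class ThreadChoiceSketch {
              public static void main(String[] args) throws InterruptedException {
                  // Compute-heavy (or JNI-heavy) work: give it a real OS thread so
                  // the kernel scheduler handles preemption and fairness.
                  Thread crunch = Thread.ofPlatform().name("crunch").start(() -> {
                      long acc = 0;
                      for (long i = 0; i < 1_000_000_000L; i++) acc += i; // busy loop
                  });

                  // I/O-bound work: a cheap virtual thread, multiplexed by the JDK.
                  Thread fetch = Thread.ofVirtual().name("fetch").start(() -> {
                      try { Thread.sleep(100); } catch (InterruptedException e) { }
                  });

                  crunch.join();
                  fetch.join();
              }
          }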

          [1] I’m not sure it was first. AIX had a threading model at a similar time and I think Irix had its own threading model as well. POSIX threads came along a bit later to unify different threading implementations.

          [2] Most O(1) scheduler work came quite a long time after these initial implementations. Even with O(1) schedulers, this can suffer because a voluntary yield to another explicit userspace thread can be cheaper than a full OS context switch (compare setcontext performance to sched_yield sometime).

          1. 1

            Thank you kindly for the history lesson, I’m missing some of it, this is useful.

        2. 6

          The original author mentioned Solaris, so they’re probably referring to how “green threads” on Solaris meant M:N exactly the way you’re describing it. (Wouldn’t surprise me if they dumbed it down for other platforms, which were all pretty new to threading at the time.)

      2. 3

        I was also initially confused by the linked article presenting this as new, since it sounded a lot like green threads. The JEP itself does discuss the relationship though:

        Virtual threads are a lightweight implementation of threads that is provided by the JDK rather than the OS. They are a form of user-mode threads, which have been successful in other multithreaded languages (e.g., goroutines in Go and processes in Erlang). User-mode threads even featured as so-called “green threads” in early versions of Java, when OS threads were not yet mature and widespread. However, Java’s green threads all shared one OS thread (M:1 scheduling) and were eventually outperformed by platform threads, implemented as wrappers for OS threads (1:1 scheduling). Virtual threads employ M:N scheduling, where a large number (M) of virtual threads is scheduled to run on a smaller number (N) of OS threads.

        But as @robey points out, this seems not entirely true? Java threads on Solaris did do what they called “many-to-many” threading by default (you could force 1:1 or M:1, but it was not default).

        1. 3

          That still misses the forest for the trees - the actually impressive part of virtual threads is that they automagically replace blocking IO calls on the VM side, making much higher IO concurrency possible in the plain, old blocking code style.

          1. 5

            I don’t think that’s the novel bit. This is what most N:M threading implementations have done over the last 20+ years. They replace blocking calls with non-blocking ones and yield, and then poll on (userspace) context switch to see which have finished. That’s basically a necessity for any 1:N or N:M threading implementation.

            The interesting thing here is that they are exposing both a 1:1 and N:M threading model, with user control over which they use for any given thread. This lets them do things like have full OS scheduler priority support for real threads but also lightweight multiplexing for virtual threads, in the same program.

        2. 2

          Thank you; I was almost certain, and just because a Java Enhancement Proposal says X does not mean the author or reviewers vetted X. I had worked at Sun for a summer and fall fresh out of grad school on, of all things, Solaris internals (then switched to a Bell Labs team at Motorola Labs, where I did a bunch of C++ and then early green-threads-era Java, including with threads).

      3. 2

        Is uintN support one of the aims of Project Valhalla?

        Hmm, a bit of searchengineering suggests not, but Java 8 introduced APIs for treating signed integers as unsigned. Which strikes me as a throwback to BCPL or assembly language…
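
        For reference, those Java 8 additions are static helpers on the boxed types that reinterpret the same two’s-complement bits rather than new unsigned types, which is very much the assembly-style flavour you describe:

        public class UnsignedHelpers {
            public static void main(String[] args) {
                int x = 0xFFFFFFFE; // -2 as a signed int, 4294967294 as unsigned

                // Same 32 bits, reinterpreted by the Java 8 helpers:
                System.out.println(Integer.toUnsignedString(x));    // 4294967294
                System.out.println(Integer.toUnsignedLong(x));      // 4294967294
                System.out.println(Integer.divideUnsigned(x, 3));   // 1431655764
                System.out.println(Integer.compareUnsigned(x, 1));  // positive: x > 1 unsigned

                // Addition, subtraction and multiplication already wrap the same way
                // for signed and unsigned two's complement, so they need no helpers.
            }
        }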

    2. 19

      I’m not a fan of what feels like needless hostility (confrontational tone?) in the article, and was expecting to hate it going in, but it does make some good points.

      There’s an important distinction between a future—which does nothing until awaited—and a task, which spawns work in the runtime’s thread pool… returning a future that marks its completion.

      I feel like this point in particular does not get attention when talking about async in languages, and it took me a long while to build a mental model for it.

      To whatever challenges teaching Rust has, async adds a whole new set.

      I disagree with this opinion. In any language with native async infrastructure built-in, I’ve had to learn how it works pretty intimately to effectively use it. My worst experiences have been with Python’s asyncio while the easiest was probably F#.

      1. 7

        I disagree with this opinion. In any language with native async infrastructure built-in, I’ve had to learn how it works pretty intimately to effectively use it.

        I don’t think you’re disagreeing? The article is essentially saying that you have to learn async along with the rest of the language, and you are also saying that you had to learn async with the rest of the language.

        1. 10

          I think the difference is they’re making it sound like some uniquely difficult thing in Rust, and I disagree that it’s some Rust-only problem.

          1. 5

            It’s an async/await problem.

            In languages with concurrency and no async/await (erlang, elixir, go, …), the choice of the scheduling model of your code is determined at the call site. The callee should not care about how it is executed.

            1. 6

              In go:

              x := fetch(...)
              go fetch(...)
              

              In Rust:

              let x = fetch(...).await;
              tokio::spawn(async { fetch(...).await; });
              

              You have the same amount of control of scheduling. If you’re referring to being unable to call an async method from a sync context, this is technically also true in Go, but since everything runs in a goroutine everything is always an async context.

              What makes Rust harder is the semantics around moving and borrowing but also the “different concrete type for each async expression” nature of the generated state machines. For example this is easy in go, but painful in Rust:

              handlers[e] = func(...) ...
              // later
              x := handlers[event]
              go x()
              
      2. 2

        May I ask what you used to figure out how to think about concurrency in F#?

        1. 3

          A lot of experience getting C#/F# async interop working, and the .NET documentation/ecosystem is pretty great these days.

          https://learn.microsoft.com/en-us/dotnet/fsharp/tutorials/async

          In F# 6, they made the C# async primitives work seamlessly so you’re no longer wrapping anything in tasks to wait on it on the F# side.

    3. 1

      I still don’t see what benefit WebAssembly offers over MLIR.

      1. 2

        Perhaps the fact that WebAssembly is supported by all modern web browsers out-of-the-box.

    4. 3

      I never saw the point of R7RS-large when SRFIs exist. It should not be an issue to decouple the language specification from the libraries.

      1. 14

        My issue with SRFIs has always been that, when I go to a Scheme’s website and it says “It implements R5RS with SRFI 17, SRFI 91, and SRFI 26,” I don’t know what that means.

        But if I see R6RS, or R7RS-small, I have a pretty good idea what is in there. Personally I like R6RS, and Racket’s implementation, the best, so that is what I use.

        1. 18

          Some people (including myself and it would appear @Decabytes as well) find it harder to reason about and discuss ecosystems like this where there is a large universe of possible combinations. There is no single reference or document that covers what is supported. Instead there are now various opaque identifiers that must be mentally juggled and compared and remembered. (It’s great that there’s a single place to look up the meaning of each SRFI, but I don’t think that solves comprehension.)

          If you are making some software, you can no longer say “works with Scheme R[number]RS implementations”, but you instead have to list out the SRFIs you use, which may or may not be supported by the user’s favoured implementation. Then you have to repeat that complexity juggling with other libraries you may also want to use.

          It’s a general issue that tends to arise with any ecosystems arranged in this way. It prioritises implementer flexibility and experimentation over user comprehension of what’s supported. (Maybe that’s okay, maybe it’s not! Probably like all things … it depends on context, each person’s preference, etc.)

          People have made similar complaints about the XMPP ecosystem with its XEPs, which is also an ecosystem of optional extensions.

          1. 13

            How I feel about Haskell language extensions.

            1. 4

              The Haskell situation is maybe a little better in that—practically speaking—there is only one Haskell compiler in widespread use. You could argue that this is a net loss for the Haskell ecosystem! But the fact that most (all?) language extensions are enabled on a per-module basis means that compatibility comes down to asking “which version of GHC are you using?” rather than needing to ask “does your compiler support TemplateHaskell? And DerivingVia? How about MultiWayIf?”

              (I’ll add that the Haskell language extensions are referred to by name. If common usage in the Scheme community is to talk about “SRFI 26,” it does seem like Haskell puts less of a burden on one’s memory when talking about these things.)

          2. 3

            There is no single reference or document that covers what is supported

            There is. For example, CHICKEN has the exact list of SRFIs it implements here: https://wiki.call-cc.org/supported-standards#srfis

            So if you write a program which uses SRFIs X, Y, and Z, you just need to check that X, Y, and Z are in that list. This is a very deterministic, black-and-white, well-defined thing.

            You don’t need to memorize the numbers btw. I don’t know why people think that.

            It prioritises implementer flexibility and experimentation over user comprehension of what’s supported

            It’s explicitly listed out what is supported; it is very easy for a user to look at the list and see that the things on the list are supported.

            But I also really don’t think that’s true at all. The SRFI process is there to give users understandable libraries of code that they can use, potentially coordinated across implementations. Implementors doing exploration would not take the time to specify stable APIs like that, or necessarily write documentation for them. I think you have this backwards.

            1. 2

              Your assertion that this whole system is “very easy” has convinced me to avoid Scheme ¯\_(ツ)_/¯ it doesn’t sound easy to me.

              1. 0

                What problem are you having with this system?

                1. 3

                  Cross-checking several implementations against a list of specs before I start writing a program sounds complicated to me, but the fact that it is “very easy” for typical Scheme users tells me I’m not smart enough to enjoy this language.

                  1. 1

                    There’s been some kind of confusion, you are now talking about a different problem.

                    It’s changed from checking if a set of numbers is a subset of another, to finding an intersection of multiple sets.

          3. 3

            Great observation, it immediately brought to mind OpenGL extensions back in the day. What a nightmare.

        2. 26

          What is unclear about that to you?

          I really enjoyed this question.

          1. 7

            It’s just a monoid in the category of endofunctors. What’s the problem?

          2. 1

            I flagged this comment as unkind.

            1. 4

              Alright.

        3. 7

          Are you just saying that you don’t have the SRFI numbers memorized?

          Not sure about Decabytes, but for me: yes. The few times I’ve touched Scheme these numbers have been opaque and confusing.

          1. 2
            1. 5

              They are all there, but what’s there doesn’t necessarily mean the implementations are actually compliant. There are often caveats like “our implementation just re-exports core foo in place of srfi-foo and differs in semantics” — or they won’t tell you that, and it’ll be different.

              Ah, the joys of SRFIng.

              1. 1

                That’s a totally different question? It doesn’t make sense to write that as a reply to my comment that provides a list of the SRFIs.

                If a low quality implementation is providing an incorrect implementation of a specification then that is obviously a bug. I don’t know what that has to do with me though.

        4. 5

          Yes, I don’t have them all memorized, because there are so many and which Schemes implement which ones varies. I have much better knowledge of the revised reports because they are self-contained groupings of specific functionality.

          1. 0

            it’s precise

            unclear would be stuff like “this has a bunch of list utilities and most of the file io functions you are used to from other places”

    5. 9

      SRFIs would be better if they targeted more practical problems. There’s a fairly recent JSON SRFI, which is great. But, generally, most are experimental language features that have no business being standardized, imho.

      Then, there’s the terminal interface SRFI, which sounds great, but doesn’t have a fallback pure scheme implementation. So you’re at the will of implementations / library authors to build them out, which is non-trivial, and likely not fully cross compatible anyway.

      The community is too small, and the ecosystem too fractured. :/

      1. 3

        Not every SRFI can be implemented in pure Scheme, but that’s an advantage, not a disadvantage: they can specify things that would otherwise be impossible to add as an external library. It lets you know that you need an implementation to provide you SRFI-x.

        The community is too small, and the ecosystem too fractured

        This is ironic, isn’t it? For such a small community to be so fractured…

        1. 2

          The lisp eats itself.

          1. 2

            https://en.wikipedia.org/wiki/Ouroboros

            That comment reminded me of the German guy dreaming of that and realizing it fits the data on the structure of benzene.

      2. 3

        Why is a JSON lib part of the standardisation process anyway? I don’t get the value-add vs adding the lib to the package manager.

        Probably the answer is that compatibility isn’t reliable enough for that or there is no package manager, or something like that?

        1. 3

          SRFIs are more about an API spec than the implementation. They commonly have a reference implementation, of course.

          Package managers in scheme aren’t standardized… :) until r6rs you didn’t even have a common way to do libraries across implementations outside of load so… it’s all very tricky. Very very tricky. :)

          1. 3

            @river, also.

            Well, not my community, but I would want a standards process to end up with me being able to use code written under the standard in any compatible implementation with ideally zero changes. This is how C and Fortran compilers work.

            Going beyond C, I’d expect standardisation to establish a common system for specifying modules and packages, project dependencies and their versions and a registry of compatible packages for people to use.

            If you get all of that then you should be able to switch implementations quite easily and a common registry and package format would encourage wider code reuse rather than the current fracturing that lispers complain about.

            Is that just not something schemers are interested in? (Genuine question) And if they’re not, then what’s the point of the standardisation process?

            1. 3

              Is that just not something schemers are interested in?

              It is something some of us are very interested in.

              But it is also something that some implementors are explicitly against.

              Which means we all end up screwed and it makes the standards useless and harms the language and community in the long term.

              what’s the point of the standardisation process?

              The goal was to get cooperation and interoperability, but it sadly didn’t end up happening.

              1. 3

                What a shame. Thanks for clearing that up for me.

        2. 1

          Agree completely: things like a JSON library are great to have, useful to specify as an SRFI but not part of the language itself. The whole value of Scheme is to have a tiny core language that is so flexible you can add things like this as a library.

      3. 2

        SRFIs would be better if they targeted more practical problems. There’s a fairly recent JSON SRFI, which is great. But, generally, most are experimental language features that have no business being standardized, most of the time, imho.

        I feel this too. A lot of the newer SRFIs feel very “galaxy brain” slash “language design by the back door” slash “the SRFI author is probably the only one who wants this” but are unlikely to positively impact my day-to-day life writing Scheme programs tbh

        In general I feel that there are many very smart people churning out API designs that I don’t actually like nor want to use. Maybe I’m not smart enough to appreciate them. If so that’s OK. Aesthetically, many of the designs feel very “R6RS / abstract data types / denotational semantics” focused. Which is fine I guess, but I don’t personally enjoy using APIs designed that way very much, nor do I think they’re going to “make fetch Scheme happen” for the modal industry programmer anyway

        ultimately folks are free to do whatever they want with their free time so I’m not mad about it, I’m happy to just keep plugging along using (mostly unchanging) older R5RS implementations and porting code to my own toy module system, etc and relying on portable code as much as possible

        FWIW I thought R6RS was “fine” until they broke all my code using DEFINE-RECORD-TYPE because something something PL nerds denotational semantics etc. I have appreciated that the R7RS-small work I’m aware of thus far doesn’t break my code in the same way R6RS did

        1. 3

          R6RS […] denotational semantics

          I believe it was R6RS that included the first operational semantics for a standard Scheme. Previously R4RS had included a denotational semantics, which R5RS had left unchanged despite there being changes in the text that required a change to the semantics.

          In neither R4-, R5- nor R6RS did one need to read the semantics to make effective use of the language.

        2. 2

          A lot of the newer SRFIs feel very “galaxy brain” slash “language design by the back door” slash “the SRFI author is probably the only one who wants this” but are unlikely to positively impact my day-to-day life writing Scheme programs tbh

          FWIW, I agree. The newer SRFIs seem very much aimed at comprehensiveness instead of ergonomics. If you look at some of the older SRFIs they seem a lot more focused and minimal.

        3. 1

          denotational semantics is not the enemy here.

          many of us were very unhappy with R6RS.

  1. 2

    It’s hard to tell whether the author understands that “the halting problem” is a specific thing with a proof that you can’t solve it and simply landed on a predefined phrase by accident when translating from their native language, or whether the title is just embarrassing clickbait.

    1. 5

      It’s an obvious clickbait :-)

      Though, it also is a rather profound fact that every Turing machine which runs in O(F(N)) time for some primitive recursive function F is itself a primitive recursive function in disguise… You could compute anything practically meaningful without reaching for the full power of a Turing machine! “Just add an instruction counter” is a bit deeper than it seems at first glance.
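
      Roughly, the standard argument, sketched with the usual encoding of configurations as numbers: the single-step transition function, the initial-configuration encoder, and the output decoder are all primitive recursive, and iterating the step function a primitive-recursively-bounded number of times is just primitive recursion (take F to already absorb the big-O constant, and let step be the identity on halted configurations):

      \begin{align*}
      \mathrm{run}(0, x)   &= \mathrm{init}(x) \\
      \mathrm{run}(t+1, x) &= \mathrm{step}(\mathrm{run}(t, x)) \\
      M(x)                 &= \mathrm{out}\bigl(\mathrm{run}(F(|x|), x)\bigr)
      \end{align*}

      So M is a composition of primitive recursive functions; the unbounded μ-operator, i.e. the full power of general recursion, never enters the picture.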

  2. 21

    Clickbait headline. More correct: “Apple wants you to pay $99/year to develop hobby apps for iOS that work on a device for more than a week.”

    I don’t think this is any intentional stance on Apple’s part; it’s just such a niche use case from their perspective. Or it’s sort of the shareware model where you can try it out free for a little while, but then you need to pay to keep using it.

    1. 8

      If you can perpetually build/run an app on your iThing for $0 then this is technically indistinguishable from sideloading, which Apple very much does not want. You could wrap this up to offer app sales outside the App Store.

      1. 7

        And why would Apple ever want to permit you to develop an app that doesn’t directly make them money?

        1. 5

          Because Apple traditionally has been a hardware company?

          1. 17

            Historically, yes, but at this point Apple has reached market saturation for their hardware and is slowly becoming a services and advertising company.

            1. 4

              This. Apple’s app store appears to have made about $80 billion in revenue in 2022 from app sales, or about 20% of Apple’s gross revenue. That doesn’t subtract the 70% that the app developers actually get to keep, but it also doesn’t count the $X/yr those developers pay for dev kits.

        2. 3

          This assertion doesn’t appear to correlate with reality. The majority of apps in the App Store are free with no in-app purchases.

          1. 2

            How many of those use in-app advertising?

            1. 1

              Potentially many. Apple doesn’t take a cut from in-app advertising so it doesn’t matter for the purpose of this discussion.

  3. 30

    What was one revelation of the Snowden leaks? NSA/Five Eyes really hate encryption and lamented the fact that more and more web traffic is encrypted. It would be very convenient for them to have a honeypot MITM that strips encryption and can see all traffic, while also preventing Tor users from effectively browsing the clearnet.

    Cloudflare sees all traffic between you and the website you want to visit in clear text. Cloudflare is located in San Francisco, CA, USA. 20% of clearnet internet traffic goes through Cloudflare. Every US company can be forced by secret federal court order to allow the NSA to tap into their communications and no one at such a company who knows about it may talk about it to anyone unless they want to spend the next 10-20 years behind bars. It doesn’t matter if Cloudflare was an NSA-thing from the start or turned into one later, it very surely is given its size and market share.

    DDoS protection is nothing special. Hosters like Hetzner have first-rate DDoS-protection and it’s included free of charge with their VPS packages. With some very few exceptions, I think it’s nonsense that companies think they have to use Cloudflare for DDoS protection.

    Please think twice before using services like Cloudflare, especially when they’re “free”. Who is the product?

    1. 3

      Please think twice before using services like Cloudflare, especially when they’re “free”. Who is the product?

      While I agree with that, it’s often not even the choice of most tech people, unless it’s their own company. Similar things are true for cloud usage at large. There’s very little incentive to care about privacy and that kind of security in most companies. It doesn’t cost companies anything, but it brings them certain benefits. It’s just not how your typical company operates.

      Of course this also explains why companies, large and small are being “hacked” all the time. But the response is using some mandatory security courses for employees and hoping it doesn’t happen next time. Security is barely a worthwhile endeavor for most companies, outside of marketing and similar things. It sounds good both in ads and in internal presentations, projects, etc. But it’s rarely meant sincerely in commercial contexts.

      It’s more like companies showing you a “Your privacy is important to us”, when the only reason that they are required to have that banner up is precisely cause they couldn’t care less about it.

      Companies will still eagerly provide your data to CDNs, analytics tools, and all sorts of other third parties, embed Facebook, not read the docs enough to opt out of non-Facebook data being sent to FB, and so on. It’s simply not an objective for a company that exists to increase profit. It’s not just about privacy. It’s a general theme. It’s all about incentives.

    2. 2

      Cloudflare sees all traffic between you and the website you want to visit in clear text.

      Please explain this claim.

      1. 22

        If a website uses Cloudflare, the traffic between you and the website is 100% readable by Cloudflare. If you don’t believe me, read this:

        CF does see all of the passwords, OAuth tokens, secrets, and PII that go through its systems, however, Cloudflare operates in accordance with the GDPR and isn’t an advertising or data collection company giving them little to no incentive to steal any PII or steal the passwords of customers/website operators.

        trust us™

        1. 6

          It’s not a question of belief. It was simply a technical question. As @edk mentions, the CDN functionality relies on being able to terminate the TLS connection on a Cloudflare server.

          It certainly is a security puzzle worth thinking about. For example, there are protocols (designed before TLS was widespread) that use nonces and do not pass plain text passwords or even login identities (see “userhash”), even within TLS protected streams, e.g. https://datatracker.ietf.org/doc/html/rfc7616

          1. 3

            It doesn’t seem like a security puzzle to me.

            A lot of Cloudflare’s (and other CDNs’) features depend on MITMing: reading data, but also things like modifying headers, sometimes compressing or re-encoding images, etc. And of course they cache the data. Tunneling through Cloudflare wouldn’t be a big problem, but also wouldn’t gain you anything.

            You could of course do that just for passwords, but the thing you protect against by having an account and a password could still be done by Cloudflare (reading content, and even modifying requests and responses).

      2. 8

        Cloudflare is a CDN at heart. Like any CDN it needs to think in plaintext so it can cache things. So Cloudflare’s reverse proxy terminates TLS and (optionally!!) re-establishes TLS in order to talk to whatever is behind it. Setting aside any internal policy/security measures, which I hope exist but have no way of knowing for sure, someone with access to Cloudflare’s infrastructure could snoop on traffic while it’s between TLS connections, so to speak.

        I should note that unlike parent I am not totally convinced Cloudflare is the NSA, although I would imagine they’ve seen more FISA orders than most companies their size.

        1. 6

          They don’t really need to “be” NSA. If they operate in the US, as they do, any employee can be compelled to do their bidding through a National Security Letter, and it might even be a punishable offense for that employee to tell his boss.

          1. 0

            That’s the happy case. There are many governments far more malign than the US Government; I’d bet that some of them (e.g. the Chinese and Russian Governments) have at least attempted to compromise individual employees of Cloudflare.

            1. 3

              The “happy case” depends entirely on who exactly has their privacy infringed by a Cloudflare compromise, and it will likely not be the same answer for everyone involved.

      3. 2

        What was one revelation of the Snowden leaks? NSA/five eyes really hate encryption and lamented about the fact more and more web traffic is encrypted.

        This was a published issue long before Snowden. From the Clipper chip arguments of 1994 or so, and back earlier with James Bamford’s The Puzzle Palace, all these supposed revelations were in the clear. https://a.co/d/8KBvKPL

        1. 3

          Yeah, but Snowden demonstrated that the surveillance was an order of magnitude or two larger than what people realistically expected.

          1. 1

            I think (pretty much aligned with your point) that “people” in your sentence really means “people who didn’t read Bamford’s The Puzzle Palace from 1983, or read any Freedom of Information Act documents since then about NSA, or ever visit NSA,” because most of the people I knew were like “no duh… should be obvious”.

            And, again to your point, the number of such people was adequately large to create a sustained reaction to Snowden’s leaks.

            I do think the co-opting of NSA equipment to watch domestic cellphone network traffic was the only previously unemphasized thing (because it’s outside NSA’s charter, unless one side of the conversation crosses the US border).

  4. 84

    Graydon’s outlook here is really impressive.

    He kicked off an incredibly influential project, then didn’t block other people from evolving it in a direction that wasn’t his original vision, and can talk about it so sensibly. Clearly he’s attached to some aspects of his initial vision (and he does sometimes argue that specific Rust features were mistakes/shouldn’t be adopted), but recognizes that actually existing Rust fills an important niche, that it probably couldn’t have filled while following his initial vision.

    So many tech luminaries would be writing bitter posts about how much better everything would be if the project had just listened to them. Or they never would’ve stepped down in the first place, and the project would’ve stalled.

    1. 16

      Yes definitely … and I was thinking about this a little more: What do Graydon-Rust and Rust-2023 actually have in common? The only things I can think of are:

      • it’s an imperative ALGOL-like language that has algebraic data types (OCaml influence)
      • it has the fn keyword
      • it pushes the boundary on the type system, but even that is different – “typestate” vs. borrow checking

      Almost everything else seems different?

      • the syntax is more elaborate as he says, and it has more traditional C-style keywords like break and continue
      • integer types are different (see auto-bignum)
      • container types are different (vec would be builtin, vs. library)
      • the unit of code larger than a function would be different - ML-like modules vs. traits
      • type system would be very different – more structural than nominal, less inference
      • memory management would be dynamic/GC, not static
      • concurrency would be different – actors vs. async/await
      • error handling would be different – the result isn’t what I wanted at any point, and I don’t know where I would have gone with it.
      • there would be more dynamic features – reflection, more dynamic dispatch
      • metaprogramming would be different – he wanted quasiquotes
      • it would have tail calls, but it would NOT have environment capture
      • the implementation is different – OCaml rust-prehistory vs. self-hosted LLVM

      That’s like EVERYTHING in a language !!! What did I miss?

      It’s a bit shocking that it even worked, and ended up where it is today … it’s so wildly different.

      It seems like Mozilla got together a bunch of talented language and compiler engineers around Graydon’s side project, and then produced something completely different.


      As an aside, has anyone rendered the early doc links from this post (and published them)?

      https://github.com/graydon/rust-prehistory/blob/df8cc964772b36fe120df51eb5ee408b6dc2953a/doc/rust.texi#L82-L137

      1. 7

        OTOH the initial assessment of why Rust was needed was spot on, so I think it accomplished the goal, even if via a different path.

        1. 3

          Yeah definitely, looking over it, I would add these stable design points

          • Safety – he mentions C++ being wildly unsafe in the slides. From 2016 - Rust is mostly safety
          • de-emphasize GC and pointer-rich data structures
          • de-emphasize and control mutability, and especially shared mutability
          • parameterized types, interestingly using swap[T]() syntax
          • RAII (destructors)
          • UTF-8 strings, yay

          So in those senses it’s the same, but it still surprises me how different it turned out!

          (Also fun to see the stack iterators and actors, re-reading that deck)

        2. 1

          That’s a really neat artifact!

          • Very funny that he cautions against rewriting a major project in Rust. Truly the Rust he wanted had no future.
          • Interesting that he used square brackets for generics and the project later switched to angle brackets. Lots of people like to complain about the parsing ambiguity of angle bracket generics on the web.
          • Seeing a slide titled OMGWTFBBQ made me oddly nostalgic.
    2. 11

      I had exactly the same thought while reading this. The other thought I had was that I would have very much preferred many of the ideas he suspects the reader would not. In many cases the choices he would have made perfectly match my retroactive wishes after having used rust for a while.

    3. 6

      I can’t think of an example of a tech luminary who would be bitter. It might be that I’m super skeptical about what constitutes a luminary, though.

      1. 11

        Linus Torvalds would 100% say “it’s shit” if he had not been guiding Linux until recently.

        1. 21

          I think this is a huge misread of Torvalds

          He’s infamous for some abusive rants, but they’re all directed at Linux contributors, and not at people working on other open source projects, or their work

          To me it’s not a coincidence that he created what’s arguably the most collaborative open source project of all time, and he gets emotional about what is let into the codebase.

          It’s basically because of the large scope and highly collaborative nature of the project – that’s the only “control” he has

          Most maintainers would try to review all patches, and they would drown in it, and lose contributors, and the project would end up smaller. But he doesn’t, so he fishes out “smells” and then blows up when he sees something he doesn’t like

          I’m not defending it (I certainly wouldn’t want to work in that environment). But I’m saying it’s not coming from a petty or jealous place

          And TBH the rants generally seem like they have a reasonable technical justification, and there could be something to learn. He just chose to make it profane and abusive so everybody remembers it … kind of a shitty tactic, but there’s a method to the madness

          1. 13

            He’s infamous for some abusive rants, but they’re all directed at Linux contributors, and not at people working on other open source projects, or their work

            The first Linus rant that comes to mind is the one where glibc replaced memcpy with one that complied with the spec and was faster, but broke programs that expected memcpy to behave like memmove. So I’m going to have to disagree with this as a statement of fact.

            1. 4

              Sure, but I’d say Torvalds is angry because he has “skin in the game” … Not because he’s a petty or jealous person.

              I mean Graydon quit and Torvalds didn’t until recently, and Linux is much bigger than Rust – it’s inherently a stressful position

              I’ve corresponded with him directly many years ago, and he’s a very clear and effective communicator, always interested in the best solutions.

              It’s unfortunate that he got what he wanted – people remember the 1% of his extreme anger – but 99% of the time he’s helpful and effective.

              (And not to say I don’t have any technical problems with what they’re doing. Lots of Linux and git are simply incoherent. But I regard that as a side effect of scale. It’s literally parallel development where people don’t think about what others are doing. The contrast is something like https://www.oilshell.org/ where I try to keep the whole language in my head, and make it globally coherent, and it doesn’t scale development-wise. I think that’s mostly OK for what we’re doing, but it’s a tradeoff, and could be better.)

            2. 4

              That particular change broke the closed source Flash player plugin, which basically broke the Web back then. Have to agree with Linus there.

        2. 18

          Yeah, because that’s what he said about git after giving up his primary developer status to Junio C. Hamano? Oh, wait… https://www.linuxfoundation.org/blog/blog/10-years-of-git-an-interview-with-git-creator-linus-torvalds

          Has it lived up to your expectations? How is it working today in your estimation? Are there any limitations?

          Torvalds: I’m very happy with git. It works remarkably well for the kernel and is still meeting all my expectations.

          1. 7

            Haha, don’t ruin my perfectly good slander with facts. :-)

    4. 2

      It’s certainly an unusual stance, but I don’t know if it led to the best outcome vs. being BDFL and making Rust a better language.

      1. 3

        Better for what and along what axis is the question.

        The vision the community (and probably Mozilla) latched onto is clearly very different than Graydon’s, but is it worse for it? Would the world be better off with another efficient-ish applications language, and more importantly still without any sort of viable replacement or competitor to C++?

        1. 1

          The performance aspect seems like the most important difference to me, and could be a deal breaker for many use cases. Wasn’t Go designed as a replacement for C++ though?

          1. 3

            Kinda but not.

            It was designed as a replacement for some C++ uses at the upper edge (or is it bottom?): network services, daemons, some CLIs. But it never aimed to broadly replace C++ itself in the niches where it’s solid. Rather, to take the space where C++ and scripting languages would meet.

  5. 1

    This would be a reasonable place to mention Python’s predecessor SETL.

  6. 2

    I’m slightly embarrassed that I didn’t notice the pun in the title until the second reading.

    1. 1

      Sorry, what pun? Maybe I am a language model.

      1. 2

        The ‘We’re Afraid’ bit is a direct reference to the kinds of language construct that the paper is describing. It’s similar to the ‘we found an neuron’ thing in the title of another paper that was shared here a few weeks back.

        1. 4

          How’s that a pun? (native english speaker wondering what you’re seeing)

        2. 1

          Interesting, thank you for explaining!

  7. 1

    fedora linux 37 for aarch64 works great for me on a quite affordable ec2 graviton (2? 3? i forget) instance. it’s fantastic, the cloud side of my daily driver (with AsahiLinux on M2 on my lap side, also fantastic)

  8. 3

    Good article! But with the caveat that tree-walking interpreters are the slowest type, so most “real” (practical) language implementations move to a bytecode representation, or else compile to native code or transpile to a faster language.

    (I guess pure LISP interpreters are by necessity tree-walking. Or are there any that compile to bytecode?)

    1. 9

      Common LISP actually compiles to a binary.
      Compiling to bytecode does not ensure a faster interpreter, as demonstrated by Python. :-)

      What makes other approaches faster is very often the fact that some very efficient heuristics will detect recurrent patterns in the tree that can be collapsed into faster routines. Whatever your compiler, you still have to run your execution tree in one way or another.

      1. 10

        Common Lisp is a language with several different implementations. Some of them compile to native code binaries and some don’t.

        1. 1

          SBCL can produce native code?

          1. 3

            yes, it does compile to native code

    2. 2

      (I guess pure LISP interpreters are by necessity tree-walking. Or are there any that compile to bytecode?)

      CLISP compiles to bytecode.

    3. 1

      As a simple example, consider an expression in tree form like 1+2. It’s easy to move to a linearized form suitable for a stack machine. You can write it in postfix notation as 1 2 +. Push 1, Push 2, Add.
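
      A tiny sketch of that idea in Java (not any particular VM’s bytecode, just the push/push/add shape):

      import java.util.ArrayDeque;
      import java.util.Deque;
      import java.util.List;

      public class StackMachineSketch {
          // The "bytecode" for 1+2 in postfix: Push 1, Push 2, Add.
          record Op(String kind, int arg) {}

          public static void main(String[] args) {
              List<Op> program = List.of(
                      new Op("PUSH", 1),
                      new Op("PUSH", 2),
                      new Op("ADD", 0));

              Deque<Integer> stack = new ArrayDeque<>();
              for (Op op : program) {               // a flat loop, no tree to walk
                  switch (op.kind()) {
                      case "PUSH" -> stack.push(op.arg());
                      case "ADD"  -> stack.push(stack.pop() + stack.pop());
                      default     -> throw new IllegalStateException(op.kind());
                  }
              }
              System.out.println(stack.pop());      // 3
          }
      }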

  9. 5

    Having basically written this same article but in 2016, I just don’t think Elm scales like people think it might. I’ve also been at two different companies already where we ran into the limitations and needed hacks so dirty that rewrites were seen as more practical. Need synchronous native code? SoL. Do you have i18n and l10n considerations? There’s no good solution for you. Need a browser API not supported yet by the Elm team? Good luck becoming anointed into the boys’ club that gets access to solving your real-world problems.

    I feel Elm is best used as a tool to learn FP and has a lot of things to teach about design/architecture (we see TEA now used as an acronym all over because of the idea’s success), but it’s not the horse you should bet on when you can do TEA-style programming without all of the limitations; there are dozens of options now in many different languages, especially the purely functional languages.

    1. 4

      Both of the following can be true at the same time:

      • Elm has flaws
      • Elm scales better than TypeScript

      I’d say Elm’s downsides compared to TypeScript are more domain specific (e.g. no native i18n support) whereas TypeScript’s are more structural (e.g. npm vs Elm’s package manager). So how relevant those downsides are to you depends on your use cases.

      At NoRedInk we’ve been extremely happy with Elm since 2015, but to be fair, we don’t do any i18n. Maybe if we did we’d be less happy with it.

      Vendr is another company with 400K+ LoC of Elm in production powering their whole frontend, and has been for years (they hosted the NYC Elm meetup pre-pandemic). They use TypeScript on the backend, so they’re very aware of how the two stack up!

      1. 3

        Maybe “might not scale for you” would be a more accurate phrasing, depending on your product requirements. There is a subset of applications, even common ones like SPAs, where Elm may be the easiest to work with, because the runtime is something you don’t need to think about. I still stand by the claim that there are quite a few sore spots that can be showstoppers for other applications.

        I don’t think TypeScript vs. Elm is the only fair comparison though. There are good functional (and even TEA-like) frameworks that compile to JavaScript in PureScript, ReScript, derw, ClojureScript, Scala, F#, and Haskell+GHCJS that are also worth considering and could cover those limitations. The package manager was mentioned, and it too has issues: working offline, private repositories, the freedom to host packages somewhere other than Microsoft’s GitHub, and dealing with versioning and providing patches for packages released for older versions of the Elm compiler.

        If I were put in a position to choose TypeScript or Elm though, even if I had to make some painful workarounds, I would absolutely choose Elm, because TypeScript isn’t built for functional ergonomics and its by-design type-system adherence to the goofiness of JavaScript makes it awful and verbose to work with. I also wouldn’t be where I am without having chosen to invest time learning Elm.

    2. 3
  10. 4

    Genuine curiosity: for folks wanting multicore OCaml, why not use F# from current dotnet 6.x since it works everywhere, and can compile to native, I believe, too.

    1. 11

      A few reasons off the top of my head:

      • OCaml has fast compile from scratch and near-instant incremental compilation
      • Compiles to actual native binary as opposed to a full bundle of .NET runtime in a thin wrapper executable
      • OCaml’s Multicore approach is similar to Java’s Project Loom virtual threads approach, i.e. no need for async/await, just program in direct style and I/O is automatically nonblocking.
      1. 2

        Thank you!

    2. 10

      The most common answer to “why use X when you could use Y” is “we already have a bunch of X and rewriting it all in Y is a non-starter”

    3. 5

      does ‘everywhere’ include NetBSD, OpenBSD?
      F# GUI binding on non-Windows platforms, is there a good framework to pick up?

      I think access to the .NET ecosystem is a big plus for F#, but it is not clear that syntax constructs between the two languages map one-to-one. I cannot find links right now, but as a casual reader it seems that OCaml is more advanced.

      1. 7

        They absolutely don’t map. There’s a common subset, but that’s about it. F# lacks a module system comparable to ML (functors etc.), and its OO is the .Net OO, not OCaml’s object system with structural typing and object type inference.

        1. 4

          Plus the programming model offered by domains+effects is nothing like async F#.

      2. 1

        I wonder if F# will evolve towards parity

  11. 2

    I wonder how it differs, from the programmer’s perspective, from the good old days of green threads in the JVM until 2002 or 2003 or whenever it was.

    It used to be that Java on the JVM was doing essentially the Goroutines experience before Go was a thing (by several years).

    1. 3

      If I remember right, those coroutine libraries were not providing parallelism; they provided concurrency at the syntax/program level. Project Loom essentially enables the JVM (not just Java) to establish a low-level platform to develop an Erlang-like ecosystem, where a given function call can be wrapped (if needed) in a lightweight thread.
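
      The “wrap a call in a lightweight thread” part really is a one-liner on Java 21 (minimal sketch; handleRequest here is a stand-in for whatever blocking call you already have):

      public class WrapInVirtualThread {
          public static void main(String[] args) throws InterruptedException {
              // Each call gets its own virtual thread; blocking inside it parks
              // only the virtual thread, not the underlying carrier thread.
              Thread t = Thread.startVirtualThread(WrapInVirtualThread::handleRequest);
              t.join();
          }

          static void handleRequest() { // stand-in for real blocking work
              try { Thread.sleep(50); } catch (InterruptedException e) { }
          }
      }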

  12. 34

    I fear the ramifications of Fuchsia. From where I’m sitting, it looks like Google bootstrapped Android off the back of Linux, didn’t really give much back, and then set out on a campaign to rid themselves of components with licenses that might obligate them to give anything back in the future. Android just keeps getting more and more closed, and what remains open is increasingly useless without adding on proprietary software. They’ve made a mockery of the freedoms granted by the GPL; for most Android users, the only alternative to “whatever OS we decide to give you” is picking some hacked-up mess from a forum that will be maintained for approximately 37 seconds. To me, Fuchsia feels like an attempt to close this loop; once the Linux kernel is out of the picture, Google will have rid itself of all those troublesome GPL components and can forget that whole “open source” thing ever happened.

    1. 7

      Frankly, this doesn’t make any sense since Fuchsia is open source? Yes, Fuchsia is not GPL, but Google wrote Fuchsia so Google can decide its license.

      1. 18

        It’s a difference between users having guaranteed rights to the source code now and in the future, vs users depending on continued benevolence of Google.

        Sadly, GPL covers only the kernel. Android is already problematic from software freedom perspective due to PlayServices dependency, and important components like camera image processing being kept closed-source.

        With such a history, do not expect Google to be a good steward of a project they can close as much as they want. Google already keeps Android forks inferior, and Fuchsia gives them even more code they could make closed at any time to make Android forks harder to maintain.

      2. 9

        But doesn’t this just confirm the statement?

        Can decide its license

        and thus they can do anything they want with it, including adding proprietary extensions and APIs that you need to run the Android of the future, closing its source later on, or changing the license such that it’s not free to use.

      3. 2

        that’s entirely compatible with what /u/jordemort said, is it not?

    2. 6

      There has definitely been this trend, and it’s not just Google. Amazon also comes to mind.

      Businesses tend towards rent-seeking behavior by nature, and often only “donate” when it is a means to that end. Google is neither a charity nor a non-profit.

    3. 5

      Your theory might be coincidental with what the interviewee said was the real motivation: the insanity of Google having 4 or more disjoint teams all separately customizing the Linux kernel?

      From the article:

      “At that time, Fuchsia was never originally about building a new kernel. It was actually about an observation I made: that the Android team had their own Linux kernel team, and the Chrome OS team had their own Linux kernel team, and there was a desktop version of Linux at Google [Goobuntu and later gLinux], and there was a Linux kernel team in the data centers. They were all separate, and that seems crazy and inefficient.”

      1. 9

        It doesn’t seem that crazy and inefficient to me. All those teams supported different products with different requirements. A big thing with Android was (finally) getting Binder upstreamed into the Linux kernel… that was a long process. I’ve not heard anything about ChromeOS or the desktop efforts that had IPC mechanism requirements that couldn’t be fulfilled by existing projects like D-Bus.

        ChromeOS is about providing a polished and narrowly focused experience, without the flexibility that a nominal desktop OS should provide. So I don’t see as much overlap there either. And the server team I’m sure was more worried about software defined networking, virtualization, and making sure process scheduling doesn’t bog down on a 64-core machine. Also not necessarily a lot of overlap.

      2. 6

        I think the reasons why an engineer might want to start a project aren’t necessarily the same reasons why management might want to get behind a project.

      3. 1

        so the solution to that inefficiency is not to unify their linux efforts, but to develop an entirely new kernel??

        1. 1

          Maybe they found that it wasn’t efficient to shoehorn basically the same monolithic kernel into everything from mobiles to cloud clusters.

          1. 1

            maybe but that’s not what the interviewee said

    4. 2

      I just got a new phone and installed LineageOS on it. The GPL is doing absolutely nothing to help keep AOSP free and open: the requirements to deploy Google things are due to the Play Store having a monopoly on most apps that people actually need and the fact that a lot of things depend on Play Services and so on.

      I’m looking forward to Fuchsia replacing Linux in Android. It’s a much better kernel design and a better implementation. The main obstacle for Fuchsia at the moment is that Google open source projects are very much Google projects that happen to be open source. They are very bad at building (and not then immediately screwing over) communities.

    5. 1

      yeah it’s bad, but we already knew everything would move in that direction without effective organized resistance.

  13. 1

    Seems a counterexample would be a dataset of type Identity x Boolean, and anonymization would be dropping the Identity coordinate. That’s definitely totally anonymous.

    1. 4

      As with everything, that depends on what the data is, and on what other information the attacker has. Suppose it’s an academic course roster, and the boolean represents who passed. Suppose the attacker was a student in that course and already knows several other people’s grades. In that case, they might gain new information even from something so simple as the count of how many trues and how many falses the data set contains.

      Of course that’s a contrived example, but the point is that there are very few safe generalizations about anonymization. If you’re not using some mathematically rigorous framework such as differential privacy, you’ve always got risks like the above.
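
      To make the counting argument concrete, here is a tiny hypothetical sketch (all names and numbers are invented for illustration): the “anonymized” release is nothing but the pass count, yet combined with the grades the attacker already knows it fully determines the remaining student’s result.

          import java.util.Map;

          public class DifferencingSketch {
              public static void main(String[] args) {
                  // Hypothetical 5-person course; the released "anonymous"
                  // statistic is just how many students passed.
                  int releasedPassCount = 3;

                  // Grades the attacker already knows (their own plus friends').
                  Map<String, Boolean> known = Map.of(
                          "attacker", true,
                          "alice", true,
                          "bob", false,
                          "carol", false);

                  long knownPasses = known.values().stream().filter(p -> p).count();

                  // Whatever is left over must belong to the one remaining student.
                  boolean remainingStudentPassed = (releasedPassCount - knownPasses) > 0;
                  System.out.println("remaining student passed: " + remainingStudentPassed); // true
              }
          }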

      1. 2

        I could see that happening quite easily - my university used to post test and assignment grades on a board next to student ID numbers. That wouldn’t take much work to deanonymize. Even genuinely random IDs for a study would need to stay consistent for each student across multiple courses, and if the course sizes were small enough (postgrad CS courses at my university ranged from 2 - largely through attrition intentionally caused by the lecturer - to maybe 25), correlating random numbers with actual people would likely not have been too difficult even before you got to grades.

  14. 5

    Yes it matters.

    At least with C++, developers can slowly learn the more arcane parts of the language while they use it. That’s a bit more difficult with Rust.

    Furthermore, it might be possible to implement some form of borrow checking for existing languages.

    Any language should be easy to learn. That’s true for C and python. Language popularity is highly correlated with ease of learning. And this is true for all the new languages out there that try to do fancy things: most developers do not care.

    Personally, all I would ever want, is something mostly like C/C++, with pythonic features, easier to read and use, faster to compile, without a GC, statically compiled, without sophisticated things.

    1. 17

      I wouldn’t call C easy to learn. It probably has less essential complexity than Rust, but there are still a lot of fiddly details to learn that wouldn’t come up in languages created decades later with garbage collection and better tooling and syntactic defaults.

      1. 9

        A couple of issues I found when wanting to learn C come from all of the variation its history has produced. What tooling should I use? Which conventions should I follow? Which version is the current version?

        The various C standards are not conveniently discoverable, and even when you map them out, they’re written in reference to past standards. So to get the full set of rules you have to mentally diff K&R C against a handful of other standards published over 40 years, etc. Starting from no knowledge and trying to figure out “What is the complete set of rules for the most modern version of C?” is nearly impossible. At least that has been my experience and biggest struggle when trying to get started with C multiple times over the years.

        Then I constantly see veteran C programmers arguing with each other about correct form; there seems to be far less consensus than with modern languages.

      2. 5

        I’d say C is easy to learn but hard to master. But that could be said about a lot of languages.

      3. 2

        I think there is a big difference in what the absolute minimum you can learn is.

        You can “learn” C with programs that compile and run the happy path mostly correctly. They probably have tons of bugs and security issues, but you are using the language.

        Rust forces you to handle these issues up front. That does make the minimal learning take longer, but the total learning needed to be a “production-ready coder” is probably actually shorter.

    2. 15

      Man, I was terrified when I was learning C++. I would stick to the parts I was “comfortable” with, but when I would call someone else’s code (or a library) I couldn’t reliably know how the features they used would intersect with mine. And the consequences very often were debugging core dumps for hours. I’m no Rust fanboy, but if you’re going to have a language as complicated as Rust or C++, I’d rather learn with one that slaps my hand when doing something I probably oughtn’t do.

    3. 11

      So, Nim once ARC lands?

      1. 3

        Is that not the case already? I unfortunately do not use Nim often these days, so I might be out of touch, but if I recall correctly arc/orc are available but not the default.

        EDIT: Yeah, it seems to use reference counting by default currently, but the docs advise using ORC for newly written code. Cf: https://nim-lang.github.io/Nim/mm.html

    4. 8

      Any language should be easy to learn. That’s true for C and python. Language popularity is highly correlated with ease of learning.

      All other things being equal, yes, ease of learning is good. But at some point one may have to sacrifice ease of learning to make the expert’s work easier or more reliable or faster or leaner. Sometimes that’s what has to be done to reach the level of quality we require.

      If it means some programmers can’t use it, so be it. It’s okay to keep the incompetents out.

      1. 8

        I was mostly with you, but “incompetents” is harsh.

        1. 4

          Can we at least agree that there is such a thing as incompetent programmers? I’m all for inclusivity, but at some point the job has to get done. Also, people can learn. It’s not always easy, but it’s rarely impossible.

          1. 4

            There are, but generally they’re not going to be successful whether they use Rust or another language. There are inexperienced developers who aren’t incompetent but just haven’t learned yet who will have an easier time learning some languages than Rust, and there are also experienced programmers who simply don’t know Rust who will also have an easier time learning other languages than Rust. Since the incompetent programmers will fail with or without Rust, it seemed like you were referring to the other groups as incompetent.

            1. 5

              Ah, the permanent connotation of “incompetent” eluded me. I was including people who are not competent yet. You only want to keep them out until they become competent.

              My original point was the hypothesis that sometimes, being expert friendly means being beginner hostile to some extent. While it is possible (and desirable) to lower the learning curve as much as we reasonably can, it’s rarely possible to flatten it down to zero, and in some cases, it just has to be steep.

              Take oscilloscopes for instance. The ones I was exposed to in high school were very simple. But the modern stuff I see now is just pouring buttons all over the place like a freaking airliner! That makes them much scarier to me, who has very little skill in electronics. But I also suspect all these buttons are actually valuable to experts, who may have lots of ways to test a wide variety of circuits. And those buttons give them more direct access to all that goodness.

              In the end, the question is: are steep learning curves worth it? I believe that in some cases, they are.

              1. 3

                That makes sense. Thanks for clarifying. I don’t know if I have a strong opinion, but I do believe that there are cases that require extreme performance and that often requires expertise. Moreover, having been a C++ programmer for a time, I’m grateful that where C++ would accept a broken program, Rust slaps my hand.

      2. 2

        True. The Ada programming language is easy to learn, but not widely accepted or used.

    5. 4

      At least with C++, developers can slowly learn the more arcane parts of the language while they use it. That’s a bit more difficult with Rust.

      I’m not sure what parts of Rust you consider “arcane”. The tough parts to learn, borrow checking and lifetimes, aren’t “arcane” parts of Rust; they are basically its raison d’être.

      Any language should be easy to learn.

      Ideally languages would be as simple/easy as they can be to meet their goals. But a language might be the easiest-to-learn expression of a particular set of goals and still be tough to learn – it depends on the goals. Some goals might have a lot of inherent complexity.

    6. 3

      If you’re carefully aware of escape analysis as a programmer, you might realize that you can get much of that with Go - and, while I’ve been programming Go for several years, I’m by no means a Go fanboy.

      In particular, I conjecture that you could write a program in Go that does not use GC, unless the standard library functions you use themselves use GC.

      I need to learn Rust, I realize, having written that prior sentence, and having been originally a C fan.

      1. 6

        Personally, I would strongly recommend the O’Reilly “Programming Rust, 2nd ed.” For me it was a breakthrough that finally allowed me to write Rust and not get stuck. What I write may not be “perfectly shiny”, but before that, I often stumbled into situations I just couldn’t get out of. Now I understand enough to be able to at least find some workaround - ugly or not, it lets me go on writing.

        Also, coming from Go (with a history of C++ long ago beforehand), one thing I had to get over and understand “philosophically” was the apparent lack of simplicity in Rust. For this, my “a ha” moment was realizing that the two languages make different choices in a priorities triangle of simplicity vs. performance vs. security. Go does value all 3, but chooses simplicity as the highest among them (thus GC, nulls, etc; but super approachable lang spec and stdlib APIs and docs).

        Rust does value all 3 too, but chooses performance AND security as the highest. Thus simplicity is necessarily forced to the back-seat, with a “sorry, man; yes, we do care about you, but now just please stay there for a sec and let us carry out the quarrel we’re having here; we’ll come back to you soon and really try to look into what you’d like us to hear.”

        Notably, the “AND” here is IMO a rather amazing feat, where before I’d have assumed it often has to be an “or”. This theory also rather nicely explains the sparking and heated arguments around the use of unsafe in the community - they appear to happen along the lines where the “AND” is, or looks like it is, kind of stretching/cracking.

    7. 2

      Personally, all I would ever want, is something mostly like C/C++, with pythonic features, easier to read and use, faster to compile, without a GC, statically compiled, without sophisticated things.

      I think you’re looking for Myrddin (still WIP) or possibly Hare. Whether they’re “pythonic” is debatable, though.

    8. 1

      Any language should be easy to learn.

      Not only easy to run. Easy. Because why would one make it difficult if it clearly can be made easy? The whole point of programming languages is providing simpler alternatives to the targets of their compilers.

  15. 4

    It’s curious they don’t emphasize Limbo from Bell Labs, though it’s cited in the bibliography and obliquely mentioned in passing. I perhaps misunderstood it to very much be a Bell Labs, thus pre-Google-branding, early Go.

    1. 25

      Go borrowed ideas from Newsqueak, Alef, and Limbo, but it’s very much its own language, started from scratch at Google. The main thing all these languages have in common is channels and some kind of lightweight thread. Limbo had low-latency garbage collection, which definitely helped us believe that was possible for Go too. And it was the first of the three to add preemption to the lightweight threads. But beyond that, there’s not a lot in common.

      Limbo’s module (package) system was dynamically loaded and had separate API definitions and implementations. Strings were handled quite differently. There was nothing like Go’s interfaces. No built-in maps. (It did have built-in linked lists.) Slices were not the same. And the implementation of course was very different from Go: a JIT’ed portable bytecode language inside a virtual machine running its own virtual operating system.