Threads for schell

    1. 2

      This is not my area of expertise, but I feel like there are better solutions out there. I understand that this is an established approach. One thing that jumped out at me is that the process of unification as described in the OP (and as implemented in the linked gist) makes it hard to give useful feedback. A lot of information is lost in the unification and only the error is signalled.

      With a minor tweak the constraint set can be turned into a dependency graph. Each constraint is, in effect, an edge in the graph. Each node is a (variable, resolved type) pair. At the leaves of the graph we’ll have concrete types: either as defined in the code, inferred from literal values, or from function return types if specified. Then the graph is traversed in reverse order, propagating types.

      A nice thing about this is that graphs are relatively common. You’ve either used them for something else already, or you can find a library that implements all the basic algorithms, and probably does so efficiently. As a bonus, we keep all the inference information. Once we get a type mismatch, in addition to just saying there’s one, we can also provide the whole inference chain, so that the user can better understand what and where they need to change to make the types align.
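      A minimal sketch of the idea in Python (hypothetical code, not the linked gist): constraints are equality edges, concrete types sit at the leaves, and each variable remembers which constraint resolved it, so a mismatch can be reported together with its inference chain.

```python
# Hypothetical sketch of the dependency-graph approach (not the OP's gist).
# A constraint is an equality edge; names starting with '?' are type
# variables, anything else is a concrete type and thus a leaf.

def solve(constraints):
    graph = {}  # node -> list of (neighbor, originating constraint)
    for a, b in constraints:
        graph.setdefault(a, []).append((b, (a, b)))
        graph.setdefault(b, []).append((a, (a, b)))

    resolved = {}  # node -> concrete type
    chain = {}     # variable -> the constraint that resolved it
    frontier = [n for n in graph if not n.startswith('?')]  # the leaves
    for leaf in frontier:
        resolved[leaf] = leaf

    # Propagate concrete types inward from the leaves.
    while frontier:
        node = frontier.pop()
        for neighbor, constraint in graph[node]:
            ty = resolved[node]
            if neighbor in resolved:
                if resolved[neighbor] != ty:
                    # `chain` lets an error reporter walk each side back to
                    # a leaf and print the full inference path.
                    raise TypeError(
                        f"{neighbor}: {resolved[neighbor]} vs {ty} "
                        f"(via constraint {constraint})")
            else:
                resolved[neighbor] = ty
                chain[neighbor] = constraint
                frontier.append(neighbor)
    return resolved, chain
```

      For example, `solve([('?a', 'Int'), ('?b', '?a')])` resolves both variables to `Int`, while adding the constraint `('?a', 'Bool')` fails with the conflicting constraint attached.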

      1. 2

        This algorithm has been around since the 70s. 50 years. I would hope that we would have improvements over algorithms made then, but in practice we often do not. That’s because PL theory is hard.

        Unfortunately, saying “there are better solutions out there” doesn’t hold much weight. While it’s almost certainly true, getting there is hard. It’s orders of magnitude easier to criticize than to come up with a working algorithm like this, even if you have a general intuition of the idea.

        You’ve already said that you wouldn’t even spend any time actually working on this. That gives this criticism even less weight.

      2. 1

        You should do it!

        1. 1

          It’s on my projects list but it’s a little bit further along so probably not any time soon, unfortunately.

    2. 3

      Compilers with stronger type systems are a step in the right direction. We talk less about what it takes to refactor at my work than I recall at previous shops (we use Rust, and before that Haskell). Having any compilation step helps heaps. Hopefully there will be more ingenious ideas that help make the art of programming more enjoyable while allowing us to tackle bigger and harder problems.

    3. 2

      I’m balancing family time, working on my rust web front end library ‘mogwai’, my rust game engine and trying not to waste too much time playing valheim. :)

    4. 6

      I wonder if this means that we also need a Xerox PARC for our times – and if our current operating systems can be rescued; the desktop revolution required new ones.

      1. 11

        A lot can be achieved without throwing the whole OS out. The author notes how iOS largely discards the desktop metaphor, and I would like to add that this was achieved despite it being a fork of OSX.

        Personally though, I think the largest obstacle in the way of innovation is how locked-down our operating systems and the devices they run on are. If you run a proprietary OS, you can’t experiment with the environment at all, and this is exacerbated on many modern devices, which are designed to run one (proprietary) OS, and actively try to hinder the use of any other OS. The only people who get to rethink the user environment are designers at Apple/Microsoft/Google, and even they only get to do so within the limits set by their company.

        Open source OSs have given us outsized innovation relative to their development resources and popularity in this regard, and it’s a shame that they remain obscure to most.

        1. 16

          I’ve seen very little UX innovation from open source OSs. Maybe more of it exists but isn’t widely publicized, but in that case why not? Wouldn’t it attract users?

          Mostly what I’ve seen in open source Unix desktops is (in the old days) terrible bizarro-world UIs like Motif; then clunky copies of Windows or Mac UIs; well-done copies of Windows or Mac UIs; and some stuff that’s innovative but in minor ways. Again, this is my viewpoint, as a Mac user who peeks at Linux sometimes. Prove me wrong!

          Back in the early ‘00s Nautilus looked like it was going to be innovative, then it pretty much died and most of the team came to Apple and built Safari instead (and later contributed to iOS.)

          Creating great UX requires eating your dog food, and so much of the Linux world seems to be of the opinion that the terminal and CLI are the pinnacle of UX, so we get advancements like full-color ANSI escape sequences and support for mouse clicks in terminals. I have no doubt this makes CLIs much better, but where does that leave the UX that non-geeks use? A CLI mindset won’t lead to good GUI, any more than a novelist can pick up a brush and make a great painting.

          (Updated to add: I wonder if some of the Unix philosophy and principles aren’t contrary to good end-user UX. “Everything is a file” means everything has the limits of files as bags-of-bits. The emphasis on human readable file formats (that it takes a geek to understand) means less interest in non-geek ways to work with that data. And “small pieces loosely joined” results in big tool shelves that are very capable but come with a steep learning curve. Also, in a rich user interface, my experience is that connections between components are more complex to do, compared to streams of text.)

          1. 8

            Another point in your favour: KDE feels like a slavish clone of whatever Microsoft was doing then. Perhaps not in your favour: Emacs is a “different” UI that people actually use, but the foundation for Emacs was laid by Multics Emacs being in Lisp instead of line noise^W^WTECO, and Lucid doing most of the work bringing it kicking and screaming into the GUI.

            I really think the Linux (and perhaps nerd) fixation on the VT100 and VT100 accessories is actively damaging. We certainly didn’t perfect UIs in 1977. CLIs deserve better, and maybe when we finally escape the character cell ghetto, we can actually move forward again.

          2. 8

            I think you’re conflating innovative with polished, and with (“non-geek”) user friendly. Something made by a handful of volunteers is never going to be as polished as what a company with a budget in the billions churns out, and something made by terminal weirdos for terminal weirdos is never going to be user friendly to people who aren’t terminal weirdos.

            One project I would point to as innovative, dwm, directly addresses some of the things the author brings up in that blog post. It has a concept of tags, where every window is assigned a number of tags, and the user can select a number of tags to be active. This results in all windows with one of the selected tags appearing on the screen. This directly maps to the concept of “contexts” in the blog post, and takes it a step further by allowing contexts to be combined at will. It also allows the desktop to be decluttered at a keystroke, yet still have all the prior state immediately accessible when needed. dwm is definitely not polished, and it sets an intentional barrier to entry by requiring users to edit a C header and compile it, but it’s hard to argue that it isn’t innovative.
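            The tag mechanic is simple enough to sketch (illustrative Python, not dwm’s actual C config): each window carries a bitmask of tags, and the visible set is every window whose tags intersect the current selection.

```python
# Illustrative sketch of dwm-style tags (window and tag names invented
# here, not dwm's actual config): each window carries a bitmask of tags,
# and the visible set is every window whose tags intersect the selection.

WORK, MAIL, MUSIC = 1 << 0, 1 << 1, 1 << 2

windows = {
    "editor": WORK,
    "inbox":  MAIL,
    "player": MUSIC,
    "notes":  WORK | MAIL,  # one window can belong to several contexts
}

def visible(selected):
    """Windows shown when the given combination of tags is selected."""
    return sorted(name for name, tags in windows.items() if tags & selected)
```

            Selecting `WORK` shows the editor and the notes; selecting `WORK | MAIL` combines both contexts, which is the “contexts can be combined at will” behaviour described above.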

            I don’t think it’s surprising to note that Linux people build Linux user environments, or that Mac people don’t like those Linux environments. What I would love to see though, would be the environments Mac people would build if given the chance.

            1. 1

              I don’t think it’s surprising to note that Linux people build Linux user environments, or that Mac people don’t like those Linux environments.

              Is there a definition of “Linux user environment” and “Mac environment” here that I’m unaware of? When I hear “Linux user environment” I simply think of the userland that sits atop the Linux kernel.

              1. 2

                I meant it in a broader sense. As I see it, there are certain design ideals and philosophy associated with different operating systems, that are broadly shared by their developers and users. These ideals largely shape the user environments those developers build, and those users use.

                1. 3

                  As I see it, there are certain design ideals and philosophy associated with different operating systems, that are broadly shared by their developers and users.

                  Does such an ideology or philosophy agree for free operating systems though? Is it defined somewhere and used as a guidepost? Do the users agree to this guidepost?

                  Just look at the history of Linux and BSD and you’ll find enough hand-wringing about the different cultures in the communities and the users they attract. I just don’t think any operating system, free or not, has consensus around its philosophy. Users use computers for a variety of things, and I think a small minority use an operating system specifically to align with a personal philosophy.

          3. 7

            I’ve seen very little UX innovation from open source OSs.

            This is systemic and not limited to open source OSes. There has not been significant UX innovation among commercial operating systems either. Arguably for good reason: people are mostly satisfied. We’ve gotten so good at UX that we are now going in the other direction, using dark patterns to betray our users and get them to do things they don’t want to do. Most “This UX is bad” posts are picking nits compared to what we had to work with in the 90s.

            FWIW, I do think the open source community’s work on tiling window managers are commendable. Stacking managers are partially an artifact of low-resolution displays and early skeuomorphic attempts.

            As far as the “CLI mindset” goes – it’s interesting that nobody has taken a step back and considered why the CLI continues to proliferate… and, if anything, gain momentum in the last 15+ years. There were high hopes in the 90s that it would be dead within a decade. Many seem to have fallen victim to the appeal-to-novelty fallacy, simply assuming newer = better, not fully comprehending the amount of work that went into making CLIs valuable to their users. You can’t blame the greybeards: the CLI power users these days are Zoomers and younger Millennials zipping around with extravagant dotfiles. And while GUI UX has stagnated, quality of life on the command line has improved dramatically in the last 20 years.

            1. 3

              There has not been significant UX innovation among commercial operating systems either.

              Point taken. In large part the web browser took on the role of an overlay OS with its own apps and UX inside a desktop window, and became the locus of innovation. That UX started as a very simple “click an underlined link to go to another page”, exploded into an anarchy of weird designs, and has stabilized somewhat although it’s still pretty inconsistent.

              I wonder if this is the way forward for other new UX paradigms: build on top of the existing desktop, as an overlay with its own world. That’s Urbit’s plan, though of course their weird languages and sharecropping ID system aren’t necessary.

          4. 5

            (Updated to add: I wonder if some of the Unix philosophy and principles aren’t contrary to good end-user UX. “Everything is a file” means everything has the limits of files as bags-of-bits. The emphasis on human readable file formats (that it takes a geek to understand) means less interest in non-geek ways to work with that data. And “small pieces loosely joined” results in big tool shelves that are very capable but come with a steep learning curve. Also, in a rich user interface, my experience is that connections between components are more complex to do, compared to streams of text.)

            New reply for your edit: I’m developing the impression the “original sin” with Unix was an emphasis on passing the buck of complexity to the user. A simple system for the developers is not necessarily a simple system for the users.

            1. 3

              Not even necessarily the end user — the emphasis on ease of implementation leads to a dearth of useful interfaces for the programmer at the next level of abstraction, forcing them to reinvent basic things like argument parsing and I/O formats in an ad-hoc fashion, impeding composability and leaking abstractions into the next layer, almost like an exception propagating through all layers of the system.

              1. 4

                This thread is also making me think a lot about this old Gruber post. You can’t just bolt on usability, it needs to be from top to bottom, because the frontend is a Conway’s law manifestation of the backend. Otherwise, you get interfaces that are “what if it was a GTK frontend to ls”.

                1. 1

                  When A.T. needs to configure a printer, it’s going to be connected directly to her computer, not shared over a network.

                  Ironically, this kind of dates the article now that laptops, wifi printers, tablets, and smartphones are a thing.

                  Good article aside from that, though.

          5. 4

            but where does that leave the UX that non-geeks use

            I do want to emphasize that there are geeks that enjoy GUI UXes as well. I have a preference for most of my regularly used tools and applications to be GUIs, and I suspect there are others out there as well.

          6. 4

            I definitely agree the mainstream open source UIs are somewhat not innovative. But there have been many interesting experiments in the last 20 years, tiling window managers being one of the more popular examples. For me, the tragedy is that the “market leading” environments seem to be making decisions which are expressly designed to kill off alternatives and innovation (stuff like the baking together of window manager and compositor in Wayland, requiring Sisyphean approaches like wlroots; client side decorations)

            1. 7

              Tiling window managers are almost all the exact opposite of innovative - they’re what people were trying to escape from with Windows 1.x!

              1. 2

                This seems… Questionable, I guess, but maybe I’m missing some context since my useful memory of computers starts some time in the late 80s with the Apple II and DOS machines. What widely used interface in that era was a real parallel to something like xmonad?

                1. 2

                  There were plenty of “tiling” (really, split-screen) GUIs at the time, and overlapping windows was definitely the main feature of Windows 2.x (not even Windows 1.x had overlapping windows).

                  1. 2

                    Yeah, fair enough. I guess I think this last 10 or 15 years’ tiling WMs feel pretty qualitatively different to people who grew up on the mainstream computing of late 1980s – early 2000s, but probably much more because of the contrarian model they embrace than because that model’s basic ideas are novel as such. (And of course it’s fair to ask how contrarian that model is now that most computing is done on phones, where there’s scarcely a “windowing” model at all.)

              2. 1

                If you are arguing that there was no innovation from tiling window managers due to “windows 1.0”, then the only polite thing I can say is I fundamentally disagree with you.

      2. 2

        I think we do, but I’m not sure who would invest in a modern PARC without expecting a quick turnaround and a solid product to hawk.

        1. 8

          For better or for worse, many companies shifted R&D to startups, which basically have to create something product-ready with an exit plan funded by VC. I don’t think long-term research can really be done in this model.

          Excuse me Mister Culver. I forgot what the peppers represent.

          1. 1

            Frankly if we had a good method to fund this sort of thing, we’d have leveraged it to fund the load-bearing OSS that forms our common infrastructure.

    5. 7

      Ohh I remember working in Haskell and being extremely frustrated that libraries wouldn’t use longer generic type names. Free documentation opportunity, wasted! There’s easy stuff and common shortcuts, but sometimes I just get lost in the noise

      1. 1

        Often in Haskell the idea is to reduce the scope of what you need to keep in your head at any given time, and the types themselves are usually the most important details. For example, when looking at a function with two args you can often reduce the scope to the input types and the output types. Knowing the outer context about where the inputs come from is actually a burden in this situation and can lead to misunderstandings. We only care about the implementation of this little function. Clear your mind of all other thoughts ;)

        To keep the implementation free and clear of meaning collisions we use very simple names.

        Also sometimes it can get so general and category or type theory heavy that there are no good names!

        But I agree sometimes it’s a bit too much.

    6. 20

      where even though everything compiled correctly, it didn’t work. For a compiled language, that is not something you expect

      I would never expect that of C++! Maybe Haskell or Rust, but not C++.

      C++ has a ton of holes derived from C, and a whole bunch of new features that can be confused. Here are a couple of surprises I encountered in semi-automatically translating Oil to C++ (even though I’ve been using both C and C++ for decades):

      This is in addition to all the “usual” ones like scope/shadowing, uninitialized variables (usually warned, but not always), leaving off braces like goto fail, unexpected wrapping with small integer types, signed/unsigned problems, dangling pointers, buffer overflows, use after free, etc.

      string_view is nice but it’s also a “pointer” and can dangle. Those are all reasons that code may not work when it compiles.

      I think leaning on the compiler too much in C++ gives diminishing returns. It encourages a style that bloats your compile times while providing limited guarantees. With C that’s even more true since I consider it more of a dynamic language (e.g. void* is idiomatic).


      Historically, C was even more dynamically typed than it is now. Types were only for instruction selection, e.g. a + b for 2 ints generated different code than 2 floats. That’s about it. You didn’t have to declare function return types or parameter types – they’re assumed to be ints. Reading the old Thompson and Ritchie code really underscores this.

      C++ has more of the philosophy of types for correctness, but it was constrained by compatibility with C in many cases. It comes from a totally different tradition and mindset than say Haskell or ML.

      1. 3

        I would never expect that of C++! Maybe Haskell or Rust, but not C++.

        I am somewhat hesitant to say this about Rust or Haskell, even in jest - it’s at best an aspirational aphorism about code in these languages, and if you’re trying to think seriously about program correctness it matters that it’s very possible to write code in Rust or Haskell that compiles but is not correct (for some definition of correct). If you want to write code that you can prove is correct at compile time, that’s a noble goal, and you need more sophisticated tools for doing it than the ones Haskell or Rust give you.

        But yes no one says this even in jest about C++.

        1. 4

          Such generalizations are never true in the absolute sense, but there is a noticeable difference in how often programs are correct when they compile for the first time in Rust vs. less strict languages.

          Rust does a remarkable job eliminating “boring” language-level problems, like unexpected nulls, unhandled errors, use-after-free, and unsynchronized data shared between threads. These things most of the time just work in Rust on the first try. In C++, kinda, maybe, if you’re programming with NASA levels of diligence, but typically I’d expect compiling in C++ to be just the first step before testing and debugging to weed these problems out.

        2. 2

          I don’t think it’s binary so much as a matter of degree: the language’s guarantees on compile-time safety, through things like the type system and the borrow checker, make it more likely that if it compiles, it’s correct.

        3. 2

          Yeah honestly I don’t really believe in that whole philosophy – I feel like it leads you into a Turing tarpit of a type system. There are plenty of other engineering tools besides type systems that need more love.

          But I think that refactoring can be quite safe in strongly typed languages, and that’s useful. Writing new code isn’t really safe, because you don’t know what you want yet, and you can have logic bugs. But refactoring can be, and that’s what the article is about.

        4. 2
          id :: a -> a
          

          Implement this function, as long as you don’t:

          • Throw exceptions
          • Cast
          • Loop infinitely

          Then if it compiles, it’s correct.

        5. 1

          It is true with regard to a property called parametricity. On an intuitive level it states that type parameters are used as expected. So a function map :: (a -> b) -> [a] -> [b] must satisfy that each element of the output [b] is the image under f of some element of the input [a] (note that you could just as well return the empty list for every input and it would typecheck, which is why the guarantee is worded a bit strangely).
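          Stated precisely, this is an instance of the free theorem (from Wadler’s “Theorems for free!”) for any function m of map’s type, where map below denotes the standard list map:

```latex
% For any  m :: forall a b. (a -> b) -> [a] -> [b]:
\forall\, p : A \to A',\ q : B \to B',\ f : A \to B,\ f' : A' \to B' :
\qquad
q \circ f = f' \circ p
\;\Longrightarrow\;
\mathrm{map}\, q \circ m\, f \;=\; m\, f' \circ \mathrm{map}\, p .
```

          Taking p = f and f' = id gives m f = m id ∘ map f: whatever m does beyond applying f can only rearrange, drop, or duplicate elements.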

        6. 1

          It is very often correct though. Usually what we say is that if you understand the problem and your solution compiles it probably works. If it doesn’t work you likely don’t understand the problem.

          You experience this programming in Haskell more often than Rust (I think because of HKTs) but it is still often the case in Rust.

    7. 3

      Good work Srid! Nice to see you’re still writing Haskell and nice to see your static site projects progressing :)

    8. 10

      This reminds me of the long and repetitive text editing tasks that got easier when I had learned vim well enough. I had less resistance to begin. Teammates do tend to view the results or the process as wizardry.

      1. 6

        Great example. In my experience it also reminds me of learning Haskell (or another pure functional lang) in that it’s frustrating at first but if you just stick with it it gets much, much easier and makes a great addition to your skill set. Also similar is the Rust borrow checker, though that is less frustrating.

        1. 5

          I can relate to both of these. Another that comes to mind is Linux, particularly in terms of how the usage of a TWM and a terminal-centric workflow appears to the casual observer.

          Oh, and touch typing on a keyboard without printed keys. I learned to type properly last year after two decades of hunt-and-peck typing, and now - paired with the above - my partner often tells me I look like a wizard.

          But it’s just a case of lots and lots and lots of repetition.

          1. 1

            Now get yourself some Linear A keycaps and really look like a wizard!

            1. 1

              I have runes on mine. Which in addition is 40% ortho. It feels exactly the same to use it among people.

    9. 7

      Nix is the dream of an ultimate DevOps toolkit in a similar way as Haskell is the dream of an ultimate programming language. Neither of these dreams seem to be congruous with reality.

      1. 5

        I definitely felt that way with Nix - that the promise did not live up to reality. Haskell on the other hand does give you the things it promises, but convincing others (founders, VCs, managers) that it’s worth writing the stack in is near impossible, which means that the ecosystem gets stunted. It’s a bit of a cold start problem.

        Going into more detail about Haskell specifically - non-traditional languages like Haskell are often the “weird thing in the room”. When something goes wrong, humans often look at the first weird thing in the room to blame, regardless of whether or not it deserves that blame. The result is that many commercial Haskell projects get rewritten in another, more common language at the first sign of ANY corporate turbulence.

      2. 4

        That’s certainly a strong claim with no supporting evidence in response to a post with supporting evidence.

        1. 5

          I’ve had these discussions numerous times. Nobody ever cares about supporting evidence – they (we?) just flat out reject it if it doesn’t match their preconceptions or flat out accept it if it does. So I don’t bother with it anymore.

          1. 4

            I think I may have misunderstood your original comment.

            Generally people intuitively believe that Haskell is “unrealistic” and only suited for “academic” projects. Some even go so far as to say that people writing in Haskell tend to be “elitists”. In fact, I used to tacitly (through no fault of my own) believe that; it is basically the general impression the tech community holds collectively.

            All of that of course turned out to be utterly false, once I started using Haskell for work and real projects. The real question is - how do we educate the community about what is factual?

      3. 3

        As someone who was swept up in the hype of both of these, this comment hits too close to home. They still have their use cases and offer advantages not seen in many other places, but they are certainly not for every user nor every task.

    10. 5

      This is a fantastic talk! The idea that robust systems are inherently distributed systems is such a simple and obvious idea in hindsight. Distributed systems are difficult, and I have had upper managers claim that we need “more robust” software and less downtime, yet refuse to invest in projects which involve distributed algorithms or systems (have to keep that MVP!). I think Armstrong was right that in order to really build a robust system we need to design for millions of users, even if we only expect thousands (to start), otherwise the design is going to be wrong. Of course this is counter-intuitive to modern Scrum and MVPs.

      Additionally, there is so much about Erlang/OTP/BEAM that seem so cutting-edge yet the technology has been around for a while. It will always be a wonder to me that Kubernetes has caught on (and the absolutely crazy technology stack surrounding it) yet Erlang has withered (despite having more features), although Elixir has definitely been gaining steam recently. Having used kubernetes at the past two companies I’ve been at, it has been nothing but complicated and error-prone, but I guess that is just much of modern development.

      I have also been learning TLA+ on the side (partially to just have a leg to stand on when arguing that a quick and sloppy design is going to have faults when we scale up, and we can’t just patch them out), and I think there are so many ideas that Lamport has in the writing of the TLA+ Book that mirror Armstrong’s thoughts. It is really unfortunate that software has figured out all of these things already but for some reason nobody is using any of this knowledge really. It is rare to find systems that are actually designed rather than just thrown together, and that will never lead to robust systems.

      Finally, I think this is where one of Rust’s main features is an under-appreciated super-power. Distributed systems are hard, because consistency is hard. Rust being able to have compile-time checks for data-races is huge in this respect because it allows us to develop small-scale distributed systems with ease. I think some of the projects bringing OTP ideas to Rust (Bastion and Ludicrous are two that come to mind) have the potential to build completely bullet-proof solutions, with the error-robustness of Erlang and the individual-component robustness of Rust.

      1. 4

        No. Rust prevents data races, not race conditions. It is very important to note that Rust will not protect you from the general race condition case. In distributed systems, you’ll be battling race conditions, which are incredibly hard to identify and debug. It is an open question whether the complexity of Rust will get in the way of debugging a race condition (Erlang and Elixir are fantastic for debugging race conditions because they are simple, and there is very little to get in the way of understanding and debugging them).
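        To make the distinction concrete, here is the classic lost-update race condition with no data race anywhere (a deterministic Python simulation of one possible interleaving, not real threads): every access goes through the store one operation at a time, yet the combined effect is still wrong.

```python
# Race condition without a data race: each individual read/write is
# "atomic" (the store handles one operation at a time), but the
# read-modify-write sequences of two clients interleave badly.

class Store:
    def __init__(self):
        self.balance = 0

    def read(self):
        return self.balance

    def write(self, value):
        self.balance = value

store = Store()

# One possible interleaving: both clients read before either writes.
a = store.read()       # client A sees 0
b = store.read()       # client B sees 0
store.write(a + 10)    # A deposits 10 -> balance is 10
store.write(b + 5)     # B deposits 5  -> balance is 5; A's deposit is lost
```

        No memory is ever accessed unsynchronized, so a data-race checker (or Rust’s borrow checker) has nothing to object to; the bug lives in the protocol, not in the memory accesses.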

        1. 2

          The parent post says rust has compile time checks for data races and makes no claim about race conditions. Did I miss something?

          1. 2

            When you are working with distributed systems, it’s race conditions you worry about, not data races. Misunderstanding the distinction is common.

            Distributed systems are hard, because consistency is hard. Rust being able to have compile-time checks for data-races is huge in this respect because it allows us to develop small-scale distributed systems with ease.

        2. 1

          Yes, Rust prevents data races which is (as mentioned by another poster) what I wrote. However, Rust’s type system and ownership system do make race conditions rarer in my experience, since they require the data passed between threads to be explicitly wrapped in an Arc and potentially a Mutex. It is also generally easier to use a library such as Rayon or Crossbeam to handle simple multithreaded cases, or to just use message-passing.

          Additionally most race conditions are caused by data races, so… yes, Rust does prevent a certain subsection of race conditions but not all of them. It is no less a superpower.

          It is an open question if the complexity of rust will get in the way of debugging a race condition (erlang and elixir are fantastic for debugging race conditions because they are simple, and there is very little to get in your way of understanding and debugging them).

          I don’t understand this point. Rust can behave just like Erlang and Elixir (in a single-server use-case, which is what I was talking about) via message passing primitives. Do you have any sources for Rust’s complexity being an open question in this case? I am unaware of any argument that Rust’s affine type system is cause for concern in this situation – in fact it is usually the opposite.

          1. 2

            “most race conditions are caused by data races”

            What definition of “most” are you using here?

            Many people writing distributed system are using copy or copy on write systems and will never encounter a data race.

            Do I have any sources? Yes. I debug distributed systems, I know what tools I use, and ninjaing them into and out of rust is not going to be ergonomic.

            1. 5

              Just some quick feedback/level-setting: I feel like this conversation is far more hostile and debate-like than I am interested in/was hoping for. You seem to have very strong opinions, and specifically anti-Rust opinions, so let’s just say I said Ada + SPARK (or whatever language with an affine type system you don’t have a grudge against).

              The point I was making is that an affine type system can prevent data-races at compile-time, which are common in multi-threaded code. OTP avoids data-races by using message-passing, but this is not a proper fit for all problems. So I think an extremely powerful solution would be an affine-type powered system for code on the server (no data-races) with an OTP layer for server-to-server communication (distributed system). This potentially gets the best of both worlds – flexibility to have shared memory on the server, while OTP robustness in the large-scale system.

              I think this is a cool idea and concept, and you may disagree. That is fine, but let’s keep things civil and avoid just attacking random things (especially attacking points that I am not making!).

              1. 2

                Not the parent:

                In the context of a message-passing system, I do not think affine|linear types hurt you very much, but a tracing GC does help you, since you can share immutable references without worrying about who has to free them. Linear languages can do this with reference-counted objects—maintaining ref. transparency because the objects have to be immutable, so no semantics issues—but reference counting is slow.

                Since the context is distributed systems, the network is already going to be unreliable, so the latency hit from the GC is not a liability.

                1. 1

                  Interesting point, although I don’t know if I necessarily agree. I think affine/linear types and GC are actually orthogonal to each other; I imagine it’s possible for a language to have both (although I am unaware of any that exist!). I don’t fully understand the idea that affine/linear types would hurt you in a multi-threaded context, as I have found them to be just the opposite.

                  I think you are right that reference-counted immutable objects will be slightly slower than tracing GC, but I imagine the overhead will quickly be made up for. And you’re right – since it’s a distributed system, the actual performance of each individual component is less important, and I think a language like Rust is mainly useful in this context in terms of correctness.

              2. 1

                Can you give an example of a problem where message passing is not well suited? My personal experience has been that systems either move toward a message passing architecture or become unwieldy to maintain, but I readily admit that I work in a peculiar domain (fintech).

                1. 2

                  I have one, although only half way. I work on a system that does relatively high bandwidth/low latency live image processing on a semi-embedded system (nVidia Xavier). We’re talking say 500MB/s throughput. Image comes in from the camera, gets distributed to multiple systems that process it in parallel, and the output from those either goes down the chain for further processing or persistence. What we settled on was message passing but heap allocation for the actual image buffers. The metadata structs get copied into the mailbox queues for each processor, but it just has a std::shared_ptr to the actual buffer (ref counted and auto freed).

                  In Erlang/Elixir, there’s no real shared heap. If we wanted to build a similar system there, the images would be getting copied into each process’s heap and our memory bandwidth usage would go way way up. I thought about it because I absolutely love Elixir, but ended up duplicating “bare minimum OTP” for C++ for the performance.

                  1. 2

                    Binaries over 64 bytes in size are allocated on a shared, VM-wide heap, and only a reference to them is copied between processes: https://medium.com/@mentels/a-short-guide-to-refc-binaries-f13f9029f6e2

                    1. 2

                      Hey, that’s really cool! I had no idea those were a thing! Thanks!

                  2. 1

                    You could have created a reference and stashed the binary once in an ets table, and passed the reference around.

                2. 1

                  It is a little tricky, because message passing and shared memory can simulate each other, so there isn’t a situation where only one can be used. However, from my understanding shared memory is in general faster and lower-overhead, and in certain situations that is desirable (although there was a recent article about shared memory actually being slower in some workloads due to cache coherence: every update forces the other CPUs to reload the affected cache line).

                  One instance that I have had recently was a parallel computation context where shared memory was used for caching the output. Since the individual jobs were long-lived, there was low chance of contention, and the shared cache was used for memoization. This could have been done using message-passing, but shared memory was much simpler to implement.

                  I agree in general that message passing should be preferred (especially in languages without affine types). Shared memory is more of a niche solution (although unfortunately more widely used in my experience, since not everyone is on the message passing boat).

      2. 4

        I think a good explanation is that K8s allows you to take concepts and languages you’re already familiar with and build a distributed system out of that, while Erlang is distributed programming built from first principles. While I would argue that the latter is superior in many ways (although I’m heavily biased, I really like Erlang), I also see that “forget Python and have your engineering staff learn this Swedish programming language from the ’80s” is a hard sell.

        1. 2

          You’re right, and the ideas behind K8s I think make sense. I mainly take issue with the sheer complexity of it all. Erlang/OTP has done it right by making building distributed systems extremely accessible (barring learning Erlang or Elixir), while K8s has so much complexity and bloat it makes the problems seem much more complicated than I think they are.

          I always think of the WhatsApp situation, where it was something like 35 (?) engineers serving millions of users. K8s is nowhere close to replicating that per-engineer efficiency; you basically need 10 engineers just to run and configure K8s!

    11. 10

      Although I like this article because it shows the details of writing your own implementations of Future, it would be unfair to assume that one encounters all these issues in the wild when writing async rust. You may encounter one or two depending on the types involved, but this post is contrived to show them all in one go.

      In my experience writing my own futures has been great and not at all as tricky as the article suggests. I think that is because (at least in my case) a Future is a way to provide an async interface to a long process in a different context. Usually you have that context to work with – i.e. the browser’s DOM or some other callback scenario.

      tldr; I do enjoy these articles. They’re more about grokking rust deeply, as you won’t run into most of these problems in practice.

      1. 6

        I get your point. Having written a couple of Futures used in production code, I have questioned the wisdom of doing so.

        I’m not anti async in rust; it just doesn’t feel like the documentation and patterns have caught up enough in a “canonical” way. That is, there are a lot of smart and well written notes on doing it, but at least as of the last time I did it (~4 months ago), there were still a lot of rough edges.

        For now, I’ve decided to wait for the dust to settle a bit more before investing anymore time with async/await and Rust.

        I’ve had good luck with channels and threads the last 3.5 years of writing Rust so I’ll probably just stick to that until the dust settles.

      2. 2

        I feel like the article made it fairly clear that this is not something you normally do as a matter of course of developing software.

    12. 1

      Since no one contributed a Rust example:

      fn run_length_encode(ins: &str) -> Vec<(char, usize)> {
          let mut out = vec![];
          let mut i = ins.chars();
          // Seed with the first character; an empty input yields an empty encoding.
          if let Some(mut c) = i.next() {
              let mut count = 1;
              for new_c in i {
                  if new_c == c {
                      count += 1;
                  } else {
                      // Run ended: emit it and start counting the new character.
                      out.push((c, count));
                      count = 1;
                      c = new_c;
                  }
              }
              // Don't forget the final run.
              out.push((c, count));
          }
          out
      }
      

      and in Haskell we can use group and mapMaybe for the heavy lifting:

      import Data.List (group)
      import Data.Maybe (mapMaybe)
      
      runLengthEncode :: Eq a => [a] -> [(a, Int)]
      runLengthEncode = mapMaybe f . group
        where
          -- `group` never yields an empty list, so the Nothing case is
          -- only there for totality
          f [] = Nothing
          f (x:xs) = Just (x, 1 + length xs)
      
    13. 2

      Dasp is great. I think rust audio development is in an interesting era right now - there seem to be a wealth of libs and bindings and decoders, but no encoders. Rust seems like a great fit for DSP. Can’t wait for those libs to mature.

    14. 2

      I use a modified Dvorak layout on a datahand keyboard. It’s not too hard to use hjkl in their current place. It is a little frustrating at first but that’s how you know that you’re learning. Just keep at it.

    15. 0

      I think the author sums up my feelings pretty well.

    16. 2
      • Learning Scala, because someone recommended it and it seemed really interesting. From my little usage of it so far, it feels like Java meets Rust.
      • Sleep. A lot. I am running purely on caffeine right now.
      • Probably play some more Skyrim and Diablo III.
      • Work more on my nixinfo crate, I managed to get quite a bit of work done on it. Now most functions will output to a Result<String> instead of a String, and I managed to slim it down some.
      • Probably test out a bunch more programming languages. I love Rust and all, but using the same language all the time can get… ugh. Too bad it’s hard to find languages I haven’t already tried:
        • I don’t like C, C++, Dart, Go, JS, Python, Ruby, nor Swift.
        • Fortran, OCaml, Lisp, Nim, Pascal, and Perl are ok, but not something I’d prefer.
        • Zig is interesting but confusing.
        • I like PHP, but I have no use case for it.
      1. 2

        I find your list of programming languages very interesting! How much Scheme have you tried? If you’ve only tried Common Lisp, I’d give Scheme a chance on its own. They’re philosophically quite different.

        Also, what about PHP is up your alley? Most of your other language preferences make sense to me in context, but PHP is confusing to me, especially given that you like Rust. I feel like PHP and Rust are philosophical opposites of one another in nearly every way, but I might be missing some aspect that’s important to you.

        I love how programming languages are just as much tools for the mind as they are tools for the computer, so two different people might have radically different preferences.

        1. 2

          How much Scheme have you tried?

          Not very much. Though I do know of it. I’ll have to look into it.

          Other lisp I’ve used are indeed Common Lisp and Emacs Lisp.

          I was going to try Clojure as well, but at the time the Java requirement was throwing me off (I’m a little picky about what gets put on my system). That’s not quite an issue right now, obviously, since I’m using Scala. But at the time it was enough for me to avoid it.

          Also, what about PHP is up your alley?

          Ok, this might sound a little weird, but hear me out. I really like shell scripting, like a lot. I’m always creating shell scripts all the time. PHP, to me, feels like the shell scripting of the web. There’s something about PHP that came just as naturally to me as bash scripting.

          I love how programming languages are just as much tools for the mind as they are tools for the computer, so two different people might have radically different preferences.

          Oh 100% I agree. I have one friend who swears left and right that Python is the way to go, that they can implement like anything in it. But at the same time I have another friend who hates Python with a passion and wouldn’t touch it with a 10-foot pole unless he was being threatened with death or something.


          I am really sorry about the late response by the way. Like an hour or so after posting my reply, I crashed right at my desk and I just woke up a few minutes ago.

      2. 1

        Get into Haskell! It’s very rewarding. Rust and Haskell are good friends.

        1. 2

          I really should. I used Haskell for a time when I was on XMonad a long time ago, but when I left it I also left Haskell.

          I like Haskell, though I do remember it having a bit of a steeper learning curve when I tried it.

          Though I do think there was something about Haskell that really threw me off. Just not sure what it was because it was so long ago.

          Anyways yeah, I’ll have to get back into Haskell at some point.

    17. 6

      For those who see this post and get discouraged about the complexity of async rust - don’t give up. This is not a great example of async rust at its most elegant. Maybe that’s because of async-std, I don’t know as I’ve never used it. I have used tokio 0.2, which is great. Instead of reinventing std as async I think it hits a sweet spot - it’s easy to spawn a new thread in which to do blocking IO and then await it using async syntax and std primitives. There are some macros that make life easy there as well.

      Of course there are gotchas and some rough edges – for example, if your program holds a mutable reference over an await point you’ll get a pretty cryptic error. There’s still work being done. But if you wrap your mutable things in Arc<Mutex<_>> and tend to write functional code then it can be easy and simple. Go has no comparison; it simply lacks the types to get the job done. Go’s syntax for multithreaded channels looks promising on the outside, but there’s just not enough type checking for me to be confident that the program is doing what I think it is.

      Also keep in mind that there are certain programming domains that are necessarily async, like the browser! One of rust’s design goals is to provide the journeyman rustacean with as many viable programming contexts as possible - server, desktop, microcontroller, browser, lambda, etc.

      I’ve been very pleased with rust and async using tokio and js-futures. It’s not all hype, there are real productivity boosts to be had.

    18. 3

      Adding user home and registration to my todo bot service http://srcoftruth.com. The forms use higher kinded data types and at the end of this sprint I may have a new form library that shakes out.

    19. 3

      The forced moratorium of 2001-2006 was also prime time for innovation in Flash. It’s going to happen one way or another.

    20. 8

      I’ve got a lot of respect for Haskell, but this seems like a truly complicated way to solve a simple problem.

      Never mind that as soon as you want to add some tuning parameter to your config file, you’re going to need to either thread an effect type through your entire program or perform some other equally non-trivial refactor. Double nevermind having to compose monads with a transformer stack.

      At least Idris offers the bang! effect notation (which probably requires strictness to be sane) built on extensible effects. Haskell version of the Eff type here: http://www.cs.indiana.edu/~sabry/papers/exteff.pdf - That’ll at least alleviate the monad transformer stack madness.

      1. 2

        Yeah, I think this is also a really bad example; it seems infinitely nicer not to pass a big blob of state to various functions, but simply the little bits and pieces that each small function needs in order to do its work (in this case).

      2. 1

        Forgive me, I didn’t read the article but I have used extensible-effects in Haskell and I highly recommend it.