Threads for agent281

  1. 7

    I’m worried that a de-facto move away from dynamic stuff in the Python ecosystem, possibly motivated by those who use Python only because they have to, and just want to make it more like the C# or Java they are comfortable with, could leave us with the very worst of all worlds.

    This resonates a lot with me.

    I do use type hints in most of my new Python code, largely for documentation purposes. I don’t run a type checker or enforce any sort of “static typing”. Most of the non-documentation utility I find in type hints comes from runtime things: libraries like Pydantic which can automatically derive validation and serialization rules just from declaring a list of fields with type hints.
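    As an illustration of that runtime utility, here is a toy, stdlib-only sketch of deriving validation from declared fields. Pydantic does this far more thoroughly (coercion, nested models, serialization); the `User` class and `validate` helper are made-up names for illustration:

```python
import dataclasses
from typing import get_type_hints

@dataclasses.dataclass
class User:
    id: int
    name: str

def validate(obj: object) -> None:
    """Check each declared field against its type hint at runtime."""
    for field, expected in get_type_hints(type(obj)).items():
        value = getattr(obj, field)
        if not isinstance(value, expected):
            raise TypeError(
                f"{field}: expected {expected.__name__}, got {type(value).__name__}"
            )

validate(User(id=1, name="alice"))   # fine
# validate(User(id=1, name=2))       # would raise TypeError
```

    The point is that the hints are ordinary runtime objects you can introspect, no checker required.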

    And as Luke has mentioned before, and as I’ve explained here before, Python’s type hints as enforced by checkers really do represent a separate language with significantly different semantics (and I have a litany of other complaints about the Python type-hinting ecosystem that maybe I’ll write up one day). I understand a lot of people feel “forced” to write Python and don’t like dynamically-typed languages, but the solution to that is to get comfortable with dynamic typing, not to try to change Python until it looks like the statically-typed language you would have preferred.

    1. 1

      I spent a whole hour today trying to type check a class decorator, unsuccessfully, until I realized that, since I was already using a slightly less strict version of --strict for mypy, I could just not add type hints at all and everything would still work and type check outside of it.

      I like types, they are good, and help me catch bugs before even running things. But Python’s type system is really not all there once you start to even approach any level of dynamic shenanigans.

      You can type callbacks, and do some cool shit with protocols, but metaclasses and user defined generics have been a world of pain, in my experience.
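      For what it’s worth, the parts that do work well look something like this (a sketch; all the names are invented for illustration, and a checker like mypy should accept it):

```python
from typing import Callable, Protocol

# A typed callback: any function from int to str satisfies this alias.
Formatter = Callable[[int], str]

def render(n: int, fmt: Formatter) -> str:
    return fmt(n)

# A structural Protocol: anything with a matching close() method
# conforms; no inheritance required.
class Closeable(Protocol):
    def close(self) -> None: ...

def shutdown(resource: Closeable) -> None:
    resource.close()

class FakeSocket:            # never mentions Closeable, still conforms
    def __init__(self) -> None:
        self.closed = False
    def close(self) -> None:
        self.closed = True

sock = FakeSocket()
shutdown(sock)                        # structural match, no subclassing
print(render(3, lambda n: "*" * n))   # ***
```

      It’s once you go beyond this, into decorators that change class shape or hand-rolled generics, that the pain starts.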

      1. 1

        But Python’s type system is really not all there once you start to even approach any level of dynamic shenanigans.

        I was going to complain that it isn’t even very good at some static typing shenanigans, but it seems like they might have added support for recursive types recently. I might have to try it out again.
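        If anyone wants to try it, a recursive alias along these lines is the usual test case; as far as I know, mypy only started accepting the self-reference by default around version 0.990, where older versions rejected it:

```python
from typing import Union

# A recursive alias for JSON-like data: the alias refers to itself.
Json = Union[dict[str, "Json"], list["Json"], str, int, float, bool, None]

def depth(value: Json) -> int:
    """Nesting depth of a JSON-like value, walking the recursive shape."""
    if isinstance(value, dict):
        return 1 + max((depth(v) for v in value.values()), default=0)
    if isinstance(value, list):
        return 1 + max((depth(v) for v in value), default=0)
    return 0

print(depth({"a": [{"b": 1}]}))  # 3
```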

        1. 1

          Huh, I’d have thought that was the same as a forward reference, which has been around for a while. Seems like it isn’t, or at least it’s a special case special enough to require specific handling.

      1. 3

        +1 for Data and Reality, but the first edition is better, imo. IIRC, the second edition is posthumous, with a few chapters removed, and interspersed with comments from another author, which I found more distracting than enlightening. Anyway, a great read, highly recommended, and mandatory if you’re into data modeling!

        1. 7

          IIRC, the second edition is posthumous, with a few chapters removed, and interspersed with comments from another author, which I found more distracting than enlightening.

          You’re thinking of the third edition. As Hillel wrote in that newsletter post, the second edition was by Kent only.

        2. 2

          This is wildly off topic, but I guess I’m going to shoot my shot.

          Once more: we are not modeling reality, but the way information about reality is processed, by people. — Bill Kent

          I’ve been wondering for a while if the confusion about quantum physics in public discourse is because quantum physics is about modeling physics when operating at scales where measurement affects what is measured and therefore observer is inextricably linked to observation.

          Is the probabilistic nature of quantum physics just modeling that accounts for the large size of the tools of observation relative to the object of observation? (E.g., when you observe a billiard ball with light, the mass of the light is far smaller than the mass of the billiard ball, so it doesn’t alter the speed or positioning of the ball. If you observed a billiard ball with another billiard ball, it would affect the speed or positioning of the observed billiard ball.) Or is there something more fundamental about the nature of the model that I’m misunderstanding? Does “objective reality” exist beyond the capabilities of the model?

          (That last question is squishy as hell.)

          Nobody I’ve talked to who’s into quantum physics seems to know much about it except as a vehicle to support their brand of metaphysics, and I don’t know how to parse what you find on the internet. (Of course, the only person I personally know whom I consider qualified to speak on the subject, a graduate-level physics student, said she really didn’t know that much about it.)

          Also, I’m semi-afraid I won’t understand the answers that I receive.

          1. 2

            Coincidentally enough, I have a degree in physics, though I haven’t looked at it in a while, so this answer is based on a nine-year-old memory:

            It’s not a measurement error, the information literally doesn’t exist at that level of granularity. At a quantum level, particles are also waves, right? The energy of a wave correlates with its frequency, while the energy of a particle is in its kinetic energy, which correlates with momentum. The position of the particle correlates with the peaks and troughs of the wave: if the wave peaks at a certain point, there’s a higher probability the particle “is there”.

            Now, to have a precise frequency, you need a lot of wavelengths, so there’s a lot of places the particle can be. If you add together lots of different waves with different frequencies, you can get a sum wave with a single enormous peak, and the particle is localized. But now there’s many different frequencies, so the kinetic energy is uncertain.
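            You can see that trade-off numerically: summing many frequencies cancels everywhere except one spot. This is only a rough sketch, not a proper wave-packet calculation:

```python
import math

# 100 frequencies from 0.1 to 10: a wide spread in "momentum".
ks = [k / 10 for k in range(1, 101)]

def amplitude(x: float) -> float:
    """Superpose equal-weight cosine waves and normalize."""
    return sum(math.cos(k * x) for k in ks) / len(ks)

print(amplitude(0.0))   # exactly 1.0: every wave peaks together here
print(amplitude(5.0))   # near zero: crests and troughs cancel
```

            The broader the spread of frequencies you add, the sharper the peak, which is the uncertainty trade-off in miniature.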

            Mathematically, any physical observation corresponds to a transformation of the wave vector. And the position and momentum transformations don’t commute, so mathematically measuring position-then-momentum must give a different result than measuring momentum-then-position.
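            Written out, the non-commutation is the canonical commutation relation, and the bound follows from the standard Robertson inequality (textbook material, not specific to this thread):

```latex
[\hat{x}, \hat{p}] \;=\; \hat{x}\hat{p} - \hat{p}\hat{x} \;=\; i\hbar
\qquad\Longrightarrow\qquad
\Delta x \,\Delta p \;\ge\; \tfrac{1}{2}\,\bigl|\langle[\hat{x},\hat{p}]\rangle\bigr| \;=\; \frac{\hbar}{2}
```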

            It’s all weird and messy, I know, and I’m not doing a good job explaining it. The important thing is that this all holds regardless of how we measure it. And it leads to all sorts of weird consequences like quantum tunneling: if you put a barrier next to a particle and measure the momentum very precisely, the position can get uncertain enough that the particle could be on the other side of the barrier.

            1. 1

              First off, thank you for taking the time to respond to this! I don’t think your reply was weird or messy at all. If you want weird and messy, keep reading!

              Now, to have a precise frequency, you need a lot of wavelengths, so there’s a lot of places the particle can be. If you add together lots of different waves with different frequencies, you can get a sum wave with a single enormous peak, and the particle is localized. But now there’s many different frequencies, so the kinetic energy is uncertain.

              This is great and I think it is core to my question. You are talking about how there is a probabilistic way to calculate the frequency, which, if I’m understanding correctly, means you can determine where the particle is as it is traveling down the path of the wave. So this is a model of the behavior of the particle.

              However, it makes intuitive sense to me that the actual, objective location of any particular particle isn’t probabilistic. (It might, however, be completely unknowable.) It just exists at some point in space at some particular time. Instead, we are using the model as a way to understand the behavior of a particle. Which is why the quote above sparked my question: are we modeling reality or the way we think about reality?

              You also said this:

              It’s not a measurement error, the information literally doesn’t exist at that level of granularity.

              Which might imply that my intuition is wrong and at the physical level particles fundamentally don’t behave the way I think they should. Or maybe you mean that the information is unknowable because of how we model the problem.

              And it leads to all sorts of weird consequences like quantum tunneling: if you put a barrier next to a particle and measure the momentum very precisely, the position can get uncertain enough that the particle could be on the other side of the barrier.

              Again, is this a problem with the model or with reality? Are we essentially dealing with mathematical errors in the model that build up, while the particles actually have smooth trajectories in space? Or is there something more fundamentally random occurring?

              I know that at a certain point we are dealing with questions so far beyond concrete human abilities that as far as we can tell the model is reality so maybe this is really a philosophical point. Furthermore, I hope that my line of reasoning doesn’t feel totally unreasonable and unsatisfiable.

              Mathematically, any physical observation corresponds to a transformation of the wave vector. And the position and momentum transformations don’t commute, so mathematically measuring position-then-momentum must give a different result than measuring momentum-then-position.

              That’s a neat detail!

              1. 1

                However, it makes intuitive sense to me that the actual, objective location of any particular particle isn’t probabilistic. (It might, however, be completely unknowable.) It just exists at some point in space at some particular time. Instead, we are using the model as a way to understand the behavior of a particle. Which is why the quote above sparked my question: are we modeling reality or the way we think about reality?

                That’s the crazy part: when it’s not being interacted with, the particle is a wave. The particle is nowhere and the wave is everywhere. If something, anything, then interacts with the wave, it collapses back into being a particle, with either a more-defined position and a less-defined velocity or a more-defined velocity and less-defined position. It’s weird and wonderful and makes no sense whatsoever, but it’s also what’s borne out in experiment after experiment. The universe just doesn’t care about what makes sense to humans.

                @wizeman already mentioned the double-slit experiment, where a series of individual particles diffracts just like waves do. But there’s other, wilder experiments, like quantum bomb-testing. If a live bomb is guaranteed to be set off by a single photon, you can cleverly exploit wave interference to determine whether a bomb is live or not without setting it off (50% of the time).

                People have tried to reconcile QM with human intuition, and a lot of those attempts fall under what we call Bell experiments, searching for more “rational” explanations. In the end the weird unintuitive QM rules always win.

                1. 1

                  However, it makes intuitive sense to me that the actual, objective location of any particular particle isn’t probabilistic. (It might, however, be completely unknowable.) It just exists at some point in space at some particular time.

                  Wouldn’t that contradict the observations of the double-slit experiment? Especially the ones that were performed by firing a single photon/electron/atom/molecule at a time.

                  1. 1

                    I don’t think I made this completely clear before, but I don’t mean that something making intuitive sense to me makes it correct.

              2. 1

                Is the probabilistic nature of quantum physics just modeling that accounts for the large relative size of the tools of observation relative to the object of observation? … Does “objective reality” exist beyond the capabilities of the model?

                This is dealt with by the field ‘foundations of quantum mechanics,’ and the experiments are fascinatingly subtle. It turns out that not only is there no such thing as negligible measurement, there isn’t a value of an observable beforehand. We generally get at that via measurements on entangled systems that we have manipulated in some clever way.

                If you really don’t want probability in the underlying theory, the usual route is to go to Everett’s interpretation, which does away with a classical observer entirely. In that case the random chance we see is because we perceive ourselves as on one lobe of a bifurcating wave function.

                1. 1

                  and the experiments are fascinatingly subtle.

                  Hence my fear of not understanding the replies!

                  If you really don’t want probability in the underlying theory, the usual route is to go to Everett’s interpretation, which does away with a classical observer entirely. In that case the random chance we see is because we perceive ourselves as on one lobe of a bifurcating wave function.

                  Oh, no! My fears have been realized!

                  My issue isn’t really with having probability in the theory.* It’s that when people talk about quantum physics I can’t tell if they are talking about the model of quantum physics or how the world actually works. I know that might sound crazy: theory is supposed to be about how the world works! But sometimes theory is an incomplete picture of how the world works. It’s a particular view that solves particular problems. That’s not necessarily how theory is always talked about and it may be completely inaccurate here.

                  You have all sorts of pop sci articles that claim that objective reality doesn’t exist. I can’t tell how seriously I should take such claims. Are they misinterpreting experiments that demonstrate the defects of the model or is there actually a deeper more fundamental question being raised here?

                  I don’t want to sit here and ignore what might be very fundamental deviations from how I view the world, but I also feel the need to clarify how I should interpret these claims before accepting them.

                  • I don’t really have an issue here besides my own ignorance.
                  1. 2

                    talking about the model of quantum physics or how the world actually works

                    Given we haven’t (to my knowledge) got any single theory that describes all observed phenomena (from small to large scales), I assume that anything other than experimental results is “about the model” (which is not to dismiss it!).

                    1. 1

                      I read a bit of the quantum foundations Wikipedia article. It seems like all of this is still under debate and I’m not crazy.

                      From the wiki page:

                      A physical property is epistemic when it represents our knowledge or beliefs on the value of a second, more fundamental feature. The probability of an event to occur is an example of an epistemic property. In contrast, a non-epistemic or ontic variable captures the notion of a “real” property of the system under consideration.

                      There is an on-going debate on whether the wave-function represents the epistemic state of a yet to be discovered ontic variable or, on the contrary, it is a fundamental entity.

                      Thank you @madhadron for giving me the right field to look under. Thank you @hwayne for entertaining my questions. Apologies to @teymour for so thoroughly derailing the conversation.

                    2. 1

                      The pop-sci articles are hard to interpret because they’re trying to bring something back to colloquial English that is deeply unfamiliar. But the “objective reality does not exist” parts are some of the experimental facts that any theory has to account for, rather than quirks of a particular theory.

              1. 3

                The PL zoo has implementations of different paradigms written in OCaml:


                1. 7

                  For examples of these not mentioned: Erlang has symbols called atoms, VB.NET has date literals (and XML literals).

                  1. 7

                    Erlang also has pattern matching over binaries. I don’t want this as a language feature, but I do want sufficient language features to be able to implement this as a library feature. I hate writing any low-level network or filesystem (or binary file-format) code in non-Erlang languages as a result. This is especially annoying for wire protocols that have fields that are not full bytes and most-significant-bit first. Erlang handles these trivially, C forces you to experience large amounts of pain.
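                    For contrast, here is roughly what pulling MSB-first, sub-byte fields out of a header looks like in Python, by hand with shifts and masks. The 4/4/16-bit layout is a made-up example, not any real protocol; in Erlang the whole thing is a single binary pattern, something like `<<Version:4, Flags:4, Length:16>> = Data`:

```python
import struct

def parse_header(data: bytes) -> tuple[int, int, int]:
    """Extract a 4-bit version, 4-bit flags, and 16-bit length,
    most-significant-bit first."""
    first, length = struct.unpack(">BH", data[:3])  # big-endian byte + u16
    version = (first >> 4) & 0x0F   # top 4 bits of the first byte
    flags = first & 0x0F            # bottom 4 bits
    return version, flags, length

print(parse_header(bytes([0x45, 0x01, 0x2C])))  # (4, 5, 300)
```

                    And this is the easy case; once fields stop aligning to byte boundaries at all, the Python version degenerates into a pile of shift bookkeeping that the Erlang pattern never needs.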

                    1. 3

                      Yes - binary pattern matching is pretty much the biggest thing I miss on anything not BEAM. There’s all sorts of little touches to make it error-free too, like the endianness notation and precise bit alignment.

                    2. 4

                      Why stop there? In Common Lisp reader macros allow you to pretty much do whatever you want! Want a date literal? Cool. Want to mix JSON with sexpressions? No problem! Want literal regex? Yup, can do that too.

                      1. 4


                        I guess you meant s-expressions, cough ;)

                        1. 2

                          yes, of course. Thanks for being pedantic.

                          1. 1

                            I think it was a kebab case joke?

                            1. 5

                              I don’t think there’s a joke in there. s-expressions is the way it’s spelled. The hyphen emphasizes “Ess expressions” over “sex pressions”

                              1. 1

                                Now who’s being pedantic?

                                (The article has a whole section on kebab case…)

                                1. 2

                                  Now I’m confused! :)

                                  If you were joking, I didn’t pick up on it. I wasn’t trying to be pedantic, just offer the background of why it’s s- and not just plain s.

                      2. 3

                        Examples of symbols:

                        • As the article mentioned, Ruby has symbols: :example
                        • As you mentioned, Erlang has atoms: example
                        • Elixir has atoms: :example
                        • Clojure
                          • has keywords, which are generally used as keys within maps: :example
                          • has symbols, which generally refer to variables when metaprogramming: 'example
                            • Many other Lisps such as Scheme and Common Lisp have symbols and use the same syntax for them.
                        1. 1

                          Clojure also has Symbols, IIRC called keywords (I guess because they are most often used as keys in mapping data structures).

                          1. 3

                            They’re also called keywords because Clojure has a different construct for symbols. (The subtle distinction between when to use a keyword and when to use a symbol is a common “discussion” point with new adopters.)

                          2. 1

                            There was an attempt to add XML literals and XML querying syntax to JavaScript (the E4X syntax), but it never got wide enough support.

                            1. 3

                              Scala has XML literals. I imagine this is the PL equivalent of seeing someone’s old picture from high school.

                          1. 6

                            If you like the Frink date literal, you may also like Elixir’s sigils. It’s used for date, time, regex and more. You can even create your own custom sigils.

                            1. 4

                              Thanks. I like sigils better than Lisp reader macros, because you can parse a sigil using a context free grammar, without executing code from the module that defines the sigil. The ability to parse a program without executing it is valuable in a lot of contexts.

                            1. 7

                               I kind of wish more people would take a computer science degree. These rants by ultra-minimalists/retro-computing fans often sit in a weak context, built on a bit of a scraped-together notion.

                               Not leading with “Why portable C and an abstraction layer over the non-portable memory bits is definitely wrong” is kind of a demonstration of the entire problem with this article. Because that’s actually the real answer to the problem: write a stable compiled program in a stable language that uses a cross-platform, retargetable library. Example: I could take a program I wrote against such a library in 2001, recompile it, and it’d work. It’s a solved problem, if you’re a developer with an interest in it.

                               I also note that entire episodes of the original Star Trek were about this concept:

                              There was a time when computers were super playful, but now they feel cold, and have been weaponized against people.

                              because the mythologies induced by, e.g., What the Dormouse Said (a good book when dosed with other histories) are not fully representative of reality.

                               To the author, if you’re reading: you need to dig deeper into the contexts. You’re still at “I had a vague idea of what programming was”, only vaguely touching the context the technologies were created in and the social reasons they were formed the way they were.

                              1. 2

                                This comment saddens me. From where I’m sitting, it amounts to calling a self motivated person ignorant, calling their project pointless, and telling them to do better. There is constructive criticism here, but it is sandwiched by less helpful comments.

                              1. 2

                                Wow look at how many comments are in this thread. Go criticism really strikes a nerve in some people, eh? As far as career-promoting blog posts are concerned, this is state-of-the-art. I’m sure OP will get a lot of clout from this.

                                I wonder how this can be reproduced? Because this clearly isn’t like, some air-tight case against Go. And OP is clearly heavily biased towards Rust which has its own share of problems. I won’t enumerate the reasons why because I don’t wanna debate within an upvote-ordered comment tree (pretty bad medium for deep discussion IMO).

                                Is this guy famous or something? Or is this type of rant simply a honeypot for commenters?

                                1. 2

                                  I shared this article because the content is interesting and germane, and the author is reputable. I suppose, if I have an ulterior motive as a language designer, that I shared it because I think that it is important to keep in mind that most programming languages are completely unsuitable for serious work; examples of bad programming-language design are important to discuss.

                                  I am always amazed at exactly how vociferous the folks are when they choose to defend programming languages; our tribalism is intense and we have no idea how to moderate ourselves.

                                  1. 3

                                    Not to mention that every language has a “sweet spot” — a product-market fit if you will — but there are always people who want to treat their favored language as universally superior. Go has an unusually pure cultural and technological sweet spot, and if it works for you to live in that spot, it is actually quite fun. But it’s far from universally applicable, from both a technical and personal perspective.

                                    1. 2

                                      What is “serious work”, though?

                                      But anyway, I think people are not being vociferous here, the article seems to be. To me at least.

                                    2. 1

                                      As far as career-promoting blog posts are concerned, this is state-of-the-art. I’m sure OP will get a lot of clout from this.

                                      How? Nobody who decides over people’s careers likes a troublemaker, not even when he causes trouble in a blog post about a programming language.

                                      1. 1

                                        Go criticism really strikes a nerve in some people, eh?

                                        I’m going to link my comment from elsewhere in the thread:


                                        1. 1

                                          Go criticism really strikes a nerve in some people, eh?

                                          I don’t think it’s Go criticism, it’s just unfounded, or at least, unexplained criticism. I’m not even using Go, and I think the article is part-rant, part-troll, because it’s not making sensible arguments to me.

                                        1. 6

                                          Others use it, so it must be good for us too

                                           I agree with that statement, and I think it’s a lie people tell themselves… about Python.

                                           I also agree with it regarding Go. There are way too many people using it who simply favor very different languages. One can see that in how much Go code coming out of Google is written in C++/Java/Python style. Folks, don’t take Go jobs when you prefer to design software differently. You’ll be angry that Go isn’t X, and Go people will be angry that the code isn’t idiomatic.

                                           On a related note: if you want to program in a language because of Google, you should do Dart and be happy with it. (/sarcasm)

                                          Everyone who has concerns about it is an elitist jerk

                                           If you feel the need to write about some language you simply don’t write, and whose design choices you don’t understand or agree with, you aren’t elitist or a jerk; you just seem to be really bored and unsure about what to do with your time.

                                          Its attractive async runtime and GC make up for everything else

                                           That’s a really odd thing to say, in my opinion. There are quite a few other languages with attractive concurrency, also in the functional-programming realm, for example. I don’t really know many people who use Go for these reasons alone. Sure, they pushed the concurrency/parallelism part a lot initially, maybe too much, but people stopped that because everyone overused it, applying it in cases where it doesn’t make sense or is even counter-productive.

                                          Every language design flaw is ok in isolation, and ok in aggregate too

                                           I think something like this is often mentioned. It’s a general theme with languages that don’t try to copy C++ or Java (with nicer syntax or something). Having different viewpoints/design decisions from you, or from the language you like, isn’t a design flaw. It’s the same with operating systems: they have different priorities and different design decisions, or they are just yet another “the next Linux/Windows” with little to no innovation.

                                          We can overcome these by “just being careful” or adding more linters/eyeballs

                                           I think that’s yet another topic that comes with using a language you don’t want to be using. A lot of the “odd things about Go” articles mention something that is either very obvious even to someone just starting out (after reading Effective Go or something written by the initial authors) or something highly un-idiomatic.

                                          Because it’s easy to write, it’s easy to develop production software with

                                           It is, compared to most other languages. Production software is a lot harder than people tend to admit, which can be seen in pretty much every postmortem out there. When you develop something and it’s not a success, or you don’t have users communicating potentially big issues, you might not even know about the issues.

                                          Because the language is simple, everything else is, too


                                          We can do just a little of it, or just at first, or we can move away from it easily

                                          Ah. Yup. Prototypes going into production. Not sure how it relates to Go.

                                          We can always rewrite it later


                                          Go is closer to closed-world languages than it is to C or C++. Even Node.js, Python and Ruby are not as hostile to FFI.

                                           And everywhere it comes with drawbacks, like the software having issues it couldn’t have in the interpreted language, and libraries not compiling, so everyone is forced to use containers as a hacky workaround. Also, all of these languages have lots of libraries proudly declaring “Pure Python/Ruby/JavaScript” as one of their main features. Don’t act like these issues don’t exist.

                                          Evidently, the Go team didn’t want to design a language.

                                           Or simply not a language the author would want to use. The team also did Limbo and Newsqueak, languages with similar design choices that didn’t end up being popular because there was no big-name company behind them.

                                          And so they didn’t. They didn’t design a language. It sorta just “happened”.

                                          Because it needed to be familiar to “Googlers, fresh out of school, who probably learned some Java/C/C++/Python” (Rob Pike, Lang NEXT 2014), it borrowed from all of these.

                                          Isn’t that basically the story of Python, C, Linux, etc.?

                                           Something I see in a lot of these articles, as I mentioned, is doing very strange things and then blaming the language. It’s the classic “JavaScript has === for comparison” argument, where one simply disregards the language, acts like it’s a different one, and complains that it’s not. I think that’s a big problem with language design. So many new languages are C++ with the syntax for something odd improved and some cruft removed. That’s fine, but in the end you are still doing C++. Even more often, that cruft is replaced with some other cruft very, very rapidly.

                                           For now, Go has mostly avoided that, even though I think Google is pushing in that direction. Google seems to be the place where everyone would rather use Java or C++. Not to bash them, but they pretty much made the opposite design decisions from everything Go did, so it’s gonna be hard.

                                          I get where that sentiment comes from though. Sometimes I feel like the only person developing in Go despite Google and not because of them. It’s really annoying that they seem to have taken a stronger grip on the project, which can most prominently be seen comparing with the old and some library stuff.

                                           But basically, that’s the response the author expected, minus the “large company” argument, which is silly in every context. “Large company” can also mean “they have three teams playing fire brigade for that project” and “nobody dares to write a better solution, because it might break something”.

                                          If the author reads this: The link to the language design isn’t working because of a bad certificate. Nice website though! :)

                                          1. 10

                                            you just seem to be really bored and unsure about what to do with your time.

                                            Alternative explanation: sometimes people try to vent complaints when they are exposed to something that they don’t like. When the complaints are rebuffed repeatedly they can build up and become more frustrating. The result is that the person isn’t just responding to the object of criticism, but also to the culture around it. People react strongly when they feel like they aren’t seen or heard.

                                            The Go community comes off a little hard headed sometimes and I think this draws out sharper criticism.

                                            1. 4

                                              I get that. However I don’t think it makes sense to criticize Go (or any language) for not being like another.

                                              It happens way too often, and what that leads to is that over time languages turn into the same kind of mess with tons of deprecated old-style features.

                                              That’s, I think, why things are seen as hard-headed. Go developers (as in users) who actually like the language don’t want it to change; they like the trade-offs made. They recognize that the language has sharp edges, especially when using it wrong, but they like the fact that this makes it possible to be aware of all the sharp edges.

                                              In many languages there are fewer sharp edges on the surface, but you have to be aware of huge amounts of subtle details and, importantly, also their history. This means that even after years of development you will still learn new things about the language itself.

                                              I actually think that’s why some people enjoy very old and minimalist languages. I think it’s also why developers actually enjoy new languages. Not because they bring that cool new feature, but because they don’t have that history of language changes you kind of have to know to really grasp the language. And that history is being created by good and smart ideas on how to make the language better.

                                              Features, even ones that make something easier, do add complexity. There are many talks about Go being “finished” from years ago. This is kind of a way to say that the authors won’t change it. That’s a feature of the language.

                                              It’s fair to say you don’t like it, and it’s fair to say whatever about the language, but wouldn’t a better way be to, for example, use C# when that’s what you want out of a language, instead of complaining that Go isn’t C#?

                                              I think that hard-headedness should show that these things are the way they are on purpose.

                                              Personally I’m happier about “Go sucks because X, so don’t use it” articles than “Go sucks, so it should be changed” ones, because the first might tell you not to use it if you don’t like the design decisions, whereas the latter wants to turn the language into another.

                                              I think there are enough languages out there that would better suit the needs of the author of the article. This is what I meant by the “bored” stuff you quoted. Work with languages you like more, and if your boss forces you to use a language you fundamentally disagree with even after taking an actual close look at it, consider changing to something you enjoy or potentially consider making your own thing. LLVM is great, everything is an HTTP API, WASM allows for interfacing with others to some degree, so it’s relatively unlikely that you are really forced to use a certain language.

                                              Sorry that was a lot of words, I just hope it helps to not misinterpret what I’m trying to say.

                                              1. 2

                                                This is almost exactly what I feel about the article: the author is bored. You raised good questions about every argument there. Not saying Go is or isn’t this, just pointing out the things about the article that are weird.

                                                The article may have a certain hidden purpose, but it’s really not showing what it’s claiming to show.

                                            2. 2

                                              Others use it, so it must be good for us too

                                              I agree with that statement, and I think it’s a lie people tell themselves… about Python.

                                              And Rust. And C. And C++. Etc…

                                              1. 3

                                                Indeed. I really only used those languages as example. They are matters of taste, philosophy, design choices and even culture to me. People are different in how they think and approach stuff so of course they’ll use different programming languages to work with even when the goals are identical.

                                                Just like how companies are successful with different ways of managing tasks, doing business, communicating, monetization models, editors, office suites and so on. And time and luck also play roles. I think people very much overestimate the role of technologies/products used for the success of a product.

                                            1. 1

                                              Does it have real enums and pattern matching yet?

                                              1. 3

                                                What do you mean by “real” enums? How are Nim enums deficient? (I ask as someone with only passing knowledge of the language.)

                                                1. 2

                                                  Basically the “sum type” or “tagged union done right” style used by OCaml, Haskell, F#, Rust, and others. There’s nothing in them you can’t do with a C-style enum and union, but they are incredibly handy.

                                                  1. 2

                                                    That’s not a “proper enum”. An enum is what C and Nim call an enum, the thing you mean is (as you now correctly said) a sum type.

                                                    Anyway, Nim has object variants.

                                                    1. 1

                                                      Rust calls it an enum, so that’s the first connection my brain makes. It has also been called an algebraic data type. “Sum type” is a term that only makes sense if you know the math and the particular analogy the term comes from. Long story short, there are many names for it and they all suck ass.

                                                      Object variants do appear to be more or less what I want, done less well, so it has yet another bad name for the same feature. No pattern matching, though, so it’s like peanut butter missing the jelly. I’m actually quite fond of Nim, but I am always amazed by the ability of language designers making imperative-descended languages to entirely pass up some of the best innovations of functional language lineages.

                                                      1. 4

                                                        Enum stands for “enumerated”, a bunch of values associated with integers. It doesn’t make sense to use this name for sum types.

                                                    2. 2

                                                      Does the explanation in this article under caveats jibe with your critique?


                                                      Consider a slightly more complex sum data type in which more than one constructor carries a value, for example

                                                      data Pair = PI Int | PD Double

                                                      in our implementation it would correspond to:

                                                      /* Data constructor names */
                                                      enum pair_const {PI, PD};
                                                      /* Actual sum data type */
                                                      struct pair {
                                                              enum pair_const type;
                                                              union data {
                                                                      int i;
                                                                      double d;
                                                              } data;
                                                      };

                                                      This is not safe, nothing prevents us from accessing .d even if type is PI, because the fields of the union are always accessible. This is what distinguishes our implementation from classic sum types in Haskell for example, in which they are safe.


                                                      Apologies for the formatting.

                                                      Thank you so much, @roryokane! You are too kind! Getting proper formatting on mobile was a huge pain. This looks so much nicer.

                                                1. 2

                                                  Every time I engage with Elixir/Erlang content it just makes me happy. I don’t feel like I get that from most tech.

                                                  More specific to this video, I’ve always thought that the preemptive multitasking model made BEAM particularly well suited to UI work. In Jupyter, you often wait for cells to complete. If it’s a large task this can be a major blocker. Here that seems to be less of a concern. Very remarkable.

                                                  1. 3

                                                    Happy holidays, folks! I hope the year is winding down nicely for everyone.

                                                    1. 2

                                                      Do you have any thoughts on command runners? E.g., do you like to use Makefiles, Justfiles, Invoke, or just run the commands raw?

                                                      1. 1

                                                        As my other comment points out, I’ve historically used tox for my own open-source projects, but I’m currently exploring other options.

                                                        I’ve used plenty of other things at various jobs, and generally go with “whatever the team likes best”. For the team I currently work with, that’s Dockerizing everything, with a Makefile to drive it.

                                                        1. 1

                                                          I thought tox was more oriented around testing. Does it also allow you to run commands for deployments or other project related tasks?

                                                          1. 2

                                                            You can use tox to automate any task you like, really. It has a lot of testing-oriented features because that’s a lot of what people want to automate, but it isn’t limited only to testing.
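
For example, a minimal tox.ini sketch with a non-test task env alongside the usual test env (env names, deps, and commands here are just illustrative):

```ini
[tox]
envlist = py311

[testenv]
deps = pytest
commands = pytest {posargs}

# Not a test at all -- just an automated task you can run with `tox -e docs`.
[testenv:docs]
deps = sphinx
commands = sphinx-build -b html docs docs/_build
```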

                                                            For my projects that have used tox, I don’t have automated “deploy” because the “deploy” is publishing a new version of the package to the Python Package Index, and A) I always want to decide when and whether to do that, and B) am not going to hand over my PyPI API token to any CI tool, because that could cause some havoc if the CI tool ever gets breached.

                                                            At work, our deployments are set up in the CI/CD tool using its configuration mechanism. Local dev tasks are all driven by the Makefile.

                                                      1. 2

                                                        I’ve always found this concept very interesting, but I am not sure about the practicality of it.

                                                        1. 3

                                                          While I haven’t dug that far into it, I believe it’s the basis of the very successful (though unfortunately proprietary) implementation of GraalVM, which provides significant speedups for most JVM languages.

                                                          1. 3

                                                            Yup, also PyPy, which people use in production! I think PyPy was the first use of Futamura projections “in production” – I’d be interested in any arguments against that. [1]

                                                            And Babashka the Clojure scripting tool uses Graal / Truffle, lots of others here:

                                                            I think the main downside is that it means you have compilers in your binary, which makes it like 50+ MB

                                                            For Oil I found these ideas very appealing / interesting, but instead we’re just doing a straightforward C++ translation, with static typing (e.g. JVM bytecode is dynamically typed, even though Java is statically typed!)

                                                            So we use the C++ compiler, and the binary ends up more like 30x faster, not 3x faster. And it’s 1.2 MB, having no compiler, and much more portable to different machines, and (way) faster to compile. (Also we couldn’t have used it without a metalanguage change, i.e. it would be RPython and not Python)

                                                            Although I’m sure there are differences in how say Babashka is built and how PyPy is that I probably don’t understand

                                                            [1] edit: I’ll have to read this again, I’m probably mixing a few things up -

                                                            Partial evaluation is an old idea in computer science. It’s supposed to be a way to automatically turn interpreters for a language into a compiler for that same language. Since PyPy was trying to generate a JIT compiler, which is in any case necessary to get good performance for a dynamic language like Python, the partial evaluation was going to happen at runtime.

                                                            1. 3

                                                              And Babashka the Clojure scripting tool uses Graal / Truffle, lots of others here:

                                                              I think Babashka explicitly doesn’t use the Truffle framework, but it does use GraalVM.

                                                              Combining AOT and Interpretation

                                                              It could be interesting to explore Clojure on Truffle or a Clojure run inside Java on Truffle, but the combination of pre-compiled libraries and code running inside a Truffle context while having good startup time poses its own challenges. A benefit of how SCI is currently set up is that it’s easy to combine pre-compiled and interpreted code. As SCI is implemented in Clojure it was also easy to support ClojureScript, which is the dialect of Clojure that compiles to JavaScript. SCI on JavaScript enabled writing a Node.js version of babashka, called nbb and a browser version called scittle.


                                                              1. 1

                                                                So, with PyPy, can I feed it an interpreter to get a compiler back? Where do I read more about how to do that in Pypy?

                                                                1. 3

                                                                  I remember I tried this many years ago. Not sure it will run now, but the idea is, write BF interpreter in RPython, and get a faster interpreter:


                                                                  Then add some kind of main() harness to get a JIT compiler
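
For reference, the interpreter in those tutorials is shaped roughly like this — plain Python here for illustration; the RPython version is the same loop plus JitDriver hints so the translator knows what to trace:

```python
def bf_run(program, input_bytes=b""):
    """Minimal brainfuck interpreter sketch (not the actual tutorial code)."""
    tape = [0] * 30000
    out = []
    pc = ptr = inp = 0
    # Precompute matching brackets so the main loop stays simple.
    jumps, stack = {}, []
    for i, c in enumerate(program):
        if c == "[":
            stack.append(i)
        elif c == "]":
            j = stack.pop()
            jumps[i], jumps[j] = j, i
    while pc < len(program):
        c = program[pc]
        if c == ">": ptr += 1
        elif c == "<": ptr -= 1
        elif c == "+": tape[ptr] = (tape[ptr] + 1) % 256
        elif c == "-": tape[ptr] = (tape[ptr] - 1) % 256
        elif c == ".": out.append(chr(tape[ptr]))
        elif c == ",":
            tape[ptr] = input_bytes[inp] if inp < len(input_bytes) else 0
            inp += 1
        elif c == "[" and tape[ptr] == 0: pc = jumps[pc]
        elif c == "]" and tape[ptr] != 0: pc = jumps[pc]
        pc += 1
    return "".join(out)

print(bf_run("++++++++[>++++++++<-]>+."))  # → A
```

The RPython tooling translates that whole loop into C and, given the JIT hints, generates a tracing JIT for it — that’s the “get a compiler from an interpreter” part.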


                                                                  It takes a long time to translate your interpreter, and you get a big mandelbrot fractal while you wait!

                                                                  I guess Graal could give you an AOT compiler from an interpreter (?), but I’m not sure if RPython has that

                                                                  I think both of them are very difficult if you’re not familiar with the codebase, it’s not really an “end user” thing

                                                                  They have done (a very complete) Python, Prolog, I think C extensions to some degree, a bunch of others, and even bridging languages like Graal does:


                                                                  FWIW I found that a nice benefit of generating interpreters is that you can “parameterize” it by the GC algorithm. GC and ref counting policies are both littered ALL OVER most interpreters and VMs. If you have a layer of indirection (a code generator), then you can test out multiple GC algorithms easily. Otherwise you are kind of stuck rewriting huge amounts of code. Oil is taking advantage of that.

                                                                  1. 1

                                                                    Also I just Googled and there are similar things for Graal:



                                                                    I don’t know if this code works/runs, and actually generates say a JIT or AOT BF compiler to x86, but I think it’s supposed to. (And I’d be interested if anyone knows or makes it work)

                                                                    Somewhat related line of work I looked at a few years ago:

                                                                    Again I think there are significant engineering downsides to using these extremely mathematical techniques, but it’s very fun and mind expanding

                                                                  2. 2

                                                                    The thing you’re looking for is RPython. There used to be example versions of RPython implementations of scheme, ruby, and a few other languages. I’m not sure that they still work, nor do I know if RPython exists at all as its own thing, or if it’s more fully integrated into a “PyPy is only Python” type effort.

                                                                    1. 3

                                                                      The Ruby implementation, Topaz, is still alive. I don’t personally use Ruby often enough to know whether Topaz is still compatible with typical libraries, though.

                                                                      I used RPython for Monte and Cammy. My Monte-in-RPython code is available, but probably not enlightening. My Cammy-in-RPython core looks a lot like the toy interpreters displayed in the RPython tutorials linked by @andyc, and is almost certainly more readable.

                                                                      In general, RPython has to be obtained from PyPy tarballs. In that sense, yes, RPython is tied deeply to PyPy. However, the rpython/ and py/ source code can be used without pypy/. For example, this Nix fragment pins a specific version of a PyPy tarball, and later in the same file we see that rpython/ is copied out of the tarball to be used on its own.

                                                                  3. 1

                                                                    Replying to myself for education – it does seem like partial evaluation / Futamura projections have problems in practice that lots of people hit the hard way, and that I didn’t quite understand. They are very elegant in theory but they say nothing about “quantities” or performance. That is, you could get your JIT or AOT compiler for free, but then you could have a huge amount of optimization to do (like years or decades).

                                                                    Some good comments on HN:

                                                                    I also read this last night and understood it better than the first time:


                                                                    In the end the whole thing was nixed by the fact that the staged interpreter had already become way more complex than the compiler I had written previously and that the improvements in compile time were more than lost by the slower run time.

                                                              1. 2

                                                                Two, it turns out that coherence isn’t just for show. If you call make_list_eq_thingy(IntEq) twice, you get two different Eq(List(I32)) values which, by ML’s type rules at least, are not the same.

                                                                Could this be solved with memoization? So long as a functor is pure, you memoize the result and return a cached implementation on a second invocation.
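
Concretely, I’m imagining something like this (Python stand-in, names hypothetical):

```python
from functools import lru_cache

# If the "functor" is pure, caching it means repeated applications return
# the *same* instance, so identity-based coherence checks would pass.
@lru_cache(maxsize=None)
def make_list_eq(elem_eq):
    # Stand-in for building an Eq(List(T)) dictionary from an Eq(T).
    return ("ListEq", elem_eq)

int_eq = "IntEq"
assert make_list_eq(int_eq) is make_list_eq(int_eq)  # one shared instance
```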

                                                                (Apologies if this is covered later. I am half way through the article, but I need to get to work. 😬)

                                                                Even though it’s originally inspired by Scala, the concept behind modular implicits is very simple.

                                                                The absolute shade.

                                                                1. 1

                                                                  Could this be solved with memoization?

                                                                  To some extent maybe, the section on inheritance deals with one way of reasoning about this. To me it seems like it’s a problem with a fair amount of odd little nooks and crannies to explore in terms of practical design space, most of which are yet unplumbed. ML people, at least the ones who write about type systems, seem to often be purists about “if this type is abstract, you never know anything about it.”

                                                                  The absolute shade.

                                                                  All in good fun. I’ve read a fair bit about Scala but never used it for anything nontrivial. :-)

                                                                1. 2

                                                                  Professionalization, for lack of a better word: tech workers became a flatter workforce rather than a couple thousand creative weirdos.

                                                                  a larger percentage of us…just don’t care to think about computers much.

                                                                  Glad the author was able to put into words what I couldn’t. Tech sometimes feels like it tries to be about everything except actually programming. You’re weird if you write things without 1000 dependencies, and there’s an army of people eager to bro-splain that you just invented your own framework when you wrote hello world. Rust is really cool now, because we all decided it was finally cool. (We’ve always thought Rust was cool, even when we were ignoring it.)

                                                                  It’s all somewhat alienating to me if you can’t tell. :) I mostly don’t pay much attention these days.

                                                                  1. 4

                                                                    In my experience if this is your life it’s time to get out of California Start-Up Country and work for a company that does something real and concrete for a change, instead of a company that builds frameworks for building frameworks for building frameworks. It’s not perfect, but it’s a start.

                                                                    1. 3

                                                                      Oh I have! More commenting on the online discourse being pretty shallow overall.

                                                                    2. 2

                                                                      This article was strangely comforting. I feel like I run into a lot of people with strong opinions, loosely held and it’s just exhausting. Even if you “win” an argument some times you end up hearing your own argument parroted back with higher intensity than you originally argued with. The even tone in this article is a welcome breath of fresh air.

                                                                      1. 2

                                                                        In my experience, this pushback mostly comes from insecure engineering managers that got sidetracked from programming into management mostly because they weren’t very good at it to begin with.

                                                                        Good managers are able to tell that an organization owning the code it operates is usually a net asset, and can simplify things a lot by only having the features that are actually needed and integrating straightforwardly with the already existing ecosystem.

                                                                      1. 5

                                                                        Instead of whittling buildings from living trees as real builders once did, we are reduced to merely assembling purchased wood and bricks.

                                                                        1. 7

                                                                          But you can build arbitrary things out of wood and bricks.

                                                                          What I see with the cloud and modern frameworks is that you can start with a canned architecture, and get something that’s roughly similar to what you want kinda quickly.

                                                                          But then you spend 90% of your time patching over the last 10%, and you never really get what you want. Instead you pass it on to the next team, which rewrites it on some newer non-composable abstraction, from the same vendor, or a different one. The abstractions eat up any hardware improvements, so that the same program has higher latency than it did 5 years ago.

                                                                          1. 2

                                                                            But the wood you are purchasing isn’t 2x4s. It’s just tree branches that you need to combine together. They don’t fit quite right so you need to stick some mud in between them so the wind doesn’t cut through the building.

                                                                            1. 8

                                                                              I wouldn’t even call it raw tree branches or 2x4’s. Those can be eventually fashioned into the shape you want, with enough work (and, in software, work can be automated!).

                                                                              I would say the analogy is closer to trying to build a house out of Ikea furniture parts. Those parts are great, if what you want to build is what the designers intended! But if it’s not (and it’s often not), then you’re stuck with hacks. After the hacks, the system continues to work poorly.

                                                                              Steve Yegge has a couple good analogies that get at the composition problem:

                                                                              Java is like a variant of the game of Tetris in which none of the pieces can fill gaps created by the other pieces, so all you can do is pile them up endlessly.


                                                                              In the cloud, a common pattern I see is “adding caches” for things that shouldn’t be slow in the first place. The caches patch over some problems, and add more.

                                                                              They leave big holes in correctness and performance, which are obvious problems we should be aware of. There are also highly non-obvious problems like metastable states:

                                                                              (i.e. the presence of a cache now means that restarting a stressed system does NOT cause it to recover. All cloud companies have complex software and processes to patch over this problem, implicitly or explicitly. On the thread about Twitter, I pointed out cloud systems have people turning cranks all day long, and SREs were (probably still are) the most numerous type of employee at Google)

                                                                              And Legos:

                                                                              With the right set (and number) of generically-shaped Lego pieces, you can build essentially any scene you want. At the “Downtown Disney” theme park at Disney World in Orlando, there’s a Legoland on the edge of the lagoon, and not only does it feature highly non-pathetic Lego houses and spaceships, there’s a gigantic Sea Serpent in the actual lake, head towering over you, made of something like 80 thousand generic lego blocks. It’s wonderful.

                                                                              Dumb people buy Lego sets for Camelot or Battlestars or whatever, because the sets have beautiful pictures on the front that scream: “Look what you can build!” These sets are sort of the Ikea equivalent of toys: you buy this rickety toy, and you have to put it together before you can play with it. They’ve substituted glossy fast results for real power, and they let you get away with having no imagination, or at least no innovative discipline.


                                                                              A major point of my posts on software architecture last year, which it took me many words to get around to, is that Unix is a language-oriented operating system, and that means it composes like a language.

                                                                              The cloud is not language-oriented, and doesn’t compose.

                                                                              And I think the “everything is text” problem is sort of a red herring (it’s a tradeoff/downside of the Unix style).

                                                                              I believe that we’re so focused on solving the I need types for fine-grained autocomplete problem that we’ve lost sight of the we’re writing way too much code that works poorly problem. One problem is local and immediate, while the other is global and systemic.

                                                                            2. 1

                                                                              We’ve been building from wood and bricks for the last few decades but these days we’re building prefabbed kitchens and bathrooms and jamming them together until the joists buckle and shipping that

                                                                            1. 7

                                                                              I am saddened, really.

                                                                              Whole swaths of our culture will go down the toilet

                                                                              “Write me a two page essay about …”

                                                                              “Write me a college entrance essay about …”

                                                                              “Write me a complaint letter about…”

                                                                              “Write a yearly review for this person…”

                                                                              “Summarize this super long chat, so I don’t have to read it. Are there any action items in there for me?”

                                                                              “Answer my emails”

                                                                              “Pick out the important stuff from my social media and summarize it for me”

                                                                              1. 9

                                                                                I imagine a future like that of Pixar’s Wall-E, without the sustainable lifestyle: the last oil well runs dry, the last solar cell’s efficiency drops to nil, and nobody knows anything except prompt engineering.

                                                                                1. 4

                                                                                  This morning I read about someone getting to the top of the leaderboard in advent of code with entirely AI generated code and it saddened me. I’m prone to melancholy and I already struggle with finding meaning in many tasks. It feels like the work I do might be entirely outclassed by these models in the future.

                                                                                  The only thing I can think to do is learn these tools to see what works and what doesn’t. The mystery of it might be more overwhelming than the reality.

                                                                                  1. 4

                                                                                    While it is certainly possible that programming may be the modern-day analog of the buggy whip, there is at least one way to take a good attitude here: the same kind of thing has happened with art in the past, when people thought their medium would be replaced by the photocopier, the audio recording, oil paint, new drawing tools, etc.

                                                                                    Artists have found ways for centuries to use the new tools to make things that they couldn’t make before.

                                                                                    Similarly, even though human chess players can be trounced by the best computers, people still play chess. The top players use a computer as a tool to explore variations and ideas that might be impossible to analyze on their own.

                                                                                    I saw a recent YouTube video where they took an artist and a non-artist and let them use an AI tool to create paintings. The artist clearly made better art using the tool than the novice.

                                                                                    1. 3

                                                                                      First off, thank you for trying to console me. I’ve been a bit down this week so I might be overly sensitive.

                                                                                      I was talking to my girlfriend earlier and the conclusion I’ve come to is that the future is rarely what the maximalists contend it will be. Probably best that I spend some time understanding how these tools work so I can work along their grain, instead of against it.

                                                                                1. 3

                                                                                  Anyone actually running this? Curious to hear experiences and get details of the setup at scale

                                                                                  1. 3

                                                                                    Yeah it’s a real pleasure! I’d suggest Fleet (FleetDM).

                                                                                    I’m using it via elastic-agent today but running into issues with the agent itself, not osquery.

                                                                                    One important thing to note is that you cannot use one config for all OSes. There are a few subtle differences, for instance in FIM (file integrity monitoring) between Windows and Linux.
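                                                                                    For reference, a minimal Linux-side FIM config might look roughly like this (the category names, paths, and query interval are invented for illustration). On Windows, file events come from the ntfs_journal_events table rather than file_events, which is exactly the kind of subtle difference that forces per-OS configs:

                                                                                    ```json
                                                                                    {
                                                                                      "schedule": {
                                                                                        "file_integrity": {
                                                                                          "query": "SELECT * FROM file_events;",
                                                                                          "interval": 300
                                                                                        }
                                                                                      },
                                                                                      "file_paths": {
                                                                                        "configs": ["/etc/%%"],
                                                                                        "binaries": ["/usr/bin/%%", "/usr/sbin/%%"]
                                                                                      }
                                                                                    }
                                                                                    ```

                                                                                    Depending on the osquery version, you may also need to launch the agent with file events enabled.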

                                                                                    Also check out File Carving for something almost completely undocumented and extremely interesting.

                                                                                    1. 1

                                                                                      Read about this on the orange site from one of yogthos’s comments. They also made a blog post and submitted it here. They may have some insights.

                                                                                    1. 14

                                                                                      Gleam is my favorite language that I haven’t actually tried yet (but I will as soon as I get home from my traveling).

                                                                                      I read the language tour on my phone one evening, and the next evening I was reading and understanding source code. That’s how approachable the language is!

                                                                                      use is interesting because it’s a fairly complex feature, but it opens up so many options (as shown in the link). At first I was skeptical, but I think I agree. It’s worth the added complexity.
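                                                                                      For anyone who hasn’t read the link yet: `use` takes everything after it in the current block and passes it as a callback to the function on the right of the arrow. Here is a sketch of the nesting it removes, written in Rust for familiarity (both functions are invented for illustration):

                                                                                      ```rust
                                                                                      // Stand-in for acquiring some resource and handing it to a callback.
                                                                                      fn with_connection<T>(f: impl FnOnce(&str) -> T) -> T {
                                                                                          f("connection")
                                                                                      }

                                                                                      // Stand-in for a second callback-taking API layered on the first.
                                                                                      fn with_transaction<T>(conn: &str, f: impl FnOnce(&str) -> T) -> T {
                                                                                          f(&format!("{conn}/transaction"))
                                                                                      }

                                                                                      fn main() {
                                                                                          // Without `use`-style sugar, callback-taking APIs nest:
                                                                                          let result = with_connection(|conn| {
                                                                                              with_transaction(conn, |tx| format!("ran query on {tx}"))
                                                                                          });
                                                                                          println!("{result}");
                                                                                      }
                                                                                      ```

                                                                                      In Gleam, the same two acquisitions could instead be written as two flat `use` lines, with everything below them becoming the callback bodies.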

                                                                                      To give two examples of where Gleam has made (imo) interesting design decisions to keep the language simple:

                                                                                      • There is no Option[T], only Result[T, E]. If you don’t want an error, just use Result[T, Nil]. Is it more typing? Sure, but now you only have one type instead of two.
                                                                                      • A type (think struct) can have multiple constructors, each with their own fields. This means they effectively double as enums. Have a look at the docs, they explain it better than I can.
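                                                                                      For readers who know Rust, a rough analogue of both points (my own sketch, not Gleam syntax; Rust’s unit type `()` plays the role of Gleam’s `Nil`):

                                                                                      ```rust
                                                                                      // 1. "No Option, only Result": a function whose only failure mode is
                                                                                      //    "nothing there" can return Result<T, ()> instead of Option<T>.
                                                                                      fn first_char(s: &str) -> Result<char, ()> {
                                                                                          s.chars().next().ok_or(())
                                                                                      }

                                                                                      // 2. A type with multiple constructors, each carrying its own fields;
                                                                                      //    in Rust terms, an enum with struct-like variants.
                                                                                      enum User {
                                                                                          Guest,
                                                                                          LoggedIn { name: String, admin: bool },
                                                                                      }

                                                                                      fn greeting(u: &User) -> String {
                                                                                          match u {
                                                                                              User::Guest => "hello, stranger".to_string(),
                                                                                              User::LoggedIn { name, admin: true } => format!("hello, admin {name}"),
                                                                                              User::LoggedIn { name, .. } => format!("hello, {name}"),
                                                                                          }
                                                                                      }

                                                                                      fn main() {
                                                                                          assert_eq!(first_char("gleam"), Ok('g'));
                                                                                          println!("{}", greeting(&User::Guest));
                                                                                      }
                                                                                      ```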

                                                                                      Anyway, congrats Louis! Very inspiring work :-)

                                                                                      1. 4

                                                                                        A type (think struct) can have multiple constructors, each with their own fields. This means they effectively double as enums. Have a look at the docs, they explain it better than I can.

                                                                                        This sounds like it’s taken straight from Haskell. (Not a criticism, just background.)

                                                                                        1. 4

                                                                                          One could do worse things than take inspiration from Haskell!

                                                                                          I did not know this. :-)

                                                                                          1. 4

                                                                                            It’s not specifically inspired by Haskell, it was mostly inspired by Erlang. I didn’t realise that Haskell had the same feature, but now that I’ve read the documentation it seems very similar. Cool stuff!

                                                                                            1. 3

                                                                                              I mean, Gleam does belong to the ML family of languages IMO, so we may as well say the feature is inspired by Standard ML! /s

                                                                                              1. 1

                                                                                                In what sense does it relate to ML?

                                                                                                1. 7

                                                                                                  Some examples of “ML-like” features include…

                                                                                                  • Sum types as a primary data modeling tool, in some cases even displacing records for small-ish data structures

                                                                                                  • First-class functions and some use of recursion, often used to constrain and thereby elucidate control flow. This is sometimes in lieu of imperative-/procedural-style constructs such as…

                                                                                                    • If/else statements (not if/else expressions)
                                                                                                    • Switch/case statements (not switch/case expressions)
                                                                                                    • Goto, which has basically been a strawman against this style of programming for the past decade or so
                                                                                                    • Defer, which though it allows code to be written out-of-order, still introduces a discrete procedural “step” to computation

                                                                                                    In languages like Javascript and Gleam, this extends to the use of a syntactic construct you may know as a callback function.

                                                                                                  • Type inference (usually Damas–Hindley–Milner type inference)

                                                                                                  • Other idioms that, like the above, help to obviate and/or discourage the use of global state in implementations as they scale
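                                                                                                  As a small, self-contained illustration of the first-class-functions and type-inference points (my own example, written in Rust since it is often described as ML-like):

                                                                                                  ```rust
                                                                                                  fn main() {
                                                                                                      let xs = vec![1, 2, 3, 4, 5];

                                                                                                      // Imperative style: mutable state plus an explicit loop.
                                                                                                      let mut sum_sq = 0;
                                                                                                      for x in &xs {
                                                                                                          if x % 2 == 1 {
                                                                                                              sum_sq += x * x;
                                                                                                          }
                                                                                                      }

                                                                                                      // ML-ish style: the same computation as an expression pipeline of
                                                                                                      // first-class functions; the closure argument types are inferred.
                                                                                                      let sum_sq2: i32 = xs.iter().filter(|&&x| x % 2 == 1).map(|&x| x * x).sum();

                                                                                                      assert_eq!(sum_sq, sum_sq2); // both are 1 + 9 + 25 = 35
                                                                                                      println!("{sum_sq2}");
                                                                                                  }
                                                                                                  ```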

                                                                                                  There are plenty of ML-like languages for different runtimes, including a few that are used for systems programming.† Languages often described as “ML-like” include…

                                                                                                  • Scala, ‘an ML for the JVM’
                                                                                                  • F#, ‘an ML for the CLR’
                                                                                                  • Elm, which also takes some inspiration from Haskell while not going quite as far with the generics, apparently for sake of error message readability
                                                                                                  • Facebook’s Reason, which is sometimes even called ReasonML for clarity
                                                                                                  • Rust, one of the only systems programming languages to have this distinction. Check for the features above if you don’t believe me!

                                                                                                  Haskell is explicitly inspired by ML, but is often considered its own category due to the radical departures it makes in the directions of a) purely functional programming and b) requiring (in most cases) the use of monads to represent effects in the type system.

                                                                                                  My educated guess: this is largely because the core syntax and feature-set are relatively well understood at this point. As such, syntactic sugar is rarely necessary in order to express intent directly in efficient code. This is unlike “dynamic” languages such as Python, Ruby, and Elixir, which tend to make liberal use of metaprogramming in order to make the syntax more directly express intent. This can often make it unclear what is actually happening to make a given piece of code run.

                                                                                                  1. 3

                                                                                                    I find it interesting that nothing on that list is inherent to functional languages. All these sweet goodies could just as well exist in an imperative language, but outside of Rust, they don’t.

                                                                                                    I’m still sad Go completely missed the boat on that one.

                                                                                                    1. 1

                                                                                                      Yup. Maybe someday we’ll have an approach in between those of Go and Rust; that’s some of what I’m looking to Gleam for, even if it’s not primarily a systems programming language.† In the meantime, we have the sometimes-insightful, sometimes-regressive minimalism of Go; and the sometimes-insightful, sometimes-overwhelming maximalism of Rust.

                                                                                                      † It is my broad understanding that the BEAM VM—on which I expect most* Gleam code will run—helps concurrent applications scale by denying granular control of threaded execution. This can be thought of as vaguely similar to the Go runtime’s decision to have a blessed implementation of cooperative multitasking, namely goroutines. In contrast, the Erlang ecosystem benefits from having a blessed provider and model (respectively) for all concurrent computation, thanks to the OTP supervision tree combined with the actor model of concurrent computation. It takes power away from the developer, but to the benefit of the system at large. Boy, is it ever exciting that we might actually have a statically-typed language built atop that excellent multitasking infrastructure!

                                                                                                      1. 1

                                                                                                        From my PoV Go and Rust are incomparable since Go is a GC’d language and Rust is a non-GC’d language. So on this basis Gleam, too, can never compete with Rust but is for sure a strong contender in the space Go plays in.

                                                                                                        1. 1

                                                                                                          If all that matters is the engine that runs your code, then sure. If a project is performance-sensitive, then its options for a reasonable programming language are constrained anyway IMO. When I compare these languages, I have their developer experience in mind. At least, relative to how much performance they leave on the table.

                                                                                                      2. 1

                                                                                                        It honestly depends on what one even means by “functional language”. Lots of ML and ML-like things exist which are not particularly functional, including possibly: SML, OCaml, Swift, Scala, Kotlin, Haxe.

                                                                                                        1. 2

                                                                                                          What’s not-functional about SML, OCaml, and Scala? Are you perhaps comparing them to the purely functional Haskell?

                                                                                                          1. 1

                                                                                                            What’s functional about them? Every language I listed is equally functional and not-functional depending how you use it. They’re all ML variants after all.

                                                                                                      3. 2

                                                                                                        Facebook’s Reason, which is sometimes even called ReasonML for clarity

                                                                                                        Credit where credit is due: Reason is a syntax extension for OCaml. It’s not a whole cloth creation by Facebook.

                                                                                                      4. 3

                                                                                                        I think it is an ML language in every way, although we moved away from the traditional ML syntax in the end.

                                                                                              1. 6

                                                                                                I work at a startup that uses Rust and this rings very true to me.

                                                                                                I like programming Rust. It’s fun and it’s interesting and it makes me feel smart when I use it.

                                                                                                Hiring, onboarding, code reviews, estimates, sprint planning, library maturity, being on-call, maintaining existing projects? Yeah. That’s a different thing.

                                                                                                1. 4

                                                                                                  A lot of people talk up the reliability and ease of maintenance in rust. What do you find challenging about it? Are there edge cases you wish you had known about before?

                                                                                                  1. 1

                                                                                                    Well, I can only comment on my experience. When I say maintaining existing projects, I don’t think “edge cases” is the right framing, because the problem is not really about the language itself; it’s not a problem with Rust’s capabilities as a language. I also don’t think “reliability” is the right framing, because it assumes that maintenance just means keeping the thing online.

                                                                                                    In reality, maintenance really means that engineer A wrote something that fit requirements alpha, and now the requirements have changed to requirements beta, and engineer B has to modify the project that was built by someone else for different requirements to fit their new requirements. Maybe engineer A is gone, maybe engineer A is busy; you can’t just require that all changes go through the same person, because if you can’t exchange work fluidly between engineers, you’re never going to keep the velocity you need in a startup environment. In a startup, requirements tend to change very rapidly because you’re still finding product-market fit.

                                                                                                    Hiring people that already know how to program Rust is hard enough. Hiring people that know how to operate Rust in production is even harder. Applicants that do know Rust very often have not used Rust in a team or production setting.

                                                                                                    The obvious problem with hiring a bunch of people that don’t really know Rust is that you wind up with a lot of projects that were written by people who were learning Rust when they wrote it. Most people think the consequence of this is that you have two styles of code floating around your org: “Rust by experienced Rust programmers” and “Rust by inexperienced Rust programmers”. Reality is actually much worse. In reality, people who come to Rust from Go, Python, JavaScript, Java, etc. each bring a specific and idiosyncratic way of learning Rust. Instead of having 2 styles of code in your org, you really wind up with N+1 styles of code: 1 style for each of the N contexts that you hired from, and then 1 style for people that are writing code in your org’s native style.

                                                                                                    If you have a stable team that’s been working together, and they all come from $language and start programming Rust together, you wind up with two kinds of code: “Rust by people thinking like $language” and “Rust by people who have been writing Rust for a while”. When you have a team that’s new to working with one another and comes from a large diversity of contexts and programming backgrounds, what you really wind up with is “Rust by people thinking like $language” for each $language in the set of all languages that describe the prior experience of all of your engineers.

                                                                                                    Operators have a pretty hard time with it. Most people say that learning to write Rust comfortably and productively takes about three months of programming Rust regularly, but what about the people on your team that need to interact with your projects on an infrequent basis? It’s very common for someone to have a role where they do a lot of operations-focused work but might want to contribute code on an infrequent basis. Maybe they want to update a library because a vulnerability was found in one of our dependencies: do they have the toolset? Can they update the dependency? What if they find a small bug? Will they be able to fix it, or will they have to send it to another engineer? Virtually all of these engineers could fix a small bug in a Go, Python, or Java program, but fixing even a small bug in a Rust program is often very difficult for people who don’t use the language regularly. Oftentimes they can’t even understand what the error messages are talking about.

                                                                                                    The testing ecosystem of Rust is also fairly immature. Support for benches in tests is still nightly-only, and none of us want to depend on nightly in production. Custom test runners are also a nightly feature. Sure, these things exist, but there’s not a lot of stable tooling built on top of them, because their foundations are not stable.

                                                                                                    For context, it’s a game company, we’re making an MMO, and the client uses Unreal Engine. Our engineers come from places like Epic, Google, Amazon, Riot, Blizzard, etc. These are smart, experienced engineers, people with a lot of production experience on large scale server deployments and large scale multiplayer games.

                                                                                                    I think Rust is a very capable language and I think my teammates are very capable engineers; I don’t think the problem is either the capabilities of the language or the capabilities of the people. I enjoy writing Rust, and I think most people I work with would say the same thing, but thinking it’s cool and thinking it’s effective are different things. I’m convinced it’s cool, but I’m not convinced that it’s particularly well-suited for rapidly-growing teams.

                                                                                                    1. 1

                                                                                                      First off, thank you for taking the time to leave a detailed comment!

                                                                                                      I misunderstood what you meant by maintenance. What you are describing I might think of as brownfield development or working with a legacy codebase, i.e., continued development and evolution of a codebase. I was thinking about operational maintenance. I assumed that might imply rough tooling around maintenance tasks and therefore be a bit of an edge case in the program life cycle. I think I have a better idea what you mean now.

                                                                                                      I see how it could be challenging to get everyone to write the same code style in Rust. It’s a bit of a kitchen sink language and it’s fairly young so there may not be enough cultural norms (like the Zen of Python) or people to enforce said norms. By contrast, Go, Python, and Java have normalized conventions, which makes it easier to assimilate into the community.

                                                                                                      It makes me wonder how long it took other languages to “find their voice”. C++ is famous for shops having their own subset, so it seems not every language necessarily converges on a normalized set of conventions. I’m not deep enough in the Rust ecosystem to feel qualified to say how that is developing. Hopefully enough people will develop the cultural knowledge to provide a good base for new shops to adopt Rust.

                                                                                                      Thanks again for your thoughts!