Threads for ngrilly

  1. 4

    If you haven’t seen Guy Steele’s excellent Growing a Language conference talk, it’s worth your time.

    1. 1

      Yes, it almost feels like Steele is playing a mind trick on the audience in that talk!

    1. 4

      Most programmers refactor their code as it grows to remove accidental complexity. Language designers should do the same, refactoring their language as they evolve it, but they usually don’t, because most users prioritize backward compatibility over simplicity. I’m not sure I agree with that, but it seems to be the dominant approach.

      1. 9

        The problem is that it’s really hard to do this and not break shit along the way. Programmers get grumpy when their code breaks. See Elm 0.17 to 0.18, and to a lesser extent perhaps to 0.19. There’s also the issue of, if you break someone’s code once, are you going to do it again? And again? When does it stop?

        I am semi-seriously considering something like this with my own language, Garnet. After the 1.0 release, the opportunity for breaking changes would occur every X years, probably with X increasing over time. Maybe a Fibonacci sequence or something; the gaps would go 2, 3, 5, 8 … years, so you always know when to expect them to happen long in advance. Somewhat inspired by Rust’s editions, in terms of “this is a known-stable language release”, but able to break backwards compat (and also being less frequent).
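
        As a rough sketch of what such a schedule would look like (purely illustrative, with a hypothetical 1.0 year; this is not Garnet code):

          // Purely illustrative: breaking-change windows spaced by Fibonacci-style gaps,
          // so every such date is known many years in advance.
          fn breaking_change_years(release_1_0: u32, count: usize) -> Vec<u32> {
              let (mut gap, mut next_gap) = (2u32, 3u32); // gaps: 2, 3, 5, 8, ...
              let mut year = release_1_0;
              let mut years = Vec::with_capacity(count);
              for _ in 0..count {
                  year += gap;
                  years.push(year);
                  let sum = gap + next_gap;
                  gap = next_gap;
                  next_gap = sum;
              }
              years
          }

          fn main() {
              // e.g. a (hypothetical) 1.0 in 2025 allows breakage in 2027, 2030, 2035, 2043
              println!("{:?}", breaking_change_years(2025, 4));
          }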

        1. 1

          With a sufficiently expressive macro system, I think you could pull this off (relatively) easily:

          When features get removed, rather than being axed completely, they get moved to a standard library macro, and then when source files get compiled in a new version that has removed built-in support for the feature, it automatically inserts the import into the top of the source if the feature is used in it. Those macro contexts could bar feature compatibility with (from their perspective) the future, such that if you want to use new language features in a block of code using a legacy macro, you need to refactor the legacy macro away. Doing so would decrease maintenance burden substantially, because you don’t need to worry about new language features conflicting with now-sunset language features.

          I think that gives the best of both worlds: reduction of core language complexity, while not breaking source files that have been left untouched since the times of dinosaurs.
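
          A minimal sketch of what that could look like, in Rust-like terms (everything here is hypothetical: assume a do/while construct was dropped from the core language and lives on as a library macro that the migration tooling imports for you):

            // Hypothetical: `do { ... } while cond;` is no longer core syntax; a library
            // macro provides the old surface form, and the migration step inserts the
            // `use` for it wherever legacy code needs it.
            macro_rules! do_while {
                ($body:block while $cond:expr) => {
                    loop {
                        $body
                        if !$cond { break; }
                    }
                };
            }

            fn main() {
                let mut n = 0;
                do_while!({ n += 1; } while n < 3);
                assert_eq!(n, 3);
            }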

          1. 1

            The problem is that it’s really hard to do this and not break shit along the way.

            Agreed. Then we end up with a “perfect” language but no one using it. The next step for language designers would be to invest in tools that would help with refactoring the code as the language evolves. I remember Go did a bit of that in the early days before 1.0. But that was mostly for relatively trivial transformations.

            1. 4

              Rust does it too - they have an “edition” system, and every three years a new edition ships that can contain new, backwards-incompatible syntax.

              What differentiates this from e.g. the C++11, C++14, C++17 etc. situation is that you get to mix-and-match these editions within the same project; the compiler handles it fine. Also, changes made in editions are designed in such a way that fixing the breakage in your code is easy and largely automated, so it suffices to run cargo fix --edition in nearly all cases.
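
              For concreteness, this is roughly what the mechanics look like (standard Cargo behaviour, nothing project-specific; the crate name is made up):

                # Cargo.toml — the edition is declared per crate, so crates on different
                # editions can coexist in one dependency graph.
                [package]
                name = "example-crate"
                version = "0.1.0"
                edition = "2018"

                # Typical migration: let cargo apply the mechanical fixes while still on
                # the old edition, then bump the `edition` field and rebuild.
                #   cargo fix --edition
                #   cargo build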

              1. 3

                TBF lots of languages have some sort of evolution feature. Python has __future__ imports, Perl has feature and version pragmas, …

                I think the great success of Rust’s editions system is obviously the eminently reliable migration tool. And as for

                you get to mix-and-match these editions within the same project, the compiler handles it fine

                you don’t, really: the edition is a per-crate stricture; obviously you can have multiple crates in a given project, but it’s much coarser. If anything you can “mix and match” C++ a lot more. GCC actually guarantees that as long as all your objects are built with the same compiler you can link them even if they use different versions of the standard. And you can even link cross-version if the features were not considered unstable in that compiler version (so e.g. you can’t link C++17 from GCC7 and C++17 from GCC8 because C++17 support was considered unstable in GCC8).

                But I think that’s advantageous.

                Another major advantage of Rust is simply that it’s an extremely statically typed language, so there are lots of language improvements which can be done with middling syntax tweaks and updating the prelude, whereas adding a builtin to a dynamically typed language has the potential to break everything with limited visibility. Not being object-oriented (so largely being “early bound”, statically dispatched) and very strict visibility control also means it’s difficult for downstream to rely on implementation details.

        1. 18

          Do you have any more information on the project? This is a bit light.

          1. 3

            I haven’t shared the open source project publicly yet, but I plan to later this year.

            This thread has some example code and a link for more info if you’re interested (some details have changed since): https://twitter.com/haxor/status/1618054900612739073

            And I wrote a related post about motivations here: https://www.onebigfluke.com/2022/11/the-case-for-dynamic-functional.html

            1. 18

              There is no static type system, so you don’t need to “emulate the compiler” in your head to reason about compilation errors.

              Similar to how dynamic languages don’t require you to “emulate the compiler” in your head, purely functional languages don’t require you to “emulate the state machine”.

              This is not how I think about static types. They’re a mechanism for allowing me to think less by making a subset of programs impossible. Instead of needing to think about whether s can be “hello” or 7, I know I only have to worry about s being 7 or 8. The compiler error just meant I accidentally wrote a program where it is harder to think about the possible states of the program. The need to reason about the error means I already made a mistake in reasoning about my program, which is the important thing. Fewer errors before the program is run doesn’t mean the mistakes weren’t made.
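
              To make that first point concrete (a toy Rust sketch, hypothetical names):

                // With a static type, `s` simply can't be "hello"; the only states left
                // to reason about are the ones the type admits.
                fn step(s: u8) -> u8 {
                    if s == 7 { 8 } else { s } // toy rule: 7 advances to 8
                }

                fn main() {
                    let s: u8 = 7;
                    let s = step(s);           // fine
                    // let s = step("hello");  // rejected at compile time: expected `u8`, found `&str`
                    println!("{s}");
                }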

              I am not a zealot, I use dynamically typed languages. But it is for problems where the degree of dynamism inherent in the problem means introducing the ceremony of program-level typing is extra work, not because reading the compiler errors is extra work.

              This is very analogous to the benefits of functional languages you point out. By not having mutable globals the program is easier to think about, if s is 7 it is always 7.

              Introducing constraints to the set of possible programs makes it easier to reason about our programs.

              1. 4

                I appreciate the sentiment of your reply, and I do understand the value of static typing for certain problem domains.

                Regarding this:

                “making a subset of programs impossible”

                How do you know what subset becomes impossible? My claim is you have to think like the compiler to do that. That’s the problem.

                I agree there’s value in using types to add clarity through constraints. But there’s a cost for the programmer to do so. Many people find that cost low and it’s easy. Many others — significantly more people in my opinion — find the cost high and it’s confusing.

                1. 10

                  I really like your point about having to master several languages. I’m glad to be rid of a preprocessor, and languages like Zig and Nim are making headway on unifying compile-time and runtime programming. I disagree about the type system, though: it does add complexity, but it’s scalable and, I think, very important for larger codebases.

                  Ideally the “impossible subset” corresponds to what you already know is incorrect application behavior — that happens a lot of the time, for example declaring a “name” parameter as type “string” and “age” as “number”. Passing a number for the name is nonsense, and passing a string for the age probably means you haven’t parsed numeric input yet, which is a correctness and probably security problem.
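
                  A hedged sketch of that example (hypothetical function, Rust syntax):

                    // The "impossible subset" here is exactly the calls that swap the two
                    // arguments or pass unparsed user input where a number is required.
                    fn register(name: &str, age: u32) {
                        println!("{name} is {age}");
                    }

                    fn main() {
                        register("Ada", 36);                // ok
                        // register(36, "Ada");             // rejected: arguments the wrong way round
                        // register("Ada", "36");           // rejected: the input still needs parsing
                        let age: u32 = "36".parse().expect("age must be a number");
                        register("Ada", age);
                    }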

                  It does get a lot more complicated than this, of course. Most of the time that seems to occur when building abstractions and utilities, like generic containers or algorithms, things that less experienced programmers don’t do often.

                  In my experience, dynamically-typed languages make it easier to write code, but harder to test, maintain and especially refactor it. I regularly make changes to C++ and Go code, and rely on the type system to either guide a refactoring tool, or at least to produce errors at all the places where I need to fix something.

                  1. 4

                    How do you know what subset becomes impossible? My claim is you have to think like the compiler to do that. That’s the problem.

                    You’re right that you have to “think like the compiler” to be able to describe the impossible programs for it to check, but everybody writing a program has an idea of what they want it to do.

                    If I don’t have static types and I make the same mistake, I will have to reason about the equivalent runtime error at some point.

                    I suppose my objection is framing it as “static typing makes it hard to understand the compiler errors.” It is “static typing makes programming harder” (with the debatably-worth-it benefit of making running the program easier). The understandability of the errors is secondary; if there is value, there’s still value even if the error was as shitty as “no.”

                    But there’s a cost for the programmer to do so. Many people find that cost low and it’s easy. Many others — significantly more people in my opinion — find the cost high and it’s confusing.

                    I think this is the same for “functionalness”. For example, often I find I’d rather set up a thread local or similar because it is easier to deal with than threading some context argument through everything.
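
                    Something along these lines, say (a small Rust sketch; the context value is made up):

                      use std::cell::RefCell;

                      thread_local! {
                          // Hypothetical request context, reachable from anywhere on this
                          // thread instead of being threaded through every signature.
                          static REQUEST_ID: RefCell<Option<u64>> = RefCell::new(None);
                      }

                      fn deep_inside_some_library() {
                          REQUEST_ID.with(|id| {
                              if let Some(id) = *id.borrow() {
                                  println!("handling request {id}");
                              }
                          });
                      }

                      fn main() {
                          REQUEST_ID.with(|id| *id.borrow_mut() = Some(42));
                          deep_inside_some_library();
                      }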

                    I suppose there is a difference in the sense that being functional is not (as of yet) a configurable constraint. It’s more or less on or off.

                    1. 3

                      I agree there’s value in using types to add clarity through constraints. But there’s a cost for the programmer to do so. Many people find that cost low and it’s easy. Many others — significantly more people in my opinion — find the cost high and it’s confusing.

                      I sometimes divide programmers into two categories: the first acknowledge that programming is a form of applied maths. The second went into programming to run away from maths.

                      It is very difficult for me to relate to the second category. There’s no escaping the fact that our computers ultimately run formal systems, and most of our job is to formalise unclear requirements into an absolutely precise specification (source code), which is then transformed by a formal system (the compiler) into a stream of instructions (object code) that will then be interpreted by some hardware (the CPU, GPU…) with more or less relevant limits & performance characteristics. (It’s obviously a little different if we instead use an interpreter or a JIT VM).

                      Dynamic type systems mostly allow scared-of-maths people to ignore the mathematical aspects of their programs for a bit longer, until of course they get some runtime error. Worse, they often mistake their should-have-been-a-type-error mistakes for logic errors, and then claim a type system would not have helped them. Because contrary to popular beliefs, type errors don’t always manifest as such at runtime. Especially when you take advantage of generics & sum types: they make it much easier to “define errors out of existence”, by making sure huge swaths of your data is correct by construction.
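
                      A small sketch of that “correct by construction” idea in Rust (hypothetical types; this is the usual “parse, don’t validate” shape rather than anyone’s specific API):

                        // An address is either verified or not, and only the verified form
                        // can be passed on. The invalid state isn't checked downstream; it
                        // can't exist there in the first place.
                        struct Unverified(String);
                        struct Verified(String);

                        fn verify(email: Unverified) -> Result<Verified, String> {
                            if email.0.contains('@') { Ok(Verified(email.0)) } else { Err(email.0) }
                        }

                        fn send_newsletter(to: &Verified) {
                            println!("sending to {}", to.0);
                        }

                        fn main() {
                            match verify(Unverified("user@example.com".to_string())) {
                                Ok(ok) => send_newsletter(&ok),
                                Err(bad) => eprintln!("not an address: {bad}"),
                            }
                            // Passing an Unverified to send_newsletter is a type error, not a runtime check.
                        }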

                      And the worst is, I suspect you’re right: it is quite likely most programmers are scared of maths. But I submit maths aren’t the problem. Being scared is. People need to learn.

                      My claim is you have to think like the compiler to do that.

                      My claim is that I can just run the compiler and see if it complains. This provides a much tighter feedback loop than having to actually run my code, even if I have a REPL. With a good static type system my compiler is disciplined so I don’t have to be.

                      1. 6

                        Saying that people who like dynamic types are “scared of math” is incredibly condescending and also ignorant. I teach formal verification and am writing a book on formal logic in programming, but I also like dynamic types. Lots of pure mathematics research is done with Mathematica, Python, and Magma.

                        I’m also disappointed but unsurprised that so many people are arguing with a guy for not making the “right choices” in a language about exploring tradeoffs. The whole point is to explore!

                        1. 3

                          Obviously people aren’t monoliths, and there will be exceptions (or significant minorities) in any classification.

                          Nevertheless, I have observed that:

                          • Many programmers have explicitly taken programming to avoid doing maths.
                          • Many programmers dispute that programming is applied maths, and some downvote comments saying otherwise.
                          • The first set is almost perfectly included in the second.

                          As for dynamic typing, almost systematically, arguments in favour seem to be less rigorous than arguments against. Despite SICP. So while the set of dynamic typing lovers is not nearly as strongly correlated with “maths are scary”, I do suspect a significant overlap.

                          While I do use Python for various reasons (available libraries, bignum arithmetic, and popularity among cryptographers (SAGE) being the main ones), dynamic typing has systematically hurt me more than it helped me, and I avoid it like the plague as soon as my programs reach non-trivial sizes.

                          I could just be ignorant, but despite having engaged in static/dynamic debates with articulate peers, I have yet to see any compelling argument in favour. I mean there’s the classic sound/complete dilemma, but non-crappy systems like F* or what we see in ML and Haskell very rarely stopped me from writing a program I really wanted to write. Sure, some useful programs can’t be typed. But for those most static check systems have escape hatches. And many programs people think can’t be typed actually can. See Rich Hickey’s transducers for instance. Throughout his talk he was dismissively daring static programmers to type it, only to have a Haskell programmer actually do it.

                          There are of course very good arguments favouring some dynamic language at the expense of some static language, but they never survive a narrowing down to static & dynamic typing in general. The dynamic language may have a better standard library, the static language may have a crappy type system with lots of CVE inducing holes… all ancillary details that have little to do with the core debate. I mean it should be obvious to anyone that Python, Mathematica, and Magma have many advantages that have little to do with their typing discipline.


                          Back to what I was originally trying to respond to, I don’t understand people who feel like static typing has a high cognitive cost. Something in the way their brain works (or their education) is either missing or alien. And I’m highly sceptical of claims that some people are just wired differently. It must be cultural or come from training.

                          And to be honest I have an increasingly hard time considering the dynamic and static positions equal. While I reckon dynamic type systems are easier to implement and more approachable, beyond that I have no idea how they help anyone write better programs faster, and I increasingly suspect they do not.

                          1. 6

                            Even after trying to justify that you’ve had discussions with “articulate peers” and “could just be ignorant” and this is all your own observations, you immediately double back to declaring that people who prefer dynamic typing are cognitively or culturally defective. That makes it really, really hard to assume you’re having any of these arguments in good faith.

                            1. 1

                              To be honest I only recall one such articulate peer. On Reddit. He was an exception, and you’re the second one that I recall. Most of the time I see poorer arguments strongly suggesting either general or specific ignorance (most of the time they use Java or C++ as the static champion). I’m fully aware how unsettling and discriminatory is the idea that people who strongly prefer dynamic typing would somehow be less. But from where I stand it doesn’t look that false.

                              Except for the exceptions. I’m clearly missing something, though I have yet to be told what.

                              Thing is, I suspect there isn’t enough space in a programming forum to satisfactorily settle that debate. I would love to have strong empirical evidence, but I have reasons to believe this would be very hard: if you use real languages there will be too many confounding variables, and if you use a toy language you’ll naturally ignore many of the things both typing disciplines enable. For now I’d settle for a strong argument (or set thereof). If someone has a link that would be much appreciated.

                              And no, I don’t have a strong link in favour of static typing either. This is all deeply unsatisfactory.

                              1. 5

                                There seems to be no conclusive evidence one way or the other: https://danluu.com/empirical-pl/

                                1. 3

                                  Sharing this link is the only correct response to a static/dynamic argument thread.

                                  1. 1

                                    I know of — oops I do not, I was confusing it with some other study… Thanks a ton for the link, I’ll take a look.

                                    Edit: from the abstract there seems to be some evidence of the absence of a big effect, which would be just as huge as evidence of an effect one way or the other.

                                    Edit 2: just realised this is a list of studies, not just a single study. Even better.

                        2. 1

                          How do you know what subset becomes impossible?

                          Well, it’s the subset of programs which decidably don’t have the desired type signature! Such programs provably aren’t going to implement the desired function.

                          Let me flip this all around. Suppose that you’re tasked with encoding some function as a subroutine in your code. How do you translate the function’s type to the subroutine’s parameters? Surely there’s an algorithm for it. Similarly, there are algorithms for implementing the various primitive pieces of functions, and the types of each primitive function are embeddable. So, why should we build subroutines out of anything besides well-typed fragments of code?

                        3. 4

                          Sure, but I think you’re talking past the argument. It’s a tradeoff. Here is another good post that explains the problem and gives it a good name: biformity.

                          https://hirrolot.github.io/posts/why-static-languages-suffer-from-complexity

                          People in the programming language design community strive to make their languages more expressive, with a strong type system, mainly to increase ergonomics by avoiding code duplication in final software; however, the more expressive their languages become, the more abruptly duplication penetrates the language itself.

                          That’s the issue that explains why separate compile-time languages arise so often in languages like C++ (mentioned in the blog post), Rust (at least 3 different kinds of compile-time metaprogramming), OCaml (many incompatible versions of compile-time metaprogramming), Haskell, etc.

                          Those languages are not only harder for humans to understand, but tools as well

                          1. 4

                            The Haskell metaprogramming system that jumps immediately to mind is Template Haskell, which makes a virtue of not introducing a distinct metaprogramming language: you use Haskell for that purpose as well as for the main program.

                            1. 1

                              Yeah the linked post mentions Template Haskell and gives it some shine, but also points out other downsides and complexity with Haskell. Again, not saying that types aren’t worth it, just that it’s a tradeoff, and that they’re different when applied to different problem domains.

                            2. 2

                              Sure, but I think you’re talking past the argument

                              This is probably a fair characterization.

                              Those languages are not only harder for humans to understand, but tools as well

                              I am a bit skeptical of this. Certainly C++ is harder for a tool to understand than C say, but I would be much less certain of say Ruby vs Haskell.

                              Though I suppose it depends on if the tool is operating on the program source or a running instance.

                          2. 7

                            One common compelling reason is that dynamic languages like Python only require you to learn a single tool in order to use them well. […] Code that runs at compile/import time follows the same rules as code running at execution time. Instead of a separate templating system, the language supports meta-programming using the same constructs as normal execution. Module importing is built-in, so build systems aren’t necessary.

                              That’s exactly what Zig is doing with its “comptime” feature: using the same language, but while keeping a statically typed and compiled approach.

                            1. 4

                              I’m wondering where you feel dynamic functional languages like Clojure and Elixir fall short? I’m particularly optimistic about Elixir as of late since they’re putting a lot of effort in expanding to the data analytics and machine learning space (their NX projects), as well as interactive and literate computing (Livebook and Kino). They are also trying to understand how they could make a gradual type system work. Those all feel like traits that have made Python so successful and I feel like it is a good direction to evolve the Elixir language/ecosystem.

                              1. 3

                                I think there are a lot of excellent ideas in both Clojure and Elixir!

                                With Clojure the practical dependence on the JVM is one huge deal breaker for many people because of licensing concerns. BEAM is better in that regard, but shares how VMs require a lot of runtime complexity that make them harder to debug and understand (compared to say, the C ecosystem tools).

                                For the languages themselves, simple things like explicit returns are missing, which makes the languages feel difficult to wield, especially for beginners. So enumerating that type of friction would be one way to understand where the languages fall short. Try to recoup some of the language’s strangeness budget.

                              2. 2

                                I’m guessing the syntax is a pretty regular Lisp, but with newlines and indents making many of the parenthesis unnecessary?

                                Some things I wish Lisp syntax did better:

                                1. More syntactically first-class data types besides lists. Most obviously dictionaries, but classes kind of fit in there too. And lightweight structs (which get kind of modeled as dicts or tuples or objects or whatever in other languages).
                                2. If you have structs you need accessors. And maybe that uses the same mechanism as namespaces. Also a Lisp weak point.
                                 3. Named and default arguments. The Lisp approaches feel like kludges. Smalltalk is kind of an ideal, but secretly just the weirdest naming convention ever. Though maybe it’s not so crazy to imagine Lisp syntax with function names blown out over the call like in Smalltalk.
                                1. 1

                                  Great suggestions thank you! The syntax is trying to avoid parentheses like that for sure. If you have more thoughts like this please send them my way!

                                  1. 1

                                    This might be an IDE / LSP implementation detail, but would it be possible to color-code the indentation levels? Similar to how editors color code matching brackets these days. I always have a period of getting used to Python where the whitespace sensitivity disorients me for a while.

                                    1. 2

                                      Most editors will show a very lightly shaded vertical line for each indentation level with Python. The same works well for this syntax too. I have seen colored indentation levels (such as https://archive.fosdem.org/2022/schedule/event/lispforeveryone/), but I think it won’t be needed because of the lack of parentheses. It’s the same reason I don’t think it’ll be necessary to use a structural editor like https://calva.io/paredit/

                            1. 2

                               I’ve mainly been using VSCode for a few years now. I was using Sublime Text before.

                              I’m impressed by how many people are still using Sublime here. The lack of native LSP integration became a bit of an issue for me over time. That’s clearly one of the main points that motivated me into trying VSCode.

                               It’s also interesting how Kakoune and Helix are emerging and being used by more and more developers. Helix seems really promising.

                              Surprised no one mentioned Lapce yet. Curious about the experience from anyone using it.

                              1. 5

                                Please slow down on the sourcegraph spam. Lobsters is not your marketing channel.

                                1. 5

                                  I see what you mean, but in that instance, the article was actually useful to me. (I don’t use or work for Sourcegraph.)

                                  1. 1

                                    For sure my bad.

                                    1. 4

                                      To be clear, the problem is not the number of submissions pushing your own stuff, it is the ratio between those and your other contributions. You have submitted more stories than comments and all except one of your stories has been sourcegraph (the other one is currently sitting at -2). No one cares if active contributors to the community plug their own stuff, people care if people just treat this place as a marketing channel. Join the discussion on other topics, submit interesting things you read elsewhere and no one will complain when you submit things like this.

                                      The rule of thumb I was told was that no more than 10% of your contributions should be self-promotion. I’d qualify that slightly and suggest that one-line comments don’t count towards the other 90%. Spend some time thinking about how your unique perspective can enrich other discussions.

                                      1. 2

                                        Starting yesterday, I’m going to be a better community member and not treat Lobsters like an open mic night.

                                        Spend some time thinking about how your unique perspective can enrich other discussions.

                                        Will do. I want to apologize to you, @friendlysock, and the rest of the community for my behavior, it was unacceptable and I’ll do better going forward.

                                  1. 3

                                    I hope we will see a follow up post in 2-3 weeks with a nice speedup in modernc.org/sqlite.

                                    1. 1

                                      Is there an upcoming improvement?

                                      1. 3

                                        “I hope”

                                    1. 16

                                      Is it my bubble or is sqlite everywhere lately?

                                      1. 24

                                        Every 5-7 years we find a new place where SQLite can shine. It is a testament to the engineering and API that it powers our mobile apps (Core Data)/OSes, desktop apps (too many examples), and, eventually app servers, be they traditional monoliths (Litestream) or newer serverless variants, like what’s described here.

                                        I also see a trend where we’re starting to question if all the ops-y stuff is really needed for every scale of app.

                                        1. 6

                                           I’m in the same bubble, reading about Litestream, Fly.io and Tailscale. And I really love what they are doing in the SQLite ecosystem. But I don’t really understand how CloudFlare is using SQLite here. It’s not clear if SQLite is used as a library linked to the Worker runtime, which is the usual way to use it, or if it is running in another server process, in which case it’s closer to the traditional client-server approach of PostgreSQL or MySQL.

                                          1. 4

                                            Yeah this post is very low on technical detail, and I can’t seem to find any documentation about the platform yet - I guess once things open up in June we’ll know more.

                                            Definitely keen to see if they are building something similar to Litestream, it seems like a model that makes sense for SQLite; a single writer with the WAL replicated to all readers in real time.

                                            I’m trying to convince people at work that using a replicated SQLite database for our system instead of a read only PostgreSQL instance would make our lives a lot better, but sadly we don’t have the resources to make that change.

                                            1. 2

                                              I guess CloudFlare D1 is based on CloudFlare Durable Objects, a kind of KV database accessible through a JavaScript API. They probably implemented a SQLite VFS driver mapping to Durable Objects (not sure how they mapped the file semantics to KV semantics though). If I understand correctly, Durable Objects is already replicated, which means they don’t need to replicate the WAL like Litestream.

                                          2. 5

                                            I think there’s probably a marketing/tech trend right now for cloud vendors (fly.io, cloudflare) to push for this technology because it’s unfamiliar enough to most devs to be cool and, more importantly, it probably plays directly to the vendors’ strengths (maintaining these solutions is probably much easier than, say, running farms of Postgres or whatever at scale and competing against AWS or Azure).

                                             If it’s any consolation, in another five or ten years people will probably rediscover features of bigger, more full-featured databases and sell them back to us as some new thing.

                                            (FWIW, I’ve thought SQLite was cool back in the “SQLite is a replacement for fopen()” days. It’s great tech and a great codebase.)

                                            1. 14

                                              Litestream author here. I think SQLite has trended recently because more folks are trying to push data to the edge and it can be difficult & expensive to do with traditional client/server databases.

                                              There’s also a growing diversity of types of software and the trade-offs can change dramatically depending on what you’re trying to build. Postgres is great and there’s a large segment of the software industry where that is the right choice. But there’s also a growing segment of applications that can make better use of the trade-offs that SQLite makes.

                                          1. 1

                                            redbean, a web server shipped as a single binary executable, including Lua and SQLite, seems to be a perfect fit: https://redbean.dev/

                                            1. 10

                                              I like Go, but the article makes valid points to be honest.

                                              1. 2

                                                Why favour top posting over inline responses?

                                                1. 15

                                                  Because that’s how email works in the world outside of free software mailing lists.

                                                  1. 4

                                                    True. Probably because it’s easier to compose an email using top posting. But when I send inline responses to my non-tech friends and colleagues who have been using Outlook and top posting their whole life, they usually appreciate it.

                                                  2. 2

                                                    It was just a reference to how HEY works - there’s only top posting there, presumably because it makes it easier to follow an email thread. At first I was annoyed by this, but now I kind of like it.

                                                    1. 6

                                                      Why does it make it easier to follow an email thread? Surely it is easier to follow a discussion if answers come after questions, reactions after statements.

                                                      1. 3

                                                        It’s really a UX problem. It’s really easy to mess up the formatting such that your inline replies are interpreted as part of the quoted text itself, and will appear collapsed by default to the recipient. So I never reply inline, because I want to make sure my full message is seen.

                                                        1. 2

                                                          It really depends on the message, though, right? We can’t operate with the assumptions that every email is just a list of questions that needs answers. Plus, even when you’re top posting you can copy bits from the original message to embed them in your answer and provide an additional context for me. For me the main point is that by sticking to top posting you make it very clear in which order a conversation unfolded. Just like with old-school paper letters.

                                                          1. 7

                                                            For me the main point is that by sticking to top posting you make it very clear in which order a conversation unfolded.

                                                            Threads do that better. It is also more manageable than different people coming up with ad-hoc quoting methods for top-posting (often using different colours, fonts, etc and saying things like “responses inline in green below”). Trying to sort through a conversation like that is much more difficult than it needs to be.

                                                    1. 22

                                                      I’m discovering we have a satire tag :)

                                                      1. 10

                                                         Thanks for opening that discussion. I’d really like burntsushi to come back. The invitation he got to delete his account is very passive-aggressive. It would be great to rephrase this and invite him to come back.

                                                        1. 3

                                                          My employer used to use HipChat, and I was envious of Slack. Now we use Slack, and I’m envious of Zulip. I don’t have any hope of being able to convince IT to change, though.

                                                          1. 4

                                                            Could be worse. You could have to use MS Teams. It is so bad that I’m almost dreaming of switching to Slack.

                                                            1. 10

                                                              Could be worse. You could use Slack and MS Teams and Yammer.

                                                            2. 2

                                                              Similar experience to you.

                                                              One of the reasons I left my old job was that MS teams was so awful and mandatory.

                                                              Now I use slack at work and sit in envy of Zulip.

                                                              “Could be worse”, I remind myself.

                                                              1. 1

                                                                Considering Slack’s per-seat pricing, I think Slack is far from building an empire that will stand the test of time.

                                                              1. 21

                                                                This is a neat backstory, glad to see more “behind the scenes” rust development.

                                                                To me, the person I feel doesn’t get enough credit (though he does get a lot) is Niko Matsakis. As I understand the progression of rust, it started as a higher level, green threaded ML variant of sorts, and ended as this low level systems programming language we know today. But the key thing that defines rust, I think, is the borrow checker ownership model, which I think is thanks mostly to Niko. So while Graydon gets the credit for creating rust, I almost feel that was a different language, and the true “father” of rust as we know it is Niko.

                                                                And then I get wondering what it would have been like had the language been designed around the borrow checker from the start, or if that had been bolted onto a different language. I wonder if a “C with borrowck” is possible and what that looks like. I personally love rust’s ML heritage and traits and iterators and RAII but I think it maybe turns off some hardcore low level and embedded developers, and they more than anyone are who we need to give memory safety to.

                                                                1. 19

                                                                  What is the key thing that defines Rust?

                                                                  Borrow checker is one candidate, but that’s an implementation. I think the key thing that defines Rust is its value. Rust’s value is Graydon’s contribution. Yes, Rust had an extremely different implementation, but it always had the same value. At least from the first public release to 1.0.

                                                                  The current website says “Rust is a language empowering everyone to build reliable and efficient software”, but that’s post-1.0 change. (I actually consider this the most significant post-1.0 change. I think it was almost a coup.)

                                                                  The previous website says “Rust is a systems programming language that runs blazingly fast, prevents segfaults, and guarantees thread safety”. That’s it. That’s Graydon’s contribution. It implies, Rust is not a simple language. Rust is not a language that is easy to learn. Rust is not a language that is fast to compile. To achieve “fast, memory safe, thread safe”, Graydon was ready to trade off everything else.

                                                                  To see the value is what defines Rust, consider a counterfactual: what is a simple language that is easy to learn and fast to compile? It is Go. The value is what differentiates Rust and Go, not particular implementation choices.

                                                                  1. 6

                                                                    A useful historic link about language values would be the first slide deck on Rust: http://venge.net/graydon/talks/intro-talk-2.pdf

                                                                    1. 1

                                                                       According to these slides, initial Rust was a compiled and statically typed Erlang with C-style syntax and OCaml-style semantics :) I was really excited by that approach, but if I understood correctly, it was incompatible with fast calling to C (because of the GC and the growable stacks required for lightweight threads). Then seamless integration with C was prioritized, and as a consequence the GC and the lightweight threads had to be removed, and without a GC the language needed another mechanism for automatic memory management, which led to the borrow checker. Today’s Rust is very different from what was originally envisioned.

                                                                      1. 2

                                                                         This is… not the whole story, because the Rust borrow checker preceded the removal of both the GC and green threads. In fact, one of the hardest problems faced by the design of the borrow checker was that it had to work with the GC. This is why the borrow checker is “extensible”, for example working fine with a reference-counted pointer implemented in the library.

                                                                        1. 1

                                                                           Thanks for following up on this. I didn’t know and that’s very interesting. What was the purpose of the borrow checker when there is a GC? For non-memory resources like file handles, etc.?

                                                                          1. 2

                                                                            Thread safety

                                                                  2. 7

                                                                     On traits and iterators: hypothetical memory safe C would insert bounds checks like everyone else including Rust. The primary motivation behind Rust’s iterators is bounds check elision, not syntax sugar. The primary motivation behind Rust’s traits is to support Rust’s iterators. Memory safe C without traits and iterators would be, say, 10% slower than Rust, or have lots of unsafe indexing.
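
                                                                     A minimal illustration of that motivation, in Rust (whether the per-element check is actually removed is up to the optimizer, so treat this as a sketch):

                                                                       fn sum_indexed(xs: &[u64]) -> u64 {
                                                                           let mut total = 0;
                                                                           for i in 0..xs.len() {
                                                                               total += xs[i]; // bounds-checked access (may or may not be optimized away)
                                                                           }
                                                                           total
                                                                       }

                                                                       fn sum_iter(xs: &[u64]) -> u64 {
                                                                           xs.iter().sum() // no indexing, so no per-element bounds check to elide
                                                                       }

                                                                       fn main() {
                                                                           let xs = vec![1, 2, 3];
                                                                           assert_eq!(sum_indexed(&xs), sum_iter(&xs));
                                                                       }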

                                                                    I agree about RAII. Zig-style defer would work too. (The difference is that defer is not bound to type.)

                                                                    1. 2

                                                                      I almost feel that was a different language, and the true “father” of rust as we know it is Niko.

                                                                      would like to read the blog post version of this.

                                                                      1. 8

                                                                         I am aware it is almost unintelligible today without context, but Niko’s two posts from 2012 are an “at the moment” record of this defining point in Rust history.

                                                                        Imagine never hearing the phrase aliasable, mutable again (November 2012) is about semantics of borrowing, and Lifetime notation (December 2012) is about syntax of borrowing. Note: none of eight(!) options discussed in syntax post is current syntax, although option 6 is close.

                                                                      2. 2

                                                                        I personally love rust’s ML heritage and traits and iterators and RAII but I think it maybe turns off some hardcore low level and embedded developers, and they more than anyone are who we need to give memory safety to.

                                                                        Tbh I’m kind of glad that it remains, and I’d be less enthusiastic about Rust if it wasn’t! I also think it’s really nice to bring these ideas to more systems programmers, who may have never been exposed to ML-style languages. It also makes it easier for languages that come after Rust to bring even more influences from ML into the mainstream (say, module systems for example).

                                                                        1. 2

                                                                          I wonder if a “C with borrowck” is possible and what that looks like.

                                                                          Cyclone was a research “safe C” language that might be of interest. Its region analysis has been cited as a predecessor/influence on the borrow checker, from my understanding.

                                                                        1. 4

                                                                          I think litestream is extremely interesting tech, but I’ve been bothered (and admittedly too lazy to test or read more) by this possibility:

                                                                           1. User PUTs logs.
                                                                           2. App generates id, writes to SQLite, commits.
                                                                           3. Plug gets pulled (or, in cloud terms, the instance/dyno/container) and the machine goes away.
                                                                           4. Litestream doesn’t stream the WAL segment to S3.

                                                                          On Heroku this would be bad, and the write completely gone when the app comes up again. So, I guess the assumption that needs to be made is that the disk is persisted across container runs?

                                                                          1. 3

                                                                            Yes, you need persistent volumes to ensure the last few seconds of committed data are fsynced to disk, which is the case on Fly, but not on Heroku.

                                                                            1. 4

                                                                              I agree with everything ngrilly said but I’ll also add that you can use Litestream as a library in a Go app if you want to selectively confirm that certain writes are sync’d to a replica before returning to the client. The API needs to be cleaned up some but there’s an example in this repo: https://github.com/benbjohnson/litestream-library-example

                                                                          1. 24

                                                                             I am confused about why the REST crowd is all over gRPC and the like. I thought the reason why REST became a thing was that they didn’t really think RPC protocols were appropriate. Then Google decides to release a binary (no less) RPC protocol and all of a sudden everyone thinks RPC is what everyone should do. SOAP wasn’t even that long ago. It’s still used out there.

                                                                            Could it be just cargo cult? I’ve yet to see a deployment where the protocol is the bottleneck.

                                                                            1. 14

                                                                               Because a lot of what is called REST ends up as something fairly close to an informal RPC over HTTP in JSON, maybe with an ad-hoc URI call scheme, and with these semantics, actual binary RPC is mostly an improvement.

                                                                               (Also, everyone flocks to Go for services and discovers that performant JSON is a surprisingly poor fit for that language.)

                                                                              1. 14

                                                                                 I’d imagine that the hypermedia architectural constraints weren’t actually buying them much. For example, not many folks even do things like cacheability well, never mind building generic hypermedia client applications.

                                                                                 But a lot of the time the bottleneck is usually around delivering new functionality. RPC-style interfaces are cheaper to build, as they’re conceptually closer to “just making a function call” (albeit one that can fail halfway through), whereas more hypermedia-style interfaces require a bit more planning. Or at least thinking in a way that I’ve not seen often.

                                                                                1. 10

                                                                                   There has never been much, if anything at all, hypermedia specific about HTTP. It’s just a simple text based stateless protocol on top of TCP. In this day and age, that alone buys anyone more than any binary protocol. I cannot reason as to why anyone would want to use a binary protocol over a human readable (and writeable) text one, except for very rare situations of extreme performance or extreme bandwidth optimisations. Which I don’t think are common to encounter even among tech giants.

                                                                                   Virtually every computing device has a TCP/IP stack these days. $2 microcontrollers have it. Text protocols were a luxury in the days when each kilobyte came with high costs. We are 20-30 years past that time. Today even in the IoT world HTTP and MQTT are the go-to choices for virtually everyone; no one bothers to buy into the hassle of an opaque protocol.

                                                                                   I agree with you, but I think the herd is taking the wrong direction again. My suspicion is that the whole REST hysteria was a success because it was JSON over HTTP, which are great, easy to grasp, and reliable technologies. Not because of the alleged architectural advantages, as you well pointed out.

                                                                                   SOAP does provide “just making a function call”. I think the reason why it lost to RESTful APIs was that requests were not easy to assemble without resorting to advanced tooling, and implementations in new programming languages were demanding. I do think gRPC suffers from these problems too. It’s all fun and games while developers are hyped “because Google is doing it”; once the hype dies out, I’m picturing this old embarrassing beast no one wants to touch, along the lines of GWT, App Engine, etc.

                                                                                  1. 9

                                                                                    I cannot reason as to why anyone would want to use a binary protocol over a human readable (and writeable) text one, except for very rare situations of extreme performance or extreme bandwidth optimisations.

                                                                                    Those are not rare situations, believe me. Binary protocols can be much more efficient, in bandwidth and code complexity. In version 2 of the product I work on we switched from a REST-based protocol to a binary one and greatly increased performance.

                                                                                    As for bandwidth, I still remember a major customer doing their own WireShark analysis of our protocol and asking us to shave off some data from the connection setup phase, because they really, really needed the lowest possible bandwidth.

                                                                                    1. 2

                                                                                      hypermedia specific about HTTP

                                                                                      Sure, but the framing mostly comes from Roy Fielding’s thesis, which compares network architectural styles, and describes one for the web.

                                                                                      But even then, you have the constraints around uniform access, cacheability and a stateless client, all of which are present in HTTP.

                                                                                      just a simple text based stateless protocol

                                                                                       The protocol might have comparatively few elements, but it’s just meant that other folks have had to specify their own semantics on top. For example, header values are (mostly) just byte strings. So in some sense, it’s valid to send Content-Length: 50, 53 in a response to a client. Interpreting that and maintaining synchronisation within the protocol is hardly simple.

                                                                                      herd is taking the wrong direction again

                                                                                       I really don’t think that’s a helpful framing. Folks aren’t paid to ship something that’s elegant, they’re paid to ship things that work, so they’ll not want to fuck about too much. And while it might be crude and inelegant, chunking JSON over HTTP achieved precisely that.

                                                                                      By and large gRPC succeeded because it lets developers ignore a whole swathe of issues around protocol design. And so far, it’s managed to avoid a lot of the ambiguity and interoperability issues that plagued XML based mechanisms.

                                                                                  2. 3

                                                                                    Cargo Cult/Flavour of the Week/Stockholm Syndrome.

                                                                                    A good portion of JS-focussed developers seem to act like cats: they’re easily distracted by a new shiny thing. Look at the tooling. Don’t blink, it’ll change before you’ve finished reading about what’s ‘current’. But they also act like lemmings: once the new shiny thing is there, they all want to follow the new shiny thing.

                                                                                    And then there’s the ‘tech’ worker generic “well if it works for google…” approach that has introduced so many unnecessary bullshit complications into mainstream use, and let slide so many egregious actions by said company. It’s basically Stockholm syndrome. Google’s influence is actively bad for the open web and makes development practices more complicated, but (a lot of) developers lap it up like the aforementioned Lemming Cats chasing a saucer of milk that’s thrown off a cliff.

                                                                                    1. 2

                                                                                      Partly for sure. It’s true for everything coming out of Google. Of course this also leads to a large userbase and ecosystem.

However, I personally dislike REST. I don’t think it’s a good interface style, and I prefer functions and actions over forcing everything (even if it’s sometimes done very well) into modifying a model or resource. But it also really depends on the use case. There certainly is standard CRUD stuff where it’s the perfect design, and that’s the most frequent use case!

However, I was really unhappy when SOAP essentially killed RPC-style interfaces, because it brought problems that are not inherent in RPC interfaces.

I really liked JSON-RPC as a minimal approach. Sadly it didn’t really pick up (only way later, inside Bitcoin, etc.). This led to lots of ecosystems and designs being built around REST.

Something that has also been very noticeable with REST being the de facto standard way of doing APIs is that oftentimes it’s not really followed. Many, I would say most, REST APIs have very RPC-style parts. There’s also a lot of conflating HTTP+JSON with REST, and RPC with protobufs (or at least some binary format). Sometimes those “mixed” pattern HTTP interfaces have very good reasons to be the way they are. Sometimes “late” feature additions simply don’t fit into the well-designed REST API and one would have to break a lot of rules anyway, leading to the question of whether the parts worth preserving justify their cost. But that’s a very specific situation that typically only arises years into a project, often triggered by the business side of things.

I was happy about gRPC because it made people give RPC another shot. At the same time, I am pretty unhappy about it being unusable for applications where web interfaces need to interact. Yes, there are “gateways” and “proxies”, and while they are probably well designed in one way or another, they come at a huge price, essentially turning the whole thing into a big hack, which is also a reason why there are so many gRPC-alikes now. None, as far as I know, has a big ecosystem. Maybe Thrift. And there are many approaches not mentioned in the article, like webrpc.

Anyways, while I don’t think RPC (and certainly not gRPC) is the answer to everything, I also don’t think RESTful services are, nor GraphQL.

I really would have liked to see what JSON-RPC would have turned into if it had gotten more traction, because I can imagine it working for many applications that now use REST. But this is more a curiosity about an alternate reality.

So I think that, like all Google projects (Go, TensorFlow, Kubernetes, early Angular, Flutter, …), gRPC has a huge cargo cult mentality around it. I do, however, think that there are quite a lot of people who would have loved to build something like it themselves, if that could have guaranteed it wouldn’t end up with only a single person or company using it.

I also think the cargo cult is partly the reason contenders don’t pick up. In cases where I use RPC over REST, I certainly default to gRPC simply because there’s an ecosystem. I think a competitor would have a chance, though, if it managed a way simpler implementation, which most do.

                                                                                      1. 1

I can’t agree more with that comment! I think the RPC approach is fine most of the time. Unfortunately, SOAP, gRPC and GraphQL are too complex. I’d really like to see something like JSON-RPC, with a schema language to define the types (like the Protobuf or GraphQL IDL), used in more places.
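For reference, the JSON-RPC 2.0 envelope really is small; here’s a rough Rust sketch of the request and response shapes (assuming serde and serde_json as dependencies, with a made-up method name):

```rust
use serde::{Deserialize, Serialize};
use serde_json::{json, Value};

// The whole JSON-RPC 2.0 request envelope: version tag, method, params, id.
#[derive(Serialize, Deserialize, Debug)]
struct Request {
    jsonrpc: String, // always "2.0"
    method: String,
    #[serde(default)]
    params: Value,
    id: u64,
}

// A response carries either `result` or an `error` object, plus the id.
#[derive(Serialize, Deserialize, Debug)]
struct Response {
    jsonrpc: String,
    #[serde(skip_serializing_if = "Option::is_none")]
    result: Option<Value>,
    #[serde(skip_serializing_if = "Option::is_none")]
    error: Option<Value>,
    id: u64,
}

fn main() -> serde_json::Result<()> {
    let req = Request {
        jsonrpc: "2.0".into(),
        method: "user.get".into(), // hypothetical method name
        params: json!({ "id": 42 }),
        id: 1,
    };
    let resp = Response {
        jsonrpc: "2.0".into(),
        result: Some(json!({ "name": "ada" })),
        error: None,
        id: 1,
    };
    println!("{}", serde_json::to_string_pretty(&req)?);
    println!("{}", serde_json::to_string_pretty(&resp)?);
    Ok(())
}
```

A schema language on top of that envelope is the part the spec never standardised, which is exactly the gap being wished away above.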

                                                                                        1. 2

                                                                                          Working in a place that uses gRPC quite heavily, the primary advantage of passing protobufs instead of just json is that you can encode type information in the request/response. Granted you’re working with an extremely limited type system derived from golang’s also extremely limited type system, but it’s WONDERFUL to be able to express to your callers that such-and-such field is a User comprised of a string, a uint32, and so forth rather than having to write application code to validate every field in every endpoint. I would never trade that in for regular JSON again.

                                                                                          1. 1

                                                                                            Strong typing is definitely nice, but I don’t see how that’s unique to gRPC. Swagger/OpenAPI, JSON Schema, and so forth can express “this field is a User with a string and a uint32” kinds of structures in regular JSON documents, and can drive validators to enforce those rules.
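However the contract is written down (proto, JSON Schema, or just a typed struct), the practical win is the same: shape errors get rejected at the decode step instead of in every handler. A small hedged sketch of that with serde, using invented field names:

```rust
use serde::Deserialize;

// The "User is a string plus a uint32" contract from the comments above,
// expressed as a plain typed struct (field names are invented here).
#[derive(Deserialize, Debug)]
#[serde(deny_unknown_fields)]
struct User {
    name: String,
    age: u32,
}

fn main() {
    // Well-typed input decodes into the struct...
    let ok: Result<User, _> = serde_json::from_str(r#"{"name": "ada", "age": 36}"#);
    assert!(ok.is_ok());

    // ...while a type mismatch is rejected at the boundary, before any
    // handler code runs, which is the guarantee being asked for above.
    let bad: Result<User, _> = serde_json::from_str(r#"{"name": "ada", "age": "36"}"#);
    assert!(bad.is_err());

    println!("{:?}", ok.unwrap());
}
```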

                                                                                    1. 1

                                                                                      Alternatively: GraphQL. Less dangerous than SQL, less tight coupling to the exact DB structure, but about as powerful.

                                                                                      Disclaimer: haven’t used it so far

                                                                                      1. 72

                                                                                        That’s a pretty major disclaimer!

                                                                                        1. 8

                                                                                          I’ve poked at GraphQL a couple times and repeatedly come to the conclusion that it makes a bunch of very domain specific trade-offs and can’t really be described as a query language at all.

Pretty much any form of accepting a subset of SQL is more general purpose, has better support for complex data models, and can do graph adjacency or pivot tables and such, which is where I’ve seen even most ORMs struggle.

                                                                                          I wish there were less clojure-flavored datalogs as query interfaces to be had.

                                                                                          1. 2

                                                                                            less clojure-flavored datalogs as query interfaces

                                                                                            Less?

                                                                                            1. 1

May I suggest Preql?

                                                                                              (warning: shameless self-promotion)

                                                                                            2. 8

                                                                                              GraphQL is not a query language. It’s an RPC API abstraction.

                                                                                              1. 7

                                                                                                I’m not sure “not even having joins” is “about as powerful”.

                                                                                                1. 3

                                                                                                  Indeed. I think the “QL” in GraphQL is misleading, unless you have a very basic definition of “query”.

                                                                                                  1. 2

                                                                                                    I’ve been using Hasura, and I can join to other tables (can’t do GROUP BY, though). I guess it depends on how you implement things?

                                                                                                    (There may be some limits to Hasura’s joining that I’m not aware of - I’ve only tried to do very basic “follow the foreign-key and fetch some fields from that table” kind of things.)

                                                                                                  2. 3

Except that one major difficulty of GraphQL is precisely resolving GraphQL queries to SQL queries in an efficient way, unless you use something like Hasura or similar, which will do the work for you.

                                                                                                    1. 1

We currently use postgraphile for reads and some basic write operations. Anything more complicated, we make an API for. Seems to work pretty well overall.

                                                                                                    1. 4

FWIW, Swift has this too. It’s not limited to generics; in Swift you can define multiple normal functions with the same parameters (if any) but different return types, and the compiler will do its best to choose the right one.

                                                                                                      On the downside, it must get complicated for the compiler to solve these puzzles, so I’m sure this feature contributes to both Rust’s and Swift’s infamously slow compile times.
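The closest Rust gets to that is inference driving a generic return position; a small sketch, since it’s the same “work backwards from the expected type” idea described above for Swift:

```rust
use std::collections::HashMap;

fn main() {
    let pairs = [("a", 1), ("b", 2)];

    // The same `collect()` call compiles differently depending on the type
    // the result is assigned to; the compiler works backwards from the
    // expected return type to pick the implementation.
    let as_map: HashMap<&str, i32> = pairs.iter().copied().collect();
    let as_vec: Vec<(&str, i32)> = pairs.iter().copied().collect();

    // Same with `parse`: the annotated target type decides which FromStr
    // implementation runs.
    let n: u32 = "42".parse().unwrap();
    let x: f64 = "42".parse().unwrap();

    println!("{:?} {:?} {} {}", as_map, as_vec, n, x);
}
```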

                                                                                                      1. 5

                                                                                                        Rust’s compile times aren’t due to issues like selecting the right return type, but rather for things like code generation. Monomorphization as a strategy for implementing statically-dispatched parametric polymorphism requires the generation of distinct copies of each generic type or function based on the concrete types it’s actually used with. The time to perform this code generation and update call sites can be long. Monomorphization can also lead to code bloat, though there are techniques to manage that bloat.
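A minimal illustration of that (function and type names are made up):

```rust
// One generic definition...
fn largest<T: PartialOrd + Copy>(items: &[T]) -> T {
    let mut max = items[0];
    for &item in &items[1..] {
        if item > max {
            max = item;
        }
    }
    max
}

fn main() {
    // ...instantiated with two concrete types...
    let a = largest(&[1u32, 5, 3]);
    let b = largest(&[1.0f64, 0.5, 2.5]);
    // ...means the compiler emits roughly two separate functions,
    // conceptually `largest_u32` and `largest_f64`, each with its own
    // machine code. Every additional instantiation adds another copy,
    // which is where the codegen time and binary size go.
    println!("{a} {b}");
}
```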

                                                                                                        I don’t know as much about Swift, but I doubt that type resolution is a major cause of slow compilation.

                                                                                                        1. 3

I remember at least one case where type resolution was a major compilation-time slowdown: it was checking 11k+ alternatives for a common function before it was fixed.

                                                                                                          1. 2

                                                                                                            Your link about using an inner function to limit the code generated by monomorphization is super interesting.
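For anyone who hasn’t read it, the pattern is roughly this (a sketch, not the post’s exact code): the generic function keeps only the cheap conversion, and the real work lives in a non-generic inner function that gets compiled exactly once.

```rust
use std::path::{Path, PathBuf};

// One thin copy of this wrapper is generated per concrete `P`...
pub fn load_config<P: AsRef<Path>>(path: P) -> std::io::Result<String> {
    // ...but the bulk of the code is in this non-generic inner function,
    // which is compiled only once regardless of how many `P`s exist.
    fn inner(path: &Path) -> std::io::Result<String> {
        std::fs::read_to_string(path)
    }
    inner(path.as_ref())
}

fn main() {
    // Both calls share the same `inner`, despite different argument types.
    let _ = load_config("config.toml");
    let _ = load_config(PathBuf::from("config.toml"));
}
```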

                                                                                                            1. 2

                                                                                                              Glad you like it! I wrote that post.

                                                                                                        1. 7

Welcome to Rust! If you’re interested and want to dig deeper on the topic, search for “monomorphic dispatch”, which is what you’re describing in the post. Monomorphic dispatch can happen in either the return position (as in the post) or the arguments. Monomorphic dispatch is a form of “generics” (and sometimes simply referred to as such) where the compiler looks at the type placeholder in the function signature and all (possible) uses of the function, then generates individual functions with the correct signature. So for fn foo<T>() -> T with two possible types (for simplicity, let’s say the concrete types are Bar and Baz), the compiler generates two functions: fn foo_Bar() -> Bar and fn foo_Baz() -> Baz. Hence the mono in monomorphic (i.e. one function per concrete type in the actual program). This is in contrast with “dynamic dispatch”, where the concrete type isn’t used at compile time; instead the compiler uses indirection (via pointers) to have just one version of the function, and at runtime follows the indirection to arrive at the concrete type’s actual implementation. Rust uses “Trait Objects” (i.e. dyn Trait) for dynamic dispatch.

                                                                                                          To recap, monomorphic dispatch is where the compiler generates individual functions for each concrete type, whereas dynamic dispatch is where the compiler uses indirection with a single function that then dynamically (at runtime) finds/utilizes the correct implementation.

Most often, monomorphic dispatch is preferred for performance, but it can suffer in readability and code bloat (i.e. binary size), while dynamic dispatch takes a runtime hit for the indirection but makes the code easier to read and gives a smaller binary footprint. This isn’t always the case, but generally speaking I’d say it’s a decent rule to start with.
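A small side-by-side sketch of the two strategies, with made-up names:

```rust
trait Greeter {
    fn greet(&self) -> String;
}

struct English;
struct French;

impl Greeter for English {
    fn greet(&self) -> String { "hello".to_string() }
}

impl Greeter for French {
    fn greet(&self) -> String { "bonjour".to_string() }
}

// Static (monomorphic) dispatch: one copy of this function is generated per
// concrete `G`, and each copy calls its `greet` directly.
fn announce_static<G: Greeter>(g: &G) -> String {
    g.greet()
}

// Dynamic dispatch: a single copy exists; the right `greet` is found at
// runtime through the trait object's vtable.
fn announce_dyn(g: &dyn Greeter) -> String {
    g.greet()
}

fn main() {
    println!("{}", announce_static(&English)); // monomorphized for English
    println!("{}", announce_static(&French));  // monomorphized for French
    println!("{}", announce_dyn(&English));    // same function both times,
    println!("{}", announce_dyn(&French));     // resolved via indirection
}
```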

                                                                                                          1. 13

                                                                                                            I’m going to quibble on the terminology here a bit. The type of polymorphism you’re describing is parametric polymorphism, and monomorphization is one strategy for dispatching parametrically polymorphic functions statically. You could call the strategy “monomorphic dispatch” as you do here; the distinction I’m drawing is between the kind of polymorphism (parametric polymorphism) and the implementation strategy (static dispatch with monomorphization / monomorphic dispatch).

                                                                                                            Other common kinds of polymorphism, since I’m getting into it:

                                                                                                            • Subtype polymorphism: polymorphism based on the substitutability of one type for another. Different languages determine the substitution relation in different ways. Languages with subclasses and superclasses establish a subtype relationship that way (subclasses may be used wherever the superclass is expected); Rust uses subtype polymorphism for lifetime bounds, treating any lifetime longer than the required bound as substitutable for it.
                                                                                                            • Ad hoc polymorphism: polymorphism based on the number and types of the inputs to a function. This is not part of the type system, but is instead about the selection of functions for dispatch; it is sometimes called “operator overloading,” “specialization,” or “multiple dispatch” (all different flavors of the same thing).
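Rough Rust illustrations of those categories, for anyone who wants them side by side (the types and functions here are invented for the example):

```rust
use std::ops::Add;

#[derive(Clone, Copy, Debug)]
struct Meters(f64);

// Ad hoc polymorphism: which `add` runs depends on the operand types.
// (Rust expresses this through per-type trait impls rather than overloading.)
impl Add<Meters> for Meters {
    type Output = Meters;
    fn add(self, rhs: Meters) -> Meters { Meters(self.0 + rhs.0) }
}

impl Add<f64> for Meters {
    type Output = Meters;
    fn add(self, rhs: f64) -> Meters { Meters(self.0 + rhs) }
}

// Subtype polymorphism via lifetimes: any reference living at least as long
// as 'a is substitutable here, including a `'static` one.
fn first_word<'a>(s: &'a str) -> &'a str {
    s.split_whitespace().next().unwrap_or("")
}

fn main() {
    // Parametric polymorphism: one generic iterator/collection machinery,
    // instantiated here for `usize`.
    let lengths: Vec<usize> = ["a", "bb", "ccc"].iter().map(|s| s.len()).collect();

    let a = Meters(1.5) + Meters(2.0); // picks the Meters + Meters impl
    let b = Meters(1.5) + 0.5_f64;     // picks the Meters + f64 impl
    let w: &'static str = "hello world";

    println!("{lengths:?} {a:?} {b:?} {}", first_word(w));
}
```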
                                                                                                            1. 1

The article is about parametric polymorphism in general. Your comment is about something different, which is how to compile it using either monomorphization (like Rust or C++) or some kind of dynamic dispatch (like Java or OCaml, I think).

                                                                                                            1. 3

Excellent post. So basically the author wants a relational database with a query language more suited to programmatically building queries (like MongoDB, which is mentioned), table definitions based on Protocol Buffers (this is how Google F1 works - they have a few great papers about it), query planner hints to avoid performance cliffs, and automated zero-downtime migrations (again, Google F1 has an interesting strategy). I’d say this is an ambitious program, but nothing that can’t be done with an evolution of MySQL and PostgreSQL. I pretty much agree with all the concerns, but I don’t think they are bad enough to make me abandon relational databases and use a KV or document database instead.