Threads for ngrilly

    1. 5

      I love Helix. My config is 10 lines long. I don’t feel the need for any extensions. Yes, the key bindings are different from Vim’s, but the select-first, act-second pattern makes sense.

      In any case, I read code more than I write code, so its great project hierarchy navigation and search features, available out of the box, greatly compensate for any slowdowns from re-learning the normal-mode commands.

      It also works equally well on all platforms, since in my case the base program alone is sufficient.

      1. 1

        What do you mean by “project hierarchy navigation”?

        1. 2

          Your definition might not be the same as mine, but their very straightforward file picker (Space-f), which automatically searches from the root directory of your project (where .git lives) with fuzzy file-name matching and a preview pane, is really good IMHO.

          I work on a multi-million-LOC project and Helix parses it near-instantaneously, with nothing extra to install (unlike Emacs’ projectile, for example).

          Their “global search in workspace folder” (Space-/), which I think uses the crate at the core of ripgrep, is also really efficient.

        2. 1

          I’m guessing op does something similar to me:

          cd project
          hx .
          

          From there, you land in a file list (otherwise under “space f”) with fuzzy search. Similarly, “space b” gives you a search over open buffers/files, “space /” holds global search, and Helix handles “go to definition” via language support.

          You can probably get something similar in neovim via plugins, but it works well out of the box for hx.

    2. 30

      What if Signal got a post-phone-number makeover? ;]

      1. 8

        I’d very much like that. Apparently it’s already in the codebase; they just need to turn it on. Sick of waiting for this, to be honest.

        1. 10

          I don’t think that the code in the codebase is actually doing the right thing. Signal has inherited a design flaw from the phone network and email: they conflate an identity with a capability. Knowing my phone number should not automatically give you the right to call me.

          The thing I want from Signal is the ability to create single-use capabilities that authorise another party to perform key exchange. That lets me generate tokens that I can hand to people (ideally by showing them a QR code from the UI) that let them call me from their Signal account but don’t let them (or whatever social networking malware they’ve granted access to their address book) pass on the ability to call me. Similarly, I want to be able to give a company a token that lets them call me but doesn’t let them share that ability with third parties.

          This would also significantly reduce spam. If I have someone’s phone number in my address book and they have mine in theirs, access can be granted automatically, but anyone else would need to be authorised before they can send me messages. Spam fighting is the main reason they give for keeping the server code secret, but it is only necessary because of a fundamental design flaw in the protocol.

          Unfortunately, Signal wants to add new kinds of identifiers but keep conflating them with capabilities, rather than fixing the problem.

          Adding new identifiers will be useful in group chats (currently, I can’t join a group chat without sharing my phone number with everyone there), letting me have a per-group identifier, but that doesn’t help much if one malicious person in the group can leak that identifier and then any spammer can use it to contact me. If they built a capability mechanism then I could authorise members of the group to send me group messages but not authorise anyone else to contact that identity and, if I wanted to have a private chat with a group member, explicitly authorise that one person to send me private messages.

          Most of the infrastructure for doing this was already added for sealed senders but I haven’t seen any sign that anyone is working on it.
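
          A minimal sketch of the single-use flavour of this, with every name and detail hypothetical (this is emphatically not Signal’s actual code or API), might look like:

          package capability

          import (
          	"crypto/rand"
          	"encoding/base64"
          	"errors"
          	"sync"
          )

          // Store tracks unspent single-use contact tokens.
          type Store struct {
          	mu      sync.Mutex
          	unspent map[string]bool
          }

          func NewStore() *Store {
          	return &Store{unspent: make(map[string]bool)}
          }

          // Issue mints a fresh token that the account owner can hand out,
          // e.g. rendered as a QR code in the UI.
          func (s *Store) Issue() (string, error) {
          	buf := make([]byte, 32)
          	if _, err := rand.Read(buf); err != nil {
          		return "", err
          	}
          	tok := base64.RawURLEncoding.EncodeToString(buf)
          	s.mu.Lock()
          	defer s.mu.Unlock()
          	s.unspent[tok] = true
          	return tok, nil
          }

          // Redeem consumes a token exactly once; only a successful redemption
          // authorises the server to broker a key exchange with the issuer,
          // so a leaked or forwarded token cannot be reused to contact me.
          func (s *Store) Redeem(tok string) error {
          	s.mu.Lock()
          	defer s.mu.Unlock()
          	if !s.unspent[tok] {
          		return errors.New("capability invalid or already spent")
          	}
          	delete(s.unspent, tok)
          	return nil
          }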

          1. 2

            single-use capabilities that authorise another party to perform key exchange … ideally by showing them a QR code from the UI

            Recently encountered something that works exactly like that btw: https://simplex.chat

            1. 1

              Super interesting. I’d heard of SimpleX but never looked into it much; their white paper is an interesting read so far.

        2. 10

          What it illustrates is that we simply can’t rely on a centralized service.

          I strongly believe that Moxie is doing all he can in good faith. That Signal is “good”.

          But any centralized authority, even if benevolent, cannot be a long term solution. Even if it is “easier”.

          1. 8

            There are legitimate usability and UX problems with federated and/or decentralised chat platforms, as well as more technical cryptographic hurdles compared to a centralised solution. However, I agree wholeheartedly with your point: these are just problems that need to be solved before any decentralised messaging system is accessible and seamless enough for “normal” users.

            Also, if memory serves, Moxie no longer works on Signal. I think he’s left.

            1. 8

              Yes, Moxie wrote at length about the challenges of federation. The main one being the difficulty of coordinating changes and improvements.

              In addition to UX, if Signal were widely federated, it might be 100x harder to add PQC like they just did, if it involved convincing every Signal server admin to upgrade.

              Rightly or wrongly, federated systems are more ossified, and in the case of something like Signal, that presents future security risks.

              1. 2

                In addition to UX, if Signal were widely federated, it might be 100x harder to add PQC like they just did

                The change primarily (or even only) affects end-to-end components, meaning the server infrastructure is minimally (or not at all) affected. 100x harder it definitely is not.

                federated systems are more ossified

                But that is for ideological reasons, not technological ones. Federated systems often emphasise compatibility - that isn’t a technical requirement though. If you are in control of the primary server as well as the main client, you can force changes anyway. It raises the bar for deployments in that federation but that’s a good thing.

                1. 4

                  100x harder it definitely is not.

                  I dunno, email is the ultimate federated communication platform, and we still don’t have widespread encrypted email (without relying on a central provider). So maybe it’s not harder because of the server software, but it sure seems a lot harder to me.

                2. 3

                  What federated systems have E2EE enabled? I’m genuinely curious, because AFAIK systems like Matrix and Mastodon don’t. But I may be wrong.

                    1. 2

                      I could swear there was an article on here just recently about how E2EE in Matrix adds a ton of complexity.

                    2. 1

                      Thanks for replying. I don’t get why people get their panties in a twist over Signal when these alternatives exist.

                      1. 1

                        Matrix is pretty awful from a normal user’s PoV: slow, inconsistent, buggy. I think that’s why Signal is much more widely used.

                3. 2

                  I get what you’re saying, but federated systems have much larger consequences than just the server infrastructure. Perhaps I should have said “centralized” instead, since the relevant issue is that Signal is solely responsible for all server and client code. They don’t need to do the slow business of coordination, which, as we’ve seen from older systems like email/IRC/Jabber, tends to mean improvements take a long time before they can be relied upon.

                4. 1

                  In another part of lobste.rs right now, Mastodon is being scorched for not acceding to each and every demand put to it by other members of the fediverse. If Mastodon was dominant enough to unilaterally enforce, say, E2EE on ActivityPub, is that decentralized? Would that be a popular move?

            1. 1

              My understanding was that he was stepping down as CEO but still very involved with the project. I may be totally wrong on this.

              The centralization of Signal and the refusal of any alternative client connecting to the central signal server is a strong decision by Moxie, for a lot of technical reasons I think I understand (I simply disagree that those technical decisions should take precedence over their moral consequences). But, at least, Moxie has a real, strong and opinionated ethic.

              I hope that whoever comes next will keep it that way. It is so easy to be lured by the blinking lights when you start to have millions of users. That’s why we should always consider Signal a temporary solution. It is doomed from the start, by design. In one year or ten, it will be morally bankrupt.

              The opposite could be said of the Fediverse. While the official Mastodon project has already shown signs of being “bought”, Mastodon is not needed to keep the fediverse going. It could be (and already is) forked. Or completely different implementations can be used (Pleroma is one of the most popular).

              1. 1

                refusal of any alternative client connecting to the central signal server is a strong decision by Moxie

                Whenever I hear this I think of how Whisperfish is a thing and how I should look at https://molly.im/

                1. 2

                  Those forks were, at first, heavily criticized. If I remember correctly, they were even briefly blocked.

                  Due to social pressure, Signal now mostly ignores them, but they are really not welcome.

        3. 2

          I really want to use Signal, and recommend it to my friends and families, but I’m also sick of waiting for them to offer end-to-end encrypted backups on iPhone (it’s apparently possible on Android).

        1. 2

          Not going to happen without in-browser code verification, which needs quite a lot of coordination between standardisation bodies and browser vendors. WhatsApp’s approach is not enough.

          1. 1

            Running a local server that had a browser interface would be no problem.

            1. 1

              Any program that has network access can listen on ports, so if any malicious code grabs localhost:1234 before Signal does, it gets all the cookies, even if it can’t access your files.

              1. 1

                Isn’t this more a concern about the security of the machine itself? Wouldn’t keyloggers and the like be a bigger concern?

          2. 1

            The only thing they’d need is to add a “secure mode” to Service Workers, which would prevent all bypasses. The difficulty is of course preventing the abuse of it for persistent client-side takeovers on compromised websites; I don’t know if a permission dialog would be good enough since people don’t actually read what they say.

            1. 1

              What if they could be signed, and could store data readable only by workers signed with the same key?

        2. 2

          Me too. My browser is at least decently accessible to me, the Signal desktop client is not.

    3. 2

      If only this could convince GitHub to make their code review tool more similar to Gerrit (especially when it comes to reviewing and updating a series of dependent changes).

    4. 24

      I’ve interacted with the LLVM project only once (an attempt to add a new clang diagnostic), and my experience with Phabricator was a bit painful (in particular, the arcanist tool). Switching to GitHub will certainly reduce friction for (new) contributors.

      However, it’s dismaying to see GitHub capture even more critical software infrastructure. LLVM is a huge and enormously impactful project. GitHub now holds an awful lot of eggs in its basket. The centralization of so much of the software engineering industry into a single mega-corp-owned entity makes me more than a little uneasy.

      1. 10

        There are so many alternatives they could have chosen if they wanted the pull/merge request model. It really is a shame they ended up where they did. I’d love to delete my Microsoft GitHub account just like I deleted my Microsoft LinkedIn account, but the lock-in created by all of these projects means that to participate in open source, I need to keep a proprietary account that’s training on all of our data, upselling things we don’t need, & turning a code forge into a social media platform with reactions + green graphs to induce anxiety + READMEs you can’t read anymore since it’s all about marketing (inside their GUI) + Sponsors (which should be good, but they’re skimming their cut of course) + etc.

        1. 4

          It really is a shame they ended up where they did.

          If even 1% of the energy that’s spent on shaming and scolding open-source maintainers for picking the “wrong” infrastructure was instead diverted into making the “right” infrastructure better, this would not be a problem.

          1. 2

            Have you used them? They’re all pretty feature-complete. The only difference really is the alternatives aren’t a social network like Microsoft GitHub & don’t have the network effect.

            It’s the same with chat apps—they can all send messages, voice/video/images, replies/threads. There’s no reason to be stuck with WhatsApp, Messenger, or Telegram, but people stay since their network is there. So you need to get the network to move.

            1. 3

              The only difference really is the alternatives aren’t a social network like Microsoft GitHub & don’t have the network effect.

              And open-source collaboration is, in fact, a social activity. This suggests an area where alternatives need to focus some time and effort, rather than (again) scolding and shaming already-overworked maintainers who are simply going where the collaborators are.

              1. 2

                Breaking the word “social” out of “social media” means we aren’t even talking about the same thing. It’s a social network a la Facebook/Twitter, with folks focusing on how many stars they have, how green their activity bars are, how flashy their RENDERME.md file is, scrolling feeds, avatars, Explore—all to keep you on the platform. And as a result you can hear the anxiety in many developers about how their Microsoft GitHub profile looks—as much as you hear folks obsessing about their TikTok or Instagram comments. That social anxiety should have little place in software.

                Microsoft GitHub’s collaboration system isn’t special & doesn’t even offer a basic feature like threading; replying to an inline-code comment via email puts a new reply on the whole merge request, and there are other bugs. For collaboration, almost all of the alternatives have a ticketing system, with some having Kanban & additional features—but even then, a dedicated (hopefully integrated) ticketing system, forum, mailing list, or libre chat option can offer a better, tailored experience.

                My suggestion is that open source dogfooding on open source leads to better open source & more contributions, rather than allowing profit-driven entities to try to gobble up the space. In the case of these closed platforms you as a maintainer are blocking off an entire part of your community that values privacy/freedom or those blocked by sanctions while helping centralization. The alternatives are in the good-to-good-enough category, so there’s nothing to lose, and it opens up collaboration to a larger audience.

                But I’ll leave you with a quote

                Choosing proprietary tools and services for your free software project ultimately sends a message to downstream developers and users of your project that freedom of all users—developers included—is not a priority.

                — Matt Lee, https://www.linuxjournal.com/content/opinion-github-vs-gitlab

                1. 3

                  In the case of these closed platforms you as a maintainer are blocking off an entire part of your community that values privacy/freedom or those blocked by sanctions while helping centralization.

                  The population of potential collaborators who self-select out of GitHub for “privacy/freedom”, or “those blocked by sanctions”, is far smaller than the population who actually are on GitHub. So if your goal is to make an appeal based on size of community, be aware that GitHub wins in approximately the same way that the sun outshines a candle.

                  And even in decentralized protocols, centralization onto one, or at most a few, hosts is a pretty much inevitable result of social forces. We see the same thing right now with federated/decentralized social media – a few big instances are picking up basically all the users.

                  But I’ll leave you with a quote

                  There is no number of quotes that will change the status quo. You could supply one hundred million billion trillion quadrillion octillion duodecillion vigintillion Stallman-esque lectures per femtosecond about the obvious moral superiority of your preference, and win over zero users in doing so. In fact, the more you moralize and scold the less likely you are to win over anyone.

                  If you genuinely want your preferred type of code host to win, you will have to, sooner or later, grapple with the fact that your strategy is not just wrong, but fundamentally does not grasp why your preferences lost.

                  1. 2

                    Some folks do have a sense of morality to the decisions they make. There are always trade offs, but I fundamentally do not agree that the tradeoffs for Microsoft GitHub outweigh the issue of using it. Following the crowd interests me less than being the change I & others would like to see. Sometimes I have run into maintainers who would like to switch but are afraid of whether folks would follow them, & are then reassured that the project & collaboration will continue. I see a lot of positive collaboration on SourceHut ‘despite’ not having the social features, with collaboration happening via email + IRC, & it’s really cool. It’s possible to overthrow the status quo—and if the status quo is controlled by a US megacorp, yeah, let’s see that change.

                    1. 2

                      Sometimes I have run into maintainers who would like to switch but are afraid of whether folks would follow them, & are then reassured that the project & collaboration will continue.

                      But this is a misleading statement at best. Suppose that on Platform A there are one million active collaborators, and on Platform B there are ten. Sure, technically “collaboration will continue” if a project moves to Platform B, but it will be massively reduced by doing so.

                      And many projects simply cannot afford that. So, again, your approach is going to fail to convert people to your preferred platforms.

                      1. 2

                        I don’t see caring about user privacy/freedoms & shunning corporate control as merely a preference like choosing a flavor of jam at the market. And if folks aren’t voicing an opinion, then the status quo would remain.

                        1. 4

                          I don’t see caring about user privacy/freedoms & shunning corporate control as merely a preference

                          You seem to see it as a stark binary where you either have it or you don’t. Most people view it as a spectrum on which they make tradeoffs.

                          1. 2

                            There are always trade offs, but I fundamentally do not agree that the tradeoffs for Microsoft GitHub outweigh the issue of using it.

                            Already mentioned it. This case is a clear ‘not worth it’ because the alternatives are sufficient & the social network part is more harmful than good.

                            1. 3

                              the social network part is more harmful than good.

                              I think you underestimate the extent to which social features get and keep people engaged, and that the general refusal of alternatives to embrace the social nature of software development is a major reason why they fail to “convert” people from existing popular options like GitHub.

                              1. 2

                                To clarify, are you saying that social gamification features like stars and colored activity bars are part of the “social nature of software development” which must be embraced?

                              2. 0

                                Would you clarify?

          2. -1

            And yet here you are, shaming and scolding.

        2. 1

          What alternatives do you have in mind?

          1. 7

            Assuming they wanted to move specifically to Git & not a different DVCS, LLVM probably would have the resources to run a self-hosted Forgejo instance (what ‘powers’ Codeberg). Forgejo supports that pull/merge request model—and they are working on the ForgeFed protocol, which as a bonus would allow federation, meaning folks wouldn’t even have to create an account to open issues & participate in merge requests; requiring an account is a common criticism of these platforms (i.e. moving from closed, proprietary, megacorp Microsoft GitHub to open-core, publicly-traded, VC-funded GitLab is in many ways a lateral move at the present even if self-hosted since an account is still required). If pull/merge request + Git isn’t a requirement, there are more options.

            1. 1

              (i.e. moving from closed, proprietary, megacorp Microsoft GitHub to open-core, publicly-traded, VC-funded GitLab is in many ways a lateral move at the present even if self-hosted since an account is still required)

              How do they manage to require you to make an account for self-hosted GitLab? Is there a fork that removes that requirement?

              1. 3

                Self-hosting GitLab does not require any connection to GitLab computers. There is no need to create an account at GitLab to use a self-hosted GitLab instance. I’ve no idea where this assertion comes from.

                One does need an account to contribute on a GitLab instance. There is integration with authentication services.

                Alternatively, one could wait for the federated protocol.

                In my personal, GitHub-avoiding, experience, I’ve found that using mail to contribute usually works.

                1. 1

                  One does need an account to contribute on a GitLab instance.

                  That’s what I meant… an account is required for the instance. With ForgeFed & mailing lists, no account on the instance is required. But the news from 1–2 weeks ago was about trying to get some form of federation into GitLab. The assertion was likely a complaint about needing to create accounts on all of the self-hosted options.

      2. 8

        However, it’s dismaying to see GitHub capture even more critical software infrastructure. LLVM is a huge and enormously impactful project. GitHub has many eggs in their basket. The centralization of so much of the software engineering industry into a single mega-corp-owned entity makes me more than a little uneasy.

        I think the core thing is that projects aren’t in the “maintain a forge” business, but the “develop a software project” business. Self-hosting is not something they want to be doing, as you can see from the maintenance tasks mentioned in the article.

        Of course, then the question is, why GitHub instead of some other managed service? It might be network effect, but honestly, it’s probably because it actually works mostly pretty well - that’s how it grew without a network effect in the first place. (Especially on a UX level. I did not like having to deal with Phabricator and Gerrit last time I worked with a project using those.)

        1. 5

          I would not be surprised if GitHub actively courted them as hostees. It’s a big feather in GH’s cap and reinforces the idea that GH == open source development.

          1. 8

            I think the move started on our side, but GitHub was incredibly supportive. They added a couple of new features that were deal breakers and they waived the repo size limits.

        2. 2

          There are Codeberg & others running Forgejo/Gitea, as well as SourceHut & GitLab, which are all Git options that need neither Microsoft GitHub nor self-hosting. There are others for non-Git DVCSs. The Microsoft GitHub UI is slow, breaks all my browser shortcuts, and has upsell ads all throughout. We aren’t limited to “if not Microsoft GitHub, then self-host”.

          1. 3

            This is literally what I addressed in the second paragraph of my comment.

            1. 2

              Not arguing against you, but with you, by showing examples.

    5. 38

      This is the silliest thing to argue about ever and I can’t even imag–

      oh right French puts a space before sentence-ending punctuation and it bugs me so much grr argh

      ahem, as I was saying, I can’t believe anyone would take the time to worry about this. :-P

      1. 11

        In English we put tabs after punctuation so everything lines up & readers can adjust the width for what works for them.

        1. 5

          Only in countries that use metric punctuation though. In America, we use the imperial punctuation space which equates to about 0.254 metric tabs worth of spaces.

          1. 2

            And yet somehow the British are still using Imperial tabs, defined by the width of the current king/queen’s pinky finger.

      2. 9

        This is even worse: we French people put a space before sentence-ending punctuation marks (like ?!:), except for the dot, where the space goes only after it. Despite being French, I think this is weird and prefer the English way :)

        1. 6

          The rule I was taught is the space comes before and after a punctuation mark if and only if it is non-connex: ?!:;«» but not ,.()/

          1. 1

            What does non-connex mean?

            1. 1

              Sorry I meant disconnected/non-connected. It means it’s in two parts that don’t touch: https://en.wikipedia.org/wiki/Connectedness

              1. 1

                Thanks! I’ve been taught the same rule, except for the “disconnected” concept :)

        2. 2

          And that is why we have &nbsp; :)

    6. 1

      This is ambitious! Written in Jai, a brand new programming language not even released publicly yet. Supporting Linux/macOS/Win from the start. Rendering a GUI instead of a text UI. Curious to see where this will go.

    7. 1

      I hate JIRA, and I was the manager :) We replaced it with Linear.app, which is fantastic. There are several great alternatives with customizable workflows.

    8. 2

      Really interesting point on structural versus nominal typing. Worth a clarification in the Zig language reference.

      Update: I just re-read the Zig language reference, and it’s actually quite clear that a struct type cannot be coerced to another struct type, except for anonymous struct literals.
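
      For comparison, Go draws a loosely similar line: struct types are nominal, but an anonymous struct value coerces to a named struct type with identical fields. A quick sketch:

      package main

      import "fmt"

      type Point struct{ X, Y int }

      func main() {
      	// Anonymous struct value: assignable to Point because the source
      	// type is unnamed and the fields match.
      	var p Point = struct{ X, Y int }{1, 2}

      	type Vec struct{ X, Y int }
      	v := Vec{3, 4}
      	// var q Point = v // rejected: two named types never coerce implicitly
      	q := Point(v) // but an explicit conversion is fine: identical underlying types
      	fmt.Println(p, q)
      }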

      1. 4

        Yeah it seems close to what C does, and possibly inspired by it? I remember being surprised to be able to make anonymous types in C, then realising they weren’t structural when I tried to exploit that.

    9. 3

      I’m totally late to this lobster thing

      1. 1

        Edit: My site has been lobstered for the first time! I think my writing is too simple, and the concept of putting types in terms may be hard to understand. For more information, please re-read.

        Eh, don’t worry, most of us here are bad writers :) But I find simple, direct styles of writing charming, especially for technical articles: it allows one to focus on the meaty details without having to parse lots of prose.

        I can’t comment on how understandable your explanations of Zig’s type system were, though, as I already write a lot of Zig and have used these patterns extensively in the past.

      2. 1

        Your writing is not “too simple” :) Your post was very clear to me. I’m not sure what you mean by “inline function” though. The proposals you linked related to that have been recently rejected by Andrew.

        1. 1

          I’m not sure what you mean by “inline function” though.

          In C++ it’s called “closure literal” I think. I’ve updated the article!

          The proposals you linked related to that have been recently rejected by Andrew.

          It will be there, I think. I once tried to add the syntax myself, but the AST parser code was too overwhelming. Now that Zig is undergoing changes in that area, I might try it again in the future.

          Function literals (inline expressions) allow some nice API designs. See FTXUI for an example.

          1. 2

            It will be there, I think.

            Did you see this comment from Andrew:

            I have rejected both of these proposals. In Zig, using functions as lambdas is generally discouraged. It interferes with shadowing of locals, and introduces more function pointer chasing into the Function Call Graph of the compiler. Avoiding function pointers in the FCG is good for all Ahead Of Time compiled programming languages, but it is particularly important to zig for async functions and for computing stack upper bound usage for avoiding stack overflow. In particular, on embedded devices, it can be valuable to have no function pointer chasing whatsoever, allowing the stack upper bound to be statically computed by the compiler. Since one of the main goals of Zig is code reusability, it is important to encourage zig programmers to generally avoid virtual function calls. Not having anonymous function body expressions is one way to sprinkle a little bit of friction in an important place.

            1. 2

              Thanks! I did not see that.

    10. 19

      Author here. AMA.

      1. 10

        No questions but thanks for an interesting post. I’ve reposted it on lemmy.

      2. 5

        Do you have a particular pet peeve or favorite footgun in Go?

        Alternatively, when you use other languages after writing lots of Go does anything tend to jump out and make you say “oh goddammit”? Either because you wanted X feature/idiom/lib that Go has and the other language doesn’t, or vice versa?

        1. 11

          The for-loop variable thing is a real footgun.
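
          For anyone who hasn’t been bitten, the classic reproduction (under the old semantics there is one shared loop variable, so every closure sees its final value):

          package main

          import "fmt"

          func main() {
          	var prints []func()
          	for _, v := range []string{"a", "b", "c"} {
          		// All three closures capture the same v under the old semantics.
          		prints = append(prints, func() { fmt.Println(v) })
          	}
          	for _, p := range prints {
          		p() // old semantics: c, c, c; per-iteration semantics: a, b, c
          	}
          }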

          In other languages, I wish there was the same commitment to not having frivolous breaking changes.

          Also, I find try/catch ugly. Why do I need a special kind of if statement just for errors?

          1. 4

            There is a proposal to fix the for-loop variable footgun: https://github.com/golang/go/wiki/LoopvarExperiment.

            I agree try/catch is ugly and I don’t miss it in Go. But I think the current solution is very verbose. I really like Zig’s solution to this problem.

            1. 5

              I’ve never used another language that encourages developers to enrich their errors at all levels as much as Go does with fmt.Errorf().

              In Java, by contrast, catching certain subclassed exceptions and rethrowing them with another type/message almost always leads to logs that are harder for me to understand, even though the complete stack trace is available (maybe I’ve been lucky with Go deps so far, or unlucky with jars).
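
              For readers coming from other languages, the pattern looks like this (hypothetical function, real stdlib calls):

              package main

              import (
              	"errors"
              	"fmt"
              	"io/fs"
              	"os"
              )

              // Each layer wraps the underlying cause with %w plus a little context.
              func loadConfig(path string) error {
              	if _, err := os.ReadFile(path); err != nil {
              		return fmt.Errorf("load config %q: %w", path, err)
              	}
              	return nil
              }

              func main() {
              	err := loadConfig("/no/such/file")
              	// Reads like a breadcrumb trail:
              	// load config "/no/such/file": open /no/such/file: no such file or directory
              	fmt.Println(err)
              	// And the root cause stays programmatically inspectable:
              	fmt.Println(errors.Is(err, fs.ErrNotExist)) // true
              }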

          2. 1

            Can you provide a definition or other guideline for determining objectively whether a change is “frivolous” or not?

            1. 4

              The cost of a breaking change is a function of many variables, the most dominant variable being user impact.

              A breaking change in a programming language impacts every user of that programming language. Core language changes are more costly than changes to standard library functions. Changes to widely-used packages are more costly than changes to niche packages. And so on.

              I don’t know how to determine if a breaking change is or is not frivolous. But I do know that if my program compiles and passes tests with language version N, it sure as hell better keep compiling and passing tests for all language versions > N, excepting critical security vulnerabilities. I guess that’s a reasonable, if imprecise, definition of frivolous.

              1. 2

                The only possible conclusion of your guideline is that breaking changes must always be completely forbidden.

                1. 5

                  Breaking changes in programming languages should be completely forbidden by default, yes.

                  An exception can be carved out for critical security vulnerabilities.

                  1. 3

                    When you encounter someone who is able to get something so absolutely perfect on the first try that nothing about it will ever need to be removed, and also willing to support it until the heat death of the universe, do be sure to let me know.

                    Until then I will treat your position as unhelpful, unreasonable, unrealistic and unconstructive.

                    1. 4

                      You’re preaching to the choir on this point. But the rules for languages are far more strict than for packages in general. You can make breaking changes in a language, no problem. You just have to make those changes explicitly opt-in for user code.

                  2. 2

                    Fortran 66 had a vague definition of DO loops; some implementations executed loops once even if the trip count was zero. Fortran 77 changed DO loops so they work the way they do in every other language. Should the standards committee not have made this change?

                    1. 1

                      Is a program written in Fortran 66 expected to behave the same way when compiled as Fortran 77? Presumably – hopefully! – no. Assuming so, then this is no problem. Languages can make any breaking changes they like, as long as those changes are opt-in for user code.

                      1. 2

                        I think it was an opt-out change. Fortran 66 compilers were replaced by Fortran 77 compilers with an option for the old DO loop semantics. It’s the main incompatibility between Fortran 66 and 77. Many Fortran compilers still have a similar option for compiling dusty decks.

                      2. 1

                        What if I argue that installing the new version (of the compiler or interpreter or whatever) is “opting in”?

                        1. 2

                          The code is what opts in, not the build host.

                2. 1

                  Some languages are able to achieve that, yes. For example, Clojure hasn’t had a breaking change yet.

      3. 3

        How much do you use generics in your code?

        I found that outside of one, maybe two functions that do generic stuff, my codebase has mostly remained the way it was, but maybe that’s partly because I have an interface-heavy API.

        1. 4

          I’ve been writing some generic libraries for things like managing concurrency and handling database pagination, but not a ton. In a 10kloc codebase, it’s probably less than 1kloc.
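
          For a flavour of the concurrency side, a small generic helper along these lines (a sketch, not the actual library):

          package conc

          import "sync"

          // ForEachLimit applies fn to every item, with at most limit
          // goroutines in flight at once.
          func ForEachLimit[T any](items []T, limit int, fn func(T)) {
          	sem := make(chan struct{}, limit)
          	var wg sync.WaitGroup
          	for _, item := range items {
          		item := item // copy for the closure (needed before Go 1.22)
          		wg.Add(1)
          		sem <- struct{}{}
          		go func() {
          			defer wg.Done()
          			defer func() { <-sem }()
          			fn(item)
          		}()
          	}
          	wg.Wait()
          }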

      4. 2

        Since the addition of generics, which added 30% of relatively tricky stuff to the spec, I wonder if this claim from your original post is still true (I read the spec originally, but haven’t since the addition of generics in 1.18):

        The spec is very short and quite readable, so it’s a great way to learn Go.

        1. 3

          Good question. I haven’t tried to read it recently. I hope generics didn’t kill the readability of the spec.

          1. 4

            The spec grew with generics, though not by a huge amount. But my personal experience with the spec has changed. Previously, if I had any question about language semantics, or syntax validity, or etc., I could almost always get an answer from reading the spec. But now, when I have questions about generics, the spec almost never provides useful answers. Instead, I have to dig around for the generics proposal doc, which (bizarrely) doesn’t seem to be linked anywhere on go.dev. Maybe I’m doing something wrong.

      5. 1

        So, have you tried Prettier?

      6. 1

        If you tag Carmack enough, does he tweet about your post? :)

        1. 1

          Only one way to find out!

    11. 5

      There’s a pretty good general analysis of this as a PL design/implementation question, with discussion of what Javascript/Ruby/Python/C#/etc. each do, in a sidebar of the book Crafting Interpreters in the chapter on closures, “Design Note: Closing Over the Loop Variable”.

      1. 2

        Amazing summary. Not surprising considering that Russ Cox mentioned Dart as a source of inspiration for those new semantics, and that Robert Nystrom, the book’s author, is working on Dart :)

    12. 21

      Cool as this is, NIST is not actually a trustworthy authority for secure coding. This is the same NIST that at least once allowed backdoors to be put into its recommended crypto algorithms.

      Call me when the US Department of Transportation recommends it: they certify code where, if it fucks up, people die.

      1. 7

        I have friends at NIST. While the encryption team has made mistakes, they are still exceptionally trustworthy in their reference work, such as their famous peanut butter standard reference material: https://shop.nist.gov/ccrz__ProductDetails?sku=2387&cclcl=en_US

        1. 1

          “Price: $1,107.00” for 0.51 kg of very special peanut butter :D

      2. 7

        Looks like there’s been some progress towards automotive requirements: https://ferrous-systems.com/blog/the-ferrocene-language-specification-is-here/

        1. 2

          There has! Ferrous Systems are doing God’s work, the lack of a specification is a major hurdle in any field with accountability requirements.

          (Independent of why accountability is necessary, if we’re being cynical :-P).

      3. 5

        Fair enough. But this is still helpful in making the case for using Rust as an alternative to C, C++ and Ada for functional safety.

        1. 4

          Oh, certainly. It’s good publicity! Maybe even some funding. But not, on its own, going to make engineering in Rust more trustworthy.

    13. 4

      If you haven’t seen Guy Steele’s excellent Growing a Language conference talk, it’s worth your time.

      1. 1

        Yes, it almost feels like Steele is playing a mind trick on the audience in that talk!

    14. 4

      Most programmers refactor their code as it grows to remove accidental complexity. Language designers should do the same, refactoring their language as they evolve it, but they usually don’t, because most users prioritize backward compatibility over simplicity. I’m not sure I agree with that, but it seems to be the dominant approach.

      1. 11

        The problem is that it’s really hard to do this and not break shit along the way. Programmers get grumpy when their code breaks. See Elm 0.17 to 0.18, and to a lesser extent perhaps to 0.19. There’s also the issue of, if you break someone’s code once, are you going to do it again? And again? When does it stop?

        I am semi-seriously considering something like this with my own language, Garnet. After a 1.0 release, the opportunity for breaking changes would occur every X years, probably with X increasing over time. Maybe a Fibonacci sequence or something; the gaps would go 2, 3, 5, 8 … years, so you always know when to expect them long in advance. Somewhat inspired by Rust’s editions, in terms of “this is a known-stable language release”, but able to break backwards compat (and also less frequent).

        1. 2

          The problem is that it’s really hard to do this and not break shit along the way.

          Agreed. Then we end up with a “perfect” language but no one using it. The next step for language designers would be to invest in tools that would help with refactoring the code as the language evolves. I remember Go did a bit of that in the early days before 1.0. But that was mostly for relatively trivial transformations.

          1. 5

            Rust does it too - they have an “edition” system, and every three years a new edition ships that can contain new, backwards-incompatible syntax.

            What differentiates this from e.g. the C++11, C++14, C++17 etc. situation is that you get to mix-and-match these editions within the same project, the compiler handles it fine. Also, changes made in editions are designed in such a way that fixing the breakage in your code is easy and largely automated, so it suffices to run cargo fix --edition in nearly all cases.

            1. 4

              TBF lots of languages have some sort of evolution feature. Python has __future__ imports, Perl has feature and version pragmas, …

              I think the great success of Rust’s editions system is the eminently reliable migration tool, obviously. As for:

              you get to mix-and-match these editions within the same project, the compiler handles it fine

              you don’t, really; the edition is a per-crate setting. Obviously you can have multiple crates in a given project, but it’s much coarser. If anything you can “mix and match” C++ a lot more. GCC actually guarantees that as long as all your objects are built with the same compiler you can link them even if they use different versions of the standard. And you can even link cross-version if the features were not considered unstable in that compiler version (so e.g. you can’t link C++17 from GCC7 and C++17 from GCC8, because C++17 support was considered unstable in GCC8).

              But I think that’s advantageous.

              Another major advantage of Rust is simply that it’s an extremely statically typed language, so there are lots of language improvements which can be done with middling syntax tweaks and updating the prelude, whereas adding a builtin to a dynamically typed language has the potential to break everything with limited visibility. Not being object-oriented (so largely being “early bound”, statically dispatched) and having very strict visibility control also means it’s difficult for downstream to rely on implementation details.

        2. 1

          With a sufficiently expressive macro system, I think you could pull this off (relatively) easily:

          When features get removed, rather than being axed completely, they get moved to a standard library macro; then, when source files get compiled in a new version that has removed built-in support for the feature, the compiler automatically inserts the import at the top of the source if the feature is used in it. Those macro contexts could bar feature compatibility with (from their perspective) the future, such that if you want to use new language features in a block of code using a legacy macro, you need to refactor the legacy macro away. Doing so would decrease the maintenance burden substantially, because you don’t need to worry about new language features conflicting with now-sunset language features.

          I think that gives the best of both worlds: reduction of core language complexity, while not breaking source files that have been left untouched since the times of dinosaurs.

    15. 2

      I’ve mainly been using VSCode for a few years now. I was using Sublime Text before.

      I’m impressed by how many people here are still using Sublime. The lack of native LSP integration became a bit of an issue for me over time. That’s clearly one of the main points that motivated me to try VSCode.

      It’s also interesting how Kakoune and Helix are emerging and being used by more and more developers. Helix seems really promising.

      Surprised no one mentioned Lapce yet. Curious about the experience from anyone using it.

    16. 18

      Do you have any more information on the project? This is a bit light.

      1. 3

        I haven’t shared the open source project publicly yet, but I plan to later this year.

        This thread has some example code and a link for more info if you’re interested (some details have changed since): https://twitter.com/haxor/status/1618054900612739073

        And I wrote a related post about motivations here: https://www.onebigfluke.com/2022/11/the-case-for-dynamic-functional.html

        1. 18

          There is no static type system, so you don’t need to “emulate the compiler” in your head to reason about compilation errors.

          Similar to how dynamic languages don’t require you to “emulate the compiler” in your head, purely functional languages don’t require you to “emulate the state machine”.

          This is not how I think about static types. They’re a mechanism for allowing me to think less by making a subset of programs impossible. Instead of needing to think about whether s can be “hello” or 7, I know I only have to worry about s being 7 or 8. The compiler error just means I accidentally wrote a program where it is harder to think about the possible states of the program. The need to reason about the error means I already made a mistake in reasoning about my program, which is the important thing. Fewer errors before the program is run doesn’t mean the mistakes weren’t made.

          I am not a zealot; I use dynamically typed languages. But that is for problems where the degree of dynamism inherent in the problem means introducing the ceremony of program-level runtime typing is extra work, not because reading the compiler errors is extra work.

          This is very analogous to the benefits of functional languages you point out. By not having mutable globals, the program is easier to think about: if s is 7, it is always 7.

          Introducing constraints to the set of possible programs makes it easier to reason about our programs.

          1. 4

            I appreciate the sentiment of your reply, and I do understand the value of static typing for certain problem domains.

            Regarding this:

            “making a subset of programs impossible”

            How do you know what subset becomes impossible? My claim is you have to think like the compiler to do that. That’s the problem.

            I agree there’s value in using types to add clarity through constraints. But there’s a cost for the programmer to do so. Many people find that cost low and it’s easy. Many others — significantly more people in my opinion — find the cost high and it’s confusing.

            1. 10

              I really like your point about having to master several languages. I’m glad to be rid of a preprocessor, and languages like Zig and Nim are making headway on unifying compile-time and runtime programming. I disagree about the type system, though: it does add complexity, but it’s scalable and, I think, very important for larger codebases.

              Ideally the “impossible subset” corresponds to what you already know is incorrect application behavior — that happens a lot of the time, for example declaring a “name” parameter as type “string” and “age” as “number”. Passing a number for the name is nonsense, and passing a string for the age probably means you haven’t parsed numeric input yet, which is a correctness and probably security problem.
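
              In Go terms (hypothetical function, of course):

              package main

              import "fmt"

              // The signature alone rules out the nonsense programs: swapping
              // the arguments is a compile-time error, not a runtime surprise.
              func register(name string, age int) {
              	fmt.Printf("registered %s, age %d\n", name, age)
              }

              func main() {
              	register("alice", 17)
              	// register(17, "alice") // rejected: cannot use 17 (untyped int) as string
              }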

              It does get a lot more complicated than this, of course. Most of the time that seems to occur when building abstractions and utilities, like generic containers or algorithms, things that less experienced programmers don’t do often.

              In my experience, dynamically-typed languages make it easier to write code, but harder to test, maintain and especially refactor it. I regularly make changes to C++ and Go code, and rely on the type system to either guide a refactoring tool, or at least to produce errors at all the places where I need to fix something.

            2. 4

              How do you know what subset becomes impossible? My claim is you have to think like the compiler to do that. That’s the problem.

              You’re right that you have to “think like the compiler” to be able to describe the impossible programs for it to check them, but everybody writing a program has an idea of what they want it to do.

              If I don’t have static types and I make the same mistake, I will have to reason about the equivalent runtime error at some point.

              I suppose my objection is framing it as “static typing makes it hard to understand the compiler errors.” It is “static typing makes programming harder” (with the debatably-worth-it benefit of making running the program easier). The understandability of the errors is secondary; if there is value, there’s still value even if the error was as shitty as “no.”

              But there’s a cost for the programmer to do so. Many people find that cost low and it’s easy. Many others — significantly more people in my opinion — find the cost high and it’s confusing.

              I think this is the same for “functionalness”. For example, I often find I’d rather set up a thread-local or similar because it is easier to deal with than threading some context argument through everything.

              I suppose there is a difference in the sense that being functional is not (as of yet) a configurable constraint. It’s more or less on or off.

            3. 3

              I agree there’s value in using types to add clarity through constraints. But there’s a cost for the programmer to do so. Many people find that cost low and it’s easy. Many others — significantly more people in my opinion — find the cost high and it’s confusing.

              I sometimes divide programmers into two categories: the first acknowledge that programming is a form of applied maths. The second went into programming to run away from maths.

              It is very difficult for me to relate to the second category. There’s no escaping the fact that our computers ultimately run formal systems, and most of our job is to formalise unclear requirements into an absolutely precise specification (source code), which is then transformed by a formal system (the compiler) into a stream of instructions (object code) that will then be interpreted by some hardware (the CPU, GPU…) with more or less relevant limits & performance characteristics. (It’s obviously a little different if we instead use an interpreter or a JIT VM).

              Dynamic type systems mostly allow scared-of-maths people to ignore the mathematical aspects of their programs for a bit longer, until of course they get some runtime error. Worse, they often mistake their should-have-been-a-type-error mistakes for logic errors, and then claim a type system would not have helped them. Because contrary to popular belief, type errors don’t always manifest as such at runtime. Especially when you take advantage of generics & sum types: they make it much easier to “define errors out of existence”, by making sure huge swaths of your data are correct by construction.

              And the worst is, I suspect you’re right: it is quite likely most programmers are scared of maths. But I submit maths aren’t the problem. Being scared is. People need to learn.

              My claim is you have to think like the compiler to do that.

              My claim is that I can just run the compiler and see if it complains. This provides a much tighter feedback loop than having to actually run my code, even if I have a REPL. With a good static type system my compiler is disciplined so I don’t have to be.

              1. 6

                Saying that people who like dynamic types are “scared of math” is incredibly condescending and also ignorant. I teach formal verification and am writing a book on formal logic in programming, but I also like dynamic types. Lots of pure mathematics research is done with Mathematica, Python, and Magma.

                I’m also disappointed but unsurprised that so many people are arguing with a guy for not making the “right choices” in a language about exploring tradeoffs. The whole point is to explore!

                1. 3

                  Obviously people aren’t monoliths, and there will be exceptions (or significant minorities) in any classification.

                  Nevertheless, I have observed that:

                  • Many programmers have explicitly taken programming to avoid doing maths.
                  • Many programmers dispute that programming is applied maths, and some downvote comments saying otherwise.
                  • The first set is almost perfectly included in the second.

                  As for dynamic typing, almost systematically, the arguments in favour seem to be less rigorous than the arguments against. Despite SICP. So while the set of dynamic typing lovers is not nearly as strongly correlated with “maths are scary”, I do suspect a significant overlap.

                  While I do use Python for various reasons (available libraries, bignum arithmetic, and popularity among cryptographers (SAGE) being the main ones), dynamic typing has systematically hurt me more than it helped me, and I avoid it like the plague as soon as my programs reach non-trivial sizes.

                  I could just be ignorant, but despite having engaged in static/dynamic debates with articulate peers, I have yet to see any compelling argument in favour. I mean there’s the classic sound/complete dilemma, but non-crappy systems like F* or what we see in ML and Haskell very rarely stopped me from writing a program I really wanted to write. Sure, some useful programs can’t be typed. But for those, most static checking systems have escape hatches. And many programs people think can’t be typed actually can. See Rich Hickey’s transducers, for instance. Throughout his talk he was dismissively daring static programmers to type them, only to have a Haskell programmer actually do it.

                  There are of course very good arguments favouring some dynamic language at the expense of some static language, but they never survive a narrowing down to static & dynamic typing in general. The dynamic language may have a better standard library, the static language may have a crappy type system with lots of CVE inducing holes… all ancillary details that have little to do with the core debate. I mean it should be obvious to anyone that Python, Mathematica, and Magma have many advantages that have little to do with their typing discipline.


                  Back to what I was originally trying to respond to, I don’t understand people who feel like static typing has a high cognitive cost. Something in the way their brain works (or their education) is either missing or alien. And I’m highly sceptical of claims that some people are just wired differently. It must be cultural or come from training.

                  And to be honest I have an increasingly hard time considering the dynamic and static positions equal. While I reckon dynamic type systems are easier to implement and more approachable, beyond that I have no idea how they help anyone write better programs faster, and I increasingly suspect they do not.

                  1. 6

                    Even after trying to justify that you’ve had discussions with “articulate peers” and “could just be ignorant” and that this is all your own observation, you immediately double back to declaring that people who prefer dynamic typing are cognitively or culturally defective. That makes it really, really hard to assume you’re having any of these arguments in good faith.

                    1. 1

                      To be honest I only recall one such articulate peer. On Reddit. He was an exception, and you’re the second one that I recall. Most of the time I see poorer arguments strongly suggesting either general or specific ignorance (most of the time they use Java or C++ as the static champion). I’m fully aware how unsettling and discriminatory the idea is that people who strongly prefer dynamic typing would somehow be lesser. But from where I stand it doesn’t look that false.

                      Except for the exceptions. I’m clearly missing something, though I have yet to be told what.

                      Thing is, I suspect there isn’t enough space in a programming forum to satisfactorily settle that debate. I would love to have strong empirical evidence, but I have reasons to believe this would be very hard: if you use real languages there will be too many confounding variables, and if you use a toy language you’ll naturally ignore many of the things both typing disciplines enable. For now I’d settle for a strong argument (or set thereof). If someone has a link that would be much appreciated.

                      And no, I don’t have a strong link in favour of static typing either. This is all deeply unsatisfactory.

                      1. 5

                        There seems to be no conclusive evidence one way or the other: https://danluu.com/empirical-pl/

                        1. 3

                          Sharing this link is the only correct response to a static/dynamic argument thread.

                        2. 1

                          I know of — oops I do not, I was confusing it with some other study… Thanks a ton for the link, I’ll take a look.

                          Edit: from the abstract there seems to be some evidence of the absence of a big effect, which would be just as huge as evidence of an effect one way or the other.

                          Edit 2: just realised this is a list of studies, not just a single study. Even better.

            4. 1

              How do you know what subset becomes impossible?

              Well, it’s the subset of programs which decidably don’t have the desired type signature! Such programs provably aren’t going to implement the desired function.

              Let me flip this all around. Suppose that you’re tasked with encoding some function as a subroutine in your code. How do you translate the function’s type to the subroutine’s parameters? Surely there’s an algorithm for it. Similarly, there are algorithms for implementing the various primitive pieces of functions, and the types of each primitive function are embeddable. So, why should we build subroutines out of anything besides well-typed fragments of code?
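
              To make that translation concrete, a deliberately trivial sketch (my own toy example, in Haskell):

                  -- The mathematical function  f : Z x Z -> Z,  f(x, y) = x^2 + y
                  -- translates mechanically into a typed subroutine:
                  f :: Integer -> Integer -> Integer
                  f x y = x * x + y

                  -- A fragment that decidably lacks this type (say, one returning
                  -- a String) provably cannot implement f, so nothing is lost by
                  -- ruling it out up front.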

          2. 4

            Sure, but I think you’re talking past the argument. It’s a tradeoff. Here is another good post that explains the problem and gives it a good name: biformity.

            https://hirrolot.github.io/posts/why-static-languages-suffer-from-complexity

            People in the programming language design community strive to make their languages more expressive, with a strong type system, mainly to increase ergonomics by avoiding code duplication in final software; however, the more expressive their languages become, the more abruptly duplication penetrates the language itself.

            That’s the issue that explains why separate compile-time languages arise so often in languages like C++ (mentioned in the blog post), Rust (at least 3 different kinds of compile-time metaprogramming), OCaml (many incompatible versions of compile-time metaprogramming), Haskell, etc.

            Those languages are not only harder for humans to understand, but for tools as well

            1. 4

              The Haskell metaprogramming system that immediately jumps to mind is Template Haskell, which makes a virtue of not introducing a distinct metaprogramming language: you use Haskell for that purpose as well as for the main program.
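
              A tiny sketch of the flavour (a toy example of mine, not anything from the linked post): the generator is ordinary Haskell returning a quoted expression, which the compiler runs at compile time:

                  {-# LANGUAGE TemplateHaskell #-}
                  import Language.Haskell.TH

                  -- Ordinary Haskell, run at compile time:
                  -- builds the expression \x -> x * x * ... * x.
                  mkPow :: Int -> Q Exp
                  mkPow 0 = [| const 1 |]
                  mkPow n = [| \x -> x * $(mkPow (n - 1)) x |]

                  -- Spliced from another module (TH's stage restriction):
                  --   cube :: Int -> Int
                  --   cube = $(mkPow 3)  -- unrolled at compile time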

              1. 1

                Yeah the linked post mentions Template Haskell and gives it some shine, but also points out other downsides and complexity with Haskell. Again, not saying that types aren’t worth it, just that it’s a tradeoff, and that they’re different when applied to different problem domains.

            2. 2

              Sure, but I think you’re talking past the argument

              This is probably a fair characterization.

              Those languages are not only harder for humans to understand, but for tools as well

              I am a bit skeptical of this. Certainly C++ is harder for a tool to understand than, say, C, but I would be much less certain about, say, Ruby vs. Haskell.

              Though I suppose it depends on whether the tool is operating on the program source or on a running instance.

        2. 7

          One common compelling reason is that dynamic languages like Python only require you to learn a single tool in order to use them well. […] Code that runs at compile/import time follows the same rules as code running at execution time. Instead of a separate templating system, the language supports meta-programming using the same constructs as normal execution. Module importing is built-in, so build systems aren’t necessary.

          That’s exactly what Zig is doing with its “comptime” feature: using the same language for compile-time and run-time code, while keeping a statically typed and compiled approach.

        3. 4

          I’m wondering where you feel dynamic functional languages like Clojure and Elixir fall short? I’m particularly optimistic about Elixir as of late, since they’re putting a lot of effort into expanding into the data analytics and machine learning space (their Nx projects), as well as interactive and literate computing (Livebook and Kino). They are also trying to work out how they could make a gradual type system work. Those all feel like traits that have made Python so successful, and I feel like it is a good direction in which to evolve the Elixir language/ecosystem.

          1. 3

            I think there are a lot of excellent ideas in both Clojure and Elixir!

            With Clojure, the practical dependence on the JVM is one huge deal-breaker for many people because of licensing concerns. BEAM is better in that regard, but it shares the problem that VMs require a lot of runtime complexity, which makes them harder to debug and understand (compared to, say, the C ecosystem tools).

            For the languages themselves, simple things like explicit returns are missing, which makes the languages feel difficult to wield, especially for beginners. So enumerating that type of friction would be one way to understand where the languages fall short. Try to recoup some of the language’s strangeness budget.

        4. 2

          I’m guessing the syntax is a pretty regular Lisp, but with newlines and indents making many of the parentheses unnecessary?

          Some things I wish Lisp syntax did better:

          1. More syntactically first-class data types besides lists. Most obviously dictionaries, but classes kind of fit in there too. And lightweight structs (which get kind of modeled as dicts or tuples or objects or whatever in other languages).
          2. If you have structs you need accessors. And maybe that uses the same mechanism as namespaces. Also a Lisp weak point.
          3. Named and default arguments. The Lisp approaches feel like kludges. Smalltalk is kind of an ideal, but secretly just the weirdest naming convention ever. Though maybe it’s not so crazy to imagine Lisp syntax with function names blown out over the call like in Smalltalk.
          1. 1

            Great suggestions thank you! The syntax is trying to avoid parentheses like that for sure. If you have more thoughts like this please send them my way!

            1. 1

              This might be an IDE / LSP implementation detail, but would it be possible to color-code the indentation levels? Similar to how editors color code matching brackets these days. I always have a period of getting used to Python where the whitespace sensitivity disorients me for a while.

              1. 2

                Most editors will show a very lightly shaded vertical line for each indentation level with Python. The same works well for this syntax too. I have seen colored indentation levels (such as https://archive.fosdem.org/2022/schedule/event/lispforeveryone/), but I think it won’t be needed because of the lack of parentheses. It’s the same reason I don’t think it’ll be necessary to use a structural editor like https://calva.io/paredit/

    17. 5

      Please slow down on the sourcegraph spam. Lobsters is not your marketing channel.

      1. 5

        I see what you mean, but in that instance, the article was actually useful to me. (I don’t use or work for Sourcegraph.)

      2. 1

        For sure my bad.

        1. 4

          To be clear, the problem is not the number of submissions pushing your own stuff, it is the ratio between those and your other contributions. You have submitted more stories than comments, and all except one of your stories have been about Sourcegraph (the other one is currently sitting at -2). No one cares if active contributors to the community plug their own stuff; people care if someone treats this place as a marketing channel. Join the discussion on other topics, submit interesting things you read elsewhere, and no one will complain when you submit things like this.

          The rule of thumb I was told was that no more than 10% of your contributions should be self-promotion. I’d qualify that slightly and suggest that one-line comments don’t count towards the other 90%. Spend some time thinking about how your unique perspective can enrich other discussions.

          1. 2

            Starting yesterday, I’m going to be a better community member and not treat Lobsters like an open mic night.

            Spend some time thinking about how your unique perspective can enrich other discussions.

            Will do. I want to apologize to you, @friendlysock, and the rest of the community for my behavior, it was unacceptable and I’ll do better going forward.

    18. 3

      I hope we will see a follow-up post in 2-3 weeks with a nice speedup in modernc.org/sqlite.

      1. 1

        Is there an upcoming improvement?

        1. 3

          “I hope”

    19. 16

      Is it my bubble or is sqlite everywhere lately?

      1. 24

        Every 5-7 years we find a new place where SQLite can shine. It is a testament to the engineering and the API that it powers our mobile apps and OSes (Core Data), desktop apps (too many examples to list), and now, eventually, app servers, be they traditional monoliths (Litestream) or newer serverless variants like what’s described here.

        I also see a trend where we’re starting to question if all the ops-y stuff is really needed for every scale of app.

      2. 6

        I’m in the same bubble, reading about Litestream, Fly.io, and Tailscale. And I really love what they are doing in the SQLite ecosystem. But I don’t really understand how Cloudflare is using SQLite here. It’s not clear if SQLite is used as a library linked into the Worker runtime, which is the usual way to use it, or if it is running in a separate server process, in which case it’s closer to the traditional client-server approach of PostgreSQL or MySQL.

        1. 4

          Yeah this post is very low on technical detail, and I can’t seem to find any documentation about the platform yet - I guess once things open up in June we’ll know more.

          Definitely keen to see if they are building something similar to Litestream; it seems like a model that makes sense for SQLite: a single writer, with the WAL replicated to all readers in real time.

          I’m trying to convince people at work that using a replicated SQLite database for our system instead of a read only PostgreSQL instance would make our lives a lot better, but sadly we don’t have the resources to make that change.

          1. 2

            I guess Cloudflare D1 is based on Cloudflare Durable Objects, a kind of KV database accessible through a JavaScript API. They probably implemented a SQLite VFS driver mapping to Durable Objects (not sure how they mapped file semantics to KV semantics, though). If I understand correctly, Durable Objects are already replicated, which means they don’t need to replicate the WAL like Litestream does.
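
            One common way to square file and KV semantics, and this is pure speculation on my part rather than anything Cloudflare has documented, is to back the “file” with fixed-size pages keyed by page number, since SQLite’s pager does largely page-aligned I/O anyway. A toy sketch of the idea:

                import qualified Data.ByteString as B
                import qualified Data.Map.Strict as M
                import Data.Maybe (fromMaybe)

                -- Toy stand-in for the real KV store: key = page number.
                type KV = M.Map Int B.ByteString

                pageSize :: Int
                pageSize = 4096

                -- Unwritten pages read back as zeroes, like a sparse file.
                readPage :: KV -> Int -> B.ByteString
                readPage kv n = fromMaybe (B.replicate pageSize 0) (M.lookup n kv)

                writePage :: Int -> B.ByteString -> KV -> KV
                writePage = M.insert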

      3. 5

        I think there’s probably a marketing/tech trend right now for cloud vendors (fly.io, cloudflare) to push for this technology because it’s unfamiliar enough to most devs to be cool and, more importantly, it probably plays directly to the vendors’ strengths (maintaining these solutions is probably much easier than, say, running farms of Postgres or whatever at scale and competing against AWS or Azure).

        If it’s any consolation, in another five or ten years people will probably rediscover features of bigger, fuller-featured databases and sell them back to us as some new thing.

        (FWIW, I’ve thought SQLite was cool back in the “SQLite is a replacement for fopen()” days. It’s great tech and a great codebase.)

        1. 14

          Litestream author here. I think SQLite has trended recently because more folks are trying to push data to the edge and it can be difficult & expensive to do with traditional client/server databases.

          There’s also a growing diversity of types of software and the trade-offs can change dramatically depending on what you’re trying to build. Postgres is great and there’s a large segment of the software industry where that is the right choice. But there’s also a growing segment of applications that can make better use of the trade-offs that SQLite makes.

    20. 1

      redbean, a web server shipped as a single binary executable, including Lua and SQLite, seems to be a perfect fit: https://redbean.dev/