Threads for jmelesky

    1. 15

      This has been making the rounds on mastodon and the action here appears to be in the issues, which are currently locked to prevent additional users from commenting. The main issue people are commenting in is here: https://github.com/RupertBenWiser/Web-Environment-Integrity/issues/28

      Includes appearances from a corporate community-manager type trying some linguistic judo to shame people for making non-constructive criticism: mean criticism which does not accept the premises of the project. Of course, rejecting the premises of the project is certainly constructive criticism from the perspective of the web as a whole, and this line certainly didn’t go over well with the people who are sick of an ad company using its browser’s monopoly market share to dictate how the internet runs for everybody else.

      It remains questionable how much power whining online actually has. Google and the people who work there still have some vestiges of prestige left that can be threatened, maybe.

      Finally, it’s a very small gesture but you can help by using a non-blink-based internet browser and normalizing this among your friends & family. Firefox is very good these days.

      1. 3

        Includes appearances from a corporate community-manager type trying some linguistic judo to shame people for making non-constructive criticism: mean criticism which does not accept the premises of the project.

        Do you have a link to this? I skimmed through the comments, but it’s mostly a pile-on at this point (understandably, to an extent, but still a very low signal-to-noise ratio).

      2. 2

        So is Safari.

        1. 1

          But the number of Chrome-only websites is only growing. Chrome is IE6 all over again.

          1. 6

            I think this depends on the particular online bubble you travel in. I’ve used Firefox as my primary browser for more than 5 years and the only thing that I switch to Chromium for is Google Meet.

            1. 4

              Lucky you! I’m thinking of creating a “wall of shame” of Chrome-only websites to draw attention to the issue.

              1. 1

                Yep name and shame away!

            2. 3

              I know I’ve had to use Chrome or Chromium for a bunch of things, including health provider sites and job application sites. Seems to happen with a bunch of SaaS companies that have small businesses as their main customers (based on anecdata alone, fwiw).

            3. 1

              Google Meet seems to work well in Firefox these days. I’m also glad Teams has expanded browser support recently, and I no longer have to switch away from Firefox at all.

          2. 3

            Really? I have never encountered one. I have found some that didn’t work with the version of Safari on my 2013 MacBook Pro, but they did work with Firefox. The only website I’ve seen that works in only a single browser is Bing Chat, and that’s Edge-only, not Chrome.

    2. 30

      The best bit is that after the original release they force-pushed a version that removed the is_elon flag. [1]

      [1] While removed from their originally published repo, it was forked: https://github.com/B1Z0N/twitter-algorithm-with-author_is_elon/blob/7f90d0ca342b928b479b512ec51ac2c3821f5922/home-mixer/server/src/main/scala/com/twitter/home_mixer/functional_component/decorator/HomeTweetTypePredicates.scala#L225

      1. 5

        They weren’t “hardcore” enough to run a grep -R $BOSS before publishing /s

        1. 11

          Oh, they did, they just forgot the -i, so it caught the UserIsElon code, but not the author_is_elon.

    3. 4

      Have they open sourced the hosting webapp yet?

      1. 27

        I have plans to do it really soon, that’s actually my next project for Pijul.

        1. 5

          Is it possible to host Pijul without using the Nest? Like with Git, creating a bare repo available via SSH? I couldn’t find that anywhere in the docs.

          1. 7

            A very frequently asked question, yet it has its own chapter in the manual; it couldn’t be much clearer: https://pijul.com/manual/working_with_others.html

        2. 4

          Pijul looks very cool. Do you consider it, in its current state, to be a production-ready replacement for git? I checked the FAQ but didn’t see that question.

        3. 1

          That’s really exciting to hear; the fact that the web UI was proprietary is something that I’d always found disappointing. What prompted the change of heart, may I ask?

          1. 5

            My understanding is that the plan has always been to open source it, but that a closed source approach was taken at first so people would focus on contributions to the Pijul protocol rather than to the relatively unimportant CRUD app.

            1. 5

              Exactly. Maintaining open source projects takes time; one has to prioritise. In my experience, many of the “future eager contributors” to the Nest didn’t even care to look at Pijul’s source code, which makes me seriously doubt their exact intentions.

          2. 5

            No change of heart. Pijul is usable now, so the focus can move on to something else. Also, I find it easier to guarantee the security (and react in case of a problem) of a web server written 100% by me.

      2. 4

        Not that I am aware of. The sad thing is that @pmeunier remains the cornerstone of this project and, as far as I understand, he lacks the time to work on pijul these days (which is totally okay, don’t get me wrong; the amount of work he has already put into this project is tremendous).

        I am curious to see what the future holds for pijul, but I am a bit pessimistic I admit.

        1. 17

          I am curious to see what the future holds for pijul, but I am a bit pessimistic I admit.

          Any concrete argument?

          It is true that I haven’t had much time in the last few weeks, but that doesn’t mean Pijul is unusable, or that I won’t come back to it once my current workload lightens. Others have started contributing at a fast pace, actually.

          1. 10

            Others have started contributing at a fast pace, actually.

            That is wonderful news! I keep wishing the best for pijul, even though I don’t use it anymore. It remains a particularly inspiring piece of software to me. Sorry if my comment sounded harsh or unjust to you; I should know better than to write about my vague pessimism when the Internet is already a sad enough place.

            I really wish you all the best for pijul (:

    4. 11

      We put a fair bit of our application in the DB and we like it.

      The DB handles all of our authentication/authorization using PG’s access controls. We run web and PyQt UIs plus console scripts against the application. Not to mention we happily hand out PG connection info to power users to go abuse the DB themselves without worry.

      We do use Python as the DB language, thanks to PG. Testing is a touch more complex, but not remotely hard or anything; the data going in and out is usually easy to reason about, and easy enough to get at when it’s not. It’s pretty easy to build a test harness for it. With tools like Liquibase, updating the schemas and functions is easy.

      The biggest downsides I’ve come across:

      • Most tools (especially SQL reporting tools) are not built to expect the DB to hold and manage user accounts.
      • Your DB server tends to want more resources, but scaling for most applications isn’t that big of a deal.
      • Sometimes tracking down the stupid lock or debugging the troubled SQL statement can get annoying (but this is true of all SQL DBs once you get past treating them as a KV store).
      • DB upgrades are a bit more troublesome sometimes.

      Overall, quite happy. This sort of thing doesn’t always work well; it totally depends on the application. It’s not a one-size-fits-all solution, but nothing is.

      1. 1

        We do use Python as the DB language, thanks to PG.

        Out of curiosity, why Python instead of a trusted extension like Perl or v8?

        1. 4

          We use PyQT as the front-end for the “native” application, and lots of Python everywhere else, so it’s just easier to keep it all Python.

    5. 6

      Can someone explain to me what this is supposed to be, an Emacs fork with a few changes that the maintainer refuses to merge?

      1. 6

        Pretty much, yeah. The gnu emacs maintainers seem to be very opposed to certain kinds of features, and so some forks spring forth that add those. Maybe having tree-sitter integrated well will make a difference, who knows.

        1. 10

          This fork could become huge:

          • Upstream Emacs changes are included.
          • No political bullshit reasons for not merging changes that the FSF is against.
          • No silly FSF requirement for signing over copyright (which is an administrative burden and deterrent).
          • Easier PR workflow for people more used to “modern” style of contributing through GitHub.

          Those last two would make drive-by contributions from one-time contributors much easier and more likely, which might make small but important quality-of-life improvements more likely to land. The second point could potentially make this fork better in technical ways (see the Gnus nonblocking and GC improvements that have been merged already).

          1. 3

            As the core Emacs userbase seems to be unwilling to make any changes to the default configuration, I doubt that this will get any real traction. Which is sad.

            1. 6

              Maybe, maybe not. It’s largely forgotten at this point, but XEmacs was used by a whole bunch of people for a decade or so. And the fact that lots of people used it was a major part of why mainline Emacs adopted a bunch of changes.

              Can that same pattern work again? I think it’s worth a shot.

              1. 4

                Yeah, I was thinking this fork could become the next XEmacs.

                The same pattern also worked with egcs (the non-FSF gcc fork, which eventually became the official gcc). I wonder if there are other such cases where the FSF’s hand was basically forced.

    6. 8

      The Newton had a bunch of features that I still miss from modern systems. The two that most spring to mind:

      The drawing app had really good shape recognition. The MS Whiteboard app is now maybe 60% as good, on a CPU at least an order of magnitude more powerful. If you drew a square, a circle, a line between them, and wrote in the square, you’d end up with vector shapes that connected properly (so dragging the square or circle would keep the line / arrow attached, and the text was associated with the shape you drew it in).

      The other thing is copy and paste. It annoyed me that the iPhone launched without this feature when the Newton was the only small-screen device to do it well. You could select something on the Newton and drag it to the edge of the screen. You’d then get a little tab containing it as a clipping. You could then switch to another app and drag it back. Direct manipulation everywhere, no abstract clipboard idea for people to understand, and an interface that scaled trivially to multiple concurrent clipboards. I wish newer hand-held devices would do this instead of their current clumsy copy-and-paste (which only works reliably for text, just like ’80s-era X11).

      1. 3

        It’s sort of remarkable how bad input has been on mobile devices without it triggering any kind of competition/experimentation to make it better.

        1. 2

          There used to be a great keyboard on Android called 8pen which was definitely in the “experiment” category. It’s a bit hard to describe, but the gist was:

          • you had a circle in the middle, and four quadrants radiating out from the circle
          • you typed a letter by starting your finger in the center, going out to a quadrant, crossing one or more quadrant boundaries, and bringing your finger back to the center circle
          • you added a space by lifting your finger and bringing it back down again

          The end result was that your finger stayed in fairly rapid, fairly fluid motion, and only stopped at the end of a word. It definitely required learning, but it was remarkably easy (and forgiving: the quadrants were all large, and there was optional haptic feedback every time you crossed a boundary) once you knew it. I could routinely type full, correct text messages without actually looking at my phone. I used it as my keyboard for years, and loved it.
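          The per-stroke decoding described above can be pictured as a lookup on (starting quadrant, turn direction, boundaries crossed). Here is a toy sketch of that idea in Python; the layout table is entirely invented for illustration, and the real 8pen assigned letters differently.

```python
# Toy 8pen-style decoder. The LAYOUT table is invented for illustration;
# the actual 8pen had its own published letter layout.
LAYOUT = {
    ("top", "cw", 1): "e",
    ("top", "cw", 2): "t",
    ("top", "ccw", 1): "a",
    ("right", "cw", 1): "o",
}

def decode(start_quadrant, direction, crossings):
    """One stroke -> one letter: which quadrant the finger first entered,
    which way it curled, and how many quadrant boundaries it crossed
    before returning to the center circle. Unassigned gestures give None."""
    return LAYOUT.get((start_quadrant, direction, crossings))

# A lifted finger is a space; everything else is a stroke.
strokes = [("top", "cw", 1), ("top", "cw", 2)]
print("".join(decode(*s) or "?" for s in strokes))
```

          The forgiveness the parent describes falls out of this model: only the coarse features of the gesture matter, not its exact path.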

          Sadly, the company that made it shuttered, and it’s no longer available.

          edited to add: Apparently in the last couple years somebody made an open-source clone called 8vim. I am delighted, and encourage everyone to try it out.

    7. 3

      It’s fascinating how the KEY feature of the Newton and all the other handhelds of the period was cutting-edge handwriting recognition, and it turned out we just needed better on-screen QWERTY keyboards.

      1. 5

        I would argue that we are still waiting for better on-screen keyboards…the point and tap approach leaves a lot to be desired.

        1. 3

          An on-screen keyboard that could be configured to your hands and provide haptic feedback would be great. I think the Newton was remarkable in that it could do handwriting recognition at all, considering the limitations of the hardware.

      2. 4

        I think one of the keys to Palm’s success was that they didn’t try for cutting-edge handwriting recognition. Instead, users used Graffiti for input. That let them avoid needing an on-screen keyboard while also giving them an out on handwriting recognition. It meant smaller, more capable devices, at the cost of users learning a new input type.

        BlackBerry showed up around the turn of the century and started to slowly nudge Palm out of the top spot. Much of that was better integration with office software suites, but a nontrivial part was the keyboard. It was physical and tiny (though it still dominated the face of the device), but it didn’t have the learning curve of Graffiti.

        And then the iPhone showed up with its on-screen keyboard, and the game was up.

        Still, there’s a definite market for devices that recognize handwriting. The reMarkable devices come to mind, but there are others, too. And the handwriting recognition has definitely improved over 1993.

        1. 2

          IIRC, Graffiti was a third-party package that Palm bought or licensed; I think it was also available on the Newton, but that could just be brainworms at this point.

          I miss my Newton. I loved it unreservedly.

          1. 3

            Palm developed it, but I believe they developed it for other devices. That is, Palm started out as a software company, and it was a few years before they released their own PDA (the original PalmPilot). So Graffiti was available for the Newton before it was on their own hardware. Also several other portable devices that didn’t have the same level of success.

            Someone probably still owns the Graffiti IP (I think Palm was acquired by HP, so maybe them?), so it’s possible they could release Graffiti as a keyboard for Android or iPhone.

            1. 4

              This is broadly the case.

              I hope that my attempt to précis the history won’t get the details too wrong.

              • NewtonOS 1.0 needed to learn your handwriting. It was thus impossible to demo in-store; it needed a week or so to get to know you. But the vision was there and it was very impressive.

              • Jeff Hawkins saw a way to fix this and launched his own simplified handwriting-entry system. You need to learn to type, so why not re-learn how to write? Simplified single-stroke letters, one on top of the other, in a defined area with zones and hotspots for capitals and numbers so that the app didn’t have to guess. Don’t make the device learn the human’s ways, as devices are bad at that: make the human learn the device’s ways, as humans are good at that.

              • Graffiti 1 launched as a Newton app.

              • Hawkins realises that the Newt is too big and ambitious. He feels that what people really need is a shirt-pocket-sized device that runs for weeks on a few AAA cells. Enter your data on a PC or Mac, sync it very easily to the pocket device: plug in, press 1 button, done. Only make minor edits on the device.

              • He prototypes the device as a block of wood to get the size right. He buys in an OS kernel, he and a small team wrap a simple GUI around it with the existing Graffiti as the sole text-entry system.

              • Result: Palm Pilot. Big hit.

              • NewtonOS 2.0 can read hand-printed letters, i.e. non-cursive. No learning needed. This gets around the need-to-learn-the-owner’s-writing problem. Print at first, write in longhand when you have time to teach it. But it’s still big and unwieldy, while the Palm range are tiny. They only do 20% of the functionality but it’s the important 20%.

              • Palm’s IPO hits problems when Xerox sues over Graffiti. Palm loses. https://www.theregister.com/2002/01/03/xerox_wins_palm_handwriting_case/

              • Palm intros Graffiti 2 but you have to relearn it again. https://www.theregister.com/2003/01/14/palm_draws_up_plans/

              • Ruling overturned but it’s getting late. https://www.theregister.com/2004/05/24/palm_vs_xerox/

              [Links are from my employer but from long before my time.]

    8. 3

      Default collation is definitely a hidden gotcha in postgres.

      If you don’t have the ability to drop and recreate the DB, or you can but can’t control the default collation (true for some hosted DB services), there is another (tedious) workaround.

      First, you can specify a default collation per column. And that can happen with an ALTER TABLE statement:

      ALTER TABLE report ALTER COLUMN comment TYPE text COLLATE "uk_UA";
      

      Doing this for all char/varchar/text columns is left as an exercise for the reader.
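      If you do want to script that exercise, a small generator works; here is a sketch in Python. The column list below is made up for illustration; in a real script you would pull it from information_schema.columns.

```python
# Sketch: build the per-column ALTER TABLE statements for a collation change.
# The table/column names below are hypothetical; in practice, query
# information_schema.columns for data_type IN ('text', 'character varying').

def alter_collation_sql(table, column, col_type, collation):
    """Return an ALTER TABLE statement that sets one column's collation."""
    return (
        f'ALTER TABLE {table} '
        f'ALTER COLUMN {column} TYPE {col_type} COLLATE "{collation}";'
    )

text_columns = [
    ("report", "comment", "text"),
    ("report", "title", "varchar(200)"),
]

for table, column, col_type in text_columns:
    print(alter_collation_sql(table, column, col_type, "uk_UA"))
```

      Each generated statement has the same shape as the ALTER TABLE above; note that each one takes a heavy lock on its table, so schedule accordingly.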

      Second, if you cannot alter the column collation (perhaps your DBA is unwilling), you can specify collation use during the retrieval:

      SELECT count(*) FROM report WHERE lower(comment COLLATE "uk_UA") like '%кредиторськ%';
      

      It’s… not great. But it’s better than nothing when you’re stuck. See the official docs for more detailed info.

    9. 11

      ehhhh as a research language it’s not dead yet.

      But if you think a language (spec) should continue to be updated (which is fair; even C and Fortran get more spec revisions than Standard ML does), then yeah, that part of it doesn’t seem likely to come alive anytime soon.

      This post also talks about syntax and semantics as its flaws. I think that’s completely untrue. It hasn’t sprouted because it doesn’t have a major industry backer like OCaml has in Jane Street.

      1. 8

        I’m not sure if it’s fair to call Jane Street the main backer of OCaml. They are definitely a big backer and they are vocal about their use of OCaml, but they aren’t alone. OPAM is maintained by OCamlPro, for example, which is a consulting company.

        It’s also interesting to note that SML did have commercial users and backers. AT&T (if I’m not mistaken, or another big telecom company) had a big SML codebase. Harlequin Software was selling a commercial IDE for it (MLWorks).

        I mostly support the OCaml side in the debate about the int/float problem and the other issues mentioned, but I’m not ready to give any answer as to why the OCaml community is much bigger than that of SML now. All I can say for myself is that the existing community of OCaml and the language features not available in SML tip the scales towards OCaml for me all the time. I want to write programs and make it easier for other people to write in an ML, and OCaml is the one ML that makes that easier for me.

        Also, fun fact: MLton has a way to use overloading in your own code, even though you need to explicitly enable it in the compiler options.

        1. 2

          I didn’t call it the main backer of OCaml, but now that you mention it, I would. “Main” doesn’t mean they’re the only one; they do seem to be the biggest one, though.

      2. 6

        As a research and education language SML is still hanging around, true, but as an industry language, let’s face it, it’s dead.

        It hasn’t sprouted because it doesn’t have a major industry backer like OCaml has in Jane Street.

        Difficult to say: is OCaml successful because of Jane Street backing it, or is Jane Street backing it because it’s a viable language? I think there are multiple factors involved. Personally I think the fact that they have a single (main) implementation, with a simple C-like native executable compiler, helped a lot.

        1. 8

          Personally I think the fact that they have a single (main) implementation, with a simple C-like native executable compiler, helped a lot.

          Agreed. SML has some excellent compilers, but the flagship (SML/NJ, flagship due to its prevalence in education) is not friendly to shipping executables. MLton, the excellent whole-program optimizing compiler, is not friendly to interactive development. OCaml’s single implementation definitely helps.

          I think the big thing (possibly a side effect of the above) is SML and OCaml’s library situations. SML has the Basis library, which is a standard library in a very early-’90s way (simple data types, socket-level networking). And it’s not actually a library; it’s a set of definitions and interfaces that are implemented by the different SML implementations (much like the language itself). It took years for the committee to finalize the Basis, and that was quicker than standardizing the language.

          OCaml, on the other hand, has the one implementation, by Inria, with a standard library that doesn’t take a multilateral committee to update. That standard library is also fairly spare, but it actually changes over time, which is not nothing, and it includes important bits like a parser library. Between that and pretty good C interoperability, OCaml has much more out of the box.

          Jane Street can then build a layer of different library bundles on top of that, and because they work with all of the OCaml implementations (just the one), they gain a lot of traction in the community. That centralization allows a standard package manager (opam), which is easy enough to add to that it grows.

          Pretty soon, SML is nowhere near OCaml.

          1. 4

            but the flagship (SML/NJ, flagship due to its prevalence in education)

            Yes, it’s common in education, but I really wouldn’t call it the flagship at all. MLton and Poly/ML are the only ones I’d recommend to anyone for serious use.

        2. 4

          As a research and education language SML is still hanging around, true, but as an industry language, let’s face it, it’s dead.

          Yup I agree.

          Difficult to say: is OCaml successful because of Jane Street backing it, or is Jane Street backing it because it’s a viable language? I think there are multiple factors involved. Personally I think the fact that they have a single (main) implementation, with a simple C-like native executable compiler, helped a lot.

          Maybe, but it seems overall completely arbitrary to me. And that’s fine, most successful things in life are arbitrarily successful.

    10. 16

      This article took me back 10-15 years, to when I, as a teenager, used to spend a lot of time on flash game websites. There was an overwhelming number of games (tower defense, scare games, RPGs, shooters, or flash movies, remember those?) and I loved it. The problematic ones were the sites that relied on flash to function (menus, Flash SPAs!, videos).

      When mobile devices started dominating roughly around 2011-2013 and HTML5 allowed more and more things, flash websites were widely shunned. As an aspiring web developer, I naturally joined in the criticism of flash and even redesigned the website flashsucks.org (archived).

      However, I somehow didn’t realize that this development would not only (thankfully) rid us of flash-based web development, but also take with it all the art, entertainment and community of all those flash games I had learned to love so dearly.

      What do teenagers have today? Apps in app stores, often of low quality and riddled with IAPs and ads. The flash game sites also had ads, but with extremely few exceptions they would not make you sit through ads to play the games.

      So in a way, and I hate to admit it, I definitely miss those days where you could just go on a website like Newgrounds and try dozens of games with a single click each.

      Not to mention the scene back then, which is lost forever. There is also an app developer scene, but it’s divided between the app stores and definitely doesn’t have the same “spirit”. Everything has become much more serious and for-profit. You can almost draw an analogy to YouTube and how it has changed in the last 10 years. I liked YouTube back then much more as well (way less corporate, and it actually promoted individual and small content creators), but that’s another topic.

      1. 12

        What do teenagers have today? Apps in app stores, often of low quality and riddled with IAPs and ads. The flash game sites also had ads, but with extremely few exceptions they would not make you sit through ads to play the games.

        I’m not sure that, if Flash had maintained the popularity it had, the situation would be any better. I think those are market forces more than anything to do with the platform. Zynga was pushing IAP through their major Flash games, and I’m sure the trend would’ve blown up if the mobile scene had had a good Flash experience.

        Also, a ton of Flash games were low quality. Not everything in the Flash era was an indie gem. If there hadn’t been a massive campaign to slowly kill Flash, I’m sure we would’ve ended up in the same situation we’re in today with shitty app games.

        I totally sympathize with losing a lot of the weird art of flash games, but sometimes I think we romanticize the past too much without trying to think about how market forces would’ve shaped the tech of yore had it survived.

        1. 9

          Yes, you’re probably right. A lot of flash games back then were horribly bad, but it took mere seconds to switch to another one. Nowadays, everything is skewed by fake ratings, which can waste your time when you’ve downloaded and installed an app that turns out to be crap.

          YouTube is a good indicator of the development you describe, as the core offering (video uploads for anyone) didn’t change, but all the market factors around it did. I fondly remember YouTube from 10-15 years ago, because everything was much more relaxed, independent and experimental. The major content creators nowadays rarely risk things. I think there are two factors at play here:

          1. The ratio of video consumers to producers gets larger and larger. Back in the day, you could click on any commenter’s channel and see at least a few videos they had made themselves. Oftentimes they were bad, fitted with 009 Sound System’s Dreamscape and/or 10 FPS screencaps with Notepad captions, but it gave everyone personality and expressed their interests. The videos were often instructional, sometimes even helpful. When I look at the YouTube channels of commenters today, they are mostly empty husks with just a list of likes and subscriptions, if that. It just makes it so bland.
          2. The audience gets younger and younger. The entire YouTube Kids fiasco deserves its own thread, but everyone probably remembers those few months a few years ago when you suddenly had a lot of random button-mash comments and strange emoji comments on YouTube videos (until they introduced YouTube Kids). As it turns out, those were toddlers who had got mom’s or dad’s iPad and just happily typed away. Some baby lullabies get billions of views, and a lot of content definitely tailored towards sub-10-year-olds is usually in the trending section. I remember YouTube being a more mature place back then. I created my account in 2008 at the age of 13, and I definitely remember never acting my age. While back in the day people tried acting older than they were, it now seems to be the opposite, especially in regard to content creators. Even ones in their late 20s often act very immaturely.
      2. 8

        Have you seen itch.io? You can browse Free + Web Browser games there. That subset of the site isn’t nearly as big as Armorgames and Newgrounds were back then, but it has captured a strong indie spirit, IMO, and it is also the locus of many game jams (there are like 15-20 going on any given day).

        One could argue that we need more sites like itch, and I wouldn’t disagree there (especially given what happened to bandcamp recently), but it’s definitely a vibrant creative space.

        1. 1

          what happened to bandcamp?

      3. 3

        There was an overwhelming number of games (tower defense, scare games, rpgs, shooters or flash movies, remember them?) and I loved it.

        Are you familiar with the Flashpoint project? As worr suggests, it’s definitely tricky to find the gems among the crap, but it’s an astonishing collection.

        There is also an app developer scene, but it’s divided between the app stores and definitely doesn’t have the same “spirit”.

        You might be interested in the community over at itch.io. It’s a bit broader, since it includes print gamedev as well as video games, but it definitely has a similarly scrappy vibe.

    11. 2

      Wow. So many subpages. And yet it’s hard to find what the abbreviation even means.

        1. 1

          It looks to me that this task technically doesn’t require you to write a program at all.

    12. 31

      And I still can’t shake off a bit of a cult-ish vibe there. Regardless whether on purpose, or purely accidental.

      It’s not accidental. See Who Owns the Stars: The Trouble with Urbit for more background on the political motivations behind the project.

      1. 12

        “[I]n many ways nonsense is a more effective organizing tool than the truth. Anyone can believe in the truth. To believe in nonsense is an unforgeable demonstration of loyalty. It serves as a political uniform. And if you have a uniform, you have an army.”

        ― Mencius Moldbug AKA Curtis Yarvin

        1. 2

          What’s the context? This can be read either as him raising an army through nonsense, or as an observation of how gullible people are, contrasting the masses with an army.

          1. 4

            Why not both? There’s precedent.

            “You don’t get rich writing science fiction. If you want to get rich, you start a religion.” - L. Ron Hubbard

            1. 2

              That’s valid too, but combining them pretty much doubles the importance of actual context.

              Right now it’s the equivalent of my maybe-favorite reasoning, which is that circular reasoning works because it’s predicated on the fact that circular reasoning works.

      2. 15

        There is nothing more boring than a personal attack on a system’s creator to discredit the system. You would need to assume that they are a deity who can predict exactly how every part of the system will interact with every other part.

        The article may eventually get to that point, but after reading a third of it and not getting there I have better things to do with my life.

        1. 34

          Boring, sure, but not necessarily unwarranted. The author goes to great lengths to explain why it is important to him to consider not the creator alone, but all who will benefit from a system as it grows and becomes widely known / used.

          If Urbit were some random open source library I’d agree that attacking the author is pointless. But it’s not, it’s an entire alternative socioeconomic apparatus with the author’s political views embedded within it in a meaningful way (not just in terms of language, which has actually been changed to be less political). He has said as much himself.

          To me, the article is more an explanation of why the author won’t participate in or support Urbit rather than a takedown of the system itself. Just like how many people (of all political stripes) don’t shop at certain stores or buy products from certain companies if they disagree strongly with the owners.

          1. 2

            What prevents someone from forking Urbit or starting a similar project to advance a radical anarcho-socialist platform?

            I don’t really care, but that Yarvin guy seems to have put some effort into his work, disagreeable or not, while the opposition focuses on complaining and raging.

            If Urbit really is a threat, aren’t online comments the least useful slacktivist countermeasure?

            1. 10

              What prevents someone from forking Urbit or starting a similar project to advance a radical anarcho-socialist platform?

              We have to draw a distinction between Urbit the community and Urbit the software. TFA does discuss both (for example, the author critiques Hoon, which is part of the software). However, the discussion of Yarvin and his supporters is part of a critique of Urbit the community.

              An example here might be people who are opposed to using VS Code because of the closed source and Microsoft connections. Most of them would probably feel fine about using a hard fork of the open source code, but that wouldn’t “be” VS Code, it would be some other project / community.

              Honestly, it’s even a bit more complicated than this since Urbit the software is designed to reflect a particular set of social values. But I’m not overly concerned with that since a fork could (presumably without much trouble) alter the design.

              As an aside:

              If Urbit really is a threat, aren’t online comments the least useful slacktivist countermeasure?

              If what you meant was that Lobsters comments are useless, talk about boring arguments… No one comes to Lobsters under the belief that their comments will change the world. We’re here to discuss topics we find interesting with people who also like to discuss those topics. The whole “y’all are so dumb for discussing something that interests you” meme is tired and unoriginal.

              If what you meant was that TFA is useless, then isn’t all political commentary useless? And wouldn’t that include Yarvin’s extensive political commentary? Making an argument about something and putting it out there for others to think about is pretty much the whole point of free speech. No one is forcing you to read it, and no one is forcing you to comment on it.

            2. 3

              Nothing prevents this. There’s even a quote from Yarvin from some years ago when he was still actively involved in the project saying that he had no problem with other people forking Urbit’s (open source) code and implementing another Urbit with a different namespace model.

            3. 2

              There have been many without the incompatible-with-existing-software low-performance VM or the intentionally hierarchical and rent-seeking namespace/routing scheme. Scuttlebutt is probably the most similar variant along the axis of ‘share append-only data in a censorship-resistant way’ (except it actually somewhat achieves the latter).

              For the ‘send someone some ETH to prove you care about the identity’ part of the system, you could just demand whoever you’re talking to have a message in their history where they mention a wallet ID and a planned transaction amount and time to the EFF. You have the added bonus of the money possibly doing something useful rather than going to people who are in a position to receive it precisely because they thought they’d make money or enjoy having power by rent-seeking. I guess this also achieves the feature where it props up Peter Thiel’s investment in Ethereum.

              For the we-did-our-own-crypto-it’s-definitely-better-than-openssl in a terribly inefficient VM part, I guess you’ll have to write a Scuttlebutt client in Brainfuck or whatever esolang takes your fancy. You could also just put in some busy loops and rewrite the hashing bit to introduce some security vulnerabilities, I suppose.

              Sadly none of this comes with a central body of a few hundred people you need to seek permission from before you can discover peers or have your traffic routed. I guess we’ll just have to form a commune of rent-seeking tyrants or something. That part seems really hard to do under an actually p2p protocol, so maybe Scuttlebutt falls down as a replacement there.

            4. 2

              As somebody who’s thought a lot about what properties make technology better at advancing anarchist ideals, it’s precisely the absence of hierarchy which I would prioritize above all else. Urbit prioritizes hierarchy in every aspect of its permission and identity models. I would never use it as a starting point.

              (Edit: fix typo)

              1. 1

                Sorry - I only now realized that this conversation took place more than a week ago. Please don’t feel any obligation to respond.

        2. 18

          It’s absolutely relevant if the system was designed to enable an anti-democratic agenda. It’s definitely boring to keep having to call the project out on that front, but it’s important to make people aware of the underlying motivations of its creator. I wouldn’t consider this necessary to do if a warning was added to the original post (I do think it’s interesting to look at the technical details of systems like this), but I assume they weren’t aware of this at the time of writing.

          1. 17

            An anti-democratic agenda is not a flaw when it comes to personal computers. If 99% of the population didn’t want me to do something with my personal computer, I would want my personal computer to anti-democratically ignore them and do what I tell it to do anyway.

            Of course the status quo isn’t democratically-controlled computing, it’s oligarchically-controlled computing. When I do my computing on privately-owned, closed-source platforms - Google, Facebook, Twitter, WeChat, etc.- it’s the fairly small number of people who work at those companies who control how I do my personal computing, rather than the electorate of the political unit I happen to live in.

            1. 8

              An anti-democratic agenda is not a flaw when it comes to personal computers

              I think you misunderstand “anti-democratic”. Yarvin has expressed support for dictatorship and slavery. Do you truly think that you would have more computing freedom in such a political environment than in a democracy?

              1. 3

                Possibly, depending a lot on the specific details of how the government of the political unit I lived in were set up. Certainly in a democracy a majority of the people could vote for politicians whose agenda includes restricting my compute freedom (for any number of reasons), and a lot of protections of individual rights in systems we call “liberal democracies” are grounded in self-consciously undemocratic processes, like judicial rulings.

          2. 13

            Which are irrelevant because the creators aren’t omniscient.

            Somehow Stallman enabled Google to happen with his hippie ideas about helping your neighbour. I don’t think creating the world’s largest spy agency was his primary motivation when he wanted the printer’s firmware source code. What matters is the interaction between the system and the world, not what the original creator intended.

            The article at the top points out those interactions and why they are bad. The article in the comment above is pure character assassination, and whoever wrote it should feel bad.

            1. 4

              Somehow Stallman enabled Google to happen with his hippie ideas about helping your neighbour.

              I recommend doing some reading. The main public figure behind what you’re talking about is not RMS, but Eric S. Raymond, co-founder of the sometimes-controversial OSI and originator of the term “open source” (I think the vocabulary already illustrates a bit of a difference). Reading the FSF’s GPLv3 (or this article in particular) is enough to identify that Stallman was actively trying to forestall (wink!) the rise of something like Google on the back of FOSS. Contrast that with ESR’s term “open source”, as discussed in his book The Cathedral and the Bazaar (the name might sound familiar!).

              tl;dr: The best reading about the whole mess by far is this two-or-so–page article from 1999 by O’Reilly.

              As to your point: I guess it is character assassination in part? I think it’s also a decent discussion of the issues surrounding the Urbit project, and the significance of the dynamic between founder/maintainer and userbase. But I’ve only really given it a skim.

          3. 3

            How exactly is the design undemocratic?

            1. 17

              As a point of fact, the distribution of address space is explicitly feudal and meant to enable rent seeking. There are arguments for why this is a good or bad thing, but it is very much not democratic.

              1. 4

                Those concerns are entirely orthogonal.

                Feudalism and rent-seeking are both definitely bad, but it’s entirely possible for a democratic society to vote themselves both. I’d argue that many have; 29% of wealth is taxed in Australia, and ‘public-private partnerships’ (yecch) are funneling vast amounts of taxpayers’ money into the hands of a few individuals and companies.

                1. 6

                  I disagree that this is a category error. Societies can be more or less democratic and feudalism and rent-seeking move power from the people to an elite.

                  This is recognised by organisations like the Economist that assess a “democracy index” for various countries. Democracy is the measure of how much the people rule versus how much an elite rule and we can measure it for countries, workplaces, indeed, any community.

                  We don’t need to imagine this, we can simply look at the world as it exists. e.g. social democracies like Norway are clearly more democratic* than liberal democracies like the UK and USA where much more power is in the hands of an elite and there is much more widespread misinformation (in large part because the media is controlled by elites), voter suppression, and disenfranchisement through poor voting systems.

                  * I think Norway is a more democratic society because:

                  • it uses PR so a much greater proportion of the population has a meaningful say on their elected representatives (compare the UK and USA where only swing-constituencies really matter)
                  • there is proportionally more variety in media ownership, including significant union-allied media organisations
                  • workers benefit from sectoral-bargaining by large and powerful unions, so bosses have less power over workers
                  • income distribution is flatter, meaning elites have less economic power relative to the average person. In contrast, in the UK and USA proportionally more people are in poverty and they are much less likely than the rich to exert political power by voting, lobbying, protest or donations.

                  We can see the effect of this in the high levels of voter turnout (~75% in Norway vs ~60% in the USA and UK), high level of reported trust in government and media and perhaps more controversially in poverty rates and wealth and income equality (why would a truly free population choose to give so many resources to an elite while others are malnourished?)

                2. 2

                  It’s well known that democracies can vote themselves into, or be manipulated into authoritarianism, which is exactly why it’s important to call out these attempts when we see them.

                  1. 4

                    Yes! But @friendlysock’s point was that feudalism and rent-seeking aren’t democratic. I think that’s a category error; as you point out, it’s well known that democracy can result in either or both.

                    1. 1

                      It is not a category error. You can build non-democratic things out of democratic things.

                      Setting aside potential semantic issues: Urbit centralizes power (power over routing, power over peer discovery, a namespace distributed through some mixture of money, seniority and nepotism, voting power available only to a select few – the senate – who are also endowed with most of the above) in a way that is somewhere between capital-equals-power and a pre-selected, recursively appointed hierarchy (i.e. feudalism).

                      The word Democracy in the context that @friendlysock used it very clearly refers to distribution of power (as opposing centralization) rather than voting-as-a-method-of-distributing power.

                      It is difficult to believe that one would respond to such a post as if it were discussing voting-as-a-means-of-power-distribution if one were having a discussion in good faith, and more difficult still to believe one would double down on such an interpretation except as a rhetorical technique.

                      1. 1

                        It is difficult to believe that one would respond to such a post as if it were discussing voting-as-a-means-of-power-distribution if one were having a discussion in good faith

                        Let’s be clear about this, instead of beating about the bush with “it is difficult to believe”. My position is as follows:

                        • I wasn’t conflating Democracy with voting. In fact, I consider voting one of the major problems with modern Democracy. I’m a proponent of sortition as a replacement for voting as a potential solution.
                        • Democracy is orthogonal to centralization, and orthogonal to power distribution, because …
                        • Democracy - in the sense of Government of the people, by the people, for the people - can and has in recent times resulted in despotic centralisation of power. Steering well clear of Godwin here, I’d point to Chavez as an example.

                        Perhaps this is the difference that’s leading you to think I’m arguing in bad faith.

                        I’d argue that in every meaningful sense, a centralised despotic Government with widespread popular support can still be Democratic; it’s just that the will of the people here is the problem.

                        1. 0

                          You still seem to be intentionally confusing things that are the result of a democratic process with things that can be described as democratic.

                          The people of England could have a vote (or the sortition lottery winners could decide) on whether Manchester should be walled off and ruled despotically by Eddie Izzard. It would be the result of a democratic process, as the people of Manchester were outvoted by everyone else in England, but the power relationship between Eddie Izzard and the people of Manchester would not be democratic.

                          Similarly decisions based on propaganda and populism are less democratic, and decisions justified via manipulated elections are not really democratic at all.

                          Any relationship which enables rent-seeking is inherently undemocratic, and any rent-seeking-enabling structure explicitly ruled by the 256 people chosen by virtue of their capital and interest in having power over said structure is not democratic at all.

                          Very loosely, when used as an adjective in the context above, ‘Democratic’ would mean the people with power are the people whose interests are most relevant. This is inherently and definitionally untrue in any rent-seeking relationship (although the relationship could still be a result of a greater structure which is democratic), and untrue in any system which can accurately be described as feudal.

                          1. 1

                            You still seem to be intentionally confusing things that are the result of a democratic process with things that can be described as democratic.

                            I think this is at the heart of our disagreement, and where we may have to agree to disagree.

                            I would say that a state of affairs arrived at by a democratic process is by definition democratic; although it may also be rent-seeking, feudal, or despotic. Or some combination of all three.

                    2. 1

                      Ahh, got you, sorry for misunderstanding!

      3. 2

        Indeed. But also be forewarned that the author of that article is an avowed socialist; there are distasteful politics of all flavours on display here.

        1. 17

          Not all distasteful things are equally distasteful.

          1. 14

            That’s true; but Yarvin’s politics are still pretty damn distasteful.

        2. 11

          wonder how he manages to reconcile his socialism with his distasteful politics

          1. 4

            I think gulags were the preferred mechanism?

    13. 1

      I got started on an Apple II back in the day, and this is astounding.

    14. 3

      I’ve never been a fan of UML just because I’m not a visual thinker, but I definitely agree that formal specifications have fallen out of favor.

      My charitable take is that a big part of it is because our tools are getting better, such that the cost of being somewhat wrong is often considerably lower than the cost of trying to be precisely correct. When iterating on a piece of software meant a new months- or years-long development cycle, it really really paid to make sure you got it right the first time. But now? “Sure, we should be able to change how that works in time for the weekly production release.”

      My less-charitable take is that it’s because in enterprise and consumer software, we’ve sort of decided that specs should be written by product managers who often have little or no training or background in rigorously specifying a system and are instead focused almost exclusively on high-level “user stories” that don’t require thinking in fine detail, or on UI mockups that don’t require writing down precise business rules.

      That said, though, there are still contexts where you see people take up-front design discussion pretty seriously. Of course there are obvious candidates like aerospace, but also, for example, the processes some programming languages have in place to manage their evolution (JEP, PEP, etc.) involve a lot of design discussion before anyone is willing to review any code.

      1. 1

        specs should be written by product managers who often have little or no training or background in rigorously specifying a system

        When you say “project manager”, I expect someone whose job duties and training are in:

        • keeping track of dependencies between tasks so people aren’t asked to do things for which the prerequisites aren’t available yet
        • keeping up to date estimates of cost and timeline
        • getting people to work on the tasks that are on the critical path to shipping
        • being a personable business-savvy “face” for the project

        None of those are requirements gathering?

        To be fair, you need PMs to be in the requirements gathering anyway because reqs impact schedule and cost, so leaving them out is counterproductive.

        1. 3

          specs should be written by product managers who often have little or no training or background in rigorously specifying a system

          When you say “project manager”, I expect someone

          Mismatch. They said “product manager”, not “project manager”. A product manager should be the major source of requirements.

          1. 1

            Ah! Thanks for pointing this out. I did indeed misread.

    15. 5

      Aren’t interviews fundamentally about putting a person on the spot? And by all means I agree that it’s not ideal, but joining a new work environment is precisely that. I don’t see why the discussion is centered on the “why would you ask me to reverse a binary tree” type problems.

      Applicants that can do well under pressure and/or have time and energy to prepare well to answer questions about obscure algorithms will likely continue to do so. On a large enough scale, a firm can exploit this to drastically cut out false positives.

      This is indeed inefficient, though more in the economic sense: applicants will spend hours of productive work preparing for their expectations of what will be asked. It also risks excluding individuals on the basis of their material conditions: who in our society does better under pressure (definitely not oppressed minorities), and who can afford to “grind Leetcode” for months prior to the interview?

      1. 26

        Aren’t interviews fundamentally about putting a person on the spot?

        Interviews are fundamentally about learning more about the candidate (and giving the candidate an opportunity to learn more about the company and team).

        Putting them on the spot is not the point, unless it’s a specific type of interview for a very high-stress, split-second industry like EMT or air traffic controller. Software developers don’t code “on the spot”, unless the company culture is pathological. Instead, they code in ways sustainable in the long term, or during a crunch. Even in the latter, they’re not expected to solve data structure puzzles within the hour or lose their employment.

        The goal of an interview is to determine how well a candidate will do in a given role. The more we make interviews about “putting a person on the spot” instead, the more we make the company culture about firefighting and crunch.

        1. 5

          If you are a company that has oncall and need engineers to be able to fix something in an emergency, then testing for ability under pressure is not a bad thing. In that case maybe the test should be for under pressure debugging instead of memorizing algos, though.

          1. 7

            I don’t think you can set an artificial fire and get any meaningful signal from watching it be put out; every company has unique practices, tooling, playbooks, etc.

            When I’ve sat in on SRE/ops interviews I’ve seen good outcomes from describing a hypothetical environment - web server and DB, your web server is throwing a 503 error - and walking through the troubleshooting steps. It can be relatively low pressure, but I’ve seen good signals: experienced ops people talk about remaining calm, surveying the landscape with familiar tools (ping, dig, etc), and build up a very quick mental model of the problem.

      2. 16

        I won’t join companies that think day to day programming work should be a high-stress environment. Especially not the ones that do it on purpose to maximize exploitation of their work force.

    16. 7

      Only the JSON example discusses performance differences. The rest of the article is feature comparison. Of course my query will be faster on a partial index that satisfies my domain need.

      1. 9

        It’s a strange JSON example, too – only 206 rows of 14MB apiece.

        They don’t indicate whether the postgres table is using json or jsonb (the latter allows for extremely flexible indexes on the entire doc field, among other things).

        Overall, confused by the aim of the article.

      2. 1

        Yes, it’s a very thin piece, when it comes to performance differences discussion. I was expecting more data and measurements, but the article ended up being just a comparison of features.

    17. 3

      “Best Practices” optimize for the future: flexibility, stability, maintainability, and ease of feature development. And those are good things to optimize for, especially when you don’t know your problem domain well yet.

      Notably that doesn’t include optimizing for speed or memory use. So if either of those are a higher priority than, say, code maintainability, then, yes, it makes sense to deprioritize best practices and focus on them instead.

    18. 8

      I think there are valid arguments on both sides here, but this post doesn’t seem to be grounded in experience.

      Practically speaking, users of weird architectures do contribute patches back. Those people eventually become the maintainers. When those people go away, the project drops support for certain architectures. That happened with CPython, e.g. it doesn’t support Mac OS 9 anymore as far as I remember.

      It’s sort of a self-fulfilling prophecy – if the code is in C, you will get people who try to compile it for unusual platforms. If it’s in Rust, they won’t be able to try.

      I’m not saying which one is better, just that this post misses the point. If you want to use Rust and close off certain options, that’s fine. Those options might not be important to the project. Someone else can start a different project with the goal of portability to more architectures.

      Changing languages in the middle of the project is a slightly different case. But that’s why the right to fork exists.

      1. 25

        Author here: this post is grounded in a couple of years of experience as a packager, and a couple more years doing compiler engineering (mostly C and C++).

        Practically speaking, users of weird architectures do contribute patches back. Those people eventually become the maintainers. When those people go away, the project drops support for certain architectures. That happened with CPython, e.g. it doesn’t support Mac OS 9 anymore as far as I remember.

        This is the “hobbyist” group mentioned in the post. They do a fantastic job getting complex projects working for their purposes, and their work is critically undervalued. But the assumptions that stem from that work are also dangerous and unfounded: that C has any sort of “compiled is correct” contract, and that you can move larger, critical work to novel architectures just by patching bugs as they pop up.

        1. 6

          OK I think I see your point now. TBH the post was a little hard to read.

          Yes, the people contributing back patches often have a “it works on my machine” attitude. And if it starts “working for others”, the expectation of support can arise.

          And those low quality patches could have security problems and tarnish the reputation of the project.

          So I would say that there are some projects where having the “weird architectures” off to the side is a good thing, and some where it could be a bad thing. That is valid but I didn’t really get it from the post.


          I also take issue with the “no such thing as cross platform C”. I would say it’s very hard to write cross platform C, but it definitely exists. sqlite and Lua are pretty good examples from what I can see.

          After hacking on CPython, I was surprised at how much it diverged from that. There are a lot of #ifdefs in CPython making it largely unportable C.

          In the ideal world you would have portable C in most files and unportable C in other files. Patches for random architectures should be limited to the latter.

          In other words, separate computation from I/O. The computation is very portable; I/O tends to be very unportable. Again, sqlite and Lua are good examples – they are parameterized by I/O (and even memory allocators). They don’t hard-code dependencies, so they’re more portable. They use dependency inversion.
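
          As an illustration of that dependency-inversion pattern (a minimal sketch, not code taken from sqlite or Lua; every name here is invented), a C “core” can do pure computation and leave all I/O decisions to caller-supplied callbacks:

          ```c
          #include <stddef.h>
          #include <stdio.h>

          /* Hypothetical portable core: splits a buffer into space-separated
           * words and hands each one to a caller-supplied callback. No file
           * handles, no allocation -- the only "I/O" is whatever the caller's
           * emit function does. */
          typedef void (*emit_fn)(const char *word, size_t len, void *ctx);

          static void for_each_word(const char *buf, size_t len,
                                    emit_fn emit, void *ctx) {
              size_t i = 0;
              while (i < len) {
                  while (i < len && buf[i] == ' ') i++;   /* skip separators */
                  size_t start = i;
                  while (i < len && buf[i] != ' ') i++;   /* scan one word */
                  if (i > start) emit(buf + start, i - start, ctx);
              }
          }

          /* The unportable shell lives entirely in a callback like this one. */
          static void print_word(const char *word, size_t len, void *ctx) {
              (void)ctx;
              printf("%.*s\n", (int)len, word);
          }

          /* A different callback reuses the same core for counting: no I/O. */
          static void count_word(const char *word, size_t len, void *ctx) {
              (void)word; (void)len;
              ++*(int *)ctx;
          }
          ```

          Here `for_each_word` is the “portable C in most files” part; only `print_word` touches stdio, so a port to a platform without stdio swaps the callback rather than patching the core.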

          1. 10

            TBH the post was a little hard to read.

            That’s very fair; I’m not particularly happy with how it came out :-)

            I also take issue with the “no such thing as cross platform C”. I would say it’s very hard to write cross platform C, but it definitely exists. sqlite and Lua are pretty good examples from what I can see.

            I’ve heard this argument before, and I think it’s true in one important sense: C has a whole bunch of mechanisms for making it easy to get your code compiling on different platforms. OTOH, to your observation about computation being generally portable: I think this is less true than C programmers generally take for granted. A lot of C is implicitly dependent on memory models that happen to be shared by the overwhelming majority of today’s commercial CPUs; a lot of primitive operations in C are under-specified in the interest of embedded domains.

              Maybe it’s possible to write truly cross-platform C, but it’s my current suspicion that there’s no way to verify that for any given program (even shining examples of portability like sqlite). But I admit that that’s moving the goalposts a bit :-)

            1. 12

              Maybe it’s possible to write truly cross-platform C, but it’s my current suspicion that there’s no way to verify that for any given program (even shining examples of portability like sqlite).

              I think the argument holds up just fine despite the existence of counterexamples like Sqlite and Lua; basically it means that every attempt to write portable and safe code in C can be interpreted as an assertion that the author (and every future contributor) is as capable and experienced as Dr. Hipp!

            2. 6

              A lot of C is implicitly dependent on memory models that happen to be shared by the overwhelming majority of today’s commercial CPUs

              That’s largely a result of the CPU vendors optimising for C, due to its popularity. Which leads to its popularity. Which…

            3. 2

              A lot of C is implicitly dependent on memory models that happen to be shared by the overwhelming majority of today’s commercial CPUs; a lot of primitive operations in C are under-specified in the interest of embedded domains.

              As the author of a C library, I can confirm that fully portable C is possible (I target the intersection of C99 and C++). It wasn’t always easy, but I managed to root out all undefined and unspecified behaviour. All that is left is one instance of implementation defined behaviour: right shift of negative integers. Which I have decided is not a problem, because I don’t know a single platform in current use that doesn’t propagate the sign bit in this case.
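
              To make that one remaining case concrete (a hedged sketch; `asr1` is an invented name, not from the library in question): the raw shift below is implementation-defined for negative inputs, while the division-based version is fully specified by C99’s truncate-toward-zero division:

              ```c
              #include <stdint.h>

              /* Implementation-defined: the C standard allows `x >> 1` for
               * negative x to be an arithmetic shift (sign bit propagates) or
               * something else. Virtually every current compiler propagates
               * the sign, but the standard does not require it. */
              static int32_t asr1_impl_defined(int32_t x) {
                  return x >> 1;
              }

              /* Fully defined equivalent of an arithmetic shift right by one,
               * built from C99's truncating division and remainder. Truncating
               * division rounds -5/2 to -2, while an arithmetic shift floors
               * to -3, so odd negative inputs are adjusted down by one. */
              static int32_t asr1(int32_t x) {
                  return (x < 0 && x % 2 != 0) ? x / 2 - 1 : x / 2;
              }
              ```

              On any sign-propagating platform the two functions agree; the second just doesn’t rely on the implementation’s choice.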

              The flip side is that I don’t do any I/O, which prevents me from directly accessing the system’s RNG.

              Incidentally, I’m a firm believer in the separation of computation and I/O. In practice, I/O makes a relatively small portion of programs. Clearly separating it from the rest turns the majority of the program into “pure computation”, which (i) can be portable, and (ii) is much easier to test than I/O.

          2. 5

            I also take issue with the “no such thing as cross platform C”. I would say it’s very hard to write cross platform C, but it definitely exists. sqlite and Lua are pretty good examples from what I can see.

            I see this as parallel to “no such thing as memory-safe C”. Sure, cross-platform C exists in theory, but it’s vanishingly rare in practice, and I’d wager even the examples you cite are likely to have niche platform incompatibilities that haven’t been discovered yet.

            1. 1

              I’d wager even the examples you cite are likely to have niche platform incompatibilities that haven’t been discovered yet.

              Portability in C is hard, but it is simple: no undefined behaviour, no unspecified behaviour, no implementation defined behaviour. If you do that, and there still are platform incompatibilities, then the platform’s compiler is at fault: it has a bug, fails to implement part of the standard, or simply conforms to the wrong standard (say, C89 where the code was C99).

              If we’re confident a given project is free of undefined, unspecified, and implementation defined behaviour, then we can be confident we’ll never discover further niche platform incompatibilities. (Still, achieving such confidence is much harder than it has any right to be.)

              1. 3

                Portability in C is hard, but it is simple: no undefined behaviour, no unspecified behaviour, no implementation defined behaviour.

                That is a very tall order, though. Probably impossibly tall for many (most?) people. I asked how to do this, and I would say the answers were mixed at best. Simple isn’t good enough if it’s so hard that nobody can actually do it.

      2. 3

        If it’s in Rust, they won’t be able to try.

        I think this is the most trenchant point here. If someone wants to maintain a project for their own “weird” architecture, then they need to maintain the toolchain and the project. I’ve been in that position and it sucks. In fact, it’s worse, because they need to maintain the toolchain before they even get to the project.

        I’m particularly sensitive to this because I’m typing this on ppc64le. We’re lucky that IBM did a lot of the work for us, but corporate interests shift. There’s no Rust compiler for half the systems in this room.

        1. 2

          I’m not familiar with these systems. What are they used for? What kind of devices use them? What industries/sectors/etc.?

          1. 3

            PPC is very common in the aerospace and automotive industries. Of course there are also POWER servers running Linux and AIX, but those are a niche compared to the embedded market.

            1. 6

              Got it. That sounds like something that would definitely not be hobbyists working on side projects using mass-market hardware. I think this is what the article was referring to: these corporate users should be willing to pay up to get their platforms supported.

              1. 3

                So does that mean we should only care about architectures that have corporate backing? Like I say, this isn’t a situation where it’s only a project port that needs maintainers. The OP puts it well that without a toolchain, they can’t even start on it. If Rust is going to replace C, then it should fill the same niche, not the same niche for systems “we like.”

                For the record, my projects are all officially side projects; my day job has nothing officially to do with computing.

                1. 8

                  So does that mean we should only care about architectures that have corporate backing?

                  Yes, it does. Money talks. Open source is not sustainable without money. I can work on a pet project on the side on evenings and weekends only for a relatively short period of time. After that it’s going to go unmaintained until the next person comes along to pick it up. This is going to happen until someone gets a day job working on the project.

                  If Rust is going to replace C, then it should fill the same niche, not the same niche for systems “we like.”

                  C has a four-decade head start on Rust. If no one is allowed to use Rust until it has caught up on those four decades of porting and standardization effort, for the sake of people’s side projects, then that argument is a non-starter.

                  1. 3

                    Yes, it does. Money talks.

                    In such a world there would be no room for hobbyists, unless they work with what other people are using. Breakage of their interests would be meaningless and unimportant. That’s a non-starter too.

                    But, as you say, you’re unfamiliar with these systems, so as far as you’re concerned they shouldn’t matter, right?

                    1. 9

                      In that (this) world, there is room for hobbyists only insofar as they support their own hobbies and don’t demand that open source maintainers keep providing free support for them.

              2. 2

                OpenWrt runs on the TP-Link TL-WDR4900 WiFi router. This is a PowerPC system. OpenWrt is nearly the definition of hobbyists working on side projects using mass-market hardware.

                1. 2

                  It says on that page that this device was discontinued in 2015. Incidentally, same year Rust reached 1.0.

                  1. 2

                    I am not sure what you are trying to argue. The same page shows it to be in OpenWrt 19.07, which is the very latest release of OpenWrt.

    19. 8

      a terrible take from a person who is full of 💩 https://campaign.gavinhoward.com/platform/

      the author is too shortsighted to see how shortsighted they are 🤷

      https://twitter.com/wbolster/status/1365983352835231744?s=19

      1. 17

        To be fair, the author’s extremist views on Christianity, society, medicine, or politics don’t make him wrong on another, unrelated subject. “You should first fix bugs instead of switching to a less portable language” is not a wrong thing to say. “You should not break other people’s operating systems” is not a wrong thing to say. Rust may be superior in terms of end product quality, but it is not the default choice for every situation.

        1. 19

          “don’t write bugs” doesn’t work despite 40 years of industry experience with unsafe languages. see other comments.

          the piece is littered with opinions about what others should do (with their spare time even), while at the same time maintaining that poor people shouldn’t be allowed to vote, and holding other dehumanising views. imo, that is very relevant background: it tells me that whatever this person thinks that others should do is completely irrelevant.

          1. 4

            Yes, you are entitled to your opinion, and so is the author; otherwise we could not have a dialog about these subjects. The view in the piece is “fix your bugs” and “fuzz your bugs” and “be responsible”, not “don’t write bugs”.

            And Rust is about “preempting a class of bugs”, but so is any language that considers nulls a design flaw, for instance. So are languages that prevent side effects, that enforce exhaustive conditionals, that enforce exception handling, and so on. So it isn’t the ultimate language.

          2. 4

            “don’t write bugs” doesn’t work despite 40 years of industry experience with unsafe languages.

            Except in cryptography. In this particular niche, “don’t write bugs” is both mandatory and possible, and Rust is of very little help.

            That said, the author may not be aware of the specifics of cryptography. He may be right for the wrong reasons.

        2. 15

          I think “consistently being wrong” is a valuable insight, and can span unrelated subjects.

        3. 7

          “You should not break other people’s operating systems” is not a wrong thing to say.

          Very reasonable, but not what’s happening, nor what he’s saying.

          What’s happening is “making a change to one software project”, that is provided under a license that does not guarantee support.

          What he’s saying is “changing what platforms you support is unacceptable”. He’s also saying “switching to Rust is a dereliction of duty”. Which implies “you have a duty that is not listed in your software license, and which I won’t elaborate on”.

          In other words, what he’s saying is, ultimately, “nobody should make changes to their software that I disapprove of”. Which is wildly unreasonable.

          Unsurprisingly, his political views skew towards “the only people who are allowed to be part of society are people that I approve of” (he approves only of a certain subset of Christian faithful, who must also be straight and cisgendered, believe that certain parts of the Constitution are invalid, and be in the top fraction of earners).

          In other words, his poor political thinking mirrors his thinking on the politics of open source development in ways that make it relevant to the conversation.

          1. 2

            What’s happening is “making a change to one software project”, that is provided under a license that does not guarantee support.

            What’s happening is “making a breaking change” without marking it as such (deprecation, a major version number bump, that kind of thing). That’s not okay. I don’t care that it was a labour of love provided for free. The rules should be:

            1. Don’t. Break. Users. Pretty please.
            2. If you break rule (1), bump the major version number.
            3. If rule (1) was broken, keep the old versions around for a while. Bonus point if you can actually support them (fix bugs etc).
            1. 6

              It’s not a breaking change if it breaks on platforms that were never explicitly supported (not even by Python). (I’m not entirely sure whether they used to support these explicitly, but it’s not obvious.)

              1. 2

                I agree if there’s an explicit description of supported platforms. If not… the only reasonable fallback would be any platform supported by Python where it builds & runs out of the box. That can mean a lot of platforms.

            2. 2

              If you break rule (1), bump the major version number

              They did, at least for certain values of “major version number”. Rust was introduced in 3.4. The x.y releases are when backwards-incompatible changes are introduced. They’re changing their versioning scheme to make it more explicit: the next major version will be 35 (instead of 3.5).

              Honestly, a quick look at their changelog shows backwards-incompatible changes at 3.3, 3.1, 3.0, 2.9, … In other words, adding rust when they did was entirely in line with their release cycle.

              1. 1

                I commend them for listening to feedback (or is it caving in to mass anger?) and switching to semantic versioning. That’s cogent evidence of trustworthiness in my book.

      2. 3

        Author here.

        Everyone is full of crud on some level. However, interacting with people nicely on the Internet is a good way for me to get rid of that and to become less of a terrible person.

        On that note, hi! I’m Gavin. Nice to meet you. :)

    20. 7

      The author seems very shortsighted in my opinion. Even though I don’t like that the cryptography library is moving to Rust, I’d rather they had created a new crypto library in Rust and donated the old C one to new maintainers. It is, in the end, their call, and Rust will provide a ton of safety features out of the box.

      I think that it all boils down to which platforms the cryptography library committed to support. If they want to support all platforms that Python runs, then RIIR is the wrong option. If they’re going to make it clear that it supports just a subset of those, then it is quite OK. People who are in the orphaned platforms can fork the last known working version and move ahead with a new team.

      also SIGH, but the first post linked on his sidebar is a rant where he explains why he refuses to wear a mask and calls businesses that force him to wear one tyrants…

      1. 4

        Is your concern that they are keeping the “brand” for cryptography? There’s certainly nothing preventing new maintainers from taking over the C version…

        1. 1

          not the brand from a marketing perspective, but from the point of view of build systems and other automated tools. Those who are pulling its sources will depend on a whole new toolchain to build it, which affects package maintainers and products for a whole range of architectures.

          IMO, drastic changes like that should be done in new libraries so that they don’t break other systems. They could’ve declared that library finished, created cryptography2 or some other name, and proceeded to RIIR. The people using their library would have the option to stay with the unmaintained version and keep their stuff working, or go through the trouble of upgrading to the new library.

          1. 5

            They could’ve declared that library finished, created cryptography2 or some other name and proceed to RIIR

            That’s a strange inversion of responsibility. It’s also not viable since they’re doing an incremental rewrite, not a wholesale one. A wholesale rewrite might warrant a new library, but incremental rewrites are reliant on the full existing code and history to do properly.

            The responsibility for maintaining a library falls on the library maintainers. The responsibility for maintaining a distro falls on the distro maintainers. If a library stops supporting an environment, then the distro has to choose: also stop supporting the environment, or fork the library which they have license to do. “Tell the library maintainers to maintain their library differently” is not one of the options. Or, at least, not an option that will work.

          2. 5

            I personally would be okay with just a major version number bump or equivalent. Keep the brand, say the newer version is now the canonical one. Just:

            • keep the old one for a while;
            • mark the change as “breaking”.
      2. 2

        Rust will provide a ton of safety features out-of-the-box.

        Those safety features are almost irrelevant in the specific case of cryptographic code. Platform support aside, it won’t hurt, but it won’t do much good either. Don’t get me wrong, I would love that Rust be supported everywhere. Until then, C is still the better choice for portable cryptographic code.

        My biggest qualm about this change is that they didn’t mark it as “breaking change, let’s bump the major version number”.

        1. 5

          “My biggest qualm about this change is that they didn’t mark it as “breaking change, let’s bump the major version number”.”

          This was misunderstood by others as well. This project does not use semantic versioning, and the users were warned beforehand:

          https://cryptography.io/en/latest/api-stability.html#versioning

          https://github.com/pyca/cryptography/issues/5771#issuecomment-775038889

          https://github.com/pyca/cryptography/issues/5771#issuecomment-775041677

          1. 2

            They’ve changed their mind, and from now on their project will use a semver-compatible system. The version they released that caused the issues was on their old (weird) scheme, tho.

            1. 0

              Kudos to them, then. While ideally they should probably have reverted the change, switched to semver, and then applied the change back, the switch to semver shows that they are listening to feedback.

      3. 0

        it’s significantly worse than that; see my other response here. this is simply a terrible person.

        1. 7

          Why don’t we stick to discussing the article, and leave the person out of it?