Threads for ubernostrum

    1. 7

      It feels like this moves the goalposts so far away that it’s impossible to produce something that is “truly Free Software” by the author’s definition. If you ship a game under the freest and most freedom-respecting Free Software license ever devised, but I decide after downloading it that what I really wanted was a spreadsheet, does your game stop being Free Software because it’s hard for me to turn it into Eve Online?

      1. 5

        No, but if it’s really hard to mod the game at all because it has a very complex build system, then that makes one of the benefits of free software practically inaccessible to the end user despite the license. I agree that it’s good to reserve the term “free software” for software licensed under terms that make it legally possible for users to exercise the Four Freedoms with it, regardless of how easy or difficult the actual build steps are; but that said, it’s still desirable for software to be easy to modify in a practical way, which is genuinely orthogonal to free vs nonfree.

        1.  

          The thing is, though, that making software which is both sufficiently good and useful as-is and sufficiently customizable to suit these arguments is a far higher bar than anything I’ve historically seen advocated by the Free Software community.

          And in fact the leading lights of that community have historically been extremely skeptical of too much customizability on grounds that it might open loopholes for proprietary software to use Free Software without being “infected” by copyleft. Consider the long and contentious history of Stallman vetoing features in GCC because “adversaries” might “benefit” from them, for one example.

      2.  

        This joke really caught me off-guard. I guess GNU Online would start as a LibreOffice fork.

    2. 10

      Simple is better than complex. Fast is better than slow. Cheap is better than expensive. On-time is better than overdue. Correct is better than buggy. Good is better than bad.

      1. 1

        Chess coaches sometimes jokingly say “Just don’t blunder!”

        Programming teachers can say “Just don’t write bad code.”

        1. 3

          The gaming community distilled it further down to “git gud”.

      2. 1

        In high school my friend was guessing and guessing a password he’d forgotten. Trying to be helpful, someone said “did you try the right one?”

      3. 1

        Worse is better :)

    3. 1

      Nice to see there’s software out there with chess master versioning. I wonder if there is intended wordplay with Nimzowitsch and Larsen pioneering hypermodern systems.

      1. 1

        I’m not sure I get the claim – the Nimzo-Indian, not the Nimzo-Larsen, is what I think of as associated with his name. And the chess.com master-level games database gives:

        • 96,120 games in the Nimzo-Indian: 1.d4 Nf6 2.c4 e6 3.Nc3 Bb4
        • 18,883 games in the Nimzo-Larsen: 1.b3
        1. 2

          He probably meant the Nimzo-Indian, which is indeed popular. In addition to Larsen’s 1.b3, there is also 1.Nf3 d5/Nf6 2.b3, which Nimzowitsch tried and which is probably why it’s called the Nimzo-Larsen. It’s nowhere near as popular as the Nimzo-Indian in classical chess, but it has gained some popularity as a blitz/bullet opening in online chess. Hikaru Nakamura plays it often (with the inclusion of 2.e3), for example.

        2. 1

          which part of my comment is confusing? the hypermodernism part?

          1. 2

            The confusing thing is at the link, which says:

            we call it NimzoLarsen after the most famous Chess opening popularized by Nimzowitsch.

            You wondered if they intended a connection to hypermodern chess, and I’m saying that I have doubts about their understanding of the chess side of things (because if you asked chess players to name an opening popularized by Nimzowitsch, I think the Nimzo-Indian would be far and away the most common response, just as it is far and away the more commonly played opening when compared to the Nimzo-Larsen, the Nimzowitsch Sicilian, etc.).

    4. 2

      Related: if all you want is to ensure that example code snippets in your docs actually work, the Sphinx documentation tool has built-in support for that. It doesn’t figure out your “documentation coverage” but is still something I’ve found quite useful.
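
      To sketch the idea (this uses only the stdlib doctest module, which is the snippet format that Sphinx’s sphinx.ext.doctest extension executes when you run make doctest; the function here is made up for illustration):

          # Doctest-style example: the >>> lines are executed and their output
          # is compared against what the docs claim. sphinx.ext.doctest does
          # the same for snippets embedded in your .rst files.
          def slugify(text):
              """Lowercase a string and replace spaces with hyphens.

              >>> slugify("Hello World")
              'hello-world'
              """
              return text.lower().replace(" ", "-")

          if __name__ == "__main__":
              import doctest
              doctest.testmod()  # reports any example whose output doesn't match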

    5. 8

      A little while back I wrote a comment that for some reason collected a bunch of “troll” votes despite the fact that it consisted entirely of presumably-serious arguments made in favor of static typing by presumably-serious people who presumably expected to be taken seriously while making such arguments (“doesn’t scale”, “you have to write too many tests”, “IDEs can’t ever support it”, etc.). Except I flipped them around to be in favor of dynamic typing, and otherwise argued as such arguments usually go.

      So this time I’ll just be blunt. The static-versus-dynamic “debate” is stupid. Every attempt to demonstrate empirically the taken-as-self-evident “truth” of the superiority of static typing has failed, and dynamically-typed languages have continued to be unreasonably (from the perspective of static-typing fans) effective and popular in the real world.

      But that doesn’t stop people from trying to force static typing onto every language, just because they can’t handle writing dynamically-typed code. Sigh.

      What’s amusing to me in the case of Python is how much the community has basically decided that the static-checking use case is almost the least important thing you can do with type annotations; all the actually interesting work and evolution in Python that involves type annotations is doing things with them at runtime. Libraries like Pydantic deriving runtime validation and serialization/deserialization logic. Web frameworks like FastAPI and Litestar building on that, and also building things like dependency injection and OpenAPI documentation auto-derived at runtime from type annotations. Database toolkits like SQLAlchemy using type annotations to define DB schema.

      And on and on and on – there’s a ton of interesting work being done with Python type annotations. It’s just that a huge amount of it does not involve running static checkers on code.
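
      As a small taste of that runtime-driven style, here’s a sketch assuming Pydantic v2 and Python 3.10+ (the model is made up for illustration):

          from pydantic import BaseModel

          class User(BaseModel):
              id: int
              name: str
              email: str | None = None  # the annotations drive runtime behavior

          # Validation and coercion derived entirely from the annotations:
          user = User.model_validate({"id": "42", "name": "Ada"})  # "42" becomes 42
          print(user.model_dump_json())  # {"id":42,"name":"Ada","email":null}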

      1. 2

        Libraries like Pydantic deriving runtime validation and serialization/deserialization logic. Web frameworks like FastAPI and Litestar building on that, and also building things like dependency injection and OpenAPI documentation auto-derived at runtime from type annotations. Database toolkits like SQLAlchemy using type annotations to define DB schema.

        Last time this came up, I learned about typer too. Personally, I’ve started to get attached to typing as a form of documentation, even when not enough of the ecosystem is annotated for the static checking use case to have any legs at all.

        I think part of the reason the community has decided that the static checking use case is less important is that you really need annotations to be more widespread than they currently are for it to be all that useful.

        I like the current balance but wouldn’t be unhappy if types were widespread enough to make it worth adding mypy to my stack of pre-commit hooks.

    6. 21

      The traditional counterpoint here is to remind everyone Go isn’t actually “simple”, it’s simplistic, and that bad things can and will happen when the simplistic Go model of the world does not match the complicated reality.

      Normally this is done by linking “I want off Mr. Golang’s Wild Ride”, which presents examples of the “simplistic-ness” of Go and the trouble it can cause, but maybe it’s time for the genre to get a few more well-known samples. Though it’s still worth quoting from the original conclusions:

      Over and over, Go is a victim of its own mantra - “simplicity”.

      It constantly takes power away from its users, reserving it for itself.

      It constantly lies about how complicated real-world systems are, and optimize for the 90% case, ignoring correctness.

      It is a minefield of subtle gotchas that have very real implications - everything looks simple on the surface, but nothing is.

      1. 2

        I’ve read that article a few times and ultimately I think the author probably knows what they are talking about and even makes some good points, but the bulk of the article doesn’t actually do a good job of supporting their thesis. I’d go as far as to say that the 2022 update at the conclusion makes reference to the REAL article that they should write:

        I’ve lost count of the amount of incidents directly caused by poor error handling, or Go default values.

        That sounds like actual real-world problems, not nitpicking a quirk of a weak-but-stable stdlib platform accommodation which is unlikely to have any impact on the things people are actually using Go for. Likewise, the dependency graph of a poorly considered third-party library is hardly demonstrative of a problem with the language, aside from the discussion about how best to expose the feature in the stdlib in a way that works well with all of the existing stdlib design and expectations.

        1. 4

          I think the examples about file metadata are actually really instructive, because as the article points out, the cross-platform reality of file metadata is actually quite complex, but Go tries to hide it behind a simplistic interface and as a result it’s very easy to write programs that seem like they do the right thing but actually don’t.

          The HTTP request example is another one I’m deeply sympathetic to – in the Python world, linters have to flag use of the requests library because, although it can at least easily add timeout behavior to a request, it doesn’t do so by default, which then means you can wind up waiting forever on a request with no timeout. The fact that it pulls in so many extra packages in Go, and that the huge dependency graph boils down to a package that has to do the empty-assembly trick to access nanotime(), is just the chef’s-kiss of awfulness on top, and fits in well with the author’s other criticisms of Go (especially its “do as I say, not as I do” approach).
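
          For anyone who hasn’t hit this: the pitfall is that requests applies no timeout by default, so the fix the linters push you toward is one keyword argument (a sketch; the URL is illustrative):

              import requests

              # resp = requests.get("https://example.com/api")  # no timeout: may hang forever

              # What linters (e.g. Bandit's request-without-timeout check) want instead:
              resp = requests.get("https://example.com/api", timeout=5)  # seconds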

          1. 1

            Alright, you’ve kind of won me over as to how it supports their thesis, but…

            the cross-platform reality of file metadata is actually quite complex, but Go tries to hide it behind a simplistic interface

            I’m just gonna go out on a limb and say this wasn’t a decision made to hide complexity; rather, it speaks to the desire to keep the original stable API unchanged in spite of needing it to represent a platform to which it doesn’t cleanly map. I think the author is projecting an intent, drawing conclusions about an overall philosophy, and then using that as a straw man for their arguments.

            Multi-platform standard libraries are always going to have to strike a balance between only providing the lowest common denominator of all of the potential platforms AND providing actually useful abstractions for getting work done without encouraging the proliferation of a million different platform specific modules. So, in this particular case, we’ve chosen to pick on Windows filesystem support, which doesn’t matter to 99+% of Go users, and the 1% it does matter to are unlikely to actually run into the problems which the author is highlighting.

            To me that /feels/ like Go has chosen a decent balance, providing utility in the stdlib which fits the majority of the use cases. AND it cannot be emphasized enough that the entire stdlib is itself written in Go, so someone who wants to be very explicit about all of the minutiae can choose the path less well traveled and be as specific and intentional as they want to be.

            I suppose the problem I have with this example is that the author says this in his update:

            That’s not how most Go code is written though. I’m interested not in what the language lets you do, but what is typical for a language - what is idiomatic, what “everyone ends up doing”, because it is encouraged.

            but the example they have chosen uses the encouraged and idiomatic patterns to do unusual things on an uncommon target platform. A better example would support their thesis using idiomatic and encouraged practices on a common platform to do normal things. Which, he goes on to mention:

            I’m tired of being woken up due to the same classes of preventable errors, all the time.

            So… why not use one of these innumerable classes of preventable errors as the example? I’m guessing that they aren’t being woken up to fix a broken production system because it’s splitting the wrong file extension on a Windows filesystem containing files whose names are composed of non-UTF-8 characters. Choosing that example feels like cherry-picking and seems disingenuous.

            The author cannot go back in time and change the example they’ve chosen, any more than the Go developers can go back in time and decide that the stdlib’s API should be lower-level, less featureful and more generic because, many years in the future, some small percentage of Go users would be targeting Windows and an even smaller subset of those users would be experiencing some strange bugs.

            At any rate, I’ve now spent entirely too much of my time thinking about this :)

            1. 2

              Multi-platform standard libraries are always going to have to strike a balance between only providing the lowest common denominator of all of the potential platforms AND providing actually useful abstractions for getting work done without encouraging the proliferation of a million different platform specific modules.

              The article shows how Rust manages to thread the needle on this – there aren’t “a million different platform specific modules”, there’s just the base set of common file permissions every platform supports, and the richer permissions that are only supported on Unix.

              So, in this particular case, we’ve chosen to pick on Windows filesystem support, which doesn’t matter to 99+% of Go users, and the 1% it does matter to are unlikely to actually run into the problems which the author is highlighting.

              If Go wants to be a Unix-only (or “Unix and Plan 9”) programming language, it’s welcome to be one. But it doesn’t advertise itself as one – it advertises itself as cross-platform, while deeply embedding Unix-isms and planning to say “well it’s your fault for using that operating system”.

              The author cannot go back in time and change the example they’ve chosen, any more than the Go developers can go back in time and decide that the stdlib’s API should be lower-level, less featureful and more generic because, many years in the future, some small percentage of Go users would be targeting Windows and an even smaller subset of those users would be experiencing some strange bugs.

              This is the kind of dismissiveness that turns me off Go. Again, if the idea is to just declare Windows a non-supported platform, then do that. Half-assing it the way Go currently does is just going to make people angry.

    7. 16

      Regardless of what is implemented, the implementer should spend X hours each week shoulder-surfing people’s computer usage, to work out what friction points need to be fixed. It’s harder with the privacy aspects around healthcare, but it should still be doable.

      1. 19

        When I worked in this field, it was a hard requirement for everyone in an engineering position to spend some time in a hospital, interacting with healthcare professionals and, if possible, attend medical procedures (including surgeries). Sadly it was a company thing, but IMHO it should be a regulatory requirement. It was difficult to make that happen – there was a lot of bureaucracy and there were a lot of legal barriers, and with good reason, since most of us weren’t trained healthcare workers so the people who knew what they were doing had to watch us every step of the way.

        It was an extraordinarily useful experience, not only because it taught you more about medical equipment design and maintenance than any book, workshop or training could teach you, but also because it gave you a very real understanding of the pace at which decisions have to be made, of how important they are, and of what conditions they’re made under. The gold standard for interfaces isn’t “can a user who’s never seen your software before operate it”. It’s “will the head nurse who had to take a double shift, using software they weren’t even allowed to touch before going through training, be able to use it at the end of the second shift without a nervous breakdown”, bearing in mind that the barrier for nervous breakdown is way above being yelled at and spat on by teenagers high on shit you haven’t even heard of.

        A small but non-zero proportion of people would quit after their first field day. At the time I understood that was one of the reasons why the folks upstairs tried to get new hires into a hospital as quickly as possible: so they knew what they were in for right away, and had time to think about whether they wanted to keep going before investing too much time in a job they’d want to leave the day after their first hospital visit.

        I agree with you wholeheartedly. IMHO it should be a hard requirement for anyone who works on mission-critical software/equipment to be there. The notion that you can write software, design interfaces or equipment, devise security measures, whatever, without understanding how the software or the equipment is used, based on nothing but general principles, is complete and utter nonsense.

        Consumer/business/enterprise software vendors can do that (and they do it as a cost control measure, hence the abysmal state of HMI in the last decade or so) for all I care, as users can always vote with their wallet. But with mission-critical software that’s not always the case – regulatory requirements and long equipment lives make vendor lock-in a lot easier, and not everyone involved (e.g. patients, for medical software) gets to have a say in what gets used for them.

        Frankly, I think it would be a useful exercise just to get people in our industry some real-life exposure in general. Some of the people who quit after their first field day just doubted they could handle the pressure, and that’s 100% understandable (and quitting is totally the grown-up thing to do in my book; my stint wasn’t very long, either, for unrelated reasons – for all I know I could’ve had a mental breakdown of my own in ten years’ time).

        But word around the watercooler was one dude had quit because he thought he wasn’t compensated nearly enough for how stressful the job actually was – although his salary was way better than that of a nurse or a paramedic. Another former colleague had not just quit, but had to go to therapy for years after a few hours’ worth of experiencing, second hand, what nurses and medics experience every day – and we didn’t work on the kind of equipment that gets used in the ER or anything, it was very tame stuff by hospital standards. It completely changes your idea of what workplace stress is about and what you have to do to cope with it. And about what professional respect and professional standards mean.

        1. 10

          I worked at a company that would allow (but not require) doing ride-alongs to in-home care visits, and which encouraged working with the “non-technical” employees and teams. It can lead to very productive partnerships (recognizing places where even some basic improvements to software will give huge improvements for the team that uses it), but also requires very flexible and understanding management, since you end up with a bunch of project ideas that aren’t on the official product roadmap.

      2. 1

        Especially doable if the value of X is unspecified.

    8. 27

      NIST stated “Verifiers SHOULD NOT require memorized secrets to be changed arbitrarily (e.g., periodically)” back in 2017, yet my previous company insisted on password rotations as late as 2022 (when I stopped working there). Apparently, not many at the C-level got the memo.

      1. 14

        My experience of “security” teams in health care is that they are the source of a lot of the stereotypes about bad security teams.

        I’ve been told that forced quarterly password rotation is required by “regulation” (no it isn’t). I’ve been told that as a software developer who does not provide care or interact with patients, I am not permitted to have my calendar loaded on my personal phone unless I fully MDM it, because of “regulation”. But Slack (which will sign a BAA and let you put PII in messages and file attachments) on the same personal phone apparently didn’t require MDM.

        I could go on and on about this stuff, and it’s one reason why, despite feeling like I was working on good things and doing good, I’m glad to be working in a different field now.

      2. 2

        Sadly, security policy is most often dictated by what feels safe rather than what is safe.

    9. 5

      I like a lot of this advice; parts like “always return objects” and “strings for all identifiers” ring with experience. I’m puzzled that the only justification for plural names is convention, when the post is otherwise not at all shy about overriding other conventions like 404s and .json URLs. It’s especially odd because my (unresearched) understanding is that all three have a common source in Rails API defaults.

      1. 4

        The difficulty with 404 is that it expresses that an HTTP-level resource is not found, and that concept often doesn’t map precisely to application-level resources.

        As a concrete example, GET /userz/123 should (correctly!) 404 because the route doesn’t exist, I typoed users. But if I do a GET /users/999 where 999 isn’t a valid user, and your API returns 404 there as well, how do I know that this means there’s no user 999, instead of that I maybe requested a bogus path?

        1. 11

          how do I know that this means there’s no user 999, instead of that I maybe requested a bogus path?

          From solely the status code, you don’t.

          Fortunately, though, HTTP has a thing called the response body, which is allowed to supply additional context and information.

          1. 2

            Of course, but I shouldn’t need to parse a response body to get this basic level of semantic information, right?

            1. 0

              Yeah, you should, because if we require that there be enough fine-grained status codes to resolve all this stuff we’re going to need way more status codes and they’re going to stop being useful. For example, suppose I hit the URL /users/123/posts/456 and get a “Route exists but referenced resource does not” response; how do I tell from that status code whether it was the user or the post that was missing? I guess now we need status codes for

              • Route does not exist
              • Route exists but first referenced resource does not
              • Route exists but second referenced resource does not

              And on and on we go.

              Or we can use a single “not found” status code and put extra context in the response body. There’s even work going on to standardize this.

              Remember: REST is not about encoding absolutely every single piece of information into the verb and status code, it’s about leveraging the capabilities of HTTP as a protocol and hypertext as a medium.
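
              To make that concrete, here’s a sketch of the response-body approach – a hypothetical Flask handler, loosely following the “problem details” format from RFC 7807, which I believe is the standardization work in question:

                  from flask import Flask, jsonify

                  app = Flask(__name__)

                  @app.errorhandler(404)
                  def not_found(e):
                      # Same 404 status code either way; the body says *what* wasn't found.
                      return jsonify({
                          "type": "https://example.com/problems/no-such-user",
                          "title": "User not found",
                          "detail": "No user with id 999 exists.",
                          "status": 404,
                      }), 404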

        2. 7

          There’s yet another interpretation too, that’s particularly valid in the wild! I may send you a 404 because the path is valid, the user is valid, but you are not authorized to know this.

          Purists will screech in such cases about using 403, but that opens the door to enumeration attacks and whatnot, so the pragmatic thing to do is just 404.

        3. 3

          Perhaps a “204 No Content”?

          1. 3

            That doesn’t convey the message “yeah, you got the URL right, but the thing isn’t there”

            1. 1

              I think it basically does.

              1. 1

                Well, it says “OK, no content”. Not, “the thing is not here, no content”. To me these would be different messages.

        4. 1

          The last time this came up, I said “status codes are for the clients you don’t control.” Using a 404 makes sense if you want to tell a spider to bug off, but if your API is behind auth anyway, it doesn’t really matter what you use.

          https://lobste.rs/s/czlmyn/how_how_not_design_rest_apis#c_yltriz

          1. 1

            You never control the clients to an HTTP API. That’s one of the major selling points! Someone can always curl.

    10. 4

      I feel that SemVer has two major problems, neither of which are really the fault of the spec:

      1. Most projects are not carefully planned upfront with a very clear external/internal interface (some languages help provide this distinction, but not all), which leads to a lot of breaking changes. Staying on version 0.x essentially just defers SemVer until you feel you are ready to actually be SemVer-compatible.
      2. Due to the above, strict SemVer (or ComVer) requires that a lot of updates result in a new major version, which means users cannot upgrade that dependency without carefully skimming through the changelog and adapting to any breaking changes for features they use.

      In the case the author gives, where a tiny breaking change fixes undocumented (wrong) behavior, I wonder whether it shouldn’t really be intentionally kept as a minor/patch release, with maybe a follow-up patch to alleviate the issue (either accepting the wrong behavior as official, or planning another change down the road). Because if every major version bump is likely to not affect most users, because it’s triggered by edge-case breaking changes, you will end up with very slow adoption, as each major version demands careful consideration from end users.

      On the other side you have data science libraries like pandas that regularly make major breaking changes (unintentionally or not) in minor versions, and that’s not better either. But having this strict interpretation of major versions is good for stability, bad for adoption.

      1. 11

        I think the problem is that most OSS is in fact on version 0 but no one wants to admit it. If you don’t have an API that can be kept stable for years and multiple maintainers with commit access, your project is at v0. But people use v1 to mean “production ready” instead which is different. It’s production ready if it can solve a problem reliably in production, but that’s not the same as v1.

      2. 9

        I should write a blog about this somewhere so I can cite it and stop repeating it, but the core problem with SemVer is that it is used to version implementations, not interfaces. You cannot do graceful deprecation with SemVer. In a project with a good support cycle, you have three states for interfaces within an implementation:

        1. Supported.
        2. Present but deprecated.
        3. Gone.

        Each release will cycle interfaces through this little state machine. You cannot express this if you’re using SemVer for the implementation. If your library supports an interface Foo, you have three versions in SemVer:

        • 1.0 - Foo is supported.
        • 1.1 - Foo is deprecated, Bar is supported.
        • 2.0 - Foo is gone, Bar is supported (hopefully not deprecated already)

        1.1 to 2.0 is not a breaking change for anyone that moved from Foo to Bar, but there’s no way, if you are using SemVer for implementations, to indicate this. You may even have more complicated things such as

        • 1.0 - Foo is supported.
        • 1.1 - Foo is supported but has some new features.
        • 1.2 - Foo is deprecated, Bar is supported.
        • 2.0 - Foo is gone, Bar is supported (hopefully not deprecated already)

        Now moving from 1.1 to 2.0 is a breaking change for everyone, but moving from 1.2 to 2.0 is not for anyone who is heeding their deprecation warnings. The thing that you want is to use SemVer for interfaces, where each version of the implementation has a tuple of interface versions. Now the flow is easy:

        • {1.0} (Foo is supported)
        • {1.1} (Foo is supported and has new features)
        • {1.1, 2.0} (Foo is supported as is Bar)
        • {2.0} (Foo is gone, Bar remains)

        Now, if your dependency resolution first says ‘I need 1.x’ then it will match the first three versions. When you get to the third, it will say ‘by the way, there’s a newer thing you might want to migrate to’. Then you update it to say 2.0 and it still works with the third one, but will allow you to move to the fourth.
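
        A toy sketch of that resolution flow (everything here is made up for illustration):

            # Each release advertises the set of interface versions it supports.
            releases = {
                "1.0": {(1, 0)},          # Foo supported
                "1.1": {(1, 1)},          # Foo supported, with new features
                "1.2": {(1, 1), (2, 0)},  # Foo still supported, Bar now supported
                "2.0": {(2, 0)},          # Foo gone, Bar remains
            }

            def satisfies(ifaces, wanted_major):
                # A release is compatible if it supports *some* version of the
                # interface major you depend on.
                return any(major == wanted_major for (major, _minor) in ifaces)

            print([v for v, ifaces in releases.items() if satisfies(ifaces, 1)])
            # ['1.0', '1.1', '1.2'] -- "I need 1.x" matches the first three releases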

        There are more subtle problems that relate to how richer type systems interact with the guarantees in SemVer. For example, anything that does pattern matching on structural types makes adding or removing a feature a breaking change.

        1. 5

          I’ve mentioned it before, but I think Django’s approach – which is not semver – is a good one.

          Django does three feature releases per major version: X.0, X.1, X.2. So over the past few years there’s been Django 3.0, 3.1, 3.2, then 4.0, 4.1, 4.2, and now 5.0 is approaching release.

          The Django API compatibility policy is that every third feature release (the X.2) is an LTS, and the nice upgrade path is LTS-to-LTS. If your app is currently running on an LTS, and emits no deprecation warnings, the same codebase will run unmodified on the next LTS. So if you had an app running on 3.2 LTS, you could clear any deprecation warnings it emits and then jump direct to 4.2 LTS.

          It’s not semver because the major version number does not tell you anything about breaking changes; the rule is that a piece of API that’s going to go away will emit deprecation warnings for two releases, and then it’s gone, and that happens in every feature release, not just major version bumps.
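
          The mechanism behind “emits no deprecation warnings” is just Python’s stdlib warnings machinery (Django’s release-specific classes like RemovedInDjango50Warning subclass the standard warning types, if I remember right); a generic sketch with made-up names:

              import warnings

              def old_helper():
                  # Library side: still works, but announces its scheduled removal.
                  warnings.warn(
                      "old_helper() is deprecated; use new_helper() instead.",
                      DeprecationWarning,
                      stacklevel=2,
                  )
                  return new_helper()

              def new_helper():
                  return "ok"

              # Application side: surface these while testing, so nothing deprecated
              # survives to the release where it's removed, e.g.:
              #     python -W error::DeprecationWarning -m pytest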

        2. 3

          Interesting. In terms of web APIs, my thinking is that it’s good to do

          Release 1:

          • /api/foo exists
          • /api/bar exists

          Release 2:

          • /api/foo is deprecated
          • /api/foo-v2 is added
          • /api/bar exists

          Release 3:

          • /api/foo is removed
          • /api/foo-v2 exists
          • /api/bar exists

          As opposed to having /api/v1/… and /api/v2/… because that way you can handle the lifecycle for endpoints individually.
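
          A sketch of what the deprecation phase in release 2 might look like in practice – hypothetical Flask routes, using the Sunset header (RFC 8594) and the successor-version link relation (RFC 5829) to advertise the lifecycle:

              from flask import Flask, jsonify

              app = Flask(__name__)

              @app.route("/api/foo")  # release 2: deprecated but still served
              def foo_v1():
                  resp = jsonify({"result": "legacy shape"})
                  resp.headers["Sunset"] = "Sat, 01 Jun 2024 00:00:00 GMT"  # planned removal
                  resp.headers["Link"] = '</api/foo-v2>; rel="successor-version"'
                  return resp

              @app.route("/api/foo-v2")
              def foo_v2():
                  return jsonify({"result": {"shape": "new"}})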

          1. 3

            Yup. That’s precisely what good versioning looks like and it works because you’re doing SemVer on interfaces, not on implementations.

        3. 3

          Very good point.

          On top of that, for type-safe languages, I’d prefer another nuance: breaking because of a compile error due to a breaking change is annoying, but breaking because of a runtime error due to a breaking change is way worse. It would be nice if I could predict which one will happen by looking at how the version changed.

        4. 1

          I wrote this a couple years back which might help framing things: https://labs.tomasino.org/contract-based-dependency-management/ And here is a good follow-up response to the idea: https://rys.io/en/156.html

        5. 1

          I like this approach, as well as what django does (per @ubernostrum’s reply), in theory. But it requires careful up-front planning, and my experience is that a lot of projects are driven by requests for performance or features, whether in private professional work or OSS.

          And often those features cannot “wait” for multiple versions until you’ve paved a smooth upgrade path with deprecation warnings, which leaves you with two choices: strict SemVer (lots and lots of major versions, often) or a looser approach to versioning where you say “okay, we are introducing a new feature and tweaking some bits” and call it a minor version.

          Both Django and Python itself, as well as other packages, have versioning that is not classic SemVer, but they attach a strict meaning to what a version number means anyway.

          The silver bullet would be a system that caters to people who want to do right by their users, but do not have carefully planned interfaces or feature roadmaps. I am not sure if such a system logically can exist or makes sense, but it would be nice.

      3. 6

        The question is: what is a breaking change anyways? Even fixing a bug can break a client that relied on that bug. (mandatory semi-related xkcd)

        1. 2

          On the one hand, it’s a continuum. If you have some public method but it turns out that no one in the world was using it, you’re not actually breaking anyone by removing it, so it’s not a “breaking change”; and vice versa, if there is some undocumented internal algorithm that you change and it breaks people, it is a “breaking change.” But I think more realistically, it’s about setting appropriate boundaries and expectations: if you do X, Y, Z, and not A, B, C, we promise we will try to make sure our updates don’t break your software for as long as possible. Most languages have a certain amount of culturally defined boundaries, like you can use public methods, but not underscore-prefixed methods or transitive dependencies or whatever.

    11. 25

      Background: back before Django was originally publicly released, its ORM was literally a code generator. You would run it over a set of model-definition classes, and it would output some new files of code containing the query functions for those models. This apparently got some pushback from people who were shown early versions of what became Django, but there wasn’t enough time to completely rewrite it before the first public release. So Django launched with an ORM that was still, at heart, the code generator; the only difference was that instead of writing the files out to disk, it generated the code as modules that only existed in memory, and hacked them into the import path to make them work (this is not as hard as it sounds).
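
      (For the curious, a minimal sketch of the in-memory-module trick – illustrative, not the actual early-Django code:)

          import sys
          import types

          # Build a module in memory; no .py file exists anywhere on disk.
          generated = types.ModuleType("generated_models")
          exec("def all_entries():\n    return ['built at import time']", generated.__dict__)

          # Drop it into the import system's cache...
          sys.modules["generated_models"] = generated

          # ...and from then on it imports like any ordinary module.
          import generated_models
          print(generated_models.all_entries())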

      This still was pretty unpopular, so for the Django 0.95 release the ORM was completely rewritten, an effort which came to be known as “magic removal” (since the original ORM would “magically” cause whole modules of code to appear in your import path that you’d never written).

      At the time, I worked for the newspaper company where Django had originally been developed, and we both used internally and sold commercially a news-oriented CMS built on top of Django, which was going to take a while to port to the new “magic-removal” version of Django. So I volunteered to help maintain bugfix branches of the pre-“magic-removal” Django, which ended up being available not just to us and our customers, but also to anyone who wanted the code. The branches got renamed a while back, but are still visible in the main Django GitHub repository.

      It mostly wasn’t that bad; some critical bugfixes from later post-“magic-removal” Django were backported in, but the pre-“magic-removal” code was actually pretty stable and since there was no new feature work happening, there were only around 40 commits over the course of a little over two years of maintaining it.

      The worst part was this bug, which was the bane of several people. As befits a gnarly bug, once I finally understood what was actually going on (and please don’t expect me to be able to explain it now, 16 years later, when I’ve forgotten almost everything I ever knew about the internals of the pre-“magic-removal” ORM), the fix was literally a two-line change. Though it turned out the same bug lurked in the new post-“magic-removal” Django, and had to be fixed there too.

      I don’t know if I’d do that again; my views on software updates and addressing technical debt before it reaches the “oops, gonna need two years to dig out of this hole” stage have evolved a lot over the course of my career, and so now I push hard for staying up-to-date and handling changes in dependencies ASAP.

    12. 2

      Also, if you have to deal with timezones in Python, use pendulum (https://pendulum.eustace.io)
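
      A quick taste of what it gives you (a sketch, assuming pendulum is installed):

          import pendulum

          dt = pendulum.datetime(2023, 7, 14, 12, 0, tz="Europe/Paris")
          print(dt.in_timezone("UTC"))  # timezone conversion
          print(dt.add(days=1))         # DST-aware date arithmetic
          print(dt.diff_for_humans())   # something like "1 year ago"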

      1. 1

        Wasn’t aware of this. It looks great, thanks!

        1. 2

          Also sometimes useful is this library which does not actually implement any date/time logic but does provide drop-in wrappers for the datetime module’s contents, with the specific goal of making dates and datetimes, and the timezone-aware and timezone-naïve versions of them, all be different and incompatible-to-a-type-checker types. So, for example, datetype.AwareDateTime is a separate type from, and type-checks as incompatible with, datetype.NaiveDateTime and datetype.Date.

    13. 15

      The mIRC example is, um, not great. Author says it has

      Icons discernible through both shape and colour.

      But when I look at it, approximately half the icons are small rectangles. Six of those are rectangles with tiny pithy text labels inside them, giving even less room for the graphical elements that supposedly help to distinguish them further. And even with my glasses on it’s quite difficult, at normal distance from my screen, to clearly distinguish what’s going on in those icons.

      I strongly suspect the author is making the common mistake of having familiarity from long experience and conflating it with ease for new users (and I never used mIRC back in the day – I’ve always been a command-line IRC client user – so I don’t have the long experience and familiarity with mIRC that would help me instantly know what all those similar-looking little rectangles do).

      If someone still thinks that’s a good example, go find a person who’s under the age of 20, and ask them to guess what all these Windows 95 icons mean. Just as the author didn’t recognize a banker’s/file box, that icon sheet is full of objects and concepts which once were extremely common/recognizable/“iconic” and now are not. And so there are infamous stories like people asking what the save icon (floppy disk) in so many programs is supposed to be. Plus, take a look at that Windows 95 icon sheet again from a few feet away, and notice how many of them are not particularly distinct in shape or color.

      And this is kind of a recurring problem throughout the post. The author complains, for example, that the flag icon in Outlook “might also just be a sketch by Mondrian”, but that style of waving-banner icon for “flag” is very common, and is common because the shape of it helps to distinguish it from other plain-rectangle style icons. Does that mean someone who’d never used an email program (let alone Outlook) before would instantly know what it means? Of course not – that falls into the fallacy of the “intuitive” (i.e., instantly perfectly understood by someone who has no prior context for the program or its functionality) interface. Interfaces have to be learned, but the important thing is that they have consistent conventions for those who have learned them, and affordances to assist in learning, not that they be perfectly intuitable to a Boltzmann brain that just popped into existence a moment ago.

      And far too much complaining about “usability” really comes down to “conventions have evolved over time, away from what I knew years ago” – it’s not that modern software lacks conventions, it’s that modern software has different conventions than it did in the Windows 95 days.

      This doesn’t mean that all modern software has great usability, of course, but the older software the author holds up as better examples was not particularly great in its day, which undermines the whole “decline” narrative – there’s always been a spectrum of software usability, and I don’t think prior eras were on average much better than today, nor is today on average much worse than earlier.

      1. 3

        But when I look at it, approximately half the icons are small rectangles.

        At least they’re somewhat different colours. I look at my Gnome Files icons and they’re all small white bars on a black background.

    14. 6

      What is the “Archive” icon even supposed to depict? The lower part of a printer, with a sheet of paper sticking out?

      It’s a bankers box. It’s a common way to store archival papers in America.

      1. 6

        So, a product that’s sold across the world chose an icon representation that makes sense only to people in one country?

        HCI books from the ‘80s talk about that as a bad idea. The common example is the use of an owl, which means wisdom in many European cultures but black magic in some other parts of the world. Picking a locale-specific physical object is even worse.

        Mind you, Outlook still can’t do quoting right in replies, in spite of people complaining about it since I was a child, so I have very low expectations for that team. They’ve completely rewritten the app at least twice without fixing basic functionality.

        1. 3

          The concept of boxes into which documents are placed for longer-term storage is not unique to the US. Nor as far as I’m aware is the particular form factor — the term “banker’s box” may be the US-specific thing here.

          I absolutely have seen documentaries of museums and archives in other countries with boxes of extremely similar form factor. And clerical/office staff (the traditional target users of much “office” software) would historically have been quite familiar with such boxes.

          The real issue here is almost certainly temporal — the archival storage box is now an anachronism on par with the floppy disk save icon. It’s a metaphor for a physical-world thing that in the physical world is no longer a common object.

          1. 4

            The concept of boxes into which documents are placed for longer-term storage is not unique to the US. Nor as far as I’m aware is the particular form factor — the term “banker’s box” may be the US-specific thing here.

            I did some consulting for a company that manages warehouses for long-term document storage (and also did fun things like taking tapes from banks’ mainframes and printing their daily audit results on microfiche). They had a lot of boxes in their warehouses but very few looked like the ones in the icon. I actually owned a few boxes like that (Staples used to sell them), but I would never associate them with archiving (in part because they ended up being stored in a basement and nothing in them survived).

        2. 3

          I don’t know how common Bankers Boxes are in other countries. I know the author is Swedish, so that might affect their perspective. I do know that Manila folders are uncommon outside the US, and becoming less common in the US as computers replace filing cabinets.

          1. 1

            I can attest to that. I’d literally never seen one until I was well into my twenties. They are so uncommon that any equivalent term for it in my native language is ambiguous; pretty much every word you can use to translate the word “folder” also means “file”. We settled on an awkward convention at some point in the late ‘90s – awkward because the word used for “file” also denotes a box or a locker that you put folders in, not the other way around – but it’s a convention that’s entirely specific to computers; it has no real-life counterpart.

            My hot take on the subject is that it’s a fun anecdote but a largely irrelevant design problem. The icon is weird for sure but it takes about two double-clicks to figure out what it’s for. Other than making localisation via automatic translation weird (Google Translate & co. don’t know about the conventional, computer-specific translation of those terms, so they end up using the equivalent terms for “file” and “folder” interchangeably) it has no discernible effect on computer usage. Like all technical terms, and like all symbolic representations of abstract or technical concepts, they’re just things you learn.

        3. 1

          The common example is the use of an owl, which means wisdom in many European cultures but black magic in some other parts of the world.

          That makes me really want to use owl imagery in any arcane documentation I write. Two (correct!) meanings in one :)

        4. 1

          It’s not a US-centric thing. An insurance broker I’ve known since being a kid in the 90s has a room chock-full of these boxes, and I’m from the UK.

      2. 4

        You are both wrong, that is obviously a Lego man with a mustache, wearing a flat cap and looking towards the left.

        What truly baffles me in that picture is why the junk bin icon is next to the Delete label, rather than next to the one saying, like Junk :-).

        1. 3

          Oh, that’s called a delete bin. It’s a common way to store unused papers in America.

          Sorry, couldn’t resist.

          On a more serious note, I suspect the reason for a whole lot of these terrible designs is branding. Companies desperately want their products to be different from everybody else’s. Especially anyone with near-monopoly power, to really milk the cognitive dissonance your users will get from trying to use anything else, and to force your competitors to take a huge opportunity cost trying to keep up with the changes. Using the same icon and naming for things could be considered being a follower, rather than a leader, or some such BS.

          1. 2

            Oh, that’s called a delete bin. It’s a common way to store unused papers in America.

            Half-serious, but all the delete bins I’ve ever seen have grid netting, or are translucent/solid. That one looks just like an old rubbish bin, hence my joke :-).

            Hidden behind my entirely unclassy joke is actually my equally unclassy professional opinion that, like most graphical conventions based on stylised concepts and symbols, software icon representations are entirely conventional, based on conventions specific to various cultures or niches, and are efficiently disseminated by external adoption, like virtually all symbolic representations in this world, from mathematical and technical symbols to Morse code for the Latin alphabet. Consequently, there is far more value in keeping them constant than in chasing magic resonant energy inner chakra symbolic woo-woo intuitiveness or whatever the word for it is today. Half the road signs out there are basically devoid of inherent meaning for most drivers. They work fine because you learn them once and, in most cases, you’re set for life. Left untouched, icons would work fine, too.

            1. 1

              Very good point. Like how everybody recognised the “save” icon, even if they’d never seen a floppy disk. On a related note, I wonder how we could salvage the situation, and get back some consistency. We’d need to somehow shift the incentives of companies intent on “branding” everything in sight. Maybe an accessibility org with a bit of clout could start certifying the accessibility of applications, and deduct points for any unrecognisable permutations of well-known patterns?

              1. 1

                I don’t know if all-round, universal standardisation is possible. There are standards specific to certain niches, e.g. ISO 15223 for medical devices labeling or ISO 7000-series standards for use on equipment, or to specific equipment, e.g. IEC 60417 for rechargeable batteries. But diversity of function inherently limits their application; lots of devices have functions only they perform, so standardising their representation is pretty much impossible.

                IMHO it’s not something that can be solved through regulatory means. It’s a problem of incentive alignment. The reason why we see this constant UI churn in commercial applications is that most organisations that develop customer-facing software or services have accrued large design & UX teams (on the tech side) affiliated to large marketing orgs (on the non-tech side), which lend a lot of organisational capital to their managers – because they’re large. These people cannot walk into a meeting room and say okay, we have 1M customers, we’re basically swimming in money, we’re no one’s touching the interface and making a million people learn how to use our app again on my watch. If they did, half their team would become redundant, at which point half their organisational capital would evaporate.

                All branches of engineering get to a point where advancing the state of the art requires tremendous effort and study. Computer engineering is no exception. 180 years ago, advancing the state of the art in electric motors mostly required tinkering and imagination; doing that today usually happens via a PhD and it’s really hard to explain the results to people who don’t have a PhD in the same field.

                Perpetually bikeshedding around that point (or significantly below it) on the other hand is accessible to most organisations. It doesn’t help that UX and interface design are related to immediate and subjective sensory experiences, so everyone is entitled to an opinion on these topics, which makes them susceptible to being influenced primarily by loud people and bullies in their orgs.

                1. 1

                  I don’t know if all-round, universal standardisation is possible.

                  Yeah, I argued for certification rather than standardisation for that reason. Just like a lot of things can’t easily be standardised in an objective and easily transferable format, having a trusted arbiter is probably more useful to achieve cohesion.

      3. 2

        Which means icons should(?) be a part of a localisation process too. Although it would bring a whole other set of new problems along too.

        1. 1

          I’ve just realised that icons are being partly localised already. Rich-text editors’ [B]old, [I]talic, and [U]nderline in English are [N]egrita, [C]ursiva, and [S]ubrayado in Spanish. Consequently, they have different keyboard shortcuts too.

          1. 2

            Same in Swedish-localized Office apps, which is annoying because Ctrl-F gives bold (”Fetstil”), not Find.

      4. 1

        Oh, now that you say it, I see it and that makes sense. But I didn’t know what it was supposed to be either before.

    15. 17

      People won’t let go; they won’t accept that the paradigm of Rails is a bad idea. They see Flask and similar libraries eating their lunch and come up with this contrived, made-up reasoning that cannot be put to the test.

      Every single Django codebase I have seen has become unmanageable; most of them have been replaced. We know the reasons nowadays, but some diehards don’t want to let go, for nostalgic reasons.

      Back in the day, Rails, Django and similar were a quick way for a beginner to get access to quite a lot of useful things. The web development ecosystem has matured tremendously; such things can now be mixed and matched at will, without the need for a gigantic decision lock called a Framework that will essentially get in the way of adding unexpected functionality.

      The official Python documentation covers how to use WSGI and ASGI as well as the sqlite3 module; that is where beginners should start. They will get results an order of magnitude quicker than if they start by learning Django.

      This reads like all those Perl and PHP posts. They had their time and they were great. We can even learn something from them and they might still be useful in specific situations. But things have evolved.

      1. 23

        they won’t accept that the paradigm of Rails is a bad idea

        I’ve often seen that argument, but never any proof of it.

        Every single Django codebase I have seen has become unmanageable

        Every single codebase, no matter the framework/library you use, can grow to become unmanageable; this is not a Django problem.

        Also, I’ve seen more unmanageable Flask/FastAPI codebases than Django ones, so who is right? I wanna say that both our datasets are biased.

        1. 7

          I often saw that argument, but never saw any proof for it.

          I have seen a lot of bad Rails code in my career, and feel pretty confident in saying that the mutable-state free-for-all that Rails and Django are built around is unworkable. It’s possible to avoid the footguns but that is not the general case.

          This is less an indictment of dynamic languages than of the particular misunderstanding of MVC that Rails popularized and spread far and wide. You can achieve high degrees of correctness in dynamic languages, but the approaches you take need to be very different from the ones you would use in a statically-typed system. But I am probably wandering too far off-topic.

        2. 6

          My experience is that teams inevitably end up bouncing between the two poles:

          “Why did we use this huge full-stack framework? Let’s get rid of all that junk we don’t use and switch to something lighter!”

          Some time later…

          “Why did we use this microframework? Look at all the stuff we had to build on our own! Let’s stop maintaining all that custom code and switch to a full-stack framework!”

          Some time later…

          “Why did we use this huge full-stack framework? Let’s get rid of all that junk we don’t use and switch to something lighter!”

          etc.

          The exception is when you actually are operating at a scale where no off-the-shelf tech stack does what you need. For the Python world, it has been empirically established that this occurs when you are approximately Instagram in size, and at that point it is generally considered “a good problem to have”, at least for your stock option value.

      2. 13

        Every single Django codebase I have seen has become unmanageable

        Every successful codebase will become unmanageable, but Django will do fine up to a much higher point than Flask.

    16. 4

      Yes, prepared statements – or, in many APIs, just passing a query with parameter placeholders followed by a set of parameters to bind to it (not all drivers will actually turn this into a prepared statement, but they will interpolate the parameters safely) – are the magical cure-all for SQL injection.
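
      For anyone who wants the one-liner version of the advice, a sketch with the stdlib sqlite3 module (other DB-API drivers use %s or similar placeholders):

          import sqlite3

          conn = sqlite3.connect(":memory:")
          conn.execute("CREATE TABLE users (id INTEGER, name TEXT)")
          conn.execute("INSERT INTO users VALUES (?, ?)", (1, "alice"))

          evil = "alice' OR '1'='1"
          # The driver binds the value; it is never spliced into the SQL text.
          rows = conn.execute("SELECT * FROM users WHERE name = ?", (evil,)).fetchall()
          print(rows)  # [] -- the injection attempt is just an odd literal string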

      But we live in a world of hotshot rockstar wizard cowboy ninja guru coders who believe using any sort of query-construction library (let alone those awful awful ORMs) is beneath them, and who love to write articles encouraging everyone else to avoid those libraries like the plague.

      1. 3

        Wait, parameterized queries aren’t a standard feature of every database library in Python? ODBC has had them since the early ‘90s – there’s no excuse to be manually inserting arguments into the query with any modern database API. And at least in the PHP world, PDO provides emulation for drivers that don’t do it themselves.

        1. 5

          Every driver module I know of supports them. Doesn’t mean people will use them.

        2. 2

          Without library support for forcing use of prepared statements, you’re stuck auditing everything that uses SQL to verify that you don’t have injection vulnerabilities.

    17. 5

      I’ve always thought that Go’s approach of “just add the totally legit and professional-sounding project named https://github.com/xXx42069xXX/leftpad to your project and hope for the best because that’s what everybody else uses” is bonkers and insane. I hadn’t even thought about the domain expiry issue.

      1. 5

        I used to take it seriously when Go people would criticize Python’s package management. Then I learned how Go actually does package management, including not just the “put some Git URLs at the top of the file” but stuff like “you have to change the package name to include v2 if you want to do a major version bump”.

        1. 3

          Just because it is different than what you are used to doesn’t mean it is bad?

          1. 2

            No, it’s bad because it’s got a ton of problems. I’ve mentioned two specific ones above.

            1. 1

              I think you can debate whether those things are “bad”. Personally, I don’t experience either of them as bad, for example.

              I’ll ask the same question I asked earlier in this thread: What alternative do you suggest? What would be better than “putting some Git URLs at the top of a file” ?

              1. 2

                What alternative do you suggest? What would be better than “putting some Git URLs at the top of a file” ?

                Doing what the rest of the civilized world does: proper packaging with metadata-bearing artifacts uploaded to a repository, and specifying dependencies in a configuration file which can list which of those packages are required, including version ranges or pinned exact versions as needed.
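
                To make “version ranges or pinned exact versions” concrete, here’s a small sketch using Python’s packaging library, the same machinery pip uses to evaluate requirement specifiers (example-package is hypothetical):

                    from packaging.requirements import Requirement
                    from packaging.version import Version

                    # A range: any release from 1.4 up to (not including) 2.0.
                    req = Requirement("example-package>=1.4,<2.0")
                    print(Version("1.5.2") in req.specifier)  # True
                    print(Version("2.0.0") in req.specifier)  # False

                    # A pin: exactly one acceptable version.
                    pin = Requirement("example-package==1.5.2")
                    print(Version("1.5.2") in pin.specifier)  # True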

                Go’s approach is terrifying, because one of the fundamental guarantees you want from a packaging system is that if you re-run a workflow that requires fetching a package, you should get either exactly the thing you got last time, or perhaps a “not found”. Go introduces the exciting third possibility of “get a package, but not the same thing you got last time”, because Git and Git commits and history are inherently mutable.
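
                That invariant is simple enough to sketch in a few lines of Python (a toy illustration, not any real tool’s implementation; go.sum and pip’s --require-hashes both enforce something like it):

                    import hashlib
                    import json
                    from pathlib import Path

                    LOCKFILE = Path("lock.json")

                    def fake_fetch(name: str) -> bytes:
                        # Stand-in for a real network fetch from a package registry.
                        return f"contents of {name}".encode()

                    def fetch_verified(name: str) -> bytes:
                        data = fake_fetch(name)
                        digest = hashlib.sha256(data).hexdigest()
                        lock = json.loads(LOCKFILE.read_text()) if LOCKFILE.exists() else {}
                        # Either we get exactly what we got last time, or we fail loudly.
                        if lock.get(name, digest) != digest:
                            raise RuntimeError(f"{name}: contents differ from the recorded hash")
                        lock[name] = digest
                        LOCKFILE.write_text(json.dumps(lock, indent=2))
                        return data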

                Even horrible no-good very-bad Python does a better job of this: the Python Package Index says that once you have published Version N of Package Foo, you can never again publish something different and claim it is also Version N of Package Foo. The closest you can come is publishing a pre-built binary for a target that previously didn’t have one, but even that runs out of options quickly and may get closed off by policy eventually.

                The only reason people don’t regularly encounter the massive broken badness Go’s system enables is that someone (likely several someones) figured this out. They couldn’t successfully change Go to not be so horribly broken with respect to packaging, but they did at least paper over the worst of the brokenness by introducing the module proxy, which acts like the traditional centralized package repository seen in other languages and enforces invariants like “what you got last time you requested this is what you should also get the next time” that are not otherwise enforceable in the default Go “packaging” flow.

                1. 2

                  Go hasn’t worked this way since 2018 when it added modules.

                  1. 2

                    Go very much still works this way, even post-modules!

                    https://go.dev/doc/modules/developing

                    In Go, you publish your module by tagging its code in your repository to make it available for other developers to use. You don’t need to push your module to a centralized service because Go tools can download your module directly from your repository (located using the module’s path, which is a URL with the scheme omitted) or from a proxy server.

                    https://go.dev/blog/publishing-go-modules

                    Do not delete version tags from your repo. If you find a bug or a security issue with a version, release a new version. If people depend on a version that you have deleted, their builds may fail. Similarly, once you release a version, do not change or overwrite it. The module mirror and checksum database store modules, their versions, and signed cryptographic hashes to ensure that the build of a given version remains reproducible over time.

                    This is exactly what I was describing, including the problem with “package” identifiers that resolve to inherently mutable targets and the workaround of just relying on the module proxy to act as an implicit traditional centralized package repository.

                    1. 1

                      I don’t think you’re really up to date on, or understand, how Go works in 2023.

                      Yes, git tags are mutable. But guess what happens to your build when you move them around. Here I pointed v1.2.0 of a package at a different release; Go doesn’t let you get away with it, and properly flags it as a security error:

                          go: downloading github.com/pborman/uuid v1.2.0
                          verifying github.com/pborman/uuid@v1.2.0: checksum mismatch
                                  downloaded: h1:J7Q5mO4ysT1dv8hyrUGHb9+ooztCXu1D8MY8DZYsu3g=
                                  go.sum:     h1:+ZZIw58t/ozdjRaXh/3awHfmWRbzYxJoAdNJxe/3pvw=

                          SECURITY ERROR
                          This download does NOT match an earlier download recorded in go.sum.
                          The bits may have been replaced on the origin server, or an attacker may
                          have intercepted the download attempt.

                          For more information, see 'go help module-auth'.

                      1. 1

                        Perhaps you should scroll back up to the top of this page, read the article that started this thread, and then read the author’s other recent work (including the discussion here on lobste.rs). Then go explain how the author clearly does not understand how Go modules work, since the problems he’s writing about do not, according to you, exist.

                        1. 2

                          My friend, you are so confused it’s not even funny.

                          1. 1

                            His complaint, if you go read the earlier post, is that the “packages” in Go are basically just repository URLs, sometimes with tags for “versioning”, and that these are mutable and cause breakage if they mutate. He points out you can work around this, but the existence of workarounds does not make Go’s “packaging” good or its design decisions sound.

                            That’s also, coincidentally, the sort of thing I’m pointing out as a problem with Go and its “packaging”.

                            1. 2

                              Even if you bypass the module proxy, the go.sum file will prevent a version’s contents from changing. I’m just going to leave this here: https://discuss.python.org/t/stop-allowing-deleting-things-from-pypi/17227/117

                              1. 2

                                And if you are worried about packages disappearing, which is a legit concern for any package ecosystem, then you can use go mod vendor to keep packages local as part of your project. Personally I think Go probably has the strongest vendoring story around and it has helped me a great deal to be certain that something I write today will still build in the future.

      2. 4

        I think a system where you deal with go4.org/intern and other random domains expiring one at a time will be more robust than a system where everyone uses PyPI or NPM and, if it ever goes down, everything breaks everywhere simultaneously. You get good at dealing with failure through practice.

        1. 4

          The irony here is that Go programmers don’t get “practice” and so don’t get “good at dealing with failure”, because Go papers over so many of the problems via the implicit centralized single global package-registry instance that is the module proxy. One of this author’s recent posts was about how much breakage is waiting in the weeds the moment you don’t use the proxy, even.

        2. 2

          everyone uses PyPI or NPM and, if it ever goes down, everything breaks everywhere simultaneously

          Run your own PyPI cache or mirror. Same for npm. Verdaccio is good.

          edit to add: in npm land, if you suddenly find yourself in the situation where you have been screwed over by this, it may be possible to dig yourself out of the pit by scouring npm’s cache for downloaded tarballs.

      3. 4

        What alternative do you suggest? Where should leftpad be hosted to make it more legit and professional?

      4. 3

        I’ve always thought that Go’s approach of “just add the totally legit and professional-sounding project named https://github.com/xXx42069xXX/leftpad to your project and hope for the best because that’s what everybody else uses” is bonkers and insane.

        Funny. I had the opposite reaction when I learned about that approach. I like it, because it means the people publishing the library ultimately keep control of their code, and I am free to add whatever library I want, even one outside the language’s standard package repository (if the language has one at all).

        If anything, it makes you pay a bit more attention to what you’re importing, instead of just thinking Oh, it’s on NPM/PyPI/whatever, it must be legit. Surely someone has checked it.

        hope for the best because that’s what everybody else uses

        That is bonkers and insane in any language.

    18. 4

      Future contributions to Element’s forks will use the reciprocal AGPLv3 license, with a Contributor License Agreement (CLA).

      Disappointing. I’m wary of CLAs, but they vary wildly, and since I haven’t read the one they’re using I can’t comment on it. AGPLv3, though, is a terrible license. IMO it shouldn’t even count as a free software license, but that’s probably a controversial opinion.

      1. 3

        What makes you say that?

        1. 10

          First, the way in which it is often promoted as working, controlling how you can operate software to ensure the goals of the license, is a violation of software freedom 0:

          The freedom to run the program as you wish, for any purpose (freedom 0).

          Second, the way it actually works if you read it, restricting how you can modify the code and what additional work you must do whenever you make modifications (any time you change the code, you must also change it to advertise a copy of your changed version), is a violation of software freedom 1:

          The freedom to study how the program works, and change it so it does your computing as you wish (freedom 1). Access to the source code is a precondition for this.

          In particular, that you have to provide hosting for the source code for your changed version, with no specified time limit, seems onerous. And wildly underspecified. I’m not even sure it’s actually possible to comply with the AGPLv3 except in very particular circumstances.

          1. 7

            Thank you for the detailed response. That’s a very controversial interpretation of the AGPLv3 license.

            1. 4

              Not really. A lot of people objected when the FSF endorsed AGPL, because it’s not a Free Software license under their own definition.

          2. 5

            Do you have the same concerns with GPLv3?

            In particular, that you have to provide hosting for the source code for your changed version, with no specified time limit, seems onerous. And wildly underspecified. I’m not even sure it’s actually possible to comply with the AGPLv3 except in very particular circumstances.

            It reads like it’s simply: as long as users can interact with the software. So, take your modified program offline, then you no longer have to provide source.

            1. 3

              Do you have the same concerns with GPLv3?

              No, other than the fact that GPLv3 code can be combined with AGPLv3 code to create a combined work that is AGPLv3. This means you’re allowed to add restrictions, which sorta undermines the point of copyleft.

              It reads like it’s simply: as long as users can interact with the software. So, take your modified program offline, then you no longer have to provide source.

              The license terms don’t say anything about running the program. If you modify it, you have to make sure it points network users to a copy of the source code, and if someone else acquires a copy of your version to run, they have zero obligations to network users. It seems to me like the only reasonable way to comply is to make the program “self-serve” its source code to network users.
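
              Purely as an illustration of the “self-serve” idea (a toy sketch, not legal advice; the /source endpoint is my own invention, and it assumes the directory holding the script is the whole Corresponding Source):

                  import io
                  import tarfile
                  from http.server import BaseHTTPRequestHandler, HTTPServer
                  from pathlib import Path

                  # Assumption: everything next to this file is the program's source.
                  SOURCE_DIR = Path(__file__).resolve().parent

                  class Handler(BaseHTTPRequestHandler):
                      def do_GET(self):
                          if self.path == "/source":
                              # Tar up the source tree and hand it to the network user.
                              buf = io.BytesIO()
                              with tarfile.open(fileobj=buf, mode="w:gz") as tar:
                                  tar.add(str(SOURCE_DIR), arcname="source")
                              body = buf.getvalue()
                              ctype = "application/gzip"
                          else:
                              body = b"Hello. Corresponding Source is offered at /source\n"
                              ctype = "text/plain"
                          self.send_response(200)
                          self.send_header("Content-Type", ctype)
                          self.send_header("Content-Length", str(len(body)))
                          self.end_headers()
                          self.wfile.write(body)

                  HTTPServer(("", 8000), Handler).serve_forever()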

          3. 3

            Second, the way it actually works if you read it, restricting how you can modify the code and what additional work you must do whenever you make modifications (any time you change the code, you must also change it to advertise a copy of your changed version), is a violation of software freedom 1:

            Not to mention that for things like this, where users don’t interact with the software directly, there is no clear way to even attempt to comply with section 13 unless the feature is built into the protocol somehow. So I’m not sure there is, practically, any compliant way to run patched versions.

            1. 1

              The clear way is to put it on a publicly accessible VCS.

              1. 7

                No, simply having it available is not sufficient.

                Notwithstanding any other provision of this License, if you modify the Program, your modified version must prominently offer all users interacting with it remotely through a computer network (if your version supports such interaction) an opportunity to receive the Corresponding Source of your version by providing access to the Corresponding Source from a network server at no charge, through some standard or customary means of facilitating copying of software.

        2. 9

          I am not the person you asked, but:

          • Increasingly, AGPL’s primary use case is to use copyright to enforce a monopoly on commercial exploitation of the software, and along with things like the BSL has become a popular license to change to when investors in one’s “open source” SaaS start demanding the elimination of competition in order to ensure return on investment.
          • AGPL ought to be GPL-incompatible, actually is GPLv2-incompatible, and only is compatible with the GPLv3 by, basically, dictatorial fiat in the form of a special exemption carved out in GPLv3 which allows passing on less freedom to downstream than you received from upstream but only when you are passing on less freedom due to an AGPL requirement. This effectively undermines the entire stance of the FSF, which historically has been absolutist but at least consistently absolutist: they insisted that nothing and no cause or reason, no matter how good or tempting, could ever justify compromising or making an exception to the GPL’s grant of freedom. And then went and compromised and made an exception to the GPL’s grant of freedom. As the saying goes, now they’re just haggling over the price.
          • AGPL feels, subjectively to me, like it does not actually grant Freedom 0. It’s clear that the intent of the AGPL is to make it de facto (even if not de jure) impossible to run the software for some purposes, at least while also exercising other freedoms. I’m sure the FSF would say that the particular purposes which are made more difficult by the AGPL are ones that they do not morally support, but discriminating against use cases and purposes is fundamentally incompatible with Freedom 0 as stated.
        3. 3

          For me, it is that I do not understand how AGPLv3 interacts with configuration.

          Complex configuration is, honestly, source code. Software needing complex configuration ships with an example config that you edit or replace. So I replace a piece of the AGPLv3-covered distribution with my own configuration, then run the server. Does this create an obligation to publish my config file? By the way, Synapse configuration may include some low-grade secrets (they allow a remote party to do part of what you could do with read access to the server’s 0600 files).

    19. 32

      Hosting on GitHub, so now they will be beholden to another tech giant, and using a closed source platform. 😢

      1. 32

        Technically true but they’re pretty clear that they’re only using it for repo hosting. They don’t seem to plan to use any of the GH features that might lock them in. They can switch hosting at any time. So not exactly beholden. At the same time they’re outsourcing the most tedious parts (infra, backup, etc.).

        1. 29

          That’s also how Python started using GitHub, before migrating PRs and then issues.

        2. 18

          we will not be accepting Pull Requests at this time

          (My emphasis.) Considering that the vast majority of potential contributors know Git better than Mercurial, and GitHub is by a long margin the leading VCS hosting platform, the pressure from contributors to accept PRs into GitHub is probably going to increase from now on. I wouldn’t be surprised if within the year there were thousands of upvotes for various issues which boil down to “Why not let contributors submit PRs?” and “It would be cool if we could use this GitHub feature.” I’d give it three years before PRs are accepted on GitHub, and then another two years before more than 90% of changes are submitted via GitHub PRs.

      2. 24

        I personally would have liked the project to use a different git forge, but to be fair, many, many Mozilla projects are already on GitHub; just see how many repos there already are. Mozilla started using GitHub many, many years ago, and there are over 2000 repos under the mozilla organization alone, not even counting mozilla-services, mozilla-mobile, or MozillaSecurity.

        But as someone working on the codebase, switching from Mercurial (hg) to git is a very welcome change.

      3. 11

        Mozilla was already a heavy user of GitHub for things that weren’t the main Firefox tree. When I worked there (2011-2015) all the repositories I dealt with were on GitHub (though of course, being Mozilla, the bugs/issues were all tracked in Bugzilla).

        For any type of project that depends on community contribution, GitHub and its network effects make it not even a choice, really; projects that stick to ideologically-pure hosting options suffer for it because of the vastly smaller number of potential contributors who will seek out or jump through the hoops to interact with them there.

      4. 7

        They’re pretty clear about using GH just as a mirror, not for development.

        1. 7

          No, they’re pretty clear about “hosting the repository on GitHub”, with all changes landing there first.

          1. 10

            How do you get that from:

            • We will continue to use Bugzilla, moz-phab, Phabricator, and Lando
            • Although we’ll be hosting the repository on GitHub, our contribution workflow will remain unchanged and we will not be accepting Pull Requests at this time

            The changes will still land in Phabricator, not GitHub.

            1. 25

              Phabricator is a code review platform, not a repository hosting platform. This looks like the same flow that LLVM had for a while:

              • GitHub is the canonical repository.
              • Things are reviewed on Phabricator before being merged into the repo.

              If you’re only using it for repo hosting, there’s very little lock in with GitHub. It’s just a server that hosts the canonical mirror of your repo. You can even set up GitHub actions that keep another mirror up to date with every push, so if GitHub becomes unfortunate then you just update the URLs of remotes and keep working.

              If you’re using GitHub issues, PRs, and so on, then you end up with GitHub metadata in your commit history and that makes moving elsewhere harder. If a commit says ‘Fixes: #1234’, you need to have access to the GitHub issues thing to find out what that actually means.

              1. 5

                And a group accustomed to Phabricator is not going to willingly switch to GitHub’s review tools, which are vastly inferior.

                1. 4

                  Isn’t that exactly the choice the LLVM project made?

                  1. 2

                    As I recall a lot of people were unwilling, and understandably so.

                    1. 3

                      The migration to GitHub pull requests for LLVM is a total disaster. I have a summary at https://maskray.me/blog/2023-09-09-reflections-on-llvm-switch-to-github-pull-requests , though I try to use a less aggressive tone…

            2. 3

              Effective June 1, 2021: Phabricator is no longer actively maintained.

              https://phacility.com/phabricator/

                1. 8

                  https://we.phorge.it/
                1. 4

                  Cool, but why then did they say Phabricator? Did they not migrate yet? Are they aware of the fork, or that the original program is unmaintained?

                  1. 7

                    Presumably for the same reason we say “Twitter” - everyone knows what the old thing is, and the name doesn’t really matter.

                  2. 2

                    I think it is very well known in the phab user community. My previous company used it too, and we were all aware. I would be surprised if they aren’t.

                    1. 2

                      We upgraded the VyOS tracker to Phorge long ago but references to “Phabricator” are still everywhere. Not going away any time soon. :)

      5. 6

        Y’all act like Mozilla didn’t have a conversation about this. I bet they’ve got a few people on the team who understand the risks of the decision. This is, apparently, the choice they think is correct for the health of Firefox overall.

        1. 8

          When an extremely well-known open source project decides to make a part of their process involve closed source infrastructure, that makes me doubt that the decision-makers truly understand the motivation of a lot of people who have been part of the history of the project.

    20. 20

      This is an article that’s worth saving, to link back to the next time someone insists that game dev is the home of People Who Care About Performance or that The Market Will Not Allow Poor-Performing Games or whatever.

      Except of course it’s not just this instance with this game. Games with awful problems like these are released pretty regularly, even from (some might say especially from) the biggest and most financially successful studios and publishers. There’s nothing actually unique about game dev or about the people who do it that makes them better at or more aware of or more committed to performance than any other field of programming; they screw it up at least as often as everyone else.

      1. 15

        I think the interesting part about this report is that it seems to show that CO did care about performance, at least in certain areas. They went all-in on Unity’s DOTS system because they wanted the simulation itself to run as smoothly as possible - then they got bit by Unity’s tendency to release features before they’re fully ready and/or drop them half-baked.

        1. 12

          The delicious irony here is that, as I understand it, DOTS or at least the architecture of it was a Mike Acton project.

          You know: the guy who dunks on people for not caring about performance, who gives talks saying that people who don’t live up to his personal standards for performance should all be fired, and who is often cited approvingly as someone who really cares about performance and pushes others to care about it too.

          1. 15

            As far as I understand it, the game-tick-simulation parts that heavily use DOTS are working really well here. The part that fell down was where Unity half-assed the connection to their render pipeline and shipped it anyway, so if you want to use DOTS for your game, you have to do a bunch of gymnastics to get it to play nicely with HDRP.

        2. 1

          A city builder has hundreds of thousands of buildings and people, so it doesn’t seem surprising that a framework built for more typical games that have a much smaller number of entities might fall over.

          1. 13

            Per the article,

            They chose DOTS as the architecture to fix the CPU bottlenecks their previous game suffered from and to increase the scale & depth of the simulation, and largely succeeded on that front. CO started the game when DOTS was still experimental, and it probably came as a surprise how much they had to implement themselves even when DOTS was officially considered production ready. I wouldn’t be surprised if they started the game with Entities Graphics but then had to pivot to custom solutions for culling, skeletal animation, texture streaming and so on when they realized Unity’s official solution was not going to cut it.

            so the actual running of the simulation sounds like they made the right call. It was the dodgy connection between that and the rendering that caused the problem.

      2. 4

        We already know game devs don’t actually care about perf in the manner that’s implied: the number of games that are still 32-bit, despite that easily eating 15-20% of CPU performance, is mind-blowing. This would only be reasonable if your game never hits max CPU load (and even then, all you’re doing is needlessly wasting battery life on laptops).

        1. 5

          the number of games that are still 32-bit, despite that easily eating 15-20% of CPU performance, is mind-blowing

          That’s highly data-structure dependent. On x86, the biggest win from 64-bit is being able to assume SSE, but most games are likely to compile with that anyway. Beyond that, you get much faster position-independent code (doesn’t matter for Windows, because even DLLs are not position independent in 32-bit mode) and you get more registers. On the flip side, you have bigger pointers.

          If your game’s main data structure is a large scene graph then the size of the pointers will have a big impact on cache hit rates and smaller pointers can easily be a bigger win than you lose from fewer registers. Worse, the performance variation across different systems is a lot bigger from cache misses than it is from fewer registers and so you’re likely to have performance cliffs in different places on different CPUs even within a single range from Intel or AMD.
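
          Back-of-envelope, with a hypothetical scene-graph node holding four pointers plus 16 bytes of payload, against a 64-byte cache line:

              # Rough arithmetic only; the node layout is invented for
              # illustration, not taken from any real engine.
              CACHE_LINE = 64  # bytes
              PAYLOAD = 16     # bytes of non-pointer data per node

              for ptr_size, label in [(4, "32-bit"), (8, "64-bit")]:
                  node = 4 * ptr_size + PAYLOAD
                  print(f"{label}: node = {node} bytes, "
                        f"nodes per cache line = {CACHE_LINE / node:.2f}")

              # 32-bit: node = 32 bytes, nodes per cache line = 2.00
              # 64-bit: node = 48 bytes, nodes per cache line = 1.33

          The 64-bit layout pulls half again as many bytes through the cache to walk the same graph, which is exactly the kind of cliff being described.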

      3. 3

        https://www.metacritic.com/game/cities-skylines-ii/

        User reviews are “3.3: generally unfavorable.” Lots of people mentioning the performance. So the market seems to be in the process of not allowing poor performance.

        1. 1

          Now do the reviews for Minecraft :)

          1. 2

            User score 7.8.

            Googling tells me that some people experience performance problems with Minecraft, but it’s not the default.

            1. 1

              The joke here is that if you hang out on Minecraft forums, performance is a fairly common complaint, and has been for basically the entire time the game has existed. A lot of guides to mods and add-ons for Minecraft recommend that all users install OptiFine, a mod whose sole purpose is to try to make the game’s performance more acceptable.

              And yet the game continues to be popular and loved. Which is a strong counterexample to the “markets will punish poor software performance” theory.