1. 70
    1. 47

      I’m so tired of rehashing this. Pointing out that SemVer is not a 100% infallible guarantee, or that major versions don’t always cause major breakage, adds nothing new.

      Lots of projects have a changelog file where they document major changes, but nobody argues that reading changelogs would hurt you because they may not list every tiniest change, or because they might mention changes that discourage people from upgrading and leave them on insecure versions forever, etc.

      SemVer is just a machine-readable version of documentation of breaking changes.

      1. 23

        Yes, and the article tries to succinctly sum up what value can be derived from that and what fallacies await. I’d be lying if I claimed to have ever seen it summed up through that lens in one place.

        I’m sorry it’s too derivative for your taste, but when the cryptography fire was raging, I was wishing for that article to exist so I could just paste it instead of writing extensive elaborations in the comments section.

      2. 11

        I thought the same thing initially, but it could also be coming from the perspective of using Rust frequently, which is strongly and statically typed. (I don’t actually know how frequently you use it; just an assumption.)

        A static/strong type system gives programmers a nice boundary for enforcing SemVer. You mostly just have to look at function signatures and make sure your project still builds. That’s the basic promise of the type system. If it builds, you’re likely using it as intended.

        As the author said, with something like Python, the boundary is more fuzzy. Imagine you write a function in python intended to work on lists, and somebody passes in a numpy array. There’s a good chance it will work. Until one day you decide to add a little extra functionality that still works on lists, but unintentionally (and silently) breaks the function working with arrays.
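        A toy illustration of that failure mode (hypothetical function names, and a tuple standing in for the numpy array so the snippet is self-contained):

```python
def scale(values, factor):          # "v1.0.0" of a hypothetical library
    """Multiply every element of a list of numbers by factor."""
    return [v * factor for v in values]

# Duck typing lets a caller pass a tuple, and it happens to work:
print(scale((1, 2, 3), 2))          # [2, 4, 6]

def scale(values, factor):          # "v1.0.1" -- an innocent-looking patch
    """Same contract for lists, but now mutates in place to save memory."""
    for i, v in enumerate(values):
        values[i] = v * factor      # tuples don't support item assignment
    return values

print(scale([1, 2, 3], 2))          # lists still work: [2, 4, 6]
try:
    scale((1, 2, 3), 2)             # the tuple caller is now silently broken
except TypeError as e:
    print("broken by a patch release:", e)
```

        No test of the list-based contract would flag the second version, yet a downstream user just got a breaking change in a patch release.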

        That’s a super normal Python problem to have. And it would break SemVer. And it probably happens all the time (though I don’t know this).

        So maybe for weakly/dynamically typed languages, SemVer could do more harm than good if it really is unintentionally broken frequently.

        1. 8

          That’s all very true!

          Additionally, what I’m trying to convey (not very successfully, it seems) is that relying on that property is bad – even in Rust! Because any release can break your code just by introducing a bug – no matter what the version number says. Thus you have to treat all versions as potentially breaking. Given the discussions around pyca/cryptography, this is clearly not common knowledge.

          The fact that this is much more common in dynamic languages as you’ve outlined is just the topping.

          I really don’t know what I’ve done wrong to warrant that OP comment + upvotes except probably hitting some sore point/over-satiation with these topics in the cryptography fallout. That’s a bummer but I guess nothing I can do about it. 🧘

          1. 7

            Car analogy time: You should treat cars as dangerous all the time. You can’t rely on seatbelts and airbags to save you. Should cars get rid of seatbelts?

            The fact that SemVer isn’t 100% right all the time is not a reason for switching to YOLO versioning.

            1. 3

              Except that SemVer is not a seatbelt, but – as I try to explain in the post – a sign saying “drive carefully”. It’s a valuable thing to be told, but you still have to take further measures to ensure safety and plan for the case when there’s a sign saying “drive recklessly”. That’s all that post is saying and nothing more.

            2. 2

              Seatbelts reduce the chance of death. Reading a changelog reduces the chance of a bad patch. Trusting semver does not reduce the chance of an incompatible break.

              1. 6

                I really don’t get why there’s so much resistance to documenting known-breaking changes.

                1. 3

                  I really don’t get why there’s so much resistance to documenting known-breaking changes.

                  I mean you could just…like…read the article instead of guessing what’s inside. Since the beginning you’ve been pretending the article’s saying what it absolutely isn’t. Killing one straw man after another, causing people to skip reading because they think it’s another screech of same-old.

                  I’m trying really hard to not attribute any bad faith to it but it’s getting increasingly harder and harder so I’m giving up.

                  Don’t bother responding, I’m done with you. Have a good life.

                  1. -1

                    I mean you could just…like…read the article instead

                    So where in that article do you say why people don’t want to document known breaking changes?

                    Offtopic: That was really hard to read. Too much bold text and

                    quotes

                    with some links in between. It just destroyed my reading flow.

                    I also think the title “will not save you” says everything about why people are just not reading it. It already starts with a big “it doesn’t work”, so why should I expect it to be in favor of it?

                    1. 4

                      So where in that article do you say why people don’t want to document known breaking changes?

                      Well, the pyca/cryptography team documented that they were rewriting in Rust far in advance of actually shipping it, and initially shipped it as optional. People who relied on the package, including distro package maintainers, just flat-out ignored it right up until it broke their builds because they weren’t set up to handle the Rust part.

                      So there’s no need for anyone else to cover that with respect to the cryptography fight. The change was documented and communicated, and the people who later decided to throw a fit over it were just flat-out not paying attention.

                      And nothing in SemVer would require incrementing major for the Rust rewrite, because it didn’t change the public API of the module. Which the article does point out:

                      Funny enough, a change in the build system that doesn’t affect the public interface wouldn’t warrant a major bump in SemVer – particularly if it breaks platforms that were never supported by the authors – but let’s leave that aside.

                      Hopefully the above, which contains three paragraphs written by me and only two short quotes, was not too awful for you to read.

                      1. 1

                        Thanks, your summary is making a good point, and yes the original blogpost was hard to read, I did not intend this to be a troll.

                        And nothing in SemVer would require incrementing major for the Rust rewrite

                        Technically yes; practically, I know that many Rust crates do not increment the minimum required Rust compiler version until a major version. So fair enough, SemVer at its core isn’t enough.

          2. 3

            AFAIU, I think the OP comment may be trying to say that they agree with and in fact embrace the following sentence from your article:

            Because that’s all SemVer is: a TL;DR of the changelog.

            In particular, as far as I can remember, trying to find and browse a changelog was basically the only sensible thing one could do when trying to upgrade a dependency before SemVer became popular (plus keep fingers crossed and run the tests). The main time waster was trying to even locate and make sense of the changelog, with basically every project putting it somewhere different, if anywhere at all. (Actually, I seem to remember that finding any kind of changelog already counted as a big plus for the impression of a project’s quality.) As such, having a hugely popular semi-standard convention for a tl;dr of the changelog is something I believe many people do find super valuable. They know enough to never fully trust it, just as they’d know to never fully trust a changelog. Having enough experience with changelogs and/or SemVer, they do however now see substantial value in SemVer as a huge time saver, especially compared to what they had to do before.

            Interestingly, there’s a bot called “dependabot” on GitHub. I’ve seen it used by a team, and what it does is track version changes in dependencies and generate a summary changelog of commits since the last version. Which seems to more or less support what I wrote above, IMO.

            (Please note that personally I still found your article super interesting, and nicely naming some phenomena that I only vaguely felt before. Including the one I expressed in this post.)

          3. 2

            I think there is something a bit wrong about the blanket statement that others shouldn’t rely on semver. I suspect that for many projects, trying one’s best to use the API as envisioned by the author, and relying on semver, will in practice provide you with bugfixes and performance improvements for free, while never causing any major problems.

            I like the parts of this blog post that are pointing out the problems here, but I think it goes way too far in saying that I “need to” follow your prescribed steps. Some of my projects are done for my own enjoyment and offered for free, and it really rubs me the wrong way when anyone tells me how I “should” do them.

            [edited to add: I didn’t upvote the top level comment, but I did feel frustrated by reading your post]

            1. 1

              I’m not sure how to respond to that. The premise of the article is that people are making demands, claiming it will have a certain effect. My clearly stated goal is to dissect those claims so people stop making those demands. Your use case is obviously very different, so I have no interest in telling you to do anything. Why am I frustrating you and how could I have avoided it?

              1. 3

                My negative reaction was mostly to the section “Taking Responsibility”, which felt to me like it veered a bit into moralizing (especially the sentence “In practice that means that you need to be pro-active, regardless of the version schemes of your dependencies:”). On rereading it more carefully/charitably, I don’t think you intended to say that everyone must do it this way regardless of the tradeoffs, but that is how I read it the first time through.

        2. 9

          Type systems simply don’t do this. Here’s a list of examples where Haskell’s type system fails and I’m sure that you can produce a similar list for Rust.

          By using words like “likely” and “mostly”, you are sketching a sort of pragmatic argument, where type systems work well enough to substitute for informal measures, like semantic versioning, that we might rely on the type system entirely. However, type systems are formal objects and cannot admit such fuzzy properties as “it mostly works” without clarification. Further, we usually expect type-checking algorithms to not be heuristics; we expect them to always work, and for any caveats to be enumerated as explicit preconditions.

          1. 2

            Also, there have been crate releases where a breaking change wasn’t caught because no tests verified that FooBar stayed Sync/Send.

          2. 1

            All I meant is that languages with strong type systems make it easier to correctly enforce semver than languages without them. It’s all a matter of degree. I’m not saying that languages like Rust and Haskell can guarantee semver correctness.

            But the type system does make it easier to stay compliant because the public API of a library falls under the consideration of semver, and a large part of a public API is the types it can accept and the type it returns.

            I’m definitely not claiming that type systems prevent all bugs and that we can “rely entirely on the type system”. I’m also not claiming that type systems can even guarantee that we’re using a public API as intended.

            But they can at least make sure we’re passing the right types, which is a major source of bugs in dynamically typed languages. And those bugs are a prominent example of why OP argues that SemVer doesn’t work—accidental changes in the public API due to accepting subtly different types.

    2. 19

      This is very upsetting. I went in expecting argumentative clickbait and found a solid, nuanced discussion!

      I think SemVer is one of the better standards we have, overall. But as with any tool, there’s a time and a place for it and I thought this article did a good job of acknowledging that while also covering some common issues with how it is used. Thanks, OP.

      1. 3

        Yep, and the very first thing the author mentions is exactly what I was thinking: test coverage.

        If you write a bunch of tests, you can bump your dependencies and make sure they pass. Sure, you might bump dependencies, have the tests pass, and then something breaks in production. Well, hopefully you can then write a test for that thing you missed.

        Having good tests is so essential to help prevent dependency rot.
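        One cheap way to do that is a characterization test that pins down the third-party behaviour you actually rely on. The `slugify` stand-in below is hypothetical; in real code it would be imported from the dependency:

```python
# Stand-in for a third-party function; in real code this would be
# something like: from some_dep import slugify
def slugify(title):
    return title.lower().replace(" ", "-")

def test_slug_format_is_stable():
    # We link to these slugs from old emails; a "patch" release that
    # changed the format would silently break those links in production.
    assert slugify("Hello World") == "hello-world"

test_slug_format_is_stable()
```

        After a production incident, the fix is the same: capture the missed behaviour as another test so the next dependency bump can’t reintroduce it unnoticed.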

    3. 16

      I said it back in the PyCa/Rust thread: any consumer-facing statement of compatibility is at best a good-intentions claim of intent, not a reliable statement of effect from the consuming side. No matter what your dependencies claim, you need to test it because you may depend on observable but unspecified behaviour. The remedy isn’t to demand some mythically more-perfect adherence to a semantic versioning model, but instead to upgrade and test that upgrade frequently. Embrace change, breakage, and rapid fixing in your own code so that you can be resilient against the evolution of your dependencies.

      1. 5

        …upgrade and test that upgrade frequently.

        In my mind, this is it, 100%. I worked at a company that used automated tooling to enforce SemVer for internal dependencies. But even so, product teams that waited a long time between upgrades had a really bad time of it. Aside from the obvious problem that small changes can sneak past SemVer (for example, you might have been relying on a side effect that goes away), switching from 1.0 to 3.2 is always going to be painful because so much has changed! SemVer is a great start, but that’s it, a start. Good development practices are still necessary.

    4. 12

      Great article!

      I want to share one interesting non-trivial effect of ecosystem-wide embrace of semver, which, I think, isn’t mentioned in the post.

      In Rust, semver is adopted more or less universally. At the same time, libraries do not typically commit their lockfiles to git. The result is that your typical library is tested on CI with the latest versions of dependencies.

      It absolutely is the case that sometimes builds break because a new semver-compatible version is actually incompatible. But this happens rarely enough in practice that testing without a lockfile is tenable. Most of the time, folks do appropriately bump the major.

      All this together means two things:

      • there’s ecosystem-wide cross-testing that the most recent versions of most packages work together
      • accidental semver violations are quickly discovered and fixed

      So, while semver indeed doesn’t give a lot of direct value for each specific application author, the aggregate benefit seems to be substantial.

    5. 6

      Where does it say that a version change means no bugs? Maybe I’m wrong but I’ve always understood SemVer to be a means of communicating the scope of changes. A patch change means I shouldn’t have to change anything, minor means I can access new features, and major means I may have to change my code. Nothing there says anything about lack of bugs though.

      I’ve also come to believe less and less in placing 1.0 on a pedestal. So many companies and devs use 0.x software that the idea of 1.0 == production is just a silly thing we tell ourselves to feel cozy. Many popular tools stay at 0.x for years while being used in production, and others hit 1.0, then 2.0, 3.0, etc. all in quick succession, making the idea that 1.0 means stability a joke.
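      That “scope of changes” reading, including the 0.x caveat, can be sketched as a rough caret-style compatibility check (plain x.y.z only, no pre-release tags; an illustration, not a full range parser):

```python
def semver_compatible(installed: str, candidate: str) -> bool:
    """Rough caret-range check: can candidate replace installed without
    (nominally) requiring code changes? Assumes plain x.y.z versions."""
    i = tuple(map(int, installed.split(".")))
    c = tuple(map(int, candidate.split(".")))
    if i[0] == 0:                      # 0.x: minor bumps may also break you
        return c[:2] == i[:2] and c >= i
    return c[0] == i[0] and c >= i     # same major, and not a downgrade

print(semver_compatible("3.1.0", "3.2.0"))   # True: minor, new features
print(semver_compatible("3.1.0", "4.0.0"))   # False: major, code may break
print(semver_compatible("0.3.1", "0.4.0"))   # False: 0.x minor is a gamble
```

      Nothing in this check says anything about bugs; it only encodes the communicated scope of changes.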

      1. 8

        Ehm, you have just somewhat summed up parts of the article while making it sound like you’re contradicting it. What did you think it was trying to say? 😅

    6. 6

      It’s true that semantic versioning doesn’t reliably indicate compatibility. However, the idea that newer versions are always better (or that anybody wants to be up to date with upstream at all times) is pretty naive.

      Newer versions of a piece of third party software often means that its own minimum dependency set has shifted, for one thing – so if you require compatibility with version 3.1 of package X and your other dependency Y when upgrading from 4.5 to 4.6 now requires X version 3.2 that has some major known bug or some weird compatibility problem, you’re fucked. Projects also often grow – so maybe you don’t personally know of a bug in 3.2 but you know that the source code of 3.2 is three times the size of 3.1 and adds six hundred features you don’t need or want, all of which represent new vulnerability surfaces.

      In my experience, this is the attitude of industry. We still run python 1.6 (not 2.6, that’s not a typo – 1.6) and we still run code on it, because somebody on our team exhaustively audited the entire python 1.6 codebase decades ago and determined it was safe to install on that machine, and he refuses to do the same for later versions and so we don’t use them in that flow. We have a lot of dependencies that are like that – where we had to exhaustively determine some third party thing was safe under certain circumstances, and when that package got harder to audit we froze it and stopped supporting newer versions. New releases mean new, often unknown bugs. If a piece of software has been in heavy use for 15 years, you may well know all of the easily-exploited bugs in it and be able to determine that none of them matter.

      The function of versioning is not to say which version is newer, but to allow you to specify which version(s) work – to give you a language for talking about dependency webs.

      Semantic versioning lets a package maintainer hint to the developer of a new project (and the creator of that project’s dependency web) about which version ranges are likely to be compatible – something that this developer nevertheless needs to test themselves. I think software engineers are, generally speaking, well aware of the fact that documentation (let alone hints made by numbering) can’t be blindly trusted – we’ve all run into situations where a piece of software doesn’t work the way it’s supposed to, and isn’t compatible with the things it claims compatibility with, or even claims to implement standards or specifications that it does not.

      There are ways to hint compatibility with all sorts of different granularities. For instance, you can claim compatibility with a particular standard (in which case, theoretically, your software should interoperate not just with previous and future versions of the same software but similar software written by other people). Or, you can expose interfaces and document their behavior (promising that this version of the software will work with other software that calls these functions in these particular ways). This is because maintaining compatibility is a constant problem that no automated system can currently solve, and hints are useful for helping humans solve it. (Sometimes, the appropriate response to some insane malformed input is to crash! Sometimes, a piece of software doesn’t need to be secure against fuzzers because it sits on an airgapped machine and gets all its input through a hex keypad!)

      Eschewing hints about compatibility in favor of always being compatible with only the newest version of whatever trash some incompetent third party puts out means you’re continuously rewriting the least interesting parts of your own software in order to track somebody else’s software – which, in the normal case, is literally only getting worse over time.

      1. 1

        Could you share more of your organization’s backstory around this:

        We still run python 1.6 (not 2.6, that’s not a typo – 1.6) and we still run code on it, because somebody on our team exhaustively audited the entire python 1.6 codebase decades ago and determined it was safe to install on that machine, and he refuses to do the same for later versions and so we don’t use them in that flow.

        1. 2

          There isn’t much to say.

          We put an internal wiki (which had python as a dependency) on a machine that was also running critical business logic, back before python 2.x came out, and the guy who audited the implementation of python to make sure that it was safe to do this was so disgusted at the state of the 1.6 code that, although he ultimately decided it was safe to run python internally on non-internet-facing hardware, refused to audit any future implementations & refused to allow any non-audited code on that machine.

          Up until a couple years ago, it was pretty normal in our organization to have 10+ year old versions of third party software anyway, basically because we audited most third party code & we needed everything to interact. So, we were limited not mostly by the last version of python we audited (since we had very little python code around) but the last version of GCC and GLIBC we audited. We were stuck with this until we migrated to LLVM+MUSL (which was easier to audit).

          Python 1.6 is not actually the most interesting case here.

          A few years ago, we had a project to upgrade f77 to support compiling more modern fortran to GCC 2.95 compliant C in order to replace a largeish third party graph-drawing package in C with a smaller third party graph-drawing package that was written in fortran – although we abandoned that project because the graph drawing package had a bunch of inefficient read code that couldn’t be trivially made to read quickly from a pipe. All of this was basically because nobody wanted to audit gfortran & nobody wanted to audit the original c graph drawing package.

          We also had this big campaign to replace GNU coretools with the netbsd versions of the same, which was partially justified by the average size of each tool in LOC (and thus ease of auditing) although it was probably partially motivated by licensing concerns. Turns out, though, that nawk, on top of missing a bunch of very useful features from gawk, was slower and had a number of really ugly bugs.

      2. 1

        Are you backporting fixes to your pinned version of Python 1.6? Why or why not?

        How do you weigh the pros and cons of one person doing a one-time extensive audit of Python 1.6 versus many* people using, testing, and improving Python every year?

        By “many” I would estimate:

        • at least hundreds of eyes looking deeply
        • at least thousands developing libraries
        • at least tens of thousands finding bugs
        1. 2

          This particular guy, in part because of his time spent auditing Python 1.6, had a poor opinion of the ability of the hundreds of core python developers to write secure, clean, and efficient C code. (Unsurprisingly, he had an even lower opinion of the competence of the developers of third party libraries for python – who he figured would not be writing python if they were competent. His attitude has softened basically because he’s worked with me & another reasonably competent guy, and we both like python while not ignoring or dismissing its flaws. Nevertheless, we do not have a lot of third party python libraries in use here.)

          I don’t think it’s controversial to say that until nearly 3.x, python was designed and implemented in an ad-hoc manner & that auditing the implementation is difficult for the same reason that weird tricks like pypy work – the ad-hoc-ness of the codebase leads to unexpected behavior and performance characteristics highly dependent on historical accident. His more controversial opinion is that while 3.x is better, it is not better enough to justify the risk of allowing new code to be written in it in our org.

          He allowed 1.6 on one particular machine to support one particular already-existing piece of internal-only code, but forebade us from running any python code in the parts of production he controlled in any other circumstance.

          We now have some python in production, but he doesn’t touch it & it’s completely divorced from the core package set & isolated from our core functionalities. (This is nice for me, because I like developing in python, and because we have other language implementations in production I personally trust a lot less, like perl.)

    7. 3

      I agree with the sentiment that you should pin all dependencies.

      But I never had the idea that SemVer would „save“ me - I only ever saw it as a means of communicating expected impact.

      1. 3

        That’s very correct, but look at the other comments and you’ll see that it isn’t universal consensus and even suggesting it seems rather triggering to some. 🤷‍♂️

    8. 2

      You want to claim that version 3.2 is compatible with version 3.1 somehow, but how do you know that? You know the software basically “works” because of your unit tests, but surely you changed the tests between 3.1 and 3.2 if there were any intentional changes in behavior. How can you be sure that you didn’t remove or change any functions that someone might be calling?

      Semantic versioning states that a minor release such as 3.2 should only add backwards compatible changes.

      So all your existing unit tests from 3.1 should still be in place, untouched. You should have new unit tests, for the functionality added in 3.2.

      I stopped reading after this, because the argument seems to boil down to either not understanding Semantic versioning, or not having full unit test coverage.

      1. 20

        I stopped reading after this

        If you stopped reading at 10% of the article, you should probably also have stopped yourself from commenting.

        not understanding Semantic versioning

        The fallacy you’re committing here is very well documented.

        1. 1

          If you are questioning whether the function you removed/changed is used by anyone when deciding the next version increment, you are not using semantic versioning correctly (unless you always increase the major, regardless of how many people used the feature you modified). As the parent said, if you need to edit 3.1 tests, you broke something, and the semver website is quite clear about what to do on breaking changes.

          1. 7

            If you don’t only test the public API, it’s entirely possible to introduce required changes in tests in bugfix versions.

            More importantly, my point about “no true Scotsman” was that saying “SemVer is great if and only if you follow some brittle manual process to the dot” proves the blog post’s narrative. SemVer is wishful thinking. You can have ambitions to adhere to it, you can claim your projects follow it, but you shouldn’t ever blindly rely on others doing it right.

          2. 5

            The question then becomes: why does nobody do it then? Do you truly believe that in a world where it’s super rare for a major version to exceed “5”, nobody ever had to change their tests because some low-level implementation detail changed?

            We’re talking about real packages that have more than one layer. Not a bunch of pure functions. You build abstractions over implementation details and in non-trivial software, you can’t always test the full functionality without relying on the knowledge of said implementation details.

            Maybe the answer is: “that’s why everybody stays on ZeroVer”, which is another way of saying that SemVer is impractical.

      2. 6

        The original fight about the PyCA cryptography package repeatedly suggested SemVer had been broken, and that if the team behind the package had adopted SemVer, there would have been far less drama.

        Everyone who suggested this overlooked the fact that the change in question (from an extension module being built in C, to being built in Rust) did not change the public API of the deliverable artifact in a backwards-incompatible way, and thus SemVer would not have been broken by doing that (i.e., if you ran pip install cryptography before and after, the module that ended up installed on your system exposed a public API that was compatible after with what you got before).

        Unless you want to argue that SemVer requires a version bump for any change that any third-party observer might notice. In which case A) you’ve deviated from what people generally say SemVer is about (see the original thread here, for example, where many people waffled between “only about documented API” and “but cryptography should’ve bumped major for this”) and B) you have basically decreed that every commit increments major, because every commit potentially produces an observable change.

        But if you’d like to commit to a single definition of SemVer and make an argument that adoption of it by the cryptography package would’ve prevented the recent dramatic arguments, feel free to state that definition and I’ll see what kind of counterargument fits against it.

        1. 1

          Everyone who suggested this overlooked the fact that the change in question (from an extension module being built in C, to being built in Rust) did not change public API of the deliverable artifact in a backwards-incompatible way

          I think you’re overlooking this little tidbit:

          Since the Gentoo Portage package manager indirectly depends on cryptography, “we will probably have to entirely drop support for architectures that are not supported by Rust”. He listed five architectures that are not supported by upstream Rust (alpha, hppa, ia64, m68k, and s390) and an additional five that are supported but do not have Gentoo Rust packages (mips, 32-bit ppc, sparc, s390x, and riscv).

          I’m not sure many people would consider “suddenly unavailable on 10 CPU architectures” to be “backwards compatible”.

          But if you’d like to commit to a single definition of SemVer and make an argument that adoption of it by the cryptography package would’ve prevented the recent dramatic arguments, feel free to state that definition and I’ll see what kind of counterargument fits against it.

          If you can tell me how making a change in a minor release, that causes the package to suddenly be unavailable on 10 CPU architectures that it previously was available on, is not considered a breaking change, I will give you $20.

          1. 8

            Let’s take a simplified example.

            Suppose I write a package called add_positive_under_ten. It exposes exactly one public function, with this signature:

            def add_positive_under_ten(x: int, y: int) -> int
            

            The documented contract of this function is that x and y must be of type int and must each be greater than 0 and less than 10, and that the return value is an int which is the sum of x and y. If the requirements regarding the types of x and y are not met, TypeError will be raised. If the requirements regarding their values are not met, ValueError will be raised. The package also includes an automated test suite which exhaustively checks behavior and correctness for all valid inputs, and verifies that the aforementioned exceptions are raised on sample invalid inputs.
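             A sketch of that contract (my implementation of the described behaviour, including the exhaustive check over valid inputs):

```python
def add_positive_under_ten(x: int, y: int) -> int:
    """Return x + y for ints with 0 < x < 10 and 0 < y < 10."""
    if not isinstance(x, int) or not isinstance(y, int):
        raise TypeError("x and y must be of type int")
    if not (0 < x < 10 and 0 < y < 10):
        raise ValueError("x and y must be greater than 0 and less than 10")
    return x + y

# Exhaustive check over all valid inputs, as the contract describes:
assert all(add_positive_under_ten(x, y) == x + y
           for x in range(1, 10) for y in range(1, 10))
```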

            In the first release of this package, it is pure Python. In a later, second release, I rewrite it in C as a compiled extension. In yet a later, third release, I rewrite the compiled C extension as a compiled Rust extension. From the perspective of a consumer of the package, the public API of the package has not changed. The documented behavior of the functions (in this case, single function) exposed publicly has not changed, as verified by the test suite.

            Since Semantic Versioning as defined by semver.org applies to declared public API and nothing else whatsoever, Semantic Versioning would not require that I increment the major version with each of those releases.

            Similarly, Semantic Versioning would not require that the pyca/cryptography package increment major for switching a compiled extension from C to Rust unless that switch also changed declared public API of the package in a backwards-incompatible way. The package does not adhere to Semantic Versioning, but even if it did there would be no obligation to increment major for this, under Semantic Versioning’s rules.

            If you would instead like to argue that Semantic Versioning ought to apply to things beyond the declared public API, such as “any change a downstream consumer might notice requires incrementing major”, then I will point out that this is indistinguishable in practice from “every commit must increment major”.

            1. 1

              We don’t need a simplified, synthetic example.

              We have the real world example. Do you believe that making a change which effectively drops support for ten CPU architectures is a breaking change, or not? If not, why not? How is “does not work at all”, not a breaking change?

              1. 9

                The specific claim at issue is whether Semantic Versioning would have caused this to go differently.

                Although it doesn’t actually use SemVer, the pyca/cryptography package did not do anything that Semantic Versioning forbids. Because, again, the only thing Semantic Versioning forbids is incompatibility in the package’s declared public API. If the set of public classes/methods/functions/constants/etc. exposed by the package stays compatible as the underlying implementation is rewritten, Semantic Versioning is satisfied. Just as it would be if, for example, a function were rewritten to be more time- or memory-efficient than before while preserving the behavior.

                And although Gentoo (to take an example) seemed to be upset about losing support for architectures Gentoo chooses to support, they are not architectures that Python (the language) supported upstream, nor as far as I can tell did the pyca/cryptography team ever make any public declaration that they were committed to supporting those architectures. If someone gets their software, or my software, or your software, running on a platform that the software never committed to supporting, that creates zero obligation on their (or my, or your) part to maintain compatibility for that platform. But at any rate, Semantic Versioning has nothing whatsoever to say about this, because what happened here would not be a violation of Semantic Versioning.

          2. 7

            If you can tell me how making a change in a minor release, that causes the package to suddenly be unavailable on 10 CPU architectures that it previously was available on, is not considered a breaking change, I will give you $20.

            None of those architectures were maintained or promised by the maintainers; they were added by third parties. No matter what your opinion on SemVer is, the activities of third parties, whose existence you possibly didn’t even know about, are not part of it.

            Keep your $20 but try to be a little more charitable and open-minded instead. We all have yet much to learn.

            1. 0

              Keep your $20 but try to be a little more charitable and open-minded instead. We all have yet much to learn.

              If you think your argument somehow shows that breaking support for 10 CPU architectures isn’t a breaking change, then yes, we all have much to learn.

              1. 8

                You still haven’t explained why you think Semantic Versioning requires this. Or why you think the maintainers had any obligation to users they had never made any promises to in the first place.

                But I believe I’ve demonstrated clearly that Semantic Versioning does not consider this to be a change that requires incrementing major, so if you’re still offering that $20…

                1. 0

                  Part of what they ship is code that’s compiled, and literally the first two sentences of the project readme are:

                  cryptography is a package which provides cryptographic recipes and primitives to Python developers. Our goal is for it to be your “cryptographic standard library”.

                  If your self stated goal is to be the “standard library” for something and you’re shipping code that is compiled (as opposed to interpreted code, e.g. python), I would expect you to not break things relating to the compiled part of the library in a minor release.

                  Regardless of whether they directly support those other platforms or not, they ship code that is compiled, and their change to that compiled code, broke compatibility on those platforms.

                  1. 8

                    Regardless of whether they directly support those other platforms or not, they ship code that is compiled, and their change to that compiled code, broke compatibility on those platforms.

                    There are many types of agreements – some formal, some less so – between developers of software and users of software regarding support and compatibility. Developers declare openly which parts of the software they consider to be supported with a compatibility promise, and consumers of the software declare openly that they will not expect support or compatibility promises for parts of the software which are not covered by that declaration.

                    Semantic Versioning is a mildly-formal way of doing this. But it is focused on only one specific part: the public API of the software. It is not concerned with anything else, at all, ever, for any reason, under any circumstances. No matter how many times you pound the table and loudly demand that something else – like the build toolchain – be covered by a compatibility guarantee, Semantic Versioning will not budge on it.

                    The cryptography change did not violate Semantic Versioning. The public API of the module after the rewrite was backwards-compatible with the public API before the rewrite. This is literally the one, only, exclusive thing that Semantic Versioning cares about, and it was not broken.

                    Meanwhile, you appear to believe that by releasing a piece of software, the author takes on an unbreakable obligation to maintain compatibility for every possible way the software might ever be used, by anyone, on any platform, in any logically-possible universe, forever. Even if the author never promised anything resembling that. I honestly do not know what the basis of such an obligation would be, nor what chain of reasoning would support its existence.

                    What I do know is that the topic of this thread was Semantic Versioning. Although the cryptography library does not use Semantic Versioning, the rewrite of the extension module in Rust did not violate Semantic Versioning. And I know that nothing gives you the right to make an enforceable demand of the developers that they maintain support and compatibility for building and running on architectures that they never committed to supporting in the first place, and nothing creates any obligation on their part to maintain such support and compatibility. The code is under an open-source license. If you depended on it in a way that was not supported by the developers’ commitments, your remedy is to maintain your own fork of it, as with any other upstream decision you dislike.

      3. 4

        “Should” is the key word here, because I haven’t ever contributed to an open source project that has that as part of its policy, nor have I observed its wide application, given the state of third-party packages.

        The article specifically speaks about the divergence between aspiration and reality and what conclusions can be drawn from that.

        1. 3

          Unfortunately the aspiration is broken too.

          1. 2

            Baby steps 😇

      4. 3

        It sounds like you’re proposing to use unit tests to prove that a minor release doesn’t introduce backwards-incompatible changes. However, tests cannot substitute for proofs; there are plenty of infinite behaviors which we want to write down in code but cannot exhaustively test.

        All of these same problems happen in e.g. Haskell’s ecosystem. It turns out that simply stating that minor releases should only add backwards-compatible changes is just an opinion and not actually a theorem about code.

        1. 1

          No I think they have a valid point. “Surely” implies that it’s normal to “change” unittests between minor versions, but the term “change” here mixes “adding new” and “modifying existing” in a misleading way. Existing unittests should not change between minor versions, as they validate the contract. Of course, they may change anyway, for instance if they were not functional at all, or tested something wrong, but it should certainly not be common.

          edit: I am mixing up unittests and system tests, my apologies. Unit tests can of course change freely, but they also have no relation to SemVer; the debate only applies to tests of the user-facing API.
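          As a hypothetical illustration (greet and its contract are invented here, not taken from the article): a test of the user-facing API encodes the documented contract, so its assertions should survive minor releases untouched:

```python
# Hypothetical public API of a library; the docstring is its contract.
def greet(name: str) -> str:
    """Return a greeting of the form 'Hello, <name>!'."""
    return f"Hello, {name}!"

# Contract test: these assertions restate the documented behavior.
# Having to modify them between minor versions would signal that the
# contract itself moved, i.e. a SemVer-relevant change.
def test_greet_contract():
    assert greet("Ada") == "Hello, Ada!"
    assert isinstance(greet(""), str)

test_greet_contract()
```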

          1. 2

            I know people use different terminology for the same things, but if the thing being tested is a software library, I would definitely consider any of the tests that aren’t reliant on something external (e.g. if you’re testing a string manipulation method) to be unit tests.

          2. 1

            Take any function from the natural numbers to the natural numbers. How do you unit-test it in a way that ensures that its behavior cannot change between semantic versions? Even property tests can only generate a finite number of test cases.
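            A small sketch of that limitation (f, g, and the random sampling are invented stand-ins for a property test, not from any real library):

```python
import random

# Two functions on the naturals that agree everywhere except at one
# point a finite random sample is overwhelmingly likely to miss.
def f(n: int) -> int:
    return n + 1

def g(n: int) -> int:
    return n + 1 if n != 10**9 + 7 else 0  # hidden divergence

random.seed(0)
samples = [random.randrange(10**6) for _ in range(1000)]

# The "property test" passes on every sampled input...
assert all(f(n) == g(n) for n in samples)
# ...yet f and g are not the same function.
assert f(10**9 + 7) != g(10**9 + 7)
```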

            1. 2

              I think the adage “code is written for humans to read, and only incidentally for computers to execute” applies to tests especially. Of course you can’t test every case, but intention does count.

        2. 1

          Aside:

          I just recently added a test that exercises the full API of a Rust library of mine, doing so in such a way that any backwards-incompatible changes would cause an error if introduced. (The particular case was that I’d add a member to a config struct, so anyone constructing that struct without including a ..StructName::default() at the end would suddenly get a compile error because they were missing a field.) This seemed to do the trick nicely and would remind me to bump the appropriate part of SemVer when making a release.

          I work on the library (and in the Rust ecosystem) infrequently so it’s not at the front of my mind. More recently I accepted a PR, and made a new release including it after. Then I got the warning, again, that I’d broken semver. Of course, the failing test was seen by the contributor and fixed up before they submitted the PR, so I never saw the alarm bells ringing.

    9. 2

      AFAIK “semver” is just a document describing what dotted version strings have always been for… the alternatives (date versioning, incrementing random parts of the number at a whim, or no version at all) all seem equivalent to major-version-only versioning – which anyone is free to use, of course. I’m not sure of the appeal of putting dots in a version string if you don’t want them to mean anything, but I guess it’s some kind of “fitting in” aesthetics?

    10. 2

      Nobody Has Suggested That Semantic Versioning Will Save Anyone

      1. 3

        Here on this site, several discussions in the original thread about the pyca/cryptography change brought up SemVer and certainly appeared to my eyes to be suggesting that it would have prevented or mitigated the drama.

        While it is possible that you personally have never made such claims about SemVer (I have not bothered to check), it is an easily-demonstrated fact that others have, and the OP here reads to me as an argument against those claims as made by those people.

        1. 2

          Hmm. I remember that thread, and re-skimmed it now. I didn’t find anyone saying semver would have prevented the situation. It certainly would have mitigated it somewhat, though. And I don’t agree that “it is an easily-demonstrated fact” that any significant group of people believe that semver in itself is going to solve any problems. My experience has consistently been that most, or almost all, people in semver’s demographic understand it is an approximate tool and not a panacea.

      2. 3

        Hey Peter big fan here! Sadly there’s been plenty suggesting that in that particular fiasco. Repeatedly. Even right now on Twitter in my mentions.

        There’s still a lot of assumptions about what SemVer can do for someone. I needed to write down the explanation why that’s not the case so I don’t have to repeat myself.

        1. 2

          Can you link to one of these examples? As I said below, my experience has consistently been that most, or almost all, people in semver’s demographic understand it is an approximate tool and not a panacea.

          1. 2

            I have to admit that “maintainer of a popular package thinks ‘almost all users’ have a realistic expectation from SemVer” was not on my bingo card!

            I suspect the kicker is

            people in semver’s demographic

            And that your demographic is simply different from mine. Maybe Python vs Go is all that it takes. Who knows. One of the main drivers of why I write is to avoid repeating myself, and I assure you I wouldn’t have taken the time to write it if I didn’t expect to save time in the future.

            Can you link to one of these examples?

            I don’t want to call out people in public and if in your lived reality this isn’t a problem that’s fair enough.

            I state my premise in the first paragraph, and if it doesn’t apply to you or your users, it’s fair to skip it. Not sure the sardonic dunk without reading it was necessary, though.

            1. 3

              What makes you think I didn’t read the article?

              Like others, I think you’re “dunking” on semver unnecessarily. I generally agree with your description of it as a tl;dr of the changelog — but that’s incredibly valuable! 99% of the time I can trust it’s accurate and that’s a huge boon to my productivity. I understand it’s not necessarily accurate — and I haven’t yet encountered anyone who doesn’t understand it’s not necessarily accurate — but that’s fine, when it fails it’s detected with tests and that’s just an inconvenience more than anything.

          2. 2

            The entire Haskell ecosystem overly depends on semantic versioning. As a result, there are over 8000 Haskell packages in nixpkgs which are broken:

            $ git grep ' broken = true;' pkgs/development/haskell-modules/ | wc -l
            8835
            
    11. 2

      Enthusiasts of 32-bit hardware from the 1990s aside

      Dedicated to Alex and Paul who are willing to take the heat for the rest of us.

      Right from the start this seems inflammatory.

      1. 3

        I don’t think people who openly stated they love their Amigas see that as inflammatory, because it’s not. That part is in no way judging; it’s just one half of the complaints.

        What’s supposed to be inflammatory about dedicating a post that tries to dispel some myths that caused massive abuse against two of my friends is also entirely unclear to me.

        1. 1

          There’s been 4 posts in just the last couple weeks on this topic, they all immediately rise to the top of the front page and create hundreds of comments. At this point I don’t think the community is gaining anything by reading more takes on this topic that do not attempt to come to a solution or a compromise.

          What’s supposed to be inflammatory about dedicating a post that tries to dispel some myths that caused massive abuse against two of my friends is also entirely unclear to me.

          I want to be clear that it is not cool that anyone receives abuse. No matter what position you take, as long as you do not hurl abuse at someone else, you should not receive abuse. That said, I think authoring a post on a controversial topic like this isn’t helped by immediately laying out that you’re here to defend your friends. Anger or defensiveness is probably not the spice for reasonable debate.

          Anyway I don’t want to belabor a thread on this so that’s my $0.02.

          1. 2

            There’s been 4 posts in just the last couple weeks on this topic, they all immediately rise to the top of the front page and create hundreds of comments.

            I think that has been my original sin, but just in my defense: that draft has been sitting around for a year and the whole thing made me finish it. I had my first draft ready when the last two bigger articles appeared last weekend, but I’m generally slower at writing.

            But I hope that the article delivers some timeless value that will be seen more kindly down the road.

    12. 2

      No, SemVer does not solve every problem with dependencies, but at least it tells people to please stop intentionally breaking compatibility in “minor” releases. If you’re going to break compatibility in every release, then just have a single version number. Is that really too much to ask?

      1. 1

        Funny enough, a change in the build system that doesn’t affect the public interface wouldn’t warrant a major bump in SemVer – particularly if it breaks platforms that were never supported by the authors – but let’s leave that aside.

        1. 1

          Here’s how this plays out over and over and over. First, people like me suggest that universal adoption of SemVer would significantly simplify and improve dependency management. Second, people like you point to a specific case where SemVer does not solve your problem, as if that somehow disproves that universal adoption of SemVer would significantly simplify and improve dependency management.

          I’ll repeat what I said above, yet again: No, SemVer does not solve every problem with dependencies, but at least it tells people to please stop intentionally breaking compatibility in “minor” releases.

    13. 1

      I just started doing calendar-based versioning for most of my stuff now. Most packaging has concepts like lockfiles and version pinning, and when some checkbox-marking compliance person tells you version 2018.1.12 of xyz is broken, you at least know how old it is. Hell, I imagine lining up a calendar-based version with a changelog is much easier too. As the author mentioned, I’m not trying to pretend a version increase won’t break someone’s use-case doing this as well. The prerelease/build metadata in SemVer could be used to convey information about backported fixes.

      I’m not in agreement on preventing version conflicts. This is work that has to be spent case by case, or the language’s module system should be changed to allow for a resolution of diamond dependencies, with some kind of duplication.

    14. 1

      Right off the bat, this post misunderstands the point of versioning. The author opens with:

      Let’s set the stage by laying down the ultimate task of version numbers: being able to tell which version of an entity is newer than another.

      That is not quite right. It’s true that that’s one thing that versioning does, but it is not its ultimate task. Its ultimate task is to communicate to users of the software what has changed between different releases, and what impact that has on them (i.e. “why should I care”). Otherwise, why does anyone care which release is newer? What does that matter to a user of the software?

      The rest of the post seems to be reacting to people who believe that SemVer solves a lot of problems that it doesn’t, and throws out the baby with the bath water in doing so. SemVer is certainly imperfect. And maybe there are versioning schemes that are better! But it does have a legitimate claim to attempting to accomplish versioning’s “ultimate task”. And I think that this post fails to sufficiently recognize this fact.

      1. 4

        Its ultimate task is to communicate to users of the software what has changed between different releases, and what impact that has on them (i.e. “why should I care”).

        I’m sorry, but in that general sense that’s historically just not true. There’s been a wild mixture of version schemes, and they still exist, and the only thing they have in common is that you can order them.

        I could start enumerating examples, but let’s assume you’re right because that’s not my point, what bothers me is this:

        and throws out the baby with the bath water in doing so

        How does the post do that? That was entirely not my intent, and I state several times that there’s value to SemVer as a means of communication. As you correctly say, the rest tries to dispel some myths (thanks for actually reading the article!), so I’m a bit saddened that you came to that conclusion. I’ve got a lot of feedback in the form of “I like SemVer but the article is right”, so I’m a bit baffled.

        1. 3

          There’s been a wild mixture of version schemes and they still exist and the only thing that they have in common is that you can order them.

          You can’t though, not with the vast majority of large projects.

          Which is more recent:

          • firefox 78.15esr or firefox 80.0.1?
          • OpenSSL_1_0_2u or OpenSSL_1_1_1c?
          • Linux 5.4.99 or 4.19.177?
          • Postgres 12.6 or Postgres 13.1?
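          A quick sketch of why numeric ordering doesn’t answer that (version strings taken from the list above with the esr tag stripped; the point holds whenever parallel release branches are maintained):

```python
# Compare dotted version strings numerically, component by component.
def version_key(v: str) -> tuple:
    return tuple(int(part) for part in v.split("."))

# 80.0.1 is "newer" numerically...
assert version_key("80.0.1") > version_key("78.15")
# ...but a long-term-support branch can ship 78.15 *after* 80.0.1 was
# released, so a greater version does not imply a later release date.
```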
          1. 1

            To be specific, many versioning systems only guarantee a partial ordering. This arises because they use a tree-like structure. (Contrast this with a total ordering.)

          2. 1

            That’s a very good point and it depends how you define “newer”. It certainly doesn’t mean “released after”.

        2. 1

          There’s been a wild mixture of version schemes and they still exist and the only thing that they have in common is that you can order them.

          I do agree with that. But trying to establish what the “ultimate task” of a versioning scheme is means coming up with a description of what problem(s) versioning schemes are intended to solve. I don’t think that “being unable to figure out which software release is newer than another” is really a description of a problem, because it’s not yet clear why that is valuable. I can say personally as a user of software (thinking primarily of packages/libraries here) that I never just want to know whether some release is newer than another, I always want to know 1) what changed between subsequent releases and the one my current project uses, and 2) why or whether that matters to my project. I’d say then that the task of a versioning scheme is to help me solve those problems, and that we can judge different versioning schemes by how well they do that.

          How does the post do that?

          I think it’s a little hard to explain concisely, because my read (and those of other commenters, I think) of the post as unfairly criticizing the value of SemVer (and maybe versioning schemes in general) is at least somewhat a consequence of what is emphasized, and maybe exaggerated, and what’s not. But here’s an example—you say at one point, after talking about strategies “to prevent third-party packages from breaking your project or even your business,” that

          There is nothing a version scheme can do to make it easier.

          which I think is simply untrue. In fact, like I was saying above, I think that’s the whole point (task) of a versioning scheme—to make the process of upgrading dependencies easier/less likely to break your project. Just because they (including SemVer) sometimes fail at that task, or try to reflect things (e.g. breaking API changes) that aren’t necessarily enforceable by mathematical proof, doesn’t mean that they can’t do anything to help us have fewer problems when upgrading our dependencies.

          1. 1

            and those of other commenters, I think

            I mean this in the least judgy way I can summon: I don’t think most other commenters have read the (whole) article. Part of that is poor timing on my side; I didn’t expect two other articles riffing on the same events to appear around the same time. :(

            Just because they (including SemVer) sometimes fail at that task, or try to reflect things (e.g. breaking API changes) that aren’t necessarily enforceable by mathematical proof, doesn’t mean that they can’t do anything to help us have fewer problems when upgrading our dependencies.

            I’m curious: how does that work in practice? Like, how does that affect your workflows?

            1. 2

              I’m curious: how does that work in practice? Like, how does that affect your workflows?

              For SemVer in particular, the MAJOR.MINOR.PATCH distinction helps give me a sense of how much time I should spend reviewing the changes/testing a new version of a package against my codebase. If I don’t want to audit every single line of code change of every package anytime I perform an upgrade (and I and many people don’t, or can’t), then I have to find heuristics for what subset of the changes to audit, and SemVer provides such a heuristic. If I’m upgrading a package from e.g. 2.0.0 to 4.0.0, it also gives me a sense of how to chunk the upgrade and my testing of it—in this case, it might be useful to upgrade first to 3.0.0 and test at that interval, and then upgrade from there to 4.0.0 and test that.

              Of course, as you note in your post, this is imperfect in lots of ways, and things could still break—but it does seem clearly better than e.g. a versioning scheme that just increments a number every time some arbitrary unit of code is changed.

              1. 1

                How many dependencies do you have though? I understand this is very much a cultural thing but to give you a taste from my production:

                • a Go project has 25 (from 9 direct)
                • a Python project has 48 (from 28 direct, some are internal though)
                • my homepage uses Tailwind CSS + PurgeCSS through PostCSS and the resulting package-lock.json has 171 dependencies (!!!!)

                It’s entirely untenable for me to check every project’s changelog/diff just because their major bumped – unless it breaks my test suites.

                I fully understand that there’s environments that require that sort of diligence (health, automotive, military, …) but I’m gonna go out on a limb and say that most people arguing about SemVer don’t live in that world. We could of course open a whole new topic about supply chain attacks but let’s agree that’s an orthogonal topic.

                P.S. All that said: nothing in the article said that SemVer is worthless, it explicitly says the opposite. I’m just trying to understand where you’re coming from.

                1. 3

                  When I’m “reviewing my dependencies” I certainly don’t look at indirect dependencies! I don’t use them directly, so changes to their interfaces are (almost) never my problem.

                2. 2

                  Like @singpolyma, I don’t bother with indirect dependencies either—I only review the changelogs of my direct dependencies.

                  The main project that I’m currently working on is an Elm/JS/TS app, and here’s the breakdown:

                  • Elm direct dependencies: 28
                  • JS direct dependencies: 22
                  • JS direct devDependencies: 70

                  I definitely read the changelog of every package that I update, and based on what I see there and what a smoke test of my app reveals I might dig in deeper, usually from there to the PRs that were merged between releases, and from there straight into the source code if necessary—although it rarely is. Dependabot makes this pretty easy, and upgrading Elm packages is admittedly much safer than upgrading JS ones. But I personally don’t find it to be all that time-consuming, and I think it yields pretty good results.

      2. -2

        Its ultimate task is to

        [citation needed]

        1. 1

          Are you saying that my claim as to what “versioning’s ultimate task” is requires citation? Or that the author’s does? I’m making a claim about what that is, just as the author is—I’m not trying to make an appeal to authority here.

    15. 1

      Nothing will save you from shoddy engineering - whether its your own or someone else’s.

      It’s really easy to assume the happy path is the only one that things will follow and leave it at that, but that’s not a good assumption to make. I’m honestly more surprised when my test suites pass and builds succeed than when they fail - it keeps me prepared to fix things and from making hard promises of deployment times (talking down-to-the-hour times - I can be more broad without giving up wiggle room).