Threads for oscar

  1. 2

    You want to claim that version 3.2 is compatible with version 3.1 somehow, but how do you know that? You know the software basically “works” because of your unit tests, but surely you changed the tests between 3.1 and 3.2 if there were any intentional changes in behavior. How can you be sure that you didn’t remove or change any functions that someone might be calling?

    Semantic versioning states that a minor release such as 3.2 should only add backwards-compatible changes.

    So all your existing unit tests from 3.1 should still be in place, untouched. You should have new unit tests, for the functionality added in 3.2.

    I stopped reading after this, because the argument seems to boil down to either not understanding Semantic versioning, or not having full unit test coverage.

    1. 20

      I stopped reading after this

      If you stopped reading at 10% of the article, you should probably also have stopped yourself from commenting.

      not understanding Semantic versioning

      The fallacy you’re committing here is very well documented.

      1. 1

        If you are questioning whether the function you removed/changed is used by anyone when deciding the next version increment, you are not using semantic versioning correctly (unless you always increase the major, regardless of how many people used the feature you modified). As the parent said, if you need to edit 3.1 tests, you broke something, and the semver website is quite clear about what to do on breaking changes.

        1. 7

          If you don’t only test the public API, it’s entirely possible to introduce required changes in tests in bugfix versions.

          More importantly, my point about “no true Scotsman” was that saying “SemVer is great if and only if you follow some brittle manual process to the dot” proves the blog post’s narrative. SemVer is wishful thinking. You can have ambitions to adhere to it, you can claim your projects follow it, but you shouldn’t ever blindly rely on others doing it right.

          1. 5

            The question then becomes: why does nobody do it, then? Do you truly believe that, in a world where it’s super rare for a major version to exceed “5”, nobody ever had to change their tests because some low-level implementation detail changed?

            We’re talking about real packages that have more than one layer. Not a bunch of pure functions. You build abstractions over implementation details and in non-trivial software, you can’t always test the full functionality without relying on the knowledge of said implementation details.

            Maybe the answer is: “that’s why everybody stays in ZeroVer”, which is another way of saying that SemVer is impractical.

        2. 6

          The original fight about the PyCA cryptography package repeatedly suggested SemVer had been broken, and that if the team behind the package had adopted SemVer, there would have been far less drama.

          Everyone who suggested this overlooked the fact that the change in question (from an extension module being built in C, to being built in Rust) did not change the public API of the deliverable artifact in a backwards-incompatible way, and thus SemVer would not have been broken by doing that (i.e., if you ran pip install cryptography before and after, the module that ended up installed on your system exposed a public API afterwards that was compatible with what you got before).

          Unless you want to argue that SemVer requires a version bump for any change that any third-party observer might notice. In which case A) you’ve deviated from what people generally say SemVer is about (see the original thread here, for example, where many people waffled between “only about the documented API” and “but cryptography should’ve bumped major for this”) and B) you’ve basically decreed that every commit increments major, because every commit potentially produces an observable change.

          But if you’d like to commit to a single definition of SemVer and make an argument that adoption of it by the cryptography package would’ve prevented the recent dramatic arguments, feel free to state that definition and I’ll see what kind of counterargument fits against it.

          1. 1

            Everyone who suggested this overlooked the fact that the change in question (from an extension module being built in C, to being built in Rust) did not change the public API of the deliverable artifact in a backwards-incompatible way

            I think you’re overlooking this little tidbit:

            Since the Gentoo Portage package manager indirectly depends on cryptography, “we will probably have to entirely drop support for architectures that are not supported by Rust”. He listed five architectures that are not supported by upstream Rust (alpha, hppa, ia64, m68k, and s390) and an additional five that are supported but do not have Gentoo Rust packages (mips, 32-bit ppc, sparc, s390x, and riscv).

            I’m not sure many people would consider “suddenly unavailable on 10 CPU architectures” to be “backwards compatible”.

            But if you’d like to commit to a single definition of SemVer and make an argument that adoption of it by the cryptography package would’ve prevented the recent dramatic arguments, feel free to state that definition and I’ll see what kind of counterargument fits against it.

            If you can tell me how making a change in a minor release, that causes the package to suddenly be unavailable on 10 CPU architectures that it previously was available on, is not considered a breaking change, I will give you $20.

            1. 8

              Let’s take a simplified example.

              Suppose I write a package called add_positive_under_ten. It exposes exactly one public function, with this signature:

              def add_positive_under_ten(x: int, y: int) -> int

              The documented contract of this function is that x and y must be of type int and must each be greater than 0 and less than 10, and that the return value is an int which is the sum of x and y. If the requirements regarding the types of x and y are not met, TypeError will be raised. If the requirements regarding their values are not met, ValueError will be raised. The package also includes an automated test suite which exhaustively checks behavior and correctness for all valid inputs, and verifies that the aforementioned exceptions are raised on sample invalid inputs.
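
              That contract can be sketched in a few lines (a hypothetical package, written here as the pure-Python first release, matching only the behavior described above):

```python
def add_positive_under_ten(x: int, y: int) -> int:
    """Return x + y, where x and y are ints with 0 < value < 10."""
    if not isinstance(x, int) or not isinstance(y, int):
        raise TypeError("x and y must be of type int")
    if not (0 < x < 10 and 0 < y < 10):
        raise ValueError("x and y must be greater than 0 and less than 10")
    return x + y

# Exhaustive check over all 81 valid input pairs, as described above.
for a in range(1, 10):
    for b in range(1, 10):
        assert add_positive_under_ten(a, b) == a + b
```

              Rewriting the body in C or Rust would leave every one of these checks green, which is exactly why SemVer sees no breaking change in such a rewrite.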

              In the first release of this package, it is pure Python. In a later, second release, I rewrite it in C as a compiled extension. In yet a later, third release, I rewrite the compiled C extension as a compiled Rust extension. From the perspective of a consumer of the package, the public API of the package has not changed. The documented behavior of the functions (in this case, single function) exposed publicly has not changed, as verified by the test suite.

              Since Semantic Versioning as defined by semver.org applies to the declared public API and nothing else whatsoever, Semantic Versioning would not require that I increment the major version with each of those releases.

              Similarly, Semantic Versioning would not require that the pyca/cryptography package increment major for switching a compiled extension from C to Rust unless that switch also changed declared public API of the package in a backwards-incompatible way. The package does not adhere to Semantic Versioning, but even if it did there would be no obligation to increment major for this, under Semantic Versioning’s rules.

              If you would instead like to argue that Semantic Versioning ought to apply to things beyond the declared public API, such as “any change a downstream consumer might notice requires incrementing major”, then I will point out that this is indistinguishable in practice from “every commit must increment major”.

              1. 1

                We don’t need a simplified, synthetic example.

                We have the real world example. Do you believe that making a change which effectively drops support for ten CPU architectures is a breaking change, or not? If not, why not? How is “does not work at all”, not a breaking change?

                1. 9

                  The specific claim at issue is whether Semantic Versioning would have caused this to go differently.

                  Although it doesn’t actually use SemVer, the pyca/cryptography package did not do anything that Semantic Versioning forbids. Because, again, the only thing Semantic Versioning forbids is incompatibility in the package’s declared public API. If the set of public classes/methods/functions/constants/etc. exposed by the package stays compatible as the underlying implementation is rewritten, Semantic Versioning is satisfied. Just as it would be if, for example, a function were rewritten to be more time- or memory-efficient than before while preserving the behavior.

                  And although Gentoo (to take an example) seemed to be upset about losing support for architectures Gentoo chooses to support, they are not architectures that Python (the language) supported upstream, nor as far as I can tell did the pyca/cryptography team ever make any public declaration that they were committed to supporting those architectures. If someone gets their software, or my software, or your software, running on a platform that the software never committed to supporting, that creates zero obligation on their (or my, or your) part to maintain compatibility for that platform. But at any rate, Semantic Versioning has nothing whatsoever to say about this, because what happened here would not be a violation of Semantic Versioning.

              2. 7

                If you can tell me how making a change in a minor release, that causes the package to suddenly be unavailable on 10 CPU architectures that it previously was available on, is not considered a breaking change, I will give you $20.

                None of those architectures were maintained or promised by the maintainers; they were added by third parties. No matter what your opinion on SemVer is, the activities of third parties, whose existence you possibly didn’t even know about, are not part of it.

                Keep your $20 but try to be a little more charitable and open-minded instead. We all have yet much to learn.

                1. 0

                  Keep your $20 but try to be a little more charitable and open-minded instead. We all have yet much to learn.

                  If you think your argument somehow shows that breaking support for 10 CPU architectures isn’t a breaking change, then yes, we all have much to learn.

                  1. 8

                    You still haven’t explained why you think Semantic Versioning requires this. Or why you think the maintainers had any obligation to users they had never made any promises to in the first place.

                    But I believe I’ve demonstrated clearly that Semantic Versioning does not consider this to be a change that requires incrementing major, so if you’re still offering that $20…

                    1. 0

                      Part of what they ship is code that’s compiled, and literally the first two sentences of the project readme are:

                      cryptography is a package which provides cryptographic recipes and primitives to Python developers. Our goal is for it to be your “cryptographic standard library”.

                      If your self stated goal is to be the “standard library” for something and you’re shipping code that is compiled (as opposed to interpreted code, e.g. python), I would expect you to not break things relating to the compiled part of the library in a minor release.

                      Regardless of whether they directly support those other platforms or not, they ship code that is compiled, and their change to that compiled code broke compatibility on those platforms.

                      1. 8

                        Regardless of whether they directly support those other platforms or not, they ship code that is compiled, and their change to that compiled code broke compatibility on those platforms.

                        There are many types of agreements – some formal, some less so – between developers of software and users of software regarding support and compatibility. Developers declare openly which parts of the software they consider to be supported with a compatibility promise, and consumers of the software declare openly that they will not expect support or compatibility promises for parts of the software which are not covered by that declaration.

                        Semantic Versioning is a mildly-formal way of doing this. But it is focused on only one specific part: the public API of the software. It is not concerned with anything else, at all, ever, for any reason, under any circumstances. No matter how many times you pound the table and loudly demand that something else – like the build toolchain – be covered by a compatibility guarantee, Semantic Versioning will not budge on it.

                        The cryptography change did not violate Semantic Versioning. The public API of the module after the rewrite was backwards-compatible with the public API before the rewrite. This is literally the one, only, exclusive thing that Semantic Versioning cares about, and it was not broken.

                        Meanwhile, you appear to believe that by releasing a piece of software, the author takes on an unbreakable obligation to maintain compatibility for every possible way the software might ever be used, by anyone, on any platform, in any logically-possible universe, forever. Even if the author never promised anything resembling that. I honestly do not know what the basis of such an obligation would be, nor what chain of reasoning would support its existence.

                        What I do know is that the topic of this thread was Semantic Versioning. Although the cryptography library does not use Semantic Versioning, the rewrite of the extension module in Rust did not violate Semantic Versioning. And I know that nothing gives you the right to make an enforceable demand of the developers that they maintain support and compatibility for building and running on architectures that they never committed to supporting in the first place, and nothing creates any obligation on their part to maintain such support and compatibility. The code is under an open-source license. If you depended on it in a way that was not supported by the developers’ commitments, your remedy is to maintain your own fork of it, as with any other upstream decision you dislike.

            2. 4

              “Should” is the key word here, because I haven’t ever contributed to an open-source project that has that as part of its policy, nor have I observed its wide application, given the state of third-party packages.

              The article specifically speaks about the divergence between aspiration and reality and what conclusions can be drawn from that.

              1. 3

                Unfortunately the aspiration is broken too.

                1. 2

                  Baby steps 😇

              2. 3

                It sounds like you’re proposing to use unit tests to prove that a minor release doesn’t introduce backwards-incompatible changes. However, tests cannot substitute for proofs; there are plenty of infinite behaviors which we want to write down in code but cannot exhaustively test.

                All of these same problems happen in e.g. Haskell’s ecosystem. It turns out that simply stating that minor releases should only add backwards-compatible changes is just an opinion and not actually a theorem about code.

                1. 1

                  No I think they have a valid point. “Surely” implies that it’s normal to “change” unittests between minor versions, but the term “change” here mixes “adding new” and “modifying existing” in a misleading way. Existing unittests should not change between minor versions, as they validate the contract. Of course, they may change anyway, for instance if they were not functional at all, or tested something wrong, but it should certainly not be common.

                  edit: I am mixing up unittests and system tests, my apologies. Unit tests can of course change freely, but they also have no relation to SemVer; the debate only applies to tests of the user-facing API.
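
                  To make that distinction concrete, here is a sketch of a user-facing contract test (the slugify function and its contract are made up): it pins documented behavior, so it should pass unchanged from 3.1 to 3.2, while internal unit tests may churn freely.

```python
import re

# Hypothetical library function; its documented contract is:
# lowercase the input, trim it, and collapse runs of spaces to one hyphen.
def slugify(s: str) -> str:
    return re.sub(r" +", "-", s.strip().lower())

# Contract tests: these encode the public promise, so a minor release
# that forces them to change has broken backwards compatibility.
assert slugify("Hello World") == "hello-world"
assert slugify("  Semantic   Versioning ") == "semantic-versioning"
```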

                  1. 2

                    I know people use different terminology for the same things, but if the thing being tested is a software library, I would definitely consider any of the tests that aren’t reliant on something external (e.g. if you’re testing a string manipulation method) to be unit tests.

                    1. 1

                      Take any function from the natural numbers to the natural numbers. How do you unit-test it in a way that ensures that its behavior cannot change between semantic versions? Even property tests can only generate a finite number of test cases.
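
                      As an illustration of that limit, a function can agree with the identity on every sampled point and still differ somewhere a finite test never reaches (plain-Python sketch; a property-testing library has the same blind spot):

```python
import random

def f(n: int) -> int:
    # Identity everywhere except one far-away point that random
    # sampling over a bounded range will never hit.
    return 0 if n == 10**18 + 7 else n

random.seed(0)
# Ten thousand "property test" cases, all passing...
assert all(f(n) == n for n in (random.randrange(10**6) for _ in range(10_000)))
# ...yet f is not the identity function on the naturals.
assert f(10**18 + 7) == 0
```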

                      1. 2

                        I think the adage “code is written for humans to read, and only incidentally for computers to execute” applies to tests especially. Of course you can’t test every case, but intention does count.

                    2. 1


                      I just recently added a test that exercises the full API of a Rust library of mine, doing so in such a way that any backwards-incompatible changes would error if added. (The particular case was that I’d add a member to a config struct, and so anyone constructing that struct without including a ..StructName::default() at the end would suddenly have a compile error because they were missing a field.) This seemed to do the trick nicely and would remind me to bump the appropriate part of semver when making a release.

                      I work on the library (and in the Rust ecosystem) infrequently so it’s not at the front of my mind. More recently I accepted a PR, and made a new release including it after. Then I got the warning, again, that I’d broken semver. Of course, the failing test was seen by the contributor and fixed up before they submitted the PR, so I never saw the alarm bells ringing.

                  1. 41

                    I self-host Bitwarden. Before that we went through Enpass and 1Password, but nothing felt as secure. Just be sure to back up data regularly in either case!

                    The reason for going down the self-hosted route was primarily privacy. The only drawback is that we can’t sync without a VPN (which we do not mind at all).

                    1. 7

                      I use pass as my standard password manager, but I’m thinking about switching to Bitwarden, largely so that I can access passwords on my phone. I’m actually currently running self-hosted bitwarden-rs myself, although my instance only has a single test password in it so far. The main thing I’m concerned about is having access to my passwords if my self-hosted webserver goes down for whatever reason. I haven’t figured out if bitwarden-rs provides a convenient way to do this.

                      1. 8

                        I use bitwarden-rs. I have found that you generally have access to passwords on already-synced clients if the server is down. Sometimes, either due to elapsed time, or because the client has tried some operation that requires access to the server, the client will insist on a connection to the server before it will proceed. I haven’t yet cared enough to run this down; my existing clients give access just fine for short server outages, and that’s the case I care about. (I.e., I never have to go fix a server in order to log in to something. I have had to go fix a server in order to set up a new browser, as you might expect.)

                        1. 6

                          If you’d like to stick with pass, you might want to take a look at the apps. For me, using the Android app with Syncthing is working very well. I especially like it because I only sync a subset of my passwords, those in the “phone” directory; these are configured to be encrypted with my GPG key as well as another key created specifically for my phone.

                          This adds the Syncthing dependency, which I didn’t mind because I was already using it for other data, so it was very easy to configure. However, you can also synchronize using Git (at least on Android).

                          1. 8

                            I use pass + git + GPG on my computers and Password Store (git built-in) + OpenKeychain on Android. The git repository is served from my server at home. No need for Syncthing, but in order to update any passwords I require being on my home network, however that is infrequent.

                            1. 3

                              You can use Syncthing in combination with git, by replacing the .git directory with a .git file with contents gitdir: /path/to/.git. Then the git index will be excluded from Syncthing sync. You get the best of both worlds.

                              1. 1

                                I use Syncthing to sync my bare git repos, and push to them from other folders on my machine. Works pretty well.

                          2. 2

                            I don’t have access to home network/VPN on my laptop for at least 8 hours a day and no issues so far. Just note that you can’t save new passwords without a connection.

                            1. 1

                              I use VPN for editing. Access is OK with a cached version.

                          1. 5

                            In their Vivaldi review, they mentioned Piwik as if it were spyware. And that seemed to be the loudest objection. It’s been a few years, but my recollection of Piwik was just that it was local analytics for a site. By that I mean you could use it to see where someone came from, what they did on your site, and when they left. It wasn’t anything that could track you across sites, and nothing about it then made me nervous. Is there something nasty about it that either I missed or that has been developed since then?

                            1. 1

                              You can either self-host Matomo (the scenario you described) or use an already-existing server (Matomo themselves offer that for a fee), which would give the possibility of tracking across sites; you would have to read the ToS and trust Matomo not to do it. I don’t know what Vivaldi was doing, but it wouldn’t be the first time I’ve read “local” tracking described as spyware.

                              (Piwik was renamed to Matomo)

                            1. 2

                              A long-lasting stack is something very appealing to me as well; however, I think the lack of templates would end up costing more than it saves. I ended up replacing SSGs with a small Python script that does templating and a couple of other functions I wanted (like feeds and pagination). If someone is interested, I can recommend forking, which is what I did and it worked very well, and your only dependency is Python (and its standard library).
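
                              As a sketch of how small such a script can stay, here is a templating core using only the standard library (the feed and pagination parts are omitted, and the template and names are made up):

```python
from string import Template

# One page template; $title and $body are substitution slots.
PAGE = Template(
    "<html><head><title>$title</title></head>\n"
    "<body>$body</body></html>\n"
)

def render(title: str, body: str) -> str:
    return PAGE.substitute(title=title, body=body)

print(render("Hello", "<p>First post.</p>"))
```

                              string.Template has been in the standard library since Python 2.4, which fits the goal of a stack that doesn’t churn.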

                              1. 7

                                Yep! You can apply templating with GNU Make and m4, both of which haven’t introduced any breaking changes in several decades, so the idea that templating inherently introduces churn and maintenance headaches is rather silly.

                                1. 3

                                  both of which haven’t introduced any breaking changes in several decades

                                  GNU make occasionally introduces minor incompatibilities (see the first two notes).

                                  But, yes, aside from that the core features have been the same for decades.

                                  1. 2

                                    Interesting, thanks!

                                    While it’s unlikely those changes would affect HTML generation for such a site, I guess my original statement was over-broad. =)

                                  2. 2

                                    Google and ye shall receive:

                                1. 1

                                  This is very similar to how I’ve been running my own personal git server for years on my VPS.

                                  The git repositories live in the home directory of a user named git. My SSH key is in the authorized_keys file of that account. When I want to create a new repo, I have a shell function that logs in and runs the right command so that I don’t have to relearn how to do it every few months.

                                  I use cgit for a convenient web interface, but I may take a look at stagit since I’ve never been fully comfortable around the idea of a persistent CGI daemon written in C. (Even though it is completely hidden behind HTTPS and HTTP Basic Auth.)

                                  1. 2

                                    I like cgit a lot. I sometimes think about changing to it, but a static site is something hard to give up for me.

                                    If you want a similar experience to cgit and you have time, you can always try to program it yourself. I did that to add Markdown rendering for the READMEs, which was something I really missed, and I’m pretty satisfied.

                                  1. 2

                                    It’s nice to have all this information in one place. I have spent a lot of time looking around to get these results (most of them, really). Most of the issues brought up weren’t so much Linux-on-Mac issues as i3 things (if you are using GNOME, most of the mentioned features are already there out of the box). I would have loved to have had this post a month ago when I was switching to i3.