1. 1

    Right off the bat, this post misunderstands the point of versioning. The author opens with:

    Let’s set the stage by laying down the ultimate task of version numbers: being able to tell which version of an entity is newer than another.

    That is not quite right. It’s true that that’s one thing that versioning does, but it is not its ultimate task. Its ultimate task is to communicate to users of the software what has changed between different releases, and what impact that has on them (i.e. “why should I care”). Otherwise, why does anyone care which release is newer? What does that matter to a user of the software?

    The rest of the post seems to be reacting to people who believe that SemVer solves a lot of problems that it doesn’t, and throws out the baby with the bath water in doing so. SemVer is certainly imperfect. And maybe there are versioning schemes that are better! But it does have a legitimate claim to attempting to accomplish versioning’s “ultimate task”. And I think that this post fails to sufficiently recognize this fact.

    1. 4

      Its ultimate task is to communicate to users of the software what has changed between different releases, and what impact that has on them (i.e. “why should I care”).

      I’m sorry, but historically, in that general sense, that’s just not true. There’s been a wild mixture of version schemes and they still exist and the only thing that they have in common is that you can order them.

      I could start enumerating examples, but let’s assume you’re right, because that’s not my point. What bothers me is this:

      and throws out the baby with the bath water in doing so

      How does the post do that? That was entirely not my intent, and I state several times that there’s value to SemVer as a means of communication. As you correctly say, the rest goes to dispel some myths (thanks for actually reading the article!), so I’m a bit saddened that you came to that conclusion. I’ve got a lot of feedback in the form of “I like SemVer, but the article is right”, so I’m a bit baffled.

      1. 3

        There’s been a wild mixture of version schemes and they still exist and the only thing that they have in common is that you can order them.

        You can’t though, not with the vast majority of large projects.

        Which is more recent:

        • firefox 78.15esr or firefox 80.0.1?
        • OpenSSL_1_0_2u or OpenSSL_1_1_1c?
        • Linux 5.4.99 or 4.19.177?
        • Postgres 12.6 or Postgres 13.1?
        1. 1

          That’s a very good point and it depends how you define “newer”. It certainly doesn’t mean “released after”.

          1. 1

            To be specific, many versioning systems only guarantee a partial ordering. This arises because they use a tree-like structure. (Contrast this with a total ordering.)
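
            To be concrete, that partial order can be sketched in a few lines of Python. This is a toy model of branch-based versioning (the Version type and precedes function are mine, not any real package manager’s comparison logic):

            ```python
            from dataclasses import dataclass

            @dataclass(frozen=True)
            class Version:
                major: int
                minor: int
                patch: int

            def precedes(a: Version, b: Version) -> bool:
                """Partial order: versions are comparable only within the same
                maintenance branch. Long-lived branches (e.g. Linux 4.19.x and
                5.4.x) receive releases concurrently, so across branches a
                higher number does not imply a later release date."""
                if (a.major, a.minor) != (b.major, b.minor):
                    raise ValueError("incomparable: different maintenance branches")
                return a.patch < b.patch

            print(precedes(Version(5, 4, 98), Version(5, 4, 99)))  # True
            ```

            Within a branch the order is total; across branches, precedes refuses to answer, which is exactly the tree-like structure described above.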

          2. 1

            There’s been a wild mixture of version schemes and they still exist and the only thing that they have in common is that you can order them.

            I do agree with that. But trying to establish what the “ultimate task” of a versioning scheme is means coming up with a description of what problem(s) versioning schemes are intended to solve. I don’t think that “being unable to figure out which software release is newer than another” is really a description of a problem, because it’s not yet clear why that is valuable. I can say personally as a user of software (thinking primarily of packages/libraries here) that I never just want to know whether some release is newer than another, I always want to know 1) what changed between subsequent releases and the one my current project uses, and 2) why or whether that matters to my project. I’d say then that the task of a versioning scheme is to help me solve those problems, and that we can judge different versioning schemes by how well they do that.

            How does the post do that?

            I think it’s a little hard to explain concisely because my read (and those of other commenters, I think) of the post as unfairly criticizing the value of SemVer (and maybe versioning schemes in general) is at least somewhat a consequence of what is emphasized, and maybe exaggerated, and what’s not. But here’s an example—you say at one point, after talking about strategies “to prevent third-party packages from breaking your project or even your business,” that

            There is nothing a version scheme can do to make it easier.

            which I think is simply untrue. In fact, like I was saying above, I think that’s the whole point (task) of a versioning scheme—to make the process of upgrading dependencies easier/less likely to break your project. Just because they (including SemVer) sometimes fail at that task, or try to reflect things (e.g. breaking API changes) that aren’t necessarily enforceable by mathematical proof, doesn’t mean that they can’t do anything to help us have fewer problems when upgrading our dependencies.

            1. 1

              and those of other commenters, I think

              I mean this in the least judgy way I can summon: I don’t think most other commenters have read the (whole) article. Part of that is poor timing on my side, but I didn’t expect two other articles riffing on the same topic to appear around the same time. :(

              Just because they (including SemVer) sometimes fail at that task, or try to reflect things (e.g. breaking API changes) that aren’t necessarily enforceable by mathematical proof, doesn’t mean that they can’t do anything to help us have fewer problems when upgrading our dependencies.

              I’m curious: how does that work in practice? Like, how does that affect your workflows?

              1. 2

                I’m curious: how does that work in practice? Like, how does that affect your workflows?

                For SemVer in particular, the MAJOR.MINOR.PATCH distinction helps give me a sense of how much time I should spend reviewing the changes/testing a new version of a package against my codebase. If I don’t want to audit every single line of code change of every package anytime I perform an upgrade (and I and many people don’t, or can’t), then I have to find heuristics for what subset of the changes to audit, and SemVer provides such a heuristic. If I’m upgrading a package from e.g. 2.0.0 to 4.0.0, it also gives me a sense of how to chunk the upgrade and my testing of it—in this case, it might be useful to upgrade first to 3.0.0 and test at that interval, and then upgrade from there to 4.0.0 and test that.

                Of course, as you note in your post, this is imperfect in lots of ways, and things could still break—but it does seem clearly better than e.g. a versioning scheme that just increments a number every time some arbitrary unit of code is changed.
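
                That chunking heuristic is mechanical enough to sketch in code. A hypothetical helper (the name and string handling are mine, not from any tool mentioned here) that turns a multi-major upgrade into per-major checkpoints might look like:

                ```python
                def major_checkpoints(current: str, target: str) -> list[str]:
                    """Split an upgrade like 2.0.0 -> 4.0.0 into one stop per
                    major version, so each step can be tested before moving on."""
                    cur_major = int(current.split(".")[0])
                    tgt_major = int(target.split(".")[0])
                    return [f"{m}.0.0" for m in range(cur_major + 1, tgt_major + 1)]

                print(major_checkpoints("2.0.0", "4.0.0"))  # ['3.0.0', '4.0.0']
                ```

                Real tools would pick the latest release within each major rather than the bare X.0.0, but the idea is the same: majors are the natural testing boundaries.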

                1. 1

                  How many dependencies do you have though? I understand this is very much a cultural thing but to give you a taste from my production:

                  • a Go project has 25 (from 9 direct)
                  • a Python project has 48 (from 28 direct, some are internal though)
                  • my homepage uses Tailwind CSS + PurgeCSS through PostCSS and the resulting package-lock.json has 171 dependencies (!!!!)

                  It’s entirely untenable for me to check every project’s changelog/diff just because their major bumped – unless it breaks my test suites.

                  I fully understand that there are environments that require that sort of diligence (health, automotive, military, …) but I’m gonna go out on a limb and say that most people arguing about SemVer don’t live in that world. We could of course open a whole new topic about supply chain attacks, but let’s agree that’s an orthogonal topic.

                  P.S. All that said: nothing in the article said that SemVer is worthless, it explicitly says the opposite. I’m just trying to understand where you’re coming from.

                  1. 3

                    When I’m “reviewing my dependencies” I certainly don’t look at indirect dependencies! I don’t use them directly, so changes to their interfaces are (almost) never my problem.

                    1. 2

                      Like @singpolyma, I don’t bother with indirect dependencies either—I only review the changelogs of my direct dependencies.

                      The main project that I’m currently working on is an Elm/JS/TS app, and here’s the breakdown:

                      • Elm direct dependencies: 28
                      • JS direct dependencies: 22
                      • JS direct devDependencies: 70

                      I definitely read the changelog of every package that I update, and based on what I see there and what a smoke test of my app reveals I might dig in deeper, usually from there to the PRs that were merged between releases, and from there straight into the source code if necessary—although it rarely is. Dependabot makes this pretty easy, and upgrading Elm packages is admittedly much safer than upgrading JS ones. But I personally don’t find it to be all that time-consuming, and I think it yields pretty good results.

            2. -2

              Its ultimate task is to

              [citation needed]

              1. 1

                Are you saying that my claim as to what “versioning’s ultimate task” is requires citation? Or that the author’s does? I’m making a claim about what that is, just as the author is—I’m not trying to make an appeal to authority here.

            1. 2

              You want to claim that version 3.2 is compatible with version 3.1 somehow, but how do you know that? You know the software basically “works” because of your unit tests, but surely you changed the tests between 3.1 and 3.2 if there were any intentional changes in behavior. How can you be sure that you didn’t remove or change any functions that someone might be calling?

              Semantic versioning states that a minor release such as 3.2 should only add backwards compatible changes.

              So all your existing unit tests from 3.1 should still be in place, untouched. You should have new unit tests, for the functionality added in 3.2.

              I stopped reading after this, because the argument seems to boil down to either not understanding Semantic versioning, or not having full unit test coverage.

              1. 20

                I stopped reading after this

                If you stopped reading at 10% of the article, you should probably also have stopped yourself from commenting.

                not understanding Semantic versioning

                The fallacy you’re committing here is very well documented.

                1. 1

                  If you are questioning whether the function you removed/changed is used by anyone when deciding the next version increment, you are not using semantic versioning correctly (unless you always increase the major, regardless of how many people used the feature you modified). As the parent said, if you need to edit 3.1 tests, you broke something, and the semver website is quite clear about what to do on breaking changes.

                  1. 7

                    If you don’t only test the public API, it’s entirely possible to introduce required changes in tests in bugfix versions.

                    More importantly, my point about “no true Scotsman” was that saying “SemVer is great if and only if you follow some brittle manual process to the letter” proves the blog post’s narrative. SemVer is wishful thinking. You can have ambitions to adhere to it, you can claim your projects follow it, but you shouldn’t ever blindly rely on others doing it right.

                    1. 5

                      The question then becomes: why does nobody do it, then? Do you truly believe that in a world where it’s super rare for a major version to exceed “5”, nobody ever had to change their tests because some low-level implementation detail changed?

                      We’re talking about real packages that have more than one layer. Not a bunch of pure functions. You build abstractions over implementation details and in non-trivial software, you can’t always test the full functionality without relying on the knowledge of said implementation details.

                      Maybe the answer is: “that’s why everybody stays in ZeroVer”, which is another way of saying that SemVer is impractical.

                  2. 6

                    The original fight about the PyCA cryptography package repeatedly suggested SemVer had been broken, and that if the team behind the package had adopted SemVer, there would have been far less drama.

                    Everyone who suggested this overlooked the fact that the change in question (from an extension module being built in C, to being built in Rust) did not change public API of the deliverable artifact in a backwards-incompatible way, and thus SemVer would not have been broken by doing that (i.e., if you ran pip install cryptography before and after, the module that ended up installed on your system exposed a public API that was compatible after with what you got before).

                    Unless you want to argue that SemVer requires a version bump for any change that any third-party observer might notice. In which case A) you’ve deviated from what people generally say SemVer is about (see the original thread here, for example, where many people waffled between “only about documented API” and “but cryptography should’ve bumped major for this”) and B) you’ve basically decreed that every commit increments major, because every commit potentially produces an observable change.

                    But if you’d like to commit to a single definition of SemVer and make an argument that adoption of it by the cryptography package would’ve prevented the recent dramatic arguments, feel free to state that definition and I’ll see what kind of counterargument fits against it.

                    1. 1

                      Everyone who suggested this overlooked the fact that the change in question (from an extension module being built in C, to being built in Rust) did not change public API of the deliverable artifact in a backwards-incompatible way

                      I think you’re overlooking this little tidbit:

                      Since the Gentoo Portage package manager indirectly depends on cryptography, “we will probably have to entirely drop support for architectures that are not supported by Rust”. He listed five architectures that are not supported by upstream Rust (alpha, hppa, ia64, m68k, and s390) and an additional five that are supported but do not have Gentoo Rust packages (mips, 32-bit ppc, sparc, s390x, and riscv).

                      I’m not sure many people would consider “suddenly unavailable on 10 CPU architectures” to be “backwards compatible”.

                      But if you’d like to commit to a single definition of SemVer and make an argument that adoption of it by the cryptography package would’ve prevented the recent dramatic arguments, feel free to state that definition and I’ll see what kind of counterargument fits against it.

                      If you can tell me how making a change in a minor release, that causes the package to suddenly be unavailable on 10 CPU architectures that it previously was available on, is not considered a breaking change, I will give you $20.

                      1. 8

                        Let’s take a simplified example.

                        Suppose I write a package called add_positive_under_ten. It exposes exactly one public function, with this signature:

                        def add_positive_under_ten(x: int, y: int) -> int
                        

                        The documented contract of this function is that x and y must be of type int and must each be greater than 0 and less than 10, and that the return value is an int which is the sum of x and y. If the requirements regarding the types of x and y are not met, TypeError will be raised. If the requirements regarding their values are not met, ValueError will be raised. The package also includes an automated test suite which exhaustively checks behavior and correctness for all valid inputs, and verifies that the aforementioned exceptions are raised on sample invalid inputs.
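
                        For concreteness, here is one way the pure-Python release might implement that contract. The body is a sketch of mine; only the signature and the documented behavior come from the description above:

                        ```python
                        def add_positive_under_ten(x: int, y: int) -> int:
                            # Validate types first, so TypeError takes precedence over ValueError.
                            # bool is rejected explicitly because it subclasses int in Python.
                            if not all(isinstance(a, int) and not isinstance(a, bool) for a in (x, y)):
                                raise TypeError("x and y must be of type int")
                            if not all(0 < a < 10 for a in (x, y)):
                                raise ValueError("x and y must each be greater than 0 and less than 10")
                            return x + y

                        print(add_positive_under_ten(4, 5))  # 9
                        ```

                        Rewriting this body as a compiled extension changes none of the documented behavior, which is why the exhaustive test suite keeps passing across the releases described next.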

                        In the first release of this package, it is pure Python. In a later, second release, I rewrite it in C as a compiled extension. In yet a later, third release, I rewrite the compiled C extension as a compiled Rust extension. From the perspective of a consumer of the package, the public API of the package has not changed. The documented behavior of the functions (in this case, single function) exposed publicly has not changed, as verified by the test suite.

                        Since Semantic Versioning as defined by semver.org applies to declared public API and nothing else whatsoever, Semantic Versioning would not require that I increment the major version with each of those releases.

                        Similarly, Semantic Versioning would not require that the pyca/cryptography package increment major for switching a compiled extension from C to Rust unless that switch also changed declared public API of the package in a backwards-incompatible way. The package does not adhere to Semantic Versioning, but even if it did there would be no obligation to increment major for this, under Semantic Versioning’s rules.

                        If you would instead like to argue that Semantic Versioning ought to apply to things beyond the declared public API, such as “any change a downstream consumer might notice requires incrementing major”, then I will point out that this is indistinguishable in practice from “every commit must increment major”.

                        1. 1

                          We don’t need a simplified, synthetic example.

                          We have the real-world example. Do you believe that making a change which effectively drops support for ten CPU architectures is a breaking change, or not? If not, why not? How is “does not work at all” not a breaking change?

                          1. 9

                            The specific claim at issue is whether Semantic Versioning would have caused this to go differently.

                            Although it doesn’t actually use SemVer, the pyca/cryptography package did not do anything that Semantic Versioning forbids. Because, again, the only thing Semantic Versioning forbids is incompatibility in the package’s declared public API. If the set of public classes/methods/functions/constants/etc. exposed by the package stays compatible as the underlying implementation is rewritten, Semantic Versioning is satisfied. Just as it would be if, for example, a function were rewritten to be more time- or memory-efficient than before while preserving the behavior.

                            And although Gentoo (to take an example) seemed to be upset about losing support for architectures Gentoo chooses to support, they are not architectures that Python (the language) supported upstream, nor as far as I can tell did the pyca/cryptography team ever make any public declaration that they were committed to supporting those architectures. If someone gets their software, or my software, or your software, running on a platform that the software never committed to supporting, that creates zero obligation on their (or my, or your) part to maintain compatibility for that platform. But at any rate, Semantic Versioning has nothing whatsoever to say about this, because what happened here would not be a violation of Semantic Versioning.

                        2. 7

                          If you can tell me how making a change in a minor release, that causes the package to suddenly be unavailable on 10 CPU architectures that it previously was available on, is not considered a breaking change, I will give you $20.

                          None of those architectures were maintained or promised by the maintainers; they were added by third parties. No matter what your opinion on SemVer is, the activities of third parties, whose existence you possibly didn’t even know about, are not part of it.

                          Keep your $20 but try to be a little more charitable and open-minded instead. We all have yet much to learn.

                          1. 0

                            Keep your $20 but try to be a little more charitable and open-minded instead. We all have yet much to learn.

                            If you think your argument somehow shows that breaking support for 10 CPU architectures isn’t a breaking change, then yes, we all have much to learn.

                            1. 8

                              You still haven’t explained why you think Semantic Versioning requires this. Or why you think the maintainers had any obligation to users they had never made any promises to in the first place.

                              But I believe I’ve demonstrated clearly that Semantic Versioning does not consider this to be a change that requires incrementing major, so if you’re still offering that $20…

                              1. 0

                                Part of what they ship is code that’s compiled, and literally the first two sentences of the project readme are:

                                cryptography is a package which provides cryptographic recipes and primitives to Python developers. Our goal is for it to be your “cryptographic standard library”.

                                If your self stated goal is to be the “standard library” for something and you’re shipping code that is compiled (as opposed to interpreted code, e.g. python), I would expect you to not break things relating to the compiled part of the library in a minor release.

                                Regardless of whether they directly support those other platforms or not, they ship code that is compiled, and their change to that compiled code, broke compatibility on those platforms.

                                1. 8

                                  Regardless of whether they directly support those other platforms or not, they ship code that is compiled, and their change to that compiled code, broke compatibility on those platforms.

                                  There are many types of agreements – some formal, some less so – between developers of software and users of software regarding support and compatibility. Developers declare openly which parts of the software they consider to be supported with a compatibility promise, and consumers of the software declare openly that they will not expect support or compatibility promises for parts of the software which are not covered by that declaration.

                                  Semantic Versioning is a mildly-formal way of doing this. But it is focused on only one specific part: the public API of the software. It is not concerned with anything else, at all, ever, for any reason, under any circumstances. No matter how many times you pound the table and loudly demand that something else – like the build toolchain – be covered by a compatibility guarantee, Semantic Versioning will not budge on it.

                                  The cryptography change did not violate Semantic Versioning. The public API of the module after the rewrite was backwards-compatible with the public API before the rewrite. This is literally the one, only, exclusive thing that Semantic Versioning cares about, and it was not broken.

                                  Meanwhile, you appear to believe that by releasing a piece of software, the author takes on an unbreakable obligation to maintain compatibility for every possible way the software might ever be used, by anyone, on any platform, in any logically-possible universe, forever. Even if the author never promised anything resembling that. I honestly do not know what the basis of such an obligation would be, nor what chain of reasoning would support its existence.

                                  What I do know is that the topic of this thread was Semantic Versioning. Although the cryptography library does not use Semantic Versioning, the rewrite of the extension module in Rust did not violate Semantic Versioning. And I know that nothing gives you the right to make an enforceable demand of the developers that they maintain support and compatibility for building and running on architectures that they never committed to supporting in the first place, and nothing creates any obligation on their part to maintain such support and compatibility. The code is under an open-source license. If you depended on it in a way that was not supported by the developers’ commitments, your remedy is to maintain your own fork of it, as with any other upstream decision you dislike.

                      2. 4

                        “Should” is the key word here, because I haven’t ever contributed to an open source project that has that as part of its policy, nor have I observed its wide application, given the state of third-party packages.

                        The article specifically speaks about the divergence between aspiration and reality and what conclusions can be drawn from that.

                        1. 3

                          Unfortunately the aspiration is broken too.

                          1. 2

                            Baby steps 😇

                        2. 3

                          It sounds like you’re proposing to use unit tests to prove that a minor release doesn’t introduce backwards-incompatible changes. However, tests cannot substitute for proofs; there are plenty of infinite behaviors which we want to write down in code but cannot exhaustively test.

                          All of these same problems happen in e.g. Haskell’s ecosystem. It turns out that simply stating that minor releases should only add backwards-compatible changes is just an opinion and not actually a theorem about code.

                          1. 1

                            Aside:

                            I just recently added a test that exercises the full API of a Rust library of mine, doing so in such a way that any backwards-incompatible changes would cause an error if added. (The particular case was that I’d add a member to a config struct, and so anyone constructing that struct without including a ..StructName::default() at the end would suddenly have a compile error because they were missing a field.) This seemed to do the trick nicely and would remind me to bump the appropriate part of semver when making a release.

                            I work on the library (and in the Rust ecosystem) infrequently so it’s not at the front of my mind. More recently I accepted a PR, and made a new release including it after. Then I got the warning, again, that I’d broken semver. Of course, the failing test was seen by the contributor and fixed up before they submitted the PR, so I never saw the alarm bells ringing.
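
                            The same guard-rail can be sketched in Python. Everything here is a hypothetical stand-in (the real case above is a Rust config struct), but the mechanism is the same: the test constructs the public surface exactly as an old consumer would, so an incompatible change fails the test:

                            ```python
                            from dataclasses import dataclass

                            # Hypothetical stand-in for the library under test.
                            @dataclass
                            class Config:
                                host: str
                                port: int = 8080
                                # A field added later MUST carry a default like this one,
                                # or the compatibility test below can no longer construct Config.
                                timeout: float = 30.0

                            def connect(cfg: Config) -> str:
                                return f"{cfg.host}:{cfg.port}"

                            def test_public_surface_unchanged():
                                # Name only the fields that existed in 1.0. Adding a required
                                # field later turns this call into a TypeError, the cue to
                                # bump major.
                                cfg = Config(host="localhost")
                                assert connect(cfg) == "localhost:8080"

                            test_public_surface_unchanged()
                            ```

                            The catch, as the anecdote above shows, is social rather than technical: a well-meaning contributor can “fix” the failing test instead of treating it as a release signal.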

                            1. 1

                              No, I think they have a valid point. “Surely” implies that it’s normal to “change” unit tests between minor versions, but the term “change” here mixes “adding new” and “modifying existing” in a misleading way. Existing unit tests should not change between minor versions, as they validate the contract. Of course, they may change anyway, for instance if they were not functional at all, or tested something wrong, but it should certainly not be common.

                              edit: I am mixing up unit tests and system tests, my apologies. Unit tests can of course change freely, but they also have no relation to SemVer; the debate only applies to tests of the user-facing API.

                              1. 2

                                I know people use different terminology for the same things, but if the thing being tested is a software library, I would definitely consider any of the tests that aren’t reliant on something external (e.g. if you’re testing a string manipulation method) to be unit tests.

                                1. 1

                                  Take any function from the natural numbers to the natural numbers. How do you unit-test it in a way that ensures that its behavior cannot change between semantic versions? Even property tests can only generate a finite number of test cases.

                                  1. 2

                                    I think the adage “code is written for humans to read, and only incidentally for computers to execute” applies to tests especially. Of course you can’t test every case, but intention does count.

                            1. 4

                              I think Łukasz Langa, Python core developer, has some serious comments about the benchmark setup: https://twitter.com/llanga/status/1271719778324025349?s=19

                              1. 3

                                Thanks for linking this. A bit of rebuttal from me:

                                1. As I stated in the article, I did try 4 async workers. Performance was worse than with 5 workers (though not hugely). I don’t have a potted explanation for this I’m afraid - I can only say for sure that using 4 async workers instead of 5 did not change the results for the better in asyncland. (Note: I tried varying the worker numbers for all frameworks individually, not as a collective).

                                2. I take the point about running the whole thing on one machine. It would be better if I hadn’t, of course. It seems unlikely that doing so would change the result, since load on the other components was so low. I would be keen to read of any benchmark results using such a multi-machine setup, particularly any that find in favour of async, as I don’t know of any. I would add, for anyone hoping to replicate my results (as a friend of mine did): it takes a lot of time. It’s not enough in my opinion to just throw up these servers in a naive manner; you need to make a good-faith effort to tune and add infrastructure to improve performance. For example, when I ran the async servers without a connection pool they broke everything (including themselves).

                                3. Beyond my own results, there is a chunky body of extant “sysadmin lore” that says: async is problematic under load. I reference a few of the publicly available reports in my article: from Etsy; claims from inside a ridesharing startup; etc. I have also had negative private experiences too (prior to asyncio). The SQLAlchemy author wrote several years ago about this problem and kindly appeared in the HN thread to repeat his claims. The Flask author alluded to unfavourable private benchmarks, presumably from his workplace. The list goes on (including in other language communities).

                                1. 4

                                  Hi.

                                  The point about not scaling above ~4 workers on 4 vCPUs has little to do with 4 vs 5 workers. It’s about being able to saturate your CPU cores with far fewer processes compared to sync workers.

                                  You could at least acknowledge in your post that sync frameworks achieve on par performance by using more memory. Hard to do an exact apples to apples comparison but the idea stands: async frameworks allow much denser resource usage.

                                  The reason why running your database with your Python process is not a realistic case goes beyond the operational problems with it (no high availability, no seamless scaling, hard upgrades and backups). The problem is that it unrealistically minimizes latency between the services. It doesn’t take much for the sync case advantage to go away as soon as you put the database on a separate box.

                                  That separation would also allow for cheaper scaling: you can run just a few micro instances with little memory and a single vCPU and async workers will be perfectly happy with that.
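
                                  Here is a back-of-envelope version of that argument, with made-up numbers (Little’s law: throughput ≈ concurrency ÷ time per request); the core count, CPU time, and worker count are all assumptions for illustration:

                                  ```python
                                  CORES = 4
                                  CPU_TIME = 0.001   # assumed CPU work per request: 1 ms
                                  WORKERS = 16       # assumed fixed pool of sync worker processes

                                  def sync_rps(db_latency):
                                      # each sync worker is tied up for the CPU time *plus* the DB round
                                      # trip, and throughput can never exceed what the CPUs can do
                                      return min(WORKERS / (CPU_TIME + db_latency), CORES / CPU_TIME)

                                  def async_rps(db_latency):
                                      # waiting requests don't occupy a worker, so only CPU bounds throughput
                                      return CORES / CPU_TIME

                                  same_box, separate_box = 0.0001, 0.005  # 0.1 ms vs. 5 ms round trip
                                  print(sync_rps(same_box), async_rps(same_box))          # 4000.0 4000.0 - a tie
                                  print(sync_rps(separate_box), async_rps(separate_box))  # ~2667 vs. 4000.0
                                  ```

                                  On the same box the sync pool ties; move the database one network hop away and it falls behind unless you add workers, i.e. spend memory.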

                                  Finally, appealing to authority and “sysadmin lore” should be out of scope for a benchmark that tries to be objective. For every Etsy I can give you a Facebook, which moved entirely to an async request model, including Instagram, which is using Python 3. And nginx, which you’re using yourself in your benchmark, was a big upgrade over Apache largely because of its single-threaded async model vs. a pre-fork server.

                                  You also need to be careful whose authority you’re appealing to. Quoting Nathaniel J. Smith pointing out deficiencies of asyncio loses its intended force when you add that he is such a strong proponent of asynchronous programming that he created his own framework. That framework, Trio, is a fantastic research environment that has already informed the evolution of asyncio, and I’m sure it will keep doing so. That’s the point: Nathaniel’s posts aren’t saying “stop using async programming”. They are saying “here’s how we can make it better”.

                                  1. 2

                                    The memory point is fine - for sure, less memory is used. How important that is depends on deployment; traditionally, memory usage is not a huge problem for webservers. I contend: not very important for most people.

                                    I don’t accept the implication that I need to build an HA Postgres cluster with backups and replication chains and whatnot in order to test. That would raise the goalposts so high that constructing a benchmark would be a huge amount of effort and cost for anyone. If you’re aware of a cache of publicly available benchmarks that meet your exacting criteria in this respect, referencing them would be great.

                                    Going to the harder nut of that - the lower latency from running on the same machine - I am doubtful about how much it matters. Adding more blocking IO operations is simply not going to help, because (as I stated elsewhere on this page) IO model just does not seem relevant to throughput for “embarrassingly parallel” tasks like webservers. The fact that uWSGI is native code is the biggest determinant of throughput. For response times, of course, doing something else while waiting actually seems to hurt - async runtimes don’t schedule their workloads as fairly as the kernel scheduler does for processes.

                                    Nginx using async is fine - everyone seems to think that nginx works OK, and the Python community did not have to rewrite a large portion of its ecosystem in order to switch from apache2 to nginx.

                                    On the subject of sysadmin lore - I’m afraid I don’t agree that it is out of scope! I’m not bound by intergalactic law to consider only my own evidence, and I think it’s probably a good idea to weigh outside evidence as well as what I have available myself - after all, it’s not as though I will have many opportunities to replicate multi-year programmes of software engineering in a cleanroom environment.

                                    Thanks for taking the time to find me on a different medium in order to respond.

                                2. 1

                                  I mean you really shouldn’t go past the title here.

                                  The claim that sync code would somehow be faster is absurd in its own right: unless your program has absolutely zero IO wait, the async overhead will always be lower than the benefits.
                                  The only real argument here would be that the increased code complexity increases the likelihood of faults.

                                  1. 2

                                      The claim that sync code would somehow be faster is absurd in its own right: unless your program has absolutely zero IO wait, the async overhead will always be lower than the benefits.

                                    Maybe true in python, I don’t know. Demonstrably untrue for high-throughput work on servers with high core counts due to the locking overhead.

                                    1. 1

                                      And yet it is faster and I try hard to explain why in the body of the article (which of course I recommend strongly as the author of it :)). To briefly recap:

                                      1. IO model is irrelevant, as OS-scheduled multi-processing solves the problem of embarrassingly parallel workloads blocking on IO
                                      2. Use of native code matters a great deal and is otherwise the dominant factor
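
                                      Point 1 can be seen in miniature with the stdlib: blocked workers overlap their waits under OS scheduling, so total wall-clock time is one wait, not the sum of the waits. Threads stand in here for pre-forked worker processes, and the sleep for a blocking database call; the counts and durations are made up:

                                      ```python
                                      import time
                                      from concurrent.futures import ThreadPoolExecutor

                                      def blocking_request(_):
                                          time.sleep(0.05)  # stand-in for blocking IO (e.g. a database query)
                                          return 1

                                      start = time.monotonic()
                                      with ThreadPoolExecutor(max_workers=8) as pool:  # "workers" as in a pre-fork server
                                          handled = sum(pool.map(blocking_request, range(8)))
                                      elapsed = time.monotonic() - start
                                      print(handled, elapsed)  # 8 requests in roughly 0.05 s, not 8 * 0.05 s
                                      ```

                                      The waits overlap whether the concurrency comes from an event loop or from the kernel scheduler, which is why the IO model alone doesn’t decide throughput here.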
                                      1. 1

                                        And yet it is faster

                                        To me it seems like really digging for minute edge cases.
                                        Async code, especially in Python, is about implicitly eliminating IO wait: I can deploy my app anywhere, on any machine, in any way, and it will always manage IO wait time optimally.