1. 35
    1. 41

      I think this, and many other similar cases, spanning back to the Python 2.7 fallout, reveal an interesting divide within the community of Python users.

      (This is entirely an observation, not a “but they should support it, if not until the thermal death of the Universe, at least until the Sun is reasonably close to being a Red Giant” rant. Also, seriously, 5 years is pretty good. It would’ve been pretty good even before the “move fast and break things” era).

      There are, on the one hand, the people who run Python as an application, or at the very least as an operational infrastructure language. They need security updates because running an unpatched Django installation is a really bad idea. Porting their codebase forward is not just necessary, it provides real value.

      And then there are the people who run Python for their automated testing framework, for deployment scripts, for project set-up/boilerplate scripts and so on. They are understandably pissed at things like these because their buildiso.py script might as well run on Python 1.4. Porting their codebase forward is a (sometimes substantial!) effort that doesn’t really yield any benefits. Even the security updates are barely relevant in many such environments. Most of the non-security fixes are practically useless, too: the bulk of the code was written X years ago and embeds all the workarounds that were necessary back then. Nobody’s going to go back and replace the “clunky workarounds” with a “clean, Pythonic” version, seeing how the code not only works fine but is literally correct and sees zero debugging or further development.

      Lots of enterprise people – and this part applies to many things other than Python – still plan for these things like it’s 2001 and their massive orchestration tool is a collection of five POSIX sh scripts and a small C program written fifteen years ago. That is to say, their maintenance budget has only “bugs” and “feature requests” items, and zero time for “keeping up with the hundreds of open source projects that made it feasible to write our SaaS product with less than 200 people in the first place and which have many other SaaS products to keep alive so they’re not gonna stop for us”.

      1. 19

        A point you’re passing over (or at least expressing with some incredulity) is that, sometimes software is allowed to be “Done”. You’re allowed to write software that can reach an end state where it is not only feature-complete, but adding features to it would make it worse. IMO, it is not only possible for someone to do that, but it is good, because the current paradigm is inherently unstable in the same way that capitalism’s “Exponential Growth” concept is. The idea that software can be maintained indefinitely is a fallacy that relies on unlimited input resources (time, money, interest, etc), and the idea that new software grows to replace old is outright incorrect when you see people deliberately hunting out old versions of software with specific traits. (Actually, on the whole, features have been lost over time, but that’s a whole different discussion about the fact that the history of software is not taught, with much of it being bound up in long-dead companies and people).

        For people maintaining such software, why would it make sense to rewrite large swathes of the codebase and run a high risk of introducing bugs in the process, many of them bugs that had already been fixed once? Sure, there’s the “security” aspect of it, and there will always be minor maintenance needed here and there, but rewriting the code to be compatible with non-EOL platforms not only incurs extra weeks or months (or even years) of effort, it invalidates all of the testing that you have accumulated against the current codebase, as well.

        What made me point this out is that you seem to regard this form of software as a negative, or as a liability. But at least half of modern software development seems to be what is effectively treading water, all due to bad decisions about which dependencies to take on, or the sheer insufferable amount of abstraction cost we have accumulated and are still accumulating as an industry. Personally, I envy the people who get to maintain software where there is now very little to do aside from maintenance work.

        1. 10

          You’re allowed to write software that can reach an end state where it is not only feature-complete, but adding features to it would make it worse.

          Doing this requires you to find a language, compiler and/or interpreter, build toolchain, dev tooling, operating system, etc. all of which must be in the “Done” state with hard guarantees. And while you’re free to build your own “Done” software, the key here is that nobody else is obligated to provide you with “Done” software, so you may have to build your own “Done” stack to get it, or pay the going rate for the shrinking number of people who are fluent in languages which are effectively “Done” because the only work happening in those languages these days is maintenance of half-century-old software systems.

          Meanwhile, we already have tons of effectively “Done” software in the wild, in the sense that the people who made it have made a conscious decision to never do any type of upgrades or further work on it, and it’s generally not a good thing – especially when it turns out that, say, a few billion deployed devices worldwide are all vulnerable to a remote code execution bug.

          1. 5

            Not every upgrade has to be backward-incompatible. Perl 5 is a good example.

            1. 6

              Python releases are about as backwards incompatible as Perl releases. I think people just assume every upgrade is bad because of the Python 2 -> 3 upgrade. Worth remembering that realistically nobody tried doing a Perl 5 -> Raku (née Perl 6) migration.

              1. 2

                That’s probably because Raku was such a long time coming, and marketed from the start (at least as far back as I can remember, not being a Perl coder) as an “apocalypse”. I think nobody expected to be able to migrate, so nobody even bothered trying. Python, on the other hand, had compatibility packages like “six” and AFAIK it was always intended to be doable at least to upgrade from 2 to 3 (and it was, for a lot of code, quite doable). But then when people actually tried, the nitty-gritty details caused so much pain (especially in the early days) that they didn’t want to migrate. And of course essential dependencies lagging behind made it all so much more painful: even if your own pure Python code was easy to port, it might be undoable to port a library you’re using.

                So I guess it boils down to expectation management :)
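                To illustrate the kind of bridge “six” provided, here is a hand-rolled sketch of the pattern (not six’s actual source; the `normalize` helper is invented for this example): define version-neutral aliases once, then write everything else against them.

```python
import sys

# A sketch of the compatibility-alias pattern that libraries like "six"
# packaged up: pick the right types once, at import time.
PY2 = sys.version_info[0] == 2

if PY2:
    text_type = unicode  # noqa: F821 -- only defined on Python 2
    binary_type = str
else:
    text_type = str
    binary_type = bytes

def normalize(value):
    """Coerce bytes or text to the native text type on either major version."""
    if isinstance(value, binary_type):
        return value.decode("utf-8")
    return text_type(value)

print(normalize(b"caf\xc3\xa9"))  # prints café on both 2 and 3
```

Code written this way could run unmodified on both interpreters, which is what made the 2-to-3 migration at least theoretically incremental.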

          2. 5

            This is actually why standards are useful, they allow code to outlive any actually-existing platform. The code I have from 20 years ago that still builds and runs without changes usually has a single dependency on POSIX. I’m not running it on IRIX or BSD/OS like the author was, but it still works.

          3. 2

            Not necessarily, I think. One might want to do so in cases where there is external, potentially malicious user input. However, in highly regulated environments, where different parties exchange messages and are liable for their correctness, one can keep their tools without upgrading anything for a long time (or at least until the protocol changes significantly). There is simply no business reason to spend time on upgrading any part of the stack.

          4. 2

            Meanwhile, we already have tons of effectively “Done” software in the wild, in the sense that the people who made it have made a conscious decision to never do any type of upgrades or further work on it, and it’s generally not a good thing

            Good way to carefully misrepresent what I was talking about :)

            1. 1

              It’s more that this really is what “Done” means. My own stance, learned the hard way, is that the only way a piece of software can be “Done” is when it’s no longer used by anyone or anything, anywhere. If it’s in use, it isn’t and can’t be “Done”.

              And the fact that most software that approximates a “Done” state is abandonware, and the problems abandonware tends to cause, is the point.

              1. 1

                I disagree that this is what “Done” means, and I disagree with your implied point that this is in any way “inevitable”.

                1. 1

                  The idea that software can be maintained indefinitely is a fallacy that relies on unlimited input resources

                  That’s what you wrote above. Which implies that your definition of “Done” involves ceasing work on the software at a certain point.

                  My point is that generally you get to choose “no further work will be done” or “people will continue to use it”. Not both. You mention people searching for older versions of software – what do you think a lot of communities do with old software? Many of them continue to maintain that software because they need it to stay working for as long as it’s used, which is incompatible with “Done” status.

                  1. 1

                    And yet if you read a paragraph down from there, you will see

                    “For people maintaining such software,”

        2. 6

          A point you’re passing over (or at least expressing with some incredulity) is that, sometimes software is allowed to be “Done”.

          Oh, I’m passing over it because I prefer to open one can of worms at a time :-D.

          But since you’ve opened it, yeah, I’m with you on this one. There’s a huge chunk of our industry which is now subsisting on bikeshedding existing products because, having run out of useful things to do, it nonetheless needs to do some things in order to keep charging money and to justify its continued existence.

          I don’t think it’s a grand strategy from the top offices, I think it’s a universal affliction that pops up in every enterprise department, sort of like a fungus which grows everywhere, from lowly vegetable gardens to the royal rose gardens, and it’s rooted in self-preservation as much as a narrow vision of growth. Lots of UI shuffling or “improvements” in language standards (cough C++ cough), to name just two offenders, happen simply because without them an entire generation of designers and language consultants and evangelists would find themselves out of a job.

          So a whole bunch of entirely useless changes piggyback on top of a few actually useful things. You still need some useful things, otherwise even adults would point out the emperor’s nakedness and call it a superfluous (or outright bad) release. But the proportion, erm, varies.

          The impact of this fungus is indeed terrible though. If you were to accrue 20 years’ worth of improvement on top of, say, Windows XP, you’d get the best operating system ever made. But people aren’t exactly ecstatic over Windows 11 because it’s not just 20 years’ worth of improvement, and there’s a lot of real innovation (ASLR, application sandboxing) and real “incremental” improvement (good UTF-8 support) mixed with a whole lot of things that are, at best, useless. So what you get is a really bad pile of brown, sticky material, whose only redeeming feature is that there’s a good OpenVMS-ish kernel underneath and it still runs the applications you need. Even that is getting shaky, though – you can’t help but think that, with so many resources being diverted towards the outer layer, the core is probably getting bad, too.

          Personally, I envy the people who get to maintain software where there is now very little to do aside from maintenance work.

          I was one of them for about two years, let me assure you it is entirely unenviable, precisely because of all that stuff above. Even software that only needs to be maintained doesn’t exist and run in a vacuum; you have to keep it chugging along with the rest of the world. It may not need any major new features, but it still needs to be taught a new set of workarounds every other systemd release, for example. And, precisely because there’s no substantial growth in it, the resources you get for it get thinner and thinner every year, because growth has to be squeezed out of it somehow.

          Edit: FWIW, this is actually what the last part of my original message was about. As @ketralnis mentioned in their comment here, just keeping existing software up and running is not at all as simple as it’s made out to be, even if you don’t dig yourself into a hole of unreasonable dependencies.

      2. 8

        Lots of enterprise people – and this part applies to many things other than Python – still plan for these things like it’s 2001 and their massive orchestration tool is a collection of five POSIX sh scripts and a small C program written fifteen years ago.

        The funny thing is that the Python 2.x series was not the bastion of stability and compatibility people like to claim now, as they look back with nostalgia (or possibly just without experience of using Python back then). Idiomatic Python 2.0 and idiomatic Python 2.7 are vastly different languages, and many things that are now well-known and widely-liked/widely-relied-upon features of Python didn’t exist back in 2.0, 2.1, 2.2, etc. And the release notes for the 2.x releases are full of backwards-incompatible changes you had to account for if you were upgrading to newer versions.

        1. 8

          People probably remember Python 2 as the “bastion of stability and compatibility” because Python 2.7 was around and supported for 10 years as the main version of the Python 2 language. Which is pretty “bastion of stability and compatibility”-like. I know that wasn’t the intention when 3.0 was released, but it’s what ended up happening, and people liked it.

          1. 4

            So, obviously, the thing to do is to trick the Python core team into releasing Python 4, so that we get another decade of stability for Python 3.

      3. 5

        Is it really a problem in practice, though? If the small tool is actually large enough that porting would take too much time, then there are precompiled releases going back forever, docker images going back to 3.2 at least, and there’s pyenv. That seems like an ok situation to me. Anyone requiring the old version still has it available and just has to think: should I spend effort to upgrade or spend effort to install an older version?

        1. 25

          Yes, it really is a problem in practice to try to keep your older unchanging code running. It’s becoming increasingly difficult to opt out of the software update treadmill, even (especially!) on things that don’t ostensibly need updating at all.

          Python 3.6 depends on libssl which depends on glibc, the old version of which isn’t packaged for Ubuntu 16.04. But the security update for glibc’s latest sscanf vulnerability that lets remote attackers shoot cream cheese frosting out of your CD-ROM isn’t available on 16.04 and 17 dropped support for your 32-bit processor. And your CD-ROM.

          Sadly you can’t just opt out of the treadmill anymore. The “kids these days” expect constant connectivity, permanent maintenance, and instant minor version upgrades. They leave Github issues on your projects titled “This COBOL parser hasn’t had an update in several minutes, is it abandoned?” They wrap their 4-line scripts with Docker images from dockerhub, and it phones home to their favourite analytics service that crashes if it’s not available. Even if you don’t depend on all of those hosted services (and that’s harder than you think with npm relying on github, apt relying on launchpad, etc), any internet connectivity will drag you in via vital security updates.

          1. 5

            I’m not sure I buy the argument that “kids these days” have anything to do with Ubuntu’s decisions on how long to support their OS, and for what platforms.

            I’d personally jump to one of Debian (for years of few changes), Alpine (if it met my needs) or OpenSUSE Tumbleweed (if rolling was acceptable). I was surprised by the last, but Tumbleweed is actually a pretty solid experience if you’re ok with rolling updates. If not, Debian will cover you for another few years at least.

            If you need an install with a CD drive, maybe https://netboot.xyz/ could be helpful. There are a variety of ways to boot the tool, even an existing grub.

            1. 11

              Maybe it didn’t come through, but I meant almost all of that to be hyperbole. The only factual bit is that it really is harder to keep unchanging code running than you’d think, speaking as somebody who spends a lot of time trying to actually do that. It’s easy to “why don’t you just” it, but harder to do in real life.

              Plus the cream cheese frosting. That’s obviously 100% true.

              1. 4

                Plus the cream cheese frosting. That’s obviously 100% true.

                In case anyone is wondering, this is really legit! Back in 2015 or so I used to keep an Ubuntu honeypot machine in the office for this precise reason – it was infected with the cream cheese squirting malware and a bunch of crypto miners, which kept the CPU at 100% and, thus, kept the cream cheese hot. It was oddly satisfying to know that the company was basically paying for (part of) my lunch in such a contorted way, as I only had to supply the cream cheese.

          2. 1

            I was asked a while ago to do some minor improvements to a webshop system that had been working mostly fine for the customer. When I looked into it, it turned out to be a whole pile of custom code which was built on an ancient version of CakePHP, which only supported PHP versions up to 5.3. Of course PHP 5 had been deprecated for a while and was slated to be dropped by the (shared) hosting provider they were using.

            So I cautioned that their site would go down pretty soon, and indeed it did. I tried upgrading CakePHP, but eventually got stuck, not only because the code of the webshop was an absolute dumpster fire (without any tests…), but also because CakePHP made so many incompatible changes in a major release (their model layer for db storage was rewritten from scratch, as I understand it) that updating it was basically a rewrite.

            So after several days of heavy coding, I decided that it was basically an impossible task and had to tell the customer that it would be smarter to get the site rebuilt from scratch.

        2. 3

          It depends on how the whole thing is laid out. I’m a little out of my element here but I knew some folks who were wrestling with a humongous Python codebase in the second category and they weren’t exactly happy about how simple it was.

          For example, lots of these codebases see continuous, but low-key development. You have a test suite for like forty products, spanning five or six firmware versions. You add support for maybe another one every year, and maybe once every couple of years you add a major new feature to the testing framework itself. So it’s not just a matter of deploying a legacy application that’s completely untouched, you also have to support a complete development environment, even if it’s just to add fifty lines of boilerplate and maybe one or two original test cases a year. Thing is, shipping a non-trivial Docker setup that interacts with the outside world a lot to QA automation developers who are not Linux experts is just… not always a very productive affair. It’s not that they can’t use Docker and don’t want to learn it, it’s just that non-trivial setups break a lot, in non-obvious ways, and their end users aren’t always equipped to un-break them.

          There’s also the matter of dependencies. These things have hundreds of them and there’s a surprising amount of impedance matching to do, not only between the “main package” and its dependencies, but also between dependencies and libraries on the host, for example. It really doesn’t help that Python distribution/build/deployment tools are the way they are.

          I guess what I’m saying is it doesn’t have to be a problem in practice, but it is a hole that’s uncannily easy to dig yourself into.

      4. 4

        It’s also hard to change code from any version of Python, just because it’s so permissive (which is part of the appeal, of course). Good luck understanding or refactoring code – especially code authored by someone else – without type hinting, tests, or basic error-level checks by a linter (which doesn’t even ship out of the box).
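        As a small sketch of what even minimal hints buy a maintainer (the `retry_delays` function here is invented for illustration; mypy and pyright are separate third-party checkers, not part of CPython):

```python
# With hints, a type checker can flag the commented-out bad call below
# before it ever runs; without them, it fails only at runtime, possibly
# deep inside someone else's code.
def retry_delays(base: float, attempts: int) -> list[float]:
    """Exponential backoff delays: base, 2*base, 4*base, ..."""
    return [base * (2 ** i) for i in range(attempts)]

print(retry_delays(0.5, 3))  # [0.5, 1.0, 2.0]
# retry_delays("0.5", 3)     # checker error: str is not float
```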

    2. 12

      I don’t program in Python on the regular (I’ve contributed to a couple of projects) so most of my interaction with Python as a language comes from when they wanna EOL their versions, like when Python 2 died and with it all the obscure rando throwaway (but vital in production) GNU Image Manipulation Program plugins I’ve picked up from rando forum posters over the years.

      (And also whenever trying to install a program written in Python, which is always hairy with venvs and such.)

      I’m sure my own pet languages are just as brittle and Python is just unusually responsible about it, about EOL announcements etc, but that’s the main interaction I have with Python qua Python. So whenever people are talking about how easy and painless Python is, I just smile and say “Sure” 🧕

      All in good fun, I get that this happens with all languages. Just such a hassle every time.

      1. 6

        All in good fun, I get that this happens with all languages. Just such a hassle every time.

        As a counterpoint, AFAIK, most programs that are compiled in Java 1.x will run fine on the latest JVM.

        1. 7

          But could you compile it again?

          The equivalent to that in Python would be using one of the myriad python packaging tools to generate an executable bundle, and those still work. Granted, your choice with an interpreted language is either “bundle your VM with your compilation” or “compile on each run”, but that’s not unique to Python.
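          To make that concrete, here is a sketch using stdlib `zipapp` (one of those “myriad” tools, shipped with CPython since 3.5; the `myapp` name and contents are invented for this example):

```python
import pathlib
import subprocess
import sys
import tempfile
import zipapp

# Build a throwaway package directory, then pack it into a single
# runnable .pyz archive with the stdlib zipapp module.
workdir = pathlib.Path(tempfile.mkdtemp())
app = workdir / "myapp"
app.mkdir()
(app / "__main__.py").write_text('print("hello from the bundle")\n')

target = workdir / "myapp.pyz"
zipapp.create_archive(str(app), target=str(target))

# The archive is "compiled" only in the loose sense: one file, runnable
# by any compatible interpreter with `python myapp.pyz`.
result = subprocess.run(
    [sys.executable, str(target)], capture_output=True, text=True
)
print(result.stdout.strip())
```

The interpreter itself is still a runtime dependency, which is exactly the “bundle your VM or compile on each run” trade-off described above.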

          1. 11

            Usually, somewhat to my surprise, yes! The most common case of actually existing Java code breaking that I know of is use of some stuff that was officially never supported (but was once semi-common) from the internal sun.* namespace. That was eventually removed (or closed off somehow, not sure of the details).

            I recently resurrected some Java code from my PhD that I wrote in 2009 and it just works! Compiles totally fine on a modern JDK. I’m not a huge Java fan otherwise, but that was very satisfying.

          2. 9

            What is unique to Python, of the languages I’ve used in anger, is that it is (in my experience) exceedingly eager to make backwards-incompatible deprecations with point releases. This was the case with Python 2, and initially it seemed like the subtext around Python 3 was that it wouldn’t be the case any longer, because the language had learned its lessons. And, yet.

            At this point, it feels like so many aspects of how the Python ecosystem works are basically just shouting at you to not use it as the implementation language for any reasonably large-scale project, and at this point I’m inclined to take those shouts at their word.

            1. 3

              Minor releases I imagine? I think it’s disingenuous to compare the Py2 -> Py 3.0 breakage to anything that has happened since then (source: I had to port a very big project to Python 3, it was a massive pain, even if it did help reveal/clarify loads of bugs). If your contention is that 3.6 -> 3.7 shouldn’t include any breaking changes, just shift the dot to the right once.

              I think that the main Python compat issue I’ve seen that hasn’t been purely a “bugfix” has been the async keyword. Going forward, Python has a new parser that allows for “soft keywords” (basically: something is a keyword only under very specific syntax), which helps avoid that kind of breakage.

              My general feeling about breakage for a lot of stuff has been that it leans towards fixing initial design mistakes. The classic “str”/“unicode” thing in Python2 was basically wrong. It’s not universally this, but I think that the community has learned this and bends over backwards to support some stuff.

              I have found that, for example, Django and related projects have done an extremely good job of offering onboarding ramps for upgrading, relative to some stuff I’ve seen (libraries in an eternal “we’re designing a new, entirely incompatible version of the API to make things work better” state instead of doing things gradually). But there is definitely a feeling of incompleteness and of not really being able to trust that an old Python will just keep working forever (short of vendoring in all your dependencies, I suppose).

              Standard libraries really help on this front, and honestly I wish ecosystems leaned on them more, rather than less. That way people can be less reliant on a whole other ecosystem.
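              On the str/unicode point, a quick sketch of what the Python 3 fix looks like in practice (pure stdlib, nothing assumed beyond Python 3):

```python
# Python 3's fix for the 2.x str/unicode design mistake: bytes and text
# never silently coerce, so encoding bugs surface at the boundary
# instead of deep inside string handling.
raw = "caf\u00e9".encode("utf-8")        # text -> bytes, explicitly
assert raw == b"caf\xc3\xa9"
assert raw.decode("utf-8") == "caf\u00e9"

# In Python 2, b"x" + u"y" triggered an implicit ASCII decode that blew
# up only on non-ASCII data; in 3, mixing is an immediate TypeError.
try:
    _ = b"bytes" + "text"
except TypeError:
    print("mixing bytes and str raises TypeError")
```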

              1. 3

                  Going forward, Python has a new parser that allows for “soft keywords” (basically: something is a keyword only under very specific syntax), which helps avoid that kind of breakage.

                async was a soft keyword in 3.6. They deliberately moved it to be a hard keyword in 3.7, which broke tons of stuff. :-/
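                  You can check this on whatever interpreter you have handy (the `compiles` helper is invented for this sketch):

```python
# "async" went from soft keyword (legal identifier through 3.6) to hard
# keyword in 3.7, so code like `async = True` stopped even compiling.
def compiles(source):
    """Return True if `source` is syntactically valid for this interpreter."""
    try:
        compile(source, "<check>", "exec")
        return True
    except SyntaxError:
        return False

print(compiles("x = 1"))         # True everywhere
print(compiles("async = True"))  # True on <=3.6, False on >=3.7
```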

          3. 2

            You can also instantiate these old classes in the new JVMs, and they work fine. This would be like being able to call Python 2 libraries in Python 3.

        2. 2

          Oh, wow! I had such a hard time getting java programs to run back in the day before Swing was free but now there’s such a treasure trove of old apps.

    3. 7

      I wish there was a programming language that had a stable version that only changed for security reasons.

      1. 6

        If you avoid undocumented APIs, Java code has amazing longevity. It’s not quite what you’re asking for (the language and the runtime are both evolving) and sometimes “avoid undocumented APIs” is harder than it should be because innocent-looking dependencies might be doing funny business under the covers. But vanilla Java code compiled 20 years ago still runs perfectly well today.

        Libraries are where it starts to get tricky, though. “Only changes for security reasons” can look pretty similar to, “Only works on obsolete OS versions and hardware that’s no longer being manufactured” for some kinds of libraries.

      2. 6

        Common Lisp. Largely because the standards committee came up with a quite decent base specification, then packed up and turned the lights off.

        1. 2

          I like programming in common lisp because I know that if I find a snippet of code from 20 years ago, there’s very little chance it won’t work today. And more importantly, that the code I wrote 10 years ago will still work in 10 years (unless it interacts with something outside (like a C library) that’s now out of date).

          1. 3

            Better yet, the code I write today is guaranteed to work if I can time travel back 30 years!

      3. 4

        C and FORTRAN still have good support for ancient codebases.

      4. 3

        So maybe an LTS? Nobody is going to support a project indefinitely without financial support, so the best you can get is extension of the lifecycle.

        1. 3

          And companies like Redhat will happily sell you that and use your money to pay developers to backport and test fixes. The system works!

      5. 1

        JavaScript does this. Node.js doesn’t, and browsers don’t (though they’re reasonably close), and the ecosystem definitely doesn’t. But the core language from TC39 does.

    4. 6

      You might be better off trying to figure out why people aren’t upgrading from Python2.

      1. 19

        Oh, that’s easy, we’re still in the Python 2 hangover.

        Godspeed to Python 3.6, IMO it was the first Python 3.x that had enough weight, new good features, and ported libraries to actually make a case to win people over from Python 2.x. And it literally just hit EOL.

        I think the above comment as regards scripting vs applications is half of it, but the other half is that purely internal batch/data science/etc jobs and platforms also didn’t have a strong motivation to switch. If 2.7 was working and porting everything sucked (and it totally sucked, I did it for a team once) then why bother, especially if the benefits weren’t obvious.

        And Python 3.6 was when it was obvious enough to actually take the time to do that work. Which, in a large company, takes forever to get everybody to start moving over, let alone completely actually move over. And it came out 5 years ago. Yes, it can take that long, especially when you consider it took a year or two just to get people into the idea that things were going to be okay now.

        (edit: 5 years ago)

        1. 5

          Companies will keep using Python2 as long as some companies like RedHat are supporting it…

          1. 3

            Are redhat still supporting it? I don’t think it’s in rhel8. It is in rhel7 which is still in some kind of support but I’d have to check whether that was extended or not

    5. 4

      I feel like you can learn a lot about people’s use-cases for languages when you hear how they react to being asked to upgrade.

      Python is my primary language, and I don’t have a huge/complex codebase to worry about migrating, so I’m the kind of person who is frustrated that not everyone’s on the bleeding edge (maybe like the author).

      But if you just use python occasionally (maybe for one-off scripts you don’t really want to keep updated) or you do have a huge/complex codebase, you may be on the other side of the issue wishing for longer support cycles.

      I think in this situation, 5 years is pretty dang good, plus the extra few years you can get from RHEL/Ubuntu if really pressed. If that still feels too fast, I think the problem lies elsewhere, like no culture of maintenance, too many dependencies, too high turnover, bad testing practices, etc., not too short support cycles.

    6. 4

      Sometimes I really think that Python should have stayed a teaching language.

    7. 3

      Python 3.8 still works even on older Windows 7.

      What reason is there to stay on 3.6?

      1. 13

        I have a project I inherited that doesn’t run on 3.7 and I don’t want to figure out why. I assume the problem is the dependencies use async as a variable name, but I don’t want to figure out how to fix that.

      2. 4

        On larger projects you end up with enough dependencies that you get stuck in a weird situation because of it. (For us, we had tried upgrading Celery multiple times through multiple releases, ended up with game-breaking bugs each time, and ended up just pinning it for a while, meaning we were stuck on Python 3.6 for a long time.)

        I had tried to debug the Celery stuff to send the fixes upstream (and I did send up a timezone fix!), but the main thing is just that because of dependency graphs on larger projects you can end up getting a bit stuck, even if your own code works A-OK.

      3. 2

        I have a project where the packaging system (that generates an installer for the project, providing the exact python env to the user) only works on a specific python version.

        I’m sure I could spend hours getting it to work again, like I did for the current version. Or I could just stay on Python 3.6. What reason is there to upgrade?

        1. 2

          I would make the effort.

          It should be far easier to do this incrementally than find that everything is broken once you fall out of the support train.

    8. 3

      It’s that time of the year again, huh.

    9. 2

      On RHEL 8, has anyone made sense of the new dnf modules stuff? There is a python39 package but it doesn’t seem consistent with the way it is done for other modules like ruby and subversion. When I’ve attempted to upgrade to newer modules, I’ve invariably got things into a complete mess and all the necessary downstream dependencies from, e.g. EPEL end up being missing. It looks like it may work fine if you’re only hosting your own web service with clearly defined dependencies but we’ve got a hundred desktop systems running all sorts of stuff.

      1. 1

        This is my experience as well. The base OS includes Python 3.6 which works, and you can get 3.9 in a module but it’s not well integrated into the rest of the system. You could just as well run a self-compiled Python 3.9 or run your Python applications in containers.

    10. 1

      I idly noticed python36 in use for a GitHub actions script I maintain the other day. I better go and see if their runners offer a newer one.