Cue dozens of commenters sharing their experiences that contradict or support the author’s thesis because, lo and behold, everything is always more nuanced than it seems.
This is interesting and well written. I’m not understanding the reason for the middle-tier request services vs a caching layer. Is batching reads to a DB more performant than serving individual reads from memory with something like Redis?
its release philosophy is supposed to avoid what I call “the problem with Python”: your code stops working if you don’t actively keep up with the latest version of the language.
This is a very real problem with the Python and Node ecosystems.
I’m going to be grumpy and ask what this “problem” is supposed to mean, exactly. I can deploy, say, a Python application on a particular version of Python and a particular operating system today, and then walk away for years, and as long as I pay the hosting bill there is no technical reason why it would suddenly stop working. There’s no secret kill switch in Python that will say “ah-ha, it’s been too long since we made you upgrade, now the interpreter will refuse to start!”
So what people always actually mean when they say things like this is that they want to keep actively developing their code but have everybody else stand still forever, so that they can permanently avoid platform upgrades but also the language and ecosystem will never leave them behind. So the real “problem” is that one day nobody else will be willing to provide further bug and security fixes (free or perhaps even paid) for the specific combination of language/libraries/operating system I initially chose to build my app with, and on that day I have to choose between upgrading to a new combination, or doing that maintenance myself, or going without such maintenance entirely.
And although I’ve seen a lot of claims, I’ve seen no evidence as yet that Rust has actually solved this problem — the editions system is still young enough that the Rust community has not yet begun to feel the pain and cost of what it committed them to. “Stability without stagnation” is a nice slogan but historically has ranged from extremely difficult to downright impossible to achieve.
“Stability without stagnation” is a nice slogan but historically has ranged from extremely difficult to downright impossible to achieve.
You just don’t make breaking changes. Add new APIs but don’t take away old ones. Go 1 has been stable for 10 years now. That means there are some ugly APIs in the standard library and some otherwise redundant bits, but it’s been fine. It hasn’t stopped them from adding major new features. Similarly, JavaScript in the browser only rolls forward and is great. Again, some APIs suck, but you can just ignore them for the most part and use the good versions. I’m not as familiar with the Linux kernel, but my impression is that Linus’s rule there is that you don’t break userspace.
Most breaking changes are gratuitous. They make things a little nicer for the core developers and shove the work of repair out to everyone who uses their work. I understand why it happens, but I reserve the right to be grumpy about it, especially because I have the experience of working in ecosystems (Go, the browser) that don’t break, so I am very annoyed by ecosystems that do (NPM, Python).
I’ll be up-front. My stance on breaking changes is that there are two types of projects:
Those which have decided to accept they are an inevitable fact of software development, and so have committed to the process of handling them, and
Those which haven’t yet.
And remember that we are talking not just about core language/standard library, but also ecosystems. Go has already effectively broken on this — the “rename your module to include /v2” hack is an admission of defeat, and allows the previous version to lapse into a non-maintained status. Which in turn means that sooner or later you will have to make the choice I talked about (upgrade, or do maintenance yourself, or go without maintenance).
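For readers unfamiliar with the mechanism being called a hack here: a Go module that makes breaking changes is expected to change its import path, so v1 consumers keep resolving the old code forever while v2 is, as far as the ecosystem is concerned, a different module. With a hypothetical module name, the change looks like this:

```
// go.mod before the breaking release
module github.com/example/mylib

// go.mod after it — a distinct module with a distinct import path
module github.com/example/mylib/v2
```

Existing importers of github.com/example/mylib are never moved forward automatically, which is exactly the “previous version lapses into non-maintained status” outcome described above.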
And Rust has had breaking changes built into the ecosystem from the beginning. The Cargo/crates ecosystem isn’t built around every crate having to maintain eternal backwards-compatibility, it’s built on semantic versioning. If I publish version 1.0 of a crate today, I can publish 2.0 with breaking changes any time I want and stop maintaining the 1.x series, leaving users of it stranded.
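Concretely, it’s Cargo’s default requirement syntax that leaves 1.x users behind: a dependency line like the one below (crate name hypothetical) is caret semantics, interpreted as >=1.0.0, <2.0.0, so a 2.0 release with breaking changes never reaches existing users automatically, and if the author stops maintaining the 1.x series, those users get no further fixes:

```toml
[dependencies]
# "1.0" means any compatible 1.x release, never 2.0
somecrate = "1.0"
```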
So even if Rust editions succeed as a way of preserving compatibility of the core language with no ecosystem splits and no “stagnation”, which I strongly doubt in the long term, the ecosystem has already given up, and in fact gave up basically immediately. It has “the Python problem” already, and the only real option is to learn how to manage and adapt to change, not to insist that change is never permitted.
(and of course my own experience is that the “problem” is wildly exaggerated compared to how it tends to impact projects in actual practice, but that’s another debate)
I think it’s hard to talk about this rigorously because there’s definitely some selection bias at play — we don’t have information about all the internal projects out there that might be bogged down by breaking changes between interpreter versions, and they’re likely motivated by very different incentives than the ones that govern open source projects — and there’s likely survivorship bias at play too, in that we don’t hear about the projects that got burnt out on the maintenance burden those breaking changes induce.
My anecdotal evidence is that I’ve worked at places with numerous Python projects bound to different, sometimes quite old, interpreter versions, and there just aren’t enough person-hours available to keep them all up to date, and updating them in the cases where it was truly necessary made for some real hassle. Even if you chalk that up to bad resource management, it’s still a pretty common situation for an organization to find itself in, and it’s reasonable to expect your tools to not punish you for having less than perfect operational discipline. In light of that, I think understanding it in this binary frame of either making breaking changes or not isn’t the most fruitful approach, because as you note it’s not realistic to expect that they never happen. But when they do happen, they cost, and I don’t think it’s unreasonable for an organization to weigh that total cost against their resources and decide against investing in the Python ecosystem. It’s not unreasonable to make the opposite choice either! I just don’t think that cost is trivial.
Even if you chalk that up to bad resource management, it’s still a pretty common situation for an organization to find itself in and it’s reasonable to expect your tools to not punish you for having less than perfect operational discipline.
Imagine 20 years ago saying this about a project that suffered because they lost some crucial files and it turned out they weren’t using any kind of version control system.
Because that’s basically how I feel about it. Regularly keeping dependencies, including language tooling/platform, up-to-date, needs to become table stakes for software-producing entities the way that version control has. I’ve seen this as a theme now at four different companies across more than a decade, and the solution is never to switch and pray that the next platform won’t change. The solution is always to make change an expected part of the process. It can be done, it can be done in a way that minimizes the overhead, and it produces much better results. I know because I have done it.
Because that’s basically how I feel about it. Regularly keeping dependencies, including language tooling/platform, up-to-date, needs to become table stakes for software-producing entities the way that version control has.
I don’t believe this is what’s being disputed: it’s that this being a fact is precisely why it’s important for the platform to facilitate ease of maintenance to the best of its ability in that regard. Doing so allows even under-resourced teams to stay on top of the upgrade treadmill, which is more of my point in the bit you quoted: tools that induce less overhead are more resilient to the practical exigencies that organizations face. I guess we’ll have to agree to disagree about where the Python ecosystem sits on that spectrum.
This sounds like the “don’t write bugs” school of thought to me. Yes, ideally anything that’s an operational concern will get ongoing maintenance. In the real world…
More anecdata: my coworker built a small Python scraper that runs as a cron job to download some stuff from the web and upload it to an S3 bucket. My coworker left and I inherited the project. The cron job was no longer a high priority for the company, but we didn’t want to shut it off either. I couldn’t get it to run on my machine for a while because of the Python version problem. Eventually I got to the point where I could get it to run by using Python 3.6, IIRC, so that’s what it’s using to this day. Ideally, if I had time and resources, I could have figured out why it was stuck and unstuck it. (Something to do with Numpy, I think?) But things aren’t always ideal.
If someone has to have discipline and try to stick closer to the “don’t write bugs” school, who should it be: language creators or end developers? It’s easy for me to say language creators, but there are also more of us (end developers) than them (language creators). :-) ISTM that being an upstream brings a lot of responsibility, and one of those should be the knowledge that your choices multiply out by all the people depending on you: if you impose a 1 hour upgrade burden on the 1 million teams who depend on you, that’s 1 million hours, etc.
Generalizing from anecdata and my own experience, the killer is any amount of falling behind. If something is not being actively maintained with dependency updates on at least a monthly and ideally a weekly cadence, it is a time bomb. In any language, on any platform. Because the longer you go without updating things, the more the pending updates pile up and the more work there will be to do once you do finally sit down and update (which for many projects, unfortunately, tends to be only when they are absolutely forced to start doing updates and not a moment sooner).
At my last employer I put in a lot of work on making the (Python) dependency management workflow as solid as I could manage with only the standard packaging tooling (which I believe you may have read about). But the other part of that was setting up dependabot to file PRs for all updates, not just security, to do so on a weekly basis, and to automate creation of Jira tickets every Monday to tell the team that owned a repository to go look at and apply their dependabot PRs. When you’re doing it on that kind of cadence it averages very little time to review and apply the updates, you find out immediately from CI on the dependabot PRs if something does have a breaking change so you can scope out the work to deal with it right then and there, and you never wind up in a situation where applying the one critical update you actually cared about takes weeks or months because of how much other stuff you let pile up in the meantime.
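The weekly-cadence part of that setup can be sketched in Dependabot’s configuration format (the ecosystem and repository layout here are assumptions, and the Jira-ticket automation lived outside this file):

```yaml
# .github/dependabot.yml
version: 2
updates:
  - package-ecosystem: "pip"   # watch all Python dependency updates, not just security advisories
    directory: "/"             # location of the requirements manifest, assumed repo root
    schedule:
      interval: "weekly"
      day: "monday"            # PRs land in time for the Monday review ticket
```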
Meanwhile I still don’t think Python or its ecosystem are uniquely bad in terms of breaking changes. I also don’t think Go or Rust are anywhere near as good as the claims made for them. And the fact that this thread went so quickly from absolutist “no breaking changes ever” claims to basically people’s personal opinions that one language’s or ecosystem’s breaking changes are justified and tolerable while another’s aren’t really shows that the initial framing was bad and was more or less flamebait, and probably should not be used again.
I agree that for a Python or Node project it is recommended to set up dependabot to keep up to date or else you have a ticking time bomb. However, a) that isn’t always practical and b) it doesn’t have to be like that. I routinely leave my Go projects unattended for years at a time, come back, upgrade the dependencies, and have zero problems with it.
Here is the full Terminal output of me getting it to build again with the most recent version of Go:
(Fri, May 20 08:56:09 PM) (master|✔)
$ go build .
go: cannot find main module, but found Gopkg.lock in /var/folders/p7/jc4qc9n94r3f6ylg0ssh1rq00000gs/T/tmp.S6ZYg4FX/track-changes
to create a module there, run:
go mod init
# status: 1 #
(Fri, May 20 08:56:27 PM) (master|✔)
$ go mod init github.com/baltimore-sun-data/track-changes
go: creating new go.mod: module github.com/baltimore-sun-data/track-changes
go: copying requirements from Gopkg.lock
go: to add module requirements and sums:
go mod tidy
(Fri, May 20 08:57:00 PM) (master|…)
$ go mod tidy -v
go: finding module for package github.com/stretchr/testify/assert
go: finding module for package golang.org/x/text/unicode/norm
go: finding module for package golang.org/x/text/secure/bidirule
go: finding module for package golang.org/x/text/unicode/bidi
go: finding module for package golang.org/x/sync/errgroup
go: finding module for package github.com/stretchr/testify/suite
go: downloading golang.org/x/sync v0.0.0-20220513210516-0976fa681c29
go: found github.com/stretchr/testify/assert in github.com/stretchr/testify v1.7.1
go: found github.com/stretchr/testify/suite in github.com/stretchr/testify v1.7.1
go: found golang.org/x/text/secure/bidirule in golang.org/x/text v0.3.7
go: found golang.org/x/text/unicode/bidi in golang.org/x/text v0.3.7
go: found golang.org/x/text/unicode/norm in golang.org/x/text v0.3.7
go: found golang.org/x/sync/errgroup in golang.org/x/sync v0.0.0-20220513210516-0976fa681c29
(Fri, May 20 08:57:10 PM) (master|…)
$ go build .
(Fri, May 20 08:57:18 PM) (master|…)
$
As you can see, it took about a minute for me to get it building again. Note that this package predates the introduction of Go modules.
Let’s upgrade some packages:
(Fri, May 20 09:02:07 PM) (master|…)
$ go get -v -u ./...
go: downloading golang.org/x/net v0.0.0-20220520000938-2e3eb7b945c2
go: downloading cloud.google.com/go v0.101.1
go: downloading gopkg.in/Iwark/spreadsheet.v2 v2.0.0-20220412131121-41eea1483964
go: upgraded cloud.google.com/go v0.16.0 => v0.100.2
go: added cloud.google.com/go/compute v1.6.1
go: upgraded github.com/ChimeraCoder/anaconda v1.0.0 => v2.0.0+incompatible
go: upgraded github.com/andybalholm/cascadia v0.0.0-20161224141413-349dd0209470 => v1.3.1
go: upgraded github.com/garyburd/go-oauth v0.0.0-20171004151416-4cff9ef7b700 => v0.0.0-20180319155456-bca2e7f09a17
go: upgraded github.com/go-chi/chi v3.3.1+incompatible => v4.1.2+incompatible
go: upgraded github.com/golang/protobuf v0.0.0-20171113180720-1e59b77b52bf => v1.5.2
go: upgraded github.com/pkg/errors v0.8.0 => v0.9.1
go: upgraded golang.org/x/net v0.0.0-20171107184841-a337091b0525 => v0.0.0-20220520000938-2e3eb7b945c2
go: upgraded golang.org/x/oauth2 v0.0.0-20171117235251-f95fa95eaa93 => v0.0.0-20220411215720-9780585627b5
go: upgraded google.golang.org/appengine v1.0.0 => v1.6.7
go: added google.golang.org/protobuf v1.28.0
go: upgraded gopkg.in/Iwark/spreadsheet.v2 v2.0.0-20171026120407-29680c88e31d => v2.0.0-20220412131121-41eea1483964
(Fri, May 20 09:02:52 PM) (master|…)
$ go build .
# github.com/baltimore-sun-data/track-changes
./handler.go:28:14: undefined: middleware.DefaultCompress
# status: 2 #
(Fri, May 20 09:02:58 PM) (master|…)
$ go doc middleware
package middleware // import "github.com/go-chi/chi/middleware"
// snip
(Fri, May 20 09:03:12 PM) (master|…)
$ go doc middleware.Compress
package middleware // import "github.com/go-chi/chi/middleware"
func Compress(level int, types ...string) func(next http.Handler) http.Handler
Compress is a middleware that compresses response body of a given content
types to a data format based on Accept-Encoding request header. It uses a
given compression level.
NOTE: make sure to set the Content-Type header on your response otherwise
this middleware will not compress the response body. For ex, in your handler
you should set w.Header().Set("Content-Type",
http.DetectContentType(yourBody)) or set it manually.
Passing a compression level of 5 is sensible value
(Fri, May 20 09:03:32 PM) (master|…)
$ subl .
(Fri, May 20 09:04:12 PM) (master|…)
$ go build .
(Fri, May 20 09:04:59 PM) (master|✚1…)
$
Took about 3 minutes to upgrade the packages and fix the broken dependency (they renamed a middleware). Bear in mind that the upgrade I did deliberately did not try to go past breaking semantic-version changes in the dependencies. It would probably take another half hour or more if I wanted to chase down whatever breaking changes happened there.
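Based on the go doc output above, the fix was presumably a one-line change along these lines (the exact call site in handler.go is a guess):

```
-	r.Use(middleware.DefaultCompress)
+	r.Use(middleware.Compress(5))
```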
Suffice it to say, yarn cannot even install its packages, and the last time I tried this stunt a couple of years ago, I got past that and then ran into a problem with webpack that I couldn’t easily solve.
Go is just a much more stable ecosystem than Node or Python. It’s not as stable as say browser JS, where one can reasonably expect working code to work until civilization collapses, but it’s fairly stable. And it’s not magic. If there were a communal expectation of this level of stability, it could exist everywhere. It’s a social value to keep things working in the Go ecosystem, and it’s not elsewhere.
The other day I returned to a Python package I hadn’t touched in about a year. The actual Python dependencies portion of updating it was done in a few minutes: I updated the supported versions of Python to those currently supported by upstream, did the same for the supported versions of Django, and then had to change a whopping four lines of code, all in a unit-test file, to deal with a deprecation in Django that had finally been removed.
The entire remainder of getting it ready for a new release was fighting with CI — updating from v1 to v3 of the GitHub Actions tasks for Python, which I mostly did by copy/pasting from a package by someone else who I trust.
I mention this because while you have anecdotes about Go projects updating more or less seamlessly, I have just as many about Python projects, and other people in this thread have anecdotes about Go projects breaking in ways they found annoying.
All of which is to say that you should stop trying to extrapolate from your anecdata to “Go is stable and values stability, while Python is not and does not”, because it just ends up looking silly when other people show up with their anecdata. Python is not uniquely “unstable” and neither Go nor Rust are uniquely “stable”. At best, some projects are sometimes lucky enough that they can give the false impression of stability in the language/ecosystem, despite the fact that the language/ecosystem is always moving on. And that’s why I have said, over and over, that the thing to do is embrace and accept change and build processes around it. Otherwise, you’re likely to wake up one day to find out that what you thought was stable and unchanging was neither, and that you are in for a lot of trouble.
When I get a chance, I thought of an equivalently old Python package for me to try updating. I’ll try to do it this weekend or next week.
But I just don’t buy this:
Python is not uniquely “unstable” and neither Go nor Rust are uniquely “stable”.
I don’t have experience with Rust, so I have no idea there. I do have years of working in Python, JavaScript, and Go and my experience is uniform: Python and JavaScript routinely have problems that make installing/updating take a workday, and Go does not. I’ve already given a lot of concrete examples, and I’m sure I could dig through my git history and find more. At a certain point, all I can say is this is my experience and if it’s not yours, great.
Okay, I tried this with a Datasette project from around the same time. (It was actually deployed again in early 2020, so it’s a bit fresher than the Go project, but whatever, close enough.) Again, Node didn’t work. I think the issue with that is that libsass is dead and doesn’t compile anymore, so you need to switch to dart-sass instead. In all likelihood, the fastest solution to the Node issues is to drop all of the dependencies and just start over from scratch with only my user code, since the dependencies were just there to build a Vue project.
On the Python side, it wouldn’t work with Python 3.9, but when I used Python 3.7, I got it to run again. Terminal output is below. It only took 15 minutes to get it going, but compare this to Go, which works with the current version of Go even though the package predates modules (which caused a lot of breakage not covered by the Go 1 guarantee) and I got the dependencies upgraded in a total of 5 minutes. By contrast, the Python installations all took long enough that they break flow: since the installation is going to take a while, I switch away from my Terminal, which is chance for me to get distracted and lose my place. I think this project did pretty well because it used your recommended pattern of having a requirements-freeze.txt file and it had a Bash script to automate the actual install commands. But the error when UVLoop was broken was pretty demoralizing: I have no idea how I would fix it, so getting up to Python 3.9 or 3.10 would probably involve a lot more Googling than I’m willing to do for an internet comments example. Again, the simplest fix might be to just blow away what I have now and start from scratch. I think Datasette has been relatively stable in spite of being <1.0, so I suspect that it wouldn’t be that hard to get it working, but again, it’s more than I want to do for an example. A nice thing about Go is that most dependencies don’t use C, so when something does go wrong, like that middleware that was broken in the other project, you aren’t confronted with errors in a language you don’t know using a build system you don’t understand. In general, it’s just much less intimidating to get a Go project back up to speed.
So this is fairly reflective of my lived experience: Node projects, especially those that use Webpack, break in ways that are more or less unfixable and need to be restarted from scratch; Python projects can be kept running if you are willing to pin old versions of Python, but they give a lot of scary compiler errors and don’t have clear paths forward; Go projects can typically be upgraded by typing go get -u ./... and maybe reading some release notes somewhere. Go isn’t perfect and there are still problems, but the quantity of problems is so much lower that it creates a qualitative difference in feeling.
$ gh repo clone baltimore-sun-data/salaries-datasette
Cloning into 'salaries-datasette'...
remote: Enumerating objects: 1017, done.
remote: Counting objects: 100% (94/94), done.
remote: Compressing objects: 100% (72/72), done.
remote: Total 1017 (delta 63), reused 23 (delta 20), pack-reused 923
Receiving objects: 100% (1017/1017), 53.78 MiB | 4.25 MiB/s, done.
Resolving deltas: 100% (568/568), done.
Updating files: 100% (74/74), done.
(Sat, May 21 12:45:40 PM)
$ cd salaries-datasette/
(Sat, May 21 12:45:54 PM) (master|✔)
$ more README.md
# salaries-datasette
Public salary data acquired by the Baltimore Sun. Currently, we just have data from the state of Maryland for 2017.
## Usage
Run `./run.sh setup` to install locally. The script assumes you have either Python 3 or Homebrew for Mac installed. Run `./run.sh setup-frontend` to install front end dependencies.
Run `./run.sh create-db` to create a SQLite database out of the provided CSVs.
Run `./run.sh` or `./run.sh serve` to run server at http://localhost:9001.
Run the JS/CSS frontend server in another tab with `./run.sh frontend`.
`./run.sh format` will format Python and Javascript code according to the coding standards of the project.
`Dockerfile` is also provided for running/deploying with Docker. The image can be built with `./run.sh docker-build` and tested with `./run.sh docker`. The server only responds to correct hostnames (not localhost), so edit `/etc/hosts` to add `127.0.0.1 local.salaries.news.baltimoresun.com` and then test http://local.salaries.news.baltimoresun.com in the browser.
(Sat, May 21 12:46:06 PM) (master|✔)
$ ./run.sh setup
snip a ton of output from installing things including a lot of scary errors
× Encountered error while trying to install package.
╰─> uvloop
note: This is an issue with the package mentioned above, not pip.
hint: See above for output from the failure.
# status: 1 #
(Sat, May 21 12:49:40 PM) (master|✔)
$ # by reading the DOCKERFILE, I learn that this used Python 3.7 when it was made
(Sat, May 21 12:51:16 PM) (master|✔)
$ pyenv install 3.7.13
python-build: use openssl@1.1 from homebrew
python-build: use readline from homebrew
Downloading Python-3.7.13.tar.xz...
-> https://www.python.org/ftp/python/3.7.13/Python-3.7.13.tar.xz
Installing Python-3.7.13...
(Stripping trailing CRs from patch.)
patching file Doc/library/ctypes.rst
(Stripping trailing CRs from patch.)
patching file Lib/test/test_unicode.py
(Stripping trailing CRs from patch.)
patching file Modules/_ctypes/_ctypes.c
(Stripping trailing CRs from patch.)
patching file Modules/_ctypes/callproc.c
(Stripping trailing CRs from patch.)
patching file Modules/_ctypes/ctypes.h
(Stripping trailing CRs from patch.)
patching file setup.py
(Stripping trailing CRs from patch.)
patching file 'Misc/NEWS.d/next/Core and Builtins/2020-06-30-04-44-29.bpo-41100.PJwA6F.rst'
(Stripping trailing CRs from patch.)
patching file Modules/_decimal/libmpdec/mpdecimal.h
(Stripping trailing CRs from patch.)
patching file setup.py
python-build: use tcl-tk from homebrew
python-build: use readline from homebrew
python-build: use zlib from xcode sdk
Installed Python-3.7.13 to /Users/adhoc/.pyenv/versions/3.7.13
(Sat, May 21 12:55:00 PM) (master|✔)
$ ./run.sh setup
snip
× Encountered error while trying to install package.
╰─> uvloop
note: This is an issue with the package mentioned above, not pip.
hint: See above for output from the failure.
# status: 1 #
(Sat, May 21 12:57:08 PM) (master|✔)
$ # I become worried that maybe Pyenv didn't activate for some reason, so I try explicitly adding it to my PATH
(Sat, May 21 12:59:30 PM) (master|✔)
$ PATH=$HOME/.pyenv/versions/3.7.13/bin/:$PATH ./run.sh setup
snip
success!
(Sat, May 21 01:01:58 PM) (master|✔)
$
You’re still missing the point I’m trying to make, though, because you keep diving for anecdotes to support your case, and I keep trying to remind you that if we allow generalizing from anecdotes then your own claims will get contradicted, because there are people who can bring similar anecdotes about languages like Go that you think don’t have this “problem”.
My stance here is and always has been that no language or ecosystem is free of the need to keep up with dependencies and no language or ecosystem is free from breaking changes. People in this comment thread have posted experiences with Go breaking on them and you’ve mostly just ignored that or disallowed generalizations from them, while insisting that your own anecdotes support generalizations, about the objective state of particular languages/ecosystems.
That is what I’m trying to get you to see, and trying to break you out of. No number of anecdotes one way or another will make your case for you, because the issue is not an insufficient number of presented anecdotes.
At a certain point, all I can say is this is my experience and if it’s not yours, great.
This is very close to where I’m trying to lead you, but
You never actually just say that and stop — you always feel a need to throw on another anecdote and then insist that your experience generalizes to objective statements about particular languages/ecosystems, and that’s where it goes off the rails.
You never really accept other people having experiences that don’t match yours, and thus disputing your generalizations. I don’t have the kinds of problems you do with Python. You don’t have the kinds of problems other people have had with Go. But I’m doing my best to acknowledge experiences other than my own when I say that no language/ecosystem is free of this issue: I expect that there will be people who do run into it in Python, and also in Go, and also in Rust, and also in every other language/ecosystem. You seem to be trying to erase the experiences of people who run into it in languages that you subjectively don’t believe have this problem, because it doesn’t fit the “this language has it, that language doesn’t” generalization you want to make.
your own claims will get contradicted, because there are people who can bring similar anecdotes about languages like Go that you think don’t have this “problem”.
I have given a ton of anecdotes and done two experiments in support of my view. On the other side, there is your a priori assertion that all ecosystems have the same problems (but is it of the same magnitude?) and an unreadably long article by a pro-Rust troll. If other people have anecdotes about Go being hard to upgrade, I’m happy to read them (assuming they can get to a point in less than 10,000 words) and theorize about why someone else might have that problem when I don’t. But that hasn’t happened in this thread.
Well, right here in this thread someone mentioned problems with upgrading in Go. But the way you and another user were trampling all criticism of Go with insults — “troll”, “clickbait”, and so on — literally drove that person out of the thread.
I honestly don’t know what else to tell you. It seems very clear that you’re not prepared to hear anything that contradicts the narrative you’ve got going, so I guess I’ll bow out too since it’s pointless to continue trying to get you to acknowledge stuff that was literally right in front of you.
I could never tell, from the links m0th posted, what the actual problems were. They linked to long pieces with no summaries, so while I would love to comment, I cannot. Maybe it’s because they felt targeted by ad hominem. I have a different theory for why, which is that the criticisms were scattershot and not serious. Again, if they want to come back and provide a usable summary, great.
And yet I’d bet anything that if you’d been given short concise statements of problems you’d dismiss them as being trivial or minor on those grounds. Like someone else already said, I simply cannot assume that you’re actually trying in good faith to engage with criticism of a thing you like; instead all I’ve seen you do is smear and deflect and dismiss without argument.
If you call this “without argument” I don’t know what to say. I really feel that the “without argument” side is yours, and it is very frustrating to me, because you’re a writer whose work I respect, which makes it very hard for me to let this go. But when you write things like “anecdotes don’t count”… what is a personal dev blog but anecdotes? I agree that supporters of language X can be vociferous and annoying, and Go is one such X. In the case of Go, the vociferous attitude comes from having been attacked as using a “bad language” by people like fasterthanlime, and from our own experience of initially thinking things like the /v2 convention are dumb and then slowly seeing the benefits and defending them. I agree that my initial claim was a bit too glib in its characterization, but I stand by a less inflammatory version of the same idea. I do think anecdotes are the data of software engineering, and I don’t agree it’s somehow impossible to characterize software ecosystems — with the understanding that any generalization will always be at best partial and incomplete, but it’s still better to have the generalization than not.
People brought up criticism of Go. You didn’t engage with it or argue with the points made — you ignored them, or dismissed them as too long to be worth your time to read, or just outright insulted the author of one piece of Go criticism.
It’s hard for me to see a way to take this as good faith. It appears that you like Go, which is fine! Same with Python: if you don’t like it, then don’t like it — that’s fine!
What’s not fine is the double-standard “my anecdotes are generalizable to broad claims about the language/ecosystem as a whole, but other people’s anecdotes are not”. Which is basically what you’re doing. If your anecdotes about Python generalize, then so do mine and so do other people’s anecdotes about Go. Which would then contradict the claims you want to make, which is probably why you’re being so cagey about not allowing others’ anecdotes to generalize while insisting that yours do. But again it’s going to be, at least, extremely difficult to take that as a good-faith argument.
When I engage with the criticism, I’m just dismissing it. If I don’t engage because I don’t have enough detail (or because I’m drowning in irrelevant details, so I can’t tell why monotonic time is supposedly bad), I’m ignoring it. It’s a no-win situation.
As far as I can tell the only “engagement” you’ve given to criticism of Go was the second paragraph in this comment, and I think that’s being charitable since it’s also possible to read as a “well, this breaking change didn’t count, and it just means Go is great instead of the greatest”. If I said Python’s stability guarantees are good but just “don’t go far enough”, I doubt you’d view it charitably.
You have, on the other hand, repeatedly dismissed critiques both by users in this thread and linked to on other sites as being too long or too “scattershot” or “fair to handwave … away”. Again: if I tried pulling that sort of thing in defense of Python you would, I hope, call me on it. You’re doing it here in defense of Go. I’m calling you on it.
I wrote a long reply, decided to sleep on it, and I give up. You haven’t conceded any points. You’ve repeatedly accused me of bad faith. You’ve decided Python is as good as it gets and by definition nothing else can be better in some ways but not others. All I can say is I didn’t invent the phrase “the problem with Python”. Other people feel this way too. Maybe we’re wrong, and everything has the same problem to the same degree. I don’t think so though.
You seem to be simultaneously saying there is nothing especially broken with Python while defending the culture of constantly breaking changes and saying it is inevitable. I think your attitude sums up the problem: there are a lot of enablers.
This thread started out with an accusation that there is a problem that uniquely exists in certain languages, and does not exist at all in others. People were very confidently stating not that the breaking changes in Go are less frequent, or less annoying, or more justified — people were very confidently stating that breaking changes simply do not exist at all in Go.
Don’t believe me? Here, and a lot of the rest of this sub-thread is people trying to quibble about “oh well that was just a change to (thing that shouldn’t count)” or explain “well these changes were justified and tolerable while those other languages’ changes aren’t” in order to somehow cling to the notion that Go is a bastion of perfect stable compatibility, even in the face of solid evidence that it isn’t.
And I’m sorry if people don’t like to hear it, but I’ve just gotta call bullshit on that. Every language and ecosystem has breaking changes. There is no magic in Go that makes it somehow unchanging and perfectly compatible forever, nor is there any magic in the ecosystem that somehow makes all maintainers good stewards who never ever make unjustified changes.
Still, people seem to be wildly exaggerating the frequency of breaking changes in the languages they want to criticize, while minimizing it for the languages they want to praise. It’s very human and very subjective, and very much not a good way to have a reasoned discussion about this.
Especially when it’s coupled with very obvious bad-faith tactics like the way critics of Go keep being dismissed or even smeared in this thread.
If you dislike Python, then by all means dislike it. There are things I dislike about it. But there’s way too much in this thread, and in our industry, of people being unable to handle anything other than extremes — something is either 100% good or 0%, either completely perfectly backwards-compatible always or never — and of very clearly contradicting themselves and making really bad arguments to try to justify their subjective dislikes. Just come out and say you don’t like a thing and move on. You don’t also have to prove that it’s the worst in order to justify your dislike.
I don’t know if Python is especially bad. I have personally had npm projects break when updating dependencies a few years later, and I have not had that experience in Go. Maybe I am just lucky.
It seems to me that the disconnect here is about the scope of what constitutes “breaking changes” in a language. There certainly isn’t an objective definition! Some people consider only the core language and its standard library, and others include everything out to its tooling.
This thread has grown a lot overnight while I was sleeping! I still haven’t caught up, but here is my top of head takeaway:
I think it’s fair to complain that the Go 1 stability guarantee doesn’t go far enough, but it seems like some people elsewhere in the thread are trying to claim that adding monotonic time somehow breaks the guarantee, which makes no sense to me. At this point, someone citing Fasterthanlime basically means they don’t know anything about Go, IMO, because he has been so egregious in his misrepresentation of the language. Just recently, I had to pin a specific point release of Go 1.18 in my tests because I needed a particular behavior of the go tool. I don’t consider that to have been the Go team breaking the stability guarantee, just an example of how you can run into the limits of it when you’re talking about the tooling.
Obviously, it’s going to be impossible to never break anything. (Which is what I take from your points 1 and 2.) If nothing else, security problems can force breaking changes. Certainly, you need to get to a point where you’re happy enough to commit to a design before you stabilize it: Go was not stable at all before version 1, and JavaScript wasn’t stable before ECMA standardization.
But the question to me is whether the upstream developers take breaking changes seriously or commit them wantonly. To rephrase your points:
Projects which accept that breaking changes are a fact of software development, so they make them whenever it’s convenient.
Projects which accept that breaking changes are a fact of software development, so they make them as rarely as possible.
I was being a bit too flip when I wrote “You ‘just’ don’t make breaking changes.” That’s like saying, you just don’t write bugs. It is unavoidable sometimes. But there is a qualitative difference as an end developer in relying on a project which attempts to avoid breaking changes and one which does not.
In Node, I routinely run into software that will not run on the latest versions of Node. When a new version of Node comes out, I don’t think “Oh great, more speed, more features!” I think, “Ugh, crap, what’s going to break this time?” I had a hard-to-diagnose break in Babel (a very popular library!) that was caused not by a major version jump in Node, but by a minor version bump. It was a waste of a day of development time for me, for no reason. You wrote “as long as I pay the hosting bill there is no technical reason why it would suddenly stop working.” But in this case, there wasn’t anything I did intentionally that broke my setup. I just hadn’t pinned my version of Node, Homebrew moved up by a minor version because of some other thing, and suddenly I lost a workday. It also broke production, because prod was pinned only to the major version, not the minor or patch.
A similar example: I had an Amazon ECS server using a Docker image to serve a Python app, and one day it just stopped working in prod. The problem was that when the Dockerfile was written, it didn’t specify the version of Python, and then the release where async went from a semi-keyword to a full keyword came out and broke the dependencies I was using. It wouldn’t have mattered if the Docker image had already been baked, but the way this project was set up, the Docker image would be periodically rebuilt from the Dockerfile whenever things restarted. That at least was relatively easy to track down and fix, because the change of Python versions was noticeable once I started debugging it.
You call the /v2 thing a “hack” and it is ugly and inelegant, but it does solve this problem: when you make a breaking change, let users upgrade at their own pace, instead of having them accidentally get moved onto the new thing without being aware of it. (In browser JS, "strict mode" and type="module" also have the effect of being opt-in upgrades!) That’s really the core of the so-called “Python problem” to me. Just let me decide when to work through the breaking changes. I’m not expecting eternal support or whatever. I just want something that works today to work tomorrow unless I press the big red “attempt upgrade” button.
At this point, someone citing Fasterthanlime basically means they don’t know anything about Go, IMO because he has been so egregious in his misrepresentation of the language.
I’m the one who cited the monotonic time issue. I’m having trouble seeing good faith in this statement, so I’ll bow out of this thread after this. But just so you understand that this is adamantly not the case: I’ve written Go since before 1.0, contributing to some of the largest projects in the ecosystem, and as a manager moved several teams over to using it (from Python, no less.) I also don’t think those credentials should be necessary for a critique.
I had a hard to diagnose break in Babel (a very popular library!) that was caused not by a major version jump in Node, but a minor version bump.
To reiterate, this has happened to me with minor version bumps in Go, due to both modules and stdlib changes. This is even worse because it was Go itself, not a third-party library.
You call the /v2 thing a “hack” and it is ugly and inelegant, but it does solve this problem: when you make a breaking change, let users upgrade at their own pace
This actually touches on a major reason why Go’s ecosystem could be so unstable before modules: you could either vendor, which opened its own can of worms, or rely on authors using this hack. Both patterns emerged as common practice only after years of Go devs experiencing breaking changes in libraries. You could argue that those instabilities don’t apply to Go itself, which I suppose is fair, but that applies to your argument about Babel as well.
Eh no. Python does not follow semantic versioning and makes intentionally breaking changes between minor versions. In fact Python 3.x releases come with documentation titled “Porting to Python 3.x” which lists all breaking changes made intentionally.
I’ve never been affected by stdlib changes in python in a minor release. I have by go, where it suddenly stopped respecting system DNS settings. Or there were the backwards-incompatible changes to time, maybe best summarized here.
I know this isn’t likely to change anyone’s mind, but the thing you linked is an example of something that had been deprecated and raising deprecation warnings for years. Here’s an example warning I get from a Python 3.7 install I had lying around (oldest I could find, I tend not to keep EOL’d Pythons on my personal laptop):
DeprecationWarning: Using or importing the ABCs from 'collections' instead of from 'collections.abc' is deprecated since Python 3.3, and in 3.9 it will stop working
And it appears they actually held off and didn’t finally remove that until 3.10. That’s at least four years after Python 3.7 (which issued that warning). It’s nearly ten years after Python 3.3.
Python does use a rolling deprecation cycle, yes. So do some major Python projects like Django. This means you should ensure you’re bubbling up deprecation warnings in your CI, and probably read release notes when new versions come out, yes.
But these things don’t just come surprise out of nowhere; they’re documented, they raise warnings, they’re incorporated into the release cycles. Knowing how Python’s deprecation cycles work is, for a Python user, the same kind of table stakes as knowing how semantic versioning works is for a Rust/cargo/crates user – if you updated a crate across a major version bump and it broke your code, everyone would tell you that you could have seen that coming.
It happens, no doubt, but IME Python’s changes are much more egregious than Go’s few changes over the years. Where Go does make changes, they are typically as solutions to bugs, not gratuitous API changes.
Also, speaking personally, I try to be as objective as possible and fasterthanli.me is not an objective source in any sense. It’s clickbait. There’s no pretence of objectivity.
I’d like to point out that this thread has gone from a claim that Go has maintained backwards compatibility to, now, a claim that Go’s backwards-incompatible changes just aren’t as “egregious” as Python’s, and to what’s basically an ad hominem attack on someone who criticized Go.
Which is kind of a microcosm of this whole framing of “Python problem” or “Node problem”. What it’s really about is not some magical language and ecosystem that never has backwards-incompatible changes, what it’s about is subjective taste. You think Go’s breaking changes are not as “egregious” and are “solutions to bugs” while Python’s are “gratuitous”. But that’s a subjective judgment based on your tastes and opinions. Someone else can have an opposite and equally-valid subjective opinion.
Or, more bluntly: “No breaking changes” nearly always turns out to actually mean “Breaking changes, but only ones I personally think are justified/tolerable, and I don’t think yours are”. Which is where this thread predictably went within the space of just a few replies.
Getting back to my original claim: change is inevitable. Entities which produce software can adapt to it and make it a normal and expected part of their processes, or they can suffer the consequences of not doing so. There is no third option for “all external change stops”. Nothing that lives does so changelessly.
Or, more bluntly: “No breaking changes” nearly always turns out to actually mean “Breaking changes, but only ones I personally think are justified/tolerable, and I don’t think yours are”. Which is where this thread predictably went within the space of just a few replies.
No I don’t think so. It has nothing to do with opinion and everything to do with experience. What someone sees directly is how they perceive reality. I’m expressing my perception of reality as someone who has experience of both Python and Go, and m0th is expressing their experience.
I’m not minimising m0th’s view, which is why I phrased it as “in my experience”.
Getting back to my original claim: change is inevitable. Entities which produce software can adapt to it and make it a normal and expected part of their processes, or they can suffer the consequences of not doing so. There is no third option for “all external change stops”. Nothing that lives does so changelessly.
Change is inevitable, I agree, but I do think the degree matters. Python makes breaking changes often, removing APIs, etc. Go does not and only with very good reason.
Change is inevitable, I agree, but I do think the degree matters. Python makes breaking changes often, removing APIs, etc. Go does not and only with very good reason.
Again, the original claim at the start was that there are languages which don’t have breaking changes, and Go was specifically named as an example. That has now been walked back to the kinds of statements you are making. And your statements are, effectively, just that in your opinion one language’s (Go) breaking changes are justified and not too frequent, while another language’s (Python) breaking changes are not justified and too frequent. Which is just you stating your own personal tastes and opinions. And it’s fine as long as you are willing to admit that. It is not so fine to present one’s personal tastes and opinions as if they are objective facts. It also is not so fine to engage in the kind of ad hominem you did about criticism of Go.
Again, the original claim at the start was that there are languages which don’t have breaking changes, and Go was specifically named as an example.
I didn’t make that claim, so I’m not sure why you’re arguing with me about it.
Which is just you stating your own personal tastes and opinions. And it’s fine as long as you are willing to admit that.
I did? I explicitly said “In My Experience”. I’m not sure how I can be any clearer.
It is not so fine to present one’s personal tastes and opinions as if they are objective facts. It also is not so fine to engage in the kind of ad hominem you did about criticism of Go.
What are you even talking about? I legitimately can’t even understand what you’re referring to. Where’s the “ad hominem” comment I made? I think the most controversial thing I said was
“Python makes breaking stdlib changes in minor version releases.”
Which is objectively true as mentioned by other commenters. I made no judgement about whether it was justified or not. It’s also not “ad hominem”, which would require Python to have a position on this.
Anyway, I think you’re taking this way too personally for some reason. I like Python, I have used it for many years. But I’m out.
IIUC, ubernostrum is referring to you explaining that fasterthanli.me is not a reliable source of information. That’s not an ad hominem, though. His attacks are so scattershot that if someone cites them, it’s totally fair to just handwave them away.
Also, speaking personally, I try to be as objective as possible and fasterthanli.me is not an objective source in any sense. It’s clickbait. There’s no pretence of objectivity.
You don’t bother to engage with the content, just brush off the author with a quick smear and then move on. Which is a problem, because the content is reasonably well-argued, from my perspective as someone who has a bit of Go experience and initially couldn’t quite describe why it rubbed me the wrong way. Putting into words the way that Go (which is not unique in this, but is a prominent example of it) likes to pretend complex things are simple, rather than just admitting they’re complex and dealing with it, was extremely useful for me, both for giving me a way to talk about what bugged me in Go and because it relates to things I’ve ranted about in the Python world previously (most notably why Python 3’s string changes were the right thing to do).
Go the language doesn’t have breaking changes, in the sense that code written against a 1.0 language spec will — generally — continue to compile and run the same way against any 1.x language spec compiler. This isn’t an absolute statement of fact, but it’s a design principle that’s taken very seriously and is violated very rarely. The tooling doesn’t necessarily abide by the same rules.
The only one I remember is modules. After modules landed, I never experienced a breaking build when updating to a new Go version. A whole different story was updating Java versions and the dozens of subtle ways they can break your service, especially at run time (caused by some Spring DI magic).
I mean, Go modules are not part of the Go 1 stability guarantee. In my opinion, this shows how limited that guarantee is. Go 1 is stable, but Go is not, at least if you are using “go build” to build your Go code.
I agree but that’s not a language change, and the impact is not as significant as random language changes. I can still use and build a package from 8 years ago with no issue. And whatever tool they happened to use to manage dependencies 8 years ago will still work fine (probably).
I saw some breaking changes in one of our old projects. Written in the Go v1.4 era, IIRC. I checked it with a Go v1.14 release, and boom, it doesn’t compile due to the module changes.
Yes, it wasn’t that hard to fix (it only took a few minutes of Internet searching), but I still count that as a breaking change.
V8 the interpreter breaks working JS code very seldom.
The libraries that come with Node break JS code a little more often, but still IME not very often.
V8 breaks native extensions very often. The Node.js ecosystem discourages writing them because of this.
Some add-on packages from npm break their consumers all the time.
Many packages are extremely careful not to break consumers. Others are less so. The experience you have with backwards compatibility tends to track the backwards compatibility stance of the worst thing in your entire dependency tree. When you have tens of thousands of transitive dependencies, you usually end up seeing 99.99%-ile bad behaviour somewhere in there at least once.
The core problem is that a handful of “core” packages break, and Node’s ecosystem is way too much about plugins, so many of the things you use have three layers to them (all maintained by different people).
The ecosystem would be a lot more stable if we vendored packages more.
Node is a development target. The pure abstract notion of a “language” doesn’t really matter here, because people write code for Node specifically.
And Node does make breaking changes. It literally makes semver-major releases, with backwards-incompatible changes that can break existing npm packages (mainly due to changes in the native API, but there also have been changes to Node’s standard library).
For example, I can’t build any projects that used Gulp v3. Node broke packages that it depends on, and I either have to run a deprecated/unsupported/outdated version of Node, or rewrite my build scripts. OTOH a Rust/Cargo project I developed at the same time still works with the latest version of Rust/Cargo released yesterday.
ECMAScript itself is a language, not a language implementation. You can write a lot of code in ECMAScript, but at the end of the day the specification can’t run it; you need an implementation for that. That’s why people are talking about the Node ecosystem: the language didn’t break, the ecosystem did.
Okay, that’s a few fewer keystrokes, but a heavier mental load. I’d rather press F3 to go to the next result (or even re-hit Ctrl+F and Enter) than remember whether I’m in “search” mode and hit some random key that has a different meaning in a different context.
When programming, you already have to juggle so many different things in your mind - why complicate it further? I feel like all those vim/emacs articles are just written to justify the time spent learning all those keystrokes and quirks, and all the time setting up the plugins.
I get that concern, but the truth is that after a couple weeks of using vim all the commands and things you use daily become second nature. I’ve had times where I couldn’t tell someone how to do something in vim without having my fingers on the keys so I see what my movements were. It’s pretty amazing how many shortcuts you can keep in your head.
I’m able to mostly do that by playing an air QWERTY keyboard. Definitely keeping most of my vim knowledge in my muscles, leaving my brain free for how I want the text to change.
You’re actually looking at it the wrong way around. F3 is the random key here. Nobody would ever figure out that key without help. On the other hand, in VI most keys are part of a logical pattern, even though some of those are historical. For example: n is the key you’d press to get to the next search result.
So while most shortcuts in modern-day GUIs have to be memorized without much context to help*, Vim commands are a language built out of related patterns.
*) That’s of course not the full story. All those shortcuts have a history as well and some are an abbreviation for the action as in ctrl+f(ind) or ctrl+c(opy). But there’s no “copy a word and paste it to the next line” or anything similar one could express with those.
People figure out the F3 key by seeing it in the IDE’s menus - which vim doesn’t have. With vim, you have to put in the effort and actively learn the shortcuts. But even then, I said you can just hit Ctrl+F and enter again to get the same behavior, which is obvious because most software has the same shortcuts, and work the same way.
But there’s no “copy a word and paste it to the next line” or anything similar one could express with those.
Ctrl+Shift+Right to select the word, then Ctrl+C, Down arrow, Ctrl+V, am I missing something?
Yes, if you use GVim you get those little helpers in menus as well. That’s a different interface. But the topic should be about concepts. VIM commands are a concept, a language, rather than a list of unrelated commands.
Of course you can do everything that you can do in VIM in any other editor as well. I’m referring to concepts and I might not be very good in conveying that. Sorry.
In the end you can express pretty much anything in any editor with enough keystrokes: the arrow keys exist, after all.
Modal editing tends to be a lot more efficient than non-modal though, and the keystrokes don’t require you to move your hands much e.g. to the arrow keys (way off the home row) or to use modifiers like Ctrl that require stretching your hands. Modal editors allow you to use the keys that are the easiest to reach: the letter keys, since the modal editor knows whether you’re intending to write vs intending to issue commands. These days I mostly use VSCode, rather than Vim, but I always have Vim emulation turned on because it’s much faster than non-modal editing once you’re familiar with it. Vim is particularly nice because it has a mini language for expressing edits; for example, w means “word,” and can be combined with deletion to delete a word (dw), selection to select a word (vw), “yank” to copy a word (yw), etc — or it can be used alone without a prefacing action, in which case it simply jumps to the next word after the cursor position. And there are many other “motion” nouns like w, and those can also be combined with the action verbs in the same manner — to copy letters, paragraphs, etc, or even more complex ideas such as letter search terms. Command sequences are first-class and your last command can be replayed with a single keystroke, and there’s even a built-in macro definition verb q, which stores up indexable lists of command sequences you issue and can replay the entire lists for executing complex but tedious refactors.
Sure — the bottleneck in programming is typically not between your hands and the keyboard; it’s typically thought. But once you know what you want to express, it’s a very pleasant experience to be able to do it with such efficiency of motion. And since we do ultimately spend quite a while typing, it’s not irrational to spend some time learning a new system to have that time feel nicer.
I don’t see it as much for programming, but for writing prose a modal editor is great for forcing me to separate writing from editing. When I write an article / book chapter in vim, I try to write the first draft and then go back end edit. At worst, I try to write entire paragraphs or sections and then edit. I find this produces much better output than if I use an editor that makes it easy to edit while I’m writing.
This is something that the article comes close to saying, but doesn’t actually say: Vim doesn’t just provide a bunch of shortcuts and keybindings for arbitrary operations; instead it provides a fairly well-thought-out programming language for dealing with text that happens to also be bound to convenient keys. Crucially, operations in this language can be composed, such as in the examples the article gives, so you can build up your own vocabulary of operations even if the developers didn’t account for them. Does this take longer to learn than looking up a menu entry? Yes, probably. But I suspect that for most vim fans there comes an “aha moment” where the power of this approach becomes apparent, and once you get used to it, you can’t live without it.
The mental load goes away after a while and it just becomes muscle memory. Each saving by itself is small, but they all add up over time. One I used recently was %g/^State \d/norm! f:lD: “On every line that starts with State followed by a number, delete everything after the first colon.” That would have taken several minutes without it; with it, it’s just a couple of seconds. When I’m constantly getting savings like that, it’s worth it.
You just don’t make breaking changes. Add new APIs but don’t take away old ones. Go 1 has been stable for 10 years now. That means there are some ugly APIs in the standard library and some otherwise redundant bits, but it’s been fine. It hasn’t stopped them from adding major new features. Similarly, JavaScript in the browser only rolls forward, and is great. Again, some APIs suck, but you can just ignore them for the most part and use the good versions. I’m not as familiar with the Linux kernel, but my impression there is that Linus’s rule is that you don’t break userspace.
Most breaking changes are gratuitous. They make things a little nicer for the core developers and shove the work of repair out to everyone who uses their work. I understand why it happens, but I reserve the right to be grumpy about it, especially because I have the experience of working in ecosystems (Go, the browser) that don’t break, so I am very annoyed by ecosystems that do (NPM, Python).
I’ll be up-front. My stance on breaking changes is that there are two types of projects:
And remember that we are talking not just about the core language/standard library, but also ecosystems. Go has already effectively broken on this — the “rename your module to include /v2” hack is an admission of defeat, and allows the previous version to lapse into a non-maintained status. Which in turn means that sooner or later you will have to make the choice I talked about (upgrade, or do maintenance yourself, or go without maintenance).
And Rust has had breaking changes built into the ecosystem from the beginning. The Cargo/crates ecosystem isn’t built around every crate having to maintain eternal backwards compatibility; it’s built on semantic versioning. If I publish version 1.0 of a crate today, I can publish 2.0 with breaking changes any time I want and stop maintaining the 1.x series, leaving users of it stranded.
So even if Rust editions succeed as a way of preserving compatibility of the core language with no ecosystem splits and no “stagnation”, which I strongly doubt in the long term, the ecosystem has already given up, and in fact gave up basically immediately. It has “the Python problem” already, and the only real option is to learn how to manage and adapt to change, not to insist that change is never permitted.
(and of course my own experience is that the “problem” is wildly exaggerated compared to how it tends to impact projects in actual practice, but that’s another debate)
I think it’s hard to talk about this rigorously because there’s definitely some selection bias at play — we don’t have information about all the internal projects out there that might be bogged down by breaking changes between interpreter versions, and they’re likely motivated by very different incentives than the ones that govern open source projects — and there’s likely survivorship bias at play too, in that we don’t hear about the projects that got burnt out on the maintenance burden those breaking changes induce.
My anecdotal evidence is that I’ve worked at places with numerous Python projects bound to different, sometimes quite old, interpreter versions, and there just aren’t enough person-hours available to keep them all up to date, and updating them in the cases where it was truly necessary made for some real hassle. Even if you chalk that up to bad resource management, it’s still a pretty common situation for an organization to find itself in, and it’s reasonable to expect your tools to not punish you for having less than perfect operational discipline. In light of that, I think understanding it in this binary frame of either making breaking changes or not isn’t the most fruitful approach, because as you note it’s not realistic to expect that they never happen. But when they do happen, they cost, and I don’t think it’s unreasonable for an organization to weigh that total cost against their resources and decide against investing in the Python ecosystem. It’s not unreasonable to make the opposite choice either! I just don’t think that cost is trivial.
Imagine 20 years ago saying this about a project that suffered because they lost some crucial files and it turned out they weren’t using any kind of version control system.
Because that’s basically how I feel about it. Regularly keeping dependencies, including language tooling/platform, up-to-date, needs to become table stakes for software-producing entities the way that version control has. I’ve seen this as a theme now at four different companies across more than a decade, and the solution is never to switch and pray that the next platform won’t change. The solution is always to make change an expected part of the process. It can be done, it can be done in a way that minimizes the overhead, and it produces much better results. I know because I have done it.
I don’t believe that’s what’s being disputed: it’s precisely because change is a fact that the platform should make ongoing maintenance as easy as it can. Doing so allows even under-resourced teams to stay on top of the upgrade treadmill, which was more my point in the bit you quoted: tools that induce less overhead are more resilient to the practical exigencies organizations face. I guess we’ll have to agree to disagree about where the Python ecosystem sits on that spectrum.
This sounds like the “don’t write bugs” school of thought to me. Yes, ideally anything that’s an operational concern will get ongoing maintenance. In the real world…
More anecdata: my coworker built a small Python scraper that runs as a cron job to download some stuff from the web and upload it to an S3 bucket. My coworker left and I inherited the project. The cron job was no longer a high priority for the company, but we didn’t want to shut it off either. I couldn’t get it to run on my machine for a while because of the Python version problem. Eventually I got to the point where I could get it to run by using Python 3.6, IIRC, so that’s what it’s using to this day. Ideally, if I had time and resources I could have figured out why it was stuck and unstick it. (Something to do with Numpy, I think?) But things aren’t always ideal.
If someone has to have discipline and try to stick closer to the “don’t write bugs” school, who should it be: language creators or end developers? It’s easy for me to say language creators, but there are also more of us (end developers) than them (language creators). :-) ISTM that being an upstream brings a lot of responsibility, and one of those should be the knowledge that your choices multiply out by all the people depending on you: if you impose a 1 hour upgrade burden on the 1 million teams who depend on you, that’s 1 million hours, etc.
Generalizing from anecdata and my own experience, the killer is any amount of falling behind. If something is not being actively maintained with dependency updates on at least a monthly and ideally a weekly cadence, it is a time bomb. In any language, on any platform. Because the longer you go without updating things, the more the pending updates pile up and the more work there will be to do once you do finally sit down and update (which for many projects, unfortunately, tends to be only when they are absolutely forced to start doing updates and not a moment sooner).
At my last employer I put in a lot of work on making the (Python) dependency management workflow as solid as I could manage with only the standard packaging tooling (which I believe you may have read about). But the other part of that was setting up dependabot to file PRs for all updates, not just security, to do so on a weekly basis, and to automate creation of Jira tickets every Monday to tell the team that owned a repository to go look at and apply their dependabot PRs. When you’re doing it on that kind of cadence it averages very little time to review and apply the updates, you find out immediately from CI on the dependabot PRs if something does have a breaking change so you can scope out the work to deal with it right then and there, and you never wind up in a situation where applying the one critical update you actually cared about takes weeks or months because of how much other stuff you let pile up in the meantime.
Meanwhile I still don’t think Python or its ecosystem are uniquely bad in terms of breaking changes. I also don’t think Go or Rust are anywhere near as good as the claims made for them. And the fact that this thread went so quickly from absolutist “no breaking changes ever” claims to basically people’s personal opinions that one language’s or ecosystem’s breaking changes are justified and tolerable while another’s aren’t really shows that the initial framing was bad and was more or less flamebait, and probably should not be used again.
No, I still think you’re wrong. :-)
I agree that for a Python or Node project it is recommended to set up dependabot to keep up to date or else you have a ticking time bomb. However, a) that isn’t always practical and b) it doesn’t have to be like that. I routinely leave my Go projects unattended for years at a time, come back, upgrade the dependencies, and have zero problems with it.
Here is a small project last touched in 2017 that uses Go and Node: https://github.com/baltimore-sun-data/track-changes
Here is the full Terminal output of me getting it to build again with the most recent version of Go:
As you can see, it took about a minute for me to get it building again. Note that this package predates the introduction of Go modules.
Let’s upgrade some packages:
Took about 3 minutes to upgrade the packages and fix the broken dependency (they renamed a middleware). Bear in mind that the upgrade I did deliberately did not try to upgrade past semantic version changes in its dependencies. Probably it would take another half hour or more if I wanted to chase down whatever breaking changes happened there.
Suffice it to say, yarn cannot even install its packages, and the last time I tried this stunt a couple of years ago, I got past that and then ran into a problem with webpack that I couldn’t easily solve.
Go is just a much more stable ecosystem than Node or Python. It’s not as stable as say browser JS, where one can reasonably expect working code to work until civilization collapses, but it’s fairly stable. And it’s not magic. If there were a communal expectation of this level of stability, it could exist everywhere. It’s a social value to keep things working in the Go ecosystem, and it’s not elsewhere.
The other day I returned to a Python package I hadn’t touched in about a year. The actual Python dependencies portion of updating it was done in a few minutes: I updated the supported versions of Python to those currently supported by upstream, did the same for the supported versions of Django, and then had to change a whopping four lines of code, all in a unit-test file, to deal with a deprecation in Django that had finally been removed.
The entire remainder of getting it ready for a new release was fighting with CI — updating from v1 to v3 of the GitHub Actions tasks for Python, which I mostly did by copy/pasting from a package by someone else who I trust.
I mention this because while you have anecdotes about Go projects updating more or less seamlessly, I have just as many about Python projects, and other people in this thread have anecdotes about Go projects breaking in ways they found annoying.
All of which is to say that you should stop trying to extrapolate from your anecdata to “Go is stable and values stability, while Python is not and does not”, because it just ends up looking silly when other people show up with their anecdata. Python is not uniquely “unstable” and neither Go nor Rust are uniquely “stable”. At best, some projects are sometimes lucky enough that they can give the false impression of stability in the language/ecosystem, despite the fact that the language/ecosystem is always moving on. And that’s why I have said, over and over, that the thing to do is embrace and accept change and build processes around it. Otherwise, you’re likely to wake up one day to find out that what you thought was stable and unchanging was neither, and that you are in for a lot of trouble.
When I get a chance, I thought of an equivalently old Python package for me to try updating. I’ll try to do it this weekend or next week.
But I just don’t buy this:
I don’t have experience with Rust, so I have no idea there. I do have years of working in Python, JavaScript, and Go and my experience is uniform: Python and JavaScript routinely have problems that make installing/updating take a workday, and Go does not. I’ve already given a lot of concrete examples, and I’m sure I could dig through my git history and find more. At a certain point, all I can say is this is my experience and if it’s not yours, great.
Okay, I tried this with a Datasette project from around the same time. (It was actually deployed again in early 2020, so it’s a bit fresher than the Go project, but whatever, close enough.) Again, Node didn’t work. I think the issue with that is that libsass is dead and doesn’t compile anymore, so you need to switch to dart-sass instead. In all likelihood, the fastest solution to the Node issues is to drop all of the dependencies and just start over from scratch with only my user code, since the dependencies were just there to build a Vue project.
On the Python side, it wouldn’t work with Python 3.9, but when I used Python 3.7, I got it to run again. Terminal output is below. It only took 15 minutes to get it going, but compare this to Go, which works with the current version of Go even though the package predates modules (which caused a lot of breakage not covered by the Go 1 guarantee) and where I got the dependencies upgraded in a total of 5 minutes. By contrast, the Python installations all took long enough that they break flow: since the installation is going to take a while, I switch away from my Terminal, which is a chance for me to get distracted and lose my place.

I think this project did pretty well because it used your recommended pattern of having a requirements-freeze.txt file and it had a Bash script to automate the actual install commands. But the error when uvloop was broken was pretty demoralizing: I have no idea how I would fix it, so getting up to Python 3.9 or 3.10 would probably involve a lot more Googling than I’m willing to do for an internet-comments example. Again, the simplest fix might be to just blow away what I have now and start from scratch. I think Datasette has been relatively stable in spite of being <1.0, so I suspect it wouldn’t be that hard to get it working, but again, it’s more than I want to do for an example.

A nice thing about Go is that most dependencies don’t use C, so when something does go wrong, like that middleware that was broken in the other project, you aren’t confronted with errors in a language you don’t know using a build system you don’t understand. In general, it’s just much less intimidating to get a Go project back up to speed.
So this is fairly reflective of my lived experience: Node projects, especially those that use Webpack, break in ways that are more or less unfixable and need to be restarted from scratch; Python projects can be kept running if you are willing to pin old versions of Python, but give a lot of scary compiler errors and don’t have clear paths forward; Go projects can typically be upgraded by typing “go get -u ./...” and maybe reading some release notes somewhere. Go isn’t perfect and there are still problems, but the quantity of problems is so much less that it creates a qualitative difference in feeling.

You’re still missing the point I’m trying to make, though, because you keep diving for anecdotes to support your case, and I keep trying to remind you that if we allow generalizing from anecdotes then your own claims will get contradicted, because there are people who can bring similar anecdotes about languages like Go that you think don’t have this “problem”.
My stance here is and always has been that no language or ecosystem is free of the need to keep up with dependencies and no language or ecosystem is free from breaking changes. People in this comment thread have posted experiences with Go breaking on them and you’ve mostly just ignored that or disallowed generalizations from them, while insisting that your own anecdotes support generalizations, about the objective state of particular languages/ecosystems.
That is what I’m trying to get you to see, and trying to break you out of. No number of anecdotes one way or another will make your case for you, because the issue is not an insufficient number of presented anecdotes.
This is very close to where I’m trying to lead you, but
I have given a ton of anecdotes and done two experiments in support of my view. On the other side, there is your a priori assertion that all ecosystems have the same problems (but is it of the same magnitude?) and an unreadably long article by a pro-Rust troll. If other people have anecdotes about Go being hard to upgrade, I’m happy to read them (assuming they can get to a point in less than 10,000 words) and theorize about why someone else might have that problem when I don’t. But that hasn’t happened in this thread.
Well, right here in this thread someone mentioned problems with upgrading in Go. But the way you and another user were trampling all criticism of Go with insults — “troll”, “clickbait”, and so on — literally drove that person out of the thread.
I honestly don’t know what else to tell you. It seems very clear that you’re not prepared to hear anything that contradicts the narrative you’ve got going, so I guess I’ll bow out too since it’s pointless to continue trying to get you to acknowledge stuff that was literally right in front of you.
I could never tell what the actual problems were based on the links posted by m0th. They linked to long pieces with no summaries, so while I would love to comment, I cannot. Maybe it’s because they felt targeted by ad hominem. I have a different theory for why, which is that the criticisms were scattershot and not serious. Again, if they want to come back and provide a usable summary, great.
And yet I’d bet anything that if you’d been given short concise statements of problems you’d dismiss them as being trivial or minor on those grounds. Like someone else already said, I simply cannot assume that you’re actually trying in good faith to engage with criticism of a thing you like; instead all I’ve seen you do is smear and deflect and dismiss without argument.
If you call this “without argument” I don’t know what to say. I really feel that the “without argument” side is yours, and it is very frustrating to me, because you’re a writer whose work I respect, which makes it very hard for me to let this go. But when you write things like “anecdotes don’t count”… what is a personal dev blog but anecdotes? I agree that supporters of language X can be vociferous and annoying, and Go is one such X. In the case of Go, the vociferous attitude comes from having been attacked as using a “bad language” by people like fasterthanlime, and from our own experience of initially thinking things like the /v2 scheme are dumb and then slowly seeing the benefits and defending them. I agree that my initial claim was a bit too glib in its characterization, but I stand by a less inflammatory version of the same idea. I do think anecdotes are the data of software engineering, and I don’t agree that it’s impossible to characterize software ecosystems, with the understanding that any generalization will always be at best partial and incomplete but that it’s better to have the generalization than not.
People brought up criticism of Go. You didn’t engage with it or argue with the points made — you ignored them, or dismissed them as too long to be worth your time to read, or just outright insulted the author of one piece of Go criticism.
It’s hard for me to see a way to take this as good faith. It appears that you like Go, which is fine! Same with Python: if you don’t like it, then don’t like it — that’s fine!
What’s not fine is the double-standard “my anecdotes are generalizable to broad claims about the language/ecosystem as a whole, but other people’s anecdotes are not”. Which is basically what you’re doing. If your anecdotes about Python generalize, then so do mine and so do other people’s anecdotes about Go. Which would then contradict the claims you want to make, which is probably why you’re being so cagey about not allowing others’ anecdotes to generalize while insisting that yours do. But again it’s going to be, at least, extremely difficult to take that as a good-faith argument.
When I engage with the criticism, I’m just dismissing it. If I don’t engage because I don’t have enough detail (or am drowning in irrelevant details, so I can’t tell why monotonic time is supposedly bad), I’m ignoring it. It’s no win.
As far as I can tell the only “engagement” you’ve given to criticism of Go was the second paragraph in this comment, and I think that’s being charitable since it’s also possible to read as a “well, this breaking change didn’t count, and it just means Go is great instead of the greatest”. If I said Python’s stability guarantees are good but just “don’t go far enough”, I doubt you’d view it charitably.
You have, on the other hand, repeatedly dismissed critiques both by users in this thread and linked to on other sites as being too long or too “scattershot” or “fair to handwave … away”. Again: if I tried pulling that sort of thing in defense of Python you would, I hope, call me on it. You’re doing it here in defense of Go. I’m calling you on it.
I wrote a long reply, decided to sleep on it, and I give up. You haven’t conceded any points. You’ve repeatedly accused me of bad faith. You’ve decided Python is as good as it gets and by definition nothing else can be better in some ways but not others. All I can say is I didn’t invent the phrase “the problem with Python”. Other people feel this way too. Maybe we’re wrong, and everything has the same problem to the same degree. I don’t think so though.
You seem to be simultaneously saying there is nothing especially broken with Python while defending the culture of constant breaking changes and saying it is inevitable. I think your attitude sums up the problem: there are a lot of enablers.
This thread started out with an accusation that there is a problem that uniquely exists in certain languages, and does not exist at all in others. People were very confidently stating not that the breaking changes in Go are less frequent, or less annoying, or more justified — people were very confidently stating that breaking changes simply do not exist at all in Go.
Don’t believe me? Here, and a lot of the rest of this sub-thread is people trying to quibble about “oh well that was just a change to (thing that shouldn’t count)” or explain “well these changes were justified and tolerable while those other languages’ changes aren’t” in order to somehow cling to the notion that Go is a bastion of perfect stable compatibility, even in the face of solid evidence that it isn’t.
And I’m sorry if people don’t like to hear it, but I’ve just gotta call bullshit on that. Every language and ecosystem has breaking changes. There is no magic in Go that makes it somehow unchanging and perfectly compatible forever, nor is there any magic in the ecosystem that somehow makes all maintainers good stewards who never ever make unjustified changes.
Still, people seem to be wildly exaggerating both the frequency of breaking changes in the languages they want to criticize, while minimizing for the languages they want to praise. It’s very human and very subjective, and very not a good way to have a reasoned discussion about this.
Especially when it’s coupled with very obvious bad-faith tactics like the way critics of Go keep being dismissed or even smeared in this thread.
If you dislike Python, then by all means dislike it. There are things I dislike about it. But there’s way too much in this thread, and in our industry, of people being unable to handle anything other than extremes — something is either 100% good or 0%, either completely perfectly backwards-compatible always or never — and of very clearly contradicting themselves and making really bad arguments to try to justify their subjective dislikes. Just come out and say you don’t like a thing and move on. You don’t also have to prove that it’s the worst in order to justify your dislike.
I don’t know if python is especially bad. I personally found npm projects I have tried break when updating dependencies a few years later, and I have not had that experience in Go. Maybe I am just lucky.
It seems to me that the disconnect here is about the scope of what constitutes “breaking changes” in a language. There certainly isn’t an objective definition! Some people consider only the core language and its standard library, and others include everything out to its tooling.
This thread has grown a lot overnight while I was sleeping! I still haven’t caught up, but here is my top of head takeaway:
I think it’s fair to complain that the Go 1 stability guarantee doesn’t go far enough, but it seems like some people elsewhere in the thread are trying to claim that adding monotonic time somehow breaks the guarantee, which makes no sense to me. At this point, someone citing Fasterthanlime basically means they don’t know anything about Go, IMO, because he has been so egregious in his misrepresentation of the language. Just recently, I had to pin a specific point release of Go 1.18 in my tests because I needed a particular behavior from the Go tool. I don’t consider that to have been the Go team breaking the stability guarantee, just an example of how you can run into the limits of it when you’re talking about the tooling.
Obviously, it’s going to be impossible to never break anything. (Which is what I take from your points 1 and 2.) If nothing else, security problems can force breaking changes. Certainly, you need to get to a point where you’re happy enough to commit to a design before you stabilize it: Go was not stable at all before version 1. JavaScript wasn’t stable before ECMA etc.
But the question to me is whether the upstream developers take breaking changes seriously or commit them wantonly. To rephrase your points:
Projects which accept that breaking changes are a fact of software development, so they make them whenever it’s convenient.
Projects which accept that breaking changes are a fact of software development, so they make them as rarely as possible.
I was being a bit too flip when I wrote “You ‘just’ don’t make breaking changes.” That’s like saying, you just don’t write bugs. It is unavoidable sometimes. But there is a qualitative difference as an end developer in relying on a project which attempts to avoid breaking changes and one which does not.
In Node, I routinely run into software that will not run on the latest versions of Node. When a new version of Node comes out, I don’t think “Oh great, more speed, more features!” I think, “Ugh, crap, what’s going to break this time?” I had a hard to diagnose break in Babel (a very popular library!) that was caused not by a major version jump in Node, but a minor version bump. It was a waste of a day of development time for me for no reason. You wrote “as long as I pay the hosting bill there is no technical reason why it would suddenly stop working.” But in this case, there wasn’t anything I did intentionally which broke my setup. I just hadn’t pinned my version of Node, Homebrew moved up by a minor version because of some other thing, and suddenly I lost a workday. It also broke production because prod was only pinned to the major version not the minor or patch.
A similar example was that I had an Amazon ECS service using a Docker image to serve some Python app, and one day it just stopped working in prod. The problem was that when the Dockerfile was written, it didn’t specify the version of Python, and then the version where async went from a semi-keyword to a full reserved keyword (Python 3.7) came out and broke the dependencies I was using. It wouldn’t have mattered if the Docker image had been already baked, but the way this project was set up, the Docker image would be periodically rebuilt from the Dockerfile whenever things restarted. That at least was relatively easy to track down and fix, because the change of Python versions was noticeable once I started debugging it.
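That failure mode is easy to reproduce: code that used async as an ordinary identifier was legal through Python 3.6, but async became a fully reserved keyword in Python 3.7, so the same source stops parsing at all. A minimal sketch (compiling a string, so it demonstrates the break safely on any modern interpreter):

```python
# Legal through Python 3.6; a hard SyntaxError from 3.7 onward.
legacy_source = "async = 1"

try:
    compile(legacy_source, "<legacy>", "exec")
    result = "compiled fine (pre-3.7 interpreter)"
except SyntaxError:
    result = "SyntaxError: async is now a reserved keyword"

print(result)
```

On any Python 3.7+ interpreter this takes the SyntaxError branch, which is exactly the surprise a silently rebuilt Docker image delivers.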
You call the /v2 thing a “hack” and it is ugly and inelegant, but it does solve this problem: when you make a breaking change, let users upgrade at their own pace, instead of having them accidentally get moved onto the new thing without being aware of it. (In browser JS, the “use strict” directive and type="module" also have the effect of being opt-in upgrades!) That’s really the core of the so-called “Python problem” to me. Just let me decide when to work through the breaking changes. I’m not expecting eternal support or whatever. I just want something that works today to work tomorrow unless I press the big red “attempt upgrade” button.

I’m the one who cited the monotonic time issue. I’m having trouble seeing good faith in this statement, so I’ll bow out of this thread after this. But just so you understand that this is adamantly not the case: I’ve written Go since before 1.0, contributing to some of the largest projects in the ecosystem, and as a manager moved several teams over to using it (from Python, no less). I also don’t think those credentials should be necessary for a critique.
To reiterate, this has happened to me with minor version bumps in Go, due to both modules and stdlib changes. This is even worse because it was Go itself, not a third-party library.
This actually touches on a major reason why Go’s ecosystem could be so unstable before modules: you could either vendor which had its own can of worms, or rely on authors using this hack. Both patterns emerged as common practice only after years of Go devs experiencing breaking changes in libraries. You could argue that those instabilities don’t apply to Go itself, which I suppose is fair, but that applies to your argument about Babel as well.
Go makes breaking changes all of the time. The last one that bit me was when modules were enabled by default. Suddenly packages starting behaving differently depending on a variety of factors, including the version number of the package: https://stackoverflow.com/questions/57355929/what-does-incompatible-in-go-mod-mean-will-it-cause-harm/57372286#57372286
Python at least had the sense to bundle up all the breakages in a major version change, which was released 15 years ago.
Eh no. Python does not follow semantic versioning and makes intentionally breaking changes between minor versions. In fact Python 3.x releases come with documentation titled “Porting to Python 3.x” which lists all breaking changes made intentionally.
Python makes breaking stdlib changes in minor version releases. Not even remotely the same as a tooling change.
I’ve never been affected by stdlib changes in Python in a minor release. I have by Go, where it suddenly stopped respecting system DNS settings. Or there were the backwards-incompatible changes to time, maybe best summarized here.
I believe you, but you are lucky. With search I found things like https://github.com/wazuh/wazuh/issues/13365 which matches my experience.
I know this isn’t likely to change anyone’s mind, but the thing you linked is an example of something that had been deprecated and raising deprecation warnings for years. Here’s an example warning I get from a Python 3.7 install I had lying around (oldest I could find, I tend not to keep EOL’d Pythons on my personal laptop):
And it appears they actually held off and didn’t finally remove that until 3.10. That’s at least four years after Python 3.7 (which issued that warning). It’s nearly ten years after Python 3.3.
Python does use a rolling deprecation cycle, yes. So do some major Python projects like Django. This means you should ensure you’re bubbling up deprecation warnings in your CI, and probably read release notes when new versions come out, yes.
But these things don’t just come surprise out of nowhere; they’re documented, they raise warnings, they’re incorporated into the release cycles. Knowing how Python’s deprecation cycles work is, for a Python user, the same kind of table stakes as knowing how semantic versioning works is for a Rust/cargo/crates user – if you updated a crate across a major version bump and it broke your code, everyone would tell you that you could have seen that coming.
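A minimal sketch of what bubbling deprecation warnings up in CI can look like (old_api here is a made-up stand-in for a deprecated stdlib or Django call, not a real API): promote DeprecationWarning to an error so a removal years later can’t come as a surprise. In a real suite you’d typically do this with python -W error::DeprecationWarning or pytest’s filterwarnings setting rather than inline.

```python
import warnings

def old_api():
    # Stand-in for an API that is inside its deprecation window.
    warnings.warn("old_api() is deprecated; use new_api()",
                  DeprecationWarning, stacklevel=2)
    return 42

# CI-style strictness: deprecations fail the build instead of
# scrolling by unread in the test output.
with warnings.catch_warnings():
    warnings.simplefilter("error", DeprecationWarning)
    try:
        old_api()
        outcome = "no deprecations"
    except DeprecationWarning as exc:
        outcome = f"build failed: {exc}"

print(outcome)
```

Run weekly alongside dependency updates, this is what turns a multi-year deprecation cycle into a small, scheduled fix instead of a surprise breakage at removal time.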
It happens, no doubt, but IME Python’s changes are much more egregious than Go’s few changes over the years. Where Go does make changes, they are typically as solutions to bugs, not gratuitous API changes.
Also, speaking personally, I try to be as objective as possible and fasterthanli.me is not an objective source in any sense. It’s clickbait. There’s no pretence of objectivity.
I’d like to point out that this thread has gone from a claim that Go has maintained backwards compatibility to, now, a claim that Go’s backwards-incompatible changes just aren’t as “egregious” as Python’s, and to what’s basically an ad hominem attack on someone who criticized Go.
Which is kind of a microcosm of this whole framing of “Python problem” or “Node problem”. What it’s really about is not some magical language and ecosystem that never has backwards-incompatible changes, what it’s about is subjective taste. You think Go’s breaking changes are not as “egregious” and are “solutions to bugs” while Python’s are “gratuitous”. But that’s a subjective judgment based on your tastes and opinions. Someone else can have an opposite and equally-valid subjective opinion.
Or, more bluntly: “No breaking changes” nearly always turns out to actually mean “Breaking changes, but only ones I personally think are justified/tolerable, and I don’t think yours are”. Which is where this thread predictably went within the space of just a few replies.
Getting back to my original claim: change is inevitable. Entities which produce software can adapt to it and make it a normal and expected part of their processes, or they can suffer the consequences of not doing so. There is no third option for “all external change stops”. Nothing that lives does so changelessly.
No I don’t think so. It has nothing to do with opinion and everything to do with experience. What someone sees directly is how they perceive reality. I’m expressing my perception of reality as someone who has experience of both Python and Go, and m0th is expressing their experience.
I’m not minimising m0th’s view, which is why I phrased it as “in my experience”.
Change is inevitable, I agree, but I do think the degree matters. Python makes breaking changes often, removing APIs, etc. Go does not and only with very good reason.
Again, the original claim at the start was that there are languages which don’t have breaking changes, and Go was specifically named as an example. That has now been walked back to the kinds of statements you are making. And your statements are, effectively, just that in your opinion one language’s (Go) breaking changes are justified and not too frequent, while another language’s (Python) breaking changes are not justified and too frequent. Which is just you stating your own personal tastes and opinions. And it’s fine as long as you are willing to admit that. It is not so fine to present one’s personal tastes and opinions as if they are objective facts. It also is not so fine to engage in the kind of ad hominem you did about criticism of Go.
I didn’t make that claim, so I’m not sure why you’re arguing with me about it.
I did? I explicitly said “In My Experience”. I’m not sure how I can be any clearer.
What are you even talking about? I legitimately can’t even understand what you’re referring to. Where’s the “ad hominem” comment I made? I think the most controversial thing I said was
“Python makes breaking stdlib changes in minor version releases.”
Which is objectively true as mentioned by other commenters. I made no judgement about whether it was justified or not. It’s also not “ad hominem”, which would require Python to have a position on this.
Anyway, I think you’re taking this way too personally for some reason. I like Python, I have used it for many years. But I’m out.
IIUC, ubernostrum is referring to you explaining that fasterthanli.me is not a reliable source of information. That’s not an ad hominem though. His attacks are so scattershot that if someone cites them, it’s totally fair to just handwave them away.
It’s this:
You don’t bother to engage with the content, just brush off the author with a quick smear and then move on. Which is a problem, because the content is reasonably well-argued, from my perspective as someone who has a bit of Go experience and initially couldn’t quite describe why it rubbed me the wrong way. Putting into words the way that Go (which is not unique in this, but is a prominent example of it) likes to pretend complex things are simple, rather than just admitting they’re complex and dealing with it, was extremely useful for me, both for giving me a way to talk about what bugged me in Go and because it relates to things I’ve ranted about in the Python world previously (most notably why Python 3’s string changes were the right thing to do).
Go the language doesn’t have breaking changes, in the sense that code written against a 1.0 language spec will, generally, continue to compile and run the same way against any 1.x language spec compiler. This isn’t an absolute statement of fact, but it’s a design principle that’s taken very seriously and violated very rarely. The tooling doesn’t necessarily abide by the same rules.
Your comment is a bit of a roller coaster.
I’m not even sure what that’s trying to say.
If you just ignore all those breaking changes they made..
I only remember Modules. After Modules landed I never experienced any breaking build when updating to a new Go version. A whole different story was updating Java versions and the dozens of subtle ways they can break your service, especially at run-time (caused by some Spring DI magic).
What breaking language changes have been made?
Modules seems like the biggest change.
I mean, Go modules are not part of the Go 1 stability guarantee. In my opinion, this shows how limited Go’s stability guarantee is: Go 1 is stable, but Go is not, at least if you are using “go build” to build your Go code.
I agree but that’s not a language change, and the impact is not as significant as random language changes. I can still use and build a package from 8 years ago with no issue. And whatever tool they happened to use to manage dependencies 8 years ago will still work fine (probably).
I saw some breaking changes in one of our old projects. Written in the Go v1.4 era, IIRC. I checked it with a Go v1.14 release, and boom, it doesn’t compile due to the module changes.
Yes, it wasn’t that hard to fix (it only took a few minutes of Internet searching), but I still count that as a breaking change.
When was there ever a breaking change in a new version of ecmascript? Node is not a language.
There’s a bunch of things that vary in stability.
Many packages are extremely careful not to break consumers. Others are less so. The experience you have with backwards compatibility tends to track the backwards compatibility stance of the worst thing in your entire dependency tree. When you have tens of thousands of transitive dependencies, you usually end up seeing 99.99%-ile bad behaviour somewhere in there at least once.
The core problem is that a handful of “core” packages break, and Node’s ecosystem is far too plugin-heavy, so many of the things you use have three layers to them (all maintained by different people)
The ecosystem would be a lot more stable if we were vendoring in packages more
Node is a development target. The pure abstract notion of a “language” doesn’t really matter here, because people write code for Node specifically.
And Node does make breaking changes. It literally makes semver-major releases, with backwards-incompatible changes that can break existing npm packages (mainly due to changes in the native API, but there also have been changes to Node’s standard library).
For example, I can’t build any projects that used Gulp v3. Node broke packages that it depends on, and I either have to run a deprecated/unsupported/outdated version of Node, or rewrite my build scripts. OTOH a Rust/Cargo project I developed at the same time still works with the latest version of Rust/Cargo released yesterday.
Yes, that’s why I said “Node ecosystem” which breaks all the damn time and not “browser ECMAScript” which has not broken since standardization.
The quote you posted says language, not ecosystem. Your comparison was a false equivalency.
ECMAScript itself is a language, not a language implementation. You can write a lot of code in ECMAScript, but at the end of the day the specification can’t run it; you need an implementation for that. That’s why people are talking about the Node ecosystem: the language didn’t break, the ecosystem did.
Okay, that’s a few fewer keystrokes, but a heavier mental load. I’d rather press F3 to go to the next result (or even re-hit ctrl+f and enter) than remember whether I’m in the “search” mode and hit some random button that has a different meaning in a different context.
When programming, you already have to juggle so many different things in your mind - why complicate it further? I feel like all those vim/emacs articles are just written to justify the time spent learning all those keystrokes and quirks, and all the time setting up the plugins.
I get that concern, but the truth is that after a couple weeks of using vim all the commands and things you use daily become second nature. I’ve had times where I couldn’t tell someone how to do something in vim without having my fingers on the keys so I could see what my movements were. It’s pretty amazing how many shortcuts you can keep in your head.
I’m able to mostly do that by playing the air-qwerty keyboard. Definitely keeping most of my vim knowledge in my muscles, leaving my brain free for how I want the text to change.
You’re actually looking at it the wrong way around. F3 is the random key here. Nobody would ever figure out that key without help. On the other hand, in VI most keys are part of a logical pattern, even though some of those are historical. For example: n is the key you’d press to get to the next search result.
So while most shortcuts in modern day GUI have to be memorized without much context to help*, Vim commands are a language built out of related patterns.
*) That’s of course not the full story. All those shortcuts have a history as well and some are an abbreviation for the action as in ctrl+f(ind) or ctrl+c(opy). But there’s no “copy a word and paste it to the next line” or anything similar one could express with those.
People figure out the F3 key by seeing it in the IDE’s menus - which vim doesn’t have. With vim, you have to put in the effort and actively learn the shortcuts. But even then, I said you can just hit Ctrl+F and enter again to get the same behavior, which is obvious because most software has the same shortcuts, and work the same way.
Ctrl+Shift+Right to select the word, then Ctrl+C, Down arrow, Ctrl+V, am I missing something?
Yes, if you use GVim you get those little helpers in menus as well. That’s a different interface. But the topic should be about concepts. VIM commands are a concept, a language, rather than a list of unrelated commands.
Of course you can do everything that you can do in VIM in any other editor as well. I’m referring to concepts and I might not be very good in conveying that. Sorry.
In the end you can express pretty much anything in any editor with enough keystrokes: the arrow keys exist, after all.
Modal editing tends to be a lot more efficient than non-modal, though, and the keystrokes don’t require you to move your hands much, e.g. to the arrow keys (way off the home row) or to modifiers like Ctrl that require stretching your hands. Modal editors let you use the keys that are easiest to reach, the letter keys, since the editor knows whether you intend to write text or to issue commands. These days I mostly use VSCode rather than Vim, but I always have Vim emulation turned on, because it’s much faster than non-modal editing once you’re familiar with it. Vim is particularly nice because it has a mini language for expressing edits. For example, w means “word,” and can be combined with deletion to delete a word (dw), selection to select a word (vw), “yank” to copy a word (yw), etc. Used alone, without a prefacing action, it simply jumps to the next word after the cursor position. There are many other “motion” nouns like w, and they combine with the action verbs in the same manner, so you can operate on letters, paragraphs, or even more complex targets such as search terms. Command sequences are first-class: your last command can be replayed with a single keystroke, and there’s even a built-in macro verb, q, which records the command sequences you issue and can replay them to execute complex but tedious refactors.
Sure, the bottleneck in programming is typically not between your hands and the keyboard; it’s typically thought. But once you know what you want to express, it’s a very pleasant experience to be able to do it with such efficiency of motion. And since we do ultimately spend quite a while typing, it’s not irrational to spend some time learning a new system to make that time feel nicer.
The real gain is the reduced load on one’s muscles and tendons. Moving to vim bindings has helped me overcome pain in my wrists.
I don’t see it as much for programming, but for writing prose a modal editor is great for forcing me to separate writing from editing. When I write an article / book chapter in vim, I try to write the first draft and then go back end edit. At worst, I try to write entire paragraphs or sections and then edit. I find this produces much better output than if I use an editor that makes it easy to edit while I’m writing.
This is something that the article comes close to saying, but doesn’t actually say: Vim doesn’t just provide a bunch of shortcuts and keybindings for arbitrary operations. Instead it provides a fairly well-thought-out programming language for dealing with text, which happens to also be bound to convenient keys. Crucially, operations in this language can be composed, as in the examples the article gives, so you can build up your own vocabulary of operations even if the developers didn’t account for them. Does this take longer to learn than looking up a shortcut in a menu? Yes, probably. But I suspect that for most vim fans there comes an “aha” moment where the power of this approach becomes apparent, and once you get used to it, you can’t live without it.
I’m not sure “n” for next is “some random key”? And “N” (shift-n[ext]) for previous.
And slash/question mark for search might be a bit arbitrary, but slash sorta-kinda “means” pattern/regex (as in /^something/).
Ed: ok, I’m not sure why op is excited about “find” - I generally “search”... Or move by word (w), beginning/end of line (| and $). See also: https://stackoverflow.com/questions/12495442/what-do-the-f-and-t-commands-do-in-vim#12495564
Move by word - hold ctrl, works in any textarea in any operating system.
Move to beginning/end of line - press “home” or “end” keys, works in any textarea in any operating system.
n is “some random key” on every other software with a text box on your computer. Vim is inferior in that aspect, not superior.
) moves by sentence and } moves by paragraph. I miss that all the time while writing prose outside of vim. Since they’re motions, you can combine them with delete/yank/replace/whatever, so d) deletes to the end of the sentence.
Option up/down moves by paragraph.
I’m on Windows
The mental load goes away after a while and it just becomes muscle memory. Each saving by itself is small, but they all add up over time. One I used recently was “%g/^State \d/norm! f:lD”: on every line that starts with “State” followed by a number, delete everything after the first colon. That would have taken several minutes without it; with it, it’s just a couple of seconds. When I’m constantly getting savings like that, it’s worth it.