A couple of years ago Go, as with so many of its decisions, flouted conventional wisdom and went with a brand-new approach to package management: it did away with lock files and fancy dependency version constraint solvers and introduced vgo with its “minimal version selection” algorithm. (https://research.swtch.com/vgo)
At the time I was very curious how it would turn out:
But I’ve had a very hard time finding any “debriefings” or “retrospectives” on how this bold approach is working out. So for those of you who code in Go seriously, but also have a lot of experience with Bundler-lineage package managers (e.g. JavaScript’s npm or yarn, Rust’s cargo, Elixir’s mix/hex, Python’s poetry but not pipenv or others, Dart’s pub, etc.), what’s the verdict? How does the experience compare?
Have there been any talks given by Russ Cox or other related team members where they’ve opined on this decision?
It works incredibly well in my experience. There are two main pain points.
First is related to certain projects not really following the rules: grpc-go and Kubernetes. grpc-go in particular has caused lots of headaches due to removing APIs in minor releases. Kubernetes has had a crazy build system that preceded Go modules, so maybe it will get better over time.
The second pain point is around managing multi-module repositories. It kinda works by using replace directives but it’s pretty hacky, to the point where I’m actively avoiding using multiple modules in projects even when it “should” be a good fit. There’s a proposal on the table to add workspace support to the go command that might make this work better.
Overall, despite these pain points, it’s easily the best package manager I’ve used. It’s simple and predictable.
Minimal Version Selection itself is fine; in my experience it’s neither better nor worse than what I guess is the de facto standard of… Maximal {Patch, Minor} Version Selection? It biases for churn reduction during upgrades, maybe, which at least for me doesn’t have much net impact on anything.
But a lot of the ancillary behaviors and decisions that come with MVS as expressed in Go modules are enormously frustrating — I would go so far as to say fundamentally incompatible with the vast majority of dependency management workflows used by real human beings. As an example, in modules, it is effectively not possible to constrain the minor or patch versions of a dependency. You express module v1.2.3, but if another dependency in your dep graph wants module v1.4.4, you’ll get v1.4.4. There is no way to express v1.2.x or v1.2.3+. You can abuse the require directive to lock to a single specific version, but that constraint isn’t transitive. And module authors can retract specific versions, but that power is not granted to consumers.

Edit: for more of my thoughts on this exciting topic see Semantic Import Versioning is unsound
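To make the complaint concrete, here is a minimal go.mod sketch of that situation (module paths are hypothetical):

    module example.com/app

    go 1.17

    require (
        example.com/lib   v1.2.3 // what this project asks for
        example.com/other v1.5.0 // its own go.mod requires example.com/lib v1.4.4
    )

    // MVS selects example.com/lib v1.4.4, the maximum of the declared
    // minimums; there is no syntax here for a constraint like "v1.2.x only".

The require line is a lower bound, not a pin; anything that raises the bound elsewhere in the graph wins.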
I hope that this post with the experience reports of many real human beings claiming that it works well for them will help you reconsider this opinion. Perhaps it’s not as much of a vast majority as you think it is.
None of the experience reports really speak to the points I raise. Maybe that’s your point.
The position of the Go team is (was?) that packages with the same import path must be backwards compatible. I guess their point is that 1.4.4 should be compatible with 1.2.3, but that’s not how the rest of the software world has worked in the past decade. It’s a big if, but if all the Go programmers agree, it works.

That’s not (only?) the position of the Go team, it’s a requirement of semantic versioning. People fuck it up all the time but that’s the definition. One way to look at the problem with modules is that they assume nobody will fuck it up. Modules assumes all software authors treat major version API compatibility as a sacrosanct and inviolable property, and therefore provides no affordances to help software consumers deal with the fuckups that inevitably occur. If one of your dependencies broke an API in a patch version bump, well, use a different, better dependency, obviously!
Ivory Tower stuff, more or less. And in many ways Go is absolutely an Ivory Tower language! It tells you the right way to do things and if you don’t agree then too bad. But the difference is that if you don’t like the “I know better than you” decisions that Go the language made, you can pick another language. But if you don’t like the “I know better than you” decisions that modules made, you can’t pick another package management tool. Modules has a monopoly on the space. That means it doesn’t have the right to make the same kind of normative assumptions and assertions that the language itself makes. It has to meet users where they are. But the authors don’t understand this.
Semantic versioning is, unfortunately, incompatible with graceful deprecation. Consider the following example:
In SemVer, these things would be numbered something like 1.0, 1.1, 2.0. The jump from 1.1 to 2.0 is a breaking change because the old API went away at that point. But if you paid attention to deprecation warnings when you upgraded to 1.1 and fixed them, then the 1.1 -> 2.0 transition is not a breaking change for you. SemVer has no way of expressing this with a single exported version, which leads to complex constraints on the import version (in this three-version case, the requirement is >=1.1, <3.0). A lot of these things would be simpler if APIs, not packages, were versioned, and the package advertised the range of APIs that it supported. Then you’d see:
As a consumer, I’d just specify that I need the 1.0 API until I’ve migrated my code and then that I need the 2.0 API.
So does everyone else in the world who builds with ^version and no lock file, except for them, the breakage happened when the author of their dependency published a new version rather than when they themselves performed a conscious action to alter dependencies in some way.

Yeah, but in most package management systems you can pin specific versions.
You can use an exclude directive to remove the breaking version from consideration. If upstream delays fixing the breaking change, you can fork the last good version and use a replace directive to substitute your own patched module in place of the old module.

It’s hard to imagine this hypothetical happening, however (and please correct me if I’m wrong). MVS selects the maximum of minimum required versions. If some module called “broken” publishes a patch and the new version breaks your own software, there is no way for that update to propagate to your software unless both 1) a different dependency of your module, called “dep”, decides to start requiring the new version of “broken”, and 2) you update your dependencies to require the updated version of “dep”. (1) implies that “dep” truly requires the patch (that broke your code), and (2) implies that you truly require the new features in module “dep”. By transitivity… there is no conceivable way to fix the problem without patching it yourself and replacing.
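For reference, a rough go.mod sketch of the exclude/replace escape hatches described above (module paths and versions are made up):

    module example.com/app

    go 1.17

    require example.com/broken v1.3.0

    // Drop a release that broke compatibility; MVS ignores it and uses the
    // next acceptable version instead.
    exclude example.com/broken v1.3.1

    // Or substitute a patched fork everywhere in the build, including where
    // transitive dependencies use the module.
    replace example.com/broken => example.com/yourfork/broken v1.3.2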
There’s actually a whole paragraph on this topic of “high fidelity” in Russ Cox’s original blog post about the version selection algorithm.
I meant “broke an API” in the “API compatibility” sense, not in the “wrote a bug” sense. That kind of broken carries forward.
So your article states: “It’s a contract with its consumers, understood by default to be supported and maintained indefinitely.”
I don’t think this follows from anything you have written or anything I have read about SIV. The way SIV works sounds to me like this: if you want to deprecate features from your library, you should provide a reasonable deprecation policy, which includes a time period for which you will provide bug fixes for the old major version and a time period for which you will backport security fixes, at which point you stop supporting that version, since you’ve done the best you could to get old users moved to a new version. This, to me, seems to be how a lot of major software (not written in the past 5 years) basically works.
“At Google, package consumers expect their dependencies to be automatically updated with e.g. security fixes, or updated to work with new e.g. infrastructural requirements, without their active intervention.”
I expect this to happen on my linux desktop too. I don’t see a difference in expectations there.
“Stability is so highly valued, in fact, that package authors are expected to proactively audit and PR their consumers’ code to prevent any observable change in behavior.”
I think if you feel like writing a library/module/dependency then this is the kind of mindset you are obliged to take. Anything short of this kind of approach to writing a library/module/dependency is irresponsible and makes you unqualified to write libraries, modules or dependencies. This, to me, seems to have been the mindset for a long time in software until languages came along with language package managers and version pinning in the last few years. And I don’t think that this has been a positive change for anyone involved.
“As I understand it, even issues caused by a dependency upgrade are considered the fault of the dependency, for inadequate risk analysis, rather than the fault of the consumer, for inadequate testing before upgrading to a new version.”
And I agree with this wholeheartedly, in fact this is the mindset used by users of linux distributions and distribution maintainers.
“Modules’ extraordinary bias toward consumer stability may be ideal for the software ecosystem within Google, but it’s inappropriate for software ecosystems in general.”
I think it’s not inappropriate, it’s totally appropriate. I just think that modern software ecosystems have gotten lazy because it’s easier than doing it right (which is what google seems to be advocating for a return to).
I should say, I don’t disagree with the point you make that intrinsically linking the major version to the package name is a good idea. Go should definitely NOT do that for the reasons you outlined. It would also be an easy indicator for me when picking a project to use in my codebase: Is the codebase on major version 156? Yes? Then I probably don’t want to touch it because the developers are not taking the responsibility of maintaining a dependency very seriously.
People who want to play in the sandpit of version pinning and ridiculously high major version numbers because they think software development is an area where no thought or effort should be put into backwards compatibility should be welcome to use whatever language they want to without being artificially limited.
Now, conversely, I would say there seems to be an obvious solution to this problem too. If you want to use semver while keeping to the Go rules, why not just encode the real semver version within the Go version, e.g. “0.015600310001”? Sure, it’s not exactly human-readable, but it seems to encode the right information and you just need to pretty-print it.
“Additionally, policies that bias for consumer stability rely on a set of structural assumptions that may exist in a closed system like Google, but simply don’t exist in an open software ecosystem in general.”
I will take things back to the world of linux distributions where these policies actually do seem to exist.
“A bias towards consumers necessarily implies some kind of bias against authors.”
Yes, and this is a good thing. Being an author of a dependency is a very big responsibility, and a lot of modern build systems and language package managers fail to make that very clear.
“But API compatibility isn’t and can’t be precisely defined, nor can it even be discovered, in the P=NP sense.”
This is true, but in reality there’s a pretty big gulf between best effort approaches to API compatibility (see: linux kernel) and zero effort approaches to API compatibility (see: a lot of modern projects in modern languages).
“Consequently, SIV’s model of versioning is precisely backwards.”
Actually it would be semver’s fault, not SIV’s, surely.
“Finally, this bias simply doesn’t reflect the reality of software development in the large. Package authors increment major versions as necessary, consumers update their version pins accordingly, and everyone has an intuitive understanding of the implications, their risk, and how to manage that risk. The notion that substantial version upgrades should be trivial or even automated by tooling is unheard of.”
Maybe today this is the case, but I am pretty sure this is only a recent development. Google isn’t asking you to do something new, google is asking you to do something old.
“Modules and SIV represent a normative argument: that, at least to some degree, we’re all doing it wrong, and that we should change our behavior.”
You’re all doing it wrong and you should change your behavior.
“The only explicit benefit to users is that they can have different versions of the “same” module in their compilation unit.”
You can achieve this without SIV, SIV to me actually seems like just a neat hack to avoid having to achieve this without SIV.
In any case, I think I’ve made my point mostly and at this point I would be repeating myself.
I wonder what you think.
…is a means and not an end are the norm, not the exception. And the fact that people work this way is absolutely not because they’re lazy, it’s because it’s the rational choice given their conditions, the things they’re (correctly!) trying to optimize for, and the (correct!) risk analysis they’ve done on all the variables at play.
I appreciate your stance but it reflects an Ivory Tower approach to software development workflows (forgive the term) which is generally both infeasible and actually incorrect in the world of market-driven organizations. That’s the context I speak from and the unambiguous position I’ve come to after close to 20 years’ experience in the space, working myself in a wide spectrum of companies and consulting in exactly these topics for ~100 orgs at this point.
Google has to work this way because their codebase is so pathological they have no other choice. Many small orgs, or orgs decoupled from typical market dynamics, can work this way because they have the wiggle room, so to speak. They are the exceptions.
Disagree.
At least don’t call the majority of software developers “engineers” if you’re going to go this way.
The fact that this is considered an engineering discipline with such low standards is really an insult to actual engineering disciplines. I can totally see how certain things don’t need to be that rigorous, but really, seriously, what is happening is not par for the course.
The fact that everyone including end users has become used to the pathological complacency of modern software development is really seriously ridiculous and not an excuse to continue down this path. I would go so far as to say that it’s basically unethical to keep pretending like nothing matters more than making something which only just barely works, within some not even that optimal constraints, for the least amount of money. It’s a race to the bottom, and it won’t end well. It’s certainly not sustainable.
It’s incorrect in the world of market-driven organizations only because there’s a massive gap between the technical ability of the consumers of these technologies and the producers, so much so that it’s infeasible to expect a consumer of these technologies to be able to see them for the trash they are. But I think that this is not “correct”, it’s just “exploitative”: exploitative of the lack of technical skill and understanding of the average consumer of these technologies.
I don’t think the “correct” response is “do it because everyone else is”. It certainly seems unethical to me.
That being said, you are talking about this from a business point of view not an open source point of view. At least until open source got hijacked by big companies, it used to be about small scale development by dedicated developers interested in making things good for the sake of being good and not for the sake of a bottom line. This is why for the most part my linux system can “just update” and continue working, because dedicated volunteers ensure it works.
Certainly I don’t expect companies to care about this kind of thing. But if you’re talking about solely the core open source world, “it’s infeasible in this market” isn’t really an argument.
I honestly don’t like or care about google or know how they work internally. I also don’t like go’s absolutely inane idea that it’s sensible to “depend” on a github hosted module and download it during a build. There’s lots of things wrong with google and go but I think that this versioning approach has been a breath of fresh air which suggests that maybe just maybe things may be changing for the better. I would never have imagined google (one of the worst companies for racing to the bottom) to be the company to propose this idea but it’s not a bad idea.
…reality. I appreciate your stance, but it’s unrealistic.
I think it’s just the bees knees. At this point I’m irritated using anything besides MVS. In practice, it does exactly what I want, dependencies get updated at a decent schedule (and I can always force them to update myself), and everything has a layer of predictability that is lacking in other systems. It’s too bad that it took so much drama to get to this new optimum state but I’m glad Go pushed through with it and hope more systems adopt MVS soon.
Honestly, it makes me wonder what other local optima need to be re-evaluated with a fresh perspective.
I use Go a lot at work, and honestly while the MVS makes me incredibly uncomfortable, I haven’t had a ton of problems with it. I’m not aware of any CVEs/security issues for our deps, but it’s very possible that some are lurking in our products and will stay there. I also haven’t run into bugs in transitive deps that have caused issues, but again, it’s something I’ve worried about.
However, because of these fears, I have tried to use go get -u -t to update all my deps, and that typically always breaks. Between subtle backwards-incompatible changes that modules have snuck in and repository renames, trying to upgrade all my deps usually breaks in some way.

I don’t think Go’s approach is really that unusual: the conventional wisdom in other ecosystems is that you should always use a lockfile, so dependencies are already version-locked and upgrades have to be performed manually.
The only new thing Go does is, in an oversimplified description, that it synchronizes the version lock of direct dependencies back from the lock file (go.sum) to the dependency declaration file (go.mod). If you use the CLI to manage dependencies, you’ll barely notice any peculiarity.
MVS is also not a very radical idea. What it does is restrict version constraints to only one operator, ≥. It does not need more operators because Go requires backward-incompatible versions (“major versions” in semantic versioning) to be effectively different modules. This means that a module is expected to maintain backward compatibility, so you only need ≥. If the module developer accidentally broke backward compatibility, the expectation is that they’ll fix it shortly, and the few incompatible versions can be manually marked with the exclude directive.

go.sum is not a lock file. It has no effect on the dep graph. It’s used only to verify that modules are not corrupt.
You’re right. The analog I had in mind was that go.sum contains the versions of transitive dependencies, like a lock file. But it is not actually needed to determine the versions; those can be derived solely by looking at the go.mod files of all the modules involved.

I can’t edit my comment now, but it should just say that go.mod contains the version lock for the direct dependencies.

That’s right, and even more, it’s not consulted to determine versions.
Not exactly — it expresses a lower bound, but modules can and will substitute any equal or greater version.
Right… My oversimplification is indeed, oversimplified. :)
The comparison I’m really trying to make is that go.mod has some property of a lock file: it does not say “dependency X can be any version >= V”, it says “dependency X must be version V, unless another module (in the dependency DAG) requires a higher version”.

A hopefully accurate analog: the collection of all go.mod files in the dependency graph forms the lock file, with each go.mod “semi-locking” its direct dependencies - “semi-lock” meaning that the version can be overridden by another go.mod requiring a higher version.

The go.sum file is a lockfile in content, but not in role. Since the version lock can already be calculated from the go.mod files, go.sum is just a record of the calculated result, used to prevent corruption (as opposed to being the source of truth).

Go’s key contribution is the observation that any ecosystem already has to adopt some kind of convention to reflect backward compatibility in version numbers. Even if your tool can handle arbitrary version constraints, humans will have a very difficult time managing dependencies if they break backward compatibility arbitrarily, at which point you’ll more likely end up vendoring or forking rather than coming up with the correct version constraint.
Go just chose a reasonable and well-understood convention (semantic versioning) and codified it, which allowed it to be much simpler. It also gave you the “nuke” option of replace to deal with badly behaving dependencies, which can swap out transitive dependencies without touching the intermediate dependencies.

Go modules’ interpretation of semver is significantly stricter than what semver actually mandates.
For example, in semver, a major version expresses API compatibility and an undefined notion of stability — which is good! Stability isn’t an objective measure, it’s different from project to project, workflow to workflow, ecosystem to ecosystem. But Go modules asserts a specific and strict definition of stability which only applies to a very narrow slice of all software. If your project isn’t in that band, the result for you (and your consumers) is enormous and unnecessary toil. This mistake — essentially the assumption that every module is maintained by a team of engineers and consumed by an enormous number of users — is repeated, fractally, throughout modules’ design.
I wouldn’t call that a mistake; it’s partially a deliberate design choice to optimize for the kind of libraries you refer to (which tend to already maintain a high degree of backward compatibility), and partially an optimistic view that these design choices will institute a stronger culture of backward compatibility (within the same major version) in the Go ecosystem than other ecosystems.
I haven’t maintained Go projects with very complex dependencies, so I don’t really know how this has turned out for people in that situation. From your other comments it seems that you have had a bad experience, but - not to belittle your personal experience - I’d really like to know what that data looks like on the scale of the entire Go community.
The latest edition of the Go developer survey reported a “satisfaction rate” of 77% on the module system, but it didn’t have any breakdown in different aspects of the module system. I hope that future editions of the survey will dive more into the aspect of managing version upgrades.
For the record, MVS is more or less what Clojure’s deps uses as well. I think there are two things that make MVS pretty good in practice:
I think the latter is the most important aspect: Library authors seem to care about backwards compatibility much more than they used to. That automatically makes upgrading easier, regardless of the dependency algorithm.
One problem with Go’s dependency management (but not really MVS) I experience now and then is the entire “built on top of Git(Hub)” issue: Sometimes libraries and programs can just disappear.
The worst part of that aspect is the case where you depend on a library, found and resolved a bug, and need to use that version before it’s merged into the upstream repository. In those cases, you’re left with two options:
i.e. you have to pick between potentially failing builds or potentially stale dependencies. Granted, I don’t think this is worse than what other dependency management solutions do, but it’s a trap that I’ve fallen into a couple of times.
I’m largely in agreement with what Peter has pointed out in a few other places in this thread - talking about MVS often isn’t that helpful, so much as how some of its requirements, and the poor assumptions they’re based on about how software gets written, amplify out into tooling. I won’t rehash those points, but will instead try to focus more narrowly on MVS itself.
MVS has turned out pretty much as i expected. Its best feature is its predictability, as others have pointed to in this thread. It’s not the first version selection algorithm to be predictable/avoid NP-hard search - Maven’s had already been around for years before MVS, and there may be other, older ones in this family of which i’m unaware. Maven makes a different tradeoff - “rootmost declaration wins,” vs. “largest version number wins” in MVS.
The scenario i was most concerned about was a social one: that the baked-in premise of “maintainers are expected to promptly adapt to the inevitable, unavoidable changes-that-break-them in their dependencies in order to keep MVS’ assumptions true” would become justification to pile onto maintainers even more and demand free labor from them. That hasn’t happened AFAICT, and in retrospect, it seems a bit silly to have been so concerned about it. (I was feeling quite momma-bear, wanting to protect the community from the same kinds of “use you for your labor then discard you” feels i was in.)
The biggest problem with MVS is ambiguity: it exits 0 - a false positive - whether there’s an obvious, reasonably-knowable, or non-obvious problem in the set of dependency versions it’s picked. Existing algorithms generally in the bundler/cargo/dep/pub/npm7 family (NP-hard search; preferring the latest version and avoiding known incompatibilities) clearly can do better at avoiding some such problems, at the cost of predictability. And, of course, they’re also still ambiguous - they’re not omniscient, and thus can’t avoid all false positives.
(Aside, a pet peeve: “predictability” != “determinism.” i am not aware of any nondeterministic version selection algorithms in production tooling.)
The price of this ambiguity is most visible in the sort of issues Peter’s mentioned. It’s felt more in larger projects with deeper dependency graphs - e.g., k8s and anything depending on it. When things seem to work “well enough” with that exit 0 from MVS, you accept it and push ahead, because who’s gonna spend an hour spelunking a huge dep graph when everything at least seems fine? The result is what i called contagion failure in slow motion, manifesting like this. And once the seal’s broken, does doing it more really matter?
Ultimately, my view is that dealing with the information and ambiguity problems is the main path forward in this problem space. In that light, MVS is a local optimum, and the ideal version selection algorithms are on the NP-hard side, assuming we can also get predictability under control at the same time as ambiguity. (Clearly, i think that’s feasible.) Once we have those, the only sound basis for choosing MVS anymore will be as an easy-to-implement stepping stone. My guess is that Maven’s algorithm and encoding (the POM equivalent to require statements) will be preferable for that purpose.

I can only say that it works so well that I never needed to understand it deeper. My biggest chore related to dependencies is to clean go.sum files from time to time. I use go mod tidy for that.

I also run a local Go modules proxy (Athens) at home, simply because I often update dependencies in multiple projects.
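For anyone curious, the routine maintenance described in this subthread usually amounts to a few standard go commands; this is just a sketch of a common workflow, not a prescription:

    go list -m -u all   # show available updates for modules in the build list
    go get -u ./...     # move direct and indirect deps to newer minor/patch versions
    go get -u -t ./...  # the same, but also considering test dependencies
    go mod tidy         # add missing requirements and drop ones no longer needed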
You do not need to clean go.sum. That file is an (almost) append-only log of checksums of modules that may be considered when building your dependency graph, and it’s used only to detect corruption or malfeasance. It’s not a lock file, and it doesn’t really represent the size or state of your dependency graph. (At best, it has a correlative relationship.) You should check it in for safety reasons, but there is basically no reason you should ever look at it, or care how big it is.
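To make that concrete, go.sum entries look roughly like this (module path and hashes are placeholders); each module version gets one line for its content and one for its go.mod file:

    example.com/lib v1.4.4 h1:0123456789abcdefghijklmnopqrstuvwxyzABCDEFG=
    example.com/lib v1.4.4/go.mod h1:9876543210zyxwvutsrqponmlkjihgfedcbaGFEDCBA=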
Thanks, it is good to know! I believe it is just a personal quirk, wanting to delete lines that are not required.
I don’t think “required” is actually well-defined, when it comes to lines in go.sum.
Consider a situation where you were using foo@v1.0.0 and then updated to v1.0.1: the record in go.sum related to the older version does not serve any purpose. Is this a valid assumption?
Well, does a transitive dependency depend on v1.0.0? Depending on how optimized the modules tool is, it could still need to download and evaluate it.
I see this makes sense, and my understanding was the same.
I have slowly navigated towards the minimal possible number of dependencies in my code. Bumping a single dependency typically (but not always) leaves me with just one version required.
IIRC go.sum should only “shrink” when you switch or remove a dependency, right?

I don’t think you can make very many authoritative claims about the relationship between your dependency graph and the contents of go.sum. The rules that govern that relationship change from version to version.
Yup. I remember fiddling about with packages in Node and Python so much, but never really had to worry in Go.
With Go, there was the occasional early issue with people changing the capitalisation of their usernames (and causing package breakages), and other initial teething issues, but when everyone follows the rules (and most do), it works fine!
Here is a useful description of vgo from Cox with the details if you aren’t familiar with vgo and want to follow along with the discussion: https://research.swtch.com/vgo-principles
I used Go at work 2+ years ago, right when the new tooling was in beta, and it was such a relief then to ditch glide, dep, and the rest. For what it’s worth, the MVS algorithm seemed like it just codified the semantics that bundler, et al have in practice — in production applications, you move a single package at a time forward and let the minimum amount change.
I haven’t heard loud complaining in my circles, but I also exited Go development as I spent more time with sugary but still type-safe languages like Kotlin (mmm extension methods) and Typescript (mapped types give type-heavy Typescript an interesting feel), so I didn’t end up with a lot of first-hand experience with the new Go tooling.
My experience is that everything was shit for a few months surrounding the beta and “stable” release, then solidified nicely… So YMMV. :)
Rust/Cargo uses maximum semver-compatible versions + lockfiles. It works ok. It’s not perfect, but it’s waaaay more reliable than npm — probably because of Rust’s strictness, built-in testing, and because the Rust compiler has much stronger backwards-compatibility than Node.js.

There’s an occasional “oops” when someone releases a patch update that should have been semver-major. It’s usually promptly resolved with a bugtracker complaint + yank. I’m a regular user of Rust, and I’d say I roughly waste an hour per year on the fallout of such things.
Cargo also supports an unstable -Z minimal-versions flag that resolves versions the way Go does. In theory it should work, as long as everyone has set minimum required versions precisely. However, the Cargo ecosystem is not compatible with MVS. Crate authors rely on Cargo picking latest versions, and don’t bother bumping version requirements themselves. There are many crates that require foo = "1.*", but then by accident use a feature added in 1.1. People do care about respecting semver for maximum version resolution.

Cargo/npm: I dread doing updates.
In cargo I have had breakage (requiring refactoring of my code) with updates to rust AND with updates to dependencies (direct or indirect).
In npm I have had breakage with updates to dependencies. Again requiring me to refactor code.
With Go: no breakage.
So. Many. Dependabot PRs.
I’ve followed its development and tooling issues from the start. I love MVS and the overall design. I think the biggest pain points have been tooling UX (which has come a long way and is getting quite nice) and SIV (semantic import versioning). In hindsight, I think SIV causes people a lot of UX issues. I think possibly there could have been an alternative convention for breaking changes since SIV is really just a convention that tooling supports and library authors have to understand and remember to use correctly anyway.
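For readers who haven’t run into SIV: the convention is that major versions v2 and up become part of the module path and import path, so both majors can coexist in one build. A hypothetical consumer go.mod:

    module example.com/app

    go 1.17

    require (
        example.com/mylib    v1.8.0 // the pre-v2 API
        example.com/mylib/v2 v2.1.0 // the breaking v2 API, a distinct module path
    )

In source files the import path carries the suffix too, e.g. import "example.com/mylib/v2", which is exactly the part that authors and consumers have to remember to get right.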
Surprisingly well. I wasn’t sure about it in the beginning, but stayed open-minded.
To be fair though, I think that things working great might be an after-effect of originally having no versioning. So initially there was a strong emphasis on creating good, stable APIs and less of a “let’s try and see” approach.
This means that various widely used libraries are more (API-)stable than in other ecosystems on average. Maybe that is slowly changing, but since there’s that set of early, now-established libraries, I don’t think the effects would be seen too quickly.
Of course, one might argue that the philosophy of things like the compatibility promise also has an effect, and it certainly does, but I would not say this is a general theme, just as the style guides etc. laid out in official documentation aren’t followed by everyone, at least not beyond what the tooling tells you.
In other words, I would not assume that another language’s ecosystem would work that well without all of that context. I’m not saying it wouldn’t work, of course; it’s just maybe something to keep in mind when making a decision based on Go doing this.