On one side I am always impressed by the improvements that each release brings.
On the other side I feel that some of the energy spent to add new features should be better spent by polishing existing features (thinking of https://github.com/golang/go/issues/60940 which is “accepted”, but stuck for months…)
I agree. Honestly I think there has been a shift in the focus of the paid development. Go used to be developed more as a general language focused on certain goals, reflected in design decisions and language changes.
However, that appears to have basically been replaced with “Go is a language by and for Google”. This isn’t bad in and of itself, and probably not all that surprising.
However, for people whose ideals aligned with the initial design goals and essentially the people around it (the Plan 9 folks, etc.) it feels a bit like a slap in the face.
This ranges from “don’t expect more big language changes other than generics” (which, btw, had been mentioned in their FAQ as probably coming since before 1.0), and trying to get people away from thinking about the garbage collector (making it hard or impossible to really affect it), to more and more exported runtime methods regarding exactly such topics. Compatibility has also shifted a bit towards “let’s find hacks and rationales so we technically keep it”, which to be fair is a whole lot better than what you find elsewhere.
Please don’t misunderstand me. I’m not angry at these changes per se, but for people who trusted the claims and promises that were made it’s still not a good picture. If people had wanted a different language, of course they’d have chosen it. It’s not like there’s a shortage of them. I remember how basically the reason for creating Go was not liking C++ (and Java), and regardless of what one thinks about them, it seems obvious that pushing C++ and Java people (and others) into the language makes Go go into that direction.
This isn’t meant to put hate on Go, and certainly not on the people and their work, time and effort. I think they are great. This is meant as an observation of shifts in priorities that might also explain why that issue is still standing. The issue mentioned stems from a claim made during a proposal process. It was claimed that this would be covered, which one has to assume is part of the reason why it was accepted in the first place, and while the majority of the proposal has been implemented, this part wasn’t.
Isn’t that easily achievable from user code? Why do you think this needs to be in the standard library?
(If you’re looking for the “reverse” module, I created a caching http.RoundTripper that can make use of the caching headers and lower the impact of multiple requests on remote servers. Feedback always welcome on it.)
That’s beside the point. My statement was on how it was put into a proposal for acceptance, but put into a separate, still unresolved issue during the implementation, which is considered complete.
And on the why, I’d argue it’s to align with the already existing fileserver code, which does take care of such things. Also, because this is somewhat tied to embedding, such things would make sense to live in code, just like the files do, rather than being handled at runtime.
But again, I’m not really arguing about the issue here, but about the fact that reaching consensus by claiming features and then not implementing what was decided on makes the consensus/acceptance part of the process kind of pointless.
pushing C++ and Java people (and others) into the language makes Go go into that direction
Alas, in the last few years, I began to receive ever-increasing push back on architecture; as if all of a sudden the community valued clean, (big 4) design patterns etc. It’s been frustrating to encounter. Unwilling to defend the community’s old consensus and relitigate old debates, I find it slowly pushing me away from Go. (And in spite of my love for DSLs and more exotic production systems, I actually don’t want new e.g. functional features in Go.)
If your meter is digital but not quite “smart” enough to have its own connection and API, it might at least have an LED labelled “1000 imp/kWh” or similar. Then you can stick a photo diode on it with a raspi pico and go from there. I have it going into InfluxDB with a grafana dashboard. 1 Wh resolution is not perfect, but not terrible either.
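To make the arithmetic concrete, a tiny Go sketch of the conversion (the pulse detection itself is assumed to happen elsewhere, e.g. on the pico; the helper name is made up):

package main

import (
	"fmt"
	"time"
)

// At 1000 imp/kWh, one LED pulse corresponds to 1 Wh = 3600 J.
const joulesPerPulse = 3600.0

// powerFromPulseGap returns the average power (in watts) implied by the
// time elapsed between two consecutive pulses.
func powerFromPulseGap(gap time.Duration) float64 {
	return joulesPerPulse / gap.Seconds()
}

func main() {
	// Example: pulses 12 seconds apart -> 3600 J / 12 s = 300 W.
	fmt.Printf("%.0f W\n", powerFromPulseGap(12*time.Second))

	// Energy is just the pulse count: 2500 pulses -> 2.5 kWh.
	pulses := 2500
	fmt.Printf("%.1f kWh\n", float64(pulses)/1000.0)
}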
This is an interesting solution to the problem, but I’d prefer being able to perform a “real” reboot and maybe even switch to a different OS, for example.
One idea I thought of is to embed a listener program in the initramfs that simply waits for some external device to send the password (securely via asymmetric key encryption with forward secrecy), pipes the password into cryptsetup as soon as it receives it, and then continues the boot process. Then to reboot you’d have to use some custom program running on some other secure device (phone/laptop) which initiates a reboot over ssh, then waits for the device to show back up and send it the password. It should definitely be possible.
Of course you could also just use a TPM but you’d still need to be careful.
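A very rough Go sketch of that listener idea, assuming the authenticated, forward-secret channel is handled elsewhere and only showing the “receive a passphrase and pipe it into cryptsetup” step; the port, device path and mapping name are placeholders:

package main

import (
	"bufio"
	"log"
	"net"
	"os/exec"
	"strings"
)

func main() {
	// Wait for a single connection from the trusted device. In a real setup
	// this would sit behind the authenticated, forward-secret channel
	// described above.
	ln, err := net.Listen("tcp", ":9999")
	if err != nil {
		log.Fatal(err)
	}
	conn, err := ln.Accept()
	if err != nil {
		log.Fatal(err)
	}
	defer conn.Close()

	passphrase, err := bufio.NewReader(conn).ReadString('\n')
	if err != nil && passphrase == "" {
		log.Fatal(err)
	}

	// Feed the passphrase to cryptsetup on stdin ("-" means: read the key from stdin).
	// /dev/sda2 and "cryptroot" are placeholders for the real device and mapping name.
	cmd := exec.Command("cryptsetup", "open", "/dev/sda2", "cryptroot", "--key-file", "-")
	cmd.Stdin = strings.NewReader(strings.TrimRight(passphrase, "\n"))
	if err := cmd.Run(); err != nil {
		log.Fatal(err)
	}
	// From here the init script would continue the normal boot process.
}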
Note that I can still perform a “normal” reboot anytime I really want it (since this is an old laptop, I can type the password on the attached keyboard)
As in you have a small init-system that just listens for ssh connections and then after a power-outage you can ssh into your init dropbear and give it the disk encryption password and it boots the real system?
I would have never thought this to be possible, but I somehow recalled a comment by @pl that it had already been done! (I would have been interested in the solution btw ;)
Aww thank you, feels good. Glad you figured it out, especially the oddity of kexec on NixOS at the moment; I’ll definitely have to revisit this. Over the weekend I somehow noticed that I can’t get the key from dmidecode any longer, not sure what the appropriate way is nowadays.
“Use the pattern if got, want := ....; got != want”
“Define the function func assertEq[T comparable](t testing.TB, got, want T) in each package, and use that”
“Import testify/assert, use assert.Equal”
Which of those 3 is not using the language? Are if statements part of the language, but function calls aren’t? Which of those is inventing a new domain-specific language? Is got, want := not a domain specific language?
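For reference, the first two options side by side, as a sketch (assertEq here is just the helper signature quoted above, not any particular library; the package name is made up):

// in some_test.go of the package under test
package mypkg

import "testing"

// Option 2: a tiny generic helper defined once per package.
func assertEq[T comparable](t testing.TB, got, want T) {
	t.Helper()
	if got != want {
		t.Errorf("got %v, want %v", got, want)
	}
}

func TestAdd(t *testing.T) {
	// Option 1: plain language, no helper.
	if got, want := 2+2, 4; got != want {
		t.Errorf("got %v, want %v", got, want)
	}

	// Option 2: the helper. Thanks to generics this is type-checked at
	// compile time, and untyped constants like 4 convert as usual.
	assertEq(t, 2+2, 4)
}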
Nitpick: I would however not recommend testify (anymore) since it predates generics (so no type checking, which does not play well with untyped constants).
Shameless plug: my minimal (generics-compatible) assertion library https://code.pfad.fr/check/ (which has some refinements over your proposed assertEq, like Fatal and Log)
To handle fatal v. non-fatal errors, I have a helper that wraps a testing.TB in a struct that converts Fatal calls to mere Error calls. I thought that was a relatively simple way to do it, compared to having to call .Fatal everywhere or switch from using testify to testify/assert as a prefix.
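Something like this minimal sketch of such a wrapper (names made up, not any specific library’s implementation):

package mypkg

import "testing"

// relaxed turns Fatal/Fatalf into Error/Errorf, so a "fatal by default"
// assertion helper keeps going instead of stopping the test.
type relaxed struct {
	testing.TB
}

func (r relaxed) Fatal(args ...any)                 { r.TB.Error(args...) }
func (r relaxed) Fatalf(format string, args ...any) { r.TB.Errorf(format, args...) }

// Relaxed wraps t so that assertions using it never abort the test.
func Relaxed(t testing.TB) testing.TB {
	return relaxed{TB: t}
}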
I actually took inspiration from your library for check!
fatal v. non-fatal errors
Yes, this is one of the main differences:
check is non-fatal by default (append .Fatal() to opt in)
be is fatal by default (wrap t with be.Relaxed(t) to opt out)
Besides since the Log method is attached to the check.Failure object, the emitting line is correctly logged (however it cannot be called during some previous computation, like be.DebugLog. One must use testing.Log directly)
I want to see the next library that is more minimalist than check. It will have a single function called ensure.Good and it will use AI to decide if the result was good. 😂
The problem isn’t having a single assertEq, it’s that in your typical test framework you have like 20 different ones for every kind of comparison you can imagine.
The benefit there is typically one of pretty printing, right? That is, if the assertion fails, you can very quickly see exactly what went wrong and what the difference was. I don’t write much Go, but when I use those sorts of assertion libraries in other languages, that’s what I get out of them, and it’s usually a feature that’s worth its weight in gold when a test starts failing and you want to see exactly why that’s happened.
Can’t really say whether this particular team or the advertised product is any good; but I agree with several of the points in that manifesto. In particular I still don’t quite understand why there is so little demand for a) local-first and b) vendor-independent CI/CD pipelines.
I don’t want to be locked into Github or Gitlab or even into the on-premise Jenkins, and instead want to be able to switch to different CI/CD tools without rewriting the pipeline definition. And I especially want to run the pipelines locally, just like I can run Make/Ninja locally. Debugging a CI/CD problem with our on-premise Jenkins litters the Git history with trial-and-error commits – this doesn’t happen for any other problem I have to debug!
Why do I so rarely read about this? Am I overlooking some essential feature of CI/CD pipelines that makes locally-run pipelines really unfeasible? Or is this just an idea whose time is yet to come?
I think there is plenty of demand – the problem is that it’s a lot of work to make! You actually get into programming language and (distributed) operating system issues, which people don’t realize until they are halfway into the problem.
That is, people underestimate the problem of CI. Building a general CI system is essentially the same problem as building a “distributed OS” like Kubernetes – you are mediating code/data/users across hardware/networks.
This also means there is a strong tendency toward an “inner platform effect”
Looking at Glu, I don’t believe Go is the right language for it. e.g. the builder.New example here:
It might be good enough in many cases, but probably not something I want to use. I have been maintaining a big heterogeneous CI for many years now.
“Vendor independent” is very hard – it’s a big enough, and profitable enough, problem that naturally “vendors” will arise, like Github Actions, Gitlab, etc.
I think Earthly has some good ideas – however they seem to have fallen into the “trap” of inventing their own ad hoc language. Instead of a YAML-ish language, it’s more like a Docker/Make-ish language. (Though I’d definitely be interested in a defense of Earthly)
Here’s a post I wrote 3 years ago about this problem:
As far as I can tell, basically all the same problems exist that existed 3 years ago … All the complaints we have now are probably in that thread. But it’s not for lack of demand! It’s because it’s hard to build
Also, YSH is supposed to be for this exact use case. I mentioned a few days ago that the best way to describe YSH is shell+Python+YAML squished together. Which is what a CI needs.
I am not expecting that many people to be convinced without building a demo myself, but if anyone does happen to be convinced, definitely feel free to ping me by e-mail or on Zulip :-)
Oils is 100% open source and has been for 8 years. That will continue to be true, but it’s also taught me why there is no high quality open source CI – basically because you need a bunch of skilled people to build it full time.
And also because developing a programming language is hard/expensive, and I think the problem needs a custom language. Again, the Earthly example is strong evidence of that.
It’s fine to have an opinion on python, but remember that it’s yours, and other people might not care about it
tbh, this kind of opinion comes across as a defense of Python by someone who hasn’t used better languages. For the record, my day job is writing Python, but even I can acknowledge its shortcomings despite being my livelihood.
Am I overlooking some essential feature of CI/CD pipelines that makes locally-run pipelines really unfeasible?
Good question. I can try to answer it based on my experience with a fairly specialized CI implementation (cross-platform C/C++ testing).
The first clarifying question to answer is what you mean by “local”? Most CI implementations these days run jobs in containers or VMs. So local could mean run a VM/container on my local machine and the job inside or it could mean run the job directly on my local machine. I am going to assume you mean the latter since you are talking about debugging the job, which would still be a pain in a local VM/container.
In our CI implementation we can actually run the worker (the program that executes the CI job) locally, though it is used mostly for testing the worker rather than the jobs. And it does require some ceremony. Firstly, the worker expects the job description in a machine-readable format, so to make it convenient to run locally, we would need to replicate this as some sort of a command line interface. Likewise, the result is in a machine-readable format, with logs, etc. Dumping the logs to stderr would probably be good enough. More importantly, the worker expects a certain setup in the VM (we use VMs) in which it runs. In our case this is pretty minimal (we have an indirection in the form of “environment scripts” which set up the execution environment). But I can imagine that in other CI implementations there would be a lot more assumptions (the main reason why I think we ended up with the minimum of assumptions is that we planned for local testing from the outset).
Finally, I think I can also answer why we haven’t bothered with better local support yet: our CI implementation has an interactive mode where you can submit a job, instruct it to break at a certain step (typically on first error), log into the VM (SSH, VNC), and investigate/debug/etc. This is probably the best one can hope for if the problem you are trying to debug is platform-specific and the platform in question differs from your local (e.g., Windows, Mac OS). This is unfortunately the common case with cross-platform C/C++.
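To illustrate the “machine-readable job description plus a thin local runner” idea, a toy Go sketch (the JSON shape and names here are invented for illustration, not the actual worker format): read a job file, run each step, dump everything to stderr:

package main

import (
	"encoding/json"
	"log"
	"os"
	"os/exec"
)

// Job is a toy stand-in for a machine-readable job description.
type Job struct {
	Name  string     `json:"name"`
	Steps [][]string `json:"steps"` // each step: command + arguments
}

func main() {
	if len(os.Args) < 2 {
		log.Fatal("usage: runjob job.json")
	}
	data, err := os.ReadFile(os.Args[1])
	if err != nil {
		log.Fatal(err)
	}
	var job Job
	if err := json.Unmarshal(data, &job); err != nil {
		log.Fatal(err)
	}
	for i, step := range job.Steps {
		log.Printf("[%s] step %d: %v", job.Name, i+1, step)
		cmd := exec.Command(step[0], step[1:]...)
		cmd.Stdout = os.Stderr // "dumping the logs to stderr"
		cmd.Stderr = os.Stderr
		if err := cmd.Run(); err != nil {
			log.Fatalf("step %d failed: %v", i+1, err)
		}
	}
}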
This is one thing I like about SourceHut’s CI/CD offering: it’s literally just shell scripts, so running your build locally is trivial. There are some drawbacks - reusing stuff is harder, for one. But you get a full programming language instead of whatever YAML abomination.
Storing an offset and a symbolic timezone seems mostly counterproductive; the entire point of a symbolic timezone is that the offset can change between the moment you create the event and its actual occurrence.
I guess it could make sense to warn viewers from non-local timezones that the offset (and thus their own time) has changed, but then those are the ones you’d want to inform, not the creator of the event. The creator is presumably in the local timezone, so an event they set to 00:14:07 is still at 00:14:07; the fact that their local timezone’s offset to UTC at the moment of the event has changed is unlikely to be relevant to them.
I store the datetimeWithOffset := "2006-06-02T15:04:05+02:00" (originally computed offset, likely a good fit for the TIMESTAMPTZ type of Postgres - @tonyfinn) and the intended location Europe/Paris.
From there, I “convert” the time to the intended location and compare the offset with the stored one:
if they match, everything is fine
otherwise a user intervention is likely needed.
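In Go, that check only takes a few lines; a minimal sketch of the comparison just described, using the same stored values:

package main

import (
	"fmt"
	"time"
)

func main() {
	datetimeWithOffset := "2006-06-02T15:04:05+02:00" // offset computed at creation time
	location := "Europe/Paris"                        // intended location

	stored, err := time.Parse(time.RFC3339, datetimeWithOffset)
	if err != nil {
		panic(err)
	}
	loc, err := time.LoadLocation(location)
	if err != nil {
		panic(err)
	}

	// Offset that was stored vs. offset the tz database gives today
	// for that instant in the intended location.
	_, storedOffset := stored.Zone()
	_, currentOffset := stored.In(loc).Zone()

	if storedOffset == currentOffset {
		fmt.Println("offsets match, everything is fine")
	} else {
		fmt.Println("offset changed, user intervention is likely needed")
	}
}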
@simonw I think this would address the interesting issue you raised here, no?
First the timezone change problem here is only a problem in the forward direction. No political entity has proposed retroactively changing dates in 2006. So let’s assume a date in 2026 instead of 2006.
Secondly, TIMESTAMPTZ does not store “2026-06-02T15:04:05+02:00”. It stores no timezone or offset info whatsoever. What it stores is 1780405445000000 (microseconds since 1970-01-01T00:00:00Z). It’s basically a wrapper around transforming from the input zoned time to microseconds since the unix epoch on write, and producing a time in the connection timezone (default: system TZ) on read. But for most systems these days, a single global timezone at connection level or system level is insufficiently granular, since the timezone is a property of the data and not of the system.
This has a couple of interesting effects:
It’s no different to storing “2026-06-02T13:04:05Z”. The postgres docs describe this as converting to UTC, and I’ve seen threads where people quibble about whether that’s really a UTC conversion or not, but the important part is that any offset or timezone information is lost, as is the time in the original timezone. 2026-06-02T13:04:05Z might be the same as 2026-06-02T15:04:05+02:00, but until that time comes, you cannot definitively say it is the same as 2026-06-02T15:04:05[Europe/Paris].
The date you get back depends on the connection time zone. So if you query it with a system with its time zone set to Europe/Paris, and nothing changes with Paris’s timezone rules, you’ll get back 2026-06-02T15:04:05+02:00, sure, but if you query it with a system with its time zone set to America/New_York it’ll be 2026-06-02T09:04:05-04:00. So your offset has no value as an error checking mechanism; it’s just made up based on whatever the connection time zone is.
If Paris does change its DST rules in the meantime (remember: the EU even has a passed resolution on the books where they’re planning to abolish DST), the time you actually want back is 2026-06-02T15:04:05+01:00, but for Postgres to give that result, 1780409045000000 would need to be stored in the TIMESTAMPTZ (while the actual stored value is 1780405445000000) and your connection timezone would have to be set to Europe/Paris.
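A small Go illustration of the round-trip described above: only the instant survives, and the offset you see on the way out depends entirely on the zone used at read time (values assume current tzdata rules; errors ignored for brevity):

package main

import (
	"fmt"
	"time"
)

func main() {
	// The instant the client sends.
	t, _ := time.Parse(time.RFC3339, "2026-06-02T15:04:05+02:00")

	// TIMESTAMPTZ effectively keeps only this number; the original
	// offset and any zone name are gone.
	fmt.Println(t.UnixMicro()) // 1780405445000000

	// What comes back depends on the "connection" time zone:
	paris, _ := time.LoadLocation("Europe/Paris")
	newYork, _ := time.LoadLocation("America/New_York")
	fmt.Println(t.In(paris).Format(time.RFC3339))   // 2026-06-02T15:04:05+02:00
	fmt.Println(t.In(newYork).Format(time.RFC3339)) // 2026-06-02T09:04:05-04:00
}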
Thanks, I wrongly thought that TIMESTAMPTZ would store the offset somehow.
So I would need a third column, storing this offset (in seconds, probably), so that I can check whether the offset (from the time of the event’s creation) is still correct at a later time, possibly after a rule change for the Paris zone.
You don’t want to store the offset, you want to store the city name/location. Politics dictate that the offset will change on occasion, whenever politicians get bored (globally this happens several times a year).
Also, cultures sometimes have their own offsets, different from the legal offsets, which further complicate things.
Or just assume it is always changed, and convert through the TZ database.
In either case you have to round trip through the TZ DB to see if it changed, wouldn’t it be easier to just assume it has and move on with life? There might be special use-cases where you NEED to know if it changed, but in most cases, you just care what the right value should be at the moment.
Neither of Postgres’ timestamp types are actually very helpful for this scheduling use case. TIMESTAMPTZ converts the date from the input timezone to UTC but this is an operation you want late binding for rather than early binding (as this article specifies), while TIMESTAMP will implicitly use the system TZ for many operations.
Some options:
Two columns: TIMESTAMP + timezone. You then need to be careful to convert with AT TIME ZONE before using any database date functions
String + timezone. Again you have to convert for any date functions, in a more expensive way, but it’s harder to fix
String, timezone and denormalized UTC - you get a sortable date column, but you need to manage when to regenerate the denormalized column
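For the third option, the denormalized UTC column can be recomputed from the wall-clock string and the zone whenever either changes; a minimal Go sketch of that derivation (column names and storage omitted):

package main

import (
	"fmt"
	"time"
)

// utcFor derives the sortable UTC instant from the stored wall-clock
// string and time zone name, using whatever tz rules are current.
func utcFor(wallClock, zone string) (time.Time, error) {
	loc, err := time.LoadLocation(zone)
	if err != nil {
		return time.Time{}, err
	}
	local, err := time.ParseInLocation("2006-01-02T15:04:05", wallClock, loc)
	if err != nil {
		return time.Time{}, err
	}
	return local.UTC(), nil
}

func main() {
	utc, err := utcFor("2026-06-02T15:04:05", "Europe/Paris")
	if err != nil {
		panic(err)
	}
	fmt.Println(utc.Format(time.RFC3339)) // 2026-06-02T13:04:05Z with current rules
}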
The two types of SSH key I have mentioned so far, ecdsa-sk and ed25519-sk, do not actually live in the TPM technically. These keys are “non-resident”, meaning the keypair is still generated outside of the TPM, but the TPM is needed for signing.
This is probably not accurate: the keys are created on the TPM and then wrapped as objects you can load back into the TPM. The TPM does not have a lot of memory to store keys, so the concept is to have a deterministic parent key that can wrap keys so they can be securely exported outside the TPM.
When you sign, the object is loaded and unwrapped, and the signature happens on the TPM.
Some time ago, I wanted to formalize a way to announce the supported releases of a piece of software via DNS: https://codeberg.org/forgejo-contrib/rvcodns (unfinished and more or less abandoned for now)
_release.example.org. 3600 IN TXT "v=1.2.3;answer=42;other_key=spaced=20value"
Note that I really think that informing about a new version should be kept separate from actually deploying the upgrade (which depends on the way the software is packaged: built from source, docker, package manager, gobuilds…)
interesting, we came to almost the same solution and rationale! i searched online at the time, but couldn’t find anyone announcing releases over dns, while it made so much sense to me. i’m sending you an email about this.
it looks like a compatible dns api endpoint could be implemented in the gopherwatch.org dns server.
Submitted this story because it mentions “point-cloud documents” and the judicial fight to get access to high-quality 3d models, which should be considered as “commons”.
I hope it isn’t considered to be too far from the “computing focus” of this community.
Institutions like the Baltimore Museum of Art, for example, which has been too timid to publish its own 3D scan of The Thinker due to unfounded fear of somehow violating musée Rodin’s moral rights, should take note.
The release notes for the release containing that patch seem to be:
While we don’t have any flashy new features to show off this time around, rest assured, our teams are hard at work crafting an even more reliable Arc for you.
Listen to the latest edition of Imagining Arc on Overcast, Spotify, or Apple Podcasts.
Point taken, but I was more getting at, with both their blog and their release notes, they haven’t mentioned the security issue that was in their product.
everything on the page is just html/css crafted with love. no images, javascript, or other external resources, and just 31kB gzipped (that’s 5 seconds over dial-up)! it takes a lot of time and effort compared to just throwing screenshots on the page, but i think it’s really fun to have a blogpost come to life like that, with interactivity and all. and it’s responsive!
Java is still the de facto king of backwards compatibility that actually has the years to show for it. Just based on the title, that ‘title’ would go to Java in my opinion.
I would argue that it is at most as bad as with any other language, and arguably better, as the ecosystem is significantly more stable than any other. (The ones in the same ballpark in size are JS and Python, and neither is famous for stability.)
But of course third-party libs may decide to break their public APIs at any point, that’s irrelevant to language choice, besides “culture” — and Java’s is definitely not “going fast and breaking stuff”.
Sadly, starting with Java 9, while the core language is backwards compatible, the library situation is a nightmare. And when you’re doing an upgrade, you don’t care if it’s the core language or the library that’s changed, either way, it’s work to upgrade and ensure you don’t have regressions.
Lots of things have changed the package that they live in. In principle, this sounds like it’s not a very difficult change to accommodate, just find and replace, but it wreaks havoc when you have libraries that have to run on multiple JVM versions.
If you’re unlucky enough to be using Spring, it’s even worse. That framework has no concept of backwards compatibility, and is like a virus that spreads throughout your entire codebase.
I can’t share your experience at all. Even large jumps pre-8 to post-11 (though after like 11 they are basically trivial) are reasonably easy, and the few libraries that made use of JVM internals that got increasingly locked down (to prevent further breaking changes in the future) have largely been updated to a post-module version, so very often it’s just bumping the version numbers.
I don’t understand some of what you’re describing.
And when you’re doing an upgrade, you don’t care if it’s the core language or the library that’s changed, either way, it’s work to upgrade and ensure you don’t have regressions.
Are you saying that other languages can somehow prevent the problem of third-party libraries breaking backwards compatibility? Because, if you aren’t saying that, then the core language being stable is going to make the situation objectively better to deal with than if you have to worry about breaking changes in libraries and in the core language…
Lots of things have changed the package that they live in. In principle, this sounds like it’s not a very difficult change to accomodate, just find and replace, but it wreaks havoc when you have libraries that have to run on multiple JVM versions.
Yes, it can be tricky to write code to target multiple versions of Java at the same time, but in my experience, it’s about 100 times less tricky to do this with Java than almost any other language. JavaScript might be the only one that’s even better about running old code on newer runtimes without much problem. Are there others you consider better than Java at this? Specifically for running the SAME code on multiple versions of the language? I remember back in the day when I worked on a lot of C++, it was a nightmare to figure out what features I could use from C++11 to allow the project to build on various versions of GCC that shipped with different Linux distros (Ugh, RedHat’s ancient versions!).
Are you saying that other languages can somehow prevent the problem of third-party libraries breaking backwards compatibility? Because, if you aren’t saying that, then the core language being stable is going to make the situation objectively better to deal with than if you have to worry about breaking changes in libraries and in the core language…
I think that depending on the community, some languages have less BC issues because of libraries than others. Case in point for me was Clojure: both the core language and community have a stance of avoiding breaking compatibility, even if the language itself doesn’t offer any special mechanism for that. Quite the opposite actually, since the language is really dynamic and you can even access private functions without much difficulty.
Are you saying that other languages can somehow prevent the problem of third-party libraries breaking backwards compatibility?
Of course they can’t, and that’s not my point.
My point is that communities develop norms, and those norms include more or less careful treatment of backwards compatibility in libraries. My sense is that Haskell and JavaScript are probably the worst. Java is actually mixed, as there are a lot of libraries that do great work. Some even have a single version that runs on Java 5 through Java 21. But at my day job, we’re using Spring, and the situation is bad.
Though with regard to languages, I will say dynamic class-loading can make the situation worse. I’ve dealt with issues where the order of requests to a web server determined which version of a service provider was loaded. So 9 times out of 10, the code ran on the newer version of Java, but failed 1 time in 10.
Because, if you aren’t saying that, then the core language being stable is going to make the situation objectively better to deal with than if you have to worry about breaking changes in libraries and in the core language…
It sounds like your argument is “having two problems is worse than having one”. But two little problems are better than one big problem.
I greatly appreciate the backwards compatibility of the Java language. But I would be happier with a slightly less backwards compatible core language, if I could trade it for an ecosystem that’s much better at backwards compatibility.
My point is that communities develop norms, and those norms include more or less careful treatment of backwards compatibility in libraries. My sense is that Haskell and JavaScript are probably the worst. Java is actually mixed, as there are a lot of libraries that do great work. Some even have a single version that runs on Java 5 through Java 21. But at my day job, we’re using Spring, and the situation is bad.
Right. I understand and agree about ecosystem “culture”. But, at the end of the day, someone said Java was good with backwards compatibility because it makes it easy to write code that will continue to work “forever”.
I guess maybe your point is that the language prioritizing backwards compatibility is only necessary, but not sufficient, for developers actually getting to experience the benefit of the backwards compatibility. Would you say that’s a decent interpretation/restating? I do agree with that. And if the language itself doesn’t care about backwards compatibility, then it’s impossible for the ecosystem to have stability.
Though with regard to languages, I will say dynamic class-loading can make the situation worse. I’ve dealt with issues where the order of requests to a web server determined which version of a service provider was loaded. So 9 times out of 10, the code ran on the newer version of Java, but failed 1 time in 10.
Definitely true! Luckily, I only remember one time where I had a nightmare of a time figuring out some class or service loading bug.
It sounds like your argument is “having two problems is worse than having one”. But two little problems are better than one big problem.
Yeah, that’s exactly what I was saying. But there’s no reason to think the “one problem” (community doesn’t value backwards compat.) would be bigger or smaller for either case. So, it’s like comparing x to x + 1.
I greatly appreciate the backwards compatibility of the Java language. But I would be happier with a slightly less backwards compatible core language, if I could trade it for an ecosystem that’s much better at backwards compatibility.
I really feel like this is mostly just Spring. I hate Spring, too, so I have been fortunate enough to not have to use it in several years now. But, as for the Java libraries I am using, I honestly couldn’t tell you about their JVM version compatibility over time. But, just judging by API stability, it seems that most “general purpose” or “utility” style libraries stay pretty backward compatible (thinking of the Apache and Guava libs, etc). It’s mostly the frameworky ones (like Spring, Spark, etc) that like to rewrite big chunks of their APIs all the time.
Can you give an example of what you’re referring to?
I’m semi-active on the PHP internals mailing list (i.e. the list where development of php itself is discussed) and BC breaks are a near constant discussion point with any proposed change, so I’m kind of curious what breaking change you’re referring to here?
Well, since I’m not the person you originally asked, I obviously can’t speculate on which breaking changes they’ve actually bumped into. But I’ve worked on several long-lived PHP projects going all the way back to version 5.3 and all the way up to 8.1 or 8.2 (I don’t quite remember which), and I’ve definitely had to fix broken code from minor version bumps. Luckily, we do have those migration lists, so I learned pretty early on to read them carefully and grep through any code base to fix things on the list before even running unit tests.
But, I’m not sure what the point is. The person said that PHP introduces breaking changes in minor version bumps, and that it frustrates them. Maybe they’re misguided for being frustrated by that, but it’s objectively true.
Personally, I’m perfectly fine with breaking bug fixes in minor versions. It’s not like you’re going to accidentally or unknowingly version bump the programming language of your projects. On the other hand, many of these changes are NOT bug fixes, but just general “improvements”, like the range() changes or the “static properties in traits” change in the 8.3 list. I can see how this would be frustrating for someone who wants the bug fix changes, but doesn’t want to pay the extra “tax” of dealing with the non-essential breaking changes.
But, again, I personally don’t care. I treat PHP minor versions as a big deal. PHP 8.2 is not the same language as PHP 8.1, which is fine. But, if you care about backwards compatibility and painless upgrades, Java is pretty much the king.
It’s mostly the ripple effects into libraries and “big” frameworks. For instance, I remember a few years ago a client was still on PHP 5.3 (I think) and his hosting provider was about to shut down this version, so he had to get his site working on newer versions. This site was built on an in-house CMS of questionable code quality, using CakePHP.
The problem I ran into was that I had to step-by-step upgrade CakePHP through some major versions because the older versions wouldn’t run in the newer PHP due to BC breakage. Eventually, I completely hit a wall when I got to one major version where I discovered in the changelog that they completely rewrote their ORM. Because of the shitty code quality of the CMS (with no tests), I had to admit defeat. The client ended up having to commission the building of a new site.
Java and Go are really great in this regard.
Every other language, including Python (which makes breaking changes from 3.12.3 -> 3.12.4!) is horrible on that front.
I am impressed that they added so many new features, while still apparently being mindful about minimizing dependencies:
On one side I am always impressed by the improvements that each release brings.
On the other side I feel that some of the energy spent to add new features should be better spent by polishing existing features (thinking of https://github.com/golang/go/issues/60940 which is “accepted”, but stuck for months…)
I agree. Honestly I think there has been a shift in the focus of the paid development. Go used to be developed more as a general language focused on certain goals, reflected in design decisions and language changes.
However, that appears to have basically been replaced with “Go is a language by and for Google”. This isn’t bad in and of itself, and probably not all that surprising.
However, for people whose ideals aligned with the initial design goals and essentially the people around it (the Plan 9 folks, etc.) it feels a bit like a slap in the face.
This ranges from “don’t expect more big language changes other than generics” (which, btw, had been mentioned in their FAQ as probably coming since before 1.0), and trying to get people away from thinking about the garbage collector (making it hard or impossible to really affect it), to more and more exported runtime methods regarding exactly such topics. Compatibility has also shifted a bit towards “let’s find hacks and rationales so we technically keep it”, which to be fair is a whole lot better than what you find elsewhere.
Please don’t misunderstand me. I’m not angry at these changes per se, but for people who trusted the claims and promises that were made it’s still not a good picture. If people had wanted a different language, of course they’d have chosen it. It’s not like there’s a shortage of them. I remember how basically the reason for creating Go was not liking C++ (and Java), and regardless of what one thinks about them, it seems obvious that pushing C++ and Java people (and others) into the language makes Go go into that direction.
This isn’t meant to put hate on Go, and certainly not on the people and their work, time and effort. I think they are great. This is meant as an observation of shifts in priorities that might also explain why that issue is still standing. The issue mentioned stems from a claim made during a proposal process. It was claimed that this would be covered, which one has to assume is part of the reason why it was accepted in the first place, and while the majority of the proposal has been implemented, this part wasn’t.
Isn’t that easily achievable from user code? Why do you think this needs to be in the standard library?
(If you’re looking for the “reverse” module, I created a caching http.RoundTripper that can make use of the caching headers and lower the impact of multiple requests on remote servers. Feedback always welcome on it.)
That’s beside the point. My statement was on how it was put into a proposal for acceptance, but put into a separate, still unresolved issue during the implementation, which is considered complete.
And on the why, I’d argue it’s to align with the already existing fileserver code, which does take care of such things. Also, because this is somewhat tied to embedding, such things would make sense to live in code, just like the files do, rather than being handled at runtime.
But again, I’m not really arguing about the issue here, but about the fact that reaching consensus by claiming features and then not implementing what was decided on makes the consensus/acceptance part of the process kind of pointless.
Alas, in the last few years, I began to receive ever-increasing push back on architecture; as if all of a sudden the community valued clean, (big 4) design patterns etc. It’s been frustrating to encounter. Unwilling to defend the community’s old consensus and relitigate old debates, I find it slowly pushing me away from Go. (And in spite of my love for DSLs and more exotic production systems, I actually don’t want new e.g. functional features in Go.)
Context: https://upspin.io/
Where there is now mention of this shutdown…
I want to track my electricity consumption. However getting a smart meter or fully automated meter reading seems too complex.
So I now have the following chain:
If your meter is digital but not quite “smart” enough to have its own connection and API, it might at least have an LED labelled “1000 imp/kWh” or similar. Then you can stick a photo diode on it with a raspi pico and go from there. I have it going into InfluxDB with a grafana dashboard. 1 Wh resolution is not perfect, but not terrible either.
Yes, that would be a sensible solution (1Wh resolution is much better than my current 40kWh resolution:)
I have two issues:
Both not very hard, but requires some investment.
Related: future of gitlab.freedesktop.org https://lobste.rs/s/lozl1i/equinix_sunset_future_gitlab
This is an interesting solution to the problem, but I’d prefer being able to perform a “real” reboot and maybe even switch to a different OS, for example.
One idea I thought of is to embed a listener program in the initramfs that simply waits for some external device to send the password (securely via asymmetric key encryption with forward secrecy), pipes the password into cryptsetup as soon as it receives it, and then continues the boot process. Then to reboot you’d have to use some custom program running on some other secure device (phone/laptop) which initiates a reboot over ssh, then waits for the device to show back up and send it the password. It should definitely be possible.
Of course you could also just use a TPM but you’d still need to be careful.
Sounds like boot.initrd.network.ssh.enable
Note that I can still perform a “normal” reboot anytime I really want it (since this is an old laptop, I can type the password on the attached keyboard)
The wiki has a nice page to draw the rest of the owl: Remote disk unlocking.
I can vouch for it, including the Wireguard in initrd bit!
This is what dropbear is commonly used for!
I think what you might want to consider is also tang/clevis from latchset that solve this problem really well.
As in you have a small init-system that just listens for ssh connections and then after a power-outage you can ssh into your init dropbear and give it the disk encryption password and it boots the real system?
https://www.cyberciti.biz/security/how-to-unlock-luks-using-dropbear-ssh-keys-remotely-in-linux/
Here’s a blog post explaining it well enough. :)
If you want to take this further, you automate the ‘client ssh’ portion, and now you have a whole automatic unlocking system.
I would have never thought this to be possible, but I somehow recalled a comment by @pl that it had already been done! (I would have been interested in the solution btw ;)
Aww thank you, feels good. Glad you figured it out, especially the oddity of kexec on NixOS at the moment; I’ll definitely have to revisit this. Over the weekend I somehow noticed that I can’t get the key from dmidecode any longer, not sure what the appropriate way is nowadays.
https://unix.stackexchange.com/a/119832
Is this just rediscovering equality assertion?
I guess assert_eq! or a macro like it would make the language too complex? /s
The point is to use the language instead of inventing a new domain-specific language for test assertions.
What’s the difference between:
“Use the pattern if got, want := ....; got != want”
“Define the function func assertEq[T comparable](t testing.TB, got, want T) in each package, and use that”
“Import testify/assert, use assert.Equal”
Which of those 3 is not using the language? Are if statements part of the language, but function calls aren’t? Which of those is inventing a new domain-specific language? Is got, want := not a domain specific language?
I totally agree with you.
Nitpick: I would however not recommend testify (anymore) since it predates generics (so no type checking, which does not play well with untyped constants).
Shameless plug: my minimal (generics-compatible) assertion library https://code.pfad.fr/check/ (which has some refinements over your proposed assertEq, like Fatal and Log)
Interesting. I wrote my own minimalist, generic test assertion package, but you are more minimalist than me. :-)
To handle fatal v. non-fatal errors, I have a helper that wraps a testing.TB in a struct that converts Fatal calls to mere Error calls. I thought that was a relatively simple way to do it, compared to having to call .Fatal everywhere or switch from using testify to testify/assert as a prefix.
I actually took inspiration from your library for check!
Yes, this is one of the main differences:
check is non-fatal by default (append .Fatal() to opt in)
be is fatal by default (wrap t with be.Relaxed(t) to opt out)
Besides, since the Log method is attached to the check.Failure object, the emitting line is correctly logged (however it cannot be called during some previous computation, like be.DebugLog. One must use testing.Log directly)
I want to see the next library that is more minimalist than check. It will have a single function called ensure.Good and it will use AI to decide if the result was good. 😂
The problem isn’t having a single assertEq, it’s that in your typical test framework you have like 20 different ones for every kind of comparison you can imagine.
The benefit there is typically one of pretty printing, right? That is, if the assertion fails, you can very quickly see exactly what went wrong and what the difference was. I don’t write much Go, but when I use those sorts of assertion libraries in other languages, that’s what I get out of them, and it’s usually a feature that’s worth its weight in gold when a test starts failing and you want to see exactly why that’s happened.
For job security or in case one gets paid per LoC?
Can’t really say whether this particular team or the advertised product is any good; but I agree with several of the points in that manifesto. In particular I still don’t quite understand why there is so little demand for a) local-first and b) vendor-independent CI/CD pipelines.
I don’t want to be locked into Github or Gitlab or even into the on-premise Jenkins, and instead want to be able to switch to different CI/CD tools without rewriting the pipeline definition. And I especially want to run the pipelines locally, just like I can run Make/Ninja locally. Debugging a CI/CD problem with our on-premise Jenkins litters the Git history with trial-and-error commits – this doesn’t happen for any other problem I have to debug!
Why do I so rarely read about this? Am I overlooking some essential feature of CI/CD pipelines that makes locally-run pipelines really unfeasible? Or is this just an idea whose time is yet to come?
I think there is plenty of demand – the problem is that it’s a lot of work to make! You actually get into programming language and (distributed) operating system issues, which people don’t realize until they are halfway into the problem.
That is, people underestimate the problem of CI. Building a general CI system is essentially the same problem as building a “distributed OS” like Kubernetes – you are mediating code/data/users across hardware/networks.
This also means there is a strong tendency toward an “inner platform effect”
Looking at Glu, I don’t believe Go is the right language for it. e.g. the builder.New example here: https://blog.flipt.io/introducing-glu
It might be good enough in many cases, but probably not something I want to use. I have been maintaining a big heterogeneous CI for many years now.
“Vendor independent” is very hard – it’s a big enough, and profitable enough, problem that naturally “vendors” will arise, like Github Actions, Gitlab, etc.
I think Earthly has some good ideas – however they seem to have fallen into the “trap” of inventing their own ad hoc language. Instead of a YAML-ish language, it’s more like a Docker/Make-ish language. (Though I’d definitely be interested in a defense of Earthly)
Here’s a post I wrote 3 years ago about this problem:
which was prompted by this very good post, which got a lot of discussion:
As far as I can tell, basically all the same problems exist that existed 3 years ago … All the complaints we have now are probably in that thread. But it’s not for lack of demand! It’s because it’s hard to build
Also, YSH is supposed to be for this exact use case. I mentioned a few days ago that the best way to describe YSH is shell+Python+YAML squished together. Which is what a CI needs.
If anyone wants to build a CI on top of YSH, that’s what it’s for … although the “Hay” part needs another pass
I am not expecting that many people to be convinced without building a demo myself, but if anyone does happen to be convinced, definitely feel free to ping me by e-mail or on Zulip :-)
Oils is 100% open source and has been for 8 years. That will continue to be true, but it’s also taught me why there is no high quality open source CI – basically because you need a bunch of skilled people to build it full time.
And also because developing a programming language is hard/expensive, and I think the problem needs a custom language. Again, the Earthly example is strong evidence of that.
Some people, when confronted with a CI problem, think “I know, I’ll use Python.” Now they have two problems.
Seriously, though, I think a general-purpose CI service will unfortunately need first-class Windows support.
YSH is influenced by those 3 languages, but it is its own separate language and implementation
It’s fine to have an opinion on python, but remember that it’s yours, and other people might not care about it
tbh, this kind of opinion comes across as a defense of Python by someone who hasn’t used better languages. For the record, my day job is writing Python, but even I can acknowledge its shortcomings despite being my livelihood.
What I mean is that I don’t really care about “I have two problems” kind of quotes … it’s a waste of time
If you want me to care, show me something interesting.
Glu seems to be similar to https://dagger.io/ (which is also written in Go, but much older).
I’ve always wanted to give it a try, but did not find a chance yet…
Good question. I can try to answer it based on my experience with a fairly specialized CI implementation (cross-platform C/C++ testing).
The first clarifying question to answer is what you mean by “local”? Most CI implementations these days run jobs in containers or VMs. So local could mean run a VM/container on my local machine and the job inside or it could mean run the job directly on my local machine. I am going to assume you mean the latter since you are talking about debugging the job, which would still be a pain in a local VM/container.
In our CI implementation we can actually run the worker (the program that executes the CI job) locally, though it is used mostly for testing the worker rather than the jobs. And it does require some ceremony. Firstly, the worker expects the job description in a machine-readable format, so to make it convenient to run locally, we would need to replicate this as some sort of a command line interface. Likewise, the result is in a machine-readable format, with logs, etc. Dumping the logs to stderr would probably be good enough. More importantly, the worker expects a certain setup in the VM (we use VMs) in which it runs. In our case this is pretty minimal (we have an indirection in the form of “environment scripts” which set up the execution environment). But I can imagine that in other CI implementations there would be a lot more assumptions (the main reason why I think we ended up with the minimum of assumptions is that we planned for local testing from the outset).
Finally, I think I can also answer why we haven’t bothered with better local support yet: our CI implementation has an interactive mode where you can submit a job, instruct it to break at a certain step (typically on first error), log into the VM (SSH, VNC), and investigate/debug/etc. This is probably the best one can hope for if the problem you are trying to debug is platform-specific and the platform in question differs from your local (e.g., Windows, Mac OS). This is unfortunately the common case with cross-platform C/C++.
This is one thing I like about SourceHut’s CI/CD offering: it’s literally just shell scripts, so running your build locally is trivial. There are some drawbacks - reusing stuff is harder, for one. But you get a full programming language instead of whatever YAML abomination.
I use the same shell scripts with both sourcehut and Github Actions, and there is no real difference. sourcehut uses YAML too.
our sourcehut YAML, which calls shell scripts:
https://github.com/oils-for-unix/oils/tree/master/.builds
our github YAML, which calls the same shell scripts:
https://github.com/oils-for-unix/oils/tree/master/.github/workflows
sourcehut doesn’t have some features that Github Actions does, like dependencies/caching.
I like a lot of things about sourcehut, but you can use Github Actions in the same way if you want. And Gitlab, etc.
Source code: https://github.com/zersh01/iptables_interactive_scheme
This looks impressive, but is currently lacking a license: https://www.npmjs.com/package/@rocicorp/zero?activeTab=code
According to https://bugs.rocicorp.dev/issue/3007
and
However the main npm package does not reflect this…
Edit: the monorepo is apache2-licensed: https://github.com/rocicorp/mono/
Stupid question: how do you store the “intended” time zone and date time? (e.g. in Postgres)
As strings? (Arbitrary + RFC3339 for the date time)
Just discovered RFC9557, which suggests 2022-07-08T00:14:07+02:00[Europe/Paris] (if the +2 is inconsistent with the location, inform the user).
Storing an offset and a symbolic timezone seems mostly counterproductive; the entire point of a symbolic timezone is that the offset can change between the moment you create the event and its actual occurrence.
I guess it could make sense to warn viewers from non-local timezones that the offset (and thus their own time) has changed, but then those are the ones you’d want to inform, not the creator of the event. The creator is presumably in the local timezone, so an event they set to 00:14:07 is still at 00:14:07; the fact that their local timezone’s offset to UTC at the moment of the event has changed is unlikely to be relevant to them.
Yes, the point of the duplicate storage is to raise an error if they don’t match. The textual form controls if you want to ignore errors.
Exactly. I just made a PoC in Go: https://go.dev/play/p/HDXB_K6DyT_f
I store the datetimeWithOffset := "2006-06-02T15:04:05+02:00" (originally computed offset, likely a good fit for the TIMESTAMPTZ type of Postgres - @tonyfinn) and the intended location Europe/Paris.
From there, I “convert” the time to the intended location and compare the offset with the stored one:
if they match, everything is fine
otherwise a user intervention is likely needed.
@simonw I think this would address the interesting issue you raised here, no?
There’s two problems here.
First the timezone change problem here is only a problem in the forward direction. No political entity has proposed retroactively changing dates in 2006. So let’s assume a date in 2026 instead of 2006.
Secondly, TIMESTAMPTZ does not store “2026-06-02T15:04:05+02:00”. It stores no timezone or offset info whatsoever. What it stores is 1780405445000000 (microseconds since 1970-01-01T00:00:00Z). It’s basically a wrapper around transforming from the input zoned time to microseconds since the unix epoch on write, and producing a time in the connection timezone (default: system TZ) on read. But for most systems these days, a single global timezone at connection level or system level is insufficiently granular, since the timezone is a property of the data and not of the system.
This has a couple of interesting effects:
Thanks, I wrongly thought that TIMESTAMPTZ would store the offset somehow.
So I would need a third column, storing this offset (in seconds, probably), so that I can check whether the offset (from the time of the event’s creation) is still correct at a later time, possibly after a rule change for the Paris zone.
You don’t want to store the offset, you want to store the city name/location. Politics dictate that the offset will change on occasion, whenever politicians get bored (globally this happens several times a year).
Also, cultures sometimes have their own offsets, different from the legal offsets, which further complicate things.
If I only store the location, I won’t know if the offset changed (due to politics or whatever).
The goal is to know if the offset changed, to ask the user if the time should be updated as well.
Or just assume it is always changed, and convert through the TZ database.
In either case you have to round trip through the TZ DB to see if it changed, wouldn’t it be easier to just assume it has and move on with life? There might be special use-cases where you NEED to know if it changed, but in most cases, you just care what the right value should be at the moment.
Neither of Postgres’ timestamp types are actually very helpful for this scheduling use case.
TIMESTAMPTZ converts the date from the input timezone to UTC, but this is an operation you want late binding for rather than early binding (as this article specifies), while TIMESTAMP will implicitly use the system TZ for many operations.
Some options:
Two columns: TIMESTAMP + timezone. You then need to be careful to convert with AT TIME ZONE before using any database date functions
String + timezone. Again you have to convert for any date functions, in a more expensive way, but it’s harder to fix
String, timezone and denormalized UTC - you get a sortable date column, but you need to manage when to regenerate the denormalized column
and if you want this for Linux, I wrote ssh-tpm-agent last year: https://github.com/Foxboron/ssh-tpm-agent
This is probably not accurate: the keys are created on the TPM and then wrapped as objects you can load back into the TPM. The TPM does not have a lot of memory to store keys, so the concept is to have a deterministic parent key that can wrap keys so they can be securely exported outside the TPM.
When you sign, the object is loaded and unwrapped, and the signature happens on the TPM.
Thank you so much for your work! (I am using it daily on nixos)
Some tools don’t support asking for a pin, but making a patch is usually not hard, e.g. https://github.com/jesseduffield/lazygit/pull/4018
I think this is accurate. I believe only the -sk part is done by the TPM in the article.
Sounds like https://github.com/psanford/tpm-fido/issues/33
I found the DNS part really interesting.
Some time ago, I wanted to formalize a way to announce the supported releases of a piece of software via DNS: https://codeberg.org/forgejo-contrib/rvcodns (unfinished and more or less abandoned for now)
I also created a Go library to fetch such release information: https://code.pfad.fr/rvcodns/
Forgejo already uses DNS to indicate when a new version is available (this was the initial experiment to the proposal above): https://www.whatsmydns.net/#TXT/release.forgejo.org
Note that I really think that informing about a new version should be kept separate from actually deploying the upgrade (which depends on the way the software is packaged: built from source, docker, package manager, gobuilds…)
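For what it’s worth, querying and parsing such a record needs only the Go standard library; a rough sketch against the record format shown above (this is not the rvcodns library’s API, and the “=20” escaping from the draft is ignored):

package main

import (
	"fmt"
	"net"
	"strings"
)

// lookupRelease fetches the TXT record published at _release.<domain>
// and parses the "key=value;key=value" payload sketched above.
func lookupRelease(domain string) (map[string]string, error) {
	records, err := net.LookupTXT("_release." + domain)
	if err != nil {
		return nil, err
	}
	info := map[string]string{}
	for _, record := range records {
		for _, pair := range strings.Split(record, ";") {
			// Split on the first "=" only, so values may contain "=".
			if key, value, ok := strings.Cut(pair, "="); ok {
				info[key] = value
			}
		}
	}
	return info, nil
}

func main() {
	info, err := lookupRelease("example.org")
	if err != nil {
		fmt.Println("lookup failed:", err)
		return
	}
	fmt.Println("latest version:", info["v"])
}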
interesting, we came to almost the same solution and rationale! i searched online at the time, but couldn’t find anyone announcing releases over dns, while it made so much sense to me. i’m sending you an email about this.
it looks like a compatible dns api endpoint could be implemented in the gopherwatch.org dns server.
Submitted this story because it mentions “point-cloud documents” and the judicial fight to get access to high-quality 3d models, which should be considered as “commons”.
I hope it isn’t considered to be too far from the “computing focus” of this community.
This looks specific to France. Does it have any further implications for people in other countries?
Indirectly yes. According to the article:
I mean, they’ll be able to access the scans if it succeeds. That’s kind of cool.
The release notes for the release containing that patch seem to be:
Isn’t the patch mainly server-side? (Firebase rule, regarding the various access rights)
There is also the client side patch to stop leaking URLs to the server.
AFAICT that change hasn’t actually landed on the client. The post mentions it should come in v1.61.1 but the current latest for mac seems to be 1.58.
Point taken, but I was more getting at, with both their blog and their release notes, they haven’t mentioned the security issue that was in their product.
That being said, they now have
This is quite impressive!
Other review of the Glove80: https://lobste.rs/s/7rzyl2/review_glove80_ergonomic_keyboard
I used the first suggestion to demo my IBAN package: https://code.pfad.fr/swift/iban.html
Build steps: https://git.sr.ht/~oliverpool/code.pfad.fr/tree/main/item/Makefile#L13
PHP makes breaking changes between minor versions (when activating all errors), which is certainly great to keep developers working, but a major PITA.
After 10 years of PHP, I value the Go 1.0 compatibility promise very much…
Java is still the de facto king of backwards compatibility, and it actually has the years to show for it. Going just by the title, that ‘title’ would go to Java in my opinion.
Until you try out the nightmare that is upgrading the Maven, Ant or Gradle dependencies of an old project, then sure.
I would argue that it is at worst as bad as with any other language, and arguably better, as the ecosystem is significantly more stable than any other. (The only ecosystems in the same ballpark in size are JS and Python, and neither is famous for its stability.)
But of course third-party libs may decide to break their public APIs at any point; that’s independent of the language choice, apart from “culture”, and Java’s culture is definitely not “going fast and breaking stuff”.
Sadly, starting with Java 9, while the core language is backwards compatible, the library situation is a nightmare. And when you’re doing an upgrade, you don’t care if it’s the core language or the library that’s changed, either way, it’s work to upgrade and ensure you don’t have regressions.
Lots of things have changed the package that they live in. In principle, this sounds like it’s not a very difficult change to accommodate, just find and replace, but it wreaks havoc when you have libraries that have to run on multiple JVM versions.
If you’re unlucky enough to be using Spring, it’s even worse. That framework has no concept of backwards compatibility, and is like a virus that spreads throughout your entire codebase.
I can’t share your experience at all. Even large jumps pre-8 to post-11 (though after like 11 they are basically trivial) are reasonably easy, and the few libraries that made use of JVM internals that got increasingly locked down (to prevent further breaking changes in the future) have largely been updated to a post-module version, so very often it’s just bumping the version numbers.
I don’t understand some of what you’re describing.
Are you saying that other languages can somehow prevent the problem of third-party libraries breaking backwards compatibility? Because, if you aren’t saying that, then the core language being stable is going to make the situation objectively better to deal with than if you have to worry about breaking changes in libraries and in the core language…
Yes, it can be tricky to write code to target multiple versions of Java at the same time, but in my experience, it’s about 100 times less tricky to do this with Java than almost any other language. JavaScript might be the only one that’s even better about running old code on newer runtimes without much problem. Are there others you consider better than Java at this? Specifically for running the SAME code on multiple versions of the language? I remember back in the day when I worked on a lot of C++, it was a nightmare to figure out what features I could use from C++11 to allow the project to build on various versions of GCC that shipped with different Linux distros (Ugh, RedHat’s ancient versions!).
I think that depending on the community, some languages have less BC issues because of libraries than others. Case in point for me was Clojure: both the core language and community have a stance of avoiding breaking compatibility, even if the language itself doesn’t offer any special mechanism for that. Quite the opposite actually, since the language is really dynamic and you can even access private functions without much difficulty.
Of course they can’t, and that’s not my point.
My point is that communities develop norms, and those norms include more or less careful treatment of backwards compatibility in libraries. My sense is that Haskell and JavaScript are probably the worst. Java is actually mixed, as there are a lot of libraries that do great work. Some even have a single version that runs on Java 5 through Java 21. But at my day job, we’re using Spring, and the situation is bad.
Though with regard to languages, I will say dynamic class-loading can make the situation worse. I’ve dealt with issues where the order of requests to a web server determined which version of a service provider was loaded. So 9 times out of 10, the code ran on the newer version of Java, but failed 1 time in 10.
It sounds like your argument is “having two problems is worse than having one”. But two little problems are better than one big problem.
I greatly appreciate the backwards compatibility of the Java language. But I would be happier with a slightly less backwards compatible core language, if I could trade it for an ecosystem that’s much better at backwards compatibility.
Right. I understand and agree about ecosystem “culture”. But, at the end of the day, someone said Java was good with backwards compatibility because it makes it easy to write code that will continue to work “forever”.
I guess maybe your point is that the language prioritizing backwards compatibility is only necessary, but not sufficient, for developers actually getting to experience the benefit of the backwards compatibility. Would you say that’s a decent interpretation/restating? I do agree with that. And if the language itself doesn’t care about backwards compatibility, then it’s impossible for the ecosystem to have stability.
Definitely true! Luckily, I only remember one time where I had a nightmare of a time figuring out some class or service loading bug.
Yeah, that’s exactly what I was saying. But there’s no reason to think the “one problem” (community doesn’t value backwards compat.) would be bigger or smaller for either case. So, it’s like comparing x to x + 1.
I really feel like this is mostly just Spring. I hate Spring, too, so I have been fortunate enough to not have to use it in several years now. But, as for the Java libraries I am using, I honestly couldn’t tell you about their JVM version compatibility over time. But, just judging by API stability, it seems that most “general purpose” or “utility” style libraries stay pretty backward compatible (thinking of the Apache and Guava libs, etc). It’s mostly the frameworky ones (like Spring, Spark, etc) that like to rewrite big chunks of their APIs all the time.
Happy to agree with your last paragraph. It’s very close to what I think.
Hey, have you heard of Perl?
Can you give an example of what you’re referring to?
I’m semi-active on the PHP internals mailing list (i.e. the list where development of php itself is discussed) and BC breaks are a near constant discussion point with any proposed change, so I’m kind of curious what breaking change you’re referring to here?
PHP maintains change logs for every minor version. Here’s the most recent list for migrating from version 8.2 to 8.3 (a minor version bump): https://www.php.net/manual/en/migration83.incompatible.php
Yes, I’m aware of the migration lists. I was asking to try and get an example of a real world issue that’s likely to affect users.
In that migration list specifically, the vast majority of the “breaks” are fixing inconsistencies.
Yes they are technically a BC break. But part of the discussion on each RFC for PHP is the expected real-world impact if there is a BC break.
Well, since I’m not the person you originally asked, I obviously can’t speculate on which breaking changes they’ve actually bumped into. But I’ve worked on several long-lived PHP projects going all the way back to version 5.3 and all the way up to 8.1 or 8.2 (I don’t quite remember which), and I’ve definitely had to fix broken code from minor version bumps. Luckily, we do have those migration lists, so I learned pretty early on to read them carefully and grep through the code base to fix things on the list before even running unit tests.
But, I’m not sure what the point is. The person said that PHP introduces breaking changes in minor version bumps, and that it frustrates them. Maybe they’re misguided for being frustrated by that, but it’s objectively true.
Personally, I’m perfectly fine with breaking bug fixes in minor versions. It’s not like you’re going to accidentally or unknowingly version bump the programming language of your projects. On the other hand, many of these changes are NOT bug fixes, but just general “improvements”, like the range() changes or the “static properties in traits” change in the 8.3 list. I can see how this would be frustrating for someone who wants the bug fix changes, but doesn’t want to pay the extra “tax” of dealing with the non-essential breaking changes.

But, again, I personally don’t care. I treat PHP minor versions as a big deal. PHP 8.2 is not the same language as PHP 8.1, which is fine. But, if you care about backwards compatibility and painless upgrades, Java is pretty much the king.
It’s mostly the ripple effects into libraries and “big” frameworks. For instance, I remember a few years ago a client was still on PHP 5.3 (I think) and his hosting provider was about to shut down that version, so he had to get his site working on a newer one. This site was built on an in-house CMS of questionable code quality, using CakePHP.
The problem I ran into was that I had to step-by-step upgrade CakePHP through some major versions because the older versions wouldn’t run in the newer PHP due to BC breakage. Eventually, I completely hit a wall when I got to one major version where I discovered in the changelog that they completely rewrote their ORM. Because of the shitty code quality of the CMS (with no tests), I had to admit defeat. The client ended up having to commission the building of a new site.
Java and Go are really great in this regard. Every other language, including Python (which makes breaking changes from 3.12.3 -> 3.12.4!), is horrible on that front.