Debian 11 includes several browser engines which are affected by a steady stream of security vulnerabilities. The high rate of vulnerabilities and partial lack of upstream support in the form of long term branches make it very difficult to support these browsers and engines with backported security fixes. Additionally, library interdependencies make it extremely difficult to update to newer upstream releases. Therefore, browsers built upon e.g. the webkit and khtml engines[6] are included in bullseye, but not covered by security support.
The Debian infrastructure currently has problems with rebuilding packages of types that systematically use static linking. Before buster this wasn’t a problem in practice, but with the growth of the Go ecosystem it means that Go-based packages will be covered by limited security support until the infrastructure is improved to deal with them maintainably.
Python 3.9.1 and PostgreSQL 13 - those should do nicely for a few years for my projects that use them.
Also SQLite 3.34.1: https://packages.debian.org/testing/database/sqlite3 - that’s pretty recent, from January this year.
This is what I’m looking forward to the most. We couldn’t resist upgrading to pg13, but py3.9 is a much anticipated upgrade!
Merge with https://lobste.rs/s/nd8wir/debian_11_is_being_released
The opposite would make more sense, since this is the actual event and that’s just the foreplay leading up to it.
Any way is ok.
Can we not wait for the official announcement on Debian’s news page?
I think it is fun to follow along with the release process
At the risk of being the fun police, following along with processes isn’t an activity well suited to a link aggregator where a post like this one is going to hang around on the front page for a couple of days. It would be kinder to everyone to save them a pointless click and wait a moment for the actual release announcements.
The Debian team is actively promoting the release process on Twitter, and the title of my submission reflects that. If you are not interested, then ignore it. Geez, do people really have to complain about everything these days?
I would grant you your point, but the actual release post was already downvoted once with “already posted”, even though that would be the better post for a discussion of the release than this one.
More content / details in the release notes here: https://www.debian.org/releases/bullseye/amd64/release-notes/ch-whats-new.en.html
News is trickling out on https://micronews.debian.org/ as well
Debian has become my favorite Linux distro recently. It is surprisingly good.
Why surprisingly? It’s the foundation of a lot lot lot of distros
That’s interesting, because I can’t stand it. Things are changed at random so programs are different, and package management seems to like to break at random. What makes it your favourite?
I’ve been using stable since 2008. I can remember one instance where pulling in a package update broke a program: it was when Chromium switched to requiring a GPU and then had a horrid CVE that no one could backport to stable; suddenly I couldn’t use it over ssh -X any more. Hard to place much blame on Debian for that though.

I’d imagine you’d see more breakage running unstable, but like … that’s literally what it says on the tin.
What in the world? There is hardly a less “things break at random” system than Debian.
Apache’s configs being entirely different for no reason has bitten me more than once. Still not sure why that is; it’s pretty horribly annoying.
Why would you think it’s for no reason? Actually that’s an example of exceptional care being taken to avoid things breaking at random. It means that you can have packages that provide Apache modules, and packages that provide entire websites (say, Wordpress or netdata), and expressible dependencies between them, and they don’t step on each other, local config is never blown away by upgrading packages, and generally speaking the sysadmin doesn’t get unpleasant surprises.
Admittedly, it doesn’t really feel “different” to me, because I got used to it 20 years ago and have only rarely had to deal with Apache on systems that didn’t do it that way, but what I will say is that it works, and that it’s not arbitrary — basically all of the packaging guidelines (which amount to a book’s worth of material) come from a place of “how to be predictable, reproducible, and avoid breakage”. But it is assumed that admins know their system; being different from some other distro or upstream defaults isn’t breakage in and of itself, when those differences are justified and documented (which they certainly are in the case of Apache).
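For anyone who hasn’t run into it: the mechanism underneath is just directories of config fragments plus symlinks. On a real Debian system the layout lives under /etc/apache2 and the a2enmod/a2ensite helpers manage the links; the sketch below simulates the same idea in a scratch directory so it doesn’t touch a real system.

```shell
# Simulate Debian's "available"/"enabled" split in a throwaway directory.
# (The real paths are /etc/apache2/mods-available etc.; this only shows the mechanism.)
demo=$(mktemp -d)
mkdir -p "$demo/mods-available" "$demo/mods-enabled"

# A packaged module drops its fragment into mods-available/ ...
printf 'LoadModule rewrite_module modules/mod_rewrite.so\n' \
    > "$demo/mods-available/rewrite.load"

# ... and "enabling" it is nothing more than creating a symlink.
# Disabling removes the link; the fragment itself is never touched,
# which is why a package upgrade can't blow away local config.
ln -s "../mods-available/rewrite.load" "$demo/mods-enabled/rewrite.load"

cat "$demo/mods-enabled/rewrite.load"
```

The main apache2.conf then just includes everything under the enabled/ directories, which is what lets a module package, a site package, and your own vhosts coexist without everyone editing one shared file.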
I’d rather have the config files match the examples online (and the official docs!) than have any of those features, though. What do other distros do with Apache modules? I’ve never had breakage with them, and they don’t do anything like Debian.
This might have been around 2004, when I encountered a legacy Debian Stable that had missed out on a stable release. So it was like oldoldstable.
Upgraded directly to the contemporary stable and everything went perfectly, despite skipping over a major release altogether.
It was beautiful.
It’s been mine since around 1996. I try running other operating systems but I’m so used to the availability of software, the stability of updates and the general attention to detail that I have a hard time using anything else.
From the release notes:
This is a personal opinion, but I find it annoying to see this kind of note in a “new release”: they are releasing a stable version with fixed package versions… that contains various security issues and bugs.
End software (binaries, not libraries) should be upgraded to the latest version every time, I think, except when it has a lot of non-static libraries inside…
EDIT: Link to the notes: https://www.debian.org/releases/bullseye/amd64/release-notes/ch-information.en.html#limited-security-support
My read of that is they are just re-explaining the shared lib problem: If there’s a new security release of openssl, and you have some random packages that static-linked openssl, Debian can’t promise they will be fixed at the same time as the security update of openssl. (This seems like common sense, but sometimes it’s good to be explicit about the risks.)
Thanks for the response.
Indeed, I understood this for the browsers issue, but not for the Go-based packages actually, as they state that Debian infrastructure has issues with static linking…
Or maybe I missed something? :/
It’s kind of a long-standing issue because distro toolchains are built on the assumption of dynamic linking and having only one copy of any particular library installed, so that if there’s a vulnerability in the library they can ship an update and everything which links against it will get the update.
But Go and Rust are built entirely on static linking, and the distros’ current toolchains struggle with that, because the problem changes from “update libfoo.so in place” to “identify literally everything that links libfoo, either as a direct dependency or transitively from something else, and recompile them all using the new libfoo”. That requires a different sort of visibility into dependency chains, and a lot more recompilation, than distros are really used to (and is potentially further complicated by the fact that two different things which use libfoo may use and even require different releases of it – imagine one thing that requires libfoo 1.x only, while another requires libfoo 2.1 or newer).

I don’t think it’s different visibility. Debian compiles everything from local packages (unless that has changed?), so all Go/Rust/other static binaries already build-depend on everything they need. They can easily query the tree for everything that needs updating.
That won’t help your own apps of course, but… we should be tracking dependencies for third party apps anyway.
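To make the “identify everything transitively” step concrete, here’s a toy sketch (invented package names, a flat text file instead of real apt metadata) of computing the rebuild set for a library: keep expanding the affected set until no new reverse dependency shows up.

```shell
# Toy build-dependency table, one "<package> <build-deps...>" per line (names invented).
cat > /tmp/builddeps.txt <<'EOF'
app-a libfoo
libbar libfoo
app-b libbar
app-c libbaz
EOF

# Start from the changed library and expand until the set stops growing;
# everything added along the way must be rebuilt against the new libfoo.
affected="libfoo"
while true; do
    new=$(awk -v set="$affected" '
        BEGIN { n = split(set, a, ","); for (i = 1; i <= n; i++) s[a[i]] = 1 }
        { for (j = 2; j <= NF; j++) if (($j in s) && !($1 in s)) { print $1; break } }
    ' /tmp/builddeps.txt | sort -u)
    [ -z "$new" ] && break
    affected="$affected,$(printf '%s\n' "$new" | paste -sd, -)"
done

echo "$affected"   # the changed lib plus everything needing a rebuild
```

Note that app-b lands in the set purely transitively, via libbar. The real tooling (apt-cache rdepends and friends) has to answer the same question, further complicated by version constraints; the point is just that the rebuild set grows transitively rather than being a single in-place file swap.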
I think it’s worse than that in that upstream webkit and khtml don’t ship security patches that can be applied without breaking ABI. Apple ship a whole new OS release when there’s a WebKit patch - almost a gig on my phone - and that isn’t consistent with the Debian model.
The alternative is just “don’t ship webkit” which … honestly I’d respect that decision if that’s what they did. But it precludes a lot of useful software that uses webkit in completely safe contexts like browsing HTML documentation and things.
This isn’t a change. Browser engines haven’t been covered by Debian security support for some time now. The only browser covered, IIRC, is Firefox, because they break with proper protocol and just ship new versions during a cycle.
To me it is more a “shared lib management” issue (again) than about keeping an old / unmaintained package, but I may have misunderstood the text :)
The situation is fairly specific to browser engines. They are prone to many security bugs and do nothing to aid in backporting fixes, so they would just be a lot of work for a volunteer security team to stay on top of.
Wouldn’t Firefox ESR (extended support release) work?
That’s a browser, not an engine, and it is indeed the only browser that can be used securely from Debian, since they bend the rules and just update the whole thing on each new ESR.
I see.
Thanks for the explanation :)
Woot!
My servers are updated. No problems so far.
I noticed that when you look up a package on packages.debian.org, bullseye is still listed as “testing”, and buster is still listed as “stable”. So I guess the release is still in progress.
More likely p.d.o just needs a manual update and lags behind a little bit. dists/stable points to bullseye on the mirrors, and debian.org/download gives you a bullseye ISO, and that’s pretty much what counts.

I thought GNOME 40 was already considered stable. Why are new distro releases still shipping 3.x?
Because GNOME 40 was released after the Debian 11 feature freeze.
That’s right: bullseye soft freeze was February, GNOME 40 released in March.
IMHO (as a Debian developer) we should have delayed the soft freeze and got 40 into bullseye, if there was sufficient confidence that 40 really was stable enough (we’d have had to evaluate that before 40 actually shipped).
Does GNOME not make it into backports?
FWIW (not much, I know!), I think that this is exactly the right approach to take. There’s always one more update, one more feature. If you are aiming for stable releases (and I think Debian should), then you gotta draw a line in the sand at some point.
I love that Debian is run so well. I only wish more projects had a similar, healthy respect for stability.