So, if you have a Linux kernel driver that is not in the main kernel tree, what are you, a developer, supposed to do? (…) Simple, get your kernel driver into the main kernel tree (remember we are talking about drivers released under a GPL-compatible license here, if your code doesn’t fall under this category, good luck, you are on your own here, you leech).
Yeah, I can just see them accepting a kernel driver that works with only one product released by a single company. Moreover, interacting with those individuals on the LKML is the last thing any sane person would want to do if they wish to maintain their mental health.
It’s actually a pretty compelling reason to release something under a GPL license, at least to my non-business-brained self. Get free forward compatibility support and bugfixes from the community! You can even hide all the secret sauce in the device firmware and nobody cares except the Linux-Libre people. Do they actually accept single-company single-product drivers though?
You don’t really get to rely on community bugfixes. Some trivial changes may be made to keep the driver working in new versions of the kernel, but otherwise you’re supposed to be the maintainer. Nobody will buy your device and debug it just to keep it working. Some changes may come from people who already own it, but only if you have the right kind of users.
This is at the core of why Linux is ultimately going to be replaced by something else.
The overhead from the constantly shifting internal API, the changes it forces all over the tree, and the bugs those changes introduce is unsustainable.
The replacement is no doubt going to be a microkernel multiserver system.
Changing APIs are good. They allow things to get better rather than just having more and more layers of legacy crap that is poorly supported.
Neither extreme is healthy.
One extreme says APIs must remain stable and may never change, even when requirements change. Windows often tries to provide this (in practice, it falls short - I have more success running ‘90s Windows programs in WINE on an AArch64 Mac than on an x86-64 Windows 11 PC). This makes it hard to evolve to meet changing requirements.
The other extreme says that interfaces are unstable and can change whenever it’s needed. This makes it hard for anyone to live downstream. You end up with a load of half-finished things where people gave up chasing API changes while trying to upstream things, or people simply giving up. The pressure to upstream things can also cause the same failure mode as the other extreme: once an API has a load of in-tree consumers, the person changing it has to update them all and so APIs are de-facto frozen because no one wants to risk breaking their in-tree consumers (this is mitigated if you have a lot of tests).
For kernels, I think FreeBSD has the right balance. The KPI / KBI is expected to remain stable within a major release. A kernel module built against 14.0 should work with all of the 14.x series (note: if it depends on things outside the base system, this is not the case. Drivers ported from Linux often depend on the LinuxKPI kernel module, which tracks Linux KPIs and so may break consumers on a regular basis). Between major releases, the KBI will definitely change (struct fields may be added in core data structures: some of these are designed to allow addition during a major release, others may have padding added just prior to a .0 release), but KPI changes that are not backwards compatible should be intentional and documented.
In the past few years I’ve sometimes worried that maybe the world doesn’t want to support more than one POSIXy open source OS. But working on FreeBSD drivers lately, I agree with what David says here and hope we can survive, because it’s a lot more pleasant to develop on FreeBSD. Including simple things like being able to target releases a couple of versions back (N-2).
It’s not about GPL or open source; you can meet the spirit of that (see e.g. any board support package for a wifi router or other complex SoC) while still suffering greatly from this policy… if you are maintaining something complicated, like say an Ethernet switch OS, you are going to incur massive technical debt from day 1. There’s no way to coax billion-dollar IP and manufacturing superpowers into behaving the way Linux policy wants. So in practice that means you’re on a frozen Linux kernel version, hoping the vendor’s SDK isn’t a shit show, and then you pay a heavy price down the line once that inevitably becomes untenable. In layman’s terms, that eventually looks like unpatchable bugs/security issues/CVEs for potentially expensive and still-relevant goods.
Does this still hold true even on Linux 6.x kernels?
I once tested a statically linked executable that I built on a recent 6.x system — it didn’t use any especially modern kernel interfaces and was limited to classic POSIX system calls, and it ran fine on a 2.6 Fedora system from 2007.
Also, the classic ifconfig still works, so I presume lots of old programs would work as well, even if they rely on interfaces that aren’t actively extended or maintained anymore.

I once downloaded a copy of Blender from like 2003 and it worked. Graphics were a bit glitchy.