This is a pretty strange article. I like and use FreeBSD, but I couldn’t find their own motivation for the change in the text. All these lists are pretty generic – what I was expecting is to see where Linux couldn’t fit in their use case and why FreeBSD does.
I agree. It’s not very technical either and it would have been nice if there were actual, relevant comparisons.
Something I’ve seen is actually not really OS-related - not that I know why - but for some reason FreeBSD does an excellent job at providing the latest software packages while keeping stability.
In Linux land you usually have to choose. Do you want an old package, do you want to add some upstream repository, or do you want some less stable rolling-release thing? And it gets even harder when combining packages (PostgreSQL and PostGIS being a famous example), which is why Docker is much more needed, in my opinion.
On FreeBSD I can say I want Postgres 11 with PostGIS 3.2 and I get it, without self-compiling, without third party packages. Or let’s say I want nginx with certain compile options or third party modules. I can just pkg install it and it works.
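As a sketch, that session might look like this (the exact package names and available versions depend on the ports snapshot you track, so treat these as illustrative):
    pkg search postgis                          # see which PostGIS versions are packaged
    pkg install postgresql11-server postgis32   # names illustrative; pick what you need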
That’s something that feels very strange to me, because one would imagine that the user count really would make a difference and strongly favor Linux. But in reality you have something Debian- or RedHat-based that’s usually very out of date unless you add third-party repositories, which bring in their own problems, or you have something like Arch, which is small in official packages and large but highly unstable with the AUR, doesn’t provide configuration options, and forces whatever is the latest release onto you. So PostgreSQL 11 with PostGIS 3 isn’t an option.
This tends to really baffle me. I know it’s part of why Docker picked up, but it feels like an oversized hack for something that obviously can be solved, even with a comparatively small number of developers. I would really love to see something comparable in the Linux world. And no, snap and flatpak aren’t really solutions here.
It’s also easy in FreeBSD to build a custom package set if you do want to build from source (for example, to enable extra security mitigations that aren’t the default or disable an optional dependency that increases your attack surface but doesn’t add features that you’re using). The tool that builds the entire ports tree to produce packages is open source and it can also be used to create VM disk images that contain freshly-built packages (and, optionally, a freshly built source tree if you want some custom options there) and other things that you’ve built separately. If you’re deploying VMs, then it’s easy to have a ‘git ops’ workflow where pushes to your repo trigger Poudriere to build a new VM image with the latest package versions and so on and to aggressively customise this (for example, excluding bits of the base system that you don’t want, such as the toolchain).
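A minimal sketch of that package-building flow with Poudriere (the jail name, release version, and package list here are illustrative):
    poudriere jail -c -j builder -v 13.1-RELEASE     # create a build jail
    poudriere ports -c -p default                    # check out a ports tree
    echo www/nginx > pkglist.txt                     # what to build
    poudriere options -p default www/nginx           # pick compile options interactively
    poudriere bulk -j builder -p default -f pkglist.txt
The result is a package repository you can point pkg(8) at from any number of machines.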
I use FreeBSD (if I’m going to use Unix, I might as well use one with good taste), but:
UFS2 is absolutely not a good filesystem. It’s very fragile relative to ext4, which itself isn’t great. ZFS is excellent, but the problem is that for small systems (i.e. VMs) it can be quite heavyweight. It’d be nice to have a better filesystem for the smaller-scale stuff, or to have ZFS fit under a gig of RAM.
I think still advertising jails as if they’re a contender in 2022 is misleading. They completely missed the boat with tooling, let alone containerization trends.
My problem with Bhyve is guest support, but that’s why I run ESXi.
I am similarly biased towards FreeBSD (if I’m going to use an implementation of bad ideas from the ‘70s, at least I’d like a clean and consistent implementation of those bad ideas) and wanted to amplify this point:
Jails are a superior mechanism for doing shared-kernel virtualisation to the mixture of seccomp-bpf, cgroups, and namespaces that can be assembled on Linux to look like jails. Lots of FreeBSD-related articles like to make that point and they are completely missing the value of the OCI ecosystem. Containers are a mix of three things:
A reproducible build system with managed dependencies and caching of intermediate steps. FreeBSD has some of this in the form of poudriere, but it’s very specialised.
A distribution and deployment format for self-contained units.
An isolation mechanism.
Of these, the isolation mechanism is the least important. Even on Linux, there’s a trend towards just using KVM to run a separate kernel for the container and using FUSE-over-VirtIO to mount filesystems from the outside. The overhead of an extra cut-down Linux kernel is pretty small in comparison to the size of a large application.
The value in OCI containers is almost entirely in the distribution and deployment model. FreeBSD doesn’t yet have anything here. containerd works on FreeBSD (and with the ZFS snapshotter, works well) but runj is still very immature.
I’m not sure what this means. Bhyve exposes the same VirtIO devices as KVM.
Bhyve may or may not be better than KVM, but the separation of concerns is weaker. There’s a lot of exciting stuff (e.g. Kata Containers) being built on top of KVM. Windows now provides a set of APIs to Hyper-V that are direct equivalents of the KVM ioctls, which means that it’s easy to build systems that are portable between KVM and Hyper-V. There’s no equivalent for bhyve.
I haven’t used UFS2 for over a decade but I’ve run ZFS on systems with 1GiB of RAM with no problem. The rule of thumb is 1GiB of RAM per 1TiB of disk. Most of my VMs have a lot less than 1 TiB of disk. You need to clamp the ARC down a bit, but the ARC is less important if the disks are fast (and they often are in VMs).
VMware has drivers for weirder guest OSes (including older versions of mainstream stuff… you know, NT), KVM doesn’t. That, and I’ve had very bad experiences with KVM virtio, but that doesn’t reflect on Bhyve.
This is probably my own paranoia fed by misinfo (or just plain outdated info) about ZFS resource usage.
Please check page 102 of this:
ZFS doesn’t really need that much RAM. The ARC is supposed to yield to your programs’ demand. But if you’re not comfortable with how much it occupies you can just set arc_max to a small size.
The ZFS recommendations seem to be based around the idea that you want the best performance out of ZFS, running large NFS/iSCSI/SMB hosts with many users. I think FreeNAS also set the bar high just so that users trying to use minimal hardware would not have a reason to complain if it didn’t work very well for them.
However, in practice, I rarely need top performance out of ZFS, so even with 512MB of RAM I can use it comfortably for a small VM with just a few services. Granted this was a few years ago, so maybe 1GB is needed nowadays.
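As a concrete sketch, clamping the ARC on FreeBSD is a one-line tunable (the 512M value is an example, not a recommendation):
    # /boot/loader.conf
    vfs.zfs.arc_max="512M"
On newer FreeBSD the corresponding sysctl (vfs.zfs.arc.max, in bytes) can, as far as I know, also be adjusted at runtime without a reboot.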
NFS from host?
I use ESXi as my host, so probably not.
Shared, from another guest, then?
I wouldn’t say UFS is great, but I really think Ext4 is worse. I think at least part of the bad reputation also comes from the fact that UFS isn’t as much into version numbers; UFS implementations change.
But, to not make this FreeBSD vs Linux, see XFS vs Ext4, where there’s a similar situation. Every time Ext4 gets an edge, like metadata speed, XFS ends up being improved and surpasses Ext4 again.
Similar things can be said about UFS, at least for the remark of it being “fragile”.
But I’d like to hear it if you have anything to back that claim.
That said I would have agreed with you there about a decade ago.
Lots of bogus excuses for a lot of busy work. Sorry, I know that is not constructive, but I just could not find a single business reason in this post that would justify this migration.
Well, if the business case was “we had problems with Linux and we’re (pretty) sure that, with FreeBSD being done by one team, those will not pop up”, that’s a massive business reason.
A few years ago there were lots of people running their webservers on FreeBSD. The difference is just very small (if your stack is available or easily buildable), so the business case can be a very small difference in speed, maintainability, or the team’s knowledge.
But yeah, the article doesn’t really persuade me either.
Learning how to manage Linux systems is probably a lot cheaper than migrating to another OS if the reasons are mostly opinionated and superficial.
Linux and related distributions now have contributions from many companies, many of which (e.g. Red Hat) push (justifiably) in the direction of what is convenient for them, their products, and their services. Being big contributors to the project, they have big clout, so, indeed, their solutions often become de-facto standards. Consider systemd - was there really a need for such a system? While it brought some advantages, it added some complexity to an otherwise extremely simple and functional system.
Again the same tired argument:
Have you considered that Red Hat’s commercial interests could align with yours? The article speaks of “we” so I assume some commercial enterprise.
Yes, systemd is absolutely better technically than the (per-distribution different) mess of init scripts or upstart files that it replaced. We even have Upstart as an example of a new system from a commercial actor (Canonical in this case) that didn’t get traction because it technically wasn’t very good.
Distributions mostly come with start-up scripts, so if the way services are started is a large source of work for you, then there is something wrong with your workflow.
Anyway, isn’t it sorta similar with FreeBSD too? iXsystems supports ZFS on FreeBSD, hence the support is pretty good. Refusing contributions from commercial actors just because they’re commercial, when they have money to throw at the problem of developing a good system, is not a good thing, in my book.
I wonder if we won’t be seeing more of this. I feel like the whole systemd debacle but perhaps more importantly the philosophical, financial and organizational changes that caused it speak to a real problem in Linux-space where a small number of large-ish companies are determining the future of the platform based on their best interests that may not actually intersect with those of the community.
Like, systemd actually DOES seem to bring some value to the desktop, but to the server? The value it brings is much less clear, and the harm it causes by violating decades-old interface contracts is non-trivial.
The decades-old interfaces were a crufty hack put together by grad students. All the commercial Unix variants ditched it long ago. Yes, BSD’s user space is stable, but it’s also awful. It was awful in the 1990s when I started in on Linux and BSD, and it’s awful today.
I only use Linux on servers these days, and I love systemd. If I never have to write another SysV init script again, I will be a happier person. Add things like eBPF? And finally we’re getting btrfs on root with SuSE and getting some of the nice stuff that Solaris had (snapshot before upgrade for trivial rollback). It’s not traditional BSD, and it’s better.
systemd is the main thing I’m missing on FreeBSD. BSD rc is an improvement over System V init, but not by much.
Comparing Linux-style SysV init to systemd, I get your point, but that’s not how rc.d typically works on a modern BSD.
Take Consul on OpenBSD for example.
It mainly contains:
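Something like the following - a sketch from memory rather than the exact script shipped by the OpenBSD consul package:
    #!/bin/ksh
    daemon="/usr/local/bin/consul agent -config-dir=/etc/consul.d"
    . /etc/rc.d/rc.subr   # rc.subr supplies start/stop/restart/status
    rc_bg=YES             # background the daemon
    rc_cmd $1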
Then I can add consul_flags="whatever" in /etc/rc.conf.local, which describes all my services, what runs and how it is configured. I can add comments, etc.
The equivalent in systemd, after removing everything not needed:
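Presumably something like this minimal unit (option values illustrative, not taken from any particular distribution’s package):
    [Unit]
    Description=Consul agent
    After=network-online.target

    [Service]
    ExecStart=/usr/local/bin/consul agent -config-dir=/etc/consul.d
    Restart=on-failure

    [Install]
    WantedBy=multi-user.target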
And it’s harder to just change some flags and get an overview of how the system is configured.
FreeBSD also has daemon(8), a simple tool that takes care of everything you might need for running a daemon: logging, automatic restarts, managing subprocesses, changing user, forking into the background, creating a pid file.
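For example (myapp and its paths are hypothetical; see daemon(8) for the exact flags):
    daemon -r -u www -P /var/run/myapp.pid -o /var/log/myapp.log /usr/local/bin/myapp
That one line gives you supervised restarts (-r), a dropped-privilege user (-u), a supervisor pid file (-P), and captured output (-o).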
I don’t disagree that systemd is better than what many distributions had before. It’s actually one of the reasons why I got interested in the BSDs. Some systems, like Gentoo or Arch Linux in the past, did a better job as well. Especially Gentoo’s OpenRC (also used by some others) comes pretty close to a perfect init, in my opinion.
Sorry, I don’t wanna talk about systemd or init systems. There is enough about that elsewhere, but the notion that things on the BSDs are as bad as they were on SysV init based Linux distributions is simply wrong.
About eBPF: dtrace is there and has great integrations, from Python to Postgres. And btrfs feels like a never-ending story; I think even on Linux it’s largely obsoleted by ZFS, which, just like dtrace, has been used in production in very large setups for many years.
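For a taste, the classic DTrace one-liner that counts system calls per process runs out of the box on FreeBSD:
    dtrace -n 'syscall:::entry { @[execname] = count(); }'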
See, that’s the thing with Linux, though. Things tend to be considered obsolete when they finally manage to stabilize. See PulseAudio and primusrun. I don’t think that’s necessarily bad. In fact it’s good that bad stuff gets replaced, but often it seems it’s not about the better option but the newer one, and RedHat certainly has an interest in pushing their products.
And then you have to hire huge DevOps/SRE teams just to keep things compatible with whatever is still supported. Of course that leads to the idea that you need to be able to pull things off Docker Hub and not actually manage the system, but outsource things to EKS or GKE.
And I say that as someone whose main source of income is consulting, helping companies with DevOps-related issues, Docker, Kubernetes, etc. It’s a mess, which is why companies throw large sums of money at people like us putting out the fires.
Whatever it is, being the “cool new thing” managers read in their magazines and “Google uses it” will always win. There’s no shortage of work in the industry if you jump on what’s hot. ;)
“systemctl edit consul” and listing .override files should be the equivalent in systemd world.
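For the record, that workflow looks roughly like this (the drop-in path is standard systemd behaviour; consul is just the example unit):
    systemctl edit consul          # opens /etc/systemd/system/consul.service.d/override.conf
    systemd-delta --type=extended  # lists units that have drop-in overrides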
Which I would say is more involved than having a central config file, often with additional specific options. Not by much, but having an rc.conf that often contains not just flags but the actual listen address or, in the case of a network adapter, even which device is configured and how, is certainly nicer than doing the same in systemd. Saying that as someone who does both on a semi-daily basis.
It’s because at one point I feel more like a user and at the other more like the designer. I think it scales very well with how involved the service is, compared to unit files, where I have to know both how the service works and how unit files work.
Listing override files to get an overview of a system isn’t exactly a nice workflow, in my opinion. I feel spoiled when I have things grouped, commented, etc. It’s a bit of that Terraform feel, where in a well-written config you see everything grouped together at first glance, with useful comments where necessary.
Anyway, that’s not even what I tried to argue about. I just wanted to support the statement that the BSDs’ rc.d isn’t Linux SysV init. And while I understand people have different tastes, switching to BSD isn’t going to bring that back, nor is using one of the many other options out there. I know my way around systemd (and the slew of related tools and services), since I have to work with it on a daily basis because it’s currently the dominant init system, yet I still wish something like OpenRC would be adopted more widely. Aside from some small distributions, with Alpine and Gentoo there seem to be two more widely used ones, for which it appears to work just fine.
Glad you’re happy with systemd.
The issue here is not the change, but an unwillingness to actually COMMUNICATE that change so sysadmins managing production systems in the field have to find out the hard way that their expectations have not been met.
This is most emphatically NOT the way to develop a very complex software stack.
What type of communication, and from whom, would be good enough?
Does this count? https://github.com/systemd/systemd/blob/main/NEWS
I’m looking forward to the communication SysV init has. Oh wait, it doesn’t and every distribution was just doing its own thing?
Honestly? No.
As a UNIX administrator I expect man pages to document the interfaces necessary to operate the system.
What’s lacking in the systemd man pages?
https://man7.org/linux/man-pages/man1/systemd.1.html
Can you provide examples of what you feel was not communicated? I’m not sure I understand the complaint.
When I want to change some detail around DNS resolution, I go to modify /etc/resolv.conf, but that’s not actually the correct mechanism anymore, but I can’t find what the right mechanism IS in the man pages or anywhere else I know to look.
And in which man page do you spot that /etc/resolv.conf was the right place to look, apart from the man page for resolv.conf?
Discovering /etc/resolv.conf is not the most intuitive thing in the world either.
There are two important issues in good UI design (okay, more, but two that are relevant here):
Discoverability
Consistency
*NIX systems are typically terrible at the first of these. FreeBSD isn’t actually too bad in the first order here because you can add nameservers by running bsdconfig and going to the network settings part. This isn’t great though, because it doesn’t tell you what it’s editing, and so the only thing that you learn is that you can edit the settings via that UI, not what the underlying service is. RedHat has a similar tool whose name I’ve forgotten. I don’t know what the Debian / Ubuntu equivalent is but I assume there is one.
I learned about resolv.conf on Linux around 2000. For consistency, I’d expect to go and edit it today. Trying this on a FreeBSD and an Ubuntu system, I learn quite similar things: it’s not the right thing to do anymore. Both then do well on discoverability: on Ubuntu, it tells me that the file was created by systemd-resolved; on FreeBSD it tells me that it was created by resolvconf. In both cases, I can go to the relevant man page and find out what the new thing is. Whether I prefer systemd-resolved or resolvconf is largely a matter of personal preference (I do enjoy the fact that there’s now a file, complete with man page, on FreeBSD called resolvconf.conf, because what problem isn’t made better by adding an extra layer of indirection?).
Can we please stop this back and forth? I realize I was being unnecessarily incendiary by using the word ‘debacle’, and if you contribute to the systemd project and I hurt your feelings, I sincerely apologize.
Were you writing these yourself before? Serious question: why?
Because I wrote daemons and they had to have init scripts.
I’ve been growing more curious about BSD lately, and it occurs to me that the tildeverse is a good way to try out lots of OSs in at least a server (non-GUI, non-desktop) setting. At the least I’m having fun collecting them all.
Docker images have become like an executable format for Linux server applications.
Not being able to support docker images is comparable to not being able to support ELF executables, IMO.
Relying on Docker Hub images is a major security risk. You don’t receive any message when an image author stops updating their image. I am sad, however, that I can’t use Docker images built by Nix on FreeBSD.
…Or just use Nix without Docker. Except I don’t think Nix runs on the BSDs either (except if you count nix-darwin).
You are right. Outdated docker images from docker hub can be a problem. However, there are tools available that scan docker images for security issues. Google Cloud uses this, for instance.
Sorry, but that’s just pure marketing talk, especially at a time when Kubernetes and others are moving away from Docker.
Aside from that, you can run Docker (and Kubernetes, btw) on FreeBSD.
I’ve heard the ELF comparison before, but for any technical purpose it’s simply wrong / marketing talk.
Oh, and I have to admit that I actually said something similar before, and I’m ashamed of it.
I think the thing coming closest is the Dockerfile (and the like) being a somewhat dominant way of specifying how software is run. It feels a bit like the Procfile that came slightly before, but it’s far more associated with Docker, Docker images, etc. than with what matters for actual real-life production setups. It’s part of why so many “Docker clones” exist out there, allowing devs to write Dockerfiles even though no Docker runs when the instructions are executed.
Yes, Docker did a great job defining an interface between those who write software and those who execute it, where the great part is simply that it’s not too complex and that it got popular. That’s why people like to compare it with ELF.
But that’s really it: the Dockerfile, as well as the fact that certain ways of configuration are used/preferred, that huge installation processes were simplified, and that there are no random state-holding directories anymore.
Parts of this also became necessary as cloud computing took over, and hard boundaries were defined for software, making deployment a lot easier.
But that works not only when executing in the cloud or when actually using Docker.
This is true, but misleading. Docker (Moby) is finishing a transition to be built on top of the OCI stack. Docker is a tool that uses containerd to build OCI containers. Kubernetes is a tool for managing containerd instances and using them to run OCI containers.
Kind of. You can run containerd and it can use runj to manage jails for deployment, but runj is still alpha-quality code. It’s easy to get into a situation where it can’t shut down a jail. You can also build Moby for FreeBSD, but it can also wedge in the same way and require a reboot to clean up the stale state.
According to https://wiki.freebsd.org/Docker, this is the current status of Docker on FreeBSD:
There are some good points here, but I think they’re overlooking security. Admittedly I am an OpenBSD fanboy, but FreeBSD, at least the last time I looked at it, lacked a lot of security features that are available in Linux (and, of course, OpenBSD).
Would you like to elaborate what security features you were missing from FreeBSD? Perhaps they could be added to the project ideas list on the FreeBSD wiki.
I agree. Every non-FreeBSD platform where I try to write compartmentalised software causes me to struggle due to the lack of Capsicum.