I’ve found myself gravitating toward Debian unstable on most of my personal laptops. It offers a reasonable tradeoff between “industry standard” and not being frustratingly out of date, without using a boutique OS (I’ve used Gentoo and Arch in the past, which can be fun but also consume a lot of time) or a quasi-commercial OS (Ubuntu, Fedora, openSUSE) with their regular rug pulls and weird forced decisions. I’ve never really been drawn to the different takes on state management or partitioned-security OSes, and Debian lets me focus on getting whatever else done.
I use FreeBSD on a variety of systems and it doesn’t suffer from these same drawbacks, especially on servers, but it has others: no support for current wifi standards, suspend/resume that is very hardware-dependent, and graphics that are always a sudden, unexpected learning event. If you have a desktop where standby doesn’t matter, the official nvidia driver is very stable and performant, and I typically use Ethernet. I do my best work on this system.
I have some NetBSD systems (in particular a T480) and really like it, but it needs more developer help to offer a compelling mobile option. I run a couple of OpenBSD systems as well, but wouldn’t want to use it for a desktop, for performance and filesystem reasons.
I’m pretty wrapped up in FreeBSD so there will be some implicit biases for a variety of reasons.
If I may use a five-dollar word, I would say FreeBSD has much better orthogonality than any Linux distro (although some approach it). Enough of this is cultural that it isn’t an accident, but to borrow a term, src is a “monorepo” (the whole base system lives in one tree), and this causes some natural alignment between the build system, compiler, libc, administration utilities, man pages, and kernel. There always seems to be some low-temperature debate about what should be in there (for instance, maybe it doesn’t make sense to include the compiler), but having the kernel, libc, man pages, and base utilities in one place is a big benefit versus Linux. It makes evolving interfaces a lot more natural, and yet FreeBSD still provides excellent backwards compatibility to at least 4.x.
What this means practically is that if you used a FreeBSD system 20 years ago and came back today, there would not be a lot of surprises, and mostly pleasant ones, like a better filesystem (ZFS) and block abstraction (geom) now being the norm.
For system development, it is really low-friction compared to others, especially the Linux kernel, which intentionally moves and breaks interfaces all the time to make out-of-tree development painful.
I benefited greatly from getting involved in FreeBSD, even when I was working strictly on Linux systems, so I would recommend it to anyone interested in operating systems. It will channel you in different directions than the loudest norms today like Docker and Kubernetes (although there is no technical reason these can’t be implemented directly or analogously), and this isn’t always a bad thing. Keeping an open mind and noting what is done well or what is lacking will help in both directions when comparing or deciding when to use what.
In terms of security, do BSD jails offer a better solution for isolating applications?
Linux is a bit behind in this respect, and the X Window System makes the problem worse by essentially letting any graphical application see key events from other applications or inject spurious ones.
Linux has so much, it is hard to pin down any fair comparison. I view jails as more of an administrative concern, like the Linux namespace+cgroup combo, and not a hard security feature, although these things don’t hurt when separating concerns. Landlock is a compelling option on Linux and is inspired by FreeBSD’s capsicum (for some reason, when Google sponsored a direct port, that didn’t get in); for something like a server process or even a browser I think these are good enough for the masses.
If you really care about isolation, the Qubes approach is a bit more plausible, because a virtual machine, even though an immense abstraction, has a boundary in the hardware that is much easier to reason about than any pure kernel approach. The CHERI project, which is FreeBSD-adjacent, aims to amend this extreme by moving capabilities into hardware, which might make you trust a kernel construct like jails more.
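(To make the capability idea concrete, here is a minimal Capsicum sketch in C++, assuming a FreeBSD system; the file paths are just examples. Once the process enters capability mode it keeps only the rights it already holds on open descriptors, which is the model Landlock approximates on Linux.)

```cpp
// Minimal Capsicum sketch (FreeBSD; compile with clang++, no extra libs).
// After cap_enter(), the process can only use rights it already holds
// on descriptors it already has; acquiring new resources fails.
#include <sys/capsicum.h>
#include <fcntl.h>
#include <unistd.h>
#include <cstdio>

int main() {
    int fd = open("/etc/motd", O_RDONLY);            // acquire resources up front
    if (fd < 0) { perror("open"); return 1; }

    cap_rights_t rights;
    cap_rights_init(&rights, CAP_READ);              // this fd may only be read
    if (cap_rights_limit(fd, &rights) < 0) { perror("cap_rights_limit"); return 1; }
    if (cap_enter() < 0) { perror("cap_enter"); return 1; }   // point of no return

    char buf[128];
    ssize_t n = read(fd, buf, sizeof buf);           // still allowed: CAP_READ
    int fd2 = open("/etc/passwd", O_RDONLY);         // denied with ECAPMODE
    printf("read %zd bytes; second open %s\n", n, fd2 < 0 ? "denied" : "allowed?!");
    return 0;
}
```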
Linux is a bit behind in this respect, and the X Window System makes the problem worse by essentially letting any graphical application see key events from other applications or inject spurious ones.
X has the capabilities to block this (this is the difference between ssh -X and ssh -Y: -X uses an “untrusted” connection, which puts the client in an isolation group, while -Y doesn’t; beyond that there is more that is even harder to use, like the access control extension, or outright running nested servers for even stronger separation), it just isn’t typically turned on. Some distros even configure it out entirely in their builds. But if you really wanted to, there are options available.
That is probably the strongest solution. That article mentions a couple of the other options but states that nested servers are the tightest barrier, which I’d agree with. Though it is interesting to note that it then punches holes in that barrier anyway in the name of convenient usability, like the clipboard. That’s the problem with these isolation schemes: you usually end up poking a lot of holes in them anyway, since a fully isolated thing is a pain to use.
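(For anyone who hasn’t seen how little X asks of a trusted client, a small Xlib sketch; any ordinary client on a trusted connection can poll the server-wide key state, which is exactly the exposure described above. Untrusted connections, as with ssh -X, are supposed to be isolated from this.)

```cpp
// Sketch: an ordinary (trusted) X11 client observing server-wide key state.
// Build: c++ snoop.cc -lX11
#include <X11/Xlib.h>
#include <cstdio>
#include <unistd.h>

int main() {
    Display *dpy = XOpenDisplay(nullptr);    // connect like any normal app
    if (!dpy) { std::fprintf(stderr, "no display\n"); return 1; }
    for (int i = 0; i < 50; ++i) {           // poll for ~5 seconds
        char keys[32];
        XQueryKeymap(dpy, keys);             // bitmap of ALL keys, regardless of focus
        for (int k = 0; k < 256; ++k)
            if (keys[k / 8] & (1 << (k % 8)))
                std::printf("keycode %d is down\n", k);
        usleep(100000);
    }
    XCloseDisplay(dpy);
    return 0;
}
```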
I’m honestly surprised you said Debian unstable. Do you run into many issues with broken packages or broken upgrades?
I’ve used Testing for about 15 years, but the few times I’ve tried unstable ended up with broken stuff at the worst possible times. The only problems I have with testing involve proprietary firmware (thanks Apple and Nvidia…) either moving around or dropping support for things.
I have not experienced an unusable system in two years. The case for unstable is that testing does not receive any security priority, so you can be running vulnerable versions for longer. Investing some time into setting up Btrfs snapshots would probably be wise, but I haven’t done so myself yet.
I check `apt dist-upgrade` for anything peculiar before accepting; in particular, there are sometimes days or even weeks where a dependency chain may be broken. An `apt upgrade` instead will usually work around that until it is fixed. This cycle towards 13 has had some big events, like 64-bit time, rearranging the kernel firmware packages, and Qt changes; Plasma 6 might be a big one too. I would expect future cycles to be a bit more tame, as t64 was a big one.
I have grievances against the OpenBSD file system. Every time OpenBSD crashes, and that happens very often for me when using it as a desktop, it ends with corrupted or lost files. This is just not something I can accept.
!!!
For comparison, ext3, with journaling, was merged into Linux mainline in 2001.
I developed embedded devices whose bootloader read and wrote ext4 files, including symlinks. There is really no excuse not to have a journaling file system on a system that’s larger than a fingernail.
For OpenBSD it may be a similar reason to why they got rid of Bluetooth: nobody was maintaining that code. It got old and stale, so they got rid of it. I actually like this approach in general; sadly you lose functionality, but it keeps the entire codebase “clean” and maintained.
My guess is that they need people to work on the FS issue. Or rather, they don’t have anyone on board who cares enough about it to actually write the code in a way that is conducive to OpenBSD’s “style”. Could be wrong, but that’s my assumption.
The pragmatic thing to do would be to grab WAPBL from NetBSD, since NetBSD and OpenBSD are still relative kin. Kirk McKusick still maintains UFS on FreeBSD, so SU+J works well there, but it would be a lot of work to pull OpenBSD’s UFS up to that level.
IIRC WAPBL still has some issues and is not enabled by default.
Its primary purpose is not to make the filesystem more robust but rather to offer faster performance.
I’ve never experimented with SU+J, but I’d like to hear more feedback on it (:
I’m not sure how deep WAPBL goes, but a journal helps you close some corruption windows on an otherwise non-atomic FS by being more strict about sync and flush events, without killing performance. You can also journal data, which is an advantage of taking this approach, although I don’t know whether WAPBL offers that currently. Empirically, NetBSD UFS+WAPBL seems fairly reliable in my use.
SU+J orders operations in an ingenious way to avoid the same issues and make metadata updates atomic, at the expense of code complexity; the J is just for pending unlinks, which otherwise have to be garbage-collected by fsck. A large, well-known video streamer uses SU+J, so it is well supported. Empirically SU+J is a little slower than other journaling filesystems, but this might be as much implementation as algorithm.
Nice, I had not seen that thread. I think the data journaling they are discussing would be important for OpenBSD: on FreeBSD, for instance, UFS is used in specific scenarios like embedded devices or fail-in-place content servers, and ZFS is used anywhere data integrity is paramount. WAPBL was created in response to the complexity of SU+J, which it seems OpenBSD was also bitten by. For that and for easier code sharing I would be inclined to go the WAPBL direction, but there may be other merits to the SU+J direction in terms of syncing FFS and UFS against FreeBSD.
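(A toy sketch of the write-ahead discipline being discussed, in C++; this is just the shape of the idea, not WAPBL’s actual on-disk format or API.)

```cpp
// Toy write-ahead discipline: log the intent durably, apply in place,
// then mark it committed. Crash replay redoes any BEGIN without a COMMIT.
#include <cstdio>
#include <string>
#include <unistd.h>

void journaled_write(FILE *journal, FILE *data, long off, const std::string &bytes) {
    // 1. The intent must be on disk before the in-place write starts.
    std::fprintf(journal, "BEGIN off=%ld len=%zu data=%s\n",
                 off, bytes.size(), bytes.c_str());
    std::fflush(journal);
    fsync(fileno(journal));

    // 2. Apply the update in place and push it to disk.
    std::fseek(data, off, SEEK_SET);
    std::fwrite(bytes.data(), 1, bytes.size(), data);
    std::fflush(data);
    fsync(fileno(data));

    // 3. Mark it done. A crash before this line means replay re-applies
    //    step 2, which is idempotent; a crash after it means a no-op.
    std::fprintf(journal, "COMMIT off=%ld\n", off);
    std::fflush(journal);
    fsync(fileno(journal));
}

int main() {
    FILE *journal = std::fopen("journal.log", "a");
    FILE *data = std::fopen("data.bin", "w+b");
    if (!journal || !data) { std::perror("fopen"); return 1; }
    journaled_write(journal, data, 0, "hello");
    return 0;
}
```

Soft updates reach the same metadata consistency by carefully ordering the in-place writes instead of logging intents first, which is where the code complexity mentioned above comes from.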
This is really surprising. I thought this was one of those instances of the OP “handling things wrong”, but it actually doesn’t seem that OpenBSD natively supports anything other than FFS and FFS2 (and without soft updates, as noted above).
This does seem a bit over the top. MPLS does not negate the utility of traceroute, because traceroute never claimed to work below the IP level, and if you’ve actually worked in a service provider network (I just started at my third), it comes in pretty handy. You still have to use your brain, and possibly your relationships, to navigate any network path. But seeing basic layer-3 pathing is a bread-and-butter skill in any multihomed network, and you don’t need to be a genius to decipher some useful information about which providers you are using for a particular traceroute, because those hops are guaranteed to be layer-3 adjacent.
Does anyone have any insight into this, and what’s the latest going on at VMware after the acquisition? Is this a good thing for Fusion/Workstation users, or just the first step on a long road of gradual decay toward an eventual unsupported dusty death?
My suspicion is that Broadcom is winding down these products but has to maintain them for at least a few years yet due to existing support contracts. Someone in middle management thought it would improve the company’s image a little bit to release these for “free” while they are still supported.
(The company I work for has a lot of vSphere, which was already eye-wateringly expensive before VMware was purchased by Broadcom. Earlier this year when it came time to renew the support contracts, they literally tripled the price. Our company said, “no thanks,” and we are now running thousands of vSphere hosts and a bunch of vCenters with zero support while whole teams scramble to transition our services to a mix of OpenStack and Kubernetes.)
I’m not a heavy user of either product, but Broadcom previously added some kind of non-commercial license for both that was useful to me for playing with retro operating systems and for checking/improving various FreeBSD emulated-device drivers. From a casual perspective it seems like they are still working on both, and neither seemed to be the main focus of VMware before the acquisition, so there has been no perceptible change in quality (which is merely acceptable).
It’s a little crazy this didn’t happen a long time ago, under VMware, to try to keep some level of relevance for the underlying hypervisor and device model. People seem to think Broadcom is the only greedy company, but VMware was always a very greedy company.
I self-host Miniflux, which is great at fetching and storing the feeds and read status. The default web UI is mediocre, but there are some nice third-party clients, like Reeder on iOS.
The default web UI is minimal yet robust, and powerful for what I expect from such a reader. I love the recently added ability to execute a global user script by default: I do a lot of UI tweaking through Tampermonkey, and now I register these scripts directly in the Miniflux settings, which lets me enjoy my UI tweaks on any device without an extension.
Some other things I do with Miniflux: https://morgan.zoemp.be/reading-rss-in-peace-with-a-few-miniflux-hacks/
A system that is intentionally restarted regularly is going to have certain things figured out, like load shedding or a maintenance window. But it is also good to know that something can keep on ticking: no leaks, no excessive fragmentation, whatever. Ideally you would figure out a way to know you can do both.
I wonder: you could just keep one instance (or a tenth of your instances, or something) on no-restart, and hopefully pick up/record/instrument and address whatever weirdness comes up on them, without having to worry about your whole fleet being subject to it.
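(A cheap way to carve out that cohort deterministically is to hash a stable instance identifier. A sketch only; the hostname-as-ID and the 10% threshold are arbitrary choices, not anything from the thread.)

```cpp
// Sketch: pick a stable ~10% of the fleet to exempt from scheduled
// restarts, derived from the hostname so the cohort never churns.
#include <cstdio>
#include <functional>
#include <string>
#include <unistd.h>

bool exempt_from_restart(const std::string &instance_id, unsigned percent = 10) {
    // std::hash is stable for a given build, which is all we need here.
    return std::hash<std::string>{}(instance_id) % 100 < percent;
}

int main() {
    char host[256] = {};
    gethostname(host, sizeof host - 1);
    std::printf("%s: %s\n", host,
                exempt_from_restart(host) ? "long-lived canary" : "restart as usual");
    return 0;
}
```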
Your own C++ feels surprisingly good to write and usually to read. Somehow this rarely extends to Other People’s C++ (OPCPP), though maybe environments with draconian restrictions, like embedded or kernel work, avoid that.
I’ve found that far less true in C++ code written after C++11. LLVM’s ‘tasteful subset’ of C++ ended up looking a lot like modern C++ and most projects have converged on a similar style since then. Picking up C++ that was written prior to C++11 is exciting because it will typically be one of three largely distinct languages:
C, where someone decided to start using C++ features. Characterised by still calling malloc and casting the result, usually with a C-style cast. Modest use of inheritance but unsafe down casts wherever they’re needed. Often manages to be less safe than the C equivalent because people use one half of a feature that could allow safety.
Java-inspired C++ where everything is a virtual function, abstract classes are used as interfaces, and class hierarchies are very deep. Performance suffers and the need to inherit from many base classes as interfaces leads to diamond inheritance issues and means that you get bizarre behaviour if you forget to inherit the right way. The deep hierarchies are often fragile.
Functional-style C++ where people went way overboard with templates, but C++98 lacked r-value references and variadic templates and so these also end up being mixed with C preprocessor things. Once you understand the DSL that they’ve built, it’s fairly readable, but making changes is hard because everything goes through a huge number of indirection layers (which are erased at compile time).
In contrast, modern C++ typically uses smart pointers everywhere for memory management (these may or may not be the standard-library ones). Generic functions are implemented in headers and may forward to type-erased implementations. Virtual inheritance is mostly an implementation detail for providing the type-erased versions.
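(A small sketch of that modern style, for contrast with the three older dialects: ownership via smart pointers, a generic constructor in the header, and virtual dispatch hidden behind type erasure. The names are invented for illustration, not from any particular codebase.)

```cpp
// Modern-C++ style sketch: ownership via unique_ptr, a generic front-end
// forwarding to a type-erased implementation behind a small vtable.
#include <memory>
#include <string>
#include <iostream>

class Logger {
    struct Concept {                              // type-erased interface
        virtual ~Concept() = default;
        virtual void write(const std::string &line) = 0;
    };
    template <typename T>
    struct Model final : Concept {                // wraps any T with log(string)
        T impl;
        explicit Model(T t) : impl(std::move(t)) {}
        void write(const std::string &line) override { impl.log(line); }
    };
    std::unique_ptr<Concept> self_;
public:
    template <typename T>
    explicit Logger(T t) : self_(std::make_unique<Model<T>>(std::move(t))) {}
    void write(const std::string &line) { self_->write(line); }
};

struct StderrSink {                               // no inheritance required
    void log(const std::string &line) { std::cerr << line << '\n'; }
};

int main() {
    Logger log{StderrSink{}};                     // virtual dispatch is an
    log.write("type erasure, not class hierarchies");  // implementation detail
    return 0;
}
```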
I’m pretty firmly in the community-OS camp at this point, based on so many negative experiences over the years with commercial OS projects: that means Debian stable for Linux servers/containers and unstable for my personal Linux laptops/desktops. But I do tinker with a great many OSes and keep Fedora on a modded ThinkPad x430t (quad-core BGA rework). The new version is really well polished, especially the KDE Plasma 6.2 spin. DNF went from bizarrely slow to quite acceptable. The release-upgrade experience is still a bit more harrowing than an apt distro’s, but they have added some seatbelts and it has worked OK.
The only major critique I have is that the version of Firefox they ship is a lot slower on Fedora, and there is no great way to deal with it there: it is built with the system gcc instead of llvm/clang as Mozilla does, and it probably lacks PGO. The Debian in-box Firefox is slower than necessary as well, but Mozilla has an official Debian repo, which is highly recommended over gimmicks like Flatpak/Snap that cause more issues than good for such a critical component.
There was a lot of activity in the 1990s around rethinking the relationship between applications and storage. I still have a whole bookshelf of material about “persistent object systems”, a topic that used to be big enough to have its own conference but which has now completely disappeared from view.
Some of these ideas did ship commercially. There’s the obvious example of IBM OS/400 (quite a deep dive into the idea). Several filesystems appeared around this time with first-class attributes (BeOS, HFS, NTFS, etc.). None of them developed serious indexing or transaction features, though.
In 1995 I was working on the second release of Newton OS, which — unlike any other mass market OS I can think of — took this seriously enough that it had no filesystem, even internally, just a flat transactional blob store with an object serialization and indexing layer on top. Several years later I worked on WinFS, which attempted to merge SQL Server with NTFS (that did not go well). It turns out filesystems and databases look the same from a philosophical perspective, but actually have very different expectations from clients.
At this point everyone seems to have converged on a combination of (1) filesystems that don’t blow up with lots of little files, (2) a bunch of data in SQLite files, and (3) asynchronous indexing. You can find stuff a lot better, but applications are still as siloed as ever.
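(For flavor, the “no filesystem, just a flat transactional store plus a serialization layer” shape fits in a few lines of C++. A toy sketch only; it bears no resemblance to Newton’s actual soup format, and the indexing layer is omitted.)

```cpp
// Toy "no filesystem" store: objects serialize into one flat blob and a
// commit atomically replaces the previous generation.
#include <cstdio>
#include <map>
#include <string>

class BlobStore {
    std::map<std::string, std::string> objects_;  // in-memory working set
    std::string path_;
public:
    explicit BlobStore(std::string path) : path_(std::move(path)) {}
    void put(const std::string &key, const std::string &value) { objects_[key] = value; }

    // Write a whole new generation, then swap it in (rename is atomic on POSIX).
    bool commit() {
        std::string tmp = path_ + ".new";
        std::FILE *f = std::fopen(tmp.c_str(), "wb");
        if (!f) return false;
        for (const auto &[key, value] : objects_)  // trivial serialization layer
            std::fprintf(f, "%zu %s %zu %s\n", key.size(), key.c_str(),
                         value.size(), value.c_str());
        std::fclose(f);
        return std::rename(tmp.c_str(), path_.c_str()) == 0;
    }
};

int main() {
    BlobStore store("soup.blob");
    store.put("note:1", "buy milk");
    store.put("note:2", "call mom");
    return store.commit() ? 0 : 1;
}
```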
By bookshelf do you mean actual books? That sounds like an interesting collection for folk around here to reference. Any chance we can bother you to catalog the titles here or on librarything or something?
Thanks to Claude, that’s not as much trouble as it sounds! But good luck finding any of these now…
“Object-Oriented Concepts, Databases, and Applications” edited by Kim and Lochovsky, published by ACM Press
“Persistent Object Systems, Newcastle, Australia 1989” edited by Rosenberg and Koch, Springer Workshops in Computing
“Advances in Object-Oriented Database Systems” edited by Dittrich (Springer Lecture Notes in Computer Science #334)
“Object-Oriented Database Programming” by Alagić, Springer Monographs (1989)
“Computer Systems with a Very Large Address Space and Garbage Collection” by Bishop (MIT/LCS/TR-178)
“Implementing Persistent Object Bases: Principles and Practice” edited by Dearle, Shaw, Zdonik, labeled as “The Fourth International Workshop on Persistent Object Systems”
“Database Programming Languages: Second International Workshop” edited by Richard Hull, Ron Morrison, David Stemple, published by Morgan Kaufmann (1989)
“Data Types and Persistence”, Atkinson, Buneman, Morrison (Eds.), Springer Topics in Information Systems
“Readings in Object-Oriented Database Systems”, Zdonik, Maier (Eds.), Morgan Kaufmann
“Fifth international workshop on object orientation in operating systems”, Cabrera, Islam (Eds.), IEEE Computer Society, 1996
Probably the most inspirational things to me then were Peter Bishop’s doctoral thesis from 1977 (that MIT TR-178) and Eliot Moss’s work on persistent objects at U Mass (his papers are here).
I forgot two other important inspirations: Symbolics Statice (which I can’t even seem to find a description of) and the C++ product the same people made, ObjectStore.
which we may imagine results in less overall code, and more reuse.
Though the classic problem is that any bug where the application corrupts state results in corrupted persisted state.
It breaks the “reboot computer” or “restart application” method of solving problems / working around bugs, which is honestly the most reliable and timeless one.
It also reminds me of the arguments around RPC: whether network protocols should be tightly coupled to the programming language or more loosely coupled. Compared to what was proposed in the ’90s and such, it looks like looser coupling is what worked.
Most of these systems weren’t “passive” single-level stores like a huge virtual memory. There was some concept of committing or rolling back transactional changes to persistent objects, so the post-crash situation is basically the same as any database.
Schema evolution, on the other hand, was a big consideration, and I don’t know if anybody found a great solution to that. It’s the same problem we have today with document databases. But the obvious place to address it is the (de)serialization layer, and part of the point of persistent objects was that you didn’t have one.
While normally I would take this opportunity to remind people interested in database-oriented operating systems to check out IBM i (you should!), I’ve covered it a lot already. Instead, it’s worth noting the social context of this talk, since a lot of people here probably aren’t familiar with it. The classic Mac OS was beginning its decline, but it had a developer culture of its own. It was decidedly not a plain-text-oriented OS; it was instead totally graphical, lacking the concept of a command line. There was a focus on direct manipulation of objects, making them user-accessible, and as the paper mentions, the resource fork could provide structured data that was easily editable with tools.
I would also be interested in reading more about this. The main IBM site is very enterprise-focused, so I’m having a hard time making sense of what it is, what it does, and how it works.
Frank Soltis’s “Inside the AS/400” or “Fortress Rochester” (three editions of essentially the same book under those two titles; any of them is fine, since used prices have gone up) is worth getting for anyone really into operating systems. It goes very deep into the concepts and how everything fits together, though not as deep as a UNIX internals book, which at the very least has public headers to show off data structures and interfaces.
I’ll keep repeating it until I die: people don’t use desktop environments, people use apps.
Linux desktops have been fine for a long time now. But if a user can’t run Photoshop/MS Office/whatever else they run on their Macs and Windows, there’s no point.
Even as an atypical computer user (professional developer) this has been a blessing. Between Firefox and a reasonable POSIX environment, I have been able to do what I need to do on a computer for the past couple of decades. I do keep some Windows systems around to stay abreast of developments there, but I would be able to function just fine without them or other commercial operating systems like macOS.
The more specialized your pursuits, the more entrenched the commercial operating systems are, specifically with things like gaming or content creation (video, photo, 3D, and audio production tend to favor Windows and macOS). Cubase, Ableton, and a bunch of commercial VSTs keep me on my Windows machines, but that is just an occasional hobby for me.
I tried switching my non-technical partner from Pop!_OS to Linux Mint (GNOME to Cinnamon). She hated it, mostly because text never looked right (too small in some places, too big in others), and we couldn’t figure out how to get it looking right.
So, she never realized she was using a desktop environment, but she certainly cared when the desktop environment became less user-friendly.
The economics are not usually favorable but you /could/ write an awesome book on all these topics.
Right? :) I’ve thought about it and, actually, a publisher recently approached me to “write a book on something”. I do have ideas, but… when I research how much they take vs. how much I’d take… well, it’s really, really hard to justify the effort considering how little free time I have.
This has happened to me too. My limited experience is that if a publisher approaches you to “write a book on something” then they’re usually a fairly scummy publisher. :-/
The evidence suggests that the popularity of an operating system (indeed, of most software) is inversely proportional to its technical excellence.
You know, this is true - the more people use an OS, the more crappy things they discover about it. After all, it’s easy to imagine a particular OS is flawless if nobody has ever used it.
I’ve spent some time pondering the Plan 9 code and would lean toward this take. It is cute, borderline quaint, and often enough annoying in its (to me) awkward terseness. The stuff I care a lot about in an OS, like architecture support, bus and device drivers, and complications like block storage and networking stacks (which is where all the pain is if you want to be relevant), is nowhere near industrial grade, even adjusting for the era it was written. So while it was a research project with some interesting ideas, particularly around exposing various system resources over a network namespace (that part is cool, and a trendy academic topic in the early 1990s), that alone wasn’t enough to loft it into the category of a better mousetrap.
An example: https://github.com/plan9foundation/plan9/blob/main/sys/src/9/pc/pci.c
To be fair, you are pointing to a source that is 20 to 30 years old. 9front is under active development; this file doesn’t even exist there anymore:
https://git.9front.org/plan9front/plan9front/2b8e615cfc98718314ddc1151934ef2f24db8de3/sys/src/9/pc/f.html
I was not being duplicitous; this file existed as such under the commercial stewardship of the OS. 9front looks like it has begun to abstract PCI into an MI (machine-independent) layer, which is fundamental, but the code is still largely the same, broken out into a few files, e.g. https://git.9front.org/plan9front/plan9front/2b8e615cfc98718314ddc1151934ef2f24db8de3/sys/src/9/pc/pcipc.c/f.html. It is not an industrial-grade bus implementation, and this has been the most important bus of the last 30 years.
“Industrial grade” is something I typically use as an insult when describing code. What do you mean by that?
The bus is an industry standard; in this case, “industrial grade” means meeting the needs of real-world system construction. The inverse, not meeting that standard, would be a prototype, experiment, demonstration, limited-use system, toy, etc.
And, in what ways does the 9front code not meet the needs of a real world system?
Death by 1000 cuts. If you don’t need to see it, blissful ignorance is still bliss, which is what the quoted comment was all about.
Offhand, missing from that PCI layer: AER, various bridge and affinity topology handling, the IOMMU, SR-IOV, passthrough, and various link power management. So if you could even boot on a modern 8000-series Epyc server (which I am skeptical of), you are going to use more power to do significantly less work, before we even get into MI things like lock/lockless scaling primitives, NUMA, and scheduling.
Also to be fair, that file has been rewritten in http://www.collyer.net/who/geoff/9/9k-pf.tgz, as per the comment:
How does ArcaOS compare to ReactOS? It looks like they have commercial funding; that’s promising.
ReactOS is aiming to be a rewrite. ArcaOS is a bundle of drivers and tools around a commercial operating system.
The Windows equivalent would be if somebody had a license to distribute Windows 2000, bundled it with drivers for modern hardware, backported Firefox to it, and created a UEFI loader for it.
I feel like there would be a market for that OS.
A lot of folks I knew ran Win2K as their desktop OS for probably longer than they should have, because it really Just Worked.
Lovely comparison. Well done.
I am with @fs111 here. I think that would interest me, too.
For me personally, W2K was the peak of the NT timeline and it’s been accelerating downhill since.
Same, though I might draw the “peak” line at server 2k3 R2 x64.
It’s a slippery slope, but that was more or less the last of the old GDI-based line, before Vista and its built-in compositor.
I tried running XP 64 as my main OS just two years ago. It was a surprisingly good experience. https://www.theregister.com/2023/07/24/dangerous_pleasures_win_xp_in_23/
Just as MS Office 97 seemed bloated and sluggish when new, now it’s my go-to version of MS Word, because it’s tiny and fast. XP seemed bloated when new, but compared even to Win7, it’s tiny and fast. On an 8GB Core 2 Duo it flies along.
As far as I understand from the eComStation days, nobody outside IBM has the full OS/2 kernel source code, so there is never going to be a 64-bit OS/2. This is in contrast to ReactOS, which is open source.
OS/2 is a dead end. This product is primarily interesting to companies that still have legacy OS/2-based systems running.
Can that be true? I would imagine the features ArcaOS has added would require the full sources, particularly ACPI and EFI booting. If not, while bizarre, it would be a phenomenal testament to whatever modular kernel engineering decisions allowed this level of evolution.
I think it is true, and yes, it is a testament.
I interviewed Lewis Rosenthal: https://www.theregister.com/2023/01/19/retro_tech_week_arca_os/
And I reviewed ArcaOS: https://www.theregister.com/2023/09/04/arcaos_51/
It’s a remarkable piece of work. It’s still a pig to install, as OS/2 always was. It’s still fussy about hardware and disk partitioning, as ever. But thanks to lots of generic drivers, it’s way less so.
I could only get it to dual-boot with FreeDOS, nothing newer. If a disk was set up by Windows or Linux, then ArcaOS couldn’t understand it.
But it’s blazingly fast, it can talk to USB and SATA and UEFI, and to Wifi. It has a useful browser, which is more than eComStation does.
It felt even faster than XP. It can run rings around any 64-bit version of Windows. It has DOS, Win16, native OS/2 16-bit and 32-bit apps, and some Linux ports. There’s a WINE-like layer called Odin that can let some Win32 apps run. It can drive 64 CPU cores, and given more than 4GB of RAM, it can allocate the memory above 4GB as a RAMdisk.
It is astonishingly capable for an OS whose kernel is from 1998 or so (with later fixpacks and updates).
It probably doesn’t understand GPT partitioning, which is the default in newer OS installers. You could make at least Linux comply; I’m not sure if Windows will still oblige with MBR.
(I don’t know whether to use a laugh or cry response.) Oh no no no. Nothing remotely so simple and easy.
The big new feature in ArcaOS 5.1, and the main thing that drove the entire project, is UEFI support. That means it has to support GPT, as UEFI firmware and GPT partitions go hand in hand.
ArcaOS can boot from both BIOS and UEFI, and it can boot from MBR on both and from GPT when using UEFI. (I am not sure if it can boot from GPT on BIOS.)
No no. When I say it can’t understand partitioning schemes from other OSes I am being literal.
On BIOS with MBR, its native format, in my testing it can handle one primary FAT partition and then a second partition with ArcaOS in it.
It will not attempt to install if there is a primary partition containing anything but DOS. It can’t handle it if there’s a primary with NT. It can’t handle extended partitions created by other OSes. It can’t handle Linux setups, primary or logical or both. It can’t handle BSD setups; I tried FreeBSD, OpenBSD and NetBSD, plus WinXP 32-bit and 64-bit, Win7, and Win10.
For instance, ArcaOS needs gaps between partitions. You must have at least one empty cylinder between partitions: primary, gap, extended, gap, 1st logical, gap, 2nd logical, gap, etc. But even carefully creating this in (for example) GParted is not enough.
You need to create the partitions in ArcaOS or in an OS/2-compatible partitioning tool, such as DFSee.
https://www.dfsee.com/
Paid, not included with ArcaOS.
ArcaOS has its own internal LVM system, and that can’t coexist with modern LBA-aware partitioning. The OS/2 kernel still seems to think in terms of cylinders, heads and tracks, and the modern interpretation used by other OSes confuses it, fatally.
I could not get it to dual boot with any other 32-bit or 64-bit OS, at all, full stop.
Only with DOS. A single copy in a single partition.
The docs tell you to create all partitions only with ArcaOS itself before installing anything else. The snag is that other OSes then see that partitioning setup as corrupt and won’t use it, and if you let Linux or Windows repair it, then ArcaOS can’t use it.
Basically, you need to treat ArcaOS like ChromeOS: it needs to be the only OS on the hardware and it does not want to share with anything else. Do that, and there’s a much better chance things will work.
P.S. Yes, Win10 still supports MBR. It has a unique requirement though. As far as I can tell, you can only use MBR on BIOS machines, and only use GPT on UEFI machines. Windows won’t boot from GPT on BIOS or from MBR on UEFI.
Linux and other OSes don’t care; they can handle both, in any combination.
From what I’ve heard, they don’t have the source to some components (it may be as simple as IBM having lost the source), but they are allowed to make binary patches for what they don’t have source for. I’m not sure which components they’re binary-patching versus having the source for, though.
I think this is correct.
It’s a real shame. There will never be a 64-bit OS/2, but an x86-32 OS with in-kernel PAE (so it could allocate lots of RAM, have a big disk cache, and run lots and lots of 2GB apps) would be all I needed, I think.
I wonder if it’s because Microsoft still has licensing rights to chunks of OS/2 and are still holding a grudge.
I don’t think so.
I think that is partly why there is no FOSS release of OS/2.
IBM does not really care any more. Microsoft doesn’t either. I don’t think anyone in management really knows what they are any more.
I suspect the main motivations are just two non-technical issues:
There’s 3rd-party code in there that neither company has the right to release. Nobody wants to spend the money to go through it and clean it up.
Simple shame. I suspect there are a lot of ugly hacks in there.
In an ideal world, IBM and MS would reach some kind of mutual accord giving each other full rights to whatever of the other’s code each company holds, including the right to open-source it. Maybe talk to any surviving companies whose code is in there: RealPlayer is long gone, MP3 is open now, and there can only be ancient audio/video codecs… Maybe some hardware drivers? Try to get blanket permissions to release.
I’d believe that “MS <3 FOSS” if it released the source of all versions of DOS, Windows 1/2/3/9x, and all forms of OS/2, and made all its DOS apps freeware. There is precedent: it did so with MS Word 5.5 for DOS, as a Y2K fix for all older releases.
Just feels like more e-waste and pointlessness. We love buying all this cool, cute-sized tat, but you don’t need to spend £200 on raspis and £200 on auxiliary rack units to simulate k8s. Happy to see the YouTube comments calling this out.
I don’t think there’s anything morally objectionable about the existence of a class of physically-small-scale computer equipment aimed at homelab enthusiasts (but probably also genuinely useful for more practical uses).
I generally dislike criticisms of consumer products as wasteful, just because they’re not something the accuser has any use for. I don’t think I personally need this kind of small form factor rack, but it’s possible I will in the future, and I don’t think it’s any more wasteful than all of the computer equipment that I and other people do make heavy use of.
Yeah, I will be a bit harsh and say this kind of content is like the old infomercial for chronically online people. I noticed it recently with PC and gaming hardware reviews but it has been going on for a long time in many different formats. Geeks and “even” engineers are consumers to market to too.
I appreciate miniaturisation as much as anybody else, but I have to agree that I really don’t see the point in any of this. I don’t care so much about the waste (it’s not like this will ever get to the scale where that matters) as about the pointlessness of all this effort.
You put things in a rack because otherwise you have too many things; if you then go ahead and shrink it all back down, why not just use a single machine? I don’t quite understand why you’d put four RPis in a rack when a single larger CPU would be strictly superior in pretty much every aspect.
It’s difficult to experiment with the kind of networking that needs real hardware if you only have one machine: PoE, PXE, switching hardware, failure modes, …
It’s not like you have to use a miniature rack to hold a bunch of raspis. I would totally get one of these tiny racks, put in a UPS, a switch, some sort of SBC with GPIOs for sensors and control, a few U of HDDs for storage, and I guess an M4 Mac Mini for compute. The real waste would be me buying something full-sized instead of “cute-sized”, since I don’t have any need for enough stuff to fill a full-size rack.
The first part of the video was pretty wow (the firmware dance); this is what I would expect from a low-volume TI or Xilinx (AMD) dev kit, not a premium device like the Orin, which is more consumer-oriented and whose SoM is meant to go from dev all the way to production. The throttling also seems to be a common complaint. I’d wait a bit to see if this is some kind of early-release faux pas or if the platform is stable.
There were two problems I experienced in the video that are now solved:
1: I had it in 7W mode; I switched it to 25W and stopped getting throttling errors. Got way better performance too.
2: The NVMe drive I was using had problems. I scanned it, found the problems, and replaced it. Runs great.
Now the machine has been remarkably stable. I’ll be building another video with some updates.
At least it runs UEFI. You can update firmware from a host PC instead of doing the update from inside the OS too.
Unicomp EnduraPro with some modified keycaps, QWERTY layout. The trackpoint sucks compared to an M13 (which is too precious for daily use), which in turn sucks compared to any ThinkPad trackpoint, but I like it over any other keyboard, and the trackpoint is just handy enough to keep me from moving to some Model F or beam spring project.
When he finds out about Thunderbolt..
This is at the core of why Linux is ultimately going to be replaced by something else.
The overhead from the ever-shifting APIs, the changes they impose all over the place, and the bugs those changes introduce, is unsustainable.
The replacement is no doubt going to be a microkernel multiserver system.
Changing APIs are good. They allow things to get better rather than just having more and more layers of legacy crap that is poorly supported.
Neither extreme is healthy.
One extreme says APIs must remain stable and may not change ever, even when requirements change. Windows often tries to provide this (in practice, it falls short - I have more success running ‘90s Windows programs in WINE on an AArch64 Mac than on an x86-64 Windows 11 PC). This makes it hard to evolve to meet changing requirements.
The other extreme says that interfaces are unstable and can change whenever it’s needed. This makes it hard for anyone to live downstream. You end up with a load of half-finished things where people gave up chasing API changes while trying to upstream things, or people simply giving up. The pressure to upstream things can also cause the same failure mode as the other extreme: once an API has a load of in-tree consumers, the person changing it has to update them all and so APIs are de-facto frozen because no one wants to risk breaking their in-tree consumers (this is mitigated if you have a lot of tests).
For kernels, I think FreeBSD has the right balance. The KPI / KBI is expected to remain stable within a major release. A kernel module built against 14.0 should work with all of the 14.x series (note: if it depends on things outside the base system, this is not the case. Drivers ported from Linux often depend on the LinuxKPI kernel module, which tracks Linux KPIs and so may break consumers on a regular basis). Between major releases, the KBI will definitely change (struct fields may be added in core data structures: some of these are designed to allow addition during a major release, others may have padding added just prior to a .0 release), but KPI changes that are not backwards compatible should be intentional and documented.
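To make that last point concrete, here’s a hypothetical sketch (not an actual FreeBSD struct; every name here is invented) of how reserved padding keeps a KBI stable within a major branch:

```cpp
// Hypothetical illustration of KBI-friendly struct layout.
// Modules built against 14.0 bake in this struct's size and field
// offsets, so existing fields must not move for the life of 14.x.
struct foo_softc {
    int   fs_state;          // existing fields: offsets frozen for 14.x
    void *fs_driver_data;

    // Spares reserved just before 14.0 ships.  A later 14.x kernel can
    // turn one into a real field without changing the struct's size or
    // the offsets that 14.0-era modules compiled in.
    int   fs_spare_int[4];
    void *fs_spare_ptr[4];
};
```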
In the past few years I’ve sometimes worried that maybe the world doesn’t want to support more than one POSIXy open source OS. But working on FreeBSD drivers lately, I agree with what David says here and hope we can survive, because it’s a lot more pleasant to develop on FreeBSD – including simple stuff like being able to support releases back to N−2.
It’s not about GPL or open source; you can meet the spirit of that (see e.g. any board support package for a wifi router or other complex SoC) while still suffering greatly from this policy. If you are maintaining something complicated, say an Ethernet switch OS, you are going to incur massive technical debt from day 1. There’s no way to navigate the complexities of billion-dollar IP and manufacturing superpowers into behaving the way Linux policy wants. So in practice that means you’re on a frozen Linux kernel version, hoping the vendor SDK isn’t a shit show, and then paying some heavy price down the line once that inevitably becomes untenable. In layman’s terms, that eventually looks like unpatchable bugs/security issues/CVEs for potentially expensive and still-relevant goods.
I don’t understand this. Instead of talking about how much water datacenters waste, why not talk about the real issue: why do we let datacenters waste water at all for cooling, when they don’t need to?
The answer, of course, is money. So long as datacenters can do the wrong thing and waste water instead of building proper cooling, they will. This is simply a problem of economics. Water should cost datacenters significantly more than it costs residential users, which would encourage them to build proper cooling.
We can’t think that it’s normal and/or expected for companies to simply use resources until they negatively impact their surrounding environment enough to force change. It’s like dumping chemicals into a river until people start getting cancer - it’s not the right way to do things.
Exactly. Local authorities and water providers are free to prioritize drinking water and water for agriculture over keeping spicy autocomplete cool.
Data center companies are already eyeing nuclear power plants for power; they might as well develop water plants as well.
Evaporating water is the most efficient way to dump large heat loads. I think the issue is nuanced: there are places where water is abundant and places where it is not. That should be taken into account when locating mega datacenter projects, and when they sit in, e.g., the US Southwest, alternative heat baths like geothermal should be used. There are secondary concerns like cooling tower design.. the big power-plant-style cooler can use water that is not heavily treated, while a small chiller like you see on or adjacent to a building in a city might expect a much cleaner supply.
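Back-of-the-envelope, with standard textbook values for water (latent heat of vaporization $L_v \approx 2260\ \mathrm{kJ/kg}$, specific heat $c_p \approx 4.18\ \mathrm{kJ/(kg \cdot K)}$), evaporating a litre removes far more heat than warming one through a realistic 10 K once-through rise:

$$\frac{L_v}{c_p\,\Delta T} \approx \frac{2260\ \mathrm{kJ/kg}}{4.18\ \mathrm{kJ/(kg \cdot K)} \times 10\ \mathrm{K}} \approx 54$$

That factor of roughly 54 per unit of water is why evaporative towers are so hard to beat on efficiency, and why siting matters so much.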
The overall problem is big companies are good at greenwashing and patting themselves on the back for whatever they are doing without regard for whole picture thinking.
I don’t think residential users should get discounted water.
I think water (and fuel and electricity and maybe all things) should be priced according to the social and environmental costs of supplying it.
If that would make it unaffordable for regular people then wages and benefits must rise. People and businesses should be supported in making their dwellings/lives/operations more water and energy-efficient with cheap loans or grants.
Compared to raising wages, discounting residential use disproportionately benefits the most wasteful and produces weird incentives (like people running loads of ASICs in residences).
I found myself gravitating to Debian unstable on most of my personal laptops. It offers a reasonable tradeoff of “industry standard” and not being frustratingly out of date without using a boutique OS (I’ve used Gentoo and Arch in the past which can be fun but also consume a lot of time) or quasi-commercial OS (Ubuntu, Fedora, OpenSUSE) with the regular rug pulls and weird forced decisions. I’ve never really been drawn to the different takes on state management or partitioned security OSes and Debian lets me focus on getting whatever else done.
I use FreeBSD on a variety of systems and it doesn’t suffer from these same drawbacks, especially on servers, but it has others.. no support for current wifi standards, suspend/resume is very hardware dependent, and graphics are always a sudden, unexpected learning event. On a desktop where standby doesn’t matter, the official Nvidia driver is very stable and performant, and I typically use Ethernet – I do my best work on such a system.
I have some NetBSD systems (in particular a T480) and really like it but it needs more developer help to offer a compelling mobile option. I do run a couple OpenBSD as well but wouldn’t want to use it for a desktop for performance and filesystem reasons.
Leaving hardware support aside, what would draw you to run FreeBSD instead of a minimal Linux distribution?
I’ve never used BSD extensively, but it is an option I am looking at now for a new workstation I am building.
I’m pretty wrapped up in FreeBSD so there will be some implicit biases for a variety of reasons.
If I may use a five-dollar word, I would say FreeBSD has much better orthogonality than any Linux distro (although some approach it). Enough of this is cultural that it isn’t an accident, but to borrow a term, src is a “monorepo” (the base system is all in one), and this causes some natural alignments between the build system, compiler, libc, administration utilities, man pages, and kernel. There is always some low-temperature debate about what should be in there – for instance, maybe it doesn’t make sense to have the compiler – but having the kernel, libc, man pages, and base utilities in one place is a big benefit versus Linux. It makes evolving interfaces a lot more natural, and yet FreeBSD still provides excellent backwards compatibility to at least 4.x.
What this means practically is that if you used a FreeBSD system 20 years ago and came back today, there would not be a lot of surprises, and the ones you’d find are mostly pleasant, like a better filesystem (ZFS) and block abstraction (geom) now being the norm.
For system development, it is really low-friction compared to others, especially the Linux kernel, which intentionally moves and breaks interfaces all the time to make out-of-tree development painful.
I benefited greatly from getting involved in FreeBSD, even when I was working strictly on Linux systems, so I would recommend it to anyone interested in operating systems. It will channel you in different directions than today’s loudest norms like Docker and Kubernetes – although there is no technical reason these couldn’t be implemented directly or analogously – and this isn’t always a bad thing. Keeping an open mind and noting what is done well or what is lacking will help in both directions when comparing or deciding when to use what.
Thanks for your detailed explanation.
In terms of security, do BSD Jails offer a better solution to isolate applications?
Linux is a bit behind in this respect, and the X Window System makes the problem worse by essentially letting any graphical application see key events from other applications or inject spurious ones.
Linux has so much, it is hard to pin down any fair comparison. I view jails as more of an administrative concern, like Linux namespace+cgroup combo, and not a hard security feature although these things don’t hurt to help separate concerns. Landlock is a compelling option on Linux and is inspired by FreeBSD’s capsicum (for some reason, when Google sponsored a direct port, that didn’t get in) – for something like a server process or even a browser I think these are good enough for the masses.
If you really care about isolation the Qubes approach is a bit more plausible because a virtual machine, even though an immense abstraction, has a much easier to reason about boundary in the hardware than any pure kernel approach. The CHERI project, which is FreeBSD adjacent, aims to amend this extreme by moving capabilities into hardware - which might make you trust a kernel construct like jails more.
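For a feel of what Capsicum’s capability mode looks like in practice, here’s a minimal FreeBSD sketch (error handling omitted, and the file path is made up):

```cpp
// Minimal Capsicum sketch (FreeBSD).  After cap_enter(), the process
// keeps only the rights attached to descriptors it already holds;
// acquiring new resources via global namespaces (open(), etc.) fails.
#include <sys/capsicum.h>
#include <fcntl.h>
#include <unistd.h>

int main() {
    int fd = open("/var/log/app.log", O_RDONLY);   // hypothetical path
    cap_rights_t rights;
    cap_rights_init(&rights, CAP_READ, CAP_SEEK);  // limit fd to read+seek
    cap_rights_limit(fd, &rights);

    cap_enter();                  // enter capability mode for good

    char buf[128];
    read(fd, buf, sizeof buf);    // still fine: fd carries CAP_READ
    // open("/etc/passwd", O_RDONLY) would now fail with ECAPMODE.
    return 0;
}
```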
X has all the capabilities to block this (this is the difference between `ssh -X` and `ssh -Y`; -X uses an “untrusted” connection which puts it in an isolation group, -Y doesn’t, and there’s more beyond that which is even harder to use, like the access control extension, or outright running nested servers for even more), it just isn’t turned on typically. Some distros even configure it out entirely in their builds. But if you really wanted to, there are options available.
Not an expert on this, but AFAIK you need to run a pretty convoluted jailing setup with applications inside their own isolated X11 servers: https://wiki.gentoo.org/wiki/User:Sakaki/Sakaki%27s_EFI_Install_Guide/Sandboxing_the_Firefox_Browser_with_Firejail
That is probably the strongest solution - that article mentions a couple of the other options but states the nested servers is the tightest barrier, which I’d agree with. Though interesting to note that it then punches holes in that barrier anyway in the name of convenient usability, like the clipboard. That’s the problem with these isolation schemes - you usually do end up poking a lot of holes in it anyway since a fully isolated thing is a pain to use.
I’m honestly surprised you said Debian unstable. Do you run into many issues with broken packages or broken upgrades?
I’ve used Testing for about 15 years, but the few times I’ve tried unstable ended up with broken stuff at the worst possible times. The only problems I have with testing involve proprietary firmware (thanks Apple and Nvidia…) either moving around or dropping support for things.
I have not experienced an unusable system in 2 years. The case for unstable is that testing does not receive any security priority so you can be running vulnerable versions longer. Investing some time into setting up Btrfs snapshots would probably be wise but I haven’t done so myself yet.
I check `apt dist-upgrade` for anything peculiar before accepting; in particular, there are sometimes days or even weeks where a dependency chain may be broken. An `apt upgrade` instead will usually work around that until it is fixed. This cycle towards 13 has had some big events, like 64-bit time, rearranging kernel firmware packages, and Qt changes.. Plasma 6 might be a big one too. I would expect future cycles to be a bit more tame; t64 was a big one.
For comparison, ext3, with journaling, was merged into Linux mainline in 2001.
I developed embedded devices whose bootloader read and wrote ext4 files, including symlinks. There is really no excuse not to have a journaling file system on a system that’s larger than a fingernail.
For OpenBSD it may be a similar reason to why they got rid of Bluetooth: nobody was maintaining that code. It got old and stale, so they got rid of it. I generally like this approach – sadly you lose functionality, but it keeps the entire codebase “clean” and maintained.
My guess is that they need people to work on the FS issue. Or rather, they don’t have anyone on board who cares enough about it to actually write the code in a way that is conducive to OpenBSD’s “style”. Could be wrong, but that’s my assumption.
For comparison, FreeBSD merged soft updates (an alternative to journaling with similar crash-consistency goals) in the 1990s, and in 2008 announced ZFS support.
OpenBSD had soft updates, but they recently pulled it.
The pragmatic thing to do would be to grab WAPBL from NetBSD since NetBSD and OpenBSD are still relative kin. Kirk McKusick still maintains UFS on FreeBSD so the SU+J works well there but it would be a lot of work to pull up OpenBSD’s UFS.
IIRC WAPBL still has some issues and is not enabled by default. Its primary purpose is not to make the filesystem more robust but rather to offer faster performance.
I’ve never experimented with SU+J, but I’d like to hear more feedback on it (:
I’m not sure how deep WAPBL goes, but a journal helps you close some corruption windows on an otherwise non-atomic FS by being stricter about sync and flush ordering, without killing performance. You can also journal data, which is an advantage of taking this approach, although I don’t know that WAPBL offers this currently. Empirically, NetBSD UFS+WAPBL seems fairly reliable in my use.
SU+J orders operations in an ingenious way to avoid the same issues and make metadata updates atomic at the expense of code complexity; the J is just for pending unlinks, which otherwise have to be garbage-collected by fsck. A large, well-known video streamer uses SU+J, so it is well supported. Empirically SU+J is a little slower than other journaling filesystems, but this might be as much implementation as algorithm.
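To make the ordering idea concrete, here’s journaling in miniature (a generic write-ahead sketch under invented names, not WAPBL’s or SU+J’s actual on-disk formats):

```cpp
// Write-ahead journaling in miniature: the intent record must be
// durable *before* the in-place metadata write, so a crash either
// replays a whole operation or discards it, never half-applies it.
#include <unistd.h>  // pwrite, fsync

struct JournalRecord {            // all fields illustrative
    unsigned long txid;
    long          target_offset;  // where the metadata will land
    char          payload[256];   // the new metadata contents
    unsigned long checksum;       // lets replay reject torn records
};

void journaled_update(int journal_fd, int fs_fd,
                      const JournalRecord& rec, long journal_offset) {
    // 1. Append the intent record to the journal and flush it.
    pwrite(journal_fd, &rec, sizeof rec, journal_offset);
    fsync(journal_fd);            // barrier: intent is now durable

    // 2. Only now do the in-place metadata write.
    pwrite(fs_fd, rec.payload, sizeof rec.payload, rec.target_offset);
    fsync(fs_fd);

    // 3. A real journal batches many updates per transaction and
    //    checkpoints lazily; one fsync per step here is for clarity.
}
```

Soft updates get a comparable guarantee without a journal by tracking dependencies between in-memory buffers and only writing them to disk in safe orders, which is exactly where the code complexity comes from.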
Thanks for the feedback. I’ve been wanting to check out SU+J for a while, you got me hyped to dig into the concepts and the code!
Re WAPBL: an interesting thread on netbsd-tech-kern
Nice, I had not seen that thread. I think the data journaling they are discussing would be important for OpenBSD: on FreeBSD, UFS is used in specific scenarios like embedded devices or fail-in-place content servers, and ZFS is used anywhere data integrity is paramount. WAPBL was created in response to the complexity of SU+J, which it seems OpenBSD was also impacted by. For that reason, and for easier code sharing, I would be inclined to go the WAPBL direction, but there may be other merits to the SU+J direction in terms of syncing FFS and UFS against FreeBSD.
OpenBSD FFS had soft updates for a very long time too.
Soft updates have been removed in Feb 2024: https://marc.info/?l=openbsd-cvs&m=171489385310956&w=2
This is really surprising. I thought this was one of those instances of the OP “handling things wrong”, but actually it doesn’t seem like OpenBSD natively supports anything other than FFS and FFS2 (and without soft updates, as noted above).
This does seem a bit over the top. MPLS does not negate the utility of a traceroute, because traceroute never even claimed to work below the IP level, and if you’ve actually worked in a service provider network (I just started at my third), it comes in pretty handy. You still have to use your brain, and possibly relationships, to navigate any network path. But seeing basic layer 3 pathing is a bread-and-butter skill in any multihomed network, and you don’t need to be a genius to decipher some useful information from a particular traceroute about which providers you are using, because those hops are guaranteed to be layer 3 adjacent.
Does anyone have any insight into this, & what’s been going on at VMware lately after the acquisition? Is this a good thing for Fusion/Workstation users, or just the first step on a long road of gradual decay toward an eventual unsupported dusty death?
My suspicion is that Broadcom is winding down these products but has to maintain them for at least a few more years due to existing support contracts. Someone in middle management thought it would improve the company’s image a little to release them for “free” while they are still supported.
(The company I work for has a lot of vSphere, which was already eye-wateringly expensive before VMware was bought by Broadcom. Earlier this year, when it came time to renew the support contracts, they literally tripled the price. Our company said, “no thanks,” and we are now running thousands of vSphere hosts and a bunch of vCenters with zero support while whole teams scramble to transition our services to a mix of OpenStack and Kubernetes.)
“May you live in interesting times.”
I don’t know. Could be @icefox’s Tenth Law.
Oooh, what’s that?
https://lobste.rs/s/u3t4sg/xmpp_forgotten_gem_instant_messaging#c_rawvsq has it as
“@icefox’s Tenth Law: Never attribute to anything else what can be explained by embrace-extend-extinguish.”
It’s something I named in this other thread:
https://lobste.rs/s/4ll6vo/vmware_fusion_workstation_now_free_for#c_zfwmi4
(I may have been wrong about it that time.)
I’m not a heavy user of either product, but Broadcom previously added some kind of non-commercial license for both, which was useful to me for playing with retro operating systems and checking/improving various FreeBSD emulated-device drivers. From a casual perspective it seems like they are still working on both, and neither appeared to be the main focus of VMware before the acquisition, so there’s been no perceptible change in quality (which is merely acceptable).
It’s a little crazy this didn’t happen a long time ago, under VMware, to try to keep some level of relevance for the underlying hypervisor and device model. People seem to think Broadcom is the only greedy company, but VMware was always a very greedy company.
Self-host miniflux, which is great at fetching and storing the feeds and read status. The default web UI is mediocre but there are some nice third party clients like Reeder on iOS.
The default web UI is minimal yet robust and powerful for what I expect from such a reader. I love the recently added ability to execute a global user script by default: I do a lot of UI tweaking through Tampermonkey, but now I register these scripts directly in Miniflux settings, which lets me enjoy my UI tweaks on any device without an extension. Some other things I do with Miniflux: https://morgan.zoemp.be/reading-rss-in-peace-with-a-few-miniflux-hacks/
A system that is intentionally restarted regularly is going to have certain things figured out, like load shedding or a maintenance window. But it is also good to know that something can keep on ticking.. no leaks, no excessive fragmentation, whatever. Ideally you’d figure out a way to know you can do both.
I wonder: you could just keep one instance (or a tenth of your instances, or something) on no-restart, and hopefully pick up/record/instrument and address whatever weirdness comes up on them without having to worry about your whole fleet being subject to it.
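Something like this would be enough to pick a stable canary pool (illustrative only; the hostname hashing and the 10% threshold are arbitrary choices, not anyone’s recommendation):

```cpp
// Deterministically keep ~10% of hosts out of scheduled restarts, so
// the same machines stay long-lived across deploys, with no central
// state.  FNV-1a is used because std::hash isn't guaranteed to be
// stable across program runs.
#include <cstdint>
#include <string>

bool is_no_restart_canary(const std::string& hostname) {
    std::uint64_t h = 0xcbf29ce484222325ULL;  // FNV-1a offset basis
    for (unsigned char c : hostname) {
        h ^= c;
        h *= 0x100000001b3ULL;                // FNV-1a prime
    }
    return h % 10 == 0;                       // ~1 in 10 hosts
}
```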
Your own C++ feels surprisingly good to write and usually read. Somehow this rarely extends to Other People’s C++ (OPCPP) but maybe environments that have some draconian restrictions like embedded or kernel would avoid that.
I’ve found that far less true in C++ code written after C++11. LLVM’s ‘tasteful subset’ of C++ ended up looking a lot like modern C++ and most projects have converged on a similar style since then. Picking up C++ that was written prior to C++11 is exciting because it will typically be one of three largely distinct languages:
In contrast, modern C++ typically uses smart pointers everywhere for memory management (these may or may not be the standard-library ones). Generic functions are implemented in headers and may forward to type-erased implementations. Virtual inheritance is mostly an implementation detail for providing the type-erased versions.
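For a flavor (a generic sketch I made up, not LLVM’s or any particular project’s code), the pattern being described looks like:

```cpp
// Modern-C++ shape: thin generic code up front, a type-erased core
// behind it, and ownership expressed with smart pointers.
#include <iostream>
#include <memory>
#include <sstream>
#include <string>

// Type-erased core: in a real project this would live in a .cpp file.
void log_line(const std::string& line) { std::cout << line << '\n'; }

// Header-only generic wrapper: the template only needs operator<<,
// and the type is erased immediately at the call boundary.
template <typename T>
void log_value(const T& value) {
    std::ostringstream os;
    os << value;
    log_line(os.str());
}

struct Connection { std::string peer; };

// unique_ptr instead of raw new/delete for memory management.
std::unique_ptr<Connection> open_connection(std::string peer) {
    return std::make_unique<Connection>(Connection{std::move(peer)});
}

int main() {
    log_value(42);
    log_value(std::string("hello"));
    auto conn = open_connection("example.net");
    log_value(conn->peer);
}   // conn is freed automatically here
```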
I’m pretty firmly in the community OS camp at this point based on so many negative experiences over the years with commercial OS projects.. that means Debian stable for Linux servers/containers and unstable for my personal Linux laptops/desktops. But I do tinker with a great many OS and keep Fedora on a modded ThinkPad x430t (quad-core BGA rework). The new version is really well polished, especially the KDE/Plasma 6.2 spin. DNF went from bizarrely slow to quite acceptable. The release upgrade experience is still a bit more harrowing than an apt distro, but they have added some seatbelts and it has worked ok.
The only major critique I have is that the version of Firefox they ship is a lot slower on Fedora, and there is no great way to deal with it there.. it is built with the system GCC instead of LLVM/Clang as Mozilla does, and probably lacks PGO. The Firefox in Debian’s archive is slower than necessary as well, but Mozilla has an official Debian repo, which I highly recommend over gimmicks like Flatpak/Snap that cause more problems than they solve for such a critical component.
There was a lot of activity in the 1990s around rethinking the relationship between applications and storage. I still have a whole bookshelf of material about “persistent object systems”, a topic that used to be big enough to have its own conference but which has now completely disappeared from view.
Some of these ideas did ship commercially. There’s the obvious example of IBM OS/400 (quite a deep dive into the idea). Several filesystems appeared around this time with first-class attributes (BeOS, HFS, NTFS, etc.). None of them developed serious indexing or transaction features, though.
In 1995 I was working on the second release of Newton OS, which — unlike any other mass market OS I can think of — took this seriously enough that it had no filesystem, even internally, just a flat transactional blob store with an object serialization and indexing layer on top. Several years later I worked on WinFS, which attempted to merge SQL Server with NTFS (that did not go well). It turns out filesystems and databases look the same from a philosophical perspective, but actually have very different expectations from clients.
At this point everyone seems to have converged on a combination of (1) filesystems that don’t blow up with lots of little files, (2) a bunch of data in SQLite files, and (3) asynchronous indexing. You can find stuff a lot better, but applications are still as siloed as ever.
By bookshelf do you mean actual books? That sounds like an interesting collection for folk around here to reference. Any chance we can bother you to catalog the titles here or on librarything or something?
Thanks to Claude, that’s not as much trouble as it sounds! But good luck finding any of these now…
Probably the most inspirational things to me then were Peter Bishop’s doctoral thesis from 1977 (MIT TR-178) and Eliot Moss’s work on persistent objects at UMass (his papers are here).
I forgot two other important inspirations: Symbolics Statice (which I can’t even seem to find a description of) and the C++ product the same people made, ObjectStore.
The persistent object systems sound a bit like various ideas around “single-level stores” … https://en.wikipedia.org/wiki/Single-level_store
which we may imagine results in less overall code, and more reuse.
Though the classic problem is that any bug where the application corrupts state results in a bug in the persisted state.
It breaks the “reboot computer” or “restart application” method of solving problems / working around bugs, which is honestly the most reliable and timeless one.
It also reminds me of the arguments around RPC – whether network protocols should be tightly coupled to the programming language, or more loosely coupled. Compared to what was proposed in the 90’s and such, it looks like looser coupling is what worked.
Most of these systems weren’t “passive” single-level stores like a huge virtual memory. There was some concept of committing or rolling back transactional changes to persistent objects, so the post-crash situation is basically the same as any database.
Schema evolution, on the other hand, was a big consideration, and I don’t know if anybody found a great solution to that. It’s the same problem we have today with document databases. But the obvious place to address it is the (de)serialization layer, and part of the point of persistent objects was that you didn’t have one.
While normally I would take this opportunity to remind people interested in database-oriented operating systems to check out IBM i (you should!), I’ve covered it a lot already. Instead, it’s worth noting the social context of this talk, since a lot of people here probably aren’t familiar with it. The classic Mac OS was beginning its decline, but it had a developer culture of its own. It was decidedly not a plain-text-oriented OS but totally graphical, lacking the concept of a command line. There was a focus on direct manipulation of objects, making them user accessible, and as the paper mentions, the resource fork could provide structured data that was easily editable with tools.
I would also be interested in reading more about this. The main IBM site is very enterprise-focused, so I’m having a hard time making sense of what it is, what it does, and how it works.
Frank Soltis’s “Inside the AS/400” or “Fortress Rochester” (three editions of essentially the same book under those two titles; any of them is fine, since used prices have gone up) are worth getting for anyone really into operating systems. They go very deep into the concepts and how everything fits together, though not as deep as a UNIX internals book, which at the very least has public headers to show off data structures and interfaces.
Everything I’ve ever read about IBM i has been interesting! Do you have pointers to where you’ve covered it?
I’ll keep repeating it until I die: people don’t use desktop environments, people use apps.
Linux desktops have been fine for a long time now. But if a user can’t run Photoshop/MS Office/whatever else they run on their Macs and Windows, there’s no point.
Most things regular people do happen in a browser
Even as an atypical computer user (professional developer) this has been a blessing. Between Firefox and a reasonable POSIX environment I’ve been able to do what I need to do on a computer for the past couple of decades. I do keep some Windows systems around to stay abreast of developments there, but I would be able to function just fine without them or other commercial operating systems like macOS.
The more specialized your pursuits, the more the commercial operating systems are entrenched, specifically with things like gaming or content creation (video, photo, 3D, audio production tend to favor Windows and macOS). Cubase, Ableton, and a bunch of commercial VSTs remain on my Windows machines but that is just an occasional hobby for me.
That too. Which is another reason they don’t really care about the desktop environment outside of the browser.
I tried switching my non-technical partner from Pop!_OS to Linux Mint (GNOME to Cinnamon). She hated it, mostly because text never looked right (too small in some places, too big in others), and we couldn’t figure out how to get it looking right.
So, she never realized she was using a desktop environment, but she certainly cared when the desktop environment became less user-friendly.