There are no native antivirus solutions or similar software for Linux
I don’t buy this. Antivirus is more or less useless anyway on Windows. The tools you need to protect yourself from viruses are not OS-specific.
Certain applications that exist both for Windows and Linux start up faster in Windows than in Linux
Odd. My experience has been exactly the opposite.
All native Linux filesystems (except ext4) are case sensitive about filenames which utterly confuses most users. This wonderful peculiarity doesn’t have any sensible rationale. Less than 0.01% of users in the Linux world depend on this feature.
Also don’t quite understand this one. Ext4 is case sensitive, and when would you not want case sensitivity?
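(For concreteness, a minimal sketch of what that case sensitivity looks like in practice: plain Python, nothing distribution-specific, run on any case-sensitive filesystem such as ext4 or XFS. On a case-insensitive filesystem the three names below would collide into a single file.)

```python
import os, tempfile

d = tempfile.mkdtemp()
# On a case-sensitive filesystem these are three distinct files.
for name in ("readme", "Readme", "README"):
    with open(os.path.join(d, name), "w") as f:
        f.write(name + "\n")

print(sorted(os.listdir(d)))  # ['README', 'Readme', 'readme'] on ext4
```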
No native production-ready file system
ZFS-on-Linux is there. Not in the kernel, sure, but Ubuntu ships it, making it native enough if you can use Ubuntu.
Depends on your needs. Ext4 is very good for some use cases (though I prefer xfs). But here’s the specific list of requirements:
de-duplication, data and metadata checksumming and file compression
Though the OP doesn’t mention it, for cases where you want those attributes, you probably also want CoW (copy-on-write), which rules out pretty much everything except for btrfs (a mess), ZFS (amazing), and bcachefs (promising, if not quite there yet).
No Linux filesystem will ever be case insensitive for this reason. See, for example, Linus’s take on HFS+ (originally on Google+ but that has since been nuked, so this quote is yoinked from reddit)
Did anybody check that “..” can’t be fooled to do the same thing on HFS+? In particular, how does the character sequence “dot” “zero-width-utf8” and “dot” work? Or “zerowidth” “dot” “zerowidth”? Does it work like “..”? Because if it does, your fix is incomplete, and people can populate things in random places above the git tree.
Finally, did you check that “tolower” works on a ucs_char_t? It’s not supposed to, afaik.
Quite frankly, HFS+ is probably the worst filesystem ever. Christ what shit it is. NTFS used to have similar issues with canonicalizing utf8 (ie using non-canonical representations of slashes etc). I think they at least fixed them. The OS X problems seem to be fundamental.
…but while +John Siracusa isn’t a fan of HFS+, he’s not even ranting about the true insanities of that filesystem.
Sure, it’s old. Sure, it does a horrible job of actually protecting your data. But those are more “it’s not a great filesystem” issues. They aren’t “that’s incredible crap designed by morons that have a hard time figuring out how to feed themselves”.
The true horrors of HFS+ are not in how it’s not a great filesystem, but in how it’s actively designed to be a bad filesystem by people who thought they had good ideas.
The case insensitivity is just a horribly bad idea, and Apple could have pushed fixing it. They didn’t. Instead, they doubled down on a bad idea, and actively extended it - very very badly - to unicode. And it’s not even UTF-8, it’s UCS2 I think.
Ok, so NTFS did some of the same. But apple really took it to the next level with HFS+.
There’s some excuse for case insensitivity in a legacy model (“We didn’t know better”). But people who think unicode equivalency comparisons are a good idea in a filesystem shouldn’t be allowed to play in that space. Give them some paste, and let them sit in a corner eating it. They’ll be happy, and they won’t be messing up your system.
And then picking NFD normalization - and making it visible, and actively converting correct unicode into that absolutely horrible format, that’s just inexcusable. Even the people who think normalization is a good thing admit that NFD is a bad format, and certainly not for data exchange. It’s not even “paste-eater” quality thinking. It’s actually actively corrupting user data. By design. Christ.
And Apple let these monkeys work on their filesystem? Seriously?
There are lots of good reasons to not move to ZFS (cough-Oracle-cough), but they could have pushed people to case-sensitive HFS+, which would have then made it much easier to (in the long run) migrate to anything else saner. But no. There is a case sensitive option, but Apple actively hides it and doesn’t support it.
The stupidity, it burns.
So you had all these people who made really bad decisions and actively coded for them. And I find that kind of “we actively implement shit” much more distasteful than just the “ok, we don’t implement a lot of clever things” that John complained about.
Rant over.
Now much of this is on its crappy normalization practices, but I think that sort of thing is central to any kind of case-insensitive filesystem.
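(To make the normalization complaint concrete, a small illustration using plain Python and the standard unicodedata module; it is not HFS+-specific, it just shows what NFD decomposition does to a name the user typed in the usual precomposed form.)

```python
import unicodedata

name = "résumé.txt"                        # as typed (precomposed, NFC)
nfd = unicodedata.normalize("NFD", name)   # what an NFD-normalizing filesystem would store

print(name == nfd)          # False: same text to a human, different code points
print(len(name), len(nfd))  # 10 vs 12: each é becomes e plus a combining accent
print([hex(ord(c)) for c in nfd if unicodedata.combining(c)])  # ['0x301', '0x301']
```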
I understand them, but I use a case-insensitive filesystem on MacOS and haven’t actually ever noticed. I just had to check to see what it was set to because I couldn’t remember.
I’m not sure what happens with Turkish, no, and I’m aware that case-insensitivity requires crazy logic that can never be ‘correct’, but yet it’s the default on MacOS - so there must be some good reason, probably around user experience?
No it’s mostly about poorly-written software that will fail if it suddenly finds itself on a case-sensitive filesystem. Photoshop was one of them, last time I looked.
Doesn’t prevent you from having ‘read me’, nor separate COPYRIGHT and LICENSE. The solution to such problems is to be careful, not a crutch that works only occasionally (because it’s impossible to encode intent). Oh, also, does your scheme know that réadme and RÉADME are the same? Because ascii-only case normalization is obviously inconsistent and user-unfriendly. But unicode-aware case normalization causes incompatibility—I create this file using foofs w/unicode v20; you then try to read it back with foofs w/unicode v15, but it errors out because it can’t do case-normalization on the new code points, even though the on-disc format hasn’t changed otherwise—and is also a huge dependency for a file system to have (have you read the source code to libicu? It’s not pretty.), not to mention a potential attack vector.
Also, as the sibling mentions, case normalization is locale-sensitive. There is no good way to handle that.
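(A quick illustration of the point, in plain Python. Python’s string methods use Unicode’s locale-independent default tables, which is exactly why the Turkish case cannot be handled correctly without knowing the language of the filename.)

```python
print("réadme".upper() == "RÉADME")  # True only with Unicode-aware folding, not ASCII
print("straße".upper())              # 'STRASSE': one character becomes two
print("ß".casefold() == "ss")        # True, so should straße.jpg and STRASSE.jpg collide?
print("I".lower())                   # 'i', but in Turkish the lowercase of 'I' is 'ı'
                                     # and the uppercase of 'i' is 'İ'; the default
                                     # tables cannot know which language a filename is in.
```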
I’ve dropped people onto Linux systems, and have never actually seen anyone confused by case-sensitivity except for command-line junkies. If you’re using a point-and-click interface, you don’t have to remember what case something is. You can see the real name. They might not like it, but they always understand it.
Having both license and Iicense is way worse, and no filesystem prevents it.
Trying to get Linux not to suck on the desktop is a losing proposition.
To put it into context: When enough of the hardware works, Haiku offers a better free desktop experience today, even though the manpower behind it isn’t even comparable. Coherent UI, easy to understand desktop that behaves as expected, responsiveness, avoidance of stalls on load. BeOS did achieve the same, but as a proprietary OS, in the mid-nineties.
Linux tries to do everything at once, with a design (UNIX-like monolith) that favors server usage in disregard for latency and is thus ill-suited for the desktop. On top of that, its primarily corporate funding effectively steers it towards directions that have nothing to do with the desktop and are to its detriment. It is hopeless.
A system design fit for a desktop, if we could have a clean slate today, would be something with clean APIs, well-understood relationships between components, and engineered for low, bounded response times (thus an RTOS). Throughput is secondary, particularly when the impact on throughput is low to negligible. As desktop computers are networked these days, security (confidentiality, integrity and availability) is a requirement.
If you look at the state of the art, you’ll find that I am describing Genode, paired with seL4. If a thousandth of the effort put into the Linux desktop (which does not and cannot satisfy the requirements) were redirected to these projects, we would have had an excellent Open Source desktop OS years ago. One that no current proprietary solution would be able to come near.
A proof of concept is available through Sculpt (new release expected within days), demonstrated in this FOSDEM talk. Another FOSDEM talk covers the current state of seL4.
Full disclosure: Currently using Linux (with all its faults) as my main desktop OS. I have done so for 20 years. AmigaOS before that.
Haiku doesn’t support any hardware or software and is missing all the features that end up introducing the complexity that ends up introducing the bugs that you’re complaining about anyway.
Linux tries to do everything at once, with a design (UNIX-like monolith) that favors server usage in disregard for latency and is thus ill-suited for the desktop. On top of that, its primarily corporate funding effectively steers it towards directions that have nothing to do with the desktop and are to its detriment. It is hopeless.
Monolithic kernels vs microkernels have no actual impact on the ability for a desktop operating system to function properly and none of the problems with ‘Linux on the desktop’ are best solved with a microkernel approach. If anything it would cause even more fragmentation because every distro could fragment on implementations of core services instead of that fragmentation only being possible within a single Linux repository.
If anything, a good desktop experience seems to require more monolithic design: the entire desktop environment and operating system developed in a single cohesive project.
This is why ‘Linux on the desktop’ is a stupid goal. Ubuntu on the desktop should be the goal, or Red Hat on the desktop, or whatever. Pick a distro and make it the Linux desktop distro that strives above all else to be the best desktop distro. There you can have your cohesiveness and your anti-fragmentation decision making (project leader decides this is the only supported cron, the only supported init, only supported WM, etc.).
A system design fit for a desktop, if we could have a clean slate today, would be something with clean APIs, well-understood relationships between components, and engineered for low, bounded response times (thus an RTOS). Throughput is secondary, particularly when the impact on throughput is low to negligible. As desktop computers are networked these days, security (confidentiality, integrity and availability) is a requirement.
Literally everything would be better if we could design it again from the ground up with the knowledge we have now. The problem is that we have this stuff called time and it’s a big constraint on development, so we like to be able to reuse existing work in the form of existing software. If you want to write an operating system from scratch and then rewrite all the useful software that works on top of it that’s fine but it’s way more work than I’m interested in doing, personally. Because ‘a good desktop operating system’ requires not just good fundamentals but a wide variety of usable software for all the tasks people want to be able to do.
You can write compatibility layers for all the other operating systems if you want. Just know that the osdev.org forums are littered with example after example after example of half-finished operating system projects that promised to have compatibility layers for Windows, Linux and macOS software.
If a thousandth of the effort put into the Linux desktop (which does not and cannot satisfy the requirements) were redirected to these projects, we would have had an excellent Open Source desktop OS years ago.
The Linux desktop is already good. So clearly it is not the case that it cannot satisfy the requirements.
This is super subjective. I for one do not consider Linux particularly good – I love the command line, and I have tried many times to make Linux my primary work desktop. However I need excellent touchpad input with zero latency and precise acceleration, HiDPI support that Just Works with all apps (no pixel zooming), 60 fps scrolling without stuttering and just the right amount of inertia.
To me both Linux and Android fail miserably on things like 60 fps scrolling and most people don’t even notice that it stutters. I know that’s some very subjective criteria that many people don’t have. I’m excited about projects like Wayland, because maybe there is light at the end of the tunnel?
However I need excellent touchpad input with zero latency and precise acceleration
Never had a problem with this, personally, and I certainly dispute that anyone needs it. People have productively used computers for decades without it and it isn’t a desktop issue anyway. It’s a laptop issue. I’m sure Linux still has a long way to go on the laptop but shifting the goalposts isn’t helping anyone. What it means for Linux to be viable on the desktop seems to be changing every time it gets close. Now it apparently includes laptop-specific functionality?
HiDPI support that Just Works with all apps (no pixel zooming)
And I’d like a supermodel girlfriend. HiDPI support is fucked on every platform, because it fundamentally requires support from the programmes that are running. It’s far superior on Linux to Windows. Windows can’t even scale its file browser properly, half the time the icons are the wrong size. It’s bizarre. It’s like they didn’t test anything before they claimed to ‘support’ it.
60 fps scrolling without stuttering and just the right amount of inertia.
“Just the right amount of inertia” is subjective. I hate scrolling on macOS or iPhones, the inertia is far too high. I’m sure others feel the opposite and think it’s too low. Yet if it’s made configurable I’m sure people will complain about those damn software developers wanting to make everything configurable when things should ‘Just Work’. You can never win.
Also, a lot of monitors these days have really atrocious ghosting. Smooth scrolling on those monitors makes me feel sick. So please at least make it easy to turn it off and keep it functional if I do.
I get that none of what I said changes that you want those features and so do others, and they’ll never be satisfied until those features are there. I get it. Those features are requirements for you. They’re not inherently bad. But it’s worth bearing in mind that nobody is approaching this with the goal of fucking up the Linux desktop. Nobody wants you to have a bad experience. Things are the way they are because solving these problems is really hard without having the resources to just rewrite everything from scratch. Wayland is rewriting a huge chunk of the stack from scratch and that’s having some positive impact, but for me it’s still in a state where it invalidates far too much of the stuff that was already working fine, so I don’t want to use it any more. I’ve gone back to X.
Exactly. It’s 100% fair to keep moving the goalposts, because the rest of the industry isn’t taking a break waiting for Linux on the desktop to catch up to the state of the art.
Never had a problem with this, personally, and I certainly dispute that anyone needs it.
Of course you don’t need it for things like writing documents. I’ve worked over RDC connections with about 10 seconds of latency (I counted). I also raged the whole time, and my productivity tanked.
The point is that low input latency is a competitive advantage, and it’s known to improve ergonomics and productivity, even at 100ms levels, though it’s more something you feel than notice at that level. In comparisons of Android and iOS, what do they talk about? Input latency. In comparisons between Wayland and X11, it’s all about getting latency down (and avoiding graphics glitches, and effective sandboxing, and reducing the amount of code running as root; there’s a lot of ways to improve on Xorg).
Good input latency is also necessary to play action games, or delicate remote controls like drones, of course.
Of course you don’t need it for things like writing documents. I’ve worked over RDC connections with about 10 seconds of latency (I counted). I also raged the whole time, and my productivity tanked.
Personally I’ve had far worse experiences with remote desktop on Windows than on Linux. For example, remote desktoping into another Windows computer logs you out on that computer, or at least locks the screen, on Windows 10. Worse latency and relevant to this discussion too: terrible interaction with HiDPI (things scaled completely crazily when remoting into something with a different scaling factor).
The point is that low input latency is a competitive advantage, and it’s known to improve ergonomics and productivity, even at 100ms levels, though it’s more something you feel than notice at that level.
Touchpad latency has nothing to do with desktop Linux.
Good input latency is also necessary to play action games, or delicate remote controls like drones, of course.
Input latency on Linux is not an area of concern, it’s working perfectly fine. I have better performance in games (lower input latency, lower network latency, higher framerates, better support for different refresh rates on different monitors) on Linux than on Windows.
Personally I’ve had far worse experiences with remote desktop on Windows than on Linux.
I know that Linux’s input layer isn’t 10sec-latency bad. That horrible situation was entirely the fault of the overloaded corporate VPN. I brought it up as a reductio ad absurdum to anyone claiming not to care about input latency; just because you are capable of getting work done does not make a scenario acceptable.
Input latency on Linux is not an area of concern, it’s working perfectly fine. I have better performance in games (lower input latency, lower network latency, higher framerates, better support for different refresh rates on different monitors) on Linux than on Windows.
That’s why I didn’t compare it to Windows NT. I compared it to iOS.
I brought it up as a reductio ad absurdum to anyone claiming not to care about input latency; just because you are capable of getting work done does not make a scenario acceptable.
Just because it’s not perfect or optimal doesn’t make it unacceptable, and it’s still not relevant to our discussion which is about desktop Linux. You seem happy to introduce other unrelated devices and operating systems and platforms when they help you push your view but then as soon as I respond to those points you retreat to a different position.
That’s why I didn’t compare it to Windows NT. I compared it to iOS.
No, you compared Android (not desktop Linux) to iOS. I wasn’t responding to that. I was discussing input latency in the context of the discussion we’re actually having in this thread: desktop Linux. The alternative to desktop Linux (given you specifically mentioned ‘playing action games’) is clearly Windows and not iOS.
Windows can’t even scale its file browser properly, half the time the icons are the wrong size.
What version of Windows are you discussing here? At least for me on Windows 10, I haven’t noticed any problems with HiDPI in Explorer. And the fact still remains that when using 2 monitors with different DPIs, Linux handles this significantly worse than Windows does.
On Windows 10 at my last job I continually had errors with Windows Explorer not scaling its own icons correctly. This was with two screens with different DPIs.
In contrast I’ve never had any issues with this on Linux and in fact with sway I can even have fractional scaling so that my older monitors can pretend to have the same DPI as my main monitor if I want to.
Well, “with all apps” is a ridiculous requirement. You can’t magically shove the good stuff into JavaFX, Swing, Tk, GTK2, GTK1, Motif, etc. :)
My short guide to a great touchpad experience would be:
use wayland, of course
stick to GTK apps as much as possible (I have a list by the way)
apply this gtk patch (and the mentioned “relevant frame-clock ones” for good measure)
MOZ_ENABLE_WAYLAND=1 MOZ_WEBRENDER=1 firefox
about:config widget.wayland_vsync.enabled=true (hopefully will become default soon)
Android fail miserably on things like 60 fps scrolling
huh? Android does 90 fps scrolling very well in my experience (other than, sometimes, in the Play Store updates list), and there are even 144Hz phones now and the phone reviewers say they can feel how smooth all these new high refresh rate phones are.
Android does 90 fps scrolling very well in my experience (other than, sometimes, in the Play Store updates list), and there are even 144Hz phones now and the phone reviewers say they can feel how smooth all these new high refresh rate phones are.
Maybe I’ve just been unlucky with hardware but my android experience has always been plagued with microstutters when scrolling. I haven’t used iOS though, maybe the grass is always greener on the other side.
As my hat will tell you, I am biased in this matter, but still…
Haiku doesn’t support any hardware or software
“Any”? Well, I have a 2015-era ThinkPad sitting on my desk here with a Haiku install on which all major hardware (WiFi, ethernet, USB, …) works one way or another (the sound is a little finicky), and the only things that don’t work are sleep and GPU acceleration (which Haiku does not support broadly.) I also have a brand-new Ryzen 7 3700X-based machine with an NVMe hard drive that I just installed Haiku on, and everything there seems to work except WiFi, but this appears to be a bug in the FreeBSD driver that I’m looking to fix. It can even drive my 4K display at native resolution!
You can also install just about any Qt-based software and quite a lot of WX-based software, with a port of GTK3 in the works. LibreOffice, OpenSSH, etc. etc. So, there’s quite a lot of software available, too.
I’ll preface this by saying: I’m not saying Haiku is bad, just that you clearly can’t compare Haiku’s software and hardware support with Linux’s and pretend that Haiku comes out on top.
Well ThinkPads generally have excellent hardware support on many free software operating systems. Of course it boils down to separate questions, doesn’t it: are we asking ‘is there a machine where it works?’ or ‘is there a machine where it doesn’t work?’. You can say something has ‘good hardware support’ if there are machines you can buy where everything works, but I would say it only really counts as ‘good hardware support’ if the average machine you go out and buy will work. You shouldn’t have to seek out supporting hardware.
Based on that evaluation I would say that Linux certainly doesn’t have good laptop hardware support, because you need to do your research pretty carefully when buying any remotely recent laptop, but by the first standard it’s fine: there are recent laptops that are well supported and all the new features are well supported.
But I would say that Linux has excellent desktop hardware support, and this is a thread about desktop Linux. I never need to check if something will be supported, it just always is. Often it’s supported before the products are actually released, like anything from Intel.
the sound is a little finicky
Sound is definitely major hardware and should not be finicky. Sound worked perfectly on Linux in the early 2000s.
sleep and GPU acceleration (which Haiku does not support broadly.)
But there you go, right? It’s like people going ‘oh my laptop supports Linux perfectly, except if I close the lid it’s bricked and the WiFi doesn’t work but other than that it’s perfect’. Well not really perfect at all. Not all major hardware at all. Sleeping is pretty basic stuff. GPU support is hideously overcomplicated and that’s not your fault, but it doesn’t change that it isn’t there.
You can also install just about any Qt-based software and quite a lot of WX-based software, with a port of GTK3 in the works. LibreOffice, OpenSSH, etc. etc. So, there’s quite a lot of software available, too.
Right and I’m sure that software is great for what it is, but in a context where people are saying Linux isn’t a viable desktop operating system because of bad touchpad latency people are claiming that the problem is its monolithic kernel and that we actually need Haiku to save the day with… no sleep support? No GPUs?
I think you are moving the goalposts here. You claimed Haiku does not support “any” hardware or software, when that is rather pointedly not true. What the original comment you replied to was saying, is that in many respects, Haiku is ahead of Linux – in UI responsiveness, overall design, layout, etc.
We are a tiny team of volunteers doing this in our spare time; we have a tiny fraction of the development effort that goes into the “Linux desktop.” The point is that we are actually not so radically far behind them; and in the ways we are behind, perhaps doubling or tripling our efforts would be enough to catch up. How, then, does Linux spend so much time and money, and wind up with a far worse desktop UI/UX experience than we do?
You claimed Haiku does not support “any” hardware or software, when that is rather pointedly not true.
It was also clearly not intended to be taken literally.
What the original comment you replied to was saying, is that in many respects, Haiku is ahead of Linux – in UI responsiveness, overall design, layout, etc.
Except it isn’t actually ahead of Linux in any of those things from any objective standpoint, just in the opinion of one person that will advocate for anything that isn’t Linux because they have a hate boner for anything popular.
We are a tiny team of volunteers doing this in our spare time; we have a tiny fraction of the development effort that goes into the “Linux desktop.” The point is that we are actually not so radically far behind them; and in the ways we are behind, perhaps doubling or tripling our efforts would be enough to catch up. How, then, does Linux spend so much time and money, and wind up with a far worse desktop UI/UX experience than we do?
Results aren’t proportional to effort. Getting something working is easy. Getting something really polished, with wide-ranging hardware and software support, very long term backwards and forwards compatibility, that has to be highly performant across a huge range of differently powered machines from really weak microprocessors all the way through to supercomputers? That’s really hard.
Linux doesn’t spend ‘so much time and money’ on the desktop user experience. In fact there’s very little commercial investment in that area. It’s mostly volunteer work that’s constantly being made harder by people more interested in server usage scenarios constantly replacing things out from under the desktop people. Having to keep up with all the stupid changes to systemd, for example, which is completely unfit for purpose.
But you can have an actually good user experience on Linux if you forgo all the GNOME/KDE crap and just use a tiling wayland compositor like sway. Still has a few little issues to iron out but it’s mostly there, and if it’s missing features you need to use then you can just use i3wm on Xorg and it works perfectly.
Is it not still the case that Haiku doesn’t support multi-monitor setups? I would hardly describe that as a ‘far better UI/UX experience’ given that before you can have UI/UX you have to actually be able to display something.
Except it isn’t actually ahead of Linux in any of those things from any objective standpoint
At least in UI responsiveness, Haiku is most definitely ahead of Linux. You can see the difference with just a stopwatch, not to mention a high-speed camera, for things like opening apps, mouse clicks, keyboard input, etc.
Plenty of people have talked about how Haiku is ahead of both GTK and KDE in terms of UX, so it’s not just me (or us.) Maybe you disagree, but it’s certainly not a rare opinion among those who know about Haiku.
just in the opinion of one person that will advocate for anything that isn’t Linux
The BSDs are not Linux, and have the same problems because they use the same desktop. Our opposition to Linux has not a ton to do with Linux itself and more the architectural model of “stacking up” software from disparate projects, which we see as the primary source of the problem.
Linux doesn’t spend ‘so much time and money’ on the desktop user experience. In fact there’s very little commercial investment in that area.
Uh, last I checked, a number of Red Hat developers worked on GNOME as part of their jobs. I think KDE also has enough funding to pay people. The point is, Haiku has 0 full-time developers, and the Linux desktop ecosystem has, very clearly, a lot more than 0.
that’s constantly being made harder by people more interested in server usage scenarios constantly replacing things out from under the desktop people. Having to keep up with all the stupid changes to systemd, for example, which is completely unfit for purpose.
Yes. And that’s why Haiku exists, because we think those competing concerns probably cannot coexist, at least in the Linux model, and desktop usage deserves its own full OS.
Is it not still the case that Haiku doesn’t support multi-monitor setups?
We have drivers that can drive multiple displays in mirror mode on select AMD and Intel chips, but the plumbing for separate mode is not there quite yet. As you mentioned, graphics drivers are hard; I and a few others are trying to find the time and structure to bite the bullet and port Linux’s KMS-DRM drivers.
you have to actually be able to display something.
Obviously true multi-display would be nice, but one display works already. Pretty sure that counts as “displaying something.”
“Monolithic kernels vs microkernels have no actual impact on the ability for a desktop operating system to function properly and none of the problems with ‘Linux on the desktop’ are best solved with a microkernel approach.”
On the contrary, microkernels like QNX and Minix 3 have strong fault isolation, which makes self-healing easier. Some make it easier to maintain static components or hot-swap live ones. The RTOSes among them keep one process from stalling others, on top of good latency. People who used the QNX desktop demo told me they could do compiles on weak hardware of the time with none of the sluggishness monoliths had.
GEMSOS, INTEGRITY-178B, and seL4 had mathematical proofs of their designs’ security claims. INTEGRITY-178B required user processes to donate their own CPU and memory to complete kernel actions in order to accomplish both goals. seL4 was small enough for code-level verification. Being small enough for a bullet-proof implementation of privileged code, and for easier modification of the system, are persistent benefits of microkernels.
I took a look at Zircon recently, and it appears to be a first generation µkernel, as it seems to ignore everything Liedtke brought forward. I would particularly stress the principle of minimality (Zircon is functionality-bloated). It would have been an impressive µkernel thirty years ago. Today, it’s a potato.
But it is still better than Mach, used in IOS/OSX. I have no doubt the overall system will be nice to work with (APIs), considering they have people from the old NewOS/BeOS team in there. It will, for Google, likely become a workable replacement path for Linux, giving them much more control, but from a systems design perspective, it is nothing else than shameful. They had the chance to use any of many competitive contemporary µkernels. but went with such a terrible solution just because NIH. It taints the whole project, making it worthless from an outsider perspective.
Because of HelenOS ties, I expect Huawei’s HarmonyOS to be better at a fundamental level.
Although Fuchsia applies many of the concepts popularized by microkernels, Fuchsia does not strive for minimality. For example, Fuchsia has over 170 syscalls, which is vastly more than a typical microkernel. Instead of minimality, the system architecture is guided by practical concerns about security, privacy, and performance. As a result, Fuchsia has a pragmatic, message-passing kernel.
IMO, there’s no shame in favoring pragmatism over purity. From what I’ve read, it looks like Fuchsia will still be a major step up in security compared to any current widely used general-purpose OS.
there’s no shame in favoring pragmatism over purity
They do think they are pragmatic. It’s not the same as actually being pragmatic.
Those “pragmatic concerns” show, if anything, that they’ve heard about minimality, but did not care to understand the why. They actually mention performance, and think putting extra functionality inside the kernel helps them with that. They ignored Liedtke’s research.
A wasted opportunity. If only they had done a little more research on the state of the art before deciding to roll their own.
Fuchsia will still be a major step up in security compared to any current widely used general-purpose OS.
Assuming OSX/IOS and Linux are the contenders you have in mind, this looks like a really low bar to meet, and thus very feasible.
Starting off a blog post by talking about ‘microkernel hatred’ is pretty funny given that 99% of people that prefer monolithic kernels just get on with their lives while the microkernel people seem to be obsessed with comparing them and arguing for them and slagging off monolithic kernels and generally doing anything other than actually working on them.
The blog post then goes on to complain that people slag off microkernels based on old outdated performance benchmarks, followed by quoting some benchmarks and comparisons from the early 1990s. There’s not much performance data indicated from this millennium, and nothing newer than 2010.
It’s 2020. I expect to see performance data comparing realistic server and desktop workloads across production operating systems running on top of microkernels and monolithic kernels and explanations for why these differences should be attributed towards the kernel designs and not towards other aspects of the operating system designs. Because sure as hell Linux is not a theoretically optimally performant monolithic kernel, and I’m sure there are much faster monolithic kernel designs out there that don’t have all the legacy bits slowing things down that are there in Linux, or for that matter, the various things slowing Linux down that exist for reasons of flexibility across multiple domains of work, or to patch security vulnerabilities, etc.
It’s often said that you rewrite your system and it’s twice as fast, then you fix all the bugs and it’s 50% faster than the original, then you add all the missing features and it’s back to being just as slow as the original again. I suspect that performance figures being better on demos is probably mostly due to this. Operating system speed in 2020 is dominated by context switching and disk I/O speeds, especially the former since the Spectre crap, so anything you can do to cut down on those bottlenecks is going to give you by far the most bang for your buck in performance.
Nowhere does this blog post quote the ‘myth’ of “if they are so great, why don’t you install one on your desktop system and use it” because they know there’s no good answer to it.
Nowhere does this blog post quote the ‘myth’ of “if they are so great, why don’t you install one on your desktop system and use it” because they know there’s no good answer to it.
The absence (Sculpt is in the works, covering this scenario) of an open source desktop OS based on a modern operating system architecture is, indeed, not an argument against the µkernel approach being fit for the purpose. It simply hasn’t been done as open source. The article goes as far as to cite successful commercial examples (QNX used to offer a desktop, and it was great).
There’s just nothing open that’s actually quite there, yet. And it is indeed a shame.
It’s often said that you rewrite your system and it’s twice as fast, then you fix all the bugs and it’s 50% faster than the original, then you add all the missing features and it’s back to being just as slow as the original again. I suspect that performance figures being better on demos is probably mostly due to this.
Your hypothetical does indeed apply to monoliths, and could possibly apply to some pre-Liedtke µkernels. It cannot apply to 2nd nor 3rd generation µkernels, due to the advances covered in the µkernel construction (1995) paper. µkernels just aren’t constructed the same way. Do note we’re talking about a paper from 25 years ago, thus your hypothetical is only a reasonable one to put forward 25+ years ago, certainly not today.
context switching (dominates performance…)
Is something Linux is a sloth at. Not just a little slower than seL4, but orders of magnitude slower.
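(For anyone who wants a rough feel for the cost being discussed: a minimal, unscientific sketch in Python that ping-pongs a byte between two processes over pipes. It only bounds scheduling-plus-pipe overhead on whatever kernel you run it on; it is not how seL4’s published IPC numbers are measured.)

```python
import os, time

N = 50_000
r1, w1 = os.pipe()
r2, w2 = os.pipe()

pid = os.fork()
if pid == 0:                      # child: echo every byte straight back
    for _ in range(N):
        os.write(w2, os.read(r1, 1))
    os._exit(0)

start = time.perf_counter()
for _ in range(N):                # each round trip blocks twice in the kernel
    os.write(w1, b"x")
    os.read(r2, 1)
elapsed = time.perf_counter() - start
os.waitpid(pid, 0)

print(f"~{elapsed / N / 2 * 1e6:.2f} us per crossing (upper bound, includes pipe overhead)")
```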
disk I/O speeds
Are largely considered non-deterministic, and out of scope.
File system performance is relevant, but Linux has known, serious issues with this, with pathological I/O stalls which, finally, are being discussed at Linux Plumbers conferences, but remain unsolved.
This is in no small way a symptom of an approach to operating systems design that makes reasoning about latency so difficult it is not tractable.
The blog post then goes on to complain that people slag off microkernels based on old outdated performance benchmarks, followed by quoting some benchmarks and comparisons from the early 1990s.
More or less correct. More or less. (highlighted a word).
nothing newer than 2010.
The implication (supported by the “outdated” above, correct me if I am wrong) is that all the performance data is obsolete. But it is left implicit, possibly because you suspect this data is obsolete, but you couldn’t actually find a paper that supported your hypothesis. Thus the validity of these papers is now strengthened by your failed attempt at refutation.
99% of people that prefer monolithic kernels just get on with their lives
There’s a several-steps difference between “using a system that happens to be monolithic”, and “preferring monolithic kernels”. The latter would imply an (unlikely) understanding of the different system architectures and their pros/cons.
Thus, you could get away with claiming that 99% of people use operating systems that happen to be built around monolithic kernels. But to claim that 99% of people actually prefer monolithic kernels is nothing but absurd, even as an attempt at Argumentum ad populum.
I have not encountered such a person who would nevertheless prefer the monolithic approach. Not even Linus Torvalds has a clue, which I believe plays no small role in the systemic echo chamber problem the Linux community has with the µkernel approach.
Overall, having read your post and the points you tried to make in it, and realizing how quickly your response was written, I conclude you cannot possibly have read the article I linked to you. At most, you skimmed it. To put this into context, it took me several days to go through it (arguably just one or two hours per day, the density is quite high), and I spent weeks after that going through the papers referenced by it.
And thus I realize I have put too much effort in this post, relatively speaking. This is why it is likely I will not humor you further than I already have.
The absence (Sculpt is in the works, covering this scenario) of an open source desktop OS based on a modern operating system architecture is, indeed, not an argument against the µkernel approach being fit for the purpose. It simply hasn’t been done as open source. The article goes as far as to cite successful commercial examples (QNX used to offer a desktop, and it was great).
I didn’t say it isn’t fit for purpose. I’m not the one saying that. You are. You are the one claiming that a monolithic kernel is fundamentally a broken model. Well prove it. Build a better example. Show me the code.
My perspective is all the working kernels I’ve had the pleasure of using have been monolithic and I’ve never seen any demonstration that there are viable other alternatives. I’m sure there are, but until you build one and show it to us and give us performance benchmarks for real world usage, we just don’t know. There are hundreds of design choices in an operating system that affect throughput vs latency vs security etc. etc. and until you’ve actually gone and built a real world practical operating system with a microkernel at its core you can’t even demonstrate it’s possible to build one that’s fast, let alone that the thing that makes it faster is that particular design choice.
Your hypothetical does indeed apply to monoliths, and could possibly apply to some pre-Liedtke µkernels. It cannot apply to 2nd nor 3rd generation µkernels, due to the advances covered in the µkernel construction (1995) paper. µkernels just aren’t constructed the same way. Do note we’re talking about a paper from 25 years ago, thus your hypothetical is only a reasonable one to put forward 25+ years ago, certainly not today.
If you can’t actually build a working example in 25 years then maybe your idea isn’t so great in the first place.
Is something Linux is a sloth at. Not just a little slower than seL4, but orders of magnitude slower.
And how fast is seL4 when Spectre and Meltdown mitigations are introduced? Linux context switches aren’t slow because slow context switches are fun. They’re slow because they do a lot of work and that work is necessary. seL4 has to do that work anyway, and context switches need to happen a lot more often.
The implication (supported by the “outdated” above, correct me if I am wrong) is that all the performance data is obsolete. But it is left implicit, possibly because you suspect this data is obsolete, but you couldn’t actually find a paper that supported your hypothesis. Thus the validity of these papers is now strengthened by your failed attempt at refutation.
I don’t need a paper to support my hypothesis. I don’t have a hypothesis. It’s simply a statement of fact that these benchmarks are outdated. I’m not saying that if they were redone today they would be any different, or that they’d be the same. I don’t know, and neither do you. That’s the point.
Thus, you could get away with claiming that 99% of people use operating systems that happen to be built around monolithic kernels. But to claim that 99% of people actually prefer monolithic kernels is nothing but absurd, even as an attempt at Argumentum ad populum.
99% of people that prefer monolithic kernels. That prefer them. ‘99% of people that have blonde hair just get on with their lives’ does not mean ‘99% of people have blonde hair’ and ‘99% of people that prefer monolithic kernels just get on with their lives’ does not mean ‘99% of people prefer monolithic kernels’. Christ alive this isn’t hard. The irony of quoting Latin phrases at me in an attempt to make yourself look clever when you can’t even parse basic English syntax…
Overall, having read your post and the points you tried to make in it, and realizing how quickly your response was written, I conclude you cannot possibly have read the article I linked to you. At most, you skimmed it. To put this into context, it took me several days to go through it (arguably just one or two hours per day, the density is quite high), and I spent weeks after that going through the papers referenced by it.
You’re quite right. I skimmed it. It didn’t address the problems I have with your aggressive argumentative bullshit and so it isn’t relevant to the discussion.
Nobody actually cares whether microkernels are faster or monolithic kernels are faster. It doesn’t matter. It isn’t even a thing that exists. It’s like saying ‘compiled languages are faster’ or ‘interpreted languages are faster’. Sure, maybe, who cares? Specific language implementations can be faster or slower for certain tasks, and specific kernels can be faster or slower for certain tasks.
Perhaps it will turn out, when you eventually come back here and show us your new microkernel-based operating system, that your system has better performance characteristics for interactive graphical use than Linux. Perhaps you’ll even somehow be able to justify this as being due to your choice of microkernel over monolithic kernel and not the hundreds of other design choices you’ll have made along the way. And yet it might turn out that Linux is faster for batch processing and server usage.
We don’t know, and we won’t know until you demonstrate the performance characteristics of your new operating system by building it. But arguing about it on the internet and calling everyone that doesn’t care a ‘Linus Torvalds fanboy’ definitely isn’t going to convince anyone.
I’m curious about why you say that the state of the art is Genode specifically paired with seL4. What advantages does seL4 have over Nova? It looks like Sculpt is based on Nova. Would it be difficult to change it to use seL4?
Nova is a microhypervisor that, IIRC, was partly verified for correctness. seL4 is a microkernel that was verified for security and correctness down to the binary. Genode is designed to build on and extend secure microkernels. seL4 is the best option if one is focusing on security above all else.
An alternative done in SPARK Ada for verification is the Muen separation kernel for x86.
Genode offers binary (ABI) compatibility across kernels. They’re using the same components, the same drivers, the same application binaries.
I do not know the current state of their seL4 support. The last time I looked into it (more than a year ago), you could use either NOVA or seL4 for Sculpt, and seL4 had to be patched (with some patches they were on their way to upstreaming) or you’d get slow (extra layer of indirection due to a missing feature in MMU code) framebuffer support.
From a UX perspective, using either microkernel should feel about the same. I do of course find seL4 more interesting because of the proofs it provides, and because it provides them (assuming they got the framebuffer problem solved) without any drawbacks; seL4 team does claim their µkernel to be the fastest at every chance.
I also do favor seL4’s approach to hypervisor functionality, as it forwards hypercalls and exceptions to VMM, a user process with no higher capabilities than the VM itself, making an otherwise successful VM escape attack fruitless.
The funding is actually solved by electing someone who is willing to fund the FLOSS ecosystem instead of filling the pockets of proprietary software vendors: require the software stack purchased by public sector organizations to be open and to come with a real warranty. You don’t need billions of dollars to “hire every Linux developer”, you employ them indirectly by slightly adjusting the rules of the public tender.
Main policy selling points:
FLOSS is independently verifiable.
Relevant with Cisco, Juniper, Huawei and other companies having serious trouble not introducing back doors to their software.
Releasing to public stimulates economy.
Improvements to e.g. LibreOffice will save everyone’s time.
Releasing reference software to comply with regulation will save R&D costs and may be good enough to be used directly. For example when introducing a VAT on-line sale reporting system or digital services such as e-delivery, notary or small dispute settlement systems.
Reuse creates new markets and competition.
Releasing an ERP system for public sector organizations will create an opportunity for local IT companies to provide support to e.g. municipalities. Removing vendor lock-in from the equation will actually lead to cheaper and higher-quality user care while simultaneously making it easier to implement system-wide policy changes.
It will probably not solve gaming directly, though.
The problem isn’t technical, it’s political. In order to successfully lobby your points above, you have to have enough cash to outspend the Microsofts, Oracles, etc. The worse solutions will prevail because the worse solutions have business models that generate heaps of cash that they can use to convince governments to give them heaps of cash.
There is this generational thing going on right now, so don’t overestimate lobbying money; selling bad ideas is much more costly than selling the good ones.
Microsoft is losing installation share fast and is pivoting to aggressively selling Azure credits and O365 subscriptions instead. Not ideal from the security POV (NSA running most EU public infra and so on), but there is an opportunity of switching the end-user devices to Linux + Firefox. The next step is building FLOSS information systems to replace the crappy proprietary ones, gradually nudging people away from virtual paper towards shared databases while keeping the critical infrastructure away from public clouds.
Oracle is just a bunch of old people unable to understand the situation. I wouldn’t worry about them long-term.
Both parties back the military-industrial complex. They want backdoors for spying. The big tech companies also get lots of contracts. They might even be mission-critical dependencies. So, it might be an uphill battle getting rid of companies that cooperate with the NSA.
This list is highly opinionated. It seems like some of the issues are listed because of the author’s personal tastes rather than being real issues users will run into (reading for instance the Wayland-related ones, because I know Wayland).
Years ago, I remember reading some article why Linux wouldn’t succeed, and it was because Ubuntu’s default desktop lacked a refresh option in the context menu. Not everyone shares your same workflow.
While these are true technical issues, I really see no problem with “desktop Linux” today. I set up a PC with the MATE desktop for my kids (8 and 10 years old). While they use Windows 10 at school for the simple tasks they are asked to do, they have found no problem using the computer at home. My only interaction has been to help them set up their user/password. After that, they can explore the menu, launch programs, games, print a document they have written using LibreOffice. I honestly see no problem with the fact that the computer runs Linux; it is just a computer and newcomers can use it without much trouble.
Of course, they didn’t install Linux, but most people do not install Windows either. I guess at this point both systems are equally easy to install on an “empty” computer.
This is the classic text which gets updated every year. Inspired by the other thread, but here, we actually get bug references, so it’s a bit more concrete.
I’ve experienced a great many of these. Still preferable to a five-second telemetry delay every time I open the calculator on Windows.
Alan Kay mentioned the value in developing hardware and OS in tandem in one shop. The PC is a disaster as-is, and something has got to give. Rust and formally verified micro-kernels might buy some time, but anyone who has to reach into the guts of these things knows that’s just lipstick on the pig.
Glanced. On the topic of µkernels, none of the participants seem to have a clue. Some go as far as to claim µkernels are hopelessly slow without batting an eyelid. By all indications, they have never read Liedtke’s µkernel construction paper (1995), nor have a clue about the state of the art.
I do however agree that Fuchsia (mainly the Zircon µkernel) is brain-dead and a shame to have been written so recently with such a dated, inefficient design.
Unfortunately, I know nothing of the sort covering µkernels in general. Most of what I know about the history I’ve extracted from Gernot Heiser’s blog, which I’ve read from the oldest article to the newest (over the course of weeks) and pieced together in my head. This is mostly post-L4 information. The whole 1st/2nd/3rd µkernel generation concept I first heard of there. Gernot is one of the most prominent academic figures in µkernel-related research, and not one to skip.
Then there’s this paper which basically reviews L4 and whether it holds, 20 years later.
The incisive “µkernels are slow and Elvis didn’t do no drugs” does reference a load of non-L4 designs, some of them older, providing interesting quotes from papers and research literature, and linking to the sources of the quotes. It should prove helpful.
But, to answer your question, No. I unfortunately do not know of a good one-article full µkernel history review. There’s no going around reading a lot. Fortunately, you do still somewhat get to choose how deep to go.
It uses a newish, experimental language, and that’s the highlight. From a research perspective, trying too many things at once is not a good idea, so I truly hope the rest of the design is very traditional
I haven’t looked into the details, because if I am right then it would be boring anyway.
No native production-ready file system
ext4 is a native production-ready file system.
Deduplication is a questionable requirement tbh. Most modern ZFS guides usually have “DO NOT USE DEDUPLICATION” in all caps somewhere :)
And HAMMER2 (not Linux, but no licensing issues) is another CoW option.
You wouldn’t want case sensitivity when you end up with both readme and Readme and README.
But having both Straße.jpg and STRAẞE.jpg would be fine? And how will that work out in locales where uppercase i is İ? Really, case-insensitive filesystems are a silly idea from people who don’t understand languages/locales/Unicode.
Having both license and Iicense is way worse, and no filesystem prevents it.
Indeed. But at least case-sensitive does not pretend to solve it, thus preferable.
Haiku doesn’t support any hardware or software and is missing all the features that end up introducing the complexity that ends up introducing the bugs that you’re complaining about anyway.
Monolithic kernels vs microkernels have no actual impact on the ability for a desktop operating system to function properly and none of the problems with ‘Linux on the desktop’ are best solved with a microkernel approach. If anything it would cause even more fragmentation because every distro could fragment on implementations of core services instead of that fragmentation only being possible within a single Linux repository.
If anything, a good desktop experience seems to require more monolithic design: the entire desktop environment and operating system developed in a single cohesive project.
This is why ‘Linux on the desktop’ is a stupid goal. Ubuntu on the desktop should be the goal, or Red Hat on the desktop, or whatever. Pick a distro and make it the Linux desktop distro that strives above all else to be the best desktop distro. There you can have your cohesiveness and your anti-fragmentation decision making (project leader decides this is the only supported cron, the only supported init, only supported WM, etc.).
Literally everything would be better if we could design it again from the ground up with the knowledge we have now. The problem is that we have this stuff called time and it’s a big constraint on development, so we like to be able to reuse existing work in the form of existing software. If you want to write an operating system from scratch and then rewrite all the useful software that works on top of it that’s fine but it’s way more work than I’m interested in doing, personally. Because ‘a good desktop operating system’ requires not just good fundamentals but a wide variety of usable software for all the tasks people want to be able to do.
You can write compatibility layers for all the other operating systems if you want. Just know that the osdev.org forums are littered with example after example after example of half-finished operating system projects that promised to have compatibility layers for Windows, Linux and macOS software.
The Linux desktop is already good. So clearly it is not the case that it cannot satisfy the requirements.
This is super subjective. I for one do not consider Linux particularly good. I love the command line, and I have tried many times to make Linux my primary work desktop. However, I need excellent touchpad input with zero latency and precise acceleration, HiDPI support that Just Works with all apps (no pixel zooming), 60 fps scrolling without stuttering, and just the right amount of inertia.
To me both Linux and Android fail miserably on things like 60 fps scrolling and most people don’t even notice that it stutters. I know that’s some very subjective criteria that many people don’t have. I’m excited about projects like Wayland, cause maybe there is light at the end of the tunnel?
Never had a problem with this, personally, and I certainly dispute that anyone needs it. People have productively used computers for decades without it and it isn’t a desktop issue anyway. It’s a laptop issue. I’m sure Linux still has a long way to go on the laptop but shifting the goalposts isn’t helping anyone. What it means for Linux to be viable on the desktop seems to be changing every time it gets close. Now it apparently includes laptop-specific functionality?
And I’d like a supermodel girlfriend. HiDPI support is fucked on every platform, because it fundamentally requires support from the programmes that are running. It’s far better on Linux than on Windows. Windows can’t even scale its file browser properly; half the time the icons are the wrong size. It’s bizarre. It’s like they didn’t test anything before they claimed to ‘support’ it.
“Just the right amount of inertia” is subjective. I hate scrolling on macOS or iPhones, the inertia is far too high. I’m sure others feel the opposite and think it’s too low. Yet if it’s made configurable I’m sure people will complain about those damn software developers wanting to make everything configurable when things should ‘Just Work’. You can never win.
Also, a lot of monitors these days have really atrocious ghosting. Smooth scrolling on those monitors makes me feel sick. So please at least make it easy to turn it off and keep it functional if I do.
I get that none of what I said changes that you want those features and so do others, and they’ll never be satisfied until those features are there. I get it. Those features are requirements for you. They’re not inherently bad. But it’s worth bearing in mind that nobody is approaching this with the goal of fucking up the Linux desktop. Nobody wants you to have a bad experience. Things are the way they are because solving these problems is really hard without having the resources to just rewrite everything from scratch. Wayland is rewriting a huge chunk of the stack from scratch and that’s having some positive impact but it’s still for me in a state where it invalidates far too much of the stuff that was working fine already that I don’t want to use it any more. I’ve gone back to X.
Works great literally 100% of the time on my Mac. I can even mix external monitor types and everything “just works”.
Exactly. It’s 100% fair to keep moving the goalposts, because the rest of the industry isn’t taking a break waiting for Linux on the desktop to catch up to the state of the art.
I use mixed monitor densities with i3 as my daily driver and everything works perfectly 100% of the time. Working software for Linux exists.
For reasons which escape me, the biggest distributions have not fixed their defaults to make it work.
Of course you don’t need it for things like writing documents. I’ve worked over RDC connections with about 10 seconds of latency (I counted). I also raged the whole time, and my productivity tanked.
The point is that low input latency is a competitive advantage, and it’s known to improve ergonomics and productivity, even at 100ms levels, though at that level it’s more something you feel than notice. In comparisons of Android and iOS, what do they talk about? Input latency. In comparisons between Wayland and X11, it’s all about getting latency down (and avoiding graphics glitches, and effective sandboxing, and reducing the amount of code running as root; there are a lot of ways to improve on Xorg).
Good input latency is also necessary for playing action games, or for delicate remote control of things like drones, of course.
Personally I’ve had far worse experiences with remote desktop on Windows than on Linux. For example, remote desktoping into another Windows computer logs you out on that computer, or at least locks the screen, on Windows 10. Worse latency and relevant to this discussion too: terrible interaction with HiDPI (things scaled completely crazily when remoting into something with a different scaling factor).
Touchpad latency has nothing to do with desktop Linux.
Input latency on Linux is not an area of concern, it’s working perfectly fine. I have better performance in games (lower input latency, lower network latency, higher framerates, better support for different refresh rates on different monitors) on Linux than on Windows.
I know that Linux’s input layer isn’t 10-second-latency bad. That horrible situation was entirely the fault of the overloaded corporate VPN. I brought it up as a reductio ad absurdum for anyone claiming not to care about input latency; just because you are capable of getting work done does not make a scenario acceptable.
That’s why I didn’t compare it to Windows NT. I compared it to iOS.
Just because it’s not perfect or optimal doesn’t make it unacceptable, and it’s still not relevant to our discussion which is about desktop Linux. You seem happy to introduce other unrelated devices and operating systems and platforms when they help you push your view but then as soon as I respond to those points you retreat to a different position.
No, you compared Android (not desktop Linux) to iOS. I wasn’t responding to that. I was discussing input latency in the context of the discussion we’re actually having in this thread: desktop Linux. The alternative to desktop Linux (given you specifically mentioned ‘playing action games’) is clearly Windows and not iOS.
What version of Windows are you discussing here? At least for me on Windows 10, I haven’t noticed any problems with HiDPI in Explorer. And the fact still remains that when using 2 monitors with different DPIs, Linux handles this significantly worse than Windows does.
On Windows 10 at my last job I continually had errors with Windows Explorer not scaling its own icons correctly. This was with two screens with different DPIs.
In contrast I’ve never had any issues with this on Linux and in fact with sway I can even have fractional scaling so that my older monitors can pretend to have the same DPI as my main monitor if I want to.
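For reference, that fractional scaling is just a couple of lines in the sway config (a rough sketch; the output names here are hypothetical, so substitute whatever swaymsg -t get_outputs reports for your monitors):
output DP-1 scale 1
output HDMI-A-1 scale 1.5
With that, sway renders the second output at 1.5x, so mixed-DPI monitors end up with roughly the same effective DPI.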
Well, “with all apps” is a ridiculous requirement. You can’t magically shove the good stuff into JavaFX, Swing, Tk, GTK2, GTK1, Motif, etc. :)
My short guide to a great touchpad experience would be:
MOZ_ENABLE_WAYLAND=1 MOZ_WEBRENDER=1 firefox
widget.wayland_vsync.enabled=true
(hopefully will become default soon)
Huh? Android does 90 fps scrolling very well in my experience (other than, sometimes, in the Play Store updates list), and there are even 144Hz phones now, and the phone reviewers say they can feel how smooth all these new high-refresh-rate phones are.
Maybe I’ve just been unlucky with hardware but my android experience has always been plagued with microstutters when scrolling. I haven’t used iOS though, maybe the grass is always greener on the other side.
The stutters all over Android were one of the issues that made me give iOS a try. I haven’t gone back.
As my hat will tell you, I am biased in this matter, but still…
“Any”? Well, I have a 2015-era ThinkPad sitting on my desk here in which I have a Haiku install in which all major hardware (WiFi, ethernet, USB, …) works one way or another (the sound is a little finicky), and the only things that don’t work are sleep and GPU acceleration (which Haiku does not support broadly.) I also have a brand-new Ryzen 7 3700X-based machine with an NVMe hard drive that I just installed Haiku on, and everything there seems to work except WiFi, but this appears to be a bug in the FreeBSD driver that I’m looking to fix. It can even drive my 4K display at native resolution!
You can also install just about any Qt-based software and quite a lot of WX-based software, with a port of GTK3 in the works. LibreOffice, OpenSSH, etc. etc. So, there’s quite a lot of software available, too.
I’ve been inspired to try to find some time to install Haiku on an old laptop running Elementary OS!
I’ll preface this by saying: I’m not saying Haiku is bad, just that you clearly can’t compare Haiku’s software and hardware support with Linux’s and pretend that Haiku comes out on top.
Well ThinkPads generally have excellent hardware support on many free software operating systems. Of course it boils down to separate questions, doesn’t it: are we asking ‘is there a machine where it works?’ or ‘is there a machine where it doesn’t work?’. You can say something has ‘good hardware support’ if there are machines you can buy where everything works, but I would say it only really counts as ‘good hardware support’ if the average machine you go out and buy will work. You shouldn’t have to seek out supporting hardware.
Based on that evaluation I would say that Linux certainly doesn’t have good laptop hardware support, because you need to do your research pretty carefully when buying any remotely recent laptop, but by the first standard it’s fine: there are recent laptops that are well supported and all the new features are well supported.
But I would say that Linux has excellent desktop hardware support, and this is a thread about desktop Linux. I never need to check if something will be supported, it just always is. Often it’s supported before the products are actually released, like anything from Intel.
Sound is definitely major hardware and should not be finicky. Sound worked perfectly on Linux in the early 2000s.
But there you go, right? It’s like people going ‘oh my laptop supports Linux perfectly, except if I close the lid it’s bricked and the WiFi doesn’t work but other than that it’s perfect’. Well not really perfect at all. Not all major hardware at all. Sleeping is pretty basic stuff. GPU support is hideously overcomplicated and that’s not your fault, but it doesn’t change that it isn’t there.
Right and I’m sure that software is great for what it is, but in a context where people are saying Linux isn’t a viable desktop operating system because of bad touchpad latency people are claiming that the problem is its monolithic kernel and that we actually need Haiku to save the day with… no sleep support? No GPUs?
I think you are moving the goalposts here. You claimed Haiku does not support “any” hardware or software, when that is rather pointedly not true. What the original comment you replied to was saying, is that in many respects, Haiku is ahead of Linux – in UI responsiveness, overall design, layout, etc.
We are a tiny team of volunteers doing this in our spare time; we have a tiny fraction of the development effort that goes into the “Linux desktop.” The point is that we are actually not so radically far behind them; and where we are behind, perhaps doubling or tripling our efforts would be enough to catch up. How, then, does Linux spend so much time and money and wind up with a far worse desktop UI/UX experience than we do?
It was also clearly not intended to be taken literally.
Except it isn’t actually ahead of Linux in any of those things from any objective standpoint, just in the opinion of one person that will advocate for anything that isn’t Linux because they have a hate boner for anything popular.
Results aren’t proportional to effort. Getting something working is easy. Getting something really polished, with wide-ranging hardware and software support, very long term backwards and forwards compatibility, that has to be highly performant across a huge range of differently powered machines from really weak microprocessors all the way through to supercomputers? That’s really hard.
Linux doesn’t spend ‘so much time and money’ on the desktop user experience. In fact there’s very little commercial investment in that area. It’s mostly volunteer work that’s constantly being made harder by people more interested in server usage scenarios constantly replacing things out from under the desktop people. Having to keep up with all the stupid changes to systemd, for example, which is completely unfit for purpose.
But you can have an actually good user experience on Linux if you forgo all the GNOME/KDE crap and just use a tiling wayland compositor like sway. Still has a few little issues to iron out but it’s mostly there, and if it’s missing features you need to use then you can just use i3wm on Xorg and it works perfectly.
Is it not still the case that Haiku doesn’t support multi-monitor setups? I would hardly describe that as a ‘far better UI/UX experience’ given that before you can have UI/UX you have to actually be able to display something.
At least in UI responsiveness, Haiku is most definitely ahead of Linux. You can see the difference with just a stopwatch, not to mention a high-speed camera, for things like opening apps, mouse clicks, keyboard input, etc.
Plenty of people have talked about how Haiku is ahead of both GTK and KDE in terms of UX, so it’s not just me (or us.) Maybe you disagree, but it’s certainly not a rare opinion among those who know about Haiku.
The BSDs are not Linux, and have the same problems because they use the same desktop. Our opposition to Linux has not a ton to do with Linux itself and more the architectural model of “stacking up” software from disparate projects, which we see as the primary source of the problem.
Uh, last I checked, a number of Red Hat developers worked on GNOME as part of their jobs. I think KDE also has enough funding to pay people. The point is, Haiku has 0 full-time developers, and the Linux desktop ecosystem has, very clearly, a lot more than 0.
Yes. And that’s why Haiku exists, because we think those competing concerns probably cannot coexist, at least in the Linux model, and desktop usage deserves its own full OS.
We have drivers that can drive multiple displays in mirror mode on select AMD and Intel chips, but the plumbing for separate mode is not there quite yet. As you mentioned, graphics drivers are hard; I and a few others are trying to find the time and structure to bite the bullet and port Linux’s KMS-DRM drivers.
Obviously true multi-display would be nice, but one display works already. Pretty sure that counts as “displaying something.”
“Monolithic kernels vs microkernels have no actual impact on the ability for a desktop operating system to function properly and none of the problems with ‘Linux on the desktop’ are best solved with a microkernel approach.”
On the contrary: microkernels like QNX and Minix 3 have strong fault isolation, which makes self-healing easier. Some make it easier to maintain static components or hot-swap live ones. The RTOSes among them keep one process from stalling others, on top of good latency. People who used the QNX desktop demo told me they could run compiles on the weak hardware of the time with none of the sluggishness monoliths had.
GEMSOS, INTEGRITY-178B, and seL4 had mathematical proofs of their designs’ security claims. INTEGRITY-178B required user processes to donate their own CPU and memory to complete kernel actions, to accomplish both goals. seL4 was small enough for code-level verification. Being small enough for a bullet-proof implementation of privileged code, and allowing easier modification of the system, are persistent benefits of microkernels.
https://gs.statcounter.com/os-market-share/desktop/worldwide
Anything that is unpopular is bad? Anything that is popular is good? What are you trying to say with this ridiculous comment?
Do you think Fuchsia would fit the bill?
No, Fuchsia won’t.
I took a look at Zircon recently, and it appears to be a first-generation µkernel, as it seems to ignore everything Liedtke brought forward. I would particularly stress the principle of minimality (Zircon is functionality-bloated). It would have been an impressive µkernel thirty years ago. Today, it’s a potato.
But it is still better than Mach, used in iOS/OS X. I have no doubt the overall system will be nice to work with (APIs), considering they have people from the old NewOS/BeOS team in there. It will, for Google, likely become a workable replacement path for Linux, giving them much more control, but from a systems-design perspective, it is nothing short of shameful. They had the chance to use any of many competitive contemporary µkernels, but went with such a terrible solution just because of NIH. It taints the whole project, making it worthless from an outsider’s perspective.
Because of HelenOS ties, I expect Huawei’s HarmonyOS to be better at a fundamental level.
As the Fuchsia overview page says:
IMO, there’s no shame in favoring pragmatism over purity. From what I’ve read, it looks like Fuchsia will still be a major step up in security compared to any current widely used general-purpose OS.
They do think they are pragmatic. It’s not the same as actually being pragmatic.
Those “pragmatic concerns” show, if anything, that they’ve heard about minimality but did not care to understand the why; they actually mention performance, and think putting extra functionality inside the kernel helps them with that; they ignored Liedtke’s research.
A wasted opportunity. If only they had done a little more research on the state of the art before deciding to roll their own.
Assuming OSX/IOS and Linux are the contenders you have in mind, this looks like a really low bar to meet, and thus very feasible.
The fact is that nobody has ever actually demonstrated a performant operating system based on a microkernel.
Have you heard about QNX?
It’s proprietary software and I haven’t used it.
That’s quite the liberal use of the word fact.
If you can handle foul language, you might enjoy reading this article.
Starting off a blog post by talking about ‘microkernel hatred’ is pretty funny given that 99% of people that prefer monolithic kernels just get on with their lives while the microkernel people seem to be obsessed with comparing them and arguing for them and slagging off monolithic kernels and generally doing anything other than actually working on them.
The blog post then goes on to complain that people slag off microkernels based on old outdated performance benchmarks, followed by quoting some benchmarks and comparisons from the early 1990s. There’s not much performance data indicated from this millennium, and nothing newer than 2010.
It’s 2020. I expect to see performance data comparing realistic server and desktop workloads across production operating systems running on top of microkernels and monolithic kernels and explanations for why these differences should be attributed towards the kernel designs and not towards other aspects of the operating system designs. Because sure as hell Linux is not a theoretically optimally performant monolithic kernel, and I’m sure there are much faster monolithic kernel designs out there that don’t have all the legacy bits slowing things down that are there in Linux, or for that matter, the various things slowing Linux down that exist for reasons of flexibility across multiple domains of work, or to patch security vulnerabilities, etc.
It’s often said that you rewrite your system and it’s twice as fast, then you fix all the bugs and it’s 50% faster than the original, then you add all the missing features and it’s back to being just as slow as the original again. I suspect that performance figures being better on demos is probably mostly due to this. Operating system speed in 2020 is dominated by context switching and disk I/O speeds, especially the former since the Spectre crap, so anything you can do to cut down on those bottlenecks is going to give you by far the most bang for your buck in performance.
Nowhere does this blog post quote the ‘myth’ of “if they are so great, why don’t you install one on your desktop system and use it” because they know there’s no good answer to it.
The absence of an open-source desktop OS based on a modern operating-system architecture (Sculpt is in the works covering this scenario) is, indeed, not an argument against the µkernel approach being fit for the purpose. It simply hasn’t been done as open source. The article goes as far as to cite successful commercial examples (QNX used to offer a desktop, and it was great).
There’s just nothing open that’s actually quite there, yet. And it is indeed a shame.
Your hypothetical does indeed apply to monoliths, and could possibly apply to some pre-Liedtke µkernels. It cannot apply to 2nd- or 3rd-generation µkernels, due to the advances covered in the µkernel construction paper (1995); µkernels just aren’t constructed the same way. Do note we’re talking about a paper from 25 years ago, so your hypothetical would only have been a reasonable one to put forward 25+ years ago, certainly not today.
Context switching is something Linux is a sloth at. Not just a little slower than seL4, but orders of magnitude slower.
Disk I/O speeds are largely considered non-deterministic, and out of scope.
File system performance is relevant, but Linux has known, serious issues with it, with pathological I/O stalls which, finally, are being discussed at the Linux Plumbers Conference, but remain unsolved.
This is in no small way a symptom of an approach to operating systems design that makes reasoning about latency so difficult it is not tractable.
More or less correct. More or less. (highlighted a word).
The implication (supported by the “outdated” above, correct me if I am wrong) is that all the performance data is obsolete. But it is left implicit, possibly because you suspect this data is obsolete, but you couldn’t actually find a paper that supported your hypothesis. Thus the validity of these papers is now strengthened by your failed attempt at refutation.
There’s a several-steps difference between “using a system that happens to be monolithic”, and “preferring monolithic kernels”. The latter would imply an (unlikely) understanding of the different system architectures and their pros/cons.
Thus, you could get away with claiming that 99% of people use operating systems that happen to be built around monolithic kernels. But to claim that 99% of people actually prefer monolithic kernels is nothing but absurd, even as an attempt at Argumentum ad populum.
I have not encountered a person with such an understanding who would nonetheless prefer the monolithic approach. Not even Linus Torvalds has a clue, which I believe plays no small role in the systemic echo-chamber problem the Linux community has with the µkernel approach.
Overall, having read your post and the points you tried to make in it, and realizing how fast your response was written, I conclude you cannot possibly have read the article I linked for you. At most, you skimmed it. To put this into context, it took me several days to go through it (arguably just one or two hours per day, the density is quite high), and I spent weeks after that going through the papers referenced by it.
And thus I realize I have put too much effort in this post, relatively speaking. This is why it is likely I will not humor you further than I already have.
I didn’t say it isn’t fit for purpose. I’m not the one saying that. You are. You are the one claiming that a monolithic kernel is fundamentally a broken model. Well prove it. Build a better example. Show me the code.
My perspective is all the working kernels I’ve had the pleasure of using have been monolithic and I’ve never seen any demonstration that there are viable other alternatives. I’m sure there are, but until you build one and show it to us and give us performance benchmarks for real world usage, we just don’t know. There are hundreds of design choices in an operating system that affect throughput vs latency vs security etc. etc. and until you’ve actually gone and built a real world practical operating system with a microkernel at its core you can’t even demonstrate it’s possible to build one that’s fast, let alone that the thing that makes it faster is that particular design choice.
If you can’t actually build a working example in 25 years then maybe your idea isn’t so great in the first place.
And how fast is seL4 when Spectre and Meltdown mitigations are introduced? Linux context switches aren’t slow because slow context switches are fun. They’re slow because they do a lot of work and that work is necessary. seL4 has to do that work anyway, and context switches need to happen a lot more often.
I don’t need a paper to support my hypothesis. I don’t have a hypothesis. It’s simply a statement of fact that these benchmarks are outdated. I’m not saying that if they were redone today they would be any different, or that they’d be the same. I don’t know, and neither do you. That’s the point.
99% of people that prefer monolithic kernels. That prefer them. '99% of people that have blonde hair just get on with their lives' does not mean '99% of people have blonde hair', and '99% of people that prefer monolithic kernels just get on with their lives' does not mean '99% of people prefer monolithic kernels'. Christ alive, this isn't hard. The irony of quoting Latin phrases at me in an attempt to make yourself look clever when you can't even parse basic English syntax…
You’re quite right. I skimmed it. It didn’t address the problems I have with your aggressive argumentative bullshit and so it isn’t relevant to the discussion.
Nobody actually cares whether microkernels are faster or monolithic kernels are faster. It doesn’t matter. It isn’t even a thing that exists. It’s like saying ‘compiled languages are faster’ or ‘interpreted languages are faster’. Sure, maybe, who cares? Specific language implementations can be faster or slower for certain tasks, and specific kernels can be faster or slower for certain tasks.
Perhaps it will turn out, when you eventually come back here and show us your new microkernel-based operating system, that your system has better performance characteristics for interactive graphical use than Linux. Perhaps you’ll even somehow be able to justify this as being due to your choice of microkernel over monolithic kernel and not the hundreds of other design choices you’ll have made along the way. And yet it might turn out that Linux is faster for batch processing and server usage.
We don’t know, and we won’t know until you demonstrate the performance characteristics of your new operating system by building it. But arguing about it on the internet and calling everyone that doesn’t care a ‘Linus Torvalds fanboy’ definitely isn’t going to convince anyone.
I’m curious about why you say that the state of the art is Genode specifically paired with seL4. What advantages does seL4 have over Nova? It looks like Sculpt is based on Nova. Would it be difficult to change it to use seL4?
Nova is a microhypervisor that, IIRC, was partly verified for correctness. seL4 is a microkernel that was verified for security and correctness down to the binary. Genode is designed to build on and extend secure microkernels. seL4 is the best option if one is focusing on security above all else.
An alternative done in SPARK Ada for verification is the Muen separation kernel for x86.
Because operating system and kernel enthusiast communities are full of people that are obsessed with dead, irrelevant technology.
Genode offers binary (ABI) compatibility across kernels. They’re using the same components, the same drivers, the same application binaries.
I do not know the current state of their seL4 support. The last time I looked into it (more than a year ago), you could use either NOVA or seL4 for Sculpt, but seL4 had to be patched (with some patches they were on their way to upstreaming) or you’d get slow framebuffer support (an extra layer of indirection due to a missing feature in the MMU code).
From a UX perspective, using either microkernel should feel about the same. I do of course find seL4 more interesting because of the proofs it provides, and because it provides them (assuming they got the framebuffer problem solved) without any drawbacks; the seL4 team claims at every chance that their µkernel is the fastest.
I also favor seL4’s approach to hypervisor functionality, as it forwards hypercalls and exceptions to the VMM, a user process with no higher capabilities than the VM itself, making an otherwise successful VM-escape attack fruitless.
The funding problem is actually solved by electing someone who is willing to fund the FLOSS ecosystem instead of filling the pockets of proprietary software vendors: require the software stack purchased by public-sector organizations to be open and to come with a real warranty. You don’t need billions of dollars to “hire every Linux developer”; you employ them indirectly by slightly adjusting the rules of the public tender.
Main policy selling points:
It will probably not solve gaming directly, though.
The problem isn’t technical, it’s political. In order to successfully lobby for your points above, you have to have enough cash to outspend the Microsofts, Oracles, etc. The worse solutions will prevail because the worse solutions have business models that generate heaps of cash that they can use to convince governments to give them heaps of cash.
There is a generational thing going on right now, so don’t overestimate lobbying money; selling bad ideas is much more costly than selling good ones.
Microsoft is losing installation share fast and is pivoting to aggressively selling Azure credits and O365 subscriptions instead. Not ideal from a security POV (the NSA running most EU public infrastructure and so on), but there is an opportunity to switch end-user devices to Linux + Firefox. The next step is building FLOSS information systems to replace the crappy proprietary ones, gradually nudging people away from virtual paper towards shared databases while keeping the critical infrastructure away from public clouds.
Oracle is just a bunch of old people unable to understand the situation. I wouldn’t worry about them long-term.
“not introducing back doors to their software.”
Both parties back the military-industrial complex. They want backdoors for spying. The big tech companies also get lots of contracts. They might even be mission-critical dependencies. So, it might be an uphill battle getting rid of companies that cooperate with the NSA.
The EU is a large market. Mandating open firmware will be met with supply.
I am more worried about losing Taiwan before people realize you cannot outsource ICT without outsourcing information and communication policy.
Re Taiwan. Same concern. I was eyeballing South Korea since it had fabs.
This list is highly opinionated. It seems like some of the issues are listed because of the author’s personal tastes rather than being real issues users will run into (reading for instance the Wayland-related ones, because I know Wayland).
Years ago, I remember reading some article why Linux wouldn’t succeed, and it was because Ubuntu’s default desktop lacked a refresh option in the context menu. Not everyone shares your same workflow.
While these are true technical issues, I really see no problem with “desktop Linux” today. I set up a PC with the MATE desktop for my kids (8 and 10 years old). While they use Windows 10 at school for the simple tasks they are asked to do, they have had no problem using the computer at home. My only interaction has been to help them set up their user/password. After that, they can explore the menu, launch programs and games, and print a document they have written using LibreOffice. I honestly see no problem with the fact that the computer runs Linux; it is just a computer, and newcomers can use it without much trouble.
Of course, they didn’t install linux, but most people do not install windows either. I guess at this point both systems are equally easy to install on an “empty” computer.
This is the classic text which gets updated every year. Inspired by the other thread, but here, we actually get bug references, so it’s a bit more concrete.
Didn’t we just have a thread about this on the front page with all the same points and discussion?
I’ve experienced a great many of these. Still preferable to a five-second telemetry delay every time I open the calculator on Windows.
Alan Kay mentioned the value in developing hardware and OS in tandem in one shop. The PC is a disaster as-is, and something has got to give. Rust and formally verified micro-kernels might buy some time, but anyone who has to reach into the guts of these things knows that’s just lipstick on the pig.
I found one of the linked slashdot threads pretty interesting. From personal experience, many of the claims they make sound realistic.
Glanced. On the topic of µkernels, none of the participants seem to have a clue. Some go as far as to claim µkernels are hopelessly slow without batting an eyelid. By all indications, they have never read Liedtke’s µkernel construction paper (1995), nor have a clue about the state of the art.
I do however agree that Fuchsia (mainly the Zircon µkernel) is brain-dead and a shame to have been written so recently with such a dated, inefficient design.
Do you have an idea whether there exists some overview article that shows the significant improvements µkernels went through throughout the decades?
Unfortunately, I know of nothing of the sort covering µkernels in general. Most of what I know about the history I’ve extracted from Gernot Heiser’s blog, which I’ve read from the oldest article to the newest (over the course of weeks) and pieced together in my head. This is mostly post-L4 information. The whole 1st/2nd/3rd µkernel generation concept is something I first heard of there. Gernot is one of the most prominent academic figures in µkernel-related research, and not one to skip.
Then there’s this paper which basically reviews L4 and whether it holds, 20 years later.
The incisive “µkernels are slow and Elvis didn’t do no drugs” does reference a load of non-L4 designs, some of them older, providing interesting quotes from papers and research literature, and linking to the sources of the quotes. It should prove helpful.
But, to answer your question, No. I unfortunately do not know of a good one-article full µkernel history review. There’s no going around reading a lot. Fortunately, you do still somewhat get to choose how deep to go.
Thanks a lot, I will look into it!
Would love to hear your take on Redox, btw. … do you have some thoughts on it?
It uses a newish, experimental language, and that’s the highlight. From a research perspective, trying too many things at once is not a good idea, so I truly hope the rest of the design is very traditional.
I haven’t looked into the details, because if I am right then it would be boring anyway.
Please let me know if you ever have a look at it, would be really interested in your opinion.