1. 6

    I don’t have much (really any) use for it, but man it makes me happy every time I see that these guys are keeping the spirit of BeOS alive.

    1. 14

      It’s not just “for the spirit of BeOS” anymore; we’re trying to become a legitimate competitor to the “Linux desktop” (and in some respects at least, succeeding.) Obviously we still have a lot of catching up to do.

      1. 2

        I wish you the best of luck.

        I really liked Haiku when I tried it a few weeks ago, but I am still tethered to Firefox for useful addons like uBlock Origin. Perhaps Haiku will become more popular if Firefox continues to decline and the addon ecosystem goes to shit with it (a real possibility IMO).

        Do Haiku users tend to use /etc/hosts for ad blocking?

        1. 2

          Some users use Otter Browser which is QtWebKit-based and has an adblocker built in, I think. I have heard of users using /etc/hosts, yes.
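
          For anyone unfamiliar with the technique: it just amounts to pointing ad-serving hostnames at a non-routable address so connections to them go nowhere. The domains below are placeholders only; real blocklists bundle thousands of entries like these:

              # illustrative /etc/hosts entries (placeholder domains)
              0.0.0.0    ads.example.com
              0.0.0.0    tracker.example.net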

        2. 1

          I know; that’s mainly why I’m not all that interested: we already have a million UNIX-alikes. But I’m just glad it’s still out there, just the same.

          1. 7

            It’s not just another UNIX-alike; Haiku inherits from BeOS a unique philosophy in the field of user experience and interface design. It’s a really satisfying system to use.

        1. 2

          Does Haiku make it a goal to be fully POSIX-compliant? Does it do so completely natively without wrapping other functions? I saw your post about it here and understand that it’s already POSIX-compliant enough to be considered a proper UNIX-like OS, but I’m curious if you plan to take it all the way or if you already have.

          1. 6

            Yes, it is POSIX “natively”; most of the POSIX APIs directly invoke syscalls (or other POSIX functions.)

            The only POSIX APIs we do not have are ones which do not make a lot of sense anymore (like “hostid”) and are barely used, and also some XSI extensions not in POSIX proper (like XSI shared memory; we do have mman shared memory as well as file-mapping shared memory.) We may eventually get around to implementing these; but at least we don’t often run into missing POSIX APIs when porting new software.
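
            To make the distinction concrete, here is a rough sketch of the mman route (plain POSIX shm_open()/mmap(); the name and size are arbitrary placeholders, and nothing here is Haiku-specific), which is the portable alternative to the XSI shmget()/shmat() interface we lack:

                /* Rough sketch: POSIX (mman) shared memory, the portable
                 * alternative to XSI shmget()/shmat(). Name and size are
                 * arbitrary placeholders. On Linux, link with -lrt if needed. */
                #include <fcntl.h>
                #include <stdio.h>
                #include <string.h>
                #include <sys/mman.h>
                #include <unistd.h>

                int main(void)
                {
                    const char *name = "/demo-shm";   /* arbitrary example name */
                    int fd = shm_open(name, O_CREAT | O_RDWR, 0600);
                    if (fd < 0) { perror("shm_open"); return 1; }
                    if (ftruncate(fd, 4096) < 0) { perror("ftruncate"); return 1; }

                    char *p = mmap(NULL, 4096, PROT_READ | PROT_WRITE, MAP_SHARED, fd, 0);
                    if (p == MAP_FAILED) { perror("mmap"); return 1; }

                    strcpy(p, "hello from POSIX shared memory");  /* visible to other mappers */

                    munmap(p, 4096);
                    close(fd);
                    shm_unlink(name);                 /* remove the name when finished */
                    return 0;
                }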

            1. 1

              Oh wow, that’s awesome! So if POSIX compatibility isn’t a problem, what are the biggest hurdles when it comes to porting new software to Haiku? If I target POSIX-compliant Linux, do I automatically achieve Haiku support?

              1. 10

                I think you will be surprised at just how many nonstandard API calls and flags Linux has. You will have to read the manpages very carefully, and use one of the “strict” macros before including headers, to be sure you are using pure POSIX :)
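
                As a minimal sketch of what I mean (standard feature-test-macro behavior, nothing Haiku-specific): define one of the strict macros before any #include, and most nonstandard declarations disappear, so accidental Linux-isms fail at compile time instead of at porting time:

                    /* Sketch: request a strict POSIX environment. The macro must
                     * appear before any #include (or be passed via -D on the
                     * compiler command line). */
                    #define _POSIX_C_SOURCE 200809L

                    #include <stdio.h>
                    #include <unistd.h>

                    int main(void)
                    {
                        /* Pure POSIX calls remain available... */
                        printf("page size: %ld\n", sysconf(_SC_PAGESIZE));

                        /* ...while a glibc-only call such as pipe2() (which needs
                         * _GNU_SOURCE) would now be undeclared, surfacing the
                         * portability problem immediately. */
                        return 0;
                    }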

                But yes, porting POSIX-compliant applications is very easy. Command line tools that already run on at least Linux and FreeBSD usually can be ported in the space of a few hours if they are not precisely compliant and require build system patches, type changes, etc., or a few minutes if they really and truly use POSIX only. So, more or less, yes, you “automatically” get Haiku support.

                Most of the hurdles when porting applications are in using platform-specific APIs (more complex things like browsers do a lot of this for memory management, for instance) or GUI toolkits (we have Qt and now wxWidgets, but the GTK3 port is still a work-in-progress and not in the package repos yet.)

          2. 1

            Do any graphics cards have 3D acceleration yet? It doesn’t seem so from looking at the wiki but it’s always possible that it isn’t totally accurate or I missed something.

            1. 3

              No, none do. But there are things in the works…

          1. 17

            Trying to get Linux not to suck on the desktop is a losing proposition.

            To put it into context: when enough of the hardware works, Haiku offers a better free desktop experience today, even though the manpower behind it is not remotely comparable. A coherent UI, an easy-to-understand desktop that behaves as expected, responsiveness, avoidance of stalls on load. BeOS did achieve the same, but as a proprietary OS, in the mid nineties.

            Linux tries to do everything at once, with a design (a UNIX-like monolith) that favors server usage with disregard for latency and is thus ill-suited for the desktop. On top of that, its primarily corporate funding effectively steers it in directions that have nothing to do with the desktop and are to its detriment. It is hopeless.

            If we could start with a clean slate today, a system design fit for a desktop would be something with clean APIs and well-understood relationships between components, engineered for low, bounded response times (thus an RTOS); throughput is secondary, particularly when the impact on throughput is low to negligible. As desktop computers are networked these days, security (confidentiality, integrity and availability) is a requirement.

            If you look at the state of the art, you’ll find that I am describing Genode, paired with seL4. If a thousandth of the effort put into the Linux desktop (which does not and cannot satisfy the requirements) were redirected to these projects, we would have had an excellent open-source desktop OS years ago. One that no current proprietary solution would be able to come near.

            A proof of concept is available through Sculpt (new release expected within days), demonstrated in this FOSDEM talk. Another FOSDEM talk covers the current state of seL4.

            Full disclosure: Currently using Linux (with all its faults) as main desktop OS. I have done so for 20 years. AmigaOS before that.

            1. 21

              Haiku doesn’t support any hardware or software and is missing all the features that end up introducing the complexity that ends up introducing the bugs that you’re complaining about anyway.

              Linux tries to do everything at once, with a design (a UNIX-like monolith) that favors server usage with disregard for latency and is thus ill-suited for the desktop. On top of that, its primarily corporate funding effectively steers it in directions that have nothing to do with the desktop and are to its detriment. It is hopeless.

              Monolithic kernels vs microkernels have no actual impact on the ability for a desktop operating system to function properly and none of the problems with ‘Linux on the desktop’ are best solved with a microkernel approach. If anything it would cause even more fragmentation because every distro could fragment on implementations of core services instead of that fragmentation only being possible within a single Linux repository.

              If anything, a good desktop experience seems to require more monolithic design: the entire desktop environment and operating system developed in a single cohesive project.

              This is why ‘Linux on the desktop’ is a stupid goal. Ubuntu on the desktop should be the goal, or Red Hat on the desktop, or whatever. Pick a distro and make it the Linux desktop distro that strives above all else to be the best desktop distro. There you can have your cohesiveness and your anti-fragmentation decision making (project leader decides this is the only supported cron, the only supported init, only supported WM, etc.).

              If we could start with a clean slate today, a system design fit for a desktop would be something with clean APIs and well-understood relationships between components, engineered for low, bounded response times (thus an RTOS); throughput is secondary, particularly when the impact on throughput is low to negligible. As desktop computers are networked these days, security (confidentiality, integrity and availability) is a requirement.

              Literally everything would be better if we could design it again from the ground up with the knowledge we have now. The problem is that we have this stuff called time and it’s a big constraint on development, so we like to be able to reuse existing work in the form of existing software. If you want to write an operating system from scratch and then rewrite all the useful software that works on top of it that’s fine but it’s way more work than I’m interested in doing, personally. Because ‘a good desktop operating system’ requires not just good fundamentals but a wide variety of usable software for all the tasks people want to be able to do.

              You can write compatibility layers for all the other operating systems if you want. Just know that the osdev.org forums are littered with example after example after example of half-finished operating system projects that promised to have compatibility layers for Windows, Linux and macOS software.

              If a thousandth of the effort put into the Linux desktop (which does not and cannot satisfy the requirements) were redirected to these projects, we would have had an excellent open-source desktop OS years ago.

              The Linux desktop is already good. So clearly it is not the case that it cannot satisfy the requirements.

              1. 5

                The Linux desktop is already good.

                This is super subjective. I for one do not consider Linux particularly good – I love the command line, and I have tried many times to make Linux my primary work desktop. However I need excellent touchpad input with zero latency and precise acceleration, HiDPI support that Just Works with all apps (no pixel zooming), 60 fps scrolling without stuttering and just the right amount of inertia.

                To me both Linux and Android fail miserably on things like 60 fps scrolling and most people don’t even notice that it stutters. I know that’s some very subjective criteria that many people don’t have. I’m excited about projects like Wayland, cause maybe there is light at the end of the tunnel?

                1. 7

                  However I need excellent touchpad input with zero latency and precise acceleration

                  Never had a problem with this, personally, and I certainly dispute that anyone needs it. People have productively used computers for decades without it and it isn’t a desktop issue anyway. It’s a laptop issue. I’m sure Linux still has a long way to go on the laptop but shifting the goalposts isn’t helping anyone. What it means for Linux to be viable on the desktop seems to be changing every time it gets close. Now it apparently includes laptop-specific functionality?

                  HiDPI support that Just Works with all apps (no pixel zooming)

                  And I’d like a supermodel girlfriend. HiDPI support is fucked on every platform, because it fundamentally requires support from the programmes that are running. It’s handled far better on Linux than on Windows. Windows can’t even scale its file browser properly, half the time the icons are the wrong size. It’s bizarre. It’s like they didn’t test anything before they claimed to ‘support’ it.

                  60 fps scrolling without stuttering and just the right amount of inertia.

                  “Just the right amount of inertia” is subjective. I hate scrolling on macOS or iPhones, the inertia is far too high. I’m sure others feel the opposite and think it’s too low. Yet if it’s made configurable I’m sure people will complain about those damn software developers wanting to make everything configurable when things should ‘Just Work’. You can never win.

                  Also, a lot of monitors these days have really atrocious ghosting. Smooth scrolling on those monitors makes me feel sick. So please at least make it easy to turn it off and keep it functional if I do.

                  I get that none of what I said changes that you want those features and so do others, and they’ll never be satisfied until those features are there. I get it. Those features are requirements for you. They’re not inherently bad. But it’s worth bearing in mind that nobody is approaching this with the goal of fucking up the Linux desktop. Nobody wants you to have a bad experience. Things are the way they are because solving these problems is really hard without having the resources to just rewrite everything from scratch. Wayland is rewriting a huge chunk of the stack from scratch and that’s having some positive impact, but for me it’s still in a state where it invalidates far too much of the stuff that was already working fine, so I don’t want to use it any more. I’ve gone back to X.

                  1. 7

                    HiDPI support is fucked on every platform

                    Works great literally 100% of the time on my Mac. I can even mix external monitor types and everything “just works”.

                    1. 3

                      Exactly. It’s 100% fair to keep moving the goalposts, because the rest of the industry isn’t taking a break waiting for Linux on the desktop to catch up to the state of the art.

                      1. 1

                        I use mixed monitor densities with i3 as my daily driver and everything works perfectly 100% of the time. Working software for Linux exists.

                        For reasons which escape me, the biggest distributions have not fixed their defaults to make it work.

                      2. 3

                        Never had a problem with this, personally, and I certainly dispute that anyone needs it.

                        Of course you don’t need it for things like writing documents. I’ve worked over RDC connections with about 10 seconds of latency (I counted). I also raged the whole time, and my productivity tanked.

                        The point is that low input latency is a competitive advantage, and it’s known to improve ergonomics and productivity, even at 100ms levels, though it’s more something you feel than notice at that level. In comparisons of Android and iOS, what do they talk about? Input latency. In comparisons between Wayland and X11, it’s all about getting latency down (and avoiding graphics glitches, and effective sandboxing, and reducing the amount of code running as root; there are a lot of ways to improve on Xorg).

                        Good input latency is also necessary for playing action games, or for delicate remote control of things like drones, of course.

                        1. 1

                          Of course you don’t need it for things like writing documents. I’ve worked over RDC connections with about 10 seconds of latency (I counted). I also raged the whole time, and my productivity tanked.

                          Personally I’ve had far worse experiences with remote desktop on Windows than on Linux. For example, remote desktoping into another Windows computer logs you out on that computer, or at least locks the screen, on Windows 10. Worse latency and relevant to this discussion too: terrible interaction with HiDPI (things scaled completely crazily when remoting into something with a different scaling factor).

                          The point is that low input latency is a competitive advantage, and it’s known to improve ergonomics and productivity, even at 100ms levels, though it’s more something you feel than notice at that level.

                          Touchpad latency has nothing to do with desktop Linux.

                          Good input latency is also necessary for playing action games, or for delicate remote control of things like drones, of course.

                          Input latency on Linux is not an area of concern, it’s working perfectly fine. I have better performance in games (lower input latency, lower network latency, higher framerates, better support for different refresh rates on different monitors) on Linux than on Windows.

                          1. 2

                            Personally I’ve had far worse experiences with remote desktop on Windows than on Linux.

                            I know that Linux’s input layer isn’t 10sec-latency bad. That horrible situation was entirely the fault of the overloaded corporate VPN. I brought it up as a reductio ad absurdum to anyone claiming not to care about input latency; just because you are capable of getting work done does not make a scenario acceptable.

                            Input latency on Linux is not an area of concern, it’s working perfectly fine. I have better performance in games (lower input latency, lower network latency, higher framerates, better support for different refresh rates on different monitors) on Linux than on Windows.

                            That’s why I didn’t compare it to Windows NT. I compared it to iOS.

                            1. 2

                              I brought it up as a reductio ad absurdum to anyone claiming not to care about input latency; just because you are capable of getting work done does not make a scenario acceptable.

                              Just because it’s not perfect or optimal doesn’t make it unacceptable, and it’s still not relevant to our discussion which is about desktop Linux. You seem happy to introduce other unrelated devices and operating systems and platforms when they help you push your view but then as soon as I respond to those points you retreat to a different position.

                              That’s why I didn’t compare it to Windows NT. I compared it to iOS.

                              No, you compared Android (not desktop Linux) to iOS. I wasn’t responding to that. I was discussing input latency in the context of the discussion we’re actually having in this thread: desktop Linux. The alternative to desktop Linux (given you specifically mentioned ‘playing action games’) is clearly Windows and not iOS.

                        2. 3

                          Windows can’t even scale its file browser properly, half the time the icons are the wrong size.

                          What version of Windows are you discussing here? At least for me on Windows 10, I haven’t noticed any problems with HiDPI in Explorer. And the fact still remains that when using 2 monitors with different DPIs, Linux handles this significantly worse than Windows does.

                          1. 1

                            On Windows 10 at my last job I continually had errors with Windows Explorer not scaling its own icons correctly. This was with two screens with different DPIs.

                            In contrast I’ve never had any issues with this on Linux and in fact with sway I can even have fractional scaling so that my older monitors can pretend to have the same DPI as my main monitor if I want to.

                        3. 2

                          Well, “with all apps” is a ridiculous requirement. You can’t magically shove the good stuff into JavaFX, Swing, Tk, GTK2, GTK1, Motif, etc. :)

                          My short guide to a great touchpad experience would be:

                          • use wayland, of course
                          • stick to GTK apps as much as possible (I have a list by the way)
                          • apply this gtk patch (and the mentioned “relevant frame-clock ones” for good measure)
                          • MOZ_ENABLE_WAYLAND=1 MOZ_WEBRENDER=1 firefox
                            • about:config widget.wayland_vsync.enabled=true (hopefully will become default soon)

                          Android fail miserably on things like 60 fps scrolling

                          huh? Android does 90 fps scrolling very well in my experience (other than, sometimes, in the Play Store updates list), and there are even 144Hz phones now and the phone reviewers say they can feel how smooth all these new high refresh rate phones are.

                          1. 2

                            Android does 90 fps scrolling very well in my experience (other than, sometimes, in the Play Store updates list), and there are even 144Hz phones now and the phone reviewers say they can feel how smooth all these new high refresh rate phones are.

                            Maybe I’ve just been unlucky with hardware but my android experience has always been plagued with microstutters when scrolling. I haven’t used iOS though, maybe the grass is always greener on the other side.

                            1. 2

                              The stutters all over Android were one of the issues that made me give iOS a try. I haven’t gone back.

                        4. 5

                          As my hat will tell you, I am biased in this matter, but still…

                          Haiku doesn’t support any hardware or software

                          “Any”? Well, I have a 2015-era ThinkPad sitting on my desk here on which I have a Haiku install where all major hardware (WiFi, ethernet, USB, …) works one way or another (the sound is a little finicky), and the only things that don’t work are sleep and GPU acceleration (which Haiku does not support broadly.) I also have a brand-new Ryzen 7 3700X-based machine with an NVMe hard drive that I just installed Haiku on, and everything there seems to work except WiFi, but this appears to be a bug in the FreeBSD driver that I’m looking to fix. It can even drive my 4K display at native resolution!

                          You can also install just about any Qt-based software and quite a lot of WX-based software, with a port of GTK3 in the works. LibreOffice, OpenSSH, etc. etc. So, there’s quite a lot of software available, too.

                          1. 3

                            I’ve been inspired to try to find some time to install Haiku on an old laptop running Elementary OS!

                            1. 2

                              I’ll preface this by saying: I’m not saying Haiku is bad, just that you clearly can’t compare Haiku’s software and hardware support with Linux’s and pretend that Haiku comes out on top.

                              Well ThinkPads generally have excellent hardware support on many free software operating systems. Of course it boils down to separate questions, doesn’t it: are we asking ‘is there a machine where it works?’ or ‘is there a machine where it doesn’t work?’. You can say something has ‘good hardware support’ if there are machines you can buy where everything works, but I would say it only really counts as ‘good hardware support’ if the average machine you go out and buy will work. You shouldn’t have to seek out supporting hardware.

                              Based on that evaluation I would say that Linux certainly doesn’t have good laptop hardware support, because you need to do your research pretty carefully when buying any remotely recent laptop, but by the first standard it’s fine: there are recent laptops that are well supported and all the new features are well supported.

                              But I would say that Linux has excellent desktop hardware support, and this is a thread about desktop Linux. I never need to check if something will be supported, it just always is. Often it’s supported before the products are actually released, like anything from Intel.

                              the sound is a little finicky

                              Sound is definitely major hardware and should not be finicky. Sound worked perfectly on Linux in the early 2000s.

                              sleep and GPU acceleration (which Haiku does not support broadly.)

                              But there you go, right? It’s like people going ‘oh my laptop supports Linux perfectly, except if I close the lid it’s bricked and the WiFi doesn’t work but other than that it’s perfect’. Well not really perfect at all. Not all major hardware at all. Sleeping is pretty basic stuff. GPU support is hideously overcomplicated and that’s not your fault, but it doesn’t change that it isn’t there.

                              You can also install just about any Qt-based software and quite a lot of WX-based software, with a port of GTK3 in the works. LibreOffice, OpenSSH, etc. etc. So, there’s quite a lot of software available, too.

                              Right and I’m sure that software is great for what it is, but in a context where people are saying Linux isn’t a viable desktop operating system because of bad touchpad latency people are claiming that the problem is its monolithic kernel and that we actually need Haiku to save the day with… no sleep support? No GPUs?

                              1. 3

                                I think you are moving the goalposts here. You claimed Haiku does not support “any” hardware or software, when that is rather pointedly not true. What the original comment you replied to was saying, is that in many respects, Haiku is ahead of Linux – in UI responsiveness, overall design, layout, etc.

                                We are a tiny team of volunteers doing this in our spare time; we have a tiny fraction of the development effort that goes into the “Linux desktop.” The point is that we are actually not so radically far behind them; and in the ways we are behind, perhaps doubling or tripling our efforts would be enough to catch up. How, then, does Linux spend so much time and money, and wind up with a far worse desktop UI/UX experience than we do?

                                1. 1

                                  You claimed Haiku does not support “any” hardware or software, when that is rather pointedly not true.

                                  It was also clearly not intended to be taken literally.

                                  What the original comment you replied to was saying, is that in many respects, Haiku is ahead of Linux – in UI responsiveness, overall design, layout, etc.

                                  Except it isn’t actually ahead of Linux in any of those things from any objective standpoint, just in the opinion of one person that will advocate for anything that isn’t Linux because they have a hate boner for anything popular.

                                  We are a tiny team of volunteers doing this in our spare time; we have a tiny fraction of the development effort that goes into the “Linux desktop.” The point is that we are actually not so radically far behind them; and in the ways we are behind, perhaps doubling or tripling our efforts would be enough to catch up. How, then, does Linux spend so much time and money, and wind up with a far worse desktop UI/UX experience than we do?

                                  Results aren’t proportional to effort. Getting something working is easy. Getting something really polished, with wide-ranging hardware and software support, very long term backwards and forwards compatibility, that has to be highly performant across a huge range of differently powered machines from really weak microprocessors all the way through to supercomputers? That’s really hard.

                                  Linux doesn’t spend ‘so much time and money’ on the desktop user experience. In fact there’s very little commercial investment in that area. It’s mostly volunteer work that’s constantly being made harder by people more interested in server usage scenarios constantly replacing things out from under the desktop people. Having to keep up with all the stupid changes to systemd, for example, which is completely unfit for purpose.

                                  But you can have an actually good user experience on Linux if you forgo all the GNOME/KDE crap and just use a tiling wayland compositor like sway. Still has a few little issues to iron out but it’s mostly there, and if it’s missing features you need to use then you can just use i3wm on Xorg and it works perfectly.

                                  Is it not still the case that Haiku doesn’t support multi-monitor setups? I would hardly describe that as a ‘far better UI/UX experience’ given that before you can have UI/UX you have to actually be able to display something.

                                  1. 4

                                    Except it isn’t actually ahead of Linux in any of those things from any objective standpoint

                                    At least in UI responsiveness, Haiku is most definitely ahead of Linux. You can see the difference with just a stopwatch, not to mention a high-speed camera, for things like opening apps, mouse clicks, keyboard input, etc.

                                    Plenty of people have talked about how Haiku is ahead of both GTK and KDE in terms of UX, so it’s not just me (or us.) Maybe you disagree, but it’s certainly not a rare opinion among those who know about Haiku.

                                    just in the opinion of one person that will advocate for anything that isn’t Linux

                                    The BSDs are not Linux, and have the same problems because they use the same desktop. Our opposition to Linux does not have a ton to do with Linux itself, and more to do with the architectural model of “stacking up” software from disparate projects, which we see as the primary source of the problem.

                                    Linux doesn’t spend ‘so much time and money’ on the desktop user experience. In fact there’s very little commercial investment in that area.

                                    Uh, last I checked, a number of Red Hat developers worked on GNOME as part of their jobs. I think KDE also has enough funding to pay people. The point is, Haiku has 0 full-time developers, and the Linux desktop ecosystem has, very clearly, a lot more than 0.

                                    that’s constantly being made harder by people more interested in server usage scenarios constantly replacing things out from under the desktop people. Having to keep up with all the stupid changes to systemd, for example, which is completely unfit for purpose.

                                    Yes. And that’s why Haiku exists, because we think those competing concerns probably cannot coexist, at least in the Linux model, and desktop usage deserves its own full OS.

                                    Is it not still the case that Haiku doesn’t support multi-monitor setups?

                                    We have drivers that can drive multiple displays in mirror mode on select AMD and Intel chips, but the plumbing for separate mode is not there quite yet. As you mentioned, graphics drivers are hard; I and a few others are trying to find the time and structure to bite the bullet and port Linux’s KMS-DRM drivers.

                                    you have to actually be able to display something.

                                    Obviously true multi-display would be nice, but one display works already. Pretty sure that counts as “displaying something.”

                            2. 2

                              “Monolithic kernels vs microkernels have no actual impact on the ability for a desktop operating system to function properly and none of the problems with ‘Linux on the desktop’ are best solved with a microkernel approach.”

                              On the contrary, microkernels like QNX and Minix 3 have strong fault isolation, which makes self-healing easier. Some make it easier to maintain static components or hot-swap live ones. The RTOS’s among them keep one process from stalling others, on top of good latency. People who used the QNX desktop demo told me they could do compiles on the weak hardware of the time without the sluggishness monoliths had.

                              GEMSOS, INTEGRITY-178B, and seL4 had mathematical proofs of their designs’ security claims. INTEGRITY-178B required user processes to donate their own CPU and memory to complete kernel actions to accomplish both goals. seL4 was small enough for code-level verification. Being small enough for a bullet-proof implementation of privileged code, and allowing easier modification of the system, are persistent benefits of microkernels.

                              1. 2
                                1. 2

                                  Anything that is unpopular is bad? Anything that is popular is good? What are you trying to say with this ridiculous comment?

                              2. 5

                                Do you think Fuchsia would fit the bill?

                                1. 2

                                  No, Fuchsia won’t.

                                  I took a look at Zircon recently, and it appears to be a first-generation µkernel, as it seems to ignore everything Liedtke brought forward. I would particularly stress the principle of minimality (Zircon is functionality-bloated). It would have been an impressive µkernel thirty years ago. Today, it’s a potato.

                                  But it is still better than Mach, used in iOS/macOS. I have no doubt the overall system will be nice to work with (APIs), considering they have people from the old NewOS/BeOS team in there. It will, for Google, likely become a workable replacement path for Linux, giving them much more control, but from a systems design perspective, it is nothing short of shameful. They had the chance to use any of many competitive contemporary µkernels, but went with such a terrible solution just because of NIH. It taints the whole project, making it worthless from an outsider perspective.

                                  Because of HelenOS ties, I expect Huawei’s HarmonyOS to be better at a fundamental level.

                                  1. 3

                                    As the Fuchsia overview page says:

                                    Fuchsia is not a microkernel

                                    Although Fuchsia applies many of the concepts popularized by microkernels, Fuchsia does not strive for minimality. For example, Fuchsia has over 170 syscalls, which is vastly more than a typical microkernel. Instead of minimality, the system architecture is guided by practical concerns about security, privacy, and performance. As a result, Fuchsia has a pragmatic, message-passing kernel.

                                    IMO, there’s no shame in favoring pragmatism over purity. From what I’ve read, it looks like Fuchsia will still be a major step up in security compared to any current widely used general-purpose OS.

                                    1. 2

                                      there’s no shame in favoring pragmatism over purity

                                      They do think they are pragmatic. It’s not the same as actually being pragmatic.

                                      Those “pragmatic concerns” show, if anything, that they’ve heard about minimality, but did not care to understand the why; they actually mention performance, and think putting extra functionality inside the kernel helps them with that; they ignored Liedtke’s research.

                                      A wasted opportunity. If only they had done a little more research on the state of the art before deciding to roll their own.

                                      Fuchsia will still be a major step up in security compared to any current widely used general-purpose OS.

                                      Assuming macOS/iOS and Linux are the contenders you have in mind, this looks like a really low bar to meet, and thus very feasible.

                                      1. 3

                                        The fact is that nobody has ever actually demonstrated a performant operating system based on a microkernel.

                                        1. 3

                                          Have you heard about QNX?

                                          1. 1

                                            It’s proprietary software and I haven’t used it.

                                          2. 1

                                            That’s quite the liberal use of the word fact.

                                            If you can handle foul language, you might enjoy reading this article.

                                            1. 3

                                              Starting off a blog post by talking about ‘microkernel hatred’ is pretty funny given that 99% of people that prefer monolithic kernels just get on with their lives while the microkernel people seem to be obsessed with comparing them and arguing for them and slagging off monolithic kernels and generally doing anything other than actually working on them.

                                              The blog post then goes on to complain that people slag off microkernels based on old outdated performance benchmarks, followed by quoting some benchmarks and comparisons from the early 1990s. There’s not much performance data indicated from this millennium, and nothing newer than 2010.

                                              It’s 2020. I expect to see performance data comparing realistic server and desktop workloads across production operating systems running on top of microkernels and monolithic kernels and explanations for why these differences should be attributed towards the kernel designs and not towards other aspects of the operating system designs. Because sure as hell Linux is not a theoretically optimally performant monolithic kernel, and I’m sure there are much faster monolithic kernel designs out there that don’t have all the legacy bits slowing things down that are there in Linux, or for that matter, the various things slowing Linux down that exist for reasons of flexibility across multiple domains of work, or to patch security vulnerabilities, etc.

                                              It’s often said that you rewrite your system and it’s twice as fast, then you fix all the bugs and it’s 50% faster than the original, then you add all the missing features and it’s back to being just as slow as the original again. I suspect that performance figures being better on demos is probably mostly due to this. Operating system speed in 2020 is dominated by context switching and disk I/O speeds, especially the former since the Spectre crap, so anything you can do to cut down on those bottlenecks is going to give you by far the most bang for your buck in performance.

                                              Nowhere does this blog post quote the ‘myth’ of “if they are so great, why don’t you install one on your desktop system and use it” because they know there’s no good answer to it.

                                              1. 3

                                                Nowhere does this blog post quote the ‘myth’ of “if they are so great, why don’t you install one on your desktop system and use it” because they know there’s no good answer to it.

                                                The absence (Sculpt is in the works covering this scenario) of an open-source desktop OS based on a modern operating system architecture is, indeed, not an argument against the µkernel approach being fit for the purpose. It simply hasn’t been done as open source. The article goes as far as to cite successful commercial examples (QNX used to offer a desktop, and it was great).

                                                There’s just nothing open that’s actually quite there, yet. And it is indeed a shame.

                                                It’s often said that you rewrite your system and it’s twice as fast, then you fix all the bugs and it’s 50% faster than the original, then you add all the missing features and it’s back to being just as slow as the original again. I suspect that performance figures being better on demos is probably mostly due to this.

                                                Your hypothetical does indeed apply to monoliths, and could possibly apply to some pre-Liedtke µkernels. It cannot apply to 2nd or 3rd generation µkernels, due to the advances covered in the µkernel construction (1995) paper. µkernels just aren’t constructed the same way. Do note we’re talking about a paper from 25 years ago; thus your hypothetical was only a reasonable one to put forward 25+ years ago, certainly not today.

                                                context switching (dominates performance…)

                                                Is something Linux is a sloth at. Not just a little slower than seL4, but orders of magnitude slower.

                                                disk I/O speeds

                                                Are largely considered non-deterministic, and out of scope.

                                                File system performance is relevant, but Linux has known, serious issues with this, with pathological i/o stalls which, finally, are being discussed in Linux Plumbing conferences, but remain unsolved.

                                                This is in no small way a symptom of an approach to operating systems design that makes reasoning about latency so difficult it is not tractable.

                                                The blog post then goes on to complain that people slag off microkernels based on old outdated performance benchmarks, followed by quoting some benchmarks and comparisons from the early 1990s.

                                                More or less correct. More or less. (highlighted a word).

                                                nothing newer than 2010.

                                                The implication (supported by the “outdated” above, correct me if I am wrong) is that all the performance data is obsolete. But it is left implicit, possibly because you suspect this data is obsolete, but you couldn’t actually find a paper that supported your hypothesis. Thus the validity of these papers is now strengthened by your failed attempt at refutation.

                                                99% of people that prefer monolithic kernels just get on with their lives

                                                There’s a several-steps difference between “using a system that happens to be monolithic”, and “preferring monolithic kernels”. The latter would imply an (unlikely) understanding of the different system architectures and their pros/cons.

                                                Thus, you could get away with claiming that 99% of people use operating systems that happen to be built around monolithic kernels. But to claim that 99% of people actually prefer monolithic kernels is nothing but absurd, even as an attempt at Argumentum ad populum.

                                                I have not encountered such a person who would still prefer the monolithic approach. Not even Linus Torvalds has a clue, which I believe plays no small role in the systemic echo chamber problem the Linux community has with the µkernel approach.

                                                Overall, having read your post and the points you tried to make in it, and realizing how fast your response was written, I conclude you cannot possibly have read the article I linked to you. At most, you skimmed it. To put this into context, it took me several days to go through it (arguably just one or two hours per day, the density is quite high), and I spent weeks after that going through the papers referenced by it.

                                                And thus I realize I have put too much effort in this post, relatively speaking. This is why it is likely I will not humor you further than I already have.

                                                1. 2

                                                  The absence (Sculpt is in the works covering this scenario) of an open-source desktop OS based on a modern operating system architecture is, indeed, not an argument against the µkernel approach being fit for the purpose. It simply hasn’t been done as open source. The article goes as far as to cite successful commercial examples (QNX used to offer a desktop, and it was great).

                                                  I didn’t say it isn’t fit for purpose. I’m not the one saying that. You are. You are the one claiming that a monolithic kernel is fundamentally a broken model. Well prove it. Build a better example. Show me the code.

                                                  My perspective is all the working kernels I’ve had the pleasure of using have been monolithic and I’ve never seen any demonstration that there are viable other alternatives. I’m sure there are, but until you build one and show it to us and give us performance benchmarks for real world usage, we just don’t know. There are hundreds of design choices in an operating system that affect throughput vs latency vs security etc. etc. and until you’ve actually gone and built a real world practical operating system with a microkernel at its core you can’t even demonstrate it’s possible to build one that’s fast, let alone that the thing that makes it faster is that particular design choice.

                                                  Your hypothetical does indeed apply to monoliths, and could possibly apply to some pre-Liedtke µkernels. It cannot apply to 2nd or 3rd generation µkernels, due to the advances covered in the µkernel construction (1995) paper. µkernels just aren’t constructed the same way. Do note we’re talking about a paper from 25 years ago; thus your hypothetical was only a reasonable one to put forward 25+ years ago, certainly not today.

                                                  If you can’t actually build a working example in 25 years then maybe your idea isn’t so great in the first place.

                                                  Is something Linux is a sloth at. Not just a little slower than seL4, but orders of magnitude slower.

                                                  And how fast is seL4 when Spectre and Meltdown mitigations are introduced? Linux context switches aren’t slow because slow context switches are fun. They’re slow because they do a lot of work and that work is necessary. seL4 has to do that work anyway, and context switches need to happen a lot more often.

                                                  The implication (supported by the “outdated” above, correct me if I am wrong) is that all the performance data is obsolete. But it is left implicit, possibly because you suspect this data is obsolete, but you couldn’t actually find a paper that supported your hypothesis. Thus the validity of these papers is now strengthened by your failed attempt at refutation.

                                                  I don’t need a paper to support my hypothesis. I don’t have a hypothesis. It’s simply a statement of fact that these benchmarks are outdated. I’m not saying that if they were redone today they would be any different, or that they’d be the same. I don’t know, and neither do you. That’s the point.

                                                  Thus, you could get away with claiming that 99% of people use operating systems that happen to be built around monolithic kernels. But to claim that 99% of people actually prefer monolithic kernels is nothing but absurd, even as an attempt at Argumentum ad populum.

                                                  99% of people that prefer monolithic kernels. That prefer them. ‘99% of people that have blonde hair just get on with their lives’ does not mean ‘99% of people have blonde hair’, and ‘99% of people that prefer monolithic kernels just get on with their lives’ does not mean ‘99% of people prefer monolithic kernels’. Christ alive this isn’t hard. The irony of quoting Latin phrases at me in an attempt to make yourself look clever when you can’t even parse basic English syntax…

                                                  Overall, having read your post and the points you tried to make in it, and realizing how fast your response was written, I conclude you cannot possibly have read the article I linked to you. At most, you skimmed it. To put this into context, it took me several days to go through it (arguably just one or two hours per day, the density is quite high), and I spent weeks after that going through the papers referenced by it.

                                                  You’re quite right. I skimmed it. It didn’t address the problems I have with your aggressive argumentative bullshit and so it isn’t relevant to the discussion.

                                                  Nobody actually cares whether microkernels are faster or monolithic kernels are faster. It doesn’t matter. It isn’t even a thing that exists. It’s like saying ‘compiled languages are faster’ or ‘interpreted languages are faster’. Sure, maybe, who cares? Specific language implementations can be faster or slower for certain tasks, and specific kernels can be faster or slower for certain tasks.

                                                  Perhaps it will turn out, when you eventually come back here and show us your new microkernel-based operating system, that your system has better performance characteristics for interactive graphical use than Linux. Perhaps you’ll even somehow be able to justify this as being due to your choice of microkernel over monolithic kernel and not the hundreds of other design choices you’ll have made along the way. And yet it might turn out that Linux is faster for batch processing and server usage.

                                                  We don’t know, and we won’t know until you demonstrate the performance characteristics of your new operating system by building it. But arguing about it on the internet and calling everyone that doesn’t care a ‘Linus Torvalds fanboy’ definitely isn’t going to convince anyone.

                                  2. 3

                                    I’m curious about why you say that the state of the art is Genode specifically paired with seL4. What advantages does seL4 have over Nova? It looks like Sculpt is based on Nova. Would it be difficult to change it to use seL4?

                                    1. 4

                                      Nova is a microhypervisor that, IIRC, was partly verified for correctness. seL4 is a microkernel that was verified for security and correctness down to the binary. Genode is designed to build on and extend secure microkernels. seL4 is the best option if one is focusing on security above all else.

                                      An alternative done in SPARK Ada for verification is the Muen separation kernel for x86.

                                      1. 0

                                        Because operating system and kernel enthusiast communities are full of people that are obsessed with dead, irrelevant technology.

                                        1. 1

                                          Genode offers binary (ABI) compatibility across kernels. They’re using the same components, the same drivers, the same application binaries.

                                          I do not know the current state of their seL4 support. The last time I looked into it (more than a year ago), you could use either NOVA or seL4 for Sculpt, and seL4 had to be patched (with some patches they were on their way to upstreaming) or you’d get slow (extra layer of indirection due to a missing feature in MMU code) framebuffer support.

                                          From a UX perspective, using either microkernel should feel about the same. I do of course find seL4 more interesting because of the proofs it provides, and because it provides them (assuming they got the framebuffer problem solved) without any drawbacks; the seL4 team claims at every chance that their µkernel is the fastest.

                                          I also favor seL4’s approach to hypervisor functionality, as it forwards hypercalls and exceptions to the VMM, a user process with no higher capabilities than the VM itself, making an otherwise successful VM escape attack fruitless.

                                      1. 10

                                        Kind of amazing the progress this project is making. On the flip side I can’t help but wonder what is attracting people to it over Haiku. They’re both BSD licensed, both use C++. Both hark back to a similar era. Both GUI first. Haiku is arguably a lot more functional and further along though.

                                        Some ideas that come to mind:

                                        • I wouldn’t be surprised if at least part of it was that Serenity is on GitHub and uses a GitHub pull request workflow.
                                        • Serenity looks easy to build and supports a wide range of build hosts (Linux, macOS, FreeBSD, OpenBSD).
                                        • Being newer it’s easier for someone to just show up with an idea, implement it and have it accepted.
                                        • Perhaps it appeals to a broader audience since it’s Windows inspired vs. BeOS.
                                        1. 9

                                          Once upon a time in the early/mid ’00s, there were a plethora of operating system projects that existed “just for the heck of it”, and most of them are essentially dead at this point for one reason or another. Haiku had a purpose that most of them did not: an unrealized vision of a better future for computing.

                                          To me, at least, SerenityOS feels like a callback to those days when developers got together and learned something about computers and operating systems by building one. Which is ultimately pretty cool; but it’s a pipe dream to think you will be able to use it as your primary OS anytime soon. Most of the people working on SerenityOS seem to be doing it for the fun of it, which is great! Obviously Haiku has a ton of “solved problems” that SerenityOS, being newer, does not, and you can learn a ton by working on it.

                                          But in terms of being a realistic possibility for a “daily driver”? Yeah, SerenityOS is years and years away from that. And when this project first made the rounds, I know at least some of the developers said at the time that they wanted to get there. That’s not a sentiment uncommon to new OS developers; but, well, Haiku has been around for two decades, and once made progress as rapidly and as impressively as SerenityOS, yet as you can see, our install-base is still rather small, and people still have lists of things that we would need to do in order for them to make the jump.

                                          It’s also worth noting here that SerenityOS has (or had?) a policy of not using any imported code whatsoever, even for things like ACPI, where Linux, *BSD, Haiku, etc. all use Intel’s ACPICA (and even the OSDev wiki recommends hobbyist OS developers do, too, simply because of how absurdly complicated ACPI is), or the libc, or the coreutils, or the shell, or any number of other things which Haiku et al. reuse from one another. That means that SerenityOS has a task ahead of it that is unbelievably massive even in comparison to Haiku, which uses the GNU coreutils, bash, musl’s libm, etc. and does not completely and totally re-create every wheel. Again, doing those things is an excellent way to learn; but it’s more or less incompatible with using the system as an actual daily driver.

                                          1. 3

                                            Linux, *BSD, Haiku, etc. all use Intel’s ACPICA

                                            Except OpenBSD. They have to deal with some fun bugs due to their own implementation sometimes :)

                                            1. 1

                                              Inspired by this thread, I spent some time on the SerenityOS issue tracker. I am interested in a well-designed permissively licensed OS that is not written in C. I was not convinced that SerenityOS is going to be that system.

                                              It currently targets x86-32 and has a single userspace ABI. Adding good layering for these with a clean set of abstractions is really hard to get right and causes massive pain later on if you don’t. Their approach is to just incrementally refactor to get x86-64 support, without thinking about a final design. Once you have two architectures supported, cleaning up the abstractions is hard because your testing burden is high. Once you have three, your mistakes are basically baked in forever.

                                              The BSD family was quite lucky here, because the VAX port required them to think hard about these abstractions, get them wrong, and then copy the ones that Mach built based on their experience. Linux was less fortunate and so ended up with a split between architecture-specific and architecture-agnostic code that is quite painful in some places (for example, system call numbers are architecture specific, managing signal delivery for the product of architecture and ABI is quite ugly).

                                            2. 11

                                            I’ve played a lot with Serenity and made a couple of modest contributions. I have also played about with Haiku a little bit. For me, I was more attracted to the former, for a few reasons. Firstly, the GUI is much nicer - it’s almost exactly what I want in a classic style desktop environment, and makes me feel a little nostalgic for the Windows interfaces on which I learnt to use computers. For the most part I find Haiku’s interface to be a little ugly - although I do love the boot screen! Secondly, Serenity is a clean sheet design - built in a thoroughly modern way simply in accordance with the intuition of Andreas and the other developers, rather than in an attempt to cling on to compatibility with an obscure OS that was dead before I was even old enough to use a computer. Thirdly - and this is mostly as a consequence of Andreas’ videos - Serenity felt to me like a system that was alive and blossoming, that I could jump into and make a difference to, while Haiku seemed to me an anachronism kept alive by a cabal of mysterious maintainers who refuse to let go of the past. I’m sure that’s not the case, and that Haiku’s community is welcoming and forward thinking, but it’s hard to be inspired into lending a hand by simply seeing a new set of patch notes every two years.

                                              TL;DR - Haiku is clinging to the past while Serenity is taking interfaces of the past back to the future.

                                              1. 13

                                                For the most part I find Haiku’s interface to be a little ugly

                                                If you are speaking purely of the “look and feel” of Haiku, why not just write a Windows “Decorator” (window border styling) and “Control Look” (control theming)? You could get almost a pixel-perfect recreation of the Serenity GUI on Haiku. We personally just like the way Haiku looks now, but anyone can customize it!

                                              rather than in an attempt to cling on to compatibility with an obscure OS that was dead before I was even old enough to use a computer.

                                                I think you will find that we are more modern-minded than even Linux in terms of how the system is put together. Maybe not quite as modern-minded as Serenity, but the Be origins have not constrained us. The package filesystem is proof enough of that, as are the use of C++ in the kernel and quite a lot of other things under the hood.

                                                while Haiku seemed to me an anachronism kept alive by a cabal of mysterious maintainers who refuse to let go of the past

                                                Dude, when I started contributing to Haiku the better part of a decade ago, I was in high school. We’re not all (or even at this point, mostly) “old geezers”. We have forums, an IRC channel, mailing lists, a bug tracker, and it is pretty easy to see who we are; and our technical decisions are pretty good proof that we absolutely know how to let go of the past.

                                                but it’s hard to be inspired into lending a hand by simply seeing a new set of patch notes every two years.

                                                We’ve been publishing monthly Activity Reports on the blog detailing what’s been going on in the Haikusphere for multiple years now, and new software (and screenshots) appear in the Depot on a weekly basis.

                                                1. 4

                                                  If you are speaking purely of the “look and feel” of Haiku…

                                                  My impressions are naturally, if unfairly, formed off what the system looks like in its default state, not what could be achieved with a weekend’s worth of programming.

                                                  I think you will find that we are more modern-minded than even Linux…

                                                Fair enough. I always got the sense skimming through the project that it was a bit tied down by its adherence to BeOS, but I’m happy to be wrong on this point.

                                                  We’re not all (or even at this point, mostly) “old geezers”.

                                                  Again, I don’t doubt you’re right, but the impression I got of the community was of a very old project, and the assumption I made from that was that it would be quite set in its ways. Perhaps I am completely wrong about that.

                                                  We’ve been publishing monthly Activity Reports on the blog

                                                The honest truth is that, like most people, with the exception of a few blogs that I go out of my way to check, I only really see what bubbles up on HN, lobsters, Reddit, /g/, etc. Hence my exposure to Haiku is pretty limited.

                                                  I really don’t want my original post to be interpreted as ‘this is why Serenity is better than Haiku’. My intention was to rather explain ‘this is why a bored student browsing the techy parts of the internet might be more drawn to Serenity than to Haiku’. I have a great deal of respect for your project and your comments in this thread have inspired me to perhaps check in on it with a little more regularity :-)

                                                  1. 2

                                                  The website indeed could use a refresh with more information as to what we do and what we are about, sure. But, I mean, if you go look at Fedora or Ubuntu or something, are their websites really that much more engaging as to getting involved with the project? Not really. So it’s a hard balance to find for us, because we are initially (if not ultimately) targeting the same market the “Linux Desktop” is, while we have a fraction of both the volunteers and the financial support they do.

                                                    1. 1

                                                      It’s perhaps natural, then, that hobbyists with a bit of time on their hands are more likely to feel able to get stuck in with a GitHub project which features, front-and-centre, a YouTube channel of a guy making near-daily coding logs, rather than something like Haiku, which - to its credit - looks far more like a professional endeavour than an amateur collective’s labour-of-love.

                                                      1. 1

                                                        Well, the “ports” portion of the Haiku project lives on GitHub, and there is a GitHub mirror of the main repository with a very friendly README.

                                                        Yes, we are more focused on actually getting development done in what precious little spare time we have than making YouTube video logs about it. But Kyle Ambroff-Kao, one of the newer names (he was granted commit access last month :) has started doing development screencasts, so maybe some of us do have the time…

                                                2. 7

                                                  I’m not sure I completely agree with some of your characterisation of Haiku but I get your point.

                                                  and makes me feel a little nostalgic for the Windows interfaces on which I learnt to use computers. For the most part I find Haiku’s interface to be a little ugly

                                                  It’s funny, I grew up on Mac OS and consider classic Windows supremely ugly. To me Haiku (or Platinum Mac OS) is my ideal classic vibe. So, for me the appearance of SerenityOS puts me off a little, I guess in the same way Haiku might for you. :)

                                                  Anyway, thanks for responding. It seems my intuition might be on the right track. I’m interested to watch how the project progresses.

                                                  1. 3

                                                    From its home page, SerenityOS’s key selling point seems to be it’s “a love letter to ’90s user interfaces” … something I don’t grok at all, the 90s being that awkward age of “wow, if we color the top and left edges darker and the bottom and right lighter, it looks like it’s inset!!” in GUI design. But at least they’re not aping Motif…

                                                    And this matters, because all those sharp contrasts and hard lines create a ton of visual noise that makes it hard to parse the interface and focus on the important stuff. I freely admit today’s GUIs have their problems and silly fads, but they’re so much better.

                                                    Behind the GUI, I don’t see the website describing any new and different architecture that would entice me to work on this, or pick it for a desktop over a stable Linux or BSD distro.

                                                    tl;dr: You damn kids and your “retro” stuff! You don’t know how much better you have it now than in the old days. Now turn off that “vaporwave” and get off my lawn!

                                                    1. 2

                                                      I am completely willing to put my hands up in the air and say that my impressions of Haiku are just that - my impressions - formed from the collective sum of the few times I’ve seen the project pop up on aggregator sites and an hour and a half playing with an ISO in qemu. That is to say, I am in no way qualified to make reasonable assertions about the Haiku project, either technically or with regards to its community. At the end of the day, Serenity just captured my imagination more than Haiku, and that’s what ultimately matters when it comes to deciding whose codebase to spend your afternoon trawling through.

                                                1. 5

                                                  I don’t really understand the rationale for storing a file’s icon in its inode. Most of the time the icon is based on the file type/extension, which is obviously shared by many files, so you can use an in-memory cache to avoid any disk I/O at all. The cache can contain pre-rendered pixmaps so you don’t have to waste time rendering either.
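
                                                    Something like this is all that’s needed: a minimal sketch (generic C++, nothing OS-specific; the Pixmap type and the render_icon_for_type() hook are invented placeholders) of caching pre-rendered icons per file type, so the disk is only touched on the first lookup for each type.

                                                        #include <memory>
                                                        #include <string>
                                                        #include <unordered_map>

                                                        // Stand-in for whatever pixel buffer the toolkit hands back after rendering.
                                                        struct Pixmap { };

                                                        // Hypothetical hook into the icon theme: loads and rasterizes an icon once.
                                                        std::shared_ptr<Pixmap> render_icon_for_type(const std::string& /*mimeType*/, int /*size*/)
                                                        {
                                                            return std::make_shared<Pixmap>();
                                                        }

                                                        class IconCache {
                                                        public:
                                                            // Returns the cached pixmap for this file type, rendering only on a miss.
                                                            std::shared_ptr<Pixmap> get(const std::string& mimeType, int size)
                                                            {
                                                                const std::string key = mimeType + "@" + std::to_string(size);
                                                                auto found = fCache.find(key);
                                                                if (found != fCache.end())
                                                                    return found->second;            // hit: no disk I/O, no rendering
                                                                auto pixmap = render_icon_for_type(mimeType, size);
                                                                fCache.emplace(key, pixmap);         // miss: render once, keep it around
                                                                return pixmap;
                                                            }

                                                        private:
                                                            std::unordered_map<std::string, std::shared_ptr<Pixmap>> fCache;
                                                        };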

                                                  For files with custom icons, almost always the icon is a thumbnail of the contents, which means a pixmap, not vectors. (Even a vector based document wouldn’t use a vector thumbnail, because the thumbnail would be just as complex as the entire document, only shrunk down, unless you did some complex post processing on it to remove details.)

                                                  I do agree a vector format is great for storing the file-type icons, though. With retina displays, macOS icons are now recommended to go up to 512x512px, which is becoming ridiculous. As someone who programmed a Mac SE back in the day, I find the idea of my app’s icon not fitting on a floppy disk appalling.

                                                  1. 5

                                                    Most of the time the icon is based on the file type/extension…

                                                    Yes, and the same is true on Haiku. Applications, which have their own icons, are the primary consumers of the “icons-in-inodes” feature.

                                                  1. 28

                                                    A number of the “wishes” in here were already in the BeOS, and of course now live on in Haiku:

                                                    Configuration files and their various formats exist because the filesystem can’t save a structure. If it could, you could just save a given object or dictionary, with all its keys and values, and you’d be able to use them directly next time you’d needed them. … When a program needs to send data to other program, it doesn’t send a serialized version, messages are not sent as stream of bytes, but as structured messages, natively supported by the OS kernel.

                                                    This mostly lines up with the “Messages and Ports” concepts in BeOS and Haiku.

                                                    Programs as collections of addressable code blocks in a database … Once you publish them, you stop having to communicate using the old stream-based methods (sockets, files, …) - it suffices you to just return structured data.

                                                    Since on Haiku, every window is its own thread, applications communicate even with themselves using messages; and if they allow it, any application can send them messages, and even ask what messages they accept.
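
                                                      For anyone who hasn’t seen the API, a rough sketch of what that looks like (the message code, the field names, and the application signature below are made up for illustration):

                                                          #include <Message.h>
                                                          #include <Messenger.h>

                                                          int main()
                                                          {
                                                              // A structured, typed message: no ad-hoc serialization or byte streams.
                                                              BMessage message('Data');              // arbitrary example message code
                                                              message.AddString("name", "example");
                                                              message.AddInt32("count", 42);

                                                              // Address another running application by its signature (invented here)
                                                              // and send the message; the receiver pulls fields back out by name/type.
                                                              BMessenger messenger("application/x-vnd.Example-App");
                                                              BMessage reply;
                                                              messenger.SendMessage(&message, &reply);
                                                              return 0;
                                                          }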

                                                    A database instead of a filesystem … I’m talking here about a generic structured system of storing data on media that supports data types, atomicity, indexing, transactions, journals and storage of arbitrarily structured data, including large blocks of purely binary data.

                                                    BeOS and Haiku make extensive use of (typed) extended attributes, which are then indexed and can be queried. Haiku’s Email client utilizes this by storing each e-mail as its own file, with attributes like MAIL:from, MAIL:subject, etc. There are also attributes like BE:caret_position, which specifies where the caret last was when you closed a (text) file, so any editor can restore your position, no matter which one you last opened a file in.
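
                                                      As a concrete (if contrived) sketch of the Storage Kit side of this, assuming a stock install where the MAIL:* attributes are indexed; the file path and subject are invented:

                                                          #include <Entry.h>
                                                          #include <Node.h>
                                                          #include <Path.h>
                                                          #include <Query.h>
                                                          #include <String.h>
                                                          #include <Volume.h>
                                                          #include <VolumeRoster.h>

                                                          #include <stdio.h>

                                                          int main()
                                                          {
                                                              // Attach a typed, named attribute to an ordinary file.
                                                              BNode node("/boot/home/mail/in/example-message");
                                                              BString subject("Meeting notes");
                                                              node.WriteAttrString("MAIL:subject", &subject);

                                                              // Query the indexed attribute across the boot volume; this is how the
                                                              // mail and people applications find things without a separate database.
                                                              BVolumeRoster roster;
                                                              BVolume bootVolume;
                                                              roster.GetBootVolume(&bootVolume);

                                                              BQuery query;
                                                              query.SetVolume(&bootVolume);
                                                              query.SetPredicate("MAIL:subject == \"Meeting notes\"");
                                                              query.Fetch();

                                                              BEntry entry;
                                                              while (query.GetNextEntry(&entry) == B_OK) {
                                                                  BPath path;
                                                                  entry.GetPath(&path);
                                                                  printf("%s\n", path.Path());
                                                              }
                                                              return 0;
                                                          }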

                                                    1. 6

                                                      Since on Haiku, every window is its own thread, applications communicate even with themselves using messages; and if they allow it, any application can send them messages, and even ask what messages they accept.

                                                      This is also the case, I believe, in Plan 9

                                                      1. 2

                                                        I’ll have to look into Haiku some day, thanks for the pointers.

                                                      1. 6

                                                        I’ve been following Haiku OS development, and it’s been exciting to see all the progress it’s made. I’m really hoping it becomes a viable alternative to Linux one day.

                                                        1. 2

                                                          For many people, it already is, if you don’t need GPU acceleration…

                                                          1. 4

                                                            Or want to watch Netflix etc.

                                                        1. 3

                                                          I encourage Mr. Rietschin to look at the Haiku project as an example of how much kernel compatibility can be attained without access to the source code.

                                                          1. 17

                                                            His claim is a little more nuanced than “kernel compatibility is impossible”[0] and goes like this:

                                                            1. ReactOS uses the same symbols and sometimes macros in a couple of places (where somebody with access to both cared to look)

                                                            2. The only way to get to such detailed information (that he is aware of, see below) is through the “Microsoft confidential” marked binders he posted a photo of.

                                                            3. Therefore there must have been knowledge leaking into ReactOS that can only have been attained through copyright violating means (or worse)

                                                            4. Since ReactOS, for the most part, aims for a circa Windows 2003 architecture, and since there were leaks of that code base, that must have been the source they tapped.

                                                            What he’s not considering is that it’s kind of a sport for lots of folks to dig up all the details they can get about the internals, and entire books were written detailing the design. Software that hooks into Windows through unstable interfaces was also rather common in the 90s and early 2000s.

                                                            Sources for this stuff were, for example: “checked” builds that Microsoft released to driver developers for a long time (built with debug flags and all assertions in - and there are many assertions in Windows code, and for compatibility reasons they even keep the wrong ones once they’ve made it in[1]); and insufficiently stripped symbol data on their symbols server (windbg et al can download symbols for binaries for easier analysis) that was released every now and then.

                                                            There are hoarders that collect stuff like that. Sometimes they write books about the knowledge they glean from that, sometimes they’re available for answering pointed questions.

                                                            That also explains why ReactOS is remaining broadly in the Win2003 era: people did a lot of digging back then, and a lot of documenting, and it’s a reasonable base for a reimplementation because most of today’s software can still be made to work on “Win2003 + a select few APIs”.

                                                            Since then, Microsoft got in the habit[2] of using their software vendors (who did such time consuming work) as market research, obsoleting their work with something that ships in the next Windows version for free: no use competing with that, which is why the Windows ecosystem became a lot more dull in the last ~15 years.

                                                            [0] Besides: Haiku isn’t kernel driver level compatible, both because it’s not worth keeping that level of compatibility with a kernel that never had many drivers to begin with, and because, while newOS was started by one of BeOS’ kernel developers (who’s now working on Fuchsia’s kernel Zircon), it’s a rather different beast.

                                                            [1] See https://www.coreboot.org/ACPI#Using_checked_builds: When I was debugging ACPI in coreboot for Windows XP compatibility, I ran into blue screens that simply made no sense. The issue was that the assert was inverted, something along the lines of (very freely paraphrased) handleElseOp() { ASSERT(prevOp == IfOp, “Else must only follow If”); … }. It should have been “prevOp != IfOp”. That issue appeared in Windows 2000 as far as I can tell, and XP still kept that assert in (which only had an effect on checked builds), apparently to prevent platforms from using that opcode now that there are (developer-only) Windows versions that fail on it. It usually works out because iasl and similar ACPI compilers do away with the “Else” (presumably for Windows 2000 :-) ), but coreboot has its own ACPI code generator for a few tasks, and we weren’t aware of that constraint.

                                                            [2] they did before, but in my opinion that only grew worse.

                                                            1. 5

                                                              All good points, though they’re not nearly as pithy as mine ;)

                                                              I’ll add to your differences that Haiku has the luxury of dealing with an unmoving target. Nobody is producing new BeOS versions, so nobody can get suspicious that Haiku is targeting compatibility with 2001 software.

                                                              I’ll also add an important point to your criticism of the author: accusations like this were persistent enough that the project performed a complete code audit and added strict rules regarding reverse engineering over a decade ago. That audit did not find any suspicious code, and the new rules make it unlikely any has been added in the meantime.

                                                              1. 3

                                                                [0] Besides: Haiku isn’t kernel driver level compatible, both because it’s not worth keeping that level of compatibility with a kernel that never had many drivers to begin with

                                                                This is incorrect; Haiku was for a long time driver-level compatible. We are much less so now (notably filesystem drivers changed KABI, and audio drivers changed ioctl interface versions, among other things.) But in theory if you have a random special sauce PCI driver from BeOS R5, you can run it against a current Haiku kernel and it will (potentially) still work; and if it doesn’t, making it work is likely just a recompile away.

                                                                and because, while newOS was started by one of BeOS’ kernel developers (who’s now working on Fuchsia’s kernel Zircon), it’s a rather different beast.

                                                                Indeed it is.

                                                            1. 4

                                                              no virtualbox is a handicap

                                                              1. 16

                                                                I use qemu on OpenBSD to run Linux, Windows (admittedly it’s Windows XP :~/), sortix and other virtual machines - I don’t know how it compares to virtualbox but it works well.

                                                                1. 6

                                                                  In my experience (as an OS programmer), VirtualBox is very buggy and unreliable. QEMU/KVM is much better, indeed. (OpenBSD’s native hypervisor will likely eventually be ready for prime time here, if it isn’t already.)

                                                                  1. 3

                                                                vmm(4) is already great at running OpenBSD virtual images - but I’ve not tested it with other OSes yet - it’s on my todo list

                                                                    1. 2

                                                                  It depends a little; if you use Vagrant, for example, then VirtualBox might be a lot handier. In my last job I gave up being the only person with lxc images because everyone else was on mac+virtualbox, so I had to relent.

                                                                      1. 2

                                                                        I have the exact opposite experience. I have been using VBox for many years on Linux and MacOS and it is very reliable.

                                                                        1. 4

                                                                          If your guests are Linux or Windows, it’s well-polished and stable. If your guests are anything else (especially if you’re doing active OS dev on the guest!), I imagine you’ll find holes in short order.

                                                                        2. 2

                                                                  Except OpenBSD doesn’t have KVM; it has its own VMM. You’re stuck with TCG for QEMU.

                                                                      2. 1

                                                                        As others have suggested, vmd is your friend here.

                                                                        While not perfect in terms of speed, I managed to completely get rid of linux @work as a main system.

                                                                        Now I just fire up an ubuntu instance (to be able to install it you need to make the image boot in console mode, nothing too hard) with x2go-server installed. Using x2go-client, I fire up emacs and I’m set.

                                                                        Other (+) points (for me at least):

                                                                        • the ability to just test on the fly an older release. Just a vmctl start <vm-name> away!
                                                                        • as some days I work remotely, there’s no need to ‘pollute’ my base installation with stuff I do not need (php, a web server, the whole npm ecosystem, etc)

                                                                    Of course, all of the above depends on one’s needs. :)

                                                                        Using the aforementioned work flow, I truly do not care if the vm breaks; and if it does, I just copy my backup image and I’m set. :)

                                                                        [edited for grammar]

                                                                      1. 11

                                                                        Working right out of the box: WiFi, Ethernet, video, trackpad, keyboard, USB storage devices, and a Logitech Optical Notebook Mouse Plus.

                                                                        Some of you may not understand how absolutely incredible that is. I ran BeOS as my primary operating system for quite a while and I basically had to build a custom machine to get hardware that worked due to lack of driver support. I remember the early days of Linux and (desktop) BSD and how incredibly meager the hardware support was.

                                                                        The Haiku team has really done just incredible work.

                                                                        1. 3

                                                                      I realize this, and I’m not sure how they managed to do this. As far as I understand, there are a gazillion drivers for each piece of hardware, and hardware vendors mostly provide drivers for Windows (because of market share), but everyone else has to write their own. For Linux, the drivers are maintained in the kernel tree, and make up a significant chunk of the source code. From looking around it looks like they used some BSD drivers, but also wrote their own. I’ve had a lot of driver problems on Linux, so they must have done something very right with Haiku.

                                                                          1. 4

                                                                            From looking around it looks like they used some BSD drivers, but also wrote their own

                                                                            IIRC, they wrote a tool to automatically convert FreeBSD ethernet drivers. It wouldn’t surprise me if they looked to FreeBSD for other drivers, as well.

                                                                            1. 5

                                                                              Nope, just ethernet & WiFi drivers. Everything else is of our own design, though if we get stuck on specs we may look at other OSes’ drivers (FreeBSD included, or even preferred) for reference :)

                                                                        1. 18

                                                                          This is amazing, I love it, and OP if you’re the author, I love you, and thank you for sharing.

                                                                          An OS is just a piece of software. It’s possible for someone with time and motivation and perseverance to build one for themselves, even if they’re not going to be running it during $DAYJOB. Maybe especially if they’re not going to run it during $DAYJOB.

                                                                          You can tell a homebrew networking stack when the features list includes ARP support.

                                                                          We need more people working on homebrew computers and operating systems. Something important in the world ended when the average computer user stopped understanding how it worked. There’s nothing magic, just applied accumulated knowledge and research and a wiki.

                                                                          Just Start. It’s not as hard as you think.

                                                                          You can use C, or C++ or Zig or Nim or D or Pascal or C#!! or Go or whatever.

                                                                          Get a message into the VGA text buffer.
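
                                                                        Something like this is all that first step takes (a freestanding sketch, assuming the classic 80x25 VGA text mode with the buffer at 0xB8000, called from your early kernel entry point):

                                                                            #include <stdint.h>

                                                                            // Each VGA text cell is one character byte plus one attribute byte
                                                                            // (0x0F = white on black). No OS services are needed to write to it.
                                                                            void vga_print(const char* text)
                                                                            {
                                                                                volatile uint16_t* vga = reinterpret_cast<volatile uint16_t*>(0xB8000);
                                                                                for (int i = 0; text[i] != '\0'; ++i)
                                                                                    vga[i] = static_cast<uint16_t>(0x0F00 | static_cast<uint8_t>(text[i]));
                                                                            }

                                                                            extern "C" void kernel_main()
                                                                            {
                                                                                vga_print("Hello from my own kernel!");
                                                                                for (;;) { /* hang */ }
                                                                            }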

                                                                          Get some interrupt handlers going.

                                                                          Just build a thing for yourself.

                                                                          God damned fantastic.

                                                                          1. 4

                                                                            Reminded of Rob Pike’s Systems Software Research is Irrelevant.

                                                                            1. 4

                                                                          I always liked how Pike was very charitable to MS in that article, unlike the Linux crowd that lionized him.

                                                                            2. 4

                                                                              An OS is just a piece of software. It’s possible for someone with time and motivation and perseverance to build one for themselves

                                                                              It is just a piece of software, but it’s not so easy to just “build one for yourself”. If you are doing it to learn OS fundamentals, to get your hands dirty with driver code, or just to relax and toy with systems design? Sure, go for it! It’s an awesome experience.

                                                                              But if you are trying to make something viable for use as a daily driver at all, and you need to be able to use hardware that was created after 1996 and not just run in a VM (like, most USB peripherals, network cards, etc.), it’s a job. SkyOS was the only major effort I know of by essentially one person, and that got abandoned for all the reasons you’d guess. Haiku hasn’t been abandoned, but we have about 15-20 regular contributors working on everything from drivers to ports, and it’s only now we’re approaching general viability.

                                                                              1. 4

                                                                                100% agreed that you shouldn’t expect a one-person project to get you to daily-driver. But I think my point is that you shouldn’t let that stop you - in fact it’s probably better you don’t expect to get to daily driver, because that removes the pressure and lets you focus on building a thing. And building a thing is a human impulse we should celebrate!

                                                                                I have gotten myself lost innumerable times in the depressing space around “nothing I could do by myself matters”, which is both true and totally and completely irrelevant.

                                                                            Contributing to the mission of Haiku (or Redox or whatever) is awesome, and more people should do it! I really, really don’t want you or anyone to take away from my random rant that you should go do useless projects only and never join a team or a project. Honestly, I assume based on your contributions to Haiku that you’re past the kind of ennui that I’m aiming at, and that’s awesome.

                                                                                But the fact that Haiku isn’t going to switch to your first-pass bootloader is okay! Don’t not build that bootloader, build it anyway because bootloaders are cool!

                                                                                1. 4

                                                                                  Sure, that’s a fine goal. It just seems people are getting overly excited about Serenity in a “woah, I’d love to use this as my primary OS, can’t wait for that” kind of way, not a “this is a neat project to hack on in the evenings” kind of way.

                                                                                  1. 3

                                                                                    Oh dude, totally, I don’t really understand down-thread at all.

                                                                              2. 2

                                                                                Not the author, I just thought it was amazing!

                                                                              1. 10

                                                                                I love this.

                                                                                Linux is my primary operating system, and has been for many years. Linux is great, but always, in the back of my mind, I remember that there are quite literally thousands of features that I am not using and will never use just sitting there. To many (most?) people the fact that they’re there but unused isn’t a problem, but it’s a serious source of irritation to me.

                                                                              I don’t need support for 80 different filesystems, 6 different application profiling mechanisms, 74 different syscall sandboxing mechanisms, 3 different hypervisor mechanisms….hell, I don’t even need multiuser support for my laptop where it’s only ever just me (though of course I recognize the importance of privilege separation, so there should still be something like OpenBSD’s pledge).

                                                                              All of these things can be disabled at compile time in modern systems, but I know they’re still there and it irks me. Call me a minimalist.

                                                                                1. 10

                                                                                  There’s always Haiku, if you want something simple and fast.

                                                                                  1. 4

                                                                                    Excellent point, though Haiku has added some…big…things: the package filesystem (which is amazing, but a lot of kernel code), the systemd-style service manager, rumblings of multiuser support, etc.

                                                                                    I’m not complaining (and who am I to judge?), just pointing out that there is a lot of stuff that wasn’t present in the original BeOS there. :)

                                                                                    If Haiku supported just a couple of things more that I need (fast VMs, multi-monitor support, Google Meet support), it would definitely be my daily driver. It’s the best of what’s out there.

                                                                                    1. 11

                                                                                      Each person has a few must-haves, not all of them overlap and that’s how you get the bloat you see in Linux ;-p

                                                                                      1. 7

                                                                                        the package filesystem (which is amazing, but a lot of kernel code)

                                                                                        Huh? It’s one more filesystem driver, that’s all. Linux has how many filesystem drivers in-tree now…?

                                                                                        the systemd-style service manager

                                                                                        It’s really not systemd-style, it’s Haiku-style, and service init/daemon management is all it does. You can still just run random shell scripts on startup the way you could 10 years ago via the bootscripts…

                                                                                        rumblings of multiuser support

                                                                                        We already have chown/chmod/su/etc., just not the GUI functionality interfacing with them, that’s all.

                                                                                        It turns out that most people actually need these things to use an OS as a daily driver. In fact you still list some things you need. OSes can’t stay stuck in the 90s :-p

                                                                                        1. 3

                                                                                          Wow, you’re a celebrity to me. Thank you for all your work on Haiku. I used BeOS for many years as my primary OS (even after Be was Was) and I would say it’s my favorite of all time, but I’m too fond of the Amiga to actually bring myself to say it.

                                                                                          Huh? It’s one more filesystem driver, that’s all. Linux has how many filesystem drivers in-tree now…?

                                                                                          As I said, it’s amazing. It’s the most revolutionary thing to happen to package management in years if not decades. It’s the right way to do things. I absolutely have no justification for my opinion on it other than that “it wasn’t done that way in 1996.” It’s purely nostalgia and an irrational minimalism on my part. :)

                                                                                          OSes can’t stay stuck in the 90s

                                                                                          I know, I know, but a big part of me wishes they could be.

                                                                                    2. 4

                                                                                      I’m curious, do you feel that the fact that this bothers you is rational? Or do you think it’s kind of an OCD-like thing? For example, it used to bother me to step on cracks in the sidewalk or even on tile floors, particularly when I was a child. Nowadays I occasionally “relapse” when I’m lost in thought, though it doesn’t really bother me any more. I’ve always known that this isn’t rational, but I still do it.

                                                                                      1. 11

                                                                                        It irks my sense of elegance, I suppose. It’s not so bothersome that it keeps me from running Linux, of course, but it’s bad enough that I often daydream about writing an OS from scratch, like TFA.

                                                                                        And I do software engineering and information security both as a hobby and for a living, so I suppose it might affect me more than someone else. So it’s not rational, but it’s not necessarily as irrational as disliking something that’s unrelated to what I do.

                                                                                        1. 5

                                                                                          I appreciate that @glesica asked and you answered. Thank you both for that. I have friends who have this exact same sensibility and I have never truly understood it.

                                                                                          How can we have general purpose operating systems and not make them usable for … general purposes? :)

                                                                                          1. 3

                                                                                            I don’t have quite the same appreciation or nostalgia for vintage Unix UIs, but I do feel the same way about unused / unuseful-to-me features. And I think it’s reasonable enough to talk about it as pure aesthetic sensibility… but consider both how “legacy” and “backwards compatible” and “general purpose” features accrete into an attack surface, and also how excess abstraction layers and other forms of indirection accrete into performance-sapping bloat. Purely rational concerns from both the security and engineering perspectives are also relevant.

                                                                                          2. 3

                                                                                            I think you’re overstating how irrational your gripes with existing OS’s can be. You might have seen some better designs that let you know current stacks aren’t the upper bound or are actually worsening in various ways. I’d say it’s pretty objective to want our systems to at least achieve the best properties of 1960’s-1980’s designs. Especially given we have the hardware to try anything they thought was prohibitive due to performance or hardware limitations.

                                                                                            On security, the Burroughs and IBM designs still reign with hardware-enforced protections. On concurrency, Hansen and BeOS with DragonflyBSD making nice strides. On availability, VMS and NonStop clusters. On virtualization, separation kernels or Nova-like designs. On productivity, modern systems can’t match all arguments in Genera brochure. The ideal system would be a mix of stuff like that which balanced those attributes.

                                                                                            1. 1

                                                                                              Sounds like your weekend project is running Linux from Scratch. :) It’s not as hard as it sounds.

                                                                                            2. 11

                                                                                              The fact that our PCs are slow and struggle despite having orders of magnitude more performance than previous generations does bother me.

                                                                                              1. 4

                                                                                                “Software is getting slower faster than hardware is getting faster.” - Niklaus Wirth

                                                                                          1. 3

                                                                                            For those interested, here’s the mitigation in NetBSD, which seems to be the simplest one.

                                                                                        Essentially: Intel released a microcode update which makes the verw instruction magically flush MDS-affected buffers. On vulnerable CPUs, this instruction now needs to be run on kernel exit; the microcode update won’t do it automatically on sysexit, unfortunately.
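
                                                                                        Roughly, the core of such a mitigation looks like this (an illustrative sketch, not NetBSD’s actual code; the selector value and the exact place this gets called differ per kernel):

                                                                                            #include <stdint.h>

                                                                                            // With the MDS microcode update applied, the memory-operand form of VERW
                                                                                            // also overwrites the affected CPU buffers as a side effect. A kernel calls
                                                                                            // something like this on its return-to-userspace path for vulnerable CPUs.
                                                                                            static inline void mds_flush_cpu_buffers(void)
                                                                                            {
                                                                                                // Kernels normally pass their own kernel data segment selector here;
                                                                                                // 0x10 is only a placeholder for illustration.
                                                                                                static const uint16_t selector = 0x10;
                                                                                                __asm__ __volatile__("verw %0" : : "m"(selector) : "cc");
                                                                                            }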

                                                                                            1. 2

                                                                                          I wish we had more non-unix operating systems to play with… Off the top of my head, I remember Windows, FreeDOS, Plan9, Haiku, AROS, MorphOS, AmigaOS, ReactOS. Does anyone have other links?

                                                                                              1. 1

                                                                                            The thing is that most of these (I can’t say for sure, since I haven’t used all of them) are either directly or at least to a significant degree inspired by Unix. That’s why I focused on Unix: even though it’s not that commonly used itself (Linux, *BSD, MacOS are more popular), its ideas continue to be found everywhere, and are assumed to be the default, proper or real way, if there even is any alternative.

                                                                                                1. 1

                                                                                              Haiku is technically a UNIX-like under the hood, yes; but if you used the native APIs for developing GUIs, you wouldn’t really know it for the most part. For some of the items you listed in the post where the functionally “nicer” solutions are technically more difficult, Haiku (or BeOS before it) just solved the technical problem and went with the nicer solution.

                                                                                                  For instance, the native email client is “maildir based”, relying on the underlying filesystem’s extended attributes, structured data can be passed between applications arbitrarily via BMessages, and so on.

                                                                                                  So while Haiku certainly doesn’t escape the UNIX philosophy (we have mostly full POSIX compatibility, bash is the default shell, etc.) we certainly are not like Linux in terms of “advanced users change things on the command line.” But you can if you still want to. :)

                                                                                              1. 5

                                                                                            I’ve been thinking about these issues in depth too. Like the problem you mention of passing a float between programs: what if we could pass more than text - arbitrary structured, typed data? This is not science-fiction, it’s really simple to implement. We know things about programming that weren’t really known or understood back in the early UNIX times. The difficulty is that we can’t just implement it in the shell; it also needs to be common to the userspace binaries we work with all the time (like all the GNU coreutils). At the very least, why couldn’t we experiment with UNIX shells and applications that interchange data using tab-separated values?

                                                                                            On hypertext, there’s been so much excellent theory worked out - but other than the WWW, very little of it rolled out. It’s worth mentioning PurpleNumbers on the topic of being able to link to specific parts of a document: https://communitywiki.org/wiki/PurpleNumbers - we can also ask about annotations and snapshotting segments of documents to be included in others.

                                                                                                1. 10

                                                                                                  Had me at “arbitrary structured, typed data.” - immediately thought of PowerShell, which - while just one tool of many - meets your requirement of “be(ing) common to the user space binaries we work with…”

                                                                                                  1. 6

                                                                                                    The structured-ness of PowerShell is nice in theory, but, like every other Microsoft product, the rest of PS is totally botched in implementation. Between long, awkward-to-type command names, lack of laziness for piped data, generally poor COM support in software, and pervasive slowness, it’s easier and faster to write a shell script with grep, awk, and sed.

                                                                                                    The unix shell is arcane, but I’ve yet to find anything that’s as good for quickly getting to the information I need. SQL is probably the closest, but requires data to be structured from the start.

                                                                                                    1. 5

                                                                                                      PowerShell Core is even available for Linux, complete with installation instructions. I haven’t yet taken time to get my head around PowerShell, even on Windows, but maybe I should.

                                                                                                      1. 4

                                                                                                        It’s not obvious to me how to do this completely properly. OO is the wrong approach IMHO, as you don’t want to be passing behaviour downstream. Something like algebraic data types might be promising, but then you have to send a description of the data down the pipe before any data itself?

                                                                                                        Erlang’s a bit on my mind today (RIP Joe), but maybe something like its pattern-matching system would work well here?

                                                                                                        1. 3

                                                                                                          As a paradigm, object-oriented may not be the best approach, especially when attempting to solve the problem mentioned in the link.

                                                                                                          PowerShell and ‘OO’ also don’t have to go hand-in-hand.

                                                                                                        2. 2

                                                                                                          PowerShell is an interesting approach. Unfortunately it has one giant mistake that makes me hate using it — the choice of Verb-Noun naming rather than Noun-Verb means I have to use Google to figure out the command I need rather than just hitting tab. (Because I nearly always know the noun already — what I need is the verb.) Unless I’m missing something, which would be great…

                                                                                                        3. 8

                                                                                                          what if we could pass more than text - but arbitrary structured, typed data. This is not science-fiction, it’s really simple to implement. We know things about programming that weren’t really known or understood back in the early UNIX times.

                                                                                                          Wasn’t that done back in UNIX times on Lisp machines? And then, given it’s dynamic, maybe a bit later on Smalltalk systems?

                                                                                                  If not arbitrary, the Flex Machine seems like it fits. That was late 1970’s to 1980’s. Fewer people knew about it, though. There were also people like Hansen doing OSes in Pascal. A brief skim of the Solo paper led me to this statement: “Pascal programs can call one another recursively and pass arbitrary parameters among themselves… Solo is the first major example of a hierarchical concurrent program implemented in terms of abstract data types (classes, monitors and processes) with compile-time control of most access rights.”

                                                                                                  Clearly, there were already ideas there for people to build on for doing something other than text. They just couldn’t be or weren’t applied. Then, the UNIX authors and others continued building on their ideas with minimal adoption of alternatives. Eventually, the Linux ecosystem started adopting pieces of many of these ideas, kind of bolted into and on top of the existing system. Although it’s a legacy system, a clean-slate project could certainly do things differently. We’re seeing that start to happen with virtual machines and unikernels that reuse hosts’ drivers.

                                                                                                          1. 3

                                                                                                            Shells using some kind of messagepack-style interface could be interesting

                                                                                                            1. 2

                                                                                                              what if we could pass more than text - but arbitrary structured, typed data. This is not science-fiction, it’s really simple to implement

                                                                                                              So, a BMessage then? :)

                                                                                                            1. 6

                                                                                                              There is a comparison to Plan9 and Hurd, but not to GenodeOS (which superficially seems more similar).

                                                                                                    I am not sure if Fuchsia brings anything actually good compared to Genode (and it would be interesting to see); of course, the adoption question is drivers, and Fuchsia will get adoption if Google forces manufacturers to support it, this time without any problems around closed-source drivers.

                                                                                                              1. 8

                                                                                                      I’ve heard there’s more BeOS influence, due to the Be alumni working on it.

                                                                                                                1. 6

                                                                                                                  And we don’t get a BeOS comparison, either.

                                                                                                                  A bit like explaining Git while being careful to never compare it to preexisting DVCSes, only to Monotone — on the other hand, that approach usually works…

                                                                                                        (And I guess running Android apps on Fuchsia also means that security changes will matter more in terms of protecting the manufacturer from consumers actually owning the device, than in providing consumers with security, as most of the user-facing security problems are apps being happy to leak their entire state — although indeed it might also make it simpler for a custom build to provide fake sensitive data)

                                                                                                                  1. 1

                                                                                                                    Well, with no global file system, it should be easier for apps to have their own private storage to protect the customer from greedy data scouting.

                                                                                                                    1. 2

                                                                                                                      Hard to say how many vulnerabilities are exploited as single-app takeovers (those will probably remain possible in Fuchsia).

                                                                                                                      On the other hand, cross-application interoperation becomes harder and users lose the ability to have file managers…

                                                                                                                      (And some kind of cross-app message-passing will probably formally exist — inherited from intents — but will continue to require too much opt-in from apps to be widely useful — like OpenIntents that apparently didn’t even take off among apps explicitly wishing to ship in F-Droid)

                                                                                                                    2. 1

                                                                                                                      The speaker isn’t actually working on the OS, so perhaps wasn’t aware that those comparisons could be made.

                                                                                                                    3. 2

                                                                                                                      If Fuchsia’s internals are remotely comparable to Be’s KernelKit then it’s a great architecture. I wrote an embedded kernel in grad school using Be’s documented Kernel API, and it’s extremely well designed. The VFS is a piece of art*; I still have fond memories of implementing vfs_walk(). It’s a shame BeOS’ architecture is not better studied.

                                                                                                                      *Not technically part of the kernel, and well detailed in Practical Filesystem Design by D. Giampaolo.

                                                                                                                      1. 4

                                                                                                                        It’s not really like BeOS, no; BeOS had a monolithic kernel, and Fuchsia is a microkernel. There are some vague similarities, but not anything too major.

                                                                                                                        On the other hand, Haiku is a complete re-implementation of BeOS including the kernel, and IMO our APIs are even more beautiful than Be’s … but I am biased, of course. :)

                                                                                                                    1. 6

                                                                                                                      I always find it annoying when existing concepts (pipes/fifos) are renamed (channels).

                                                                                                                      Are Cox, Pike et al involved in the project at all?

                                                                                                                      1. 9

                                                                                                                        I don’t know all the details about channels, but it’s clear that they’re not the same as pipes/fifos. For one thing, it seems like they carry discrete messages and not a stream.

                                                                                                                        1. 0

                                                                                                                          That just means they are slightly less flexible than pipes/fifos, and I’m not sure why. Isn’t sending data “as it’s received” over a “channel” still a valid use case?

                                                                                                                          So once again, the wheel is reinvented in a worse way than before with no reason for doing so…

                                                                                                                          1. 9

                                                                                                                            I’m not sure why.

                                                                                                                            Exactly. I don’t think you should rush to judgment on this one. I don’t know the answer but a good way to find out would be to ask the developers or use the source.

                                                                                                                            no reason for doing so…

                                                                                                                            Do you have to trap into the kernel to send data on a channel? If not, it could be a lot more efficient than pipes are.

                                                                                                                            1. 0

                                                                                                                              OK, so then the analogy is a UNIX domain socket, not a pipe/FIFO. It still does not make a ton of sense that you can’t send stream data over them…
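
                                                                                                                              (For what it’s worth, POSIX already has a closer analogue than a stream-oriented UNIX domain socket: a SOCK_SEQPACKET socketpair delivers discrete, bounded messages rather than a byte stream. The sketch below is just that analogy, not Fuchsia code; names and sizes are arbitrary.)

```cpp
// Sketch: discrete-message local IPC using a SOCK_SEQPACKET socketpair,
// as a rough POSIX-side analogue to a message-oriented channel.
#include <sys/socket.h>
#include <unistd.h>
#include <cstdio>

int main()
{
	int fds[2];
	// AF_UNIX + SOCK_SEQPACKET: connection-oriented, but preserves
	// message boundaries instead of presenting a byte stream.
	if (socketpair(AF_UNIX, SOCK_SEQPACKET, 0, fds) != 0) {
		perror("socketpair");
		return 1;
	}

	const char msg[] = "one self-contained message";
	send(fds[0], msg, sizeof(msg), 0);              // sent as a single record

	char buf[256];
	ssize_t n = recv(fds[1], buf, sizeof(buf), 0);  // returns exactly one record
	if (n > 0)
		printf("got %zd bytes: %s\n", n, buf);

	close(fds[0]);
	close(fds[1]);
	return 0;
}
```

                                                                                                                              (The big difference is that a channel also carries handles along with the bytes, which a UNIX socket can only approximate with SCM_RIGHTS ancillary data.)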

                                                                                                                              1. 2

                                                                                                                                I just RTFM’d a bit on channels, and they seem to have very specific semantics which distinguish them from other IPC.

                                                                                                                                I think it makes sense to have the features that channels do – they help satisfy the microkernel design and the partitioning/encapsulation of the different OS functions while hopefully not introducing significant overhead beyond what a monolithic kernel would have.

                                                                                                                            2. 1

                                                                                                                              Exactly. I don’t think you should rush to judgment on this one. I don’t know the answer but a good way to find out would be to ask the developers or use the source.

                                                                                                                              Why worse? If a message is a single thing, then you don’t need to worry about PIPE_BUF size when sending. Could you not just send a stream of 4096-byte arrays to get a pipe?
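
                                                                                                                              (Roughly, yes; you could layer a stream on top by chunking it into bounded messages. Below is a sketch of that idea against the Zircon channel calls as I recall them; treat the exact names, signatures, and limits as assumptions to verify against the Fuchsia documentation.)

```cpp
// Sketch: layering a byte stream over a message-based Zircon channel by
// chunking it into bounded writes. Syscall signatures are from memory;
// verify against the current Fuchsia documentation before relying on this.
#include <zircon/syscalls.h>
#include <zircon/types.h>

#include <cstddef>
#include <cstdint>

constexpr uint32_t kChunk = 4096;   // arbitrary chunk size, akin to PIPE_BUF

// Send an arbitrary-length buffer as a sequence of discrete messages.
zx_status_t stream_write(zx_handle_t channel, const uint8_t* data, size_t len)
{
	while (len > 0) {
		uint32_t n = len < kChunk ? static_cast<uint32_t>(len) : kChunk;
		zx_status_t status = zx_channel_write(channel, 0, data, n,
			/*handles=*/nullptr, /*num_handles=*/0);
		if (status != ZX_OK)
			return status;
		data += n;
		len -= n;
	}
	return ZX_OK;
}

// Receive a single message; a stream reader would call this in a loop
// and concatenate the results.
zx_status_t stream_read_one(zx_handle_t channel, uint8_t* buf,
	uint32_t buf_len, uint32_t* actual_bytes)
{
	return zx_channel_read(channel, 0, buf, /*handles=*/nullptr,
		buf_len, /*num_handles=*/0, actual_bytes,
		/*actual_handles=*/nullptr);
}
```

                                                                                                                              (I believe channels also cap the size and number of queued messages, so a real stream layer would need flow control on top; that may be part of why they are not simply presented as pipes.)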

                                                                                                                        1. 10

                                                                                                                          Reading this, I’m realising that crummy Linux trackpads are a big part of what drove me to tiling WMs, many years ago.

                                                                                                                          (Off topic, but) I wonder if a similar effort for Linux desktop responsiveness would be popular. Heavy background system load (I/O or CPU) can still drive my i7-6500-powered X11 desktop to multi-second latency for mouse clicks and keypresses. I have tried the -ck “desktop latency” kernels and am currently using -zen kernels; they’re a little better, but it still happens to me most weeks.

                                                                                                                          1. 5

                                                                                                                            Yep, it’s a problem. When I/O load starts maxing out, my X11 just completely locks up also (AMD Phenom II here, so slightly older but still beefy enough by modern standards.)

                                                                                                                            Interestingly, as immature as Haiku is in certain respects, I’ve never had this kind of lockup on it. But its kernel schedulers were specifically designed for GUI use cases, so that is probably a large part of it. I don’t think Linux will start really prioritizing that anytime soon.

                                                                                                                            1. 4

                                                                                                                              I definitely have the same issues, and have since I started using Linux. It feels like it’s only gotten worse, to the point that, on lower-end systems, it can take 30 minutes to get to a VTY to kill the program causing the lag.

                                                                                                                              1. 3

                                                                                                                                Ouch! I’ve been there where it takes maybe 20 seconds to get to the VTY, but not 30 minutes.

                                                                                                                              2. 3

                                                                                                                                Heavy background system load (I/O or CPU) is still able to drive my i7-6500 driven X11 interface to multi-second-latency for mouse clicks and keypresses.

                                                                                                                                Wow. I don’t think I’ve ever experienced this. I use a stock kernel. Some thoughts/questions:

                                                                                                                                1. Are you sure you aren’t running out of RAM? (Or swapping, if that’s even a thing any more.)
                                                                                                                                2. What window manager and/or desktop environment are you using?
                                                                                                                                3. Does the latency occur in all applications or just in specific ones?
                                                                                                                                1. 2

                                                                                                                                  I was half hoping someone else might have some clues for this. :)

                                                                                                                                  1. Fairly certain. I have 16GB of RAM and I’ve experimented with and without a 4GB swap partition. Have checked free -m during/after such events and usually more than half is free.
                                                                                                                                  2. i3 window manager, no desktop environment (I run xsettingsd). Have wondered if maybe this is somehow too minimal and I’m missing something that would make it better…
                                                                                                                                  3. Everything; it’s worst in complex apps like browsers, but even terminals/emacs get laggy, and so does pressing the i3 keys to switch workspaces. I have wondered if it may be related to the Intel embedded graphics (HD520 + modesetting driver).
                                                                                                                                  1. 5

                                                                                                                                    I’m stumped. I run a similarly stripped-down environment (Wingo as my WM, no DE, but it’s a “classic” non-compositing WM just like i3) across many machines. They run the gamut from i3 to i7 Intel CPUs, with between 8GB and 64GB of RAM. Some of them use an AMD graphics card to drive three monitors and others just use the Intel embedded graphics to drive one monitor. None of them experience lag like what you’re describing. (When I used to use Chrome, it could be laggy at times, but that was specific to Chrome.)

                                                                                                                                    The only other thing that might be different between our setups is that I run a compositor (compton) on top of my WM, mostly to smooth things out and support transparent windows. I don’t quite understand how this could eliminate the lag you’re seeing, but it might be worth a try?

                                                                                                                                    The only other lag I can think of is that sometimes my WM lags a tiny but perceptible amount when switching workspaces where one of the workspaces has a lot of windows on it.

                                                                                                                                    What terminal emulator do you use? I use Alacritty now (with tmux), but I used to use Konsole from KDE, and I don’t really notice any lag difference (that is, neither lags for me).

                                                                                                                                    Lag sucks though. Bummer. Wish I had better ideas for you.

                                                                                                                                    1. 1

                                                                                                                                      Thanks. I will try spinning up compton, just in case.

                                                                                                                                      EDIT: Just started using urxvt recently, was using sakura. Haven’t noticed any difference yet.

                                                                                                                                2. 2

                                                                                                                                  I’ve often wondered whether there would be any value in constructing a desktop operating system with hard guarantees on input/feedback latency, though for a while I assumed we may have just covered up the problem with the inevitable march forward in hardware performance.

                                                                                                                                1. 4

                                                                                                                                  Also off the top of my head:

                                                                                                                                  • OpenGL or any other 3D graphics API – I meant to learn it some years ago (and bought a few books, even) but just never had the time, or the need, to do so. I know more about how DRM/Mesa implement OpenGL than about how to use OpenGL at this point…
                                                                                                                                  • Objective-C – I know some people swear by it even outside of Apple platforms; but I just don’t get it. The syntax looks impenetrable to me.
                                                                                                                                  • Assembly of any kind – I know enough about it to be able to make educated guesses about matching generated assembly to original source code (from crash dumps without debug info) but that’s it.
                                                                                                                                  • Hardware specifications – I’m not the biggest fan of reading RFCs, but I can do it; hardware specs, on the other hand, I’m terrible at.
                                                                                                                                  1. 1

                                                                                                                                    Happy to help with grokking ObjC if I can: a handy book is the Big Nerd Ranch guide which spends 11 chapters on C stuff before introducing Objects, so it can be read whether you have C experience or not. [mild disclaimer: I have worked, but no longer work, for BNR]

                                                                                                                                  1. 3

                                                                                                                                    I am powerfully tempted to repartition one of my drives and give this a shot.

                                                                                                                                    1. 5

                                                                                                                                      Do it! :)

                                                                                                                                      1. 1

                                                                                                                                        I think you should