1. 15

    Trying to get Linux not to suck on the desktop is a losing proposition.

    To put it into context: when enough of the hardware works, Haiku offers a better free desktop experience today, even though its manpower isn’t remotely comparable. A coherent UI, an easy-to-understand desktop that behaves as expected, responsiveness, no stalls under load. BeOS achieved the same, but as a proprietary OS, in the mid-nineties.

    Linux tries to do everything at once, with a design (UNIX-like monolith) that favors server usage with disregard for latency and is thus ill-suited for the desktop. On top of that, its primarily corporate funding effectively steers it in directions that have nothing to do with the desktop and are to its detriment. It is hopeless.

    If we could start from a clean slate today, a system design fit for a desktop would have clean APIs and well-understood relationships between components, and would be engineered for low, bounded response times (thus an RTOS); throughput is secondary, particularly when the impact on it is low to negligible. As desktop computers are networked these days, security (confidentiality, integrity and availability) is a requirement.

    If you look at the state of the art, you’ll find that I am describing Genode, paired with seL4. If a thousandth of the effort put into the Linux desktop (which does not and cannot satisfy the requirements) had been redirected to these projects, we would have had an excellent Open Source desktop OS years ago. One that no current proprietary solution would be able to come near.

    A proof of concept is available through Sculpt (new release expected within days), demonstrated in this FOSDEM talk. Another FOSDEM talk covers the current state of seL4.

    Full disclosure: Currently using Linux (with all its faults) as main desktop OS. I have done so for 20 years. AmigaOS before that.

    1. 19

      Haiku doesn’t support any hardware or software and is missing all the features that introduce the complexity that, in turn, introduces the bugs you’re complaining about anyway.

      Linux tries to do everything at once, with a design (UNIX-like monolith) that favors server usage with disregard for latency and is thus ill-suited for the desktop. On top of that, its primarily corporate funding effectively steers it in directions that have nothing to do with the desktop and are to its detriment. It is hopeless.

      Monolithic vs. microkernel has no actual impact on the ability of a desktop operating system to function properly, and none of the problems with ‘Linux on the desktop’ are best solved with a microkernel approach. If anything it would cause even more fragmentation, because every distro could fragment on implementations of core services, instead of that fragmentation only being possible within a single Linux repository.

      If anything, a good desktop experience seems to require more monolithic design: the entire desktop environment and operating system developed in a single cohesive project.

      This is why ‘Linux on the desktop’ is a stupid goal. Ubuntu on the desktop should be the goal, or Red Hat on the desktop, or whatever. Pick a distro and make it the Linux desktop distro that strives above all else to be the best desktop distro. There you can have your cohesiveness and your anti-fragmentation decision making (project leader decides this is the only supported cron, the only supported init, only supported WM, etc.).

      If we could start from a clean slate today, a system design fit for a desktop would have clean APIs and well-understood relationships between components, and would be engineered for low, bounded response times (thus an RTOS); throughput is secondary, particularly when the impact on it is low to negligible. As desktop computers are networked these days, security (confidentiality, integrity and availability) is a requirement.

      Literally everything would be better if we could design it again from the ground up with the knowledge we have now. The problem is that we have this stuff called time and it’s a big constraint on development, so we like to be able to reuse existing work in the form of existing software. If you want to write an operating system from scratch and then rewrite all the useful software that works on top of it that’s fine but it’s way more work than I’m interested in doing, personally. Because ‘a good desktop operating system’ requires not just good fundamentals but a wide variety of usable software for all the tasks people want to be able to do.

      You can write compatibility layers for all the other operating systems if you want. Just know that the osdev.org forums are littered with example after example after example of half-finished operating system projects that promised to have compatibility layers for Windows, Linux and macOS software.

      If a thousandth of the effort put into the Linux desktop (which does not and cannot satisfy the requirements) had been redirected to these projects, we would have had an excellent Open Source desktop OS years ago.

      The Linux desktop is already good. So clearly it is not the case that it cannot satisfy the requirements.

      1. 5

        The Linux desktop is already good.

        This is super subjective. I for one don’t consider Linux particularly good – I love the command line, and I’ve tried many times to make Linux my primary work desktop. However I need excellent touchpad input with zero latency and precise acceleration, HiDPI support that Just Works with all apps (no pixel zooming), and 60 fps scrolling without stuttering and just the right amount of inertia.

        To me, both Linux and Android fail miserably on things like 60 fps scrolling, and most people don’t even notice that it stutters. I know those are some very subjective criteria that many people don’t have. I’m excited about projects like Wayland, because maybe there is light at the end of the tunnel?

        1. 6

          However I need excellent touchpad input with zero latency and precise acceleration

          Never had a problem with this, personally, and I certainly dispute that anyone needs it. People have productively used computers for decades without it and it isn’t a desktop issue anyway. It’s a laptop issue. I’m sure Linux still has a long way to go on the laptop but shifting the goalposts isn’t helping anyone. What it means for Linux to be viable on the desktop seems to be changing every time it gets close. Now it apparently includes laptop-specific functionality?

          HiDPI support that Just Works with all apps (no pixel zooming)

          And I’d like a supermodel girlfriend. HiDPI support is fucked on every platform, because it fundamentally requires support from the programmes that are running. It’s far better on Linux than on Windows. Windows can’t even scale its file browser properly, half the time the icons are the wrong size. It’s bizarre. It’s like they didn’t test anything before they claimed to ‘support’ it.

          60 fps scrolling without stuttering and just the right amount of inertia.

          “Just the right amount of inertia” is subjective. I hate scrolling on macOS or iPhones, the inertia is far too high. I’m sure others feel the opposite and think it’s too low. Yet if it’s made configurable I’m sure people will complain about those damn software developers wanting to make everything configurable when things should ‘Just Work’. You can never win.

          Also, a lot of monitors these days have really atrocious ghosting. Smooth scrolling on those monitors makes me feel sick. So please at least make it easy to turn it off and keep it functional if I do.

          I get that none of what I said changes that you want those features and so do others, and they’ll never be satisfied until those features are there. I get it. Those features are requirements for you. They’re not inherently bad. But it’s worth bearing in mind that nobody is approaching this with the goal of fucking up the Linux desktop. Nobody wants you to have a bad experience. Things are the way they are because solving these problems is really hard without the resources to just rewrite everything from scratch. Wayland is rewriting a huge chunk of the stack from scratch, and that’s having some positive impact, but for me it’s still in a state where it invalidates far too much of the stuff that was already working fine, so I don’t want to use it any more. I’ve gone back to X.

          1. 7

            HiDPI support is fucked on every platform

            Works great literally 100% of the time on my Mac. I can even mix external monitor types and everything “just works”.

            1. 3

              Exactly. It’s 100% fair to keep moving the goalposts, because the rest of the industry isn’t taking a break waiting for Linux on the desktop to catch up to the state of the art.

              1. 1

                I use mixed monitor densities with i3 as my daily driver and everything works perfectly 100% of the time. Working software for Linux exists.

                For reasons which escape me, the biggest distributions have not fixed their defaults to make it work.

              2. 3

                Never had a problem with this, personally, and I certainly dispute that anyone needs it.

                Of course you don’t need it for things like writing documents. I’ve worked over RDC connections with about 10 seconds of latency (I counted). I also raged the whole time, and my productivity tanked.

                The point is that low input latency is a competitive advantage, and it’s known to improve ergonomics and productivity, even at 100ms levels, though it’s more something you feel than notice at that level. In comparisons of Android and iOS, what do they talk about? Input latency. In comparisons between Wayland and X11, it’s all about getting latency down (and avoiding graphics glitches, and effective sandboxing, and reducing the amount of code running as root; there are a lot of ways to improve on Xorg).

                Good input latency is also necessary to play action games or to operate delicate remote controls like drones, of course.

                1.  

                  Of course you don’t need it for things like writing documents. I’ve worked over RDC connections with about 10 seconds of latency (I counted). I also raged the whole time, and my productivity tanked.

                  Personally I’ve had far worse experiences with remote desktop on Windows than on Linux. For example, remoting into another Windows computer logs you out on that computer, or at least locks the screen, on Windows 10. Worse latency, and relevant to this discussion too: terrible interaction with HiDPI (things scaled completely crazily when remoting into something with a different scaling factor).

                  The point is that low input latency is a competitive advantage, and it’s known to improve ergonomics and productivity, even at 100ms levels, though it’s more something you feel than notice at that level.

                  Touchpad latency has nothing to do with desktop Linux.

                  Good input latency is also necessary to play action games or to operate delicate remote controls like drones, of course.

                  Input latency on Linux is not an area of concern, it’s working perfectly fine. I have better performance in games (lower input latency, lower network latency, higher framerates, better support for different refresh rates on different monitors) on Linux than on Windows.

                  1.  

                    Personally I’ve had far worse experiences with remote desktop on Windows than on Linux.

                    I know that Linux’s input layer isn’t 10sec-latency bad. That horrible situation was entirely the fault of the overloaded corporate VPN. I brought it up as a reductio ad absurdum to anyone claiming not to care about input latency; just because you are capable of getting work done does not make a scenario acceptable.

                    Input latency on Linux is not an area of concern, it’s working perfectly fine. I have better performance in games (lower input latency, lower network latency, higher framerates, better support for different refresh rates on different monitors) on Linux than on Windows.

                    That’s why I didn’t compare it to Windows NT. I compared it to iOS.

                    1.  

                      I brought it up as a reductio ad absurdum to anyone claiming not to care about input latency; just because you are capable of getting work done does not make a scenario acceptable.

                      Just because it’s not perfect or optimal doesn’t make it unacceptable, and it’s still not relevant to our discussion which is about desktop Linux. You seem happy to introduce other unrelated devices and operating systems and platforms when they help you push your view but then as soon as I respond to those points you retreat to a different position.

                      That’s why I didn’t compare it to Windows NT. I compared it to iOS.

                      No, you compared Android (not desktop Linux) to iOS. I wasn’t responding to that. I was discussing input latency in the context of the discussion we’re actually having in this thread: desktop Linux. The alternative to desktop Linux (given you specifically mentioned ‘playing action games’) is clearly Windows and not iOS.

                2. 3

                  Windows can’t even scale its file browser properly, half the time the icons are the wrong size.

                  What version of Windows are you discussing here? At least for me on Windows 10, I haven’t noticed any problems with HiDPI in Explorer. And the fact still remains that when using 2 monitors with different DPIs, Linux handles this significantly worse than Windows does.

                  1.  

                    On Windows 10 at my last job I continually had errors with Windows Explorer not scaling its own icons correctly. This was with two screens with different DPIs.

                    In contrast I’ve never had any issues with this on Linux and in fact with sway I can even have fractional scaling so that my older monitors can pretend to have the same DPI as my main monitor if I want to.

                3. 2

                  Well, “with all apps” is a ridiculous requirement. You can’t magically shove the good stuff into JavaFX, Swing, Tk, GTK2, GTK1, Motif, etc. :)

                  My short guide to a great touchpad experience would be:

                  • use wayland, of course
                  • stick to GTK apps as much as possible (I have a list by the way)
                  • apply this gtk patch (and the mentioned “relevant frame-clock ones” for good measure)
                  • MOZ_ENABLE_WAYLAND=1 MOZ_WEBRENDER=1 firefox
                    • about:config widget.wayland_vsync.enabled=true (hopefully will become default soon)
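
                  For what it’s worth, the Firefox step can be wrapped in a tiny launcher; this is just a sketch, assuming firefox is on your PATH and a Wayland session is running:

                  ```python
                  import os
                  import shutil
                  import subprocess

                  # Env vars from the recipe above: ask Firefox to use its native
                  # Wayland backend and the WebRender compositor.
                  env = dict(os.environ, MOZ_ENABLE_WAYLAND="1", MOZ_WEBRENDER="1")

                  # Only launch if firefox is actually installed.
                  if shutil.which("firefox"):
                      subprocess.Popen(["firefox"], env=env)
                  ```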

                  Android fail miserably on things like 60 fps scrolling

                  huh? Android does 90 fps scrolling very well in my experience (other than, sometimes, in the Play Store updates list), and there are even 144Hz phones now and the phone reviewers say they can feel how smooth all these new high refresh rate phones are.

                  1. 2

                    Android does 90 fps scrolling very well in my experience (other than, sometimes, in the Play Store updates list), and there are even 144Hz phones now and the phone reviewers say they can feel how smooth all these new high refresh rate phones are.

                    Maybe I’ve just been unlucky with hardware, but my Android experience has always been plagued with microstutters when scrolling. I haven’t used iOS though; maybe the grass is always greener on the other side.

                    1. 2

                      The stutters all over Android were one of the issues that made me give iOS a try. I haven’t gone back.

                4. 5

                  As my hat will tell you, I am biased in this matter, but still…

                  Haiku doesn’t support any hardware or software

                  “Any”? Well, I have a 2015-era ThinkPad sitting on my desk here with a Haiku install in which all major hardware (WiFi, ethernet, USB, …) works one way or another (the sound is a little finicky), and the only things that don’t work are sleep and GPU acceleration (which Haiku does not support broadly). I also have a brand-new Ryzen 7 3700X-based machine with an NVMe hard drive that I just installed Haiku on, and everything there seems to work except WiFi, but this appears to be a bug in the FreeBSD driver that I’m looking to fix. It can even drive my 4K display at native resolution!

                  You can also install just about any Qt-based software and quite a lot of WX-based software, with a port of GTK3 in the works. LibreOffice, OpenSSH, etc. etc. So, there’s quite a lot of software available, too.

                  1. 3

                    I’ve been inspired to try to find some time to install Haiku on an old laptop running Elementary OS!

                    1.  

                      I’ll preface this by saying: I’m not saying Haiku is bad, just that you clearly can’t compare Haiku’s software and hardware support with Linux’s and pretend that Haiku comes out on top.

                      Well, ThinkPads generally have excellent hardware support on many free software operating systems. Of course it boils down to separate questions, doesn’t it: are we asking ‘is there a machine where it works?’ or ‘is there a machine where it doesn’t work?’. You can say something has ‘good hardware support’ if there are machines you can buy where everything works, but I would say it only really counts as ‘good hardware support’ if the average machine you go out and buy will work. You shouldn’t have to seek out supporting hardware.

                      Based on that evaluation I would say that Linux certainly doesn’t have good laptop hardware support, because you need to do your research pretty carefully when buying any remotely recent laptop, but by the first standard it’s fine: there are recent laptops that are well supported and all the new features are well supported.

                      But I would say that Linux has excellent desktop hardware support, and this is a thread about desktop Linux. I never need to check if something will be supported, it just always is. Often it’s supported before the products are actually released, like anything from Intel.

                      the sound is a little finicky

                      Sound is definitely major hardware and should not be finicky. Sound worked perfectly on Linux in the early 2000s.

                      sleep and GPU acceleration (which Haiku does not support broadly.)

                      But there you go, right? It’s like people going ‘oh my laptop supports Linux perfectly, except if I close the lid it’s bricked and the WiFi doesn’t work but other than that it’s perfect’. Well not really perfect at all. Not all major hardware at all. Sleeping is pretty basic stuff. GPU support is hideously overcomplicated and that’s not your fault, but it doesn’t change that it isn’t there.

                      You can also install just about any Qt-based software and quite a lot of WX-based software, with a port of GTK3 in the works. LibreOffice, OpenSSH, etc. etc. So, there’s quite a lot of software available, too.

                      Right and I’m sure that software is great for what it is, but in a context where people are saying Linux isn’t a viable desktop operating system because of bad touchpad latency people are claiming that the problem is its monolithic kernel and that we actually need Haiku to save the day with… no sleep support? No GPUs?

                      1.  

                        I think you are moving the goalposts here. You claimed Haiku does not support “any” hardware or software, when that is rather pointedly not true. What the original comment you replied to was saying, is that in many respects, Haiku is ahead of Linux – in UI responsiveness, overall design, layout, etc.

                        We are a tiny team of volunteers doing this in our spare time; we have a tiny fraction of the development effort that goes into the “Linux desktop.” The point is that we are actually not so radically far behind them; and in the ways we are behind, perhaps doubling or tripling our efforts would be enough to catch up. How, then, does Linux spend so much time and money, and wind up with a far worse desktop UI/UX experience than we do?

                        1.  

                          You claimed Haiku does not support “any” hardware or software, when that is rather pointedly not true.

                          It was also clearly not intended to be taken literally.

                          What the original comment you replied to was saying, is that in many respects, Haiku is ahead of Linux – in UI responsiveness, overall design, layout, etc.

                          Except it isn’t actually ahead of Linux in any of those things from any objective standpoint, just in the opinion of one person that will advocate for anything that isn’t Linux because they have a hate boner for anything popular.

                          We are a tiny team of volunteers doing this in our spare time; we have a tiny fraction of the development effort that goes into the “Linux desktop.” The point is that we are actually not so radically far behind them; and in the ways we are behind, perhaps doubling or tripling our efforts would be enough to catch up. How, then, does Linux spend so much time and money, and wind up with a far worse desktop UI/UX experience than we do?

                          Results aren’t proportional to effort. Getting something working is easy. Getting something really polished, with wide-ranging hardware and software support, very long term backwards and forwards compatibility, that has to be highly performant across a huge range of differently powered machines from really weak microprocessors all the way through to supercomputers? That’s really hard.

                          Linux doesn’t spend ‘so much time and money’ on the desktop user experience. In fact there’s very little commercial investment in that area. It’s mostly volunteer work that’s constantly being made harder by people more interested in server usage scenarios constantly replacing things out from under the desktop people. Having to keep up with all the stupid changes to systemd, for example, which is completely unfit for purpose.

                          But you can have an actually good user experience on Linux if you forgo all the GNOME/KDE crap and just use a tiling wayland compositor like sway. Still has a few little issues to iron out but it’s mostly there, and if it’s missing features you need to use then you can just use i3wm on Xorg and it works perfectly.

                          Is it not still the case that Haiku doesn’t support multi-monitor setups? I would hardly describe that as a ‘far better UI/UX experience’ given that before you can have UI/UX you have to actually be able to display something.

                          1.  

                            Except it isn’t actually ahead of Linux in any of those things from any objective standpoint

                            At least in UI responsiveness, Haiku is most definitely ahead of Linux. You can see the difference with just a stopwatch, not to mention a high-speed camera, for things like opening apps, mouse clicks, keyboard input, etc.

                            Plenty of people have talked about how Haiku is ahead of both GTK and KDE in terms of UX, so it’s not just me (or us.) Maybe you disagree, but it’s certainly not a rare opinion among those who know about Haiku.

                            just in the opinion of one person that will advocate for anything that isn’t Linux

                            The BSDs are not Linux, and have the same problems because they use the same desktop. Our opposition to Linux has not a ton to do with Linux itself and more the architectural model of “stacking up” software from disparate projects, which we see as the primary source of the problem.

                            Linux doesn’t spend ‘so much time and money’ on the desktop user experience. In fact there’s very little commercial investment in that area.

                            Uh, last I checked, a number of Red Hat developers worked on GNOME as part of their jobs. I think KDE also has enough funding to pay people. The point is, Haiku has 0 full-time developers, and the Linux desktop ecosystem has, very clearly, a lot more than 0.

                            that’s constantly being made harder by people more interested in server usage scenarios constantly replacing things out from under the desktop people. Having to keep up with all the stupid changes to systemd, for example, which is completely unfit for purpose.

                            Yes. And that’s why Haiku exists, because we think those competing concerns probably cannot coexist, at least in the Linux model, and desktop usage deserves its own full OS.

                            Is it not still the case that Haiku doesn’t support multi-monitor setups?

                            We have drivers that can drive multiple displays in mirror mode on select AMD and Intel chips, but the plumbing for separate mode is not there quite yet. As you mentioned, graphics drivers are hard; I and a few others are trying to find the time and structure to bite the bullet and port Linux’s KMS-DRM drivers.

                            you have to actually be able to display something.

                            Obviously true multi-display would be nice, but one display works already. Pretty sure that counts as “displaying something.”

                    2. 2
                      1.  

                        Anything that is unpopular is bad? Anything that is popular is good? What are you trying to say with this ridiculous comment?

                    3. 5

                      Do you think Fuchsia would fit the bill?

                      1. 2

                        No, Fuchsia won’t.

                        I took a look at Zircon recently, and it appears to be a first-generation µkernel, as it seems to ignore everything Liedtke brought forward. I would particularly stress the principle of minimality (Zircon is functionality-bloated). It would have been an impressive µkernel thirty years ago. Today, it’s a potato.

                        But it is still better than Mach, used in iOS/macOS. I have no doubt the overall system will be nice to work with (APIs), considering they have people from the old NewOS/BeOS team in there. It will, for Google, likely become a workable replacement path for Linux, giving them much more control, but from a systems-design perspective it is nothing short of shameful. They had the chance to use any of many competitive contemporary µkernels, but went with such a terrible solution just because of NIH. It taints the whole project, making it worthless from an outsider perspective.

                        Because of HelenOS ties, I expect Huawei’s HarmonyOS to be better at a fundamental level.

                        1. 3

                          As the Fuchsia overview page says:

                          Fuchsia is not a microkernel

                          Although Fuchsia applies many of the concepts popularized by microkernels, Fuchsia does not strive for minimality. For example, Fuchsia has over 170 syscalls, which is vastly more than a typical microkernel. Instead of minimality, the system architecture is guided by practical concerns about security, privacy, and performance. As a result, Fuchsia has a pragmatic, message-passing kernel.

                          IMO, there’s no shame in favoring pragmatism over purity. From what I’ve read, it looks like Fuchsia will still be a major step up in security compared to any current widely used general-purpose OS.

                          1. 2

                            there’s no shame in favoring pragmatism over purity

                            They do think they are pragmatic. It’s not the same as actually being pragmatic.

                            Those “pragmatic concerns” show, if anything, that they’ve heard about minimality but did not care to understand the why. They actually mention performance, and think putting extra functionality inside the kernel helps them with it; they ignored Liedtke’s research.

                            A wasted opportunity. If only they had done a little more research on the state of the art before deciding to roll their own.

                            Fuchsia will still be a major step up in security compared to any current widely used general-purpose OS.

                            Assuming macOS/iOS and Linux are the contenders you have in mind, this looks like a really low bar to meet, and thus very feasible.

                            1. 3

                              The fact is that nobody has ever actually demonstrated a performant operating system based on a microkernel.

                              1. 3

                                Have you heard about QNX?

                                1.  

                                  It’s proprietary software and I haven’t used it.

                                2. 1

                                  That’s quite the liberal use of the word fact.

                                  If you can handle foul language, you might enjoy reading this article.

                                  1. 3

                                    Starting off a blog post by talking about ‘microkernel hatred’ is pretty funny given that 99% of people that prefer monolithic kernels just get on with their lives while the microkernel people seem to be obsessed with comparing them and arguing for them and slagging off monolithic kernels and generally doing anything other than actually working on them.

                                    The blog post then goes on to complain that people slag off microkernels based on old outdated performance benchmarks, followed by quoting some benchmarks and comparisons from the early 1990s. There’s not much performance data indicated from this millennium, and nothing newer than 2010.

                                    It’s 2020. I expect to see performance data comparing realistic server and desktop workloads across production operating systems running on top of microkernels and monolithic kernels and explanations for why these differences should be attributed towards the kernel designs and not towards other aspects of the operating system designs. Because sure as hell Linux is not a theoretically optimally performant monolithic kernel, and I’m sure there are much faster monolithic kernel designs out there that don’t have all the legacy bits slowing things down that are there in Linux, or for that matter, the various things slowing Linux down that exist for reasons of flexibility across multiple domains of work, or to patch security vulnerabilities, etc.

                                    It’s often said that you rewrite your system and it’s twice as fast, then you fix all the bugs and it’s 50% faster than the original, then you add all the missing features and it’s back to being just as slow as the original again. I suspect that performance figures being better on demos is probably mostly due to this. Operating system speed in 2020 is dominated by context switching and disk I/O speeds, especially the former since the Spectre crap, so anything you can do to cut down on those bottlenecks is going to give you by far the most bang for your buck in performance.

                                    Nowhere does this blog post quote the ‘myth’ of “if they are so great, why don’t you install one on your desktop system and use it” because they know there’s no good answer to it.

                                    1. 3

                                      Nowhere does this blog post quote the ‘myth’ of “if they are so great, why don’t you install one on your desktop system and use it” because they know there’s no good answer to it.

                                      The absence (Sculpt is in the works to cover this scenario) of an open source desktop OS based on a modern operating system architecture is, indeed, not an argument against the µkernel approach being fit for the purpose. It simply hasn’t been done as open source. The article goes as far as to cite successful commercial examples (QNX used to offer a desktop, and it was great).

                                      There’s just nothing open that’s actually quite there, yet. And it is indeed a shame.

                                      It’s often said that you rewrite your system and it’s twice as fast, then you fix all the bugs and it’s 50% faster than the original, then you add all the missing features and it’s back to being just as slow as the original again. I suspect that performance figures being better on demos is probably mostly due to this.

                                      Your hypothetical does indeed apply to monoliths, and could possibly apply to some pre-Liedtke µkernels. It cannot apply to 2nd- or 3rd-generation µkernels, due to the advances covered in the µ-kernel construction (1995) paper. µkernels just aren’t constructed the same way. Do note we’re talking about a paper from 25 years ago; thus your hypothetical was only reasonable to put forward 25+ years ago, certainly not today.

                                      context switching (dominates performance…)

                                      Is something Linux is a sloth at. Not just a little slower than seL4, but orders of magnitude slower.

                                      disk I/O speeds

                                      Are largely considered non-deterministic, and out of scope.

                                      File system performance is relevant, but Linux has known, serious issues with it, with pathological I/O stalls which are finally being discussed at the Linux Plumbers Conference, but remain unsolved.

                                      This is in no small way a symptom of an approach to operating systems design that makes reasoning about latency so difficult as to be intractable.
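                                      To make the context-switch point concrete, here is a minimal sketch (my own illustration, not from the article): two processes ping-pong a single byte over a pair of pipes, so every round trip forces at least two scheduler switches. Interpreter and syscall overhead are included in the number, so treat it as a rough upper bound rather than a kernel benchmark; it is POSIX-only.

```python
# Rough context-switch cost probe: two processes ping-pong one byte
# over a pair of pipes, forcing the scheduler to switch between them
# on every round trip. Python adds interpreter overhead, so the result
# is an upper bound, not a precise kernel benchmark. POSIX-only (fork).
import os
import time

ROUNDS = 10_000

def measure_round_trip(rounds: int = ROUNDS) -> float:
    """Return mean seconds per ping-pong round trip."""
    p2c_r, p2c_w = os.pipe()  # parent -> child
    c2p_r, c2p_w = os.pipe()  # child -> parent

    pid = os.fork()
    if pid == 0:  # child: echo every byte back until EOF
        os.close(p2c_w)
        os.close(c2p_r)
        while os.read(p2c_r, 1):
            os.write(c2p_w, b"x")
        os._exit(0)

    os.close(p2c_r)
    os.close(c2p_w)
    start = time.perf_counter()
    for _ in range(rounds):
        os.write(p2c_w, b"x")
        os.read(c2p_r, 1)
    elapsed = time.perf_counter() - start
    os.close(p2c_w)  # EOF stops the child
    os.waitpid(pid, 0)
    return elapsed / rounds

if __name__ == "__main__":
    per_round = measure_round_trip()
    print(f"~{per_round * 1e6:.1f} µs per round trip (two switches each)")
```

Numbers vary wildly with hardware and mitigations, which is exactly why up-to-date measurements matter.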

                                      The blog post then goes on to complain that people slag off microkernels based on old outdated performance benchmarks, followed by quoting some benchmarks and comparisons from the early 1990s.

                                      More or less correct. More or less. (I highlighted a word.)

                                      nothing newer than 2010.

                                      The implication (supported by the “outdated” above, correct me if I am wrong) is that all the performance data is obsolete. But it is left implicit, possibly because you suspect this data is obsolete, but you couldn’t actually find a paper that supported your hypothesis. Thus the validity of these papers is now strengthened by your failed attempt at refutation.

                                      99% of people that prefer monolithic kernels just get on with their lives

                                      There’s a several-steps difference between “using a system that happens to be monolithic”, and “preferring monolithic kernels”. The latter would imply an (unlikely) understanding of the different system architectures and their pros/cons.

                                      Thus, you could get away with claiming that 99% of people use operating systems that happen to be built around monolithic kernels. But to claim that 99% of people actually prefer monolithic kernels is nothing but absurd, even as an attempt at Argumentum ad populum.

                                      I have yet to encounter such a person who would still prefer the monolithic approach. Not even Linus Torvalds has a clue, which I believe plays no small role in the systemic echo-chamber problem the Linux community has with the µkernel approach.

                                      Overall, having read your post and the points you tried to make in it, and realizing how fast your response was written, I conclude you cannot possibly have read the article I linked to you. At most, you skimmed it. To put this into context, it took me several days to go through it (admittedly just one or two hours per day; the density is quite high), and I spent weeks after that going through the papers referenced by it.

                                      And thus I realize I have put too much effort into this post, relatively speaking. This is why it is likely I will not humor you further than I already have.

                                      1.  

                                        The absence (Sculpt is in the works to cover this scenario) of an open source desktop OS based on a modern operating system architecture is, indeed, not an argument against the µkernel approach being fit for the purpose. It simply hasn’t been done as open source. The article goes as far as to cite successful commercial examples (QNX used to offer a desktop, and it was great).

                                        I didn’t say it isn’t fit for purpose. I’m not the one saying that. You are. You are the one claiming that a monolithic kernel is fundamentally a broken model. Well prove it. Build a better example. Show me the code.

                                        My perspective is all the working kernels I’ve had the pleasure of using have been monolithic and I’ve never seen any demonstration that there are viable other alternatives. I’m sure there are, but until you build one and show it to us and give us performance benchmarks for real world usage, we just don’t know. There are hundreds of design choices in an operating system that affect throughput vs latency vs security etc. etc. and until you’ve actually gone and built a real world practical operating system with a microkernel at its core you can’t even demonstrate it’s possible to build one that’s fast, let alone that the thing that makes it faster is that particular design choice.

                                        Your hypothetical does indeed apply to monoliths, and could possibly apply to some pre-Liedtke µkernels. It cannot apply to 2nd- or 3rd-generation µkernels, due to the advances covered in the µ-kernel construction (1995) paper. µkernels just aren’t constructed the same way. Do note we’re talking about a paper from 25 years ago; thus your hypothetical was only reasonable to put forward 25+ years ago, certainly not today.

                                        If you can’t actually build a working example in 25 years then maybe your idea isn’t so great in the first place.

                                        Is something Linux is a sloth at. Not just a little slower than seL4, but orders of magnitude slower.

                                        And how fast is seL4 when Spectre and Meltdown mitigations are introduced? Linux context switches aren’t slow because slow context switches are fun. They’re slow because they do a lot of work and that work is necessary. seL4 has to do that work anyway, and context switches need to happen a lot more often.

                                        The implication (supported by the “outdated” above, correct me if I am wrong) is that all the performance data is obsolete. But it is left implicit, possibly because you suspect this data is obsolete, but you couldn’t actually find a paper that supported your hypothesis. Thus the validity of these papers is now strengthened by your failed attempt at refutation.

                                        I don’t need a paper to support my hypothesis. I don’t have a hypothesis. It’s simply a statement of fact that these benchmarks are outdated. I’m not saying that if they were redone today they would be any different, or that they’d be the same. I don’t know, and neither do you. That’s the point.

                                        Thus, you could get away with claiming that 99% of people use operating systems that happen to be built around monolithic kernels. But to claim that 99% of people actually prefer monolithic kernels is nothing but absurd, even as an attempt at Argumentum ad populum.

                                        99% of people that prefer monolithic kernels. That prefer them. ‘99% of people that have blonde hair just get on with their lives’ does not mean ‘99% of people have blonde hair’, and ‘99% of people that prefer monolithic kernels just get on with their lives’ does not mean ‘99% of people prefer monolithic kernels’. Christ alive, this isn’t hard. The irony of quoting Latin phrases at me in an attempt to make yourself look clever when you can’t even parse basic English syntax…

                                        Overall, having read your post and the points you tried to make in it, and realizing how fast your response was written, I conclude you cannot possibly have read the article I linked to you. At most, you skimmed it. To put this into context, it took me several days to go through it (admittedly just one or two hours per day; the density is quite high), and I spent weeks after that going through the papers referenced by it.

                                        You’re quite right. I skimmed it. It didn’t address the problems I have with your aggressive argumentative bullshit and so it isn’t relevant to the discussion.

                                        Nobody actually cares whether microkernels are faster or monolithic kernels are faster. It doesn’t matter. It isn’t even a thing that exists. It’s like saying ‘compiled languages are faster’ or ‘interpreted languages are faster’. Sure, maybe, who cares? Specific language implementations can be faster or slower for certain tasks, and specific kernels can be faster or slower for certain tasks.

                                        Perhaps it will turn out, when you eventually come back here and show us your new microkernel-based operating system, that your system has better performance characteristics for interactive graphical use than Linux. Perhaps you’ll even somehow be able to justify this as being due to your choice of microkernel over monolithic kernel and not the hundreds of other design choices you’ll have made along the way. And yet it might turn out that Linux is faster for batch processing and server usage.

                                        We don’t know, and we won’t know until you demonstrate the performance characteristics of your new operating system by building it. But arguing about it on the internet and calling everyone that doesn’t care a ‘Linus Torvalds fanboy’ definitely isn’t going to convince anyone.

                        2. 3

                          I’m curious about why you say that the state of the art is Genode specifically paired with seL4. What advantages does seL4 have over Nova? It looks like Sculpt is based on Nova. Would it be difficult to change it to use seL4?

                          1. 4

                            Nova is a microhypervisor that, IIRC, was partly verified for correctness. seL4 is a microkernel that was verified for security and correctness down to the binary. Genode is designed to build on and extend secure microkernels. seL4 is the best option if one is focusing on security above all else.

                            An alternative done in SPARK Ada for verification is the Muen separation kernel for x86.

                            1. 0

                              Because operating system and kernel enthusiast communities are full of people that are obsessed with dead, irrelevant technology.

                              1. 1

                                Genode offers binary (ABI) compatibility across kernels. They’re using the same components, the same drivers, the same application binaries.

                                I do not know the current state of their seL4 support. The last time I looked into it (more than a year ago), you could use either NOVA or seL4 for Sculpt, and seL4 had to be patched (with some patches they were on their way to upstreaming) or you’d get slow framebuffer support (an extra layer of indirection due to a missing feature in the MMU code).

                                From a UX perspective, using either microkernel should feel about the same. I do of course find seL4 more interesting because of the proofs it provides, and because it provides them (assuming they got the framebuffer problem solved) without any drawbacks; the seL4 team claims at every chance that their µkernel is the fastest.

                                I also favor seL4’s approach to hypervisor functionality, as it forwards hypercalls and exceptions to the VMM, a user process with no higher capabilities than the VM itself, making an otherwise successful VM escape attack fruitless.

                            1. 3

                              I recommend taking a look at ActivityPub federated blogging platforms such as Plume and Write.as.

                              1. 1

                                What’s the benefit of AP-supporting platforms vs a regular old site? Is it mainly so that people can follow your posts from e.g. mastodon?

                                1. 1

                                  Yeah, it basically allows people on any platform that supports it to communicate with each other across different platforms. So, you get better discoverability overall. I tend to use Mastodon as my main feed, akin to RSS, nowadays.

                                  AP also makes it easier for new services to bootstrap. For example, Pixelfed didn’t have the ability to follow people from different instances initially, but it was possible to follow different Pixelfed instances via Mastodon. So, existing functionality in one part of the network helped bootstrap another.

                              1. 3

                                it is clearly malicious behavior and may fall on the wrong side of the law.

                                Malicious, yes

                                Is it really illegal, though?

                                1. 4

                                  People have gone to court over port scanning in the USA IIRC, but I don’t know if it’s illegal per se

                                  1. 4

                                    Purely based on hearsay/reading stuff on the internet:

                                    Of course it depends on where you are, but based on intent, port scanning can be considered “preparation for a crime”. I think for this reason the action taken by eBay might not constitute a crime; the fingerprinting, and how it is justified, might actually be more relevant there.

                                    Even in law the analogy of knocking at people’s doors to check whether someone is home seems to hold. This can also be considered a preparation for a crime, if your intention is to rob them.

                                    1. 2

                                      If it is done for fingerprinting/tracking reasons, I’m pretty sure it’s illegal in the EU (the GDPR requires you to ask the user to opt-in to tracking).

                                    1. 7

                                      As a firefox user, I had no idea that the alternate styles came from the site itself! I’m also surprised that FF still supports this feature. I might add this to my site just for fun.

                                      1. 3

                                        That’s the spirit. I decided to post this because it is a feature that has been there since forever and I noticed most people don’t know about it. It is quite fun. I can see people who are better with CSS than I am doing some crazy stuff, CSS Zen Garden style, with this feature. Even if only for fun.

                                      1. 3

                                        Very good write-up about why to use video formats over .gif, but you should add some caveats about autoplay with video. Some browsers/platforms block autoplay even when muted, so you will absolutely need to implement error handling and a UI to initiate playback via a user gesture.

                                        Rule number 1 of autoplay on the modern web is never assume autoplay will work without fail.

                                        1. 8

                                          Autoplay is evil and should never have been allowed in the first place. A GIF has no audio, so it’s slightly more acceptable, although I would argue that autoplay of a GIF is probably also worth considering blocking. Video however has audio and it’s very intrusive to have a video with audio start playing when you open a page. While there are probably valid ergonomic reasons a site may want to allow autoplay for the most part websites on the internet can’t be trusted to not abuse it.

                                          1. 6

                                            Autoplay is evil and should never have been allowed in the first place.

                                            For advertising, perhaps. Autoplay itself is not inherently evil, but it can certainly be used for evil. It’s like The Force.

                                            Video however has audio and it’s very intrusive to have a video with audio start playing when you open a page.

                                            You are perusing a list of music videos and you click on a link to one of them; should I have to click again to play the music video?

                                            You are on a website waiting for a live stream to start, and instead of transparently being able to start playback, you have to force a user gesture to start playback?

                                            You have an artsy background video; it probably needs to autoplay, but it would be nice if browsers offered a way to tell if you’re on a metered connection so you could fall back to an image instead to save your end users money.

                                            There are more examples, all of them perfectly valid autoplay use cases; the real issues are:

                                            1. Many users do not have control over their own devices (Chrome)
                                            2. Browser vendors still have not implemented a way to tell if someone is on a metered connection, so you cannot choose to be responsible about bandwidth usage, which creates problems for Good Actors in many other systems (p2p, etc.)

                                            While there are probably valid ergonomic reasons a site may want to allow autoplay for the most part websites on the internet can’t be trusted to not abuse it.

                                            I don’t actually disagree. It should have used the permissions system so that websites can request autoplay permission; that defeats bad actors and allows good actors to benefit from improved UX.

                                            Instead what we got was Google’s MEI (Media Engagement Index), so they can basically whitelist bad actors that pay them enough advertising revenue, à la CNN and those absolutely annoying popup autoplaying + audio videos. And it introduced a non-deterministic system which makes debugging an absolute PITA.

                                            1. 3

                                              I think we very much agree actually. There are valid reasons for autoplay on videos to be useful. I just think that human nature and the reality of website operators on the web mean the downsides for users in aggregate outweigh the usefulness of autoplay.

                                            2. 4

                                              GIF has a tremendous downside when compared to video: energy consumption. With GIF one must render every frame: rasterize the frame, transfer it to the GPU, render it, discard it, and start rasterizing the next one. Caching frames is probably not going to be an option, as those textures are going to take a lot of space, which mobile devices especially can’t afford. With video all that computing cost is paid during encoding. This is also why loading spinners made from GIFs are… silly: pretty often, running a GIF through all the sandboxing and extra buffer-handling overhead uses 100% of one core.

                                              1. 2

                                                I don’t think anyone would seriously argue that you were wrong. Video is indeed a superior encoding for animated images. But the downside of autoplay for video has nothing to do with that. It’s purely an issue with the fact that browsers in general in the past have done a very poor job of protecting their users from bad actors in the autoplaying video space. Firefox has been taking steps recently in this direction but it’s not enough.

                                                1. 1

                                                  Isn’t decoding h.264 much more power demanding than reading a raw video buffer? I’ve heard people say that devices that struggle playing videos are often able to play animated gifs (at the cost of ram)

                                                  1. 3

                                                    GIF is more expensive to decode than modern video codecs. It’s very poor at compressing, so the sheer volume of data is huge. It gives you 10x more data to decompress, but it’s nowhere near 10x simpler to process it. In fact it’s pretty expensive for modern CPUs, because it requires chasing pointers in a dynamic dictionary (memory and branching are slow). Modern codecs give you way less data to decompress and then some transformations and filtering that are SIMD and cache-friendly.
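                                                    A back-of-envelope sketch of that data-volume argument (the clip parameters and compression ratios below are illustrative assumptions, not measurements):

```python
# Back-of-envelope: compressed bytes a decoder must chew through for a
# short animated clip, under assumed compression ratios. GIF/LZW rarely
# does much better than ~8:1 on video-like content, while modern video
# codecs routinely exceed 100:1. All figures here are illustrative.
def data_to_decode_mb(width, height, fps, seconds, compression_ratio):
    """Compressed megabytes fed to the decoder for a 24-bit RGB clip."""
    raw_bytes = width * height * 3 * fps * seconds
    return raw_bytes / compression_ratio / 1e6

# Hypothetical 480x270 clip at 15 fps for 10 seconds:
gif_mb = data_to_decode_mb(480, 270, 15, 10, 8)
video_mb = data_to_decode_mb(480, 270, 15, 10, 100)

print(f"GIF input:   {gif_mb:.2f} MB")
print(f"Video input: {video_mb:.2f} MB")
```

Even before accounting for LZW’s pointer-chasing dictionary lookups, the decoder simply has an order of magnitude more input to process with GIF.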

                                            1. 4

                                              I think this might be reddit’s shark-jumping moment. Cryptocurrency people are excited about this because ADOPTION!!!11 but reddit already has issues with the quality of stuff on there. People aim to get as many upvotes as possible by posting low-quality, inflammatory, or generally-agreeable comments. Giving people the ability to cash that out will just make this problem much worse.

                                              1. 2

                                                reddit went Fonzie on waterskis like 8 years ago. This is the Mega Shark vs Giant Octopus moment. It’s stupid, and not in any way good, but it has Debbie Gibson and Lorenzo Lamas, and you might be able to derive some unintended humor from watching it.

                                                1. 1

                                                  Oh for sure, I can’t wait for the drama to pop off (assuming they don’t drop this project)

                                                1. 1

                                                  I ended up doing something similar on my site too, except it’s just one giant markdown file that’s rendered to barebones HTML. I like how low-friction it is to add stuff there.

                                                  1. 0

                                                    The WWW might seem hopeless, but there’s still Gopher.

                                                    There’s still some life in it, with gopher blogs and such. Plaintext is standard there. Much more pleasant than most websites.

                                                    Gopherus is a nice, low-footprint, well-maintained gopher client.

                                                    1. 5

                                                      Gopher is an interesting protocol, but what’s stopping people from just writing bare HTML (eg Dan Luu’s site) instead? As an (IMHO) added benefit, you get reflowing text + inline images and text formatting.

                                                      1. 3

                                                        I love the video on that site. They show an 80286, compared to which our current machines are basically supercomputers. The machine boots fast (DOS FTW) and browsing gopherspace is nearly instant. Yet here we are with our supercomputers, burning cycles loading and running megabytes of JavaScript just to view some pages. Many tabs with basic ‘web apps’ take hundreds of megabytes of memory.

                                                        It’s such a waste.

                                                        1. 1

                                                          I just compiled Gopherus on my Raspberry Pi 4. It’s much faster than a 286, cheaper when new, and requires less power to operate.

                                                          I don’t see why one cannot enjoy the fruits of Moore’s Law, instead of being nostalgic for a (crappy) past.

                                                          (My first computer was a 386 without a math coprocessor. That sucked)

                                                          1. 2

                                                            I don’t see why one cannot enjoy the fruits of Moore’s Law, instead of being nostalgic for a (crappy) past.

                                                            As someone who is currently working with old software on (emulated) old systems, rest assured that what we have now is better in many ways.

                                                          2. 1

                                                            I have used it on a 386/25, which is faster but not that much. Gopherus supposedly works on 8088, as long as there’s ~400KB of free RAM for it to use, which most remaining PCs and XTs likely do have.

                                                            I believe the only reason it isn’t actually instant is that the information is fetched from the Internet.

                                                        1. 12

                                                          0x6. I like monospaced fonts.

                                                          Unfortunately, I learned about kerning and kerning is impossible to do even decently with monospace fonts.

                                                          1. 6

                                                            Kerning is useless for monospaced fonts, almost by definition.

                                                            Kerning is so that combinations like “AV” don’t have a wide space between them. AV will have that gap, because the horizontal space taken up by each character is the same.

                                                            1. 3

                                                              There are advantages to kerning, and you miss out on them with monospaced fonts. Obviously you gain other benefits while writing code with monospaced fonts, but for prose? Not so clear.

                                                            2. 5

                                                              Ditto. Maybe it’s just me, but I find it very easy to lose my place when reading monospace text.

                                                              1. 3

                                                                So don’t do kerning? Not sure if there are any readability studies or something that you’re thinking about but as a programmer I am also happy to read articles in monospaced font.

                                                                1. 17

                                                                  Monospaced fonts make prose objectively harder to read. They’re an inappropriate choice for body text, unless you’re trying to make a specific e.g. stylistic statement.

                                                                  1. 1

                                                                    Do you have any links for some studies about it? I’m wondering since you’ve used the objectively term, which I find confusing, since I’m not impacted by monospaced formatting at all. Film scripts are written in monospaced fonts, and books used to be (at least in the manual-typewriter days); I think this wouldn’t be the case if monospaced fonts were objectively harder to read.

                                                                    1. 4

                                                                      Do you have any links for some studies about it? I’m wondering since you’ve used the objectively term, which I find confusing, since I’m not impacted by monospaced formatting at all.

                                                                      This is a subject that has been studied for a long time. A quick search turned up Typeface features and legibility research, but there is a lot more out there on this topic.

                                                                      The late Bill Hill at Microsoft has a range of interesting videos on ClearType.

                                                                      1. 2

                                                                        Your first link was fascinating, thanks!

                                                                      2. 1

                                                                        Manuscripts and drafts are not the end product of a screenplay or a book. They’re specialized products intended for specialized audiences.

                                                                        There are no works intended for a mainstream audience that are set in a monospaced typeface that I know of. If a significant proportion of the population found it easier to read monospaced, that market would be addressed - for example, in primary education.

                                                                        1. 1

                                                                          The market could prefer variable-width fonts because monospaced text is wider, thus increasing the space taken by the text, which in turn increases production cost. This alone could carry more weight for market preference than the actual ease of reading. The greater compression achieved by variable-width type could improve the reading speed of healthy individuals, but that isn’t so obvious for people with a vision disability.

                                                                          Individuals with central loss might be expected to read fixed-pitch fonts more easily owing to the greater susceptibility of crowding effects of the eccentric retina with which they must read. On the other hand, their difficulty in making fixative eye movements in reading should favor the greater compression of variable pitch. Other low-vision patients, reading highly magnified text, might benefit from the increased positional certainty of characters of fixed pitch. Our preliminary results with individuals with macular disease show fixed pitch to be far more readable for most subjects at the character size at which they read most comfortably. (“Reading with fixed and variable character pitch”: Arditi, Knoblauch, Grunwald)

                                                                          Since at least some research papers attribute the superiority of variable-width fonts to the horizontal compression of the text – which positively influences reading speed and doesn’t require as many eye movements – I’m wondering if the ‘readability’ of monospaced typefaces can be improved with clever kerning instead of changing the actual width of the letters.

                                                                          The reading time (Task 1) with the variable-matrix character design was 69.1 s on the average, and the mean reading time with the fixed-matrix character set was 73.3 s, t (8) = 2.76, p < 0.02. The difference is 4.2 s or 6.1% (related to fixed-matrix characters). (“RESEARCH NOTE Fixed versus Variable Letter Width for Televised Text”: Beldie, Pastoor, Schwarz)

                                                                          The excerpt from the paper above suggests that the superiority of variable width vs monospaced isn’t as crushing as one could think when reading that human preference for variable width is an “objective truth”.

                                                                          Also, the question was whether monospaced fonts are really harder to read than variable-width fonts, not whether monospaced fonts are easier to read. I think there are no meaningful differences between the two styles.

                                                                          1. 1

                                                                            The market could prefer variable-width fonts because monospaced text is wider, increasing the space taken up by the text, which in turn increases production cost.

                                                                            So it’s more readable and costs less? No wonder monospaced fonts lose out.

                                                                            I’d love to read the paper you’ve referenced, but cannot find a link in your comment.

                                                                            1. 1

                                                                              So it’s more readable and costs less? No wonder monospaced fonts lose out.

                                                                              Low quality trolling.

                                                                              I’d love to read the paper you’ve referenced, but cannot find a link in your comment.

                                                                              They could be paywalled. I’ve provided the names of the papers plus the authors; everyone should be able to find them on the internet.

                                                                              1. 2

                                                                                Low quality trolling.

                                                                                What?! I put a lot of effort into my trolling!

                                                                                (To be honest: you’re right and I apologize. It was a cheap shot).

                                                                                I found the first paper (https://www.ncbi.nlm.nih.gov/pubmed/2231111), and while I didn’t read it all, I found a link to a font that’s designed to be easier to read for people who suffer from macular degeneration (like my wife). The font (Maxular) shares some design cues with monospaced fonts, but critically, wide characters (like m and w) are wider than narrow ones, like i.

                                                                                That’s what I think is a big problem with monospaced fonts: at small sizes, characters like m and w get very compressed and are hard to distinguish.

                                                                    2. 5

                                                                      I also tried to code with a variable-width font. It works OK with Lisp code but not with other languages. The tricky part is aligning stuff. You need elastic tabstops.
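                                                                      A rough sketch of the idea in Python (simplified on purpose: real elastic tabstops align only runs of *consecutive* lines that share a tab column, while this version aligns across all the lines it’s given):

```python
def elastic_align(lines, padding=2):
    """Render tab-separated lines with elastic-tabstop-style alignment:
    each tab column becomes as wide as its widest cell, plus padding.
    Simplified sketch; real elastic tabstops track consecutive line groups."""
    rows = [line.split("\t") for line in lines]
    widths = {}
    for cells in rows:
        for i, cell in enumerate(cells[:-1]):  # last cell needs no padding
            widths[i] = max(widths.get(i, 0), len(cell))
    out = []
    for cells in rows:
        padded = [c.ljust(widths[i] + padding) for i, c in enumerate(cells[:-1])]
        out.append("".join(padded) + cells[-1])
    return out
```

                                                                      With a variable-width font an editor would measure cell widths in pixels instead of characters, but the column-sizing logic is the same.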

                                                                      1. 1

                                                                        Oh, wow. That’s a cool idea. Yeah, that might be enough.

                                                                        1. 1

                                                                          Very cool idea, but that means using actual tabs in a file, and I know a lot of programmers who hate tabs in files.

                                                                          1. 1

                                                                            Good point. I think the cases when different sized tabs would cause problems should also cause problems with a system like this.

                                                                    1. 2

                                                                      Personally, Python’s massive standard library makes it my usual choice for random projects. I might delete the code tomorrow, so I might as well try to get something hacked together as fast as possible.

                                                                      1. 5

                                                                        That’s one place where Clojure really shines: I get access to all of the JVM, the JS ecosystem, and now Python. I can use the same language with the same tooling and get all the benefits of these ecosystems.

                                                                      1. 2

                                                                        I love the simplicity of GOAP. It really is a constraint solver at its core, but every action exponentially increases the search space for plans and you can get some surprising interactions as a result.

                                                                          1. 0

                                                                            but if we can’t track everything you do online, how are we going to make money? /s

                                                                          1. 1

                                                                            Oh weird. Is it something to do with RTL overrides?

                                                                            1. 2

                                                                              The browser is rendering the entire text within the parentheses as bidi instead of just the text within the quotes. Hard to fault it for that though, since we can hardly expect a general purpose text rendering engine to treat code specially.
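                                                                              One way to work around this (assuming the culprit is the Unicode bidi algorithm reordering text around the quotes) is to wrap each user-visible string in bidi *isolates*, so its direction can’t affect the surrounding punctuation. A small sketch:

```python
# Unicode bidi isolate characters: FSI opens a first-strong isolate,
# PDI pops it. Text inside can't reorder characters outside the pair.
FSI = "\u2068"  # FIRST STRONG ISOLATE
PDI = "\u2069"  # POP DIRECTIONAL ISOLATE

def bidi_isolate(s: str) -> str:
    """Wrap s so its directionality is isolated from surrounding text."""
    return FSI + s + PDI

# A code-like snippet mixing Hebrew (RTL) and Latin (LTR) values:
snippet = '("' + bidi_isolate("שלום") + '" : "' + bidi_isolate("abc") + '")'
```

                                                                              In HTML the same effect is available declaratively via `<bdi>` or `unicode-bidi: isolate`.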

                                                                              1. 5

                                                                                Seems to be rendering fine for me. What are y’all seeing?

                                                                                1. 3

                                                                                  Looks normal for me too. I’m using Safari on iPhone. It’s possible I’ll get different results on desktop.

                                                                                  1. 2

                                                                                    You sure?

                                                                                    If you break the expression into multiple lines, can you see the problem?

                                                                                    Or try removing one of the whitespaces on the side of the colon. Did that do what you expect?

                                                                                    1. 1

                                                                                      Ya, I see it now that I removed one of the whitespaces adjacent to the colon. If I don’t do that, it looks normal to me

                                                                                      1. 1

                                                                                        “normal”, meaning you can’t tell which string is longer?

                                                                              1. 1

                                                                                The ultimate form of server-side rendering would be something like Google Stadia that doesn’t even need HTML, JS or API calls.

                                                                                Though I’ve heard that servers rendering close to the client would be required to solve the latency issue.

                                                                                1. 4

                                                                                  Didn’t Opera use to do that back in the flip-phone era? A server would render the website on your behalf and send you an image of the page. I can’t imagine it working too well in the current era.

                                                                                  1. 3

                                                                                    Yes! Opera mini. It worked pretty well on 2g/3g.

                                                                                1. 2

                                                                                  Did you observe any screen tearing, by the way? I just installed 20.04 on my Intel laptop and I got very visible screen tearing. It went away after I changed the default DE to GNOME Wayland, but then I lost the new fractional scaling options and had to go back to either 100/200% scaling.

                                                                                  1. 1

                                                                                    Do you know if 20.04 removed the tear free option?

                                                                                    https://wiki.archlinux.org/index.php/intel_graphics#Tearing

                                                                                    I’m using this in Xubuntu 19.10 without any issues but I have not upgraded yet.
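                                                                                    For reference, the TearFree option from that wiki page goes in an xorg.conf.d snippet along these lines (the file name is just a convention, and this assumes the xf86-video-intel driver is in use):

```
# /etc/X11/xorg.conf.d/20-intel.conf
Section "Device"
  Identifier "Intel Graphics"
  Driver     "intel"
  Option     "TearFree" "true"
EndSection
```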

                                                                                    1. 2

                                                                                      I have two laptops with Intel video and TearFree is absolutely necessary in order to watch any video. I don’t know why it isn’t the default.

                                                                                      1. 2

                                                                                        I tried using that but I got very nasty graphical corruption (eg unreadable text). :/

                                                                                      2. 1

                                                                                        I can’t say I’ve noticed any screen tearing on my machine.

                                                                                      1. 3

                                                                                        I love how this guy is going like “Linux isn’t secure, because I can elevate arbitrary code execution into full system compromise, provided the user types in their sudo or root password”.

                                                                                        Well of course you can. This applies equally to all computing systems.

                                                                                        1. 6

                                                                                          Well of course you can. This applies equally to all computing systems.

                                                                                          Not really, no. In many systems, it is not possible to gain privileges, only discard them.

                                                                                          Try Genode’s Sculpt sometime.

                                                                                          1. 5

                                                                                            Does it though? IIRC on Mac OS there are some filesystem locations and executables that you can’t run with sudo, thanks to SIP.

                                                                                            1. 4

                                                                                              Gitlab does the same. If you don’t want to be subject to US sanctions, then you’ll have to use a service hosted outside of the US: https://about.gitlab.com/blog/2018/07/19/gcp-move-update/

                                                                                              1. 1

                                                                                                I stopped trusting and relying on third-party services for critical stuff long ago (git/email/storage/etc) and self-host all these.

                                                                                            1. 7

                                                                                              Unless it’s intentional, you might want to make it so that text is correctly justified and hyphenated instead of splitting lines in the middle of words. It would also be great if the body text were at a more readable size.

                                                                                              Either way, it looks and feels really cool! I like the lo-fi graphics.

                                                                                              1. 1

                                                                                                I tried using word-break: break-word and text-align: justify on my browser and it’s slightly (?) better. But there are still massive rivers in the text :/

                                                                                                1. 1

                                                                                                  hyphens: auto, maybe?

                                                                                                  1. 3

                                                                                                    I have the following in my CSS that seems to work fine. I’ve forgotten what each one does though.

                                                                                                    hyphens:               auto;
                                                                                                    hyphenate-limit-chars: 6 3 3;
                                                                                                    hyphenate-limit-lines: 2;
                                                                                                    hyphenate-limit-last:  always;
                                                                                                    hyphenate-limit-zone:  8%;
                                                                                                    
                                                                                                2. 1

                                                                                                  Yeah, you’re right. I’ve been playing around with different methods of line breaking, and will probably continue to do so. The biggest challenge with this has been getting articles of arbitrary size to fit within a given layout. I might move the backend to work out the best layout dynamically based on the articles, rather than finding the best articles for a given layout.

                                                                                                  I’m torn on text size. To me this is one of the things that most authentically makes it feel like a newspaper, and the application I’m interested in is reading on my iPad, where I can zoom in on any particular article. But I really value accessibility, so I’ll have a think about what I can do.

                                                                                                  1. 2

                                                                                                    About the font size: I just installed all of the proprietary Microsoft fonts on my Linux system (for a different reason) and now it reads much more easily. Going from Liberation Serif to actual Georgia makes the text appear to be a lot larger as well. I think this is an issue just because Georgia looks really big and most other fonts are much smaller in comparison. Maybe you could specify Merriweather as a fallback since it also is pretty large. That said, maybe you don’t want to introduce a web font dependency.

                                                                                                    Edit: you already have another .woff, so maybe you could add one for the body font.

                                                                                                    1. 1

                                                                                                      I’m torn on text size. To me this is one of the things that most authentically makes it feel like a newspaper,

                                                                                                      The only thing which makes newspaper column width legible is the justification of the text. You also need larger spaces between columns.

                                                                                                  1. 1

                                                                                                    His intro quote calls out the difficulty of editing an SSG on an iPad. This is an easy problem to avoid. One of the main reasons I like SSGs is the ease of editing and deployment. I run a blog off of Jekyll that is rebuilt every time I edit a file on GitHub. Just a simple Action that builds and FTPs it over to a host. So I typically just use GitHub’s browser editor on my phone. Can’t get much simpler than that, and the security on GitHub is probably better than the security on the WordPress sites I run.

                                                                                                    It’s cool that people like WordPress; it’s a popular tool. But the post seems to tl;dr to “I am familiar with WordPress and like it.” And that’s perfectly fine.

                                                                                                    But I like SSGs because they are much harder to crash and easier to recover and scale. WordPress takes more skill to maintain and requires a more expensive host.
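                                                                                                    As a sketch of that kind of setup (the workflow file, the third-party FTP action and all names here are illustrative assumptions, not the commenter’s actual config):

```yaml
# .github/workflows/deploy.yml — hypothetical Jekyll build-and-FTP sketch
name: deploy
on:
  push:
    branches: [main]
jobs:
  build-deploy:
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v4
      - name: Build site
        run: |
          bundle install
          bundle exec jekyll build
      - name: Upload over FTP
        uses: SamKirkland/FTP-Deploy-Action@v4.3.5
        with:
          server: ftp.example.com
          username: blog-deploy
          password: ${{ secrets.FTP_PASSWORD }}
          local-dir: _site/
```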

                                                                                                    1. 1

                                                                                                      I run a blog off of Jekyll that is rebuilt every time I edit a file on GitHub. Just a simple Action that builds and ftps it over to a host.

                                                                                                      How do you handle authentication? I’ve thought about setting up something similar, but the idea of putting SSH keys on github/gitlab/… makes me nervous.

                                                                                                      1. 2

                                                                                                        I created a limited access ftp account and store the password as a GitHub encrypted secret. GitHub’s security seems pretty decent and worst case scenario is that the password is compromised and all they can do is put something up on my site. At which point, I reset the account, wipe, and rebuild.