1. 1

I also had to get very familiar with CMake, which, as far as I know, is the only reasonable way of getting something similar to what you get with npm packages.

    CMake is, to be frank, not good. It’s barely better than autotools. It’s not built with layers and layers of separate text macro processing tools, but it’s about the same level of abstraction. It doesn’t really do “convention over configuration”.

    There’s a reason all the freedesktop/etc. projects ignored CMake, but switched to Meson ;)

    1.  

      Meson may be better in some aspects (generally shorter and cleaner-looking build files), but it’s much worse than CMake in others (awful documentation in comparison, at least in my view, and it’s very difficult to make it do certain things; it also lacks CMake’s neat GUI and TUI, and doesn’t have any packaging capabilities).

GNOME, for example, ended up adding a custom module directly to Meson. It would be horrible to use without it.

    1. 2

      Wide gamut and HDR displays are becoming more common and will be increasingly important, so wide gamut and HDR color picking is definitely a topic for further research and development, but it will not be considered here.

I personally don’t see much purpose in building any further upon sRGB instead of embracing that DCI-P3 et al. are here now and should be supported from the get-go. I believe P3 will soon enough be considered the standard gamut instead of “wide”, and HDR will come right along with it, so sticking to sRGB feels short-sighted.

      1. 1

        Uninformed pipe dream.

        sRGB is not going anywhere. In fact, scRGB, a “trivial” extension of it, is how HDR is and will be handled.

        Moreover, there isn’t a singular P3. Apple uses Display P3, which uses the gamma curve from sRGB.

And then there’s the Rec. 2020 colour space, which will eventually make DCI-P3 obsolete. How long do you want to keep building on standards that are already about to become obsolete?

        scRGB does the wonderful thing of extending sRGB to contain all colours, including those that do not exist, while being compatible with the vast majority of content in existence, and being easy to process.
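
As a minimal sketch of that compatibility (the transfer function constants are the standard sRGB ones; the sample values are just illustrative): scRGB keeps sRGB’s primaries and white point but stores linear values that may fall outside [0, 1], so ordinary content encodes exactly as before, and out-of-range colours only get clamped at the final SDR encode.

```c
/* Sketch: encoding an scRGB linear value to 8-bit sRGB for an SDR sink.
 * The constants are the standard IEC 61966-2-1 sRGB transfer function;
 * the sample values are purely illustrative. */
#include <math.h>
#include <stdio.h>

static double srgb_encode(double linear)
{
    if (linear <= 0.0031308)
        return 12.92 * linear;
    return 1.055 * pow(linear, 1.0 / 2.4) - 0.055;
}

static double clamp01(double v)
{
    return v < 0.0 ? 0.0 : (v > 1.0 ? 1.0 : v);
}

int main(void)
{
    /* Values in [0, 1] encode exactly as ordinary sRGB content would... */
    printf("0.5 -> %f\n", 255.0 * srgb_encode(clamp01(0.5)));
    /* ...while an scRGB value of 2.0 (beyond SDR white) only gets
     * clamped here, at the final encode for the SDR display. */
    printf("2.0 -> %f\n", 255.0 * srgb_encode(clamp01(2.0)));
    return 0;
}
```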

        I’m not sure how this all relates to the article, though.

      1. 4

Almost exactly 5 years after Nvidia explicitly asked for collaboration on the subject, to no avail. If anyone is curious about a rough breakdown of what that would entail (it takes more than this, but…): https://www.x.org/wiki/Events/XDC2016/Program/xdc-2016-hdr.pdf

        1. 9

          The alliance with nVidia is uneasy; they’re always trying to subvert it. Also the nVidia proposals are always over-engineered. This is a good example; application authors really don’t want to manage monitor-specific metadata, they just want to know that rendering with floats will give good HDR. This cuts the actual amount of proposed work in half, now only requiring changes to Wayland and Mesa so that FP16 is a valid channel depth.
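
On the client side, rendering with floats is already a well-trodden path; here is roughly what requesting a float buffer looks like with the existing EGL_EXT_pixel_format_float extension (a sketch; error handling omitted, and the driver has to expose the extension):

```c
/* Sketch: asking EGL for an FP16 colour buffer via the (real)
 * EGL_EXT_pixel_format_float extension. Error handling omitted. */
#include <EGL/egl.h>
#include <EGL/eglext.h>

EGLConfig choose_fp16_config(EGLDisplay dpy)
{
    static const EGLint attribs[] = {
        EGL_COLOR_COMPONENT_TYPE_EXT, EGL_COLOR_COMPONENT_TYPE_FLOAT_EXT,
        EGL_RED_SIZE,   16,
        EGL_GREEN_SIZE, 16,
        EGL_BLUE_SIZE,  16,
        EGL_ALPHA_SIZE, 16,
        EGL_NONE
    };
    EGLConfig cfg = 0;
    EGLint n = 0;
    eglChooseConfig(dpy, attribs, &cfg, 1, &n);
    return n > 0 ? cfg : 0;
}
```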

          I suppose that now I have to contact RH about the job listing. Curses.

          1. 5

So would IBM, Qualcomm or anyone else. A company with as large a technology gap on such a strong control surface would act just as badly given the opportunity – that doesn’t exclude competitive intelligence practices, but rather the opposite. Listening to Nvidia doesn’t mean that you have to play nice, just accept that they are ahead of the curve, listen to what they have to say, and wait for the day to stab them back.

            This is a good example; application authors really don’t want to manage monitor-specific metadata, they just want to know that rendering with floats will give good HDR. This cuts the actual amount of proposed work in half, now only requiring changes to Wayland and Mesa so that FP16 is a valid channel depth.

The problem space is much more involved, even for the most trivial fullscreen case (see how Kodi does it in their DRM backend if you don’t believe me) – for a composited desktop where you have different colour spaces, primaries, transfer functions, … there is no established solution, and the controls still aren’t all in place (backlight and ambient sensor especially). The clients do need to know the presentation colour space versus the content mastering colour space for games, videos and photos alike – which are the ones that will need HDR anyhow. Any mismatch that requires tonemapping is a substantial bandwidth hog.

I worked on this very problem on behalf of Sony for a while ~10 years ago, and still do scRGB scanout/composition here on displays from 400 to 1000 nits. FP16 is just the packing format/depth, not what the coordinates *represent*, nor the transfer function(s) between spaces and primaries between that and what the hardware can present. It’ll fight your other composition pipeline (alpha channels and effects) even if you have the bandwidth to composite and scan out in scRGB. EGL wasn’t prepared for this (or anything, really), and GL sure wasn’t prepared for this. While that’s something for the employer, the actual gear needed for verification (colorimeter and so on) is far from cheap either.

          2. 1

            Thanks a lot for the link, now I know everything I wanted to about HDR.

          1. 1

            Well that’s scathing. Sounds like GTK needs a fork.

            1. 1

              There will be no fork without the people to maintain it. They will not appear out of thin air. It is not an easy job to make any substantial changes in the GNOME stack, and not make things worse in the process.

              GTK+ 3 is a horrible hodgepodge of obsoleted, partially removed features, and badly integrated half-baked new ones. GTK+ 4 has received considerable clean-ups in that regard, at the cost of discarding some desirable things with it. It’s not strictly worse, nor is it strictly better.

              You can see version 3 as the best fork there is, until the next major version gets mostly abandoned by upstream.

            1. 31

              It’s odd to see C described as boring. How can it be boring if you’re constantly navigating a minefield and a single misstep could cause the whole thing to explode? Writing C should be exhilarating, like shoplifting or driving a motorcycle fast on a crowded freeway.

              1. 17

                Hush! We don’t need more reasons for impressionable youngsters to start experimenting with C.

                1. 11

Something can be boring while still trying to kill you. One example is described in Things I Won’t Work With.

                  1. 1

                    ‘Boring’ is I suspect the author’s wording for ‘I approve of this language based on my experiences’.

                    1. 10

                      I suspect “boring” is used to describe established languages whose strengths and weaknesses are well known. These are languages you don’t spend any “weirdness points” for picking.

                      1. 6

                        ‘Boring’ is I suspect the author’s wording for ‘I approve of this language based on my experiences’.

                        I’m curious if you read the post, and if so, how you got that impression when I said things like “it feels much nicer to use an interesting language (like F#)”, “I still love F#”, etc.

                        Thanks for the feedback.

                        1. 4

                          I found your article pretty full of non-sequiturs and contradictions, actually.

boring languages are widely panned. … One thing I find interesting is that, in personal conversations with people, the vast majority of experienced developers I know think that most mainstream languages are basically fine,

                          Are they widely panned or are they basically fine?

                          But when I’m doing interesting work, the boilerplate is a rounding error and I don’t mind using a boring language like Java, even if that means a huge fraction of the code I’m writing is boilerplate.

                          Is it a rounding error or is it a huge fraction? Once the code has been written down, it doesn’t matter how much effort it was to mentally wrestle with the problem. That was a one-time effort, you don’t optimize for that. The only thing that matters is clearly communicating the code to readers. And if it’s full of boilerplate, that is not great for communication. I want to optimize for clear, succinct communication.

                          Of course, neither people who are loud on the internet nor people I personally know are representative samples of programmers, but I still find it interesting.

                          I’m fairly sure, based on this, that you are just commenting based on your own experiences, and are not claiming to have an unbiased sample?

                          To me it basically seems that your argument is, ‘the languages which should be used are the ones which are already used’. The same argument was used against C, C++, Java, Python, and every other boring language you can think of.

                          1. 3

                            Are they widely panned or are they basically fine?

                            I think the point is that the people who spend a lot of time panning boring languages (and advocating their favourite “interesting” one) are not representative of “experienced developers”. They’re just very loud and have an axe to grind.

                            1. 1

I’m having a tough time reconciling this notion that a narrow section of loudmouths criticizes ‘boring languages’ with ‘widely panned’, which to me means ‘panned by a wide or significant section’.

But it’s really quite interesting how the experienced programmers who like ‘boring languages’ are the ones being highlighted here. It raises the question: what about the experienced programmers who don’t? Are they just not experienced enough? Sounds like an unquestionable dogma to me. If you don’t like the boring languages in the list, you’re just not experienced enough to realize that languages ultimately don’t matter.

                              Another interesting thing, some essential languages of the past few decades are simply not in this list. E.g. SQL, JavaScript, shell. Want to use a relational database, make interactive web pages, or just bash out a quick script? Sorry, can’t, not boring enough 😉

Of course that’s a silly argument. The point is to use the right tool for the job. Sometimes that’s low-level real-time stuff that needs C, sometimes it’s safety-critical high-perf stuff that needs Ada or Rust, and sometimes you need a performant language with good domain modelling and safety properties like OCaml or F#. Having approved lists of ‘boring languages’ is a silly situation to get into.

                              1. 2

                                To be honest, I don’t really see why that’s hard to reconcile at all. Take an extreme example:

                                Let’s say programming language X is used for the vast majority of real world software development. Through some strange mechanism (doesn’t matter), programmers who write language X never proselytize programming languages on the Internet. Meanwhile, among the set of people who do, they almost always have nasty things to say about X. So, all the articles you can find on the general topic are at least critical of X, and a lot of them are specifically about how X is the devil.

                                Is saying that X is “widely panned” accurate? Yes.

                                Of course that’s a silly argument.

                                Yes it is.

                                The point is to use the right tool for the job.

                                Indeed.

                        2. 5

                          Normally I’d lean towards this interpretation, but I’ve read many other posts by this author and he strikes me as being more thoughtful than that. Perhaps a momentary lapse in judgement; happens to everyone I suppose.

                        3. 1

                          That does not sound any different from most other languages. You have described programming.

                          To expand a bit on that, GNOME is full of assertions, and it’s quite hard to make it crash internally.

                        1. 3

                          Heh, seems this is the third time this has been posted. I wonder how much real world use it’s gotten?

                          I love the idea of being able to program this way, in sensitive regions of the code. I’m less thrilled by using an entirely different language to do it, esp. one with some odd syntactic quirks like resurrecting Pascal’s <> operator instead of !=.

I’d love to see these features added as extensions to C/C++, like specifying a valid range for an integer variable so a static analyzer and/or undefined-behavior sanitizer can use that knowledge. I think Clang already has some of these, like annotations stating facts that guarantee a condition is true. Better support for pre- and postconditions would be great.
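
As a sketch of what that could look like today, using Clang’s real `__builtin_assume` (the dedicated first-class range annotation is the wished-for part):

```c
/* Sketch: stating a valid range for an integer so the optimizer and
 * sanitizers may rely on it. __builtin_assume is a real Clang builtin;
 * a first-class "range attribute" is the wished-for extension. */
int scale(int volume)
{
    __builtin_assume(volume >= 0 && volume <= 100);
    /* With the assumption visible, a checker can see that the
     * multiplication cannot overflow and the result is in [0, 255]. */
    return (volume * 255) / 100;
}
```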

                          1. 4

Frama-C already supports this via the WP plugin. You can prove sections of code free from overflow.

                            A good tutorial: https://allan-blanchard.fr/publis/frama-c-wp-tutorial-en.pdf
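
For a taste, a minimal ACSL-annotated function of the sort the tutorial walks through (a sketch of mine, not taken from the tutorial; `frama-c -wp` can discharge the overflow obligation here):

```c
/* Sketch: an ACSL contract that lets Frama-C's WP plugin prove the
 * addition free of signed overflow. */
#include <limits.h>

/*@ requires x < INT_MAX;
    ensures \result == x + 1;
    assigns \nothing;
*/
int incr(int x)
{
    return x + 1;
}
```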

                            1. 1

                              The main part of the project is its image loading library, which unites several formats under one roof. I’m working on an image viewer that uses it. And I’m about as slow at it as Nigel is at adding all the things that I need.

                            1. 2

                              What I’d really like to see is a company like System76 investing more in applications rather than the desktop environment. The thing that System76 has gotten right IMO is support for gaming on Linux, I’ve set up PopOS on a Core i7 machine with a mid-range NVIDIA card and it was really easy to get Proton working and download a few games with Steam.

                              My wants from a DE are very minimal. On my PopOS machine I’ve installed Regolith and love it. The PopOS tiling implementation is just a little wanting IMO. (For one thing I kept inadvertently toggling… something and switching the tiles from one mode to another that was unusable.)

                              I’d be happy to pay for a decent mail client and music player. None of the alternatives are quite right, these days. I recall loving Banshee back in the day, but it’s stagnated since Novell stopped supporting people working on it.

                              After 20+ years I’ve given up on the hope that the Linux desktop will ever congeal into something that appeals to more than 3-5% of users. I honestly don’t know who’s right or wrong in this latest kerfuffle, or if it matters, just that as a user it’s hard to feel like you can count on any particular stack being stable for more than a few years at a time. (Not that proprietary alternatives are any better…)

                              1. 2

I’d be happy to pay for a decent mail client and music player. None of the alternatives are quite right, these days.

I agree wholeheartedly. I like the approach of Claws Mail, but it was not as polished and reliable as I need for archiving all my SMTP accounts.

                                mac’s Mail was the best, but… that’s on a “locked-down vampiric device” to quote Eben Moglen hehe

                                1. 1

                                  My wife is completely tied into the macOS / iOS ecosystem. It has its allure. I wish I could, with a straight face, recommend Linux to her, but she’d be frustrated with it in less than a week.

                                2. 1

                                  What do you want from a music player that cannot be found?

                                1. 25

                                  I don’t think the choice of programming language is one of the fundamental problems preventing the Linux desktop from being competitive with its proprietary equivalents.

                                  1. 18

                                    I don’t know what the reason is, honestly. But I do think that programming language(s) are an actual issue.

                                    The Linux ecosystem has this really strange Stockholm Syndrome around using the C programming language, even for high-level tasks, like GUI apps. That’s… insane. What a huge waste of time and energy to deal with such a primitive and bug-prone language when we have tons of other options that are much more productive and will probably perform exactly as well.

                                    I spent 7-ish years writing C++ and while I theoretically can read and understand C code that’s presented to me, I have ZERO confidence that I could write reasonably correct C, even if I sat and read a modern C book cover-to-cover. Then, I’m also sure that I’d spend WAY more time and energy making sure that I managed memory correctly and using the correct string functions (because, you know, basic text manipulation being a huge security problem in C makes it a GREAT choice for a user interface), than actually figuring out the best data structures or correct business logic.

                                    Honestly, Rust might not be the best language, either, because it’s a little too fiddly for writing applications, IMO (I think Rust would be a great fit for utilities, libraries, toolkits, etc). I think a language like Kotlin would be pretty good for GUI apps, actually. Maybe even Go.

                                    1. 1

                                      One of the main reasons I keep writing even GUIs in C is that everything substantial written for *nix is also based on C, and extremely laborious to reimplement. Language abstractions are painful to deal with–I might as well spare myself the hours of troubleshooting broken or incomplete bindings. New core libraries are also written in C, or at minimum export C ABIs, because just about anything can make use of C. It’s the lingua franca of Unix.

                                      Writing correct C isn’t particularly harder than it is with many other languages, so long as you use decent abstractions. Text in particular is almost a non-issue, if we’re not talking about Unicode.

                                      I think I spend the most time on making my C look pretty.

                                    2. 13

To me, the issue is one of focus. I hope that System76’s incentives (to sell more computers) align better with what I want from a computing environment than the existing open source environments’ do. I’m not counting on it, but I think that, depending on project governance, they could do some things.

                                      ETA: I DGAF about Rust, but if that tightens scope, then good on them.

                                      1. 1

                                        Try to do anything involved within the GNOME ecosystem, and say that again. I dare you.

                                        That being said, KDE doesn’t have the same impediment (could be better, isn’t really), and macOS builds on top of some horrendous stuff (and it still is competitive).

                                        1. 2

                                          Haha, I actually played with the CORBA stuff in GNOME back in the day (wrote a Ruby wrapper for it). C++ is certainly superior to C for GUI programming, I won’t deny that. But I still think the fundamental problem is misaligned incentives. A polished desktop experience just requires a lot of boring work that volunteer developers don’t want to do, and a lot of money to get it done.

                                      1. 3

                                        This is an enormous amount of work, you can’t imagine just how much. You’ll get to know the toolkit and adjacent libraries inside-out. The basic set of standards is https://gitlab.freedesktop.org/xdg/xdg-specs plus https://gitlab.freedesktop.org/xdg/shared-mime-info/.

                                        1. 3

                                          Most of these toolkits require GL and thus Cgo. It’s good that they exist, but sad that they’re dirty in this way.

                                          Though, the unikernel puzzles me.

                                          1. 4

                                            [I do not have a hat for it, but am a Gio maintainer]

Yes, Gio does require GL and CGO for most platforms. On Windows it actually bypasses CGO, so you can cross-compile a Gio application for Windows from any other OS trivially. The requirement for CGO is less onerous than I expected it to be, though. It’s been really easy to build and distribute Gio applications for all OSes in my experience.

The unikernel was a demonstration that you can build special-purpose applications with GUIs easily. I think (though I’m having trouble remembering) that Elias’ goals there were both to demonstrate a way of sandboxing applications without running a whole virtualized OS, and to offer a possible kiosk-style deployment option. Looks like Elias talked about it during the First Community Call if you want to hear it from him.

                                            1. 2

                                              I’m still trying to figure out how I’m going to maintain the independence from cgo on Windows while incorporating my Rust-based AccessKit project. I don’t think reimplementing AccessKit in Go will be a reasonable solution, nor do I want to implement it solely in Go, as that would hinder adoption by code in other languages. And I do indeed want AccessKit to be used across multiple languages, so as not to spread accessibility efforts too thin. I’m not even sure that implementing UI Automation (the Windows accessibility API) in pure Go would be feasible anyway; UIA tends to make many calls into the COM interfaces implemented by the application, and we’d need to measure the current overhead of calling into Go from outside.

                                              I plan to provide a C ABI wrapper for AccessKit. So one option would be to compile that as a DLL, then call that DLL using Go’s syscall package. But that would require Gio users to distribute a DLL with their Windows applications, which they don’t have to do now. And if you make the DLL optional, you can bet that some developers will omit it, leading to inaccessible applications. I saw that happen when Qt implemented accessibility in a plugin. One of my goals with AccessKit is to eliminate as many excuses as possible for omitting accessibility, and make it impossible for downstream developers to turn off, including accidentally. So if we go with the DLL option, Gio would need to fail to run on Windows without that DLL, and I understand this may be unacceptable.
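
To illustrate only the shape of that option, here is a hypothetical export surface such a DLL might present; every name below is invented for illustration and is not AccessKit’s actual API.

```c
/* Hypothetical sketch of a C ABI a wrapper DLL might export so a Go
 * host could load it with the syscall package instead of cgo. These
 * names are invented for illustration, not AccessKit's real API. */
#include <stdint.h>

#ifdef _WIN32
#define AK_EXPORT __declspec(dllexport)
#else
#define AK_EXPORT
#endif

typedef struct ak_adapter ak_adapter;

AK_EXPORT ak_adapter *ak_adapter_new(void *hwnd);
AK_EXPORT void ak_adapter_push_update(ak_adapter *a, const char *tree_update);
AK_EXPORT void ak_adapter_free(ak_adapter *a);
```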

                                              Of course, one option is to simply require cgo on Windows. But that would require application developers to have a MinGW toolchain, which they don’t need now. That would lead to another excuse for omitting accessibility.

                                              Elias suggested that it might be possible to use a .syso file. But given that I’m going to be using Rust, it may require some elaborate toolchain hacking to produce a suitably self-contained .syso file. It would also likely require using a GNU toolchain, which AFAIK isn’t an option for Windows on ARM64. I don’t know if Gio is running on that combination of OS and architecture yet, but I know Go is.

                                              So I see no really good option right now.

                                            2. 1

                                              Is there a quick, obvious way to tell if a project requires cgo without having to grep the source or trying to build with cgo disabled? I’m allergic to cgo, and am often annoyed with how long it takes to figure out if an external thing needs it when I’m evaluating a long list of possible external things I might want to use.

                                            1. 2

                                              I have some objections about the lack of list nesting, or any option to include preformatted blocks within them, though if desperately needed, one could probably pick, e.g., – or —, and have it rendered as a preformatted block, as it did before.

The reformatting rules look rather invasive, but even though they’re presented as catering to Markdown extremists, one can see merit behind most of them.

                                              It will need some getting used to. Let’s see how it develops before merging.

                                              1. 3

                                                I’m confused… Haven’t we had great NTFS support for years?

                                                1. 28

                                                  More context: https://arstechnica.com/gadgets/2021/08/paragon-is-working-to-get-its-ntfs3-filesystem-into-the-linux-kernel/

                                                  Both existing implementations have problems, however. The in-kernel implementation of NTFS is extremely old, poorly maintained, and should only be used read-only. As a result, most people who actually need to mount NTFS filesystems on Linux use the ntfs-3g driver instead.

                                                  Ntfs-3g is in reasonably good shape—it’s much newer than the in-kernel ntfs implementation, and as Linux filesystem guru Ted Ts’o points out, it actually passes more automated filesystem tests than Paragon’s own ntfs3 does.

                                                  Unfortunately, due to operating in userspace rather than in-kernel, ntfs-3g’s performance is abysmal. In Ts’o’s testing, Paragon’s ntfs3 completed automated testing in 8,106 seconds—but the FUSE-based ntfs-3g required a whopping 34,783 seconds.

                                                  (In summary, what the other commenter said.)

                                                  1. 17

                                                    The kernel driver has never had stable read-write support. The FUSE driver has been reasonably stable but is quite slow.

                                                  1. 17

                                                    Though it’d be useful in some ways, I’m not convinced the juice from “JSON all the things” would be worth the squeeze. If you’re going to touch every tool on a system for a reason like this, it might make more sense to go all the way, and sling objects around powershell-style. The resulting system would be more powerful and have less overhead, and you could easily pipe those objects through a “jsonify” utility to get json out of any tool where that’d be beneficial.

                                                    1. 3

                                                      I enthusiastically agree!

                                                      I’ve been exploring Powershell lately, and I think its object pipelines model is so incredibly powerful, I’m gonna gush about it for a minute.

                                                      When you express everything as objects with a list of named parameter bearing methods, and include mechanisms for making documentation trivial to add, you wind up with this amazingly rich system that’s 100% explorable by users interactively from the command line.

                                                      I can’t express as a 30+ year UNIX fan how liberating this is. Rather than having to rely on man pages which may or may not be present, I can query the command itself for what its parameters are. And I can combine objects in all sorts of interesting ways that would be very difficult if not impossible through strict adherence to the “everything is a stream of bytes” mantra.

                                                      Other systems like AppleScript and ARexx have done this before to greater or lesser extents, and IMO Jeffrey Snover (an astonishingly smart dude who was involved in the POSIX shell spec in some way) has learned from all of them.

                                                      We should totally steal back his great ideas and help move UNIX forward!

                                                      1. 1

It won’t really be UNIX any more, so it’s more a case of moving on from UNIX than moving it forward.

                                                        1. 3

                                                          So UNIX is forever fixed to a set of rules around how its userland shell and applications will interact and innovation is verboten?

                                                          That feels like a mistake to me. UNIX is what we say it is, and it must either evolve or ultimately, over the VERY long haul, die.

                                                          I’m not suggesting that it’s Powershell style objects or bust, that’s just one model I personally find very attractive, but to my mind the question the author is asking the UNIX community is a valid one: Is there a richer model we can use to allow applications to interact with each other and promote a richer palette of possibilities for users and application developers?

                                                          I see that as a question worth asking, and I think changing the way UNIX shells and apps interact should be on the table for this ongoing dialog.

                                                          1. 2

                                                            Is there anything wrong with UNIX dying? Isn’t it already, with the madness going on in Linux land? The changes you want or propose are radical—like making a sports car out of a Humvee. They have far-reaching consequences. And trying to make a bad superset seems deeply unappealing.

                                                            1. 2

                                                              No, there isn’t, but if there’s one thing I’ve learned from hard experience over 30 years in this business it’s that being UTTERLY closed and inflexible to change is rarely the correct strategy.

You don’t need to want any particular change, or be open to every change, but becoming hidebound about ANYTHING in this industry is bound to cause problems for you, if not for the technology in question.

                                                      2. 3

What should go in an object that’s missing from JSON? Methods? But then you’re talking bi-directional communication, not pipes.

                                                        1. 4

This is a pretty good intro, I think. Even if you want to avoid methods, think that JSON could serialize everything you care about reasonably well, and don’t consider the extra serialization needlessly wasteful, passing object handles from one tool to another still gives you a certain liveness missing from a serialized snapshot. For example, the ifconfig gadget in the OP could pass a set of handles which tools later in the pipeline could query for properties. So if the system’s DHCP client updates the IPv4 address for an interface between the time the list is created and the time the property is inspected, the second tool would see the up-to-date address.
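
A toy sketch of that difference (all names invented): the handle carries a getter, so the value is produced when the downstream tool asks for it, not when the pipeline was assembled.

```c
/* Toy sketch of a "live" handle versus a serialized snapshot; every
 * name here is invented for illustration. */
#include <stdio.h>

struct iface_handle {
    const char *name;
    /* Evaluated at read time, so a DHCP renewal happening between
     * pipeline stages is visible to the next tool. */
    const char *(*ipv4_addr)(const struct iface_handle *self);
};

static const char *query_addr(const struct iface_handle *self)
{
    (void)self;
    return "192.168.1.2"; /* a real handle would ask the kernel here */
}

int main(void)
{
    struct iface_handle eth0 = { "eth0", query_addr };
    /* A JSON snapshot would have frozen the address at pipe creation;
     * the handle defers the query to the moment of use. */
    printf("%s: %s\n", eth0.name, eth0.ipv4_addr(&eth0));
    return 0;
}
```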

                                                          1. 3

                                                            Right, so you want live objects, not piped data. Dbus is pretty good at this.

                                                      1. 7

                                                        The fact that a substantial part of the FOSS community seriously prefers using what is effectively the Windows 1.01 interface with a few more features and anti-aliasing instead of any of the results of nearly a decade of UX-focused work in KDE, Gnome, or Cinnamon is a pretty convincing hint

                                                        I’m not sure this person understands the motivation of using i3. I would argue it’s got nothing to do with the topic presented. This happened also on the other side while it was still viable. https://sourceforge.net/projects/blueboxshell/ for windows 2k was my barebones alternative shell for a long time.

                                                        trying to emulate the best parts of Windows’ GUI for about twenty years now

Grass is greener and all that… Which Windows GUI? Win32, which doesn’t fit in the modern world but is everywhere; WinForms, which is kind of the same thing but not; XAML, which “everyone” says is dead now; UWP, which is the same thing but different; will WinUI maybe win now? Windows has lots of its own GUI toolkit identity crisis situations.

                                                        1. 10

                                                          Windows has lots of its own GUI toolkit identity crisis situations.

                                                          It does, but backwards compatibility makes a huge difference, and is a big part of the reason why there are so many apps for everything on Windows, and so few of them on Linux. Linux had a lot of applications over the years, it’s just they (or their dependencies) get ritually burned and abandoned every 8-10 years so there’s a perpetual lack of availability.

Lots of Windows applications are actually pretty old. I use IrfanView on my one Windows machine because that’s what I used 20+ years ago when I was a Windows user. It uses ye olde Win32 toolkit, which is maybe not the prettiest (that’s obviously subjective though; I actually like it a lot, it’s fast, responsive and efficient), but it’s there, and it has been for a long time, and it has thirty years’ worth of bugfixes at this point.

                                                          Few Linux applications get to be 25 years old though – it takes a lot of effort just to keep up with the changes and deprecations in the GUI toolkits. Back in 2009 I wrote a small GTK app that folks continued to use at the lab where I worked at the time until last year or so. It has not received a single new feature since then and it got maybe four or five bugfixes. But I’ve pushed almost 100 commits (give or take, I’m just grepping for gtk in the commit log) just to keep it working through the GTK 3 saga. Some of these are workarounds for the more brain-damaged “features” like active window dimming, but most of them just do impedance matching.

                                                          (Edit: this isn’t really GTK-specific. Things are a little better on the Qt side but GUI toolkits are just the tip of the iceberg).

                                                          IrfanView has had one release a year for the longest time now. I hear things have calmed down a bit nowadays, but back in 2013, if this had been a generally-available application, and not just something a bunch of nerds in a lab use, I’d have had to make 2 or 3 maintenance releases a year just to keep the damn thing compiling.

This has a lot of far-reaching consequences. Say, if you compile a 15 year-old codebase against the latest Windows SDK, it’s still the same 15 year-old app and it still uses ye olde Win32 API, but with all the fixes up to yesterday. If you want to run XMMS because that’s your thing, you can, but you’re going to compile it against unmaintained, Unicode-unaware GTK 1.x from 15 years ago, with everything that entails.

                                                          1. 1

                                                            I’ve also written a GTK+ program a decade ago, and GTK+ 3 mostly brought unfixable problems, offering nothing but perhaps the option to achieve height-for-width size negotiation without a (reliable) hack.

                                                        1. 17

                                                          Genuinely I don’t understand the point of this article.

I would pick even GNOME or KDE over Windows’s awful GUI (really any of the recent ones, but certainly Windows 10), even though I use i3. Using Windows is just… annoying… frustrating… painful… I have a top-of-the-line laptop from Dell with an NVIDIA GPU, 32GiB of RAM and a top-of-the-line (at the time) Intel mobile-class CPU. But the machine still finds a reason to bluescreen, randomly shut down without safely powering down my VMs, break, or god knows what, all the time. And when such a thing happens there are no options to debug it, there’s no good documentation, no idea of where to even start. I’m glad Windows works for some people, but it doesn’t work for me. What wakeup call? What do I need to wake up to? I use Linux among other things; it’s not perfect, but for me it’s the best option.

                                                          1. 10

                                                            (NB: I’m the author of the article, although not the one who submitted it)

                                                            Genuinely I don’t understand the point of this article.

                                                            The fact that it’s tagged “rant” should sort of give it away :P. (I.e. it’s entirely pointless!)

                                                            There is a bit of context to it that is probably missing, besides the part that @crazyloglad pointed out here. There is a remarkable degree of turnover among Linux users – nowadays I maybe know 6-7 people who use Linux or a BSD full time, but I know dozens who don’t use it anymore.

And I think one of the reasons for that is the constant software churn in the desktop space. Lots of things, including various GTK/Gnome or KDE components, ritually get torn down, burnt and rebuilt every 6-8 years or so, and at one point you just get perpetual beta fatigue. I’m not sure what else to call it. Much of it, in the last decade, has been in the name of “better” or “more modern” UX, and yet we’re not in a much better position than ten years ago in terms of userbase. Meanwhile, Microsoft swoops in and, on their second attempt, comes up with a pretty convincing Linux desktop, with a small crew and very little original thought around it, just by focusing on things that actually make a difference.

                                                            1. 15

I suspect that Microsoft is accidentally the cause of a lot of the problems with the Linux desktop. Mac OS, even back in the days when it didn’t have protected memory and barely qualified as an operating system, had a clear and coherent set of human interface guidelines. Nothing on the system was particularly flashy[1], and so it was hard to really understand the value of this consistency unless you used it for a few months. Small things like the fact that you bring up preferences in every application in exactly the same way (same menu location, same keyboard shortcut), text field navigation with mouse (e.g. selecting whole words) or shortcut keys is exactly the same, and button order is consistent in every dialog box. A lot of apps brought their own widget set, in part because ’90s Microsoft didn’t want to give away the competitive edge of Office and so didn’t provide things in the system widget set that would have made writing an Office competitor too easy.

                                                              In contrast, the UI situation on Windows has always been a mess. Most dialog boxes put the buttons the wrong way around[2], but even that isn’t consistent and some put them the right way around. The ones that do get it right just put ‘okay’ and ‘cancel’ on the buttons instead of verbs (for example, on a Mac if you close a window without saving the buttons are ‘delete’, ‘cancel’, ‘save’).

                                                              Macs are expensive. Most of the people working on *NIX desktop environments come from Windows. If they’ve used a Mac, it’s only for a short period, not long enough to learn the value of a consistent UI[3]. People always copy the systems that they’re familiar with and when you’re trying to copy a system that’s a bit of a mess, it’s really hard to come up with something better. The systems that have tried to copy the Mac UI have typically managed the superficial bits (Aqua themes) and not got any of the parts that actually make the Mac productive to use.

                                                              [1] When OS X came out, Apple discovered that showing people the Genie animations for minimising in shops increased sales by a measurable amount. Flashiness can get the first sale, but it isn’t the thing that keeps people on the platform. Spinning cubes get old after a week of use.

[2] Until the ’90s, it was believed that this should be a locale-dependent thing. In left-to-right reading order locales, the button implying “go back” should be on the left and the one implying “go forwards” should be on the right. In right-to-left reading order locales, it should be the converse. More recent research has shown that the causation was the wrong way around: left-to-right writing schemes are dominant because humans think left-to-right is forwards motion, people don’t believe left-to-right is forwards because that’s the order that they’re taught to read. Getting this wrong is really glaring now that web browsers are dominant applications, because they all have a pair of arrows where <- means ‘go back’ and -> means ‘go forwards’, and yet they will still pop up dialogs with the buttons ordered as [proceed] [go back], as if a human might find that intuitive.

                                                              [3] Apple has also been gradually making their UIs less consistent over the last 10-15 years as the HCI folks (people with a background in cognitive and behavioural psychology) retired and were replaced with UX folks (people who followed fads in what looks shiny and had no science to justify their decisions).

                                                              1. 13

                                                                IMHO the fact that, despite how messy it is, the Windows UI is so successful, points out at something that a lot of us don’t really want to admit, namely that consistency just isn’t that important. It’s not pointless, as the original Macintosh very convincingly demonstrated, especially with users who aren’t into computers as a hobby. But it’s not the holy grail, either.

                                                                Lots of people sneer at CAD apps (or medical apps, I have some experience with that), for example, because their UIs are old and clunky, and they’re happy to ascribe it to the fact that the megacorps behind them just don’t know how to design user interfaces for human users.

But if they were, in fact, to make a significant facelift, flat, large buttons, hamburger menus and all, their existing users, who rely on these apps for 8 hours/day to make those mind-bogglingly complex PCBs and ICs, and who (individually or via their employers) pay those eye-watering licenses, would hate them and would demand their money back and a downgrade. A facelift that modernized the interface and made it more “intuitive”, “cleaner” and “more discoverable” would be – justifiably! – treated as a (hopefully, but not necessarily) temporary productivity killer that’s entirely uncalled for: they already know how to use it, so there’s no point in making it more intuitive or more discoverable. Plus, these are CAD apps, not TikTok clones. The stakes are higher, and you’re not going to rely on guts and interface discoverability; if you’re in doubt, you’re going to read the manual.

                                                                If you make applications designed to offer a quick distraction, or to hook people up and show them ads or whatever, it is important to get these things right, because it takes just two seconds of frustration for them to close that stupid app and move on – after all it’s not like they get anything out of it. Professional users obviously don’t want bad interfaces, either, but functionality is far more important to get right. If your task for the day is to get characteristic impedance figures for the bus lines on your design, and you have to choose between the ugly app that can do it automatically and the beautiful, distraction-free, modern-looking app that doesn’t, you’re gonna go with the ugly one, because you don’t get paid for staring at a beautiful app. And once you’ve learned how to do it, if the interface gets changed and you have to spend another hour figuring out how to do it, you’re going to hate it, because that’s one hour you spend learning how to do something you already knew how to do, and which is not substantially different than before – in other words, it’s just wasted time.

                                                                Lots of FOSS applications get this wrong (and I blame ESR and his stupid Aunt Tilly essay for that): they ascribe the success of some competitors to the beautiful UIs, rather than functionality. Then beautiful UIs do get done, sometimes after a long time of hard work and often at the price of tearing down old functionality and ending up with a less capable version, and still nobody wants these things. They’re still a footnote of the computer industry.

                                                                I’ve also slowly become convinced of something else. Elegant though they may be, grand, over-arching theories of human-computer interactions are just not very useful. The devil is in the details, and accounting for the quirky details of quirky real-life processes often just results in quirky interfaces. Thing is, if you don’t understand the real life process (IC design, neurosurgery procedures, operation scheduling, whatever), you look at the GUIs and you think they’re overcomplicated and intimidating, and you want to make them simpler. If you do understand the process, they actually make a lot of sense, and the simpler interfaces are actually hard to use, because they make you work harder to get all the details right.

                                                                That’s why academic papers on HCI are such incredible snoozefests to read compared to designer blogs, and so often leave you with questions and doubts. They make reserved, modest claims about limited scenarios, instead of grand, categorical statements about everyone and everything. But they do survive contact with the real world, and since they’re falsifiable, incorrect theories (like localised directionality) get abandoned. Whereas the grand esoteric theories of UX design can quickly weasel their way around counter-examples by claiming all sorts of exceptions or, if all else fails, by simply decreeing that users don’t know what they want, and that if a design isn’t as efficient as it’s supposed to be, they’re just holding it wrong. But because grand theories make for attractive explanations, they catch up more easily.

(Edit: for shits and giggles, a few years ago, I did a quick test. Fitts’ Law gets thrown around a lot as a reason for making widgets bigger, because they’re easier to hit. Never mind that’s not really what Fitts measured 50 years ago – but if you bother to run the numbers, it turns out that a lot of these “easier to hit” UIs actually have worse difficulty figures, because while the targets get bigger, the extra padding from so many bigger targets adds up, and travel distances increase enough that the difficulty index is, at best, only marginally improved. I don’t remember what I tried to run numbers on, I think it was some dialogs in the new GTK3 release of Evolution and some KDE apps with the larger Oxygen theme – in some cases they were worse by 15%.)
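
For reference, the Shannon formulation usually used for the index of difficulty is

$$ID = \log_2\!\left(\frac{D}{W} + 1\right)$$

so with made-up but representative numbers: double a target’s width from W = 20 to W = 40 px while the accumulated padding pushes the travel distance from D = 100 to D = 260 px, and ID goes from log2(6) ≈ 2.58 to log2(7.5) ≈ 2.91 – a bigger target, yet harder to hit by the very metric being cited.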

                                                                Apple has also been gradually making their UIs less consistent over the last 10-15 years as the HCI folks (people with a background in cognitive and behavioural psychology) retired and were replaced with UX folks (people who followed fads in what looks shiny and had no science to justify their decisions).

                                                                This isn’t limited to Apple, though, it’s been a general regression everywhere, including FOSS. I’m pretty sure you can use Planet Gnome to test hypertension meds at this point, some of the UX posts there are beyond enraging.

                                                                1. 1

                                                                  AutoCAD did make a significant facelift, cloning the Office 2007 “ribbon” interface, also a significant facelift.

                                                                  1. 1

                                                                    AutoCAD is in a somewhat “privileged” position, in that it has an excellent command interface that most old-time users are using (I haven’t used AutoCAD in years, but back when I did, I barely knew what was in the menus). But even in their case, the update took a while to trickle down, it was not very well received, and they shipped the “classic workspace” option for years along with the ribbon interface (I’m not sure if they still do but I wouldn’t be surprised if they did).

                                                                2. 4

                                                                  More recent research has shown that the causation was the wrong way around: left-to-right writing schemes are dominant because humans think left-to-right is forwards motion, people don’t believe left-to-right is forwards because that’s the order that they’re taught to read.

                                                                  Do you have a good source for this? Arabic and Hebrew are prominent (and old!) right-to-left languages; it would seem more likely (to me) that a toss of the coin decided which direction a civilization wrote rather than “left-to-right is more natural and a huge chunk of civilization got it backwards.”

                                                                3. 2

                                                                  There is a remarkable degree of turnover among Linux users – nowadays I maybe know 6-7 people who use Linux or a BSD full time, but I know dozens who don’t use it anymore.

                                                                  I think that chasing the shiny object is to blame for a lot of that. Some times the shiny object really is better (systemd, for all its multitude of flaws, failures, misfeatures and malfeasances really is an improvement on the state of things before), sometimes it might be (Wayland might be worth it, in another decade, maybe), and sometimes it was not, is not and never shall be (here I think of the removal of screensavers from GNOME, of secure password sync from Firefox[0] and of extensions from mobile Firefox).

                                                                  I don’t think it is coincidence that so many folks are using i3, dwm and StumpWM now — they really are better than the desktop environments.

                                                                  But, for what it’s worth, I don’t think I know anyone who used to use Linux or a BSD, and I have been using Linux solely for almost 22 years now.

                                                                  [0] Yes, Firefox still offers password sync, but it is now possible for Mozilla to steal your decryption key by delivering malicious JavaScript on a Firefox Account login. The old protocol really was secure

                                                                  1. 3

                                                                    I don’t think it is coincidence that so many folks are using i3, dwm and StumpWM now — they really are better than the desktop environments.

                                                                    They are, but it’s also really disappointing. The fact that tiling a bunch of VT-220s on a monitor is substantially better, or at least a sufficiently good alternative for so many people, to GUIs developed 40 years after the Xerox Star, really says a lot about the quality of said GUIs.

                                                                    But, for what it’s worth, I don’t think I know anyone who used to use Linux or a BSD, and I have been using Linux solely for almost 22 years now.

                                                                    This obviously varies a lot, I don’t wanna claim that what I know is anything more than anecdata. But e.g. everyone in what used to be my local LUG has a Mac now. Some of them use Windows with Cygwin or WSL, mostly because they still use some old tools they wrote or their fingers are very much used to things like bc. I still run Linux and OpenBSD on most of my machines, just not the one I generally work on, that’s a Mac, and I don’t like it, I just dislike it the least.

                                                                  2. 1

                                                                    That churn is extremely superficial, though. I can work comfortably on anything from twm to latest ubuntu.

                                                                  3. 9

                                                                    I do have a linux machine for my work stuff running KDE. And I love the amount of stuff I can customize, hotkeys that can be changed out of the box, updates I can control etc.

But if you get Windows to run in a stable manner (look out for updates, disable fast start/stop, disable some annoying services, get a Professional version so it allows you to do that, get some additions for a tabbed Explorer, remove all them ugly tiles in the start menu, disable anything that has “Cortana” in its name and forget Windows search), then you will have a better experience on Windows. You’ll not have to deal with broken GPU drivers. You’ll not have to deal with broken multi-display, multi-DPI stuff, which includes having no option to scale displays differently, display switching crashing your desktop, or laptops going back to sleep because you closed the lid too quickly on bootup after connecting an external display. You’ll not have to deal with your pricey GPU not getting used for video encoding and decoding, browsers not using hardware acceleration and rendering 90% on the CPU, games being broken or not using the GPU fully, or sleep mode sometimes not waking up some PCIe device, leading to a complete hangup of the laptop. So the moment you actually want to use your hardware fully, maybe even game on it and do anything that is more than a 1-display system with a CPU, you’ll be pleased to use Windows. And let’s not talk about driver problems because of some random changes in Linux that break running a printer+scanner via USB. That is the sad truth.

Maybe Wayland will change at least the display problems, but that doesn’t fix anything regarding broken GPU support. And no matter whose fault it is, I don’t buy a PC for 1200€ just so I can watch it try to render my desktop in 4K on the CPU, with tearing in videos and random flickering when doing stuff with Blender. I’m not up for tinkering with that; I want to tinker with software I built, not with some bizarre GPU driver and 9k StackOverflow/AskUbuntu/ServerFault entries from people who all can’t do anything, because proprietary GPU problems are simply a black box. I haven’t had any bluescreen in the last 5 years except one, and that was my fault for overloading the VRAM in Windows.

                                                                    And at that point WSL2 might actually be a threat, because it might allow me to just ditch linux on my box entirely and get the good stuff in WSL2 while removing the driver pain (the reverse isn’t possible). Why bother with dual boot or two machines if you can use everything with a WSL2 setup? It might even fix the hardware acceleration problem in linux, because windows can just hand over a virtualized GPU that uses the real one underneath using the official drivers. I won’t have to tell people to try linux on the desktop; they can just use WSL2 for the stuff that requires it and leave the whole linux desktop to the side, along with all the knowledge of installing it or actually trying out a full linux desktop. (I haven’t used WSL2 at this point.) What this will do is remove momentum, and eventually interest, from people getting a good linux desktop up and running; it may even cripple the linux kernel in terms of hardware support. Because why bother with all those devices if you’re reduced to running on servers and in a virtualized environment on windows, where all you need are the generic drivers?

                                                                    I can definitely see that coming. I used linux primarily pre-corona, and now that I’m home most of the time I dread starting my linux box.

                                                                    1. 1

                                                                      look out for updates

                                                                      What do you mean by this? Are you saying I should manually review and read about every update?

                                                                      disable fast start/stop

                                                                      Done

                                                                      disable some annoying services

                                                                      I’m curious which ones but I think I disabled most of them.

                                                                      get a professional version so it allows you to do that

                                                                      Windows 10 Enterprise.

                                                                      get some additions for a tabbed explorer

                                                                      Can you recommend some?

                                                                      remove all them ugly tiles in the start menu, disable anything that has “cortana” in its name and forget windows search)

                                                                      Done and done and done

                                                                      broken GPU drivers

                                                                      I haven’t had to deal with this yet, but I’ve had multiple instances where USB, bluetooth, or my dock stopped working after a windows update, even though they worked before it, and I had to manually update the drivers to get them working again.

                                                                      multi-display multi-DPI

                                                                      I don’t think there currently exists any non-broken multi-DPI solution on windows or any other platform, so I avoid having this problem in the first place. The windows solution to this problem is just as bad as the wayland one. You can’t solve this problem if you rasterize before knowing where the pixels will end up. You would need a new model for describing visuals on a screen, one oriented around vector graphics.
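
                                                                      For illustration, the classic X workaround does exactly this after-the-fact scaling; a minimal sketch with real xrandr flags, where the output name is just an example from my setup:

                                                                          xrandr --output HDMI-1 --mode 1920x1080 --scale 2x2   # render big, then resample down

                                                                      The desktop gets rasterized at 4k and downsampled onto the 1080p panel, which is why everything on that screen ends up slightly blurry.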

                                                                      display switching crashing your desktop, laptops going back to sleep because you were too fast in closing their lids on bootup when you connected an external display

                                                                      I have had the first one happen a few times on windows. The second issue is something I don’t run into, since I don’t currently run my laptop with the lid closed while using external displays, but it’s a setup I’ve planned to move to. I’ve been procrastinating on it because of the number of times I’ve seen it break for coworkers (running the same hardware and software configuration). I’ve never had a display switch crash anything on linux, although I’ve had games cause X to crash, but at least I had a debug log to work from at that point and could see whether I could do something about it.

                                                                      Games being broken or not using the GPU fully.

                                                                      Gaming on linux, if you don’t mind doing the odd bit of tinkering, has certainly been a lot less stressful than gaming on windows, which works fine until something breaks and then there’s absolutely zero information available to fix it. It’s not ideal, but I play VR games on linux and I take advantage of my hardware; it’s a very viable platform, especially when I don’t want to deal with the constant shitty mess of windows. I’ve never heard of a game not using the GPU fully (when it works).

                                                                      So the moment you actually want to use your hardware fully, maybe even game on it and do anything that is more than a 1-display system with a CPU, you’ll be pleased to use windows.

                                                                      I use windows and linux on a daily basis. I’m pleased to use linux, I sometimes want to change jobs because of having to use windows.

                                                                      And let’s not talk about driver problems because of some random change in linux that breaks running a printer+scanner via USB.

                                                                      Or when you update windows and your printer+scanner no longer works. My printing experience on linux has generally been more pleasant than on windows, because printers don’t suddenly become bricks just because microsoft decides to force you to update to a new version of windows overnight.

                                                                      Printers still suck (and so do scanners) but I’ve mitigated most problems by sticking to supported models (of which there are plenty of good online databases).

                                                                      1. 1

                                                                        I don’t think there currently exists any non-broken multi-DPI solution on windows or any other platform, so I avoid having this problem in the first place. The windows solution to this problem is just as bad as the wayland one. You can’t solve this problem if you rasterize before knowing where the pixels will end up. You would need a new model for describing visuals on a screen, one oriented around vector graphics.

                                                                        I have no problems moving windows between HighDPI and normal 1080p displays on windows. Windows 11 will fix a lot of the multi-screen issues of moving windows to the wrong display.

                                                                        Meanwhile my Linux-Box can’t even render videos on 4k due to missing hardware acceleration (don’t forget the tearing). And obviously it’s not capable of different scaling between the HighDPI and the 1080p display. Thus it’s a blurry 2k res on a 4k display. And after logging into the 2k screen, my whole plasmashell crashes, which is why I’ve got a bash command hotkey to restart it.
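
                                                                        Roughly this, if anyone wants the same workaround (assuming Plasma 5’s bundled kquitapp5/kstart5 tools; bind it to a global shortcut):

                                                                            kquitapp5 plasmashell || killall plasmashell   # stop the crashed shell
                                                                            kstart5 plasmashell                            # relaunch it, detached from the terminal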

                                                                        I haven’t had to deal with this yet, but I’ve had multiple instances where USB, bluetooth, or my dock stopped working after a windows update, even though they worked before it, and I had to manually update the drivers to get them working again.

                                                                        I’ve never had any broken devices or a malfunctioning system after an update. Only one BSOD, directly after an upgrade, which fixed itself with a restart.

                                                                        I’ve never heard of a game not using the GPU fully

                                                                        Nouveau is notorious for not being able to control clock speeds, so the driver can’t use the card’s full capacity. Fixing a bad GPU driver on linux had me reinstalling the whole OS multiple times.

                                                                        1. 2

                                                                          I have no problems moving windows between HighDPI and normal 1080p displays on windows. Windows 11 will fix a lot of the multi-screen issues of moving windows to the wrong display.

                                                                          Same experience here. I tried using a Linux + Windows laptop for 7 months or so. Windows’ mixed-DPI support is generally good, including fractional scaling (which is what you really want on a 14” 1080p screen). The exceptions are some older applications, which have blurry fonts. Mixed DPI on macOS is nearly flawless.

                                                                          On Linux + GNOME it is ok if you use Wayland and all your screens use integer scaling. It all breaks down once you use fractional scaling. X11 applications are blurry (even on integer-scaled screens) because they are scaled up, and rendering becomes much slower with fractional scaling.
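
                                                                          (For reference, fractional scaling on GNOME Wayland was hidden behind an experimental flag at the time; this is the actual gsettings toggle I mean:)

                                                                              gsettings set org.gnome.mutter experimental-features "['scale-monitor-framebuffer']"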

                                                                          Meanwhile my Linux-Box can’t even render videos on 4k due to missing hardware acceleration (don’t forget the tearing).

                                                                          I did get it to work, both on AMD and NVIDIA (proprietary drivers). But it pretty much only works in applications that have good support for VA-API (e.g. mpv) or NVDEC, and to some extent in Firefox (you have to enable experimental options, force h.264 on e.g. youtube, and it crashes more often). With a lot of applications, like Zoom, Skype, or Chrome, rendering happens on the CPU, it blows away your battery life, and you have constantly spinning fans.
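
                                                                          Roughly what “enable experimental options” meant, as far as I remember (the pref names are from that era and may have changed since): in about:config set media.ffmpeg.vaapi.enabled and gfx.webrender.all to true, force h.264 with an extension like h264ify, and launch Firefox with the EGL path so decoded surfaces can be shared:

                                                                              MOZ_X11_EGL=1 firefox   # X11 EGL path, needed for VA-API decode
                                                                              vainfo                  # sanity check: should list H.264/VP9 decode profiles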

                                                                          1. 1

                                                                            Yeah, the battery stuff is really annoying. I really hope wayland will finally take over everything and we’ll have at least some good scaling. Playback in VLC works, but I don’t actually want to have to download everything to play it smoothly, so firefox would have to work with that first. (And for movie streaming you can’t download stuff.)

                                                                          2. 1

                                                                            I have no problems moving windows between HighDPI and normal 1080p displays on windows. Windows 11 will fix a lot of the multi-screen issues of moving windows to the wrong display.

                                                                            If you completely move a window between two displays, the problem is easy-ish to solve with some hacks; it’s easier if one DPI is a multiple of the other. And issues especially occur when windows straddle the screen boundary. Try running a game across two displays on a multi-DPI setup: you will either end up with half the game getting downscaled from 4k (which is a waste of resources, and your gpu probably can’t handle that at 60fps) or you end up with a blurry mess on the other screen. When I did use multi-DPI on windows, as recently as windows 10, there were still plenty of windows core components which would not render correctly when you did this. You would either get blurriness or text rasterization which looked off.

                                                                            But like I said, this problem is easily solved by not having a multi-DPI setup. No modern software fully supports this properly, and no solution is fully seamless; just because YOU can’t personally spot all the problems doesn’t mean that they don’t exist. Some people’s standards for “working” are different or involve different workloads.

                                                                            Meanwhile my Linux-Box can’t even render videos on 4k due to missing hardware acceleration (don’t forget the tearing).

                                                                            Sounds like issues with your configuration. I run 4k videos at 60Hz with HDR from a single-board computer running linux; it would run at 10fps if it had to rely solely on the CPU. It’s a solved problem. If you’re complaining because it doesn’t work in your web browser, I can sympathise there, but that’s not because there’s no support for it; it’s just that it’s disabled by default (at least in firefox) for some reason. You can enable it by following a short guide in 5 minutes and never have to worry about it again. A small price to pay for an operating system that actually does what you ask it to.

                                                                            And obviously it’s not capable of different scaling between the HighDPI and the 1080p display. Thus it’s a blurry 2k res on a 4k display.

                                                                            Wayland does support this (I think), but like I said, there is no real solution to this which wouldn’t involve completely redesigning everything including core graphics libraries and everyone’s mental model of how screens work.

                                                                            Really, getting hung up on multi-dpi support seems a little bit weird. Just buy a second 4k display if you care so much.

                                                                            And after logging into the 2k screen, my whole plasmashell crashes, which is why I’ve got a bash command hotkey to restart it.

                                                                            Then don’t use plasma.

                                                                            At least on linux you get the choice not to use plasma. When windows explorer has its regular weekly breakage the only option I have is rebooting windows. I can’t even replace it.

                                                                            Heck, if you are still hung up on wanting to use KDE then fix the bug. At least with linux you have the facilities to do this. When bugs like this appear on windows (especially when they only affect a tiny fraction of users) there’s no guarantee when or if it will be fixed. I don’t keep track but I’ve regularly encountered dozens of different bugs in windows over the course of using it for the past 15 years.

                                                                            I’ve never had any broken devices or a malfunctioning system after an update. Only one BSOD, directly after an upgrade, which fixed itself with a restart.

                                                                            Good for you. My point is that your experience is not universal and that there are people for whom linux breaks a lot less than windows. You insisting this isn’t the case won’t make it so.

                                                                            Nouveau is notorious for not being able to control clock speeds, so the driver can’t use the card’s full capacity.

                                                                            Which matters why?

                                                                            If someone wrote a third party open source nvidia driver for windows would you claim that windows can’t take full advantage of hardware? What kind of argument is this?

                                                                            Nouveau is one option; it’s not supported by nvidia, so it’s no wonder it doesn’t work as well, being based on reverse-engineering efforts. This would only be a valid criticism if nvidia didn’t provide supported proprietary gpu drivers for linux that work just fine. If you want a better experience with open-source drivers, then pick hardware which has proper linux support, like intel or amd gpus. I’ve run both, and although I now refuse to buy nvidia on the principle that they just refuse to cooperate with anyone, it actually worked fine for over 5 years of linux gaming.

                                                                            1. 5

                                                                              I agree with a lot of your post, so I’m not going to repeat that (other than adding a strong +1 to avoiding nvidia on that principle), but I want to call out this:

                                                                              Really, getting hung up on multi-dpi support seems a little bit weird. Just buy a second 4k display if you care so much.

                                                                              It may not be a concern to you, but that doesn’t mean it doesn’t affect others. There are many cases where you’d have displays with different densities, and two different-density monitors is just one. Two examples that I personally have:

                                                                              1. My work macbook has a very high DPI display, but if I want more screen space while working from home, I have to plug in one of my personal 24” 1080p monitors. The way Apple do the scaling isn’t the best, but different scaling per display is otherwise seamless. Trying to do that with my Linux laptop is a mess.
                                                                              2. I have a pen display that is a higher density than my regular monitors. It’s mostly fine since you use it up-close, but being able to bump it up to 125% or so would be perfect. That’s just not a thing I can do nicely on my Linux desktop. I’m planning to upgrade it at some point soon to one that’s even higher density, where I’m guessing 200% scaling would work nicely, but I may end up stuck having to boot into Windows to use it at all.

                                                                              There are likely many other scenarios where it’s not “simply” a case of upgrading a single monitor, but also, the “Just buy [potentially very expensive thing]” argument is incredibly weak and dismissive in its own right.

                                                                              1. 1

                                                                                My work macbook has a very high DPI display, but if I want more screen space while working from home, I have to plug in one of my personal 24” 1080p monitors. The way Apple do the scaling isn’t the best, but different scaling per display is otherwise seamless. Trying to do that with my Linux laptop is a mess.

                                                                                I get that, but my point is that you can just get a second 1080p monitor and close your laptop. Or buy two high DPI monitors.

                                                                                Really, the problem I have with this kind of criticism is that although valid, I would rather have some DPI problems and a slightly ugly UI because I had to display 1080p on a 4k display than have all the annoying problems I have with windows, especially when I have actual work to do. It’s incredibly stressful to have the hardware and software I am required by my company to use cause hours of downtime or work lost per week. With linux, there is a lot less stress, I just have to be cognizant of making the right hardware buying decisions.

                                                                                I have a pen display that is a higher density than my regular monitors. It’s mostly fine since you use it up-close, but being able to bump it up to 125% or so would be perfect. That’s just not a thing I can do nicely on my Linux desktop. I’m planning to upgrade it at some point soon to one that’s even higher density, where I’m guessing 200% scaling would work nicely, but I may end up stuck having to boot into Windows to use it at all.

                                                                                I think you should try wayland. It can do scaling and I think I have even seen it work (about as well as multi-dpi solutions can work given the state of things).

                                                                                If you are absolutely stuck on X there are a couple of workarounds; one is launching your drawing application at a higher DPI. It won’t change if you move the window to a different screen, but it’s not actually that big of a hack and will probably solve your particular problem. I even found a reddit post for it: https://old.reddit.com/r/archlinux/comments/5x2syg/multiple_monitors_with_different_dpis/
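
                                                                                Concretely, the same idea with the toolkits’ own scaling variables (the env vars are real for Qt and GTK applications; the application names are just examples):

                                                                                    QT_SCALE_FACTOR=1.25 krita           # Qt: fractional per-application scale
                                                                                    GDK_SCALE=2 GDK_DPI_SCALE=0.5 gimp   # GTK: double the widgets, then correct the fonts back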

                                                                                The other hack is to run 2 X servers but that’s really unpleasant to work with. But since you are using a specific application on that display this may work too.

                                                                                potentially very expensive thing

                                                                                If you’re dealing with a work mac, get your workplace to pay for it.

                                                                                Enterprise Windows 10 licenses cost money too; not as much as good monitors, but the monitors aren’t an order of magnitude more expensive either (although I guess that depends on whether you buy them from apple).

                                                                                1. 2

                                                                                  I get that, but my point is that you can just get a second 1080p monitor and close your laptop. Or buy two high DPI monitors.

                                                                                  Once again, the “just pay more money” is an incredibly dismissive and weak argument, unless you’re willing to start shelling out cash to strangers on the internet. If someone had the means and desire to do so, they obviously would have done so already.

                                                                                  I think you should try wayland. It can do scaling

                                                                                  Wayland may be suitable in my particular case (it’s not), but it’s also not near a general solution yet.

                                                                                  If you’re dealing with a work mac, get your workplace to pay for it.

                                                                                  I was using it as an example - forget I used the word “work” and it holds just as true. My current setup is “fine” for me, but I’m not the only person in the world with a macbook, a monitor, and a desire to plug the two together.


                                                                                  The entire point of my comment wasn’t to ask for solutions to two very specific problems I personally have; it was to point out that you’re being dismissive of issues that you yourself don’t have, while also pointing out that someone else’s issues are not everyone’s. To use your own words, “My point is that your experience is not universal”.

                                                                                  1. 0

                                                                                    Once again, the “just pay more money” is an incredibly dismissive and weak argument, unless you’re willing to start shelling out cash to strangers on the internet. If someone had the means and desire to do so, they obviously would have done so already.

                                                                                    No, actually, let’s bring this thread back to its core.

                                                                                    Some strangers on the internet (not you) are telling me that windows is so great and that it will solve all my problems, or that linux has massive irredeemable problems, and then proceed to list “completely fucking insignificant” (in my opinion) UI and scaling issues compared to my burnout-inducing endless hell of windows issues. Regarding the problems they claim windows solves: they either don’t exist on linux (so there is nothing to solve), or aren’t things that windows solves to my satisfaction, or aren’t things I consider problems at all (and in multiple cases I don’t think that’s just me; I think the person is just misled as to what counts as a linux problem, or has had a uniquely bad experience).

                                                                                    What’s insulting is the rest of this thread (not you) of people who keep telling me how wrong I am about my consistent negative experience with windows and positive experience with linux and how amazing windows is because you can play games with intrusive kernel mode anti cheat as if not being able to run literal malware is one of the biggest problems I should, according to them, be having with linux.

                                                                                    My needs are unconventional, they are not met in an acceptable manner by windows. I started off by saying “I’m glad windows works for some people, but it doesn’t work for me.” I wish people actually read that part before they started listing off how windows can solve all my problems. I use windows on a daily basis and I hate it.

                                                                                    So really, what is “an incredibly dismissive and weak argument” is people insisting that the solutions that work for me are somehow not acceptable when I’m the only one who has to accept them.

                                                                                    I am not surprised you got turned around and started thinking that I was trying to dismiss other people’s experiences with windows and linux, because that’s what it would look like if you read this thread as me defending linux as a viable tool for everyone. It is not; I am simply defending linux as a viable tool for me.

                                                                              2. 3

                                                                                I don’t want to use things on multiple screens at the same time; I want them to be able to move across different displays while changing their scaling accordingly. And that is already something I want when connecting one display to one laptop: you don’t want your 1080p laptop screen scaled like your 1080p desktop display. And I certainly like writing on higher-res displays for work.

                                                                                When I did use multi-DPI on windows, as recently as windows 10, there were still plenty of windows core components which would not render correctly when you did this. You would either get blurriness or text rasterization which looked off.

                                                                                None of which are my daily drivers. Not browsers, explorer, taskmanager, telegram, discord, steam, VLC, VS, VSCode…

                                                                                Then don’t use plasma

                                                                                And then what? i3? gnome? Could just use apple, they have a unix that works at least. “Just exchange the whole desktop experience and it might work again” sounds like a nice solution.

                                                                                When bugs like this appear on windows (especially when they only affect a tiny fraction of users) there’s no guarantee when or if it will be fixed.

                                                                                And on linux you’ll have to pray somebody hears you in the white noise of people complaining and actually fixes stuff for you, and doesn’t leave it for years as a bug report in a horrible bugzilla instance. Or you just start being the expert yourself, which is possible if you’ve got nothing to do. (And then have fun bringing that fix upstream.) It’s not that simple. It’s nice to have the possibility of recompiling stuff yourself, but that doesn’t magically fix the problem nor give you the knowledge of how to do so.

                                                                                You insisting this isn’t the case won’t make it so.

                                                                                And that’s where I’m not sure it’s worth discussing any further. Because you’re clearly down-sizing linux GPU problems to “just tinker with it/just use wayland even if it breaks many programs” while complaining about the same on windows. My experience may be different to yours, but the comments and votes here, plus my circle of friends (and many students at my department), speak for my experience. One where people complain about windows and hate its update policy. But love it for simply working with games(*), scaling where linux falls flat on its face, and other features. You seem to simply ignore everyone that doesn’t want to tinker around with their GPU setup. No, your firefox won’t be able to do playback on a 4k screen out of the box; it’ll do that on your CPU by default. We even had submissions here about how broken those interfaces are, so firefox and chrome disabled their GPU-acceleration support on linux and only turned it back on for some cards after some time. Seems to be very stable..

                                                                                I like linux, but I really dread its shortcomings for everything that is consumer-facing, as opposed to servers, which I can hack on while forgetting about UIs. And I know for certain how bad windows can be. I’ve set up my whole family on linux, so it can definitely work. I only have to explain to them again why blender on linux may just crash randomly.

                                                                                (*) Yes, all of them, including anti-cheats, which won’t work on linux, or you gamble on when they will ban you. I know some friends running hyperv emulation in KVM to get them to run on rainbow…
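
                                                                                (For the curious, the trick is roughly QEMU’s Hyper-V enlightenments plus hiding the hypervisor and spoofing the vendor id so the game can’t tell it’s a VM; the -cpu flags are real QEMU options, and the rest of the command line is elided:)

                                                                                    qemu-system-x86_64 -enable-kvm \
                                                                                      -cpu host,kvm=off,hv_time,hv_vendor_id=0123456789ab ...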

                                                                                1. 1

                                                                                  taskmanager

                                                                                  The fact that taskmanager is one of your daily driver applications is quite funny.

                                                                                  … VS, VSCode

                                                                                  I certainly use more obscure applications than these, so it explains why I have more obscure problems.

                                                                                  And then what? i3? gnome? Could just use apple, they have a unix that works at least. “Just exchange the whole desktop experience and it might work again” sounds like a nice solution.

                                                                                  KDE has never been the most stable option, it has usually been the prettiest though. I’m sorry about the issues you’re having but really at least you have options unlike on windows.

                                                                                  And on linux you’ll have to pray somebody hears you in the white noise of people complaining and actually fixes stuff for you, and doesn’t leave it for years as a bug report in a horrible bugzilla instance. Or you just start being the expert yourself, which is possible if you’ve got nothing to do. (And then have fun bringing that fix upstream.) It’s not that simple. It’s nice to have the possibility of recompiling stuff yourself, but that doesn’t magically fix the problem nor give you the knowledge of how to do so.

                                                                                  You have to pray someone hears you regardless. The point is that on linux you can actually fix it yourself, or switch the component out for something else. On windows you don’t have either option.

                                                                                  And then have fun bringing that fix upstream.

                                                                                  Usually much easier than trying to get someone else to fix it. Funnily enough projects love bug fixes.

                                                                                  It’s not that simple.

                                                                                  I’ll gladly take not simple over impossible any day.

                                                                                  And that’s where I’m not sure it’s worth discussing any further. Because you’re clearly down-sizing linux GPU problems to “just tinker with it/just use wayland even if it breaks many programs” while complaining about the same on windows.

                                                                                  I genuinely have not had this mythical gpu worst-case disaster scenario you keep describing. So I’m not “down-sizing” anything; I am just suggesting that maybe it’s your own fault. Really, I’ve used a very diverse set of hardware over the past few years. The point I’ve been making repeatedly is that “tinkering” to get something to work on linux is far easier than “copy-pasting random commands from blog posts which went dead 10 years ago until something works” on windows. When things break on linux it’s a night-and-day difference in debugging experience compared to windows. You do need to know a little bit about how things work, but I’ve used windows for longer than I have used linux and I know less about how it works, despite my best efforts to learn.

                                                                                  Your GPU problems seem to stem from the fact that you are using nouveau. Stop using nouveau. It won’t break anything, it will just mean you can stop complaining about everything being broken. It might even fix your plasma crashes when you connect a second monitor.

                                                                                  My experience may be different to yours, but the comments and votes here, plus my circle of friends (and many students at my department), speak for my experience.

                                                                                  I could also pull out a large suite of anecdotes but really that won’t make an argument, so maybe let’s not go there?

                                                                                  But love it for simply working with games(*),

                                                                                  Some games not working on linux is not a linux problem. Despite absolute best efforts by linux users to make it their problem. Catastrophically anti-consumer and anti-privacy anti-cheat solutions are not something you can easily make work on linux for sure, but I’m not certain I want it to work.

                                                                                  scaling where linux falls flat on its face

                                                                                  I’ll take some scaling issues and being able to actually use my computer and get it to do what I want over work lost, time lost and incredible stress.

                                                                                  No, your firefox won’t be able to do playback on a 4k screen out of the box; it’ll do that on your CPU by default.

                                                                                  Good to know you read the bit of my comment where I already addressed this.

                                                                                  Seems to be very stable..

                                                                                  Okay, at this point you’re close to just being insulting. Let me spell it out for you:

                                                                                  Needing to configure firefox to use hardware acceleration, not having a hacky automatic solution for multi-DPI on X, not being able to play games which employ anti-cheat solutions Orwell couldn’t imagine, some UI inconsistencies, having to tinker sometimes: these are all insignificant problems compared to the issues I have with windows on a regular basis. You said it yourself: you use a web browser, two web-browser-based programs, 3 programs developed by microsoft to work on windows (although that’s never stopped them from being broken for me), a media player which statically links mplayer libraries that weren’t developed for windows, and a chat client. Your usecase is vanilla.

                                                                                  My daily driver for work is running VMWare Workstation with on average about 3 VMs, firefox, emacs, teams, outlook, openvpn, and onenote. I sometimes also have to run a GPU-accelerated password cracker. For everything else I use a linux VM running arch and i3, because it’s so much faster to actually get shit done. Honestly, my usecase isn’t that much more exciting either. I have daily issues with teams, outlook, and onenote (but those are not windows issues; it’s just that microsoft can’t for the life of them write anything that works). The windows UI regularly stops working after updates (I think this is due to the strict policies applied to the computer to harden it; these were done via group policy). The windows UI regularly crashes when connecting and disconnecting a thunderbolt dock. I have suspend and resume issues all the time, including the machine bluescreening coming out of suspend when multiple VMs are running. VM hardware passthrough has a tendency to be regularly broken, requiring a reboot.

                                                                                  To top it off, the windows firewall experience is crazy; even if it has application-level control, I still can’t understand why you would want something so confusing to configure.

                                                                                  And I know for certain how bad windows can be.

                                                                                  And I think you’re used to it, to the point that you don’t notice it. The fact that linux is bad in different ways doesn’t necessarily mean it’s as bad.

                                                                                  or you gamble on when they will ban you

                                                                                  Seems illegal. Maybe don’t give those companies money?

                                                                              3. 1

                                                                                All that obviously comes with the typical Microsoft problems, like your license being bound to an account. 2FA may even make it harder to get your account back, because apparently not using your license’s account primarily on windows is weird, and 2FA prevents them from “unlocking” your account again.

                                                                                The same goes for all the tracking, the weird “Trophies” that are now present, and stuff like that. But not having to tinker with GPU stuff (and getting a system that suddenly has no desktop at 3AM) is very appealing.

                                                                                Can you recommend some?

                                                                                http://qttabbar.sourceforge.net/ works ok. I installed it in 2012 on windows 7 and haven’t reinstalled my windows since; the program still works except for 1-2 quirks.

                                                                        1. 21

                                                                          Of all of these, the most important is text handling. This is the biggest reason that macOS apps provide a clean and consistent UI: NSTextView is sufficiently powerful to do everything from displaying a label to rendering multi-page documents with tables and figures inline. Almost equally importantly, it’s coupled with a rich text abstraction that lets you attach arbitrary bits of metadata to a range of characters, so users can attach semantic markup information and then generate instructions for the text renderer from the same data structure.

                                                                          The WebView is similarly complicated. If you’re building your own widget set then you either need to make your web view use them or it will feel weird. When Google forked WebKit to make Blink, they ripped out a load of the UI toolkit abstraction layers, so you’re probably stuck with either WebKit or Gecko, both of which have tremendously exciting build systems.

                                                                          For input, it’s really worth looking at Taligent, which had a fantastic model that decoupled raw UI events from higher-level inputs and provided mechanisms for adding custom ones easily. Things like gestures are trivial to add in this model.

                                                                          macOS is also about the only system I’ve used that gets drag-and-drop right (I think X11 does? Wayland gets it wrong): drag-and-drop is fundamentally different from copy-and-paste, because copy-and-paste has to handle the case where the source program has quit before the paste, whereas drag-and-drop must be completely responsive. This means that a drag should provide a list of possible types and nothing else; the actual data (which can take a few seconds to produce if you’re dragging something like a video) should not be required until the drop event.

                                                                          1. 10

                                                                            I think X11 does?

                                                                            X drag-and-drop works very similarly to the X clipboard. Now, I know a lot of people love to hate the X clipboard since it doesn’t persist when the program exits, but I actually like it a lot. Regardless, like you said, that model works really well for drag-and-drop anyway.

                                                                            The dragging application advertises that it has something and tells the windows it passes over that it is doing so; a window can then ask for the list of types on offer. If it likes one, it tells the dragger what operations it will allow (the dragging application is always responsible for updating the cursor, so it needs to know). If the user drops, the drop target asks the dragger for a particular format from the advertised list, and the data is transferred in incremental chunks. All these interprocess requests go through the X server as the middle man rather than strictly peer-to-peer, to ensure it works even if the applications are running on separate closed-off machines.

                                                                            X copy-paste is basically that same process, just without the dialog with the dragger. The “copy” application advertises it has something, which takes ownership away from whatever previous app (if any) had it. The pasting application then knows it can ask the “copying” application for the list of formats. If it likes one, it asks for the data and it is transferred over. (I used scare quotes because nothing is actually copied until a specific format is requested. This is why people hate it, but also why I like it: it is flexible and efficient, once you get to know it at least.)
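
                                                                            You can watch this model in action from a shell with xclip (assuming it’s installed); the owner process stays resident and only hands data over on request:

                                                                                echo hello | xclip -selection clipboard    # xclip forks and stays around as the selection owner
                                                                                xclip -o -selection clipboard -t TARGETS   # ask the owner which formats it can provide
                                                                                xclip -o -selection clipboard              # request the actual data, transferred on demand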

                                                                            The Windows way for drag and drop actually can be really similar to the X way btw: you provide a COM interface that has methods that provide formats on-demand. One method lists the available options and another method requests a copy of one of the formats. You don’t need to actually copy things up front…

                                                                            EDIT: I guess lobsters doesn’t let me do two comments on separate topics to the same parent. oh well

                                                                            Of all of these, the most important is text handling.

                                                                            I think the text-handling thing is why HTML did so well. You can pretty easily mix and match rich text and in-flow controls. That kind of thing was a legit pain to do in old desktop apis: they’d tend to offer something high-level like a pre-packaged RTF widget and something low-level like the text metrics… but that middle level of html’s mix took a lot of work. RTF with OLE components is about as close as you got, but html is soooo much simpler.

                                                                            1. 16

                                                                              That kind of thing was a legit pain to do in old desktop apis: they’d tend to offer something high-level like a pre-packaged RTF widget and something low-level like the text metrics… but that middle level of html’s mix took a lot of work. RTF with OLE components is about as close as you got, but html is soooo much simpler.

                                                                              NSTextView with NSAttributedString gives you a very similar level of control to HTML. It’s not a coincidence. The original WorldWideWeb was a very thin wrapper around NSTextView that stripped the tags from a string and converted them into NSAttributedString attributes before passing the result to the NSTextView for rendering.

                                                                            2. 4

                                                                              tremendously exciting build systems

                                                                              I’m not sure, but I’m guessing this isn’t a compliment…

                                                                              1. 1

                                                                                I thought it was excellently put tho :)

                                                                              2. 3

                                                                                Similarly, Win32 has had its RichEditEx for decades, although it’s not as capable.

                                                                                Windows also had a universal web view component. It forms some of the most jarring additions to Windows 98 and mainly 2000.

                                                                                I don’t see how Wayland does it wrong: https://wayland.app/protocols/wayland#wl_data_device:request:set_selection. X11 and Wayland have fully “responsive” DnD and selections, which is what you seem to be concerned about. On the other hand, they share the aspect of the selection disappearing with its owner, so in both you need a clipboard manager to retain selections, preferably for text/images only; it’s a decision to make.
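
                                                                                (A minimal such manager on Wayland, for reference, built from wl-clipboard’s real tools; cliphist is one example store, assuming it’s installed:)

                                                                                    wl-paste --type text --watch cliphist store   # re-runs the store command on every new selection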

                                                                                1. 9

                                                                                  Similarly, Win32 has had its RichEditEx for decades, although it’s not as capable.

                                                                                  It’s been around for ages, but it’s not very useful. It didn’t have a real abstraction over the rich text (it kind-of does now) so if you wanted to use it as anything other than an opaque source or consumer of RTF data, you ended up having to write your own RTF parser and generator.

                                                                                  I am led to believe (with no insider knowledge, through things I’ve read in the press) that in the ’90s the Office team was deeply concerned that a more powerful rich text widget would make it easy for people to write a Word competitor. With NSTextView, you can write something that’s got the core feature set of Word 97 (styles, multi-column layout, live spell checking, inline images, printing, and so on) in around a thousand lines of code, and something quite polished in about 10K. Keeping a bunch of those features Office-exclusive probably led to a lot of the growth of web apps early on: it was easier to write a web app and delegate UI text layout to IE than it was to deal with RichTextEx.

                                                                                  I don’t see how Wayland does it wrong:

                                                                                  Oh, great! It looks as if this has been fixed since I last looked at Wayland in detail (which, I now realise, was probably at least 10 years ago, back when I was actively contributing to GNUstep). Back then, their protocol was built on top of copy-and-paste: you put every format that you could provide onto a pasteboard (I can’t remember the Wayland terminology) at the start of the drag. To be honest, getting this so fundamentally wrong was the main reason I stopped paying attention to Wayland. It’s probably been fixed for a long time without my noticing.

                                                                                  1. 5

                                                                                    I also used to be angry at Wayland, but it’s developing, albeit very slowly.

                                                                                    In particular, I thought it wouldn’t be possible to monitor all selected text, though Sway, which is what I’d naturally use for a WM, now supports both Xwayland synchronisation and some unstable protocols to do it natively. But, e.g., I still don’t get 10-bit colour or indicators like nm-applet, so it’s plainly worse than X.org + i3, and will likely stay that way for a few years to come.

                                                                                    My dislike of Wayland has hence shifted towards this complexity. The core is tiny, and you need to have the extensions you want supported by the compositor. Not much can be depended on.

                                                                                    Something about throwing the baby out with the bathwater.

                                                                                    1. 8

                                                                                      My dislike of Wayland has hence shifted towards this complexity. The core is tiny, and you need to have the extensions you want supported by the compositor. Not much can be depended on.

                                                                                      Well, it’s good to know that they didn’t replicate the problems with X11, which has a tiny core and implements everything in different extensions (some in the X server, some provided by the window manager or compositing manager) that don’t always compose well and are not universally supported.

                                                                                      1. 1

                                                                                        I’m not a person that uses tray icons, but I think the nm-applet weirdness only applies to swaybar, which is a specific status bar and not anything inherent to the protocol itself. It doesn’t reflect on Wayland any more than a buggy GNOME status bar would reflect on X.

                                                                                        That being said, the “tiny core + extensions” thing bothers me. Though wlroots does appear to give a decent baseline for stuff.

                                                                                        1. 1

                                                                                          Sway and the bar are just indicative of the practical usability of Wayland; it still hasn’t caught up. Also, the protocol doesn’t live in a vacuum; more examples can be found. E.g., someone on IRC had a problem with Wayland not supporting QWindow::requestActivate(), and when I looked into it, the protocol in staging (for once not unstable) that is meant to support this kind of functionality still can’t be used to implement this method. I taught him how to force the program to run under XWayland.

                                                                                          1. 1

                                                                                            Oh yeah, that’s a fair point. I misread you as blaming it on the protocol itself; my bad.

                                                                                            I’ve been using it to work on, but screen recording was iffy for a while, and I miss all the fancy tools X has. I hear it’s better in GNOME-land, but that’s because they have their own protocols for everything… bleh. And not having window urgency or a way for the currently focused window to transfer that focus is just bizarre.

                                                                                            (I used sway until very recently; I switched to river because I like its layout model more, and because of frustration with sway’s “no features that aren’t in i3” philosophy.)

                                                                                  2. 1

                                                                                    For input, it’s really worth looking at Taligent

                                                                                    Can you point me to a resource that describes the way Taligent did this? I’m curious about how this works…

                                                                                    1. 1

                                                                                      It’s about 15 years since I last read the docs, back when we were designing EtoileUI. The API docs were online somewhere, but I didn’t bookmark them and I’ve no idea if they’re still there now…

                                                                                  1. 3

                                                                                    What git command line do I run to remove a maintainer?

                                                                                    1. 9
                                                                                       git clone https://github.com/git/git nit
                                                                                       cd nit
                                                                                       find . -type f -not -path "./.git/*" -exec sed -i "s/git/nit/g" {} \;  # and some more refactoring
                                                                                       git commit -am "Fork git to nit"
                                                                                      

                                                                                      Git is a DVCS, after all.

                                                                                      1. 1

                                                                                        Nah, you also need to update every shell script to call nit instead of git.

                                                                                        My theory is that this is why CLI reforms such as gitless haven’t taken off.

                                                                                      2. 1

                                                                                        No need to be authoritarian; you can always make your own fork. Or simply apply the patches.

                                                                                        1. 13

                                                                                          It’s not authoritarian to think that people who aren’t engaging in good faith shouldn’t be running core tooling projects used by the entire software industry. Applying a fork doesn’t solve the issue that a toxic person is leading a massive community effort.

                                                                                          Furthermore, this isn’t about solving it for me – I know how to use git already. It’s about increasing accessibility for newcomers, who won’t know how to apply patches and recompile.

                                                                                          1. 5

                                                                                            So long as you only think so.

                                                                                            Where can I see a single example of engaging in bad faith, or any toxicity for that matter?

                                                                                            It could be argued that core tooling shouldn’t change at all, and a change like this would confuse the documentation, or break things. Though this has happened already with the master → main switch, as well as with some changes to the porcelain. git is rather bad from both viewpoints.

                                                                                            1. 2

                                                                                              It could be argued that core tooling shouldn’t change at all, and a change like this would confuse the documentation, or break things.

                                                                                              Thank goodness neither of those points is relevant to the linked discussion. All of this stuff is backwards-compatible.

                                                                                              1. 6

                                                                                                This falls under confusing the documentation. The more ways there are to do something, the more confusing it gets. Changing terms anywhere would also be a source of confusion. I admit to not having read it in detail, but no miracle is possible.

                                                                                                I mostly just scanned it, early on found out it literally lies (“everyone” means “people I agree with”), and figured out it’s just someone publicly moaning, so not worth the attention.

                                                                                                And then there was this comment where someone disrespects other people’s work, of course.

                                                                                            2. 3

                                                                                              You’re free to fork and improve git. Or even implement your own from scratch.

                                                                                              The more forks and independent implementations, the better the ecosystem – and maybe some ideas filter across.

                                                                                              1. 4

                                                                                                Furthermore, this isn’t about solving it for me – I know how to use git already. It’s about increasing accessibility for newcomers, who won’t know how to apply patches and recompile.

                                                                                                1. 2

                                                                                                  Well, then you’d better put in the effort to get your fork into distribution repositories.

                                                                                                  1. 1

                                                                                                    Why would you be unable to provide downloads and packages?

                                                                                                    It’s not like X.org, Jenkins, LibreOffice, and other forks are significantly harder to install than the original.

                                                                                                2. 2

                                                                                                  Yes, it is; the way the term “shouldn’t” is used there presumes the decision is, or should be, within your authority.

                                                                                                  1. 1

                                                                                                    Applying a fork doesn’t solve the issue that a toxic person is leading a massive community effort.

                                                                                                    Sure it does: if you do better, people switch projects, and the origin of the fork stops being a massive community effort. How many Hudson developers are there today? How many Jenkins developers are there today?

                                                                                                    How about Gogs vs Gitea?

                                                                                              1. 8

                                                                                                Oh, also: whatever approach you choose, you are going to also need to provide an ergonomic, performant animation API.

                                                                                                This is where I died inside. Just no. Animations are one of the most despicable developments in recent UIs. I don’t want them to keep me waiting for the computer to do its trivial jobs, nor to distract me in general. Want to make your phone faster? Disable animations!

                                                                                                You need to support emoji.

                                                                                                Also no, this is a largely pointless complication.

                                                                                                Async: You do have nice ergonomic async support, don’t you?

                                                                                                What does this even mean? Win32/X11/Qt/GTK+ are all async by their sheer nature.

                                                                                                He’s describing an opinionated, everything-but-the-kitchen-sink approach. But a decent overview nonetheless.

                                                                                                1. 33

                                                                                                  Complaints about animations and emoji sound very much like “old man yells at cloud”.

                                                                                                  Emoji (and, more generally, Unicode support beyond the BMP) are now an expected feature of modern software.

                                                                                                  UI animations can be done well and be helpful, hinting at how UI items relate to each other spatially. It’s really weird that even low-end mobile devices have GPUs that can effortlessly animate millions of 3D triangles, yet sliding a menu is considered a big effort.

                                                                                                  1. 7

                                                                                                    I’ve honestly never seen a single helpful UI animation in my life, other than spinning wheels and the like, which indicate that the application/system hasn’t frozen up completely. Instead of “click, click, click, be done”, things tend to shift into “click, have your attention grabbed, click, have your attention grabbed, throw your computer out of the window and go live in the woods”. The GNOME Builder and GIMP 3 settings dialogues are spectacular examples of GNOME continuing to dig its own grave.

                                                                                                    One would expect the computer to be a tool, rather than a work of art.

                                                                                                    1. 19

                                                                                                      Maybe that’s just a failing of GNOME?

                                                                                                      For example, macOS has generally well-done animations that don’t get in the way. It tends to animate after an action, but not before. Animations are very quick, unless they’re used to hide loading time. There are also animations to bring your attention to newly created/focused elements. Reordering of elements (menu icons, tabs) is smooth. IMHO they usually add value.

                                                                                                      Touchscreen OSes also animate the majority of elements that can be swiped or dragged. The animation is not on a fixed timer, but “played” by your finger movements, so you may not think of it as an animation, but technically it is animating positions and sizes of UI elements. Users would think the UI was broken if it jumped instantly instead of “sticking” under the finger.

                                                                                                      1. 4

                                                                                                        Eh, macOS has sort of gone overboard on animation, too. I’ve activated the “Reduce motion” option because, by default, the transition between full-screen apps is done by having windows “slide” in and out of view. It’s not just slow (it takes substantially more time to do the transition than it takes to Cmd-Tab between the applications), it legit makes you dizzy if you keep switching back and forth.

                                                                                                        I imagine it’s taken verbatim from iPad (last macOS version I used before Big Sur was… Tiger, I think? so I’m not sure…) where it makes some sense, but I can’t for the life of me understand what’s the point of it on a thing that has a keyboard and doesn’t have a touch screen.

                                                                                                        1. 3

                                                                                                          I can’t for the life of me understand what’s the point of it on a thing that has a keyboard and doesn’t have a touch screen.

                                                                                                          Probably because most people will use their MacBook trackpad to swipe between full screen apps/desktops. I only rarely use the Ctrl-arrow shortcuts for that. A lot of macOS decisions make more sense when you assume the use of a trackpad rather than a mouse (which is why I’ve always found it weird they don’t ship a trackpad with iMacs by default)

                                                                                                          1. 1

                                                                                                            You can still cmd-tab between full-screen applications, which is what a lot of people actually do – a “modern” workplace easily gets you an email client, several Preview windows, a Finder window, and a spreadsheet. Trackpad swiping works great when you have two apps open, not so much when you got a bunch of them.

                                                                                                            When you’re on the seventh out of the fifteen invoices you’ve got open, you kinda want to get back to the spreadsheet without seeing seven invoices again. That’s actually a bad example, because full-screen windows from the same app are handled very poorly, but you get the point: lots of apps, and you don’t always want to see all of them before you get to the one you need…

                                                                                                          2. 1

                                                                                                            These animations are helpful, and have been shown to be so. It’s not something some asshole down at Cupertino just cooked up because he thought it would look cool. Cues as to the spatial relations of things (as the desktops have an order and you use left/right gestures to navigate them) are very valuable to a lot of people, and they even let you turn them off, so I don’t really see anything worth complaining about.

                                                                                                            I mean there’s a lot of questionable things Apple is doing these days, but that’s not one of them.

                                                                                                            1. 2

                                                                                                              I’m not talking about desktops, but “maximized” applications (i.e. the default, full-screen maximisation you get when you press what used to be the green button).

                                                                                                              You get full-screen sliding animations when you Cmd-Tab between apps in this case, even though there’s no spatial relations between them, as the order in which they’re shown in the Cmd-Tab stack has nothing to do with the order in which they’re shown in Mission Control (the former is obviously mutable, since it’s in a stack, the latter is fixed).

                                                                                                              In fact, precisely because one’s a stack and the other one isn’t, the visual cue is wrong half the time: you switch to an application to the right of the stack, but the screen slides out to the left.

                                                                                                              Animation when transitioning between virtual desktops is a whole other story and yes, it makes every bit of sense there.

                                                                                                              and have been shown to be so.

                                                                                                              Do you have a study/some data for that? I know of some (I don’t have the papers at hand but I can look it up if you’re curious), but it explicitly expects only occasional use so it doesn’t even attempt to discuss the “what if you use it too much” case. So it’s not even close to applying to the use case of e.g. switching between a spreadsheet and the email client several times a minute.

                                                                                                              (Edit: just for the fun of it, I tried to count how often I Cmd-Tab between a terminal window and the reference manual after I ended up ticking the reduce animation box in accessibility options. I didn’t automate it so I gave up after about half an hour, at which point I was already well past 100. Even if this did encode any spatial cues, I think spatial cues are not quite as valuable as not throwing up my guts.)

                                                                                                        2. 9

                                                                                                          There are many legitimate uses for animations. Yes, loading spinners are one of them, unless you think that every single operation can be performed instantly. Sliding and moving elements around, on mobile especially, is another one. A short animation to reveal new elements after a user action can also improve the UX.

                                                                                                          Not everything has to be one extreme or the other; it’s not pointless animations everywhere or no animations at all. When used well, they can improve usability.

                                                                                                          1. 1

                                                                                                            Loading spinners are far from ideal, though. A progress bar would generally be better, so you have some idea of whether it is actually still working and how much is left to go. Or anything else that provides similar information.

                                                                                                            I’ve seen so many cases where a loading spinner sits there spinning forever because, for example, the Wi-Fi disconnected. The animation is then misleading, since it will never complete.

                                                                                                            1. 4

                                                                                                              That’s an entirely different loading element. You can have an animated progress bar. And when progress is indeterminate, a spinner makes the most sense. A spinner not stopping on error is a UI bug, not a problem with the concept. If you want to get mad at bad design, how about programming languages and paradigms that don’t make you handle errors explicitly, which is how you end up in this state?

                                                                                                            2. 1

                                                                                                              The standard way of indicating a lengthy operation is to say so to the user, and possibly add a progress bar for when progress can be sensibly measured. Like adding a spinner, it needs to be done explicitly. Revealing new elements can be done responsively by improving the design; the standard way is by turning an arrow.

                                                                                                              The notion of phones invading GUIs, as x64k hinted at, is interesting here (though not new). Transplanting things that do not belong, just because of familiarity.

                                                                                                              Going back to the article, it said I needed to. I don’t. And still, I can make anything I desire with the toolkit, with no real change to the required effort. Except for “modern” IM, since I don’t care to implement pictures as pseudo-characters.

                                                                                                            3. 7

                                                                                                              One would expect the computer to be a tool, rather than a work of art.

                                                                                                              The computers I remember most fondly all had some qualities of art in them. Sometimes in the hardware, others in the software, and the ones I like the most had it in both.

                                                                                                              Animations are important on devices where feedback is mostly visual. Most computers don’t have haptics, and we often get annoyed at audio feedback. Visual cues that an action was performed, or that something happened, are important. There is a difference between UI animation and a firework explosion in the corner of the app.

                                                                                                              1. 5

                                                                                                                One would expect the computer to be a tool, rather than a work of art.

                                                                                                                You, me, and everybody in this thread is in the top 0.001% of computer power users. What’s important to us is totally different than what’s important to the rest of the population. Most normies I talk to value pleasing UIs. If you want to appeal to them, you’re going to need animations.

                                                                                                                1. 6

                                                                                                                  I’d argue even further that the 0.001% of computer power users also need animations. When done well, animations really effectively convey state changes in a way that speaks to our psychology. A great example is that when I scroll a full page in my web browser, the brief scroll animation shows my brain how where I am now relates to where I was a split second ago. Contrast this to TUIs which scroll by a full page in a single frame. It’s easy to consciously believe that we can do without things like animations, but I’m pretty sure that all the little immediate state changes can add up to a subperceptual bit of cognitive load that nevertheless can be fatiguing.

                                                                                                                  1. 2

                                                                                                                    I think good animations are ‘invisible’, but bad ones aren’t. So people remember the bad more than the good.

                                                                                                              2. 3

                                                                                                                UI animations can be done well

                                                                                                                Absolutely. I love the subtle animations in iOS. Like when I long-press on the brightness slider to access more controls like the dark mode toggle. I’m already making an action that takes more time than a simple tap, and the OS responds with a perfectly timed animation to indicate that my action is being processed.

                                                                                                                On the other hand, animations can be very easily abused. Plenty of examples, like today’s Grumpy Website post, show animations that hinder accessibility. I think the cases where animation goes wrong are where it was thrown in only because “it’s modern” rather than primarily as a means to convey information.

                                                                                                              3. 15

                                                                                                                I respectfully disagree with your opinion about animations. There are lots of times when I genuinely feel that the animation is important and actually conveys information. For example, the Exposé-style interface in macOS and GNOME is much better since the windows animate from their “physical” locations to their “overview” locations; the animation provides important context about which windows went where, so your eyes get to track the moving objects. It also helps that those animations track your finger movements on the trackpad perfectly, with one pixel of movement per tiny distance travelled across the trackpad (though the animation also has value when triggered using a hotkey IMO).

                                                                                                                But there’s definitely a lot of software which over-uses animations. The cardinal sin of a lot of GUI animations is to make the user wait for your flashy effects. A lot of Apple’s and GNOME’s animations do fit this description, as well as arguably most animations in general. So I think a GUI framework needs a robust animation system for when it’s appropriate, but application programmers must show much more discretion about when and how they choose to use animations. For example, I’m currently in Safari on macOS, and when I do the pinch gesture to show an overview of all the tabs, I have to wait far too long for the machine to finish its way too flashy zoom animation until I actually get the information I need in order to interact further.

                                                                                                                1. 6

                                                                                                                  Bad news, even smooth scrolling is a kind of animation.

                                                                                                                  1. 2

                                                                                                                    I’ll admit, this is an improvement in the browser. Bumping my counter to one case of a useful animation.

                                                                                                                  2. 3

                                                                                                                    What does this even mean? Win32/X11/Qt/GTK+ are all async by their sheer nature.

                                                                                                                    Not really. It is very easy to block the event loop with various other calls (including things like the clipboard, which, as the article said, is async under the hood on X, but Qt and GTK don’t present it that way). The GUI toolkit needs to have some way for other operations to hook into the event loop so you can avoid blocking.

                                                                                                                    Not terribly difficult but still you do need to at least think about it, it isn’t fully automatic. (Even on Windows, where it is heavenly bliss compared to unix, you do still need to call a function in your loop like SleepEx to opt into some additional sync processing.)
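
                                                                                                                    For illustration, a minimal GLib sketch of that hooking-in, not tied to any particular toolkit (the chunked job and its counter are made-up stand-ins; the GLib calls themselves are real): instead of blocking the loop in one long call, schedule the work in small pieces and let events keep flowing between them.

                                                                                                                        #include <glib.h>

                                                                                                                        static GMainLoop *loop;

                                                                                                                        /* Hypothetical long job, split into chunks. Returning G_SOURCE_CONTINUE
                                                                                                                         * re-queues this handler on the next loop iteration, so input and redraw
                                                                                                                         * events stay serviced in between. */
                                                                                                                        static gboolean do_one_chunk(gpointer user_data)
                                                                                                                        {
                                                                                                                            int *remaining = user_data;
                                                                                                                            if (--*remaining > 0)
                                                                                                                                return G_SOURCE_CONTINUE;
                                                                                                                            g_main_loop_quit(loop);
                                                                                                                            return G_SOURCE_REMOVE;
                                                                                                                        }

                                                                                                                        int main(void)
                                                                                                                        {
                                                                                                                            int chunks = 100;
                                                                                                                            loop = g_main_loop_new(NULL, FALSE);
                                                                                                                            g_idle_add(do_one_chunk, &chunks); /* hook the job into the event loop */
                                                                                                                            g_main_loop_run(loop);
                                                                                                                            g_main_loop_unref(loop);
                                                                                                                            return 0;
                                                                                                                        }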

                                                                                                                    1. 2

                                                                                                                      GTK+ does present clipboard retrieval asynchronously; see https://docs.gtk.org/gtk3/method.Clipboard.request_contents.html, which takes a callback argument. That much I remember. Setting clipboard contents can be done blindly, since you can have ownership snatched away at any moment.

                                                                                                                      Going the way of recursive event loops requires a degree of caution that I would avoid imposing on users as much as possible, in particular because of the callbacks-versus-state hell. Typically, this is reserved for modal dialogues, where the API user knows what they’re dealing with.

                                                                                                                      There’s also the possibility of processing the event pump selectively, though that’s another thing you don’t want to do to yourself.
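
                                                                                                                      To make that concrete, a minimal sketch against the GTK 3 API linked above, using the text-specific convenience gtk_clipboard_request_text() (the printing and quitting are just demo scaffolding): the request returns immediately, and the callback fires later from the main loop once the selection owner answers.

                                                                                                                          #include <gtk/gtk.h>

                                                                                                                          /* Invoked asynchronously once the selection owner has responded. */
                                                                                                                          static void text_received(GtkClipboard *clipboard,
                                                                                                                                                    const gchar *text, gpointer user_data)
                                                                                                                          {
                                                                                                                              g_print("clipboard: %s\n", text ? text : "(empty or non-text)");
                                                                                                                              gtk_main_quit();
                                                                                                                          }

                                                                                                                          int main(int argc, char *argv[])
                                                                                                                          {
                                                                                                                              gtk_init(&argc, &argv);
                                                                                                                              GtkClipboard *cb = gtk_clipboard_get(GDK_SELECTION_CLIPBOARD);
                                                                                                                              gtk_clipboard_request_text(cb, text_received, NULL); /* non-blocking */
                                                                                                                              gtk_main(); /* the callback runs from inside this loop */
                                                                                                                              return 0;
                                                                                                                          }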

                                                                                                                    2. 1

                                                                                                                      Want to make your phone faster? Disable animations!

                                                                                                                      This is generally the point of animations as a UX feature: they mask the slow operation of applications with a silly animation, to keep you distracted and to indicate a process is occurring. Want to notice when your app is using a JPEG to pretend it’s loaded? Disable animations!

                                                                                                                      1. 21

                                                                                                                        It’s not only that. The human visual system is very good at tracking movement, much worse at noting that a thing has disappeared and reappeared. If a thing disappears, you’ll typically notice quickly but not immediately be aware of what has disappeared. If you animate movement then there’s lower cognitive load because a part of your brain that evolved to track prey / predators is used rather than anything related to understanding the context of the application.

                                                                                                                        1. 4

                                                                                                                          This is it exactly. The human visual system evolved in a world where things don’t instantly flicker out of existence or appear out of nothing.

                                                                                                                        2. 2

                                                                                                                          I don’t think OP is talking about things like progress bars and spinning indicators, which are pretty legitimate everywhere, but about things like “gliding” page transitions between application screens. If a framework is indeed so slow that you notice it rendering widgets, an animation API will help now and then, but won’t make that big a dent. (Edit: also, unless you’re loading things off a network, loading JPEGs cannot be a problem anymore; it hasn’t been a problem in twenty years!)

                                                                                                                          I do think this piece could’ve been more aptly titled “so you want to write a GUI framework for smartphones”. Animations are important on touch screens driven by gestures (e.g. swiping): gestures are failure-prone, they need incremental feedback, and so on; plus nobody who’s not high on their 2013-era post-PC supply expects efficiency out of phone UIs.

                                                                                                                          But they are pretty cringy on desktop. E.g. KDE’s Control Center (and many Kirigami apps) has an annoying misfeature, where you click “back” and the page moves out of view as if you’d swiped it. But you didn’t swipe. Regardless of what you think about animation, it’s not even the right animation.

                                                                                                                          That’s why so many people see them as useless eye candy. If you go all Borg on it and only think in absolute terms, you get a very Borg user experience.

                                                                                                                          Edit: yes, I know, “a modern GUI toolkit” should have all these things. The point is you can drop a lot of them and still write useful and efficient applications. Just because Google is doing something on Android doesn’t mean everyone has to do it everywhere.

                                                                                                                          1. 3

                                                                                                                            It’s funny you mention page transitions. I have my ebook reader set up to do a 200ms-ish animation when I tap the ‘next page’ button where the current page slides off to the left and the next one slides in from the right. It has an option to disable it, but I actually find that disorienting in this vague way I can’t explain. But on my desktop, it’s fine with no animations.

                                                                                                                          2. 1

                                                                                                                            Empirically, that is not how it’s used. This masking is a minority of use cases, and even then it’s bad. To some people that aren’t me, it might be better described as “eye candy” and “smoothness”.

                                                                                                                            Being able to disable this irritation is a matter of luck; e.g., it’s hardcoded in CSS in Firefox and GTK+ (/org/gnome/desktop/interface/enable-animations only works partially).
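
                                                                                                                            For what it’s worth, a minimal sketch of flipping that key programmatically through GIO’s GSettings, assuming the org.gnome.desktop.interface schema is installed (and, as said, only partially honoured by applications):

                                                                                                                                #include <gio/gio.h>

                                                                                                                                int main(void)
                                                                                                                                {
                                                                                                                                    /* Maps to the dconf path /org/gnome/desktop/interface/ */
                                                                                                                                    GSettings *s = g_settings_new("org.gnome.desktop.interface");
                                                                                                                                    g_settings_set_boolean(s, "enable-animations", FALSE);
                                                                                                                                    g_settings_sync(); /* flush pending writes before exiting */
                                                                                                                                    g_object_unref(s);
                                                                                                                                    return 0;
                                                                                                                                }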

                                                                                                                          3. 1

                                                                                                                            What does this even mean? Win32/X11/Qt/GTK+ are all async by their sheer nature.

                                                                                                                            I think they’re talking about Rust async, since this is all in the context of writing a cross-platform GUI toolkit in Rust. This is more of a problem than it seems because if you’re doing a cross-platform toolkit that uses native widgets, it’s not at all trivial to impedance-match whatever model the native widget toolkit uses behind the scenes with your toolkit, which exposes an async model.

                                                                                                                            (Edit: there are some additional complications here, too. For example, there are toolkits that are (generally) async, but still do some operations synchronously. The author mentions the copy-paste API as an example.)

                                                                                                                            One might conclude that it’s better not to do any of that and instead expose the platform’s intended programming model, as second-guessing thirty-year-old code bases tends to backfire spectacularly, but maybe I’m just being grumpy here…

                                                                                                                          1. 4

                                                                                                                            Oh, thanks for the interview with Bourne, that adds considerable context to my recent writing. In exchange, I offer the explanation of why Bell Labs got a miserable PDP rather than a powerful machine. Unix is partly Hamming’s fault.

                                                                                                                            I went in to Ed David’s office and said, “Look Ed, you’ve got to give your researchers a machine. If you give them a great big machine, we’ll be back in the same trouble we were before, so busy keeping it going we can’t think. Give them the smallest machine you can because they are very able people. They will learn how to do things on a small machine instead of mass computing.” As far as I’m concerned, that’s how UNIX arose. We gave them a moderately small machine and they decided to make it do great things. They had to come up with a system to do it on. It is called UNIX!

                                                                                                                            1. 2

                                                                                                                              Ha, constraints breed creativity :) I think there’s something to that and the inefficiency of the modern cloud.

                                                                                                                              “Software is a gas; it expands to fill its container”, and the modern cloud is basically a container of unbounded size (since new capacity is being built as fast as people will pay for it).

                                                                                                                            1. 2

                                                                                                                              One of the biggest problems with this stream-based composability is similar to another problem that is also on the front page: it’s possible to conceive of edge cases in the exchange of these streams that can cause catastrophic failure. Not sure what can be done from a GUI standpoint. I think just having a live visualization of changes would be a good first step, but I’m not sure things that are naturally code-related can ever be made into a GUI. Maybe something like Node-RED sort of fits?

                                                                                                                              Also, since no one else is saying it: you probably want to avoid linking to code that uses a blatantly anti-Semitic slur. Even the body of your article mentions an IRC bot named “ZyklonB”.

                                                                                                                              1. 3

                                                                                                                                I probably want to; on the other hand, it’s just the result of a joke about letter iteration. git filter-branch because of something so silly? Shrug. Though I might have just come up with even stupider names for the bunch.

                                                                                                                                1. 1

                                                                                                                                  I’m not sure what you mean, since those attacks appear to be HTTP/2-specific and don’t apply to HTTP/1, which is also based on text streams. Example?