1. 49
  1. 27

    Now one of the worst parts is that everywhere I only even hint at not completely loving the new libadwaita theme I instantly get shut down and disagreed with before I can even get the chance to give some feedback. Apparently not liking flat themes makes me a madman in this world. Why am I not allowed to even have opinions about the look of the operating system I’m using?

    IMHO this illustrates one of the main failure mechanisms in FOSS development. Some projects – and Gnome is one of them – are effectively open-loop systems. The success of the development effort is not gauged by how much users like it, but by how well it conforms to a particular “vision”, something which can inherently be gauged only internally.

    That’s why these projects practice such a strict design orthodoxy. If you’re guided by the fulfillment of a vision, then when something fails, it’s very tempting to think that it failed because you simply didn’t pursue your vision hard enough. That’s why buttons get flatter and bigger: The Vision said, once upon a time, that widgets must be less distracting and easier to hit, so every iteration removes some of the colours and some of the bevels and makes them bigger. Both are good ideas but, because nobody has stopped to gather meaningful user data in forever (“we asked six interns to try these tasks” isn’t meaningful data, that’s just six anecdotes), they’ve been discoloured and flattened well past the point of usefulness.

    That’s also why dissenting feedback is so violently received: normally, user feedback is the bread and butter of design refinement, but when you’re aiming for the fulfillment of a vision rather than making things work for as many people as possible, anything short of enthusiastic praise reads as a personal attack on the project’s identity and on its contributors’ work.

    It’s particularly disheartening that “the vision” itself is by now so old, and has been defended in so many absurd ways, that even many of Gnome’s contributors can no longer really articulate it. Hence the loss of identity that the author mentions. It’s a vision that originated while the “post-PC era is going to screw us all” sentiment was mounting, but its community still lives in a world where Windows 8 and its convergent post-PC interface wasn’t a resounding disaster, universally loathed by practically everyone except tech evangelists. That late-stage corporate behemoths (Microsoft changed course shortly after and effectively lives a new life) would cultivate this absurd separation between users and product management is understandable, and I bet it wasn’t even deliberate, but to see it happen in a free software project… honestly, it’s pretty disappointing.

    1. 14

      because nobody has stopped to gather meaningful user data in forever

      This is a very good point, and another issue with FOSS: implement any kind of data gathering (aka telemetry) in your open source program, and I can assure you that a shit storm will be coming your way very, very soon. Look at what happened with Audacity, for instance.

      Mozilla has been gathering a lot of user data for years now, and I think that has helped them shape what Firefox looks like. But once again, they took a lot of bad rep for it… It’s a difficult decision.

      Of course, in proprietary software, this is not a problem, since everything is hidden, and, unless some power user starts to monitor their network traffic, it usually goes unnoticed (and even when it’s noticed, it’s often not easy to find out what’s being sent over to the mothership).

      1. 25

        A lot of this can be done with user studies. Fortunately, you don’t need to, because various folks did the studies 30+ years ago and you can just learn from the conclusion: users are able to navigate interfaces faster if there is a consistent visual cue about which elements of a UI are clickable/touchable and which aren’t. When we designed the Nesedah theme, we had a very simple rule: anything that is clickable has a gradient background, everything that is decoration / explanation (e.g. labels) has a flat background. Prior systems (mostly from back when rendering an image on every widget was expensive) used bevels to convey the same information. We did some very small-scale ad-hoc user studies and participants were all able to correctly identify all of the places where they would expect to click and get some behaviour. This is not, in my experience, the case for flat themes, but it is for early Windows, Motif, and NeXT themes.
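
        For illustration only, here is a rough sketch of that rule expressed as GTK3 CSS loaded from Python via PyGObject; the selectors and colours are made up for the example and are not the actual Nesedah theme.

        ```python
        # Hypothetical sketch of the rule above: clickable widgets get a
        # gradient background, plain decoration (labels) stays flat.
        import gi
        gi.require_version("Gtk", "3.0")
        from gi.repository import Gtk, Gdk

        CSS = b"""
        button {
            /* clickable: gradient background as the visual cue */
            background-image: linear-gradient(to bottom, #fdfdfd, #d4d4d4);
            border: 1px solid #9a9a9a;
        }
        label {
            /* decoration / explanation: flat, no gradient */
            background-image: none;
        }
        """

        provider = Gtk.CssProvider()
        provider.load_from_data(CSS)
        Gtk.StyleContext.add_provider_for_screen(
            Gdk.Screen.get_default(),
            provider,
            Gtk.STYLE_PROVIDER_PRIORITY_APPLICATION,
        )
        ```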

        1. 11

          Telemetry is just one side of the story, and Mozilla’s bad rep is a good example of why. When applied to interaction with external agents, rather than to purely internal behaviour, telemetry data is extremely difficult to interpret. This is no different from other fields of engineering, FWIW – data related to, say, mechanical wear-and-tear due to environmental factors is practically useless without information on what drives it (i.e. on the environmental factors themselves), just like telemetry related to user interaction is practically useless without information about what drives it, such as user intent.

          Without that kind of information, it’s impossible to tell – based on telemetry data alone – if a particular UX feature is used a lot because it’s really important, or because it’s so poorly designed that users have to spend a long time interacting with it, or even because it literally doesn’t work. Poor-quality type-to-search, for example, looks extremely good in telemetry metrics: it looks like people are using it a lot, when what the metrics actually show is that it’s wrong half the time. Refining telemetry to account for that (e.g. to flag subsequent searches for closely related terms, like “Memo” right after “Mem”) is only possible after the data-based interpretation has been shown to be wrong by user feedback, but that kind of reactive approach is only good for hindsight.
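
          As a rough illustration of the kind of refinement described above, here is a small Python sketch over a hypothetical search-event log; the field layout and the “closely related” heuristic are invented for the example, not taken from any real telemetry pipeline.

          ```python
          # Naive metrics would count every type-to-search event as engagement;
          # this sketch flags searches that are quickly retried with a closely
          # related term (e.g. "Mem" followed by "Memo"), which more likely
          # indicate a failed search than a second, independent use.
          from difflib import SequenceMatcher

          def looks_like_retry(prev: str, curr: str, gap_seconds: float) -> bool:
              similar = SequenceMatcher(None, prev.lower(), curr.lower()).ratio() > 0.7
              return similar and gap_seconds < 30

          # Hypothetical per-session event log: (timestamp in seconds, query).
          events = [(0.0, "Mem"), (4.2, "Memo"), (300.0, "Network")]

          retries = sum(
              looks_like_retry(q1, q2, t2 - t1)
              for (t1, q1), (t2, q2) in zip(events, events[1:])
          )
          print(f"{len(events)} searches, {retries} of which look like retries")
          ```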

          I don’t want to say telemetry isn’t useful; it absolutely is, and it’s a shame that bad actors have ruined it for everyone. But it’s no substitute for proper usability studies. It can be an excellent tool for validating design decisions that were made based on more restricted studies, but it’s not a very good tool for making new decisions. Even when it happens to suggest good ideas, the feedback cycle associated with it is so long that the results remain hard to interpret.

        2. 9

          Some projects … are effectively open-loop systems

          I have a glum suspicion that this might be inevitable. Possibly unfixable.

          When I start a project and it has dozens of users, I can talk to them all individually and get feedback. It doesn’t matter how technically sophisticated they are or aren’t; there are so few of them that a single person can have 1:1 conversations with all of them.

          When the project grows, the user base eventually becomes so huge that every feedback channel turns unreadable under a firehose of noise.

          As we know, every change breaks someone’s workflow, so all channels are permanently saturated with complaints. When the project is really big, the number of people who write in with silly complaints like space-bar heating is “way too many to read”, the number of people who write in with sensible complaints like “a bug deleted random files in my home directory” is also “way too many to read”, and it gets harder to distinguish the two.

          What’s worse is that sometimes it does happen that you are the Santiago Calatrava of software design and you genuinely keep ruining things for other people, breaking stuff that used to work and installing shit-awful new impositions in their lives. But if the projects you’re damaging have big enough user bases then you can brush it off as the too-much-feedback problem indefinitely because it’s ambiguous.

          (Regarding that last paragraph, I would like to state for the record that I do not actually dislike Poettering, and that systemd is a good idea with an imperfect implementation.)

          1. 6

            Programming language communities have gotten into the habit of running “User Surveys” yearly. (A twenty-minute survey where you ask your user base various questions, to understand their experience, use cases, needs, and what most needs improving.) Do desktop environments do the same? I don’t remember seeing a link to a “KDE user survey” or “Gnome user survey” in recent years.

            (My experience helping run such a survey for the OCaml programming language is that it’s a bit of work to prepare, and open-ended questions result in a lot of unstructured feedback that can be taxing to restructure into clearer signal. But I would say it’s worth it, doable by a small all-volunteer team, and you can gradually improve the survey each year.)

            1. 2

              Programming language communities have gotten into the habit of running “User Surveys” yearly.

              I’m not so fond of these “user surveys”… I’ve participated in both the Go and Rust surveys for the past couple of years, and, especially for Go, many questions fall into the category of “we are great, aren’t we?”, while the rest are mainly about the status quo, or about obvious issues… (For example, the Rust survey keeps asking about compile times; it’s obvious what the response is…)

              I don’t think any of those surveys are actually meant to discover anything that is not already “mainstream”…

              Thus, I believe the same would happen with surveys in the DE / GUI domain.

              1. 1

                Do you think there is a fundamental flaw with the idea of running surveys, or just that the Go surveys were not actually aimed at gathering frank opinions to understand what users think? If the idea is “surveys are nice in theory but often flawed in practice”, I would argue that having this process in place (in addition to some other ways to gather feedback, I guess?) is still generally a good idea. If the idea is “surveys are fundamentally not the right way to gather feedback”, do you have recommendations on which process to use?

                My own experience with the OCaml survey was more positive. In particular, it was interesting to learn about the relative usage of programming tools (for example: everyone knows that Visual Studio Code is popular these days, but how does it compare to older editors in terms of actual userbase within our community?), and I was surprised by some of the answers to the questions we asked about pain points (in some cases, people were more positive about the user experience than the survey authors expected!). Overall, the results of just the first survey in 2020 suggested a few actions that were useful.

                1. 2

                  I am of the opinion that surveys are a useful tool, but not an easy tool to master… So I believe that if a project actually wants to gather “information” (as opposed to just “data”), that is, insight into their users, they must approach such surveys with care, perhaps with help from professionals (sociologists?), or else they’ll just end up with surveys that reinforce their biases…

                  For example, with regard to your “VS Code vs older editors” question, you can easily assess the current situation (“how do they work now?”), but I think it’s harder to answer the real questions, “are they good enough?” or “what would be the next IDE improvement that could make your life easier?”; for these, many surveys just use open-ended questions, but then it’s hard to distill the answers back into useful information.

                  Also, what I find frustrating about all these “user surveys” is that none of them actually releases their raw data for others to assess / compile. Why?

            2. 4

              Yeah, and when things are just fine, users won’t make an effort to give you that feedback. Nobody’s going to register on a bug tracker or write a blog post to tell you they found a button without any issues and immediately knew it was clickable.

              1. 10

                There’s a story that one small Haskell development tool nearly got deleted / withdrawn from hackage because the author thought nobody was using it. When they posted about it on the mailing lists, a bunch of active users came out of the woodwork all at once. It turned out the maintainer hadn’t been getting any communication from end users because the piece of software didn’t have any bugs. :)

              2. 1

                Once the user base scales past a few dozen users, yeah, you obviously can’t operate at the same level of detail. It’s hard to strike a balance, but there is certainly a balance to be struck, somewhere between talking to each user individually (which obviously you can’t do when you have thousands of them) and passive-aggressively ignoring not just general feedback but outright bug reports (which is what often happens on the Gnome bug tracker).

                It’s certainly true that every change breaks someone’s workflow as in XKCD #1172, but there really is such a thing as bad design, too.

                Lots of people don’t remember it but there were people complaining about Gnome 2 back in the day, too. It even had a fork. It went nowhere, which is why no one remembers it. Gnome 3 has so far spawned one pretty successful fork of Gnome 2 (Mate), made Cinnamon a thing, and its most successful incarnation (Ubuntu’s, single-handedly responsible for a good part of its user base, if not most of it) deviates from upstream quite significantly, and not just for branding purposes. At least some of the “it broke my workflow” complaints are probably not the kind of “it broke my workflow” that XKCD made famous :-).

                1. 2

                  there really is such a thing as bad design, too

                  Yes, that’s why I wrote a paragraph mentioning Santiago Calatrava. :)

                  I probably shouldn’t have brought bug reports into the discussion. The main topic of conversation here is design. Design has more ambiguity in distinguishing relevant from silly complaints.

                  1. 1

                    Oh! Sorry, that totally flew past me, I had no idea who he was and it just didn’t register on my “I should Google it!” radar :-D.

              3. 4

                The problem is not unique to FOSS: macOS and Windows suffer from the same flat design trend. On macOS and iOS it’s often unclear what is a label and what is a button, or whether something is disabled or merely has been made semi-transparent to be less noticeable.

                I don’t know whether Apple isn’t testing their UIs any more, or whether they only pay attention to feedback on how “clean” things look.

                1. 8

                  I think the problem most commercial projects suffer is slightly different, although it has the same effect – and it’s still a problem of misaligned metrics.

                  Once a project ships a universally-acclaimed version of an interface, no head of a UX team can walk into a meeting room and say you know what, this is literally the best there is, from here on we’re just going to fix bugs, unless we have to come up with new interaction models for things that just don’t exist right now. That’s a sure way to torpedo a management career.

                  So you seek quality metrics elsewhere, generally in internal teams. (Some) user feedback is still gathered of course but it’s compartmentalized so that you can show growth and engagement, at least on specific segments.

                  1. 2

                    For OS X 10.0, Apple started doing some user testing that they hadn’t done before: testing based on limited exposure to measure shifts in buying intention. A lot of the visual effects in 10.0 were based on real usability concerns. For example, the genie effect when you minimised a window gave a very strong visual cue as to where the window had gone, which was especially important if a user hit command-M by mistake, and given that it wasn’t the same place as on Classic MacOS or Windows, the user needed to learn this. Unfortunately, animations like this had a far bigger impact on purchasing intentions than they did on usability. Users looked at OS X side-by-side with Windows XP and saw something sleek and with dynamic animations on one side and something plain and static on the other side. This led to a lot of people buying Macs who otherwise would not have done so and created a feedback mechanism where having lots of animations was the important thing; if they improved usability then that was a nice bonus.

                    Since then, I suspect that there’s been a lot more pressure in commercial GUIs to introduce things that drive sales, rather than things that improve usability. Remember that people make purchasing decisions based on a fairly limited exposure to the new system, which isn’t long enough to gain a benefit from most usability improvements, and once they’ve invested in a platform they’re less likely to switch (if they’ve invested in a more-expensive platform, cognitive dissonance helps you keep them).

                    These problems don’t necessarily apply to open source environments, but when you’re judged by the size of the userbase, and Linux distros make it easy for people to try your environment for 10 minutes and then switch to another one, you see similar constraints. The DE that looks the most polished in a 5-10-minute trial will stay installed, will appear in Red Hat’s and Canonical’s telemetry, and will be tagged for more investment.

                2. 23

                  From my POV, the pinnacle of Linux design was GNOME2 + GTK2.

                  I seriously dislike anything related to the new “GUI designs” that make everything monocolored and flat, in order to save money on assets and artists.

                  1. 7

                    My first Linux desktop experience was RHEL 5, and I was extremely impressed by the responsiveness, reliability, consistency, font hinting, and comforting aesthetics. It’s probably nostalgia, but I would have loved it if some project had maintained that DE in perpetuity. MATE with GTK3 just doesn’t cut it.

                    It sort of felt like the GNOME 3 debacle was a prelude to the systemd debacle. Large industry-funded projects were like, “no, it’ll be good, you’ll like it”. Ten years later, it has not panned out.

                    1. 5

                      While I don’t totally dislike GNOME 3, it’s hard to disagree with what you say.

                      I have very fond memories running GNOME 2. Everything, even the sound theme from Pidgin, felt very welcoming and functional.

                      Since that design is no longer around, I just pretend major desktops don’t exist and run bare X with a tiling window manager, a terminal, an editor and a web browser. It’s all I need, it’s really functional, and latency is much lower. I hate how GNOME and friends spin up dozens of processes.

                      1. 1

                        I use XFCE, which mostly feels like it’s stuck in time in about 2006 or so, which is just the way I like it.

                      2. 8

                        I would love to take part in a study. Regularly.

                        I wouldn’t mind telemetry if it were opt-in and pre-aggregated a bit on my machine, to remove e.g. specific times, locations and so on, and then sent weekly.
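
                        Something along these lines, as a purely hypothetical Python sketch (the feature names and fields are invented for the example):

                        ```python
                        # Client-side pre-aggregation: keep only coarse weekly counts per
                        # feature, drop exact timestamps and any location data before sending.
                        from collections import Counter
                        from datetime import datetime

                        raw_events = [
                            # (exact timestamp, feature, location) -- never leaves the machine
                            (datetime(2022, 3, 1, 9, 13), "type-to-search", "Berlin"),
                            (datetime(2022, 3, 1, 9, 14), "type-to-search", "Berlin"),
                            (datetime(2022, 3, 3, 17, 2), "headerbar-menu", "Berlin"),
                        ]

                        weekly_report = Counter(
                            (ts.strftime("%G-W%V"), feature)  # ISO week only, no time or place
                            for ts, feature, _location in raw_events
                        )
                        print(dict(weekly_report))  # this aggregate is all that gets sent
                        ```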

                        1. 6

                          Does anyone have / know of a gallery of screenshots of Gnome / KDE throughout the years?

                          I found some on Wikipedia:

                          (I remember when I first saw Gnome and KDE, especially Gnome around 2004, and I was astonished by the visuals. However, I was never a Gnome / KDE user; I quickly settled on Fluxbox, then moved on to Ion, and I’ve long since been an i3 user.)

                          1. 5

                            Toastytech to the rescue!

                            http://toastytech.com/guis/indexunix.html

                            These are screenshots of default setups. A less curated, but possibly fairly representative bunch of screenshots is available in the uncanniest of places: gnome-look.org (now part of Pling), if you go through the screenshots sections backwards: https://www.pling.com/s/Gnome/browse?cat=257&page=198&ord=latest .

                            There’s also a certain website from a certain country that no one has good things to say about these days, which maintains a large archive of user-contributed screenshots. The archive is here: https://www.linux.org.ru/gallery/archive/ . Please note that I don’t speak the language, though – I understand only a few words, and ironically enough I understand them because they’re similar to their Ukrainian or Serbian counterparts, neither of which I speak, but I have family and friends who do. If there’s any misinformation or offensive material posted there these days, I absolutely don’t condone it, I just can’t tell it’s there. As far as I can tell, there isn’t any, but things are pretty volatile these days. I considered sharing the link by PM, but I figured “I have a Russian link but I can’t post it here, drop me a PM” would be even worse and more suspicious.

                            1. 1

                              When KDE 3 came out I was really impressed. It was disappointing when they changed direction with KDE 4, where everything went flat and dull. I’ve wondered how well these older interfaces would have scaled up to HD and 4k displays. IIRC I was first using KDE 3 on a 1024x768 CRT and then upgraded to a 1280x960 LCD. It never felt cramped to me at the time, but looking back, the screenshots appear really cramped in a way larger screens could alleviate.

                              http://toastytech.com/guis/mkdewindowing.png

                              1. 3

                                While the old style plugins that powered it no longer work, you can still emulate many of them with QtCurve. In my experience they work very well. Flat, airy, touch-optimized interfaces on 30” desktop screens are absolutely nightmarish; I hate them with a passion, which is how I know that QtCurve still works :-).

                            2. 5

                              My main gripe with libadwaita is that it effectively hardcodes the theme. GNOME has officially discouraged theming (by distributions) for some years now, but that’s mainly because the existing theming system is both too powerful and too ad-hoc. In practice, as long as you don’t go crazy with it, GNOME theming works fine. The switch from libhandy to libadwaita changes that. There’s some talk about this being accompanied by a new theming API, but I have no idea if that will actually happen.

                              1. 6

                                I’m surprised people care about theming so much - once you stop being a teenager with no skills and lots of time, it stops being interesting (because you have more rewarding pursuits) and you don’t have the time for it anymore.

                                1. 10

                                  People love theming things. They paint their IKEA furniture, buy colourful phone cases, set backgrounds.

                                  If you make it possible, they’ll also theme software they use daily based on their own preferences. Just look at how many shell/terminal colorschemes exist.

                                  1. 5

                                    Saying that no one needs theming is effectively the same as saying that no one needs different clothes. Orange jumpsuits are the only clothes anyone will need, with a ball and chain as decoration. ;)

                                    1. 1

                                      Color scheme is one thing. Functionality is another. Oh, who thought scroll bars were a good idea? Oh, now they’re on the left? Oh, it’s proportional now. Oh wait! They’re on the right. Now they’re invisible unless you hover over the right edge. Now you have to hover over the left edge.

                                      1. 1

                                        Why is it okay if every app has its own scroll bar and button position, in different places, with different functionality?

                                        And why is it not okay if the user themes the apps so that every app has all its buttons in the same places, and the same scroll bars?

                                        1. 1

                                          What I’m commenting on is that scroll bars exist on the left in version X of the UI. In the next version, they’ve been moved to the right. In the version after that, they’re invisible.

                                          Can a theme define where scroll bars appear? Or how they work? Can I get back the look and feel that I’m used to?

                                    2. 6

                                      I sort of get it. I really don’t care about theming on any platform except Linux, and in particular, except GTK applications.

                                      My external monitor is… mid-range, I guess, it’s not a particularly expensive one. But my eyesight isn’t what it used to be, and in my reasonably well-lit office, I can’t read the text in inactive windows unless I lean over my desk and squint. It doesn’t help that it dims everything, not just toolbar or menu items (the way macOS and Windows do) but the actual window content, too.

                                      The huge widgets are unpleasant, but I think Linux users tolerated it better because we’ve always had oversized widgets. Back in the day, GTK2 was noticeably bigger than Windows and Aqua. It’s even bigger now (you can fit two or three macOS buttons in a GTK3 button…). Frankly, it only bothers me on large screens. On small, high-res screens it’s tolerable.

                                      (Edit: FWIW, I am a little personally invested in this topic. I have pretty good eyesight, as in, I can read without glasses and all – but only in one eye. My bad eye is practically useless now, so I try to take good care of my one good eye, because I’m one stupid accident away from blindness. Not having to either squint or hack CSS themes after each GTK release is basically half the reason why I got a Mac, which was particularly disappointing for me because Gnome used to have a pretty good reputation for accessibility.)

                                      1. 5

                                        once you stop being a teenager with no skills and lots of time, it stops being interesting

                                        I don’t much care for changing themes, myself, but I think you are being uncharitable here and confusing your own preferences with a universal rule.

                                        I do take advantage of themes to have one consistent theme between Emacs, my terminal, my screen locker, my browser, StumpWM and X11 applications. I use one theme on my work machine and another on my personal one, so I always know at a glance which machine I use.

                                        But I don’t fiddle with themes — as you note, there are more rewarding pursuits in the world for me. I don’t begrudge someone else who does, though. It’s a free world!

                                        1. 3

                                          It’s less about theming, and more about not being in a position where well-meaning designers can sabotage my user experience.

                                          1. 2

                                            I don’t care about theming as a user; I want the GUI provider to pick a good one. Theming is a backup mechanism for when the GUI provider has made bad choices. If I care about theming for your GUI, it means that your defaults are bad.

                                            I do care about theming as a developer because I want to integrate my look and feel with multiple host platforms. Not looking and behaving like native Windows / macOS apps has been a problem for a lot of open-source toolkits and means that I can’t adopt them in places where users expect a native experience.

                                            I do care about theming as a packager / integrator because I want to ensure that things all work well together. If some things use a menu bar at the top of the screen and others use one in-window, or if some things put scrollbars on the left and others on the right, that’s a bad user experience. Supporting theming makes it possible for me to provide a single theme for each toolkit that makes it behave in a specific way, so GTK, Qt, XUL, and so on apps all behave in a more-or-less consistent way.

                                          2. 5

                                            It also doesn’t help that Adwaita is a touch-centric, heavily padded mess that’s hostile to real work.

                                            There have been countless studies showing that having to scroll a window to see more information, be it a spreadsheet (accounts etc.), code (the worst being having to scroll up and down a page), or document writing, leads to productivity issues.

                                            And yet here we are, with libadwaita effectively making lower screen resolutions (like 768p) completely unusable, with 1/8 of your display taken up by huge padded titlebars and tabs. Absolutely awful. No one is going to take productivity on a Linux desktop seriously with this oversized rubbish.

                                            And to make it worse, most input devices have gotten more accurate over the years, not less (high-DPI mice vs. the old style).

                                            1. 6

                                              I find it funny people seem to find Adwaita so touch-driven when Gnome doesn’t work well on touchscreens and the designers explicitly say touch isn’t a priority.

                                              There have been countless studies showing that having to scroll a window to see more information, be it a spreadsheet (accounts etc.), code (the worst being having to scroll up and down a page), or document writing, leads to productivity issues.

                                              Source? The last time I used Gnome 3 on a 1024x768 screen (on an old ThinkPad X61), a couple of years ago, I didn’t have any problems. People get into hysteria over whitespace, but even older desktops had whitespace for the sake of making things easier to scan visually and providing better click targets. (i.e. a lot of apps by devs who complain about whitespace look even denser than, say, Windows 2000 was, for the worse.) With a trackpoint, that helped a lot.

                                              1. 4

                                                The parent’s claim is a lot broader than serious HMI investigations can support, and it’s also not something that has been investigated all that much, due to a combination of technical and historical reasons (tl;dr: regardless of how efficient scrolling is, it was, for a long time, the only feasible way to present long-form information). That being said, just about every (narrower) result there is suggests that scrolling is associated with some cognitive impairment or loss of efficiency, or just isn’t preferred. For example:

                                                https://journals.sagepub.com/doi/abs/10.1177/0018720809352788 (specifically, about reading long texts)

                                                https://www.tandfonline.com/doi/abs/10.1080/00140138308963363 (specifically about navigating hierarchies, and it’s particularly interesting given that it’s literally 40 years old – the test installation used PDP-11s!)

                                                https://www.researchgate.net/profile/Michael-Bernard/publication/242537467_Paging_vs_Scrolling_Looking_for_the_Best_Way_to_Present_Search_Results/links/545a41410cf2c46f66424d85/Paging-vs-Scrolling-Looking-for-the-Best-Way-to-Present-Search-Results.pdf is possibly not as relevant but worth a read nonetheless.

                                                I don’t have my notes from back when I was working on a project where this was a relevant question; these are only the ones I could recall by title or author name. There were a bunch of other, less-cited papers that I’d read at the time (ca. 2015).

                                                Methodology quality varies (as in, the closer you get to present day, the more “relaxed” things are, which I think says a thing or two about this, too…) but my understanding is that, in general, no scrolling is better than any amount of scrolling, and less scrolling is better than more.

                                                1. 2

                                                  There are loads, e.g. https://blog.codinghorror.com/does-more-than-one-monitor-improve-productivity/

                                                  This one was measuring screen size, which is basically the flip side of screen real estate (i.e. adding more inches == more space).

                                                  People using the 24-inch screen completed the tasks 52% faster than people who used the 18-inch monitor. People who used the two 20-inch monitors were 44% faster than those with the 18-inch ones.

                                                  GTK theming wastes space. Fact.

                                                  1. 2

                                                    You cut off the last bullet point in the findings:

                                                    Productivity dropped off again when people used a 26-inch screen.

                                                    Which, besides being, frankly, a bit intellectually dishonest, also draws attention to an anti-pattern I see in online discussions: people arguing this way or that way when the real answer is some number.

                                                    I do think a lot of modern designs waste too much space, but if we suppose that the reasoning that got us here was as simple as:

                                                    • Bigger buttons are easier to press…
                                                    • Make buttons bigger!

                                                    Then we should not really expect to get better results by dogmatically moving in the other direction; oversimplifying the results is not helpful.

                                                    1. 1

                                                      No, the exact opposite: there are also issues with having to move your eyes / head to track from edge to edge. Reduce the padding and you can still see more.

                                                2. 3

                                                  headerbars take less space than titlebar + menubar - they waste less space by using the empty area in titlebars

                                                  1. 3

                                                    I mean… maybe if every other widget was a little thinner, we wouldn’t need to reclaim space by reinventing titlebars and active/inactive state feedback along with them.

                                                    Plus, FWIW, this obviously depends on the theme, but back when headerbars were introduced and I tried them, they were definitely bigger than a titlebar from an average theme + the menubar of a Qt application using QtCurve or the default Fusion style. Depending on widget fatness, they were maybe a few pixels thinner than a titlebar + a menubar + a toolbar. Headerbars made a world of difference on Gnome because the titlebar and the padding around menu items in the default theme were humongous but that’s not what everyone was coming from.

                                                    1. 1

                                                      I never said that headerbars aren’t good. I like CSD, and headerbars. I just hate the complete waste of space that GTK3/4 headerbars are.

                                                3. 5

                                                  I think we got here by a sequence of steps that all make sense:

                                                  1. As desktops’ graphics capabilities increased and resolutions increased, desktop UI added more detailed graphics (compare a button in Windows 95 to a button in Windows 10, or in MacOS 9 to MacOS 10.4).
                                                  2. Mobile shows up with less power and resolution.
                                                  3. Companies want to use the same code base on both because developing software is expensive, but buttons that work well on mobile look jagged on high resolution desktop screens and buttons that look good on high resolution desktop screens are cluttered and computationally expensive on mobile.
                                                  4. Someone realizes that “draw nothing” looks the same on both!
                                                  5. “Draw nothing” turns out to be poor UX on both, so in the choice between “keep costs down” and “make good UX” everyone chooses “keep costs down.”
                                                  6. Humans being humans, they rationalize this decision for the sake of their self respect. Certain skeuomorphic overreaches make this easier to sell. As part of this rationalization they reject serious user interaction testing. This lowers costs further.
                                                  7. The rationalization spreads in the UX community since it’s a simple answer with the confidence of a dogma, which is much easier to proselytize than complicated answers involving lots of data.

                                                  And here we are with flat interfaces.

                                                  1. 1

                                                    I’m not convinced by that, because every modern phone (i.e. one that can run Android or iOS) has a GPU. In contrast, Windows XP and Mac OS X were both designed to run well on devices with a dumb frame buffer. You can render much richer UIs smoothly on a 2010-era smartphone than you could on a 2000-era desktop.

                                                    1. 1

                                                      MacOS X I don’t think ever targeted a dumb frame buffer, and 2D compositing in hardware was pretty standard back into the 1990’s, so I don’t think Windows XP was ever designed to target a dumb framebuffer either.

                                                      1. 2

                                                        MacOS X I don’t think ever targeted a dumb frame buffer

                                                        10.0 ran with no hardware acceleration. All compositing was on the CPU. It wasn’t until 10.2 that compositing was offloaded to the GPU. This required a GPU with 16 MiB of RAM, which was available on all Macs. The first iMac that ran OS X shipped with an ATI Rage IIc with 2 MB of SGRAM. Quartz used this as a dumb frame buffer.

                                                        I don’t think Windows XP was ever designed to target a dumb framebuffer either.

                                                        XP, out of the box, used a VESA driver, which exposed a dumb frame buffer. Nothing in the windowing system depended on hardware acceleration. The Aero interface on Vista was the first Windows UI to use hardware compositing.

                                                        2D compositing in hardware was pretty standard back into the 1990’s

                                                        Kind of. In the ’80s, most graphics cards were dumb frame buffers with maybe some text acceleration. In the early ‘90s, they added 2D acceleration for things like line drawing, BitBlt, and sometimes for sprite composition. Often the sprites were fixed power-of-two sized, and so weren’t useful for a general-purpose windowing system. By the late ’90s, most CPUs could do 2D rendering faster than commodity graphics cards and so windowing systems started dropping support for them. A lot of awful XFree86 code went away when X.org dropped support for 2D acceleration (after several years of everyone using the VESA driver because it was faster and more stable than a lot of the ‘Windows accelerator’ cards).

                                                        Compositing for windowing systems didn’t happen until 3D accelerators were common. A 1024x768 display at 16-bit colour needs 1.5 MiB of RAM for the frame buffer. On a graphics card with 2 MiB of RAM (very common in the late ‘90s), that doesn’t leave anything like enough for window buffers. Double buffering (one copy of the frame buffer being written to the display, another being written by the CPU, to avoid tearing) doubled this.
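
                                                        For concreteness, the back-of-the-envelope arithmetic behind those figures, as a small Python sketch:

                                                        ```python
                                                        # Frame buffer sizes in MiB for a 1024x768 display at 16-bit colour.
                                                        width, height, bytes_per_pixel = 1024, 768, 2
                                                        frame_buffer = width * height * bytes_per_pixel / 2**20
                                                        print(frame_buffer)      # 1.5 MiB for a single frame buffer
                                                        print(frame_buffer * 2)  # 3.0 MiB with double buffering
                                                        ```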

                                                        Windowing systems of the era typically allowed applications to draw directly to the frame buffer. For example, QuickDraw on Classic MacOS set up a clipping rectangle for the window’s area and allowed direct access to the frame buffer. Other systems would buffer in main memory and then copy into the frame buffer. Even buffering full windows in main memory was too expensive for most systems. The first iMac had 32 MiB of RAM, which was quite a lot for that time. If you had enough windows open to cover three times the screen area if you put them all side-by-side, then that’s 6 MiB, so a very sizeable chunk of the total RAM, before applications use any of it for their own non-display state. Windowing systems typically kept a list of exposed rectangles for each window and forced redraw on exposed bits. You can see this if you drag a window in Windows XP, for example: the exposed bit is first drawn as white and later filled in with real contents. As an optimisation, the buffered rectangles would slightly exceed the area and so a small drag would trigger redraws of a larger region but have data available to render in the newly exposed bit immediately.

                                                        With a compositing windowing system, this is very different. Each window renders to a texture. If a window is exposed, the application doesn’t need to do anything; the windowing system already has the texture that backs it (typically in GPU memory) and just composites it in the newly exposed space. The windowing system may tell you that some parts of your window are occluded so that you can avoid drawing them if that’s expensive. The flow generally happens in the opposite direction: the application decides some portion of a window is updated, renders it, and notifies the windowing system that a portion of the texture needs replacing with new contents. This is only possible if you have enough video memory for some multiple of the frame buffer size.

                                                        1. 1

                                                          Thanks! That’s a lot of good information and a compelling argument that I’m wrong.

                                                          I have no other excuse for flat design in that case. Brain damage.

                                                      2. 1

                                                        MacOS X I don’t think ever targeted a dumb frame buffer, and 2D compositing in hardware was pretty standard back into the 1990’s, so I don’t think Windows XP was ever designed to target a dumb framebuffer either.

                                                    2. 1

                                                      I do like the old Adwaita buttons… but they also feel really heavy when there’s enough of them. New Adwaita still needs some more polishing, but I like the overall direction.

                                                      1. 1

                                                        I understand why people write these articles, but the argumentation here is just “I don’t like this”. Can’t we talk about design with better arguments? Maybe even data? (Not that I have any at hand.)

                                                        1. 8

                                                          but the argumentation here is just “I don’t like this”

                                                          That’s just the title. The article goes a bit more in depth on why the author considers the new design to be worse, though you’re right about the data; the article only presents anecdotes:

                                                          I feel like the designers of this new theme have never sit down with anyone who’s not a “techie” to explain to them how to use a computer. While a lot of people now instinctively hunt for labels that have hover effects, for a lot of people who are just not represented in online computer communities because they’re just using the computer as a tool this is completely weird. I have had to explain to people tons of times that the random word in the UI somewhere in an application is actually a button they can press to invoke an action.

                                                          1. 2

                                                            That’s fair, I missed that paragraph. I agree that it’s not data though.

                                                            1. 6

                                                              “Data” doesn’t make arguments automatically better. Quantitative analysis isn’t appropriate for everything, and even when it may be useful, you still need a qualitative analysis to even know what data to look at and how to interpret it.

                                                              1. 1

                                                                This is the kind of reply that’s easy to agree with. 🙂