The red lines in the image denote the remaining hotspots available for resizing. The blue lines denote the remaining hotspots available for moving the window.
I think I actually fell victim to this, but dismissed it as some temporary glitch.
I recently encountered Firefox’s megabar and was shocked, as it seemed like such a bad idea. I usually hesitate to say things like this about UIs, as most of the time it’s just a matter of getting used to them. What surprised me more was that my father, who was sitting behind me, didn’t understand what I was even talking about. He has been developing and using various environments (passively) for almost 30 years, and I notice that most modern UIs just confuse him. For example, while using Ubuntu and wanting to check a calendar, he didn’t bother trying to find where Unity hides it, but opened a terminal and ran cal. When he needs to open a text file he uses nedit. But what I realized is that he has been dragged through so many different UIs and UXes that he has lost “sight” of such comparatively small changes as a “megabar” or a few more or fewer hamburger menus – why worry about it if the next fad is just around the corner?
this is most of the reason why i tend to stay in the terminal for most work-related things. generally, if i can do something in the shell and i’m not trading off a bunch of functionality for doing so, i’ll try to do it in the shell. UIs change; command-line tools can’t, because they’re open APIs.
People are reinventing terminal tools every day. There are a lot of “alternatives” to things like top, ls, cat, (…), generally written in a safer language and with blinkenlights added. Of course, you’re not forced to use those.
The same thing has happened to Windows and GNOME, except there it’s much more visible, since that is what most people see every day with no alternative (at least on Windows).
I think you’re right that the difference is that a tool like ripgrep can be written by one person, and is not to the exclusion of the original.
A web browser is necessarily so complicated in 2020 that immense resources are required to make one happen. There is an extent to which users are held hostage to the whims and capricious changes in those UIs, because it’s difficult or impossible to do anything else.
when i first read this comment i thought you were referring to firefox’s default configuration with no search bar, and blank space on either side of the address bar.
then i saw the megabar. oh my goodness.
Luckily it’s easy to disable via about:config, but seeing that there is no “user friendly” way to do the same, it’s obvious that Mozilla wants this to be used.
Only for two months. Firefox 77 will remove the pref, at which point you will need a user style instead.
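Until then, the relevant prefs can also be pinned from a user.js file in your profile directory. The pref names below are from memory (the megabar rollout prefs circa Firefox 75) and may differ or be gone on your version – treat them as an assumption and verify in about:config first:

```js
// user.js in your Firefox profile directory.
// Pref names are from memory; verify them in about:config
// before relying on this -- they may not exist on your version.
user_pref("browser.urlbar.update1", false);
user_pref("browser.urlbar.update2", false);
```

Once the pref is gone, a userChrome.css rule targeting the urlbar is the remaining option, as the comment above notes.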
you serious?
That’s what’s been posted to bugzilla.
UI and UX have nothing to do with usability anymore; all they are is fashion. They chase trends and fads.
That, plus:
Some believe that desktops should copy all UI conventions from mobile devices. Because we can achieve some unicorn hybrid system which is fit both for touch and traditional desktop use. (Maybe such an optimum exists, but no one is remotely close).
Web apps taking over. Electron et al. are cheap for whoever provides the app (you only have to develop one app in place of one per platform + you can hire web developers with generally lower salaries). But externalize the cost to users, both in terms of resources and by throwing away all platform look & feel. Heck, even macOS is not immune these days. Arq, which used to be an excellent native Mac app had its interface rewritten in Electron.
Wrong incentives within companies. Continuously shipping new versions + earning promotions by making a visible mark on an application. How can you get promoted if you just hammer out issues in an application with a familiar, boring UI?
It’s really sad how these trends have utterly destroyed the Windows and GNOME UIs and are even slowly chipping away from macOS (which, as long as you are running native Cocoa apps, at least still largely follows conventions: they have somewhat uniform title bars, still have a menu bar, etc).
Though it may also have something to do with age. In general, it seems that younger folks who have grown up with mobile UIs do not mind Electron apps etc. as much as those who used computers in the Mac OS classic, Windows 95, and GNOME 1.x–2.x days. In fact, they often appreciate that applications look the same across platforms.
My biggest gripe with modern “Apps” is latency. Even the simplest mobile application requires a round trip to some server for almost every interaction, over a potentially flaky wireless connection, so you can expect random stalls in your interaction with the software.
It’s not just web apps that have this issue, either – I cannot use Word to write documents because the characters only appear on the screen after half a second. I feel like, as people use more JavaScript-heavy web apps, they have become acclimatised to the situation and regard it as perhaps less than ideal, but assume that nothing can be done about it.
Part of the issue with Word is that the cursor is animated to follow the text at a slight delay so that it feels smoother. I kind of like how the effect looks, but it does definitely make the text input feel laggy even if it actually isn’t any laggier than other apps. There’s an iPhone app called “Is It Snappy?” that uses the camera at 240 FPS to help measure latency. On my machine, the lowest latency comes from the Kitty terminal emulator (it’s GPU-accelerated, which helps) and Chromium (also GPU-accelerated). Your keyboard itself can make a difference to input lag as well; this is one of the only good reasons for buying a “gaming” keyboard (aside from NKRO, if that matters to you).
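The 240 FPS measurement reduces to simple arithmetic: each frame is 1000/240 ≈ 4.17 ms, so you count the frames between the keypress and the pixels changing and convert. A trivial sketch (the helper name is mine, not the app’s):

```python
def frames_to_ms(frames: int, fps: int = 240) -> float:
    """Convert a frame count from a high-speed recording to latency in ms."""
    return frames * 1000 / fps

# 24 frames at 240 FPS is 100 ms of keypress-to-photon latency.
```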
The lowest latency on my machine is in the TTYs - even when I used kitty, typing felt noticeably more responsive in the framebuffer. I do have a QMK keyboard somewhere, which I’ve never measured the latency of, but to be honest I mostly just use the laptop keyboard because it’s more convenient.
Yes, I was thinking of GUI apps, but typing in a framebuffer is faster (probably since all of the relevant code is in the kernel). I remember reading something about how GUI Emacs has a very low input latency, but I haven’t been able to reproduce that finding myself.
I haven’t tested it, but GUI Emacs doesn’t feel slow – though it doesn’t feel significantly faster than anything else, either.
I’ve noticed that disabling compositing – if you can do that – helps tremendously under X11.
I run just a window manager and haven’t installed a standalone compositor, so I don’t think I’ve got one of those. Thanks though :)
Sure thing! That’s pretty much my setup as well – precisely because of the latency. BeOS spoiled computing for me, I guess :(.
I never used BeOS, but I’ve used Haiku quite heavily, including as my primary OS for about six months last year. Eventually I moved back because the mouse-driven workflow really wasn’t working for me, and there were quite a few crashes, but it really made me sad about the way computer systems have gone.
funny coincidence, but I recently modded a Lenovo S10-2 with a sunlight-reflective screen to run Haiku as an outdoor “writing laptop”.
https://www.engadget.com/2010-07-19-how-to-install-pixel-qis-3qi-display-on-your-netbook-and-why.html
I couldn’t upvote this fast enough. The now-dominant consensus and ideology of UX, whatever we’re calling it, is making astonishing progress at destroying every shred of reasonable consistency and well-tested convention across every interface surface I can think of aside from, maybe, the terminal and CLI tooling.
That one is easy: if people know what they’re talking about, they call it HCI. If they don’t know what they’re talking about, they call it UX. It’s a great filter word. As soon as someone starts talking about UX, you immediately know that they have no understanding of cognitive psychology, won’t be able to cite any research to back up their assertions, and are highly unlikely to have any opinions worth listening to.
Good usability is hard. It’s a global optimisation problem. The worst thing that’s happened to software in the last two decades is the rise in people who think usability is an art and not a science.
Anyone thinking of designing an interface (including an API, so pretty much anyone writing any software) should read The Humane Interface. Some of the things that Raskin says are a bit dated (for example, his discussion of Fitts’ Law doesn’t cover how the concepts apply to touchscreens) but most of them are still good guiding principles (especially Raskin’s First Law: a program may not harm a user’s data or, through inaction, allow a user’s data to come to harm).
And any UI designer reading this will be so triggered they will raise their shields and never consider returning to the past.
I doubt there’s an effective way of getting the point through that these modern UIs are garbage.
CLI tooling is not exempt, either. See: a million Node.js tools that joyfully barf ANSI escape sequences or “reactive” progress bars into their output, even when it isn’t a TTY.
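The fix for that is nearly a one-liner, which is what makes the offenders so grating: check whether the output stream is actually a terminal before emitting escape codes. A minimal sketch of the idiom in Python (Node’s process.stdout.isTTY serves the same purpose):

```python
import sys

GREEN, RESET = "\x1b[32m", "\x1b[0m"

def colorize(text: str, stream=sys.stdout) -> str:
    """Wrap text in ANSI color codes only when writing to a real TTY."""
    isatty = getattr(stream, "isatty", None)
    if isatty is not None and isatty():
        return f"{GREEN}{text}{RESET}"
    return text  # piped or redirected output stays clean
```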
There is a silver lining here!
I have, um, been following this with quite some interest, then frustration, then rage, ever since 2012 or so, when flat, convergent UIs started to become hip. I am, at this point, quite convinced that the current trend will soon start to reverse.
Up until two or three years ago, simply questioning any of these things was a faux pas. Of course applications that look the same on desktops, laptops, tablets and phones are the right way; that’s where the future lies. Of course flat icons, flat buttons, flat everything is the way to go; the cleaner look helps you stay productive. Of course larger widgets are important; Fitts’ law says they’re easier to use (never mind that it doesn’t say anything of the sort…).
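For the record, what Fitts’ law actually says: movement time grows with the index of difficulty, ID = log2(D/W + 1), where D is the distance to the target and W its width – so widget size only matters relative to travel distance, and the effect is logarithmic, not a blanket “bigger is better”. A sketch of the Shannon formulation (the a and b constants are device-dependent; the defaults here are made up for illustration):

```python
from math import log2

def fitts_movement_time(distance: float, width: float,
                        a: float = 0.1, b: float = 0.15) -> float:
    """Predicted pointing time in seconds per Fitts' law (Shannon formulation).

    a and b are empirical per-device constants; these defaults are
    illustrative only, not measured values.
    """
    return a + b * log2(distance / width + 1)

# Doubling a target's width only shaves roughly one "bit" off the
# difficulty: fitts_movement_time(512, 16) - fitts_movement_time(512, 32)
# is about b, not a halving of the total time.
```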
For the last year or so, though, I’ve been hearing more and more people openly speaking out against this stuff. It’s certainly aided by the unfulfilled commercial promises – or outright failures – that it brought. Convergent UIs were a disillusionment – an idea that looked great on paper but turned out to be pretty awful in practice. Laptops with touch screens looked, at some point, like the future was finally here – then it turned out they’re pretty awkward for doing real work (mine was great for like the first two hours; I don’t think I’ve ever been as enthusiastic about a mouse as I was after that, not even the first time I saw one). It’s getting harder and harder to argue that it’s a good idea to design something to be used across umpteen devices when that’s clearly not gonna happen. Consequently, it’s increasingly hard to come up with reasons to make applications that are meant to be run on devices with Retina or desktop 4K monitors, keyboards, and high-resolution mice and trackpads the same way you make apps designed for 5” screens that you can interact with only by poking and shaking them.
But it’s also aided by the fact that, now that the novelty has worn off, people are starting to realize a lot of these things really are awful. Flat, thin-bordered icons look like anonymous blobs of colour unless they’re 128x128 – the way all screenshots show them, although very few people use them that way. Flat widgets look great until you try to get them into an application that legitimately needs three dozen widgets on a screen, like CAD apps or media production apps – turns out being able to tell a button from a label when you’ve been staring at the same screen for 13 hours is way more important than looking good in screenshots. Large, spacious widgets look like a good idea until you plug them into a productivity app. Disappearing scrollbars are great for infinitely scrolling a list of cat pictures but they’re a nightmare in CAD or other graphics software.
There’s now a small, but growing, and increasingly cohesive group of people who actively question these things, and I think there is some hope that the wheel is going to start turning.
I’m not really involved in UI/UX design anymore. I used to work with a UX group many, many years ago, at one of my first gigs, but all I did was help people smarter (and way older than me, at the time) set up test rigs for usability studies. I haven’t written frontend code in more than 13 years now. I’ve been following these things for a very selfish reason. I have very poor eyesight in my left eye (my right eye, thank God, is fine). I’m effectively one-eyed, and large, spacious interfaces mean that I have to move my one good eye around a lot more. Which is pretty awful when your job involves looking at a monitor all the time :).
At this point, the search field under the Help menu in macOS will keep me using it until someone matches it. Searching drop-down menus by hand has been a staple of GUIs since the ANSI-graphics era; automating that search saves me so much time and makes all other operating systems seem broken, UX-wise. I’ll still use them for specific needs, but they’ve all been navel-gazing for a decade or so. The sad part is that a cross-platform app like Harrison Mixbus is 100x more usable under macOS than Windows or Linux (and I’ve used it under all 3), simply because of this feature and the deep menus it has as a DAW.
It bugs me so much that Office has almost copied this feature. If you type something into the search box, it will find the relevant ribbon option for you, just like the macOS Help search box. On macOS, however, when you select one of those options it opens the relevant (sub)menu and points at the option so that you can find it again. With the Office Ribbon, it just shows you a copy and doesn’t tell you how to find it again, so the user doesn’t learn anything (though if you do it enough, eventually the thing you’re looking for is promoted to the Help menu). Both are missing one of the great features from NeXTSTEP, where you could tear off any submenu to make a floating panel, so if you were repeatedly using a button three levels deep in a menu you could leave that menu somewhere near your cursor.
The menu tearing option was in GTK for a while as well together with my favorite: hover over an option in a menu, press a key combination and, hey!, this is now your keyboard shortcut to that menu item.
Don’t know why they disappeared.
Did a little search; the explanation shows how much we as users are at the whim of the developers at the base of the pyramid. Not that the Mac hasn’t taken away good things, but from the Aqua beta to today, I’d argue it has seen the least disruptive change.
“GtkTearoffMenuItem is deprecated and should not be used in newly written code. Menus are not meant to be torn around.”
https://developer.gnome.org/gtk3/stable/GtkTearoffMenuItem.html
The feature you liked was “not meant to be.” I personally never used it.
I think one underappreciated issue here may be webapps. They are a window within a tab within a window. And they never really /had/ an option to conform with the old style.
I won’t comment on the other things, but I do really like the Chromium tooltip. When you have a few hundred tabs open in one window (not hard to do gradually when you’re not paying attention and just trying to solve some problem under time pressure), the only thing you can see other than the tooltip is the favicon.
I think Firefox has a better story here. There is a drop down menu on the right that allows you to see the full title of each tab in a more classical vertical list when necessary.
I still miss Tab Mix Plus and having rows of tabs wide enough to see more than the favicon.
Simple Tab Groups helps a bit; I usually have two windows, three tops if I’m also playing YouTube or whatever. It keeps work organized, and a single row of tabs encourages me to keep the tab count low.
Sucks for Chrome users, who have to develop an even worse Stockholm syndrome.
It’s not the same thing, but Chromium also has a built-in tab groups feature (although it’s hidden behind a flag): chrome://flags/#tab-groups
I personally switch back and forth between Firefox and Chromium depending on which is the least annoying for what I’m doing – Firefox still doesn’t do font rendering the way I like it, which is annoying (on Windows it’s the opposite). Plus, they still haven’t made it possible to customize scrollbars with just CSS, so sites like the ArcGIS/JHU COVID-19 dashboard look really ugly in Firefox.
That said, Tree Style Tab is sometimes worth using Firefox for. The web fonts inspector in Firefox’s devtools is also occasionally useful. Neither browser is perfect; they just have slightly different compromises.
Anyway, the article is spot on.