1. 38
  1.  

    1. 18

      Today’s icons are often extremely stylized to the point of being meaningless, at least without knowing what their predecessors once looked like.

      There’s a fairly simple test you can do here. Downsample the icon set through successive powers of two, down to 1×1-pixel images. Then scale each downsampled version back up to the same on-screen size. Ask users to pick the correct icon, gradually increasing the level of detail. Then give them five minutes to study the high-resolution icon set and repeat the process.
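
      For the curious, here’s a minimal sketch of the downsampling step in Python, using Pillow; the file name and the 32×32 source size are assumptions, not anything from the article:

      ```python
      # Render an icon at each power-of-two level of detail, scaled back up
      # to a fixed on-screen size so every version occupies the same area.
      from PIL import Image

      SIZES = [1, 2, 4, 8, 16, 32]  # powers of two up to the (assumed) 32x32 original

      def mosaic_levels(path, display_size=128):
          icon = Image.open(path).convert("RGBA")
          levels = []
          for size in SIZES:
              small = icon.resize((size, size), Image.LANCZOS)  # downsample
              # Nearest-neighbour upscale keeps the blocky pixels visible.
              big = small.resize((display_size, display_size), Image.NEAREST)
              levels.append(big)
          return levels

      # e.g. inspect the 4x4 rendition of a hypothetical icon:
      # mosaic_levels("icons/save.png")[2].show()
      ```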

      For a lot of ’80s and early ’90s icons, the second time through, you can often pick the right one in the 4x4 or 8x8 representations. This matters because it means that you can tell the position of a specific icon easily with your peripheral vision. If you have to hunt, you’ve moved your focus from the task at hand to the secondary task of finding the UI element to use to do the thing.

      The hamburger menu also comes to mind. Compared to traditional menu bars, it counteracts Fitts’ law, impedes discoverability, and often increases the number of clicks needed to navigate.

      Well, kind of. It’s worth noting that Fitts’ Law is specific to a category of pointing device. It doesn’t apply to touchscreens (there’s another law that gives equivalent calculations for touch screens, but I can’t remember its name and I’m too lazy to look it up). For small handheld touchscreens, radial menus from the lower two corners tend to do best.

      How, then, are Microsoft’s gargantuan “ribbon”-style toolbars rationalized - and what type of research and data prompted their introduction?

      The ribbon also commits one additional cardinal sin: its fast-access mode does not promote discoverability. If you don’t know where a thing is in the ribbon, you search for it and click the button directly from the search box. The next time you want it, you do the same thing: frustratingly, you know it’s somewhere in the ribbon, but possibly as a button that pops up only after pressing another button. Apple got this right. If you don’t know where something is on macOS, you search in the Help menu and, when you select the item, it expands all of the menus leading to that menu item so that you can see where it was. It will also show you the associated keyboard shortcut, if there is one.

      1. 4

        Well, kind of. It’s worth noting that Fitts’ Law is specific to a category of pointing device. It doesn’t apply to touchscreens (there’s another law that gives equivalent calculations for touch screens, but I can’t remember its name and I’m too lazy to look it up). For small handheld touchscreens, radial menus from the lower two corners tend to do best.

        I’m not sure which equivalent you’re thinking of – my own interest in HCI waned around the time when this paper came out, so I haven’t really followed the field – but a bunch of them have been proposed. I remember reading about one that was a pun on fat fingers and was… FFitts law, I think? But it was one of many. All of them do agree with your point, in any case.

        Tangentially, I am somewhat unconvinced that you can get a good law for handheld touch screens as we know them in modern handheld devices.

        Fitts’ original experiment was so successful because it avoided “badly” non-linear behaviour near anatomical frontiers (scare quotes because his relation isn’t linear, of course). The “actuator” side (i.e. the hand moving the cursor) operates well within its anatomically linear parameters – in the original experiment it executed ample motions against a set target – so it’s a reasonable assumption that its behaviour is primarily determined by motion-coordination factors. That’s not really the case for small handheld devices, especially for thumb input, which happens far closer to anatomical limits (i.e. in some cases, the main obstacle to touching a target is that fingers just don’t bend that way).

        Most 2D models I know of try to incorporate that by adopting a particular statistical distribution of reachable endpoints. Unfortunately, most of these aren’t very useful for modern handheld devices, which see a lot of thumb operation, where the shape of the distribution depends heavily on device shape and size, finger size, and anatomical limits. Most of them go right out the window as soon as you try to apply them outside the 18-25 age range, especially once you get into things like arthritis. It’s hard to get a model that’s both simple and able to yield predictions more useful than “things placed one thumb’s length away from the bottom corner in any direction are easier to hit”.

        But that’s waaaay too advanced :-). Fitts’ law is routinely misapplied even for non-touch devices. Lots of recently revamped UIs actually fare worse by Fitts’ metrics than their old versions – mostly because Fitts’ law is frequently misquoted as “bigger widgets are easier to hit”. In fact, the rate at which the difficulty index increases (logarithmically) with distance is twice the rate at which it decreases with the target’s width. In practice, most menu items in contemporary vertically-triggered menus (or horizontally-navigated toolbars) have a higher index of difficulty than their pre-touch-era counterparts.
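
        To make the metric concrete, here’s a toy index-of-difficulty computation in Python, using the common Shannon/MacKenzie single-coefficient formulation, ID = log2(D/W + 1) (two-part models weight distance and width separately, as noted above); the pixel figures are invented:

        ```python
        # Toy Fitts' law comparison: same travel distance, different target widths.
        from math import log2

        def index_of_difficulty(distance_px: float, width_px: float) -> float:
            """Shannon/MacKenzie formulation: ID = log2(D/W + 1), in bits."""
            return log2(distance_px / width_px + 1)

        # Hypothetical wide, labelled menu-bar item vs. a small flat toolbar icon.
        classic = index_of_difficulty(distance_px=400, width_px=80)   # ~2.58 bits
        revamped = index_of_difficulty(distance_px=400, width_px=24)  # ~4.14 bits

        print(f"classic: {classic:.2f} bits, revamped: {revamped:.2f} bits")
        ```

        Even with identical travel distance, shrinking the target alone adds about 1.5 bits of difficulty in this made-up example.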

    2. 15

      The mIRC example is, um, not great. The author says it has

      Icons discernible through both shape and colour.

      But when I look at it, approximately half the icons are small rectangles. Six of those are rectangles with tiny pithy text labels inside them, giving even less room for the graphical elements that supposedly help to distinguish them further. And even with my glasses on it’s quite difficult, at normal distance from my screen, to clearly distinguish what’s going on in those icons.

      I strongly suspect the author is making the common mistake of having familiarity from long experience and conflating it with ease for new users (and I never used mIRC back in the day – I’ve always been a command-line IRC client user – so I don’t have the long experience and familiarity with mIRC that would help me instantly know what all those similar-looking little rectangles do).

      If someone still thinks that’s a good example, go find a person who’s under the age of 20, and ask them to guess what all these Windows 95 icons mean. Just as the author didn’t recognize a banker’s/file box, that icon sheet is full of objects and concepts which once were extremely common/recognizable/“iconic” and now are not. And so there are infamous stories like people asking what the save icon (floppy disk) in so many programs is supposed to be. Plus, take a look at that Windows 95 icon sheet again from a few feet away, and notice how many of them are not particularly distinct in shape or color.

      And this is kind of a recurring problem throughout the post. The author complains, for example, that the flag icon in Outlook “might also just be a sketch by Mondrian”, but that style of waving-banner icon for “flag” is very common, and is common because the shape of it helps to distinguish it from other plain-rectangle style icons. Does that mean someone who’d never used an email program (let alone Outlook) before would instantly know what it means? Of course not – that falls into the fallacy of the “intuitive” (i.e., instantly perfectly understood by someone who has no prior context for the program or its functionality) interface.

      Interfaces have to be learned, but the important thing is that they have consistent conventions for those who have learned them, and affordances to assist in learning, not that they be perfectly intuitable to a Boltzmann brain that just popped into existence a moment ago. And far too much complaining about “usability” really comes down to “conventions have evolved over time, away from what I knew years ago” – it’s not that modern software lacks conventions, it’s that modern software has different conventions than it did in the Windows 95 days.

      This doesn’t mean that all modern software has great usability, of course, but the older software the author holds up as better examples was not particularly great in its day, which undermines the whole “decline” narrative – there’s always been a spectrum of software usability, and I don’t think prior eras were on average much better than today, nor is today on average much worse than earlier.

      1. 3

        But when I look at it, approximately half the icons are small rectangles.

        At least they’re somewhat different colours. I look at my Gnome Files icons and they’re all small white bars on a black background.

    3. 13

      I was nodding along, until the mIRC screenshot gashed my eyes. Ow. What a monstrosity (like most Win95 UIs). If they wanted an example of the laudable principles listed below it, surely there were better choices.

      1. 16

        I think the point was that mIRC is an incredibly low bar and yet a lot of modern apps still manage to be worse.

      2. 15

        I’ll take the mIRC UI over the flat ones any day of the week.

        I personally can’t stand the “UIs need to be ‘beautiful’” meme. Software is a tool, and a “beautiful tool” is one that’s well crafted and gets the job done efficiently.

        Too many UX designers seem detached from the users of the UIs they design, and treat a “beautiful UI” the same as a “beautiful artwork”. Software isn’t hung on a wall to admire, it’s used and worked with.

        1. 6

          Well-crafted and beautiful are more similar than you think. All the sharp edges and high contrast in the Win95 UI add visual noise, and the windows-within-windows thing is confusing because the layering is hard to discern and inner windows get clipped by outer ones.

          I agree that many designers focus too much on aesthetics alone, but that doesn’t mean aesthetics aren’t important and visual style doesn’t affect usability.

          1. 2

            Did you know that cities are often replacing “noise restrictions” with “sound restrictions”? What your neighbor calls “noise” you simply call “band practice”… does it violate the law?

            Who is to say whether those borders are “visual noise” or “clear functional distinctiveness”? I think this is the author’s point about asking where the usability studies are.

    4. 8

      Using software professionally isn’t about having a chic, boutique experience - it’s about getting the job done as quickly and efficiently as possible.

      This, I think, cuts to the heart of the issue. Product designers are busy trying to design “experiences” and I just want stuff that I can treat like a utility. I want my software to be like my dumb fridge – it’s a means to an end. Just keep my dang food cold. If I’m conscious of how I’m using my fridge, somebody screwed up.

      Even though Apple is better at design than most companies, I completely blame this “experience” trend on them. They used to be better. Try using an iPod from a decade ago and notice how much more of a utility-like experience it is. It feels more like a tool to listen to music rather than some flashy app that’s trying to impress me.

    5. 6

      What is the “Archive” icon even supposed to depict? The lower part of a printer, with a sheet of paper sticking out?

      It’s a banker’s box. It’s a common way to store archival papers in America.

      1. 6

        So, a product that’s sold across the world chose an icon representation that makes sense only to people in one country?

        HCI books from the ’80s talk about that as a bad idea. The common example is the use of an owl, which means wisdom in many European cultures but black magic in some other parts of the world. Picking a locale-specific physical object is even worse.

        Mind you, Outlook still can’t do quoting right in replies, in spite of people complaining about it since I was a child, so I have very low expectations for that team. They’ve completely rewritten the app at least twice without fixing basic functionality.

        1. 3

          The concept of boxes into which documents are placed for longer-term storage is not unique to the US. Nor as far as I’m aware is the particular form factor — the term “banker’s box” may be the US-specific thing here.

          I absolutely have seen documentaries of museums and archives in other countries with boxes of extremely similar form factor. And clerical/office staff (the traditional target users of much “office” software) would historically have been quite familiar with such boxes.

          The real issue here is almost certainly temporal — the archival storage box is now an anachronism on par with the floppy disk save icon. It’s a metaphor for a physical-world thing that in the physical world is no longer a common object.

          1. 4

            The concept of boxes into which documents are placed for longer-term storage is not unique to the US. Nor as far as I’m aware is the particular form factor — the term “banker’s box” may be the US-specific thing here.

            I did some consulting for a company that manages warehouses for long-term document storage (and also did fun things like taking tapes from banks’ mainframes and printing their daily audit results on microfiche). They had a lot of boxes in their warehouses, but very few looked like the ones in the icon. I actually owned a few boxes like that (Staples used to sell them), but I would never associate them with archiving (in part because they ended up being stored in a basement and nothing in them survived).

        2. 3

          I don’t know how common Bankers Boxes are in other countries. I know the author is Swedish, so that might affect their perspective. I do know that Manila folders are uncommon outside the US, and becoming less common in the US as computers replace filing cabinets.

          1. 1

            I can attest to that. I’d literally never seen one until I was well into my twenties. They are so uncommon that any equivalent term for it in my native language is ambiguous; pretty much every word you can use to translate the word “folder” also means “file”. We settled on an awkward convention at some point in the late ’90s – awkward because the word used for “file” also denotes a box or a locker that you put folders in, not the other way around – but it’s a convention that’s entirely specific to computers; it has no real-life counterpart.

            My hot take on the subject is that it’s a fun anecdote but a largely irrelevant design problem. The icon is weird, for sure, but it takes about two double-clicks to figure out what it’s for. Other than making localisation via automatic translation weird (Google Translate & co. don’t know about the conventional, computer-specific translation of those terms, so they end up using the equivalent terms for “file” and “folder” interchangeably), it has no discernible effect on computer usage. Like all technical terms, and like all symbolic representations of abstract or technical concepts, they’re just things you learn.

        3. 1

          The common example is the use of an owl, which means wisdom in many European cultures but black magic in some other parts of the world.

          That makes me really want to use owl imagery in any arcane documentation I write. Two (correct!) meanings in one :)

        4. 1

          It’s not a US-centric thing. An insurance broker I’ve known since being a kid in the 90s has a room chock-full of these boxes, and I’m from the UK.

      2. 4

        You are both wrong, that is obviously a Lego man with a mustache, wearing a flat cap and looking towards the left.

        What truly baffles me in that picture is why the junk-bin icon is next to the Delete label, rather than next to the one saying, like, Junk :-).

        1. 3

          Oh, that’s called a delete bin. It’s a common way to store unused papers in America.

          Sorry, couldn’t resist.

          On a more serious note, I suspect the reason for a whole lot of these terrible designs is branding. Companies desperately want their products to be different from everybody else’s. Especially anyone with near-monopoly power: to really milk that cognitive dissonance your users will get from trying to use anything else, and to force your competitors to take a huge opportunity cost trying to keep up with the changes. Using the same icons and naming for things could be considered being a follower, rather than a leader, or some such BS.

          1. 2

            Oh, that’s called a delete bin. It’s a common way to store unused papers in America.

            Half-serious, but all the delete bins I’ve ever seen have grid netting, or are translucent/solid. That one looks just like an old rubbish bin, hence my joke :-).

            Hidden behind my entirely unclassy joke is my equally unclassy professional opinion that, like most graphical conventions based on stylised concepts and symbols, software icon representations are entirely conventional: they’re rooted in the practices of particular cultures or niches and are disseminated through adoption, like virtually all symbolic representations in this world, from mathematical and technical symbols to Morse code for the Latin alphabet. Consequently, there is far more value in keeping them constant than in chasing magic resonant energy inner chakra symbolic woo-woo intuitiveness, or whatever the word for it is today.

            Half the road signs out there are basically devoid of inherent meaning for most drivers. They work fine because you learn them once and, in most cases, you’re set for life. Left untouched, icons would work fine, too.

            1. 1

              Very good point. Like how everybody recognised the “save” icon, even if they’d never seen a floppy disk. On a related note, I wonder how we could salvage the situation, and get back some consistency. We’d need to somehow shift the incentives of companies intent on “branding” everything in sight. Maybe an accessibility org with a bit of clout could start certifying the accessibility of applications, and deduct points for any unrecognisable permutations of well-known patterns?

              1. 1

                I don’t know if all-round, universal standardisation is possible. There are standards specific to certain niches (e.g. ISO 15223 for medical device labelling, or the ISO 7000-series symbols for use on equipment) or to specific equipment (e.g. IEC 60417 for rechargeable batteries). But diversity of function inherently limits their application: lots of devices have functions only they perform, so standardising their representation is pretty much impossible.

                IMHO it’s not something that can be solved through regulatory means. It’s a problem of incentive alignment. The reason we see this constant UI churn in commercial applications is that most organisations that develop customer-facing software or services have accrued large design & UX teams (on the tech side) affiliated with large marketing orgs (on the non-tech side), which lend a lot of organisational capital to their managers – because they’re large. These people cannot walk into a meeting room and say: okay, we have 1M customers, we’re basically swimming in money, and no one’s touching the interface and making a million people learn how to use our app again on my watch. If they did, half their team would become redundant, at which point half their organisational capital would evaporate.

                All branches of engineering get to a point where advancing the state of the art requires tremendous effort and study. Computer engineering is no exception. 180 years ago, advancing the state of the art in electric motors mostly required tinkering and imagination; doing that today usually happens via a PhD and it’s really hard to explain the results to people who don’t have a PhD in the same field.

                Perpetually bikeshedding around that point (or significantly below it), on the other hand, is accessible to most organisations. It doesn’t help that UX and interface design are related to immediate and subjective sensory experiences, so everyone is entitled to an opinion on these topics, which makes them susceptible to being influenced primarily by loud people and bullies in their orgs.

                1. 1

                  I don’t know if all-round, universal standardisation is possible.

                  Yeah, I argued for certification rather than standardisation for that reason. Just like a lot of things can’t easily be standardised in an objective and easily transferable format, having a trusted arbiter is probably more useful to achieve cohesion.

      3. 2

        Which means icons should(?) be part of the localisation process too. Although that would bring a whole other set of new problems along.

        1. 1

          I’ve just realised that icons are being partly localised already. Rich-text editors’ [B]old, [I]talic, and [U]nderline in English are [N]egrita, [C]ursiva, and [S]ubrayado in Spanish. Consequently, they have different keyboard shortcuts too.

          1. 2

            Same in Swedish-localized Office apps, which is annoying because Ctrl-F gives bold (”Fetstil”), not Find.

      4. 1

        Oh, now that you say it, I see it and that makes sense. But I didn’t know what it was supposed to be either before.

    6. 6

      At the risk of committing techno-stuckist heresy, I’m going to call this article out for being about 95% screed of anecdotes about how the world is going to hell and 5% argument for skeuomorphism and against flat design.

      The anecdotes I find unconvincing. There has always been bad design. There’s just more of it now because there’s more software and more people writing it. It’s also gotten easier to mess up with the proliferation of screen sizes and input modalities. I don’t think that the evolution of Windows’ design is indicative of the general state of UI design, either. Nor is Gnome.

      Skeuomorphisms are good, so the argument goes, because all flat design systems lack affordances and all skeuomorphisms are self-evidently affordances. QED non erat. So long as we’re quoting research from the Nielsen/Norman Group (the veracity of which is something else I would quite like to debate), Jakob’s Law states that users prefer a design to work the same way as the ones they already know. That does not mean that things should always remain the same. It means that as new design conventions become popular, expectations change.

      To wit: in Windows 3.1, there was a single tiny pancake floating in a square. It was a menu, but, in spite of the skeuomorphic drop shadow, that fact was not obvious. You had to figure that out by clicking on it. That convention disappeared for a decade or two. Then it came back in a different context in the guise of an abstract “hamburger” without a drop shadow. Setting aside the fact that popular UIs have been breaking Fitts’ Law for decades, not just recently, how does skeuomorphism make the affordance any more obvious here? We might as well say the problem with hamburger menus is that they don’t look enough like an actual hamburger.

      There’s a fine line between unusable design and someone simply having had their cheese moved. While this article may bring up examples of indefensible design, it still manages to confuse the two.

      1. 1

        I don’t think that the evolution of Windows’ design is indicative of the general state of UI design, either.

        Windows still has the largest market share of desktop systems, and many (most?) of the open source desktop environments try - or at least used to try - to keep up with it, so their UIs are not alien to Windows converts. So, it kind of is de facto the general state of UI design.

        I wish the Mac had the larger market share, and maybe now that Apple Silicon is regularly trouncing even the high-end Intel chips, and the prices aren’t eye-wateringly wallet-shatteringly stupid, it can be. It’s not perfect, or even great, but at least it has internal consistency and most of the app developers mostly respect its design guidelines.

        1. 3

          “Desktop” masks a growing proportion of time humanity spends on computers. There are kids starting college these days who know only Chromebooks, phones, and tablets, and need to be taught the basics of desktop OS use that, frankly, most of us crustaceans take for granted. This is also true for millions of people in the world who can’t afford much more than a phone.

          1. 2

            We all had to learn everything, at one point or another. I certainly hope that we aren’t going to give up on desktop computers entirely just because there are people who have never used them, probably because they don’t need to.

            There are indeed a lot of people who don’t need a desktop computer. If tablets and smartphones had existed in the 90s and 00s, desktop computers probably wouldn’t have sold as much as they did, because the people who just wanted or needed the Internet and spell checking didn’t need an entire desktop computer.

            That doesn’t make the desktop computer obsolete, nor does it make it wrong to hope it would improve in usability instead of regressing – or, worse, being turned into a web- or tablet-like device so it’s easier for converts. The reason you get a desktop computer should be that you want the extra power you can’t have on a tablet, and therefore you should expect a bit of a learning curve.

            Maybe somewhat OT, but I keep hearing the argument that “the world can’t afford more than a phone” from people in the West. When I’ve actually talked to the few people I’ve been able to reach “on the ground” in Africa, in Eastern Europe, and in the poorer parts of Asia, those who have computers are very appreciative of them and enjoy using them in all the ways they can (experiencing photo editing for the first time, experiencing multi-window operation for the first time) – but they’re old computers. They’re Pentium 4-class Celerons, sure, but they’re desktop computers.

    7. 4

      I think this article does a fairly good job of capturing a common point of confusion – that clutter equals complexity equals lack of usability. Adding labels to buttons and having several options available in the main view adds a lot of clutter, but can greatly improve usability if it makes key options findable.

    8. 3

      Things were always bad back in the day, as they are now. Sometimes, they were really bad. If anything, skeuomorphism was criticized at the time.

      1. 1

        Did you link three times to the same page on purpose? Or are we missing two more links?

        1. 1

          Ugh, classic frameset-based website: the URL doesn’t change in the address bar as you navigate, so copying from the URL bar gives the wrong page. Hilariously, a classic mistake from the 1990s. I meant to link to this and this.

    9. 1

      The author would take special joy in watching how GNOME 3 and its core apps change their UI from release to release.