Have UI designers forgotten about usability?

I’m seeing a disturbing trend. It’s been happening for some time now but seems to be getting more and more prevalent. User interface designers appear to be focusing solely on design, with usability taking a back seat. This isn’t a new battle; the question of form follows function vs. function follows form was debated long before computers existed. Just to be clear, I’m squarely in the form follows function camp.

It does feel to me that the world of user interface design has reverted to the mentality of the early days of web design. You remember, the <blink> tag and animated GIFs? People did things “just because you can,” not because it served any useful purpose. More and more applications have decided that it’s far more important to have a “cool” interface than one that might be familiar or easy to use.

It used to be that once you understood the basics of how an application’s UI was set up on your particular operating system (Windows, Mac, Linux, etc.), you had a substantial leg up when learning how to use other applications. You could rely on basic windowing features acting in a known and expected manner. You could rely on the application having a menu bar in a consistent location on the screen that functioned in a consistent way and offered basic functionality that behaved predictably across applications. In many cases, that consistent functionality might account for 50% or more of the primary features you used in any given application.

This not only benefited end users, who could accomplish a given task quickly and accurately, but also allowed documentation developers and trainers to make certain assumptions about the knowledge of their audience. You didn’t have to explain how to Print or how to perform basic file operations like Save, Open, Close, and so on.

Application developers actually tried to follow the user interface guidelines put forth by the operating system developers. There was even a certification program to verify that your application actually followed these guidelines (along with other lower-level requirements).

This all seems to have gone out the window. Much of this stems from the rise of browser-based applications, where you really are starting from scratch with an application’s interface. But many native OS (desktop) applications have taken to the practice of “skinning” their applications to create a different interface than the one you’d get by default from the operating system.

Everyone seems to think they know a “better” way to interact with an application.

Application developers: remember that you’re not creating a game (well, unless you are, in which case you’re off the hook). It’s likely that someone uses your application as a productivity tool (this would presumably be something that you would strive for). It could be that people use this application for 8 hours a day (or more), and their livelihood depends on it. Users don’t care that the application is “pretty” or “cool,” they just want it to work properly so they can get their job done.

A cool UI doesn’t actually benefit anyone (except possibly the designer’s portfolio). Users typically don’t like it, and developers lose money in two ways: first, you have to pay to develop the cool design, and second, fewer users will upgrade. I believe that the main reason people don’t upgrade to the latest version of an application is not the cost of the upgrade, but the expense of the time it takes to learn a new UI; above all else, people just want to get their job done.

When the developer of a desktop application applies a “skin” or otherwise modifies the UI so that it doesn’t follow the default operating system UI, that tells me that the developer places their coolness factor over my ability to get my job done. This strikes me as complete arrogance and disregard for my needs as a customer and user. In addition to familiarity and consistency, adhering to the operating system UI means that any custom colors and font properties defined by the user will (should) be applied to the application UI as well. That means less cost to the developer and better functionality for the user; where’s the problem with that?

The purpose of a UI is to provide a language that allows interaction between the human user and the virtual computer program. When the application uses the same language as the one defined by the operating system, the human user is more likely to be able to communicate successfully, because they are more likely to know that language. When one application tries to redefine that language, it only complicates that communication.

Different types of devices (phone, tablet, refrigerator) will define new ways to communicate with the underlying applications. That’s fine, new device, new language. But the applications that run on that device should follow the guidelines established by the operating system on the device, not the whims of a “creative” UI designer.

Browser-based applications are in a bit of a tough spot. The browser (itself an application running on some host operating system) becomes the “operating system” for applications that run inside it. To some degree, most browsers do pass on the default styling of widgets based on the underlying OS, but because these applications are coded in HTML, each developer really has to reinvent the UI for their app. More often than not, these UI designers seem to think that this gives them the right to go wild and come up with some completely new UI paradigm. However, most of the browser apps that are successful take their lead from well-established UI design practices that employ widgets that emulate physical (mechanical) objects like buttons, folders, tabs, and the like. When I’m introduced to a new application that has none of these familiar features, I’m forced to feel my way around the interface, waiting for things to pop up or glow as the cursor moves across the screen. This is not an efficient or effective way to learn a new tool, and I’m typically inclined to find another application with a more stable UI.
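To make the contrast concrete, here’s a minimal sketch (browser-side TypeScript; the class name and the save() function are hypothetical placeholders, not from any particular application) of a native widget versus a hand-rolled one. The native button gets its appearance, focus handling, keyboard activation, and accessibility semantics from the browser and the underlying OS for free; the “reimagined” control has to recreate all of that by hand, and usually doesn’t.

```typescript
// Sketch: a native control vs. a "reimagined" one (save() is a hypothetical action).

function save(): void {
  console.log("Saving…"); // stand-in for the real work
}

// Native widget: appearance, focus, Enter/Space activation, and accessibility
// semantics all come from the browser and the underlying OS.
const nativeButton = document.createElement("button");
nativeButton.textContent = "Save";
nativeButton.addEventListener("click", save);
document.body.appendChild(nativeButton);

// Custom widget: a styled <div> can look as "cool" as you like, but every
// behavior the native widget provided now has to be re-implemented by hand.
const customButton = document.createElement("div");
customButton.textContent = "Save";
customButton.className = "glowing-save-thing";     // hypothetical custom styling
customButton.setAttribute("role", "button");       // semantics: manual
customButton.tabIndex = 0;                          // focusability: manual
customButton.addEventListener("click", save);
customButton.addEventListener("keydown", (e) => {   // keyboard support: manual
  if (e.key === "Enter" || e.key === " ") save();
});
document.body.appendChild(customButton);
```

Everything after the first few lines is effort spent just to win back behavior the platform already provided.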

Often, when a developer decides to upgrade their application’s UI, it means they have run out of useful new features to add. I strongly believe that the best “new feature” any application can add is to fix all of the bugs. That’s all. If the only new feature were that all of the known bugs had been fixed, you’d see more upgrades than ever before. I know it’s not sexy or exciting, but that’s what is important to the people who actually make their living using your application.

When a developer “reimagines” the way humans should interact with a computer program, this not only complicates the lives of the intended users of that application, but also the lives of those whose job it is to document and teach it. We have reasonably well-understood names for traditional UI widgets (button, dialog, window, folder), but such names are typically not available for new widgets, and if they are, they most certainly won’t be understood by the new user. First, the technical writer can no longer make assumptions about how much the user knows; it’s unlikely that they will know anything, so more has to be documented. Second, describing and referring to these somewhat amorphous objects (“the glowing text in the upper left, but down a bit from the top”) is challenging, if not impossible.

Don’t get me wrong, I can appreciate a well-designed and efficient user interface, even if it doesn’t adhere to the operating system standards. Just be sure that when you do create a new way of interacting with an application, you’re doing it for a reason and not just because you want your application to look “special.” Also, try to maintain an understanding of who your customers are and what they do with your application. Do they really need the UI you’re developing, or would something simpler serve their needs better?

Sorry, this post has gone on quite far enough and has turned into a bit of a rant. It’s just something that has been bothering me lately, and based on what I’ve seen on mailing lists and other forms of communication, it’s bothering others as well. I’d love to hear your thoughts.

7 thoughts on “Have UI designers forgotten about usability?”

  1. Yves Barbion

    I fully agree, Scott. After all, we’re talking about a *U*I, not a *D*I (as in “Designer Interface”), aren’t we? In this respect, template development is like software development: yes, the result should look nice and professional, but above all, it should just *work*.

  2. Arnis Gubins

    Spot on, Scott!

    bad UI = bad UX

    It would really help if developers ate their own dogfood and produced a complete & up-to-date “product” setting the gold standard for how the tool(s) should be used (instead of using tried & true versions 3-4 releases back!).

    For some companies, however, it sadly seems like Mr. Murphy in Marketing has forbidden the KISS rule to be applied.

  3. Klaus Daube

    Scott, this blog post is balm to my soul! I really think that in at least one big company there are far too many designers at work…
    Concerning the bug-fixing issue: it’s my impression that small companies and even shareware producers follow this principle. I use programs which are updated only to adapt to a new OS – they just work as intended.

    1. saprentice Post author

      Klaus … you are right. The smaller companies do generally focus on higher quality and simpler design. At some point it seems that as a software company “grows up,” it forgets who its customers really are and what those people really do. Sad but true.
