User Interface is not Design
A usability rant inspired by accidentally turning off my DVD player for the hundredth time.
Here’s our DVD player. The long white rectangle is the disc tray. The small white rectangles are buttons, which are identical in appearance except for tiny, unobtrusive icons. One opens and closes the disc tray; the other is the power switch. Guess which is which.
That’s right: the button that controls the tray is located as far from it as possible.
The same player has a remote control which can also control the television. Instead of having a duplicate set of controls on the remote, there’s a button which toggles the remote back and forth: in one state, the power switch will turn on the TV; in the other state, the disc player. Guess how the remote indicates which state it’s currently in?
That’s right: it doesn’t. The only way to find out is to try it and see which device turns on.
The usability cost of guessing wrong, in both cases, is pretty high; it takes quite a while for the player to turn on, get the disc up to speed, and play through all the obnoxious fast-forward-disabled FBI warnings and disclaimers. (Yes, Fox. We know the opinions held by the 2nd AD on the commentary track are not your own; does it really give you extra legal protection to force us to stare at the warning message for 30 seconds every time we watch your movie?)
It doesn’t take an expensive usability expert to figure out that you should group related controls close to one another. It doesn’t take a focus group to tell you that it’s probably not a good idea to make your user go through a trial-and-error process every single time they want to use your product. These aren’t subtle details that require special talent or training; they’re obvious to anyone who spends more than about fifteen seconds looking at the product from the user’s point of view. If anybody at Philips had sat down on a couch in front of the prototype of this player and tried to turn it on, even once, the product wouldn’t have these flaws.
And that’s the problem. You’ve got marketing people on staff to look at the product from a marketing point of view; you’ve got engineers on staff to look at it from an engineering point of view. Very few companies keep anyone on staff to look after the user’s interests, and when they do consult users at all, it’s in the form of expensive (and therefore infrequent) focus groups and usability studies. They may think they have usability people on staff, but too often what they really have is designers.
I think it’s a real problem that usability and design have become so conflated, because they’re almost entirely separate tasks. Design affects usability, to be sure, and many of the usability details such as logical grouping of controls, layout, labeling and so forth are traditionally tasked to the designers… but if the designer is focused on aesthetics instead of ease of use, all bets are off. (Conversely, an ugly design can still be very usable if the underlying structure is sound.) And many usability decisions are traditionally not tasked to the designers at all. Often they’re not even looked at as usability questions, but as business requirements, QA, or engineering decisions.
The nascent field of information architecture is an attempt to address this, to pull all those usability issues together under one umbrella, but it doesn’t seem to be catching on. (And the recent trend towards name soup, all the interaction design, customer experience, information architect, blah blah blah acronymcakes, isn’t helping one bit.) It may well be the wrong approach, in any case: since usability touches so many different aspects of a product, it’s difficult or impossible for one usability person to be expert enough in all those aspects to work effectively. (It’s also, let’s face it, extremely rare for a usability person to have enough influence in all those areas to meaningfully affect the product.)
One interesting approach: “Kleenex testing”. EA Games literally brought users on staff for The Sims 2, used them long enough to get usability feedback, then tossed them out and went back for more. The risk of this approach is taking it too literally. If you slip into treating it as a QA process, it’ll fail: by the time you get to QA, it’s too late; the design decisions have already been made… anything you change after that point will be a patch job. This is only going to work if you do it early and often enough to make meaningful use of the results.
Or instead of trying to gather all the usability responsibility into one point, maybe it’d make more sense to spread it among the various parties (which is where it already sits) and make that explicit. Task one of your engineers, one of your designers, one of your marketing people, etc., to take off their usual hat and look at the product as a user. If it’s a DVD player, sit down on a couch and watch a movie. If it’s a web application, register, log in, and use it. Have them write down whatever annoys them or is difficult or broken; then compare notes and go back and fix it. This won’t get you fresh eyes every time, the way Kleenex testing would, but it’s a lot less expensive, and it avoids the arbitrary, scattershot feedback you often get from first-time users.
None of these strategies can get you past the one, solid, unavoidable fact that if the product doesn’t have a core focus, a single overriding organizational metaphor, it’ll be broken and unusable no matter how good the individuals building it are. (This is rarely an issue for physical products like DVD players, where the focus — play DVDs — is obvious. In software, however, it’s of supreme importance.) Who this focus comes from — and it can be anyone, depending on the office politics and the individuals: the designer, the project manager, a motivated lead developer, the CEO, a competitor, even an articulate customer — is almost irrelevant. But if it’s not there, you’re nowhere.