
Intuitive doesn’t mean obvious

By Geoff Hart

Previously published as: Hart, G. 2013. Intuitive doesn’t mean obvious. Intercom July/August: 19–21.

The holy grail of design is to create a product that is intuitive; often, this means “capable of being understood without explanation”. A vessel used to hold water, whether a concave piece of wood or a grail, is intuitive in this sense. A rock used as a hammer or thrown as a weapon is sufficiently intuitive that even our Neandertal ancestors learned how to use such tools without online help or a Web-based technical support forum. Fortunately for those of us who earn a living explaining things, few things are this intuitive. Indeed, most things we describe are abstract—and therefore not obvious. Using a rock as a hammer is easy enough, but using it to knap flint to produce a cutting edge and learning how to throw a 100-mph split-finger fastball are far more difficult to explain. Even the seeming simplicity of a bowl conceals surprising complexity; ask any student of topology about homeomorphism, a carpenter about wood carving, or a potter about crafting porcelain from what seems like nothing more than muck to the untrained eye.

The lesson is clear: intuitive is rare, and may be unobtainable for anything that’s even slightly abstract. Thus, our real goal as designers should be to mask this inherent complexity and create something whose logic can be learned and transmitted easily. Consider the smartphones that most of us own: “swiping” and multi-finger gestures aren’t obvious, but once users learn that these options exist, most confidently begin experimenting to see whether the gestures work in new contexts. Google took advantage of this in their Gmail app for the iPhone, in which swipe-left and swipe-right gestures move between messages; this is intuitive because the same gestures move between screens of apps or pages of an eBook. When an interface’s logic makes sense and operates consistently, users apply that logic without our help because they can predict what will happen; that predictability gives them enough confidence to explore, and they succeed often enough that they consult the documentation less. This design philosophy supports minimalism, in John Carroll’s sense of the word. (See the March 2013 issue of Intercom for an interview with Carroll about minimalism.) [A look back: I also briefly discuss minimalism in my article Ten Technical Communication Myths.—GH]

Some of us don’t design products, but do have some say in how they should be modified to improve their usability. Some of us design information architectures, such as a Web site, help system, or eBook structure that can be read equally well on devices with different screen sizes, shapes, and resolutions; in that context, we’re directly responsible for creating something usable. Understanding how to make such designs more intuitive is therefore valuable for our work. To provide an example, I’ll focus on Microsoft Word in this article. Word has a great many egregious design flaws, but I’ve chosen it for its familiarity to most readers, not because it’s an unusually bad example of design. Word fails the “intuitive test” not because it is a complex, highly abstract creation but rather because:

- its interface follows several different and inconsistently applied types of logic, and
- its behavior is often unpredictable, even for experienced users.

This combination undermines confidence in one’s ability to explore safely. Worse yet, the software’s behaviors vary between versions (undoing the confidence gained by using previous versions for years) and between platforms (Mac vs. Windows). The combination makes life unnecessarily difficult for upgraders and corporate trainers, and makes software maintenance a nightmare for Microsoft. In the rest of this article, I’ll use Word to illustrate design choices that make software unnecessarily unintuitive, and lessons for our own design efforts.

Logic and metaphor

Many people anthropomorphize their software because it seemingly has a mind of its own, and trying to understand how that mind thinks is essential to using the software productively. If men are from Mars and women are from Venus, Microsoft Word is from Terry Pratchett’s “Discworld”. Consider something as basic as how we interact with the software, which follows two main patterns:

- noun–verb: select an object, then choose the action to apply to it, and
- verb–noun: choose an action, then specify the object it applies to.

Both syntaxes are legitimate, with different strengths and drawbacks. Problems arise when the two forms mix without a clear pattern, forcing our audience to determine which grammar applies in a given context. Word doesn’t help, because some menus follow the first syntax (the File and Table menus define the object first), some follow the second (the Edit and Insert menus define the action first), and others follow an entirely different logic. For example, “Help” is neither clearly a noun nor clearly a verb. It implies that help is available, but provides no clues about whether to select the verb and then specify an object for that verb (help me to understand revision tracking) or select an object and then specify a verb (in the online help system, help me find something). The Help system perpetuates that lack of clarity by failing to distinguish clearly between topics that start with a verb, topics that start with a noun, and topics that follow some “other” logic.
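
To make the distinction concrete, here is a minimal Python sketch of the two command grammars; the class and function names are hypothetical and are not Word’s actual programming interface.

    # Hypothetical sketch of the two command grammars; not Word's actual interface.
    class Document:
        def __init__(self):
            self.tables = []

    # Noun-verb (object first): select the object, then apply an action to it.
    class Table:
        def __init__(self, doc):
            self.doc = doc

        def insert(self):
            self.doc.tables.append(self)

    # Verb-noun (action first): choose the action, then name the object it acts on.
    def insert_table(doc):
        doc.tables.append(Table(doc))

    doc = Document()
    Table(doc).insert()   # noun-verb: "Table, then Insert"
    insert_table(doc)     # verb-noun: "Insert, then Table"
    # Either grammar is learnable on its own; an interface that mixes both without
    # a clear pattern forces users to guess which one applies in each menu.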

The problem returns with the ribbon introduced in Word 2007, but with an additional complication: users now face tabs such as Home, Layout, and Developer that are neither nouns nor verbs and that add a fourth logic (“this tab contains miscellaneous objects and verbs”). Worse, despite the claimed goal of eliminating complex menus from the software, Word’s designers hid most of the old File menu, not to mention unrelated things such as the settings, under the Office button beside the ribbon instead of integrating these functions within the ribbon. If you’ve been using Word for years, you’ve probably forgotten how much harder these inconsistencies make the program to learn—unless you’ve tried to teach students how to use Word.

In practice, it may not be possible to create an interface that is exclusively noun–verb or exclusively verb–noun. In that case, it becomes all the more important to think carefully about the problem so we can recognize the different types of logic and find ways to reveal them. For example, we could group the menus and tabs that begin with a noun on the left, those that begin with a verb on the right, and things that fit neither model in the middle, separating the two main groups. Adding visual indicators to distinguish these groups (creating visually distinct menu or tab groups) improves this solution. The result is still inconsistent beneath the surface, because it’s based on three types of logic rather than one, but that may be acceptable because it appears more consistent to the user. This leads us to the concept of consistency.
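
As an illustration of that grouping strategy, here is a small Python sketch that sorts top-level labels into noun-first, unclassifiable, and verb-first groups; the menu labels are examples only, not a complete or accurate list of Word’s menus.

    # Example menu labels classified by grammatical role; the sets are illustrative only.
    NOUNS = {"File", "Table", "Window"}
    VERBS = {"Edit", "Insert", "View", "Format"}

    def arrange(labels):
        """Place noun-style menus on the left, verbs on the right, and the rest between them."""
        nouns = [label for label in labels if label in NOUNS]
        verbs = [label for label in labels if label in VERBS]
        other = [label for label in labels if label not in NOUNS and label not in VERBS]
        return nouns + other + verbs

    print(arrange(["Edit", "File", "Help", "Insert", "Table"]))
    # ['File', 'Table', 'Help', 'Edit', 'Insert']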

Consistency

Consistency is crucial, and is why Microsoft and Apple both provide extensive user interface design guidelines for their operating systems. Imagine how difficult software would be to use if [] represented a checkbox (“choose multiple options”) in one program, a radio button (“choose only one option”) in another program, and an action button (“click to implement the selected options”) in a third program. Consistency matters because once you learn the logic, you don’t have to relearn it for each new dialog box. The behavior is not initially obvious (it must be learned), but it becomes intuitive once learned.
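
One way to enforce that kind of consistency is to define the control vocabulary once and have every dialog draw on the same definition. The sketch below is a hypothetical Python illustration of the idea, not an excerpt from either company’s guidelines.

    # Hypothetical shared control vocabulary: the meaning of each control is defined
    # once, so no dialog can quietly repurpose a checkbox as something else.
    from enum import Enum

    class Control(Enum):
        CHECKBOX = "choose any number of options"
        RADIO_BUTTON = "choose exactly one option"
        ACTION_BUTTON = "carry out the selected options"

    # A user who learns this vocabulary in one dialog can rely on it in every other.
    for control in Control:
        print(f"{control.name}: {control.value}")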

In any design project, the first step should be to identify the kinds of concepts, objects, and actions you must design and reveal to your audience. Next, create a style guide that defines how you will accomplish each goal and apply the guidelines consistently. If you’re developing a familiar product such as an eBook, follow the design used by other eBooks to minimize the number of new things readers must learn. It’s never wise to radically change a familiar, functional interface if readers must abandon skills and overcome reflexes they have spent years developing. The only exception is when you’re solving serious problems with the old design or enormously improving the user experience, but even then, you should retain the old interface as an option. Microsoft’s elimination of Word’s menus in favor of the ribbon was at best ill-considered. [A look back: It was frankly stupid, but I didn't think Intercom would let me say that in print.—GH] It would have been easy to retain the menus and offer the ribbon as an option, thereby retaining a familiar interface for long-time users and offering a nominally simpler interface for those who were learning Word for the first time. That's not just me speculating; Word 2011 for the Macintosh offers both menus and the ribbon. I find the menus and the ribbon more efficient for different tasks.
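
Retaining the old interface as an option need not be complicated. The Python sketch below uses a hypothetical preference (not Microsoft’s actual settings) that defaults to the familiar menus and treats the new ribbon as an opt-in choice.

    # Hypothetical interface preference: the familiar design is the default, and the
    # new design is something users choose deliberately.
    VALID_MODES = ("menus", "ribbon", "both")

    def choose_interface(preference=None):
        """Return the interface mode to display, defaulting to the familiar menus."""
        if preference in VALID_MODES:
            return preference
        return "menus"  # the new ribbon is opt-in, never forced

    print(choose_interface())          # menus
    print(choose_interface("ribbon"))  # ribbon
    print(choose_interface("both"))    # both, as Word 2011 for the Macintosh allows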

Unfortunately, this is just one example of how different Word 2007 (for Windows) and Word 2011 (for the Macintosh) appear. This inconsistency is another poor design choice. Microsoft accepted the spurious logic that Word’s users can cope only with the interface conventions of their own operating system, and produced versions that are visually and functionally inconsistent. Anyone who uses Word 2007 at work and Word 2011 at home, and any corporate trainer who must support users of both versions, faces the difficult task of mastering and teaching the differences. (My book on onscreen editing is 30 to 50% longer than it needs to be, solely to account for interface and logical differences between the Windows and Macintosh versions.) The Web demonstrates why such differences are unnecessary: when we interact with Web pages, what we do is identical on all platforms (including Linux). We focus on our goal (e.g., viewing a Web page), and largely ignore the subtle nuances of the interface used to achieve that goal (e.g., the “furniture” of dialog boxes). The same principle should apply to any design:

The more consistent your design is with familiar designs, with previous versions your audience has mastered, and with versions on other platforms, the easier it is for them to intuit (predict) how the product will behave. That leads us to the concept of predictability.

Predictability

Once you understand a program’s logic, it becomes easier to predict its behavior. That removes the stress associated with being uncertain what will happen when you perform an action. That stress is the biggest barrier to learning that I encounter when I train people to use Word. Understanding that there’s an undo function (Control+Z) increases their confidence that they can experiment without fear that they’ll damage something irreversibly. Unfortunately, only some actions are obviously undoable (e.g., copy/paste). Most others are only partially undoable; for example, if you delete sentences in widely scattered parts of a document, at some unknown point you’ll reach the undo function’s limit and be unable to undo the earliest deletions. Other actions cannot be undone at all; for example, many students are reluctant to change Word’s settings because once you make a change, there’s no going back—you have to memorize or write notes about what you changed and where you changed it so you’ll know where to go to undo the changes if Word starts behaving strangely. Teaching students to record what they did on a scrap of paper reduces their fear of experimentation, but that would be unnecessary if there were an easy and predictable way to undo such changes.
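
The partial-undo problem arises because an undo history is a bounded stack: once it fills, the oldest actions are silently discarded. Here is a minimal Python sketch of that behavior, offered as an illustration of the principle rather than Word’s actual implementation.

    # Minimal sketch of a bounded undo history; not Word's actual implementation.
    from collections import deque

    class UndoHistory:
        def __init__(self, limit=3):
            # When the deque is full, each new action silently evicts the oldest one.
            self.actions = deque(maxlen=limit)

        def record(self, action):
            self.actions.append(action)

        def undo(self):
            return self.actions.pop() if self.actions else None

    history = UndoHistory(limit=3)
    for n in range(1, 6):
        history.record(f"delete sentence {n}")

    while (action := history.undo()) is not None:
        print("undid:", action)
    # Only deletions 3 through 5 can be undone; deletions 1 and 2 were silently
    # dropped, which is exactly the surprise that makes users afraid to experiment.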

Many of Word’s functions provide warnings when their effects cannot be undone. Unfortunately, enough don’t that Word seems dangerously unpredictable to many users. Many functions also provide no useful explanation of why they failed, even when you’ve carefully followed the instructions. For example, shortcuts for the AutoCorrect function must be at least three characters long, but Word will happily accept a two-character shortcut without warning you that it won’t work. Older versions didn’t warn that AutoCorrect was limited to 255 characters and truncated longer text. Worst of all, newcomers to Word have no idea this function exists, and when the function is triggered for the first time without warning or explanation, this creates a powerful fear: you don’t know what you just did, and therefore don’t know whether you might trigger more serious problems by inadvertently repeating that action. For simple changes, this isn’t a problem. But I’ve seen colleagues raging in frustration after Word suddenly changed a line of dashes into a paragraph border. No matter how hard you try, you can’t select and delete the line unless you know the secret: the line is a paragraph-level format. Even then, it’s not obvious that you change this format via the Borders and Shading dialog box and not (as you’d expect) from the Paragraph dialog box.
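
The remedy for that kind of silent failure is to validate input when it is entered and explain why it was rejected. The Python sketch below uses the three-character minimum described above; the function and variable names are hypothetical.

    # Hypothetical validation for an AutoCorrect-style shortcut: warn at entry time
    # rather than silently accepting a shortcut that will never trigger.
    MIN_SHORTCUT_LENGTH = 3

    def add_shortcut(shortcuts, shortcut, expansion):
        if len(shortcut) < MIN_SHORTCUT_LENGTH:
            raise ValueError(
                f"Shortcut '{shortcut}' is too short: shortcuts must be at least "
                f"{MIN_SHORTCUT_LENGTH} characters long, so this entry would never trigger."
            )
        shortcuts[shortcut] = expansion

    shortcuts = {}
    add_shortcut(shortcuts, "tyvm", "thank you very much")  # accepted
    try:
        add_shortcut(shortcuts, "ty", "thank you")          # rejected, with a reason
    except ValueError as error:
        print(error)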

In any design, users rely on what they already know to predict what your design will do. For example, most users will rarely encounter AutoCorrect during their first few hours with Word, and will therefore not expect Word to suddenly change its behavior and start modifying text without being asked. The key is to remember that what may be obvious to you (the designer) may appear scarily unpredictable to your audience. Since most users receive no formal training in anything beyond a product’s most basic functions, advanced features such as AutoCorrect should be disabled by default; they should be enabled only by a conscious action, usually after someone has learned that a feature exists and how it works. They should never be triggered by an accidental keystroke. This increases the likelihood that if the product’s behavior suddenly changes, the user will recognize what they did to cause the change.
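
Expressed as configuration defaults, the principle looks something like the following Python sketch; the setting names are invented for illustration and do not correspond to Word’s actual options.

    # Hypothetical defaults: features that change the user's text ship disabled and
    # are enabled only by an explicit, confirmed action, never by a stray keystroke.
    DEFAULT_SETTINGS = {
        "autocorrect": False,             # off until the user learns what it does
        "autoformat_as_you_type": False,
        "spell_check": True,              # passive features that never rewrite text may default on
    }

    def enable_feature(settings, feature, user_confirmed):
        """Enable a feature only when the user has explicitly asked for it."""
        if not user_confirmed:
            return False
        settings[feature] = True
        return True

    settings = dict(DEFAULT_SETTINGS)
    enable_feature(settings, "autocorrect", user_confirmed=True)
    print(settings["autocorrect"])  # True, and the user knows why the behavior changed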

Creating intuitive products

Usability and user experience design is a complex field, and it requires a profound understanding of the design principles I’ve discussed (logic, consistency, and predictability), of human psychology (e.g., how people use these design properties to become comfortable with a product and what happens when they lose that comfort), and of the interactions between the two. Problems arise whenever we unnecessarily require users to master new types of logic, fail to eliminate inconsistencies in how that logic is implemented or revealed, or allow unpredictable product behavior. It’s rarely possible to eliminate all of these problems, but being aware of their existence lets you seek ways to minimize their impact. Where a problem can’t be eliminated, we may be able to mitigate its severity by clarifying the logic, revealing inconsistencies and their meaning, and ensuring that surprising behaviors are triggered only by the deliberate action of an informed user. The result is a more predictable (and therefore more intuitive) design.

Creating intuitive products does not eliminate the need for documentation, and that documentation is particularly important when a product requires additional forms of user assistance such as training and technical support. Trainers will be grateful for design choices (e.g., creating consistent Macintosh and Windows interfaces) that let them develop a single set of training materials. Support staff will be grateful for only having to learn a single set of peculiarities and bugs, rather than one per version. To accomplish this, you’ll have to carefully think through every aspect of a design to ensure that the logic is clear and consistent and predictable, even if it’s not inherently obvious. As technical communicators, it’s our responsibility to explain away (or conceal) illogic, a lack of clarity, and inconsistencies. But we should also help other designers create more intuitive designs based on the same approach.

