Writing—and editing—to reduce user errors
by Geoff Hart
Previously published as: Hart, G. 2017. Writing—and editing—to reduce user errors. https://techwhirl.com/writing-to-reduce-user-errors/
In my 30 years as an editor and technical writer, I’ve seen endless examples of how readers can misinterpret texts. In the most egregious cases, I’ve even seen authors misinterpret their own writing, leading to faulty conclusions with potentially serious consequences for readers and those who depend on them.
It’s impossible to eliminate all such problems; indeed, product developers sometimes joke that if you try to make something foolproof, Nature simply evolves a better fool. Less facetiously, we must recognize that our readers are often distracted, stressed, fatigued, sick, or otherwise unable to focus their full attention on our manuscripts. This increases the risk of error. We can’t fix that problem either. What we can do, whether writer or editor, is look for ways to eliminate certain common problems and, in so doing, minimize the likelihood of reader errors.
In this article, I’ll describe some of the main ways that editors accomplish this. Writers and other content creators can learn to include these methods in their own work, which is particularly important if their employer isn’t willing to hire an experienced editor to do these checks. And editors who are early in their career can benefit from adding these tools to their toolkit.
One of the key advantages of single-sourcing (i.e., content management systems) is that you can create proven, thoroughly tested, and well-edited solutions for specific communication problems. Once these chunks of text or visual information are available, you can reuse them without modification in your subsequent writing. When better solutions come along, you can update those chunks of information to take advantage of those solutions. The effort invested in implementing content management imposes a certain mindset (rigorous planning and review) and the discipline to use it, thereby building quality control into the information development process.
What you can’t do, even with the best content management system in the world, is assume that those better solutions work for every situation where a chunk of information has been used. When you “improve” older information, you need to carefully examine what changed from the original wording and ask yourself whether that change has implications for other parts of a potentially large body of information. If it does, someone has to look for text that will be affected by those implications and develop a solution. In some cases, the “better” solution may not be better after all for some text in some documents. You might find that it helps to create a checklist of criteria for new information, with each criterion designed to help you find problems before readers do—and then fix them.
If you’re not using a content management system, you can build solutions that accomplish a similar goal. For example, you can develop templates that can be shared among the writers and editors in a documentation group. As I noted in my long-ago article on “dynamic style guides” (Hart 2000), a well-designed template includes standardized information such as legal disclaimers and warning text, standardized headings, and standardized formats (e.g., page sizes, paragraph and character styles). Eliminating the need to remember and concentrate on applying these formats for each new document lets writers focus on more important details, thereby reducing the frequency of errors in the things that do change from document to document. These templates can also include details such as summaries of the contents of each section of a document to remind writers to include all the necessary information in each section, thereby reducing the frequency of missing information. Last but not least, the templates can link to external resources such as online style guides, repositories of standardized texts, and summaries of the design meeting that led to the creation of a document.
Document design meetings: At a previous employer, we recognized a serious problem with our writing and review process: Often, writers and their managers spent weeks developing a manuscript, only to discover that it didn’t meet the criteria of the senior managers who would approve it for publication. Back to the drawing board! Our solution? Have the author, their manager, the editor, and a senior management representative discuss and reach consensus on what should be written, and how. For details, see Hart (2011) and Hart (2012).
These benefits are large, but they don’t eliminate the need for an editorial review to ensure that the correct standardized text has been chosen in a given situation and is used appropriately in each instance. Particularly when authors must choose information from a long list of options (e.g., a database of dozens of warning messages, a long menu of options displayed in minuscule type), it’s easy to inadvertently choose the wrong option. That’s doubly true when several of the options appear superficially similar. Looking for ways to minimize those similarities will reduce the frequency of such problems, but there’s no substitute for a keen editorial eye.
One of the key findings about how we learn is that readers can understand, remember, and apply information better if they can fit the new information we provide into the framework of their existing knowledge. Thus, you should seek ways to “prime” readers so that they are ready to receive new information. You can do this by explicitly reminding them of what they already know. This ensures that they begin a task with the correct information already in their mind rather than proceeding based on false assumptions. For example, this is why we present cautions and warnings before procedures: so that readers can proceed safely, primed to avoid any pitfalls.
To accomplish this, start by asking what knowledge is necessary to understand any new information. For example, you can safely assume that most modern readers know how to use a mouse and menu system, but you can’t safely assume that they know where the functions they seek are hidden. One problem I find with most modern documentation, including documentation by big names like Microsoft, Apple, and Adobe, is that they name tools without telling me what they look like (e.g., by providing an image of the icon that represents the tool) and name features without telling me which menu holds them. Particularly when I haven’t used a product for a while, it can take me a long time to find where the features I require are hidden. Since it costs nothing more than a little of the writer’s time to repeat icon images and menu screenshots, this repetition should be a standard feature of all documentation. Don’t assume that readers know what you’re talking about; show them to ensure that they know.
You can implement other visual and textual cues that provide context. Help readers gather the necessary information before they begin by providing a “Before you begin...” heading. Help them choose among options by explicitly explaining when one option is better than another: “Choose A if...; choose B if...” When there are many criteria for a decision, consider presenting those criteria in a decision-support table rather than burying the information in a long paragraph of text. Help them choose the most efficient pathway to complete a task by identifying and then documenting that path: “Use the following procedure...” Sometimes a flow chart is the most effective way to do this, though the flowchart shouldn’t rely on conventions such as those used by programmers if the audience does not comprise programmers. Finally, show them how to confirm that they have correctly completed a task by describing indicators of success and describing symptoms of failure.
When the necessary knowledge isn’t already part of your documentation, and can’t be linked to from within that documentation, you must make it easy for readers to find that knowledge. Either explicitly describe what readers need to know or provide a link to external resources that will let your readers fully understand the necessary concepts before they read the new information. For example, it’s not feasible to teach all aspects of typography to users of publishing software, but it’s possible to provide links to an online typography tutorial. Because links on the Web tend to be ephemeral, the best way to ensure that the references remain available is to host them on your own Web site, if possible. If that isn’t possible (e.g., the author is unwilling to give you permission), link to these resources through a well-designed “useful links” page on your Web site. This approach means that you only need to fix the links Web page when a link disappears or changes, rather than trying to hunt down all information that refers to that link or having to update every reader’s copy of the documentation when a link changes. I’ve used this approach successfully with my own books.
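Maintaining such a “useful links” page is easy to automate in part. The sketch below is a minimal illustration (the sample page and URL are hypothetical, and it is not a production link checker): it uses Python’s standard-library HTML parser to extract every link target from the page, after which each URL could be fed to an HTTP check (e.g., with urllib.request) to flag dead links that need fixing.

```python
from html.parser import HTMLParser

class LinkExtractor(HTMLParser):
    """Collects the href of every <a> tag on a page."""
    def __init__(self):
        super().__init__()
        self.links = []

    def handle_starttag(self, tag, attrs):
        if tag == "a":
            for name, value in attrs:
                if name == "href" and value:
                    self.links.append(value)

def extract_links(html):
    """Return all link targets found in an HTML string."""
    parser = LinkExtractor()
    parser.feed(html)
    return parser.links

# A miniature stand-in for a "useful links" page:
sample = '<ul><li><a href="https://example.com/typography">Typography tutorial</a></li></ul>'
print(extract_links(sample))
```

Because only this one page refers to each external resource, fixing a link found to be dead fixes it for every document that points readers here.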
Writing guides generally urge us to eliminate redundancies, on the plausible logic that readers don’t want to be forced to read the same information many times. But this advice is based on the outdated assumption that readers will read all of a manuscript, in linear order from start to finish. This is rarely true nowadays—if it ever was. Modern readers tend to dip into our documentation just long enough to solve specific problems rather than reading it “cover to cover”. This is particularly true in hypertexts such as online help, embedded help, and Web sites. As a result, we can never be certain that they have already read what they need to know to understand the new information. In that context, it’s better to repeat important context in each topic to ensure that it’s instantly available. Content management systems make this easier.
Ancient context tools remain important: Given the ubiquity of search tools, younger writers tend to forget that ancient tools such as a table of contents and an index remain important. A table of contents provides an overview of the structure of a body of information, presents what the authors consider to be a logical pathway through the information, and juxtaposes related topics that allow readers to create their own context. This information is not visible through search tools. In contrast, an index provides none of the overall context that is visible in the table of contents, but instead provides a range of synonyms for each concept and groups them using headings and subheadings to provide something analogous to a tightly focused, single-topic table of contents. If you’ve ever cursed a product’s documentation for not using the same words that you use to describe a problem, thereby making it impossible to find the help topic you need, you understand the power of an index: the index provides both the terminology the authors use to describe a concept and many of the synonyms you might use.
You’ve probably heard Mark Twain’s famous quip, that the “difference between the almost right word and the right word is... the difference between the lightning bug and the lightning.” It’s true. Choosing and scrupulously using the right word rather than the almost-right word is one of the key characteristics of clear writing.
In fiction, the goal of word choice is often to nudge the reader in a specific direction, but then leave them to choose their own idiosyncratic path thereafter. In technical communication, we can’t afford such variation because all readers must take the same path (e.g., use the same series of menu choices) and reach exactly the same destination. This means that you must be rigorous and specific. For example, you must ruthlessly eliminate vague terms such as “very” because they have different meanings for different readers. When the magnitude of something is important, quantify it as best you can, even if only with an approximation such as “the margin must be greater than 1 cm”. Eliminate terms such as “etc.” and “and so on” whenever the next item in a list is not crystal clear. If you cannot list all of the possibilities, at least list the main criteria that will let readers determine whether their choice will be appropriate.
Two of the key principles in the plain language/simplified technical English (PL/STE) movement serve well here—never use the same word to communicate two different meanings, and never use two words to communicate the same meaning. In the first case, you force readers to ask themselves which of the two meanings you intended to convey. For example, in the sciences, “significant” has two important but very different meanings: statistical significance (a mathematical concept related to how much confidence we should have in a number) and importance (a pragmatic concept related to whether a number is meaningful). In the second case, you force readers to ask themselves whether the two words convey important but subtle differences. For example, “economic” tends to be used to refer to the discipline of economics (e.g., an economic analysis), whereas “economical” refers to cost (e.g., an economical alternative to more expensive options).
Where the meaning of a word has unusually important consequences for the meaning of a sentence, provide a definition rather than assuming readers know it. This is particularly important for words that have multiple meanings in common usage; those meanings may conflict with the more technical meaning you intended to convey. In hypertexts, you can link such words to popup definitions; in both hypertexts and printed documents, you can provide a glossary. To ensure that readers know the glossary exists, advertise it prominently, perhaps by giving it its own link on a Web site’s or online help system’s home page, or by persuading the developers to mention its existence as a popup “tip of the day” the first time someone begins using the software. Alternatively, provide a “glossary” or “dictionary” link at the top of all pages, as an always-available option. This is particularly easy in an interface with a series of tabs running across the top of the screen.
Definitions are particularly important in contexts such as the captions of graphics and tables, and for isolated topics that are most likely to be reached by search results rather than by reading linearly through a body of information. Yes, readers can try using the search tools to find a definition, but why force them to scan the text in search of meaning when you can provide that meaning instantly and eliminate that work? Providing that definition in context (i.e., while the reader is performing a task) means that readers aren’t forced to disrupt their task (i.e., trying to accomplish something) by being forced to add an irrelevant task to their workload (i.e., using the search tool).
Minimize your use of shortcuts such as abbreviations. Except for certain standardized and highly familiar terms, such as DNA in manuscripts destined for geneticists, GDP in manuscripts destined for economists, and HTML in references destined for Web designers, each abbreviation increases the cognitive burden on readers: they must first unpack the abbreviation to retrieve the words it represents before they can determine the meaning of those words and apply that meaning to the current context. The greater the difficulty or frequency of that burden, the less attention they can devote to more important points, such as accomplishing the task at hand. Scientists and engineers are notorious for overuse of abbreviations, which has forced most publishers and style guides to add the requirement that all abbreviations must be defined the first time they appear in a manuscript, but as I’ve already noted, we have no guarantee that readers will ever encounter that first definition. It’s better, therefore, to use the full words. If the abbreviation is necessary (e.g., it appears in a product’s user interface), redefine it in each new topic (e.g., in each book chapter, in each online help topic that contains the abbreviation).
Watch for recurring problems with specific subsets of your audience. For example, I’ve found that many readers who have English as their second or third language have difficulty remembering the difference between million and billion—after all, the words differ by only one letter. Using exponential notation (million = 1×10⁶ or 1exp06, billion = 1×10⁹ or 1exp09) or the full number (1 000 000, 1 000 000 000) is often clearer. Of course, some audiences require a different strategy. If you expect the audience to include a significant proportion of innumerate readers, you may need to rethink the whole purpose of including large numbers to ensure that the meaning will be clear, even if the actual number isn’t something they grasp. Re-expressing the concept in a way they can grasp may be difficult, but it’s sometimes the only way to ensure comprehension.
Terminology is closely related to the broader topic of consistency: everything must be as consistent as possible. For example, if you follow the convention of using italics to indicate that a particular word is not an English word, italicize that word everywhere—and where you don’t, clearly indicate why that instance differs from the italicized instances. There are many tools for ensuring consistency in large documents or document collections. One that’s well worth your attention is PerfectIt, a powerful plug-in for Microsoft Word. Such tools automate much of the process of creating consistency, leaving the writer and editor free to concentrate on other tasks—such as error-proofing the writing.
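As a toy version of the kind of check such tools automate, the sketch below scans a text for terms that appear in more than one variant spelling. The variant list is hypothetical; in practice it would come from your style guide.

```python
import re
from collections import Counter

# Variant spellings that a (hypothetical) style guide says should be unified;
# each tuple lists the competing forms of one term.
VARIANT_SETS = [
    ("website", "web site", "Web site"),
    ("email", "e-mail"),
]

def find_inconsistencies(text):
    """Report any term that appears in more than one variant form."""
    report = {}
    for variants in VARIANT_SETS:
        counts = Counter()
        for form in variants:
            counts[form] = len(re.findall(r"\b" + re.escape(form) + r"\b", text))
        used = {form: n for form, n in counts.items() if n}
        if len(used) > 1:  # more than one spelling in use -> inconsistent
            report[variants[0]] = used
    return report

doc = "Visit our Web site. The website lists e-mail and email contacts."
print(find_inconsistencies(doc))
```

A commercial tool such as PerfectIt does far more (hyphenation, capitalization of headings, abbreviation checks), but the principle is the same: the machine finds the candidates, and the editor decides which form is correct.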
Most technical communicators document processes, which follow a certain logical sequence. Documentation can reduce the frequency of errors if it follows that sequence rather than intentionally or accidentally circumventing it. Those of us who edit different types of information, such as scientific communication, try to develop a persuasive rhetoric in support of a conclusion. The more we follow a logical sequence for that rhetoric, the less likely our sequence will zig when the reader expects us to zag, the fewer times the reader will stray from that path, and the fewer errors they’ll commit on their way to their destination. That decreases the risk of error and increases the persuasiveness of an argument.
For any chain of logic, it’s important to follow the logic through to its consequences to provide a reality check about where it leads. You can do this by exploring the possible actions or directions a reader can take and the consequences of each. If the reader does action X, what happens? What if they don’t do X? What if they do action Y instead? If we want them to do action X, we must find ways to persuade them to do that action, and ways to help them return to the correct path if they do action Y instead.
Be particularly vigilant to identify any ways that a reader can harm themselves, whether physically or in terms of a loss of information. For example, if you’re documenting a word processor, look for opportunities to teach readers ways to prevent the loss of a whole manuscript, such as creating a help topic that describes the automated backup options and any manual steps they should take, such as copying their manuscript to a flash drive at the end of a day’s work. Then find some way to ensure that they see the information. For example, include a “protecting yourself” or “protecting your information” link on the home page of a Web site or online help system.
Although you can clutter your text with warnings, readers are most likely to miss those warnings in situations where they aren’t paying attention to such details, such as when they’re running hard to meet a deadline or are perhaps scared by thinking they might damage a machine they’re using. These are precisely the situations in which they’re most likely to harm themselves, their equipment, or their colleagues. You can mitigate some of these problems through careful documentation of which things to do and which to avoid. But it’s often more effective to persuade the developers to build in protections—such as archiving deleted files for a few days or preventing users from setting a dangerously high temperature without having their supervisor input an override code.
Look for other possible sources of error. For example, in a scientific manuscript, 15 000 metric tonnes (which can be abbreviated 15 kt) is lighter than 16 kt, but because readers often compare the length of the numbers and gloss over the units, they may compare 15 000 with 16 and draw the wrong conclusion. Converting both numbers to kt (15 versus 16) or providing the full numbers for both (15 000 versus 16 000) eliminates this type of error. Similarly, a series of graphs (e.g., a bar graph that shows quarterly profits and losses) that are designed to be compared should use the same axis scales. Readers often take this for granted and don’t closely examine the graph axes; instead they just compare the relative sizes of the bars. In my editing work, I’ve frequently seen this problem mislead readers into reaching a seriously flawed conclusion.
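The mechanical safeguard behind the tonnes example is to convert every value to a single unit before comparing, so the comparison operates on commensurable numbers rather than on the lengths of the printed figures. A minimal sketch (the conversion table is illustrative, not exhaustive):

```python
# Mass conversion factors to tonnes; illustrative, not exhaustive.
TO_TONNES = {"t": 1, "kt": 1_000, "Mt": 1_000_000}

def to_tonnes(value, unit):
    """Express a mass in tonnes so values in mixed units compare correctly."""
    return value * TO_TONNES[unit]

# 15 000 t versus 16 kt: comparing the raw figures (15 000 vs. 16)
# suggests the wrong answer; comparing in one unit gives the right one.
print(to_tonnes(15_000, "t") < to_tonnes(16, "kt"))  # True
```

The same normalize-then-compare habit applies when editing by hand: restate both quantities in the same unit in the margin before judging the author’s claim.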
One thing editors learn that writers often don’t is the need to check everything. We editors check all cross-references or citations: page numbers, chapter numbers and titles, section numbers and titles, figure and table numbers, and literature citations. If an author claims that something has a certain value midway through a document, we make a mental note to confirm that they report the same value earlier and later in the manuscript. Not only do we check these details; we also check that the correct information has been referenced. Modern software lets authors create such links semi-automatically, but that gives us a false sense of security. As I noted earlier, it’s far too easy to select the wrong destination from a long list—and hurried authors sometimes forget to use automatic linking and manually type a link; when that destination changes, the manual link will no longer agree with the new location. This is particularly true for mobile writers. For example, the precision of my laptop’s trackpad is much lower than that of my desktop computer’s mouse, and clicking on the trackpad often causes the cursor to jump half a line or more and select the wrong option. Since I’m aware of that problem, I’ve internalized the need to double-check the option I selected. Then there are the autocorrect and automatic suggestion features available on most smartphones and tablet computers; sharing the errors produced by these features online is a popular pastime, and a reminder to ensure that what we’ve actually typed is what we think we’ve typed.
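Parts of this cross-reference checking can be scripted. The sketch below cross-checks “Figure N” citations against figure captions in a plain-text manuscript. It is deliberately simplistic (it assumes captions start a line as “Figure N.” and will mistake a citation that ends a sentence with a period for a caption), so it supplements rather than replaces the editorial eye; the sample manuscript is invented.

```python
import re

def check_figure_refs(text):
    """Cross-check 'Figure N' citations against 'Figure N.' captions.
    Simplistic: captions must start a line, and a citation that ends a
    sentence with a period will be mistaken for a caption."""
    captioned = set(re.findall(r"^Figure (\d+)\.", text, flags=re.MULTILINE))
    cited = set(re.findall(r"\bFigure (\d+)\b(?!\.)", text))
    return {"cited_without_caption": sorted(cited - captioned),
            "captioned_never_cited": sorted(captioned - cited)}

manuscript = """As shown in Figure 1 and Figure 3, profits rose.
Figure 1. Quarterly profits.
Figure 2. Quarterly losses.
"""
print(check_figure_refs(manuscript))
```

Here the script would flag Figure 3 (cited but never captioned) and Figure 2 (captioned but never cited), the two classes of dangling reference an editor checks for by hand.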
In addition to verifying the cross-references, you must test the final product. First and foremost, you must ensure that everything is clear and legible. I often work with text created by Asian authors, and things I can read on the screen often turn out to include Asian fonts or Asian formats such as grid-based formatting that won’t print correctly, or that won’t render correctly if exported to PDF. Changing them to use Western language settings solves those problems. Other problems can arise if you’re publishing graphics with different shades of color. If you’ll be producing printed documentation, it’s important to print a copy on a color printer to ensure that the colors have the same appearance as they do onscreen. Many of the authors I work with submit graphics formatted to use the RGB color space of onscreen information, and forget that these colors don’t fully overlap the color gamut of the CMYK color space used for print. As a result, the shades are distinct on the screen but the distinction is lost in the printout.
Accessibility: Red–green colorblindness affects up to 8% of men and 0.5% of women, depending on the population, so you should be particularly cautious about using those colors. To test whether they and other color combinations will appear visually distinct, print the image in black and white (e.g., on a laser printer). If the colors aren’t distinct, change them and try again.
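You can also estimate the grayscale outcome numerically before printing. The sketch below computes an approximate relative luminance for each color, using ITU-R BT.709 channel weights applied directly to the raw sRGB values (a rough shortcut that skips gamma linearization); two colors with nearly equal luminance will tend to merge in a black-and-white print. The sample colors and the 0.2 threshold are illustrative assumptions.

```python
def luminance(rgb):
    """Approximate relative luminance of an sRGB color (0-255 channels),
    using ITU-R BT.709 weights on the raw (non-linearized) values."""
    r, g, b = (channel / 255 for channel in rgb)
    return 0.2126 * r + 0.7152 * g + 0.0722 * b

red, green = (200, 30, 30), (30, 160, 30)
# Colors whose luminances nearly coincide will merge in a grayscale print;
# a difference of 0.2 or more (an illustrative threshold) should survive.
print(abs(luminance(red) - luminance(green)))
```

The black-and-white test print described above remains the authoritative check; the calculation simply lets you screen many color pairs before committing paper to them.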
Now that we work almost exclusively on the computer, there’s another peril to beware: authors who work at high magnification (a high “zoom” level). For example, my aging eyes make it easier for me to work at 150 to 200% magnification while I write. This makes it difficult to tell whether text that I can read easily will be easily readable at normal magnification or when printed. To ensure that it will be, I always test the images at 100%, and for complex pages that contain graphics, I’ll even print a copy to ensure the text is legible.
A final problem arises from limitations in the applications readers use to access our information. In many portable devices (including applications written for Apple’s iOS), expected features such as “pinch to zoom” are unavailable or not supported by an app, so graphics and text that you can zoom in on when viewing something on your desktop computer may be illegible on the portable device. Although software such as Adobe Digital Editions does a great job of simulating what you’ll see on these devices, there’s no substitute for the real thing. Recruit a group of volunteers to test your product on as many devices as possible. Special symbols are particularly important. For example, the emoji character sets differ significantly between platforms, and those differences may inadvertently convey different meanings. If you’re using such symbols, it would be wise to create a standard set that you have tested to ensure that they work similarly and correctly on all platforms. If they don’t, you’ll need to develop solutions that reduce the risk of errors.
For any document that contains a summary, such as the Abstract in a peer-reviewed journal manuscript or the executive summary in a workplace report, check that summary carefully. Particularly in documents that have multiple reviewers, it’s easy to end up with many requests for changes in the text that will also affect the summary. Unfortunately, in my experience, authors frequently forget to incorporate those changes in the summary.
For contexts such as online help and Web sites, in which you can’t know in advance how a reader has reached a topic, it’s important to check topics in isolation to ensure they contain everything that readers need to know to understand the topic without going on a wild goose chase in search of the missing information. (I’ve discussed several ways to do this earlier in this article.) Well-designed single-sourcing systems can minimize the number of problems with missing information, but single-sourcing is not yet a mature art. Among other things, it’s all too easy to include the wrong chunk of information. I once owned a car whose owner manual (generated using a content-management system) contained information for a different model and lacked information for my model. Human oversight is the necessary final step to ensure that everything works and makes sense.
In an ideal world, your readers would be well-rested, stress-free, able to devote their full attention to deciphering your prose, and willing to make the effort. Unfortunately, you must work in the real world, where readers are often fatigued, stressed, distracted, and unwilling to stick with difficult information until they understand it. You simply can’t take for granted that they’ll try to figure out any errors you’ve made, or that they’ll succeed if they do. If you think some of these issues aren’t serious, it’s worth remembering that the modern world is increasingly global and multilingual. You once could count on most of your readers speaking English as their native tongue, and being able to figure out English infelicities, but you can no longer count on this. Thus, you need to be more careful than ever.
Hart, G.J. 2000. The style guide is dead: long live the dynamic style guide! Intercom, March:12–17.
Hart, G. 2011. Uprooting entrenched technical communication processes: process improvement using the kaizen method.
Hart, G. 2012. Revising the review-and-revision process: a case study of improving the speed and accuracy of technology transfer. Intercom, February:22–27.
©2004–2018 Geoffrey Hart. All rights reserved