The five W's: an old tool for the new task of audience analysis
Previously published, in a different form, as: Hart, G.J. 1996. The five W's: an old tool for the new task of audience analysis. Technical Communication 43(2):139–145.
An audience pays attention to your attempts at communication because they have certain needs that they expect you to meet. As journalists have long known, you can satisfy most of an audience's need for information by answering five "W" questions: what, who, where, when, and why. Although this approach is a core element of journalism, it has obvious applications for technical communicators, who must create information that meets an audience's needs.
The premise that readers want to answer certain fundamental questions by reading a news story is central to the rhetorical approach in journalism. By asking these questions about any topic before beginning to write, a journalist can discover the information that a story must include. Journalists call this strategy the "five W's" because five questions that begin with the letter "W" form the core of the approach:
What happened?
Who was involved?
Where did it happen?
When did it happen?
Why did it happen?
Some authors add a sixth question, "how?", but such questions can generally be rephrased so they are covered by one or more of the W questions; for example, "what permitted this to occur?" can replace "how did it happen?"
The answers to these five questions provide enough information for an audience to understand what happened, and whether and how it will affect them. In short, asking these questions implicitly leads to user-centered design. Regrettably, this approach no longer seems to be actively pursued by younger journalists; in my experience, stories that answer all five questions, let alone answer each one satisfactorily and efficiently, are uncommon. Even otherwise excellent papers on how to develop documentation (e.g., Scholz 1994) discuss the details of content more in terms of how to write than of what to write.
Although the five W's represent an old and familiar approach, they remain relevant; old tools are no less valuable simply because they're old. Our increasing emphasis on user-centered information makes the questions of "what happens" (or what must the user cause to happen) and "how will it affect the user" important ones. In this paper, I'll suggest how you can use the five W's during the audience-analysis phase of designing information to help ensure that the information meets the audience's needs. Editors and documentation managers can use a similar approach to evaluate the effectiveness of a design.
This paper is based on personal experience with documentation that failed to meet my needs, not on quantitative research on usability. I've provided references to relevant literature where I felt this would strengthen the discussion, but I have not set out to write a literature review. To focus and simplify the discussion, I'll use the example of the needs analysis that underlies writing and illustrating computer software documentation, one of our profession's main preoccupations, but the approach has clear application in all other forms of user-centered design.
The question of "what" has two main aspects: what might users attempt to do (what problem is there to solve?), and what steps must they take to achieve this goal?
The first question implies a task-based approach. That is, you must first identify all the tasks that users are likely to ask the software to perform. This requires a comprehensive understanding of the software's purpose so that you can define various categories of functions. You can then break each function down into a set of tasks that users can combine in various ways to achieve the larger function. For example, consider a common desktop publishing function: importing a graphic and placing it on the page. This simple function comprises several individual tasks, as shown in Figure 1.
Figure 1. An example of breaking a function into individual tasks based on "what" questions.
Function: Placing a graphic.
Task 1. Finding the graphic on the hard disk ("import" menu).
Task 2. Placing the graphic on the page. (Options: embed in file, use link.)
Task 3. Setting the text wrap.
Task 4. Resizing the graphic to fit the grid.
Task 5. Cropping the graphic.
Task 6. If the graphic is a photo, adjusting halftone parameters.
Knowing what the software cannot do is also important because it defines cases in which users can't use a built-in function and must instead adopt a work-around. For example, the desktop publishing software PageMaker has never included an automatic footnote feature, yet its documentation makes no reference to this lack. As a result, I once spent a fruitless hour trying to save time by learning how to create footnotes automatically, only to discover, after calling technical support, that the software lacked this function. The omission was doubly puzzling because an otherwise comprehensive manual that clearly specified the limits of most functions (e.g., maximum type size) had led me to expect similar information about automatic footnoting. To be fair to the writers at Adobe, the list of what any software package cannot do is always far longer than the list of what it can do; nonetheless, there are certain standard tasks (such as footnoting) that users may expect your software to perform, and you should identify any that the software can't perform. Good lists of expected functions usually appear in the comparative product reviews of leading trade magazines (e.g., MacWorld and PC Magazine for computer hardware and software).
Each of the functions that you identify could define a section of the documentation, and each task that you identified under that function could then become a major heading; similarly, any subtasks could become subheadings, and so on until you reach the sentence level, the sequence of steps to perform each subtask. This approach, not coincidentally, also has profound implications for designing or redesigning the software's user interface. If you know the tasks that users will attempt to accomplish, you can use a parallel approach to determine how to group the software's functions to help users accomplish the functions efficiently. For example, a recent release of Macromedia Freehand regrouped related tasks into well-organized, "one-stop" palettes of tools that replaced several layers of menu choices in the previous version.
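To make this mapping concrete, here's a minimal sketch in Python (the task list paraphrases Figure 1; the data structure and function names are my own hypothetical illustrations, not drawn from any real product) that represents a function-to-task breakdown as data and emits the corresponding documentation outline:

    # Each function maps to a list of (task, subtasks) pairs.
    functions = {
        "Placing a graphic": [
            ("Finding the graphic on the hard disk", []),
            ("Placing the graphic on the page",
             ["Embedding the graphic in the file", "Linking to the graphic file"]),
            ("Setting the text wrap", []),
            ("Resizing the graphic to fit the grid", []),
            ("Cropping the graphic", []),
            ("Adjusting halftone parameters (photos only)", []),
        ],
    }

    def print_outline(functions):
        # Each function becomes a section; each task, a major heading;
        # each subtask, a subheading.
        for section, tasks in functions.items():
            print(section)
            for number, (task, subtasks) in enumerate(tasks, 1):
                print(f"  {number}. {task}")
                for subtask in subtasks:
                    print(f"       - {subtask}")

    print_outline(functions)

The same structure could equally drive an online help system's table of contents, which is one reason the task breakdown repays the effort of building it.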
Documenting the actual steps in each procedure requires a knowledge of context (the task) and of cause and effect (the details), as discussed by Mary Battle (1994). To make the documentation equally useful for experts and neophytes (two typical classes of user in any audience), the next problem becomes how to format the information so that experts need not wade through the detailed explanations that neophytes must have available. Quick-reference cards, command references, and similar devices, or typographic conventions (Figure 2), are possible ways to make the summary of a step stand out from the subsequent details. If you identify other classes of user through audience analysis, you may need to incorporate additional organizational distinctions, such as providing a technical reference in addition to a user manual, or even producing an entirely customized manual for an important, distinct group of users.
Figure 2. An example of using simple typographic cues to summarize steps in a procedure for expert users while providing details for those who need them.
Step 1. Perform the first step in the procedure.
Here are the details of step 1.
Here are more details of step 1.
Step 2. Perform the second step in the procedure.
Here are the details of step 2.
Here are more details of step 2.
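One way to keep such layered layouts consistent is to generate them from a single structured source. Here's a minimal sketch in Python (the step text comes from Figure 2; the data structure and the rendering choices are hypothetical) of how that might work:

    # Each step pairs a one-line summary with its detailed explanation.
    steps = [
        ("Perform the first step in the procedure.",
         ["Here are the details of step 1.",
          "Here are more details of step 1."]),
        ("Perform the second step in the procedure.",
         ["Here are the details of step 2.",
          "Here are more details of step 2."]),
    ]

    for number, (summary, details) in enumerate(steps, 1):
        # The summary stands out (capitals here; bold type in print),
        # so experts can stop reading after each summary line.
        print(f"STEP {number}. {summary.upper()}")
        for line in details:
            # Indented details serve neophytes who need the full explanation.
            print(f"    {line}")

Because the summary and details live in one structure, a quick-reference card becomes a second rendering of the same source rather than a separately maintained document.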
Several other aspects of "what" may enter your writing. These include finding alternatives ("what components should be graphical instead of textual?"), identifying components ("what is a...?"), reassuring the user ("what if I...?"), defining or differentiating items ("what is the difference between...?"), providing diagnosis or causality ("what caused... to happen?"), troubleshooting ("what did I do wrong?"), and problem resolution ("what can I do to save myself?"). You should consider each of these questions, and any others relevant to your context, to create complete documentation. I'll list comparable questions for each of the other four W's later in this paper, but the same note applies in every case: your unique context will suggest unique questions that I haven't listed.
In technical writing, "who" has two primary aspects: who are the audience members for your documentation, and who performs the various actions described by the writing? That is, who are the users (their identity determines their needs) and who (the user or the computer) is responsible for the actions that lead to a given result?
The first question implies that you must learn the reader's needs by using available audience information, or by obtaining such information ("audience analysis") if none exists. This form of "who" question is easy to avoid answering because audience analysis can seem intimidating for any of several reasons.
My own experience suggests that this trepidation is unnecessary. Almost any attempt to understand the audience's needs, even a relatively simplistic one, will result in better documentation; more sophisticated approaches will produce correspondingly better results. At a minimum, you can engage in simple role-playing by pretending to be the product's user, "forgetting" your pre-existing knowledge of the product, and attempting to see the product (particularly how to approach it) with new eyes. Along these lines, I've recently begun telling anyone who asks what I do for a living that I'm a professional idiot: I can manage to misunderstand anything (and subsequently correct it) before a client's audience does. But it's not easy to be a consistently professional idiot, and a more formal approach is usually more productive. The literature on audience analysis is extensive, but even selective reading of it can provide the confidence you need to begin. Thomas Warren's (1993) paper is a good starting point, and Tracy Montgomery (1989) and Michael Floreak (1989) have provided insightful case studies that show the theory at work in practice.
The second aspect of "who" involves distinguishing clearly between what users must do (i.e., who initiates the action) and how the software will respond (i.e., who finishes the action). At first glance, this distinction seems obvious and thus trivial, but it's surprising how often the distinction blurs, particularly for an inexperienced user. The most common examples involve error messages that fail to distinguish between errors caused by the user and those caused by the software or the computer. The infamous "abort, retry, fail?" message that MS-DOS provides in response to certain errors is possibly the least helpful error message in the history of computing: it often fails to identify who caused the problem (was the error of human, software, or hardware origin?), provides no information on how (or whether) any of the three alternatives will resolve the problem, and says nothing about the consequences of each alternative.
Clearly separating who initiates the action from who (the computer) completes it provides the context a user needs to understand what will happen, and consequently how to tell if it didn't happen correctly. It's always easy to inform the reader of this change in responsibilities, often by simply stating "after you select choice X from the menu, the software will perform action Y, which must finish before you can continue by doing Z". To do this well, you must know what the software will do in response to each user input; from this starting point, you can then document each combination of command and response. The larger the software, the more complicated this becomes, but the principle at least is simple.
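As an illustration of the principle, here's a minimal sketch in Python (the save-file scenario, the function name, and the message wording are all hypothetical) of an error report that answers the "who" question that "abort, retry, fail?" leaves open:

    def report_save_error(error):
        # Tell the user who caused the problem and what each option will do.
        if isinstance(error, PermissionError):
            # The user's account lacks the necessary rights.
            print("You don't have permission to save files in this folder.")
            print("Choose a different folder, or ask your administrator for access.")
        elif isinstance(error, OSError):
            # The hardware or operating system failed.
            print("The disk reported an error, so your file was not saved.")
            print("Retry will attempt the save again; Cancel will return to your document.")
        else:
            # The software itself is at fault.
            print("The program encountered an internal error while saving.")
            print("Your document is unchanged; please report this to technical support.")

    report_save_error(PermissionError())

Note that each branch identifies the responsible party (user, hardware, or software) and states the consequence of each choice, which is precisely what the MS-DOS message omits.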
"Who" also involves my least-favorite aspect of most computer documentation: "who do I call if I can't solve the problem using the manual?" In a quick survey of the software manuals I use at work and at home, less than one-third placed the technical support phone numbers in an obvious place; in this case, I defined "obvious" (very subjectively) as meaning that I could find the necessary information in the "getting started" guide, in the first section in the reference manual, in an appendix to either manual, early in the table of contents (so that I didn't have to skim the entire book outline), or in the index under "getting help", "technical support", or several related synonyms. In the remaining two-thirds of the manuals, these phone numbers were either not listed anywhere "obvious", or were hidden away in one of the several manuals that accompanied the software. If few users will ever have to contact you for help, which is rarely true, or if your goal is to cut down on telephone costs, this may be acceptable; however, in practice, the users most likely to need your help may also be those least likely to find the technical support information. (I offer, as anecdotal evidence, most of my colleagues; your mileage may vary.) Adobe and Macromedia solve this problem elegantly by providing a card for your contact file that contains the technical support phone numbers and a convenient place to write the product's serial number so that this information is available when technical support staff ask for it.
Other aspects of "who" include defining authorship ("who wrote this stuff anyways?"), explaining licensing or distribution information ("who can I send a copy to without paying royalties?"), invoking audience information ("who will be skilled enough to use this product?") and supplementing your own information ("who provides products that enhance the functionality or usability of this product?").
Once you know the "who" (thus, the user's needs) and the "what" (how to meet these needs), the next step is to identify the locations of the tools for performing the "what" and, if possible, to explain this in the context of their use. You can do this purely in words, as in the phrase "choose the Bezier tool from the tools palette", but this is often less efficient than presenting the information visually, perhaps as a picture of the screen display in which you highlight the location of the tool. In this manner, you display information in the same visual context in which it will appear to the user. Visually depicting the location facilitates the cognitive "context switch" (from textual to visual modes of processing information) that occurs when users glance from the documentation to the screen, and also better depicts the tool's logical context by showing its physical position in relation to the available tools. In this example (selecting a tool from a palette), providing a visual image also reduces the need for users to memorize the meanings of a series of cryptic or illegibly small icons (e.g., Gurak 1992; Horton 1993).
In reference material, such as a list of all possible commands for software, "where" identifies the need to help users find specific information ("where does this command appear in the list of commands?") or the need to refer users to related information required to perform a task ("where can I find more details?"). Both needs suggest that you must discover how your audience will try to find information, and then structure your information to support this approach. In particular, the second question requires you to determine where related material should appear: should it be grouped so that all commands that relate to a task or function occur together, or is a cross-reference to another page permissible or even preferable?
"Where" also suggests the concept of providing a map of your information. Simple maps include tables of contents and introductory material such as "read me first" or "how to use this manual" sections. More complex maps help users to explore related information by placing it in the context of an overall schema (a "mental model"); in this sense, the map is an acronym for what Richard Saul Wurman (1989) calls "Man's Ability to Perceive". Typical schemata include information on how the software's many parts relate to each other and details on how you have grouped your information so that users can learn to assemble small tasks into larger functions. This implies that "where" need not refer to physical location alone; "where" may be more conceptual, in the sense of where a piece of information fits in some larger context. You can provide this information implicitly, by listing the related topics and asking users to integrate them into a larger picture, or explicitly, by stating how a task integrates with the other tasks that combine with it to accomplish some function. This is particularly helpful when it takes the form of "where have I been and where do I go from here?" and when you link this with the "why" information that I'll discuss later in this paper.
Other aspects of "where" include listing alternatives ("where else can I go to perform this function"?), providing background context ("where does this product [e.g., illustration software] fit into the collection of software that I use for a larger task [e.g., publishing]?"), providing alternative forms of instruction ("where can I go for training?"), locating service providers ("where can I go to get a printed copy of my product?"), and identifying physical location ("where are the various files on my hard drive?").
In most documentation, "when" is primarily a matter of communicating the sequence of tasks that will accomplish a larger function. This sequence may be precisely predetermined by the order in which the software requires users to perform the tasks, or it may be your attempt to suggest the most efficient of various possibilities. Note that even when there is a "better" way, it's still worthwhile to explain alternatives so that users can choose the approach that best suits them, or try a different approach if your preferred approach fails. "When" can explain when users will be competent to attempt a task (e.g., "don't try this until you understand the previous chapter"). "When" can also have a true chronological context, as when you advise users to back up their hard disk every evening before going home. "When" even extends the concept of sequence into the realm of rhetoric, in the form of deciding when the effect should precede the cause in an instruction (Connatser 1994).
For some tasks, such as entering information into blanks ("fields") in a "fill in the blanks" structure such as a dialog box, the order of data entry is essentially irrelevant; in this case, the sequence should probably follow the physical sequence of the fields on screen (i.e., from top left to bottom right) rather than a chronological sequence. One order may be more efficient than another, but the results will be the same irrespective of the order. When this is true, you should indicate (explicitly or otherwise) that the order is unimportant. Conversely, when the data entered into one field will restrict the data or its format in a subsequent field, state this explicitly; for example, in a dialog box whose first field lets users specify units of measurement and subsequent fields specify the magnitude of various dimensions of an object, you should remind users to convert any existing numbers in the fields, particularly if the software doesn't do this automatically. Again, this sounds obvious, but I've seen several cases where colleagues have reflexively retyped the same numbers after changing the units of measurement and produced the wrong result. Fixing the problem is usually simple, but avoiding it in the first place is simpler still.
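Better still, the software can handle the dependency itself. Here's a minimal sketch in Python (the field names, function name, and conversion table are hypothetical) of a dialog box that converts the existing dimension values automatically when the units field changes:

    # Conversion factors between supported units of measurement.
    CONVERSIONS = {("inches", "cm"): 2.54, ("cm", "inches"): 1 / 2.54}

    def change_units(fields, old_units, new_units):
        # Convert every existing dimension value when the units change,
        # so users needn't retype (and possibly mistype) the numbers.
        factor = CONVERSIONS[(old_units, new_units)]
        return {name: round(value * factor, 2) for name, value in fields.items()}

    fields = {"width": 8.5, "height": 11.0}      # values entered in inches
    print(change_units(fields, "inches", "cm"))  # {'width': 21.59, 'height': 27.94}

When the software can't do this, the documentation must carry the load instead, with an explicit reminder at the point where the user changes the units.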
Presenting a series of numbered steps is a common explicit clue that sequence is important, but any printed information contains an implicit sequence (in English, from left to right, then from top to bottom) that suggests the order of operation. A subtler approach incorporates the "why" component (discussed in the next section) into your written descriptions to explain the sequence: by explaining the reason for performing a task, you help readers infer why it must precede another task. Where this inference is not obvious, you should explain why the sequence is important. Unfortunately, determining whether something is "obvious" is highly subjective (thus, error-prone), so to be safe, you should test users unfamiliar with the task to identify the specific inferences they make to move from one step to the next. If they don't make the same inferences that you did in designing the connections between steps in the sequence, you must make these inferences explicit or perhaps even alter the sequence to conform with the audience's expectations.
Other aspects of "when" include time-stamping information ("when will this information cease being valid?"), copyrighting information ("when was this material copyrighted?"), explaining alternatives ("when is this approach less effective than another?"), and warning users ("when will this activity be hazardous or cause problems?").
The functions that software can perform provide a list of "whats" (what the software can do), but each "what" invokes another question: "why would I use that function?" This question forms the basis for one useful type of documentation, a list of frequently asked questions and their answers. For example, on a very basic level, the "what" might be to select text in a word processor document, with the "why" being to prepare the selected text for another activity such as copying or deletion. Effective instructional documentation such as a tutorial uses the "what" information to show users how to proceed, but also provides matching "why" information to help users learn to solve problems. For example, Mulcahy (1989) demonstrated that effective instructions should provide causal links to explain how and why the steps in a procedure are related. Any "what" without an accompanying "why" may create a gap in the user's knowledge and thus interfere with the user's ability to solve a certain class of problem.
"Why" explains the need for each step the user must take to accomplish a task, and why the tasks follow a specific sequence. This provides the context in which the users will operate, and facilitates learning and subsequent performance of the steps. Most documentation is incomplete because it tells the user precisely what to do, but provides no contextual information on why each step is required, useful, or efficient. The result is that users are more likely to learn how to perform a function than how to link it to other functions to perform more complex tasks; without the documentation, many users will be unable to perform the function because they can't remember the next step. "Why" information is an important way to help users understand what the next step is likely to be, and thus remember what to do next.
Another aspect of "why" involves the most common user question of all: "why isn't this working?" Most reference manuals contain at least brief troubleshooting sections for common problems, and some extend this to the reference material to tell users what to look for if an important command succeeds or fails, but few extend this approach to individual functions. I once had a problem after installing a new monitor for my computer. The manual's troubleshooting section explained in commendable detail the possible solutions if no picture appeared after installing and turning on the monitor, but there was no information at all on how to resolve problems once the monitor was displaying a picture.
Other aspects of "why" include explanatory components ("why would I want to perform this sort of task in the first place?"), apologies ("why we didn't include an obvious feature"), and justifications ("why one approach is better than another").
I've presented the five questions in one particular order, but the "correct" order depends on your goal. For example, in redesigning a semiannual newsletter on current research, we started with the knowledge that we were writing for an audience of North American readers ("who") interested primarily in forestry research in their region of the continent. After grouping information by region to identify the regions in North America for which the research would be relevant ("where"), we revised the titles of the articles so that each identified the problem that the research addressed ("why" the reader should read past the title). We then edited each article to include a simplified experimental methodology (what the researchers did) and the results (what they found). We qualified this by noting the season in which the research occurred, which limits the applicability of some results. At the end of each article, we provided contact information (more "who") for readers who wanted more details. Defining this approach helped us to reduce the amount of writing and editing required, and better addressed the real needs of the readers.
Most technical documentation adequately addresses the reader's need to know "what", but ignores or undervalues the remaining four W's. Working through all five categories of questions ties each question to specific audience needs so that you can meet those needs intentionally rather than serendipitously. For user-centered information, this approach addresses the audience's needs explicitly and leaves far less to chance.
Who should use the five W's approach during the audience analysis component of a design? All technical communicators. What should these communicators do? Identify the information needs of the audience, and attempt to meet each of these needs efficiently. Where should this occur? In all information resources, from printed documentation and online help to the user interface itself. Why should this occur? User-centered documentation begins and ends with the recognition that the audience, not the annual publications competition, is the final judge of our success. When should we use the five W's? Starting at the overall document design stage and carrying through to the final implementation of the design.
The five W's approach does not cover every possible information requirement. For example, it does not explicitly address how the parts of a document should relate to each other from the perspectives of the overall structure (e.g., how to put chapters in an effective order) or the visual design (e.g., typography, page layout), although it will generally lead you to consider these factors. More important, the approach does not specify how to test the information's usability. The approach will provide valuable insights into how people are likely to use your documentation, but actual use will differ from user to user; more important still, your model of the users is only a theoretical construct, and must undergo a reality check to confirm that the model is valid. Asking these five categories of questions will help you to identify an efficient structure for the information and will increase the likelihood of producing usable documentation; it won't guarantee either without follow-up testing.
Battle, M.V. 1994. A new/old strategy for reading and writing technical documents. Technical Communication 41(1):81–88.
Connatser, B.R. 1994. Setting the context for understanding. Technical Communication 41(2):287–291.
Floreak, M.J. 1989. Designing for the real world: using research to turn a "target audience" into real people. Technical Communication 36(4):373–381.
Gurak, L.J. 1992. Toward consistency in visual information: standardized icons based on task. Technical Communication 39(1):33–37.
Horton, W. 1993. The almost universal language: graphics for international documents. Technical Communication 40(4):682–693.
Montgomery, T.T. 1989. Beyond the stereotype: teaching the complexities of audience analysis. p. ET-60 to ET-62 in: Proceedings, 36th International Technical Communication Conference, Society for Technical Communication, Arlington, VA.
Mulcahy, P.I. 1989. A problem solving schema for task instructions. p. RT-119 to RT-121 in: Proceedings, 36th International Technical Communication Conference, Society for Technical Communication, Arlington, VA.
Scholz, J. 1994. Developing technical documentation the smart way. Technical Communication 41(1):94–99.
Warren, T.L. 1993. Three approaches to reader analysis. Technical Communication 40(1):81–88.
Wurman, R.S. 1989. Information anxiety. Doubleday & Co. Inc., New York, NY.