
Ten technical communication myths

by Geoff Hart

Previously published, in a different form, as: Hart, G.J. 2000. Ten technical communication myths. Technical Communication 47(3):291–298.

Myths often represented the very human attempt to explain something important but poorly understood, such as the turning of the seasons, or to provide cautionary tales to warn listeners against unsanctioned behavior, as in the myths of Prometheus and Epimetheus. The fascination inspired by myths has kept many alive across the millennia, but despite the degree of abstraction or exaggeration that makes them so fascinating, at their heart there often lies a grain of truth or an insight into some fundamental aspect of the human condition. In our current enlightened age, we fancy that we’ve grown beyond the need for myths, yet “urban legends” abound (particularly on the Internet), and many of the things we do in our daily work are strongly influenced by “rules of thumb” that are, in a very real sense, a form of myth.

Like any other profession, technical communication has accumulated its share of mythical rules of thumb, but the good news about our profession’s myths is that they too contain grains of truth and insights into things that are truly important to us. The bad news is that we’ve also internalized some of these myths to the point that we no longer question them, and have begun to let them constrain our choices rather than help us remember and see the truth. Some communicators even overgeneralize the occasional rule to the point at which it loses its validity and becomes dangerously misleading.

So what myths do we live by? In no particular order, this paper presents my “top ten” list of what I consider to be the central myths in modern technical communication. There are undoubtedly others. By writing this paper and acting as devil’s advocate, intentionally presenting these myths in a bad light, I’m hoping that I can persuade you to question these and other rules of thumb that you use daily. When you pay closer attention to the rules you obey, consciously or otherwise, and question why you obey them, you can start to recognize the disabling aspects of a myth and begin taking steps to free yourself from those constraints.

Myth #1: Knowledge of specific tools is vitally important

Few managers want to hire a new technical communicator and wait weeks for the person to become productive with the company’s writing tools, yet hiring on the basis of “tool skills” ignores the fact that the ability to format text is a very small part of our value as technical communicators. (It also ignores the fact that any new employee, even one who comes equipped with the desired tool skills, faces a learning curve at a company and may take weeks to learn the ins and outs of the new job.) Employers hire us primarily because we can understand their products and communicate that understanding to their customers. They hire us because we know how to take a product apart, literally or figuratively, and decide what components of the product we must document and how we should do it. They hire us because we possess the ability to pry information from the grasp of reluctant subject-matter experts, because we have that rare skill of empathizing with our audience well enough to understand that audience’s needs, and because we have the persistence to eventually satisfy those needs.

None of this depends strongly on the ability to work in Word, FrameMaker, or RoboHelp. Back in the Dark Ages before computers, the ancients did a pretty good job of documenting complex things without these tools; in fact, those ancients could probably teach us a few things about good writing. Nowadays, few writers lack the ability to type and do basic formatting from the software’s menus, and those basic skills (not advanced formatting) are the crucial tools that support our work; in many situations, advanced formatting skills are actually a red herring, because templates already exist and layout or design work consists more of applying the templates than of actively designing something new. It’s not that knowing how to format is unimportant to us; rather, it’s far less important than our ability to communicate.

But let’s assume that tool skills really are as important as some managers claim. Given that most of us have learned enough software skills to quickly develop basic to moderate competence with new software, the period of several weeks while we adapt to our new job is more likely to pose problems than our ability to learn new software. For example, in my comparatively short career (not quite 15 years), I’ve mastered four different layout programs, half a dozen word processors, three operating systems, and more other programs and utilities than I care to count, all the while coping with an ever-accelerating rate of evolution in each of these software categories. What’s impressive about this is not that I’m a software prodigy, but rather that I’m so average; many of my colleagues have an even more diverse portfolio of tools at their disposal. The consequence for employers is that most experienced technical communicators have yet to encounter software we couldn’t begin using productively within a day, and become skillful with in about a week. Mastery can certainly take far longer, but most of what we do doesn’t require that level of mastery.

To see the flaw in using tool skills as a primary hiring criterion, ask yourself this: would you rather read well-written documentation, or documentation produced by someone who can make Word 97 jump up and dance? Now ask yourself which of the two skill sets (writing vs. formatting) is easier to teach, and you’ll know which of the two writers you should hire. All else being equal, which is rare, choose the communicator who also knows your development tools and can use them for layout. And speaking of layout…

Myth #2: Sans serif fonts are more legible online

“All else being equal,” this rule of thumb claims, “sans serif typefaces remain easier to read on low-resolution displays such as computer monitors, which typically have resolutions of between 72 and 96 dots per inch”. This resolution is certainly low, even compared with that of the advanced 24-pin dot matrix printers we abandoned in favor of laser and inkjet printers, and certainly can’t do justice to the fine details of many serif fonts designed for print; in particular, the serifs can disappear entirely, and character outlines may even blur because the variable stroke width that characterizes traditional serif fonts lends itself poorly to fixed-size pixels.

Unfortunately, though these assertions all contain a grain of truth, “all else” is almost never equal, and you should distrust any typographic studies that claim otherwise. Many factors can overwhelm the theoretical difference in legibility between serif and sans serif type, even if we ignore the fact that it’s possible to optimize the designs of either typeface style for online display (e.g., “slab” serifs hold up better than thin serifs onscreen). The typographic factors that can overwhelm the choice of serif versus sans serif typefaces include, but are not limited to, the size of the type, the length of the lines, the spacing between lines (leading), the spacing between letters and words, and the contrast between the text and its background.

So many other factors influence legibility that a generalization such as choosing sans serif for online use often leads us to forget that the combination of the abovementioned typographical details is generally far more important than typeface choice per se. Since I’m attempting to provoke a reaction in this essay, I’ll go so far as to claim that any typeface designed for reading (rather than for “display”) can be made legible through skilled typography. Karen Schriver (1997) has provided a detailed summary of the literature on typographic issues that should be mandatory reading for documentation designers.

Even were sans serif fonts the hands-down winners, enforcing their use ignores perhaps the greatest blessing of publishing information online, something that has thus far been impossible to achieve in print: readers get to choose the fonts and font sizes that they prefer rather than having to cope with our choices. This is particularly important for visually impaired readers, who will likely become an increasingly significant part of our audience as the average age of our readers increases. My advice, subject again to the caveat that I’m overstating my case to make a point, is that we should never deprive readers of this flexibility unless we’ve carefully weighed the advantages of what we're giving them in return. In my experience, there's little advantage to enforcing a typeface choice. Using specific typefaces is most important when the graphic design approaches the content in importance, but that’s rarely the case for most of the work we do.

One important caution about leaving typeface choice to the reader: make sure you document how readers can make the necessary changes, since many neither know that they have a choice nor understand how to make the change once they do know. But will that change over time? I’ve said that our audience is aging, and that’s just the tip of the iceberg.

Myth #3: Audiences are static

There's a myth that once you've characterized your audience through audience analysis, the job's done and all you need to do is follow up with a round of usability testing to provide a reality check. That's far from true.

Inconveniently, audiences insist on changing over time. The neophyte you devoted an entire “getting started” manual to teaching eventually grows beyond the need for this information, and may even become a “power user”. Some of the former power users leave, tempted away from the fold by newer, more interesting products that present exciting new possibilities; in particular, the radical fringe who first adopted a product and pushed it to its maximum potential often leave to follow newer waves, leaving behind craftsmen who feel no need for such exploration. And the cycle begins again as more neophytes pick up the product and decide it's worth learning because "it's the standard".

I’ve already mentioned that our audiences are aging, but this has significant implications beyond the need to remember legibility issues. One change that is already well underway, and that may be complete within the professional lifetimes of most STC members, involves computer use. Even today, 20 years after personal computers began moving out of the hands of hobbyists and into the hands of regular users, we must write for an audience that includes a fair number of people who are acutely uncomfortable with computers, and who may be using them for the first time. But within one or two decades, these people will have become a vanishingly small component of the audience for typical software developers. If they become sufficiently rare, perhaps our employers won't grant us the time and resources to cater to their needs. For most of our audience, computers will be so familiar that they're second nature, and that will have profound implications for how and what we document. There's already a trend in this direction, since manuals that begin with the words "We assume you already know how to use Windows" have pretty much driven manuals that include an operating system tutorial into extinction.

How else will our audience change over the next two decades? The only way to find out will be to keep our eye on them and start assessing how their needs are changing.

Myth #4: Minimalism means keeping text as short as possible

John Carroll has been one of the leading standard-bearers of the minimalism movement, and no doubt has grown rather frustrated with the notion that minimalism means brevity, pure and simple. Minimalism also doesn’t mean trial-and-error learning, maximum simplicity, or any of several other misconceptions or oversimplifications. To set the record straight, he co-wrote an article that deals with these misconceptions firmly and eloquently (Carroll and van der Meij 1996). Since I lack the space here for a full review of minimalism, I’ll risk oversimplification myself by treating the subject in much less depth than it deserves. To quote Carroll: “The central principle in minimalism is task orientation. But many other principles play a role in this design approach either because they support task orientation or because they follow from it.” In short, the minimalist philosophy involves understanding what your audience is trying to accomplish (audience and task analysis), and focusing on those needs by providing enough information, in the right form and at the right time or in the right place, to help them accomplish their tasks.

The myth that minimalism equals brevity stems from a much more interesting and complex assertion: that you shouldn’t bury readers in extraneous detail. The challenge, of course, lies in discovering what is truly extraneous. It’s also a myth that minimalism is a one-size-fits-all solution for all communication problems, since its task orientation does not make it directly applicable to problems such as communicating theoretical information (e.g., the “why” of graphic design rather than the “how”) or writing to persuade the reader (e.g., marketing). Yet even for such seemingly unrelated problems, minimalism has much to teach us because of its emphasis on the reader, and that emphasis won’t lead us far astray even when the reader’s tasks are not immediately recognizable as tasks. The fact that a philosophy designed for one specific field (task-oriented documentation) can be so easily misinterpreted, yet still have broader implications for communication, leads neatly into another myth:

Myth #5: The optimum number of steps in a procedure is 7 plus or minus 2

George Miller studied, among other things, human short-term memory, but is perhaps most famous for discovering “the magical number seven”. Miller’s best-known paper (Miller 1956) is also probably his least-read paper, and this failure to return to the source has led to one of the more pernicious misunderstandings in the field of technical communication. In effect, generations of writers have assumed that (for example) lists and procedures should contain no more than five to nine steps, based solely, so far as I can tell, on the title of Miller’s paper and the myths that have grown up around it. As it happens, Miller’s article actually discusses the human ability to reliably distinguish categories (e.g., distinct shades of gray, sound levels) and the related issue of “channel capacity”, which represents (simplistically) how much information your audience can manage at a single time. In effect, this represents the number of cognitive tools a typical reader can hold in their “mind’s hand” (so to speak) and use to attack a problem.

I won’t try to summarize 16 pages of rich, moderately dense prose by Miller in any depth, both because I want to encourage you to read the original article yourself and because an update of this subject merits its own article. Given the importance of what Miller discusses, we should begin thinking about how to test the applicability of this body of research in our own unique context so we can begin applying the new findings to our work. While we wait for those results to trickle in, two things we already know give us much to ponder:

First, we should always go to the source rather than blindly accepting someone else’s report of what that source said. This takes longer and usually requires considerably more thought on our part, but it greatly reduces the number of myths and misconceptions that we’ll perpetrate and perpetuate. More interestingly, revisiting an article often leads to inspiration and discovery of new ways to build on those old thoughts.

Second, Miller’s study does have intriguing implications for technical communication, even if not the ones we’ve been led to expect. For example, our audiences have very real limits on how much information they can process simultaneously, and recognizing the existence of these limits means that we need to better understand how we can help readers to process information. All else being equal, readers will always find it easier to deal with a few items at a time than with many. As a starting point for applying Miller’s findings, we need to learn to write in such a way as to let readers digest one chunk of information before we force them to begin dealing with the next one, as the sketch below illustrates.
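Purely to make “chunking” concrete, and not to resurrect a magic number, here’s a toy sketch in Python; chunk_steps is an invented helper, and the five-step grouping is an arbitrary editorial choice for the example, not a rule from Miller (1956):

# Present a long procedure one digestible chunk at a time.
# chunk_steps and the five-step grouping are illustrative choices,
# not rules drawn from Miller (1956).
def chunk_steps(steps, chunk_size=5):
    """Group a flat list of steps into smaller, labeled subtasks."""
    return [steps[i:i + chunk_size] for i in range(0, len(steps), chunk_size)]

procedure = [f"Step {n}" for n in range(1, 13)]
for number, chunk in enumerate(chunk_steps(procedure), start=1):
    print(f"Subtask {number}:")
    for step in chunk:
        print(f"  {step}")

And if that means we’ll have to reconsider an interface design because we’re asking users to deal with too many inputs at once, then that leads neatly into two more myths: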

Myth #6: You can make a bad interface easy to use through superior documentation

By definition, really good documentation makes even the worst interface easier to use—but it will never make a truly bad product easy to use. I stated earlier that one thing that makes us so valuable to our employers is our ability to think like the product’s users, and if something is difficult to use, we notice it first because we have a devil of a time trying to document how to use it. Our value as communicators lies in our ability to figure out where the barriers to usability lie and create documentation that guides users as painlessly as possible around the problems.

Unfortunately, that's all that most of us have been able to do thus far, and it’s time we began making concerted efforts to go one step further. If we can understand the barriers well enough to solve the problems in our documentation, this means we also understand the barriers well enough to propose changes in the interface itself. And we should; increasingly, that's the role we must take on for ourselves. I'm not the first to recognize this, nor am I the first to propose that we do something about it (e.g., Carliner 1998). But corporate culture is often such that making our voices heard is difficult, and there are many barriers raised in our paths. Why don’t we circumvent these barriers? Because of yet another myth:

Myth #7: We can’t talk to the SMEs

Myth #6 arises from the misperception that we can't talk to our subject-matter experts (SMEs) or development teams and persuade them to make necessary changes. Occasional conversations on the techwr-l mailing list have convinced me that some corporate cultures still formally or informally try to prevent technical communicators from “bothering” the SMEs or developers. Apart from making for unpleasant workplaces, this approach can prevent the synergies possible when the two groups collaborate. Fortunately, the truly Dilbertian companies are rare. But even in companies that encourage contacts between us and our partners in product development, it's easy to establish exclusively formal professional relationships that don’t foster true collaborations. The remedy is simple: make personal contacts with SMEs, whether at lunch, at the company softball game, or after hours while awaiting the bus home. These personal contacts are crucial in technical communication because they establish mutual respect and often even affection, thereby earning us the time and open minds we need if we’re to get our opinions heard.

Once you’ve got someone listening, it's relatively easy to keep the conversation going and to start influencing how things get done. After all, a friend is far less likely than a complete stranger to refuse to provide technical information, or to dismiss your concerns out of hand. Even if friendships never develop, the relationships can still become more than merely professional, which means that they generate openings for an exchange of ideas and concessions when it comes to developing and documenting a product. For example, although I’m not formally part of the software development team at my workplace (where software development is a very new thing indeed), one of our three developers now comes to me seeking interface advice, and the other two now listen to and consider my advice once we’ve actually begun working together on the documentation.

Long-term, these sorts of dialogues can subvert even the most toxic corporate culture and produce a relationship in which developers and technical communicators start working together in the interests of our mutual audience. Start building the relationships now. Usability testing is a great way to begin, or would be were it not for yet another myth:

Myth #8: Usability testing is prohibitively expensive and difficult

Any time you try to study human psychology, you’re dealing with an inherently complex, subjective endeavor that provides considerable room for error. Usability testing is no exception, and if you’re going to perform a refined, statistically sound, replicable (repeatable) usability test, you need to understand a fair bit about both experimental design and human psychology. The good news? Despite these stiff requirements, usability testing need not be prohibitively expensive and logistically difficult, and that makes it easier to justify a series of small, inexpensive tests while you develop the necessary expertise. Jakob Nielsen’s experience, for example, has shown that you can get excellent results with surprisingly small test groups (Nielsen 2000). It’s unfortunate, then, that so many of us have been scared away from even considering usability testing.
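To see why small test groups work so well, consider the problem-discovery curve Nielsen describes in the article cited above. This minimal Python sketch assumes his reported average of roughly 31% for the proportion of usability problems a single test user uncovers:

# Proportion of usability problems found by n test users, following
# the curve in Nielsen (2000): found(n) = 1 - (1 - L)**n, where L is
# the average per-user discovery rate (about 0.31 in Nielsen's data).
def proportion_found(n_users, discovery_rate=0.31):
    return 1 - (1 - discovery_rate) ** n_users

for n in (1, 3, 5, 10, 15):
    print(f"{n:2d} users: {proportion_found(n):5.1%} of problems found")

By this curve, five users already uncover roughly 85% of the problems, which is why several small tests, each followed by a round of fixes, can beat one large and expensive test.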

In fact, just about any thoughtfully designed usability test is better than no usability testing whatsoever. As Nielsen trenchantly observed, “[t]he most striking truth… is that zero users give zero insights”. In the worst-case scenario (i.e., you have no resources and you’re the only person available to do the work), you can use yourself as a stand-in for your audience, because then you’ve got at least one data point. I hasten to add that you are not your audience. While you document a product, you’ll learn far more about it than most of its users ever will, and in any event, it’s hard to imagine how any audience could be less diverse than a single individual. But even when you don’t truly represent your audience, many things that you find problematic will pose exactly the same problem for the real audience. If the current interface takes twelve clicks to accomplish a task and you can describe a way to do the task in three clicks, with equal or greater clarity, then you’ve discovered a more usable alternative. If the software uses six different words to describe the same concept, and only one of the words exists in a standard office dictionary, then that one common word is often the best word for your audience.

A really useful usability test will require considerably more feedback than you can provide by yourself, and that’s where things start getting complicated. Fortunately, a few simple guidelines can let you gather important, helpful, reasonably reliable data from even a small subset of your audience:

Test your questions and the tests themselves with a colleague, then determine whether you can analyze the answers efficiently and extract useful information. The answers must be easy to understand and summarize, and must let you identify the problems and determine how to resolve them.

Examine your questions or test designs for signs of bias. Biased questions or designs collect biased answers and can mislead you in your subsequent efforts to improve usability. For example, asking “just how bad is this interface?” gets a very different answer from “how highly do you rate this interface?” because of how the questions focus the respondent’s attention. Similarly, testing only a group of experienced users will provide data that don’t adequately represent the needs of neophytes.

Collect data separately for each distinct group in the audience (e.g., neophytes vs. experienced users). Programmers who will install and maintain software have distinctly different needs from the people who’ll use the installed software, and you cannot combine the data for these two groups and still get results that express their different needs.

Collect data from several people within each distinct group to reduce the chance of focusing on the one person who is completely different from everyone else in that group. The mathematics of statistics suggest that larger samples are more representative of the overall population, but as Nielsen (2000) suggests, “larger” need not inevitably mean “prohibitively large”.

Avoid conditions in the test or the test environment that could influence the results (e.g., the test documentation is written in English but is given to someone with a different first language, the phone in the participant’s office is constantly ringing, their computer is too slow to run the software efficiently). Each factor that you control introduces some measure of bias, since you’d get different results for situations in which that factor differs. When you try to control a test factor, don’t blind yourself to its implications; for example, “localized” documentation may be required for different linguistic audiences, voice-assisted navigation or spoken online help may prove useless in a noisy environment, and the computer configuration the developers recommended may be unrealistic.

Confirm that you understand the answers or results by asking for clarification from the test participant (e.g., “why did you say this product is so bad?”).

Never focus so narrowly on your own objectives or preconceptions that you blind yourself to unforeseen revelations from the test participants.

This brief summary is intended solely to get you thinking about testing. For additional information, I recommend reading more about designing test questions (e.g., Rojek and Kanerva 1994), organizing and managing the test (e.g., Kantner 1994), and evaluating the results (e.g., Hubbard 1989). Craig (1991) provides an annotated bibliography that can help you expand your research.

Myth #9: Single-sourcing means dumping printed documents online

The need to produce online information to accompany printed documentation has been with us as long as software itself. But there’s a growing recognition that the interface itself is often the best form of documentation, and that effective online information must integrate effectively with the way people use the product itself (e.g., Lockett Zubak 2000). Complicating matters—as any working professional knows—is the fact that resources are always limited in the business world, and that there may be insufficient resources to develop information tailored specifically for each medium, printed or online.

The conflict between the ideal of providing information customized for use in a single medium or context and the very real inability to create such information as often as we’d like has generated a tremendous need for single-sourcing: creating one set of information that can be reused for both online and printed documentation. There's a lot of logic in adopting this compromise, since even when resources aren't limited, it makes little sense to duplicate effort by writing the same text twice. Apart from the inefficiency of creating information twice, the risk of introducing discrepancies between the online and print versions of documentation by creating them independently is hard to accept.

Unfortunately, many companies have been misled into simply producing a single set of documentation and dumping it online. (This approach is encouraged by an attitude that documentation is a cost center, not a benefit to the company and the users of its products. I’ll discuss that myth next.) Much of the potential of technologies such as Adobe Acrobat has been wasted by communicators who can’t or won’t even take the time to reformat a vertical document designed for paper to fit on a horizontal computer screen. At its most innocent, this practice simply shifts the cost of printed documentation to the users of the documentation, who give up trying to read the information online and resentfully print their own copies. At its worst, this form of documentation can be a serious productivity drain. Endlessly scrolling from the top of a page to the bottom or squinting to view a full page displayed in a minuscule font is inefficient and outright annoying to many readers. Indeed, the very word “scrolling” speaks eloquently about how badly this design serves our audience, for if scrolls were such a good communication tool, why then did we abandon them in favor of bound books?

Even when designers have the time to reformat printed documentation for proper display onscreen, this approach generally fails to take advantage of the respective strengths of each medium. STC members have been writing compellingly about this for more than a decade (e.g., Brockmann 1990, Horton 1994), yet I continue to see online documentation that is unusable or difficult to use online simply because of how it’s presented.

There are valid approaches to producing single-source documentation, of which SGML is perhaps the best known. But this requires a fundamental shift in the way we look at producing information, both in the initial stages (creation of the information) and in the final stages (presentation or delivery of the information). The failure to understand the distinction between these two phases and the often-forgotten third phase, use of the information by our audience, is one of the main factors behind the current sad state of much online documentation. Yes, you can go a long way towards single-sourcing with appropriate planning and tools; no, you cannot generally pour information from one medium into another without some reworking of the information’s content or structure.
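To illustrate that separation between creating information and delivering it, here’s a minimal single-sourcing sketch. It uses Python rather than SGML purely for brevity, and the chunk structure and render function are hypothetical inventions, not any real tool’s API:

# One source of tagged content "chunks", two deliverables.
# The chunk format and render() are invented for illustration;
# real single-sourcing systems (e.g., SGML-based ones) are far richer.
chunks = [
    {"text": "Widget sorts your data in one step.", "media": {"print", "online"}},
    {"text": "Choose File > Sort, then pick a column.", "media": {"print", "online"}},
    {"text": "Click the Demo button for an animated example.", "media": {"online"}},
]

def render(chunks, medium):
    """Select the chunks tagged for one medium and present them appropriately."""
    selected = [chunk["text"] for chunk in chunks if medium in chunk["media"]]
    if medium == "print":
        return "\n\n".join(selected)  # paragraph-oriented layout for paper
    return "\n".join(f"<p>{text}</p>" for text in selected)  # screen-oriented markup

print(render(chunks, "print"))
print(render(chunks, "online"))

The point is structural: the content is written once, but each deliverable gets its own presentation logic rather than a straight dump from one medium into the other.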

Myth #10: Documentation is a cost center

It's easy to see why technical communicators are often first on the chopping block when it comes time to trim staff: we cost a lot, we make all kinds of unreasonable demands (such as time and money to perform audience analysis and usability testing), we take developers away from their crucial work to answer naive questions, we hide away in our cubicles and write instead of persuading others to shout our praise in the ears of upper management, and we produce a product that often generates no obvious income for our employer. That's the myth, at least. The facts can be quite different.

Work by STC members has increasingly highlighted the many ways in which we add value for our employers, including generating both income and savings (Redish and Ramey 1995, Mead 1998). The kinds of value we add include, but are not limited to, reducing the cost of technical support and training, reducing product returns from frustrated customers, catching usability problems before they become expensive redesigns, and making products easier to sell because they’re easier to use.

It pays to think creatively when you’re trying to demonstrate your value. For example, documentation can conceivably generate a sizeable return on investment even in situations where that doesn’t seem likely. If documentation is sold as an optional extra for the product, the profits generated by selling it are obvious, but even when the documentation is part of the sale price, there can be tangible profits. Consider, for example, a company with tight accounting controls that sets a target for an overall 20% return on their gross investment. This means that for each $1.00 the company spends to produce a product (e.g., software), including all expenses incurred by the company, they would earn $0.20 in net revenue. In such situations, it’s easy to overlook the fact that documentation generates part of that $0.20 profit. Each $1.00 the company spends on our salaries and benefits, and on printing manuals, also generates $0.20 in profit for the company! This logic depends on many assumptions, including the assumption that profits are calculated based on total expenses rather than purely on development costs, as might be the case in a startup company. It also assumes that the company is sufficiently well run to generate a profit, let alone one as healthy as 20%. But the logic still applies to some companies, and it should be possible to obtain the data you need to determine whether it applies to your own company and generate some real numbers that demonstrate your net worth to your employer. If not, have a look at Redish and Ramey (1995) and Mead (1998) to see what other methods you could use to demonstrate your value.
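As a worked version of that arithmetic, here’s a short sketch; all of the figures are hypothetical, and the calculation inherits the accounting assumptions described above:

# Hypothetical figures for the 20%-return example in the text.
# Assumes, as the text does, that the profit target applies to total
# expenses, documentation included, and that the company is profitable.
target_return = 0.20      # company-wide return on each dollar spent
doc_spending = 150_000    # annual documentation salaries, benefits, printing

doc_profit = doc_spending * target_return
print(f"Documentation spending: ${doc_spending:,}")
print(f"Profit attributable to documentation: ${doc_profit:,.0f}")

With these numbers, a $150,000 documentation budget accounts for $30,000 of the company’s profit, a figure you can defend whenever documentation is dismissed as pure cost.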

I find this myth particularly interesting because it’s both a myth that others hold about us and one that we believe about ourselves. We each intuitively understand the value we add, even if we’re not able to easily articulate that value, but because we haven’t taken the time to demonstrate that value to others, we begin to doubt our own value. It’s past time we began changing management perceptions so that managers understand our true value, whether we measure that value in tangible or intangible ways. Do some of the work necessary to define this value, and management will begin taking you more seriously. That’s a good first step towards feeling more secure about yourself, your value to your employer, and your job security.

Myths aren’t always invalid

Myths endure because no matter how much they simplify or exaggerate reality, they are nonetheless based on something truthful, something important to us, or something that sheds a bright light on an aspect of our lives. One of the things that fascinates me most about mythology is just how universal the themes can be, and how creative each person or culture can be in reinventing a myth by recasting it in their own unique context. Folklorist Josepha Sherman has observed that “Myths are attempts to explain the cosmic truths... All peoples have the same questions, and so all peoples have the same basic type of myths.” Each of the ten myths I’ve presented in this paper passes this test for that idiosyncratic group of people known as technical communicators. My hope is that each of us will find ways to answer those universal questions for ourselves by seeking out the underlying truths and building on them to create something more useful and fascinating still. By making the myths more relevant to us, we reinvigorate them and ourselves. One obvious way to do this is to re-examine our current rules of thumb and see how they can be refined. After all, the thing to remember about “rules of thumb” is that thumbs bend when the situation calls for it.

References

Brockmann, R.J. 1990. Writing better computer documentation: from paper to hypertext. John Wiley and Sons, New York, NY.

Carliner, S. 1998. Future travels of the infowrangler. Intercom, Sept./Oct.:20–24.

Carroll, J.M.; van der Meij, H. 1996. Ten misconceptions about minimalism. IEEE Transactions on Professional Communication 39(2):72–86.

Craig, J.S. 1991. Approaches to usability testing and design strategies: an annotated bibliography. Technical Communication 38(2):190–194.

Horton, W. 1994. Designing and writing online documentation: hypermedia for self-supporting products. John Wiley and Sons, New York, NY.

Hubbard, S.E. 1989. A practical approach to evaluating test results. IEEE Transactions on Professional Communication 32(4):283–288.

Kantner, L. 1994. Techniques for managing a usability test. IEEE Transactions on Professional Communication 37(3):143–148.

Lockett Zubak, C. 2000. What is embedded help? Intercom, March:18–21.

Mead, J. 1998. Measuring the value added by technical communication: a review of research and practice. Technical Communication 45(3):353–379.

Miller, G.A. 1956. The magical number seven, plus or minus two: some limits on our capacity for processing information. Psychological Review 63(2):81–97.

Nielsen, J. 2000. Why you only need to test with 5 users. Alertbox (March 2000), http://www.useit.com/alertbox/20000319.html

Redish, J.G.; Ramey, J.A. 1995. Measuring the value added by professional technical communicators. Technical Communication 42(1):23–83. [Special section, with seven articles.]

Rojek, J.; Kanerva, A. 1994. A data-collection strategy for usability tests. IEEE Transactions on Professional Communication 37(3):149–156.

Schriver, K.A. 1997. Dynamics in document design. Wiley Computer Publishing, New York, NY.

