Cheating the quality triangle
by Geoff Hart
Previously published, in a different form, as: Hart, G.J. 2001. Cheating the quality triangle. Intercom April:6–10.
What’s our favorite tongue-in-cheek mantra in technical communication? We have many, but “fast, cheap, good—pick two” always makes the top-three list, right after “read the fine manual” and “you want that when?” The phrase often takes tangible form in the so-called quality triangle, an illustration of a triangle with each of these three factors occupying one side. This mantra speaks to us so clearly because resources always seem limited, and as a result, devoting resources to any two factors prevents us from applying those resources to the third factor. This “zero sum” assumption, in which every winner implies a loser, is so well-accepted that you could almost call it technical communication’s law of conservation of energy.
Fortunately, the quality triangle isn’t really a law of nature, and thus, it’s not immutable. It is possible to simultaneously improve speed, cost, and quality—if you can forget about geometry and think outside the triangle. In this article, I’ve presented a baker’s dozen tips that can get you started. Best of all, none requires anything remotely resembling advanced math.
Doing something faster always carries with it the implicit risk of compromising the resulting quality (“quick and dirty”) or imposing additional costs (“hire more writers”). That can certainly be true if fast means careless or “throwing money at a problem”, but neither has to be the case. How can you increase your speed without increasing cost or decreasing quality?
Triangles have no parallel sides, so when we discuss the quality triangle, it’s hardly surprising that we completely forget about working in parallel. Accomplishing two or more tasks simultaneously is obviously faster than waiting for each task to finish before beginning the next one.
The document approval process is one place where an eminently logical approach can become inefficient. For example, final reviews and approvals may occur at several levels: at the level of the documentation manager, the Research and Development Director, the Marketing Manager, the Engineering Supervisor, the Legal Department, and (for some multinational firms) a Head Office. Traditional approaches to reviews made these approvals sequential, with the next person up the chain of authority reviewing a document only after the next lowest person had vetted the document and approved it for that senior review. Given that each phase in the approval process leads to corrections before the document moves on to the next phase, and dead time when the next manager is out of the office or while the document is in transit between offices, several days can easily be lost between each approval. In contrast, conducting all these reviews simultaneously is often easy to sell to Management if the technical and editorial reviews that precede the approval process produce high-quality documents that require little subsequent revision.
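The arithmetic behind parallel approvals is easy to sketch. The durations below are purely hypothetical, but they show how total elapsed time shifts from the sum of all the reviews to the length of the slowest one:

```python
# Hypothetical review durations, in days, for each approval level,
# including typical dead time while the document sits in an inbox.
reviews = {
    "documentation manager": 3,
    "R&D director": 4,
    "marketing manager": 2,
    "engineering supervisor": 3,
    "legal department": 5,
}

# Sequential approvals: each reviewer waits for the previous one,
# so every delay accumulates.
sequential_days = sum(reviews.values())

# Parallel approvals: all reviewers start at once, so elapsed time
# is governed by the slowest reviewer.
parallel_days = max(reviews.values())

print(f"Sequential: {sequential_days} days")  # Sequential: 17 days
print(f"Parallel: {parallel_days} days")      # Parallel: 5 days
```

Even with these modest invented numbers, the parallel route finishes more than three times faster; real-world dead time between offices typically widens the gap further.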
Periodically getting together with your colleagues to review the current processes and rethink what you’ve been doing can provide considerable payback. In the case of Management approval of documents, the additional value added by each consecutive cycle of approval plus revision often doesn’t justify the additional delays this approach adds. Rethinking your processes also lets you determine which parts have grown atherosclerotic and no longer serve their original purpose. Many of your findings will lead you to eliminate or streamline existing processes, saving time without costing money or resources.
Understanding what you must deliver at the end of a project (the deliverables) also helps you move faster: if you don’t know what you must produce to meet the needs of your customers, the odds are good that you’ll succumb to the temptation to document everything, no matter how irrelevant. Failing to identify all the deliverables when you start a project means that you’ll generally miss at least one important deliverable while you’re busy working on things that provide little benefit for the customer. This problem is amenable to Pareto optimization: identify the aspects that provide the most payback to customers, and focus the greatest attention on these aspects.
Ironically—and somewhat unfortunately—careful identification of the deliverables can itself pose problems, since focusing narrowly on the goal sometimes leads us to overlook obvious shortcuts. For example, many of us consider the entire printed manual to be the deliverable, and thus, the entire manual becomes the product we deliver for peer, technical, and management reviews. Seen from the customer’s viewpoint, this approach makes little sense, since our customers see each chapter and each topic within a chapter as different deliverables. That being the case, it makes more sense to send each chapter or topic for review as soon as it’s complete rather than waiting for completion of the entire manual. Reviewers are only human, and behave predictably when a monstrous task such as reviewing a 600-page manual lands on their desks: they immediately set it aside and start looking frantically for something easier to do. Breaking the task into smaller chunks provides less incentive to avoid the task, and even though the total amount of work remains constant—the whole manual must still eventually be reviewed—spreading it out over a longer period helps motivate reviewers to do the work.
The two preceding points demonstrate how understanding where delays arise can help you minimize or even eliminate the delays. One particular problem involves failing to reuse already-approved information or techniques, thereby forcing us to reinvent them each time we need them. On the large scale, reusing existing information can become “single sourcing”, in which the print manual becomes the online help or vice versa, but this approach also works well on a much smaller scale.
For example, most writers remember to copy license statements or disclaimers from existing documentation rather than rewriting them from scratch, but fewer use an already approved chapter in a manual as the template for writing subsequent chapters. Most of us encounter the occasional difficult writing problem and spend considerable time solving it ourselves, rather than asking our colleagues how they or others have solved the problem. Sometimes a little lateral thinking helps, and the solution is to not attack the problem at all; for example, could we simply negotiate a deal with the publishers of the For Dummies books to provide printed documentation, leaving us free to focus on the online help? Sometimes we find ourselves repeatedly coping with a problem and developing workarounds rather than seeking the problem’s root cause and preventing the problem from arising in the first place; each time the problem arises anew, we reinvent the solution.
Appropriate use of the technology you already have in place also helps the work go faster. Most of us develop only a superficial understanding of our crucial tools, such as word processors, and thus do many things manually that the software could do automatically, much faster and more accurately.
For example, Microsoft Word can automate the process of fixing certain mistakes through underused features such as “autocorrect”, “macros”, and customization of the interface. The autocorrect feature monitors your writing as you type, and fixes a variety of common problems (e.g., typing “the” as “hte”). That alone saves time, but teaching Word to autocorrect your own unique typing problems offers even better payback. For example, in a recent manuscript I kept mistyping the acronym OFSWA as OSFWA, something easily fixed by adding this typo to my autocorrect dictionary. There’s another payback from this approach: if Word corrects typos as you type, you won’t have to waste time correcting them during the final spell check. Similarly, add jargon to your personal spelling dictionary so that Word doesn’t question the spelling each time it encounters a word. If you work in a field such as medicine or law, you can often purchase special-purpose dictionaries that already contain most of the jargon from your field.
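At its core, autocorrect is simply a lookup table consulted as you type. The sketch below models that behavior in Python; the entries (including the OFSWA typo described above) are illustrative, not Word’s actual dictionary:

```python
# A minimal model of an autocorrect dictionary: each entry maps a
# habitual typo to its correction. These entries are illustrative.
autocorrect = {
    "hte": "the",
    "OSFWA": "OFSWA",  # the mistyped acronym described above
}

def autocorrect_word(word: str) -> str:
    """Return the corrected form of a word, or the word unchanged."""
    return autocorrect.get(word, word)

text = "hte OSFWA manual"
corrected = " ".join(autocorrect_word(w) for w in text.split())
print(corrected)  # the OFSWA manual
```

The payback grows with every personal typo you add: each entry is a mistake you never again need to catch during the final spell check.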
In some cases, you waste considerable time retyping standard information that recurs frequently. Storing this information in a file lets you copy the text and insert it into new files during editing. For multi-keystroke tasks that you need to perform repeatedly, you should program a macro that reduces the task to a single mouse click or keystroke. Word’s immense potential for customization also works in your favor. For example, you can reprogram the keyboard shortcut Control-F4 (which, by default, closes a document) to launch a macro that automatically updates all fields in the document (the F9 keystroke), makes a backup copy of the file in your personal backup directory, and only then closes the document. I’ve discussed this in more depth in a previous article (Hart 2000a), and in my quarterly column in Intercom on effective on-screen editing.
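In Word itself, that remapped close command would be a VBA macro; the Python sketch below models only the backup step of the sequence, with hypothetical file paths, to show the underlying logic:

```python
import shutil
from pathlib import Path

def close_with_backup(document: Path, backup_dir: Path) -> Path:
    """Copy a document into a personal backup directory before closing.
    (In the Word macro described above, this step would follow
    updating all fields via F9.)"""
    backup_dir.mkdir(parents=True, exist_ok=True)
    backup_copy = backup_dir / document.name
    shutil.copy2(document, backup_copy)  # copy2 preserves timestamps
    return backup_copy

# Hypothetical usage:
# close_with_backup(Path("manual.docx"), Path.home() / "backups")
```

The point of binding such a script to the close command is that the backup happens every time, without relying on you to remember it.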
The obvious problem with technology is that sometimes it becomes a roadblock. Once a technology is in place, it acquires a considerable amount of inertia, and we find ourselves using existing tools simply “because they’re there” rather than because they’re actually efficient.
For example, the final step in producing printed documentation (the printing process) used to consume considerable time because of its tedious, mechanical nature: we’d print camera-ready copy on a high-resolution laser printer, paste up the graphics, then ship the resulting pages to our printer for the creation of printing plates. Using word processors as our publishing tool lay at the root of this problem, since word processors were poorly suited to an efficient, modern production process. Desktop publishing software, which was designed specifically to facilitate all-electronic production of printed documents, proves to be a more efficient tool, and the only efficient tool if you’re producing color publications. So until recently, if you wanted to print a manual directly from the software in which you created it, you needed a dedicated desktop publishing program. More recently, Adobe’s Acrobat technology has let us use our word processors to produce electronic files that we can send directly to the printer. This approach still doesn’t work well for full-color documents, but only because word processors don’t understand the color models used in offset printing; however, high-speed color inkjet and laser printers can produce acceptable results in many cases, and it’s only a matter of time before even word processors let us produce color publications.
Computer technology offers compelling advantages, the more so now that the cost of the hardware keeps dropping even as its speed increases, and the effectiveness of the software keeps increasing while the price remains mostly constant. Both ongoing improvements mean that investing in appropriate new technologies is less expensive than ever before, and the payback potentially greater. But technology itself is only part of the solution.
There’s an old saying that “you have to spend money to make money”, but it’s equally true that you sometimes have to spend money to save money. The trick is to buy the tools you need to do your job right, even if you can somehow continue to coerce less-suitable tools into doing the job. In many cases, sticking with the current tools (whether an old computer, a small monitor, or unsuitable software) can actually wear us down to the point that we need contract writers to help with the workload. Even when the software itself is productive, it may entail additional costs in other phases of the production of documentation, as I discussed earlier in the context of producing printed documentation.
For example, Microsoft Word is an inexpensive solution, particularly since it often comes “free” with new computers, but despite Word’s power as a word processor, it’s not the best tool for every job. In addition to lacking the power, reliability, and stability of a dedicated desktop publishing program such as FrameMaker for large or complex documents, you can’t yet produce color documents in Word without expensive manual workarounds. Buying a tool such as FrameMaker certainly entails an additional startup cost, but that expense may be amply repaid in terms of reduced troubleshooting costs and downtime, not to mention reduced costs at the printer.
If you work for a company in which documentation costs are charged back to the development project, the benefits from using a more productive tool often greatly outweigh the costs of purchasing that new tool and training everyone to use it. More work gets done per billable hour, thereby lowering the client’s per-hour costs. Furthermore, it’s often possible to improve quality at the same time. True desktop publishing software has superior typographic tools, and by working with imagesetters (high-resolution printers that output pages to film), it provides higher resolution for the text and graphics. In many cases, the costs are also lower than with traditional techniques.
“It goes without saying”, one of my favorite oxymorons, often introduces the phrase “planning is crucial”. Everyone’s heard the phrase “measure twice, cut once”, but how often do we take this phrase to heart and actually plan a project so carefully that we need only cut once? Rework and unnecessary work cost us time, which indirectly increases costs, but rework also increases direct costs.
For example, consider the cost of producing a substantial amount of printed information that could be better implemented as online help. Manuals are expensive to print, and the greater the proportion of the information you move online, the smaller the manual and the less it costs to print. At the same time, moving too much information online or designing ineffective online help may prevent some customers from finding necessary information. That leads to increased telephone support costs, lost sales, and customer dissatisfaction. By understanding customer needs and defining the correct mix and design of online and printed documentation, you can greatly reduce printing costs as well as support costs.
The previous point provides a hint of the tradeoff between quantity and cost: doing more work always costs more, all else being equal, and sometimes you’ll discover you’ve been doing things you no longer need to do. By reducing the quantity of work, you lower your costs, and if you’ve carefully focused your efforts, you won’t affect quality much, if at all.
When, for example, was the last time you saw a tutorial on how to use Microsoft Windows in a software manual? This kind of introductory material was common back when our audience was making the transition from text-based interfaces such as MS-DOS to graphical user interfaces, but increasingly, we can safely assume that our audience understands the basics of a graphical interface. What invalid assumptions do we continue to make about our documentation? The risk, of course, is that such assumptions arose for good reason, and discarding them risks overestimating our audience’s background knowledge. To be safe, you must perform at least a cursory audience analysis to confirm that you’ve correctly understood your audience’s needs.
Quantity also becomes important in the context of Pareto optimization, an approach that recognizes the fact that a relatively small portion of a product may often provide the vast majority of the product’s benefits. The consequence of this notion for documentation? We might need to provide truly comprehensive, detailed information for only the 20% of our documentation that readers use 80% of the time, for the 20% of the features that cause 80% of the problems, or for the 20% of the tasks that have the most serious consequences if the user fails at the task. For everything else, it might be acceptable to provide less comprehensive or detailed information. Of course, we’ll still have to spend the time to find out what that crucial 20% happens to be, and doing so is neither cheap nor fast.
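Support-call logs are one practical way to identify that crucial 20%. The sketch below, using invented call counts, greedily selects the smallest set of topics that accounts for 80% of all calls:

```python
# Hypothetical support-call counts per documentation topic.
calls = {
    "installation": 400,
    "printing": 250,
    "configuration": 150,
    "shortcuts": 80,
    "export": 60,
    "templates": 40,
    "licensing": 20,
}

def pareto_topics(counts: dict[str, int], threshold: float = 0.8) -> list[str]:
    """Return the smallest set of topics, most frequent first, that
    accounts for at least `threshold` of all support calls."""
    total = sum(counts.values())
    selected, covered = [], 0
    for topic, n in sorted(counts.items(), key=lambda kv: -kv[1]):
        selected.append(topic)
        covered += n
        if covered / total >= threshold:
            break
    return selected

print(pareto_topics(calls))
# ['installation', 'printing', 'configuration']
```

With these invented numbers, three of the seven topics cover 80% of the calls, so those three would receive the most comprehensive documentation.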
The final cost-saver I’ll propose revolves around the disappointing progress we’ve made towards implementing the proverbial paperless office. I’m a huge fan of paper-based documentation, in part because of the currently primitive state of online help systems, but that doesn’t mean I’m blind to the importance of striking an appropriate balance between online and on-paper information for each audience.
Most software that I currently use strikes an entirely inappropriate balance; in many cases, the information has been dumped online purely so that the developer doesn’t have to pay printing costs for manuals. Providing documentation in PDF format, unlinked to the software and formatted for letter-size paper rather than the screen, shifts the printing costs to our customers; sure, it saves us money, but you can bet the customers aren’t happy. Discussions with my colleagues suggest that this trend is continuing.
Minimizing our use of paper has obvious consequences for costs, but doing it without sacrificing customer satisfaction requires us to learn how to replace “paper documentation dumped online” with true electronic performance-support systems that integrate the documentation and interface so well that there’s less need for printed documentation. But in the near future, we won’t entirely eliminate paper documentation, and we’re going to have to do a better job of using online information judiciously.
Quality is too complex an issue to handle with any justice in such a short article, but searching the Web using the keywords “quality improvement” will turn up more resources than you have time to consult. Here, I’ve provided just four ways you can improve quality that don’t cost you more time or money, and may in fact save you both.
Yet another of our mantras is that “you can’t fix a bad interface through good documentation”. That’s true on several levels, but in the context of quality, producing a product that requires less documentation has important benefits in terms of documentation quality. How?
There’s a myth that we can’t work with developers (Hart 2000b); it’s not true. Start working with your developers today.
I’ve already touched on reviews, but it’s worth repeating that the review stage is the easiest place to correct the most serious quality defects. Unfortunately, the most common review procedure I hear being discussed relies on asking several subject-matter experts to review the entire documentation package.
Apart from the time problems I mentioned earlier in this article, significant quality problems arise from this approach. Given that reviewers only have a finite amount of energy and time to spend on a review, asking them to dilute that energy over the entire documentation package in a single step leaves fewer resources to focus on each portion of the package. Breaking the task into smaller chunks focuses that energy much more intensely on each chunk, thereby improving the quality of the review. Two approaches, described in the following paragraphs, can make better use of this energy.
Another important solution relies on careful editing. Writers commonly complain that technical reviewers provide more comments on the writing style than on the important technical points they’ve been asked to review. The solution? Provide reviewers with a thoroughly edited document that has no grammatical or spelling errors to correct. In the absence of stylistic issues to correct, reviewers must, by default, focus on the substantive issues that we’ve asked them to review.
I’ve mentioned planning as a means of reducing costs, but planning also has salutary effects on quality. Every time you take off on a tangent and produce documentation that wasn’t part of your original plan, you’re wasting time that could be better spent improving the quality of the material that should have been listed in the original plan. You also run the risk of forgetting a deliverable or running out of time to document something more important. Well-designed documentation plans help ensure that you diverge from the plan only rarely, and then primarily (or solely) in response to important unforeseen needs. After all, even the best planner occasionally misses something important. But the rest of the time, it pays to stick to the plan.
Iterative reviews provide another means of improving quality. In this approach, early reviews of relatively small portions of the documentation let you identify certain recurring problems such as the use of passive voice, incorrect terminology choices, or ineffective presentation of information. Correcting these problems early in a long project means that you won’t have to correct them in all subsequent work or (worse yet) in one marathon session at the end of the project.
Another benefit of this approach is that maintaining consistency throughout a long document can be difficult even for trained editors, and using iterative reviews helps you become increasingly consistent in your approach. This reduces the amount of work required to impose consistency later. It can also save enormous amounts of time along the way, since figuring out how to proceed early in the project means that you can immediately implement that solution in each subsequent phase. The time you save can be spent on improving quality.
None of the approaches I’ve suggested changes the simple fact that we must produce documentation fast, with good quality, and at a reasonable cost. In that sense, the quality triangle really does reflect a fundamental aspect of our work. Cheating the quality triangle doesn’t violate any immutable laws of nature; rather, it involves thinking about what we’re doing and figuring out how to make those laws work for us.
The result? Even when inadequate resources force you to sacrifice some degree of speed, cost, or quality, you can still minimize the magnitude of those sacrifices. Better still, you may be able to start with a faster process, a lower cost, and a higher level of quality than would have been the case if you simply accepted the inevitability of tradeoffs.
I’ve suggested more than a dozen ideas to get you started on that path, but this list isn’t exhaustive. Nor do I claim that each suggestion will work under all circumstances. Instead, I hope to get you thinking outside the triangle and finding even better solutions that fit your own specific context.
Hart, G.J.S. 2000a. The style guide is dead: long live the dynamic style guide. Intercom March:12–17.
Hart, G.J.S. 2000b. Guest editorial: Ten technical communication myths. Technical Communication 47(3):291–298.
©2004–2018 Geoffrey Hart. All rights reserved