by Geoff Hart
Previously published as: Hart, G.J. 2004. Scientific documentation: learning from journal articles. Intercom November:12–13.
Though it may not be obvious at first glance, technical communicators and working scientists have much in common: we both have a keen interest in learning how things work, and in sharing that understanding with others. Yet technical communication is a relatively young profession, and still isn't perceived as a profession by many of our clients. In contrast, science has been a recognized and respected profession for more than 500 years. During this time, scientists have developed powerful, standardized tools for tackling the thorny problem of understanding how our world works.
Can the younger profession of technical communication learn anything from the older profession's long history? Definitely. In this article, I'll discuss the "scientific method" that I learned during my own brief career as a forest biologist, and present a few thoughts on what this mode of inquiry can offer technical communicators.
Science follows a general sequence that's been honed into an incisive tool for investigation and the creation of understanding. It's this creation of understanding that ties science to technical communication and suggests what there is to learn from how scientists work. The scientific method can be described in various ways. Here's one I like: define the problem; build on the existing body of knowledge; form a testable hypothesis; collect data that tests the hypothesis; analyze and synthesize the results; publish those results for review by your peers; then ask what remains to be learned, and begin again.
Here are a few suggestions, some deliberately provocative, on how this approach might work in the context of technical communication.
Scientific problems represent any phenomena that we don't yet understand well enough to explain to someone else—whether those phenomena are as concrete as a chemical manufacturing process that doesn't work as well as we'd hope or as abstract as the need to develop an entirely new way of thinking about the existing body of knowledge. The problem becomes one of (respectively) exploring the factors that define a process until we understand it, and replacing an old paradigm with a new one, as was the case when chemistry replaced alchemy by providing a better explanation of the physical world. The former is arguably easier, but in neither case does the old state of knowledge yield gracefully to the new.
Scientists intrigued by how the world works will carefully narrow their research problems to manageable dimensions; there is never time or money to fully investigate everything. As communicators, we must also clearly define the scope of the communications problem. Many communicators, faced with tight deadlines or overwhelmed by the scope of a large product, build a comprehensive feature list, then immediately begin documenting each feature. But in so doing, they lose sight of the larger context—that the users of the product are generally less interested in its features than in how those features let them accomplish various tasks. That's the real problem, and when time and other resources are tight, I propose that it's more important to do that job well than to exhaustively and uselessly document the minutiae of each feature. (Minimalism suggests much the same approach.) Those details can be the work of future scientists—or future communicators, in our case.
Scientists also face the challenge of overcoming entrenched beliefs. The modern science of proteomics has begun to challenge the traditional paradigms of genomics, but faces stiff resistance: the old model does a remarkable job of explaining how genes work. Yet genomics has some notable flaws, and these flaws present opportunities for proteomics to overturn the older paradigm and improve our understanding. Similarly, technical communicators have successfully faced the challenge of transforming static printed manuals into dynamic online help—a paradigm that works sufficiently well that for many products, it has all but replaced printed documentation. Yet online help, impressive though it seems, has many clear and painful flaws, and is being challenged by new paradigms. "Embedded help" is one such challenger, and will become an even more potent paradigm when paired with emerging disciplines such as "interaction design". (See, for example, Alan Cooper's wonderful book The inmates are running the asylum). There are other challenges. For example, having mastered visual communication, can we learn how to communicate with our visually impaired audience?
The history of science is one of conservatively clinging to old understandings until the evidence for the new is incontrovertible. From science, we can learn to better define the problems we must solve, and to accept the need for a paradigm shift rather than clinging to the old ways when new evidence changes our understanding and suggests a better way.
Science builds on a body of established theory and practice to create something new, as Sir Isaac Newton noted: "If I have seen further than other men, it is because I have stood upon the shoulders of giants." Newton codified the laws that govern the motion of all physical objects and invented mathematical tools (calculus) for analyzing this motion that continue to inspire and guide researchers some three centuries later. Though Newton's laws have been superseded by Einstein's relativity and calculus has progressed in ways he never imagined (e.g., computational mathematics), modern science leans heavily on Newton's legacy.
Technical communication also has an impressive body of theory and practical knowledge we can apply to the communication problems we confront daily. The field of cognitive science describes how we think and process information, and the emerging field of information design applies these principles. The field of human–computer interaction provides many insights into interface design. Even traditional fields such as rhetoric and textual analysis have much to say about how to create more effective sentences, paragraphs, chapters, and books.
These and many other fields of inquiry are clearly relevant to our daily work. Remaining ignorant of this rich body of knowledge impoverishes our communication skills: we use methods proven to be ineffective, and reinvent wheels that have long since been perfected by others. I encourage you to at least skim the table of contents of journals such as Technical Communication. Journal articles have a reputation of being formidable and impenetrable, but that reputation is often inaccurate. Moreover, persistence can reveal many insights from even the difficult articles that will let us stand on the shoulders of our own field's giants.
The scientist's worldview is of a stunningly complex system, only partially understood and revealing tantalizing glimpses of new understandings. One key problem faced by any scientist is how to identify the most interesting gaps in understanding and determine how to fill them. As communicators, we face a similar problem (a large body of knowledge), but very different gaps—the distance between our own worldview (our understanding of a product) and that of the audience that must learn to use it. Too often, we focus on filling information gaps when it is the communication gaps that are more important.
Often, we create stereotypes of our audience through subjective assumptions about their identity, their goals, and the characteristics of both that constrain how we must communicate—then we blithely move on to filling in information gaps. In following this approach, we rob ourselves of information that could be obtained through audience analysis—something with an undeservedly intimidating reputation. Even where formal audience analysis lies beyond our means, there are riches to be had if only we'd look for them: the records of technical support calls, the experience of the trainers who work directly with users, and conversations with the users themselves.
Each of these resources can help us refine the worldview that lets us understand our audience. Without that refined image of the people we're working for, we risk producing a solution to the wrong problem.
One definition of scientific inquiry (attributed to Karl Popper) is "the creation of falsifiable hypotheses": that is, if you can't create an explanation that is testable by experiment, mathematics, or applied logic, you're not doing science. Technical communicators follow a similar approach: we guess at how something works, test that hypothesis, decide whether the results support our initial belief, then try again if we're wrong. That something may be a button or menu feature, or an entire software module. Sometimes we give up in despair and ask the product's designer—an option not available to scientists.
Why not apply the same approach to understanding our audience? Perhaps because we're intimidated by audience analysis, we find it easier to make certain assumptions about our audience (as discussed in the previous section). Even if we must accept these assumptions, we rarely test the more important hypothesis: "our documentation meets the audience's needs!" Full-blown usability testing of documentation often lies beyond our means because of time or budget considerations, but we're never wholly unable to test the success of our efforts. Sometimes we are, ourselves, the user; the communicators who document FrameMaker or RoboHelp clearly use the product the way others will use it. Sometimes, as is the case in companies that develop programming tools, the product's developers are the users. Other times, we have only Marketing's description of the perceived user.
In any of these cases, we can develop a testable hypothesis: "Any user reading our description of this specific procedure will be able to accomplish the task the procedure describes." But is that hypothesis defensible?
Scientists and technical communicators both collect, organize, analyze, and synthesize (assemble) large bodies of information. In so doing, scientists distinguish between laboratory studies, which are tightly controlled and thus repeatable, and "field" studies, which sacrifice a measure of control and repeatability for the sake of realism. These two approaches are similar (respectively) to the usability labs and "contextual inquiry" adopted by usability professionals. Scientists also distinguish between the abstract work of theoreticians (who may never actually touch the subject of their theories), the more concrete but still remote work of observers (who emphasize observation over interactions with the subject of the study), and the manipulative work of empiricists (who manipulate what they study, then observe the results of that interaction). These approaches are similar to our own work: sometimes we only imagine how our audience will use a product, sometimes we watch them use it, and sometimes we tell them what to do and watch them try.
Each approach is a relevant way to test our hypotheses. But as in science, we must understand the merits and drawbacks of each approach well enough to know when to use it in our work. Moreover, we must actually use these approaches to ensure that our hypotheses are supported by fact.
Scientists pick apart their subject of study until they understand its smallest components, then reassemble those components into a broader and more comprehensive understanding. This requires a rigorous application of logic and problem-solving skills. Clearly, technical communicators do similar work: we study the component parts of some product until we understand it well enough to explain it to someone else. We then assemble our knowledge into documentation that provides a comprehensive understanding of the product.
The key difference between us and the scientists is that scientists ask one more question once their research is complete: "Are we done yet, or can we continue to improve our understanding of the problem?" When the answer is that more remains to be known, the scientist digs deeper. But when we communicators complete a documentation project, we're tempted to relax and rest on our laurels. Instead, perhaps we should be asking ourselves whether there's more we could do. The downtime between product releases gives us the chance to ask that question and investigate ways to improve our understanding—and that of our audience.
Scientists publish their experimental results to stimulate discussion and let others confirm (replicate), qualify (constrain the context for), or reject those results. When scientists submit their manuscripts to anonymous peer review by reviewers who are experts in their field, these reviewers carefully examine whether the author adequately characterized the problem, explained how the research fits within the larger body of knowledge, satisfied the standards for proper scientific inquiry, obtained useful results, and drew reasonable conclusions. We communicators nominally follow a similar process, since our writing is reviewed by subject-matter experts (SMEs) to ensure that it is correct. Unfortunately, we miss the real point of a review: a science journal's peer reviewers are also users of the resulting published papers, but the SMEs are most often not the people who will use our documentation. I've already mentioned that the users of our documentation are its best reviewers. But our professional colleagues are also our peers, and well-suited to critique our manuscripts. A small proportion of STC's membership enters the annual publications competition to obtain this peer review. Why don't we do this on an ongoing basis, throughout the year?
Many scientists repeat published research in an attempt to replicate the original results (and confirm them) or refute them (identify exceptions). Others examine how the results can be applied in other contexts. How many of us, once our documentation is complete, ask others to test our documentation to ensure it's correct? How often do we share our successful techniques with colleagues, or ask to borrow their successful techniques to improve our own efforts? How often, confronted with criticism that our documentation fails to meet the needs of its users, do we revise our paradigm as dramatically as Einstein's relativity revised Newton's mechanics? Our old "Newtonian" manuals may remain stolidly and conservatively effective, but perhaps sharing them with others may reveal things that we or they could be doing much better.
Scientists are never satisfied with an explanation, and always dig deeper to learn more; they are well aware of the quote attributed to Morris Meister: "Think of science as a powerful searchlight continuously widening its beam and bringing more of the universe into the light. But as the beam of light expands, so does the circumference of darkness." Scientists, whether they produce primarily abstract knowledge or strive to apply that knowledge, always move forward. That kind of passion to improve is rarer in our profession; as a whole, we're more likely to be satisfied with what we've always done well. Perhaps more of us should embrace the scientific approach and aim to progressively improve our work with each subsequent iteration.
Science isn't the only valid approach to understanding our world. Ethicists, philosophers, "journeyman tinkerers", and even the faceless members of our audience all have much to contribute. What sets science apart is that scientists challenge what has been discovered, and while remaining informed by theory, always work to test that theory in the real world. As a scientific communicator, I've learned much from technical communicators that inspired me to find new and more effective ways to communicate. I hope that with your improved understanding of the scientific method, you'll be similarly inspired to improve your own communication.
©2004–2024 Geoffrey Hart. All rights reserved.