
Implementing onscreen editing: a four-step process

by Geoffrey Hart

Previously published as: Hart, G.J.S. 2008. Implementing onscreen editing: a four-step process. In: Proceedings, STC 55th Annual Conference, Philadelphia, PA. Soc. Tech. Comm., Arlington, VA. (This article is a shorter version of Chapter 18 of Effective Onscreen Editing. Chapters 15 through 17 provide useful related information.)

Abstract: Onscreen editing offers many potential efficiencies, but before you can obtain those benefits, you must actually implement the process. Unfortunately, before you can implement any organizational change, you must find ways to overcome both organizational and human barriers. This paper presents a proven four-step process for overcoming those barriers: getting permission to proceed, demonstrating the benefits through a test case, solving expected problems, and keeping an eye open for unanticipated problems. Throughout, the paper emphasizes that the implementation process is not just about technology; it cannot hope to succeed without also understanding and managing the human aspects of change.

During the 20+ years of my editing career, I've taught myself and many others how to use a word processor to edit more efficiently and accurately than is possible using paper alone. I've done this in a large federal research institute, in a small non-profit research institute, and (for the past 5 years as a freelance editor) with authors scattered around the world, from a wide range of cultures. In doing so, I've discovered the main barriers to successfully implementing onscreen editing, and the main success factors. In this paper and the accompanying presentation, I'll share what I've learned in the hope that it will make your own adoption of onscreen editing simpler and more effective.

Here, I've focused on managers who will implement onscreen editing for an organization or workgroup, but the approach is equally valid for peer review within a small group and for the kinds of relationships freelance editors strike up with their clients. The steps are identical, but must be modified to account for the different context. The content of this paper and my oral presentation are based on Chapter 18 of my book, Effective Onscreen Editing (Hart 2008), but I've boiled that chapter down to the essentials to fit within the constraints of a 1-hour talk. For more details on implementation, and a deeper exploration of the technology and overall approach to onscreen editing, please consult the book.

In my experience, four primary barriers interfere with the technological and organizational changes required to implement onscreen editing. Overcoming each barrier becomes a step in the implementation process:

  1. Getting permission to try.
  2. Demonstrating the benefits through a test case.
  3. Planning to solve certain predictable problems.
  4. Planning for the unexpected.

This overall approach is robust and works well in a variety of environments; indeed, colleagues have successfully modified it to change other (non-editorial) workplace processes. One key is to recognize that each workplace is unique, and that you’ll have to modify the approach to account for the idiosyncrasies of your situation. Another is to recognize that changing existing practices is as much about motivating people as it is about battling recalcitrant technology; thus, you'll need refined people skills and a profound understanding of the people who will be affected by the change if you want to succeed.

Step 1. Get permission to try

The biggest obstacle to change is the inherent conservatism of people, which is magnified in organizations. To successfully implement onscreen editing, you'll need a management champion who can give you permission to proceed, provide appropriate motivation to workers affected by the change, and remove obstacles. You may be that manager, or you may need to persuade another manager (e.g., the leader of a product development group), but proceeding without this approval will be difficult in either case, and possibly even counterproductive.

Start by presenting an informal proposal or even a formal business case that explains the benefits and drawbacks, lists what you already know and what you plan to discover, and describes how you plan to overcome any problems. The goals are to explain what you hope to change and why, and to eliminate the fear of unforeseen problems.

Demonstrate the benefits

Start your proposal with a discussion of the benefits of onscreen editing, ideally in the context of a problem the organization faces that you plan to solve. Organizations that implement onscreen editing typically achieve the following benefits:

To demonstrate these benefits, you'll need to develop a test case, which I'll describe as Step 2 later in this paper. At the proposal stage, you'll need to put together a plan of attack that addresses each of these benefits. You'll eventually have to persuade your management champion that each benefit has been achieved, and that means developing reliable and persuasive metrics. Since each person is persuaded by different things, you'll need to negotiate those metrics; Hart (2004a) provides some simple guidelines. If you already have some idea of the potential time savings or quality improvements, include it in your proposal, but don't over-promise. The conventional wisdom that you should “under-promise and over-deliver” applies here too. Your goal should be to set a standard you can sustain in the long term.

Eliminate or minimize bad consequences

In Steps 3 and 4, I'll provide details of some of the consequences to expect and how to cope with the unexpected. During the proposal stage, the goal is to identify everyone who will be affected by the proposed changes, and their needs. Involving all stakeholders in this preliminary consultation ensures you won't miss anything major, and lets you propose how to meet each person's needs. Typical stakeholders include:

Your workplace may have additional stakeholders you must consult. Review every step in the process, as if you were documenting it for an ISO 9000 exercise, to ensure that you identify everyone. Hart (2006) provides some guidance about review processes and the people who are involved in them.

Step 2. Develop a test case

In the proposal stage, you worked to persuade your management champion to let you proceed. You negotiated what you hoped to accomplish, proposed metrics that would demonstrate your progress, and described how to identify and solve both anticipated and unexpected problems. In Step 2, it's time to put your money where your mouth is by performing tests that will reveal whether you can accomplish the proposed goals, collecting data (metrics) that provide proof, and detecting and solving problems. You'll need to choose an author–editor pair who are willing to work together in this test, and willing to record the results for management and for those who will follow in the footsteps of your testers. You'll also need to pick an appropriate project. Last but not least, you'll need to support your testers throughout the project to ensure that they succeed.

Obtain good numbers

In the proposal stage, you learned what kinds of numbers would persuade your managers. Most managers request two typical metrics:

To demonstrate improvements, choose a benchmark against which to compare the new process. If you’re replacing on-paper editing with onscreen editing, on-paper edits are the obvious choice and should be used to calculate baseline error rates. If your organization doesn't already have extensive statistics on your on-paper editing process, you'll need to collect them. You can do this simultaneously with your test case for onscreen editing, since the regular work must proceed while you test the new process, though it takes a bit more organization to track two different classes of statistics simultaneously. Calculate mean productivities for both approaches, but keep separate records for different document types, projects, or authors. Plan to estimate the variation in your results, such as productivity ranges or even statistics such as standard deviations or 95% confidence intervals.
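As an illustration, here is a minimal sketch in Python of that calculation; the pages-per-hour figures and the productivity dictionary are hypothetical placeholders, and for the small samples typical of a test case a t-distribution would give a slightly wider (more conservative) interval than the normal approximation used here.

  from math import sqrt
  from statistics import mean, stdev

  # Hypothetical pages-per-hour figures recorded for several documents,
  # kept separate by editing method.
  productivity = {
      "on-paper": [5.1, 4.8, 6.0, 5.5, 4.9],
      "onscreen": [6.2, 7.0, 6.8, 7.4, 6.5],
  }

  for method, samples in productivity.items():
      avg = mean(samples)
      sd = stdev(samples)                          # sample standard deviation
      half_width = 1.96 * sd / sqrt(len(samples))  # approximate 95% confidence interval
      print(f"{method}: mean {avg:.1f} pages/h, SD {sd:.2f}, "
            f"95% CI {avg - half_width:.1f} to {avg + half_width:.1f}")

Keeping the raw per-document figures, rather than only the means, also lets you report ranges and the learning-curve trends discussed next.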

Nobody works optimally while learning new tools, so delay your data collection until your testers have achieved at least basic proficiency. This way, you'll be measuring their skill, not their learning curve. (However, track the learning times too so you can budget time or money to train the rest of your organization.) Gathering data for several documents reveals the range of variation and may even demonstrate speed improvements from document to document.

Use a similar approach to measure accuracy. For example, if on-paper editing is your standard for comparison, print copies of manuscripts edited on the screen to see what errors were missed. Each stakeholder should propose classes of errors to track; for example, if your desktop publisher spends considerable time correcting formatting errors, track that category of error. Other types include typos, inconsistencies, grammatical errors, and unclear wording. Using this list, count all errors found during initial editing, author reviews of edits, peer or technical reviews, and proofreading.
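A minimal sketch of such a tally, again in Python, appears below; the error log, categories, and review stages shown are hypothetical placeholders, and a shared spreadsheet would serve just as well.

  from collections import Counter

  # Hypothetical error log: one (review stage, error category) record per error found.
  error_log = [
      ("initial edit", "typo"),
      ("initial edit", "formatting"),
      ("author review", "unclear wording"),
      ("proofreading", "typo"),
      ("proofreading", "inconsistency"),
  ]

  by_category = Counter(category for _, category in error_log)
  by_stage = Counter(stage for stage, _ in error_log)

  print("Errors by category:", dict(by_category))
  print("Errors by stage:", dict(by_stage))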

Objective, numerical data are persuasive, but don’t ignore qualitative, subjective data. "Feelings" can't be ignored, since they often reveal subtle or dramatic problems that aren't obvious from purely numerical data but that nonetheless must be solved; nobody will use a super-efficient process that is uncomfortable. Gathering this information also shows each participant that their opinion is truly important, and that you're willing to work collaboratively to develop a comfortable process.

Pick a suitable author–editor pair

An author–editor pair is the crucial combination in the test, but don't neglect other stakeholders who will be affected by their work. An ideal pair has a history of working well together, is willing to try something new, has above-average proficiency with the word processor, and is sufficiently patient to work through delays. Choosing people who work well together ensures that personal incompatibilities won't bias the results. These people will help you to develop an efficient process and prove to others that it works. People prefer to adopt solutions they know have been developed and debugged by their peers rather than imposed from above. That's particularly true if your testers are enthusiastic about the results. Better still, because they will have solved most of the problems others would have faced, subsequent adopters will encounter fewer problems.

Pick appropriate projects

Start with a small document purely for the sake of learning how to use the tools, then pick a more challenging project large enough to let you collect statistics. A good initial test project encompasses the typical range of editing challenges you'll face, excludes the worst challenges (to avoid getting bogged down), has a reasonable deadline and room for slippage, is sufficiently important to justify editing, is not so critical that delays or failures will have serious consequences, and clearly demonstrates the potential payback.

Support the testers

Your test case must lead to a simple process that meets everyone’s needs. To ensure that your testers become evangelists for the new process, ease their fears and motivate them before and during the test. Emphasize that it's the new process that is being evaluated, not them. Take measures to ensure that any problems or failures will have minimal consequences both for the organization and for the testers. Offer incentives for all participants to try the new approach. Ensure that you protect or develop friendly, efficient working relationships between stakeholders. Establish a precedent for listening to and working with everyone throughout the process.

Provide adequate time

Ideally, the test case will take less time than the original on-paper edit because of the efficiency of onscreen editing. In practice, the overhead of monitoring and testing the procedure can make it take just as long, and sometimes longer. Expect to add time for:

Budget time accordingly. Once you've ironed out the worst wrinkles, the work will go faster, and you'll need to start finding time to train everyone else.

Step 3. Plan to detect and solve predictable problems

The problems you can expect to encounter fall into three main categories: organizational and bureaucratic barriers, human nature, and technological problems.

Organizational and bureaucratic barriers

These tend to manifest themselves as resistance to change, and sometimes require direct intervention from your management champion to overcome. But one powerful trick works in most cases: harnessing the energy of an existing process for your own purposes rather than trying to eliminate the process. If you can learn why a process exists, you can generally find a way to accomplish the same goal with only minor changes, such as replacing paper audit trails with electronic equivalents or even with signed and dated printouts. Organizational and bureaucratic problems fall into several categories:

Human nature

Anticipating the most common objections people will raise lets you address those objections right from the start:

Although these are common problems, only your colleagues can tell you what they're actually worried about. Asking them establishes a precedent for dialogue and ongoing consultation, and reveals problems you might otherwise miss so that you can deal with them rather than leaving them to fester.

Technological problems

Technological problems you can expect to encounter at some point include:

Step 4. Plan for the unexpected

Murphy’s law applies to computers with a vengeance. The key to surviving is recognizing that you can't predict all problems. Instead, take measures to minimize the frequency and severity of surprises, and plan how to respond when those measures aren’t enough:

Following these steps won't eliminate problems, but will make them less frequent, less damaging, and more manageable.

Minimize incompatibilities

As much as possible, try to use the same software and operating system versions everywhere. This may require support from your computer staff to keep everyone in synch, which means you'll need to develop and maintain a mutually respectful relationship with these people. It will also require compromising between those who prefer to cling to old, familiar versions of software and those who insist on racing ahead to embrace the new.

Adopt an effective workflow

Develop a workflow that accounts for known problems and provides workarounds. For example, plan to create most content in software that offers good editing and review tools, and move it into software that lacks such tools only once the major revisions are complete.

Phase in the new process

The easiest way to fail is to impose a system without consultation, without training, without enthusiastic support, and without testing. Previously, I described each of these aspects, but you can expect surprises as you teach more people to use the system. Keep your ears, eyes, and mind open so you don't stop learning just because the initial tests went well. Once you've fully adopted a new system, problems become more disruptive than they were during initial testing, when testers were somewhat isolated from the pressures of regular work. Learn triage skills: solve the most time-consuming or damaging problems, or the ones that disrupt morale, first, and work on lesser problems only when you have time.

Watch for efforts to sabotage your work. Not everyone will eagerly embrace the new process, and you'll need to find ways to identify these people and minimize any disruptions they cause. Try to find ways to enlist them in the change, but if you can't, ensure that someone with authority can persuade them to behave appropriately.

Create audit trails

The goal of audit trails is to reveal anything you’re missing in the new approach. (If you’re not missing anything, the information you collect will demonstrate that the new approach works at least as well as the old one.) Always use this approach to identify and solve problems, not to assign blame. As soon as the process becomes adversarial, you raise the level of tension, increase resistance, and increase the frequency of stress-related errors and disputes.

Document any problems you encounter, how to avoid them in future, and how to solve the ones you can't avoid. Then make this information readily available, perhaps via a knowledgebase or wiki on your intranet. Working with people to implement such tools in a way that works for them is also a good way to learn about user-centered design.

Provide ongoing support

Editors and prolific authors will use the editing tools far more often than anyone else, and will become more efficient. Remind them not to lose patience with less-skilled colleagues, and to help those colleagues wherever possible. Providing ongoing coaching takes time, but compensates by building friendly, mutually supportive working relationships based on dialogue and cooperation, thereby creating a more pleasant working atmosphere and improving overall efficiency. Provide tools such as the aforementioned knowledgebase or even a simple primer that gets people up to speed quickly, such as the one on my Web site (www.geoff-hart.com/resources/Using-revision-tracking.pdf).

Keep talking

Synergy between authors and editors only happens if you encourage ongoing dialogue. Technology only supports communication; it is not, in itself, communication, and it does not encourage communication. Whether you’re a solitary freelance editor or the manager of a large and diverse group, make time to ensure that everyone is communicating effectively. Periodically confirm that everyone is satisfied their voice is being heard, and intervene whenever the dialogue appears to be endangered.

Relax a little!

This overall approach has proven surprisingly robust, but each workplace and group of people is unique. You’ll always have to modify the approach I've described somewhat to account for those idiosyncrasies. In this paper and my book, I’ve exaggerated the potential difficulty of implementing onscreen editing by comprehensively discussing all the possible problems. You’ll probably encounter fewer significant problems and be able to quickly and painlessly implement the new process. Following this four-step approach carefully, with honest concern for the needs of all stakeholders, makes things go much more easily.

Implementation can be tough when you’re working in a high-pressure environment, where you must resist the pressure to move faster than you're comfortable with. To the extent you can, ease into the process gradually, starting with preliminary testing and gradually expanding the new approach to include all stakeholders. The full payback will take some time to become apparent, so be sure to keep everyone informed of your victories along the way.

References

Hart, G. 2004a. Practical and effective metrics. Intercom February 2004:6–8.

Hart, G. 2004b. Avoiding repetitive-stress injuries: a guide for the technical communicator.

Hart, G. 2006. Designing an effective review process. Intercom July/August 2006:18–21.

Hart, G. 2008. Effective onscreen editing: new tools for an old profession. Diaskeuasis Publishing, Pointe-Claire, Que. eBook in PDF format, 731 p.; printed version, 559 p.

