Editorial: In theory...
by Geoff Hart
Previously published as: Hart, G. 2006. Editorial: In theory... the Exchange 13(1):2, 8–10. http://www.stcsig.org/sc/newsletter/html/2006-1.htm#editorial
You may have noticed a growing backlash against science in modern society. Over the past few centuries, it has been noted, with equal measures of justice and imprecision, that science has pushed aside religion as a way to explain the world and deal with its vexing uncertainties. This may have resulted from a growing dissatisfaction with religion or with religion's perceived inability to provide certainty, exacerbated by a growing exposure to many new and vastly different cultures and religions. Whether this is a good or bad thing, I leave to each individual to decide.
However, whenever the thrill of a new paradigm wears off and we stop ignoring the holes in the emperor's new clothes, the new paradigm becomes unsatisfactory in its turn. For example, society has collectively grown disenchanted with science's inability to explain some of the more important aspects of our lives, particularly the emotional aspects of being human. Then there are the adverse consequences of uncritically adopting science and its firstborn child, technology: industrial accidents, pollution, species extinctions and decreased biodiversity, the greenhouse effect, and increasingly serious diseases. These symptoms tell us that science doesn't have all the answers either.
(A brief clarification before proceeding: A closer look at both religion and science reveals that it is not the paradigm per se that is the problem, since paradigms are nothing more than tools for understanding. Rather, the problem arises from blind and simplistic adherence to a paradigm, combined with greed, laziness, and other typical human character defects.)
We see this disenchantment in the popular press all the time, where dismissive phrases such as "of course, this is only a theory" reveal both the dissatisfaction and a profound misunderstanding of how science works. Even some of my scientist clients make this mistake. An arguably more serious problem comes from those who adopt a proven and highly effective rhetorical approach to attack science: use your opponent's own words against them when speaking to those who don't fully understand those words. The result? When (as is natural) scientists follow their learned reflex and use familiar words in an attempt to communicate clearly, as they would do when talking to their peers, the clarity and value of their message is greatly diluted by their opponents' misuse of those terms.
What should the message be? Something that more accurately reflects both the power and the limitations of the scientific paradigm. Before we can craft that message, it helps to take a large step back and refresh our memory of how science works. Simplistically, we can say that science attempts to describe our world in four phases:

Creating hypotheses to explain observed phenomena.
Testing and refining those hypotheses until they mature into theories.
Enshrining well-tested theories as laws.
Challenging those laws, occasionally leading to a paradigm shift.
Let's look at each of these in turn.
When scientists observe a phenomenon, they attempt to explain it based on what they already know about that phenomenon and thereby create one or more hypotheses (testable propositions) to explain the observed but currently unexplained aspects of the phenomenon. A given hypothesis may be entirely incorrect—the framer of the hypothesis often knows this when they frame it—but it follows logically (thus, plausibly) from what is already known or believed to be known. Where the framer strongly suspects that the hypothesis is incorrect, the goal of framing the incorrect hypothesis is to provide a tool that will eliminate certain possibilities, thereby leaving fewer potential explanations to be tested in future research.
As meticulous study begins to muster evidence that supports or refines (revises and improves) a hypothesis, scientists increasingly come to believe that the hypothesis offers a reasonable, if incomplete, understanding of the world. (Never forget the word incomplete in discussing theory: scientists fully recognize the amazing complexity of the world and accept that they may never reach a final, completely accurate description of any aspect of that complexity.)
The original hypothesis gradually transforms into a theory—an explanation that provides a unifying framework for understanding some aspect of the world. That original hypothesis has usually undergone considerable fine-tuning or revision as a result of an ongoing series of confirmations of key aspects of the hypothesis and rejections of other aspects based on contradictory results. It's important to remember that theories are not static; even their proponents recognize them as a work in progress. In particular, theories spawn subordinate hypotheses that are in turn tested and either supported or rejected, thereby deepening, broadening, and enriching the theory.
When a theory has withstood the test of time, which in the context of science means that it has been tested repeatedly, from many different perspectives and using many new generations of insights, and has withstood repeated challenges from competing hypotheses, it becomes enshrined as law. At that point, researchers gradually lose interest in testing that theory until new technologies or new insights from other fields suggest the possibility of tests that have not previously been attempted. Until those new tests can be done, the law is simply accepted as one of the "givens" of a field, and scientists move on to more interesting problems.
This has both good consequences and bad: researchers can build new hypotheses and theories that expand upon and enrich the law, but they may also come to accept the law as dogma. It's the latter consequence that is most problematic: accepting something as a dogma means that we no longer challenge our assumptions, and perhaps even blindly and simplistically apply the law even when it is not completely satisfactory. This can lead us to complacency and a temptation to ignore or aggressively fight any philosophy or any person who challenges that dogma. That's unfortunate, because it is the restless unease of an inquiring mind, combined with an increasing dissatisfaction with known imperfections in a theory or law, that leads to breakthroughs.
Even the finest theory and the most widely accepted law inevitably fail to describe some subtle or not-so-subtle aspect of the reality they purport to describe. When asked why I'm so in love with science, my answer generally relies on one of two metaphors. When I'm feeling lofty or exalted, I respond that each new mountain we climb reveals a new and exciting vista of distant peaks that we have not yet climbed. If I'm feeling more reductionist, I'll frame this in the other direction: the more narrowly we focus, the more of life's wonderful complexity we see. (I still remember my excitement, as a youth, in looking at a microscope slide filled with pond water and suddenly perceiving a whole new world whose existence I'd never before suspected.)
A paradigm shift occurs when some researcher detects a problem with or inadequacy in an accepted law and entertains a suspicion that they may be able to resolve that problem or fill in a gap. Perhaps there is some small detail the law does not describe, but that has been considered trivial and unworthy of explanation, or perhaps the law is known to only approximate reality and the researcher imagines they can provide a better approximation. The researcher does what any good scientist will do: they challenge the prevailing assumptions and propose a new testable hypothesis that will both honor the spirit of the law (which has, after all, worked well up to this point) and modify its letter by providing new knowledge or a more precise description. When the challenge is sufficiently radical, it becomes what Kuhn called a "paradigm shift": the way we look at and explain our world (the paradigm or metaphor) changes. The cycle begins again, leading to new theories and laws and new challenges that overturn the old laws.
In this manner, Sir Isaac Newton's hypotheses about how force and acceleration relate to mass were tested and confirmed, giving rise to his theories of motion, and when these theories were tested extensively, exhaustively, and creatively, and found to be correct, they became Newton's laws of motion. Then along came Einstein, who demonstrated that Newton's laws only approximate reality—an enormously useful approximation, to be sure, and one that remains as valid today as it was more than three centuries ago. Einstein recapitulated the process of turning hypothesis into theory, and now, after rigorous and long-term testing, both special and general relativity have nearly attained the status of law. However, we still refer to Einstein's theories (not laws) of relativity because there remain troublesome aspects that have not yet been fully explained, such as the linkage between relativity and quantum mechanics and uncertainty over how gravitation really works.
Some future genius will undoubtedly resolve these difficulties and update Einstein's work in much the same way that Einstein updated Newton's work. The important thing to understand about this process is not that Einstein made Newton irrelevant, nor that the future researcher will make Einstein irrelevant. Rather, they will provide a better approximation of reality that explains things that neither Newton nor Einstein could explain. Newton's laws still provide an acceptably accurate explanation for how moving objects such as cars will behave at the speeds we're likely to encounter in daily life; this lets us design functional cars. Similarly, Einstein's relativity allows us to calculate the time discrepancies that arise at much higher speeds with sufficient accuracy that we can obtain useful measurements from the global positioning system (GPS).
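To make the GPS point concrete, here is a back-of-envelope sketch (in Python; my addition, not part of the original essay) of the two relativistic corrections a GPS satellite clock needs. The orbital parameters and constants are approximate textbook values, and the formulas are the standard first-order approximations for velocity time dilation and gravitational time dilation.

```python
import math

# Approximate physical constants (SI units)
c = 2.998e8        # speed of light, m/s
GM = 3.986e14      # Earth's gravitational parameter, m^3/s^2
R_EARTH = 6.371e6  # mean Earth radius, m

# GPS satellite on a roughly circular orbit at ~20,200 km altitude
r_sat = 2.6571e7                 # orbital radius from Earth's center, m
v_sat = math.sqrt(GM / r_sat)    # circular orbital speed, m/s (~3.9 km/s)

seconds_per_day = 86400.0

# Special relativity: the fast-moving satellite clock runs SLOW
# relative to a ground clock, by roughly v^2 / (2 c^2) per second.
sr_shift = -(v_sat**2) / (2 * c**2) * seconds_per_day  # seconds per day

# General relativity: the satellite clock, higher in Earth's gravity
# well, runs FAST by roughly (GM/c^2) * (1/R_earth - 1/r_sat) per second.
gr_shift = (GM / c**2) * (1 / R_EARTH - 1 / r_sat) * seconds_per_day

net_us = (sr_shift + gr_shift) * 1e6  # net drift, microseconds per day
print(f"SR: {sr_shift * 1e6:+.1f} us/day")
print(f"GR: {gr_shift * 1e6:+.1f} us/day")
print(f"Net drift: {net_us:+.1f} us/day")
```

The net drift works out to a few tens of microseconds per day, and because light travels about 300 m per microsecond, an uncorrected clock would degrade position fixes by kilometers within a single day. Newton's physics cannot predict this drift at all; Einstein's can, which is exactly the "better approximation" the essay describes.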
It is this process of continual refinement of theory that is widely misunderstood, particularly by the public, and that is seen as a flaw in science. It's natural and human to desire certainty, but if science teaches us nothing else, it teaches that even the most solid-seeming laws represent nothing more than the best current approximation of an unimaginably complex reality. The more we seek, the more complexity we discover and the more we find that we've described only the surface reality, not the truth that lies beneath it. It is this process of endlessly seeking a better explanation that truly captures the spirit of science. Contrary to the popular perception, the goal of science is not to provide certainty, but rather to provide an increasingly good approximation of how our world works. That is, we continually replace our former understanding with something more profound. This process of change should not be seen as a criticism of science, but rather celebrated as one of the unique insights offered by the scientific method: recognition and delight in uncertainty.
What does this mean for us as scientific communicators? It means that we must reconsider how we think of our work, and particularly, that we must discard the notion that our goal is to strive for certainty. Instead, our goal must be to communicate an increasingly accurate understanding of our world and of the limitations of this understanding. We must never forget that our understanding is provisional and that the journey to understanding will continue long after our personal journeys come to an end. We must never give the mistaken impression that we have completed that journey. Instead, we must embrace the complexity of our world and remind our audience that "the road goes ever on", as Tolkien wrote so movingly. Let us take back and revalue our words (hypothesis, theory, law), and use them to once more communicate effectively with both the experts and the innocents in our audience.
My essays on scientific communication have now been collected in the following book:
Hart, G. 2011. Exchanges: 10 years of essays on scientific communication. Diaskeuasis Publishing, Pointe-Claire, Que. Printed version, 242 p.; eBook in PDF format, 327 p.
©2004–2017 Geoffrey Hart. All rights reserved