Editorial: Check your assumptions at the door
by Geoff Hart
Previously published as: Hart, G.J. 2004. Editorial: Check your assumptions at the door. the Exchange 11(4): 2, 7. http://www.stcsig.org/sc/newsletter/html/2004-4.htm#editorial
"If you torture data sufficiently, it will confess to almost anything."
—Fred Menger, chemistry professor (1937–)
Since not long after the beginning of the scientific revolution, there's been a growing societal consensus that numbers are important, and that indeed, anything important can be grasped if only a number can be assigned to it. This is a more specific instance of the general rule that humans love categorizing things, and that anything we can't fit into a category makes us feel uncomfortable.
To a scientist, everything is a metric: something that can be measured. To me, one of the most interesting things about metrics is not just what they say about the thing being measured, but what they say about the measurer. Consider the example of intelligence quotient (IQ) testing. Early in the 20th century, psychologist Lewis Terman popularized the IQ test originally developed in France by Alfred Binet. As William Poundstone (2003, p. 27) recounts in a refreshing account of the merits and demerits of intelligence testing, Terman found a few inconvenient results during his initial broad application of what eventually became known as the Stanford–Binet IQ test: on average, scores differed between women and men, and between "Whites" and members of other ethnic groups.
As any good scientist would do, Terman asked himself what these differences meant. Unfortunately, a key assumption led him astray in his attempts to answer that question: he assumed that the test was biased simply because its results didn't fit his expectations. The test was indeed biased, but not remotely in the way he expected, and Terman went completely off the rails by making two additional assumptions.
In short, Terman assumed that men and women were of equal intelligence, but that "Whites" were more intelligent than other races. A test that could have provided some interesting and important insights into the nature of testing and intelligence (why do the observed differences exist? what do they tell us?) became instead an effort to support pre-existing prejudices. Since then, IQ testing has often become a rote application of standardized tests that increasingly demonstrate nothing of any real interest other than the ability to take a particular IQ test. The developers of the Scholastic Aptitude Test (SAT), to name one infamous example, have spent enormous effort trying to produce a truly representative test, yet it's still true that you can dramatically improve your SAT score simply by studying how to take the test. How objective can any such test be if a significant proportion of the score depends on whether you've been trained to take the test?
With the benefit of a century of hindsight, it's natural for us to assume that any competent scientist would have thought carefully about the early IQ findings and framed testable hypotheses about their various interpretations. Eventually, such an inquiry would have led investigators to more objective and interesting conclusions, such as insights into disparities in how men and women are taught science, and into how and whether different ethnic populations are being failed by their educational systems. (Of course, that assumes you accept a definition of "failure" based on test scores, which is itself a questionable assumption.)
Currently, there's a growing consensus that intelligence is a multidimensional concept with many facets; for example, psychologists now distinguish between emotional and mathematical intelligence, among other dimensions. Now that's interesting! Complexity is often far more fun to explore than simplicity when it comes to understanding how the world works.
By itself, however, the notion of multidimensional intelligence still fails us through the shackles it places on our thinking. If we assume that the test for (say) emotional intelligence is valid, we may fail to ask the important next question: What is the source of any observed differences in test scores? Then there's the follow-up question: What implications does this have? For example, we might want to spend some time in our high schools teaching teenagers to understand and deal with their emotions, thereby improving their emotional intelligence. If the goal of our educational system is to prepare our children for adult life, we certainly seem to be neglecting an entire dimension of their intellectual development: the emotional side.
It can be remarkably difficult to identify one's assumptions; this is why, among other reasons, I can make a very good living as a technical and substantive editor of scientific manuscripts written by people with IQs much higher than my own. It's also why science journals conduct time-consuming and expensive peer reviews. But difficult is not the same as impossible, and the difficulty shrinks considerably if you're willing to ask yourself one fundamental question: "What conditions are required for this hypothesis to be valid?" Better still, we can ask a friend or colleague to pose that question for us, since their assumptions are likely to differ enough from our own that they won't share our blind spots.
I suspect the same question has important consequences for us as communicators. What assumptions do we make about our audiences when we choose what to communicate and how to communicate it? How could we test those assumptions? If those assumptions are wrong, we fail to communicate successfully. If the results of the tests don't conform to our prejudices, we may have discovered something significant about more than our own prejudices. Whatever form of communication you practice, I encourage you to ask yourselves these questions—to check your assumptions at the door and see what interesting new things you might discover thereby.
If all goes as planned, the March 2005 issue of Intercom will be a special issue on scientific communication. [A look back from 2005: Everything went as planned.—GH] Watch your mailbox! I hope the articles in that issue, including (as you might expect) my own contributions, will inspire you to consider writing your own article on our profession, whether for Intercom or for the Exchange. Got an article idea? Drop me a line and let's discuss it!
If you haven't been receiving e-mail from the Scientific Communication SIG's e-mail discussion group, please follow Douglas Adams' advice: "don't panic!" The volume of mail in this forum is very low indeed, and we often go months between messages. But when an occasional message does go out, I get dozens of bounce messages from people whose e-mail addresses have expired. When I can't contact the person at that address, I delete them from the list of subscribers. If you haven't received any mail from this group, but want to, please see the instructions on the last page of the newsletter for information on how to resubscribe.
If you or any of your colleagues haven't been receiving notification that the most recent issue of this newsletter has been published, please confirm with STC that your contact information is correct. Many people forget to inform STC when they change jobs or Internet service providers, and that means they won't receive any messages from our SIG. Because STC values our privacy, they don't give me access to the membership database, even though I'm the SIG manager. As a result, I can't help you change your contact information; you'll have to do this yourself.
Poundstone, W. 2003. How would you move Mount Fuji? Microsoft's cult of the puzzle: how the world's smartest companies select the most creative thinkers. Little, Brown and Company, New York, N.Y. 276 p.
My essays on scientific communication have now been collected in the following book:
Hart, G. 2011. Exchanges: 10 years of essays on scientific communication. Diaskeuasis Publishing, Pointe-Claire, Que. Printed version, 242 p.; eBook in PDF format, 327 p.
©2004–2017 Geoffrey Hart. All rights reserved