by Geoff Hart
Previously published as: Hart, G. 2011. Languages do shape how we think. The Exchange 18(1):4–5.
The February 2011 issue of Scientific American has an interesting article by Lera Boroditsky about how the language we choose to describe a problem affects how we think and communicate about the problem.
It's well written and thought-provoking, with only one major glitch. The opening example shows that a 5-year-old aboriginal child in Australia can always find north, whereas whenever Boroditsky tries this test in a lecture hall, the audience is typically unable to point north; those confident enough to point do so in wildly incompatible directions. As described, the example proves nothing. Inside a building, it's trivially easy to get turned around and lose track of compass directions, yet I've never failed to find north while outdoors (sometimes with a little effort). In the lecture hall, the audience and I would have no particular reason to keep track of compass directions while indoors. The problem is one of context and goals, not of language.
From a linguistic perspective, the point that Boroditsky subsequently explains is that the aboriginal child lives in a culture where compass orientation is a key principle underlying the language; for example, when arranging images into a chronological sequence, the aboriginal child would arrange them from east to west, whereas most readers of this article would arrange them from left to right (e.g., in English-speaking societies) or from right to left (e.g., in Israel). The direction does not depend in any way on which way the person is facing: it's determined by the language the person is using. That's an interesting point on many levels, ranging from the crafting of fiction to communicating science.
There are many other interesting and relevant examples. First, consider the example of the synthetic languages used in computer programming. Programmers who think in assembler, BASIC, and C++ will take radically different approaches to thinking about, codifying, and solving an identical problem—because the nature of each of these languages prohibits (or makes prohibitively difficult) thinking about certain descriptions of a problem and certain expressions of the solution, while facilitating other descriptions and expressions.
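As a toy illustration (mine, not from the original essay), the same small problem can be framed in the step-by-step, state-mutating style that assembler and BASIC encourage, or in the declarative and higher-order styles that more functional languages encourage. Even within a single language such as Python, each framing pushes the programmer to describe the problem differently:

```python
# Toy illustration: summing the squares of the even numbers in a list,
# framed three ways. Each framing mirrors the mindset a different
# language family encourages.

data = [1, 2, 3, 4, 5, 6]

# Imperative framing (assembler/BASIC mindset): an explicit loop that
# mutates an accumulator, one step at a time.
total_imperative = 0
for n in data:
    if n % 2 == 0:
        total_imperative += n * n

# Declarative framing: describe *what* the result is, not *how*
# to accumulate it.
total_declarative = sum(n * n for n in data if n % 2 == 0)

# Higher-order framing (functional mindset): compose filter and map.
total_functional = sum(map(lambda n: n * n, filter(lambda n: n % 2 == 0, data)))

# All three framings compute the same value (4 + 16 + 36 = 56),
# but each makes a different description of the problem "natural".
assert total_imperative == total_declarative == total_functional == 56
```

The results are identical; what differs is which description of the problem each style makes easy to express, which is precisely the essay's point about natural languages.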
Another example arises from how the languages of mathematics and of science (applied mathematics) differ from the language of social construction (a much abused philosophy with fascinating insights for those willing to consider the principles rather than blindly obeying the politics). The two discourse communities have nearly incompatible ways of thinking about the world, making communication at best difficult. Far from being an issue of merely academic concern, this has serious consequences in the field of risk communication: scientists who can only communicate in the language of science often cannot successfully persuade the general public that a risk is real—or that a risk is insignificant—because the rhetorical factors and language used by the scientist differ too greatly from those of their audience.
Finally, consider how differently statisticians, using the linguistic tools of their language ("statistics"), think compared with the general public when it comes to assessing risks (e.g., death in an airplane crash) and probabilities (e.g., lotteries). Two entirely different worlds. I recently had this discussion with my son concerning the reliability of certain brands of computers. Historically, Dell has been one of the most reliable brands, as shown routinely in PC Magazine's annual reliability surveys, which poll thousands of readers. Yet a friend of my son, who repairs computers for a living, considers the reliability of Dell computers to be unacceptably low because of how many he repairs each year. The problem, of course, is that Dell sells millions of computers. Even if only 1 in 1000 is a lemon, thousands per year will need repairs; a company that sells only 1000 equally high-quality computers per year may seem more reliable than Dell because (on average) only 1 of those computers will need repairs. But the reliability is identical; all that differs is the number of computers sold. I haven't managed to help my son understand this point, but I'm still trying.
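The arithmetic behind this example can be sketched in a few lines. The sales figures below are hypothetical, chosen only to match the essay's "1 in 1000" failure rate and "millions of computers" framing:

```python
# Hypothetical numbers illustrating the essay's reliability argument:
# two vendors with *identical* per-unit reliability, very different sales.

failure_rate = 1 / 1000        # 1 lemon per 1000 units, for both vendors

big_vendor_sales = 2_000_000   # a Dell-scale vendor (hypothetical figure)
small_vendor_sales = 1_000     # a boutique vendor

big_repairs = big_vendor_sales * failure_rate      # expected repairs/year
small_repairs = small_vendor_sales * failure_rate  # expected repairs/year

# A repair technician sees 2000 broken units from the big vendor and
# only 1 from the small one, even though the per-unit reliability
# (the statistician's measure) is exactly the same.
print(f"big vendor: {big_repairs:.0f} repairs; small vendor: {small_repairs:.0f} repair")
```

The technician's language counts absolute repairs; the statistician's language counts repairs per unit sold. Same data, different conclusions.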
Each example is crucially important for scientists to understand. Science is neither conducted in a social vacuum nor is it immune to social pressures. For scientists to successfully communicate key policy issues to politicians and the public, they must learn their audience's language well enough to change how they think about and communicate the problem. That's no longer optional; it may have become crucial to our survival as a species.
From the perspective of fiction, the effects of language on how characters think cannot be ignored. Characters from different linguistic groups within a culture (e.g., the abovementioned scientists and general public), characters from different cultures (i.e., cultures with different languages), and characters from different species (again, with different languages) will think very differently about the same situation. Even when they nominally share the same language, as in the case of my Chinese authors who write in English as their second language, the thought patterns can be dramatically different because the starting perspectives are so different.
Men may be from Mars, and women may be from Venus, but that's just the tip of the iceberg when it comes to communication difficulties.
My essays on scientific communication have now been collected in the following book:
Hart, G. 2011. Exchanges: 10 years of essays on scientific communication. Diaskeuasis Publishing, Pointe-Claire, Que. Printed version, 242 p.; eBook in PDF format, 327 p.
©2004–2017 Geoffrey Hart. All rights reserved