The Dangers Of Hidden Jargon In Communicating Science

Jun 12, 2017

One of the challenges that can arise in communicating science and other forms of scholarship to non-experts is the jargon involved.

How many people can confidently explain the meaning of broadband asymmetric acoustic transmission, mural lymphatic endothelial cells, or graded incoherence (to borrow some phrases from recent journal publications)?

But the most dangerous kind of jargon isn't the kind we notice. It's the kind that slips by. When technical definitions hide behind words we use in everyday speech, the opportunities for miscommunication abound. The expert thinks she has been clear; the recipient thinks he has understood. And yet, both could be wrong.

A common example comes from the statistical use of the word "significance." When a result is statistically significant, it means that it has been evaluated with a statistical test and found to meet some predefined threshold. The test itself involves calculating the probability of obtaining a result at least as extreme as that actually observed under the "null hypothesis," which might — for example — be the hypothesis that two groups do not differ in the frequency with which some symptom occurs. When this probability falls under a predefined threshold (such as 5 percent), we can tentatively reject the null hypothesis and conclude that the two groups do differ when it comes to the frequency of the symptom.
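The logic of such a test can be made concrete with a small simulation. The sketch below (with entirely invented symptom counts) runs a simple permutation test: under the null hypothesis that the two groups don't really differ, the group labels are arbitrary, so we shuffle them many times and ask how often chance alone produces a difference at least as extreme as the one observed. That fraction is the p-value.

```python
import random

random.seed(0)

# Hypothetical data: 1 = symptom present, 0 = symptom absent.
group_a = [1] * 30 + [0] * 70   # 30 percent symptom frequency
group_b = [1] * 15 + [0] * 85   # 15 percent symptom frequency

def freq_diff(a, b):
    """Absolute difference in symptom frequency between two groups."""
    return abs(sum(a) / len(a) - sum(b) / len(b))

observed = freq_diff(group_a, group_b)

# Under the null hypothesis the labels carry no information, so we
# repeatedly reshuffle the pooled data into two groups of 100 and count
# how often the shuffled difference is at least as extreme as observed.
pooled = group_a + group_b
n_iter = 10_000
extreme = 0
for _ in range(n_iter):
    random.shuffle(pooled)
    if freq_diff(pooled[:100], pooled[100:]) >= observed:
        extreme += 1

p_value = extreme / n_iter
print(f"observed difference: {observed:.2f}, p = {p_value:.4f}")
if p_value < 0.05:
    print("statistically significant at the 5 percent threshold")
```

With these made-up numbers the shuffled groups rarely match the observed 15-point gap, so the result clears the 5 percent threshold and the null hypothesis can be tentatively rejected.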

Statistical significance can be confusing, not only because the definition involves some nuance but also because a result can be statistically significant without being significant in the everyday sense of the word. In particular, statistical significance doesn't guarantee that a result is important. A study could find a statistically significant effect of taking some cancer drug on the prevalence of headaches, but the effect could be so small that it couldn't reasonably affect decisions about taking the drug. Or a study could find that the height of kindergarteners is statistically significantly greater than the length of snails — but who cares?
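The drug example is easy to demonstrate numerically. In this sketch (again with invented numbers), a hypothetical trial of a million patients per group finds headaches in 10.0 percent of drug takers versus 10.2 percent of controls: a difference of two-tenths of a percentage point, trivially small by any practical standard, yet decisively significant by a standard two-proportion z-test because the samples are so large.

```python
import math

# Hypothetical trial: headaches in 10.0% of 1,000,000 drug takers
# versus 10.2% of 1,000,000 controls.
n1, x1 = 1_000_000, 100_000
n2, x2 = 1_000_000, 102_000

p1, p2 = x1 / n1, x2 / n2
pooled = (x1 + x2) / (n1 + n2)

# Standard error of the difference under the null hypothesis that the
# two groups share one underlying headache rate.
se = math.sqrt(pooled * (1 - pooled) * (1 / n1 + 1 / n2))
z = (p2 - p1) / se

# Two-sided p-value from the normal approximation.
p_value = 2 * (1 - 0.5 * (1 + math.erf(abs(z) / math.sqrt(2))))

print(f"effect size: {p2 - p1:.3%}, z = {z:.2f}, p = {p_value:.2e}")
```

The p-value comes out far below 0.05 even though the effect itself is negligible: statistical significance tracks detectability, not importance.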

Statistics brims with hidden jargon. In a conference paper dedicated to the topic, statistician Christine Anderson-Cook offers a variety of additional examples, from "confounding" and "random" to "uniform" and "normal."

These examples of hidden jargon can be dangerous. They can lull those who hear them into a false sense of understanding; they can present a barrier to students' acquisition of a technical term. And yet they are not — I contend — the best-disguised forms of hidden jargon. The best-disguised forms of hidden jargon are those that even the expert doesn't recognize as jargon. She doesn't feel some term needs an explanation because she thinks her usage of the term just is the everyday usage. For her, the jargon has appropriated the everyday word. I call this double-masked jargon: It is hidden from both producer and consumer.

Double-masked jargon is so sneaky that I've only managed to uncover a few examples, and even these might be contested by some (expert) readers. But I remain convinced that double-masked jargon is real, and that in some cases it presents a barrier to effective communication. So in an effort to bring this jargon to light, I present two examples: causation and knowledge.

Let's begin with causation.

Experimental research is all about testing causal hypotheses — hypotheses about the effect of some candidate causal factor C on some postulated effect E. In an ideal randomized controlled trial, two groups differ only with respect to the presence of the candidate causal factor C. If the two groups differ (significantly!) with respect to some measured outcome E, we have evidence that C has a causal influence on E.

For those engaged in experimental research, it's natural to think of causation in these terms. The implicit definition of causation is something like this: C has a causal influence on E if interventions on C affect E. But this implicit definition is rarely elaborated or made explicit. That may be because experimentalists don't usually take themselves to be operating with a technical term of art when they make claims about causal relationships. Instead, they take themselves to be applying a more rigorous procedure for establishing whether a relationship is indeed causal, where the notion of "causal" is just the familiar notion we use every day.
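The interventionist definition can be illustrated with a toy simulation (all variables here are invented). In the model below, C genuinely causes E, while B is merely correlated with E because both are driven by C. Intervening on C shifts the frequency of E; intervening on B does not — which is exactly the distinction the experimentalist's definition is built to capture.

```python
import random

random.seed(1)

def trial(set_c=None, set_b=None):
    """One simulated unit. C causes E; B is correlated with E only
    because the common cause C drives them both. Passing set_c or
    set_b models an intervention that fixes that variable's value."""
    c = (random.random() < 0.5) if set_c is None else set_c
    b = c if set_b is None else set_b        # B just tracks C
    e = c and (random.random() < 0.9)        # E depends on C, not on B
    return e

def effect_rate(n=10_000, **kw):
    """Frequency of E across n simulated units under an intervention."""
    return sum(trial(**kw) for _ in range(n)) / n

# Intervening on C changes the frequency of E ...
print(effect_rate(set_c=True), effect_rate(set_c=False))
# ... but intervening on B leaves the frequency of E unchanged,
# even though B and E are correlated in the unmanipulated data.
print(effect_rate(set_b=True), effect_rate(set_b=False))
```

On the interventionist reading, C causes E and B does not, despite the fact that observing B would let you predict E quite well.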

But everyday talk of causation (and sometimes, scientific talk of causation) is infused with all sorts of additional considerations. When we say the drunk driver caused the accident (and not the sober driver who could have prevented it), we aren't just making a causal claim in the experimentalist's sense, we're making a claim about moral responsibility in a particular case. When we say that yeast causes bread to rise, we're likely to be thinking about the mechanism by which yeast leavens bread, not (just) the fact that changing the presence or quantity of yeast has a consequence for whether and how bread rises.

These examples reveal a complicated and multi-faceted notion of causation at work in our everyday causal language. If scientists or the journalists covering their work present causal conclusions without recognizing that the experimentalist's notion of causation is masked jargon, they invite readers to draw unwarranted implications. They've succumbed to the dangers of double-masked jargon.

My second example of double-masked jargon comes from philosophy, and the example is knowledge. We talk about knowledge all the time, at least in its verbier form: She knows that he knows that she knows. In philosophy, providing a good analysis of knowledge has been one of the major projects of epistemology, where one thing this analysis should tell us is when it is appropriate to say that she does, in fact, know that he knows (or when she knows anything else for that matter — let's just call whatever she knows "X").

Pretty much all analyses of knowledge agree that in order for her to know X, X must be true. She cannot know that whales are fish, because they aren't. She might believe they are fish, but she cannot know it. This is a given for most philosophers, but is it also true of the everyday sense of knowledge, the one that we don't take to be jargon?

Consider the following statement (borrowed from a paper by philosopher Allan Hazlett, who adapted it from a National Geographic article): "Everyone knew that stress caused ulcers, before two Australian doctors in the early 80s proved that ulcers are actually caused by bacterial infection." But if ulcers are actually caused by bacteria, and we can only know things that are true, then it couldn't have been the case that everyone knew that ulcers were caused by stress. Yet you probably understood the sentence just fine.

There's some question about whether people interpret statements like Hazlett's to be literally true, but there's no question that such uses of "know" do crop up in everyday speech. It's also clear that the everyday use of "knows" departs from epistemologists' use in other ways. The idea that knowledge requires its target to be true is an especially poignant one for hidden jargon, though, because I don't think it's an aspect of the term that experts typically take to be technical. It's just supposed to be part of what "we" (meaning experts and non-experts alike) take to be knowledge.

In some ways, becoming an expert is like learning a new language. What initially feels like awkward jargon transforms into something mundane; it's simply the way you communicate with your peers. But the transformation is only partial. Just as a child learning Spanish at home and English at school learns when to use which word, the expert must police her usage: Unless her grandfather is a physicist or a biologist, she won't explain her work day to him by talking about "broadband asymmetric acoustic transmission" or "mural lymphatic endothelial cells."

If she did, though, her grandfather would just ask for clarification — this jargon isn't hidden. It's masked jargon that's more likely to perpetuate misunderstanding, and it's double-masked jargon that's hardest to recognize and correct.


Tania Lombrozo is a psychology professor at the University of California, Berkeley. She writes about psychology, cognitive science and philosophy, with occasional forays into parenting and veganism. You can keep up with more of what she is thinking on Twitter: @TaniaLombrozo

Copyright 2017 NPR. To see more, visit http://www.npr.org/.