To Make Better Predictions, Don't Stick With The Easy Stuff

The presidential primaries are a great opportunity to test your skills in political prediction. Who will win which states, and by what margin? And if your predictions aren't all that good, how can you do better?

Two weeks ago, I wrote about how to make better predictions in domains both big (world politics) and small (your breakfast), drawing on recent work in psychological science. Today, I'm going to revisit this question from a new angle.

Making good predictions isn't just about your accuracy; it's also about your calibration.

Accuracy refers to just what you'd probably expect. A prediction is accurate when it turns out to be right: Hillary Clinton, say, does win Alabama, and Bernie Sanders wins Oklahoma. If you make 100 predictions and 70 of them turn out to be right, your accuracy is 70 percent.

But predictions can be made with higher or lower confidence, and that's where calibration comes in. Suppose you were very confident that Clinton would win Alabama, but not very confident that Sanders would win Oklahoma. We can express this quantitatively: Perhaps you'd assign a probability of .90 to Clinton winning and a probability of .60 to Sanders. You're well-calibrated if these estimated probabilities of being correct track your actual accuracy: 90 percent of the claims to which you assign a probability of .90 should turn out to be right, and 60 percent of those to which you assign a probability of .60 should turn out to be right.
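To make the distinction concrete, here is a minimal sketch in Python, using invented confidence levels and outcomes purely for illustration (not data from any real forecaster). It computes overall accuracy and then, for each stated confidence level, compares that confidence to the actual hit rate, which is the essence of checking calibration.

from collections import defaultdict

# Hypothetical predictions: (stated confidence, did it turn out to be correct?)
predictions = [
    (0.90, True), (0.90, True), (0.90, False), (0.90, True),   # "pretty sure" calls
    (0.60, True), (0.60, False), (0.60, True), (0.60, False),  # tougher calls
]

# Accuracy: the share of all predictions that turned out to be right.
accuracy = sum(correct for _, correct in predictions) / len(predictions)
print(f"Overall accuracy: {accuracy:.0%}")

# Calibration: within each confidence level, does the hit rate match the
# stated confidence? A well-calibrated forecaster is right about 90 percent
# of the time on 0.90-confidence calls, about 60 percent on 0.60-confidence calls.
by_confidence = defaultdict(list)
for confidence, correct in predictions:
    by_confidence[confidence].append(correct)

for confidence, outcomes in sorted(by_confidence.items()):
    hit_rate = sum(outcomes) / len(outcomes)
    gap = confidence - hit_rate
    print(f"Stated confidence {confidence:.2f}: actually right {hit_rate:.0%}"
          + ("  (overconfident)" if gap > 0 else ""))

On these made-up numbers, the 0.90-confidence calls come out right only 75 percent of the time and the 0.60-confidence calls only 50 percent of the time: the kind of overconfidence discussed next.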

Most of us aren't just inaccurate when it comes to everyday predictions (whether about world politics or which route will have less traffic); we're also often poorly calibrated. For the most part, this poor calibration takes the form of overconfidence. For example, when we assign a probability of .90 to being correct ("I'm pretty sure!"), we might be right only 75 percent of the time.

Overconfidence also plagues experts. In one macabre example, a study found that even when physicians were "completely certain" of a diagnosis, subsequent autopsies revealed they were wrong 40 percent of the time. Such overconfidence can lead to suboptimal decisions across a variety of domains.

That's the bad news: Our predictions are often inaccurate and poorly calibrated to boot. But the good news is that psychological research is shedding light on the characteristics of those who make more accurate and better-calibrated predictions, with potential implications for how we might learn to do better ourselves.

Here's one intriguing lesson for better calibration from a paper published in the March issue of the Journal of Experimental Social Psychology: To avoid being overconfident, avoid the temptation to focus on what's easy, and instead pay attention to what's hard.

The paper, by psychologists Joyce Ehrlinger, Ainsley Mitchum and Carol Dweck, reports three studies in which participants were asked to estimate their own performance on a task, either a multiple choice test with antonym problems or a multiple choice general knowledge quiz. The participants were asked to estimate their percentile relative to other students completing the task, from zero percent (worse than all other students) to 100 percent (better than all other students). If participants were perfectly calibrated, then the average percentile estimate should have been 50 percent. But that's not what they found. In Study 1, for instance, the average was 66 percent. Like the children of Lake Wobegon, participants (on average) believed themselves to be better than average.

But the researchers did more than document a typical pattern of overconfidence: They were able to isolate an important contributing factor.

In their initial study, the researchers had participants complete a questionnaire that assessed their beliefs about the malleability of intelligence. Based on this, they could classify participants into those who held an entity theory — the view that an individual's intelligence is pretty much fixed, and you can't do much about it — versus an incremental theory — the view that you can always substantially change how intelligent you are. It was the entity theorists who were responsible for the lion's share of overconfidence. Their percentile estimates averaged 76 percent, while those of the incremental theorists averaged a significantly lower 56 percent.

Why would an individual's beliefs about intelligence affect overconfidence?

Past work has shown that entity theorists tend to avoid challenge. If you think your intelligence is fixed, then you won't welcome poor performance or the experience of difficulty, which suggests that your fixed intelligence level isn't very high. In contrast, if you think your intelligence is malleable, then difficult problems provide an opportunity to improve. The researchers thus hypothesized — and in Study 2 found — that relative to those primed to adopt an incremental stance toward intelligence, those primed to adopt an entity theory would spend less time on hard problems (avoiding the experience of difficulty), and more time on easy problems. Moreover, the time spent on hard problems was related to overconfidence. If you gloss over difficulty, you fail to appreciate where and why you might get things wrong.

A final study pointed toward corrective measures: When entity theorists had their attention drawn to difficult problems, their overconfidence was reduced to the levels found for incremental theorists.

Translating this research back into real-world terms, we can say that there's value in recognizing and attending to the problems with which we struggle. They provide an opportunity not only to learn about the content of the problems, but also to become better calibrated in our predictions.

Extrapolating back to the primaries, perhaps the lesson is that you shouldn't just ignore the tough calls. It's probably appropriate to make your predictions about them with low confidence, but if you simply disregard the judgments you find difficult, you're likely to overestimate your accuracy and to perpetuate poor calibration. It's easy to stick with what's easy, but making good predictions requires attending to what's hard.


Tania Lombrozo is a psychology professor at the University of California, Berkeley. She writes about psychology, cognitive science and philosophy, with occasional forays into parenting and veganism. You can keep up with more of what she is thinking on Twitter: @TaniaLombrozo

Copyright 2021 NPR. To see more, visit https://www.npr.org.

Tania Lombrozo is a contributor to the NPR blog 13.7: Cosmos & Culture. She is a professor of psychology at the University of California, Berkeley, as well as an affiliate of the Department of Philosophy and a member of the Institute for Cognitive and Brain Sciences. Lombrozo directs the Concepts and Cognition Lab, where she and her students study aspects of human cognition at the intersection of philosophy and psychology, including the drive to explain and its relationship to understanding, various aspects of causal and moral reasoning and all kinds of learning.
