Blame Your Brain: The Fault Lies Somewhere Within

(Image: Hulton Archive / Getty Images)

Science doesn't just further technology and help us predict and control our environment. It also changes the way we understand ourselves and our place in the natural world. This understanding can inspire awe and a sense of grandeur. But it can also be unsettling, especially when it calls into question our basic assumptions about the kinds of creatures we are and the universe we inhabit.

Current developments in neuroscience seem to be triggering precisely this jumble of reactions: wonder alongside disquiet, hope alongside alarm.

A recent headline at Salon.com, for example, promises an explanation for "how neuroscience could save addicts from relapse," while an article by Nathan Greenslit at The Atlantic, published less than a week later, raises worries that neuroscience is being used to reinforce racist drug policy. Obama's BRAIN initiative hails "the dawn of a new era of the brain," but with it comes the need to rapidly work out the ethical implications of what we're learning about the brain and about ourselves. We're fascinated by neuroscience, but we're not always sure what to make of it.

In a forthcoming paper in the journal Psychological Science, psychologists Azim Shariff, Joshua Greene and six of their colleagues bring these heady issues down to earth by considering whether learning about neuroscience can influence judgments in a real-world situation: deciding how someone who commits a crime should be punished.

The motivating intuition is this: to hold someone responsible for her actions, she must have acted with free will.

But if her actions were the result of brute, mechanical processes that fully determined their effects — a view that a neuroscientific understanding of the mind might engender — then she didn't have free will, so she shouldn't be held morally responsible or punished too harshly. (More precisely, she shouldn't be punished merely for retribution, or to receive her "just deserts." It might still make sense to support punishment for other reasons, such as deterring others from acting similarly in the future.)

The view that a causally deterministic world precludes free will is known as incompatibilism in philosophy, and while it isn't universally endorsed, it's not uncommon. So if learning about neuroscience suggests that the world is deterministic, and if determinism is judged incompatible with free will, then learning about neuroscience could have implications for how people assign moral responsibility and dole out retributive punishment.

To test these ideas, the researchers had participants read articles that were either about neuroscience or about other topics (nuclear power, natural headache remedies). The neuroscience articles highlighted the mechanistic, neural bases for human decisions, as reflected in these representative snippets:

"In a study published Sunday in Nature Neuroscience, researchers using brain scanners could predict people's decisions seven seconds before the test subjects were even aware of making them ... "

"The implications immediately seem far greater, and perhaps more unsettling, than learning about the physiological basis of other brain functions."

"The unease people feel originates in a misconception of self as separate from the brain, said National Institute of Health neuroscientist Mark Hallett."

After reading either the neuroscience articles or the alternatives, participants completed a seemingly distinct study for which they read about a student's violent crime:

"In the spring of 2005, Jonathan Scarrow, a high school senior in Ohio was involved in an altercation at a local bar which led to the death [of] a college student, Brandon Mahew ... "

"Scarrow entered an enraged state while fighting with Mahew ... When Scarrow was finally subdued by his own friends, Mahew lay bloody and unconscious. He was rushed to hospital, but never regained consciousness, and finally died two days later from massive head trauma."

After reading about the incident, participants were asked to rate Scarrow's blameworthiness and how long he should be incarcerated for his transgressions. To make sure that responses reflected participants' views concerning retributive punishment, they were asked to recommend the length of a jail sentence that would follow a fully effective program of rehabilitation, and were additionally told that the length of the sentence would have no effect on deterring future crimes.

The researchers found that, on average, participants who read the neuroscience articles assigned shorter prison sentences to Scarrow and found Scarrow less blameworthy than those who read the other articles. Bolstered by three additional studies reported in the paper, the findings suggest that learning about neuroscience reduces belief in free will, which in turn makes people less inclined towards retributive punishment.

Both Shariff and Greene, the paper's first two authors, were kind enough to correspond with me about the research by email. My first question for them was this: Is there something special about neuroscience that's generating these effects?

Shariff's answer was "yes":

"Yes, absolutely. Whereas other sciences can cause us to question important issues about the nature of energy, the origin of the universe and the origin of species, many of the insights emerging from psychology and neuroscience can compel us to question our very selves. Every bit about how our subjective experiences and how we interface with the world is up for grabs in neuroscientific research. In that sense, psychology and neuroscience are very much sciences about us. No other science is as personal, and as personally destabilizing."

Greene concurred:

"Neuroscience studies the physical mechanisms behind human decision-making, and that's what makes it special. For centuries philosophers and scientists have said that human choice is just a complicated physical process, that there is no 'tiny miracle' that happens in our brains when we choose. For many people this is hard to believe, but neuroscience has the potential to demonstrate in a compelling way that it's true, that we are ultimately physical beings. What this new paper indicates is that this scientific understanding of human nature affects people's moral and legal judgments."

So what was it about a mechanistic explanation of human decisions that influenced people's moral judgments? Was it the appeal to deterministic causal processes, as the motivation for the study seemed to suggest?

If so, then reading about neuroscience that isn't deterministic should have different effects. Consider this excerpt from a June 9th press release from UC Davis, describing a new study related to the one that Shariff and colleagues used for the neuroscience texts in their experiment:

"Our ability to make choices — and sometimes mistakes — might arise from random fluctuations in the brain's background electrical noise, according to a recent study from the Center for Mind and Brain at the University of California, Davis."

"'How do we behave independently of cause and effect?' said Jesse Bengson, a postdoctoral researcher at the center and first author on the paper. 'This shows how arbitrary states in the brain can influence apparently voluntary decisions.'"

"' ... if our brain is preparing to act before we know we are going to act, how do we make a conscious decision to act? The new work, though, shows how 'brain noise' might actually create the opening for free will,' Bengson said."

"'It inserts a random effect that allows us to be freed from simple cause and effect,' he said."

The quotes from Bengson reinforce the idea that it's a deterministic, "simple cause and effect" understanding of human decisions that challenges free will. Yet assimilating human decisions to random fluctuations doesn't seem a whole lot better than determinism. Are we morally responsible for our "brain noise"?

My own guess is that it isn't neuroscientific determinism per se that challenges our ideas about free will and moral responsibility. Instead, it could be that simply describing mental processes in terms of the brain discounts our usual explanations for behavior in terms of people's intentions, beliefs and desires. As argued by philosopher Eddy Nahmias and others, it's this replacement of a mentalistic vocabulary with talk of the brain that seems to cut out the intentional agent, the freely willing "I."

However the details pan out, the findings by Shariff, Greene, and colleagues make a larger point: that important beliefs about ourselves and about others — beliefs with implications for how we make moral and legal decisions — are malleable. And just reading about science is enough to subtly change their shape. As Shariff notes:

"There is a great cost to basing our behavior and social institutions on debunked intuitions. These studies — and historical examples — lend support to the idea that we can change and adapt."

Unfortunately, though, there's no guarantee that the changes induced by learning about science will always be for the better. To take the current case, should we reject retributive punishment? This is a moral question that won't be answered by neuroscience, or — as Greene points out — by their new findings:

"Whether or not this is good is a moral question that goes beyond the scope of the paper. As it happens, I think it is a good thing. I think that punishment is justifiable when it makes our society better off, but that making people suffer — even people who have committed terrible crimes — is not in itself a worthy goal."

In this case, I agree with Greene. But science itself can't tell us all the answers, so it can't — on its own — arbitrate disagreements when it comes to normative questions about how we ought to behave. Yet by giving us insight into the factors behind our own sense of morality, science can tell us about the kinds of creatures we are, and that's an important step in becoming the kinds of creatures we'd like to be.


You can keep up with more of what Tania Lombrozo is thinking on Twitter: @TaniaLombrozo


Tania Lombrozo is a contributor to the NPR blog 13.7: Cosmos & Culture. She is a professor of psychology at the University of California, Berkeley, as well as an affiliate of the Department of Philosophy and a member of the Institute for Cognitive and Brain Sciences. Lombrozo directs the Concepts and Cognition Lab, where she and her students study aspects of human cognition at the intersection of philosophy and psychology, including the drive to explain and its relationship to understanding, various aspects of causal and moral reasoning and all kinds of learning.
