In the Charles Dickens story A Christmas Carol, the wealthy miser Ebenezer Scrooge has a magical, life-changing epiphany. Thanks to visits from a series of ghosts, his eyes are opened to how his behavior affects other people—and he goes from a selfish grump to a generous benefactor overnight.
Scrooge’s transformation comes down to knowledge. But do people really want to know how their actions affect others? Generosity has its own rewards, but it can also demand the sacrifice of time, money, effort and comfort. That may explain why willful ignorance, the intentional avoidance of information about the potential harm of one’s actions, is so common. Despite the plethora of scientific evidence for climate change, for instance, many people still avoid engaging with facts about global warming. Nor do they always want to know about the harsh living conditions of farm animals. And consumers often ignore the ethical origins of the products they purchase.
As behavioral scientists, we wanted to understand just how prevalent willful ignorance is—as well as why people engage in it. Together with our colleagues, we pooled data from multiple research projects that collectively involved more than 6,000 people. We discovered that willful ignorance is common and harmful, with 40 percent of people choosing “not to know” the consequences of their actions to free themselves of guilt while maximizing their own earnings. But we also found about 40 percent of people are altruistic: they seek out rather than avoid information about the consequences of their actions to increase the benefits to others.
Specifically, we analyzed data from 22 previously published studies on willful ignorance. This approach gave us a much larger, more comprehensive look at this phenomenon than past research. Although the specific experiments varied, most involved putting participants into pairs. People took part in the study either online or in person in a laboratory. Regardless of the setup, participants did not interact and remained anonymous to one another. The researchers, meanwhile, knew how many people made a certain decision but could not pinpoint who chose what.
In the experiments, researchers asked one member of each pair to choose between two options. The selection would determine the earnings for themselves and their partner. These decisions were made in one of two settings. In the transparent setting, decision-makers had information about how their choice would affect themselves and their partner. In the ambiguous setting, decision-makers knew how their choice would matter for themselves but not for their partner—though they could request that insight.
For example, participants in several studies had to decide between receiving either $5 or $6. In the transparent setting, if they chose $5 for themselves, they knew their partner would also receive $5. If, however, they chose $6 for themselves, they knew their partner would receive only $1 in return.
In the ambiguous setting, the payout for partners worked differently. This time, there were two possible scenarios. In one, if the decision-maker selected $6 for themselves, their partner would receive $1, and if the decision-maker chose $5, their partner would receive $5 (just like the transparent case). But in a second scenario, the decision-maker could pick $6 and their partner would receive $5, or the decision-maker could select $5 and their partner would receive $1.
The decision-maker knew these two systems existed and understood how to receive a higher payout for themselves—but they were not initially aware of which scenario they were in. Interestingly, the decision-maker had the opportunity to resolve that ambiguity: by clicking a button, they could learn which payout scheme would apply to their decision. This option to learn more offered scientists a way of assessing willful ignorance.
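For readers who want to see the structure laid bare, the setup above can be sketched in a few lines of code. This is purely our illustration—the scenario names and the function are hypothetical, not part of the original studies:

```python
# Illustrative sketch of the ambiguous-setting payoff schemes described above.
# Scenario names and the function below are our own, not from the studies.
import random

# Each scheme maps the decision-maker's choice ($6 or $5) to
# a (self, partner) payout pair.
SCENARIO_A = {6: (6, 1), 5: (5, 5)}  # taking $6 costs the partner
SCENARIO_B = {6: (6, 5), 5: (5, 1)}  # taking $6 happens to help the partner

def run_ambiguous_trial(choice, reveal):
    """One decision in the ambiguous setting.

    The scenario is assigned at random; the decision-maker learns
    which one applies only if they choose to reveal it.
    """
    scenario = random.choice([SCENARIO_A, SCENARIO_B])
    known = scenario if reveal else None  # the "click to learn" button
    return scenario[choice], known

# A decision-maker who stays ignorant and picks $6 leaves their
# partner with either $1 or $5 -- and never finds out which.
payout, info = run_ambiguous_trial(choice=6, reveal=False)
```

The key point the sketch makes concrete: choosing $6 always maximizes the decision-maker's own payout, so staying ignorant carries no personal cost—only the possibility of unknowingly shortchanging one's partner.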
Across all studies, we found that when participants were told the consequences of their choices—the transparent setting—the majority (55 percent) chose the altruistic option. That is, they gave up a part of their earnings to share equally with their partner. The remaining 45 percent knowingly kept a bigger payout at a cost to their partner.
In the ambiguous setting, however, 40 percent of participants chose to remain ignorant. Not knowing freed them to be selfish: 60 percent of people in the ignorant group chose a higher personal payout in scenarios where this choice came at the expense of their partner. Among those who requested more information, 36 percent knowingly kept a higher payout at a cost to their partner.
That means the overall balance tipped toward selfishness when participants had the option to avoid information. Only 39 percent of people in the ambiguous setting made the choice that ultimately benefited their partner—a significant drop from 55 percent in the transparent condition.
But how do we know if ignorance in the ambiguous setting was willful? Could it be that some people avoided information unintentionally? To understand this point, we conducted a second analysis focused on what motivates people to seek information.
In this analysis we looked at how people who obtained additional information behaved in comparison with those who were given information. We found that people who chose to receive information in the ambiguous setting were seven percentage points more likely to make the altruistic choice than were people in the transparent setting. In other words, our analyses identified some truly altruistic actors: people who sought out information and then made a decision that benefited their partner, even at a cost to themselves. That means information-seeking is at least partially motivated by the desire to do right. By the same token, the finding also suggests choosing ignorance has value for people who want an excuse to be selfish.
We cannot rule out that some people failed to click the button for more information unintentionally. But if confusion, laziness or even indifference were the only drivers of ignorance, we would not have observed any real difference in our comparison. We found that seeking information was linked to a clear motivation: these truly altruistic individuals wanted to benefit their partner. As such, ignorance is at least partially driven by the desire to shield oneself from one’s own judgment.
Our work suggests some altruistic behaviors in life are done because people feel pressure to do what is expected of them. When the consequences of choices are made clear, people may feel obliged to make a small sacrifice and be generous to others. But when given a chance, people may want to ignore the consequences of their actions. Ignorance shields people from knowing how their actions harm others and makes them feel less like a bad person.
As such, our findings hint at ways to combat willful ignorance. In the studies we analyzed, decision-making occurred within a moral framing: you could benefit yourself at the expense of your partner. This presentation is fertile ground for willful ignorance because it poses a threat to a person’s self-image, heightening the sense that—if you know what’s really going on—you will have to make harder choices to be a good person.
If we can avoid putting a strong moral emphasis on decisions, it may make people feel less threatened and, as a result, less willfully ignorant. Other research groups have found promising ways to do this. For instance, we can present choices in ways that highlight ethical options first, such as making vegetarian menus the default, while still allowing people to opt out to choose meat, as part of an effort to encourage sustainable food choices. Or we could encourage people to think more positively about good deeds rather than guilt-trip them for what they have failed to do. Highlighting recent global achievements, such as healing the ozone layer, for instance, can encourage people to keep up the good work rather than feel like the battle is lost and that it’s all gloom and doom. We may not have Dickensian ghosts to guide us—but there are still steps we can take to encourage selflessness and generosity in ourselves and others.
Are you a scientist who specializes in neuroscience, cognitive science or psychology? And have you read a recent peer-reviewed paper that you would like to write about for Mind Matters? Please send suggestions to Scientific American’s Mind Matters editor Daisy Yuhas at pitchmindmatters@gmail.com.
This is an opinion and analysis article, and the views expressed by the author or authors are not necessarily those of Scientific American.