Jump to Introduction & Chronology
Jump back to Previous: Zen Physics - IX. Modern physics + Pride
The Righteous Mind
Chapter Four - Vote for Me (Here’s Why)
I really thought I was going to be able to skip this entire chapter, but there is a connection to The Birth of Tragedy -- or at least to Nietzsche’s presentation of Socrates -- that I can’t ignore.
...
p72 Early in [Plato’s] The Republic, Glaucon (Plato’s brother) challenges Socrates to prove that justice itself -- and not merely the reputation for justice -- leads to happiness...
Glaucon suggests an ethical thought experiment involving someone who can become invisible at will (the ring of Gyges) and how he would act.
p73 Glaucon’s thought experiment implies that people are only virtuous because they fear the consequences of getting caught -- especially the damage to their reputations...
...Socrates approaches... [the challenge] with an analogy: Justice in a man is like justice in a city... [a city-state]. He then argues that a just city is one in which there is harmony, cooperation, and a division of labor between all the castes... All contribute to the common good, and all lament when misfortune happens to any of them.
But in an unjust city, one group’s gain is another’s loss, faction schemes against faction, the powerful exploit the weak, and the city is divided against itself. To make sure the polis doesn’t descend into the chaos of ruthless self-interest, Socrates says that philosophers must rule, for only they will pursue what is truly good, not just what is good for themselves.
This sounds depressingly familiar.
Having gotten his listeners to agree to this picture of a just, harmonious, and happy city, Socrates then argues that exactly these sorts of relationships apply within a just, harmonious, and happy person. If philosophers must rule the happy city, then reason must rule the happy person. And if reason rules, then it cares about what is truly good, not just about the appearance of virtue.
Plato... had a coherent set of beliefs about human nature, and at the core of these beliefs was his faith in the perfectibility of reason. Reason is our original nature, he thought; it was given to us by the gods and instilled in our... heads. Passions often corrupt reason, but if we can learn to control those passions, our God-given rationality will shine forth and guide us to do the right thing, not the popular thing.
As is often the case in moral philosophy, arguments about what we ought to do depend upon assumptions -- often unstated -- about human nature and human psychology. And for Plato, the assumed psychology is just plain wrong. In this chapter I’ll show that reason is not fit to rule; it was designed to seek justification, not truth. I’ll show that Glaucon was right; people care a great deal more about appearance and reputation than about reality. In fact, I’ll praise Glaucon for the rest of the book as the guy who got it right -- the guy who realized that the most important principle for designing an ethical society is to make sure that everyone’s reputation is on the line all the time, so that bad behavior will always bring bad consequences.
p74 William James, one of the founders of American psychology, urged psychologists to take a “functionalist” approach to the mind. That means examining things in terms of what they do, within a larger system. The function of the heart is to pump blood within the circulatory system... Thinking is for doing, he said.
What, then, is the function of moral reasoning? Does it seem to have been shaped, tuned, and crafted (by natural selection) to help us find truth, so that we can know the right way to behave and condemn those who behave wrongly? If you believe that, then you are a rationalist, like Plato, Socrates, and Kohlberg. Or does moral reasoning seem to have been shaped, tuned, and crafted to help us pursue socially strategic goals, such as guarding our reputations and convincing people to support us, or our team, in disputes? If you believe that, then you are a Glauconian.
We Are All Intuitive Politicians
If you see one hundred insects working together toward a common goal, it’s a sure bet they’re siblings. But when you see one hundred people working on a construction site or marching off to war, you’d be astonished if they all turned out to be members of one large family. [The military band of lovers idea from Plato's Symposium is an interesting twist on this notion.] Human beings are the world champions of cooperation beyond kinship, and we do it in large part by creating systems of formal and informal accountability. We’re really good at holding others accountable for their actions, and we’re really skilled at navigating through a world in which others hold us accountable for our own.
p75 Phil Tetlock... defines accountability as the “explicit expectation that one will be called upon to justify one’s beliefs, feelings, or actions to others,” coupled with an expectation that people will reward or punish us based on how well we justify ourselves. When nobody is answerable to anybody, when slackers and cheaters go unpunished, everything falls apart...
Tetlock suggests... we act like intuitive politicians striving to maintain appealing moral identities in front of our multiple constituencies. Rationalists such as Kohlberg and Turiel portrayed children as little scientists who use logic and experimentation to figure out the truth for themselves. When we look at children’s efforts to understand the physical world, the scientific metaphor is apt; kids really are formulating and testing hypotheses, and they really do converge, gradually, on the truth. But in the social world, things are different... Appearance is usually far more important than reality.
...when people know in advance that they’ll have to explain themselves, they think more systematically and self-critically. They are less likely to jump to premature conclusions and more likely to revise their beliefs in response to evidence.
p76 That might be good news for rationalists... [but] Not quite. Tetlock found two very different kinds of careful reasoning. Exploratory thought is an “evenhanded consideration of alternative points of view.” Confirmatory thought is a “one-sided attempt to rationalize a particular point of view.” Accountability increases exploratory thought only when three conditions apply: (1) decision makers learn before forming any opinion that they will be accountable to an audience, (2) the audience’s views are unknown, and (3) they believe the audience is well informed and interested in accuracy.
When all three conditions apply, people do their darndest to figure out the truth, because that’s what the audience wants to hear. But the rest of the time -- which is almost all of the time -- accountability pressures simply increase confirmatory thought. People are trying harder to look right than to be right. Tetlock summarizes it like this:
A central function of thought is making sure that one acts in ways that can be persuasively justified or excused to others. Indeed, the process of considering the justifiability of one’s choices may be so prevalent that decision makers not only search for convincing reasons to make a choice when they must explain that choice to others, they search for reasons to convince themselves that they have made the “right” choice.
...
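Tetlock’s three conditions are mechanical enough to write down as a toy predicate. This is my own sketch, not anything from the book; the argument names are just my labels for his conditions:

```python
# Toy formalization of Tetlock's finding (my sketch, not from the book).
# Exploratory thought needs all three conditions; otherwise accountability
# just intensifies confirmatory thought.

def thinking_style(accountable_before_opinion: bool,
                   audience_view_known: bool,
                   audience_wants_accuracy: bool) -> str:
    """Return the kind of careful reasoning accountability produces."""
    if accountable_before_opinion and not audience_view_known and audience_wants_accuracy:
        return "exploratory"   # evenhanded consideration of alternatives
    return "confirmatory"      # one-sided rationalization of a position

# Almost every real situation fails at least one condition:
print(thinking_style(True, False, True))  # exploratory -- the rare case
print(thinking_style(True, True, True))   # confirmatory -- audience view already known
```

The point of writing it out is how narrow the exploratory branch is: flip one boolean and you are back to looking right instead of being right.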
I can think of so many business and military instances of this. It also helps explain why I like military history: war comes with built-in accountability. In business, appearances are everything until the checks stop clearing.
We Are Obsessed With Polls
...
For a hundred years, psychologists have written about the need to think well of oneself. But Mark Leary, a leading researcher on self-consciousness, thought that it made no evolutionary sense for there to be a deep need for self-esteem. For millions of years, our ancestors’ survival depended upon their ability to get small groups to include them and trust them, so if there is any innate drive here, it should be a drive to get others to think well of us... Leary suggested that self-esteem is more like an internal gauge, a “sociometer” that continuously measures your value as a relationship partner. Whenever the sociometer needle drops, it triggers an alarm and changes our behavior.
...
p78 ... The sociometer is part of the elephant. Because appearing concerned about other people’s opinions makes us look weak, we (like politicians) often deny that we care about public opinion polls. But the fact is that we care a lot about what others think of us. The only people known to have no sociometer are psychopaths.
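Leary’s gauge metaphor is concrete enough to sketch as a little feedback loop. This is my own toy model; the smoothing factor, threshold, and method names are all made up:

```python
# Toy sociometer after Leary's metaphor (my own sketch; numbers are arbitrary).
# It tracks perceived value as a relationship partner and raises an alarm
# when the needle drops, triggering a change in behavior.

class Sociometer:
    def __init__(self, alarm_threshold: float = 0.4):
        self.value = 0.7                   # current perceived relational value
        self.alarm_threshold = alarm_threshold

    def register_feedback(self, signal: float) -> None:
        """Nudge the gauge toward each social signal (0 = rejection, 1 = acceptance)."""
        self.value = 0.8 * self.value + 0.2 * signal
        if self.value < self.alarm_threshold:
            self.repair_reputation()

    def repair_reputation(self) -> None:
        # Stand-in for whatever behavior change the alarm triggers.
        print("alarm: needle dropped -- start courting approval")

meter = Sociometer()
for signal in [0.2, 0.1, 0.0, 0.0]:   # a run of social rejections
    meter.register_feedback(signal)    # alarm fires once the average sinks low enough
```

The key feature, matching Haidt’s point, is that the gauge runs automatically -- it is part of the elephant, not something the rider consults deliberately.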
I suspect that people (like me) who think we don’t care what others think are actually constructing a “special” class of (possibly imaginary) people whose opinions we do care about.
Our In-house Press Secretary Automatically Justifies Everything
...
p81 ...Smart people make really good lawyers and press secretaries, but they are no better than others at finding reasons on the other side. [David] Perkins concluded that “people invest their IQ in buttressing their own case rather than in exploring the entire issue more fully and evenhandedly.”
...
We Lie, Cheat, and Justify So Well That We Honestly Believe We Are Honest
...
p83 [Conclusion from a study by Dan Ariely in the book Predictably Irrational about cheating] People didn’t try to get away with as much as they could. Rather, when Ariely gave them anything like the invisibility ring of Gyges, they cheated only up to the point where they themselves could no longer find a justification that would preserve their belief in their own honesty.
The bottom line is that in lab experiments that give people invisibility combined with plausible deniability [for example, people will accept a larger check than they are owed unless they are asked whether the amount is correct; adding an outright lie to accepting too much money removes the plausible deniability], most people cheat. The press secretary (also known as the inner lawyer) is so good at finding justifications that most of these cheaters leave the experiment as convinced of their own virtue as they were when they walked in.
Of course I now see instances of this all around. There’s a local TV feature called “People Behaving Badly” that I started following online when the host/reporter had a stroke. PBB is like a seminar in what Haidt just wrote about. Just yesterday he did an episode on people illegally double-parking on a busy neighborhood commercial street (Chestnut). One woman was irate and got out of her car to take a smartphone picture of him videoing her. I didn’t read the Facebook comments, but I’m sure they were nasty -- when else do you get a chance to dump on people who have done this to you? She posted her photo and said that the comments were hurtful and unfair because she “was a good person.” Her rider was working overtime to prove to her that her “need” to violate the law and inconvenience other people was something a “good person” would do.
Reasoning (and Google) Can Take You Wherever You Want To Go
...
p85 The difference between can and must is the key to understanding the profound effects of self-interest on reasoning. It’s also the key to understanding many of the strangest beliefs -- in UFO abductions, quack medical treatments, and conspiracy theories.
The social psychologist Tom Gilovich studies the cognitive mechanisms of strange beliefs. [If only I had known this was a possible career option!] His simple formulation is that when we want to believe something, we ask ourselves, “Can I believe it?” Then (as Kuhn and Perkins found), we search for supporting evidence, and if we find even a single piece of pseudo-evidence, we can stop thinking. We now have permission to believe. We have a justification, in case anyone asks.
But why do we want to believe these unlikely things?
In contrast, when we don’t want to believe something, we ask ourselves, “Must I believe it?” Then we search for contrary evidence, and if we find a single reason to doubt the claim, we can dismiss it. You only need one key to unlock the handcuffs of must.
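The can/must asymmetry is really just two early-exit searches with opposite stopping rules, which is easy to see in a toy sketch (mine, with made-up evidence scores):

```python
# Toy model of the "Can I believe it?" / "Must I believe it?" asymmetry
# (my own sketch). Both run an early-exit search over the same evidence;
# only the stopping predicate differs.

from typing import Iterable

def can_i_believe_it(evidence: Iterable[float]) -> bool:
    """Motivated belief: stop at the FIRST supporting scrap (score > 0)."""
    return any(score > 0 for score in evidence)     # one piece of pseudo-evidence is enough

def must_i_believe_it(evidence: Iterable[float]) -> bool:
    """Motivated doubt: stop at the FIRST reason to doubt (score < 0)."""
    return not any(score < 0 for score in evidence)  # one key unlocks the handcuffs

mixed = [0.9, -0.3, 0.7, -0.8]       # ambiguous evidence, as real evidence usually is
print(can_i_believe_it(mixed))       # True  -- found something supportive, stop
print(must_i_believe_it(mixed))      # False -- found a reason to doubt, stop
```

Same evidence, opposite verdicts, depending entirely on which question you walked in asking.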
...
p85 ...Scientists are really good at finding flaws in studies that contradict their own views, but it sometimes happens that evidence accumulates across many studies to the point where scientists must change their minds... and it’s part of the accountability system of science -- you’d look foolish clinging to discredited theories. But for nonscientists, there is no such thing as a study you must believe. It’s always possible to question methods, find an alternative interpretation of the data, or, if all else fails, question the honesty or ideology of the researchers.
And now that we all have access to search engines on our cell phones, we can call up a team of supportive scientists for almost any conclusion twenty-four hours a day. Whatever you want to believe about the causes of global warming or whether a fetus can feel pain, just Google your belief. You’ll find partisan websites summarizing and sometimes distorting relevant scientific studies. Science is a smorgasbord, and Google will guide you to the study that’s right for you.
Still doesn’t address why we want to believe something.
We Can Believe Almost Anything That Supports Our Team
Many political scientists used to assume that people vote selfishly, choosing the candidate or policy that will benefit them the most. But decades of research on public opinion have led to the conclusion that self-interest is a weak predictor of policy preferences. Parents of children in public school are not more supportive of government aid to schools than other citizens; young men subject to the draft are not more opposed to military escalation than men too old to be drafted; and people who lack health insurance are not more likely to support government-issued health insurance than people covered by insurance.
p86 Rather, people care about their groups, whether those be racial, regional, religious, or political. The political scientist Don Kinder summarizes the findings like this: “In matters of public opinion, citizens seem to be asking themselves not ‘What’s in it for me?’ but rather ‘What’s in it for my group?’” Political opinions function as “badges of social membership.” They’re like the array of bumper stickers people put on their cars showing the political causes... they support. Our politics is groupish, not selfish.
Again, this works well with the notion that we are primate pack animals.
This also suggests a possible explanation for strange beliefs. Cults often have unusual clothing or body-ornamentation habits (the hijab, the bonnet, the cross on a necklace) that signal membership in the cult. Unusual beliefs (not just zombie Jesus, but chemtrails or the nonexistence of global warming) serve the same function of isolating and identifying members of the cult pack.
...Liberals and conservatives actually move further apart when they read about research on whether the death penalty deters crime, or when they rate the quality of arguments made by candidates in a presidential debate, or when they evaluate arguments about affirmative action or gun control.
...
[I’m skipping another clever experiment with political partisans being shown slides while under fMRI observation. But here are some conclusions] p87 ...The threatening information (their own candidate’s hypocrisy) immediately activated a network of emotion-related brain areas -- areas associated with negative emotion and responses to punishment. The handcuffs (of “Must I believe it?”) hurt.
Some of these areas are known to play a role in reasoning, but there was no increase in activity in the dorsolateral prefrontal cortex (dlPFC). The dlPFC is the main area for cool reasoning tasks. Whatever thinking partisans were doing, it was not the kind of objective weighing or calculating that the dlPFC is known for.
p88 Once Westen released them from the threat, [by showing supportive information] the ventral striatum started humming -- that’s one of the brain’s major reward centers. All animal brains are designed to create flashes of pleasure when the animal does something important for its survival, and small pulses of the neurotransmitter dopamine in the ventral striatum... are where these good feelings are manufactured. Heroin and cocaine are addictive because they artificially trigger this dopamine response...
Westen found that partisans escaping from handcuffs (by thinking about the final slide, which restored their confidence in their candidate) got a little hit of that dopamine. And if this is true, then it would explain why extreme partisans are so stubborn, closed-minded, and committed to beliefs that often seem bizarre or paranoid... partisans may be simply unable to stop believing weird things. The partisan brain has been reinforced so many times for performing mental contortions that free it from unwanted beliefs. Extreme partisanship may be literally addictive.
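If you squint, that last paragraph is describing reinforcement learning: each successful escape from an unwanted belief pays out a reward, and the rationalizing habit gets stronger. A toy update rule, purely my own analogy (the learning rate and numbers are arbitrary):

```python
# Toy reinforcement-learning reading of Westen's result (my analogy, not Haidt's).
# Each time the partisan rationalizes away threatening information, a
# dopamine-like reward strengthens the habit of doing so.

def reinforce(strength: float, reward: float, learning_rate: float = 0.3) -> float:
    """Simple error-driven update: move habit strength toward the reward."""
    return strength + learning_rate * (reward - strength)

habit = 0.1                      # initial tendency to rationalize
for _ in range(10):              # ten successful escapes from the handcuffs of "must"
    habit = reinforce(habit, reward=1.0)
print(f"habit strength after 10 escapes: {habit:.2f}")   # ~0.97, near its ceiling
```

Run it and the habit saturates fast, which is at least a cartoon of why the mental contortions become second nature.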
The Rationalist Delusion
Webster’s Third New International Dictionary defines delusion as “a false conception and persistent belief unconquerable by reason in something that has no existence in fact.” As an intuitionist, I’d say that the worship of reason is itself an illustration of one of the most long-lived delusions in Western history: the rationalist delusion. It’s the idea that reasoning is our most noble attribute, one that makes us like the gods (for Plato) or that brings us beyond the “delusion” of believing in gods (for the New Atheists). The rationalist delusion is not just a claim about human nature. It’s also a claim that the rational caste (philosophers or scientists) should have more power, and it usually comes along with a utopian program for raising more rational children.
...
p90 I’m not saying we should all stop reasoning and go with our gut feelings... Rather, what I’m saying is that we must be wary of any individual’s ability to reason. We should see each individual as being limited, like a neuron. A neuron is really good at one thing: summing up the stimulation coming into its dendrites to “decide” whether to fire a pulse along its axon. A neuron by itself isn’t very smart. But if you put neurons together in the right way you get a brain; you get an emergent system that is smarter and more flexible than a single neuron.
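Haidt’s neuron is essentially the classic threshold unit from introductory neural-network material, which makes it easy to sketch (the weights and threshold below are arbitrary):

```python
# A neuron as Haidt describes it: sum the stimulation arriving at the
# dendrites and "decide" whether to fire. (Minimal sketch; numbers arbitrary.)

def neuron(inputs: list[float], weights: list[float], threshold: float = 1.0) -> int:
    """Fire (1) if the weighted sum of inputs reaches the threshold, else 0."""
    stimulation = sum(x * w for x, w in zip(inputs, weights))
    return 1 if stimulation >= threshold else 0

print(neuron([0.5, 0.9], [1.0, 0.8]))   # -> 1: 0.5 + 0.72 = 1.22 crosses threshold
print(neuron([0.2, 0.3], [1.0, 0.8]))   # -> 0: 0.2 + 0.24 = 0.44 stays quiet
```

Not very smart on its own, as Haidt says -- the interesting behavior only emerges when many such units are wired together, which is exactly the move he makes with individual reasoners next.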
In the same way, each individual reasoner is really good at one thing: finding evidence to support the position he or she already holds, usually for intuitive reasons. We should not expect individuals to produce good, open-minded, truth-seeking reasoning, particularly when self-interest or reputational concerns are in play. But if you put individuals together in the right way, such that some individuals can use their reasoning powers to disconfirm the claims of others, and all individuals feel some common bond or shared fate that allows them to interact civilly, you can create a group that ends up producing good reasoning as an emergent property of the social system. This is why it’s so important to have intellectual and ideological diversity within any group or institution whose goal is to find truth (such as an intelligence agency or a community of scientists) or to produce good public policy (such as a legislature or advisory board).
And if our goal is to produce good behavior, not just good thinking, then it’s even more important to reject rationalism and embrace intuitionism. Nobody is ever going to invent an ethics class that makes people behave ethically after they step out of the classroom. Classes are for riders, and riders are just going to use their new knowledge to serve their elephants more effectively. If you want to make people behave more ethically, there are two ways you can go. You can change the elephant, which takes a long time and is hard to do. Or, to borrow an idea from the book Switch, by Chip Heath and Dan Heath, you can change the path the elephant and rider find themselves traveling on. You can make minor and inexpensive tweaks to the environment, which can produce big increases in ethical behavior. You can hire Glaucon as a consultant and ask him how to design institutions in which real human beings, always concerned about their reputations, will behave more ethically.
I wish I could think of a counterargument to this, but I really can’t. As I said before, I see confirmation everywhere. But, still, it isn’t clear to me why people veer off in the directions they do. I know several (otherwise reasonable) people who hold some of these strange ideas that, among other things, violate Occam’s Razor in that they prefer a more complex -- not to say convoluted -- explanation over the more obvious and simple one. Haidt explains why you would continue on this odd path once you start, but not why you would start down it in the first place. Unless you know in advance, from past experience, that you are going to get a world of dopamine rewards for believing something counter-intuitive. (“Counter-intuitive” is, perhaps, not the best choice of words in this context.)