Deep Pragmatism - A Conversation with Joshua D. Greene [8.30.13]
Joshua D. Greene is the John and Ruth Hazel Associate Professor of the
Social Sciences and the director of the Moral Cognition Laboratory
in the Department of Psychology, Harvard University. He studies
the psychology and neuroscience of morality, focusing on the
interplay between emotion and reasoning in moral decision-making.
His broader interests cluster around the intersection of
philosophy, psychology, and neuroscience. He is the author
of Moral Tribes: Emotion, Reason, and the Gap Between Us and Them.
Edge Bio Page
Transcript - For the last 20 years I have been trying to understand morality. You can think of morality as having two basic questions. One is a descriptive scientific question: How does it work? How does moral thinking work? Why are we the kinds of moral creatures that we are? Why are we immoral to the extent that we're immoral? And then the further normative question: How can we make better moral decisions? Is there any fact of the matter about what's right or wrong at all? If so, what is it? Is there a theory that organizes, that explains what's right or what's wrong? These are the questions that have occupied me really my whole adult life, both as a philosopher and as a scientist.
What I'd like to do today is present two ideas about morality on two different levels. One is: What is the structure of moral problems? Are there different kinds of moral problems? And the other is: What is the structure of moral thinking? Are there different kinds of moral thinking? And then put those together to get at a kind of normative idea, which is that the key to moral progress is to match the right kind of thinking with the right kind of problem.
Morality is fundamentally about the problem of cooperation. There are things that we can accomplish together that we can't accomplish as individuals. That's why it makes sense for individuals to come together, whether as humans, as cells inside a body, or as wolves forming a pack: individuals can gain more by working together than they can separately; teamwork is effective. But teamwork is also problematic, because there is always the temptation to pull for yourself instead of the team; that's the fundamental problem of cooperation.
The best illustration of the problem of cooperation comes from the ecologist Garrett Hardin, who described what he called the "tragedy of the commons." This is a parable that he used to illustrate the problem of overpopulation, but it really applies to any cooperation problem. The story goes like this: You have a bunch of herders who are raising sheep on a common pasture, and each of them faces the following question every so often: Should I add another animal to my herd? A rational herder thinks like this: If I add another animal to my herd, then I get more money when I take that animal to market. That's the upside, and it's a pretty good upside. What's the downside? That animal has to be supported, but since we're sharing this common pasture, the cost is shared by everybody, while the benefit, since it's my herd, comes entirely to me. So I'll add another animal to my herd, and then another one, and another one.
And it's not just that herder. All the herders have the same set of incentives. Then something happens. Everybody has added so many animals to their herds that the commons—the pasture—can't support them all. The animals eat all the grass, then there's no food left, all the animals die, and all the people are left hungry; and that's the tragedy of the commons. It's essentially about the tension between individual rationality and collective rationality; what's good for me, versus what's good for us. What's good for us is for everybody to limit the size of their herd so that you could have a sustainable future in which everybody gets to share the commons and there's enough to go around for everybody. But what's good for me is to have everybody else limit their herds, and I add more and more animals to my herd, and I get more for myself. That's the fundamental problem of collaboration, that is, pulling for yourself versus doing what's good for the group.
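To make that incentive concrete, here is a toy numerical sketch in Python; the hundred-animal carrying capacity, the ten-herder scenario in the comments, and the simple proportional overgrazing cost are illustrative assumptions, not part of Hardin's parable or Greene's talk.

```python
# Toy model of the commons; the numbers and the cost function are
# illustrative assumptions, not Hardin's (or Greene's) formulation.

CAPACITY = 100          # assumed: the pasture can sustain 100 animals in total
VALUE_PER_ANIMAL = 1.0  # assumed: market value of a well-fed animal

def my_payoff(my_animals, others_animals):
    """Value of my herd when the cost of overgrazing is spread over everyone."""
    total = my_animals + others_animals
    overshoot = max(0, total - CAPACITY)
    # Every animal is worth less as the commons is overgrazed; that loss is
    # shared by all the herders, but my herd's value accrues only to me.
    value_per_head = VALUE_PER_ANIMAL * max(0.0, 1 - overshoot / CAPACITY)
    return my_animals * value_per_head

# Ten herders with ten animals each: the commons is at capacity, sustainable.
print(my_payoff(10, 90))    # 10.0

# I add one more animal while everyone else holds back: I come out ahead.
print(my_payoff(11, 90))    # roughly 10.89

# Every herder reasons the same way and grazes 15 animals: everyone loses.
print(my_payoff(15, 135))   # 7.5
```

In this toy setup, adding an animal always raises my own payoff a little while lowering everyone's, which is the tension between individual and collective rationality described above.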
How do you solve that? Well, morality is essentially nature's solution to this problem. We have some concern for the well-being of others, that is, I'm going to limit the size of my herd because adding to it would be bad for other people, and we have a higher-level concern whereby we're disdainful towards people who don't do that. If you see a herder who's adding more and more animals to her herd, you say, "Hey, come on. You're being selfish here." That, essentially, is nature's solution to the basic problem of cooperation.
There are different ways of being cooperative. There are different ways of solving that problem. On one extreme we have the Communist solution, which is to say that not only are we going to have a common pasture, we'll just have a common herd; everybody shares everything. There's no incentive for anyone to pull for themselves instead of the group, because everything is just shared. Another solution to that problem is to be completely individualistic and to say, "Not only are we not going to have a shared herd, we're not going to have a shared pasture. We're going to divide up the commons into different plots and everybody gets their own little piece of it, and everybody takes care of themselves." Neither of these is inherently moral or immoral. They're different terms on which a group can be cooperative—one more collectivist and the other more individualist.
There are other sorts of questions that you might face. Suppose somebody tries to take one of my animals. Am I allowed to defend myself? Am I allowed to kill them? Am I allowed to carry an assault weapon to defend my herd against people who might take it? Some tribes will say, no, you're not. There are limits to what you can do to defend your herd. Others will say you can do whatever you want, as long as you're defending what's yours. Are we going to have some kind of health insurance for our herds? If someone's animals get sick, do the rest of us come and pay for them or pay for part of their care? And so forth. There are different ways for groups to be cooperative, and they can work fine separately, but what happens when you have different groups that come together?
Imagine the following scenario: You have two different tribes, your collectivist tribe over here, where everything's in common, and your individualist tribe over there. Imagine these tribes not only have different ways of cooperating, but they rally around different gods, different leaders, different holy texts that tell them how they should live—that you're not allowed to sing on Wednesdays in this group, and in this group over here women are allowed to be herders, but in this group over there, they're not; different ways of life; different ways of organizing society. Imagine these societies existing separately, separated by a forest that burns down. The rains come, and then suddenly you have a nice lovely pasture, and both tribes move in.
Now the question is: How are they going to do this? We have different tribes that are cooperative in different ways. Are they going to be individualistic? Are they going to be collectivists? Are they going to pray to this god? Are they going to pray to that god? Are they going to be allowed to have assault weapons or not allowed to have assault weapons? That's the fundamental problem of the modern world—that basic morality solves the tragedy of the commons, but it doesn't solve what I call the "tragedy of common sense morality." Each moral tribe has its own sense of what's right or wrong—a sense of how people ought to get along with each other and treat each other—but the common senses of the different tribes are different. That's the fundamental moral problem.
First, there are really two different kinds of cooperation problems. One is getting individuals within a group to cooperate, and the other is getting groups that are separately cooperative to cooperate with each other. One is the basic moral problem, and that's the problem that our brains were designed to solve. Then you have this more complex modern problem, where it's not about getting individuals to get along but about getting groups with different moralities, essentially, to get along with each other. That's the first idea—that morality and moral problems are not one kind, they really come in two fundamentally different kinds, which we can think of as me versus us—individuals versus the group, and us versus them—our values versus their values; our interests versus their interests. That's idea number one.
Idea number two is there are different kinds of thinking. Think of the human mind and the human brain as being like a camera—a digital camera that has, on the one hand, automatic settings—point-and-shoot. You have your landscape or your portrait setting, and it's point-and-shoot and it works. Or you can put your camera in manual mode, where you can adjust everything. Why would a camera have those two different ways of taking photos? The idea is that the automatic settings are very efficient; just point-and-shoot. If you want to take a picture of somebody in indoor light from not too far away, you put it in portrait and you get a pretty good shot with one click of a button. It's very efficient, but it's not very flexible; it's good for that one thing. With manual mode—where you can adjust the F-stop and everything else yourself—you can do anything, but it takes some time, it takes some expertise, you have to stop and slow down, and you have to know what you're doing. What's nice about that is that it's very flexible. It's not very efficient, but you can do anything with it, whatever kind of photographic situation you're in, or whatever you're trying to achieve, if you know how to work the manual settings you can get the photograph that you want.
On a camera it's nice to have both of these things because they
allow you to navigate the tradeoff between efficiency and
flexibility. The idea is that the human brain has essentially the
same design. We have automatic settings, and our automatic settings
are our gut reactions; they are emotional responses. They're the
precompiled responses that allow us to deal with the typical
everyday situations that we face. And it doesn't necessarily mean
they're hardwired—that they come from our genes. These can be things
that are shaped by our genes, but they're also shaped by our
cultural experiences and our individual trial-and-error learning.
It's the things that we've had some kind of experience with—either
our own experience, or our culture has had experience with this, or
our ancestors genetically have had experience with this and we're
benefiting from that wisdom in the kind of point-and-shoot gut
reaction that says, "Hey, that's a snake, you should be afraid of
that," or "You should take a step back."
Manual mode is our capacity for reasoning: we don't just have to go with our gut reactions; we can stop and think, and we can say, "Well, does this really make sense?" This is clearest when it comes to eating. There's a really nice experiment that was done by Baba Shiv where they had people come into the lab—they were told they were coming in for a memory experiment—and they were told to remember a string of numbers and then go down the hall, where they were going to have to repeat the numbers back to somebody else. And they said, "By the way, we have some snacks for you. You'll see a little cart with some snacks on it. On your way down the hall, you can pick up a snack." And on the cart were two kinds of snacks: There was fruit salad and there was chocolate cake. Of course—at least for people who were watching their weight, which is a lot of people—the chocolate cake would be much more enjoyable now, but in the long run people would rather have eaten the fruit salad instead of the chocolate cake. When you think about your waistline in a bathing suit this coming summer, that's the larger goal off in the future.
They've got chocolate cake and they have fruit salad, and they ask people to remember different strings of numbers; some were very short—they were easy to remember; some were long—mentally taxing. What they found is that when people had to remember the longer numbers, they were more likely to choose chocolate cake, especially if they were impulsive people. What's going on? Well, the idea is that having to remember those numbers is occupying your manual mode. It's occupying your capacity for explicit conscious thinking. Daniel Kahneman calls this "thinking fast and slow." It's occupying your capacity for slow, more deliberative thought. Your slow-thinking brain—your manual mode—is taken up with these numbers, and then you're faced with this choice between the immediately gratifying reward—the chocolate cake—and the long-term reward, which is maintaining your weight the way you'd like. When you've got manual mode occupied, you'll go more with the impulse.
What this experiment nicely illustrates is this familiar tension between the automatic impulse and the long-term goal. We see this kind of structure all over the place. It's not just when it comes to things like eating food. It comes up, for example, in our reactions to members of other groups. If you show typical white people, at least in the United States, pictures of black men very quickly, you'll get a flash of a signal in the amygdala, a part of the brain that acts as a kind of early alarm system. If it's a very quick exposure, then that's all you see, but if you let people look at the pictures longer you see a dampening down of that response, and increased activity in another part of the brain called the dorsolateral prefrontal cortex, which is really the seat of manual mode. The dorsolateral prefrontal cortex sits on the outside of the front of the brain. The idea is that your automatic settings say, "Chocolate cake now, great," and your manual mode says, "No, no, no, it's not worth it. Have the fruit salad and be thinner later." So, two different kinds of thinking, which you might call fast and slow.
Earlier I said there are two different kinds of moral problems. There is the problem of getting individuals to get along in a group—the tragedy of the commons, me versus us—and there's the problem of getting groups to get along—the problem of us versus them. What I've been thinking about is how these two things fit together. The hypothesis is that there's a normative match, that is, our automatic settings, our gut reactions, are very good at solving the first problem—the tragedy of the commons. They're not so good at solving the second problem. If you want to make moral progress, what you need to do is match them up in the right way. When it comes to treating other people well, when it comes to me versus us—am I going to be selfish or am I going to do the nice thing for the other people in my world? For most people, their gut reactions about that are pretty good. It doesn't mean that people are not selfish; we all know that people can be very selfish. But people at least have moral instincts that make them pretty good in that way. Let me give you an example of this.
I did a set of experiments with David Rand and Martin Nowak in
which we basically had people face the tragedy of the commons in the
lab. This is using what's called the "public goods game." The way
the public goods game works is as follows: Everybody gets a sum of
money—let's say there are four people—then everybody can put their
money into a common pool and the experimenter will double the amount
of money in the pool, and it gets divided equally among
everybody.
What do you do? If you're selfish, you keep your money. Why? Because if everybody else puts in, their contributions get doubled, you still get your share of what everybody else put in, and you keep your original stake on top of it. That's the selfish thing to do. The best thing to do for the group—the "us" thing to do—is for everybody to put all their money in. That way, you maximize the amount that gets doubled and everybody comes away better off.
If you want to maximize the group payoff, everybody puts in, and if
you want to maximize your own payoff, you don't put anything in.
It's just like the herders in Hardin's tragedy of the commons—what's
best for the group is for everybody to limit their herds, but what's
best for you is to just take more and more animals, take more and
more of the commons for yourself. What do people do? It turns out it
depends on how they're thinking.
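To make the payoff logic concrete, here is a minimal sketch of the game in Python. The four players, the doubling of the pool, and the equal split are as described above; the $10 starting stake per player is an assumed, illustrative number.

```python
# Public goods game payoffs as described above: four players, contributions
# to the common pool are doubled and split equally. The $10 starting stake
# is an assumed, illustrative number.

ENDOWMENT = 10.0   # assumed starting stake per player
MULTIPLIER = 2.0   # the experimenter doubles the pool (as described)

def payoffs(contributions):
    """Each player's final payoff, given what each one contributed."""
    pool = sum(contributions) * MULTIPLIER
    share = pool / len(contributions)
    return [ENDOWMENT - c + share for c in contributions]

# Everybody contributes everything: the pool is maximized, everyone doubles up.
print(payoffs([10, 10, 10, 10]))   # [20.0, 20.0, 20.0, 20.0]

# One player keeps their money while the rest contribute: the free rider
# does best of all, and the cooperators do worse than under full cooperation.
print(payoffs([0, 10, 10, 10]))    # [25.0, 15.0, 15.0, 15.0]

# Nobody contributes: everyone just walks away with the original stake.
print(payoffs([0, 0, 0, 0]))       # [10.0, 10.0, 10.0, 10.0]
```

With these parameters, every dollar you contribute comes back to you as only fifty cents, so keeping your money is always individually better, while full contribution is best for the group as a whole.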
We looked at people's reaction times while they were making these judgments, and what we found is maybe surprising: in general, the faster people made their decisions, the more likely they were to do the cooperative thing. When we put people under time pressure, they ended up putting more money into the common pool. When we asked people to think of a time when they trusted their intuitions and it worked out well, they put more money in. When we asked them to think of a time when deliberate, careful reasoning worked out well, they put in less. What all three of these kinds of experiments point to is the conclusion I stated before: when it comes to the tragedy of the commons, we have pretty good instincts; we have a gut reaction that says, "Okay, do the cooperative thing."
This is not true for everybody in all circumstances. Other experiments and analyses we've done suggest that it depends on the kind of culture in which one has grown up. But at least for some people a lot of the time, the first thought is to be cooperative. That suggests that we do, indeed, have these kinds of instincts. Whether they're genetic instincts, or culturally honed instincts, or instincts honed from one's personal experience, the point-and-shoot automatic settings say, "Go ahead and do the cooperative thing." It's manual mode thinking that says, "Wait a second, hold on, I could really get screwed here. Maybe I shouldn't do this." That supports the first point: there's a nice fit between going with your gut (automatic settings, point-and-shoot, fast thinking) and solving that basic cooperation problem of getting a bunch of individuals to get along with each other.
What about the other problem—the problem of common sense morality
when it's us versus them? This is where I think our instincts get us
in trouble.
You also see this in the indifference that we have towards faraway
strangers. Many years ago the philosopher Peter Singer posed the
following hypothetical: He said, suppose you're walking by a pond
and there's a child who's drowning there, and you can wade in and
save this child's life, but if
you do this you're going to ruin your fancy shoes and your fancy
suit. It might cost you $500, $1,000 or more, depending on how fancy
you are, to save this child's life. Should you do it, or is it okay
to not do it?
Almost everybody says, "No, of course, you'd be a monster if you
let this child drown because you were thinking about your clothes."
Peter Singer says, okay, well, what about this question: There are
children on the other side of the world in the Horn of Africa, let's
say, who desperately need food, water, and medicine; not to mention
having an education, and some kind of political representation, and
so on and so forth. Just to help satisfy some basic needs, keep
somebody alive for a better day, let's say you can donate $500,
$1,000, something that's substantial, but that you can afford; the
kind of money you might spend on a nice set of clothes, and by donating
it you could save somebody's life. Do you have a moral obligation to do
that? Most people's thought is, well, it's nice if you do that, but
you're not a monster if you don't, and we all pass up chances like that
all the time.
There are two different responses to this problem. One is to say,
well, these are just very different. Here, you have a real moral
obligation—saving the child who's drowning. Over there, it's a nice
thing to do, but you don't have to do it; there's no strong moral
demand there. There must be some good explanation for why you have an
obligation here but not there.
That's one possibility.
Another possibility is that what we're seeing here are the
limitations of intuitive human morality; that we evolved in a world
in which we didn't deal with people on the other side of the
world—the world was our group. We're the good guys. They're the bad
guys. The people on the other side of the hill—they're the
competition. We have heartstrings that you can tug, but you can't
tug them from very far away. There's not necessarily a moral reason
why we're like this, it's a tribal reason. We're designed to be good
to the people within our group to solve the tragedy of the commons,
but we're not designed for the tragedy of common sense morality.
We're not designed to find a good solution between our well-being
and their well-being. We're really about me and about us, but we're
not so much about them.
Is that right? First, let me tell you about an experiment. This was
done as an undergraduate thesis project with a wonderful student
named Jay Musen. We asked people a whole bunch of questions about
helping people, and one of the starkest comparisons goes like this:
In one version, you're traveling on vacation in a poor country. You
have this nice cottage up in the mountains overlooking the coast and
the sea, and then a terrible typhoon hits and there's widespread
devastation, and there are people without food or medicine, there are
sanitary problems, and people are getting sick, and you can help.
There's already a relief effort on the ground, and what they need is
just money to provide supplies, and you can make a donation. You've
got your credit card, and you can help these people. And we ask
people, do you have an obligation to help? And a majority, about
60-something percent of the people said yes, you have an obligation
to do something to help these people down on the coast below that
you can see.
Then we asked a separate group of people the following question:
Suppose your friend is vacationing in this faraway place, and we
describe the situation exactly the same way, except instead of you
there, it's your friend who's there. Your friend has a smart phone
and can show you everything that's going on; everything your friend
sees, you can see. The best way to help is to donate money, and you
can do that from home just as well. Do you have an obligation to
help? And here, about half as many people, about 30 percent, say that
you have an obligation to help.
What's nice about this comparison is that it cleans up a lot of the
mess in Singer's original hypothetical. In the original you can say
that you're in a better position to help in one case but not in the
other, or that there are many victims in one case and only one in the
other. We've equalized a lot of those things here, and you still see a
big difference. It's possible, maybe, that what's really
right is to say no, you really have an obligation to help when
you're standing there, and you really don't have an obligation when
you're at home. That's one interpretation. I can't prove that it's
wrong.
Another, in my view, more plausible possibility is that we're
seeing the limitations of our moral instincts. Again, our moral
heartstrings, so to speak, were designed to be tugged, but
not from very far away. But it's not for a moral reason. It's not
because it's good for us to be that way. It's because caring about
ourselves and our small little tribal group helped us survive, and
caring about the other groups—the competition—didn't help us
survive. If anything, we should have negative attitudes towards
them. We're competing with them for resources.
Bringing us back to our two problems, what this suggests is that if
you agree that the well-being, the happiness of people on the other
side of the world is just as important as the happiness of people
here—we may be partial to ourselves, and to our family, and to our
friends, but if you don't think that objectively we're any better or
more important or that our lives are any more valuable than other
people's lives—then, we can't trust our instincts about this, and we
have to shift ourselves into manual mode.
To bring this all full circle—two kinds of problems, two kinds of
thinking. We've got individuals getting together as groups—me versus
us—and there we want to trust those gut reactions, we want to trust
the gut reactions that say, "Be nice, be cooperative, put your money
into the pool, and help the group out," and we're pretty good at
that. When it comes to us versus them—to distrusting people who are
different from us, who are members of other racial groups or ethnic
groups, when it comes to helping people who really could benefit
enormously from our resources but who don't have a kind of personal
connection to us, us versus them, their interests versus our
interests, or their interests versus my interests, or their values
versus my values—our instincts may not be so reliable; that's when
we need to shift into manual mode.
There is a philosophy that accords with this, and that philosophy
has a terrible name; it's known as utilitarianism. The idea behind
utilitarianism is that what really matters is the quality of
people's lives—people's suffering, people's happiness—how their
experience ultimately goes. The other idea is that we should be
impartial; it essentially incorporates the Golden Rule. It says that
one person's well-being is not ultimately any more important than
anybody else's. You put those two ideas together, and what you
basically get is a solution—a philosophical solution to the problems
of the modern world, which is: Our global philosophy should be that
we should try to make the world as happy as possible. But this has a
lot of counterintuitive implications, and philosophers have spent
the last century going through all of the ways in which this seems
to get things wrong.
Maybe a discussion for another time is: Should we trust those
instincts that tell us that we shouldn't be going for the greater
good? Is the problem with our instincts or is the problem with the
philosophy? What I argue is that our moral instincts are not as
reliable as we think, and that when our instincts work against the
greater good, we should put them aside at least as much as we can.
If we want to have a global philosophy—one that we could always sign
onto, regardless of which moral tribes we're coming from—it's going
to require us to do things that don't necessarily feel right in our
hearts. Either it's asking too much of us, or it feels like it's
asking us to do things that are wrong—to betray certain ideals, but
that's the price that we'll have to pay if we want to have a kind of
common currency; if we want to have a philosophy that we can all
live by. That's what ultimately my research is about. It's trying to
understand how our thinking works, what's actually going on in our
heads when we're making moral decisions. It's about trying to
understand the structure of moral problems, and the goal is to
produce a kind of moral philosophy that can help us solve our
distinctively complex moral problems.