Joshua Greene. Crédito: Greg Salibian

Frontiers: “morality is the solution to the tragedy of the commons,” says Greene

The Harvard professor discussed how our mind makes moral decisions and how culture is a key factor in the formation of morality

In São Paulo, the latest edition of the lecture and debate series Frontiers of Thought hosted Joshua Greene, a psychologist, neuroscientist, and philosopher who is a professor at Harvard University and director of the Greene Lab there. Journalist João Gabriel de Lima moderated Greene’s presentation.

This year, the events hosted by Frontiers of Thought share a central theme: a world in disagreement, democracy, and culture wars. Greene approaches the issue through morality, his area of expertise and the subject of his latest book, Moral Tribes: Emotion, Reason, and the Gap Between Us and Them.

For the Harvard professor, morality can be defined as the search for a solution to the tragedy of the commons – that is, the tragedy among us, human beings. “It is a kind of tool that lives in our head and allows us to avoid tragedy by thinking about the well-being of everyone as a whole,” he claims. “People should not think about what is good just for themselves; instead, they should think about what is good for us as a collective group.”

It is still hard for humans to understand how moral decisions work. Greene explains with a metaphor: our brain operates like a camera. Automatic mode corresponds to moral decisions based on emotions, which are fast and require no deliberate reflection. Manual mode corresponds to reasoning, a way of thinking and acting deliberately. He believes this dualism enables us to balance our decisions between efficiency and flexibility.

Experiments show how culture influences morality

In his presentation, Greene showed the results of a series of experiments on moral decisions. In one of them, a public goods game, people were given ten dollars and could make one of two decisions: keep the money for themselves or donate it, in which case it would be multiplied and divided among all volunteers. Having replicated this test across cultures, Greene was able to show that selfish or generous behavior is closely tied to the degree of trust in the other people involved in the decision. When people trust their group, the donation rate is far higher.
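The incentive structure of a game like this can be sketched in a few lines. The $10 endowment comes from the article; the multiplier (2×) and the group size (4) are illustrative assumptions, since the article does not specify them:

```python
# Minimal sketch of the public goods game described above.
# Endowment ($10) is from the article; the 2x multiplier and
# 4-player group are assumed for illustration.

def payoffs(contributions, endowment=10, multiplier=2.0):
    """Each player keeps whatever they did not contribute, plus an
    equal share of the multiplied common pot."""
    pot = sum(contributions) * multiplier
    share = pot / len(contributions)
    return [endowment - c + share for c in contributions]

# If everyone trusts the group and donates the full $10, all do well:
print(payoffs([10, 10, 10, 10]))  # [20.0, 20.0, 20.0, 20.0]

# A lone free-rider does better individually, at the group's expense:
print(payoffs([0, 10, 10, 10]))   # [25.0, 15.0, 15.0, 15.0]
```

The tension Greene describes is visible in the numbers: universal cooperation makes everyone better off than universal selfishness, yet each individual is always tempted to keep the money, which is why trust in the group matters.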

“It is an example of classic tribalism, and this can also be expressed in situations such as racism,” says Greene, citing situations faced by the black population: the difficulty of finding a job when competing with white candidates, or the possibility of being convicted by a jury because of one’s skin color – even when the bias is an unconscious process.

“Dealing with people from another group, another tribe, affects the area of the brain that deals with unwanted emotions and feelings,” he says. This same neurological reaction explains the answers to another, quite famous experiment: is it morally acceptable to kill one person to save five people?

Consider the hypothetical situation: a runaway trolley is headed toward five people on the track, and to save them, one person must be sacrificed. Among respondents, 63% would choose the sacrifice if the decision only required pushing a button, but only 31% made the same choice if they had to push the person themselves.

Greene explains that, from a neurological point of view, this decision involves several parts of the brain. When thinking about the side effect of an action – in this case, the death of someone – the amygdala is triggered; when weighing the costs and benefits of the action, the prefrontal cortex is activated. And the closer the individual is to the collateral result of the action, the more the amygdala influences the decision. Morality, however, is not mapped to any single place in the brain, but to the combined pattern of activations.

“Morality has no specific module in the brain, but neither is it separate from everything else. Morality and ethics are not mechanisms, but neurological functions,” he concludes. “Science says what is, but philosophy explains what should be,” he says.

Individual virtue or the greater good?

For at least three centuries, philosophers have debated the role of morality and which standard of rightness it should follow: virtuous action, or action that fosters the greater good.

Greene contrasts two antagonistic theses. On the one hand, the ethical argument of German philosopher Immanuel Kant, for whom the ends do not justify the means and actions must be judged by principles that could hold for all individuals. On the other, the utilitarian tradition, whose purpose is to ensure that all decisions and actions do the most good for the greatest number of people.

For the Harvard professor, a metamorality – a kind of moral system above basic morality – may be the natural way to resolve this debate. Unfortunately, however, we are not yet evolutionarily prepared for it. “It is a new problem. It is not a problem of natural evolution, but of cultural evolution. It is the source of several other problems, and our mind is not ready to deal with it yet,” he says.

“In the end, the question that needs to be asked is: do we all live better with this or that decision? It is not a merely utilitarian thought, but a search for human values,” he concluded.