
Pure Reason is a Disease May 1, 2007

Posted by Dwight Furrow in Ethics.

Or how to naturalize Levinas. This article is a good summary of cognitive science and emotions. For my purposes, the nut graf:

“In 2004, Harvard psychologist Joshua Greene used brain imaging to demonstrate that our emotions play an essential role in ordinary moral decision-making. Whenever we contemplate hurting someone else, our brain automatically generates a negative emotion. This visceral signal discourages violence. Greene’s data builds on evidence suggesting that psychopaths suffer from a severe emotional disorder — that they can’t think properly because they can’t feel properly.

“This lack of emotion is what causes the dangerous behavior,” said James Blair, a cognitive psychologist at the National Institute of Mental Health.”

Update: More evidence (here) of the role of the brain and feelings of altruism in moral conduct.

Thanks to Kevin for the catch.



1. Michael Mussachia - May 1, 2007

A nice example of how empirical data can contribute to general considerations in moral philosophy.

2. Dwight Furrow - May 1, 2007

It is interesting to think about whether this negative emotion that Greene discovers has a felt quality that is essential to the motivational state that inhibits our acting to harm someone.

If so, that suggests that even highly sophisticated computers may not have a moral conscience as we understand it, since they would presumably lack the feeling states of biological organisms.

3. Nina Rosenstand - May 1, 2007

A wonderful example of how scientists can only weigh in on the descriptive angle of ethics, not the normative one. We should, of course, learn as much as we can about the hardwiring of morality–which ought to put to rest all lingering assumptions that ethics is merely a matter of socialization, or that moral decisions take place in an emotional vacuum. But from here to conclude what is correct moral behavior? Sorry, James Blair, that’s not how it works. Scientists may be able to tell us whether a moral consideration/inhibition is natural–not if it’s “right.” The difficulty in making moral decisions may well stem from precisely this inhibition–because sometimes the right thing to do is indeed the hard thing to do. Sometimes Kant is right, after all. Overriding one’s natural emotional reluctance to cause harm, if universalizing the act makes sense, could indeed be a sign of a higher moral understanding–not lunacy or emotional deficiency. So could overriding one’s natural emotional reluctance to cause harm to an individual, if by sacrificing the few one would save the many. Sometimes Bentham is right, too. Who says moral decisions are supposed to be easy?

4. Dwight Furrow - May 3, 2007


I don’t think the role of science here is purely descriptive. There is, or ought to be, a substantial normative dimension to this research. It may be that science cannot tell us whether a moral inhibition is right. But shouldn’t our theories about what is right be constrained by human psychology? If not, aren’t we in danger of constructing inhuman, unlivable theories? One of the problems with traditional versions of Kant and utilitarianism is that they can’t give an account of human motivation.

An analogy with epistemology is instructive here. On many epistemological theories, one central consideration in justifying a belief is that the belief is a product of a normally functioning belief-generating mechanism–e.g. a visual or auditory mechanism. Aren’t these accounts of the emotional dimension of ethics an attempt to describe a properly functioning norm-generating mechanism? I don’t think that they comprehensively describe our capacity for practical reason. But they do help to clarify part of the structure of our practical reason and in doing so help to explain why certain actions or judgments are justified or not.

Finally, I think that we should not cavalierly accept the idea that overriding our reluctance to cause harm exhibits “higher moral understanding”. Much misery has been caused by people acting on the dictum that “you have to break a few eggs to make an omelet”. I am not suggesting that historical examples of mass murder necessarily satisfy the conditions of a universalization procedure. But I take it no one has come up with a universalization procedure that is reliable and not ridden with exceptions or eviscerated by lawyerly caveats.

I think we ignore these inhibitions at our peril.

5. Nina Rosenstand - May 5, 2007


What fascinates me here is the rise of moral naturalism on a scientific basis—which I have, in fact, been hoping for. I just don’t want the pendulum to swing too far in the other direction, away from the acknowledgment of the role of reason so we end up assuming that our instincts are fundamentally good by nature and we don’t need principles.

6. Michael Mussachia - May 7, 2007

The value I see in scientific research here is in its increasing ability to reveal the evolutionary, cognitive and social roots and conditions of moral codes and their practice, and hence to clarify the sorts of moral codes and practices that have a better chance of actually working for the betterment of individuals and societies. Many of our community and political leaders act on the basis of poorly thought-out moral philosophies uninformed by science (typically religion-based absolutist moral principles) that actually harm great numbers of people. Perhaps science cannot tell us what is morally correct behavior, but it can help us understand what has a chance of actually working toward whatever moral goals we have taken on. Even in regard to values, empirical enquiry can reveal their evolutionary, neurobiological and social sources, and such insights should contribute to a more practical (and politically effective) moral philosophy.

7. Evan Simons - May 9, 2007

Biology, as a discipline, is mired in context. Isolating a ‘moral center’ in the brain could lead to new developments in neuroscience, but just studying that part of the brain, without the context (here, culture), will tell us little of the actuality of morality. There is a complex interaction between the organism and its environment; a healthy human is born with a ‘moral center,’ but that center is developed by the society in which it lives. (I am hesitant to use the word development, as it carries connotations of inborn capacity, as in a film contains the picture that is waiting to be developed, but oh well.)

8. Kevin W. - June 3, 2007

another study on this just came out recently…

“The results were showing that when the volunteers placed the interests of others before their own, the generosity activated a primitive part of the brain that usually lights up in response to food or sex. Altruism, the experiment suggested, was not a superior moral faculty that suppresses basic selfish urges but rather was basic to the brain, hard-wired and pleasurable.”

If It Feels Good to Be Good, It Might Be Only Natural

9. Thea - June 3, 2007

Hmm. Right now, the researchers are operating under the assumption that human brains (and therefore moral centers) are basically alike when the truth is, science has shown us already how different human brains can be.

If I know anything about scientific evolution (and I don’t), the next step will be to compare and contrast moral centers among different people. They’ll want to determine whether IQ and ethnicity are relevant to the size of one’s moral center, and on and on until they determine that moral centers are different in different people. Conclusion? A normative theory stating that we should expect different moral abilities from different people. Not good.

10. Nina Rosenstand - June 3, 2007

Johnny-on-the-spot! Thanks for the link, great article. Takes me back to “Friends”…so if Phoebe feels good after raking her neighbor’s leaves, is it a primitive selfish feeling or not? I think we’re closer to an answer!

11. Dwight Furrow - June 3, 2007


Well, you’re talking about Phoebe here, so who knows.

But her action would not be selfish since the intention of her action, presumably, was her neighbor’s good. The satisfaction would be a by-product of the intention.


Despite some of the comments made by scientists in the article, I don’t think this research demonstrates the presence of a “moral center” that is hard wired and that threatens our free will. Scientists often get the philosophical implications of their research wrong.

What this shows is a tendency toward altruism that is quite malleable, responsive to cultural conditioning, and very situation dependent.

Research by Zimbardo (http://www.prisonexp.org/) and others shows that decent people can turn into monsters with only the slightest encouragement.


Thanks for the catch. I will include the article on the original post.
