
Human Nature Decoded–No Whiskers, and No Penile Spines! March 10, 2011

Posted by Nina Rosenstand in Nina Rosenstand's Posts, Philosophy of Human Nature, Science.

I’ve written a book about the Philosophy of Human Nature, The Human Condition. I’ve given talks, and written papers and blogs, and tweeted about the subject. I’ve been devouring every morsel of information about human evolution I could get my hands on since I was 13 years old. I teach two classes per year focusing on Phil of Human Nature. And what do I read this morning in the CNN online Health section? Brand new research about a significant difference between apes and humans: Ape males have penile spines and human males don’t. First thing I thought was, “And a good thing, too!” Now wipe the smirks off your faces—this finding turns out to have seriously philosophical consequences:

We know that humans have larger brains and, within the brain, a larger angular gyrus, a region associated with abstract concepts. Also, male chimpanzees have smaller penises than humans, and their penises have spines. Not like porcupine needles or anything, but small pointy projections on the surface that basically make the organ bumpy.

Gill Bejerano, a biologist at Stanford University School of Medicine, and colleagues wanted to further investigate why humans and chimpanzees have such differences. They analyzed the genomes of humans and closely related primates and discovered more than 500 regulatory regions — sequences in the genome responsible for controlling genes — that chimpanzees and other mammals have, but humans do not. In other words, they are making a list of DNA that has been lost from the human genome during millions of years of evolution. Results from their study are published in the journal Nature.

…[The scientists] found that in one case, a switch that had been lost in humans normally turns on an androgen receptor at the sites where sensory whiskers develop on the face and spines develop on the penis. Mice and many other animals have both of these characteristics, and humans do not.

“This switch controls the expression of a key gene that’s required for the formation of these structures,” said David Kingsley, a study co-author at Stanford University. “If you kill that gene — smash the lightbulb — which has been done previously in mouse genetics, the whiskers don’t grow as much and the penile spines fail to form at all.”

To sum up: Humans lack a switch in the genome that would “turn on” penile spines and sensory whiskers. But our primate relatives, such as chimpanzees, have the switch, and that’s why they differ from us in these two ways.

So what does it matter, other than, presumably,  a different female sexual experience, and a lack of ability to sense things a few inches from our faces?

The other “switch” examined in this study probably has to do with the expansion of brain regions in humans. Kingsley and colleagues believe they have found a place in their genome comparisons where the loss of DNA in humans may have contributed to the gain of neurons in the brain. That is to say, when humans evolved without a particular switch, the absence of that switch allowed the brain to grow further.

The earliest human ancestors probably had sensory whiskers, penile spines and small brains, Kingsley said. Evolutionary events to remove the whiskers and spines and enlarge the brain probably took place after humans and chimpanzees split apart as separate species (Some 5 million to 7 million years ago), but before Neanderthals and humans diverged (about 600,000 years ago), Kingsley said.

So there you have it: We were on the fast track to becoming Homo sapiens when the switch for sensory whiskers and penile spines was turned off! Make of that what you want, in this Women’s History Month! For me, that story made my day!

(I thought of calling this blog post “Of Mice and Men”, but that would be unfair to Steinbeck.)


Patricia Churchland at Book Works March 9, 2011

Posted by Nina Rosenstand in Current Events, Ethics, Nina Rosenstand's Posts, Philosophy of Human Nature.

A quick message for interested San Diegans: Patricia Churchland will be doing a reading Thursday evening, March 10:

In “Braintrust,” Patricia Churchland, professor emeritus of philosophy at UCSD, uses neuroscience to question accepted wisdom about the origins of morality.

She will be at Book Works in Del Mar Thursday at 7 p.m. for a reading.

From a San Diego Union-Tribune interview:

What is new about the hypothesis you are offering?

As I see it, moral values are rooted in family values displayed by all mammals — the caring for offspring. The evolved structure, processes, and chemistry of the brain incline humans to strive not only for self-preservation but for the well-being of allied selves — first offspring, then mates, kin, and so on, in wider and wider “caring” circles.

Separation and exclusion cause pain, and the company of loved ones causes pleasure; responding to feelings of social pain and pleasure, brains adjust their circuitry to local customs. In this way, caring is apportioned, conscience molded, and moral intuitions instilled.

A key part of the story is oxytocin, an ancient body-and-brain molecule that, by decreasing the stress response, allows humans to develop the trust in one another necessary for the development of close-knit ties, social institutions, and morality.

Read more here.

Hooked on Stories February 22, 2011

Posted by Nina Rosenstand in Ethics, Nina Rosenstand's Posts, Philosophy of Human Nature, Philosophy of Literature.

For someone like me who has researched and written about Narrative Philosophy (philosophy involving the phenomenon of storytelling) for close to 30 years, with special emphasis on Narrative Ethics, it is particularly gratifying to watch the latest developments in neuroscientific research concerning the human urge to tell stories. Some of my students may remember me showing them a science video of the “man with two brains,” a man whose two brain hemispheres had been surgically disconnected, and who resorted to making up stories about his associations because he couldn’t explain them any other way. For years I have told my students that the man with two brains was trying to get control of a chaotic situation, and therefore chose to tell a story about it—an example of why we tell stories: to get a grip, to make unmanageable life manageable. In short, that’s why we tell stories of historic events, why we have myths and legends, why we love novels and movies, and certainly also why we lie.

The doctor in charge of research on this man’s case was Dr. Mike Gazzaniga of UCSB. And a new article by Jessica Marshall, published in New Scientist, “Mind Reading: The Science of Storytelling,” notes that Gazzaniga has pursued the phenomenon of our natural capacity to confabulate in his subsequent work:

Nobody has done more to highlight the central role of storytelling in human psychology than neuroscientist Michael Gazzaniga of the University of California, Santa Barbara. In studies of people in whom the connection between the two sides of the brain has been severed, he has shown that the left hemisphere is specialised for interpreting our feelings, actions and experiences in the form of narrative. In fact, Gazzaniga believes this is what creates our sense of a unified self. We also seem to use storytelling to reconcile our conscious and subconscious thoughts – as, for example, when we make choices based on subconscious reasoning and then invent fictions to justify and rationalise them (New Scientist, 7 October 2006, p 32).

The psychology of narrativity (Daniel Morrow, Rolf Zwaan) has produced interesting results over the past 20 years, and now neuroscience is weighing in with corroborative research:

It would appear that we don’t just tell stories to make sense of ourselves, we actually adopt the stories of others as though we were the protagonist.

Brain-scanning research published in 2009 seems to confirm this. When a team led by Jeffrey Zacks of Washington University in St Louis, Missouri, ran functional magnetic resonance imaging (fMRI) scans on people reading a story or watching a movie, they found that the same brain regions that are active in real-life situations fire up when a fictitious character encounters an equivalent situation.

 And furthermore, our brains like it:

Stories can also manipulate how you feel, as anyone who has watched a horror movie or read a Charles Dickens novel will confirm. But what makes us empathise so strongly with fictional characters? Paul Zak from Claremont Graduate University, California, thinks the key is oxytocin, a hormone produced during feel-good encounters such as breastfeeding and sex.

Taking this idea a step further, Read Montague of Virginia Tech University in Blacksburg and William Casebeer of the US Defense Advanced Research Projects Agency (DARPA) in Arlington, Virginia, have started using fMRI to see what happens in the brain’s reward centres when people listen to a story. These are the areas that normally respond to pleasurable experiences such as sex, food and drugs. They are also associated with addiction. “I would be shocked if narrative didn’t engage the same kind of circuitry,” says Montague. That would certainly help explain why stories can be so compelling. “If I were a betting man or woman, I would say that certain types of stories might be addictive and, neurobiologically speaking, not that different from taking a tiny hit of cocaine,” says Casebeer.

So now we’re beginning to understand the power of stories: Our brains are set up to confabulate, we engage naturally in storytelling, and we can apparently get hooked on good stories.  But take a look at where some scientists are going with this:

Understanding the mechanisms by which stories affect us can be put to practical use. Hasson has coined the term neurocinematics to describe its application to movie-making. His work reveals how some directors’ styles are particularly effective at synchronising the neural activity among members of the audience. “Hitchcock is the best example I have so far,” he says. “He was considered an expert of really manipulating the audience and turning them on and off as he pleased,” Hasson notes, and this shows up in the scans of people watching his films. Perhaps future directors could use these insights to control an audience’s experience. Hasson’s team has investigated how the order in which different scenes appear affects neural responses to a movie – which could help editors create either more enigmatic or more instantly comprehensible storylines, as required.

Human history is full of examples of the motivating power of a shared narrative – be it national, religious or focused on some other ideal – and Casebeer wants to investigate the possible military and political applications of a deeper understanding of this kind of storytelling. “One of my interests is in understanding how we can design institutions that more effectively promote moral judgement and development,” he says. He believes, for example, that the right stories could help military academies produce officers who are more willing to exercise moral courage.

Casebeer notes that a compelling narrative can seal the resolve of a suicide bomber, and suggests that developing “counter-narrative strategies” could help deter such attackers. “It might be that understanding the neurobiology of a story can give us new insights into how we prevent radicalisation and how we prevent people from becoming entrenched in the grip of a narrative that makes it more likely that they would want to intentionally cause harm to others,” he says.

At this point I’m seeing the ghosts of Watson and Skinner, the behaviorists, and their grand program not just to understand human behavior but to control it. I also see the ghost of Plato and his “Noble Lie.” And the ghost of every parent in the world who has ever told the story of “Little Red Riding Hood.” The fact that we’re story-telling animals (a term coined by Alasdair MacIntyre) also implies that we’re story-consuming animals, and as such we’re vulnerable to well-told manipulative stories. So this is where we need Narrative Philosophy/Narrative Ethics, in addition to brain research and psychological statistics. Even though the article by Casebeer referred to in Marshall’s piece is from 2005, reflecting the urgency of the post-9/11 years (which may of course feel new and fresh with every new terrorist act), the core concept of using stories to change the world remains the same—equally promising, and equally dangerous. Because what Casebeer is suggesting may sound, and be, benign and downright useful in a new century with an ongoing struggle against terrorism (regardless of changing administrations’ nomenclature): telling stories to counteract the narratives of fanaticism that can lead to radicalization and mass murder. Science fiction has engaged in precisely such narratives for a couple of decades. But we cannot engage in such a practice without first having analyzed the ethical implications of narratives being deliberately told to control the emotions of the audience. We already have a term for such narratives—we call them propaganda. And in order to evaluate whether such an approach is justified, we need to engage in an ethical analysis of all aspects of storytelling, and raise our awareness of when we’re being entertained and when we’re being manipulated/educated. One level doesn’t preclude the other, and we don’t have to vilify the manipulative/educational aspect, but we need to be aware of it, and of the motivations of the manipulators. In other words, we need an Ethic of Narratives, not just a Narrative Ethics in which we understand ourselves as moral agents in the world through stories.

And we haven’t even started talking about the stories embedded in commercials!

Happiness is a “Moment of Grace”? January 23, 2011

Posted by Nina Rosenstand in Culture, Nina Rosenstand's Posts, Philosophy of Human Nature.

The Philosophy of Happiness is a hot topic these days; what St. Augustine said about time, I think we can safely say about happiness, too: When you don’t ask me, I know what it is—when you ask me, I don’t. Here, in The Guardian, is an interesting interview with French philosopher and novelist Pascal Bruckner, who focused on happiness before happiness was cool.

Now, 10 years after its French publication, Bruckner’s treatise on the nature of happiness has finally received an English translation under the title Perpetual Euphoria: On the Duty to Be Happy. As Bruckner acknowledges, happiness is a notoriously difficult concept to pin down. We can take it to mean wellbeing, contentment, joy and pleasure, as well as several other definitions, but whatever it entails, it’s a philosophical topic that dates back to the very beginnings of the discipline.

For the ancient Greeks, happiness was synonymous with the good life. To be happy was to fulfil a harmonious role in an ordered society. Christianity replaced happiness with salvation, a life of denial for the promise of eternal bliss after death. It was the Enlightenment that returned happiness to earth. Most famously, the American Declaration of Independence guaranteed the right “to life, liberty and pursuit of happiness”.

Today, however, says Bruckner, people feel an obligation to be happy, and if they can’t live up to it, their lives collapse:

Bruckner suggests that with nothing standing between ourselves and happiness, other than our willingness to grasp it, there is a moral compulsion weighing on us to be happy – and it’s precisely this social pressure that makes so many people unhappy. “We should wonder why depression has become a disease. It is a disease of a society that is looking desperately for happiness, which we cannot catch. And so people collapse into themselves.”

 Bruckner’s book is a rich mixture of philosophy, literary learning and social observation; a cultured diagnosis rather than a populist cure. He does not believe that happiness can be reliably identified, much less measured. “Wellbeing is the object of statistics,” he says. “Happiness is not.” But he is not above issuing advice. “You can’t summon happiness like you summon a dog. We cannot master happiness, it cannot be the fruit of our decisions. We have to be more humble. Not because we should praise frailty or humility but because people are very unhappy when they try hard and fail. We have a lot of power in our lives but not the power to be happy. Happiness is more like a moment of grace.”

Bruckner is at pains to emphasise that happiness has more in common with an accident than a self-conscious choice. Interestingly, the origin of the word lies in the Old Norse word for chance: happ. But leaving happiness to chance, warns Bruckner, is not the same as ignoring it. “It’s said that if you don’t look for happiness, it will come. In fact, it’s not so easy. If you turn your back on happiness, you might miss it. It’s a catch-22 and I don’t think there’s any way out, except perhaps that real happiness doesn’t care about happiness. You can reach it only indirectly.”

But how similar are we in our experience of happiness? As much as I am skeptical of the merits of relativism, it is obvious that different cultures have different views of the achievement and experience of happiness; the sense of happiness achieved by a Frenchman may be ontologically and morally different from that of a Dane (and of course the Danes, my ancestral people, are supposed to be the happiest people on Earth). Here we have a good example of a field of research that needs input from psychology, neuroscience, and anthropology, with a dash of literature and poetry, and a philosopher’s touch to tie it all together. Looking forward to reading Bruckner’s text.

Homo Ludens—Is Playing Good for Us? November 30, 2010

Posted by Nina Rosenstand in Culture, Nina Rosenstand's Posts, Philosophy of Human Nature.

Years ago the Dutch historian Johan Huizinga came out with a book, Homo Ludens, “The Playing Human,” which claimed that playing is older than human culture, that even adults play for the fun of it, and that it’s good for us. That was actually an eye-opener for most people at the time. Since then the scope of play behavior analysis has been extended to the study of social animals (see Bekoff and Pierce, Wild Justice), suggesting that social play allows for the development of a sense of fairness and justice, not only in humans but in some species of animals as well.

In this article, “Why We Can’t Stop Playing,” we see the positive analysis of play continued—but this time the spotlight isn’t on playing as a social activity, but on a very much solitary experience: “casual games” played on our computers and our cell phones, mainly to pass the time while waiting for appointments:

Why do smart people love seemingly mindless games? Angry Birds is one of the latest to join the pantheon of “casual games” that have appealed to a mass audience with a blend of addictive game play, memorable design and deft marketing. The games are designed to be played in short bursts, sometimes called “entertainment snacking” by industry executives, and there is no stigma attached to adults pulling out their mobile phones and playing in most places. Games like Angry Birds incorporate cute, warm graphics, amusing sound effects and a reward system to make players feel good. A scientific study from 2008 found that casual games provide a “cognitive distraction” that could significantly improve players’ moods and stress levels.

Game designers say this type of “reward system” is a crucial part of the appeal of casual games like Angry Birds. In Bejeweled 2, for example, players have to align three diamonds, triangles and other shapes next to each other to advance in the game. After a string of successful moves, a baritone voice announces, “Excellent!” or “Awesome!”

In the 2008 study, sponsored by PopCap, 134 players were divided into groups playing Bejeweled or other casual games, and a control group that surfed the Internet looking for journal articles. Researchers, who measured the participants’ heart rates and brain waves and administered psychological tests, found that game players had significant improvements in their overall mood and reductions in stress levels, according to Carmen Russoniello, director of the Psychophysiology Lab and Biofeedback Clinic at East Carolina University’s College of Health and Human Performance in Greenville, N.C., who directed the study.

In a separate study, not sponsored by PopCap, Dr. Russoniello is currently researching whether casual games can be helpful in people suffering from depression and anxiety.

Hardly an incentive for further development of one’s sense of fairness and justice, like social play! But it may still have merit, if it can offset the unnaturally high levels of stress most of us labor under. For one thing, we can conclude that playing games by oneself adds an important dimension to the play behavior phenomenon; for another, I find it fascinating that the article doesn’t end with a caveat such as, “You’re just being childish, needing approval from the world,” or “If you play too much you’ll become aggressive/a mass murderer/go blind,” or whatever. For decades we’ve heard about the bad influence of computer gaming, as a parallel to the supposed bad influence of violent visual fiction. But the debate is ancient: to put it into classical philosophical terms, Plato warned against going to the annual plays in Athens, because he thought they would stir up people’s emotions and thus impair their rational, moral judgment; Aristotle, who loved the theater, suggested that watching dramas and comedies would relieve tension and teach important moral lessons. In the last two or three decades most analyses of the influence of entertainment have, almost predictably, ended with a Platonic warning about the dangers of violent TV, movies, and videogames. Are we slowly moving in an Aristotelian direction? That would be fascinating, but here we should remember that Aristotle didn’t want us to OD on entertainment: the beneficial effects are only present if entertainment is enjoyed in moderation. 15 minutes of “Angry Birds” ought to be just enough…

On a Scale of 1 to 22… September 15, 2010

Posted by Nina Rosenstand in Criminal Justice, Nina Rosenstand's Posts, Philosophy of Human Nature, Science.

Since Dwight brought up Michael Stone, and I’ve encountered his work before, here’s another angle on the work of Stone, tying it in with our ongoing discussion of the concept of evil. A trend in moral philosophy that was pretty well established in the 20th century has been broken lately, and it is partly thanks to Stone’s work—because philosophers do, on occasion, read the works of psychologists/psychiatrists. That trend was, for ethicists, to avoid the term “evil” professionally, partly because of its supposed religious connotations, and partly because of its unmanageability—because how exactly do we define evil, and how does it play into our subjective notions? In other words, using the term “evil” opens the door wide to subjectively emotional statements of condemnation, the kind of sweeping irrationality that moral philosophers have tried to avoid and counteract ever since Plato. Instead, to designate truly horrendous acts toward others, ethicists have preferred to call them “morally wrong,” “heinous,” “going against the accepted norms of society,” or simply “malevolent.”

But even so, in everyday life, in the media, in politics, and in the world of entertainment we have blithely proceeded to use the E-word, with all its baggage, because expressions such as “morally wrong” and “really really bad behavior” simply aren’t strong enough. And now philosophers seem to be softening their stance against using the word evil—not just because we have been inundated with graphic novels and their world view of good vs. evil, but because we’re beginning to get some kind of structured understanding of what we can and can’t say when using the word. The first step in that direction was taken by Hannah Arendt, the German philosopher who, after reflecting on the atrocities of the Holocaust, coined the term the banality of evil to designate a new category: the deplorable tendency in (supposedly) each of us to be persuaded to cause harm to other human beings under the assumption that it’s okay, that it’s not our responsibility, that everyone is doing it, and that somehow those people we’re hurting must have deserved it. In the wake of her analysis, we’ve had Stanley Milgram and Philip Zimbardo substantiating her theory through psychological experiments. Zimbardo has even drawn a line from Arendt’s analysis directly to Abu Ghraib. So through the back door, so to speak, ethicists were reintroduced to the concept of evil, but this time in an everyday guise, of normal people capable of doing horrendous things to others.

And now we’re, interestingly, reopening the possibility of using the label of what some of us call “extreme” or “egregious evil,” by reexamining not the banality of it, but the rare and shocking total disregard for the suffering of others, and even the joy in inflicting it. That is what laypeople have called evil all along, and a few years ago Michael Stone reached out to laypeople and ethicists alike by suggesting a scale of evil related to homicide and other inflictions of pain. This list has been making the rounds on the Internet, and was even featured in a television series in 2006, Most Evil, but NPR ran a story about it in August:

Inspired by the structure of Dante’s circles of hell, Stone has created his own 22-point “Gradations of Evil” scale, made up of murderers in the 20th century. “I thought it would be an interesting thing to do,” he says.

His scale is loosely divided into three tiers. First are impulsive evil-doers: driven to a single act of murder in a moment of rage or jealousy. Next are people who lack extreme psychopathic features, but may be psychotic — that is, clinically delusional or out of touch with reality. Last are the profoundly psychopathic, or “those who possess superficial charm, glib speech, grandiosity, but most importantly cunning and manipulativeness,” Stone says. “They have no remorse for what they’ve done to other people.”

Stone hopes the scale could someday be used in prosecutions. “The people at the very end of the scale have certain things about their childhood backgrounds that are different,” he says, from those who appear earlier in the scale. And because the scale follows a continuum of likelihood a killer will kill again, courts may be able to better categorize the risks posed by releasing a psychopath.

Justifiable homicide such as self-defense is not evil, according to Stone, and gets a 1. The worst of the worst, psychopathic torture-murderers, rate a 22. Spree killers “only” rate a 15, and there is a difference between murdering torturers (20) and “merely” torturing murderers (18). And, interestingly, I can’t find a category or a number that would fit Arendt’s duty-driven, even reluctant Nazi torturers, but with some tweaking, the list might accommodate the banality of evil.
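For readers who want the cited ratings gathered in one place, here is a minimal sketch in Python, purely illustrative and assuming nothing beyond the numbers quoted in the NPR story; the dictionary, labels, and function name are mine, not Stone’s, and the other seventeen gradations are deliberately left out.

# A toy lookup of the gradations cited in this post, NOT Stone's full 22-point scale.
# Only the ratings quoted above are included; everything else is left undefined.

CITED_GRADATIONS = {
    1: "justifiable homicide, such as self-defense (not evil, per Stone)",
    15: "spree killers",
    18: "torturing murderers",
    20: "murdering torturers",
    22: "psychopathic torture-murderers (the worst of the worst)",
}

def describe(rating: int) -> str:
    """Return this post's gloss for a cited rating, or note that it isn't covered here."""
    return CITED_GRADATIONS.get(rating, "not among the gradations cited in this post")

if __name__ == "__main__":
    for r in sorted(CITED_GRADATIONS):
        print(f"{r:>2}: {describe(r)}")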

Will this scale make the concept of evil easier to handle for ethicists? The list is not intended to be absolute, and of course philosophers are welcome to question the moral connotation of a clinical psychiatric list of symptoms (and we should), but overall Stone has simply systematized a vocabulary that we have been using all along, with its dangers of misunderstandings and exaggerations. Egregiously evil acts do exist, and Stone offers a moral classification of the degree of evil involved, without getting into any metaphysical discussion about what made these people do these evil things, whether they acted out of “free will,” or whether “evil” exists as an entity. That’s not for him to do—that’s for philosophers to discuss. (However, as you can read in the post below by Dwight, Stone also has a theory about underlying brain anomalies which should be taken into account—but all the brain theories in the world can’t provide the complete answer to the perennial philosophical question of when, why, and to what extent we choose to assign guilt and responsibility to an act.) So his list is useful—not the final word for ethicists, but another tool in our ongoing understanding not only of what people do and what might make them do it, but of why we believe they shouldn’t do it. In the end, it does boil down to how we view our responsibility to the Other.

Consciousness Explained? September 2, 2010

Posted by Dwight Furrow in Dwight Furrow's Posts, Philosophy, Philosophy of Human Nature, Science, Uncategorized.

David Hirschman at Big Think summarizes recent views on the nature of consciousness:

Dr. Antonio Damasio, a neuroscientist from the University of Southern California who has studied the neurological basis of consciousness for years, tells Big Think that being conscious is a “special quality of mind” that permits us to know both that we exist and that the things around us exist. He differentiates this from the way the mind is able to portray reality to itself merely by encoding sensory information. Rather, consciousness implies subjectivity—a sense of having a self that observes one’s own organism as separate from the world around that organism.

“Many species, many creatures on earth that are very likely to have a mind, but are very unlikely to have a consciousness in the sense that you and I have,” says Damasio. “That is a self that is very robust, that has many, many levels of organization, from simple to complex, and that functions as a sort of witness to what is going on in our organisms. That kind of process is very interesting because I believe that it is made out of the same cloth of mind, but it is an add-on, it was something that was specialized to create what we call the self.”

It seems to me there is something missing from this all-too-brief summary of Damasio’s account. To have a self (and thus to be robustly conscious) is not just to be a “witness to what is going on in our organism” or to recognize that one’s own organism is separate from the world.

To be conscious is to have the felt sense that something matters—has significance or import. A sophisticated computer might know that it exists, that things around it exist, and that there is a difference between it and the world. But I doubt that such a machine would have a felt concern for something because it is not a biological organism with needs embedded in feeling states. Self-awareness is not merely a “witness” but an active sorter of what to attend to and what to ignore in light of what matters. It is hard to imagine a consciousness without this sorting ability.

Dwight Furrow is author of

Reviving the Left: The Need to Restore Liberal Values in America

For political commentary by Dwight Furrow visit: www.revivingliberalism.com

Do We Think What Our Language Tells Us to Think? August 29, 2010

Posted by Nina Rosenstand in Nina Rosenstand's Posts, Philosophy, Philosophy of Human Nature.

What if our entire capacity for thinking is limited and determined by our respective languages, forever preventing true cross-cultural understanding from taking place? An interesting article by linguist Guy Deutscher in the New York Times Magazine gives a good overview of the linguistic debate while introducing his new book, Through the Language Glass: Why the World Looks Different in Other Languages. In brief: in 1940 anthropologist Benjamin Lee Whorf suggested that language molds your capacity for thinking, to the point that if the language does not contain a certain concept, you’re incapable of thinking about it or understanding it.

In particular, Whorf announced, Native American languages impose on their speakers a picture of reality that is totally different from ours, so their speakers would simply not be able to understand some of our most basic concepts, like the flow of time or the distinction between objects (like “stone”) and actions (like “fall”). For decades, Whorf’s theory dazzled both academics and the general public alike. In his shadow, others made a whole range of imaginative claims about the supposed power of language, from the assertion that Native American languages instill in their speakers an intuitive understanding of Einstein’s concept of time as a fourth dimension to the theory that the nature of the Jewish religion was determined by the tense system of ancient Hebrew.

Deutscher tells us that the theory has been abandoned and ridiculed by linguists for decades, but for one thing, linguists aren’t the only ones with an interest in the epistemological side of language—20th century philosophers have had many a discussion on the subject; and for another, I’ve certainly heard scholars from many different fields refer to such ideas as established truths. But, says Deutscher, Whorf’s theory lost out because of its radical approach:

Whorf, we now know, made many mistakes. The most serious one was to assume that our mother tongue constrains our minds and prevents us from being able to think certain thoughts. The general structure of his arguments was to claim that if a language has no word for a certain concept, then its speakers would not be able to understand this concept. If a language has no future tense, for instance, its speakers would simply not be able to grasp our notion of future time. It seems barely comprehensible that this line of argument could ever have achieved such success, given that so much contrary evidence confronts you wherever you look. When you ask, in perfectly normal English, and in the present tense, “Are you coming tomorrow?” do you feel your grip on the notion of futurity slipping away? Do English speakers who have never heard the German word Schadenfreude find it difficult to understand the concept of relishing someone else’s misfortune? Or think about it this way: If the inventory of ready-made words in your language determined which concepts you were able to understand, how would you ever learn anything new?

Says Deutscher, the interesting thing isn’t that language limits your thinking, but that it enforces a certain kind of thinking:

Some 50 years ago, the renowned linguist Roman Jakobson pointed out a crucial fact about differences between languages in a pithy maxim: “Languages differ essentially in what they must convey and not in what they may convey.” This maxim offers us the key to unlocking the real force of the mother tongue: if different languages influence our minds in different ways, this is not because of what our language allows us to think but rather because of what it habitually obliges us to think about.

In German, Spanish, Russian, and many other languages, you have to think about nouns in terms of masculine and feminine. In English we have to think about actions in a certain tense—have we done something, will we do something, or are we doing it? In Chinese, apparently, you don’t have to be that specific. In Western languages we tend to put ourselves in the middle of most of our spatial references (left, right, back, forth), but some tribal languages do not: their talk about space involves cardinal points (north, south, east, and west), not relative references to ourselves. If our language has certain words for colors, we are more apt to perceive them. The bottom line is that language does affect our way of thinking, in terms of what we have to be aware of, what we learn to pay attention to, and what we learn to disregard. But to what extent?

For many years, our mother tongue was claimed to be a “prison house” that constrained our capacity to reason. Once it turned out that there was no evidence for such claims, this was taken as proof that people of all cultures think in fundamentally the same way. But surely it is a mistake to overestimate the importance of abstract reasoning in our lives. After all, how many daily decisions do we make on the basis of deductive logic compared with those guided by gut feeling, intuition, emotions, impulse or practical skills? The habits of mind that our culture has instilled in us from infancy shape our orientation to the world and our emotional responses to the objects we encounter, and their consequences probably go far beyond what has been experimentally demonstrated so far; they may also have a marked impact on our beliefs, values and ideologies. We may not know as yet how to measure these consequences directly or how to assess their contribution to cultural or political misunderstandings. But as a first step toward understanding one another, we can do better than pretending we all think the same.

Whether Deutscher’s reevaluation of Whorf’s theory is really something new in linguistics I can’t say—I’m not a linguist—but philosophically this is hardly a new approach; on the contrary, it is reminiscent of what some Continental philosophers of the twentieth century said a while back (actually, it was that brilliant philologist Nietzsche who first floated a similar idea!): we don’t all think the same way, because our available language creates a perspective, or horizon, for all that we take for granted and are likely to notice—a hermeneutic circle. Not Whorf’s thought prison, but a Lifeworld of our interpretations into which we are thrown, and which takes some intellectual effort to rise above—hard, but not impossible. You can find similar ideas in the writings of several contemporary German and French scholars, and some of them have actually been influenced by Jakobson.

So without having read Deutscher’s book yet, it seems to me that the idea of  language as a primary condition for understanding the world, but not per se a prison of interpretation, is not exactly new in the general realm of scholarship. And philosophically as well as scientifically, it is already the subject of a revisionist overhaul, focusing on our neurological/ontological similarities beneath the cultural differences! 

But I’m glad Deutscher brought up the name and influence of  Roman Jakobson. I myself  actually had the privilege of meeting him at the 500th anniversary of the University of Copenhagen in 1979. Jakobson was talking about his amazing life in a succession of countries, and to the best of my recollection he said, “The surest way to stay mentally active is to change country and language every 10 years!” And he wasn’t talking about just learning new words and rules of grammar…

Moral Naturalism is Back! And so am I! August 22, 2010

Posted by Nina Rosenstand in Ethics, Nina Rosenstand's Posts, Philosophy of Human Nature.

Vacation’s over, and I’m back after having been out of town all summer. Thanks to Dwight for keeping the blog hot, even with jury duty! Ever the optimist, I hope to weigh in on topics close to my heart, and mind (for I believe in the importance of both), at shorter intervals than last semester.

First, welcome to the new academic year, everyone—for I assume that most of you are involved in Academia in some form or other. Next, let’s dig into the pile of saved articles I’ve accumulated over the past months, during which I haven’t had regular Internet access (and I survived!). Here’s a story that caught my eye, from the New York Times, July 22: A conference was held in Connecticut in which a new breed of moral philosophers (ethicists working with evolutionary psychologists and neuroscientists) confirmed the philosophical presence, and importance, of moral naturalism. A quick recap: Moral naturalism is the philosophy that morality is a natural occurrence in the human mind—not (primarily) a matter of acculturation, or of selfish/unselfish choices based on rationality. In the 20th century moral naturalism was crowded out by psychoanalysis, behaviorism, logical positivism, and other approaches which all have merit, but the idea that ethics could be founded in our emotional apparatus wasn’t getting much attention, except from a few thinkers such as Richard Taylor and Philip Hallie. (You’ll recognize some themes here from many of my previous posts.)

But something started happening in the late 1980s—interestingly, paralleling the development of narrative ethics, the idea that moral values find expression, and may be developed, through storytelling. From one direction came the new findings of neuroscientists: that an area of the brain seems to be devoted to moral considerations. From another came the evolutionary psychologists, connecting morals to our long history of evolution. Add to that experimental philosophy, with its (sometimes a mite oversimplified, but intriguing) “What if” questions such as the famous Trolley Problem, introduced by Philippa Foot. Combine that with narrative ethics, and you get a new form of moral naturalism (and that description is also oversimplified, but I’m just trying to paint a general picture): human emotions have developed certain built-in features that enable us, even as babies, to recognize right from wrong, from the perspective of being social animals; as adults, these features are still fundamental, but can be overridden—by experience, cultural pressures, and/or reason; and the stories we hear, and tell, about right and wrong will combine our moral emotions with a sense of causality, and teach/explain the moral dos and don’ts that will produce either compliance with societal rules or rebellion against them. So here, moving into the second decade of the new millennium, we have a moral philosophy that is actually on fairly firm ground, working with science as well as recognizing the common human experience of moral feelings. This is exciting, folks. So doesn’t anyone see a downside to this? Yes, some ethicists question the loss of the exalted status of reason as the foundation and instigator of moral choices and values. And we may have to do yet another reevaluation down the line, reinstating reason as a fundamental element of ethics. I often argue that we’ll have to, because moral emotions are error-prone.

So here are some tidbits from the conference, reported by David Brooks:

Jonathan Haidt of the University of Virginia argues that this moral sense is like our sense of taste. We have natural receptors that help us pick up sweetness and saltiness. In the same way, we have natural receptors that help us recognize fairness and cruelty. Just as a few universal tastes can grow into many different cuisines, a few moral senses can grow into many different moral cultures.

Paul Bloom of Yale noted that this moral sense can be observed early in life. Bloom and his colleagues conducted an experiment in which they showed babies a scene featuring one figure struggling to climb a hill, another figure trying to help it, and a third trying to hinder it.

At as early as six months, the babies showed a preference for the helper over the hinderer. In some plays, there is a second act. The hindering figure is either punished or rewarded. In this case, 8-month-olds preferred a character who was punishing the hinderer over ones being nice to it.

This illustrates, Bloom says, that people have a rudimentary sense of justice from a very early age. This doesn’t make people naturally good. If you give a 3-year-old two pieces of candy and ask him if he wants to share one of them, he will almost certainly say no. It’s not until age 7 or 8 that even half the children are willing to share. But it does mean that social norms fall upon prepared ground. We come equipped to learn fairness and other virtues.

Brooks comments that the conference left alone the question of transcendence and the sacred, and that is a valid complaint, since so many people are convinced that the entire idea of ethics is founded in religion, and we can’t just disregard that conviction and throw it under the trolley—that’s as bad as 20th century ethicists disregarding the role of emotions. But my overriding concern is that the moral naturalists of the 21st century (to which I suppose I belong) are losing sight of the role of reason, and that the new moral naturalism will become another fad, a radical “ism” that will, in time, be replaced by a counter-theory, in good Hegelian fashion.  We’re not quite at the pinnacle yet, where we understand everything about ethics.

The Gaze of Empathy June 1, 2010

Posted by Nina Rosenstand in Culture, Ethics, Nina Rosenstand's Posts, Philosophy of Human Nature, Teaching.

In the midst of scientific reports that humans in general are far more empathetic than selfish (at least by nature), we all of a sudden hear that college students are less empathetic now than in generations past.

 Researchers analyzed data from studies conducted between 1979 and 2009, and found the sharpest drop in empathy occurred in the last nine years.

 For instance, today’s students are less likely to agree with statements like, “I sometimes try to understand my friends better by imagining how things look from their perspective” and “I often have tender, concerned feelings for people less fortunate than me.”

According to one of the lead researchers, Ed O’Brien, “It’s harder for today’s college student to empathize with others because so much of their social lives is done through a computer and not through real life interaction.”

So some researchers blame computers and social sites like, yes, Facebook. You can communicate about yourself endlessly, without being expected to reciprocate (“Thanks for asking about my day—how was yours?”). But one comment, from “Cricket,” on the article quoted above really adds something to the discussion:

A fellow storyteller noticed that this year’s Master of Library Science class in storytelling (don’t laugh — good storytelling and story collecting involves a huge amount of research) didn’t make eye contact. This is an affluent group of white females — a culture in which eye contact has always been considered appropriate. (In some cultures it’s an invasion of privacy.) After discussing it with them, she learned they didn’t realize eye contact was appropriate. I remember parents and teachers used to insist on it: “Look at me when I’m talking to you / when you’re talking to me.” Since then, they have said that her class is more friendly than others, and it’s the only class where they socialize together after class.

That comment triggered a veritable aha-moment for me, because I have observed the same phenomenon in my classes, increasingly, over the past decade: there are some students who hide and avoid eye contact because they haven’t studied the material. That’s nothing new—we’ve all done that when we were in school. And then there are students from some non-Western cultures who may have been taught that it is rude to look a person of authority straight in the eye. So cultural differences can account for some incidents. But when good students with a Western cultural background are avoiding eye contact, it gets interesting. Increasingly I have students who bring their laptops or their Kindle devices to class. Some instructors prohibit such devices; I don’t—yet. I just ban non-class-related activity. And what I see is those students—the good ones—being utterly absorbed by what it is they’re watching, or doing, on the screen. Usually it’s note taking, and not game-playing (and I check!)…. But even when you take notes, you’re supposed to look up once in a while and look at the instructor performing his or her stand-up routine there in front of you. We’re not just standing up there at the whiteboard to repeat a lesson, like TiVo on a 3-D TV—we’re actually there to create a teaching moment from scratch every day, and some of it is improv! What creates the most significant difference between a classroom experience and an online course is the face-to-face encounter with questions and ideas. But without that basic eye-contact participation you might as well be at home behind your screen, taking an online course (which has its merits, but the face-to-face learning moment isn’t one of them). When I have told my students that I expect eye contact from them, they have—to my enormous consternation—been surprised. And now I realize that they simply may not be accustomed to eye contact being appropriate, because they have grown up frequently—maybe even primarily—communicating electronically with peers. The first generation in the history of humanity for which eye contact is no longer the first clear human outreach? Now that is fundamentally frightening. The gaze of the Other is fundamental to many 20th century philosophies, in particular Sartre’s, who sees it as (by and large) a competition, and Levinas’s, who sees it as humanity looking right at you, asking for your empathy. Look at Vermeer’s “Girl with a Pearl Earring,” the picture I use for my “Gravatar,” as well as for the cover of my book, The Human Condition:

[Image: detail of Vermeer’s “Girl with a Pearl Earring”]

This is the face of the Other. She is looking right at you, with the gaze of a human being, real and timeless. She expects a response. But if we withhold our gaze and think that’s normal, well, then there is no empathy coming forth.