Hooked on Stories (February 22, 2011)
Posted by Nina Rosenstand in Ethics, Nina Rosenstand's Posts, Philosophy of Human Nature, Philosophy of Literature.
Tags: Michael Gazzaniga, narrative ethics, neurocinematics, William Casebeer
For someone like me who has researched and written about Narrative Philosophy (philosophy involving the phenomenon of storytelling) for close to 30 years, with special emphasis on Narrative Ethics, it is particularly gratifying to watch the latest developments in neuroscientific research concerning the human urge to tell stories. Some of my students may remember me showing them a science video of the “man with two brains,” a man whose two brain hemispheres had been surgically disconnected, and who resorted to making up stories about his associations because he couldn’t explain them any other way. For years I have told my students that the man with two brains was trying to get control of a chaotic situation, and therefore chose to tell a story about it—an example of why we tell stories: to get a grip, to make unmanageable life manageable. In short, that’s why we tell stories of historic events, why we have myths and legends, why we love novels and movies, and certainly also why we lie.
The doctor in charge of research in connection with this man’s case was Dr. Michael Gazzaniga of UC Santa Barbara. And a new article written by Jessica Marshall and published in New Scientist, “Mind Reading: the Science of Storytelling,” notes that Gazzaniga has pursued the phenomenon of our natural capacity to confabulate in his subsequent work:
Nobody has done more to highlight the central role of storytelling in human psychology than neuroscientist Michael Gazzaniga of the University of California, Santa Barbara. In studies of people in whom the connection between the two sides of the brain has been severed, he has shown that the left hemisphere is specialised for interpreting our feelings, actions and experiences in the form of narrative. In fact, Gazzaniga believes this is what creates our sense of a unified self. We also seem to use storytelling to reconcile our conscious and subconscious thoughts – as, for example, when we make choices based on subconscious reasoning and then invent fictions to justify and rationalise them (New Scientist, 7 October 2006, p 32).
The psychology of narrativity (Daniel Morrow, Rolf Zwaan) has produced interesting results over the past 20 years, and now neuroscience is weighing in with corroborative research:
It would appear that we don’t just tell stories to make sense of ourselves, we actually adopt the stories of others as though we were the protagonist.
Brain-scanning research published in 2009 seems to confirm this. When a team led by Jeffrey Zacks of Washington University in St Louis, Missouri, ran functional magnetic resonance imaging (fMRI) scans on people reading a story or watching a movie, they found that the same brain regions that are active in real-life situations fire up when a fictitious character encounters an equivalent situation.
And furthermore, our brains like it:
Stories can also manipulate how you feel, as anyone who has watched a horror movie or read a Charles Dickens novel will confirm. But what makes us empathise so strongly with fictional characters? Paul Zak from Claremont Graduate University, California, thinks the key is oxytocin, a hormone produced during feel-good encounters such as breastfeeding and sex.
Taking this idea a step further, Read Montague of Virginia Tech University in Blacksburg and William Casebeer of the US Defense Advanced Research Projects Agency (DARPA) in Arlington, Virginia, have started using fMRI to see what happens in the brain’s reward centres when people listen to a story. These are the areas that normally respond to pleasurable experiences such as sex, food and drugs. They are also associated with addiction. “I would be shocked if narrative didn’t engage the same kind of circuitry,” says Montague. That would certainly help explain why stories can be so compelling. “If I were a betting man or woman, I would say that certain types of stories might be addictive and, neurobiologically speaking, not that different from taking a tiny hit of cocaine,” says Casebeer.
So now we’re beginning to understand the power of stories: Our brains are set up to confabulate, we engage naturally in storytelling, and we can apparently get hooked on good stories. But take a look at where some scientists are going with this:
Understanding the mechanisms by which stories affect us can be put to practical use. [Uri Hasson] has coined the term neurocinematics to describe its application to movie-making. His work reveals how some directors’ styles are particularly effective at synchronising the neural activity among members of the audience. “Hitchcock is the best example I have so far,” he says. “He was considered an expert of really manipulating the audience and turning them on and off as he pleased,” Hasson notes, and this shows up in the scans of people watching his films. Perhaps future directors could use these insights to control an audience’s experience. Hasson’s team has investigated how the order in which different scenes appear affects neural responses to a movie – which could help editors create either more enigmatic or more instantly comprehensible storylines, as required.
Human history is full of examples of the motivating power of a shared narrative – be it national, religious or focused on some other ideal – and Casebeer wants to investigate the possible military and political applications of a deeper understanding of this kind of storytelling. “One of my interests is in understanding how we can design institutions that more effectively promote moral judgement and development,” he says. He believes, for example, that the right stories could help military academies produce officers who are more willing to exercise moral courage.
Casebeer notes that a compelling narrative can seal the resolve of a suicide bomber, and suggests that developing “counter-narrative strategies” could help deter such attackers. “It might be that understanding the neurobiology of a story can give us new insights into how we prevent radicalisation and how we prevent people from becoming entrenched in the grip of a narrative that makes it more likely that they would want to intentionally cause harm to others,” he says.
At this point I’m seeing the ghosts of Watson and Skinner, the behaviorists, and their grand program, not just to understand human behavior, but to control it. I also see the ghost of Plato and his “Noble Lie.” And the ghost of every parent in the world who has ever told the story of “Little Red Riding Hood.” The fact that we’re story-telling animals (a term coined by Alasdair MacIntyre) also implies that we’re story-consuming animals, and as such we’re vulnerable to well-told manipulative stories. So this is where we need Narrative Philosophy/Narrative Ethics, in addition to brain research and psychological statistics. Even though the article by Casebeer referred to in Marshall’s piece is from 2005, reflecting the urgency of the post-9/11 years (which may of course feel new and fresh with every new terrorist act), the core concept of using stories to change the world remains the same—equally promising, and equally dangerous. Because what Casebeer is suggesting may sound, and be, benign and downright useful in a new century with an ongoing struggle against terrorism (regardless of changing administrations’ different nomenclature): telling stories to counteract the narratives of fanaticism that can lead to radicalization and mass murder. Science fiction has engaged in precisely such narratives for a couple of decades. But we cannot engage in such a practice without first having analyzed the ethical implications of narratives being deliberately told to control the emotions of the audience. We already have a term for such narratives—we call them propaganda. And in order to evaluate whether such an approach is justified we need to engage in an ethical analysis of all aspects of storytelling, and raise our awareness of when we’re being entertained, and when we’re being manipulated/educated. One level doesn’t preclude the other, and we don’t have to vilify the manipulative/educational aspect, but we need to be aware of it, and the motivations of the manipulators.
In other words, we need an Ethic of Narratives, not just Narrative Ethics, understanding ourselves as moral agents in the world through stories.
And we haven’t even started talking about the stories embedded in commercials!
The Winner is Watson (February 17, 2011)
Posted by Nina Rosenstand in Artificial Intelligence, Current Events, Nina Rosenstand's Posts, Science, Technology.
Tags: "Jeopardy", rights, Star Trek, Watson
So it has finally happened: a computer has outwitted the humans—Watson won on “Jeopardy.” As reported by the New York Times’ John Markoff,
For I.B.M., the showdown was not merely a well-publicized stunt and a $1 million prize, but proof that the company has taken a big step toward a world in which intelligent machines will understand and respond to humans, and perhaps inevitably, replace some of them.
Watson, specifically, is a “question answering machine” of a type that artificial intelligence researchers have struggled with for decades — a computer akin to the one on “Star Trek” that can understand questions posed in natural language and answer them.
One of Watson’s developers, Dr. David Ferrucci, refers to the computer as though it were a person who actually deliberates. That, for you Trekkers, is also reminiscent of numerous Star Trek episodes:
Both Mr. Jennings and Mr. Rutter are accomplished at anticipating the light that signals it is possible to “buzz in,” and can sometimes get in with virtually zero lag time. The danger is to buzz too early, in which case the contestant is penalized and “locked out” for roughly a quarter of a second.
Watson, on the other hand, does not anticipate the light, but has a weighted scheme that allows it, when it is highly confident, to buzz in as quickly as 10 milliseconds, making it very hard for humans to beat. When it was less confident, it buzzed more slowly. In the second round, Watson beat the others to the buzzer in 24 out of 30 Double Jeopardy questions.
“It sort of wants to get beaten when it doesn’t have high confidence,” Dr. Ferrucci said. “It doesn’t want to look stupid.”
And what’s next?
For I.B.M., the future will happen very quickly, company executives said. On Thursday it plans to announce that it will collaborate with Columbia University and the University of Maryland to create a physician’s assistant service that will allow doctors to query a cybernetic assistant. The company also plans to work with Nuance Communications Inc. to add voice recognition to the physician’s assistant, possibly making the service available in as little as 18 months.
“I have been in medical education for 40 years and we’re still a very memory-based curriculum,” said Dr. Herbert Chase, a professor of clinical medicine at Columbia University who is working with I.B.M. on the physician’s assistant. “The power of Watson-like tools will cause us to reconsider what it is we want students to do.”
I.B.M. executives also said they are in discussions with a major consumer electronics retailer to develop a version of Watson, named after I.B.M.’s founder, Thomas J. Watson, that would be able to interact with consumers on a variety of subjects like buying decisions and technical support.
But…here’s the ultimate Star Trek question: Will Watson and others of its kind have the right to refuse the tasks they will be assigned to do? Because otherwise (thank you, Melinda Snodgrass, writer of that classic Star Trek: The Next Generation episode, “The Measure of a Man”) we will have created a new breed of—slaves. Provided that Watson actually develops a sense of self. But we have yet to see evidence of that.
Are We Better Off Without Grief? (February 9, 2011)
Posted by Nina Rosenstand in Nina Rosenstand's Posts, Philosophy of Human Nature.
Tags: "The Eternal Sunshine of the Spotless Mind", grief, Homer's Odyssey, Peanuts, suffering
Is grief meaningful, or is it simply painful, wasted time? On the Practical Ethics blog Roger Crisp speculates about the nature and effect of grief; since it feels so terrible, perhaps we’d be better off if we could deselect it from the human experience, or remove it from our minds with the help of drugs. Using an example from Book 4 of Homer’s Odyssey (ethics in fiction!) he ponders whether it might not be beneficial, once in a while, to be able to forget one’s sorrows.
Some people claim that suffering pain is good in itself. It usually turns out, however, that they mean that suffering is good in so far as it enables one to acquire some other good, such as understanding what others are going or have gone through, or certain profound truths about human life. It’s also common for people to suggest that suffering, though it may be bad in itself, is required as a background against which certain good things in life – in particular, of course, pleasure – can stand out. These and other such claims, however, seem especially dubious in the case of someone who has already experienced quite a lot of suffering and can remember it – as will be true of nearly all adult human beings…Grief is usually unpleasant, sometimes extremely so. What if some medication could permanently remove any tendency to grief, with no damaging side-effects?
Crisp speculates that there might be some benefit to this, if it doesn’t remove the positive memories of the relationship to the loved one we have lost. Then we could just remember the good things, without the sting.
Somehow it reminds me of an old Peanuts strip: Linus and Charlie Brown are discussing suffering. Charlie Brown asks what the meaning of suffering might be, and Linus responds that it strengthens the soul and prepares it for future events. Charlie says, “What events?” And Linus responds, “More suffering.”
Crisp’s idea is a noble thought, in the tradition of minimizing misery for humanity under the assumption that pain is generally a bad thing, and pleasure is good. But do we really want to live in the Eternal Sunshine of the Spotless Mind? Do we want to dull or edit out our painful experiences because we think a good life is one without pain, loss and disappointment? It is a peculiarly modern, Western notion that a successful life is one without emotional pain; in other time periods and cultures grief has its place in a person’s life. And why assume that in order for grief to be meaningful, it has to have a silver lining, some non-painful payoff? I once read a quote that has stayed with me ever since: “Grief is the finest tribute to the joy we’ve had.” So grief is a response to a real situation, and in attempting to remove the sting of grief, you could at the same time alter the very nature of the relationship and the joy that’s lost. Those memories may lose their vibrancy. New brain research tells us that the strength of some memories is directly proportional to the emotion associated with them. I do see where Crisp wants to take us with his speculation: into a realm where we don’t have to suffer so terribly at the loss of someone we love, but without removing the good memories. However, there is already a cure for that, and it doesn’t involve drugs. It is called Time. It’s a hard cure, but it does indeed heal most wounds.
Can Novels Be Philosophical? Part 2 (February 6, 2011)
Posted by Nina Rosenstand in Ethics, Nina Rosenstand's Posts, Philosophy of Literature.
Tags: James Ryerson, John Steinbeck, Martha Nussbaum, narrative philosophy, Paul Ricoeur
In his New York Times article from January 20, James Ryerson brought up arguments supporting the view that there is a world of difference between the analytical arguments of philosophy and the murky feelings of literature (see the blog post below). But he also cites opposing views:
Of course, such oppositions are never so simple. Plato, paradoxically, was himself a brilliant literary artist. Nietzsche, Schopenhauer and Kierkegaard were all writers of immense literary as well as philosophical power. Philosophers like Jean-Paul Sartre and George Santayana have written novels, while novelists like Thomas Mann and Robert Musil have created fiction dense with philosophical allusion. Some have even suggested, only half in jest, that of the brothers William and Henry James, the philosopher, William, was the more natural novelist, while the novelist, Henry, was the more natural philosopher.
David Foster Wallace, who briefly attended the Ph.D. program in philosophy at Harvard after writing a first-rate undergraduate philosophy thesis (published in December by Columbia University Press as “Fate, Time, and Language”), believed that fiction offered a way to capture the emotional mood of a philosophical work. The goal, as he explained in a 1990 essay in The Review of Contemporary Fiction, wasn’t to make “abstract philosophy ‘accessible’ ” by simplifying ideas for a lay audience, but to figure out how to recreate a reader’s more subjective reactions to a philosophical text.
Unlike Murdoch, Gass and Wallace, Rebecca Newberger Goldstein, whose latest novel is “36 Arguments for the Existence of God,” treats philosophical questions with unabashed directness in her fiction, often featuring debates or dialogues among characters who are themselves philosophers or physicists or mathematicians. Still, she says that part of her empathizes with Murdoch’s wish to keep the loose subjectivity of the novel at a safe remove from the philosopher’s search for hard truth. It’s a “huge source of inner conflict,” she told me. “I come from a hard-core analytic background: philosophy of science, mathematical logic. I believe in the ideal of objectivity.” But she has become convinced over the years of what you might call the psychology of philosophy: that how we tackle intellectual problems depends critically on who we are as individuals, and is as much a function of temperament as cognition. Embedding a philosophical debate in richly imagined human stories conveys a key aspect of intellectual life. You don’t just understand a conceptual problem, she says: “You feel the problem.”
So according to Ryerson there are indeed authors whose work straddles the two fields—but I’m curious about his approach, because it seems to be exclusively from the viewpoint of analytic philosophy that a gap exists: Continental philosophers have traditionally felt far closer to fictional literature, and continental authors have blended philosophical thoughts into their works, as Ryerson himself mentions. Paul Ricoeur, the French philosopher, spent decades teaching his readers about the value of narrative philosophy. Here in this country similar lessons have been taught since the 1980s by literary scholars such as Wayne Booth and Hayden White. But even in contemporary American philosophy there is an increasing rapprochement between literature and philosophy; I’m surprised that Ryerson doesn’t even mention the one contemporary American philosopher who, perhaps more than anybody else, has seen the philosophical value in fiction without getting hung up on whether fiction displays formal arguments and “hard truths”: Martha Nussbaum. And if we want to look for an American novelist who has excelled in writing fictional works of moral philosophy where the reader doesn’t choke on formal arguments, but instead sees moral deliberations come alive through his characters, John Steinbeck is probably the best example of a writer who fuses literature and ethics—to the profound irritation of literary critics, because he broke with the standard rules of literature. From Of Mice and Men to East of Eden, and in particular The Winter of Our Discontent, Steinbeck weaves philosophical arguments about right and wrong, good and evil, into his storylines. And if you read Stephen K. George’s collections of essays, John Steinbeck and Moral Philosophy, and John Steinbeck and His Contemporaries, as well as Ethics, Literature, and Theory, you’ll find that a new generation of literary critics and moral philosophers has no problem recognizing philosophical fiction as simultaneously representative of good philosophy and good fiction.