The Ethics of…Game of Thrones! May 29, 2012 Posted by Nina Rosenstand in Culture, Ethics, Film, Nina Rosenstand's Posts.
Tags: cynicism, Game of Thrones, heroism, narrative ethics
Time for some early summer fun; lucky are those (like me) for whom fun and work often merge, as they do in narrative ethics. And I’ve found the HBO series Game of Thrones to be seriously fun, once you get into its fictional, pseudo-historical universe. (I haven’t read any of the books yet—I understand the TV series is deviating from the original in increasingly dramatic ways.) If anybody wants to catch up on the series before the season finale on Sunday, HBO is running the entire season this week. You can have an early-summer GoT marathon—and afterwards you can acquire a copy of Game of Thrones and Philosophy from Blackwell, which I have just ordered as a light summer read! If you are not worried about spoilers, take a look at Time’s review of episode 9, “Game of Thrones Watch: Smoke on the Water, Fire in the Sky” (those of you over 50 can start humming now…). It contains good character analyses, and a particularly insightful view of the character who emerges as the real focal point of the story, Tyrion Lannister:
I was on the verge of calling Tyrion’s behavior “heroic,” but that’s not really the term. Notably, we see that this is not Tyrion rising to his true calling or discovering that it is a far, far greater thing he does, &c., &c. It’s a practical decision, in that if the defenders of the city are not inspired, he will die. He plays the part (and Peter Dinklage does) masterfully, but he rouses his men with a purely practical argument too: “Don’t fight for your king, and don’t fight for his kingdoms. Don’t fight for honor, don’t fight for glory. Don’t fight for riches, because you won’t get any.”
And the reviewer could/should have added what comes next—what Tyrion tells his army: “Fight for your homes.” Because Tyrion may be pragmatic, but he is not altogether a cynic.
All in all, it is a story about moral decisions, big and small—split-second decisions that come from the heart, or are weighed by a calculating mind, and that all have consequences. Some decisions are made from a utilitarian stance, some from a deontological one. There is lots of ethical egoism in there, too, and plenty of knee-jerk egoism. And some characters are pure at heart, and we see their ethic of care, their virtue ethics, unfold—such as Sansa, who goes from being a victim to suddenly finding strength in helping others.
And so forth! If you’re looking for a joyride this week, leading up to the season finale on Sunday (and have cable), watch the nine episodes on HBO and look for all the moral, immoral, and amoral viewpoints swirling around. A well-told tale, well acted, just right for some summer speculations about fictional problems of fictional characters.
Enjoy your summer!
Hooked on Stories February 22, 2011 Posted by Nina Rosenstand in Ethics, Nina Rosenstand's Posts, Philosophy of Human Nature, Philosophy of Literature.
Tags: Michael Gazzaniga, narrative ethics, neurocinematics, William Casebeer
For someone like me, who has researched and written about Narrative Philosophy (philosophy involving the phenomenon of storytelling) for close to 30 years, with special emphasis on Narrative Ethics, it is particularly gratifying to watch the latest developments in neuroscientific research concerning the human urge to tell stories. Some of my students may remember me showing them a science video of the “man with two brains,” a man who had his two brain hemispheres severed, and who resorted to making up stories about his associations because he couldn’t explain them any other way. For years I have told my students that the man with two brains was trying to get control of a chaotic situation, and therefore chose to tell a story about it: an example of why we tell stories—to get a grip, to make unmanageable life manageable. In short, that’s why we tell stories of historic events, why we have myths and legends, why we love novels and movies, and certainly also why we lie.
The doctor in charge of research on this man’s case was Dr. Mike Gazzaniga of UCSB. A new article by Jessica Marshall, published in New Scientist, “Mind Reading: the Science of Storytelling,” notes that Gazzaniga has pursued our natural capacity to confabulate in his subsequent work:
Nobody has done more to highlight the central role of storytelling in human psychology than neuroscientist Michael Gazzaniga of the University of California, Santa Barbara. In studies of people in whom the connection between the two sides of the brain has been severed, he has shown that the left hemisphere is specialised for interpreting our feelings, actions and experiences in the form of narrative. In fact, Gazzaniga believes this is what creates our sense of a unified self. We also seem to use storytelling to reconcile our conscious and subconscious thoughts – as, for example, when we make choices based on subconscious reasoning and then invent fictions to justify and rationalise them (New Scientist, 7 October 2006, p 32).
The psychology of narrativity (Daniel Morrow, Rolf Zwaan) has reached interesting results over the past 20 years, and now neuroscience is weighing in with corroborative research:
It would appear that we don’t just tell stories to make sense of ourselves, we actually adopt the stories of others as though we were the protagonist.
Brain-scanning research published in 2009 seems to confirm this. When a team led by Jeffrey Zacks of Washington University in St Louis, Missouri, ran functional magnetic resonance imaging (fMRI) scans on people reading a story or watching a movie, they found that the same brain regions that are active in real-life situations fire up when a fictitious character encounters an equivalent situation.
And furthermore, our brains like it:
Stories can also manipulate how you feel, as anyone who has watched a horror movie or read a Charles Dickens novel will confirm. But what makes us empathise so strongly with fictional characters? Paul Zak from Claremont Graduate University, California, thinks the key is oxytocin, a hormone produced during feel-good encounters such as breastfeeding and sex.
Taking this idea a step further, Read Montague of Virginia Tech University in Blacksburg and William Casebeer of the US Defense Advanced Research Projects Agency (DARPA) in Arlington, Virginia, have started using fMRI to see what happens in the brain’s reward centres when people listen to a story. These are the areas that normally respond to pleasurable experiences such as sex, food and drugs. They are also associated with addiction. “I would be shocked if narrative didn’t engage the same kind of circuitry,” says Montague. That would certainly help explain why stories can be so compelling. “If I were a betting man or woman, I would say that certain types of stories might be addictive and, neurobiologically speaking, not that different from taking a tiny hit of cocaine,” says Casebeer.
So now we’re beginning to understand the power of stories: Our brains are set up to confabulate, we engage naturally in storytelling, and we can apparently get hooked on good stories. But take a look at where some scientists are going with this:
Understanding the mechanisms by which stories affect us can be put to practical use. Hasson has coined the term neurocinematics to describe its application to movie-making. His work reveals how some directors’ styles are particularly effective at synchronising the neural activity among members of the audience. “Hitchcock is the best example I have so far,” he says. “He was considered an expert of really manipulating the audience and turning them on and off as he pleased,” Hasson notes, and this shows up in the scans of people watching his films. Perhaps future directors could use these insights to control an audience’s experience. Hasson’s team has investigated how the order in which different scenes appear affects neural responses to a movie – which could help editors create either more enigmatic or more instantly comprehensible storylines, as required.
Human history is full of examples of the motivating power of a shared narrative – be it national, religious or focused on some other ideal – and Casebeer wants to investigate the possible military and political applications of a deeper understanding of this kind of storytelling. “One of my interests is in understanding how we can design institutions that more effectively promote moral judgement and development,” he says. He believes, for example, that the right stories could help military academies produce officers who are more willing to exercise moral courage.
Casebeer notes that a compelling narrative can seal the resolve of a suicide bomber, and suggests that developing “counter-narrative strategies” could help deter such attackers. “It might be that understanding the neurobiology of a story can give us new insights into how we prevent radicalisation and how we prevent people from becoming entrenched in the grip of a narrative that makes it more likely that they would want to intentionally cause harm to others,” he says.
At this point I’m seeing the ghosts of Watson and Skinner, the behaviorists, and their grand program, not just to understand human behavior, but to control it. I also see the ghost of Plato and his “Noble Lie.” And the ghost of every parent in the world who has ever told the story of “Little Red Riding Hood.” The fact that we’re story-telling animals (a term coined by Alasdair MacIntyre) also implies that we’re story-consuming animals, and as such we’re vulnerable to well-told manipulative stories. So this is where we need Narrative Philosophy/Narrative Ethics, in addition to brain research and psychological statistics. Even though the article by Casebeer referred to in Marshall’s piece is from 2005, reflecting the urgency of the post-9/11 years (which may of course feel new and fresh with every new terrorist act), the core concept of using stories to change the world remains the same—equally promising, and equally dangerous. Because what Casebeer is suggesting may sound, and be, benign and downright useful in a new century with an ongoing struggle against terrorism (regardless of changing administrations’ different nomenclature): telling stories to counteract the narratives of fanaticism that can lead to radicalization and mass murder. Science fiction has engaged in precisely such narratives for a couple of decades. But we cannot engage in such a practice without first having analyzed the ethical implications of narratives deliberately told to control the emotions of the audience. We already have a term for such narratives: we call them propaganda. And in order to evaluate whether such an approach is justified, we need to engage in an ethical analysis of all aspects of storytelling, and raise our awareness of when we’re being entertained, and when we’re being manipulated/educated. One level doesn’t preclude the other, and we don’t have to vilify the manipulative/educational aspect, but we need to be aware of it, and of the motivations of the manipulators.
In other words, we need an Ethic of Narratives, not just Narrative Ethics, understanding ourselves as moral agents in the world through stories.
And we haven’t even started talking about the stories embedded in commercials!
Moral Naturalism is Back! And so am I! August 22, 2010 Posted by Nina Rosenstand in Ethics, Nina Rosenstand's Posts, Philosophy of Human Nature.
Tags: evolutionary psychology, experimental philosophy, moral naturalism, narrative ethics
Vacation’s over, and I’m back after having been out of town all summer. Thanks to Dwight for keeping the blog hot, even with jury duty! Ever the optimist, I hope to weigh in on topics close to my heart, and mind (for I believe in the importance of both), at shorter intervals than last semester.
First, welcome to the new academic year, everyone—for I assume that most of you are involved in Academia in some form or other. Next, let’s dig into the pile of saved articles I’ve accumulated over the past months, when I haven’t had regular Internet access (and I survived!). Here’s a story that caught my eye, from the New York Times, July 22: a conference was held in Connecticut at which a new breed of moral philosophers (ethicists working with evolutionary psychologists and neuroscientists) confirmed the philosophical presence, and importance, of moral naturalism. A quick recap: moral naturalism is the philosophy that morality is a natural occurrence in the human mind—not (primarily) a matter of acculturation, or of selfish/unselfish choices based on rationality. In the 20th century, moral naturalism was crowded out by psychoanalysis, behaviorism, logical positivism, and other approaches which all have merit, but the idea that ethics could be founded in our emotional apparatus wasn’t getting much attention, except from a few thinkers such as Richard Taylor and Philip Hallie. (You’ll recognize some themes here from many of my previous posts.)
But something started happening in the late 1980s—interestingly, paralleling the development of narrative ethics, the idea that moral values find expression, and may be developed, through storytelling. From one direction came the new findings of neuroscientists: that an area of the brain seems to be devoted to moral considerations. From another came the evolutionary psychologists, connecting morals to our long history of evolution. Add to that experimental philosophy, with its (sometimes a mite oversimplified, but intriguing) “What if” questions such as the famous Trolley Problem, introduced by Philippa Foot.

Combine that with narrative ethics, and you get a new form of moral naturalism (and that description is also oversimplified, but I’m just trying to paint a general picture): human emotions have developed certain built-in features that enable us, even as babies, to recognize right from wrong, from the perspective of being social animals; as adults, these features are still fundamental, but can be overridden—by experience, cultural pressures, and/or reason; and the stories we hear, and tell, about right and wrong will combine our moral emotions with a sense of causality, and teach/explain the moral dos and don’ts that will produce either compliance with societal rules or rebellion against them.

So here, moving into the second decade of the new millennium, we have a moral philosophy that is actually on fairly firm ground, working with science as well as recognizing the common human experience of moral feelings. This is exciting, folks. So doesn’t anyone see a downside to this? Yes, some ethicists question the loss of the exalted state of reason as the foundation and instigator of moral choices and values. And we may have to do yet another reevaluation down the line, reinstating reason as a fundamental element of ethics. I often argue that we’ll have to, because moral emotions are error-prone.
So here are some tidbits from the conference, reported by David Brooks:
Jonathan Haidt of the University of Virginia argues that this moral sense is like our sense of taste. We have natural receptors that help us pick up sweetness and saltiness. In the same way, we have natural receptors that help us recognize fairness and cruelty. Just as a few universal tastes can grow into many different cuisines, a few moral senses can grow into many different moral cultures.
Paul Bloom of Yale noted that this moral sense can be observed early in life. Bloom and his colleagues conducted an experiment in which they showed babies a scene featuring one figure struggling to climb a hill, another figure trying to help it, and a third trying to hinder it.
At as early as six months, the babies showed a preference for the helper over the hinderer. In some plays, there is a second act. The hindering figure is either punished or rewarded. In this case, 8-month-olds preferred a character who was punishing the hinderer over ones being nice to it.
This illustrates, Bloom says, that people have a rudimentary sense of justice from a very early age. This doesn’t make people naturally good. If you give a 3-year-old two pieces of candy and ask him if he wants to share one of them, he will almost certainly say no. It’s not until age 7 or 8 that even half the children are willing to share. But it does mean that social norms fall upon prepared ground. We come equipped to learn fairness and other virtues.
Brooks comments that the conference left alone the question of transcendence and the sacred, and that is a valid complaint, since so many people are convinced that the entire idea of ethics is founded in religion, and we can’t just disregard that conviction and throw it under the trolley—that would be as bad as 20th-century ethicists disregarding the role of emotions. But my overriding concern is that the moral naturalists of the 21st century (to whose ranks I suppose I belong) are losing sight of the role of reason, and that the new moral naturalism will become another fad, a radical “ism” that will, in time, be replaced by a counter-theory, in good Hegelian fashion. We’re not quite at the pinnacle yet, where we understand everything about ethics.