Chris Dorner: Not a Folk Hero February 14, 2013

Posted by Nina Rosenstand in Criminal Justice, Current Events, Ethics, Nina Rosenstand's Posts.

It seems the saga of former LAPD cop and spree killer Chris Dorner has now come to an end, in a way that he himself predicted: he would not survive to experience the fallout. And I suspect that many of you, like me, have been eerily mesmerized by the unfolding story over the past week. More fortunate than most, I have been able to discuss the case with a bunch of intelligent students, and we have exchanged viewpoints. I have also listened to talk shows, read online commentaries, followed news briefs, and read most of the manifesto Dorner had posted to Facebook. And I’m sitting here with a very bad feeling—not just for the four people who fell victim to Dorner’s vengeful rage, and for their families, but a bad feeling about the voices in the media who somehow seem to have elevated Dorner to some sort of folk hero, a Rambo, a Jason Bourne kind of character (as a guest on a talk show pointed out). When such views have been expressed, they have generally been prefaced with, “Yes, of course what he has done is wrong, BUT he has a point,” or “Of course he shouldn’t kill people, BUT even so, he is fighting the good fight.” In other words, his actions may be wrong or over the top, but they are somehow in the service of a noble cause.

Now that upsets me. It upsets me because that kind of evaluation shows a fundamental misunderstanding of the connection between having a cause and taking action, and perhaps even a politically motivated willingness to overlook certain very disturbing facts in favor of a subtext that some people feel ought to be promoted, such as “the LAPD is in need of reforms.”

So let us look at what Dorner actually did (allegedly, of course): he shot and killed a young woman and her fiancé. The young woman was the daughter of an ex-cop from the LAPD who had been Dorner’s lawyer. He also shot and killed a Riverside police officer, as well as a San Bernardino deputy. In addition, he deprived three people of their right not to have their liberty interfered with (he tied up an elderly boat owner in San Diego and two maids in Big Bear), he wounded several police officers, and he stole two cars. And for what purpose? In the Facebook manifesto he states it clearly: because he felt he had been wronged when he was fired from the LAPD in 2009, the only way to “clear his name,” he believed, was to kill members of the LAPD and their families.

Martha Nussbaum, the American philosopher, says that emotions should be considered morally relevant, provided that they are reasonable, meaning that they arise as a logical response to a situation and thus inspire moral decisions and actions that are somehow proportionate to the event that caused the anger (Nussbaum is also a philosopher of law). So let us allow for the possibility that Dorner experienced an emotion that was a relevant response to his (perhaps) unfair dismissal from the LAPD: he was angry. But exactly what is reasonable anger? That would be (according to Aristotle, whom Nussbaum admired) righteous anger that is directed toward the right people, for the right reason, at the right time, in the right amount. But even if he was unfairly dismissed (which is a common experience for many people), and even if he had experienced racism at his workplace, would it ever be morally reasonable for him to exact revenge on the daughter of his lawyer? Or her fiancé? Neither of them had anything to do with his being fired. The murders were simply a means to cause pain to her father. (For you Kant aficionados: Dorner used his lawyer’s daughter merely as a means to get back at her father.) The moment Dorner made good on his threat to start killing the relatives of LAPD officers was the moment he lost any claim to a moral high ground, to righteous anger, or to taking justifiable action. That was the moment when he went from somebody with possibly a justified grievance to merely being a thug, and a petty, selfish one at that, taking his anger out on innocent victims.

And the killing of the Riverside and San Bernardino law enforcement officers? That seems to have been dictated by his poor judgment and his attempt to escape the dragnet cast over all of Southern California, not by his manifesto. He claimed to go after LAPD officers because the LAPD had “done him wrong,” but in the end it was Riverside and San Bernardino that lost members of their police departments. We can discuss, in the weeks to come, whether he was actually mentally stable in his final week. We can discuss whether the manifesto reveals an intelligent, reflective mind or a person on the brink of insanity. We can discuss whether another outcome would have been possible. We can even discuss whether his manifesto made some valid points. But the fact that he broke the basic covenant he had been taught as a police officer, to protect and serve those who need protection, and showed abysmal disregard for the lives of innocents, setting off a chain of events that cost additional lives, removes him from the realm of folk heroes and reduces him to merely another criminal who will be remembered for the lives he took, not for his rationale. Even if it should turn out that his original grievance was justified—he may have been right that he was treated unjustly—that does not in any way justify what he did. And for some media voices to overlook that fact is very disturbing…

Homo Ludens—Is Playing Good for Us? November 30, 2010

Posted by Nina Rosenstand in Culture, Nina Rosenstand's Posts, Philosophy of Human Nature.

Years ago the Dutch historian Johan Huizinga came out with a book, Homo Ludens, “The Playing Human,” which claimed that play is older than human culture, that even adults play for the fun of it, and that it’s good for us. That was actually an eye-opener for most people at the time. Since then the scope of play behavior analysis has been extended to social animals (see Bekoff and Pierce, Wild Justice), suggesting that social play allows for the development of a sense of fairness and justice, not only in humans but in some species of animals as well.

In the article “Why We Can’t Stop Playing,” we see the positive analysis of play continued—but this time the spotlight isn’t on playing as a social activity but on a very solitary experience: “casual games” played on our computers and our cell phones, mainly to pass the time while waiting for appointments:

Why do smart people love seemingly mindless games? Angry Birds is one of the latest to join the pantheon of “casual games” that have appealed to a mass audience with a blend of addictive game play, memorable design and deft marketing. The games are designed to be played in short bursts, sometimes called “entertainment snacking” by industry executives, and there is no stigma attached to adults pulling out their mobile phones and playing in most places. Games like Angry Birds incorporate cute, warm graphics, amusing sound effects and a reward system to make players feel good. A scientific study from 2008 found that casual games provide a “cognitive distraction” that could significantly improve players’ moods and stress levels.

Game designers say this type of “reward system” is a crucial part of the appeal of casual games like Angry Birds. In Bejeweled 2, for example, players have to align three diamonds, triangles and other shapes next to each other to advance in the game. After a string of successful moves, a baritone voice announces, “Excellent!” or “Awesome!”

In the 2008 study, sponsored by PopCap, 134 players were divided into groups playing Bejeweled or other casual games, and a control group that surfed the Internet looking for journal articles. Researchers, who measured the participants’ heart rates and brain waves and administered psychological tests, found that game players had significant improvements in their overall mood and reductions in stress levels, according to Carmen Russoniello, director of the Psychophysiology Lab and Biofeedback Clinic at East Carolina University’s College of Health and Human Performance in Greenville, N.C., who directed the study.

In a separate study, not sponsored by PopCap, Dr. Russoniello is currently researching whether casual games can be helpful in people suffering from depression and anxiety.

Hardly an incentive for further development of one’s sense of fairness and justice, like social play! But it may still have merit, if it can offset the unnaturally high levels of stress most of us labor under. For one thing, we can conclude that playing games by oneself adds an important dimension to the play behavior phenomenon; for another, I find it fascinating that the article doesn’t end with a caveat such as, “You’re just being childish, needing approval from the world,” or “If you play too much you’ll become aggressive/a mass murderer/go blind,” or whatever. For decades we’ve heard about the bad influence of computer gaming, as a parallel to the supposed bad influence of violent visual fiction. But the debate is ancient: to put it into classical philosophical terms, Plato warned against going to the annual plays in Athens because he thought they would stir up people’s emotions and thus impair their rational, moral judgment; Aristotle, who loved the theater, suggested that watching dramas and comedies would relieve tension and teach important moral lessons. In the last two or three decades most analyses of the influence of entertainment have, almost predictably, ended with a Platonic warning about the dangers of violent TV, movies, and videogames. Are we slowly moving in an Aristotelian direction? That would be fascinating, but here we should remember that Aristotle didn’t want us to OD on entertainment: the beneficial effects are only present if entertainment is enjoyed in moderation. Fifteen minutes of Angry Birds ought to be just enough…

Magical Thinking, in Moderation December 24, 2009

Posted by Nina Rosenstand in Culture, Ethics, Nina Rosenstand's Posts, Philosophy of Literature.

Remember when children’s books weren’t allowed to contain anything imaginary? At least according to the recommendations of child psychologists. We’re talking about the 1970s and well into the 1980s. No fairy tales allowed, no Tooth Fairy, no Santa, and above all no imaginary friends, because one wouldn’t want children to grow up with a bunch of illusions that life could never measure up to, would one? So instead they wrote children’s books about parents divorcing, Fluffy the dog dying, and other realistic, in-your-face topics, to train kids for more in-your-face adult hardship. Oh joy! That wasn’t much fun, was it? And I suspect that magical thinking never really went away; it just went underground—and resurfaced in graphic novels. So for a while we’ve been used to superheroes being part of the Collective Unconscious of kids. But now we even hear from psychologists that it is downright healthy for kids not only to be exposed to fantastic tales, but even to make up stories themselves. Imaginary friends are to be encouraged and welcomed into the family! Apparently, children’s cognitive powers thrive by being exposed to, and learning to be comfortable within, an imaginary universe.

Psychologists like Jacqueline Woolley, a professor at the University of Texas at Austin, are studying the process of “magical thinking,” or children’s fantasy lives, and how kids learn to distinguish between what is real and what isn’t.

The hope is that understanding how children’s cognition typically develops will also help scientists better understand developmental delays and conditions such as autism. For instance, there is evidence that imagination and role play appears to have a key role in helping children take someone else’s perspective, says Dr. Harris. Kids with autism, on the other hand, don’t engage in much pretend play, leading some to suggest that the lack of such activity contributes to their social deficits, according to Dr. Harris.

…It is important but not necessary for parents to encourage fantasy play in their children, says Dr. Woolley. If the child already has an imaginary friend, for instance, parents should follow their children’s lead and offer encouragement if they are comfortable doing so, she says. Similarly, with Santa, if a child seems excited by the idea, parents can encourage it. But if parents choose not to introduce or encourage the belief in fictitious characters, they should look for other ways to encourage their children’s imaginations, such as by playing dress-up or reading fiction.

For a narrative ethicist like myself this is of course fun stuff: psychologists advocating magical storytelling as an enhancement of social skills! That’s what narrative ethicists call a moral thought experiment. All over the world, raconteurs of children’s stories have always engaged in such thought experiments, but it is encouraging to see the activity being promoted by psychologists. However…there’s got to be more to the study than that. Exactly how, and when, does the child learn the difference between what’s real and what isn’t? Where is the built-in reality check? How far is the encouragement supposed to go? And is there an upper age limit? Are we supposed to engage in magical thinking into adulthood? (Which of course brings up the whole question of religion, and numerous anthropological studies.) This could be the flip side of the austere no-fairy-tales attitude: an indiscriminate acceptance of fantasies and magic, and I’m already beginning to yearn for stories like “When Mom and Dad Split Up.” Storytelling as a cognitive/ethical device has to include a measure of moderation, and a clear understanding that fantasy only “works” when contrasted with reality. And the studies referred to surely must include just such an understanding—it’s just not apparent from the article.

Be that as it may, there is another aspect that fascinates me: the similarity to the old discussion between Plato (who discouraged an interest in fiction) and Aristotle (who encouraged it). Arguments that were presented twenty-four centuries ago are still valid today: Plato’s concern that exposure to emotional fiction (in the theater) can make the audience forget the all-important self-control provided by rationality, contrasted with Aristotle’s enthusiasm for the moral and psychological cleansing provided by a good, emotional drama. But both Plato and Aristotle lived in a world where moderation (Meden Agan, “nothing in excess”) was a moral and aesthetic ideal. So if we go down the Aristotelian path and encourage an immersion in dramatic fiction, we should remember that Aristotle never meant for it to replace our sense of reality, but to enhance it. Some imagination is good, and even necessary, in order to understand other minds and other possibilities. Too much of it is not a good thing!

So, getting back to the imaginary friends: since this is Christmas Eve, is our imaginary friend Santa a plus or a minus in the cognitive development of a child? You decide. I never had a problem with Santa, not even when I realized (around the age of 5) that he was my granddad. And I was very careful not to let on that I had figured him out, because he was so jolly, and I didn’t want to ruin his Christmas…

Determinism Again, Again March 26, 2009

Posted by Nina Rosenstand in Ethics, Nina Rosenstand's Posts, Philosophy.

This started out as a comment to Dwight’s piece, “Determinism Is Not Fatalism!”, but it grew and grew, so I thought I might as well add it as a separate post. I read Baumeister’s piece, and for one thing, I find it frightening if a scientist doesn’t believe in mechanistic determinism—are we then back to old rags spontaneously generating mice and fleas? I suspect he assumes that “determinism” equals hard determinism. Precision is always a good thing. But hard determinism doesn’t say that everything has been laid out from Day One in a locked pattern (that would be fatalism, if we assume that the pattern is predetermined by an intelligent power). The “butterfly effect” can also be advanced as an argument within hard determinism: the world is too complex for us to predict, but guess what? Everything is caused even so, including your decisions. Micro-causes (like Dwight’s restaurant example) can alter the direction of events, in the external as well as the internal world, but that doesn’t mean they aren’t predictable effects in principle. So hard determinism is a theory about de jure predictability and causality, not about predetermination.

Another disturbing aspect is Baumeister’s advocacy of indeterminacy. As Dwight points out, this leads to utter unpredictability, and the illusion of control will be shattered more effectively than under hard determinism. The indeterminist will find that, if the theory were true, we could no longer count on our decision to order that chicken at the restaurant resulting in our actually ordering it, or on our decision to eat it resulting in our actually putting a piece of chicken in our mouth—if causality is not a factor, internally or externally, then we’re lost in a world of random effects. No, the real problem with hard determinism isn’t that it can’t be proved, as Baumeister assumes; the problem is that it isn’t falsifiable. According to hard determinism, if I behave predictably (due to my heredity or environment), it’s because of antecedent causes. If I behave unpredictably, it is also because of antecedent causes, even subconscious ones. As the determinist often argues, we do make choices, but the choices aren’t “free”; they are determined by events in our background. They only seem free to us. But if every decision is “caused,” even by some far-fetched, forgotten past event or neural quirk, and our free will thereby nullified, then the theory becomes so broad that it is fundamentally useless.

However, “caused” is not the same as “unfree” or “involuntary.” That, essentially, is what we call compatibilism. It is not, as Baumeister assumes, a watered-down version of determinism. It is making choices based on an array of possible consequences, recognizing that we decide, rationally and emotionally, from a limited spectrum of personal, social, and physical possibilities, all providing causes/reasons for our choices (and determinists tend to confuse causes with reasons). And that is what we call having a free will: not an uncaused will. So what if there are causal factors behind every decision we make? I should hope so! I want to make my free choices based on evidence and good reasoning, not on some ridiculous notion of randomness. I’d like to see results! Because if the decision is uncaused, so, too, will be the effects of the decision: random.

And, to top it off: people who truly can’t help what they’re doing are usually not held accountable. We recognize, and have always recognized, truly unfree/involuntary actions: those due to mental illness, overwhelming emotional turmoil, some physical constraint, or imminent threat (which Sartre would of course say is no excuse at all). We clearly and intuitively recognize a fundamental difference between free and unfree decisions (and Aristotle said it first: involuntary actions are due to ignorance or compulsion). Sometimes we mistake one for the other, but that doesn’t mean we don’t know the difference. So what do we do with a theory that says we are mistaken, that all actions are fundamentally involuntary (if indeed that’s what hard determinism says)? We ask (with the good old polar concept argument, or “fallacy of the suppressed correlative”): what, then, is “involuntary,” if there is no “voluntary”? “Involuntary” is now devoid of meaning. Now ask the determinist: what about actions that seem “freer” than others? Being kidnapped and missing the midterm would generally be considered within the realm of involuntary acts; choosing from a menu at a restaurant you’ve selected is usually considered a lot less involuntary. If the determinist is willing to concede that ordinary human intuition can’t be completely disregarded on this issue, we can proceed: what is implied by “less involuntary” is what the compatibilists among us call free will. So if we can imagine an act done with informed consent, by a reasonably sane adult, under only the slightest level of constraint and hereditary impulse, then we have just reinvented the concept of “free will.”

But in a practical sense, of course, hard determinism doesn’t matter. What matters in this Lebenswelt of ours, existentially, ethically, and certainly also legally (the Twinkie defense and Minority Report notwithstanding), is our human experience of free (not uncaused) choices within the limits of our horizon: choices with consequences, consequences we can and will be held accountable for.

Overselling Experimental Philosophy March 3, 2009

Posted by Dwight and Lynn Furrow in Dwight Furrow's Posts, Philosophy, Science.

This article on Experimental Philosophy (X-Phi) is overselling its capacity for innovation. Experimental Philosophy uses the techniques of empirical psychology (MRI scans, subject interviews and questionnaires, observations of behavior, etc.) to determine how ordinary people respond to philosophically interesting situations.

The authors rave about its revolutionary potential:

A dynamic new school of thought is emerging that wants to kick down the walls of recent philosophy and place experimentation back at its centre. It has a name to delight an advertising executive: x-phi. It has blogs and books devoted to it, and boasts an expanding body of researchers in elite universities. It even has an icon: an armchair in flames.

They proclaim that it has the potential to settle philosophical debates and is taking philosophy back to its roots in empirical research:

…for the x-phi fan, empirical research is not a mere prop to philosophy, it is philosophy.

But this hype is mostly nonsense. X-phi is interesting because it might help philosophers do one part of their job. But it cannot solve philosophical problems.

Philosophers have always been concerned to describe our “untutored” beliefs about the world and the reasons, or lack thereof, for holding those beliefs, and to suggest how those untutored beliefs can be made more intelligible, coherent, or in touch with reality. That first task—to describe our “intuitions”—can be controversial. Too often, when philosophers describe what “we” believe, they are describing their own allegedly “untutored” intuitions. But there is no reason to think that philosophers’ “untutored” intuitions are shared by ordinary people (not to mention the cultural biases that might come into play).

Experimental philosophy may help us determine what people believe and how they respond to various situations. Thus, it can act as a check against unreflectively assuming our intuitions are shared. But brain scans can’t tell us much about why people think as they do, and tracking blood flow or electrical activity is not going to reveal very much about patterns of reasoning. Furthermore, questionnaires and observations of behavior are notoriously unreliable in explaining the motives behind our actions, and are hardly revolutionary.

Most importantly, X-phi could not begin to tell us how we ought to think about reality. It is rooted in what is, not what should be. It can be critical of philosophers’ pretensions but not of the beliefs it purports to describe. It will not be making philosophical discoveries.

The real problem with some contemporary philosophy is not the absence of scientific data but the use of odd and fanciful scenarios like the Trolley Problem to unearth how we reason. Most people are not trained or accustomed to thinking philosophically about wild, hypothetical scenarios that they have never encountered. I’m not at all sure that discovering what their brains do when confronted with such hypotheses is revealing.

To invoke Nietzsche (or Aristotle for that matter) in such an enterprise is a bit rich. Although both were interested in psychology, they were interested in how people responded to the realities of life—not the daydreams of professors.