
The Road to Imperial Ruin March 31, 2009

Posted by Dwight Furrow in Current Events, Dwight Furrow's Posts, Ethics, Political Philosophy.

Cross-posted at Reviving the Left.

One of the main themes of Reviving the Left is that the ethics of care is relevant in the political arena in areas such as foreign policy.

Unlike moral theories that strive for universality, and thus focus on what human beings have in common, the ethics of care rests prescriptions on knowledge of particular persons, their circumstances, and their differences, and the cultivation of empathy and perceptiveness to gain such knowledge.

Matt Yglesias makes a point about our approach to Pakistan that implicitly reinforces the importance of an ethic of care.

In responding to the argument that we may not be able to trust the Pakistanis to root out the Taliban and Al-Qaeda from tribal areas, he writes:

“This sort of thing is, in my view, really the achilles heel of the American imperial project….And when we get involved in things like the internal politics of Pakistan, or political reform in Egypt, or wars in the Horn of Africa, and so forth we’re dealing in situations where the level of understanding is incredibly asymmetric. If you go to pretty much any country in the world, you’ll find that educated people there know more about the United States than you do about their country. Nobody at highest levels of the American government speaks Urdu. Or Arabic. Or Amharic or Somali or Pashto or Tajik.

Lots of people at high levels in the Pakistani government speak English….they have a vast bounty of media outlets to peruse to gather intelligence. And year-in and year-out Pakistan cares about the same smallish set of countries—Pakistani officials are always focused on issue in their region and issues with the United States. Our officials dance around—the Balkans are important this decade, Central Asia the next, Russia and the Persian Gulf flit on and off the radar, sometimes we notice what’s happening in Mexico, etc.

In other words, in a straightforward contest of power between the United States and Pakistan, we can of course win. But in a scenario where we are trying to manipulate the situation in Pakistan in such-and-such a way and Pakistani actors are trying to manipulate the situation for their own ends, the odds of us actually outwitting the Pakistanis are terrible. They’re in a much better position to manipulate us than we are them.

This is one reason why so many of our foreign policy and foreign aid initiatives go wrong. We assume that other people are like us. We assume they share our interests, habits of communication, and ways of looking at the world because we assume our way is simply the human way.

And these assumptions are encouraged by our dominant moral theories (e.g. Kantian or utilitarian theories) that enjoin us to act only on prescriptions on which it would be rational for anyone to act. Our moral reflection tends to take place on a very general and very generic level.

This Is Your Brain on Money March 30, 2009

Posted by Dwight Furrow in Dwight Furrow's Posts, Political Philosophy, Science.

Economists often talk about a phenomenon called the “money illusion”, which they think is quite paradoxical. Experimental data shows that when people get, for instance, a 5% raise at work and yearly inflation is at 4%, they feel better about their situation than if they had gotten only a 1% raise with no inflation. The alleged paradox is that the buying power in both cases is exactly the same. Economists assume that a rational agent should be indifferent between the two cases.
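The arithmetic behind the alleged paradox can be made concrete. Here is a quick sketch (my own illustration, not from the post): under the usual back-of-the-envelope rule, real gain ≈ nominal raise minus inflation, so both scenarios net about 1%; the exact ratio formula actually makes the 5%/4% case a hair worse.

```python
def real_raise(nominal: float, inflation: float) -> float:
    """Exact real change in buying power: (1 + nominal) / (1 + inflation) - 1."""
    return (1 + nominal) / (1 + inflation) - 1

# Scenario A: 5% raise with 4% inflation; Scenario B: 1% raise, no inflation.
a = real_raise(0.05, 0.04)
b = real_raise(0.01, 0.00)

print(f"A: {a:.4%}, B: {b:.4%}")  # A is roughly 0.96%, B exactly 1%
# The approximation 5% - 4% = 1% - 0% is what licenses the claim that
# the two scenarios leave buying power "exactly the same."
```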

I don’t really think there is a paradox here. There are many reasons a rational person might prefer the 5% raise, 4% inflation scenario. A raise might reflect one’s value to one’s employer, or an assessment of one’s ability, work habits, or achievements, and it is not irrational to want to be appreciated. People may also think they have some control over their raises but none at all over inflation, which can fluctuate unpredictably; if inflation moderates, the person with the 5% raise comes out ahead of the person with the 1% raise.

But in any case, via Brad DeLong, Weber and Falk provide neurological evidence of the phenomenon. Their subjects earned wages and shopped from catalogues while connected to an MRI scanner:

“We had now confronted our test subjects with two different situations”, Falk explains. “In the first, they could only earn a relatively small amount of money, but the items in the catalogue were also comparatively cheap. In the second scenario, the wage was 50 per cent higher, but now all the items were 50 per cent more expensive. Thus, in both scenarios the participants could afford exactly the same goods with the money they had earned – the true purchasing power had remained exactly the same.” The test subjects were perfectly aware of this, too – not only did they know both catalogues, but they had been explicitly informed at the start that the true value of the money they earned would always remain the same.

Surprisingly:

“In the low-wage scenario there was one particular area of the brain which was always significantly less active than in the high-wage scenario”, declares Bernd Weber, focusing on the main result. “In this case, it was the so-called ventro-medial prefrontal cortex – the area which produces the sense of quasi elation associated with pleasurable experiences”. Hence, on the one hand, the study confirmed that this money illusion really exists, and on the other, it revealed the cerebro-physiological processes involved…

The policy implications?

I suppose this suggests that people will prefer moderate inflation to outright wage cuts, even when the two cost them the same in buying power.

Reviving the Left March 29, 2009

Posted by Dwight Furrow in Dwight Furrow's Posts, Political Philosophy.

As some of you know, I have been working on a book for the past few years. It is entitled Reviving the Left: The Need to Restore Liberal Values in America. The book has just been released.

The aim of the book is to describe a new moral vision for liberalism, one that rests less on social contract theory and more on the ethics of care. It is a book of popular philosophy intended for philosophers and non-philosophers alike. Hopefully, a quick but informative read.

I have a new website devoted to the book that includes two new blogs—one devoted to liberal theory, values, and politics, and the other devoted to liberal activism (maintained by my son who has considerable activist experience). So head on over and check us out.

I will, of course, continue to blog here, but with some of the more political material moving to the new site.

With two blogs to feed, when will I sleep? I’m not sure.

Celebrating Lévi-Strauss, and Barbie March 28, 2009

Posted by Nina Rosenstand in Culture, Current Events, Nina Rosenstand's Posts.

I have neglected to celebrate two significant birthdays on this blog, so now I want to make amends: Neither one is a philosopher, but both have given the late 20th century a very distinct flavor, each in his/her own way. The first birthday we should celebrate happened last November 28, when the famed French cultural anthropologist and structuralist Claude Lévi-Strauss turned 100. The extraordinary thing about this is that Lévi-Strauss is still alive and kicking! The grand old man is truly a living legend, having transformed anthropology—which at the time was a generally accepted lesson in ethical relativism and a study in tribal ritual function—into a broad analysis of myth, focusing on binary opposing elements and based on the linguistic theory of structuralism developed by Roman Jakobson and Ferdinand de Saussure.

Instead of focusing on the content of the tribal stories, Lévi-Strauss analyzed the relationship between the opposite components of the story (hunter/prey, raw/cooked, life/death, etc.). For him, the stories of myth have no deeper meaning other than a tension between opposites that becomes resolved by being transformed into another set. And ultimately, the structure of myth becomes the template for all human cultural activity. Structuralism is no longer considered the key to the concept of meaning that many held it to be in the 1970s, but a theory of meaning can’t simply bypass it: binary tensions are fundamental structures of our stories at all levels, although most narrative philosophers look beyond the binary tensions to some assumption of underlying meaning/message.

Lévi-Strauss was at one point a philosophy student, and once said that he got into anthropology to escape from philosophy. He may have thought he left philosophy, but philosophy never left him.

You never escape from philosophy…you just expand your territory. And since I myself latched on to philosophy to escape from anthropology, way back in the 20th century, I have always found that my own escape was more successful than his…

 

The other birthday of a cultural icon is that of Barbie. This month Barbie turned 50. Still youthful, still skinny, still with those long impossible legs. And the symbolic image of everything from the liberated woman to the brainwashed anorexic teen—a doll that was no longer a baby doll, but became the mirror in which girls saw their future self—and despaired. At least that’s what some say. And she has been analyzed to pieces in a number of ways, by Barthes and Baudrillard, through post-structuralism to deconstruction. So in honor of Lévi-Strauss’s 100th birthday, here is a quickie (and, yes, incomplete) structural analysis of Barbie: Think of the opposites involved in her figure as well as her pervasive popularity, as opposed to those of the many little girls playing with her. Tall/short, skinny/chubby, passive/active, often also white/non-white (before the Barbie line became racially diverse), young adult/child, unchanging/changing, loved/hated, and so forth. And the resolution of the tensions? For some girls, growing up, coming to terms with their own figure. For others, apparently, Barbie-torture. And then there is the ultimate reification of the little idol, if she has survived the torture phase: regarding her as a collectible, an investment. Probably a better bet these days than most of our pension plans.

How Not to Think March 27, 2009

Posted by Dwight Furrow in Culture, Current Events, Dwight Furrow's Posts, Political Philosophy.

The New York Times is read by millions of people every day. Why do they allow drivel on their editorial pages? (The offending op-ed piece is in Friday’s Union-Tribune.)

On Thursday, Nicholas Kristof wrote:

Ever wonder how financial experts could lead the world over the economic cliff?

One explanation is that so-called experts turn out to be, in many situations, a stunningly poor source of expertise. There’s evidence that what matters in making a sound forecast or decision isn’t so much knowledge or experience as good judgment — or, to be more precise, the way a person’s mind works.

Huh? How does one make a good judgment without knowledge or experience? Don’t we go to doctors when we are sick for a reason—because they have knowledge and experience?

Then he goes on to give examples of how people who pretend to be experts can bamboozle us.

“But experts who are trotted out on television can move public opinion by more than 3 percentage points, because they seem to be reliable or impartial authorities.

Well of course. But people who pretend to be experts are not really experts. We are fooled by reliance on authority—but that doesn’t tell us anything about genuine expertise.

We are already deep in the weeds of a thesis descending into nonsense—but it gets worse.

The expert on experts is Philip Tetlock, a professor at the University of California, Berkeley. His 2005 book, “Expert Political Judgment,” is based on two decades of tracking some 82,000 predictions by 284 experts. The experts’ forecasts were tracked both on the subjects of their specialties and on subjects that they knew little about.

The result? The predictions of experts were, on average, only a tiny bit better than random guesses — the equivalent of a chimpanzee throwing darts at a board.

and then reinforces his point with:

Other studies have confirmed the general sense that expertise is overrated.

This completely misconstrues Tetlock’s thesis. Tetlock’s book is not about the uselessness of knowledge or expertise. Tetlock, a highly regarded psychologist, shows that a certain kind of judgment—one that is very sensitive to context, complexity, and change—is more reliable than a judgment that follows from rigidly held ideology that produces formulaic solutions.

Nothing in Tetlock’s book could be construed as an attack on knowledge or expertise. Instead, he attempts to distinguish genuine experts from hacks.

Kristof, in his zeal to sell newspapers, is trying to tap into the anti-intellectualism that pervades our culture. Tristero over at Hullabaloo explains why this is dangerous.

This is one of the silliest pseudo-American myths, pure Norman Rockwell, that the average Joe (never a Jane) can perceive The Bigger Truth that somehow eludes the so-called pointy-headed experts.

This is how we get Joe the Plumber giving advice on foreign policy.

The decline of newspapers in this country may be a real loss in some respects. But sometimes they deserve their demise.

Friday Food Blogging 3/27/09 March 27, 2009

Posted by Dwight Furrow in Dwight Furrow's Posts, Food and Drink, Philosophy.

Philosophy on the Mesa ponders all questions about how to live well—the spirit of Socrates lives on.

Questions like:

Stilltasty.com is a website with everything you need to know about what really matters. (h/t to IFA)

See! Isn’t philosophy practical?

Better People, Not Just Better Rules March 26, 2009

Posted by Dwight Furrow in Current Events, Dwight Furrow's Posts, Ethics, Political Philosophy.

On Thursday, Treasury Secretary Geithner outlined his plans for re-regulating the financial system in order to avoid future economic calamities like the one we are experiencing.

His proposal includes more government oversight of risk-taking in financial markets and tighter control of financial institutions, especially regarding how much capital they must hold as a buffer against losses.

This, of course, entails a significant expansion of the power of government regulators.

No doubt these regulations are necessary. But they are not sufficient.

After all, the Federal Reserve under Alan Greenspan could have imposed tighter lending standards on institutions or higher interest rates to slow down the growth of the housing bubble without any change in regulations. And the SEC already had the authority to raise capital requirements for banks.

Any of these moves would likely have prevented the credit crisis. But none of these steps were taken.

The problem was not that the rules were not good enough; rather, the people charged with implementing the rules didn’t think regulating the private sector was important. They believed the government should not exercise oversight despite the fact that it was their job to do so. The theory that government was an unfortunate obstacle to economic activity drove the zeal to deregulate, but more importantly, it influenced the behavior of officials charged with the task of regulation. It was ideology and its influence on the motives of individuals, not the presence or absence of rules and procedures, that caused the collapse in our financial markets.

This is why I argue that we should stop thinking of political ideologies as competing ways of organizing society, and instead think of them as prescribing competing constellations of motives for acting.

We need better-motivated people, not just better-formulated rules. And that requires moral change, not just political change.

Determinism Again, Again March 26, 2009

Posted by Nina Rosenstand in Ethics, Nina Rosenstand's Posts, Philosophy.

This started out as a comment to Dwight’s piece on Determinism Is Not Fatalism!, but it grew and grew, so I thought I might as well add it as a separate post. I read Baumeister’s piece, and for one thing, I find it frightening if a scientist doesn’t believe in mechanistic determinism—are we then back to old rags spontaneously generating mice and fleas? I suspect he assumes that “determinism” equals hard determinism. Precision is always a good thing. But hard determinism doesn’t say that everything has been laid out from Day One, in a locked pattern (which would be fatalism, if we assume that the pattern is predetermined by an intelligent power). The “butterfly effect” can also be advanced as an argument within hard determinism: the world is too complex for us to predict, but guess what? Everything is caused, even so, including your decisions. Micro-causes (like Dwight’s restaurant example) can alter the direction of events, in the external as well as the internal world, but that doesn’t mean they aren’t predictable effects, in principle. So hard determinism is a theory about de jure predictability and causality, not about predetermination.

 

Another disturbing aspect is Baumeister’s advocacy of indeterminacy. As Dwight points out, this leads to utter unpredictability, and the illusion of control will be shattered more effectively than under hard determinism. Had the theory been true, the indeterminist would find that we could no longer count on our decision to order the chicken at the restaurant actually resulting in our ordering it, or on our decision to eat it actually resulting in a piece of chicken reaching our mouth—if causality is not a factor, internally or externally, then we’re lost in a world of random effects. No, the real problem with hard determinism isn’t that it can’t be proved, as Baumeister assumes; the problem is that it isn’t falsifiable. According to hard determinism, if I behave predictably (due to my heredity or environment), it’s because of antecedent causes. If I behave unpredictably, it is also because of antecedent causes—even subconscious ones. As the determinist often argues, we do make choices, but the choices aren’t “free”; they are determined by events in our background. They only seem free to us. But if every decision is “caused,” and thus nullifies our free will, even via some far-fetched, forgotten past event or neural quirk, then the theory is so broad that it is fundamentally useless.

 

However, “caused” is not the same as “unfree” or involuntary. That, essentially, is what we call compatibilism. It is not, as Baumeister assumes, a watered-down version of determinism. It is making choices based on an array of possible consequences, recognizing that we decide, rationally and emotionally, from a limited spectrum of personal, social, and physical possibilities, all providing causes/reasons for our choices (and determinists tend to confuse causes with reasons). And that is what we call having a free will: not an uncaused will. So what if there are causal factors behind every decision we make—I should hope so! I want to make my free choices based on evidence and good reasoning, not on some ridiculous notion of randomness. I’d like to see results! Because if the decision is uncaused, so, too, will be its effects: random.

 

And, to top it off: People who truly can’t help what they’re doing are usually not held accountable. We recognize, and have always recognized, truly un-free/involuntary actions: due to mental illness, overwhelming emotional turmoil, some physical constraint or imminent threat (which Sartre would of course say is no excuse at all). We clearly and intuitively recognize a fundamental difference between free and unfree decisions (and Aristotle said it first: involuntary decisions are due to ignorance and compulsion). Sometimes we mistake one for the other, but that doesn’t mean we don’t know the difference. So what do we do with a theory that says we are mistaken, that all actions are fundamentally involuntary (if indeed that’s what hard determinism says)? We ask (with the good old polar concept argument, or “fallacy of the suppressed correlative”), then what is “involuntary,” if there is no “voluntary”? “Involuntary” is now devoid of meaning. Now ask the determinist, what about actions that seem “freer” than others? Being kidnapped and missing the midterm would generally be considered within the realm of involuntary acts. Choosing from a menu at a restaurant you’ve selected is usually considered a lot less involuntary. If the determinist is willing to concede that ordinary human intuition can’t be completely disregarded on this issue, we can proceed: What is implied by “less involuntary” is what the compatibilists among us call free will.  So if we can imagine an act, done with informed consent,  by a reasonably sane adult, with only the slightest level of constraint and hereditary impulses, then we have just reinvented the concept of “free will.”

 

But in a practical sense, of course, hard determinism doesn’t matter. What matters in this Lebenswelt of ours, existentially, ethically, and certainly also legally (the Twinkie defense and Minority Report notwithstanding), is our human experience of free (not uncaused) choices within the limits of our horizon, choices with consequences—consequences we can and will be held accountable for.

 

The Boss Says We’ve Lost Our Moral Center March 25, 2009

Posted by Dwight Furrow in Culture, Current Events, Dwight Furrow's Posts, Ethics, Political Philosophy.

And no one wants to argue with The Boss.

Recently two of my favorite people appeared on the same show. Jon Stewart interviewed Bruce Springsteen.

So what did Springsteen have in mind when he said that the country has “lost its moral center”?

Here is how I would explain it.

It has long been an assumption in most social and political theories, whether in philosophy or the social sciences, that we can best understand human behavior by assuming that each person is a thoroughly self-interested, rational agent.

This self-interested rational agent knows what he wants, and he makes decisions by rationally calculating how to maximize the satisfaction of his desires. So whether it is a consumer choosing between Coke and Pepsi, an investor deciding between stocks and bonds, or a person deciding how best to spend her time, the agent compares the added benefit of each alternative course of action and picks whichever adds the most. (Economists call this marginal utility.)
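The decision procedure just described, comparing added benefit at the margin, can be sketched in a few lines. This is a toy illustration of my own; the utility curves and numbers are invented, not from the post: an agent with a fixed budget repeatedly buys whichever good currently offers the higher marginal utility.

```python
import math

# Hypothetical diminishing-returns utility curves (invented for illustration).
utilities = {
    "coke":  lambda q: 10 * math.log(1 + q),
    "pepsi": lambda q: 8 * math.log(1 + q),
}

def marginal_utility(good, held):
    """Added benefit of one more unit of `good`: U(q + 1) - U(q)."""
    u = utilities[good]
    return u(held[good] + 1) - u(held[good])

held = {"coke": 0, "pepsi": 0}
for _ in range(5):  # allocate five units of budget, one at a time
    # The "rational agent" buys whichever good adds the most utility right now.
    best = max(utilities, key=lambda g: marginal_utility(g, held))
    held[best] += 1

print(held)  # with these curves: {'coke': 3, 'pepsi': 2}
```

Because each purchase lowers a good's marginal utility, the agent ends up splitting its budget rather than spending it all on the good it likes best overall.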

Of course, no one really makes decisions this way, because emotions, irrational attachments, and a variety of human weaknesses always enter the mix of factors explaining our decisions. But most theorists, and especially economists, have found this idea of a rational, self-interested agent useful, while realizing that it is an idealization and oversimplification. It is useful because our lives are really complicated and messy, and, it is thought, we have to eliminate some of that messiness if we are going to produce intelligible models of human action.

The recent collapse in our economic system, and the inability of economists to predict it, has called into question the effectiveness of these assumptions as a model. But I think the model has had more pernicious effects than simply disrupting economic theory.

One way to understand what Springsteen was talking about when he referred to losing our moral center is that, as a culture, we took this theoretical idealization of a rational, self-interested person out of the context of economic modeling and made it a moral ideal—something we should strive to be. This move largely defines modern conservatism.

It is thus no wonder that we have lost our moral center. No doubt human beings are sometimes self-interested desire machines trying to accumulate as much as we can. But morality begins when we see the folly of that. When we make the idea of a self-interested, rational maximizer our moral ideal, we lose the very basis of any moral point of view.

The question our current predicament poses is whether we can regain our moral center.

You can find out how right here.

Determinism Is Not Fatalism! March 24, 2009

Posted by Dwight Furrow in Dwight Furrow's Posts, Philosophy.

One of my pet peeves is that people who should know better describe determinism as if it were fatalism. Here is Roy Baumeister, Professor of Psychology at Florida State, describing determinism:

To the determinist, the march of causality will make one outcome inevitable, and so it is wrong to believe that anything else was possible. The chooser does not yet know which option he or she is going to choose, hence the subjective experience of choice. Thus, the subjective choosing is simply a matter of one’s own ignorance – ignorance that those other outcomes are not really possibilities at all.

To illustrate: When you sit in the restaurant looking at the menu, it may seem that there are many things that you might order: the fish, the chicken, the steak, the onion soup. Eventually you will make a selection and eat it. To a determinist, causal processes dictated that what you ordered was inevitable. When you entered the restaurant you may not have known, yet, that you would end up ordering the chicken, but that simply reflects your ignorance of what was happening in your unconscious mind. To a determinist, there was never any chance at all that you could have ordered the fish. Maybe you saw it on the menu and were tempted to get it, and maybe you even started to order it and then changed your mind. No matter. It was never remotely possible. The causal processes that ended up making you order the chicken were in motion. Your belief that you could have ordered the fish was mistaken.

Professor Baumeister is describing fatalism, not determinism. Fatalism is the view that the future is fixed, pre-ordained, so my deliberation about what to do in a situation doesn’t matter. If I walk into a restaurant and I am already fated to choose chicken from the menu, well then I will choose chicken regardless of my deliberation. Baumeister says “To a determinist, there was never any chance at all that you could have ordered the fish.”

But this is simply a misunderstanding of determinism. Determinism asserts that my actions are caused by my psychological state and other causal influences operating when I make a decision. But that psychological state will include a deliberative process that is continually being shaped and reshaped by new information. When I walk into a restaurant, given my preferences, I may be more likely to choose some items from the menu rather than others. But what I end up choosing will depend on odors wafting from the kitchen, the conversation at the table, the recommendations of the waiter, the descriptions of dishes on the menu, and other countless details about my surroundings that influence me. And I have to deliberate to find out, in light of those influences, what my preferences are. To the extent I am open to new information and have psychological states that are responsive to my surroundings, my actions are not fated.

It is of course true that all of these influences will determine what I choose. But when I walk into the restaurant most of the options on the menu (except for those that are distasteful) are genuine options and my ultimate choice will depend on my deliberation, which in turn is dependent on complex causal influences. So there is nothing illusory about choice—it is as real as the causal processes that determine my action and is in fact part of those processes.

So when Baumeister says:

“Choice is fundamental in human life. Every day people face choices, defined by multiple possibilities. To claim that all that is illusion and mistake is to force psychological phenomena into an unrealistic strait jacket.”

he is inventing a straw man: a position no determinist holds.

He goes on to argue:

Also, psychological causality as revealed in our labs is arguably never deterministic. Our studies show a change in the odds of one response over another. But changes in the odds entail that more than one response was possible. Our entire statistical enterprise is built on the idea of multiple possibilities. Determinism denies the reality of this. Statistics are just ways of coping with our ignorance, to a determinist – statistics do not reflect how reality actually works.

Again, simple nonsense. Changes in odds reflect changes in causal conditions. There are multiple possibilities because there are multiple causal factors and the correlations don’t reveal which causal factors are at work.
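The point that shifting odds reflect shifting causal conditions rather than genuine indeterminism can be illustrated with a toy simulation (entirely my own construction, with made-up factors): the chooser below is strictly deterministic, yet because the experimenter never measures the underlying causes, the data look like mere probabilities, and the "odds" of ordering fish move when a hidden causal factor moves.

```python
import random

def choose(hunger, smell_of_fish):
    # Strictly deterministic: identical inputs always yield the identical choice.
    return "fish" if smell_of_fish + 0.5 * hunger > 1.0 else "chicken"

def observed_fish_rate(mean_smell, trials=10_000, seed=0):
    """Fraction of diners ordering fish when the hidden causes vary randomly."""
    rng = random.Random(seed)
    fish = 0
    for _ in range(trials):
        hunger = rng.random()                  # unmeasured causal factor
        smell = rng.random() * 2 * mean_smell  # another unmeasured factor
        fish += choose(hunger, smell) == "fish"
    return fish / trials

# Same deterministic chooser, different causal conditions, different "odds":
print(observed_fish_rate(mean_smell=0.4))  # fish looks improbable
print(observed_fish_rate(mean_smell=0.8))  # now fish looks much more likely
```

The statistics here describe the experimenter's ignorance of the hidden factors, not any indeterminacy in the chooser itself, which is exactly the determinist's reply.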

He concludes:

To believe in determinism is thus to go far beyond the observed and known facts. It could be true, I suppose. But it requires a huge leap of faith, as well as a tortuous effort to deny that what we constantly observe and experience is real.

If determinism is false, then human actions must be uncaused—mysterious events that pop into existence and are somehow under our control yet outside the causal structure of reality.

Who is making a leap of faith?

A course in philosophy should be required for all scientists before they have a license to publish.
