Why Economists Should Not Be Allowed to Vote September 29, 2007

Posted by Dwight Furrow in Current Events, Dwight Furrow's Posts, Political Philosophy.

Economist Bryan Caplan has written a book entitled The Myth of the Rational Voter: Why Democracies Choose Bad Policies, in which he argues that voters are irrational because they don’t think like economists.

According to Caplan, the typical voter favors government regulation of the economy to avoid economic bad times, is suspicious of excess profits by corporations, wants to protect domestic industry from foreign competition, and puts too much value on existing jobs. These are irrational beliefs because they violate economists’ faith in minimally regulated free markets.

So workers are irrational if they don’t want their jobs to go away, prefer to avoid the negative social consequences of recessions, and don’t like cheaters and thieves? There ought to be laws against economists making pronouncements about rationality, and perhaps more studies on the peculiar psychopathologies to which economists seem susceptible.

As Louis Menand writes in his review of this book, “Most people, even if you explained to them what the economically rational choice was, would be reluctant to make it, because they value other things—in particular, they want to protect themselves from the downside of change. They would rather feel good about themselves than maximize (even legitimately) their profit, and they would rather not have more of something than run the risk, even if the risk is small by actuarial standards, of having significantly less.”

Maximal economic efficiency is not the only thing we value–it is not irrational to value stability, risk reduction, or moral virtue.

If Caplan is the exemplar, contemporary economics is utilitarianism gone completely off the rails.

That Thin, Wild Mercury Sound September 29, 2007

Posted by Dwight Furrow in Art and Music, Culture, Dwight Furrow's Posts.

This is a fascinating account of the making of one of rock music’s greatest works–Blonde on Blonde by Dylan.

It illustrates the importance of the sonic properties of rock music. Even a lyricist of Dylan’s stature obsesses endlessly about sound.

“Bird Brain”–No Longer An Epithet September 25, 2007

Posted by Dwight Furrow in Animal Intelligence, Dwight Furrow's Posts, Science.

We used to call people who were egregiously deficient in mental acuity a “bird brain.” But recent research suggests this epithet is no longer appropriate.

“A nutcracker can remember the precise location of hundreds of different food storage spots. And crows in Japan have learned how to get people to crack walnuts for them: They drop them near busy intersections, then retrieve the smashed nuts when the traffic light turns red.”

The article calls this “part of a growing recognition of the genius of birds.” I don’t know if this qualifies as genius, but it is not bad for a bird.

At any rate, we need a new epithet. I guess “dumb as a tree” still works. Is there any research on the intelligence of trees?

Confusion About Academic Freedom September 23, 2007

Posted by Dwight Furrow in Current Events, Dwight Furrow's Posts, Ethics.

Commentary by Eric Rauchway on two recent events involving the University of California exhibits some confusion about academic freedom.

Liberal law professor Erwin Chemerinsky was hired as Dean of UC Irvine’s law school. However, the offer was rescinded, apparently because of political pressure from outside the university, after he published an op-ed highly critical of then-Attorney General Gonzales. UC Irvine’s action was widely criticized as a violation of academic freedom, and Chemerinsky has since been reinstated.

UC Irvine was wrong to rescind their offer to Chemerinsky (and I am pleased he has been reinstated) but not because it was a violation of academic freedom.

Deans should be hired on their merits–according to their ability to govern their schools–not on the basis of their political views. Chemerinsky’s political views were well-known before he was hired, and they were apparently judged irrelevant until outside political pressure forced a change. Thus, UCI was guilty of corruption in allowing external political pressure to subvert its hiring process. But deans, even if they are former scholars, do not have academic freedom. Once they are hired as administrators, their primary responsibility is no longer the pursuit of truth. They have a weightier obligation to their institution, which can be damaged by the expression of controversial political positions.

The Chemerinsky case bears a superficial resemblance to recent action by the UC Regents to disinvite former Harvard President Lawrence Summers, who had been scheduled to speak at a board dinner.

Summers was forced out of his position as President of Harvard because of remarks he made suggesting that women may have less scientific aptitude than men. Although this is a topic of debate among cognitive scientists, Harvard was right to force him out because, as an administrator, he had no business casting doubt on the abilities of the highly qualified women scientists who worked under his leadership. The academic freedom Summers enjoyed as an academic economist was forfeited when he became Harvard’s President.

However, the regents did violate standards of academic freedom when they disinvited Summers from their board dinner because of pressure from UC faculty. Since Summers is a scholar no longer obligated to Harvard, he should not be precluded from speaking on controversial issues. That is his obligation and right as a scholar. Of course, the faculty who protested his appearance have every right to their protest as well.

As Rauchway’s article points out, although Summers’s views may not be well supported by the full range of scientific evidence as we understand it today, the question of whether men and women have different capacities is a legitimate and important inquiry that should not be stifled by political agendas.

On handling invitations, the University of California is batting 0 for 2 this month.

Thanks to Jonathan McLeod for bringing this to my attention.

Poverty and Liberal Equality September 16, 2007

Posted by Dwight Furrow in Current Events, Dwight Furrow's Posts, Ethics, Political Philosophy.

If there is anything that contemporary liberals agree about, it is that we ought to do more to fight poverty. In the richest country in the world, it is a scandal that more than one person in ten falls below the poverty line (even after food stamps, welfare, etc. are included in their income). Despite massive increases in our nation’s wealth (measured by gains in Gross Domestic Product) over the past 40 years, the percentage of persons living in poverty has not changed much.

Liberals want to solve the problem of poverty by providing the poor with the same opportunities that middle-class folks have through basic income support, public education, wider access to health care and child care, etc.

Conservatives, of course, argue that there is little we can do about poverty. If a person is poor, it is her fault for not working hard enough, or not making good decisions about getting an education or saving money. The best we can do is let the free market punish people for their bad decisions and, if they remain poor, so be it.

What both liberals and conservatives agree on is that the poor are irrational when they don’t take advantage of their opportunities. The poor tend to waste their money, fail to develop habits necessary to participate in the work place, drop out of school, or have too many children at too young an age.

Liberals and conservatives disagree about what explains the irrationality–conservatives believe the best explanation is individual moral weakness. Liberals believe it is lack of opportunity, a history of racism or some other form of discrimination that undermines self-respect, structural problems in the economy, etc.

Philosopher Charles Karelis argues they are both wrong about why the poor fail to make use of their opportunities. He argues that the poor are perfectly rational in declining to take advantage of opportunities, up to a point. I think he is right about this and his view indicates some new directions for liberal thinking on this issue.

Karelis’s recent article is behind a subscription wall. This article in the Washington Post provides a cursory explanation of his view.

The following thought experiment (similar to the one used by Karelis) illustrates the basic idea. Suppose you have to travel 10 miles to the market to get food for your family, you have no transportation available, and only 5 dollars in your pocket. Suppose someone offers to take you the first mile for 1 dollar. Karelis argues that it is irrational for you to accept the ride. The cost is too great given that you have no guarantee you can get a ride the rest of the way (and back) or have money to buy groceries when you get there. Only if someone offers you a ride most of the way to the market, leaving you with enough money to purchase groceries, is it rational to accept the ride.
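The all-or-nothing structure of this bet can be sketched as a toy expected-value calculation. Everything in the sketch below is my own illustration, not Karelis’s: the dollar figures, the per-mile probability of covering unpaid miles, and the value assigned to the groceries are all assumed numbers.

```python
# Toy model of the ride-to-market example. All numbers are illustrative
# assumptions: the payoff is all-or-nothing -- the groceries are only
# obtained if you cover every remaining mile AND can still afford them.

def expected_gain(ride_miles, fare, total_miles=10, money=5.0,
                  grocery_cost=4.0, grocery_value=10.0, p_per_mile=0.7):
    """Expected dollar-value gain from paying `fare` for a partial ride.

    p_per_mile: assumed chance of somehow covering each unpaid mile.
    """
    if money - fare < grocery_cost:
        return -fare  # the fare leaves too little to buy groceries at all
    remaining = total_miles - ride_miles
    p_arrive = p_per_mile ** remaining  # chance of finishing the trip
    return p_arrive * grocery_value - fare

# A short ride is a bad bet; a ride most of the way is a good one.
short_ride = expected_gain(ride_miles=1, fare=1.0)  # most of trip uncovered
long_ride = expected_gain(ride_miles=9, fare=1.0)   # nearly all covered
print(f"1-mile ride: {short_ride:+.2f}, 9-mile ride: {long_ride:+.2f}")
```

On these assumptions the one-mile ride has negative expected value while the nine-mile ride is clearly worth the same dollar, which is the shape of Karelis’s point: small steps toward a distant, uncertain payoff can be rational to refuse.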

The poor are in a similar situation. We give them welfare, some minimal job training, emergency health care, or access to student loans but none of this gets them close to escaping poverty given the obstacles they confront. Thus, they are not irrational when they spend their meagre income on booze or drugs, have children when they are 17, or drop out of school. The cost of putting off short-term pleasure for long term gains is too great when the long term gains are so distant and unlikely that they don’t appear to be live options.

The implication for liberalism is that prosperous liberals should not view the poor as being “just like us”–disposed to reason in the same way we do when presented with opportunities. When you have enough resources–good parents, good genes, a good education–the American Dream looks achievable and the path to it well trodden and well marked. But if one is not so fortunate, the light at the end of the tunnel really is more than likely an oncoming train. It’s best to stay clear of the tunnel altogether. Giving the poor the same opportunities the rest of us have will not suffice–the ideal of equality is too thin to give us a handle on this problem.

If we are going to do anything about poverty, it will require more than a few liberal carrots and a big conservative stick. We might have to actually care about the fate of the poor (instead of engaging in a lot of cheap moralizing) and do whatever it takes to make our society genuinely inclusive.

Karelis has a book out on the subject. It’s certainly on my reading list.

A hat tip to Nina for sending me the Washington Post article.

Future Shock! (arrives in dribs and drabs) September 11, 2007

Posted by Dwight Furrow in Culture, Dwight Furrow's Posts, Ethics, Science.

Dystopian visions of the future in films such as Blade Runner and Gattaca confront us with a future fully arrived. They instantaneously transport us to a new and frightening world for which our mundane existence leaves us unprepared. And the angst we experience at not knowing how to live in such a world heightens the dramatic appeal of these films.

A good deal of intellectual discussion of our technological future performs the same cheap trick. Prognosticators such as Francis Fukuyama and Michael Sandel write, in portentous tones, that advances in biotechnology or artificial intelligence will fundamentally threaten what it means to be human. They regale us with visions of parents creating designer babies in a search for perfection that undermines sympathy for the less well-endowed, a future of mutants and superbeings who obviate the need for compassion, solidarity, etc.

But these warnings miss a fundamental fact about all technological advance: it doesn’t arrive all at once. As Ray Tallis points out in this insightful essay:

 “Of course, people are worried about more invasive innovations; in particular, the direct transformation of the human body. And this is where the gradualness of change is important, because as individuals we have a track record of coping with such changes without falling apart or losing our sense of self entirely. After all, we have all been engaged all our lives in creating a stable sense of our identity out of whatever is thrown at us.”

We should think carefully about technological advance, but leave the scare tactics behind, and give some credit to future generations and their ability to cope as past generations have.

Just Say No? September 8, 2007

Posted by Nina Rosenstand in Ethics, Nina Rosenstand's Posts, Science.

Another breakthrough by neuroscientists concerning ethics and the brain: Marcel Brass from Germany’s Max Planck Institute and Patrick Haggard from University College London (yes, the place where Jeremy Bentham is still sitting in his mahogany closet) have just published their findings that a center in the brain acts as a “second thought” or self-control mechanism that allows us to stop what we were doing or intended to do. This looks like evidence that we have freedom to choose, as a scientific fact! This area is in the dorsal fronto-median cortex—the area just above and between your eyes—and has been documented through a series of brain scans of 15 young healthy adults.

Now whether localized brain activity actually proves free will, or a “free won’t” (as it is being dubbed), is a matter for philosophers to decide, not neuroscientists, because it is a philosophical question whether what feels like a free decision is, in the end, exclusively a result of environmental and hereditary causes. But it seems to me that we now have at least clear evidence that we are not automata, and that if our actions are determined by environmental and hereditary factors, these factors are so complex that we are justified in assuming that our decision process is real. In other words, soft determinism is looking better all the time.

But that is not the only fun stuff coming out of this research. For one thing, we should compare it with that other ground-breaking announcement last spring by Michael Koenigs, Antonio Damasio and others (see previous blogs) that our natural tendency goes toward not hurting other human beings. Their findings pretty much stated that if you’re capable of overriding your natural empathy, there must be something wrong with you (in other words, people who choose to hurt a few to save the many must be morally deficient). This upset a lot of utilitarians, including Peter Singer; even I, who only consider myself a part-time utilitarian, was disturbed. But now compare this to the newly discovered stop-mechanism: Neuroscientists can tell us that we have a natural tendency to act out of empathy, and now that we also have a built-in self-control mechanism. At first glance it looks like they go hand in hand: If we happen to be about to act in a way that may harm others, something between our eyes makes us stop! Or we’re about to do something that may harm ourselves, such as smoking after we’ve tried to stop, and the self-control kicks in, so we stop—sometimes. That’s the reason researchers call this mechanism our conscience, and it’s certainly fascinating all by itself.

But wait a minute—what if it is the other way around? What if we are about to act with empathy, as our instinct bids us—and all of a sudden the self-control mechanism makes us stop? Two answers here: (1) it could be because we’re selfish, and realize the risk we may be exposing ourselves to, so we don’t run into the burning building to save the child after all, but call 9-1-1 instead. But that assumes that it is the selfish act that makes us feel fulfilled, and Koenigs and Damasio have shown that our brain actually enjoys helping others! Let’s look at (2), which is even more interesting: Perhaps we realize that as much as it may make us feel good to act with empathy, instinctively, sometimes it may be the wrong thing to do (because we’re mistaken, or because acting with empathy now will create a greater risk later—remember the Nazi sniper they let live in Saving Private Ryan?), and our self-control mechanism makes us stop. And what is really interesting is that the “stop” act makes us feel frustrated, not good, according to the scientists—but we do it anyway. Now that’s the real revelation: We have a brain mechanism that does not make us feel good, but it is highly active in the brain even so. So sometimes we may stop a harmful act because we realize it is wrong. Fine. And sometimes we may stop doing a benevolent act because we, at the last moment, just don’t want to. Okay. But sometimes we may stop ourselves from doing a benevolent act because, in the greater scheme of things, it will have undesirable consequences (utilitarianism), or possibly because we can’t universalize the act (deontology). And it doesn’t make us feel good to make that decision, at least not right then and there. My preliminary conclusion? We may have found Socrates’ little daimon who told him what to do… The seat of morality may well be this stop mechanism rather than the warm and fuzzy empathy.
But that of course leads to other classical questions, such as: are there universally right reasons for the stop-mechanism to be engaged?

Besides, I got a real kick out of reading that the key brain area is above and between our eyes. Asian mysticism, anyone? The “Third Eye”? The Little Golden Egg? Hmmm……

Thanks to my student Tiffany for telling me about this research and e-mailing me the article!

Healthy, Wealthy, and Dumb? September 5, 2007

Posted by Nina Rosenstand in Current Events, Nina Rosenstand's Posts, Philosophy, Political Philosophy.

Philosophers are feeling the heat in France these days: the French finance minister Christine Lagarde (of the Sarkozy administration) suggests the French should think less and work harder! This according to The New York Times:

In proposing a tax-cut law last week, Finance Minister Christine Lagarde bluntly advised the French people to abandon their “old national habit.” “France is a country that thinks,” she told the National Assembly. “There is hardly an ideology that we haven’t turned into a theory. We have in our libraries enough to talk about for centuries to come. This is why I would like to tell you: Enough thinking, already. Roll up your sleeves.”

One might assume her point to be that excessive speculation may lead to a kind of action-paralysis (which may be true), but her comment seems to stem from a perception that if you work hard, you can accumulate wealth, but if you think hard, you can’t work hard. Ergo, if you think, you’ll stay poor, and wealth is good, so thinking must be bad. Huh? For one thing, I would suggest that it is probably a matter of priorities rather than an inherent flaw in the thinking process that most of us who think hard aren’t particularly wealthy. For another, thinking is hard work: the French philosopher and writer Alain Finkielkraut responds in the article that thinking is, in effect, a 24-hour job that you keep on doing even in your sleep. But Finkielkraut takes it one step further, which “sillifies” the entire debate: Not only does he find it offensive that the Sarkozy administration is anti-intellectual; the really offensive thing about President Sarkozy (whom he otherwise supports) is apparently that he is a jogger. Horror of horrors! Finkielkraut points out that all the great philosophers have been walkers, not joggers—a jogging French president is way too American! I hope this whole thing is tongue-in-cheek; otherwise I’d say that’s an excellent example of too much thinking right there…

Getting back to Lagarde: What’s really amusing about her deliberate deselection of the philosophical tradition is that her appeal to being practical rather than theoretical in order to effectuate change is not new at all; who was it who implied that philosophers had done enough thinking, and the time had come to roll up one’s sleeves and take action? None other than Karl Marx himself: “The philosophers have only interpreted the world, in various ways; the point, however, is to change it”… But I’m sure this interesting similarity is unintentional; I doubt that a right-leaning administration such as Sarkozy’s would want to align itself with Marxism…

Why is Hypocrisy Wrong? September 4, 2007

Posted by Dwight Furrow in Current Events, Ethics.

The recent revelation that Larry Craig (R-Idaho) had been arrested and had confessed to soliciting gay sex in a public restroom is another in a seemingly endless parade of conservative politicians and community leaders who bloviate about upholding so-called moral standards by day while deviating from those standards by night.

Bill Bennett’s gambling, Reverend Haggard’s gay affairs, Senator David Vitter’s appearance on a call girl’s list of customers, and Craig’s dalliance with an undercover cop suggest that “do as I say, not as I do” is the categorical imperative of conservative virtuecrats.

People who are convicted in the public eye of hypocrisy are often forced to resign their positions, and the ensuing public debate suggests that the hypocrite has lost the moral authority to advocate for the principles he/she has allegedly violated. But why is hypocrisy wrong and why does the hypocrite lose moral authority?

Suppose we define moral authority as “having adequate justification for a moral claim”; and let’s assume that the public figures mentioned above, contrary to fact, have adequate justification for the ideals for which they advocate. Why does the fact that they violate their ideals diminish their moral authority to advocate for them? If they had adequate justification for their ideals, that justification remains despite their inability to live up to them. Their advocacy may lack sincerity, but so what if their justifications are good?

One might argue that hypocrisy reveals a character flaw–the hypocrite lacks the strength of will to live up to her ideals. But all of us are like that to some degree. None of us live up to our ideals all the time. It doesn’t follow that we cannot give convincing justifications of our moral ideals. Moreover, it seems that the hypocrite is being blamed for more than just weakness of will, a common human failing.

So what is wrong with hypocrisy and why does it seem to diminish one’s moral authority?