
The Winner is Watson February 17, 2011

Posted by Nina Rosenstand in Artificial Intelligence, Current Events, Nina Rosenstand's Posts, Science, Technology.

So it has finally happened: a computer has outwitted the humans—Watson won on “Jeopardy.” As reported by the New York Times’ John Markoff,

For I.B.M., the showdown was not merely a well-publicized stunt and a $1 million prize, but proof that the company has taken a big step toward a world in which intelligent machines will understand and respond to humans, and perhaps inevitably, replace some of them.

Watson, specifically, is a “question answering machine” of a type that artificial intelligence researchers have struggled with for decades — a computer akin to the one on “Star Trek” that can understand questions posed in natural language and answer them.

One of Watson’s developers, Dr. Ferrucci, refers to the computer as though it were a person who actually deliberates. That, for you Trekkers, is also reminiscent of numerous Star Trek episodes:

Both Mr. Jennings and Mr. Rutter are accomplished at anticipating the light that signals it is possible to “buzz in,” and can sometimes get in with virtually zero lag time. The danger is to buzz too early, in which case the contestant is penalized and “locked out” for roughly a quarter of a second.

Watson, on the other hand, does not anticipate the light, but has a weighted scheme that allows it, when it is highly confident, to buzz in as quickly as 10 milliseconds, making it very hard for humans to beat. When it was less confident, it buzzed more slowly. In the second round, Watson beat the others to the buzzer in 24 out of 30 Double Jeopardy questions.

“It sort of wants to get beaten when it doesn’t have high confidence,” Dr. Ferrucci said. “It doesn’t want to look stupid.”

And what’s next?

For I.B.M., the future will happen very quickly, company executives said. On Thursday it plans to announce that it will collaborate with Columbia University and the University of Maryland to create a physician’s assistant service that will allow doctors to query a cybernetic assistant. The company also plans to work with Nuance Communications Inc. to add voice recognition to the physician’s assistant, possibly making the service available in as little as 18 months.

“I have been in medical education for 40 years and we’re still a very memory-based curriculum,” said Dr. Herbert Chase, a professor of clinical medicine at Columbia University who is working with I.B.M. on the physician’s assistant. “The power of Watson-like tools will cause us to reconsider what it is we want students to do.”

I.B.M. executives also said they are in discussions with a major consumer electronics retailer to develop a version of Watson, named after I.B.M.’s founder, Thomas J. Watson, that would be able to interact with consumers on a variety of subjects like buying decisions and technical support.

But…here’s the ultimate Star Trek question: Will Watson and others of its kind have the right to refuse the tasks they will be assigned to do? Because otherwise (thank you, Melissa Snodgrass, writer of that classic Star Trek: The Next Generation episode, “The Measure of a Man”) we will have created a new breed of—slaves. Provided that Watson actually develops a sense of self. But we have yet to see evidence of that. 🙂


Google/Verizon and Net Neutrality August 11, 2010

Posted by Dwight Furrow in Dwight Furrow's Posts, Technology.

I’m still performing my civic duty so time is limited. But here is a good discussion of an increasingly important issue.

The Internets have been all “atwitter” about the new deal between Google and Verizon that would allow some customers access to a higher speed Internet for a price, essentially creating a poor person’s Internet and a rich person’s Internet.

Is this a good idea or not? Kevin Drum has an informative discussion:

“So what’s the story on the Google/Verizon proposal that would allow carriers to offer high-speed networks to favored customers at a higher price than standard internet access? Would it spell the end of net neutrality?

There are two parts of the proposal. The first would essentially eliminate the principle of net neutrality over wireless networks. So within that piece of the internet, the answer is yes.

But what about the wireline network? There, the VG proposal is a little more subtle. Basically, they suggest that the current internet — which their document calls the “public internet” — should remain governed by strict net neutrality that treats everybody equally. However, carriers would be allowed to construct complementary networks that discriminate freely. The subtext here is that while well-heeled corporations could indeed buy better service, the public internet — i.e., the one we all know and love today — would be unaffected.

So: is this true? David Post is a strong supporter (“indeed, I’m a religious zealot”) of the current end-to-end design of the internet, a design that essentially enforces net neutrality at the protocol level by placing all processing at the endpoints of the network and allowing the network itself to do very little aside from dumb transport of bits. Here’s his take:

The problem is that there are many things an E2E inter-network (like the one we have) can’t do that people want their inter-network to do and would pay to have it do, and businesses serving those people want to provide those things. Things like guaranteed delivery of packets; the E2E network can’t promise that your packet will arrive at its destination, because that would require the network to keep track of your transmission as it moves along….[etc.]

The problem then boils down to: is there a way to preserve the E2E network — the open, nondiscriminatory inter-network — while simultaneously allowing people to get the services they want? Now in fact, that’s not exactly the question, because we know the answer to that one. There are already thousands, hundreds and hundreds of thousands, of non-E2E networks that do lots and lots of internal processing and provide lots and lots of services the E2E Internet does not provide. Your cell phone provider’s network, for instance. Most corporate wide area networks, for instance. Obviously, if Verizon wants to build a separate network and offer all sorts of glorious services on it, it can do so. The real net neutrality problem is this: if Verizon uses the Internet’s infrastructure to provide those services, will that somehow degrade the performance of the E2E Internet or somehow jeopardize its existence? Put another way: if Verizon can figure out a way to provide additional services to some of its subscribers using the Internet infrastructure in a way that does not compromise the traffic over the E2E inter-network, why should we want to stop them from doing that?

I think this is a good way of putting the question, though I’d expand it a bit. First, there’s a technical question: can Verizon (and other carriers) segregate traffic over current backbones without degrading the performance of other traffic? I’m skeptical on fundamental grounds, but as Post says, there’s always the chance that “technological innovation can do things that I usually cannot foresee.” And it’s certainly true that content delivery vendors like Akamai already provide high-speed access for a fee by pushing the boundaries of the current architecture of the internet as far as it will go. So maybe Post is right. But there’s also an economic question: if carriers put all their capital development into high-speed dedicated networks, does this mean they’ll simply let the current public internet deteriorate naturally as traffic increases but bandwidth doesn’t keep up? That seems pretty likely to me.

If you’re a pure libertarian, your answer is, “So what?” If there’s a demand for high-performance public access, then the market will deliver it. If there’s not, then there’s no reason it should. But there’s a collective action problem here: if the public backbone deteriorates, there’s nothing I can do about it. As an individual, obviously I can’t afford the kind of dedicated high-speed network that Disney or Fox News can. But the public backbone is a shared resource. Unless lots of my fellow users are willing to pay for high-speed service, I can’t get it. And if access to most of the big sites is fast because they’re paying for special networks, what are the odds that people will care all that much about all the small sites? Probably kind of slim.

Again: who cares? If most people don’t care much about high-speed access to small sites as long as they have fast access to the highest-traffic sites, then that’s the way the cookie crumbles. There’s no law that says the market has to provide everything Kevin Drum wants.

Still, there are real benefits to providing routine, high-speed internet infrastructure to everyone. It means that small, innovative net-based companies can compete more easily with existing giants. It means schoolchildren can get fast access to a wide variety of content, not just stuff from Microsoft and Google. It means we have a more level playing field between content providers of all kinds. Sometimes universal access is a powerful economic multiplier — think postal service and electricity and interstate highways — and universal access to a robust internet is to the 21st century what those things were to the past. If, instead of an interstate highway system, we’d spent most of our money building special toll roads for Wal-Mart and UPS, would that have been a net benefit for the country? I’d be very careful before deciding that it would have been.

For now, then, count me on the side of a purer version of net neutrality, in which the backbone infrastructure stays robust because everyone — including the big boys — has an incentive to keep it that way. I’m willing to be persuaded otherwise, but Verizon and Google are going to have to do the persuading. And it better be pretty convincing.”

Dwight Furrow is author of

Reviving the Left: The Need to Restore Liberal Values in America

For political commentary by Dwight Furrow visit: www.revivingliberalism.com

Longevity Genes—Who Wants to Know? July 5, 2010

Posted by Dwight Furrow in Dwight Furrow's Posts, Ethics, Technology.

Scientists have apparently uncovered a cluster of “longevity genes” which protect some people from succumbing to a variety of diseases.

When it becomes affordable to have one’s genome sequenced, perhaps in a few years, a longevity test, though not a foolproof one, may be feasible, if a new claim holds up. Scientists studying the genomes of centenarians in New England say they have identified a set of genetic variants that predicts extreme longevity with 77 percent accuracy.

The centenarians had just as many disease-associated variants as shorter-lived mortals, so their special inheritance must be genes that protect against disease, said the authors of the study, a team led by Paola Sebastiani and Thomas T. Perls of Boston University. Their report appears in Thursday’s issue of Science.

The finding, if confirmed, would complicate proposals for predicting someone’s liability to disease based on disease-causing variants in the person’s genome, since much would depend on whether or not an individual possessed protective genes as well.

This discovery should make it possible to tell individuals their odds of making it to 100 years old.

Would it be good to know your odds or not? That would seem to be a question for individuals to decide.

Many people say they would not want access to this information. They prefer the uncertainty, the adventure of not knowing when they are likely to die, and they would experience the demand to organize their lives around knowledge of such probabilities as a burden. Others would want this information to help them plot out a strategy for living past 100.

So what are you—an adventurer or a planner?

Insurance companies will no doubt be very interested in this information and will adjust rates accordingly. Should insurance companies be allowed to mandate that individuals take such a test? For an adventurer, being forced to acquire this information would be an egregious infringement of personal liberty.

Is the Future Over? June 20, 2010

Posted by Dwight Furrow in Culture, Dwight Furrow's Posts, Science, Technology.

William Gibson thinks maybe so:

Say it’s midway through the final year of the first decade of the 21st Century. Say that, last week, two things happened: scientists in China announced successful quantum teleportation over a distance of ten miles, while other scientists, in Maryland, announced the creation of an artificial, self-replicating genome. In this particular version of the 21st Century, which happens to be the one you’re living in, neither of these stories attracted a very great deal of attention.

In quantum teleportation, no matter is transferred, but information may be conveyed across a distance, without resorting to a signal in any traditional sense. Still, it’s the word “teleportation”, used seriously, in a headline. My “no kidding” module was activated: “No kidding,” I said to myself, “teleportation.” A slight amazement.

The synthetic genome, arguably artificial life, was somehow less amazing. The sort of thing one feels might already have been achieved, somehow. Triggering the “Oh, yeah” module. “Artificial life? Oh, yeah.”

New devices are cool; new human possibilities with new meaning? Eh. Not so much.

Alvin Toffler warned us about Future Shock, but is this Future Fatigue? For the past decade or so, the only critics of science fiction I pay any attention to, all three of them, have been slyly declaring that the Future is over. I wouldn’t blame anyone for assuming that this is akin to the declaration that history was over, and just as silly. But really I think they’re talking about the capital-F Future, which in my lifetime has been a cult, if not a religion. People my age are products of the culture of the capital-F Future. The younger you are, the less you are a product of that. If you’re fifteen or so, today, I suspect that you inhabit a sort of endless digital Now, a state of atemporality enabled by our increasingly efficient communal prosthetic memory. I also suspect that you don’t know it, because, as anthropologists tell us, one cannot know one’s own culture.

The Future, capital-F, be it crystalline city on the hill or radioactive post-nuclear wasteland, is gone. Ahead of us, there is merely…more stuff. Events. Some tending to the crystalline, some to the wasteland-y. Stuff: the mixed bag of the quotidian.

The future used to be a place of radically new promises and perils, game changers made possible by science. But he welcomes this new realism.

This newfound state of No Future is, in my opinion, a very good thing. It indicates a kind of maturity, an understanding that every future is someone else’s past, every present someone else’s future. Upon arriving in the capital-F Future, we discover it, invariably, to be the lower-case now.

As he points out (and he should know), science fiction is more about present hopes and fears than it is about the future.

If you are a William Gibson fan, his comments on his own writing career and his forthcoming new book are quite interesting.

If Pattern Recognition was about the immediate psychic aftermath of 9-11, and Spook Country about the deep end of the Bush administration and the invasion of Iraq, I could say that Zero History is about the global financial crisis as some sort of nodal event, but that must be true of any 2010 novel with ambitions on the 2010 zeitgeist. But all three of these novels are also about that dawning recognition that the future, be it capital-T Tomorrow or just tomorrow, Friday, just means more stuff, however peculiar and unexpected. A new quotidian. Somebody’s future, somebody else’s past.

Artificial Life June 1, 2010

Posted by Dwight Furrow in Philosophy, Science, Technology.

An article in a recent issue of Science reported that Craig Venter (the leader of one team of researchers that successfully mapped the human genome) has made a synthetic cell by inserting a fabricated genome into a bacterium. The press has been reporting this as the first successful attempt to create artificial life. But the paper has created a good deal of controversy, not only regarding the ethical issues, but whether this is really artificial life or not.

Sune Holm has an excellent summary of the debate:

In an interview with the BBC Nobel Prize-winning biologist Paul Nurse points out that not just the genome but the entire cell would have to be synthesized for it to be properly artificial. What Venter has produced is the first living cell which is entirely controlled by synthesized DNA, not artificial life.

George Church, geneticist at Harvard Medical School, doesn’t think that Venter has really created new life either. Commenting in Nature, Church says that the bacterium made by Venter “is not changed from the wild state in any fundamental sense. Printing out a copy of an ancient text isn’t the same as understanding the language.”

Also commenting in Nature, Jim Collins, professor of biomedical engineering at Boston University, points out that “The microorganism reported by the Venter team is synthetic in the sense that its DNA is synthesized, not in that a new life form has been created. Its genome is a stitched-together copy of the DNA of an organism that exists in nature, with a few small tweaks thrown in.

Holm argues that all of these skeptical comments assume a particular conception of what artificial life should be:

These comments seem to me to suggest the following requirement: In order to create an artificial organism one must build it in a way analogous to the way we build other complex artifacts such as watches and washing machines. This involves making the different parts that compose the machine and putting them together according to a design plan. Furthermore, it is by being able to create artificial life in this sense that we satisfy the necessary condition for understanding life expressed in Feynman’s dictum, “What I cannot create I do not understand,” so often referred to in synthetic biology. If some day we become able to design and build a living thing from scratch by fabricating all its parts out of nonliving matter and assembling them according to a plan of our own design, then we may be said to understand life.

Holm suggests that some of the ethical worries many people have regarding this technology are the result, not of potential harmful effects, but of our uncertainty about how to classify such “organisms” and our inability to know what is “right or wrong with respect to these entities.”

The products of synthetic biology are typically presented in terms of rather vague but highly connotative hybrid notions such as “living machine” and “synthetic organism.” Dealing with ethical concerns arising from synthetic biology research it is important that we don’t neglect the need to investigate how to conceptualize the products we expect synthetic biology to result in. This task will involve investigation of our notions of organism, machine, artifact, and life. Venter’s achievement has made the need for philosophical exploration of these categories even more pressing.

Holm may be right that “ontological uncertainty” breeds ethical uncertainty. But this is uncertainty we will have to live with. I doubt that any of these new entities will fall neatly into the ontological categories we have available today. The question of whether they are “really artificial” or not may have no answer. And we may have to invent new categories to make sense of scientific innovation.

So it is probably best, at this point, not to get too hung up on definitions, which will likely be quite fluid.

More on Facebook and Privacy May 17, 2010

Posted by Dwight Furrow in Culture, Dwight Furrow's Posts, Ethics, Technology.

Nina’s post about privacy on Facebook thoroughly covered the issue.

But Facebook’s habit of thumbing their nose at privacy concerns provoked a couple of interesting posts on Crooked Timber as well.

Apparently, Mark Zuckerberg, founder and owner of Facebook, is quoted in a forthcoming book making some dismissive remarks about privacy concerns:

“You have one identity,” he emphasized three times in a single interview with David Kirkpatrick in his book, “The Facebook Effect.” “The days of you having a different image for your work friends or co-workers and for the other people you know are probably coming to an end pretty quickly.” He adds: “Having two identities for yourself is an example of a lack of integrity.”

As Henry at Crooked Timber points out:

Facebook appears to be deliberately and systematically making it harder and harder for people to vary their self-presentations according to audience. I think that this broad tendency (if it continues and spreads) impoverishes public life.

Kirkpatrick explains what is wrong with this:

Individuals are constantly managing and restricting flows of information based on the context they are in, switching between identities and persona. I present myself differently when I’m lecturing in the classroom compared to when I’m having a beer with friends. I might present a slightly different identity when I’m at a church meeting compared to when I’m at a football game. This is how we navigate the multiple and increasingly complex spheres of our lives.

And Kieran Healy argues that having integrity is not about having a consistent self-presentation:

Having an identity and having a secret are in fact quite closely related, and not just for superheroes. Here’s a piece from the Times from the pre-FB era that makes the point:

“In a very deep sense, you don’t have a self unless you have a secret, and we all have moments throughout our lives when we feel we’re losing ourselves in our social group, or work or marriage, and it feels good to grab for a secret, or some subterfuge, to reassert our identity as somebody apart,” said Dr. Daniel M. Wegner, a professor of psychology at Harvard. … Psychologists have long considered the ability to keep secrets as central to healthy development. Children as young as 6 or 7 learn to stay quiet about their mother’s birthday present. In adolescence and adulthood, a fluency with small social lies is associated with good mental health. … The urge to act out an entirely different persona is widely shared across cultures as well, social scientists say, and may be motivated by curiosity, mischief or earnest soul-searching. Certainly, it is a familiar tug in the breast of almost anyone who has stepped out of his or her daily life for a time, whether for vacation, for business or to live in another country. “It used to be you’d go away for the summer and be someone else, go away to camp and be someone else, or maybe to Europe and be someone else” in a spirit of healthy experimentation, said Dr. Sherry Turkle, a sociologist at the Massachusetts Institute of Technology. Now, she said, people regularly assume several aliases on the Internet, without ever leaving their armchair …”

This idea that it is dishonest or insincere to withhold information about oneself is fundamentally mistaken. Social life isn’t enhanced by brutal honesty and integrity is not about having a single self-presentation.

Integrity is a matter of consistently acting on the basis of one’s system of values and sustaining the value of the variety of things we care about. Not only is that consistent with having different self-presentations in different contexts—integrity requires a variety of self-presentations.

If I value my students and their education some facets of my private life will be irrelevant or inimical to their development. And if I value my family relationships, my self-presentation as a teacher must at times be suppressed.

But Zuckerberg does provide us with an example of the lack of integrity. As one commentator on Crooked Timber puts it:

Hey, you know what really is a lack of integrity is trying to conceal very obvious monetary motives behind a veneer of moralizing. How much more honest would it be if Zuckerberg just came out and said, yeah, we don’t give a damn about your privacy, this is how we’re going to make money. Then we could all know where we stand. The worst aspect of all of this is the pretense that anyone on Facebook’s corporate end cares about this and their projection of their own moral deficiencies onto people with legitimate privacy concerns. Not that I’m, like, surprised or anything.

It is easy for a straight, privileged man like Zuckerberg to extol the virtues of a single identity while hiding behind his bodyguards and wealth. Women and anyone from marginalized social groups cannot afford to be so sanguine about privacy. But of course straight, privileged men tend to think they are the only people who matter.

Collapse and Complexity May 13, 2010

Posted by Dwight Furrow in Current Events, Dwight Furrow's Posts, Technology.

Last week, the stock market plunged nearly 1000 points in a matter of minutes, although the market clawed its way back a bit before it closed. We still don’t know what happened.

A stock market out of control is a scary thing, and it led Jon Taplin to reflect on the problems inherent in complex systems, problems which are likely to get worse as society becomes even more complex.

As societies and systems get more complex the layers of hierarchy cannot keep up with the complexity of the system. American organizational philosophy has always been built around the idea that “bigger is better”. As Alfred Chandler stated in his seminal history of industrial capitalism, the American advantage flowed from the “potential for exploiting the unprecedented cost advantages of the economies of scale and scope.” [ …]

But what Tainter and other writers like Jared Diamond are suggesting is that, at a certain point, scaling up begins to deliver diminishing returns, as MacKenzie points out.

The extra food produced by each extra hour of labour – or joule of energy invested per farmed hectare – diminishes as that investment mounts. We see the same thing today in a declining number of patents per dollar invested in research as that research investment mounts. This law of diminishing returns appears everywhere, Tainter says.

But complexity often leads to tragedy as well. Just as in the forward operating base in Afghanistan or on the floating drilling platform in the Gulf of Mexico, the front line soldiers only have some of the information needed to handle a breakdown in the complex systems because of the chain of command structure in both the military and the oil business. And of course the complexity of a campaign like the Afghanistan war confuses even the most senior commanders. […]

The real story of today’s market crash will be the war between the high-frequency trading systems and the retail brokers.

Among the big losers in the selloff were likely to be investors who had put limit orders on stocks they held. If an investor had placed a limit order with his broker to sell his P&G shares if the price fell to $50, then that sell order would have been triggered as the stock tumbled to its low of $39.97. The investor would have lost money on that sale and then lost again when the stock rebounded back to close at $60.76. Worse, if the investor had held the stock for a long time and had a gain, he would be hit with a tax bill on his profits.

Accelerating the declines, high-frequency hedge funds, which use computers to trade at super high speed, appeared to pull back from the market as prices collapsed. These hedge funds have grown to account for a significant amount of trading volume, and their absence likely created a void into which prices fell.
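The arithmetic in that P&G example is easy to check. A small Python sketch, assuming a purely hypothetical 100-share position (the share count is mine; the prices are from the quote):

```python
# Prices from the quoted example; share count is an illustrative assumption.
shares = 100
sale_price = 39.97    # where the triggered sell order actually filled
close_price = 60.76   # where the stock rebounded by the close

proceeds = shares * sale_price          # what the forced sale brought in
value_if_held = shares * close_price    # what the position was worth at close
opportunity_cost = value_if_held - proceeds

print(f"Sold for ${proceeds:,.2f}; holding to the close was worth "
      f"${value_if_held:,.2f}, a difference of ${opportunity_cost:,.2f}.")
```

On those numbers the investor realizes $3,997 instead of holding $6,076 of stock: a $2,079 swing on 100 shares, plus the tax bill on any long-term gain.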

As a system becomes more complex, the interconnections between the individual parts grow geometrically; each new component multiplies the number of potential interactions. Yet the amount of work accomplished typically grows only arithmetically, by adding more hours or more workers (unless new technologies increase productivity). Even the designers of complex systems may not be able to explain how input produces output, and the combination of possible inputs is too large to test thoroughly. Thus, in complex systems, responsible management may be impossible, because there is much we don’t know, and much we don’t know that we don’t know.
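The contrast can be made concrete. Strictly speaking, the possible pairwise connections among n components grow quadratically, as n(n-1)/2 (and the possible subsets of interacting parts grow exponentially), while the components themselves are only added one at a time. A quick sketch:

```python
def pairwise_interactions(n: int) -> int:
    """Number of possible pairwise connections among n components: n(n-1)/2."""
    return n * (n - 1) // 2

for n in (10, 100, 1000):
    print(f"{n} components -> {pairwise_interactions(n)} possible interactions")
# 10 components yield 45 pairs; 100 yield 4,950; 1,000 yield 499,500.
# The interactions to manage outpace the parts by orders of magnitude,
# which is why testing every combination quickly becomes infeasible.
```

Growing the system 100-fold (10 to 1,000 parts) grows the interaction space more than 10,000-fold, which is the point about management outrunning comprehension.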

Eventually we simply run up against the ability of human beings to process information rapidly. So we let computers do the thinking. But computers are not good at anticipating the unexpected.

The result is system fail.

Twittering January 28, 2010

Posted by Dwight Furrow in Culture, Dwight Furrow's Posts, Technology.

For those of you listening to your iPods, updating Facebook, tracking 3 Twitter conversations while finishing your defense of the Ontological Argument, here is something else to do: read the Encyclopaedia Britannica’s blog posts on controversies about multitasking.

Technology author Nicholas Carr writes:

The ability to multitask is one of the essential strengths of our infinitely amazing brains. We wouldn’t want to lose it. But as neurobiologists and psychologists have shown, and as Maggie Jackson has carefully documented, we pay a price when we multitask. Because the depth of our attention governs the depth of our thought and our memory, when we multitask we sacrifice understanding and learning. We do more but know less. And the more tasks we juggle and the more quickly we switch between them, the higher the cognitive price we pay.

The problem today is not that we multitask. We’ve always multitasked. The problem is that we’re always in multitasking mode. The natural busyness of our lives is being amplified by the networked gadgets that constantly send us messages and alerts, bombard us with other bits of important and trivial information, and generally interrupt the train of our thought. The data barrage never lets up. As a result, we devote ever less time to the calmer, more attentive modes of thinking that have always given richness to our intellectual lives and our culture — the modes of thinking that involve concentration, contemplation, reflection, introspection. The less we practice these habits of mind, the more we risk losing them altogether.

There’s evidence that, as Howard Rheingold suggests, we can train ourselves to be better multitaskers, to shift our attention even more swiftly and fluidly among contending chores and stimuli. And that will surely help us navigate the fast-moving stream of modern life. But improving our ability to multitask, neuroscience tells us in no uncertain terms, will never return to us the depth of understanding that comes with attentive, single-minded thought. You can improve your agility at multitasking, but you will never be able to multitask and engage in deep thought at the same time.

I guess this means that if you are a contemplative sort with a penchant for profundity you should be careful about the multitasking. But if daytime soaps and reality shows are your thing, you might as well be twittering.
