Our Opinion: Flaky fear or serious study?

Is technology becoming too "smart" for our own good?

Supercomputers are the stuff of both reality and fantasy.

IBM has created computers with "artificial intelligence" that have competed against and beaten humans at chess and on the quiz show "Jeopardy!"

In science fiction, computers such as HAL in the movie "2001: A Space Odyssey" have made the leap from artificial intelligence to human thoughts and emotions. HAL plots and commits murder in the interest of self-preservation.

Are there limits to evolving technology? Do humans have valid reasons to be fearful?

Idle curiosity about such questions has moved into academia with a new center proposed at Cambridge University in Britain.

The Centre for the Study of Existential Risk is designed to gather scientists, philosophers and other experts to consider if, and how, superintelligent technology might "threaten our own existence."

If the topic seems far-fetched, consider that the smartphone in your pocket corrects your texts, and your car knows where you are and can parallel park itself.

What happens when "we're no longer the smartest things around?" wonders Huw Price, a Cambridge philosophy professor. Will we be at the mercy of "machines that are not malicious, but machines whose interests don't include us?"

If so, what are the consequences if the interests of humans and machines conflict?

"It tends to be regarded as a flaky concern," Price acknowledged, "but given that we don't know how serious the risks are, that we don't know the time scale, dismissing the concerns is dangerous."

Is the center's endeavor flaky or serious?

And do we procrastinate at our peril?