I’m Sorry, Dave

The New York Times carried a report on Saturday about a meeting of computer scientists who, the article says, are concerned that in the not-too-distant future, new technology will enable us to build machines that are smarter than we are:

Impressed and alarmed by advances in artificial intelligence, a group of computer scientists is debating whether there should be limits on research that might lead to loss of human control over computer-based systems that carry a growing share of society’s workload, from waging war to chatting with customers on the phone.

The meeting, which took place at the Asilomar Conference Center in Monterey, California, was intended for discussion of how the growing body of research into the nature of intelligence in general, and artificial intelligence [AI] in particular, might be managed:

Their concern is that further advances could create profound social disruptions and even have dangerous consequences.

These concerns are not new, as anyone who saw the movie 2001: A Space Odyssey will know.  More seriously, Joseph Weizenbaum, a computer science professor at MIT and a pioneer in AI research, compared the potential impact of the information technology “revolution” to that of the original Industrial Revolution, in his book, Computer Power and Human Reason.  He argued that, just as routine physical labor was devalued by the Industrial Revolution, routine mental labor might be similarly devalued by the IT revolution.  Weizenbaum was also the author of a computer program called ELIZA, which used simple pattern matching and a repertoire of canned response patterns to simulate a psychologist (a rough sketch of the technique appears below):

PATIENT: My mother hates me.

ELIZA: Who else in your family hates you?

PATIENT: Practically everyone.

ELIZA: Why do you say that?

Weizenbaum was quite disturbed to find that some of his test “patients” became attached to ELIZA, and remained attached even after the program’s workings were explained to them.  Some requested a private session with the terminal, so they could discuss more sensitive matters.
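
To give a flavor of how little machinery is involved, here is a minimal sketch, in Python, of the kind of pattern matching ELIZA relied on.  The particular patterns and replies are my own illustrative inventions, not Weizenbaum’s original script, but the mechanism — try a few keyword patterns in order, fill a canned template with the matched text, and fall back to a stock reply when nothing matches — is the same:

```python
import random
import re

# Each rule pairs a regular expression with canned response templates;
# "{0}" is filled in with the text the pattern captured.  These rules
# are illustrative stand-ins, not Weizenbaum's original script.
RULES = [
    (re.compile(r"\bmy (mother|father|sister|brother)\b", re.I),
     ["Tell me more about your {0}.",
      "Who else in your family comes to mind when you think of your {0}?"]),
    (re.compile(r"\bi am (.+)", re.I),
     ["How long have you been {0}?",
      "Why do you say you are {0}?"]),
    (re.compile(r"\bi feel (.+)", re.I),
     ["What makes you feel {0}?"]),
]

# Stock replies used when no pattern matches -- the "canned" repertoire.
DEFAULTS = ["Why do you say that?", "Please go on.", "I see."]

def respond(statement: str) -> str:
    """Reply by trying each rule in order; fall back to a stock phrase."""
    for pattern, templates in RULES:
        match = pattern.search(statement)
        if match:
            return random.choice(templates).format(*match.groups())
    return random.choice(DEFAULTS)

if __name__ == "__main__":
    print(respond("My mother hates me."))    # e.g. "Tell me more about your mother."
    print(respond("Practically everyone."))  # no match, so a default reply
```

That a few dozen lines of this sort could produce attachment in its users is exactly what unsettled Weizenbaum.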

The concern about intelligent machines running amok is not new, either.  The great science fiction writer Isaac Asimov created his Three Laws of Robotics to govern the behavior of robots in his stories:

1. A robot may not injure a human being or, through inaction, allow a human being to come to harm.

2. A robot must obey any orders given to it by human beings, except where such orders would conflict with the First Law.

3. A robot must protect its own existence as long as such protection does not conflict with the First or Second Law.

One concern raised at the conference was the prospect of a fully autonomous version of something like the existing Predator unmanned aircraft.  Some of the computer viruses and worms that spread so readily via the Internet can be very hard to eradicate; one might suggest, only partially tongue-in-cheek, that they have evolved to have the intelligence of cockroaches.

Although I’m sure that each of us can think of one or more people who might on the whole be profitably replaced by a machine, it doesn’t seem to me, or to the scientists at the meeting, that we need to worry about HAL taking over just yet.  The participants suggest, and I agree, that it is important to have an open discussion of what is possible and what is acceptable, just as is happening with genetic research.  In practice, I suspect the biggest near-term danger is that people will become too reliant on machines, and be lulled into a false sense of security.

I’m sure that as AI technology advances, it will cause some disruptions in our lives; new technologies, if they are of any importance at all, generally do.  Just as with genetic engineering, the knowledge cannot be unlearned, so we will just have to do our best to use it wisely.
