Bacterial Computing

July 27, 2009

Since my earlier post today dealt with the issue of “intelligent” machines becoming more like living organisms, and humans in particular, it’s perhaps appropriate that this one turns it around, to discuss the use of living organisms as computers.   In a paper that is to appear in the Journal of Biological Engineering, a group of researchers has reported on an experiment in which they genetically engineered E. coli bacteria to solve a particular mathematical problem, the Hamiltonian Path Problem.   (This problem is a close relative of the Traveling Salesman Problem: given a set of cities in which travel is possible only between certain pairs, it asks whether there is a route that visits every city exactly once.  The problem is of considerable interest in computer science because it is NP-complete.)
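To make the problem concrete, here is a minimal brute-force search for a Hamiltonian path.  The three-node graph is just an illustration, not the instance from the paper; the point is that checking every ordering of the nodes takes factorial time, which is exactly why large instances are hard.

```python
# Brute-force Hamiltonian path search on a small directed graph.
# Trying every ordering of the nodes is O(n!), which is why this
# only stays tractable for tiny instances.
from itertools import permutations

def hamiltonian_path(nodes, edges):
    """Return a path visiting every node exactly once, or None."""
    for order in permutations(nodes):
        if all((a, b) in edges for a, b in zip(order, order[1:])):
            return list(order)
    return None

# A three-node example, similar in size to the bacterial experiment.
nodes = ["A", "B", "C"]
edges = {("A", "B"), ("B", "C")}
print(hamiltonian_path(nodes, edges))  # -> ['A', 'B', 'C']
```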

The problem that was actually solved in the experiment is not particularly interesting in its own right, because it is quite small.   But the steps that were taken in the process are interesting, in the sense that they illustrate some of the things that are possible in the world of genetic engineering.

In the software world, genetic algorithms use techniques motivated by observations from evolutionary biology to solve search and optimization problems.  The algorithms are essentially heuristics that are intended to arrive at a solution in much the same way that biological evolution leads to the characteristics of a species.  When implemented in software, these algorithms have two fundamental components:

  • A representation encoding all possible solutions, corresponding to a biological gene
  • A fitness function that determines the evolutionary success of a solution, paralleling the idea of differential reproductive success in the biological world.

From its starting point, the algorithm produces successive “generations” of solutions, with the most fit solutions in each generation selected to “reproduce”, mimicking the process of natural selection.
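The loop described above can be sketched in a few lines.  This is a toy version, maximizing the number of 1 bits in a bitstring (the classic “OneMax” warm-up problem), just to show the two components at work; the parameter values are arbitrary.

```python
import random

random.seed(0)

def evolve(fitness, length=12, pop_size=30, generations=40, mutation=0.05):
    """Tiny genetic algorithm: bitstring genomes, fitness-based selection."""
    pop = [[random.randint(0, 1) for _ in range(length)] for _ in range(pop_size)]
    for _ in range(generations):
        # Selection: the fitter half of the population becomes the parents.
        pop.sort(key=fitness, reverse=True)
        parents = pop[: pop_size // 2]
        # Reproduction: single-point crossover plus random bit-flip mutation.
        children = []
        while len(children) < pop_size:
            a, b = random.sample(parents, 2)
            cut = random.randrange(1, length)
            child = a[:cut] + b[cut:]
            child = [bit ^ (random.random() < mutation) for bit in child]
            children.append(child)
        pop = children
    return max(pop, key=fitness)

# "OneMax": fitness is simply the number of 1 bits in the genome.
best = evolve(fitness=sum)
print(sum(best), best)
```

After a few dozen generations the population converges on genomes that are all (or nearly all) ones, without the algorithm ever being told what the answer looks like, only how to score a candidate.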

What the experimenters here have done is to implement this sort of approach using an actual organism.  They encode the relevant problem data as DNA sequences within the bacteria.  These sequences are then “shuffled” randomly as the bacteria reproduce.  Of course, there has to be some way of measuring the “solution” that has been arrived at.  In the three-node problem studied, this was done (quite cleverly, I think) by encoding the data in genes that produced red and green fluorescent pigments, such that a Hamiltonian path solution would contain both pigments, and fluoresce yellow.  The solutions were later verified by sequencing the DNA.  One can think of this as a very large parallel computer, with literally billions of small processors.
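A rough software analogue of that procedure might look like the following.  This is a toy model, not the actual genetic construct: each simulated “cell” carries the problem’s edge segments in a random order and orientation, and we screen the “culture” for cells whose segments line up into a Hamiltonian path, standing in for the yellow fluorescence readout.

```python
import random

random.seed(1)

# Toy analogue of the bacterial computation (names and details are
# illustrative, not the actual genetic construct).
nodes = ["A", "B", "C"]
edges = [("A", "B"), ("B", "C"), ("A", "C")]

def is_hamiltonian(path_edges):
    """True if the segments chain together and cover every node once."""
    visited = [path_edges[0][0]] + [b for _, b in path_edges]
    links_ok = all(path_edges[i][1] == path_edges[i + 1][0]
                   for i in range(len(path_edges) - 1))
    return links_ok and sorted(visited) == sorted(nodes)

def random_cell():
    # "Shuffling" as the cell line reproduces: pick segments in a
    # random order and flip the orientation of some of them.
    segs = random.sample(edges, len(nodes) - 1)
    return [(b, a) if random.random() < 0.5 else (a, b) for a, b in segs]

# Grow a "culture" and screen it for "yellow" (solution-bearing) cells.
yellow = [cell for cell in (random_cell() for _ in range(10000))
          if is_hamiltonian(cell)]
print(len(yellow), yellow[0])
```

The analogy is loose, but it captures the shape of the computation: generate an enormous number of random configurations in parallel, and make correct ones cheap to detect.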

This experiment was really a proof-of-concept project.  As I noted earlier, the specific problem that was solved is very small.  It isn’t clear that there is a really effective way to scale this approach to large problems, and it is the difficulty of solving large problems that makes the NP-complete set interesting in the first place.  But it is a fascinating demonstration of some of the things that can be done with biological engineering, and perhaps should remind us again that the boundary between our machines and living organisms may not be quite as well defined as we think.

I’m Sorry, Dave

July 27, 2009

The New York Times carried a report on Saturday on a meeting of computer scientists who, the article says, are concerned that in the not-too-distant future, new technology will enable us to build machines that are smarter than we are:

Impressed and alarmed by advances in artificial intelligence, a group of computer scientists is debating whether there should be limits on research that might lead to loss of human control over computer-based systems that carry a growing share of society’s workload, from waging war to chatting with customers on the phone.

The meeting, which took place at the Asilomar Conference Center in Monterey, California, was intended for discussion of how the growing body of research into the nature of intelligence in general, and artificial intelligence [AI] in particular, might be managed:

Their concern is that further advances could create profound social disruptions and even have dangerous consequences.

These concerns are not new, as anyone who saw the movie 2001: A Space Odyssey will know.  More seriously, Joseph Weizenbaum, a computer science professor at MIT and a pioneer in AI research, compared the potential impact of the information technology “revolution” to that of the original Industrial Revolution, in his book, Computer Power and Human Reason.  He argued that, just as routine physical labor was devalued by the Industrial Revolution, routine mental labor might be similarly devalued by the IT revolution.  Weizenbaum also was the author of a computer program called ELIZA, which used simple pattern matching and a repertoire of canned response patterns to simulate a psychologist:

PATIENT: My mother hates me.

ELIZA: Who else in your family hates you?

PATIENT: Practically everyone.

ELIZA: Why do you say that?

Weizenbaum was quite disturbed to find that some of his test “patients” became attached to ELIZA, and remained attached even after the program’s workings were explained to them.  Some requested a private session with the terminal, so they could discuss more sensitive matters.
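The pattern-matching idea is simple enough to sketch in a few lines.  These rules are illustrative stand-ins, not Weizenbaum’s actual script, which was considerably larger and more elaborate:

```python
import re

# A toy sketch of ELIZA-style pattern matching: a short list of
# (pattern, response-template) pairs, tried in order.  The rules
# here are invented for illustration only.
RULES = [
    (r"my (\w+) hates me", "Who else in your family hates you?"),
    (r"practically everyone", "Why do you say that?"),
    (r"i feel (.*)", "How long have you felt {0}?"),
]

def respond(line):
    for pattern, template in RULES:
        m = re.search(pattern, line.lower())
        if m:
            return template.format(*m.groups())
    return "Tell me more."   # fallback when nothing matches

print(respond("My mother hates me"))     # Who else in your family hates you?
print(respond("Practically everyone."))  # Why do you say that?
```

The program has no model of what is being said; a handful of canned transformations was enough to produce the dialogue quoted above, which makes the attachment of Weizenbaum’s “patients” all the more striking.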

The concern about intelligent machines running amok is not new, either.  The great science fiction writer Isaac Asimov created his Three Laws of Robotics to govern the behavior of robots in his stories:

1. A robot may not injure a human being or, through inaction, allow a human being to come to harm.

2. A robot must obey any orders given to it by human beings, except where such orders would conflict with the First Law.

3. A robot must protect its own existence as long as such protection does not conflict with the First or Second Law.

One of the concerns that was mentioned by those at the conference was the idea of something like the existing Predator unmanned aircraft, in a fully autonomous version.  Some of the computer viruses and worms that spread so readily via the Internet can be very hard to eradicate; one might suggest, only partially tongue-in-cheek, that they have evolved to have the intelligence of cockroaches.

Although I’m sure that each of us can think of one or more people who might on the whole be profitably replaced by a machine, it doesn’t seem to me, or to the scientists at the meeting, that we need to worry about HAL taking over just yet.  The participants suggest, and I agree, that it is important to have an open discussion of what is possible and what is acceptable, just as is happening with genetic research.  In practice, I suspect the biggest near-term danger is that people will become too reliant on machines, and be lulled into a false sense of security.

I’m sure that as AI technology advances, it will cause some disruptions in our lives; new technologies, if they are of any importance at all, generally do.  Just as with genetic engineering, the knowledge cannot be unlearned, so we will just have to do our best to use it wisely.
