Take the Road Train, Revisited

September 16, 2012

I’ve written here before about some of the work being done to develop “self-driving” cars, including Google’s tests of a fully autonomous vehicle, and Volvo’s work on developing “road trains”, essentially convoys of semi-autonomous vehicles that follow a lead vehicle with a human driver.  Volvo’s work is part of the European Union’s Project SARTRE (Safe Road Trains for the Environment).

The New Scientist site has an article reporting on a recent demonstration of the road train technology.  This approach probably has a higher likelihood of practical application in the near term, because it is largely based on technology that is already present, at least in some high-end cars.

Almost all the sensors and actuators that keep me from flying off the road now come as standard in most new Volvos (and other manufacturers for that matter). They are the exact same ones that enable cars to stay in lanes and avoid hitting other cars and pedestrians.

In contrast, completely autonomous cars, like those being tested by Google, require a considerable amount of added equipment to function.

Both approaches have the potential to provide significant improvements in safety.  The autonomous “driver” will not drive while sleepy or intoxicated; nor will it be distracted by sightseeing, fiddling with a cell phone, or turning around to smack the kid in the back seat.  An automatic system can also react more quickly than a human driver.

That faster reaction time means, in practice, that cars, particularly in a road train system, can follow one another much more closely than would be safe or legal with a human driver.  In the test reported in the article, the following distance at a speed of 90 km/hour [56 mph] was about 6 meters [19.7 feet].  By comparison, a driver reaction time of 500 milliseconds would require about 12.5 meters [41 feet] of additional separation at the same speed, just to cover the distance traveled before braking even begins.  Putting vehicles closer together, with fewer speed fluctuations, should help reduce road congestion.  Obviously, all this assumes that the lead driver is highly competent.
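The reaction-time arithmetic is easy to check for yourself.  This little sketch (my own back-of-the-envelope calculation, not from the article) shows the gap that opens up during the driver’s reaction time at road-train speed, for a couple of commonly cited reaction times:

```python
# Distance a car covers during the driver's reaction time, before the
# brakes are even applied (simple kinematics; illustrative only).
M_PER_FT = 0.3048

def reaction_gap_m(speed_kmh, reaction_s):
    """Meters traveled at speed_kmh during reaction_s seconds."""
    return speed_kmh / 3.6 * reaction_s

for reaction_s in (0.5, 1.0):
    gap = reaction_gap_m(90, reaction_s)
    print(f"{reaction_s} s reaction at 90 km/h -> {gap:.1f} m ({gap / M_PER_FT:.0f} ft)")
# → 0.5 s reaction at 90 km/h -> 12.5 m (41 ft)
# → 1.0 s reaction at 90 km/h -> 25.0 m (82 ft)
```

Either way, the human-reaction gap dwarfs the 6-meter following distance used in the test.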

The ability to follow other vehicles more closely also might improve fuel economy, by the phenomenon that cyclists everywhere know as “drafting”.  As speed increases, the amount of power required just to overcome air resistance increases as the third power of the vehicle’s relative air speed (that is, taking into account any head- or tail-wind).  At a speed of 15 mph on level ground, for example, most of a cyclist’s power is used just to make a hole in the air. [Source: Bicycling Science, 2nd Edition, by Frank R. Whitt and David G. Wilson; Cambridge MA: MIT Press, 1997].  The effect is not so pronounced for cars, since they are typically more streamlined (that is, have a lower drag coefficient), but it is still significant.

Vehicles driving in such tight formations with fewer speed fluctuations should dramatically reduce congestion, says Erik Coelingh, Volvo’s senior technical specialist who is heading the research near Gothenburg. The reduction in drag could potentially cut fuel consumption by as much as 20 per cent, he says.
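The cube law behind all this is simple to demonstrate.  In the sketch below, the drag-area (Cd·A) figures are rough values I have assumed for illustration, not measured data; the 20% drag reduction is likewise just a hypothetical matching the scale of the savings discussed above:

```python
# Power needed just to push through the air grows as the cube of
# airspeed: P = 0.5 * rho * Cd * A * v**3.
# The Cd*A values below are assumed, illustrative figures.
RHO = 1.225  # air density at sea level, kg/m^3

def air_power_watts(v_ms, cda_m2):
    """Aerodynamic power (W) at airspeed v_ms (m/s) for drag area cda_m2 (m^2)."""
    return 0.5 * RHO * cda_m2 * v_ms ** 3

# The cube law: doubling airspeed requires 2**3 = 8 times the power.
assert abs(air_power_watts(20, 0.6) / air_power_watts(10, 0.6) - 8) < 1e-9

# If sitting in the leader's wake cut a follower's effective drag area
# by 20%, aerodynamic power at a given speed would fall by the same 20%.
v = 25.0                          # 90 km/h
lead_cda, draft_cda = 0.6, 0.48   # hypothetical drag areas, m^2
saving = 1 - air_power_watts(v, draft_cda) / air_power_watts(v, lead_cda)
print(f"aerodynamic power saved while drafting: {saving:.0%}")  # → 20%
```

Note that this only bounds the aerodynamic share of the engine’s work; the actual fuel saving depends on how much of total consumption goes to overcoming drag at that speed.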

The technology is certainly interesting, and seems to have a good deal of potential.  Whether the legal and cultural obstacles to its adoption can be overcome remains to be seen.

California Will Allow Driverless Cars

August 31, 2012

I first wrote about Google’s project to develop a self-driving car back in October 2010, and I’ve tried to follow its progress here from time to time.  Earlier this year, the state of Nevada approved test operation of the driverless vehicles on public roads, under specified conditions.  (For example, the company is required to post a $1 million insurance bond, and to have human drivers in the vehicle who can take over in an emergency.)

Now, according to a brief article at Ars Technica, Google’s home state of California is getting in on the act.  The state legislature has passed, and sent to the Governor for signature, legislation that would further the move toward self-driving vehicles.

The new bill requires the state’s Department of Motor Vehicles to adopt new regulations, including safety standards and “performance requirements” for new autonomous vehicles. Once those new rules are put in place, the bill “would permit autonomous vehicles to be operated or tested on the public roads in this state.”

Google has, of course, been conducting tests on roads in California for a while, under various arrangements, but the new legislation enables testing, and possible future use, to be put on a more formal basis.  The details have been left for the motor vehicle department to sort out, so it remains to be seen what the rules will be.

It seems to me that this technology might potentially improve the safety and efficiency of road transportation, if we can work out a way to solve not only the technical problems, but the legal and cultural ones also.

Trawling for Trouble

August 13, 2012

Banks have gotten quite a bit of bad press in the last few years, much of it well-deserved.  A recent article at Technology Review describes a new type of analytical software that claims to be able to help bank managements spot activity that is ethically or legally problematic.  The software, from a company called Digital Reasoning, uses machine learning techniques to look for potential problems in unstructured data, such as E-mails, tweets, and document files.

The software uses statistical models to break down sentences and infer their meaning. This is important because finding warning signs may not be as simple as matching a string of text.

One can see that string matching would probably not do the job; even the dimmest of potential swindlers probably does not put “Proposed Fraudulent Trade” as the Subject: line of his E-mail.  On second thought, though, that may be an unwarranted assumption:

U.S. Senate hearings later revealed that in 2007, before the financial meltdown, Goldman Sachs employees wrote e-mails bragging of selling blatantly terrible investments to clients.

(For that case, though, it is not clear that identifying the problem requires any particularly sophisticated analysis.)
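To make concrete why plain string matching falls short, here is a toy filter; the keywords and messages are invented examples, not taken from any real product or case:

```python
# A naive keyword filter of the kind the article suggests is not enough.
SUSPICIOUS_TERMS = {"fraud", "mislead", "off the books"}

def naive_flag(message):
    """Flag a message only if it contains a suspicious keyword verbatim."""
    text = message.lower()
    return any(term in text for term in SUSPICIOUS_TERMS)

print(naive_flag("Proposed Fraudulent Trade"))           # → True ('fraud' is a substring)
print(naive_flag("Let's make the numbers look better"))  # → False: the intent evades keywords
```

Catching the second message requires inferring what the sentence means, which is exactly the gap the statistical approach described above is meant to fill.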

I haven’t seen Digital Reasoning’s products, and so really can’t comment on them.  But I think this is an interesting example of the general trend in business to (belatedly) realize the potential value of the huge masses of unstructured information that they possess, information that is not captured in standard data bases.  They have, of course, had this sort of data for a very long time; for many years, though, it was on sheets of paper spread across hundreds or thousands of filing cabinets, and there was no practical way to get at it.  Now, because it is available in machine-readable form, it can be examined via statistical and artificial intelligence techniques.

The trend has even acquired its own buzzword: “Big Data”.   Google’s search engine is probably the most well-known example of the approach.  IBM’s Watson system, which beat human Jeopardy! champions, is another good example.   (Dr. Stephen Wolfram, developer of the Mathematica and Wolfram|Alpha software, discussed the different classes of data in his comments on Watson.)

Historically, business computing has been focused on the collection of structured data (in relational data bases, for example) to be processed using well-defined procedures (payroll, or trade settlement, for example).   The interest in Big Data marks a shift toward a less procedural world view; there is an interesting parallel, I think, in the evolution of machine translation.   The new approach will undoubtedly produce some bogus results, just as Watson came up with a few classic bloopers on Jeopardy!.   Still, it is a fascinating area, and its development may also give us some new insights into how we think.

Alan Turing Centenary, Part 3

June 25, 2012

I’ve come across a few more items of interest in connection with the Alan Turing Centenary.  The Official Google Blog has a post marking Turing’s 100th birthday, last Saturday, June 23. In addition to discussing some of Turing’s work, it describes Google’s involvement in the Bletchley Park restoration project, and gives a brief overview of the recently-opened Turing exhibit at the Science Museum.

Google also had a home page “doodle” in honor of Turing’s birthday, which was a small, working Turing machine.  You can play with it here.

The BBC News site has added a couple of additional essays about Turing.  The first of these includes reminiscences of Turing from two of his colleagues.  One, Mike Woodger, served as Turing’s assistant at the National Physical Laboratory after WW II.

Mike Woodger worked as an assistant to Alan Turing in 1946 – the year Turing, fresh from his wartime work code-breaking, joined the National Physical Laboratory, in Teddington. Turing left after a year, but Mr Woodger stayed on to work on the completion of the Pilot Ace Computer, which Turing had helped to design.

The other colleague was Captain Jerry Roberts, a linguist and code-breaker at Bletchley Park from 1941 to 1945.  He remembers the huge importance of Turing’s breaking the German naval Enigma.

Up to the time when he broke it, Britain had been losing tremendous tonnages of shipping, including all our food imports.

If we had gone on losing the same amount of shipping, in another four to six months Britain would have lost the war.

The next BBC essay is by the scriptwriter, Graham Moore, who reviews some of Turing’s appearances in fiction and biography.

If Alan Turing had not existed, would we have had to invent him? The question seems to answer itself: Alan Turing very much did exist, and yet we have persisted in inventing him still.

He mentions the 1986 play, Breaking the Code, by Hugh Whitemore.  I had a chance to see this during its run on Broadway, with Derek Jacobi playing the role of Turing, and enjoyed it very much.  Apparently the BBC has also made a film version. In a slightly different vein, there is Neal Stephenson’s novel, Cryptonomicon.

Stephenson uses historical fiction’s ability to conjure hypothetical, counterfactual realities to play a great game of “what if” with the Turing legend.

I’ve read Cryptonomicon, and recommend it highly.  I’m not familiar with the other works Moore mentions, but they’re now on my list to look into.

Finally, for those readers who may have a DIY itch that needs scratching, it is possible to build a Turing machine out of LEGOs.

In honor of Alan Turing’s hundredth birthday, Davy Landman, Jeroen van den Bos, and Paul Klint built a Turing Machine out of LEGOs. And if you like, you can build one too.

Please enjoy!

Turing Exhibit Opens at Science Museum

June 24, 2012

As part of the Alan Turing Centenary, the Science Museum in London has opened a new exhibit on Turing’s life and work.  The exhibit includes a number of items related to Turing, including a model of the Pilot ACE computer, for which Turing produced the basic design in 1945 at the National Physical Laboratory, and an example of a German Enigma cipher machine.

The “Babbage” blog at The Economist has a review of the exhibit, and the ways in which it relates to Turing’s life, in an attempt to give a rounded picture of the man.

Unlike other Turing tributes, which have tended to focus on one aspect of his work, the Science Museum aims to give a flavour of Turing the individual, and thus the exhibition mixes illustrations of the importance of his academic achievements with exhibits from the personal life of the man himself.

As the article points out, Turing is probably better known to the public for his wartime codebreaking work than for his work in mathematics.  His 1936 paper, On Computable Numbers, with an Application to the Entscheidungsproblem [PDF], in which he described the computing device we now know as a Turing machine, is certainly not light reading.  And computers, especially modern ones, aren’t really all that interesting to look at.  The Pilot ACE is old enough to have a console and visible electronic components.

It sounds like a most interesting exhibit.

Alan Turing Centenary, Part 2

June 23, 2012

As one might expect, the BBC News site has a number of articles related to the Alan Turing Centenary.  In particular, it has been publishing a series of essays on Turing’s life and work.  I have tried to give a brief overview of these below.  (The essays are set up as separate pages, but there is a set of links to all of them at the top of each article.)

The first essay, on “Turing’s Genius”, is by Google’s Vint Cerf, whom I have mentioned before in connection with the ACM’s participation in the Turing Centenary, and who is a recipient of the ACM’s Turing Award.  (As he mentions in his essay, he also, coincidentally, shares a birthday with Turing: June 23.)  He discusses the many ways in which Turing’s original work relates to the technological world we all take for granted today.

The second essay, by Prof. Jack Copeland, University of Canterbury, Christchurch, New Zealand, relates Turing’s involvement in code-breaking at the Government Code and Cypher School at Bletchley Park (also called Station X).  It mentions Turing’s personal contribution to breaking the naval version of the German Enigma encryption system, and the Lorenz cipher.  These mathematical, cryptanalytic contributions would have been impressive on their own; but Turing also made an enormous contribution to the work of turning Station X into what was, in effect, the world’s first code-breaking factory.  He helped develop the bombes, electro-mechanical computers used to break Enigma messages on a production basis, and the Tunny machine, used for the Lorenz cipher.  (A project to reconstruct a Tunny machine is underway.)  As in many aspects of wartime intelligence, time was of the essence.

The faster the messages could be broken, the fresher the intelligence that they contained, and on at least one occasion an intercepted Enigma message’s English translation was being read at the British Admiralty less than 15 minutes after the Germans had transmitted it.

The third essay, “Alan Turing: The Father of Computing?”, is by Prof. Simon Lavington, author of Alan Turing and His Contemporaries: Building the World’s First Computers.  He observes that Turing’s ideas were not always terribly influential in some of the early computer implementations.

It was not until the late 1960s, at a time when computer scientists had started to consider whether programs could be proved correct, that On Computable Numbers came to be widely regarded as the seminal paper in the theory of computation.

Turing’s paper, On Computable Numbers, with an Application to the Entscheidungsproblem [PDF], nonetheless proved to be of immense importance.  In it, Turing laid out, for the first time as far as I know, the idea of a theoretical machine that, as his mathematical analysis demonstrated, could carry out any computation that can be carried out at all.
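The machine itself is remarkably simple to sketch in code.  Here is a minimal simulator, with a toy rule table that appends a 1 to a unary number; the example program is my own, not one of Turing’s tables:

```python
# A minimal Turing-machine simulator, in the spirit of the 1936 paper:
# a tape of symbols, a read/write head, a state register, and a rule table.
def run(rules, tape, state="start", head=0, max_steps=1000):
    """Run the machine until it halts (or max_steps), return the tape contents."""
    cells = dict(enumerate(tape))
    for _ in range(max_steps):
        if state == "halt":
            break
        symbol = cells.get(head, "_")  # "_" is the blank symbol
        write, move, state = rules[(state, symbol)]
        cells[head] = write
        head += 1 if move == "R" else -1
    return "".join(cells[i] for i in sorted(cells)).strip("_")

# Toy program: scan right over the 1s, write one more 1, then halt.
rules = {
    ("start", "1"): ("1", "R", "start"),
    ("start", "_"): ("1", "R", "halt"),
}

print(run(rules, "111"))  # → 1111
```

Everything else in computing, the paper argues, can be reduced to machines of this form; a “universal” machine is simply one whose tape contains the rule table of another machine.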

The fourth essay, by Prof. Noel Sharkey of the University of Sheffield, discusses the Turing Test, proposed by Turing in his 1950 paper, Computing Machinery and Intelligence.  That paper begins with a statement of the fundamental problem:

I propose to consider the question, ‘Can machines think?’  This should begin with definitions of the meaning of the terms ‘machine’ and ‘think’.

Turing’s paper was provocative, in part, because he realized how woolly the question, “Can machines think?”, really is.  There are ongoing discussions of whether the test that Turing proposed is the right one, but it does have the considerable virtue of being realizable in practice.
