Technology v. Terrorism

May 30, 2013

Yesterday evening, PBS broadcast an episode of its Nova science program, “Manhunt: The Boston Bombers”, reporting on the role of technology in tracking down those responsible for the Boston Marathon bombings.  I had seen a note about the program in our local paper, and was curious to see what sort of program it would be.

I’m glad to say that, on the whole, I thought the reporting was realistic and level-headed.  It avoided scare-mongering, and took a fairly pragmatic view of what technology can and cannot do, at least at present.  It was organized chronologically, with commentary on forensic technologies interwoven with the narrative.

The first segment dealt with evidence from the explosions themselves. The white smoke that resulted, easily visible in TV accounts, indicated a gunpowder type of explosive, a suggestion reinforced by the relatively small number of shattered windows.  One forensic expert, Dr. Van Romero of the New Mexico Institute of Mining and Technology [NM Tech], quickly suspected a home-made bomb built in a pressure cooker.  Although devices of this type have been rare in the US, they have been relatively common in other parts of the world.  Building a similar bomb, and detonating it on a test range at NM Tech, produced effects very similar to the Boston bombs.  A pressure cooker lid was subsequently found on the roof of a building close to one of the explosion sites.

Because the attacks took place very close to the finish line of the Boston Marathon, and because that location on Boylston Street has a large number of businesses, the authorities were confident that they would have plenty of still and video images to help identify the bombers.  After examination of this evidence, they came up with images of two primary suspects, who at that point could not be identified.  At first, the police and FBI decided not to release the images to the public; they feared doing so might prompt the suspects to flee, and hoped that facial recognition technology might allow them to be identified.  Alas, as I’ve observed before, these techniques work much better — almost like magic — in TV shows like CSI or NCIS than they do in the real world.  The images, from security videos, were of low quality, and nearly useless with current recognition technology.  Ultimately, the authorities decided to make the images public, hoping that someone would recognize the suspects.

As things turned out, it didn’t matter that much.  The two suspects apparently decided to flee, and carjacked an SUV.  The owner of the SUV managed to escape, and raised the alarm.  In a subsequent gun battle with police, one suspect died (he was apparently run over by his associate in the SUV); the other was wounded but escaped.  He abandoned the SUV a short distance away, and hid in a boat stored in a backyard in Watertown, MA.  He was subsequently discovered because an alert local citizen noticed blood stains on the boat’s cover; the suspect’s location was pinpointed using infrared cameras mounted on a police helicopter.

As I mentioned earlier, I think the program provided a good and reasonably balanced overview of what these technologies can do, and what they can’t.  Magic is still in short supply, but technology can help pull together the relevant evidence.

More work is still being done to improve these techniques.  A group at the CyLab Biometrics Center at Carnegie Mellon University, headed by Prof. Marios Savvides, is working on a new approach to facial recognition from low-quality images.  They give their system a database containing a large number of facial images; each individual has associated images ranging from very high to low resolution.  Using information inferred from this data, and guided by human identification of facial “landmarks” (such as the eyebrows, or nose) in the target image, the system attempts to find the most likely matches.  The technique is still at a very early stage, but does show some promise.  There’s more detail in an article at Ars Technica.
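The core idea — match a low-resolution query against gallery images degraded to the same resolution — can be illustrated with a toy sketch.  To be clear, this is my own illustration, not the CyLab system: it stands in for faces with random arrays, simulates a low-quality capture by block-averaging, and picks the nearest gallery entry by pixel distance.

```python
import numpy as np

def downsample(img, factor):
    """Block-average an image to simulate a low-resolution capture."""
    h, w = img.shape
    return img[:h - h % factor, :w - w % factor] \
        .reshape(h // factor, factor, w // factor, factor).mean(axis=(1, 3))

def best_match(query_lowres, gallery, factor):
    """Index of the gallery image whose downsampled version is closest
    (Euclidean distance) to the low-resolution query."""
    dists = [np.linalg.norm(downsample(g, factor) - query_lowres)
             for g in gallery]
    return int(np.argmin(dists))

rng = np.random.default_rng(0)
gallery = [rng.random((64, 64)) for _ in range(5)]  # stand-in "high-res faces"
query = downsample(gallery[3], 8)                   # low-res view of face #3
print(best_match(query, gallery, 8))                # → 3
```

A real system would of course compare features aligned on the facial landmarks, not raw pixels, but the multi-resolution gallery is the part this sketch is meant to show.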

As the Nova program also pointed out, the growth and improvement of all this surveillance technology have some potentially troubling implications for personal privacy.  Setting up a portion of the infrastructure for a police state is probably not good civic hygiene; but that’s a subject for a future post.


A Supercomputer ARM Race?

May 28, 2013

The PC World site has a report of an interesting presentation made at the EDAworkshop13 in Dresden, Germany, this month, on possible future trends in the high-performance computing [HPC] market.  The work, by a team of researchers from the Barcelona Supercomputing Center in Spain, suggests that we may soon see a shift in HPC architecture, away from the commodity x86 chips common today, and toward the simpler processors (e.g., those from ARM) used in smart phones and other mobile devices.

Looking at historical trends and performance benchmarks, a team of researchers in Spain have concluded that smartphone chips could one day replace the more expensive and power-hungry x86 processors used in most of the world’s top supercomputers.

The presentation material is available here [PDF].  (Although PC World calls it “a paper”, it is a set of presentation slides.)

As the team points out, significant architectural shifts have occurred before in the HPC market.  Originally, most supercomputers employed special purpose vector processors, which could operate on multiple data items simultaneously.  (The machines built by Cray Research are prime examples of this approach.)  The first Top 500 list, published in June 1993, was dominated by vector architectures — notice how many systems are from Cray, or from Thinking Machines, another vendor of similar systems.  These systems tended to be voracious consumers of electricity; many of them required special facilities, like cooling with chilled water.

Within a few years, though, the approach had begun to change.  A lively market had developed in personal UNIX workstations, using RISC processors, provided by vendors such as Sun Microsystems, IBM, and HP.  (In the early 1990s, our firm, and many others in the financial industry, used these machines extensively.)  The resulting availability of commodity CPUs made building HPC systems using those processors economically attractive.  They were not quite as fast as the vector processors, but they were a lot cheaper.  Slightly later on, a similar transition, also motivated by economics, took place away from RISC processors and toward the x86 processors used in the by-then ubiquitous PC.

Top 500 Processor Architectures

The researchers point out that current mobile processors have some limitations for this new role:

  • The CPUs are mostly 32-bit designs, limiting the amount of usable memory
  • Most lack support for error-correcting memory
  • Most use non-standard I/O interfaces
  • Their thermal engineering does not necessarily accommodate continuous full-power operation

But, as they also point out, these are implementation decisions made for business reasons, not insurmountable technical problems.  They predict that newer designs will be offered that will remove these limitations.

This seems to me a reasonable prediction.  Using many simpler components in parallel has often been a sensible alternative to fewer, more powerful, complex systems.  Even back in the RISC workstation days, in the early 1990s, we were running large simulation problems at night, using our network of 100+ Sun workstations as a massively parallel computer.  The trend in the Top 500 lists is clear; we have even seen a small supercomputer built using Raspberry Pi computers and Legos.  Nature seems to favor this approach, too; our individual neurons are not particularly powerful, but we have a lot of them.


Watson Goes to College

March 9, 2013

Back in early 2011, I wrote a number of posts here about IBM’s Watson system, which scored a convincing victory over human champions in the long-running TV game show, Jeopardy!.  Since then, IBM, with its partners, has launched efforts to employ Watson in a variety of other fields, including marketing, financial services, and medical diagnosis, in which Watson’s ability to assimilate a large body of information from natural language sources can be put to good use.

Now, according to a post on the Gigaom blog, Watson will, in a sense, return to its roots in computer science research.  IBM has supplied a Watson system to the Rensselaer Polytechnic Institute [RPI] in Troy, NY.  According to Professor James Hendler, author of the post, and head of the Computer Science department at RPI, one focus of the work with Watson will be expanding the scope of information sources the system can use.

One of our first goals is to explore how Watson can be used in the big data context.  As an example, in the research group I run, we have collected information about more than one million datasets that have been released by governments around the world. We’re going to see what it takes to get Watson to answer questions such as “What datasets are available that talk about crop failures in the Horn of Africa?”.

Some of the research work with Watson will also be aimed at gaining more understanding of the process of cognition, and the interplay of a large memory and sophisticated processing.

By exploring how Watson’s memory functions as part of a more complex problem solver, we may learn more about how our own minds work. To this end, my colleague Selmer Bringsjord, head of the Cognitive Science Department, and his students, will explore how adding a reasoning component to Watson’s memory-based question-answering could let it do more powerful things.

The Watson system is being provided to RPI as part of a Shared University Research Award granted by IBM Research.  It will have approximately the same capacity as the system used for Jeopardy!, and will be able to support ~20 simultaneous users.  It will be fascinating to see what comes out of this research.

The original IBM press release is here; it includes a brief video from Prof. Hendler.


Dr. Watson Goes to Work

February 10, 2013

Back in early 2011, I wrote a number of posts here about IBM’s Watson system, which scored a convincing victory over human champions in the long-running TV game show, Jeopardy!.  The match, as a demonstration of the technology, was undoubtedly impressive, but the longer term aim was to employ Watson’s ability to cope with natural language and to assimilate a huge body of data for work in other areas, such as financial services, marketing, and medical diagnosis.  It’s also been suggested that Watson might be made available as a service “in the cloud”.

On Friday, IBM, together with development partners WellPoint, Inc. and Memorial Sloan-Kettering Cancer Center, announced the availability of Watson-based systems for cancer diagnosis and care.

IBM, WellPoint, Inc., and Memorial Sloan-Kettering Cancer Center today unveiled the first commercially developed Watson-based cognitive computing breakthroughs.  These innovations stand alone to help transform the quality and speed of care delivered to patients through individualized, evidence based medicine.

Since the beginning of its development, Watson has absorbed more than 600,000 pieces of medical evidence and 2 million pages of text from 42 medical journals.  It has also had thousands of hours of training from clinicians and technology specialists.  The goal is to provide doctors and other caregivers with a menu of treatment options.

Watson has the power to sift through 1.5 million patient records representing decades of cancer treatment history, such as medical records and patient outcomes, and provide to physicians evidence based treatment options all in a matter of seconds.

Keeping up with the latest developments in medical research and clinical practice is a serious issue in health care; by some estimates, the amount of available information doubles every five years.  A system based on Watson may give doctors a better chance of staying on top of all of that.

Three specific products were announced today:

The new products include the Interactive Care Insights for Oncology, powered by Watson, in collaboration with IBM, Memorial Sloan-Kettering and WellPoint.   The WellPoint Interactive Care Guide and Interactive Care Reviewer, powered by Watson, designed for utilization management in collaboration with WellPoint and IBM.

The Watson system has improved technically since its debut on Jeopardy!.  IBM says that its performance has increased by 240%, and its physical resource requirements have been reduced by 75%.  It can now be run on a single Power 750 server.

There’s more information on the technology at IBM’s Watson site.


IBM Announces Silicon Nanophotonics

December 12, 2012

One of the significant trends in recent computer system design has been the growing use of large-scale parallel processing.  From multiple-core CPUs in PCs to massively parallel systems like Titan at Oak Ridge National Laboratory, currently the world’s fastest supercomputer, and IBM’s Watson system, which won a convincing victory in a challenge match on Jeopardy!, the use of multiple processors has become the technique of choice for getting more processing horsepower.

These systems have achieved impressive levels of performance, but their design has its tricky aspects.  If the collection of processors is to work as one system, there obviously must be some mechanism for communication among them.  In practice, the capacity and speed of these interconnections can limit a system’s potential performance.  Even fiber-optic interconnections can be cumbersome with current technology: at each end, electrical signals must be converted to light pulses, and vice versa, by specialized hardware.

On Monday, IBM announced a new technology that has the potential to remove some of these bottlenecks.  Building on research work originally described by IBM at the Tokyo SEMICON 2010 conference [presentation PDF], the Silicon Integrated Nanophotonics technology allows the fabrication of a single silicon chip containing both electrical (transistors, capacitors, resistors) and optical (waveguides, photodetectors) elements.

The technology breakthrough allows the integration of different optical components side-by-side with electrical circuits on a single silicon chip, for the first time, in standard 90nm semiconductor fabrication. The new features of the technology include a variety of silicon nanophotonics components, such as modulators, germanium photodetectors and ultra-compact wavelength-division multiplexers to be integrated with high-performance analog and digital CMOS circuitry.

IBM says that the technology allows a single nanophotonic transceiver to transfer data at 25 gigabits per second.  A single chip might incorporate several transceivers, allowing speeds in the terabit per second range, orders of magnitude faster than current interconnect technology.
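As a back-of-the-envelope check (my arithmetic, not a figure from IBM), reaching the terabit range would take on the order of forty such transceivers on one chip:

```python
# How many 25 Gb/s nanophotonic transceivers would a 1 Tb/s chip need?
per_transceiver_gbps = 25
target_gbps = 1000                      # 1 terabit per second
needed = target_gbps / per_transceiver_gbps
print(needed)                           # → 40.0
```

That density is exactly what monolithic integration makes plausible: the transceivers are fabricated as circuit elements, not attached as separate modules.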

Probably the more significant aspect of the announcement is that IBM has developed a method of producing these nanophotonic chips using a standard 90 nanometer semiconductor fabrication process.  Although I have not seen any specific figures, this has the potential to provide significantly faster and cheaper interconnections than current technology.

The initial deployments of the technology will probably be in large data centers, supercomputers, and cloud services.  However, if IBM has truly licked the manufacturing problem, there is no reason that the benefits should not, in time, “trickle down” to more everyday devices.

Ars Technica has an article on this announcement.


Watson in the Clouds

September 24, 2012

I’ve written here several times about IBM’s Watson system, which first gained some public notice as a result of its convincing victory in a Jeopardy! challenge match against two of the venerable game show’s most accomplished human champions.  Since then, IBM has announced initiatives to put Watson to work in a variety of areas, including medical diagnosis, financial services, and marketing.  All of these applications rely on Watson’s ability to process a very large database of information in natural language, and to use massively parallel processing to draw inferences from it.  (The Watson system that won the Jeopardy! test match used 10 racks of servers, containing 2880 processor cores, and 16 terabytes of memory.)

Now an article in the New Scientist suggests an intriguing new possibility for Watson, as a cloud-based service.

Watson, the Jeopardy-winning supercomputer developed by IBM, could become a cloud-based service that people can consult on a wide range of issues, the company announced yesterday.

The details of this are, at this point, fuzzy at best, but making Watson available as a cloud service would certainly make it accessible to a much larger group of users, given the sizable investment required for a dedicated system.

Because Watson can respond to natural language queries, it is tempting to compare it to other existing systems.  Apple’s Siri, for example, can interpret and respond to spoken requests, but the back-end processor is essentially a search engine.  The Wolfram|Alpha system also responds to natural-language queries, but its ability to deliver answers depends on a structured database of information, as Dr. Stephen Wolfram has explained.  Watson really is a new sort of system.

All of this is still in the very early stages, of course, but it will be fascinating to see how it develops.


Take the Road Train, Revisited

September 16, 2012

I’ve written here before about some of the work being done to develop “self-driving” cars, including Google’s tests of a fully-autonomous vehicle, and Volvo’s work on developing “road trains”, essentially convoys of semi-autonomous vehicles that follow a lead vehicle with a human driver.  Volvo’s work is part of the European Union’s Project SARTRE (Safe Road Trains for the Environment).

The New Scientist site has an article reporting on a recent demonstration of the road train technology.  This approach probably has the higher likelihood of practical application in the near term, because it is largely based on technology that is already present, at least in some high-end cars.

Almost all the sensors and actuators that keep me from flying off the road now come as standard in most new Volvos (and other manufacturers for that matter). They are the exact same ones that enable cars to stay in lanes and avoid hitting other cars and pedestrians.

In contrast, completely autonomous cars, like those being tested by Google, require a considerable amount of added equipment to function.

Both approaches have the potential to provide significant improvements in safety.  The autonomous “driver” will not drive while sleepy or intoxicated; nor will it be distracted by sightseeing, fiddling with a cell phone, or turning around to smack the kid in the back seat.  An automatic system can also react more quickly than a human driver.

That faster reaction time means, in practice, that cars, particularly in a road train system, can follow one another much more closely than would be safe or legal with a human driver.  In the test reported in the article, the following distance at a speed of 90 km/hour [56 mph] was about 6 meters [19.7 feet].  By comparison, a driver reaction time of 500 milliseconds would add about 12.5 meters [41 feet] of separation at the same speed.  Putting vehicles closer together, with fewer speed fluctuations, should help reduce road congestion.  Obviously, all this assumes that the lead driver is highly competent.
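The reaction-distance arithmetic is simple enough to check directly (the constants below are standard unit conversions, not figures from the article):

```python
# Distance a car covers during the driver's reaction time,
# before braking even begins.
speed_kmh = 90       # road-train test speed
reaction_s = 0.5     # assumed human reaction time

speed_ms = speed_kmh * 1000 / 3600    # 25.0 m/s
gap_m = speed_ms * reaction_s         # 12.5 m
gap_ft = gap_m / 0.3048               # ≈ 41 ft
print(gap_m, round(gap_ft, 1))        # → 12.5 41.0
```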

The ability to follow other vehicles more closely also might improve fuel economy, by the phenomenon that cyclists everywhere know as “drafting”.  As speed increases, the amount of power required just to overcome air resistance increases as the third power of the vehicle’s relative air speed (that is, taking into account any head- or tail-wind).  At a speed of 15 mph on level ground, for example, most of a cyclist’s power is used just to make a hole in the air. [Source: Bicycling Science, 2nd Edition, by Frank R. Whitt and David G. Wilson; Cambridge MA: MIT Press, 1997].  The effect is not so pronounced for cars, since they are typically more streamlined (that is, have a lower drag coefficient), but it is still significant.
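The cubic relationship is easy to put into numbers.  The drag-area value below is a typical passenger-car figure I have assumed for illustration; it comes from neither the article nor the book:

```python
# Aerodynamic drag power: P = 0.5 * rho * Cd * A * v**3
RHO = 1.225  # air density at sea level, kg/m^3

def drag_power(cd_a, v):
    """Watts needed to overcome drag at relative airspeed v (m/s),
    for a given drag area Cd*A (m^2)."""
    return 0.5 * RHO * cd_a * v ** 3

# Doubling airspeed multiplies drag power by 2**3 = 8.
assert abs(drag_power(0.6, 20) / drag_power(0.6, 10) - 8) < 1e-9
print(round(drag_power(0.6, 25)))  # drag power at 90 km/h, in watts
```

The cube law is why even a modest reduction in effective airspeed from drafting can translate into the double-digit fuel savings quoted below.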

Vehicles driving in such tight formations with fewer speed fluctuations should dramatically reduce congestion, says Erik Coelingh, Volvo’s senior technical specialist who is heading the research near Gothenburg. The reduction in drag could potentially cut fuel consumption by as much as 20 per cent, he says.

The technology is certainly interesting, and seems to have a good deal of potential.  Whether the legal and cultural obstacles to its adoption can be overcome remains to be seen.

