Who Ya Gonna Call?

August 31, 2009

The Christian Science Monitor recently had an interesting article about innovation and the Internet.  Sitting here today, it is sometimes hard to remember life without the Internet, yet all of its development has happened in my adult lifetime.  (Somehow, we did manage life before.)  The Internet’s growth in both size and scope is truly amazing.

Part of this, of course, is due to basic advances in technology.  The laptop computer on which I am writing this is much faster, and has more than 500 times the memory and 10,000 times the disk storage of the first computer I ever used, an IBM 360/91.  But better basic technology is not the whole story.  The Internet has provided an unusual environment in which new ideas could flourish.  The article touches on some of the reasons for this, although its main focus is the potential for problems down the road due to inadequate planning for the future.

The author, James Turner, points out that much of the infrastructure of the Internet has not changed very much since its early days:

Like a jazzy sports car that has never had its oil changed, the underlying protocols of the Internet have remained largely unchanged since it came into being in the mid-1980s. The Internet can be surprisingly fragile at times and is vulnerable to attack.

As a statement of fact, this is hard to challenge.  Moreover, the basic protocols that the Internet uses were designed for a different world.  I have, in one context or another, administered Internet E-mail lists practically since the beginning.  People often complain that the mail protocols are insecure, not auditable, and so on.  While many of these complaints are valid, the people making them often don’t realize that, back in the 1980s, when machines from different manufacturers, running different operating systems, networking software, and even character sets, had to be linked together, getting mail delivered reliably at all was a considerable achievement.

One of the ironies of Internet history is that, although the original designers did not envision their work being used on anything like the scale it is today, they did their job so well that the structure has survived growing like Topsy:

The Internet grew too big too fast, says John Doyle, professor of electrical engineering at the California Institute of Technology in Pasadena, Calif.

“The original was just an experimental demo, not a finished product,” he says. “And ironically, [the originators] were just too good and too clever. They made something that was such a fantastic platform for innovation that it got adopted, proliferated, used, and expanded like crazy.”

The original design was flexible and robust enough to be used to build a vast array of services and facilities, all of which can work together because they adhere to certain standards.  That’s the good news.  The bad news is that, because there is no overall authority “in charge”, making a widespread change is difficult, even if essentially everyone agrees that it is needed.

The outstanding example of this is the change in Internet addressing from IPv4 (which gives us the familiar 32-bit IP address, usually written in “dotted decimal” notation as four octets, e.g., 192.168.1.20) to IP version 6, which uses 128-bit addresses.  This obviously gives a much larger pool of potential addresses, and in fact the primary motivation for the change is the easily predictable exhaustion of the IPv4 address space within a few years.  As of December 2008, the new IPv6 protocol had been on the “standards track” for ten years, but for the most part it has yet to be adopted.  Instead, various technical workarounds (such as Network Address Translation) have been employed to stretch out the life of IPv4.  Virtually everyone agrees that the adoption of IPv6 is necessary, but with respect to themselves and their organizations, they seem to be a bit like St. Augustine in his Confessions: “Grant me chastity and continence, but not yet” (da mihi castitatem et continentiam, sed noli modo).
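Just to make the difference in scale concrete, here is a minimal sketch in Python (using the standard ipaddress module; the addresses themselves are arbitrary examples, and 2001:db8::1 comes from the IPv6 documentation prefix):

```python
# Compare the two address formats: IPv4 is 32 bits (4 octets),
# IPv6 is 128 bits (8 groups of 16 bits, written in hex).
import ipaddress

ipv4 = ipaddress.ip_address("192.168.1.20")  # dotted-decimal notation
ipv6 = ipaddress.ip_address("2001:db8::1")   # colon-separated hex groups

print(ipv4, "->", ipv4.packed.hex())  # 4 bytes:  c0a80114
print(ipv6, "->", ipv6.packed.hex())  # 16 bytes: 20010db8...0001

# The size of each address space:
print(f"IPv4: {2**32:,} addresses")   # about 4.3 billion
print(f"IPv6: {2**128:,} addresses")  # about 3.4 x 10^38
```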

Some folks, like Professor Doyle, quoted above, think that what is needed is a re-design of the Internet from the “bare metal” outwards, to take into account how its usage has developed:

“To the extent I’ve been working in this field for the last 10 years, I’ve been mostly working on band-aids. I’m really trying to get out of that business and try to help the people, the few people, who are really trying to think more fundamentally about what needs to be done.”

I can sympathize with the idea, although it would obviously be a mammoth undertaking, and one would be trying to re-design something that will not be obliging enough to stand still while the work is done.

More fundamentally, there is a basic issue here that I think may relate to other issues of our time.  The Internet is, arguably, the most complex artifact ever constructed.  It has flourished because of an odd combination of strict rules (the underlying, basic protocols) and anarchy (pretty much anything else).  The reality does not fit the picture of either a centrally controlled undertaking, or a complete free-for-all; there are elements of both.  It may be that there is a lesson here that is applicable to other areas.  It is not necessarily true that the only alternative to a completely laissez-faire economy is commissars and collective farms.  We need to think of ways in which we can beneficially combine elements of control and freedom, and not just cling to old ideologies because they are familiar.


Zooming In

August 30, 2009

There are reports at both the BBC News and the New Scientist about an apparent breakthrough in the microscopic imaging of the very small, by scientists at IBM Research in Zürich. (This is the same lab where Gerd Binnig and Heinrich Rohrer invented the scanning tunneling microscope in 1981, for which they won the 1986 Nobel Prize in Physics.)  The IBM group has managed, for the first time, to capture an image of an intact organic molecule, pentacene, which consists of five fused benzene rings, showing its structure, even to the positions of the hydrogen atoms on the periphery.  (Both stories have an image of the molecule.  It is about 20 ångström units long, or 2.0 × 10⁻⁹ meter.)

Although images on this scale have been produced before, previous techniques have not been able to image molecules, because the imaging itself disrupted the molecular structure.  The team used a modified method of atomic force microscopy, with a single molecule of carbon monoxide (CO) at the tip of the probe:

The molecule is very fragile, but the researchers were able to capture the details of the hexagonal carbon rings and deduce the positions of the surrounding hydrogen atoms.

Although van der Waals force attracted the tip to its target, a quantum-mechanical effect called the Pauli exclusion principle pushed back. This happens because electrons in the same quantum state cannot approach each other too closely.
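The tug-of-war in that quote, van der Waals attraction against Pauli repulsion, is the same balance captured by the textbook Lennard-Jones potential.  As a rough illustration (this is the generic model, not the IBM group’s actual force calculation):

```python
# Lennard-Jones potential: the r^-6 term models van der Waals
# attraction, the much steeper r^-12 term models Pauli repulsion.
def lennard_jones(r, epsilon=1.0, sigma=1.0):
    """Potential energy at tip-sample separation r (illustrative units)."""
    sr6 = (sigma / r) ** 6
    return 4.0 * epsilon * (sr6 * sr6 - sr6)

# Repulsive (positive) at short range, attractive (negative) at
# longer range, with a minimum near r = 1.12 * sigma.
for r in (0.95, 1.0, 1.12, 1.5, 2.0):
    print(f"r = {r:.2f}  V = {lennard_jones(r):+.4f}")
```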

When I was studying chemistry and physics in college, we covered the structure of these molecules, and their physical and chemical properties.  Quantum theory told us how things should behave at an atomic and sub-atomic scale, but it wasn’t something we could observe directly; we thought it was pretty cool that we could measure bond lengths and angles with a microwave spectrometer.

This development could be of significant value in the design of nanotechnology components, and perhaps in the development of drugs, since the function of so many biologically important molecules depends on their shape.


The Worm That Won’t Die

August 28, 2009

The New York Times recently ran an article about the Conficker worm, which first appeared in November of last year.  Conficker, which attacks Microsoft Windows systems, has proved to be elusive and difficult to deal with, despite the efforts of a task force made up of security people from industry, academia, and government.  Its focus is, apparently, on assembling a huge network (a so-called botnet) of hijacked computers.

The program, known as Conficker, uses flaws in Windows software to co-opt machines and link them into a virtual computer that can be commanded remotely by its authors. With more than five million of these zombies now under its control — government, business and home computers in more than 200 countries — this shadowy computer has power that dwarfs that of the world’s largest data centers.

This particular bit of malicious software does not appear to be the work of a bored teenager sitting in a basement somewhere.  It employs very sophisticated methods to avoid detection, and, if detected, to keep itself from being totally removed from an infected system.  Also, unlike most previous malware of this type, which generally had to “phone home” for instructions, and thereby gave clues to its origin, Conficker uses a peer-to-peer protocol (as do file-sharing programs, like LimeWire or BitTorrent) to transmit its coordinating information, making it much less susceptible to disruption.
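One of the techniques Conficker is known to use for that coordination, generating rendezvous domain names from the date, can be sketched in a few lines; the toy version below just illustrates the general idea (it is emphatically not Conficker’s actual algorithm):

```python
# Toy domain-generation scheme: the infected machines and the botnet
# operator each seed a PRNG with the date, so they independently
# compute the same daily list of rendezvous domains, and defenders
# have no single fixed "home" address to block.
import datetime
import random

def daily_domains(date, count=5, length=10):
    rng = random.Random(date.toordinal())  # same date -> same list
    return ["".join(rng.choice("abcdefghijklmnopqrstuvwxyz")
                    for _ in range(length)) + ".com"
            for _ in range(count)]

print(daily_domains(datetime.date(2009, 8, 28)))
```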

What is not clear at this point is the ultimate aim of whoever is responsible for the worm.  Some speculation involves the usual suspects: perhaps the intent is to use the botnet to distribute spam, to steal passwords, or to launch distributed denial-of-service (DDoS) attacks.  There is some evidence that one ploy involves warning the user that his machine has a nasty virus, and offering to remove it for a payment made by credit card.

All of this is consistent with the idea that, to an increasing extent, the distribution of malware (and Internet nastiness in general) is becoming less like random vandalism and more like organized crime.  Since the Internet operates internationally, there is no overall authority to make or enforce rules.  (The article mentions that one of the FBI’s problems in trying to investigate this kind of thing is the necessity of building “a relationship with ‘noncorrupt’ law enforcement agencies in the countries where the suspects are located.”)

My own expectation is that the problem of malware distributed via the Internet is going to get considerably worse before it gets better.  The solution has got to involve more than just having everyone install anti-virus software on his/her computer; it will need to involve Internet providers and governments, and is sure to step on multiple toes.


Update from the Garbage Patch

August 27, 2009

Back on August 7, I posted a note about Project Kaisei, a research expedition to the Great Pacific Garbage Patch, an area in the North Pacific Ocean, bigger than Texas, in which a large amount of primarily plastic rubbish has been collected by the prevailing winds and currents.  The National Science Foundation reported today that the first of the two research vessels involved, the New Horizon, has completed its sample-collecting trip.

What they found, 1000 miles from the coast of California, was not a pretty picture.  The ship trawled through the area, collecting samples at various depths:

On August 11th, the researchers encountered a large net entwined with plastic and various marine organisms; they also recovered several plastic bottles covered with ocean animals, including large barnacles.

The research team also collected a large number of plastic bottles, many inhabited by a variety of sea creatures.  (The NSF press release has a number of images of some of the rubbish found.)   It’s striking, and sad, that there is all this junk floating out there in the middle of the ocean.


… and Credit Unions

August 27, 2009

The National Credit Union Administration has issued a Fraud Alert to its member credit unions, warning against a new malware attack targeting them.  The attack is carried out by mailing the target credit union a bogus “Fraud Alert”, and enclosing two CDs that supposedly contain training materials to help defend against the fictional threat.  The CDs contain malware to subvert the target’s systems:

The subject of the fraudulent letter itself is a purported NCUA FRAUD Alert. The letter advises credit unions to review training material (contained on the CDs). DOING SO COULD RESULT IN A POSSIBLE SECURITY BREACH TO YOUR COMPUTER SYSTEM, OR HAVE OTHER ADVERSE CONSEQUENCES.

This, along with the attacks against small- and medium-sized businesses that I discussed in my last post, is probably indicative of the steadily growing involvement of organized crime in computer-based fraud.  Unlike the scatter-gun tactics of early computer viruses, these attacks are targeted, and aimed at stealing money.  Credit unions are probably being targeted because they are, on average, less sophisticated about security matters.

If your business or organization receives unsolicited material on CDs or other media, the same advice applies as in the early days of floppy-disk-borne viruses.  Do NOT open the media, unless it has first been checked for malware.  The best way to do this, particularly if you need to receive media from external sources, is to use a dedicated machine, not connected to the network, for the scanning.  You might even consider running Linux or one of the BSDs as the OS, and running Windows, if necessary, in a virtual machine.
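For what it’s worth, the scan itself can be as simple as the sketch below (assuming, for example, the open-source ClamAV scanner on a Linux machine; the mount point is illustrative):

```python
# Run ClamAV's command-line scanner over mounted media on the
# dedicated, offline scanning machine described above.
import subprocess

result = subprocess.run(
    ["clamscan", "--recursive", "--infected", "/media/cdrom"],
    capture_output=True, text=True,
)
print(result.stdout)

# clamscan exit status: 0 = no threats found, 1 = infected files found.
if result.returncode == 0:
    print("No malware found; still treat unsolicited media with suspicion.")
else:
    print("Possible malware found; do not use this media.")
```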

Update, Thursday, 27 August, 16:17

The SANS Internet Storm Center is now reporting that this incident was not an actual attack, but an authorized security test.  As they note, though, good security practices are still called for.


Cyber-Crooks Target Small Business

August 26, 2009

Brian Krebs has a story in Tuesday’s Washington Post about a new trend in the ongoing saga of Internet-based fraud.  Apparently, criminal groups, many based in Eastern Europe, are focusing their attention on small- and medium-sized businesses in the US, and stealing electronic banking credentials in order to carry out fraudulent wire transfers.

In the past six months, financial institutions, security companies, the media and law enforcement agencies are all reporting a significant increase in funds transfer fraud involving the exploitation of valid banking credentials belonging to small and medium sized businesses.

The attack typically begins with an E-mail message sent to the corporate treasurer, controller, or other financial officer.  The message is usually tailored to the recipient, and contains links to apparently legitimate Web sites.  If the recipient clicks on the link, he is taken to a site that downloads and installs malware, typically a keystroke logger or other trojan designed to steal passwords and other credentials.  With these in hand, the crooks initiate wire transfers from the target company’s account, often using intermediaries (sometimes unwitting ones) to disguise the ultimate destination of the funds.

The businesses involved often are embarrassed to report the fraud to authorities. Because they are businesses, they also lack some of the statutory protection that consumers have for electronic transfers.

This trend reinforces some security lessons that are by no means new.

  • A sensitive function like money transfer should never be performed from a general-purpose PC that may be used for E-mail, browsing, Facebook, online shopping, and goodness knows what else.  It should be done from a workstation dedicated to that function, configured so that only the software required for that function is installed, and its security configuration should be carefully monitored.  (In his blog, “Security Fix”, Brian Krebs has some useful suggestions for this.)
  • It should go without saying that anyone, whatever his or her position, who has access to money transfer facilities needs to be thoroughly trained in their secure operation.
  • The systems used for these functions should be configured so that it is not possible for the user to install software.

It’s also important, if you work in a financial function, to make sure that you read and understand the rules that apply to your online banking activities.  It should come as no surprise to anyone that banks have made significant investments in fraud prevention for their consumer banking operations, since the applicable law and regulations make them responsible for losses in some cases.  In the case of business accounts, the losses are usually borne by the account holder, and this externality means that the bank has much less incentive to care.

