Courting Google, Again

March 29, 2010

I’ve written here a couple of times before about some of the antics that localities are using to try to become Google’s choice for its 1+ Gbps fiber network experiment.  The deadline for localities to submit entries was last Friday, so there won’t be any more, but one late entry really is in a class by itself.  According to an article on the “Law and Disorder” blog at Ars Technica, a city councilman in Raleigh, NC, Mr. Bonner Gaylord, has promised to name his as-yet-unborn twins “Larry” and “Sergey”, after the founders of Google, if Raleigh is chosen for the experiment.  (He did sort of wimp out at the last minute, and include a proviso that the children must be boys.)  As my friend Phil would say, this is the height of something or other.

These various shenanigans have attracted some attention at Google, although it’s not clear that the attention is of the desired kind.  But Google does have an official blog post listing some of the other stunts.


Can You Trust Your Network Card?

March 28, 2010

Recently I have posted here about security threats from devices not traditionally part of the security officers’ paranoia list, like photo-copiers and electric meters.   An interesting presentation at the recent CanSecWest conference in Vancouver added another item to the list: your computer’s network interface card [NIC].  The presentation, by Yves-Alexis Perez and Loïc Duflot of the French ANSSI [Agence Nationale de la Sécurité des Systèmes d’Information], discussed some of the capabilities of current network cards, and how they might be exploited.  (The presentation slides are available here [PDF].)

Modern NICs often provide considerably more than a bare interface to the physical transport medium (e.g., the network cable).  Many have on-board memory, and even processors; these are intended to improve performance by providing an additional layer of buffering and offloading some tasks from the CPU (such as dealing with fragmented packets), and also to provide remote diagnostic and control facilities.  On some cards, this means that:

  • Every packet (in- or out-bound) passes through the NIC’s on-board memory
  • The NIC has direct access to the main processor’s RAM [DMA]
  • The on-board NIC processor runs firmware loaded from an EEPROM, or, via a driver, from the host’s filesystem.
  • Remote diagnostic and control protocols allow the NIC to provide a “heartbeat” to the network, and to receive and act on remote commands (to reboot, for example).  These protocols are intercepted by the NIC and not passed to the host.

Potentially, this is a very serious source of risk.  As the presentation summary puts it:

An unauthenticated remote attack on a network card is almost the most efficient attack one can imagine. A remote attacker located anywhere on the network can take full control of the victim’s network in order to: intercept all packets sent to and from the victim’s machine and forwards them to an attacker on the network; perform man in the middle on all unauthenticated network connexion (such as ARP or DNS) to redirect traffic to target machines; remotely shutdown, reset or wake up the machine.

Because the NIC has access to the system’s memory via DMA, it is also potentially possible to alter that memory and inject arbitrary code, unless the host OS defends against such attacks.
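One defense the host can mount against rogue DMA is an IOMMU, which restricts the memory regions a device is allowed to reach.  Here is a minimal sketch of how you might check for one, assuming a Linux host with a reasonably recent kernel, where the sysfs directory used below is populated whenever an IOMMU is active:

```python
#!/usr/bin/env python3
"""Rough check for an active IOMMU on a Linux host.

On recent kernels, /sys/kernel/iommu_groups contains one subdirectory
per IOMMU group when DMA remapping is in effect; if it is empty (or
absent), devices such as NICs may have unrestricted access to RAM.
"""
from pathlib import Path

IOMMU_GROUPS = Path("/sys/kernel/iommu_groups")

def iommu_active() -> bool:
    # True if the kernel has created at least one IOMMU group
    return IOMMU_GROUPS.is_dir() and any(IOMMU_GROUPS.iterdir())

if __name__ == "__main__":
    if iommu_active():
        print("IOMMU groups present; device DMA is being remapped.")
    else:
        print("No IOMMU groups found; devices may have unrestricted DMA.")
```

This is only a coarse indicator, of course; it says nothing about how the IOMMU is configured, or whether a particular device is actually covered.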

The research team demonstrated proof-of-concept attacks of these types, using a specific network card and configuration.  The card vendor has issued a firmware and driver patch; the vulnerability, since it requires that specific configuration, is in any case not too likely to be of great practical significance.  Still, this is valuable research, raising people’s consciousness in an area where not much thought is given to security.  As the authors say:

Our goal is to raise awareness on the security problems related to hardware vulnerabilities. We believe that this kind of publication should lead to an improvement of the quality of low level embedded firmware. So far, no research was performed on network card vulnerabilities.

We need to remember that some of the changes that make things easier for the legitimate user and system administrator can make things easier for Other People, too.
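As a practical footnote: if you want to know what driver and firmware your own NIC is running (useful for checking whether a vendor patch like the one above has actually reached your machine), the ethtool utility on Linux reports both.  A minimal sketch, assuming ethtool is installed and that eth0 is your interface name (substitute your own):

```python
#!/usr/bin/env python3
"""Print the driver and firmware version of a network interface.

Assumes a Linux host with ethtool installed; IFACE is just an example
name, so change it to match your system.
"""
import subprocess

IFACE = "eth0"

def nic_info(iface: str) -> str:
    # 'ethtool -i' reports the driver, driver version, firmware version,
    # and bus info for the given interface
    result = subprocess.run(
        ["ethtool", "-i", iface],
        capture_output=True, text=True, check=True,
    )
    return result.stdout

if __name__ == "__main__":
    print(nic_info(IFACE))
```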


More Cores Require OS Redesign?

March 27, 2010

Last week, there was a report in Network World on a presentation given by Dave Probert, who works on Windows kernel architecture at Microsoft, on the implications of multiple-core processors for operating system design.  (The article originated with the IDG news services, and was picked up by several industry publications.)  According to the report, Mr. Probert argued that the introduction of processor chips with more and more CPU “cores” means that a new approach to fundamental operating system design is required.

The presentation was given at the Universal Parallel Computing Research Center (of which Microsoft and Intel are sponsors) at the University of Illinois at Urbana-Champaign.  Since the news report was a little light on details, I have tried to get a copy of the presentation, but apparently it is off-limits except to members of the Center.  So I am going to discuss the presentation as outlined in the news report, recognizing that the report may not accurately or completely represent what Mr. Probert actually said.

The foundation of his argument appears to be that the user is not seeing enough benefit, in terms of greater system responsiveness, from the addition of more processor cores:

Today’s computers don’t get enough performance out of their multicore chips, Probert said. “Why should you ever, with all this parallel hardware, ever be waiting for your computer?” he asked.

Mr. Probert feels that greater responsiveness is what people want.  He says that current methods of scheduling tasks within the operating system are not doing a good job of achieving this, because the system has inadequate knowledge to assign priorities.

The problem in being responsive, he noted, is “how does the OS know [which task] is the important thing?” You don’t want to wait for Microsoft Word to get started because the antivirus program chose that moment to start scanning all your files.

If what he is saying is that application developers are not always careful to provide the information the system needs to set appropriate priorities, then he is right.  If he is saying that the priority mechanisms are incapable of doing a good job, then I have to disagree.  I have mentioned in earlier posts that I often run BOINC applications on my Linux machines.  These run with a very low priority, so that they only get CPU time when no other runnable process wants it; apart from the CPU utilization staying at about 100%, they have no noticeable impact on performance or interactive response.  Of course, if we are talking about an anti-virus program that wants to scan the entire disk, that will take a finite amount of time, and no amount of scheduling cleverness can make it go faster.
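For the curious, that kind of “play nice” behavior takes only a line or two to arrange.  A minimal sketch (the endless loop is just a stand-in for a real background workload such as a BOINC task):

```python
#!/usr/bin/env python3
"""Launch a CPU-bound background job at the lowest scheduling priority.

On Linux and other Unix-like systems, a nice value of 19 tells the
scheduler to give the job CPU time only when nothing else wants it,
which is essentially how BOINC clients stay out of the way.
"""
import subprocess

# Placeholder for a real CPU-hungry job; this one just spins forever.
background_job = ["python3", "-c", "while True: pass"]

proc = subprocess.Popen(["nice", "-n", "19", *background_job])
print(f"Background job running as PID {proc.pid} at nice 19")
```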

One of the remedies that Mr. Probert suggests is to assign specific cores to specific processes, and then let those processes do their own resource management.

The OS could assign an application a CPU and some memory, and the program itself, using metadata generated by the compiler, would best know how to use these resources.

First, at least some current operating systems, like Linux, have a “processor affinity” function, which allows a process to be tied to specific CPU(s) (or cores).  This would seem to accomplish much of what Mr. Probert is talking about.  More fundamentally, though, one of the reasons that people worry about parallel programming, and the key reason the multi-tasking function is provided by the operating system, is that application programmers find it hard to handle parallelism well.
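For what it is worth, pinning work to particular cores requires no OS redesign today.  A minimal sketch of the Linux affinity interface (Python 3.3 or later, Linux only; the core numbers are arbitrary examples):

```python
#!/usr/bin/env python3
"""Pin the current process to specific CPU cores via Linux's
processor-affinity interface (the same facility used by taskset)."""
import os

print("Cores currently available to this process:", os.sched_getaffinity(0))

# Restrict this process (pid 0 means the calling process) to cores 0 and 1.
# The core numbers are arbitrary; pick ones your machine actually has.
os.sched_setaffinity(0, {0, 1})

print("Now restricted to:", os.sched_getaffinity(0))
```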

As I noted in an earlier post, the degree to which parallel processing is possible is largely an attribute of the problem being solved.  There is also the opportunity for “accidental” parallelism that arises from the mix of processes running on the system at a given time.  It seems clear that only the OS can take advantage of that (of course, the applications designer should “play nice” with the system scheduler).

It may be that there is something in this proposal that I’m missing, or perhaps the reporting was incomplete.  But I don’t really see how the kind of approach that Mr. Probert is suggesting helps in any meaningful way.


Smart Meters, Again

March 26, 2010

As a way of improving the efficiency of electric power distribution, many utilities are looking at the deployment of so-called “smart meters” that, in addition to their basic function of measuring the amount of electricity consumed, are also networked computers.  The technology certainly has its appealing features.  For example, it could remove the necessity of meter readers visiting customer premises, and would facilitate the introduction of demand-based pricing, under which power would be more expensive at times of high demand, and cheaper during “off hours”.  But there have also been some security concerns about the deployment of this “smart grid” technology; I’ve written about them before.

The PhysOrg Web site has an article about some new security research that has been done by the security firm InGuardians for three unnamed US utility companies.  As in previous examinations, the testers found significant security vulnerabilities in the meters.

At the very least, the vulnerabilities open the door for attackers to jack up strangers’ power bills. These flaws also could get hackers a key step closer to exploiting one of the most dangerous capabilities of the new technology, which is the ability to remotely turn someone else’s power on and off.

Some of the flaws could be exploited by anyone with physical access to the meter (and many meters are mounted outside, where such access is easy).  Some systems use network connection devices, called access points (analogous to routers), that contain cryptographic keys and other sensitive information.  And some use wireless data communications in a not-very-secure way.

Many of these problems are reminiscent of the kinds of security problems that plagued early computer networks.  It is somewhat disheartening that, in each new extension of technology, some of the same lessons seemingly must be learned anew.  Nonetheless, the good news is that the utility companies are doing this testing before plunging into a large-scale deployment.  I hope they pay attention to the results.


Lady Lovelace’s Day

March 25, 2010

I seem to have overlooked the announcement at the BBC News site, but yesterday was Ada Lovelace Day.  I can perhaps be forgiven a bit, since this is only the second year it has been celebrated; I don’t think Hallmark even has a card for it yet.  The “holiday” was created in 2009 by Ms. Suw Charman-Anderson, a social media consultant in Britain, and is intended to celebrate women working in the fields of science and technology.  According to the BBC,

Additionally, events were held in London, Copenhagen, Dresden, Montreal and Brazil to mark the day, named after Ms Lovelace, held on 24 March.

Brazil is a nice city.

All kidding aside, Augusta Ada King (née Byron), Countess of Lovelace (to give her correct name), was quite an interesting person.  She was the only legitimate child of the poet Lord Byron and his wife, Anne Milbanke, but she had very little in the way of a relationship with her father, who separated from her mother shortly after Ada was born, and died when she was eight years old.  She suffered from ill health as a child, but received an unusually good education in mathematics, for a slightly odd reason:

Her mother’s obsession with rooting out any of the insanity of which she accused Lord Byron was one of the reasons that Lovelace was taught mathematics from an early age.

(I know a few people who sometimes thought that studying math might make them insane, but I had never before heard that math study had been proposed as a prophylactic measure.)

She became friends with many of her better-known contemporaries, including Charles Wheatstone, Charles Dickens, and Michael Faraday.

Her most significant work was done with the inventor Charles Babbage, who designed the Difference Engine and Analytical Engine, mechanical computing devices.   The Analytical Engine was never built, owing to its complexity, the projected expense of its construction, and the fact that no one knew how to evaluate it.  Nonetheless, it has a reasonable claim to being the first design for a general-purpose computer.

Lady Lovelace, in a series of notes on a paper describing the machine, set out an algorithm for using it to compute Bernoulli Numbers. Because of this, she is often credited with being the first computer programmer.  The computer programming language Ada, developed for the US Department of Defense, is named in her honor.  Perhaps more significantly, she realized that the machine could potentially do more than just crunch numbers.  She wrote, in 1843,

The engine can arrange and combine its numerical quantities exactly as if they were letters or any other general symbols; and in fact it might bring out its results in algebraical notation, if provisions were made accordingly.
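For fun, here is roughly what her famous calculation amounts to in a modern language.  This is emphatically not Lovelace’s Note G program, just a minimal sketch that computes the first few Bernoulli numbers from the standard recurrence using exact rational arithmetic:

```python
#!/usr/bin/env python3
"""Compute the first few Bernoulli numbers.

Uses the standard recurrence
    B_0 = 1,   B_m = -1/(m+1) * sum_{k=0}^{m-1} C(m+1, k) * B_k
with exact fractions; a modern sketch, not a transcription of
Lovelace's Note G table.
"""
from fractions import Fraction
from math import comb

def bernoulli(n):
    """Return [B_0, B_1, ..., B_n] as exact fractions."""
    B = [Fraction(1)]
    for m in range(1, n + 1):
        s = sum(comb(m + 1, k) * B[k] for k in range(m))
        B.append(-s / (m + 1))
    return B

if __name__ == "__main__":
    for i, b in enumerate(bernoulli(8)):
        print(f"B_{i} = {b}")
```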

So, although I’m a bit late, I’m glad to have the opportunity to salute Lady Lovelace and her contribution to the development of computing.

Update, Thursday, March 25, 17:37 EDT

The “Culture Lab” blog at the New Scientist site also has an article on Ada Lovelace Day.


Another National ID Card?

March 24, 2010

I’ve written before about the “Real ID” legislation, passed by Congress back in 2005, which imposes requirements on the process by which states issue driver’s licenses, in the interests of greater security; I’ve argued that it is, in fact, an attempt to establish a national ID card indirectly.  As I noted in that earlier post, there have been many problems with its implementation, and the Department of Homeland Security has put off, for at least one year, the December 31, 2009, deadline for compliance.

There is a story at Wired, in the “Threat Level” blog, about another proposal to introduce a new, high-tech identity document.  The original proposal was made in an op-ed article in the Washington Post, by Senators Charles Schumer and Lindsey Graham.  The article contains many sensible suggestions about providing a less cumbersome mechanism for legal immigration, and eventually citizenship, but it also contains a proposal for a new Social Security card, summed up in the following paragraph:

We would require all U.S. citizens and legal immigrants who want jobs to obtain a high-tech, fraud-proof Social Security card. Each card’s unique biometric identifier would be stored only on the card; no government database would house everyone’s information. The cards would not contain any private information, medical information or tracking devices. The card would be a high-tech version of the Social Security card that citizens already have.

There are several interesting things in this proposal.  Calling a new card “high-tech” can be done pretty much at will, and is essentially meaningless — but “fraud-proof”?   The Senators unfortunately do not give any hints as to how this worthy goal will be accomplished.  (As has sometimes been said, the problem with making things foolproof is that fools are so ingenious.)  If it means something like “forgery-proof”, I am not aware of any document of non-trivial importance that hasn’t, at one time or another, been counterfeited.  The inclusion of a biometric identifier is mentioned as if it were a silver bullet; but there is no reason to suppose that a forgery could not contain a legitimate biometric of its user.  Given the amount of time and money that has been spent for millennia by governments trying to prevent the counterfeiting of their currencies, I think it is fair to ask for a few more details on this score.

(We should also remember that, as the potential value of a forged document goes up, so does the effort invested in forging it.  When a driver’s license was just documentation of a qualification to drive, forged licenses were not too common.  Now that a license saying the bearer is 21 has external value, most high school students could probably tell you how to get one.)

The senators also say that their proposal would not involve the creation of a central database.  This is almost certainly nonsense.  In the first place, central records of identification credentials are kept for a reason: to prevent my handing you my supporting documents, so that you can go establish an identity in, let’s say, a different locality or state.   The only effective way I know of to guard against substitution of biometric data is to compare what’s on the card to a master copy.  Even if the establishment of an “official” data base could be avoided, there would be de facto data bases created as soon as the documents were in common use.   The Social Security number itself was never intended to be used for any purpose other than record-keeping in the Social Security system.  But it was appropriated as an identification number for credit files, taxes, and many other purposes.  Once again, the more authoritative a single credential is, the more valuable it is to someone with criminal intentions.  And, if one is concerned about privacy, the card itself will be a “tracking device” — does anyone seriously think that these cards would not begin to be used for numerous other purposes?

To paraphrase something Bruce Schneier has said many times, setting up the surveillance apparatus of a police state is not good civic stewardship.

