Google Expands Unsafe Site Reporting

June 26, 2013

For some time now, Google has published its Transparency Report, which gives a high-level overview of how Google relates to events in the world at large. The report has historically included several sections:

  • Traffic to Google services (current and historical, highlighting disruptions)
  • Information removal requests (by copyright holders and governments)
  • Requests for user data (by governments)

This information can be interesting in light of current events. For example, at this writing, Google reports ongoing disruptions to their services in Pakistan, China, Morocco, Tajikistan, Turkey, and Iran.

Now, according to a post on the Official Google Blog, a new section will be added to the Transparency Report. The new section is an outgrowth of work begun in 2006 with Google’s Safe Browsing initiative.

So today we’re launching a new section on our Transparency Report that will shed more light on the sources of malware and phishing attacks. You can now learn how many people see Safe Browsing warnings each week, where malicious sites are hosted around the world, how quickly websites become reinfected after their owners clean malware from their sites, and other tidbits we’ve surfaced.

Google says that they flag about 10,000 sites per day for potentially malicious content. Many of these are legitimate sites that have been compromised in some way. The “Safe Browsing” section of the Transparency Report shows the number of unsafe sites detected per week, as well as the average time required to fix them.

Google, because its search engine “crawlers” cover so much of the Web, has an overview of what’s out there that few organizations can match. I think they are to be commended for making this information available.

Anti-Virus Updating

April 29, 2013

The folks over at the SANS Internet Storm Center have a recent diary entry on keeping anti-virus (AV) software up to date.  This kind of anti-malware protection typically tries to recognize “evil code” on the basis of a set of heuristics, or by recognizing bit patterns in the code itself (these are sometimes called “signatures”).  These elements, especially the signatures, need to be updated as new varieties of malware are created and discovered “in the wild”.   (The defender is always, in a sense, trying to catch up, since a new type of malware has to be found and identified as such before a signature can be developed.)
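
Signature matching of this kind can be sketched in a few lines: scan a file’s bytes for known patterns. The patterns below are made-up placeholders (one echoes a fragment of the harmless EICAR test string), not real malware signatures, and real AV engines are of course far more sophisticated.

```python
# Minimal sketch of signature-based scanning: look for known byte
# patterns in data. The "signatures" here are made-up placeholders.
SIGNATURES = {
    "EICAR-like test pattern": b"X5O!P%@AP",      # fragment of the harmless EICAR test string
    "hypothetical dropper":    b"\xde\xad\xbe\xef",
}

def scan_bytes(data: bytes) -> list[str]:
    """Return the names of any signatures found in the data."""
    return [name for name, pattern in SIGNATURES.items() if pattern in data]

def scan_file(path: str) -> list[str]:
    """Scan a file on disk (reads it whole; fine for a sketch)."""
    with open(path, "rb") as f:
        return scan_bytes(f.read())
```

The point of the sketch is also why updates matter: a scanner like this can only ever find patterns that are already in its table.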

The contributors to the article are all very capable systems administrators, and I think it’s well worth a read, especially if you are responsible for a bunch of PCs.  (There are also some comments following the article itself; they are, as usual, sort of a mixed bag.)  I’d take away these suggestions from the discussion:

  • You may need to schedule AV updates more frequently than your initial instincts (one participant suggests hourly), to account for the fact that the updates will not all run every time they are scheduled.  (Machines may be rebooting, or turned off, for example.)
  • Because updates are not guaranteed to occur on the advertised schedule, it’s important to measure how up to date your machines actually are — if there are big discrepancies, try to find out why and fix the problem.
  • AV software is one layer of defense, but is certainly not a total solution.
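
The second point above, measuring how up to date machines actually are, can be sketched as a simple staleness check. The inventory mapping here is hypothetical; in practice the timestamps would come from your AV product’s management console or from querying each machine.

```python
from datetime import datetime, timedelta

def find_stale(last_updates: dict[str, datetime],
               now: datetime,
               max_age: timedelta = timedelta(hours=4)) -> list[str]:
    """Return the machines whose last successful definition update
    is older than max_age, sorted by name. The 4-hour default is an
    arbitrary illustration; pick a threshold to suit your schedule."""
    return sorted(name for name, ts in last_updates.items() if now - ts > max_age)

# Hypothetical inventory: machine name -> last successful update time.
inventory = {
    "pc-01": datetime(2013, 4, 29, 11, 30),
    "pc-02": datetime(2013, 4, 28, 9, 0),   # more than a day behind
}
```

Here `find_stale(inventory, datetime(2013, 4, 29, 12, 0))` would flag only `pc-02`; that is the sort of discrepancy worth chasing down.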

Probably the most important advice is this: if a machine has been compromised by malware, it is highly improbable that AV software, or anything else, will be able to clean or repair it.  Modern systems, and the malware that attacks them, are so complex that figuring out exactly what has been affected, compromised, or corrupted is effectively impossible.  The only reliable recovery method is “nuking from orbit”: wiping the machine’s hard drive(s), and reloading the OS, applications, and data from known clean backup copies.  Yes, it is a bloody nuisance, but it’s really the only way to make sure that you have a clean system.

Virus-Infested Hospitals

October 20, 2012

Most readers, I suspect, will have run across news stories or other reports of nasty infections sometimes acquired by hospital patients.  According to a report at Technology Review, there is another worrying category of infection proliferating in hospital environments: computer virus infections of medical equipment.

Computerized hospital equipment is increasingly vulnerable to malware infections, according to participants in a recent government panel. These infections can clog patient-monitoring equipment and other software systems, at times rendering the devices temporarily inoperable.

The advent of the microprocessor and Moore’s Law has meant the introduction of digital technology, often replacing electro-mechanical control systems, in everything from toasters to “fly-by-wire” aircraft.  It should come as no surprise that many medical devices are now controlled by software as well.  This of course means that all the problems of software, including program bugs, security vulnerabilities, and malware, are part of the package.  Also, as with industrial control [SCADA] systems, the undoubted convenience of linking these devices to a network provides a convenient vector for malware infections.  (The direct connection may be to an internal network, but there is often a path to the Internet lurking somewhere in the background.)  In addition, hospital personnel, like workers in other fields, bring in personal laptops, USB memory sticks, and other devices, sometimes with some undesirable extras.

Another difficulty with medical equipment is also reminiscent of the SCADA case.  For obvious reasons, the vendors and users of these devices place a high value on availability — the machine should be ready for use whenever it is needed.  This means that scheduling downtime for, say, installing software patches is not popular.  In addition, some manufacturers do not allow any modifications to their equipment or its software, even to install security fixes.  This stems in part from the requirement that the devices have to be approved by the FDA; rightly or wrongly, some vendors believe that installing such fixes might require the device to be re-certified.

In a typical example, at Beth Israel Deaconess Medical Center in Boston, 664 pieces of medical equipment are running on older Windows operating systems that manufacturers will not modify or allow the hospital to change—even to add antivirus software—because of disagreements over whether modifications could run afoul of U.S. Food and Drug Administration regulatory reviews, Fu says.  [Prof. Kevin Fu, associate professor of computer science at the University of Massachusetts, Amherst]

These security issues were the focus of a meeting last week of the Information Security & Privacy Advisory Board at the National Institute of Standards and Technology [NIST].   Prof. Fu was one of the attendees, as was Mark Olson, Chief Information Security Officer at Beth Israel Deaconess Medical Center in Boston, MA.

At the meeting, Olson also said similar problems threatened a wide variety of devices, ranging from compounders, which prepare intravenous drugs and intravenous nutrition, to picture-archiving systems associated with diagnostic equipment, including massive $500,000 magnetic resonance imaging devices.

Hospitals have not, historically, had to focus very much on computer security.  With today’s equipment, though, they have become security administrators whether they like it or not.  As with SCADA systems and many others, there is some catching up to do.

Low-Tech Scareware

October 6, 2012

Once the first computer viruses, worms, and other malware had appeared on the scene, it was not long before software vendors, like McAfee and Norton, began to provide users with anti-virus software as a defense.  And then it wasn’t too long before the first scareware appeared to take advantage of that environment.  In one classic incarnation, scareware (which is essentially a “social engineering” attack) presented a message to the user, frequently in a pop-up window from a dodgy web site, saying that the user’s computer was infected with some dire virus.   The message would go on to say that terrible things were bound to happen; however, the user could return to serenity if (s)he purchased a special anti-virus program, which by lucky coincidence could be accomplished by simply clicking a link in the message.  The claimed infection was, of course, generally non-existent, and the anti-virus software worthless.  (It might erase some anodyne system file as “proof” that the infection had been removed.)

Usually, this was just a means of extracting money from gullible users, although it was always possible that the “anti-virus” software was the real malware.   If the user can be induced to install some arbitrary bit of software, the game is essentially over as far as defending the system goes.

This past week, Ars Technica reported that the US Federal Trade Commission [FTC] had filed six lawsuits in US District Court against 14 companies and 17 individuals the FTC says have been engaged in a similar scareware scam, with a twist: the initial approach was decidedly low-tech, via a telephone call.

By cold-calling victims and claiming to be from companies like Microsoft, Dell, and McAfee, the scammers directed users to a harmless error log on their computers and told them it was a sign of a serious infection, the FTC said. The alleged scammers went on to charge anywhere between $49 and $450 to “fix” the consumers’ computers.

The callers claimed that routine warning or error messages in  system log files indicated a grave malware infection, which they, by lucky chance, could fix.  (The means are different, but the basic idea of the scam is preserved.)   The FTC says that one company went so far as to purchase Google search ads, which showed up in searches for terms like “McAfee” or “anti-virus support”.

As with most of the original scareware scams, these callers apparently only wanted the money paid for their non-existent “services”, but the potential for something considerably worse is still there.

The basic lesson here is very simple, and applies to areas other than technology, too: don’t trust unsolicited phone calls, or E-mails, or …

Update Sunday, October 7, 16:30 EDT

Steve Bellovin, the FTC’s new Chief Technologist, has an excellent article on this case posted at the Tech@FTC blog.

Securing the Hardware, Revisited

August 1, 2012

I’ve written here a couple of times about the risk of an adversary inserting malicious code into a PC’s  firmware, either in the BIOS, or elsewhere, perhaps on a network interface card.    The risk is not just from intentionally malicious code; there is also a potential problem with parts that may not be genuine, but cheap “knock-offs”, similar to a fake Louis Vuitton bag.  One of the reasons that these risks are considered so serious is that, from the usual computer security viewpoint, which is focused (for good reason!) on software, these potential exploits are effectively invisible, appearing as part of the machine’s hardware.  The issue has generated considerable concern with respect to defense systems, since so many contemporary weapons systems are dependent on electronic components; and, as in civilian life, many of these components are manufactured in other countries, especially China.  On the other hand, there has also been some skepticism expressed about the practicality of such an attack.

The Technology Review has a report on a presentation at last week’s  Black Hat US security conference, in which a French hacker, Jonathan Brossard, demonstrated a practical attack of this kind that would work on a wide range of current PCs.

At the Black Hat security conference in Las Vegas last week, Jonathan Brossard demonstrated software that can be hidden deep inside the hardware of a PC, creating a back door that would allow secret remote access over the Internet. His secret entrance can’t even be closed by switching a PC’s hard disk or reinstalling its operating system.

The exploit, which is called Rakshasa, is quite cleverly designed to minimize its chances of being detected.  The modified firmware on the PC contains just enough code to allow the whole package to function.

When a PC with Rakshasa installed is switched on, the software looks for an Internet connection to fetch the small amount of code it needs to compromise the computer.

This means that, if there is no Internet connection available, the exploit will not function; but it also makes Rakshasa stealthy, and essentially invisible to most malware detection methods, since it does not leave any “footprints” in the file system, or in the disk’s boot record.  It also means that new attack functions can be added to the retrieved malware.

The code Rakshasa fetches is used to disable a series of security controls that limit what changes low-level code can make to the high-level operating system and memory of a computer. Then, as the computer’s operating system is booted up, Rakshasa uses the powers it has granted itself to inject code into key parts of the operating system. Such code can be used to disable user controls, or steal passwords and other data to send back to the person controlling Rakshasa.

The response of at least one manufacturer was, sadly and predictably, an attempt at spin control.

The attack can work on PCs with any kind of processor, but many of the standard features of PC motherboards originated with Intel. Suzy Greenberg, a spokeswoman for that company, said in an e-mail that Brossard’s paper was “largely theoretical,” since it did not specify how an attacker would insert Rakshasa onto a system, and did not take into account that many new BIOS chips have cryptographically verified code that would prevent it from working.

The response also, to a considerable extent, misses the point.  A supplier of chips or firmware, or a PC manufacturer, could easily install something like Rakshasa.  I presume that the “new BIOS chips” Ms. Greenberg refers to are those implementing the UEFI Secure Boot feature; however, as I’ve discussed, it’s likely that most new PCs with this feature will have some means of bypassing it, so that alternative software can be installed.  It is not difficult to imagine a “social engineering” attack that would persuade users to install a firmware “upgrade”.  Once that is done, the likelihood that any current anti-malware tools would discover anything amiss is very low.

Software flaws are difficult to find, and hardware defects are even more elusive.  In general, lower-level bugs (closer to the “bare metal”) are harder to find, a point Ken Thompson made many years ago in his address, “Reflections on Trusting Trust”, given at his acceptance of the Turing Award from the Association for Computing Machinery in 1983:

No amount of source-level verification or scrutiny will protect you from using untrusted code. In demonstrating the possibility of this kind of attack, I picked on the C compiler. I could have picked on any program-handling program such as an assembler, a loader, or even hardware microcode. As the level of program gets lower, these bugs will be harder and harder to detect. A well installed microcode bug will be almost impossible to detect.

It is certainly true that putting together a successful attack of this kind is a considerably more challenging project than constructing an MS Word macro virus.   But, as Mr. Brossard has demonstrated, it is by no means impossible, especially for an attacker with significant resources who sees a large potential payoff.

Banking on Linux, Revisited

July 15, 2012

Back in October, 2009, I posted a couple of notes here about the idea of using a PC booted from a Linux Live CD for online banking (or other sensitive functions) to improve security.    A Live CD is a bootable CD-ROM that contains a complete Linux distribution  (the OS itself plus applications); the system is booted and run entirely from the CD, and the PC’s hard disk is not touched.  Since everything runs from the CD, any malware on the PC’s hard disk will not have a chance to run.   The topic had been discussed by Brian Krebs in a post on his “Security Fix” blog at the Washington Post, following a series of investigative reports on online banking fraud against small- and medium-sized businesses (SMBs).  I was glad to see and endorse his recommendation.

Krebs is now writing an independent blog, Krebs on Security (there’s always a link in the sidebar), and has continued to investigate banking fraud.   He has once again published a post suggesting the Live CD approach, and I still think it is a very sensible way to go for SMBs.  My ideal solution, as I’ve written before, would be a dedicated machine with a hardened OS and no applications software except what is required for the banking function.  But economics matter, and the Live CD solution gives many of the same benefits at significantly lower cost — and it costs almost nothing to try.  The article includes a step-by-step guide to getting and using a Live CD, using the Puppy Linux distribution; it is a “lightweight” distro, which should run well on any PC that can run a reasonably current version of Windows.
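
One small supplement to any such guide: before burning a downloaded ISO image to CD, it is worth verifying its checksum against the value published on the distribution’s download page, so you know the image is intact and untampered. A minimal sketch (the filename below is a placeholder):

```python
import hashlib

def sha256_hex(data: bytes) -> str:
    """SHA-256 digest of in-memory data, as a hex string."""
    return hashlib.sha256(data).hexdigest()

def sha256_of_file(path: str, chunk_size: int = 1 << 20) -> str:
    """SHA-256 digest of a file, read in chunks so that even a
    large ISO image does not need to fit in memory at once."""
    h = hashlib.sha256()
    with open(path, "rb") as f:
        for chunk in iter(lambda: f.read(chunk_size), b""):
            h.update(chunk)
    return h.hexdigest()
```

Compare `sha256_of_file("puppy.iso")` against the published value (using whichever hash the distribution actually publishes); any mismatch means the download is corrupt or has been altered.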

As Krebs points out in his article, the point is not that malware does not exist for other systems, but that the vast majority of it is targeted at Windows PCs.

All of the malware used in the attacks I’ve written about is built for Windows. That’s not to say bad guys behind these online heists won’t get around to targeting Mac OS X, or users of other operating systems.  Right now, there are no indications that they are doing this.

If you are going for a swim, and you can choose between two beaches, one of which is infested with sharks and the other is not, does it really matter that much why the sharks prefer the first beach?
