Google Expands Unsafe Site Reporting

June 26, 2013

For some time now, Google has published its Transparency Report, which gives a high-level overview of how Google relates to events in the world at large. The report has historically included several sections:

  • Traffic to Google services (current and historical, highlighting disruptions)
  • Information removal requests (by copyright holders and governments)
  • Requests for user data (by governments)

This information can be interesting in light of current events. For example, at this writing, Google reports ongoing disruptions to their services in Pakistan, China, Morocco, Tajikistan, Turkey, and Iran.

Now, according to a post on the Official Google Blog, a new section will be added to the Transparency Report. The new section is an outgrowth of work begun in 2006 with Google’s Safe Browsing Initiative.

So today we’re launching a new section on our Transparency Report that will shed more light on the sources of malware and phishing attacks. You can now learn how many people see Safe Browsing warnings each week, where malicious sites are hosted around the world, how quickly websites become reinfected after their owners clean malware from their sites, and other tidbits we’ve surfaced.

Google says that they flag about 10,000 sites per day for potentially malicious content. Many of these are legitimate sites that have been compromised in some way. The “Safe Browsing” section of the Transparency Report shows the number of unsafe sites detected per week, as well as the average time required to fix them.
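
For readers who like to tinker: Google also lets you query its Safe Browsing data programmatically. The sketch below uses the current (v4) “Lookup” API, which postdates this post; treat the endpoint, the payload shape, and the placeholder API key as assumptions to verify against Google’s own documentation.

    import requests

    API_KEY = "YOUR_API_KEY"  # placeholder; get a real key from Google

    payload = {
        "client": {"clientId": "example-client", "clientVersion": "1.0"},
        "threatInfo": {
            "threatTypes": ["MALWARE", "SOCIAL_ENGINEERING"],
            "platformTypes": ["ANY_PLATFORM"],
            "threatEntryTypes": ["URL"],
            "threatEntries": [{"url": "http://example.com/"}],
        },
    }

    # An empty response body ({}) means the URL matched no known threats.
    resp = requests.post(
        "https://safebrowsing.googleapis.com/v4/threatMatches:find",
        params={"key": API_KEY},
        json=payload,
    )
    print(resp.json())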

Google, because its search engine “crawlers” cover so much of the Web, has an overview of what’s out there that few organizations can match. I think they are to be commended for making this information available.


Password Angst Again, Part 1

June 1, 2013

I’ve written here on several occasions about the problems of passwords as a user authentication mechanism, especially as the sole authentication mechanism.  When confronted with the necessity of choosing a password, many users make eminently lousy choices.  Examination of some actual password lists that have been hacked reveals a large number of passwords like ‘password’, ‘123456’, ‘qwerty123’, and the like.  Many thousands of words have been written in attempts to teach users how to choose “good” passwords.  Many Web sites and enterprises have password policies that impose requirements on users’ passwords; for example, “must contain a number”, or “must have both lower- and upper-case letters”.  It is not clear that these help all that much; if they are too cumbersome, they are likely to be circumvented.

The Official Google Blog has a recent post on the topic of password security, which contains (mostly) some very good advice.  The main suggestions are:

  • Use a different password for each important service.  This is a very important point.  Many people use the same password for multiple Web sites or services.  This is a Real Bad Idea for important accounts: online banking or shopping, sites that have sensitive data, or E-mail.  It’s really essential that your E-mail account(s) be secure; the “I forgot my password” recovery for most sites includes sending you a new access token by E-mail.  If the Bad Guys can get all your E-mail, you’re hosed.
  • Make your password hard to guess.  Don’t pick obviously dumb things like ‘password’.  Avoid ordinary words, family names, common phrases, and anything else that would be easy to guess.  The best choice is a long, truly random character string (see the sketch following this list).  Giving specific rules or patterns for passwords is not a good idea; paradoxically, these can have the effect of making the search for passwords easier.  (I’ll have more to say about this in a follow-up post.)
  • Keep your password somewhere safe.  Often, people are exhorted never to write their passwords down.  This is one of those suggestions that can actually be counter-productive.  If having to remember a large number of passwords is too difficult, the user is likely to re-use passwords for multiple accounts, or to choose simple, easily guessed passwords.  It’s better to use good passwords, and write them down, as long as you keep in mind Mark Twain’s advice: “Put all your eggs in one basket, and watch that basket!”†  You could, for example, write passwords on a piece of paper you keep in your wallet.  Most of us have some practice in keeping pieces of paper in our wallets secure.
  • Set a recovery option.  If you can, provide a recovery option other than the so-called “secret questions” that many sites use.  A non-Internet option, like a cell phone number, is good because it’s less likely to be compromised by a computer hack.
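
Since I mention random character strings above, here is a minimal sketch of how to generate one, in Python, using the standard library’s secrets module (which, unlike the ordinary random module, is intended for security-sensitive use).  The length and character pool are arbitrary choices, not a recommendation:

    import secrets
    import string

    # Pool of candidate characters: letters, digits, and punctuation.
    ALPHABET = string.ascii_letters + string.digits + string.punctuation

    def random_password(length=16):
        """Return a password chosen uniformly at random from ALPHABET."""
        return "".join(secrets.choice(ALPHABET) for _ in range(length))

    print(random_password())  # e.g. f;R2v!Qz8#mW)q4L

Of course, a password like that is exactly the sort you will need to write down or keep somewhere safe, as suggested above.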

All of this is good advice (and Google has been giving it for some time). There is also a short video included in Google’s blog post that gives advice on choosing a good password, but part of that advice is a bit troubling. The video starts off by saying, very sensibly, that one should not choose dictionary words or keyboard sequences (like ‘qwerty’).  It goes on to recommend starting with a phrase (in itself, OK), and then modifying it with special characters.  The example used starts with the phrase:

ilovesandwiches

and turns it into:

ilove$@nDwich3s

The problem is that this sort of substitution (sometimes called ‘l33t speak’) is very well known.  There are password cracking tools that try substitutions like this automatically.  More generally, you don’t want to introduce any kind of predictable pattern into your password choices, even if it’s one that you, personally, have not used before.  Hackers can analyze those lists of leaked passwords, too.  Avoiding predictability is harder than it sounds; I’ll talk more about that in a follow-up post.
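
A quick illustration of why such substitutions buy you so little: the number of variants of a word under a typical substitution table is tiny, so a cracking tool can simply try them all.  Here is a toy sketch in Python (the substitution table is a made-up, abbreviated one; real cracking tools use far larger rule sets):

    from itertools import product

    # A few common substitutions; real tools use much bigger tables.
    SUBS = {"a": "a@", "e": "e3", "i": "i1", "o": "o0", "s": "s$5"}

    def leet_variants(word):
        """Yield every variant of `word` under the substitution table."""
        pools = [SUBS.get(c, c) for c in word.lower()]
        for combo in product(*pools):
            yield "".join(combo)

    print(len(list(leet_variants("sandwiches"))))  # prints 72

Seventy-two variants is nothing; even with a much larger table, the multiplier is trivial compared to the work of guessing a genuinely random password.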

——

† from Pudd’nhead Wilson, Chapter 15


Technology v. Terrorism

May 30, 2013

Yesterday evening, PBS broadcast an episode of its NOVA science program, “Manhunt: The Boston Bombers”, reporting on the role of technology in tracking down those responsible for the Boston Marathon bombings.  I had seen a note about it in our local paper, and was curious to see what sort of program it would be.

I’m glad to say that, on the whole, I thought the reporting was realistic and level-headed.  It avoided scare-mongering, and took a fairly pragmatic view of what technology can and cannot do, at least at present.  It was organized chronologically, with commentary on forensic technologies interwoven with the narrative.

The first segment dealt with evidence from the explosions themselves. The white smoke that resulted, easily visible in TV accounts, indicated a gunpowder type of explosive, a suggestion reinforced by the relatively small number of shattered windows.   One forensic expert, Dr. Van Romero of the New Mexico Institute of Mining and Technology [NM Tech], quickly suspected a home-made bomb built in a pressure cooker.  Although devices of this type have been rare in the US, they have been relatively common in other parts of the world.  Building a similar bomb, and detonating it on a test range at NM Tech, produced effects very similar to the Boston bombs.  A pressure cooker lid was subsequently found on the roof of a building close to one of the explosion sites.

Because the attacks took place very close to the finish line of the Boston Marathon, and because that location on Boylston Street has a large number of businesses, the authorities were confident that they would have plenty of still and video images to help identify the bombers.  After examination of this evidence, they came up with images of two primary suspects, who at that point could not be identified.  At first, the police and FBI decided not to release the images to the public; they feared doing so might prompt the suspects to flee, and hoped that facial recognition technology might allow them to be identified.  Alas, as I’ve observed before, these techniques work much better — almost like magic — in TV shows like CSI or NCIS than they do in the real world.  The images, from security videos, were of low quality, and nearly useless with current recognition technology.  Ultimately, the authorities decided to make the images public, hoping that someone would recognize the suspects.

As things turned out, it didn’t matter that much.  The two suspects apparently decided to flee, and car-jacked an SUV.  The owner of the SUV managed to escape, and raised the alarm.  In a subsequent gun battle with police, one suspect died (he was apparently run over by his associate in the SUV); the other was wounded but escaped.  He abandoned the SUV a short distance away, and hid in a boat stored in a backyard in Watertown MA.  He was subsequently discovered because an alert local citizen noticed blood stains on the boat’s cover; the suspect’s location was pinpointed using infrared cameras mounted on a police helicopter.

As I mentioned earlier, I think the program provided a good and reasonably balanced overview of what these technologies can do, and what they can’t.  Magic is still in short supply, but technology can help pull together the relevant evidence.

More work is still being done to improve these techniques.  A group at the CyLab Biometrics Center at Carnegie Mellon University, headed by Prof. Marios Savvides, is working on a new approach to facial recognition from low-quality images.  They give their system a database containing a large number of facial images; each individual has associated images ranging from very high to low resolution.  Using information inferred from this data, and guided by human identification of facial “landmarks” (such as the eyebrows or nose) in the target image, the system attempts to find the most likely matches.  The technique is still at a very early stage, but does show some promise.  There’s more detail in an article at Ars Technica.
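
I don’t know the details of the CyLab system, but the core idea of matching a low-quality probe image against a gallery at the probe’s own resolution can be illustrated with a toy sketch.  The Python/NumPy code below uses random arrays as stand-ins for face images, and omits the landmark guidance entirely; it just downsamples each gallery image and ranks identities by normalized correlation:

    import numpy as np

    def downsample(img, factor):
        """Average-pool a grayscale image by an integer factor."""
        h, w = img.shape
        img = img[:h - h % factor, :w - w % factor]
        return img.reshape(h // factor, factor,
                           w // factor, factor).mean(axis=(1, 3))

    def ncc(a, b):
        """Normalized cross-correlation between two same-sized images."""
        a = (a - a.mean()) / a.std()
        b = (b - b.mean()) / b.std()
        return float((a * b).mean())

    rng = np.random.default_rng(0)
    # Hypothetical high-resolution gallery (random stand-ins for faces).
    gallery = {name: rng.random((64, 64)) for name in ("alice", "bob", "carol")}

    # A noisy, low-resolution "security camera" image of one subject.
    probe = downsample(gallery["bob"], 4) + 0.05 * rng.standard_normal((16, 16))

    # Compare at the probe's resolution and rank the candidates.
    scores = {name: ncc(downsample(img, 4), probe)
              for name, img in gallery.items()}
    print(max(scores, key=scores.get))  # prints: bob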

As the NOVA program also pointed out, the growth in and improvement of all this surveillance technology has some potentially troubling implications for personal privacy.  Setting up a portion of the infrastructure for a police state is probably not good civic hygiene; but that’s a subject for a future post.


Social Network Risks

May 17, 2013

Yesterday’s Washington Post has a report on the concerns raised by parents and child advocates about the use of social networks by pre-teenagers.  The story focuses on the photo-sharing service Instagram, but the general issues are relevant to other sites as well: is the site collecting the personal information of susceptible children, and does it do enough to protect them from miscellaneous predators?

The Instagram service is an offshoot of Facebook, the social networking giant, which has about 1 billion users.  The company’s policy requires users to be at least 13 in order to open an account, but the Instagram site does not even ask the user’s age when (s)he signs up.  (The main Facebook site does ask for the user’s real name and age; however, the effectiveness of this is questionable, since there is no way to check the user’s answers.)  The result is that many children under 13 have set up Instagram accounts.

There is some reason for concern about this; looking at the site (or at Facebook, for that matter, where I have an account) shows that many users post a great deal of what might be regarded as fairly personal information.  Most readers are probably familiar with news stories of people whose employment or other prospects have been damaged by indiscreet posting and photos on Facebook and other social sites.  Even if one grants that adults have a right to behave like complete idiots if they wish to, it seems reasonable that children, who lack both mature judgment (such as it is) and experience, deserve some protection.

However, people need to realize that, outside the realm of science fiction, this is not a problem that has a technological solution.  Even if it were possible to develop a peripheral device that would automagically detect a person’s age, it really wouldn’t solve the problem; all the server on the other end of the transaction can do is verify that the bit pattern it receives indicates the user is 13 (or 18, or 21).  Were such a device to be developed, I would not expect it to be long before some enterprising teenage hacker produced a “spoofing” device.

Facebook and other social-media sites have said that authenticating age is difficult, even with technology. A Consumer Reports survey in 2011 estimated that 7 million preteens are on Facebook.

It’s not difficult; it’s effectively impossible.
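
The underlying point is worth making concrete: a server never sees a person, only bytes.  A hypothetical signup endpoint (the URL and field names below are invented for illustration) has no way to tell an honest client from one that simply lies:

    import requests

    # The server can only check the bits it receives.  Nothing stops a
    # client from claiming whatever age it likes in the request:
    requests.post(
        "https://example.com/signup",          # hypothetical endpoint
        json={"name": "Some Kid", "age": 21},  # trivially forged field
    )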

The other thing that all of us, kids and adults, need to remember is how businesses like Facebook work.  It may seem, as you sit perusing your friends’ postings, that you are a customer of the service.  But the customers are actually the advertisers who buy “space” on the service; the service has every incentive to provide those customers with as much personal information as possible, in order to make ad targeting more effective, thereby supporting higher ad rates.  When you use Facebook, or other similar “free” services, you are not the customer — you are the product.


OUCH on Passwords

May 13, 2013

One of the “Useful Links” in the sidebar here is to the SANS Internet Storm Center [ISC].  The site, staffed by volunteer “handlers”, a group of highly skilled and experienced security professionals and systems/network administrators, is a very valuable source of the latest security news.  It is, however, a site aimed at IT professionals, and tends, understandably, to be fairly technical, and to assume a fair amount of basic IT knowledge for starters.

However, to their credit, the folks at SANS have not neglected the ordinary user.  SANS has had, for a couple of years now, an initiative called Securing the Human, which attempts to address security policy issues from the users’ perspective.  (In the interests of honesty, from personal experience, I am bound to say that this is probably not entirely from altruistic motives — better educated users are, on the whole, less likely to make terminally stupid mistakes.)  The Securing the Human initiative has also involved publishing a newsletter called OUCH!, which is oriented toward end users.

The latest issue of OUCH! has a short (three-page) article on good password practice [PDF].  It has some good, common sense advice that will help you use passwords securely.  If you are a systems admin person, you might want to consider giving copies to your users.

I’d just make one final suggestion: using a password manager, such as Bruce Schneier’s Password Safe, can be a big help in managing your passwords, and using them well.


Dotty Security Arguments

May 6, 2013

Bruce Schneier has an excellent opinion piece over at CNN, in which he discusses the criticism directed at security and intelligence agencies for not discovering and stopping the Boston Marathon bombing.  The litany of complaint is familiar enough:

The FBI and the CIA are being criticized for not keeping better track of Tamerlan Tsarnaev in the months before the Boston Marathon bombings. How could they have ignored such a dangerous person? How do we reform the intelligence community to ensure this kind of failure doesn’t happen again?

Just as after the atrocities of 9/11, the agencies are being criticized for failing to “connect the dots” and uncover the plot.

Now, there have been specific incidents in connection with terrorism that one might think would raise some suspicions (for example, the 9/11 hijackers who took flying lessons but didn’t want to learn how to land the plane).  But for the most part, as Schneier points out, “connecting the dots” is a bad and misleading metaphor.

Connecting the dots in a coloring book is easy and fun. They’re right there on the page, and they’re all numbered. … It’s so simple that 5-year-olds can do it.

After an incident has occurred, we can look back through the history of the people and things involved, and attempt to piece together a pattern.  But that is possible only because we know what happened.  Before the fact, real life does not number the dots or even necessarily make them visible.  The problem, generally, is not that we have insufficient information.  It’s that we don’t know which tiny fraction of the information that we do have is relevant, and not just noise.

In hindsight, we know who the bad guys are. Before the fact, there are an enormous number of potential bad guys.

I heard a news report a few days ago saying that Tamerlan Tsarnaev, the elder of the two brothers, had taken part in a monitored telephone call in which the term ‘jihad’ was mentioned.  Lumping together telephone calls (including those by reporters, of course), radio and TV broadcasts, and other forms of electronic communication, how many times per day would you guess that word might be mentioned?
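
Even a crude back-of-envelope calculation shows how bad the signal-to-noise ratio is.  The numbers below are entirely invented, for illustration only, but the orders of magnitude make the point:

    # Invented, order-of-magnitude numbers -- for illustration only.
    monitored_items_per_day = 1_000_000_000  # calls, posts, broadcasts, ...
    keyword_mention_rate = 1e-5              # fraction mentioning the keyword
    genuinely_relevant = 1                   # mentions tied to a real plot

    flagged = monitored_items_per_day * keyword_mention_rate
    print(f"{flagged:,.0f} flags per day; {genuinely_relevant} of them matters")
    # -> 10,000 flags per day; the real signal is 0.01% of the flags

Ten thousand dots a day, nearly all of them noise; nobody can connect those before the fact.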

As Schneier goes on to point out, this is an example of a psychological trait called hindsight bias, first explained by Daniel Kahneman and Amos Tversky:

Since what actually happened is so obvious once it happens, we overestimate how obvious it was before it happened.

We actually misremember what we once thought, believing that we knew all along that what happened would happen.

Telling stories is one of the primary ways that people attempt to make sense of the world around them.  The stories we construct tend to be a good deal tidier and more logical than the real world.  There is a strong tendency to adjust the “facts” to fit the story, rather than the other way around.  (This is one reason that science is hard.)

You can observe this tendency regularly in reporting on financial markets.  For example, whatever the stock market did yesterday — go up or down, a little or a lot — you can be sure that some pundits will have an eminently plausible explanation for why that happened.  You are very unlikely to hear anything like, “Well, the S&P 500 went down 500 points, and we don’t have a clue why it happened.”  (I have been saying for years that I will start paying attention to these stories when they are published before the fact.)

It is certainly sensible, after any incident, to look back to see if clues were missed, and to attempt to learn from any mistakes.  But it is neither sensible nor realistic to expect prevention of any and all criminal or terrorist activity.

Update Tuesday, May 7, 17:05 EDT

Schneier’s essay has now also been posted at his Schneier on Security blog.

