A Tastier Selection of Cookies

June 24, 2013

I’ve written here a number of times about browser cookies: small pieces of text that your browser stores on your system at the request of a Web server.  The cookie’s contents can be returned to the server with a later HTTP request.  The cookie mechanism was developed to provide a means of maintaining state information in the otherwise stateless HTTP protocol, which deals only in page requests and responses; the concept of logging in to a Web site, or having a session, is grafted onto the underlying protocol via the cookie mechanism.  This can lead to some security problems; it also impacts users’ privacy, since cookies are very widely used to track users as they browse to different sites.  (For example, those ubiquitous “Like” buttons from Facebook can set tracking cookies in your browser, even if you never visit the Facebook site itself.)
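A minimal sketch of that exchange, using only Python’s standard library (the cookie name, value, and attributes here are invented for illustration):

```python
# Server side: the HTTP response carries a Set-Cookie header labeling the
# session; browser side: a later request returns the stored value in a
# Cookie header.  SimpleCookie handles the header syntax for both ends.
from http.cookies import SimpleCookie

# The server attaches a session identifier to its response.
response_header = SimpleCookie()
response_header["session_id"] = "abc123"
response_header["session_id"]["path"] = "/"
print(response_header.output())  # -> Set-Cookie: session_id=abc123; Path=/

# On the next request, the browser sends the stored value back.
request_cookie = SimpleCookie()
request_cookie.load("session_id=abc123")
print(request_cookie["session_id"].value)  # -> abc123
```

That round trip, repeated on every request, is all the “session” there is; everything else (logins, shopping carts, tracking) is built on top of it.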

For some time now, several browsers have offered an option to disallow so-called “third party” cookies: those set by sites other than the one you are visiting.  And Apple’s Safari browser, as well as development builds of Mozilla’s Firefox, have included heuristics to accomplish something similar.  These are helpful, but imperfect, since the definition of a “third party” is not as precise as one might like.  For example, XYZ.COM might have a companion domain for videos, XYZ-MEDIA.COM; logically, both are part of the same site, but simple heuristics won’t see things that way.
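To see the blind spot, here is a deliberately naive version of such a heuristic; real browsers do something more sophisticated (consulting the Public Suffix List, for instance), so this two-label comparison is only a sketch of the idea:

```python
# Compare the "registrable domain" of the page being visited with the
# domain setting the cookie; if they differ, call the cookie third-party.
def registrable_domain(host: str) -> str:
    # Keep just the last two labels, e.g. "www.xyz.com" -> "xyz.com".
    return ".".join(host.lower().split(".")[-2:])

def is_third_party(page_host: str, cookie_host: str) -> bool:
    return registrable_domain(page_host) != registrable_domain(cookie_host)

# A genuine third party is caught...
print(is_third_party("www.xyz.com", "tracker.adnetwork.com"))  # True
# ...but the companion media domain is flagged too, even though both
# belong to the same organization.
print(is_third_party("www.xyz.com", "cdn.xyz-media.com"))      # True
```

The second result is exactly the error the Cookie Clearinghouse’s lists are meant to correct.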

Now, according to an article at Ars Technica, Stanford University, along with the browser makers Mozilla and Opera Software, is establishing a Cookie Clearinghouse to serve as a sort of central cookie rating agency.

The Cookie Clearinghouse intends to provide lists of cookies that should be blocked or accepted. Still in the planning stages, it will be designed to work in concert with the heuristics found in Firefox in order to correct the errors that the algorithmic approach makes.

The Clearinghouse is just being set up, so it’s too early to say how much it will help.  Similar cooperative efforts have helped reduce the impact of spam, phishing, and malicious Web sites, though, so we should hope for the best.


Social Network Risks

May 17, 2013

Yesterday’s Washington Post has a report on the concerns raised by parents and child advocates about the use of social networks by pre-teenagers.  The story focuses on the photo-sharing service Instagram, but the general issues are relevant to other sites as well: is the site collecting the personal information of susceptible children, and does it do enough to protect them from predators?

The Instagram service is an offshoot of Facebook, the social networking giant, which has about 1 billion users.  The company’s policy requires users to be at least 13 in order to open an account, but the Instagram site does not even ask the user’s age when (s)he signs up.  (The main Facebook site does ask for the user’s real name and age; however, the effectiveness of this is questionable, since there is no way to check the user’s answers.)  The result is that many children under 13 have set up Instagram accounts.

There is some reason for concern about this; looking at the site (or at Facebook, for that matter, where I have an account) shows that many users post a great deal of what might be regarded as fairly personal information.  Most readers are probably familiar with news stories of people whose employment or other prospects have been damaged by indiscreet posting and photos on Facebook and other social sites.  Even if one grants that adults have a right to behave like complete idiots if they wish to, it seems reasonable that children, who lack both mature judgment (such as it is) and experience, deserve some protection.

However, people need to realize that, outside the realm of science fiction, this is not a problem that has a technological solution.  Even if it were possible to develop a peripheral device that would automagically detect a person’s age, it really wouldn’t solve the problem; all the server on the other end of the transaction can do is verify that the bit pattern it receives indicates the user is 13 (or 18, or 21).  Were such a device to be developed, I would not expect it to be long before some enterprising teenage hacker produced a “spoofing” device.

Facebook and other social-media sites have said that authenticating age is difficult, even with technology. A Consumer Reports survey in 2011 estimated that 7 million preteens are on Facebook.

It’s not difficult; it’s effectively impossible.

The other thing that all of us, kids and adults, need to remember is how businesses like Facebook work.  It may seem, as you sit perusing your friends’ postings, that you are a customer of the service.  But the customers are actually the advertisers who buy “space” on the service, which has every incentive to provide those customers with as much personal information as possible, in order to make ad targeting more effective, thereby supporting higher ad rates.  When you use Facebook, or other similar “free” services, you are not the customer — you are the product.


The Internet Surveillance State

March 30, 2013

One of the hardy perennial issues that comes up in discussions of our ever more wired (and wireless) lives is personal privacy.  Technology in general has invalidated some traditional assumptions about privacy.  For example, at the time the US Constitution was being written, I doubt that anyone worried much about the possibility of having a private conversation.  All anyone had to do, in an age before electronic eavesdropping, parabolic microphones, and the like, was to go indoors and shut the door, or walk to the center of a large open space.  It might be somewhat more difficult to conceal the fact that some conversation took place, but it was relatively easy to ensure that the actual words spoken were private.

Similarly, before the advent of computer databases, getting together a comprehensive set of information about an individual took a good deal of work.  Even records that were legally public (e.g., wills, land records) took some effort to obtain, since they existed only on paper, probably moldering away in some obscure courthouse annex.  Even if you collected a bunch of this data, putting it all together was a job in itself.

People whose attitudes date back to those days often say something like, “I have nothing to hide; why should I care?”  They are often surprised at the amount of personal information that can be assembled via technical means.  The development of the Internet and network connectivity in general has made it easy to access enormous amounts of data, and to categorize and correlate it automatically.  Even supposedly “anonymized” data is not all that secure.

Bruce Schneier, security guru and author of several excellent books on security (including Applied Cryptography, Secrets and Lies, Beyond Fear, and his latest book, Liars and Outliers), as well as the Schneier on Security blog, has posted an excellent, thought-provoking article on “Our Internet Surveillance State”.  He begins the article, which appeared originally on the CNN site, with “three data points”: the identification of some Chinese military hackers, the identification (and subsequent arrest) of Hector Monsegur, a leader of the LulzSec hacker movement, and the disclosure of the affair between Paula Broadwell and former CIA Director Gen. David Petraeus.  All three of these incidents were the direct result of Internet surveillance.

Schneier’s basic thesis is that we have arrived at a situation where Internet-based surveillance is nearly ubiquitous and almost impossible to evade.

This is ubiquitous surveillance: All of us being watched, all the time, and that data being stored forever. This is what a surveillance state looks like, and it’s efficient beyond the wildest dreams of George Orwell.

Many people are aware that their Internet activity can be tracked by using browser cookies, and I’ve written about the possibility of identifying individuals by the characteristics of their Web browser.  And many sites that people routinely visit have links, not always obvious, to other sites.  Those Facebook “Like” buttons that you see everywhere load data and scripts from Facebook’s servers, and provide a mechanism to track you — you don’t even need to click on the button.  There are many methods by which you can be watched, and it is practically impossible to avoid them all, all of the time.
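The fingerprinting idea is simple enough to sketch: a handful of attributes that your browser exposes anyway, taken together, form a string that is rare across users.  The attribute values below are invented examples, not a real profile, and a real fingerprinting script would collect many more:

```python
# Combine a few browser-visible attributes into a single stable
# identifier.  No cookie is stored; the identifier can be recomputed on
# every visit from what the browser reports about itself.
import hashlib

attributes = [
    "Mozilla/5.0 (X11; Linux x86_64; rv:12.0) Gecko/20100101 Firefox/12.0",
    "en-US,en;q=0.5",            # Accept-Language header
    "1920x1080x24",              # screen geometry
    "America/New_York",          # time zone
    "DejaVu Sans, Liberation Serif",  # a sample of installed fonts
]
fingerprint = hashlib.sha256("|".join(attributes).encode()).hexdigest()
print(fingerprint[:16])  # a tracking key that needs no cookie at all
```

Clearing your cookies does nothing against this kind of tracking, which is part of why avoiding surveillance “all of the time” is so hard.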

If you forget even once to enable your protections, or click on the wrong link, or type the wrong thing, and you’ve permanently attached your name to whatever anonymous service you’re using. Monsegur slipped up once, and the FBI got him. If the director of the CIA can’t maintain his privacy on the Internet, we’ve got no hope.

As Schneier also points out, this is not a problem that is likely to be solved by market forces.  None of the collectors and users of surveillance data has any incentive, economic or otherwise, to change things.

Governments are happy to use the data corporations collect — occasionally demanding that they collect more and save it longer — to spy on us. And corporations are happy to buy data from governments.

Although there are some organizations, such as the Electronic Privacy Information Center [EPIC] and the Electronic Frontier Foundation [EFF], that try to increase awareness of privacy issues, there is no well-organized constituency for privacy.  The result of all this, as Schneier says, is an Internet without privacy.


National Strategy for Information Sharing and Safeguarding

December 23, 2012

The US government, through its various intelligence operations, collects an enormous amount of information; especially recently, private organizations and businesses have assembled some pretty impressive collections of their own (think Google or Facebook).  These collections have the potential to tell us a lot about the emergence of threats to either physical or information systems assets.  The problem has always been that it is much more challenging to sift through and analyze the information than it is to collect it in the first place.  I’m sure most readers have heard the narrative about all the warning signs of the 9/11 attacks; they were not hard to find after the fact, but no one “connected the dots” beforehand.  Furthermore, even among government agencies, information was not always shared, either because of inter-agency politics, or just inertia.  Information exchange between government and private-sector entities was even more problematic.

In the last decade, there have been efforts made to improve this situation.   As part of that overall effort, this past week the White House released a new National Strategy for Information Sharing and Safeguarding [PDF here, 24 pp. total].  As the title implies, the Strategy recognizes that information must be shared, but in a controlled way; sharing everything with everyone risks giving too much information to potential adversaries.  Citizens’ rights and privacy concerns also need to be taken into account.

Our national security relies on our ability to share the right information, with the right people, at the right time. As the world becomes an increasingly networked place, addressing the challenges to national security—foreign and domestic—requires sustained collaboration and responsible information sharing.

It also recognizes that many entities, not all of them governmental, are involved:

The imperative to secure and protect the American public is a partnership shared at all levels including Federal, state, local, tribal, and territorial. Partnerships and collaboration must occur within and among intelligence, defense, diplomatic, homeland security, law enforcement, and private sector communities.

To the extent that this reflects a shift toward looking at this problem as a whole, and not just at individual pieces, this is a welcome development.

I have had a quick preliminary read of the Strategy; although it is, like many similar documents from large organizations, over-supplied with jargon, its basic thrust seems sound.  The approach is based on three basic principles:

  • Information is a National Asset
  • Information Sharing and Safeguarding Requires Shared Risk Management
  • Information Informs Decisionmaking

The last is perhaps the most important, in the context of recent history.  Information in a form that cannot be used to inform decisions is not worth much.

The Strategy identifies five broad goals going forward:

  • Drive Collective Action through Collaboration and Accountability
  • Improve Information Discovery and Access through Common Standards
  • Optimize Mission Effectiveness through Shared Services and Interoperability
  • Strengthen Information Safeguarding through Structural Reform, Policy, and Technical Solutions
  • Protect Privacy, Civil Rights, and Civil Liberties through Consistency and Compliance

Each of these is discussed, and further broken down to more specifics.  The Strategy then goes on to identify objectives for action going forward.

As is often the case with security policy issues, the devil is very much in the details of implementation; but it is encouraging that a reasonable framework has been developed as a starting point.


Ubuntu 12.04 Reviewed

May 30, 2012

Late last month, I posted a note here about the release of Ubuntu Linux 12.04, “Precise Pangolin”.  Ars Technica now has a review article that covers changes in this release in considerably more detail.  (Note that this covers the base Ubuntu distribution released by Canonical Ltd, and does not necessarily apply to other variants, such as Kubuntu or Xubuntu.)

The review concentrates on the desktop and user interface portions of the system, which is sensible, since they provide the major differentiating factors between versions.  (Because the architecture of the Linux OS and desktop is much more modular than that of, say, Microsoft Windows, it is generally possible to run almost any Linux application on any contemporary Linux system.)  Since 2010, the Ubuntu project has been working on a new desktop environment, called Unity, that attempts to deliver a more consistent user interface across applications and devices, including mobile devices.

The review is, I think, well done, and the author, Ryan Paul, has done a good job of explaining how the Unity interface differs from some more familiar interfaces.  Having had a couple of weeks to try the new release, I agree with his basic conclusion that the interface is significantly improved from earlier versions, but still has a few rough edges.  This release of Unity has a new feature, called Heads Up Display [HUD], which is intended to save time for users who prefer to keep their hands on the keyboard.

Let’s suppose that I am running Firefox on Ubuntu (as, in fact, I am at the moment), and I want to see the HTML source for the page I am looking at.  The conventional way to do this, as of Firefox 12.0, is to pull down the “Tools” menu, then select “Web Developer”, and then “Page Source”.  If HUD is enabled, I can just start typing “page source”, and HUD will show me all the menu items that match.  A nice side benefit of this is that I don’t have to remember which sub-menu contains the function I want.
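At its core, this kind of menu search is just matching typed text against a flat list of menu paths.  Here is a toy version (the menu entries are examples, and the real HUD does fuzzier matching than a plain substring test):

```python
# Flatten the menu tree into full paths, then return every entry whose
# path contains the typed query, ignoring case.
menu = [
    "Tools > Web Developer > Page Source",
    "Tools > Web Developer > Web Console",
    "File > Save Page As",
]

def hud_search(query: str, items: list[str]) -> list[str]:
    q = query.lower()
    return [item for item in items if q in item.lower()]

print(hud_search("page source", menu))
# -> ['Tools > Web Developer > Page Source']
```

Because the search runs over full paths, you get the “don’t have to remember the sub-menu” benefit for free.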

The new version also includes new privacy controls for Zeitgeist, the framework that logs your usage of applications, files, and so on, allowing you to limit the extent of that tracking.  Although the initial implementation is not perfect, it is a step forward.  It regulates the information gathered by Unity itself, but does not affect any logging or other data capture done by individual applications.

The whole review article is worth a read if you use or are interested in Ubuntu, or even if you’re just interested in interface design.


A Black Box for your Car, Revisited

May 16, 2012

About a year ago, I posted an article here about the possibility that the US government, specifically the National Highway Traffic Safety Administration [NHTSA], might soon require all new automobiles sold in the US to be equipped with event data recorders [EDRs], the so-called “black boxes”.  Similar devices have been used for years on commercial aircraft, and the data obtained from them has been of great value in understanding crashes and improving safety.  As I mentioned in that earlier post, many newer cars already have electronic data recorders of some sort.  These have proved to be useful in accident investigations, although how they work and what they record has been, until quite recently, pretty much up to the automaker.

A recent article at Wired provides an update on what’s happening in this area.  At present, although there is no mandate to equip cars with EDRs, the NHTSA’s regulations do specify that, if an EDR is installed, it must collect a specified set of data.

Since 2006, NHTSA has required that consumers be informed when an automaker has installed an EDR in a vehicle, although the disclosure is typically buried in the car’s owner’s manual. More recently, NHTSA mandated that vehicles manufactured after September 1, 2011 that include the devices must record a minimum of 13 data points in a standardized format.

Congress is now considering legislation that would require EDRs to be installed on new vehicles.

[US Senate] Bill 1813 that mandates EDRs for every car sold in the U.S. starting with the model year 2015 has already passed the Senate. The U.S. House of Representatives is expected to pass a version of the bill with slightly different language.

There are privacy concerns about the collection of this data.  At present, the proposed rules say that EDRs can only collect data related to vehicle safety; but it is not hard to imagine that some security agencies might think that recording GPS coordinates would be a useful little enhancement.  Then there is the question of who owns the collected data.  The pending legislation says that the data belongs to the owner or lessee of the vehicle, which is good.  But it’s likely that the devil is in the details, and the ownership rules will need to be carefully drawn.  For example, the article points out that, if a car is “totalled” in a crash, it typically becomes the property of the insurance company.  The company might, in some cases, be tempted to declare the car a total loss in order to own the EDR data for use in legal proceedings.

There is a strong case, on safety improvement grounds, for collecting this kind of data.  We just need to do our best to ensure that it is not misused.


Prof. Felten’s New Blog

April 30, 2012

In discussing technology policy and security issues here, I’ve frequently mentioned Professor Ed Felten of Princeton, director of the University’s Center for Information Technology Policy [CITP], who is serving a term as the Chief Technologist of the US Federal Trade Commission [FTC].  I’ve just discovered that, in his new capacity, he has recently started a blog, Tech@FTC; he describes the goal this way:

Our goal is to talk about technology in a way that is sophisticated enough to be interesting to hard-core techies, but straightforward enough to be accessible to the broad public that knows something about technology but doesn’t qualify as expert.  Every post will have an identified author–usually me–who will speak to you in the first person.  We’ll aim for a conversational, common-sense tone–and if we fall short, I’m sure you’ll let us know in the comments.

I have not yet had a chance to read all the posts that are there, even though there are not that many yet, but I am sure that they will be worth reading.  I’ll mention two recent posts that I have read.  The first explains why “hashing” data, such as Social Security numbers, does not make the data anonymous.  The second discusses why pseudonyms aren’t anonymous, either.  (I’ve previously written a couple of times about the difficulty of “anonymizing” data.)
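The hashing point is easy to demonstrate: because there are at most a billion nine-digit SSNs, an attacker can simply hash every candidate and invert the “anonymized” value by table lookup.  This sketch of the idea uses a four-digit toy space so it runs instantly, and the target value is made up:

```python
# Hash every possible input, then recover the original from its hash.
# A small input space makes hashing reversible in practice, whatever
# hash function is used.
import hashlib

def h(value: str) -> str:
    return hashlib.sha256(value.encode()).hexdigest()

target = h("1234")  # the "anonymized" record an attacker obtains

# Precompute hashes for the entire input space and build a reverse map.
lookup = {h(f"{n:04d}"): f"{n:04d}" for n in range(10_000)}
print(lookup[target])  # recovers the original: 1234
```

Scaling the dictionary up from 10^4 to 10^9 entries is an afternoon’s work on commodity hardware, which is Prof. Felten’s point: hashing a small, structured input space hides nothing.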

I’m looking forward to reading the rest of what’s there, and to Prof. Felten’s future posts.  At the time his appointment to the FTC post was announced, I was pleased that someone so well-qualified had been chosen.  Reading the new blog reinforces that feeling.

