Twitter Problems, Revisited

Following the Twitter problems of last week (which also affected Facebook and LiveJournal to a lesser degree), there were many explanations circulating around the Internet.  One of the most popular of these claimed that the attack on Twitter was the result of an orchestrated E-mail campaign, mounted to silence a pro-Georgian (and anti-Russian) blogger.  Brian Krebs, in his “Security Fix” blog at the Washington Post, has a fair summary of this case.

CNet and CNN place blame for the incident on an elaborate, politically motivated vendetta timed to coincide with the one year anniversary of the Russia-Georgia war, a brief but costly skirmish in August 2008 accompanied by cyber attacks on Georgian government Web sites.

The explanation has the sort of innate appeal that any conspiracy theory has, but some of the details of what happened didn’t quite seem to fit the pattern.

Now Technology Review, on the “Unsafe Bits” blog, has a report with some more forensic detail.  This casts significant doubt on the attack-via-spam hypothesis.  The pattern of traffic seen by Internet monitoring organizations most closely resembled a typical Distributed Denial of Service (DDoS) attack, mounted by a “botnet” of hijacked PCs.

“The attack traffic is not an e-mail click but SYN floods and UDP floods going to Twitter’s space,” says Craig Labovitz, chief scientist for Arbor. “It’s stuff that does not look like it was directly tied to a click-through or e-mail attacks.”  [Arbor is Arbor Networks, a network services provider]

If one looks at the traffic inbound to Twitter, the level drops sharply at the time that the attack began.  This is not really consistent with a large number of people clicking on links in E-mails, but it is consistent with the DDoS scenario, which essentially floods the server with a huge number of small messages.
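The distinction described above can be made concrete.  As a purely illustrative sketch (not Arbor's actual methodology, and with made-up flow data), one way to tell a SYN flood from legitimate click-through traffic is to look at whether TCP handshakes ever complete: a real browser click sends a SYN and then acknowledges the server's response, while a flood sends a torrent of half-open SYNs that are never followed by ACKs.

```python
# Hypothetical illustration of distinguishing a SYN-flood pattern from
# ordinary click-through traffic.  The flow records are invented for
# the example; real detection works on live packet captures.

from collections import Counter

def classify_traffic(packets):
    """Classify a stream of TCP flag strings as 'syn_flood' or 'normal'.

    A legitimate browser click completes the TCP handshake, so SYNs are
    roughly matched by ACKs; a SYN flood sends half-open SYNs that are
    never acknowledged.
    """
    counts = Counter(packets)
    syns = counts.get("SYN", 0)
    acks = counts.get("ACK", 0)
    if syns == 0:
        return "normal"
    # If fewer than half the SYNs are ever followed by an ACK,
    # flag the stream as flood-like.
    return "syn_flood" if acks / syns < 0.5 else "normal"

# Ordinary click-through traffic: handshakes complete.
clicks = ["SYN", "ACK", "ACK", "FIN"] * 100
# Flood traffic: a torrent of SYNs with almost no completions.
flood = ["SYN"] * 1000 + ["ACK"] * 10

print(classify_traffic(clicks))  # normal
print(classify_traffic(flood))   # syn_flood
```

This is why monitoring firms could rule out the e-mail theory so quickly: the two traffic shapes look nothing alike at the packet level, regardless of how many people received the spam.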

One lesson that is perhaps of value from this incident is that having a capacity “cushion” is a Good Thing.  One reason that Facebook was affected less than Twitter is that Facebook has a lot more bandwidth available:

The company has a much more robust infrastructure consisting of an Akamai-like distributed hosting service and crunches a lot more bandwidth than Twitter, says Labovitz. While Twitter typically maxes out at 300 Mbps, Facebook accounts for 0.5 percent of the bandwidth of the entire Internet, he says.

On the other hand, being somewhat vulnerable to this type of attack is part of the price that fast-growing sites like Twitter pay for their rapid growth.  The New Scientist has an interesting essay on the observation that, despite all the chatter and hand-wringing over the outage by the terminally self-centered, most users didn’t seem to be all that bothered by the service disruptions.  (I can’t speak for Twitter, but most people on Facebook appeared hardly to notice.)  Their suggestion is that users are tolerant partly because the services have been a bit flaky in the past, and partly because the outage is itself part of the shared social experience.
