Open Data Centers, Another Look

Yesterday, I wrote about Facebook’s decision to “open source” the design of its new data center in Oregon, with the Open Compute Project. Ars Technica also has an article on the announcement, examining why Facebook may have taken this step.  The author, Jon Stokes, argues that, contrary to one’s instincts, hardware is an essential factor in Facebook’s business, and in its rival Google’s.

Google is essentially a maker of very capital-intensive, full-custom, warehouse-scale computers—a “hardware company,” if you will. It monetizes those datacenters by keeping as many users as possible connected to them, and by serving ads to those users. To make this strategy work, it has to hire lots of software people, who can write the Internet-scale apps (search, mainly) that keep users connected and viewing ads.

Keeping users connected longer obviously provides the opportunity to display more ads; and getting those users connected in more different ways (e.g., for messaging, photo sharing, and event scheduling) provides more information on them, which facilitates the sale of more ads.  (Although it may be natural for us as users to think of ourselves as Facebook’s customers, we are not; we are, essentially, Facebook’s product.  The advertisers are its customers.)

Facebook has an advantage in this game, because people tend to share more information with their Facebook friends than they might with the world at large.  But Google has much deeper pockets than Facebook, and size is an advantage in a capital-intensive business like building data centers.  So, Stokes argues, Facebook’s move is a clever attempt to reduce Google’s advantage, by using open collaboration to leverage its own efforts.

It’s an intriguing idea, and I suspect a largely correct one.
