Social networks of various kinds have been one of the great Internet success stories of the last few years. Facebook, probably the premier example, boasts of having more than 350 million members worldwide. Even a broken-down old crock like me has a Facebook page, and it has been fun to reconnect with some old friends I haven’t seen in many years. It also never ceases to amaze some of my younger relatives and acquaintances that such a querulous and decrepit old fossil is able to use this modern technology.
One of the fundamental ideas of social networks is that each person has a list or network of “friends”, who may get special privileges in terms of what information they can see. On Facebook, for example, if I post a note, I can specify who can see it: only friends, friends and networks (e.g., fellow college alumni), or everyone. Because not everything has to be made available to the world at large, people are in general more willing to be open in what they say than they might be on a public site.
There is a potential downside to this, though. Probably most of you have read about people getting in trouble at work or school for posting ill-considered comments or photos on Facebook or other sites. But even if one exercises some common sense in that respect, there are other potential risks, stemming from the perception that the environment is somehow more protected and benign than the wild, wild Internet.
The European PC security company BitDefender has recently published a report of a small experiment illustrating that the perception of greater safety may lead people to lower their guard. The entire study included several different tests, but I am for the present going to focus on just one. (You can download the summary of the whole study [PDF].) They were interested in testing the extent to which Facebook users would accept a “friend request” from someone they did not know. (In order for A and B to become Facebook friends, A must ask to add B as a friend, and B must agree to this.) This is another instance of the issue of trust, which I have written about in a slightly different context.
To start the experiment, the BitDefender folks created three fictitious Facebook profiles. To identify them, I have added labels in square brackets [like this]:
BitDefender researchers created three honeypot profiles – one without any picture and holding few details [LOW], another with an image and limited information [MED] and a third with a large amount of data and photos [HIGH].
They then had each of these “individuals” join a few general-interest Facebook groups to gain some exposure, and then began attempting to set up friend connections. Within about an hour, LOW had 23 friends, MED had 47, and HIGH had 53. This is interesting for a couple of reasons. First, it is clear that, for at least some Facebook users, the definition of a friend is fairly elastic. (Remember that these were entirely fictitious “people”.) Second, and in a way more interesting, providing more (bogus) information about the “person” seemed to make the friend request more credible. In a way, this makes some sense: research has shown that people knowledgeable about, say, investments are more likely to fall for a well-crafted investment scam than the average person.
If you are a fan of one or more of the social games (like “Farmville” on Facebook), you might want to consider that, when the phony profiles joined several games, the numbers of acceptances of friend requests went up to 85 for LOW, 105 for MED, and 111 for HIGH.
The final stage of this experiment was perhaps the scariest, at least for a security person. The researchers posted a link (URL) on the phony profile pages, shortened using a common URL shortening service (bit.ly). They found that 24% of their new “friends” clicked on the link, despite not knowing the person who posted it (because he/she/it doesn’t exist) or where it went. It is hard to think up a slicker way of distributing malware.
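One modest defense against this trick is to expand a shortened link before clicking it. A shortener like bit.ly works by answering with an HTTP redirect (301/302) whose Location header names the real destination, so a HEAD request reveals where the link goes without actually fetching the page. Here is a minimal sketch of the idea in Python, using only the standard library; the function name and the example URL are my own illustrations, not anything from the BitDefender study:

```python
# Sketch: peek at the destination of a shortened URL without visiting it.
import http.client
from urllib.parse import urlparse

def expand_short_url(short_url: str) -> str:
    """Return the Location header a URL shortener redirects to."""
    parts = urlparse(short_url)
    conn = http.client.HTTPSConnection(parts.netloc, timeout=10)
    # HEAD asks for headers only, so the destination page is never fetched
    conn.request("HEAD", parts.path or "/")
    response = conn.getresponse()
    conn.close()
    # A 3xx response carries the real target in its Location header
    return response.getheader("Location", "(no redirect)")

# Hypothetical usage:
#   expand_short_url("https://bit.ly/abc123")
# would return the long URL the link actually points to.
```

Several shorteners also offer a preview feature (bit.ly, for instance, shows the destination if you append a “+” to the link in a browser), which accomplishes the same thing with no code at all.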