Happy 2011!

December 31, 2010

I want to wish all of you a very happy, healthy, and successful New Year!


Clueless Celebrities, 2010

December 31, 2010

Once again this year, the British charity Sense about Science has published its review [PDF] of especially silly statements made by various celebrities about science during the past year.   (My post about the 2009 review is here.)   As always, there are some striking new contributions to the world’s stock of lore.

Each year at Sense About Science we review the odd science claims people in the public eye have made – about diets, cancer, magnets, radiation and more – sent in to us by scientists and members of the public. Many of these claims promote theories, therapies and campaigns that make no scientific sense.

This year, for example, the pop singer Sarah Harding told a magazine interviewer that she always sprinkled charcoal over her food, in order to “absorb all the bad, damaging stuff in the body.”  (She should have met my great aunt — most of her meals included charcoal already, especially her famous “black bottom” fried eggs.)   Another dietary tip comes from Naomi Campbell, Ashton Kutcher, and Demi Moore, who periodically eat nothing but maple syrup, lemons, and pepper for up to two weeks at a time; this is supposed to “cleanse” the body.  David Beckham, Robert de Niro, and Kate Middleton all use a “Power Balance” silicone bracelet embedded with a hologram, which supposedly improves strength, energy, and flexibility.

All of this is not terribly surprising, considering the average level of scientific ignorance among the population.  Many of these claims are for diets and gadgets sold by the same sort of appeal to pseudo-science that is employed to sell “magical” hi-fi equipment.  Of course these folks are entitled to their own preferences, silly though they may be; but the rest of us should remember that having a pretty face does not mean that anything of very great value is going on inside.


Intercepting GSM Phone Calls

December 30, 2010

One year ago today, I posted a note about a presentation at the 26th Chaos Communications Congress on breaking the encryption used to protect GSM cellular telephony.  (Actually, the GSM system, which is used for approximately 80% of cellular phones world-wide, has two available encryption methods, called  A5/1 and A5/2, for encrypting the voice data stream.  The report concerned cracking the A5/1 cipher, the stronger of the two methods.  The A5/2 method had been known to be vulnerable for some time.)  As is customary after such reports, the practicality of the attack was dismissed by the cellular providers.

The ability to decrypt GSM’s 64-bit A5/1 encryption was demonstrated last year at this same event … However, network operators then responded that the difficulty of finding a specific phone, and of picking the correct encrypted radio signal out of the air, made the theoretical decryption danger minimal at best.

By now, one might think that the providers would realize that some people would regard their response as a challenge, but they seem to be slow learners.

Ars Technica has an article about a presentation at this year’s 27th Chaos Communications Congress, being held in Berlin, that showed that GSM conversations could be intercepted and decrypted, using no equipment other than cheap GSM phones and a laptop computer.

Speaking at the Chaos Computer Club (CCC) Congress in Berlin on Tuesday, a pair of researchers demonstrated a start-to-finish means of eavesdropping on encrypted GSM cellphone calls and text messages, using only four sub-$15 telephones as network “sniffers,” a laptop computer, and a variety of open source software.

There are several steps in the interception process, which exploits weaknesses in the GSM protocol and its implementation.

  1. Because of the way that GSM networks exchange subscriber location information, the location of a particular phone can be narrowed down to a relatively small geographic area: a city, or a particular region.
  2. Once the location of the target phone is narrowed down, the attacker can “war drive” around the area, sending out “broken” SMS (text) messages, and listening for the system’s responses.  This allows the attacker to deduce the network ID assigned to the target phone.
  3. Once the target has been located and identified, the data stream can be intercepted and decrypted.  The decryption is facilitated by the use, by many operators, of background status messages to and from the target phone; these messages, although encrypted, have sizable blocks of “known plaintext”, providing what is known as a crib.  (A very similar bit of carelessness by the Germans in World War II — having standardized message headers for, among other things, weather reports — helped Alan Turing and the cryptographers at Bletchley Park to crack the Germans’ Enigma messages.)

The researchers replaced the firmware of their inexpensive phones with new code that captured and stored the raw data being transmitted by the cellular network.  The network operators also made the hackers’ job easier by frequently re-using random session keys, meaning that it was often possible to retrieve the unencrypted data from several consecutive conversations.  The researchers did find that one encryption key was very well protected: the key used to encrypt communications between the provider’s system and the SIM card in the phone, which is used for billing information.  At least this indicates that the providers can get it right when their pocketbooks are on the line.
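The crib and key-reuse weaknesses described above can be sketched with a toy stream cipher.  (This is an illustration only: the keystream, messages, and cipher here are invented stand-ins, not real A5/1, which generates its keystream from a 64-bit session key using three clock-controlled shift registers.)

```python
def xor_bytes(a, b):
    """XOR two equal-length byte strings."""
    return bytes(x ^ y for x, y in zip(a, b))

# Stand-in keystream; a real GSM attack must first recover this
# from the A5/1 cipher via rainbow tables or similar.
keystream = bytes([0x3A, 0x91, 0x5C, 0xE2] * 4)

# Crib attack: a predictable status message gives the attacker known
# plaintext, and XORing it against its ciphertext exposes the keystream.
crib = b"STATUS OK 000000"
ciphertext = xor_bytes(crib, keystream)
recovered_keystream = xor_bytes(ciphertext, crib)
assert recovered_keystream == keystream

# Key reuse: if the operator reuses the session key, the recovered
# keystream decrypts later traffic outright.
later_ciphertext = xor_bytes(b"secret call data", keystream)
print(xor_bytes(later_ciphertext, recovered_keystream))  # b'secret call data'
```

The same XOR relationship is what makes known plaintext and key reuse so damaging for any stream cipher, not just A5/1.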

The researchers provided a live demonstration in which they sniffed the message headers used by a phone, cracked the session keys, and recorded the ensuing conversation, all within a few minutes.

As Bruce Schneier has frequently reminded us, attacks on cryptographic systems only get better over time.  You would be well advised to take your provider’s claims about the security of your conversations with several shovelfuls of salt.

Update, Saturday, 1 January 2011, 10:50 EST

The ThreatPost blog at Kaspersky Labs also has a brief article on this.


Rarer Rare Earths

December 29, 2010

The Global Business section of the New York Times had an article yesterday about an announcement, by China’s commerce ministry, of a significant reduction in export quotas for rare earths, beginning in the first half of 2011.

The reduction in quotas for the early months of 2011 — a 35 percent drop in tonnage from the first half of this year — is the latest in a series of measures by Beijing that has gradually curtailed much of the world’s supply of rare earths.

The rare earths are the elements that lie in the periodic table from Lanthanum [La], atomic number 57, to Lutetium [Lu], atomic number 71, plus Scandium [Sc] and Yttrium [Y].  They are used in a wide range of technological products.  Depending on which element we select, China currently mines 95-99% of the world’s supply.  The announcement has heightened concerns about the long-term supply of these elements, a concern also stimulated by China’s temporary threat to stop rare earth exports to Japan in September, over a relatively trivial maritime dispute.
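The set just described is small enough to enumerate; the symbols and atomic numbers below are standard, and the grouping is only for reference.

```python
# The lanthanide series: atomic numbers 57 (La) through 71 (Lu).
LANTHANIDES = {
    57: "La", 58: "Ce", 59: "Pr", 60: "Nd", 61: "Pm", 62: "Sm",
    63: "Eu", 64: "Gd", 65: "Tb", 66: "Dy", 67: "Ho", 68: "Er",
    69: "Tm", 70: "Yb", 71: "Lu",
}

# The full rare earth set adds scandium and yttrium, which occur in
# the same ore deposits and have similar chemistry.
RARE_EARTHS = dict(LANTHANIDES)
RARE_EARTHS.update({21: "Sc", 39: "Y"})

print(len(RARE_EARTHS))  # 17
```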

Despite what their name suggests, the rare earth elements are not particularly rare in the Earth’s crust. The problem is that they are usually widely dispersed, and mineral deposits with enough concentration for effective and economic mining are rare.  Up until the 1980s, most of the world’s supply came from the United States and South Africa, with lesser amounts coming from Brazil and India.  The shift to China as the primary supplier was motivated primarily by low prices.  China’s production costs are lower, in part, because the rare earths are mined together with iron; but the low costs are also probably due, in part, to lax-to-nonexistent Chinese environmental standards — rare earth production is a dirty business.

Although China is now saying that its quota reductions are motivated by environmental concern, that represents a relatively recent change.

Until a few months ago, Chinese officials said that their rare earth policies were aimed at forcing foreign industries to move high-tech factories to China so as to have access to Chinese rare earths. But as trade frictions have increased, they have given greater emphasis to environmental concerns.

This is probably related to the prohibition of export quotas and tariffs by the World Trade Organization [WTO], except on the grounds of environmental protection or national security.

Because of the uncertainty about Chinese supplies, there are moves underway to restart production at mines in the US, Canada, Australia, and South Africa, which were closed because of the Chinese dominance of the industry.  Technology Review reports that a mine in Mountain Pass, California, once the world’s leading producer of rare earths, is to be re-opened by Molycorp Minerals in 2011.

By 2012, the revamped U.S. mine is expected to produce around 20,000 tons of rare earth materials per year. Molycorp plans to use new processing techniques that it claims are more environmentally friendly and less expensive than conventional methods.

Although there have been various alarmist news articles about a strategic shortage of rare earths, it seems likely that the main risk is a short-to-medium term run-up in prices, until alternative sources are fully operational.  But the situation does point up the risk of relying on a single supplier for critical materials, especially when that supplier may be pursuing an agenda not entirely based on economics.  No one with sense is suggesting that we should adopt some sort of Soviet-style industrial policy; but completely ignoring the long-term risks to supplies of essential raw materials is not very sensible, either.


Antibiotics in Agriculture

December 28, 2010

I’ve written several times before (here, for example) about the problem of bacteria resistant to antibiotics.  From a present-day perspective, it is hard to realize that infections, before the advent of antibiotics, were a very serious health threat, and not infrequently fatal.  So the possibility of running out of effective antibiotic therapies is something to take seriously.

It is obvious that any use of antibiotics creates evolutionary selection pressure favoring resistant organisms.  (Sir Alexander Fleming, the discoverer of penicillin, saw evidence of developing resistance in his own work.)   The use of antibiotics to treat infections in people has clearly been a huge positive for human health.  We don’t want to lose that; we do want to discourage doctors from prescribing antibiotics indiscriminately (or just to make the patient feel that something is being done), and we want to encourage people to finish any course of antibiotics that is prescribed.

There are other potential sources of antibiotic resistance, though.  There is evidence that some other commonly used chemicals (such as triclosan) may also create selection pressure for antibiotic resistance; and large quantities of antibiotics are used in agriculture, not just to treat sick animals, but as a regular addition to food or water, in order to promote faster growth.  The food industry has always dismissed the idea that this could be a meaningful source of antibiotic resistance in humans; it has been difficult to get information on the quantities of antibiotics actually used.

A recent article at Wired provides some fresh evidence, based on data from a post at the Center for a Livable Future [CLF] blog.   (The blog is written by staff at the Center for a Livable Future at Johns Hopkins Bloomberg School of Public Health; posts on the blog are personal views, not official positions of the School.)   The FDA recently released information on the total amount of various antibiotics sold in the US for veterinary or agricultural purposes.  The CLF researchers have managed to come up with figures for the total amount of antibiotics used in humans, from a separate study.  The combined results show that 80% of the antibiotics used in the US are used on farm animals.   For example, about 10.2 million pounds of tetracycline antibiotics are given to animals each year, more than the amount of all antibiotics given to people.

The food industry, of course, denies that this is a problem, partly on the basis that some of the antibiotics it uses are not exactly the same as the ones used in human medicine (though many of them are identical).  This is really a red herring; as the CLF authors wrote:

LivableFutureBlog readers might recall an October blog post in which Dr. Ellen Silbergeld, professor of environmental health sciences at the Johns Hopkins Bloomberg School of Public Health, warned that, “Bacteria respond to chemical structures, not brand names, and resistance to one member of a pharmaceutical class results in cross resistance to all other members of the same class.”

I have not heard anyone object to the therapeutic use of antibiotics for animals that are actually sick; the issue is the routine addition of antibiotics to the animals’ diets to promote faster (= cheaper) growth.   Since this practice is of economic value to the producers, they are virtually certain to resist any attempt to curtail it, but  the scientific evidence suggests that to continue with business as usual would be almost breathtakingly stupid.



Analyzing Facebook Updates

December 27, 2010

The Network World site has an interesting article about an analysis of Facebook status updates, performed by the Facebook data team.  (For those readers who are not familiar with Facebook, a status update is a short message that your Facebook “friends” see on their “News Feed” page.  Typical messages might be updates on a vacation, reports from a concert or sporting event, or just an interesting link to an article or YouTube video.)   The analysis first categorized word usage in a large sample of updates posted by US English speakers; the categorization attempts to identify characteristics of the message such as subject matter and emotional content (positive or negative).

Facebook analyzed the word usage for about one million status updates from its US English speakers. The social network said all identifiable information was stripped from the status updates before they were analyzed  …

Once the updates were anonymized, the words were organized into 68 different word categories based on the Linguistic Inquiry and Word Count (LIWC)–a text analysis software program created by James W. Pennebaker, Roger J. Booth, and Martha E. Francis.   Some examples of word categories used in the study include past tense verbs, prepositions, religion and positive feelings.

The analysis measured correlations between the LIWC measures and user characteristics such as age, message length, time of day, and number of responses.  The results were interesting, although not especially surprising in most cases.  The team found that:

  • Younger users tend to express more negative emotions, talk about themselves more, and cuss more than older users.
  • Older users post longer messages, on average, and talk more about other people.
  • People post more positive updates in the morning; the proportion of negative updates rises as the day progresses.
  • Longer updates attract more responses.
  • Negative updates generate more responses than positive ones.  The positive updates get more “likes”; the negative ones get more comments.   (Flame wars have been popular on the Internet for a long time.)
  • Groups of friends tend to post updates with similar characteristics.  (Now there’s a stunner!)

There is nothing here that is terribly surprising, but it is interesting to see that at least some of one’s intuitions seem to be confirmed.
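The word-categorization step behind these results can be sketched in miniature.  (The categories and word lists below are invented for illustration; the real LIWC dictionary defines 68 much larger categories.)

```python
# Toy LIWC-style categorizer: count how many words of a status update
# fall into each word category.
CATEGORIES = {
    "positive_emotion": {"happy", "great", "love", "awesome"},
    "negative_emotion": {"sad", "hate", "awful", "angry"},
    "self_reference": {"i", "me", "my", "myself"},
}

def categorize(update):
    """Return per-category word counts for one status update."""
    words = [w.strip(".,!?") for w in update.lower().split()]
    return {name: sum(w in vocab for w in words)
            for name, vocab in CATEGORIES.items()}

counts = categorize("I love my awesome new phone!")
print(counts)  # {'positive_emotion': 2, 'negative_emotion': 0, 'self_reference': 2}
```

Computing a table of such counts over a million anonymized updates, then correlating it with user age, posting time, and response counts, is essentially the shape of the Facebook team's analysis.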

The Facebook data team has posted a report on the Facebook blog; it has more details of the results.


A New Desalination Technique

December 26, 2010

In many arid regions of the world, removing the salt from sea water – desalination – is an important, although expensive, source of fresh water.  Currently, there are two techniques that are widely used.  The first is thermal evaporation or distillation: sea water is heated until it evaporates, leaving the salt behind; the water vapor is then condensed to get fresh water.  The second is called reverse osmosis: water is forced, under pressure, through a membrane that allows water molecules to pass, but blocks the salt ions.   Both of these processes are fairly energy-intensive.

An article in Technology Review reports that OASYS Water, based in Cambridge, Massachusetts, is working on a new desalination technology, which it expects to bring to market next year.  The OASYS method uses ordinary (forward) osmosis and heat to produce fresh water.  The technique is clever, based on the fact that osmosis through a permeable membrane will tend toward an equilibrium with equal concentrations of dissolved substances on both sides of the membrane.

On one side of a membrane is sea water; on the other is a solution containing high concentrations of carbon dioxide and ammonia. Water naturally moves toward this more concentrated “draw” solution, and the membrane blocks salt and other impurities as it does so.

Because the membrane blocks the movement of salt, the solution on the “draw” side ends up with high concentrations of carbon dioxide and ammonia, but no salt.  Since both ammonia and carbon dioxide, although soluble in water, are gases at normal temperatures, they can then be driven from the solution by heat.  (The solubility of gases in water decreases with increasing temperature.  That is why fizzy drinks, which contain dissolved carbon dioxide, keep better in the refrigerator than at room temperature.)  The gases are then captured and reused.

OASYS claims that this process can produce fresh water more economically than current technology, and might use waste heat from a power plant, since the water/gas solution does not need to be heated as much as sea water in a plant using a thermal process.

The system uses far less energy than thermal desalination because the draw solution has to be heated only to 40 to 50°C, McGinnis [OASYS cofounder and chief technology officer Robert McGinnis] says, whereas thermal systems heat water to 70 to 100 °C. These low temperatures can be achieved using waste heat from power plants.
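A back-of-the-envelope comparison of the heating required follows from the temperatures McGinnis quotes, assuming an ambient starting temperature of 20 °C and the midpoint of each range.  (This is a first-order sketch only: it ignores latent heat, heat recovery, and pumping energy.)

```python
# Rough comparison of sensible-heat requirements per kilogram of water.
c_water = 4.186  # specific heat of water, kJ/(kg*K)

ambient = 20.0               # assumed starting temperature, deg C
forward_osmosis_temp = 45.0  # midpoint of the quoted 40-50 C range
thermal_temp = 85.0          # midpoint of the quoted 70-100 C range

q_fo = c_water * (forward_osmosis_temp - ambient)  # kJ/kg for OASYS draw solution
q_thermal = c_water * (thermal_temp - ambient)     # kJ/kg for thermal distillation

print(round(q_fo / q_thermal, 2))  # 0.38
```

Even this rough ratio suggests roughly 60 percent less heating per kilogram, consistent with the claim that low-grade waste heat from a power plant could drive the process.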

Although the savings from the new technology could be substantial, they are unlikely to be enough to make the process economically viable for water production in agriculture (which is by far the single largest use of fresh water).  Still, it might be a welcome development for coastal cities in arid climates.

