A New Desalination Technique

December 26, 2010

In many arid regions of the world, removing the salt from sea water – desalination – is an important, although expensive, source of fresh water.  Currently, there are two techniques that are widely used.  The first is thermal evaporation or distillation: sea water is heated until it evaporates, leaving the salt behind; the water vapor is then condensed to get fresh water.  The second is called reverse osmosis: water is forced, under pressure, through a membrane that allows water molecules to pass, but blocks the salt ions.  Both of these processes are fairly energy-intensive.

An article in Technology Review reports that OASYS Water, based in Cambridge, Massachusetts, is working on a new desalination technology, which it expects to bring to market next year.  The OASYS method uses ordinary (forward) osmosis and heat to produce fresh water.  The technique is clever, based on the fact that osmosis through a semi-permeable membrane tends toward an equilibrium with equal concentrations of dissolved substances on both sides of the membrane.

On one side of a membrane is sea water; on the other is a solution containing high concentrations of carbon dioxide and ammonia. Water naturally moves toward this more concentrated “draw” solution, and the membrane blocks salt and other impurities as it does so.

Since the membrane blocks movement of salt, the result is that solution on the “draw” side has high concentrations of carbon dioxide and ammonia, but no salt.  Since both ammonia and carbon dioxide, although soluble in water, are gases at normal temperatures, they can then be driven from the solution by heat.  (The solubility of gases in water decreases with increasing temperature.  That is why fizzy drinks, which contain dissolved carbon dioxide, keep better in the refrigerator than at room temperature.)  The gases are then captured and reused.
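The driving force here can be estimated with the ideal-solution van 't Hoff relation, π = iMRT.  A quick sketch (the concentrations below are illustrative assumptions, not OASYS figures) shows why water flows from sea water toward a much more concentrated draw solution:

```python
# Rough osmotic-pressure comparison via the van 't Hoff relation
# (pi = i * M * R * T).  Concentrations are illustrative assumptions,
# not figures from OASYS.
R = 0.08314  # gas constant, L·bar/(mol·K)
T = 298.15   # temperature, K (25 °C)

def osmotic_pressure_bar(molarity, vant_hoff_factor):
    """Ideal-solution osmotic pressure in bar."""
    return vant_hoff_factor * molarity * R * T

# Sea water: roughly 0.6 M NaCl, dissociating into two ions (i ≈ 2).
seawater = osmotic_pressure_bar(0.6, 2)

# Draw solution: assume a far more concentrated CO2/NH3 mixture,
# e.g. 3 M with i ≈ 2 (purely illustrative numbers).
draw = osmotic_pressure_bar(3.0, 2)

print(f"sea water ~{seawater:.0f} bar, draw solution ~{draw:.0f} bar")
# Water crosses the membrane toward the side with the higher osmotic
# pressure -- from the sea water into the draw solution.
```

Because the draw side's osmotic pressure is several times that of sea water, the water moves "for free", with no applied pressure; the energy is spent later, driving off the dissolved gases.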

OASYS claims that this process can produce fresh water more economically than current technology, and might use waste heat from a power plant, since the water/gas solution does not need to be heated as much as sea water in a plant using a thermal process.

The system uses far less energy than thermal desalination because the draw solution has to be heated only to 40 to 50°C, McGinnis [OASYS cofounder and chief technology officer Robert McGinnis] says, whereas thermal systems heat water to 70 to 100 °C. These low temperatures can be achieved using waste heat from power plants.

Although the savings from the new technology could be substantial, they are unlikely to be enough to make the process economically viable for water production in agriculture (which is by far the single largest use of fresh water).  Still, it might be a welcome development for coastal cities in arid climates.

Building the Analytical Engine

December 25, 2010

Most of us, when we think about the early development of computers, probably think of early electronic machines like the ENIAC, built originally at the University of Pennsylvania for the US Army Ballistics Research Laboratory (but also used by John von Neumann for the hydrogen bomb project at Los Alamos), or perhaps the ACE computer, built at the National Physical Laboratory in the UK from a design by Alan Turing.

Arguably, though, the first design for a stored program computer was developed about 100 years earlier, by the British mathematician, Charles Babbage.  The device Babbage designed, the Analytical Engine, was never built, but it seems clear from the design that it would have been the world’s first Turing-complete computer (meaning it could compute any computable function), incorporating features commonplace in computers today, including sequential control, branching, and looping.  Only a trial model, on display at the Science Museum in London, was built in Babbage’s lifetime.   Lord Byron’s daughter Ada, Lady Lovelace, for whom the Ada programming language is named, wrote a program for the Analytical Engine to compute Bernoulli numbers, and was possibly the world’s first programmer.  Babbage’s son Henry built a portion of the machine, the Mill (essentially, the CPU), in 1910; that machine is also on display at the Science Museum.
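Lovelace's program computed the Bernoulli numbers via the Engine's mechanical operations; the same quantities can be computed today with the standard recurrence Σⱼ C(m+1, j)·Bⱼ = 0.  This modern sketch is not her actual method, just the same computation in a few lines:

```python
from fractions import Fraction
from math import comb

def bernoulli(m):
    """Bernoulli number B_m via the standard recurrence
    sum_{j=0}^{m} C(m+1, j) * B_j = 0, with the convention B_1 = -1/2
    (the numbers Lovelace's Analytical Engine program computed)."""
    B = [Fraction(0)] * (m + 1)
    B[0] = Fraction(1)
    for n in range(1, m + 1):
        B[n] = -Fraction(1, n + 1) * sum(
            comb(n + 1, j) * B[j] for j in range(n))
    return B[m]

print(bernoulli(2))  # 1/6
print(bernoulli(4))  # -1/30
```

Exact rational arithmetic matters here: the Bernoulli numbers are fractions, and floating point would quickly lose them.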

Now the New Scientist site reports that the British writer John Graham-Cumming has launched a project to build a working Analytical Engine from Babbage’s design; he is hoping to raise £100,000 in donations to fund the project.

I have started a project to build an analytical engine, dubbed Plan 28 after one of Babbage’s detailed plans. I’m aiming for £100,000 and hope to complete the project in time for the 150th anniversary of Babbage’s death on 18 October 2021.

At one time, it was thought that building the Analytical Engine would have been impossible in Babbage’s time, because it would have required manufacturing precision beyond what was then achievable.  However, the Science Museum built a working model of one of Babbage’s simpler machines, the Difference Engine No. 2, using only tolerances achievable in Babbage’s day, so there is a good chance that a working Analytical Engine could be built as well.

Of course, we won’t really know until someone tries; but I find the idea of a fully-functional, steam-powered mechanical computer to be just really fascinating.

Car Hacking, Again

December 24, 2010

In recent years, auto manufacturers have introduced a new security technology, sometimes called an immobiliser, in an attempt to reduce the incidence of car theft.  The immobiliser, which is typically present in the key fob, sends an encrypted wireless signal to the car’s electronic engine control when the driver attempts to start the engine.  If the encrypted code is correct, the engine starts; otherwise, the engine is locked down.
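The fob-to-controller exchange is essentially a cryptographic challenge-response.  Here is a minimal sketch of the idea; real immobilisers use proprietary ciphers in dedicated hardware, and HMAC-SHA256 stands in here purely for illustration:

```python
import hmac, hashlib, os

# Conceptual challenge-response between the engine controller (ECU)
# and the key fob.  Real immobilisers use proprietary ciphers; the
# stdlib HMAC-SHA256 here is an illustrative stand-in.

SHARED_KEY = os.urandom(16)  # secret programmed into both fob and ECU

def fob_response(key, challenge):
    """The fob proves knowledge of the key without transmitting it."""
    return hmac.new(key, challenge, hashlib.sha256).digest()

def ecu_authorize(key, challenge, response):
    """The ECU recomputes the expected response and compares it
    in constant time."""
    expected = hmac.new(key, challenge, hashlib.sha256).digest()
    return hmac.compare_digest(expected, response)

challenge = os.urandom(8)  # fresh random challenge per start attempt
resp = fob_response(SHARED_KEY, challenge)
assert ecu_authorize(SHARED_KEY, challenge, resp)        # engine starts
assert not ecu_authorize(SHARED_KEY, challenge, b"\0" * 32)  # locked out
```

The fresh random challenge is what defeats simple replay of a recorded fob transmission; the weaknesses Nohl found, discussed below, lie in the ciphers and keys, not in this basic protocol shape.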

These systems have apparently achieved some success in reducing theft.  Auto thefts in Germany had been steadily declining for sixteen years, but that trend has now been broken, according to an article at New Scientist:

AFTER a 16-year decline, car theft in Germany rose in 2009, according to figures released recently by the German Insurance Association. One “white hat” hacker, who probes security systems to flag up flaws that can then be patched, thinks he knows why. Karsten Nohl of Security Research Labs in Berlin, Germany, has identified vulnerabilities in the engine immobilisers used to protect modern cars from theft.

There appears to be an underlying problem with the encryption used in these systems, a problem that will not come as a surprise to anyone who has worked in the area.

…  the proprietary encryption keys used to transmit data between the key fob, receiver and engine are so poorly implemented on some cars that they are readily cracked, Nohl told the Embedded Security in Cars conference, in Bremen, Germany, last month.

As I’ve said before, the history of proprietary security systems and encryption algorithms is fairly dismal.  Getting this stuff right is hard, and the best way we know to get a method without serious flaws is to employ a technique that is published, so that a variety of people with the requisite expertise can check it.

It appears that, in addition to using questionable proprietary methods, some vendors also used key lengths of 40 or 48 bits.   Using the relatively new Advanced Encryption Standard [AES] with 128-bit keys is now considered a minimum requirement for security.  One manufacturer did something even dumber: the Vehicle Identification Number [VIN] was used as the cryptographic key.   The VIN is generally displayed on a dashboard label, visible through the windshield, so it is hardly a closely guarded secret.
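A quick back-of-the-envelope calculation shows why those key lengths are hopeless.  The guess rate below is an illustrative assumption for dedicated cracking hardware, not a measured figure:

```python
# Worst-case exhaustive key search time for the key lengths mentioned
# above.  The rate of 1e9 keys/second is an illustrative assumption
# for dedicated hardware, not a benchmark.
def years_to_search(key_bits, keys_per_second=1e9):
    """Worst-case time to try every key, in years."""
    seconds = 2 ** key_bits / keys_per_second
    return seconds / (60 * 60 * 24 * 365)

for bits in (40, 48, 128):
    print(f"{bits}-bit key: ~{years_to_search(bits):.3g} years")
```

A 40-bit keyspace falls in minutes and a 48-bit one in days at this rate, while a 128-bit keyspace takes longer than the age of the universe; this is the gap between the deployed systems and the AES-128 baseline.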

I’ve written before about the potential threat from hacking a car’s electronic controls.  It is somewhat disheartening to find that even the car’s security system is, well, not very secure.

New Flaw in Internet Explorer

December 23, 2010

Microsoft has issued a new Security Advisory (2488013) concerning a previously unreported flaw in all supported versions (8, 7, and 6) of its Internet Explorer browser, on all supported Windows versions.  This is a potentially very serious vulnerability that might allow an attacker to execute code on your machine if you visit a malicious Web page.  Microsoft says, in the Advisory:

The vulnerability exists due to the creation of uninitialized memory during a CSS function within Internet Explorer. It is possible under certain conditions for the memory to be leveraged by an attacker using a specially crafted Web page to gain remote code execution.

Microsoft has also acknowledged, in a post on its Security Research and Defense blog, that a working exploit of this vulnerability has been published on the Internet, although there are no reports of active exploits so far.   (That is likely to change.)    The exploit uses a previously discovered technique to evade two of the security features Microsoft introduced with Windows versions Vista and 7: Address Space Layout Randomization [ASLR], which randomizes the address at which Dynamic Link Libraries [DLLs] are loaded; and Data Execution Prevention [DEP], which prevents execution of code located in memory segments marked as containing data.  It loads a DLL, used by Internet Explorer to process certain HTML tags, that was compiled in such a way that it will not use ASLR, and is thus loaded at a fixed, predictable location in memory, allowing the attacker to inject executable code.
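Whether a given module opts in to ASLR is recorded in its PE header: the `DllCharacteristics` field of the optional header carries a "dynamic base" flag (0x0040) set by the linker's /DYNAMICBASE option.  A short sketch of checking that flag directly, using only offsets from the published PE format:

```python
import struct

IMAGE_DLLCHARACTERISTICS_DYNAMIC_BASE = 0x0040  # ASLR opt-in flag

def supports_aslr(path):
    """Check whether a PE file (EXE or DLL) was linked with
    /DYNAMICBASE, i.e. opts in to ASLR.  A module without this flag
    loads at a fixed, predictable address and can anchor an exploit,
    as described above."""
    with open(path, "rb") as f:
        data = f.read(4096)  # headers fit comfortably in the first 4 KB
    # The DOS header field e_lfanew (offset 0x3C) points to "PE\0\0".
    pe_offset = struct.unpack_from("<I", data, 0x3C)[0]
    assert data[pe_offset:pe_offset + 4] == b"PE\0\0", "not a PE file"
    # DllCharacteristics sits 0x46 bytes into the optional header,
    # which follows the 4-byte signature and 20-byte COFF file header
    # (the offset is the same for 32-bit and 64-bit images).
    dll_chars = struct.unpack_from(
        "<H", data, pe_offset + 4 + 20 + 0x46)[0]
    return bool(dll_chars & IMAGE_DLLCHARACTERISTICS_DYNAMIC_BASE)

# Example (path is illustrative):
# supports_aslr(r"C:\Windows\System32\kernel32.dll")
```

A single non-randomized DLL like the one used in this exploit is enough: one module at a known address gives the attacker a stable foothold even though the rest of the process is randomized.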

Microsoft suggests that the risk to users’ systems can be reduced by using its Enhanced Mitigation Experience Toolkit [EMET].  (I talked briefly about EMET in an earlier post about a vulnerability in Adobe’s software.)  Essentially, the EMET allows you to force the use of ASLR and DEP, regardless of how the executable modules were compiled.  This mitigation is not without its own potential problems.  Some applications may not be compatible with the EMET.  I recommend that you enable it only on an application-by-application basis, not on a system-wide basis.  The latter option has been reported to cause system stability problems in some cases.  You should test each application carefully before using it with EMET in a production environment.

Another issue with EMET is that, although it can be installed on Windows XP/SP3 and Server 2003, provided you have the .NET framework 2.0 installed, you cannot use the ASLR protection, because those versions of Windows don’t support it.

I will post further information as it becomes available.  Brian Krebs has an article on this problem at his Krebs on Security blog; the SANS Internet Storm Center also has a diary entry.

Security in 2020

December 22, 2010

Bruce Schneier, in his Schneier on Security blog, has a very interesting essay about how current trends in technology, security, and business might shape how security evolves as we move forward towards 2020.  His basic thesis is somewhat provocative:

In the next 10 years, the traditional definition of IT security—­that it protects you from hackers, criminals, and other bad guys—­will undergo a radical shift. Instead of protecting you from the bad guys, it will increasingly protect businesses and their business models from you.

He talks about several trends that seem to point in this direction.  One, which has been going on for some time, is the increasing irrelevance of the concept of the network perimeter, the boundary between the “Good Guys” on the inside, and the untamed wilderness without.   (I talked about this briefly in my previous post.)   Schneier also cites “consumerization”: the increasing degree to which users want to use their own devices, configured as they like them, rather than using standardized machines provided by the organization’s IT function.  The increasing use of “cloud computing”, particularly for storing data in the cloud, also is a poor match to the traditional model of security.

Schneier also identifies two new trends that he thinks will be important in shaping security understanding and strategy going forward.  The first is the increasing prevalence of special-purpose computing devices.

The general-purpose computer is dying and being replaced by special-purpose devices. Some of them, like the iPhone, seem general purpose but are strictly controlled by their providers. Others, like Internet-enabled game machines or digital cameras, are truly special purpose. In 10 years, most computers will be small, specialized, and ubiquitous.

The second trend, which Schneier calls “decustomerization”,  is one that should really provoke some thought.  More and more, we are getting online services, like E-mail, collaboration, and social networks, in the cloud for “free”, with the costs being covered by advertising.  The essay points out an obvious consequence: the traditional relationship between the supplier and the user is being radically changed:

This is important because it destroys what’s left of the normal business rela­tionship between IT companies and their users. We’re not Google’s customers; we’re Google’s product that they sell to their customers. It’s a three-way relation­ship: us, the IT service provider, and the advertiser or data buyer.  …   Facebook’s continual ratcheting down of user privacy in order to satisfy its actual customers­—the advertisers—and enhance its revenue is just a hint of what’s to come.

(I’ve used the word ‘obvious’ here in the sense it’s used in mathematics: something is obvious once someone has shown it to you.)   Realizing that the users of a social networking system like Facebook are the product, not the customers, makes many changes much easier to understand.   For example, when I first started using Facebook a couple of years ago, the user profile had text fields in which one could enter things like favorite books and movies.  In a couple of subsequent design changes, these text entries have been replaced, in essence, by check boxes, with which you can select your favorites from a defined list.  The latest redesign, just a few weeks old, takes this further.  Now, for example, you can say that you do or do not speak French; the former possibility of writing that you spoke a bit of French is now off the menu.  One effect of these changes is clear: it is easier for Facebook to supply advertisers with lists of users that meet pre-defined criteria.

Quibbles about ugly neologisms aside, the whole essay is worth a read; it’s definitely thought-provoking.

Intel’s Kill Switch

December 21, 2010

Once, not so very long ago, the basic idea underlying security strategies for corporations and other organizations was fairly simple.  There was a network “inside” the organization, which contained applications and data used to run the organization.  There was also the “outside” network (meaning, usually, the Internet), which was largely populated with Bad Guys and wild beasts.  In between, there was an organizational firewall, which functioned as a barrier to keep Bad Things from the outside from getting in, and Good Things from the inside from leaking out.  Of course, there was some traffic that had to be passed through the firewall: E-mail, and requests for Web pages and the responses to those requests, for example.  This approach had its pitfalls, but in theory it presented a relatively tractable problem.

Unfortunately, the world did not stand still, and now that model is seriously outmoded.  We have seen, most recently in the WikiLeaks flap, that small transportable storage devices and media have made it very easy for internal data to slip out, with just a little assistance.  The proliferation of mobile devices, like laptops and smart phones, presents even more problems, since these devices can potentially access the “inside” from the “outside”; furthermore, users often want to store at least a subset of confidential information on them, to use while on the go.  The introduction of many new mobile devices (iThings and others) in the consumer market has also made it harder for IT departments to keep on top of what is being used.   This has led to the introduction of some new security features, especially for devices intended for the corporate market.  Some phones, for example, have a provision that allows their owners to remotely lock the device or delete the contents of its memory, if it is lost or stolen.

According to an article at ThreatPost and a diary entry at the SANS Internet Storm Center, Intel’s new line of processors (code named “Sandy Bridge”) and their associated chip sets will support a new capability called “Anti-Theft” that enables devices built with the chips to be disabled, by being sent a “poison pill”.  This can be triggered in two ways:

  • Based on local rules configured into the device.  For example, it might be required that the device “check in” with a network server at least every N hours.
  • Set by a remote command sent by an administrator if, for example, the device is reported as stolen.  For devices that incorporate 3G cellular connections, this can be initiated even when the device is powered off.  (It can be turned on remotely via the 3G link.  This is also used to enable administrators to “push” software updates even to devices that are off.)
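The first trigger, the local check-in rule, amounts to a dead-man's switch enforced by the platform.  This sketch shows the logic conceptually; it is my own illustration, not Intel's actual interface, and the interval is an assumed policy value:

```python
import time

CHECK_IN_INTERVAL_HOURS = 24  # assumed policy value, for illustration

class AntiTheftPolicy:
    """Conceptual sketch of the 'local rule' trigger described above:
    if the device has not checked in with its management server within
    N hours, it locks itself.  Not Intel's actual interface."""

    def __init__(self, interval_hours=CHECK_IN_INTERVAL_HOURS):
        self.interval = interval_hours * 3600  # seconds
        self.last_check_in = time.time()
        self.locked = False

    def record_check_in(self):
        """Called after a successful contact with the server."""
        self.last_check_in = time.time()

    def enforce(self, now=None):
        """Evaluated at boot: lock the device if it has been out of
        contact for longer than the allowed interval."""
        now = time.time() if now is None else now
        if now - self.last_check_in > self.interval:
            self.locked = True  # the local "poison pill": refuse to boot
        return self.locked
```

The appeal of the local rule is that it needs no network connectivity at the moment of theft: a stolen laptop that stays offline simply times out and locks itself.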

The disabling action can prevent the device from booting; Intel claims that it can also prevent access to data on an encrypted storage device, even if that device is removed and put in another machine, apparently by deleting part of the cryptographic key information that is stored in non-volatile memory.
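Deleting part of the stored key material is a form of cryptographic erasure: the data itself is never touched, but without the key the ciphertext is unrecoverable.  A toy sketch of the idea follows; the SHA-256 counter-mode keystream is for illustration only (a real implementation would use AES in hardware):

```python
import hashlib, os

# Sketch of "cryptographic erasure": data is encrypted under a key
# held in non-volatile memory; destroying the key renders the
# ciphertext unrecoverable, even if the disk is moved to another
# machine.  The SHA-256 counter-mode keystream below is illustrative
# only -- a real implementation would use AES.

def keystream_xor(key, data):
    """XOR data with a keystream derived from (key, block counter).
    Applying it twice with the same key recovers the plaintext."""
    out = bytearray()
    for offset in range(0, len(data), 32):
        pad = hashlib.sha256(key + offset.to_bytes(8, "big")).digest()
        chunk = data[offset:offset + 32]
        out.extend(b ^ p for b, p in zip(chunk, pad))
    return bytes(out)

nvram_key = os.urandom(32)  # key material in protected non-volatile memory
secret = b"confidential corporate data"
ciphertext = keystream_xor(nvram_key, secret)

assert keystream_xor(nvram_key, ciphertext) == secret  # normal access

nvram_key = None  # the "poison pill" zeroizes the stored key...
# ...and without it, the ciphertext cannot be decrypted.
```

The attraction is speed: erasing a few hundred bits of key takes an instant, whereas securely wiping a whole disk takes hours.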

Intel also claims, in their white paper [PDF] on the new chipsets, that disabled devices that are recovered can be re-activated without loss of data, in one of two ways:

  • By entry of a locally configured pass-phrase stored securely in the device.
  • By entry of a “recovery token” that can be generated from the security administration workstation.

Now these capabilities have some obvious attractions from a security viewpoint; but their introduction has also sparked some significant debate.  Capabilities like this are almost always a two-edged sword: being able to disable a stolen device remotely is a plus, but if an attacker figures out how to send a counterfeit “poison pill”, he will have an excellent tool for a denial-of-service attack.  Similarly, being able to re-activate a disabled device is a good thing, but enabling it by means of a locally-configured pass-phrase potentially introduces all the problems associated with passwords (‘123456’, anyone?) that we’ve discussed many times before.

In implementing facilities like this, the devil is always in the details, and it’s all too easy to get the details wrong.  Intel, of course, has the resources and talent to get it right; time will tell whether they succeed.
