Scientific “Placeholders”

January 30, 2011

One of the puzzles currently engaging the attention of scientists working at the Large Hadron Collider [LHC] in Switzerland is the nature of the dark matter that is postulated to account for something like 80% of the matter in the universe.  It is called dark because it apparently does not interact with electromagnetic radiation (such as light); it was proposed as an explanation for observations that seem to show that there is too much gravity to be accounted for by ordinary, visible matter.  (The “amount” of gravity can be inferred from the dynamical behavior of astronomical objects, such as rotating galaxies.)   But, so far, we have no direct evidence of its existence, nor any real understanding of its properties.

The “Nobel Intent” blog at Ars Technica has an interesting article that tries to put the issue of dark matter in some historical scientific context.  There are some people who are troubled by the idea of assuming the existence of something that cannot be detected.

The comments appear like clockwork every time there’s a discussion of the Universe’s dark side, for both dark matter and dark energy. At least some readers seem positively incensed by the idea that scientists can happily accept the existence of a particle (or particles) that have never been observed and a mysterious repulsive force. “They’re just there to make the equations work!” goes a typical complaint.

As the article points out, this is hardly the first time that a scientific theory or explanation has been accepted, even though it did not “fill in” all the blanks for the phenomenon concerned. Sometimes a conceptual “placeholder” has to stand in for the parts that we haven’t yet understood.

Charles Darwin’s theory of evolution by means of natural selection, cited in the article, is an obvious example.  In The Origin of Species, Darwin gave a thorough and convincing account of how heritable traits that improved reproductive success could become pervasive in a population.  But Darwin offered no explanation of the mechanism by which traits could be inherited; Mendel made a careful study of the patterns of heredity, but similarly had no explanation of how it worked.  It was only in the mid-20th century, with the discovery of the structure of DNA, that biologists could show how traits were inherited in the first place.

Another example, not in the article, is Newton’s formulation of the law of universal gravitation.  His work gave us tools for computing gravity’s effects that are accurate enough to send a spacecraft to Mars; but Newton really had no idea how gravity worked.  The beginning of that understanding had to wait for Einstein’s theory of General Relativity, which describes gravity as a consequence of the curvature of space-time.  And we are still trying to reconcile General Relativity with the Standard Model of quantum physics.

There are examples in other fields, too.  Aspirin, for example, was used in medicine for decades before the mechanism by which it works was elucidated in 1971.

As the article points out, even those “placeholders” that turn out to be really wrong are not useless.  The case of phlogiston is instructive.  Phlogiston (from the Ancient Greek φλογιστόν, “burnt up”) was a combustible element presumed to be contained in all materials that would burn, and released during combustion.  So materials that burned in air, like wood or oil, were supposed to be rich in phlogiston; when they were burned, they became “dephlogisticated”, leaving the pure material (sometimes called the calx).  The released phlogiston was absorbed by the air; that a fire in a small enclosed volume of air went out after a short time showed that the air could not absorb any more phlogiston – it was said to be completely “phlogisticated”.  The theory, which in some ways was almost the inverse of the correct explanation, oxidation, was not dropped until the experiments of Lavoisier demonstrated that the mass of all combustion products, including gases, was greater than the initial mass of the fuel.  Nonetheless, the theory of phlogiston did lead scientists to theorize (correctly) that combustion, metabolism, and corrosion (rusting) had something in common.

The whole point of all this, really, is that, like most forms of human endeavor, science often proceeds along a somewhat meandering path, rather than in a straight line from premise to complete explanation.  Sometimes even small gaps in our knowledge can be frustrating; yet it is gratifying that, on the whole, we keep making progress.


The Internet Kill Switch, Again

January 29, 2011

Last summer, I wrote about a bill, proposed by Sen. Joe Lieberman, to provide an “Internet Kill Switch” that the President could use to disconnect US installations from the Internet in the event of a cyber attack on US infrastructure.   Although that bill was approved by the Homeland Security and Governmental Affairs Committee, it died with the outgoing Congress.  The “Threat Level” blog at Wired is now reporting that a very similar bill is about to be introduced again, this time sponsored by Sen. Susan Collins.  There is perhaps a little irony in the timing, in view of the move by Egyptian authorities to cut off Internet access there in the wake of anti-government protests.

The actual bill has not been introduced at this point, so no one can say for sure what it contains.  It seems clear, though, that it is intended to apply to both government and private-sector systems.

An aide to the Homeland Security committee described the bill as one that does not mandate the shuttering of the entire internet. Instead, it would authorize the president to demand turning off access to so-called “critical infrastructure” where necessary.

This seemed like a bad idea last summer, and I cannot see that it has improved any with age.  Any responsible operator of an infrastructure system surely knows how it is connected (if it is) to the Internet, and how to break that connection if necessary.  If the aim of the legislation is to allow the President to tell system operators that there is an emergency, and that they need to take action, I think he can do that now.  If the aim is to put together some sort of controlling “meta system” that can shut off access, it is a really bad idea, for all the reasons that I outlined in that earlier post.  It would introduce a single point of failure into a system that, at present, is fairly decentralized and resilient; and that failure point would be the biggest prize possible for an attacker.

Update Sunday, 30 January, 13:18 EST

Ars Technica has a post in “Law & Disorder” on how Egypt’s disconnection may have been done.  It also points out the considerably greater complexity of the Internet infrastructure in the US.


New Microsoft Advisory

January 28, 2011

Microsoft has issued a Security Advisory (2501696) for Windows about a newly discovered security flaw  [CVE-2011-0096]  that affects Internet Explorer, as well as, potentially, any other application that uses the Windows MHTML protocol handler.  (MHTML is an Internet protocol that defines a MIME structure for wrapping HTML content.)   The potential attack is somewhat similar to server-side cross-site scripting.   All currently supported versions of Windows are affected, except Server Core installations of Windows Server 2008.

The Advisory provides a work-around to mitigate the vulnerability: the Windows Registry can be modified to prevent the execution of scripts within an MHTML document.  Modifying the Registry incorrectly can have serious adverse effects, including making your system fail to boot; it is not a job for the ten-thumbed.  Microsoft’s consumer-oriented article on this advisory has a “Fix it” tool that will apply the Registry work-around for you; there is also a tool to uninstall it.  Like other work-arounds, this one has the potential to cause problems, so careful testing is advisable.  So far, there is no announced schedule for a patch.

This vulnerability seems to be potentially exploitable in a number of different ways.  If you must use Internet Explorer, I suggest that you try the FixIt tool, carefully.


OpenOffice 3.3 Released

January 28, 2011

The OpenOffice project has released a new version, 3.3, of its office productivity suite.  The new version incorporates a number of new and enhanced capabilities, which are summarized on this page.  It also fixes 14 security vulnerabilities, some serious, which are summarized in the Security Bulletins.  More information on the changes is also available in the Release Notes.

You can obtain the installation packages, for Mac OS X, Windows, Linux, and Solaris, from the download page, in a variety of (human) languages.  Windows and Linux users should note that the installation packages generally include the Java run-time environment, which is used to implement a number of features of OpenOffice; but it is possible to download a version without Java, and to install OpenOffice without it.  (The OpenOffice site has a list of features that require Java; I wrote about the pros and cons of installing Java in this post.)

Because of the security updates in the new version, I recommend that you upgrade your installation.


Wolfram on Watson

January 27, 2011

Last summer, in one of my earlier posts on IBM’s Watson project, to build a computer system that could play Jeopardy!, I mentioned Wolfram|Alpha, another system designed to answer queries expressed in natural language.  Yesterday, Stephen Wolfram, the designer of Wolfram|Alpha, and of the Mathematica software, published a blog post on Watson, comparing and contrasting it with Wolfram|Alpha.

The most fundamental difference between the two systems is the sort of information that they process.  Watson is fundamentally designed to work with unstructured text data, while Wolfram|Alpha uses a “curated” data base that attempts to represent knowledge directly.

The key point is that Wolfram|Alpha is not dealing with documents, or anything derived from them. Instead, it is dealing directly with raw, precise, computable knowledge. And what’s inside it is not statistical representations of text, but actual representations of knowledge.

Whereas Watson starts with a large body of text, and tries to extract and classify information from it, a lot of the classification work for Wolfram|Alpha is done in the process of setting up the computable knowledge data base.

In Wolfram|Alpha most of the work is just adding computable knowledge to the system. Curating data, hooking up real-time feeds, injecting domain-specific expertise, implementing computational algorithms—and building up our kind of generalized grammar that captures the natural language used for queries.

As Wolfram points out, there are, generally speaking, two types of data stores in corporations and other organizations.  The first is the traditional data base, which embeds knowledge of the data domain in its structure.  (For example, think of the entity-relationship diagrams used in designing data bases.)  The second type includes large amounts of unstructured information: things like memos, letters, product literature, images, and E-mails.  In a very broad sense, the Watson project is really aimed at this second category of data.

There are typically two general kinds of corporate data: structured (often numerical, and, in the future, increasingly acquired automatically) and unstructured (often textual or image-based). The IBM Jeopardy approach has to do with answering questions from unstructured textual data—with such potential applications as mining medical documents or patents, or doing e-discovery in litigation.

What Wolfram is doing with Alpha is, in a sense, to find ways to support free-form, unstructured queries on the structured (or curated) data in the first category.  In many ways, the two approaches are complementary. Wolfram suggests, for example, that Watson might pre-process text data to make it easier to structure it for Wolfram|Alpha.
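As a toy illustration of the structured/unstructured distinction (every name, record, and memo below is invented for the example):

```python
# Structured data: the schema itself encodes domain knowledge, in the
# spirit of an entity-relationship design (an order references a customer).
customers = {1: {"name": "Acme Corp", "country": "US"}}
orders = [{"order_id": 101, "customer_id": 1, "total": 2500.0}]

# Against structured data, a precise query is directly computable:
us_revenue = sum(o["total"] for o in orders
                 if customers[o["customer_id"]]["country"] == "US")

# Unstructured data: the same fact buried in free text.  Extracting it
# is the kind of text-analysis problem Watson is aimed at.
memo = "Acme Corp (US) placed order #101 for a total of $2,500."
```

A Watson-style system would first have to recover the entities and relationships from the memo before it could answer the query that a single line of code answers against the structured records.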

The whole post is worth a read; it gives a good overview of both technologies.


Opera Updated to Version 11.01

January 27, 2011

Opera Software has released a new version of its Opera browser, version 11.01, for Mac OS X, Linux/UNIX, and Windows.  The new version fixes several serious security issues, and incorporates a number of other improvements and bug fixes.  The change logs have more details for Windows, Linux/UNIX, and Mac OS X.    You can obtain the new version using the built-in update mechanism (Help > Check for Updates); Linux users should be able to get the updated package using the standard package management tools (e.g., Synaptic).  Alternatively, you can download a complete installation package.

Because of its security content, I recommend that you update to the new version as soon as you conveniently can.


Redefining the Kilogram

January 26, 2011

A while ago, I posted a note about the kilogram’s weight-loss problem.   The kilogram is the only fundamental unit of the SI [Le Système International d’Unités] system of units that is defined by a physical object: the mass of a particular cylinder of platinum/iridium alloy, stored in a vault at the Bureau International des Poids et Mesures [BIPM] at Sèvres, outside of Paris.  As I mentioned in that earlier post, that cylinder appears to be losing weight, at least by comparison with the copies of it that have been distributed to various national metrology labs.   Since there is no more authoritative source with which to compare it, no one is quite sure what is going on.  All of the other fundamental SI units have been re-defined in terms of physical processes.  For example, the meter, originally defined as 1/10,000,000 of the distance from the Equator to the North Pole, is now defined as the distance that light travels in 1/299,792,458 of a second.
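A quick back-of-the-envelope sketch shows what each of those meter definitions implies (the round numbers here come from the definitions themselves, not from measurements):

```python
# Under the original definition, the Equator-to-pole distance was
# exactly 10,000,000 m, so the Earth's full meridional circumference
# came out to a round 40,000 km.
quarter_meridian_m = 10_000_000
circumference_km = 4 * quarter_meridian_m / 1000

# The modern definition instead fixes the speed of light exactly:
c = 299_792_458                      # meters per second, by definition
one_meter = c * (1 / 299_792_458)    # distance light travels in 1/c seconds
```

Modern surveys put the actual meridional circumference at roughly 40,008 km – a small discrepancy, but one the original 18th-century surveyors had no hope of detecting.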

The New Scientist site has an article on the ongoing effort to find a new definition for the kilogram.

In October, the General Conference on Weights and Measures in Paris is expected to begin the process of changing the definition of the kilogram to one based on fundamental constants like Avogadro’s constant, the number of atoms in a mole, and the Planck constant, which relates the energy of a photon or particle to its frequency. If everyone can agree on the technologies to do this, the redefinition process should be completed by 2015.

There are two principal contenders for the new definition.  One is based on a device called a Watt balance, which I mentioned in my earlier post.  This is essentially like a regular balance, except that the force on one side is electromagnetic, rather than being provided by gravity.  The idea is that the “master” kilogram could be weighed in the Watt balance; once that is done, and the balancing voltage and current are known, an accurate comparison of any object can be made without reference to the physical cylinder.   Obviously, this requires a very accurate and stable method of measuring the current and voltage.
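The underlying relation can be illustrated with a toy calculation.  The Watt balance equates mechanical power, m·g·v (mass, local gravity, coil velocity), with electrical power, U·I, so the mass follows from electrical measurements: m = U·I / (g·v).  The numbers below are made-up illustrative values, not real experimental data:

```python
# Toy Watt-balance calculation: mechanical power m*g*v balances
# electrical power U*I, so m = U*I / (g*v).
# All values here are illustrative, not actual measurements.

g = 9.80665     # local gravitational acceleration, m/s^2 (standard value)
v = 0.002       # coil velocity in the "moving" phase, m/s
U = 1.96133     # measured induced voltage, V
I = 0.01        # measured balancing current, A

mass_kg = (U * I) / (g * v)   # comes out to 1 kg for these inputs
```

The catch, as noted above, is that U and I must be measured with extraordinary accuracy and stability; in practice this is done against quantum electrical standards.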

Another approach is outlined in a note [abstract] in this week’s edition of Physical Review Letters.  This involves using a new method to create a sphere of silicon-28 with a known number of atoms.

A team led by metrologist Peter Becker of the Federal Institute of Physical and Technical Affairs in Braunschweig, Germany, reveals a breakthrough in an attempt to measure the number of atoms in a silicon sphere, which has let them compute Avogadro’s constant to unprecedented accuracy.

Using their technique, the team has been able to determine the value of Avogadro’s constant (approximately 6.02×10²³ per mole) with a relative uncertainty of 3.0×10⁻⁸.  They feel that, if they can roughly halve this uncertainty, they will have the best candidate for a new definition of the kilogram, based on the mass of a certain number of silicon-28 atoms.
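To get a feel for the numbers involved, here is a rough sketch of the atom-counting idea (constants rounded; the atom count is the quantity the team is trying to pin down so precisely):

```python
# Rough sketch of the silicon-sphere approach: define the kilogram as
# the mass of a fixed number of silicon-28 atoms.  Values are rounded.

N_A = 6.02214e23           # Avogadro's constant, atoms per mole (approx.)
molar_mass_si28 = 27.9769  # grams per mole for silicon-28 (approx.)

# Number of Si-28 atoms in one kilogram (1000 g): about 2.15e25.
atoms_per_kg = (1000.0 / molar_mass_si28) * N_A

# The reported relative uncertainty in N_A propagates directly into
# the atom count:
relative_uncertainty = 3.0e-8
atom_count_uncertainty = atoms_per_kg * relative_uncertainty
```

Even at a relative uncertainty of a few parts in 10⁸, the absolute uncertainty in the count is still hundreds of quadrillions of atoms – a reminder of just how many atoms a kilogram contains.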

Getting to a new, agreed definition will of course involve a good deal of discussion, and perhaps some disagreements.  However, it’s probably useful to remember that this problem exists only because we have learned to measure the physical world to a degree of accuracy undreamed of when the metric system was first formulated in the 18th century.

