Open Source Application Architecture

May 24, 2011

Yesterday, I got an E-mail message from a professional acquaintance, suggesting that I have a look at a new book, The Architecture of Open Source Applications, edited by Amy Brown and Greg Wilson.  The aim of the book is to provide worked examples of how significant, real-world applications have been designed.

Architects look at thousands of buildings during their training, and study critiques of those buildings written by masters. In contrast, most software developers only ever get to know a handful of large programs well—usually programs they wrote themselves—and never study the great programs of history. As a result, they repeat one another’s mistakes rather than building on one another’s successes.

As any regular reader knows, I am a proponent of the open-source model of software development; I’ve cited the value of openness as a corrective against error, as well as the value of source code as the best possible documentation.

I confess, though, that it had never occurred to me to cite its value as an educational tool for other developers; now that I have been prodded, the omission seems a bit strange, since much of my own early education in programming, back in the early 1970s, came from studying other folks’ programs.  (This also follows the model of science, of course.)

In any case, though I have not had time to read all of it, the book seems like a very valuable resource.  It contains descriptions of the architecture and design process for 25 open-source applications; if you are an aspiring developer, I think reading these accounts would be well worth your time.  You may not agree with all, or indeed any, of the design decisions that were made; understanding why those decisions were made, even the wrong ones, can be of considerable educational value.

For example, the book contains a chapter on the sendmail(8) program (the oldest, and perhaps still the most common, mail transfer agent on the Internet) by Eric Allman, the original author and designer.  The discussion of the evolution of the application’s design is enlightening, and reading its history provides some valuable insights into the history of E-mail (especially for younger readers who were not around at the time).

The text of the book is available on the Web here; you can also purchase a soft-cover copy.  Part of the proceeds from the paper editions is being donated to Amnesty International.


Copyrights and Wrongs, Revisited

May 23, 2011

I have written here a couple of times before about the origins of copyright law, and about the questionable nature of some of the “evidence” of widespread infringement presented by the content-producing industry.  Apart from being obviously self-serving, the industry’s claims of economic loss due to infringement assume that every unauthorized copy would become a sale if only enforcement were better, an assumption not justified by any evidence that I’ve seen.  The claims also ignore the difficult-to-estimate, but almost certainly non-zero, benefits of fair use.

All of this has been an ongoing controversy here in the US, but it affects other countries, too.  Now, as reported in a post on the “Law & Disorder” blog at Ars Technica, an independent review of intellectual property law in the United Kingdom, commissioned by Prime Minister David Cameron and conducted by Professor Ian Hargreaves, has been published.  The report, Digital Opportunity: A Review of Intellectual Property and Growth [PDF], recognizes the economic importance of “intangibles” to societies like the UK, but it raises many of the same concerns that have been expressed here in the US; in particular, as Prof. Hargreaves writes in the Foreword, the law has not kept up with changes in society and technology:

Could it be true that laws designed more than three centuries ago with the express purpose of creating economic incentives for innovation by protecting creators’ rights are today obstructing innovation and economic growth?

The short answer is: yes. We have found that the UK’s intellectual property framework, especially with regard to copyright, is falling behind what is needed.

The review also finds that, as has been suggested in the US, much of the data presented to support the case for enhanced copyright protection is somewhat suspect.

Much of the data needed to develop empirical evidence on copyright and designs is privately held.  It enters the public domain chiefly in the form of ‘evidence’ supporting the arguments of lobbyists (‘lobbynomics’) rather than as independently verified research conclusions.

The review makes a strong case for moving to an evidence-driven process for setting policy, with the evidence being developed and vetted by someone who does not have an axe to grind.  It also strongly recommends that fair use exceptions to copyright take account of the benefits of such use, and that restrictions on copying for private use (e.g., for time- or format-shifting) be minimal.

The review also takes a strong position against the increasingly common practice of retroactive copyright extension, pointing out that the incentive for creative work that is supposedly the prime justification for copyright does not even make sense in that context.

Economic evidence is clear that the likely deadweight loss to the economy exceeds any additional incentivising effect which might result from the extension of copyright term beyond its present levels. …  This is doubly clear for retrospective extension to copyright term, given the impossibility of incentivising the creation of already existing works, or work from artists already dead

The report is worth reading if you have an interest in this area; I intend to do my best to persuade my Congress Critters to read it.


Clues from the Japanese Earthquake

May 22, 2011

Most of us have probably followed the ongoing story of the March earthquake and tsunami in Japan, and the dangerous situation with the nuclear power plants there.  The quake, estimated at magnitude 9, was the largest to strike Japan in modern times.    We’ve also seen other major earthquakes in Haiti and New Zealand.   At this point, we know a good deal about the underlying structure of the tectonic plates that make up the Earth’s crust, and also about the mechanics of the faults between plates; the buildup and subsequent sudden release of stress along these faults is an underlying cause of earthquakes.  We haven’t made too much headway, though, when it comes to predicting earthquakes.
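
As an aside on just how big a magnitude 9 event is: magnitude is a logarithmic scale, and a standard rule of thumb (the Gutenberg-Richter energy relation) puts log10 E ≈ 1.5 M + 4.8, with E in joules; each full unit of magnitude thus corresponds to roughly a 32-fold increase in radiated energy.  A minimal Python sketch of that arithmetic (a textbook relation, not taken from any of the reports discussed here):

    def radiated_energy_joules(magnitude: float) -> float:
        """Gutenberg-Richter rule of thumb: log10(E) = 1.5*M + 4.8, with E in joules."""
        return 10 ** (1.5 * magnitude + 4.8)

    # A magnitude-9 event radiates on the order of 2e18 J -- roughly 32 times
    # the energy of a magnitude 8, and about 1000 times that of a magnitude 7.
    print(f"{radiated_energy_joules(9.0):.1e} J")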

The “Physics ArXiv” blog at Technology Review recently summarized some very interesting results, gathered from observations just before the Japanese earthquake.   The results [abstract, full article PDF available], which are preliminary and subject to revision, were presented at the 2011 European Geosciences Union conference in Vienna.

Dimitar Ouzounov at the NASA Goddard Space Flight Centre in Maryland and a few buddies present the data from the Great Tohoku earthquake which devastated Japan on 11 March. Their results, although preliminary, are eye-opening.

The researchers examined data on infrared radiation, measured by satellite and provided by the Climate Prediction Center at NOAA; they also looked at ionospheric observations from three sources: the GPS/TEC ionosphere maps, ionospheric tomography from satellites in low Earth orbit, and data from ground stations in Japan.  They discovered two interesting effects:

  • Beginning on March 8 (three days prior to the earthquake), there was a rapid increase in the emitted infrared radiation, centered over the area of the quake’s epicenter.
  • During the period March 3-11, there was an increase in electron density observed by all three systems, peaking on March 8.

As the Physics ArXiv post points out, this is consistent with an existing hypothesis on the interactions between events in the lithosphere and the atmosphere.

These kinds of observations are consistent with an idea called the Lithosphere-Atmosphere-Ionosphere Coupling mechanism. The thinking is that in the days before an earthquake, the great stresses in a fault as it is about to give cause the release of large amounts of radon.

Radon [Rn, atomic number 86] is a radioactive gas, and a natural product of the radioactive breakdown of uranium.  It is an α-emitter; its most stable isotope, 222Rn, has a half-life of 3.8 days.  It is certainly plausible that a large release of radon into the atmosphere could cause a significant increase in electron density (it is called ionizing radiation for a reason, after all).  Moreover, since water molecules are electrically polar, water vapor molecules are attracted to ions in the atmosphere.  This has the effect of providing nucleation sites for the condensation of water vapor to liquid water; because of water’s unusually high heat of vaporization (40.68 kJ/mol), the condensation could produce warming of the atmosphere, and hence an increase in emitted infrared radiation.
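
To put rough numbers on this mechanism, here is a minimal Python sketch; the decay arithmetic uses the half-life quoted above, while the quantities involved are purely illustrative and are not taken from the paper.

    def fraction_remaining(days: float, half_life_days: float = 3.8) -> float:
        """Fraction of a 222Rn sample still undecayed after a given number of days."""
        return 0.5 ** (days / half_life_days)

    # Between the onset of the anomaly (March 8) and the quake (March 11),
    # roughly 42% of any radon released at the start would already have decayed,
    # each decay emitting an ionizing alpha particle.
    print(f"Fraction decayed after 3 days: {1 - fraction_remaining(3):.0%}")

    # Heat released when one mole (about 18 g) of water vapor condenses,
    # using the heat of vaporization quoted above.
    H_VAP_KJ_PER_MOL = 40.68
    print(f"Heat from condensing 1 mol of water vapor: {H_VAP_KJ_PER_MOL} kJ")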

Once again, these results are preliminary, and need to be confirmed.  (And it is hard to predict how soon that might happen, since getting some of that confirmation will probably require us to observe more earthquakes.)   If the results hold up, they might provide a starting point for better predictions of earthquake risk.  They might also help to account for the persistence of anecdotal evidence that some animals seem to sense imminent earthquakes.


Common Vulnerability Reporting Framework Proposed

May 20, 2011

Back in 2008, a group of five technology providers formed the Industry Consortium for Advancement of Security on the Internet [ICASI]; the original five member companies — Cisco Systems, IBM, Intel, Juniper Networks, and Microsoft — have since been joined by Nokia, as a Founding Member, and by Amazon.  The idea behind the formation of the consortium was to provide a mechanism for cooperative work on security issues.

ICASI will allow IT vendors to work together to address multi-vendor security threats. The consortium will provide a mechanism for international vendor and customer involvement, and allow for a government-neutral way of resolving significant global, multi-product security incidents.

This past week, ICASI released a free white paper proposing a new Common Vulnerability Reporting Framework [CVRF], which attempts to provide a uniform format for reporting security information.  As the white paper points out, some basic standardization of security information has been achieved with, for example, the Common Vulnerabilities and Exposures [CVE] database; but most security advisories are still produced in a variety of formats, often vendor-specific.  The CVRF proposal aims to give this reporting a single, standard form.

The Common Vulnerability Reporting Framework (CVRF) is an XML-based language that is designed to provide a standard format for the dissemination of security-related information. CVRF is intended to replace the myriad of nonstandard vulnerability reporting formats with one format that is machine readable.
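
To make the idea of a machine-readable advisory concrete, here is a minimal Python sketch that builds and prints a simplified vulnerability report as XML.  The element names are invented for illustration; they are not the actual CVRF vocabulary, which is defined in the ICASI white paper.

    import xml.etree.ElementTree as ET

    # Build a simplified, hypothetical advisory; the element names are placeholders.
    doc = ET.Element("advisory")
    ET.SubElement(doc, "title").text = "Example security advisory"
    vuln = ET.SubElement(doc, "vulnerability")
    ET.SubElement(vuln, "cve").text = "CVE-2011-0000"  # placeholder identifier
    ET.SubElement(vuln, "severity").text = "High"
    ET.SubElement(vuln, "affected_product").text = "ExampleServer 2.3"
    ET.SubElement(vuln, "remediation").text = "Upgrade to version 2.4 or later."

    # Because the report is structured XML rather than free-form prose, tools from
    # different vendors can consume the same advisory without custom parsers.
    print(ET.tostring(doc, encoding="unicode"))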

Appendix A of the paper contains a list of Frequently Asked Questions.

I think anyone who has had the dubious pleasure of reading through vulnerability reports and security bulletins from multiple vendors would probably agree that the objective of standardizing this information is a worthy one.  It remains to be seen, of course, whether the various participants will get on board.


Memristors’ Workings Elucidated

May 20, 2011

Last fall, I wrote a note about a new business venture between Hewlett-Packard [HP] and Hynix (a Korean electronics manufacturer) to produce memory devices using memristor technology.  The projection that the devices would be commercially available by sometime in 2013 seemed a bit speculative, given the many questions surrounding the development of a new technology.

This week, an article at the BBC News site reports that at least some of the fundamental questions about how memristors work have begun to be answered.  A group of HP researchers has used X-ray techniques to analyze how current flows through the devices, and how the heat produced affects the structure of the materials in the device.
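
For readers who have not run into the device before, the defining relation (from Leon Chua’s original circuit theory, not from the HP work described here) is that a memristor’s flux linkage is a function of the charge that has passed through it, so its effective resistance depends on its history:

    \mathrm{d}\varphi = M(q)\,\mathrm{d}q
    \quad\Longrightarrow\quad
    v(t) = M\bigl(q(t)\bigr)\,i(t),
    \qquad q(t) = \int_{-\infty}^{t} i(\tau)\,\mathrm{d}\tau

The memristance M(q) behaves as a resistance whose value depends on the charge history, which is why the physical state of the conducting channel (the thing the X-ray measurements probe) matters so much.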

Now, researchers at Hewlett-Packard, including the memristor’s discoverer Stan Williams, have analysed the devices using X-rays and tracked how heat builds up in them as current passes through. … The passage of current caused heat deposition, such that the titanium dioxide surrounding the conducting channel actually changed its structure to a non-conducting state.

The research was reported in the journal Nanotechnology [abstract, PDF download available], and examines the chemical, structural, and thermal characteristics of the device as its electrical state changes.

Dr. Williams told the BBC that this information would be of great importance in developing memristor technology.

The detailed knowledge of the nanometre-scale structure of memristors and precisely where heat is deposited will help to inform future engineering efforts, said Dr Williams.

He contrasted this with Thomas Edison’s development of the incandescent light bulb, which was characterized by a long series of trial-and-error experiments.  The technology is still very new — memristors were first predicted theoretically by Leon Chua in 1971, with the first prototype device being built at HP in 2008 — but it has the potential to produce some further remarkable gains in the performance of electronic devices.


Better BIOS Security

May 19, 2011

An article at the Science Daily site reports that the National Institute of Standards and Technology [NIST] has issued a new report on guidelines for achieving better security in the PC BIOS, the initial firmware executed when the PC begins its boot sequence.  (I have talked a bit about what the BIOS does here.)  The report, BIOS Protection Guidelines [NIST SP 800-147] [PDF], presents and discusses a set of recommendations for a secure BIOS implementation.  The key points are:

  1. An approved BIOS Update mechanism.  All updates to the BIOS code must be via an authenticated update mechanism, or a physically-secure local update mechanism.
  2. BIOS updates should be digitally signed using a secure cryptographic algorithm (a sketch of the corresponding signature check appears after this list).
  3. An optional provision may be made for a physically secure local update mechanism, for emergency use.
  4. The BIOS code stored in the machine’s non-volatile memory should be protected against modification or corruption.
  5. It should not be possible to bypass any of the protection mechanisms.
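
As a concrete illustration of item 2, the sketch below shows the kind of check an authenticated update mechanism performs before accepting a new image, using RSA signature verification from the Python cryptography package.  The key handling and firmware image are placeholders; a real BIOS performs this check inside the protected firmware itself, with the vendor’s public key provisioned at manufacture.

    from cryptography.exceptions import InvalidSignature
    from cryptography.hazmat.primitives import hashes
    from cryptography.hazmat.primitives.asymmetric import padding, rsa

    # For illustration only: generate a vendor key pair on the fly. In practice the
    # public key lives in the protected BIOS region, not alongside the update.
    vendor_private_key = rsa.generate_private_key(public_exponent=65537, key_size=2048)
    vendor_public_key = vendor_private_key.public_key()

    firmware_image = b"...new BIOS image bytes..."  # placeholder payload

    # The vendor signs the image before distributing it.
    signature = vendor_private_key.sign(firmware_image, padding.PKCS1v15(), hashes.SHA256())

    # The update mechanism refuses to flash anything whose signature does not verify.
    try:
        vendor_public_key.verify(signature, firmware_image, padding.PKCS1v15(), hashes.SHA256())
        print("Signature valid: update may proceed.")
    except InvalidSignature:
        print("Signature invalid: update rejected.")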

This is a new area of security attention for NIST, and a welcome one.  I have often smiled to myself when examining purportedly secure PC workstations that, in too many cases, leave access to the BIOS settings (often called “Setup”, or something similar) completely unprotected, even though modern BIOS installations generally provide at least password protection.  An attacker who can access the Setup routine can typically boot from a device of his choice (such as a USB flash drive or a CD-ROM); if he can then go further and modify the BIOS code itself, he can give himself a wide menu of malicious possibilities.

Without appropriate protections, attackers could disable systems or hide malicious software by modifying the BIOS. This guide is focused on reducing the risk of unauthorized changes to the BIOS.

As Ken Thompson pointed out in his 1984 Turing Award lecture, Reflections on Trusting Trust, malicious code at this level can be very difficult to find.

In demonstrating the possibility of this kind of attack, I picked on the C compiler. I could have picked on any program-handling program such as an assembler, a loader, or even hardware microcode. As the level of program gets lower, these bugs will be harder and harder to detect. A well installed microcode bug will be almost impossible to detect.

The report is timely, because many manufacturers are in the process of adopting new BIOS implementations, including ones based on the Unified Extensible Firmware Interface [UEFI] specification.

The NIST report also makes suggestions for system management practices to complement improved BIOS security.

The publication also suggests management best practices that are tightly coupled with the security guidelines for manufacturers. These practices will help computer administrators take advantage of the BIOS protection features as they become available.

It also contains a very good summary of how the boot process works, both for conventional and UEFI implementations.

