Formulas for Disaster, Part 1

May 31, 2009

Until now, I have stayed away from commenting on our current economic and financial situation, dire though some aspects of it certainly are.  So much ink has been spilled, and so much hot air emitted, that it has been hard to think of anything to say that would be both thought-provoking and connected to reality.

However, I’ve just read an article in the June 8 edition of Newsweek that has led me to change my mind.  The article, “Revenge of the Nerd”, discusses how some of Wall Street’s “quants” contributed to the financial meltdown.

Imagine an aeronautics engineer designing a state-of-the-art jumbo jet. In order for it to fly, the engineer has to rely on the same aerodynamics equation devised by physicists 150 years ago, which is based on Newton’s second law of motion: force equals mass times acceleration. Problem is, the engineer can’t reconcile his elegant design with the equation. The plane has too much mass and not enough force. But rather than tweak the design to fit the equation, imagine if the engineer does the opposite, and tweaks the equation to fit the design.

It goes on to talk about some of the work being done by Dr. Paul Wilmott, who studied applied mathematics at Oxford; in 2003, he founded a “Certificate in Quantitative Finance” program, in the City of London, to teach quants the often messy practicalities of applying statistics and math to the real financial world.  He is well qualified for the task; Nassim Taleb, mathematician and author of The Black Swan, says:

He’s the only one who truly understands what’s going on … the only quant who uses his own head and has any sense of ethics.

I had the good fortune, back in 1994, to attend a seminar on “The Mathematics of Financial Derivatives” at St. Hugh’s College, Oxford, presented by Dr. Wilmott and two colleagues, Dr. Jeff Dewynne and Dr. Sam Howison.  I know that they are three very bright guys, and their book, Option Pricing: Mathematical Models and Computation, has been on my office bookshelf ever since.  I think they really have identified one of the fundamental weaknesses in the industry that contributed to the current mess.

I started out in quantitative finance back in the 1970s, after I got my MBA at the University of Chicago. (I worked as a research assistant for Fischer Black, of the Black-Scholes option model, when I was in grad school.) The initial applications of many quantitative financial techniques were in markets like US equities, or listed stock options, where the assumptions that one participant couldn’t affect the overall market much, and that there were reliable sources of information on prices and liquidity, were probably at least somewhat reasonable.

But if you look at one of the key “villains” in this current mess, the credit-default swap [CDS] market, it’s an entirely different story.  There is an article that appeared in Wired magazine (issue 17.03), “The Formula That Killed Wall Street”, that discusses how much of the CDS market was based on a formula, developed by David Li, for estimating the correlation of default risks.  (A copy of Li’s paper, “On Default Correlation: A Copula Function Approach”, is available on the Web as a PDF.)  When it was first unveiled, the formula and the approach it embodied were greeted with enormous enthusiasm; some people spoke of the possibility of Li receiving a Nobel Prize in economics.  The idea was eagerly adopted by participants in the rapidly expanding CDS market.

I have read Li’s paper on the Gaussian copula function, and had a look at an implementation used for predicting the expected default rate in CDS valuation.  What it is essentially doing is using a statistical sampling function to estimate the expected lifetime to failure (= default) for a population of debt instruments.  Now, there is nothing wrong with the math per se; similar approaches are used in manufacturing for quality assurance.  However, there is a big difference: estimating the failure rate of, say, light bulbs does not in itself have any effect on that rate.  But in the case of the CDS, the failure rate is being used as an input to the model that prices the swap.  If the default rate estimate is too low (too optimistic), the asset values will be too high; and that, in turn, will lead to lower estimates of the default rate.  In essence, there is a built-in feedback mechanism that can act as an error amplifier, a problem exacerbated by the lack of transparency and liquidity in the CDS market.  Having large participants whose activities can impact the overall market only makes the problem worse.
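To make the mechanics concrete, here is a minimal sketch (my own toy code, not Li’s implementation) of the one-factor Gaussian copula structure underlying his approach: correlated default times are generated from independent exponential marginals, with all of the dependence coming from a shared “market” factor.  The hazard rate and correlation values here are arbitrary illustrations.

```python
import math
import random

def gaussian_copula_default_times(n_names, rho, hazard, n_sims, seed=0):
    """Simulate correlated default times with a one-factor Gaussian copula.

    Each name's marginal default time is exponential with rate `hazard`;
    the dependence between names comes only from the copula:
        z_i = sqrt(rho)*M + sqrt(1 - rho)*e_i,   u_i = Phi(z_i),
        T_i = -ln(1 - u_i) / hazard
    """
    rng = random.Random(seed)
    phi = lambda z: 0.5 * (1.0 + math.erf(z / math.sqrt(2.0)))  # standard normal CDF
    sims = []
    for _ in range(n_sims):
        market = rng.gauss(0.0, 1.0)              # common factor shared by all names
        times = []
        for _ in range(n_names):
            eps = rng.gauss(0.0, 1.0)             # idiosyncratic factor
            z = math.sqrt(rho) * market + math.sqrt(1.0 - rho) * eps
            u = min(max(phi(z), 1e-12), 1.0 - 1e-12)   # clamp away from 0 and 1
            times.append(-math.log(1.0 - u) / hazard)  # invert exponential CDF
        sims.append(times)
    return sims

def mean_default_fraction(horizon, sims):
    """Average fraction of names defaulting before `horizon` (years)."""
    n = len(sims[0])
    return sum(sum(t <= horizon for t in times) for times in sims) / (len(sims) * n)
```

Note that the copula changes only the dependence between names, never each name’s marginal default probability; that is why a mis-estimated correlation can leave single-name statistics looking fine while badly mispricing the pooled instrument built on top of them.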

Wilmott marvels at the carelessness of it all. “They built these things on false assumptions without testing them, and stuffed them full of trillions of dollars. How could anyone have thought that was a good idea?”

That’s a very good question.  There’s plenty of blame to go around. The managements, who should have known better, were bedazzled by the dollar signs seeming to float out of their economic perpetual-motion machine. The quants knew the math, and their hubris led them to think that nothing else was needed. And the investors, while proving anew the truth of P.T. Barnum’s Law of Applied Economics, forgot that there ain’t no free lunch.

A significant piece of the problem is related to how Wall Street’s compensation works.  Many of these swap deals are long term (20-30 years), and far from transparent.  Yet the folks who trade them are still largely compensated on the basis of short-term P&L, determined by market values computed from the models.  What could possibly go wrong with that?

Wilmott realizes he’s fighting a losing battle, and that changing finance will take a lot more than a few thousand better-prepared quants. As long as banks get paid in the first year for selling a CDO that doesn’t mature for 30 years, little will change.

I am glad that the current US administration is proposing tighter regulation of derivative securities.  However, the devil is always in the details.  I hope that someone on the Obama economic team is talking to people like Dr. Wilmott.

A New S-Word

May 30, 2009

One of the topics I talk about here is the open-source model of software development; and, more generally, about the kinds of collaborative actions that are made possible by communications technology, most notably by the Internet.  So I was very interested to see, in the recent issue of Wired magazine, an article by Kevin Kelly called “The New Socialism: Global Collectivist Society Is Coming Online”.

Now, the first thing I want to say about the article is that I think the title is unfortunate.  As Mr. Kelly himself says, the word “socialism” carries with it an awful lot of baggage:

I recognize that the word socialism is bound to make many readers twitch. It carries tremendous cultural baggage, as do the related terms communal, communitarian, and collective. I use socialism because technically it is the best word to indicate a range of technologies that rely for their power on social interactions. Broadly, collective action is what Web sites and Net-connected apps generate when they harness input from the global audience. Of course, there’s rhetorical danger in lumping so many types of organization under such an inflammatory heading. But there are no unsoiled terms available, so we might as well redeem this one.

What he is talking about is not a political ideology, and in fact doesn’t have much to do with politics at all, at least at present.  Rather, he’s looking at a range of collaborative activities that are enabled by Internet technology, including:

  • Sharing: In some sense the first and most basic form, sharing is represented by sites like Facebook or YouTube.  There has, of course, been some controversy over people sharing content that is not theirs to share, but I’d guess that most of what’s there is personal and completely above-board.
  • Cooperation: This is the next step, in which a (usually) ad hoc group works together toward some common purpose.  Many of the original text-based newsgroups on USENET fit this pattern, as do user-focused support forums.  To cite one example in which I’ve participated, the group comp.lang.c has existed for many years to discuss and help resolve programming problems with the C language.
  • Collaboration: This represents a more organized group working with a more focused purpose.  Many open-source projects, like the Apache Web server, fit this pattern.  Here it is commonly the case that the direct reward to an individual participant is small compared to his or her investment of skilled labor; rather, the rewards tend to be intangible, such as a reputation for skill.
  • Collectivism: This is the pattern exhibited by the largest group endeavors, like Wikipedia, or the development of the Linux operating system.  Typically, the total number of contributors is large, but a smaller core group coordinates the effort.  In the case of Linux itself, there is also its originator, Linus Torvalds, who serves as a “benevolent dictator”.

One of the interesting aspects of all this is that it seems to alleviate some of the tension that has always existed between allowing individual freedom and initiative on the one hand, and organizing for efficiency on the other.

In the past, constructing an organization that exploited hierarchy yet maximized collectivism was nearly impossible. Now digital networking provides the necessary infrastructure. The Net empowers product-focused organizations to function collectively while keeping the hierarchy from fully taking over. The organization behind MySQL, an open source database, is not romantically nonhierarchical, but it is far more collectivist than Oracle.

(This ties in, too, with some of the writing that Eric Raymond has done on the open-source phenomenon, notably his extended essay, “Homesteading the Noosphere”.)

Kelly’s hypothesis is that the success of collective ventures enabled by the Internet is making people more receptive to the idea of collective action on other fronts.  The Internet phenomena are different from traditional political socialism, in that they are based much more on pragmatism than ideology.

The coercive, soul-smashing system of North Korea is dead; the future is a hybrid that takes cues from both Wikipedia and the moderate socialism of Sweden.

I am not at all sure that I agree with all his conclusions about the political import of these “collectivist” activities, but I think it is clear that we are seeing the evolution of an interesting new social and cultural phenomenon.   It’s been an interesting journey so far.

Galactic Positioning System

May 30, 2009

No, that title is not a mistake.  Of course, most people by now are familiar with the idea of another GPS, the Global Positioning System, which uses a “constellation” of satellites in Earth orbit, fitted with high-accuracy atomic clocks, to enable a terrestrial receiver to determine its position.

According to a note posted on the Physics ArXiv blog at the MIT Technology Review Web site, a couple of French researchers have published a short paper [PDF – quite technical] proposing that a similar system could be constructed on a much larger scale, using natural objects in place of the satellites:

Today, Bertolomé Coll at the Observatoire de Paris in France and a friend propose an interstellar GPS system that has the ability to determine the position of any point in the galaxy to within a metre.

The proposed system would use signals from a set of four pulsars, which lie approximately in a tetrahedron centered on the Solar System.  (A pulsar is a rotating neutron star, with a very high magnetic field, which emits a strong beam of electromagnetic radiation.  Because, as with the Earth, the magnetic axis does not correspond exactly with the rotational axis, the signal appears to “blink” on and off, as a lighthouse might.  This “blinking”, for at least some individual pulsars, has a very stable period, to within a few nanoseconds.)

Because of the distances involved, and the fact that the signals travel at the speed of light, General Relativity has to be taken into account:

Why four pulsars? Coll points out that on these scales relativity has to be taken into account when processing the signals and to do this, the protocol has to specify a position in space-time, which requires four signals.

Basically the corrections must account for the relativistic time dilation, and the curvature of space-time caused by gravity (sometimes called the gravitational blue-shift).
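The paper itself is quite technical, but the flavor of the classical (non-relativistic) version of the positioning problem is easy to show: given four emitter positions and four measured pseudoranges, solve for the receiver’s three spatial coordinates plus its clock offset, i.e. a point in space-time.  Here is a sketch using Newton’s method; the emitter positions and ranges in any usage are invented for illustration, and real pulsar navigation would of course need the relativistic corrections discussed above.

```python
import math

def solve4(a, y):
    """Solve a 4x4 linear system by Gaussian elimination with partial pivoting."""
    m = [row[:] + [y[i]] for i, row in enumerate(a)]
    n = 4
    for col in range(n):
        piv = max(range(col, n), key=lambda r: abs(m[r][col]))
        m[col], m[piv] = m[piv], m[col]
        for r in range(col + 1, n):
            f = m[r][col] / m[col][col]
            for c in range(col, n + 1):
                m[r][c] -= f * m[col][c]
    x = [0.0] * n
    for r in reversed(range(n)):
        x[r] = (m[r][n] - sum(m[r][c] * x[c] for c in range(r + 1, n))) / m[r][r]
    return x

def locate(emitters, pseudoranges, iters=12):
    """Newton iteration for receiver position (x, y, z) and clock bias b.

    Each measurement satisfies:  pseudorange_i = |r - s_i| + b  (all in meters).
    Four measurements pin down the four unknowns, which is why four
    emitters (satellites or pulsars) are the minimum needed.
    """
    x = y = z = b = 0.0
    for _ in range(iters):
        residuals, jacobian = [], []
        for (sx, sy, sz), rho in zip(emitters, pseudoranges):
            dx, dy, dz = x - sx, y - sy, z - sz
            d = math.sqrt(dx * dx + dy * dy + dz * dz)
            residuals.append(d + b - rho)                 # model minus measurement
            jacobian.append([dx / d, dy / d, dz / d, 1.0])
        step = solve4(jacobian, [-f for f in residuals])  # Newton step
        x += step[0]; y += step[1]; z += step[2]; b += step[3]
    return x, y, z, b
```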

Many people (myself included) are surprised to find that the existing GPS also has to take relativity into account.  The current system’s undithered signals (the ones used by the military) permit determining location to about a 1-meter radius.  In order to do this, time must be measured to an accuracy of about 1 part in 10^13.  But if relativistic effects were ignored, that would introduce an error of about 1 part in 10^10, or 1000 times as much.  Over the course of a day, you could accumulate a position error of ~10 kilometers.  (Source: Warped Passages, by Lisa Randall, ISBN 0-06-053109-6.)
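As a back-of-the-envelope check, using the commonly cited net relativistic clock drift for GPS satellites of roughly 38 microseconds per day (a figure from standard GPS references, not from Randall’s book):

```python
c = 299_792_458.0        # speed of light, m/s
seconds_per_day = 86_400.0

drift = 38e-6            # net relativistic clock drift, seconds per day (assumed figure)
fractional_error = drift / seconds_per_day   # ~4.4e-10, i.e. "about 1 part in 10^10"
position_error = c * drift                   # ~11.4 km of ranging error per day
```

which agrees nicely with the ~10 kilometers quoted above.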

I’ve mentioned before how parts of modern physics take us away from the domain where our instincts and intuition work.  This is another aspect of that same paradox: the ideas underlying General Relativity (such as curved, non-Euclidean space-time) seem exotic, and the mathematics is formidable.  Yet the little box you may have in your dashboard has to “know” all about it.

Microsoft Updates Firefox?

May 29, 2009

Brian Krebs of the Washington Post has a new article on his Security Fix blog about an unannounced side effect of one of Microsoft’s many security updates:

A routine security update for a Microsoft Windows component installed on tens of millions of computers has quietly installed an extra add-on for an untold number of users surfing the Web with Mozilla’s Firefox Web browser.

Briefly, Microsoft issued an update to its .NET Framework component of Windows via the Windows Update mechanism.  That update silently installed an extension to the Mozilla Firefox browser, called the .NET Framework Assistant. It appears that the extension is installed in such a way that it is quite difficult to remove via the normal user controls (although it can be disabled).  This is rather naughty behavior for a few reasons:

  • It really isn’t appropriate for Microsoft (or any vendor) to be updating another vendor’s software, especially without telling the customer.  (It is left as an exercise for the reader — and not a very difficult one — to imagine Microsoft’s response if updating Firefox were to mess around with the internals of Windows.)
  • The reason the “Uninstall” button for the extension is greyed out is that Microsoft installed the extension in an unconventional way.  Normally, Firefox extensions are installed on a per-user-profile basis; this one, according to Microsoft, is installed to provide “support at the machine level in order to enable the feature for all users on the machine”.  So, if you have a machine used by more than one person, everyone gets the “benefit” of any bugs or security flaws in the extension — without knowing it, of course.
  • The .NET framework itself is a mechanism that, in part, allows a Web site to provide executable content to be run in the browser context.   Some people may not want this, for good reasons.

Unfortunately, getting rid of the extension is a real pain.   Microsoft has instructions for doing so in a Knowledge Base article; be forewarned that this requires some manual hacking of the Windows Registry, which is not for the ten-thumbed or the faint of heart.

Apparently, a later version of the extension does partially remedy the problem, in that it allows per-user un-installation.  Further information and a download link for the new version are on Brad Abrams’s MSDN blog.  More information is also available on a Web site that deals with aspects of Windows that are, um, annoying.

I’ve Got Nothing Worth Stealing

May 28, 2009

One of the things that can be very frustrating in the computer security field is getting some users to take the idea of security seriously:

Computer users often dismiss Internet security best practices because they find them inconvenient, or because they think the rules don’t apply to them. Many cling to the misguided belief that because they don’t bank or shop online, that bad guys won’t target them.

This is the opening of a really good article, “The Scrap Value of a Hacked PC”, by Brian Krebs of the Washington Post in his Security Fix blog.  As he points out, the “direct” items people think of being stolen, such as passwords or credit card numbers, are not the only, or even the most valuable, targets:

When casual Internet users think about the value of their PC to cyber crooks, they typically think stolen credit card numbers and online banking passwords. But as we have seen, those credentials are but one potential area of interest for attackers.

He lists a number of ways in which a criminal can make use of your PC, some of which you really don’t want, including:

  • Use as a Web host for pirated software or movies, or for kiddie porn
  • Use as a relay for junk E-mail (spam)  or for “laundering” connections
  • Use as a tool for Internet advertising “click fraud”
  • Use in denial-of-service or extortion attempts on other Web sites.

Bad guys also have a keen interest in any access credentials, or clues thereto, that may be lying around on your machine.  Those might include other people’s E-mail addresses, or credentials to connect to your workplace network.  Miscellaneous personal information is also grist for the identity thief’s mill.

There’s another thing, too.  Although I am not aware of any specific cases, it does not seem at all improbable to me, in our litigious society, that a PC user might be sued for damages on account of his PC being used in an attack on someone else.  I am not a lawyer, of course, but I think there is a legal doctrine sometimes referred to as “The Attractive Nuisance”, under which a person can be liable for negligence if he leaves a dangerous situation or condition untended, or without reasonable care.  I can visualize this being reworked along the lines of: “The defendant knew, or should have known, that his PC was insecure and could be used to damage someone else.”  You would probably also prefer not to have to prove that the kiddie porn pictures someone downloaded from your PC were put there without your knowledge.

So be careful out there.

The Next Ken Jennings, continued

May 27, 2009

Back on May 3, I wrote about an IBM project to build a system that can play Jeopardy!.  The Technology Review, published by MIT, now has an article that describes the system, called Watson, in a little more depth:

The company has not yet published any research papers describing how its system will tackle Jeopardy!-style questions. But David Ferrucci, the IBM computer scientist leading the effort, explains that the system breaks a question into pieces, searches its own databases for “related knowledge,” and then finally makes connections to assemble a result.

The key problem here is to make the system able to “understand” a Jeopardy! clue expressed in natural language.

Ferrucci describes how the technology would handle the following Jeopardy!-style question: “It’s the opera mentioned in the lyrics of a 1970 number-one hit by Smokey Robinson and the Miracles.”

The Watson engine uses natural-language processing techniques to break the question into structural components. In this case, the pieces include 1) an opera; 2) the opera is mentioned in a song; 3) the song was a hit in 1970; and 4) the hit was by Smokey Robinson and the Miracles.
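IBM has not published how Watson actually assembles a result from those pieces, but the general idea of intersecting extracted constraints over a knowledge base can be illustrated with a toy sketch.  The tiny hand-built “knowledge base” below exists only for this one clue, and the real system obviously faces the far harder problems of extracting the constraints from free text and searching at scale.

```python
# A toy, hand-built "knowledge base" of song facts (illustration only).
SONGS = [
    {"title": "The Tears of a Clown", "artist": "Smokey Robinson and the Miracles",
     "hit_year": 1970, "opera_mentioned": "Pagliacci"},
    {"title": "My Girl", "artist": "The Temptations",
     "hit_year": 1965, "opera_mentioned": None},
    {"title": "Bohemian Rhapsody", "artist": "Queen",
     "hit_year": 1976, "opera_mentioned": None},
]

def answer(constraints, kb):
    """Keep only the facts that satisfy every structural component of the clue."""
    candidates = kb
    for test in constraints:
        candidates = [fact for fact in candidates if test(fact)]
    return candidates

# The structural components extracted from the clue, as predicates:
clue_constraints = [
    lambda f: f["opera_mentioned"] is not None,                   # 1-2) an opera mentioned in a song
    lambda f: f["hit_year"] == 1970,                              # 3) a hit in 1970
    lambda f: f["artist"] == "Smokey Robinson and the Miracles",  # 4) by the Miracles
]
```

Running `answer(clue_constraints, SONGS)` leaves a single candidate, whose `opera_mentioned` field (“Pagliacci”, from “The Tears of a Clown”) is the response to the clue.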

The plan is to stage a match between the Watson system and human contestants, with the clues being given to the system in text format (so it doesn’t also have to deal with the problem of interpreting speech):

Demonstrations of the system are expected this year, with a final televised matchup–complete with hosting by the show’s Alex Trebek–sometime next year.

Regardless of who wins the match, this should be a fascinating exercise.

Microsoft releases Vista SP2

May 27, 2009

Microsoft has released the production version of Service Pack 2 for Windows Vista and Windows Server 2008.  Those of you who are lucky (cough) enough to be using Vista can download it now from the Microsoft Download Center.  If you use Windows Update, either automatically or manually, the SP2 update should be offered within a few days (according to Microsoft), assuming your machine is configured adequately (see below).  The MS Knowledge Base has an article describing alternate ways of getting the Service Pack.

To download the Service Pack itself, use the links at the Microsoft Download Center; the same updater package is used for both Vista and Server 2008.

There is one important prerequisite for installing SP2 on Vista, which is that Vista SP1 must already be installed. Apparently Microsoft packaged things this way so that the SP2 download would not be so big; as it is, the 32-bit version weighs in at about 348 MB.  Windows Server 2008 was released after SP1 was available, and it already includes the contents of SP1.  (If you need to get Vista SP1, there is a download link in the Knowledge Base article I mentioned earlier.)

Microsoft has a summary of changes included in SP2; the more technical release notes are available on MS’s TechNet site.
