Last Monday, I posted a note here describing an interesting analysis of quote and trade data related to the wild gyrations experienced in the US stock market on May 6, 2010. The analysis, provided by market data vendor Nanex, seemed to suggest that there was a design flaw in the system used by the New York Stock Exchange (NYSE) to report quotes, and that there was some evidence of what Nanex called “quote stuffing”: the generation of a very high volume of quote updates (thousands per second), which may have contributed to clogging the consolidated quote reporting system.
I have just read a very interesting note from Professor Ed Felten at Princeton, on the Freedom to Tinker blog hosted by the University’s Center for Information Technology Policy, in which he talks about the implications of the Nanex analysis. He first makes a disclaimer (which I should also have made), saying that his analysis assumes that the data supplied by Nanex is correct. He observes, correctly, that the flaw in the time-stamping of NYSE quotes is exactly the kind of insidious bug that can easily slip through testing, since it manifests itself only under high-load conditions that the test environment may not provide. (I’ve written before about the difficulties involved in finding bugs in a complex system.) It is entirely plausible that no one ever noticed the issue with the quote time stamps; even if someone did, (s)he might have dismissed it as immaterial, as it indeed would be under most market conditions.
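To make the failure mode concrete, here is a minimal sketch of the kind of bug Prof. Felten has in mind. It is purely illustrative, written in Python, and none of the names or details below come from the NYSE’s or Nanex’s actual systems. The hypothetical feed stamps each quote when it leaves an outbound queue rather than when it is generated, so under light load (an empty queue) the two times coincide and testing shows nothing wrong, while under heavy load the published timestamp looks current even though the quoted price is stale.

```python
import time
from collections import deque

class HypotheticalQuoteFeed:
    """Illustrative only: a quote publisher that stamps quotes on the way out.

    With an empty queue, the outbound stamp is essentially the same as the
    time the quote was generated, so the flaw never shows up in testing.
    When the queue backs up under load, the published timestamp still looks
    fresh even though the quoted price may be hundreds of milliseconds old.
    """

    def __init__(self):
        self.outbound = deque()

    def on_quote(self, symbol, bid, ask):
        # The quote is queued as soon as it is generated...
        self.outbound.append((symbol, bid, ask, time.time()))

    def publish_one(self):
        # ...but only time-stamped when it finally leaves the queue.
        symbol, bid, ask, generated_at = self.outbound.popleft()
        published_at = time.time()  # the flawed stamp: dissemination time
        return {
            "symbol": symbol,
            "bid": bid,
            "ask": ask,
            "timestamp": published_at,                        # looks current
            "true_age_ms": (published_at - generated_at) * 1e3,  # can be large
        }
```

A consumer of the consolidated feed, looking only at the timestamp field, would have no way of knowing that the quote is stale, which is precisely why the problem is invisible under most market conditions.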
Prof. Felten’s second point is more subtle, but very insightful. He posits three hypothetical market participants. The first, Alice, knows of the time-stamp problem and writes her trading software to take advantage of it, perhaps by “quote stuffing”. The second, Bob, knows of the flaw, and writes his software to take advantage of it if possible, but not to do anything to trigger or exacerbate the flaw. The third, Claire, does not know about the flaw, but is a very careful systems designer; her system has many checks built in for cross-market consistency. When the flaw is triggered, Claire’s system marks the NYSE quotes as suspect, and proceeds to trade accordingly.
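Prof. Felten does not say what Claire’s checks look like, but one can imagine something quite simple: compare each venue’s quote against the rest of the market and discount any quote that is badly out of line. The sketch below is my own illustration of that idea, again in Python; the venue names, prices, and the 2% threshold are all invented for the example.

```python
from statistics import median

def nyse_quote_is_suspect(mid_by_venue, venue="NYSE", max_deviation=0.02):
    """Flag a venue's quote if its mid-price strays too far from the rest
    of the market (a crude cross-market consistency check, in the spirit
    of Claire's system)."""
    others = [mid for v, mid in mid_by_venue.items() if v != venue]
    if venue not in mid_by_venue or not others:
        return False  # nothing to compare against, so don't flag anything
    consensus = median(others)
    return abs(mid_by_venue[venue] - consensus) / consensus > max_deviation

# A stale NYSE quote sitting well below where the other venues are trading:
mids = {"NYSE": 38.10, "NASDAQ": 40.05, "BATS": 40.02, "ARCA": 40.07}
print(nyse_quote_is_suspect(mids))  # True: mark the NYSE quote as suspect
```

A system with checks of this sort would quite naturally stop trusting the NYSE quotes during an episode like May 6 and trade on the other venues’ prices instead, without its designer ever having known about the time-stamp flaw.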
What Prof. Felten points out is that, in a sequence of events like those on May 6, Alice, Bob, and Claire may all trade in a way that results in their making a large profit, even though the reasons motivating their trading might be different. Consequently, it might be very difficult for any ex post analysis to distinguish between Alice’s trading, which seems clearly unethical, and Claire’s, which is just the result of taking superior care.
The moral of the story, it seems to me, is that we need to do a very careful analysis, before the fact, of the rules under which systems like this operate, because it is potentially very difficult to disentangle afterward who did what and why. It seems quite possible, once again, that we can design a system whose behavior we cannot fully understand.