Happy New Year! (or Years?)

January 2, 2012

Since this is my first post of 2012, let me start by wishing all of you a very happy, healthy, and successful year.   Although this is the year, according to fans of the Mayan calendar doomsday prediction, that we will all disappear, let’s at least make it good while it lasts.

Also, by coincidence, a story at the PhysOrg site reports on a proposal by two professors at Johns Hopkins University for an overhaul of the calendar.  (Wired also has an article on this.)  The basic idea is to arrange the calendar so that a given date, such as December 25, falls on the same day of the week every year.

Using computer programs and mathematical formulas, Richard Conn Henry, an astrophysicist in the Krieger School of Arts and Sciences, and Steve H. Hanke, an applied economist in the Whiting School of Engineering, have created a new calendar in which each new 12-month period is identical to the one which came before, and remains that way from one year to the next in perpetuity.

This is accomplished by dividing the year into four identical 91-day quarters, with each quarter having two 30-day months followed by one 31-day month.  (The existing month names would presumably be retained.)  This would produce a 364-day, 52-week year.  Of course, the complication for all calendar schemes stems from the fact that the year is not an integral number of days long; a year lasts about 365.2422 days.  That is why, in the existing Gregorian calendar, we have leap years.  The developers of the new calendar would handle this by inserting an extra seven-day week, after December, every five or six years, whenever the corresponding Gregorian year would begin or end on a Thursday.  (The first few years with the extra week would be 2015, 2020, 2026, 2032, 2037, 2043, and 2048.)  Messrs. Henry and Hanke have a web site, which includes the proposed calendar and a FAQ list.  The Cato Institute has also republished a copy of an article on the calendar, originally written for Globe Asia.
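
The rule is simple enough to check in a few lines of code.  Here is a short Python sketch (mine, not the authors') that reproduces the list of leap-week years above:

```python
import datetime

def has_extra_week(year: int) -> bool:
    """Hanke-Henry rule, as described above: the year gets an extra
    seven-day week if the corresponding Gregorian year begins or
    ends on a Thursday."""
    thursday = 3  # date.weekday() counts Monday as 0, so Thursday is 3
    return (datetime.date(year, 1, 1).weekday() == thursday
            or datetime.date(year, 12, 31).weekday() == thursday)

print([y for y in range(2012, 2050) if has_extra_week(y)])
# [2015, 2020, 2026, 2032, 2037, 2043, 2048]
```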

The authors claim that a switch to the new calendar would produce many benefits.  For example, since there would be a fixed correspondence between dates and days of the week, schedules (for academics, say) could be put together once and for all.  (It is of course true that this could be done with the existing calendar, at the cost of having ordinary and leap-year versions for each of the seven possible days for January 1.)  They also claim that a significant economic benefit would accrue, because artificial day-count conventions could be eliminated.  (For example, US corporate bonds traditionally have accrued interest calculated under the “30/360” rule: that is, all months are assumed to have 30 days, making the year 360 days long.)
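
For the curious, here is a minimal sketch of the simplest (“bond basis”) form of the 30/360 rule; real-world variants differ in how they adjust month-end dates, so treat the adjustments below as illustrative rather than definitive:

```python
import datetime

def days_30_360(start: datetime.date, end: datetime.date) -> int:
    """Day count under a basic US 30/360 rule: every month is
    treated as 30 days, so a full year counts as 360 days."""
    d1, d2 = start.day, end.day
    if d1 == 31:
        d1 = 30
    if d2 == 31 and d1 == 30:
        d2 = 30
    return (360 * (end.year - start.year)
            + 30 * (end.month - start.month)
            + (d2 - d1))

# A semiannual coupon period comes out to exactly half a "year",
# even though the calendar period is 182 days:
days = days_30_360(datetime.date(2012, 1, 31), datetime.date(2012, 7, 31))
print(days, days / 360)  # 180 0.5
```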

I am very skeptical that this would amount to much.  I worked in the investment / banking industry for more than 30 years, and I don’t remember ever meeting anyone who saw this as a significant problem, even before the era of ubiquitous personal computers.  The interest calculation conventions are, of course, well known, and the securities are priced accordingly.  Getting rid of these conventions would simplify things somewhat, but I doubt that actual savings would be significant.  We would get rid of the requirement to handle leap years correctly, but would have to build in the logic to identify the years that have an extra week, which at first glance is of about the same complexity.

The authors also propose that the existing structure of local time zones be eliminated, and that everyone switch to using UTC (a successor to Greenwich Mean Time).  As a practical matter, this strikes me as a bit silly.  The correspondence between UTC and local time is well-defined (I grant that Daylight Saving Time is a nuisance), and UTC is already used for many things, such as aviation and the Internet’s Network Time Protocol.

The whole proposal, to me, is reminiscent of the periodic proposals for English spelling reform, or for replacing the standard QWERTY keyboard.  In each of these cases, there is in principle some benefit in efficiency and simplicity to be gained, but the effort involved in making the change is significant, and the switch is to some degree an “all or nothing” proposition.  I accept that English spelling is not phonetic (although not necessarily illogical), and I accept that another keyboard layout might allow faster typing.  However, I know how to spell with the current system, and I can touch type on any standard keyboard at a pretty rapid pace, so any potential benefit to me personally is small.

Probably the only way a calendar change like this could occur is with a significant push from governments, public authorities, and other large institutions.   (After all, the last calendar change, to the current Gregorian calendar, begun in 1582, had to be imposed by the Pope.)  Considering all the fuss that was made about the Y2K issue, it would probably not be easy.

That leads me to the final slightly puzzling aspect of this.  As I noted earlier, the Cato Institute has posted the article by Henry and Hanke on its Web site.  But Cato has at least a somewhat libertarian policy outlook, which it describes this way:

The Cato Institute is a public policy research organization — a think tank — dedicated to the principles of individual liberty, limited government, free markets and peace.

A wholesale change of this kind, which I think would have to be imposed “top down” in order to become anything more than an eccentricity and pet subject for cranks, seems an odd cause for Cato to take up.


Too Clever by Half?

August 9, 2011

Earlier this summer, I posted a note here about the smart grid initiative announced by the White House Office of Science and Technology Policy.  In order to increase the proportion of our energy use supplied by renewable sources, such as wind and solar power, we need a power distribution system (the grid) that is more responsive to changes in the availability and relative cost of power, because these renewable sources are subject to natural fluctuations: some are predictable (the sun will set this evening), some (it may get really windy this afternoon) not so much.

The adoption of smart grid technology is not without its potential pitfalls.  In January of this year, the US Government Accountability Office [GAO] issued a report warning of the security risks involved.  I’ve written about some of the security concerns specific to smart electricity meters.  The MIT News site has posted a report of some new research, pointing out another potential problem with a grid that is “too smart for its own good”.

One of the potentially attractive consequences of having a smart grid is that consumers could be provided with information about the varying cost of energy throughout the day, in different seasons.  The idea is that the customer might choose to run certain energy-intensive appliances (like a clothes dryer) at off-peak times, when electricity would presumably be cheaper.  Time-varying rates (typically, cheaper at night) have been tried in some places, and have resulted in some smoothing of electricity demand.  But a really smart grid could, in principle, deliver varying price information in close to real time.

One envisioned application of these “smart meters” is to give customers real-time information about fluctuations in the price of electricity, which might encourage them to defer some energy-intensive tasks until supply is high or demand is low.

However, the MIT researchers found [paper PDF] that there is a risk of making the system too responsive.

Recent work by researchers in MIT’s Laboratory for Information and Decision Systems, however, shows that this policy could backfire. If too many people set appliances to turn on, or devices to recharge, when the price of electricity crosses the same threshold, it could cause a huge spike in demand; in the worst case, that could bring down the power grid.

Although the pricing information can be delivered quickly, the utility cannot necessarily respond to changes in demand quickly.  It takes time to start up or shut down a coal- or gas-fired power plant (these restrictions are called “ramp constraints”).  Moreover, events in other markets that feature nearly real-time information show that instability is not just a theoretical concern.  The “flash crash” in the equity market in May 2010 is one example.
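
A toy simulation makes the instability easy to see.  The setup below (household count, price thresholds) is entirely my own invention, not the researchers' model:

```python
import random

random.seed(1)
# 10,000 households, each with a price threshold below which its
# deferred appliance switches on; thresholds cluster near one value.
thresholds = [random.gauss(10.0, 0.2) for _ in range(10_000)]

def appliances_on(price: float) -> int:
    """How many appliances start when the broadcast price is this low."""
    return sum(price < t for t in thresholds)

# A modest price decline that crosses the cluster of thresholds
# triggers a near-simultaneous surge in demand:
for price in (10.6, 10.2, 9.8, 9.4):
    print(f"price {price:>4}: {appliances_on(price):>5} appliances on")
```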

The authors do find that there are some relatively simple changes to reporting mechanisms that could reduce this risk.  Their paper is highly technical, but a first step might be to present a “smoothed” price value to consumers, so that short-term fluctuations would not lead to instability.  The authors suggest that, down the road, a market with more complete information, including information on customers’ preferences, could lead to even better results.
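
As an illustration of that first step, here is one simple smoothing scheme, an exponential moving average (my choice of mechanism; the paper's is more sophisticated), which would report the price trend rather than every tick:

```python
def smoothed(prices, alpha=0.1):
    """Exponential moving average: each reported price blends the
    latest raw price with the previously reported one."""
    reported = []
    level = prices[0]
    for p in prices:
        level = alpha * p + (1 - alpha) * level
        reported.append(level)
    return reported

raw = [10, 10, 25, 10, 10, 10]   # a one-period price spike
print(smoothed(raw))
# The spike to 25 is reported as at most 11.5 and decays from there,
# so threshold-triggered appliances never see it all at once.
```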

There is still a good deal of work to be done on resolving these issues; I hope it is done before, rather than after, the smart grid is fully implemented.


Copyrights and Wrongs, Revisited

May 23, 2011

I have written here a couple of times before about the origins of copyright law, and about the questionable nature of some of the “evidence” of widespread infringement presented by the content-producing industry.  Apart from being obviously self-serving, the claims of economic loss due to infringement assume that every unauthorized copy would become a sale if only enforcement were better, an assumption not justified by any evidence that I’ve seen.  The industry’s claims also ignore the difficult-to-estimate, but almost certainly non-zero, benefits of fair use.

All of this has been an ongoing controversy here in the US, but it affects other countries, too.  Now, as reported in a post on the “Law & Disorder” blog at Ars Technica, an independent review of intellectual property law in the United Kingdom, commissioned by Prime Minister David Cameron and conducted by Professor Ian Hargreaves, has been published.  The report, Digital Opportunity: A Review of Intellectual Property and Growth [PDF], although it recognizes the economic importance of “intangibles” to societies like the UK, raises many of the same concerns that have been expressed here in the US; in particular, as Prof. Hargreaves writes in the Foreword, the law has not kept up with the changes in society and technology:

Could it be true that laws designed more than three centuries ago with the express purpose of creating economic incentives for innovation by protecting creators’ rights are today obstructing innovation and economic growth?

The short answer is: yes. We have found that the UK’s intellectual property framework, especially with regard to copyright, is falling behind what is needed.

The review also finds that, as has been suggested in the US, much of the data presented to support the case for enhanced copyright protection is somewhat suspect.

Much of the data needed to develop empirical evidence on copyright and designs is privately held.  It enters the public domain chiefly in the form of ‘evidence’ supporting the arguments of lobbyists (‘lobbynomics’) rather than as independently verified research conclusions.

The review makes a strong case for moving to an evidence-driven process for setting policy, with the evidence being developed and vetted by someone who does not have an axe to grind.  It also strongly recommends that fair use exceptions to copyright take account of the benefits of such use, and that restrictions on copying for private use (e.g., for time- or format-shifting) be minimal.

The review also takes a strong position against the increasingly common practice of retroactive copyright extension, pointing out that the incentive for creative work that is supposedly the prime justification for copyright does not even make sense in that context.

Economic evidence is clear that the likely deadweight loss to the economy exceeds any additional incentivising effect which might result from the extension of copyright term beyond its present levels. …  This is doubly clear for retrospective extension to copyright term, given the impossibility of incentivising the creation of already existing works, or work from artists already dead.

The report is worth reading if you have an interest in this area; I intend to do my best to persuade my Congress Critters to read it.


Property Follies

March 11, 2011

There has been a great deal of discussion about the causes of the most recent financial crisis, and the ensuing recession, with many conflicting suggestions about how to prevent a recurrence.  Last week’s issue of The Economist has a special report on one of the more mundane potential culprits: property (or real estate).  It argues that, although property is widely regarded as a relatively safe investment, it is in some ways one of the most dangerous of assets.

There were many reasons for the housing bubble that has now burst, from huge amounts of global liquidity seeking high returns to the rise of private-label securitisation. But it is striking how often property causes financial trouble. “We do not want to fight the last war,” says one European banking regulator, referring to property busts, “but the fact is that we keep fighting the same war over and over.”

There are a number of reasons to think that property investments are riskier than commonly perceived.  The first is the sheer size of the property market.  The article estimates that, even after the recent decline in prices, the total value of property in the rich world is something like $80 trillion (of which about three-quarters is residential), compared to about $20 trillion in all equities.  To make another comparison, the value of property investments is close to 200% of those countries’ combined GDP in 2010.

Property, especially residential property, is also an inconvenient asset in many ways.  If you have a portfolio of stocks or bonds, you can sell a portion of it to raise funds.  It is hard to sell off a bathroom and a couple of closets from your house.  The property market also tends to be illiquid; quoted values are based, typically, on a small number of recent transactions; one odd deal can significantly affect the results.  And just wanting to sell a house does not guarantee that you will find anyone who is interested in buying.

Property is also the one asset where ordinary investors can achieve very high leverage, perhaps putting down only a few percent of the purchase price in equity.  Together with tax subsidies for mortgage interest, this leads, at least in the US market, to artificially high house prices.  Since the notional owners have so little equity, things can turn sour quickly when prices fall.  The article estimates that about 25% of mortgages in the US are currently “under water”: the outstanding balance on the loan is more than the property is worth.   The recent popularity of low-quality “liar loans” (with no income verification) and “innovative” securitization has hardly helped matters.
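
A quick back-of-the-envelope calculation (the numbers are mine, purely illustrative) shows how little room for error that kind of leverage leaves:

```python
price = 300_000   # hypothetical purchase price
down = 0.05       # 5% down payment -- roughly 20x leverage on equity
loan = price * (1 - down)

for drop in (0.05, 0.10, 0.20):
    value = price * (1 - drop)
    print(f"{drop:.0%} price fall -> owner's equity ${value - loan:,.0f}")
# A fall merely equal to the down payment wipes out the owner's stake;
# anything beyond that puts the mortgage under water.
```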

Commercial property is slightly less crazy, but even there, otherwise sensible investors can do silly things.  Quite a few years ago, when I was working as a pension fund consultant, one of our clients made a sizable investment in a commercial property fund.  The fund manager had shown them graphs of the steadily increasing value of the fund over the previous several years.  I pointed out that, if their equity managers were allowed to value their portfolios based on what they thought the stocks should be worth, the volatility of their returns would very likely be lower.  The client went ahead with the investment anyway.  Then there was an economic downturn, and they wanted to shift some money from property to another asset class.  Unfortunately, they had not read the clause in their contract, standard for real estate funds, that said that the manager could not be forced to sell property in order to meet a redemption request.  I don’t know if they ever got their money out.

Buying a house is also not a straightforward financial transaction:

…  if housing were simply a financial investment, buyers might be clearer-eyed in their decision-making. People generally do not fall in love with government bonds, and Treasuries have no other use to compensate for a fall in value. Housing is different. Greg Davies, a behavioural-finance expert at Barclays Wealth, says the experience of buying a home is a largely emotional one, similar to that of buying art. That makes it likelier that people will pay over the odds.

Perhaps some of this will finally sink in.


Securing Infrastructure

February 19, 2011

I’ve made occasional posts here about the challenges of securing various pieces of the US infrastructure from attacks, particularly cyber attacks, most recently in connection with the GAO’s warnings about the risks of the “smart grid” for electricity distribution.  In addition to the power distribution networks, there are other critically important systems that need to be protected: for example, the interbank electronic payments system.  (The two largest US banks process transactions worth $7-8 trillion each day.)  An article at the Network World site reports on a panel at the recently concluded RSA Conference 2011 that discussed the issue.

This is a tricky problem for a few reasons:

  • The US systems are (mostly) owned by private entities, although some are operated by the government.  There is no existing mechanism for ensuring their security, or even for finding all the pieces.
  • The need for cyber-security is a new one for many of these infrastructure operators.
  • Networked systems are, generally, only as strong as their weakest component.

There are some economic reasons for concern, too.  A small participant, acting rationally in its own interest, will not spend more on security than it can possibly lose; yet a security vulnerability there may threaten the whole network.  This is another case of externalities, which I’ve mentioned many times here.  They occur, in this case, when some part of the cost of a security failure is borne by someone other than the entity that can prevent the failure.  In this situation, rational market transactions will provide less than the optimal amount of security.
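
A toy calculation (all figures hypothetical) makes the incentive gap concrete:

```python
# All numbers hypothetical, for illustration only.
participant_max_loss = 1_000_000   # the most the small firm itself can lose
network_loss = 500_000_000         # damage its breach inflicts on the network
breach_probability = 0.01          # annual chance of breach if it doesn't act
upgrade_cost = 2_000_000           # cost of closing the vulnerability

private_benefit = breach_probability * participant_max_loss   # $10,000
social_benefit = breach_probability * network_loss            # $5,000,000

# Rationally, the firm won't spend $2M to avoid an expected $10K loss,
# even though the network as a whole would gladly pay that price.
print(private_benefit < upgrade_cost < social_benefit)  # True
```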

The conference participants did come up with some sensible suggestions.  First, although the government may need to be involved to set standards, those standards should specify objectives or results, not technologies or solutions.  Mike McConnell, former Director of National Intelligence, and of the NSA, and now an Executive VP at Booz Allen Hamilton, spoke about this:

“To protect those transactions there should be a requirement for a higher level of protection to mitigate that risk,” he said, but that government should set the requirement and the private sector should compete to figure out how to meet it.

Another good suggestion was to require corporate officers to certify that any security requirements have been met, just as they are required to certify their financial statements.

[Bruce] Schneier concurred, noting that holding individuals at a company accountable for certain protections has worked with environmental regulations and Sarbanes-Oxley, the post-Enron law that requires directors and executives to certify their financial results.

I can tell you from personal observation that this is one approach that corporate executives do take seriously — that is why they can be expected to fight it, tooth and nail.

This is really a situation where I think the idea of some kind of self-regulation is a non-starter.  The various industries involved will not like it, but it seems to me that a joint government-industry effort is going to be needed if any effective solution is to be obtained.


Financial Crisis Report Issued

February 1, 2011

The US Financial Crisis Inquiry Commission [FCIC] was established in 2009 to diagnose our recent economic and financial crisis — dubbed the “Great Recession”.

The Financial Crisis Inquiry Commission was created to “examine the causes, domestic and global, of the current financial and economic crisis in the United States.” The Commission was established as part of the Fraud Enforcement and Recovery Act (Public Law 111-21) passed by Congress and signed by the President in May 2009.

Last week, the FCIC published the report [PDF, 662 pp.] of its findings.  The report includes not only the findings of the FCIC as a whole, based on 15 days of public hearings, interviews with 700+ individuals, and the review of many documents, but also two dissenting reports from some members of the FCIC.

I have just started reading the report, but the New York Times has a summary article.  That article, and the first pages of the report itself, make it clear that there is plenty of blame to go around.

The report examined the risky mortgage loans that helped build the housing bubble; the packaging of those loans into exotic securities that were sold to investors; and the heedless placement of giant bets on those investments.

Enabling those developments, the panel found, were a bias toward deregulation by government officials, and mismanagement by financiers who failed to perceive the risks.

This makes sense.  Although the financial system is an extremely complex web of interconnections, it is really not very likely that one participant, or even a few, could cause the near-collapse of the whole thing.  I’ve written here before about some of the financial factors that may have contributed to the crisis (in “Formulas for Disaster”, parts 1, 2, 3, and 4, and the sequel), and will post another note when I’ve had time to go over the report.
