SAE Endorses Electric Vehicle Charging Standard

October 19, 2012

Back in September of last year, I wrote about an announcement that a group of auto manufacturers (Audi, BMW, Daimler, Ford, General Motors, Porsche and Volkswagen) had agreed on a standard set of connections and protocols for charging the batteries in electric vehicles [EVs].  Now, a post on the “Autopia” blog at Wired reports that the Society of Automotive Engineers [SAE] has officially adopted a version of this standard (called J1772 Revision B) for the United States and Europe.  The standard specifies the connectors and electrical interfaces to be used in public charging stations for electric and plug-in hybrid vehicles.

Using electricity instead of fossil fuels as a vehicle energy source has some significant attractions; but one problem that needs to be solved, in order for large-scale adoption of EVs to become reality, is the establishment of a charging infrastructure.  (We don’t think much about this with respect to our traditional, gasoline-powered cars, since the refueling infrastructure — gas stations — has been in place for many years.)  Having standards for the charging system is of obvious importance: imagine a world where a different kind of gas pump were required, depending on whether you drove a VW, or a Toyota, or a Ford.  It is, in a way, analogous to the question of whether we should drive on the right (as in the US), or on the left (as in the UK).  It isn’t obvious, at least to me, that either choice has any intrinsic or essential merit relative to the other; however, it is clearly quite useful for all of us to agree on a single choice.

Agreeing on a standard is also complicated by the number of factors to be taken into account.  It’s probably a fair assumption that most EV owners, most of the time, will use their standard domestic electricity supply (itself another variable, differing from country to country) to recharge their car’s batteries.  That can be a slow process, though, measured in hours when using the standard US domestic supply at (nominally) 110-115 volts AC.  The standard also has to provide for an implementation that is safe to use in an uncontrolled environment (that is, outdoors) in less-than-ideal conditions.

The new standard also makes some technical progress, while remaining backward-compatible with earlier versions of the J1772 standard.  In particular, it allows for high-voltage (~ 500 volts) direct current [DC] charging, which could reduce the time required for a full charge to 30 minutes or less.
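
As a back-of-the-envelope illustration of what these numbers mean in practice, here is a minimal sketch of the charge-time arithmetic.  The pack size and charger ratings are my own assumptions for the sake of the calculation, not figures taken from the J1772 standard.

```python
# Rough charge-time arithmetic for an illustrative 24 kWh battery pack.
# All ratings below are assumptions for illustration, not J1772 values.

PACK_KWH = 24.0  # assumed capacity, typical of a 2012-era EV

chargers = {
    "Level 1 AC (120 V x 12 A)": 120 * 12 / 1000,  # ~1.4 kW
    "Level 2 AC (240 V x 30 A)": 240 * 30 / 1000,  # ~7.2 kW
    "DC fast (~500 V)": 50.0,                      # assumed 50 kW charger
}

for name, kw in chargers.items():
    hours = PACK_KWH / kw  # ignores charging losses and taper near full
    print(f"{name}: {kw:.1f} kW -> about {hours:.1f} hours for a full charge")
```

With those assumptions, the ordinary household circuit comes out at roughly 17 hours for a full charge, while the 50 kW DC case comes out at roughly half an hour, consistent with the charge times mentioned above.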

This agreement on a standard is a good thing, but the picture is somewhat clouded, because some Japanese carmakers (particularly Mitsubishi and Nissan) had already adopted a Japanese standard called CHAdeMO to accommodate fast charging.

“We are disappointed that SAE has approved a fast-charging standard that will not accommodate more than 70 percent of the electric vehicles on U.S. roadways today,” Nissan America said in a statement. “At the time of launch, the Nissan Leaf was designed to comply with the CHAdeMO standard of quick charging, which was the only existing quick-charge standard certified at the time.”

Now, if this difference were only a matter of alternative connectors and voltage levels and that sort of thing, then, as long as both standards are published, we should shortly expect to see adapters to go from CHAdeMO to J1772 Revision B, and vice versa.  I hope, though, that the auto makers will recognize that having a common, agreed standard for recharging EVs is something they all should want.


Growth in High-Frequency Trading

August 9, 2012

In examining the background and underlying causes of financial market events such as the disruption in equity trading on August 1, or the “flash crash” of May 6, 2010, one question that frequently arises is whether we are looking at some fundamental change in the way the markets work.  Trading volumes have certainly increased over the past few years, but firms engaged in high-frequency trading tend to be rather reticent about the exact nature and extent of their operations.

In one of my posts about the 2010 flash crash, I mentioned an analysis carried out by Nanex, a firm that specializes in systems and software to distribute and process market data, giving them a particularly good view of the overall market.

Our business is supplying a real-time data feed comprising trade and quote data for all US equity, option, and futures exchanges.

I have just come across a rather striking demonstration, put together by Nanex, of the growth of high-frequency trading (HFT) activity in US markets over the period January 2007 through January 2012.  The top graph on the page is an animated GIF graphic, which shows daily HFT activity for the various US exchanges (denoted by different colors).  At the beginning of the period, in 2007, the lines are relatively flat.  As time progresses, one begins to see more activity, first at the beginning and end of the trading day, then spreading throughout the day.  Assuming Nanex’s data are correct (and I have no reason to doubt that), there has been a very substantial increase in HFT activity over the last few years.

The graphs on the lower part of the page show a more interesting aspect of this growth.  The graph on the left shows the growth, over time, of quote activity (that is, the posting of bid and offer prices for securities).  The graph on the right shows actual trades over the same period.  It is clear that most of the activity growth is in quotes, rather than trades.  There was some suspicion, in the 2010 flash crash, that some market participants might have engaged in “quote stuffing”, the generation of spurious quotes that would effectively clog up competitors’ systems.  There is certainly nothing in this evidence to disprove that suspicion.

Nanex also points out that spurious quote messages, like spam, are close to free for the sender, but not for other market participants.  As noted above, the large increase in quote activity does not correspond to an increase in actual trades, meaning that the cost per trade of processing quote messages has gone up substantially.

We think that a 10-fold increase in costs without any benefits would be considered “detrimental” by most business people. We think the regulators would agree with us as well.
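
As a minimal sketch of the kind of metric at issue here, one can compute a quote-to-trade ratio per symbol from raw message counts.  The symbols and counts below are invented for illustration; Nanex’s actual methodology is, of course, more involved.

```python
# Minimal sketch: quote-to-trade ratios from per-symbol message counts.
# The symbols and counts are invented; a real feed handler would tally
# them from the consolidated quote and trade streams.

daily_counts = {
    # symbol: (quote messages, executed trades)
    "AAA": (1_200_000, 15_000),
    "BBB": (4_800_000, 6_000),  # heavy quoting, very few trades
    "CCC": (300_000, 9_000),
}

for symbol, (quotes, trades) in daily_counts.items():
    print(f"{symbol}: {quotes / trades:,.0f} quotes per trade")
```

A rising quote-to-trade ratio across the whole market is exactly the pattern the Nanex graphs show: every participant pays to receive and process the quote traffic, whether or not it ever results in a trade.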

This analysis, together with other evidence, suggests that HFT activity is not as benign as some proponents would have us believe.   As Prof. Ed Felten at Princeton observed back in 2010, it can be difficult to disentangle just what happened after the fact, but I’ll try to fit some of the pieces together in future posts.


Harvard Library’s Faculty Advisers Push for Open Access

April 24, 2012

The movement toward providing open access to scholarly research seems to be continuing.  I’ve noted before the decisions by a number of different organizations, including Princeton University, the Royal Society, the JStor research archive, and, most recently, the World Bank, to provide open access to some or all of their research publications.  According to an article at Ars Technica, a faculty advisory council to the Harvard University Library has just issued a memorandum urging all faculty members to move to open access publication as much as possible, because of what it terms “untenable” and “unsustainable” trends in the pricing of traditional academic journals.

… the Faculty Advisory Council is fed up with rising costs, forced bundling of low- and high-profile journals, and subscriptions that run into the tens of thousands of dollars. So, it’s suggesting that the rest of the Harvard faculty focus on open access publishing.

The library’s current budget for journal subscriptions runs to about $3.75 million.  Admittedly, this is not a large sum compared to the size of Harvard’s endowment, roughly $32 billion; but it is clear from the language of the memorandum that the members of the council have had enough of continually increasing prices that, in their view, have little economic justification.  Some of their complaints, such as the “bundling” of journal subscriptions, will sound familiar to readers who have followed the boycott of Reed Elsevier journals, organized via the Web site thecostofknowledge.com.  (Incidentally, when I first wrote about the boycott back in January, there were 1,335 researchers who had signed up to participate; the current total is 10,200.)  The council also feels that the increasing consumption of library resources by these expensive journals will compromise other parts of the library’s mission.

The Faculty Advisory Council to the Library, representing university faculty in all schools and in consultation with the Harvard Library leadership,  reached this conclusion: major periodical subscriptions, especially to electronic journals published by historically key providers, cannot be sustained: continuing these subscriptions on their current footing is financially untenable.

They urge faculty members to submit research to open access journals, or at least those with reasonable access policies; to try to raise the prestige of open access publication; and to consider resigning from the editorial boards of journals with unreasonable subscription policies.

The recommendations are not binding on the faculty, but I hope that they will realize, along with academics elsewhere, that they do have the power to effect considerable change.  After all, they supply the “raw material”, in the form of their papers, that the journals need to exist, and they also supply most of the editorial work, usually for no compensation.  For too long, some of these journal publishers have not only bitten the hand that feeds them, but charged the rest of the body for the privilege.


World Bank Research to be Open Access

April 14, 2012

I’ve written here before about the encouraging trend to make more scholarly research available online at no charge, including efforts by JStor, The Royal Society, and the National Academies Press.  Now, according to an article at Ars Technica, the World Bank has decided to make its research and knowledge products, as well as the data underlying them, available free of charge under a new Open Access Policy.

…  the Bank says it will apply to “manuscripts and all accompanying data sets… that result from research, analysis, economic and sector work, or development practice… that have undergone peer review or have been otherwise vetted and approved for release to the public.”

Most of the material will be made available under a liberal Creative Commons license [CC-BY].  The Bank has set up a new Web site, the Open Knowledge Repository, to make its work available for browsing and download.  (At the time I am writing this, there appears to be a problem with the site’s SSL certificate for secure [https:] access; you may get a security warning from your browser.)  There are currently more than 2,100 papers and books available in the Repository, and more will be added over the coming months.  Data sets will be available, too, and will probably be of considerable value to researchers, given the World Bank’s special insight into the process of economic development.

“Making our knowledge widely and readily available will empower others to come up with solutions to the world’s toughest problems,” World Bank Group President Robert B. Zoellick said in the Bank’s announcement.

It is great to see another significant institution move toward making information more widely and easily available.


Toxic Environments

March 18, 2012

I have finally had a chance to read Greg Smith’s letter, “Why I Am Leaving Goldman Sachs”, published as an Op-Ed this past week in the New York Times.  In it, Mr. Smith, an executive director of the US equity derivatives business at one of the world’s leading investment banks, says that he is resigning because, in his view, the culture of the firm has changed significantly for the worse since he joined it twelve years ago.

… I believe I have worked here long enough to understand the trajectory of its culture, its people and its identity. And I can honestly say that the environment now is as toxic and destructive as I have ever seen it.

Mr. Smith says that the culture of the firm has changed, from one which put the customer’s interests first, to one where making the maximum profit for the firm, at the customer’s expense if necessary, has become paramount.

I attend derivatives sales meetings where not one single minute is spent asking questions about how we can help clients. It’s purely about how we can make the most possible money off of them.

Much of the Wall Street reaction to Mr. Smith’s letter has been fairly predictable.  He has been pictured as a naive hypocrite, who never understood what the business was about, but was happy enough to deposit his bonus checks.  My own reaction, having worked for about thirty years in the financial services industry, is that no one there should be at all surprised by what Greg Smith said.  I have no specific knowledge of Goldman Sachs, but the scene he describes sounds all too familiar.

As William Cohan points out in an article in the Washington Post, the idea of Goldman Sachs, or any other investment bank, duping its clients is not exactly new.  He cites the example of Goldman’s role in and around the bankruptcy of Penn Central in 1970.  Goldman was the underwriter for Penn Central’s commercial paper.  Because of its relationship with the firm, Goldman was privy to information about Penn Central’s deteriorating liquidity position, information it did not share with its customers even as it continued to flog the commercial paper.  The SEC investigated following Penn Central’s bankruptcy.

According to the SEC, Goldman “gained possession of material adverse information, some from public sources and some from nonpublic sources indicating a continuing deterioration of the financial condition of the [railroad]. Goldman, Sachs did not communicate this information to its commercial paper customers, nor did it undertake a thorough investigation of the company. If Goldman, Sachs had heeded these warnings and undertaken a reevaluation of the company, it would have learned that its condition was substantially worse than had been publicly reported.”

The SEC sued Goldman, and the suit was settled within a short time.  Goldman was also sued by some of its customers.  Many of these suits were also settled, but some, for whatever reason, were allowed to proceed to a trial, which Goldman lost.

Incredibly, Goldman thought it could win the lawsuits and allowed them to go to trial, where much of the firm’s dirty laundry was aired. In the end, it lost the suit brought by the three companies and paid the plaintiffs 100 cents on the dollar, plus interest.

Cohan argues that, if Greg Smith had been paying attention, he could have figured out that Goldman’s actions did not always match its lofty principles.  At one level, it is hard to argue with this.  Certainly since I started work in the industry in the mid-1970s, there has never been any shortage of skunks and weasels on Wall Street.

On another level, though, I think Smith is right: the culture of Wall Street has gotten worse, and there are at least some identifiable reasons for this.  Once upon a time, firms like Goldman Sachs were partnerships, meaning that the money they were risking belonged to the partners that owned and managed the firm.  Now, most of these firms are public companies, whose (very highly paid) managers are risking the stockholders’ money; they have also been permitted to become bank holding companies, with access to lending from the Federal Reserve, meaning they can risk taxpayers’ money, too.   The rise of proprietary trading in ever more exotic and opaque financial instruments has made effective oversight more difficult.  The bonus system rewards those who produce short-term profits, even when those profits are based on theoretical valuations of long-term transactions.  (I’ve written about this before.  These are sometimes called “IBG” trades on the floor: “I’ll Be Gone” by the time the deal craters.)  It is hard, offhand, to think of a more complete collection of perverse incentives, to say nothing of agency problems and moral hazards.

Really, the only thing surprising about this is that anyone is surprised.


That Elusive Fuel Economy

January 8, 2012

Back in the 1950s and 1960s, when I was growing up, people didn’t worry very much about their cars’ gas mileage, or the price of gasoline.  I can remember, in the years right after I got my driver’s license, often buying gas for less — sometimes considerably less — than $1 per gallon.   (Even then, this was considerably cheaper than gasoline in other locations, such as Europe.)  The formation of OPEC, and the Arab oil embargo of 1973-74, which almost doubled the effective price of crude oil, put an end to this carefree attitude.

Since then, we have seen various steps taken by the US government, attempting to encourage the production of vehicles with better gas mileage.  One of these is the CAFE [Corporate Average Fuel Economy] standard, first enacted by Congress in 1975, which sets minimum gas mileage requirements for vehicle manufacturers, based on a sales-weighted average of fuel economy figures for a manufacturer’s current model year offerings.  This has been raised from time to time, most recently during the current Obama administration.  The car companies have certainly made technical changes, such as the use of fuel injectors in place of carburetors, that do improve efficiency;  yet, the perception is that actual gas mileage has not gotten very much better.
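
It is worth noting that the “sales-weighted average” here is, in effect, a harmonic mean: what adds up across a fleet is fuel consumed per mile, not miles per gallon.  Here is a minimal sketch of the calculation, with invented model names, sales, and mpg figures:

```python
# Sales-weighted harmonic-mean fleet fuel economy, in the spirit of CAFE.
# Model names, sales, and mpg figures are invented for illustration.

fleet = [
    # (model, units sold, mpg)
    ("compact", 400_000, 35.0),
    ("sedan", 300_000, 28.0),
    ("light truck", 300_000, 18.0),
]

total_sales = sum(units for _, units, _ in fleet)
total_gallons_per_mile = sum(units / mpg for _, units, mpg in fleet)
cafe_mpg = total_sales / total_gallons_per_mile
print(f"Fleet (harmonic) average: {cafe_mpg:.1f} mpg")  # ~25.8 mpg

# Compare with the simple sales-weighted arithmetic average, which is higher:
arith_mpg = sum(units * mpg for _, units, mpg in fleet) / total_sales
print(f"Arithmetic average:       {arith_mpg:.1f} mpg")  # ~27.8 mpg
```

The harmonic mean is the appropriate average because it weights gas-guzzlers by the extra fuel they actually burn, which is why it comes out lower than the naive arithmetic average.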

The MIT News Service has a report on some new research, by Professor Christopher Knittel, an energy economist in the Sloan School of Management, that sheds some light on this puzzle.  (Prof. Knittel’s paper, “Automobiles on Steroids”, has just been published by the American Economic Review.  A copy of the paper, in PDF form, is available at Prof. Knittel’s web page.)  He finds that, over the period 1980 to 2006, average fuel economy of cars sold in the US increased by a bit more than 15%.   However, fuel efficiency was not the only thing that was changing.  As has been noted many times, and is clear to anyone who has been paying attention, car buyers’ vehicle choices have also changed: we now see, and the manufacturers sell, many more minivans, SUVs, and other large vehicles.

In 1980, light trucks represented about 20 percent of passenger vehicles sold in the United States. By 2004, light trucks — including SUVs — accounted for 51 percent of passenger-vehicle sales.

Prof. Knittel estimates that, over the same 1980-2006 period, the average curb weight of new vehicles has increased by 26%, and average horsepower has increased by 107%.  He calculates that, if vehicle characteristics had remained constant over the period, fuel economy would have increased by 60%.  In other words, car buyers have taken part of the efficiency gain in the form of larger, more powerful vehicles.

Thus if Americans today were driving cars of the same size and power that were typical in 1980, the country’s fleet of autos would have jumped from an average of about 23 miles per gallon (mpg) to roughly 37 mpg, well above the current average of around 27 mpg. Instead, Knittel says, “Most of that technological progress has gone into [compensating for] weight and horsepower.”
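
Those figures are mutually consistent; here is a quick arithmetic check using the numbers quoted above (my own sketch, not a calculation from the paper):

```python
# Quick consistency check on the figures quoted above.
base_mpg = 23.0  # approximate 1980 fleet average, from the quote

actual_2006 = base_mpg * 1.15     # "a bit more than 15%" realized gain
constant_fleet = base_mpg * 1.60  # ~60% gain, holding 1980 size and power

print(f"Actual 2006 average:          ~{actual_2006:.1f} mpg")    # ~26.5
print(f"Constant-1980-fleet estimate: ~{constant_fleet:.1f} mpg") # ~36.8
```

Scaling 23 mpg by a bit more than 15% gives roughly the 27 mpg actual average, and scaling it by 60% gives roughly the 37 mpg figure, just as the quote says.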

Prof. Knittel says that, if our policy objective is to reduce gasoline consumption, in order to reduce dependence on foreign oil supplies, and to reduce the production of greenhouse gases, using a gasoline tax is a better approach than trying to manipulate consumers’ preferences.

For his part, Knittel thinks it is understandable that consumers would opt for large, powerful vehicles, and that the most logical way to reduce emissions is through an increased gas tax that leads consumers to value fuel efficiency more highly.

“When it comes to climate change, leaving the market alone isn’t going to lead to the efficient outcome,” Knittel says. “The right starting point is a gas tax.”

I think this would be consistent with the recommendations of most economists.  It is often argued that regulation is a bad thing, because it is too costly.  This is, in a certain way, nonsensical: the good reason to impose regulation in the first place is that externalities, whether costs or benefits, prevent the market from reaching an efficient solution on its own.  Pollution is (literally) a textbook example; regulation exists to ensure that the person making the decision to pollute also bears the cost.  A fairer criticism of regulation is that, in practice, it often prescribes how an objective should be achieved, rather than focusing solely on the objective itself.  Increasing the gasoline tax would provide consumers with a direct economic incentive to buy more efficient vehicles.  (I have written about this before, in the context of proposals for a mileage tax.)
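
To make that incentive concrete, here is an illustrative calculation.  The fuel price, tax level, and annual mileage are assumptions chosen for the arithmetic, not a policy proposal.

```python
# Illustrative effect of a gas tax on annual fuel costs at various mpg.
# The price, tax, and mileage figures are assumptions for illustration.

MILES_PER_YEAR = 12_000
BASE_PRICE = 3.50  # assumed pre-tax price, $/gallon
GAS_TAX = 1.00     # assumed additional tax, $/gallon

for mpg in (18, 27, 37):
    gallons = MILES_PER_YEAR / mpg
    before = gallons * BASE_PRICE
    after = gallons * (BASE_PRICE + GAS_TAX)
    print(f"{mpg} mpg: ${before:,.0f}/yr before tax, "
          f"${after:,.0f}/yr after (+${after - before:,.0f})")
```

Under these assumptions, the tax widens the annual cost gap between the 18 mpg and 37 mpg vehicles by several hundred dollars a year, which is precisely the price signal that makes buyers value fuel efficiency more highly.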

Using a tax, rather than further regulation, should also appeal to proponents of personal freedom.  If you wish to, and can afford it, you can drive a 1990 Lamborghini Countach, which gets less than 9 mpg, but can pass anything on the road (except a filling station).  You’ll  just have to pay a bit more for your fun.

