Prof. Felten Elected to National Academy of Engineering

February 9, 2013

I’ve mentioned Princeton University’s Center for Information Technology Policy (CITP) here in a number of posts, on topics ranging from security “Worst Practices” to high-frequency stock trading.  I’ve also mentioned the CITP’s director, Professor Edward Felten, who, in addition to his work at the university, has served a term as Chief Technologist of the US Federal Trade Commission.  The CITP has consistently produced some of the most interesting research on the intersection of public policy and technology, and it has always seemed to me that Prof. Felten’s leadership has been vital to that work.

So I was delighted to see an announcement that Prof. Felten has been elected to the National Academy of Engineering, “for contributions to security of computer systems, and for impact on public policy.”  As the announcement states,

Election to the National Academy of Engineering is among the highest professional distinctions accorded to an engineer. Academy membership honors those who have made outstanding contributions to “engineering research, practice, or education, including, where appropriate, significant contributions to the engineering literature,” and to the “pioneering of new and developing fields of technology, making major advancements in traditional fields of engineering, or developing/implementing innovative approaches to engineering education.”

I have always found Prof. Felten’s work and writing interesting and insightful, and I congratulate him on a very well-deserved honor.


Prof. Felten’s Take on Washington

September 17, 2012

Back in November 2010, I wrote about the appointment of Prof. Ed Felten, of Princeton University, as the Federal Trade Commission’s Chief Technologist.  This was a term appointment, and Prof. Felten is now back at Princeton as a professor of computer science and public affairs.  He is also resuming his role as Director of the university’s Center for Information Technology Policy, and his frequent contributions to the Freedom to Tinker blog.

Ars Technica has an interview with Prof. Felten, focused on his experience in Washington.

So what’s it like to be a geek in the land of lawyers? Ars Technica interviewed Felten by phone on Tuesday to find out.

The interview is short, but well worth reading for anyone interested in technology policy.  As the article points out, many people in policy-making positions in Washington have little to no technical background; a good number are lawyers.  And many of them, regardless of their background, have some odd ideas about technology in general.

Computer scientists are a rare breed in lawyer-dominated Washington, DC, and Felten said it was sometimes a challenge helping policymakers understand the nature and limits of technology.

For example, he said a lot of people in Washington have a misconception that any problem “can obviously be solved if you try hard enough.”

In the absence of technical knowledge and understanding, many policymakers rely on advice from people they trust, on the basis of personal relationships.  This, of course, is at the root of the enormous lobbying business, but it is not all bad.  If the trusted people are actually competent, and not just pre-scripted automatons, it provides a means for technically qualified people to communicate their views.

… Felten said there are ways ordinary geeks can influence the policy process. The most important thing they can do, he said, is to develop relationships with people who do have direct connections to the policy process.

Although technology and science evolve quite rapidly, human nature has really not changed all that much.  Technical people ignore or discount personal relationship building at their peril.


The Digital Big Bang

March 12, 2012

In a couple of recent posts about some newly declassified correspondence between the mathematician John Nash and the National Security Agency, I mentioned the confluence of very bright people in and around Princeton, NJ, shortly before, during, and after World War II.  Besides John Nash, the list includes Albert Einstein, John von Neumann, Kurt Gödel, and Alan Turing.  Another was the theoretical physicist Freeman Dyson, now Professor Emeritus at the Institute for Advanced Study (IAS).

I have just come across another discussion of that period, in the form of an interview at Wired with the science historian George Dyson, Freeman Dyson’s son, concerning his new book on the origins of modern computing, Turing’s Cathedral.  George Dyson grew up in Princeton while his father was at the IAS, and had some direct personal experience of the early computer development there.

The institute was a pretty boring place, full of theoreticians writing papers. But in a building far away from everyone else, some engineers were building a computer, one of the first to have a fully electronic random-access memory. For a kid in the 1950s, it was the most exciting thing around. I mean, they called it the MANIAC!

After the Manhattan Project’s work in developing the atomic bomb, von Neumann persuaded the US government to fund a digital computer at the Institute, to be used in the development of the hydrogen bomb.  Although there were other early machines, including ENIAC and EDSAC, the IAS computer was significant because its fully modern stored-program design, still known as the von Neumann architecture, became the template for many of the machines that followed.

George Dyson’s initial fascination with the project apparently gave way to a more apprehensive feeling a little later, and for a time he tried to distance himself from computers.

Computers were going to take over the world. So I left high school in the 1960s to live on the islands of British Columbia. I worked on boats and built a house 95 feet up in a Douglas fir tree. I wasn’t anti-technology; I loved chain saws and tools and diesel engines. But I wanted to keep my distance from computers.

He eventually returned to studying the development of computing, because he was struck by the similarities between the biological and digital worlds.

When I looked at the digital universe, I saw the tracks of organisms coming to life. I eventually came out of the Canadian rain forest to study this stuff because it was as wild as anything in the woods.

In the balance of the interview, Dyson talks about some of the people most directly involved in the project, including Turing, von Neumann, and Julian Bigelow, the engineer who directed the actual construction — a difficult job just after the war, when many materials and facilities were hard to come by.   The biologist Nils Barricelli used the machine to simulate the evolution of digital “life forms” when it was not busy simulating thermonuclear explosions.  Dyson also makes an interesting observation about a side effect of the early hardware’s unreliability.

Vacuum tubes in the early machines had an extremely high failure rate, and von Neumann and Turing both spent a lot of time thinking about how to tolerate or even take advantage of that. If you had unreliable tubes, you couldn’t be sure you had the correct answer, so you had to run a problem at least twice to make sure you got the same result. Turing and von Neumann both believed the future belonged to nondeterministic computation and statistical, probabilistic codes.

As Dyson points out, the idea of probabilistic computations has produced some intriguing results recently, in areas like language translation, as well as in IBM’s Watson project.
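
As an aside, the “run it at least twice” discipline Dyson describes is easy to sketch in a few lines of modern code.  The toy example below is purely illustrative (nothing in it comes from the MANIAC): an artificial error rate stands in for flaky vacuum tubes, and a simple agreement check stands in for the operators’ repeated runs.

    import random

    def noisy_add(a, b, error_rate=0.001):
        # Hypothetical "unreliable hardware": occasionally returns a wrong sum,
        # much as a failing vacuum tube might corrupt a result.
        result = a + b
        if random.random() < error_rate:
            result += random.choice([-1, 1])
        return result

    def checked_add(a, b, max_tries=5):
        # Run the same computation repeatedly; accept an answer only when
        # two consecutive runs agree, as Dyson says the early users had to do.
        previous = noisy_add(a, b)
        for _ in range(max_tries):
            current = noisy_add(a, b)
            if current == previous:
                return current
            previous = current
        raise RuntimeError("runs never agreed; the 'hardware' is too unreliable")

    print(checked_add(2, 3))   # almost always prints 5

The point of the sketch is only that redundancy buys confidence, not certainty: two matching wrong answers would still slip through, which is part of why von Neumann and Turing were drawn to explicitly probabilistic approaches.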

Although I have not yet got my hands on a copy of Dyson’s book, it sounds most interesting.  There is a review at The Economist, and another at The Wall Street Journal.
