New Languages

July 31, 2010

A joke from a former colleague from Japan:

Q: What do you call a person who speaks two languages?
A: Bilingual

Q: What do you call a person who speaks three languages?
A: Trilingual

Q: What do you call a person who speaks one language?
A: American

I’ve written a bit here about programming languages, and some of the history behind them (for example, FORTRAN and COBOL), so I was interested to see an article at Technology Review about the first Emerging Languages Camp at the most recent O’Reilly Open Source Convention.  The participants were apparently an eclectic group, ranging from hobbyists to representatives of Internet powers like Google.

In dense 20-minute presentations, designers shared details of their embryonic languages. What all the designers had in common was a desire to shed decades-old programming conventions that seem increasingly ill-suited to modern computing – a desire shared by the tech industry at large.

Historically, new programming languages have sometimes been introduced with the idea of being “The One”: the grand unified language that can be used for everything.  This was not necessarily their prime focus, but that idea was often lurking in the background.  Early on, PL/I, introduced by IBM, was supposed to combine the best features of FORTRAN and COBOL, with a dash of systems programming support built in.  Likewise, Ada was to be the all-singing, all-dancing solution to the programming problems encountered by the US Department of Defense.  To a certain extent, C++ was designed to remedy perceived deficiencies in the C language.

This new crop of proposed languages seems to be different, in that most are focused on a particular programming area.  One example (which I have mentioned before) is the Go language introduced by Google, and designed in part by Rob Pike, which is intended as a systems programming language, particularly for large-scale, distributed environments.  One of its claimed advantages is a greatly streamlined compilation process, which takes seconds where C++ takes minutes.  Another is AmbientTalk, developed by Tom Van Cutsem of the Vrije Universiteit Brussel in Belgium, which is targeted at the mobile device environment.  It differs from conventional programming frameworks in its assumptions (or lack thereof) about networking:

  • It does not assume any centralized network infrastructure
  • It assumes that connections are unreliable and volatile

The idea is to make programming for these mobile devices more natural.
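
To make that trade-off concrete, here is a rough sketch, in Go, of the kind of reconnection boilerplate that a conventional language forces onto mobile code.  (This is my own illustration, not code from the article; the peer address and retry policy are invented placeholders.)  AmbientTalk’s pitch is that the language itself should absorb this pattern, treating disconnection as routine rather than as a fatal error:

    // A hand-rolled resilience pattern: on a mobile network, a dropped
    // connection is expected rather than exceptional, so every send retries.
    package main

    import (
        "fmt"
        "net"
        "time"
    )

    // send delivers one message to a peer, retrying on failure until a
    // deadline passes, since the peer may drift in and out of range.
    func send(addr, msg string) error {
        deadline := time.Now().Add(30 * time.Second)
        for time.Now().Before(deadline) {
            conn, err := net.DialTimeout("tcp", addr, 2*time.Second)
            if err != nil {
                time.Sleep(time.Second) // peer may reappear; try again
                continue
            }
            _, err = fmt.Fprintln(conn, msg)
            conn.Close()
            if err == nil {
                return nil
            }
        }
        return fmt.Errorf("peer %s unreachable", addr)
    }

    func main() {
        // "192.168.1.20:9000" is a hypothetical peer address.
        if err := send("192.168.1.20:9000", "hello"); err != nil {
            fmt.Println(err)
        }
    }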

Overall, the trend in language design seems to be more toward creating specialized tools for specialized needs — just as we use HTML for Web pages, and SQL for relational database queries.

Alex Payne, a former engineer at Twitter (and now chief product and technology officer for BankSimple, a personal finance startup), who organized the Emerging Languages Camp, says that “polyglot programming” is much more likely to become the norm, with programmers becoming fluent in many different languages that are optimized for different problems.

This is, on balance, a positive trend.  For programming languages, as with clothing, “one size fits all” really means that it fits no one particularly well.


Fix for .LNK Flaw Promised for Monday

July 30, 2010

A couple of weeks ago, I posted a note about a newly reported Windows vulnerability, related to Windows “shortcut” files, which have a .LNK extension.  (The vulnerability is documented in Microsoft Security Advisory 2286198.)  A few days later, there were reports of an exploit being published, and apparently some well-known malware sources have begun to include exploits for this flaw in their offerings.

Microsoft has now announced, in a TechNet blog post,  that it will release an out-of-schedule patch to fix this vulnerability on Monday, August 2, at around 10:00 AM PDT.  According to the announcement, Microsoft has been seeing an increasing number of attacks directed against this vulnerability.

I will post another note when the fix is actually available, or if I get any more information.


Verizon’s 2010 Data Breach Report is Out

July 29, 2010

Verizon’s RISK Team publishes an annual report summarizing data breach incidents and categorizing them by various criteria (e.g., who did it?  how was it done?).  It usually makes for some interesting, although sometimes depressing, reading.  This year’s report [PDF] has now been released, and features a considerably larger data sample than in the past, thanks to the inclusion of data contributed by the US Secret Service.

I haven’t yet had a chance to read the 2010 report, but one statistic from it, quoted in a diary entry from the SANS Internet Storm Center, caught my eye: “86% of victims had evidence of the breach in their log files”.   In other words, the sizable majority of breaches could be detected without anything fancier than the log files already being generated by the server(s).
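
The practical implication is worth underlining: even a very crude scan of the logs a server is already writing can surface many of these incidents.  The sketch below, in Go, shows the idea; the log file name and the “indicator” strings are hypothetical stand-ins, not anything taken from the Verizon report:

    // Scan an existing server log for crude indicators of compromise.
    // A real monitor would use tuned rules, but the point is that the
    // raw evidence is already sitting in the log file.
    package main

    import (
        "bufio"
        "fmt"
        "log"
        "os"
        "strings"
    )

    func main() {
        f, err := os.Open("access.log") // hypothetical log location
        if err != nil {
            log.Fatal(err)
        }
        defer f.Close()

        // Hypothetical breach indicators: repeated login failures,
        // SQL-injection fragments, directory-traversal attempts.
        indicators := []string{"failed password", "' OR '1'='1", "../../"}

        scanner := bufio.NewScanner(f)
        lineNo := 0
        for scanner.Scan() {
            lineNo++
            for _, ind := range indicators {
                if strings.Contains(scanner.Text(), ind) {
                    fmt.Printf("line %d: possible indicator %q\n", lineNo, ind)
                }
            }
        }
        if err := scanner.Err(); err != nil {
            log.Fatal(err)
        }
    }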

I’ll post another note with some comments after I’ve read the report.


The Big Switch

July 29, 2010

I’ve written here a couple of times previously about the potential security problems associated with the introduction of “smart” electricity meters that can be controlled remotely.   According to a blog post by Ross Anderson at the Security Research group at the Computer Laboratory, University of Cambridge, there is a plan afoot in the UK to replace some 47 million existing electricity meters with “smart meters”.  The motivation appears to be primarily economic:

The energy companies are demanding this facility so that customers who don’t pay their bills can be switched to prepayment tariffs without the hassle of getting court orders against them. If the Government buys this argument – and I’m not convinced it should – then the off switch had better be closely guarded.

Ross and his colleagues have a new paper [PDF] on the potential impact of this strategic vulnerability.  From the abstract:

The off switch creates information security problems of a kind, and on a scale, that the energy companies have not had to face before. From the viewpoint of a cyber attacker — whether a hostile government agency, a terrorist organisation or even a militant environmental group — the ideal attack on a target country is to interrupt its citizens’ electricity supply. This is the cyber equivalent of a nuclear strike; when electricity stops, then pretty soon everything else does too.

Apart from the details of potential vulnerabilities, this makes a very important point: installation of these meters, and their supporting infrastructure, creates a very large-scale vulnerability where none existed before.

It is difficult to think of a good analogy, but suppose the FAA were suddenly to decide that all air-traffic control information would be housed “in the cloud” and distributed via the public Internet, with communications to aircraft carried over VoIP telephone service.  Such a system might well save money, but it would have the potential to be incredibly dangerous.

Generally, when we design systems to be robust and reliable, we try to eliminate “single points of failure”, parts of the system on which everything else depends.  Deliberately building them in seems a bit imprudent, to say the least.


Technology and Spycraft

July 28, 2010

I’ve written here on many occasions about the impact that our rapidly developing technology has had and is having on our privacy and the security of our personal information.   Recently, the Economist had an interesting short article exploring another aspect of these changes: the effect of technological change on the business of spying.

If you are in the spying business, your attitude toward technology, as the article points out, probably depends a lot on what sort of spy you are.

DEPENDING on what kind of spy you are, you either love technology or hate it. For intelligence-gatherers whose work is based on bugging and eavesdropping, life has never been better.

Obviously, for folks like the spooks at the National Security Agency, or at GCHQ in the UK, technology is generally a great boon.  For example, in the not too distant past, eavesdropping on someone’s telephone calls required that a physical electrical connection be made to his phone line, perhaps outside his house or in the telephone company central office.  With cellular and cordless phones, the signals can just be sniffed out of the air.  The provider may claim, honestly, that the signals are encrypted, but the security record of these systems is not good.  (For example, see my post earlier this year on the cracking of the DECT encryption used by many cordless phones.)

For the more old-fashioned kind of spy, though — the kind pursuing human intelligence with his or her feet on the ground — technology has made life a lot harder.  Once, a common method of developing a false identity was to start from the authentic birth certificate of a child who had died in infancy, and add a few plausible supporting documents.

Creating false identities used to be easy: an intelligence officer setting off on a job would take a scuffed passport, a wallet with a couple of credit cards, a driving licence and some family snaps.

In a world based on paper records, untangling even a moderately complex false trail took time and a lot of legwork.  Today, with so much information about us available online, the task of creating a convincing back story, or “legend”, has become much harder; and checking on someone’s background has become something that can be done, in large part, sitting at one’s desk.    (This is sort of the flip side of the increased ease of identity theft.)

The article suggests that the days of the classic “deep cover” spy are numbered.  More likely is the use of “real people”, in an age when people routinely move about the world much more than they used to.  Espionage may go back to being a game for amateurs and freelancers, rather than a professional career.


Apple Updates Safari

July 28, 2010

Apple has released updated versions of its Safari Web browser, in order to address a number of security vulnerabilities.   The primary new version, 5.0.1, is available for Windows XP, Vista, and 7, and for Mac OS X versions 10.5.x and 10.6.x.   Apple also released version 4.1.1, addressing essentially the same issues, for Mac OS X 10.4.11.   Apple’s “Security Content” article has details of the vulnerabilities being patched.

The new version, which I recommend installing as soon as you conveniently can, should be available through Apple’s Software Update mechanism; alternatively, the new versions can be obtained from Apple’s Support Downloads page.

Update Wednesday, 28 July, 23:25 EDT

I did not realize it from reading the original release announcement, but the new Safari 5.0.1 version apparently does have one significant functional enhancement, in addition to its security fixes.  This version includes the new framework for allowing third-party browser extensions, similar to the facility that has existed in Firefox for some time.  There’s a short survey of the new feature at Wired‘s “Webmonkey” site.

Apple has also introduced a new Safari Extensions Gallery.

