Programming in Parallel

June 11, 2009

If you’ve bought, or thought about buying, a new PC in the last year or two, you’ve probably seen advertising material about dual-core, quad-core, or other multi-core processors (the Intel® Core™ Duo would be an example).  These are CPU chips that incorporate more than one instruction-processing element on a single chip; the idea is to provide at least some of the benefit of having multiple CPUs, in a more economical way.  With more than one processing element, the chip can, in principle, do two things at once, truly in parallel, as distinct from the appearance of multi-tasking produced by “time slicing” on a single processor.

I recently saw an article in the Government Computer News called “Does Parallel Processing Require New Languages?” It talks about a number of projects that are underway to develop new programming languages to support parallel programming:

Now that almost all new servers and computers are running processors with multiple cores, the software-design community is trying to figure out the best way of making use of this new architecture.

Getting the full workload of multicore processors can be tricky because, in order for a program to make use of more than one core, it must divvy its workload in such a way that it doesn’t take more effort than the gains achieved by adding more cores. Most programming languages were written assuming just one processor would be working through the code sequentially, line by line.

I’ve seen a number of other articles, too, that talk about the need for new languages.  Now, I think it is true that providing languages and tools that allow for the easy specification of parallel computation would be a help to developers.  But I don’t really think that programming languages are a significant obstacle to using parallel hardware facilities.  To see why, let’s think about a couple of types of potentially parallel processing.

The first type involves the kind of multi-processing that is done by the operating system on your machine as a matter of course.  You can be typing into a text-input field while one background process is checking your spelling as you type, and another is receiving data for a file you are downloading from the Internet.  This kind of parallel processing has been done for a long time, going back to the mainframe days in the 1960s.  (For example, IBM’s OS/360 had multi-processing as an integral part of its design.)  Although of course the developers of the operating system and its components have to take the possibility of parallelism into account, this is an area of computer science that has been fairly well investigated.  And, although application developers can make some modest adjustments so that their programs “play nice” in this kind of environment, they really don’t have to do anything special.

The second type of parallelism is that done within a single application.  The first thing to observe here is that the degree to which parallel processing is even possible is determined by the nature of the application, and has little to do with the language in which the program is written.  At one extreme, a stochastic (Monte Carlo) simulation can typically make significant use of parallel processing, because it generates a large number of (pseudo-random) independent trials, each of which is in effect a self-contained computation.  But not every problem can be broken down in this way.  To use Fred Brooks’s memorable example from The Mythical Man-Month, producing a baby requires nine months no matter how many women are assigned to the project.
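To make the Monte Carlo case concrete, here is a minimal sketch in Python (my illustration, not from the article): it estimates π by spreading independent random trials across a pool of worker processes, one per core.  The worker and trial counts are arbitrary choices for the example.

    import random
    from multiprocessing import Pool

    def count_hits(n_trials):
        """Run n_trials independent trials; count points landing inside the unit circle."""
        rng = random.Random()  # each worker process gets its own generator
        hits = 0
        for _ in range(n_trials):
            x, y = rng.random(), rng.random()
            if x * x + y * y <= 1.0:
                hits += 1
        return hits

    if __name__ == "__main__":
        trials_per_worker = 1000000
        n_workers = 4  # e.g., one per core on a quad-core machine
        with Pool(n_workers) as pool:
            # Each batch of trials is self-contained, so the batches can run truly in parallel.
            hits = sum(pool.map(count_hits, [trials_per_worker] * n_workers))
        print("pi is approximately", 4.0 * hits / (trials_per_worker * n_workers))

Because no trial depends on any other, adding cores cuts the running time almost proportionally; that property belongs to the problem, not to the language.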

In this second case, then, the degree to which parallel facilities can help is a characteristic of the problem, and not likely to be changed much by changing tools (although, as I noted above, better tools could certainly help with the mechanics of parallelism).
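The usual way to quantify this is Amdahl’s law (the article doesn’t name it, but it formalizes the point): if a fraction p of a program’s work can run in parallel, the best possible speedup on n cores is 1 / ((1 − p) + p / n).  A quick sketch:

    def amdahl_speedup(p, n):
        """Best-case speedup when a fraction p of the work runs on n cores."""
        return 1.0 / ((1.0 - p) + p / n)

    print(amdahl_speedup(0.50, 4))  # ~1.6x: half the work parallel, four cores
    print(amdahl_speedup(0.95, 4))  # ~3.5x: a mostly-parallel Monte Carlo run

Even an ideal language cannot push past these limits; it can only make them easier to reach.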

Why does this matter?  Because it suggests that there is an important design choice to be made.  If we take a typical modern end-user application, like a spreadsheet or word-processing program, there are tasks within the application that can be done in parallel: for example, keyboard input, spell checking, and printing.  One might think that this would be an argument for making large, all-inclusive application suites.  But there is another possibility: to create a set of smaller, single-purpose tools, and let the operating system handle any parallel processing.
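For concreteness, here is a minimal sketch of that first possibility, parallelism inside one application, using Python threads: a background spell checker consumes words while the main thread plays the part of the keyboard handler.  The function names and toy word list are my own inventions (and in CPython the global interpreter lock means these particular threads interleave rather than occupy separate cores, but the shape of the design is the same).

    import queue
    import threading

    def spell_checker(words):
        """Background task: check words as they arrive from the input thread."""
        dictionary = {"parallel", "processing", "language"}  # toy word list
        while True:
            word = words.get()
            if word is None:  # sentinel: input is finished
                break
            if word.lower() not in dictionary:
                print("possible misspelling:", word)

    if __name__ == "__main__":
        typed = queue.Queue()
        checker = threading.Thread(target=spell_checker, args=(typed,))
        checker.start()
        for word in ["parallel", "procesing", "language"]:  # stand-in for keystrokes
            typed.put(word)
        typed.put(None)
        checker.join()

Every application that wants this behavior has to build the same machinery itself, which is part of the appeal of the alternative.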

I think that this second approach, of multiple small tools, deserves to be considered when a new system is designed.  We provide input/output facilities and device drivers as part of the operating system so that each application programmer doesn’t have to deal with all the messy details, and so that we don’t end up with a different driver for each application.  I think there’s a case for handling parallel processing in the same way.  So, although I think adding language facilities to allow the natural expression of parallelism could be helpful, I hope it does not lead us to solve the wrong problem.


Firefox 3.0.11 Released

June 11, 2009

The Mozilla Foundation has released a new version (3.0.11) of the Firefox Web browser.  According to the Release Notes, this update fixes a couple of bugs associated with Firefox’s bookmarks database.  It also addresses several security vulnerabilities, the details of which are given on the Security Advisories page.

The update should be available through the standard update mechanism (under the menus Help / Check for Updates…).  Note that, in order to install a new version, you must have sufficient privilege to write to the installation directories.  Alternatively, the installer package can be downloaded from the Firefox Download page.  Versions are available (in more than 60 languages) for Windows, Linux, and Mac OS X.

Because this update addresses a couple of potentially serious security issues, I recommend installing it as soon as you conveniently can.

