IBM Introduces Transactional Memory

Back in February 2009, IBM announced that it had been selected by the US Department of Energy to develop a new supercomputer system at the Lawrence Livermore National Laboratory.  The new system, named Sequoia, scheduled to become operational next year, has impressive specifications.  It will be able to execute more than 20 petaflops [2 × 10¹⁶ floating-point operations per second], and will have 1.6 petabytes of memory.  (For comparison, the fastest system on this year’s Top 500 list, the K Computer at the RIKEN Advanced Institute for Computational Science in Kobe, Japan, cranks out 8.16 petaflops.)  The Sequoia system will use about 100,000 64-bit PowerPC chips, each with 18 processor cores, and will run the Linux operating system.

All this would be impressive enough, but an article at Wired describes another new capability to be included in Sequoia.  At the recently concluded Hot Chips conference, IBM presented information on the system’s hardware support for transactional memory, a technology aimed at making parallel programming both easier and more efficient.

I’ve talked before about some of the difficulties that arise in trying to structure applications to make efficient use of multiple processing units, and about the problem of deadlock that can arise when concurrent processes compete for non-sharable resources.  Current software solutions to these problems often employ resource locks: a process gets exclusive use of a resource until its critical work is complete.  This can get complicated very quickly, and can be inefficient when there are many competing requests for resources; it can also cause problems if, for example, a process acquires an exclusive lock and then dies.  The sketch below illustrates the classic hazard.
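As a generic illustration (my own C++ sketch, not anything from the Sequoia design), here is the textbook way two threads can deadlock by acquiring the same pair of locks in opposite orders:

    #include <mutex>
    #include <thread>

    std::mutex m1, m2;

    // Thread A takes m1 then m2; thread B takes m2 then m1.
    // If each thread acquires its first lock before the other thread
    // reaches its second, both wait forever: a classic deadlock.
    void thread_a() {
        std::lock_guard<std::mutex> g1(m1);
        std::lock_guard<std::mutex> g2(m2);  // blocks if B already holds m2
        // ... critical work on the shared resources ...
    }

    void thread_b() {
        std::lock_guard<std::mutex> g1(m2);
        std::lock_guard<std::mutex> g2(m1);  // blocks if A already holds m1
        // ... critical work on the shared resources ...
    }

    int main() {
        std::thread a(thread_a), b(thread_b);  // may hang -- which is the point
        a.join();
        b.join();
    }

The usual cure is to impose a global ordering on lock acquisition, but enforcing that discipline across a large code base is exactly the kind of complication that makes lock-based programming hard.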

Transactional memory, in essence, provides hardware support for “packaging” a set of operations as an atomic transaction (that is, either all the operations are completed, or none is).  The memory hardware keeps track of versions of its contents.  When a transaction starts, the contents of its critical memory region are noted; when the transaction is complete, there are two possibilities.  If the contents of the memory region are unchanged, the transaction is committed: its results become permanent.  If, however, the contents of the memory region have changed, indicating action by a different process, the transaction is “rolled back”, as if it had never occurred; it can then be re-tried.  (This should be conceptually familiar to developers who have used standard SQL-based relational databases, which employ similar commit-or-rollback capabilities.)  A rough software analogue of this check-and-commit cycle is sketched below.
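To make the optimistic check-and-commit cycle concrete, here is a minimal C++ sketch (again my illustration, not IBM’s mechanism) that mimics a tiny “transaction” on a single word using an atomic compare-and-swap: snapshot the value, compute a result, commit only if nothing has changed, and otherwise discard the result and retry:

    #include <atomic>
    #include <cstdio>

    std::atomic<long> counter{0};

    // Optimistic update: snapshot the current value, compute a new one,
    // and commit only if no other thread has changed the value in the
    // meantime; otherwise discard the result (the "rollback") and retry.
    void add(long delta) {
        long expected = counter.load();
        long desired;
        do {
            desired = expected + delta;
            // On failure, compare_exchange_weak reloads `expected` with
            // the value another thread committed, and the loop retries.
        } while (!counter.compare_exchange_weak(expected, desired));
    }

    int main() {
        add(5);
        std::printf("%ld\n", counter.load());  // prints 5
    }

Real hardware transactional memory tracks whole memory regions rather than a single word, but the commit-or-retry pattern is the same.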

The transactional model is appealing because of its conceptual clarity; previous attempts to implement it have been done in software, which imposes a performance penalty that is unacceptable in some applications.  The hardware implementation that IBM is building will be an interesting test of the viability of the approach.
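For a taste of what the software approach looks like, GCC (version 4.7 and later, compiled with the -fgnu-tm flag and the libitm runtime) offers a transactional extension; the following is a minimal sketch, with hypothetical account variables invented for illustration:

    // Compile with: g++ -fgnu-tm stm_example.cpp
    #include <cstdio>

    // Hypothetical shared state, for illustration only.
    long balance_a = 1000;
    long balance_b = 0;

    void transfer(long amount) {
        // The runtime executes this block atomically: if a conflicting
        // access from another thread is detected, the block's effects
        // are rolled back and the block is re-executed.
        __transaction_atomic {
            balance_a -= amount;
            balance_b += amount;
        }
    }

    int main() {
        transfer(250);
        std::printf("%ld %ld\n", balance_a, balance_b);  // 750 250
    }

The instrumentation that makes this work in software is the source of the performance penalty; doing the conflict detection in the memory hardware, as Sequoia will, is what makes the approach potentially practical.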

As specialized as Sequoia is, the insight it will give into the utility of transactional memory will be invaluable.  The combination of ease-of-use advantages for programmers and the performance potential (of both transactional memory and speculative execution) makes transactional memory very appealing.

As multiple-processor systems become more and more the norm, the issues raised by parallel processing will become more significant.  Providing better technological tools to address them has the potential to make a big difference.
