The PC World site has a report of an interesting presentation made at the EDAworkshop13 in Dresden, Germany, this month, on possible future trends in the high-performance computing [HPC] market. The work, by a team of researchers from the Barcelona Supercomputing Center in Spain, suggests that we may soon see a shift in HPC architecture, away from the commodity x86 chips common today and toward the simpler processors (e.g., those from ARM) used in smartphones and other mobile devices.
Looking at historical trends and performance benchmarks, a team of researchers in Spain has concluded that smartphone chips could one day replace the more expensive and power-hungry x86 processors used in most of the world’s top supercomputers.
The presentation material is available here [PDF]. (Although PC World calls it “a paper”, it is a set of presentation slides.)
As the team points out, significant architectural shifts have occurred before in the HPC market. Originally, most supercomputers employed special-purpose vector processors, which could operate on multiple data items simultaneously. (The machines built by Cray Research are prime examples of this approach.) The first Top 500 list, published in June 1993, was dominated by vector architectures — notice how many systems are from Cray, or from Thinking Machines, another vendor of similar systems. These systems tended to be voracious consumers of electricity; many of them required special facilities, such as cooling with chilled water.
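The vector idea survives today in the SIMD units of ordinary CPUs, and array libraries expose the same programming model. As a rough illustration only (NumPy here stands in for the "one instruction, many data items" style; it is not the original vector hardware):

```python
import numpy as np

# A vector processor applies a single instruction to many data items at
# once. NumPy's array operations behave analogously: one "add" covers
# the whole array instead of an element-by-element scalar loop.
a = np.arange(1_000, dtype=np.float64)
b = np.arange(1_000, dtype=np.float64)

# Scalar style: one element handled per operation
scalar_sum = [x + y for x, y in zip(a, b)]

# Vector style: the whole computation expressed as one array operation
vector_sum = a + b

assert np.allclose(scalar_sum, vector_sum)
```

The payoff on real vector hardware was that the single array operation kept the functional units saturated, which scalar loops could not.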
Within a few years, though, the approach had begun to change. A lively market had developed in personal UNIX workstations, using RISC processors, provided by vendors such as Sun Microsystems, IBM, and HP. (In the early 1990s, our firm, and many others in the financial industry, used these machines extensively.) The resulting availability of commodity CPUs made building HPC systems from those processors economically attractive. They were not quite as fast as the vector processors, but they were a lot cheaper. Somewhat later, a similar transition, also motivated by economics, took place away from RISC processors and toward the x86 processors used in the by-then ubiquitous PC.
The researchers point out that current mobile processors have some limitations for this new role:
- The CPUs are mostly 32-bit designs, limiting usable memory to about 4 GB
- Most lack support for error-correcting memory
- Most use non-standard I/O interfaces
- Their thermal engineering does not necessarily accommodate continuous full-power operation
But, as they also point out, these are implementation decisions made for business reasons, not insurmountable technical problems. They predict that newer designs will be offered that will remove these limitations.
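The first limitation, at least, is simple arithmetic rather than a matter of engineering judgment: a 32-bit pointer can distinguish at most 2^32 byte addresses, far below what an HPC node routinely carries.

```python
# A 32-bit pointer can address at most 2**32 bytes of memory.
addressable_32 = 2 ** 32
addressable_64 = 2 ** 64

print(addressable_32 // 2 ** 30)   # 4 -- the 32-bit ceiling, in GiB
print(addressable_64 // 2 ** 30)   # vastly more than any current node holds
```

Moving to a 64-bit design removes the ceiling entirely, which is exactly the kind of implementation change the researchers expect vendors to make.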
This seems to me a reasonable prediction. Using simpler components in parallel has often been a sensible alternative to more powerful, complex systems. Even back in the RISC workstation days, in the early 1990s, we were running large simulation problems at night, using our network of 100+ Sun workstations as a massively parallel computer. The trend in the Top 500 lists is clear; we have even seen a small supercomputer built using Raspberry Pi computers and Legos. Nature seems to favor this approach, too; our individual neurons are not particularly powerful, but we have a lot of them.
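That nightly workstation farm worked because the job was embarrassingly parallel: many independent runs, with no coordination needed until the results were gathered. A minimal sketch of the same pattern on a single machine (the `simulate` function is a hypothetical stand-in for a real workload, and the thread pool plays the role of the network of workstations):

```python
from concurrent.futures import ThreadPoolExecutor
import random

def simulate(seed):
    # Hypothetical stand-in for one independent simulation run.
    rng = random.Random(seed)
    return sum(rng.random() for _ in range(1_000))

# Each worker plays the role of one workstation: the runs are fully
# independent, so the work scales out with no coordination beyond
# collecting the results at the end.
with ThreadPoolExecutor(max_workers=8) as pool:
    results = list(pool.map(simulate, range(100)))

print(len(results))  # one result per independent run
```

The same structure maps directly onto a cluster of cheap nodes, which is why modest processors in quantity can stand in for a few powerful ones on this class of problem.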