Chip giant Intel, graphics card maker Nvidia, MIT and Sandia National Laboratories are all recipients of the first grants to be used to create prototype exascale machines, capable of a million trillion (10^18) floating point operations per second. The U.S. defense research agency, Darpa, expects the first prototypes to be working by 2018.
The research project will attempt to create hardware that "overcomes the limitations of current evolutionary approach", characterized by Moore's Law, which says the number of transistors that can fit on a given piece of silicon doubles every 18 to 24 months. The limitations of that approach are the mushrooming power, management and structural problems that crop up as components shrink.
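To illustrate the scale of the "evolutionary approach" Darpa wants to move beyond, the sketch below projects transistor growth under Moore's Law over the eight years to the 2018 target. The 2010 starting point is an assumption based on the timing of the grants; the 18- and 24-month doubling periods are the range quoted above.

```python
def transistor_growth(years, doubling_period_years):
    """Projected transistor-count multiplier under a Moore's Law doubling rate."""
    return 2 ** (years / doubling_period_years)

# Assumed span: roughly 8 years from the 2010 grants to Darpa's 2018 target.
fast = transistor_growth(8, 1.5)  # 18-month doubling
slow = transistor_growth(8, 2.0)  # 24-month doubling
print(f"18-month doubling: ~{fast:.0f}x more transistors")
print(f"24-month doubling: {slow:.0f}x more transistors")
```

Even the optimistic 18-month cadence yields only a roughly 40-fold density gain over that period, which is why Darpa is betting on new architectures rather than shrinking transistors alone.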
Darpa is looking to “develop radically new computer architectures and programming models that are 100 to 1,000 times more energy efficient, with higher performance, and that are easier to program than current systems”.
The US-owned Jaguar supercomputer has a top speed of 1.75 petaflops
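To put the exascale goal in perspective, a short calculation comparing it with Jaguar's quoted 1.75-petaflop top speed (one exaflop is 1,000 petaflops):

```python
EXAFLOPS = 1e18          # one exaflop: a million trillion operations per second
JAGUAR_FLOPS = 1.75e15   # Jaguar's top speed: 1.75 petaflops

speedup = EXAFLOPS / JAGUAR_FLOPS
print(f"An exascale machine would be roughly {speedup:.0f}x faster than Jaguar")
```

In other words, Darpa's target is more than 500 times the speed of the fastest machine cited here.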