‘It gives you an economic advantage for discovery and innovation.’
ADDING UP: Jill Pipher is director of Brown's Institute for Computational and Experimental Research in Mathematics, which hosted a conference on the next generation of high-speed computers earlier this month.
PBN PHOTO/RUPERT WHITELEY
By Richard Asinof Contributing Writer
The design of the next generation of high-speed computers may have its roots, in the form of algorithms, in a gathering of world-class mathematicians earlier this month at Brown University.
More than 50 of the nation’s top mathematicians from industry, academia and national research laboratories gathered at Brown from Jan. 9-13 to wrestle with potential solutions to the future design and economics of the next generation of exascale supercomputers.
The discussions took place at the headquarters of Brown University’s new Institute for Computational and Experimental Research in Mathematics (ICERM), in a remodeled space formerly occupied by a law firm. School officials expect the institute to become a magnet for attracting creative undergraduate talent in information technology to the university.
The preferred common language of the discussions was algorithms, the step-by-step mathematical procedures used to attack the challenges of the future architecture and speed of supercomputers. Such algorithms could be found sprouting up on the numerous white boards that dominate the wall space of the math-research center.
The conference, sponsored by the U.S. Department of Energy, was titled “Synchronization-Reducing and Communication-Reducing Algorithms and Programming Models for Large-scale Simulations.”
The conference’s challenge, explained ICERM Director Jill Pipher, was “to develop computers 1,000 times more powerful than the ones we have today, as measured in number of calculations that you can do per second,” while at the same time minimizing power consumption. The government’s deadline is 2018, and meeting it, Pipher continued, will require that a new series of algorithms be created. “These challenges are, at heart, mathematical,” she said.
More than an intellectual problem, the algorithmic challenges of exascale computing involve an economics equation rooted in physics: the costs of, and the trade-offs among, energy use, the speed of computations and the distance data must travel to and from memory.
At stake, according to Jan S. Hesthaven, professor of applied mathematics at Brown and deputy director at ICERM, is the distinct economic advantage of supercomputing. “If you have it, and you know how to use it, it gives you an economic advantage for discovery and innovation.”
In designing the next generation of computational architecture, Hesthaven continued, power consumption is a big problem. “If you were to take what we have now, and scale it up, make it bigger, it simply wouldn’t work,” he said. “The amount of energy that it would take would be hundreds of millions of dollars; it would be too expensive.” The only way to do it, Hesthaven continued, is to think about it in a “radically different way that pushes mathematicians – that’s what we’re here for.”