A revolution is happening in computer hardware. After three decades during which microprocessor speeds increased almost 4000-fold, we are starting to hit long-predicted physical limits on the speed of a single processor. Recent computers instead use two, four or even twelve processor cores working together “in parallel”, giving peak performance equivalent to a 5GHz, 10GHz or even 30GHz single processor, but at a fraction of the projected energy usage.
There have even been experimental 48-core “single-chip cloud computer” chips giving peak performance that would exceed that of a 100GHz single processor. The effective exploitation of such high performance is essential to support modern demands for computing power in the home, in industry and in the economy at large. Combining this with low energy usage is crucial if the performance is to be delivered at a reasonable financial and environmental cost.
Future designs will harness even greater numbers of processor cores, perhaps in the thousands or millions, and perhaps with widely varying speeds and capabilities. These will be combined with advanced graphics processing units and other specialist units to give further performance and energy gains. In this way we will be able to meet society's future needs for computing power.
“Future computers will consist of thousands or even millions of processors, which poses a real problem to traditional programmers not used to thinking in parallel,” said ParaPhrase project leader Professor Kevin Hammond of the University of St. Andrews. (…) “The sheer complexity of these systems means that powerful tools are needed to develop software that runs stably and efficiently while making the most of the ability to process in parallel. The technologies we have developed in ParaPhrase make it possible now to really exploit the power of these new systems.”
There are significant challenges in building computers, such as those described above, from heterogeneous processors and other computing units. There are even greater challenges in building parallel software that can use them effectively. Meeting these challenges requires software that is easy to write but that still allows the hardware to be used effectively.
This is where the ParaPhrase project came in. Its aim was precisely to produce such software: easy to write, yet able to use the hardware effectively, with the goal of speeding up processing by at least one order of magnitude over sequential execution on real near-term multicore architectures, for the use cases and systems considered in the project.
ParaPhrase built on a multi-level model of parallelism, where implementations of parallel programs are expressed in terms of interacting components. The researchers developed an approach that allows large parallel programs to be constructed out of standard building blocks called patterns. A refactoring tool allows these patterns to be reassembled in optimal ways without changing the functionality of the overall program.
Project partners included representatives from both industry and academia. Industrial partners were able to exploit the project results in a commercial context, while academic partners will use the knowledge gained to enrich their teaching activities and to strengthen their standing in the scientific community. (…) “It was important to us that our research could be directly exploited by industry and other researchers. That's why we applied ParaPhrase to several important industrial case studies during the project,” Professor Hammond said.
The project partners are looking to the future. A number of follow-on projects are underway and more are in the pipeline.