Computational scientists and researchers the world over are experiencing growing pains these days with high performance computing (HPC) applications.
They want to squeeze more performance from their applications, but they quickly find that those applications don't scale as they add CPU cores or cluster compute nodes (servers). CPU-based clusters simply can't scale these applications well enough to model complex physical phenomena.
According to a study by analyst firm IDC, a mere one percent of HPC applications can scale to thousands of CPU-based nodes. The majority can run on only a single CPU node. And 16 percent can run only on a single core.
HPC developers use threading models like OpenMP to scale to multiple CPU cores, and MPI to scale across hundreds and sometimes thousands of server nodes. While scaling applications isn't easy, it is certainly possible, and the same parallel programming methods can be used to scale across GPU-accelerated servers too.
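To make that hybrid pattern concrete, here is a minimal sketch: MPI spreads work across nodes while each rank offloads its share to a local GPU with CUDA. The kernel, problem size, and simple sum-reduction workload are illustrative assumptions, not drawn from any of the applications mentioned in this article.

```cuda
// Minimal MPI + CUDA sketch: each MPI rank binds to a local GPU,
// computes on the device, then the partial results are reduced across ranks.
// Build (assuming an MPI compiler wrapper plus the CUDA toolkit):
//   nvcc -ccbin mpicxx mpi_cuda_sum.cu -o mpi_cuda_sum
#include <mpi.h>
#include <cuda_runtime.h>
#include <cstdio>
#include <vector>

// Illustrative kernel: each thread squares one element in place.
__global__ void square(float* data, int n) {
    int i = blockIdx.x * blockDim.x + threadIdx.x;
    if (i < n) data[i] *= data[i];
}

int main(int argc, char** argv) {
    MPI_Init(&argc, &argv);
    int rank, nranks;
    MPI_Comm_rank(MPI_COMM_WORLD, &rank);
    MPI_Comm_size(MPI_COMM_WORLD, &nranks);

    // Bind each rank to one of the GPUs on its node (simple round-robin).
    int ndev = 0;
    cudaGetDeviceCount(&ndev);
    cudaSetDevice(rank % ndev);

    const int n = 1 << 20;                    // per-rank chunk size (assumption)
    std::vector<float> host(n, 1.0f + rank);  // synthetic per-rank data

    float* dev = nullptr;
    cudaMalloc(&dev, n * sizeof(float));
    cudaMemcpy(dev, host.data(), n * sizeof(float), cudaMemcpyHostToDevice);

    square<<<(n + 255) / 256, 256>>>(dev, n);
    cudaMemcpy(host.data(), dev, n * sizeof(float), cudaMemcpyDeviceToHost);
    cudaFree(dev);

    // Sum this rank's results, then combine partial sums from all nodes.
    double local = 0.0, global = 0.0;
    for (int i = 0; i < n; ++i) local += host[i];
    MPI_Reduce(&local, &global, 1, MPI_DOUBLE, MPI_SUM, 0, MPI_COMM_WORLD);

    if (rank == 0) printf("global sum across %d ranks: %f\n", nranks, global);
    MPI_Finalize();
    return 0;
}
```

The structure is the same whether the job runs on two ranks or thousands; the MPI layer handles inter-node communication while CUDA handles the on-node parallelism.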
Customers are demonstrating how they've successfully scaled their applications to thousands of GPUs. For example, researchers at the Chinese Academy of Sciences Institute of Process Engineering were able to take their research into more efficient solar panel technologies and scale it to thousands of GPUs. Likewise, researchers in France who aim to provide a better understanding of earthquakes are accelerating their science on a grand scale with GPUs.
One of Tokyo Tech's research projects showed application scaling up to 1.8 million CUDA cores. Their work was recently recognized with the coveted Gordon Bell Prize, considered the Nobel Prize of supercomputing.
It's no surprise that HPC developers are turning to GPU accelerators to speed up their applications. If parallelizing code requires time and effort, it makes sense to use the computing platform that offers the most performance benefits.
One way or another, the rest of the HPC world will come to embrace the inevitable restructuring of their applications. Parallel computing — primarily with hybrid systems that leverage both CPUs and GPUs — is the way of the future.
Do you have an application that is scaling to hundreds of GPUs? Please leave a comment below and tell us about it.