High performance computing (HPC) will reach the exascale level within the next few years. Single compute clusters will combine the performance of millions of cores to perform more than 10^18 operations per second. Beyond the challenges of reaching exascale, HPC is transitioning from being compute-centric to being data-centric. The management and handling of data are becoming increasingly important, and it will be crucial to scale both data capacity and data bandwidth.
Our group Efficient Computing and Storage at Johannes Gutenberg University Mainz focuses on storage systems and scalable computing.
We work on both block- and file-level storage. We develop protocols and architectures that can efficiently use the underlying storage media and integrate them into scalable environments. New storage technologies, such as solid-state disks (SSDs), are incorporated into these environments and help us deliver optimized storage systems, e.g., for data deduplication and backup.
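To make the deduplication idea concrete, here is a minimal sketch of fingerprint-based deduplication, assuming fixed-size chunking and SHA-256 fingerprints. This is an illustration of the general technique, not the group's actual system; function names and parameters are chosen for the example.

```python
import hashlib

def deduplicate(data: bytes, chunk_size: int = 4096):
    """Split data into fixed-size chunks and keep each unique chunk once,
    keyed by its SHA-256 fingerprint. Returns the chunk store and the
    recipe (ordered list of fingerprints) needed to reconstruct the data."""
    store = {}   # fingerprint -> chunk bytes
    recipe = []  # ordered fingerprints describing the original stream
    for i in range(0, len(data), chunk_size):
        chunk = data[i:i + chunk_size]
        fp = hashlib.sha256(chunk).hexdigest()
        store.setdefault(fp, chunk)  # a duplicate chunk is stored only once
        recipe.append(fp)
    return store, recipe

def restore(store, recipe):
    """Reassemble the original data from the store using the recipe."""
    return b"".join(store[fp] for fp in recipe)
```

For backup workloads, where successive snapshots share most of their content, the store grows far more slowly than the raw data; production systems typically use content-defined chunking rather than fixed-size chunks to tolerate insertions.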
In the context of next-generation HPC, we investigate how to combine the performance of accelerators and standard processors within a single framework. Our compiler extensions automatically transform scientific source code so that the resulting applications can be moved seamlessly between CPUs and GPUs and the operating system can optimize each node's throughput. We also investigate optimized utilization and energy efficiency in the context of cloud computing, where we simplify access to scientific applications and HPC.
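The general idea of routing the same operation to whatever hardware a node offers can be sketched as a small backend registry. This is a toy illustration of runtime dispatch, not the group's compiler-based approach; all names here (`register_backend`, `run`, `cpu_impl`) are hypothetical.

```python
# Backends register themselves at startup; calls are routed to the best
# available one, preferring an accelerator over the CPU.
_backends = {}  # backend name -> implementation callable

def register_backend(name, impl):
    _backends[name] = impl

def run(op, *args):
    # Prefer a GPU backend if one was registered on this node.
    for name in ("gpu", "cpu"):
        if name in _backends:
            return _backends[name](op, *args)
    raise RuntimeError("no backend available")

def cpu_impl(op, a, x, y):
    """Plain-Python CPU fallback for a single example operation."""
    if op == "saxpy":  # y <- a*x + y, elementwise
        return [a * xi + yi for xi, yi in zip(x, y)]
    raise NotImplementedError(op)

register_backend("cpu", cpu_impl)
```

On a GPU-equipped node, the runtime would additionally register a `"gpu"` implementation (e.g., a CUDA kernel launch) and the same `run("saxpy", ...)` call would transparently use it; application code never changes.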