Multiscale Simulation Methods for Soft Matter Systems

Multiscale modeling is a central topic in theoretical condensed matter physics and materials science. One prominent class of materials, whose properties can rarely be understood on one length scale and one time scale alone, is soft matter. The properties of soft materials are determined by an intricate interplay of energy and entropy, and minute changes of molecular interactions may lead to massive changes of the system’s macroscopic properties.


The ZDV coordinates the central support project, which offers software-development services for the science projects within TRR 146 and is itself a research project that analyzes and optimizes the use of HPC resources in the context of soft-matter systems. The scientific projects described in this proposal depend on a reliable and flexible software platform that allows new methods and algorithms to be implemented and tested efficiently and that provides a toolset for data analysis. The MD software framework ESPResSo++ will become this core platform within the TRR 146, and the focus of project G will be explained based on ESPResSo++. Nevertheless, the other platforms within the TRR face very similar challenges, and the investigations will be extended to all projects within the TRR.

The aim of this central support project is to provide a central software platform that addresses the aforementioned problems and is accessible by all collaboration partners. It therefore:

  • Supports the scientific projects in developing simulation software, helps to port algorithms from CPUs to accelerators, and couples the individual modules with ESPResSo++.
  • Develops load-balancing schemes for MD simulations that scale efficiently in HPC environments. These schemes will first be implemented in the core ESPResSo++ modules and afterwards transferred to the other simulation environments of the TRR.
  • Provides an asynchronous checkpointing environment that scales by distributing the checkpoints across the HPC cluster, using techniques that are independent of the programming framework.
  • Allows applications to steer their I/O behavior and thereby improve I/O performance even after the development process has finished.
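To illustrate the asynchronous-checkpointing idea from the list above, here is a minimal single-node sketch: the simulation state is snapshotted and written out in a background thread, so computation can continue while the I/O happens. The function name `checkpoint_async` and the JSON format are illustrative assumptions; the project's actual environment distributes checkpoints across the HPC cluster and is not reproduced here.

```python
import copy
import json
import os
import tempfile
import threading

def checkpoint_async(state, path):
    """Write a snapshot of `state` to `path` in a background thread.

    A deep copy decouples the checkpoint from the live simulation state,
    so the caller may keep mutating `state` while the write is in flight.
    Returns the writer thread; join() it before issuing the next checkpoint.
    (Illustrative sketch, not the TRR 146 implementation.)
    """
    snapshot = copy.deepcopy(state)

    def _write():
        tmp = path + ".tmp"
        with open(tmp, "w") as f:
            json.dump(snapshot, f)
        os.replace(tmp, path)  # atomic rename: never leaves a half-written file

    t = threading.Thread(target=_write)
    t.start()
    return t

# Usage: the simulation advances while the checkpoint is being written.
state = {"step": 100, "positions": [[0.0, 1.0], [2.0, 3.0]]}
ckpt_path = os.path.join(tempfile.gettempdir(), "ckpt.json")
t = checkpoint_async(state, ckpt_path)
state["step"] += 1  # safe: the writer holds its own snapshot
t.join()
```

Writing to a temporary file and renaming it into place ensures that a crash during the write cannot corrupt the previous checkpoint, which matters for exactly the fault-tolerance scenarios checkpointing is meant to cover.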

Project Partners

  • Zentrum für Datenverarbeitung
  • Max-Planck-Institut für Polymerforschung

Funding Period

10/2014 -- 06/2022

External Links

Web site of the TRR 146

Contact

Publications

2023

  • James Vance, Zhen-Hao Xu, Nikita Tretyakov, Torsten Stuehn, Markus Rampp, Sebastian Eibl, Christoph Junghans, and Andre Brinkmann. 2023. Code modernization strategies for short-range non-bonded molecular dynamics simulations. Computer Physics Communications 290. DOI Author/Publisher URL

2017

  • Giuseppe Congiu, Matthias Grawinkel, Federico Padua, James Morse, Tim Süß, and André Brinkmann. 2017. MERCURY: A Transparent Guided I/O Framework for High Performance I/O Stacks. In Proceedings of the 25th Euromicro International Conference on Parallel, Distributed and Network-based Processing (PDP), 46–53.
  • Tim Süß, Lars Nagel, Marc-André Vef, André Brinkmann, Dustin Feld, and Thomas Soddemann. 2017. Pure Functions in C: A Small Keyword for Automatic Parallelization. In 2017 IEEE International Conference on Cluster Computing (CLUSTER), 552–556. DOI
  • Yingjin Qian, Xi Li, Shuichi Ihara, Lingfang Zeng, Jürgen Kaiser, Tim Süß, and André Brinkmann. 2017. A configurable rule based classful token bucket filter network request scheduler for the lustre file system. In Proceedings of the International Conference for High Performance Computing, Networking, Storage and Analysis (SC), 6:1–6:12. DOI

2016

  • Jürgen Kaiser, Ramy Gad, Tim Süß, Federico Padua, Lars Nagel, and André Brinkmann. 2016. Deduplication potential of HPC applications’ checkpoints. 413–422. Author/Publisher URL

2015

  • Matthias Grawinkel, Lars Nagel, Markus Mäsker, Federico Padua, Andre Brinkmann, and Lennart Sorth. 2015. Analysis of the ECMWF Storage Landscape. In Proceedings of the 13th USENIX Conference on File and Storage Technologies (FAST), 15–27. Author/Publisher URL

2014

  • Giuseppe Congiu, Matthias Grawinkel, Federico Padua, James Morse, Tim Süß, and André Brinkmann. 2014. Optimizing scientific file I/O patterns using advice based knowledge. 282–283. Author/Publisher URL