The ADMIRE project will create a European adaptive storage system to boost the performance of data-intensive applications spanning HPC simulation, bioinformatics, and artificial intelligence. The project aims to integrate novel and existing technologies through the co-design of six application pillars with high industrial and societal relevance: weather forecasting, molecular dynamics, turbulence simulations, planetary-scale cover mapping, brain super-resolution imaging, and Software Heritage catalog management and indexing.
ADMIRE aims to deliver an input/output (I/O) software stack and a clearly defined application programming interface (API) for optimizing data-intensive HPC and machine-learning applications. The main objective of the ADMIRE project is to establish control over the I/O stack by making it active: the stack dynamically adjusts computation and storage requirements through intelligent global coordination, malleability of computation and I/O, and the scheduling of storage resources across all levels of the storage hierarchy. To achieve this, we will develop a software-defined framework based on the principles of scalable monitoring and control, separated control and data paths, and the orchestration of key system components and applications through embedded control points.
Funding
The ADMIRE project has received €7.9M in funding from the European High-Performance Computing Joint Undertaking (JU) under grant agreement No 956748. The JU receives support from the European Union’s Horizon 2020 research and innovation programme and from Spain, Germany, France, Italy, Poland, and Sweden.
The funding period runs from April 2021 until the end of March 2024.
Project Partners
- University Carlos III of Madrid (UC3M), Spain, Coordinator
- Barcelona Supercomputing Center (BSC), Spain
- Technische Universität Darmstadt, Germany
- Max Planck Computing and Data Facility (MPCDF), Germany
- Forschungszentrum Jülich (FZJ), Germany
- DDN, France
- Paratools, France
- INRIA, France
- CINI, Italy
- CINECA, Italy
- E4, Italy
- Poznan Supercomputing and Networking Center (PSNC), Poland
- Royal Institute of Technology in Stockholm (KTH), Sweden
External Links
Website of the ADMIRE project
Official press release
Publications
2024
- Ahmad Tarraf, Martin Schreiber, Alberto Cascajo, Jean-Baptiste Besnard, Marc-André Vef, Dominik Huber, Sonja Happ, André Brinkmann, David E. Singh, Hans-Christian Hoppe, Alberto Miranda, Antonio J. Peña, Rui Machado, Marta Garcia-Gasulla, Martin Schulz, Paul Carpenter, Simon Pickartz, Tiberiu Rotaru, Sergio Iserte, Victor Lopez, Jorge Ejarque, Heena Sirwani, Jesus Carretero, and Felix Wolf. 2024. Malleability in Modern HPC Systems: Current Experiences, Challenges, and Future Opportunities. IEEE Transactions on Parallel and Distributed Systems 35(9): 1551–1564.
- Yingjin Qian, Marc-André Vef, Patrick Farrell, Andreas Dilger, Xi Li, Shuichi Ihara, Yinjin Fu, Wei Xue, and André Brinkmann. 2024. Combining Buffered I/O and Direct I/O in Distributed File Systems. In Proceedings of the 22nd USENIX Conference on File and Storage Technologies (FAST), Santa Clara, CA, USA, February 27-29, 17–33.
2023
- Jesus Carretero, Javier Garcia-Blas, Marco Aldinucci, Jean-Baptiste Besnard, Jean-Thomas Acquaviva, André Brinkmann, Marc-André Vef, Emmanuel Jeannot, Alberto Miranda, Ramon Nou, Morris Riedel, Massimo Torquati, and Felix Wolf. 2023. Adaptive multi-tier intelligent data manager for Exascale. In Proceedings of the 20th ACM International Conference on Computing Frontiers (CF), Bologna, Italy, May 9-11, 285–290.
- Marc-André Vef, Alberto Miranda, Ramon Nou, and André Brinkmann. 2023. From Static to Malleable: Improving Flexibility and Compatibility in Burst Buffer File Systems. In 2nd International Workshop on Malleability Techniques Applications in High-Performance Computing (HPCMall).
- Marc-André Vef. 2023. New techniques for tracing and designing HPC storage systems.
- Nafiseh Moti, André Brinkmann, Marc-André Vef, Philippe Deniel, Jesus Carretero, Philip Carns, Jean-Thomas Acquaviva, and Reza Salkhordeh. 2023. The I/O Trace Initiative: Building a Collaborative I/O Archive to Advance HPC. In Proceedings of the SC ’23 Workshops of The International Conference on High Performance Computing, Network, Storage, and Analysis.
- Yingjin Qian, Wen Cheng, Lingfang Zeng, Xi Li, Marc-André Vef, Andreas Dilger, Siyao Lai, Shuichi Ihara, Yong Fan, and André Brinkmann. 2023. Xfast: Extreme File Attribute Stat Acceleration for Lustre. In Proceedings of the International Conference for High Performance Computing, Networking, Storage and Analysis (SC), Denver, CO, USA, November 12-17, 96:1–96:12.
2022
- Nafiseh Moti, Reza Salkhordeh, and André Brinkmann. 2022. Protected Functions: User Space Privileged Function Calls. In Proceedings of the 35th International Conference on the Architecture of Computing Systems (ARCS), Heilbronn, Germany, September 13-15, 117–131.
- Yingjin Qian, Wen Cheng, Lingfang Zeng, Marc-André Vef, Oleg Drokin, Andreas Dilger, Shuichi Ihara, Wusheng Zhang, Yang Wang, and André Brinkmann. 2022. MetaWBC: POSIX-compliant metadata write-back caching for distributed file systems. In Proceedings of the International Conference for High Performance Computing, Networking, Storage and Analysis (SC), 795–814.
2021
- Frederic Schimmelpfennig, Marc-André Vef, Reza Salkhordeh, Alberto Miranda, Ramon Nou, and André Brinkmann. 2021. Streamlining distributed Deep Learning I/O with ad hoc file systems. In 2021 IEEE International Conference on Cluster Computing (CLUSTER), 169–180.