MLPerf Storage Working Group
Mission
Define and develop the MLPerf Storage benchmarks to characterize the performance of storage systems that support machine learning workloads.
Purpose
Storing and processing training data is a crucial part of the machine learning (ML) pipeline. The way we ingest, store, and serve data to ML frameworks can significantly impact the performance of training and inference, as well as resource costs. Yet even though data management can pose a significant bottleneck, it has received far less attention and ML-specific optimization than other parts of the pipeline.
The main goal of the MLPerf Storage working group is to create a benchmark that evaluates performance for the most important storage aspects of ML workloads, including data ingestion, training, and inference. Our end goal is a storage benchmark for the full ML pipeline that is compatible with diverse software frameworks and hardware accelerators. The benchmark will not require any specific hardware for performing computation.
Creating this benchmark will establish best practices for measuring storage performance in ML, contribute to the design of next-generation systems for ML, and help system engineers correctly size storage relative to compute in ML clusters.
Deliverables
- Storage access traces for representative ML applications, from the applications’ perspective. Our initial targets are Vision, NLP, and Recommenders. (Short-term goal)
- Storage benchmark rules for:
- Data ingestion phase (Medium-term goal)
- Training phase (Short-term goal)
- Inference phase (Long-term goal)
- Full ML pipeline (Long-term goal)
- Flexible generation of datasets and workloads:
- Synthetic workload generator based on I/O analysis of real ML traces and aware of compute think-time (see the sketch after this list). (Short-term goal)
- Trace replayer that scales the workload size. (Long-term goal)
- User-friendly testing harness that is easy to deploy with different storage systems. (Medium-term goal)
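To illustrate what think-time awareness means for a synthetic workload generator, here is a minimal sketch in Python. The trace format, a list of (read size, compute think-time) pairs extracted offline from real ML training runs, and all names below are hypothetical; none of this comes from the MLPerf Storage tooling itself.

```python
# Minimal sketch of a think-time-aware synthetic I/O workload.
# Assumes a hypothetical trace format: (read_size_bytes, think_time_s)
# pairs derived offline from real ML training traces. Illustrative only.
import os
import time

def run_epoch(data_path, trace):
    """Replay one epoch: read a synthetic sample, then idle for the
    remainder of the measured compute think-time, so the storage
    system sees realistic gaps between requests."""
    with open(data_path, "rb") as f:
        for read_size, think_time in trace:
            start = time.perf_counter()
            data = f.read(read_size)      # the storage access under test
            if len(data) < read_size:     # wrap around at end of file
                f.seek(0)
            io_time = time.perf_counter() - start
            # Emulate the accelerator's compute phase: storage only has
            # to deliver the next sample within this window, not as
            # fast as it possibly can.
            time.sleep(max(0.0, think_time - io_time))

if __name__ == "__main__":
    # Generate a small synthetic dataset, then replay a tiny trace.
    with open("synthetic.dat", "wb") as f:
        f.write(os.urandom(4 * 1024 * 1024))
    trace = [(1024 * 1024, 0.01)] * 16  # 16 samples of 1 MiB, 10 ms compute each
    run_epoch("synthetic.dat", trace)
```

The sleep is the design point: during training, the accelerator's compute phase gives storage a window in which it only needs to keep the next sample ready, so a generator that issued back-to-back reads would overstate the bandwidth the workload actually demands.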
Join
Meeting Schedule
Weekly on Fridays, 08:05 – 09:00 Pacific Time
Related Blogs
- New MLPerf Storage v1.0 Benchmark Results Show Storage Systems Play a Critical Role in AI Model Training Performance: storage system providers invent new, creative solutions to keep pace with faster accelerators, but the challenge continues to escalate.
- MLPerf Results Highlight Growing Importance of Generative AI and Storage: the latest benchmarks include an LLM in the inference suite and the first results for the storage benchmark.
- Introducing the MLPerf Storage Benchmark Suite: the first benchmark suite to measure the performance of storage for machine learning workloads.
How to Join and Access MLPerf Storage Working Group Resources
- To sign up for the group mailing list, receive the meeting invite, and access shared documents and meeting minutes:
- Fill out our subscription form and indicate that you’d like to join the MLPerf Storage Working Group.
- Associate a Google account with your organizational email address.
- Once your request to join the Storage Working Group is approved, you’ll be able to access the Storage folder in the Public Google Drive.
- To engage in group discussions, join the group’s channels on the MLCommons Discord server.
- To access the GitHub repository (public):
- If you want to contribute code, please submit your GitHub ID to our subscription form.
- Visit the GitHub repository.
Storage Working Group Chairs
To contact all Storage working group chairs, email [email protected].
Curtis Anderson
Curtis is a filesystem developer at heart, having spent the last 36 of his 45 years of programming experience working on filesystems and nearly every other type of storage-related technology. He’s currently working at Panasas, helping steer PanFS toward a more commercial view of the HPC market. He also enjoys watching the business side of the house do its thing; it’s foreign to tech but has its own internal logic and “architecture”.
Johnu George
Johnu George is a staff engineer at Nutanix with a wealth of experience in building production-grade cloud-native platforms and large-scale hybrid data pipelines. His research interests include machine learning system design, distributed learning infrastructure improvements, and ML workload characterization. He is an active open-source contributor and has steered several industry collaborations on projects like Kubeflow, Apache Mnemonic, and Knative. He is an Apache PMC member and currently chairs the Kubeflow Training and AutoML working groups.
Oana Balmau
Oana is an Assistant Professor in the School of Computer Science at McGill University. Her research focuses on storage systems and data management systems, with an emphasis on large-scale data management for machine learning, data science, and edge computing. She completed her PhD at the University of Sydney, advised by Prof. Willy Zwaenepoel. Before her PhD, Oana earned her Bachelor’s and Master’s degrees in Computer Science from EPFL.
Vice Chairs
Huihuo Zheng
Huihuo Zheng is a computer scientist at Argonne National Laboratory. His research interests include data management and parallel I/O for deep learning applications, as well as large-scale distributed training on HPC supercomputers. He also applies HPC and deep learning to solve challenging domain-science problems in physics, chemistry, and materials science. Huihuo received his PhD in Physics from the University of Illinois at Urbana-Champaign in 2016.