MLPerf Training Working Group

Mission
Define, develop, and conduct the MLPerf Training benchmarks.
Purpose
Benchmarking the performance of training ML models on a wide variety of use cases, software, and hardware drives AI performance across the tech industry. The MLPerf Training working group draws on expertise in AI and the technology that powers AI from across the industry to design and create industry-standard benchmarks. Together, we create the reference implementations, rules, policies, and procedures to benchmark a wide variety of AI workloads.
The MLPerf Training working group strives for a critical balance of perspectives to ensure fairness and accuracy in the benchmarks. This balance comes from our members’ diverse experience in many different AI hardware and software spaces. We are always looking for new members to help us create the benchmarks that best capture innovation in AI.
Deliverables
- Training benchmark roadmap
- Training benchmark rules
- Training benchmark reference implementations
- Training benchmark results every ~6 months
Meeting Schedule
Weekly on Thursdays from 8:35-10:00 AM Pacific.
Related Blog Posts
- New MLPerf Training and HPC Benchmark Results Showcase 49X Performance Gains in 5 Years: New benchmarks, new submitters, performance gains, and new hardware add scale to latest MLCommons MLPerf results
- MLPerf Results Show Rapid AI Performance Gains: Latest benchmarks highlight progress in training advanced neural networks and deploying AI models on the edge
- Latest MLPerf Results Display Gains for All: MLCommons’ benchmark suites demonstrate performance gains up to 5X for systems from microwatts to megawatts, advancing the frontiers of AI
How to Join and Access MLPerf Training Working Group Resources
The MLPerf Training working group is limited to MLCommons members and affiliates. If you are not already a member or affiliate, or part of a member or affiliate company, you can learn more about MLCommons membership here.
- To sign up for the group mailing list, receive the meeting invite, and access shared documents and meeting minutes:
  - Fill out our subscription form and indicate that you’d like to join the MLPerf Training Working Group.
  - Associate a Google account with your organizational email address.
  - Once your request to join the Training Working Group is approved, you’ll be able to access the Training folder in the Members Google Drive.
- To engage in working group discussions, join the group’s channels on the MLCommons Discord server.
- To access the GitHub repository (public):
  - If you want to contribute code, please submit your GitHub ID to our subscription form.
  - Visit the GitHub repository.
Training Working Group Chairs
To contact all MLCommons Training working group chairs email training-chairs@mlcommons.org.
Hiwot Kassa (hiwot@mlcommons.org)
Hiwot is a research engineer on the AI SW/HW co-design team at Meta, working on performance optimization and benchmarking of large-scale workloads. She holds a Ph.D. in computer science and engineering from the University of Michigan.
Ritika Borkar (ritika@mlcommons.org) - LinkedIn
Ritika Borkar is a Senior Deep Learning Architect at NVIDIA, focusing on hardware and software optimizations for high-performance AI computing on GPUs and data center systems. Previously, she worked on microarchitecture definition, ASIC design, and verification for IPs at Atmel and NVIDIA. Since MLPerf’s inception in 2018, Ritika has influenced rules and processes for the training suite of benchmarks. She holds a master’s degree in Electrical Engineering from the University of Minnesota and a bachelor’s degree from the National Institute of Technology, Trichy, India.