Training Working Group
Mission
Define, develop, and conduct the MLPerf™ Training benchmarks.
Purpose
Benchmarking the performance of training ML models on a wide variety of use cases, software, and hardware drives AI performance across the tech industry. This working group draws on expertise in AI and the technology that powers AI from across the industry to design and create industry-standard benchmarks. Together, we create the reference implementations, rules, policies and procedures to benchmark a wide variety of AI workloads.
The training working group strives for a critical balance of perspectives to ensure fairness and accuracy in the benchmarks. This balance comes from our members' diverse experience across many different AI hardware and software spaces. We are always looking for new members to help us create the benchmarks that best capture innovation in AI.
Deliverables
- Training benchmark roadmap
- Training benchmark rules
- Training benchmark reference implementations
- Training benchmark results every ~6 months
Meeting Schedule
Weekly on Thursday from 8:35-10:00AM Pacific.
How to Join
Request to join the Training Google Group to be added to the group/mailing list and receive the meeting invite. Requests are manually reviewed, so please be patient.
Working Group Resources
- Shared documents and meeting minutes:
  - Associate a Google account with your e-mail address.
  - Ask to join our Public Google Group.
  - Ask to join our Members Google Group.
  - Once approved, go to the Training folder in our Members Google Drive.
- GitHub (public):
  - If you want to contribute code, please sign our CLA first.
  - GitHub link.
Working Group Chairs
Eric Han (erichan@mlcommons.org) - LinkedIn
Eric is a software engineer at Meta driving PyTorch performance improvements.
Ritika Borkar (ritika@mlcommons.org) - LinkedIn
Ritika Borkar is a Senior Deep Learning Architect at NVIDIA focusing on HW and SW optimizations for High Performance AI Computing on GPUs and datacenter systems. Previously, she worked on microarchitecture definition, ASIC design, and verification for IPs at Atmel and NVIDIA. Since MLPerf's inception in 2018, Ritika has influenced rules and processes for the training suite of benchmarks. She holds a master's degree in Electrical Engineering from the University of Minnesota, and a bachelor's degree from the National Institute of Technology at Trichy in India.