MLPerf Inference Working Group
Mission
Create a set of fair and representative inference benchmarks.
Purpose
Demand for machine-learning (ML) hardware and software systems is burgeoning. Driven by ML applications, the number of different ML inference systems has exploded. Over 100 organizations are building ML inference chips, and the systems that incorporate existing models span at least three orders of magnitude in power consumption and five orders of magnitude in performance; they range from embedded devices to data-center solutions. Fueling this hardware are a dozen or more software frameworks and libraries. The myriad combinations of ML hardware and ML software make assessing ML-system performance in an architecture-neutral, representative, and reproducible manner challenging. There is a clear need for industry-wide standard ML benchmarking and evaluation criteria. MLPerf Inference answers that call.
Deliverables
- Inference benchmark rules and definitions
- Inference benchmark reference software
- Inference benchmark submission rules
- Inference benchmark roadmap
- Inference benchmark results, published every ~6 months
Meeting Schedule
Weekly on Tuesdays, 08:35–10:00 Pacific Time (next meeting: Tuesday, July 2, 2024)
Results Publication
Wednesday, August 28, 2024
Related Blog
- New MLPerf Inference v4.1 Benchmark Results Highlight Rapid Hardware and Software Innovations in Generative AI Systems
  New mixture of experts benchmark tracks emerging architectures for AI models.
- Mixtral 8x7B: a new MLPerf Inference benchmark for mixture of experts
  MLPerf task force shares insights on the design of its mixture of experts large language model benchmark.
- New MLPerf Inference Benchmark Results Highlight The Rapid Growth of Generative AI Models
  With 70 billion parameters, Llama 2 70B is the largest model added to the MLPerf Inference benchmark suite.
Technical Resources
CM Framework Workshop
Learn how to produce MLPerf Inference results with less fuss using the CM Framework in this MLCommons workshop.
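For a flavor of what the workshop covers, below is a minimal sketch of driving a benchmark run through CM's Python interface. It assumes the cmind package is installed and a CM script repository (e.g. mlcommons@cm4mlops) has been pulled; the automation tags, model name, and flags shown are illustrative assumptions that may differ across CM versions, so defer to the workshop materials for the exact invocation.

```python
# Minimal sketch: launch an MLPerf Inference reference run via CM's Python API.
# Assumes `pip install cmind` and `cm pull repo mlcommons@cm4mlops` have been
# run; tags and options are illustrative and vary across CM versions.
import cmind

# cmind.access() takes a dict of arguments and returns a dict whose 'return'
# key is 0 on success. Extra keys are forwarded to the script as if they were
# CLI flags (an assumption based on current CM documentation).
result = cmind.access({
    'action': 'run',
    'automation': 'script',
    'tags': 'run-mlperf,inference,_performance-only',  # illustrative tags
    'model': 'resnet50',
    'implementation': 'reference',
    'device': 'cpu',
    'scenario': 'Offline',
    'quiet': True,
})

if result['return'] > 0:
    # CM reports failures via the returned dict rather than raising exceptions.
    print(result.get('error', 'unknown error'))
```

The equivalent shell invocation (`cm run script --tags=...`) is what the workshop demonstrates; the Python entry point shown here is simply the same automation called programmatically.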
How to Join and Access MLPerf Inference Working Group Resources
This group is open only to MLCommons members and affiliates. If you are not already a member, an affiliate, or part of a member or affiliate company, you can learn more about MLCommons membership here.
- To sign up for the group mailing list, receive the meeting invite, and access shared documents and meeting minutes:
  - Fill out our subscription form and indicate that you’d like to join the MLPerf Inference Working Group.
  - Associate a Google account with your organizational email address.
  - Once your request to join the Inference Working Group is approved, you’ll be able to access the Inference folder in the Members Google Drive.
- To engage in group discussions, join the working group’s channels on the MLCommons Discord server.
- To access the public GitHub repository:
  - If you want to contribute code, please submit your GitHub ID to our subscription form.
  - Visit the GitHub repository.
Inference Working Group Chairs
To contact all MLPerf Inference Working Group chairs, email [email protected].
Miro Hodak
Miro Hodak is a Senior Member of Technical Staff at AMD, where he works on AI performance, strategy, and solutions. Before joining AMD, he was an AI Architect in Lenovo's Infrastructure Solutions Group, and prior to that he was a Research Assistant Professor of Physics at North Carolina State University. Miro has participated in MLPerf/MLCommons activities since 2020, including submitting multiple rounds of Inference and Training benchmarks. His journal publications span AI, computer science, materials science, physics, and biochemistry, and his work has been cited over 2,000 times.
Mitchelle Rasquinha
Mitchelle Rasquinha is a Senior Software Engineer on the ML Performance team at Google. She is interested in accurately capturing innovations in system architectures through robust benchmarking. Mitchelle has a background in Computer Architecture from the Georgia Institute of Technology.