Benchmarks

Delivering open, useful measures of quality, performance and safety to help guide responsible AI development.

About MLCommons Benchmarks

The foundation for MLCommons benchmark work derives from and builds upon MLPerf, which aims to deliver a representative AI/ML benchmark suite that fairly evaluates system performance, guided by five high-level goals:

1. Enable fair comparison of competing systems while still encouraging AI innovation.
2. Accelerate AI progress through fair and useful measurement.
3. Enforce reproducibility to ensure reliable results.
4. Serve both the commercial and research communities.
5. Keep benchmarking effort affordable so all can participate.

Benchmark management

Each benchmark suite is defined by a working group: a community of experts who establish fair benchmarks for AI systems. The working group defines the AI model to run, selects the data set it is run against, sets rules on what changes to the model are allowed, and measures how fast a given hardware system runs the model. By working within this tripod of model, data set, and rules, MLCommons AI systems benchmarks measure not only the speed of hardware, but also the quality of the training data and the quality of the AI model itself.
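To make the three elements concrete, below is a minimal, purely illustrative sketch in Python. All names (BenchmarkDefinition, BenchmarkResult, is_valid) are hypothetical and not part of any MLCommons tooling; the sketch only mirrors the idea that a benchmark fixes a model, a data set, and rules, and that a submission reports speed while still meeting a quality bar.

```python
from dataclasses import dataclass, field


@dataclass
class BenchmarkDefinition:
    """Hypothetical sketch of what a working group fixes for one benchmark."""
    model: str                                   # the AI model to run (e.g. a reference network)
    dataset: str                                 # the data set the model is evaluated against
    allowed_changes: list[str] = field(default_factory=list)  # rules on permitted model modifications
    quality_target: float = 0.0                  # minimum model quality a result must reach


@dataclass
class BenchmarkResult:
    """Hypothetical record of one submission against a definition."""
    system: str                                  # the hardware/software system under test
    throughput: float                            # how fast the system runs the model (e.g. samples/sec)
    quality: float                               # measured model quality on the data set


def is_valid(definition: BenchmarkDefinition, result: BenchmarkResult) -> bool:
    # A speed result only counts if the model still meets the quality bar set by the rules.
    return result.quality >= definition.quality_target
```

In the actual MLPerf suites, this role is played by each working group's published rules and reference implementations rather than by code like the above.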



How to submit MLPerf results

If you are interested in submitting MLPerf benchmark results, please join the appropriate working group. Registration deadlines are several weeks in advance of submission dates to ensure that all submitters are aware of the benchmark requirements and that all necessary resources are properly provisioned.


Membership is required for most benchmark working groups (e.g., Training, Inference, Mobile). Some public benchmark working groups have no access requirements; non-members may submit to those benchmarks by first signing a Non-member Test Agreement.

We encourage people to become MLCommons Members if they wish to contribute to MLCommons projects. However, if you are interested in contributing to one of our open source projects and do not think your organization would be a good fit as a Member, please enter your GitHub ID into our subscription form. If your organization is already a Member of MLCommons, you can also use the subscription form to request authorization to commit code in accordance with the CLA.