SAN FRANCISCO – September 8, 2022 – Today, the open engineering consortium MLCommons® announced fresh results from MLPerf™ Inference v2.1, which measures the performance of inference – the application of a trained machine learning model to new data. Inference enables the intelligent enhancement of a vast array of applications and systems. This round set new records with nearly 5,300 performance results and 2,400 power measurements, 1.37X and 1.09X more than the previous round, respectively, reflecting the community's vigor.
MLPerf benchmarks are comprehensive system tests that stress machine learning models, software, and hardware, and optionally measure energy consumption. The open-source, peer-reviewed benchmark suites level the playing field for competition, which fosters innovation, performance, and energy efficiency across the whole sector.
The MLPerf Inference benchmarks focus on datacenter and edge systems. Submitters in this round included Alibaba, ASUSTeK, Azure, Biren, Dell, Fujitsu, GIGABYTE, H3C, HPE, Inspur, Intel, Krai, Lenovo, Moffett, Nettrix, Neural Magic, NVIDIA, OctoML, Qualcomm Technologies, Inc., SAPEON, and Supermicro.
To view the results and find additional information about the benchmarks, please visit https://mlcommons.org/en/inference-datacenter-21/ and https://mlcommons.org/en/inference-edge-21/. These results demonstrate broad industry participation and a focus on energy efficiency, paving the way for more capable intelligent systems that will benefit society as a whole.
MLCommons is an open engineering consortium with a mission to benefit society by accelerating innovation in machine learning. The foundation for MLCommons began with the MLPerf benchmark in 2018, which rapidly scaled into a set of industry metrics for measuring machine learning performance and promoting transparency of machine learning techniques. In collaboration with its 50+ founding partners – global technology providers, academics, and researchers – MLCommons focuses on collaborative engineering work that builds tools for the entire machine learning industry through benchmarks and metrics, public datasets, and best practices.