Today, MLCommons®, an open engineering consortium, released new results from MLPerf™ Training v2.0, which measures the performance of training machine learning models. Training models faster empowers researchers to unlock new capabilities, such as diagnosing tumors, recognizing speech automatically, or improving movie recommendations. The latest MLPerf Training results demonstrate broad industry participation and up to 1.8X greater performance, ultimately paving the way for more capable intelligent systems that benefit society at large.
The MLPerf Training benchmark suite comprises full-system tests that stress machine learning models, software, and hardware for a broad range of applications. The open-source, peer-reviewed benchmark suite provides a level playing field for competition that drives innovation, performance, and energy efficiency across the entire industry.
In this round, MLPerf Training added a new object detection benchmark that trains the RetinaNet reference model on the larger and more diverse Open Images dataset. This new test more accurately reflects state-of-the-art ML training for applications such as collision avoidance for vehicles and robotics, retail analytics, and many others.
“I’m excited to release our new object detection benchmark, which was built based on extensive feedback from a customer advisory board and is an excellent tool for purchasing decisions, designing new accelerators and improving software,” said David Kanter, executive director of MLCommons.
The MLPerf Training v2.0 results include over 250 performance results from 21 different submitters, including Azure, Baidu, Dell, Fujitsu, GIGABYTE, Google, Graphcore, HPE, Inspur, Intel-HabanaLabs, Lenovo, Nettrix, NVIDIA, Samsung, and Supermicro. In particular, MLCommons would like to congratulate first-time MLPerf Training submitters ASUSTeK, CASIA, H3C, HazyResearch, Krai, and MosaicML.
“We are thrilled with the greater participation and the breadth, diversity, and performance of the MLPerf Training results,” said Eric Han, Co-Chair of the MLPerf Training Working Group. “We are especially excited about many of the novel software techniques highlighted in the latest round.”
To view the results and find additional information about the benchmarks, please visit https://mlcommons.org/en/training-normal-20/.
MLCommons is an open engineering consortium with a mission to benefit society by accelerating innovation in machine learning. The foundation for MLCommons began with the MLPerf benchmark in 2018, which rapidly scaled into a set of industry metrics for measuring machine learning performance and promoting transparency of machine learning techniques. In collaboration with its 50+ founding partners, including global technology providers, academics, and researchers, MLCommons focuses on collaborative engineering work that builds tools for the entire machine learning industry through benchmarks and metrics, public datasets, and best practices.