Today MLCommons® announced new results from the industry-standard MLPerf® Tiny v1.2 benchmark suite. 

Machine learning inference at the edge is fast becoming a popular way to add intelligence to a wide range of devices while improving energy efficiency, privacy, responsiveness, and autonomy. The MLPerf Tiny benchmark suite captures inference use cases that involve “tiny” neural networks and tests them in a fair and reproducible manner. These networks are typically under 100 kB and process data from audio and vision sensors, providing endpoint intelligence for low-power devices in the smallest form factors.

“We are pleased by the continued adoption of the MLPerf Tiny benchmark suite throughout the industry,” said David Kanter, Executive Director of MLCommons. “The diversity of submissions shows us that the industry is embracing AI through increased software support, which makes our benchmarking work all the more important.”

This latest round of MLPerf Tiny results includes submissions from Bosch, Kai Jiang (individual), Qualcomm Technologies, Inc., Renesas, STMicroelectronics, Skymizer, and Syntiant, with 91 performance results overall, including 18 energy measurements. The submissions feature a range of capable new hardware systems designed to accelerate AI workloads, along with the latest software stacks that improve performance and efficiency.

“We are pleased to see the MLPerf Tiny benchmark being used to characterize a wide range of low-power systems, including a variety of microprocessor architectures and AI-enabled low-power sensing hubs,” said Csaba Kiraly, MLPerf Tiny working group co-chair. “Congratulations to all the submitters.”

“The Tiny ML community is continuing to push the envelope with multiple new systems incorporating AI-specific features as well as new software stacks,” said Jeremy Holleman, co-chair of the MLPerf Tiny working group.

View the Results
View the MLPerf Tiny v1.2 benchmark results.