MLCommons®, the open engineering consortium behind the industry-standard MLPerf® benchmarks, today announced the release of MLPerf Client v1.6, the latest update to its benchmark suite for evaluating AI performance on personal computers.

MLPerf Client measures how effectively PCs—from laptops and desktops to workstations—run AI workloads such as large language models (LLMs) locally. By simulating real-world generative AI tasks including summarization, content creation, and code analysis, the benchmark provides clear, standardized metrics for both responsiveness and throughput.

Version 1.6 refines the benchmarking experience with updates to key software components and enhancements to usability and performance.

What’s New in MLPerf Client v1.6

MLPerf Client v1.6 introduces updated acceleration support through new versions of core AI runtimes and frameworks. These include updates to Windows ML and llama.cpp, along with the latest runtime optimizations from independent hardware vendors on Windows platforms. On Apple platforms, updates to MLX with Metal and llama.cpp with Metal further improve performance and compatibility on macOS and iPadOS devices.

The release also brings a series of improvements to the graphical user interface. Startup performance has been optimized to reduce application launch times, and a new progress bar provides clear feedback during initialization. The application has been re-architected internally to improve overall stability during benchmarking runs.

To streamline repeated testing workflows, MLPerf Client v1.6 adds an option to disable download confirmation prompts when starting benchmark runs. This allows users to launch multiple test batches with a single click, improving productivity for reviewers and developers running iterative tests.

These updates reinforce MLPerf Client’s role as a reliable, easy-to-use benchmark for evaluating AI performance across a wide range of client systems.

MLPerf Client is developed through collaboration among leading technology companies, including AMD, Intel, Microsoft, NVIDIA, Qualcomm Technologies, Inc., and major PC OEMs. The benchmark is freely available, with source code open for inspection and contribution via the MLCommons GitHub repository.

For downloads, documentation, and more information, visit mlcommons.org/benchmarks/client. The GUI versions of MLPerf Client are also available through the App Store for iOS and macOS and, in the coming days, through Steam.
About MLCommons

MLCommons is an open engineering consortium with a mission to make machine learning better for everyone. The organization develops industry-leading benchmarks, datasets, and best practices spanning cloud, data center, edge, and client AI systems. Its MLPerf benchmark suite is widely recognized as the standard for measuring machine learning performance.