Better AI for Everyone
Building trusted, safe, and efficient AI requires better systems for measurement and accountability. MLCommons’ collective engineering with industry and academia continually measures and improves the accuracy, safety, speed, and efficiency of AI technologies.
125+
MLCommons Members and Affiliates
6
Benchmark Suites
56,000+
MLPerf Performance Results to date
Accelerating Artificial Intelligence Innovation
In collaboration with our 125+ founding members and affiliates, including startups, leading companies, academics, and non-profits from around the globe, we democratize AI through open industry-standard benchmarks that measure quality and performance and by building open, large-scale, and diverse datasets to improve AI models.
Focus Areas
We help advance new technologies by democratizing AI adoption through the creation and management of open, useful measures of quality and performance, large-scale open datasets, and ongoing research efforts.
Benchmarking
Benchmarks help balance the benefits and risks of AI through quantitative tools that guide effective and responsible AI development. They provide consistent measurements of accuracy, safety, speed, and efficiency, which enable engineers to design reliable products and services and help researchers gain new insights to drive the solutions of tomorrow.
Datasets
Evaluating AI systems depends on rigorous, standardized test datasets. MLCommons builds open, large-scale, and diverse datasets and a rich ecosystem of techniques and tools for AI data, helping the broader community deliver more accurate and safer AI systems.
Research
Open collaboration with and support for the research community helps accelerate and democratize scientific discovery. MLCommons’ shared benchmarking infrastructure, rich datasets, and diverse community enable researchers to derive new insights and drive breakthroughs in AI for the betterment of society.
Members
MLCommons is supported by our 125+ founding members and affiliates, including startups, leading companies, academics, and non-profits from around the globe.
Join Our Community
MLCommons is a community-driven and community-funded effort. We welcome all corporations, academic researchers, nonprofits, government organizations, and individuals on a non-discriminatory basis. Join us!
Featured Articles
New MLPerf Storage v1.0 Benchmark Results Show Storage Systems Play a Critical Role in AI Model Training Performance
Storage system providers invent new, creative solutions to keep pace with faster accelerators, but the challenge continues to escalate
MLCommons Medical WG Supports FeTS 2.0 Clinical Study with MedPerf and GaNDLF
Partnering with global researchers to advance brain tumor research with AI
New MLPerf Inference v4.1 Benchmark Results Highlight Rapid Hardware and Software Innovations in Generative AI Systems
New mixture of experts benchmark tracks emerging architectures for AI models
Announcing the results of the inaugural AlgoPerf: Training Algorithms benchmark competition
Non-diagonal preconditioning has dethroned Nesterov Adam, and our self-tuning track has crowned a new state-of-the-art for completely hyperparameter-free training algorithms
MLCommons AI Safety Working Group’s Rapid Progress to a v1.0 Release
Building a comprehensive approach to measuring the safety of LLMs and beyond
New MLPerf Training Benchmark Results Highlight Hardware and Software Innovations in AI Systems
Two new benchmarks added – highlighting language model fine-tuning and classification for graph data