The rapid advancements in AI, and the challenges those advancements bring, make the work we do at MLCommons® more important than ever. Thanks to the hard work of our dedicated community, the scope and global impact of our efforts continue to grow.

Since the early days of MLPerf®, our community has taken on bigger and bolder challenges. We’ve grown from one performance benchmark, to a family of performance benchmarks, to an organization with a mission to make “AI better for everyone.” Beyond our core MLPerf work, our efforts have expanded significantly, ranging from a dataset format standard now in use by multiple major portals, to an innovative federated evaluation platform for benchmarking medical AI accuracy now in discussion with leading pharma organizations. Most recently, we introduced an effort to deliver a global standard for AI safety benchmarking, with dedicated support from industry sponsors. The AI Safety effort is of significant interest to policymakers and standards bodies worldwide, and is on an accelerated trajectory to deliver a production-level benchmark suite later this year. We believe this effort has the potential to substantially benefit society and the AI industry broadly by increasing AI safety and reducing purchasing and compliance uncertainty.

To ensure all efforts are fully supported for maximum success, we are evolving our organizational leadership to lead us into this next phase of growth.

Rebecca Weiss will become the interim Executive Director of MLCommons. She brings a wealth of experience, including starting and building the data science function at Mozilla, where she also conceived and built Rally. Rebecca is already deeply familiar with MLCommons through her role as a Distinguished Fellow on the AI Safety leadership team. She serves on the Board of the Pew Research Center and holds graduate degrees from MIT and Stanford.

MLPerf is a core priority for the organization and will benefit from dedicated focus from David Kanter as the Head of MLPerf. David helped build MLCommons during our transformative early years, and is a widely recognized leader, spokesperson, and technical expert for our industry-standard performance benchmarking. With this dedicated focus on MLPerf, we will be able to deepen support and growth across all our core benchmark work.

MLCommons’ open, collaborative engineering approach and extensive community, spanning both industry and academia, continually measure and improve the accuracy, safety, speed, and efficiency of AI technologies. This new leadership structure reflects our growing role in the AI ecosystem and will ensure maximum impact for all our efforts.