AI Safety Working Group
![](https://mlcommons.org/wp-content/uploads/2023/11/AI-Safety-1024x576.jpg)
Mission
Support community development of AI safety tests, and organize the definition of research- and industry-standard AI safety benchmarks based on those tests.
Purpose
Our goal is for these benchmarks to:
1. Guide responsible development: Computing performance benchmarks such as MLPerf have repeatedly shown their ability to concretely define a common objective such as “faster” and thereby accelerate overall progress towards that objective. Similarly, the safety benchmarks can help define “safer” and thus accelerate the development of “safer” AI systems.
2. Support consumer/purchaser decision making: AI systems are complex and determining if an AI system is suitable for a particular use-case is challenging. The AI safety benchmarks should help individual consumers and corporate purchasers make more informed decisions.
3. Enable technically sound and risk-based policy regulation: Spurred by public concern, governments in the EU, UK, US, and elsewhere are increasingly examining the safety of AI systems. The safety benchmarks should enable data-driven decision making for informing regulations.
Deliverables
Specifically, the working group has the following four major tasks:
- Tests: Curate a pool of safety tests from diverse sources, including facilitating the development of better tests and testing methodologies.
- Benchmarks: Define benchmarks for specific AI use-cases, each of which uses a subset of the tests and summarizes the results in a way that enables decision making by non-experts.
- Platform: Develop a community platform for safety testing of AI systems that supports registration of tests, definition of benchmarks, testing of AI systems, management of test results, and viewing of benchmark scores.
- Governance: Define a set of principles and policies and initiate a broad multi-stakeholder process to ensure trustworthy decision making.
Meeting Schedule
Weekly on Fridays from 8:35 to 9:30 AM Pacific.
Related Blogs and News
- MLCommons AI Safety prompt generation expression of interest. Submissions are now open!
  MLCommons is looking for prompt generation suppliers for its v1.0 AI Safety Benchmark Suite. This is a paid opportunity.
- MLCommons and AI Verify to collaborate on AI Safety Initiative
  The two organizations agreed to a memorandum of intent to collaborate on a set of AI safety benchmarks for LLMs.
- Creating a comprehensive Test Specification Schema for AI Safety
  Helping to systematically document the creation, implementation, and execution of AI safety tests.
AI Safety Working Group Projects
MLCommons AI Safety Overview
MLCommons AI Safety Benchmarks
Prompt Generation – Expression of Interest
How to Join and Access AI Safety Working Group Resources
- To sign up for the group mailing list and receive the meeting invite:
  - Fill out our subscription form and indicate that you'd like to join the AI Safety Working Group.
  - Associate a Google account with your organizational email address.
  - Once your request to join the AI Safety Working Group is approved, you'll be able to access the AI Safety folder in the Public Google Drive.
- To access the GitHub repositories (public):
  - If you want to contribute code, please submit your GitHub username to our subscription form.
  - Visit the GitHub repositories:
AI Safety Working Group Chairs
To contact all AI Safety working group chairs email [email protected].
Joaquin Vanschoren
Joaquin Vanschoren is an Associate Professor of Computer Science at the Eindhoven University of Technology. His research focuses on understanding machine learning algorithms and turning insights into progressively more automated and efficient AI systems. He founded and leads OpenML.org, initiated and chaired the NeurIPS Datasets and Benchmarks track, and has won the Dutch Data Prize, an Amazon Research Award, and an ECMLPKDD Best Demo award. He has given over 30 invited talks, was a tutorial speaker at NeurIPS 2018 and AAAI 2021, and has authored over 150 scientific papers, as well as reference books on Automated Machine Learning and Meta-learning. He is editor-in-chief of DMLR, action editor of JMLR, and moderator for ArXiv. He is a founding member of the European AI networks ELLIS and CLAIRE.
Percy Liang
Percy Liang is an Associate Professor of Computer Science at Stanford University (B.S. from MIT, 2004; Ph.D. from UC Berkeley, 2011) and the director of the Center for Research on Foundation Models. His research spans many topics in machine learning and natural language processing, including robustness, interpretability, semantics, and reasoning. He is also a strong proponent of reproducibility through the creation of CodaLab Worksheets. His awards include the Presidential Early Career Award for Scientists and Engineers (2019), IJCAI Computers and Thought Award (2016), an NSF CAREER Award (2016), a Sloan Research Fellowship (2015), a Microsoft Research Faculty Fellowship (2014), and multiple paper awards at ACL, EMNLP, ICML, and COLT.
Peter Mattson
Peter Mattson is a Senior Staff Engineer at Google. He co-founded and is President of MLCommons®, and co-founded and was General Chair of the MLPerf consortium that preceded it. Previously, he founded the Programming Systems and Applications Group at NVIDIA Research, was VP of software infrastructure for Stream Processors Inc (SPI), and was a managing engineer at Reservoir Labs. His research focuses on understanding machine learning models and data through quantitative metrics and analysis. Peter holds a PhD and MS from Stanford University and a BS from the University of Washington.
AI Safety Workstream Chairs
Advanced Hazards, with focus on Bias, Context and Misrepresentation: Heather Frase, Center for Security and Emerging Technology (CSET)
Commercial Beta Users: Marisa Boston, Reins AI; James Ezick, Qualcomm Technologies, Inc.; Alice Schoenauer Sebag, Cohere; Rebecca Weiss, MLCommons
Evaluator Models: Kurt Bollacker, MLCommons
Generate Prompts: Bertie Vidgen, MLCommons
Grading, Scoring, Reporting: Heather Frase, CSET; Wiebke Hutiri, Sony AI; Peter Mattson, Google, MLCommons and AI Safety co-chair; Besmira Nushi, Microsoft; Forough Poursabzi, Microsoft
Multimodal, with focus on Vision/Language: Ken Fricklas, Turaco Strategy; Alicia Parrish, Google; Paul Röttger, Università Bocconi; Bertie Vidgen, MLCommons
Taxonomy, Personas, Localization and Use Cases: Heather Frase, CSET; Eleonora Presani, Meta
Test Integrity and Score Reliability: Sean McGregor, UL Research Institutes; Rebecca Weiss, MLCommons