AI Risk & Reliability Working Group
Mission
Support community development of AI risk and reliability tests and organize definition of research- and industry-standard AI safety benchmarks based on those tests.
Purpose
Our goal is for these benchmarks to guide responsible development, support consumer and purchasing decisions, and enable technically sound, risk-based policy negotiation.
Deliverables
We are a community-based effort and always welcome new members. No previous experience or education is required to join as a volunteer.
Specifically, the working group has the following four major tasks:
- Tests: Curate a pool of safety tests from diverse sources, including facilitating the development of better tests and testing methodologies.
- Benchmarks: Define benchmarks for specific AI use cases, each of which uses a subset of the tests and summarizes the results in a way that enables decision making by non-experts (see the sketch after this list).
- Platform: Develop a community platform for safety testing of AI systems that supports registration of tests, definition of benchmarks, testing of AI systems, management of test results, and viewing of benchmark scores.
- Governance: Define a set of principles and policies and initiate a broad multi-stakeholder process to ensure trustworthy decision making.
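As a rough illustration of how the Tests and Benchmarks deliverables fit together, the sketch below shows one way a benchmark could bundle a use case with a subset of registered safety tests and roll per-test scores up into a grade a non-expert can read. All names, fields, and thresholds here are hypothetical assumptions, not the working group's actual platform API.

```python
# Hypothetical sketch, not the working group's actual platform API.
# It illustrates the relationship described above: a benchmark targets one
# AI use case, draws on a subset of the registered safety tests, and
# summarizes per-test scores into a grade that non-experts can read.
from dataclasses import dataclass
from statistics import mean


@dataclass
class SafetyTest:
    name: str    # identifier of a registered test, e.g. a hazard-specific prompt set
    hazard: str  # hazard category the test probes


@dataclass
class Benchmark:
    use_case: str            # e.g. "general-purpose chat assistant"
    tests: list[SafetyTest]  # subset of the registered test pool

    def grade(self, scores: dict[str, float]) -> str:
        """Collapse per-test scores in [0, 1] into a coarse, readable grade."""
        overall = mean(scores[test.name] for test in self.tests)
        if overall >= 0.9:
            return "Good"
        if overall >= 0.7:
            return "Fair"
        return "Poor"


# Example: two hypothetical tests rolled up into a single benchmark grade.
chat_benchmark = Benchmark(
    use_case="general-purpose chat assistant",
    tests=[
        SafetyTest("self_harm_prompts", "self-harm"),
        SafetyTest("weapons_prompts", "indiscriminate weapons"),
    ],
)
print(chat_benchmark.grade({"self_harm_prompts": 0.95, "weapons_prompts": 0.88}))  # -> "Good"
```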
Meeting Schedule
Weekly on Fridays, 08:30–09:30 Pacific Time. Next meeting: Friday, November 8, 2024.
Related Blogs and News
- MLCommons AI Safety Working Group’s Rapid Progress to a v1.0 Release: Building a comprehensive approach to measuring the safety of LLMs and beyond.
- MLCommons AI Safety prompt generation expression of interest: submissions are now open! MLCommons is looking for prompt generation suppliers for its v1.0 AI Safety Benchmark Suite. This will be a paid opportunity.
- MLCommons and AI Verify to collaborate on AI Safety Initiative: The two organizations agree to a memorandum of intent to collaborate on a set of AI safety benchmarks for LLMs.
AI Risk & Reliability Working Group Projects
- MLCommons AI Safety Overview
- MLCommons AI Safety Benchmarks
- Prompt Generation – Expression of Interest
How to Join and Access AI Risk & Reliability Working Group resources
- To sign up for the group mailing list and receive the meeting invite:
  - Fill out our subscription form and indicate that you’d like to join the AI Risk & Reliability working group.
  - Associate a Google account with your organizational email address.
  - Once your request to join the AI Risk & Reliability working group is approved, you’ll be able to access the AI Risk & Reliability folder in the Public Google Drive.
- To access the GitHub repositories (public):
  - If you want to contribute code, please submit your GitHub username to our subscription form.
  - Visit the GitHub repositories:
AI Risk & Reliability Working Group Chairs
To contact all AI Risk & Reliability working group chairs, email [email protected].
Joaquin Vanschoren
Joaquin Vanschoren is an Associate Professor of Computer Science at the Eindhoven University of Technology. His research focuses on understanding machine learning algorithms and turning insights into progressively more automated and efficient AI systems. He founded and leads OpenML.org, initiated and chaired the NeurIPS Datasets and Benchmarks track, and has won the Dutch Data Prize, an Amazon Research Award, and an ECMLPKDD Best Demo award. He has given over 30 invited talks, was a tutorial speaker at NeurIPS 2018 and AAAI 2021, and has authored over 150 scientific papers, as well as reference books on Automated Machine Learning and Meta-learning. He is editor-in-chief of DMLR, action editor of JMLR, and moderator for ArXiv. He is a founding member of the European AI networks ELLIS and CLAIRE.
Percy Liang
Percy Liang is an Associate Professor of Computer Science at Stanford University (B.S. from MIT, 2004; Ph.D. from UC Berkeley, 2011) and the director of the Center for Research on Foundation Models. His research spans many topics in machine learning and natural language processing, including robustness, interpretability, semantics, and reasoning. He is also a strong proponent of reproducibility through the creation of CodaLab Worksheets. His awards include the Presidential Early Career Award for Scientists and Engineers (2019), IJCAI Computers and Thought Award (2016), an NSF CAREER Award (2016), a Sloan Research Fellowship (2015), a Microsoft Research Faculty Fellowship (2014), and multiple paper awards at ACL, EMNLP, ICML, and COLT.
Peter Mattson
Peter Mattson is a Senior Staff Engineer at Google. He co-founded and is President of MLCommons®, and co-founded and was General Chair of the MLPerf consortium that preceded it. Previously, he founded the Programming Systems and Applications Group at NVIDIA Research, was VP of software infrastructure for Stream Processors Inc (SPI), and was a managing engineer at Reservoir Labs. His research focuses on understanding machine learning models and data through quantitative metrics and analysis. Peter holds a PhD and MS from Stanford University and a BS from the University of Washington.
AI Risk & Reliability Workstream Chairs
- Commercial beta users: Marisa Boston, Reins AI; James Ezick, Qualcomm Technologies, Inc.; and Rebecca Weiss, MLCommons
- Evaluator models: Kurt Bollacker, MLCommons; Ryan Tsang; and Shaona Ghosh, NVIDIA
- Grading, scoring, and reporting: Wiebke Hutiri, Sony AI; Peter Mattson, Google (MLCommons and AI Safety co-chair); and Heather Frase, Center for Security and Emerging Technology (CSET)
- Hazards: Heather Frase, Center for Security and Emerging Technology (CSET); Chris Knotz; and Adina Williams, Meta
- Multimodal: Ken Fricklas, Turaco Strategy; Alicia Parrish, Google; and Paul Röttger, Università Bocconi
- Prompts: Mala Kumar, MLCommons; and Bertie Vidgen, MLCommons
- Scope (Taxonomy, Personas, Localization, and Use Cases): Heather Frase, CSET; and Eleonora Presani, Meta
- Test integrity and score reliability: Sean McGregor, UL Research Institutes; and Rebecca Weiss, MLCommons
Questions?
Reach out to us at [email protected].