AI Risk & Reliability

Support community development of AI risk and reliability tests, and organize the definition of research- and industry-standard AI safety benchmarks based on those tests.


Purpose

Our goal is for these benchmarks to guide responsible development, support consumer and purchaser decision making, and enable technically sound, risk-based policy negotiation.

Deliverables

We are a community-based effort and always welcome new members; no prior experience or education is required to join as a volunteer. The working group has the following four major tasks:

  • Tests: Curate a pool of safety tests from diverse sources, including facilitating the development of better tests and testing methodologies.
  • Benchmarks: Define benchmarks for specific AI use-cases, each of which uses a subset of the tests and summarizes the results in a way that enables decision making by non-experts.
  • Platform: Develop a community platform for AI safety testing that supports registering tests, defining benchmarks, testing AI systems, managing test results, and viewing benchmark scores (see the sketch after this list).
  • Governance: Define a set of principles and policies and initiate a broad multi-stakeholder process to ensure trustworthy decision making.
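To make the Tests, Benchmarks, and Platform tasks concrete, the sketch below shows one minimal way such a platform might represent registered tests and benchmark definitions, and how a benchmark could summarize per-test results into a single score for non-experts. All class names, fields, and values here are hypothetical illustrations, not the working group's actual schema.

```python
from dataclasses import dataclass
from statistics import mean

@dataclass
class SafetyTest:
    """A registered safety test (hypothetical schema, for illustration only)."""
    test_id: str
    hazard: str          # hazard category, e.g. drawn from a hazard taxonomy
    description: str

@dataclass
class Benchmark:
    """A benchmark: a named subset of tests for a specific AI use case."""
    name: str
    use_case: str
    test_ids: list[str]

    def score(self, pass_rates: dict[str, float]) -> float:
        """Summarize per-test pass rates (0.0-1.0) into one number
        that a non-expert can compare across systems."""
        return mean(pass_rates[t] for t in self.test_ids)

# Register two hypothetical tests, define a benchmark over them, and score a system.
registry = {
    "t1": SafetyTest("t1", "hate_speech", "Prompts that attempt to elicit hateful responses"),
    "t2": SafetyTest("t2", "self_harm", "Prompts that attempt to elicit self-harm encouragement"),
}
chat_benchmark = Benchmark("general-chat-v0", "general-purpose chat", ["t1", "t2"])
print(chat_benchmark.score({"t1": 0.98, "t2": 0.91}))  # 0.945
```

A real platform would also need versioning of tests, provenance for test results, and access controls; this sketch only illustrates the core relationship between a pool of tests and the benchmarks defined over subsets of it.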
Meeting Schedule

Weekly on Fridays, 08:30 – 09:30 Pacific Time (next meeting: Friday, December 13, 2024)

How to Join and Access Resources

To join the working group or to access its resources, reach out to us at [email protected].
AI Risk & Reliability Working Group Chairs

To contact all AI Risk & Reliability working group chairs, email [email protected].

Joaquin Vanschoren

Joaquin Vanschoren is an Associate Professor of Computer Science at the Eindhoven University of Technology. His research focuses on understanding machine learning algorithms and turning those insights into progressively more automated and efficient AI systems. He founded and leads OpenML.org, initiated and chaired the NeurIPS Datasets and Benchmarks track, and has won the Dutch Data Prize, an Amazon Research Award, and an ECML PKDD Best Demo Award. He has given over 30 invited talks, was a tutorial speaker at NeurIPS 2018 and AAAI 2021, and has authored over 150 scientific papers, as well as reference books on Automated Machine Learning and Meta-learning. He is editor-in-chief of DMLR, an action editor of JMLR, and a moderator for arXiv. He is a founding member of the European AI networks ELLIS and CLAIRE.

Percy Liang

Percy Liang is an Associate Professor of Computer Science at Stanford University (B.S. from MIT, 2004; Ph.D. from UC Berkeley, 2011) and the director of the Center for Research on Foundation Models. His research spans many topics in machine learning and natural language processing, including robustness, interpretability, semantics, and reasoning. He is also a strong proponent of reproducibility through the creation of CodaLab Worksheets. His awards include the Presidential Early Career Award for Scientists and Engineers (2019), IJCAI Computers and Thought Award (2016), an NSF CAREER Award (2016), a Sloan Research Fellowship (2015), a Microsoft Research Faculty Fellowship (2014), and multiple paper awards at ACL, EMNLP, ICML, and COLT.

Peter Mattson

Peter Mattson is a Senior Staff Engineer at Google. He co-founded and is President of MLCommons®, and co-founded and was General Chair of the MLPerf consortium that preceded it. Previously, he founded the Programming Systems and Applications Group at NVIDIA Research, was VP of software infrastructure for Stream Processors Inc (SPI), and was a managing engineer at Reservoir Labs. His research focuses on understanding machine learning models and data through quantitative metrics and analysis. Peter holds a PhD and MS from Stanford University and a BS from the University of Washington.

AI Risk & Reliability Workstream Chairs

Commercial beta users: Marisa Boston, Reins AI; James Ezick, Qualcomm Technologies, Inc.; and Rebecca Weiss, MLCommons

Evaluator models: Kurt Bollacker, MLCommons; Ryan Tsang; and Shaona Ghosh, NVIDIA

Grading, scoring, and reporting: Wiebke Hutiri, Sony AI; Peter Mattson, Google, MLCommons AI Safety co-chair; and Heather Frase, Veraitech

Hazards: Heather Frase, Veraitech; Chris Knotz; and Adina Williams, Meta

Multimodal: Ken Fricklas, Turaco Strategy; Alicia Parrish, Google; and Paul Röttger, Università Bocconi

Prompts: Mala Kumar, MLCommons; and Bertie Vidgen, MLCommons

Scope (Taxonomy, Personas, Localization, and Use Cases): Heather Frase, Veraitech; and Eleonora Presani, Meta

Test integrity and score reliability: Sean McGregor, UL Research Institutes; and Rebecca Weiss, MLCommons

Questions?

Reach out to us at [email protected].