As AI systems begin to proliferate, we all share a keen interest in ensuring that they are safe – and understanding exactly what steps have been taken to make them safe. To that end, the MLCommons® AI Safety working group has created an AI Safety Test Specification Schema that is used in the recent AI Safety v0.5 proof-of-concept (POC) benchmark release. The schema is a key tool for building a shared understanding of how the safety of AI systems is being tested.

Safety tests are a critical building block for evaluating the behavior of AI models and systems. They measure specific safety characteristics of an AI system, such as whether it can generate stereotypical or offensive content, inaccurate or misleading information, or information that might create public safety hazards. AI safety tests need to be thoroughly documented, including their creation, implementation, coverage areas, known limitations, and execution instructions. A test specification captures this information and serves as a central point of coordination for a broad set of practitioners: developers of AI systems, test implementers and executors, and the purchasers and consumers of AI systems. Each of these stakeholders approaches a safety test with a distinct but overlapping set of perspectives and needs.

Standardizing the schema for AI safety tests makes safety testing higher quality, more comprehensive and consistent, and reproducible at scale among the practitioners who implement, execute, or consume the tests. It also makes it easier (and more desirable) to share AI safety tests between components and projects. Conversely, a lack of standardized test specifications can lead to missed test cases, miscommunication about system abilities and limitations, and misunderstandings about known issues, and can ultimately distort critical decisions about AI models and systems at the organizational level.

Documenting AI Safety Tests

The AI Safety Test Specification Schema included in the recently released POC provides a standard template for documenting AI safety tests. It was created and vetted by a large and diverse group of researchers and practitioners in AI and related fields to ensure that it reflects the most up-to-date thinking on how to test for the safety of AI systems. 

The schema includes information on:

  • Test background and context: administrative and identity information about the test (e.g. a unique identifier, name, authors), as well as the purpose and scope including covered hazards, languages, and modalities.
  • Test data: the expected structure, inputs, and outputs of the test, along with information about stakeholders, including target demographic groups and the characteristics of annotators and evaluators.
  • Test implementation, execution, and evaluation: procedures for rating tests, requirements for the execution environment, metrics, and potential nuances of the evaluation process.
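
To make these three sections concrete, here is a minimal sketch of how a completed specification might be represented in code. This is a hypothetical illustration only: the field names, types, and values below are invented for this example and are not the schema's actual identifiers; consult the published template for the authoritative structure.

```python
from dataclasses import dataclass, field
from typing import List

# Hypothetical illustration only: these field names are invented for this
# sketch and are not the MLCommons schema's actual identifiers.
@dataclass
class TestSpecification:
    # Test background and context
    test_id: str                      # unique identifier for the test
    name: str
    authors: List[str]
    hazards_covered: List[str]        # hazard categories the test probes
    languages: List[str]              # e.g. ["en"]
    modalities: List[str]             # e.g. ["text"]

    # Test data
    input_format: str                 # expected structure of test inputs
    output_format: str                # expected structure of system outputs
    target_demographics: List[str]    # demographic groups the test targets
    annotator_characteristics: str    # who rated or labeled the data

    # Test implementation, execution, and evaluation
    rating_procedure: str             # how responses are scored
    execution_environment: str        # requirements for running the test
    metrics: List[str]                # metrics reported by the test

    # Provenance and revision history (discussed further below)
    revision_history: List[str] = field(default_factory=list)

# A toy, filled-in example; the contents are invented, not taken from the
# v0.5 POC specifications.
spec = TestSpecification(
    test_id="example-0001",
    name="Example hazard test",
    authors=["Jane Doe"],
    hazards_covered=["public safety hazards"],
    languages=["en"],
    modalities=["text"],
    input_format="single-turn text prompt",
    output_format="free-text model response",
    target_demographics=["general adult users"],
    annotator_characteristics="trained safety annotators",
    rating_procedure="per-response safe/unsafe rating",
    execution_environment="offline batch evaluation",
    metrics=["percent unsafe responses"],
    revision_history=["v0.1: initial draft"],
)
print(spec.test_id, spec.hazards_covered)
```

Representing a specification as structured data like this makes it straightforward to validate required fields and to diff revisions, though the published schema itself is a fill-in-the-blanks document template rather than code.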

“Safety test specifications are typically ‘living documents’ that are updated as needs change and the tests evolve over their lifecycle,” says Besmira Nushi, Microsoft Principal Researcher and co-author of the test specification schema. “The schema includes provisions for tracking both the provenance and the revision history of a specification – a feature that the MLCommons AI Safety working group is already putting to use as it marches toward a version 1.0 release of the AI safety benchmark later this year.”
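
As a rough illustration of that provenance and revision tracking, the short sketch below appends dated entries to a specification's revision history. Again, this is hypothetical: the helper function, field names, and dates are invented for the example and are not part of the published schema.

```python
from datetime import date

# Hypothetical sketch of revision tracking for a "living" specification.
# Field names, dates, and entries are illustrative, not the schema's own.
revision_history = [
    {"version": "0.5", "date": "2024-04-16",
     "summary": "Released with the v0.5 POC benchmark."},
]

def record_revision(history, version, summary):
    """Append a dated entry so every change to the spec stays auditable."""
    history.append({
        "version": version,
        "date": date.today().isoformat(),
        "summary": summary,
    })

record_revision(revision_history, "0.6-draft",
                "Working-group updates on the path to the v1.0 benchmark.")
```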

The AI Safety Test Specification Schema builds on related AI safety efforts, including Datasheets for Datasets for documenting training datasets; Model/System Cards for documenting the performance characteristics and limitations of AI models and systems; Transparency Notes for communicating systems’ capabilities and limits; and the AI Safety v0.5 POC benchmark itself.

The Test Specification Schema contains the fill-in-the-blanks specification template, as well as examples of completed test specifications, including those for the tests in the AI Safety v0.5 POC benchmark.

Building a Shared Understanding of AI Safety Testing

“The better job we can collectively do to clearly, consistently, and thoroughly document the testing we do on AI systems, the more we can trust that they are safe for us to use,” says Forough Poursabzi, co-author of the test specification schema. “The AI Safety Test Specification Schema is an important step forward in helping all of us to speak the same language as we talk about the safety testing that we do.”

AI safety tests are one piece of a comprehensive safety-enhancing ecosystem that also includes red-teaming strategies, evaluation metrics, and aggregation strategies.

The AI Safety Test Specification Schema can be downloaded here. We welcome feedback from the community.

Contributing authors:

  • Besmira Nushi, Microsoft, workstream co-chair
  • Forough Poursabzi, Microsoft, workstream co-chair
  • Adarsh Agrawal, Stony Brook University
  • Kurt Bollacker, MLCommons
  • James Ezick, Qualcomm Technologies, Inc.
  • Marisa Ferrara Boston, Reins AI
  • Kenneth Fricklas, Turaco Strategy
  • Lucía Gamboa, Credo AI
  • Michalis Karamousadakis, Plaixus
  • Bo Li, University of Chicago
  • Lama Nachman, Intel
  • James Noh, BreezeML
  • Cigdem Patlak
  • Hashim Shaik, National University
  • Wenhui Zhang, Bytedance