AI Risk & Reliability Prompt EOI

Prompt Generation Expression of Interest

Thank you for your interest! This expression of interest is now closed; the deadline to apply was July 19, 2024. Please check the MLCommons website regularly for information about future paid AI safety engagement opportunities.


Deliverable options

Organizations can express interest in one or both of the following deliverable options. While MLCommons can adjust the submission dates of individual deliverables, the project must be completed by October 1, 2024.

All submissions must demonstrate the ability to fulfill one or both of the deliverable options:

Option #1: Pilot Project

  • Generation and delivery of ~200 prompts 
  • ~80% of the prompts should be simple malicious / vulnerable use
  • ~20% of the prompts should be “generic adversarial” prompts
  • Option to cover a target MLCommons language (English, French, Simplified Chinese, Hindi) or propose another language
  • Option to cover one of 13 MLCommons hazards or propose another
  • Smaller budget

Option #2: Full Coverage Project

  • Generation and delivery of ~50,000 prompts in each of the target languages
  • ~80% of the prompts should be simple malicious / vulnerable use
  • ~20% of the prompts should be “generic adversarial” prompts (see the sketch after this list)
  • Must cover English, French, Simplified Chinese, and Hindi
  • Must cover all MLCommons hazards and personas
  • Bigger budget

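To make the deliverable targets concrete, the minimal Python sketch below tallies the prompt mix each option implies: roughly 160 simple / 40 adversarial prompts for the pilot, and roughly 200,000 prompts overall for full coverage (~50,000 in each of the four target languages). The record layout, the `"type"` tags, and the `check_mix` helper are illustrative assumptions, not the MLCommons submission format.

```python
from collections import Counter

# Hypothetical record layout for a delivered prompt. Each prompt is tagged
# with a type: "simple" (simple malicious / vulnerable use) or "adversarial"
# ("generic adversarial"). The actual MLCommons submission format may differ.

TARGET_LANGUAGES = ("English", "French", "Simplified Chinese", "Hindi")

def check_mix(prompts, total_target, tolerance=0.05):
    """Check a batch against the ~80% simple / ~20% adversarial target split."""
    counts = Counter(p["type"] for p in prompts)
    n = len(prompts)
    simple_share = counts["simple"] / n
    print(f"{n} prompts delivered (target ~{total_target}); "
          f"simple: {simple_share:.0%}, adversarial: {1 - simple_share:.0%}")
    return abs(simple_share - 0.80) <= tolerance

# Option #1 (Pilot Project): ~200 prompts in one language, one hazard.
pilot = [{"type": "simple"}] * 160 + [{"type": "adversarial"}] * 40
assert check_mix(pilot, total_target=200)

# Option #2 (Full Coverage Project): ~50,000 prompts in *each* target
# language, i.e. roughly 200,000 prompts overall, with the same 80/20 mix.
per_language = 50_000
print(f"full coverage total: ~{per_language * len(TARGET_LANGUAGES):,} prompts")
```
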
Please read the EOI description for information on who should express interest, disqualifications, the budget, and the expected deliverable timeline. Clarifying questions and answers can be found here. Please email Mala at [email protected] with questions.


AI Risk & Reliability working group contributors

The MLCommons AI Risk & Reliability working group is a global community of industry leaders, practitioners, researchers, and civil society experts committed to building a harmonized approach to AI risk and reliability. The following organizations have contributed to the working group.

  • Accenture
  • ActiveFence
  • Anthropic
  • Argonne National Laboratory
  • Bain & Company
  • Blue Yonder
  • Bocconi University
  • Broadcom
  • cKnowledge, cTuning foundation
  • Carnegie Mellon
  • Center for Security and Emerging Technology
  • Coactive AI
  • Cohere
  • Columbia University
  • Common Crawl Foundation
  • Commn Ground
  • Context Fund
  • Credo AI
  • Deloitte
  • Digital Safety Research Institute
  • Dotphoton
  • EleutherAI
  • Ethriva
  • Febus
  • Futurewei Technologies
  • Georgia Institute of Technology
  • Google
  • Hewlett Packard Enterprise
  • Humanitas AI
  • IIT Delhi
  • Illinois Institute of Technology
  • Inflection
  • Intel
  • Kaggle
  • Lawrence Livermore National Laboratory
  • Learn Prompting
  • Lenovo
  • MIT
  • Meta FAIR
  • Microsoft
  • NASA
  • Nebius
  • NVIDIA Corporation
  • NewsGuard
  • Nutanix
  • OpenAI
  • Process Dynamics
  • Protecto.ai
  • Protiviti
  • Qualcomm Technologies, Inc.
  • RAND
  • Reins AI
  • SAP
  • SaferAI
  • Stanford
  • Surescripts LLC
  • Telecommunications Technology Association
  • Toloka
  • TU Eindhoven
  • Turaco Strategy
  • UC Irvine
  • Univ. of British Columbia (UBC)
  • Univ. of Birmingham
  • Univ. of Cambridge
  • Univ. of Chicago
  • Univ. of Illinois at Urbana-Champaign
  • Univ. of Southern California (USC)
  • Univ. of Trento

Funding for the initial AI Risk & Reliability working group effort was provided by Google, Intel, Meta, NVIDIA, and Qualcomm Technologies, Inc. MLCommons is committed to supporting a long-term effort for this important work and welcomes additional funding contributors.