Dynabench Working Group

Mission

Accelerate machine learning innovation and increase scientific rigor by providing a flexible ML benchmarking platform.

Purpose

Dynabench is a research platform for dynamic data collection and benchmarking. In particular, Dynabench challenges existing ML benchmarking dogma by embracing dynamic dataset generation. Machine learning benchmarks based on static datasets have well-known issues: they saturate quickly, are susceptible to overfitting, contain exploitable annotator artifacts, and have unclear or imperfect evaluation metrics. In this sense, Dynabench enables a scientific experiment: is it possible to make faster progress if data is collected dynamically, with humans and models in the loop, rather than in the old-fashioned static way? Further, Dynabench enables an ecosystem of other ML benchmarks in areas such as algorithmic efficiency.

Deliverables

Roadmap for Dynabench development
Dynabench benchmarking platform
Dynabench community support

Meeting Schedule

Weekly on Thursdays from 10:35-11:30 AM Pacific.

How to Join

Use this link to request to join the group/mailing list and receive the meeting invite:
Dynabench Google Group.
Requests are manually reviewed, so please be patient.

Working Group Resources

Working Group Chairs

Adina Williams (adinawilliams@mlcommons.org) - LinkedIn

Adina Williams is an AI Research Scientist in the Facebook Artificial Intelligence Research (FAIR) group in New York City. She received her PhD in Linguistics from New York University in the fall of 2018 under the supervision of Liina Pylkkänen, and while there also contributed to the Machine Learning for Language Laboratory in the Center for Data Science with the support of Sam Bowman. Her research aims to bridge the gap between linguistics, cognitive science, and NLP. She is currently working on projects involving natural language inference, evaluating model biases, and information-theoretic approaches to computational morphology.

Max Bartolo (max@mlcommons.org) - LinkedIn - Twitter

Max leads the Command modelling team at Cohere, working on improving adversarial robustness and the overall capabilities of large language models. He is one of the original contributors to Dynabench, currently co-leads the working group, and also lectures at UCL.