Dynabench Working Group
Accelerate machine learning innovation and increase scientific rigor by providing a flexible ML benchmarking platform.
Dynabench is a research platform for dynamic data collection and benchmarking. In particular, Dynabench challenges existing ML benchmarking dogma by embracing dynamic dataset generation. Benchmarks based on static datasets have well-known issues: they saturate quickly, are susceptible to overfitting, contain exploitable annotator artifacts, and have unclear or imperfect evaluation metrics. In this sense, Dynabench enables a scientific experiment: is it possible to make faster progress if data is collected dynamically, with humans and models in the loop, rather than in the traditional static way? Further, Dynabench enables an ecosystem of other ML benchmarks in areas such as algorithmic efficiency.
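The human-and-model-in-the-loop idea above can be illustrated with a minimal sketch of one dynamic collection round. The `ToyModel` class, `collect_round` function, and annotator examples here are hypothetical stand-ins for illustration only, not Dynabench's actual API:

```python
# Illustrative sketch of dynamic adversarial data collection.
# All names here are hypothetical; Dynabench's real interfaces differ.

def collect_round(model, annotators, examples_per_round):
    """One round: humans write examples that try to fool the current model.

    Examples that fool the model are kept and become training/evaluation
    data for the next round, so the benchmark evolves with the model.
    """
    fooling_examples = []
    for _ in range(examples_per_round):
        example, gold_label = next(annotators)   # a human writes an example
        prediction = model.predict(example)      # the model answers in real time
        if prediction != gold_label:             # model fooled: keep the example
            fooling_examples.append((example, gold_label))
    return fooling_examples

# Toy stand-ins so the sketch runs end to end.
class ToyModel:
    """A brittle sentiment model with an exploitable keyword heuristic."""
    def predict(self, example):
        return "positive" if "good" in example else "negative"

def toy_annotators():
    # A human annotator might exploit the model's keyword heuristic:
    yield ("the movie was good in no way at all", "negative")
    yield ("a plainly bad film", "negative")

model = ToyModel()
new_data = collect_round(model, toy_annotators(), examples_per_round=2)
# Only the example that fooled the model is kept for the next round.
```

The point of the loop is that annotators adapt to each model's weaknesses, so the collected data targets exactly the failure modes a static dataset would miss.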
Roadmap for Dynabench development
- Dynabench benchmarking platform
- Dynabench community support
Meetings are held weekly on Thursdays from 10:35-11:30 AM Pacific.
How to Join
Use this link to request to join the group/mailing list and receive the meeting invite:
Dynabench Google Group.
Requests are manually reviewed, so please be patient.
Working Group Resources
- Shared documents and meeting minutes:
- Associate a Google account with your e-mail address.
- Ask to join our Public Google Group.
- Once approved, go to the Dynabench folder in our Public Google Drive.
- GitHub (public)
- If you want to contribute code, please sign our CLA first.
- GitHub link.
Working Group Chairs
Adina Williams (email@example.com) - LinkedIn
Adina Williams is an AI Research Scientist in the Facebook Artificial Intelligence Research (FAIR) group in New York City. She received her PhD in Linguistics from New York University in the fall of 2018 under the supervision of Liina Pylkkänen, and also contributed to the Machine Learning for Language Laboratory in the Center for Data Science with the support of Sam Bowman. Her research aims to bridge the gap between linguistics, cognitive science, and NLP. She is currently working on projects involving natural language inference, evaluating model biases, and information-theoretic approaches to computational morphology.
Max Bartolo (firstname.lastname@example.org) - LinkedIn - Twitter
Max leads the Command modelling team at Cohere, working on improving adversarial robustness and the overall capabilities of large language models. He is one of the original contributors to the Dynabench working group, which he currently co-leads, and he lectures at UCL.