Algorithms
Create a set of rigorous and relevant benchmarks to measure neural network training speedups due to algorithmic improvements.
Purpose
We need a more scientifically sound methodology for evaluating training speedups due to new algorithms, including both new optimizers and new model architectures. Cutting-edge machine learning (ML) models are exceeding the compute budgets of many researchers, and ML compute is becoming an ever-larger cost in industry. To reduce the compute cost of ML research and practice, we need rigorous benchmarking of efficiency. Such benchmarks will guide us in selecting the best directions to evolve existing techniques and ultimately enable progress toward models that produce not only better results, but better results at lower cost.
To drive innovation in machine learning algorithms that reduce the time needed to create useful models, we propose a new set of benchmarks that evaluate the training time of different algorithms (models, optimizers, preprocessing, etc.) on a fixed hardware configuration (future iterations can adopt new hardware configurations as needed). Our proposal includes two tracks: (1) a model track and (2) a training algorithm track. The goal of the model track is to find models that can be trained to the target solution quality (out-of-sample error) in the least amount of time on each benchmark dataset. Similarly, the goal of the training algorithm track is to find training algorithms (optimizers, etc.) that train the fixed benchmark models to the target out-of-sample error as fast as possible.
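Both tracks reduce to the same time-to-result measurement: run a submission on the fixed hardware and record the wall-clock time it takes to reach the benchmark's target out-of-sample error. The Python sketch below illustrates that scoring idea only; it is not the official harness, and `init_model`, `train_step`, `evaluate`, and the target/budget constants are placeholder names introduced here for illustration.

```python
import time

# Illustrative constants; each real benchmark would fix its own values.
TARGET_ERROR = 0.05   # target out-of-sample error (placeholder)
MAX_SECONDS = 3600.0  # time budget on the fixed hardware (placeholder)

def time_to_target(init_model, train_step, evaluate, eval_every=100):
    """Return wall-clock seconds until TARGET_ERROR is reached, else None.

    `init_model`, `train_step`, and `evaluate` are hypothetical callables
    standing in for a submission's setup, one update step, and held-out
    evaluation, respectively.
    """
    model = init_model()
    start = time.perf_counter()
    step = 0
    while time.perf_counter() - start < MAX_SECONDS:
        model = train_step(model)
        step += 1
        # Evaluate periodically: checking every step would distort the timing.
        if step % eval_every == 0 and evaluate(model) <= TARGET_ERROR:
            return time.perf_counter() - start  # score: lower is better
    return None  # target not reached within the budget
```

Under this scoring rule, a model-track submission would vary `init_model`, while a training-algorithm-track submission would vary `train_step` against a fixed benchmark model.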
Deliverables
- Rules: We will produce a set of rules for algorithmic efficiency benchmarking that specify an initial 2-3 benchmarks.
- Harness: We will produce a testing harness that is executable on commonly available clouds using MLCube®.
- Baseline training algorithm/model implementations: We will produce a baseline training algorithm and model implementation for each benchmark; these can also serve as submission skeletons (see the sketch after this list).
- Call for participation: Once the rules, harness, and reference implementations are developed, we will call for participation from the research and industry communities.
- Initial submission round.
- Additional submission rounds on a regular schedule.
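To make the submission-skeleton deliverable concrete, here is a minimal sketch of what a training-algorithm-track submission interface could look like, assuming the harness owns the model, data, and timing while the submitter supplies only the update rule. The function names and signatures are our illustration, not the working group's published API.

```python
from typing import Any, Tuple

def init_optimizer_state(hyperparameters: dict, model_params: Any) -> Any:
    """Create the submission's optimizer state for the fixed benchmark model."""
    # Placeholder state; a real submission might allocate momentum buffers here.
    return {"step": 0, "learning_rate": hyperparameters.get("learning_rate", 0.1)}

def update_params(model_params: Any,
                  optimizer_state: Any,
                  batch: Tuple[Any, Any]) -> Tuple[Any, Any]:
    """Consume one batch and return updated (model_params, optimizer_state).

    The harness would call this in a loop and measure the time taken to reach
    the target error; the submitted update rule replaces the ellipsis comment.
    """
    optimizer_state["step"] += 1
    # ... compute gradients on `batch` and apply the submitted update rule ...
    return model_params, optimizer_state
```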
Meeting Schedule
Weekly on Thursdays, 11:35 – 12:30 Pacific Time
How to Join and Access Algorithm Resources
To sign up for the group mailing list, receive the meeting invite, and access shared documents and meeting minutes:
- Fill out our subscription form and indicate that you’d like to join the Algorithms Working Group.
- Associate a Google account with your organizational email address.
- Once your request to join the Algorithms Working Group is approved, you’ll be able to access the Algorithms folder in the Public Google Drive.
To engage in working group discussions, join the group’s channels on the MLCommons Discord server.
To access the GitHub repositories (public):
- If you want to contribute code, please submit your GitHub ID to our subscription form.
- Visit the GitHub repository.
Algorithms Working Group Chairs
To contact all Algorithms Working Group chairs, email [email protected].