Algorithms

Create a set of rigorous and relevant benchmarks to measure neural network training speedups due to algorithmic improvements.


Purpose


We need a more scientifically sound methodology for evaluating training speedups due to new algorithms, including both new optimizers and new model architectures. Cutting-edge machine learning (ML) models are exceeding the compute budgets of many researchers, and ML compute is becoming an ever larger cost in industry. To reduce the compute cost of ML research and practice, we need rigorous benchmarking of efficiency. Such benchmarks will guide us in selecting the best directions to evolve existing techniques and ultimately enable progress toward models that produce not only better results, but better results at lower cost.

To drive innovation in machine learning algorithms that reduce the time needed to create useful models, we propose a new set of benchmarks to evaluate the training time for different algorithms (models, optimizers, preprocessing, etc.) on a fixed hardware configuration (future iterations can adopt new hardware configurations as needed). Our proposal includes two tracks: (1) a model track and (2) a training algorithm track. The goal of the model track is to find models that can be trained to the target solution quality (out-of-sample error) in the least amount of time on each benchmark dataset. Similarly, the goal of the training algorithm track is to find training algorithms (optimizers, etc.) that train benchmark models to the target out-of-sample error as fast as possible.
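As a rough illustration of this scoring idea, the Python sketch below measures the wall-clock time a training run needs to reach a fixed target out-of-sample error. The names (train_step, validation_error, TARGET_ERROR, the budget, and the evaluation cadence) are illustrative placeholders, not the benchmark's actual interface or rules.

    """Minimal sketch of time-to-target scoring (illustrative, not the official harness)."""
    import time

    # Hypothetical benchmark constants (placeholders, not official values).
    TARGET_ERROR = 0.05   # target out-of-sample error
    MAX_SECONDS = 60.0    # wall-clock budget for this toy example
    EVAL_EVERY = 100      # evaluate every N training steps


    def train_step(state):
        """Placeholder for one model/optimizer update made by a submission."""
        state["step"] += 1
        return state


    def validation_error(state):
        """Placeholder for an out-of-sample evaluation; it decays with steps
        here only so that this toy example terminates."""
        return 1.0 / (1.0 + 0.01 * state["step"])


    def time_to_target():
        """Return wall-clock seconds needed to reach TARGET_ERROR, or None on timeout."""
        state = {"step": 0}
        start = time.perf_counter()
        while time.perf_counter() - start < MAX_SECONDS:
            state = train_step(state)
            if state["step"] % EVAL_EVERY == 0 and validation_error(state) <= TARGET_ERROR:
                return time.perf_counter() - start
        return None  # target quality not reached within the budget


    if __name__ == "__main__":
        print("time to target (s):", time_to_target())

Under this kind of metric, any change that reaches the target quality sooner (a better optimizer, architecture, or data pipeline) scores better, and runs that never reach the target receive no score.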

Deliverables


  • Rules: We will produce a set of rules for algorithmic efficiency benchmarking that specify an initial 2-3 benchmarks.
  • Harness: We will produce a testing harness that is executable on commonly available clouds using MLCube®.
  • Baseline training algorithm/model implementations: We will produce baseline training algorithm and model implementations for each benchmark, which can also serve as submission skeletons (see the illustrative sketch after this list).
  • Call for participation.
  • Initial submission round: Once the rules, harness, and reference implementations are developed, we will call for participation by the research and industry communities.
  • Additional submission rounds on a regular schedule.
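As a rough illustration of what a training-algorithm-track submission skeleton might look like, the sketch below assumes the harness asks a submission for two functions: one that initializes optimizer state and one that applies a single parameter update. The function names, signatures, and the SGD-with-momentum update shown are hypothetical placeholders, not the published submission API.

    """Illustrative submission skeleton (hypothetical interface, not the official API)."""
    from typing import Any, Dict, Tuple

    Params = Dict[str, float]
    State = Dict[str, Any]


    def init_optimizer_state(model_params: Params,
                             hyperparameters: Dict[str, float]) -> State:
        """Create optimizer state (here, momentum buffers) for the given model."""
        return {"step": 0, "momentum": {name: 0.0 for name in model_params}}


    def update_params(model_params: Params,
                      optimizer_state: State,
                      gradients: Params,
                      hyperparameters: Dict[str, float]) -> Tuple[Params, State]:
        """Apply one SGD-with-momentum update and return new params and state."""
        lr = hyperparameters.get("learning_rate", 0.1)
        beta = hyperparameters.get("momentum", 0.9)
        new_momentum, new_params = {}, {}
        for name, grad in gradients.items():
            new_momentum[name] = beta * optimizer_state["momentum"][name] + grad
            new_params[name] = model_params[name] - lr * new_momentum[name]
        return new_params, {"step": optimizer_state["step"] + 1, "momentum": new_momentum}

A submission would replace the update rule while keeping the interface fixed, so the harness can time every submission under identical conditions.
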
Meeting Schedule

Weekly on Thursdays, 11:35 – 12:30 Pacific Time (next meeting: Thursday, December 19, 2024).


How to Join and Access Algorithm Resources 


Algorithms Working Group Chairs

To contact all Algorithms working group chairs, email [email protected].

Frank Schneider

Frank Schneider is a postdoctoral researcher at the Chair for the Methods of Machine Learning at the University of Tübingen. Before that, he did his Ph.D. in the same group as part of the IMPRS-IS (International Max Planck Research School for Intelligent Systems) under the supervision of Prof. Dr. Philipp Hennig. His research focuses on helping the community move beyond the unsatisfactory user experience of current optimization methods for deep learning. He holds a Bachelor’s and Master’s degree in Simulation Technology from the University of Stuttgart as well as a Master’s degree in Industrial and Applied Mathematics from the Eindhoven University of Technology.

George Dahl

George Dahl received his Ph.D. from the University of Toronto under the supervision of Geoff Hinton, where he worked on deep learning approaches to problems in speech recognition, computational chemistry, and natural language text processing. Along with his collaborators, he created the first successful deep acoustic models for speech recognition, technology that now forms the basis for modern speech recognition. He has been a research scientist at Google on the Brain team since 2015. His current research focuses on improving our empirical understanding of neural network training as well as on deep learning applications to linguistic, perceptual, chemical, biological, and medical data.

Engineering Leads

Priya Kasimbeg

Priya Kasimbeg is a Research Engineer at Google DeepMind in Mountain View, CA, where she works on deep learning training algorithms research. She holds a Bachelor of Arts in Physics and Economics from New York University. She also holds a Master of Science degree in Computational and Mathematical Engineering from Stanford University, where she worked with the Stanford Artificial Intelligence Laboratory and the Autonomous Systems Laboratory.

Zachary Nado

Zachary Nado is a Research Engineer on the Google Research, Brain Team in Cambridge, MA, where he works on accelerating deep learning training algorithms and tuning. He holds a Bachelor of Science with honors in Computer Science and Applied Mathematics from Brown University, where he worked as a research associate in the Serre Lab.