Tiny Working Group
Develop “tiny ML” benchmarks to evaluate inference performance on ultra-low-power systems.
ML inference on the edge is increasingly attractive as a way to improve the energy efficiency, privacy, responsiveness, and autonomy of edge devices. Recently there have been significant strides, in both academia and industry, toward expanding the scope of edge machine learning to a new class of ultra-low-power computational platforms. "Tiny ML," or machine learning on extremely constrained devices, breaks the traditional paradigm of energy- and compute-hungry machine learning, and by eliminating networking overhead it allows for greater overall efficiency relative to a cloud-centric approach. This effort extends the accessibility and ubiquity of machine learning, whose reach has traditionally been limited by the cost of larger computing platforms.
To enable the development and understanding of new, tiny machine learning devices, this “TinyMLPerf” working group will extend the existing inference benchmark to include microcontrollers and other resource-constrained computing platforms.
- 3-4 benchmarks with defined datasets and reference models for the closed division
- Software framework to load inputs and measure latency
- Rules for benchmarking latency and energy
- Power and energy measurement with partners
Weekly on Monday from 6:00-7:00AM Pacific.
Working Group Resources
Working Group Chair Emails
Colby Banbury (email@example.com)
Vijay Janapa Reddi (firstname.lastname@example.org)
Working Group Chair Bios
Colby Banbury is a Ph.D. student in the Edge Computing Lab at Harvard University. He received his B.S. from the University of Delaware. His research focuses on ultra-low-power machine learning and hardware-software codesign.
Vijay Janapa Reddi is an Associate Professor at Harvard University. Before joining Harvard, he was an Associate Professor in the Department of Electrical and Computer Engineering at The University of Texas at Austin. His research interests include computer architecture and runtime systems, specifically in the context of autonomous machines and mobile and edge computing systems. Dr. Janapa Reddi has received multiple honors and awards, including the National Academy of Engineering (NAE) Gilbreth Lecturer Honor, and has been inducted into the MICRO and HPCA Halls of Fame. He received a Ph.D. in computer science from Harvard University, an M.S. from the University of Colorado at Boulder, and a B.S. from Santa Clara University.