Best Practices Working Group
Mission
The Best Practices working group aims to improve the ease of use of AI and to bring AI to more people.
Purpose
The Best Practices working group looks for opportunities to address common, cross-cutting needs of AI practitioners. The starting point for this effort is reducing friction in machine learning by ensuring that models are easily portable and reproducible. The first project toward that goal is MLCube™, where we are creating the source code and specifications that make this possible.
MLCube is the shipping container that enables researchers and developers to easily share the software that powers machine learning. MLCube is a set of common conventions for creating ML software that can just "plug-and-play" on many different systems. MLCube makes it easier for researchers to share innovative ML models, for developers to experiment with many different models, and for software companies to create infrastructure for models. It creates opportunities by putting ML in the hands of more people.
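To make "common conventions" concrete: an MLCube project is typically described by a small configuration file (commonly named mlcube.yaml) that declares the container image carrying the model code and the tasks it exposes, along with their inputs and outputs. The sketch below is illustrative only; the task names, image name, and paths are assumptions, and the MLCube specification is the authoritative source for the exact schema.

```yaml
# Illustrative sketch of an MLCube configuration (mlcube.yaml).
# Field names and values here are assumptions for illustration;
# see the MLCube specification for the authoritative schema.
name: mnist
description: Example MLCube that packages an MNIST training workload
authors:
  - {name: "MLCommons Best Practices Working Group"}

docker:
  # Container image that carries the model code and its dependencies.
  image: mlcommons/mnist:0.0.1

tasks:
  download:
    # Fetch the dataset into a declared output directory.
    parameters:
      outputs: {data_dir: data/}
  train:
    # Train the model, reading the dataset and writing logs and checkpoints.
    parameters:
      inputs: {data_dir: data/}
      outputs: {log_dir: logs/, model_dir: model/}
```

With a configuration like this, an MLCube runner can execute the declared tasks (for example, something along the lines of `mlcube run --task=train`) on a laptop, a cloud VM, or a cluster without changes to the model code, which is what makes MLCubes portable and reproducible.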
Deliverables
- The MLCube specification
- Tutorials and instructions for MLCube
- The MLCube OSS project
Meeting Schedule
Weekly on Fridays, 9:00-10:00 AM Pacific.
Mailing List
Working Group Resources
Working Group Chair Emails
Victor Bittorf (victor@mlcommons.org)
Diane Feddema (dfeddema@redhat.com)
Working Group Chair Bios
Victor is an applied research scientist at Facebook, driving PyTorch performance to new heights for both production and research. Previously, he worked in Google Brain on performance for TensorFlow and TPUs. He holds a master's degree in computer science from UW-Madison, where his academic work focused on ML optimization algorithm design and its efficient implementation in systems. Victor enjoys serving his local community through in-kind contributions as a photographer and videographer for not-for-profit organizations.
Diane is a Principal Software Engineer at Red Hat leading performance analysis and visualization for the Open Data Hub managed service. She also creates experiments comparing different types of infrastructure and software frameworks to validate reference architectures for machine learning workloads using MLPerf™. Previously, Diane was a performance engineer at the National Center for Atmospheric Research (NCAR), working on optimization and tuning of parallel global climate models. She also worked at SGI and Cray on performance and compilers. She has a BS in Computer Science from the University of Iowa and an MS in Computer Science from the University of Colorado.