Best Practices Working Group


The best practices working group aims to improve AI ease-of-use and to scale AI to more people.


The best practices working group looks for opportunities to address common, cross-cutting needs of AI practitioners. Its starting point is reducing friction in machine learning by ensuring that models are easily portable and reproducible, beginning with the MLCube™ project, where we are creating the source code and specifications to achieve this.

MLCube is the shipping container that enables researchers and developers to easily share the software that powers machine learning. MLCube is a set of common conventions for creating ML software that can just "plug-and-play" on many different systems. MLCube makes it easier for researchers to share innovative ML models, for developers to experiment with many different models, and for software companies to build infrastructure around models. It creates opportunities by putting ML in the hands of more people.


The working group's key deliverables are:

  1. The MLCube specification
  2. Tutorials and instructions for MLCube
  3. The MLCube OSS project

Meeting Schedule

Weekly on Fridays, 9:05-10:00 AM Pacific.

How to Join

Use this link to request to join the group/mailing list and receive the meeting invite:
Best Practices Google Group.
Requests are reviewed manually, so please be patient.

Working Group Resources

Shared documents and meeting minutes:

  1. Associate a Google account with your e-mail address.
  2. Ask to join our Public Google Group.
  3. Ask to join our Members Google Group.
  4. Once approved, go to the Best Practices folder in the Members Google Drive.

Working Group Chairs

Sergey Serebryakov - LinkedIn

Sergey is a machine learning engineer at Hewlett Packard Labs. He has worked on a number of projects, including event and relation extraction from text; benchmarking tools and performance analysis for deep learning workloads; and real-time anomaly detection for time series data. Currently, he focuses on research problems associated with collecting machine learning metadata and using it to accelerate machine learning pipelines. Sergey received his Ph.D. from the Saint-Petersburg Institute of Informatics and Automation, and his Master's and Bachelor's degrees from Saint-Petersburg State Polytechnical University.

Diane Feddema - LinkedIn

Diane is a Principal Software Engineer at Red Hat, leading performance analysis and visualization for the Open Data Hub managed service. She also creates experiments comparing different types of infrastructure and software frameworks to validate reference architectures for machine learning workloads using MLPerf™. Previously, Diane was a performance engineer at the National Center for Atmospheric Research (NCAR), working on optimization and tuning of parallel global climate models. She also worked at SGI and Cray on performance and compilers. She has a BS in Computer Science from the University of Iowa and an MS in Computer Science from the University of Colorado.