Best Practices Working Group
Make machine learning more reproducible and easier to manage for the broader community by building logging tools and recommending approaches for tracking and operating machine learning systems.
Across MLCommons projects, we strive to simplify the user experience by providing a unified set of tools. Centralized logging tools are especially critical because they simplify rules compliance and ensure that all vendor submissions for MLPerf benchmarks are easy to debug and capture the relevant ML system details.
This WG strives to improve the reproducibility of results and to automate the documentation of results. By capturing system-level specs and increasing reproducibility, we can begin to build a more detailed matrix of performance-impacting factors. By improving automation, we can improve the user experience and verify that each vendor submission includes the requisite information.
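Capturing system-level specs in a consistent, machine-readable form is what makes a matrix of performance-impacting factors possible. A minimal sketch of the idea, using only the Python standard library; the field names here are illustrative assumptions, not the WG's actual system-spec schema:

```python
import json
import platform

def capture_system_specs() -> dict:
    """Collect basic system details as a JSON-serializable dict.

    Hypothetical sketch: these fields merely illustrate the kind of
    metadata a benchmark submission might record alongside its results.
    """
    return {
        "os": platform.system(),
        "os_release": platform.release(),
        "machine": platform.machine(),
        "processor": platform.processor(),
        "python_version": platform.python_version(),
    }

# Emit the specs as one sorted JSON object so runs can be diffed easily.
print(json.dumps(capture_system_specs(), sort_keys=True))
```

Serializing the specs as a single sorted JSON object keeps submissions diffable, which helps when debugging why two nominally identical systems produced different results.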
- Logging and reporting tools for MLCommons projects
- Logging metrics and format
- Definition and examples of system specs
- Roadmap for unified logging tools across MLCommons projects, aligned with the inference, training, and best practices roadmaps
- Best practices for MLPerf training and inference result reproducibility
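To make the "logging metrics and format" deliverable concrete, here is a sketch of a structured, line-oriented log record. The `:::MLLOG` prefix and field names are loosely modeled on the general shape of MLPerf-style logs, but they are assumptions for illustration, not the WG's specification:

```python
import json
import time

def mllog_line(key, value, event_type="POINT_IN_TIME"):
    """Format one structured log line with a greppable prefix.

    Illustrative sketch: the ':::MLLOG' prefix and field names are
    assumptions, not an official format definition.
    """
    record = {
        "key": key,
        "value": value,
        "event_type": event_type,
        "time_ms": int(time.time() * 1000),  # wall-clock timestamp in ms
    }
    # A fixed prefix plus one JSON object per line keeps logs easy to
    # grep and easy to parse for automated compliance checking.
    return ":::MLLOG " + json.dumps(record, sort_keys=True)

print(mllog_line("global_batch_size", 256))
```

Because every line is self-describing JSON behind a fixed prefix, a compliance checker can filter and parse submissions without knowing the emitting framework.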
Weekly on Mondays from 11:00 AM to 12:00 PM Pacific.
Working Group Resources
Working Group Chair Emails
Xinyuan Huang (firstname.lastname@example.org)
Emily Potyraj (email@example.com)
Working Group Chair Bios
Xinyuan is a technical leader at Cisco focusing on systems for ML ops and performance in both cloud and edge environments. Previously, he worked on cloud infrastructure optimization and machine data analytics. He holds a Master's degree in Machine Learning from University College London and a Bachelor's degree from Fudan University.
Emily is a Solution Architect at Pure Storage, where she works to streamline DL data pipelines at scale. She specializes in input-pipeline performance debugging and optimization. Emily's background is in statistics, real-time analytics tools, and AI workflow optimization.