Benchmark Suite Results

MLPerf Inference: Mobile

The MLPerf Inference: Mobile benchmark suite measures how fast systems can process inputs and produce results using a trained model. Below is a short summary of the current benchmarks and metrics. Please see the MLPerf Mobile Inference benchmark paper for a detailed description of the motivation and guiding principles behind the benchmark suite. 


Results

MLCommons results are shown in an interactive table to enable you to explore the results. You can apply filters to see just the information you want and click across the top tabs to view the results visually. To see all result details, expand the columns by clicking on the “+” icon, which appears when you hover over “System Name” and subsequent columns.


Scenarios & Metrics

To enable representative testing of a wide variety of inference platforms and use cases, MLPerf defines four scenarios, described below. In each scenario, a standard load generator (LoadGen) issues inference requests in a particular pattern and measures a specific metric; a minimal configuration sketch follows the table.

| Scenario | Query Generation | Duration | Samples/query | Latency Constraint | Tail Latency | Performance Metric |
| --- | --- | --- | --- | --- | --- | --- |
| Single stream | LoadGen sends the next query as soon as the SUT completes the previous query | 1024 queries and 60 seconds | 1 | None | 90% | 90th-percentile measured latency |
| Multiple stream (1.1 and earlier) | LoadGen sends a new query every latency constraint if the SUT has completed the prior query; otherwise the new query is dropped and counted as one overtime query | 270,336 queries and 60 seconds | Variable, see metric | Benchmark specific | 99% | Maximum number of inferences per query supported |
| Multiple stream (2.0 and later) | LoadGen sends the next query as soon as the SUT completes the previous query | 270,336 queries and 600 seconds | 8 | None | 99% | 99th-percentile measured latency |
| Server | LoadGen sends new queries to the SUT according to a Poisson distribution | 270,336 queries and 60 seconds | 1 | Benchmark specific | 99% | Maximum Poisson throughput parameter supported |
| Offline | LoadGen sends all queries to the SUT at the start | 1 query and 60 seconds | At least 24,576 | None | N/A | Measured throughput |
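
As a rough illustration of how these scenarios are driven, the sketch below configures a Single-stream performance run with the mlperf_loadgen Python bindings. It is not the official mobile submission harness: the inference call is a placeholder, and the exact callback signatures vary somewhat across LoadGen versions.

    import array
    import mlperf_loadgen as lg

    def load_samples(sample_indices):
        # Load the referenced dataset samples into memory (placeholder).
        pass

    def unload_samples(sample_indices):
        # Release the referenced samples (placeholder).
        pass

    def issue_query(query_samples):
        # In the Single-stream scenario, LoadGen calls this with one sample per query.
        responses = []
        buffers = []  # keep response buffers alive until LoadGen copies them
        for qs in query_samples:
            # Placeholder 4-byte output standing in for real inference on sample qs.index.
            buf = array.array("B", b"\x00\x00\x00\x00")
            buffers.append(buf)
            ptr, length = buf.buffer_info()
            responses.append(lg.QuerySampleResponse(qs.id, ptr, length))
        lg.QuerySamplesComplete(responses)

    def flush_queries():
        pass

    settings = lg.TestSettings()
    settings.scenario = lg.TestScenario.SingleStream
    settings.mode = lg.TestMode.PerformanceOnly
    settings.min_query_count = 1024       # matches the Single-stream duration above
    settings.min_duration_ms = 60 * 1000

    sut = lg.ConstructSUT(issue_query, flush_queries)
    qsl = lg.ConstructQSL(1024, 1024, load_samples, unload_samples)
    lg.StartTest(sut, qsl, settings)
    lg.DestroyQSL(qsl)
    lg.DestroySUT(sut)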

Benchmarks

Each benchmark is defined by a Dataset and Quality Target. The following table summarizes the benchmarks in this version of the suite (the rules remain the official source of truth): 

| Area | Task | Model | Dataset | Mode | Quality | Latest Available Version |
| --- | --- | --- | --- | --- | --- | --- |
| Generative AI | Text to image | Stable Diffusion 1.5 | MS-COCO 2014 captions | Single-stream | Text-to-image CLIP score, NIMA IQA-A | v4.1 |
| Vision | Object detection | MobileDETs | MS-COCO 2017 | Single-stream | 95% of FP32 (mAP: 0.285) | v4.1 |
| Vision | Segmentation | MOSAIC | ADE20K (32 classes, 512×512) | Single-stream | 96% of FP32 (32-class mIoU: 59.8) | v4.1 |
| Language | Language processing | MobileBERT | SQuAD 1.1 | Single-stream | 93% of FP32 (F1 score: 90.5) | v4.1 |
| Image processing | Super resolution | EDSR F32B5 | OpenSR | Single-stream | 33 dB PSNR (peak signal-to-noise ratio) | v4.1 |
| Vision | Image classification | MobileNetV4 | ImageNet | Single-stream & Offline | 81% ~ 98% of FP32 (Top-1: 82.68%) | v4.1 |
| Vision | Image classification | MobileNetEdgeTPU | ImageNet | Single-stream & Offline | 74.66% ~ 98% of FP32 (Top-1: 76.19%) | v4.0 |
| Vision | Segmentation | DeepLabV3+ (MobileNetV2) | ADE20K (32 classes, 512×512) | Single-stream | 97% of FP32 (32-class mIoU: 54.8) | v2.1 |
| Vision | Object detection | SSD-MobileNetV2 | MS-COCO 2017 | Single-stream | 93% of FP32 (mAP: 0.244) | v0.7 |

Each mobile benchmark requires the Single-stream scenario. The image classification benchmarks additionally permit an optional Offline scenario.
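
The percentages in the Quality column are relative to the accuracy of the FP32 reference model; the MobileDETs target, for example, is 95% of an FP32 mAP of 0.285, i.e. roughly 0.271 mAP. A hypothetical check of that kind of target (not part of the official accuracy scripts) is sketched below.

    def meets_quality_target(measured_accuracy, fp32_reference, required_fraction):
        # True if the measured accuracy reaches the required fraction of the FP32 reference.
        return measured_accuracy >= required_fraction * fp32_reference

    # MobileDETs object detection: 95% of the FP32 mAP of 0.285 is about 0.271.
    print(meets_quality_target(0.275, fp32_reference=0.285, required_fraction=0.95))  # True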

Divisions

MLPerf aims to encourage innovation in software as well as hardware by allowing submitters to reimplement the reference implementations. There are two Divisions that allow different levels of flexibility during reimplementation:

  • The Closed division is intended to compare hardware platforms or software frameworks “apples-to-apples” and requires using the same model as the reference implementation.
  • The Open division is intended to foster innovation and allows using a different model or retraining. 

Availability

MLPerf divides benchmark results into Categories based on availability. 

  • Available systems contain only components that are available for purchase or for rent in the cloud. 
  • Preview systems must be submittable as Available in the next submission round.
  • Research, Development, or Internal (RDI) systems contain experimental, in-development, or internal-use hardware or software.

Submission Information

Each row in the results table is a set of results produced by a single submitter using the same software stack and hardware platform. Each Closed division row contains the following information:

  • Submitter

    The organization that submitted the results.

  • Software

    The ML framework and primary ML hardware library used.

  • System

    General system description.

  • Benchmark Results

    Results for each benchmark as described above.

  • Processor and Count

    The type and number of CPUs used, if CPUs perform the majority of ML compute.

  • Details

    Link to metadata for submission.

  • Accelerator and Count

    The type and number of accelerators used, if accelerators perform the majority of ML compute.

  • Code

    Link to code for submission.
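
For readers who post-process the downloadable results, one informal way to model a Closed division row is sketched below. The field names mirror the column descriptions above; this is not an official MLCommons schema.

    from dataclasses import dataclass, field
    from typing import Dict, Optional

    @dataclass
    class ClosedDivisionRow:
        submitter: str                      # organization that submitted the results
        software: str                       # ML framework and primary ML hardware library
        system: str                         # general system description
        processor: Optional[str] = None     # CPU type, if CPUs perform most of the ML compute
        processor_count: int = 0
        accelerator: Optional[str] = None   # accelerator type, if accelerators perform most of the ML compute
        accelerator_count: int = 0
        details_url: str = ""               # link to metadata for the submission
        code_url: str = ""                  # link to code for the submission
        benchmark_results: Dict[str, float] = field(default_factory=dict)  # benchmark name -> reported metric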

Open Division

Each Open division row may also include the following:

  • Model Used

    The model used to produce the results, which may or may not match the Closed division requirement.

  • Notes

    Arbitrary notes from the submitter.

Power Measurements

Each row will add columns for each benchmark containing the following:

  • System Power

    For the Server and Offline scenarios.

  • Energy Per Stream

    For the Single stream and Multiple stream scenarios.
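
As a back-of-the-envelope relationship between the two columns (the official numbers come from the MLPerf Power measurement workflow, not from this arithmetic), energy per stream is approximately the average system power multiplied by the per-stream latency:

    def energy_per_stream_mj(avg_system_power_w, latency_ms):
        # Energy (millijoules) ~= average system power (watts) * per-stream latency (milliseconds).
        return avg_system_power_w * latency_ms

    # A device drawing about 3 W during a 5 ms Single-stream inference uses roughly 15 mJ per stream.
    print(energy_per_stream_mj(3.0, 5.0))  # 15.0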