Linux / macOS / Ubuntu

Build the benchmark tool first:

  # in the root directory of MNN
  mkdir build
  cd build
  cmake .. -DMNN_BUILD_BENCHMARK=true && make -j4

Then execute the command:

  ./benchmark.out models_folder [loop_count] [forwardtype]

forwardtype is one of: 0 -> CPU, 1 -> Metal, 3 -> OpenCL, 6 -> OpenGL, 7 -> Vulkan. The benchmark models can be found in the models folder.
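
Since the forward type must be passed as its numeric id, a small shell helper can make invocations more readable. This is only an illustration; the `forward_id` function is hypothetical, not part of MNN:

```shell
# Hypothetical helper (not part of MNN): map a backend name to the
# numeric forwardtype id listed above.
forward_id() {
  case "$1" in
    CPU)    echo 0 ;;
    Metal)  echo 1 ;;
    OpenCL) echo 3 ;;
    OpenGL) echo 6 ;;
    Vulkan) echo 7 ;;
    *) echo "unknown backend: $1" >&2; return 1 ;;
  esac
}

# e.g. run 10 loops of every model in the models folder on Vulkan:
# ./benchmark.out ../benchmark/models 10 "$(forward_id Vulkan)"
```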

Android

You can directly execute the script bench_android.sh in the benchmark directory. By default it builds for the armeabi-v7a architecture; pass arm64-v8a as a parameter to build for arm64-v8a instead. If executed with the -p parameter, the benchmark models will be pushed to your device.
After execution, benchmark.txt is generated in the benchmark directory.
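
As a sketch, a typical session might look like the following (it assumes adb and the Android NDK are set up; the command is only echoed here rather than run):

```shell
# Typical invocations (run from the benchmark directory):
#   sh bench_android.sh                # build for armeabi-v7a (default)
#   sh bench_android.sh arm64-v8a -p   # build for arm64-v8a and push models
#   cat benchmark.txt                  # inspect results after the run
ABI=arm64-v8a
CMD="sh bench_android.sh $ABI -p"
echo "$CMD"   # echoed only; run the command itself on a machine with adb/NDK
```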

iOS

  1. Prepare the models by running the script get_model.sh in tools/script;
  2. Open the demo project in demo/iOS, run it, and tap the Benchmark button at the top-right corner; you can switch the model, forward type, and thread number for the benchmark with the bottom toolbar.

Benchmark for Models Constructed using MNN Express

Use the following command for help:

  ./benchmarkExprModels.out help

Examples (arguments: model name, loop count, forward type, thread number):

  ./benchmarkExprModels.out MobileNetV1_100_1.0_224 10 0 4
  ./benchmarkExprModels.out MobileNetV2_100 10 0 4
  ./benchmarkExprModels.out ResNet_100_18 10 0 4
  ./benchmarkExprModels.out GoogLeNet_100 10 0 4
  ./benchmarkExprModels.out SqueezeNet_100 10 0 4
  ./benchmarkExprModels.out ShuffleNet_100_4 10 0 4
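
To benchmark all of the listed models in one go, the calls can be wrapped in a loop like the sketch below (echo is used so the loop can be shown without the binary present; drop it to actually run):

```shell
# Hypothetical convenience loop: 10 iterations, CPU (forward type 0), 4 threads.
MODELS="MobileNetV1_100_1.0_224 MobileNetV2_100 ResNet_100_18 \
        GoogLeNet_100 SqueezeNet_100 ShuffleNet_100_4"
for model in $MODELS; do
  echo ./benchmarkExprModels.out "$model" 10 0 4   # drop echo to run for real
done
```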

Links to the papers for these models are given in the corresponding header files, e.g. benchmark/exprModels/MobileNetExpr.hpp.