benchmark_inference

command
v1.5.0
Published: Jul 3, 2023 License: Apache-2.0, MIT Imports: 11 Imported by: 0

Documentation

Overview

Benchmark the inference speed of a model.

Usage example:

# Disable CPU power scaling
sudo apt install linux-cpupower
sudo cpupower frequency-set --governor performance

# Configure benchmark
MODEL=$(pwd)/learning/lib/ami/simple_ml/test_data/model/adult_gdt_classifier
DATASET=csv:$(pwd)/learning/lib/ami/simple_ml/test_data/dataset/adult.csv
BUILD_FLAGS="-c opt --copt=-mfma --copt=-mavx2"
RUN_OPTIONS="--model=${MODEL} \
	--dataset=${DATASET} \
	--batch_size=100 \
	--warmup_runs=10 \
	--num_runs=100"

# Benchmark
bazel run ${BUILD_FLAGS} \
	//third_party/yggdrasil_decision_forests/port/go/cli:benchmarkinference -- \
	${RUN_OPTIONS}

Naming convention:

  • A (benchmark) "run" evaluates the speed of a model on a dataset.
  • A "run" is composed of one or more "unit runs".
  • A "unit run" measures the speed of a specific inference implementation (called an "engine") with specific parameters (e.g. batchSize=10).
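
The timing structure of a "unit run" can be sketched in Go as follows. Note this is an illustrative sketch only: the engine closure, function names, and parameters below are hypothetical stand-ins, not the actual API used by the CLI.

```go
package main

import (
	"fmt"
	"time"
)

// unitRun times a single inference implementation ("engine") with fixed
// parameters, mirroring the warmup_runs/num_runs flags of the benchmark:
// warm-up runs are executed first and excluded from timing, then the
// average duration over the timed runs is reported.
func unitRun(engine func(batchSize int), batchSize, warmupRuns, numRuns int) time.Duration {
	// Warm-up runs: populate caches and let the system settle.
	for i := 0; i < warmupRuns; i++ {
		engine(batchSize)
	}
	// Timed runs.
	start := time.Now()
	for i := 0; i < numRuns; i++ {
		engine(batchSize)
	}
	return time.Since(start) / time.Duration(numRuns)
}

func main() {
	// Hypothetical engine: busy work standing in for model inference
	// on a batch of examples.
	dummyEngine := func(batchSize int) {
		s := 0
		for i := 0; i < batchSize*1000; i++ {
			s += i
		}
		_ = s
	}
	avg := unitRun(dummyEngine, 100, 10, 100)
	fmt.Printf("avg latency per run: %v\n", avg)
}
```

A full benchmark "run" would repeat this loop once per available engine and per parameter combination, then compare the averages.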
