aibench

module v0.3.1 · Published: Mar 22, 2021 · License: MIT


This repo contains code for benchmarking deep learning solutions, including RedisAI. It is based on a fork of the Time Series Benchmark Suite (TSBS), initially made public at https://github.com/timescale/tsbs.

Current DL solutions supported:
  • RedisAI: an AI serving engine for real-time applications, built by Redis Labs and Tensorwerk, that plugs seamlessly into Redis.
  • Nvidia Triton Inference Server: open-source inference serving software that lets teams deploy trained AI models from any framework (TensorFlow, TensorRT, PyTorch, ONNX Runtime, or a custom framework), from local storage, Google Cloud Platform, or AWS S3, on any GPU- or CPU-based infrastructure.
  • TorchServe: built and maintained by Amazon Web Services (AWS) in collaboration with Facebook, TorchServe is available as part of the PyTorch open-source project.
  • Tensorflow Serving: a high-performance serving system, wrapping TensorFlow and maintained by Google.
  • Common REST API serving: a common DL production grade setup with Gunicorn (a Python WSGI HTTP server) communicating with Flask through a WSGI protocol, and using TensorFlow as the backend.
Current use cases

Currently, aibench supports two use cases:

  • creditcard-fraud [details here]: based on a Kaggle dataset, extended with reference data. This use case aims to detect fraudulent transactions from anonymized credit card transactions and reference data.

  • vision-image-classification [details here]: an image-focused use case built around one network “backbone”: MobileNet V1, widely considered a standard by the AI community. To assess inference performance we use the COCO 2017 validation dataset (a large-scale object detection, segmentation, and captioning dataset).

Current DL solutions supported per use case:

| Use case | Model | RedisAI | TensorFlow Serving | TorchServe | Nvidia Triton | Rest API |
|---|---|---|---|---|---|---|
| Vision Benchmark (CPU/GPU) (details) | mobilenet-v1 (224_224) |  | Not supported | Not supported |  | Not supported |
| Fraud Benchmark (CPU) (details) | Non-standard Kaggle model with the extension of reference data | docs | docs | docs | Not supported | docs |
Installation

The easiest way to get and install the Go benchmark programs is to use go get and then run make:

# Fetch aibench and its dependencies
go get github.com/RedisAI/aibench
cd $GOPATH/src/github.com/RedisAI/aibench

make
Blogs/White-papers that reference this tool

Directories

Path — Synopsis
cmd
  aibench_load_data — This program has no knowledge of the internals of the endpoint.
  aibench_run_inference_flask_tensorflow — This program has no knowledge of the internals of the endpoint.
  aibench_run_inference_redisai — This program has no knowledge of the internals of the endpoint.
  aibench_run_inference_tensorflow_serving — This program has no knowledge of the internals of the endpoint.
  aibench_run_inference_torchserve — This program has no knowledge of the internals of the endpoint.
  aibench_run_inference_triton_vision — This program has no knowledge of the internals of the endpoint.
inference (module)
