# Whisper Service

gRPC service for a self-hosted [openai/whisper](https://github.com/openai/whisper).
## Running with Docker

To pull and run the service from Docker Hub:

```shell
docker pull ashesss/openai-whisper-service:latest
docker run -p 8080:8080 ashesss/openai-whisper-service:latest
```
### GPU

To use the GPU, first make sure the NVIDIA Container Toolkit is installed, then start the container with GPU support:

```shell
docker run -p 8080:8080 --gpus all ashesss/openai-whisper-service:latest
```
### Model Cache

ASR models are downloaded anew for each new container. To cache them on a persistent volume, mount it as follows:

```shell
docker run -p 8080:8080 -v /path/to/cached/models:/root/.cache/whisper ashesss/openai-whisper-service:latest
```
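The flags above can be combined; for example, to run the service with both GPU support and a persistent model cache:

```shell
# GPU acceleration plus a persistent model cache in a single command
docker run -p 8080:8080 --gpus all \
  -v /path/to/cached/models:/root/.cache/whisper \
  ashesss/openai-whisper-service:latest
```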
## Basic Usage

With Go you can import the client package directly from this repo:

```go
import (
	"context"

	whisperpb "github.com/d-ashesss/whisper-service/proto"
	"google.golang.org/grpc"
	"google.golang.org/grpc/credentials/insecure"
)

func main() {
	// Dial without TLS, matching the plain-HTTP setup above.
	conn, err := grpc.Dial("localhost:8080", grpc.WithTransportCredentials(insecure.NewCredentials()))
	if err != nil {
		// handle connection error
	}
	defer conn.Close()
	c := whisperpb.NewWhisperServiceClient(conn)
	stream, err := c.Transcribe(context.Background())
	...
}
```
For other languages, use the `whisper.proto` file to generate a gRPC client.
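As one example of client generation (a sketch assuming Python with the `grpcio-tools` package installed, and assuming `whisper.proto` lives in the repo's `proto/` directory; adjust paths to your checkout):

```shell
# Generate whisper_pb2.py and whisper_pb2_grpc.py from the proto definition
python -m grpc_tools.protoc -I proto \
  --python_out=. --grpc_python_out=. \
  proto/whisper.proto
```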