ffdl-trainer


README

Read this in other languages: Deutsch, 中文.


Fabric for Deep Learning (FfDL)

Latest: PyTorch 1.0 and ONNX support now in FfDL

This repository contains the core services of the FfDL (Fabric for Deep Learning) platform. FfDL is an operating system "fabric" for Deep Learning. It is a collaboration platform for:

  • Framework-independent training of Deep Learning models on distributed hardware
  • Open Deep Learning APIs
  • Running Deep Learning hosting in user's private or public cloud

(FfDL architecture diagram)

To learn more about the architectural details, please read the design document. If you are looking for demos, slides, collateral, blogs, webinars, and other materials related to FfDL, you can find them here.

Prerequisites

Usage Scenarios

  • If you are getting started and want to set up your own FfDL deployment, please follow the steps below.
  • If you have an FfDL deployment up and running, you can jump to the FfDL User Guide to use FfDL for training your deep learning models.
  • If you want to leverage Jupyter notebooks to launch training on your FfDL cluster, please follow these instructions.
  • If you have FfDL configured to use GPUs and want to train using GPUs, follow the steps here.
  • To invoke the Adversarial Robustness Toolbox to find vulnerabilities in your models, follow the instructions here.
  • To deploy your trained models, follow the integration guide with Seldon.
  • If you have used FfDL to train your models and want to use a GPU-enabled hosted service in the public cloud for further training and serving, please follow the instructions here to train and serve your models using the Watson Studio Deep Learning service.

Steps

  1. Quick Start
  2. Test
  3. Monitoring
  4. Development
  5. Clean Up
  6. Troubleshooting
  7. References

1. Quick Start

There are multiple installation paths for installing FfDL into an existing Kubernetes cluster. Below are the steps for a quick install. If you want to follow more detailed step-by-step instructions, please visit the detailed installation guide.

If you are using the bash shell, you can modify the necessary environment variables in env.txt and export all of them using the following commands:

source env.txt
export $(cut -d= -f1 env.txt)
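
For reference, a minimal env.txt might contain plain KEY=VALUE lines like the ones below. These names are the same variables exported in the installation examples that follow (adjust the values to match whichever installation path you choose); the actual file shipped in the repo may contain additional settings, and comment lines would confuse the export pipeline above.

VM_TYPE=dind
PUBLIC_IP=localhost
NAMESPACE=default
SHARED_VOLUME_STORAGE_CLASS=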
1.1 Installation using Kubeadm-DIND

If you have Kubeadm-DIND installed on your machine, use these commands to deploy the FfDL platform:

export VM_TYPE=dind
export PUBLIC_IP=localhost
export SHARED_VOLUME_STORAGE_CLASS="";
export NAMESPACE=default # If your namespace does not exist yet, please create the namespace `kubectl create namespace $NAMESPACE` before running the make commands below

make deploy-plugin
make quickstart-deploy
1.2 Installation using Kubernetes Cluster

To install FfDL on any standard Kubernetes cluster, make sure kubectl points to the right namespace, then deploy the platform services:

Note: For PUBLIC_IP, use one of your cluster's public IPs that can reach your cluster's NodePorts. For IBM Cloud, you can get your public IP with bx cs workers <cluster_name>.

export VM_TYPE=none
export PUBLIC_IP=<Cluster Public IP>
export NAMESPACE=default # If your namespace does not exist yet, please create the namespace `kubectl create namespace $NAMESPACE` before running the make commands below

# Change the storage class to what's available on your Cloud Kubernetes Cluster.
export SHARED_VOLUME_STORAGE_CLASS="ibmc-file-gold";

make deploy-plugin
make quickstart-deploy
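
Whichever installation path you used, a quick way to confirm the platform came up is to check that the FfDL pods are running; this is a generic kubectl check, not an FfDL-specific command:

# All FfDL services should eventually reach the Running state
kubectl get pods -n $NAMESPACE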

2. Test

To submit a simple example training job that is included in this repo (see etc/examples folder):

make test-push-data-s3
make test-job-submit
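
Once the job is submitted, you can watch the pods it spawns come up; this is again a plain kubectl check, and the exact pod names (which typically include the training job ID assigned by FfDL) depend on your deployment:

# Watch the pods created for the submitted training job
kubectl get pods -n $NAMESPACE -w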

3. Monitoring

The platform ships with a simple Grafana monitoring dashboard. The URL is printed out when running the deploy make target.
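
If you missed the URL in the deploy output, you can usually reconstruct it from the Grafana service's NodePort; the service name matched below is an assumption and may differ in your deployment:

# Look up the NodePort of the Grafana service (the service name is an assumption)
kubectl get svc -n $NAMESPACE | grep -i grafana
# The dashboard is then typically reachable at http://$PUBLIC_IP:<NodePort>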

4. Development

Please refer to the developer guide for more details.

5. Clean Up

If you want to remove FfDL from your cluster, simply use the following command:

helm delete $(helm list | grep ffdl | awk '{print $1}' | head -n 1)

If you want to remove the storage driver and pvc from your cluster, run:

kubectl delete pvc static-volume-1
helm delete $(helm list | grep ibmcloud-object-storage-plugin | awk '{print $1}' | head -n 1)

For Kubeadm-DIND, you also need to kill your forwarded ports. Note that the command below kills all port forwards created with kubectl.

kill $(lsof -i | grep kubectl | awk '{printf $2 " " }')
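
If you would rather not kill every process that lsof associates with kubectl, a narrower variant (assuming pkill is available on your machine) targets only the port-forward processes:

# Kill only kubectl port-forward processes
pkill -f "kubectl port-forward"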

6. Troubleshooting

  • FfDL has only been tested under macOS and Linux.
  • If glide install fails with an error complaining about non-existing paths (e.g., "Without src, cannot continue"), make sure to follow the standard Go directory layout (see Prerequisites section).

  • To remove FfDL from your cluster, simply run make undeploy.

  • When using the FfDL CLI to train a model, make sure your directory path does not have a trailing slash /.

  • If your job is stuck in the pending stage, you can try redeploying the storage plugin with helm install storage-plugin --set dind=true,cloud=false for Kubeadm-DIND or helm install storage-plugin for a general Kubernetes cluster. Also, double-check your training job manifest file to make sure it has the correct object storage credentials (a quick PVC check is shown below).
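
As a quick sanity check for the stuck-pending case above, you can verify that the shared volume claim is actually bound; static-volume-1 is the PVC name used in the Clean Up section, and yours may differ:

# The PVC backing the shared volume should be in the Bound state
kubectl get pvc -n $NAMESPACE
kubectl describe pvc static-volume-1 -n $NAMESPACE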

7. References

Based on IBM Research work in Deep Learning.

Documentation

There is no documentation for this package.

Directories

Path Synopsis
plugins
grpc_trainer_v2
Package grpc_trainer_v2 is a generated protocol buffer package.
