CSI Sanlock-LVM Driver


csi-sanlock-lvm is a CSI driver for LVM and Sanlock.

It comes in handy when you want your nodes to access data on a shared block device - a typical example being Kubernetes on bare metal on a SAN (storage area network).

Maturity

This project is in alpha state; YMMV.

Features

  • Dynamic volume provisioning
    • Support both filesystem and block devices
    • Support different filesystems
    • Support single node read/write access
  • Online volume extension
  • Online snapshot support
  • Volume groups (fs groups)
  • Ephemeral volumes (TODO)

Prerequisites

  • Kubernetes 1.20+
  • kubectl

Limitations

Sanlock might require up to 3 minutes to start, so bringing up a node can take some time.

Also, Sanlock requires every cluster node to get a unique integer in the range 1-2000. This is implemented using the least significant bits of the node IP address, which works as long as all the nodes reside in a subnet that contains at most 2000 addresses (a /22 subnet or smaller).
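
As a rough illustration of this mapping (this is not the driver's actual code, and its exact formula may differ), the low 10 bits of an address in a /22 subnet can be extracted with plain shell arithmetic:

# Illustration only: host part of 10.0.5.17 within a /22 subnet (0-1023).
IP=10.0.5.17
O3=$(echo "$IP" | cut -d. -f3)
O4=$(echo "$IP" | cut -d. -f4)
echo $(( (O3 % 4) * 256 + O4 ))   # (5 % 4) * 256 + 17 = 273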

Installation

This chapter describes a step-by-step procedure to get csi-sanlock-lvm running on your cluster.

Initialize a shared volume group

Before deploying the driver, you need at least one shared volume group set up, as well as some logical volumes dedicated to the csi-sanlock-lvm RPC mechanism. You can use the provided csi-sanlock-lvm-init pod to initialize LVM as follows:

kubectl apply -f "https://raw.githubusercontent.com/aleofreddi/csi-sanlock-lvm/v0.4.5/deploy/kubernetes/csi-sanlock-lvm-init.var.yaml"

Then attach to the init pod:

kubectl attach -it csi-sanlock-lvm-init

Within the init shell, initialize the shared volume group and the RPC logical volumes:

# Adjust your devices before running this!
vgcreate --shared vg01 [/dev/device1 ... /dev/deviceN]

# Create the csl-rpc-data logical volume.
lvcreate -L 8m -n csl-rpc-data \
  --add-tag csi-sanlock-lvm.vleo.net/rpcRole=data vg01 &&
  lvchange -a n vg01/csl-rpc-data

# Create the csl-rpc-lock logical volume.
lvcreate -L 512b -n csl-rpc-lock \
  --add-tag csi-sanlock-lvm.vleo.net/rpcRole=lock vg01 &&
  lvchange -a n vg01/csl-rpc-lock

# Initialization complete, terminate the pod successfully.
exit 0

Now cleanup the init pod:

kubectl delete po csi-sanlock-lvm-init

Deploy the driver

When the volume group setup is complete, go ahead and create a namespace to accommodate the driver:

kubectl create namespace csi-sanlock-lvm-system

And then deploy using kustomization:

# Install the csi-sanlock-lvm driver.
kubectl apply -k "https://github.com/aleofreddi/csi-sanlock-lvm/deploy/kubernetes?ref=v0.4.5"

It might take up to 3 minutes for the CSI plugin pod to reach the Running state on each node, with all of its containers ready (4/4 for the plugin pods). To check the current status, use the following command:

kubectl -n csi-sanlock-lvm-system get pod

Each node runs its own csi-sanlock-lvm-plugin pod. You should get an output similar to:

NAME                            READY   STATUS    RESTARTS   AGE
csi-sanlock-lvm-attacher-0      1/1     Running   0          2m15s
csi-sanlock-lvm-plugin-cm7h6    4/4     Running   0          2m13s
csi-sanlock-lvm-plugin-zkw84    4/4     Running   0          2m13s
csi-sanlock-lvm-provisioner-0   1/1     Running   0          2m14s
csi-sanlock-lvm-resizer-0       1/1     Running   0          2m14s
csi-sanlock-lvm-snapshotter-0   1/1     Running   0          2m14s
snapshot-controller-0           1/1     Running   0          2m14s
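
If you prefer to block until the plugin rollout completes rather than polling, something along these lines should work (this assumes the plugin is deployed as a DaemonSet named csi-sanlock-lvm-plugin, which matches the pod names above):

# Wait for the plugin DaemonSet to become ready on every node.
kubectl -n csi-sanlock-lvm-system rollout status daemonset/csi-sanlock-lvm-plugin --timeout=5m
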
Set up storage and snapshot classes

To enable the csi-sanlock-lvm driver, you need to reference it from a storage class.

In particular, you need to set up a StorageClass object to manage volumes, and optionally a VolumeSnapshotClass object to manage snapshots.

Configuration examples are provided at conf/csi-sanlock-lvm-storageclass.yaml and conf/csi-sanlock-lvm-volumesnapshotclass.yaml.

The following storage class parameters are supported:

  • volumeGroup (required): the volume group to use when provisioning logical volumes.

The following volume snapshot class parameters are supported:

  • maxSizePct (optional): maximum snapshot size, as a percentage of the origin volume size;
  • maxSize (optional): maximum snapshot size, as an absolute value.

If both maxSizePct and maxSize are set, the stricter of the two is applied.
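
As a sketch of what such objects might look like (the provisioner/driver name below is a guess based on the LV tags used earlier, and the parameter values are illustrative - check the shipped conf/ examples for the authoritative versions):

kubectl apply -f - <<'EOF'
apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  name: csi-sanlock-lvm
provisioner: csi-sanlock-lvm.vleo.net   # assumed driver name, check conf/
parameters:
  volumeGroup: vg01
---
apiVersion: snapshot.storage.k8s.io/v1
kind: VolumeSnapshotClass
metadata:
  name: csi-sanlock-lvm
driver: csi-sanlock-lvm.vleo.net        # assumed driver name, check conf/
deletionPolicy: Delete
parameters:
  maxSizePct: "20"
EOF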

Example application

The examples directory contains an example configuration that spins up a PVC and a pod using it:

kubectl apply -f examples/pvc.yaml -f examples/pod.yaml
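
Those manifests amount to a ReadWriteOnce PVC bound to the storage class plus a pod mounting it; a minimal hand-written equivalent (all names, sizes and the storage class are illustrative) would be:

kubectl apply -f - <<'EOF'
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: example-pvc
spec:
  accessModes: ["ReadWriteOnce"]
  storageClassName: csi-sanlock-lvm   # illustrative, use your StorageClass
  resources:
    requests:
      storage: 1Gi
---
apiVersion: v1
kind: Pod
metadata:
  name: example-pod
spec:
  containers:
    - name: app
      image: busybox
      command: ["sh", "-c", "sleep 3600"]
      volumeMounts:
        - name: data
          mountPath: /data
  volumes:
    - name: data
      persistentVolumeClaim:
        claimName: example-pvc
EOF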

You can also create a snapshot of the volume using the snap.yaml manifest:

kubectl apply -f examples/snap.yaml
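
A snapshot manifest of that kind boils down to a VolumeSnapshot referencing the snapshot class and the source PVC (names are illustrative):

kubectl apply -f - <<'EOF'
apiVersion: snapshot.storage.k8s.io/v1
kind: VolumeSnapshot
metadata:
  name: example-snap
spec:
  volumeSnapshotClassName: csi-sanlock-lvm   # illustrative, use your VolumeSnapshotClass
  source:
    persistentVolumeClaimName: example-pvc
EOF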

Building the binaries

Requirements

To build the project, you need:

  • Golang ≥1.20;
  • GNU make;
  • protoc compiler;
  • protoc-gen-go and protoc-gen-go-grpc;
  • gomock.

You can install protoc-gen-go, protoc-gen-go-grpc and mockgen as follows:

go install github.com/golang/protobuf/protoc-gen-go@latest
go install google.golang.org/grpc/cmd/protoc-gen-go-grpc@latest
go install github.com/golang/mock/mockgen@latest

Build binaries

If you want to build the driver yourself, you can do so by invoking make:

make

Build docker images

Similarly, you can build Docker images for the driver using the build-image target:

make build-image

Security

Each node exposes a CSI server to Kubernetes via a local socket: access to that socket grants direct access to any cluster volume. The same holds true for the RPC data volume, which is used for inter-node communication.

Disclaimer

This is not an officially supported Google product.

Directories

  • cmd
  • diskrpc (module)
  • driverd (module)
  • logger (module)
  • lvmctrld (module)
  • pkg
