Merge pull request #43467 from tvansteenburgh/gpu-support
Automatic merge from submit-queue (batch tested with PRs 44047, 43514, 44037, 43467)

Juju: Enable GPU mode if GPU hardware detected

**What this PR does / why we need it**:

Automatically configures a kubernetes-worker node to utilize GPU hardware when such hardware is detected.

layer-nvidia-cuda does the hardware detection, installs CUDA and Nvidia
drivers, and sets a state that the k8s-worker can react to.

When a GPU is available, the worker updates its config and restarts kubelet
to enable GPU mode. The worker then notifies the master that it's in GPU
mode via the kube-control relation.

When the master sees that a worker is in GPU mode, it switches to privileged
mode and restarts kube-apiserver.
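
For reference, one way to confirm this handshake took effect is to inspect the running daemons' command lines from the Juju client. This is a sketch: the `--feature-gates=Accelerators=true` flag assumes the alpha GPU feature gate from Kubernetes 1.6, and unit numbers are illustrative.

```bash
# Expect kubelet on a GPU worker to be running with --allow-privileged=true
# (and, on Kubernetes 1.6, the alpha --feature-gates=Accelerators=true):
juju run --unit kubernetes-worker/0 'pgrep -af kubelet'

# Expect kube-apiserver on the master to be running with --allow-privileged=true:
juju run --unit kubernetes-master/0 'pgrep -af kube-apiserver'
```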

The kube-control interface has subsumed the kube-dns interface
functionality.

An 'allow-privileged' config option has been added to both the worker and
master charms. GPU enablement respects the value of this option; i.e.,
GPU mode cannot be enabled if the operator has set
allow-privileged="false".

**Special notes for your reviewer**:

Quickest test setup is as follows:
```bash
# Bootstrap. If your aws account doesn't have a default vpc, you'll need to
# specify one at bootstrap time so that juju can provision a p2.xlarge.
# Otherwise you can leave out the --config "vpc-id=vpc-xxxxxxxx" bit.
juju bootstrap --config "vpc-id=vpc-xxxxxxxx" --constraints "cores=4 mem=16G root-disk=64G" aws/us-east-1 k8s

# Deploy the bundle containing master and worker charms built from
# https://github.com/tvansteenburgh/kubernetes/tree/gpu-support/cluster/juju/layers
juju deploy cs:~tvansteenburgh/bundle/kubernetes-gpu-support-3

# Set up kubectl locally
mkdir -p ~/.kube
juju scp kubernetes-master/0:config ~/.kube/config
juju scp kubernetes-master/0:kubectl ./kubectl

# Download a gpu-dependent job spec
wget -O /tmp/nvidia-smi.yaml https://raw.githubusercontent.com/madeden/blogposts/master/k8s-gpu-cloud/src/nvidia-smi.yaml

# Create the job
kubectl create -f /tmp/nvidia-smi.yaml

# You should see a new nvidia-smi-xxxxx pod created
kubectl get pods

# Wait a bit for the job to run, then view logs; you should see the
# nvidia-smi table output
kubectl logs $(kubectl get pods -l name=nvidia-smi -o=name -a)
```
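
For context, the downloaded spec is a standard batch Job whose pod requests a GPU through the alpha resource name used by Kubernetes 1.6. The following is a rough sketch of its shape, not the exact file; real specs from this era typically also mount the host's NVIDIA driver libraries into the container.

```bash
# Inline sketch of a GPU-dependent job (Kubernetes 1.6 alpha GPU resource):
cat <<'EOF' | kubectl create -f -
apiVersion: batch/v1
kind: Job
metadata:
  name: nvidia-smi
spec:
  template:
    metadata:
      labels:
        name: nvidia-smi
    spec:
      restartPolicy: Never
      containers:
      - name: nvidia-smi
        image: nvidia/cuda
        command: ["nvidia-smi"]
        resources:
          limits:
            alpha.kubernetes.io/nvidia-gpu: 1
EOF
```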

kube-control interface: https://github.com/juju-solutions/interface-kube-control
nvidia-cuda layer: https://github.com/juju-solutions/layer-nvidia-cuda
(Both are registered on http://interfaces.juju.solutions/)

**Release note**:
```release-note
Juju: Enable GPU mode if GPU hardware detected
```

Kubernetes Worker

Usage

This charm deploys a container runtime and stands up the Kubernetes worker applications: kubelet and kube-proxy.

In order for this charm to be useful, it should be deployed with its companion charm kubernetes-master and linked with an SDN plugin.
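
A minimal manual deployment looks roughly like the following sketch; flannel stands in for the SDN plugin, and the relation endpoint names follow the canonical-kubernetes bundle of this era (they may differ across revisions):

```bash
juju deploy etcd
juju deploy kubernetes-master
juju deploy kubernetes-worker
juju deploy flannel

# Wire up the control plane, the SDN, and the workers:
juju add-relation kubernetes-master etcd
juju add-relation flannel etcd
juju add-relation flannel kubernetes-master
juju add-relation flannel kubernetes-worker
juju add-relation kubernetes-master:kube-control kubernetes-worker:kube-control
juju add-relation kubernetes-master:kube-api-endpoint kubernetes-worker:kube-api-endpoint
```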

This charm has also been bundled up for your convenience, so you can skip the above steps and deploy it with a single command:

juju deploy canonical-kubernetes

For more information about Canonical Kubernetes consult the bundle README.md file.

Scale out

To add additional compute capacity to your Kubernetes workers, you may scale the application with juju add-unit. New units will automatically join any related kubernetes-master and enlist themselves as ready once the deployment is complete.
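
For example, to add three more workers:

```bash
# Add three more kubernetes-worker units; they enlist automatically:
juju add-unit kubernetes-worker -n 3
```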

Operational actions

The kubernetes-worker charm supports the following Operational Actions:

Pause

Pausing the workload enables administrators to both drain and cordon a unit for maintenance.
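
For example (Juju 2.x invocation; the unit number is illustrative):

```bash
# Drain and cordon the first worker unit ahead of maintenance:
juju run-action kubernetes-worker/0 pause
```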

Resume

Resuming the workload will uncordon a paused unit. Workloads will automatically migrate unless otherwise directed via their application declaration.
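
For example:

```bash
# Uncordon the unit once maintenance is complete:
juju run-action kubernetes-worker/0 resume
```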

Known Limitations

Kubernetes workers currently only support 'phaux' HA scenarios: even when configured with an HA cluster string, they will only ever contact the first unit in the cluster map. To enable a proper HA story, kubernetes-worker units are encouraged to proxy through a kubeapi-load-balancer application. This enables an HA deployment without needing to re-render configuration and disrupt the worker services.
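
Assuming the kubeapi-load-balancer charm from the canonical-kubernetes bundle, the proxied topology is wired up roughly as follows (endpoint names are taken from that bundle and may differ across revisions):

```bash
juju deploy kubeapi-load-balancer
juju add-relation kubernetes-master:kube-api-endpoint kubeapi-load-balancer:apiserver
juju add-relation kubernetes-master:loadbalancer kubeapi-load-balancer:loadbalancer
juju add-relation kubernetes-worker:kube-api-endpoint kubeapi-load-balancer:website
```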

External access to pods must be performed through a Kubernetes Ingress Resource; a minimal example is sketched below. More information on Ingress is available in the upstream Kubernetes documentation.
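
A sketch of a minimal Ingress for the extensions/v1beta1 API of this era; the host, service name, and port are hypothetical:

```bash
# Route HTTP traffic for a (hypothetical) 'web' service through the
# ingress controller the worker charm runs:
cat <<'EOF' | kubectl create -f -
apiVersion: extensions/v1beta1
kind: Ingress
metadata:
  name: web-ingress
spec:
  rules:
  - host: web.example.com
    http:
      paths:
      - backend:
          serviceName: web
          servicePort: 80
EOF
```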