Kubernetes Submit Queue 58457daf63 Merge pull request #31652 from intelsdi-x/poc-opaque-int-resources
Automatic merge from submit-queue

## [PHASE 1] Opaque integer resource accounting.

This change provides a simple way to advertise some amount of an arbitrary countable resource on a node in a Kubernetes cluster. Users can consume these resources by including them in pod specs, and the scheduler takes them into account when placing pods on nodes. See the example at the bottom of the PR description for more info.

Summary of changes:

- Define opaque integer resources as any resource with the prefix `pod.alpha.kubernetes.io/opaque-int-resource-` (see the fragment after this list).
- Prevent the kubelet from overwriting capacity.
- Handle opaque resources in the scheduler.
- Validate integer-ness of opaque int quantities in the API server.
- Tests for the above.
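
For illustration, requesting an opaque resource in a container spec looks just like requesting CPU or memory. The fragment below is hypothetical: `foo` is an arbitrary suffix, since only the prefix is significant.

```yaml
# Hypothetical fragment of a container spec. "foo" is an arbitrary suffix;
# only the pod.alpha.kubernetes.io/opaque-int-resource- prefix matters,
# and values must be whole integers (fractional quantities are rejected).
resources:
  requests:
    pod.alpha.kubernetes.io/opaque-int-resource-foo: "1"
  limits:
    pod.alpha.kubernetes.io/opaque-int-resource-foo: "2"
```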

Feature issue: https://github.com/kubernetes/features/issues/76

Design: http://goo.gl/IoKYP1

Issues:

kubernetes/kubernetes#28312
kubernetes/kubernetes#19082

Related:

kubernetes/kubernetes#19080

CC @davidopp @timothysc @balajismaniam 

**Release note**:
```release-note
Added support for accounting opaque integer resources.

Allows cluster operators to advertise new node-level resources that would be
otherwise unknown to Kubernetes. Users can consume these resources in pod
specs just like CPU and memory. The scheduler takes care of the resource
accounting so that no more than the available amount is simultaneously
allocated to pods.
```

## Usage example

```sh
$ echo '[{"op": "add", "path": "/status/capacity/pod.alpha.kubernetes.io~1opaque-int-resource-bananas", "value": "555"}]' | \
> http PATCH http://localhost:8080/api/v1/nodes/localhost.localdomain/status \
> Content-Type:application/json-patch+json
```
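
Note: `~1` is the JSON-Pointer escape for `/` inside the resource name, and `http` here is the HTTPie client; any HTTP client can send the same patch to the node's `/status` subresource.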

```http
HTTP/1.1 200 OK
Content-Type: application/json
Date: Thu, 11 Aug 2016 16:44:55 GMT
Transfer-Encoding: chunked

{
    "apiVersion": "v1",
    "kind": "Node",
    "metadata": {
        "annotations": {
            "volumes.kubernetes.io/controller-managed-attach-detach": "true"
        },
        "creationTimestamp": "2016-07-12T04:07:43Z",
        "labels": {
            "beta.kubernetes.io/arch": "amd64",
            "beta.kubernetes.io/os": "linux",
            "kubernetes.io/hostname": "localhost.localdomain"
        },
        "name": "localhost.localdomain",
        "resourceVersion": "12837",
        "selfLink": "/api/v1/nodes/localhost.localdomain/status",
        "uid": "2ee9ea1c-47e6-11e6-9fb4-525400659b2e"
    },
    "spec": {
        "externalID": "localhost.localdomain"
    },
    "status": {
        "addresses": [
            {
                "address": "10.0.2.15",
                "type": "LegacyHostIP"
            },
            {
                "address": "10.0.2.15",
                "type": "InternalIP"
            }
        ],
        "allocatable": {
            "alpha.kubernetes.io/nvidia-gpu": "0",
            "cpu": "2",
            "memory": "8175808Ki",
            "pods": "110"
        },
        "capacity": {
            "alpha.kubernetes.io/nvidia-gpu": "0",
            "pod.alpha.kubernetes.io/opaque-int-resource-bananas": "555",
            "cpu": "2",
            "memory": "8175808Ki",
            "pods": "110"
        },
        "conditions": [
            {
                "lastHeartbeatTime": "2016-08-11T16:44:47Z",
                "lastTransitionTime": "2016-07-12T04:07:43Z",
                "message": "kubelet has sufficient disk space available",
                "reason": "KubeletHasSufficientDisk",
                "status": "False",
                "type": "OutOfDisk"
            },
            {
                "lastHeartbeatTime": "2016-08-11T16:44:47Z",
                "lastTransitionTime": "2016-07-12T04:07:43Z",
                "message": "kubelet has sufficient memory available",
                "reason": "KubeletHasSufficientMemory",
                "status": "False",
                "type": "MemoryPressure"
            },
            {
                "lastHeartbeatTime": "2016-08-11T16:44:47Z",
                "lastTransitionTime": "2016-08-10T06:27:11Z",
                "message": "kubelet is posting ready status",
                "reason": "KubeletReady",
                "status": "True",
                "type": "Ready"
            },
            {
                "lastHeartbeatTime": "2016-08-11T16:44:47Z",
                "lastTransitionTime": "2016-08-10T06:27:01Z",
                "message": "kubelet has no disk pressure",
                "reason": "KubeletHasNoDiskPressure",
                "status": "False",
                "type": "DiskPressure"
            }
        ],
        "daemonEndpoints": {
            "kubeletEndpoint": {
                "Port": 10250
            }
        },
        "images": [],
        "nodeInfo": {
            "architecture": "amd64",
            "bootID": "1f7e95ca-a4c2-490e-8ca2-6621ae1eb5f0",
            "containerRuntimeVersion": "docker://1.10.3",
            "kernelVersion": "4.5.7-202.fc23.x86_64",
            "kubeProxyVersion": "v1.3.0-alpha.4.4285+7e4b86c96110d3-dirty",
            "kubeletVersion": "v1.3.0-alpha.4.4285+7e4b86c96110d3-dirty",
            "machineID": "cac4063395254bc89d06af5d05322453",
            "operatingSystem": "linux",
            "osImage": "Fedora 23 (Cloud Edition)",
            "systemUUID": "D6EE0782-5DEB-4465-B35D-E54190C5EE96"
        }
    }
}
```

After the patch, the new resource appears in `capacity` but not yet in `allocatable`; the kubelet's next status sync fills it in:

```sh
$ kubectl get node localhost.localdomain -o json | jq .status.allocatable
```

```json
{
  "alpha.kubernetes.io/nvidia-gpu": "0",
  "pod.alpha.kubernetes.io/opaque-int-resource-bananas": "555",
  "cpu": "2",
  "memory": "8175808Ki",
  "pods": "110"
}
```

Create two pods, one that needs a single banana and another that needs a truckload (plausible manifests are sketched after these commands):

```sh
$ kubectl create -f chimp.yaml
$ kubectl create -f superchimp.yaml
```
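
Neither manifest is included in the PR; a plausible `chimp.yaml`, reconstructed from the `kubectl describe` output below, would look like this:

```yaml
# chimp.yaml -- reconstruction based on the describe output below; not part of the PR.
apiVersion: v1
kind: Pod
metadata:
  name: chimp
spec:
  containers:
  - name: nginx
    image: nginx:1.10
    ports:
    - containerPort: 80
    resources:
      requests:
        cpu: 250m
        memory: 32Mi
        pod.alpha.kubernetes.io/opaque-int-resource-bananas: "1"
      limits:
        cpu: 500m
        memory: 64Mi
        pod.alpha.kubernetes.io/opaque-int-resource-bananas: "3"
```

`superchimp.yaml` would differ mainly in asking for far more bananas than the node advertises:

```yaml
# superchimp.yaml -- reconstruction based on the describe output below; not part of the PR.
apiVersion: v1
kind: Pod
metadata:
  name: superchimp
spec:
  containers:
  - name: nginx
    image: nginx:1.10
    ports:
    - containerPort: 80
    resources:
      requests:
        cpu: 250m
        memory: 32Mi
        pod.alpha.kubernetes.io/opaque-int-resource-bananas: "10Ki"  # 10240 bananas, far more than the 555 advertised
```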

Inspect the scheduler result and pod status:

```
$ kubectl describe pods chimp
Name:           chimp
Namespace:      default
Node:           localhost.localdomain/10.0.2.15
Start Time:     Thu, 11 Aug 2016 19:58:46 +0000
Labels:         <none>
Status:         Running
IP:             172.17.0.2
Controllers:    <none>
Containers:
  nginx:
    Container ID:       docker://46ff268f2f9217c59cc49f97cc4f0f085d5ac0e251f508cc08938601117c0cec
    Image:              nginx:1.10
    Image ID:           docker://sha256:82e97a2b0390a20107ab1310dea17f539ff6034438099384998fd91fc540b128
    Port:               80/TCP
    Limits:
      cpu:                                      500m
      memory:                                   64Mi
      pod.alpha.kubernetes.io/opaque-int-resource-bananas:   3
    Requests:
      cpu:                                      250m
      memory:                                   32Mi
      pod.alpha.kubernetes.io/opaque-int-resource-bananas:   1
    State:                                      Running
      Started:                                  Thu, 11 Aug 2016 19:58:51 +0000
    Ready:                                      True
    Restart Count:                              0
    Volume Mounts:                              <none>
    Environment Variables:                      <none>
Conditions:
  Type          Status
  Initialized   True 
  Ready         True 
  PodScheduled  True 
No volumes.
QoS Class:      Burstable
Events:
  FirstSeen     LastSeen        Count   From                            SubobjectPath           Type            Reason                  Message
  ---------     --------        -----   ----                            -------------           --------        ------                  -------
  9m            9m              1       {default-scheduler }                                    Normal          Scheduled               Successfully assigned chimp to localhost.localdomain
  9m            9m              2       {kubelet localhost.localdomain}                         Warning         MissingClusterDNS       kubelet does not have ClusterDNS IP configured and cannot create Pod using "ClusterFirst" policy. Falling back to DNSDefault policy.
  9m            9m              1       {kubelet localhost.localdomain} spec.containers{nginx}  Normal          Pulled                  Container image "nginx:1.10" already present on machine
  9m            9m              1       {kubelet localhost.localdomain} spec.containers{nginx}  Normal          Created                 Created container with docker id 46ff268f2f92
  9m            9m              1       {kubelet localhost.localdomain} spec.containers{nginx}  Normal          Started                 Started container with docker id 46ff268f2f92
```

```
$ kubectl describe pods superchimp
Name:           superchimp
Namespace:      default
Node:           /
Labels:         <none>
Status:         Pending
IP:
Controllers:    <none>
Containers:
  nginx:
    Image:      nginx:1.10
    Port:       80/TCP
    Requests:
      cpu:                                      250m
      memory:                                   32Mi
      pod.alpha.kubernetes.io/opaque-int-resource-bananas:   10Ki
    Volume Mounts:                              <none>
    Environment Variables:                      <none>
Conditions:
  Type          Status
  PodScheduled  False 
No volumes.
QoS Class:      Burstable
Events:
  FirstSeen     LastSeen        Count   From                    SubobjectPath   Type            Reason                  Message
  ---------     --------        -----   ----                    -------------   --------        ------                  -------
  3m            1s              15      {default-scheduler }                    Warning         FailedScheduling        pod (superchimp) failed to fit in any node
fit failure on node (localhost.localdomain): Insufficient pod.alpha.kubernetes.io/opaque-int-resource-bananas
```
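
As expected, `chimp` fits (1 banana requested against 555 allocatable) while `superchimp` stays Pending: its request of 10Ki is 10240 bananas. Patching the node's capacity up to at least that amount would let the scheduler place it on a subsequent retry.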