Commit Graph

5326 Commits

Kubernetes Submit Queue
58457daf63 Merge pull request #31652 from intelsdi-x/poc-opaque-int-resources
Automatic merge from submit-queue

## [PHASE 1] Opaque integer resource accounting.

This change provides a simple way to advertise some amount of an arbitrary, countable resource for a node in a Kubernetes cluster. Users can consume these resources by including them in pod specs, and the scheduler takes them into account when placing pods on nodes. See the example at the bottom of the PR description for more info.

Summary of changes:

- Defines opaque integer resources as any resource with the prefix `pod.alpha.kubernetes.io/opaque-int-resource-`.
- Prevents the kubelet from overwriting capacity.
- Handles opaque resources in the scheduler.
- Validates the integer-ness of opaque int quantities in the API server (see the sketch just after this list).
- Adds tests for the above.
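
As a hedged illustration of the validation point above (not part of the original PR description), a node-status patch that supplies a fractional quantity for an opaque int resource is expected to be rejected by the API server, while a whole-number quantity is accepted. The request shape mirrors the usage example further below:

```sh
# Hypothetical sketch: same JSON-Patch pattern as the usage example below, but with a
# fractional value. The API server's integer validation should reject "0.5"; a whole
# number such as "2" would pass. Endpoint and node name are taken from the example below.
$ echo '[{"op": "add", "path": "/status/capacity/pod.alpha.kubernetes.io~1opaque-int-resource-bananas", "value": "0.5"}]' | \
> http PATCH http://localhost:8080/api/v1/nodes/localhost.localdomain/status \
> Content-Type:application/json-patch+json
```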

Feature issue: https://github.com/kubernetes/features/issues/76

Design: http://goo.gl/IoKYP1

Issues:

kubernetes/kubernetes#28312
kubernetes/kubernetes#19082

Related:

kubernetes/kubernetes#19080

CC @davidopp @timothysc @balajismaniam 

**Release note**:
```release-note
Added support for accounting opaque integer resources.

Allows cluster operators to advertise new node-level resources that would be
otherwise unknown to Kubernetes. Users can consume these resources in pod
specs just like CPU and memory. The scheduler takes care of the resource
accounting so that no more than the available amount is simultaneously
allocated to pods.
```

## Usage example

```sh
$ echo '[{"op": "add", "path": "/status/capacity/pod.alpha.kubernetes.io~1opaque-int-resource-bananas", "value": "555"}]' | \
> http PATCH http://localhost:8080/api/v1/nodes/localhost.localdomain/status \
> Content-Type:application/json-patch+json
```
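
The request above uses HTTPie; an equivalent patch with plain curl (a sketch under the same assumptions about the local API server, not taken from the PR itself) should produce the response shown below:

```sh
# Same JSON-Patch request issued with curl instead of HTTPie (sketch only).
$ curl -sS -X PATCH \
>   -H 'Content-Type: application/json-patch+json' \
>   -d '[{"op": "add", "path": "/status/capacity/pod.alpha.kubernetes.io~1opaque-int-resource-bananas", "value": "555"}]' \
>   http://localhost:8080/api/v1/nodes/localhost.localdomain/status
```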

```http
HTTP/1.1 200 OK
Content-Type: application/json
Date: Thu, 11 Aug 2016 16:44:55 GMT
Transfer-Encoding: chunked

{
    "apiVersion": "v1",
    "kind": "Node",
    "metadata": {
        "annotations": {
            "volumes.kubernetes.io/controller-managed-attach-detach": "true"
        },
        "creationTimestamp": "2016-07-12T04:07:43Z",
        "labels": {
            "beta.kubernetes.io/arch": "amd64",
            "beta.kubernetes.io/os": "linux",
            "kubernetes.io/hostname": "localhost.localdomain"
        },
        "name": "localhost.localdomain",
        "resourceVersion": "12837",
        "selfLink": "/api/v1/nodes/localhost.localdomain/status",
        "uid": "2ee9ea1c-47e6-11e6-9fb4-525400659b2e"
    },
    "spec": {
        "externalID": "localhost.localdomain"
    },
    "status": {
        "addresses": [
            {
                "address": "10.0.2.15",
                "type": "LegacyHostIP"
            },
            {
                "address": "10.0.2.15",
                "type": "InternalIP"
            }
        ],
        "allocatable": {
            "alpha.kubernetes.io/nvidia-gpu": "0",
            "cpu": "2",
            "memory": "8175808Ki",
            "pods": "110"
        },
        "capacity": {
            "alpha.kubernetes.io/nvidia-gpu": "0",
            "pod.alpha.kubernetes.io/opaque-int-resource-bananas": "555",
            "cpu": "2",
            "memory": "8175808Ki",
            "pods": "110"
        },
        "conditions": [
            {
                "lastHeartbeatTime": "2016-08-11T16:44:47Z",
                "lastTransitionTime": "2016-07-12T04:07:43Z",
                "message": "kubelet has sufficient disk space available",
                "reason": "KubeletHasSufficientDisk",
                "status": "False",
                "type": "OutOfDisk"
            },
            {
                "lastHeartbeatTime": "2016-08-11T16:44:47Z",
                "lastTransitionTime": "2016-07-12T04:07:43Z",
                "message": "kubelet has sufficient memory available",
                "reason": "KubeletHasSufficientMemory",
                "status": "False",
                "type": "MemoryPressure"
            },
            {
                "lastHeartbeatTime": "2016-08-11T16:44:47Z",
                "lastTransitionTime": "2016-08-10T06:27:11Z",
                "message": "kubelet is posting ready status",
                "reason": "KubeletReady",
                "status": "True",
                "type": "Ready"
            },
            {
                "lastHeartbeatTime": "2016-08-11T16:44:47Z",
                "lastTransitionTime": "2016-08-10T06:27:01Z",
                "message": "kubelet has no disk pressure",
                "reason": "KubeletHasNoDiskPressure",
                "status": "False",
                "type": "DiskPressure"
            }
        ],
        "daemonEndpoints": {
            "kubeletEndpoint": {
                "Port": 10250
            }
        },
        "images": [],
        "nodeInfo": {
            "architecture": "amd64",
            "bootID": "1f7e95ca-a4c2-490e-8ca2-6621ae1eb5f0",
            "containerRuntimeVersion": "docker://1.10.3",
            "kernelVersion": "4.5.7-202.fc23.x86_64",
            "kubeProxyVersion": "v1.3.0-alpha.4.4285+7e4b86c96110d3-dirty",
            "kubeletVersion": "v1.3.0-alpha.4.4285+7e4b86c96110d3-dirty",
            "machineID": "cac4063395254bc89d06af5d05322453",
            "operatingSystem": "linux",
            "osImage": "Fedora 23 (Cloud Edition)",
            "systemUUID": "D6EE0782-5DEB-4465-B35D-E54190C5EE96"
        }
    }
}
```

After patching, the kubelet's next sync fills in allocatable:

```sh
$ kubectl get node localhost.localdomain -o json | jq .status.allocatable
```

```json
{
  "alpha.kubernetes.io/nvidia-gpu": "0",
  "pod.alpha.kubernetes.io/opaque-int-resource-bananas": "555",
  "cpu": "2",
  "memory": "8175808Ki",
  "pods": "110"
}
```
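
The same `jq` filter pointed at `.status.capacity` (an extra check, not in the original description) shows the patched value on the capacity side as well:

```sh
# Capacity should already contain the value set by the PATCH above.
$ kubectl get node localhost.localdomain -o json | jq .status.capacity
```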

Create two pods, one that needs a single banana and another that needs a truck load:

```sh
$ kubectl create -f chimp.yaml
$ kubectl create -f superchimp.yaml
```
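
Neither manifest is included in the PR description. As a rough sketch (an assumption, reconstructed from the `kubectl describe` output below), `chimp.yaml` might look like the following; `superchimp.yaml` would differ only in the pod name and its much larger banana request (10Ki):

```sh
# Hypothetical reconstruction of chimp.yaml; the real file from the PR is not shown here.
$ cat chimp.yaml
apiVersion: v1
kind: Pod
metadata:
  name: chimp
spec:
  containers:
  - name: nginx
    image: nginx:1.10
    ports:
    - containerPort: 80
    resources:
      requests:
        cpu: 250m
        memory: 32Mi
        pod.alpha.kubernetes.io/opaque-int-resource-bananas: 1
      limits:
        cpu: 500m
        memory: 64Mi
        pod.alpha.kubernetes.io/opaque-int-resource-bananas: 3
```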

Inspect the scheduler result and pod status:

```
$ kubectl describe pods chimp
Name:           chimp
Namespace:      default
Node:           localhost.localdomain/10.0.2.15
Start Time:     Thu, 11 Aug 2016 19:58:46 +0000
Labels:         <none>
Status:         Running
IP:             172.17.0.2
Controllers:    <none>
Containers:
  nginx:
    Container ID:       docker://46ff268f2f9217c59cc49f97cc4f0f085d5ac0e251f508cc08938601117c0cec
    Image:              nginx:1.10
    Image ID:           docker://sha256:82e97a2b0390a20107ab1310dea17f539ff6034438099384998fd91fc540b128
    Port:               80/TCP
    Limits:
      cpu:                                      500m
      memory:                                   64Mi
      pod.alpha.kubernetes.io/opaque-int-resource-bananas:   3
    Requests:
      cpu:                                      250m
      memory:                                   32Mi
      pod.alpha.kubernetes.io/opaque-int-resource-bananas:   1
    State:                                      Running
      Started:                                  Thu, 11 Aug 2016 19:58:51 +0000
    Ready:                                      True
    Restart Count:                              0
    Volume Mounts:                              <none>
    Environment Variables:                      <none>
Conditions:
  Type          Status
  Initialized   True 
  Ready         True 
  PodScheduled  True 
No volumes.
QoS Class:      Burstable
Events:
  FirstSeen     LastSeen        Count   From                            SubobjectPath           Type            Reason                  Message
  ---------     --------        -----   ----                            -------------           --------        ------                  -------
  9m            9m              1       {default-scheduler }                                    Normal          Scheduled               Successfully assigned chimp to localhost.localdomain
  9m            9m              2       {kubelet localhost.localdomain}                         Warning         MissingClusterDNS       kubelet does not have ClusterDNS IP configured and cannot create Pod using "ClusterFirst" policy. Falling back to DNSDefault policy.
  9m            9m              1       {kubelet localhost.localdomain} spec.containers{nginx}  Normal          Pulled                  Container image "nginx:1.10" already present on machine
  9m            9m              1       {kubelet localhost.localdomain} spec.containers{nginx}  Normal          Created                 Created container with docker id 46ff268f2f92
  9m            9m              1       {kubelet localhost.localdomain} spec.containers{nginx}  Normal          Started                 Started container with docker id 46ff268f2f92
```

```
$ kubectl describe pods superchimp
Name:           superchimp
Namespace:      default
Node:           /
Labels:         <none>
Status:         Pending
IP:
Controllers:    <none>
Containers:
  nginx:
    Image:      nginx:1.10
    Port:       80/TCP
    Requests:
      cpu:                                      250m
      memory:                                   32Mi
      pod.alpha.kubernetes.io/opaque-int-resource-bananas:   10Ki
    Volume Mounts:                              <none>
    Environment Variables:                      <none>
Conditions:
  Type          Status
  PodScheduled  False 
No volumes.
QoS Class:      Burstable
Events:
  FirstSeen     LastSeen        Count   From                    SubobjectPath   Type            Reason                  Message
  ---------     --------        -----   ----                    -------------   --------        ------                  -------
  3m            1s              15      {default-scheduler }                    Warning         FailedScheduling        pod (superchimp) failed to fit in any node
fit failure on node (localhost.localdomain): Insufficient pod.alpha.kubernetes.io/opaque-int-resource-bananas
```
2016-10-28 22:25:18 -07:00
Tim St. Clair
304dbd0e2e Increase sys container usageBytes upper bound 2016-10-28 14:50:08 -07:00
Kubernetes Submit Queue
bbe36f9186 Merge pull request #35812 from rmmh/owner-better
Automatic merge from submit-queue

Improve update_owners.py username detection and error message.

Fixes the root cause of #35808.
2016-10-28 14:44:38 -07:00
Kubernetes Submit Queue
cf7178d7c3 Merge pull request #35572 from bprashanth/ip_gc
Automatic merge from submit-queue

GC pod ips

Finally managed to write a *failing* test. 
Supersedes https://github.com/kubernetes/kubernetes/pull/34373

```release-note
GC pod ips
```
2016-10-28 14:44:28 -07:00
Ryan Hitchman
20754c0f5c Improve update_owners.py username detection and error message.
Also, skip _output.
2016-10-28 13:23:19 -07:00
Connor Doyle
b0421c1ba4 Updated test owners file. 2016-10-28 10:28:50 -07:00
Connor Doyle
c93646e8da Support opaque integer resource accounting.
- Prevents kubelet from overwriting capacity during sync.
- Handles opaque integer resources in the scheduler.
  - Adds scheduler predicate tests for opaque resources.
- Validates opaque int resources:
  - Ensures supplied opaque int quantities in node capacity,
    node allocatable, pod request and pod limit are integers.
  - Adds tests for new validation logic (node update and pod spec).
- Added e2e tests for opaque integer resources.
2016-10-28 10:15:13 -07:00
bprashanth
7cde81b59c Clean up static-ip in e2e 2016-10-28 10:06:31 -07:00
Clayton Coleman
ca2f1b87ad Replace negotiation with a new method that can extract info
Alter how runtime.SerializeInfo is represented to simplify negotiation
and reduce the need to allocate during negotiation. Simplify the dynamic
client's logic around negotiating type. Add more tests for media type
handling where necessary.
2016-10-28 11:30:11 -04:00
Andy Goldstein
72cec547cd Convert - to _ for protobuf package names
Convert - to _ for protobuf package names to allow protobuf code generation
support for go packages that have - in their names.
2016-10-28 11:08:13 -04:00
Wojciech Tyczynski
96a26d93f5 Merge pull request #35789 from wojtek-t/fix_quota_backend_bytes
Fix wrong flag to etcd in kubemark
2016-10-28 17:08:12 +02:00
Wojciech Tyczynski
6a4a4bcf36 Fix wrong flag to etcd in kubemark 2016-10-28 15:54:15 +02:00
Kubernetes Submit Queue
1c677ed91e Merge pull request #35690 from gmarek/e2e2
Automatic merge from submit-queue

Create multiple namespaces in the Density test
2016-10-28 06:06:20 -07:00
Piotr Szczesniak
3bea5fc28a Removed 1.3 clientset usage 2016-10-28 15:02:32 +02:00
deads2k
557e653785 add front proxy authenticator 2016-10-28 08:36:46 -04:00
Wojciech Tyczynski
7ee7b55c5e Rename TEST_ETCD_VERSION to ETCD_VERSION 2016-10-28 13:56:59 +02:00
gmarek
30c78c8ab3 Create multiple namespaces in the Density test 2016-10-28 13:50:39 +02:00
Wojciech Tyczynski
2f756e4ebc Merge pull request #35766 from wojtek-t/backend_quota_bytes_kubemark
Increase backend-quota-bytes in kubemark
2016-10-28 12:14:21 +02:00
Kubernetes Submit Queue
14495fed7c Merge pull request #35717 from vishh/rkt-v1.18.0
Automatic merge from submit-queue

Update rkt version on GCI nodes to v1.18.0

v1.18.0 avoids outputting debug information by default; that output happened to pollute events and kubelet logs.
2016-10-28 03:10:30 -07:00
Wojciech Tyczynski
137d2398a8 Increase backend-quota-bytes in kubemark 2016-10-28 09:14:57 +02:00
bprashanth
37bc34c567 periodically GC pod ips 2016-10-27 22:15:35 -07:00
Janet Kuo
e0252f9be0 Update test owners 2016-10-27 17:25:46 -07:00
Janet Kuo
10aee82ae3 Rename PetSet API to StatefulSet 2016-10-27 17:25:10 -07:00
Kubernetes Submit Queue
a266f72b34 Merge pull request #35730 from yujuhong/expand_benchmarks
Automatic merge from submit-queue

Add coreos and gci images to the node benchmark job
2016-10-27 16:47:19 -07:00
Kubernetes Submit Queue
bbe5fe327f Merge pull request #35650 from rmmh/verify-test-owners
Automatic merge from submit-queue

Add hack/verify-test-owners.sh to ensure tests always have owners.

This ensures that new tests or changed tests are assigned appropriate owners.
2016-10-27 16:46:50 -07:00
Anirudh
8f2f4ddab4 Review comments. 2016-10-27 15:12:14 -07:00
Yu-Ju Hong
bf2fd238cc Add coreos and gci images to the node benchmark job 2016-10-27 14:52:58 -07:00
Anirudh
b751e0daa9 Fixing e2e tests which rely on network disruptions. 2016-10-27 14:31:09 -07:00
Kubernetes Submit Queue
da43c15edc Merge pull request #35598 from piosz/test-ownership
Automatic merge from submit-queue

Swap in tests ownership

To bring test ownership closer to the actual areas of expertise, I made the following swap. I included @mtaufen to close the cycle. Please wait to apply the lgtm label until the second reviewer has taken a look.
2016-10-27 13:50:01 -07:00
Kubernetes Submit Queue
90f4ceefc4 Merge pull request #35349 from vishh/gci-cmount
Automatic merge from submit-queue

Update GCI mounter script to run in a rkt container

Depends on #35652
2016-10-27 13:49:37 -07:00
deads2k
df4ed892c4 convert SA controller to shared informers 2016-10-27 15:44:46 -04:00
Ryan Hitchman
8e4e8944b6 Add hack/verify-test-owners.sh to ensure tests always have owners. 2016-10-27 12:35:43 -07:00
Vishnu kannan
c556b33bd6 update rkt to v1.18.0 which avoids outputting debug information by default
Signed-off-by: Vishnu kannan <vishnuk@google.com>
2016-10-27 12:24:29 -07:00
Anirudh Ramanathan
7870543471 Merge pull request #35702 from mikedanese/unrevert
unrevert genrule for bindata
2016-10-27 11:53:04 -07:00
Mike Danese
bce3f0e247 unrevert genrule for bindata 2016-10-27 10:35:28 -07:00
Vishnu kannan
19c19c2e0f Updating GCI mounter to be containerized
Signed-off-by: Vishnu kannan <vishnuk@google.com>
2016-10-27 09:37:08 -07:00
David Ashpole
eb19713486 kubelet calls GetDirFsInfo(root directory) instead of using GetFsInfo(root label). Reverted #33520, and changed e2e test context to use nodefs 2016-10-27 08:04:59 -07:00
Kubernetes Submit Queue
5423eaf431 Merge pull request #35431 from deads2k/client-16-remove-old
Automatic merge from submit-queue

remove the non-generated client

Removes the non-generated client from kube.  The package has a few methods left, but nothing that needs updating when adding new groups.

@ingvagabund
2016-10-27 05:12:33 -07:00
gmarek
b8a83b983f Remove outdated parts of density test 2016-10-27 11:37:26 +02:00
gmarek
d0ef0d238a Add node affinity test to scheduler benchmark 2016-10-27 11:18:49 +02:00
Kubernetes Submit Queue
6e767c71ed Merge pull request #35175 from rothgar/issue-33765
Automatic merge from submit-queue

Fixed gcloud command in logs-generator makefile

I grepped through the code looking for `gcloud` and `push` commands and only found one Makefile missing the `--`. I added it.

fixes #33765 🐛
2016-10-27 01:28:04 -07:00
Kubernetes Submit Queue
e190fec59e Merge pull request #35128 from wongma7/wait-restartpolicy
Automatic merge from submit-queue

Set done to true & return error if RestartPolicy not Always in test framework

Found a small issue with https://github.com/kubernetes/kubernetes/pull/34632: it returns an error if the RestartPolicy is not Always, but the user will never see it because done isn't set to true, so they time out instead.

@Random-Liu because you wrote that PR
2016-10-27 01:27:56 -07:00
Vishnu kannan
e861a5761d Adding a root filesystem override for kubelet mounter
This is useful for supporting hostPath volumes via containerized
mounters in kubelet.

Signed-off-by: Vishnu kannan <vishnuk@google.com>
2016-10-26 21:42:59 -07:00
Kubernetes Submit Queue
dcdbf27d4f Merge pull request #34648 from nikhiljindal/NSCasDel
Automatic merge from submit-queue

Adding cascading deletion support to federated namespaces

Ref https://github.com/kubernetes/kubernetes/issues/33612

With this change, whenever a federated namespace is deleted with `DeleteOptions.OrphanDependents = false`, the federation namespace controller first deletes the corresponding namespaces from all underlying clusters before deleting the federated namespace.

cc @kubernetes/sig-cluster-federation @caesarxuchao


```release-note
Adding support for DeleteOptions.OrphanDependents for federated namespaces. Setting it to false while deleting a federated namespace also deletes the corresponding namespace from all registered clusters.
```
2016-10-26 21:04:03 -07:00
Kubernetes Submit Queue
ab0ee35462 Merge pull request #35651 from caesarxuchao/remove-label-selectors
Automatic merge from submit-queue

Sending #35255 again: Remove versioned LabelSelectors

ref #35255: "Remove versioned LabelSelectors"

FYI @smarterclayton
2016-10-26 18:21:22 -07:00
Kubernetes Submit Queue
f300d7ed69 Merge pull request #35646 from vishh/klet-relative-mount
Automatic merge from submit-queue

rename kubelet flag mounter-path to experimental-mounter-path

```release-note
* Kubelet flag '--mounter-path' renamed to '--experimental-mounter-path'
```

The flag controls an experimental feature, and this renaming ensures that users do not come to depend on it just yet.
2016-10-26 16:57:33 -07:00
nikhiljindal
f955d556f8 Adding cascading deletion support to federated namespaces 2016-10-26 16:54:12 -07:00
Brian Grant
2ae2339d6a Merge pull request #35546 from thockin/kill-head-scary-warning-on-master
Remove obsolete munger on docs
2016-10-26 16:44:53 -07:00
Vishnu kannan
adef4675a0 rename kubelet flag mounter-path to experimental-mounter-path
Signed-off-by: Vishnu kannan <vishnuk@google.com>
2016-10-26 14:50:33 -07:00
Chao Xu
0a896a9e57 remove versioned LabelSelector definitions 2016-10-26 13:50:13 -07:00