Commit Graph

1928 Commits

Author SHA1 Message Date
Kubernetes Submit Queue
744876d13f Merge pull request #38798 from NickrenREN/nodecontroller-status
Automatic merge from submit-queue

delete continue in monitorNodeStatus
2016-12-21 10:35:25 -08:00
Kubernetes Submit Queue
ad47a181ee Merge pull request #38986 from ncdc/fix-daemonset-controller-cache-mutation
Automatic merge from submit-queue

Fix DaemonSet cache mutation

**What this PR does / why we need it**: stops the DaemonSetController from mutating the DaemonSet shared informer cache

**Which issue this PR fixes** *(optional, in `fixes #<issue number>(, fixes #<issue_number>, ...)` format, will close that issue when PR gets merged)*: fixes #38985

cc @deads2k @mikedanese @lavalamp @smarterclayton
2016-12-21 09:09:18 -08:00
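A minimal Go sketch of the rule this fix enforces: objects read from a shared informer cache must be copied before they are mutated. The `DaemonSet` struct and `updateStatus` helper below are simplified stand-ins, not the actual controller code.

```go
package main

import "fmt"

// Simplified stand-in for the API type held in the shared informer cache.
type DaemonSet struct {
	Name        string
	NumberReady int32
}

// updateStatus works on a private copy; writing to the object returned by a
// shared lister would mutate the cache for every other consumer.
func updateStatus(cached *DaemonSet, ready int32) *DaemonSet {
	ds := *cached // copy before mutating
	ds.NumberReady = ready
	return &ds
}

func main() {
	cached := &DaemonSet{Name: "fluentd"}
	updated := updateStatus(cached, 3)
	fmt.Println(cached.NumberReady, updated.NumberReady) // 0 3 — the cached object is untouched
}
```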
Kubernetes Submit Queue
f42574893b Merge pull request #39011 from wojtek-t/node_controller_listing_from_cache
Automatic merge from submit-queue

NodeController listing nodes from cache instead of cache in apiserver

This reduces load on the apiserver.
2016-12-21 03:13:09 -08:00
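A hedged sketch of the shape of this change: the controller reads nodes from a watch-backed local cache rather than issuing a LIST to the apiserver on every pass. `NodeLister` and `memoryLister` below are invented stand-ins for the real informer/lister machinery.

```go
package main

import "fmt"

type Node struct{ Name string }

// NodeLister is served from a local, watch-backed cache.
type NodeLister interface {
	List() []Node
}

type memoryLister struct{ nodes []Node }

func (m *memoryLister) List() []Node { return m.nodes }

// monitorNodes never talks to the apiserver; every pass reads the shared cache.
func monitorNodes(lister NodeLister) {
	for _, n := range lister.List() {
		fmt.Println("checking node", n.Name)
	}
}

func main() {
	monitorNodes(&memoryLister{nodes: []Node{{"node-1"}, {"node-2"}}})
}
```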
Kubernetes Submit Queue
237be4b2be Merge pull request #38855 from gnufied/fix-variable-shadow-exp-backoff
Automatic merge from submit-queue (batch tested with PRs 36888, 38180, 38855, 38590)

Fix variable shadowing in exponential backoff when deleting volumes

While https://github.com/kubernetes/kubernetes/pull/38339 implemented exponential backoff on
volume deletion, that PR has a minor bug: when the error thrown on volume deletion is anything other than a `VolumeInUse` error, exponential backoff does not kick in.

This PR fixes that. It also makes the unit tests more deterministic, because exponential backoff changed the way operations are permitted.

CC @jsafrane @childsb @wongma7
2016-12-20 20:33:56 -08:00
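A simplified Go illustration of the shadowing bug class described above (not the actual pv_controller code): redeclaring `err` with `:=` inside the loop hides the failure from the code outside that decides whether to back off.

```go
package main

import (
	"errors"
	"fmt"
)

var errProvider = errors.New("cloud provider refused to delete volume")

func deleteVolume() error { return errProvider }

// Buggy: the := inside the loop creates a new err, so the outer err stays nil
// and the caller believes the deletion succeeded.
func deleteWithBackoffBuggy() error {
	var err error
	for attempt := 0; attempt < 3; attempt++ {
		if err := deleteVolume(); err != nil {
			continue // retry on the next backoff step
		}
		return nil
	}
	return err
}

// Fixed: assign to the existing variable so the last failure is reported.
func deleteWithBackoffFixed() error {
	var err error
	for attempt := 0; attempt < 3; attempt++ {
		if err = deleteVolume(); err != nil {
			continue
		}
		return nil
	}
	return err
}

func main() {
	fmt.Println(deleteWithBackoffBuggy()) // <nil> — the failure is silently swallowed
	fmt.Println(deleteWithBackoffFixed()) // cloud provider refused to delete volume
}
```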
Hemant Kumar
7b423085fa Fix variable shadowing in exponential backoff when deleting volumes
Also fix pv_controller unit tests to behave more accurately
in light of exponential backoffs
2016-12-20 21:31:12 -05:00
Wojciech Tyczynski
1b2d9eb2e7 NodeController listing nodes from cache instead of cache in apiserver 2016-12-20 13:13:14 +01:00
Kubernetes Submit Queue
d373d1c467 Merge pull request #38917 from foxyriver/if-statement-must-be-true
Automatic merge from submit-queue (batch tested with PRs 38426, 38917, 38891, 38935)

if statement must be true

**What this PR does / why we need it**:

If `len(metrics.Items) == 0`, the function has already returned, so the check `if len(metrics.Items) > 0` is redundant: it must be true.

**Special notes for your reviewer**:

**Release note**:

```release-note
```
2016-12-19 18:18:24 -08:00
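A toy reconstruction of the pattern this PR removes (the function below is illustrative, not the actual metrics code): after the early return on the empty case, a second `len(...) > 0` check can only ever be true.

```go
package main

import "fmt"

func average(items []float64) (float64, bool) {
	if len(items) == 0 {
		return 0, false // the empty case has already been handled here
	}
	// The old pattern wrapped the rest in `if len(items) > 0 { ... }`,
	// which is always true at this point and can simply be dropped.
	var sum float64
	for _, v := range items {
		sum += v
	}
	return sum / float64(len(items)), true
}

func main() {
	fmt.Println(average([]float64{1, 2, 3})) // 2 true
}
```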
Andy Goldstein
febc641cee Fix DaemonSet controller cache mutation
Add dsStoreSynced so we also wait on this cache when starting the
DaemonSetController.

Switch to using a fake clientset in the unit tests.

Fix TestNumberReadyStatus so it doesn't expect the cache to be mutated.
2016-12-19 16:39:23 -05:00
Kubernetes Submit Queue
40bed8e189 Merge pull request #38080 from kargakis/requeue-on-selector-updates
Automatic merge from submit-queue

controller: sync deployments once they don't overlap anymore

Fixes https://github.com/kubernetes/kubernetes/issues/34458.

@kubernetes/deployment
2016-12-19 07:31:15 -08:00
Kubernetes Submit Queue
5f82fe76a2 Merge pull request #38878 from kubernetes/revert-38780-ds-fix1
Automatic merge from submit-queue (batch tested with PRs 34353, 33837, 38878)

Revert "daemonset: bail out after we enqueue once"

I get overzealous sometimes.

Reverts kubernetes/kubernetes#38780
2016-12-19 06:43:00 -08:00
Michail Kargakis
04c6fecbc7 controller: use defaultResync for the deployment controller 2016-12-19 14:04:15 +01:00
Michail Kargakis
d19a1109e2 controller: sync deployments once they don't overlap anymore 2016-12-19 14:04:15 +01:00
foxyriver
69c76d8398 if statement must be true 2016-12-17 11:52:41 +08:00
Maciej Szulik
9f064c57ce Remove extensions/v1beta1 Job 2016-12-17 00:07:24 +01:00
Mike Danese
3a6593c9f1 Revert "daemonset: bail out after we enqueue once" 2016-12-16 10:18:06 -08:00
Robert Rati
91931c138e [scheduling] Moved node affinity from annotations to api fields. #35518 2016-12-16 11:42:43 -05:00
Kubernetes Submit Queue
5b240ca897 Merge pull request #36748 from kargakis/remove-events-from-deployment-tests
Automatic merge from submit-queue

Fix Recreate for Deployments and stop using events in e2e tests

Fixes https://github.com/kubernetes/kubernetes/issues/36453 by removing events from the deployment tests. The test about events during a Rolling deployment is redundant so I just removed it (we already have another test specifically for Rolling deployments).

Closes https://github.com/kubernetes/kubernetes/issues/32567 (we preferred to use pod LISTs instead of a new status API field for replica sets, which would have added many more writes to replica sets).

@kubernetes/deployment
2016-12-16 03:57:02 -08:00
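A hedged sketch of the approach described above, with invented helper names: poll a pod LIST for the old selector until it comes back empty before creating new pods, instead of relying on events or a new replica-set status field.

```go
package main

import (
	"errors"
	"fmt"
	"time"
)

// waitForOldPodsGone polls listOldPods (a stand-in for a label-selected pod
// LIST) until no old pods remain, then reports it is safe to create new pods.
func waitForOldPodsGone(listOldPods func() int, timeout, interval time.Duration) error {
	deadline := time.Now().Add(timeout)
	for time.Now().Before(deadline) {
		if listOldPods() == 0 {
			return nil
		}
		time.Sleep(interval)
	}
	return errors.New("timed out waiting for old pods to be deleted")
}

func main() {
	remaining := 2
	err := waitForOldPodsGone(func() int {
		remaining-- // simulate old pods terminating between polls
		return remaining
	}, time.Second, 10*time.Millisecond)
	fmt.Println("err:", err) // err: <nil>
}
```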
Kubernetes Submit Queue
7ca5f92b58 Merge pull request #38780 from mikedanese/ds-fix1
Automatic merge from submit-queue

daemonset: bail out after we enqueue once

This isn't terrible because we dedup in the queue but it's a waste of
cycles.
2016-12-15 16:15:52 -08:00
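A rough, hypothetical sketch of the idea (the helper names and the per-pod condition are invented, and the log above shows this PR was later reverted in #38878): stop scanning once the daemon set has been enqueued, since further enqueues of the same key would be deduped by the work queue anyway.

```go
package main

import "fmt"

// needsSync is a stand-in for whatever per-pod condition triggers a resync.
func needsSync(pod string) bool { return pod != "" }

// enqueueDaemonSet scans the daemon set's pods and, as soon as one of them
// requires a resync, enqueues the daemon set's key once and stops: repeated
// enqueues of the same key are deduped by the queue, so finishing the scan
// only wastes cycles.
func enqueueDaemonSet(dsKey string, pods []string, enqueue func(string)) {
	for _, pod := range pods {
		if needsSync(pod) {
			enqueue(dsKey)
			return // bail out after we enqueue once
		}
	}
}

func main() {
	enqueued := 0
	enqueueDaemonSet("kube-system/fluentd", []string{"pod-a", "pod-b", "pod-c"}, func(key string) {
		enqueued++
		fmt.Println("enqueued", key)
	})
	fmt.Println("enqueue calls:", enqueued) // 1
}
```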
Michail Kargakis
7ef3e6f7c9 controller: wait for all pods to be deleted before Recreating 2016-12-15 19:55:18 +01:00
bprashanth
98c7fe98e1 Don't eat 403 in service controller 2016-12-15 10:27:14 -08:00
NickrenREN
fab228a4ef delete continue in monitorNodeStatus
the continue statement is already the last statement executed in the for loop body, so it is not needed
2016-12-15 13:41:24 +08:00
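A tiny illustration of this cleanup (the loop below is generic, not the actual monitorNodeStatus code): a `continue` that is already the last statement executed in the loop body is a no-op and can be dropped.

```go
package main

import "fmt"

func main() {
	nodes := map[string]bool{"node-1": true, "node-2": false}
	for name, ready := range nodes {
		if !ready {
			fmt.Println("node not ready:", name)
			continue // redundant: this is already the end of the loop body
		}
	}
}
```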
Kubernetes Submit Queue
d8efc779ed Merge pull request #38154 from caesarxuchao/rename-release_1_5
Automatic merge from submit-queue (batch tested with PRs 38154, 38502)

Rename "release_1_5" clientset to just "clientset"

We used to keep multiple releases in the main repo. Now that [client-go](https://github.com/kubernetes/client-go) does the versioning, there is no need to keep releases in the main repo. This PR renames the "release_1_5" clientset to just "clientset"; clientset development will be done in this directory.

@kubernetes/sig-api-machinery @deads2k 

```release-note
The main repository does not keep multiple releases of clientsets anymore. Please find previous releases at https://github.com/kubernetes/client-go
```
2016-12-14 14:21:51 -08:00
Mike Danese
3a311a2bc2 daemonset: bail out after we enqueue once
This isn't terrible because we dedup in the queue but it's a waste of
cycles.
2016-12-14 12:59:06 -08:00
Chao Xu
6709b7ada2 run hack/update-codegen.sh
run hack/verify-gofmt.sh
update bazel
2016-12-14 12:39:49 -08:00
Chao Xu
03d8820edc rename /release_1_5 to /clientset 2016-12-14 12:39:48 -08:00
Kubernetes Submit Queue
af23f40f82 Merge pull request #37272 from brendandburns/cleanup
Automatic merge from submit-queue

Remove 'minion' from the code in two places in favor of 'node'

Part of https://github.com/kubernetes/kubernetes/issues/1111
2016-12-14 00:09:43 -08:00
Kubernetes Submit Queue
7b8ecda289 Merge pull request #38743 from caesarxuchao/remove
Automatic merge from submit-queue

Remove accidentally committed files

Accidentally committed in #37534.
2016-12-13 20:44:16 -08:00
Chao Xu
411128f294 remove wrongly committed files 2016-12-13 19:44:51 -08:00
Dan Winship
f369372dad Drop version-parsing from pkg/version
pkg/version is now just version constants, etc, not version parsing
2016-12-13 08:53:19 -05:00
Kubernetes Submit Queue
15f9572b8c Merge pull request #38613 from kargakis/do-not-adopt-when-deleted
Automatic merge from submit-queue

controller: adopt pods only when controller is not deleted

When a replica set is deleted, it will continue adopting pods, thus driving the worker that handles it into erroring out, because the adoption is [always cancelled](59c313730c/pkg/controller/controller_ref_manager.go (L110)) in the controller reference manager.
```
E1212 14:40:31.245773    7964 replica_set.go:616] cancel the adopt attempt for pod e2e-tests-deployment-2rr3m_test-rollover-deployment-1981456318-73c3m_791e16cb-c070-11e6-a234-68f72840e7df because the controlller is being deleted
E1212 14:40:31.258462    7964 replica_set.go:616] cancel the adopt attempt for pod e2e-tests-deployment-2rr3m_test-rollover-deployment-1981456318-73c3m_791e16cb-c070-11e6-a234-68f72840e7df because the controlller is being deleted
E1212 14:40:31.259131    7964 replica_set.go:616] cancel the adopt attempt for pod e2e-tests-deployment-2rr3m_test-rollover-deployment-1981456318-73c3m_791e16cb-c070-11e6-a234-68f72840e7df because the controlller is being deleted
E1212 14:40:31.259149    7964 replica_set.go:616] cancel the adopt attempt for pod e2e-tests-deployment-2rr3m_test-rollover-deployment-1981456318-wrmt8_791e3d46-c070-11e6-a234-68f72840e7df because the controlller is being deleted
I1212 14:40:31.268012    7964 deployment_controller.go:314] Error syncing deployment e2e-tests-deployment-2rr3m/test-rollover-deployment: Operation cannot be fulfilled on deployments.extensions "test-rollover-deployment": the object has been modified; please apply your changes to the latest version and try again
E1212 14:40:31.277252    7964 replica_set.go:616] cancel the adopt attempt for pod e2e-tests-deployment-2rr3m_test-rollover-deployment-1981456318-73c3m_791e16cb-c070-11e6-a234-68f72840e7df because the controlller is being deleted
E1212 14:40:31.277276    7964 replica_set.go:616] cancel the adopt attempt for pod e2e-tests-deployment-2rr3m_test-rollover-deployment-1981456318-wrmt8_791e3d46-c070-11e6-a234-68f72840e7df because the controlller is being deleted
E1212 14:40:31.277287    7964 replica_set.go:616] cancel the adopt attempt for pod e2e-tests-deployment-2rr3m_test-rollover-deployment-1981456318-bmqpn_81482114-c070-11e6-a234-68f72840e7df because the controlller is being deleted
E1212 14:40:31.289148    7964 replica_set.go:616] cancel the adopt attempt for pod e2e-tests-deployment-2rr3m_test-rollover-deployment-1981456318-b6s4x_82fa8343-c070-11e6-a234-68f72840e7df because the controlller is being deleted
E1212 14:40:31.289169    7964 replica_set.go:616] cancel the adopt attempt for pod e2e-tests-deployment-2rr3m_test-rollover-deployment-1981456318-73c3m_791e16cb-c070-11e6-a234-68f72840e7df because the controlller is being deleted
E1212 14:40:31.289176    7964 replica_set.go:616] cancel the adopt attempt for pod e2e-tests-deployment-2rr3m_test-rollover-deployment-1981456318-wrmt8_791e3d46-c070-11e6-a234-68f72840e7df because the controlller is being deleted
E1212 14:40:31.289181    7964 replica_set.go:616] cancel the adopt attempt for pod e2e-tests-deployment-2rr3m_test-rollover-deployment-1981456318-bmqpn_81482114-c070-11e6-a234-68f72840e7df because the controlller is being deleted
```

@kubernetes/deployment @caesarxuchao
2016-12-13 04:57:49 -08:00
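A simplified sketch of the guard this PR adds (the types below are illustrative, not the real controller_ref_manager API): skip adoption entirely once the owning controller carries a deletion timestamp, rather than attempting it and having every attempt cancelled as in the log above.

```go
package main

import (
	"fmt"
	"time"
)

type replicaSet struct {
	Name              string
	DeletionTimestamp *time.Time
}

// adoptOrphans claims orphaned pods for rs, but only while rs itself is not
// being deleted; otherwise every adoption attempt would just be cancelled.
func adoptOrphans(rs *replicaSet, orphans []string) {
	if rs.DeletionTimestamp != nil {
		return
	}
	for _, pod := range orphans {
		fmt.Println("adopting", pod, "into", rs.Name)
	}
}

func main() {
	now := time.Now()
	adoptOrphans(&replicaSet{Name: "rs-live"}, []string{"pod-a"})
	adoptOrphans(&replicaSet{Name: "rs-terminating", DeletionTimestamp: &now}, []string{"pod-b"})
}
```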
Kubernetes Submit Queue
8abbedae54 Merge pull request #38315 from mikedanese/pin-gazel
Automatic merge from submit-queue

Pin gazel to a version and support cgo

This fixes the bazel build.

@krousey who is buildcop
2016-12-12 19:32:29 -08:00
Kubernetes Submit Queue
f45e918b8b Merge pull request #35833 from apelisse/owners-pkg-controller
Automatic merge from submit-queue

Curating Owners: pkg/controller

cc @jsafrane @mikedanese @bprashanth @derekwaynecarr @thockin @saad-ali

In an effort to expand the existing pool of reviewers and establish a
two-tiered review process (first someone **lgtms** and then someone
experienced in the project **approves**), we are adding new reviewers to
existing owners files.
## If You Care About the Process:

We did this by algorithmically figuring out who’s contributed code to
the project and in what directories. Unfortunately, that doesn’t work
perfectly: people who have made mechanical code changes (e.g., changing the
copyright header across all directories) end up as reviewers in lots of
places.

Instead of using pure commit data, we generated an excessively large
list of reviewers and pruned it based on all-time commit data, recent
commit data, and review data (number of PRs commented on).

At this point we have a decent list of reviewers, but it needs one last
pass for fine tuning.
## TLDR:

As an owner of a sig/directory and a leader of the project, here’s what
we need from you:
1. Use PR https://github.com/kubernetes/kubernetes/pull/35715 as an example.
2. The pull request is editable; please edit the OWNERS file to add
   the names of people who should be reviewing code in the future to the **reviewers** section. You probably do NOT need to modify the **approvers** section.
3. Notify me if you want some OWNERS file to be removed.  Being an approver or reviewer
   of a parent directory makes you a reviewer/approver of the subdirectories too, so not all
   OWNERS files may be necessary.
4. Please use ALIAS if you want to use the same list of people over and
   over again (don't hesitate to ask me for help, or use the pull-request
   above as an example)
2016-12-12 18:51:33 -08:00
Prashanth B
8ff3182fd4 Update OWNERS 2016-12-12 17:55:18 -08:00
Prashanth B
0eda833c31 Update OWNERS 2016-12-12 17:54:39 -08:00
Mike Danese
c87de85347 autoupdate BUILD files 2016-12-12 13:30:07 -08:00
Kubernetes Submit Queue
5e6578a734 Merge pull request #38419 from freehan/service-status-update
Automatic merge from submit-queue

bump log level on service status update

ref: https://github.com/kubernetes/kubernetes/issues/38349

I tried to reproduce the problem in #38349 and failed. It is not clear why the service status update failed and the service controller then skipped the status update in the next round. What I have observed is that if a service status update fails due to a conflict, the next round of processServiceUpdate corrects it.

Bumping log level to get a better signal when it occurs.
2016-12-12 12:42:53 -08:00
Michail Kargakis
ec2c79a35e controller: adopt pods only when controller is not deleted 2016-12-12 15:12:44 +01:00
Michail Kargakis
9c7b39066e Log enqueueing replica sets for availability checks 2016-12-12 14:09:16 +01:00
Kubernetes Submit Queue
83a77fa5a1 Merge pull request #38299 from kargakis/calculate-unavailable-correctly
Automatic merge from submit-queue (batch tested with PRs 38608, 38299)

controller: set unavailableReplicas correctly when scaling down

```
deployment_controller.go:299] Error syncing deployment
e2e-tests-kubectl-2l7xx/e2e-test-nginx-deployment:
Deployment.extensions "e2e-test-nginx-deployment" is invalid:
status.unavailableReplicas: Invalid value: -1:
must be greater than or equal to 0
```

The validation error above usually occurs when a Deployment is
scaled down. In such a case we should default unavailableReplicas
to 0 instead of making an invalid API call.

@kubernetes/deployment
2016-12-12 04:18:04 -08:00
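A minimal sketch of the defaulting described above (field names simplified): on a scale-down the raw difference can go negative, so it is clamped at zero before being written to status.

```go
package main

import "fmt"

// unavailableReplicas never returns a negative value: a negative
// status.unavailableReplicas fails API validation, as in the error above.
func unavailableReplicas(desired, available int32) int32 {
	if unavailable := desired - available; unavailable > 0 {
		return unavailable
	}
	return 0
}

func main() {
	fmt.Println(unavailableReplicas(10, 7)) // 3
	fmt.Println(unavailableReplicas(2, 3))  // 0, not -1
}
```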
Kubernetes Submit Queue
f071c7701d Merge pull request #38595 from yarntime/fix_typo_storage
Automatic merge from submit-queue

fix typo

**What this PR does / why we need it**:
    fix typo.

**Release note**:

```release-note
NONE
```
2016-12-11 22:14:21 -08:00
yarntime@163.com
a71741929e fix typo 2016-12-12 10:32:06 +08:00
Clayton Coleman
c52d510a24
refactor: generated 2016-12-10 18:05:53 -05:00
Clayton Coleman
3c72ee2189
Change references to OwnerReference 2016-12-10 18:05:36 -05:00
Clayton Coleman
42d410fdde
Switch to use pkg/apis/meta/v1/unstructured and the new interfaces
Avoid directly accessing an unstructured type if it is not required.
2016-12-10 18:05:28 -05:00
Clayton Coleman
c30862a488
Move OwnerReference to pkg/apis/meta/v1 and remove metatypes pkg
OwnerReference is common.
2016-12-10 18:05:28 -05:00
Kubernetes Submit Queue
e732ee70f4 Merge pull request #38406 from liggitt/remove-internal-json-annotations
Automatic merge from submit-queue

Remove json serialization annotations from internal types

fixes #3933

Internal types should never be serialized, and including json serialization tags on them makes it possible to accidentally do that without realizing it.

fixes in this PR:

* types
  * [x] remove json tags from internal types
  * [x] fix references from serialized types to internal ObjectMeta
* generation
  * [x] remove generated json codecs for internal types (they should never be used)
* kubectl
  * [x] fix `apply` to operate on versioned object
  * [x] fix sorting by field to operate on versioned object
  * [x] fix `--record` to build annotation patch using versioned object
* hpa
  * [x] fix unmarshaling to internal CustomMetricTargetList in validation
* thirdpartyresources
  * [x] fix encoding API responses using internal ObjectMeta
* tests
  * [x] fix tests to use versioned objects when checking encoded content
  * [x] fix tests passing internal objects to generic printers

follow ups (will open tracking issues or additional PRs):
- [ ] remove json tags from internal kubeconfig types (`kubectl config set` pathfinding needs to work against external type)
- [ ] HPA should version CustomMetricTargetList serialization in annotations
- [ ] revisit how TPR resthandlers encode objects
- [ ] audit and add tests for printer use (human-readable printer requires internal versions, generic printers require external versions)
- [ ] add static analysis tests preventing new internal types from adding tags
- [ ] add static analysis tests requiring json tags on external types (and enforcing lower-case first letter)
- [ ] add more tests for `kubectl get` exercising known and unknown types with all output options
2016-12-10 14:00:17 -08:00
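A hedged illustration of the convention this PR enforces (the `Widget` types are made up): versioned external types carry json tags because they are what gets serialized, while internal types carry none, so accidentally encoding one produces obviously wrong output instead of silently appearing to work.

```go
package main

import (
	"encoding/json"
	"fmt"
)

// External (versioned) type: this is what gets serialized, so it has json tags.
type WidgetV1 struct {
	Name     string `json:"name"`
	Replicas int32  `json:"replicas,omitempty"`
}

// Internal type: never meant to be serialized, so it deliberately has no tags.
type Widget struct {
	Name     string
	Replicas int32
}

func main() {
	external, _ := json.Marshal(WidgetV1{Name: "w", Replicas: 2})
	internal, _ := json.Marshal(Widget{Name: "w", Replicas: 2})
	fmt.Println(string(external)) // {"name":"w","replicas":2}
	fmt.Println(string(internal)) // {"Name":"w","Replicas":2} — capitalized keys betray the mistake
}
```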
Kubernetes Submit Queue
f7e3668867 Merge pull request #37611 from yarntime/fix_typo_in_pet_set
Automatic merge from submit-queue

fix typo in pet_set

fix typo in pet_set.
2016-12-09 15:38:19 -08:00
Kubernetes Submit Queue
b72c006eb3 Merge pull request #34554 from derekwaynecarr/quota-storage-class
Automatic merge from submit-queue (batch tested with PRs 37270, 38309, 37568, 34554)

Ability to quota storage by storage class

Adds the ability to quota storage by storage class.
1. `<storage-class>.storageclass.storage.k8s.io/persistentvolumeclaims` - quota the number of claims with a specific storage class
2. `<storage-class>.storageclass.storage.k8s.io/requests.storage` - quota the cumulative request for storage in a particular storage class.

For example:

```
$ cat quota.yaml
apiVersion: v1
kind: ResourceQuota
metadata:
  name: storage-quota
spec:
  hard:
    requests.storage: 100Gi
    persistentvolumeclaims: 100
    gold.storageclass.storage.k8s.io/requests.storage: 50Gi
    gold.storageclass.storage.k8s.io/persistentvolumeclaims: 5
    silver.storageclass.storage.k8s.io/requests.storage: 75Gi
    silver.storageclass.storage.k8s.io/persistentvolumeclaims: 10
    bronze.storageclass.storage.k8s.io/requests.storage: 100Gi
    bronze.storageclass.storage.k8s.io/persistentvolumeclaims: 15
$ kubectl create -f quota.yaml
$ cat pvc-bronze.yaml
kind: PersistentVolumeClaim
apiVersion: v1
metadata:
  generateName: pvc-bronze-
  annotations:
    volume.beta.kubernetes.io/storage-class: "bronze"
spec:
  accessModes:
    - ReadWriteOnce
  resources:
    requests:
      storage: 8Gi
$ kubectl create -f pvc-bronze.yaml
$ kubectl get quota storage-quota -o yaml
apiVersion: v1
kind: ResourceQuota
...
status:
  hard:
    bronze.storageclass.storage.k8s.io/persistentvolumeclaims: "15"
    bronze.storageclass.storage.k8s.io/requests.storage: 100Gi
    gold.storageclass.storage.k8s.io/persistentvolumeclaims: "5"
    gold.storageclass.storage.k8s.io/requests.storage: 50Gi
    persistentvolumeclaims: "100"
    requests.storage: 100Gi
    silver.storageclass.storage.k8s.io/persistentvolumeclaims: "10"
    silver.storageclass.storage.k8s.io/requests.storage: 75Gi
  used:
    bronze.storageclass.storage.k8s.io/persistentvolumeclaims: "1"
    bronze.storageclass.storage.k8s.io/requests.storage: 8Gi
    gold.storageclass.storage.k8s.io/persistentvolumeclaims: "0"
    gold.storageclass.storage.k8s.io/requests.storage: "0"
    persistentvolumeclaims: "1"
    requests.storage: 8Gi
    silver.storageclass.storage.k8s.io/persistentvolumeclaims: "0"
    silver.storageclass.storage.k8s.io/requests.storage: "0"
```
2016-12-09 14:11:21 -08:00
Jordan Liggitt
6676bab9c3
Fix unmarshaling into internal version of CustomMetricTargetList in validation 2016-12-09 16:26:05 -05:00
Kubernetes Submit Queue
43233caaf0 Merge pull request #37871 from Random-Liu/use-patch-in-kubelet
Automatic merge from submit-queue (batch tested with PRs 36692, 37871)

Use PatchStatus to update node status in kubelet.

Fixes https://github.com/kubernetes/kubernetes/issues/37771.

This PR changes kubelet to update node status with `PatchStatus`.

@caesarxuchao @ymqytw told me that there is a limitation in the current `CreateTwoWayMergePatch`: it doesn't support primitive-type slices that use strategic merge.
* I checked the node status; the only primitive-type slices in NodeStatus are as follows, and they are not using strategic merge:
  * [`ContainerImage.Names`](https://github.com/kubernetes/kubernetes/blob/master/pkg/api/v1/types.go#L2963)
  * [`VolumesInUse`](https://github.com/kubernetes/kubernetes/blob/master/pkg/api/v1/types.go#L2909)
* The volume package is already [using `CreateStrategicMergePatch` to generate the node status update patch](https://github.com/kubernetes/kubernetes/blob/master/pkg/controller/volume/attachdetach/statusupdater/node_status_updater.go#L111), and so far everything is fine.

@yujuhong @dchen1107 
/cc @kubernetes/sig-node
2016-12-09 11:29:11 -08:00
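A rough, self-contained sketch of the patch-based flow described above; the `nodeClient` interface and the way the patch body is assembled are simplified stand-ins for kubelet's real PatchStatus and CreateTwoWayMergePatch machinery.

```go
package main

import (
	"encoding/json"
	"fmt"
)

type nodeStatus struct {
	VolumesInUse []string `json:"volumesInUse,omitempty"`
}

// nodeClient is a stand-in for the client surface kubelet talks to.
type nodeClient interface {
	PatchStatus(name string, patch []byte) error
}

type fakeClient struct{}

func (fakeClient) PatchStatus(name string, patch []byte) error {
	fmt.Printf("PATCH nodes/%s/status: %s\n", name, patch)
	return nil
}

// updateNodeStatus sends only the changed status fields as a patch instead of
// a full UPDATE of the node object, which is the point of switching kubelet
// to a patch-based status update.
func updateNodeStatus(c nodeClient, name string, status nodeStatus) error {
	body, err := json.Marshal(map[string]interface{}{"status": status})
	if err != nil {
		return err
	}
	return c.PatchStatus(name, body)
}

func main() {
	_ = updateNodeStatus(fakeClient{}, "node-1", nodeStatus{VolumesInUse: []string{"kubernetes.io/gce-pd/vol-a"}})
}
```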