Commit Graph

1913 Commits

Author SHA1 Message Date
Maciej Szulik
9f064c57ce Remove extensions/v1beta1 Job 2016-12-17 00:07:24 +01:00
Robert Rati
91931c138e [scheduling] Moved node affinity from annotations to api fields. #35518 2016-12-16 11:42:43 -05:00
Kubernetes Submit Queue
5b240ca897 Merge pull request #36748 from kargakis/remove-events-from-deployment-tests
Automatic merge from submit-queue

Fix Recreate for Deployments and stop using events in e2e tests

Fixes https://github.com/kubernetes/kubernetes/issues/36453 by removing events from the deployment tests. The test about events during a Rolling deployment is redundant, so I just removed it (we already have another test specifically for Rolling deployments).

Closes https://github.com/kubernetes/kubernetes/issues/32567 (we preferred to use pod LISTs instead of a new status API field for replica sets, which would add many more writes to replica sets).
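
A minimal Go sketch of the pod-LIST approach preferred here: poll the list of pods matching the old replica set's selector and proceed only once it is empty. All names and the polling interval are illustrative, not the controller's actual code.

```go
package recreate

import (
	"fmt"
	"time"
)

// podLister stands in for the pod LIST call made via the generated
// clientset; it returns how many pods match the selector.
type podLister func(selector string) (int, error)

// waitForOldPodsGone polls the pod LIST until no pods matching the old
// replica set's selector remain; only then may a Recreate rollout
// scale up the new replica set.
func waitForOldPodsGone(list podLister, selector string, timeout time.Duration) error {
	deadline := time.Now().Add(timeout)
	for time.Now().Before(deadline) {
		n, err := list(selector)
		if err != nil {
			return err
		}
		if n == 0 {
			return nil // all old pods are gone; safe to scale up
		}
		time.Sleep(time.Second)
	}
	return fmt.Errorf("pods matching %q still running after %v", selector, timeout)
}
```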

@kubernetes/deployment
2016-12-16 03:57:02 -08:00
Kubernetes Submit Queue
7ca5f92b58 Merge pull request #38780 from mikedanese/ds-fix1
Automatic merge from submit-queue

daemonset: bail out after we enqueue once

This isn't terrible because we dedup in the queue, but it's a waste of cycles.
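
A minimal Go sketch of the bail-out, assuming a simplified queue interface; the real controller enqueues namespaced keys via its work queue.

```go
package daemon

// queue is a stand-in for the controller's work queue, which dedups
// keys that are already enqueued.
type queue interface{ Add(key string) }

// requeueForPods previously called q.Add(dsKey) once per matching pod;
// one Add is enough, so we return after the first hit.
func requeueForPods(q queue, dsKey string, pods []string, matches func(pod string) bool) {
	for _, pod := range pods {
		if matches(pod) {
			q.Add(dsKey)
			return // bail out: further Adds would be deduped anyway
		}
	}
}
```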
2016-12-15 16:15:52 -08:00
Michail Kargakis
7ef3e6f7c9 controller: wait for all pods to be deleted before Recreating 2016-12-15 19:55:18 +01:00
bprashanth
98c7fe98e1 Don't eat 403 in service controller 2016-12-15 10:27:14 -08:00
Kubernetes Submit Queue
d8efc779ed Merge pull request #38154 from caesarxuchao/rename-release_1_5
Automatic merge from submit-queue (batch tested with PRs 38154, 38502)

Rename "release_1_5" clientset to just "clientset"

We used to keep multiple releases in the main repo. Now that [client-go](https://github.com/kubernetes/client-go) does the versioning, there is no need to keep releases in the main repo. This PR renames the "release_1_5" clientset to just "clientset"; clientset development will be done in this directory.

@kubernetes/sig-api-machinery @deads2k 

```release-note
The main repository does not keep multiple releases of clientsets anymore. Please find previous releases at https://github.com/kubernetes/client-go
```
2016-12-14 14:21:51 -08:00
Mike Danese
3a311a2bc2 daemonset: bail out after we enqueue once
This isn't terrible because we dedup in the queue, but it's a waste of cycles.
2016-12-14 12:59:06 -08:00
Chao Xu
6709b7ada2 run hack/update-codegen.sh
run hack/verify-gofmt.sh
update bazel
2016-12-14 12:39:49 -08:00
Chao Xu
03d8820edc rename /release_1_5 to /clientset 2016-12-14 12:39:48 -08:00
Kubernetes Submit Queue
af23f40f82 Merge pull request #37272 from brendandburns/cleanup
Automatic merge from submit-queue

Remove 'minion' from the code in two places in favor of 'node'

Part of https://github.com/kubernetes/kubernetes/issues/1111
2016-12-14 00:09:43 -08:00
Kubernetes Submit Queue
7b8ecda289 Merge pull request #38743 from caesarxuchao/remove
Automatic merge from submit-queue

Remove accidentally committed files

Accidentally committed in #37534.
2016-12-13 20:44:16 -08:00
Chao Xu
411128f294 remove wrongly committed files 2016-12-13 19:44:51 -08:00
Dan Winship
f369372dad Drop version-parsing from pkg/version
pkg/version is now just version constants, etc., not version parsing
2016-12-13 08:53:19 -05:00
Kubernetes Submit Queue
15f9572b8c Merge pull request #38613 from kargakis/do-not-adopt-when-deleted
Automatic merge from submit-queue

controller: adopt pods only when controller is not deleted

When a replica set is deleted it will continue adopting pods, thus driving the worker that handles it into erroring out, because the adoption is [always cancelled](59c313730c/pkg/controller/controller_ref_manager.go (L110)) in the controller reference manager.
```
E1212 14:40:31.245773    7964 replica_set.go:616] cancel the adopt attempt for pod e2e-tests-deployment-2rr3m_test-rollover-deployment-1981456318-73c3m_791e16cb-c070-11e6-a234-68f72840e7df because the controlller is being deleted
E1212 14:40:31.258462    7964 replica_set.go:616] cancel the adopt attempt for pod e2e-tests-deployment-2rr3m_test-rollover-deployment-1981456318-73c3m_791e16cb-c070-11e6-a234-68f72840e7df because the controlller is being deleted
E1212 14:40:31.259131    7964 replica_set.go:616] cancel the adopt attempt for pod e2e-tests-deployment-2rr3m_test-rollover-deployment-1981456318-73c3m_791e16cb-c070-11e6-a234-68f72840e7df because the controlller is being deleted
E1212 14:40:31.259149    7964 replica_set.go:616] cancel the adopt attempt for pod e2e-tests-deployment-2rr3m_test-rollover-deployment-1981456318-wrmt8_791e3d46-c070-11e6-a234-68f72840e7df because the controlller is being deleted
I1212 14:40:31.268012    7964 deployment_controller.go:314] Error syncing deployment e2e-tests-deployment-2rr3m/test-rollover-deployment: Operation cannot be fulfilled on deployments.extensions "test-rollover-deployment": the object has been modified; please apply your changes to the latest version and try again
E1212 14:40:31.277252    7964 replica_set.go:616] cancel the adopt attempt for pod e2e-tests-deployment-2rr3m_test-rollover-deployment-1981456318-73c3m_791e16cb-c070-11e6-a234-68f72840e7df because the controlller is being deleted
E1212 14:40:31.277276    7964 replica_set.go:616] cancel the adopt attempt for pod e2e-tests-deployment-2rr3m_test-rollover-deployment-1981456318-wrmt8_791e3d46-c070-11e6-a234-68f72840e7df because the controlller is being deleted
E1212 14:40:31.277287    7964 replica_set.go:616] cancel the adopt attempt for pod e2e-tests-deployment-2rr3m_test-rollover-deployment-1981456318-bmqpn_81482114-c070-11e6-a234-68f72840e7df because the controlller is being deleted
E1212 14:40:31.289148    7964 replica_set.go:616] cancel the adopt attempt for pod e2e-tests-deployment-2rr3m_test-rollover-deployment-1981456318-b6s4x_82fa8343-c070-11e6-a234-68f72840e7df because the controlller is being deleted
E1212 14:40:31.289169    7964 replica_set.go:616] cancel the adopt attempt for pod e2e-tests-deployment-2rr3m_test-rollover-deployment-1981456318-73c3m_791e16cb-c070-11e6-a234-68f72840e7df because the controlller is being deleted
E1212 14:40:31.289176    7964 replica_set.go:616] cancel the adopt attempt for pod e2e-tests-deployment-2rr3m_test-rollover-deployment-1981456318-wrmt8_791e3d46-c070-11e6-a234-68f72840e7df because the controlller is being deleted
E1212 14:40:31.289181    7964 replica_set.go:616] cancel the adopt attempt for pod e2e-tests-deployment-2rr3m_test-rollover-deployment-1981456318-bmqpn_81482114-c070-11e6-a234-68f72840e7df because the controlller is being deleted
```
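
A minimal Go sketch of the guard this PR adds, with a toy type standing in for the real replica set object; the actual check lives in the controller, not in this form.

```go
package controller

import "time"

// replicaSet is a toy stand-in; only the field the fix inspects is shown.
type replicaSet struct {
	DeletionTimestamp *time.Time
}

// canAdopt captures the fix: a controller that is itself being deleted
// (DeletionTimestamp set) must not try to adopt pods, since the ref
// manager cancels every such attempt and the worker errors out
// repeatedly, as in the log above.
func canAdopt(rs *replicaSet) bool {
	return rs.DeletionTimestamp == nil
}
```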

@kubernetes/deployment @caesarxuchao
2016-12-13 04:57:49 -08:00
Kubernetes Submit Queue
8abbedae54 Merge pull request #38315 from mikedanese/pin-gazel
Automatic merge from submit-queue

Pin gazel to a version and support cgo

This fixes the bazel build.

@krousey who is buildcop
2016-12-12 19:32:29 -08:00
Kubernetes Submit Queue
f45e918b8b Merge pull request #35833 from apelisse/owners-pkg-controller
Automatic merge from submit-queue

Curating Owners: pkg/controller

cc @jsafrane @mikedanese @bprashanth @derekwaynecarr @thockin @saad-ali

In an effort to expand the existing pool of reviewers and establish a
two-tiered review process (first someone **lgtms** and then someone
experienced in the project **approves**), we are adding new reviewers to
existing owners files.
## If You Care About the Process:

We did this by algorithmically figuring out who’s contributed code to
the project and in what directories.  Unfortunately, that doesn’t work
perfectly: people who have made mechanical code changes (e.g. changing the
copyright header across all directories) end up as reviewers in lots of
places.

Instead of using pure commit data, we generated an excessively large
list of reviewers and pruned based on all time commit data, recent
commit data and review data (number of PRs commented on).

At this point we have a decent list of reviewers, but it needs one last
pass for fine tuning.
## TLDR:

As an owner of a sig/directory and a leader of the project, here’s what
we need from you:
1. Use PR https://github.com/kubernetes/kubernetes/pull/35715 as an example.
2. The pull-request is made editable; please edit the OWNERS file to add
   the names of people that should be reviewing code in the future in the **reviewers** section. You probably do NOT need to modify the **approvers** section.
3. Notify me if you want some OWNERS file to be removed.  Being an approver or reviewer
   of a parent directory makes you a reviewer/approver of the subdirectories too, so not all
   OWNERS files may be necessary.
4. Please use ALIAS if you want to use the same list of people over and
   over again (don't hesitate to ask me for help, or use the pull-request
   above as an example).
2016-12-12 18:51:33 -08:00
Prashanth B
8ff3182fd4 Update OWNERS 2016-12-12 17:55:18 -08:00
Prashanth B
0eda833c31 Update OWNERS 2016-12-12 17:54:39 -08:00
Mike Danese
c87de85347 autoupdate BUILD files 2016-12-12 13:30:07 -08:00
Kubernetes Submit Queue
5e6578a734 Merge pull request #38419 from freehan/service-status-update
Automatic merge from submit-queue

bump log level on service status update

ref: https://github.com/kubernetes/kubernetes/issues/38349

I tried to reproduce the problem in #38349 and failed. It is not clear why the service status update failed and why the service controller skipped the status update in the next round. What I have observed is that if a service status update fails due to a conflict, the next round of processServiceUpdate will correct it.

Bumping log level to get a better signal when it occurs.
2016-12-12 12:42:53 -08:00
Michail Kargakis
ec2c79a35e controller: adopt pods only when controller is not deleted 2016-12-12 15:12:44 +01:00
Michail Kargakis
9c7b39066e Log enqueueing replica sets for availability checks 2016-12-12 14:09:16 +01:00
Kubernetes Submit Queue
83a77fa5a1 Merge pull request #38299 from kargakis/calculate-unavailable-correctly
Automatic merge from submit-queue (batch tested with PRs 38608, 38299)

controller: set unavailableReplicas correctly when scaling down

```
deployment_controller.go:299] Error syncing deployment
e2e-tests-kubectl-2l7xx/e2e-test-nginx-deployment:
Deployment.extensions "e2e-test-nginx-deployment" is invalid:
status.unavailableReplicas: Invalid value: -1:
must be greater than or equal to 0
```

The validation error above usually occurs when a Deployment is
scaled down. In such a case we should default unavailableReplicas
to 0 instead of making an invalid API call.
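
A sketch of the clamp in Go; the function name is hypothetical, but the arithmetic matches the description above.

```go
package deploymentutil

// unavailableFor clamps the computed count at zero. During a scale
// down, available replicas can momentarily exceed the desired count,
// and writing the raw negative difference to
// status.unavailableReplicas fails validation as shown above.
func unavailableFor(desired, available int32) int32 {
	if d := desired - available; d > 0 {
		return d
	}
	return 0
}
```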

@kubernetes/deployment
2016-12-12 04:18:04 -08:00
Kubernetes Submit Queue
f071c7701d Merge pull request #38595 from yarntime/fix_typo_storage
Automatic merge from submit-queue

fix typo

**What this PR does / why we need it**:
    fix typo.

**Release note**:

```release-note
NONE
```
2016-12-11 22:14:21 -08:00
yarntime@163.com
a71741929e fix typo 2016-12-12 10:32:06 +08:00
Clayton Coleman
c52d510a24
refactor: generated 2016-12-10 18:05:53 -05:00
Clayton Coleman
3c72ee2189
Change references to OwnerReference 2016-12-10 18:05:36 -05:00
Clayton Coleman
42d410fdde
Switch to use pkg/apis/meta/v1/unstructured and the new interfaces
Avoid directly accessing an unstructured type if it is not required.
2016-12-10 18:05:28 -05:00
Clayton Coleman
c30862a488
Move OwnerReference to pkg/apis/meta/v1 and remove metatypes pkg
OwnerReference is common.
2016-12-10 18:05:28 -05:00
Kubernetes Submit Queue
e732ee70f4 Merge pull request #38406 from liggitt/remove-internal-json-annotations
Automatic merge from submit-queue

Remove json serialization annotations from internal types

fixes #3933

Internal types should never be serialized, and including json serialization tags on them makes it possible to accidentally do that without realizing it (see the sketch after the checklists below).

fixes in this PR:

* types
  * [x] remove json tags from internal types
  * [x] fix references from serialized types to internal ObjectMeta
* generation
  * [x] remove generated json codecs for internal types (they should never be used)
* kubectl
  * [x] fix `apply` to operate on versioned object
  * [x] fix sorting by field to operate on versioned object
  * [x] fix `--record` to build annotation patch using versioned object
* hpa
  * [x] fix unmarshaling to internal CustomMetricTargetList in validation
* thirdpartyresources
  * [x] fix encoding API responses using internal ObjectMeta
* tests
  * [x] fix tests to use versioned objects when checking encoded content
  * [x] fix tests passing internal objects to generic printers

follow ups (will open tracking issues or additional PRs):
- [ ] remove json tags from internal kubeconfig types (`kubectl config set` pathfinding needs to work against external type)
- [ ] HPA should version CustomMetricTargetList serialization in annotations
- [ ] revisit how TPR resthandlers encode objects
- [ ] audit and add tests for printer use (human-readable printer requires internal versions, generic printers require external versions)
- [ ] add static analysis tests preventing new internal types from adding tags
- [ ] add static analysis tests requiring json tags on external types (and enforcing lower-case first letter)
- [ ] add more tests for `kubectl get` exercising known and unknown types with all output options
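
A toy Go example of the rationale (types hypothetical): external types carry json tags because they are meant to be serialized; internal types carry none, so an accidental json.Marshal yields visibly wrong output instead of silently valid wire data.

```go
package sketch

// WidgetV1 is a versioned (external) type: it is meant to be
// serialized, so it declares its wire format with json tags.
type WidgetV1 struct {
	Name string `json:"name"`
}

// Widget is the matching internal type: no json tags. An accidental
// json.Marshal now emits capitalized keys ("Name") rather than output
// indistinguishable from the versioned form, so the mistake surfaces.
type Widget struct {
	Name string
}
```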
2016-12-10 14:00:17 -08:00
Kubernetes Submit Queue
f7e3668867 Merge pull request #37611 from yarntime/fix_typo_in_pet_set
Automatic merge from submit-queue

fix typo in pet_set

fix typo in pet_set.
2016-12-09 15:38:19 -08:00
Kubernetes Submit Queue
b72c006eb3 Merge pull request #34554 from derekwaynecarr/quota-storage-class
Automatic merge from submit-queue (batch tested with PRs 37270, 38309, 37568, 34554)

Ability to quota storage by storage class

Adds the ability to quota storage by storage class.
1. `<storage-class>.storageclass.storage.k8s.io/persistentvolumeclaims` - quota the number of claims with a specific storage class
2. `<storage-class>.storageclass.storage.k8s.io/requests.storage` - quota the cumulative request for storage in a particular storage class.

For example:

```
$ cat quota.yaml
apiVersion: v1
kind: ResourceQuota
metadata:
  name: storage-quota
spec:
  hard:
    requests.storage: 100Gi
    persistentvolumeclaims: 100
    gold.storageclass.storage.k8s.io/requests.storage: 50Gi
    gold.storageclass.storage.k8s.io/persistentvolumeclaims: 5
    silver.storageclass.storage.k8s.io/requests.storage: 75Gi
    silver.storageclass.storage.k8s.io/persistentvolumeclaims: 10
    bronze.storageclass.storage.k8s.io/requests.storage: 100Gi
    bronze.storageclass.storage.k8s.io/persistentvolumeclaims: 15
$ kubectl create -f quota.yaml
$ cat pvc-bronze.yaml
kind: PersistentVolumeClaim
apiVersion: v1
metadata:
  generateName: pvc-bronze-
  annotations:
    volume.beta.kubernetes.io/storage-class: "bronze"
spec:
  accessModes:
    - ReadWriteOnce
  resources:
    requests:
      storage: 8Gi
$ kubectl create -f pvc-bronze.yaml
$ kubectl get quota storage-quota -o yaml
apiVersion: v1
kind: ResourceQuota
...
status:
  hard:
    bronze.storageclass.storage.k8s.io/persistentvolumeclaims: "15"
    bronze.storageclass.storage.k8s.io/requests.storage: 100Gi
    gold.storageclass.storage.k8s.io/persistentvolumeclaims: "5"
    gold.storageclass.storage.k8s.io/requests.storage: 50Gi
    persistentvolumeclaims: "100"
    requests.storage: 100Gi
    silver.storageclass.storage.k8s.io/persistentvolumeclaims: "10"
    silver.storageclass.storage.k8s.io/requests.storage: 75Gi
  used:
    bronze.storageclass.storage.k8s.io/persistentvolumeclaims: "1"
    bronze.storageclass.storage.k8s.io/requests.storage: 8Gi
    gold.storageclass.storage.k8s.io/persistentvolumeclaims: "0"
    gold.storageclass.storage.k8s.io/requests.storage: "0"
    persistentvolumeclaims: "1"
    requests.storage: 8Gi
    silver.storageclass.storage.k8s.io/persistentvolumeclaims: "0"
    silver.storageclass.storage.k8s.io/requests.storage: "0"
```
2016-12-09 14:11:21 -08:00
Jordan Liggitt
6676bab9c3
Fix unmarshaling into internal version of CustomMetricTargetList in validation 2016-12-09 16:26:05 -05:00
Kubernetes Submit Queue
43233caaf0 Merge pull request #37871 from Random-Liu/use-patch-in-kubelet
Automatic merge from submit-queue (batch tested with PRs 36692, 37871)

Use PatchStatus to update node status in kubelet.

Fixes https://github.com/kubernetes/kubernetes/issues/37771.

This PR changes kubelet to update node status with `PatchStatus`.

@caesarxuchao @ymqytw told me that there is a limitation in the current `CreateTwoWayMergePatch`: it doesn't support primitive-type slices that use strategic merge (sketched after the list below).
* I checked the node status; the only primitive-type slices in NodeStatus are as follows, and they are not using strategic merge:
  * [`ContainerImage.Names`](https://github.com/kubernetes/kubernetes/blob/master/pkg/api/v1/types.go#L2963)
  * [`VolumesInUse`](https://github.com/kubernetes/kubernetes/blob/master/pkg/api/v1/types.go#L2909)
* The volume package is already [using `CreateStrategicMergePatch` to generate the node status update patch](https://github.com/kubernetes/kubernetes/blob/master/pkg/controller/volume/attachdetach/statusupdater/node_status_updater.go#L111), and so far everything is fine.
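
A minimal sketch of the patch flow in Go, with toy types and function parameters standing in for the real strategic-merge-patch helper and the API server PATCH call:

```go
package kubelet

import "encoding/json"

// nodeStatus and node are toy stand-ins for the real API types.
type nodeStatus struct {
	Capacity map[string]string `json:"capacity,omitempty"`
}

type node struct {
	Status nodeStatus `json:"status"`
}

// patchNodeStatus shows the shape of the change: serialize the old and
// new objects, diff them into a merge patch, and PATCH only that diff
// instead of PUTting the whole node. twoWayMergePatch stands in for
// patch generation (CreateTwoWayMergePatch in the real code) and
// sendPatch for the API server PATCH request.
func patchNodeStatus(old, updated node,
	twoWayMergePatch func(orig, mod []byte) ([]byte, error),
	sendPatch func(patch []byte) error) error {

	origJSON, err := json.Marshal(old)
	if err != nil {
		return err
	}
	modJSON, err := json.Marshal(updated)
	if err != nil {
		return err
	}
	patch, err := twoWayMergePatch(origJSON, modJSON)
	if err != nil {
		return err
	}
	return sendPatch(patch)
}
```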

@yujuhong @dchen1107 
/cc @kubernetes/sig-node
2016-12-09 11:29:11 -08:00
Derek Carr
459a7a05f1 Ability to quota storage by storage class 2016-12-09 13:26:59 -05:00
Kubernetes Submit Queue
5b5b1e7533 Merge pull request #38371 from wojtek-t/get_options_in_client
Automatic merge from submit-queue (batch tested with PRs 38354, 38371)

Add GetOptions parameter to Get() calls in client library

Ref #37473 

This PR is super mechanical; the non-trivial commits are:
- Update client generator
- Register GetOptions in batch/v2alpha1 group
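
A sketch of the new call shape; the types below are simplified stand-ins for the generated client interfaces, not the real signatures.

```go
package client

// GetOptions mirrors the parameter now threaded through every Get()
// call in the generated clients; the field shown is illustrative.
type GetOptions struct {
	ResourceVersion string
}

// Pod is a placeholder result type.
type Pod struct{ Name string }

// PodGetter shows the signature change: callers now pass GetOptions,
// e.g. GetOptions{ResourceVersion: "0"} to permit a cached read
// rather than forcing a quorum read.
type PodGetter interface {
	Get(name string, options GetOptions) (*Pod, error)
}
```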
2016-12-09 04:12:09 -08:00
Wojciech Tyczynski
aa7da5231f Update bazel files 2016-12-09 09:42:02 +01:00
Wojciech Tyczynski
e8d1cba875 GetOptions in client calls 2016-12-09 09:42:01 +01:00
Kubernetes Submit Queue
98c4c73c71 Merge pull request #37770 from enj/enj/r/storage_decorator
Automatic merge from submit-queue (batch tested with PRs 38278, 37770)

Refactor REST storage to use generic defaults

This removes the repetition in the REST storage builders by moving the logic to `restoptions.ApplyOptions`.  `registry.StorageWithCacher`/`generic.StorageDecorator` no longer assume that they can build the `keyFunc` for arbitrary objects.  `restoptions.ApplyOptions` uses the `registry.Store`'s `KeyFunc` for its call to `generic.StorageDecorator`.
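
A rough Go sketch of the consolidation; all signatures here are assumptions for illustration only, the real ones live in the registry and generic storage packages.

```go
package restoptions

// keyFunc computes the storage key for an object.
type keyFunc func(obj interface{}) (string, error)

// store is a stand-in for registry.Store: it already knows its KeyFunc.
type store struct {
	KeyFunc keyFunc
	Storage interface{}
}

// decorator is a stand-in for generic.StorageDecorator.
type decorator func(kf keyFunc) interface{}

// applyOptions captures the refactor: instead of every REST storage
// builder wiring its own decorator (and the decorator guessing a key
// function for arbitrary objects), one helper decorates the storage
// using the key function the store already defines.
func applyOptions(s *store, decorate decorator) {
	s.Storage = decorate(s.KeyFunc)
}
```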

```release-note
Cluster federation servers have changed the location in etcd where federated services are stored, so existing federated services must be deleted and recreated. Before upgrading, export all federated services from the federation server and delete the services. After upgrading the cluster, recreate the federated services from the exported data.
```
2016-12-09 00:25:35 -08:00
Random-Liu
beba1ebbf8 Use PatchStatus to update node status in kubelet. 2016-12-08 17:13:59 -08:00
Minhan Xia
e082ac4a78 bump log level on service status update 2016-12-08 15:00:43 -08:00
Monis Khan
a6bafbacbf
Refactor REST storage to use generic defaults
Signed-off-by: Monis Khan <mkhan@redhat.com>
2016-12-08 17:24:21 -05:00
Jordan Liggitt
6819706adf
Pass addressable values to DeepCopy 2016-12-08 14:16:01 -05:00
Kubernetes Submit Queue
8f11cc78a8 Merge pull request #38339 from gnufied/backoff-on-volume-delete
Automatic merge from submit-queue (batch tested with PRs 38377, 36365, 36648, 37691, 38339)

Exponential back off when volume delete fails

**What this PR does / why we need it**:

This PR implements the ability for pv_controller to back off when deleting a volume fails in the plugin API.

**Which issue this PR fixes** *(optional, in `fixes #<issue number>(, fixes #<issue_number>, ...)` format, will close that issue when PR gets merged)*: 

Partly fixes #38295, but I think volume deletion is the most problematic thing happening in pv_controller without any sort of backoff.

After this change the attempts of volume deletion look like:

```
controller : I1208 00:18:35.532061   16388 aws_util.go:55] Error deleting EBS Disk volume aws://us-east-1d/vol-abcdefg: VolumeInUse: Volume vol-abcdefg is currently attached to i-1234567
controller : I1208 00:20:50.578325   16388 aws_util.go:55] Error deleting EBS Disk volume aws://us-east-1d/vol-abcdefg: VolumeInUse: Volume vol-abcdefg is currently attached to i-1234567
controller : I1208 00:23:05.563488   16388 aws_util.go:55] Error deleting EBS Disk volume aws://us-east-1d/vol-abcdefg: VolumeInUse: Volume vol-abcdefg is currently attached to i-1234567
controller : I1208 00:25:20.599158   16388 aws_util.go:55] Error deleting EBS Disk volume aws://us-east-1d/vol-abcdefg: VolumeInUse: Volume vol-abcdefg is currently attached to i-1234567
controller : I1208 00:27:35.560009   16388 aws_util.go:55] Error deleting EBS Disk volume aws://us-east-1d/vol-abcdefg: VolumeInUse: Volume vol-abcdefg is currently attached to i-1234567
controller : I1208 00:29:50.594967   16388 aws_util.go:55] Error deleting EBS Disk volume aws://us-east-1d/vol-abcdefg: VolumeInUse: Volume vol-abcdefg is currently attached to i-1234567
controller : I1208 00:32:05.539168   16388 aws_util.go:55] Error deleting EBS Disk volume aws://us-east-1d/vol-abcdefg: VolumeInUse: Volume vol-abcdefg is currently attached to i-1234567
controller : I1208 00:34:20.581665   16388 aws_util.go:55] Error deleting EBS Disk volume aws://us-east-1d/vol-abcdefg: VolumeInUse: Volume vol-abcdefg is currently attached to i-1234567
```
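
A minimal sketch of the backoff arithmetic in Go; the base and cap values are illustrative, not the controller's actual numbers.

```go
package persistentvolume

import "time"

// deleteBackoff doubles the wait after each failed delete and caps it,
// so a persistently failing volume is retried ever more slowly instead
// of hammering the cloud API on every sync.
func deleteBackoff(failures int) time.Duration {
	const (
		base    = 15 * time.Second
		maxWait = 10 * time.Minute
	)
	backoff := base
	for i := 0; i < failures; i++ {
		backoff *= 2
		if backoff > maxWait {
			return maxWait
		}
	}
	return backoff
}
```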
2016-12-08 10:52:03 -08:00
Kubernetes Submit Queue
3519ba4099 Merge pull request #36648 from kargakis/follow-up-to-perma-failed
Automatic merge from submit-queue (batch tested with PRs 38377, 36365, 36648, 37691, 38339)

controller: sync stuck deployments in a secondary queue

@kubernetes/deployment this makes Deployments not depend on a tight resync interval in order to estimate progress.
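
A minimal sketch of the secondary-queue idea, using time.AfterFunc as a stand-in for the controller's rate-limited work queue:

```go
package deployment

import "time"

// secondaryQueue re-checks a rollout on its own schedule instead of
// waiting for the next informer resync.
type secondaryQueue struct {
	enqueue func(key string) // hands the key back to the main worker
}

// recheckAfter schedules a progress check for a deployment once its
// progressDeadlineSeconds window may have elapsed.
func (q *secondaryQueue) recheckAfter(key string, deadline time.Duration) *time.Timer {
	return time.AfterFunc(deadline, func() { q.enqueue(key) })
}
```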
2016-12-08 10:51:59 -08:00
Kubernetes Submit Queue
9125d03418 Merge pull request #36365 from kargakis/backoff-in-deployment-controller
Automatic merge from submit-queue (batch tested with PRs 38377, 36365, 36648, 37691, 38339)

Backoff correctly when adopting replica sets/pods

@kubernetes/deployment ptal

Fixes https://github.com/kubernetes/kubernetes/issues/34534
2016-12-08 10:51:57 -08:00
Hemant Kumar
caf867a402 Exponential back off when volume delete fails
This makes pv_controller back off exponentially when deleting a
volume fails in the Cloud API. It ensures that we aren't making
too many calls to the Cloud API.
2016-12-07 19:25:36 -05:00
Alejandro Escobar
759530536f typo found with controller comment. 2016-12-07 10:55:02 -08:00
Michail Kargakis
c82cae85f6 controller: set unavailableReplicas correctly when scaling down
deployment_controller.go:299] Error syncing deployment
e2e-tests-kubectl-2l7xx/e2e-test-nginx-deployment:
Deployment.extensions "e2e-test-nginx-deployment" is invalid:
status.unavailableReplicas: Invalid value: -1:
must be greater than or equal to 0

The validation error above usually occurs when a Deployment is
scaled down. In such a case we should default unavailableReplicas
to 0 instead of making an invalid API call.
2016-12-07 17:34:09 +01:00