Commit Graph

2040 Commits

Author SHA1 Message Date
Michail Kargakis
ce04ee6170 extensions: add readyReplicas in Deployments 2017-01-02 11:59:15 +01:00
Mike Danese
161c391f44 autogenerated 2016-12-29 13:04:10 -08:00
Bowei Du
589f58ca39 Remove HostRecord annotation (beta feature)
The annotation has made it to GA so this code should be deleted.
2016-12-28 12:47:08 -08:00
Kubernetes Submit Queue
69ddd8eb27 Merge pull request #39247 from wojtek-t/optimize_controller_manager_memory
Automatic merge from submit-queue

Avoid unnecessary memory allocations

Low-hanging fruit in saving memory allocations. During our 5000-node kubemark runs I've seen this:

ControllerManager:
- 40.17% k8s.io/kubernetes/pkg/util/system.IsMasterNode
- 19.04% k8s.io/kubernetes/pkg/controller.(*PodControllerRefManager).Classify

Scheduler:
- 42.74% k8s.io/kubernetes/plugin/pkg/scheduler/algorithm/predicates.(*MaxPDVolumeCountChecker).filterVolumes

This PR eliminates all of those.
2016-12-28 00:02:59 -08:00
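The hot paths named above lend themselves to a classic Go fix: hoist per-call allocations (such as regexp compilation) out of functions that run once per node per sync loop. A minimal sketch of that pattern; the package layout, regex, and before/after bodies are illustrative assumptions, not the PR's actual diff:

```go
// Package system mirrors the location named in the profile above.
package system

import (
	"regexp"
	"strings"
)

// isMasterNodeSlow compiles a regexp on every call; invoked once per
// node per sync loop, this dominates allocation profiles.
func isMasterNodeSlow(nodeName string) bool {
	ok, _ := regexp.MatchString(".*-master$", nodeName)
	return ok
}

// IsMasterNode does a plain suffix comparison and allocates nothing.
func IsMasterNode(nodeName string) bool {
	return nodeName == "master" || strings.HasSuffix(nodeName, "-master")
}
```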
rkouj
e7e3c55ad7 Add unit tests for MountVolume() of operation executor 2016-12-27 16:07:06 -08:00
rkouj
d5f7610b82 Refactor operation_executor to make it unit testable 2016-12-27 15:12:16 -08:00
Wojciech Tyczynski
d1292a7397 Optimize memory allocations in controller manager 2016-12-27 16:11:11 +01:00
Kubernetes Submit Queue
48793a48d4 Merge pull request #34273 from wlan0/master
Automatic merge from submit-queue (batch tested with PRs 39093, 34273)

start breaking up controller manager into two pieces

This PR addresses: https://github.com/kubernetes/features/issues/88

This commit starts breaking the controller manager into two pieces, namely,
1. a cloudprovider-dependent piece
2. a cloudprovider-agnostic piece

the controller manager has the following control loops -
- nodeController
- volumeController
- routeController
- serviceController
- replicationController
- endpointController
- resourceQuotaController
- namespaceController
- deploymentController 
  etc.

among the above controller loops,
- nodeController
- volumeController
- routeController
- serviceController

are cloud provider dependent. As kubernetes has evolved tremendously, it has become difficult
for the different cloudproviders (currently 8) to make changes and iterate quickly. Moreover, the
cloudproviders are constrained by the kubernetes build/release lifecycle. This commit is the first
step in moving towards a kubernetes code base where cloud-provider-specific code will move out of
the core repository and be maintained by the cloud providers themselves.

I have added a new cloud provider called "external", which signals the controller-manager that
cloud provider specific loops are being run by another controller. I have added these changes in such
a way that the existing cloud providers are not affected. This change is completely backwards compatible, and does not require any changes to the way kubernetes is run today.

Finally, along with the controller-manager, the kubelet also has cloud-provider specific code, and that will be addressed in a different commit/issue.

@alena1108 @ibuildthecloud @thockin @dchen1107 

**Special notes for your reviewer**:

@thockin - I'm making this **WIP** PR to ensure that I don't stray too far from everyone's view of how we should make this change. As you can see, only one controller, namely `nodecontroller`, can be disabled with the `--cloudprovider=external` flag at the moment. I'm working on cleaning up the `rancher-controller-manager` that I wrote to test this.

Secondly, I'd like to use this PR to address cloudprovider specific code in kubelet and api-server.

**Kubelet**
Kubelet uses provider-specific code for node registration and for checking node status. I thought of two ways to divide the kubelet:
- We could start a cloud provider specific kubelet on each host as a part of kubernetes, and this cloud-specific-kubelet does node registration and node-status checks. 
- Create a kubelet plugin for each provider, which will be started by kubelet as a long running service. This plugin can be packaged as a binary.

I'm leaning towards the first option. That way, kubelet does not have to manage another process, and we can offload the process management of the cloud-provider-specific-kubelet to something like systemd. 

@dchen1107 @thockin what do you think?

**Kube-apiserver**

Kube-apiserver uses provider-specific code for distributing SSH keys to all the nodes of a cluster. Do you have any suggestions about how to address this?

**Release note**:

``` release-note
```
2016-12-23 01:25:28 -08:00
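To make the mechanism above concrete, here is a hedged sketch of how a controller loop can be gated on the new "external" provider; the flag value comes from the PR description, but the constant and function names are illustrative, not the actual controller-manager code:

```go
package main

import "fmt"

// externalCloudProvider signals that cloud-dependent loops are run
// elsewhere, by a separate cloud controller manager.
const externalCloudProvider = "external"

// startNodeController is an invented stand-in for the gating this PR adds.
func startNodeController(cloudProviderName string) {
	if cloudProviderName == externalCloudProvider {
		fmt.Println("skipping nodeController: an external controller manager owns it")
		return
	}
	fmt.Println("starting in-tree nodeController")
}

func main() {
	startNodeController("external") // cloud-dependent loop skipped
	startNodeController("aws")      // existing providers are unaffected
}
```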
Mayank Kumar
777977612b ReplicaSet has owner ref of the Deployment that created it 2016-12-22 16:45:50 -08:00
wlan0
75da310757 sanitize names and add more comments, and other essential boilerplate changes 2016-12-22 14:37:15 -08:00
wlan0
1e48fd18cb add cloud-controller-manager as the first step in breaking controller-manager 2016-12-22 14:37:15 -08:00
wlan0
731616e0b2 start breaking up controller manager into two pieces
Addresses: kubernetes/features#88

This commit starts breaking the controller manager into two pieces, namely,

1. a cloudprovider-dependent piece
2. a cloudprovider-agnostic piece

the controller manager has the following control loops -

   - nodeController
   - volumeController
   - routeController
   - serviceController
   - replicationController
   - endpointController
   - resourceQuotaController
   - namespaceController
   - deploymentController etc.

among the above controller loops,

   - nodeController
   - volumeController
   - routeController
   - serviceController

are cloud provider dependent. As kubernetes has evolved tremendously, it has become difficult
for the different cloudproviders (currently 8) to make changes and iterate quickly. Moreover, the
cloudproviders are constrained by the kubernetes build/release lifecycle. This commit is the first
step in moving towards a kubernetes code base where cloud-provider-specific code will move out of
the core repository and be maintained by the cloud providers themselves.

Finally, along with the controller-manager, the kubelet also has cloud-provider specific code, and that will
be addressed in a different commit/issue.
2016-12-22 14:37:14 -08:00
Kubernetes Submit Queue
744876d13f Merge pull request #38798 from NickrenREN/nodecontroller-status
Automatic merge from submit-queue

delete continue in monitorNodeStatus
2016-12-21 10:35:25 -08:00
Kubernetes Submit Queue
ad47a181ee Merge pull request #38986 from ncdc/fix-daemonset-controller-cache-mutation
Automatic merge from submit-queue

Fix DaemonSet cache mutation

**What this PR does / why we need it**: stops the DaemonSetController from mutating the DaemonSet shared informer cache

**Which issue this PR fixes** *(optional, in `fixes #<issue number>(, fixes #<issue_number>, ...)` format, will close that issue when PR gets merged)*: fixes #38985

cc @deads2k @mikedanese @lavalamp @smarterclayton
2016-12-21 09:09:18 -08:00
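The rule this fix enforces is that objects read from a shared informer cache are shared with every other consumer, so they must be deep-copied before any mutation. A self-contained sketch of the failure mode, using simplified stand-in types rather than the real DaemonSet API:

```go
package main

import "fmt"

// DaemonSetStatus and DaemonSet are simplified stand-ins for the API types.
type DaemonSetStatus struct{ NumberReady int32 }

type DaemonSet struct{ Status DaemonSetStatus }

func (d *DaemonSet) DeepCopy() *DaemonSet { c := *d; return &c }

// sharedCache mimics an informer store: every reader gets the same pointer.
var sharedCache = map[string]*DaemonSet{"ds1": {}}

func main() {
	ds := sharedCache["ds1"]
	// Wrong: ds.Status.NumberReady = 3 would corrupt the cached object
	// for every other reader.
	// Right: copy first, mutate the copy, send the copy to the apiserver.
	dsCopy := ds.DeepCopy()
	dsCopy.Status.NumberReady = 3
	fmt.Println(sharedCache["ds1"].Status.NumberReady, dsCopy.Status.NumberReady) // 0 3
}
```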
Kubernetes Submit Queue
f42574893b Merge pull request #39011 from wojtek-t/node_controller_listing_from_cache
Automatic merge from submit-queue

NodeController listing nodes from cache instead of cache in apiserver

This reduces load on the apiserver.
2016-12-21 03:13:09 -08:00
Kubernetes Submit Queue
237be4b2be Merge pull request #38855 from gnufied/fix-variable-shadow-exp-backoff
Automatic merge from submit-queue (batch tested with PRs 36888, 38180, 38855, 38590)

Fix variable shadowing in exponential backoff when deleting volumes

While https://github.com/kubernetes/kubernetes/pull/38339 implemented exponential backoff on
volume deletion, that PR suffers from a minor bug: when the error thrown on volume deletion is anything other than a `VolumeInUse` error, the exponential backoff will not work.

This PR fixes that. This PR also makes unit tests more deterministic because exponential backoff changed the way operations are permitted.

CC @jsafrane @childsb @wongma7
2016-12-20 20:33:56 -08:00
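The bug class fixed here is Go variable shadowing: a `:=` inside a block declares a fresh variable, so updates to the outer backoff state are silently lost. A minimal sketch with invented names, not the pv_controller code itself:

```go
package main

import "fmt"

func main() {
	delay := 1 // outer backoff state, in seconds
	for i := 0; i < 3; i++ {
		if i > 0 {
			// BUG: ':=' declares a new 'delay' scoped to this block,
			// so the outer backoff value is never actually doubled.
			delay := delay * 2
			_ = delay
		}
		fmt.Println("attempt", i, "with delay", delay) // always 1 with the bug
	}
	// FIX: use plain assignment instead: delay = delay * 2
}
```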
Hemant Kumar
7b423085fa Fix variable shadowing in exponential backoff when deleting volumes
Also fix pv_controller unit tests to behave more accurately
in light of exponential backoffs
2016-12-20 21:31:12 -05:00
Wojciech Tyczynski
1b2d9eb2e7 NodeController listing nodes from cache instead of cache in apiserver 2016-12-20 13:13:14 +01:00
Kubernetes Submit Queue
d373d1c467 Merge pull request #38917 from foxyriver/if-statement-must-be-true
Automatic merge from submit-queue (batch tested with PRs 38426, 38917, 38891, 38935)

if statement must be true

**What this PR does / why we need it**:

If len(metrics.Items) == 0, the function would have returned already, so the statement if len(metrics.Items) > 0 is redundant: it must be true.

**Special notes for your reviewer**:

**Release note**:

```release-note
```
2016-12-19 18:18:24 -08:00
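A hedged reconstruction of the pattern this PR removes, with a stand-in function in place of the real metrics code: once the early return for an empty slice has fired, a later `len(...) > 0` guard is always true.

```go
package main

import "fmt"

// average is a stand-in for the metrics code discussed above.
func average(items []float64) (float64, error) {
	if len(items) == 0 {
		return 0, fmt.Errorf("no metrics returned")
	}
	// Here len(items) > 0 is guaranteed, so wrapping the code below in
	// 'if len(items) > 0 { ... }' would be redundant.
	sum := 0.0
	for _, v := range items {
		sum += v
	}
	return sum / float64(len(items)), nil
}

func main() {
	fmt.Println(average([]float64{1, 2, 3}))
}
```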
Andy Goldstein
febc641cee Fix DaemonSet controller cache mutation
Add dsStoreSynced so we also wait on this cache when starting the
DaemonSetController.

Switch to using a fake clientset in the unit tests.

Fix TestNumberReadyStatus so it doesn't expect the cache to be mutated.
2016-12-19 16:39:23 -05:00
Kubernetes Submit Queue
40bed8e189 Merge pull request #38080 from kargakis/requeue-on-selector-updates
Automatic merge from submit-queue

controller: sync deployments once they don't overlap anymore

Fixes https://github.com/kubernetes/kubernetes/issues/34458.

@kubernetes/deployment
2016-12-19 07:31:15 -08:00
Kubernetes Submit Queue
5f82fe76a2 Merge pull request #38878 from kubernetes/revert-38780-ds-fix1
Automatic merge from submit-queue (batch tested with PRs 34353, 33837, 38878)

Revert "daemonset: bail out after we enqueue once"

I get overzealous sometimes.

Reverts kubernetes/kubernetes#38780
2016-12-19 06:43:00 -08:00
Michail Kargakis
04c6fecbc7 controller: use defaultResync for the deployment controller 2016-12-19 14:04:15 +01:00
Michail Kargakis
d19a1109e2 controller: sync deployments once they don't overlap anymore 2016-12-19 14:04:15 +01:00
foxyriver
69c76d8398 if statement must be true 2016-12-17 11:52:41 +08:00
Maciej Szulik
9f064c57ce Remove extensions/v1beta1 Job 2016-12-17 00:07:24 +01:00
Mike Danese
3a6593c9f1 Revert "daemonset: bail out after we enqueue once" 2016-12-16 10:18:06 -08:00
Robert Rati
91931c138e [scheduling] Moved node affinity from annotations to api fields. #35518 2016-12-16 11:42:43 -05:00
Kubernetes Submit Queue
5b240ca897 Merge pull request #36748 from kargakis/remove-events-from-deployment-tests
Automatic merge from submit-queue

Fix Recreate for Deployments and stop using events in e2e tests

Fixes https://github.com/kubernetes/kubernetes/issues/36453 by removing events from the deployment tests. The test about events during a Rolling deployment is redundant so I just removed it (we already have another test specifically for Rolling deployments).

Closes https://github.com/kubernetes/kubernetes/issues/32567 (preferred to use pod LISTs instead of a new status API field for replica sets that would add many more writes to replica sets).

@kubernetes/deployment
2016-12-16 03:57:02 -08:00
Kubernetes Submit Queue
7ca5f92b58 Merge pull request #38780 from mikedanese/ds-fix1
Automatic merge from submit-queue

daemonset: bail out after we enqueue once

This isn't terrible because we dedup in the queue but it's a waste of
cycles.
2016-12-15 16:15:52 -08:00
Michail Kargakis
7ef3e6f7c9 controller: wait for all pods to be deleted before Recreating 2016-12-15 19:55:18 +01:00
bprashanth
98c7fe98e1 Don't eat 403 in service controller 2016-12-15 10:27:14 -08:00
NickrenREN
fab228a4ef delete continue in monitorNodeStatus
The continue runs at the end of the for loop, so we do not need it.
2016-12-15 13:41:24 +08:00
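For illustration, a trailing `continue` like the one removed here is a no-op, since control falls through to the next iteration anyway:

```go
package main

import "fmt"

func main() {
	for i := 0; i < 3; i++ {
		fmt.Println(i)
		continue // no-op: the loop advances to the next iteration anyway
	}
}
```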
Kubernetes Submit Queue
d8efc779ed Merge pull request #38154 from caesarxuchao/rename-release_1_5
Automatic merge from submit-queue (batch tested with PRs 38154, 38502)

Rename "release_1_5" clientset to just "clientset"

We used to keep multiple releases in the main repo. Now that [client-go](https://github.com/kubernetes/client-go) does the versioning, there is no need to keep releases in the main repo. This PR renames the "release_1_5" clientset to just "clientset", clientset development will be done in this directory.

@kubernetes/sig-api-machinery @deads2k 

```release-note
The main repository does not keep multiple releases of clientsets anymore. Please find previous releases at https://github.com/kubernetes/client-go
```
2016-12-14 14:21:51 -08:00
Mike Danese
3a311a2bc2 daemonset: bail out after we enqueue once
This isn't terrible because we dedup in the queue but it's a waste of
cycles.
2016-12-14 12:59:06 -08:00
Chao Xu
6709b7ada2 run hack/update-codegen.sh
run hack/verify-gofmt.sh
update bazel
2016-12-14 12:39:49 -08:00
Chao Xu
03d8820edc rename /release_1_5 to /clientset 2016-12-14 12:39:48 -08:00
Kubernetes Submit Queue
af23f40f82 Merge pull request #37272 from brendandburns/cleanup
Automatic merge from submit-queue

Remove 'minion' from the code in two places in favor of 'node'

Part of https://github.com/kubernetes/kubernetes/issues/1111
2016-12-14 00:09:43 -08:00
Kubernetes Submit Queue
7b8ecda289 Merge pull request #38743 from caesarxuchao/remove
Automatic merge from submit-queue

Remove accidentally committed files

Accidentally committed in #37534.
2016-12-13 20:44:16 -08:00
Chao Xu
411128f294 remove wrongly committed files 2016-12-13 19:44:51 -08:00
Dan Winship
f369372dad Drop version-parsing from pkg/version
pkg/version is now just version constants, etc, not version parsing
2016-12-13 08:53:19 -05:00
Kubernetes Submit Queue
15f9572b8c Merge pull request #38613 from kargakis/do-not-adopt-when-deleted
Automatic merge from submit-queue

controller: adopt pods only when controller is not deleted

When a replica set is deleted, it will continue adopting pods, thus driving the worker that handles it into erroring out, because the adoption is [always cancelled](59c313730c/pkg/controller/controller_ref_manager.go (L110)) in the controller reference manager.
```
E1212 14:40:31.245773    7964 replica_set.go:616] cancel the adopt attempt for pod e2e-tests-deployment-2rr3m_test-rollover-deployment-1981456318-73c3m_791e16cb-c070-11e6-a234-68f72840e7df because the controlller is being deleted
E1212 14:40:31.258462    7964 replica_set.go:616] cancel the adopt attempt for pod e2e-tests-deployment-2rr3m_test-rollover-deployment-1981456318-73c3m_791e16cb-c070-11e6-a234-68f72840e7df because the controlller is being deleted
E1212 14:40:31.259131    7964 replica_set.go:616] cancel the adopt attempt for pod e2e-tests-deployment-2rr3m_test-rollover-deployment-1981456318-73c3m_791e16cb-c070-11e6-a234-68f72840e7df because the controlller is being deleted
E1212 14:40:31.259149    7964 replica_set.go:616] cancel the adopt attempt for pod e2e-tests-deployment-2rr3m_test-rollover-deployment-1981456318-wrmt8_791e3d46-c070-11e6-a234-68f72840e7df because the controlller is being deleted
I1212 14:40:31.268012    7964 deployment_controller.go:314] Error syncing deployment e2e-tests-deployment-2rr3m/test-rollover-deployment: Operation cannot be fulfilled on deployments.extensions "test-rollover-deployment": the object has been modified; please apply your changes to the latest version and try again
E1212 14:40:31.277252    7964 replica_set.go:616] cancel the adopt attempt for pod e2e-tests-deployment-2rr3m_test-rollover-deployment-1981456318-73c3m_791e16cb-c070-11e6-a234-68f72840e7df because the controlller is being deleted
E1212 14:40:31.277276    7964 replica_set.go:616] cancel the adopt attempt for pod e2e-tests-deployment-2rr3m_test-rollover-deployment-1981456318-wrmt8_791e3d46-c070-11e6-a234-68f72840e7df because the controlller is being deleted
E1212 14:40:31.277287    7964 replica_set.go:616] cancel the adopt attempt for pod e2e-tests-deployment-2rr3m_test-rollover-deployment-1981456318-bmqpn_81482114-c070-11e6-a234-68f72840e7df because the controlller is being deleted
E1212 14:40:31.289148    7964 replica_set.go:616] cancel the adopt attempt for pod e2e-tests-deployment-2rr3m_test-rollover-deployment-1981456318-b6s4x_82fa8343-c070-11e6-a234-68f72840e7df because the controlller is being deleted
E1212 14:40:31.289169    7964 replica_set.go:616] cancel the adopt attempt for pod e2e-tests-deployment-2rr3m_test-rollover-deployment-1981456318-73c3m_791e16cb-c070-11e6-a234-68f72840e7df because the controlller is being deleted
E1212 14:40:31.289176    7964 replica_set.go:616] cancel the adopt attempt for pod e2e-tests-deployment-2rr3m_test-rollover-deployment-1981456318-wrmt8_791e3d46-c070-11e6-a234-68f72840e7df because the controlller is being deleted
E1212 14:40:31.289181    7964 replica_set.go:616] cancel the adopt attempt for pod e2e-tests-deployment-2rr3m_test-rollover-deployment-1981456318-bmqpn_81482114-c070-11e6-a234-68f72840e7df because the controlller is being deleted
```

@kubernetes/deployment @caesarxuchao
2016-12-13 04:57:49 -08:00
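The guard this PR adds amounts to refusing adoption once the controller carries a deletion timestamp. A simplified, self-contained sketch (a stand-in type, not the real ReplicaSet object):

```go
package main

import (
	"fmt"
	"time"
)

// ReplicaSet is a stand-in; the real API object carries the same field.
type ReplicaSet struct{ DeletionTimestamp *time.Time }

// canAdopt refuses adoption for a controller that is being deleted.
func canAdopt(rs *ReplicaSet) error {
	if rs.DeletionTimestamp != nil {
		return fmt.Errorf("replica set is being deleted; refusing to adopt pods")
	}
	return nil
}

func main() {
	now := time.Now()
	fmt.Println(canAdopt(&ReplicaSet{}))                        // <nil>
	fmt.Println(canAdopt(&ReplicaSet{DeletionTimestamp: &now})) // refuses
}
```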
Kubernetes Submit Queue
8abbedae54 Merge pull request #38315 from mikedanese/pin-gazel
Automatic merge from submit-queue

Pin gazel to a version and support cgo

This fixes the bazel build.

@krousey who is buildcop
2016-12-12 19:32:29 -08:00
Kubernetes Submit Queue
f45e918b8b Merge pull request #35833 from apelisse/owners-pkg-controller
Automatic merge from submit-queue

Curating Owners: pkg/controller

cc @jsafrane @mikedanese @bprashanth @derekwaynecarr @thockin @saad-ali

In an effort to expand the existing pool of reviewers and establish a
two-tiered review process (first someone **lgtms** and then someone
experienced in the project **approves**), we are adding new reviewers to
existing owners files.
## If You Care About the Process:

We did this by algorithmically figuring out who’s contributed code to
the project and in what directories.  Unfortunately, that doesn’t work
perfectly: people that have made mechanical code changes (e.g. changing the
copyright header across all directories) end up as reviewers in lots of
places.

Instead of using pure commit data, we generated an excessively large
list of reviewers and pruned based on all time commit data, recent
commit data and review data (number of PRs commented on).

At this point we have a decent list of reviewers, but it needs one last
pass for fine tuning.
## TLDR:

As an owner of a sig/directory and a leader of the project, here’s what
we need from you:
1. Use PR https://github.com/kubernetes/kubernetes/pull/35715 as an example.
2. The pull request is editable; please edit the OWNERS file to add
   the names of people that should be reviewing code in the future in the **reviewers** section. You probably do NOT need to modify the **approvers** section.
3. Notify me if you want some OWNERS file to be removed.  Being an approver or reviewer
   of a parent directory makes you a reviewer/approver of the subdirectories too, so not all
   OWNERS files may be necessary.
4. Please use ALIAS if you want to use the same list of people over and
   over again (don't hesitate to ask me for help, or use the pull-request
   above as an example)
2016-12-12 18:51:33 -08:00
Prashanth B
8ff3182fd4 Update OWNERS 2016-12-12 17:55:18 -08:00
Prashanth B
0eda833c31 Update OWNERS 2016-12-12 17:54:39 -08:00
Mike Danese
c87de85347 autoupdate BUILD files 2016-12-12 13:30:07 -08:00
Kubernetes Submit Queue
5e6578a734 Merge pull request #38419 from freehan/service-status-update
Automatic merge from submit-queue

bump log level on service status update

ref: https://github.com/kubernetes/kubernetes/issues/38349

I tried to reproduce the problem in #38349 and failed. Not sure why the service status update failed and the service controller skipped the status update in the next round. What I have observed is that if the service status update fails due to a conflict, the next round of processServiceUpdate will correct it.

Bumping log level to get a better signal when it occurs.
2016-12-12 12:42:53 -08:00
Michail Kargakis
ec2c79a35e controller: adopt pods only when controller is not deleted 2016-12-12 15:12:44 +01:00
Michail Kargakis
9c7b39066e Log enqueueing replica sets for availability checks 2016-12-12 14:09:16 +01:00
Kubernetes Submit Queue
83a77fa5a1 Merge pull request #38299 from kargakis/calculate-unavailable-correctly
Automatic merge from submit-queue (batch tested with PRs 38608, 38299)

controller: set unavailableReplicas correctly when scaling down

```
deployment_controller.go:299] Error syncing deployment
e2e-tests-kubectl-2l7xx/e2e-test-nginx-deployment:
Deployment.extensions "e2e-test-nginx-deployment" is invalid:
status.unavailableReplicas: Invalid value: -1:
must be greater than or equal to 0
```

The validation error above usually occurs when a Deployment is
scaled down. In such a case we should default unavailableReplicas
to 0 instead of making an invalid API call.

@kubernetes/deployment
2016-12-12 04:18:04 -08:00
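The fix is effectively a clamp at zero. A tiny sketch of that arithmetic (an invented helper, not the deployment controller's actual code):

```go
package main

import "fmt"

// unavailable clamps unavailableReplicas so a scale-down can never
// produce a negative, and thus invalid, status value.
func unavailable(desired, available int32) int32 {
	u := desired - available // negative while scaling down
	if u < 0 {
		return 0
	}
	return u
}

func main() {
	fmt.Println(unavailable(2, 3)) // 0, not -1
}
```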
Kubernetes Submit Queue
f071c7701d Merge pull request #38595 from yarntime/fix_typo_storage
Automatic merge from submit-queue

fix typo

**What this PR does / why we need it**:
    fix typo.

**Release note**:

```NONE
```
2016-12-11 22:14:21 -08:00
yarntime@163.com
a71741929e fix typo 2016-12-12 10:32:06 +08:00
Clayton Coleman
c52d510a24 refactor: generated 2016-12-10 18:05:53 -05:00
Clayton Coleman
3c72ee2189 Change references to OwnerReference 2016-12-10 18:05:36 -05:00
Clayton Coleman
42d410fdde Switch to use pkg/apis/meta/v1/unstructured and the new interfaces
Avoid directly accessing an unstructured type if it is not required.
2016-12-10 18:05:28 -05:00
Clayton Coleman
c30862a488 Move OwnerReference to pkg/apis/meta/v1 and remove metatypes pkg
OwnerReference is common.
2016-12-10 18:05:28 -05:00
Kubernetes Submit Queue
e732ee70f4 Merge pull request #38406 from liggitt/remove-internal-json-annotations
Automatic merge from submit-queue

Remove json serialization annotations from internal types

fixes #3933

Internal types should never be serialized, and including json serialization tags on them makes it possible to accidentally do that without realizing it.

fixes in this PR:

* types
  * [x] remove json tags from internal types
  * [x] fix references from serialized types to internal ObjectMeta
* generation
  * [x] remove generated json codecs for internal types (they should never be used)
* kubectl
  * [x] fix `apply` to operate on versioned object
  * [x] fix sorting by field to operate on versioned object
  * [x] fix `--record` to build annotation patch using versioned object
* hpa
  * [x] fix unmarshaling to internal CustomMetricTargetList in validation
* thirdpartyresources
  * [x] fix encoding API responses using internal ObjectMeta
* tests
  * [x] fix tests to use versioned objects when checking encoded content
  * [x] fix tests passing internal objects to generic printers

follow ups (will open tracking issues or additional PRs):
- [ ] remove json tags from internal kubeconfig types (`kubectl config set` pathfinding needs to work against external type)
- [ ] HPA should version CustomMetricTargetList serialization in annotations
- [ ] revisit how TPR resthandlers encode objects
- [ ] audit and add tests for printer use (human-readable printer requires internal versions, generic printers require external versions)
- [ ] add static analysis tests preventing new internal types from adding tags
- [ ] add static analysis tests requiring json tags on external types (and enforcing lower-case first letter)
- [ ] add more tests for `kubectl get` exercising known and unknown types with all output options
2016-12-10 14:00:17 -08:00
Kubernetes Submit Queue
f7e3668867 Merge pull request #37611 from yarntime/fix_typo_in_pet_set
Automatic merge from submit-queue

fix typo in pet_set

fix typo in pet_set.
2016-12-09 15:38:19 -08:00
Kubernetes Submit Queue
b72c006eb3 Merge pull request #34554 from derekwaynecarr/quota-storage-class
Automatic merge from submit-queue (batch tested with PRs 37270, 38309, 37568, 34554)

Ability to quota storage by storage class

Adds the ability to quota storage by storage class.
1. `<storage-class>.storageclass.storage.k8s.io/persistentvolumeclaims` - quota the number of claims with a specific storage class
2. `<storage-class>.storageclass.storage.k8s.io/requests.storage` - quota the cumulative request for storage in a particular storage class.

For example:

```
$ cat quota.yaml
apiVersion: v1
kind: ResourceQuota
metadata:
  name: storage-quota
spec:
  hard:
    requests.storage: 100Gi
    persistentvolumeclaims: 100
    gold.storageclass.storage.k8s.io/requests.storage: 50Gi
    gold.storageclass.storage.k8s.io/persistentvolumeclaims: 5
    silver.storageclass.storage.k8s.io/requests.storage: 75Gi
    silver.storageclass.storage.k8s.io/persistentvolumeclaims: 10
    bronze.storageclass.storage.k8s.io/requests.storage: 100Gi
    bronze.storageclass.storage.k8s.io/persistentvolumeclaims: 15
$ kubectl create -f quota.yaml
$ cat pvc-bronze.yaml
kind: PersistentVolumeClaim
apiVersion: v1
metadata:
  generateName: pvc-bronze-
  annotations:
    volume.beta.kubernetes.io/storage-class: "bronze"
spec:
  accessModes:
    - ReadWriteOnce
  resources:
    requests:
      storage: 8Gi
$ kubectl create -f pvc-bronze.yaml
$ kubectl get quota storage-quota -o yaml
apiVersion: v1
kind: ResourceQuota
...
status:
  hard:
    bronze.storageclass.storage.k8s.io/persistentvolumeclaims: "15"
    bronze.storageclass.storage.k8s.io/requests.storage: 100Gi
    gold.storageclass.storage.k8s.io/persistentvolumeclaims: "5"
    gold.storageclass.storage.k8s.io/requests.storage: 50Gi
    persistentvolumeclaims: "100"
    requests.storage: 100Gi
    silver.storageclass.storage.k8s.io/persistentvolumeclaims: "10"
    silver.storageclass.storage.k8s.io/requests.storage: 75Gi
  used:
    bronze.storageclass.storage.k8s.io/persistentvolumeclaims: "1"
    bronze.storageclass.storage.k8s.io/requests.storage: 8Gi
    gold.storageclass.storage.k8s.io/persistentvolumeclaims: "0"
    gold.storageclass.storage.k8s.io/requests.storage: "0"
    persistentvolumeclaims: "1"
    requests.storage: 8Gi
    silver.storageclass.storage.k8s.io/persistentvolumeclaims: "0"
    silver.storageclass.storage.k8s.io/requests.storage: "0"
```
2016-12-09 14:11:21 -08:00
Jordan Liggitt
6676bab9c3 Fix unmarshaling into internal version of CustomMetricTargetList in validation 2016-12-09 16:26:05 -05:00
Kubernetes Submit Queue
43233caaf0 Merge pull request #37871 from Random-Liu/use-patch-in-kubelet
Automatic merge from submit-queue (batch tested with PRs 36692, 37871)

Use PatchStatus to update node status in kubelet.

Fixes https://github.com/kubernetes/kubernetes/issues/37771.

This PR changes kubelet to update node status with `PatchStatus`.

@caesarxuchao @ymqytw told me that there is a limitation in the current `CreateTwoWayMergePatch`: it doesn't support primitive-type slices that use strategic merge.
* I checked the node status, the only primitive type slices in NodeStatus are as follows, they are not using strategic merge:
  * [`ContainerImage.Names`](https://github.com/kubernetes/kubernetes/blob/master/pkg/api/v1/types.go#L2963)
  * [`VolumesInUse`](https://github.com/kubernetes/kubernetes/blob/master/pkg/api/v1/types.go#L2909)
* Volume package is already [using `CreateStrategicMergePath` to generate node status update patch](https://github.com/kubernetes/kubernetes/blob/master/pkg/controller/volume/attachdetach/statusupdater/node_status_updater.go#L111), and till now everything is fine. 

@yujuhong @dchen1107 
/cc @kubernetes/sig-node
2016-12-09 11:29:11 -08:00
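The win of PatchStatus over a full update is that only changed fields travel to the apiserver. The real code uses the Kubernetes strategic-merge-patch library (`CreateTwoWayMergePatch`); the stdlib-only toy below illustrates just the diff idea:

```go
package main

import (
	"encoding/json"
	"fmt"
	"reflect"
)

// mergeDiff returns only the fields whose values changed.
func mergeDiff(oldObj, newObj map[string]interface{}) map[string]interface{} {
	patch := map[string]interface{}{}
	for k, v := range newObj {
		if !reflect.DeepEqual(oldObj[k], v) {
			patch[k] = v
		}
	}
	return patch
}

func main() {
	oldStatus := map[string]interface{}{"ready": true, "kubeletVersion": "v1.5.1"}
	newStatus := map[string]interface{}{"ready": false, "kubeletVersion": "v1.5.1"}
	b, _ := json.Marshal(mergeDiff(oldStatus, newStatus))
	fmt.Println(string(b)) // {"ready":false} -- only the change goes on the wire
}
```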
Derek Carr
459a7a05f1 Ability to quota storage by storage class 2016-12-09 13:26:59 -05:00
Kubernetes Submit Queue
5b5b1e7533 Merge pull request #38371 from wojtek-t/get_options_in_client
Automatic merge from submit-queue (batch tested with PRs 38354, 38371)

Add GetOptions parameter to Get() calls in client library

Ref #37473 

This PR is super mechanical - the non-trivial commits are:
- Update client generator
- Register GetOptions in batch/v2alpha1 group
2016-12-09 04:12:09 -08:00
Wojciech Tyczynski
aa7da5231f Update bazel files 2016-12-09 09:42:02 +01:00
Wojciech Tyczynski
e8d1cba875 GetOptions in client calls 2016-12-09 09:42:01 +01:00
Kubernetes Submit Queue
98c4c73c71 Merge pull request #37770 from enj/enj/r/storage_decorator
Automatic merge from submit-queue (batch tested with PRs 38278, 37770)

Refactor REST storage to use generic defaults

This removes the repetition in the REST storage builders by moving the logic to `restoptions.ApplyOptions`.  `registry.StorageWithCacher`/`generic.StorageDecorator` no longer assume that they can build the `keyFunc` for arbitrary objects.  `restoptions.ApplyOptions` uses the `registry.Store`'s `KeyFunc` for its call to `generic.StorageDecorator`.

```release-note
Cluster federation servers have changed the location in etcd where federated services are stored, so existing federated services must be deleted and recreated. Before upgrading, export all federated services from the federation server and delete the services. After upgrading the cluster, recreate the federated services from the exported data.
```
2016-12-09 00:25:35 -08:00
Random-Liu
beba1ebbf8 Use PatchStatus to update node status in kubelet. 2016-12-08 17:13:59 -08:00
Minhan Xia
e082ac4a78 bump log level on service status update 2016-12-08 15:00:43 -08:00
Monis Khan
a6bafbacbf Refactor REST storage to use generic defaults
Signed-off-by: Monis Khan <mkhan@redhat.com>
2016-12-08 17:24:21 -05:00
Jordan Liggitt
6819706adf Pass addressable values to DeepCopy 2016-12-08 14:16:01 -05:00
Kubernetes Submit Queue
8f11cc78a8 Merge pull request #38339 from gnufied/backoff-on-volume-delete
Automatic merge from submit-queue (batch tested with PRs 38377, 36365, 36648, 37691, 38339)

Exponential back off when volume delete fails

**What this PR does / why we need it**:

This PR implements the ability in pv_controller to back off when deleting a volume fails in the plugin API.

**Which issue this PR fixes** *(optional, in `fixes #<issue number>(, fixes #<issue_number>, ...)` format, will close that issue when PR gets merged)*: 

Partly fixes #38295, but I think volume deletion is the most problematic thing happening in pv_controller without any sort of backoff.

After this change the attempts of volume deletion look like:

```
controller : I1208 00:18:35.532061   16388 aws_util.go:55] Error deleting EBS Disk volume aws://us-east-1d/vol-abcdefg: VolumeInUse: Volume vol-abcdefg is currently attached to i-1234567
controller : I1208 00:20:50.578325   16388 aws_util.go:55] Error deleting EBS Disk volume aws://us-east-1d/vol-abcdefg: VolumeInUse: Volume vol-abcdefg is currently attached to i-1234567
controller : I1208 00:23:05.563488   16388 aws_util.go:55] Error deleting EBS Disk volume aws://us-east-1d/vol-abcdefg: VolumeInUse: Volume vol-abcdefg is currently attached to i-1234567
controller : I1208 00:25:20.599158   16388 aws_util.go:55] Error deleting EBS Disk volume aws://us-east-1d/vol-abcdefg: VolumeInUse: Volume vol-abcdefg is currently attached to i-1234567
controller : I1208 00:27:35.560009   16388 aws_util.go:55] Error deleting EBS Disk volume aws://us-east-1d/vol-abcdefg: VolumeInUse: Volume vol-abcdefg is currently attached to i-1234567
controller : I1208 00:29:50.594967   16388 aws_util.go:55] Error deleting EBS Disk volume aws://us-east-1d/vol-abcdefg: VolumeInUse: Volume vol-abcdefg is currently attached to i-1234567
controller : I1208 00:32:05.539168   16388 aws_util.go:55] Error deleting EBS Disk volume aws://us-east-1d/vol-abcdefg: VolumeInUse: Volume vol-abcdefg is currently attached to i-1234567
controller : I1208 00:34:20.581665   16388 aws_util.go:55] Error deleting EBS Disk volume aws://us-east-1d/vol-abcdefg: VolumeInUse: Volume vol-abcdefg is currently attached to i-1234567
```
2016-12-08 10:52:03 -08:00
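A minimal sketch of the per-volume exponential backoff described above, assuming a doubling delay with a cap; the real controller tracks backoff through its workqueue rather than sleeping inline:

```go
package main

import (
	"errors"
	"fmt"
	"time"
)

// deleteVolume is a stand-in for the plugin API call that keeps failing.
func deleteVolume() error { return errors.New("VolumeInUse") }

func main() {
	delay := 2 * time.Second
	const maxDelay = 5 * time.Minute
	for attempt := 1; attempt <= 5; attempt++ {
		if err := deleteVolume(); err == nil {
			return
		}
		fmt.Printf("attempt %d failed; retrying in %v\n", attempt, delay)
		time.Sleep(delay) // the real controller requeues with a delay instead
		if delay *= 2; delay > maxDelay {
			delay = maxDelay
		}
	}
}
```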
Kubernetes Submit Queue
3519ba4099 Merge pull request #36648 from kargakis/follow-up-to-perma-failed
Automatic merge from submit-queue (batch tested with PRs 38377, 36365, 36648, 37691, 38339)

controller: sync stuck deployments in a secondary queue

@kubernetes/deployment this makes Deployments not depend on a tight resync interval in order to estimate progress.
2016-12-08 10:51:59 -08:00
Kubernetes Submit Queue
9125d03418 Merge pull request #36365 from kargakis/backoff-in-deployment-controller
Automatic merge from submit-queue (batch tested with PRs 38377, 36365, 36648, 37691, 38339)

Backoff correctly when adopting replica sets/pods

@kubernetes/deployment ptal

Fixes https://github.com/kubernetes/kubernetes/issues/34534
2016-12-08 10:51:57 -08:00
Hemant Kumar
caf867a402 Exponential back off when volume delete fails
This makes pv_controller back off exponentially
when deleting a volume fails in the Cloud API. It ensures that
we aren't making too many calls to the Cloud API.
2016-12-07 19:25:36 -05:00
Alejandro Escobar
759530536f typo found in controller comment. 2016-12-07 10:55:02 -08:00
Michail Kargakis
c82cae85f6 controller: set unavailableReplicas correctly when scaling down
deployment_controller.go:299] Error syncing deployment
e2e-tests-kubectl-2l7xx/e2e-test-nginx-deployment:
Deployment.extensions "e2e-test-nginx-deployment" is invalid:
status.unavailableReplicas: Invalid value: -1:
must be greater than or equal to 0

The validation error above occurs usually when a Deployment is
scaled down. In such a case we should default unavailableReplicas
to 0 instead of making an invalid api call.
2016-12-07 17:34:09 +01:00
Michail Kargakis
b3765c4df9 Backoff correctly when adopting replica sets/pods 2016-12-07 16:13:18 +01:00
Kubernetes Submit Queue
d6b9a7aa60 Merge pull request #37693 from wojtek-t/pipe_get_options_to_storage
Automatic merge from submit-queue (batch tested with PRs 37693, 38085)

Pipe get options to storage

Ref #37473
2016-12-07 00:52:26 -08:00
Kubernetes Submit Queue
f527f44019 Merge pull request #37032 from gnufied/attach-detach-test
Automatic merge from submit-queue

Add integration tests for desired state of world populator

Add integration tests for desired state of world populator

This adds tests for the code introduced here:
https://github.com/kubernetes/kubernetes/issues/26994

Via integration tests we can now verify that if a pod delete
event is somehow missed by the AttachDetach controller, it still
gets cleaned up by the Desired State of World populator.
2016-12-06 18:23:59 -08:00
Michail Kargakis
a8a7ca28f0 controller: sync stuck deployments in a secondary queue 2016-12-06 18:08:35 +01:00
Hemant Kumar
fcf5d79be7 Add integration tests for desired state of world populator
This adds tests for the code introduced here:
https://github.com/kubernetes/kubernetes/issues/26994

Via integration tests we can now verify that if a pod delete
event is somehow missed by the AttachDetach controller, it still
gets cleaned up by the Desired State of World populator.
2016-12-06 06:52:52 -05:00
Wojciech Tyczynski
c8711f29a5 Update autogenerated files 2016-12-06 12:25:57 +01:00
gmarek
15f2dbe13c gcOrphaned checks via the API that the node doesn’t exist 2016-12-06 12:17:38 +01:00
Wojciech Tyczynski
3432fea8b2 Pipe GetOptions to storage 2016-12-06 11:48:37 +01:00
Kubernetes Submit Queue
cffaf1b71b Merge pull request #31321 from anguslees/lb-nodes
Automatic merge from submit-queue (batch tested with PRs 37328, 38102, 37261, 31321, 38146)

Pass full Node objects to provider LoadBalancer methods
2016-12-05 20:16:53 -08:00
Kubernetes Submit Queue
f587c7a49f Merge pull request #38076 from kargakis/requeue-replica-sets-for-availability-checks
Automatic merge from submit-queue (batch tested with PRs 38076, 38137, 36882, 37634, 37558)

Requeue replica sets for availability checks

See https://github.com/kubernetes/kubernetes/issues/35355#issuecomment-264746268 for more details

@kubernetes/deployment @smarterclayton
2016-12-05 19:25:49 -08:00
deads2k
5788317953 demonstrate separation of controller intializers 2016-12-05 10:24:45 -05:00
Kubernetes Submit Queue
45a436ac24 Merge pull request #36909 from sttts/sttts-discovery-with-verbs
Automatic merge from submit-queue (batch tested with PRs 37370, 37003, 36909)

Add verbs to APIResourceInfo for discovery

Verbs will be used by generic controllers (gc, namespace) to avoid unnecessary API calls, reducing load on the apiserver. E.g. not all objects can be deleted.

Example:
```json
{
  "kind": "APIResourceList",
  "apiVersion": "v1",
  "groupVersion": "batch/v1",
  "resources": [
    {
      "name": "jobs",
      "namespaced": true,
      "kind": "Job",
      "verbs": [
        "create",
        "delete",
        "deletecollection",
        "get",
        "list",
        "update",
        "watch"
      ]
    },
    {
      "name": "jobs/status",
      "namespaced": true,
      "kind": "Job",
      "verbs": [
        "create",
        "get"
      ]
    }
  ]
}
```
2016-12-05 06:48:41 -08:00
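A sketch of how a generic controller might consume the new verbs field, skipping resources that cannot be deleted; the types below are simplified stand-ins for the discovery API shapes shown above:

```go
package main

import "fmt"

// APIResource is a simplified stand-in for the discovery type.
type APIResource struct {
	Name  string
	Verbs []string
}

// deletable returns only the resources that advertise the "delete" verb,
// sparing the gc and namespace controllers pointless API calls.
func deletable(resources []APIResource) []string {
	var out []string
	for _, r := range resources {
		for _, v := range r.Verbs {
			if v == "delete" {
				out = append(out, r.Name)
				break
			}
		}
	}
	return out
}

func main() {
	rs := []APIResource{
		{Name: "jobs", Verbs: []string{"create", "delete", "get", "list"}},
		{Name: "jobs/status", Verbs: []string{"create", "get"}},
	}
	fmt.Println(deletable(rs)) // [jobs]: the status subresource is skipped
}
```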
Kubernetes Submit Queue
4a91faa1b6 Merge pull request #37370 from gmarek/test-refactor
Automatic merge from submit-queue

Make NodeController test utils usable from outside

Required to fix tests for #37365 and useful in general.
2016-12-05 06:38:32 -08:00
Dr. Stefan Schimanski
2dff13f332 Update generated files 2016-12-05 12:42:31 +01:00
Dr. Stefan Schimanski
24e24fc7bb Add verb support to gc and namespace controllers 2016-12-05 12:36:05 +01:00
Dr. Stefan Schimanski
458d2b2fe4 Add verb support for discovery client 2016-12-05 12:36:05 +01:00
Dr. Stefan Schimanski
4d1d98c49a Remove namespace controller pod precedence 2016-12-05 12:36:05 +01:00
gmarek
94f091ad03 Make NodeController test utils usable from outside 2016-12-05 10:56:06 +01:00
gmarek
770e1c289a Change 'controller.go' filenames to more meaningful ones 2016-12-05 09:12:22 +01:00
yarntime@163.com
148170da5d fix typo 2016-12-05 11:58:21 +08:00
Michail Kargakis
267dae6435 controller: requeue replica sets for availability checks 2016-12-05 02:41:15 +01:00
Clayton Coleman
3454a8d52c refactor: update bazel, codec, and gofmt 2016-12-03 19:10:53 -05:00
Clayton Coleman
5df8cc39c9 refactor: generated 2016-12-03 19:10:46 -05:00
Kubernetes Submit Queue
6fd00e9f56 Merge pull request #37678 from tsmetana/issue_37377
Automatic merge from submit-queue (batch tested with PRs 37608, 37103, 37320, 37607, 37678)

Fix issue #37377: Report an event on successful PVC provisioning

This is a simple patch to fix issue #37377: on a successful PVC provisioning an event is emitted so it's clear the provisioning actually succeeded.

cc: @jsafrane
2016-12-02 23:32:50 -08:00
Kubernetes Submit Queue
6b05a519a3 Merge pull request #37169 from smarterclayton/approver
Automatic merge from submit-queue (batch tested with PRs 37945, 37498, 37391, 37209, 37169)

Refactor certificate controller to make approval an interface

@mikedanese
2016-12-02 20:32:49 -08:00
Kubernetes Submit Queue
fb7e9d901d Merge pull request #37939 from yarntime/fix_typo_in_node_status_updater
Automatic merge from submit-queue (batch tested with PRs 37997, 37939, 37990, 36700, 37258)

fix typo in node_status_updater

fix typo.
2016-12-02 19:26:47 -08:00
Kubernetes Submit Queue
c552f8918b Merge pull request #37727 from rkouj/bug-fix-upgrade-test
Automatic merge from submit-queue

SetNodeUpdateStatusNeeded whenever nodeAdd event is received

**What this PR does / why we need it**:
Bug fix: SetNodeStatusUpdateNeeded is called for a node whenever its API object is added. This is to ensure that we don't lose the list of attached volumes in the node when its API object is deleted and recreated.

fixes https://github.com/kubernetes/kubernetes/issues/37586 and https://github.com/kubernetes/kubernetes/issues/37585


**Special notes for your reviewer**:


2016-12-02 05:44:57 -08:00
yarntime@163.com
df6e9db9d9 fix typo 2016-12-02 17:33:45 +08:00
Kubernetes Submit Queue
6abb472357 Merge pull request #37720 from freehan/lb-src-update
Automatic merge from submit-queue

Fix Service Update on LoadBalancerSourceRanges Field

Fixes: https://github.com/kubernetes/kubernetes/issues/33033
Also expands: https://github.com/kubernetes/kubernetes/pull/32748
2016-12-01 18:21:39 -08:00
Kubernetes Submit Queue
67a12c9e1f Merge pull request #35272 from yarntime/refactor_reconcileAutoscaler
Automatic merge from submit-queue

rescale immediately if the basic constraints are not satisfied

Refactor reconcileAutoscaler:
if the basic constraints are not satisfied, we should rescale the target ref immediately.
2016-12-01 15:06:58 -08:00
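A hedged sketch of the "basic constraints" short-circuit described above: if the current replica count is already outside [min, max], the target can be rescaled immediately, before any metrics-driven computation (names invented):

```go
package main

import "fmt"

// clampReplicas returns the corrected replica count and whether an
// immediate rescale is needed, before any metrics are consulted.
func clampReplicas(current, minReplicas, maxReplicas int32) (int32, bool) {
	switch {
	case current > maxReplicas:
		return maxReplicas, true
	case current < minReplicas:
		return minReplicas, true
	}
	return current, false
}

func main() {
	if desired, rescale := clampReplicas(12, 1, 10); rescale {
		fmt.Println("rescaling immediately to", desired)
	}
}
```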
Kubernetes Submit Queue
ca7848a787 Merge pull request #37714 from deads2k/auth-08-client-fallout
Automatic merge from submit-queue

fix rbac informer. its listers are all internal

Fixes https://github.com/kubernetes/kubernetes/issues/37615

The rbac informer still uses internal types in its listers, which means it must use internal clients for evaluation. Since it's running inside the API server, this seems ok for now and we can/should fix it when generated informers come along. This just patches us to keep RBAC working.

@kubernetes/sig-auth @sttts @liggitt this is broken in master, let's get it sorted quickly.
2016-12-01 08:45:55 -08:00
Clayton Coleman
bdd880a1b4 Refactor certificate controller to make approval an interface 2016-12-01 09:55:28 -05:00
Kubernetes Submit Queue
2c0e59b974 Merge pull request #37613 from wojtek-t/limitranger_index
Automatic merge from submit-queue

Add namespace index for limit ranger

Without this PR I'm seeing a huge number of lines like this:
```
Index with name namespace does not exist
```

Those are coming from LimitRanger admission controller - this PR fixes those.
2016-12-01 04:52:04 -08:00
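An index keyed by namespace turns the admission-time lookup from a full cache scan into a map access; client-go exposes this via `cache.Indexers` with a namespace index function. A stdlib-only sketch of the idea:

```go
package main

import "fmt"

// LimitRange is a simplified stand-in for the API type.
type LimitRange struct{ Namespace, Name string }

func main() {
	// Without the index: every admission check scans all LimitRanges.
	all := []LimitRange{{"dev", "lr1"}, {"prod", "lr2"}, {"dev", "lr3"}}

	// With the index: build once, then look up per namespace in O(1).
	byNamespace := map[string][]LimitRange{}
	for _, lr := range all {
		byNamespace[lr.Namespace] = append(byNamespace[lr.Namespace], lr)
	}
	fmt.Println(byNamespace["dev"]) // [{dev lr1} {dev lr3}]
}
```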
Kubernetes Submit Queue
256a99d220 Merge pull request #36432 from kargakis/controller-fixes
Automatic merge from submit-queue

Update deployment status only when there is a new scaling update during a rollout

@kubernetes/deployment
2016-12-01 00:39:09 -08:00
rkouj
638ef1b977 SetNodeUpdateStatusNeeded whenever nodeAdd event is received 2016-11-30 21:12:34 -08:00
Kubernetes Submit Queue
66fe55f5ad Merge pull request #37238 from deads2k/controller-02-minor-fixes
Automatic merge from submit-queue

controller manager refactors

The controller manager needs some significant cleanup. This starts us down the path by respecting parameters like `stopCh`, simplifying discovery checks, removing unnecessary parameters, preventing unnecessary fatals, and using our client builder.

@sttts @ncdc
2016-11-30 20:08:19 -08:00
Kubernetes Submit Queue
737edd02a4 Merge pull request #35258 from feiskyer/package-aliase
Automatic merge from submit-queue

Fix package aliases to follow golang convention

Some package aliases do not align with the golang convention (https://blog.golang.org/package-names). This PR fixes them. It also adds a verify script and presubmit checks.

Fixes #35070.

cc/ @timstclair @Random-Liu
2016-11-30 16:39:46 -08:00
Minhan Xia
1c2c0c1f63 support service loadBalancerSourceRange update 2016-11-30 15:27:34 -08:00
Angus Lees
83e7a85ecc provider: Pass full node objects to *LoadBalancer
Many providers need to do some sort of node name -> IP or instanceID
lookup before they can use the list of hostnames passed to
EnsureLoadBalancer/UpdateLoadBalancer.

This change just passes the full Node object instead of simply the node
name, allowing providers to use the node's provider ID and cached
addresses without additional lookups.  Using `node.Name` reproduces the
old behaviour.
2016-12-01 09:53:53 +11:00
deads2k
672eb99201 fix rbac informer. its listers are all internal 2016-11-30 15:24:06 -05:00
Kubernetes Submit Queue
e0dd422c14 Merge pull request #37623 from yarntime/fix_typo_in_deployment
Automatic merge from submit-queue

fix typo in deployment

fix typo in deployment.
2016-11-30 08:03:37 -08:00
Kubernetes Submit Queue
b01e6f68fe Merge pull request #37431 from liggitt/namespace-leftovers
Automatic merge from submit-queue

hold namespaces briefly before processing deletion

possible fix for #36891

in HA scenarios (either HA apiserver or HA etcd), it is possible for deletion of resources from namespace cleanup to race with creation of objects in the terminating namespace

HA master timeline:
1. "delete namespace n" API call goes to apiserver 1, deletion timestamp is set in etcd
2. namespace controller observes namespace deletion, starts cleaning up resources, lists deployments
3. "create deployment d" API call goes to apiserver 2, gets persisted to etcd
4. apiserver 2 observes namespace deletion, stops allowing new objects to be created
5. namespace controller finishes deleting listed deployments, deletes namespace

HA etcd timeline:
1. "create deployment d" API call goes to apiserver, gets persisted to etcd
2. "delete namespace n" API call goes to apiserver, deletion timestamp is set in etcd
3. namespace controller observes namespace deletion, starts cleaning up resources, lists deployments
4. list call goes to non-leader etcd member that hasn't observed the new deployment or the deleted namespace yet
5. namespace controller finishes deleting the listed deployments, deletes namespace

In both cases, simply waiting to clean up the namespace (either for etcd members to observe objects created at the last second in the namespace, or for other apiservers to observe the namespace move to terminating phase and disallow additional creations) resolves the issue

Possible other fixes:
* do a second sweep of objects before deleting the namespace
* have the namespace controller check for and clean up objects in namespaces that no longer exist
* ...?
2016-11-30 04:44:31 -08:00
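A sketch of the "hold briefly" behavior: requeue the deleted namespace after a quiet period instead of cleaning it up immediately, so lagging etcd members and other apiservers can observe the terminating namespace first. The real controller would use a delaying workqueue's `AddAfter`; the timer and the 5-second period below are illustrative assumptions:

```go
package main

import (
	"fmt"
	"time"
)

// quietPeriod is an illustrative value, not the one chosen in the PR.
const quietPeriod = 5 * time.Second

// enqueueNamespaceDeletion delays cleanup of a terminating namespace.
func enqueueNamespaceDeletion(name string, queue chan<- string) {
	time.AfterFunc(quietPeriod, func() { queue <- name })
}

func main() {
	queue := make(chan string, 1)
	enqueueNamespaceDeletion("demo", queue)
	fmt.Println("cleaning up namespace", <-queue)
}
```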
Tomas Smetana
a02ee64d00 Fix issue #37377: Report an event on successful PVC provisioning
cc: @jsafrane
2016-11-30 10:27:22 +01:00
Pengfei Ni
f584ed4398 Fix package aliases to follow golang convention 2016-11-30 15:40:50 +08:00
yarntime@163.com
1e4c0f33a8 fix typo 2016-11-29 18:20:09 +08:00
Wojciech Tyczynski
8780736acf Add namespace index for limit ranger 2016-11-29 09:35:21 +01:00
deads2k
585daa2069 use the client builder to support using SAs 2016-11-28 15:02:22 -05:00
deads2k
49ebc2c2ae remove unnecessary startcontroller options 2016-11-28 15:02:21 -05:00
deads2k
d973158a4e make controller manager use specified stop channel 2016-11-28 15:02:21 -05:00
Jordan Liggitt
79ec8ae654 hold namespaces briefly before processing deletion 2016-11-28 11:35:09 -05:00
Michail Kargakis
d87aca66b1 Update deployment status only when there is a new scaling update during a rollout 2016-11-28 13:49:43 +01:00
Tim Hockin
c6c66f02f9 Remove vowels from rand.String, to avoid 'bad words' 2016-11-23 21:53:25 -08:00
Clayton Coleman
35a6bfbcee generated: refactor 2016-11-23 22:30:47 -06:00
Chao Xu
bcc783c594 run hack/update-all.sh 2016-11-23 15:53:09 -08:00
Chao Xu
b50367cbdc remove v1.Semantics 2016-11-23 15:53:09 -08:00
Chao Xu
96cd71d8f6 kubectl 2016-11-23 15:53:09 -08:00
Chao Xu
7eeb71f698 cmd/kube-controller-manager 2016-11-23 15:53:09 -08:00
ymqytw
3cc294b1e0 Revert "support patch list of primitives"
This reverts commit 34891ad9f6.
2016-11-22 21:06:36 -08:00
ymqytw
d248843b65 Revert "try old patch after new patch fails"
This reverts commit f32696e734.
2016-11-22 21:02:30 -08:00
Kubernetes Submit Queue
1f82f2491a Merge pull request #37206 from gmarek/nodecontroller
Automatic merge from submit-queue

Add more logging around Pod deletion

After this PR we'll have at least V(2) level log near all Pod deletions.

@saad-ali - this is required by GKE to help with diagnosing possible problem.

cc @dchen1107 @wojtek-t
2016-11-22 01:42:14 -08:00
Brendan Burns
e68fe4d62e Remove 'minion' from the code in two places in favor of 'node' 2016-11-21 22:48:06 -08:00
gmarek
795961f7e7 Add more logging around Pod deletion 2016-11-21 11:20:48 +01:00
yarntime@163.com
1ef7fd36fb rescale immediately if the basic constraints are not satisfied 2016-11-21 17:32:00 +08:00
Brendan Burns
ef6529bf2f make groupVersionResource listing dynamic when third party resources are
enabled.
2016-11-20 20:48:57 -08:00
Kubernetes Submit Queue
0042ce5684 Merge pull request #36892 from gmarek/nodecontroller
Automatic merge from submit-queue

Add logs near force deletions of Pods

We should always log something when the control plane force-deletes a Pod.

@davidopp I think that logging force deletions is enough, or do you think we should log soft deletions as well?

cc @deads2k
2016-11-20 16:00:10 -08:00
Kubernetes Submit Queue
b9d2d74a94 Merge pull request #37038 from ymqytw/retry_old_patch_after_new_patch_fail
Automatic merge from submit-queue

Fix kubectl Strategic Merge Patch compatibility

As @smarterclayton pointed out in [comment1](https://github.com/kubernetes/kubernetes/pull/35647#pullrequestreview-8290820) and [comment2](https://github.com/kubernetes/kubernetes/pull/35647#pullrequestreview-8290847) in PR #35647,
we cannot assume the API servers publish their version, or that they all share the same version.

This PR removes all the calls to GetServerSupportedSMPatchVersion().
It changes the behavior of `apply` and `edit` to
retry with the old patch version if the new version fails,
and defaults other usages of SMPatch to the new version, since they don't update lists of primitives.

fixes #36916

cc: @pwittrock @smarterclayton
2016-11-19 01:02:47 -08:00
ymqytw
f32696e734 try old patch after new patch fails 2016-11-17 14:28:09 -08:00
Jing Xu
3d3e44e77e fix issue in converting aws volume id from mount paths
This PR is to fix the issue in converting aws volume id from mount
paths. Currently there are three aws volume id formats supported. The
following lists example of those three formats and their corresponding
global mount paths:
1. aws:///vol-123456
(/var/lib/kubelet/plugins/kubernetes.io/aws-ebs/mounts/aws/vol-123456)
2. aws://us-east-1/vol-123456
(/var/lib/kubelet/plugins/kubernetes.io/mounts/aws/us-east-1/vol-123456)
3. vol-123456
(/var/lib/kubelet/plugins/kubernetes.io/mounts/aws/us-east-1/vol-123456)

For the first two cases, we need to check the mount path and convert
them back to the original format.
2016-11-16 22:35:48 -08:00
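A hedged sketch of the reverse mapping the commit describes, recovering a volume ID from the global mount path formats listed above (the real plugin's parsing differs in detail):

```go
package main

import (
	"fmt"
	"strings"
)

// volumeIDFromMountPath maps ".../mounts/aws/<zone>/vol-..." or
// ".../mounts/aws/vol-..." back to an aws:// volume ID; a bare
// "vol-..." string passes through unchanged.
func volumeIDFromMountPath(path string) string {
	parts := strings.Split(path, "/mounts/aws/")
	if len(parts) != 2 {
		return path // format 3: already a bare vol-... ID
	}
	suffix := parts[1]
	if strings.HasPrefix(suffix, "vol-") {
		return "aws:///" + suffix // format 1: zone-less
	}
	return "aws://" + suffix // format 2: "<zone>/vol-..."
}

func main() {
	fmt.Println(volumeIDFromMountPath("/var/lib/kubelet/plugins/kubernetes.io/aws-ebs/mounts/aws/vol-123456"))
	fmt.Println(volumeIDFromMountPath("/var/lib/kubelet/plugins/kubernetes.io/aws-ebs/mounts/aws/us-east-1/vol-123456"))
}
```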
gmarek
78bc6c2ecd Add logs near force deletions of Pods 2016-11-16 15:00:01 +01:00
Dlugolecki, Jakub
d1896a695f Change ScheduledJob POD name suffix from hash to Unix Epoch 2016-11-15 17:25:32 +01:00
Kubernetes Submit Queue
b85acd957a Merge pull request #36579 from kargakis/restore-events-for-tests
Automatic merge from submit-queue

Restore event messages for replica sets in the deployment controller

Needed to unblock release upgrade tests (see https://github.com/kubernetes/kubernetes/issues/36453)

@kubernetes/deployment ptal
2016-11-11 15:50:31 -08:00
Kubernetes Submit Queue
3e169be887 Merge pull request #35647 from ymqytw/patch_primitive_list
Automatic merge from submit-queue

Fix strategic patch for list of primitive type with merge semantics

Fix strategic patch for list of primitive type when the patch strategy is `merge`.
Before: we cannot replace or delete an item in a list of primitives, e.g. strings, when the patch strategy is `merge`; it will always append new items to the list.
This patch will generate a map to update the list of primitive type.
The server with this patch will accept either a new patch or an old patch.
The client will find out the API server version before generating the patch.

Fixes #35163, #32398

cc: @pwittrock @fabianofranz 

``` release-note
Fix strategic patch for list of primitive type when patch strategy is `merge` to remove deleted objects.
```
2016-11-11 14:36:44 -08:00
Kubernetes Submit Queue
868f6e1c5c Merge pull request #36585 from jszczepkowski/hpa-unittest2
Automatic merge from submit-queue

More unit tests for HPA.
2016-11-11 09:17:06 -08:00