Commit Graph

1928 Commits

Author SHA1 Message Date
Kubernetes Submit Queue
66fe55f5ad Merge pull request #37238 from deads2k/controller-02-minor-fixes
Automatic merge from submit-queue

controller manager refactors

The controller manager needs some significant cleanup.  This starts us down the path by respecting parameters like `stopCh`, simplifying discovery checks, removing unnecessary parameters, preventing unnecessary fatals, and using our client builder.

@sttts @ncdc
2016-11-30 20:08:19 -08:00
Kubernetes Submit Queue
737edd02a4 Merge pull request #35258 from feiskyer/package-aliase
Automatic merge from submit-queue

Fix package aliases to follow golang convention

Some package aliases do not align with the golang convention https://blog.golang.org/package-names. This PR fixes them. It also adds a verify script and presubmit checks.

Fixes #35070.

cc/ @timstclair @Random-Liu
2016-11-30 16:39:46 -08:00
Minhan Xia
1c2c0c1f63 support service loadBalancerSourceRange update 2016-11-30 15:27:34 -08:00
Angus Lees
83e7a85ecc provider: Pass full node objects to *LoadBalancer
Many providers need to do some sort of node name -> IP or instanceID
lookup before they can use the list of hostnames passed to
EnsureLoadBalancer/UpdateLoadBalancer.

This change just passes the full Node object instead of simply the node
name, allowing providers to use the node's provider ID and cached
addresses without additional lookups.  Using `node.Name` reproduces the
old behaviour.
2016-12-01 09:53:53 +11:00
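
A rough illustration of what passing full node objects enables; the `Node` struct and `ensureLoadBalancer` function below are simplified stand-ins, not the real cloudprovider interface:

```go
package main

import "fmt"

// Node is a stand-in for the full node object now passed to the provider.
type Node struct {
	Name       string
	ProviderID string
	Addresses  []string
}

// ensureLoadBalancer sketches a provider that no longer needs a
// name -> instanceID lookup: it reads the provider ID straight from the node,
// falling back to the node name to reproduce the old hostname-based behaviour.
func ensureLoadBalancer(nodes []*Node) {
	for _, n := range nodes {
		target := n.ProviderID
		if target == "" {
			target = n.Name // old behaviour: use the hostname
		}
		fmt.Printf("registering backend %s (addresses: %v)\n", target, n.Addresses)
	}
}

func main() {
	ensureLoadBalancer([]*Node{
		{Name: "node-a", ProviderID: "aws:///us-east-1d/i-0123", Addresses: []string{"10.0.0.1"}},
		{Name: "node-b"}, // no provider ID yet: falls back to the name
	})
}
```
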
deads2k
672eb99201 fix rbac informer. its listers are all internal 2016-11-30 15:24:06 -05:00
Kubernetes Submit Queue
e0dd422c14 Merge pull request #37623 from yarntime/fix_typo_in_deployment
Automatic merge from submit-queue

fix typo in deployment

fix typo in deployment.
2016-11-30 08:03:37 -08:00
Kubernetes Submit Queue
b01e6f68fe Merge pull request #37431 from liggitt/namespace-leftovers
Automatic merge from submit-queue

hold namespaces briefly before processing deletion

possible fix for #36891

in HA scenarios (either HA apiserver or HA etcd), it is possible for deletion of resources from namespace cleanup to race with creation of objects in the terminating namespace

HA master timeline:
1. "delete namespace n" API call goes to apiserver 1, deletion timestamp is set in etcd
2. namespace controller observes namespace deletion, starts cleaning up resources, lists deployments
3. "create deployment d" API call goes to apiserver 2, gets persisted to etcd
4. apiserver 2 observes namespace deletion, stops allowing new objects to be created
5. namespace controller finishes deleting listed deployments, deletes namespace

HA etcd timeline:
1. "create deployment d" API call goes to apiserver, gets persisted to etcd
2. "delete namespace n" API call goes to apiserver, deletion timestamp is set in etcd
3. namespace controller observes namespace deletion, starts cleaning up resources, lists deployments
4. list call goes to non-leader etcd member that hasn't observed the new deployment or the deleted namespace yet
5. namespace controller finishes deleting the listed deployments, deletes namespace

In both cases, simply waiting to clean up the namespace (either for etcd members to observe objects created at the last second in the namespace, or for other apiservers to observe the namespace move to terminating phase and disallow additional creations) resolves the issue

Possible other fixes:
* do a second sweep of objects before deleting the namespace
* have the namespace controller check for and clean up objects in namespaces that no longer exist
* ...?
2016-11-30 04:44:31 -08:00
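
A minimal sketch of the "hold briefly before cleanup" idea, assuming a simple channel-based queue rather than the controller's real workqueue; all names are illustrative:

```go
package main

import (
	"fmt"
	"time"
)

// namespaceEvent is a stand-in for an observed namespace deletion.
type namespaceEvent struct {
	name      string
	deletedAt time.Time
}

// holdBeforeCleanup re-queues a namespace until a grace period has elapsed
// since its deletion was observed, giving other API servers and etcd members
// time to observe the terminating state and any late-created objects.
func holdBeforeCleanup(queue chan namespaceEvent, grace time.Duration, cleanup func(string)) {
	for ev := range queue {
		ev := ev
		if wait := grace - time.Since(ev.deletedAt); wait > 0 {
			// Not old enough yet: put it back and try again later.
			time.AfterFunc(wait, func() { queue <- ev })
			continue
		}
		cleanup(ev.name)
	}
}

func main() {
	queue := make(chan namespaceEvent, 10)
	queue <- namespaceEvent{name: "demo", deletedAt: time.Now()}
	go holdBeforeCleanup(queue, 100*time.Millisecond, func(ns string) {
		fmt.Println("cleaning up namespace", ns)
	})
	time.Sleep(300 * time.Millisecond)
}
```
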
Tomas Smetana
a02ee64d00 Fix issue #37377: Report an event on successful PVC provisioning
cc: @jsafrane
2016-11-30 10:27:22 +01:00
Pengfei Ni
f584ed4398 Fix package aliases to follow golang convention 2016-11-30 15:40:50 +08:00
yarntime@163.com
1e4c0f33a8 fix typo 2016-11-29 18:20:09 +08:00
Wojciech Tyczynski
8780736acf Add namespace index for limit ranger 2016-11-29 09:35:21 +01:00
deads2k
585daa2069 use the client builder to support using SAs 2016-11-28 15:02:22 -05:00
deads2k
49ebc2c2ae remove unnecessary startcontroller options 2016-11-28 15:02:21 -05:00
deads2k
d973158a4e make controller manager use specified stop channel 2016-11-28 15:02:21 -05:00
Jordan Liggitt
79ec8ae654
hold namespaces briefly before processing deletion 2016-11-28 11:35:09 -05:00
Michail Kargakis
d87aca66b1 Update deployment status only when there is a new scaling update during a rollout 2016-11-28 13:49:43 +01:00
Tim Hockin
c6c66f02f9 Remove vowels from rand.String, to avoid 'bad words' 2016-11-23 21:53:25 -08:00
Clayton Coleman
35a6bfbcee
generated: refactor 2016-11-23 22:30:47 -06:00
Chao Xu
bcc783c594 run hack/update-all.sh 2016-11-23 15:53:09 -08:00
Chao Xu
b50367cbdc remove v1.Semantics 2016-11-23 15:53:09 -08:00
Chao Xu
96cd71d8f6 kubectl 2016-11-23 15:53:09 -08:00
Chao Xu
7eeb71f698 cmd/kube-controller-manager 2016-11-23 15:53:09 -08:00
ymqytw
3cc294b1e0 Revert "support patch list of primitives"
This reverts commit 34891ad9f6.
2016-11-22 21:06:36 -08:00
ymqytw
d248843b65 Revert "try old patch after new patch fails"
This reverts commit f32696e734.
2016-11-22 21:02:30 -08:00
Kubernetes Submit Queue
1f82f2491a Merge pull request #37206 from gmarek/nodecontroller
Automatic merge from submit-queue

Add more logging around Pod deletion

After this PR we'll have at least V(2)-level logging near all Pod deletions.

@saad-ali - this is required by GKE to help with diagnosing possible problems.

cc @dchen1107 @wojtek-t
2016-11-22 01:42:14 -08:00
Brendan Burns
e68fe4d62e Remove 'minion' from the code in two places in favor of 'node' 2016-11-21 22:48:06 -08:00
gmarek
795961f7e7 Add more logging around Pod deletion 2016-11-21 11:20:48 +01:00
yarntime@163.com
1ef7fd36fb rescale immediately if the basic constraints are not satisfied 2016-11-21 17:32:00 +08:00
Brendan Burns
ef6529bf2f make groupVersionResource listing dynamic when third party resources are
enabled.
2016-11-20 20:48:57 -08:00
Kubernetes Submit Queue
0042ce5684 Merge pull request #36892 from gmarek/nodecontroller
Automatic merge from submit-queue

Add logs near force deletions of Pods

We should always log something when control plane force deletes the Pod.

@davidopp I think that logging force deletions is enough, or do you think we should log soft deletions as well?

cc @deads2k
2016-11-20 16:00:10 -08:00
Kubernetes Submit Queue
b9d2d74a94 Merge pull request #37038 from ymqytw/retry_old_patch_after_new_patch_fail
Automatic merge from submit-queue

Fix kubectl Strategic Merge Patch compatibility

As @smarterclayton pointed out in [comment1](https://github.com/kubernetes/kubernetes/pull/35647#pullrequestreview-8290820) and [comment2](https://github.com/kubernetes/kubernetes/pull/35647#pullrequestreview-8290847) in PR #35647,
we cannot assume the API servers publish their version and that they share the same version.

This PR removes all the calls of GetServerSupportedSMPatchVersion().
Change the behavior of `apply` and `edit` to:
Retry with the old patch version if the new version fails.
Default other usages of SMPatch to the new version, since they don't update lists of primitives.

fixes #36916

cc: @pwittrock @smarterclayton
2016-11-19 01:02:47 -08:00
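
A hedged sketch of the retry behaviour described above (new patch format first, old format on failure); `patchFn` and the server stub are hypothetical, not kubectl's actual client code:

```go
package main

import (
	"errors"
	"fmt"
)

// patchFn is a stand-in for sending one strategic-merge-patch format to the server.
type patchFn func(body string) error

// patchWithFallback tries the new patch format first and, if the server
// rejects it, retries with the old format, mirroring the behaviour described
// for `kubectl apply` and `kubectl edit`.
func patchWithFallback(newPatch, oldPatch string, send patchFn) error {
	if err := send(newPatch); err == nil {
		return nil
	} else {
		fmt.Println("new patch format rejected, retrying with old format:", err)
	}
	return send(oldPatch)
}

func main() {
	// An old server that only understands the old patch format.
	oldServer := func(body string) error {
		if body == "new-format" {
			return errors.New("unknown patch format")
		}
		return nil
	}
	if err := patchWithFallback("new-format", "old-format", oldServer); err != nil {
		fmt.Println("patch failed:", err)
		return
	}
	fmt.Println("patch applied")
}
```
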
ymqytw
f32696e734 try old patch after new patch fails 2016-11-17 14:28:09 -08:00
Jing Xu
3d3e44e77e fix issue in converting aws volume id from mount paths
This PR fixes the issue of converting an AWS volume ID from mount
paths. Currently three AWS volume ID formats are supported. The
following lists examples of those three formats and their corresponding
global mount paths:
1. aws:///vol-123456
(/var/lib/kubelet/plugins/kubernetes.io/aws-ebs/mounts/aws/vol-123456)
2. aws://us-east-1/vol-123456
(/var/lib/kubelet/plugins/kubernetes.io/mounts/aws/us-est-1/vol-123455)
3. vol-123456
(/var/lib/kubelet/plugins/kubernetes.io/mounts/aws/us-est-1/vol-123455)

For the first two cases, we need to check the mount path and convert
them back to the original format.
2016-11-16 22:35:48 -08:00
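
A simplified, hypothetical sketch of the mount-path-to-volume-ID conversion described above; the real plugin code differs, and the helper name is invented for illustration:

```go
package main

import (
	"fmt"
	"path"
	"strings"
)

// volumeIDFromMountPath recovers an AWS volume ID from a global mount path,
// restoring the aws://<zone>/<vol-id> form when the path encodes a zone
// component and the aws:///<vol-id> form when it does not.
func volumeIDFromMountPath(mountPath string) string {
	const marker = "/mounts/aws/"
	idx := strings.Index(mountPath, marker)
	if idx < 0 {
		return path.Base(mountPath) // plain "vol-..." form
	}
	rest := strings.TrimPrefix(mountPath[idx+len(marker):], "/")
	parts := strings.SplitN(rest, "/", 2)
	if len(parts) == 2 {
		return "aws://" + parts[0] + "/" + parts[1] // zone-qualified form
	}
	return "aws:///" + parts[0] // zone-less form
}

func main() {
	fmt.Println(volumeIDFromMountPath("/var/lib/kubelet/plugins/kubernetes.io/aws-ebs/mounts/aws/vol-123456"))
	fmt.Println(volumeIDFromMountPath("/var/lib/kubelet/plugins/kubernetes.io/mounts/aws/us-east-1/vol-123456"))
}
```
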
gmarek
78bc6c2ecd Add logs near force deletions of Pods 2016-11-16 15:00:01 +01:00
Dlugolecki, Jakub
d1896a695f Change ScheduledJob POD name suffix from hash to Unix Epoch 2016-11-15 17:25:32 +01:00
Kubernetes Submit Queue
b85acd957a Merge pull request #36579 from kargakis/restore-events-for-tests
Automatic merge from submit-queue

Restore event messages for replica sets in the deployment controller

Needed to unblock release upgrade tests (see https://github.com/kubernetes/kubernetes/issues/36453)

@kubernetes/deployment ptal
2016-11-11 15:50:31 -08:00
Kubernetes Submit Queue
3e169be887 Merge pull request #35647 from ymqytw/patch_primitive_list
Automatic merge from submit-queue

Fix strategic patch for list of primitive type with merge semantics

Fix strategic patch for list of primitive type when the patch strategy is `merge`.
Before: we cannot replace or delete an item in a list of primitives, e.g. strings, when the patch strategy is `merge`. It will always append new items to the list.
This patch will generate a map to update the list of primitive type.
The server with this patch will accept either a new patch or an old patch.
The client will find out the API server version before generating the patch.

Fixes #35163, #32398

cc: @pwittrock @fabianofranz 

``` release-note
Fix strategic patch for list of primitive type when patch strategy is `merge` to remove deleted objects.
```
2016-11-11 14:36:44 -08:00
Kubernetes Submit Queue
868f6e1c5c Merge pull request #36585 from jszczepkowski/hpa-unittest2
Automatic merge from submit-queue

More unittests for HPA.
2016-11-11 09:17:06 -08:00
Michail Kargakis
9afe2c3a7c Do not emit event for AlreadyExists errors 2016-11-11 10:49:05 +01:00
Kubernetes Submit Queue
f70c2ef20e Merge pull request #36584 from kargakis/add-more-logging-in-the-deployment-controller
Automatic merge from submit-queue

Do not handle AlreadyExists errors yet

Until we fix https://github.com/kubernetes/kubernetes/issues/29735 (use a new hashing algo) we should not handle AlreadyExists (which was added recently in the perma-failed PR).

@kubernetes/deployment
2016-11-10 09:53:27 -08:00
Jerzy Szczepkowski
f843aff083 More unittests for HPA.
Added more unittests for HPA. Fixed inconsistency in replica calculator when usageRatio == 1.0.
2016-11-10 17:30:23 +01:00
Michail Kargakis
8cd4459b6c Do not handle AlreadyExists errors yet 2016-11-10 15:45:56 +01:00
Michail Kargakis
8ef6fdde72 Restore event messages for replica sets in the deployment controller 2016-11-10 14:34:40 +01:00
Kubernetes Submit Queue
cc51dc56a1 Merge pull request #36436 from jszczepkowski/hpa-events-fix
Automatic merge from submit-queue

HPA: removed duplicated events, added events in all execution paths.
2016-11-10 03:48:57 -08:00
Kubernetes Submit Queue
0f082c6663 Merge pull request #36280 from rkouj/better-mount-error
Automatic merge from submit-queue

Better messaging for missing volume binaries on host

**What this PR does / why we need it**:
When mount binaries are not present on a host, the error returned is a generic one.
This change checks the mount binaries before mounting and returns a user-friendly error message.

This change is specific to GCI and the flag is experimental now.

https://github.com/kubernetes/kubernetes/issues/36098

**Release note**:
Introduces a flag `check-node-capabilities-before-mount` which, if set, enables a check (`CanMount()`) prior to mount operations to verify that the required components (binaries, etc.) to mount the volume are available on the underlying node. If the check is enabled and `CanMount()` returns an error, the mount operation fails. Implements the `CanMount()` check for NFS.

Sample output post-change:


rkouj@rkouj0:~/go/src/k8s.io/kubernetes$ kubectl describe pods
Name:		sleepyrc-fzhyl
Namespace:	default
Node:		e2e-test-rkouj-minion-group-oxxa/10.240.0.3
Start Time:	Mon, 07 Nov 2016 21:28:36 -0800
Labels:		name=sleepy
Status:		Pending
IP:		
Controllers:	ReplicationController/sleepyrc
Containers:
  sleepycontainer1:
    Container ID:	
    Image:		gcr.io/google_containers/busybox
    Image ID:		
    Port:		
    Command:
      sleep
      6000
    QoS Tier:
      cpu:	Burstable
      memory:	BestEffort
    Requests:
      cpu:		100m
    State:		Waiting
      Reason:		ContainerCreating
    Ready:		False
    Restart Count:	0
    Environment Variables:
Conditions:
  Type		Status
  Initialized 	True 
  Ready 	False 
  PodScheduled 	True 
Volumes:
  data:
    Type:	NFS (an NFS mount that lasts the lifetime of a pod)
    Server:	127.0.0.1
    Path:	/export
    ReadOnly:	false
  default-token-d13tj:
    Type:	Secret (a volume populated by a Secret)
    SecretName:	default-token-d13tj
Events:
  FirstSeen	LastSeen	Count	From						SubobjectPath	Type		Reason		Message
  ---------	--------	-----	----						-------------	--------	------		-------
  7s		7s		1	{default-scheduler }						Normal		Scheduled	Successfully assigned sleepyrc-fzhyl to e2e-test-rkouj-minion-group-oxxa
  6s		3s		4	{kubelet e2e-test-rkouj-minion-group-oxxa}			Warning		FailedMount	Unable to mount volume kubernetes.io/nfs/32c7ef16-a574-11e6-813d-42010af00002-data (spec.Name: data) on pod sleepyrc-fzhyl (UID: 32c7ef16-a574-11e6-813d-42010af00002). Verify that your node machine has the required components before attempting to mount this volume type. Required binary /sbin/mount.nfs is missing
2016-11-09 18:51:00 -08:00
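
A minimal sketch of a `CanMount()`-style pre-check under the assumption that the volume type needs a single helper binary; function and binary names are illustrative only:

```go
package main

import (
	"fmt"
	"os"
)

// canMount verifies that the mount helper a volume type needs (e.g.
// /sbin/mount.nfs for NFS) exists on the node before attempting the mount,
// so the user gets a clear error instead of a generic mount failure.
func canMount(requiredBinary string) error {
	if _, err := os.Stat(requiredBinary); err != nil {
		return fmt.Errorf("required binary %s is missing: %v", requiredBinary, err)
	}
	return nil
}

func main() {
	if err := canMount("/sbin/mount.nfs"); err != nil {
		fmt.Println("refusing to mount:", err)
		return
	}
	fmt.Println("mount helper present, proceeding with mount")
}
```
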
Kubernetes Submit Queue
6a8edf72e1 Merge pull request #35957 from jsafrane/implement-external-provisioner
Automatic merge from submit-queue

Implement external provisioning proposal

In other words, add "provisioned-by" annotation to all PVCs that should be provisioned dynamically.

Most of the changes are actually in tests.

@kubernetes/sig-storage
2016-11-09 18:12:56 -08:00
Rajat Ramesh Koujalagi
d81e216fc6 Better messaging for missing volume components on host to perform mount 2016-11-09 15:16:11 -08:00
ymqytw
34891ad9f6 support patch list of primitives 2016-11-09 11:46:59 -08:00
Kubernetes Submit Queue
5d4d596667 Merge pull request #36438 from mwielgus/pdb-generation
Automatic merge from submit-queue

Use generation in pod disruption budget

Fixes #35324

Previously it was possible to use allowedDisruptions calculated for the previous spec with the current spec. With the generation check, API servers always make sure that allowedDisruptions were calculated for the current spec.

At the same time I set the registry policy to only accept updates if the version based on which the update was made matches the current version in etcd. That ensures that parallel eviction executions don't use the same allowed disruption.

cc: @davidopp @kargakis @wojtek-t
2016-11-09 10:02:29 -08:00
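
A small sketch of the generation check described above, using a stand-in struct rather than the real PodDisruptionBudget types:

```go
package main

import "fmt"

// pdb is a stand-in for a PodDisruptionBudget with the fields relevant here.
type pdb struct {
	Generation         int64 // bumped on every spec change
	ObservedGeneration int64 // generation the status was computed for
	DisruptionsAllowed int32
}

// canEvict only trusts DisruptionsAllowed if it was computed for the current
// spec; otherwise the eviction is refused until the controller catches up.
func canEvict(p pdb) bool {
	if p.ObservedGeneration != p.Generation {
		return false // status is stale relative to the spec
	}
	return p.DisruptionsAllowed > 0
}

func main() {
	fmt.Println(canEvict(pdb{Generation: 3, ObservedGeneration: 2, DisruptionsAllowed: 1})) // false: stale status
	fmt.Println(canEvict(pdb{Generation: 3, ObservedGeneration: 3, DisruptionsAllowed: 1})) // true
}
```
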
Kubernetes Submit Queue
2a674307f5 Merge pull request #36430 from kargakis/fix-deployment-progress-estimation
Automatic merge from submit-queue

Use the correct time field to estimate progress in deployments

Fixes https://github.com/kubernetes/kubernetes/issues/36427

@kubernetes/deployment
2016-11-09 08:49:53 -08:00
Kubernetes Submit Queue
5464d42a36 Merge pull request #36491 from kargakis/panic-deployment-controller
Automatic merge from submit-queue

controller: fix panic in deployments

Fixes https://github.com/kubernetes/kubernetes/issues/36488

@kubernetes/deployment
2016-11-09 06:12:29 -08:00
Kubernetes Submit Queue
fb75f8d3d6 Merge pull request #36329 from derekwaynecarr/replenishment_informers
Automatic merge from submit-queue

Use available informers in quota replenishment

More iteration on the goal to use informers where available in the quota system. This time adding persistent volume claims so the same informer is used here and in https://github.com/kubernetes/kubernetes/pull/36316
2016-11-09 03:49:15 -08:00
Michail Kargakis
9fe910dac5 controller: fix panic in deployments 2016-11-09 12:25:23 +01:00
Marcin
8e2347370e Add observedGeneration to PodDisruptionBudgetStatus 2016-11-08 17:06:17 +01:00
Jerzy Szczepkowski
7ebd50c7cd HPA: removed duplicated events, added events in all execution paths.
HPA: removed duplicated events, added events in all execution paths. Fixes #36357.
2016-11-08 13:40:49 +01:00
Michail Kargakis
2972538f5b Use the correct time field to estimate progress in deployments 2016-11-08 11:41:53 +01:00
Solly Ross
2c66d47786 HPA: Consider unready pods and missing metrics
Currently, the HPA considers unready pods the same as ready pods when
looking at their CPU and custom metric usage.  However, pods frequently
use extra CPU during initialization, so we want to consider them
separately.

This commit causes the HPA to consider unready pods as having 0 CPU
usage when scaling up, and ignores them when scaling down.  If, when
scaling up, factoring the unready pods as having 0 CPU would cause a
downscale instead, we simply choose not to scale.  Otherwise, we simply
scale up at the reduced amount calculated by factoring the pods in at
zero CPU usage.

The effect is that unready pods cause the autoscaler to be a bit more
conservative -- large increases in CPU usage can still cause scales,
even with unready pods in the mix, but will not cause the scale factors
to be as large, in anticipation of the new pods later becoming ready and
handling load.

Similarly, if there are pods for which no metrics have been retrieved,
these pods are treated as having 100% of the requested metric when
scaling down, and 0% when scaling up.  As above, this cannot change the
direction of the scale.

This commit also changes the HPA to ignore superfluous metrics -- as
long as metrics for all ready pods are present, the HPA will make scaling
decisions.  Currently, this only works for CPU.  For custom metrics, we
cannot identify which metrics go to which pods if we get superfluous
metrics, so we abort the scale.
2016-11-08 00:59:23 -05:00
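
A rough, simplified sketch of the conservative scaling behaviour described above; it is not the actual replica calculator, and the ratio math is reduced to the bare idea:

```go
package main

import "fmt"

// podMetric is a stand-in for one pod's CPU usage as a fraction of its target.
type podMetric struct {
	usageRatio float64
	ready      bool
}

// desiredReplicas computes the usage ratio over ready pods, then recomputes
// with unready pods counted as zero usage; if factoring them in would flip a
// scale-up into a scale-down, it keeps the current replica count instead.
func desiredReplicas(current int, pods []podMetric) int {
	var readySum float64
	ready := 0
	for _, p := range pods {
		if p.ready {
			readySum += p.usageRatio
			ready++
		}
	}
	if ready == 0 {
		return current
	}
	readyRatio := readySum / float64(ready)
	if readyRatio <= 1.0 {
		// Scaling down (or holding): ignore unready pods entirely.
		return int(float64(current)*readyRatio + 0.5)
	}
	// Scaling up: recompute with unready pods treated as zero usage.
	allRatio := readySum / float64(len(pods))
	if allRatio <= 1.0 {
		return current // factoring unready pods in would flip the direction
	}
	return int(float64(current)*allRatio + 0.5)
}

func main() {
	pods := []podMetric{{2.0, true}, {2.0, true}, {0, false}} // two hot pods, one still starting
	fmt.Println(desiredReplicas(3, pods))                     // scales up, but less aggressively
}
```
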
Kubernetes Submit Queue
6b16307d1f Merge pull request #35465 from lukaszo/ds_event
Automatic merge from submit-queue

Emit event when scheduling daemon fails
2016-11-07 18:18:05 -08:00
Kubernetes Submit Queue
1866e1862e Merge pull request #36021 from soltysh/cronjobs
Automatic merge from submit-queue

Rename ScheduledJobs to CronJobs

I went with @smarterclayton's idea of registering named types in the schema. This way we can support both the new (CronJobs) and old (ScheduledJobs) resource names. Fixes #32150.

fyi @erictune @caesarxuchao @janetkuo 

Not ready yet, but getting close there...

**Release note**:
```release-note
Rename ScheduledJobs to CronJobs.
```
2016-11-07 07:12:17 -08:00
Kubernetes Submit Queue
7bc358681a Merge pull request #36235 from jszczepkowski/hpa-events-fix
Automatic merge from submit-queue

Improved event generation for HPA.
2016-11-07 02:16:27 -08:00
Maciej Szulik
41d88d30dd Rename ScheduledJob to CronJob 2016-11-07 10:14:12 +01:00
Kubernetes Submit Queue
14961af811 Merge pull request #35665 from m1093782566/m109-pet-test
Automatic merge from submit-queue

Add StatefulSet update pod unit test and set log level

<!--  Thanks for sending a pull request!  Here are some tips for you:
1. If this is your first time, read our contributor guidelines https://github.com/kubernetes/kubernetes/blob/master/CONTRIBUTING.md and developer guide https://github.com/kubernetes/kubernetes/blob/master/docs/devel/development.md
2. If you want *faster* PR reviews, read how: https://github.com/kubernetes/kubernetes/blob/master/docs/devel/faster_reviews.md
3. Follow the instructions for writing a release note: https://github.com/kubernetes/kubernetes/blob/master/docs/devel/pull-requests.md#release-notes
-->

**What**:
- Add petset controller update pet unit test
- set petset controller log level

**Why**
- #32482 fixed "losing pet updated information between update retries"; as @bprashanth suggested, "there should be a UT to ensure we fix identity if something corrupts it". I implement that UT in this PR.
- set petset controller log level in order to avoid spamming.

@bprashanth
2016-11-06 23:19:22 -08:00
Kubernetes Submit Queue
fefdad2366 Merge pull request #36324 from mwielgus/diseve
Automatic merge from submit-queue

Add more events to disruption controller

To provide users with information that their PDB may not be working as intended.

cc: @davidopp
2016-11-06 21:21:23 -08:00
Kubernetes Submit Queue
4b081985ed Merge pull request #36248 from ncdc/operationNotSupportedCache-mutex
Automatic merge from submit-queue

Fix possible race in operationNotSupportedCache

Because we can run multiple workers to delete namespaces simultaneously, the
operationNotSupportedCache needs to be guarded with a mutex to avoid concurrent
map read/write errors.
2016-11-06 18:57:39 -08:00
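
A minimal sketch of the fix: a mutex-guarded map that several namespace-deletion workers can safely share; the key type and method names are illustrative, not the controller's actual code:

```go
package main

import (
	"fmt"
	"sync"
)

// operationKey is a stand-in for the (resource, operation) key the namespace
// controller caches unsupported operations under.
type operationKey struct {
	resource  string
	operation string
}

// notSupportedCache is a plain map guarded by a mutex so that multiple
// namespace-deletion workers can read and write it concurrently.
type notSupportedCache struct {
	mu    sync.Mutex
	cache map[operationKey]bool
}

func (c *notSupportedCache) isNotSupported(key operationKey) bool {
	c.mu.Lock()
	defer c.mu.Unlock()
	return c.cache[key]
}

func (c *notSupportedCache) setNotSupported(key operationKey) {
	c.mu.Lock()
	defer c.mu.Unlock()
	c.cache[key] = true
}

func main() {
	c := &notSupportedCache{cache: map[operationKey]bool{}}
	var wg sync.WaitGroup
	for i := 0; i < 4; i++ { // several workers touching the cache at once
		wg.Add(1)
		go func() {
			defer wg.Done()
			key := operationKey{resource: "events", operation: "deletecollection"}
			if !c.isNotSupported(key) {
				c.setNotSupported(key)
			}
		}()
	}
	wg.Wait()
	fmt.Println("cache entries:", len(c.cache))
}
```
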
Derek Carr
669ce59b9b Use available informers in quota replenishment 2016-11-06 18:45:36 -05:00
Marcin Wielgus
51e7bd92db Add more events to disruption controller 2016-11-07 00:07:52 +01:00
Marcin
1fee246ca9 Autogenerated stuff for policy/v1beta1 api change 2016-11-06 19:37:33 +01:00
Marcin
47a1458ff3 Add DisruptedPod map to PodDisruptionBudgetStatus 2016-11-06 19:37:33 +01:00
Kubernetes Submit Queue
c02a9c6aad Merge pull request #36080 from ncdc/lister-gen
Automatic merge from submit-queue

lister-gen updates

- Remove "zz_generated." prefix from generated lister file names
- Add support for expansion interfaces
- Switch to new generated JobLister

@deads2k @liggitt @sttts @mikedanese @caesarxuchao for the lister-gen changes
@soltysh @deads2k for the informer / job controller changes
2016-11-06 06:05:23 -08:00
Kubernetes Submit Queue
43a915e628 Merge pull request #35491 from pmorie/byebye-getrootcontext
Automatic merge from submit-queue

Remove GetRootContext method from VolumeHost interface

Remove the `GetRootContext` call from the `VolumeHost` interface, since Kubernetes no longer needs to know the SELinux context of the Kubelet directory.

Per #33951 and #35127.

Depends on #33663; only the last commit is relevant to this PR.
2016-11-06 01:09:19 -08:00
Kubernetes Submit Queue
2c50d2b6fc Merge pull request #36094 from janetkuo/overlapping-deployment-select
Automatic merge from submit-queue

Update how we detect overlapping deployments

<!--  Thanks for sending a pull request!  Here are some tips for you:
1. If this is your first time, read our contributor guidelines https://github.com/kubernetes/kubernetes/blob/master/CONTRIBUTING.md and developer guide https://github.com/kubernetes/kubernetes/blob/master/docs/devel/development.md
2. If you want *faster* PR reviews, read how: https://github.com/kubernetes/kubernetes/blob/master/docs/devel/faster_reviews.md
3. Follow the instructions for writing a release note: https://github.com/kubernetes/kubernetes/blob/master/docs/devel/pull-requests.md#release-notes
-->

**What this PR does / why we need it**:

**Which issue this PR fixes** *(optional, in `fixes #<issue number>(, #<issue_number>, ...)` format, will close that issue when PR gets merged)*: fixes #24152 

**Special notes for your reviewer**: cc @kubernetes/deployment 

**Release note**:
<!--  Steps to write your release note:
1. Use the release-note-* labels to set the release note state (if you have access) 
2. Enter your extended release note in the below block; leaving it blank means using the PR title as the release note. If no release note is required, just write `NONE`. 
-->
```release-note
NONE
```

When looking for overlapping deployments, we should also find other deployments that select current deployment's pods,
not just the ones whose pods are selected by current deployment.
2016-11-05 21:04:58 -07:00
m1093782566
396d959252 implement statefulset update pet unit test
Change-Id: I2234ee8357752aaa8801efb52d5963c215a0344d
2016-11-05 19:47:01 +08:00
Kubernetes Submit Queue
a811515d34 Merge pull request #35691 from kargakis/controller-changes-for-perma-failed
Automatic merge from submit-queue

Controller changes for perma failed deployments

This PR adds support for reporting failed deployments based on a timeout
parameter defined in the spec. If there is no progress for the amount
of time defined as progressDeadlineSeconds then the deployment will be
marked as failed by a Progressing condition with a ProgressDeadlineExceeded
reason.

Follow-up to https://github.com/kubernetes/kubernetes/pull/19343

Docs at kubernetes/kubernetes.github.io#1337

Fixes https://github.com/kubernetes/kubernetes/issues/14519

@kubernetes/deployment @smarterclayton
2016-11-04 14:49:43 -07:00
Andy Goldstein
4855917bc3 Fix possible race in operationNotSupportedCache
Because we can run multiple workers to delete namespaces simultaneously, the
operationNotSupportedCache needs to be guarded with a mutex to avoid concurrent
map read/write errors.
2016-11-04 14:11:54 -04:00
Michail Kargakis
f52ea8fc67 Update replica annotations every time they are out of sync 2016-11-04 16:29:41 +01:00
Jerzy Szczepkowski
6adc18ed67 Improved event generation for HPA.
Improved event generation for HPA: added a grace period before a warning event is generated. Resolves #29799.
2016-11-04 15:38:06 +01:00
Michail Kargakis
a5029bf373 controller: support perma-failed deployments
This commit adds support for failing deployments based on a timeout
parameter defined in the spec. If there is no progress for the amount
of time defined as progressDeadlineSeconds then the deployment will be
marked as failed by adding a condition with a ProgressDeadlineExceeded
reason in it. Progress in the context of a deployment means the creation
or adoption of a new replica set, scaling up new pods, and scaling down
old pods.
2016-11-04 13:36:46 +01:00
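
A bare-bones sketch of the progress-deadline check, assuming a stand-in struct instead of the real Deployment API types:

```go
package main

import (
	"fmt"
	"time"
)

// deployment holds just the fields the deadline check needs.
type deployment struct {
	progressDeadlineSeconds int32
	lastProgressTime        time.Time // last time a Progressing condition moved forward
}

// deadlineExceeded reports whether the deployment has made no progress for
// longer than progressDeadlineSeconds, in which case it should be marked
// failed with a ProgressDeadlineExceeded reason.
func deadlineExceeded(d deployment, now time.Time) bool {
	deadline := d.lastProgressTime.Add(time.Duration(d.progressDeadlineSeconds) * time.Second)
	return now.After(deadline)
}

func main() {
	d := deployment{
		progressDeadlineSeconds: 600,
		lastProgressTime:        time.Now().Add(-15 * time.Minute),
	}
	if deadlineExceeded(d, time.Now()) {
		fmt.Println("condition Progressing=False, reason ProgressDeadlineExceeded")
	}
}
```
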
Kubernetes Submit Queue
929d3f74e8 Merge pull request #34645 from kargakis/rs-conditions-controller-changes
Automatic merge from submit-queue

Replica set conditions controller changes

Follow-up to https://github.com/kubernetes/kubernetes/pull/33905, partially addresses https://github.com/kubernetes/kubernetes/issues/32863.

@smarterclayton @soltysh @bgrant0607 @mfojtik I just need to add e2e tests
2016-11-04 04:21:10 -07:00
Kubernetes Submit Queue
0a86b5ec34 Merge pull request #36161 from soltysh/fix_replace
Automatic merge from submit-queue

Fix how we iterate over active jobs when removing them for Replace policy

When fixing the Replace active-job removal I used the wrong for-loop construct, which panics :/ This PR fixes that by using for range.

@janetkuo ptal

@jessfraz this will also be a cherry-pick candidate for 1.4, I remember we've picked the aforementioned fix as well
2016-11-04 03:09:18 -07:00
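
An illustrative sketch of the safe iteration pattern (range over a snapshot while the underlying list shrinks); it is not the cronjob controller's actual code:

```go
package main

import "fmt"

// removeActive deletes every active job reference. Ranging over a snapshot of
// the list is safe even though removal mutates the underlying slice; a
// hand-rolled index loop over the mutating slice can walk past the end or
// skip entries.
func removeActive(active []string, remove func(string)) {
	snapshot := append([]string(nil), active...) // copy before mutating
	for _, job := range snapshot {
		remove(job)
	}
}

func main() {
	active := []string{"job-a", "job-b", "job-c"}
	removeActive(active, func(name string) {
		fmt.Println("removing", name)
		// drop the reference from the active list
		for i, j := range active {
			if j == name {
				active = append(active[:i], active[i+1:]...)
				break
			}
		}
	})
	fmt.Println("remaining:", active)
}
```
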
Kubernetes Submit Queue
f2b5600567 Merge pull request #36017 from foxish/kubectl-new-2
Automatic merge from submit-queue

Set reason and message on Pod during nodecontroller eviction

**What this PR does / why we need it**: Pods which are evicted by the nodecontroller due to a network partition or an unresponsive kubelet should be differentiated from terminations initiated by other sources. The reason/message are consumed by kubectl to provide a better summary using get/describe.

**Which issue this PR fixes** *(optional, in `fixes #<issue number>(, #<issue_number>, ...)` format, will close that issue when PR gets merged)*: fixes #35725 

**Release note**:
```release-note
Pods that are terminating due to eviction by the nodecontroller (typically due to unresponsive kubelet, or network partition) now surface in `kubectl get` output 
as being in state "Unknown", along with a longer description in `kubectl describe` output.
```
2016-11-03 18:05:44 -07:00
Andy Goldstein
8c923faf74 Switch to JobLister 2016-11-03 20:41:40 -04:00
Anirudh
6d7213dd39 Update bazel 2016-11-03 13:47:09 -07:00
Anirudh
8fd7de5f13 Added unit test for adding reason with termination. 2016-11-03 13:47:09 -07:00
Anirudh
a5bdc5f509 Set reason and message on Pod during nodecontroller eviction
Pods which are evicted by the nodecontroller due to a network
malfunction or an unresponsive kubelet should be differentiated
from terminations initiated by other sources. The reason/message
are consumed by kubectl to provide a better summary using get/describe.
2016-11-03 13:47:03 -07:00
Paul Morie
4722cb299b Remove GetRootContext from VolumeHost 2016-11-03 12:21:19 -04:00
Jan Safranek
2224e80dd7 Fix race when two provisioner create two PVs for a single claim. 2016-11-03 16:58:25 +01:00
Maciej Szulik
80ec726858 Fix how we iterate over active jobs when removing them for Replace policy 2016-11-03 14:54:38 +01:00
Marcin
26acced6d8 Add policy api version v1beta1 and disable v1alpha1 2016-11-03 13:26:27 +01:00
Antoine Pelisse
e73a10fe46 Update OWNERS based on PR comments 2016-11-02 16:28:23 -07:00
Marek Grabowski
e044b9fb01 Update OWNERS 2016-11-02 16:19:53 -07:00
Jan Chaloupka
69c2c34829 Update OWNERS 2016-11-02 16:19:30 -07:00
David Eads
ff09343f75 update owners 2016-11-02 16:19:30 -07:00
Anirudh Ramanathan
bbca91d185 Update podGC owners 2016-11-02 16:19:30 -07:00
Prashanth B
23b048b3ec Update OWNERS 2016-11-02 16:19:29 -07:00
Matt Liggett
6f0d4e1651 Add mml to reviewers 2016-11-02 16:19:22 -07:00
Chao Xu
9aa1049a03 Update OWNERS 2016-11-02 16:19:22 -07:00
Derek Carr
d2311ecf71 Update OWNERS 2016-11-02 16:19:21 -07:00
Saad Ali
eac6809845 Add reviewers for controller/volume dir 2016-11-02 16:19:19 -07:00
Antoine Pelisse
db35acde19 Update OWNERS: Remove reviewers: pkg/controller 2016-11-02 16:19:19 -07:00
Antoine Pelisse
c695a54c1c Update OWNERS approvers and reviewers: pkg/controller 2016-11-02 16:19:18 -07:00
Janet Kuo
9f3560c563 Update how we detect overlapping deployments
When looking for overlapping deployments, we should also find other deployments that select current deployment's pods,
not just the ones whose pods are selected by current deployment.
2016-11-02 15:29:28 -07:00
Kubernetes Submit Queue
49f1aa0632 Merge pull request #35739 from foxish/migrating-the-annotation
Automatic merge from submit-queue

Making the pod.alpha.kubernetes.io/initialized annotation optional in PetSet pods

**What this PR does / why we need it**: As of now, the absence of the annotation `pod.alpha.kubernetes.io/initialized` in PetSets causes the PetSet controller to effectively "pause". Since it is a debug hook, users expect that its absence has no effect on the working of a PetSet. This PR inverts the logic so that we let the PetSet controller operate as expected in the absence of the annotation.
Letting the annotation remain alpha seems ok. Renaming it to something more meaningful needs further discussion.

**Which issue this PR fixes** _(optional, in `fixes #<issue number>(, #<issue_number>, ...)` format, will close that issue when PR gets merged)_: fixes https://github.com/kubernetes/kubernetes/issues/35498

**Special notes for your reviewer**: 

**Release note**:

``` release-note
The annotation "pod.alpha.kubernetes.io/initialized" on StatefulSets (formerly PetSets) is now optional and only encouraged for debug use.
```

cc @erictune @smarterclayton @bprashanth @kubernetes/sig-apps 
@kow3ns The examples will need to be cleaned up as well I think later on to remove them.
2016-11-02 09:58:00 -07:00
Jan Safranek
18de83c641 Implement external provisioning proposal
In other words, add "provisioned-by" annotation to all PVCs
that should be provisioned dynamically.
2016-11-02 14:13:34 +01:00
Michail Kargakis
2491216222 Replica set/rc controller changes for Conditions 2016-11-02 10:30:09 +01:00
Kubernetes Submit Queue
49e7d640d9 Merge pull request #35235 from foxish/node-controller-no-force-deletion
Automatic merge from submit-queue

Node controller to not force delete pods

Fixes https://github.com/kubernetes/kubernetes/issues/35145

- [x] e2e tests to test Petset, RC, Job.
- [x] Remove and cover other locations where we force-delete pods within the NodeController.

**Release note**:

<!--  Steps to write your release note:
1. Use the release-note-* labels to set the release note state (if you have access) 
2. Enter your extended release note in the below block; leaving it blank means using the PR title as the release note. If no release note is required, just write `NONE`. 
-->

``` release-note
Node controller no longer force-deletes pods from the api-server.

* For StatefulSet (previously PetSet), this change means creation of replacement pods is blocked until old pods are definitely not running (indicated either by the kubelet returning from partitioned state, or deletion of the Node object, or deletion of the instance in the cloud provider, or force deletion of the pod from the api-server). This has the desirable outcome of "fencing" to prevent "split brain" scenarios.
* For all other existing controllers except StatefulSet, this has no effect on the ability of the controller to replace pods because the controllers do not reuse pod names (they use generate-name).
* User-written controllers that reuse names of pod objects should evaluate this change.
```
2016-11-01 20:08:57 -07:00
Anirudh
71941016c1 Fix old e2e tests, refactor and add new e2e tests. 2016-11-01 11:46:13 -07:00
Anirudh
5ccd7a325e Removing force deletion of pods from the node-controller 2016-11-01 11:44:34 -07:00
Kubernetes Submit Queue
1fa8369074 Merge pull request #35639 from ncdc/lister-gen
Automatic merge from submit-queue

Add tooling to generate listers

Add lister-gen tool to auto-generate listers. So far this PR only demonstrates replacing the manually-written `StoreToLimitRangeLister` with the generated `LimitRangeLister`, as it's a small and easy swap.

cc @deads2k @liggitt @sttts @nikhiljindal @lavalamp @smarterclayton @derekwaynecarr  @kubernetes/sig-api-machinery @kubernetes/rh-cluster-infra
2016-11-01 09:29:06 -07:00
Kubernetes Submit Queue
cbabb03acc Merge pull request #34841 from derekwaynecarr/quota-shared-informer
Automatic merge from submit-queue

quota controller uses informers if available for pod calculation

This PR does the following:
1. plumb informer factory into quota registry and evaluators
2. pod quota evaluator uses informers for determining aggregrate usage instead of making direct calls
3. admission code path does not use informers because
   1. we do not want to add new watches in apiserver
   2. admission code path does not require aggregate usage calculation

As a result, the quota controller is much faster at re-calculating quota usage when it observes a pod deletion.

Follow-on PRs will make similar changes for other informer backed resources (pvcs next).

/cc @deads2k @mfojtik @smarterclayton @kubernetes/rh-cluster-infra
2016-10-31 14:34:57 -07:00
Anirudh
5c3792a4db Flipping behavior when annotation is absent. 2016-10-31 12:49:11 -07:00
derekwaynecarr
1bcb057636 quota controller uses informers if available for pod calculation 2016-10-31 11:38:22 -04:00
gmarek
8d766462e7 Initialize CIDR allocator before registering handle functions 2016-10-31 16:21:37 +01:00
Andy Goldstein
13abf36c60 Update bazel build files 2016-10-31 11:13:44 -04:00
Andy Goldstein
5ab385480b Convert StoreToLimitRangeLister to LimitRangeLister 2016-10-31 11:13:44 -04:00
Chao Xu
2044927034 move watch.ListWatchUntil to its own package to avoid future import cycle 2016-10-30 13:14:20 -07:00
Kubernetes Submit Queue
7d911417c2 Merge pull request #35420 from soltysh/sj_replace_fix
Automatic merge from submit-queue

Remove Job also from .status.active for Replace strategy

When iterating over the list of Jobs we remove each of them when the strategy is Replace. Unfortunately, the job reference was not removed from `.status.active`, which caused the controller to try to remove it once again during the next run and fail, since it had already been removed during the previous run. This PR fixes that and cleans up logs a bit in that controller.

@erictune fyi
@janetkuo ptal
2016-10-30 05:08:43 -07:00
Kubernetes Submit Queue
7f309f5fae Merge pull request #35471 from caesarxuchao/client-gen-multi-versions
Automatic merge from submit-queue

Let release_1_5 clientset include multiple versions of a group

Fix #35237 

This PR makes the versioned clientset include multiple versions of a group. Currently only `batch` has `v1` and `v2alpha1`. The clientset interface now looks like:
```go
	BatchV2alpha1() v2alpha1batch.BatchV2alpha1Interface
	BatchV1() v1batch.BatchV1Interface
	// Deprecated: please explicitly pick a version if possible.
	Batch() v1batch.BatchV1Interface
```

Commit "update client-gen to say internalversion rather than unversioned" fixes https://github.com/kubernetes/kubernetes/issues/24481. 


cc @kubernetes/sig-api-machinery @soltysh @deads2k @nikhiljindal 



```release-note
release_1_5 clientset supports multiple versions of a group.
```
2016-10-29 15:40:13 -07:00
Kubernetes Submit Queue
7c9c8cbf28 Merge pull request #34952 from kargakis/update-observedgeneration-for-overlapping-deployments
Automatic merge from submit-queue

Make overlapping deployments deletable

@kubernetes/deployment ptal

Fixes https://github.com/kubernetes/kubernetes/issues/34466 by 1) not adding the overlapping annotation to the working deployment, 2) updating observedGeneration for overlapping deployments, and 3) updating the kubectl deployment reaper to do non-cascading deletion for deployments with the overlapping annotation.
2016-10-29 14:50:16 -07:00
Chao Xu
850729bfaf include multiple versions in clientset
update client-gen to use the term "internalversion" rather than "unversioned";
leave internal one unqualified;
cleanup client-gen
2016-10-29 13:30:47 -07:00
Kubernetes Submit Queue
620788a795 Merge pull request #35230 from deads2k/controller-12-sa-controller
Automatic merge from submit-queue

convert SA controller to shared informers

convert the SA controller to shared informer + workqueue.

I think one of @derekwaynecarr @ncdc or @liggitt
2016-10-29 10:09:46 -07:00
Kubernetes Submit Queue
9a219eb803 Merge pull request #34651 from smarterclayton/negotiate
Automatic merge from submit-queue

Simplify negotiation in server in preparation for multi version support

This is a pre-factor for #33900 to simplify runtime.NegotiatedSerializer, tighten up a few abstractions that may break when clients can request different client versions, and pave the way for better negotiation.

View this as pure simplification.
2016-10-29 03:32:02 -07:00
Kubernetes Submit Queue
3e7172d49e Merge pull request #34859 from jingxu97/syncAttach-10-15
Automatic merge from submit-queue

Add sync state loop in master's volume reconciler

At master volume reconciler, the information about which volumes are
attached to nodes is cached in actual state of world. However, this
information might be out of date in case the node is terminated (the volume
is detached automatically). In this situation, the reconciler assumes the volume
is still attached and will not issue an attach operation when the node comes
back. Pods created on those nodes will fail to mount.
This PR adds the logic to periodically sync up the truth for attached
volumes kept in
the actual state cache. If the volume is no longer attached to the node,
the actual state will be updated to reflect the truth. In turn,
reconciler will take actions if needed.
To avoid issuing many concurrent operations on cloud provider, this PR
tries to add batch operation to check whether a list of volumes are
attached to the node instead of one request per volume.
2016-10-28 18:33:29 -07:00
Kubernetes Submit Queue
1cba31af40 Merge pull request #35541 from foxish/deletions-safe-again
Automatic merge from submit-queue

Moving some force deletion logic from the NC into the PodGC

**What this PR does / why we need it**: Moves some pod force-deletion behavior into the PodGC, which is a better place for these.

This should be a NOP because we're just moving functionality
around and thanks to #35476, the podGC controller should always
run.

Related: https://github.com/kubernetes/kubernetes/pull/34160, https://github.com/kubernetes/kubernetes/issues/35145

cc @gmarek @kubernetes/sig-apps
2016-10-28 09:40:00 -07:00
Jing Xu
abbde43374 Add sync state loop in master's volume reconciler
At master volume reconciler, the information about which volumes are
attached to nodes is cached in actual state of world. However, this
information might be out of date in case the node is terminated (the volume
is detached automatically). In this situation, the reconciler assumes the volume
is still attached and will not issue an attach operation when the node comes
back. Pods created on those nodes will fail to mount.

This PR adds the logic to periodically sync up the truth for attached volumes kept in the actual state cache. If the volume is no longer attached to the node, the actual state will be updated to reflect the truth. In turn, reconciler will take actions if needed.

To avoid issuing many concurrent operations on cloud provider, this PR
tries to add batch operation to check whether a list of volumes are
attached to the node instead of one request per volume.

More details are explained in PR #33760
2016-10-28 09:24:53 -07:00
Clayton Coleman
ca2f1b87ad
Replace negotiation with a new method that can extract info
Alter how runtime.SerializeInfo is represented to simplify negotiation
and reduce the need to allocate during negotiation. Simplify the dynamic
client's logic around negotiating type. Add more tests for media type
handling where necessary.
2016-10-28 11:30:11 -04:00
Janet Kuo
10aee82ae3 Rename PetSet API to StatefulSet 2016-10-27 17:25:10 -07:00
Anirudh
1ae1a19e7b addressing comments. 2016-10-27 13:30:04 -07:00
deads2k
df4ed892c4 convert SA controller to shared informers 2016-10-27 15:44:46 -04:00
Anirudh
c0d116c419 Updated bazel 2016-10-27 11:56:15 -07:00
Anirudh
d57e8f11a3 Updated unit tests. 2016-10-27 11:56:15 -07:00
Anirudh
05365d7cb2 Moving deletion behavior from the NC into PodGC
This should be a NOP because we're just moving functionality
around and thanks to #35476, the podGC controller should always
run anyway.
2016-10-27 11:56:15 -07:00
Kubernetes Submit Queue
cfdaf18277 Merge pull request #34298 from derekwaynecarr/ns-controller-panic
Automatic merge from submit-queue

Fix potential panic in namespace controller when rapidly create/delet…

Fixes https://github.com/kubernetes/kubernetes/issues/33676

The theory is this could occur in either of the following scenarios:

1. An HA environment where a GET goes to a different API server than the one the WATCH was read from
1. In a many-controller scenario (i.e. where multiple finalizers participate), a namespace that is created and deleted with the same name could trip up another namespace controller into seeing a namespace with the same name that was not actually in a deleting state.  Added checks to verify the UID matches across retry operations.

/cc @liggitt @kubernetes/rh-cluster-infra
2016-10-26 23:15:00 -07:00
Chao Xu
17426490d9 remove unnecessary import rename 2016-10-26 17:32:44 -07:00
Kubernetes Submit Queue
453bfa1f0f Merge pull request #34368 from jingxu97/Oct/statusupdate-10-7
Automatic merge from submit-queue

Node status updater should SetNodeStatusUpdateNeeded if it fails to

update status

When the volume controller tries to update the node status, if it fails to
update the node's status, it should call SetNodeStatusUpdateNeeded so
that the volume list can be updated next time.
2016-10-26 11:09:16 -07:00
Maciej Szulik
dc364b8ebb Remove Job also from .status.active for Replace strategy 2016-10-25 21:44:03 +02:00
derekwaynecarr
e634312de4 Fix potential panic in namespace controller when rapidly create/delete namespace of same name 2016-10-25 13:51:27 -04:00
Jan Safranek
ad946f4fcc Fixed mutation warning in Attach/Detach controller
Objects from shared informer must not be changed, they are shared among all
controllers.

This fixes CacheMutationDetector panic with this output:

CACHE *api.Node[5] ALTERED!
{"metadata":{"name":"ip-172-18-8-71.ec2.internal","selfLink":"/api/v1/nodes/ip-172-18-8-71.ec2.internal","uid":"73d07d16-976e-11e6-8225-0e2f14b56070","resourceVersion":"136","creationTimestamp":"2016-10-21T09:12:12Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/instance-type":"t2.medium","beta.kubernetes.io/os":"linux","failure-domain.beta.kubernetes.io/region":"us-east-1","failure-domain.beta.kubernetes.io/zone":"us-east-1d","kubernetes.io/hostname":"ip-172-18-8-71.ec2.internal"},"annotations":{"volumes.kubernetes.io/controller-managed-attach-detach":"true"}},"spec":{"externalID":"i-9cb6180f","providerID":"aws:///us-east-1d/i-9cb6180f"},"status":{"capacity":{"alpha.kubernetes.io/nvidia-gpu":"0","cpu":"2","memory":"4045568Ki","pods":"110"},"allocatable":{"alpha.kubernetes.io/nvidia-gpu":"0","cpu":"2","memory":"4045568Ki","pods":"110"},"conditions":[{"type":"OutOfDisk","status":"False","lastHeartbeatTime":"2016-10-21T09:12:52Z","lastTransitionTime":"2016-10-21T09:12:12Z","reason":"KubeletHasSufficientDisk","message":"kubelet has sufficient disk space available"},{"type":"MemoryPressure","status":"False","lastHeartbeatTime":"2016-10-21T09:12:52Z","lastTransitionTime":"2016-10-21T09:12:12Z","reason":"KubeletHasSufficientMemory","message":"kubelet has sufficient memory available"},{"type":"DiskPressure","status":"False","lastHeartbeatTime":"2016-10-21T09:12:52Z","lastTransitionTime":"2016-10-21T09:12:12Z","reason":"KubeletHasNoDiskPressure","message":"kubelet has no disk pressure"},{"type":"InodePressure","status":"False","lastHeartbeatTime":"2016-10-21T09:12:52Z","lastTransitionTime":"2016-10-21T09:12:12Z","reason":"KubeletHasNoInodePressure","message":"kubelet has no inode pressure"},{"type":"Ready","status":"True","lastHeartbeatTime":"2016-10-21T09:12:52Z","lastTransitionTime":"2016-10-21T09:12:22Z","reason":"KubeletReady","message":"kubelet is posting ready status"}],"addresses":[{"type":"InternalIP","address":"172.18.8.71"},{"type":"LegacyHostIP","address":"172.18.8.71"},{"type":"ExternalIP","address":"54.85.104.236"}],"daemonEndpoints":{"kubeletEndpoint":{"Port":10250}},"nodeInfo":{"machineID":"78a79498db8e4fdc9ac24b5e436a982c","systemUUID":"EC2BB406-5467-4ABE-B54D-D9993C45714F","bootID":"2553d6b8-1ddb-4ef0-902a-d09a807b89ba","kernelVersion":"4.6.7-300.fc24.x86_64","osImage":"Fedora 24 (Cloud Edition)","containerRuntimeVersion":"docker://1.10.3","kubeletVersion":"v1.5.0-alpha.1.726+5aac5eddb809e4","kubeProxyVersion":"v1.5.0-alpha.1.726+5aac5eddb809e4","operatingSystem":"linux","architecture":"amd64"},"images":[{"names":["openshift/origin-release:latest"],"sizeBytes":714569002},{"names":["openshift/origin-haproxy-router-base:latest"],"sizeBytes":294417608},{"names":["openshift/origin-base:latest"],"sizeBytes":275310761},{"names":["docker.io/centos@sha256:2ae0d2c881c7123870114fb9cc7afabd1e31f9888dac8286884f6cf59373ed9b","docker.io/centos:centos7"],"sizeBytes":196744353},{"names":["gcr.io/google_containers/busybox@sha256:4bdd623e848417d96127e16037743f0cd8b528c026e9175e22a84f639eca58ff","gcr.io/google_containers/busybox:1.24"],"sizeBytes":1113554},{"names":["gcr.io/google_containers/pause-amd64@sha256:163ac025575b775d1c0f9bf0bdd0f086883171eb475b5068e7defa4ca9e76516","gcr.io/google_containers/pause-amd64:3.0"],"sizeBytes":746888}],"volumesInUse":["kubernetes.io/aws-ebs/aws://us-east-1d/vol-f4bd0352"]

A: ,"volumesAttached":[{"name":"kubernetes.io/aws-ebs/aws://us-east-1d/vol-f4bd0352","devicePath":"/dev/xvdba"}]}}

B: }}
2016-10-25 14:28:10 +02:00
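
A small sketch of the rule this fix enforces: copy objects obtained from a shared informer before mutating them; the `node` type and helpers below are stand-ins, not the real API objects:

```go
package main

import "fmt"

// node is a stand-in for an object handed out by a shared informer cache.
type node struct {
	Name            string
	VolumesAttached []string
}

// deepCopy returns a private copy that is safe to modify.
func (n *node) deepCopy() *node {
	cp := *n
	cp.VolumesAttached = append([]string(nil), n.VolumesAttached...)
	return &cp
}

// updateStatus never mutates the object from the shared cache in place;
// it copies first, then changes the copy.
func updateStatus(cached *node, attached []string) *node {
	updated := cached.deepCopy()
	updated.VolumesAttached = attached
	return updated
}

func main() {
	cached := &node{Name: "ip-172-18-8-71.ec2.internal", VolumesAttached: []string{"vol-a"}}
	updated := updateStatus(cached, []string{"vol-a", "vol-b"})
	fmt.Println("cache copy untouched:", cached.VolumesAttached)
	fmt.Println("object sent to the API server:", updated.VolumesAttached)
}
```
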
Jing Xu
70efadc2f4 Node status updater should SetNodeStatusUpdateNeeded if it fails to
update status

When the volume controller tries to update the node status, if it fails to
update the node's status, it should call SetNodeStatusUpdateNeeded so
that the volume list can be updated next time.
2016-10-24 13:59:39 -07:00
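
A minimal sketch of the retry-marking behaviour described above, with hypothetical types; the real updater tracks per-node state differently:

```go
package main

import (
	"errors"
	"fmt"
)

// statusUpdater re-marks a node as needing an update whenever pushing its
// volumesAttached status to the API server fails, so the next sync retries it.
type statusUpdater struct {
	needsUpdate map[string]bool
	push        func(node string) error
}

func (u *statusUpdater) updateNodeStatuses() {
	for node, needed := range u.needsUpdate {
		if !needed {
			continue
		}
		u.needsUpdate[node] = false
		if err := u.push(node); err != nil {
			// Update failed: mark the node so the volume list is retried next time.
			u.needsUpdate[node] = true
			fmt.Println("update failed for", node, "- will retry:", err)
		}
	}
}

func main() {
	u := &statusUpdater{
		needsUpdate: map[string]bool{"node-1": true},
		push:        func(string) error { return errors.New("apiserver unavailable") },
	}
	u.updateNodeStatuses()
	fmt.Println("node-1 still needs update:", u.needsUpdate["node-1"])
}
```
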
Łukasz Oleś
0d7371fdfe Emit event when scheduling daemon fails
Restoring the code which was removed in 528bf7a. When there are insufficient
resources on a node or there is a conflicting host port, an event is emitted.
It helps with debugging.

Fixes #31369
2016-10-24 22:54:31 +02:00
Mike Danese
3b6a067afc autogenerated 2016-10-21 17:32:32 -07:00
Kubernetes Submit Queue
a7807eb5a4 Merge pull request #34138 from ingvagabund/create-restclient-interface
Automatic merge from submit-queue

Create restclient interface

Refactoring of code to allow replacing *restclient.RESTClient with any RESTClient implementation that implements the restclient.RESTClientInterface interface.
2016-10-21 16:02:04 -07:00
deads2k
a0a0e61f62 add generic shared informer backed by existing informer 2016-10-21 08:39:09 -04:00
Jan Chaloupka
6079053407 Update clientset generator to use RESTClient interface instead of the RESTClient data type 2016-10-21 10:13:51 +02:00
Kubernetes Submit Queue
71471c2330 Merge pull request #34740 from derekwaynecarr/storage-class-informer
Automatic merge from submit-queue

Add an informer for StorageClass

Add an informer for `StorageClass` for later consumption in quota.

/cc @eparis @deads2k @erinboyd
2016-10-20 23:36:58 -07:00
Kubernetes Submit Queue
6306ab00d2 Merge pull request #34845 from derekwaynecarr/make_pvc_informer_list
Automatic merge from submit-queue

PVC informer lister supports listing

This will be used in follow-on PRs for quota evaluation backed by informers for pvcs.

/cc @deads2k @eparis @kubernetes/sig-storage
2016-10-19 19:29:02 -07:00
Kubernetes Submit Queue
91b7e1f9c3 Merge pull request #34638 from screeley44/k8-get-sc
Automatic merge from submit-queue

Adding default StorageClass annotation printout for resource_printer and describer and some refactoring

adding ISDEFAULT for _kubectl get storageclass_ output

```
[root@screeley-sc1 gce]# kubectl get storageclass
NAME            TYPE                   ISDEFAULT
another-class   kubernetes.io/gce-pd   NO        
generic1-slow   kubernetes.io/gce-pd   YES       
generic2-fast   kubernetes.io/gce-pd   YES       
```

```release-note
Add ISDEFAULT to kubectl get storageClass output
```

@kubernetes/sig-storage
2016-10-19 11:36:08 -07:00
derekwaynecarr
39da9d19f0 Add an informer for StorageClass 2016-10-19 12:26:57 -04:00
Scott Creeley
86f1a94be5 Adding default StorageClass annotation printout for resource_printer 2016-10-19 10:59:07 -04:00
Kubernetes Submit Queue
29af9853fe Merge pull request #35047 from deads2k/controller-11-rs-flakes
Automatic merge from submit-queue

fix more RS controller flakes

I saw another flake:

```
panic: Fail in goroutine after TestUpdatePods has completed
/usr/local/go/src/runtime/panic.go:500 +0x1ae
/go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/pkg/util/runtime/runtime.go:56 +0x17d
/usr/local/go/src/runtime/panic.go:458 +0x271
/usr/local/go/src/testing/testing.go:412 +0x182
/usr/local/go/src/testing/testing.go:484 +0x95
/go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/pkg/controller/replicaset/replica_set_test.go:619 +0x1d2
/go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/pkg/controller/replicaset/replica_set.go:414 +0x191
/go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/pkg/controller/replicaset/replica_set.go:403 +0x39
/go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/pkg/controller/replicaset/replica_set.go:169 +0x42
/go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/pkg/util/wait/wait.go:87 +0x70
/go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/pkg/util/wait/wait.go:88 +0xbe
/go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/pkg/util/wait/wait.go:49 +0x5b
/go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/pkg/controller/replicaset/replica_set_test.go:625 +0x369
```

This resolves that by separating the listers from the watch, as it used to be in this set of tests.  The tests were like this before the refactor.  I think they are of limited utility, but I'm not prepared to re-write them all.

@kargakis
2016-10-18 19:52:48 -07:00
Kubernetes Submit Queue
a31482207f Merge pull request #34960 from sdminonne/serviceaccount_informer
Automatic merge from submit-queue

To add service account informer

@deads2k  this PR adds a ServiceAccount informer as discussed [here](https://github.com/openshift/origin/pull/11330#issuecomment-253216303)
2016-10-18 16:21:41 -07:00