Automatic merge from submit-queue
Use available informers in quota replenishment
More iteration on the goal of using informers where available in the quota system, this time adding persistent volume claims so that the same informer is used here and in https://github.com/kubernetes/kubernetes/pull/36316
Currently, the HPA considers unready pods the same as ready pods when
looking at their CPU and custom metric usage. However, pods frequently
use extra CPU during initialization, so we want to consider them
separately.
This commit causes the HPA to consider unready pods as having 0 CPU
usage when scaling up, and to ignore them when scaling down. If, when
scaling up, factoring in the unready pods at 0 CPU would cause a
downscale instead, we simply choose not to scale. Otherwise, we
scale up by the reduced amount calculated by factoring the pods in at
zero CPU usage.
The effect is that unready pods cause the autoscaler to be a bit more
conservative -- large increases in CPU usage can still cause scales,
even with unready pods in the mix, but will not cause the scale factors
to be as large, in anticipation of the new pods later becoming ready and
handling load.
Similarly, if there are pods for which no metrics have been retrieved,
these pods are treated as having 100% of the requested metric when
scaling down, and 0% when scaling up. As above, this cannot change the
direction of the scale.
This commit also changes the HPA to ignore superfluous metrics -- as
long as metrics for all ready pods are present, the HPA can make scaling
decisions. Currently, this only works for CPU. For custom metrics, we
cannot identify which metrics belong to which pods if we get superfluous
metrics, so we abort the scale.
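For illustration, a minimal sketch of this rule, assuming `readyUsage` is the total CPU usage of ready pods and `target` is the per-pod target; names and signatures are hypothetical, not the actual HPA code:
```go
package hpa

import "math"

// desiredReplicas sketches the conservative scaling rule described above:
// unready pods are ignored when the ready pods alone suggest a scale-down,
// and are counted at zero usage when they suggest a scale-up. If counting
// them at zero would flip the direction, we do not scale at all.
func desiredReplicas(readyUsage, target float64, numReady, numUnready, current int) int {
	// Usage ratio computed from ready pods only.
	readyRatio := readyUsage / (target * float64(numReady))
	if readyRatio <= 1.0 {
		// Scale down (or hold): unready pods are ignored entirely.
		return int(math.Ceil(readyRatio * float64(current)))
	}
	// Scale up: recompute with unready pods counted at zero usage.
	conservative := readyUsage / (target * float64(numReady+numUnready))
	if conservative <= 1.0 {
		return current // zero-usage pods would flip this into a downscale
	}
	return int(math.Ceil(conservative * float64(current)))
}
```
A large spike in usage still scales up in this sketch, but by a smaller factor, matching the conservative behavior described above.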
Automatic merge from submit-queue
Rename ScheduledJobs to CronJobs
I went with @smarterclayton's idea of registering named types in the schema. This way we can support both the new (CronJobs) and old (ScheduledJobs) resource names. Fixes #32150.
fyi @erictune @caesarxuchao @janetkuo
Not ready yet, but getting close there...
**Release note**:
```release-note
Rename ScheduledJobs to CronJobs.
```
Automatic merge from submit-queue
Add more events to disruption controller
To provide users with information that their PDB may not be working as intended.
cc: @davidopp
Automatic merge from submit-queue
Fix possible race in operationNotSupportedCache
Because we can run multiple workers to delete namespaces simultaneously, the
operationNotSupportedCache needs to be guarded with a mutex to avoid concurrent
map read/write errors.
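A minimal sketch of the kind of guard this describes, with illustrative names rather than the actual cache type:
```go
package namespacecontroller

import "sync"

// notSupportedCache sketches a map guarded by a mutex so that multiple
// namespace-deletion workers can read and write it concurrently without
// triggering Go's concurrent map read/write fault.
type notSupportedCache struct {
	lock sync.Mutex
	m    map[string]bool
}

func (c *notSupportedCache) isSupported(op string) bool {
	c.lock.Lock()
	defer c.lock.Unlock()
	return !c.m[op]
}

func (c *notSupportedCache) markNotSupported(op string) {
	c.lock.Lock()
	defer c.lock.Unlock()
	c.m[op] = true
}
```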
Automatic merge from submit-queue
lister-gen updates
- Remove "zz_generated." prefix from generated lister file names
- Add support for expansion interfaces
- Switch to new generated JobLister
@deads2k @liggitt @sttts @mikedanese @caesarxuchao for the lister-gen changes
@soltysh @deads2k for the informer / job controller changes
Automatic merge from submit-queue
Remove GetRootContext method from VolumeHost interface
Remove the `GetRootContext` call from the `VolumeHost` interface, since Kubernetes no longer needs to know the SELinux context of the Kubelet directory.
Per #33951 and #35127.
Depends on #33663; only the last commit is relevant to this PR.
Automatic merge from submit-queue
Update how we detect overlapping deployments
**Which issue this PR fixes** *(optional, in `fixes #<issue number>(, #<issue_number>, ...)` format, will close that issue when PR gets merged)*: fixes #24152
**Special notes for your reviewer**: cc @kubernetes/deployment
**Release note**:
```release-note
NONE
```
When looking for overlapping deployments, we should also find other deployments that select the current deployment's pods,
not just the ones whose pods are selected by the current deployment.
Automatic merge from submit-queue
Controller changes for perma failed deployments
This PR adds support for reporting failed deployments based on a timeout
parameter defined in the spec. If there is no progress for the amount
of time defined as progressDeadlineSeconds then the deployment will be
marked as failed by a Progressing condition with a ProgressDeadlineExceeded
reason.
Follow-up to https://github.com/kubernetes/kubernetes/pull/19343
Docs at kubernetes/kubernetes.github.io#1337
Fixes https://github.com/kubernetes/kubernetes/issues/14519
@kubernetes/deployment @smarterclayton
This commit adds support for failing deployments based on a timeout
parameter defined in the spec. If there is no progress for the amount
of time defined as progressDeadlineSeconds then the deployment will be
marked as failed by adding a condition with a ProgressDeadlineExceeded
reason in it. Progress in the context of a deployment means the creation
or adoption of a new replica set, scaling up new pods, and scaling down
old pods.
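As a rough illustration of the deadline check (a hypothetical helper; the real controller tracks this through the Progressing condition's last update time):
```go
package deployment

import "time"

// deadlineExceeded sketches the check described above: if the deployment has
// made no progress since lastProgressTime for longer than
// progressDeadlineSeconds, it should be marked failed with the
// ProgressDeadlineExceeded reason.
func deadlineExceeded(lastProgressTime time.Time, progressDeadlineSeconds int32, now time.Time) bool {
	deadline := lastProgressTime.Add(time.Duration(progressDeadlineSeconds) * time.Second)
	return now.After(deadline)
}
```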
Automatic merge from submit-queue
Fix how we iterate over active jobs when removing them for Replace policy
When fixing the Replace active-job removal I used the wrong for-loop construct, which panics :/ This PR fixes that by using for range.
@janetkuo ptal
@jessfraz this will also be a cherry-pick candidate for 1.4, I remember we've picked the aforementioned fix as well
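For illustration, a simplified sketch of the fixed shape, assuming the removal also drops the job's reference from the active list (the real code works on Job objects):
```go
package cronjob

// deleteFromActive stands in for "delete this job and drop its reference
// from .status.active"; it shrinks the list it is given.
func deleteFromActive(active []string, job string) []string {
	out := active[:0]
	for _, a := range active {
		if a != job {
			out = append(out, a)
		}
	}
	return out
}

// replaceAll shows the fixed construct: range over a snapshot taken before
// any removal, so the loop never indexes a list that is shrinking underneath
// it, which is how the original panic happened.
func replaceAll(active []string) []string {
	snapshot := append([]string(nil), active...)
	for _, job := range snapshot {
		active = deleteFromActive(active, job)
	}
	return active
}
```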
Automatic merge from submit-queue
Set reason and message on Pod during nodecontroller eviction
**What this PR does / why we need it**: Pods which are evicted by the nodecontroller due to network partition, or unresponsive kubelet should be differentiated from termination initiated by other sources. The reason/message are consumed by kubectl to provide a better summary using get/describe.
**Which issue this PR fixes** *(optional, in `fixes #<issue number>(, #<issue_number>, ...)` format, will close that issue when PR gets merged)*: fixes #35725
**Release note**:
```release-note
Pods that are terminating due to eviction by the nodecontroller (typically due to unresponsive kubelet, or network partition) now surface in `kubectl get` output
as being in state "Unknown", along with a longer description in `kubectl describe` output.
```
Pods which are evicted by the nodecontroller due to network
malfunction, or unresponsive kubelet should be differentiated
from termination initiated by other sources. The reason/message
are consumed by kubectl to provide a better summary using get/describe.
Automatic merge from submit-queue
Making the pod.alpha.kubernetes.io/initialized annotation optional in PetSet pods
**What this PR does / why we need it**: As of now, the absence of the annotation `pod.alpha.kubernetes.io/initialized` in PetSets causes the PetSet controller to effectively "pause". Being a debug hook, users expect that its absence has no effect on the working of a PetSet. This PR inverts the logic so that we let the PetSet controller operate as expected in the absence of the annotation.
Letting the annotation remain alpha seems ok. Renaming it to something more meaningful needs further discussion.
**Which issue this PR fixes** _(optional, in `fixes #<issue number>(, #<issue_number>, ...)` format, will close that issue when PR gets merged)_: fixes https://github.com/kubernetes/kubernetes/issues/35498
**Release note**:
``` release-note
The annotation "pod.alpha.kubernetes.io/initialized" on StatefulSets (formerly PetSets) is now optional and only encouraged for debug use.
```
cc @erictune @smarterclayton @bprashanth @kubernetes/sig-apps
@kow3ns The examples will need to be cleaned up as well I think later on to remove them.
Automatic merge from submit-queue
Node controller to not force delete pods
Fixes https://github.com/kubernetes/kubernetes/issues/35145
- [x] e2e tests to test Petset, RC, Job.
- [x] Remove and cover other locations where we force-delete pods within the NodeController.
**Release note**:
``` release-note
Node controller no longer force-deletes pods from the api-server.
* For StatefulSet (previously PetSet), this change means creation of replacement pods is blocked until old pods are definitely not running (indicated either by the kubelet returning from partitioned state, or deletion of the Node object, or deletion of the instance in the cloud provider, or force deletion of the pod from the api-server). This has the desirable outcome of "fencing" to prevent "split brain" scenarios.
* For all other existing controllers except StatefulSet, this has no effect on the ability of the controller to replace pods because the controllers do not reuse pod names (they use generate-name).
* User-written controllers that reuse names of pod objects should evaluate this change.
```
Automatic merge from submit-queue
Add tooling to generate listers
Add lister-gen tool to auto-generate listers. So far this PR only demonstrates replacing the manually-written `StoreToLimitRangeLister` with the generated `LimitRangeLister`, as it's a small and easy swap.
cc @deads2k @liggitt @sttts @nikhiljindal @lavalamp @smarterclayton @derekwaynecarr @kubernetes/sig-api-machinery @kubernetes/rh-cluster-infra
Automatic merge from submit-queue
quota controller uses informers if available for pod calculation
This PR does the following:
1. plumb informer factory into quota registry and evaluators
2. pod quota evaluator uses informers for determining aggregate usage instead of making direct calls
3. admission code path does not use informers because
1. we do not want to add new watches in apiserver
2. admission code path does not require aggregate usage calculation
As a result, quota controller is much faster in re-calculating quota usage when it observes a pod deletion.
Follow-on PRs will make similar changes for other informer backed resources (pvcs next).
/cc @deads2k @mfojtik @smarterclayton @kubernetes/rh-cluster-infra
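The evaluator-side change boils down to reading from the informer's local cache instead of issuing a LIST. A self-contained sketch with hypothetical types (the real evaluator sums resource requests from the shared informer's lister):
```go
package quota

import "fmt"

// PodLister is a stand-in for a shared-informer-backed lister: reads come
// from the local cache, not from the API server.
type PodLister interface {
	Pods(namespace string) ([]Pod, error)
}

// Pod is a minimal illustrative type carrying only what the sketch needs.
type Pod struct {
	Name     string
	CPUMilli int64
}

// aggregateCPU computes aggregate CPU usage for a namespace from the cache,
// which is why recalculation after a pod deletion is so much faster.
func aggregateCPU(lister PodLister, namespace string) (int64, error) {
	pods, err := lister.Pods(namespace)
	if err != nil {
		return 0, fmt.Errorf("listing pods in %s from cache: %v", namespace, err)
	}
	var total int64
	for _, p := range pods {
		total += p.CPUMilli
	}
	return total, nil
}
```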
Automatic merge from submit-queue
Remove Job also from .status.active for Replace strategy
When iterating over the list of Jobs, we remove each of them when the strategy is Replace. Unfortunately, the Job reference was not removed from `.status.active`, which caused the controller to try to remove it again on the next run and fail, since it had already been removed during the previous run. This PR fixes that and also cleans up the logging a bit in that controller.
@erictune fyi
@janetkuo ptal
Automatic merge from submit-queue
Let release_1_5 clientset include multiple versions of a group
Fix #35237
This PR makes the versioned clientset include multiple versions of a group. Currently only `batch` has `v1` and `v2alpha1`. The clientset interface now looks like:
```go
BatchV2alpha1() v2alpha1batch.BatchV2alpha1Interface
BatchV1() v1batch.BatchV1Interface
// Deprecated: please explicitly pick a version if possible.
Batch() v1batch.BatchV1Interface
```
Commit "update client-gen to say internalversion rather than unversioned" fixes https://github.com/kubernetes/kubernetes/issues/24481.
cc @kubernetes/sig-api-machinery @soltysh @deads2k @nikhiljindal
```release-note
release_1_5 clientset supports multiple versions of a group.
```
Automatic merge from submit-queue
Make overlapping deployments deletable
@kubernetes/deployment ptal
Fixes https://github.com/kubernetes/kubernetes/issues/34466 by 1) not adding the overlapping annotation to the working deployment, 2) updating observedGeneration for overlapping deployments, and 3) updating the kubectl deployment reaper to do non-cascading deletion for deployments with the overlapping annotation.
Automatic merge from submit-queue
convert SA controller to shared informers
convert the SA controller to shared informer + workqueue.
I think one of @derekwaynecarr, @ncdc, or @liggitt can review.
Automatic merge from submit-queue
Simplify negotiation in server in preparation for multi version support
This is a pre-factor for #33900 to simplify runtime.NegotiatedSerializer, tighten up a few abstractions that may break when clients can request different client versions, and pave the way for better negotiation.
View this as pure simplification.
Automatic merge from submit-queue
Add sync state loop in master's volume reconciler
In the master's volume reconciler, the information about which volumes are
attached to nodes is cached in the actual state of the world. However, this
information might be out of date if a node is terminated (its volumes are
detached automatically). In this situation, the reconciler assumes the volume
is still attached and will not issue an attach operation when the node comes
back, so pods created on those nodes will fail to mount.
This PR adds logic to periodically sync the attached volumes kept in the
actual state cache against the truth. If a volume is no longer attached to
the node, the actual state is updated to reflect that; in turn, the
reconciler takes action as needed.
To avoid issuing many concurrent operations against the cloud provider, this
PR adds a batch operation that checks whether a list of volumes is attached
to a node, instead of issuing one request per volume.
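A sketch of what such a batch check might look like, with hypothetical interface and function names:
```go
package attachdetach

import "fmt"

// BulkAttachChecker sketches the batch operation: one cloud-provider call
// verifies a whole list of volumes for a node, instead of one call per volume.
type BulkAttachChecker interface {
	VolumesAreAttached(volumeIDs []string, nodeName string) (map[string]bool, error)
}

// pruneDetached updates the cached actual state for any volume the provider
// no longer reports as attached; the reconciler reacts on its next loop.
func pruneDetached(checker BulkAttachChecker, nodeName string, cached []string, markDetached func(volumeID string)) error {
	attached, err := checker.VolumesAreAttached(cached, nodeName)
	if err != nil {
		return fmt.Errorf("bulk attach check for node %s failed: %v", nodeName, err)
	}
	for _, id := range cached {
		if !attached[id] {
			markDetached(id)
		}
	}
	return nil
}
```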
Automatic merge from submit-queue
Moving some force deletion logic from the NC into the PodGC
**What this PR does / why we need it**: Moves some pod force-deletion behavior into the PodGC, which is a better place for these.
This should be a NOP because we're just moving functionality around and, thanks to #35476, the podGC controller should always run.
Related: https://github.com/kubernetes/kubernetes/pull/34160, https://github.com/kubernetes/kubernetes/issues/35145
cc @gmarek @kubernetes/sig-apps
More details about the volume reconciler sync loop are explained in PR #33760.
Alter how runtime.SerializeInfo is represented to simplify negotiation
and reduce the need to allocate during negotiation. Simplify the dynamic
client's logic around negotiating type. Add more tests for media type
handling where necessary.
Automatic merge from submit-queue
Fix potential panic in namespace controller when rapidly create/delet…
Fixes https://github.com/kubernetes/kubernetes/issues/33676
The theory is this could occur in either of the following scenarios:
1. an HA environment where a GET went to a different API server than the one the WATCH was read from
1. a many-controller scenario (i.e. where multiple finalizers participate), in which a namespace created and deleted with the same name could trip up the other namespace controller into seeing a namespace with the same name that was not actually in a delete state. Added checks to verify the uid matches across retry operations.
/cc @liggitt @kubernetes/rh-cluster-infra
Automatic merge from submit-queue
Node status updater should SetNodeStatusUpdateNeeded if it fails to update status
When the volume controller tries to update the node status and fails, it should call SetNodeStatusUpdateNeeded so that the volume list can be updated next time.
Objects from a shared informer must not be changed; they are shared among all controllers.
This fixes a CacheMutationDetector panic with the following output:
```
CACHE *api.Node[5] ALTERED!
{"metadata":{"name":"ip-172-18-8-71.ec2.internal","selfLink":"/api/v1/nodes/ip-172-18-8-71.ec2.internal","uid":"73d07d16-976e-11e6-8225-0e2f14b56070","resourceVersion":"136","creationTimestamp":"2016-10-21T09:12:12Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/instance-type":"t2.medium","beta.kubernetes.io/os":"linux","failure-domain.beta.kubernetes.io/region":"us-east-1","failure-domain.beta.kubernetes.io/zone":"us-east-1d","kubernetes.io/hostname":"ip-172-18-8-71.ec2.internal"},"annotations":{"volumes.kubernetes.io/controller-managed-attach-detach":"true"}},"spec":{"externalID":"i-9cb6180f","providerID":"aws:///us-east-1d/i-9cb6180f"},"status":{"capacity":{"alpha.kubernetes.io/nvidia-gpu":"0","cpu":"2","memory":"4045568Ki","pods":"110"},"allocatable":{"alpha.kubernetes.io/nvidia-gpu":"0","cpu":"2","memory":"4045568Ki","pods":"110"},"conditions":[{"type":"OutOfDisk","status":"False","lastHeartbeatTime":"2016-10-21T09:12:52Z","lastTransitionTime":"2016-10-21T09:12:12Z","reason":"KubeletHasSufficientDisk","message":"kubelet has sufficient disk space available"},{"type":"MemoryPressure","status":"False","lastHeartbeatTime":"2016-10-21T09:12:52Z","lastTransitionTime":"2016-10-21T09:12:12Z","reason":"KubeletHasSufficientMemory","message":"kubelet has sufficient memory available"},{"type":"DiskPressure","status":"False","lastHeartbeatTime":"2016-10-21T09:12:52Z","lastTransitionTime":"2016-10-21T09:12:12Z","reason":"KubeletHasNoDiskPressure","message":"kubelet has no disk pressure"},{"type":"InodePressure","status":"False","lastHeartbeatTime":"2016-10-21T09:12:52Z","lastTransitionTime":"2016-10-21T09:12:12Z","reason":"KubeletHasNoInodePressure","message":"kubelet has no inode pressure"},{"type":"Ready","status":"True","lastHeartbeatTime":"2016-10-21T09:12:52Z","lastTransitionTime":"2016-10-21T09:12:22Z","reason":"KubeletReady","message":"kubelet is posting ready status"}],"addresses":[{"type":"InternalIP","address":"172.18.8.71"},{"type":"LegacyHostIP","address":"172.18.8.71"},{"type":"ExternalIP","address":"54.85.104.236"}],"daemonEndpoints":{"kubeletEndpoint":{"Port":10250}},"nodeInfo":{"machineID":"78a79498db8e4fdc9ac24b5e436a982c","systemUUID":"EC2BB406-5467-4ABE-B54D-D9993C45714F","bootID":"2553d6b8-1ddb-4ef0-902a-d09a807b89ba","kernelVersion":"4.6.7-300.fc24.x86_64","osImage":"Fedora 24 (Cloud Edition)","containerRuntimeVersion":"docker://1.10.3","kubeletVersion":"v1.5.0-alpha.1.726+5aac5eddb809e4","kubeProxyVersion":"v1.5.0-alpha.1.726+5aac5eddb809e4","operatingSystem":"linux","architecture":"amd64"},"images":[{"names":["openshift/origin-release:latest"],"sizeBytes":714569002},{"names":["openshift/origin-haproxy-router-base:latest"],"sizeBytes":294417608},{"names":["openshift/origin-base:latest"],"sizeBytes":275310761},{"names":["docker.io/centos@sha256:2ae0d2c881c7123870114fb9cc7afabd1e31f9888dac8286884f6cf59373ed9b","docker.io/centos:centos7"],"sizeBytes":196744353},{"names":["gcr.io/google_containers/busybox@sha256:4bdd623e848417d96127e16037743f0cd8b528c026e9175e22a84f639eca58ff","gcr.io/google_containers/busybox:1.24"],"sizeBytes":1113554},{"names":["gcr.io/google_containers/pause-amd64@sha256:163ac025575b775d1c0f9bf0bdd0f086883171eb475b5068e7defa4ca9e76516","gcr.io/google_containers/pause-amd64:3.0"],"sizeBytes":746888}],"volumesInUse":["kubernetes.io/aws-ebs/aws://us-east-1d/vol-f4bd0352"]
A: ,"volumesAttached":[{"name":"kubernetes.io/aws-ebs/aws://us-east-1d/vol-f4bd0352","devicePath":"/dev/xvdba"}]}}
B: }}
```
Restores code that was removed in 528bf7a. When there are insufficient
resources on a node, or there is a conflicting host port, an event is emitted.
This helps with debugging.
Fixes #31369
Automatic merge from submit-queue
Create restclient interface
Refactoring to allow replacing *restclient.RESTClient with any RESTClient implementation that implements the restclient.RESTClientInterface interface.
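The pattern, sketched with simplified placeholder signatures (the real restclient.RESTClientInterface mirrors *RESTClient's method set):
```go
package example

// RESTClientLike is a simplified placeholder for the extracted interface.
type RESTClientLike interface {
	Get(path string) ([]byte, error)
}

// controller depends on the interface, not the concrete client, so tests can
// substitute a fake without any network traffic.
type controller struct {
	client RESTClientLike // was: *restclient.RESTClient
}

// fakeClient satisfies the interface for tests.
type fakeClient struct{ response []byte }

func (f *fakeClient) Get(path string) ([]byte, error) { return f.response, nil }
```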
Automatic merge from submit-queue
Add an informer for StorageClass
Add an informer for `StorageClass` for later consumption in quota.
/cc @eparis @deads2k @erinboyd
Automatic merge from submit-queue
PVC informer lister supports listing
This will be used in follow-on PRs for quota evaluation backed by informers for pvcs.
/cc @deads2k @eparis @kubernetes/sig-storage
Automatic merge from submit-queue
Adding default StorageClass annotation printout for resource_printer and describer and some refactoring
adding an ISDEFAULT column to _kubectl get storageclass_ output
```
[root@screeley-sc1 gce]# kubectl get storageclass
NAME            TYPE                   ISDEFAULT
another-class   kubernetes.io/gce-pd   NO
generic1-slow   kubernetes.io/gce-pd   YES
generic2-fast   kubernetes.io/gce-pd   YES
```
```release-note
Add ISDEFAULT to kubectl get storageClass output
```
@kubernetes/sig-storage
Automatic merge from submit-queue
fix more RS controller flakes
I saw another flake:
```
panic: Fail in goroutine after TestUpdatePods has completed
/usr/local/go/src/runtime/panic.go:500 +0x1ae
/go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/pkg/util/runtime/runtime.go:56 +0x17d
/usr/local/go/src/runtime/panic.go:458 +0x271
/usr/local/go/src/testing/testing.go:412 +0x182
/usr/local/go/src/testing/testing.go:484 +0x95
/go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/pkg/controller/replicaset/replica_set_test.go:619 +0x1d2
/go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/pkg/controller/replicaset/replica_set.go:414 +0x191
/go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/pkg/controller/replicaset/replica_set.go:403 +0x39
/go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/pkg/controller/replicaset/replica_set.go:169 +0x42
/go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/pkg/util/wait/wait.go:87 +0x70
/go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/pkg/util/wait/wait.go:88 +0xbe
/go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/pkg/util/wait/wait.go:49 +0x5b
/go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/pkg/controller/replicaset/replica_set_test.go:625 +0x369
```
This resolves that by separating the listers from the watch, the way these tests were structured before the refactor. I think the tests have limited utility, but I'm not prepared to rewrite them all.
@kargakis
Automatic merge from submit-queue
+optional tag for OpenAPI spec
OpenAPI relies on the "omitempty" json tag to determine whether a field is optional. This change adds the "+optional" tag to all fields that have an "omitempty" json tag, and supports the tag in the OpenAPI spec generator.
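For example, the pairing looks like this on an API type (the field itself is illustrative):
```go
package example

// ExampleSpec shows the convention: an optional API field carries both the
// `omitempty` JSON tag and the +optional comment tag that the OpenAPI spec
// generator understands.
type ExampleSpec struct {
	// replicas is the desired number of replicas; nil means unspecified.
	// +optional
	Replicas *int32 `json:"replicas,omitempty"`
}
```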
Automatic merge from submit-queue
controller: set minReadySeconds in deployment's replica sets
* Estimate available pods for a deployment by using minReadySeconds on
the replica set.
* Stop requeueing deployments on pod events, superseded by following the
replica set status.
* Cleanup redundant deployment utilities
Fixes https://github.com/kubernetes/kubernetes/issues/26079
@kubernetes/deployment ptal
Automatic merge from submit-queue
Pass whole PVC to provisioner plugins
The Gluster provisioner is interested in the namespace of PVCs that are being provisioned, and I don't want to add it as a new field in `volume.VolumeOptions` - it would contain almost the whole PVC.
Let's rework `VolumeOptions` to pass a direct reference to the PVC instead of selected "interesting" fields, and let the provisioner pick the information it is interested in.
There was a lot of refactoring in the volume plugins to apply this change (too many plugins), but the logic is simple and the same in all plugins.
@rootfs @humblec
Automatic merge from submit-queue
NodeController waits for informer sync before doing anything
cc @lavalamp @davidopp
```release-note
NodeController waits for a full sync of all its informers before taking any action.
```
Automatic merge from submit-queue
Run rbac authorizer from cache
RBAC authorization can be run very effectively out of a cache. The cache is a normal reflector backed cache (shared informer).
I've split this into three parts:
1. slim down the authorizer interfaces
1. boilerplate for adding rbac shared informers and associated listers which conform to the new interfaces
1. wiring
@liggitt @ericchiang @kubernetes/sig-auth
Automatic merge from submit-queue
Handle DeletedFinalStateUnknown in NodeController
Fix #34692
```release-note
Fix panic in NodeController caused by receiving DeletedFinalStateUnknown object from the cache.
```
cc @davidopp
Automatic merge from submit-queue
Revert "Error out when any RS has more available pods then its spec r…
Reverts https://github.com/kubernetes/kubernetes/pull/29808
The PR is wrong because we can have more available pods than desired every time we scale down.
@kubernetes/deployment ptal
Automatic merge from submit-queue
Fix misleading comment
**What this PR does / why we need it**: It just fixes a misleading comment. It took me some time to figure out the real behavior.
The Gluster provisioner is interested in pvc.Namespace and I don't want to add
it as a new field in VolumeOptions - it would contain almost the whole PVC.
Let's pass a direct reference to the PVC instead and let the provisioner pick
the information it is interested in.
Automatic merge from submit-queue
Copy finalizers from template spec to pod.
**What this PR does / why we need it**: The PodTemplateSpec has a finalizers field whose contents are not copied over to a pod during creation.
Automatic merge from submit-queue
convert deployment controller to shared informers
Converts the deployment controller to shared informers.
@kargakis I think you've been in here. Pretty straight forward swap.
Fixes #27687
Automatic merge from submit-queue
controller: save older revisions for Deployment's replica sets
@jwforres the only usable way I could find to keep multiple old revisions for a single replica set is to stuff them into the annotation as comma-separated values.
@kubernetes/deployment this retains old revisions served by a replica set inside an annotation.
Fixes https://github.com/kubernetes/kubernetes/issues/33844
Automatic merge from submit-queue
update deployment and replicaset listers
Updates the deployment lister to avoid copies and updates the deployment controller to use shared informers.
Pushing WIP to see which tests are broken.
Automatic merge from submit-queue
Delete evicted pet
If a pet was evicted by the kubelet, it will be stuck in this state forever.
By analogy to a regular pod, we need to re-create the pet so that it will
be re-scheduled to another node; in order to re-create the pet
and preserve consistent naming, we delete it in the petset controller
and create it after that.
fixes: https://github.com/kubernetes/kubernetes/issues/31098
Automatic merge from submit-queue
Fix issue in updating device path when volume is attached multiple times
When a volume is attached, it is possible that the actual state
already has this volume object (e.g., the volume is attached to multiple
nodes, or the volume was detached and attached again). We need to update the
device path in such a situation; otherwise, the device path would be stale
information and cause the kubelet to mount the wrong device.
This PR partially fixes issue #29324
Automatic merge from submit-queue
PetSet replica count status test
**What this PR does / why we need it**: It adds a test for PetSet status replica count. It should fail now, but will pass when https://github.com/kubernetes/kubernetes/pull/32117 is merged.
**Which issue this PR fixes** *(optional, in `fixes #<issue number>(, #<issue_number>, ...)` format, will close that issue when PR gets merged)*: fixes #31965
**Special notes for your reviewer**: It will need to be rebased after #32117 is merged in, don't need detailed review before that.
**Release note**:
```release-note
NONE
```
Added fakeKubeClient and other fake types needed to test what is sent to
API when replica count is updated. These fakes can be extended for
other tests.
Automatic merge from submit-queue
Fix TestCreateWithNonExistentOwner
Fix #30228
As https://github.com/kubernetes/kubernetes/issues/30228#issuecomment-248779567 described, the GC did delete the garbage; it was the test logic that failed.
The test used to rely on `gc.QueuesDrained()`, which could return before the GC finished processing. That seems to be the only possible cause of the test failure. Hence, this PR changes the test to poll for the deletion of the garbage.
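A self-contained sketch of polling for deletion (the `get` closure is a hypothetical stand-in for fetching the garbage object):
```go
package gctest

import (
	"errors"
	"time"
)

// errNotFound stands in for the API server's "not found" error.
var errNotFound = errors.New("not found")

// waitForDeletion polls until the object is actually gone, instead of
// trusting that drained queues imply the GC has finished processing.
func waitForDeletion(get func() error, interval, timeout time.Duration) error {
	deadline := time.Now().Add(timeout)
	for time.Now().Before(deadline) {
		if err := get(); errors.Is(err, errNotFound) {
			return nil // deleted: the GC finished its work
		}
		time.Sleep(interval)
	}
	return errors.New("timed out waiting for garbage to be deleted")
}
```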
Automatic merge from submit-queue
MinReadySeconds / AvailableReplicas for ReplicaSets
This PR adds minReadySeconds and availableReplicas for replica sets / replication controllers
Partially addresses https://github.com/kubernetes/kubernetes/issues/28381
cc: @mfojtik
@bgrant0607 for the api changes, @janetkuo for controller changes
Automatic merge from submit-queue
PetSet returns valid replica count in status
**What this PR does / why we need it**: It keeps the PetSet replica count in status valid even when pods fail to be created.
**Which issue this PR fixes** *(optional, in `fixes #<issue number>(, #<issue_number>, ...)` format, will close that issue when PR gets merged)*: fixes #31965
**Release note**:
```release-note
```
Automatic merge from submit-queue
Use strongly-typed types.NodeName for a node name
We had another bug where we confused the hostname with the NodeName.
Also, if we want to use different values for the Node.Name (which is
an important step for making installation easier), we need to keep
better control over this.
A tedious but mechanical commit therefore, to change all uses of the
node name to use types.NodeName
Automatic merge from submit-queue
Refactor volume controller parameters into a structure
`persistentvolumecontroller.NewPersistentVolumeController` has 11 arguments now,
put them into a structure.
Also, rename `NewPersistentVolumeController` to `NewController`, `persistentvolume`
is already name of the package.
Fixes #30219
We had another bug where we confused the hostname with the NodeName.
To avoid this happening again, and to make the code more
self-documenting, we use types.NodeName (a typedef alias for string)
whenever we are referring to the Node.Name.
A tedious but mechanical commit therefore, to change all uses of the
node name to use types.NodeName
Also clean up some of the (many) places where the NodeName is referred
to as a hostname (not true on AWS), or an instanceID (not true on GCE),
etc.
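The mechanism, sketched (`types.NodeName` is the real type; the `describeNode` signature is hypothetical):
```go
package types

// NodeName is a distinct named type over string; the compiler now catches
// accidental mixing of hostnames, instance IDs, and node names at call sites.
type NodeName string

// describeNode shows the effect: callers must convert explicitly, which
// makes any hostname/NodeName confusion visible in review.
func describeNode(name NodeName) string {
	return "node: " + string(name) // explicit conversion where a string is needed
}
```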
Automatic merge from submit-queue
controller: don't retry deployments with overlapping selectors
Returning an error will cause the deployment to be requeued. We should
just emit an event for deployments with overlapping selectors and silently
drop them out of the queue. This should be transitioned to a Condition
once we have them.
@kubernetes/deployment ptal
In order to determine whether a node should run its daemon pod, the
DaemonController creates a dummy pod based on the DaemonSet's template and
then uses scheduler predicates (currently GeneralPredicates) to test
whether such a pod can be run by the node. The problem was that the
DaemonController was not setting the Namespace for the dummy pod. This does
not affect the currently used GeneralPredicates, but it could bite later
when namespace-dependent predicates are added to GeneralPredicates or
directly to the DaemonController's node checks (e.g. pod affinity).
Stumbled upon it while working on e2e test for #31136
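The fix amounts to one assignment when building the dummy pod; sketched below with minimal illustrative types:
```go
package daemon

// Minimal types for the sketch.
type PodSpec struct{}
type Pod struct {
	Namespace string
	Spec      PodSpec
}

// newSimulationPod sketches building the dummy pod the DaemonController
// feeds to the scheduler predicates. The fix is the Namespace assignment:
// without it, namespace-dependent predicates such as pod affinity would
// evaluate the pod in the empty namespace.
func newSimulationPod(templateSpec PodSpec, dsNamespace string) Pod {
	return Pod{
		Namespace: dsNamespace, // previously left empty
		Spec:      templateSpec,
	}
}
```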
Automatic merge from submit-queue
Do not report error when deleting an attached volume
Persistent volume controller should not send warning events to a PV and mark the PV as failed when the volume is still attached.
This happens when a user quickly deletes a pod and the associated PVC - the PV detaches slowly, while the PVC is already deleted, and the PV enters the Failed phase.
`Deleter.Deleter` can now return `tryAgainError`, which is sent as INFO to the PV to let the user know we did not forget to delete the PV; however, the PV stays in the Released state. The controller tries again in the next sync (15 seconds by default).
Fixes#31511
Automatic merge from submit-queue
simplify RC listers
Make the RC and SVC listers use the common list functions that more closely match client APIs, are consistent with other listers, and avoid unnecessary copies.
Automatic merge from submit-queue
Allow garbage collection to work against different API prefixes
The GC needs to build clients based only on Resource or Kind. Hoist the
restmapper out of the controller and the clientpool, support a new
ClientForGroupVersionKind and ClientForGroupVersionResource, and use the
appropriate one in both places.
Allows OpenShift to use the GC
Automatic merge from submit-queue
Remove hacks from ScheduledJobs cron spec parsing
Previously the `github.com/robfig/cron` library did not allow passing a cron spec without seconds. The first commit updates the library, which has an additional method ParseStandard that follows the standard cron spec, i.e. minute, hour, day of month, month, day of week.
@janetkuo @erictune as promised in #30227 I've updated the library and now I'm updating it in k8s
Automatic merge from submit-queue
Move HighWaterMark to the top of the struct in order to fix arm, second time
ref: #33117
Sorry for not fixing everything at once, but I seriously wasn't prepared for that quick LGTM 😄, so here's the other half.
@lavalamp
> lgtm, but seriously, this is terrible, we probably have this bug all over. And what if someone embeds the etcdWatcher struct in something else not at the top? We need the compiler to enforce things like this, it just can't be done manually. Can you file or link a golang issue for this?
I totally agree! There isn't currently a way of programmatically detecting this unfortunately.
I guess @davecheney or @minux can explain better to you why it's so hard.
This is noted in https://github.com/kubernetes/kubernetes/blob/master/docs/proposals/multi-platform.md as a corner case indeed.
@pwittrock This should be cherry-picked together with #33117
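The underlying rule, sketched with an illustrative struct (the real one is etcdWatcher): sync/atomic's 64-bit operations require 64-bit alignment, and on 32-bit platforms such as ARM the only field position Go guarantees to be 64-bit aligned is the first word of an allocated struct.
```go
package watchcache

import "sync/atomic"

// watcher illustrates the layout rule behind this fix: the int64 operated on
// atomically must stay the first field so it is 64-bit aligned on 32-bit ARM.
type watcher struct {
	highWaterMark int64 // must remain first for atomic access on 32-bit platforms
	name          string
	buffered      int32
}

// observe records a new high-water mark with a compare-and-swap loop.
func (w *watcher) observe(depth int64) {
	for {
		cur := atomic.LoadInt64(&w.highWaterMark)
		if depth <= cur || atomic.CompareAndSwapInt64(&w.highWaterMark, cur, depth) {
			return
		}
	}
}
```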
Automatic merge from submit-queue
Fix race condition in setting node statusUpdateNeeded flag
This PR fixes a race condition in setting the node statusUpdateNeeded flag
in the master's attachdetach controller. This flag indicates whether a
node status has been updated by the node_status_updater or not. When the
updater finishes updating a node status, the flag is set to false. When
the node status changes, such as a volume being detached or a new volume
being attached to the node, the flag is set to true so that the updater
can update the status again. The previous workflow had a race condition
as follows:
1. The updater gets the currently attached volume list from the node which needs to be updated.
2. A new volume A is attached to the same node right after step 1, setting the flag to TRUE.
3. The updater updates the node's attached volume list (which does not include volume A) and then sets the flag to FALSE.
The result is that volume A is never added to the attached volume list,
so on the node side this volume is never attached.
So in this PR, the flag is set to FALSE when the updater fetches the
attached volume list (as one atomic operation). In the above example,
after step 2 the flag is TRUE again; in step 3 the updater does not touch
the flag if the update is successful. So after that, the flag is still
TRUE, and in the next round of updates the node status will be updated.
This PR also changes a unit test due to the workflow changes
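A sketch of the fixed workflow, with illustrative names rather than the actual cache type, showing the flag being cleared in the same critical section that hands out the volume list:
```go
package attachdetach

import "sync"

// nodeStatusEntry sketches the fix: statusUpdateNeeded is cleared in the
// same critical section that returns the volume list, so an attach/detach
// landing after the read flips the flag back to true and is never lost.
type nodeStatusEntry struct {
	lock               sync.Mutex
	attachedVolumes    []string
	statusUpdateNeeded bool
}

// volumesToPublish atomically returns the list to publish and resets the
// flag (the check-and-reset this PR introduces).
func (e *nodeStatusEntry) volumesToPublish() []string {
	e.lock.Lock()
	defer e.lock.Unlock()
	e.statusUpdateNeeded = false
	return append([]string(nil), e.attachedVolumes...)
}

// markDirty is called on attach/detach: the flag flips back to true even if
// a status update is already in flight.
func (e *nodeStatusEntry) markDirty(volumes []string) {
	e.lock.Lock()
	defer e.lock.Unlock()
	e.attachedVolumes = volumes
	e.statusUpdateNeeded = true
}

// onUpdateFailed restores the flag when publishing fails, so the next round
// retries (the SetNodeStatusUpdateNeeded behavior from the earlier PR).
func (e *nodeStatusEntry) onUpdateFailed() {
	e.lock.Lock()
	defer e.lock.Unlock()
	e.statusUpdateNeeded = true
}
```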
Automatic merge from submit-queue
Send recycle events from pod to pv.
This allows users to diagnose what's wrong with the recycler. Recycler pods are started automatically with a cryptic name, and they are deleted immediately when they finish.
e.g, `kubectl describe pv` could show that NFS cannot be mounted (and how many pods have tried it):
```
FirstSeen LastSeen Count From SubobjectPath Type Reason Message
--------- -------- ----- ---- ------------- -------- ------ -------
59m 59m 1 {persistentvolume-controller } Warning RecyclerPod Recycler pod: Unable to mount volumes for pod "recycler-for-nfs_default(5421800e-347b-11e6-a79b-3c970e965218)": timeout expired waiting for volumes to attach/mount for pod "recycler-for-nfs"/"default". list of unattached/unmounted volumes=[vol]
53m 53m 1 {persistentvolume-controller } Warning RecyclerPod Recycler pod: Unable to mount volumes for pod "recycler-for-nfs_default(3c9809e5-347c-11e6-a79b-3c970e965218)": timeout expired waiting for volumes to attach/mount for pod "recycler-for-nfs"/"default". list of unattached/unmounted volumes=[vol]
46m 46m 1 {persistentvolume-controller } Warning RecyclerPod Recycler pod: Unable to mount volumes for pod "recycler-for-nfs_default(250dd2a2-347d-11e6-a79b-3c970e965218)": timeout expired waiting for volumes to attach/mount for pod "recycler-for-nfs"/"default". list of unattached/unmounted volumes=[vol]
40m 40m 1 {persistentvolume-controller } Warning RecyclerPod Recycler pod: Unable to mount volumes for pod "recycler-for-nfs_default(0d84ea33-347e-11e6-a79b-3c970e965218)": timeout expired waiting for volumes to attach/mount for pod "recycler-for-nfs"/"default". list of unattached/unmounted volumes=[vol]
33m 33m 1 {persistentvolume-controller } Warning RecyclerPod Recycler pod: Unable to mount volumes for pod "recycler-for-nfs_default(f5fb63bf-347e-11e6-a79b-3c970e965218)": timeout expired waiting for volumes to attach/mount for pod "recycler-for-nfs"/"default". list of unattached/unmounted volumes=[vol]
27m 27m 1 {persistentvolume-controller } Warning RecyclerPod Recycler pod: Unable to mount volumes for pod "recycler-for-nfs_default(de7128fd-347f-11e6-a79b-3c970e965218)": timeout expired waiting for volumes to attach/mount for pod "recycler-for-nfs"/"default". list of unattached/unmounted volumes=[vol]
1h 3m 75 {persistentvolume-controller } Normal RecyclerPod Recycler pod: Successfully assigned recycler-for-nfs to 127.0.0.1
1h 3m 76 {persistentvolume-controller } Normal RecyclerPod Recycler pod: Pod was active on the node longer than specified deadline
1h 1m 12 {persistentvolume-controller } Warning RecyclerPod Recycler pod: Error syncing pod, skipping: timeout expired waiting for volumes to attach/mount for pod "recycler-for-nfs"/"default". list of unattached/unmounted volumes=[vol]
20m 1m 4 {persistentvolume-controller } Warning RecyclerPod (events with common reason combined)
```
These steps were necessary:
- added event watcher to volume.RecycleVolumeByWatchingPodUntilCompletion
- pass all these events through volume plugins to volume controller
- rework volume.RecycleVolumeByWatchingPodUntilCompletion unit tests to a table (too much copy-paste)
- fix all unit tests along the way