Automatic merge from submit-queue
Fix misleading comment
**What this PR does / why we need it**: It just fixes a misleading comment. It took me some time to figure out the real behavior.
Automatic merge from submit-queue
convert deployment controller to shared informers
Converts the deployment controller to shared informers.
@kargakis I think you've been in here. Pretty straightforward swap.
Fixes #27687
Automatic merge from submit-queue
controller: save older revisions for Deployment's replica sets
@jwforres the only usable way I could find to keep multiple old revisions for a single replica set is to stuff them into an annotation as comma-separated values.
@kubernetes/deployment this retains old revisions served by a replica set inside an annotation.
Fixes https://github.com/kubernetes/kubernetes/issues/33844
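A minimal sketch of that bookkeeping, assuming a comma-separated history annotation (the key and helper name here are illustrative, not necessarily the controller's exact ones):
```go
package deployment

import "strings"

// revisionHistoryAnnotation is an illustrative key under which old
// revisions served by a replica set are stored as comma-separated values.
const revisionHistoryAnnotation = "deployment.kubernetes.io/revision-history"

// appendRevision records an old revision in the replica set's history
// annotation, skipping revisions that are already present.
func appendRevision(annotations map[string]string, revision string) {
	history := annotations[revisionHistoryAnnotation]
	if history == "" {
		annotations[revisionHistoryAnnotation] = revision
		return
	}
	for _, r := range strings.Split(history, ",") {
		if r == revision {
			return // already recorded
		}
	}
	annotations[revisionHistoryAnnotation] = history + "," + revision
}
```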
Returning an error will cause the deployment to be requeued. We should
just emit an event for deployments with overlapping selectors and silently
drop them out of the queue. This should be transitioned to a Condition
once we have them.
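A sketch of that queue behavior, with the overlap check and event recording stubbed out as plain functions (the real controller uses an event recorder and API types):
```go
package deployment

import "fmt"

// syncDeployment sketches the intended behavior: overlapping deployments
// get a warning event and a nil return, which drops them from the queue,
// instead of an error, which would requeue them.
func syncDeployment(name string, overlapsOlder bool, recordWarning func(string), sync func(string) error) error {
	if overlapsOlder {
		recordWarning(fmt.Sprintf("deployment %q has a selector overlapping an older deployment; skipping sync", name))
		return nil // nil forgets the key; an error would requeue it
	}
	return sync(name)
}
```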
Automatic merge from submit-queue
Use updated deployment after rollback
@kubernetes/deployment
**Release note**:
```release-note
NONE
```
1. When overlapping deployments are discovered, annotate them
2. Expose those overlapping annotations as warnings in kubectl describe
3. Only respect the earliest updated one (skip syncing all other overlapping deployments; see the sketch after this list)
4. Use indexer instead of store for deployment lister
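A toy sketch of point 3, assuming a simplified deployment type with an update timestamp (the real controller compares API objects):
```go
package deployment

import "time"

// Deployment is a stub carrying only the field this sketch needs.
type Deployment struct {
	Name      string
	UpdatedAt time.Time
}

// pickCanonical returns the earliest-updated deployment among a set of
// overlapping ones; the sync loop skips all the others.
// Assumes a non-empty slice.
func pickCanonical(overlapping []Deployment) Deployment {
	canonical := overlapping[0]
	for _, d := range overlapping[1:] {
		if d.UpdatedAt.Before(canonical.UpdatedAt) {
			canonical = d
		}
	}
	return canonical
}
```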
Automatic merge from submit-queue
Fix deployment e2e test: waitDeploymentStatus should error when entering an invalid state
Follow-up to #28162
1. We should check that max unavailable and max surge aren't violated at all times in e2e tests (this isn't checked in the deployment scaled rollout test yet, but we should wait for the rollout to become valid and then keep checking until it finishes); the invariant is sketched below
2. Fix some minor bugs in e2e tests
@kubernetes/deployment
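A sketch of the invariant from point 1, with plain ints standing in for the resolved surge/unavailable values:
```go
package e2e

import "fmt"

// checkRolloutInvariants verifies, at a single observation, that the
// rollout never exceeds maxSurge or dips below maxUnavailable.
func checkRolloutInvariants(desired, total, available, maxSurge, maxUnavailable int) error {
	if total > desired+maxSurge {
		return fmt.Errorf("total replicas %d exceed desired %d plus maxSurge %d", total, desired, maxSurge)
	}
	if available < desired-maxUnavailable {
		return fmt.Errorf("available replicas %d below desired %d minus maxUnavailable %d", available, desired, maxUnavailable)
	}
	return nil
}
```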
When a new rollout is initiated with a size different from the previous
size of the deployment, only the new replica set will notice the new
size. Old replica sets are not updated by the rollout path.
Automatic merge from submit-queue
controller: wait for synced old replica sets on Recreate
Partially fixes https://github.com/kubernetes/kubernetes/issues/27362
Any other work on it should be handled at the replica set level (and/or in the kubelet if required)
@kubernetes/deployment PTAL
Changes:
* moved waiting for synced caches before starting any work
* refactored worker() to really quit on quit
* changed the queue to a rate-limiting queue and added retries on errors (see the worker sketch after this list)
* deep-copy deployments before mutating - we still need to deep-copy
replica sets and pods
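A minimal sketch of the worker/retry pattern from the list above, assuming client-go's workqueue package (the controller struct is reduced to the two fields that matter here):
```go
package deployment

import "k8s.io/client-go/util/workqueue"

// DeploymentController is reduced to the fields this sketch needs.
type DeploymentController struct {
	queue       workqueue.RateLimitingInterface
	syncHandler func(key string) error
}

// worker really quits when the queue shuts down, and retries failed
// keys with rate-limited backoff.
func (dc *DeploymentController) worker() {
	for {
		key, quit := dc.queue.Get()
		if quit {
			return
		}
		if err := dc.syncHandler(key.(string)); err != nil {
			dc.queue.AddRateLimited(key) // requeue with backoff on error
		} else {
			dc.queue.Forget(key) // clear backoff tracking on success
		}
		dc.queue.Done(key)
	}
}
```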
* rolling.go (has all the logic for rolling deployments)
* recreate.go (has all the logic for recreate deployments)
* sync.go (has all the logic for getting and scaling replica sets)
* rollback.go (has all the logic for rolling back a deployment)
* util.go (contains all the utilities used throughout the controller)
Leave behind in deployment_controller.go all the necessary bits for
creating, setting up, and running the controller loop.
Also add package documentation.
Since the fake clientset now correctly tracks objects created by the
deployment controller, it triggers different controller behavior: the
controller only creates the replica set once and updates the deployment once.
Automatic merge from submit-queue
Proportionally scale paused and rolling deployments
Enable paused and rolling deployments to be proportionally scaled.
Also have cleanup policy work for paused deployments.
Fixes #20853 Fixes #20966 Fixes #20754
@bgrant0607 @janetkuo @ironcladlou @nikhiljindal
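A toy sketch of the proportional math, distributing a new total across replica sets in proportion to their current sizes (the real controller also honors surge limits; remainder handling here is simplified):
```go
package deployment

// proportionalScale splits newTotal across replica set sizes in
// proportion to their current share, handing the integer-rounding
// remainder to the largest set. Assumes the current total is non-zero.
func proportionalScale(sizes []int, newTotal int) []int {
	oldTotal := 0
	for _, s := range sizes {
		oldTotal += s
	}
	out := make([]int, len(sizes))
	assigned, largest := 0, 0
	for i, s := range sizes {
		out[i] = s * newTotal / oldTotal // proportional share, rounded down
		assigned += out[i]
		if s > sizes[largest] {
			largest = i
		}
	}
	out[largest] += newTotal - assigned
	return out
}
```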
Due to rounding down for maxUnavailable, we may end up with deployments
that have zero surge and zero unavailable pods, something that 1) is not
allowed per validation and 2) blocks deployments. If we end up in such a
situation, set maxUnavailable to 1 on the theory that surge might not
work due to quota.
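A sketch of that clamp, with the percentages resolved as plain ints (the real code resolves IntOrString values; the function name is illustrative):
```go
package deployment

// resolveFenceposts resolves surge/unavailable percentages against the
// desired replica count and clamps the all-zero case described above.
func resolveFenceposts(surgePercent, unavailablePercent, desired int) (surge, unavailable int) {
	surge = (desired*surgePercent + 99) / 100        // rounded up here
	unavailable = desired * unavailablePercent / 100 // rounded down
	if surge == 0 && unavailable == 0 {
		// Zero surge plus zero unavailable fails validation and would
		// block the rollout; prefer one unavailable pod, since surging
		// might be impossible under a resource quota.
		unavailable = 1
	}
	return surge, unavailable
}
```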