Where possible, switch the scheduler to use generated listers and
informers. There are still some places where it probably makes more
sense to use one-off reflectors/informers (listing/watching just a
single node, listing/watching scheduled & unscheduled pods using a field
selector).
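For illustration, a minimal sketch of such a one-off informer, assuming an era-appropriate client-go; the field selector and resync period here are illustrative:
```go
package podinformer

import (
	"time"

	"k8s.io/apimachinery/pkg/fields"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/pkg/api/v1"
	"k8s.io/client-go/tools/cache"
)

// newUnscheduledPodInformer watches only pods with no assigned node, the kind
// of field-selector case a generated shared informer cannot express.
func newUnscheduledPodInformer(client kubernetes.Interface) cache.Controller {
	selector := fields.ParseSelectorOrDie("spec.nodeName==")
	lw := cache.NewListWatchFromClient(
		client.Core().RESTClient(), "pods", v1.NamespaceAll, selector)
	_, informer := cache.NewInformer(lw, &v1.Pod{}, 30*time.Second,
		cache.ResourceEventHandlerFuncs{})
	return informer
}
```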
Automatic merge from submit-queue (batch tested with PRs 41146, 41486, 41482, 41538, 41784)
Switch statefulset controller to shared informers
Originally part of #40097
I *think* the controller currently makes a deep copy of a StatefulSet before it mutates it, but I'm not 100% sure. For those who are most familiar with this code, could you please confirm?
@beeps @smarterclayton @ingvagabund @sttts @liggitt @deads2k @kubernetes/sig-apps-pr-reviews @kubernetes/sig-scalability-pr-reviews @timothysc @gmarek @wojtek-t
Automatic merge from submit-queue (batch tested with PRs 38957, 41819, 41851, 40667, 41373)
Fix deployment helper - no assumptions on only one new ReplicaSet
#40415
**Release note**:
```release-note
NONE
```
@kubernetes/sig-apps-bugs
Conversions can mutate the underlying object (and ours were).
Make a deepcopy before our first conversion at the very start
of the reconciler method in order to avoid mutating the shared
informer cache during conversion.
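A minimal sketch of that pattern, assuming the era's api.Scheme.Copy helper; the downstream call is hypothetical:
```go
// syncStatefulSet deep-copies the shared object before the first conversion,
// so all later mutations touch only the copy, never the informer cache.
func (ssc *StatefulSetController) syncStatefulSet(shared *apps.StatefulSet) error {
	obj, err := api.Scheme.Copy(shared)
	if err != nil {
		return fmt.Errorf("error copying StatefulSet %q: %v", shared.Name, err)
	}
	set := obj.(*apps.StatefulSet)
	// ... conversions and reconciliation operate on `set` only ...
	return ssc.control.UpdateStatefulSet(set) // hypothetical downstream call
}
```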
Fixes #41768
Automatic merge from submit-queue (batch tested with PRs 41364, 40317, 41326, 41783, 41782)
changes to cleanup the volume plugin for recycle
**What this PR does / why we need it**:
Code cleanup. Changing from creating a new interface from the plugin, that then calls a function to recycle a volume, to adding the function to the plugin itself.
**Which issue this PR fixes** *(optional, in `fixes #<issue number>(, fixes #<issue_number>, ...)` format, will close that issue when PR gets merged)*: fixes #26230
**Special notes for your reviewer**:
Took same approach from closed PR #28432.
Do you want the approach to be the same for NewDeleter(), NewMounter(), NewUnMounter() and should they be in this same PR or submit different PR's for those?
**Release note**:
```release-note
NONE
```
Drop the secondary queue and instead add the key to the main queue, either
rate-limited or after exactly the required wait time. In this way we can
always be sure that we will sync the Deployment back if its progress has yet
to resolve into a complete (NewReplicaSetAvailable) or TimedOut condition.
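A sketch of what this looks like with client-go's workqueue; the key and deadline values are illustrative:
```go
package main

import (
	"time"

	"k8s.io/client-go/util/workqueue"
)

func main() {
	queue := workqueue.NewRateLimitingQueue(workqueue.DefaultControllerRateLimiter())
	key := "default/my-deployment" // illustrative namespace/name key

	// Transient failure: let the rate limiter choose the retry delay.
	queue.AddRateLimited(key)

	// Progress deadline still pending: requeue exactly when it can time out,
	// instead of tracking the deployment in a secondary queue.
	remaining := 10 * time.Minute
	queue.AddAfter(key, remaining)
}
```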
Automatic merge from submit-queue
Make controller-manager resilient to stale serviceaccount tokens
Now that the controller manager is spinning up controller loops using service accounts, we need to be more proactive in making sure the clients will actually work.
Future additional work:
* make a controller that reaps invalid service account tokens (c.f. https://github.com/kubernetes/kubernetes/issues/20165)
* allow updating the client held by a controller with a new token while the controller is running (c.f. https://github.com/kubernetes/kubernetes/issues/4672)
Automatic merge from submit-queue
Convert HPA controller to support HPA v2 mechanics
This PR converts the HPA controller to support the mechanics from HPA v2.
The HPA controller continues to make use of the HPA v1 client, but utilizes
the conversion logic to work with autoscaling/v2alpha1 objects internally.
It is the follow-up PR to #36033 and part of kubernetes/features#117.
**Release note**:
```release-note
NONE
```
We have some heuristics that ensure that volumes (and hence stateful set
pods) are spread out across zones. Sadly they forgot to account for
multiple mounts. This PR updates the heuristic to ignore the mount name
when we see something that looks like a statefulset volume, thus
ensuring that multiple mounts end up in the same AZ.
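A hedged illustration of the heuristic; the real parsing lives in the cloud-provider zone-spreading code and may differ in detail:
```go
package main

import (
	"fmt"
	"strconv"
	"strings"
)

// zoneSpreadKey reduces a StatefulSet-style PVC name such as "data-web-3" or
// "logs-web-3" to "web-3", so that every mount of pod web-3 hashes to the
// same zone. Anything that doesn't look like <mount>-<set>-<ordinal> is
// returned unchanged.
func zoneSpreadKey(pvcName string) string {
	parts := strings.Split(pvcName, "-")
	if len(parts) >= 3 {
		if _, err := strconv.Atoi(parts[len(parts)-1]); err == nil {
			return strings.Join(parts[1:], "-") // drop the mount name
		}
	}
	return pvcName
}

func main() {
	fmt.Println(zoneSpreadKey("data-web-3"), zoneSpreadKey("logs-web-3")) // web-3 web-3
}
```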
Fix #35695
Automatic merge from submit-queue
Switch service controller to shared informers
Originally part of #40097
cc @deads2k @smarterclayton @gmarek @wojtek-t @timothysc @sttts @liggitt @kubernetes/sig-scalability-pr-reviews
Automatic merge from submit-queue (batch tested with PRs 41604, 41273, 41547)
Switch pv controller to shared informer
This is WIP because I still need to do something with bazel? and add 'get storageclasses' to the controller-manager rbac role
@jsafrane PTAL and make sure I did not break anything in the PV controller. Do we need to clone the volumes/claims we get from the shared informer before we use them? I could not find a place where we modify them but you would know for certain.
cc @ncdc because I copied what you did in your other PRs.
Automatic merge from submit-queue (batch tested with PRs 41517, 41494, 41163)
Deployment: filter out old RSes that are deleted or with non-zero replicas before cleanup
Fixes #36379
cc @zmerlynn @yujuhong @kargakis @kubernetes/sig-apps-bugs
To prepare for implementing ControllerRef across all controllers,
this pushes the common adopt/orphan logic into ControllerRefManager
so each controller doesn't have to duplicate it.
This also shares the adopt/orphan logic between Pods and ReplicaSets,
so it lives in only one place.
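The decision table being centralized looks roughly like this (stand-in types, not the real ControllerRefManager API):
```go
package main

import (
	"fmt"

	"k8s.io/apimachinery/pkg/labels"
)

// Minimal stand-ins for the real objects; illustrative only.
type pod struct {
	podLabels map[string]string
	ownerUID  string // empty means orphan
}

type refManager struct{ controllerUID string }

// claim mirrors the shared adopt/orphan decision.
func (m *refManager) claim(selector labels.Selector, p *pod) string {
	matches := selector.Matches(labels.Set(p.podLabels))
	switch {
	case p.ownerUID == "" && matches:
		return "adopt" // orphan whose labels match our selector
	case p.ownerUID == m.controllerUID && !matches:
		return "release" // ours, but the labels no longer match
	default:
		return "ignore"
	}
}

func main() {
	sel := labels.SelectorFromSet(labels.Set{"app": "web"})
	m := &refManager{controllerUID: "rs-1"}
	fmt.Println(m.claim(sel, &pod{podLabels: map[string]string{"app": "web"}}))
	fmt.Println(m.claim(sel, &pod{podLabels: map[string]string{"app": "db"}, ownerUID: "rs-1"}))
}
```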
This commit converts the HPA controller over to using the new version of
the HorizontalPodAutoscaler object found in autoscaling/v2alpha1. Note
that while the autoscaler will accept requests for object metrics, the
scale client will return an error on attempts to get object metrics
(since that requires the new custom metrics API, which is not yet
implemented).
This also enables the HPA object in v2alpha1 as a retrievable API
version by default.
Automatic merge from submit-queue
Remove alpha provisioning
This is the first part of https://github.com/kubernetes/features/issues/36
@kubernetes/sig-storage-misc
**Release note**:
```release-note
Alpha version of dynamic volume provisioning is removed in this release. Annotation
"volume.alpha.kubernetes.io/storage-class" does not have any special meaning. A default storage class
and DefaultStorageClass admission plugin can be used to preserve similar behavior of Kubernetes cluster,
see https://kubernetes.io/docs/user-guide/persistent-volumes/#class-1 for details.
```
Automatic merge from submit-queue
Switch serviceaccounts controller to generated shared informers
Originally part of #40097
cc @deads2k @sttts @liggitt @smarterclayton @gmarek @wojtek-t @timothysc @kubernetes/sig-scalability-pr-reviews
Automatic merge from submit-queue
Switch resourcequota controller to shared informers
Originally part of #40097
I have had some issues with this change in the past, when I updated `pkg/quota` to use the new informers while `pkg/controller/resourcequota` remained on the old informers. In this PR, both are switched to using the new informers. The issues in the past were lots of flaky test failures in the ResourceQuota e2es, where it would randomly fail to see deletions and handle replenishment. I am hoping that now that everything here is consistently using the new informers, there won't be any more of these flakes, but it's something to keep an eye out for.
I also think `pkg/controller/resourcequota` could be cleaned up. I don't think there's really any need for `replenishment_controller.go` any more since it's no longer running individual controllers per kind to replenish. It instead just uses the shared informer and adds event handlers to it. But maybe we do that in a follow up.
cc @derekwaynecarr @smarterclayton @wojtek-t @deads2k @sttts @liggitt @timothysc @kubernetes/sig-scalability-pr-reviews
Automatic merge from submit-queue (batch tested with PRs 41196, 41252, 41300, 39179, 41449)
controller: cleanup workload controllers a bit
* Switches glog.Errorf to utilruntime.HandleError in DS and RC controllers
* Drops a couple of unused variables in the DS, SS, and Deployment controllers
* Updates some comments
@kubernetes/sig-apps-misc
It's not an error when a recycle/delete/provision operation cannot be started
because it has failed recently. It will be restarted automatically when
backoff expires.
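A sketch of the resulting control flow, assuming the goroutinemap utilities the PV controller uses (package paths and helper names hedged):
```go
// scheduleOperation treats "failed recently, still backing off" as a no-op
// rather than an error; the operation restarts once backoff expires.
func scheduleOperation(grm goroutinemap.GoRoutineMap, opName string, op func() error) error {
	err := grm.Run(opName, op)
	switch {
	case err == nil:
		return nil
	case goroutinemap.IsAlreadyExists(err):
		return nil // operation is already running
	case exponentialbackoff.IsExponentialBackoff(err):
		return nil // failed recently; restarts when backoff expires
	default:
		return err
	}
}
```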
Automatic merge from submit-queue (batch tested with PRs 41115, 41212, 41346, 41340, 41172)
Enable PodTolerateNodeTaints predicate in DaemonSet controller
Ref #28687, this enables the PodTolerateNodeTaints predicate in the DaemonSet controller (a sketch of the toleration check follows the release note below)
cc @Random-Liu @dchen1107 @davidopp @mikedanese @kubernetes/sig-apps-pr-reviews @kubernetes/sig-node-pr-reviews @kargakis @lukaszo
```release-note
Make DaemonSet controller respect node taints and pod tolerations.
```
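In effect, a daemon pod may be placed on a node only if it tolerates every taint on that node. A self-contained sketch of that check, simplified relative to the real PodToleratesNodeTaints predicate (which also handles operators, effects, and wildcard keys):
```go
package main

import "fmt"

// Minimal stand-ins for taints and tolerations; illustrative only.
type taint struct{ key, value, effect string }
type toleration struct{ key, value, effect string }

func tolerates(tol toleration, t taint) bool {
	return tol.key == t.key && tol.value == t.value && tol.effect == t.effect
}

// toleratesAllTaints reports whether a daemon pod may run on a node.
func toleratesAllTaints(taints []taint, tolerations []toleration) bool {
	for _, t := range taints {
		ok := false
		for _, tol := range tolerations {
			if tolerates(tol, t) {
				ok = true
				break
			}
		}
		if !ok {
			return false
		}
	}
	return true
}

func main() {
	n := []taint{{"dedicated", "infra", "NoSchedule"}}
	fmt.Println(toleratesAllTaints(n, []toleration{{"dedicated", "infra", "NoSchedule"}})) // true
	fmt.Println(toleratesAllTaints(n, nil))                                               // false
}
```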
* Switches glog.Errorf to utilruntime.HandleError in DS and RC controllers
* Drops a couple of unused variables in the DS, SS, and Deployment controllers
* Updates some comments
Automatic merge from submit-queue (batch tested with PRs 41137, 41268)
Allow the CertificateController to use any Signer implementation.
**What this PR does / why we need it**:
This will allow developers to create `CertificateController`s with arbitrary `Signer`s, instead of forcing the use of `CFSSLSigner`. It matches the behavior of allowing an arbitrary `AutoApprover` to be passed in the constructor.
**Release note**:
```release-note
NONE
```
CC @mikedanese
Automatic merge from submit-queue (batch tested with PRs 41248, 41214)
Switch hpa controller to shared informer
**What this PR does / why we need it**: switch the hpa controller to use a shared informer
**Which issue this PR fixes** *(optional, in `fixes #<issue number>(, fixes #<issue_number>, ...)` format, will close that issue when PR gets merged)*: fixes #
**Special notes for your reviewer**: Only the last commit is relevant. The others are from #40759, #41114, #41148
**Release note**:
```release-note
```
cc @smarterclayton @deads2k @sttts @liggitt @DirectXMan12 @timothysc @kubernetes/sig-scalability-pr-reviews @jszczepkowski @mwielgus @piosz
Automatic merge from submit-queue (batch tested with PRs 39418, 41175, 40355, 41114, 32325)
TaintController
```release-note
This PR adds a manager to NodeController that is responsible for removing Pods from Nodes tainted with NoExecute Taints. This feature is beta (as the rest of taints) and enabled by default. It's gated by the controller-manager --enable-taint-manager flag.
```
Automatic merge from submit-queue (batch tested with PRs 41112, 41201, 41058, 40650, 40926)
Promote TokenReview to v1
Peer to https://github.com/kubernetes/kubernetes/pull/40709
We have multiple features that depend on this API:
- [webhook authentication](https://kubernetes.io/docs/admin/authentication/#webhook-token-authentication)
- [kubelet delegated authentication](https://kubernetes.io/docs/admin/kubelet-authentication-authorization/#kubelet-authentication)
- add-on API server delegated authentication
The API has been in use since 1.3 in beta status (v1beta1) with negligible changes:
- Added a status field for reporting errors evaluating the token
This PR promotes the existing v1beta1 API to v1 with no changes
Because the API does not persist data (it is a query/response-style API), there are no data migration concerns.
This positions us to promote the features that depend on this API to stable in 1.7
cc @kubernetes/sig-auth-api-reviews @kubernetes/sig-auth-misc
```release-note
The authentication.k8s.io API group was promoted to v1
```
Automatic merge from submit-queue (batch tested with PRs 40796, 40878, 36033, 40838, 41210)
StatefulSet hardening
**What this PR does / why we need it**:
This PR contains the following changes to StatefulSet. Only one change affects the semantics of how the controller operates (described in #38418), and that change only brings the controller into conformance with its documented behavior.
1. pcb and pcb controller are removed and their functionality is encapsulated in StatefulPodControlInterface. This class models the design of controller.PodControlInterface and provides an abstraction over clientset.Interface which is useful for testing purposes.
2. IdentityMappers has been removed to clarify what properties of a Pod are mutated by the controller. All mutations are performed in the UpdateStatefulPod method of the StatefulPodControlInterface.
3. The statefulSetIterator and petQueue classes are removed. These classes sorted Pods by CreationTimestamp. This is brittle and not resilient to clock skew. The current control loop, which implements the same logic, is in stateful_set_control.go. The Pods are now sorted and considered by their ordinal indices, as is outlined in the documentation.
4. StatefulSetController now checks to see if the Pods matching a StatefulSet's Selector also match the Name of the StatefulSet. This will make the controller resilient to overlapping controllers, and will be enhanced by the addition of ControllerRefs.
5. The total lines of production code have been reduced, and the total number of unit tests has been increased. All new code has 100% unit coverage giving the module 83% coverage. Tests for StatefulSetController have been added, but it is not practical to achieve greater coverage in unit testing for this code (the e2e tests for StatefulSet cover these areas).
6. Issue #38418 is fixed in that StatefulSet will ensure that all Pods that are predecessors of another Pod are Running and Ready prior to launching a new Pod. This removes the potential for deadlock when a Pod needs to be rescheduled while its predecessor is hung in Pending or Initializing.
7. All reference to pet have been removed from the code and comments.
**Which issue this PR fixes**
fixes #38418, #36859
**Special notes for your reviewer**:
**Release note**:
```release-note
Fixes issue #38418 which, under some circumstances, could cause StatefulSet to deadlock.
Mediates issue #36859. StatefulSet only acts on Pods whose identity matches the StatefulSet, providing a partial mediation for overlapping controllers.
```
Automatic merge from submit-queue (batch tested with PRs 40796, 40878, 36033, 40838, 41210)
Implement TTL controller and use the ttl annotation attached to node in secret manager
For every secret attached to a pod as a volume, Kubelet tries to refresh it every sync period. Currently Kubelet keeps a TTL cache of its pods' secrets, with the TTL set to 1 minute. That means that in the large clusters we are targeting (5k nodes, 30 pods/node), given that each pod has a secret associated with the ServiceAccount from its namespace, and with a large enough number of namespaces (where on each node (almost) every pod is from a different namespace), this results in ~30 GETs to refresh all secrets every minute from one node, which gives ~2500 QPS for GET secrets to the apiserver.
Apiserver cannot keep up with it very easily.
The desired solution would be to watch for secret changes, but for security reasons we don't want a node watching all secrets, and it is not yet possible to watch only the secrets attached to pods on a given node.
So as a temporary solution, we are introducing an annotation that serves as a suggestion to the kubelet for the TTL of secrets in its cache, and a very simple controller that sets this annotation based on the cluster size (the larger the cluster, the bigger the TTL).
That workaround means that only very local changes are needed in Kubelet; we are creating a well-separated, very simple controller; and once watching "my secrets" becomes possible, it will be easy to remove the controller and switch over. It will also allow us to reach our scalability goals.
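The controller's core idea can be sketched as a cluster-size-to-TTL mapping; the thresholds below are illustrative, not the shipped values:
```go
package main

import (
	"fmt"
	"time"
)

// suggestedSecretTTL suggests a longer kubelet secret-cache TTL as the
// cluster grows, trading staleness for apiserver load.
func suggestedSecretTTL(numNodes int) time.Duration {
	switch {
	case numNodes <= 100:
		return 0 // small cluster: kubelets can refresh every sync period
	case numNodes <= 500:
		return 5 * time.Minute
	case numNodes <= 1000:
		return 15 * time.Minute
	default:
		return 30 * time.Minute
	}
}

func main() {
	fmt.Println(suggestedSecretTTL(5000)) // 30m0s
}
```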
@dchen1107 @thockin @liggitt
Automatic merge from submit-queue (batch tested with PRs 40917, 41181, 41123, 36592, 41183)
Set all node conditions to Unknown when node is unreachable
**What this PR does / why we need it**:
Sets all node conditions to Unknown when the node does not report status or is unreachable
**Which issue this PR fixes**
fixes https://github.com/kubernetes/kubernetes/issues/36273
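Conceptually, the controller now walks every tracked condition; a sketch assuming v1 types (the real code also preserves heartbeat timestamps):
```go
// markConditionsUnknown flips every node condition to Unknown once the node
// has stopped posting status, instead of only NodeReady.
func markConditionsUnknown(node *v1.Node, now metav1.Time) {
	for i := range node.Status.Conditions {
		cond := &node.Status.Conditions[i]
		cond.Status = v1.ConditionUnknown
		cond.Reason = "NodeStatusUnknown"
		cond.Message = "Kubelet stopped posting node status."
		cond.LastTransitionTime = now
	}
}
```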
Automatic merge from submit-queue (batch tested with PRs 41037, 40118, 40959, 41084, 41092)
Switch CSR controller to use shared informer
Switch the CSR controller to use a shared informer. Originally part of #40097 but I'm splitting that up into multiple PRs.
I have added a test to try to ensure we don't mutate the cache. It could use some fleshing out for additional coverage but it gets the initial job done, I think.
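The shape of such a check might be the following (a sketch, not the test as merged; assumes reflect, testing, and apimachinery's runtime and util/diff packages):
```go
// checkNoMutation fails the test when a sync modified, in place, an object
// that came from the shared informer cache.
func checkNoMutation(t *testing.T, before, after runtime.Object) {
	if !reflect.DeepEqual(before, after) {
		t.Errorf("controller mutated the shared informer cache:\n%s",
			diff.ObjectGoPrintDiff(before, after))
	}
}
```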
cc @mikedanese @deads2k @liggitt @sttts @kubernetes/sig-scalability-pr-reviews
1. pcb and pcb controller are removed and their functionality is
encapsulated in StatefulPodControlInterface.
2. IdentityMappers has been removed to clarify what properties of a Pod are
mutated by the controller. All mutations are performed in the
UpdateStatefulPod method of the StatefulPodControlInterface.
3. The statefulSetIterator and petQueue classes are removed. These classes
sorted Pods by CreationTimestamp. This is brittle and not resilient to
clock skew. The current control loop, which implements the same logic,
is in stateful_set_control.go. The Pods are now sorted and considered by
their ordinal indices, as is outlined in the documentation.
4. StatefulSetController now checks to see if the Pods matching a
StatefulSet's Selector also match the Name of the StatefulSet. This will
make the controller resilient to overlapping controllers, and will be
enhanced by the addition of ControllerRefs.
Automatic merge from submit-queue
Update owners file for job and cronjob controller
I've just noticed we have outdated OWNERS files for job and cronjob controllers.
@erictune ptal
@kubernetes/sig-contributor-experience-pr-reviews fyi
Automatic merge from submit-queue (batch tested with PRs 40345, 38183, 40236, 40861, 40900)
refactor approver and signer interfaces to be consistent w.r.t. apiserver interaction
This makes it so that only the controller loop talks to the
API server directly. The signatures for Sign and Approve also
become more consistent, while allowing the Signer to report
conditions (which it wasn't able to do before).
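Hypothetical shapes inferred from that description (not the exact upstream definitions):
```go
// Both operations take and return the CSR, so neither needs its own
// API-server client, and the Signer can surface conditions on the result.
type Signer interface {
	Sign(csr *certificates.CertificateSigningRequest) (*certificates.CertificateSigningRequest, error)
}

type Approver interface {
	Approve(csr *certificates.CertificateSigningRequest) (*certificates.CertificateSigningRequest, error)
}
```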
Automatic merge from submit-queue (batch tested with PRs 40971, 41027, 40709, 40903, 39369)
Promote SubjectAccessReview to v1
We have multiple features that depend on this API:
SubjectAccessReview
- [webhook authorization](https://kubernetes.io/docs/admin/authorization/#webhook-mode)
- [kubelet delegated authorization](https://kubernetes.io/docs/admin/kubelet-authentication-authorization/#kubelet-authorization)
- add-on API server delegated authorization
The API has been in use since 1.3 in beta status (v1beta1) with negligible changes:
- Added a status field for reporting errors evaluating access
- A typo was discovered in the SubjectAccessReviewSpec Groups field name
This PR promotes the existing v1beta1 API to v1, with the only change being the typo fix to the groups field. (fixes https://github.com/kubernetes/kubernetes/issues/32709)
Because the API does not persist data (it is a query/response-style API), there are no data migration concerns.
This positions us to promote the features that depend on this API to stable in 1.7
cc @kubernetes/sig-auth-api-reviews @kubernetes/sig-auth-misc
```release-note
The authorization.k8s.io API group was promoted to v1
```
Automatic merge from submit-queue
federation: Refactoring namespaced resources deletion code from kube ns controller and sharing it with fed ns controller
Ref https://github.com/kubernetes/kubernetes/issues/33612
Refactoring code in the kube namespace controller to delete all resources in a namespace when the namespace is deleted. Refactored this code into a separate NamespacedResourcesDeleter class and called it from the federation namespace controller.
This is required for enabling cascading deletion of namespaced resources in federation apiserver.
Before this PR, we were directly deleting the namespaced resources and assuming that they go away immediately. With cascading deletion, we will have to wait for the corresponding controllers to first delete the resources from underlying clusters and then delete the resource from federation control plane. NamespacedResourcesDeleter has this waiting logic.
cc @kubernetes/sig-federation-misc @caesarxuchao @derekwaynecarr @mwielgus
Automatic merge from submit-queue
Replace hand-written informers with generated ones
Replace existing uses of hand-written informers with generated ones.
Follow-up commits will switch the use of one-off informers to shared
informers.
This is a precursor to #40097. That PR will switch one-off informers to shared informers for the majority of the code base (but not quite all of it...).
NOTE: this does create a second set of shared informers in the kube-controller-manager. This will be resolved back down to a single factory once #40097 is reviewed and merged.
There are a couple of places where I expanded the # of caches we wait for in the calls to `WaitForCacheSync` - please pay attention to those. I also added in a commented-out wait in the attach/detach controller. If @kubernetes/sig-storage-pr-reviews is ok with enabling the waiting, I'll do it (I'll just need to tweak an integration test slightly).
@deads2k @sttts @smarterclayton @liggitt @soltysh @timothysc @lavalamp @wojtek-t @gmarek @sjenning @derekwaynecarr @kubernetes/sig-scalability-pr-reviews
This makes it so that only the controller loop talks to the
API server directly. The signatures for Sign and Approve also
become more consistent, while allowing the Signer to report
conditions (which it wasn't able to do before).
Automatic merge from submit-queue
Update daemon set controller OWNERS file
Adding myself as reviewer, adding @mikedanese as approver
cc @kargakis @lukasredynk
Automatic merge from submit-queue (batch tested with PRs 35782, 35831, 39279, 40853, 40867)
genericapiserver: cut off more dependencies – episode 7
Follow-up of https://github.com/kubernetes/kubernetes/pull/40822
approved based on #40363
Automatic merge from submit-queue (batch tested with PRs 40855, 40859)
PV binding: send an event when there are no PVs to bind
This is similar to scheduler that says "no nodes available to schedule pods"
when it can't schedule a pod.
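The emission might look roughly like this (sketch; the reason and message strings are illustrative):
```go
// warnNoVolumeAvailable surfaces to the user why a claim stays Pending,
// mirroring the scheduler's "no nodes available" message.
func warnNoVolumeAvailable(recorder record.EventRecorder, claim *v1.PersistentVolumeClaim) {
	recorder.Event(claim, v1.EventTypeNormal, "FailedBinding",
		"no persistent volumes available for this claim and no storage class is set")
}
```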
@kubernetes/sig-storage-pr-reviews
Automatic merge from submit-queue (batch tested with PRs 40810, 40695)
Prevent pv controller from forcefully overwrite provisioned volume name
**What this PR does / why we need it**:
This PR adds a fix that prevents the PV controller from forcefully overwriting the provisioned volume's name with the generated PV name. Instead, it overwrites the volume's name only when it is missing. This allows dynamic provisioner implementers to set the name of the volume to a value they choose.
**Which issue this PR fixes**
This PR does not have an issue affiliated, but it will allow PR #38924 to properly implement dynamically provisioned volume in namespaces other than default.
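The behavior change reduces to a default-if-empty (sketch):
```go
// volumeName keeps a provisioner-chosen name and falls back to the
// generated PV name only when none was set.
func volumeName(provisioned, generated string) string {
	if provisioned == "" {
		return generated
	}
	return provisioned
}
```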
Automatic merge from submit-queue (batch tested with PRs 39169, 40719, 38954, 40808, 40689)
Add StatefulSets checks at Service level
Hi!
Please let me propose some very small e2e testsuite enhancement.
This PR removes a `TODO` about checking the governing service at the unit-test level (which is hard) and adds this check to the e2e test suite.
Thanks
Sebastian
This fix prevents the PV controller from forcefully overwriting the provisioned volume's name with the generated PV name. Instead, it allows dynamic provisioner implementers to set the name of the volume to a value that they choose.
Automatic merge from submit-queue (batch tested with PRs 34543, 40606)
sync client-go and move util/workqueue
The vision of client-go is that it provides enough utilities to build a reasonable controller. It has been copying `util/workqueue`. This makes it authoritative.
@liggitt I'm getting really close to making client-go authoritative ptal.
approved based on https://github.com/kubernetes/kubernetes/issues/40363
Automatic merge from submit-queue
controller: don't run informers in unit tests when unnecessary
Fixes https://github.com/kubernetes/kubernetes/issues/39908
@mfojtik it seems that using informers makes the deployment controller sync on the initial relist, so this races with the enqueue that these tests are testing.
Automatic merge from submit-queue
Decrease Daemonset burst replicas due to DoS conditions.
**What this PR does / why we need it**:
We are seeing DoS conditions on our registry when running a large cluster with too many daemonsets bursting at once.
**Special notes for your reviewer**:
I decided not to plumb through yet another variable to the command line. Ideally such parameters could be tweaked via a configuration file.
**Release note**:
```release-note
NONE
```
Automatic merge from submit-queue (batch tested with PRs 40126, 40565, 38777, 40564, 40572)
Do not swallow error in asw.updateNodeStatusUpdateNeeded
Ref #39056
Bubble the error up to `SetNodeUpdateStatusNeeded` and log it out.
NOTE: This does not modify interface of `SetNodeUpdateStatusNeeded`
Automatic merge from submit-queue (batch tested with PRs 40239, 40397, 40449, 40448, 40360)
move the discovery and dynamic clients
Moved the dynamic client, discovery client, testing/core, and testing/cache to `client-go`. Dependencies on API groups we don't have generated clients for (federation, kubeadm, and imagepolicy) have dropped out.
@caesarxuchao @sttts
approved based on https://github.com/kubernetes/kubernetes/issues/40363
Automatic merge from submit-queue (batch tested with PRs 39538, 40188, 40357, 38214, 40195)
genericapiserver: cut off more dependencies – episode 2
Compare commit subjects.
approved based on #40363
Automatic merge from submit-queue (batch tested with PRs 39064, 40294)
Refactor persistent volume tests
This is an attempt to make the binder tests a bit more concise. The PVCs are being created by a "templating" function. There is also a handful of PVs in the tests, but those vary quite a bit more and I don't think a similar approach would save us much code.
Reference:
https://reviewable.kubernetes.io/reviews/kubernetes/kubernetes/29006#-KPJuVeDE0O6TvDP9jia
@jsafrane: I hope this is what you had in mind.
Automatic merge from submit-queue (batch tested with PRs 40196, 40143, 40277)
Emit warning event when CronJob cannot determine starting time
**What this PR does / why we need it**:
In #39608, we've modified the error message for when a CronJob has too many unmet starting times to enumerate when figuring out the next starting time. This makes it more "actionable", and the user can now set a deadline to avoid running into this. However, the error message is still only controller-level AFAIK and thus not exposed to the user. From the user's perspective, there is no way to tell why the CronJob is not scheduling the next instance.
The PR adds a warning event in addition to the error in the controller manager's log.
**Which issue this PR fixes**: This is an addition to PR #39608 regarding #36311.
**Special notes for your reviewer**: cc @soltysh
**Release note**:
```release-note
```
Automatic merge from submit-queue (batch tested with PRs 40232, 40235, 40237, 40240)
Fixup pet terminology in log and user-facing events
**What this PR does / why we need it**:
Removes some user-facing strings for pet terminology.
Automatic merge from submit-queue
make client-go more authoritative
Builds on https://github.com/kubernetes/kubernetes/pull/40103
This moves a few more support package to client-go for origination.
1. restclient/watch - nodep
1. util/flowcontrol - used interface
1. util/integer, util/clock - used in controllers and in support of util/flowcontrol
Automatic merge from submit-queue
controller: decouple cleanup policy from deployment strategies
Deployments get cleaned up only when they are paused, they get scaled up/down,
or when the strategy that drives rollouts completes. This means that stuck
deployments that fall into none of the above categories will not get cleaned
up. Since cleanup is already safe by itself (we only delete old replica sets
that are synced by the replica set controller and have no replicas) we can
execute it for every deployment when there is no intention to rollback.
Fixes https://github.com/kubernetes/kubernetes/issues/40068
Deployments get cleaned up only when they are paused, they get scaled up/down,
or when the strategy that drives rollouts completes. This means that stuck
deployments that fall into none of the above categories will not get cleaned
up. Since cleanup is already safe by itself (we only delete old replica sets
that are synced by the replica set controller and have no replicas) we can
execute it for every deployment when there is no intention to rollback.
Automatic merge from submit-queue (batch tested with PRs 34763, 38706, 39939, 40020)
Use Statefulset instead in e2e and controller
Quick fix ref: #35534
We should finish the issue to meet v1.6 milestone.
Automatic merge from submit-queue
genericapiserver: cut off pkg/serviceaccount dependency
**Blocked** by pkg/api/validation/genericvalidation to be split up and moved into apimachinery.
Automatic merge from submit-queue
Do not list CronJob unmet starting times beyond deadline
**What this PR does / why we need it**:
See #36311. `getRecentUnmetScheduleTimes` gives up after 100 unmet times to avoid wasting too much CPU or memory generating all the times, as it generates them sequentially.
When concurrency is forbidden, this is conceptually unnecessary: we only need the last unmet start time. This suggests that when concurrency is forbidden, we could generate times by going backward in time from now. This is not very practical as CronJob currently relies on a package that only provides `Next` and no `Prev`. Hand-cooking a `Prev` does not seem like a good idea. I could submit a PR to the cron library to add a `Prev` method, and use that when concurrency is forbidden through something like `getLastUnmetScheduleTime`. This would be `O(1)` and there would be no limit involved.
(edit: actually, even for the other concurrency settings, we only start the last unmet start times -- there is a `TODO` in the controller to actually start all of them, but that is not implemented at the moment. This means the solution would apply, at least temporarily, to all concurrency settings).
cc @soltysh what do you think?
In the meantime, I would suggest doing something simple. Currently, the user has no way to configure anything to ensure that their CronJob will not get stuck if one job spans more than 100 unmet times.
`getRecentUnmetScheduleTimes` starts with an initial time corresponding to the last start (or to the creation of the CronJob, if nothing has started yet). However, when `StartingDeadlineSeconds` is set, the controller will not start anything that is older than the deadline, so if the last start is way beyond the deadline, we are generating potentially lots of unmet start times that will not be considered by the scheduler for scheduling anyway.
Consider a job running every minute, where the last instance has taken 120 minutes. This means there are more than 100 unmet times when we start counting from the last start time.
**The PR makes `getRecentUnmetScheduleTimes` only consider times that do not fall beyond the deadline.** Here, the CronJob can be configured with a `StartingDeadlineSeconds` of, say, 10 minutes. After the 120min job has run, `getRecentUnmetScheduleTimes` will only consider the times in the last 10 minutes from now, and will not get stuck.
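The core of that change can be sketched as follows (field names follow the CronJob API; the surrounding plumbing is omitted):
```go
package main

import (
	"fmt"
	"time"
)

// earliestScheduleTime bounds unmet-time enumeration: with a starting
// deadline set, anything older than (now - deadline) could not be started
// anyway, so it is never generated.
func earliestScheduleTime(lastSchedule time.Time, startingDeadlineSeconds *int64, now time.Time) time.Time {
	earliest := lastSchedule
	if startingDeadlineSeconds != nil {
		deadline := now.Add(-time.Duration(*startingDeadlineSeconds) * time.Second)
		if deadline.After(earliest) {
			earliest = deadline
		}
	}
	return earliest
}

func main() {
	last := time.Now().Add(-2 * time.Hour) // the 120min-job scenario above
	deadline := int64(600)                 // StartingDeadlineSeconds: 10 minutes
	fmt.Println(earliestScheduleTime(last, &deadline, time.Now()))
}
```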
As a side note on the maximum number of unmet times to use as a limit, in terms of CPU used by the controller: I have run a quick benchmark on my i7 mac. Schedules corresponding to "once a week" tend to be more expensive to generate unmet times for. Just FYI.
```
+--------------+---------------+--------------+
| SCHEDULE | MISSED STARTS | TIMING |
+--------------+---------------+--------------+
| */1 * * * ? | 100 | 383.645µs |
| */30 * * * ? | 100 | 354.765µs |
| 30 1 * * ? | 100 | 1.065124ms |
| 30 1 * * 0 | 100 | 1.80034ms |
| */1 * * * ? | 500 | 1.341365ms |
| */30 * * * ? | 500 | 1.814441ms |
| 30 1 * * ? | 500 | 8.475012ms |
| 30 1 * * 0 | 500 | 10.020613ms |
| */1 * * * ? | 1000 | 2.551697ms |
| */30 * * * ? | 1000 | 4.075813ms |
| 30 1 * * ? | 1000 | 17.674945ms |
| 30 1 * * 0 | 1000 | 19.149324ms |
| */1 * * * ? | 10000 | 25.725531ms |
| */30 * * * ? | 10000 | 87.520022ms |
| 30 1 * * ? | 10000 | 174.29216ms |
| 30 1 * * 0 | 10000 | 196.565748ms |
+--------------+---------------+--------------+
```
using
```go
package main

import (
	"fmt"
	"os"
	"strconv"
	"time"

	"github.com/olekukonko/tablewriter"
	"github.com/robfig/cron"
)

func timeSchedule(schedule string, iterations int) time.Duration {
	sched, err := cron.ParseStandard(schedule)
	if err != nil {
		panic(fmt.Sprintf("Unparseable schedule: %s", err))
	}
	start := time.Now()
	t := time.Now()
	for i := 1; i <= iterations; i++ {
		t = sched.Next(t)
	}
	return time.Since(start)
}

func main() {
	table := tablewriter.NewWriter(os.Stdout)
	table.SetHeader([]string{"Schedule", "Missed starts", "Timing"})
	schedules := []string{"*/1 * * * ?", "*/30 * * * ?", "30 1 * * ?", "30 1 * * 0"}
	iterationNums := []int{100, 500, 1000, 10000}
	for _, iterations := range iterationNums {
		for _, schedule := range schedules {
			table.Append([]string{schedule,
				strconv.Itoa(iterations),
				timeSchedule(schedule, iterations).String()})
		}
	}
	table.Render()
}
```
**Which issue this PR fixes**: fixes #36311
**Special notes for your reviewer**:
**Release note**:
```release-note
```
Automatic merge from submit-queue (batch tested with PRs 38592, 39949, 39946, 39882)
move api/errors to apimachinery
`pkg/api/errors` is a set of helpers around `meta/v1.Status` that help to create and interpret various apiserver errors. Things like `.NewNotFound` and `IsNotFound` pairings. This pull moves it into apimachinery for use by the clients and servers.
@smarterclayton @lavalamp First commit is the move plus minor fitting. Second commit is straight replace and generation.
Automatic merge from submit-queue
Updated unit tests
@janetkuo updated the flaky unit test to have the same structure with regard to uncasting as the rest of the tests. ptal
Automatic merge from submit-queue (batch tested with PRs 39807, 37505, 39844, 39525, 39109)
Update deployment equality helper
@mfojtik @janetkuo this is split out of https://github.com/kubernetes/kubernetes/pull/38714 to reduce the size of that PR, ptal
Automatic merge from submit-queue (batch tested with PRs 39807, 37505, 39844, 39525, 39109)
Made cache.Controller to be interface.
**What this PR does / why we need it**:
#37504
Automatic merge from submit-queue
replace global registry in apimachinery with global registry in k8s.io/kubernetes
We'd like to remove all globals, but our immediate problem is that a shared registry between k8s.io/kubernetes and k8s.io/client-go doesn't work. Since client-go makes a copy, we can actually keep a global registry with other globals in pkg/api for now.
@kubernetes/sig-api-machinery-misc @lavalamp @smarterclayton @sttts
Automatic merge from submit-queue (batch tested with PRs 39661, 39740, 39801, 39468, 39743)
fix nodeStatusUpdateRetry count exceeding condition judgement
When tryUpdateNodeStatus() returns an error (err != nil) but the subsequent nc.kubeClient.Core().Nodes().Get() returns no error (err == nil),
and we run nodeStatusUpdateRetry times, then err == nil when the for loop ends, so we cannot print the error info and continue; the condition judgement is therefore probably not right.
Maybe caused #38671
When tryUpdateNodeStatus() returns an error (err != nil) but the subsequent nc.kubeClient.Core().Nodes().Get() returns no error (err == nil),
and we run nodeStatusUpdateRetry times, then err == nil when the for loop ends, so we cannot print the error info and continue; the condition judgement is therefore wrong.
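A hedged sketch of the corrected shape (the refetch helper is hypothetical): keep the update error in its own variable so the refetch cannot clobber it.
```go
// updateNodeStatusWithRetry preserves the update error across refetches, so
// the caller reliably sees a failure after exhausting all retries.
func updateNodeStatusWithRetry(node *v1.Node) error {
	var updateErr error
	for i := 0; i < nodeStatusUpdateRetry; i++ {
		if updateErr = tryUpdateNodeStatus(node); updateErr == nil {
			return nil
		}
		fresh, getErr := getNode(node.Name) // hypothetical refetch helper
		if getErr != nil {
			return getErr
		}
		node = fresh
	}
	return updateErr // non-nil: every attempt failed
}
```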
Automatic merge from submit-queue (batch tested with PRs 39483, 39088, 38787)
daemonset: differentiate between cases in nodeShouldRun
Specifically, we need to differentiate between wanting to run,
should run, and should continue running. This is required to
support all taint effects and will improve reporting and end-user
debuggability.
fixes https://github.com/kubernetes/kubernetes/issues/28839 among other things
Specifically, we need to differentiate between wanting to run,
should run, and should continue running. This is required to
support all taint effects and will improve reporting and end-user
debuggability.
Automatic merge from submit-queue (batch tested with PRs 39684, 39577, 38989, 39534, 39702)
kubelet: request client auth certificates from certificate API.
This fixes kubeadm and --experiment-kubelet-bootstrap.
cc @liggitt
Automatic merge from submit-queue (batch tested with PRs 39694, 39383, 39651, 39691, 39497)
HPA Controller: Check for 0-sum request value
In certain conditions in which the set of metrics returned by Heapster
is completely disjoint from the set of pods returned by the API server,
we can have a request sum of zero, which can cause a panic (due to
division by zero). This checks for that condition.
Fixes #39680
**Release note**:
```release-note
Fixes an HPA-related panic due to division-by-zero.
```
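The guard reduces to the following (sketch; variable names are illustrative):
```go
package main

import "fmt"

// utilizationRatio guards the division: when the matched pods' request sum
// is zero (metrics and pods disjoint), return an error instead of panicking.
func utilizationRatio(metricsSum, requestSum int64) (float64, error) {
	if requestSum == 0 {
		return 0, fmt.Errorf("no metrics returned matched known pods")
	}
	return float64(metricsSum) / float64(requestSum), nil
}

func main() {
	if _, err := utilizationRatio(500, 0); err != nil {
		fmt.Println("handled instead of panicking:", err)
	}
}
```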
Automatic merge from submit-queue
certificates: add a signing profile to the internal types
Here is a strawman of a CertificateSigningProfile type which would be used by the certificates controller when configuring cfssl. Side question: what magnitude of change warrants a design proposal?
@liggitt @gtank
In certain conditions in which the set of metrics returned by Heapster
is completely disjoint from the set of pods returned by the API server,
we can have a request sum of zero, which can cause a panic (due to
division by zero). This checks for that condition.
Fixes #39680
Automatic merge from submit-queue (batch tested with PRs 39628, 39551, 38746, 38352, 39607)
Increasing times on reconciling volumes fixing impact to AWS.
**What this PR does / why we need it**:
We are currently blocked by API timeouts with PV volumes. See https://github.com/kubernetes/kubernetes/issues/39526. This is a workaround, not a fix.
**Special notes for your reviewer**:
A second PR will be dropped with CLI cobra options in it, but we are starting with increasing the reconciliation periods. I am dropping this without major testing and will test on our AWS account. Will be marked WIP until I run smoke tests.
**Release note**:
```release-note
Provide kube-controller-manager flags to control volume attach/detach reconciler sync. The duration of the syncs can be controlled, and the syncs can be shut off as well.
```
Automatic merge from submit-queue (batch tested with PRs 37845, 39439, 39514, 39457, 38866)
Log a warning message when failing to find the kind for a resource in the garbage collector controller
At this time, I do not think thirdparty API group-version resources should be handled by the garbage collector controller, and this call will actually fail: https://github.com/kubernetes/kubernetes/blob/master/pkg/controller/garbagecollector/garbagecollector.go#L565; as a result, the garbage collector controller fails to start.
Automatic merge from submit-queue
Remove jobs that do not exist from active list of CronJob
**What this PR does / why we need it**: This PR modifies the controller for CronJob to remove from the active job list any job that does not exist anymore, to avoid staying blocked in active state forever. See #37957.
**Which issue this PR fixes**: fixes #37957
**Special notes for your reviewer**:
**Release note**:
```
```
Automatic merge from submit-queue (batch tested with PRs 39284, 39367)
Remove HostRecord annotation (beta feature)
The annotation has made it to GA so this code should be deleted.
**Release note**:
```release-note
The 'endpoints.beta.kubernetes.io/hostnames-map' annotation is no longer supported. Users can use the 'Endpoints.subsets[].addresses[].hostname' field instead.
```
Automatic merge from submit-queue (batch tested with PRs 39092, 39126, 37380, 37093, 39237)
Endpoints with the TolerateUnready annotation should list Pods in the Terminating state
**What this PR does / why we need it**:
We are using preStop lifecycle hooks to gracefully remove a node from a cluster. This hook is potentially long-running, and after the preStop hook fires, DNS resolution of the soon-to-be-stopped Pod starts failing, which causes a failure there.
**Special notes for your reviewer**:
Would be great to backport that to 1.4, 1.3
**Release note**:
```release-note
Endpoints that tolerate unready Pods now also list Pods in the Terminating state
```
@bprashanth
Automatic merge from submit-queue (batch tested with PRs 39075, 39350, 39353)
Move pkg/api.{Context,RequestContextMapper} into pkg/genericapiserver/api/request
**Based on #39350**
Automatic merge from submit-queue
DaemonSet ObservedGeneration
Extracting the ObservedGeneration part from #31693. It also implements #7328 for DaemonSets.
cc @kargakis
Automatic merge from submit-queue (batch tested with PRs 39150, 38615)
Add work queues to PV controller
PV controller should not use Controller.Requeue, as it is not available in
shared informers. We need to implement our own work queues instead, where we
can enqueue volumes/claims as we want.
PV controller should not use Controller.Requeue, as it is not available in
shared informers. We need to implement our own work queues instead, where we
can enqueue volumes/claims as we want.
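A sketch of that pattern with client-go's workqueue (the sync callback is a placeholder):
```go
package pvqueue

import (
	"k8s.io/client-go/tools/cache"
	"k8s.io/client-go/util/workqueue"
)

// enqueueVolume is the event-handler side: reduce the object to its
// namespace/name key and add it to the queue (duplicate adds are coalesced).
func enqueueVolume(queue workqueue.Interface, obj interface{}) {
	if key, err := cache.MetaNamespaceKeyFunc(obj); err == nil {
		queue.Add(key)
	}
}

// volumeWorker is the consumer side: block for the next key, process it, and
// mark it done so it can be enqueued again later.
func volumeWorker(queue workqueue.Interface, syncVolume func(key string) error) {
	for {
		key, quit := queue.Get()
		if quit {
			return
		}
		if err := syncVolume(key.(string)); err != nil {
			// log and continue; the next event will re-enqueue the volume
		}
		queue.Done(key)
	}
}
```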
Automatic merge from submit-queue
Avoid unnecessary memory allocations
Low-hanging fruit in saving memory allocations. During our 5000-node kubemark runs I've seen this:
ControllerManager:
- 40.17% k8s.io/kubernetes/pkg/util/system.IsMasterNode
- 19.04% k8s.io/kubernetes/pkg/controller.(*PodControllerRefManager).Classify
Scheduler:
- 42.74% k8s.io/kubernetes/plugin/pkg/scheduler/algorithm/predicates.(*MaxPDVolumeCountChecker).filterVolumes
This PR is eliminating all of those.
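As an illustration of the kind of fix involved (hedged; the exact upstream change may differ), replacing a per-call regular expression with a plain string test removes the allocations entirely:
```go
package main

import (
	"fmt"
	"strings"
)

// isMasterNode avoids compiling and running a regexp on every call; a
// suffix test allocates nothing.
func isMasterNode(nodeName string) bool {
	return nodeName == "master" || strings.HasSuffix(nodeName, "-master")
}

func main() {
	fmt.Println(isMasterNode("kubemark-master"), isMasterNode("kubemark-node-42"))
}
```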
Automatic merge from submit-queue (batch tested with PRs 39093, 34273)
start breaking up controller manager into two pieces
This PR addresses: https://github.com/kubernetes/features/issues/88
This commit starts breaking the controller manager into two pieces, namely,
1. cloudprovider dependent piece
2. cloudprovider agnostic piece
the controller manager has the following control loops -
- nodeController
- volumeController
- routeController
- serviceController
- replicationController
- endpointController
- resourceQuotaController
- namespaceController
- deploymentController
etc..
among the above controller loops,
- nodeController
- volumeController
- routeController
- serviceController
are cloud provider dependent. As kubernetes has evolved tremendously, it has become difficult
for the different cloudproviders (currently 8) to make changes and iterate quickly. Moreover, the
cloudproviders are constrained by the kubernetes build/release lifecycle. This commit is the first
step in moving towards a kubernetes code base where cloud-provider-specific code will move out of
the core repository, and will be maintained by the cloud providers themselves.
I have added a new cloud provider called "external", which signals the controller-manager that
cloud provider specific loops are being run by another controller. I have added these changes in such
a way that the existing cloud providers are not affected. This change is completely backwards compatible, and does not require any changes to the way kubernetes is run today.
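The gating can be pictured as follows (sketch; function names are illustrative, not the real wiring):
```go
// Cloud-dependent loops start only when the controller manager itself owns
// the cloud provider; "external" defers them to another controller manager.
if cloudProviderName == "external" {
	glog.Info("external cloud provider specified: skipping cloud-dependent controller loops")
} else {
	startNodeController()
	startRouteController()
	startServiceController()
	startVolumeController()
}
```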
Finally, along with the controller-manager, the kubelet also has cloud-provider specific code, and that will be addressed in a different commit/issue.
@alena1108 @ibuildthecloud @thockin @dchen1107
**Special notes for your reviewer**:
@thockin - I'm making this **WIP** PR to ensure that I don't stray too far from everyone's view of how we should make this change. As you can see, only one controller, namely `nodecontroller`, can be disabled with the `--cloudprovider=external` flag at the moment. I'm working on cleaning up the `rancher-controller-manager` that I wrote to test this.
Secondly, I'd like to use this PR to address cloudprovider specific code in kubelet and api-server.
**Kubelet**
Kubelet uses provider specific code for node registration and for checking node-status. I thought of two ways to divide the kubelet:
- We could start a cloud provider specific kubelet on each host as a part of kubernetes, and this cloud-specific-kubelet does node registration and node-status checks.
- Create a kubelet plugin for each provider, which will be started by kubelet as a long running service. This plugin can be packaged as a binary.
I'm leaning towards the first option. That way, kubelet does not have to manage another process, and we can offload the process management of the cloud-provider-specific-kubelet to something like systemd.
@dchen1107 @thockin what do you think?
**Kube-apiserver**
Kube-apiserver uses provider specific code for distributing ssh keys to all the nodes of a cluster. Do you have any suggestions about how to address this?
**Release note**:
``` release-note
```
Addresses: kubernetes/features#88
This commit starts breaking the controller manager into two pieces, namely,
1. cloudprovider dependent piece
2. cloudprovider agnostic piece
the controller manager has the following control loops -
- nodeController
- volumeController
- routeController
- serviceController
- replicationController
- endpointController
- resourceQuotaController
- namespaceController
- deploymentController, etc.
among the above controller loops,
- nodeController
- volumeController
- routeController
- serviceController
are cloud provider dependent. As kubernetes has evolved tremendously, it has become difficult
for the different cloudproviders (currently 8) to make changes and iterate quickly. Moreover, the
cloudproviders are constrained by the kubernetes build/release lifecycle. This commit is the first
step in moving towards a kubernetes code base where cloud-provider-specific code will move out of
the core repository, and will be maintained by the cloud providers themselves.
Finally, along with the controller-manager, the kubelet also has cloud-provider specific code, and that will
be addressed in a different commit/issue.
Automatic merge from submit-queue
Fix DaemonSet cache mutation
**What this PR does / why we need it**: stops the DaemonSetController from mutating the DaemonSet shared informer cache
**Which issue this PR fixes** *(optional, in `fixes #<issue number>(, fixes #<issue_number>, ...)` format, will close that issue when PR gets merged)*: fixes#38985
cc @deads2k @mikedanese @lavalamp @smarterclayton
Automatic merge from submit-queue (batch tested with PRs 36888, 38180, 38855, 38590)
Fix variable shadowing in exponential backoff when deleting volumes
While https://github.com/kubernetes/kubernetes/pull/38339 implemented exponential backoff on
volume deletion, that PR suffers from a minor bug when the error thrown on volume deletion is anything other than a `VolumeInUse` error, in which case exponential backoff will not work.
This PR fixes that. This PR also makes unit tests more deterministic because exponential backoff changed the way operations are permitted.
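The bug class being fixed looks like this (generic illustration, not the exact upstream code):
```go
package main

// buggyDelete illustrates the shadowing: the inner `:=` declares a new err,
// so the err the backoff bookkeeping inspects stays nil on failure.
func buggyDelete(doDelete func() error) error {
	var err error
	if err := doDelete(); err != nil { // BUG: shadows the outer err
		_ = err // handled locally; the outer err is never set
	}
	return err // always nil: the caller records no backoff
}

// fixedDelete assigns to the existing variable instead.
func fixedDelete(doDelete func() error) error {
	var err error
	if err = doDelete(); err != nil {
		// classify the error; the caller now sees it and backs off
	}
	return err
}

func main() {}
```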
CC @jsafrane @childsb @wongma7
Automatic merge from submit-queue (batch tested with PRs 38426, 38917, 38891, 38935)
if statement must be true
**What this PR does / why we need it**:
If len(metrics.Items) == 0, the function would already have returned, so the statement if len(metrics.Items) > 0 is redundant; it must be true.
**Special notes for your reviewer**:
**Release note**:
```release-note
```
Add dsStoreSynced so we also wait on this cache when starting the
DaemonSetController.
Switch to using a fake clientset in the unit tests.
Fix TestNumberReadyStatus so it doesn't expect the cache to be mutated.
Automatic merge from submit-queue (batch tested with PRs 34353, 33837, 38878)
Revert "daemonset: bail out after we enqueue once"
I get overzealous sometimes.
Reverts kubernetes/kubernetes#38780
Automatic merge from submit-queue
Fix Recreate for Deployments and stop using events in e2e tests
Fixes https://github.com/kubernetes/kubernetes/issues/36453 by removing events from the deployment tests. The test about events during a Rolling deployment is redundant so I just removed it (we already have another test specifically for Rolling deployments).
Closes https://github.com/kubernetes/kubernetes/issues/32567 (preferred to use pod LISTs instead of a new status API field for replica sets that would add many more writes to replica sets).
@kubernetes/deployment
Automatic merge from submit-queue
daemonset: bail out after we enqueue once
This isn't terrible because we dedup in the queue but it's a waste of
cycles.
Automatic merge from submit-queue (batch tested with PRs 38154, 38502)
Rename "release_1_5" clientset to just "clientset"
We used to keep multiple releases in the main repo. Now that [client-go](https://github.com/kubernetes/client-go) does the versioning, there is no need to keep releases in the main repo. This PR renames the "release_1_5" clientset to just "clientset", clientset development will be done in this directory.
@kubernetes/sig-api-machinery @deads2k
```release-note
The main repository does not keep multiple releases of clientsets anymore. Please find previous releases at https://github.com/kubernetes/client-go
```
Automatic merge from submit-queue
controller: adopt pods only when controller is not deleted
When a replica set is deleted it will continue adopting pods, thus driving the worker that handles it into erroring out, because the adoption is [always cancelled](59c313730c/pkg/controller/controller_ref_manager.go (L110)) in the controller reference manager.
```
E1212 14:40:31.245773 7964 replica_set.go:616] cancel the adopt attempt for pod e2e-tests-deployment-2rr3m_test-rollover-deployment-1981456318-73c3m_791e16cb-c070-11e6-a234-68f72840e7df because the controlller is being deleted
E1212 14:40:31.258462 7964 replica_set.go:616] cancel the adopt attempt for pod e2e-tests-deployment-2rr3m_test-rollover-deployment-1981456318-73c3m_791e16cb-c070-11e6-a234-68f72840e7df because the controlller is being deleted
E1212 14:40:31.259131 7964 replica_set.go:616] cancel the adopt attempt for pod e2e-tests-deployment-2rr3m_test-rollover-deployment-1981456318-73c3m_791e16cb-c070-11e6-a234-68f72840e7df because the controlller is being deleted
E1212 14:40:31.259149 7964 replica_set.go:616] cancel the adopt attempt for pod e2e-tests-deployment-2rr3m_test-rollover-deployment-1981456318-wrmt8_791e3d46-c070-11e6-a234-68f72840e7df because the controlller is being deleted
I1212 14:40:31.268012 7964 deployment_controller.go:314] Error syncing deployment e2e-tests-deployment-2rr3m/test-rollover-deployment: Operation cannot be fulfilled on deployments.extensions "test-rollover-deployment": the object has been modified; please apply your changes to the latest version and try again
E1212 14:40:31.277252 7964 replica_set.go:616] cancel the adopt attempt for pod e2e-tests-deployment-2rr3m_test-rollover-deployment-1981456318-73c3m_791e16cb-c070-11e6-a234-68f72840e7df because the controlller is being deleted
E1212 14:40:31.277276 7964 replica_set.go:616] cancel the adopt attempt for pod e2e-tests-deployment-2rr3m_test-rollover-deployment-1981456318-wrmt8_791e3d46-c070-11e6-a234-68f72840e7df because the controlller is being deleted
E1212 14:40:31.277287 7964 replica_set.go:616] cancel the adopt attempt for pod e2e-tests-deployment-2rr3m_test-rollover-deployment-1981456318-bmqpn_81482114-c070-11e6-a234-68f72840e7df because the controlller is being deleted
E1212 14:40:31.289148 7964 replica_set.go:616] cancel the adopt attempt for pod e2e-tests-deployment-2rr3m_test-rollover-deployment-1981456318-b6s4x_82fa8343-c070-11e6-a234-68f72840e7df because the controlller is being deleted
E1212 14:40:31.289169 7964 replica_set.go:616] cancel the adopt attempt for pod e2e-tests-deployment-2rr3m_test-rollover-deployment-1981456318-73c3m_791e16cb-c070-11e6-a234-68f72840e7df because the controlller is being deleted
E1212 14:40:31.289176 7964 replica_set.go:616] cancel the adopt attempt for pod e2e-tests-deployment-2rr3m_test-rollover-deployment-1981456318-wrmt8_791e3d46-c070-11e6-a234-68f72840e7df because the controlller is being deleted
E1212 14:40:31.289181 7964 replica_set.go:616] cancel the adopt attempt for pod e2e-tests-deployment-2rr3m_test-rollover-deployment-1981456318-bmqpn_81482114-c070-11e6-a234-68f72840e7df because the controlller is being deleted
```
@kubernetes/deployment @caesarxuchao
Automatic merge from submit-queue
Curating Owners: pkg/controller
cc @jsafrane @mikedanese @bprashanth @derekwaynecarr @thockin @saad-ali
In an effort to expand the existing pool of reviewers and establish a
two-tiered review process (first someone **lgtms** and then someone
experienced in the project **approves**), we are adding new reviewers to
existing owners files.
## If You Care About the Process:
We did this by algorithmically figuring out who’s contributed code to
the project and in what directories. Unfortunately, that doesn’t work
perfectly: people that have made mechanical code changes (e.g. change the
copyright header across all directories) end up as reviewers in lots of
places.
Instead of using pure commit data, we generated an excessively large
list of reviewers and pruned based on all time commit data, recent
commit data and review data (number of PRs commented on).
At this point we have a decent list of reviewers, but it needs one last
pass for fine tuning.
## TLDR:
As an owner of a sig/directory and a leader of the project, here’s what
we need from you:
1. Use PR https://github.com/kubernetes/kubernetes/pull/35715 as an example.
2. The pull-request is made editable, please edit the OWNERS file to add
the names of people that should be reviewing code in the future in the **reviewers** section. You probably do NOT need to modify the **approvers** section.
3. Notify me if you want some OWNERS file to be removed. Being an approver or reviewer
of a parent directory makes you a reviewer/approver of the subdirectories too, so not all
OWNERS files may be necessary.
4. Please use ALIAS if you want to use the same list of people over and
over again (don't hesitate to ask me for help, or use the pull-request
above as an example)
Automatic merge from submit-queue
bump log level on service status update
ref: https://github.com/kubernetes/kubernetes/issues/38349
I tried to reproduce the problem in #38349 and failed. Not sure why the service status update failed and the service controller skipped the status update in the next round. What I have observed is that if the service status update fails due to a conflict, the next round of processServiceUpdate will correct it.
Bumping log level to get a better signal when it occurs.
Automatic merge from submit-queue (batch tested with PRs 38608, 38299)
controller: set unavailableReplicas correctly when scaling down
```
deployment_controller.go:299] Error syncing deployment
e2e-tests-kubectl-2l7xx/e2e-test-nginx-deployment:
Deployment.extensions "e2e-test-nginx-deployment" is invalid:
status.unavailableReplicas: Invalid value: -1:
must be greater than or equal to 0
```
The validation error above occurs usually when a Deployment is
scaled down. In such a case we should default unavailableReplicas
to 0 instead of making an invalid api call.
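The fix reduces to a clamp (sketch):
```go
// unavailableReplicas never goes negative during a scale-down, which would
// fail API validation.
func unavailableReplicas(total, available int32) int32 {
	if u := total - available; u > 0 {
		return u
	}
	return 0
}
```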
@kubernetes/deployment
Automatic merge from submit-queue
Remove json serialization annotations from internal types
fixes #3933
Internal types should never be serialized, and including json serialization tags on them makes it possible to accidentally do that without realizing it.
fixes in this PR:
* types
* [x] remove json tags from internal types
* [x] fix references from serialized types to internal ObjectMeta
* generation
* [x] remove generated json codecs for internal types (they should never be used)
* kubectl
* [x] fix `apply` to operate on versioned object
* [x] fix sorting by field to operate on versioned object
* [x] fix `--record` to build annotation patch using versioned object
* hpa
* [x] fix unmarshaling to internal CustomMetricTargetList in validation
* thirdpartyresources
* [x] fix encoding API responses using internal ObjectMeta
* tests
* [x] fix tests to use versioned objects when checking encoded content
* [x] fix tests passing internal objects to generic printers
follow ups (will open tracking issues or additional PRs):
- [ ] remove json tags from internal kubeconfig types (`kubectl config set` pathfinding needs to work against external type)
- [ ] HPA should version CustomMetricTargetList serialization in annotations
- [ ] revisit how TPR resthandlers encode objects
- [ ] audit and add tests for printer use (human-readable printer requires internal versions, generic printers require external versions)
- [ ] add static analysis tests preventing new internal types from adding tags
- [ ] add static analysis tests requiring json tags on external types (and enforcing lower-case first letter)
- [ ] add more tests for `kubectl get` exercising known and unknown types with all output options
Automatic merge from submit-queue (batch tested with PRs 38354, 38371)
Add GetOptions parameter to Get() calls in client library
Ref #37473
This PR is super mechanical - the non-trivial commits are:
- Update client generator
- Register GetOptions in batch/v2alpha1 group
Automatic merge from submit-queue (batch tested with PRs 38278, 37770)
Refactor REST storage to use generic defaults
This removes the repetition in the REST storage builders by moving the logic to `restoptions.ApplyOptions`. `registry.StorageWithCacher`/`generic.StorageDecorator` no longer assume that they can build the `keyFunc` for arbitrary objects. `restoptions.ApplyOptions` uses the `registry.Store`'s `KeyFunc` for its call to `generic.StorageDecorator`.
```release-note
Cluster federation servers have changed the location in etcd where federated services are stored, so existing federated services must be deleted and recreated. Before upgrading, export all federated services from the federation server and delete the services. After upgrading the cluster, recreate the federated services from the exported data.
```
Automatic merge from submit-queue (batch tested with PRs 38377, 36365, 36648, 37691, 38339)
Exponential back off when volume delete fails
**What this PR does / why we need it**:
This PR implements the ability for pv_controller to back off when deleting a volume fails in the plugin API.
**Which issue this PR fixes** *(optional, in `fixes #<issue number>(, fixes #<issue_number>, ...)` format, will close that issue when PR gets merged)*:
Partly fixes #38295, but I think volume deletion is the most problematic thing happening in pv_controller without any sort of backoff.
After this change the attempts of volume deletion look like:
```
controller : I1208 00:18:35.532061 16388 aws_util.go:55] Error deleting EBS Disk volume aws://us-east-1d/vol-abcdefg: VolumeInUse: Volume vol-abcdefg is currently attached to i-1234567
controller : I1208 00:20:50.578325 16388 aws_util.go:55] Error deleting EBS Disk volume aws://us-east-1d/vol-abcdefg: VolumeInUse: Volume vol-abcdefg is currently attached to i-1234567
controller : I1208 00:23:05.563488 16388 aws_util.go:55] Error deleting EBS Disk volume aws://us-east-1d/vol-abcdefg: VolumeInUse: Volume vol-abcdefg is currently attached to i-1234567
controller : I1208 00:25:20.599158 16388 aws_util.go:55] Error deleting EBS Disk volume aws://us-east-1d/vol-abcdefg: VolumeInUse: Volume vol-abcdefg is currently attached to i-1234567
controller : I1208 00:27:35.560009 16388 aws_util.go:55] Error deleting EBS Disk volume aws://us-east-1d/vol-abcdefg: VolumeInUse: Volume vol-abcdefg is currently attached to i-1234567
controller : I1208 00:29:50.594967 16388 aws_util.go:55] Error deleting EBS Disk volume aws://us-east-1d/vol-abcdefg: VolumeInUse: Volume vol-abcdefg is currently attached to i-1234567
controller : I1208 00:32:05.539168 16388 aws_util.go:55] Error deleting EBS Disk volume aws://us-east-1d/vol-abcdefg: VolumeInUse: Volume vol-abcdefg is currently attached to i-1234567
controller : I1208 00:34:20.581665 16388 aws_util.go:55] Error deleting EBS Disk volume aws://us-east-1d/vol-abcdefg: VolumeInUse: Volume vol-abcdefg is currently attached to i-1234567
```
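A generic sketch of the backoff idea using apimachinery's wait helpers; `deleteVolume` is a hypothetical stand-in for the plugin call, and the real controller tracks backoff per operation rather than blocking like this:
```go
import (
	"time"

	"k8s.io/apimachinery/pkg/util/wait"
)

// deleteWithBackoff retries a failing delete at exponentially growing
// intervals instead of hammering the cloud API on every resync.
func deleteWithBackoff(deleteVolume func() error) error {
	backoff := wait.Backoff{Duration: 15 * time.Second, Factor: 2.0, Steps: 8}
	return wait.ExponentialBackoff(backoff, func() (bool, error) {
		if err := deleteVolume(); err != nil {
			return false, nil // not done yet; retry after a longer interval
		}
		return true, nil // volume deleted
	})
}
```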
Automatic merge from submit-queue (batch tested with PRs 38377, 36365, 36648, 37691, 38339)
controller: sync stuck deployments in a secondary queue
@kubernetes/deployment this makes Deployments not depend on a tight resync interval in order to estimate progress.
This makes pv_controller back off exponentially when deleting a
volume fails in the cloud API. It ensures that we aren't making
too many calls to the cloud API.
Automatic merge from submit-queue
Add integration tests for desired state of world populator
This adds tests for the code introduced here:
https://github.com/kubernetes/kubernetes/issues/26994
Via an integration test we can now verify that if a pod delete
event is somehow missed by the AttachDetach controller, it still
gets cleaned up by the Desired State of World populator.
Automatic merge from submit-queue (batch tested with PRs 37608, 37103, 37320, 37607, 37678)
Fix issue #37377: Report an event on successful PVC provisioning
This is a simple patch to fix issue #37377: on successful PVC provisioning, an event is emitted so it's clear that provisioning actually succeeded.
cc: @jsafrane
Automatic merge from submit-queue (batch tested with PRs 37945, 37498, 37391, 37209, 37169)
Refactor certificate controller to make approval an interface
@mikedanese
Automatic merge from submit-queue
SetNodeUpdateStatusNeeded whenever nodeAdd event is received
**What this PR does / why we need it**:
Bug fix: call SetNodeStatusUpdateNeeded for a node whenever its API object is added. This is to ensure that we don't lose the attached list of volumes in the node when its API object is deleted and recreated.
fixes https://github.com/kubernetes/kubernetes/issues/37586 and https://github.com/kubernetes/kubernetes/issues/37585
Automatic merge from submit-queue
rescale immediately if the basic constraints are not satisfied
Refactors reconcileAutoscaler: if the basic constraints are not satisfied, we should rescale the target ref immediately.
Automatic merge from submit-queue
fix rbac informer: its listers are all internal
Fixes https://github.com/kubernetes/kubernetes/issues/37615
The rbac informer still uses internal types in its listers, which means it must use internal clients for evaluation. Since it's running inside the API server, this seems ok for now and we can/should fix it when generated informers come along. This just patches us to keep RBAC working.
@kubernetes/sig-auth @sttts @liggitt this is broken in master, let's get it sorted quickly.
Automatic merge from submit-queue
Add namespace index for limit ranger
Without this PR I'm seeing a huge number of lines like this:
```
Index with name namespace does not exist
```
Those come from the LimitRanger admission controller; this PR fixes them.
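For reference, a minimal sketch of registering such an index with client-go's cache package (the LimitRanger wiring itself is elided):
```go
import "k8s.io/client-go/tools/cache"

// A store with a "namespace" index lets the admission controller fetch all
// LimitRanges in a namespace without scanning every object in the cache.
var limitRangeStore = cache.NewIndexer(
	cache.MetaNamespaceKeyFunc,
	cache.Indexers{cache.NamespaceIndex: cache.MetaNamespaceIndexFunc},
)
```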
Automatic merge from submit-queue
controller manager refactors
The controller manager needs some significant cleanup. This starts us down the path by respecting parameters like `stopCh`, simplifying discovery checks, removing unnecessary parameters, preventing unnecessary fatals, and using our client builder.
@sttts @ncdc
Automatic merge from submit-queue
Fix package aliases to follow golang convention
Some package aliases do not align with the Go naming convention (https://blog.golang.org/package-names). This PR fixes them, and also adds a verify script and presubmit checks.
Fixes #35070.
cc/ @timstclair @Random-Liu
Many providers need to do some sort of node name -> IP or instanceID
lookup before they can use the list of hostnames passed to
EnsureLoadBalancer/UpdateLoadBalancer.
This change just passes the full Node object instead of simply the node
name, allowing providers to use the node's provider ID and cached
addresses without additional lookups. Using `node.Name` reproduces the
old behaviour.
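Approximately, the interface change looks like this (paraphrased from the cloudprovider package; exact signatures vary by release):
```go
import v1 "k8s.io/api/core/v1"

type LoadBalancer interface {
	// These previously took hosts []string; passing *v1.Node gives
	// providers the provider ID and cached addresses for free.
	EnsureLoadBalancer(clusterName string, service *v1.Service, nodes []*v1.Node) (*v1.LoadBalancerStatus, error)
	UpdateLoadBalancer(clusterName string, service *v1.Service, nodes []*v1.Node) error
}
```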
Automatic merge from submit-queue
hold namespaces briefly before processing deletion
possible fix for #36891
in HA scenarios (either HA apiserver or HA etcd), it is possible for deletion of resources from namespace cleanup to race with creation of objects in the terminating namespace
HA master timeline:
1. "delete namespace n" API call goes to apiserver 1, deletion timestamp is set in etcd
2. namespace controller observes namespace deletion, starts cleaning up resources, lists deployments
3. "create deployment d" API call goes to apiserver 2, gets persisted to etcd
4. apiserver 2 observes namespace deletion, stops allowing new objects to be created
5. namespace controller finishes deleting listed deployments, deletes namespace
HA etcd timeline:
1. "create deployment d" API call goes to apiserver, gets persisted to etcd
2. "delete namespace n" API call goes to apiserver, deletion timestamp is set in etcd
3. namespace controller observes namespace deletion, starts cleaning up resources, lists deployments
4. list call goes to non-leader etcd member that hasn't observed the new deployment or the deleted namespace yet
5. namespace controller finishes deleting the listed deployments, deletes namespace
In both cases, simply waiting to clean up the namespace (either for etcd members to observe objects created at the last second in the namespace, or for other apiservers to observe the namespace move to terminating phase and disallow additional creations) resolves the issue
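A sketch of the "hold briefly" idea with hypothetical names (`gracePeriod`, `deleteAllContent`); the real controller's plumbing differs:
```go
import (
	"time"

	v1 "k8s.io/api/core/v1"
	"k8s.io/client-go/util/workqueue"
)

type namespaceController struct {
	queue            workqueue.DelayingInterface
	gracePeriod      time.Duration
	deleteAllContent func(ns *v1.Namespace) error // actual cleanup elided
}

func (c *namespaceController) process(ns *v1.Namespace, key string) error {
	if ns.DeletionTimestamp == nil {
		return nil // namespace is not terminating
	}
	// Hold briefly so every apiserver and etcd member can observe the
	// Terminating state (and last-second creations) before cleanup starts.
	if age := time.Since(ns.DeletionTimestamp.Time); age < c.gracePeriod {
		c.queue.AddAfter(key, c.gracePeriod-age)
		return nil
	}
	return c.deleteAllContent(ns)
}
```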
Possible other fixes:
* do a second sweep of objects before deleting the namespace
* have the namespace controller check for and clean up objects in namespaces that no longer exist
* ...?
Automatic merge from submit-queue
Add more logging around Pod deletion
After this PR we'll have at least V(2) level log near all Pod deletions.
@saad-ali - this is required by GKE to help with diagnosing possible problems.
cc @dchen1107 @wojtek-t
Automatic merge from submit-queue
Add logs near force deletions of Pods
We should always log something when the control plane force-deletes a Pod.
@davidopp I think that logging force deletions is enough, or do you think we should log soft deletions as well?
cc @deads2k
Automatic merge from submit-queue
Fix kubectl Strategic Merge Patch compatibility
As @smarterclayton pointed out in [comment1](https://github.com/kubernetes/kubernetes/pull/35647#pullrequestreview-8290820) and [comment2](https://github.com/kubernetes/kubernetes/pull/35647#pullrequestreview-8290847) in PR #35647,
we cannot assume that all API servers publish their version or that they share the same version.
This PR removes all calls to GetServerSupportedSMPatchVersion().
Change the behavior of `apply` and `edit` to retry with the old patch
version if the new version fails. Other uses of SMPatch default to the
new version, since they don't update lists of primitives.
fixes #36916
cc: @pwittrock @smarterclayton
This PR is to fix the issue in converting AWS volume IDs from mount
paths. Currently three AWS volume ID formats are supported. The
following lists examples of those three formats and their corresponding
global mount paths:
1. aws:///vol-123456
(/var/lib/kubelet/plugins/kubernetes.io/aws-ebs/mounts/aws/vol-123456)
2. aws://us-east-1/vol-123456
(/var/lib/kubelet/plugins/kubernetes.io/mounts/aws/us-east-1/vol-123456)
3. vol-123456
(/var/lib/kubelet/plugins/kubernetes.io/mounts/aws/us-east-1/vol-123456)
For the first two cases, we need to check the mount path and convert
them back to the original format.
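A rough sketch of the conversion for the second case (hypothetical helper; the real code checks the path shape and handles all three formats):
```go
import (
	"fmt"
	"strings"
)

// awsVolumeIDFromPath rebuilds "aws://<region>/<volume>" from the last two
// segments of a global mount path such as
// .../plugins/kubernetes.io/aws-ebs/mounts/aws/us-east-1/vol-123456.
func awsVolumeIDFromPath(mountPath string) (string, error) {
	parts := strings.Split(strings.TrimSuffix(mountPath, "/"), "/")
	if len(parts) < 2 {
		return "", fmt.Errorf("unexpected mount path: %s", mountPath)
	}
	return fmt.Sprintf("aws://%s/%s", parts[len(parts)-2], parts[len(parts)-1]), nil
}
```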
Automatic merge from submit-queue
Restore event messages for replica sets in the deployment controller
Needed to unblock release upgrade tests (see https://github.com/kubernetes/kubernetes/issues/36453)
@kubernetes/deployment ptal
Automatic merge from submit-queue
Fix strategic patch for list of primitive type with merge semantics
Fix strategic patch for lists of primitive type when the patch strategy is `merge`.
Before: we could not replace or delete an item in a list of primitives, e.g. strings, when the patch strategy was `merge`; new items were always appended to the list.
This change generates a map to update the list of primitives.
A server with this patch will accept either a new patch or an old patch.
The client will find out the API server version before generating the patch.
Fixes #35163, #32398
cc: @pwittrock @fabianofranz
``` release-note
Fix strategic patch for list of primitive type when patch strategy is `merge` to remove deleted objects.
```
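As a hedged illustration of the map-based update (the directive name is taken from apimachinery's strategicpatch package; the field is an example):
```go
// Deleting "orphan" from a primitive list under patch strategy `merge` is
// expressed as a parallel delete directive rather than an append.
var patch = map[string]interface{}{
	"$deleteFromPrimitiveList/finalizers": []string{"orphan"},
}
```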
Automatic merge from submit-queue
Do not handle AlreadyExists errors yet
Until we fix https://github.com/kubernetes/kubernetes/issues/29735 (use a new hashing algorithm) we should not handle AlreadyExists (it was added recently in the perma-failed PR).
@kubernetes/deployment
Automatic merge from submit-queue
Better messaging for missing volume binaries on host
**What this PR does / why we need it**:
When mount binaries are not present on a host, the error returned is generic.
This change checks for the required mount binaries before mounting and returns a user-friendly error message.
The change is specific to GCI, and the flag is experimental for now.
https://github.com/kubernetes/kubernetes/issues/36098
**Release note**:
Introduces a flag `check-node-capabilities-before-mount` which if set, enables a check (`CanMount()`) prior to mount operations to verify that the required components (binaries, etc.) to mount the volume are available on the underlying node. If the check is enabled and `CanMount()` returns an error, the mount operation fails. Implements the `CanMount()` check for NFS.
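A sketch of the shape of the check (interface paraphrased; only the method name comes from this change):
```go
// Mounter is a stand-in for the volume plugin's mounter interface.
type Mounter interface {
	// CanMount verifies the host has the components required to mount
	// the volume (e.g. /sbin/mount.nfs for NFS) and returns a
	// descriptive error before any mount is attempted.
	CanMount() error
}
```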
Sample output post-change:
```
rkouj@rkouj0:~/go/src/k8s.io/kubernetes$ kubectl describe pods
Name:           sleepyrc-fzhyl
Namespace:      default
Node:           e2e-test-rkouj-minion-group-oxxa/10.240.0.3
Start Time:     Mon, 07 Nov 2016 21:28:36 -0800
Labels:         name=sleepy
Status:         Pending
IP:
Controllers:    ReplicationController/sleepyrc
Containers:
  sleepycontainer1:
    Container ID:
    Image:          gcr.io/google_containers/busybox
    Image ID:
    Port:
    Command:
      sleep
      6000
    QoS Tier:
      cpu:          Burstable
      memory:       BestEffort
    Requests:
      cpu:          100m
    State:          Waiting
      Reason:       ContainerCreating
    Ready:          False
    Restart Count:  0
    Environment Variables:
Conditions:
  Type          Status
  Initialized   True
  Ready         False
  PodScheduled  True
Volumes:
  data:
    Type:       NFS (an NFS mount that lasts the lifetime of a pod)
    Server:     127.0.0.1
    Path:       /export
    ReadOnly:   false
  default-token-d13tj:
    Type:       Secret (a volume populated by a Secret)
    SecretName: default-token-d13tj
Events:
  FirstSeen  LastSeen  Count  From  SubobjectPath  Type  Reason  Message
  ---------  --------  -----  ----  -------------  ----  ------  -------
  7s  7s  1  {default-scheduler }  Normal  Scheduled  Successfully assigned sleepyrc-fzhyl to e2e-test-rkouj-minion-group-oxxa
  6s  3s  4  {kubelet e2e-test-rkouj-minion-group-oxxa}  Warning  FailedMount  Unable to mount volume kubernetes.io/nfs/32c7ef16-a574-11e6-813d-42010af00002-data (spec.Name: data) on pod sleepyrc-fzhyl (UID: 32c7ef16-a574-11e6-813d-42010af00002). Verify that your node machine has the required components before attempting to mount this volume type. Required binary /sbin/mount.nfs is missing
```
Automatic merge from submit-queue
Implement external provisioning proposal
In other words, add "provisioned-by" annotation to all PVCs that should be provisioned dynamically.
Most of the changes are actually in tests.
@kubernetes/sig-storage
Automatic merge from submit-queue
Use generation in pod disruption budget
Fixes #35324
Previously it was possible to use allowedDisruptions calculated for the previous spec with the current spec. With the generation check, the API server always makes sure that allowedDisruptions was calculated for the current spec.
At the same time I set the registry policy to only accept updates if the version based on which the update was made matches the current version in etcd. That ensures that parallel eviction executions don't consume the same allowed disruption.
cc: @davidopp @kargakis @wojtek-t
Automatic merge from submit-queue
Use available informers in quota replenishment
More iteration on the goal of using informers where available in the quota system, this time adding persistent volume claims so the same informer is used here and in https://github.com/kubernetes/kubernetes/pull/36316.
Currently, the HPA considers unready pods the same as ready pods when
looking at their CPU and custom metric usage. However, pods frequently
use extra CPU during initialization, so we want to consider them
separately.
This commit causes the HPA to consider unready pods as having 0 CPU
usage when scaling up, and ignores them when scaling down. If, when
scaling up, factoring the unready pods in at 0 CPU would cause a
downscale instead, we choose not to scale. Otherwise, we scale up by
the reduced amount calculated by factoring the pods in at zero CPU
usage.
The effect is that unready pods cause the autoscaler to be a bit more
conservative -- large increases in CPU usage can still cause scales,
even with unready pods in the mix, but will not cause the scale factors
to be as large, in anticipation of the new pods later becoming ready and
handling load.
Similarly, if there are pods for which no metrics have been retrieved,
these pods are treated as having 100% of the requested metric when
scaling down, and 0% when scaling up. As above, this cannot change the
direction of the scale.
This commit also changes the HPA to ignore superfluous metrics -- as
long as metrics for all ready pods are present, the HPA will make
scaling decisions. Currently, this only works for CPU. For custom
metrics, we cannot identify which metrics go to which pods if we get
superfluous metrics, so we abort the scale.
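A simplified sketch of that direction guard (all names illustrative; the real controller works from per-pod metrics and resource requests):
```go
import "math"

// desiredReplicas computes the usage ratio from ready pods; on scale-up it
// recomputes with unready pods counted at zero usage, and if that would
// flip the direction it declines to scale.
func desiredReplicas(target float64, readyUsage []float64, unready int) int {
	var sum float64
	for _, u := range readyUsage {
		sum += u
	}
	ready := len(readyUsage)
	all := ready + unready
	ratio := (sum / float64(ready)) / target // ready pods only

	if ratio <= 1.0 {
		// Scaling down (or steady): unready pods are ignored entirely.
		return int(math.Ceil(ratio * float64(ready)))
	}
	// Scaling up: refigure the ratio with unready pods at 0 usage.
	rebalanced := sum / (target * float64(all))
	if rebalanced < 1.0 {
		return all // factoring in unready pods would flip direction: don't scale
	}
	return int(math.Ceil(rebalanced * float64(all)))
}
```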
Automatic merge from submit-queue
Rename ScheduledJobs to CronJobs
I went with @smarterclayton's idea of registering named types in the schema. This way we can support both the new (CronJobs) and old (ScheduledJobs) resource names. Fixes #32150.
fyi @erictune @caesarxuchao @janetkuo
Not ready yet, but getting close...
**Release note**:
```release-note
Rename ScheduledJobs to CronJobs.
```
Automatic merge from submit-queue
Add more events to disruption controller
To provide users with information that their PDB may not be working as intended.
cc: @davidopp
Automatic merge from submit-queue
Fix possible race in operationNotSupportedCache
Because we can run multiple workers to delete namespaces simultaneously, the
operationNotSupportedCache needs to be guarded with a mutex to avoid concurrent
map read/write errors.
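The fix amounts to the standard mutex-guarded map pattern, sketched here with illustrative names:
```go
import "sync"

// operationNotSupportedCache records operations known to be unsupported;
// multiple namespace-deletion workers read and write it concurrently.
type operationNotSupportedCache struct {
	lock sync.Mutex
	m    map[string]bool
}

func (c *operationNotSupportedCache) isSupported(key string) bool {
	c.lock.Lock()
	defer c.lock.Unlock()
	return !c.m[key]
}

func (c *operationNotSupportedCache) setNotSupported(key string) {
	c.lock.Lock()
	defer c.lock.Unlock()
	c.m[key] = true
}
```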