- Increase the global log level for the broadcaster's logging to 3 so that users can hide event messages by lowering the verbosity level; this reduces information noise.
- Ensure the context is properly injected into the broadcaster, so that the -v flag value is also used by that broadcaster rather than the global value above (see the sketch after this list).
- test: use cancellation from ktesting
- golangci-hints: checked error return value
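A minimal sketch of the idea behind the first two items, assuming klog's contextual logging as used in client-go; the function name and key/value fields are illustrative, not the broadcaster's actual code:

```go
package events

import (
	"context"

	"k8s.io/klog/v2"
)

// logEvent illustrates both bullets above: the logger comes from the
// injected context (so the component's -v flag applies) and event messages
// are emitted at verbosity 3, letting users hide them by running with a
// lower verbosity.
func logEvent(ctx context.Context, eventType, reason, message string) {
	logger := klog.FromContext(ctx)
	logger.V(3).Info("Event occurred", "eventType", eventType, "reason", reason, "message", message)
}
```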
Controls the lifecycle of ServiceCIDRs: adds finalizers and
sets the Ready condition in status when they are created, and
removes the finalizers once it is safe to do so (no orphan IPAddresses).
An IPAddress is orphaned if no ServiceCIDR contains it (see the sketch below).
Change-Id: Icbe31e1ed8525fa04df3b741c8a817e5f2a49e80
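A simplified sketch of the orphan rule using net/netip; the real controller works with the ServiceCIDR and IPAddress API objects rather than raw prefixes:

```go
package servicecidr

import "net/netip"

// isOrphan reports whether ip is contained in none of the given CIDRs,
// mirroring the rule above: finalizers may only be removed once no
// IPAddress would be orphaned.
func isOrphan(ip netip.Addr, serviceCIDRs []netip.Prefix) bool {
	for _, cidr := range serviceCIDRs {
		if cidr.Contains(ip) {
			return false
		}
	}
	return true
}
```

For example, with `serviceCIDRs` containing only `10.96.0.0/16`, the address `10.96.0.10` is not orphaned while `192.168.1.1` is.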
Controllers enabled by default should define feature gates in
ControllerDescriptor.requiredFeatureGates, not during descriptor
registration in NewControllerDescriptors.
- This metadata can be used to handle controllers in a generic way.
- This enables showing feature gated controllers in kube-controller-manager's help.
- It is possible to obtain a controllerName in the InitFunc so it can be passed down to and used by the controller.
Metadata about a controller:
- name
- requiredFeatureGates
- isDisabledByDefault
- isCloudProviderController
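Concretely, a descriptor carrying this metadata might look like the following sketch; field names follow the list above, while the actual type lives in cmd/kube-controller-manager/app and differs in detail:

```go
package app

import (
	"context"

	"k8s.io/component-base/featuregate"
)

// InitFunc is a simplified stand-in for the real controller init function;
// note that it receives the controllerName so the controller can use it.
type InitFunc func(ctx context.Context, controllerName string) error

// ControllerDescriptor is a sketch of the metadata listed above.
type ControllerDescriptor struct {
	name                      string
	initFunc                  InitFunc
	requiredFeatureGates      []featuregate.Feature
	isDisabledByDefault       bool
	isCloudProviderController bool
}

// IsEnabled reports whether all required feature gates are on, which is how
// feature-gated controllers can be handled generically (and surfaced in
// kube-controller-manager's help).
func (d *ControllerDescriptor) IsEnabled(gates featuregate.FeatureGate) bool {
	for _, fg := range d.requiredFeatureGates {
		if !gates.Enabled(fg) {
			return false
		}
	}
	return true
}
```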
* Job: Handle the error returned from the AddEventHandler function (see the sketch after this list)
Signed-off-by: Yuki Iwai <yuki.iwai.tz@gmail.com>
* Use an error message similar to CronJob's
Signed-off-by: Yuki Iwai <yuki.iwai.tz@gmail.com>
* Clean up error messages
Signed-off-by: Yuki Iwai <yuki.iwai.tz@gmail.com>
* Put the testing.T as the second argument of the newControllerFromClient function
Signed-off-by: Yuki Iwai <yuki.iwai.tz@gmail.com>
* Put the testing.T as the second argument of the newControllerFromClientWithClock function
Signed-off-by: Yuki Iwai <yuki.iwai.tz@gmail.com>
* Call t.Helper()
Signed-off-by: Yuki Iwai <yuki.iwai.tz@gmail.com>
* Put the testing.TB as the second argument of the createJobControllerWithSharedInformers function and call tb.Helper() there
Signed-off-by: Yuki Iwai <yuki.iwai.tz@gmail.com>
* Put the testing.TB as the second argument of the startJobControllerAndWaitForCaches function and call tb.Helper() there
Signed-off-by: Yuki Iwai <yuki.iwai.tz@gmail.com>
* Adapt TestFinalizerCleanup to the event handler error
Signed-off-by: Yuki Iwai <yuki.iwai.tz@gmail.com>
---------
Signed-off-by: Yuki Iwai <yuki.iwai.tz@gmail.com>
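A minimal sketch of the first item, assuming client-go v0.26+ where AddEventHandler returns a registration handle and an error; the handler wiring is a placeholder, not the Job controller's actual code:

```go
package job

import (
	"fmt"

	coreinformers "k8s.io/client-go/informers/core/v1"
	"k8s.io/client-go/tools/cache"
)

// addPodEventHandlers registers pod handlers on the shared informer and
// propagates the error instead of ignoring it; since client-go v0.26
// AddEventHandler can fail, e.g. when the informer has already stopped.
func addPodEventHandlers(podInformer coreinformers.PodInformer,
	onAdd func(obj interface{}),
	onUpdate func(oldObj, newObj interface{}),
	onDelete func(obj interface{})) error {
	if _, err := podInformer.Informer().AddEventHandler(cache.ResourceEventHandlerFuncs{
		AddFunc:    onAdd,
		UpdateFunc: onUpdate,
		DeleteFunc: onDelete,
	}); err != nil {
		return fmt.Errorf("adding Pod event handler: %w", err)
	}
	return nil
}
```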
KEP-2593 proposed to expand the existing node-ipam controller
to be configurable via ClusterCIDR objects. However, there
were reasonable doubts in the SIG about the feature, and after
several months of discussion we decided not to move forward
with the KEP in-tree; hence, we are going to remove the existing
code, which is still in alpha.
https://groups.google.com/g/kubernetes-sig-network/c/nts1xEZ--gQ/m/2aTOUNFFAAAJ
Change-Id: Ieaf2007b0b23c296cde333247bfb672441fe6dfc
* [API REVIEW] ValidatingAdmissionPolicyStatusController config: worker count.
* ValidatingAdmissionPolicyStatus controller.
* remove CEL typechecking from API server.
* fix initializer tests.
* remove type checking integration tests from API server integration tests.
* validatingadmissionpolicy-status options.
* grant access to VAP controller.
* add defaulting unit test.
* generated: ./hack/update-codegen.sh
* add OWNERS for VAP status controller.
* type checking test case.
When someone decides that a Pod should definitely run on a specific node, they
can create the Pod with spec.nodeName already set. Some custom scheduler might
do that. Then kubelet starts to check the pod and (if DRA is enabled) will
refuse to run it, either because the claims are still waiting for the first
consumer or the pod wasn't added to reservedFor. Both are things the scheduler
normally does.
A pod can also reach the same state if it got scheduled while the DRA
feature was off in the kube-scheduler.
The resource claim controller can handle these two cases by taking over for the
kube-scheduler when nodeName is set. Triggering an allocation is simpler than
in the scheduler because all it takes is creating the right
PodSchedulingContext with spec.selectedNode set. There's no need to list nodes
because that choice was already made, permanently. Adding the pod to
reservedFor also isn't hard.
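A minimal sketch of that step, assuming the resource.k8s.io/v1alpha2 API and a typed clientset; ownership and conflict handling are simplified compared to the real controller:

```go
package resourceclaim

import (
	"context"

	corev1 "k8s.io/api/core/v1"
	resourcev1alpha2 "k8s.io/api/resource/v1alpha2"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
)

// triggerAllocation creates a PodSchedulingContext with spec.selectedNode
// set to the pod's pre-chosen node, standing in for what the kube-scheduler
// would normally do. Simplified: the real controller also handles updates,
// conflicts, and adding the pod to reservedFor.
func triggerAllocation(ctx context.Context, client kubernetes.Interface, pod *corev1.Pod) error {
	schedulingCtx := &resourcev1alpha2.PodSchedulingContext{
		ObjectMeta: metav1.ObjectMeta{
			Name:      pod.Name,
			Namespace: pod.Namespace,
			// Owned by the pod so it gets cleaned up together with it.
			OwnerReferences: []metav1.OwnerReference{
				*metav1.NewControllerRef(pod, corev1.SchemeGroupVersion.WithKind("Pod")),
			},
		},
		Spec: resourcev1alpha2.PodSchedulingContextSpec{
			// No PotentialNodes list is needed: the choice was already
			// made, permanently, via spec.nodeName.
			SelectedNode: pod.Spec.NodeName,
		},
	}
	_, err := client.ResourceV1alpha2().PodSchedulingContexts(pod.Namespace).Create(ctx, schedulingCtx, metav1.CreateOptions{})
	return err
}
```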
What's currently missing is triggering de-allocation of claims to re-allocate
them for the desired node. This is not important for claims that get created
for the pod from a template and then only get used once, but it might be
worthwhile to add de-allocation in the future.