Currently these objects only get published by the kubelet for node-local
resources, but this could change once network-attached resources are also
supported. Dropping the "Node" prefix keeps such a future extension open.
The NodeName in ResourceSlice and StructuredResourceHandle then becomes
optional. The kubelet still needs to provide one, and it must match its own
node name; otherwise it doesn't have permission to access ResourceSlice
objects.
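As a rough sketch (abbreviated, with most fields omitted; not the
authoritative definition), the relevant part of the type then looks like this:
```go
package resourceapi

import (
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)

// ResourceSlice, abbreviated sketch: only the fields discussed above are
// shown, everything else is omitted.
type ResourceSlice struct {
	metav1.TypeMeta   `json:",inline"`
	metav1.ObjectMeta `json:"metadata,omitempty"`

	// NodeName is now optional. Node-local resources set it, and the
	// kubelet must use its own node name; network-attached resources may
	// leave it empty.
	NodeName string `json:"nodeName,omitempty"`

	// DriverName identifies the DRA driver providing these resources.
	DriverName string `json:"driverName"`
}
```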
When allocation was done by the scheduler, the controller needs to handle
deallocation, because there is no control-plane controller that could react to
"DeallocationRequested".
When a claim uses structured parameters, as indicated by the resource class
flag, the scheduler is responsible for allocating it. To do this, it needs to
gather information about available node resources by watching
NodeResourceSlices and then match the in-tree claim parameters against those
resources.
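As a hedged sketch of that matching step (the helper and the fits callback are
made up for illustration; the sketch uses the newer ResourceSlice name, and
the API group/version depends on the release):
```go
package sketch

import (
	resourcev1alpha2 "k8s.io/api/resource/v1alpha2"
)

// sliceFitsNode is a hypothetical helper: it checks whether any slice
// published for the candidate node can satisfy the claim, with the actual
// parameter matching abstracted behind the fits callback.
func sliceFitsNode(slices []resourcev1alpha2.ResourceSlice, nodeName string, fits func(resourcev1alpha2.ResourceSlice) bool) bool {
	for _, slice := range slices {
		if slice.NodeName != nodeName {
			continue // only resources local to the candidate node count
		}
		if fits(slice) {
			return true
		}
	}
	return false
}
```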
The kubelet running on one node should not be allowed to access
NodeResourceSlice objects belonging to some other node, as defined by the
NodeResourceSlice.NodeName field.
When the DynamicResourceAllocation feature gate is enabled, the
DynamicResources scheduler plugin may fail during scheduling with:
```
E0212 08:57:53.817268 1 framework.go:1323] "Plugin failed" err="podschedulingcontexts.resource.k8s.io \"pod\" is forbidden: cannot set blockOwnerDeletion if an ownerReference refers to a resource you can't set finalizers on: , <nil>" plugin="DynamicResources" pod="gpu-test2/pod"
```
The nftables proxy will now drop traffic directed towards unallocated
ClusterIPs and reject traffic directed towards invalid ports of ClusterIPs.
Signed-off-by: Daman Arora <aroradaman@gmail.com>
This commit defines the ClusterTrustBundlePEM projected volume types.
These types have been renamed from the KEP (PEMTrustAnchors) in order to
leave open the possibility of a similar projection drawing from a
yet-to-exist namespace-scoped TrustBundle object, which came up during
KEP discussion.
* Add the projection field to internal and v1 APIs.
* Add validation to ensure that usages of the projection specify a name
  and path (see the sketch below).
* Add TODO covering admission control to forbid mirror pods from using
the projection.
Part of KEP-3257.
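To illustrate, a minimal sketch of wiring the projection into a pod volume
(the bundle name and path are made-up examples; ptr is k8s.io/utils/ptr):
```go
package sketch

import (
	corev1 "k8s.io/api/core/v1"
	"k8s.io/utils/ptr"
)

// trustBundleVolume projects the PEM contents of a ClusterTrustBundle into
// the pod at the given path.
var trustBundleVolume = corev1.Volume{
	Name: "trust-anchors",
	VolumeSource: corev1.VolumeSource{
		Projected: &corev1.ProjectedVolumeSource{
			Sources: []corev1.VolumeProjection{{
				ClusterTrustBundle: &corev1.ClusterTrustBundleProjection{
					// Validation requires a name and a path.
					Name: ptr.To("example-ca"),
					Path: "trust-anchors.pem",
				},
			}},
		},
	},
}
```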
Controls the lifecycle of ServiceCIDRs: adds finalizers and sets the Ready
condition in status when they are created, and removes the finalizers once it
is safe to do so (no orphan IPAddresses). An IPAddress is orphaned if no
ServiceCIDR contains it (see the sketch below).
Change-Id: Icbe31e1ed8525fa04df3b741c8a817e5f2a49e80
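A hedged sketch of that containment check, with plain strings standing in for
the real IPAddress and ServiceCIDR API types:
```go
package sketch

import "net/netip"

// isOrphan reports whether no ServiceCIDR contains the given address, in
// which case the IPAddress is safe to clean up.
func isOrphan(ip string, serviceCIDRs []string) bool {
	addr, err := netip.ParseAddr(ip)
	if err != nil {
		return false // not a valid address, nothing to decide here
	}
	for _, cidr := range serviceCIDRs {
		prefix, err := netip.ParsePrefix(cidr)
		if err != nil {
			continue
		}
		if prefix.Contains(addr) {
			return false // still covered by a ServiceCIDR
		}
	}
	return true
}
```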
PVCs and containers shared the same ResourceRequirements struct to define
their API. When resource claims were added, that struct got extended, which
accidentally also changed the PVC API. To prevent such a mistake from
happening again, PVC now uses its own VolumeResourceRequirements struct.
The `Claims` field gets removed because the risk of breaking someone is low:
theoretically, YAML files which have a claims field for volumes now get
rejected when validated against the OpenAPI schema. Such files have never
made sense and should be fixed.
Code that uses the struct definitions needs to be updated.
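Abbreviated, and not the authoritative definition, the new struct amounts to
the old one minus `Claims`:
```go
package sketch

import (
	corev1 "k8s.io/api/core/v1"
)

// VolumeResourceRequirements, abbreviated sketch: like the container-level
// ResourceRequirements, but without the Claims field, which makes no sense
// for volumes.
type VolumeResourceRequirements struct {
	Limits   corev1.ResourceList `json:"limits,omitempty"`
	Requests corev1.ResourceList `json:"requests,omitempty"`
}
```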
* Add slash-ended URLs for service-account-issuer-discovery to match the API in swagger
* Update the comment for adding slash-ended URLs
Co-authored-by: Jordan Liggitt <jordan@liggitt.net>
* [API REVIEW] ValidatingAdmissionPolicyStatusController config: worker count.
* ValidatingAdmissionPolicyStatus controller.
* remove CEL typechecking from API server.
* fix initializer tests.
* remove type checking integration tests from API server integration tests.
* validatingadmissionpolicy-status options.
* grant access to VAP controller.
* add defaulting unit test.
* generated: ./hack/update-codegen.sh
* add OWNERS for VAP status controller.
* type checking test case.
When someone decides that a Pod should definitely run on a specific node, they
can create the Pod with spec.nodeName already set. Some custom scheduler might
do that. Then kubelet starts to check the pod and (if DRA is enabled) will
refuse to run it, either because the claims are still waiting for the first
consumer or the pod wasn't added to reservedFor. Both are things the scheduler
normally does.
A pod can also reach the same state if it got scheduled while the DRA feature
was off in the kube-scheduler.
The resource claim controller can handle these two cases by taking over for the
kube-scheduler when nodeName is set. Triggering an allocation is simpler than
in the scheduler because all it takes is creating the right
PodSchedulingContext with spec.selectedNode set. There's no need to list nodes
because that choice was already made, permanently. Adding the pod to
reservedFor also isn't hard.
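A hedged sketch of what the controller creates (the helper is made up for
illustration; the API group and version depend on the release):
```go
package sketch

import (
	corev1 "k8s.io/api/core/v1"
	resourcev1alpha2 "k8s.io/api/resource/v1alpha2"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/utils/ptr"
)

// schedulingContextForPod builds the PodSchedulingContext that stands in for
// the scheduler: the node choice was already made, so spec.selectedNode is
// simply the pod's nodeName.
func schedulingContextForPod(pod *corev1.Pod) *resourcev1alpha2.PodSchedulingContext {
	return &resourcev1alpha2.PodSchedulingContext{
		ObjectMeta: metav1.ObjectMeta{
			Name:      pod.Name,
			Namespace: pod.Namespace,
			OwnerReferences: []metav1.OwnerReference{{
				APIVersion: "v1",
				Kind:       "Pod",
				Name:       pod.Name,
				UID:        pod.UID,
				Controller: ptr.To(true),
			}},
		},
		Spec: resourcev1alpha2.PodSchedulingContextSpec{
			SelectedNode: pod.Spec.NodeName, // the choice is already fixed
		},
	}
}
```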
What's currently missing is triggering deallocation of claims so that they can
be re-allocated for the desired node. This is not important for claims that
get created for the pod from a template and then only get used once, but it
might be worthwhile to add in the future.
The status determines which claims the kubelet is allowed to access when
claims get created from a template. Therefore the kubelet must not be allowed
to modify that part of the status, because otherwise it could add an entry and
then gain access to a claim it should not have access to.
As discussed during the KEP and code review of dynamic resource allocation for
Kubernetes 1.26, to increase security kubelet should only get read access to
those ResourceClaim objects that are needed by a pod on the node.
This can be enforced easily by the node authorizer:
- the names of the objects are defined by the pod status
- there is no indirection as with volumes (pod -> pvc -> pv)
Normally the graph only gets updated when a pod is not scheduled yet. Resource
claim status changes can still happen after that, so they must also trigger an
update.
Generating the name avoids all potential name collisions. It's not clear how
much of a problem that was because users can avoid them and the deterministic
names for generic ephemeral volumes have not led to reports from users. But
using generated names is not too hard either.
What makes it relatively easy is that the new pod.status.resourceClaimStatus
map stores the generated name for kubelet and node authorizer, i.e. the
information in the pod is sufficient to determine the name of the
ResourceClaim.
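A hedged sketch of that lookup (the helper is made up for illustration; field
names follow the v1 pod API as of this change):
```go
package sketch

import (
	corev1 "k8s.io/api/core/v1"
)

// claimNamesForPod collects the ResourceClaim names for a pod: either the
// directly referenced claim, or the generated name recorded in the pod
// status for claims created from a template.
func claimNamesForPod(pod *corev1.Pod) []string {
	var names []string
	for _, podClaim := range pod.Spec.ResourceClaims {
		if podClaim.Source.ResourceClaimName != nil {
			names = append(names, *podClaim.Source.ResourceClaimName)
			continue
		}
		for _, status := range pod.Status.ResourceClaimStatuses {
			if status.Name == podClaim.Name && status.ResourceClaimName != nil {
				names = append(names, *status.ResourceClaimName)
			}
		}
	}
	return names
}
```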
The resource claim controller becomes a bit more complex and now needs
permission to modify the pod status. The new failure scenario of "ResourceClaim
created, updating pod status fails" is handled with the help of a new special
"resource.kubernetes.io/pod-claim-name" annotation that, together with the
owner reference, identifies exactly which pod claim a ResourceClaim was
generated for, so updating the pod status can be retried for existing
ResourceClaims.
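A hedged sketch of the recovery lookup (the helper is made up for
illustration):
```go
package sketch

import (
	corev1 "k8s.io/api/core/v1"
	resourcev1alpha2 "k8s.io/api/resource/v1alpha2"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)

const podClaimNameAnnotation = "resource.kubernetes.io/pod-claim-name"

// findGeneratedClaim picks, among existing claims, the one that was generated
// for the given pod claim: the owner reference must point back at the pod and
// the annotation must record the pod claim name.
func findGeneratedClaim(pod *corev1.Pod, podClaimName string, claims []resourcev1alpha2.ResourceClaim) *resourcev1alpha2.ResourceClaim {
	for i := range claims {
		claim := &claims[i]
		if !metav1.IsControlledBy(claim, pod) {
			continue
		}
		if claim.Annotations[podClaimNameAnnotation] == podClaimName {
			return claim
		}
	}
	return nil
}
```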
The transition from deterministic names is handled with a special case for that
recovery code path: a ResourceClaim with no annotation and a name that follows
the Kubernetes <= 1.27 naming pattern is assumed to be generated for that pod
claim and gets added to the pod status.
There's no immediate need for it, but just in case it becomes relevant, the
name of the generated ResourceClaim may also be left unset to record that no
claim was needed. Components processing such a pod can then skip whatever they
would normally do for the claim. To ensure that they do, and to also cover
other cases properly ("no known field is set", "must check ownership"),
resourceclaim.Name gets extended.
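A sketch of how a caller might use the extended helper (the exact signature
may differ between releases; the behavior follows what's described above):
```go
package sketch

import (
	"fmt"

	corev1 "k8s.io/api/core/v1"
	"k8s.io/dynamic-resource-allocation/resourceclaim"
)

// claimForPod resolves the ResourceClaim name for one pod claim, covering
// the cases the extended resourceclaim.Name distinguishes.
func claimForPod(pod *corev1.Pod, podClaim *corev1.PodResourceClaim) (string, error) {
	name, mustCheckOwner, err := resourceclaim.Name(pod, podClaim)
	if err != nil {
		return "", err // e.g. no known field is set
	}
	if name == nil {
		return "", nil // recorded as "no claim needed": skip claim handling
	}
	if mustCheckOwner {
		// The claim was generated for the pod; the caller must verify
		// the owner reference before trusting it.
		fmt.Println("must check ownership of", *name)
	}
	return *name, nil
}
```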
When a pod is done but not getting removed for a while yet, a claim that got
generated for that pod can already be deleted. That deletion then also
triggers deallocation.