This makes the API nicer:
resourceClaims:
- name: with-template
  resourceClaimTemplateName: test-inline-claim-template
- name: with-claim
  resourceClaimName: test-shared-claim
Previously, this was:
resourceClaims:
- name: with-template
  source:
    resourceClaimTemplateName: test-inline-claim-template
- name: with-claim
  source:
    resourceClaimName: test-shared-claim
A more long-term benefit is that other, future alternatives
might not make sense under the "source" umbrella.
This is a breaking change. It's justified because DRA is still
alpha and will have several other API breaks in 1.31.
* Add `Linux{Sandbox,Container}SecurityContext.SupplementalGroupsPolicy` and `ContainerStatus.user` in cri-api
* Add `PodSecurityContext.SupplementalGroupsPolicy`, `ContainerStatus.User`, and the corresponding feature gate
* Implement DropDisabledPodFields for PodSecurityContext.SupplementalGroupsPolicy and ContainerStatus.User fields
* Wire `SecurityContext.SupplementalGroupsPolicy` and `ContainerStatus.User` through to cri-api in the kubelet
* Clarify that `SupplementalGroupsPolicy` is an OS-dependent field.
* Make `ContainerStatus.User` the user identity initially attached to the first process
  in the container. This is because the process identity can change at runtime if the
  initially attached identity has enough privilege to call the setuid/setgid/setgroups
  syscalls on Linux (see the sketch after this list).
* Rewording suggestion applied
* Add TODO comment for updating SupplementalGroupsPolicy default value in v1.34
* Add validation for `SupplementalGroupsPolicy` and `ContainerUser`
* No feature gate check is needed in validation when adding a new field with no default value
* fix typo: identitiy -> identity
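For illustration, a minimal sketch of how the new fields are meant to surface on a pod; the image and all ID values are hypothetical:
```
apiVersion: v1
kind: Pod
metadata:
  name: strict-groups-demo
spec:
  securityContext:
    runAsUser: 1000
    runAsGroup: 1000
    supplementalGroups: [4000]
    supplementalGroupsPolicy: Strict   # only groups declared in the pod spec are attached
  containers:
  - name: app
    image: registry.k8s.io/e2e-test-images/busybox:1.36   # hypothetical image
    command: ["sh", "-c", "sleep 3600"]
status:                                # as reported back by the kubelet via cri-api
  containerStatuses:
  - name: app
    user:                              # identity initially attached to the first process
      linux:
        uid: 1000
        gid: 1000
        supplementalGroups: [1000, 4000]
```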
This commit defines the ClusterTrustBundlePEM projected volume types.
These types have been renamed from the KEP (PEMTrustAnchors) in order to
leave open the possibility of a similar projection drawing from a
yet-to-exist namespace-scoped TrustBundle object, which came up during
KEP discussion.
* Add the projection field to internal and v1 APIs.
* Add validation to ensure that usages of the projection specify a
name and path.
* Add TODO covering admission control to forbid mirror pods from using
the projection.
Part of KEP-3257.
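As a sketch of the resulting pod API surface, assuming a hypothetical ClusterTrustBundle named `example-bundle`:
```
volumes:
- name: trust-anchors
  projected:
    sources:
    - clusterTrustBundle:
        name: example-bundle    # which ClusterTrustBundle to draw trust anchors from
        path: ca-bundle.pem     # file to write within the volume; required by validation
```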
PVC and containers shared the same ResourceRequirements struct to define their
API. When resource claims were added, that struct got extended, which
accidentally also changed the PVC API. To prevent such a mistake from happening
again, PVC now uses its own VolumeResourceRequirements struct.
The `Claims` field gets removed because the risk of breaking someone is low:
theoretically, YAML files which have a claims field for volumes now
get rejected when validating against the OpenAPI. Such files
have never made sense and should be fixed.
Code that uses the struct definitions needs to be updated.
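For example, a PVC keeps using requests as before, while a `claims` entry under `spec.resources` is now rejected (names below are hypothetical):
```
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: example-pvc
spec:
  accessModes: ["ReadWriteOnce"]
  resources:
    requests:
      storage: 1Gi
    # claims: [...]   # no longer part of the PVC API; such files now fail validation
```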
Generating the name avoids all potential name collisions. It's not clear how
much of a problem that was because users can avoid them and the deterministic
names for generic ephemeral volumes have not led to reports from users. But
using generated names is not too hard either.
What makes it relatively easy is that the new pod.status.resourceClaimStatus
map stores the generated name for kubelet and node authorizer, i.e. the
information in the pod is sufficient to determine the name of the
ResourceClaim.
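A sketch of such a status entry, with hypothetical pod claim and generated names, and assuming the list-of-statuses spelling `resourceClaimStatuses` as in the released API:
```
status:
  resourceClaimStatuses:
  - name: my-claim                            # matches an entry in spec.resourceClaims
    resourceClaimName: my-pod-my-claim-x7x2b  # generated ResourceClaim name
```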
The resource claim controller becomes a bit more complex and now needs
permission to modify the pod status. The new failure scenario of "ResourceClaim
created, updating pod status fails" is handled with the help of a new special
"resource.kubernetes.io/pod-claim-name" annotation that together with the owner
reference identifies exactly for what a ResourceClaim was generated, so
updating the pod status can be retried for existing ResourceClaims.
The transition from deterministic names is handled with a special case for that
recovery code path: a ResourceClaim with no annotation and a name that follows
the Kubernetes <= 1.27 naming pattern is assumed to have been generated for that pod
claim and gets added to the pod status.
There's no immediate need for it, but just in case that it may become relevant,
the name of the generated ResourceClaim may also be left unset to record that
no claim was needed. Components processing such a pod can skip whatever they
normally would do for the claim. To ensure that they do and also cover other
cases properly ("no known field is set", "must check ownership"),
resourceclaim.Name gets extended.
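A minimal sketch of that case, using the same hypothetical names and field spelling as above:
```
status:
  resourceClaimStatuses:
  - name: my-claim
    # resourceClaimName deliberately unset: no ResourceClaim was needed for this pod claim
```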
- Add SidecarContainers feature gate
- Add ContainerRestartPolicy type
- Add RestartPolicy field to the Container
- Drop RestartPolicy field if the feature is disabled
- Add validation for the SidecarContainers
- Allow restartable init containers to have a startup probe (see the sketch after this list)
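Putting the bullets together, a sketch of a pod with a restartable init (sidecar) container; the images and probe values are hypothetical:
```
apiVersion: v1
kind: Pod
metadata:
  name: app-with-sidecar
spec:
  initContainers:
  - name: log-shipper
    image: example.com/log-shipper:latest   # hypothetical image
    restartPolicy: Always                   # marks this init container as restartable
    startupProbe:                           # now allowed for restartable init containers
      httpGet:
        path: /healthz
        port: 8080
  containers:
  - name: app
    image: example.com/app:latest           # hypothetical image
```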
Based on https://groups.google.com/g/kubernetes-sig-storage/c/h5751_B5LQM, the
consensus was to start the deprecation in v1.28.
This commit starts the deprecation process of the RBD plugin from the
in-tree drivers.
ACTION REQUIRED:
The RBD volume plugin (`kubernetes.io/rbd`) has been deprecated in this release
and will be removed in a subsequent release. The alternative is to use the RBD CSI
driver (https://github.com/ceph/ceph-csi/) in your Kubernetes cluster.
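As a sketch of the migration path, a StorageClass using the ceph-csi RBD provisioner; the parameters are cluster-specific placeholders:
```
apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  name: ceph-rbd-csi
provisioner: rbd.csi.ceph.com    # ceph-csi RBD driver
parameters:
  clusterID: my-ceph-cluster     # placeholder: your Ceph cluster ID
  pool: kubernetes               # placeholder: your RBD pool
reclaimPolicy: Delete
```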
Signed-off-by: Humble Chirammal <humble.devassy@gmail.com>
Volume names are validated to be unique and always have been. The cited
issues are all about apply getting messed up, not the apiserver
allowing dups.
```
$ k create -f /tmp/bad.yaml
The Deployment "bad-volumes-test" is invalid: spec.template.spec.volumes[1].name: Duplicate value: "config"
$ k apply --server-side -f /tmp/bad.yaml
Error from server: failed to create typed patch object (default/bad-volumes-test; apps/v1, Kind=Deployment): .spec.template.spec.volumes: duplicate entries for key [name="config"]
$ k apply -f /tmp/bad.yaml -o json | jq '.spec.template.spec.volumes'
The Deployment "bad-volumes-test" is invalid: spec.template.spec.volumes[1].name: Duplicate value: "config"
```
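For reference, a hypothetical reconstruction of a manifest like /tmp/bad.yaml, i.e. a Deployment declaring two volumes with the same name:
```
apiVersion: apps/v1
kind: Deployment
metadata:
  name: bad-volumes-test
spec:
  replicas: 1
  selector:
    matchLabels: {app: bad-volumes-test}
  template:
    metadata:
      labels: {app: bad-volumes-test}
    spec:
      containers:
      - name: app
        image: example.com/app:latest   # hypothetical image
        volumeMounts:
        - {name: config, mountPath: /etc/config}
      volumes:
      - name: config
        configMap: {name: config-a}
      - name: config                    # duplicate name: rejected by apiserver validation
        configMap: {name: config-b}
```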
https://groups.google.com/a/kubernetes.io/g/dev/c/g8rwL-qnQhk
Based on the discussion above, the consensus was to start the deprecation in v1.28.
This commit starts the deprecation process of the CephFS plugin from the
in-tree drivers.
Signed-off-by: Humble Chirammal <humble.devassy@gmail.com>