1. Define ContainerResizePolicy and add it to the Container struct (sketched below).
2. Add ResourcesAllocated and Resources fields to the ContainerStatus struct.
3. Define ResourcesResizeStatus and add it to the PodStatus struct.
4. Add InPlacePodVerticalScaling feature gate and drop disabled fields.
5. ResizePolicy validation & defaulting and Resources mutability for CPU/Memory.
6. Various fixes from code review feedback (originally committed on Apr 12, 2022)
KEP: /enhancements/keps/sig-node/1287-in-place-update-pod-resources
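A rough Go sketch of the shape of items 1-3 (a sketch only: the field and type
names follow the list above, while the placeholder types, tags, and enum values
are assumptions, not the final API):

```go
package sketch

// Placeholder stand-ins for existing core/v1 types so the sketch is
// self-contained (the real ResourceList maps to resource.Quantity).
type ResourceName string
type ResourceList map[ResourceName]string
type ResourceRequirements struct {
	Limits, Requests ResourceList
}

// Item 1: per-resource policy telling the kubelet whether resizing a
// running container requires a restart (enum values are assumed).
type ContainerResizePolicy struct {
	ResourceName  ResourceName // "cpu" or "memory"
	RestartPolicy string       // e.g. "NotRequired" or "RestartContainer"
}

type Container struct {
	// ... existing fields ...
	ResizePolicy []ContainerResizePolicy
}

// Item 2: what the node has admitted vs. what is actually configured.
type ContainerStatus struct {
	// ... existing fields ...
	ResourcesAllocated ResourceList
	Resources          *ResourceRequirements
}

// Item 3: overall state of an in-flight resize (possible values assumed).
type ResourcesResizeStatus string

type PodStatus struct {
	// ... existing fields ...
	Resize ResourcesResizeStatus
}
```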
PVC and containers share the same ResourceRequirements struct. The Claims field
in it only makes sense when used in containers. When used in a PVC, the field
should have been rejected by validation. This was overlooked when introducing
it, so now persisted objects might have it set and/or people may have started
to rely on it being accepted even when it has no effect.
Therefore we cannot reject it in validation anymore, but we can still strip
it out on create or update.
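A minimal sketch of that strip-on-write approach, assuming a helper called from
the PVC registry strategy's PrepareForCreate/PrepareForUpdate (the function name
is illustrative, not the actual code):

```go
package sketch

import core "k8s.io/kubernetes/pkg/apis/core"

// dropClaimsField clears the Claims field, which only has meaning for
// containers. Doing this on create/update means new writes never persist
// the field, while objects that already carry it are read back unchanged,
// so no validation error is needed.
func dropClaimsField(spec *core.PersistentVolumeClaimSpec) {
	if spec == nil {
		return
	}
	spec.Resources.Claims = nil
}
```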
The Services API should warn users about some IP address
representations, mainly because some of them are no longer allowed
by the Go standard library parsers since Go 1.17.
Specifically:
- IPv4 addresses with leading zeros, which may cause security risks
- IPv6 addresses in non-canonical format, which may cause controllers
  to hot-loop or cause security issues
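A hedged sketch of how such warnings could be produced with the strict
Go >= 1.17 parser (the function and message wording are illustrative, not the
actual Services warning code):

```go
package sketch

import (
	"fmt"
	"net/netip"
)

// warnOnIPFormat returns warnings for IP representations that the strict
// parsers reject or normalize.
func warnOnIPFormat(fldPath, ip string) []string {
	var warnings []string
	addr, err := netip.ParseAddr(ip)
	if err != nil {
		// e.g. IPv4 with leading zeros ("010.1.2.3"): older parsers accepted
		// it, but it is ambiguous (octal vs decimal) and a security risk.
		warnings = append(warnings,
			fmt.Sprintf("%s: %q is not accepted by the Go >= 1.17 parsers: %v", fldPath, ip, err))
		return warnings
	}
	if addr.Is6() && !addr.Is4In6() && addr.String() != ip {
		// Non-canonical IPv6 (e.g. "2001:0DB8::1") can make controllers that
		// compare the stored string against the normalized form hot-loop.
		warnings = append(warnings,
			fmt.Sprintf("%s: %q should use the canonical form %q (RFC 5952)", fldPath, ip, addr.String()))
	}
	return warnings
}
```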
Change-Id: Ife50a651d1b22dc4c318e42bd3e5f2e5f88ecbcd
This adds a new resource.k8s.io API group with v1alpha1 as version. It contains
four new types: resource.ResourceClaim, resource.ResourceClass, resource.ResourceClaimTemplate, and
resource.PodScheduling.
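A hedged sketch of how the four kinds relate to each other (the field names
below are simplified assumptions; the real definitions live in the
resource.k8s.io/v1alpha1 API):

```go
package sketch

// ResourceClass names the driver (and driver-specific parameters) that
// handles claims referencing this class.
type ResourceClass struct {
	Name       string
	DriverName string
}

// ResourceClaim asks the driver behind a class to allocate a resource that
// pods can then reference.
type ResourceClaim struct {
	Name              string
	ResourceClassName string
}

// ResourceClaimTemplate lets workloads stamp out a fresh ResourceClaim per
// pod instead of sharing one pre-created claim.
type ResourceClaimTemplate struct {
	Name string
	Spec ResourceClaim
}

// PodScheduling carries scheduler <-> driver coordination data (such as the
// node currently under consideration) for pods with pending claims.
type PodScheduling struct {
	PodName        string
	SelectedNode   string
	PotentialNodes []string
}
```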
- New API field .spec.schedulingGates
- Validation and drop disabled fields
- Disallow binding a Pod carrying non-nil schedulingGates
- Disallow creating a Pod with non-nil nodeName and non-nil schedulingGates (sketched below)
- Adds a {type:PodScheduled, reason:WaitingForGates} condition if necessary
- New literal SchedulingGated in the STATUS column of `k get pod`
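A minimal Go sketch of the create-time rule above (a hypothetical helper, not
the actual validation in pkg/apis/core/validation):

```go
package sketch

import "errors"

type PodSchedulingGate struct{ Name string }

type PodSpec struct {
	NodeName        string
	SchedulingGates []PodSchedulingGate
}

// validateSchedulingGatesOnCreate mirrors the rule that a pod may not be
// created both gated and already bound to a node; while gates remain, the
// pod instead carries a {type: PodScheduled, reason: WaitingForGates}
// condition and shows SchedulingGated in its STATUS column.
func validateSchedulingGatesOnCreate(spec PodSpec) error {
	if len(spec.SchedulingGates) > 0 && spec.NodeName != "" {
		return errors.New("spec.nodeName must not be set when spec.schedulingGates is non-empty")
	}
	return nil
}
```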
* Code Refactoring
- Added some function comments
- Fixed spelling errors
Signed-off-by: Dipankar Das <dipankardas0115@gmail.com>
* Some typo fixes in resource under pkg/api/v1
Signed-off-by: Dipankar Das <dipankardas0115@gmail.com>
* Grammar corrections in api/v1/pod
Signed-off-by: Dipankar Das <dipankardas0115@gmail.com>
* Function description changes in pkg/api/v1
- pod
- resource
Signed-off-by: Dipankar Das <dipankardas0115@gmail.com>
This commit just adds validation according to KEP-127: we check that
only the volume types supported in phase 1 of the KEP are accepted.
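A rough sketch of that check, assuming the phase-1 set is the simple volume
types (configMap, secret, downwardAPI, emptyDir, projected); the helper name
and the exact set follow the KEP's direction, not the final code:

```go
package sketch

import (
	"fmt"

	core "k8s.io/kubernetes/pkg/apis/core"
)

// validateHostUsersVolumes rejects pods that opt into user namespaces
// (hostUsers=false) while using a volume type outside the assumed phase-1 set.
func validateHostUsersVolumes(pod *core.Pod) error {
	if pod.Spec.HostUsers == nil || *pod.Spec.HostUsers {
		return nil // not using user namespaces, nothing to check
	}
	for _, v := range pod.Spec.Volumes {
		switch {
		case v.ConfigMap != nil, v.Secret != nil, v.DownwardAPI != nil,
			v.EmptyDir != nil, v.Projected != nil:
			// assumed to be supported in phase 1
		default:
			return fmt.Errorf("volume %q uses a type not supported when hostUsers is false", v.Name)
		}
	}
	return nil
}
```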
Signed-off-by: Rodrigo Campos <rodrigoca@microsoft.com>
This change promotes the local storage capacity isolation feature to GA.
At the same time, to let rootless systems disable the feature (they are
unable to inspect the root filesystem), this change introduces a new
kubelet config field, "localStorageCapacityIsolation". It defaults to
true. Rootless systems can set it to false to disable the feature. Once
it is disabled, users cannot set ephemeral-storage requests/limits,
because capacity and allocatable will not be set.
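A hedged sketch of flipping the new knob off on a rootless system, assuming
the field surfaces on KubeletConfiguration as LocalStorageCapacityIsolation
(the YAML equivalent would be `localStorageCapacityIsolation: false`):

```go
package sketch

import (
	kubeletconfig "k8s.io/kubelet/config/v1beta1"
	"k8s.io/utils/pointer"
)

// rootlessKubeletConfig disables local storage capacity isolation because a
// rootless kubelet cannot inspect the root filesystem to report
// ephemeral-storage capacity and allocatable.
func rootlessKubeletConfig() kubeletconfig.KubeletConfiguration {
	cfg := kubeletconfig.KubeletConfiguration{}
	cfg.LocalStorageCapacityIsolation = pointer.Bool(false) // defaults to true
	return cfg
}
```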
Change-Id: I48a52e737c6a09e9131454db6ad31247b56c000a
We now partly drop support for seccomp annotations, which is planned
for v1.25 as part of the KEP:
https://github.com/kubernetes/enhancements/issues/135
Pod security policies are not touched by this change, and therefore we
have to keep the annotation key constants.
This means we only allow the usage of the annotations for backwards
compatibility reasons, while synchronization of the field to the
annotation is no longer supported. Using the annotations for static pods
is also no longer supported.
Making the annotations fully non-functional will be deferred to a
future release.
Signed-off-by: Sascha Grunert <sgrunert@redhat.com>
GlusterFS was one of the first dynamic provisioners and made it into
Kubernetes release v1.4.
https://github.com/kubernetes/kubernetes/pull/30888
When CSI plugins/drivers started to appear, a GlusterFS CSI driver
came into existence; however, this project is not maintained at
present, and its last release happened a few years back.
https://github.com/gluster/gluster-csi-driver/releases/tag/v0.0.9
The possibility of migrating to a compatible CSI driver was also
discussed in https://github.com/kubernetes/kubernetes/issues/100897,
and the consensus was to start the deprecation in v1.25.
This commit starts the deprecation process of the in-tree glusterfs
plugin.
Signed-off-by: Humble Chirammal <hchiramm@redhat.com>