The generated ResourceClaim name and the names of the ResourceClaimTemplate and
ResourceClaim referenced by a pod must be valid according to the resource API,
otherwise the pod cannot start.
This check was removed from the original implementation out of concern
about validating core fields against limitations imposed by a separate,
alpha API. Since the issue was raised again in
https://github.com/kubernetes/kubernetes/pull/116254#discussion_r1134010324,
the check is added back.
The same strings that worked before still work now. In particular, the
constraints for a spec.resourceClaim.name are still the same (DNS label).
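For illustration, a minimal sketch of the kind of name check being restored, using the apimachinery validation helpers; validateClaimName is a hypothetical helper, not the actual pod validation code.

```go
// Sketch only: a pod's spec.resourceClaim.name must be a valid DNS label,
// otherwise the pod cannot start. validateClaimName is hypothetical.
package main

import (
	"fmt"

	"k8s.io/apimachinery/pkg/util/validation"
)

func validateClaimName(name string) error {
	if msgs := validation.IsDNS1123Label(name); len(msgs) > 0 {
		return fmt.Errorf("invalid resource claim name %q: %v", name, msgs)
	}
	return nil
}

func main() {
	fmt.Println(validateClaimName("my-claim"))        // <nil>: valid DNS label
	fmt.Println(validateClaimName("Not_A_DNS_Label")) // error: rejected
}
```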
Annotation
As part of this change, kube-proxy accepts any value other than "disabled"
for either annotation.
Change-Id: Idfc26eb4cc97ff062649dc52ed29823a64fc59a4
1. Define ContainerResizePolicy and add it to Container struct.
2. Add ResourcesAllocated and Resources fields to ContainerStatus struct.
3. Define ResourcesResizeStatus and add it to PodStatus struct.
4. Add InPlacePodVerticalScaling feature gate and drop disabled fields.
5. ResizePolicy validation & defaulting and Resources mutability for CPU/Memory.
6. Various fixes from code review feedback (originally committed on Apr 12, 2022)
KEP: /enhancements/keps/sig-node/1287-in-place-update-pod-resources
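A stand-in sketch of the API surface described in items 1-3 above; the type and field names follow the wording of this change, and the real definitions in the core API may differ in detail.

```go
// Stand-in types only; not the actual core API definitions.
package main

import "fmt"

type ResourceName string

// Item 1: per-resource resize policy attached to a container, e.g. whether
// resizing CPU or memory requires restarting the container.
type ContainerResizePolicy struct {
	ResourceName  ResourceName
	RestartPolicy string // e.g. "NotRequired" or "RestartContainer" (illustrative values)
}

// Item 2: ContainerStatus reports what has been allocated for the container
// (ResourcesAllocated) and what is currently applied to it (Resources).
type ContainerStatus struct {
	Name               string
	ResourcesAllocated map[ResourceName]string
	Resources          map[ResourceName]string
}

// Item 3: PodStatus carries the overall state of a pending resize.
type ResourcesResizeStatus string

type PodStatus struct {
	Resize ResourcesResizeStatus // field name illustrative
}

func main() {
	p := ContainerResizePolicy{ResourceName: "cpu", RestartPolicy: "NotRequired"}
	fmt.Printf("%+v\n", p)
}
```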
PVC and containers share the same ResourceRequirements struct. The Claims field
in it only makes sense when used in containers. When used in a PVC, the field
should have been rejected by validation. This was overlooked when introducing
it, so now persisted objects might have it set and/or people may have started
to rely on it being accepted even when it has no effect.
Therefore we cannot reject it in validation anymore, but we can still strip
it out on create or update.
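A minimal sketch of that strip-on-write approach, using stand-in types; dropClaims is a hypothetical helper, not the actual strategy code.

```go
// Minimal sketch with stand-in types; the real code operates on the core
// API's ResourceRequirements and PersistentVolumeClaim types.
package main

import "fmt"

type ResourceClaim struct{ Name string }

// ResourceRequirements is shared by containers and PVCs; Claims only makes
// sense for containers.
type ResourceRequirements struct {
	Requests map[string]string
	Claims   []ResourceClaim
}

type PersistentVolumeClaimSpec struct {
	Resources ResourceRequirements
}

// dropClaims (hypothetical name) clears the Claims field on a PVC instead of
// rejecting the object in validation, so already-persisted objects and
// existing clients keep working.
func dropClaims(spec *PersistentVolumeClaimSpec) {
	spec.Resources.Claims = nil
}

func main() {
	spec := PersistentVolumeClaimSpec{Resources: ResourceRequirements{
		Requests: map[string]string{"storage": "1Gi"},
		Claims:   []ResourceClaim{{Name: "ignored"}},
	}}
	dropClaims(&spec)
	fmt.Printf("%+v\n", spec) // Claims stripped, Requests kept
}
```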
PV.Spec.CSI.*SecretReference.Name should be limited to 253 characters
(DNS1123Subdomain) rather than 63 characters (DNS1123Label), so that all
possible Secret names can be used as secrets in a PV.
This is a continuation of
https://github.com/kubernetes/kubernetes/pull/108331 / Kubernetes 1.25,
which allowed updating PVs with long secret names if the previous PV already
had a long secret name. That ensures downgrade from 1.27 to 1.26 works well
and that PVs created in 1.27 can still be updated in 1.26.
Now long secret names are accepted during PV creation too.
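For illustration, a sketch of the relaxed check using the apimachinery validation helpers; validatePVSecretName is a hypothetical name, not the actual validation function.

```go
// Sketch only: secret names referenced from a PV are validated as DNS-1123
// subdomains (up to 253 chars) rather than DNS-1123 labels (up to 63 chars).
package main

import (
	"fmt"

	"k8s.io/apimachinery/pkg/util/validation"
)

func validatePVSecretName(name string) []string {
	// All Secret names are DNS-1123 subdomains, so that is what we check.
	return validation.IsDNS1123Subdomain(name)
}

func main() {
	long := "a-secret-name-that-is-longer-than-sixty-three-characters-but-still-a-valid-subdomain"
	fmt.Println(validation.IsDNS1123Label(long)) // rejected as a label (too long)
	fmt.Println(validatePVSecretName(long))      // accepted as a subdomain
}
```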
so that it explicitly describes that group information defined in the
container image will be kept. This also adds an e2e test case for
SupplementalGroups with pre-defined groups in the container image to make
the behavior clearer.
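An illustrative security context, assuming the usual core/v1 types, to show the documented behavior: supplementalGroups are applied in addition to the group memberships the image defines for the container's UID.

```go
// Illustrative only: the groups listed in supplementalGroups are applied in
// addition to (not instead of) group memberships that come from the container
// image's /etc/group for the container's UID.
package main

import (
	"fmt"

	corev1 "k8s.io/api/core/v1"
)

func int64p(i int64) *int64 { return &i }

func main() {
	sc := corev1.PodSecurityContext{
		RunAsUser:          int64p(1000),
		SupplementalGroups: []int64{60000},
	}
	// The container process ends up with its primary GID, the supplemental
	// groups above, and any groups the image itself defines for UID 1000.
	fmt.Printf("%+v\n", sc)
}
```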
- New API field .spec.schedulingGates
- Validation and drop disabled fields
- Disallow binding a Pod carrying non-nil schedulingGates
- Disallow creating a Pod with non-nil nodeName and non-nil schedulingGates
- Adds a {type:PodScheduled, reason:WaitingForGates} condition if necessary
- New literal SchedulingGated in the STATUS column of `k get pod`
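An illustrative gated Pod, assuming the usual core/v1 types; the gate name and image are made up.

```go
// Illustrative only: a Pod created with non-empty spec.schedulingGates stays
// unscheduled (shown as SchedulingGated) until all gates are removed.
package main

import (
	"fmt"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)

func main() {
	pod := corev1.Pod{
		ObjectMeta: metav1.ObjectMeta{Name: "gated-pod"},
		Spec: corev1.PodSpec{
			SchedulingGates: []corev1.PodSchedulingGate{{Name: "example.com/quota-check"}},
			Containers:      []corev1.Container{{Name: "app", Image: "registry.k8s.io/pause:3.9"}},
		},
	}
	// Until the gates are cleared, the Pod carries a
	// {type: PodScheduled, reason: WaitingForGates} condition and must not be
	// bound to a node (binding and a pre-set nodeName are rejected).
	fmt.Println(pod.Name, len(pod.Spec.SchedulingGates))
}
```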
Removed the unit tests that covered the cases where the MixedProtocolLBService feature flag was false - the feature flag is locked to true with GA.
Added an integration test to verify that the API server accepts an LB Service with different protocols.
Added an e2e test to verify that a service exposed by a multi-protocol LB Service is accessible via both ports.
Removed the conditional validation that compared the new and the old Service definitions during an update - the feature flag is locked to true with GA.
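For reference, an illustrative multi-protocol LoadBalancer Service of the kind these tests exercise; the name and ports are made up.

```go
// Illustrative only: one TCP port and one UDP port on the same LB Service.
package main

import (
	"fmt"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)

func main() {
	svc := corev1.Service{
		ObjectMeta: metav1.ObjectMeta{Name: "mixed-protocol-lb"},
		Spec: corev1.ServiceSpec{
			Type: corev1.ServiceTypeLoadBalancer,
			Ports: []corev1.ServicePort{
				{Name: "tcp", Protocol: corev1.ProtocolTCP, Port: 80},
				{Name: "udp", Protocol: corev1.ProtocolUDP, Port: 80},
			},
		},
	}
	fmt.Printf("%s: %d ports\n", svc.Name, len(svc.Spec.Ports))
}
```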
Fixes instances of #98213 (to ultimately complete #98213, linting is
required).
This commit fixes a few instances of a common mistake made when writing
parallel subtests or Ginkgo tests (basically any test in which the test
closure is created dynamically in a loop and the loop doesn't wait for
the test closure to complete).
I'm developing a very specific linter that detects this kind of mistake,
and these are the only violations it found in this repo (it's not
airtight, so there may be more).
In the case of Ginkgo tests, without this fix only the last entry produced
by the loop is actually tested. In the case of parallel tests I think it's
the same problem, though perhaps slightly different; if I understand
correctly, it depends on execution speed.
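A minimal sketch of the mistake and its fix in a table-driven parallel test; the test name and cases are made up.

```go
// Sketch only: when subtests run in parallel (or a Ginkgo It closure is
// created inside a loop), the closure must not share the loop variable,
// otherwise every closure ends up seeing the last entry.
package example

import "testing"

func TestExample(t *testing.T) {
	cases := []string{"a", "b", "c"}
	for _, tc := range cases {
		tc := tc // fix: copy the loop variable before the closure captures it
		t.Run(tc, func(t *testing.T) {
			t.Parallel()
			// Without the copy, t.Run returns as soon as the parallel body is
			// paused, the loop advances, and every subtest would see the last
			// value of tc.
			if tc == "" {
				t.Fatal("unexpected empty case")
			}
		})
	}
}
```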
Waiting for CI to confirm that the tests still pass after this fix - since
this is likely the first time those test cases are actually executed, they
may be buggy or may be testing code that is buggy.
Another instance of this is in `test/e2e/storage/csi_mock_volume.go`; it is
still failing, so it has been left out of this commit and will be addressed
in a separate one.
Introduce the networking/v1alpha1 API group.
Add the `ClusterCIDR` type to the networking/v1alpha1 API group; this type
will enable the NodeIPAM controller to support multiple ClusterCIDRs.
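A stand-in sketch of the shape of a ClusterCIDR object; the field names here are illustrative, and the authoritative definition is the networking/v1alpha1 type.

```go
// Stand-in sketch only: a per-node mask size plus IPv4/IPv6 ranges,
// optionally scoped to a set of nodes, so NodeIPAM can allocate pod CIDRs
// from more than one range.
package main

import "fmt"

type ClusterCIDRSpec struct {
	PerNodeHostBits int32  // host bits reserved per node from each range
	IPv4            string // e.g. "10.0.0.0/8"
	IPv6            string // e.g. "fd00::/48"
	NodeSelectorKey string // simplified stand-in for a full node selector
}

type ClusterCIDR struct {
	Name string
	Spec ClusterCIDRSpec
}

func main() {
	cc := ClusterCIDR{
		Name: "extra-range",
		Spec: ClusterCIDRSpec{PerNodeHostBits: 8, IPv4: "10.0.0.0/8"},
	}
	fmt.Printf("%+v\n", cc)
}
```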
This commit just adds validation according to KEP-127: we check that only
the volumes supported in phase 1 of the KEP are accepted.
Signed-off-by: Rodrigo Campos <rodrigoca@microsoft.com>
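A minimal sketch of the kind of check added, with a placeholder set of volume types; the authoritative phase-1 list comes from KEP-127, not this example.

```go
// Sketch only: reject pod volumes whose type is not in the supported set.
package main

import "fmt"

type Volume struct {
	Name string
	Type string // simplified stand-in for the volume source union
}

// supportedForUserNamespaces is an illustrative set; consult KEP-127 for the
// authoritative phase-1 list.
var supportedForUserNamespaces = map[string]bool{
	"configMap":   true,
	"secret":      true,
	"projected":   true,
	"downwardAPI": true,
	"emptyDir":    true,
}

func validateVolumesForUserNamespace(volumes []Volume) error {
	for _, v := range volumes {
		if !supportedForUserNamespaces[v.Type] {
			return fmt.Errorf("volume %q: type %q is not supported with user namespaces (phase 1)", v.Name, v.Type)
		}
	}
	return nil
}

func main() {
	fmt.Println(validateVolumesForUserNamespace([]Volume{{Name: "cfg", Type: "configMap"}}))
	fmt.Println(validateVolumesForUserNamespace([]Volume{{Name: "host", Type: "hostPath"}}))
}
```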