If no propagation policy has been set, the Pods associated
with a Job will linger because OrphanDependents is the
current default policy. This patch ensures that a warning
is emitted when the user doesn't explicitly set a deletion
propagation policy.
More context: https://github.com/kubernetes/kubernetes/pull/103449#discussion_r675820335
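For illustration (not part of the patch itself), a client can avoid the warning by setting the propagation policy explicitly when deleting a Job; the kubeconfig handling, namespace, and Job name below are assumptions:

```go
package main

import (
	"context"

	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	// Illustrative setup; kubeconfig path handling is an assumption.
	config, err := clientcmd.BuildConfigFromFlags("", clientcmd.RecommendedHomeFile)
	if err != nil {
		panic(err)
	}
	clientset, err := kubernetes.NewForConfig(config)
	if err != nil {
		panic(err)
	}

	// Explicitly request background cascading deletion so the Job's Pods
	// are cleaned up instead of being orphaned by the legacy default.
	policy := metav1.DeletePropagationBackground
	err = clientset.BatchV1().Jobs("default").Delete(
		context.TODO(), "example-job",
		metav1.DeleteOptions{PropagationPolicy: &policy},
	)
	if err != nil {
		panic(err)
	}
}
```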
Track Job Pods that have finished but are not yet counted as
failed or succeeded, gated by the JobTrackingWithFinalizers feature gate.
Change-Id: I3e080f3ec090922640384b692e88eaf9a544d3b5
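A minimal sketch of the tracking idea, assuming the upstream pod finalizer name batch.kubernetes.io/job-tracking; the helper function is hypothetical:

```go
package main

import (
	"fmt"

	v1 "k8s.io/api/core/v1"
)

// jobTrackingFinalizer is the finalizer name used upstream; treat it as
// an assumption for illustration purposes.
const jobTrackingFinalizer = "batch.kubernetes.io/job-tracking"

// hasTrackingFinalizer is a hypothetical helper: a finished Pod that
// still carries the finalizer has not yet been counted as failed or
// succeeded by the Job controller.
func hasTrackingFinalizer(pod *v1.Pod) bool {
	for _, f := range pod.Finalizers {
		if f == jobTrackingFinalizer {
			return true
		}
	}
	return false
}

func main() {
	pod := &v1.Pod{}
	pod.Finalizers = []string{jobTrackingFinalizer}
	fmt.Println(hasTrackingFinalizer(pod)) // true
}
```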
It is not uncommon for users to Create a Service and not specify things
like ClusterIP and NodePort, which we then allocate for them. They save
that YAML somewhere and later use it again in an Update, but then it
fails.
That's because we detect them trying to change a ClusterIP from an
allocated value to "", which is not allowed. If it were just NodePort,
the update would actually succeed and reallocate a new port.
After this change, we try to "patch" such updates by carrying over the
values the user did not specify from the old object.
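To make the idea concrete, here is a minimal sketch of "patching" an update from the old object; patchAllocatedValues is a hypothetical reduction (the real logic lives in the Service REST strategy and matches ports by more than just index):

```go
package main

import (
	"fmt"

	v1 "k8s.io/api/core/v1"
)

// patchAllocatedValues: if the user omitted previously allocated values
// on update, carry them over from the old object instead of rejecting
// the request.
func patchAllocatedValues(newSvc, oldSvc *v1.Service) {
	if newSvc.Spec.ClusterIP == "" {
		newSvc.Spec.ClusterIP = oldSvc.Spec.ClusterIP
	}
	for i := range newSvc.Spec.Ports {
		// Index-based matching is a simplification for this sketch.
		if i < len(oldSvc.Spec.Ports) && newSvc.Spec.Ports[i].NodePort == 0 {
			newSvc.Spec.Ports[i].NodePort = oldSvc.Spec.Ports[i].NodePort
		}
	}
}

func main() {
	oldSvc := &v1.Service{Spec: v1.ServiceSpec{
		ClusterIP: "10.0.0.10",
		Ports:     []v1.ServicePort{{Port: 80, NodePort: 30080}},
	}}
	// The user's saved YAML omits the allocated values.
	newSvc := &v1.Service{Spec: v1.ServiceSpec{
		Ports: []v1.ServicePort{{Port: 80}},
	}}
	patchAllocatedValues(newSvc, oldSvc)
	fmt.Println(newSvc.Spec.ClusterIP, newSvc.Spec.Ports[0].NodePort) // 10.0.0.10 30080
}
```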
Modify the behavior of the AnyVolumeDataSource alpha feature gate to enable
a new field, DataSourceRef, rather than modifying the behavior of the
existing DataSource field. This allows the addition of Volume Populators
in a way that doesn't risk breaking backwards compatibility, although it
will eventually result in deprecating the DataSource field.
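For illustration, a PVC using the new field might be built like this sketch, assuming the API types at the time of this change (DataSourceRef as a *TypedLocalObjectReference; later releases widen the type) and example snapshot values:

```go
package main

import (
	"fmt"

	v1 "k8s.io/api/core/v1"
	"k8s.io/apimachinery/pkg/api/resource"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)

func main() {
	apiGroup := "snapshot.storage.k8s.io" // example populator source
	pvc := &v1.PersistentVolumeClaim{
		ObjectMeta: metav1.ObjectMeta{Name: "restored-pvc"},
		Spec: v1.PersistentVolumeClaimSpec{
			AccessModes: []v1.PersistentVolumeAccessMode{v1.ReadWriteOnce},
			Resources: v1.ResourceRequirements{
				Requests: v1.ResourceList{
					v1.ResourceStorage: resource.MustParse("1Gi"),
				},
			},
			// DataSourceRef is the new field enabled by the
			// AnyVolumeDataSource gate; DataSource stays untouched
			// for backwards compatibility.
			DataSourceRef: &v1.TypedLocalObjectReference{
				APIGroup: &apiGroup,
				Kind:     "VolumeSnapshot",
				Name:     "my-snapshot",
			},
		},
	}
	fmt.Println(pvc.Spec.DataSourceRef.Kind)
}
```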
* pkg/features: promote the ServiceInternalTrafficPolicy feature gate to Beta and on by default
Signed-off-by: Andrew Sy Kim <kim.andrewsy@gmail.com>
* pkg/api/service/testing: update Service test fixture functions to set internalTrafficPolicy=Cluster by default
Signed-off-by: Andrew Sy Kim <kim.andrewsy@gmail.com>
* pkg/apis/core/validation: add more Service validation tests for internalTrafficPolicy
Signed-off-by: Andrew Sy Kim <kim.andrewsy@gmail.com>
* pkg/registry/core/service/storage: fix failing Service REST storage tests to use internalTrafficPolicy: Cluster
Signed-off-by: Andrew Sy Kim <kim.andrewsy@gmail.com>
* pkg/registry/core/service/storage: add two test cases for Service REST TestServiceRegistryInternalTrafficPolicyClusterThenLocal and TestServiceRegistryInternalTrafficPolicyLocalThenCluster
Signed-off-by: Andrew Sy Kim <kim.andrewsy@gmail.com>
* pkg/registry/core/service: update strategy unit tests to expect default
internalTrafficPolicy=Cluster
Signed-off-by: Andrew Sy Kim <kim.andrewsy@gmail.com>
* pkg/proxy/ipvs: fix unit test Test_EndpointSliceReadyAndTerminatingLocal to use internalTrafficPolicy=Cluster
Signed-off-by: Andrew Sy Kim <kim.andrewsy@gmail.com>
* pkg/apis/core: update fuzzers to set Service internalTrafficPolicy field
Signed-off-by: Andrew Sy Kim <kim.andrewsy@gmail.com>
* pkg/api/service/testing: refactor Service test fixtures to use Tweak funcs (see the fixture sketch after this list)
Signed-off-by: Andrew Sy Kim <kim.andrewsy@gmail.com>
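As a hedged illustration of the Tweak-func fixture pattern mentioned in the last item, with hypothetical helper names (the real fixtures live in pkg/api/service/testing):

```go
package main

import (
	"fmt"

	v1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)

// Tweak mutates a Service fixture; tests compose as many as they need.
type Tweak func(*v1.Service)

// makeService is a hypothetical fixture builder that applies the new
// default of internalTrafficPolicy=Cluster, then any caller tweaks.
func makeService(name string, tweaks ...Tweak) *v1.Service {
	cluster := v1.ServiceInternalTrafficPolicyCluster
	svc := &v1.Service{
		ObjectMeta: metav1.ObjectMeta{Name: name},
		Spec: v1.ServiceSpec{
			InternalTrafficPolicy: &cluster,
		},
	}
	for _, tweak := range tweaks {
		tweak(svc)
	}
	return svc
}

// setInternalTrafficPolicyLocal is an example tweak for Local-policy tests.
func setInternalTrafficPolicyLocal(svc *v1.Service) {
	local := v1.ServiceInternalTrafficPolicyLocal
	svc.Spec.InternalTrafficPolicy = &local
}

func main() {
	svc := makeService("test-svc", setInternalTrafficPolicyLocal)
	fmt.Println(*svc.Spec.InternalTrafficPolicy) // Local
}
```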
1. add the AllocateLoadBalancerNodePorts field to the specs in validation test cases (see the sketch after this list)
2. update the fuzzer
3. in the resource quota e2e test, allocate a node port for a LoadBalancer-type
service and exceed the node port quota
Signed-off-by: Hanlin Shi <shihanlin9@gmail.com>
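A minimal sketch of the spec shape such test cases exercise, assuming only that AllocateLoadBalancerNodePorts is the *bool field on ServiceSpec:

```go
package main

import (
	"fmt"

	v1 "k8s.io/api/core/v1"
)

func main() {
	// AllocateLoadBalancerNodePorts is a *bool on ServiceSpec; validation
	// test cases set it explicitly on LoadBalancer-type Services.
	allocate := true
	spec := v1.ServiceSpec{
		Type:                          v1.ServiceTypeLoadBalancer,
		AllocateLoadBalancerNodePorts: &allocate,
		Ports:                         []v1.ServicePort{{Port: 80}},
	}
	fmt.Println(spec.Type, *spec.AllocateLoadBalancerNodePorts)
}
```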
This change updates the CSR API to add a new, optional field called
expirationSeconds. This field is a request to the signer for the
maximum duration the client wishes the cert to have. The signer is
free to ignore this request based on its own internal policy. The
signers built into KCM will honor this field as long as it is not set
to a value greater than --cluster-signing-duration. The minimum allowed
value for this field is 600 seconds (ten minutes).
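As a hedged example of a client using the new field, the following requests a one-hour certificate; csrPEM and the CSR name are illustrative inputs:

```go
package main

import (
	"fmt"

	certificatesv1 "k8s.io/api/certificates/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)

func main() {
	var csrPEM []byte // placeholder for a real PEM-encoded CSR

	expirationSeconds := int32(3600) // must be >= 600 (ten minutes)
	csr := &certificatesv1.CertificateSigningRequest{
		ObjectMeta: metav1.ObjectMeta{Name: "example-csr"},
		Spec: certificatesv1.CertificateSigningRequestSpec{
			Request:    csrPEM,
			SignerName: "kubernetes.io/kube-apiserver-client",
			Usages:     []certificatesv1.KeyUsage{certificatesv1.UsageClientAuth},
			// The signer may still issue a shorter-lived cert; this
			// field is a request, not a guarantee.
			ExpirationSeconds: &expirationSeconds,
		},
	}
	fmt.Println(*csr.Spec.ExpirationSeconds)
}
```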
This change will help enforce safer durations for certificates in
the Kube ecosystem and will help related projects such as
cert-manager with their migration to the Kube CSR API.
Future enhancements may update the Kubelet to take advantage of this
field when it is configured in a way that can tolerate shorter
certificate lifespans with regular rotation.
Signed-off-by: Monis Khan <mok@vmware.com>
These tests were sometimes using the "inner" REST and sometimes the
"outer" REST. This commit changes all but one test to use the outer one.
The remaining test needs rework.
The REST API for Services was already validating that, on update, the
old and new Service have the same type. That guarantees the type stays
the same afterwards, so we don't need to validate the service type on
both the old and the new Service.
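A hypothetical distillation of the update-time check described above; the function name and error wording are assumptions:

```go
package main

import (
	"fmt"

	v1 "k8s.io/api/core/v1"
	"k8s.io/apimachinery/pkg/util/validation/field"
)

// validateServiceTypeUnchanged sketches the check: once it passes, old
// and new types are identical, so later validation only needs to look
// at one of them.
func validateServiceTypeUnchanged(newSvc, oldSvc *v1.Service) field.ErrorList {
	var allErrs field.ErrorList
	if newSvc.Spec.Type != oldSvc.Spec.Type {
		allErrs = append(allErrs, field.Invalid(
			field.NewPath("spec", "type"),
			newSvc.Spec.Type,
			"may not change service type on update"))
	}
	return allErrs
}

func main() {
	oldSvc := &v1.Service{Spec: v1.ServiceSpec{Type: v1.ServiceTypeClusterIP}}
	newSvc := &v1.Service{Spec: v1.ServiceSpec{Type: v1.ServiceTypeNodePort}}
	fmt.Println(validateServiceTypeUnchanged(newSvc, oldSvc))
}
```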