Remove the comment "As of v1.22, this field is beta and is controlled
via the CSRDuration feature gate" from the expirationSeconds field's
godoc.
Mark the "CSRDuration" feature gate as GA in 1.24, lock its value to
"true", and remove the various logic which handled when the gate was
"false".
Update conformance test to check that the CertificateSigningRequest's
Spec.ExpirationSeconds field is stored, but do not check if the field
is honored since this functionality is optional.
- Lock feature gate to true and schedule for deletion in 1.26
- Remove checks on feature gate
- Graduate E2E test to Conformance
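For reference, a minimal client-go sketch of a CSR that sets the field; the CSR name, signer, and one-day duration below are illustrative, not taken from the conformance test:
```go
// Sketch: create a CSR with spec.expirationSeconds set, assuming a configured
// client-go clientset. Name, signer, and duration are illustrative only.
package csrexample

import (
	"context"

	certificatesv1 "k8s.io/api/certificates/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
)

func createCSR(ctx context.Context, cs kubernetes.Interface, pemCSR []byte) error {
	oneDay := int32(24 * 60 * 60)
	csr := &certificatesv1.CertificateSigningRequest{
		ObjectMeta: metav1.ObjectMeta{Name: "example-csr"},
		Spec: certificatesv1.CertificateSigningRequestSpec{
			Request:    pemCSR,
			SignerName: "kubernetes.io/kube-apiserver-client",
			Usages:     []certificatesv1.KeyUsage{certificatesv1.UsageClientAuth},
			// Requested certificate lifetime; signers may honor it but are not
			// required to, which is why conformance only checks it is stored.
			ExpirationSeconds: &oneDay,
		},
	}
	_, err := cs.CertificatesV1().CertificateSigningRequests().Create(ctx, csr, metav1.CreateOptions{})
	return err
}
```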
Change-Id: I6814819d318edaed5c86dae4055f4b050a4d39fd
Signed-off-by: Davanum Srinivas <davanum@gmail.com>
Run the hack/update* commands to regenerate files
Signed-off-by: Davanum Srinivas <davanum@gmail.com>
Update staging/src/k8s.io/api/core/v1/types.go
Co-authored-by: Jordan Liggitt <jordan@liggitt.net>
Update staging/src/k8s.io/api/core/v1/types.go
Co-authored-by: Jordan Liggitt <jordan@liggitt.net>
more files that needed updates
Signed-off-by: Davanum Srinivas <davanum@gmail.com>
Removed references to Docker in the Kubernetes API
This commit corrects struct field names in the godoc for the volume
sources and persistent volume sources of various storage volume plugins.
Additional Ref# #105963 (comment)
Signed-off-by: Humble Chirammal <hchiramm@redhat.com>
The field names in godoc for various core storage structs have been
corrected with this commit.
Additional Ref# #105963 (comment)
Signed-off-by: Humble Chirammal <hchiramm@redhat.com>
The field names in godoc for PersistentVolumeSource and
VolumeSource have been corrected with this commit.
Additional Ref# #105963 (comment)
Signed-off-by: Humble Chirammal <hchiramm@redhat.com>
Implements server side field validation behind the
`ServerSideFieldValidation` feature gate. With the
feature enabled, any create/update/patch request
with the `fieldValidation` query param set to
"Strict" will error if the object in the request
body has unknown fields. A value of "Warn"
(also the default when the feature is enabled)
lets the request succeed with a warning.
When the feature is disabled (or the query param
has a value of "Ignore"), the request will succeed
as it previously had with no indications of any
unknown or duplicate fields.
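A sketch of opting into strict validation from client-go, assuming a client version whose metav1.CreateOptions exposes the fieldValidation param; the object and namespace are illustrative:
```go
// Sketch: ask the server for strict validation on create, assuming
// metav1.CreateOptions carries the fieldValidation query param.
package fieldvalidation

import (
	"context"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
)

func createStrict(ctx context.Context, cs kubernetes.Interface, cm *corev1.ConfigMap) error {
	// "Strict" errors on unknown/duplicate fields, "Warn" surfaces warnings,
	// "Ignore" preserves the old silent behavior.
	opts := metav1.CreateOptions{FieldValidation: "Strict"}
	_, err := cs.CoreV1().ConfigMaps("default").Create(ctx, cm, opts)
	return err
}
```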
Signed-off-by: wangyysde <net_use@bzhy.com>
Generate swagger.json.
Use v2 path for hpa_cpu_field.
run update-codegen.sh
Signed-off-by: wangyysde <net_use@bzhy.com>
* De-share the Handler struct in core API
An upcoming PR adds a handler that only applies on one of these paths.
Having fields that don't work seems bad.
This never should have been shared. Lifecycle hooks are like a "write"
while probes are more like a "read". HTTPGet and TCPSocket don't really
make sense as lifecycle hooks (but I can't take that back). When we add
gRPC, it is EXPLICITLY a health check (defined by gRPC) not an arbitrary
RPC - so a probe makes sense but a hook does not.
In the future I can also see adding lifecycle hooks that don't make
sense as probes. E.g. 'sleep' is a common lifecycle request. The only
option is `exec`, which requires having a sleep binary in your image.
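A simplified sketch of the resulting shape (the ProbeHandler/LifecycleHandler names follow what core/v1 ended up with; field sets are abridged, and the gRPC field shown is the follow-up addition mentioned above):
```go
// Simplified sketch of the de-shared types; not the full core/v1 definitions.
package coresketch

type ExecAction struct{ Command []string }
type HTTPGetAction struct{ Path, Host string }
type TCPSocketAction struct{ Host string }
type GRPCAction struct {
	Port    int32
	Service *string
}

// ProbeHandler: "read"-style checks; a gRPC health check makes sense here.
type ProbeHandler struct {
	Exec      *ExecAction
	HTTPGet   *HTTPGetAction
	TCPSocket *TCPSocketAction
	GRPC      *GRPCAction
}

// LifecycleHandler: "write"-style hooks; no gRPC, and future hook-only
// actions (e.g. sleep) can be added without leaking into probes.
type LifecycleHandler struct {
	Exec      *ExecAction
	HTTPGet   *HTTPGetAction
	TCPSocket *TCPSocketAction
}
```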
* Run update scripts
Keep a count of the pods that have the ready condition.
Also:
- Add feature gate JobReadyPods.
- Add Ready to describe.
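A sketch of consuming the new count, assuming it is exposed as the optional job.Status.Ready (*int32) guarded by the JobReadyPods gate:
```go
// Sketch: read the ready-pod count from a Job; assumes the optional *int32
// Ready field guarded by the JobReadyPods feature gate.
package jobready

import batchv1 "k8s.io/api/batch/v1"

func readyPods(job *batchv1.Job) int32 {
	// Nil when the feature gate is off or the controller has not reported yet.
	if job.Status.Ready == nil {
		return 0
	}
	return *job.Status.Ready
}
```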
Change-Id: Ib934730a430a8e2a2f485671e345fe2330006939
Enable feature by default.
Update integration tests for other features to assume that finalizers are present.
Change-Id: Ie969344f572627dba882c0e862e5700dadaf3026
The feature gate gets locked to "true", with the goal of removing it in two
releases.
All code can now assume that the feature is enabled. Tests for the
"feature disabled" case are no longer needed and are removed.
Some code wasn't using the new helper functions yet. That gets changed while
touching those lines.
* Clarify ReadyReplicas docs
Clarifies and aligns the docs regarding ReadyReplicas in the
ReplicaSetStatus, DeploymentStatus, StatefulSetStatus and the
ReplicationControllerStatus.
* Clarify NumberReady docs
Clarifies and aligns the docs regarding NumberReady in the
DaemonSetStatus.
* Autogenerate docs
* Fix doc text with PR feedback
* Autogen docs
* Apply suggestions from code review
Co-authored-by: Jordan Liggitt <jordan@liggitt.net>
* Autogen docs
Co-authored-by: Jordan Liggitt <jordan@liggitt.net>
For tracking Job Pods that have finished but are not yet counted as failed or succeeded
And feature gate JobTrackingWithFinalizers
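A sketch of reading the interim tracking data, assuming the field landed as job.Status.UncountedTerminatedPods with Succeeded/Failed UID lists:
```go
// Sketch: inspect the interim tracking field; assumes the Succeeded/Failed
// UID lists under job.Status.UncountedTerminatedPods.
package jobtracking

import batchv1 "k8s.io/api/batch/v1"

// pendingCounts returns how many finished pods still carry a tracking
// finalizer and have not yet been folded into status.succeeded/failed.
func pendingCounts(job *batchv1.Job) (succeeded, failed int) {
	u := job.Status.UncountedTerminatedPods
	if u == nil {
		return 0, 0
	}
	return len(u.Succeeded), len(u.Failed)
}
```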
Change-Id: I3e080f3ec090922640384b692e88eaf9a544d3b5
Modify the behavior of the AnyVolumeDataSource alpha feature gate to enable
a new field, DataSourceRef, rather than modifying the behavior of the
existing DataSource field. This allows adding Volume Populators in a way
that doesn't risk breaking backwards compatibility, although it will
result in eventually deprecating the DataSource field.
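For illustration, a sketch of requesting a populated PVC, assuming DataSourceRef has the same TypedLocalObjectReference shape that DataSource had when the field was introduced; the API group, kind, and names are hypothetical:
```go
// Sketch: point a PVC at a custom volume populator via the new field.
// The populator CRD group, kind, and object name below are hypothetical.
package populator

import corev1 "k8s.io/api/core/v1"

func withPopulator(spec *corev1.PersistentVolumeClaimSpec) {
	apiGroup := "example.populator.io" // hypothetical populator CRD group
	spec.DataSourceRef = &corev1.TypedLocalObjectReference{
		APIGroup: &apiGroup,
		Kind:     "DataImport",
		Name:     "my-import",
	}
	// Unlike DataSource, unrecognized kinds here are preserved rather than
	// silently dropped, which is what makes arbitrary volume populators
	// possible without breaking backwards compatibility.
}
```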
Fix the godoc for RollingUpdateDaemonSet to state that
spec.updateStrategy.rollingUpdate.maxUnavailable is rounded up.
A recent commit changed the godoc to say that the value of this field
was rounded down, but the actual implementation rounds up and always has
rounded up. (This is in contrast to Deployments, where
spec.strategy.rollingUpdate.maxUnavailable is rounded down.)
Follow-up to commit 5aa53f885c.
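A quick worked comparison using the shared intstr helper (the 25% / 10-replica numbers are illustrative): rounding up gives 3, rounding down gives 2.
```go
// Sketch: the same maxUnavailable value scaled two ways. With 25% of 10,
// rounding up yields 3 (DaemonSet behavior), rounding down yields 2
// (Deployment behavior).
package rounding

import (
	"fmt"

	"k8s.io/apimachinery/pkg/util/intstr"
)

func demo() {
	maxUnavailable := intstr.FromString("25%")
	up, _ := intstr.GetScaledValueFromIntOrPercent(&maxUnavailable, 10, true)    // round up
	down, _ := intstr.GetScaledValueFromIntOrPercent(&maxUnavailable, 10, false) // round down
	fmt.Println(up, down) // 3 2
}
```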
* api/openapi-spec/swagger.json:
* staging/src/k8s.io/api/apps/v1/generated.proto:
* pkg/apis/apps/types.go:
* staging/src/k8s.io/api/apps/v1/types.go:
* staging/src/k8s.io/api/apps/v1/types_swagger_doc_generated.go:
* staging/src/k8s.io/api/apps/v1beta2/generated.proto:
* staging/src/k8s.io/api/apps/v1beta2/types.go:
* staging/src/k8s.io/api/apps/v1beta2/types_swagger_doc_generated.go:
* staging/src/k8s.io/api/extensions/v1beta1/generated.proto:
* staging/src/k8s.io/api/extensions/v1beta1/types.go:
* staging/src/k8s.io/api/extensions/v1beta1/types_swagger_doc_generated.go:
* staging/src/k8s.io/cli-runtime/artifacts/openapi/swagger.json:
* staging/src/k8s.io/kubectl/testdata/openapi/swagger.json:
Change "rounding down" to "rounding up".
#### What type of PR is this?
/kind bug
#### What this PR does / why we need it:
This PR adds descriptions for the following:
1. The Metadata and List fields of the `StatefulSetLists` struct
2. The Metadata field of the `StatefulSet` struct
#### Which issue(s) this PR fixes:
Ref #99675
This changes the `/ephemeralcontainers` subresource of `/pods` to use
the `Pod` kind rather than `EphemeralContainers`.
When designing this API initially it seemed preferable to create a new
kind containing only the pod's ephemeral containers, similar to how
binding and scaling work.
It later became clear that this made admission control more difficult
because the controller wouldn't be presented with the entire Pod, so we
updated this to operate on the entire Pod, similar to how `/status`
works.
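A sketch of using the subresource after this change, assuming the client-go PodInterface method that takes the full Pod; the container name and image are illustrative:
```go
// Sketch: add a debug container through the /ephemeralcontainers subresource,
// sending the whole Pod so admission plugins see full context.
package ephemeral

import (
	"context"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
)

func addDebugContainer(ctx context.Context, cs kubernetes.Interface, pod *corev1.Pod) (*corev1.Pod, error) {
	updated := pod.DeepCopy()
	updated.Spec.EphemeralContainers = append(updated.Spec.EphemeralContainers, corev1.EphemeralContainer{
		EphemeralContainerCommon: corev1.EphemeralContainerCommon{
			Name:  "debugger",
			Image: "busybox",
		},
	})
	return cs.CoreV1().Pods(pod.Namespace).UpdateEphemeralContainers(ctx, pod.Name, updated, metav1.UpdateOptions{})
}
```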
Ensure that all label selectors are treated as atomic values,
to exclude situations when selectors are being corrupted by
different actors attempting to apply their overlapping definition
for this field with server-side-apply.
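For reference, an illustrative sketch of the kind of marker that tells server-side apply to treat a struct as one atomic value; the type below is a placeholder, not the actual change:
```go
// Illustrative only: the marker on this placeholder type is how an API type
// declares itself atomic, so server-side apply replaces the whole selector
// instead of merging it field by field across multiple appliers.
package atomicsketch

// +structType=atomic
type ExampleSelector struct {
	MatchLabels      map[string]string
	MatchExpressions []string
}
```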
(Resource|Verb)All are meaningless in the context of the OpenAPI spec. I saw
ResourceAll used in an RBAC policy.
Change-Id: I8ab5f230bed23be902f77cadee3fbcdec6b24064
1. Add API definitions
2. Add feature gate and drop the field when the feature gate is not on
3. Set default values for the field
4. Add API validation
5. Add kube-proxy iptables and ipvs implementations
6. Add tests
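Step 2 typically follows the usual drop-disabled-fields pattern in the resource strategy; a generic sketch with placeholder gate and field names (not the real ones):
```go
// Generic sketch of dropping a gated field on write when the feature gate is
// off; "SomeFeature" and NewField are placeholders, not the real names.
package strategysketch

import (
	utilfeature "k8s.io/apiserver/pkg/util/feature"
	"k8s.io/component-base/featuregate"
)

const SomeFeature featuregate.Feature = "SomeFeature" // placeholder gate name

type Spec struct {
	NewField *string // placeholder for the gated field
}

// dropDisabledFields clears the new field unless the gate is on or the old
// object already used it (so updates never strip data stored earlier).
func dropDisabledFields(newSpec, oldSpec *Spec) {
	if utilfeature.DefaultFeatureGate.Enabled(SomeFeature) {
		return
	}
	if oldSpec != nil && oldSpec.NewField != nil {
		return
	}
	newSpec.NewField = nil
}
```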
* Removes discovery v1alpha1 API
* Replaces per Endpoint Topology with a read only DeprecatedTopology
in GA API
* Adds per Endpoint Zone field in GA API
* Fix merge conflict in kube_features
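A sketch of an Endpoint built with the GA fields above; the address, zone, and node values are illustrative:
```go
// Sketch: a discovery/v1 Endpoint using the per-endpoint Zone and NodeName
// fields. DeprecatedTopology is read-only and is intentionally not set here.
package endpointsketch

import discoveryv1 "k8s.io/api/discovery/v1"

func exampleEndpoint() discoveryv1.Endpoint {
	zone := "us-east-1a"
	node := "node-1"
	return discoveryv1.Endpoint{
		Addresses: []string{"10.0.0.1"},
		Zone:      &zone, // per-endpoint zone replaces the old topology key
		NodeName:  &node,
	}
}
```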
* Add alpha support for EndPort in Network Policy
Signed-off-by: Ricardo Pchevuzinske Katz <ricardo.katz@gmail.com>
* Add alpha support for EndPort in Network Policy
Signed-off-by: Ricardo Pchevuzinske Katz <ricardo.katz@gmail.com>
* Add alpha support for EndPort in Network Policy
Signed-off-by: Ricardo Pchevuzinske Katz <ricardo.katz@gmail.com>
* Correct some nits
Signed-off-by: Ricardo Pchevuzinske Katz <ricardo.katz@gmail.com>
* Add alpha support for EndPort in Network Policy
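A sketch of a NetworkPolicy port range using the new field; the 32000-32768 range and TCP protocol are illustrative:
```go
// Sketch: a NetworkPolicyPort expressing a contiguous port range via EndPort.
package endport

import (
	corev1 "k8s.io/api/core/v1"
	networkingv1 "k8s.io/api/networking/v1"
	"k8s.io/apimachinery/pkg/util/intstr"
)

func portRange() networkingv1.NetworkPolicyPort {
	proto := corev1.ProtocolTCP
	start := intstr.FromInt(32000)
	end := int32(32768)
	return networkingv1.NetworkPolicyPort{
		Protocol: &proto,
		Port:     &start, // start of the range
		EndPort:  &end,   // inclusive end; only valid with a numeric Port
	}
}
```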
The alternative to this would be to special-case code-generator. Since
it legit wants codegen, it seems wrong to make it be _examples (which tools
should ignore).
Make examples an "internal module" so the main go.mod for
k8s.io/code-generator does not get too polluted.
* Mixed protocol support for Services with type=LoadBalancer
KEP: https://github.com/kubernetes/enhancements/blob/master/keps/sig-network/20200103-mixed-protocol-lb.md
Add new feature gate to control the support of mixed protocols in Services with type=LoadBalancer
Add new fields to the ServiceStatus
Add Ports to the LoadBalancerIngress, so cloud provider implementations can report the status of the requested load balancer ports
Add ServiceCondition to the ServiceStatus so Service controllers can indicate the conditions of the Service
* regenerate conflicting stuff
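A sketch of how a (hypothetical) cloud provider controller could report per-port status with the new fields; the condition type and values are illustrative:
```go
// Sketch: fill the new ServiceStatus fields for a mixed-protocol LB Service.
// The condition type, IP, and ports below are illustrative.
package mixedprotocol

import (
	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)

func reportStatus(svc *corev1.Service) {
	svc.Status.LoadBalancer.Ingress = []corev1.LoadBalancerIngress{{
		IP: "203.0.113.10",
		Ports: []corev1.PortStatus{ // per-port status reported by the provider
			{Port: 53, Protocol: corev1.ProtocolTCP},
			{Port: 53, Protocol: corev1.ProtocolUDP},
		},
	}}
	svc.Status.Conditions = append(svc.Status.Conditions, metav1.Condition{
		Type:               "LoadBalancerProvisioned", // illustrative condition type
		Status:             metav1.ConditionTrue,
		Reason:             "Provisioned",
		LastTransitionTime: metav1.Now(),
	})
}
```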
- Remove feature gate consideration from EndpointSlice validation
- Deprecate topology field, note that it will be removed in future
release
- Update kube-proxy to check for NodeName if feature gate is enabled
- Add comments indicating the feature gates that can be used to enable
alpha API fields
- Add comments explaining use of deprecated address type in tests
In addition to adding NodeName, this notes that the topology field will
be deprecated soon. It also removes the IP address type that was
deprecated in Kubernetes 1.17 and intended to be removed in 1.20.
- The main idea here is that we want to 1) prevent potentially large CA
bundles from being set in an exec plugin's environment and 2) ensure
that the exec plugin is getting everything it needs in order to talk to
a cluster.
- Avoid breaking existing manual declarations of rest.Config instances by
moving exec Cluster to kubeconfig internal type.
- Use client.authentication.k8s.io/exec to qualify exec cluster extension.
- Deep copy the exec Cluster.Config when we copy a rest.Config.
Signed-off-by: Andrew Keesler <akeesler@vmware.com>
Service has had a problem since forever:
- User creates a service type=LoadBalancer
- We silently allocate them a NodePort
- User changes type to ClusterIP
- We fail the operation because they did not clear NodePort
They never asked for or used the NodePort!
Dual-stack introduced some dependent fields that get auto-wiped on
updates. This carries it further.
If you squint, you can see Service as a big, messy discriminated union,
with type as the discriminator. Ignoring fields for non-selected
union-modes seems right.
This introduces the potential for an apply loop. Specifically, we will
accept YAML that we did not previously accept. Apply could see the
field in local YAML and not in the server and repeatedly try to patch it
in. But since that YAML is currently an error, it seems like a very low
risk. Almost nobody actually specifies their own NodePort values.
To mitigate this somewhat, we only auto-wipe on updates. The same YAML
would fail to create. This is a little inconsistent. We could
auto-wipe on create, too, at the risk of more potential impact.
To do this properly, we need to know the old and new values, which means
we can not do it in defaulting or conversion. So we do it in strategy.
This change also adds unit tests and updates e2e tests to rely on and
verify this behavior.
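A simplified sketch of the strategy-level wipe described above; the real strategy handles more than is shown here:
```go
// Simplified sketch: wipe NodePorts on update when the new Service type no
// longer uses them. Strategy sees both old and new objects, which is why this
// cannot live in defaulting or conversion.
package servicestrategy

import corev1 "k8s.io/api/core/v1"

func needsNodePorts(svc *corev1.Service) bool {
	t := svc.Spec.Type
	return t == corev1.ServiceTypeNodePort || t == corev1.ServiceTypeLoadBalancer
}

// dropNodePortsOnUpdate only inspects the new object in this sketch; oldSvc is
// kept in the signature to reflect that strategy has both available.
func dropNodePortsOnUpdate(newSvc, oldSvc *corev1.Service) {
	if needsNodePorts(newSvc) {
		return // type still uses node ports; keep whatever is set
	}
	for i := range newSvc.Spec.Ports {
		newSvc.Spec.Ports[i].NodePort = 0
	}
}
```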
* api: structure change
* api: defaulting, conversion, and validation
* [FIX] validation: auto remove second ip/family when service changes to SingleStack
* [FIX] api: defaulting, conversion, and validation
* api-server: clusterIPs alloc, printers, storage and strategy
* [FIX] clusterIPs default on read
* alloc: auto remove second ip/family when service changes to SingleStack
* api-server: repair loop handling for clusterIPs
* api-server: force kubernetes default service into single stack
* api-server: tie dualstack feature flag with endpoint feature flag
* controller-manager: feature flag, endpoint, and endpointSlice controllers handling multi family service
* [FIX] controller-manager: feature flag, endpoint, and endpointSlicecontrollers handling multi family service
* kube-proxy: feature-flag, utils, proxier, and meta proxier
* [FIX] kubeproxy: call both proxier at the same time
* kubenet: remove forced pod IP sorting
* kubectl: modify describe to include ClusterIPs, IPFamilies, and IPFamilyPolicy
* e2e: fix tests that depends on IPFamily field AND add dual stack tests
* e2e: fix expected error message for ClusterIP immutability
* add integration tests for dualstack
The third phase of dual stack is a very complex change in the API;
basically, it introduces dual-stack Services. Main changes are:
- It pluralizes the Service IPFamily field to IPFamilies,
and removes the singular field.
- It introduces a new field IPFamilyPolicyType that can take
3 values to express the "dual-stack(mad)ness" of the cluster:
SingleStack, PreferDualStack and RequireDualStack
- It pluralizes ClusterIP to ClusterIPs.
The goal is to add coverage to the services API operations,
taking into account the 6 different modes a cluster can have:
- single stack: IPv4 or IPv6 (as of today)
- dual stack: IPv4 only, IPv6 only, IPv4 - IPv6, IPv6 - IPv4
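A sketch of a dual-stack Service spec using the pluralized fields; the PreferDualStack policy and family order shown are illustrative:
```go
// Sketch: a dual-stack ServiceSpec built with the pluralized fields.
package dualstack

import corev1 "k8s.io/api/core/v1"

func dualStackSpec() corev1.ServiceSpec {
	policy := corev1.IPFamilyPolicyPreferDualStack
	return corev1.ServiceSpec{
		IPFamilyPolicy: &policy,
		// Order expresses preference; the allocator fills ClusterIPs to match.
		IPFamilies: []corev1.IPFamily{corev1.IPv4Protocol, corev1.IPv6Protocol},
		Ports:      []corev1.ServicePort{{Port: 80}},
	}
}
```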
* [FIX] add integration tests for dualstack
* generated data
* generated files
Co-authored-by: Antonio Ojea <aojea@redhat.com>