Service has had a problem since forever:
- User creates a service type=LoadBalancer
- We silently allocate them a NodePort
- User changes type to ClusterIP
- We fail the operation because they did not clear NodePort
They never asked for or used the NodePort!
Dual-stack introduced some dependent fields that get auto-wiped on
updates. This change carries that idea further.
If you squint, you can see Service as a big, messy discriminated union,
with type as the discriminator. Ignoring fields for non-selected
union-modes seems right.
This introduces the potential for an apply loop. Specifically, we will
accept YAML that we did not previously accept. Apply could see the
field in local YAML and not in the server and repeatedly try to patch it
in. But since that YAML is currently an error, it seems like a very low
risk. Almost nobody actually specifies their own NodePort values.
To mitigate this somewhat, we only auto-wipe on updates. The same YAML
would fail to create. This is a little inconsistent. We could
auto-wipe on create, too, at the risk of more potential impact.
To do this properly, we need to know the old and new values, which means
we cannot do it in defaulting or conversion. So we do it in strategy.
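The wipe itself is then a small strategy-side check. A minimal sketch of
the idea (the function name is hypothetical; the real code lives in the
Service REST strategy, where PrepareForUpdate sees both objects):

import v1 "k8s.io/api/core/v1"

// dropDisallowedServiceFields is a hypothetical name for the
// strategy-side wipe sketched here.
func dropDisallowedServiceFields(newSvc, oldSvc *v1.Service) {
    // Only wipe on update; the same YAML still fails to create.
    if oldSvc == nil {
        return
    }
    // The type moved out of the NodePort-bearing union modes, so the
    // NodePort field is no longer selected: clear it instead of
    // rejecting the update.
    if newSvc.Spec.Type == v1.ServiceTypeClusterIP &&
        oldSvc.Spec.Type != v1.ServiceTypeClusterIP {
        for i := range newSvc.Spec.Ports {
            newSvc.Spec.Ports[i].NodePort = 0
        }
    }
}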
This change also adds unit tests and updates e2e tests to rely on and
verify this behavior.
Generally try to wave away folks who see a particular event stream
and feel tempted to extrapolate, building tooling that expects the
same underlying resource-transition chain to keep producing a
similar event stream as the underlying components evolve and are
updated. New controllers should not be constrained to be
backwards-compatible with previous versions with regard to Event
emission. This is distinct from the Event type itself, which has the
usual Kubernetes-API compatibility commitments for versioned types.
The EventTTL default has been 1h since 7e258b85bd (Reduce TTL for
events in etcd from 48hrs to 1hr, 2015-03-11, #5315), and remains so
today:
$ git --no-pager log -1 --format='%h %s' origin/master
8e5c02255c Merge pull request #90942 from ii/ii-create-pod%2Bpodstatus-resource-lifecycle-test
$ git --no-pager grep EventTTL: 8e5c02255c cmd/kube-apiserver/app/options/options.go
8e5c02255cc:cmd/kube-apiserver/app/options/options.go: EventTTL: 1 * time.Hour,
In this space [1,2]:
To avoid filling up master's disk, a retention policy is enforced:
events are removed one hour after the last occurrence. To provide
longer history and aggregation capabilities, a third party solution
should be installed to capture events.
...
Note: It is not guaranteed that all events happening in a cluster
will be exported to Stackdriver. One possible scenario when events
will not be exported is when event exporter is not running
(e.g. during restart or upgrade). In most cases it's fine to use
events for purposes like setting up metrics and alerts, but you
should be aware of the potential inaccuracy.
...
To prevent disturbing your workloads, event exporter does not have
resources set and is in the best effort QOS class, which means that
it will be the first to be killed in the case of resource
starvation.
Although that's talking more about export from etcd -> external
storage, and not about cluster components submitting events to etcd.
[1]: https://kubernetes.io/docs/tasks/debug-application-cluster/events-stackdriver/
[2]: https://github.com/kubernetes/website/pull/4155/files#diff-d8eb69c5436aa38b396d4f3ed75e4792R10
* api: structure change
* api: defaulting, conversion, and validation
* [FIX] validation: auto remove second ip/family when service changes to SingleStack
* [FIX] api: defaulting, conversion, and validation
* api-server: clusterIPs alloc, printers, storage and strategy
* [FIX] clusterIPs default on read
* alloc: auto remove second ip/family when service changes to SingleStack
* api-server: repair loop handling for clusterIPs
* api-server: force kubernetes default service into single stack
* api-server: tie dualstack feature flag with endpoint feature flag
* controller-manager: feature flag, endpoint, and endpointSlice controllers handling multi family service
* [FIX] controller-manager: feature flag, endpoint, and endpointSlice controllers handling multi family service
* kube-proxy: feature-flag, utils, proxier, and meta proxier
* [FIX] kube-proxy: call both proxiers at the same time
* kubenet: remove forced pod IP sorting
* kubectl: modify describe to include ClusterIPs, IPFamilies, and IPFamilyPolicy
* e2e: fix tests that depend on IPFamily field AND add dual stack tests
* e2e: fix expected error message for ClusterIP immutability
* add integration tests for dualstack
The third phase of dual stack is a very complex change in the API;
basically, it introduces dual-stack Services. The main changes are:
- It pluralizes the Service IPFamily field to IPFamilies,
and removes the singular field.
- It introduces a new field IPFamilyPolicyType that can take
3 values to express the "dual-stack(mad)ness" of the cluster:
SingleStack, PreferDualStack and RequireDualStack
- It pluralizes ClusterIP to ClusterIPs.
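Roughly, the resulting shape (an abridged sketch; json tags and all
unrelated ServiceSpec fields omitted, see k8s.io/api/core/v1 for the
real types):

type IPFamily string
type IPFamilyPolicyType string

const (
    SingleStack      IPFamilyPolicyType = "SingleStack"
    PreferDualStack  IPFamilyPolicyType = "PreferDualStack"
    RequireDualStack IPFamilyPolicyType = "RequireDualStack"
)

type ServiceSpec struct {
    ClusterIP      string              // kept for compatibility; equals ClusterIPs[0]
    ClusterIPs     []string            // new plural form
    IPFamilies     []IPFamily          // replaces the removed singular IPFamily
    IPFamilyPolicy *IPFamilyPolicyType // SingleStack, PreferDualStack or RequireDualStack
}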
The goal is to add coverage to the services API operations,
taking into account the 6 different modes a cluster can have:
- single stack: IPv4 or IPv6 (as of today)
- dual stack: IPv4 only, IPv6 only, IPv4,IPv6, or IPv6,IPv4 (the
  first family listed is the primary one)
* [FIX] add integration tests for dualstack
* generated data
* generated files
Co-authored-by: Antonio Ojea <aojea@redhat.com>
Log a warning instead and continue with the update. This is useful in cases where
the number of nodes is changing due to autoscaling or upgrades. It is possible
that the nodes picked by the service controller don't all exist by the time the
GCE layer lists them. The update should still succeed with whichever of the
input nodes are valid.
This still returns an error if zero nodes were found when a non-empty input was passed in.
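A sketch of the resulting behavior (illustrative names, not the actual
gce package code):

import (
    "fmt"

    "k8s.io/klog/v2"
)

// filterExistingNodes is a hypothetical helper: keep the requested
// nodes that were actually found, warn about the rest, and only error
// when a non-empty request matches nothing.
func filterExistingNodes(requested []string, found map[string]bool) ([]string, error) {
    var existing []string
    for _, name := range requested {
        if found[name] {
            existing = append(existing, name)
            continue
        }
        // The node may have been removed by autoscaling or an upgrade
        // between selection and this call: warn and keep going.
        klog.Warningf("node %q not found; skipping it for this update", name)
    }
    if len(requested) > 0 && len(existing) == 0 {
        return nil, fmt.Errorf("none of the %d given nodes were found", len(requested))
    }
    return existing, nil
}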
Currently, trying to use an external diff command with arguments fails,
because the entire command string is looked up through $PATH as a single
binary name. This patch allows users to use external diff tools, with
arguments or without, via the KUBECTL_EXTERNAL_DIFF env var.
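A minimal sketch of the splitting logic (the actual kubectl code may
differ in details):

import (
    "os"
    "os/exec"
    "strings"
)

// externalDiffCommand is a hypothetical helper showing the idea:
// split KUBECTL_EXTERNAL_DIFF into a program plus arguments instead
// of looking the whole string up in $PATH as one binary name.
func externalDiffCommand(from, to string) *exec.Cmd {
    external := os.Getenv("KUBECTL_EXTERNAL_DIFF")
    if external == "" {
        return exec.Command("diff", "-u", "-N", from, to)
    }
    // e.g. KUBECTL_EXTERNAL_DIFF="meld --label live" runs "meld" with
    // its arguments, rather than failing the $PATH lookup.
    words := strings.Fields(external)
    return exec.Command(words[0], append(words[1:], from, to)...)
}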
Reference: https://github.com/kubernetes/kubectl/issues/937
Signed-off-by: Douglas Schilling Landgraf <dougsland@redhat.com>
vendor/k8s.io/metrics/pkg/client/custom_metrics/multi_client.go:49:4: ineffective break statement. Did you mean to break out of the outer loop? (SA4011)
vendor/k8s.io/metrics/pkg/client/custom_metrics/versioned_client.go:38:2: var codecs is unused (U1000)
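The U1000 finding is simply an unused variable to delete; the SA4011
pattern is subtler (illustrative snippet, not the vendored client code):
inside a loop, a bare break in a switch or select only exits that switch
or select, so breaking the loop needs a label.

func firstStop(kinds []string) int {
    found := -1
loop:
    for i, k := range kinds {
        switch k {
        case "stop":
            found = i
            // A bare "break" here would only exit the switch; the
            // label is what actually terminates the loop.
            break loop
        }
    }
    return found
}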
The available_controller creates short-lived clients to sync remote APIService
objects. These clients are constructed with HTTP transports that cannot
be cached by client-go (because client-go won't know whether the TLS configs
have dynamic functions or not), which may spam idle connections. A local
cache works because we know all the configs share the same dialer
function, and can only vary on the dynamic cert/key.
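A rough sketch of such a cache (the keying and naming are assumptions
about the approach, not the actual controller code):

import (
    "crypto/sha256"
    "fmt"
    "net/http"
    "sync"
)

// transportCache is a hypothetical local cache: since every config in
// this controller shares the same dialer, keying on the dynamic
// cert/key pair alone is enough.
type transportCache struct {
    mu    sync.Mutex
    cache map[string]http.RoundTripper
}

func (c *transportCache) get(cert, key []byte, build func() http.RoundTripper) http.RoundTripper {
    k := fmt.Sprintf("%x/%x", sha256.Sum256(cert), sha256.Sum256(key))
    c.mu.Lock()
    defer c.mu.Unlock()
    if c.cache == nil {
        c.cache = map[string]http.RoundTripper{}
    }
    if rt, ok := c.cache[k]; ok {
        return rt // reuse pooled connections instead of leaving idle ones behind
    }
    rt := build()
    c.cache[k] = rt
    return rt
}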
Some unit tests print this warning:
W1013 09:06:21.581870 176998 helpers.go:567] --dry-run=true is deprecated (boolean value) and can be replaced with --dry-run=client.
This patch removes the warning by using --dry-run=client instead of --dry-run=true.
The unit tests that are affected are:
make test WHAT=./vendor/k8s.io/kubectl/pkg/cmd/apply GOFLAGS=-v
make test WHAT=./vendor/k8s.io/kubectl/pkg/cmd/create GOFLAGS=-v
Signed-off-by: Masashi Honma <masashi.honma@gmail.com>
Add `create ingress` unit tests
Move src code to staging dir
Update create command to reflect new API
Replaced deprecated `extensions` api with `networking`
Fix `missing strict dependencies`
Update BUILD
Update BUILD
Fix commit conflict with upstream
Update after review
* Removed obsolete files
* Moved v1beta to v1 api
Fixed gofmt
Fixed deps imports
Merge with PR #94327
Revert changes
Revert go.mod
Revert BUILD
No need to update generated BUILD
Add required deps to BUILD
Update BUILD
Clarify that `REMOTE_PORT` is interpreted as identifying a _Service_ port when the provided `TYPE` is `service`.
Also, highlight support for specifying a named port as `REMOTE_PORT`.
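For example, `kubectl port-forward service/myservice 8443:https` listens
on local port 8443 and forwards to the target of the Service's port
named `https`.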
When a VMSS is being deleted, instances are removed first. The VMSS
itself will disappear once empty. That delay is generally enough for
kube-controller-manager to delete the corresponding k8s nodes, but
might not be enough when it is busy or throttled (for instance).
If Kubernetes nodes remain after their backing VMSS was removed, the
Azure cloud provider will fail listing that VMSS' VMs, and downstream
callers (i.e. `InstanceExistsByProviderID`) won't interpret those errors
as a missing instance. The nodes will remain (still considered
"existing"), and controller-manager will indefinitely retry listing the
VMSS VMs, draining API call quotas and potentially causing throttling.
In practice a missing scale set implies the instances attributed to that
VMSS don't exist either: `InstanceExistsByProviderID` (part of the
general cloud provider interface) should return false in that case.
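Sketch of the intended semantics (the error sentinel and helper below
are stand-ins, not the real cloud-provider plumbing):

import "errors"

var errVMSSNotFound = errors.New("scale set not found")

// existsFromListError is a hypothetical helper classifying a VMSS VMs
// listing error: "scale set not found" means the instance does not
// exist, rather than being a transient failure worth retrying.
func existsFromListError(err error) (bool, error) {
    if err == nil {
        return true, nil
    }
    if errors.Is(err, errVMSSNotFound) {
        // The scale set is gone, so its instances are gone too; report
        // the node as missing so controller-manager can delete it
        // instead of retrying forever and draining API quota.
        return false, nil
    }
    return false, err
}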
The label triage/support has been reclassified as kind/support. The
kind/* family of labels makes more logical sense, as they describe the
"kind" of thing an issue or PR is.
For more information, see the announcement email:
https://groups.google.com/g/kubernetes-dev/c/YcaJpsjjLKw/m/i15cLLx5CAAJ
PATCH verb is used when creating a namespace using server-side apply,
while POST verb is used when creating a namespace using client-side
apply.
The difference in code paths between the two ways to create a namespace
led to an inconsistency when calling webhooks. When server-side apply is
used, the request sent to webhooks has the field "namespace" populated
with the name of the namespace being created. On the other hand, when
using client-side apply the "namespace" field is omitted.
This commit aims to make the behaviour consistent and populates the
"namespace" field when creating a namespace using POST verb (i.e.
client-side apply).
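A minimal sketch of the resulting rule (the helper name is
hypothetical):

// namespaceForAdmission illustrates the rule: when a namespace object
// is itself being created via POST, admission should see the object's
// own name in the "namespace" field, matching server-side apply.
func namespaceForAdmission(resource, requestNamespace, objectName string) string {
    if resource == "namespaces" && requestNamespace == "" {
        return objectName
    }
    return requestNamespace
}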
fixed syntax, wrote a test
fixed a test
Update staging/src/k8s.io/apimachinery/pkg/util/intstr/intstr_test.go
Co-Authored-By: Joel Speed <Joel.speed@hotmail.co.uk>
added test
fix test
fixed a test
gofmt
lint
function name
validation fix
godocs added