There are two writes but only one read on an unbuffered channel that is
created locally and not passed anywhere else.
Therefore, one of the two spawned goroutines could leak if either:
* The provided `f` takes longer than an erroneous result from
  `waiter.WaitForHealthyKubelet`; or
* The provided `f` completes before an erroneous result from
  `waiter.WaitForHealthyKubelet`.
The fix is to add a one-element buffer so that, in these cases, the second
goroutine's channel write still succeeds, allowing it to finish and release
its reference to the now-buffered channel, letting the channel be GC'd.
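A self-contained sketch of the pattern (the names here are illustrative, not the actual kubeadm code):

```go
package main

import (
	"errors"
	"fmt"
	"time"
)

// With an unbuffered channel, whichever goroutine finishes second would
// block forever on its send, since only one receive ever happens. The
// one-element buffer lets that second send complete, so the goroutine
// exits and the channel can be garbage collected.
func run(f func() error) error {
	errC := make(chan error, 1) // the one-element buffer is the fix

	go func() { errC <- waitForHealthyKubelet() }()
	go func() { errC <- f() }()

	return <-errC // first result wins; the loser's send lands in the buffer
}

// waitForHealthyKubelet stands in for waiter.WaitForHealthyKubelet.
func waitForHealthyKubelet() error {
	time.Sleep(10 * time.Millisecond)
	return errors.New("kubelet did not become healthy")
}

func main() {
	fmt.Println(run(func() error { return nil })) // typically prints <nil>
}
```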
This field is not used in the kubeadm code. It was brought over from
cli-runtime, where it is used to support complex relationships between
command line parameters, a complexity that is not present in kubeadm.
Support a list of service subnets.
Update the DNS, cert, and dry-run logic to support a list of Service CIDRs.
Added unit tests for GetKubernetesServiceCIDR and updated the
GetDNSIP() unit test to include dual-stack cases.
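For illustration, a hedged sketch of deriving the DNS IP from the first CIDR in a dual-stack list (the helper name and the manual offset arithmetic are assumptions, not the kubeadm implementation):

```go
package main

import (
	"fmt"
	"net"
	"strings"
)

// getDNSIP takes a (possibly dual-stack, comma-separated) list of
// Service CIDRs, picks the first subnet, and returns its 10th address,
// the conventional offset kubeadm uses for the cluster DNS Service.
func getDNSIP(serviceSubnets string) (net.IP, error) {
	first := strings.TrimSpace(strings.Split(serviceSubnets, ",")[0])
	_, cidr, err := net.ParseCIDR(first)
	if err != nil {
		return nil, fmt.Errorf("could not parse service subnet %q: %v", first, err)
	}
	ip := append(net.IP(nil), cidr.IP...) // copy the network address
	for i, carry := len(ip)-1, 10; i >= 0 && carry > 0; i-- {
		sum := int(ip[i]) + carry
		ip[i], carry = byte(sum%256), sum/256
	}
	return ip, nil
}

func main() {
	ip, _ := getDNSIP("10.96.0.0/12,fd00::/108")
	fmt.Println(ip) // 10.96.0.10
}
```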
Whenever kubeadm needs to fetch its configuration from the cluster, it gets
the component configuration of all supported components (currently only kubelet
and kube-proxy). However, kube-proxy is deemed an optional component and its
installation may be skipped (by skipping the addon/kube-proxy phase on init).
When kube-proxy's installation is skipped, its config map is not created,
and all kubeadm operations that fetch the config from the cluster are bound
to fail with "not found" or "forbidden" (because of missing RBAC rules) errors.
To fix this issue, we have to ignore the 403 and 404 errors, returned on an
attempt to fetch kube-proxy's component config from the cluster.
The `GetFromKubeProxyConfigMap` function now supports returning nil for both
error and object to indicate just such a case.
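A hedged sketch of such a tolerant fetch (the body is an assumption, not the verbatim kubeadm code; the Get call uses the context-less client-go signature of that era):

```go
package config // illustrative

import (
	v1 "k8s.io/api/core/v1"
	apierrors "k8s.io/apimachinery/pkg/api/errors"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	clientset "k8s.io/client-go/kubernetes"
)

// getFromKubeProxyConfigMap returns (nil, nil) when kube-proxy's config
// map is missing (404) or unreadable due to missing RBAC rules (403),
// signalling a skipped addon rather than a hard failure.
func getFromKubeProxyConfigMap(client clientset.Interface) (*v1.ConfigMap, error) {
	cm, err := client.CoreV1().ConfigMaps(metav1.NamespaceSystem).Get("kube-proxy", metav1.GetOptions{})
	if err != nil {
		if apierrors.IsNotFound(err) || apierrors.IsForbidden(err) {
			return nil, nil
		}
		return nil, err
	}
	return cm, nil
}
```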
Signed-off-by: Rostislav M. Georgiev <rostislavg@vmware.com>
The new etcd balancer (>3.3.14, 3.4.0) uses an asynchronous resolver for
endpoints. Without "WithBlock", the client may return before the
connection is up.
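For example (a minimal sketch; the timeout value is an assumption):

```go
package etcdutil // illustrative

import (
	"time"

	"go.etcd.io/etcd/clientv3"
	"google.golang.org/grpc"
)

// newClient blocks in clientv3.New until the connection is actually
// established (or DialTimeout expires), instead of returning while the
// asynchronous resolver is still working through the endpoints.
func newClient(endpoints []string) (*clientv3.Client, error) {
	return clientv3.New(clientv3.Config{
		Endpoints:   endpoints,
		DialTimeout: 10 * time.Second,
		DialOptions: []grpc.DialOption{grpc.WithBlock()},
	})
}
```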
Signed-off-by: Gyuho Lee <leegyuho@amazon.com>
- replace all stray calls to os.Exit() with util.CheckError()
- CheckError() now checks if the klog verbosity level is >=5
and shows a stack trace of the error
- don't call klog.Fatal in version.go
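A hedged sketch of the described behavior (not the actual util.CheckError; klog/v2's Enabled() is used here, while klog v1 allowed `if klog.V(5)` directly):

```go
package util // illustrative

import (
	"fmt"
	"os"

	"k8s.io/klog/v2"
)

// checkError prints the error and exits. With errors wrapped by
// github.com/pkg/errors, %+v prints the stack trace, while %v prints
// only the message.
func checkError(err error) {
	if err == nil {
		return
	}
	if klog.V(5).Enabled() {
		fmt.Fprintf(os.Stderr, "%+v\n", err)
	} else {
		fmt.Fprintf(os.Stderr, "%v\n", err)
	}
	os.Exit(1)
}
```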
It seems undesirable that Kubernetes as a system should block a node
if its Linux kernel is too new.
If such a problem ever occurs, we should exclude the affected versions
from the list of supported versions, instead of blocking users
from trying e.g. the latest 7.0.0-beta kernel just because our
validators are not aware of this new version.
etcd v3.3.0 added the --listen-metrics-urls flag, which allows specifying
additional URLs for the already present /health and /metrics endpoints.
While /health and /metrics are enabled on the URLs defined with
--listen-client-urls (v3+?), those do require HTTPS.
Replace the current etcdctl-based liveness probe with a standard HTTP
GET v1.Probe that connects to http://127.0.0.1:2381/health.
This endpoint is not reachable from the outside and is available
only for localhost connections.
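The replacement probe would look roughly like this (the delay/threshold values are assumptions; note that the core/v1 field was named Handler at the time, later renamed ProbeHandler):

```go
package etcd // illustrative

import (
	v1 "k8s.io/api/core/v1"
	"k8s.io/apimachinery/pkg/util/intstr"
)

// etcdLivenessProbe performs a plain HTTP GET against the local
// metrics listener instead of exec'ing etcdctl inside the container.
func etcdLivenessProbe() *v1.Probe {
	return &v1.Probe{
		Handler: v1.Handler{
			HTTPGet: &v1.HTTPGetAction{
				Host:   "127.0.0.1",
				Path:   "/health",
				Port:   intstr.FromInt(2381),
				Scheme: v1.URISchemeHTTP,
			},
		},
		InitialDelaySeconds: 15,
		TimeoutSeconds:      15,
		FailureThreshold:    8,
	}
}
```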
For file discovery, in case the user's kubeconfig references the CA
as a file on disk, make sure it is preloaded and embedded using
the new function EnsureCertificateAuthorityIsEmbedded().
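A sketch of what the new function plausibly does (the function name comes from this commit; the body is an assumption):

```go
package kubeconfig // illustrative

import (
	"io/ioutil"

	"github.com/pkg/errors"
	clientcmdapi "k8s.io/client-go/tools/clientcmd/api"
)

// EnsureCertificateAuthorityIsEmbedded reads a CA referenced by file
// path and embeds its bytes into the kubeconfig cluster entry.
func EnsureCertificateAuthorityIsEmbedded(cluster *clientcmdapi.Cluster) error {
	if cluster == nil {
		return errors.New("received nil value for Cluster")
	}
	if len(cluster.CertificateAuthorityData) == 0 && cluster.CertificateAuthority != "" {
		ca, err := ioutil.ReadFile(cluster.CertificateAuthority)
		if err != nil {
			return errors.Wrap(err, "could not read CA file")
		}
		cluster.CertificateAuthorityData = ca // embed the bytes...
		cluster.CertificateAuthority = ""     // ...and drop the file path
	}
	return nil
}
```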
This commit also applies cleanup:
- unroll validateKubeConfig() into ValidateConfigInfo() as this way
the default cluster can be re-used.
- in ValidateConfigInfo() reuse the variable config instead of creating
a new variable kubeconfig.
- make the Ensure* functions return descriptive errors instead of
wrapping the errors on the side of the callers.
When adding a new etcd member, the etcd cluster can enter a voting
state, during which any other members added at the exact same time
will fail with an error right away.
Implement exponential backoff retry around the MemberAdd call.
This solves a kubeadm problem when concurrently joining
control-plane nodes with stacked etcd members.
From experiments, a few retries spaced milliseconds apart are
sufficient to achieve the concurrent join of a 3xCP cluster.
Apply the same backoff to MemberRemove in case the concurrent
removal of members fails for similar reasons.
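A hedged sketch of the retry (the backoff parameters and names are illustrative, chosen to match "a few retries spaced milliseconds apart"):

```go
package etcdutil // illustrative

import (
	"context"
	"time"

	"go.etcd.io/etcd/clientv3"
	"k8s.io/apimachinery/pkg/util/wait"
)

// addMemberWithBackoff retries MemberAdd with exponential backoff so a
// transient mid-vote failure does not abort the join.
func addMemberWithBackoff(cli *clientv3.Client, peerURLs []string) (*clientv3.MemberAddResponse, error) {
	var resp *clientv3.MemberAddResponse
	backoff := wait.Backoff{Duration: 50 * time.Millisecond, Factor: 1.5, Steps: 10}
	err := wait.ExponentialBackoff(backoff, func() (bool, error) {
		ctx, cancel := context.WithTimeout(context.Background(), 5*time.Second)
		defer cancel()
		var addErr error
		resp, addErr = cli.MemberAdd(ctx, peerURLs)
		if addErr != nil {
			return false, nil // the cluster may be mid-vote; back off and retry
		}
		return true, nil
	})
	return resp, err
}
```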
Instead of creating a Docker client and fetching an Info object
from the Docker endpoint, call the "docker info" command
and populate a local dockerInfo struct from its JSON output.
Also:
- add unit tests.
- update import-boss and bazel.
This change affects "test/e2e_node/e2e_node_suite_test.go"
as it consumes this Docker validator by calling
"system.ValidateSpec()".
MarshalClusterConfigurationToBytes can output the component configs, as
separate YAML documents, in addition to the kubeadm ClusterConfiguration
kind. This is no longer necessary for the following reasons:
- All current use cases of this function require only the ClusterConfiguration.
- It will output component configs only if they are not the default ones. This
can produce nondeterministic output and, thus, cause potential problems.
- There are only hacky ways to dump just the ClusterConfiguration (without the
component configs).
Hence, we simplify things by replacing the function with direct calls to the
underlying MarshalToYamlForCodecs, thus marshalling only the
ClusterConfiguration when needed.
Signed-off-by: Rostislav M. Georgiev <rostislavg@vmware.com>
One test also proved that it did not call the internet, but this was
not foolproof, as it did not return a string and thus could be called
by something expecting it to fail.
Updated the network calls to be package-local so that tests can pass
their own implementations. A public interface was not provided, as it
is unlikely that one would ever be needed or wanted.
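The pattern, sketched with illustrative names (not the actual kubeadm code):

```go
package upgrade // illustrative

import (
	"io/ioutil"
	"net/http"
)

// A package-local function variable lets tests swap in a fake without
// exporting an interface. Production code calls it like any function.
var fetchFromURL = func(url string) (string, error) {
	resp, err := http.Get(url)
	if err != nil {
		return "", err
	}
	defer resp.Body.Close()
	body, err := ioutil.ReadAll(resp.Body)
	return string(body), err
}

// In a test:
//   fetchFromURL = func(string) (string, error) { return "v1.17.0", nil }
```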
During control plane joins, the control plane sometimes returns an
unexpected error when trying to download the `kubeadm-config` ConfigMap.
This is a workaround for this issue until the root cause is completely
identified and fixed.
Ideally, this commit should be reverted in the near future.
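A hedged sketch of a poll-until-success workaround (the interval and timeout are assumptions, and the Get call uses the context-less client-go signature of that era):

```go
package join // illustrative

import (
	"time"

	v1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/apimachinery/pkg/util/wait"
	clientset "k8s.io/client-go/kubernetes"
)

// getKubeadmConfigWithRetry polls until the ConfigMap download
// succeeds, masking the sporadic error described above.
func getKubeadmConfigWithRetry(client clientset.Interface) (*v1.ConfigMap, error) {
	var cm *v1.ConfigMap
	err := wait.PollImmediate(5*time.Second, 2*time.Minute, func() (bool, error) {
		var getErr error
		cm, getErr = client.CoreV1().ConfigMaps(metav1.NamespaceSystem).Get("kubeadm-config", metav1.GetOptions{})
		if getErr != nil {
			return false, nil // transient failure; retry
		}
		return true, nil
	})
	return cm, err
}
```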