When trying to bring up a cluster via kubeadm with these feature gates enabled,
kube-proxy fails because it doesn't know about them:
kind: Cluster
apiVersion: kind.x-k8s.io/v1alpha4
nodes:
- role: control-plane
- role: worker
- role: worker
featureGates: {"DynamicResourceAllocation":true,"ContextualLogging":true}
runtimeConfig: {"resource.k8s.io/v1alpha1":"true"}
=>
2023-01-20T07:07:54.474966617Z stderr F E0120 07:07:54.474846 1 run.go:74] "command failed" err="failed complete: unrecognized feature gate: ContextualLogging"
The effect of the logging feature gates on kube-proxy is minor; supporting
them is mostly useful for consistency and to support kubeadm.
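A minimal sketch of the shape of the fix, assuming kube-proxy registers the
component-base logging feature gates in its default feature gate instance
(the exact call site is an assumption):

import (
	utilruntime "k8s.io/apimachinery/pkg/util/runtime"
	utilfeature "k8s.io/apiserver/pkg/util/feature"
	logsapi "k8s.io/component-base/logs/api/v1"
)

func init() {
	// Register the logging feature gates (e.g. ContextualLogging) so that
	// values propagated by kubeadm are recognized instead of rejected at
	// startup.
	utilruntime.Must(logsapi.AddFeatureGates(utilfeature.DefaultMutableFeatureGate))
}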
ContextForChannel uses a goroutine to transform a channel close into
a context cancel. However, this exposes a synchronization issue if
we want to unify the underlying implementation between the contextless
and context-based variants - a ConditionFunc that closes the channel
today expects that no subsequent conditions will be invoked
(we have a test in wait_test.go, TestUntilReturnsImmediately, that
verifies this expectation). We can't unify the implementation
without ensuring this property holds.
To do that, this commit changes from the goroutine propagation to
implementing context.Context directly, using stopCh as Done(). We
then implement Err() by returning context.Canceled and stub out the
other methods. Since our context cannot be explicitly cancelled
by users, we stop returning the cancelFn, and callers that need
that behavior must wrap the context as usual.
This should be invisible to clients - they would already observe
the same behavior from the context, and the existing error
behavior of Poll* is preserved (which ignores ctx.Err()).
As a side effect, one fewer goroutine is created, making it more
efficient.
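A minimal sketch of the approach, using a hypothetical channelContext type
that satisfies context.Context:

import (
	"context"
	"time"
)

// channelContext adapts a stop channel into a context.Context without
// spawning a goroutine: the channel itself backs Done(), and Err() reports
// context.Canceled once the channel has been closed.
type channelContext struct {
	stopCh <-chan struct{}
}

func (c channelContext) Done() <-chan struct{} { return c.stopCh }

func (c channelContext) Err() error {
	select {
	case <-c.stopCh:
		return context.Canceled
	default:
		return nil
	}
}

// Deadline and Value are stubs: there is no deadline and no values.
func (c channelContext) Deadline() (deadline time.Time, ok bool) { return }
func (c channelContext) Value(key interface{}) interface{}       { return nil }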
enforceRequirements will run preflight checks, including whether the user
is privileged or not. Because of this, the test will make different assertions
based on the user's UID. However, we don't have UIDs on Windows, so we're asserting
the wrong thing.
This fix addresses the issue.
`genCSRConfig.kubeadmConfig` can be nil if there is any error
from the config loading, so the field should only be accessed if
there was no error in the previous step.
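A minimal sketch of the guard, with hypothetical names for the loading step:

// Only dereference kubeadmConfig once the load step has succeeded; on the
// error path the field may be nil.
cfg, err := newGenCSRConfig(flags) // hypothetical constructor
if err != nil {
	return err
}
doSomethingWith(cfg.kubeadmConfig) // safe: err was nil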
Signed-off-by: Dave Chen <dave.chen@arm.com>
In the dual-stack case, iptables.NewDualStackProxier and
ipvs.NewDualStackProxier filtered the nodeport address values by IP
family before creating the single-stack proxiers. But in the
single-stack case, the kube-proxy startup code just passed the value
to the single-stack proxiers without validation, so they had to
re-check it themselves. Fix that.
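A minimal sketch of the kind of per-family filtering involved (the helper
name is hypothetical):

import "net"

// filterByFamily keeps only the CIDR strings matching the requested IP
// family, mirroring what the dual-stack constructors already did before
// handing values to the single-stack proxiers.
func filterByFamily(cidrs []string, wantIPv6 bool) []string {
	var out []string
	for _, s := range cidrs {
		ip, _, err := net.ParseCIDR(s)
		if err != nil {
			continue // skip unparseable entries
		}
		if (ip.To4() == nil) == wantIPv6 {
			out = append(out, s)
		}
	}
	return out
}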
Kube-proxy was checking that iptables supports both IPv4 and IPv6 and
falling back to single-stack if not. But it always fell back to the
primary IP family, regardless of which family iptables actually supported.
Fix it so that if the primary IP family isn't supported, kube-proxy bails
out entirely.
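A minimal sketch of the corrected decision, with a hypothetical support
check:

import "fmt"

// Refuse to start when iptables cannot handle the primary IP family, instead
// of silently falling back to it.
func validatePrimaryFamily(supported map[string]bool, primary string) error {
	if !supported[primary] {
		return fmt.Errorf("iptables does not support the primary IP family %q", primary)
	}
	return nil
}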
It was just saying the file copy failed with `exit status 1`,
with not much detail about what was going wrong.
Combining the stderr and stdout and showing that output makes it
easier for us to fix the problem.
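A minimal sketch of the change, assuming the copy is run via os/exec:

import (
	"fmt"
	"os/exec"
)

func copyFile(src, dst string) error {
	// CombinedOutput captures stderr along with stdout, so the returned error
	// carries the underlying reason instead of a bare `exit status 1`.
	out, err := exec.Command("cp", src, dst).CombinedOutput()
	if err != nil {
		return fmt.Errorf("failed to copy %q to %q: %v, output: %s", src, dst, err, out)
	}
	return nil
}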
Signed-off-by: Dave Chen <dave.chen@arm.com>
If we are dry-running, do not attempt to fetch the /version
resource and just return the stored FakeServerVersion,
which is done when constructing the dry-run client in
upgrade/common.go#getClient().
The problem here is that during upgrade the
dry-run client reactors are backed by a dynamic client
via NewClientBackedDryRunGetterFromKubeconfig() and
for GetActions there seems to be no analog to
Discovery().ServerVersion() for a dynamic client(?).
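A minimal sketch of the short-circuit, assuming the dry-run client exposes
client-go's fake discovery (the wiring shown is an assumption):

import (
	"k8s.io/apimachinery/pkg/version"
	fakediscovery "k8s.io/client-go/discovery/fake"
	clientset "k8s.io/client-go/kubernetes"
)

// serverVersion returns the stored FakeServerVersion during a dry run instead
// of fetching the /version resource from a server that is never contacted.
func serverVersion(client clientset.Interface, dryRun bool) (*version.Info, error) {
	if dryRun {
		fd := client.Discovery().(*fakediscovery.FakeDiscovery)
		return fd.FakedServerVersion, nil
	}
	return client.Discovery().ServerVersion()
}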
The kubeadm dry run client reactor code is flawed as it assumes
all invoked "get" verb actions can be cast to GetAction.
Apparently that is not the case when Discovery().ServerVersion()
and other discovery calls are made. In such cases the action
type is the bare ActionImpl.
Check whether an action can be cast to ActionImpl and construct a
GetAction from it. GetActionImpl only supersets ActionImpl with
a Name field (an empty string in this case).
Add unit test for Discovery().ServerVersion().
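A minimal sketch of the fallback, using the types from k8s.io/client-go/testing
(the surrounding reactor plumbing is assumed):

import (
	"fmt"

	clientgotesting "k8s.io/client-go/testing"
)

// asGetAction normalizes a "get" verb action: discovery calls surface a bare
// ActionImpl, which is wrapped into a GetActionImpl with an empty Name.
func asGetAction(action clientgotesting.Action) (clientgotesting.GetAction, error) {
	if get, ok := action.(clientgotesting.GetAction); ok {
		return get, nil
	}
	if impl, ok := action.(clientgotesting.ActionImpl); ok {
		return clientgotesting.GetActionImpl{ActionImpl: impl}, nil
	}
	return nil, fmt.Errorf("unexpected action type %T", action)
}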
There seems to be a bug where it's not possible to write to
/etc/kubernetes/tmp... at the time of backing up the old kubelet
config.yaml.
Also, this kubelet config backup only targets "upgrade node",
while it should also target "upgrade apply".
Revert the related changes until a fully working feature
is implemented.