* cmd/kube-proxy support contextual logging
Signed-off-by: Ziqi Zhao <zhaoziqi9146@gmail.com>
* use ktesting.NewTestContext(t) in unit test
Signed-off-by: Ziqi Zhao <zhaoziqi9146@gmail.com>
* use ktesting.NewTestContext(t) in unit test
Signed-off-by: Ziqi Zhao <zhaoziqi9146@gmail.com>
* remove unnecessary blank line & add cmd/kube-proxy to contextual section in logcheck.conf
Signed-off-by: Ziqi Zhao <zhaoziqi9146@gmail.com>
* add more contextual logging
Signed-off-by: Ziqi Zhao <zhaoziqi9146@gmail.com>
* new lint yaml
Signed-off-by: Ziqi Zhao <zhaoziqi9146@gmail.com>
---------
Signed-off-by: Ziqi Zhao <zhaoziqi9146@gmail.com>
If the user deletes the /var/lib/kubelet manually, "reset" will throw
an error that the dir is missing. Instead of handling this error,
print it as a warning and skip unmount of directories inside it.
This allows "reset" to continue to be reentrant and to be called
even if "init/join" have not been called yet and some of the
k8s directories on a node do not exist.
Continue to error on individual unmount errors.
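A minimal sketch of the described behavior; the helper name and warning text below are hypothetical, not kubeadm's actual code:

```go
package main

import (
	"fmt"
	"os"
)

// unmountKubeletDirectories sketches the described behavior: warn and
// skip when the kubelet run directory is missing, instead of erroring.
func unmountKubeletDirectories(kubeletRunDir string) error {
	if _, err := os.Stat(kubeletRunDir); os.IsNotExist(err) {
		// Directory deleted manually; reset stays reentrant and simply
		// has nothing to unmount here.
		fmt.Printf("[reset] WARNING: %q does not exist, skipping unmount of directories inside it\n", kubeletRunDir)
		return nil
	}
	// ... iterate over mounts under kubeletRunDir and unmount each one,
	// still returning an error on individual unmount failures.
	return nil
}

func main() {
	if err := unmountKubeletDirectories("/var/lib/kubelet"); err != nil {
		fmt.Println(err)
	}
}
```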
Remove the function absoluteKubeletRunDirectory() and
call filepath.EvalSymlinks() directly.
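For illustration, calling filepath.EvalSymlinks() directly looks roughly like this (the surrounding wrapper is not kubeadm's actual code):

```go
package main

import (
	"fmt"
	"path/filepath"
)

func main() {
	// Resolve the kubelet run directory in place; previously this lived
	// behind the absoluteKubeletRunDirectory() helper.
	kubeletRunDir, err := filepath.EvalSymlinks("/var/lib/kubelet")
	if err != nil {
		fmt.Printf("failed to resolve symlinks for /var/lib/kubelet: %v\n", err)
		return
	}
	fmt.Println("kubelet run directory:", kubeletRunDir)
}
```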
Add a new v1beta4.ResetConfiguration.UnmountFlags field that
can be used to pass in Linux unmount2() flags such as MNT_FORCE.
The default value continues to be 0, i.e. no flags.
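As a rough illustration of the flag semantics, assuming the field value is ultimately forwarded to Go's syscall.Unmount() (which wraps unmount2() on Linux); the wiring shown here is an assumption, not the actual kubeadm code:

```go
package main

import (
	"fmt"
	"syscall"
)

func main() {
	// 0 means "no flags" and matches the field's default; callers may
	// set e.g. MNT_FORCE to force the unmount.
	unmountFlags := syscall.MNT_FORCE

	target := "/var/lib/kubelet/pods/example"
	if err := syscall.Unmount(target, unmountFlags); err != nil {
		fmt.Printf("failed to unmount %q: %v\n", target, err)
	}
}
```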
Instead of printing warnings when syscall.Unmount() returns errors,
store all the errors in an aggregate. Abort the reset operation if
at least one unmount error was encountered.
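A sketch of the aggregation idea, assuming k8s.io/apimachinery's utilerrors package; the unmountAll helper name is hypothetical:

```go
package main

import (
	"fmt"
	"syscall"

	utilerrors "k8s.io/apimachinery/pkg/util/errors"
)

func unmountAll(targets []string, flags int) error {
	var errs []error
	for _, target := range targets {
		if err := syscall.Unmount(target, flags); err != nil {
			// Record the error instead of printing a warning and moving on.
			errs = append(errs, fmt.Errorf("failed to unmount %q: %w", target, err))
		}
	}
	// NewAggregate returns nil for an empty slice, so reset only aborts
	// when at least one unmount actually failed.
	return utilerrors.NewAggregate(errs)
}

func main() {
	if err := unmountAll([]string{"/var/lib/kubelet/pods/example"}, 0); err != nil {
		fmt.Println(err)
	}
}
```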
The function TryRunCommand() uses an exponential backoff,
which is good, but it's inconsistent with the standard polling used
elsewhere in kubeadm and only used in a couple of places.
Remove its usage in token.go#UpdateOrCreateTokens()
and switch to the standard function used in other places,
PollUntilContextTimeout().
Remove wait.go#TryRunCommand(), as there are no other usages.
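A hedged sketch of what the switch could look like, using wait.PollUntilContextTimeout() with illustrative intervals and a stand-in for the real token update call:

```go
package main

import (
	"context"
	"fmt"
	"time"

	"k8s.io/apimachinery/pkg/util/wait"
)

// tryCreateToken is a hypothetical stand-in for the real token update call.
func tryCreateToken(ctx context.Context) (bool, error) {
	return true, nil
}

func updateOrCreateTokensExample(ctx context.Context) error {
	// Poll every 5 seconds, give up after 1 minute; these intervals are
	// illustrative, not the values used by kubeadm.
	return wait.PollUntilContextTimeout(ctx, 5*time.Second, 1*time.Minute, true,
		func(ctx context.Context) (bool, error) {
			ok, err := tryCreateToken(ctx)
			if err != nil {
				// Treat the failure as transient and retry.
				return false, nil
			}
			return ok, nil
		})
}

func main() {
	if err := updateOrCreateTokensExample(context.Background()); err != nil {
		fmt.Println(err)
	}
}
```

This matches the fixed-interval polling pattern already used in other places, at the cost of giving up TryRunCommand()'s exponential backoff.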
The function wait.go#WaitForKubeletAndFunc() has been used in
a number of places in kubeadm. It starts a goroutine to wait for
the kubelet /healthz and in parallel starts another goroutine
to wait for a custom function.
This logic is problematic. If kubeadm is waiting for the kubelet
in parallel with something that requires the kubelet, the right
solution would be to first wait for the kubelet serially and only
then proceed with the other action. The parallelism here, particularly
during "init", required an unwanted "initial timeout" of 40s before
the kubelet waiting even starts. In most cases this means the kubelet
waiter never effectively starts, while the main point of waiting becomes
the "other action".
- Remove the function WaitForKubeletAndFunc() from the Waiter interface.
- Rename the function WaitForHealthyKubelet() to just WaitForKubelet()
to be consistent with the naming WaitForAPI().
- Update WaitForKubelet() to not use TryRunCommand() and instead
use PollUntilContextTimeout().
- Remove the "initial timeout" of 40s in WaitForKubelet().
- Make both WaitForKubelet() and WaitForAPI() use similar error
handling and output.
- Update all usage of WaitForKubelet() to be a serial call before
  any other action, such as another wait* call (see the sketch after this list).
- Make the default wait timeout for the kubelet
/healthz to be 1 minute (kubeadmconstants.DefaultKubeletTimeout).
- Apply updates to all implementations of the Waiter interface.
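A simplified sketch of the serialized flow after these changes; the trimmed-down Waiter interface and the fake implementation below are for illustration only, not kubeadm's actual types:

```go
package main

import (
	"fmt"
	"time"
)

// Waiter is a trimmed-down, hypothetical version of kubeadm's interface
// after the rename; only the two calls relevant here are shown.
type Waiter interface {
	WaitForKubelet() error
	WaitForAPI() error
}

// waitForNode shows the serialized flow: wait for the kubelet /healthz
// first, and only then wait for the API server; no parallel goroutines
// and no 40s initial timeout.
func waitForNode(w Waiter) error {
	if err := w.WaitForKubelet(); err != nil {
		return fmt.Errorf("kubelet never became healthy: %w", err)
	}
	return w.WaitForAPI()
}

// fakeWaiter is a stand-in implementation for the sketch.
type fakeWaiter struct{ kubeletTimeout time.Duration }

func (f fakeWaiter) WaitForKubelet() error { return nil }
func (f fakeWaiter) WaitForAPI() error     { return nil }

func main() {
	if err := waitForNode(fakeWaiter{kubeletTimeout: time.Minute}); err != nil {
		fmt.Println(err)
	}
}
```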
Explicitly show in the help message that `kubeadm upgrade plan` can only run
on the node where "admin.conf" exists; normally, this is the control
plane node.
Signed-off-by: Dave Chen <dave.chen@arm.com>
The notes printed to the user from common.go when
loadConfig fails are outdated and incorrect.
If the config cannot be loaded, the user should not be instructed
to re-upload the config with kubeadm commands. Instead, they
should do it manually with kubectl.
On loadConfig() error just wrap the error in a simple message
and show it to the user.
The current setup swallows IsNotFound errors for Node objects.
The underlying fetching of the init configuration uses
the Node object to construct an InitConfiguration for this
upgrade process, so if the Node is missing, the kube-config ConfigMap
will be reported as missing, which is incorrect.
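A sketch of the intended error handling, using apimachinery's apierrors.IsNotFound(); the helper name, message, and use of a fake clientset are illustrative, not the actual kubeadm code:

```go
package main

import (
	"context"
	"fmt"

	apierrors "k8s.io/apimachinery/pkg/api/errors"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/kubernetes/fake"
)

// checkNodeExists surfaces a missing Node explicitly rather than letting
// the failure be misreported later as a missing ConfigMap.
func checkNodeExists(ctx context.Context, client kubernetes.Interface, nodeName string) error {
	_, err := client.CoreV1().Nodes().Get(ctx, nodeName, metav1.GetOptions{})
	if apierrors.IsNotFound(err) {
		return fmt.Errorf("node %q was not found in the cluster: %w", nodeName, err)
	}
	return err
}

func main() {
	// A fake clientset with no Node objects triggers the IsNotFound path.
	client := fake.NewSimpleClientset()
	if err := checkNodeExists(context.Background(), client, "missing-node"); err != nil {
		fmt.Println(err)
	}
}
```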