Conceptually, snapshots have to be taken while the pod and thus the volume
exist. Snapshotting has an issue where flushing of data is not guaranteed while
the volume is still staged on the node, so the test relied on deleting the pod
and checking for the volume to be unused. That part of the test cannot be done
for ephemeral volumes.
This adds a new test pattern and uses it for the inline volume tests. Because
the kind of volume now varies more, validation of the mount or block device is
always done by the caller of TestEphemeral.
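A minimal sketch of that pattern in Go, using hypothetical hook names (the real e2e suite wires this differently):

```go
package ephemeral

import "fmt"

// Pod stands in for the e2e framework's pod handle.
type Pod struct {
	Name string
}

// EphemeralTest bundles the hooks the pattern needs; the field names
// are illustrative, not the upstream ones.
type EphemeralTest struct {
	CreatePod func() (*Pod, error)
	DeletePod func(*Pod)
	// CheckPod validates the mount or block device inside the running
	// pod. It is always supplied by the caller of TestEphemeral, since
	// the kind of volume varies between tests.
	CheckPod func(*Pod) error
}

// TestEphemeral creates the pod, hands validation to the caller's
// callback, and cleans up afterwards.
func (t EphemeralTest) TestEphemeral() error {
	pod, err := t.CreatePod()
	if err != nil {
		return fmt.Errorf("create pod: %w", err)
	}
	defer t.DeletePod(pod)
	return t.CheckPod(pod)
}
```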
The hostPath volume plugin creates a directory within /tmp on the host machine, to be mounted as a volume.
The inject-pod writes content to the volume, and a client-pod tries to read the contents and verify them.
When SELinux is enabled on the host, the client-pod cannot read the content and fails with permission denied.
Run the client-pod as privileged so that it can access the volume content even when SELinux is enabled on the host.
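A hedged sketch of the fix using client-go types; the pod name, image, and paths are illustrative. Privileged containers are not confined by SELinux, so the read of the injected content succeeds on SELinux-enabled hosts:

```go
package storage

import (
	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)

// clientPod builds the reader pod with a privileged security context.
func clientPod() *corev1.Pod {
	privileged := true
	return &corev1.Pod{
		ObjectMeta: metav1.ObjectMeta{Name: "client-pod"},
		Spec: corev1.PodSpec{
			Containers: []corev1.Container{{
				Name:    "client",
				Image:   "busybox",
				Command: []string{"cat", "/mnt/test/index.html"},
				// Privileged, so SELinux does not block the read.
				SecurityContext: &corev1.SecurityContext{
					Privileged: &privileged,
				},
				VolumeMounts: []corev1.VolumeMount{{
					Name:      "test-volume",
					MountPath: "/mnt/test",
				}},
			}},
			Volumes: []corev1.Volume{{
				Name: "test-volume",
				VolumeSource: corev1.VolumeSource{
					HostPath: &corev1.HostPathVolumeSource{
						Path: "/tmp/hostpath-test",
					},
				},
			}},
		},
	}
}
```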
Enable feature by default.
Update integration tests for other features to assume that finalizers are present.
Change-Id: Ie969344f572627dba882c0e862e5700dadaf3026
Besides "subPath should unmount if pod is gracefully deleted while kubelet is
down" we also need a special case for "subPath should unmount if pod is force
deleted while kubelet is down".
This fixes a test failure in https://testgrid.k8s.io/sig-storage-kubernetes#gce-serial
It shouldn't make any difference, but it's better to actually test that
assumption.
All existing tests which create pods get converted by skipping the explicit PVC
creation for the ephemeral case and instead modifying the test pod so that it
has a volume claim template with the same spec as the PVC.
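A sketch of that conversion, assuming a hypothetical helper name and volume name; the generic ephemeral volume API (`EphemeralVolumeSource`) is the real one from k8s.io/api/core/v1:

```go
package storage

import (
	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)

// withVolumeClaimTemplate takes the PVC the test would have created
// explicitly and embeds its spec in the pod as a generic ephemeral
// volume instead.
func withVolumeClaimTemplate(pod *corev1.Pod, pvc *corev1.PersistentVolumeClaim) {
	pod.Spec.Volumes = append(pod.Spec.Volumes, corev1.Volume{
		Name: "test-volume",
		VolumeSource: corev1.VolumeSource{
			Ephemeral: &corev1.EphemeralVolumeSource{
				VolumeClaimTemplate: &corev1.PersistentVolumeClaimTemplate{
					ObjectMeta: metav1.ObjectMeta{Labels: pvc.Labels},
					// The template carries the same spec as the PVC.
					Spec: pvc.Spec,
				},
			},
		},
	})
}
```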
The feature gate gets locked to "true", with the goal of removing it in
two releases.
All code now can assume that the feature is enabled. Tests for "feature
disabled" are no longer needed and get removed.
Some code wasn't using the new helper functions yet. That gets changed while
touching those lines.
Each e2e test knows whether it wants to restart a running kubelet or a
non-running kubelet. The vast majority of the time, we want to
restart a running kubelet (e.g. to change config or to check
some properties hold across kubelet crashes/restarts), but sometimes
we stop the kubelet, do some actions and only then restart.
To accommodate both use cases, we just expose the `running` boolean
flag to the e2e tests.
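A minimal sketch of the flag, assuming a systemd-managed unit named "kubelet"; the real helper discovers the service unit dynamically:

```go
package e2enode

import (
	"fmt"
	"os/exec"
)

// restartKubelet restarts a running kubelet (running=true) or starts a
// previously stopped one (running=false).
func restartKubelet(running bool) error {
	action := "start"
	if running {
		// Fail fast if the kubelet we expect to be running is not,
		// instead of papering over it with a plain start.
		if err := exec.Command("systemctl", "is-active", "--quiet", "kubelet").Run(); err != nil {
			return fmt.Errorf("expected kubelet to be running: %w", err)
		}
		action = "restart"
	}
	out, err := exec.Command("systemctl", action, "kubelet").CombinedOutput()
	if err != nil {
		return fmt.Errorf("%s kubelet failed: %w, output: %q", action, err, string(out))
	}
	return nil
}
```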
Having `restartKubelet` explicitly restart a running kubelet
helps us troubleshoot e2e failures in which the kubelet
was supposed to be running but was not; attempting a restart
in such cases only muddied the waters further, making the
troubleshooting and the eventual fix harder.
In the happy path, no expected change in behaviour.
Signed-off-by: Francesco Romani <fromani@redhat.com>
In the `restartKubelet` helper, we run the command via `exec.Command`,
whose output comes back as `[]byte`.
We logged that output as a raw value, which made output meant to be
human readable unnecessarily hard to read.
We fix this annoying behaviour by converting the output to a string
before logging it, making the outcome of the command obvious.
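A standalone sketch of the fix; the command and logger are illustrative, not the e2e framework's own:

```go
package main

import (
	"log"
	"os/exec"
)

func main() {
	out, err := exec.Command("systemctl", "status", "kubelet").CombinedOutput()
	if err != nil {
		log.Printf("command failed: %v", err)
	}
	// Logging `out` directly would print the []byte as a slice of
	// numbers; converting to string keeps the output human readable.
	log.Printf("output: %v", string(out))
}
```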
Signed-off-by: Francesco Romani <fromani@redhat.com>