The assumption so far was that all drivers support read/write
volumes. That is not necessarily true, so we have to let the test
driver specify it and then test accordingly.
Another aspect that is worth testing is whether the driver correctly
creates a new volume for each pod even if the volume attributes are
the same. However, drivers are not required to do that, so again we
have to let the test driver specify that.
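As a sketch of the idea (the type and constant names below are
illustrative, not the actual framework API), the test driver could
advertise such optional behaviours through a capability map that the
tests consult before asserting anything:

package testsuites

// Capability names an optional behaviour a CSI test driver may support.
type Capability string

const (
	// CapReadWrite: the driver supports read/write volumes.
	CapReadWrite Capability = "readWrite"
	// CapSingleVolumePerPod: the driver provisions a distinct volume per
	// pod even when the volume attributes are identical.
	CapSingleVolumePerPod Capability = "singleVolumePerPod"
)

// Capabilities maps each optional behaviour to whether the driver supports it.
type Capabilities map[Capability]bool

// skipUnless lets a test bail out early for drivers that do not claim the
// required capability.
func skipUnless(caps Capabilities, c Capability, skip func(format string, args ...interface{})) {
	if !caps[c] {
		skip("driver does not support %q", c)
	}
}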
After deleting a pod, we need to be sure that it really is gone,
otherwise there is a race condition: if we remove the CSI driver that
is responsible for the volume used by the pod before the pod is
actually deleted, deleting the pod will fail.
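A minimal sketch of that wait, using plain client-go (the e2e framework
has its own pod-deletion helpers; this only shows the idea, and the
context argument assumes a recent client-go):

package e2e

import (
	"context"
	"time"

	apierrors "k8s.io/apimachinery/pkg/api/errors"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/apimachinery/pkg/util/wait"
	"k8s.io/client-go/kubernetes"
)

// waitForPodGone polls until the pod object has actually disappeared, so
// that the CSI driver is not torn down while the pod still exists.
func waitForPodGone(cs kubernetes.Interface, ns, name string, timeout time.Duration) error {
	return wait.PollImmediate(2*time.Second, timeout, func() (bool, error) {
		_, err := cs.CoreV1().Pods(ns).Get(context.TODO(), name, metav1.GetOptions{})
		if apierrors.IsNotFound(err) {
			return true, nil // really gone
		}
		if err != nil {
			return false, err // unexpected error, give up
		}
		return false, nil // still being deleted, keep polling
	})
}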
Once we have deleted the pod and the volume, we want to be sure that
NodeUnpublishVolume was called for it. The main motivation was to
check this for inline ephemeral volumes, but the same additional check
also makes sense for other volumes.
We need the 1.2.0 driver for that because it supports detecting the
volume mode dynamically, and we need to deploy a
CSIDriver object which enables pod info (for the dynamic detection)
and both modes (to satisfy the new mode sanity check).
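For illustration, the CSIDriver object described here would look
roughly like this when built with the storage.k8s.io/v1beta1 Go types
(the driver name matches the hostpath driver; treat the exact field set
as an assumption about the API of that timeframe):

package main

import (
	storagev1beta1 "k8s.io/api/storage/v1beta1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)

func hostpathCSIDriver() *storagev1beta1.CSIDriver {
	podInfo := true
	return &storagev1beta1.CSIDriver{
		ObjectMeta: metav1.ObjectMeta{Name: "hostpath.csi.k8s.io"},
		Spec: storagev1beta1.CSIDriverSpec{
			// Pod info on mount enables the dynamic mode detection.
			PodInfoOnMount: &podInfo,
			// Both modes are listed to satisfy the new mode sanity check.
			VolumeLifecycleModes: []storagev1beta1.VolumeLifecycleMode{
				storagev1beta1.VolumeLifecyclePersistent,
				storagev1beta1.VolumeLifecycleEphemeral,
			},
		},
	}
}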
This ensures that the files are in sync with:
hostpath: v1.2.0-rc3
external-attacher: v2.0.1
external-provisioner: v1.3.0
external-resizer: v0.2.0
external-snapshotter: v1.2.0
driver-registrar/rbac.yaml is obsolete because only
node-driver-registrar is in use now, and it does not need RBAC rules.
mock/e2e-test-rbac.yaml was not used anywhere.
The README.md files were updated to indicate that these really are
files copied from elsewhere. To avoid the need to constantly edit
these files on each update, <version> is used as a placeholder in the URLs.
The feature is complete and supported by an increasing number of CSI
drivers, but before it can really be used, it should be moved out of
alpha into beta.
Moving pod-related functions from e2e/framework/pv_util.go to
e2e/framework/pod in order to allow refactoring of pv_util.go into its
own package.
Signed-off-by: alejandrox1 <alarcj137@gmail.com>
It turns out that the framework.TestContext.IPFamily variable is
not available for the DNS tests if they don't run in the initial
Ginkgo node when running in parallel.
This adds a function to the framework that allows us to run a command
only once per parallel Ginkgo node. It also adds a method to detect
whether the cluster is IPv6.
Using the framework.TestContext.IPFamily variable guarantees
consistency across the whole test run, because the variable is only
assigned once, at the beginning of the run.
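A rough sketch of that detection (the function name and the lookup via
the "kubernetes" service ClusterIP are illustrative, and the context
argument assumes a recent client-go; the real code stores the result in
framework.TestContext.IPFamily):

package framework

import (
	"context"
	"strings"
	"sync"

	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
)

var (
	ipFamilyOnce sync.Once
	ipFamily     = "ipv4"
)

// detectIPFamily performs the lookup only once per parallel Ginkgo node
// and caches the result for every later call in the same process.
func detectIPFamily(cs kubernetes.Interface) string {
	ipFamilyOnce.Do(func() {
		svc, err := cs.CoreV1().Services("default").Get(context.TODO(), "kubernetes", metav1.GetOptions{})
		if err == nil && strings.Contains(svc.Spec.ClusterIP, ":") {
			ipFamily = "ipv6"
		}
	})
	return ipFamily
}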
Source code paths reported during //test/e2e/framework/log:go_default_test
in the Kubernetes CI start with relative paths. To avoid overly broad
regex matching, the paths that occur in practice are listed explicitly
as alternatives to the leading slash.
All failures are worth logging immediately, not just unexpected
errors. That helps with understanding tests that have long-running
cleanup operations with their own logging, because the failure becomes
visible inside the test output.
The logging in framework.ExpectNoError was also rather poor: it only
showed the error, not the additional information about it.
Test suites should now use log.Fail as the Gomega failure handler
instead of ginkgowrapper.Fail. log.Fail handles the logging for all
failures before proceeding to record the failure in Ginkgo.
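The wiring in a suite is then a one-liner (a minimal sketch, assuming
the suite imports the framework's log package):

package e2e

import (
	"github.com/onsi/gomega"

	e2elog "k8s.io/kubernetes/test/e2e/framework/log"
)

func init() {
	// Every Gomega assertion failure now goes through log.Fail, which
	// logs the failure before recording it in Ginkgo.
	gomega.RegisterFailHandler(e2elog.Fail)
}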
Because logging now also happens after a test failure, additional
failures during cleanup become visible; Ginkgo itself just ignores
them.
This test runs a fake Ginkgo suite with various errors and checks how
logger.go and ginkgowrapper.go handle them. Right now, the outcome is
sub-optimal:
- some test failures (those that use framework.Failf or
framework.ExpectNoError) are logged immediately, while Gomega failures
are not
- framework.ExpectNoError logs just the error, which is often useless
without the additional explanation
- failures that occur after some others are not reported at all;
this can happen in cleanup code, and while that code should be
written so that it continues instead of failing on an assertion,
in practice quite a lot of code fails, and when it does, the output
would be useful to have
- the full stack trace is odd and doesn't start with the expected
function name
Example output:
• Failure [0.002 seconds]
log
/nvme/gopath/src/k8s.io/kubernetes/test/e2e/framework/log/logger_test.go:35
fails [It]
/nvme/gopath/src/k8s.io/kubernetes/test/e2e/framework/log/logger_test.go:39
Jul 17 12:00:52.545: I'm failing.
/nvme/gopath/src/k8s.io/kubernetes/test/e2e/framework/log/logger_test.go:41
Full Stack Trace
/nvme/gopath/src/k8s.io/kubernetes/test/e2e/framework/log/logger.go:51 +0x143
k8s.io/kubernetes/test/e2e/framework/log.Failf(...)
/nvme/gopath/src/k8s.io/kubernetes/test/e2e/framework/log/logger.go:43
k8s.io/kubernetes/test/e2e/framework/log_test.glob..func1.2.1(...)
/nvme/gopath/src/k8s.io/kubernetes/test/e2e/framework/log/logger_test.go:41
k8s.io/kubernetes/test/e2e/framework/log_test.glob..func1.2()
/nvme/gopath/src/k8s.io/kubernetes/test/e2e/framework/log/logger_test.go:42 +0x52
k8s.io/kubernetes/vendor/github.com/onsi/ginkgo/internal/leafnodes.(*runner).runSync(0xc00029b020, 0x0, 0x0, 0x0, 0x0, 0x0, 0x0, 0x0, 0x0, 0x0, ...)
/nvme/gopath/src/k8s.io/kubernetes/test/e2e/framework/log/logger_test.go:65 +0x1c8
testing.tRunner(0xc000358600, 0x19818c0)
/nvme/gopath/go/src/testing/testing.go:865 +0xc0
created by testing.(*T).Run
/nvme/gopath/go/src/testing/testing.go:916 +0x35a
------------------------------
Jul 17 12:00:52.545: INFO: before
Jul 17 12:00:52.545: INFO: I'm failing.
Jul 17 12:00:52.547: INFO: after
• Failure [0.002 seconds]
log
/nvme/gopath/src/k8s.io/kubernetes/test/e2e/framework/log/logger_test.go:35
asserts [It]
/nvme/gopath/src/k8s.io/kubernetes/test/e2e/framework/log/logger_test.go:44
false is never true
Expected
<bool>: false
to equal
<bool>: true
/nvme/gopath/src/k8s.io/kubernetes/test/e2e/framework/log/logger_test.go:45
Full Stack Trace
/nvme/gopath/src/k8s.io/kubernetes/vendor/github.com/onsi/gomega/internal/assertion/assertion.go:75 +0x1f1
k8s.io/kubernetes/vendor/github.com/onsi/gomega/internal/assertion.(*Assertion).To(0xc00035f6c0, 0x1b42140, 0xc000350dd0, 0xc000350de0, 0x1, 0x1, 0x42e35f)
/nvme/gopath/src/k8s.io/kubernetes/vendor/github.com/onsi/gomega/internal/assertion/assertion.go:38 +0xc7
k8s.io/kubernetes/test/e2e/framework/log_test.glob..func1.3()
/nvme/gopath/src/k8s.io/kubernetes/test/e2e/framework/log/logger_test.go:45 +0x17e
k8s.io/kubernetes/vendor/github.com/onsi/ginkgo/internal/leafnodes.(*runner).runSync(0xc00029b0e0, 0x0, 0x0, 0x0, 0x0, 0x0, 0x0, 0x0, 0x0, 0x0, ...)
/nvme/gopath/src/k8s.io/kubernetes/test/e2e/framework/log/logger_test.go:65 +0x1c8
testing.tRunner(0xc000358600, 0x19818c0)
/nvme/gopath/go/src/testing/testing.go:865 +0xc0
created by testing.(*T).Run
/nvme/gopath/go/src/testing/testing.go:916 +0x35a
------------------------------
Jul 17 12:00:52.548: INFO: before
Jul 17 12:00:52.549: INFO: after
• Failure [0.002 seconds]
log
/nvme/gopath/src/k8s.io/kubernetes/test/e2e/framework/log/logger_test.go:35
error [It]
/nvme/gopath/src/k8s.io/kubernetes/test/e2e/framework/log/logger_test.go:47
hard-coded error
Unexpected error:
<*errors.errorString | 0xc000351930>: {
s: "I'm an error, nice to meet to.",
}
I'm an error, nice to meet to.
occurred
/nvme/gopath/src/k8s.io/kubernetes/test/e2e/framework/log/logger_test.go:49
Full Stack Trace
/nvme/gopath/src/k8s.io/kubernetes/test/e2e/framework/util.go:1376 +0x191
k8s.io/kubernetes/test/e2e/framework.ExpectNoError(...)
/nvme/gopath/src/k8s.io/kubernetes/test/e2e/framework/util.go:1367
k8s.io/kubernetes/test/e2e/framework/log_test.glob..func1.4()
/nvme/gopath/src/k8s.io/kubernetes/test/e2e/framework/log/logger_test.go:49 +0xc9
k8s.io/kubernetes/vendor/github.com/onsi/ginkgo/internal/leafnodes.(*runner).runSync(0xc00029b200, 0x0, 0x0, 0x0, 0x0, 0x0, 0x0, 0x0, 0x0, 0x0, ...)
/nvme/gopath/src/k8s.io/kubernetes/test/e2e/framework/log/logger_test.go:65 +0x1c8
testing.tRunner(0xc000358600, 0x19818c0)
/nvme/gopath/go/src/testing/testing.go:865 +0xc0
created by testing.(*T).Run
/nvme/gopath/go/src/testing/testing.go:916 +0x35a
------------------------------
Jul 17 12:00:52.550: INFO: before
Jul 17 12:00:52.550: INFO: Unexpected error occurred: I'm an error, nice to meet to.
Jul 17 12:00:52.551: INFO: after
This makes subpackages of the e2e test framework use the log functions
of the core framework instead, to avoid circular dependencies.
NOTE: the pod subpackage would create a circular dependency if it were
changed here, so we need to take care of it in another PR.
PrintPerfData is called from the e2e node tests and depends on e2elog
and e2emetrics. This moves the function to the place that calls it,
removing unnecessary dependencies from the e2e node subpackage.
Currently, Kubernetes supports running containers as a different user
(RunAsUser), but only via UIDs, which does not work on Windows.
That is why the field SecurityContext.WindowsOptions.RunAsUserName
was introduced: it allows us to run the container entrypoints as a
user other than the image's default one.
This commit adds E2E tests that validate this behaviour. The tests are
Windows-only, and they will be skipped if --node-os-distro is not
"windows".
This makes subpackages of the e2e test framework use the log functions
of the core framework instead, to avoid circular dependencies.
NOTE: test/e2e/framework/kubelet, test/e2e/framework/metrics and
test/e2e/framework/node would create circular dependencies if they
were updated here. Those need to be resolved in advance of this work.
Promotes the VolumePVCDataSource feature (cloning) to beta for the 1.16
release.
Since the alpha release in 1.15 there have been a number of minor bug
fixes in the CSI Hostpath Provisioner and the CSI provisioner sidecar.
We've also added e2e tests using the Hostpath provisioner.
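For reference, cloning is requested through the PVC dataSource field; a
minimal sketch with the core/v1 Go types of that timeframe (names,
storage class and size are illustrative):

package main

import (
	corev1 "k8s.io/api/core/v1"
	"k8s.io/apimachinery/pkg/api/resource"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)

func clonedPVC() *corev1.PersistentVolumeClaim {
	sc := "csi-hostpath-sc"
	return &corev1.PersistentVolumeClaim{
		ObjectMeta: metav1.ObjectMeta{Name: "pvc-clone"},
		Spec: corev1.PersistentVolumeClaimSpec{
			StorageClassName: &sc,
			AccessModes:      []corev1.PersistentVolumeAccessMode{corev1.ReadWriteOnce},
			// DataSource points at an existing PVC in the same namespace;
			// the CSI provisioner creates the new volume as a clone of it.
			DataSource: &corev1.TypedLocalObjectReference{
				Kind: "PersistentVolumeClaim",
				Name: "pvc-source",
			},
			Resources: corev1.ResourceRequirements{
				Requests: corev1.ResourceList{
					corev1.ResourceStorage: resource.MustParse("1Gi"),
				},
			},
		},
	}
}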
If requesting a substantial amount of huge page memory
(as the tests do), it is recommended to do so as early
after boot as possible, or via the kernel command line.
The e2e test framework provides several helper functions, and using
them makes the test code easier to read. This replaces gomega calls
with these functions under test/e2e/node/, as in the fragment below.
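An illustrative before/after fragment (framework.ExpectNoError is an
existing helper; framework.ExpectEqual and the surrounding variables
are assumed here):

// Before: raw gomega assertions.
gomega.Expect(err).NotTo(gomega.HaveOccurred())
gomega.Expect(result.Code).To(gomega.Equal(0))

// After: the framework helpers, which read like the rest of the e2e code.
framework.ExpectNoError(err)
framework.ExpectEqual(result.Code, 0)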
Skips IPv6 tests on Windows.
Skips sysctl tests on Windows.
Skips network policy tests on Windows.
Skips RunAsUser / FSGroup / file permissions related tests, as those are
not supported on Windows.
Skips the test "should preserve source pod IP for traffic thru service cluster IP"
on Windows, as it creates a Pod with HostNetwork=true, which is unsupported.
What works and what doesn't work on Windows has been documented here:
https://github.com/kubernetes/community/blob/master/contributors/devel/sig-architecture/conformance-tests.md#windows--linux-considerations
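The skips follow the usual framework pattern (a sketch; the helper
names exist in the framework, the messages are illustrative):

// Linux-only tests skip when the target nodes run Windows.
if framework.NodeOSDistroIs("windows") {
	framework.Skipf("not supported on Windows")
}

// Conversely, Windows-only tests (such as the RunAsUserName ones) skip
// everywhere else.
framework.SkipUnlessNodeOSDistroIs("windows")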
Add a Feature:HPA tag to these tests so they're not picked up by
the release-blocking job that focuses on [Serial] tests (but
excludes [Feature:.*] tests).
They take a combined 70 minutes on average. If they really need
to be in release-blocking as implemented, we should consider a
separate job to focus just on this feature.
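For illustration, the tag simply becomes part of the spec description,
so jobs that exclude [Feature:.*] drop these specs (the describe text
below is a sketch, not the exact wording):

var _ = SIGDescribe("[Feature:HPA] Horizontal pod autoscaling (scale resource: CPU) [Serial] [Slow]", func() {
	// ... existing autoscaling specs, unchanged ...
})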
Add a Feature:RegularResourceUsageTracking tag to these tests so
they're not picked up by the release-blocking job that focuses
on [Serial] tests (but excludes [Feature:.*] tests).
They take a combined 65 minutes on average. If they really need
to be in release-blocking as implemented, we should consider a
separate job to focus just on this feature.
HandleFlags() was used by the e2e package, but it lived in the core e2e
framework and depended on the framework's "config" subpackage. That was
an invalid dependency, so this moves HandleFlags() to the e2e package
to keep the dependencies simple.
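A sketch of the result, assuming the current registration helpers and
signatures (RegisterCommonFlags, RegisterClusterFlags, config.CopyFlags):

package e2e // test/e2e, next to the only caller

import (
	"flag"
	"os"
	"testing"

	"k8s.io/kubernetes/test/e2e/framework"
	"k8s.io/kubernetes/test/e2e/framework/config"
)

// handleFlags registers all e2e flags and parses the command line; because
// it lives in test/e2e now, the core framework no longer has to import the
// config subpackage.
func handleFlags() {
	config.CopyFlags(config.Flags, flag.CommandLine)
	framework.RegisterCommonFlags(flag.CommandLine)
	framework.RegisterClusterFlags(flag.CommandLine)
	flag.Parse()
}

func TestMain(m *testing.M) {
	handleFlags()
	os.Exit(m.Run())
}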