e2e test validates the following 3 endpoints:
- listCoreV1LimitRangeForAllNamespaces
- patchCoreV1NamespacedLimitRange
- deleteCoreV1CollectionNamespacedLimitRange
* Update URL string to have only one slash
Signed-off-by: Akanksha Kumari <akankshakumari393@gmail.com>
* Trim trailing / from the hostname
Signed-off-by: Akanksha Kumari <akankshakumari393@gmail.com>
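A minimal Go sketch of the idea (variable names are illustrative, not from
the actual change):

    // Trimming the trailing slash from the hostname and the leading
    // slash from the path guarantees exactly one slash at the join.
    base := strings.TrimRight(hostname, "/")
    url := base + "/" + strings.TrimLeft(path, "/")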
Considering we have removed the glusterfs driver from the
repo, we no longer need the test server for CI. This commit
removes it.
Ref# https://github.com/kubernetes/kubernetes/pull/112015
Signed-off-by: Humble Chirammal <hchiramm@redhat.com>
- Moves kms proto APIs to the staging repo
- Updates generate and verify kms proto scripts to check staging repo
Signed-off-by: Anish Ramasekar <anish.ramasekar@gmail.com>
* Add e2e tests for events command
* Run events tests as normal e2e instead of conformance
Conformance tests are only for GA features. Since `kubectl events`
is currently in alpha stage, e2e tests for this command should be
run as standard e2e.
The kubectl delete -f commands in stop-kubemark.sh may get stuck if the
cluster setup is removed concurrently. It is not important for these
commands to succeed, so a timeout of 5m is set.
Since we have upgraded the snapshot controller to v6, the
snapshot tests appear to be failing in the testgrid. This is mainly
because the latest version of the snapshot controller stopped serving
v1beta1 APIs. The sidecar image versions in the tests also have to be
updated to make sure they are compatible.
This commit adds the missing RBAC rules for the controller as per the
latest version.
Signed-off-by: Humble Chirammal <hchiramm@redhat.com>
When waiting for the default service account in a new namespace, not finding
one was reported as "unexpected error: timed out waiting for the condition"...
etcd fully supports only linux and amd64; the other architectures
and operating systems are only guaranteed to build, see:
https://etcd.io/docs/v3.5/op-guide/supported-platform/#support-tiers
Skip the tests that use etcd on poorly supported environments to
guarantee the stability of the tests.
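A minimal sketch of such a guard (the suite's actual skip helper may
differ):

    // Skip etcd-dependent tests on platforms outside etcd's top
    // support tier.
    if runtime.GOOS != "linux" || runtime.GOARCH != "amd64" {
        t.Skipf("etcd is only fully supported on linux/amd64, got %s/%s",
            runtime.GOOS, runtime.GOARCH)
    }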
Getting a message about "pod ran to completion" is confusing when the pod
hasn't been able to start at all. The failed state now has a different message.
To address the previous ambiguity, the success state is described as "ran to
completion successfully".
When Ginkgo shows a BeforeEach/AfterEach/DeferCleanup, it can only show
the source code where the callback was registered because there is no
description parameter. This can be improved by passing a custom CodeLocation.
Because a description like "set up framework" might not be enough, the source
code is still shown, too.
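A minimal sketch, assuming a hypothetical helper that registers setup code
on behalf of its caller; types.NewCodeLocation(1) makes Ginkgo attribute
the node to the caller's source line:

    import (
        "github.com/onsi/ginkgo/v2"
        "github.com/onsi/ginkgo/v2/types"
    )

    // registerSetup is a hypothetical helper; the CodeLocation
    // decorator shifts the reported source location up one caller.
    func registerSetup(body func()) {
        ginkgo.BeforeEach(body, types.NewCodeLocation(1))
    }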
This is useful for running a driver on a subset of all ready nodes:
- use e2enode.GetBoundedReadySchedulableNodes with a suitable
maximum number of nodes to determine how many nodes are available
for a test
- define pod anti-affinity in the PodTemplate:
  affinity:
    podAntiAffinity:
      requiredDuringSchedulingIgnoredDuringExecution:
      - labelSelector:
          matchLabels:
            app.kubernetes.io/instance: xxxxxxx
        topologyKey: kubernetes.io/hostname
- set the ReplicaSetSpec.Replicas value to the number of nodes, as in the sketch below
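A minimal sketch of the first and last steps (cs, rs, and maxNodes are
assumed to exist in the test; e2enode is
k8s.io/kubernetes/test/e2e/framework/node):

    // Cap the node count, then size the ReplicaSet to one pod per
    // node; the anti-affinity above spreads the pods across hosts.
    nodes, err := e2enode.GetBoundedReadySchedulableNodes(cs, maxNodes)
    framework.ExpectNoError(err)
    replicas := int32(len(nodes.Items))
    rs.Spec.Replicas = &replicas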
If the control plane emits anything at the time when the test runs, for example
"unable to sync kubernetes service", the test breaks because that additional
output is unexpected.
Pulling the CreateKubeConfig function from the expensive-to-build
test/utils/apiserver package had a considerable impact on the overall build
time because that package depends on a lot of other packages.
Because only that one function is needed by the framework, that extra build
time can be avoided by moving it into its own package.
The framework.AddCleanupAction API was a workaround for Ginkgo v1 not invoking
AfterEach callbacks after a test failure. Ginkgo v2 not only fixed that, but
also added a DeferCleanup API which can be used to run some code if (and only
if!) the corresponding setup code ran. In several cases that makes the test
cleanup simpler.
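A minimal sketch (the resource helpers are hypothetical): the cleanup is
registered only after the setup actually ran, so it never fires for
resources that were never created:

    ginkgo.BeforeEach(func() {
        res, err := createTestResource() // hypothetical setup helper
        framework.ExpectNoError(err)
        // Registered only if we get here, i.e. only if setup ran:
        ginkgo.DeferCleanup(func() {
            deleteTestResource(res) // hypothetical cleanup helper
        })
    })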
This covers multiple facets of the current framework and of Ginkgo:
- Ginkgo output is verbose and includes detailed progress
messages (BeforeEach/AfterEach tracing).
- Namespace creation.
- Order of callback invocation.
This runs etcd and an apiserver using it inside the test process. The caller
can either use the ClientSet or the config file. More options might get added
in the future.
Co-authored-by: Antonio Ojea <antonio.ojea.garcia@gmail.com>
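A hypothetical usage sketch (the names are illustrative and may not match
the real helper API):

    // Start etcd and an apiserver inside the test process, then talk
    // to the apiserver via the clientset or the written config file.
    server := StartAPITestServer(t) // hypothetical entry point
    defer server.TearDown()         // hypothetical teardown
    client := server.ClientSet
    _ = client // or build a client from server.KubeConfigPath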
When using By or some other Ginkgo output functions, Ginkgo v2 now adds a
timestamp at the end of the line that we need to ignore. This will become
relevant when testing more complete output.
For cleanup purposes, ginkgo.DeferCleanup is a better replacement for
f.AddAfterEach:
- the cleanup only gets executed when the corresponding setup code ran
and can use the same local variables
- the callback runs after the test and before the framework
deletes namespaces (as before)
- if one callback fails, the others still get executed
For the original purpose (https://github.com/kubernetes/kubernetes/pull/86177 "This is
very useful for custom gathering scripts.") it is now possible to use
ginkgo.AfterEach because it will always get executed. Just beware that its
callbacks run in first-in-first-out order.
In contrast to ginkgo.AfterEach, ginkgo.DeferCleanup runs the callback in
first-in-last-out order. Using it makes the following test code work as
expected:
    f := framework.NewDefaultFramework("some test")
    ginkgo.AfterEach(func() {
        // do something with f.ClientSet
    })
Previously, f.ClientSet was already set to nil by the framework's cleanup code.
After updating gRPC in node-driver-registrar from v1.40.0 to v1.47.0, the
behavior of gRPC changed in a way such that it no longer detected the
single-sided closing of the stream as a loss of connection. This caused gRPC in
the e2e.test to get stuck, possibly in a Read or Write for the HTTP stream
because those have neither a context nor a timeout.
Changing the connection handling so that all active connections are tracked in
the listener and closed when the listener gets closed fixed this problem.
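A minimal sketch of that pattern, independent of the actual proxy code
(only net and sync from the standard library):

    // trackingListener remembers every accepted connection and closes
    // them all when the listener itself is closed, unblocking any
    // goroutine stuck in a Read or Write on those connections.
    type trackingListener struct {
        net.Listener
        mu    sync.Mutex
        conns map[net.Conn]struct{}
    }

    func (l *trackingListener) Accept() (net.Conn, error) {
        c, err := l.Listener.Accept()
        if err != nil {
            return nil, err
        }
        l.mu.Lock()
        if l.conns == nil {
            l.conns = make(map[net.Conn]struct{})
        }
        l.conns[c] = struct{}{}
        l.mu.Unlock()
        return c, nil
    }

    func (l *trackingListener) Close() error {
        err := l.Listener.Close()
        l.mu.Lock()
        for c := range l.conns {
            c.Close()
        }
        l.conns = nil
        l.mu.Unlock()
        return err
    }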
Some scripts and tools still relied on the deprecated flags that are about
to be removed.
This is intentionally not a complete removal of all those flags in the entire
repo; that would lead to much more code churn, also in places where commands
still accept the flags because they use klog directly.
The custom progress reporter gets invoked via ginkgo.ReportAfterEach after each
test. The problem was that the e2e framework unconditionally enables Ginkgo's
-progress output which shows execution of all nodes, including this
ReportAfterEach. The effect was over 1000 lines of useless output at the start
of a test run while skipping disabled tests.
The solution is to tell Ginkgo that the ReportAfterEach isn't meant to be
reported.
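A sketch of that mechanism, assuming Ginkgo v2's SuppressProgressReporting
decorator (progressReporter is hypothetical):

    ginkgo.ReportAfterEach(func(report ginkgo.SpecReport) {
        progressReporter.ProcessSpecReport(report) // hypothetical
    }, ginkgo.SuppressProgressReporting)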
This change updates TestAggregatedAPIServer and the related test
server wiring to exercise the full network path between the Kube API
server and the aggregated API server. We now assert that the wardle
API service and Kube API server discovery endpoints are fully healthy.
CRUD operations are performed through the Kube API server to the
wardle API server.
Signed-off-by: Monis Khan <mok@microsoft.com>
Contextual logging cannot be enabled manually because there is no feature gate
flag. Enabling the feature unconditionally:
- should be low risk for E2E testing
- if it fails, we want to know
- is useful to get better log output from code which already supports it
We don't want klog to print to anything other than GinkgoWriter, but it still
used os.Stderr in addition to GinkgoWriter when printing log entries with
severity >= error. Changing "stderrthreshold" fixes that.
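A minimal sketch of that configuration; the exact flag values are an
assumption:

    import (
        "flag"

        "github.com/onsi/ginkgo/v2"
        "k8s.io/klog/v2"
    )

    func redirectKlogToGinkgo() {
        klog.SetOutput(ginkgo.GinkgoWriter)
        var fs flag.FlagSet
        klog.InitFlags(&fs)
        // Assumption: raising the threshold keeps severity >= error
        // entries from also being written to os.Stderr.
        _ = fs.Set("logtostderr", "false")
        _ = fs.Set("alsologtostderr", "false")
        _ = fs.Set("stderrthreshold", "fatal")
    }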
The unit test for framework output handling didn't test klog behavior. Now it
does:
- os.Stderr is redirected, should be empty
- a new test invokes klog
The WaitFor* refactoring in 07c34eb400 had an oversight in which timeout
parameter is used for calling WaitForAllPodsCondition() in
WaitForPodsWithLabelRunningReady(), so the calls to
WaitForPodsWithLabelRunningReady() ended up ignoring the user-provided
timeout. Fix that.
Signed-off-by: Mikko Ylinen <mikko.ylinen@intel.com>
Followup on https://github.com/kubernetes/kubernetes/pull/111846. This
particular test was left out from that PR because once it was enabled it
started failing. It was desired to merge
https://github.com/kubernetes/kubernetes/pull/111846 irrespective of
this particular test.
The failure in the test was caused by the
`createFSGroupRequestPreHook` mock CSI driver hook function assuming
that the request object passed to it is an instance of the respective
struct, while it is actually a pointer. This resulted in the hook
function not fulfilling its purpose, and so the test failed.
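A minimal sketch of the bug class (the concrete request type here is
illustrative):

    import "github.com/container-storage-interface/spec/lib/go/csi"

    // gRPC delivers requests as pointers, so asserting on the value
    // type never matches and the hook silently does nothing.
    func preHook(request interface{}) {
        // Wrong: request.(csi.NodeStageVolumeRequest) never succeeds.
        // Right:
        if req, ok := request.(*csi.NodeStageVolumeRequest); ok {
            _ = req // inspect or mutate the request here
        }
    }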
Fixes instances of #98213 (to ultimately complete #98213, linting is
required).
This commit fixes a few instances of a common mistake done when writing
parallel subtests or Ginkgo tests (basically any test in which the test
closure is dynamically created in a loop and the loop doesn't wait for
the test closure to complete).
I'm developing a very specific linter that detects this kind of mistake,
and these are the only violations it found in this repo (it's not
airtight so there may be more).
In the case of Ginkgo tests, without this fix, only the last entry in
the loop iteratee is actually tested. In the case of parallel tests I
think it's the same problem, but maybe a bit different; if I understand
correctly, it depends on the execution speed.
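A minimal sketch of the mistake and its fix in a parallel subtest:

    for _, tc := range testCases {
        tc := tc // the fix: capture the loop variable per iteration
        t.Run(tc.name, func(t *testing.T) {
            t.Parallel()
            // Without the capture above, every parallel subtest would
            // read the shared loop variable, which holds only the
            // last entry by the time the closures run.
            runCase(t, tc)
        })
    }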
Waiting for the CI to confirm the tests are still passing even after
this fix; since it's likely the first time those test cases are actually
executed, they may be buggy or may be testing code that is buggy.
Another instance of this is in `test/e2e/storage/csi_mock_volume.go`;
it is still failing, so it has been left out of this commit and will be
addressed in a separate one.
The main purpose of this change is to update the e2e Netpol tests to use
the standard CreateNamespace function from the Framework. Before this
change, a custom Namespace creation function was used, with the
following consequences:
* Pod security admission settings had to be enforced locally (not using
the centralized mechanism)
* the custom function was brittle, not waiting for default Namespace
ServiceAccount creation, causing tests to fail in some infrastructures
* tests were not benefiting from standard framework capabilities:
Namespace name generation, automatic Namespace deletion, etc.
As part of this change, we also do the following:
* clearly decouple responsibilities between the Model, which defines the
K8s objects to be created, and the KubeManager, which has access to
runtime information (actual Namespace names after their creation by
the framework, Service IPs, etc.)
* simplify / clean up tests and remove as much unneeded logic and as
many functions as possible for easier long-term maintenance
* remove the useFixedNamespaces compile-time constant switch, which
aimed at re-using existing K8s resources across test cases. The
reasons: a) it is currently broken as setting it to true causes most
tests to panic on the master branch, b) it is not a good idea to have
some switch like this which changes the behavior of the tests and is
never exercised in CI, c) it cannot possibly work as different test
cases have different Model requirements (e.g., the protocols list can
differ) and hence different K8s resource requirements.
For #108298
Signed-off-by: Antonin Bas <abas@vmware.com>
Introduce the networking/v1alpha1 API group.
Add the `ClusterCIDR` type to the networking/v1alpha1 API group; this type
will enable the NodeIPAM controller to support multiple ClusterCIDRs.
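A hedged sketch of constructing the new object; the field names follow
KEP-2593 and may not match the generated API exactly:

    cidr := &networkingv1alpha1.ClusterCIDR{
        ObjectMeta: metav1.ObjectMeta{Name: "example"},
        Spec: networkingv1alpha1.ClusterCIDRSpec{
            PerNodeHostBits: 8,            // host bits kept per node
            IPv4:            "10.0.0.0/8", // pool split into per-node CIDRs
        },
    }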