If something goes wrong during Azure cloud detection, casting the returned
value results in the following panic, which gives no clue about what the
actual error was.
```
panic: interface conversion: cloudprovider.Interface is nil, not *azure.Cloud
goroutine 1 [running]:
k8s.io/kubernetes/test/e2e/framework/providers/azure.newProvider()
test/e2e/framework/providers/azure/azure.go:50 +0x2b5
k8s.io/kubernetes/test/e2e/framework.SetupProviderConfig({0xc0007966b8, 0x5})
test/e2e/framework/provider.go:82 +0x1a6
```
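A minimal sketch of how the conversion could be guarded so the underlying
detection error surfaces instead of a nil-interface panic; the surrounding
provider setup code is assumed here, not taken from the actual azure provider:
```go
package example

import (
	"fmt"

	cloudprovider "k8s.io/cloud-provider"
	"k8s.io/legacy-cloud-providers/azure"
)

// azureCloudFromInterface reports the original detection error (if any)
// and uses a checked type assertion instead of panicking on a nil interface.
func azureCloudFromInterface(cloud cloudprovider.Interface, detectErr error) (*azure.Cloud, error) {
	if detectErr != nil {
		return nil, fmt.Errorf("error detecting Azure cloud provider: %w", detectErr)
	}
	azureCloud, ok := cloud.(*azure.Cloud)
	if !ok {
		return nil, fmt.Errorf("expected *azure.Cloud, got %T", cloud)
	}
	return azureCloud, nil
}
```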
CreatePod and MakePod only accepted an `isPrivileged` boolean, which made it
impossible to use those helpers in tests that run against a default
framework.Framework, because its default pod security level is LevelRestricted.
The simple boolean gets replaced with admissionapi.Level; passing
LevelRestricted has the same effect as calling e2epod.MixinRestrictedPodSecurity.
Instead of explicitly passing a constant to these modified helpers, most tests
get updated to pass f.NamespacePodSecurityLevel. This has the advantage
that if that level gets lowered in the future, tests only need to be updated in
one place.
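As a hedged sketch of the new call pattern (the exact MakePod parameter order
is an assumption, and older framework versions may call the field
NamespacePodSecurityEnforceLevel):
```go
package example

import (
	v1 "k8s.io/api/core/v1"
	"k8s.io/kubernetes/test/e2e/framework"
	e2epod "k8s.io/kubernetes/test/e2e/framework/pod"
)

// makeTestPod builds a pod spec using the namespace's pod security level.
func makeTestPod(f *framework.Framework, ns string) *v1.Pod {
	// Old: e2epod.MakePod(ns, nil, nil, true /* isPrivileged */, "")
	// New: pass the namespace's pod security level so the helper adds
	// whatever security context settings that level requires.
	return e2epod.MakePod(ns, nil, nil, f.NamespacePodSecurityLevel, "")
}
```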
In some cases, helpers taking client+namespace+timeouts parameters get replaced
with helpers that take the Framework instance, to get access to
f.NamespacePodSecurityEnforceLevel. These helpers don't need separate
parameters because in practice all they were ever called with were the values
from the Framework instance.
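A minimal sketch of that pattern (the helper name is made up; the Framework
fields and the pod wait function are assumed to match the current e2e
framework):
```go
package example

import (
	"context"

	v1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/kubernetes/test/e2e/framework"
	e2epod "k8s.io/kubernetes/test/e2e/framework/pod"
)

// createAndWait pulls client, namespace, and timeouts from the Framework
// instance instead of taking them as separate parameters.
func createAndWait(ctx context.Context, f *framework.Framework, pod *v1.Pod) (*v1.Pod, error) {
	created, err := f.ClientSet.CoreV1().Pods(f.Namespace.Name).Create(ctx, pod, metav1.CreateOptions{})
	if err != nil {
		return nil, err
	}
	err = e2epod.WaitTimeoutForPodRunningInNamespace(ctx, f.ClientSet, created.Name, f.Namespace.Name, f.Timeouts.PodStart)
	return created, err
}
```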
The namespace the critical pod was referring to was wrong: it used the
generated test namespace instead of `kube-system`. This and the resulting test
condition are now fixed.
The test seems to run only in `ci-crio-cgroupv1-node-e2e-flaky` for now.
Closes https://github.com/kubernetes/kubernetes/issues/109296
Signed-off-by: Sascha Grunert <sgrunert@redhat.com>
The e2e framework uses active loops to wait for certain async operations;
these loops need to retry on some errors and fail on others.
For functions that depend on some other operation having happened, the
apiserver may return 503 errors until that specific service is available, so
we should retry on those too.
Change-Id: Ib3d194184f6385b9d3d151c7055f27c97c21c3ff
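A hedged sketch of such a wait loop, assuming the apimachinery wait and errors
packages; the actual framework helpers are not quoted here:
```go
package example

import (
	"context"
	"time"

	apierrors "k8s.io/apimachinery/pkg/api/errors"
	"k8s.io/apimachinery/pkg/util/wait"
)

// waitForResource keeps polling while the apiserver reports "not found"
// or 503 (service unavailable), and gives up on any other error.
func waitForResource(ctx context.Context, get func(context.Context) error) error {
	return wait.PollUntilContextTimeout(ctx, 2*time.Second, 5*time.Minute, true,
		func(ctx context.Context) (bool, error) {
			err := get(ctx)
			switch {
			case err == nil:
				return true, nil
			case apierrors.IsNotFound(err) || apierrors.IsServiceUnavailable(err):
				// The aggregated service may not be available yet; retry.
				return false, nil
			default:
				return false, err
			}
		})
}
```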
* Fix flaky HPA e2e tests by not failing on context cancelled
Consume requests are sent in a loop in a separate goroutine while the test executes, so once the test completes a consumption request may still be pending. Cancelling that request during cleanup should not cause test failures (see the sketch below).
Tests started being flaky after #112923 introduced passing a test context that gets cancelled during cleanup.
* Use PollUntilContextTimeout and restructure the error-ignoring logic
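A hedged sketch of the pattern the two bullets describe; the function and
parameter names are illustrative, not the actual resource consumer code:
```go
package example

import (
	"context"
	"errors"
	"time"

	"k8s.io/apimachinery/pkg/util/wait"
	"k8s.io/kubernetes/test/e2e/framework"
)

// sendConsumeRequest retries the consume request until it succeeds, and
// treats a test context cancelled during cleanup as expected rather than
// as a test failure.
func sendConsumeRequest(ctx context.Context, send func(context.Context) error) {
	err := wait.PollUntilContextTimeout(ctx, 1*time.Second, 30*time.Second, true,
		func(ctx context.Context) (bool, error) {
			if err := send(ctx); err != nil {
				framework.Logf("consume request failed, will retry: %v", err)
				return false, nil
			}
			return true, nil
		})
	if err != nil && !errors.Is(err, context.Canceled) {
		// Only a real failure counts; a request still pending when cleanup
		// cancels the context is expected.
		framework.Failf("consume request did not succeed: %v", err)
	}
}
```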
Copied and modified RemoveString function from
k/k/pkg/util/slice/slice.go to e2e/framework/pod/pod_client.go
This is the last dependency from e2e framework to k/k/pkg/util
Copied and modified pod format function from
k/k/pkg/kubelet/util/format/pod.go to e2e/framework/pod/pod_client.go
This is the last dependency from e2e framework to k/k/pkg/kubelet
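For reference, hedged sketches of the two copied helpers; the actual copies in
pod_client.go may differ in naming and signature:
```go
package example

import (
	"fmt"

	v1 "k8s.io/api/core/v1"
)

// removeString returns a new slice with all occurrences of s removed,
// mirroring the idea of k/k/pkg/util/slice.RemoveString.
func removeString(slice []string, s string) []string {
	result := make([]string, 0, len(slice))
	for _, item := range slice {
		if item != s {
			result = append(result, item)
		}
	}
	return result
}

// formatPod returns a human-readable "name_namespace(uid)" string,
// mirroring k/k/pkg/kubelet/util/format.Pod.
func formatPod(pod *v1.Pod) string {
	return fmt.Sprintf("%s_%s(%s)", pod.Name, pod.Namespace, pod.UID)
}
```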
It's conceptually wrong to have dependencies on k/k/pkg in the e2e framework
code. Such dependencies should be moved to the corresponding packages, in this
particular case to test/e2e_node.
The updated dependencies contain some nice-to-have improvements (for example,
better printing of errors with gomega/format.Object) but nothing that is
critical right now.
"go mod tidy" was run manually in
staging/src/k8s.io/kms/internal/plugins/mock (https://github.com/kubernetes/kubernetes/pull/116613
is not merged yet).
Add node e2e test to verify that static pods can be started after a
previous static pod with the same config temporarily failed termination.
The scenario is:
1. Static pod is started
2. Static pod is deleted
3. Static pod termination fails (internally `syncTerminatedPod` fails)
4. At later time, pod termination should succeed
5. New static pod with the same config is (re)-added
6. New static pod is expected to start successfully
To reproduce this scenario, set up a pod using an NFS mount. The NFS server is
stopped, which causes the volume unmount and therefore `syncTerminatedPod` to
fail. The NFS server is later started again, allowing the volume to unmount
successfully.
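A compact sketch of the ordering only; every field below is a hypothetical
placeholder (writing static pod manifests, toggling the NFS server, waiting on
pod state), not an actual node e2e helper:
```go
package example

// staticPodRepro illustrates the order of the scenario steps above.
type staticPodRepro struct {
	writeManifest  func() error // steps 1 and 5: drop the manifest into the static pod dir
	deleteManifest func() error // step 2: delete the static pod
	stopNFS        func() error // step 3: unmount (and syncTerminatedPod) will fail
	startNFS       func() error // step 4: allow the unmount to succeed
	waitRunning    func() error
	waitTerminated func() error
}

func (r staticPodRepro) run() error {
	for _, step := range []func() error{
		r.writeManifest, r.waitRunning, // 1. static pod is started
		r.stopNFS, r.deleteManifest, // 2./3. deletion while termination keeps failing
		r.startNFS, r.waitTerminated, // 4. termination eventually succeeds
		r.writeManifest, r.waitRunning, // 5./6. same config is re-added and starts
	} {
		if err := step(); err != nil {
			return err
		}
	}
	return nil
}
```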
xref:
1. https://github.com/kubernetes/kubernetes/pull/113145#issuecomment-1289587988
2. https://github.com/kubernetes/kubernetes/pull/113065
3. https://github.com/kubernetes/kubernetes/pull/113093
Signed-off-by: David Porter <david@porter.me>