This yaml file uses `docker.io/alpine/socat:1.7.4.3-r0`; we should either
figure out how to replace the image or eliminate the yaml itself if it is
not being used to test anything in this repository.
We found this when running `e2e.test --list-images`: the Docker Hub image
reference above shows up, which gives the false impression that we depend
on this image for our testing purposes. Also, we should NOT depend on a
Docker Hub image anyway!
Signed-off-by: Davanum Srinivas <davanum@gmail.com>
Removes kube-proxy specific proxy type detection and globally increases
the timeout for session affinity testing, so that it works for more
use cases by default (notably including IPVS).
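A hedged sketch of the shape of the change; the names and the value
below are illustrative, not the actual test code:

// Before (illustrative): detect the proxy implementation and pick a
// timeout per backend, e.g. a larger one when IPVS is found.
//	if detectProxyMode(cs) == "ipvs" {
//		timeout = ipvsAffinityTimeout
//	}
// After: no detection at all; a single global timeout that is large
// enough for every proxy mode, IPVS included.
const AffinityTimeout = 2 * time.Minute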
Developers who are unaware of the Ginkgo wrappers in the framework might end up
passing the label decorators directly to Ginkgo. Previously, this led to an
error that was hard to understand without background knowledge:
Unknown Decorator
ginkgo.It("must deallocate on non graceful node shutdown", f.WithSerial(), f.WithDisruptive(), f.WithSlow(), func(ctx context.Context) {
/nvme/gopath/src/k8s.io/kubernetes/test/e2e/dra/dra.go:527
[It] node was passed an unknown decorator:
'framework.label{parts:[]string{"Serial"}, extra:""}'
Learn more at: http://onsi.github.io/ginkgo/#node-decorators-overview
When the label includes a special field that Ginkgo dumps, the message gets a bit better:
Unknown Decorator
ginkgo.It("must deallocate on non graceful node shutdown", f.WithSerial(), f.WithDisruptive(), f.WithSlow(), func(ctx context.Context) {
/nvme/gopath/src/k8s.io/kubernetes/test/e2e/dra/dra.go:527
[It] node was passed an unknown decorator:
'framework.label{parts:[]string{"Serial"}, extra:"", explanation:"If you see
this as part of an \"Unknown Decorator\" error from Ginkgo, then you need to
replace the ginkgo.It/Context/Describe call with the corresponding
framework.It/Context/Describe or (if available) f.It/Context/Describe."}'
Learn more at: http://onsi.github.io/ginkgo/#node-decorators-overview
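The struct fields in the dump above (parts, extra, explanation) are the
framework's label fields; the constructor below is only a sketch of the
mechanism, with a hypothetical name:

package framework

type label struct {
	parts []string
	extra string
	// explanation is never read by the framework itself. It exists only
	// so that Ginkgo's %#v dump of an unknown decorator tells the
	// developer how to fix the test.
	explanation string
}

func newLabel(parts ...string) label {
	return label{
		parts: parts,
		explanation: `If you see this as part of an "Unknown Decorator" error from Ginkgo, ` +
			`then you need to replace the ginkgo.It/Context/Describe call with the corresponding ` +
			`framework.It/Context/Describe or (if available) f.It/Context/Describe.`,
	}
}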
Nov 29 22:56:18.559: INFO: Waited 371.058378ms for the sample-apiserver to be ready to handle requests.
Nov 29 22:56:49.503: INFO: At 0001-01-01 00:00:00 +0000 UTC - event for sample-apiserver-deployment-58dfd44dd-gp8nd: { } Scheduled: Successfully assigned e2e-openapiv3-5878/sample-apiserver-deployment-58dfd44dd-gp8nd to ip-10-0-80-91.us-west-2.compute.internal
Nov 29 22:56:49.503: INFO: At 2023-11-29 22:55:53 +0000 UTC - event for sample-apiserver-deployment: {deployment-controller } ScalingReplicaSet: Scaled up replica set sample-apiserver-deployment-58dfd44dd to 1
Nov 29 22:56:49.503: INFO: At 2023-11-29 22:55:53 +0000 UTC - event for sample-apiserver-deployment-58dfd44dd: {replicaset-controller } SuccessfulCreate: Created pod: sample-apiserver-deployment-58dfd44dd-gp8nd
Nov 29 22:56:49.503: INFO: At 2023-11-29 22:55:53 +0000 UTC - event for sample-apiserver-deployment-58dfd44dd-gp8nd: {multus } AddedInterface: Add eth0 [10.129.2.137/23] from ovn-kubernetes
Nov 29 22:56:49.503: INFO: At 2023-11-29 22:55:53 +0000 UTC - event for sample-apiserver-deployment-58dfd44dd-gp8nd: {kubelet ip-10-0-80-91.us-west-2.compute.internal} Pulling: Pulling image "quay.io/openshift/community-e2e-images:e2e-3-registry-k8s-io-e2e-test-images-sample-apiserver-1-17-7-F6raNs0YQ76APdUD"
Nov 29 22:56:49.503: INFO: At 2023-11-29 22:55:57 +0000 UTC - event for sample-apiserver-deployment-58dfd44dd-gp8nd: {kubelet ip-10-0-80-91.us-west-2.compute.internal} Pulling: Pulling image "quay.io/openshift/community-e2e-images:e2e-11-registry-k8s-io-etcd-3-5-7-0-C5nYFPeT0lxaFCao"
Nov 29 22:56:49.503: INFO: At 2023-11-29 22:55:57 +0000 UTC - event for sample-apiserver-deployment-58dfd44dd-gp8nd: {kubelet ip-10-0-80-91.us-west-2.compute.internal} Started: Started container sample-apiserver
Nov 29 22:56:49.503: INFO: At 2023-11-29 22:55:57 +0000 UTC - event for sample-apiserver-deployment-58dfd44dd-gp8nd: {kubelet ip-10-0-80-91.us-west-2.compute.internal} Created: Created container sample-apiserver
Nov 29 22:56:49.503: INFO: At 2023-11-29 22:55:57 +0000 UTC - event for sample-apiserver-deployment-58dfd44dd-gp8nd: {kubelet ip-10-0-80-91.us-west-2.compute.internal} Pulled: Successfully pulled image "quay.io/openshift/community-e2e-images:e2e-3-registry-k8s-io-e2e-test-images-sample-apiserver-1-17-7-F6raNs0YQ76APdUD" in 3.068195738s (3.068206328s including waiting)
Nov 29 22:56:49.503: INFO: At 2023-11-29 22:56:06 +0000 UTC - event for sample-apiserver-deployment-58dfd44dd-gp8nd: {kubelet ip-10-0-80-91.us-west-2.compute.internal} Pulled: Successfully pulled image "quay.io/openshift/community-e2e-images:e2e-11-registry-k8s-io-etcd-3-5-7-0-C5nYFPeT0lxaFCao" in 9.1810668s (9.18107711s including waiting)
Nov 29 22:56:49.503: INFO: At 2023-11-29 22:56:06 +0000 UTC - event for sample-apiserver-deployment-58dfd44dd-gp8nd: {kubelet ip-10-0-80-91.us-west-2.compute.internal} Created: Created container etcd
Nov 29 22:56:49.503: INFO: At 2023-11-29 22:56:06 +0000 UTC - event for sample-apiserver-deployment-58dfd44dd-gp8nd: {kubelet ip-10-0-80-91.us-west-2.compute.internal} Started: Started container etcd
Nov 29 22:56:49.503: INFO: At 2023-11-29 22:56:48 +0000 UTC - event for sample-apiserver-deployment-58dfd44dd-gp8nd: {kubelet ip-10-0-80-91.us-west-2.compute.internal} Killing: Stopping container sample-apiserver
Nov 29 22:56:49.503: INFO: At 2023-11-29 22:56:48 +0000 UTC - event for sample-apiserver-deployment-58dfd44dd-gp8nd: {kubelet ip-10-0-80-91.us-west-2.compute.internal} Killing: Stopping container etcd
Nov 29 22:56:49.570: INFO: POD NODE PHASE GRACE CONDITIONS
Nov 29 22:56:49.570: INFO:
Nov 29 22:56:49.702: INFO: skipping dumping cluster info - cluster too large
k8s.io/kubernetes/test/e2e/apimachinery.glob..func19.3({0x7f470406fbf8, 0xc001c4b530})
k8s.io/kubernetes@v1.27.1/test/e2e/apimachinery/openapiv3.go:161 +0x6bf
fail [runtime/panic.go:260]: Test Panicked: runtime error: invalid memory address or nil pointer dereference
Ginkgo exit error 1: exit with code 1
Signed-off-by: Dan Williams <dcbw@redhat.com>
The e2e test patches the status of a ResourceQuota resource and tries to
verify that the controller resets it. However, the controller ignores
these updates and only reconciles the objects at a predefined interval,
5 minutes by default.
Since the test polls for 5 minutes, there are edge cases in which the
time for the reconcile loop to process the object is greater than 5
minutes, failing the test.
To account for both the reconcile loop period and the time needed to
reconcile the objects, we increase the poll timeout by one minute.
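A minimal sketch of the resulting wait, with hypothetical names
(waitForQuotaStatusReset, statusWasReset); the point is that the poll
timeout covers a full resync period plus slack for the reconcile itself:

// Assumed imports: context, time,
// metav1 "k8s.io/apimachinery/pkg/apis/meta/v1",
// "k8s.io/apimachinery/pkg/util/wait",
// "k8s.io/client-go/kubernetes".
func waitForQuotaStatusReset(ctx context.Context, c kubernetes.Interface, ns, name string) error {
	const (
		resyncPeriod   = 5 * time.Minute // controller's default reconcile interval
		reconcileSlack = 1 * time.Minute // headroom for the reconcile itself
	)
	return wait.PollUntilContextTimeout(ctx, 5*time.Second, resyncPeriod+reconcileSlack, true,
		func(ctx context.Context) (bool, error) {
			rq, err := c.CoreV1().ResourceQuotas(ns).Get(ctx, name, metav1.GetOptions{})
			if err != nil {
				return false, err
			}
			// Done once the controller has overwritten the patched status.
			return statusWasReset(rq), nil
		})
}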
Change-Id: I30f7fda36cdfb47c543b5b2b120e39f7d6c2442d
Because labels are currently typically also added to the spec texts, we
don't need to write them there separately.
This redundancy was introduced in f2cfbf44b1, which registered all inline
tags also as labels.
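Illustrative before/after, with a hypothetical spec text:

// With the decorator alone, "Slow" already appears both as a Ginkgo
// label and in the spec text, so repeating the [Slow] tag is redundant:
//	f.It("should finish [Slow]", f.WithSlow(), func(ctx context.Context) { ... })
// Sufficient:
//	f.It("should finish", f.WithSlow(), func(ctx context.Context) { ... })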
- Remove redundant tests
- Fix formatting of the query command by using fmt.Sprintf to
prevent spurious characters from being introduced (see the sketch
after this list)
- Fix running of the journalctl command on the node by adding the
default options
- Restrict running the tests on a single node
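A hedged sketch of the fmt.Sprintf fix mentioned above; the function,
option, and variable names are hypothetical, not the actual test code:

// Assumed import: fmt.
func buildJournalQuery(defaultOptions, unit string) string {
	// Building the command in one fmt.Sprintf call avoids the stray
	// quotes or whitespace that manual string concatenation can
	// introduce.
	return fmt.Sprintf("journalctl %s --output=short-precise --unit=%s", defaultOptions, unit)
}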