Nov 29 22:56:18.559: INFO: Waited 371.058378ms for the sample-apiserver to be ready to handle requests.
Nov 29 22:56:49.503: INFO: At 0001-01-01 00:00:00 +0000 UTC - event for sample-apiserver-deployment-58dfd44dd-gp8nd: { } Scheduled: Successfully assigned e2e-openapiv3-5878/sample-apiserver-deployment-58dfd44dd-gp8nd to ip-10-0-80-91.us-west-2.compute.internal
Nov 29 22:56:49.503: INFO: At 2023-11-29 22:55:53 +0000 UTC - event for sample-apiserver-deployment: {deployment-controller } ScalingReplicaSet: Scaled up replica set sample-apiserver-deployment-58dfd44dd to 1
Nov 29 22:56:49.503: INFO: At 2023-11-29 22:55:53 +0000 UTC - event for sample-apiserver-deployment-58dfd44dd: {replicaset-controller } SuccessfulCreate: Created pod: sample-apiserver-deployment-58dfd44dd-gp8nd
Nov 29 22:56:49.503: INFO: At 2023-11-29 22:55:53 +0000 UTC - event for sample-apiserver-deployment-58dfd44dd-gp8nd: {multus } AddedInterface: Add eth0 [10.129.2.137/23] from ovn-kubernetes
Nov 29 22:56:49.503: INFO: At 2023-11-29 22:55:53 +0000 UTC - event for sample-apiserver-deployment-58dfd44dd-gp8nd: {kubelet ip-10-0-80-91.us-west-2.compute.internal} Pulling: Pulling image "quay.io/openshift/community-e2e-images:e2e-3-registry-k8s-io-e2e-test-images-sample-apiserver-1-17-7-F6raNs0YQ76APdUD"
Nov 29 22:56:49.503: INFO: At 2023-11-29 22:55:57 +0000 UTC - event for sample-apiserver-deployment-58dfd44dd-gp8nd: {kubelet ip-10-0-80-91.us-west-2.compute.internal} Pulling: Pulling image "quay.io/openshift/community-e2e-images:e2e-11-registry-k8s-io-etcd-3-5-7-0-C5nYFPeT0lxaFCao"
Nov 29 22:56:49.503: INFO: At 2023-11-29 22:55:57 +0000 UTC - event for sample-apiserver-deployment-58dfd44dd-gp8nd: {kubelet ip-10-0-80-91.us-west-2.compute.internal} Started: Started container sample-apiserver
Nov 29 22:56:49.503: INFO: At 2023-11-29 22:55:57 +0000 UTC - event for sample-apiserver-deployment-58dfd44dd-gp8nd: {kubelet ip-10-0-80-91.us-west-2.compute.internal} Created: Created container sample-apiserver
Nov 29 22:56:49.503: INFO: At 2023-11-29 22:55:57 +0000 UTC - event for sample-apiserver-deployment-58dfd44dd-gp8nd: {kubelet ip-10-0-80-91.us-west-2.compute.internal} Pulled: Successfully pulled image "quay.io/openshift/community-e2e-images:e2e-3-registry-k8s-io-e2e-test-images-sample-apiserver-1-17-7-F6raNs0YQ76APdUD" in 3.068195738s (3.068206328s including waiting)
Nov 29 22:56:49.503: INFO: At 2023-11-29 22:56:06 +0000 UTC - event for sample-apiserver-deployment-58dfd44dd-gp8nd: {kubelet ip-10-0-80-91.us-west-2.compute.internal} Pulled: Successfully pulled image "quay.io/openshift/community-e2e-images:e2e-11-registry-k8s-io-etcd-3-5-7-0-C5nYFPeT0lxaFCao" in 9.1810668s (9.18107711s including waiting)
Nov 29 22:56:49.503: INFO: At 2023-11-29 22:56:06 +0000 UTC - event for sample-apiserver-deployment-58dfd44dd-gp8nd: {kubelet ip-10-0-80-91.us-west-2.compute.internal} Created: Created container etcd
Nov 29 22:56:49.503: INFO: At 2023-11-29 22:56:06 +0000 UTC - event for sample-apiserver-deployment-58dfd44dd-gp8nd: {kubelet ip-10-0-80-91.us-west-2.compute.internal} Started: Started container etcd
Nov 29 22:56:49.503: INFO: At 2023-11-29 22:56:48 +0000 UTC - event for sample-apiserver-deployment-58dfd44dd-gp8nd: {kubelet ip-10-0-80-91.us-west-2.compute.internal} Killing: Stopping container sample-apiserver
Nov 29 22:56:49.503: INFO: At 2023-11-29 22:56:48 +0000 UTC - event for sample-apiserver-deployment-58dfd44dd-gp8nd: {kubelet ip-10-0-80-91.us-west-2.compute.internal} Killing: Stopping container etcd
Nov 29 22:56:49.570: INFO: POD NODE PHASE GRACE CONDITIONS
Nov 29 22:56:49.570: INFO:
Nov 29 22:56:49.702: INFO: skipping dumping cluster info - cluster too large
k8s.io/kubernetes/test/e2e/apimachinery.glob..func19.3({0x7f470406fbf8, 0xc001c4b530})
k8s.io/kubernetes@v1.27.1/test/e2e/apimachinery/openapiv3.go:161 +0x6bf
fail [runtime/panic.go:260]: Test Panicked: runtime error: invalid memory address or nil pointer dereference
Ginkgo exit error 1: exit with code 1
Signed-off-by: Dan Williams <dcbw@redhat.com>
The e2e test patches the status of a ResourceQuota resource and tries to
verify that the controller resets its status. However, the controller ignores
the updates and only reconciles the objects at a predefined interval,
5 minutes by default.
Since the test polls for 5 minutes, there are edge cases where the time the
reconcile loop needs to reconcile the object is greater than 5 minutes,
failing the test.
To account for both the time to reconcile the objects and the reconcile
loop period, we increase the poll timeout by one minute.
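For illustration, a minimal sketch of the adjusted wait; the helper name, poll interval, and readiness condition are assumptions, not the actual test code:

```go
package e2e

import (
	"context"
	"time"

	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/apimachinery/pkg/util/wait"
	"k8s.io/client-go/kubernetes"
)

const (
	resourceQuotaTimeout = 5 * time.Minute // previous poll timeout
	quotaControllerGrace = 1 * time.Minute // extra minute covering the controller's resync period
)

// waitForQuotaStatusReset is a hypothetical helper: it polls until the quota
// controller has recomputed the status that the test patched away.
func waitForQuotaStatusReset(ctx context.Context, c kubernetes.Interface, ns, name string) error {
	return wait.PollUntilContextTimeout(ctx, 2*time.Second, resourceQuotaTimeout+quotaControllerGrace, true,
		func(ctx context.Context) (bool, error) {
			rq, err := c.CoreV1().ResourceQuotas(ns).Get(ctx, name, metav1.GetOptions{})
			if err != nil {
				return false, err
			}
			// Treat a repopulated .status.hard as proof the controller reconciled.
			return len(rq.Status.Hard) > 0, nil
		})
}
```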
Change-Id: I30f7fda36cdfb47c543b5b2b120e39f7d6c2442d
Because labels currently also get added to the spec texts, we don't
need to write them out separately.
This redundancy was introduced in f2cfbf44b1, when all inline tags started
being registered as labels as well.
- Remove redundant tests
- Fix formatting of the query command by using fmt.Sprintf to
prevent spurious characters from being introduced (see the sketch
after this list)
- Fix running of the journalctl command on the node by adding the
default options
- Restrict running the tests on a single node
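A hedged sketch of the query construction after the two fixes above; the helper name, unit name, and default options are assumptions rather than the exact code:

```go
package e2enode

import (
	"fmt"
	"os/exec"
)

// defaultJournalctlArgs are assumed defaults that keep the output plain and
// complete when journalctl runs on the node.
var defaultJournalctlArgs = []string{"--utc", "--no-pager", "--output=short-precise"}

// unitJournalCmd builds the journalctl invocation for a single systemd unit.
// fmt.Sprintf keeps the unit argument a single clean token, so no spurious
// characters leak into the command line.
func unitJournalCmd(unit string) *exec.Cmd {
	args := append([]string{}, defaultJournalctlArgs...)
	args = append(args, fmt.Sprintf("--unit=%s", unit))
	return exec.Command("journalctl", args...)
}
```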
kube-dns as an alternative DNS addon to CoreDNS hasn't been supported
since 1.22 when kubeadm's v1beta3 API was added.
Remove the related tests from the e2e_kubeadm test framework.
Add Azure to the list of providers that support accessing nodes
using SSH.
Note: This will require a follow-up PR adding the required
environment variables (AZURE_SSH_KEY, KUBE_SSH_BASTION) to the test
configuration.
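As a hedged illustration only (the function and provider names below are assumptions, not the framework's exact API), the provider check conceptually becomes:

```go
package e2essh

// supportsNodeSSH mirrors the change described above: Azure joins the
// providers for which the e2e framework assumes SSH access to nodes,
// given the right key and bastion configuration.
func supportsNodeSSH(provider string) bool {
	switch provider {
	case "gce", "gke", "aws", "azure": // "azure" is the newly added entry
		return true
	default:
		return false
	}
}
```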
Add a test that checks whether the CRB (kubeadm:cluster-admins)
used for binding admin.conf file users (part of the
kubeadm:cluster-admins Group) to the "cluster-admin"
ClusterRole exists in kubeadm clusters.
The check runs only for versions newer than the version
in which this feature was added.
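A minimal sketch of the existence check itself, with the version gate that skips older clusters omitted; helper names and the testing style are assumptions:

```go
package kubeadm

import (
	"context"
	"testing"

	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
)

const clusterAdminsCRB = "kubeadm:cluster-admins"

// checkClusterAdminsCRB verifies that the ClusterRoleBinding used for
// admin.conf users exists and references the built-in cluster-admin
// ClusterRole. Callers are assumed to skip it on versions older than the
// one that introduced the feature.
func checkClusterAdminsCRB(ctx context.Context, t *testing.T, client kubernetes.Interface) {
	crb, err := client.RbacV1().ClusterRoleBindings().Get(ctx, clusterAdminsCRB, metav1.GetOptions{})
	if err != nil {
		t.Fatalf("could not get ClusterRoleBinding %q: %v", clusterAdminsCRB, err)
	}
	if crb.RoleRef.Name != "cluster-admin" {
		t.Fatalf("ClusterRoleBinding %q references ClusterRole %q, expected cluster-admin",
			clusterAdminsCRB, crb.RoleRef.Name)
	}
}
```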
This checks that the With* label functions are used instead of the previous
inline tags. To catch strings passed to Ginkgo directly instead of the
framework wrapper functions, the final test specs are checked.
This changes the text registration so that, for tags where the framework has a
dedicated API (features, feature gates, slow, serial, etc.), those APIs are
used.
Arbitrary, custom tags are still left in place for now.
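A hedged before/after sketch of what that registration change looks like; the wrapper names are inferred from the description above and should be treated as assumptions:

```go
// Before: tags encoded in the spec text.
//   ginkgo.It("should do the thing [Slow] [Serial]", func(ctx context.Context) { /* ... */ })
//
// After: the framework's dedicated APIs register the same tags as labels
// (names assumed, not verified against the framework's exact API).
framework.It("should do the thing", framework.WithSlow(), framework.WithSerial(),
	func(ctx context.Context) {
		// test body unchanged
	})
```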
Since ServiceCIDR and IPAddresses are mostly API driven, integration
tests will give us good coverage, exercising real use cases like
the migration from one ServiceCIDR range to a new one.
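A hedged sketch of the kind of API-driven step such a test exercises, assuming the v1alpha1 networking client (group and field names may differ slightly):

```go
package integration

import (
	"context"

	networkingv1alpha1 "k8s.io/api/networking/v1alpha1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
)

// createSecondaryServiceCIDR adds a new range so Services can migrate off the
// original ServiceCIDR; a real test would then create Services and verify the
// allocated ClusterIPs and IPAddress objects land in the new range.
func createSecondaryServiceCIDR(ctx context.Context, c kubernetes.Interface, cidr string) error {
	sc := &networkingv1alpha1.ServiceCIDR{
		ObjectMeta: metav1.ObjectMeta{Name: "migration-range"},
		Spec:       networkingv1alpha1.ServiceCIDRSpec{CIDRs: []string{cidr}},
	}
	_, err := c.NetworkingV1alpha1().ServiceCIDRs().Create(ctx, sc, metav1.CreateOptions{})
	return err
}
```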
* It is observed in some of the periodic job results that the kubelet log, along with a few other logs,
is not getting copied to the artifacts directory once the node e2e tests are executed
* The following is a sample error log that is displayed once the tests are run
```
I1031 13:15:49.056897 40204 ssh.go:146] Running the command ssh, with args: [-o UserKnownHostsFile=/dev/null -o IdentitiesOnly=yes -o CheckHostIP=no -o StrictHostKeyChecking=no -o ServerAliveInterval=30 -o LogLevel=ERROR -i /home/svanka/.ssh/google_compute_engine core@35.185.108.51 -- sudo ls core@35.185.108.51:/tmp/node-e2e-20231031T125637/results/*.log]
E1031 13:16:15.346641 40204 ssh.go:149] failed to run SSH command: out: ls: cannot access 'core@35.185.108.51:/tmp/node-e2e-20231031T125637/results/*.log': No such file or directory
, err: exit status 2
```
* This change fixes the above issue and helps gather the required test artifacts once the test execution is completed
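The failing command above passes an scp-style `user@host:path` to an `ls` that already runs on the node over SSH. A hedged sketch of the corrected shape (helper names are assumptions):

```go
package remote

import (
	"fmt"
	"os/exec"
	"path/filepath"
)

// listResultLogs lists the *.log files in the remote results directory.
// Because ls already runs on the node via ssh, it gets the bare remote path,
// not the "user@host:path" form that scp expects.
func listResultLogs(host, resultsDir string) ([]byte, error) {
	pattern := filepath.Join(resultsDir, "*.log")
	return exec.Command("ssh", host, "--", "sudo", "ls", pattern).CombinedOutput()
}

// copyLog then fetches an individual log file; only here does the host prefix
// belong in the path.
func copyLog(host, remoteLog, destDir string) error {
	return exec.Command("scp", fmt.Sprintf("%s:%s", host, remoteLog), destDir).Run()
}
```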
Signed-off-by: Sai Ramesh Vanka <svanka@redhat.com>
Add retry logic to the `assertConsistentConnectivity` function from
the `test/e2e/windows/hybrid_network.go` file.
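A hedged sketch of what wrapping the existing connectivity probe in a retry could look like; the probe signature and intervals are illustrative, not the actual test code:

```go
package windows

import (
	"context"
	"time"

	"k8s.io/apimachinery/pkg/util/wait"
)

// retryConnectivity polls the existing connectivity probe so a single
// transient network hiccup no longer fails the whole assertion.
func retryConnectivity(ctx context.Context, probe func(ctx context.Context) error) error {
	return wait.PollUntilContextTimeout(ctx, 5*time.Second, 2*time.Minute, true,
		func(ctx context.Context) (bool, error) {
			if err := probe(ctx); err != nil {
				// Not fatal yet; keep retrying until the timeout expires.
				return false, nil
			}
			return true, nil
		})
}
```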
Signed-off-by: Ionut Balutoiu <ibalutoiu@cloudbasesolutions.com>
This test depends on CDI support in a runtime and doesn't work
with the out-of-the-box Containerd. Marking it as a NodeSpecialFeature
should fix Containerd CI job failures.
update pod replacement policy feature flag comment and refactor the e2e test for pod replacement policy
minor fixes for pod replacement policy and e2e test
fix wrong assertions for pod replacement policy e2e test
more fixes to pod replacement policy e2e test
refactor PodReplacementPolicy e2e test to use finalizers
fix unit tests when pod replacement policy feature flag is promoted to beta
fix podgc controller unit tests when pod replacement feature is enabled
fix lint issue in pod replacement policy e2e test
assert no error in defer function for removing finalizer in pod replacement policy e2e test
implement test using a sh trap for pod replacement policy
reduce sleep after SIGTERM in pod replacement policy e2e test to 5s
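The last two items refer to the container command used by that test; a hedged sketch of its shape (image, timings, and variable name are assumptions):

```go
package jobs

import v1 "k8s.io/api/core/v1"

// trapContainer keeps running until it receives SIGTERM, then sleeps briefly
// before exiting, which gives the test a window to observe the terminating
// pod and assert when its replacement is (or is not) created.
var trapContainer = v1.Container{
	Name:    "pod-replacement",
	Image:   "busybox",
	Command: []string{"sh", "-c", `_term() { sleep 5; exit 0; }; trap _term TERM; sleep 3600 & wait`},
}
```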
Linting together with an upcoming klog update finds this problem:
test/images/sample-device-plugin/sampledeviceplugin.go:165:4: printf: k8s.io/klog/v2.Errorf does not support error-wrapping directive %w (govet)
klog.Errorf("Failed to add watch to %q: %w", triggerPath, err)
^
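The fix, for illustration, is to let klog format the error value with %v (or wrap it separately with fmt.Errorf if callers need errors.Is/As):

```go
// klog's printf-style functions do not support the error-wrapping %w verb,
// so format the error with %v instead.
klog.Errorf("Failed to add watch to %q: %v", triggerPath, err)
```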