This reverts commit 0c2cee5676e64976f9e767f40c4c4750a8eeb11f.
The health check containers are not required for any test, but we want
to run them anyway to ensure that they cause no unexpected issues.
The purpose of the pod created by `createBalancedPodForNodes()` is to ensure
that all nodes have equal resource requests (as seen by the scheduler). This
prevents the default scheduling behavior (which attempts to balance resource requests)
from interfering with e2e tests which test other priorities/score plugins.
Because the scheduler only worries about requests, specifying `Limits` in this pod
is unnecessary. In fact, if the calculated "balancing" limit is too low, it can cause
the balancing pod to never start due to OOMKill errors, leading to flakes and failures.
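A minimal sketch of the idea (illustrative only, not the actual
createBalancedPodForNodes code; the image and values are placeholders):
the pod sets only Requests and leaves Limits unset:

    import (
        v1 "k8s.io/api/core/v1"
        "k8s.io/apimachinery/pkg/api/resource"
        metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
    )

    pod := &v1.Pod{
        ObjectMeta: metav1.ObjectMeta{Name: "balance-node"},
        Spec: v1.PodSpec{
            Containers: []v1.Container{{
                Name:  "pause",
                Image: "k8s.gcr.io/pause:3.4.1",
                Resources: v1.ResourceRequirements{
                    // Only Requests influence the scheduler's balancing.
                    // Leaving Limits unset avoids OOMKills when the
                    // calculated "balancing" value would be very low.
                    Requests: v1.ResourceList{
                        v1.ResourceCPU:    resource.MustParse("100m"),
                        v1.ResourceMemory: resource.MustParse("128Mi"),
                    },
                },
            }},
        },
    }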
- Test all versions to make sure each resource version is in the
mappings
- Fail when request info contains an unrecognized version. We have tests
  that guarantee that all known versions are in the mappings. If we
  get a version in request info that is not there, we should fail fast to
  prevent inconsistent behaviour (e.g. when for some reason the mappings
  are not up to date). See the sketch below.
Ensure all known versions are in mappings
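A rough sketch of the fail-fast shape (knownVersions, mappings, and
lookup are hypothetical stand-ins, not the actual request-info code):

    // Every known version must be present in the mappings.
    func TestAllVersionsMapped(t *testing.T) {
        for _, gv := range knownVersions { // assumed []schema.GroupVersion
            if _, ok := mappings[gv]; !ok {
                t.Errorf("known version %v is missing from the mappings", gv)
            }
        }
    }

    // Lookups fail fast on versions the mappings don't know about.
    func lookup(gv schema.GroupVersion) (mapping, error) {
        m, ok := mappings[gv]
        if !ok {
            return mapping{}, fmt.Errorf("unrecognized version %q in request info", gv.String())
        }
        return m, nil
    }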
In one mock test, the snapshotter needs permission to read
secrets. That was disabled in the RBAC files of recent releases. We
need to patch it back in during deployment.
Co-Authored-By: Riaan Kleinhans <riaan@ii.coop>
e2e test validates the following 2 extra endpoints
- listAppsV1NamespacedReplicaSet
- deleteAppsV1CollectionNamespacedReplicaSet
They are not needed for any of the tests and in practice apparently
caused enough overhead that even unrelated tests timed out. For
example, in the pull-kubernetes-e2e-kind test, 43 out of 5771 tests
failed, including tests from sig-node, sig-cli, sig-api-machinery,
sig-network.
Mirroring the various YAML files by hand is tedious. The new
update-hostpath.sh does all the necessary steps automatically.
The result is now a bit more consistent with the upstream repos in the
sense that the original file names and paths for the RBAC YAML files
are used.
The csi-hostpath-testing.yaml is included for the sake of
completeness, but not used during E2E testing.
The new hostpath driver release is v1.6.2, which adds the
external-health-monitor for the first time.
We were avoiding the scheduler by using pod.Spec.NodeName directly.
However, once we switched to using the node selector, the no_snat
e2e test started to fail because it was trying to schedule pods on
nodes with taints, hence failing the test.
Based on this comment in
ea07644522/test/e2e/framework/pod/node_selection.go (L96-L101)
// pod.Spec.NodeName should not be set directly because
// it will bypass the scheduler, potentially causing
// kubelet to Fail the pod immediately if it's out of
// resources. Instead, we want the pod to remain
// pending in the scheduler until the node has resources
// freed up.
* Use deep copies in `PrepareForUpdate()`
* Preserve select metadata from new pod
* Use patch to add ephemeral containers in `kubectl debug` (see the sketch after this list)
* Distinguish between pod vs /ephemeralcontainers NotFound
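A hedged sketch of the patch-based approach with client-go (the pod,
namespace, and container names are placeholders):

    import (
        "context"
        "encoding/json"
        v1 "k8s.io/api/core/v1"
        metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
        "k8s.io/apimachinery/pkg/types"
        "k8s.io/client-go/kubernetes"
    )

    func addDebugContainer(ctx context.Context, cs kubernetes.Interface, ns, pod string) error {
        patch, err := json.Marshal(map[string]interface{}{
            "spec": map[string]interface{}{
                "ephemeralContainers": []v1.EphemeralContainer{{
                    EphemeralContainerCommon: v1.EphemeralContainerCommon{
                        Name:  "debugger",
                        Image: "busybox",
                    },
                }},
            },
        })
        if err != nil {
            return err
        }
        // Patch only the ephemeralcontainers subresource instead of
        // updating the whole pod object.
        _, err = cs.CoreV1().Pods(ns).Patch(ctx, pod,
            types.StrategicMergePatchType, patch, metav1.PatchOptions{},
            "ephemeralcontainers")
        return err
    }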
This replaces the check of mount propagation to/from the host OS mount
namespace to a similar check about the mount namespace where kubelet is
running (which may or may not be the same mount namespace as the host
OS).
This addresses issue #100259
This changes the `/ephemeralcontainers` subresource of `/pods` to use
the `Pod` kind rather than `EphemeralContainers`.
When designing this API initially it seemed preferable to create a new
kind containing only the pod's ephemeral containers, similar to how
binding and scaling work.
It later became clear that this made admission control more difficult
because the controller wouldn't be presented with the entire Pod, so we
updated this to operate on the entire Pod, similar to how `/status`
works.
To run the tests on a single-node cluster, create two pods consuming 2/5 of the extended resource instead of one consuming 2/3.
The low-priority pod will consume 2/5 of the extended resource, so in case there's only a single node,
a high-priority pod consuming 2/5 of the extended resource can still be scheduled. This makes sure only the low-priority pod
gets preempted once the preemptor pod consuming 2/5 of the extended resource gets scheduled, while keeping the high-priority pod untouched.
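Worked example (capacity numbers assumed for illustration) on a single
node exposing 5 units of the extended resource:

    low-priority pod   requests 2  -> node at 2/5
    high-priority pod  requests 2  -> node at 4/5
    preemptor pod      requests 2  -> 6/5 does not fit, so the scheduler
                                      preempts only the low-priority pod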
The tests with pods using hostNetwork need to bind ports for the
test. Since they use hostNetwork, the available ports are limited; if
more than one test runs in parallel, one is going to fail because it
will not be able to get the port.
The test verifies a specific feature, in which GPUs are required, thus, cannot
be run in most testing environments. We should exclude this test from most test jobs.
We'll be doing this by adding the [Feature:GPUDevicePlugin] tag (which is also being
used by test/e2e/scheduling/nvidia-gpus.go), and then add it to the ginkgo skip regex.
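For illustration, the tagged spec and matching skip pattern could look
like this (the spec title is a placeholder; the tag matches the one in
test/e2e/scheduling/nvidia-gpus.go):

    ginkgo.It("should run a GPU workload [Feature:GPUDevicePlugin]", func() {
        // ... test body requiring GPUs ...
    })

    // Jobs without GPUs then exclude such specs via a skip regex like:
    //   --ginkgo.skip=\[Feature:GPUDevicePlugin\]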
The pause image should not run sleep commands, as this can cause pods
to fail to start correctly.
See details in issue #100047. We discovered some behavior in certain
cases, like new pods failing to start correctly, which will be further
investigated.
Change-Id: I9761bbefa694f6fe51a6f1e7561fa7e566ce4d8f
We can support topology detection for all drivers with a single
topology key by allowing different nodes to have the same topology
value. This makes the test a bit more generic and its configuration
simpler.
Drivers need to opt into the new test. Depending on how the driver
describes its behavior, the check can be more specific. Currently it
distinguishes between getting any kind of information about the
storage class (i.e. assuming that capacity is not exhausted) and
getting one object per node (for local storage). Discovering nodes
only works for CSI drivers.
The immediate usage of this new test is for csi-driver-host-path with
the deployment for distributed provisioning and storage capacity
tracking. Periodic kubernetes-csi Prow and pre-merge jobs can run this
test.
The alternative would have been to write a test that manages the
deployment of the csi-driver-host-path driver itself, i.e. use the E2E
manifests. But that would have implied duplicating the
deployments (in-tree and in csi-driver-host-path) and then changing
kubernetes-csi Prow jobs to somehow run for in-tree driver definition
with newer components, something that currently isn't done. The test
then also wouldn't be applicable to out-of-tree driver deployments.
Yet another alternative would be to create a separate E2E test suite
either in csi-driver-host-path or external-provisioner. The advantage
of such an approach is that the test can be written exactly for the
expected behavior of a deployment and thus be more precise than the
generic version of the test in k/k. But this again wouldn't be
reusable for other drivers, and it would also be a lot of work to set
up, as no such E2E test suite currently exists.
Removing myself from OWNERS as I haven't had the bandwidth to go over
these reviews for a long time.
This should have been part of #99718, which has already been merged.
Previously the code used to delete pods serially.
In this patch we factor out code to do that in parallel,
using goroutines.
This shaves some time off the e2e topology manager test run, with no
intended changes in behaviour.
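Roughly the shape of the change (a hedged sketch; the real helper
reports errors through the e2e framework):

    import (
        "context"
        "sync"
        metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
        "k8s.io/client-go/kubernetes"
    )

    // deletePodsAsync deletes the given pods in parallel instead of serially.
    func deletePodsAsync(ctx context.Context, cs kubernetes.Interface, ns string, names []string) {
        var wg sync.WaitGroup
        for _, name := range names {
            wg.Add(1)
            go func(name string) {
                defer wg.Done()
                // Errors are ignored in this sketch only.
                _ = cs.CoreV1().Pods(ns).Delete(ctx, name, metav1.DeleteOptions{})
            }(name)
        }
        wg.Wait()
    }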
Signed-off-by: Francesco Romani <fromani@redhat.com>
The image "e2eteam/powershell-helper:6.2.7-linux-cache" is a Linux image. Because we're running "docker buildx build --platform windows/amd64", docker buildx will consider it as a Windows image unless we explicitly specify otherwise. If the image's platform is not correctly identified, we can run into problems when trying to build the image.
We are already doing something similar with the windows-servercore-cache image.
These tests were not performing evictions because:
- Test pods were not on the test node (fixed with spec.nodeName)
- The test pods are unmanaged and require the --force flag to be evicted
(added the flag)
- The test node does not run pods, which prevents pod deletion from
finalizing (worked around with --skip-wait-for-delete-timeout)
We can cache the powershell-helper image's results into a scratch Linux image using
docker buildx. This will allow us to spend less time pulling the data we need from the
powershell-helper image when we need it.
Additionally, docker buildx might have some issues with cross-registry
images, so this will also allow us to circumvent them.
While adding annotations to the namespace, using the Update API may result in
conflicts as "the object has been modified; please apply your changes to the
latest version and try again". Use Patch API to avoid this.
Signed-off-by: Chaitanya Bandi <kbandi@cs.stonybrook.edu>
While labeling the namespace, using the Update API may result in conflicts as
"the object has been modified; please apply your changes to the latest version
and try again". Use Patch API to avoid this.
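A minimal sketch of the Patch-based update (the namespace name and
label are placeholders):

    patch := []byte(`{"metadata":{"labels":{"e2e-run":"true"}}}`)
    // A patch is applied server-side, so it avoids the read-modify-write
    // conflict that Update can hit:
    _, err := cs.CoreV1().Namespaces().Patch(ctx, "test-ns",
        types.StrategicMergePatchType, patch, metav1.PatchOptions{})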
Signed-off-by: Chaitanya Bandi <kbandi@cs.stonybrook.edu>
The Topology Manager e2e tests want to run on a real multi-NUMA system
and want to consume real devices supported by device plugins; SRIOV
devices happen to be the most commonly available of such devices.
The tests need to wait for resource availability before actually
running, or they will fail with a false negative that is also
relatively hard to debug.
An optimization was added in commit 56106439cf to minimize the restarts,
speed up the execution and make a nasty, yet not fully understood, flake
with SRIOV device plugin much less likely.
Unfortunately the pod-scope tests were mistakenly left out.
This patch fixes that.
CI lanes did NOT fail (and will not fail) because the CI machines aren't
multi-NUMA and don't expose SRIOV devices, so the relevant portion of the
test just skips, avoiding the issue.
However, this resurfaces when running the testsuite on bare metal; this
is how we noticed.
Signed-off-by: Francesco Romani <fromani@redhat.com>
The tests:
"Pod liveness probe, container exec timeout, restart"
"Pod readiness probe, container exec timeout, not ready"
cannot be run against a kubelet older than 1.20.
Tag them with [MinimumKubeletVersion:1.20].
Adds and implements the ResetFieldsProvider interface in order to ensure that
the fieldmanager no longer owns fields that get reset before the object
is persisted.
Co-authored-by: Kevin Wiesmueller <kwiesmul@redhat.com>
Co-authored-by: Kevin Delgado <kevindelgado@google.com>
Reverting af3e118b1f and
2242d0ffc4 as these tests fail when
ExecProbeTimeout feature gate is turned on.
Signed-off-by: Davanum Srinivas <davanum@gmail.com>
It can happen that there are multiple slices, some of them
outdated, so we relax the test to check that at least one
has the corresponding fields mirrored.
Add feature gate to disable the GetAllocatableResources API.
The feature gate is alpha stage, disabled by default.
Add e2e test to demonstrate the behaviour with feature gate disabled.
Signed-off-by: Francesco Romani <fromani@redhat.com>
Document why teardownSRIOVPod has to wait for all the containers
to be gone before it returns, and why this is important.
Additionally, change the code to wait for all the containers to be gone,
not just the first. This is both a little cleaner and a little safer,
even though it seems the current code caused no issues so far.
Signed-off-by: Francesco Romani <fromani@redhat.com>
Speed up the cleanup after test cases by deleting pods in separate
goroutines.
The post-test cleanup stage must be done carefully, since pods require
exclusive allocation - so the tests must take all the steps to properly
clean up and avoid polluting the environment, which has a negative
effect on test duration (it takes longer).
Hence, we add safe speedups like doing pod deletions in parallel.
Signed-off-by: Francesco Romani <fromani@redhat.com>
Add e2e tests for the new GetAllocatableResources API.
The tests are added in the `podresources_test` suite
created previously in this series.
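The new API can be exercised roughly like this (a sketch; gRPC
connection setup to the kubelet's pod-resources socket is omitted):

    import (
        "context"
        "google.golang.org/grpc"
        podresourcesapi "k8s.io/kubelet/pkg/apis/podresources/v1"
    )

    func allocatable(ctx context.Context, conn *grpc.ClientConn) error {
        client := podresourcesapi.NewPodResourcesListerClient(conn)
        resp, err := client.GetAllocatableResources(ctx,
            &podresourcesapi.AllocatableResourcesRequest{})
        if err != nil {
            return err
        }
        // resp reports the devices and CPUs the kubelet considers allocatable.
        _ = resp
        return nil
    }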
Signed-off-by: Francesco Romani <fromani@redhat.com>
When listening on udp, the reply is sent using a src address which is
the address of the gateway interface. This means that when listening on
any, the reply can be sent out with a src ip which is different from the
request's target ip. This confuses natting, and "connectionful" udp
services do not work.
Here, we force the endpoint to listen from the hostIP and from podIPs,
to cover both dual stack and legacy clusters.
Signed-off-by: Federico Paolinelli <fpaoline@redhat.com>
As discussed during the alpha review, the ReadOnly field is not really
needed because volume mounts can also be read-only. It's a historical
oddity that can be avoided for generic ephemeral volumes as part
of the promotion to beta.
* namespace by name default labelling
Co-authored-by: Jordan Liggitt <jordan@liggitt.net>
Co-authored-by: Abhishek Raut <rauta@vmware.com>
* Make some logic improvements to the default namespace label
* Fix unit tests
* minor change to trigger the CI
* Correct some tests and validation behaviors
* Add Canonicalize normalization and improve validation
* Remove label validation that should be dealt by strategy
* Update defaults_test.go
add fuzzer
ns spec
* remove the finalizer thingy
* Fix integration test
* Add namespace canonicalize unit test
* Improve validation code and code comments
* move validation of labels to validateupdate
* spacex will save us all
* add comment to testget
* readability of canonicalize
* Added namespace finalize and status update validation
* comment about ungenerated names
* correcting a missing line on storage_test
* Update the namespace validation unit test
* Add more missing unit test changes
* Let's just blast the value. Also documenting the workflow here
* Remove unnecessary validations
Co-authored-by: Jordan Liggitt <jordan@liggitt.net>
Co-authored-by: Abhishek Raut <rauta@vmware.com>
Co-authored-by: Ricardo Pchevuzinske Katz <ricardo.katz@gmail.com>
Add support to the endpoint slice mirroring controller to mirror
annotations, in addition to labels, but don't mirror the endpoint
triggertime annotation.
Also, fix a bug in the endpointslice mirroring controller, which
wasn't updating the mirrored slice with the new labels when only
the endpoint labels were modified.
Defaults and validation are such that the field has to be set when
the feature is enabled, just as for the other boolean fields. This
was missing in some tests, which was okay as long as they ran
with the feature disabled. Once it gets enabled, validation will
flag the missing field as an error.
Other tests didn't run at all.
Updates comment on building dependencies step in the local node test
runner to reflect the binaries that are actually produced.
Signed-off-by: hasheddan <georgedanielmangum@gmail.com>
* Removes discovery v1alpha1 API
* Replaces per Endpoint Topology with a read only DeprecatedTopology
in GA API
* Adds per Endpoint Zone field in GA API
This is to consume the changes for binding the udp listeners of netexec
to specific addresses.
Signed-off-by: Federico Paolinelli <fpaoline@redhat.com>
The current udp implementation listens on any address for tcp and udp.
There are some cases where it makes sense to listen on specific addresses
(especially udp, see https://github.com/kubernetes/kubernetes/issues/95565).
This is because UDP is connectionless, and in order for conntrack to
work, the application must ensure that the src of the reply is the same
as the dest of the request. The easiest way to do that is to bind
explicitly to an ip.
Here we pass an optional parameter that contains a comma separated list
of addresses.
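A rough sketch of the parameter handling (the function name and flag
shape are placeholders, not the actual netexec code):

    import (
        "log"
        "net"
        "strings"
    )

    // bindUDP opens one UDP listener per configured address, so replies
    // leave with the same source ip that the request targeted.
    func bindUDP(addrs, port string) []net.PacketConn {
        var conns []net.PacketConn
        for _, ip := range strings.Split(addrs, ",") {
            c, err := net.ListenPacket("udp", net.JoinHostPort(ip, port))
            if err != nil {
                log.Fatalf("listen on %s: %v", ip, err)
            }
            conns = append(conns, c)
        }
        return conns
    }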
Signed-off-by: Federico Paolinelli <fpaoline@redhat.com>
- Test was failing due to using `sleep infinity` inside the busybox
  container, which was going into a crash loop. `sleep infinity` isn't
  supported by the sleep version in busybox, so replace it with a
  while-true sleep loop.
- Replace usage of dbus message emitting from gdbus to dbus-send. The
test was failing on ubuntu which doesn't have gdbus installed.
dbus-send is installed on COS and Ubuntu, so use it instead.
- Replace check of pod phase with the test util function `PodRunningReady`
which checks both phase as well as pod ready condition.
- Add some more verbose logging to ease future debugging.
We've dropped the content-type field since it is effectively unbounded
(we actually had a sec-vuln about this before). We retain all other
fields despite their unboundedness, because we can now explicitly
set bounds on label values.
Change-Id: Icc483fc6a17ea6382928f4448643cda6f3e21adb
The `apparmor_parser` binary is not really required for a system to run
AppArmor from a Kubernetes perspective. How to apply the profile is more
in the responsibility of lower level runtimes like CRI-O and containerd,
which may do the binary check on their own.
This synchronizes the current libcontainer implementation with the
vendored Kubernetes source code and allows distributions to use
AppArmor, even when they do not have the parser available in
`/sbin/apparmor_parser`.
Signed-off-by: Sascha Grunert <mail@saschagrunert.de>
The server service monitors the kubelet service and restarts it
once the service is down. To avoid double restarting of the kubelet,
we stop the kubelet service and wait until the kubelet has been
restarted and the node is ready.
Signed-off-by: Artyom Lukianov <alukiano@redhat.com>
e2e test validates the following service status endpoints
- patchCoreV1NamespacedServiceStatus
- readCoreV1NamespacedServiceStatus
- replaceCoreV1NamespacedServiceStatus
Also includes untested service endpoint
- patchCoreV1NamespacedService
The same SHA cannot be pushed twice to the staging registry. Because some images were
mirrored, their SHAs remained unchanged. This change addresses that issue.
Sharing the same connection for multiple streams should have worked,
but ran into unexpected timeouts:
I0227 08:07:49.754263 80029 portproxy.go:109] container "mock" in pod csi-mock-volumes-4037-2061/csi-mockplugin-0 is running
E0227 08:07:49.779359 80029 portproxy.go:178] prepare forwarding csi-mock-volumes-4037-2061/csi-mockplugin-0:9000: dialer failed: unable to upgrade connection: pod not found ("csi-mockplugin-0_csi-mock-volumes-4037-2061")
I0227 08:07:50.782705 80029 portproxy.go:109] container "mock" in pod csi-mock-volumes-4037-2061/csi-mockplugin-0 is running
I0227 08:07:50.809326 80029 portproxy.go:125] port forwarding for csi-mock-volumes-4037-2061/csi-mockplugin-0:9000: starting connection polling
I0227 08:07:50.909544 80029 portproxy.go:148] port forwarding for csi-mock-volumes-4037-2061/csi-mockplugin-0:9000: trying to create a new connection #0, 0 open
I0227 08:07:50.912436 80029 portproxy.go:155] port forwarding for csi-mock-volumes-4037-2061/csi-mockplugin-0:9000: created a new connection #0
I0227 08:07:50.912503 80029 portproxy.go:286] forward listener for csi-mock-volumes-4037-2061/csi-mockplugin-0:9000: got a new connection #0
I0227 08:07:50.913161 80029 portproxy.go:322] forward connection #0 for csi-mock-volumes-4037-2061/csi-mockplugin-0:9000: remote side closed the stream
E0227 08:07:50.913324 80029 portproxy.go:242] port forwarding for csi-mock-volumes-4037-2061/csi-mockplugin-0:9000: an error occurred connecting to the remote port: error forwarding port 9000 to pod 66662ea1ab30b4193dac0102c49be840971d337c802cc0c8bbc074214522bd13, uid : failed to execute portforward in network namespace "/var/run/netns/cni-c15e4e36-dad9-8316-c301-33af9dad5717": failed to dial 9000: dial tcp4 127.0.0.1:9000: connect: connection refused
I0227 08:07:50.913371 80029 portproxy.go:340] forward connection #0 for csi-mock-volumes-4037-2061/csi-mockplugin-0:9000: closing our side
W0227 08:07:50.913487 80029 server.go:669] grpc: Server.Serve failed to create ServerTransport: connection error: desc = "transport: http2Server.HandleStreams failed to receive the preface from client: EOF"
I0227 08:07:51.009519 80029 portproxy.go:148] port forwarding for csi-mock-volumes-4037-2061/csi-mockplugin-0:9000: trying to create a new connection #1, 0 open
I0227 08:07:51.011912 80029 portproxy.go:155] port forwarding for csi-mock-volumes-4037-2061/csi-mockplugin-0:9000: created a new connection #1
I0227 08:07:51.011973 80029 portproxy.go:286] forward listener for csi-mock-volumes-4037-2061/csi-mockplugin-0:9000: got a new connection #1
I0227 08:07:51.013677 80029 portproxy.go:322] forward connection #1 for csi-mock-volumes-4037-2061/csi-mockplugin-0:9000: remote side closed the stream
I0227 08:07:51.013720 80029 portproxy.go:340] forward connection #1 for csi-mock-volumes-4037-2061/csi-mockplugin-0:9000: closing our side
W0227 08:07:51.013794 80029 server.go:669] grpc: Server.Serve failed to create ServerTransport: connection error: desc = "transport: http2Server.HandleStreams failed to receive the preface from client: EOF"
E0227 08:07:51.017026 80029 portproxy.go:242] port forwarding for csi-mock-volumes-4037-2061/csi-mockplugin-0:9000: an error occurred connecting to the remote port: error forwarding port 9000 to pod 66662ea1ab30b4193dac0102c49be840971d337c802cc0c8bbc074214522bd13, uid : failed to execute portforward in network namespace "/var/run/netns/cni-c15e4e36-dad9-8316-c301-33af9dad5717": failed to dial 9000: dial tcp4 127.0.0.1:9000: connect: connection refused
I0227 08:07:51.109515 80029 portproxy.go:148] port forwarding for csi-mock-volumes-4037-2061/csi-mockplugin-0:9000: trying to create a new connection #2, 0 open
I0227 08:07:51.111479 80029 portproxy.go:155] port forwarding for csi-mock-volumes-4037-2061/csi-mockplugin-0:9000: created a new connection #2
I0227 08:07:51.111519 80029 portproxy.go:286] forward listener for csi-mock-volumes-4037-2061/csi-mockplugin-0:9000: got a new connection #2
I0227 08:07:51.209519 80029 portproxy.go:148] port forwarding for csi-mock-volumes-4037-2061/csi-mockplugin-0:9000: trying to create a new connection #3, 1 open
I0227 08:07:51.766305 80029 csi.go:377] gRPC call: {"Method":"/csi.v1.Identity/Probe","Request":{},"Response":{"ready":{"value":true}},"Error":"","FullError":null}
I0227 08:07:51.768304 80029 csi.go:377] gRPC call: {"Method":"/csi.v1.Identity/GetPluginInfo","Request":{},"Response":{"name":"csi-mock-csi-mock-volumes-4037","vendor_version":"0.3.0","manifest":{"url":"https://k8s.io/kubernetes/test/e2e/storage/drivers/csi-test/mock"}},"Error":"","FullError":null}
I0227 08:07:51.770494 80029 csi.go:377] gRPC call: {"Method":"/csi.v1.Identity/GetPluginCapabilities","Request":{},"Response":{"capabilities":[{"Type":{"Service":{"type":1}}},{"Type":{"VolumeExpansion":{"type":1}}},{"Type":{"Service":{"type":2}}}]},"Error":"","FullError":null}
I0227 08:07:51.772899 80029 csi.go:377] gRPC call: {"Method":"/csi.v1.Controller/ControllerGetCapabilities","Request":{},"Response":{"capabilities":[{"Type":{"Rpc":{"type":1}}},{"Type":{"Rpc":{"type":3}}},{"Type":{"Rpc":{"type":10}}},{"Type":{"Rpc":{"type":4}}},{"Type":{"Rpc":{"type":6}}},{"Type":{"Rpc":{"type":5}}},{"Type":{"Rpc":{"type":8}}},{"Type":{"Rpc":{"type":7}}},{"Type":{"Rpc":{"type":12}}},{"Type":{"Rpc":{"type":11}}},{"Type":{"Rpc":{"type":9}}}]},"Error":"","FullError":null}
I0227 08:08:21.209901 80029 portproxy.go:151] port forwarding for csi-mock-volumes-4037-2061/csi-mockplugin-0:9000: no connection: error creating error stream: Timeout occurred
I0227 08:08:21.209980 80029 portproxy.go:148] port forwarding for csi-mock-volumes-4037-2061/csi-mockplugin-0:9000: trying to create a new connection #3, 1 open
I0227 08:08:51.211522 80029 portproxy.go:151] port forwarding for csi-mock-volumes-4037-2061/csi-mockplugin-0:9000: no connection: error creating data stream: Timeout occurred
I0227 08:08:51.211566 80029 portproxy.go:148] port forwarding for csi-mock-volumes-4037-2061/csi-mockplugin-0:9000: trying to create a new connection #3, 1 open
I0227 08:08:51.213451 80029 portproxy.go:155] port forwarding for csi-mock-volumes-4037-2061/csi-mockplugin-0:9000: created a new connection #3
I0227 08:08:51.213498 80029 portproxy.go:286] forward listener for csi-mock-volumes-4037-2061/csi-mockplugin-0:9000: got a new connection #3
I0227 08:08:51.309540 80029 portproxy.go:148] port forwarding for csi-mock-volumes-4037-2061/csi-mockplugin-0:9000: trying to create a new connection #4, 2 open
I0227 08:08:52.215358 80029 portproxy.go:322] forward connection #3 for csi-mock-volumes-4037-2061/csi-mockplugin-0:9000: remote side closed the stream
I0227 08:08:52.215475 80029 portproxy.go:340] forward connection #3 for csi-mock-volumes-4037-2061/csi-mockplugin-0:9000: closing our side
I0227 08:09:21.310003 80029 portproxy.go:151] port forwarding for csi-mock-volumes-4037-2061/csi-mockplugin-0:9000: no connection: error creating error stream: Timeout occurred
I0227 08:09:21.310086 80029 portproxy.go:148] port forwarding for csi-mock-volumes-4037-2061/csi-mockplugin-0:9000: trying to create a new connection #4, 1 open
I0227 08:09:51.311854 80029 portproxy.go:151] port forwarding for csi-mock-volumes-4037-2061/csi-mockplugin-0:9000: no connection: error creating data stream: Timeout occurred
I0227 08:09:51.311908 80029 portproxy.go:148] port forwarding for csi-mock-volumes-4037-2061/csi-mockplugin-0:9000: trying to create a new connection #4, 1 open
I0227 08:09:51.314415 80029 portproxy.go:155] port forwarding for csi-mock-volumes-4037-2061/csi-mockplugin-0:9000: created a new connection #4
I0227 08:09:51.314497 80029 portproxy.go:286] forward listener for csi-mock-volumes-4037-2061/csi-mockplugin-0:9000: got a new connection #4
I0227 08:09:51.409527 80029 portproxy.go:148] port forwarding for csi-mock-volumes-4037-2061/csi-mockplugin-0:9000: trying to create a new connection #5, 2 open
I0227 08:09:52.326203 80029 portproxy.go:322] forward connection #4 for csi-mock-volumes-4037-2061/csi-mockplugin-0:9000: remote side closed the stream
I0227 08:09:52.326277 80029 portproxy.go:340] forward connection #4 for csi-mock-volumes-4037-2061/csi-mockplugin-0:9000: closing our side
I0227 08:10:21.409892 80029 portproxy.go:151] port forwarding for csi-mock-volumes-4037-2061/csi-mockplugin-0:9000: no connection: error creating error stream: Timeout occurred
I0227 08:10:21.409954 80029 portproxy.go:148] port forwarding for csi-mock-volumes-4037-2061/csi-mockplugin-0:9000: trying to create a new connection #5, 1 open
I0227 08:10:51.411455 80029 portproxy.go:151] port forwarding for csi-mock-volumes-4037-2061/csi-mockplugin-0:9000: no connection: error creating data stream: Timeout occurred
I0227 08:10:51.411557 80029 portproxy.go:148] port forwarding for csi-mock-volumes-4037-2061/csi-mockplugin-0:9000: trying to create a new connection #5, 1 open
I0227 08:10:51.413229 80029 portproxy.go:155] port forwarding for csi-mock-volumes-4037-2061/csi-mockplugin-0:9000: created a new connection #5
I0227 08:10:51.413274 80029 portproxy.go:286] forward listener for csi-mock-volumes-4037-2061/csi-mockplugin-0:9000: got a new connection #5
I0227 08:10:51.509508 80029 portproxy.go:148] port forwarding for csi-mock-volumes-4037-2061/csi-mockplugin-0:9000: trying to create a new connection #6, 2 open
I0227 08:10:52.414862 80029 portproxy.go:322] forward connection #5 for csi-mock-volumes-4037-2061/csi-mockplugin-0:9000: remote side closed the stream
I0227 08:10:52.414931 80029 portproxy.go:340] forward connection #5 for csi-mock-volumes-4037-2061/csi-mockplugin-0:9000: closing our side
I0227 08:11:21.509879 80029 portproxy.go:151] port forwarding for csi-mock-volumes-4037-2061/csi-mockplugin-0:9000: no connection: error creating error stream: Timeout occurred
I0227 08:11:21.509934 80029 portproxy.go:148] port forwarding for csi-mock-volumes-4037-2061/csi-mockplugin-0:9000: trying to create a new connection #6, 1 open
I0227 08:11:51.511519 80029 portproxy.go:151] port forwarding for csi-mock-volumes-4037-2061/csi-mockplugin-0:9000: no connection: error creating data stream: Timeout occurred
I0227 08:11:51.511568 80029 portproxy.go:148] port forwarding for csi-mock-volumes-4037-2061/csi-mockplugin-0:9000: trying to create a new connection #6, 1 open
I0227 08:11:51.513519 80029 portproxy.go:155] port forwarding for csi-mock-volumes-4037-2061/csi-mockplugin-0:9000: created a new connection #6
I0227 08:11:51.513571 80029 portproxy.go:286] forward listener for csi-mock-volumes-4037-2061/csi-mockplugin-0:9000: got a new connection #6
I0227 08:11:51.609504 80029 portproxy.go:148] port forwarding for csi-mock-volumes-4037-2061/csi-mockplugin-0:9000: trying to create a new connection #7, 2 open
I0227 08:11:52.517799 80029 portproxy.go:322] forward connection #6 for csi-mock-volumes-4037-2061/csi-mockplugin-0:9000: remote side closed the stream
I0227 08:11:52.517918 80029 portproxy.go:340] forward connection #6 for csi-mock-volumes-4037-2061/csi-mockplugin-0:9000: closing our side
I0227 08:12:21.609856 80029 portproxy.go:151] port forwarding for csi-mock-volumes-4037-2061/csi-mockplugin-0:9000: no connection: error creating error stream: Timeout occurred
I0227 08:12:21.609909 80029 portproxy.go:148] port forwarding for csi-mock-volumes-4037-2061/csi-mockplugin-0:9000: trying to create a new connection #7, 1 open
I0227 08:12:51.611494 80029 portproxy.go:151] port forwarding for csi-mock-volumes-4037-2061/csi-mockplugin-0:9000: no connection: error creating data stream: Timeout occurred
I0227 08:12:51.611555 80029 portproxy.go:148] port forwarding for csi-mock-volumes-4037-2061/csi-mockplugin-0:9000: trying to create a new connection #7, 1 open
I0227 08:12:51.613289 80029 portproxy.go:155] port forwarding for csi-mock-volumes-4037-2061/csi-mockplugin-0:9000: created a new connection #7
I0227 08:12:51.613343 80029 portproxy.go:286] forward listener for csi-mock-volumes-4037-2061/csi-mockplugin-0:9000: got a new connection #7
I0227 08:12:51.709535 80029 portproxy.go:148] port forwarding for csi-mock-volumes-4037-2061/csi-mockplugin-0:9000: trying to create a new connection #8, 2 open
I0227 08:12:52.615858 80029 portproxy.go:322] forward connection #7 for csi-mock-volumes-4037-2061/csi-mockplugin-0:9000: remote side closed the stream
I0227 08:12:52.615989 80029 portproxy.go:340] forward connection #7 for csi-mock-volumes-4037-2061/csi-mockplugin-0:9000: closing our side
W0227 08:12:52.616116 80029 server.go:669] grpc: Server.Serve failed to create ServerTransport: connection error: desc = "transport: http2Server.HandleStreams failed to receive the preface from client: EOF"
I0227 08:13:21.709934 80029 portproxy.go:151] port forwarding for csi-mock-volumes-4037-2061/csi-mockplugin-0:9000: no connection: error creating error stream: Timeout occurred
I0227 08:13:21.709997 80029 portproxy.go:148] port forwarding for csi-mock-volumes-4037-2061/csi-mockplugin-0:9000: trying to create a new connection #8, 1 open
Feb 27 08:13:30.916: FAIL: Failed to register CSIDriver csi-mock-csi-mock-volumes-4037
Unexpected error:
<*errors.errorString | 0xc002666220>: {
s: "error waiting for CSI driver csi-mock-csi-mock-volumes-4037 registration on node kind-worker2: timed out waiting for the condition",
}
error waiting for CSI driver csi-mock-csi-mock-volumes-4037 registration on node kind-worker2: timed out waiting for the condition
occurred
Instead of trying to use the client-go portforward package as-is, it is
simpler to copy some code from it and then use the http stream
directly. That way we don't need to go through a local listening
socket, and error handling and logging become simpler.
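The approach looks roughly like this (a hedged sketch built on
client-go's spdy helpers; error handling trimmed):

    import (
        "fmt"
        "net/http"
        "net/url"

        v1 "k8s.io/api/core/v1"
        "k8s.io/apimachinery/pkg/util/httpstream"
        "k8s.io/client-go/rest"
        "k8s.io/client-go/tools/portforward"
        "k8s.io/client-go/transport/spdy"
    )

    func dialStreams(config *rest.Config, u *url.URL, port int) (errStream, dataStream httpstream.Stream, err error) {
        transport, upgrader, err := spdy.RoundTripperFor(config)
        if err != nil {
            return nil, nil, err
        }
        dialer := spdy.NewDialer(upgrader, &http.Client{Transport: transport}, "POST", u)
        conn, _, err := dialer.Dial(portforward.PortForwardProtocolV1Name)
        if err != nil {
            return nil, nil, err
        }
        // Create the error and data streams directly, with no local
        // listening socket in between.
        headers := http.Header{}
        headers.Set(v1.StreamType, v1.StreamTypeError)
        headers.Set(v1.PortHeader, fmt.Sprintf("%d", port))
        headers.Set(v1.PortForwardRequestIDHeader, "0")
        if errStream, err = conn.CreateStream(headers); err != nil {
            return nil, nil, err
        }
        headers.Set(v1.StreamType, v1.StreamTypeData)
        dataStream, err = conn.CreateStream(headers)
        return errStream, dataStream, err
    }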
This replaces embedding of JavaScript code into the mock driver that
runs inside the cluster with Go callbacks which run inside the
e2e.test suite itself. In contrast to the JavaScript hooks, they have
direct access to all parameters and can fabricate arbitrary responses,
not just error codes.
Because the callbacks run in the same process as the test itself, it
is possible to set up two-way communication via shared variables or
channels. This opens the door for writing better tests. Some of the
existing tests that poll mock driver output could be simplified, but
that can be addressed later.
For now, only tests using hooks use embedding. How gRPC calls are
retrieved is abstracted behind the CSIMockTestDriver interface, so
tests don't need to be modified when switching between embedding
and remote mock driver.
The function must modify the content of the "creds" pointer, not the
pointer itself.
Found via hack/verify-staticcheck.sh after importing the code into
Kubernetes. It is uncertain whether this bug had any consequences.
Caught by verify-typecheck.sh after importing the code into
Kubernetes:
ERROR(linux/arm): /home/prow/go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/drivers/csi-test/mock/service/controller.go:404:20: math.MaxUint32 (untyped int constant 4294967295) overflows int
ERROR(linux/arm): /home/prow/go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/drivers/csi-test/mock/service/controller.go:795:20: math.MaxUint32 (untyped int constant 4294967295) overflows int
ERROR(linux/386): /home/prow/go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/drivers/csi-test/mock/service/controller.go:404:20: math.MaxUint32 (untyped int constant 4294967295) overflows int
ERROR(linux/386): /home/prow/go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/drivers/csi-test/mock/service/controller.go:795:20: math.MaxUint32 (untyped int constant 4294967295) overflows int
ERROR(windows/386): /home/prow/go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/drivers/csi-test/mock/service/controller.go:404:20: math.MaxUint32 (untyped int constant 4294967295) overflows int
ERROR(windows/386): /home/prow/go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/drivers/csi-test/mock/service/controller.go:795:20: math.MaxUint32 (untyped int constant 4294967295) overflows int
Instead of producing our own error message, we can show the original
value and the error from strconv.
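For example, a sketch of the pattern (parseCapacity is a hypothetical
call site):

    func parseCapacity(s string) (uint32, error) {
        v, err := strconv.ParseUint(s, 10, 32)
        if err != nil {
            // Surface the original value and strconv's own error instead
            // of a hand-written message.
            return 0, fmt.Errorf("invalid capacity %q: %w", s, err)
        }
        return uint32(v), nil
    }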
This is a verbatim copy of the corresponding files in csi-test v4.0.2.
They'll be modified in future commits to make the code usable when
embedded in e2e.test. Some of those changes may be worthwhile
backporting to csi-test, but this is uncertain at this time.
We don't need much concurrency, and having too many worker threads has
one disadvantage (besides resource usage): when the sidecar loses the
connection to the CSI driver, it calls klog.Fatal, which prints all
goroutines. This can lead to a lot of output.
If MaxSurge is set, the controller will attempt to double up nodes
up to the allowed limit with a new pod, and then when the most recent
(by hash) pod is ready, trigger deletion on the old pod. If the old
pod goes unready before the new pod is ready, the old pod is immediately
deleted. If an old pod goes unready before a new pod is placed on that
node, a new pod is immediately added for that node even past the MaxSurge
limit.
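Worked example (assumed sequence) with MaxSurge=1 on a single node:

    1. old pod running             -> new pod created alongside it (surge)
    2. new pod becomes ready       -> deletion of the old pod is triggered
    2b. old pod goes unready first -> old pod deleted immediately; a new
        pod may be added even past the MaxSurge limit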
The backoff clock is used consistently throughout the daemonset controller
as an injectable clock for the purposes of testing.