The e2e test validates the following three extra endpoints:
- readApiregistrationV1APIServiceStatus
- patchApiregistrationV1APIService
- listApiregistrationV1APIService
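For illustration, a hedged sketch of how a test could hit these three endpoints with the kube-aggregator generated clientset; the function name, APIService name, patch body, and the context-taking call signatures (client-go 1.18+) are assumptions, not the actual test code:

```go
package aggregatorsketch

import (
	"context"

	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/apimachinery/pkg/types"
	"k8s.io/client-go/rest"
	aggregatorclient "k8s.io/kube-aggregator/pkg/client/clientset_generated/clientset"
)

// exerciseAPIServiceEndpoints is a sketch that touches the three extra endpoints.
func exerciseAPIServiceEndpoints(ctx context.Context, config *rest.Config, name string) error {
	client, err := aggregatorclient.NewForConfig(config)
	if err != nil {
		return err
	}
	// listApiregistrationV1APIService
	if _, err := client.ApiregistrationV1().APIServices().List(ctx, metav1.ListOptions{}); err != nil {
		return err
	}
	// patchApiregistrationV1APIService
	patch := []byte(`{"metadata":{"labels":{"e2e":"patched"}}}`)
	if _, err := client.ApiregistrationV1().APIServices().Patch(ctx, name, types.MergePatchType, patch, metav1.PatchOptions{}); err != nil {
		return err
	}
	// readApiregistrationV1APIServiceStatus: a raw GET on the status subresource
	_, err = client.ApiregistrationV1().RESTClient().Get().
		Resource("apiservices").Name(name).SubResource("status").DoRaw(ctx)
	return err
}
```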
I've observed this test occasionally failing due to 403 errors. I think there's something racing within the apiserver with respect to RBAC, and that if this test were more patient it would not flake this way.
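If "more patient" is indeed the fix, a minimal sketch of what that could look like, assuming the poll helper from k8s.io/apimachinery/pkg/util/wait, apierrors from k8s.io/apimachinery/pkg/api/errors, and an already-built aggregator clientset; the durations are illustrative:

```go
// Sketch: keep retrying for up to a minute instead of failing on the first
// 403, on the assumption that RBAC propagation inside the apiserver is racy.
err := wait.PollImmediate(2*time.Second, 60*time.Second, func() (bool, error) {
	_, listErr := aggrclient.ApiregistrationV1().APIServices().List(ctx, metav1.ListOptions{})
	if apierrors.IsForbidden(listErr) {
		return false, nil // RBAC not settled yet, keep waiting
	}
	return listErr == nil, listErr
})
if err != nil {
	framework.Failf("APIService list never succeeded: %v", err)
}
```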
We've had a fun case of `Sample API Server using the current
Aggregator` failing because DNS returned a response for localhost that
is not 127.0.0.1 and does not exist on the node, causing etcd to try to
bind to a non-existent address and consequently fail. Trying to spare
others this fun :)
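For the record, one way to make that failure impossible is to point etcd at 127.0.0.1 explicitly rather than relying on DNS for localhost. This is only a sketch using v1 = k8s.io/api/core/v1; the image tag, binary path, and port are illustrative:

```go
// Sketch: an etcd container that binds to 127.0.0.1 explicitly, so a bad DNS
// answer for "localhost" cannot send it to an address missing from the node.
var etcdContainer = v1.Container{
	Name:  "etcd",
	Image: "quay.io/coreos/etcd:v3.3.10",
	Command: []string{
		"/usr/local/bin/etcd", // binary path is illustrative
		"--listen-client-urls=http://127.0.0.1:2379",
		"--advertise-client-urls=http://127.0.0.1:2379",
	},
}
```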
Add the "get" and "watch" verbs to the ClusterRole created
for the sample apiserver. Without this, the test complains about
"Failed to watch..." the resources in question.
Strictly speaking, the "get" verb doesn't seem to be needed, but
this aligns the e2e test with the example at
staging/src/k8s.io/sample-apiserver/artifacts/example/rbac.yaml
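A sketch of the shape of the resulting rule, using rbacv1 = k8s.io/api/rbac/v1; the API group, resource, and ClusterRole name below are placeholders, not the ones the test actually uses:

```go
// Sketch: "get" and "watch" added alongside "list" in the ClusterRole's rule.
var sampleAPIServerReader = &rbacv1.ClusterRole{
	ObjectMeta: metav1.ObjectMeta{Name: "sample-apiserver-reader"}, // placeholder name
	Rules: []rbacv1.PolicyRule{{
		APIGroups: []string{"wardle.example.com"}, // placeholder group
		Resources: []string{"flunders"},           // placeholder resource
		Verbs:     []string{"get", "list", "watch"},
	}},
}
```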
This is part of the transition to using framework/log instead
of the Logf inside the framework package. This will help with
import size/cycles when importing the framework or its subpackages.
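The change itself is mechanical; a sketch of the before/after import and call (the package path and alias reflect the tree at the time of the transition and may have moved since):

```go
package sketch

// Before: callers imported the whole framework package just to log:
//   framework.Logf("...")
//
// After: Logf lives in a small subpackage that can be imported on its own.
import (
	"time"

	e2elog "k8s.io/kubernetes/test/e2e/framework/log"
)

func logStartup(elapsed time.Duration) {
	e2elog.Logf("sample-apiserver became available after %v", elapsed)
}
```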
This is the continuation of the refactoring of framework/deployment_utils.go
into framework/deployment.
Signed-off-by: Jorge Alarcon Ochoa <alarcj137@gmail.com>
Remove usage of the aggregated clientset in the e2e testing framework
itself. We have one test that consumes the clientset in the suite
and it's in test/e2e/apimachinery/aggregator.go, which was recently
promoted to conformance in 8101b86.
This test now obtains a local copy of the aggregated clientset.
The suite still has to compile the internal client in.
One possible solution here is to move this test into a separate suite,
yet it's unclear how to tackle that now that the test has to run as
part of the conformance suite.
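A hedged sketch of what "obtains a local copy" might look like inside the test, assuming framework.LoadConfig() is still the way to get the rest.Config and using the kube-aggregator generated clientset:

```go
// Sketch: build the aggregator clientset locally in the test instead of
// taking it from the shared framework object.
config, err := framework.LoadConfig()
if err != nil {
	framework.Failf("could not load client config: %v", err)
}
aggrclient, err := aggregatorclient.NewForConfig(config)
if err != nil {
	framework.Failf("could not create aggregator clientset: %v", err)
}
// aggrclient is then used wherever the test previously read the clientset
// off the framework.
_ = aggrclient
```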
The image `quay.io/coreos/etcd:v3.3.10` does not have Windows support,
so Windows Containers cannot be spawned from it. This makes the etcd
image's registry configurable, so the tests can point at a registry
that does have Windows support.
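A hypothetical sketch of the kind of knob this adds, assuming the standard flag and fmt packages; the flag name and default are made up, the point is only that the registry half of the image stops being hard-coded:

```go
// Hypothetical: compose the etcd test image from a configurable registry so a
// Windows-capable mirror can be substituted for quay.io/coreos.
var etcdImageRegistry = flag.String("etcd-image-registry", "quay.io/coreos",
	"registry to pull the etcd image used by the e2e tests from")

func etcdImage() string {
	return fmt.Sprintf("%s/etcd:v3.3.10", *etcdImageRegistry)
}
```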
Investigated issue 63622. The test usually passes. When it does, it
seems to take almost 30 seconds for the sample-apiserver to start
returning 2xx rather than 4xx to flunder requests. On the failing runs
I looked at, it was taking almost 45 seconds for the sample-apiserver
to become healthy. I bumped the wait/timeout in the test for this case
to 60 seconds. I also added a log statement to make it easier to track
how long it takes for the sample-apiserver to come up. Once we have a
bit more history I will log a bug for the long startup time.
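A sketch of the shape of that change, assuming wait.Poll from k8s.io/apimachinery/pkg/util/wait, a REST client pointed at the aggregated API, and recent client-go signatures; the flunder path below is a placeholder:

```go
// Sketch: allow up to 60s for flunder requests to stop returning 4xx, and log
// how long it actually took so we can build up history on the startup time.
start := time.Now()
err := wait.Poll(time.Second, 60*time.Second, func() (bool, error) {
	_, reqErr := restClient.Get().AbsPath("/apis/wardle.example.com/v1alpha1/namespaces/default/flunders").DoRaw(ctx)
	// Treat any error as "not ready yet" and keep polling until the timeout.
	return reqErr == nil, nil
})
framework.Logf("sample-apiserver took %v to start serving flunders (err: %v)", time.Since(start), err)
```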
Fixed Go formatting error.