Looking deeper into the logs, there are a lot of errors like:
`script exited with error 1`
The initial reaction was that there was a problem with the download, but it
looks like the script we use to register the qemu emulators may be at
fault; let's try this alternate mechanism.
Signed-off-by: Davanum Srinivas <davanum@gmail.com>
The conformance test for ServiceAccountIssuerDiscovery is currently
configured with --in-cluster-discovery, which only supports token
validation against in-cluster endpoints. Many cloud providers provide
their own external endpoints for OIDC discovery. Because the iss claim
in tokens will point to these endpoints, but the client in this test
only trusts the Cluster CA, it will fail to connect to the external
discovery endpoints when validating the token.
To ensure that the conformance test at least supports the scenario where
both the discovery doc endpoint and the JWKS endpoint are cluster-local and
the scenario where both endpoints are cluster-external, this PR has the
test try both and requires at least one to pass.
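Roughly, the new check behaves like the following Go sketch (the helper and
function names here are hypothetical, not the actual test code):

```go
package main

import (
	"errors"
	"fmt"
)

// validateIssuerDiscovery is a hypothetical helper: it runs the in-cluster
// discovery check first, then the external one, and passes if either succeeds.
func validateIssuerDiscovery(inCluster, external func() error) error {
	inClusterErr := inCluster()
	if inClusterErr == nil {
		return nil
	}
	externalErr := external()
	if externalErr == nil {
		return nil
	}
	return fmt.Errorf("in-cluster discovery failed (%v) and external discovery failed (%v)",
		inClusterErr, externalErr)
}

func main() {
	// Simulate the "both external" configuration: in-cluster fails, external works.
	err := validateIssuerDiscovery(
		func() error { return errors.New("x509: certificate signed by unknown authority") },
		func() error { return nil },
	)
	fmt.Println("test passed:", err == nil)
}
```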
Caveat: The test still won't support a configuration where one
endpoint is cluster-local and the other is cluster-external. We don't
yet have evidence that this configuration is used in practice, so this
initial hotfix at least fixes the conformance test for the "both
external" configuration we know providers already use. Note that if one
endpoint is cluster-local and the other is cluster-external, tokens can
still only be validated in-cluster, because both endpoints must be
accessible to Relying Parties that validate tokens.
This is to consume the changes that allow binding the UDP listeners of
netexec to specific addresses.
Signed-off-by: Federico Paolinelli <fpaoline@redhat.com>
The current implementation listens on any address for tcp, udp and sctp. There
are some cases where it makes sense to listen on specific addresses
(especially udp, see https://github.com/kubernetes/kubernetes/issues/95565).
This is because UDP is connectionless, and in order for conntrack to
work, the application must ensure that the source of the reply is the same
as the destination of the request. The easiest way to do that is to bind
explicitly to an IP.
Here we pass an optional parameter that contains a comma-separated list
of addresses.
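A minimal Go sketch of what this can look like (the flag name and helper are
illustrative assumptions, not necessarily the exact netexec code):

```go
package main

import (
	"flag"
	"log"
	"net"
	"strconv"
	"strings"
)

// The flag name is an illustrative assumption.
var udpListenAddresses = flag.String("udp-listen-addresses", "",
	"comma separated list of addresses to bind the UDP listeners to")

// startUDPListeners binds one UDP listener per requested address, so that
// replies are sourced from the same address the request was sent to, which
// is what conntrack needs in order to match them.
func startUDPListeners(port int) []*net.UDPConn {
	var conns []*net.UDPConn
	for _, addr := range strings.Split(*udpListenAddresses, ",") {
		laddr, err := net.ResolveUDPAddr("udp", net.JoinHostPort(addr, strconv.Itoa(port)))
		if err != nil {
			log.Fatalf("failed to resolve %q: %v", addr, err)
		}
		conn, err := net.ListenUDP("udp", laddr)
		if err != nil {
			log.Fatalf("failed to listen on %v: %v", laddr, err)
		}
		conns = append(conns, conn)
	}
	return conns
}

func main() {
	flag.Parse()
	for _, conn := range startUDPListeners(8081) {
		log.Printf("listening on %v", conn.LocalAddr())
	}
}
```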
Signed-off-by: Federico Paolinelli <fpaoline@redhat.com>
We cannot have any RUN commands in the Windows stage when using docker buildx,
which is why we were using the busybox-helper image. The purpose of that image
was to contain a few things that we would otherwise obtain by running a few commands:
- creating symlinks for the busybox binary
- running vcredist_x64.exe, which would also give us the vcruntime140.dll that is
necessary for dig or httpd.
There are alternatives to the commands above that can be achieved in a Linux stage
as well:
- we can create the symlinks in a Linux stage with ln -s. Copying them over to
Windows will allow them to work just as well as if they were being copied over
from a Windows image. The 'Files\' prefix issue with the symlink target still persists.
- we can download vcruntime140.dll directly, allowing us to skip the vcredist_x64.exe
installation.
Many README files and other docs contained a link to an appspot
tracking app that is no longer active. Following the links leads to an
error about Go 1.9 no longer being supported. Go 1.9 support was dropped
in appspot in 2019 and disabled in June 2020.
This also resulted in a broken image link displaying when viewing these
files on GitHub. Since the app is no longer functioning, and since it
causes a potentially confusing (though admittedly minor) error to
display, this just removes those links, as I don't believe they are
needed anymore.
Signed-off-by: Sean McGinnis <sean.mcginnis@gmail.com>
Provides a response that includes a body and a method. This response
will enable a client (an e2e test) to confirm that a proxy did not alter
the HTTP method.
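A minimal sketch of such a handler (the endpoint path and JSON response shape
are assumptions for illustration):

```go
package main

import (
	"encoding/json"
	"io"
	"log"
	"net/http"
)

// echoHandler returns the request method and body to the caller, so the
// e2e test can verify that a proxy in the middle did not alter either.
func echoHandler(w http.ResponseWriter, r *http.Request) {
	body, _ := io.ReadAll(r.Body)
	json.NewEncoder(w).Encode(map[string]string{
		"method": r.Method,
		"body":   string(body),
	})
}

func main() {
	http.HandleFunc("/echo", echoHandler)
	log.Fatal(http.ListenAndServe(":8080", nil))
}
```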
Currently, some of the E2E test images have Windows support, and one of the goals is for most of
them to have it. For that, the Image Builder currently builds those Windows container images using
a few Windows Server nodes (for 1809, 1903, 1909) with Remote Docker enabled, which are hosted on
an Azure subscription dedicated to CNCF.
With this, the Windows nodes dependency is removed entirely, as the images can also be built with
docker buildx. One additional benefit is that adding new supported Windows OS versions to the E2E
test images manifest lists becomes a lot easier (we wouldn't have to create a new Windows Server
node that matches that new OS version, assign a DNS name, update certificates, etc.), and it also
becomes easier for other people to build their own E2E Windows test images.
However, some dependencies still have to be built on a Windows machine. To solve this, we can
just pull helper images: e2eteam/powershell-helper:6.2.7 and e2eteam/busybox-helper:1.29.0. Their
Dockerfiles and a Makefile for them have been included in this commit. If any change is required
to them, then a new image will be built and tagged under a different version, but they are pretty
straightforward and shouldn't require changes.
However, there is a small concern when it comes to build time: Windows servercore images are
very large (for example, mcr.microsoft.com/windows/servercore:ltsc2019 is 4.99 GB uncompressed and
about 2 GB compressed - those images are already cached on the Windows Server builder nodes, so
this isn't an issue there), and we currently support 1809, 1903, and 1909 (soon to add 2004).
This can lead to build times that are too long.
We have changed the base image to nanoserver (uncompressed size: 250 MB), but some images still
require some DLLs or other dependencies that can be fetched from a servercore image.
A separate job has been defined that builds a scratch windows-servercore-cache image monthly,
and then we can just get those dependencies from this cache, which will be very small.
This is preferable, as the Windows images are updated periodically, and those dependencies
could be updated as well.
The 'agnhost' image uses a hardcoded 'cluster.local' value for the DNS domain.
This leads to failures in a bunch of HPA tests when the test cluster is
configured to use a custom DNS domain and there is no alias for the
default 'cluster.local' one.
So, fix it by reusing its own function for reading DNS domain suffixes.
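For reference, on Linux the suffixes can be read from /etc/resolv.conf; the
helper below is a hypothetical sketch of that, not the exact agnhost function:

```go
package main

import (
	"bufio"
	"fmt"
	"os"
	"strings"
)

// getDNSSuffixList is a hypothetical helper: it returns the search domains
// from /etc/resolv.conf instead of assuming "cluster.local".
func getDNSSuffixList() ([]string, error) {
	f, err := os.Open("/etc/resolv.conf")
	if err != nil {
		return nil, err
	}
	defer f.Close()

	scanner := bufio.NewScanner(f)
	for scanner.Scan() {
		fields := strings.Fields(scanner.Text())
		// e.g. "search default.svc.example.local svc.example.local example.local"
		if len(fields) > 1 && fields[0] == "search" {
			return fields[1:], nil
		}
	}
	if err := scanner.Err(); err != nil {
		return nil, err
	}
	return nil, fmt.Errorf("no search line found in /etc/resolv.conf")
}

func main() {
	suffixes, err := getDNSSuffixList()
	if err != nil {
		fmt.Println("error:", err)
		return
	}
	fmt.Println("DNS suffixes:", suffixes)
}
```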
Signed-off-by: Valerii Ponomarov <kiparis.kh@gmail.com>
Using Windows nanoserver container images as a base instead of the current
Windows servercore image will reduce the image size by about 10x.
However, the nanoserver image lacks several things we need:
- netapi32.dll
- powershell
- certain powershell commands
- chocolatey cannot be used
When building the nanoserver images, we are going to use a Windows servercore helper image,
in which we install the necessary dependencies and then copy them over to our nanoserver
image, including the necessary DLLs.
Other notable changes include:
- switching from wget to curl (wget was a PowerShell alias).
- implementing retrieval of the DNS suffix list and DNS server list in code (see the sketch
below).
- reimplementing how mounttest gets file permissions.
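As a rough illustration of the DNS suffix list change, here is a Go sketch that
reads the search list from the standard TCP/IP parameters registry key; the
registry-based approach and helper name are assumptions, not necessarily the
exact agnhost code:

```go
//go:build windows

package main

import (
	"fmt"
	"strings"

	"golang.org/x/sys/windows/registry"
)

// getDNSSuffixList is a rough sketch: read the DNS search list from the
// TCP/IP parameters registry key instead of shelling out to PowerShell
// (which nanoserver does not have).
func getDNSSuffixList() ([]string, error) {
	k, err := registry.OpenKey(registry.LOCAL_MACHINE,
		`SYSTEM\CurrentControlSet\Services\Tcpip\Parameters`, registry.QUERY_VALUE)
	if err != nil {
		return nil, err
	}
	defer k.Close()

	// SearchList holds a comma separated list of DNS suffixes.
	searchList, _, err := k.GetStringValue("SearchList")
	if err != nil {
		return nil, err
	}
	return strings.Split(searchList, ","), nil
}

func main() {
	suffixes, err := getDNSSuffixList()
	if err != nil {
		fmt.Println("error:", err)
		return
	}
	fmt.Println("DNS suffixes:", suffixes)
}
```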
There's currently no way to know whether an error is for SCTP or
UDP, for example:
Jul 24 09:55:54.469: INFO: netserver-0[e2e-nettest-3476].container[webserver].log
2020/07/24 09:53:52 Started UDP server
2020/07/24 09:53:52 Error occurred. error:protocol not supported
In this case, the "Error occurred. error:protocol not supported" is
actually for the SCTP socket. Make that more apparent.
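A minimal sketch of the kind of change (illustrative, not the exact netexec
code): pass the protocol name along when logging listener errors.

```go
package main

import (
	"errors"
	"log"
)

// logProtocolError is a hypothetical helper: every listener reports its
// protocol name, so a "protocol not supported" error can be attributed to
// the SCTP socket rather than being mistaken for a UDP failure.
func logProtocolError(protocol string, err error) {
	log.Printf("%s error occurred. error: %v", protocol, err)
}

func main() {
	// Example: the SCTP listener failing on a node without SCTP support,
	// right after the UDP server started successfully.
	log.Print("Started UDP server")
	logProtocolError("SCTP", errors.New("protocol not supported"))
}
```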