We can indirectly retrieve the kube-cross version from the
`build/build-image/cross/VERSION` file for the sample-apiserver. This allows
us to simplify the handling in `build/dependencies.yaml` as well as
the required approval (via `OWNERS`) if the kube-cross version changes.
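A rough sketch of the indirection (the variable name is illustrative; the file path is the one referenced above):
```
# Read the kube-cross tag from its canonical location instead of
# duplicating it in a second place.
KUBE_CROSS_VERSION="$(cat build/build-image/cross/VERSION)"
echo "building sample-apiserver with kube-cross ${KUBE_CROSS_VERSION}"
```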
Signed-off-by: Sascha Grunert <sgrunert@redhat.com>
- ceph deploy on ARM64 depends on `libec_jerasure_neon.so`, which is not included
in the `ceph-base` package in the fedora26 distro; updated the distro to fedora33 to
fix the issue
```
sh ./mon.sh "$(hostname -i)"
/usr/lib64/ceph/erasure-code/libec_jerasure_neon.so: cannot open shared object file
```
- the default pool `rbd` is not created on arm64; we need to create this pool manually (a sketch of the manual creation follows the error output below).
```
rbd import --image-feature layering block foo
rbd: error opening default pool 'rbd'
```
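A minimal sketch of the manual workaround (the pool name comes from the error above; the placement-group count is an illustrative value):
```
# Create the default "rbd" pool by hand and initialize it for rbd use.
ceph osd pool create rbd 8
rbd pool init rbd
```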
Signed-off-by: Dave Chen <dave.chen@arm.com>
Looking deeper into the logs, there are a lot of errors like:
`script exited with error 1`
The initial reaction was that there was a problem with the download, but it
looks like the script we use to register the qemu emulators may be at
fault; let's try this alternate mechanism.
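For illustration, one such alternate mechanism (the image name here is an assumption, not necessarily the one this change adopts) registers the binfmt handlers from a privileged container:
```
# Register qemu user-mode emulators for all supported architectures.
docker run --rm --privileged multiarch/qemu-user-static --reset -p yes
```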
Signed-off-by: Davanum Srinivas <davanum@gmail.com>
The conformance test for ServiceAccountIssuerDiscovery is currently
configured with --in-cluster-discovery, which only supports token
validation against in-cluster endpoints. Many cloud providers offer
their own external endpoints for OIDC discovery, and because the `iss`
claim in tokens points to these endpoints, but the client in this
test only trusts the cluster CA, the test fails to connect to the external
discovery endpoints when validating the token.
To ensure that the conformance test at least supports the scenario where
both the discovery doc endpoint and the JWKS endpoint are cluster-local and
the scenario where both endpoints are cluster-external, this PR has the
test try both and requires at least one to pass.
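A rough sketch of the "try both" idea (the discovery paths are the standard ones; the issuer variable and the pass/fail wiring are illustrative assumptions):
```
# In-cluster: the API server serves the discovery doc itself, so a client
# trusting only the cluster CA can fetch it through the API server.
kubectl get --raw /.well-known/openid-configuration

# Cluster-external: follow the issuer from the token's `iss` claim
# (e.g. a provider-hosted endpoint), trusting public roots instead.
curl -fsSL "${ISSUER}/.well-known/openid-configuration"

# The test passes as long as at least one of the two lookups lets it
# validate the token end to end.
```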
Caveat: The test still won't support a configuration where one
endpoint is cluster-local and the other is external. We don't yet have
evidence that this is a configuration that is used in practice, so this
initial hotfix will at least fix the conformance test for the "both
external" configuration we know providers already use. Note that if one
endpoint is cluster-local, and the other is cluster-external, tokens can
still only be validated in-cluster, because both endpoints must be
accessible to Relying Parties that validate tokens.
The image "e2eteam/powershell-helper:6.2.7-linux-cache" is a Linux image. Because we're running "docker buildx build --platform windows/amd64", docker buildx will consider it as a Windows image unless we explicitly specify otherwise. If the image's platform is not correctly identified, we can run into problems when trying to build the image.
We are already doing something similar with the windows-servercore-cache image.
We can cache the powershell-helper image's results into a scratch Linux image using
docker buildx. This will allow us to spend less time pulling the data we need from the
powershell-helper image when we need it.
Additionally, docker buildx might have some issues with cross-registry images, so this
will allow us to circumvent them.
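A rough sketch of that caching step (the Dockerfile name, registry, and tag are illustrative):
```
# Copy the Linux helper image's contents into a scratch stage we control,
# then push the result so later builds pull from our own registry.
cat > Dockerfile.cache <<'EOF'
FROM --platform=linux/amd64 e2eteam/powershell-helper:6.2.7-linux-cache AS helper
FROM scratch
COPY --from=helper / /
EOF
docker buildx build --platform linux/amd64 --push \
  -t "${REGISTRY}/powershell-helper-cache:6.2.7" -f Dockerfile.cache .
```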
This is to consume the changes for binding the udp listeners of netexec
to specific addresses.
Signed-off-by: Federico Paolinelli <fpaoline@redhat.com>
The current implementation listens on any address for tcp, udp, and sctp. There
are some cases where it makes sense to listen on specific addresses
(especially udp, see https://github.com/kubernetes/kubernetes/issues/95565).
This is because UDP is connectionless, and in order for conntrack to
work, the application must ensure that the source of the reply is the same
as the destination of the request. The easiest way to do that is to bind
explicitly to an IP.
Here we pass an optional parameter that contains a comma-separated list
of addresses.
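For illustration, usage would look roughly like this (the flag name follows this change; treat the exact spelling as an assumption):
```
# Bind netexec's UDP listeners to specific addresses instead of 0.0.0.0.
agnhost netexec --udp-port=8081 --udp-listen-addresses=10.0.0.1,10.0.0.2
```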
Signed-off-by: Federico Paolinelli <fpaoline@redhat.com>
The same SHA cannot be pushed twice to the staging registry. Because some images were
mirrored, their SHAs remained unchanged. This addresses that issue.
nginx expects to find its conf and logs folders locally, and fails if it cannot find them.
cd-ing into the nginx folder solves this issue. This is a similar approach to the
echoserver image, which also uses nginx.
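Illustratively (the path here is an assumption), the fix amounts to:
```
# Run nginx from its own directory so its relative conf/ and logs/
# paths resolve.
cd /nginx && nginx
```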
A 32-bit php was included in the images, instead of the 64-bit one. The base image
is nanoserver-based, which does not support 32-bit apps. Because of this, httpd
fails to start.
Additionally, we've previously removed the busybox-helper dependency, but it was
left in the httpd images. This removes the dependency from the httpd images.
Due to dockerhub rate limiting, we had to find an alternative solution: we've mirrored the dockerhub
images into our own registry.
Additionally, our own busybox, httpd, and nginx images also have Windows support.