When stopping polling, the provided message becomes the complete failure
message. This means that the code which calls gomega.StopTrying must include
the pod in the message instead of just summarizing the phase, which makes the
failure more useful.
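A minimal sketch of the resulting pattern (the helper name, client, and matcher are illustrative, not taken from the actual change): because the StopTrying message is all the reader gets, the poll callback dumps the pod itself rather than only its phase.

```go
package e2esketch

import (
	"context"
	"fmt"
	"time"

	"github.com/onsi/gomega"
	"github.com/onsi/gomega/format"
	v1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
)

// waitForPodRunning is a hypothetical helper; it assumes a Ginkgo/Gomega test
// context where a fail handler is already registered.
func waitForPodRunning(ctx context.Context, client kubernetes.Interface, namespace, podName string) {
	gomega.Eventually(func() (*v1.Pod, error) {
		pod, err := client.CoreV1().Pods(namespace).Get(ctx, podName, metav1.GetOptions{})
		if err != nil {
			return nil, err
		}
		if pod.Status.Phase == v1.PodFailed {
			// StopTrying's message becomes the entire failure message, so
			// include the full pod, not just "pod is in phase Failed".
			return nil, gomega.StopTrying(fmt.Sprintf("pod failed permanently:\n%s", format.Object(pod, 1)))
		}
		return pod, nil
	}).WithTimeout(5 * time.Minute).Should(gomega.HaveField("Status.Phase", v1.PodRunning))
}
```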
Fix issue 123266
> CI: `post-kubernetes-push-e2e-agnhost-test-images` is failing
> (`gcr.io/k8s-staging-e2e-test-images/agnhost:2.46-linux-amd64 is a manifest list`)
To avoid creating a manifest list with recent versions of buildx,
`--provenance=false --sbom=false` has to be specified.
Signed-off-by: Akihiro Suda <akihiro.suda.cz@hco.ntt.co.jp>
Merge vishh/stress@eab4e3384b into agnhost.
Old usage: `stress -mem-alloc-size 12Mi -mem-alloc-sleep 10s -mem-total 4Gi`
New usage: `agnhost stress --mem-alloc-size 12Mi --mem-alloc-sleep 10s --mem-total 4Gi`
This is part of the steps to migrate away from legacy Schema 1 images
(issue 123146).
Signed-off-by: Akihiro Suda <akihiro.suda.cz@hco.ntt.co.jp>
The currently published image version doesn't include the latest code changes,
for example CDI support. The manifests reference different and outdated image
versions.
This snuck in while I was working on the test, but is ultimately not necessary
to test the functionality.
Skipping the healthz check resulted in leaking goroutines from poststarthooks.
Don't implement interfaces that trigger tests with in-line and
pre-provisioned vSphere volumes.
With cloud provider removal, the in-tree vSphere tests won't be able to
create a volume in vSphere and thus test in-line volumes in Pods and
pre-provisioned PVs. Only dynamically provisioned volumes can be used for
testing, because they're provisioned by the vSphere CSI driver.
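As a rough sketch of why this works (the interface and package names are assumed from the e2e storage framework, not copied from this change): the storage test suites pick tests via type assertions, so a driver that no longer implements the in-line and pre-provisioned interfaces only triggers the dynamic provisioning tests.

```go
package e2esketch

import (
	storageframework "k8s.io/kubernetes/test/e2e/storage/framework"
)

// volumeTypesFor is a hypothetical illustration of the selection mechanism:
// each kind of test volume corresponds to an optional interface on the driver.
func volumeTypesFor(driver storageframework.TestDriver) []string {
	var types []string
	if _, ok := driver.(storageframework.InlineVolumeTestDriver); ok {
		types = append(types, "in-line volume in a pod")
	}
	if _, ok := driver.(storageframework.PreprovisionedPVTestDriver); ok {
		types = append(types, "pre-provisioned PV")
	}
	if _, ok := driver.(storageframework.DynamicPVTestDriver); ok {
		types = append(types, "dynamically provisioned PV")
	}
	return types
}
```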
Refactor the code that creates an internal load balancer in the e2e tests for network load balancers: remove the check for the "azure" provider so that the test only runs when the cluster uses "gke" or "gce" as the provider. The counterpart test lives in the out-of-tree cloud provider azure.
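A minimal sketch of the resulting guard, assuming the usual e2eskipper helper (the wrapper function is made up for illustration):

```go
package e2esketch

import (
	e2eskipper "k8s.io/kubernetes/test/e2e/framework/skipper"
)

// skipUnlessInternalLBSupported is a hypothetical wrapper showing the change:
// "azure" is no longer in the list, so the test is skipped everywhere except
// GKE and GCE clusters.
func skipUnlessInternalLBSupported() {
	// Before: e2eskipper.SkipUnlessProviderIs("azure", "gke", "gce")
	e2eskipper.SkipUnlessProviderIs("gke", "gce")
}
```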
This moves adding a pod to ReservedFor out of the main scheduling cycle into
PreBind. There it is done concurrently in different goroutines. For claims
which were specifically allocated for a pod (the most common case), that
usually makes no difference because the claim is already reserved.
It starts to matter when that pod then cannot be scheduled for other reasons,
because then the claim gets unreserved to allow deallocating it. It also
matters for claims that are created separately and then get used multiple times
by different pods.
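For orientation, a sketch of the scheduler framework extension point this moves to (the plugin type below is a stand-in, not the real dynamicresources plugin): PreBind runs per pod in that pod's binding goroutine, outside the single-threaded main scheduling cycle.

```go
package sketch

import (
	"context"

	v1 "k8s.io/api/core/v1"
	"k8s.io/kubernetes/pkg/scheduler/framework"
)

// dynamicResourcesSketch stands in for the real plugin; only the extension
// point matters here.
type dynamicResourcesSketch struct{}

func (pl *dynamicResourcesSketch) Name() string { return "DynamicResourcesSketch" }

// PreBind is where the pod now gets added to claim.status.reservedFor,
// concurrently with the binding of other pods (see the patch sketch below).
func (pl *dynamicResourcesSketch) PreBind(ctx context.Context, state *framework.CycleState, pod *v1.Pod, nodeName string) *framework.Status {
	// ... patch the claim's ReservedFor list here ...
	return nil
}

// Compile-time check against the framework's PreBind extension point.
var _ framework.PreBindPlugin = &dynamicResourcesSketch{}
```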
Because multiple pods might get added to the same claim rapidly and
independently of each other, it makes sense to do all claim status updates via
patching:
then it is no longer necessary to have an up-to-date copy of the claim because
the patch operation will succeed if (and only if) the patched claim is valid.
Server-side-apply cannot be used for this because a client always has to send
the full list of all entries that it wants to be set, i.e. it cannot add one
entry unless it knows the full list.
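A hedged sketch of such a patch (not necessarily the exact patch used here; the v1alpha2 API group and field names are assumptions based on the resource.k8s.io API of that time): a JSON patch "add" with the `-` index appends one entry without the client having to know the rest of the list, which is exactly what server-side-apply cannot express.

```go
package sketch

import (
	"context"
	"fmt"

	v1 "k8s.io/api/core/v1"
	resourceapi "k8s.io/api/resource/v1alpha2"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/apimachinery/pkg/types"
	"k8s.io/client-go/kubernetes"
)

// reserveClaimForPod appends one ReservedFor entry without reading the claim
// first. The server validates the result (for example the maximum number of
// consumers), so the patch succeeds if and only if the patched claim is valid.
// Caveat: a JSON patch "add" with "-" needs /status/reservedFor to exist
// already; real code has to handle an empty or missing list separately.
func reserveClaimForPod(ctx context.Context, clientset kubernetes.Interface, claim *resourceapi.ResourceClaim, pod *v1.Pod) (*resourceapi.ResourceClaim, error) {
	patch := []byte(fmt.Sprintf(
		`[{"op":"add","path":"/status/reservedFor/-","value":{"resource":"pods","name":%q,"uid":%q}}]`,
		pod.Name, pod.UID))
	return clientset.ResourceV1alpha2().ResourceClaims(claim.Namespace).Patch(
		ctx, claim.Name, types.JSONPatchType, patch, metav1.PatchOptions{}, "status")
}
```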