Generating the name avoids all potential name collisions. It is unclear how
much of a problem such collisions were in practice, because users can avoid
them and the deterministic names for generic ephemeral volumes have not led to
reports from users. But using generated names is not hard either.
What makes it relatively easy is that the new `pod.status.resourceClaimStatuses`
map stores the generated name for the kubelet and the node authorizer, i.e. the
information in the pod itself is sufficient to determine the name of the
ResourceClaim.
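
As a sketch, such a status entry could look roughly like this (the Go type and
field names here are assumptions based on the description above, not
necessarily the final API):

```go
package api // illustration only, not the actual package

// PodResourceClaimStatus sketches the pod status entry described above;
// the field names are assumptions, not necessarily the final API.
type PodResourceClaimStatus struct {
	// Name matches pod.spec.resourceClaims[*].name, i.e. the name of the
	// claim within the pod.
	Name string

	// ResourceClaimName is the generated name of the ResourceClaim object.
	// It may be left unset to record that no claim was needed (see below).
	ResourceClaimName *string
}
```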
The resource claim controller becomes a bit more complex and now needs
permission to modify the pod status. The new failure scenario "ResourceClaim
created, but updating the pod status fails" is handled with the help of a new
special `resource.kubernetes.io/pod-claim-name` annotation: together with the
owner reference it identifies exactly which pod claim a ResourceClaim was
generated for, so updating the pod status can be retried for existing
ResourceClaims.
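
A minimal sketch of that create-then-record flow, assuming client-go, the
resource.k8s.io v1alpha2 API, and a `v1.PodResourceClaimStatus` status entry
as sketched above; the function and its error handling are illustrative, not
the real controller code:

```go
package main

import (
	"context"

	v1 "k8s.io/api/core/v1"
	resourcev1alpha2 "k8s.io/api/resource/v1alpha2"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
)

const podClaimNameAnnotation = "resource.kubernetes.io/pod-claim-name"

// createClaimForPod sketches the create-then-record flow. If the status
// update fails, a later sync can find the claim again via the annotation
// plus the pod owner reference and retry only the status update.
func createClaimForPod(ctx context.Context, client kubernetes.Interface, pod *v1.Pod, podClaim v1.PodResourceClaim, spec resourcev1alpha2.ResourceClaimSpec) error {
	claim := &resourcev1alpha2.ResourceClaim{
		ObjectMeta: metav1.ObjectMeta{
			Namespace:    pod.Namespace,
			GenerateName: pod.Name + "-" + podClaim.Name + "-",
			Annotations:  map[string]string{podClaimNameAnnotation: podClaim.Name},
			OwnerReferences: []metav1.OwnerReference{
				*metav1.NewControllerRef(pod, v1.SchemeGroupVersion.WithKind("Pod")),
			},
		},
		Spec: spec,
	}
	created, err := client.ResourceV1alpha2().ResourceClaims(pod.Namespace).Create(ctx, claim, metav1.CreateOptions{})
	if err != nil {
		return err
	}
	// Record the generated name in the pod status (simplified; the real
	// controller needs a patch/retry strategy here).
	pod = pod.DeepCopy()
	pod.Status.ResourceClaimStatuses = append(pod.Status.ResourceClaimStatuses, v1.PodResourceClaimStatus{
		Name:              podClaim.Name,
		ResourceClaimName: &created.Name,
	})
	_, err = client.CoreV1().Pods(pod.Namespace).UpdateStatus(ctx, pod, metav1.UpdateOptions{})
	return err
}
```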
The transition from deterministic names is handled with a special case in that
recovery code path: a ResourceClaim without the annotation whose name follows
the Kubernetes <= 1.27 naming pattern is assumed to have been generated for
that pod claim and gets added to the pod status.
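
Putting both rules together, the recovery check might look like this sketch (a
hypothetical helper, not the real controller code; the <= 1.27 pattern is
assumed to be `<pod name>-<pod claim name>`):

```go
package main

import (
	v1 "k8s.io/api/core/v1"
	resourcev1alpha2 "k8s.io/api/resource/v1alpha2"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)

const podClaimNameAnnotation = "resource.kubernetes.io/pod-claim-name"

// claimGeneratedFor sketches the recovery check: was this ResourceClaim
// generated for the given pod claim?
func claimGeneratedFor(claim *resourcev1alpha2.ResourceClaim, pod *v1.Pod, podClaim v1.PodResourceClaim) bool {
	// In both cases the pod must be the controlling owner.
	if !metav1.IsControlledBy(claim, pod) {
		return false
	}
	if name, ok := claim.Annotations[podClaimNameAnnotation]; ok {
		// Normal case: the annotation records the pod claim name.
		return name == podClaim.Name
	}
	// Transition case: no annotation, but the deterministic
	// Kubernetes <= 1.27 name (assumed pattern).
	return claim.Name == pod.Name+"-"+podClaim.Name
}
```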
There is no immediate need for it, but in case it becomes relevant, the name of
the generated ResourceClaim may also be left unset to record that no claim was
needed. Components processing such a pod can then skip whatever they would
normally do for the claim. To ensure that they do, and to cover other cases
properly ("no known field is set", "must check ownership"),
`resourceclaim.Name` gets extended.
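
A sketch of what the extended helper could look like, covering all three
outcomes; the signature and field handling are assumptions based on the
description above, not the actual library code:

```go
package resourceclaim

import (
	"fmt"

	v1 "k8s.io/api/core/v1"
)

// Name sketches the extended helper. It returns:
//   - an error if no known field is set or the claim has not been created yet,
//   - a nil name and no error if the pod status records that no claim was
//     needed, so callers can skip the claim,
//   - otherwise the claim name, with mustCheckOwner=true for generated claims
//     to tell callers to verify ownership before using the claim.
func Name(pod *v1.Pod, podClaim *v1.PodResourceClaim) (name *string, mustCheckOwner bool, err error) {
	switch {
	case podClaim.Source.ResourceClaimName != nil:
		// The claim is referenced directly by name, nothing was generated.
		return podClaim.Source.ResourceClaimName, false, nil
	case podClaim.Source.ResourceClaimTemplateName != nil:
		for _, status := range pod.Status.ResourceClaimStatuses {
			if status.Name == podClaim.Name {
				// status.ResourceClaimName may be nil: no claim was needed.
				return status.ResourceClaimName, true, nil
			}
		}
		return nil, false, fmt.Errorf("pod claim %q: ResourceClaim not created yet", podClaim.Name)
	default:
		return nil, false, fmt.Errorf("pod claim %q: no known field is set", podClaim.Name)
	}
}
```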
## test/e2e
This is home to e2e tests used for presubmit, periodic, and postsubmit jobs.
Some of these jobs are merge-blocking, some are release-blocking.
### e2e test ownership
All e2e tests must adhere to the following policies:
- the test must be owned by one and only one SIG
- the test must live in/underneath a sig-owned package matching the pattern
  `test/e2e/[{subpath}/]{sig}/...`, e.g.
  - `test/e2e/auth` - all tests owned by sig-`auth`
  - `test/e2e/common/storage` - all tests `common` to cluster-level and
    node-level e2e tests, owned by sig-`storage`
  - `test/e2e/upgrade/apps` - all tests used in `upgrade` testing, owned by
    sig-`apps`
- each sig-owned package should have an OWNERS file defining relevant approvers and labels for the owning sig, e.g.
  ```yaml
  # test/e2e/node/OWNERS
  # See the OWNERS docs at https://go.k8s.io/owners
  approvers:
  - alice
  - bob
  - cynthia
  emeritus_approvers:
  - dave
  reviewers:
  - sig-node-reviewers
  labels:
  - sig/node
  ```
- packages that use `{subpath}` should have an `imports.go` file importing
  sig-owned packages (for ginkgo's benefit), e.g.
  ```go
  // test/e2e/common/imports.go
  package common

  import (
  	// ensure these packages are scanned by ginkgo for e2e tests
  	_ "k8s.io/kubernetes/test/e2e/common/network"
  	_ "k8s.io/kubernetes/test/e2e/common/node"
  	_ "k8s.io/kubernetes/test/e2e/common/storage"
  )
  ```
- test ownership must be declared via a top-level `SIGDescribe` call defined in
  the sig-owned package, e.g.
  ```go
  // test/e2e/lifecycle/framework.go
  package lifecycle

  import "github.com/onsi/ginkgo"

  // SIGDescribe annotates the test with the SIG label.
  func SIGDescribe(text string, body func()) bool {
  	return ginkgo.Describe("[sig-cluster-lifecycle] "+text, body)
  }
  ```
  ```go
  // test/e2e/lifecycle/bootstrap/bootstrap_signer.go
  package bootstrap

  import (
  	"context"

  	"github.com/onsi/ginkgo"

  	"k8s.io/kubernetes/test/e2e/lifecycle"
  )

  var _ = lifecycle.SIGDescribe("[Feature:BootstrapTokens]", func() {
  	/* ... */
  	ginkgo.It("should sign the new added bootstrap tokens", func(ctx context.Context) {
  		/* ... */
  	})
  	/* etc */
  })
  ```
These policies are enforced:
- via the merge-blocking presubmit job `pull-kubernetes-verify`
- which ends up running `hack/verify-e2e-test-ownership.sh`
- which can also be run via `make verify WHAT=e2e-test-ownership`