Automatic merge from submit-queue. If you want to cherry-pick this change to another branch, please follow the instructions <a href="https://github.com/kubernetes/community/blob/master/contributors/devel/cherry-picks.md">here</a>.

Implements OIDC distributed claims. The next step to enable this feature is to enable claim caching.

A distributed claim allows the OIDC provider to delegate a claim to a separate URL. Distributed claims take the form shown below and are defined in OpenID Connect Core 1.0, section 5.6.2. See: https://openid.net/specs/openid-connect-core-1_0.html#AggregatedDistributedClaims

Example claim:

```
{
  ... (other normal claims)...
  "_claim_names": {
    "groups": "src1"
  },
  "_claim_sources": {
    "src1": {
      "endpoint": "https://www.example.com",
      "access_token": "f005ba11"
    }
  }
}
```

The response to a follow-up request to https://www.example.com is a JWT-encoded claim token, for example:

```
{
  "iss": "https://www.example.com",
  "aud": "my-client",
  "groups": ["team1", "team2"],
  "exp": 9876543210
}
```

Apart from the indirection, a distributed claim behaves exactly the same as a standard claim. For Kubernetes, this means that the token must be verified using the same approach as for the original OIDC token: the "iss", "aud", and "exp" claims must be present in addition to "groups", and all existing OIDC options (e.g. the groups prefix) apply. Any claim can be made distributed, although the "groups" claim is the primary use case.

Due to https://github.com/kubernetes/kubernetes/issues/33290, "groups" may also be a single string, even though OIDC defines the "groups" claim as an array of strings. So this, too, is parsed correctly:

```
{
  "iss": "https://www.example.com",
  "aud": "my-client",
  "groups": "team1",
  "exp": 9876543210
}
```

Distributed claims endpoints are expected to return a JWT, per the OIDC specs. If both a standard and a distributed claim with the same name exist, the standard claim wins; the specs seem undecided about the correct approach here.
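As a rough sketch of the `_claim_names`/`_claim_sources` indirection described above, the following resolves a claim name to the endpoint and access token to query. The types and the `sourceFor` helper are hypothetical illustrations, not the actual apiserver code:

```go
package main

import (
	"encoding/json"
	"fmt"
)

// claimSource mirrors one entry of "_claim_sources"
// (OpenID Connect Core 1.0, section 5.6.2).
type claimSource struct {
	Endpoint    string `json:"endpoint"`
	AccessToken string `json:"access_token"`
}

// distributedClaims holds the two bookkeeping claims that carry the
// distributed-claim indirection.
type distributedClaims struct {
	Names   map[string]string      `json:"_claim_names"`
	Sources map[string]claimSource `json:"_claim_sources"`
}

// sourceFor returns the source to use when resolving the named claim,
// or ok=false if the claim is not distributed in this payload.
func sourceFor(raw []byte, claim string) (claimSource, bool) {
	var dc distributedClaims
	if err := json.Unmarshal(raw, &dc); err != nil {
		return claimSource{}, false
	}
	ref, ok := dc.Names[claim]
	if !ok {
		return claimSource{}, false
	}
	src, ok := dc.Sources[ref]
	return src, ok
}

func main() {
	payload := []byte(`{
		"_claim_names":   {"groups": "src1"},
		"_claim_sources": {"src1": {"endpoint": "https://www.example.com", "access_token": "f005ba11"}}
	}`)
	src, ok := sourceFor(payload, "groups")
	fmt.Println(ok, src.Endpoint, src.AccessToken)
	// → true https://www.example.com f005ba11
}
```

The follow-up request itself would then be an HTTP GET to `src.Endpoint`, typically with `src.AccessToken` as a bearer token, and the JWT in the response verified like the original OIDC token.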
Distributed claims are resolved serially; this could be parallelized for performance if needed. Aggregated claims are silently skipped; support could be added if needed.

**What this PR does / why we need it**: Makes it possible to retrieve many group memberships by offloading group resolution to a dedicated backend.

**Which issue(s) this PR fixes** *(optional, in `fixes #<issue number>(, fixes #<issue_number>, ...)` format, will close the issue(s) when PR gets merged)*: Fixes #62920

**Special notes for your reviewer**: There are a few TODOs that seem better handled in separate commits.

**Release note**:

```release-note
Lays groundwork for OIDC distributed claims handling in the apiserver authentication token checker. A distributed claim allows the OIDC provider to delegate a claim to a separate URL, as defined in OpenID Connect Core 1.0, section 5.6.2. For details, see: http://openid.net/specs/openid-connect-core-1_0.html#AggregatedDistributedClaims
```
# External Repository Staging Area
This directory is the staging area for packages that have been split to their own repository. The content here will be periodically published to respective top-level k8s.io repositories.
Repositories currently staged here:

- k8s.io/apiextensions-apiserver
- k8s.io/api
- k8s.io/apimachinery
- k8s.io/apiserver
- k8s.io/client-go
- k8s.io/kube-aggregator
- k8s.io/code-generator
- k8s.io/metrics
- k8s.io/sample-apiserver
- k8s.io/sample-controller
The code in the `staging/` directory is authoritative, i.e. it is the only copy of the code. You can modify this code directly.
## Using staged repositories from Kubernetes code
Kubernetes code uses the repositories in this directory via symlinks in the `vendor/k8s.io` directory into this staging area. For example, when Kubernetes code imports a package from the `k8s.io/client-go` repository, that import is resolved to `staging/src/k8s.io/client-go` relative to the project root:
```go
// pkg/example/some_code.go
package example

import (
	"k8s.io/client-go/dynamic" // resolves to staging/src/k8s.io/client-go/dynamic
)
```
Once the change-over to external repositories is complete, these repositories will actually be vendored from `k8s.io/<package-name>`.