Making the LoggingConfiguration part of the versioned component-base/config API
had the theoretical advantage that components could have offered different
configuration APIs with experimental features limited to alpha versions (for
example, sanitization offered only in a v1alpha1.KubeletConfiguration). Some
components could have decided to only use stable logging options.
In practice, this wasn't done. Furthermore, we don't want different components
to make different choices regarding which logging features they offer to
users. It should always be the same everywhere, for the sake of consistency.
This can be achieved with a saner Go API by dropping the distinction between
internal and external LoggingConfiguration types. Different stability levels of
individual fields have to be covered by documentation (done) and potentially
feature gates (not currently done).
Advantages:
- everything related to logging is under component-base/logs;
  previously this was scattered across different packages and
  different files under "logs" (why some code was in logs/config.go
  vs. logs/options.go vs. logs/logs.go always confused me again
  and again when coming back to the code):
  - long-term config and command line API are clearly separated
    into the "api" package underneath that
  - logs/logs.go itself only deals with legacy global flags and
    logging configuration
- removal of separate Go APIs like logs.BindLoggingFlags and
  logs.Options
- LogRegistry becomes an implementation detail, with less code
  and less exported functionality (only registration needs to
  be exported, querying is internal)
InitLogs overrides the klog default and turns contextual logging off. This
ensures that it is only enabled in Kubernetes commands that explicitly enable
it via a feature gate. A feature gate for it gets defined in
k8s.io/component-base/logs and is then used by Options.ValidateAndApply.
The effect of disabling contextual logging is very limited according to
benchmarks with kube-scheduler. The feature gets added anyway to satisfy the
PRR recommendation that features should be controllable.
The following commands have support for contextual logging:
- kube-apiserver
- kube-controller-manager
- kubelet
- kube-scheduler
- component-base/logs example
Supporting a feature gate check in ValidateAndApply and not in InitLogs is a
simplification: changing InitLogs to accept a FeatureGate would also have
implied changing component-base/cli.Run. This didn't seem worthwhile because
ValidateAndApply already covers the relevant commands.
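Put together, a command's main ends up looking roughly like this. This is a
sketch only; helper names such as NewOptions and AddFeatureGates reflect my
reading of the component-base API around this change and error handling is
minimal:

    package main

    import (
        "os"

        "github.com/spf13/pflag"

        "k8s.io/component-base/featuregate"
        "k8s.io/component-base/logs"
    )

    func main() {
        // InitLogs overrides the klog default: contextual logging stays
        // off unless the feature gate checked below enables it.
        logs.InitLogs()
        defer logs.FlushLogs()

        // The logging feature gates (including contextual logging) must
        // be registered on the command's feature gate.
        featureGate := featuregate.NewFeatureGate()
        _ = logs.AddFeatureGates(featureGate)

        o := logs.NewOptions()
        o.AddFlags(pflag.CommandLine)
        pflag.Parse()

        // Directly after parsing: the feature gate check happens here,
        // not in InitLogs.
        if err := o.ValidateAndApply(featureGate); err != nil {
            os.Exit(1)
        }
    }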
In various places, log messages were emitted as part of validation or even
before it (for example, cli.PrintFlags). Those log messages did not use the
final logging configuration, for example text output instead of JSON or a
verbosity that wasn't final yet. The last point became more obvious after
moving the setup of verbosity into logs.Options.Apply, because PrintFlags
never printed anything anymore.
In order to force applications to deal with logging as soon as possible, the
Options.Validate and Options.Apply methods are now private. Applications should
use the new Options.ValidateAndApply directly after parsing.
These three options are the ones from logs.AddFlags that are not deprecated.
Therefore it makes sense to also make them available via the configuration
file support in the one command that currently supports it (the kubelet).
Long-term, all commands should use LoggingConfiguration, either with a
configuration file (as in kubelet) or via flags (kube-scheduler,
kube-apiserver, kube-controller-manager).
Short-term, both approaches have to be supported. As the majority of the
commands only use logs.AddFlags, that function by default continues to register
the flags and only leaves that to Options.AddFlags when explicitly requested.
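Sketched side by side, assuming an opt-out option on logs.AddFlags (called
SkipLoggingConfigurationFlags here):

    package app

    import (
        "github.com/spf13/pflag"

        "k8s.io/component-base/logs"
    )

    // registerLoggingFlags sketches the two registration paths.
    func registerLoggingFlags(fs *pflag.FlagSet, useOptions bool) {
        if !useOptions {
            // Majority case: logs.AddFlags registers everything itself.
            logs.AddFlags(fs)
            return
        }
        // Explicit opt-out: skip the logging configuration flags in
        // AddFlags and let Options.AddFlags own them instead.
        logs.AddFlags(fs, logs.SkipLoggingConfigurationFlags())
        o := logs.NewOptions()
        o.AddFlags(fs)
    }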
A drive-by bug fix is done for log flushing: the periodic flushing only
called klog.Flush and therefore missed explicit flushing of the newer logr
backend. This bug was never present in any released version of Kubernetes,
so the fix is not submitted in a separate PR.
The new handler is meant to be executed at the end of the delegation chain.
It simply checks whether the request was made before the server had installed
all known HTTP paths; in that case it returns a 503 response, otherwise a 404.
We don't want to add additional checks to the readyz path, as that might
prevent fixing bricked clusters.
This specific handler is meant to "protect" requests that arrive before the
paths and handlers are fully initialized.
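A self-contained sketch of the idea (the function name and the readiness
channel are illustrative, not the actual apiserver types):

    package filters

    import "net/http"

    // withNotReady wraps the final 404 handler in the delegation chain
    // and, until hasBeenReady is closed, answers 503 instead.
    func withNotReady(hasBeenReady <-chan struct{}, notFoundHandler http.Handler) http.Handler {
        return http.HandlerFunc(func(w http.ResponseWriter, r *http.Request) {
            select {
            case <-hasBeenReady:
                // All known HTTP paths are installed; a miss is a real 404.
                notFoundHandler.ServeHTTP(w, r)
            default:
                // Still initializing: ask the client to retry.
                w.Header().Set("Retry-After", "1")
                http.Error(w, "the server is currently unable to serve the request", http.StatusServiceUnavailable)
            }
        })
    }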
When API Priority and Fairness is enabled, the inflight limits must
add up to something positive.
This rejects the configuration that prompted
https://github.com/kubernetes/kubernetes/issues/102885
Update the help text for the max inflight flags.
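For illustration, the rejected configuration can be expressed as a validation
like this (function and parameter names are mine, not the actual apiserver
code):

    package validation

    import "fmt"

    // validateInflightLimits: with API Priority and Fairness enabled,
    // the inflight limits are borrowed as the total concurrency limit,
    // so their sum must be positive.
    func validateInflightLimits(apfEnabled bool, maxInFlight, maxMutatingInFlight int) error {
        if apfEnabled && maxInFlight+maxMutatingInFlight <= 0 {
            return fmt.Errorf("--max-requests-inflight and --max-mutating-requests-inflight must add up to a positive number when API Priority and Fairness is enabled")
        }
        return nil
    }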
The feature gate acts as a secondary switch to disable the cloud-controller
loops in the kube-controller-manager, the kubelet, and the kube-apiserver.
Provide comprehensive logging information to users, so they are guided in
adopting an out-of-tree cloud provider implementation.
In case a malformed flag such as "–foo" is passed to k8s components,
where "–" is not an ASCII dash character, the components currently
silently ignore the flag and treat it as a positional argument.
Make k8s components/commands exit with an error if a positional argument
that is not empty is found. Include a custom error message for all
components except kubeadm, as cobra.NoArgs is used in a lot of
places already (can be fixed in a followup).
The kubelet already handles this properly - e.g.:
'unknown command: "–foo"'
This change affects:
- cloud-controller-manager
- kube-apiserver
- kube-controller-manager
- kube-proxy
- kubeadm {alpha|config|token|version}
- kubemark
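The guard follows the usual cobra pattern, roughly like this (the command
name and error wording are illustrative):

    package app

    import (
        "fmt"

        "github.com/spf13/cobra"
    )

    // newCommand rejects any non-empty positional argument so that a
    // malformed "–foo" fails loudly instead of being silently dropped.
    func newCommand() *cobra.Command {
        return &cobra.Command{
            Use: "kube-example",
            Args: func(cmd *cobra.Command, args []string) error {
                for _, arg := range args {
                    if len(arg) > 0 {
                        return fmt.Errorf("%q does not take any arguments, got %q", cmd.CommandPath(), args)
                    }
                }
                return nil
            },
            RunE: func(cmd *cobra.Command, args []string) error { return nil },
        }
    }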
Signed-off-by: Monis Khan <mok@vmware.com>
Signed-off-by: Lubomir I. Ivanov <lubomirivanov@vmware.com>
The old flag name doesn't make sense with the renamed API Priority and
Fairness feature, and it's still safe to change the flag since it hasn't done
anything useful in a released k8s version yet.
- Add handlers for service account issuer metadata.
- Add option to manually override JWKS URI.
- Add unit and integration tests.
- Add a separate ServiceAccountIssuerDiscovery feature gate.
Additional notes:
- If not explicitly overridden, the JWKS URI will be based on
the API server's external address and port.
- The metadata server is configured with the validating key set rather
than the signing key set. This allows for key rotation because tokens
can still be validated by the keys exposed in the JWKS URL, even if the
signing key has been rotated (note this may still be a short window if
tokens have short lifetimes).
- The trust model of OIDC discovery requires that the relying party
fetch the issuer metadata via HTTPS; the trust of the issuer metadata
comes from the server presenting a TLS certificate with a trust chain
back to the relying party's root(s) of trust. For tests, we use
a local issuer (https://kubernetes.default.svc) for the certificate
so that workloads within the cluster can authenticate it when fetching
OIDC metadata. An API server cannot validly claim https://kubernetes.io,
but within the cluster, it is the authority for kubernetes.default.svc,
according to the in-cluster config.
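For illustration, the served discovery document has roughly this shape (a
sketch of the standard OIDC discovery metadata, not the server's actual
type; the comments show typical values):

    package serviceaccount

    // oidcDiscovery sketches the metadata served at
    // /.well-known/openid-configuration.
    type oidcDiscovery struct {
        Issuer        string   `json:"issuer"`   // e.g. https://kubernetes.default.svc
        JWKSURI       string   `json:"jwks_uri"` // external address and port unless overridden
        ResponseTypes []string `json:"response_types_supported"`
        SubjectTypes  []string `json:"subject_types_supported"`
        SigningAlgs   []string `json:"id_token_signing_alg_values_supported"` // from the validating key set
    }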
Co-authored-by: Michael Taufen <mtaufen@google.com>
If the konnectivity service is enabled, the etcd client will now use it.
This required moving a few methods to break circular dependencies.
Factored in feedback from lavalamp and wenjiaswe.
Got the proxy-server coming up in the master.
Added certs and have it coming up with those certs.
Added a daemonset to run the network-agent.
Adding support for the agent running as a daemon set on every node.
Added quick hack to test that proxy server/agent were correctly
tunneling traffic to the kubelet.
Added more WIP for reading network proxy configuration.
Got flags set correctly and fixed connection services.
Adding missing ApplyTo.
Added ConnectivityService.
Fixed build directives. Added connectivity service configuration.
Fixed log levels.
Fixed minor issues for feature turned off.
Fixed boilerplate and format.
Moved log dialer initialization earlier as per Liggitt's suggestion.
Fixed a few minor issues in the configuration for GCE.
Fixed scheme allocation.
Adding unit test.
Added test for direct connectivity service.
Switching to injecting the Lookup method rather than using a Singleton.
First round of mikedanese's feedback.
Fixed deployment to use yaml and other changes suggested by MikeDanese.
Switched network proxy server/agent names to kebab-case rather than camelCase.
Picked up DIAL_RSP fix.
Factored in deads2k feedback.
Feedback from mikedanese.
Factored in second round of feedback from David.
Fix path in verify.
Factored in anfernee's feedback.
First part of lavalamp's feedback.
Factored in more changes from lavalamp and mikedanese.
Renamed network-proxy to konnectivity-server and konnectivity-agent.
Fixed tolerations and config file checking.
Added missing strptr.
Finished lavalamp's requested rename.
Disambiguating the konnectivity service by renaming it to egress selector.
Switched the feature flag to KUBE_ENABLE_EGRESS_VIA_KONNECTIVITY_SERVICE.
On local networks (such as the typical connection path between
control plane components), gzip compression increases CPU use and
end-to-end p99 latency rather than decreasing it. Disable compression
within the control plane components, matching how a 1.15 cluster
would be configured.
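A sketch of the client-side knob involved (the helper name is mine;
client-go's rest.Config exposes a DisableCompression field for this):

    package clientutil

    import "k8s.io/client-go/rest"

    // withoutCompression copies a client config and disables gzip, since
    // on local links compression costs CPU and end-to-end p99 latency.
    func withoutCompression(base *rest.Config) *rest.Config {
        cfg := rest.CopyConfig(base)
        cfg.DisableCompression = true
        return cfg
    }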