The notes printed to the user from common.go when
loadConfig fails are outdated and incorrect.
If the config cannot be loaded, the user should not be instructed
to re-upload the config with kubeadm commands; instead, they
should do it manually with kubectl.
On loadConfig() error, just wrap the error in a simple message
and show it to the user.
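A minimal sketch of the wrapping, assuming a hypothetical call site
(loadConfig and the Config type here are stand-ins; kubeadm commonly
wraps errors with github.com/pkg/errors):

    import "github.com/pkg/errors"

    func loadOrExplain(cfgPath string) (*Config, error) {
        cfg, err := loadConfig(cfgPath)
        if err != nil {
            // One simple, wrapped message instead of the outdated notes.
            return nil, errors.Wrap(err, "failed to load the kubeadm configuration")
        }
        return cfg, nil
    }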
The current setup masks IsNotFound errors for Node objects.
The underlying fetching of the init configuration uses
the Node object to construct an InitConfiguration for this
upgrade process, so if the Node is missing, the kubeadm-config
ConfigMap will be reported as missing, which is incorrect.
The component connection between the kube-apiserver and the kubelet
does not require the "O" (Organization) field on the certificate
Subject to be set to the "system:masters" privileged group. It can
be a less privileged group like "kubeadm:cluster-admins".
Change the group in the apiserver-kubelet-client
certificate specification. This cert is passed to
--kubelet-client-certificate.
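A minimal sketch of the relevant piece of the spec, with illustrative
names: for client certificates, the API server maps
Subject.Organization entries to Kubernetes groups, so lowering the
privilege is a one-field change.

    import "crypto/x509/pkix"

    // kubeletClientSubject returns the Subject for the
    // apiserver-kubelet-client certificate; Kubernetes derives the
    // user's groups from Organization.
    func kubeletClientSubject() pkix.Name {
        return pkix.Name{
            CommonName: "kube-apiserver-kubelet-client",
            // Less privileged group instead of "system:masters".
            Organization: []string{"kubeadm:cluster-admins"},
        }
    }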
kubemark's proxy mode exists to test how kube-proxy affects the load
on the apiserver, not how it affects the load on the node. There's no
need to generate fake iptables commands, because that all happens
entirely independently of the API watchers.
The addition of the "super-admin.conf" functionality required
init.go's Client() to create RBAC rules on its first invocation.
However, this created a problem with the "wait-control-plane" phase
of "kubeadm init", where a client is needed to connect to the
API server's "/healthz" endpoint via the Discovery client. The logic
that ensures the RBAC rules accidentally became the step that polls
for the API server to come up.
To avoid this, introduce a new InitData function ClientWithoutBootstrap.
In "wait-control-plane" use this client, which has no permissions
(anonymous) but is sufficient to connect to "/healthz" (see the
sketch after the list below).
Pending changes here would be:
- Stop using "/healthz"; instead, a regular REST client can be
constructed from the kubelet cert/key.
- Make the wait for kubelet / API server linear (not in goroutines).
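A hedged sketch of such an anonymous client (the helper name and CA
handling are assumptions): a rest.Config without credentials yields
anonymous requests, which is enough to hit "/healthz".

    import (
        "context"

        "k8s.io/client-go/kubernetes"
        "k8s.io/client-go/rest"
    )

    func healthz(ctx context.Context, host, caFile string) error {
        cfg := &rest.Config{
            Host:            host, // e.g. "https://10.0.0.1:6443"
            TLSClientConfig: rest.TLSClientConfig{CAFile: caFile},
            // No bearer token or client cert: requests are anonymous.
        }
        client, err := kubernetes.NewForConfig(cfg)
        if err != nil {
            return err
        }
        _, err = client.Discovery().RESTClient().
            Get().AbsPath("/healthz").Do(ctx).Raw()
        return err
    }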
This commit resolves an issue where certain KubeletConfig fields, specifically:
- FileCheckFrequency
- VolumeStatsAggPeriod
- EvictionPressureTransitionPeriod
- Authorization.Mode
- EvictionHard
were inadvertently overridden when not explicitly set in drop-in configs. To retain the
original values when they are absent from the drop-in configs, mergeKubeletConfigurations
uses a JSON merge-patch strategy to selectively merge configurations. This prevents essential
configuration settings from being overridden, ensuring more predictable behavior for users.
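A rough sketch of such a selective merge, using RFC 7386 JSON
merge-patch semantics via github.com/evanphx/json-patch (vendored in
Kubernetes); this assumes the config type's JSON tags use omitempty,
so unset drop-in fields are simply absent from the patch:

    import (
        "encoding/json"

        jsonpatch "github.com/evanphx/json-patch"
    )

    func mergeDropIn(base, dropIn interface{}) ([]byte, error) {
        baseJSON, err := json.Marshal(base)
        if err != nil {
            return nil, err
        }
        patchJSON, err := json.Marshal(dropIn)
        if err != nil {
            return nil, err
        }
        // Only keys present in patchJSON override baseJSON; absent
        // keys keep their original values.
        return jsonpatch.MergePatch(baseJSON, patchJSON)
    }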
Signed-off-by: Sohan Kunkerkar <sohank2602@gmail.com>
Co-authored-by: Peter Hunt <pehunt@redhat.com>
And update most of the comments to refer to "nftables" rather than
"iptables" (even though it doesn't actually do any nftables updating
at this point).
For now the proxy also internally creates a
utiliptablestesting.FakeIPTables to keep the existing sync code
compiling.
Controls the lifecycle of ServiceCIDRs: adds finalizers and
sets the Ready condition in status when they are created, and
removes the finalizers once it is safe to do so (no orphan IPAddresses).
An IPAddress is orphaned if no ServiceCIDR contains it; a sketch of
that check follows.
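A hedged sketch of the orphan check (field names per the v1alpha1
API; the helper name is assumed):

    import (
        "net/netip"

        networkingv1alpha1 "k8s.io/api/networking/v1alpha1"
    )

    // isOrphan reports whether no remaining ServiceCIDR contains ip.
    func isOrphan(ip netip.Addr, cidrs []networkingv1alpha1.ServiceCIDR) bool {
        for _, cidr := range cidrs {
            for _, c := range cidr.Spec.CIDRs {
                prefix, err := netip.ParsePrefix(c)
                if err != nil {
                    continue
                }
                if prefix.Contains(ip) {
                    return false
                }
            }
        }
        return true
    }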
Change-Id: Icbe31e1ed8525fa04df3b741c8a817e5f2a49e80
In EnsureAdminClusterRoleBindingImpl() there are a couple of
polls around CRB create calls. When testing the function,
a short retry interval and timeout are used. These introduce around
2x20 fake client "connections" / poll iterations under a couple
of test cases, with an overall test-time increase of about 2 seconds.
Given the polls in EnsureAdminClusterRoleBindingImpl()
are of type PollUntilContextTimeout() with "immediate" set to "true",
the short retry / timeout can be removed when testing,
because one poll iteration is guaranteed and the tested function
is at 100% coverage with reactors and test cases.
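A sketch of the shape of such a poll (the CRB handling is
illustrative, not the exact kubeadm code): with "immediate" set to
true, wait.PollUntilContextTimeout runs the condition once before
waiting for the first interval tick, so one iteration is guaranteed.

    import (
        "context"
        "time"

        rbacv1 "k8s.io/api/rbac/v1"
        apierrors "k8s.io/apimachinery/pkg/api/errors"
        metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
        "k8s.io/apimachinery/pkg/util/wait"
        clientset "k8s.io/client-go/kubernetes"
    )

    func ensureCRB(ctx context.Context, client clientset.Interface, crb *rbacv1.ClusterRoleBinding) error {
        return wait.PollUntilContextTimeout(ctx, 5*time.Second, time.Minute, true,
            func(ctx context.Context) (bool, error) {
                _, err := client.RbacV1().ClusterRoleBindings().Create(ctx, crb, metav1.CreateOptions{})
                if err != nil && !apierrors.IsAlreadyExists(err) {
                    return false, nil // transient error; retry on the next tick
                }
                return true, nil
            })
    }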
Controllers enabled by default should define feature gates in
ControllerDescriptor.requiredFeatureGates, not during descriptor
registration in NewControllerDescriptors.
Poll CRB create calls for kubeadm:cluster-admins when using the
super-admin.conf credential. The prior create call that uses the
credential admin.conf was already polled. Polling this subsequent
call seems advisable to ensure that momentary errors in between
cannot trip EnsureAdminClusterRoleBindingImpl().
- This metadata can be used to handle controllers in a generic way.
- This enables showing feature-gated controllers in kube-controller-manager's help.
- It is possible to obtain a controllerName in the InitFunc so it can be passed down to and used by the controller.
Metadata about a controller:
- name
- requiredFeatureGates
- isDisabledByDefault
- isCloudProviderController
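A sketch of what such a descriptor could look like (field casing
assumed from the list above; InitFunc is the existing controller
init function type):

    import featuregate "k8s.io/component-base/featuregate"

    type ControllerDescriptor struct {
        name                      string
        initFunc                  InitFunc // receives the controllerName
        requiredFeatureGates      []featuregate.Feature
        isDisabledByDefault       bool
        isCloudProviderController bool
    }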
- Update unit tests in certs_test.go related to the "renew" CLI command.
- In /init, (d *initData) Client(), make sure that the new logic
for bootstrapping an "admin.conf" user is performed by calling
EnsureAdminClusterRoleBinding() from the phases backend. Add an
"adminKubeConfigBootstrapped" flag that helps call this logic only
once per "kubeadm init" binary execution.
- In /phases/init include a new subphase for generating
the "super-admin.conf" file.
- In /phases/reset make sure the file "super-admin.conf" is
cleaned if present. Update unit tests.
- Register the new file in /certs/renewal, so that the
file is renewed if present. If not present, the common message "MISSING"
is shown, same as for other certs/kubeconfig files.
- In /kubeconfig, update the spec for admin.conf to use
the "kubeadm:cluster-admins" Group. A new spec is added for
the "super-admin.conf" file that uses the "system:masters" Group.
- Add a new function EnsureAdminClusterRoleBinding() that includes
logic to ensure that admin.conf contains a User that is properly
bound to the "cluster-admin" built-in ClusterRole. This requires
bootstrapping using the "system:masters"-bearing "super-admin.conf"
(see the sketch after this list). Add detailed unit tests for this
new logic.
- In /upgrade#PerformPostUpgradeTasks() add logic to create the
"admin.conf" and "super-admin.conf" with the new, updated specs.
Add detailed unit tests for this new logic.
- In /upgrade#StaticPodControlPlane() ensure that renewal of
"super-admin.conf" is performed if the file exists.
Update unit tests.
- Add the new file name super-admin.conf and a function,
GetSuperAdminKubeConfigPath(), to return its default path.
- Add the ClusterAdminsGroupAndClusterRoleBinding object name.
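A hedged sketch of the ClusterRoleBinding being ensured (object name
per ClusterAdminsGroupAndClusterRoleBinding above; labels and other
metadata omitted):

    import (
        rbacv1 "k8s.io/api/rbac/v1"
        metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
    )

    var clusterAdminsCRB = &rbacv1.ClusterRoleBinding{
        ObjectMeta: metav1.ObjectMeta{Name: "kubeadm:cluster-admins"},
        RoleRef: rbacv1.RoleRef{
            APIGroup: rbacv1.GroupName,
            Kind:     "ClusterRole",
            Name:     "cluster-admin", // built-in ClusterRole
        },
        Subjects: []rbacv1.Subject{
            {Kind: rbacv1.GroupKind, Name: "kubeadm:cluster-admins"},
        },
    }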
A new --init-only flag is added that makes kube-proxy perform
the configuration that requires privileged mode, and then exit. It is
intended to be executed in a privileged initContainer, while
the main container may run with a stricter securityContext.
* Job: Handle error returned from AddEventHandler function
* Use an error message similar to CronJob's
* Clean up error messages
* Put the testing.T in second place in the args for the newControllerFromClient function
* Put the testing.T in second place in the args for the newControllerFromClientWithClock function
* Call t.Helper()
* Put the testing.TB in second place in the args for the createJobControllerWithSharedInformers function and call tb.Helper() there
* Put the testing.TB in second place in the args for the startJobControllerAndWaitForCaches function and call tb.Helper() there
* Adapt TestFinializerCleanup to the event handler error
---------
Signed-off-by: Yuki Iwai <yuki.iwai.tz@gmail.com>
These were found with a modified klog that enables "go vet" to check klog call
parameters:
cmd/kubeadm/app/features/features.go:149:4: printf: k8s.io/klog/v2.Warningf format %t has arg v of wrong type string (govet)
    klog.Warningf("Setting deprecated feature gate %s=%t. It will be removed in a future release.", k, v)
test/images/sample-device-plugin/sampledeviceplugin.go:147:5: printf: k8s.io/klog/v2.Errorf does not support error-wrapping directive %w (govet)
    klog.Errorf("error: %w", err)
test/images/sample-device-plugin/sampledeviceplugin.go:155:3: printf: k8s.io/klog/v2.Errorf does not support error-wrapping directive %w (govet)
    klog.Errorf("Failed to add watch to %q: %w", triggerPath, err)
staging/src/k8s.io/code-generator/cmd/prerelease-lifecycle-gen/prerelease-lifecycle-generators/status.go:207:5: printf: k8s.io/klog/v2.Fatalf does not support error-wrapping directive %w (govet)
    klog.Fatalf("Package %v: unsupported %s value: %q :%w", i, tagEnabledName, ptag.value, err)
staging/src/k8s.io/legacy-cloud-providers/vsphere/nodemanager.go:286:3: printf: (k8s.io/klog/v2.Verbose).Infof format %s reads arg #1, but call has 0 args (govet)
    klog.V(4).Infof("Node %s missing in vSphere cloud provider cache, trying node informer")
staging/src/k8s.io/legacy-cloud-providers/vsphere/nodemanager.go:302:3: printf: (k8s.io/klog/v2.Verbose).Infof format %s reads arg #1, but call has 0 args (govet)
    klog.V(4).Infof("Node %s missing in vSphere cloud provider caches, trying the API server")
Turn on the MergeCLIArgumentsWithConfig feature gate to keep the legacy
management of ignorePreflightErrors, which means the value defined by
the flag `ignore-preflight-errors` will be merged with the value
`ignorePreflightErrors` defined in the config file.
Otherwise, the value defined by the flag will replace the value from
the config file, if set.
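A hedged sketch of the two behaviors (field and variable names
assumed):

    if mergeCLIArgumentsWithConfig {
        // Legacy: union of the flag values and the config file values.
        merged := sets.New[string](cfg.IgnorePreflightErrors...)
        merged.Insert(flagValues...)
        cfg.IgnorePreflightErrors = sets.List(merged)
    } else if len(flagValues) > 0 {
        // New: the flag replaces the config file value entirely.
        cfg.IgnorePreflightErrors = flagValues
    }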
Signed-off-by: Dave Chen <dave.chen@arm.com>
KEP-2593 proposed to expand the existing node-ipam controller
to be configurable via ClusterCIDR objects; however, there
were reasonable doubts within the SIG about the feature, and after
several months of discussions we decided not to move forward
with the KEP in-tree. Hence, we are removing the existing
code, which is still in alpha.
https://groups.google.com/g/kubernetes-sig-network/c/nts1xEZ--gQ/m/2aTOUNFFAAAJ
Change-Id: Ieaf2007b0b23c296cde333247bfb672441fe6dfc
This removes deprecated sets.String and sets.Int
- replace sets.String with sets.Set[string]
- replace sets.Int with sets.Set[int]
- replace sets.NewString with sets.New[string]
- replace sets.NewInt with sets.New[int]
- replace sets.(OLD).List with sets.List(NEW)
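A before/after sketch of the migration:

    import "k8s.io/apimachinery/pkg/util/sets"

    func names() []string {
        // Old (deprecated):
        //   s := sets.NewString("b", "a")
        //   return s.List() // sorted []string
        // New (generic):
        s := sets.New[string]("b", "a")
        return sets.List(s) // sorted []string
    }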
Add a v1beta4.ClusterConfiguration.EncryptionAlgorithm field (string)
and allow the user to configure the cluster's asymmetric encryption
algorithm to be either "RSA" (default, 2048-bit key size) or "ECDSA" (P-256).
Add validation and fuzzing. Conversion from v1beta3 is not required
because an empty field value is accepted and defaulted to RSA if needed.
Leverage the existing configuration option (feature gate) PublicKeysECDSA
but rename the backend fields, arguments, and function names to be more
generic: EncryptionAlgorithm instead of PublicKeyAlgorithm.
That is because once the feature gate is enabled, the algorithm
configuration also applies to private keys. It also uses the kubeadm API
type (string) instead of the x509.PublicKeyAlgorithm enum (int).
Deprecate the PublicKeysECDSA feature gate with a message.
It should be removed with the release of v1beta4, or at most one release
later (it is an alpha FG).
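A hedged sketch of the mapping from the API string to key generation
(helper name assumed; kubeadm's actual code differs):

    import (
        "crypto"
        "crypto/ecdsa"
        "crypto/elliptic"
        "crypto/rand"
        "crypto/rsa"
    )

    func newPrivateKey(encryptionAlgorithm string) (crypto.Signer, error) {
        switch encryptionAlgorithm {
        case "ECDSA":
            return ecdsa.GenerateKey(elliptic.P256(), rand.Reader)
        default: // "RSA", or empty and defaulted to RSA
            return rsa.GenerateKey(rand.Reader, 2048)
        }
    }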
`kubeadm upgrade plan` used to support configuring the component
configs (kube-proxy and kubelet) in a config file and then checking
whether the version is supported; if it is not supported, it is
marked as an unsupported version and a manual upgrade of the
component is required.
This feature would make the upgrade config API much harder to
maintain, as it violates the no-mutation principle for upgrades, and
it has proven quite problematic in practice.
This change removes the support for configurable component configs in
`kubeadm upgrade plan`; along with the removal, the logic to parse
the config file to decide whether a manual upgrade for the component
configs is needed is removed as well.
NOTE that the API is not changed, i.e. `ManualUpgradeRequired` is not
removed from `ComponentConfigVersionState`, but it is a no-op now.
Signed-off-by: Dave Chen <dave.chen@arm.com>
- move fabriziopandini to emeritus_approvers for /test/e2e*
and /cmd/kubeadm. fabriziopandini remains in /OWNERS_ALIASES
under sig-cluster-lifecycle-leads.
- remove RA489 as reviewer for /test/e2e* and /cmd/kubeadm
The package "k8s.io/kubernetes/cmd/kubeadm/app/util/pkiutil"
is used for a couple of function calls:
- pkiutil.NewCertAndKey() to generate a cert/key pair
- pkiutil.WriteCertAndKey() to write the pair to disk
Unroll and simplify the functions to obtain the same functionality
while removing the cmd/kubeadm dependency.
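A self-contained sketch of the unrolled functionality, using only the
standard library (the function names mirror the pkiutil calls; the
exact certificate template fields are assumptions):

    import (
        "crypto/rand"
        "crypto/rsa"
        "crypto/x509"
        "crypto/x509/pkix"
        "encoding/pem"
        "math/big"
        "os"
        "path/filepath"
        "time"
    )

    // newCertAndKey generates an RSA key and a CA-signed certificate.
    func newCertAndKey(caCert *x509.Certificate, caKey *rsa.PrivateKey, cn string) (*x509.Certificate, *rsa.PrivateKey, error) {
        key, err := rsa.GenerateKey(rand.Reader, 2048)
        if err != nil {
            return nil, nil, err
        }
        tmpl := &x509.Certificate{
            SerialNumber: big.NewInt(1),
            Subject:      pkix.Name{CommonName: cn},
            NotBefore:    time.Now(),
            NotAfter:     time.Now().AddDate(1, 0, 0),
            KeyUsage:     x509.KeyUsageDigitalSignature | x509.KeyUsageKeyEncipherment,
            ExtKeyUsage:  []x509.ExtKeyUsage{x509.ExtKeyUsageClientAuth},
        }
        der, err := x509.CreateCertificate(rand.Reader, tmpl, caCert, key.Public(), caKey)
        if err != nil {
            return nil, nil, err
        }
        cert, err := x509.ParseCertificate(der)
        return cert, key, err
    }

    // writeCertAndKey writes the PEM-encoded pair to dir/name.{crt,key}.
    func writeCertAndKey(dir, name string, cert *x509.Certificate, key *rsa.PrivateKey) error {
        certPEM := pem.EncodeToMemory(&pem.Block{Type: "CERTIFICATE", Bytes: cert.Raw})
        keyPEM := pem.EncodeToMemory(&pem.Block{Type: "RSA PRIVATE KEY", Bytes: x509.MarshalPKCS1PrivateKey(key)})
        if err := os.WriteFile(filepath.Join(dir, name+".crt"), certPEM, 0o644); err != nil {
            return err
        }
        return os.WriteFile(filepath.Join(dir, name+".key"), keyPEM, 0o600)
    }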
github.com/docker/distribution/reference has a new home, github.com/distribution/reference,
and a new tag, v0.5.0. Let's switch to that.
Signed-off-by: Davanum Srinivas <davanum@gmail.com>
I moved a simpler condition (the error == nil case) to the beginning
of the function. This substantially streamlines the function's
readability and the comprehension of its logic flow.
When defining a ClusterIP Service, we can specify externalIPs, and the
traffic policy for externalIPs is subject to externalTrafficPolicy.
However, the policy can't be set when the type is not NodePort or
LoadBalancer, and will default to Cluster when kube-proxy processes the
Service.
This commit updates the defaulting and validation of Service to allow
specifying ExternalTrafficPolicy for ClusterIP Services with
ExternalIPs.
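A sketch of a Service shape that the relaxed validation now accepts
(the address is a documentation placeholder):

    import (
        v1 "k8s.io/api/core/v1"
        metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
    )

    var svc = &v1.Service{
        ObjectMeta: metav1.ObjectMeta{Name: "demo"},
        Spec: v1.ServiceSpec{
            Type:        v1.ServiceTypeClusterIP,
            Ports:       []v1.ServicePort{{Port: 80}},
            ExternalIPs: []string{"192.0.2.10"},
            // Previously rejected for ClusterIP Services.
            ExternalTrafficPolicy: v1.ServiceExternalTrafficPolicyLocal,
        },
    }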
Signed-off-by: Quan Tian <qtian@vmware.com>
Component configs are only needed for `kubeadm init`; `join` and `reset`
don't need the config to provide component configs.
Signed-off-by: Dave Chen <dave.chen@arm.com>
As part of this change, the code responsible for managing the sandbox
image within the kubelet has been removed. Previously, the kubelet
protected the sandbox image from the garbage collection process. However,
with this update, the responsibility of managing the sandbox containers
has been shifted to the CRI implementation itself. By allowing sandbox
image pinning from CRI, we improve efficiency and simplify the kubelet's
interaction with the container runtime. As a result, the kubelet can now
rely on the container runtime's built-in mechanisms for sandbox container
lifecycle management.
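A hedged sketch of how image garbage collection can honor the pinning
(the CRI v1 Image message carries a Pinned field; variable names here
are illustrative):

    // Skip pinned images (e.g. the sandbox image, pinned by the
    // runtime) when building the garbage-collection candidate list.
    for _, img := range imageList {
        if img.Pinned {
            continue
        }
        candidates = append(candidates, img)
    }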
Signed-off-by: Sohan Kunkerkar <sohank2602@gmail.com>