Automatic merge from submit-queue
Edge-based userspace LB in kube-proxy
@thockin @bowei - could one of you take a look and check that this PR doesn't break any basic kube-proxy assumptions? The similar change for winuserproxy should be pretty trivial.
We should also do the same for iptables, but that requires splitting the iptables code into syncProxyRules (which, as far as I know, @thockin has already started working on, so we should probably wait for that to be done).
Automatic merge from submit-queue
Add support for IP aliases for pod IPs (GCP alpha feature)
```release-note
Adds support for allocation of pod IPs via IP aliases.
# Adds KUBE_GCE_ENABLE_IP_ALIASES flag to the cluster up scripts (`kube-{up,down}.sh`).
Setting KUBE_GCE_ENABLE_IP_ALIASES=true enables allocation of PodCIDR IPs
using the IP alias mechanism rather than routes. This feature is currently
only available on GCE.
## Usage
$ CLUSTER_IP_RANGE=10.100.0.0/16 KUBE_GCE_ENABLE_IP_ALIASES=true bash -x cluster/kube-up.sh
# Adds CloudCIDRAllocator to the node CIDR allocator (kube-controller-manager).
If CIDRAllocatorType is set to `CloudCIDRAllocator`, then CIDR allocation
is instead done by the external cloud provider, and the node controller is
only responsible for reflecting the allocation into the node spec.
- Splits off the rangeAllocator from the cidr_allocator.go file.
- Adds cloudCIDRAllocator, which is used when the cloud provider allocates
the CIDR ranges externally. (GCE support only)
- Updates RBAC permission for node controller to include PATCH
```
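To make the allocator split concrete, here's a rough Go sketch of the two allocator flavors; the type and method names are illustrative stand-ins, not the exact ones from this PR:

```go
package main

import "fmt"

// Node is a pared-down stand-in for the Kubernetes Node object,
// used only to illustrate the allocator split.
type Node struct {
	Name    string
	PodCIDR string
}

// CIDRAllocator abstracts how a node gets its PodCIDR.
type CIDRAllocator interface {
	AllocateOrOccupyCIDR(node *Node) error
}

// rangeAllocator hands out CIDRs from a locally managed range
// (the pre-existing behavior, split into its own file by this PR).
type rangeAllocator struct {
	next int
}

func (r *rangeAllocator) AllocateOrOccupyCIDR(node *Node) error {
	node.PodCIDR = fmt.Sprintf("10.100.%d.0/24", r.next)
	r.next++
	return nil
}

// cloudCIDRAllocator defers allocation to the cloud provider (e.g. GCE
// IP aliases) and only reflects the result into the node spec.
type cloudCIDRAllocator struct {
	lookup func(nodeName string) (string, error) // cloud API call, assumed
}

func (c *cloudCIDRAllocator) AllocateOrOccupyCIDR(node *Node) error {
	cidr, err := c.lookup(node.Name)
	if err != nil {
		return err
	}
	node.PodCIDR = cidr // the node controller only patches the spec
	return nil
}

func main() {
	var alloc CIDRAllocator = &rangeAllocator{}
	n := &Node{Name: "node-1"}
	_ = alloc.AllocateOrOccupyCIDR(n)
	fmt.Println(n.PodCIDR) // 10.100.0.0/24
}
```

The point is that with the cloud allocator the node controller's job shrinks to patching the node spec, which is why the RBAC update to include PATCH is needed.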
Automatic merge from submit-queue
Remove alphaProvisioner in PVController and AlphaStorageClassAnnotation
remove alpha annotation and alphaProvisioner
**Release note**:
```release-note
NONE
```
Exported (public) functions without comments cause golint failures in various
generated files. The changes in this patch take care of about 36 such lint
failures.
An example lint error:
zz_generated.conversion.go:91:1: exported function
Convert_v1alpha1_Binding_To_servicecatalog_Binding should have comment or be
unexported
Automatic merge from submit-queue (batch tested with PRs 43870, 30302, 42722, 43736)
Admission plugin to merge pod and namespace tolerations for restricting pod placement on nodes
```release-note
This admission plugin checks the tolerations on the pod being admitted and on its namespace, and verifies whether there is any conflict. If there is no conflict, it merges the namespace's tolerations with the pod's tolerations, verifies the merged set against the namespace's whitelist of tolerations, and returns. If a namespace does not have default or whitelist tolerations specified, the cluster-level defaults and whitelist are used. An example of its versioned config:
apiVersion: apiserver.k8s.io/v1alpha1
kind: AdmissionConfiguration
plugins:
- name: "PodTolerationRestriction"
  configuration:
    apiVersion: podtolerationrestriction.admission.k8s.io/v1alpha1
    kind: Configuration
    default:
    - Key: key1
      Value: value1
    - Key: key2
      Value: value2
    whitelist:
    - Key: key1
      Value: value1
    - Key: key2
      Value: value2
```
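In sketch form, the merge-then-verify flow described above looks roughly like this (a simplified `Toleration` type and illustrative helper names, not the plugin's actual code):

```go
package main

import "fmt"

// Toleration is a simplified stand-in for the Kubernetes Toleration type.
type Toleration struct {
	Key   string
	Value string
}

// merge combines the namespace's default tolerations with the pod's own,
// skipping namespace defaults whose key the pod already sets. (The
// conflict check the plugin performs first is elided here.)
func merge(nsDefaults, pod []Toleration) []Toleration {
	have := map[string]bool{}
	merged := append([]Toleration{}, pod...)
	for _, t := range pod {
		have[t.Key] = true
	}
	for _, t := range nsDefaults {
		if !have[t.Key] {
			merged = append(merged, t)
		}
	}
	return merged
}

// verifyAgainstWhitelist rejects admission if any merged toleration is
// absent from the namespace (or cluster-level) whitelist.
func verifyAgainstWhitelist(merged, whitelist []Toleration) error {
	allowed := map[Toleration]bool{}
	for _, t := range whitelist {
		allowed[t] = true
	}
	for _, t := range merged {
		if !allowed[t] {
			return fmt.Errorf("toleration %+v is not whitelisted", t)
		}
	}
	return nil
}

func main() {
	nsDefaults := []Toleration{{"key1", "value1"}}
	pod := []Toleration{{"key2", "value2"}}
	whitelist := []Toleration{{"key1", "value1"}, {"key2", "value2"}}
	fmt.Println(verifyAgainstWhitelist(merge(nsDefaults, pod), whitelist)) // <nil>
}
```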
Automatic merge from submit-queue
Add dockershim only mode
This PR adds an `experimental-dockershim` hidden flag to the kubelet to run only dockershim.
We introduce this flag mainly for CRI validation testing. In the future we should compile dockershim into a separate binary.
@yujuhong @feiskyer @xlgao-zju
/cc @kubernetes/sig-node-pr-reviews
Automatic merge from submit-queue
add rancher credential provider
This adds rancher as a credential provider in kubernetes.
@erictune This might be a good opportunity to discuss adding a provision for people to have their own credential providers that is similar to the new cloud provider changes (https://github.com/kubernetes/community/pull/128). WDYT?
```release-note
Added Rancher Credential Provider to use Rancher Registry credentials when running in a Rancher cluster
```
Automatic merge from submit-queue (batch tested with PRs 43777, 44121)
Add patchMergeKey and patchStrategy support to OpenAPI
Support generating Open API extensions for strategic merge patch tags in go struct tags
Support `patchStrategy` and `patchMergeKey`.
Also support checking if the Open API extension and struct tags match.
```release-note
Support generating Open API extensions for strategic merge patch tags in go struct tags
```
cc: @pwittrock @ymqytw
(Description mostly copied from #43833)
Automatic merge from submit-queue (batch tested with PRs 43951, 43386)
kubeadm: Fix issue when kubeadm reset isn't working and the docker service is disabled
**What this PR does / why we need it**:
If the docker service is disabled, the preflight check lib will return a warning.
That warning _should not_ matter when deciding whether to reset docker state or not.
The current code skips the docker reset if the docker service is disabled, which is a bug.
Also, `Check()` must not return a `nil` slice.
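A minimal sketch of the intended behavior, with illustrative names rather than kubeadm's actual preflight types:

```go
package main

import (
	"errors"
	"fmt"
)

// Checker mirrors the shape of a preflight check: it returns warnings
// and errors separately. Both slices must be non-nil per the fix above.
type Checker interface {
	Check() (warnings, errs []error)
}

// serviceCheck warns (rather than errors) when the docker service is
// disabled; illustrative only.
type serviceCheck struct{ enabled bool }

func (s serviceCheck) Check() (warnings, errs []error) {
	warnings, errs = []error{}, []error{} // never return nil slices
	if !s.enabled {
		warnings = append(warnings, errors.New("docker service is disabled"))
	}
	return warnings, errs
}

// shouldResetDocker decides whether to clean up docker state. Warnings
// must not block the reset; only hard errors do.
func shouldResetDocker(c Checker) bool {
	warnings, errs := c.Check()
	for _, w := range warnings {
		fmt.Println("[preflight] WARNING:", w)
	}
	return len(errs) == 0
}

func main() {
	fmt.Println(shouldResetDocker(serviceCheck{enabled: false})) // true: warning only
}
```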
It should be added that I **really don't like what we have at the moment**; I'd love to discuss with the node team adding something to CRI that basically says "remove everything on this node" so we can stop doing this. Basically, kubeadm could talk to the specified socket (by default dockershim.sock) and tell the CRI implementation that everything should be cleaned up. This would be cross-CRI-implementation and would work whether you're using rkt, cri-o or whatever.
**Which issue this PR fixes** *(optional, in `fixes #<issue number>(, fixes #<issue_number>, ...)` format, will close that issue when PR gets merged)*:
helps in https://github.com/kubernetes/kubernetes/issues/43950
**Special notes for your reviewer**:
**Release note**:
```release-note
kubeadm: Make `kubeadm reset` tolerant of a disabled docker service.
```
@mikedanese @jbeda @dmmcquay @pipejakob @yujuhong @freehan
Automatic merge from submit-queue
Adding krousey as a kubeadm reviewer and owner
I would like to join the illustrious ranks of kubeadm owners. I plan to spend a considerable amount of time integrating this tool into our GCE and GKE deployments. If approver is too much, I would still like to be a reviewer.
I will mark this as "Do not merge" until I see approval from all current owners.
Automatic merge from submit-queue (batch tested with PRs 44143, 44133)
Fix panic in kubeadm master node setup
The problem was [caught](https://travis-ci.org/Mirantis/kubeadm-dind-cluster/jobs/218999640#L3249) by kubeadm-dind-cluster CI.
```
[kubeadm] WARNING: kubeadm is in beta, please do not use it for production clusters.
[init] Using Kubernetes version: v1.6.1
[init] Using Authorization mode: RBAC
[preflight] Skipping pre-flight checks
[certificates] Generated CA certificate and key.
[certificates] Generated API server certificate and key.
[certificates] API Server serving cert is signed for DNS names [kube-master kubernetes kubernetes.default kubernetes.default.svc kubernetes.default.svc.cluster.local] and IPs [10.96.0.1 10.192.0.2]
[certificates] Generated API server kubelet client certificate and key.
[certificates] Generated service account token signing key and public key.
[certificates] Generated front-proxy CA certificate and key.
[certificates] Generated front-proxy client certificate and key.
[certificates] Valid certificates and keys now exist in "/etc/kubernetes/pki"
[kubeconfig] Wrote KubeConfig file to disk: "/etc/kubernetes/scheduler.conf"
[kubeconfig] Wrote KubeConfig file to disk: "/etc/kubernetes/admin.conf"
[kubeconfig] Wrote KubeConfig file to disk: "/etc/kubernetes/kubelet.conf"
[kubeconfig] Wrote KubeConfig file to disk: "/etc/kubernetes/controller-manager.conf"
[apiclient] Created API client, waiting for the control plane to become ready
[apiclient] All control plane components are healthy after 19.017839 seconds
panic: assignment to entry in nil map
goroutine 1 [running]:
panic(0x1b62140, 0xc4203f0380)
/usr/local/go/src/runtime/panic.go:500 +0x1a1
k8s.io/kubernetes/cmd/kubeadm/app/phases/apiconfig.attemptToUpdateMasterRoleLabelsAndTaints(0xc420b18be0, 0x4e, 0x0)
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/cmd/kubeadm/app/phases/apiconfig/setupmaster.go:57 +0x15b
k8s.io/kubernetes/cmd/kubeadm/app/phases/apiconfig.UpdateMasterRoleLabelsAndTaints(0xc420b18be0, 0x1a, 0xc420b18be0)
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/cmd/kubeadm/app/phases/apiconfig/setupmaster.go:86 +0x2f
k8s.io/kubernetes/cmd/kubeadm/app/cmd.(*Init).Run(0xc4201a4040, 0x29886e0, 0xc420022010, 0x1c73d01, 0xc4201a4040)
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/cmd/kubeadm/app/cmd/init.go:220 +0x29c
k8s.io/kubernetes/cmd/kubeadm/app/cmd.NewCmdInit.func1(0xc4203a46c0, 0xc420660680, 0x0, 0x2)
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/cmd/kubeadm/app/cmd/init.go:86 +0x197
k8s.io/kubernetes/vendor/github.com/spf13/cobra.(*Command).execute(0xc4203a46c0, 0xc420660560, 0x2, 0x2, 0xc4203a46c0, 0xc420660560)
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/github.com/spf13/cobra/command.go:603 +0x439
k8s.io/kubernetes/vendor/github.com/spf13/cobra.(*Command).ExecuteC(0xc4203b1d40, 0xc4203a4b40, 0xc4203a46c0, 0xc4203a4000)
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/github.com/spf13/cobra/command.go:689 +0x367
k8s.io/kubernetes/vendor/github.com/spf13/cobra.(*Command).Execute(0xc4203b1d40, 0xc42046c420, 0x29886a0)
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/github.com/spf13/cobra/command.go:648 +0x2b
k8s.io/kubernetes/cmd/kubeadm/app.Run(0xc420627f70, 0xc4200001a0)
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/cmd/kubeadm/app/kubeadm.go:35 +0xe8
main.main()
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/cmd/kubeadm/kubeadm.go:26 +0x22
```
Automatic merge from submit-queue
[Federation] Remove FEDERATIONS_DOMAIN_MAP references
Remove all references to FEDERATIONS_DOMAIN_MAP, as this method is no longer used and has been replaced by adding the federation domain map to the kube-dns configmap.
cc @madhusudancs @kubernetes/sig-federation-pr-reviews
**Release note**:
```release-note
[Federation] The mechanism of adding `federation domain maps` to the kube-dns deployment via the `--federations` flag is superseded by adding/updating the `federations` key in the `kube-system/kube-dns` configmap. If a user is using the kubefed tool to join a cluster federation, adding federation domain maps to kube-dns is already taken care of by `kubefed join` and needs no further action.
```
Automatic merge from submit-queue (batch tested with PRs 44097, 42772, 43880, 44031, 44066)
kubeadm: Wait for node before updating labels and taints
**What this PR does / why we need it**:
Adds back the wait (removed in #43881) for at least a single node to appear during kubeadm's attempt to update the master role labels and taints.
**Which issue this PR fixes**:
fixes kubernetes/kubeadm#221
**Release note**:
```release-note
NONE
```
Use shared informers instead of creating local controllers/reflectors
for the proxy's endpoints and service configs. This allows downstream
integrators to pass in preexisting shared informers to save on memory &
cpu usage.
This also enables the cache mutation detector for kube-proxy for those
presubmit jobs that already turn it on.
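For downstream integrators, the wiring looks roughly like this with client-go's shared informer factory (a sketch of the integration point, not the exact kube-proxy code):

```go
package main

import (
	"time"

	"k8s.io/client-go/informers"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", "/etc/kubernetes/kubelet.conf")
	if err != nil {
		panic(err)
	}
	client := kubernetes.NewForConfigOrDie(cfg)

	// One factory can be shared by every consumer in the process, so a
	// downstream integrator embedding the proxy avoids duplicate watches.
	factory := informers.NewSharedInformerFactory(client, 30*time.Second)
	services := factory.Core().V1().Services()
	endpoints := factory.Core().V1().Endpoints()

	// The proxy's service/endpoints configs would register their event
	// handlers on these informers instead of running private reflectors.
	_ = services.Informer()
	_ = endpoints.Informer()

	stop := make(chan struct{})
	factory.Start(stop) // starts only the informers that were requested
	factory.WaitForCacheSync(stop)
}
```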
Kubelet flags are not necessarily appropriate for the KubeletConfiguration
object. For example, this PR also removes HostnameOverride and NodeIP
from KubeletConfiguration. This is a preliminary step to enabling Nodes
to share configurations, as part of the dynamic Kubelet configuration
feature (#29459). Fields that must be unique for each node inhibit
sharing, because their values, by definition, cannot be shared.
Automatic merge from submit-queue
kubelet: change image-gc-high-threshold below docker dm.min_free_space
docker dm.min_free_space defaults to 10%, which "specifies the min free space percent in a thin pool required for new device creation to succeed.... Whenever a new thin pool device is created (during docker pull or during container creation), the Engine checks if the minimum free space is available. If sufficient space is unavailable, then device creation fails and any relevant docker operation fails." [1]
This setting prevents storage usage from crossing the 90% limit. However, image GC is expected to kick in only beyond image-gc-high-threshold, which has a default value of 90%, so GC never triggers. If image-gc-high-threshold is set to a value lower than (100 - dm.min_free_space)%, GC triggers.
xref https://bugzilla.redhat.com/show_bug.cgi?id=1408309
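The interaction boils down to one inequality: image GC only helps if it fires before docker's free-space floor is hit, i.e. image-gc-high-threshold < 100 - dm.min_free_space. A tiny sketch:

```go
package main

import "fmt"

// gcCanTrigger reports whether image GC fires before devicemapper's
// free-space floor blocks new device creation.
func gcCanTrigger(imageGCHighThreshold, dmMinFreeSpace int) bool {
	// docker refuses new thin-pool devices once usage exceeds
	// (100 - dm.min_free_space)%; GC starts at imageGCHighThreshold%.
	return imageGCHighThreshold < 100-dmMinFreeSpace
}

func main() {
	fmt.Println(gcCanTrigger(90, 10)) // false: the old default never triggers
	fmt.Println(gcCanTrigger(85, 10)) // true: the new default leaves headroom
}
```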
```release-note
Changed the kubelet default image-gc-high-threshold to 85% to resolve a conflict with default settings in docker that prevented image garbage collection from resolving low disk space situations when using devicemapper storage.
```
@derekwaynecarr @sdodson @rhvgoyal
Automatic merge from submit-queue
fix typo in kubeadm join -h
```
Flags:
      --config string                Path to kubeadm config file
      --discovery-file string        A file or url from which to load cluster information
      --discovery-token string       A token used to validate cluster information fetched from the master
      --skip-preflight-checks        skip preflight checks normally run before modifying the system
      --tls-bootstrap-token string   A token used for TLS bootstrapping
      --token string                 Use this token for both discovery-token and tls-bootstrap-token
```
Automatic merge from submit-queue
Make RBAC post-start hook conditional on RBAC authorizer being used
Makes the RBAC post-start hook (and reconciliation) conditional on the RBAC authorizer being used
Ensures we don't set up unnecessary objects.
```release-note
RBAC role and rolebinding auto-reconciliation is now performed only when the RBAC authorization mode is enabled.
```
Automatic merge from submit-queue (batch tested with PRs 42360, 43109, 43737, 43853)
kubeadm: test-cmds for kubeadm completion
**What this PR does / why we need it**: Adding test-cmds for kubeadm completion.
Adding tests is a WIP from #34136
**Special notes for your reviewer**: /cc @luxas
**Release note**:
```release-note
NONE
```
Automatic merge from submit-queue
don't wait for first kubelet to be ready and drop dummy deploy
Per https://github.com/kubernetes/kubernetes/issues/43815#issuecomment-290270198, I suggest that we drop both the node-ready and the dummy-deployment checks altogether for 1.6 and move them to a validation phase for 1.7.
I really think we should drop these checks altogether. CreateClientAndWaitForAPI should create a client and wait for the API, not create dummy deployments and wait for nodes to register and be healthy. These are end to end validations and this is the wrong place to do this stuff. We need an explicit final validation phase for this.
```release-note
Fix a deadlock in kubeadm master initialization.
```
Fixes #43815
Automatic merge from submit-queue (batch tested with PRs 42617, 43247, 43509, 43644, 43820)
Bugfix: OpenAPI-gen was not generating extensions correctly
Fixes a bug in openapi-gen that generated invalid code if x-kubernetes extensions were defined in types.go. The location of VendorExtensions was wrong.
Automatic merge from submit-queue (batch tested with PRs 42835, 42974)
remove legacy insecure port options from genericapiserver
The insecure port has been a source of problems and it will prevent proper aggregation into a cluster, so the genericapiserver has no need for it. In addition, there's no reason for it to be in the main kube-apiserver flow either. This pull removes it from genericapiserver and from the shared kube-apiserver code. It's still wired up in the command, but it's no longer possible for someone to mess up and start using it in mainline code.
@kubernetes/sig-api-machinery-misc @ncdc
Automatic merge from submit-queue
proxy to IP instead of name, but still use host verification
I think I found a setting that lets us proxy to an IP and still do hostname verification on the certificate.
@liggitt @sttts Can you see if you agree that this knob does what I think it does? Last commit only, still needs tests.
Automatic merge from submit-queue (batch tested with PRs 42900, 43044, 42896, 43308, 43621)
require codecfactory
The genericapiserver requires a codec to start. Help newcomers to the API by forcing them to set it when they create a new config.
Automatic merge from submit-queue (batch tested with PRs 43149, 41399, 43154, 43569, 42507)
kubeadm: only print stderr/stdout if failed test
**What this PR does / why we need it**: This PR changes when stdout/stderr are logged during a kubeadm test-cmd test. When a real failure occurs, it's useful to see only that failure rather than output that merely looks like one.
**Special notes for your reviewer**: /cc @luxas @marun
**Release note**:
```release-note
NONE
```
Automatic merge from submit-queue
break kube-apiserver start into stages
This is a code shuffle which breaks the kube-apiserver start into
1. set defaults on the options
1. create the generic config from the options
1. create the master config from the generic config and the options
This makes apiserver composition easy/possible later on.
Automatic merge from submit-queue (batch tested with PRs 43144, 42671, 43226, 43314, 43361)
don't start controllers against unhealthy master
Operating against an unhealthy apiserver is unpredictable. Some clients like `kubectl` need to be best effort in this regard so that you can debug broken apiservers. Controllers shouldn't run against unhealthy masters.
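The guard this implies is roughly the following (a sketch of polling `/healthz` before starting controllers; the real controller-manager wiring differs):

```go
package main

import (
	"fmt"
	"net/http"
	"time"
)

// waitForHealthyMaster blocks until /healthz returns 200, so controllers
// never start making decisions against an unhealthy apiserver.
func waitForHealthyMaster(baseURL string, timeout time.Duration) error {
	deadline := time.Now().Add(timeout)
	for time.Now().Before(deadline) {
		resp, err := http.Get(baseURL + "/healthz")
		if err == nil {
			resp.Body.Close()
			if resp.StatusCode == http.StatusOK {
				return nil
			}
		}
		time.Sleep(time.Second)
	}
	return fmt.Errorf("apiserver at %s not healthy after %v", baseURL, timeout)
}

func main() {
	// Address is illustrative (the local insecure endpoint of the era).
	if err := waitForHealthyMaster("http://127.0.0.1:8080", 2*time.Minute); err != nil {
		panic(err) // don't start controllers
	}
	// ... start controllers here ...
}
```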
Automatic merge from submit-queue (batch tested with PRs 43144, 42671, 43226, 43314, 43361)
start informers as a post-start-hook
Switches the shared informer start to a post start hook to make future API server composition easier. PostStartHooks will have to be unioned for server composition and this ensures that we don't accidentally skip starting them.
Automatic merge from submit-queue
Better messaging when GKE certificate signing fails.
**What this PR does / why we need it**:
On errors, the GKE signing API can respond with a JSON body that contains an error message explaining the failure. If we're able to extract it, use that message when reporting the error instead of the generic error returned by the webhook library. Also, always add an event to the CSR object on signing errors.
**Release note**:
```release-note
NONE
```
CC @mikedanese @jcbsmpsn
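The extraction amounts to something like the sketch below; the JSON error shape shown is an assumption for illustration, not the documented GKE response format:

```go
package main

import (
	"encoding/json"
	"fmt"
)

// signingError mirrors a typical Google API error body; the exact GKE
// response shape is assumed here for illustration.
type signingError struct {
	Error struct {
		Message string `json:"message"`
	} `json:"error"`
}

// messageFromBody pulls a human-readable message out of the JSON error
// body, falling back to the generic webhook error when parsing fails.
func messageFromBody(body []byte, fallback error) string {
	var se signingError
	if err := json.Unmarshal(body, &se); err == nil && se.Error.Message != "" {
		return se.Error.Message
	}
	return fallback.Error()
}

func main() {
	body := []byte(`{"error":{"message":"CSR rejected: unknown node"}}`)
	fmt.Println(messageFromBody(body, fmt.Errorf("webhook call failed")))
}
```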
Automatic merge from submit-queue (batch tested with PRs 43642, 43170, 41813, 42170, 41581)
Enable storage class support in Azure File volume
**What this PR does / why we need it**:
Support StorageClass in Azure file volume
**Which issue this PR fixes** *(optional, in `fixes #<issue number>(, fixes #<issue_number>, ...)` format, will close that issue when PR gets merged)*: fixes #
**Special notes for your reviewer**:
**Release note**:
```release-note
Support StorageClass in Azure file volume
```
Automatic merge from submit-queue
kubeadm: Remove an outdated comment
Now that `AdvertiseAddress` is a `string` and not
`AdvertiseAddresses` a `[]string` this comment is no longer
necessary.
@k8s-mirror-cluster-lifecycle-misc RFR
**What this PR does / why we need it**
Just a little house cleaning by removing an outdated comment.
**Release note**:
```release-note
NONE
```
Automatic merge from submit-queue
Use realistic value for the memory example of kube-reserved and system-reserved
Use realistic value for the memory example of kube-reserved and system-reserved
Currently, kubelet help shows a memory example of 150G for
kube-reserved and system-reserved. 150G is not a realistic
value and leads to misconfiguration or confusion. This patch changes
the example value to 500Mi.
Before(same with system-reserved):
```
--kube-reserved value A set of ResourceName=ResourceQuantity (e.g. cpu=200m,memory=150G) pairs that describe resources reserved for kubernetes system components. Currently only cpu and memory are supported. See http://releases.k8s.io/HEAD/docs/user-guide/compute-resources.md for more detail. [default=none]
```
After(same with system-reserved):
```
--kube-reserved value A set of ResourceName=ResourceQuantity (e.g. cpu=200m,memory=500Mi) pairs that describe resources reserved for kubernetes system components. Currently only cpu and memory are supported. See http://releases.k8s.io/HEAD/docs/user-guide/compute-resources.md for more detail. [default=none]
```
Automatic merge from submit-queue (batch tested with PRs 43355, 42827)
kubeadm: In-cluster DNS should be used when self-hosting
**What this PR does / why we need it**:
I noticed that the master components don't use the built-in cluster DNS, which they really should in order to be able to discover other services inside the cluster (like extension API servers, e.g. the service catalog).
This is a really small change that fixes a misconfiguration that had slipped through earlier.
**Which issue this PR fixes** *(optional, in `fixes #<issue number>(, fixes #<issue_number>, ...)` format, will close that issue when PR gets merged)*: fixes #
**Special notes for your reviewer**:
**Release note**:
```release-note
NONE
```
@jbeda @bowei @MrHohn
Automatic merge from submit-queue
kubeadm: Default to v1.6.0 stable in offline scenarios in beforehand
**What this PR does / why we need it**:
In offline scenarios, kubeadm falls back to the latest well-known version.
This PR bumps that to v1.6. We can merge now; in the small gap between the merge of this PR and the actual v1.6 release, kubeadm devs will have to explicitly set the k8s version.
**Which issue this PR fixes** *(optional, in `fixes #<issue number>(, fixes #<issue_number>, ...)` format, will close that issue when PR gets merged)*: fixes #
**Special notes for your reviewer**:
**Release note**:
```release-note
NONE
```
@jbeda
Automatic merge from submit-queue (batch tested with PRs 43018, 42713)
kubeadm: Don't drain and remove the current node on kubeadm reset
**What this PR does / why we need it**:
**Which issue this PR fixes** *(optional, in `fixes #<issue number>(, fixes #<issue_number>, ...)` format, will close that issue when PR gets merged)*: fixes #
**Special notes for your reviewer**:
In v1.5, `kubeadm reset` would drain your node and remove it from your cluster if you specified so, but now in v1.6 we can't do that due to the RBAC rules we have set up.
After conversations with @liggitt, I also agree this functionality was somewhat misplaced (though still very convenient to use), so we're removing it for v1.6.
It's the system administrator's duty to drain and remove nodes from the cluster, not the nodes' responsibility.
The current behavior is therefore a bug that needs to be fixed in v1.6
**Release note**:
```release-note
kubeadm: `kubeadm reset` won't drain and remove the current node anymore
```
@liggitt @deads2k @jbeda @dmmcquay @pires @errordeveloper
Automatic merge from submit-queue
remove dgoodwin from and add dmmcquay to kubeadm reviewers
@dgoodwin says he needs to work on other stuff right now. @dmmcquay says he wants to help with reviews.
Automatic merge from submit-queue (batch tested with PRs 42940, 42906, 42970, 42848)
Improve kubeadm init message
Now that we are locking down the insecure port, we should give clearer instructions on how to copy out the root-owned admin.conf file, chmod it, and use it.
Signed-off-by: Joe Beda <joe.github@bedafamily.com>
```release-note
NONE
```
Automatic merge from submit-queue (batch tested with PRs 42969, 42966)
kubeadm: update kubeadm banner to beta
**What this PR does / why we need it**: Updates the intro banner for kubeadm, which used to state that it is in alpha (but we are going to beta). This also updates the tagged GitHub group (one that no longer exists) to the sig-cluster-lifecycle-misc group.
**Special notes for your reviewer**: /cc @jbeda
**Release note**:
```release-note
NONE
```
Automatic merge from submit-queue (batch tested with PRs 42728, 42278)
[Federation] Create integration test fixture for api
This PR factors a reusable fixture for the federation api server out of the existing integration test.
Targets #40705
cc: @kubernetes/sig-federation-pr-reviews
Automatic merge from submit-queue (batch tested with PRs 42762, 42739, 42425, 42778)
kubeadm: update docker version for CE and EE
**What this PR does / why we need it**: Update regex for docker version to also capture new CE and EE versions.
**Which issue this PR fixes**: fixes https://github.com/kubernetes/kubeadm/issues/189
**Special notes for your reviewer**: /cc @jbeda @luxas
**Release note**:
```release-note
NONE
```
This change allows validators to pass warnings as well as errors. This
was needed because of how support for docker 1.13+ and the new EE and CE
versions is currently being handled.
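For example, a version check along these lines can downgrade an unfamiliar-but-plausible version (such as the new CE/EE scheme) to a warning instead of a hard error; the regex and thresholds here are illustrative, not kubeadm's actual ones:

```go
package main

import (
	"fmt"
	"regexp"
)

// dockerVersionPattern accepts both the legacy 1.1x.y scheme and the
// new YY.MM CE/EE scheme (e.g. 17.03.0-ce); illustrative only.
var dockerVersionPattern = regexp.MustCompile(`^(1\.1[1-3]\.\d+|17\.\d{2}\.\d+(-[ce]e)?)`)

// checkDockerVersion returns a warning for versions that merely look
// unusual and an error for versions that are outright unsupported.
func checkDockerVersion(v string) (warnings, errs []error) {
	warnings, errs = []error{}, []error{}
	switch {
	case !dockerVersionPattern.MatchString(v):
		errs = append(errs, fmt.Errorf("unsupported docker version %q", v))
	case v >= "17":
		warnings = append(warnings, fmt.Errorf("docker %s is newer than the validated versions", v))
	}
	return warnings, errs
}

func main() {
	fmt.Println(checkDockerVersion("17.03.0-ce")) // warning only
	fmt.Println(checkDockerVersion("1.9.1"))      // error
}
```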
Automatic merge from submit-queue
kubeadm: Delete the dummy Deployment properly
**What this PR does / why we need it**:
**Which issue this PR fixes** *(optional, in `fixes #<issue number>(, fixes #<issue_number>, ...)` format, will close that issue when PR gets merged)*: fixes https://github.com/kubernetes/kubeadm/issues/149
**Special notes for your reviewer**:
Earlier, the Pod created by the Deployment wasn't deleted. With this option it is.
As suggested by @deads2k, thank you!
This is a bug fix for v1.6
**Release note**:
```release-note
```
@mikedanese @jbeda @dmmcquay @pires @errordeveloper @deads2k @caesarxuchao
Automatic merge from submit-queue (batch tested with PRs 42692, 42169, 42173)
DaemonSet: Respect ControllerRef
**What this PR does / why we need it**:
This is part of the completion of the [ControllerRef](https://github.com/kubernetes/community/blob/master/contributors/design-proposals/controller-ref.md) proposal. It brings DaemonSet into full compliance with ControllerRef. See the individual commit messages for details.
**Which issue this PR fixes**:
This ensures that DaemonSet does not fight with other controllers over control of Pods.
**Special notes for your reviewer**:
**Release note**:
```release-note
DaemonSet now respects ControllerRef to avoid fighting over Pods.
```
cc @erictune @kubernetes/sig-apps-pr-reviews
Automatic merge from submit-queue (batch tested with PRs 42692, 42169, 42173)
Add pprof trace support
Add support for `/debug/pprof/trace`
Can wait for master to reopen for 1.7.
cc @smarterclayton @wojtek-t @gmarek @timothysc @jeremyeder @kubernetes/sig-scalability-pr-reviews
Automatic merge from submit-queue
kubeadm: Make kube-apiserver's liveness probe match its bindport.
The `kube-apiserver` liveness probe port had previously been hardcoded, so if you used `--apiserver-bind-port` to override the default port (6443), then the health check for the pod would quickly fail and kubelet would continuously kill the apiserver.
**Which issue this PR fixes**: fixes https://github.com/kubernetes/kubeadm/issues/196
**Release note**:
```release-note
kubeadm: fix kube-apiserver liveness probe port when --apiserver-bind-port given
```
Automatic merge from submit-queue (batch tested with PRs 41890, 42593, 42633, 42626, 42609)
make all controllers obey the disable flags
Fixes https://github.com/kubernetes/kubernetes/issues/42592
Some controllers weren't disable-able. This fixes them so they obey our flags.
@ncdc
Automatic merge from submit-queue (batch tested with PRs 41826, 42405)
Add stubDomains and upstreamNameservers configuration to kube-dns
```release-note
Updates the dnsmasq cache/mux layer to be managed by dnsmasq-nanny.
dnsmasq-nanny manages dnsmasq based on values from the
kube-system:kube-dns configmap:
"stubDomains": {
"acme.local": ["1.2.3.4"]
},
is a map of domain to list of nameservers for the domain. This is used
to inject private DNS domains into the kube-dns namespace. In the above
example, any DNS requests for *.acme.local will be served by the
nameserver 1.2.3.4.
"upstreamNameservers": ["8.8.8.8", "8.8.4.4"]
is a list of upstreamNameservers to use, overriding the configuration
specified in /etc/resolv.conf.
```
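Since the configmap values are plain JSON, the nanny's side of this reduces to roughly the following sketch (the real dnsmasq-nanny does more validation and restarts dnsmasq when the values change):

```go
package main

import (
	"encoding/json"
	"fmt"
)

func main() {
	// Values as they appear under the kube-system:kube-dns ConfigMap keys.
	stubDomainsJSON := `{"acme.local": ["1.2.3.4"]}`
	upstreamJSON := `["8.8.8.8", "8.8.4.4"]`

	var stubDomains map[string][]string
	var upstream []string
	if err := json.Unmarshal([]byte(stubDomainsJSON), &stubDomains); err != nil {
		panic(err)
	}
	if err := json.Unmarshal([]byte(upstreamJSON), &upstream); err != nil {
		panic(err)
	}

	// Translate into dnsmasq arguments: --server=/domain/ns per stub
	// domain, and plain --server=ns for upstream nameservers.
	for domain, servers := range stubDomains {
		for _, ns := range servers {
			fmt.Printf("--server=/%s/%s\n", domain, ns)
		}
	}
	for _, ns := range upstream {
		fmt.Printf("--server=%s\n", ns)
	}
}
```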
Automatic merge from submit-queue
Fix Multizone pv creation on GCE
When Multizone is enabled, static PV creation on GCE
fails because the cloud provider configuration is not
available in admission plugins.
cc @derekwaynecarr @childsb
Automatic merge from submit-queue
kubeadm: Fix the nodeSelector and scheduler mounts when using the self-hosted mode
**What this PR does / why we need it**:
The self-hosted option in `kubeadm` was broken.
**Which issue this PR fixes** *(optional, in `fixes #<issue number>(, fixes #<issue_number>, ...)` format, will close that issue when PR gets merged)*: fixes #42528
**Special notes for your reviewer**:
**Release note**:
```release-note
```
/cc @luxas
Automatic merge from submit-queue
Remove the kube-discovery binary from the tree
**What this PR does / why we need it**:
kube-discovery was a temporary solution to implementing proposal: https://github.com/kubernetes/community/blob/master/contributors/design-proposals/bootstrap-discovery.md
However, this functionality is now going to be implemented in the core for v1.6 and will fully replace kube-discovery:
- https://github.com/kubernetes/kubernetes/pull/36101
- https://github.com/kubernetes/kubernetes/pull/41281
- https://github.com/kubernetes/kubernetes/pull/41417
Since `kube-discovery` isn't used by any v1.6 code, it should be removed.
The image `gcr.io/google_containers/kube-discovery-${ARCH}:1.0` should and will continue to exist so kubeadm <= v1.5 continues to work.
**Which issue this PR fixes** *(optional, in `fixes #<issue number>(, fixes #<issue_number>, ...)` format, will close that issue when PR gets merged)*: fixes #
**Special notes for your reviewer**:
**Release note**:
```release-note
Remove cmd/kube-discovery from the tree since it's not necessary anymore
```
@jbeda @dgoodwin @mikedanese @dmmcquay @lukemarsden @errordeveloper @pires
Automatic merge from submit-queue
kubeadm: Hook up kubeadm against the BootstrapSigner
**What this PR does / why we need it**:
This PR makes kubeadm able to use the BootstrapSigner.
Depends on a few other PRs I've made; I'll rebase and fix this up after they've merged.
**Which issue this PR fixes** *(optional, in `fixes #<issue number>(, fixes #<issue_number>, ...)` format, will close that issue when PR gets merged)*: fixes #
**Special notes for your reviewer**:
Example usage:
```console
lucas@THENINJA:~/luxas/kubernetes$ sudo ./kubeadm init --kubernetes-version v1.7.0-alpha.0.377-2a6414bc914d55
[sudo] password for lucas:
[kubeadm] WARNING: kubeadm is in alpha, please do not use it for production clusters.
[init] Using Kubernetes version: v1.7.0-alpha.0.377-2a6414bc914d55
[init] Using Authorization mode: RBAC
[preflight] Running pre-flight checks
[preflight] Starting the kubelet service
[certificates] Generated CA certificate and key.
[certificates] Generated API server certificate and key.
[certificates] Generated API server kubelet client certificate and key.
[certificates] Generated service account token signing key.
[certificates] Generated service account token signing public key.
[certificates] Generated front-proxy CA certificate and key.
[certificates] Generated front-proxy client certificate and key.
[certificates] Valid certificates and keys now exist in "/etc/kubernetes/pki"
[kubeconfig] Wrote KubeConfig file to disk: "/etc/kubernetes/admin.conf"
[kubeconfig] Wrote KubeConfig file to disk: "/etc/kubernetes/kubelet.conf"
[kubeconfig] Wrote KubeConfig file to disk: "/etc/kubernetes/controller-manager.conf"
[kubeconfig] Wrote KubeConfig file to disk: "/etc/kubernetes/scheduler.conf"
[apiclient] Created API client, waiting for the control plane to become ready
[apiclient] All control plane components are healthy after 21.301384 seconds
[apiclient] Waiting for at least one node to register and become ready
[apiclient] First node is ready after 8.072688 seconds
[apiclient] Test deployment succeeded
[token-discovery] Using token: 67a96d.02405a1773564431
[apiconfig] Created RBAC rules
[addons] Created essential addon: kube-proxy
[addons] Created essential addon: kube-dns
Your Kubernetes master has initialized successfully!
To start using your cluster, you need to run:
export KUBECONFIG=/etc/kubernetes/admin.conf
You should now deploy a pod network to the cluster.
Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
http://kubernetes.io/docs/admin/addons/
You can now join any number of machines by running the following on each node:
kubeadm join --token 67a96d.02405a1773564431 192.168.1.115:6443
other-computer $ ./kubeadm join --token 67a96d.02405a1773564431 192.168.1.115:6443
[kubeadm] WARNING: kubeadm is in alpha, please do not use it for production clusters.
[preflight] Skipping pre-flight checks
[preflight] Starting the kubelet service
[discovery] Trying to connect to API Server "192.168.1.115:6443"
[discovery] Created cluster-info discovery client, requesting info from "https://192.168.1.115:6443"
[discovery] Cluster info signature and contents are valid, will use API Server "https://192.168.1.115:6443"
[discovery] Successfully established connection with API Server "192.168.1.115:6443"
[bootstrap] Detected server version: v1.7.0-alpha.0.377+2a6414bc914d55
[bootstrap] The server supports the Certificates API (certificates.k8s.io/v1beta1)
[csr] Created API client to obtain unique certificate for this node, generating keys and certificate signing request
[csr] Received signed certificate from the API server, generating KubeConfig...
[kubeconfig] Wrote KubeConfig file to disk: "/etc/kubernetes/kubelet.conf"
Node join complete:
* Certificate signing request sent to master and response
received.
* Kubelet informed of new secure connection details.
Run 'kubectl get nodes' on the master to see this machine join.
# Wrong secret!
other-computer $ ./kubeadm join --token 67a96d.02405a1773564432 192.168.1.115:6443
[kubeadm] WARNING: kubeadm is in alpha, please do not use it for production clusters.
[preflight] Skipping pre-flight checks
[preflight] Starting the kubelet service
[discovery] Trying to connect to API Server "192.168.1.115:6443"
[discovery] Created cluster-info discovery client, requesting info from "https://192.168.1.115:6443"
[discovery] Failed to connect to API Server "192.168.1.115:6443": failed to verify JWS signature of received cluster info object, can't trust this API Server
[discovery] Trying to connect to API Server "192.168.1.115:6443"
[discovery] Created cluster-info discovery client, requesting info from "https://192.168.1.115:6443"
[discovery] Failed to connect to API Server "192.168.1.115:6443": failed to verify JWS signature of received cluster info object, can't trust this API Server
^C
# Poor method to create a cluster-info KubeConfig (a KubeConfig file with no credentials), but...
$ printf "kind: Config\n$(sudo ./kubeadm alpha phas --client-name foo --server https://192.168.1.115:6443 --token foo | head -6)\n" > cluster-info.yaml
$ cat cluster-info.yaml
kind: Config
apiVersion: v1
clusters:
- cluster:
certificate-authority-data: LS0tLS1CRUdJTiBDRVJUSUZJQ0FURS0tLS0tCk1JSUN5RENDQWJDZ0F3SUJBZ0lCQURBTkJna3Foa2lHOXcwQkFRc0ZBREFWTVJNd0VRWURWUVFERXdwcmRXSmwKY201bGRHVnpNQjRYRFRFM01ESXlPREl3TXpBek1Gb1hEVEkzTURJeU5qSXdNekF6TUZvd0ZURVRNQkVHQTFVRQpBeE1LYTNWaVpYSnVaWFJsY3pDQ0FTSXdEUVlKS29aSWh2Y05BUUVCQlFBRGdnRVBBRENDQVFvQ2dnRUJBTmt0ClFlUVVrenlkQjhTMVM2Y2ZSQ0ZPUnNhdDk2Wi9Id3A3TGJiNkhrSFRub1dvbDhOOVpGUXNDcCtYbDNWbStTS1AKZWFLTTFZWWVDVmNFd0JXNUlWclIxMk51UzYzcjRqK1dHK2NTdjhUOFBpYUZjWXpLalRpODYvajlMYlJYNlFQWAovYmNWTzBZZDVDMVJ1cmRLK2pnRGprdTBwbUl5RDRoWHlEZE1vZk1laStPMytwRC9BeVh5anhyd0crOUFiNjNrCmV6U3BSVHZSZ1h4R2dOMGVQclhKanMwaktKKzkxY0NXZTZJWEZkQnJKbFJnQktuMy9TazRlVVdIUTg0OWJOZHgKdllFblNON1BPaitySktPVEpLMnFlUW9ua0t3WU5qUDBGbW1zNnduL0J0dWkvQW9hanhQNUR3WXdxNEk2SzcvdgplbUM4STEvdzFpSk9RS2dxQmdzQ0F3RUFBYU1qTUNFd0RnWURWUjBQQVFIL0JBUURBZ0trTUE4R0ExVWRFd0VCCi93UUZNQU1CQWY4d0RRWUpLb1pJaHZjTkFRRUxCUUFEZ2dFQkFNL1JQbTYzQTJVaGhPMVljTUNqSEJlUjROOHkKUzB0Q2RNdDRvK0NHRDJKTUt5NDJpNExmQTM2L2hvb01iM2tpUkVSWTRDaENrMGZ3VHpSMHc5Q21nZHlVSTVQSApEc0dIRWdkRHpTVXgyZ3lrWDBQU04zMjRXNCt1T0t6QVRLbm5mMUdiemo4cFA2Uk9QZDdCL09VNiswckhReGY2CnJ6cDRldHhWQjdQWVE0SWg5em1KcVY1QjBuaUZrUDBSYWNDYUxFTVI1NGZNWDk1VHM0amx1VFJrVnBtT1ZGNHAKemlzMlZlZmxLY3VHYTk1ME1CRGRZR2UvbGNXN3JpTkRIUGZZLzRybXIxWG9mUGZEY0Z0ZzVsbUNMWk8wMDljWQpNdGZBdjNBK2dYWjBUeExnU1BpYkxaajYrQU9lMnBiSkxCZkxOTmN6ODJMN1JjQ3RxS01NVHdxVnd0dz0KLS0tLS1FTkQgQ0VSVElGSUNBVEUtLS0tLQo=
server: https://192.168.1.115:6443
name: kubernetes
lucas@THENINJA:~/luxas/kubernetes$ sudo ./kubeadm token list
TOKEN TTL EXPIRES USAGES DESCRIPTION
67a96d.02405a1773564431 <forever> <never> authentication,signing The default bootstrap token generated by 'kubeadm init'.
# Any token with the authentication usage set works as the --tls-bootstrap-token arg here
other-computer $ ./kubeadm join --skip-preflight-checks --discovery-file cluster-info.yaml --tls-bootstrap-token 67a96d.02405a1773564431
[kubeadm] WARNING: kubeadm is in alpha, please do not use it for production clusters.
[preflight] Skipping pre-flight checks
[preflight] Starting the kubelet service
[discovery] Created cluster-info discovery client, requesting info from "https://192.168.1.115:6443"
[discovery] Synced cluster-info information from the API Server so we have got the latest information
[bootstrap] Detected server version: v1.7.0-alpha.0.377+2a6414bc914d55
[bootstrap] The server supports the Certificates API (certificates.k8s.io/v1beta1)
[csr] Created API client to obtain unique certificate for this node, generating keys and certificate signing request
[csr] Received signed certificate from the API server, generating KubeConfig...
[kubeconfig] Wrote KubeConfig file to disk: "/etc/kubernetes/kubelet.conf"
Node join complete:
* Certificate signing request sent to master and response
received.
* Kubelet informed of new secure connection details.
Run 'kubectl get nodes' on the master to see this machine join.
# Delete the RoleBinding that exposes the cluster-info ConfigMap publicly. Now this ConfigMap will be private
lucas@THENINJA:~/luxas/kubernetes$ kubectl -n kube-public edit rolebindings kubeadm:bootstrap-signer-clusterinfo
# This breaks the token joining method
other-computer $ sudo ./kubeadm join --token 67a96d.02405a1773564431 192.168.1.115:6443
[kubeadm] WARNING: kubeadm is in alpha, please do not use it for production clusters.
[preflight] Skipping pre-flight checks
[preflight] Starting the kubelet service
[discovery] Trying to connect to API Server "192.168.1.115:6443"
[discovery] Created cluster-info discovery client, requesting info from "https://192.168.1.115:6443"
[discovery] Failed to request cluster info, will try again: [User "system:anonymous" cannot get configmaps in the namespace "kube-public". (get configmaps cluster-info)]
[discovery] Failed to request cluster info, will try again: [User "system:anonymous" cannot get configmaps in the namespace "kube-public". (get configmaps cluster-info)]
^C
# But we can still connect using the cluster-info file
other-computer $ sudo ./kubeadm join --skip-preflight-checks --discovery-file /k8s/cluster-info.yaml --tls-bootstrap-token 67a96d.02405a1773564431
[kubeadm] WARNING: kubeadm is in alpha, please do not use it for production clusters.
[preflight] Skipping pre-flight checks
[preflight] Starting the kubelet service
[discovery] Created cluster-info discovery client, requesting info from "https://192.168.1.115:6443"
[discovery] Could not access the cluster-info ConfigMap for refreshing the cluster-info information, but the TLS cert is valid so proceeding...
[discovery] The cluster-info ConfigMap isn't set up properly (no kubeconfig key in ConfigMap), but the TLS cert is valid so proceeding...
[bootstrap] Detected server version: v1.7.0-alpha.0.377+2a6414bc914d55
[bootstrap] The server supports the Certificates API (certificates.k8s.io/v1beta1)
[csr] Created API client to obtain unique certificate for this node, generating keys and certificate signing request
[csr] Received signed certificate from the API server, generating KubeConfig...
[kubeconfig] Wrote KubeConfig file to disk: "/etc/kubernetes/kubelet.conf"
Node join complete:
* Certificate signing request sent to master and response
received.
* Kubelet informed of new secure connection details.
Run 'kubectl get nodes' on the master to see this machine join.
# What happens if the CA in the cluster-info file and the API Server's CA aren't equal?
# Generated a new CA for the cluster-info file, an invalid one for connecting to the cluster
# The new cluster-info file is here:
lucas@THENINJA:~/luxas/kubernetes$ cat cluster-info.yaml
kind: Config
apiVersion: v1
clusters:
- cluster:
certificate-authority-data: LS0tLS1CRUdJTiBDRVJUSUZJQ0FURS0tLS0tCk1JSUN5RENDQWJDZ0F3SUJBZ0lCQURBTkJna3Foa2lHOXcwQkFRc0ZBREFWTVJNd0VRWURWUVFERXdwcmRXSmwKY201bGRHVnpNQjRYRFRFM01ESXlPREUyTkRBME1Wb1hEVEkzTURJeU5qRTJOREEwTVZvd0ZURVRNQkVHQTFVRQpBeE1LYTNWaVpYSnVaWFJsY3pDQ0FTSXdEUVlKS29aSWh2Y05BUUVCQlFBRGdnRVBBRENDQVFvQ2dnRUJBS3VHCmR3MXQ5Nmlkd1YrVmwxQjRVSmZWdGpNZ0NTd1poMG00bmR5Q1JCR3FIRkpMTGhIWjREM2N2ckg1Tk44UmZHS0EKb1cwVjN3Q3R2THl4UFdnZkZMbGtrdERPWnBDQ01oYzd2alYxU2FKUE9MS1BIUUtEdm1CVWFNcTdrUzN5NEg1VApMcUp3bFBUUXNVVW5YNWM5V0pzS2JIcEx6MnJZbC9Pam4veGRtd1lQa3JUTTJwSitMS0RjUkxLTEpiQjhGc2pzCnZBQTg2QURjY3phMDd0WEgxL1MzeTN0UDJMTDN0UVgvZWJIYWNPcHluYnVaNlIwdFhKeUpsTTVlOHRHMzFhWHMKQTV3cGo1d2Z1RGU1amRuTHgxNnFtbG5ueGV3OGp0bk4zSDExYUp6VlErOWlSQUZkUTN4WmN4dWdmQVM2ZndqRwo0QnJFeGpUOUFaRlVQb0VkR09NQ0F3RUFBYU1qTUNFd0RnWURWUjBQQVFIL0JBUURBZ0trTUE4R0ExVWRFd0VCCi93UUZNQU1CQWY4d0RRWUpLb1pJaHZjTkFRRUxCUUFEZ2dFQkFJc0pKRFIzbWMxQ1lCd2ViSkRPNm1MdWkwTk4KS3BVdlBuazlMbWlnb2JmYVhjQWlnUlo2M1pIYTd4MXBHNGpKRG8zY3lxNWEybTAzZ245RFMrcEpKYTdpMmpXUQpaV1YvZ2ZRMEk4RGc0endXU3J0T056NHpTTXQ1cW5JZjVWRC95KzVVSmVRck1XSEVFS1VrdklSQzhuUmIvV1F2CmNRWEpiN1hMY0dtbWJyaXpDSUlDYmI4KzhmNDFUWTZnTmg5ZzduaVdGZlp2VG1jN05aMTNjQVJjajJ0UTAzeVMKbWVPcEc2REdMRENFWWYzRld0QmdleE5CcFlFYy9ydUNnUE9IcEdhelYya3JHdFFNLzI0OGQ2ZndwcVNQOGc4RgpVSHNWZWxiMExnNmgvZ3VSYlZ5SENlck5zTDBJdFFhdjlscmZmWkxQaVA5TzNLQ0pBWk9MbXhEOUhaaz0KLS0tLS1FTkQgQ0VSVElGSUNBVEUtLS0tLQo=
server: https://192.168.1.115:6443
name: kubernetes
# Try to join an API Server with the wrong CA
other-computer $ sudo ./kubeadm join --skip-preflight-checks --discovery-file /k8s/cluster-info.yaml --tls-bootstrap-token 67a96d.02405a1773564431
[kubeadm] WARNING: kubeadm is in alpha, please do not use it for production clusters.
[preflight] Skipping pre-flight checks
[preflight] Starting the kubelet service
[discovery] Created cluster-info discovery client, requesting info from "https://192.168.1.115:6443"
[discovery] Failed to validate the API Server's identity, will try again: [Get https://192.168.1.115:6443/api/v1/namespaces/kube-public/configmaps/cluster-info: x509: certificate signed by unknown authority (possibly because of "crypto/rsa: verification error" while trying to verify candidate authority certificate "kubernetes")]
[discovery] Failed to validate the API Server's identity, will try again: [Get https://192.168.1.115:6443/api/v1/namespaces/kube-public/configmaps/cluster-info: x509: certificate signed by unknown authority (possibly because of "crypto/rsa: verification error" while trying to verify candidate authority certificate "kubernetes")]
[discovery] Failed to validate the API Server's identity, will try again: [Get https://192.168.1.115:6443/api/v1/namespaces/kube-public/configmaps/cluster-info: x509: certificate signed by unknown authority (possibly because of "crypto/rsa: verification error" while trying to verify candidate authority certificate "kubernetes")]
[discovery] Failed to validate the API Server's identity, will try again: [Get https://192.168.1.115:6443/api/v1/namespaces/kube-public/configmaps/cluster-info: x509: certificate signed by unknown authority (possibly because of "crypto/rsa: verification error" while trying to verify candidate authority certificate "kubernetes")]
^C
```
**Release note**:
```release-note
```
@jbeda @mikedanese @justinsb @pires @dmmcquay @roberthbailey @dgoodwin
Automatic merge from submit-queue
Eviction Manager Enforces Allocatable Thresholds
This PR modifies the eviction manager to enforce node allocatable thresholds for memory as described in kubernetes/community#348.
This PR should be merged after #41234.
cc @kubernetes/sig-node-pr-reviews @kubernetes/sig-node-feature-requests @vishh
**Why is this a bug/regression?**
Kubelet uses `oom_score_adj` to enforce QoS policies. But the `oom_score_adj` is based on overall memory requested, which means that a Burstable pod that requested a lot of memory can lead to OOM kills for Guaranteed pods, which violates QoS. Even worse, we have observed system daemons like kubelet or kube-proxy being killed by the OOM killer.
Without this PR, v1.6 will have node stability issues and regressions in an existing GA feature, `out of resource` handling.
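The core of the problem is the Burstable formula: `oom_score_adj` scales with the fraction of node memory a pod requests, so a large Burstable request lands close to the Guaranteed floor. A sketch that closely follows the kubelet's policy (treat the exact constants and clamping as approximate):

```go
package main

import "fmt"

// Approximation of the kubelet's QoS OOM-score policy (pkg/kubelet/qos).
const (
	guaranteedOOMScoreAdj = -998
	bestEffortOOMScoreAdj = 1000
)

// burstableOOMScoreAdj scales with the fraction of node memory the pod
// requests: the larger the request, the lower (safer) the score.
func burstableOOMScoreAdj(memRequest, memCapacity int64) int64 {
	adj := 1000 - (1000*memRequest)/memCapacity
	if adj < 1000+guaranteedOOMScoreAdj {
		return 1000 + guaranteedOOMScoreAdj // clamp above the Guaranteed floor
	}
	if adj == bestEffortOOMScoreAdj {
		return bestEffortOOMScoreAdj - 1 // never tie with BestEffort pods
	}
	return adj
}

func main() {
	capacity := int64(16) << 30 // 16Gi node
	// A Burstable pod requesting most of the node ends up near the
	// Guaranteed floor, which is how it can crowd out Guaranteed pods
	// and even system daemons under memory pressure.
	fmt.Println(burstableOOMScoreAdj(15<<30, capacity)) // 63
	fmt.Println(burstableOOMScoreAdj(1<<30, capacity))  // 938
}
```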
Automatic merge from submit-queue (batch tested with PRs 42443, 38924, 42367, 42391, 42310)
Dell EMC ScaleIO Volume Plugin
**What this PR does / why we need it**
This PR implements the Kubernetes volume plugin to allow pods to seamlessly access and use data stored on ScaleIO volumes. [ScaleIO](https://www.emc.com/storage/scaleio/index.htm) is a software-based storage platform that creates a pool of distributed block storage using locally attached disks on every server. The code for this PR supports persistent volumes using PVs, PVCs, and dynamic provisioning.
You can find examples of how to use and configure the ScaleIO Kubernetes volume plugin in [examples/volumes/scaleio/README.md](examples/volumes/scaleio/README.md).
**Special notes for your reviewer**:
To facilitate code review, commits for source code implementation are separated from other artifacts such as generated, docs, and vendored sources.
```release-note
ScaleIO Kubernetes Volume Plugin added enabling pods to seamlessly access and use data stored on ScaleIO volumes.
```
Automatic merge from submit-queue (batch tested with PRs 41919, 41149, 42350, 42351, 42285)
enable cgroups tiers and node allocatable enforcement on pods by default.
```release-note
Pods are launched in a separate cgroup hierarchy from system services.
```
Depends on #41753
cc @derekwaynecarr
Automatic merge from submit-queue (batch tested with PRs 41919, 41149, 42350, 42351, 42285)
kubelet: enable qos-level memory limits
```release-note
Experimental support to reserve a pod's memory request from being utilized by pods in lower QoS tiers.
```
Enables the QoS-level memory cgroup limits described in https://github.com/kubernetes/community/pull/314
**Note: QoS level cgroups have to be enabled for any of this to take effect.**
Adds a new `--experimental-qos-reserved` flag that can be used to set the percentage of a resource to be reserved at the QoS level for pod resource requests.
For example, `--experimental-qos-reserved="memory=50%"` means that if a Guaranteed pod sets a memory request of 2Gi, the Burstable and BestEffort QoS memory cgroups will have their `memory.limit_in_bytes` set to `NodeAllocatable - (2Gi*50%)` to reserve 50% of the Guaranteed pod's request from being used by the lower QoS tiers.
If a Burstable pod sets a request, its reserve will be deducted from the BestEffort memory limit.
The result is that:
- Guaranteed limit matches the root cgroup and is not set by this code
- Burstable limit is `NodeAllocatable - Guaranteed reserve`
- BestEffort limit is `NodeAllocatable - Guaranteed reserve - Burstable reserve`
The only resource currently supported is `memory`; however, the code is generic enough that other resources can be added in the future.
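Working through the rules above with concrete numbers (a sketch; names are illustrative):

```go
package main

import "fmt"

// qosMemoryLimits applies the rules listed above: each QoS tier's
// memory limit is NodeAllocatable minus the reserves of the tiers
// above it.
func qosMemoryLimits(nodeAllocatable, guaranteedReq, burstableReq, reservedPct int64) (burstable, bestEffort int64) {
	guaranteedReserve := guaranteedReq * reservedPct / 100
	burstableReserve := burstableReq * reservedPct / 100
	burstable = nodeAllocatable - guaranteedReserve
	bestEffort = nodeAllocatable - guaranteedReserve - burstableReserve
	return burstable, bestEffort
}

func main() {
	const gi = int64(1) << 30
	// --experimental-qos-reserved="memory=50%" on a 16Gi-allocatable node,
	// with a Guaranteed pod requesting 2Gi and a Burstable pod requesting 1Gi.
	b, be := qosMemoryLimits(16*gi, 2*gi, 1*gi, 50)
	fmt.Printf("burstable=%.1fGi bestEffort=%.1fGi\n",
		float64(b)/float64(gi), float64(be)/float64(gi)) // burstable=15.0Gi bestEffort=14.5Gi
}
```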
@derekwaynecarr @vishh
Automatic merge from submit-queue (batch tested with PRs 42365, 42429, 41770, 42018, 35055)
kubeadm: Add --cert-dir, --cert-altnames instead of --api-external-dns-names
**What this PR does / why we need it**:
- For the beta kubeadm init UX, we need this change
- Also adds the `kubeadm phase certs selfsign` command that makes the phase invokable independently
**Which issue this PR fixes** *(optional, in `fixes #<issue number>(, fixes #<issue_number>, ...)` format, will close that issue when PR gets merged)*: fixes #
**Special notes for your reviewer**:
This PR depends on https://github.com/kubernetes/kubernetes/pull/41897
**Release note**:
```release-note
```
@dmmcquay @pires @jbeda @errordeveloper @mikedanese @deads2k @liggitt