The PluginManager almost duplicates the network plugin interface, but
not quite, since Init() should be called by whatever actually
discovers and creates the network plugin instance. Only then is the
plugin handed off to the PluginManager.
The Manager synchronizes pod-specific network operations like setup,
teardown, and pod network status. It passes through all other
operations so that runtimes don't have to cache the network plugin
directly, but can use the PluginManager as a wrapper.
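For illustration, a minimal sketch of that shape in Go, with a simplified plugin interface and hypothetical lock bookkeeping (the real manager also wraps teardown and status queries, and reference-counts its per-pod locks):
```go
package network

import "sync"

// NetworkPlugin is a simplified stand-in for the kubelet's network
// plugin interface; the real interface also has Init(), Status(), etc.,
// and Init() is called by the code that discovers the plugin, not here.
type NetworkPlugin interface {
	Name() string
	SetUpPod(namespace, name, podID string) error
	TearDownPod(namespace, name, podID string) error
}

// PluginManager wraps an already-initialized plugin and serializes
// pod-level operations so that setup and teardown for the same pod
// cannot interleave; everything else passes straight through.
type PluginManager struct {
	plugin   NetworkPlugin
	mu       sync.Mutex
	podLocks map[string]*sync.Mutex
}

func NewPluginManager(plugin NetworkPlugin) *PluginManager {
	return &PluginManager{plugin: plugin, podLocks: map[string]*sync.Mutex{}}
}

// podLock returns the per-pod mutex, creating it on first use. (The
// real manager also reference-counts these locks so the map cannot
// grow without bound; that bookkeeping is elided here.)
func (pm *PluginManager) podLock(podID string) *sync.Mutex {
	pm.mu.Lock()
	defer pm.mu.Unlock()
	l, ok := pm.podLocks[podID]
	if !ok {
		l = &sync.Mutex{}
		pm.podLocks[podID] = l
	}
	return l
}

// SetUpPod is synchronized per pod.
func (pm *PluginManager) SetUpPod(namespace, name, podID string) error {
	l := pm.podLock(podID)
	l.Lock()
	defer l.Unlock()
	return pm.plugin.SetUpPod(namespace, name, podID)
}

// PluginName is a pure pass-through, so runtimes never need to hold
// the plugin directly.
func (pm *PluginManager) PluginName() string { return pm.plugin.Name() }
```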
Automatic merge from submit-queue (batch tested with PRs 41466, 41456, 41550, 41238, 41416)
Delay Deletion of a Pod until volumes are cleaned up
#41436 fixed the bug that caused #41095 and #40239 to have to be reverted. Now that the bug is fixed, this shouldn't cause problems.
@vishh @derekwaynecarr @sjenning @jingxu97 @kubernetes/sig-storage-misc
Automatic merge from submit-queue (batch tested with PRs 41531, 40417, 41434)
Always detach volumes in operation executor
**What this PR does / why we need it**:
Instead of marking a volume as detached immediately in the kubelet's
reconciler, delegate the marking asynchronously to the operation
executor. This is necessary to prevent race conditions with other
operations mutating the same volume state; a sketch of that pattern
follows the numbered example below.
An example of one such problem:
1. pod is created, volume is added to desired state of the world
2. reconciler process starts
3. reconciler starts MountVolume, which is kicked off asynchronously via
operation_executor.go
4. MountVolume mounts the volume, but hasn't yet marked it as mounted
5. pod is deleted, volume is removed from desired state of the world
6. reconciler reaches detach volume section, detects volume is no longer in desired state of world,
removes it from volumes in use
7. MountVolume tries to mark the volume as mounted, but fails because the
volume is no longer in the actual-state-of-the-world list. After this, the
kubelet is unaware of the mount, so it never tries to unmount again.
8. controller-manager tries to detach the volume, this fails because it
is still mounted to the OS.
9. EBS gets stuck indefinitely in busy state trying to detach.
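A minimal sketch of the serialized-operation pattern referenced above (all names are illustrative, not the real kubelet APIs):
```go
package main

import (
	"fmt"
	"sync"
)

type volumeName string

// actualStateOfWorld holds the kubelet's view of mounted volumes
// (heavily simplified stand-in).
type actualStateOfWorld struct {
	mu      sync.Mutex
	mounted map[volumeName]bool
}

func (asw *actualStateOfWorld) markDetached(v volumeName) {
	asw.mu.Lock()
	defer asw.mu.Unlock()
	delete(asw.mounted, v)
}

// operationExecutor allows at most one in-flight operation per volume.
type operationExecutor struct {
	mu      sync.Mutex
	pending map[volumeName]bool
	wg      sync.WaitGroup
}

// Run starts op for v unless another operation on v is still in
// flight, in which case the caller (the reconciler) retries next loop.
func (oe *operationExecutor) Run(v volumeName, op func()) error {
	oe.mu.Lock()
	if oe.pending[v] {
		oe.mu.Unlock()
		return fmt.Errorf("operation already pending for volume %q", v)
	}
	oe.pending[v] = true
	oe.mu.Unlock()

	oe.wg.Add(1)
	go func() {
		defer oe.wg.Done()
		defer func() {
			oe.mu.Lock()
			delete(oe.pending, v)
			oe.mu.Unlock()
		}()
		op()
	}()
	return nil
}

func main() {
	asw := &actualStateOfWorld{mounted: map[volumeName]bool{"vol-1": true}}
	oe := &operationExecutor{pending: map[volumeName]bool{}}

	// The fix in a nutshell: the reconciler no longer calls
	// asw.markDetached directly; it asks the executor to do it, so a
	// MountVolume still in flight for "vol-1" makes this call fail
	// fast instead of racing with the mount's bookkeeping.
	if err := oe.Run("vol-1", func() { asw.markDetached("vol-1") }); err != nil {
		fmt.Println("retry later:", err)
	}
	oe.wg.Wait()
}
```
Because the state mutation runs inside the executor, steps 6 and 7 of the race above can no longer interleave: the "mark detached" either waits its turn or is rejected and retried.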
**Which issue this PR fixes** *(optional, in `fixes #<issue number>(, fixes #<issue_number>, ...)` format, will close that issue when PR gets merged)*: fixes #32881, fixes #37854 (maybe)
**Special notes for your reviewer**:
**Release note**:
```release-note
```
Automatic merge from submit-queue
kubeadm: Migrate to client-go
**What this PR does / why we need it**: Finish the migration for kubeadm to use client-go wherever possible
**Which issue this PR fixes**: fixes https://github.com/kubernetes/kubeadm/issues/52
**Special notes for your reviewer**: /cc @luxas @pires
**Release note**:
```release-note
NONE
```
Some imports don't seem to exist yet in client-go, examples being:
- "k8s.io/kubernetes/pkg/api/validation"
- "k8s.io/kubernetes/pkg/util/initsystem"
- "k8s.io/kubernetes/pkg/util/node"
One change in kubelet switches an import over to client-go.
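For illustration, the general shape of the migration; the commented-out import path is approximate for this era of the tree, and newClient is a hypothetical helper:
```go
package kubeadmsketch

import (
	// before (in-tree client, roughly):
	//   clientset "k8s.io/kubernetes/pkg/client/clientset_generated/internalclientset"
	// after:
	clientset "k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

// newClient builds a client-go Clientset from a kubeconfig path.
func newClient(kubeconfigPath string) (*clientset.Clientset, error) {
	config, err := clientcmd.BuildConfigFromFlags("", kubeconfigPath)
	if err != nil {
		return nil, err
	}
	return clientset.NewForConfig(config)
}
```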
Automatic merge from submit-queue
Allow multiple DNS servers as comma-separated argument for kubelet --dns
This PR explores how the kubelet's "--dns" flag could be extended to specify multiple DNS servers for in-cluster pods. Testing on the local libvirt-coreos cluster shows that multiple DNS servers are injected without issues.
Specifying multiple DNS servers increases resilience against
- Packet drops
- Single server failure
I am debugging services that make 50+ DNS requests for a single incoming interactive request, which greatly increases the chance of a slowdown (+5s) due to a single packet drop. Switching to two DNS servers reduces the impact of the issue (roughly +1s on glibc, 0s on musl; the error rate drops to error-rate^2).
Note that, as far as I know, no runtime-related code needs to change. In the case of "default" DNS, /etc/resolv.conf is parsed and multiple DNS servers are sent to the backend anyway. This only adds the same capability for the clusterFirst case.
I've heard from @thockin that multiple DNS entries have been considered before, though I don't know in what form. In any case, this is what I would like to see for our production use.
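A minimal sketch of the comma-splitting this implies (the helper name is illustrative, not the kubelet's actual code):
```go
package main

import (
	"fmt"
	"strings"
)

// splitDNSFlag accepts a comma-separated --dns value and returns every
// entry, ready to be handed to resolv.conf generation.
func splitDNSFlag(flagValue string) []string {
	var servers []string
	for _, s := range strings.Split(flagValue, ",") {
		if s = strings.TrimSpace(s); s != "" {
			servers = append(servers, s)
		}
	}
	return servers
}

func main() {
	// e.g. kubelet --dns=10.0.0.10,10.0.0.11
	for _, ns := range splitDNSFlag("10.0.0.10,10.0.0.11") {
		fmt.Printf("nameserver %s\n", ns)
	}
}
```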
```release-note
NONE
```
Automatic merge from submit-queue (batch tested with PRs 41360, 41423, 41430, 40647, 41352)
kubelet: reduce extraneous logging for pods using host network
For pods using the host network, kubelet/shim should not log
error/warning messages when determining the pod IP address.
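A hedged sketch of the shape of that check (types and names are illustrative stand-ins, not the kubelet's real API):
```go
package kubeletsketch

import "fmt"

// Minimal illustrative types; not the kubelet's real API.
type Pod struct {
	Name        string
	HostNetwork bool
}

type networkPlugin interface {
	GetPodIP(pod *Pod) (string, error)
}

// podIP skips the network-plugin lookup entirely for host-network
// pods, so the plugin's error/warning logging can no longer fire for
// them.
func podIP(pod *Pod, plugin networkPlugin, nodeIP string) (string, error) {
	if pod.HostNetwork {
		// A host-network pod shares the node's network namespace; the
		// node IP is the answer and nothing here can fail loudly.
		return nodeIP, nil
	}
	ip, err := plugin.GetPodIP(pod)
	if err != nil {
		return "", fmt.Errorf("failed to read IP for pod %q: %v", pod.Name, err)
	}
	return ip, nil
}
```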
Automatic merge from submit-queue (batch tested with PRs 41196, 41252, 41300, 39179, 41449)
Record error info from ReduceCPULimits if an error is returned
Record the error returned by ReduceCPULimits so that failures can be debugged.
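A minimal sketch of the change's shape (stand-in interface; the real code uses glog rather than the standard log package):
```go
package kubeletsketch

import "log"

// cgroupManager is a stand-in for the kubelet's cgroup manager.
type cgroupManager interface {
	ReduceCPULimits(cgroupName string) error
}

// reduceCPULimits is the substance of the change: the error is logged
// instead of silently dropped.
func reduceCPULimits(cm cgroupManager, cgroupName string) {
	if err := cm.ReduceCPULimits(cgroupName); err != nil {
		log.Printf("failed to reduce cpu limits for %q: %v", cgroupName, err)
	}
}
```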
Automatic merge from submit-queue
Fix bug in status manager TerminatePod
In TerminatePod, we previously passed pod.Status to updateStatusInternal. This is a bug, since that is the original status we were given. Not only does it skip updates made to container statuses, but in some cases it reverted the pod's status to an earlier version, since a stale status was being passed in.
This was the case in #40239 and #41095. As shown in #40239, the pod's status is set to running after it is set to failed, occasionally causing very long delays in pod deletion since we have to wait for this to be corrected.
This PR fixes the bug, adds some helpful debugging statements, and adds a unit test for TerminatePod (which for some reason didn't exist before?).
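A minimal sketch of the fixed pattern, with heavily simplified stand-in types:
```go
package statussketch

import "sync"

// Minimal stand-ins for the status manager's types.
type PodStatus struct{ Phase string }

type Manager struct {
	mu       sync.Mutex
	statuses map[string]PodStatus // latest known status per pod UID
}

// TerminatePod illustrates the fix: start from the manager's cached
// status rather than the caller-supplied one, so updates made since
// the caller read pod.Status are not reverted.
func (m *Manager) TerminatePod(podUID string, callerStatus PodStatus) {
	m.mu.Lock()
	defer m.mu.Unlock()
	status, ok := m.statuses[podUID]
	if !ok {
		status = callerStatus // fall back only if nothing is cached yet
	}
	status.Phase = "Failed"
	m.statuses[podUID] = status
}
```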
@kubernetes/sig-node-bugs @vish @Random-Liu
Automatic merge from submit-queue
Make EnableCRI default to true
This change makes the kubelet use the CRI implementation by default,
unless users explicitly opt out with --enable-cri=false.
For the rkt integration, the --enable-cri flag will have no effect
since rktnetes does not use CRI.
Also, mark the original --experimental-cri flag hidden and deprecated,
so that we can remove it in the next release. If both flags are specified,
the --enable-cri flag overrides the --experimental-cri flag.
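A small self-contained sketch of the described flag precedence (simplified; the real kubelet wires these flags through its own configuration types):
```go
package main

import (
	"flag"
	"fmt"
)

// isFlagSet reports whether the user passed the flag explicitly;
// flag.Visit only visits flags that were set on the command line.
func isFlagSet(name string) bool {
	set := false
	flag.Visit(func(f *flag.Flag) {
		if f.Name == name {
			set = true
		}
	})
	return set
}

func main() {
	enableCRI := flag.Bool("enable-cri", true, "enable the Container Runtime Interface")
	experimentalCRI := flag.Bool("experimental-cri", false, "[deprecated] use --enable-cri")
	flag.Parse()

	useCRI := *enableCRI
	if !isFlagSet("enable-cri") && isFlagSet("experimental-cri") {
		// Honor the deprecated flag only when the new one is unset;
		// if both are specified, --enable-cri wins.
		useCRI = *experimentalCRI
	}
	fmt.Println("CRI enabled:", useCRI)
}
```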
Automatic merge from submit-queue
fix comment
**What this PR does / why we need it**:
fix comment
Thanks.
**Special notes for your reviewer**:
**Release note**:
```release-note
```
Automatic merge from submit-queue (batch tested with PRs 40796, 40878, 36033, 40838, 41210)
Implement TTL controller and use the ttl annotation attached to node in secret manager
For every secret attached to a pod as a volume, the kubelet tries to refresh it every sync period. Currently the kubelet has a TTL cache of its pods' secrets, with the TTL set to 1 minute. In the large clusters we are targeting (5k nodes, 30 pods/node), given that each pod has a secret associated with the ServiceAccount from its namespace, and with a large enough number of namespaces (where on each node (almost) every pod is from a different namespace), refreshing all secrets every minute results in ~30 GETs per node, which adds up to ~2500 QPS of secret GETs against the apiserver.
Apiserver cannot keep up with it very easily.
The desired solution would be to watch for secret changes, but for security reasons we don't want a node watching all secrets, and it is not currently possible to watch only the secrets attached to pods on a given node.
So as a temporary solution, we are introducing an annotation that serves as a suggestion to the kubelet for the TTL of secrets in its cache, plus a very simple controller that sets this annotation based on the cluster size (the larger the cluster, the bigger the TTL).
This workaround means that only very local changes are needed in the kubelet; we get a well-separated, very simple controller; and once watching "my secrets" becomes possible, it will be easy to remove the controller and switch over. It also allows us to reach our scalability goals.
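A minimal sketch of the controller's core decision (the thresholds, values, and annotation key should be treated as illustrative, not the upstream ones):
```go
package ttlsketch

// secretTTLSeconds picks a secret-cache TTL that grows with cluster
// size: small clusters refresh aggressively, large clusters back off.
func secretTTLSeconds(numNodes int) int {
	switch {
	case numNodes <= 100:
		return 0 // small cluster: refresh on every sync
	case numNodes <= 500:
		return 15
	case numNodes <= 1000:
		return 30
	default:
		return 300
	}
}

// The controller publishes the chosen TTL as a node annotation that
// the kubelet's secret manager consults (key name illustrative).
const ttlAnnotationKey = "node.alpha.kubernetes.io/ttl"
```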
@dchen1107 @thockin @liggitt
Automatic merge from submit-queue (batch tested with PRs 41074, 41147, 40854, 41167, 40045)
Add debug logging to eviction manager
**What this PR does / why we need it**:
This PR adds debug logging to the eviction manager.
We need it to help users understand when and why the eviction manager is (or is not) making eviction decisions, and to aid information gathering during support.
Automatic merge from submit-queue (batch tested with PRs 40873, 40948, 39580, 41065, 40815)
[CRI] Enable Hostport Feature for Dockershim
Commits:
1. Refactor common hostport util logics and add more tests
2. Add HostportManager which can ADD/DEL hostports instead of a complete sync.
3. Add interface for retrieving portMappings information of a pod in the network Host interface.
Implement GetPodPortMappings interface in dockerService.
4. Teach kubenet to use HostportManager
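A sketch of the HostportManager shape from commit 2 (simplified stand-in types; the real method signatures differ):
```go
package hostportsketch

// Simplified stand-ins for the CRI port-mapping types.
type PortMapping struct {
	HostPort      int32
	ContainerPort int32
	Protocol      string // "tcp" or "udp"
}

type PodPortMapping struct {
	Namespace, Name string
	IP              string
	PortMappings    []PortMapping
}

// HostportManager programs (and un-programs) host-port rules for one
// pod at a time, typically as iptables rules, instead of resyncing
// every pod's ports on each change.
type HostportManager interface {
	Add(podFullName string, podPortMapping *PodPortMapping) error
	Remove(podFullName string, podPortMapping *PodPortMapping) error
}
```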
Automatic merge from submit-queue (batch tested with PRs 38796, 40823, 40756, 41083, 41105)
Let ReadLogs return when there is a read error.
Fixes a bug in kuberuntime's log handling.
Today, @yujuhong found that once we cancel `kubectl logs -f` with `Ctrl+C`, kuberuntime will keep complaining:
```
27939 kuberuntime_logs.go:192] Failed with err write tcp 10.240.0.4:10250->10.240.0.2:53913: write: broken pipe when writing log for log file "/var/log/pods/5bb76510-ed71-11e6-ad02-42010af00002/busybox_0.log": &{timestamp:{sec:63622095387 nsec:625309193 loc:0x484c440} stream:stdout log:[84 117 101 32 70 101 98 32 32 55 32 50 48 58 49 54 58 50 55 32 85 84 67 32 50 48 49 55 10]}
```
This is because kuberuntime keeps writing to the connection even though it is already closed. Instead, kuberuntime should return and report an error whenever there is a write error.
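A simplified sketch of the fixed loop (this elides ReadLogs' follow and rotation handling):
```go
package logsketch

import (
	"bufio"
	"fmt"
	"io"
)

// copyLogLines illustrates the fix: surface the first write error and
// return, instead of looping forever on a dead connection.
func copyLogLines(r *bufio.Reader, w io.Writer) error {
	for {
		line, err := r.ReadBytes('\n')
		if len(line) > 0 {
			if _, werr := w.Write(line); werr != nil {
				// Previously this path logged and kept going;
				// returning ends the stream once the client is gone.
				return fmt.Errorf("failed to write log line: %v", werr)
			}
		}
		if err == io.EOF {
			return nil
		}
		if err != nil {
			return err
		}
	}
}
```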
Ref the [docker code](3a4ae1f661/pkg/stdcopy/stdcopy.go (L159-L167))
I'm still creating the cluster and verifying this fix. Will post the result here after that.
/cc @yujuhong @kubernetes/sig-node-bugs
Automatic merge from submit-queue (batch tested with PRs 38796, 40823, 40756, 41083, 41105)
kubelet/network-cni-plugin: modify the log's info
**What this PR does / why we need it**:
Checking the kubelet's startup logs, I always find an error like this:
"E1215 10:19:24.891724 2752 cni.go:163] error updating cni config: No networks found in /etc/cni/net.d"
It appears whether or not I use a CNI network plugin.
After analyzing the code, I think this should be a warning log, because it does not trigger any action such as exit or abort; the condition is simply ignored when no valid plugins exist.
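A minimal sketch of the proposed severity change (the real code uses glog in cni.go; this stand-in uses the standard log package):
```go
package cnisketch

import "log"

// logMissingCNIConfig reports a missing CNI config as a warning, not
// an error: the kubelet just retries the config scan later, so nothing
// actually failed.
func logMissingCNIConfig(confDir string) {
	log.Printf("warning: unable to update cni config: no networks found in %s", confDir)
}
```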
thank you!
Automatic merge from submit-queue (batch tested with PRs 41103, 41042, 41097, 40946, 40770)
Use Clientset interface in KubeletDeps
**What this PR does / why we need it**:
This replaces the Clientset struct with the equivalent interface for the KubeClient injected via KubeletDeps. This is useful for testing and for accessing the Node and Pod status event stream without an API server.
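A minimal sketch of what the swap looks like (KubeletDeps abridged to the relevant field):
```go
package depsketch

import (
	clientset "k8s.io/client-go/kubernetes"
)

// KubeletDeps (abridged): holding clientset.Interface instead of the
// concrete *Clientset is what lets tests inject a fake client.
type KubeletDeps struct {
	KubeClient clientset.Interface
	// ...other injected dependencies elided...
}
```
With the interface in place, tests can inject fake.NewSimpleClientset() from k8s.io/client-go/kubernetes/fake instead of connecting to a real API server.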
**Special notes for your reviewer**:
Follow up to #4907
**Release note**:
`NONE`
Automatic merge from submit-queue (batch tested with PRs 41103, 41042, 41097, 40946, 40770)
dockershim: set security option separators based on the docker version
Also add a version cache to avoid hitting the docker daemon frequently.
This is part of #38164
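A minimal sketch of the separator choice (the exact version threshold and the cached daemon-version lookup are elided; treat the helper as illustrative):
```go
package dockersketch

// getSecurityOptSeparator picks the separator for security options:
// newer Docker daemons expect "name=value", older ones "name:value".
// The caller decides newAPIVersion from a cached daemon-version query
// so the docker daemon is not hit on every call.
func getSecurityOptSeparator(newAPIVersion bool) rune {
	if newAPIVersion {
		return '='
	}
	return ':'
}
```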