Commit Graph

1392 Commits

Author SHA1 Message Date
Kubernetes Submit Queue
eeac23282d Merge pull request #31446 from liggitt/log-streaming
Automatic merge from submit-queue

Fix hang/websocket timeout when streaming container log with no content

When streaming and following a container log, no response headers are sent from the kubelet `containerLogs` endpoint until the first byte of content is written to the log. This propagates back to the API server, which also will not send response headers until it gets response headers from the kubelet. That includes upgrade headers, which means a websocket connection upgrade is not performed and can time out.

To recreate, create a busybox pod that runs `/bin/sh -c 'sleep 30 && echo foo && sleep 10'`

As soon as the pod starts, query the kubelet API:
```
curl -N -k -v 'https://<node>:10250/containerLogs/<ns>/<pod>/<container>?follow=true&limitBytes=100'
```

or the master API:
```
curl -N -k -v 'http://<master>:8080/api/v1/namespaces/<ns>/pods/<pod>/log?follow=true&limitBytes=100'
```

In both cases, notice that the response headers are not sent until the first byte of log content is available.

This PR:
* does a 0-byte write prior to handing off to the container runtime stream copy. That commits the response header, even if the subsequent copy blocks waiting for the first byte of content from the log (see the sketch below).
* fixes a bug with the "ping" frame sent to websocket streams, which was not respecting the requested protocol (it was sending a binary frame to a websocket that requested a base64 text protocol)
* fixes a bug in the limitwriter, which was not propagating 0-length writes, even before the writer's limit was reached
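
A minimal standalone sketch of the header-commit trick described above (not the kubelet's actual handler; the handler name, log source, and port are illustrative): a zero-length write plus an explicit flush commits the response headers even though the subsequent copy blocks waiting for log content.

```
package main

import (
	"io"
	"net/http"
	"time"
)

// slowLog simulates a container log that produces nothing for a while.
type slowLog struct{ started time.Time }

func (s *slowLog) Read(p []byte) (int, error) {
	if time.Since(s.started) < 5*time.Second {
		time.Sleep(time.Second)
		return 0, nil // still no content
	}
	return copy(p, "log line\n"), io.EOF
}

func logsHandler(w http.ResponseWriter, r *http.Request) {
	w.Header().Set("Content-Type", "text/plain")

	// Commit the response headers before blocking on the log source.
	// The zero-byte write marks the headers as written and the flush pushes
	// them onto the wire, so clients (and any proxy performing a protocol
	// upgrade) are not left waiting for the first byte of log content.
	if _, err := w.Write([]byte{}); err != nil {
		return
	}
	if f, ok := w.(http.Flusher); ok {
		f.Flush()
	}

	// This blocks until the "log" produces its first byte.
	io.Copy(w, &slowLog{started: time.Now()})
}

func main() {
	http.HandleFunc("/containerLogs", logsHandler)
	http.ListenAndServe(":10255", nil)
}
```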
2016-08-26 06:09:43 -07:00
Jordan Liggitt
0deddb1a62
Do initial 0-byte write to stdout when streaming container logs 2016-08-25 14:29:22 -04:00
Michael Taufen
f277205f4f Kubelet Refactoring
This refactor removes the legacy KubeletConfig object and adds a new
KubeletDeps object, which contains injected runtime objects and
separates them from static config. It also reduces NewMainKubelet to two
arguments: a KubeletConfiguration and a KubeletDeps.

Some Mesos and kubemark code was affected by this change and has been
modified accordingly.

And a few final notes:

KubeletDeps:
KubeletDeps will be a temporary bin for things we might consider
"injected dependencies", until we have a better dependency injection
story for the Kubelet. We will have to discuss this eventually.

RunOnce:
We will likely not pull new KubeletConfiguration from the API server
when in runonce mode, so it doesn't make sense to make this something
that can be configured centrally. We will leave it as a flag-only option
for now. Additionally, it is increasingly looking like nobody actually uses the
Kubelet's runonce mode anymore, so it may be a candidate for deprecation
and removal.
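
A rough sketch of the config/dependency split the commit describes; the struct fields and types here are hypothetical stand-ins, not the actual kubelet definitions.

```
package kubelet

import "net/http"

// KubeletConfiguration holds static, serializable configuration that could
// eventually be sourced centrally (e.g. from the API server).
type KubeletConfiguration struct {
	SyncFrequencySeconds int
	MaxPods              int
}

// KubeletDeps is the temporary bin for injected runtime dependencies.
type KubeletDeps struct {
	KubeClient    *http.Client // stand-in for the real API client
	Recorder      EventRecorder
	VolumePlugins []VolumePlugin
}

type EventRecorder interface{ Eventf(format string, args ...interface{}) }
type VolumePlugin interface{ Name() string }

type Kubelet struct {
	cfg  *KubeletConfiguration
	deps *KubeletDeps
}

// NewMainKubelet now takes exactly two arguments: static config and deps.
func NewMainKubelet(cfg *KubeletConfiguration, deps *KubeletDeps) (*Kubelet, error) {
	return &Kubelet{cfg: cfg, deps: deps}, nil
}
```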
2016-08-25 10:57:31 -07:00
Dr. Stefan Schimanski
e356e52247 Add sysctl whitelist on the node 2016-08-25 13:22:01 +02:00
Kubernetes Submit Queue
bb9523bd0f Merge pull request #31157 from pmorie/kubelet-move
Automatic merge from submit-queue

Kubelet code move: volume / util

Addresses some odds and ends that I apparently missed earlier.  Preparation for kubelet code-move ENDGAME.

cc @kubernetes/sig-node
2016-08-25 00:20:39 -07:00
Kubernetes Submit Queue
189a870ec8 Merge pull request #30376 from justinsb/kubenet_mtu
Automatic merge from submit-queue

Add kubelet --network-plugin-mtu flag for MTU selection

* Add network-plugin-mtu option which lets us pass down an MTU to a network provider (currently processed by kubenet)
* Add a test, and thus make sysctl testable
2016-08-23 21:54:50 -07:00
Kubernetes Submit Queue
64210f43ff Merge pull request #30429 from ZTE-PaaS/zhangke-patch-023
Automatic merge from submit-queue

two nits for kubelet syncPod

Removes a useless '(' and changes a log statement's level to info.
2016-08-23 15:04:59 -07:00
Justin Santa Barbara
902ba4e249 Add network-plugin-mtu option for MTU selection
MTU selection is difficult, and may be impossible if a transport such as
IPsec is in use. So we allow the MTU to be specified with the
network-plugin-mtu flag, and we pass this down into the network
provider.

Currently implemented by kubenet.
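
A hedged sketch of threading such a flag from the command line into the network provider's config; the flag wiring and type names are illustrative, not the actual kubenet code.

```
package main

import (
	"flag"
	"fmt"
)

// 0 means "let the network provider / kernel pick the default MTU".
var networkPluginMTU = flag.Int("network-plugin-mtu", 0,
	"MTU to pass down to the network plugin; 0 uses the default")

// bridgeConfig stands in for whatever config the provider (kubenet here)
// hands to its bridge.
type bridgeConfig struct {
	MTU int
}

func newBridgeConfig(mtu int) bridgeConfig {
	cfg := bridgeConfig{MTU: 1500} // fallback default
	if mtu > 0 {
		cfg.MTU = mtu // explicit operator choice, e.g. a lower MTU under IPsec
	}
	return cfg
}

func main() {
	flag.Parse()
	fmt.Printf("bridge config: %+v\n", newBridgeConfig(*networkPluginMTU))
}
```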
2016-08-23 01:50:58 -04:00
Paul Morie
b91ad76066 Kubelet code move: volume / util 2016-08-22 23:35:11 -04:00
Tim St. Clair
f94df59791
Remove apparmor dependency on pkg/kubelet/lifecycle 2016-08-21 20:59:11 -07:00
Kubernetes Submit Queue
5d54c55710 Merge pull request #30212 from feiskyer/kuberuntime-flag
Automatic merge from submit-queue

Kubelet: add --container-runtime-endpoint and --image-service-endpoint

The flag `--container-runtime-endpoint` (which overrides `--container-runtime`) identifies the unix socket file of the remote runtime service, and the flag `--image-service-endpoint` identifies the unix socket file of the image service.

This PR is part of #28789 Milestone 0. 

CC @yujuhong @Random-Liu
2016-08-21 12:03:10 -07:00
Clayton Coleman
e1ebde9f92
Add spec.nodeName and spec.serviceAccountName to downward env var
The serviceAccountName is occasionally useful for clients running on
Kube that need to know who they are when talking to other components.

The nodeName is useful for PetSet or DaemonSet pods that need to make
calls back to the API to fetch info about their node.

Both fields are immutable, and cannot easily be retrieved in another
way.
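
For reference, a sketch of consuming the two new field paths via the downward API; this uses the present-day k8s.io/api/core/v1 types (the 2016 tree used different package paths), and the env var names are arbitrary.

```
package main

import (
	"fmt"

	corev1 "k8s.io/api/core/v1"
)

func main() {
	// Expose the two newly supported downward-API fields as env vars.
	envs := []corev1.EnvVar{
		{
			Name: "MY_NODE_NAME",
			ValueFrom: &corev1.EnvVarSource{
				FieldRef: &corev1.ObjectFieldSelector{FieldPath: "spec.nodeName"},
			},
		},
		{
			Name: "MY_SERVICE_ACCOUNT",
			ValueFrom: &corev1.EnvVarSource{
				FieldRef: &corev1.ObjectFieldSelector{FieldPath: "spec.serviceAccountName"},
			},
		},
	}
	fmt.Printf("%+v\n", envs)
}
```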
2016-08-20 15:50:36 -04:00
Kubernetes Submit Queue
1b79bc1812 Merge pull request #30731 from ncdc/exec-probe-message
Automatic merge from submit-queue

Always return command output for exec probes and kubelet RunInContainer

Always return command output for exec probes and kubelet RunInContainer, even if the command invocation returns nonzero.

When #24921 replaced RunInContainer with ExecInContainer, it introduced a change where an exec probe that failed no longer included the stdout/stderr from the probe in the event. For example, when running at log level 4, you see:

```
I0816 15:01:36.259826 29713 exec.go:38] Exec probe response: "Failed to access the status endpoint : HTTP Error 404: Not Found.\nHawkular metrics has only been running for 7\n seconds not aborting yet.\n"
```

But the event looks like this:

```
54s 22s 5 hawkular-metrics-hjme4 Pod spec.containers{hawkular-metrics} Warning Unhealthy {kubelet corbeau} Readiness probe failed:
```

Note the absence of the exec probe response after "Readiness probe failed". This PR restores the previous behavior.

cc @kubernetes/rh-cluster-infra @mwringe 

xref https://github.com/openshift/origin/issues/10424
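
A minimal sketch of the restored behavior: keep the command's combined output even when it exits nonzero, so the probe failure event can include it. This is not the kubelet's actual exec path; names are illustrative.

```
package main

import (
	"fmt"
	"os/exec"
)

// runProbe returns the command's combined stdout/stderr regardless of the
// exit code, so callers can surface it in events.
func runProbe(name string, args ...string) (output string, healthy bool) {
	out, err := exec.Command(name, args...).CombinedOutput()
	return string(out), err == nil
}

func main() {
	out, healthy := runProbe("/bin/sh", "-c",
		"echo 'Failed to access the status endpoint: HTTP Error 404: Not Found.' >&2; exit 1")
	if !healthy {
		// The output is preserved even though the probe failed, matching the
		// "Readiness probe failed: <output>" event text described above.
		fmt.Printf("Readiness probe failed: %s", out)
	}
}
```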
2016-08-20 05:41:44 -07:00
Kubernetes Submit Queue
9e09839477 Merge pull request #30487 from ronnielai/container-gc
Automatic merge from submit-queue

Delete all dead containers only after the syncing for the evicted pod is done.
2016-08-20 01:03:39 -07:00
Kubernetes Submit Queue
e9815020eb Merge pull request #30475 from derekwaynecarr/pod-cgroup
Automatic merge from submit-queue

Unblock iterative development on pod-level cgroups

In order to allow forward progress on this feature, it takes the commits from #28017 and #29049 and then globally disables the flag that allows these features to be exercised in the kubelet. The flag can be re-added to the kubelet when it's actually ready.

/cc @vishh @dubstack @kubernetes/rh-cluster-infra
2016-08-19 21:06:48 -07:00
Kubernetes Submit Queue
6ce405c6ee Merge pull request #27778 from screeley44/k8-vol-executor
Automatic merge from submit-queue

Add Events for operation_executor to show the status of mounts (failed or successful) in describe events

Fixes #27590 
@saad-ali @pmorie @erinboyd

After talking with @pmorie last week about the above issue, I decided to poke around and see if I could remedy it. The refactoring broke my previously merged UXP PRs that correctly showed failed mount errors in the describe events. I'm not sure I implemented this correctly, but it tested out and seems to be working; let me know what I missed or if this is not the correct approach.

```
Events:
  FirstSeen	LastSeen	Count	From			SubobjectPath	Type		Reason		Message
  ---------	--------	-----	----			-------------	--------	------		-------
  2m		2m		1	{default-scheduler }			Normal		Scheduled	Successfully assigned nfs-bb-pod1 to 127.0.0.1
  44s		44s		1	{kubelet 127.0.0.1}			Warning		FailedMount	Unable to mount volumes for pod "nfs-bb-pod1_default(a94f64f1-37c9-11e6-9aa5-52540073d346)": timeout expired waiting for volumes to attach/mount for pod "nfs-bb-pod1"/"default". list of unattached/unmounted volumes=[nfsvol]
  44s		44s		1	{kubelet 127.0.0.1}			Warning		FailedSync	Error syncing pod, skipping: timeout expired waiting for volumes to attach/mount for pod "nfs-bb-pod1"/"default". list of unattached/unmounted volumes=[nfsvol]
  38s		38s		1	{kubelet }				Warning		FailedMount	Unable to mount volumes for pod "a94f64f1-37c9-11e6-9aa5-52540073d346": Mount failed: exit status 32
Mounting arguments: nfs1.rhs:/opt/data99 /var/lib/kubelet/pods/a94f64f1-37c9-11e6-9aa5-52540073d346/volumes/kubernetes.io~nfs/nfsvol nfs []
Output: mount.nfs: Connection timed out

Resolution hint: Check and make sure the NFS Server exists (ensure that correct IPAddress/Hostname was given) and is available/reachable.
Also make sure firewall ports are open on both client and NFS Server (2049 v4 and 2049, 20048 and 111 for v3).
Use commands telnet <nfs server> <port> and showmount <nfs server> to help test connectivity.
```
2016-08-19 08:27:48 -07:00
dubstack
4ddfe172ce Add support for pod container management 2016-08-19 11:07:33 -04:00
Pengfei Ni
b36ace9a57 Kubelet: add --container-runtime-endpoint and --image-service-endpoint
The new flag --container-runtime-endpoint (which overrides --container-runtime)
is introduced to the kubelet and identifies the unix socket file of
the remote runtime service. The new flag --image-service-endpoint is
introduced to the kubelet and identifies the unix socket file of the
image service.
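
A hedged sketch of what consuming these endpoints looks like: both flags name local unix sockets, and the image-service endpoint commonly falls back to the runtime endpoint when unset. The real client layers gRPC on top of the connection; the flag handling here is illustrative only.

```
package main

import (
	"flag"
	"log"
	"net"
	"time"
)

var (
	containerRuntimeEndpoint = flag.String("container-runtime-endpoint", "",
		"unix socket file of the remote runtime service")
	imageServiceEndpoint = flag.String("image-service-endpoint", "",
		"unix socket file of the image service; empty means reuse the runtime endpoint")
)

func main() {
	flag.Parse()
	if *containerRuntimeEndpoint == "" {
		log.Fatal("--container-runtime-endpoint is required in this sketch")
	}
	imageEP := *imageServiceEndpoint
	if imageEP == "" {
		imageEP = *containerRuntimeEndpoint
	}

	// A plain unix-socket dial is enough to show what the flags identify;
	// the actual remote runtime/image clients speak gRPC over this socket.
	conn, err := net.DialTimeout("unix", *containerRuntimeEndpoint, 2*time.Second)
	if err != nil {
		log.Fatalf("connecting to runtime service at %s: %v", *containerRuntimeEndpoint, err)
	}
	defer conn.Close()
	log.Printf("runtime service: %s, image service: %s", *containerRuntimeEndpoint, imageEP)
}
```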
2016-08-19 10:22:44 +08:00
Minhan Xia
1acaa1db09 Revert "Revert "syncNetworkUtil in kubelet and fix loadbalancerSourceRange on GCE"" 2016-08-18 10:19:48 -07:00
Andy Goldstein
c3fe759fec Always return exec command output
Always return exec command output, even if the command invocation returns nonzero. This applies to
exec probes and kubelet RunInContainer calls.
2016-08-17 16:21:19 -04:00
Kubernetes Submit Queue
f3f818a190 Merge pull request #29639 from aveshagarwal/master-default-resources-limits-fix
Automatic merge from submit-queue

Fix default resource limits (node allocatable) for downward api volumes and env vars

@kubernetes/rh-cluster-infra  @pmorie @derekwaynecarr
2016-08-17 11:37:41 -07:00
Scott Creeley
782d7d9815 Add Events for operation_executor to show status of mounts, failed or successful 2016-08-17 09:53:47 -04:00
Daniel Smith
2aa0bb2dfc Revert "syncNetworkUtil in kubelet and fix loadbalancerSourceRange on GCE" 2016-08-16 18:12:28 -07:00
Kubernetes Submit Queue
d412d5721d Merge pull request #30486 from freehan/lbsrcfix
Automatic merge from submit-queue

syncNetworkUtil in kubelet and fix loadbalancerSourceRange on GCE

fixes: #29997 #29039

@yujuhong Can you take a look at the kubelet part?

@girishkalele KUBE-MARK-DROP is the chain for marking connections to be dropped. Marked connections will be dropped in the INPUT/OUTPUT chains of the filter table. Let me know if this is good enough for your use case.
2016-08-16 15:22:34 -07:00
Avesh Agarwal
52a60fe3be Fix default resource limits (node capacities) for downward api volumes 2016-08-16 14:41:17 -04:00
Kubernetes Submit Queue
5962874414 Merge pull request #30118 from timstclair/aa-hookup
Automatic merge from submit-queue

Implement AppArmor Kubelet support

Includes PR https://github.com/kubernetes/kubernetes/pull/29812

Implements the Kubelet logic for AppArmor based on the alpha API proposed [here](https://github.com/kubernetes/kubernetes/blob/master/docs/proposals/apparmor.md). Also adds an E2E test, and I ran manual tests.

Remaining work: PodSecurityPolicy support, profile loader daemon, documentation, (maybe) beta API.

/cc @jfrazelle @Amey-D @kubernetes/sig-node 

*Note on release-note-none: I am implementing AppArmor over multiple PRs. I will submit a single release note once the implementation is done to cover all of them.*
2016-08-15 22:32:58 -07:00
Minhan Xia
3bf8679232 add syncNetworkUtil in kubelet 2016-08-15 17:42:35 -07:00
bindata-mockuser
e067f7548f Delete all dead containers only after pod syncing is done. 2016-08-15 14:36:51 -07:00
Tim St. Clair
3c7896719b
Implement AppArmor Kubelet support 2016-08-15 13:25:17 -07:00
Jing Xu
f19a1148db This change supports robust kubelet volume cleanup
Currently kubelet volume management works on the concept of a desired
world of state and an actual world of state. The volume manager periodically compares the
two worlds and performs volume mount/unmount and/or attach/detach
operations. When the kubelet restarts, the caches of those two worlds are
gone. Although the desired world can be recovered through the apiserver, the actual
world cannot, which means some volumes may never be cleaned
up if their information has been deleted from the apiserver. This change adds
reconstruction of the actual world by reading the pod directories from
disk. The reconstructed volume information is added to both the desired
world and the actual world if it cannot be found in either. The rest of the
logic is the same as before: the desired world populator may clean up
the volume entry if it is no longer in the apiserver, and the volume
manager should then invoke unmount to clean it up.
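
A simplified sketch of the reconstruction step: walk the on-disk pod directory layout (<root>/pods/<podUID>/volumes/<plugin>/<volume>) and report the volumes found, so they can be fed back into the desired/actual worlds. Paths and types are illustrative, not the actual reconciler code.

```
package main

import (
	"fmt"
	"os"
	"path/filepath"
)

type foundVolume struct {
	PodUID, Plugin, Volume, Path string
}

// reconstructVolumes scans the kubelet pod directories on disk and returns
// every volume directory it finds, independent of any in-memory cache.
func reconstructVolumes(kubeletRoot string) ([]foundVolume, error) {
	var found []foundVolume
	podsDir := filepath.Join(kubeletRoot, "pods")
	pods, err := os.ReadDir(podsDir)
	if err != nil {
		return nil, err
	}
	for _, pod := range pods {
		volsDir := filepath.Join(podsDir, pod.Name(), "volumes")
		plugins, err := os.ReadDir(volsDir)
		if err != nil {
			continue // pod has no volumes directory; nothing to reconstruct
		}
		for _, plugin := range plugins {
			vols, err := os.ReadDir(filepath.Join(volsDir, plugin.Name()))
			if err != nil {
				continue
			}
			for _, vol := range vols {
				found = append(found, foundVolume{
					PodUID: pod.Name(),
					Plugin: plugin.Name(),
					Volume: vol.Name(),
					Path:   filepath.Join(volsDir, plugin.Name(), vol.Name()),
				})
			}
		}
	}
	return found, nil
}

func main() {
	vols, err := reconstructVolumes("/var/lib/kubelet")
	if err != nil {
		fmt.Fprintln(os.Stderr, err)
		os.Exit(1)
	}
	for _, v := range vols {
		// Entries missing from both worlds would be added back so the normal
		// reconcile/unmount path can clean them up.
		fmt.Printf("%s %s/%s -> %s\n", v.PodUID, v.Plugin, v.Volume, v.Path)
	}
}
```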
2016-08-15 11:29:15 -07:00
Ke Zhang
3950f3253a two nits for kubelet syncPod 2016-08-12 09:18:29 +08:00
Yu-Ju Hong
8e48221c24 kubelet: mark source ready after updating the cache
This ensures that cleanup routines don't start until the cache content is
up-to-date.
2016-08-10 17:55:10 -07:00
Kubernetes Submit Queue
a9af8a56b4 Merge pull request #30325 from ronnielai/test1
Automatic merge from submit-queue

Fixing a potential container deletion GC timing issue 

If the pod manager is updated before all containers in a pod are deleted, the container cleanup logic should still be triggered.

2016-08-10 03:13:13 -07:00
bindata-mockuser
8ee2dc88f2 Container deletion should still happen when pod is removed from pod manager 2016-08-09 16:51:55 -07:00
mksalawa
2749ec7555 Create PredicateFailureReason, modify scheduler predicate interface. 2016-08-09 14:01:46 +02:00
Davanum Srinivas
e0edfebe82 Verify volume.GetPath() never returns ""
Add a new helper method volume.GetPath(Mounter) instead of calling
the GetPath() of the Mounter directly. Check whether GetPath() returns
"" and convert that into an error. At this point we only have
information about the type of the Mounter, so log that if
there is a problem.

Fixes #23163
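
A small sketch of the helper described above: wrap the Mounter's GetPath(), convert an empty result into an error, and record the mounter's concrete type since that is all that is known at this point. The package and interface names here are illustrative.

```
package volumeutil

import (
	"fmt"
	"reflect"
)

// Mounter is the slice of the volume Mounter interface this sketch needs.
type Mounter interface {
	GetPath() string
}

// GetPath calls mounter.GetPath() and turns an empty path into an error that
// names the mounter's concrete type.
func GetPath(mounter Mounter) (string, error) {
	if mounter == nil {
		return "", fmt.Errorf("mounter is nil")
	}
	path := mounter.GetPath()
	if path == "" {
		return "", fmt.Errorf("path is empty for mounter of type %v", reflect.TypeOf(mounter))
	}
	return path, nil
}
```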
2016-08-05 08:45:33 -04:00
Kubernetes Submit Queue
34e51d8ce9 Merge pull request #30095 from ronnielai/image-gc-2
Automatic merge from submit-queue

Moving image gc to pkg/kubelet/images
2016-08-05 03:11:33 -07:00
Kubernetes Submit Queue
4700b6fb3c Merge pull request #29880 from derekwaynecarr/disk-pressure-image-gc
Automatic merge from submit-queue

Node disk pressure should induce image gc

If the node reports disk pressure, prior to evicting pods, the node should clean up unused images.
2016-08-04 17:03:19 -07:00
Kubernetes Submit Queue
88f987e7e2 Merge pull request #29973 from ZTE-PaaS/zhangke-patch-016
Automatic merge from submit-queue

optimize podKiller for reading channel

Reads from kl.podKillingCh should check the ok value first, and only then process the data.
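
The idiom in question, as a tiny sketch (the pod type is a stand-in): check the receive's ok value before touching the data, so a closed channel stops the loop instead of being processed as a zero value.

```
package main

import "fmt"

type podPair struct{ name string }

func podKiller(podKillingCh <-chan *podPair) {
	for {
		pair, ok := <-podKillingCh
		if !ok {
			return // channel closed; do not process the nil zero value
		}
		fmt.Println("killing pod", pair.name)
	}
}

func main() {
	ch := make(chan *podPair, 1)
	ch <- &podPair{name: "busybox"}
	close(ch)
	podKiller(ch)
}
```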
2016-08-04 16:25:54 -07:00
derekwaynecarr
68bc47ecc6 Add support to invoke image gc in response to disk eviction thresholds 2016-08-04 17:13:08 -04:00
bindata-mockuser
0c76d85cc8 moving image gc to images 2016-08-04 12:26:06 -07:00
Kubernetes Submit Queue
1933462c7b Merge pull request #29925 from ronnielai/container-gc
Automatic merge from submit-queue

Delete containers when pod is evicted

#29803
2016-08-04 04:20:02 -07:00
Ron Lai
8bc4444f16 Delete containers when pod is deleted 2016-08-03 15:56:04 -07:00
Ke Zhang
5d19daa2e2 optimize podKiller for reading channel 2016-08-03 15:36:04 +08:00
Andrey Kurilin
9f1c3a4c56 Fix various typos in kubelet 2016-08-03 01:14:44 +03:00
k8s-merge-robot
63602348a4 Merge pull request #29009 from bboreham/hairpin-via-cni
Automatic merge from submit-queue

Use the CNI bridge plugin to set hairpin mode

Following up this part of #23711:

>  I'd like to wait until containernetworking/cni#175 lands and then just pass the request through to CNI.

The code here just
 * passes the required setting down from kubenet to CNI (a sketch of the resulting netconf follows below)
 * stops `DockerManager` from doing hairpin-veth if kubenet is in use

Note to test you need a very recent version of the CNI `bridge` plugin; the one brought in by #28799 should be OK.

Also relates to https://github.com/kubernetes/kubernetes/issues/19766#issuecomment-232722864
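
A hedged sketch of the kind of bridge-plugin netconf this produces; "hairpinMode" is the bridge plugin's own knob, while the other values here (bridge name, subnet, CNI version) are illustrative rather than taken from kubenet.

```
package main

import (
	"encoding/json"
	"fmt"
)

func main() {
	// kubenet-style netconf for the CNI bridge plugin with hairpin mode on,
	// instead of DockerManager toggling the veth's hairpin flag itself.
	netconf := map[string]interface{}{
		"cniVersion":  "0.1.0",
		"name":        "kubenet",
		"type":        "bridge",
		"bridge":      "cbr0",
		"isGateway":   true,
		"hairpinMode": true,
		"ipam": map[string]interface{}{
			"type":   "host-local",
			"subnet": "10.244.1.0/24",
		},
	}
	b, _ := json.MarshalIndent(netconf, "", "  ")
	fmt.Println(string(b))
}
```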
2016-07-31 10:08:06 -07:00
k8s-merge-robot
c5756d22e2 Merge pull request #29779 from 249043822/patch-1
Automatic merge from submit-queue

make log description more readable
2016-07-29 17:25:28 -07:00
k8s-merge-robot
2c4599bf45 Merge pull request #28793 from ronnielai/container-gc
Automatic merge from submit-queue

Trigger container cleanup within a pod when a container exiting event is detected

#25239
2016-07-29 16:40:01 -07:00
k8s-merge-robot
5760acf603 Merge pull request #29596 from matttproud/fix/time-leaks/remainder
Automatic merge from submit-queue

pkg/various: plug leaky time.New{Timer,Ticker}s

According to the documentation for Go package time, `time.Ticker` and
`time.Timer` are uncollectable by garbage collector finalizers.  They
leak until otherwise stopped.  This commit ensures that all remaining
instances are stopped upon departure from their respective scopes.

Similar efforts were incrementally done in #29439 and #29114.

```release-note
* pkg/various: plugged various time.Ticker and time.Timer leaks.
```
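
The pattern being applied, in miniature: stop every ticker and timer when leaving the scope that created it, since an unstopped one keeps its resources alive.

```
package main

import (
	"fmt"
	"time"
)

func pollUntil(done <-chan struct{}) {
	ticker := time.NewTicker(time.Second)
	defer ticker.Stop() // without this, the ticker's resources outlive the function

	timeout := time.NewTimer(10 * time.Second)
	defer timeout.Stop() // likewise for timers that may never fire

	for {
		select {
		case <-ticker.C:
			fmt.Println("tick")
		case <-timeout.C:
			return
		case <-done:
			return
		}
	}
}

func main() {
	done := make(chan struct{})
	go func() { time.Sleep(3 * time.Second); close(done) }()
	pollUntil(done)
}
```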
2016-07-29 14:06:47 -07:00
KeZhang
fe031d3347 make log description more readable 2016-07-29 22:50:56 +08:00