Commit Graph

12610 Commits

Author SHA1 Message Date
Jan Safranek
ee74cc4354 Fix fake event recorder race
The event recorder should wait for some time to get all expected events; an event
may be written by another goroutine that has just finished.

It should not slow down the test in most cases, only when there is a bug and an
expected event is not sent.
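
The fix's shape, as a minimal Go sketch; the helper name and the channel-based fake recorder are assumptions for illustration, not the PR's actual code:

```go
package recorder

import (
	"fmt"
	"time"
)

// waitForEvents blocks until the fake recorder has delivered the expected
// number of events or the timeout fires. In the passing case the events are
// usually already queued, so this returns almost immediately; only a buggy
// run with a missing event pays the full timeout.
func waitForEvents(events <-chan string, expected int, timeout time.Duration) error {
	deadline := time.After(timeout)
	for received := 0; received < expected; received++ {
		select {
		case <-events:
		case <-deadline:
			return fmt.Errorf("timed out: got %d of %d expected events", received, expected)
		}
	}
	return nil
}
```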
2016-06-01 10:16:35 +02:00
k8s-merge-robot
04f77dd602 Merge pull request #26556 from jsafrane/fix-format
Automatic merge from submit-queue

Fix log arguments.

'i' is not printed.
@kubernetes/sig-storage
2016-05-31 21:24:50 -07:00
k8s-merge-robot
38d5be4f36 Merge pull request #26555 from jsafrane/stabilize-test-flakes
Automatic merge from submit-queue

Stabilize controller unit tests.

Remove test "5-1", it's flaky as it depends on order of execution of goroutines. When the controller starts, existing claim is enqueued as "initial sync event" and a new volume is enqueued to separate goroutine. It is not deterministic which goroutine processes its events first and there is no way how to tell that the claim event was processed.

Also, force resync of the controllers after the test to make sure all events are processed.

Fixes unit test flakes.
@kubernetes/sig-storage
2016-05-31 17:06:12 -07:00
k8s-merge-robot
5288a255f4 Merge pull request #25567 from gmarek/validate
Automatic merge from submit-queue

Add Controller field to OwnerReference

cc @davidopp
2016-05-31 14:21:38 -07:00
k8s-merge-robot
52cc96d5a0 Merge pull request #24569 from williamsandrew/elb-proxy-protocol
Automatic merge from submit-queue

AWS: ELB proxy protocol support via annotation service.beta.kubernetes.io/aws-load-balancer-proxy-protocol

This is a ~~work in progress~~ branch that adds support for the Proxy Protocol with Elastic Load Balancers. The proxy protocol is documented here: http://www.haproxy.org/download/1.5/doc/proxy-protocol.txt. It allows us to pass the "real IP" address of a client to pods behind services.

As it stands now, we create an ELB policy on the load balancer that enables the proxy protocol. We then enumerate each node port assigned to the load balancer and add our newly created policy to it. The manual process is documented here: http://docs.aws.amazon.com/ElasticLoadBalancing/latest/DeveloperGuide/enable-proxy-protocol.html
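
A sketch of those two steps with the vendored aws-sdk-go client (error handling trimmed; the policy name is an assumption). Note that `SetLoadBalancerPoliciesForBackendServer` replaces the whole policy list for a port, so real code has to merge with any policies already attached:

```go
package cloudprovider

import (
	"github.com/aws/aws-sdk-go/aws"
	"github.com/aws/aws-sdk-go/service/elb"
)

// enableProxyProtocol creates a ProxyProtocol policy on the ELB and attaches
// it to every backend (node) port the load balancer forwards to.
func enableProxyProtocol(client *elb.ELB, lbName string, backendPorts []int64) error {
	policy := "k8s-proxyprotocol-enabled" // hypothetical policy name
	_, err := client.CreateLoadBalancerPolicy(&elb.CreateLoadBalancerPolicyInput{
		LoadBalancerName: aws.String(lbName),
		PolicyName:       aws.String(policy),
		PolicyTypeName:   aws.String("ProxyProtocolPolicyType"),
		PolicyAttributes: []*elb.PolicyAttribute{{
			AttributeName:  aws.String("ProxyProtocol"),
			AttributeValue: aws.String("true"),
		}},
	})
	if err != nil {
		return err
	}
	for _, port := range backendPorts {
		// Caution: this replaces the port's policy set, it does not append.
		_, err = client.SetLoadBalancerPoliciesForBackendServer(&elb.SetLoadBalancerPoliciesForBackendServerInput{
			LoadBalancerName: aws.String(lbName),
			InstancePort:     aws.Int64(port),
			PolicyNames:      []*string{aws.String(policy)},
		})
		if err != nil {
			return err
		}
	}
	return nil
}
```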


Right now, I’m looking to get some feedback on the approach before I dive much deeper into the code. More precisely, I have questions regarding the following:

1) Right now I just check that a certain annotation exists on the service, regardless of its value. Assuming we’re going to enable this feature via an annotation, what is the expected experience? This decision likely depends on the answers to the next questions.

2) Right now the implementation enables the proxy protocol on every ELB backend. The actual ELB API expects you to add the policy for each configured backend. Do we want the ability to configure the proxy protocol on a per-service-port basis? For example, if a service exposes TCP 80 and 443, would we want the ability to only enable the proxy protocol on port 443? Does this overcomplicate the implementation? If we wanted to go in this direction, we could do something like ...

```
{
  "service.beta.kubernetes.io/aws-load-balancer-proxy-protocol": "tcp:80,tcp:443"
}
```

3) I avoided this because I was concerned with scope creep and our organization doesn’t need it, but could/should our implementation be adjusted to just handle ELB policies in general? I hadn’t used the ELB API until I started working on this branch so I don’t know how realistic this is. I also don't know how common this use case is as our organization has used our own load balancing setup prior to Kubernetes. This page has a couple of examples at the bottom: http://docs.aws.amazon.com/cli/latest/reference/elb/create-load-balancer-policy.html

cc @justinsb

2016-05-31 12:37:57 -07:00
k8s-merge-robot
d957e78a41 Merge pull request #25253 from soltysh/issue24533
Automatic merge from submit-queue

kubectl run --restart=Never creates pods

Fixes #24533.

@bgrant0607 @janetkuo ptal
/fyi @thockin

```release-note
* kubectl run --restart=Never creates pods
```
2016-05-31 11:44:05 -07:00
gmarek
778b1df717 Add Controller to api/meta 2016-05-31 20:21:05 +02:00
k8s-merge-robot
484830c763 Merge pull request #26564 from wojtek-t/fix_pod_annotations
Automatic merge from submit-queue

Fix apiserver crashes

Ref #26563
2016-05-31 10:55:48 -07:00
Andrew Williams
01d9cddda5 Add Amazon ELB proxy protocol support
Add ELB proxy protocol support via the annotation
"service.beta.kubernetes.io/aws-load-balancer-proxy-protocol". This
allows servers like Nginx and Haproxy to retrieve the real IP address of
a remote client.
2016-05-31 10:33:16 -05:00
Wojciech Tyczynski
d002cb1d63 Fix apiserver crashes 2016-05-31 17:26:35 +02:00
k8s-merge-robot
38181bb3fb Merge pull request #25917 from pmorie/pv-selector
Automatic merge from submit-queue

Add LabelSelector to PersistentVolumeClaimSpec

Implements #25413.
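
A sketch of the API shape (simplified; the real `LabelSelector` also supports match expressions and lives with the other API types):

```go
package api

// LabelSelector matches a set of labels (simplified for illustration).
type LabelSelector struct {
	MatchLabels map[string]string `json:"matchLabels,omitempty"`
}

// PersistentVolumeClaimSpec shows only the new field; the existing ones
// (AccessModes, Resources, VolumeName) are elided.
type PersistentVolumeClaimSpec struct {
	// Selector restricts binding: only PVs whose labels match the selector
	// are candidates for this claim.
	Selector *LabelSelector `json:"selector,omitempty"`
}
```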

@kubernetes/sig-storage @bgrant0607 @thockin @jsafrane @eparis
2016-05-31 08:22:07 -07:00
k8s-merge-robot
9a4c2feecb Merge pull request #26177 from yifan-gu/fix_docker_auth
Automatic merge from submit-queue

rkt: Fix docker auth config save directory to avoid race.

Fixes https://github.com/kubernetes/kubernetes/issues/26117

cc @euank @sjpotter
2016-05-31 07:33:49 -07:00
gmarek
a6dd89d797 Add Controller field to OwnerReference 2016-05-31 15:33:35 +02:00
Paul Morie
acfcb73533 Regen for pv selector 2016-05-31 09:32:23 -04:00
k8s-merge-robot
ae1fb82cfc Merge pull request #26073 from piosz/remove-metrics-group
Automatic merge from submit-queue

Removed metrics api group

```release-note
Removed metrics api group
```
The group is empty and unused. The Kubelet Metrics API is defined in the Kubelet code; the Master Metrics API is defined in Heapster. Removing the group to avoid confusion.

2016-05-31 03:50:24 -07:00
Jan Safranek
21059e8b6d Fix log arguments.
'i' is not printed.
2016-05-31 12:12:15 +02:00
Jan Safranek
011eac7c8b Stabilize controller unit tests.
Remove test "5-1", it's flaky as it depends on order of execution of
goroutines. When the controller starts, existing claim is enqueued as
"initial sync event" and a new volume is enqueued to separate goroutine.
It is not deterministic which goroutine processes its events first and
there is no way how to tell that the claim event was processed.

Also, force resync of the controllers after the test to make sure all
events are processed.
2016-05-31 12:07:47 +02:00
k8s-merge-robot
c805303644 Merge pull request #26162 from jszczepkowski/kubectl-fix2
Automatic merge from submit-queue

Fixed check in kubectl autoscale.

```release-note
Fixed check in kubectl autoscale: cpu consumption can be higher than 100%.
```

Fixed check in kubectl autoscale: cpu consumption can be higher than 100%. Fixes #25815.
2016-05-31 03:00:05 -07:00
Piotr Szczesniak
22dc21d703 Removed metrics api group 2016-05-31 09:48:39 +02:00
gmarek
7cac170214 AllocateOrOccupyCIDR returns quickly 2016-05-31 09:11:42 +02:00
k8s-merge-robot
d1277e34fd Merge pull request #25913 from pweil-/ds-tombstone
Automatic merge from submit-queue

daemonset handle DeletedFinalStateUnknown

During an e2e run in OpenShift we ran into a DS controller panic when handling `DeletedFinalStateUnknown`. This PR checks for `DeletedFinalStateUnknown` and queues the embedded object if it is a `DaemonSet`.
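
The shape of the fix, assuming the controllers' usual tombstone-handling pattern (imports of `cache` and `glog` as elsewhere in the controller; the handler name is illustrative):

```go
// deleteDaemonset tolerates tombstones: an OnDelete handler can receive
// either the object itself or a cache.DeletedFinalStateUnknown wrapper when
// the watch missed the final delete event, so it must unwrap before
// type-asserting instead of panicking.
func (dsc *DaemonSetsController) deleteDaemonset(obj interface{}) {
	ds, ok := obj.(*extensions.DaemonSet)
	if !ok {
		tombstone, ok := obj.(cache.DeletedFinalStateUnknown)
		if !ok {
			glog.Errorf("couldn't get object from tombstone %+v", obj)
			return
		}
		ds, ok = tombstone.Obj.(*extensions.DaemonSet)
		if !ok {
			glog.Errorf("tombstone contained object that is not a DaemonSet %+v", obj)
			return
		}
	}
	dsc.enqueueDaemonSet(ds)
}
```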

@mikedanese - would you mind taking a look?
@deads2k  

```
panic: interface conversion: interface is cache.DeletedFinalStateUnknown, not *extensions.DaemonSet

goroutine 4369 [running]:
k8s.io/kubernetes/pkg/controller/daemon.func·005(0x2f8a0c0, 0xc20b559680)
	/data/src/github.com/openshift/origin/Godeps/_workspace/src/k8s.io/kubernetes/pkg/controller/daemon/controller.go:160 +0x50
k8s.io/kubernetes/pkg/controller/framework.ResourceEventHandlerFuncs.OnDelete(0xc20a0ae090, 0xc20a0ae0a0, 0xc20a0ae0b0, 0x2f8a0c0, 0xc20b559680)
	/data/src/github.com/openshift/origin/Godeps/_workspace/src/k8s.io/kubernetes/pkg/controller/framework/controller.go:178 +0x41
k8s.io/kubernetes/pkg/controller/framework.(*ResourceEventHandlerFuncs).OnDelete(0xc20b8ebf20, 0x2f8a0c0, 0xc20b559680)
	<autogenerated>:25 +0xb5
k8s.io/kubernetes/pkg/controller/framework.func·001(0x2f8a280, 0xc20b5522e0, 0x0, 0x0)
	/data/src/github.com/openshift/origin/Godeps/_workspace/src/k8s.io/kubernetes/pkg/controller/framework/controller.go:248 +0x4be
k8s.io/kubernetes/pkg/controller/framework.(*Controller).processLoop(0xc20bb727e0)
	/data/src/github.com/openshift/origin/Godeps/_workspace/src/k8s.io/kubernetes/pkg/controller/framework/controller.go:122 +0x6f
k8s.io/kubernetes/pkg/controller/framework.*Controller.(k8s.io/kubernetes/pkg/controller/framework.processLoop)·fm()
	/data/src/github.com/openshift/origin/Godeps/_workspace/src/k8s.io/kubernetes/pkg/controller/framework/controller.go:97 +0x27
k8s.io/kubernetes/pkg/util/wait.func·001()
	/data/src/github.com/openshift/origin/Godeps/_workspace/src/k8s.io/kubernetes/pkg/util/wait/wait.go:66 +0x61
k8s.io/kubernetes/pkg/util/wait.JitterUntil(0xc209f8cfb8, 0x3b9aca00, 0x0, 0xc2080543c0)
	/data/src/github.com/openshift/origin/Godeps/_workspace/src/k8s.io/kubernetes/pkg/util/wait/wait.go:67 +0x8f
k8s.io/kubernetes/pkg/util/wait.Until(0xc209f8cfb8, 0x3b9aca00, 0xc2080543c0)
	/data/src/github.com/openshift/origin/Godeps/_workspace/src/k8s.io/kubernetes/pkg/util/wait/wait.go:47 +0x4a
k8s.io/kubernetes/pkg/controller/framework.(*Controller).Run(0xc20bb727e0, 0xc2080543c0)
	/data/src/github.com/openshift/origin/Godeps/_workspace/src/k8s.io/kubernetes/pkg/controller/framework/controller.go:97 +0x1fb
created by k8s.io/kubernetes/pkg/controller/daemon.(*DaemonSetsController).Run
	/data/src/github.com/openshift/origin/Godeps/_workspace/src/k8s.io/kubernetes/pkg/controller/daemon/controller.go:212 +0xae
```
https://ci.openshift.redhat.com/jenkins/job/test_pull_requests_origin_check/1002/artifact/origin/artifacts/test-cmd/logs/openshift.log
2016-05-30 17:54:17 -07:00
Paul Morie
4ffa3c6754 Add label selector to match criteria for claims to volumes 2016-05-30 12:11:12 -04:00
Paul Morie
faa112bad1 Add selector to PersistentVolumeClaim 2016-05-30 12:09:50 -04:00
k8s-merge-robot
dff1ed1497 Merge pull request #26106 from soltysh/scheduledjob_validation
Automatic merge from submit-queue

ScheduledJob validation

@erictune while playing earlier today I noticed `suspend` isn't a pointer, which requires it to be set. Additionally, the validation for job selectors is too strict in that it requires the selector to match the produced pods, which doesn't make sense for ScheduledJob; I've changed setting it to be forbidden entirely.
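
A sketch of the field change in isolation; a `*bool` lets validation distinguish "not specified" (nil, defaulted by the server) from an explicit `false`:

```go
package batch

// ScheduledJobSpec shows only the changed field; Schedule, JobTemplate and
// the other fields are elided.
type ScheduledJobSpec struct {
	// Suspend tells the controller to skip subsequent executions. nil means
	// "unset" and is defaulted to false, so clients no longer have to set it.
	Suspend *bool `json:"suspend,omitempty"`
}
```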

2016-05-30 09:05:01 -07:00
k8s-merge-robot
6a234a2cc2 Merge pull request #24882 from brendandburns/3rdparty
Automatic merge from submit-queue

Add support for 3rd party objects to kubectl label

Fixes https://github.com/kubernetes/kubernetes/issues/24583

@kubernetes/rh-ux
2016-05-30 07:24:51 -07:00
Maciej Szulik
e1aa8835d9 Generated changes to suspend becoming pointer for ScheduledJob 2016-05-30 15:52:58 +02:00
Maciej Szulik
d8b9495ea0 Change suspend to be pointer for ScheduledJob and modify validation to forbid setting job selectors 2016-05-30 15:43:23 +02:00
k8s-merge-robot
9aeeef1d81 Merge pull request #26414 from jsafrane/reduce-sync-period
Automatic merge from submit-queue

Reduce volume controller sync period

Fixes #24236 and most probably also fixes #25294.
Needs #25881! With the cache, the binder is not affected by the sync period. Without the cache, binding of 1000 PVCs takes more than 5 minutes (instead of ~70 seconds).

15 seconds was chosen by a fair 2d10 roll :-)
2016-05-30 05:54:51 -07:00
Yifan Gu
1d40f471b4 rkt: Fix docker auth config save directory to avoid race. 2016-05-30 20:40:31 +08:00
k8s-merge-robot
e531a7784e Merge pull request #26242 from metral/refactor-get
Automatic merge from submit-queue

fix recursive get for proper err display

- refactor code to use `Infos()` instead
- fixes https://github.com/kubernetes/kubernetes/issues/26241
2016-05-30 05:04:04 -07:00
k8s-merge-robot
5643b7498f Merge pull request #25881 from jsafrane/devel/pv-add-cache
Automatic merge from submit-queue

volume controller: Add cache with the latest version of PVs and PVCs

When the controller binds a PV to a PVC, it saves both objects to etcd. However, there is still an old version of these objects in the controller's Informer cache. So, when a new PVC comes in, the PV is still seen as available and may get bound to the new PVC. This will be blocked by etcd; still, it creates unnecessary traffic that slows everything down.

To make everything worse, when a periodic sync with the old PVC is performed, this PVC is seen by the controller as Pending (while it's already Bound in etcd) and will be bound to a different PV. Writing to this PV won't be blocked by etcd; only the subsequent write of the PVC fails. So, the controller will need to roll back the PV in another transaction (or several). The controller can keep itself pretty busy this way.

Also, we save bound PVs (and PVCs) as two transactions - we save, say, PV.Spec first and then .Status. The controller gets a "PV.Spec updated" event from etcd and tries to fix the Status, which seems outdated to it. This write again fails - there already is a correct version in etcd.

As we can't influence the Informer cache (it is read-only to the controller), this patch introduces a second cache in the controller, which holds the latest and greatest version of PVs and PVCs, to prevent these useless writes to etcd. It gets updated with events from etcd *and* after etcd confirms a successful save of a PV/PVC modified by the controller.

The cache stores only *pointers* to PVs/PVCs, so in the ideal case it shares the actual object data with the informer cache. They will diverge only for a short time, when the controller has modified something and the informer cache has not received the update events yet.
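
A minimal sketch of the idea with hypothetical names (the real cache also covers PVCs and handles deletes). It keeps a pointer to the newest known copy of each PV and drops anything older, comparing ResourceVersions numerically since etcd issues them monotonically:

```go
package persistentvolume

import (
	"strconv"
	"sync"

	"k8s.io/kubernetes/pkg/api"
)

type latestVersionCache struct {
	mu  sync.Mutex
	pvs map[string]*api.PersistentVolume // keyed by PV name
}

// update stores pv unless the cached copy is the same age or newer, e.g.
// because the controller's own confirmed save already beat this watch event.
func (c *latestVersionCache) update(pv *api.PersistentVolume) {
	c.mu.Lock()
	defer c.mu.Unlock()
	if old, found := c.pvs[pv.Name]; found &&
		parseVersion(old.ResourceVersion) >= parseVersion(pv.ResourceVersion) {
		return
	}
	c.pvs[pv.Name] = pv
}

func parseVersion(rv string) int64 {
	n, _ := strconv.ParseInt(rv, 10, 64)
	return n
}
```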

@kubernetes/sig-storage
2016-05-30 04:13:18 -07:00
k8s-merge-robot
270e85960b Merge pull request #23801 from sttts/sttts-kubectl-completion-cmd
Automatic merge from submit-queue

Move shell completion generation into 'kubectl completion' command

Remove the static shell completion scripts from the repo and add a `completion` command to kubectl:

```bash
$ source <(kubectl completion bash)
```

or

```bash
$ source <(kubectl completion zsh)
```

This makes maintenance easier because static scripts no longer have to be generated and committed to the repo.

Moreover, kubectl is self-contained again for the user, including the latest completion code. Consider the use case of updating kubectl via gcloud (or some package manager): the completion code is always in sync, with no need to download a `contrib/completion/bash/kubectl` file from GitHub.
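
A sketch of the command itself, assuming kubectl's cobra command tree (only bash is shown here; the real command also emits zsh support). Because cobra generates the script from the live command definitions, the completion code cannot drift from the installed binary:

```go
package cmd

import (
	"fmt"
	"io"

	"github.com/spf13/cobra"
)

// NewCmdCompletion emits a shell completion script generated from the root
// command's own flags and subcommands.
func NewCmdCompletion(out io.Writer) *cobra.Command {
	return &cobra.Command{
		Use:   "completion SHELL",
		Short: "Output shell completion code for the given shell",
		RunE: func(cmd *cobra.Command, args []string) error {
			if len(args) != 1 {
				return fmt.Errorf("expected exactly one argument: the shell (e.g. bash)")
			}
			if args[0] != "bash" {
				return fmt.Errorf("unsupported shell type %q", args[0])
			}
			// Walk the whole command tree and write a bash completion script.
			return cmd.Root().GenBashCompletion(out)
		},
	}
}
```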

Opinions are welcome /cc @eparis @nak3 

Fixes https://github.com/openshift/origin/issues/5290

2016-05-30 01:30:16 -07:00
Jan Safranek
2aa9f1dd8f Reduce volume controller sync period 2016-05-30 09:59:31 +02:00
k8s-merge-robot
60c1b4e75f Merge pull request #25804 from mfojtik/add-batch-client
Automatic merge from submit-queue

Add BatchClient into clientset adaption

@soltysh FYI
2016-05-30 00:40:59 -07:00
Dr. Stefan Schimanski
a79a420fde Move shell completion generation into 'kubectl completion' command 2016-05-30 07:23:36 +02:00
k8s-merge-robot
77de942e08 Merge pull request #26451 from Random-Liu/cache_image_history
Automatic merge from submit-queue

Kubelet: Cache image history to eliminate the performance regression

Fix https://github.com/kubernetes/kubernetes/issues/25057.

The image history operation takes almost 50% of CPU usage in the kubelet performance test. We should cache image history instead of getting it from the runtime every time.

This PR caches image history in imageStatsProvider and adds a unit test.
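
A sketch of the caching shape with hypothetical names and types: serve history from a map, fall through to the runtime only on a miss. (The real change also needs to drop entries for removed images so the map doesn't grow without bound.)

```go
package stats

import "sync"

// HistoryEntry stands in for the runtime's image-history record.
type HistoryEntry struct {
	CreatedBy string
	Size      int64
}

type imageStatsProvider struct {
	mu      sync.Mutex
	cache   map[string][]HistoryEntry               // keyed by image ID
	history func(id string) ([]HistoryEntry, error) // assumed runtime call
}

// imageHistory returns the cached history for id, querying the runtime only
// when the entry is missing, so repeated stats requests cost one map lookup.
func (p *imageStatsProvider) imageHistory(id string) ([]HistoryEntry, error) {
	p.mu.Lock()
	h, ok := p.cache[id]
	p.mu.Unlock()
	if ok {
		return h, nil
	}
	h, err := p.history(id)
	if err != nil {
		return nil, err
	}
	p.mu.Lock()
	p.cache[id] = h
	p.mu.Unlock()
	return h, nil
}
```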

@yujuhong @vishh 
/cc @kubernetes/sig-node 

Marking this v1.3 because it is a relatively significant performance regression.

2016-05-29 20:49:01 -07:00
k8s-merge-robot
195842713d Merge pull request #26418 from AdoHe/kubectl_env
Automatic merge from submit-queue

fix strategy patch diff list issue

fixes #25585 

@janetkuo @pwittrock ptal.
2016-05-29 18:18:14 -07:00
k8s-merge-robot
63931d39a3 Merge pull request #26349 from kargakis/fix-reaper-not-found-error
Automatic merge from submit-queue

kubectl: cast scale errors to actual errors when deleting

Fixes some of the deployment reaper timeouts in e2e

@kubernetes/deployment @soltysh
2016-05-29 17:29:30 -07:00
k8s-merge-robot
32da727ca1 Merge pull request #26264 from luxas/remove_flannel_default
Automatic merge from submit-queue

Do not call NewFlannelServer() unless flannel overlay is enabled

Ref: #26093 

This makes it so the kubelet does not warn the user that iptables isn't in PATH when the user hasn't enabled the flannel overlay.

@vishh @freehan @bprashanth
2016-05-29 15:49:00 -07:00
k8s-merge-robot
72479b82e0 Merge pull request #26019 from gyuho/kubectl_slice_append
Automatic merge from submit-queue

pkg/kubectl: preallocate slice
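
The general pattern, for illustration rather than the PR's exact diff: when the result length is known up front, allocate the capacity once instead of growing the slice through repeated appends:

```go
package kubectl

type Pod struct{ Name string } // stand-in type for illustration

// podNames performs a single allocation of len(pods) capacity, so append
// never has to reallocate and copy the backing array.
func podNames(pods []Pod) []string {
	names := make([]string, 0, len(pods))
	for _, p := range pods {
		names = append(names, p.Name)
	}
	return names
}
```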
2016-05-29 15:00:15 -07:00
k8s-merge-robot
eed13d702f Merge pull request #26253 from xiangpengzhao/fix_assertnotnil
Automatic merge from submit-queue

Add assert.NotNil for test case

I hardcoded the `DefaultInterfaceName` from `eth0` to `eth-k8sdefault` in release 1.2.0, in order to test my CNI plugins. When running the test, it panics and prints incorrectly formatted messages, as shown below.

In the test case `TestBuildSummary`, `containerInfoV2ToNetworkStats` will return `nil` if `DefaultInterfaceName` is not `eth0`, so we should probably add `assert.NotNil` to the test case.
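
The guard's shape as a fragment inside `TestBuildSummary` (function names are taken from the report; their exact signatures here are assumptions). With testify, a nil result fails the assertion cleanly instead of being dereferenced and panicking the whole test binary:

```go
// containerInfoV2ToNetworkStats may return nil when the default interface is
// missing, so assert before dereferencing.
stats := containerInfoV2ToNetworkStats("pod:test0_pod1", info)
if assert.NotNil(t, stats, "expected network stats for pod1") {
	checkNetworkStats(t, "pod1", stats)
}
```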

```
ok      k8s.io/kubernetes/pkg/kubelet/server    0.591s
W0523 03:25:28.257074    2257 summary.go:311] Missing default interface "eth-k8sdefault" for s%!(EXTRA string=node:FooNode)
W0523 03:25:28.257322    2257 summary.go:311] Missing default interface "eth-k8sdefault" for s%!(EXTRA string=pod:test0_pod1)
W0523 03:25:28.257361    2257 summary.go:311] Missing default interface "eth-k8sdefault" for s%!(EXTRA string=pod:test0_pod0)
W0523 03:25:28.257419    2257 summary.go:311] Missing default interface "eth-k8sdefault" for s%!(EXTRA string=pod:test2_pod0)
--- FAIL: TestBuildSummary (0.00s)
panic: runtime error: invalid memory address or nil pointer dereference [recovered]
        panic: runtime error: invalid memory address or nil pointer dereference
[signal 0xb code=0x1 addr=0x0 pc=0x471817]

goroutine 16 [running]:
testing.func·006()
        /usr/src/go/src/testing/testing.go:441 +0x181
k8s.io/kubernetes/pkg/kubelet/server/stats.checkNetworkStats(0xc20806d3b0, 0x140bbc0, 0x4, 0x0, 0x0)
        /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/pkg/kubelet/server/stats/summary_test.go:296 +0xc07
k8s.io/kubernetes/pkg/kubelet/server/stats.TestBuildSummary(0xc20806d3b0)
        /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/pkg/kubelet/server/stats/summary_test.go:124 +0x11d2
testing.tRunner(0xc20806d3b0, 0x1e43180)
        /usr/src/go/src/testing/testing.go:447 +0xbf
created by testing.RunTests
        /usr/src/go/src/testing/testing.go:555 +0xa8b
```
2016-05-29 14:13:00 -07:00
k8s-merge-robot
0fc573296d Merge pull request #26169 from victorgp/master
Automatic merge from submit-queue

Setting TLS 1.2 as the minimum because TLS 1.0 and TLS 1.1 are vulnerable

TLS 1.0 is known to be vulnerable, since it can be downgraded to SSL:
https://blog.varonis.com/ssl-and-tls-1-0-no-longer-acceptable-for-pci-compliance/

TLS 1.1 can be vulnerable if the RC4-SHA cipher is used, and in Kubernetes it is; you can check with
`openssl s_client -cipher RC4-SHA -connect apiserver.k8s.example.com:443`

https://www.globalsign.com/en/blog/poodle-vulnerability-expands-beyond-sslv3-to-tls/

Test suites like Qualys report this Kubernetes issue as a level 3 vulnerability; they recommend upgrading to TLS 1.2, which is not affected. Quoting Qualys:

> RC4 should not be used where possible. One reason that RC4 was still being used was BEAST and Lucky13 attacks against CBC mode ciphers in SSL and TLS. However, TLS 1.2 or later addresses these issues.
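
The Go-side gist of the change (a sketch; the function name is illustrative): with `MinVersion` set, clients that can only negotiate TLS 1.0/1.1 fail the handshake instead of silently downgrading:

```go
package server

import "crypto/tls"

// secureServerConfig refuses anything below TLS 1.2 at handshake time.
func secureServerConfig() *tls.Config {
	return &tls.Config{
		MinVersion: tls.VersionTLS12,
	}
}
```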
2016-05-29 13:24:46 -07:00
k8s-merge-robot
10b271c6de Merge pull request #26078 from mfojtik/fix-nil-annnotations
Automatic merge from submit-queue

Fix panic when the namespace flag is not present

We don't set the namespace in OpenShift, so we need to check if the namespace flag is present.
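
The defensive shape of the fix (a fragment; the flag name comes from the description, the surrounding code is assumed): look the flag up before reading it, since an embedding caller such as OpenShift may never have registered it:

```go
// Lookup returns nil when the flag was never registered, so guard the read
// instead of dereferencing a missing flag.
namespace := ""
if f := cmd.Flags().Lookup("namespace"); f != nil {
	namespace = f.Value.String()
}
```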
2016-05-29 10:32:33 -07:00
k8s-merge-robot
14a6fb7d0f Merge pull request #26018 from gyuho/slice_append
Automatic merge from submit-queue

pkg/runtime: preallocate slice in unstructured.go
2016-05-29 07:20:50 -07:00
k8s-merge-robot
7030dca4c8 Merge pull request #25989 from jingxu97/bug-tmpdir
Automatic merge from submit-queue

use MkTmpDir instead of ioutil.TempDir in testing

fixes #20243
2016-05-29 06:32:36 -07:00
k8s-merge-robot
a99e4ca793 Merge pull request #25678 from rajdeepd/branch1
Automatic merge from submit-queue

Added Test Cases for Pod

Test case modified for Pod
2016-05-29 04:53:06 -07:00
k8s-merge-robot
98af443209 Merge pull request #26398 from euank/various-kubenet-fixes
Automatic merge from submit-queue

Various kubenet fixes (panics and bugs and cidrs, oh my)

This PR fixes the following issues:

1. Corrects an inverted error check that prevented `shaper.Reset` from ever being called with a correct IP address
2. Fixes an issue where `parseCIDR` would fail after a kubelet restart because an IP, rather than a CIDR, was stored in the cache
3. Fixes an issue where kubenet could panic in `TearDownPod` if it was called before `SetUpPod` (e.g. after a kubelet restart). Because of bug 1 this happened only in rare situations (see 2 for why such a situation might arise); a sketch of the guard follows below
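
A sketch of the guard from fix 3 (names and the simplified signature are assumptions based on the description): teardown has to tolerate a pod that was never set up instead of dereferencing a missing cache entry:

```go
// TearDownPod (simplified signature) skips shaper cleanup when there is no
// record of a SetUpPod for this pod, e.g. right after a kubelet restart.
func (plugin *kubenetNetworkPlugin) TearDownPod(podID string) error {
	cidr, ok := plugin.podCIDRs[podID]
	if !ok {
		return nil // nothing was set up; nothing to reset
	}
	return plugin.shaper.Reset(cidr) // a CIDR, not a bare IP, per fix 2
}
```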

This adds a test, but more would definitely be useful.
The commits are also granular enough that I could split this up more if desired.

I'm also not super-familiar with this code, so review and feedback would be welcome.

Testing done:
```
$ cat examples/egress/egress.yml
 apiVersion: v1
kind: Pod
metadata:
  labels:
    name: egress
  name: egress-output
  annotations: {"kubernetes.io/ingress-bandwidth": "300k"}
spec:
  restartPolicy: Never
  containers:
    - name: egress
      image: busybox
      command: ["sh", "-c", "sleep 60"]
$ cat kubelet.log
...
Running: tc filter add dev cbr0 protocol ip parent 1:0 prio 1 u32 match ip dst 10.0.0.5/32 flowid 1:1
# setup
...
Running: tc filter del dev cbr0 parent 1: proto ip prio 1 handle 800::800 u32
# teardown
```

I also did various other bits of manual testing and logging to hunt down the panic and other issues, but don't have anything to paste for that 

cc @dcbw @kubernetes/sig-network
2016-05-29 04:04:22 -07:00
k8s-merge-robot
e6a02ac511 Merge pull request #26060 from jonboulle/asdf
Automatic merge from submit-queue

Fix quantity.CanonicalizeBytes docstring name
2016-05-29 03:07:03 -07:00
k8s-merge-robot
577cdf937d Merge pull request #26415 from wojtek-t/network_not_ready
Automatic merge from submit-queue

Add a NodeCondition "NetworkUnavailable" to prevent scheduling onto a node until the routes have been created

This is new version of #26267 (based on top of that one).

The new workflow is:
- we have a "NetworkNotReady" condition
- the Kubelet sets it to "true" when it creates a node
- the RouteController sets it to "false" once the route is created
- the Scheduler schedules only onto nodes that don't have the "NetworkNotReady == true" condition
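
A sketch of the kubelet side with the v1.3-era API types (the reason and message strings are illustrative): the node registers with the condition set, and the routecontroller clears it once the cloud route exists:

```go
node.Status.Conditions = append(node.Status.Conditions, api.NodeCondition{
	Type:               api.NodeNetworkUnavailable,
	Status:             api.ConditionTrue,
	Reason:             "NoRouteCreated",
	Message:            "Node created without a route",
	LastTransitionTime: unversioned.Now(),
})
```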

@gmarek @bgrant0607 @zmerlynn @cjcullen @derekwaynecarr @danwinship @dcbw @lavalamp @vishh
2016-05-29 03:06:59 -07:00
k8s-merge-robot
d00dec7825 Merge pull request #26397 from euank/fixReadOnlyRootfsPanic
Automatic merge from submit-queue

rkt: Fix panic in setting ReadOnlyRootFS

What the title says. I wish this method were broken out in a reasonably unit-testable way; fixing this panic is more important for the moment, though, and testing will come in a later commit.

I observed the panic in a `./hack/local-up-cluster.sh` run with rkt as the container runtime.

This is also the panic that's failing our Jenkins against master (see the kubelet.log output from a [recent run](https://console.cloud.google.com/m/cloudstorage/b/rktnetes-jenkins/o/logs/kubernetes-e2e-gce/1946/artifacts/jenkins-e2e-minion-group-qjh3/kubelet.log)).

cc @tmrts @yifan-gu
2016-05-29 02:17:09 -07:00