The event recorder should wait for some time to collect all expected events; an event
may be written by another goroutine that has just finished.
This should not slow down the test in most cases, only when there is a bug and
an expected event is never sent.
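A minimal, self-contained sketch of the kind of wait loop this describes (hypothetical helper and recorder, not the actual fake recorder API): poll the collected events until the expected count arrives or a timeout fires, so a passing test returns almost immediately and only a failing one waits for the full timeout.

```go
package main

import (
	"fmt"
	"sync"
	"time"
)

// fakeRecorder is a stand-in for the test event recorder.
type fakeRecorder struct {
	mu     sync.Mutex
	events []string
}

func (r *fakeRecorder) Event(msg string) {
	r.mu.Lock()
	defer r.mu.Unlock()
	r.events = append(r.events, msg)
}

// waitForEvents polls until at least want events were recorded or the
// timeout expires; the happy path returns on the first iteration.
func (r *fakeRecorder) waitForEvents(want int, timeout time.Duration) bool {
	deadline := time.Now().Add(timeout)
	for {
		r.mu.Lock()
		got := len(r.events)
		r.mu.Unlock()
		if got >= want {
			return true
		}
		if time.Now().After(deadline) {
			return false
		}
		time.Sleep(10 * time.Millisecond)
	}
}

func main() {
	r := &fakeRecorder{}
	go r.Event("Warning ProvisioningFailed ...") // emitted from another goroutine
	fmt.Println("got events:", r.waitForEvents(1, 2*time.Second))
}
```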
Automatic merge from submit-queue
Stabilize controller unit tests.
Remove test "5-1", it's flaky as it depends on order of execution of goroutines. When the controller starts, existing claim is enqueued as "initial sync event" and a new volume is enqueued to separate goroutine. It is not deterministic which goroutine processes its events first and there is no way how to tell that the claim event was processed.
Also, force resync of the controllers after the test to make sure all events are processed.
Fixes unit test flakes.
@kubernetes/sig-storage
Automatic merge from submit-queue
AWS: ELB proxy protocol support via annotation service.beta.kubernetes.io/aws-load-balancer-proxy-protocol
This is a ~~work in progress~~ branch that adds support for the Proxy Protocol with Elastic Load Balancers. The proxy protocol is documented here: http://www.haproxy.org/download/1.5/doc/proxy-protocol.txt. It allows us to pass the "real IP" address of a client to pods behind services.
As it stands now, we create an ELB policy on the load balancer that enables the proxy protocol. We then enumerate each node port assigned to the load balancer and add our newly created policy to it. The manual process is documented here: http://docs.aws.amazon.com/ElasticLoadBalancing/latest/DeveloperGuide/enable-proxy-protocol.html
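Roughly, the two ELB API calls involved look like this with aws-sdk-go (a sketch only; the load balancer name, policy name, and node ports are hypothetical, and the real cloud provider code derives them from the service):

```go
package main

import (
	"fmt"
	"log"

	"github.com/aws/aws-sdk-go/aws"
	"github.com/aws/aws-sdk-go/aws/session"
	"github.com/aws/aws-sdk-go/service/elb"
)

func main() {
	// Hypothetical names; the cloud provider wires these up from the service.
	lbName := aws.String("my-elb")
	policyName := aws.String("k8s-proxyprotocol-enabled")

	svc := elb.New(session.Must(session.NewSession()))

	// Create a ProxyProtocol policy on the load balancer.
	_, err := svc.CreateLoadBalancerPolicy(&elb.CreateLoadBalancerPolicyInput{
		LoadBalancerName: lbName,
		PolicyName:       policyName,
		PolicyTypeName:   aws.String("ProxyProtocolPolicyType"),
		PolicyAttributes: []*elb.PolicyAttribute{
			{AttributeName: aws.String("ProxyProtocol"), AttributeValue: aws.String("true")},
		},
	})
	if err != nil {
		log.Fatal(err)
	}

	// Attach the policy to each backend (node port) behind the ELB.
	for _, port := range []int64{30080, 30443} { // hypothetical node ports
		_, err := svc.SetLoadBalancerPoliciesForBackendServer(&elb.SetLoadBalancerPoliciesForBackendServerInput{
			LoadBalancerName: lbName,
			InstancePort:     aws.Int64(port),
			PolicyNames:      []*string{policyName},
		})
		if err != nil {
			log.Fatal(err)
		}
	}
	fmt.Println("proxy protocol enabled")
}
```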
Right now, I’m looking to get some feedback on the approach before I dive too much deeper in the code. More precisely, I have questions regarding the following:
1) Right now I just check that a certain annotation exists on the service regardless of what its value is. Assuming we’re going to enable this feature via an annotation, what is the expected experience? This decision likely depends on the answers to the next questions.
2) Right now the implementation enables the proxy protocol on every ELB backend. The actual ELB API expects you to add the policy for each configured backend. Do we want the ability to configure the proxy protocol on a per service port basis? For example, if a service exposes TCP 80 and 443, would we want the ability to only enable the proxy protocol on port 443? Does this overcomplicate the implementation? If we wanted to go this direction we could do something like ...
```
{
  "service.beta.kubernetes.io/aws-load-balancer-proxy-protocol": "tcp:80,tcp:443"
}
```
3) I avoided this because I was concerned with scope creep and our organization doesn’t need it, but could/should our implementation be adjusted to just handle ELB policies in general? I hadn’t used the ELB API until I started working on this branch so I don’t know how realistic this is. I also don't know how common this use case is as our organization has used our own load balancing setup prior to Kubernetes. This page has a couple of examples at the bottom: http://docs.aws.amazon.com/cli/latest/reference/elb/create-load-balancer-policy.html
cc @justinsb
Add ELB proxy protocol support via the annotation
"service.beta.kubernetes.io/aws-load-balancer-proxy-protocol". This
allows servers like Nginx and Haproxy to retrieve the real IP address of
a remote client.
Remove test "5-1", it's flaky as it depends on order of execution of
goroutines. When the controller starts, existing claim is enqueued as
"initial sync event" and a new volume is enqueued to separate goroutine.
It is not deterministic which goroutine processes its events first and
there is no way how to tell that the claim event was processed.
Also, force resync of the controllers after the test to make sure all
events are processed.
Automatic merge from submit-queue
daemonset handle DeletedFinalStateUnknown
During an e2e run in OpenShift we ran into a DS controller panic when handling `DeletedFinalStateUnknown`. This PR checks for `DeletedFinalStateUnknown` and queues the embedded object if it is a `DaemonSet`.
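The fix follows the usual tombstone-unwrapping pattern. A self-contained sketch with stand-in types (the real code uses `cache.DeletedFinalStateUnknown` and `*extensions.DaemonSet` from the Kubernetes packages):

```go
package main

import "fmt"

// Minimal stand-ins for cache.DeletedFinalStateUnknown and the DaemonSet type,
// just to show the unwrapping pattern on its own.
type DeletedFinalStateUnknown struct {
	Key string
	Obj interface{}
}

type DaemonSet struct{ Name string }

func enqueue(ds *DaemonSet) { fmt.Println("enqueued", ds.Name) }

// deleteDaemonSet handles delete events, including tombstones delivered as
// DeletedFinalStateUnknown when the watch missed the final delete.
func deleteDaemonSet(obj interface{}) {
	ds, ok := obj.(*DaemonSet)
	if !ok {
		tombstone, ok := obj.(DeletedFinalStateUnknown)
		if !ok {
			fmt.Printf("couldn't get object from tombstone %+v\n", obj)
			return
		}
		ds, ok = tombstone.Obj.(*DaemonSet)
		if !ok {
			fmt.Printf("tombstone contained object that is not a DaemonSet %+v\n", obj)
			return
		}
	}
	enqueue(ds)
}

func main() {
	deleteDaemonSet(DeletedFinalStateUnknown{Key: "ns/ds", Obj: &DaemonSet{Name: "ds"}})
}
```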
@mikedanese - would you mind taking a look?
@deads2k
```
panic: interface conversion: interface is cache.DeletedFinalStateUnknown, not *extensions.DaemonSet
goroutine 4369 [running]:
k8s.io/kubernetes/pkg/controller/daemon.func·005(0x2f8a0c0, 0xc20b559680)
/data/src/github.com/openshift/origin/Godeps/_workspace/src/k8s.io/kubernetes/pkg/controller/daemon/controller.go:160 +0x50
k8s.io/kubernetes/pkg/controller/framework.ResourceEventHandlerFuncs.OnDelete(0xc20a0ae090, 0xc20a0ae0a0, 0xc20a0ae0b0, 0x2f8a0c0, 0xc20b559680)
/data/src/github.com/openshift/origin/Godeps/_workspace/src/k8s.io/kubernetes/pkg/controller/framework/controller.go:178 +0x41
k8s.io/kubernetes/pkg/controller/framework.(*ResourceEventHandlerFuncs).OnDelete(0xc20b8ebf20, 0x2f8a0c0, 0xc20b559680)
<autogenerated>:25 +0xb5
k8s.io/kubernetes/pkg/controller/framework.func·001(0x2f8a280, 0xc20b5522e0, 0x0, 0x0)
/data/src/github.com/openshift/origin/Godeps/_workspace/src/k8s.io/kubernetes/pkg/controller/framework/controller.go:248 +0x4be
k8s.io/kubernetes/pkg/controller/framework.(*Controller).processLoop(0xc20bb727e0)
/data/src/github.com/openshift/origin/Godeps/_workspace/src/k8s.io/kubernetes/pkg/controller/framework/controller.go:122 +0x6f
k8s.io/kubernetes/pkg/controller/framework.*Controller.(k8s.io/kubernetes/pkg/controller/framework.processLoop)·fm()
/data/src/github.com/openshift/origin/Godeps/_workspace/src/k8s.io/kubernetes/pkg/controller/framework/controller.go:97 +0x27
k8s.io/kubernetes/pkg/util/wait.func·001()
/data/src/github.com/openshift/origin/Godeps/_workspace/src/k8s.io/kubernetes/pkg/util/wait/wait.go:66 +0x61
k8s.io/kubernetes/pkg/util/wait.JitterUntil(0xc209f8cfb8, 0x3b9aca00, 0x0, 0xc2080543c0)
/data/src/github.com/openshift/origin/Godeps/_workspace/src/k8s.io/kubernetes/pkg/util/wait/wait.go:67 +0x8f
k8s.io/kubernetes/pkg/util/wait.Until(0xc209f8cfb8, 0x3b9aca00, 0xc2080543c0)
/data/src/github.com/openshift/origin/Godeps/_workspace/src/k8s.io/kubernetes/pkg/util/wait/wait.go:47 +0x4a
k8s.io/kubernetes/pkg/controller/framework.(*Controller).Run(0xc20bb727e0, 0xc2080543c0)
/data/src/github.com/openshift/origin/Godeps/_workspace/src/k8s.io/kubernetes/pkg/controller/framework/controller.go:97 +0x1fb
created by k8s.io/kubernetes/pkg/controller/daemon.(*DaemonSetsController).Run
/data/src/github.com/openshift/origin/Godeps/_workspace/src/k8s.io/kubernetes/pkg/controller/daemon/controller.go:212 +0xae
```
https://ci.openshift.redhat.com/jenkins/job/test_pull_requests_origin_check/1002/artifact/origin/artifacts/test-cmd/logs/openshift.log
Automatic merge from submit-queue
ScheduledJob validation
@erictune while playing earlier today I noticed that `suspend` isn't a pointer, which requires it to be set. Additionally, the validation for job selectors is too strict in that it requires the selector to match the produced pods, which doesn't make sense for ScheduledJob; I've changed it so that setting the selector is forbidden entirely.
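A rough sketch of what the "forbidden to set" validation amounts to (stand-in types and error text, not the actual validation code):

```go
package main

import "fmt"

// Stand-ins for the relevant bits of the ScheduledJob spec; in the real API,
// Suspend is *bool so that leaving it unset is valid.
type LabelSelector struct{}

type ScheduledJobSpec struct {
	Suspend  *bool
	Selector *LabelSelector // manual selectors don't make sense for ScheduledJob
}

// validateScheduledJobSpec rejects a manually specified selector outright
// instead of trying to match it against the generated pods.
func validateScheduledJobSpec(spec *ScheduledJobSpec) []error {
	var errs []error
	if spec.Selector != nil {
		errs = append(errs, fmt.Errorf("spec.jobTemplate.spec.selector: Forbidden: field is managed by the system"))
	}
	return errs
}

func main() {
	spec := &ScheduledJobSpec{Selector: &LabelSelector{}}
	fmt.Println(validateScheduledJobSpec(spec))
}
```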
Automatic merge from submit-queue
Reduce volume controller sync period
Fixes #24236 and most probably also fixes #25294.
Needs #25881! With the cache, the binder is not affected by the sync period. Without the cache, binding 1000 PVCs takes more than 5 minutes (instead of ~70 seconds).
15 seconds were chosen by fair 2d10 roll :-)
Automatic merge from submit-queue
volume controller: Add cache with the latest version of PVs and PVCs
When the controller binds a PV to a PVC, it saves both objects to etcd. However, there is still an old version of these objects in the controller's informer cache, so when a new PVC comes in, the PV is still seen as available and may get bound to the new PVC. This will be blocked by etcd, but it still creates unnecessary traffic that slows everything down.
To make everything worse, when a periodic sync with the old PVC is performed, this PVC is seen by the controller as Pending (while it's already Bound in etcd) and will be bound to a different PV. Writing to this PV won't be blocked by etcd; only the subsequent write of the PVC fails. So the controller will need to roll back the PV in another transaction (or transactions). The controller can keep itself pretty busy this way.
Also, we save bound PVs (and PVCs) in two transactions - we save, say, PV.Spec first and then .Status. The controller gets a "PV.Spec updated" event from etcd and tries to fix the Status, as it looks outdated to the controller. This write again fails - there already is a correct version in etcd.
As we can't influence the informer cache (it is read-only to the controller), this patch introduces a second cache in the controller, which holds the latest version of PVs and PVCs, to prevent these useless writes to etcd. It gets updated with events from etcd *and* after etcd confirms a successful save of a PV/PVC modified by the controller.
The cache stores only *pointers* to PVs/PVCs, so in the ideal case it shares the actual object data with the informer cache. They will diverge only for a short time, when the controller modifies something and the informer cache has not received the update events yet.
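A simplified sketch of such a cache (hypothetical types; the real one stores `*api.PersistentVolume`/`*api.PersistentVolumeClaim` pointers): an update only replaces the stored pointer when the incoming object is at least as new, so both informer events and the controller's own confirmed writes can feed it without stale events winning.

```go
package main

import (
	"fmt"
	"strconv"
	"sync"
)

// object is a stand-in for a PV or PVC; only the fields the cache needs.
type object struct {
	Name            string
	ResourceVersion string
	Phase           string
}

// latestCache keeps the newest known version of each object, keyed by name.
type latestCache struct {
	mu      sync.RWMutex
	objects map[string]*object
}

func newLatestCache() *latestCache {
	return &latestCache{objects: map[string]*object{}}
}

// update stores obj unless the cache already holds a newer resource version.
// It is called both for informer events and after a successful etcd write.
func (c *latestCache) update(obj *object) {
	c.mu.Lock()
	defer c.mu.Unlock()
	if old, ok := c.objects[obj.Name]; ok {
		oldRV, _ := strconv.ParseInt(old.ResourceVersion, 10, 64)
		newRV, _ := strconv.ParseInt(obj.ResourceVersion, 10, 64)
		if newRV < oldRV {
			return // stale event; keep what we have
		}
	}
	c.objects[obj.Name] = obj
}

func (c *latestCache) get(name string) (*object, bool) {
	c.mu.RLock()
	defer c.mu.RUnlock()
	obj, ok := c.objects[name]
	return obj, ok
}

func main() {
	c := newLatestCache()
	c.update(&object{Name: "pv-1", ResourceVersion: "10", Phase: "Bound"})    // controller's own confirmed write
	c.update(&object{Name: "pv-1", ResourceVersion: "9", Phase: "Available"}) // late informer event, ignored
	pv, _ := c.get("pv-1")
	fmt.Println(pv.Phase) // Bound
}
```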
@kubernetes/sig-storage
Automatic merge from submit-queue
Move shell completion generation into 'kubectl completion' command
Remove the static shell completion scripts from the repo and add a `completion` command to kubectl:
```bash
$ source <(kubectl completion bash)
```
or
```bash
$ source <(kubectl completion zsh)
```
This makes maintenance easier because static scripts no longer need to be generated and committed to the repo.
Moreover, kubectl is self-contained again for the user, including the latest completion code. I am thinking about the use-case of updating kubectl via gcloud (or some package manager): the completion code is always in sync, without the need to download a `contrib/completion/bash/kubectl` file from GitHub.
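Under the hood this can lean on cobra's built-in generator; a minimal sketch (the real command also post-processes the output and handles zsh separately):

```go
package main

import (
	"os"

	"github.com/spf13/cobra"
)

func main() {
	root := &cobra.Command{Use: "kubectl"}

	completion := &cobra.Command{
		Use:   "completion SHELL",
		Short: "Output shell completion code for the given shell (bash)",
		RunE: func(cmd *cobra.Command, args []string) error {
			// cobra generates the bash completion script for the whole
			// command tree; zsh support is layered on top in the real code.
			return root.GenBashCompletion(os.Stdout)
		},
	}
	root.AddCommand(completion)

	if err := root.Execute(); err != nil {
		os.Exit(1)
	}
}
```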
Opinions are welcome /cc @eparis @nak3
Fixes https://github.com/openshift/origin/issues/5290
Automatic merge from submit-queue
Kubelet: Cache image history to eliminate the performance regression
Fix https://github.com/kubernetes/kubernetes/issues/25057.
The image history operation takes almost 50% of CPU usage in the kubelet performance test. We should cache the image history instead of getting it from the runtime every time.
This PR caches image history in imageStatsProvider and adds a unit test.
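A rough sketch of the caching idea (hypothetical types; the real change lives in imageStatsProvider): keep the last history result per image and only fall back to the runtime on a cache miss.

```go
package main

import (
	"fmt"
	"sync"
)

// imageHistoryEntry is a stand-in for the runtime's image history result.
type imageHistoryEntry struct {
	CreatedBy string
	Size      int64
}

// historyCache memoizes image history per image ID so the expensive runtime
// call is not repeated on every stats request.
type historyCache struct {
	mu      sync.Mutex
	entries map[string][]imageHistoryEntry
	// fetch is the expensive runtime call, injected for testability.
	fetch func(imageID string) ([]imageHistoryEntry, error)
}

func (c *historyCache) history(imageID string) ([]imageHistoryEntry, error) {
	c.mu.Lock()
	defer c.mu.Unlock()
	if h, ok := c.entries[imageID]; ok {
		return h, nil // cache hit: no runtime call
	}
	h, err := c.fetch(imageID)
	if err != nil {
		return nil, err
	}
	c.entries[imageID] = h
	return h, nil
}

func main() {
	calls := 0
	c := &historyCache{
		entries: map[string][]imageHistoryEntry{},
		fetch: func(imageID string) ([]imageHistoryEntry, error) {
			calls++
			return []imageHistoryEntry{{CreatedBy: "ADD rootfs", Size: 1 << 20}}, nil
		},
	}
	c.history("sha256:abc")
	c.history("sha256:abc")
	fmt.Println("runtime calls:", calls) // 1
}
```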
@yujuhong @vishh
/cc @kubernetes/sig-node
Mark v1.3 because this is a relatively significant performance regression.
Automatic merge from submit-queue
kubectl: cast scale errors to actual errors when deleting
Fixes some of the deployment reaper timeouts in e2e
@kubernetes/deployment @soltysh
Automatic merge from submit-queue
Do not call NewFlannelServer() unless flannel overlay is enabled
Ref: #26093
This makes it so the kubelet does not warn the user that iptables isn't in PATH when the user hasn't enabled the flannel overlay.
@vishh @freehan @bprashanth
Automatic merge from submit-queue
Add assert.NotNil for test case
I hardcoded `DefaultInterfaceName` from `eth0` to `eth-k8sdefault` in release 1.2.0 in order to test my CNI plugins. When running the tests, they panic and print wrongly formatted messages, as shown below.
In the test case `TestBuildSummary`, `containerInfoV2ToNetworkStats` returns `nil` if `DefaultInterfaceName` is not `eth0`, so maybe we should add `assert.NotNil` to the test case.
```
ok k8s.io/kubernetes/pkg/kubelet/server 0.591s
W0523 03:25:28.257074 2257 summary.go:311] Missing default interface "eth-k8sdefault" for s%!(EXTRA string=node:FooNode)
W0523 03:25:28.257322 2257 summary.go:311] Missing default interface "eth-k8sdefault" for s%!(EXTRA string=pod:test0_pod1)
W0523 03:25:28.257361 2257 summary.go:311] Missing default interface "eth-k8sdefault" for s%!(EXTRA string=pod:test0_pod0)
W0523 03:25:28.257419 2257 summary.go:311] Missing default interface "eth-k8sdefault" for s%!(EXTRA string=pod:test2_pod0)
--- FAIL: TestBuildSummary (0.00s)
panic: runtime error: invalid memory address or nil pointer dereference [recovered]
panic: runtime error: invalid memory address or nil pointer dereference
[signal 0xb code=0x1 addr=0x0 pc=0x471817]
goroutine 16 [running]:
testing.func·006()
/usr/src/go/src/testing/testing.go:441 +0x181
k8s.io/kubernetes/pkg/kubelet/server/stats.checkNetworkStats(0xc20806d3b0, 0x140bbc0, 0x4, 0x0, 0x0)
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/pkg/kubelet/server/stats/summary_test.go:296 +0xc07
k8s.io/kubernetes/pkg/kubelet/server/stats.TestBuildSummary(0xc20806d3b0)
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/pkg/kubelet/server/stats/summary_test.go:124 +0x11d2
testing.tRunner(0xc20806d3b0, 0x1e43180)
/usr/src/go/src/testing/testing.go:447 +0xbf
created by testing.RunTests
/usr/src/go/src/testing/testing.go:555 +0xa8b
```
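For reference, the kind of guard `assert.NotNil` gives (a stand-in function and test; the real change belongs in summary_test.go): fail the test with a clear message instead of dereferencing a nil pointer and panicking.

```go
package stats

import (
	"testing"

	"github.com/stretchr/testify/assert"
)

type networkStats struct{ RxBytes uint64 }

// buildNetworkStats is a stand-in for containerInfoV2ToNetworkStats, which
// returns nil when the default interface is missing.
func buildNetworkStats(defaultInterface string) *networkStats {
	if defaultInterface != "eth0" {
		return nil
	}
	return &networkStats{RxBytes: 1}
}

func TestNetworkStatsNotNil(t *testing.T) {
	stats := buildNetworkStats("eth0")
	// Fail fast with a clear message instead of panicking further down.
	if !assert.NotNil(t, stats) {
		return
	}
	assert.EqualValues(t, 1, stats.RxBytes)
}
```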
Automatic merge from submit-queue
Setting TLS1.2 minimum because TLS1.0 and TLS1.1 are vulnerable
TLS 1.0 is known to be vulnerable since it can be downgraded to SSL:
https://blog.varonis.com/ssl-and-tls-1-0-no-longer-acceptable-for-pci-compliance/
TLS 1.1 can be vulnerable if the RC4-SHA cipher is used, and in Kubernetes it is; you can check with:
```
openssl s_client -cipher RC4-SHA -connect apiserver.k8s.example.com:443
```
https://www.globalsign.com/en/blog/poodle-vulnerability-expands-beyond-sslv3-to-tls/
Test suites like Qualys report this Kubernetes issue as a level 3 vulnerability and recommend upgrading to TLS 1.2, which is not affected. Quoting Qualys:
> RC4 should not be used where possible. One reason that RC4 was still being used was BEAST and Lucky13 attacks against CBC mode ciphers in SSL and TLS. However, TLS 1.2 or later addresses these issues.
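For reference, the corresponding server-side setting in Go's crypto/tls (a sketch only; the real change applies this to the apiserver's serving configuration, and the certificate paths below are placeholders):

```go
package main

import (
	"crypto/tls"
	"log"
	"net/http"
)

func main() {
	srv := &http.Server{
		Addr: ":6443",
		TLSConfig: &tls.Config{
			// Refuse TLS 1.0/1.1 handshakes entirely.
			MinVersion: tls.VersionTLS12,
		},
	}
	log.Fatal(srv.ListenAndServeTLS("server.crt", "server.key"))
}
```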
Automatic merge from submit-queue
Fix panic when the namespace flag is not present
We don't set the namespace in OpenShift, so we need to check if the namespace flag is present.
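The defensive check amounts to looking the flag up before dereferencing it; a minimal sketch with cobra/pflag (the helper name is illustrative):

```go
package main

import (
	"fmt"

	"github.com/spf13/cobra"
)

// getNamespace returns the value of the --namespace flag if the command
// actually defines it, instead of panicking on a nil *pflag.Flag.
func getNamespace(cmd *cobra.Command) (string, bool) {
	f := cmd.Flags().Lookup("namespace")
	if f == nil {
		return "", false
	}
	return f.Value.String(), true
}

func main() {
	cmd := &cobra.Command{Use: "example"} // no --namespace registered
	if ns, ok := getNamespace(cmd); ok {
		fmt.Println("namespace:", ns)
	} else {
		fmt.Println("namespace flag not present; falling back to default")
	}
}
```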
Automatic merge from submit-queue
Various kubenet fixes (panics and bugs and cidrs, oh my)
This PR fixes the following issues:
1. Corrects an inverted error check that prevented `shaper.Reset` from ever being called with a correct IP address.
2. Fixes an issue where `parseCIDR` would fail after a kubelet restart because an IP, rather than a CIDR, was stored in the cache.
3. Fixes an issue where kubenet could panic in `TearDownPod` if it was called before `SetUpPod` (e.g. after a kubelet restart); see the sketch below. Because of bug 1, this didn't happen except in rare situations (see 2 for why such a rare situation might happen).
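A sketch of the teardown guard in item 3 (a stand-in cache and names, not the actual kubenet plugin): if setup never ran for the pod, there is nothing cached to tear down, so return instead of dereferencing a missing entry.

```go
package main

import "fmt"

// plugin is a stand-in for kubenet's state; real kubenet keys its per-pod
// cache on the container ID and stores the pod's CIDR from setup.
type plugin struct {
	podCIDRs map[string]string
}

// TearDownPod must tolerate being called for a pod that SetUpPod never
// processed, e.g. right after a kubelet restart.
func (p *plugin) TearDownPod(podID string) error {
	cidr, ok := p.podCIDRs[podID]
	if !ok {
		fmt.Printf("no cached CIDR for pod %q, skipping shaper teardown\n", podID)
		return nil
	}
	delete(p.podCIDRs, podID)
	fmt.Printf("resetting shaper for %s\n", cidr)
	return nil
}

func main() {
	p := &plugin{podCIDRs: map[string]string{}}
	_ = p.TearDownPod("pod-after-restart") // no panic
}
```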
This adds a test, but more would definitely be useful.
The commits are also granular enough I could split this up more if desired.
I'm also not super-familiar with this code, so review and feedback would be welcome.
Testing done:
```
$ cat examples/egress/egress.yml
apiVersion: v1
kind: Pod
metadata:
labels:
name: egress
name: egress-output
annotations: {"kubernetes.io/ingress-bandwidth": "300k"}
spec:
restartPolicy: Never
containers:
- name: egress
image: busybox
command: ["sh", "-c", "sleep 60"]
$ cat kubelet.log
...
Running: tc filter add dev cbr0 protocol ip parent 1:0 prio 1 u32 match ip dst 10.0.0.5/32 flowid 1:1
# setup
...
Running: tc filter del dev cbr0 parent 1:proto ip prio 1 handle 800::800 u32
# teardown
```
I also did various other bits of manual testing and logging to hunt down the panic and other issues, but don't have anything to paste for that.
cc @dcbw @kubernetes/sig-network
Automatic merge from submit-queue
Add a NodeCondition "NetworkUnavailable" to prevent scheduling onto a node until the routes have been created
This is new version of #26267 (based on top of that one).
The new workflow is:
- we have a "NetworkNotReady" condition
- when the Kubelet creates a node, it sets the condition to "true"
- the RouteController sets it to "false" when the route is created
- the scheduler schedules only onto nodes that don't have the "NetworkNotReady == true" condition (see the sketch below)
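A schematic of that scheduler-side check with stand-in types (the real code uses api.NodeCondition and the scheduler's predicate machinery):

```go
package main

import "fmt"

// nodeCondition is a stand-in for api.NodeCondition.
type nodeCondition struct {
	Type   string
	Status string
}

// networkReady reports whether the node is schedulable from the network
// point of view: it must not carry NetworkNotReady=True.
func networkReady(conditions []nodeCondition) bool {
	for _, c := range conditions {
		if c.Type == "NetworkNotReady" && c.Status == "True" {
			return false
		}
	}
	return true
}

func main() {
	fresh := []nodeCondition{{Type: "NetworkNotReady", Status: "True"}}   // kubelet just registered the node
	routed := []nodeCondition{{Type: "NetworkNotReady", Status: "False"}} // route controller created the route
	fmt.Println(networkReady(fresh), networkReady(routed))               // false true
}
```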
@gmarek @bgrant0607 @zmerlynn @cjcullen @derekwaynecarr @danwinship @dcbw @lavalamp @vishh
Automatic merge from submit-queue
rkt: Fix panic in setting ReadOnlyRootFS
What the title says. I wish this method were broken out in a reasonably unit-testable way; fixing this panic is more important for the moment, though, and testing will come in a later commit.
I observed the panic in a `./hack/local-up-cluster.sh` run with rkt as the container runtime.
This is also the panic that's failing our Jenkins against master; see the kubelet.log from a [recent run](https://console.cloud.google.com/m/cloudstorage/b/rktnetes-jenkins/o/logs/kubernetes-e2e-gce/1946/artifacts/jenkins-e2e-minion-group-qjh3/kubelet.log) for the log output.
cc @tmrts @yifan-gu