Commit Graph

4055 Commits

Author SHA1 Message Date
Kubernetes Submit Queue
f55fc7a9e3 Merge pull request #38734 from bprashanth/ing_cleanup_timeout
Automatic merge from submit-queue (batch tested with PRs 38689, 38743, 38734, 38430)

Temporarily bump e2e cleanup timeout
2016-12-13 21:25:35 -08:00
bprashanth
f898bc5ecf Temporarily bump e2e cleanup timeout 2016-12-13 15:13:32 -08:00
Kubernetes Submit Queue
a9c5f67509 Merge pull request #38668 from bprashanth/glbc_version
Automatic merge from submit-queue

Bump glbc version, cleanup test

Matches https://github.com/kubernetes/ingress/pull/55
2016-12-13 13:27:01 -08:00
Kubernetes Submit Queue
4505224cd3 Merge pull request #35436 from danwinship/utilversion
Automatic merge from submit-queue

Add a package for handling version numbers (including non-"Semantic" versions)

As noted in #32401, we are using Semantic Version-parsing libraries to parse version numbers that aren't necessarily "Semantic". Although, contrary to what I'd said there, it turns out that this wasn't actually currently a problem for the iptables code, because the regexp used to extract the version number out of the "iptables --version" output only pulled out three components, so given "iptables v1.4.19.1", it would have extracted just "1.4.19". Still, it could be a problem if they later release "1.5" rather than "1.5.0", or if we eventually need to _compare_ against a 4-digit version number.

Also, as noted in #23854, we were also using two different semver libraries in different parts of the code (plus a wrapper around one of them in pkg/version).

This PR adds pkg/util/version, with code to parse and compare both semver and non-semver version strings, and then updates kubernetes to use it everywhere (including getting rid of a bunch of code duplication in kubelet by making utilversion.Version implement the kubecontainer.Version interface directly).

Ironically, this does not actually allow us to get rid of either of the vendored semver libraries, because we still have other dependencies that depend on each of them. (cadvisor uses blang/semver and etcd uses coreos/go-semver)

fixes #32401, #23854
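
To illustrate the kind of component-wise comparison such a package needs (a minimal self-contained Go sketch, not the actual pkg/util/version API), a four-component version like "1.4.19.1" can be split on dots and compared numerically, treating missing components as zero so that "1.5" and "1.5.0" compare equal:

```go
// Minimal sketch of non-semver-tolerant version comparison; the real
// pkg/util/version API may differ.
package main

import (
	"fmt"
	"strconv"
	"strings"
)

// parseVersion splits a dotted version string like "1.4.19.1" into its
// numeric components, without requiring exactly three of them.
func parseVersion(s string) ([]int, error) {
	parts := strings.Split(strings.TrimPrefix(s, "v"), ".")
	comps := make([]int, 0, len(parts))
	for _, p := range parts {
		n, err := strconv.Atoi(p)
		if err != nil {
			return nil, fmt.Errorf("invalid component %q in %q", p, s)
		}
		comps = append(comps, n)
	}
	return comps, nil
}

// compare returns -1, 0, or 1; missing components are treated as zero,
// so "1.5" == "1.5.0".
func compare(a, b []int) int {
	for i := 0; i < len(a) || i < len(b); i++ {
		var x, y int
		if i < len(a) {
			x = a[i]
		}
		if i < len(b) {
			y = b[i]
		}
		if x < y {
			return -1
		}
		if x > y {
			return 1
		}
	}
	return 0
}

func main() {
	a, _ := parseVersion("1.4.19.1")
	b, _ := parseVersion("1.5")
	fmt.Println(compare(a, b)) // -1: 1.4.19.1 < 1.5
}
```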
2016-12-13 12:10:38 -08:00
Kubernetes Submit Queue
d13067ed37 Merge pull request #38507 from gmarek/res-gat-kubemark
Automatic merge from submit-queue (batch tested with PRs 38695, 38507)

Fix resource gatherer for kubemark

Not working yet...
2016-12-13 07:30:35 -08:00
gmarek
1b1a4aef6a Fix resource gatherer for kubemark 2016-12-13 14:59:22 +01:00
Kubernetes Submit Queue
380ad617f3 Merge pull request #38461 from gmarek/job
Automatic merge from submit-queue

Add an option to run Job in Density/Load config

cc @timothysc @jeremyeder 

@erictune @soltysh - I ran this test and it seems to me that Job has noticeably worse performance than Deployment. I'll create an issue for this, but this PR makes for an easy repro.
2016-12-13 05:57:18 -08:00
Dan Winship
f369372dad Drop version-parsing from pkg/version
pkg/version is now just version constants, etc, not version parsing
2016-12-13 08:53:19 -05:00
gmarek
c9e78f1cd5 Add an option to run Job in Density/Load config 2016-12-13 13:21:30 +01:00
Kubernetes Submit Queue
b14f57ca7e Merge pull request #38620 from wojtek-t/increase_wait_for_nodes_timeout
Automatic merge from submit-queue (batch tested with PRs 38617, 38620)

Increase timeout for waiting for nodes
2016-12-13 03:46:29 -08:00
Kubernetes Submit Queue
99f876bb78 Merge pull request #38609 from wojtek-t/cleanup_annoying_test_logs
Automatic merge from submit-queue

Reduce amount of annoying logs in large clusters
2016-12-13 02:12:07 -08:00
Wojciech Tyczynski
6051870a48 Allow for configuring timeout for waiting for nodes 2016-12-13 09:55:34 +01:00
bprashanth
fc57d76018 Delete static-ip after ingress has cleaned up 2016-12-12 19:06:08 -08:00
Mike Danese
82d9ed770c fix examples/ compilation so that test/ also compiles
fix network-tester cauldron serve_hostnames
2016-12-12 15:14:49 -08:00
Mike Danese
c87de85347 autoupdate BUILD files 2016-12-12 13:30:07 -08:00
Wojciech Tyczynski
ebdef4d57e Reduce amount of annoying logs in large clusters 2016-12-12 14:43:41 +01:00
Michail Kargakis
0d95d71e65 test: cleanup test logs for deployments 2016-12-12 12:19:51 +01:00
Wojciech Tyczynski
b1da629374 Fix services in load test 2016-12-12 09:40:43 +01:00
Clayton Coleman
c52d510a24 refactor: generated 2016-12-10 18:05:53 -05:00
Clayton Coleman
38127a4e7e Update disruption test 2016-12-10 18:05:37 -05:00
Clayton Coleman
42d410fdde Switch to use pkg/apis/meta/v1/unstructured and the new interfaces
Avoid directly accessing an unstructured type if it is not required.
2016-12-10 18:05:28 -05:00
Janet Kuo
0e2b0a6f55 Rename pet to stateful pods in statefulset e2e tests logs 2016-12-09 16:01:13 -08:00
Kubernetes Submit Queue
b72c006eb3 Merge pull request #34554 from derekwaynecarr/quota-storage-class
Automatic merge from submit-queue (batch tested with PRs 37270, 38309, 37568, 34554)

Ability to quota storage by storage class

Adds the ability to quota storage by storage class.
1. `<storage-class>.storageclass.storage.k8s.io/persistentvolumeclaims` - quota the number of claims with a specific storage class
2. `<storage-class>.storageclass.storage.k8s.io/requests.storage` - quota the cumulative request for storage in a particular storage class.

For example:

```
$ cat quota.yaml
apiVersion: v1
kind: ResourceQuota
metadata:
  name: storage-quota
spec:
  hard:
    requests.storage: 100Gi
    persistentvolumeclaims: 100
    gold.storageclass.storage.k8s.io/requests.storage: 50Gi
    gold.storageclass.storage.k8s.io/persistentvolumeclaims: 5
    silver.storageclass.storage.k8s.io/requests.storage: 75Gi
    silver.storageclass.storage.k8s.io/persistentvolumeclaims: 10
    bronze.storageclass.storage.k8s.io/requests.storage: 100Gi
    bronze.storageclass.storage.k8s.io/persistentvolumeclaims: 15
$ kubectl create -f quota.yaml
$ cat pvc-bronze.yaml
kind: PersistentVolumeClaim
apiVersion: v1
metadata:
  generateName: pvc-bronze-
  annotations:
    volume.beta.kubernetes.io/storage-class: "bronze"
spec:
  accessModes:
    - ReadWriteOnce
  resources:
    requests:
      storage: 8Gi
$ kubectl create -f pvc-bronze.yaml
$ kubectl get quota storage-quota -o yaml
apiVersion: v1
kind: ResourceQuota
...
status:
  hard:
    bronze.storageclass.storage.k8s.io/persistentvolumeclaims: "15"
    bronze.storageclass.storage.k8s.io/requests.storage: 100Gi
    gold.storageclass.storage.k8s.io/persistentvolumeclaims: "5"
    gold.storageclass.storage.k8s.io/requests.storage: 50Gi
    persistentvolumeclaims: "100"
    requests.storage: 100Gi
    silver.storageclass.storage.k8s.io/persistentvolumeclaims: "10"
    silver.storageclass.storage.k8s.io/requests.storage: 75Gi
  used:
    bronze.storageclass.storage.k8s.io/persistentvolumeclaims: "1"
    bronze.storageclass.storage.k8s.io/requests.storage: 8Gi
    gold.storageclass.storage.k8s.io/persistentvolumeclaims: "0"
    gold.storageclass.storage.k8s.io/requests.storage: "0"
    persistentvolumeclaims: "1"
    requests.storage: 8Gi
    silver.storageclass.storage.k8s.io/persistentvolumeclaims: "0"
    silver.storageclass.storage.k8s.io/requests.storage: "0"
```
2016-12-09 14:11:21 -08:00
Kubernetes Submit Queue
59cfdfb8db Merge pull request #38463 from jszczepkowski/hpa-e2e-fix5
Automatic merge from submit-queue (batch tested with PRs 37860, 38429, 38451, 36050, 38463)

HPA e2e tests: fixed waiting for service creation.
2016-12-09 13:22:20 -08:00
Derek Carr
459a7a05f1 Ability to quota storage by storage class 2016-12-09 13:26:59 -05:00
Kubernetes Submit Queue
695fbb8fb6 Merge pull request #38284 from Crassirostris/kibana-test-fix-2
Automatic merge from submit-queue

Increase Kibana e2e test timeout

10 minutes is not always enough to start a Kibana instance; increasing the timeout will most probably fix https://github.com/kubernetes/kubernetes/issues/36809

Follow-up of https://github.com/kubernetes/kubernetes/pull/36192

CC @piosz
2016-12-09 08:49:42 -08:00
Kubernetes Submit Queue
f5f109ca11 Merge pull request #38292 from gmarek/daemon
Automatic merge from submit-queue

Add Daemons to Load/Density tests

cc @jeremyeder @timstclair @sjug
2016-12-09 07:29:15 -08:00
Kubernetes Submit Queue
72b52d4334 Merge pull request #38302 from Crassirostris/revert-logging-e2e-verbosity
Automatic merge from submit-queue

Revert "Make logging for gcl e2e test more verbose"

Revert test change in favor of https://github.com/kubernetes/kubernetes/pull/38213

CC @piosz
2016-12-09 06:49:21 -08:00
Kubernetes Submit Queue
52e1b36961 Merge pull request #38462 from gmarek/print
Automatic merge from submit-queue

Make resource gatherer print the data about resource usage in case of…
2016-12-09 06:07:57 -08:00
Jerzy Szczepkowski
2b070e1724 HPA e2e tests: fixed waiting for service creation.
HPA e2e tests: fixed waiting for service creation. Fixes #32512.
2016-12-09 14:55:51 +01:00
gmarek
bfe2a2b03c Add Daemons to Load/Density tests 2016-12-09 14:31:46 +01:00
gmarek
3361576f3b Make resource gatherer print the data about resource usage in case of failure 2016-12-09 11:55:40 +01:00
Wojciech Tyczynski
a9ec31209e GetOptions - fix tests 2016-12-09 09:42:01 +01:00
Kubernetes Submit Queue
81dd16a4ae Merge pull request #38244 from MrHohn/e2e-reboot-drop
Automatic merge from submit-queue (batch tested with PRs 36419, 38330, 37718, 38244, 38375)

Guarantees drop packets commands succeed in reboot test

Fixes the main case in #33405 and #36230.
Previous attempted fix in #38057.

During the reboot test, the iptables command that was supposed to take the node offline failed to exec. 
It turned out that the xtables lock was being held by other processes, which led to this failure. Logs below:
```
I1202 20:00:29.686] Dec  2 20:00:29.685: INFO: ssh jenkins@146.148.111.167:22: stdout:
"+ sleep 10
+ sudo iptables -I INPUT 1 -s 127.0.0.1 -j ACCEPT
Another app is currently holding the xtables lock. Perhaps you want to use the -w option?"
I1202 20:00:29.686] Dec  2 20:00:29.685: INFO: ssh jenkins@146.148.111.167:22: stderr:    ""
I1202 20:00:29.686] Dec  2 20:00:29.685: INFO: ssh jenkins@146.148.111.167:22: exit code: 0
```

This reboot test won't pass if any one of these iptables commands fails. This PR puts the "reboot" commands into while loops to guarantee they retry until they succeed.

`sudo iptables -t filter -nL` is removed since it is clear now that the `FILTER` rules won't be clobbered.

(Tests passed on local cluster.)
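
A rough Go sketch of the retry idea (illustrative only; the actual change wraps the shell commands sent over SSH in `while` loops rather than using Go code like this):

```go
// Illustrative retry loop: keep re-running a command until it exits 0,
// e.g. when another process is briefly holding the xtables lock.
// Not the actual e2e test code.
package main

import (
	"log"
	"os/exec"
	"time"
)

// runUntilSuccess retries the given command until it exits successfully.
func runUntilSuccess(name string, args ...string) {
	for {
		if err := exec.Command(name, args...).Run(); err == nil {
			return
		}
		time.Sleep(time.Second) // back off and retry
	}
}

func main() {
	runUntilSuccess("sudo", "iptables", "-I", "INPUT", "1", "-s", "127.0.0.1", "-j", "ACCEPT")
	log.Println("rule installed")
}
```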

@bprashanth
2016-12-08 17:13:58 -08:00
Kubernetes Submit Queue
b0b6f3c256 Merge pull request #38401 from liggitt/addressable-deep-copy
Automatic merge from submit-queue (batch tested with PRs 36071, 32752, 37998, 38350, 38401)

Pass addressable values to DeepCopy

Extracted from https://github.com/kubernetes/kubernetes/pull/35728

These are the places where we are currently calling DeepCopy incorrectly and need to fix, even if we don't pick up the changes to DeepCopy in #35728:
* creating a new cloner means we have no generated functions registered
* passing non-addressable values doesn't pick up generated deep copy functions, and forces us into reflective mode
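
A minimal sketch of the addressability distinction at play (this is plain Go reflect behavior, not the Kubernetes cloner API): reflection over a plain value copy is not addressable, while a value reached through a pointer is, which is why the deep-copy machinery should be handed addressable values:

```go
// Illustration of addressable vs. non-addressable values under reflection;
// the Pod type here is a stand-in, not the real API object.
package main

import (
	"fmt"
	"reflect"
)

type Pod struct{ Name string }

func main() {
	p := Pod{Name: "web"}

	// reflect.ValueOf(p) wraps a copy of p; it is NOT addressable,
	// so its fields cannot be set through reflection.
	v := reflect.ValueOf(p)
	fmt.Println("value addressable:", v.CanAddr()) // false

	// reflect.ValueOf(&p).Elem() refers to p itself and IS addressable.
	pv := reflect.ValueOf(&p).Elem()
	fmt.Println("pointer elem addressable:", pv.CanAddr()) // true
	pv.FieldByName("Name").SetString("web-copy")
	fmt.Println(p.Name) // "web-copy"
}
```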
2016-12-08 16:26:00 -08:00
Kubernetes Submit Queue
c294bf0d06 Merge pull request #38350 from spxtr/ExpectoPatronum
Automatic merge from submit-queue (batch tested with PRs 36071, 32752, 37998, 38350, 38401)

Eradicate ExpectNoError from test/e2e.

```
$ cd test/e2e
$ sed -i "s/\tExpectNoError/\tframework.ExpectNoError/g" *.go
```
2016-12-08 16:25:58 -08:00
Zihong Zheng
055a76f005 Guarantees drop packets commands succeed in reboot test 2016-12-08 13:28:22 -08:00
Jordan Liggitt
6819706adf Pass addressable values to DeepCopy 2016-12-08 14:16:01 -05:00
Madhusudan.C.S
c1cede22cf Use the serviceShard variable in the service shard block, not the service variable. 2016-12-08 11:13:30 -08:00
Kubernetes Submit Queue
126a842832 Merge pull request #38377 from jszczepkowski/hpa-e2e-fix4
Automatic merge from submit-queue (batch tested with PRs 38377, 36365, 36648, 37691, 38339)

HPA e2e tests: fixed problem w/blocking channel.
2016-12-08 10:51:54 -08:00
Kubernetes Submit Queue
907a80c7af Merge pull request #37837 from gmarek/secrets
Automatic merge from submit-queue

Add secrets to Density and Load tests

cc @jeremyeder @timstclair @sjug
2016-12-08 08:36:03 -08:00
gmarek
be3889810d Add secrets to Density and Load tests 2016-12-08 11:14:43 +01:00
Kubernetes Submit Queue
99c066efa7 Merge pull request #38260 from fraenkel/port_forward_readiness
Automatic merge from submit-queue

Wait for the port to be ready before starting

Fixes the portforward flakes. See #27673 & #27680
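
A hedged sketch of the "wait for the port to be ready" idea (a hypothetical helper, not the code from this PR): dial the forwarded port in a loop until it accepts a connection or a deadline passes:

```go
// Hypothetical readiness helper: poll a TCP port until it accepts a
// connection, instead of assuming port-forwarding is ready immediately.
package main

import (
	"fmt"
	"net"
	"time"
)

func waitForPort(addr string, timeout time.Duration) error {
	deadline := time.Now().Add(timeout)
	for time.Now().Before(deadline) {
		conn, err := net.DialTimeout("tcp", addr, time.Second)
		if err == nil {
			conn.Close()
			return nil
		}
		time.Sleep(200 * time.Millisecond)
	}
	return fmt.Errorf("port %s not ready after %v", addr, timeout)
}

func main() {
	if err := waitForPort("127.0.0.1:8080", 30*time.Second); err != nil {
		fmt.Println(err)
		return
	}
	fmt.Println("port ready, starting test")
}
```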
2016-12-08 02:10:46 -08:00
Jerzy Szczepkowski
e94d2fdc4e HPA e2e tests: fixed problem w/blocking channel.
HPA e2e tests: fixed problem w/blocking channel. Resolves #38298.
2016-12-08 10:59:58 +01:00
Kubernetes Submit Queue
258971002f Merge pull request #37850 from MrHohn/gke-dns-autoscale
Automatic merge from submit-queue (batch tested with PRs 37092, 37850)

Turns on dns horizontal scaling tests for GKE

Seems like the dns-autoscaler is already enabled in [this recent gke build](https://k8s-gubernator.appspot.com/build/kubernetes-jenkins/logs/ci-kubernetes-e2e-gci-gke/769/).
Turning on the corresponding e2e tests to increase test coverage.

Probably better to wait for this fix #37261 to go in first.

@bowei @bprashanth 
cc @maisem @roberthbailey
2016-12-07 18:13:11 -08:00
Joe Finney
c9edc1c9be Eradicate ExpectNoError from test/e2e. 2016-12-07 17:51:35 -08:00
Kubernetes Submit Queue
350c14b81c Merge pull request #38141 from bprashanth/lb_testing
Automatic merge from submit-queue (batch tested with PRs 37325, 38313, 38141, 38321, 38333)

Cleanup firewalls, add nginx ingress to presubmit

Make the firewall cleanup code follow the same pattern as the other cleanup functions, and add the nginx ingress e2e to presubmit. 

Planning to watch the test for a bit, and if it works alright, I'll add the other Ingress e2e to post-submit merge blocker.
2016-12-07 17:14:18 -08:00
Kubernetes Submit Queue
4b44926f90 Merge pull request #37325 from ivan4th/fix-e2e-with-complete-pods-in-kube-system-ns
Automatic merge from submit-queue (batch tested with PRs 37325, 38313, 38141, 38321, 38333)

Fix running e2e with 'Completed' kube-system pods

As of now, the e2e runner keeps waiting for pods in the `kube-system` namespace to be "Running and Ready" if there are any pods in the `Completed` state in that namespace.
This happens, for example, after following the [Kubernetes Hosted Installation](http://docs.projectcalico.org/v2.0/getting-started/kubernetes/installation/#kubernetes-hosted-installation) instructions for Calico, making it impossible to run conformance tests against the cluster. It's also possible to reproduce the problem like this:
```
$ cat testjob.yaml
apiVersion: batch/v1
kind: Job
metadata:
  name: tst
  namespace: kube-system
spec:
  template:
    metadata:
      name: tst
    spec:
      containers:
      - name: tst
        image: busybox
        command: ["echo",  "test"]
      restartPolicy: Never
$ kubectl create -f testjob.yaml
$ go run hack/e2e.go -v --test --test_args='--ginkgo.focus=existing\s+RC'
```
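
A minimal sketch of the kind of filtering the fix implies (simplified types, not the actual e2e framework code): pods that have completed successfully are excluded from the set we keep waiting on:

```go
// Sketch of skipping terminal pods when waiting for kube-system to be
// "Running and Ready"; the Pod type here is simplified for illustration.
package main

import "fmt"

type Pod struct {
	Name  string
	Phase string // "Running", "Succeeded", "Failed", ...
	Ready bool
}

// podsPendingReadiness returns only the pods we should still wait for:
// pods that completed successfully (e.g. finished Jobs) are ignored.
func podsPendingReadiness(pods []Pod) []Pod {
	var pending []Pod
	for _, p := range pods {
		if p.Phase == "Succeeded" {
			continue // Completed pods never become Ready; don't wait on them.
		}
		if !(p.Phase == "Running" && p.Ready) {
			pending = append(pending, p)
		}
	}
	return pending
}

func main() {
	pods := []Pod{
		{Name: "kube-dns", Phase: "Running", Ready: true},
		{Name: "tst-job-pod", Phase: "Succeeded"},
	}
	fmt.Println(len(podsPendingReadiness(pods))) // 0: nothing left to wait for
}
```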
2016-12-07 17:14:14 -08:00
Zihong Zheng
69dc74bab3 Turns on dns horizontal scaling tests for GKE 2016-12-07 16:05:22 -08:00
Kubernetes Submit Queue
66f5d07e05 Merge pull request #38255 from bprashanth/svc_cleanup
Automatic merge from submit-queue

Delete regional static-ip instead of global for type=lb

Global vs. regional is the difference between
```
$ gcloud compute addresses delete foo --global
$ gcloud compute addresses delete foo --region us-central1
```

Type=LoadBalancer uses the second form, but we were doing the first.
This also adds some logging.
2016-12-07 13:38:50 -08:00