Commit Graph

4061 Commits

Author SHA1 Message Date
Kubernetes Submit Queue
32946c5bd0 Merge pull request #38820 from jszczepkowski/e2e-not-ready-nodes
Automatic merge from submit-queue (batch tested with PRs 38818, 38813, 38820)

E2E test lib: improved logging of not ready nodes.
2016-12-15 11:04:21 -08:00
Jerzy Szczepkowski
ec17af655f E2E test lib: improved logging of not ready nodes.
2016-12-15 18:23:18 +01:00
Kubernetes Submit Queue
d8efc779ed Merge pull request #38154 from caesarxuchao/rename-release_1_5
Automatic merge from submit-queue (batch tested with PRs 38154, 38502)

Rename "release_1_5" clientset to just "clientset"

We used to keep multiple clientset releases in the main repo. Now that [client-go](https://github.com/kubernetes/client-go) does the versioning, there is no need to keep releases in the main repo. This PR renames the "release_1_5" clientset to just "clientset"; clientset development will be done in this directory.

@kubernetes/sig-api-machinery @deads2k 

```release-note
The main repository does not keep multiple releases of clientsets anymore. Please find previous releases at https://github.com/kubernetes/client-go
```
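
For consumers updating imports, only the package path changes; a minimal sketch (the import paths and the restclient config are assumed for illustration from the rename described above, not taken from the diff):

```go
package main

import (
	"fmt"

	// Before this PR the versioned clientset lived at
	// "k8s.io/kubernetes/pkg/client/clientset_generated/release_1_5".
	clientset "k8s.io/kubernetes/pkg/client/clientset_generated/clientset"
	"k8s.io/kubernetes/pkg/client/restclient"
)

func main() {
	// Host is a placeholder; point it at a real apiserver.
	c, err := clientset.NewForConfig(&restclient.Config{Host: "http://127.0.0.1:8080"})
	if err != nil {
		panic(err)
	}
	fmt.Printf("clientset ready: %T\n", c)
}
```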
2016-12-14 14:21:51 -08:00
Chao Xu
6709b7ada2 run hack/update-codegen.sh
run hack/verify-gofmt.sh
update bazel
2016-12-14 12:39:49 -08:00
Chao Xu
c81057be2c move federation_release_1_5 to federation_clientset 2016-12-14 12:39:49 -08:00
Chao Xu
03d8820edc rename /release_1_5 to /clientset 2016-12-14 12:39:48 -08:00
Kubernetes Submit Queue
f55fc7a9e3 Merge pull request #38734 from bprashanth/ing_cleanup_timeout
Automatic merge from submit-queue (batch tested with PRs 38689, 38743, 38734, 38430)

Temporarily bump e2e cleanup timeout
2016-12-13 21:25:35 -08:00
bprashanth
f898bc5ecf Temporarily bump e2e cleanup timeout 2016-12-13 15:13:32 -08:00
Kubernetes Submit Queue
a9c5f67509 Merge pull request #38668 from bprashanth/glbc_version
Automatic merge from submit-queue

Bump glbc version, cleanup test

Matches https://github.com/kubernetes/ingress/pull/55
2016-12-13 13:27:01 -08:00
Kubernetes Submit Queue
4505224cd3 Merge pull request #35436 from danwinship/utilversion
Automatic merge from submit-queue

Add a package for handling version numbers (including non-"Semantic" versions)

As noted in #32401, we are using Semantic Version-parsing libraries to parse version numbers that aren't necessarily "Semantic". Contrary to what I said there, though, this turns out not to be a problem for the iptables code at the moment, because the regexp used to extract the version number from the "iptables --version" output only pulls out three components, so given "iptables v1.4.19.1" it would extract just "1.4.19". Still, it could become a problem if they later release "1.5" rather than "1.5.0", or if we eventually need to _compare_ against a 4-digit version number.

Also, as noted in #23854, we were also using two different semver libraries in different parts of the code (plus a wrapper around one of them in pkg/version).

This PR adds pkg/util/version, with code to parse and compare both semver and non-semver version strings, and then updates kubernetes to use it everywhere (including getting rid of a bunch of code duplication in kubelet by making utilversion.Version implement the kubecontainer.Version interface directly).

Ironically, this does not actually allow us to get rid of either of the vendored semver libraries, because we still have other dependencies that depend on each of them. (cadvisor uses blang/semver and etcd uses coreos/go-semver)

fixes #32401, #23854
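
A rough usage sketch of the new package (the function names `ParseGeneric`, `MustParseGeneric`, and `AtLeast` are assumed here from the package as described, not an excerpt from the PR):

```go
package main

import (
	"fmt"

	utilversion "k8s.io/kubernetes/pkg/util/version"
)

func main() {
	// ParseGeneric accepts non-semver strings, e.g. the 4-component
	// "1.4.19.1" that "iptables --version" can report.
	v, err := utilversion.ParseGeneric("1.4.19.1")
	if err != nil {
		panic(err)
	}

	// Comparisons keep every component, so "1.4.19.1" >= "1.4.19".
	min := utilversion.MustParseGeneric("1.4.19")
	fmt.Println(v.AtLeast(min)) // true
}
```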
2016-12-13 12:10:38 -08:00
Kubernetes Submit Queue
d13067ed37 Merge pull request #38507 from gmarek/res-gat-kubemark
Automatic merge from submit-queue (batch tested with PRs 38695, 38507)

Fix resource gatherer for kubemark

Not working yet...
2016-12-13 07:30:35 -08:00
gmarek
1b1a4aef6a Fix resource gatherer for kubemark 2016-12-13 14:59:22 +01:00
Kubernetes Submit Queue
380ad617f3 Merge pull request #38461 from gmarek/job
Automatic merge from submit-queue

Add an option to run Job in Density/Load config

cc @timothysc @jeremyeder 

@erictune @soltysh - I ran this test and it seems to me that Job has noticeably worse performance than Deployment. I'll create an issue for this, but this PR is for easy repro.
2016-12-13 05:57:18 -08:00
Dan Winship
f369372dad Drop version-parsing from pkg/version
pkg/version now contains just version constants, etc., not version-parsing code
2016-12-13 08:53:19 -05:00
gmarek
c9e78f1cd5 Add an option to run Job in Density/Load config 2016-12-13 13:21:30 +01:00
Kubernetes Submit Queue
b14f57ca7e Merge pull request #38620 from wojtek-t/increase_wait_for_nodes_timeout
Automatic merge from submit-queue (batch tested with PRs 38617, 38620)

Increase timeout for waiting for nodes
2016-12-13 03:46:29 -08:00
Kubernetes Submit Queue
99f876bb78 Merge pull request #38609 from wojtek-t/cleanup_annoying_test_logs
Automatic merge from submit-queue

Reduce amount of annoying logs in large clusters
2016-12-13 02:12:07 -08:00
Wojciech Tyczynski
6051870a48 Allow for configuring timeout for waiting for nodes 2016-12-13 09:55:34 +01:00
bprashanth
fc57d76018 Delete static-ip after ingress has cleaned up 2016-12-12 19:06:08 -08:00
Mike Danese
82d9ed770c fix examples/ compilation so that test/ also compiles
fix network-tester cauldron serve_hostnames
2016-12-12 15:14:49 -08:00
Mike Danese
c87de85347 autoupdate BUILD files 2016-12-12 13:30:07 -08:00
Wojciech Tyczynski
ebdef4d57e Reduce amount of annoying logs in large clusters 2016-12-12 14:43:41 +01:00
Michail Kargakis
0d95d71e65 test: cleanup test logs for deployments 2016-12-12 12:19:51 +01:00
Wojciech Tyczynski
b1da629374 Fix services in load test 2016-12-12 09:40:43 +01:00
Clayton Coleman
c52d510a24 refactor: generated 2016-12-10 18:05:53 -05:00
Clayton Coleman
38127a4e7e Update disruption test 2016-12-10 18:05:37 -05:00
Clayton Coleman
42d410fdde Switch to use pkg/apis/meta/v1/unstructured and the new interfaces
Avoid directly accessing an unstructured type if it is not required.
2016-12-10 18:05:28 -05:00
Janet Kuo
0e2b0a6f55 Rename pet to stateful pods in statefulset e2e tests logs 2016-12-09 16:01:13 -08:00
Kubernetes Submit Queue
b72c006eb3 Merge pull request #34554 from derekwaynecarr/quota-storage-class
Automatic merge from submit-queue (batch tested with PRs 37270, 38309, 37568, 34554)

Ability to quota storage by storage class

Adds the ability to quota storage by storage class.
1. `<storage-class>.storageclass.storage.k8s.io/persistentvolumeclaims` - quota the number of claims with a specific storage class
2. `<storage-class>.storageclass.storage.k8s.io/requests.storage` - quota the cumulative request for storage in a particular storage class.

For example:

```
$ cat quota.yaml
apiVersion: v1
kind: ResourceQuota
metadata:
  name: storage-quota
spec:
  hard:
    requests.storage: 100Gi
    persistentvolumeclaims: 100
    gold.storageclass.storage.k8s.io/requests.storage: 50Gi
    gold.storageclass.storage.k8s.io/persistentvolumeclaims: 5
    silver.storageclass.storage.k8s.io/requests.storage: 75Gi
    silver.storageclass.storage.k8s.io/persistentvolumeclaims: 10
    bronze.storageclass.storage.k8s.io/requests.storage: 100Gi
    bronze.storageclass.storage.k8s.io/persistentvolumeclaims: 15
$ kubectl create -f quota.yaml
$ cat pvc-bronze.yaml
kind: PersistentVolumeClaim
apiVersion: v1
metadata:
  generateName: pvc-bronze-
  annotations:
    volume.beta.kubernetes.io/storage-class: "bronze"
spec:
  accessModes:
    - ReadWriteOnce
  resources:
    requests:
      storage: 8Gi
$ kubectl create -f pvc-bronze.yaml
$ kubectl get quota storage-quota -o yaml
apiVersion: v1
kind: ResourceQuota
...
status:
  hard:
    bronze.storageclass.storage.k8s.io/persistentvolumeclaims: "15"
    bronze.storageclass.storage.k8s.io/requests.storage: 100Gi
    gold.storageclass.storage.k8s.io/persistentvolumeclaims: "5"
    gold.storageclass.storage.k8s.io/requests.storage: 50Gi
    persistentvolumeclaims: "100"
    requests.storage: 100Gi
    silver.storageclass.storage.k8s.io/persistentvolumeclaims: "10"
    silver.storageclass.storage.k8s.io/requests.storage: 75Gi
  used:
    bronze.storageclass.storage.k8s.io/persistentvolumeclaims: "1"
    bronze.storageclass.storage.k8s.io/requests.storage: 8Gi
    gold.storageclass.storage.k8s.io/persistentvolumeclaims: "0"
    gold.storageclass.storage.k8s.io/requests.storage: "0"
    persistentvolumeclaims: "1"
    requests.storage: 8Gi
    silver.storageclass.storage.k8s.io/persistentvolumeclaims: "0"
    silver.storageclass.storage.k8s.io/requests.storage: "0"
```
2016-12-09 14:11:21 -08:00
Kubernetes Submit Queue
59cfdfb8db Merge pull request #38463 from jszczepkowski/hpa-e2e-fix5
Automatic merge from submit-queue (batch tested with PRs 37860, 38429, 38451, 36050, 38463)

HPA e2e tests: fixed waiting for service creation.
2016-12-09 13:22:20 -08:00
Derek Carr
459a7a05f1 Ability to quota storage by storage class 2016-12-09 13:26:59 -05:00
Kubernetes Submit Queue
695fbb8fb6 Merge pull request #38284 from Crassirostris/kibana-test-fix-2
Automatic merge from submit-queue

Increase Kibana e2e test timeout

10 minutes is not always enough to start a Kibana instance; increasing the timeout will most probably fix https://github.com/kubernetes/kubernetes/issues/36809

Follow-up of https://github.com/kubernetes/kubernetes/pull/36192

CC @piosz
2016-12-09 08:49:42 -08:00
Kubernetes Submit Queue
f5f109ca11 Merge pull request #38292 from gmarek/daemon
Automatic merge from submit-queue

Add Daemons to Load/Density tests

cc @jeremyeder @timstclair @sjug
2016-12-09 07:29:15 -08:00
Kubernetes Submit Queue
72b52d4334 Merge pull request #38302 from Crassirostris/revert-logging-e2e-verbosity
Automatic merge from submit-queue

Revert "Make logging for gcl e2e test more verbose"

Revert test change in favor of https://github.com/kubernetes/kubernetes/pull/38213

CC @piosz
2016-12-09 06:49:21 -08:00
Kubernetes Submit Queue
52e1b36961 Merge pull request #38462 from gmarek/print
Automatic merge from submit-queue

Make resource gatherer print the data about resource usage in case of…
2016-12-09 06:07:57 -08:00
Jerzy Szczepkowski
2b070e1724 HPA e2e tests: fixed waiting for service creation. Fixes #32512.
2016-12-09 14:55:51 +01:00
gmarek
bfe2a2b03c Add Daemons to Load/Density tests 2016-12-09 14:31:46 +01:00
gmarek
3361576f3b Make resource gatherer print the data about resource usage in case of failure 2016-12-09 11:55:40 +01:00
Wojciech Tyczynski
a9ec31209e GetOptions - fix tests 2016-12-09 09:42:01 +01:00
Kubernetes Submit Queue
81dd16a4ae Merge pull request #38244 from MrHohn/e2e-reboot-drop
Automatic merge from submit-queue (batch tested with PRs 36419, 38330, 37718, 38244, 38375)

Guarantees drop packets commands succeed in reboot test

Fixes the main case in #33405 and #36230.
Previous attempted fix in #38057.

During the reboot test, the iptables command that was supposed to take the node offline failed to exec. 
It turned out that the xtables lock was being held by another process, which led to this failure. Logs below:
```
I1202 20:00:29.686] Dec  2 20:00:29.685: INFO: ssh jenkins@146.148.111.167:22: stdout:
"+ sleep 10
+ sudo iptables -I INPUT 1 -s 127.0.0.1 -j ACCEPT
Another app is currently holding the xtables lock. Perhaps you want to use the -w option?"
I1202 20:00:29.686] Dec  2 20:00:29.685: INFO: ssh jenkins@146.148.111.167:22: stderr:    ""
I1202 20:00:29.686] Dec  2 20:00:29.685: INFO: ssh jenkins@146.148.111.167:22: exit code: 0
```

The reboot test won't pass if any one of these iptables commands fails. This PR puts the "reboot" commands into while loops so that each one retries until it succeeds (see the sketch below).

`sudo iptables -t filter -nL` is removed since it is clear now that the `FILTER` rules won't be clobbered.

(Tests passed on local cluster.)

@bprashanth
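
A minimal sketch of the retry idea (the helper and its name are hypothetical; the real test builds comparable shell strings inline):

```go
package main

import "fmt"

// retried is a hypothetical helper (not the test's actual code): it wraps an
// iptables command in a shell loop so the command keeps retrying while
// another process holds the xtables lock, instead of failing once.
func retried(iptablesCmd string) string {
	return fmt.Sprintf("while true; do %s && break; done", iptablesCmd)
}

func main() {
	fmt.Println(retried("sudo iptables -I INPUT 1 -s 127.0.0.1 -j ACCEPT"))
}
```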
2016-12-08 17:13:58 -08:00
Kubernetes Submit Queue
b0b6f3c256 Merge pull request #38401 from liggitt/addressable-deep-copy
Automatic merge from submit-queue (batch tested with PRs 36071, 32752, 37998, 38350, 38401)

Pass addressable values to DeepCopy

Extracted from https://github.com/kubernetes/kubernetes/pull/35728

These are the places we are currently calling DeepCopy incorrectly, and we need to fix, even if we don't pick up the changes to DeepCopy in #35728:
* creating a new cloner means we have no generated functions registered
* passing non-addressable values doesn't pick up generated deep copy functions, and forces us into reflective mode
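
For illustration, a hedged sketch of the corrected pattern (assuming the scheme exposes a `DeepCopy(interface{})` entry point wrapping `conversion.Cloner`; not the actual diff):

```go
package example

import (
	"k8s.io/kubernetes/pkg/api"
	"k8s.io/kubernetes/pkg/api/v1"
)

// copyPod sketches the corrected pattern: reuse the shared scheme, which has
// the generated deep-copy functions registered, and pass a pointer (an
// addressable value) rather than a struct copy.
func copyPod(pod *v1.Pod) (*v1.Pod, error) {
	// Not DeepCopy(*pod), and not a fresh conversion.NewCloner(), which
	// would have no generated functions registered.
	out, err := api.Scheme.DeepCopy(pod)
	if err != nil {
		return nil, err
	}
	return out.(*v1.Pod), nil
}
```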
2016-12-08 16:26:00 -08:00
Kubernetes Submit Queue
c294bf0d06 Merge pull request #38350 from spxtr/ExpectoPatronum
Automatic merge from submit-queue (batch tested with PRs 36071, 32752, 37998, 38350, 38401)

Eradicate ExpectNoError from test/e2e.

```
$ cd test/e2e
$ sed -i "s/\tExpectNoError/\tframework.ExpectNoError/g" *.go
```
2016-12-08 16:25:58 -08:00
Zihong Zheng
055a76f005 Guarantees drop packets commands succeed in reboot test 2016-12-08 13:28:22 -08:00
Jordan Liggitt
6819706adf Pass addressable values to DeepCopy 2016-12-08 14:16:01 -05:00
Madhusudan.C.S
c1cede22cf Use the serviceShard variable in the service shard block, not the service variable. 2016-12-08 11:13:30 -08:00
Kubernetes Submit Queue
126a842832 Merge pull request #38377 from jszczepkowski/hpa-e2e-fix4
Automatic merge from submit-queue (batch tested with PRs 38377, 36365, 36648, 37691, 38339)

HPA e2e tests: fixed problem w/blocking channel.
2016-12-08 10:51:54 -08:00
Kubernetes Submit Queue
907a80c7af Merge pull request #37837 from gmarek/secrets
Automatic merge from submit-queue

Add secrets to Density and Load tests

cc @jeremyeder @timstclair @sjug
2016-12-08 08:36:03 -08:00
gmarek
be3889810d Add secrets to Density and Load tests 2016-12-08 11:14:43 +01:00
Kubernetes Submit Queue
99c066efa7 Merge pull request #38260 from fraenkel/port_forward_readiness
Automatic merge from submit-queue

Wait for the port to be ready before starting

Fixes the portforward flakes. See #27673 & #27680
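
A minimal sketch of the idea (helper name and polling interval are made up for illustration; not the PR's code):

```go
package main

import (
	"fmt"
	"net"
	"time"
)

// waitForPort polls a TCP address until it accepts connections, so a test
// only proceeds once the forwarded port is actually ready.
func waitForPort(addr string, timeout time.Duration) error {
	deadline := time.Now().Add(timeout)
	for time.Now().Before(deadline) {
		if conn, err := net.Dial("tcp", addr); err == nil {
			conn.Close()
			return nil
		}
		time.Sleep(100 * time.Millisecond)
	}
	return fmt.Errorf("timed out waiting for %s", addr)
}

func main() {
	if err := waitForPort("127.0.0.1:8080", 30*time.Second); err != nil {
		fmt.Println(err)
	}
}
```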
2016-12-08 02:10:46 -08:00
Jerzy Szczepkowski
e94d2fdc4e HPA e2e tests: fixed problem w/blocking channel. Resolves #38298.
2016-12-08 10:59:58 +01:00