Commit Graph

29256 Commits

Author SHA1 Message Date
Kubernetes Submit Queue
671efe1255 Merge pull request #54506 from juanvallejo/jvallejo/remove-check-resource-builder
Automatic merge from submit-queue (batch tested with PRs 54399, 54557, 54506). If you want to cherry-pick this change to another branch, please follow the instructions [here](https://github.com/kubernetes/community/blob/master/contributors/devel/cherry-picks.md).

remove mutual exclusivity check - local, unstructured builder attrs

**Release note**:
```release-note
NONE
```
Remove incorrect mutual exclusivity check between the resource builder `Local` and `Unstructured` attributes.

Related comment: https://github.com/kubernetes/kubernetes/pull/48763#discussion_r146437490
Followup to https://github.com/kubernetes/kubernetes/pull/48763

cc @smarterclayton
2017-10-25 09:50:09 -07:00
Ferran Rodenas
c1802dc472 Update kubectl drain command to use policy V1Beta1 instead of unversioned API
Signed-off-by: Ferran Rodenas <frodenas@gmail.com>
2017-10-25 16:44:57 +02:00
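
A minimal sketch (not from the commit above) of what evicting a pod through the versioned policy/v1beta1 API looks like with client-go; the pod and namespace names are placeholders, and the `Evict` signature follows the client-go of that era (no context argument — newer releases use `EvictV1beta1` and take a context).

```go
package drainsketch

import (
	policyv1beta1 "k8s.io/api/policy/v1beta1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
)

// evictPod asks the API server to evict a pod via the versioned
// policy/v1beta1 Eviction subresource instead of the unversioned API.
func evictPod(client kubernetes.Interface, namespace, name string) error {
	eviction := &policyv1beta1.Eviction{
		ObjectMeta: metav1.ObjectMeta{
			Name:      name,
			Namespace: namespace,
		},
	}
	// Evict POSTs to /api/v1/namespaces/{ns}/pods/{name}/eviction.
	return client.CoreV1().Pods(namespace).Evict(eviction)
}
```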
zhengjiajin
d2bd6a7ff1 Remove kubectl create namespace dependencies on kubernetes/pkg/api 2017-10-25 20:26:11 +08:00
wackxu
bfe6474dd0 fix the incorrect format of the TargetPort string 2017-10-25 19:49:36 +08:00
Yecheng Fu
f2af1af82f RBD Plugin: No need to acquire advisory lock any more!
With the central attach/detach controller, we no longer need to lock the
image. But for backward compatibility, we should:

1) When attaching, check whether the image is still in use by nodes running
an old kubelet.
2) When detaching, clean up the old rbd.json file and remove the lock if one
is found.
2017-10-25 18:31:57 +08:00
André Martins
bdbdac21a5 update bazel with pkg/apis/networking
Signed-off-by: André Martins <aanm90@gmail.com>
2017-10-25 12:18:28 +02:00
André Martins
35d976fda8 Enhanced the network policy describer
Signed-off-by: André Martins <aanm90@gmail.com>
2017-10-25 12:18:28 +02:00
Yecheng Fu
3e570ad36d RBD Plugin: Remove deviceMountPath before returning on error in
Attach.MountDevice.
2017-10-25 17:44:19 +08:00
Yecheng Fu
ba0d275f3b RBD Plugin: Implement Attacher/Detacher interfaces.
1) Modify rbdPlugin to implement volume.AttachableVolumePlugin
   interface.
2) Add rbdAttacher/rbdDetacher structs to implement
   volume.Attacher/Detacher interfaces.
3) Add mount.SafeFormatAndMount/mount.Exec fields to rbdPlugin, and
   set them up in rbdPlugin.Init for later use.
   Attacher/Mounter/Unmounter/Detacher reference rbdPlugin to use the mounter
   and exec. This simplifies the code.
4) Add testcase struct to abstract RBD Plugin test case, etc.
5) Add newRBD constructor to unify rbd struct initialization.
2017-10-25 17:43:17 +08:00
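
A simplified, self-contained sketch of the shape the commit above describes: the attacher/detacher structs reference the plugin so they can share its mounter and exec. The interfaces here are pared-down stand-ins for volume.Attacher/Detacher and mount.SafeFormatAndMount/mount.Exec, not the real signatures, and the method bodies are illustrative only.

```go
package rbdsketch

// Pared-down stand-ins for the volume.Attacher / volume.Detacher interfaces;
// the real ones take *volume.Spec, node names, timeouts, and so on.
type Attacher interface {
	MountDevice(devicePath, deviceMountPath, fstype string) error
}

type Detacher interface {
	UnmountDevice(deviceMountPath string) error
}

// Minimal stand-ins for mount.SafeFormatAndMount / mount.Exec.
type Mounter interface {
	FormatAndMount(source, target, fstype string, options []string) error
	Unmount(target string) error
}

type Exec interface {
	Run(cmd string, args ...string) ([]byte, error)
}

// rbdPlugin carries the shared mounter and exec that Init would set up (item 3).
type rbdPlugin struct {
	mounter Mounter
	exec    Exec
}

// rbdAttacher and rbdDetacher reference the plugin so they reuse its mounter
// and exec instead of constructing their own.
type rbdAttacher struct{ plugin *rbdPlugin }
type rbdDetacher struct{ plugin *rbdPlugin }

func (a *rbdAttacher) MountDevice(devicePath, deviceMountPath, fstype string) error {
	return a.plugin.mounter.FormatAndMount(devicePath, deviceMountPath, fstype, nil)
}

func (d *rbdDetacher) UnmountDevice(deviceMountPath string) error {
	return d.plugin.mounter.Unmount(deviceMountPath)
}

// Compile-time checks that the structs satisfy the (simplified) interfaces.
var _ Attacher = &rbdAttacher{}
var _ Detacher = &rbdDetacher{}
```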
Yecheng Fu
a768092f9a RBD Plugin: Prepare to implement Attacher/Detacher interfaces.
1) Fix FakeMounter.IsLikelyNotMountPoint to return ErrNotExist if the
   directory does not exist. The Mounter.IsLikelyNotMountPoint interface
   requires this, and the RBD plugin depends on it.
2017-10-25 17:07:27 +08:00
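
A minimal sketch of the behavior item 1 asks for, assuming a fake mounter that tracks mount points in memory: the stat error for a missing directory is propagated instead of being swallowed, so callers can distinguish "not a mount point" from "path does not exist".

```go
package mountfake

import "os"

// FakeMounter tracks mount points in memory for tests.
type FakeMounter struct {
	MountPoints map[string]bool
}

// IsLikelyNotMountPoint mirrors the real Mounter contract: if the file does
// not exist, return the os.IsNotExist error instead of a bare true/false,
// because callers (such as the RBD plugin) depend on that distinction.
func (f *FakeMounter) IsLikelyNotMountPoint(file string) (bool, error) {
	if _, err := os.Stat(file); err != nil {
		return true, err // err satisfies os.IsNotExist when the dir is absent
	}
	return !f.MountPoints[file], nil
}
```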
andyzhangx
9bcb82df6e change default kind value of azure disk pv
make a const for default azure disk kind
2017-10-25 06:17:55 +00:00
Xing Zhou
86813f161e Added unit test cases for the public methods of pkg/util/taints.go
2017-10-25 13:51:14 +08:00
andyzhangx
db3c5a7881 fix #50150: azure disk mount failure on coreos
add log message
2017-10-25 05:19:57 +00:00
Kubernetes Submit Queue
6ba5f77d5d Merge pull request #53920 from apelisse/diff
Automatic merge from submit-queue (batch tested with PRs 53051, 52489, 53920). If you want to cherry-pick this change to another branch, please follow the instructions [here](https://github.com/kubernetes/community/blob/master/contributors/devel/cherry-picks.md).

Implement `kubectl alpha diff` to diff resources

`kubectl alpha diff` lets you diff your resources against live
resources, or last applied, or even preview what changes are going to be
applied on the cluster.

This is still quite premature, and mostly untested.

**What this PR does / why we need it**: 

**Which issue this PR fixes** *(optional, in `fixes #<issue number>(, fixes #<issue_number>, ...)` format, will close that issue when PR gets merged)*: fixes #

**Special notes for your reviewer**:

**Release note**:
Clearly not ready for Release note.
```release-note
NONE
```

kubernetes/community#287
2017-10-24 21:38:23 -07:00
Kubernetes Submit Queue
88ac8c0227 Merge pull request #52245 from zjj2wry/describe-sa
Automatic merge from submit-queue. If you want to cherry-pick this change to another branch, please follow the instructions [here](https://github.com/kubernetes/community/blob/master/contributors/devel/cherry-picks.md).

fix issue #52244: kubectl describe serviceaccount has a redundant null line;
we should keep the output consistent with the other kubectl describe commands



**What this PR does / why we need it**:
close issue #52244

**Which issue this PR fixes** *(optional, in `fixes #<issue number>(, fixes #<issue_number>, ...)` format, will close that issue when PR gets merged)*: fixes #

**Special notes for your reviewer**:

**Release note**:

```release-note
NONE
```
2017-10-24 20:37:24 -07:00
Jiaying Zhang
e501f01d85 Move podDevices code into a separate file. 2017-10-24 17:48:59 -07:00
Kubernetes Submit Queue
33e4f178f2 Merge pull request #54438 from vmware/canonical_name_fix
Automatic merge from submit-queue. If you want to cherry-pick this change to another branch, please follow the instructions [here](https://github.com/kubernetes/community/blob/master/contributors/devel/cherry-picks.md).

Fixing usage of clustered datastore name to be absolute datastore name

**What this PR does / why we need it**:
This PR adds a step to convert a datastore name given in folder/cluster form to an absolute datastore name.

**Which issue this PR fixes**: fixes vmware#312 

**Special notes for your reviewer**:
Testing. All the clustered datastore tests pass now
```
root@k8s-dev-vm-01:~/shahzeb/k8s/kubernetes# export CLUSTER_DATASTORE=dscl1/sharedVmfs-0
root@k8s-dev-vm-01:~/shahzeb/k8s/kubernetes# export VSPHERE_SPBM_POLICY_DS_CLUSTER=gold_cluster
root@k8s-dev-vm-01:~/shahzeb/k8s/kubernetes# go run hack/e2e.go --check-version-skew=false --v --test  --test_args='--ginkgo.focus=Volume\sProvisioning\sOn\sClustered\sDatastore'
flag provided but not defined: -check-version-skew
Usage of /tmp/go-build641208048/command-line-arguments/_obj/exe/e2e:
  -get
    	go get -u kubetest if old or not installed (default true)
  -old duration
    	Consider kubetest old if it exceeds this (default 24h0m0s)
2017/10/18 17:29:39 e2e.go:55: NOTICE: go run hack/e2e.go is now a shim for test-infra/kubetest
2017/10/18 17:29:39 e2e.go:56:   Usage: go run hack/e2e.go [--get=true] [--old=24h0m0s] -- [KUBETEST_ARGS]
2017/10/18 17:29:39 e2e.go:57:   The separator is required to use --get or --old flags
2017/10/18 17:29:39 e2e.go:58:   The -- flag separator also suppresses this message
2017/10/18 17:29:39 e2e.go:77: Calling kubetest --check-version-skew=false --v --test --test_args=--ginkgo.focus=Volume\sProvisioning\sOn\sClustered\sDatastore...
2017/10/18 17:29:39 util.go:154: Running: ./cluster/kubectl.sh --match-server-version=false version
2017/10/18 17:29:39 util.go:156: Step './cluster/kubectl.sh --match-server-version=false version' finished in 286.302987ms
2017/10/18 17:29:39 util.go:154: Running: ./hack/e2e-internal/e2e-status.sh
Skeleton Provider: prepare-e2e not implemented
Client Version: version.Info{Major:"1", Minor:"6+", GitVersion:"v1.6.0-alpha.0.17675+2bfcf4a7d1d6d4-dirty", GitCommit:"2bfcf4a7d1d6d4f257fe7ee96dcb47484b8e3a5f", GitTreeState:"dirty", BuildDate:"2017-10-18T23:37:58Z", GoVersion:"go1.9.1", Compiler:"gc", Platform:"linux/amd64"}
Server Version: version.Info{Major:"1", Minor:"6+", GitVersion:"v1.6.0-alpha.0.17663+ba93892625a3e8", GitCommit:"ba93892625a3e8a246f3bc660d56a2635e7e7f9f", GitTreeState:"clean", BuildDate:"2017-10-18T20:57:49Z", GoVersion:"go1.9.1", Compiler:"gc", Platform:"linux/amd64"}
2017/10/18 17:29:40 util.go:156: Step './hack/e2e-internal/e2e-status.sh' finished in 296.318449ms
2017/10/18 17:29:40 util.go:154: Running: ./hack/ginkgo-e2e.sh --ginkgo.focus=Volume\sProvisioning\sOn\sClustered\sDatastore
Conformance test: not doing test setup.
Oct 18 17:29:41.634: INFO: Overriding default scale value of zero to 1
Oct 18 17:29:41.634: INFO: Overriding default milliseconds value of zero to 5000
I1018 17:29:41.829967   32645 e2e.go:383] Starting e2e run "98fbc41f-b464-11e7-8e40-0050569c27f6" on Ginkgo node 1
Running Suite: Kubernetes e2e suite
===================================
Random Seed: 1508372981 - Will randomize all specs
Will run 3 of 714 specs

Oct 18 17:29:41.889: INFO: >>> kubeConfig: /tmp/kube171.json
Oct 18 17:29:41.894: INFO: Waiting up to 4h0m0s for all (but 0) nodes to be schedulable
Oct 18 17:29:41.928: INFO: Waiting up to 10m0s for all pods (need at least 0) in namespace 'kube-system' to be running and ready
Oct 18 17:29:42.057: INFO: 13 / 13 pods in namespace 'kube-system' are running and ready (0 seconds elapsed)
Oct 18 17:29:42.057: INFO: expected 4 pod replicas in namespace 'kube-system', 4 are Running and Ready.
Oct 18 17:29:42.063: INFO: Waiting for pods to enter Success, but no pods in "kube-system" match label map[name:e2e-image-puller]
Oct 18 17:29:42.063: INFO: Dumping network health container logs from all nodes...
Oct 18 17:29:42.076: INFO: Client version: v1.6.0-alpha.0.17675+2bfcf4a7d1d6d4-dirty
Oct 18 17:29:42.079: INFO: Server version: v1.6.0-alpha.0.17663+ba93892625a3e8
SSSS
------------------------------
[sig-storage] Volume Provisioning On Clustered Datastore [Feature:vsphere]
  verify static provisioning on clustered datastore
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/vsphere_volume_cluster_ds.go:70
[BeforeEach] [sig-storage] Volume Provisioning On Clustered Datastore [Feature:vsphere]
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:133
STEP: Creating a kubernetes client
Oct 18 17:29:42.079: INFO: >>> kubeConfig: /tmp/kube171.json
STEP: Building a namespace api object
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-storage] Volume Provisioning On Clustered Datastore [Feature:vsphere]
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/vsphere_volume_cluster_ds.go:50
[It] verify static provisioning on clustered datastore
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/vsphere_volume_cluster_ds.go:70
STEP: creating a test vsphere volume
W1018 17:29:43.087373   32645 datacenter.go:156] QueryVirtualDiskUuid failed for diskPath: "[sharedVmfs-0] kubevols/kube-dummyDisk.vmdk". err: ServerFaultCode: File [sharedVmfs-0] kubevols/kube-dummyDisk.vmdk was not found
STEP: Creating pod
STEP: Waiting for pod to be ready
STEP: Verifying volume is attached
STEP: Deleting pod
Oct 18 17:30:07.268: INFO: Deleting pod "vsphere-e2e-vtzbg" in namespace "e2e-tests-volume-provision-bchh4"
Oct 18 17:30:07.301: INFO: Wait up to 5m0s for pod "vsphere-e2e-vtzbg" to be fully deleted
STEP: Waiting for volumes to be detached from the node
Oct 18 17:30:49.429: INFO: Volume "[dscl1/sharedVmfs-0] kubevols/e2e-vmdk-e2e-tests-volume-provision-bchh4.vmdk" appears to have successfully detached from "kubernetes-node4".
STEP: Deleting the vsphere volume
[AfterEach] [sig-storage] Volume Provisioning On Clustered Datastore [Feature:vsphere]
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:134
Oct 18 17:30:49.774: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-volume-provision-bchh4" for this suite.
Oct 18 17:30:58.203: INFO: namespace: e2e-tests-volume-provision-bchh4, resource: bindings, ignored listing per whitelist
Oct 18 17:30:58.225: INFO: namespace e2e-tests-volume-provision-bchh4 deletion completed in 8.444766463s

• [SLOW TEST:76.146 seconds]
[sig-storage] Volume Provisioning On Clustered Datastore [Feature:vsphere]
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework.go:22
  verify static provisioning on clustered datastore
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/vsphere_volume_cluster_ds.go:70
------------------------------
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-storage] Volume Provisioning On Clustered Datastore [Feature:vsphere]
  verify dynamic provision with spbm policy on clustered datastore
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/vsphere_volume_cluster_ds.go:131
[BeforeEach] [sig-storage] Volume Provisioning On Clustered Datastore [Feature:vsphere]
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:133
STEP: Creating a kubernetes client
Oct 18 17:30:58.231: INFO: >>> kubeConfig: /tmp/kube171.json
STEP: Building a namespace api object
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-storage] Volume Provisioning On Clustered Datastore [Feature:vsphere]
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/vsphere_volume_cluster_ds.go:50
[It] verify dynamic provision with spbm policy on clustered datastore
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/vsphere_volume_cluster_ds.go:131
STEP: Creating Storage Class With storage policy params
STEP: Creating PVC using the Storage Class
STEP: Waiting for claim to be in bound phase
Oct 18 17:30:58.402: INFO: Waiting up to 5m0s for PersistentVolumeClaim pvc-6dwrm to have phase Bound
Oct 18 17:30:58.410: INFO: PersistentVolumeClaim pvc-6dwrm found but phase is Pending instead of Bound.
Oct 18 17:31:00.416: INFO: PersistentVolumeClaim pvc-6dwrm found but phase is Pending instead of Bound.
Oct 18 17:31:02.430: INFO: PersistentVolumeClaim pvc-6dwrm found and phase=Bound (4.027818895s)
STEP: Creating pod to attach PV to the node
STEP: Verify the volume is accessible and available in the pod
Oct 18 17:31:26.666: INFO: Running '/root/shahzeb/k8s/kubernetes/_output/dockerized/bin/linux/amd64/kubectl --server=https://10.160.141.5 --kubeconfig=/tmp/kube171.json exec pvc-tester-hgcj6 --namespace=e2e-tests-volume-provision-sfbcf -- /bin/touch /mnt/volume1/emptyFile.txt'
Oct 18 17:31:27.171: INFO: stderr: ""
Oct 18 17:31:27.171: INFO: stdout: ""
STEP: Deleting pod
Oct 18 17:31:27.171: INFO: Deleting pod "pvc-tester-hgcj6" in namespace "e2e-tests-volume-provision-sfbcf"
Oct 18 17:31:27.202: INFO: Wait up to 5m0s for pod "pvc-tester-hgcj6" to be fully deleted
STEP: Waiting for volumes to be detached from the node
Oct 18 17:32:11.324: INFO: Volume "[sharedVmfs-0] kubevols/kubernetes-dynamic-pvc-c6d8e174-b464-11e7-8b86-005056b7d4e9.vmdk" appears to have successfully detached from "kubernetes-node2".
Oct 18 17:32:11.324: INFO: Deleting PersistentVolumeClaim "pvc-6dwrm"
[AfterEach] [sig-storage] Volume Provisioning On Clustered Datastore [Feature:vsphere]
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:134
Oct 18 17:32:11.393: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-volume-provision-sfbcf" for this suite.
Oct 18 17:32:19.867: INFO: namespace: e2e-tests-volume-provision-sfbcf, resource: bindings, ignored listing per whitelist
Oct 18 17:32:19.876: INFO: namespace e2e-tests-volume-provision-sfbcf deletion completed in 8.460287539s

• [SLOW TEST:81.645 seconds]
[sig-storage] Volume Provisioning On Clustered Datastore [Feature:vsphere]
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework.go:22
  verify dynamic provision with spbm policy on clustered datastore
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/vsphere_volume_cluster_ds.go:131
------------------------------
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-storage] Volume Provisioning On Clustered Datastore [Feature:vsphere]
  verify dynamic provision with default parameter on clustered datastore
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/vsphere_volume_cluster_ds.go:121
[BeforeEach] [sig-storage] Volume Provisioning On Clustered Datastore [Feature:vsphere]
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:133
STEP: Creating a kubernetes client
Oct 18 17:32:19.878: INFO: >>> kubeConfig: /tmp/kube171.json
STEP: Building a namespace api object
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-storage] Volume Provisioning On Clustered Datastore [Feature:vsphere]
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/vsphere_volume_cluster_ds.go:50
[It] verify dynamic provision with default parameter on clustered datastore
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/vsphere_volume_cluster_ds.go:121
STEP: Creating Storage Class With storage policy params
STEP: Creating PVC using the Storage Class
STEP: Waiting for claim to be in bound phase
Oct 18 17:32:20.024: INFO: Waiting up to 5m0s for PersistentVolumeClaim pvc-zrcmm to have phase Bound
Oct 18 17:32:20.034: INFO: PersistentVolumeClaim pvc-zrcmm found but phase is Pending instead of Bound.
Oct 18 17:32:22.040: INFO: PersistentVolumeClaim pvc-zrcmm found and phase=Bound (2.016093024s)
STEP: Creating pod to attach PV to the node
STEP: Verify the volume is accessible and available in the pod
Oct 18 17:32:30.266: INFO: Running '/root/shahzeb/k8s/kubernetes/_output/dockerized/bin/linux/amd64/kubectl --server=https://10.160.141.5 --kubeconfig=/tmp/kube171.json exec pvc-tester-2hns4 --namespace=e2e-tests-volume-provision-llvbq -- /bin/touch /mnt/volume1/emptyFile.txt'
Oct 18 17:32:30.774: INFO: stderr: ""
Oct 18 17:32:30.774: INFO: stdout: ""
STEP: Deleting pod
Oct 18 17:32:30.774: INFO: Deleting pod "pvc-tester-2hns4" in namespace "e2e-tests-volume-provision-llvbq"
Oct 18 17:32:30.806: INFO: Wait up to 5m0s for pod "pvc-tester-2hns4" to be fully deleted
STEP: Waiting for volumes to be detached from the node
Oct 18 17:33:20.972: INFO: Volume "[dscl1/sharedVmfs-0] kubevols/kubernetes-dynamic-pvc-f77fba36-b464-11e7-8b86-005056b7d4e9.vmdk" appears to have successfully detached from "kubernetes-node4".
Oct 18 17:33:20.972: INFO: Deleting PersistentVolumeClaim "pvc-zrcmm"
[AfterEach] [sig-storage] Volume Provisioning On Clustered Datastore [Feature:vsphere]
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:134
Oct 18 17:33:21.100: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-volume-provision-llvbq" for this suite.
Oct 18 17:33:29.490: INFO: namespace: e2e-tests-volume-provision-llvbq, resource: bindings, ignored listing per whitelist
Oct 18 17:33:29.554: INFO: namespace e2e-tests-volume-provision-llvbq deletion completed in 8.444585266s

• [SLOW TEST:69.676 seconds]
[sig-storage] Volume Provisioning On Clustered Datastore [Feature:vsphere]
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework.go:22
  verify dynamic provision with default parameter on clustered datastore
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/vsphere_volume_cluster_ds.go:121
------------------------------
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSOct 18 17:33:29.559: INFO: Running AfterSuite actions on all node
Oct 18 17:33:29.559: INFO: Running AfterSuite actions on node 1

Ran 3 of 714 Specs in 227.670 seconds
SUCCESS! -- 3 Passed | 0 Failed | 0 Pending | 711 Skipped PASS

Ginkgo ran 1 suite in 3m48.395491704s
Test Suite Passed
```


**Release note**:

```release-note
```
2017-10-24 17:33:38 -07:00
Kubernetes Submit Queue
ee9983355e Merge pull request #54500 from MrHohn/service-controller-more-logging
Automatic merge from submit-queue (batch tested with PRs 54366, 54500). If you want to cherry-pick this change to another branch, please follow the instructions [here](https://github.com/kubernetes/community/blob/master/contributors/devel/cherry-picks.md).

Service controller / gce loadbalancer logging cleanup

**What this PR does / why we need it**:
From https://github.com/kubernetes/kubernetes/issues/52495: while looking into a potential scalability issue the service controller may have, I found it really hard to debug because the log sometimes doesn't reference which LB it is for, or sometimes it doesn't log at all. It gets even harder because in the correctness CIs we reduce the verbosity level to v1:
67bc30718f/jobs/env/ci-kubernetes-e2e-gce-scale-correctness.env (L16).

**Which issue this PR fixes** *(optional, in `fixes #<issue number>(, fixes #<issue_number>, ...)` format, will close that issue when PR gets merged)*: fixes #NONE

**Special notes for your reviewer**:
/assign @nicksardo @bowei 
cc @shyamjvs 

**Release note**:

```release-note
NONE
```
2017-10-24 16:54:02 -07:00
Kubernetes Submit Queue
8f79857b42 Merge pull request #54094 from juanvallejo/jvallejo/exit-plugin-cmd-w-correct-exit-code
Automatic merge from submit-queue (batch tested with PRs 54107, 54184, 54377, 54094, 54111). If you want to cherry-pick this change to another branch, please follow the instructions [here](https://github.com/kubernetes/community/blob/master/contributors/devel/cherry-picks.md).

exit with correct exit code on plugin failure

**Release note**:
```release-note
NONE
```

Correctly exits after a plugin command execution failure with the correct exit code.
Related downstream bugzilla: https://bugzilla.redhat.com/show_bug.cgi?id=1501756

**Before**
```
$ kubectl plugin will-fail-plugin
ls: cannot access 'does-not-exist': No such file or directory
error: exit status 2

$ echo $?
1
```

**After**
```
$ kubectl plugin will-fail-plugin
ls: cannot access 'does-not-exist': No such file or directory
error: exit status 2

$ echo $?
2
```

cc @fabianofranz
2017-10-24 15:59:13 -07:00
Kubernetes Submit Queue
4ca155cbc1 Merge pull request #54184 from MrHohn/fix-service-controller-retry
Automatic merge from submit-queue (batch tested with PRs 54107, 54184, 54377, 54094, 54111). If you want to cherry-pick this change to another branch, please follow the instructions [here](https://github.com/kubernetes/community/blob/master/contributors/devel/cherry-picks.md).

Fix retry logic in service controller

**What this PR does / why we need it**: Make the service controller not retry when a doNotRetry service update fails.

**Which issue this PR fixes** *(optional, in `fixes #<issue number>(, fixes #<issue_number>, ...)` format, will close that issue when PR gets merged)*: fixes #54183

**Special notes for your reviewer**:
/assign @nicksardo @bowei 

**Release note**:

```release-note
NONE
```
2017-10-24 15:59:06 -07:00
Davanum Srinivas
6ae7201d74 Deprecate using cloud provider to set host address feature
The long-term plan is to remove all uses of the cloud provider from the kube
API server. As part of that, we need to remove the dependency on the cloud
provider for figuring out the host address of the node running the kube API
server. In this review, we log a warning that this feature, which is usually
used for example with swagger generation, will go away in the future.
2017-10-24 17:39:33 -04:00
Jiaying Zhang
ff4e8d429e Device plugin code refactoring to cope with file move.
While moving device_plugin_handler_test.go from pkg/kubelet/cm/ to
pkg/kubelet/cm/deviceplugin/, we can no longer use cm in its tests
because that would create a dependency cycle. To solve this problem,
I moved the main cm GetResources functionality, as well as part of the
current device plugin handler Allocate functionality, into a new device
plugin handler function, GetDeviceRunContainerOptions(). This
refactoring is also needed by another PR, 51895, which moves device
allocation into the admission phase. Now the device plugin handler
Allocate() first checks whether there is cached device runtime state and
only issues the Allocate grpc call if no cached state is available.
The new GetDeviceRunContainerOptions() function simply returns the device
runtime config from the cached state. To support this change, the
podDevices struct and checkpoint data structure were extended with device
runtime state.
2017-10-24 14:38:15 -07:00
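
A rough sketch of the split described above, with simplified types standing in for the real podDevices/checkpoint structures: Allocate() consults cached device runtime state before issuing the gRPC call, and GetDeviceRunContainerOptions() only reads from the cache.

```go
package devicepluginsketch

// DeviceRunContainerOptions is a simplified stand-in for the runtime config
// (env vars, devices, mounts) handed back to the container manager.
type DeviceRunContainerOptions struct {
	Envs    map[string]string
	Devices []string
}

type handler struct {
	// cache maps a pod/container/resource key to previously allocated runtime
	// state, as recorded in the podDevices checkpoint.
	cache map[string]*DeviceRunContainerOptions
	// allocateRemote issues the Allocate gRPC call to the device plugin.
	allocateRemote func(resource string, devices []string) (*DeviceRunContainerOptions, error)
}

// Allocate only calls out to the plugin when no cached state exists.
func (h *handler) Allocate(key, resource string, devices []string) error {
	if _, ok := h.cache[key]; ok {
		return nil // cached device runtime state, skip the gRPC call
	}
	opts, err := h.allocateRemote(resource, devices)
	if err != nil {
		return err
	}
	h.cache[key] = opts
	return nil
}

// GetDeviceRunContainerOptions simply returns the cached runtime config.
func (h *handler) GetDeviceRunContainerOptions(key string) *DeviceRunContainerOptions {
	return h.cache[key]
}
```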
Jiaying Zhang
796f488789 Move device plugin related files under pkg/kubelet/cm/deviceplugin/. 2017-10-24 14:17:20 -07:00
Kubernetes Submit Queue
039f1565e6 Merge pull request #54427 from bowei/owners
Automatic merge from submit-queue. If you want to cherry-pick this change to another branch, please follow the instructions [here](https://github.com/kubernetes/community/blob/master/contributors/devel/cherry-picks.md).

Missed approvers for controller/service

```release-note
NONE
```
2017-10-24 14:16:48 -07:00
Kubernetes Submit Queue
2ad7d6f711 Merge pull request #54083 from juanvallejo/jvallejo/update-resource-builder-cmd-drain
Automatic merge from submit-queue. If you want to cherry-pick this change to another branch, please follow the instructions [here](https://github.com/kubernetes/community/blob/master/contributors/devel/cherry-picks.md).

update resource selector - kubectl drain

Followup to https://github.com/kubernetes/kubernetes/pull/52917

**Release note**:
```release-note
NONE
```

Updates resource builder in cmd/drain.go to parse resource args similar to other commands.

cc @liggitt
2017-10-24 14:16:39 -07:00
Kubernetes Submit Queue
39218a1b44 Merge pull request #54270 from WanLinghao/set_command_fix
Automatic merge from submit-queue (batch tested with PRs 54270, 54479). If you want to cherry-pick this change to another branch, please follow the instructions [here](https://github.com/kubernetes/community/blob/master/contributors/devel/cherry-picks.md).

fix kubectl set command

**What this PR does / why we need it**:

1. Fix some errors in the 'kubectl set' series of commands.

2. Remove an unused function from the kubectl set env command.

**Which issue this PR fixes** *(optional, in `fixes #<issue number>(, fixes #<issue_number>, ...)` format, will close that issue when PR gets merged)*: fixes #

**Special notes for your reviewer**:

**Release note**:

```
NONE
```
2017-10-24 11:52:04 -07:00
juanvallejo
6144799f75 remove mutual exclusivity check - local, unstructured builder attrs 2017-10-24 14:16:47 -04:00
Zihong Zheng
d6e8920a00 gce_loadbalancer_external: Critical path logging cleanup 2017-10-24 10:52:14 -07:00
Zihong Zheng
e3eac372f0 service_controller: Include service key in error messages 2017-10-24 10:51:15 -07:00
Maru Newby
efed772d42 Update openapi bazel build to support vendored build
This will enable vendoring projects like federation to generate
openapi code for k8s.io/kubernetes.
2017-10-24 08:59:19 -07:00
Kubernetes Submit Queue
5f030f4568 Merge pull request #54454 from cofyc/rbd_status
Automatic merge from submit-queue (batch tested with PRs 54229, 54380, 54302, 54454). If you want to cherry-pick this change to another branch, please follow the instructions [here](https://github.com/kubernetes/community/blob/master/contributors/devel/cherry-picks.md).

RBD Plugin: rbdStatus only check output of successful `rbd status` run

**What this PR does / why we need it**:

The current RBDUtil.rbdStatus implementation never returns an error in any case, even if the command is not found or `rbd status` never succeeds. This PR changes it to only check the output of a successful `rbd status` run, and to return an error in the other cases.

Network problems or an unresponsive ceph cluster can cause the `rbd status` command to fail. In those cases we cannot assume there are no watchers on the given image. It's better to return an error and let the caller decide what to do.

**Which issue this PR fixes** *(optional, in `fixes #<issue number>(, fixes #<issue_number>, ...)` format, will close that issue when PR gets merged)*: fixes #


**Special notes for your reviewer**:

**Release note**:

```release-note
NONE
```
2017-10-24 08:35:14 -07:00
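
A sketch of the behavior the PR describes, using os/exec directly: a failed `rbd status` run is surfaced as an error, and only the output of a successful run is inspected for watchers. The exact command arguments and the output strings checked here are assumptions about typical `rbd status` output, not taken from the PR.

```go
package rbdutilsketch

import (
	"fmt"
	"os/exec"
	"strings"
)

// rbdStatus returns (true, nil) if the image appears to have watchers,
// (false, nil) if it clearly has none, and a non-nil error when `rbd status`
// itself fails, so the caller can decide what to do instead of assuming the
// image is unused.
func rbdStatus(pool, image string) (bool, error) {
	out, err := exec.Command("rbd", "status", fmt.Sprintf("%s/%s", pool, image)).CombinedOutput()
	if err != nil {
		// Network problems or an unresponsive ceph cluster land here; do not
		// guess about watchers.
		return false, fmt.Errorf("rbd status failed: %v, output: %s", err, out)
	}
	// Only the output of a successful run is trusted.
	if strings.Contains(string(out), "Watchers: none") {
		return false, nil
	}
	return strings.Contains(string(out), "watcher="), nil
}
```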
Kubernetes Submit Queue
16cdda003c Merge pull request #54302 from sbezverk/refactor_rbd_volume
Automatic merge from submit-queue (batch tested with PRs 54229, 54380, 54302, 54454). If you want to cherry-pick this change to another branch, please follow the instructions [here](https://github.com/kubernetes/community/blob/master/contributors/devel/cherry-picks.md).

Refactor RBD volume

Refactor RBD Volume Persistent Volume Spec so RBD PV's SecretRef
allows referencing a secret from a persistent volume in any namespace.
This allows locating credentials for persistent volumes in namespaces
other than the one containing the PVC.
Closes #54432
```release-note
RBD Persistent Volume Sources can now reference User's Secret in namespaces other than the namespace of the bound Persistent Volume Claim
```
2017-10-24 08:35:11 -07:00
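
A hedged sketch of what the refactor enables, assuming the post-refactor API surface (a PV-level `v1.RBDPersistentVolumeSource` whose `SecretRef` is a `v1.SecretReference` carrying an optional Namespace): a PV whose ceph credentials live in a namespace other than the PVC's. Capacity, access modes, and monitors are placeholder values.

```go
package main

import (
	"fmt"

	v1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)

func main() {
	pv := &v1.PersistentVolume{
		ObjectMeta: metav1.ObjectMeta{Name: "rbd-pv"},
		Spec: v1.PersistentVolumeSpec{
			PersistentVolumeSource: v1.PersistentVolumeSource{
				RBD: &v1.RBDPersistentVolumeSource{
					CephMonitors: []string{"10.0.0.1:6789"}, // placeholder monitor
					RBDImage:     "kube-image",
					// The secret can now live in a different namespace than
					// the PVC that ends up bound to this PV.
					SecretRef: &v1.SecretReference{
						Name:      "ceph-secret",
						Namespace: "ceph-system",
					},
				},
			},
		},
	}
	fmt.Println(pv.Spec.RBD.SecretRef.Namespace)
}
```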
Kubernetes Submit Queue
9d6739c5a8 Merge pull request #54380 from zhangxiaoyu-zidif/delete-err-check-printer
Automatic merge from submit-queue (batch tested with PRs 54229, 54380, 54302, 54454). If you want to cherry-pick this change to another branch, please follow the instructions [here](https://github.com/kubernetes/community/blob/master/contributors/devel/cherry-picks.md).

Delete redundant err check

**What this PR does / why we need it**:
This process is redundant.

**Which issue this PR fixes** *(optional, in `fixes #<issue number>(, fixes #<issue_number>, ...)` format, will close that issue when PR gets merged)*: fixes #

**Special notes for your reviewer**:

**Release note**:

```release-note
NONE
```
2017-10-24 08:35:07 -07:00
yanxuean
8da0d836f7 Fix a typo in dockershim.cm.containerManager.doWork
Signed-off-by: yanxuean <yan.xuean@zte.com.cn>
2017-10-24 22:51:47 +08:00
Kubernetes Submit Queue
8d3e39d1af Merge pull request #54401 from k82cn/node_ctrl_reviewer
Automatic merge from submit-queue. If you want to cherry-pick this change to another branch, please follow the instructions [here](https://github.com/kubernetes/community/blob/master/contributors/devel/cherry-picks.md).

Volunteer to be reviewer of NodeController.

**Which issue this PR fixes** *(optional, in `fixes #<issue number>(, fixes #<issue_number>, ...)` format, will close that issue when PR gets merged)*: fixes # N/A

**Release note**:

```release-note
None
```
2017-10-24 07:39:19 -07:00
Kubernetes Submit Queue
9807360fe3 Merge pull request #53956 from m1093782566/proxy-metrics
Automatic merge from submit-queue (batch tested with PRs 52479, 53956). If you want to cherry-pick this change to another branch, please follow the instructions [here](https://github.com/kubernetes/community/blob/master/contributors/devel/cherry-picks.md).

Register sync proxy rules latency metrics in app level

**What this PR does / why we need it**:

IMO, we should register proxy metrics at the app level instead of in a specific proxy mode, e.g. iptables, ipvs, winkernel...

By registering the sync proxy rules latency metrics at the app level, we can reuse code among the different proxiers.

**Which issue this PR fixes**: 

closes #53957

**Special notes for your reviewer**:

@wojtek-t What do you think about it?

**Release note**:

```release-note
NONE
```
2017-10-24 00:48:26 -07:00
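
A sketch of the app-level registration idea, assuming a plain Prometheus summary: the metric is defined and registered once in a shared metrics package, and each proxier (iptables, ipvs, winkernel) only observes it. The metric and helper names are illustrative.

```go
package proxymetrics

import (
	"sync"
	"time"

	"github.com/prometheus/client_golang/prometheus"
)

// SyncProxyRulesLatency is shared by every proxy mode, so it is defined here
// rather than inside a specific proxier.
var SyncProxyRulesLatency = prometheus.NewSummary(
	prometheus.SummaryOpts{
		Subsystem: "kubeproxy",
		Name:      "sync_proxy_rules_latency_microseconds",
		Help:      "SyncProxyRules latency",
	},
)

var registerOnce sync.Once

// RegisterMetrics is called once at the kube-proxy app (server) level.
func RegisterMetrics() {
	registerOnce.Do(func() {
		prometheus.MustRegister(SyncProxyRulesLatency)
	})
}

// SinceInMicroseconds is a small helper each proxier can use when observing.
func SinceInMicroseconds(start time.Time) float64 {
	return float64(time.Since(start).Nanoseconds() / time.Microsecond.Nanoseconds())
}
```

A proxier would then end its sync loop with something like `proxymetrics.SyncProxyRulesLatency.Observe(proxymetrics.SinceInMicroseconds(start))`, without defining or registering anything itself.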
yanxuean
988694ff62 Fix the error log message in buildCNIRuntimeConf
Signed-off-by: yanxuean <yan.xuean@zte.com.cn>
2017-10-24 15:30:13 +08:00
yanxuean
dc0f3ce05c remove redundant code for cni
Signed-off-by: yanxuean <yan.xuean@zte.com.cn>
2017-10-24 15:21:55 +08:00
m1093782566
876c73024c migrate ip cmd to netlink 2017-10-24 13:26:07 +08:00
Yecheng Fu
bcacfff798 RBD Plugin: rbdStatus only check output of successful rbd status run 2017-10-24 13:11:57 +08:00
m1093782566
fa94105866 implement dummy device operation by netlink 2017-10-24 11:41:36 +08:00
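A sketch of what the two netlink commits above point at, using the vishvananda/netlink library; the device name kube-ipvs0 and the address are illustrative. The idea is to ensure a dummy link exists and bind a service IP to it via netlink calls instead of shelling out to the ip command.

```go
package main

import (
	"log"

	"github.com/vishvananda/netlink"
)

func main() {
	const devName = "kube-ipvs0" // illustrative dummy device name

	// Ensure the dummy link exists.
	link, err := netlink.LinkByName(devName)
	if err != nil {
		dummy := &netlink.Dummy{LinkAttrs: netlink.LinkAttrs{Name: devName}}
		if err := netlink.LinkAdd(dummy); err != nil {
			log.Fatalf("failed to create dummy device %s: %v", devName, err)
		}
		link = dummy
	}

	// Bind a service IP to it via netlink rather than `ip addr add`.
	addr, err := netlink.ParseAddr("10.96.0.10/32")
	if err != nil {
		log.Fatal(err)
	}
	if err := netlink.AddrAdd(link, addr); err != nil {
		log.Fatalf("failed to add address: %v", err)
	}
}
```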
Antoine Pelisse
ed38f43495 apply/strategy: Improve test performance
Test running time goes from 150 seconds to 7 seconds with that one
simple trick. Do not re-parse openapi schema for every test, but keep
one version for the whole test (specifically since it's read-only anyway).
2017-10-23 20:29:39 -07:00
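
A sketch of the "parse once, reuse everywhere" trick with standard testing machinery, assuming a hypothetical loadSchema helper: the expensive openapi parse happens a single time and every test reads the shared result, which is safe because the schema is read-only.

```go
package applysketch_test

import (
	"sync"
	"testing"
)

// Schema is a stand-in for the parsed openapi model.
type Schema struct{}

var (
	schemaOnce   sync.Once
	sharedSchema *Schema
)

// loadSchema is a hypothetical helper that does the expensive openapi parse.
func loadSchema() *Schema {
	// ...parse the swagger/openapi document here...
	return &Schema{}
}

// getSchema parses the schema once and hands every test the same read-only copy.
func getSchema() *Schema {
	schemaOnce.Do(func() { sharedSchema = loadSchema() })
	return sharedSchema
}

func TestStrategyUsesSharedSchema(t *testing.T) {
	if getSchema() == nil {
		t.Fatal("expected a parsed schema")
	}
}
```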
Kubernetes Submit Queue
e02d4a9855 Merge pull request #52897 from huzhengchuan/fix/incorrect_links_api
Automatic merge from submit-queue (batch tested with PRs 52556, 52897, 54342). If you want to cherry-pick this change to another branch, please follow the instructions [here](https://github.com/kubernetes/community/blob/master/contributors/devel/cherry-picks.md).

Fix broken links in api after moving proposals to subdirs

**What this PR does / why we need it**:
fix incorrect links in api after kubernetes/community#1010

**Which issue this PR fixes** *(optional, in `fixes #<issue number>(, fixes #<issue_number>, ...)` format, will close that issue when PR gets merged)*: fixes  kubernetes/community#918

**Special notes for your reviewer**:
CC @bgrant0607
**Release note**:

```
NONE
```
2017-10-23 19:34:02 -07:00
m1093782566
9dce640213 fix review comments 2017-10-24 10:30:38 +08:00
Kubernetes Submit Queue
78c7e7f79f Merge pull request #54441 from apelisse/fix-apply-govet
Automatic merge from submit-queue (batch tested with PRs 53479, 54373, 54441, 54443). If you want to cherry-pick this change to another branch, please follow the instructions [here](https://github.com/kubernetes/community/blob/master/contributors/devel/cherry-picks.md).

Fix govet in pkg/kubectl/apply

**What this PR does / why we need it**: Fix govet warning by explicitly using field keys.

**Which issue this PR fixes** *(optional, in `fixes #<issue number>(, fixes #<issue_number>, ...)` format, will close that issue when PR gets merged)*: fixes #

**Special notes for your reviewer**:

**Release note**:
```release-note
NONE
```
2017-10-23 18:39:08 -07:00
Kubernetes Submit Queue
f7dbe30f2b Merge pull request #54373 from dims/better-error-check-for-cluster-info
Automatic merge from submit-queue (batch tested with PRs 53479, 54373, 54441, 54443). If you want to cherry-pick this change to another branch, please follow the instructions [here](https://github.com/kubernetes/community/blob/master/contributors/devel/cherry-picks.md).

Better error check for kubectl cluster-info

**What this PR does / why we need it**:

In RunClusterInfo, we try to connect to the API service and get some
information, but we just ignore the errors from Do().Visit(). We should
return the error returned by Do().Visit() so that "kubectl cluster-info"
fails correctly with an error code.

**Which issue this PR fixes** *(optional, in `fixes #<issue number>(, fixes #<issue_number>, ...)` format, will close that issue when PR gets merged)*: fixes #
Fixes #54349

**Special notes for your reviewer**:

**Release note**:

```release-note
NONE
```
2017-10-23 18:39:05 -07:00
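
A sketch of the shape of the fix, with a pared-down stand-in for the resource builder's Result type (not the real API): the error returned by Visit is propagated instead of being dropped, so the command can exit non-zero.

```go
package clusterinfosketch

import "fmt"

// Result is a pared-down stand-in for the resource builder's Result type.
type Result struct {
	services []string
	err      error
}

// Visit calls fn for each matched object and reports the first error.
func (r *Result) Visit(fn func(name string) error) error {
	if r.err != nil {
		return r.err
	}
	for _, svc := range r.services {
		if err := fn(svc); err != nil {
			return err
		}
	}
	return nil
}

// RunClusterInfo previously ignored the error from Visit; returning it lets
// `kubectl cluster-info` fail with a proper exit code.
func RunClusterInfo(r *Result) error {
	return r.Visit(func(name string) error {
		fmt.Printf("%s is running\n", name)
		return nil
	})
}
```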
Klaus Ma
998e544d31 Removed compatibility code for kubelet 1.2. 2017-10-24 09:05:50 +08:00
chentao1596
9a70273edc Adding unit tests for the methods of the file util 2017-10-24 08:26:58 +08:00
Kubernetes Submit Queue
424c4b90d4 Merge pull request #54402 from k82cn/reviewer_of_daemonset
Automatic merge from submit-queue. If you want to cherry-pick this change to another branch, please follow the instructions [here](https://github.com/kubernetes/community/blob/master/contributors/devel/cherry-picks.md).

Volunteer to be reviewer of DaemonSet

**Which issue this PR fixes** *(optional, in `fixes #<issue number>(, fixes #<issue_number>, ...)` format, will close that issue when PR gets merged)*: fixes # N/A


**Release note**:

```release-note
None
```
2017-10-23 17:16:09 -07:00
Kubernetes Submit Queue
6949b518d6 Merge pull request #54245 from deads2k/cli-03-postraw
Automatic merge from submit-queue. If you want to cherry-pick this change to another branch, please follow the instructions [here](https://github.com/kubernetes/community/blob/master/contributors/devel/cherry-picks.md).

add kubectl create --raw -f

Adds `--raw` to `kubectl create` to match `kubectl get --raw`.  It re-uses the transport, reads the input stream (stdin or a single file for now) and posts directly to the specified endpoint.  This lets you direct data straight at a subresource, for instance.  I'd like to see this extended to `kubectl replace` too, so that we have full access to subresources via scripting without having to reproduce the transports.

@kubernetes/sig-cli-pr-reviews 

```release-note
add `--raw` to `kubectl create` to POST using the normal transport
```
2017-10-23 17:16:01 -07:00
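
A hedged sketch of the idea behind `--raw`, using client-go's REST client directly (method names per the client-go of that era, without a context argument): read the request body from a file or stdin and POST it verbatim to the given API path, which is how a subresource can be targeted without extra plumbing. The example path in the comment is a placeholder.

```go
package createrawsketch

import (
	"fmt"
	"io"
	"io/ioutil"

	"k8s.io/client-go/rest"
)

// createRaw reads the request body (stdin or a single file) and POSTs it
// verbatim to rawPath, e.g. a subresource like
// "/api/v1/namespaces/default/pods/mypod/eviction", reusing the configured transport.
func createRaw(client rest.Interface, rawPath string, in io.Reader) error {
	data, err := ioutil.ReadAll(in)
	if err != nil {
		return err
	}
	out, err := client.Post().
		RequestURI(rawPath).
		Body(data).
		DoRaw() // older client-go; newer releases take a context here
	if err != nil {
		return err
	}
	fmt.Println(string(out))
	return nil
}
```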