Commit Graph

21349 Commits

Author SHA1 Message Date
Yu-Ju Hong
fccf34ccb6 Remove various references of dockertools
Also update the bazel files.
2017-05-11 10:01:41 -07:00
Yu-Ju Hong
4b72d229f7 Migrate unit tests for image pulling credentials and error handling
Also remove the dockertools package completely.
2017-05-11 10:01:41 -07:00
Crazykev
ebb5c3d13d return success if ImageNotFound in RemoveImage()
Signed-off-by: Crazykev <crazykev@zju.edu.cn>
2017-05-11 23:00:34 +08:00
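
A minimal Go sketch of the behavior in the commit above (treat a missing image as a successful removal, making RemoveImage idempotent); the error value and the isImageNotFoundError helper are assumptions for illustration, not the runtime client's real API:

```go
package main

import (
	"errors"
	"fmt"
	"strings"
)

// errImageNotFound stands in for the runtime's "image not found" error;
// a real implementation would inspect the client library's typed error.
var errImageNotFound = errors.New("image not found")

func isImageNotFoundError(err error) bool {
	// Hypothetical check for illustration only.
	return err != nil && strings.Contains(err.Error(), "image not found")
}

// RemoveImage deletes an image and treats "already gone" as success,
// which makes the call idempotent for callers such as image GC.
func RemoveImage(remove func(image string) error, image string) error {
	if err := remove(image); err != nil {
		if isImageNotFoundError(err) {
			return nil // desired state (image absent) already holds
		}
		return fmt.Errorf("failed to remove image %q: %v", image, err)
	}
	return nil
}

func main() {
	remove := func(string) error { return errImageNotFound }
	fmt.Println(RemoveImage(remove, "busybox")) // <nil>
}
```
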
Julien Balestra
00d87a7209 Remove the termination-log files and the finished- marker file during GC 2017-05-11 16:36:44 +02:00
deads2k
e91716a2db orphan when kubectl delete --cascade=false 2017-05-11 09:11:07 -04:00
zhengjiajin
77c207b424 small change for clarity 2017-05-11 20:09:31 +08:00
zhangxiaoyu-zidif
65080ea1c1 ParsePodFullName(): code robustness 2017-05-11 19:14:16 +08:00
Xianglin Gao
0144803c07 Forcibly remove container
Signed-off-by: Xianglin Gao <xlgao@zju.edu.cn>
2017-05-11 18:39:37 +08:00
fangyuhao [方宇浩]
5976b9c8a3 client.go: format err 2017-05-11 18:17:33 +08:00
Jiangtian Li
1760767047 Add error to function return 2017-05-11 00:30:07 -07:00
Jiangtian Li
33d878bc5a Run ./hack/update-bazel.sh to update deps in BUILD 2017-05-11 00:29:48 -07:00
Jiangtian Li
1eda859bf9 Fix the issue with unqualified names where DNS clients such as ping or iwr validate the name in the response against the original question. Switch to miekg's DNS library 2017-05-11 00:29:20 -07:00
Kubernetes Submit Queue
9a0f5ccb33 Merge pull request #45480 from xiangpengzhao/scheduledjob-cronjob
Automatic merge from submit-queue (batch tested with PRs 45634, 45480)

Rename vars scheduledJob to cronJob in describe.go

**What this PR does / why we need it**:
Rename vars scheduledJob to cronJob in describe.go

**Which issue this PR fixes** *(optional, in `fixes #<issue number>(, fixes #<issue_number>, ...)` format, will close that issue when PR gets merged)*: fixes #

**Special notes for your reviewer**:
There might still be some leftovers in other places.
@soltysh 

**Release note**:

```release-note
NONE
```
2017-05-11 00:12:40 -07:00
Kubernetes Submit Queue
947db16df7 Merge pull request #45579 from shiywang/refactor-visit-patch
Automatic merge from submit-queue (batch tested with PRs 45515, 45579)

Refactor functions in editoptions.go to use fewer arguments

Fixes https://github.com/kubernetes/kubernetes/issues/45521
/assign @mengqiy 
Will rebase PR https://github.com/kubernetes/kubernetes/pull/42256 after this gets merged.
2017-05-10 23:20:42 -07:00
Kubernetes Submit Queue
873ce9ca4a Merge pull request #45515 from derekwaynecarr/ignore-openrc
Automatic merge from submit-queue (batch tested with PRs 45515, 45579)

Ignore openrc cgroup

**What this PR does / why we need it**:
It is a work-around for the following: https://github.com/opencontainers/runc/issues/1440

**Special notes for your reviewer**:
I am open to a cleaner way to do this, but we have many developer users on Macs running containerized kubelets who cannot run them right now because the inclusion of openrc trips up our existence checks. Ideally, runc could give us a call to ask "does this exist according to what runc knows about", or we could add a whitelist check. For now, this was the smallest hack pending more discussion.
2017-05-10 23:20:40 -07:00
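
A minimal sketch of the idea behind the work-around above, assuming a whitelist of controllers the kubelet cares about so an extra "openrc" hierarchy cannot trip the existence check (the names and whitelist contents are illustrative, not the kubelet's exact code):

```go
package main

import "fmt"

// expectedCgroupControllers is an assumed whitelist; the real kubelet
// tracks a specific set of controllers and should ignore extra
// hierarchies such as "openrc" seen on containerized kubelets.
var expectedCgroupControllers = map[string]bool{
	"cpu": true, "cpuacct": true, "memory": true, "systemd": true,
}

// filterControllers drops controllers the kubelet does not manage so an
// unknown hierarchy cannot fail the cgroup exists check.
func filterControllers(mounted []string) []string {
	var out []string
	for _, c := range mounted {
		if expectedCgroupControllers[c] {
			out = append(out, c)
		}
	}
	return out
}

func main() {
	fmt.Println(filterControllers([]string{"cpu", "memory", "openrc"}))
	// [cpu memory]
}
```
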
Kubernetes Submit Queue
fc7ae99327 Merge pull request #45478 from HardySimpson/fix-endpoints-del
Automatic merge from submit-queue (batch tested with PRs 45569, 45602, 45604, 45478, 45550)

fix endpoints controller deleting leader-election endpoints

When there are multiple controller-manager instances, we observe that the endpoints controller deletes the leader-election endpoints after 5 minutes and causes re-election. Add a check to avoid that.

Fixes #45585

error log

```
192.168.0.5 - - [02/May/2017:15:10:13 +0000] "GET /api/v1/endpoints HTTP/1.1" 200 1175 "-" "kube-controller-manager/V100R001C00B012 (linux/amd64) kubernetes/bede5a0/endpoint-controller"
192.168.0.5 - - [02/May/2017:15:10:13 +0000] "DELETE /api/v1/namespaces/kube-system/endpoints/kube-controller-manager HTTP/1.1" 200 46 "-" "kube-controller-manager/V100R001C00B012 (linux/amd64) kubernetes/bede5a0/endpoint-controller"
192.168.0.5 - - [02/May/2017:15:10:13 +0000] "DELETE /api/v1/namespaces/kube-system/endpoints/kube-scheduler HTTP/1.1" 200 46 "-" "kube-controller-manager/V100R001C00B012 (linux/amd64) kubernetes/bede5a0/endpoint-controller"
192.168.0.7 - - [02/May/2017:15:10:14 +0000] "GET /api/v1/namespaces/kube-system/endpoints/kube-scheduler HTTP/1.1" 404 123 "-" "kube-scheduler/V100R001C00B012 (linux/amd64) kubernetes/bede5a0"
192.168.0.7 - - [02/May/2017:15:10:14 +0000] "POST /api/v1/namespaces/kube-system/endpoints HTTP/1.1" 201 398 "-" "kube-scheduler/V100R001C00B012 (linux/amd64) kubernetes/bede5a0"
192.168.0.6 - - [02/May/2017:15:10:14 +0000] "GET /api/v1/namespaces/kube-system/endpoints/kube-controller-manager HTTP/1.1" 404 141 "-" "kube-controller-manager/V100R001C00B012 (linux/amd64) kubernetes/bede5a0"
192.168.0.6 - - [02/May/2017:15:10:14 +0000] "POST /api/v1/namespaces/kube-system/endpoints HTTP/1.1" 201 416 "-" "kube-controller-manager/V100R001C00B012 (linux/amd64) kubernetes/bede5a0"
192.168.0.7 - - [02/May/2017:15:10:14 +0000] "GET /api/v1/namespaces/kube-system/endpoints/kube-controller-manager HTTP/1.1" 200 416 "-" "kube-controller-manager/V100R001C00B012 (linux/amd64) ku
```



**Release note**:

```release-note
none
```
2017-05-10 21:34:43 -07:00
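
A minimal sketch of such a check, assuming the controller skips endpoints objects that carry the leader-election record annotation (control-plane.alpha.kubernetes.io/leader); the helper name is illustrative:

```go
package main

import "fmt"

// leaderElectionAnnotation is the annotation written onto the endpoints
// object by leader election; assuming the fix keys off it so the endpoints
// controller leaves such objects alone instead of garbage-collecting them
// as endpoints without a matching service.
const leaderElectionAnnotation = "control-plane.alpha.kubernetes.io/leader"

func shouldSkipEndpoints(annotations map[string]string) bool {
	_, isLeaderElection := annotations[leaderElectionAnnotation]
	return isLeaderElection
}

func main() {
	kcm := map[string]string{leaderElectionAnnotation: `{"holderIdentity":"..."}`}
	fmt.Println(shouldSkipEndpoints(kcm)) // true: do not delete, avoid re-election
}
```
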
Kubernetes Submit Queue
b0d024fee1 Merge pull request #45569 from vmware/fix_VolumesAreAttached
Automatic merge from submit-queue (batch tested with PRs 45569, 45602, 45604, 45478, 45550)

Fixing VolumesAreAttached and DisksAreAttached functions in vSphere

**What this PR does / why we need it**:

In vSphere HA, when a node fails over, the node VM momentarily goes into the "not connected" state. During this time, if Kubernetes calls the VolumesAreAttached function, we return an incorrect map with the status for each volume set to false (the detached state).

Volumes attached to the previous node must be detached before they can be attached to the new node. Kubernetes attempts to check volume attachment. When the node VM is not accessible, or we cannot determine for any reason whether a disk is attached, we were returning a map of volume paths with their attachment status set to false. This was misinterpreted as the disks already being detached from the node, and Kubernetes marked the volumes as detached after the orphaned pod was cleaned up. This causes volumes to remain attached to the previous node, and pod creation remains stuck in the "ContainerCreating" state. Since both nodes are powered on, the volumes cannot be attached to the new node.

**Logs before fix**

```
{"log":"E0508 21:31:20.902501       1 vsphere.go:1053] disk uuid not found for [vsanDatastore] kubevols/kubernetes-dynamic-pvc-8b75170e-342d-11e7-bab5-0050568aeb0a.vmdk. err: No disk UUID fou
nd\n","stream":"stderr","time":"2017-05-08T21:31:20.902792337Z"}
{"log":"E0508 21:31:20.902552       1 vsphere.go:1041] Failed to check whether disk is attached. err: No disk UUID found\n","stream":"stderr","time":"2017-05-08T21:31:20.902842673Z"}
{"log":"I0508 21:31:20.902575       1 attacher.go:114] VolumesAreAttached: check volume \"[vsanDatastore] kubevols/kubernetes-dynamic-pvc-8b75170e-342d-11e7-bab5-0050568aeb0a.vmdk\" (specName
: \"pvc-8b75170e-342d-11e7-bab5-0050568aeb0a\") is no longer attached\n","stream":"stderr","time":"2017-05-08T21:31:20.902849717Z"}
{"log":"I0508 21:31:20.902596       1 operation_generator.go:166] VerifyVolumesAreAttached determined volume \"kubernetes.io/vsphere-volume/[vsanDatastore] kubevols/kubernetes-dynamic-pvc-8b7
5170e-342d-11e7-bab5-0050568aeb0a.vmdk\" (spec.Name: \"pvc-8b75170e-342d-11e7-bab5-0050568aeb0a\") is no longer attached to node \"node3\", therefore it was marked as detached.\n","stream":"s
tderr","time":"2017-05-08T21:31:20.902863097Z"}
```



In this change, we make sure the correct volume attachment map is returned, and if any error occurs while checking a disk's status, we return a nil map.
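
A minimal sketch of that behavior (illustrative names, not the vSphere cloud provider's real signatures): on any per-disk error the function returns a nil map plus the error, never a map full of false values that would be misread as "already detached":

```go
package main

import (
	"errors"
	"fmt"
)

// checkAttached stands in for the per-disk vSphere query.
type checkAttached func(volPath, nodeName string) (bool, error)

// DisksAreAttached returns a nil map and the error when any disk's state
// cannot be determined, instead of fabricating "detached" entries.
func DisksAreAttached(check checkAttached, volPaths []string, node string) (map[string]bool, error) {
	attached := make(map[string]bool, len(volPaths))
	for _, p := range volPaths {
		ok, err := check(p, node)
		if err != nil {
			// Unknown state: surface the error, not a false "detached".
			return nil, fmt.Errorf("cannot verify %q on node %q: %v", p, node, err)
		}
		attached[p] = ok
	}
	return attached, nil
}

func main() {
	failing := func(string, string) (bool, error) { return false, errors.New("No disk UUID found") }
	m, err := DisksAreAttached(failing, []string{"[vsanDatastore] kubevols/pvc-x.vmdk"}, "node3")
	fmt.Println(m, err) // map is nil, error is surfaced
}
```
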


**Logs after fix**
```
{"log":"E0509 20:25:37.982152       1 vsphere.go:1067] Failed to check whether disk is attached. err: No disk UUID found\n","stream":"stderr","time":"2017-05-09T20:25:37.982516134Z"}
{"log":"E0509 20:25:37.982190       1 attacher.go:104] Error checking if volumes ([[vsanDatastore] kubevols/kubernetes-dynamic-pvc-c26fcae8-34f2-11e7-9303-0050568a3ac1.vmdk [vsanDatastore] kubevols/kubernetes-dynamic-pvc-c268f141-34f2-11e7-9303-0050568a3ac1.vmdk [vsanDatastore] kubevols/kubernetes-dynamic-pvc-c25d08d3-34f2-11e7-9303-0050568a3ac1.vmdk]) are attached to current node (\"node3\"). err=No disk UUID found\n","stream":"stderr","time":"2017-05-09T20:25:37.982521101Z"}
{"log":"E0509 20:25:37.982220       1 operation_generator.go:158] VolumesAreAttached failed for checking on node \"node3\" with: No disk UUID found\n","stream":"stderr","time":"2017-05-09T20:25:37.982526285Z"}
{"log":"I0509 20:25:39.157279       1 attacher.go:115] VolumesAreAttached: volume \"[vsanDatastore] kubevols/kubernetes-dynamic-pvc-c268f141-34f2-11e7-9303-0050568a3ac1.vmdk\" (specName: \"pvc-c268f141-34f2-11e7-9303-0050568a3ac1\") is attached\n","stream":"stderr","time":"2017-05-09T20:25:39.157724393Z"}
{"log":"I0509 20:25:39.157329       1 attacher.go:115] VolumesAreAttached: volume \"[vsanDatastore] kubevols/kubernetes-dynamic-pvc-c25d08d3-34f2-11e7-9303-0050568a3ac1.vmdk\" (specName: \"pvc-c25d08d3-34f2-11e7-9303-0050568a3ac1\") is attached\n","stream":"stderr","time":"2017-05-09T20:25:39.157787946Z"}
{"log":"I0509 20:25:39.157367       1 attacher.go:115] VolumesAreAttached: volume \"[vsanDatastore] kubevols/kubernetes-dynamic-pvc-c26fcae8-34f2-11e7-9303-0050568a3ac1.vmdk\" (specName: \"pvc-c26fcae8-34f2-11e7-9303-0050568a3ac1\") is attached\n","stream":"stderr","time":"2017-05-09T20:25:39.157794586Z"}
```

```
{"log":"I0509 20:25:41.267425       1 reconciler.go:173] Started DetachVolume for volume \"kubernetes.io/vsphere-volume/[vsanDatastore] kubevols/kubernetes-dynamic-pvc-c26fcae8-34f2-11e7-9303-0050568a3ac1.vmdk\" from node \"node3\"\n","stream":"stderr","time":"2017-05-09T20:25:41.267883567Z"}
{"log":"I0509 20:25:41.271836       1 operation_generator.go:694] Verified volume is safe to detach for volume \"pvc-c26fcae8-34f2-11e7-9303-0050568a3ac1\" (UniqueName: \"kubernetes.io/vsphere-volume/[vsanDatastore] kubevols/kubernetes-dynamic-pvc-c26fcae8-34f2-11e7-9303-0050568a3ac1.vmdk\") on node \"node3\" \n","stream":"stderr","time":"2017-05-09T20:25:41.272703255Z"}
{"log":"I0509 20:25:47.928021       1 operation_generator.go:341] DetachVolume.Detach succeeded for volume \"pvc-c26fcae8-34f2-11e7-9303-0050568a3ac1\" (UniqueName: \"kubernetes.io/vsphere-volume/[vsanDatastore] kubevols/kubernetes-dynamic-pvc-c26fcae8-34f2-11e7-9303-0050568a3ac1.vmdk\") on node \"node3\" \n","stream":"stderr","time":"2017-05-09T20:25:47.928348553Z"}

{"log":"I0509 20:26:12.535962       1 operation_generator.go:694] Verified volume is safe to detach for volume \"pvc-c25d08d3-34f2-11e7-9303-0050568a3ac1\" (UniqueName: \"kubernetes.io/vsphere-volume/[vsanDatastore] kubevols/kubernetes-dynamic-pvc-c25d08d3-34f2-11e7-9303-0050568a3ac1.vmdk\") on node \"node3\" \n","stream":"stderr","time":"2017-05-09T20:26:12.536055214Z"}
{"log":"I0509 20:26:14.188580       1 operation_generator.go:341] DetachVolume.Detach succeeded for volume \"pvc-c25d08d3-34f2-11e7-9303-0050568a3ac1\" (UniqueName: \"kubernetes.io/vsphere-volume/[vsanDatastore] kubevols/kubernetes-dynamic-pvc-c25d08d3-34f2-11e7-9303-0050568a3ac1.vmdk\") on node \"node3\" \n","stream":"stderr","time":"2017-05-09T20:26:14.188792677Z"}

{"log":"I0509 20:26:40.355656       1 reconciler.go:173] Started DetachVolume for volume \"kubernetes.io/vsphere-volume/[vsanDatastore] kubevols/kubernetes-dynamic-pvc-c268f141-34f2-11e7-9303-0050568a3ac1.vmdk\" from node \"node3\"\n","stream":"stderr","time":"2017-05-09T20:26:40.355922165Z"}
{"log":"I0509 20:26:40.357988       1 operation_generator.go:694] Verified volume is safe to detach for volume \"pvc-c268f141-34f2-11e7-9303-0050568a3ac1\" (UniqueName: \"kubernetes.io/vsphere-volume/[vsanDatastore] kubevols/kubernetes-dynamic-pvc-c268f141-34f2-11e7-9303-0050568a3ac1.vmdk\") on node \"node3\" \n","stream":"stderr","time":"2017-05-09T20:26:40.358177953Z"}

```




**Which issue this PR fixes**
fixes #45464, https://github.com/vmware/kubernetes/issues/116

**Special notes for your reviewer**:
Verified this change on locally built hyperkube image - v1.7.0-alpha.3.147+3c0526cb64bdf5-dirty

**Performed many failovers with large volumes (30GB) attached to the pod.**

$ kubectl describe pod
Name:		wordpress-mysql-2789807967-3xcvc
Node:		node3/172.1.87.0
Status:		Running

Powered off node3's host. Pod failed over to node2. Verified all 3 disks detached from node3 and attached to node2.

$ kubectl describe pod
Name:		wordpress-mysql-2789807967-qx0b0
Node:		node2/172.1.9.0
Status:		Running

Powered off node2's host. Pod failed over to node3. Verified all 3 disks detached from node2 and attached to node3.

$ kubectl describe pod
Name:		wordpress-mysql-2789807967-7849s
Node:		node3/172.1.87.0
Status:		Running

Powered off node3's host. Pod failed over to node1. Verified all 3 disks detached from node3 and attached to node1.

$ kubectl describe pod
Name:		wordpress-mysql-2789807967-26lp1
Node:		node1/172.1.98.0
Status:		Running

Powered off node1's host. Pod failed over to node3. Verified all 3 disks detached from node1 and attached to node3.

$ kubectl describe pods
Name:		wordpress-mysql-2789807967-4pdtl
Node:		node3/172.1.87.0
Status:		Running


Powered off node3's host. Pod failed over to node1. Verified all 3 disks detached from node3 and attached to node1.

$ kubectl describe pod
Name:		wordpress-mysql-2789807967-t375f
Node:		node1/172.1.98.0
Status:		Running

Powered off node1's host. Pod failed over to node3. Verified all 3 disks detached from node1 and attached to node3.

$ kubectl describe pods
Name:		wordpress-mysql-2789807967-pn6ps
Node:		node3/172.1.87.0
Status:		Running

Powered off node3's host. Pod failed over to node1. Verified all 3 disks detached from node3 and attached to node1.

$ kubectl describe pods
Name:		wordpress-mysql-2789807967-0wqc1
Node:		node1/172.1.98.0
Status:		Running

Powered off node1's host. Pod failed over to node3. Verified all 3 disks detached from node1 and attached to node3.

$ kubectl describe pods
Name:		wordpress-mysql-2789807967-821nc
Node:		node3/172.1.87.0
Status:		Running


**Release note**:

```release-note
NONE
```

CC:  @BaluDontu @abrarshivani @luomiao @tusharnt @pdhamdhere
2017-05-10 21:34:37 -07:00
Kubernetes Submit Queue
1f3b158a10 Merge pull request #45194 from yujuhong/rm-cri-flag
Automatic merge from submit-queue

Remove the deprecated `--enable-cri` flag

Except for rkt, CRI is the default and only integration point for
container runtimes.

```release-note
Remove the deprecated `--enable-cri` flag. CRI is now the default, 
and the only way to integrate with kubelet for the container runtimes.
```
2017-05-10 20:46:24 -07:00
Kubernetes Submit Queue
b0399114fe Merge pull request #38636 from dhawal55/internal-elb
Automatic merge from submit-queue (batch tested with PRs 43067, 45586, 45590, 38636, 45599)

AWS: Remove check that forces loadBalancerSourceRanges to be 0.0.0.0/0. 

fixes #38633

Remove the check that forces loadBalancerSourceRanges to be 0.0.0.0/0. Also, remove the check that forces the service.beta.kubernetes.io/aws-load-balancer-internal annotation to be 0.0.0.0/0. Ideally, it should be a boolean, but for backward compatibility we leave it as any non-empty value.
2017-05-10 19:31:45 -07:00
yaxinlx
c280b7cab7 There is a rule for using Go channels: never close a channel on the receiver side.

fix https://github.com/kubernetes/kubernetes/issues/45215

delete the channel close line

change the event channel element type to struct{}

go fmt

the eventCh channel does not need to be buffered
2017-05-11 10:19:28 +08:00
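
A small self-contained example of the pattern this commit enforces: the sender closes the channel, the receiver only ranges over it, and struct{} carries no payload; an unbuffered channel is enough here.

```go
package main

import (
	"fmt"
	"sync"
)

func main() {
	eventCh := make(chan struct{})

	var wg sync.WaitGroup
	wg.Add(1)
	go func() { // receiver: never closes eventCh
		defer wg.Done()
		for range eventCh {
			fmt.Println("got event")
		}
	}()

	for i := 0; i < 3; i++ { // sender owns the channel
		eventCh <- struct{}{}
	}
	close(eventCh) // closed exactly once, by the sender
	wg.Wait()
}
```
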
Kubernetes Submit Queue
b040513aab Merge pull request #43067 from xilabao/dedup-in-printer
Automatic merge from submit-queue

De-duplication in printer
2017-05-10 19:08:59 -07:00
xilabao
02deeb224e fix specialized verbs in create role 2017-05-11 09:32:43 +08:00
Kubernetes Submit Queue
a86392a326 Merge pull request #45333 from colemickens/cmpr-cpfix
Automatic merge from submit-queue (batch tested with PRs 45382, 45384, 44781, 45333, 45543)

azure: improve user agent string

**What this PR does / why we need it**: the UA string doesn't actually contain "kubernetes" in it

**Which issue this PR fixes** *(optional, in `fixes #<issue number>(, fixes #<issue_number>, ...)` format, will close that issue when PR gets merged)*: fixes #

**Special notes for your reviewer**: none 

**Release note**:

```release-note
NONE
```

cc: @brendandburns
2017-05-10 17:47:45 -07:00
Kubernetes Submit Queue
aacc9729f1 Merge pull request #44781 from wongma7/outervolumespec
Automatic merge from submit-queue (batch tested with PRs 45382, 45384, 44781, 45333, 45543)

Ensure desired state of world populator runs before volume reconstructor

If the kubelet's volumemanager reconstructor for the actual state of world runs before the desired state of world has been populated, the pods in the actual state of world will have some incorrect volume information: namely outerVolumeSpecName, which, if incorrect, leads to part of the issue here https://github.com/kubernetes/kubernetes/issues/43515, because WaitForVolumeAttachAndMount searches the actual state of world with the correct outerVolumeSpecName, won't find it, and so reports 'timeout waiting....', etc. forever for existing pods. The comments acknowledge that this is a known issue.

The all-sources-ready check doesn't work because the sources being ready doesn't necessarily mean the desired state of world populator has added pods from those sources. So instead let's put the all-sources-ready check in the *populator*: when the sources are ready, it will be able to populate the desired state of world and make "HasAddedPods()" return true. Only then may the reconstructor run.

@jingxu97 PTAL, you wrote all of the reconstruction stuff

```release-note
NONE
```
2017-05-10 17:47:43 -07:00
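
A stdlib-only sketch of the ordering described above: the reconstructor waits until the populator reports HasAddedPods(). The method name comes from the PR text; the polling loop and type names are illustrative.

```go
package main

import (
	"fmt"
	"time"
)

// populator is a stand-in for the desired-state-of-world populator.
type populator interface {
	HasAddedPods() bool
}

// runReconstructor gates volume reconstruction on the populator having
// processed pods from all ready sources, so reconstructed pods get the
// correct outerVolumeSpecName from the desired state of world.
func runReconstructor(p populator, reconstruct func(), poll time.Duration) {
	for !p.HasAddedPods() {
		time.Sleep(poll)
	}
	reconstruct()
}

type fakePopulator struct{ ready chan struct{} }

func (f *fakePopulator) HasAddedPods() bool {
	select {
	case <-f.ready:
		return true
	default:
		return false
	}
}

func main() {
	f := &fakePopulator{ready: make(chan struct{})}
	go func() { time.Sleep(50 * time.Millisecond); close(f.ready) }()
	runReconstructor(f, func() { fmt.Println("reconstructing actual state of world") }, 10*time.Millisecond)
}
```
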
Random-Liu
613c42b89b Make a log line more clear in kuberuntime_manager.go. 2017-05-10 16:32:00 -07:00
David Ashpole
b69dacbd86 remove unused fields from Kubelet struct 2017-05-10 16:25:09 -07:00
Kubernetes Submit Queue
14b898d115 Merge pull request #45595 from justinsb/sts_alias_2
Automatic merge from submit-queue

Add sts alias for kubectl statefulset
2017-05-10 16:06:58 -07:00
Matthew Wong
9c6223f885 Don't attempt to make and chmod subPath if it already exists 2017-05-10 18:47:03 -04:00
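
A minimal sketch of the guard described in the commit above, assuming the usual os.Stat / MkdirAll / Chmod sequence (names are illustrative, not the kubelet's exact code path):

```go
package main

import (
	"fmt"
	"os"
	"path/filepath"
)

// ensureSubPath creates and chmods a volume subPath only when it does not
// already exist, leaving an existing directory's permissions untouched.
func ensureSubPath(dir string, perm os.FileMode) error {
	if _, err := os.Stat(dir); err == nil {
		return nil // already there: do not re-chmod
	} else if !os.IsNotExist(err) {
		return err
	}
	if err := os.MkdirAll(dir, perm); err != nil {
		return err
	}
	return os.Chmod(dir, perm) // MkdirAll applies the umask, so chmod explicitly
}

func main() {
	dir := filepath.Join(os.TempDir(), "volume", "subpath")
	fmt.Println(ensureSubPath(dir, 0750))
}
```
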
Shyam Jeedigunta
27fa52390b Use real proxier inside hollow-proxy but with mocked syscalls 2017-05-10 23:45:26 +02:00
Yu-Ju Hong
daa329c9ae Remove the deprecated --enable-cri flag
Except for rkt, CRI is the default and only integration point for
container runtimes.
2017-05-10 13:03:41 -07:00
Kubernetes Submit Queue
3ddbed969b Merge pull request #45490 from deads2k/owners-01-extensions
Automatic merge from submit-queue

add owners to new packages

Adds owners files to some packages that need it.
2017-05-10 12:51:51 -07:00
Kubernetes Submit Queue
bfa18037ce Merge pull request #45404 from wojtek-t/edge_based_winuserspace_proxy
Automatic merge from submit-queue

Edge based winuserspace proxy

Last PR in the series of making kube-proxy event-based.

This is a sibling PR to https://github.com/kubernetes/kubernetes/pull/45356 that is already merged.
The second commit is removing the code that is no longer used.
2017-05-10 12:51:43 -07:00
Kubernetes Submit Queue
77b2e6302c Merge pull request #45236 from verb/sharedpid-2-default
Automatic merge from submit-queue

Enable shared PID namespace by default for docker pods

**What this PR does / why we need it**: This PR enables PID namespace sharing for docker pods by default, bringing the behavior of docker in line with the other CRI runtimes when used with docker >= 1.13.1.

**Which issue this PR fixes** *(optional, in `fixes #<issue number>(, fixes #<issue_number>, ...)` format, will close that issue when PR gets merged)*: ref #1615

**Special notes for your reviewer**: cc @dchen1107 @yujuhong 

**Release note**:

```release-note
Kubernetes now shares a single PID namespace among all containers in a pod when running with docker >= 1.13.1. This means processes can now signal processes in other containers in a pod, but it also means that the `kubectl exec {pod} kill 1` pattern will cause the pod to be restarted rather than a single container.
```
2017-05-10 12:06:01 -07:00
Derek Carr
4e002eacb1 Do not fail cgroup exists checks for unknown controllers 2017-05-10 14:52:09 -04:00
divyenpatel
9f89b57b74 fix implementation of VolumesAreAttached function 2017-05-10 10:16:13 -07:00
Justin Santa Barbara
e1fdb8b027 Add sts alias for kubectl statefulset
Saves a lot of typing!
2017-05-10 09:57:36 -04:00
Wojciech Tyczynski
ce752e3fc9 Remove no-longer used code in proxy/config 2017-05-10 12:16:35 +02:00
Wojciech Tyczynski
57d35d5acb Switch winuserspace proxy to be event based for services 2017-05-10 12:14:37 +02:00
Shiyang Wang
d43f4bb3b6 refactor functions in editoptions.go to use fewer arguments 2017-05-10 16:46:04 +08:00
zhangxiaoyu-zidif
00b67443f0 daemoncontroller.go: format for 2017-05-10 14:06:34 +08:00
Kubernetes Submit Queue
3fbfafdd0a Merge pull request #45523 from colemickens/cmpr-cpfix3
Automatic merge from submit-queue

azure: load balancer: support UDP, fix multiple loadBalancerSourceRanges support, respect sessionAffinity

**What this PR does / why we need it**:

1. Adds support for UDP ports
2. Fixes support for multiple `loadBalancerSourceRanges`
3. Adds support the Service spec's `sessionAffinity`
4. Removes dead code from the Instances file

**Which issue this PR fixes** *(optional, in `fixes #<issue number>(, fixes #<issue_number>, ...)` format, will close that issue when PR gets merged)*: fixes #43683

**Special notes for your reviewer**: n/a

**Release note**:

```release-note
azure: add support for UDP ports
azure: fix support for multiple `loadBalancerSourceRanges`
azure: support the Service spec's `sessionAffinity`
```
2017-05-09 22:07:55 -07:00
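
Of the changes above, the sessionAffinity one amounts to mapping the Service's affinity mode onto the load balancer rule's distribution mode. A minimal sketch of that mapping; the string values mirror the Kubernetes Service API ("None"/"ClientIP") and Azure's documented LoadDistribution modes ("Default"/"SourceIP"), while the function name is illustrative rather than the provider's actual code:

```go
package main

import "fmt"

// loadDistributionFor maps a Service's sessionAffinity to an Azure load
// balancing rule distribution mode.
func loadDistributionFor(sessionAffinity string) string {
	if sessionAffinity == "ClientIP" {
		return "SourceIP" // pin a client IP to one backend, honoring affinity
	}
	return "Default" // 5-tuple hash, no affinity
}

func main() {
	fmt.Println(loadDistributionFor("ClientIP")) // SourceIP
	fmt.Println(loadDistributionFor("None"))     // Default
}
```
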
xiangpengzhao
a9a36fcf4b Display <none> for "kubectl get pods -o wide" when node is empty. 2017-05-10 12:53:14 +08:00
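
A tiny sketch of the substitution described in the commit above; the function name is illustrative, kubectl's printers are the real implementation:

```go
package main

import "fmt"

// nodeNameForWide substitutes "<none>" for an empty node name, matching the
// convention kubectl printers use for empty optional fields.
func nodeNameForWide(nodeName string) string {
	if nodeName == "" {
		return "<none>"
	}
	return nodeName
}

func main() {
	fmt.Println(nodeNameForWide(""))      // <none>
	fmt.Println(nodeNameForWide("node3")) // node3
}
```
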
Kubernetes Submit Queue
148b5da60b Merge pull request #44746 from xiangpengzhao/fix-podpreset
Automatic merge from submit-queue

Add support for PodPreset in `kubectl get` command

**What this PR does / why we need it**:
PR title

**Which issue this PR fixes** *(optional, in `fixes #<issue number>(, fixes #<issue_number>, ...)` format, will close that issue when PR gets merged)*: fixes #44736

**Special notes for your reviewer**:

**Release note**:

```release-note
NONE
```
2017-05-09 21:16:17 -07:00
Kubernetes Submit Queue
51a3413371 Merge pull request #45307 from yujuhong/mv-docker-client
Automatic merge from submit-queue (batch tested with PRs 45453, 45307, 44987)

Migrate the docker client code from dockertools to dockershim

Move docker client code from dockertools to dockershim/libdocker. This includes
DockerInterface (renamed to Interface), FakeDockerClient, etc.

This is part of #43234
2017-05-09 20:23:44 -07:00
Kubernetes Submit Queue
61593ba8b8 Merge pull request #45453 from k82cn/k8s_45220
Automatic merge from submit-queue (batch tested with PRs 45453, 45307, 44987)

Init cache with assigned non-terminated pods before scheduling

**Which issue this PR fixes** *(optional, in `fixes #<issue number>(, fixes #<issue_number>, ...)` format, will close that issue when PR gets merged)*: fixes #45220

**Release note**:

```release-note
The fix makes the scheduling goroutine wait for the cache (e.g. Pod) to be synced.
```
2017-05-09 20:23:37 -07:00
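
The release note above describes making the scheduling goroutine wait for the pod cache to sync. A stdlib-only sketch of that gating; waitForCacheSync here imitates client-go's cache.WaitForCacheSync, and the names and timings are illustrative:

```go
package main

import (
	"fmt"
	"time"
)

// waitForCacheSync blocks until every informer reports synced, or the stop
// channel closes.
func waitForCacheSync(stopCh <-chan struct{}, synced ...func() bool) bool {
	ticker := time.NewTicker(10 * time.Millisecond)
	defer ticker.Stop()
	for {
		allSynced := true
		for _, s := range synced {
			if !s() {
				allSynced = false
				break
			}
		}
		if allSynced {
			return true
		}
		select {
		case <-stopCh:
			return false
		case <-ticker.C:
		}
	}
}

func main() {
	stop := make(chan struct{})
	podsSynced := func() bool { return true } // assume the pod informer has listed assigned pods

	// Only start scheduling once the cache reflects assigned, non-terminated
	// pods; otherwise resource accounting starts from an empty cache.
	if !waitForCacheSync(stop, podsSynced) {
		fmt.Println("timed out waiting for caches to sync")
		return
	}
	fmt.Println("caches synced, starting scheduling loop")
}
```
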
Lee Verberne
f83337a8ac Fix AssertCalls usage for kubelet fake runtimes
Despite its name, AssertCalls() does not assert anything. It returns an
error that must be checked. This was causing false negatives for
a handful of unit tests.
2017-05-10 01:40:58 +00:00
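
Per the commit above, AssertCalls() returns an error that callers must check. A small helper along assumed lines (the interface shape is an assumption for illustration, not the fake runtime's exact API):

```go
package runtimetest

import "testing"

// fakeRuntimeAsserter matches the shape described in the commit message:
// AssertCalls returns an error rather than failing the test itself.
type fakeRuntimeAsserter interface {
	AssertCalls(calls []string) error
}

// verifyCalls turns the returned error into a test failure, avoiding the
// false negatives that come from ignoring AssertCalls' return value.
func verifyCalls(t *testing.T, fr fakeRuntimeAsserter, calls []string) {
	t.Helper()
	if err := fr.AssertCalls(calls); err != nil {
		t.Errorf("unexpected runtime calls: %v", err)
	}
}
```
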
Kubernetes Submit Queue
7c3f8c9bcf Merge pull request #45181 from vmware/NodeAddressesIPV6IssueNew
Automatic merge from submit-queue

Filter out IPV6 addresses from NodeAddresses() returned by vSphere

The vSphere CP returns both IPV6 and IPV4 addresses for a Node as part of the NodeAddresses() implementation. However, the Kubelet fails due to a duplicate api.NodeAddress value when the node has an IPV6 address associated with it. This issue is tracked in #42690. The following were observed:

- When we enabled the logs and checked the addresses sent by the vSphere CP to the Kubelet, we did not see any duplicate addresses at all.
- Also, kubelet_node_status doesn't receive any duplicate addresses from the cloud provider.

However, when we filter out the IPV6 addresses and only return IPV4 addresses to the Kubelet, it works perfectly fine.

Even though the Kubelet receives non-duplicate node addresses, it still errors out with duplicate node addresses. It might be an issue when the kubelet propagates these addresses to the API server, or the API server is unable to handle IPV6 addresses.

@divyenpatel @abrarshivani @pdhamdhere @tusharnt

**Release note**:

```release-note
None
```
2017-05-09 18:16:03 -07:00
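
A minimal sketch of the IPv4-only filtering described above; nodeAddress is an illustrative stand-in for api.NodeAddress, and To4() returning non-nil is the IPv4 test:

```go
package main

import (
	"fmt"
	"net"
)

// nodeAddress mirrors the shape of api.NodeAddress for illustration only.
type nodeAddress struct {
	Type    string
	Address string
}

// filterIPv4 keeps only IPv4 addresses; To4() returns nil for anything
// that is not an IPv4 address.
func filterIPv4(addrs []nodeAddress) []nodeAddress {
	var out []nodeAddress
	for _, a := range addrs {
		if ip := net.ParseIP(a.Address); ip != nil && ip.To4() != nil {
			out = append(out, a)
		}
	}
	return out
}

func main() {
	addrs := []nodeAddress{
		{Type: "InternalIP", Address: "10.160.25.20"},
		{Type: "InternalIP", Address: "fe80::250:56ff:fe8a:1"},
	}
	fmt.Println(filterIPv4(addrs)) // only the IPv4 entry remains
}
```
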
xiangpengzhao
baafbf406e Add support for PodPreset in kubectl get command 2017-05-10 08:59:22 +08:00
xilabao
697efd1baf De-duplication in printer 2017-05-10 08:45:20 +08:00
Matthew Wong
bbe82a2688 Ensure desired state of world populator runs before volume reconstructor 2017-05-09 18:25:59 -04:00