Commit Graph

20807 Commits

Author SHA1 Message Date
deads2k
e91716a2db orphan when kubectl delete --cascade=false 2017-05-11 09:11:07 -04:00
Kubernetes Submit Queue
9a0f5ccb33 Merge pull request #45480 from xiangpengzhao/scheduledjob-cronjob
Automatic merge from submit-queue (batch tested with PRs 45634, 45480)

Rename vars scheduledJob to cronJob in describe.go

**What this PR does / why we need it**:
Rename vars scheduledJob to cronJob in describe.go

**Which issue this PR fixes** *(optional, in `fixes #<issue number>(, fixes #<issue_number>, ...)` format, will close that issue when PR gets merged)*: fixes #

**Special notes for your reviewer**:
There might still be some leftovers in other places.
@soltysh 

**Release note**:

```release-note
NONE
```
2017-05-11 00:12:40 -07:00
Kubernetes Submit Queue
947db16df7 Merge pull request #45579 from shiywang/refactor-visit-patch
Automatic merge from submit-queue (batch tested with PRs 45515, 45579)

Refactor functions in editoptions.go to use fewer arguments

Fixes https://github.com/kubernetes/kubernetes/issues/45521
/assign @mengqiy 
Will rebase PR https://github.com/kubernetes/kubernetes/pull/42256 after this gets merged
2017-05-10 23:20:42 -07:00
Kubernetes Submit Queue
873ce9ca4a Merge pull request #45515 from derekwaynecarr/ignore-openrc
Automatic merge from submit-queue (batch tested with PRs 45515, 45579)

Ignore openrc cgroup

**What this PR does / why we need it**:
It is a work-around for the following: https://github.com/opencontainers/runc/issues/1440

**Special notes for your reviewer**:
I am open to a cleaner way to do this, but we have many developer users on Macs running containerized kubelets who are unable to run them right now because the inclusion of openrc trips up our existence checks. Ideally, runc could give us a call to ask "does this exist according to what runc knows about". Or we could add a whitelist check. Right now, this is the smallest hack pending more discussion.
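
For illustration, a minimal standalone Go sketch of the whitelist idea (not the actual kubelet code; the controller names and paths here are assumptions): existence checks consult only the cgroup controllers the kubelet manages and skip unknown hierarchies such as openrc.

```go
package main

import (
	"fmt"
	"os"
	"path/filepath"
)

// expectedControllers are the cgroup subsystems the kubelet cares about;
// anything else (e.g. "openrc" on Alpine-based hosts) is ignored.
var expectedControllers = map[string]bool{
	"cpu": true, "cpuacct": true, "cpuset": true, "memory": true, "systemd": true,
}

// cgroupExists reports whether the cgroup path exists under every expected
// controller mount, skipping mounts for controllers we do not manage.
func cgroupExists(mounts map[string]string, cgroupPath string) bool {
	for controller, mountPoint := range mounts {
		if !expectedControllers[controller] {
			continue // e.g. the openrc hierarchy: not our concern
		}
		if _, err := os.Stat(filepath.Join(mountPoint, cgroupPath)); err != nil {
			return false
		}
	}
	return true
}

func main() {
	mounts := map[string]string{
		"memory": "/sys/fs/cgroup/memory",
		"openrc": "/sys/fs/cgroup/openrc", // present on some hosts, ignored here
	}
	fmt.Println(cgroupExists(mounts, "kubepods"))
}
```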
2017-05-10 23:20:40 -07:00
Kubernetes Submit Queue
fc7ae99327 Merge pull request #45478 from HardySimpson/fix-endpoints-del
Automatic merge from submit-queue (batch tested with PRs 45569, 45602, 45604, 45478, 45550)

fix endpoints controller deleting leader-election endpoints

When there are multiple controller-manager instances, we observe that the endpoints controller deletes the leader-election endpoints after 5 minutes and causes re-election. Add a check to avoid that.
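
For illustration, a minimal standalone sketch of such a check, assuming the leader-election annotation key used by client-go (`control-plane.alpha.kubernetes.io/leader`); endpoints carrying it are never treated as orphans:

```go
package main

import "fmt"

// leaderElectionAnnotation is the annotation written by leader election
// (assumed here; in client-go it is resourcelock.LeaderElectionRecordAnnotationKey).
const leaderElectionAnnotation = "control-plane.alpha.kubernetes.io/leader"

type endpoints struct {
	Name        string
	Annotations map[string]string
}

// shouldDelete returns false for endpoints objects used purely for leader
// election, so the controller never garbage-collects them.
func shouldDelete(ep endpoints, hasMatchingService bool) bool {
	if _, ok := ep.Annotations[leaderElectionAnnotation]; ok {
		return false
	}
	return !hasMatchingService
}

func main() {
	ep := endpoints{
		Name:        "kube-controller-manager",
		Annotations: map[string]string{leaderElectionAnnotation: "{...}"},
	}
	fmt.Println(shouldDelete(ep, false)) // false: leave leader-election endpoints alone
}
```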

Fixes #45585

Error log:

```
192.168.0.5 - - [02/May/2017:15:10:13 +0000] "GET /api/v1/endpoints HTTP/1.1" 200 1175 "-" "kube-controller-manager/V100R001C00B012 (linux/amd64) kubernetes/bede5a0/endpoint-controller"
192.168.0.5 - - [02/May/2017:15:10:13 +0000] "DELETE /api/v1/namespaces/kube-system/endpoints/kube-controller-manager HTTP/1.1" 200 46 "-" "kube-controller-manager/V100R001C00B012 (linux/amd64) kubernetes/bede5a0/endpoint-controller"
192.168.0.5 - - [02/May/2017:15:10:13 +0000] "DELETE /api/v1/namespaces/kube-system/endpoints/kube-scheduler HTTP/1.1" 200 46 "-" "kube-controller-manager/V100R001C00B012 (linux/amd64) kubernetes/bede5a0/endpoint-controller"
192.168.0.7 - - [02/May/2017:15:10:14 +0000] "GET /api/v1/namespaces/kube-system/endpoints/kube-scheduler HTTP/1.1" 404 123 "-" "kube-scheduler/V100R001C00B012 (linux/amd64) kubernetes/bede5a0"
192.168.0.7 - - [02/May/2017:15:10:14 +0000] "POST /api/v1/namespaces/kube-system/endpoints HTTP/1.1" 201 398 "-" "kube-scheduler/V100R001C00B012 (linux/amd64) kubernetes/bede5a0"
192.168.0.6 - - [02/May/2017:15:10:14 +0000] "GET /api/v1/namespaces/kube-system/endpoints/kube-controller-manager HTTP/1.1" 404 141 "-" "kube-controller-manager/V100R001C00B012 (linux/amd64) kubernetes/bede5a0"
192.168.0.6 - - [02/May/2017:15:10:14 +0000] "POST /api/v1/namespaces/kube-system/endpoints HTTP/1.1" 201 416 "-" "kube-controller-manager/V100R001C00B012 (linux/amd64) kubernetes/bede5a0"
192.168.0.7 - - [02/May/2017:15:10:14 +0000] "GET /api/v1/namespaces/kube-system/endpoints/kube-controller-manager HTTP/1.1" 200 416 "-" "kube-controller-manager/V100R001C00B012 (linux/amd64) ku
```



**Release note**:

```release-note
none
```
2017-05-10 21:34:43 -07:00
Kubernetes Submit Queue
b0d024fee1 Merge pull request #45569 from vmware/fix_VolumesAreAttached
Automatic merge from submit-queue (batch tested with PRs 45569, 45602, 45604, 45478, 45550)

Fixing VolumesAreAttached and DisksAreAttached functions in vSphere

**What this PR does / why we need it**:

In vSphere HA, when node fail-over happens, the node VM momentarily goes into the "not connected" state. During this time, if Kubernetes calls the VolumesAreAttached function, we return an incorrect map, with the status for each volume set to false (the detached state).

Volumes attached to the previous node must be detached before they can be attached to the new node. Kubernetes attempts to check volume attachment. When the node VM is not accessible, or for any reason we cannot determine whether a disk is attached, we were returning a map of volume path to attachment status with the status set to false. This was misinterpreted as the disks already being detached from the node, and Kubernetes marked the volumes as detached after the orphaned pod was cleaned up. This causes volumes to remain attached to the previous node, and pod creation stays stuck in the "ContainerCreating" state. Since both nodes are powered on, the volumes cannot be attached to the new node.

**Logs before fix**

```
{"log":"E0508 21:31:20.902501       1 vsphere.go:1053] disk uuid not found for [vsanDatastore] kubevols/kubernetes-dynamic-pvc-8b75170e-342d-11e7-bab5-0050568aeb0a.vmdk. err: No disk UUID fou
nd\n","stream":"stderr","time":"2017-05-08T21:31:20.902792337Z"}
{"log":"E0508 21:31:20.902552       1 vsphere.go:1041] Failed to check whether disk is attached. err: No disk UUID found\n","stream":"stderr","time":"2017-05-08T21:31:20.902842673Z"}
{"log":"I0508 21:31:20.902575       1 attacher.go:114] VolumesAreAttached: check volume \"[vsanDatastore] kubevols/kubernetes-dynamic-pvc-8b75170e-342d-11e7-bab5-0050568aeb0a.vmdk\" (specName
: \"pvc-8b75170e-342d-11e7-bab5-0050568aeb0a\") is no longer attached\n","stream":"stderr","time":"2017-05-08T21:31:20.902849717Z"}
{"log":"I0508 21:31:20.902596       1 operation_generator.go:166] VerifyVolumesAreAttached determined volume \"kubernetes.io/vsphere-volume/[vsanDatastore] kubevols/kubernetes-dynamic-pvc-8b7
5170e-342d-11e7-bab5-0050568aeb0a.vmdk\" (spec.Name: \"pvc-8b75170e-342d-11e7-bab5-0050568aeb0a\") is no longer attached to node \"node3\", therefore it was marked as detached.\n","stream":"s
tderr","time":"2017-05-08T21:31:20.902863097Z"}
```



In this change, we make sure the correct volume attachment map is returned, and if any error occurs while checking a disk's status, we return a nil map.
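
A minimal standalone sketch of that behaviour (not the actual vSphere provider code): if the attachment status cannot be determined for any volume, the function returns a nil map and the error instead of a map that claims the volumes are detached.

```go
package main

import (
	"errors"
	"fmt"
)

var errNoDiskUUID = errors.New("No disk UUID found")

// volumesAreAttached returns a map of volume path to attachment status.
// If the status of any volume cannot be determined, it returns a nil map
// so callers never mistake "unknown" for "detached".
func volumesAreAttached(volPaths []string, isAttached func(string) (bool, error)) (map[string]bool, error) {
	attached := make(map[string]bool)
	for _, p := range volPaths {
		ok, err := isAttached(p)
		if err != nil {
			return nil, fmt.Errorf("failed to check whether disk %q is attached: %v", p, err)
		}
		attached[p] = ok
	}
	return attached, nil
}

func main() {
	check := func(string) (bool, error) { return false, errNoDiskUUID }
	m, err := volumesAreAttached([]string{"[vsanDatastore] kubevols/pvc.vmdk"}, check)
	fmt.Println(m, err) // map is nil, the error is reported instead
}
```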


**Logs after fix**
```
{"log":"E0509 20:25:37.982152       1 vsphere.go:1067] Failed to check whether disk is attached. err: No disk UUID found\n","stream":"stderr","time":"2017-05-09T20:25:37.982516134Z"}
{"log":"E0509 20:25:37.982190       1 attacher.go:104] Error checking if volumes ([[vsanDatastore] kubevols/kubernetes-dynamic-pvc-c26fcae8-34f2-11e7-9303-0050568a3ac1.vmdk [vsanDatastore] kubevols/kubernetes-dynamic-pvc-c268f141-34f2-11e7-9303-0050568a3ac1.vmdk [vsanDatastore] kubevols/kubernetes-dynamic-pvc-c25d08d3-34f2-11e7-9303-0050568a3ac1.vmdk]) are attached to current node (\"node3\"). err=No disk UUID found\n","stream":"stderr","time":"2017-05-09T20:25:37.982521101Z"}
{"log":"E0509 20:25:37.982220       1 operation_generator.go:158] VolumesAreAttached failed for checking on node \"node3\" with: No disk UUID found\n","stream":"stderr","time":"2017-05-09T20:25:37.982526285Z"}
{"log":"I0509 20:25:39.157279       1 attacher.go:115] VolumesAreAttached: volume \"[vsanDatastore] kubevols/kubernetes-dynamic-pvc-c268f141-34f2-11e7-9303-0050568a3ac1.vmdk\" (specName: \"pvc-c268f141-34f2-11e7-9303-0050568a3ac1\") is attached\n","stream":"stderr","time":"2017-05-09T20:25:39.157724393Z"}
{"log":"I0509 20:25:39.157329       1 attacher.go:115] VolumesAreAttached: volume \"[vsanDatastore] kubevols/kubernetes-dynamic-pvc-c25d08d3-34f2-11e7-9303-0050568a3ac1.vmdk\" (specName: \"pvc-c25d08d3-34f2-11e7-9303-0050568a3ac1\") is attached\n","stream":"stderr","time":"2017-05-09T20:25:39.157787946Z"}
{"log":"I0509 20:25:39.157367       1 attacher.go:115] VolumesAreAttached: volume \"[vsanDatastore] kubevols/kubernetes-dynamic-pvc-c26fcae8-34f2-11e7-9303-0050568a3ac1.vmdk\" (specName: \"pvc-c26fcae8-34f2-11e7-9303-0050568a3ac1\") is attached\n","stream":"stderr","time":"2017-05-09T20:25:39.157794586Z"}
```

```
{"log":"I0509 20:25:41.267425       1 reconciler.go:173] Started DetachVolume for volume \"kubernetes.io/vsphere-volume/[vsanDatastore] kubevols/kubernetes-dynamic-pvc-c26fcae8-34f2-11e7-9303-0050568a3ac1.vmdk\" from node \"node3\"\n","stream":"stderr","time":"2017-05-09T20:25:41.267883567Z"}
{"log":"I0509 20:25:41.271836       1 operation_generator.go:694] Verified volume is safe to detach for volume \"pvc-c26fcae8-34f2-11e7-9303-0050568a3ac1\" (UniqueName: \"kubernetes.io/vsphere-volume/[vsanDatastore] kubevols/kubernetes-dynamic-pvc-c26fcae8-34f2-11e7-9303-0050568a3ac1.vmdk\") on node \"node3\" \n","stream":"stderr","time":"2017-05-09T20:25:41.272703255Z"}
{"log":"I0509 20:25:47.928021       1 operation_generator.go:341] DetachVolume.Detach succeeded for volume \"pvc-c26fcae8-34f2-11e7-9303-0050568a3ac1\" (UniqueName: \"kubernetes.io/vsphere-volume/[vsanDatastore] kubevols/kubernetes-dynamic-pvc-c26fcae8-34f2-11e7-9303-0050568a3ac1.vmdk\") on node \"node3\" \n","stream":"stderr","time":"2017-05-09T20:25:47.928348553Z"}

{"log":"I0509 20:26:12.535962       1 operation_generator.go:694] Verified volume is safe to detach for volume \"pvc-c25d08d3-34f2-11e7-9303-0050568a3ac1\" (UniqueName: \"kubernetes.io/vsphere-volume/[vsanDatastore] kubevols/kubernetes-dynamic-pvc-c25d08d3-34f2-11e7-9303-0050568a3ac1.vmdk\") on node \"node3\" \n","stream":"stderr","time":"2017-05-09T20:26:12.536055214Z"}
{"log":"I0509 20:26:14.188580       1 operation_generator.go:341] DetachVolume.Detach succeeded for volume \"pvc-c25d08d3-34f2-11e7-9303-0050568a3ac1\" (UniqueName: \"kubernetes.io/vsphere-volume/[vsanDatastore] kubevols/kubernetes-dynamic-pvc-c25d08d3-34f2-11e7-9303-0050568a3ac1.vmdk\") on node \"node3\" \n","stream":"stderr","time":"2017-05-09T20:26:14.188792677Z"}

{"log":"I0509 20:26:40.355656       1 reconciler.go:173] Started DetachVolume for volume \"kubernetes.io/vsphere-volume/[vsanDatastore] kubevols/kubernetes-dynamic-pvc-c268f141-34f2-11e7-9303-0050568a3ac1.vmdk\" from node \"node3\"\n","stream":"stderr","time":"2017-05-09T20:26:40.355922165Z"}
{"log":"I0509 20:26:40.357988       1 operation_generator.go:694] Verified volume is safe to detach for volume \"pvc-c268f141-34f2-11e7-9303-0050568a3ac1\" (UniqueName: \"kubernetes.io/vsphere-volume/[vsanDatastore] kubevols/kubernetes-dynamic-pvc-c268f141-34f2-11e7-9303-0050568a3ac1.vmdk\") on node \"node3\" \n","stream":"stderr","time":"2017-05-09T20:26:40.358177953Z"}

```




**Which issue this PR fixes**
fixes #45464, https://github.com/vmware/kubernetes/issues/116

**Special notes for your reviewer**:
Verified this change on locally built hyperkube image - v1.7.0-alpha.3.147+3c0526cb64bdf5-dirty

**Performed many failovers with large volumes (30GB) attached to the pod.**

$ kubectl describe pod
Name:		wordpress-mysql-2789807967-3xcvc
Node:		node3/172.1.87.0
Status:		Running

Powered off node3's host. Pod failed over to node2. Verified all 3 disks detached from node3 and attached to node2.

$ kubectl describe pod
Name:		wordpress-mysql-2789807967-qx0b0
Node:		node2/172.1.9.0
Status:		Running

Powered off node2's host. Pod failed over to node3. Verified all 3 disks detached from node2 and attached to node3.

$ kubectl describe pod
Name:		wordpress-mysql-2789807967-7849s
Node:		node3/172.1.87.0
Status:		Running

Powered off node3's host. Pod failed over to node1. Verified all 3 disks detached from node3 and attached to node1.

$ kubectl describe pod
Name:		wordpress-mysql-2789807967-26lp1
Node:		node1/172.1.98.0
Status:		Running

Powered off node1's host. Pod failed over to node3. Verified all 3 disks detached from node1 and attached to node3.

$ kubectl describe pods
Name:		wordpress-mysql-2789807967-4pdtl
Node:		node3/172.1.87.0
Status:		Running


Powered off node3's host. Pod failed over to node1. Verified all 3 disks detached from node3 and attached to node1.

$ kubectl describe pod
Name:		wordpress-mysql-2789807967-t375f
Node:		node1/172.1.98.0
Status:		Running

Powered off node1's host. Pod failed over to node3. Verified all 3 disks detached from node1 and attached to node3.

$ kubectl describe pods
Name:		wordpress-mysql-2789807967-pn6ps
Node:		node3/172.1.87.0
Status:		Running

Powered off node3's host. Pod failed over to node1. Verified all 3 disks detached from node3 and attached to node1.

$ kubectl describe pods
Name:		wordpress-mysql-2789807967-0wqc1
Node:		node1/172.1.98.0
Status:		Running

Powered off node1's host. Pod failed over to node3. Verified all 3 disks detached from node1 and attached to node3.

$ kubectl describe pods
Name:		wordpress-mysql-2789807967-821nc
Node:		node3/172.1.87.0
Status:		Running


**Release note**:

```release-note
NONE
```

CC:  @BaluDontu @abrarshivani @luomiao @tusharnt @pdhamdhere
2017-05-10 21:34:37 -07:00
Kubernetes Submit Queue
1f3b158a10 Merge pull request #45194 from yujuhong/rm-cri-flag
Automatic merge from submit-queue

Remove the deprecated `--enable-cri` flag

Except for rkt, CRI is the default and only integration point for
container runtimes.

```release-note
Remove the deprecated `--enable-cri` flag. CRI is now the default,
and the only way for container runtimes to integrate with the kubelet.
```
2017-05-10 20:46:24 -07:00
Kubernetes Submit Queue
b0399114fe Merge pull request #38636 from dhawal55/internal-elb
Automatic merge from submit-queue (batch tested with PRs 43067, 45586, 45590, 38636, 45599)

AWS: Remove check that forces loadBalancerSourceRanges to be 0.0.0.0/0. 

fixes #38633

Remove the check that forces loadBalancerSourceRanges to be 0.0.0.0/0. Also, remove the check that forces the service.beta.kubernetes.io/aws-load-balancer-internal annotation to be 0.0.0.0/0. Ideally, it should be a boolean, but for backward compatibility we leave it as any non-empty value.
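
A minimal sketch of the relaxed check (standalone Go; the annotation key is the one named in the PR title, the helper is hypothetical): any non-empty value of the internal-LB annotation is treated as "internal", rather than requiring the literal 0.0.0.0/0.

```go
package main

import "fmt"

const internalELBAnnotation = "service.beta.kubernetes.io/aws-load-balancer-internal"

// isInternalELB treats any non-empty annotation value as a request for an
// internal load balancer (kept string-valued for backward compatibility).
func isInternalELB(annotations map[string]string) bool {
	return annotations[internalELBAnnotation] != ""
}

func main() {
	fmt.Println(isInternalELB(map[string]string{internalELBAnnotation: "0.0.0.0/0"})) // true
	fmt.Println(isInternalELB(map[string]string{internalELBAnnotation: "true"}))      // also true now
	fmt.Println(isInternalELB(map[string]string{}))                                   // false
}
```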
2017-05-10 19:31:45 -07:00
Kubernetes Submit Queue
b040513aab Merge pull request #43067 from xilabao/dedup-in-printer
Automatic merge from submit-queue

De-duplication in printer
2017-05-10 19:08:59 -07:00
Kubernetes Submit Queue
a86392a326 Merge pull request #45333 from colemickens/cmpr-cpfix
Automatic merge from submit-queue (batch tested with PRs 45382, 45384, 44781, 45333, 45543)

azure: improve user agent string

**What this PR does / why we need it**: the UA string doesn't actually contain "kubernetes" in it

**Which issue this PR fixes** *(optional, in `fixes #<issue number>(, fixes #<issue_number>, ...)` format, will close that issue when PR gets merged)*: fixes #

**Special notes for your reviewer**: none 

**Release note**:

```release-note
NONE
```

cc: @brendandburns
2017-05-10 17:47:45 -07:00
Kubernetes Submit Queue
aacc9729f1 Merge pull request #44781 from wongma7/outervolumespec
Automatic merge from submit-queue (batch tested with PRs 45382, 45384, 44781, 45333, 45543)

Ensure desired state of world populator runs before volume reconstructor

If the kubelet's volumemanager reconstructor for the actual state of world runs before the desired state of world has been populated, the pods in the actual state of world will have some incorrect volume information: namely outerVolumeSpecName, which, if incorrect, leads to part of the issue here https://github.com/kubernetes/kubernetes/issues/43515, because WaitForVolumeAttachAndMount searches the actual state of world with the correct outerVolumeSpecName and won't find it, so it reports 'timeout waiting....', etc. forever for existing pods. The comments acknowledge that this is a known issue.

The "all sources ready" check doesn't work because the sources being ready doesn't necessarily mean the desired state of world populator has added pods from those sources. So instead let's put the "all sources ready" check in the *populator*; when the sources are ready, it will be able to populate the desired state of world and make `HasAddedPods()` return true. THEN the reconstructor may run.
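
A standalone sketch of that ordering (the names here are illustrative, not the real volumemanager types): the populator flips a flag once all ready sources have been processed, and the reconstructor polls HasAddedPods() before touching the actual state of world.

```go
package main

import (
	"fmt"
	"sync/atomic"
	"time"
)

// populator marks itself ready only after all pod sources have been seen
// and their pods added to the desired state of world.
type populator struct{ added int32 }

func (p *populator) populateFromSources(sourcesReady func() bool) {
	for !sourcesReady() {
		time.Sleep(100 * time.Millisecond)
	}
	// ... add pods from the sources to the desired state of world ...
	atomic.StoreInt32(&p.added, 1)
}

func (p *populator) HasAddedPods() bool { return atomic.LoadInt32(&p.added) == 1 }

// reconstruct runs only after the populator has added pods, so reconstructed
// volumes carry the correct outerVolumeSpecName.
func reconstruct(p *populator) {
	for !p.HasAddedPods() {
		time.Sleep(100 * time.Millisecond)
	}
	fmt.Println("reconstructing actual state of world")
}

func main() {
	p := &populator{}
	go p.populateFromSources(func() bool { return true })
	reconstruct(p)
}
```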

@jingxu97 PTAL, you wrote all of the reconstruction stuff

```release-note
NONE
```
2017-05-10 17:47:43 -07:00
Kubernetes Submit Queue
14b898d115 Merge pull request #45595 from justinsb/sts_alias_2
Automatic merge from submit-queue

Add sts alias for kubectl statefulset
2017-05-10 16:06:58 -07:00
Yu-Ju Hong
daa329c9ae Remove the deprecated --enable-cri flag
Except for rkt, CRI is the default and only integration point for
container runtimes.
2017-05-10 13:03:41 -07:00
Kubernetes Submit Queue
3ddbed969b Merge pull request #45490 from deads2k/owners-01-extensions
Automatic merge from submit-queue

add owners to new packages

Adds OWNERS files to some packages that need them.
2017-05-10 12:51:51 -07:00
Kubernetes Submit Queue
bfa18037ce Merge pull request #45404 from wojtek-t/edge_based_winuserspace_proxy
Automatic merge from submit-queue

Edge based winuserspace proxy

Last PR in the series of making kube-proxy event-based.

This is a sibling PR to https://github.com/kubernetes/kubernetes/pull/45356 that is already merged.
The second commit removes the code that is no longer used.
2017-05-10 12:51:43 -07:00
Kubernetes Submit Queue
77b2e6302c Merge pull request #45236 from verb/sharedpid-2-default
Automatic merge from submit-queue

Enable shared PID namespace by default for docker pods

**What this PR does / why we need it**: This PR enables PID namespace sharing for docker pods by default, bringing the behavior of docker in line with the other CRI runtimes when used with docker >= 1.13.1.

**Which issue this PR fixes** *(optional, in `fixes #<issue number>(, fixes #<issue_number>, ...)` format, will close that issue when PR gets merged)*: ref #1615

**Special notes for your reviewer**: cc @dchen1107 @yujuhong 

**Release note**:

```release-note
Kubernetes now shares a single PID namespace among all containers in a pod when running with docker >= 1.13.1. This means processes can now signal processes in other containers in a pod, but it also means that the `kubectl exec {pod} kill 1` pattern will cause the pod to be restarted rather than a single container.
```
2017-05-10 12:06:01 -07:00
Derek Carr
4e002eacb1 Do not fail cgroup exists checks for unknown controllers 2017-05-10 14:52:09 -04:00
divyenpatel
9f89b57b74 fix implementation of VolumesAreAttached function 2017-05-10 10:16:13 -07:00
Justin Santa Barbara
e1fdb8b027 Add sts alias for kubectl statefulset
Saves a lot of typing!
2017-05-10 09:57:36 -04:00
Wojciech Tyczynski
ce752e3fc9 Remove no-longer used code in proxy/config 2017-05-10 12:16:35 +02:00
Wojciech Tyczynski
57d35d5acb Switch winuserspace proxy to be event based for services 2017-05-10 12:14:37 +02:00
Shiyang Wang
d43f4bb3b6 refactor functions in editoptions.go to use fewer arguments 2017-05-10 16:46:04 +08:00
Kubernetes Submit Queue
3fbfafdd0a Merge pull request #45523 from colemickens/cmpr-cpfix3
Automatic merge from submit-queue

azure: load balancer: support UDP, fix multiple loadBalancerSourceRanges support, respect sessionAffinity

**What this PR does / why we need it**:

1. Adds support for UDP ports
2. Fixes support for multiple `loadBalancerSourceRanges`
3. Adds support the Service spec's `sessionAffinity`
4. Removes dead code from the Instances file

**Which issue this PR fixes** *(optional, in `fixes #<issue number>(, fixes #<issue_number>, ...)` format, will close that issue when PR gets merged)*: fixes #43683

**Special notes for your reviewer**: n/a

**Release note**:

```release-note
azure: add support for UDP ports
azure: fix support for multiple `loadBalancerSourceRanges`
azure: support the Service spec's `sessionAffinity`
```
2017-05-09 22:07:55 -07:00
Kubernetes Submit Queue
148b5da60b Merge pull request #44746 from xiangpengzhao/fix-podpreset
Automatic merge from submit-queue

Add support for PodPreset in `kubectl get` command

**What this PR does / why we need it**:
PR title

**Which issue this PR fixes** *(optional, in `fixes #<issue number>(, fixes #<issue_number>, ...)` format, will close that issue when PR gets merged)*: fixes #44736

**Special notes for your reviewer**:

**Release note**:

```release-note
NONE
```
2017-05-09 21:16:17 -07:00
Kubernetes Submit Queue
51a3413371 Merge pull request #45307 from yujuhong/mv-docker-client
Automatic merge from submit-queue (batch tested with PRs 45453, 45307, 44987)

Migrate the docker client code from dockertools to dockershim

Move docker client code from dockertools to dockershim/libdocker. This includes
DockerInterface (renamed to Interface), FakeDockerClient, etc.

This is part of #43234
2017-05-09 20:23:44 -07:00
Kubernetes Submit Queue
61593ba8b8 Merge pull request #45453 from k82cn/k8s_45220
Automatic merge from submit-queue (batch tested with PRs 45453, 45307, 44987)

Init cache with assigned non-terminated pods before scheduling

**Which issue this PR fixes** *(optional, in `fixes #<issue number>(, fixes #<issue_number>, ...)` format, will close that issue when PR gets merged)*: fixes #45220

**Release note**:

```release-note
The fix makes the scheduling goroutine wait for the cache (e.g. Pods) to be synced.
```
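
A standalone sketch of the idea (the real scheduler uses client-go's cache.WaitForCacheSync; this is a self-contained stand-in): scheduling starts only after every informer cache reports synced.

```go
package main

import (
	"fmt"
	"time"
)

// waitForCacheSync blocks until every HasSynced func returns true or the stop
// channel closes; the scheduling loop is started only afterwards.
func waitForCacheSync(stop <-chan struct{}, synced ...func() bool) bool {
	ticker := time.NewTicker(100 * time.Millisecond)
	defer ticker.Stop()
	for {
		select {
		case <-stop:
			return false
		case <-ticker.C:
			all := true
			for _, s := range synced {
				if !s() {
					all = false
					break
				}
			}
			if all {
				return true
			}
		}
	}
}

func main() {
	podsSynced := func() bool { return true } // stand-in for an informer's HasSynced
	if waitForCacheSync(make(chan struct{}), podsSynced) {
		fmt.Println("caches synced, starting scheduling loop")
	}
}
```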
2017-05-09 20:23:37 -07:00
Kubernetes Submit Queue
7c3f8c9bcf Merge pull request #45181 from vmware/NodeAddressesIPV6IssueNew
Automatic merge from submit-queue

Filter out IPV6 addresses from NodeAddresses() returned by vSphere

The vSphere CP returns both IPV6 and IPV4 addresses for a Node as part of the NodeAddresses() implementation. However, the Kubelet fails due to a duplicate api.NodeAddress value when the node has an IPV6 address associated with it. This issue is tracked in #42690. The following are observed:

- When we enabled the logs and checked the addresses sent by the vSphere CP to the Kubelet, we didn't see any duplicate addresses at all.
- Also, kubelet_node_status doesn't receive any duplicate addresses from the cloud provider.

However, when we filter out the IPV6 addresses and only return IPV4 addresses to the Kubelet, it works perfectly fine. 

Even though the Kubelet receives non-duplicate node addresses, it still errors out with duplicate node addresses. It might be an issue with how the kubelet propagates these addresses to the API server, or the API server may be unable to handle IPV6 addresses.
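
A minimal standalone sketch of the work-around: keep only the addresses that parse as IPv4.

```go
package main

import (
	"fmt"
	"net"
)

// filterIPv4 drops every address that is not an IPv4 address, mirroring the
// work-around of returning only IPv4 node addresses to the kubelet.
func filterIPv4(addrs []string) []string {
	var out []string
	for _, a := range addrs {
		if ip := net.ParseIP(a); ip != nil && ip.To4() != nil {
			out = append(out, a)
		}
	}
	return out
}

func main() {
	fmt.Println(filterIPv4([]string{"10.160.25.12", "fe80::250:56ff:fe8a:1", "10.160.25.13"}))
	// [10.160.25.12 10.160.25.13]
}
```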

@divyenpatel @abrarshivani @pdhamdhere @tusharnt

**Release note**:

```release-note
None
```
2017-05-09 18:16:03 -07:00
xiangpengzhao
baafbf406e Add support for PodPreset in kubectl get command 2017-05-10 08:59:22 +08:00
xilabao
697efd1baf De-duplication in printer 2017-05-10 08:45:20 +08:00
Matthew Wong
bbe82a2688 Ensure desired state of world populator runs before volume reconstructor 2017-05-09 18:25:59 -04:00
Kubernetes Submit Queue
76889118d7 Merge pull request #45280 from JulienBalestra/run-pod-inside-unique-netns
Automatic merge from submit-queue

rkt: Generate a new Network Namespace for each Pod

**What this PR does / why we need it**:

This PR concerns the Kubelet with the Container runtime rkt.
Currently, when a Pod stops and the kubelet restarts it, the Pod will use the **same network namespace**, based on its Pod ID.

When the Garbage Collection is triggered, it deletes all the old resources and the current network namespace.

The Pod and all containers inside it lose the _eth0_ interface.
I explained in more detail in #45149 how to reproduce this behavior.

This PR generates a new unique network namespace name for each new/restarting Pod.
The Garbage Collection retrieves the correct network namespace and removes it safely.
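
A minimal standalone sketch of the naming scheme (the prefix and format are assumptions, not the exact rkt code): each pod start derives a fresh namespace name from the pod UID plus a random suffix, so a restarted pod never reuses the namespace that GC is about to remove.

```go
package main

import (
	"crypto/rand"
	"fmt"
)

// newNetnsName returns a unique network namespace name for a pod start.
// Unlike a name derived only from the pod UID, a restarted pod never reuses
// (and GC never deletes) the namespace of the running instance.
func newNetnsName(podUID string) string {
	suffix := make([]byte, 4)
	if _, err := rand.Read(suffix); err != nil {
		panic(err)
	}
	return fmt.Sprintf("nsk8s_%s_%x", podUID, suffix)
}

func main() {
	fmt.Println(newNetnsName("b0c5ad68-34a1-11e7-9f08-0050568a3ac1"))
	fmt.Println(newNetnsName("b0c5ad68-34a1-11e7-9f08-0050568a3ac1")) // different name each time
}
```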

**Which issue this PR fixes** : 

fix #45149 

**Special notes for your reviewer**:

Following @yifan-gu's guidelines, so maybe expecting him to do the final review.

**Release note**:

`NONE`
2017-05-09 15:07:34 -07:00
Kubernetes Submit Queue
aee07e9464 Merge pull request #45446 from zdj6373/cni
Automatic merge from submit-queue

cni Log changes

Modified an error log message.
2017-05-09 14:23:32 -07:00
Dhawal Patel
0e57b912a6 Update comment on ServiceAnnotationLoadBalancerInternal 2017-05-09 13:41:15 -07:00
Kubernetes Submit Queue
b60d322c27 Merge pull request #44991 from aaronlevy/cns
Automatic merge from submit-queue

Skip inspecting pod network if unknown namespace

**What this PR does / why we need it**:

If we fail to determine the network namespace of a container, we still try to inspect the state, even though there is no way for it to succeed. This leads to errors like:

> NetworkPlugin cni failed on the status hook for pod "X": Unexpected command output nsenter: cannot open : No such file or directory

Instead, if we cannot determine the network namespace, we should just exit with a (hopefully) clearer error message.

I left the wording as assuming a terminated pod, based on:
https://github.com/kubernetes/kubernetes/blob/master/pkg/kubelet/dockershim/helpers.go#L208-L211

ref: 
https://github.com/kubernetes-incubator/bootkube/issues/475
https://github.com/coreos/coreos-kubernetes/issues/856
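
A standalone sketch of the early exit (function and parameter names are illustrative): if the container's network namespace path cannot be determined, return a descriptive error instead of running nsenter against an empty path.

```go
package main

import (
	"errors"
	"fmt"
)

// podIP looks up the pod's IP inside its network namespace. If the namespace
// is unknown (e.g. the infra container already exited), it fails fast with a
// clear error rather than letting nsenter produce a confusing one.
func podIP(podName, netnsPath string, inspect func(netns string) (string, error)) (string, error) {
	if netnsPath == "" {
		return "", errors.New("cannot find the network namespace, skipping pod network status for pod " + podName)
	}
	return inspect(netnsPath)
}

func main() {
	_, err := podIP("X", "", nil)
	fmt.Println(err)
}
```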
2017-05-09 13:36:56 -07:00
Kubernetes Submit Queue
f8f9d7db93 Merge pull request #45304 from deads2k/controller-03-ns-discovery
Automatic merge from submit-queue (batch tested with PRs 45304, 45006, 45527)

increase the QPS for namespace controller

The namespace controller is really chatty, especially to discovery, since that involves two requests for every available API version. This bumps the QPS and burst on the namespace controller to avoid getting stuck waiting.
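
A standalone sketch of the knob being bumped; the struct mirrors the QPS/Burst fields of a Kubernetes client config, and the multiplier is illustrative rather than the exact value from the PR.

```go
package main

import "fmt"

// clientConfig mirrors the rate-limiting fields of a Kubernetes client config.
type clientConfig struct {
	QPS   float32
	Burst int
}

// forNamespaceController returns a copy of the base config with the limits
// raised, so discovery-heavy namespace deletion is not throttled.
func forNamespaceController(base clientConfig) clientConfig {
	base.QPS *= 10   // illustrative multiplier, not the exact value from the PR
	base.Burst *= 10 // likewise
	return base
}

func main() {
	base := clientConfig{QPS: 5, Burst: 10}
	fmt.Printf("%+v\n", forNamespaceController(base))
}
```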
2017-05-09 12:04:41 -07:00
Klaus Ma
7bf698a2c8 generated code. 2017-05-10 01:50:38 +08:00
Kubernetes Submit Queue
202a9f8445 Merge pull request #42317 from NickrenREN/attach-detach-error-info-print
Automatic merge from submit-queue

add and clear err message about RemoveVolumeFromReportAsAttached()

**Release note**:

```release-note
NONE
```
2017-05-09 10:44:32 -07:00
Kubernetes Submit Queue
fc28762671 Merge pull request #45448 from zhangxiaoyu-zidif/cleancode-nfs-return-err
Automatic merge from submit-queue (batch tested with PRs 44798, 45537, 45448, 45432)

nfs.go: cleancode err

**What this PR does / why we need it**:
The modification makes the code clean, simple, and easy to inspect.

**Release note**:

```release-note
NONE
```
2017-05-09 08:29:37 -07:00
Kubernetes Submit Queue
49626c975b Merge pull request #44798 from zetaab/master
Automatic merge from submit-queue

Statefulsets for cinder: allow multi-AZ deployments, spread pods across zones

**What this PR does / why we need it**: Currently, if we do not specify an availability zone in the cinder storage class, the cinder volume is provisioned to a zone called nova. However, as mentioned in the issue, we have a situation where we want to spread a statefulset across 3 different zones. Currently this is not possible with statefulsets and the cinder storage class. In this new solution, if we leave the zone empty, the algorithm chooses the zone for the cinder drive in a similar style to the aws and gce storage class solutions.
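
For illustration, a standalone sketch of that kind of selection, assuming the same approach as the AWS/GCE provisioners of spreading volumes by hashing the claim name over the sorted zone list:

```go
package main

import (
	"fmt"
	"hash/fnv"
	"sort"
)

// chooseZoneForVolume spreads volumes (and therefore the statefulset pods
// bound to them) across zones by hashing the claim name over the sorted
// zone list.
func chooseZoneForVolume(zones []string, pvcName string) string {
	sort.Strings(zones)
	h := fnv.New32a()
	h.Write([]byte(pvcName))
	return zones[int(h.Sum32())%len(zones)]
}

func main() {
	zones := []string{"zone-1", "zone-2", "zone-3"}
	for _, pvc := range []string{"storage-mysql-0", "storage-mysql-1", "storage-mysql-2"} {
		fmt.Println(pvc, "->", chooseZoneForVolume(zones, pvc))
	}
}
```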

**Which issue this PR fixes** fixes #44735

**Special notes for your reviewer**:

example:

```
kind: StorageClass
apiVersion: storage.k8s.io/v1beta1
metadata:
  name: all
provisioner: kubernetes.io/cinder
---
apiVersion: v1
kind: Service
metadata:
  annotations:
    service.alpha.kubernetes.io/tolerate-unready-endpoints: "true"
  name: galera
  labels:
    app: mysql
spec:
  ports:
  - port: 3306
    name: mysql
  clusterIP: None
  selector:
    app: mysql
---
apiVersion: apps/v1beta1
kind: StatefulSet
metadata:
  name: mysql
spec:
  serviceName: "galera"
  replicas: 3
  template:
    metadata:
      labels:
        app: mysql
      annotations:
        pod.alpha.kubernetes.io/initialized: "true"
    spec:
      containers:
      - name: mysql
        image: adfinissygroup/k8s-mariadb-galera-centos:v002
        imagePullPolicy: Always
        ports:
        - containerPort: 3306
          name: mysql
        - containerPort: 4444
          name: sst
        - containerPort: 4567
          name: replication
        - containerPort: 4568
          name: ist
        volumeMounts:
        - name: storage
          mountPath: /data
        readinessProbe:
          exec:
            command:
            - /usr/share/container-scripts/mysql/readiness-probe.sh
          initialDelaySeconds: 15
          timeoutSeconds: 5
        env:
          - name: POD_NAMESPACE
            valueFrom:
              fieldRef:
                apiVersion: v1
                fieldPath: metadata.namespace
  volumeClaimTemplates:
  - metadata:
      name: storage
      annotations:
        volume.beta.kubernetes.io/storage-class: all
    spec:
      accessModes: [ "ReadWriteOnce" ]
      resources:
        requests:
          storage: 12Gi
```

If this example is deployed, it will automatically create one replica per AZ. This helps us a lot in making HA databases.

The current storage class for cinder is not perfect in the case of statefulsets. Let's assume that the cinder storage class is defined to be in a zone called nova, but because labels are not added to the PV, pods can be started in any zone. The problem is that, at least in our OpenStack, it is not possible to use a cinder drive located in zone x from zone y. However, should we have the possibility to choose between cross-zone cinder mounts or not? IMO it is not a good way of doing things to mount a volume from a zone other than the one where the pod is located (it means more network traffic between zones). What do you think? The current new solution does not allow that anymore (should we have the possibility to allow it? It would mean removing the labels from the PV).

There might be some things that still need to be fixed in this release and I need help with that. Some parts of the code are not perfect.

Issues I am thinking about (I need some help with these):
1) Can everybody in OpenStack see what AZ their servers are in? Can there be an access policy that does not show that? If the AZ is not found in the server specs, I have no idea how the code behaves.
2) In the GetAllZones() function, is it really needed to make a new service client using openstack.NewComputeV2, or could I somehow use the existing one?
3) This fetches all servers from an OpenStack tenant (project). However, in some cases Kubernetes may be deployed only to a specific zone. If the kube servers are located, for instance, in zone 1, and there are other servers in the same tenant in zone 2, there might be a use case where a cinder drive is provisioned to zone-2 but the pod cannot start, because Kubernetes does not have any nodes in zone-2. Could we have a better way to fetch the zones of the Kubernetes nodes? Currently that information is not added to Kubernetes node labels automatically in OpenStack (which it should be, I think). I have added those labels manually to the nodes. If that zone information is not added to the nodes, the new solution does not start stateful pods at all, because it cannot target them.


cc @rootfs @anguslees @jsafrane 

```release-note
Default behaviour of the cinder storage class is changed. If availability is not specified, the zone is chosen by the algorithm. This makes it possible to spread stateful pods across many zones.
```
2017-05-09 08:10:44 -07:00
Kubernetes Submit Queue
49e5435529 Merge pull request #45403 from sttts/sttts-tri-state-watch-capacity
Automatic merge from submit-queue

apiserver: injectable default watch cache size

This makes it possible to override the default watch capacity in the REST options getter. Before this PR the default was written into the storage struct explicitly, and if it was the default, the REST options getter didn't know. With this PR the default is applied late and can be injected from the outside.
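
A standalone sketch of applying the default late (types and names are illustrative): the storage options leave the capacity unset, and the REST options getter fills in the default only when nothing was injected.

```go
package main

import "fmt"

type storageOptions struct {
	// WatchCacheSize stays nil until someone decides, so the options getter
	// can still tell "default" apart from "explicitly configured".
	WatchCacheSize *int
}

// applyDefaultWatchCacheSize injects the default only when no override is set.
func applyDefaultWatchCacheSize(opts *storageOptions, def int) {
	if opts.WatchCacheSize == nil {
		opts.WatchCacheSize = &def
	}
}

func main() {
	opts := &storageOptions{}
	applyDefaultWatchCacheSize(opts, 100)
	fmt.Println(*opts.WatchCacheSize) // 100, unless injected earlier
}
```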
2017-05-09 07:27:35 -07:00
Kubernetes Submit Queue
02d75cb453 Merge pull request #45481 from CaoShuFeng/xtables/lock
Automatic merge from submit-queue

Remove leaked tmp file in unit tests

Some unit tests leave a temp file in the workspace:
pkg/util/iptables/xtables.lock
This patch removes that file.
@dcbw 
**Release note**:

```release-note
NONE
```
2017-05-09 06:40:31 -07:00
Kubernetes Submit Queue
d602ea69dc Merge pull request #45295 from rootfs/vol-owner
Automatic merge from submit-queue

add rootfs gnufied and childsb to volume approver

**What this PR does / why we need it**:
add me, @gnufied, and @childsb as volume approvers
**Which issue this PR fixes** *(optional, in `fixes #<issue number>(, fixes #<issue_number>, ...)` format, will close that issue when PR gets merged)*: fixes #

**Special notes for your reviewer**:

**Release note**:

```release-note
NONE
```
2017-05-09 04:13:00 -07:00
JulienBalestra
7a2e0e24f7 Generate a new Network Namespace for each Pod. 2017-05-09 09:59:00 +02:00
Cole Mickens
3fc0c05d83 azure: instances: remove dead code 2017-05-09 00:00:12 -07:00
Cole Mickens
c349d36da3 azure: loadbalancer: fix sourceAddrPrefix support
Fixes support for multiple instances of loadBalancerSourceRanges.
Previously, the names of the rules for each address range conflicted
causing only one to be applied. Now each gets a unique name.
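
A standalone sketch of the renaming (the name format is an assumption, not the exact Azure provider code): the rule name now includes the sanitized source range as well as the port, so multiple ranges no longer collapse into a single rule.

```go
package main

import (
	"fmt"
	"strings"
)

// securityRuleName builds a unique load-balancer/NSG rule name per service
// port and source range; previously the range was omitted, so rules for
// different ranges overwrote each other.
func securityRuleName(svcName string, port int, sourceRange string) string {
	sanitized := strings.NewReplacer("/", "_", ".", "_", ":", "_").Replace(sourceRange)
	return fmt.Sprintf("%s-%d-%s", svcName, port, sanitized)
}

func main() {
	for _, r := range []string{"10.0.0.0/8", "192.168.0.0/24"} {
		fmt.Println(securityRuleName("myservice", 443, r))
	}
	// myservice-443-10_0_0_0_8
	// myservice-443-192_168_0_0_24
}
```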
2017-05-08 23:58:29 -07:00
Cole Mickens
355c2be7a0 azure: loadbalancer: support UDP svc ports+rules 2017-05-08 23:58:25 -07:00
Kubernetes Submit Queue
20fa30e4b5 Merge pull request #45330 from NickrenREN/openstack-backoff
Automatic merge from submit-queue (batch tested with PRs 45018, 45330)

Add exponential backoff to openstack loadbalancer functions

Using  exponential backoff to lower openstack load and reduce API call throttling
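
A standalone sketch of wrapping a cloud API call in exponential backoff; the real code uses wait.ExponentialBackoff from k8s.io/apimachinery, while this self-contained equivalent just shows the retry shape.

```go
package main

import (
	"errors"
	"fmt"
	"time"
)

// exponentialBackoff retries fn with a growing delay until it succeeds or the
// step budget is exhausted, which keeps bursts of load-balancer calls from
// hammering the OpenStack API.
func exponentialBackoff(initial time.Duration, factor float64, steps int, fn func() (bool, error)) error {
	delay := initial
	for i := 0; i < steps; i++ {
		done, err := fn()
		if err != nil {
			return err
		}
		if done {
			return nil
		}
		time.Sleep(delay)
		delay = time.Duration(float64(delay) * factor)
	}
	return errors.New("timed out waiting for the operation")
}

func main() {
	attempts := 0
	err := exponentialBackoff(100*time.Millisecond, 2.0, 5, func() (bool, error) {
		attempts++
		return attempts >= 3, nil // pretend the load balancer becomes ACTIVE on the 3rd poll
	})
	fmt.Println(attempts, err)
}
```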


**Release note**:

```release-note
NONE
```
2017-05-08 23:00:38 -07:00
Kubernetes Submit Queue
f036725a0e Merge pull request #45018 from ravisantoshgudimetla/cleanup_qos#39148
Automatic merge from submit-queue (batch tested with PRs 45018, 45330)

Clean up for qos.go

**What this PR does / why we need it**:
Seems we are not using any of those functions. 

**Which issue this PR fixes** *(optional, in `fixes #<issue number>(, fixes #<issue_number>, ...)` format, will close that issue when PR gets merged)*: fixes #39148



**Release note**:

```release-note
A small clean up to remove unnecessary functions.
```
2017-05-08 23:00:36 -07:00
Cole Mickens
8b50b83067 azure: loadbalancer: respect svc sessionaffinity
If the Service spec sets sessionAffinity, reflect that in the
configuration specified for the Azure load balancer.
2017-05-08 20:08:05 -07:00
Balu Dontu
d05b279d9b Filter out IPV6 addresses from NodeAddresses() returned by vSphere 2017-05-08 18:23:06 -07:00