Automatic merge from submit-queue (batch tested with PRs 46489, 46281, 46463, 46114, 43946)
AWS: consider instances of all states in DisksAreAttached, not just "running"
Require callers of `getInstancesByNodeNames(Cached)` to specify the states they want to filter instances by, if any. `DisksAreAttached` cannot consider only "running" instances because of the following attach/detach bug we discovered:
1. Node A stops (or reboots) and stays down for x amount of time
2. Kube reschedules all pods to different nodes; the ones using EBS volumes cannot run because their volumes are still attached to node A
3. The "verify volumes are attached" check happens while node A is down
4. Since the AWS EBS bulk verify filters by running nodes, it assumes the volumes attached to node A are detached and removes them all from ASW
5. Node A comes back; its volumes are still attached to it, but the attach/detach controller has removed them all from ASW and so will never detach them, even though they are no longer desired on this node and are in fact desired elsewhere
6. Pods cannot run because their volumes are still attached to node A
So the idea here is to remove the wrong assumption that callers of `getInstancesByNodeNames(Cached)` only want "running" nodes.
I hope this isn't too confusing, open to alternative ways of fixing the bug + making the code nice.
ping @gnufied @kubernetes/sig-storage-bugs
```release-note
Fix AWS EBS volumes not getting detached from a node if the routine to verify volumes are attached runs while the node is down
```
Automatic merge from submit-queue
AWS: support node port health check
**What this PR does / why we need it**:
If a custom health check is set via the beta annotation on a service, it should be used for the ELB health check. This patch adds support for that.
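A minimal Service sketch, assuming the beta annotations `service.beta.kubernetes.io/external-traffic` and `service.beta.kubernetes.io/healthcheck-nodeport` are used to request the custom health check port (the port value is purely illustrative):
```
kind: Service
apiVersion: v1
metadata:
  name: my-app
  annotations:
    service.beta.kubernetes.io/external-traffic: OnlyLocal
    # Illustrative node port for the custom health check; with this PR the ELB
    # health check targets this port instead of the default.
    service.beta.kubernetes.io/healthcheck-nodeport: "32123"
spec:
  type: LoadBalancer
  selector:
    app: my-app
  ports:
  - port: 80
    targetPort: 8080
```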
**Which issue this PR fixes** *(optional, in `fixes #<issue number>(, fixes #<issue_number>, ...)` format, will close that issue when PR gets merged)*: fixes #
**Special notes for your reviewer**:
Let me know if any tests need to be added.
**Release note**:
```release-note
```
Automatic merge from submit-queue (batch tested with PRs 46252, 45524, 46236, 46277, 46522)
Make GCE load-balancers create health checks for nodes
From #14661. Proposal on kubernetes/community#552. Fixes #46313.
Bullet points:
- Create a nodes health check and firewall (for health checking) for non-OnlyLocal services.
- Create a local traffic health check and firewall (for health checking) for OnlyLocal services.
- Version skew:
  - Don't create the nodes health check if any node has version < 1.7.0.
  - Don't backfill the nodes health check on existing LBs unless users explicitly trigger it.
**Release note**:
```release-note
GCE Cloud Provider: Newly created LoadBalancer type Services now have health checks for nodes by default.
An existing LoadBalancer will have a health check attached to it when:
- Service.Spec.Type is changed away from LoadBalancer and then flipped back.
- Any effective change is made to Service.Spec.ExternalTrafficPolicy.
```
Automatic merge from submit-queue (batch tested with PRs 46383, 45645, 45923, 44884, 46294)
Created unit tests for GCE cloud provider storage interface.
- Currently covers CreateDisk, DeleteDisk, and GetAutoLabelsForPD.
- Created ServiceManager interface in gce.go to facilitate mocking in tests.
**What this PR does / why we need it**:
Increasing test coverage for GCE Persistent Disk.
**Which issue this PR fixes** *(optional, in `fixes #<issue number>(, fixes #<issue_number>, ...)` format, will close that issue when PR gets merged)*: fixes #44573
**Release note**:
```release-note
NONE
```
Automatic merge from submit-queue (batch tested with PRs 45518, 46127, 46146, 45932, 45003)
aws: Support for ELB tagging by users
This PR adds support for tagging AWS ELBs using information provided in an annotation as a comma-separated list of key-value pairs.
Closes https://github.com/kubernetes/community/pull/404
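A minimal sketch of how such a Service might look, assuming the annotation key `service.beta.kubernetes.io/aws-load-balancer-additional-resource-tags` (the key is not named in this description) and illustrative tag values:
```
kind: Service
apiVersion: v1
metadata:
  name: my-elb-service
  annotations:
    # Assumed annotation key; the value is the comma-separated key=value list
    # of AWS tags to apply to the ELB.
    service.beta.kubernetes.io/aws-load-balancer-additional-resource-tags: "Environment=prod,Team=infra"
spec:
  type: LoadBalancer
  selector:
    app: my-app
  ports:
  - port: 443
    targetPort: 8443
```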
Automatic merge from submit-queue (batch tested with PRs 45573, 46354, 46376, 46162, 46366)
GCE - Retrieve subnetwork name/url from gce.conf
**What this PR does / why we need it**:
Features like ILB require specifying the subnetwork if the network is type manual.
**Notes:**
The network URL can be [constructed](68e7e18698/pkg/cloudprovider/providers/gce/gce.go (L211-L217)) by fetching instance metadata; however, the subnetwork is not available from instance metadata. Users must specify the subnetwork name/URL through gce.conf.
Although multiple subnets can exist in the same region for a network, the cloud provider will only use one subnet url for creating LBs.
**Release note**:
```release-note
NONE
```
Automatic merge from submit-queue
Added deprecation notice and guidance for cloud providers.
**What this PR does / why we need it**:
Adding context/background and general guidance for incoming cloud providers.
**Which issue this PR fixes**
**Special notes for your reviewer**:
Generalized message per discussion with @bgrant0607
Automatic merge from submit-queue (batch tested with PRs 38505, 41785, 46315)
Fix provisioned GCE PD not being reused if already exists
@jsafrane PTAL
This is another attempt at https://github.com/kubernetes/kubernetes/pull/38702 . We have observed that `gce.service.Disks.Insert(gce.projectID, zone, diskToCreate).Do()` instantly gets an error response of alreadyExists, so we must check for it.
I am not sure if we still need to check for the error after `waitForZoneOp`; I think that if there is an alreadyExists error, the `Do()` above will always respond with it instantly. But because I'm not sure, and to be safe, I will leave it.
Automatic merge from submit-queue (batch tested with PRs 38505, 41785, 46315)
Only retrieve relevant volumes
**What this PR does / why we need it**:
Improves performance for Cinder volume attach/detach calls.
Currently, when Cinder volumes are attached or detached, functions try to retrieve details about the volume from the Nova API. Because some callers only have the volume name, not its UUID, they use the list function in gophercloud and iterate over all volumes to find a match. This incurs severe performance problems on OpenStack projects with lots of volumes (sometimes thousands), since a new request must be sent whenever the current page does not contain a match. A better way of doing this is to use the `?name=XXX` query parameter to refine the results.
**Which issue this PR fixes**:
https://github.com/kubernetes/kubernetes/issues/26404
**Special notes for your reviewer**:
There were 2 ways of addressing this problem:
1. Use the `name` query parameter
2. Instead of using the list function, switch to using volume UUIDs and use the GET function instead. You'd need to change the signature of a few functions though, such as [`DeleteVolume`](https://github.com/kubernetes/kubernetes/blob/master/pkg/volume/cinder/cinder.go#L49), so I'm not sure how backwards compatible that is.
Since option 1 effectively does the same as option 2, I went with it because it preserves backwards compatibility.
One assumption that is made is that the `volumeName` being retrieved matches exactly the name of the volume in Cinder. I'm not sure how accurate that is, but I see no reason why cloud providers would want to append/prefix things arbitrarily.
**Release note**:
```release-note
Improves performance of Cinder volume attach/detach operations
```
An admin wants to specify in which AWS availability zone(s) users may create persistent volumes using dynamic provisioning.
To that end, the admin can now configure a comma-separated list of zones in the StorageClass object. Dynamically created PVs for PVCs that use the StorageClass are created in one of the configured zones.
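A minimal StorageClass sketch under the assumption that the comma-separated zone list is passed via a `zones` parameter (the parameter name is an assumption; the zone names are illustrative):
```
kind: StorageClass
apiVersion: storage.k8s.io/v1beta1
metadata:
  name: slow
provisioner: kubernetes.io/aws-ebs
parameters:
  type: gp2
  # Assumed parameter name; comma-separated list of allowed availability zones.
  zones: us-east-1a, us-east-1b, us-east-1c
```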
Automatic merge from submit-queue (batch tested with PRs 45891, 46147)
Watching ClusterId from within GCE cloud provider
**What this PR does / why we need it**:
Adds the ability for the GCE cloud provider to watch a config map for `clusterId` and `providerId`.
WIP - still needs more testing
cc @MrHohn @csbell @madhusudancs @thockin @bowei @nikhiljindal
**Release note**:
```release-note
NONE
```
Automatic merge from submit-queue
vSphere storage policy support for dynamic volume provisioning
Until now, the vSphere cloud provider has supported configuring persistent volumes with VSAN storage capabilities (kubernetes#42974). Right now this only works with VSAN.
Also there might be other use cases:
- The user might need a way to configure a policy on other datastores like VMFS, NFS etc.
- Use Storage IO control, VMCrypt policies for a persistent disk.
We can address the above 2 use cases by using existing storage policies that are already created on vCenter using the Storage Policy Based Management (SPBM) service. The user will specify the SPBM policy ID as part of dynamic provisioning:
- The resultant persistent volume will have the policy configured with it.
- The persistent volume will be created on a compatible datastore that satisfies the storage policy requirements.
- If there are multiple compatible datastores, the datastore with the maximum free space would be chosen by default.
- If the user specifies a datastore along with the storage policy ID, the volume will be created on this datastore if it is compatible. If the user-specified datastore is incompatible, the reasons for the incompatibility are reported back to the user as an error.
- Also, the user will be able to see the association of the persistent volume object with the policy on vCenter once the volume is attached to the node.
For instance, in the example below, the volume will be created on a compatible datastore with the maximum free space that satisfies the "Gold" storage policy requirements.
```
kind: StorageClass
apiVersion: storage.k8s.io/v1beta1
metadata:
  name: fast
provisioner: kubernetes.io/vsphere-volume
parameters:
  diskformat: zeroedthick
  storagepolicyName: Gold
```
For instance, in the example below, the vSphere CP checks whether "VSANDatastore" is compatible with the "Gold" storage policy requirements. If yes, the volume will be provisioned on "VSANDatastore"; otherwise it will report that "VSANDatastore" is not compatible, with the exact reason for the failure.
```
kind: StorageClass
apiVersion: storage.k8s.io/v1beta1
metadata:
  name: fast
provisioner: kubernetes.io/vsphere-volume
parameters:
  diskformat: zeroedthick
  storagepolicyName: Gold
  datastore: VSANDatastore
```
As a part of this change, 4 commits have been added to this PR.
1. Vendor changes for vmware/govmomi
2. Changes to VsphereVirtualDiskVolumeSource in the Kubernetes API: added 2 additional fields, StoragePolicyName and StoragePolicyID.
3. Swagger and Open spec API changes.
4. vSphere Cloud Provider changes to implement the storage policy support.
**Release note**:
```release-note
vSphere cloud provider: vSphere storage policy support for dynamic volume provisioning
```
Automatic merge from submit-queue
Adapt load balancer deleting/updating when using the OpenStack cloud provider on OpenStack Liberty
**What this PR does / why we need it**:
Perform an extra verification on the returned listeners and pools, because the gophercloud query doesn't filter the results by loadbalancerID / listenerID respectively when using **openstack/liberty**.
**Which issue this PR fixes** *(optional, in `fixes #<issue number>(, fixes #<issue_number>, ...)` format, will close that issue when PR gets merged)*: fixes #33759
**Special notes for your reviewer**:
#33759 supposedly has a pull request that fixes this problem, but in release 1.5 the load balancer code doesn't use that patched code.
**Release note**:
```release-note
NONE
```
Automatic merge from submit-queue
Initialize cloud providers with a K8s clientBuilder
**What this PR does / why we need it**:
This PR gives each cloud provider the ability to create Kubernetes clients. Either the full-access or the service-account client builder is passed from the controller manager. Cloud providers may need to retrieve information from the cluster that isn't provided through defined interfaces, and this seems preferable to adding parameters.
Please leave your thoughts/comments.
**Release note**:
```release-note
NONE
```
Automatic merge from submit-queue
Same internal and external ip for vSphere Cloud Provider
Currently, the vSphere Cloud Provider reports container IP addresses as the node's internal IP. This PR modifies the vSphere Cloud Provider to report the same IP address, as provided by the VMware infrastructure, for both the internal and external node addresses.
cc @pdhamdhere @tusharnt @BaluDontu @divyenpatel @luomiao
Automatic merge from submit-queue
Add approvers to vsphere cloudprovider
This PR adds approvers for vSphere Cloud provider.
cc @pdhamdhere @tusharnt @BaluDontu @divyenpatel @luomiao
Automatic merge from submit-queue
Use beta GCP API instead of alpha in CloudCIDR controller
The feature we are using has been promoted to beta.
```release-note
NONE
```
Automatic merge from submit-queue (batch tested with PRs 45374, 44537, 45739, 44474, 45888)
Fix attach volume to instance repeatedly
1. When a volume's status is 'attaching', the controller-manager will try to attach it again and return an error, so it is necessary to check the volume's status before attaching/detaching the volume.
2. When a volume's status is 'attaching', its attachments will be None, so the controller-manager can't get the device path and emits some failure events. This is normal, so don't return an error when attachments is None.
Fix bug: #44536
Automatic merge from submit-queue (batch tested with PRs 45623, 45241, 45460, 41162)
Promotes Source IP preservation for Virtual IPs from Beta to GA
Fixes #33625. Feature issue: kubernetes/features#27.
Bullet points:
- Declare 2 fields (ExternalTraffic and HealthCheckNodePort) that mirror the ESIPP annotations.
- ESIPP alpha annotations will be ignored.
- Existing ESIPP beta annotations will still be fully supported.
- Allow promoting the beta annotations to first-class fields, or the reverse.
- Disallow setting invalid ExternalTraffic and HealthCheckNodePort on services. Default the ExternalTraffic field to "Global" for NodePort or LoadBalancer type services if it is not set.
**Release note**:
```release-note
Promotes Source IP preservation for Virtual IPs to GA.
Two API fields are defined correspondingly:
- Service.Spec.ExternalTrafficPolicy <- 'service.beta.kubernetes.io/external-traffic' annotation.
- Service.Spec.HealthCheckNodePort <- 'service.beta.kubernetes.io/healthcheck-nodeport' annotation.
```
When a volume's status is 'attaching', its attachments will be None, so the controller-manager can't get the device path and emits some failure events. This is normal; let's fix it.
Automatic merge from submit-queue (batch tested with PRs 45569, 45602, 45604, 45478, 45550)
Fixing VolumesAreAttached and DisksAreAttached functions in vSphere
**What this PR does / why we need it**:
In vSphere HA, when a node failover happens, the node VM momentarily goes into a "not connected" state. During this time, if Kubernetes calls the VolumesAreAttached function, we return an incorrect map, with the status for each volume set to false - the detached state.
Volumes attached to the previous node need to be detached before they can attach to the new node. Kubernetes attempts to check the volume attachment. When the node VM is not accessible, or for any reason we cannot determine whether a disk is attached, we were returning a map of volume paths with their attachment status set to false. This was misinterpreted as the disks already being detached from the node, and Kubernetes marked the volumes as detached after the orphaned pod was cleaned up. This causes the volumes to remain attached to the previous node, and pod creation always remains in the "ContainerCreating" state. Since both nodes are powered on, the volumes cannot be attached to the new node.
**Logs before fix**
```
{"log":"E0508 21:31:20.902501 1 vsphere.go:1053] disk uuid not found for [vsanDatastore] kubevols/kubernetes-dynamic-pvc-8b75170e-342d-11e7-bab5-0050568aeb0a.vmdk. err: No disk UUID fou
nd\n","stream":"stderr","time":"2017-05-08T21:31:20.902792337Z"}
{"log":"E0508 21:31:20.902552 1 vsphere.go:1041] Failed to check whether disk is attached. err: No disk UUID found\n","stream":"stderr","time":"2017-05-08T21:31:20.902842673Z"}
{"log":"I0508 21:31:20.902575 1 attacher.go:114] VolumesAreAttached: check volume \"[vsanDatastore] kubevols/kubernetes-dynamic-pvc-8b75170e-342d-11e7-bab5-0050568aeb0a.vmdk\" (specName
: \"pvc-8b75170e-342d-11e7-bab5-0050568aeb0a\") is no longer attached\n","stream":"stderr","time":"2017-05-08T21:31:20.902849717Z"}
{"log":"I0508 21:31:20.902596 1 operation_generator.go:166] VerifyVolumesAreAttached determined volume \"kubernetes.io/vsphere-volume/[vsanDatastore] kubevols/kubernetes-dynamic-pvc-8b7
5170e-342d-11e7-bab5-0050568aeb0a.vmdk\" (spec.Name: \"pvc-8b75170e-342d-11e7-bab5-0050568aeb0a\") is no longer attached to node \"node3\", therefore it was marked as detached.\n","stream":"s
tderr","time":"2017-05-08T21:31:20.902863097Z"}
```
In this change, we make sure the correct volume attachment map is returned, and if any error occurs while checking a disk's status, we return a nil map.
**Logs after fix**
```
{"log":"E0509 20:25:37.982152 1 vsphere.go:1067] Failed to check whether disk is attached. err: No disk UUID found\n","stream":"stderr","time":"2017-05-09T20:25:37.982516134Z"}
{"log":"E0509 20:25:37.982190 1 attacher.go:104] Error checking if volumes ([[vsanDatastore] kubevols/kubernetes-dynamic-pvc-c26fcae8-34f2-11e7-9303-0050568a3ac1.vmdk [vsanDatastore] kubevols/kubernetes-dynamic-pvc-c268f141-34f2-11e7-9303-0050568a3ac1.vmdk [vsanDatastore] kubevols/kubernetes-dynamic-pvc-c25d08d3-34f2-11e7-9303-0050568a3ac1.vmdk]) are attached to current node (\"node3\"). err=No disk UUID found\n","stream":"stderr","time":"2017-05-09T20:25:37.982521101Z"}
{"log":"E0509 20:25:37.982220 1 operation_generator.go:158] VolumesAreAttached failed for checking on node \"node3\" with: No disk UUID found\n","stream":"stderr","time":"2017-05-09T20:25:37.982526285Z"}
{"log":"I0509 20:25:39.157279 1 attacher.go:115] VolumesAreAttached: volume \"[vsanDatastore] kubevols/kubernetes-dynamic-pvc-c268f141-34f2-11e7-9303-0050568a3ac1.vmdk\" (specName: \"pvc-c268f141-34f2-11e7-9303-0050568a3ac1\") is attached\n","stream":"stderr","time":"2017-05-09T20:25:39.157724393Z"}
{"log":"I0509 20:25:39.157329 1 attacher.go:115] VolumesAreAttached: volume \"[vsanDatastore] kubevols/kubernetes-dynamic-pvc-c25d08d3-34f2-11e7-9303-0050568a3ac1.vmdk\" (specName: \"pvc-c25d08d3-34f2-11e7-9303-0050568a3ac1\") is attached\n","stream":"stderr","time":"2017-05-09T20:25:39.157787946Z"}
{"log":"I0509 20:25:39.157367 1 attacher.go:115] VolumesAreAttached: volume \"[vsanDatastore] kubevols/kubernetes-dynamic-pvc-c26fcae8-34f2-11e7-9303-0050568a3ac1.vmdk\" (specName: \"pvc-c26fcae8-34f2-11e7-9303-0050568a3ac1\") is attached\n","stream":"stderr","time":"2017-05-09T20:25:39.157794586Z"}
```
```
{"log":"I0509 20:25:41.267425 1 reconciler.go:173] Started DetachVolume for volume \"kubernetes.io/vsphere-volume/[vsanDatastore] kubevols/kubernetes-dynamic-pvc-c26fcae8-34f2-11e7-9303-0050568a3ac1.vmdk\" from node \"node3\"\n","stream":"stderr","time":"2017-05-09T20:25:41.267883567Z"}
{"log":"I0509 20:25:41.271836 1 operation_generator.go:694] Verified volume is safe to detach for volume \"pvc-c26fcae8-34f2-11e7-9303-0050568a3ac1\" (UniqueName: \"kubernetes.io/vsphere-volume/[vsanDatastore] kubevols/kubernetes-dynamic-pvc-c26fcae8-34f2-11e7-9303-0050568a3ac1.vmdk\") on node \"node3\" \n","stream":"stderr","time":"2017-05-09T20:25:41.272703255Z"}
{"log":"I0509 20:25:47.928021 1 operation_generator.go:341] DetachVolume.Detach succeeded for volume \"pvc-c26fcae8-34f2-11e7-9303-0050568a3ac1\" (UniqueName: \"kubernetes.io/vsphere-volume/[vsanDatastore] kubevols/kubernetes-dynamic-pvc-c26fcae8-34f2-11e7-9303-0050568a3ac1.vmdk\") on node \"node3\" \n","stream":"stderr","time":"2017-05-09T20:25:47.928348553Z"}
{"log":"I0509 20:26:12.535962 1 operation_generator.go:694] Verified volume is safe to detach for volume \"pvc-c25d08d3-34f2-11e7-9303-0050568a3ac1\" (UniqueName: \"kubernetes.io/vsphere-volume/[vsanDatastore] kubevols/kubernetes-dynamic-pvc-c25d08d3-34f2-11e7-9303-0050568a3ac1.vmdk\") on node \"node3\" \n","stream":"stderr","time":"2017-05-09T20:26:12.536055214Z"}
{"log":"I0509 20:26:14.188580 1 operation_generator.go:341] DetachVolume.Detach succeeded for volume \"pvc-c25d08d3-34f2-11e7-9303-0050568a3ac1\" (UniqueName: \"kubernetes.io/vsphere-volume/[vsanDatastore] kubevols/kubernetes-dynamic-pvc-c25d08d3-34f2-11e7-9303-0050568a3ac1.vmdk\") on node \"node3\" \n","stream":"stderr","time":"2017-05-09T20:26:14.188792677Z"}
{"log":"I0509 20:26:40.355656 1 reconciler.go:173] Started DetachVolume for volume \"kubernetes.io/vsphere-volume/[vsanDatastore] kubevols/kubernetes-dynamic-pvc-c268f141-34f2-11e7-9303-0050568a3ac1.vmdk\" from node \"node3\"\n","stream":"stderr","time":"2017-05-09T20:26:40.355922165Z"}
{"log":"I0509 20:26:40.357988 1 operation_generator.go:694] Verified volume is safe to detach for volume \"pvc-c268f141-34f2-11e7-9303-0050568a3ac1\" (UniqueName: \"kubernetes.io/vsphere-volume/[vsanDatastore] kubevols/kubernetes-dynamic-pvc-c268f141-34f2-11e7-9303-0050568a3ac1.vmdk\") on node \"node3\" \n","stream":"stderr","time":"2017-05-09T20:26:40.358177953Z"}
```
**Which issue this PR fixes**
fixes #45464, https://github.com/vmware/kubernetes/issues/116
**Special notes for your reviewer**:
Verified this change on locally built hyperkube image - v1.7.0-alpha.3.147+3c0526cb64bdf5-dirty
**Performed many failovers with large volumes (30GB) attached to the pod.**
$ kubectl describe pod
Name: wordpress-mysql-2789807967-3xcvc
Node: node3/172.1.87.0
Status: Running
Powered off node3's host. Pod failed over to node2. Verified all 3 disks detached from node3 and attached to node2.
$ kubectl describe pod
Name: wordpress-mysql-2789807967-qx0b0
Node: node2/172.1.9.0
Status: Running
Powered off node2's host. Pod failed over to node3. Verified all 3 disks detached from node2 and attached to node3.
$ kubectl describe pod
Name: wordpress-mysql-2789807967-7849s
Node: node3/172.1.87.0
Status: Running
Powered off node3's host. Pod failed over to node1. Verified all 3 disks detached from node3 and attached to node1.
$ kubectl describe pod
Name: wordpress-mysql-2789807967-26lp1
Node: node1/172.1.98.0
Status: Running
Powered off node1's host. Pod failed over to node3. Verified all 3 disks detached from node1 and attached to node3.
$ kubectl describe pods
Name: wordpress-mysql-2789807967-4pdtl
Node: node3/172.1.87.0
Status: Running
Powered off node3's host. Pod failed over to node1. Verified all 3 disks detached from node3 and attached to node1.
$ kubectl describe pod
Name: wordpress-mysql-2789807967-t375f
Node: node1/172.1.98.0
Status: Running
Powered off node1's host. Pod failed over to node3. Verified all 3 disks detached from node1 and attached to node3.
$ kubectl describe pods
Name: wordpress-mysql-2789807967-pn6ps
Node: node3/172.1.87.0
Status: Running
Powered off node3's host. Pod failed over to node1. Verified all 3 disks detached from node3 and attached to node1.
$ kubectl describe pods
Name: wordpress-mysql-2789807967-0wqc1
Node: node1/172.1.98.0
Status: Running
Powered off node1's host. Pod failed over to node3. Verified all 3 disks detached from node1 and attached to node3.
$ kubectl describe pods
Name: wordpress-mysql-2789807967-821nc
Node: node3/172.1.87.0
Status: Running
**Release note**:
```release-note
NONE
```
CC: @BaluDontu @abrarshivani @luomiao @tusharnt @pdhamdhere
Automatic merge from submit-queue (batch tested with PRs 43067, 45586, 45590, 38636, 45599)
AWS: Remove check that forces loadBalancerSourceRanges to be 0.0.0.0/0.
fixes #38633
Remove the check that forces loadBalancerSourceRanges to be 0.0.0.0/0. Also, remove the check that forces the service.beta.kubernetes.io/aws-load-balancer-internal annotation to be 0.0.0.0/0. Ideally, it should be a boolean, but for backward compatibility we leave it as any non-empty value.
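A minimal sketch of a Service exercising both relaxed checks; the CIDR and the annotation value are illustrative (per the description above, the annotation now only needs to be non-empty):
```
kind: Service
apiVersion: v1
metadata:
  name: internal-app
  annotations:
    # Any non-empty value now marks the ELB as internal (previously forced to 0.0.0.0/0).
    service.beta.kubernetes.io/aws-load-balancer-internal: "true"
spec:
  type: LoadBalancer
  # No longer forced to 0.0.0.0/0; arbitrary source ranges are allowed.
  loadBalancerSourceRanges:
  - 10.0.0.0/8
  selector:
    app: internal-app
  ports:
  - port: 80
    targetPort: 8080
```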
Automatic merge from submit-queue (batch tested with PRs 45382, 45384, 44781, 45333, 45543)
azure: improve user agent string
**What this PR does / why we need it**: the UA string doesn't actually contain "kubernetes" in it
**Which issue this PR fixes** *(optional, in `fixes #<issue number>(, fixes #<issue_number>, ...)` format, will close that issue when PR gets merged)*: fixes #
**Special notes for your reviewer**: none
**Release note**:
```release-note
NONE
```
cc: @brendandburns