Automatic merge from submit-queue
Mark volume as detached when node does not exist for vsphere
If the node does not exist, its volumes will be detached automatically
and become available, so mark them as detached and return false
without an error.
Fixes #50266
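A minimal Go sketch of the behavior described above, assuming a sentinel "VM not found" error; the function and error names are illustrative, not the provider's actual API:

```go
// diskIsAttached sketches the "node gone, so volume is detached" rule.
package example

import "errors"

// errVMNotFound is an assumed sentinel for "the node's VM does not exist".
var errVMNotFound = errors.New("vm not found")

func diskIsAttached(lookupVM func(node string) error, volPath, node string) (bool, error) {
	if err := lookupVM(node); err != nil {
		if errors.Is(err, errVMNotFound) {
			// The node no longer exists, so vSphere has already released its
			// disks; report the volume as detached without an error.
			return false, nil
		}
		return false, err
	}
	// A real implementation would inspect the VM's devices for volPath here.
	return true, nil
}
```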
**Special notes for your reviewer**:
/assign @jingxu97
**Release note**:
```release-note
NONE
```
vSphere has a limitation of 80 characters for the vmName.
With the "vsphere-k8s" prefix plus "vmdisk.volumeOptions.Name", the vmName can easily exceed 80 characters.
Used a hash function on only the "vmdisk.volumeOptions.Name" part, because the dummy VM cleanup logic depends on the "vsphere-k8s" prefix.
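A rough sketch of the naming idea, assuming SHA-1 and this exact formatting (both are illustrative choices, not necessarily what the change uses):

```go
// dummyVMName hashes only the volume-options name; the fixed "vsphere-k8s"
// prefix is preserved because the dummy-VM cleanup logic keys off it.
package example

import (
	"crypto/sha1"
	"fmt"
)

func dummyVMName(volumeOptionsName string) string {
	sum := sha1.Sum([]byte(volumeOptionsName))
	// "vsphere-k8s-" plus 40 hex characters stays well under the 80-character
	// vmName limit regardless of how long volumeOptionsName is.
	return fmt.Sprintf("vsphere-k8s-%x", sum)
}
```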
Automatic merge from submit-queue
Initialize cloud providers with a K8s clientBuilder
**What this PR does / why we need it**:
This PR provides each cloud provider the ability to generate Kubernetes clients. Either the full-access or the service-account client builder is passed in from the controller manager. Cloud providers may need to retrieve information from the cluster that isn't provided through the defined interfaces, and this seems preferable to adding parameters.
Please leave your thoughts/comments.
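A minimal sketch of the idea, with an assumed builder interface and provider type; the real `cloudprovider.Interface` hook and client-builder signature may differ:

```go
package example

import "k8s.io/client-go/kubernetes"

// ClientBuilder abstracts how Kubernetes clientsets are constructed, e.g.
// with full access or with per-controller service account credentials.
type ClientBuilder interface {
	Client(name string) (kubernetes.Interface, error)
}

// exampleProvider stands in for a cloud provider; it keeps the builder so it
// can create clients lazily when it needs cluster information that its other
// interfaces do not supply.
type exampleProvider struct {
	clientBuilder ClientBuilder
}

// Initialize would be called by the controller manager after construction.
func (p *exampleProvider) Initialize(builder ClientBuilder) {
	p.clientBuilder = builder
}
```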
**Release note**:
```release-note
NONE
```
Automatic merge from submit-queue
Same internal and external IP for vSphere Cloud Provider
Currently, the vSphere Cloud Provider reports the internal IP as container IP addresses. This PR modifies the vSphere Cloud Provider to report the same IP address, provided by the VMware infrastructure, as both internal and external.
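A hedged sketch of what reporting the same address as both internal and external could look like (illustrative only, using the current `k8s.io/api/core/v1` types rather than the provider's code at the time):

```go
package example

import v1 "k8s.io/api/core/v1"

// nodeAddresses reports a single address as both the internal and the
// external IP of the node.
func nodeAddresses(ip string) []v1.NodeAddress {
	return []v1.NodeAddress{
		{Type: v1.NodeInternalIP, Address: ip},
		{Type: v1.NodeExternalIP, Address: ip},
	}
}
```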
cc @pdhamdhere @tusharnt @BaluDontu @divyenpatel @luomiao
Automatic merge from submit-queue
Add approvers to vsphere cloudprovider
This PR adds approvers for the vSphere Cloud Provider.
cc @pdhamdhere @tusharnt @BaluDontu @divyenpatel @luomiao
Automatic merge from submit-queue
Filter out IPv6 addresses from NodeAddresses() returned by vSphere
The vSphere CP returns both IPv6 and IPv4 addresses for a Node as part of its NodeAddresses() implementation. However, Kubelet fails due to a duplicate api.NodeAddress value when the node has an IPv6 address associated with it. This issue is tracked in #42690. The following are observed:
- When we enabled the logs and checked the addresses sent by the vSphere CP to Kubelet, we don't see any duplicate addresses at all.
- Also, kubelet_node_status doesn't receive any duplicate address from the cloud provider.
However, when we filter out the IPv6 addresses and only return IPv4 addresses to the Kubelet, it works perfectly fine.
Even though the Kubelet receives non-duplicate node addresses, it still errors out with duplicate node addresses. It might be an issue when the Kubelet propagates these addresses to the API server, or the API server is unable to handle IPv6 addresses.
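A small illustrative sketch of the IPv4-only filtering described above (not the provider's exact code):

```go
package example

import (
	"net"

	v1 "k8s.io/api/core/v1"
)

// filterIPv4 drops every address that does not parse as IPv4.
func filterIPv4(addrs []v1.NodeAddress) []v1.NodeAddress {
	var out []v1.NodeAddress
	for _, a := range addrs {
		ip := net.ParseIP(a.Address)
		if ip != nil && ip.To4() != nil { // To4 is non-nil only for IPv4
			out = append(out, a)
		}
	}
	return out
}
```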
@divyenpatel @abrarshivani @pdhamdhere @tusharnt
**Release note**:
```release-note
NONE
```
Remove the dependency on login information on worker nodes for the vSphere cloud provider:
1. The VM name is required to be set in the cloud provider configuration file (see the sketch after this list).
2. Remove the requirement of logging in for Instance functions when querying local node information.
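A hedged sketch of item 1; the struct, field, and tag names are assumptions, not the provider's actual config schema:

```go
package example

// vsphereNodeConfig holds the worker-local setting described in item 1.
type vsphereNodeConfig struct {
	// VMName identifies this node's virtual machine, so local Instance calls
	// can be answered without keeping vCenter credentials on the worker.
	VMName string `gcfg:"vm-name"`
}
```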
The cloudprovider is being refactored out of the Kubernetes core. This is being
done by moving all the cloud-specific calls from kube-apiserver, kubelet and
kube-controller-manager into a separately maintained binary (by vendors) called
cloud-controller-manager. The Kubelet relies on the cloudprovider to detect information
about the node that it is running on. Some of the cloudproviders worked by
querying local information to obtain it. In the new model,
local information cannot be relied on, since cloud-controller-manager will not
run on every node; only one active instance of it will run in the cluster.
Today, all calls to the cloudprovider are based on the nodename. Nodenames are
unique within the Kubernetes cluster, but generally not unique within the cloud.
This model of addressing nodes by nodename will not work in the future because
local services cannot be queried to uniquely identify a node in the cloud. Therefore,
I propose that we perform all cloudprovider calls based on the ProviderID. This ID is
a unique identifier for a node in an external database (such as
the instanceID in the AWS cloud).
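A hedged sketch of the ProviderID idea; the exact format varies per cloud, so the scheme-style split below is an assumption for illustration only:

```go
package example

import (
	"fmt"
	"strings"
)

// splitProviderID separates a "<provider>://<cloud-specific id>" string into
// its provider name and opaque identifier (e.g. a VM UUID or instance ID).
func splitProviderID(providerID string) (provider, id string, err error) {
	parts := strings.SplitN(providerID, "://", 2)
	if len(parts) != 2 || parts[0] == "" || parts[1] == "" {
		return "", "", fmt.Errorf("unexpected providerID format: %q", providerID)
	}
	return parts[0], parts[1], nil
}
```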
Automatic merge from submit-queue
Fix adding disks to more than one SCSI adapter. Fixes #42399
**What this PR does / why we need it**: Allows a single node to use more than 16 disks.
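An illustrative sketch of the controller-selection idea (the per-controller unit limit and names here are assumptions, not the provider's actual constants):

```go
package example

// unitsPerController is an assumed number of usable disk slots per SCSI
// controller (one unit is typically reserved for the controller itself).
const unitsPerController = 15

type scsiController struct {
	name      string
	usedUnits int
}

// pickController returns the index of the first controller with a free unit,
// or -1 if every controller is full and a new one must be added before the
// next disk can be attached.
func pickController(ctrls []scsiController) int {
	for i, c := range ctrls {
		if c.usedUnits < unitsPerController {
			return i
		}
	}
	return -1
}
```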
**Which issue this PR fixes** *(optional, in `fixes #<issue number>(, fixes #<issue_number>, ...)` format, will close that issue when PR gets merged)*: fixes #42399
**Special notes for your reviewer**:
**Release note**:
```release-note
Fix adding disks to more than one SCSI adapter.
```
Automatic merge from submit-queue
Remove VCenterPort from vsphere cloud provider.
**What this PR does / why we need it**:
Address a bug in the vSphere cloud provider when a port number other than 443 is specified in the config file.
The URL used for communicating with govmomi should not include a port number;
a port other than 443 results in a 404 error.
VCenterPort stays in the VSphereConfig structure for backward compatibility.
**Which issue this PR fixes**: fixes https://github.com/kubernetes/kubernetes-anywhere/issues/338
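A minimal sketch of building the vCenter SDK URL without a port, so the default HTTPS port is used; the URL shape and helper are illustrative assumptions:

```go
package example

import (
	"fmt"
	"net/url"
)

// vcenterURL builds the SDK endpoint from the host only, without appending a
// port, so the default HTTPS port (443) is used.
func vcenterURL(user, password, host string) (*url.URL, error) {
	u, err := url.Parse(fmt.Sprintf("https://%s/sdk", host))
	if err != nil {
		return nil, err
	}
	u.User = url.UserPassword(user, password)
	return u, nil
}
```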
Automatic merge from submit-queue (batch tested with PRs 41223, 40892, 41220, 41207, 41242)
Fixes #40819 and Fixes #33114
**What this PR does / why we need it**:
Start looking up the virtual machine by its UUID in vSphere again. Looking up by IP address is problematic and can either not return a VM at all, or could return the wrong VM.
Retrieves the VM's UUID in one of two ways: either from a `vm-uuid` entry in the cloud config file on the VM, or via sysfs. The sysfs route requires root access, but restores the previous functionality.
Multiple VMs in a vCenter cluster can share an IP address - for example, if you have multiple VM networks, but they're all isolated and use the same address range. Additionally, flannel network address ranges can overlap.
vSphere seems to have a limitation of reporting no more than 16 interfaces from a virtual machine, so it's possible that the IP address list on a VM is completely untrustworthy anyhow - it can either be empty (because the 16 interfaces it found were veth interfaces with no IP address), or it can report the flannel IP.
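A hedged sketch of the UUID fallback described above; the sysfs path and normalization shown are assumptions for illustration:

```go
package example

import (
	"os"
	"strings"
)

// vmUUID prefers an explicit vm-uuid from the cloud config and otherwise
// falls back to reading the machine UUID from sysfs (which requires root).
func vmUUID(configuredUUID string) (string, error) {
	if configuredUUID != "" {
		return configuredUUID, nil
	}
	raw, err := os.ReadFile("/sys/class/dmi/id/product_uuid") // assumed sysfs location
	if err != nil {
		return "", err
	}
	return strings.ToLower(strings.TrimSpace(string(raw))), nil
}
```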
**Which issue this PR fixes** *(optional, in `fixes #<issue number>(, fixes #<issue_number>, ...)` format, will close that issue when PR gets merged)*: fixes #40819, fixes #33114
**Special notes for your reviewer**:
**Release note**:
```release-note
Reverts to looking up the current VM in vSphere using the machine's UUID, either obtained via sysfs or via the `vm-uuid` parameter in the cloud configuration file.
```
Automatic merge from submit-queue (batch tested with PRs 41037, 40118, 40959, 41084, 41092)
Fix for detach volume when node is not present / powered off
Fixes #33061
When a VM is reported as no longer present in the cloud provider and is deleted by the node controller, there are no attempts to detach the respective volumes. For example, if a VM is powered off or paused, its pods are migrated to other nodes. In the case of vSphere, the VM cannot be started again because it still holds mount points to volumes that are now mounted to other VMs.
In order to re-join this node, you have to manually detach these volumes from the powered-off VM before starting it.
The current fix makes sure the mount points are deleted when the VM is powered off. Since all the mount points are deleted, the VM can be powered on again.
This is a workaround proposal only. I still don't see Kubernetes issuing a detach request to the vSphere cloud provider, which should be the case. (Details in original issue #33061.)
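A minimal sketch of the workaround's shape, assuming a hypothetical detacher interface rather than the provider's actual API:

```go
package example

// detacher is a hypothetical interface standing in for the cloud provider's
// volume-detach entry point.
type detacher interface {
	DetachDisk(volPath, nodeName string) error
}

// cleanupNodeVolumes detaches every Kubernetes-managed disk from a node whose
// VM is powered off, so the VM can later be powered on and rejoin the cluster.
func cleanupNodeVolumes(d detacher, nodeName string, volPaths []string) error {
	for _, p := range volPaths {
		if err := d.DetachDisk(p, nodeName); err != nil {
			return err
		}
	}
	return nil
}
```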
@luomiao @kerneltime @pdhamdhere @jingxu97 @saad-ali
Addressed review comments
Addressed review comment for pv_reclaimpolicy.go to verify the content of the volume
Addressed 2nd round of review comments
Addressed 3rd round of review comments from jeffvance