kubernetes/pkg/cloudprovider
Kubernetes Submit Queue 63cc42a861 Merge pull request #52131 from vmware/BulkVerifyVolumesImplVsphere
Automatic merge from submit-queue (batch tested with PRs 50068, 52406, 52394, 48551, 52131). If you want to cherry-pick this change to another branch, please follow the instructions here: https://github.com/kubernetes/community/blob/master/contributors/devel/cherry-picks.md

Implement bulk polling of volumes for vSphere

This PR implements bulk polling of volumes, the BulkVerifyVolumes() API, for vSphere.

With the existing implementation, the vSphere cloud provider makes multiple calls to vCenter to check whether a volume is attached to a node: if there are N volumes attached across M nodes, it makes N vCenter calls to verify attachment for all N volumes. In addition, Kubernetes by default queries every minute whether volumes are attached to nodes, which substantially increases the number of calls the vSphere cloud provider makes to vCenter.

To prevent this, the vSphere cloud provider implements the BulkVerifyVolumes() API, in which a single call is made to vCenter to check whether all of the volumes are attached to their respective nodes. Irrespective of the number of volumes attached to nodes, each Kubernetes query to the BulkVerifyVolumes() API results in exactly one vCenter call.
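
The exact types and signatures are those of the vSphere cloud provider and the attach/detach controller; purely to illustrate the batching idea, here is a minimal, self-contained sketch with hypothetical names (not the real Kubernetes API):

```go
package main

import "fmt"

// NodeName stands in for the Kubernetes node name type; volume paths are
// plain strings here instead of vSphere volume specs.
type NodeName string

// verifyVolumesBulk is a hypothetical stand-in for BulkVerifyVolumes():
// it takes every (node, volume) pair of interest and answers all of them
// with a single round trip to the backend, instead of one query per volume.
func verifyVolumesBulk(
	volumesByNode map[NodeName][]string,
	queryBackendOnce func(map[NodeName][]string) map[NodeName]map[string]bool,
) map[NodeName]map[string]bool {
	// One call to the backend (vCenter in the vSphere case), regardless of
	// how many volumes or nodes are being checked.
	return queryBackendOnce(volumesByNode)
}

func main() {
	// Fake backend: reports every requested volume as attached.
	fakeVCenter := func(req map[NodeName][]string) map[NodeName]map[string]bool {
		result := make(map[NodeName]map[string]bool)
		for node, vols := range req {
			result[node] = make(map[string]bool)
			for _, v := range vols {
				result[node][v] = true
			}
		}
		return result
	}

	attached := verifyVolumesBulk(map[NodeName][]string{
		"node-1": {"[datastore1] kubevols/vol-a.vmdk"},
		"node-2": {"[datastore1] kubevols/vol-b.vmdk", "[datastore1] kubevols/vol-c.vmdk"},
	}, fakeVCenter)
	fmt.Println(attached)
}
```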

@rohitjogvmw @divyenpatel @luomiao

```release-note
BulkVerifyVolumes() implementation for vSphere
```
2017-09-23 20:55:54 -07:00

Deprecation Notice: This directory has entered maintenance mode and will not be accepting new providers. Cloud providers in this directory will continue to be actively developed or maintained, and supported at their current level, while a longer-term solution evolves.

Overview:

The mechanism for supporting cloud providers is currently in transition: the original method of implementing cloud provider-specific functionality within the main Kubernetes tree (this directory) is no longer advised; however, the proposed solution is still in development.

Guidance for potential cloud providers:

  • Support for cloud providers is currently in a state of flux. Background information on the motivation and the proposal for improving the situation is in the GitHub proposal.
  • In support of this plan, a new cloud-controller-manager binary was added in 1.6. This was the first of several steps (see the proposal for more information).
  • Attempts to contribute new cloud providers or (to a lesser extent) persistent volumes to the core repo will likely meet with some pushback from reviewers/approvers.
  • It is understood that this is an unfortunate situation in which 'the old way is no longer supported but the new way is not ready yet'. However, the original path is unsustainable, and contributors are encouraged to participate in the implementation of the proposed long-term solution, as there is a risk that PRs for new cloud providers here will not be approved.
  • Though the fully productized support envisioned in the proposal is still two to three releases out, the foundational work is underway, and a motivated cloud provider could accomplish the work in a forward-looking way. Contributors are encouraged to assist with the implementation of the design outlined in the proposal; a sketch of the provider-registration pattern follows this list.
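
The real registration machinery lives in this package (cloudprovider.RegisterCloudProvider and cloudprovider.Interface); the following is only a minimal, self-contained sketch of that factory-registration pattern using simplified, hypothetical types, not the actual Kubernetes interface:

```go
package main

import (
	"fmt"
	"io"
	"strings"
)

// Provider is a drastically simplified stand-in for the methods a cloud
// provider exposes (the real cloudprovider.Interface covers instances,
// load balancers, routes, zones, and more).
type Provider interface {
	ProviderName() string
}

// Factory builds a provider from its configuration, mirroring the
// factory-registration pattern the in-tree providers use.
type Factory func(config io.Reader) (Provider, error)

var providers = map[string]Factory{}

// RegisterProvider is called from each provider's init(); the controller
// manager later looks the provider up by the name given on the command line.
func RegisterProvider(name string, factory Factory) {
	providers[name] = factory
}

type exampleCloud struct{ name string }

func (e *exampleCloud) ProviderName() string { return e.name }

func init() {
	RegisterProvider("examplecloud", func(config io.Reader) (Provider, error) {
		return &exampleCloud{name: "examplecloud"}, nil
	})
}

func main() {
	factory := providers["examplecloud"]
	cloud, err := factory(strings.NewReader(""))
	if err != nil {
		panic(err)
	}
	fmt.Println("initialized provider:", cloud.ProviderName())
}
```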

Some additional context on status / direction:

  • 1.6 added a new cloud-controller-manager binary that may be used for testing the new out-of-core cloudprovider flow.
  • Setting --cloud-provider=external allows for the creation of a separate controller-manager binary (see the example after this list).
  • 1.7 adds extensible admission control, further enabling topology customization.
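
As a rough illustration of this split (flag spellings as commonly documented for the external cloud provider flow; verify them against your release), the core components defer cloud-specific work, and the provider-specific controllers run in the separate binary:

```
# Core components defer cloud-specific work to an external controller:
kube-controller-manager --cloud-provider=external ...
kubelet --cloud-provider=external ...

# The provider-specific controllers run in their own binary:
cloud-controller-manager --cloud-provider=<your-provider> --cloud-config=<path-to-cloud-config> ...
```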