Automatic merge from submit-queue (batch tested with PRs 65492, 65516, 65447). If you want to cherry-pick this change to another branch, please follow the instructions [here](https://github.com/kubernetes/community/blob/master/contributors/devel/cherry-picks.md).

fix azure disk creation issue when specifying external resource group

**What this PR does / why we need it**:
Fixes an Azure disk creation issue when an external resource group is specified: after the disk is created successfully, fetching its state fails because the code still queries the original resource group.

**Which issue(s) this PR fixes**:
Fixes #65515

**Special notes for your reviewer**:
Together with https://github.com/kubernetes/kubernetes/pull/65443, this feature is complete; I will cherry-pick it to prior versions later. In the end, there are two ways to dynamically provision an Azure disk in an external resource group:

- specify the `resourcegroup` parameter in the Azure disk storage class:
```
kind: StorageClass
apiVersion: storage.k8s.io/v1
metadata:
  name: hdd
provisioner: kubernetes.io/azure-disk
parameters:
  skuname: Standard_LRS
  kind: managed
  cachingmode: None
  resourcegroup: USER-SPECIFIED-RG
```

- specify the `volume.beta.kubernetes.io/resource-group` annotation on the PVC:
```
kind: PersistentVolumeClaim
apiVersion: v1
metadata:
  name: pvc-azuredisk
  annotations:
    volume.beta.kubernetes.io/resource-group: "USER-SPECIFIED-RG"
spec:
  accessModes:
    - ReadWriteOnce
  resources:
    requests:
      storage: 1Gi
  storageClassName: hdd
```

**Release note**:
```
fix azure disk issue when specifying external resource group
```

/kind bug
/sig azure

@jsafrane @rootfs
Just FYI @khenidak @brendandburns @feiskyer
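For readers who want to confirm the behavior end to end, a minimal verification flow might look like the following: apply the PVC above, then list the disks in the external resource group with the Azure CLI. The manifest filename is an assumption; `USER-SPECIFIED-RG` is the placeholder from the examples above.

```
# Create the PVC defined above (filename is illustrative)
kubectl apply -f pvc-azuredisk.yaml

# Wait for the claim to be bound
kubectl get pvc pvc-azuredisk

# Confirm the managed disk landed in the external resource group
az disk list --resource-group USER-SPECIFIED-RG --query "[].name" -o table
```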
Deprecation Notice: This directory has entered maintenance mode and will not be accepting new providers. Cloud Providers in this directory will continue to be actively developed, and will be maintained and supported at their current level of support, as a longer-term solution evolves.
Overview:
The mechanism for supporting cloud providers is currently in transition: the original method of implementing cloud provider-specific functionality within the main kubernetes tree (here) is no longer advised; however, the proposed solution is still in development.
Guidance for potential cloud providers:
- Support for cloud providers is currently in a state of flux. Background information on the motivation and the proposal for improving the situation can be found in the GitHub proposal.
- In support of this plan, a new cloud-controller-manager binary was added in 1.6. This was the first of several steps (see the proposal for more information).
- Attempts to contribute new cloud providers or (to a lesser extent) persistent volumes to the core repo will likely meet with some pushback from reviewers/approvers.
- It is understood that this is an unfortunate situation in which 'the old way is no longer supported but the new way is not ready yet'; however, the initial path is unsustainable. Contributors are encouraged to participate in implementing the proposed long-term solution, as there is a risk that PRs for new cloud providers here will not be approved.
- Though the fully productized support envisioned in the proposal is still 2-3 releases out, the foundational work is underway, and a motivated cloud provider could accomplish the work in a forward-looking way. Contributors are encouraged to assist with the implementation of the design outlined in the proposal.
Some additional context on status / direction:
- 1.6 added a new cloud-controller-manager binary that may be used for testing the new out-of-core cloudprovider flow.
- Setting `--cloud-provider=external` on the core components allows cloud-specific work to be handled by a separate controller-manager binary (see the sketch after this list)
- 1.7 adds extensible admission control, further enabling topology customization.
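To make the out-of-core flow above concrete, here is a rough sketch of how the pieces fit together. The `--cloud-provider`, `--kubeconfig`, and `--leader-elect` flags match the upstream components, but the provider name, kubeconfig path, and exact invocation are illustrative assumptions rather than a prescribed setup.

```
# Core components run with in-tree cloud logic disabled
kubelet --cloud-provider=external ...
kube-controller-manager --cloud-provider=external ...

# The provider-specific controllers run out of core in their own binary.
# Provider name and kubeconfig path below are illustrative.
cloud-controller-manager \
  --cloud-provider=azure \
  --kubeconfig=/etc/kubernetes/cloud-controller-manager.conf \
  --leader-elect=true
```

With this split, the cloud-specific control loops can be developed and released independently of the core Kubernetes tree, which is the goal of the transition described above.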