Merge pull request #44868 from vmware/dsclustersupport
Automatic merge from submit-queue
Adding datastore cluster support for dynamic and static PVs
**What this PR does / why we need it**:
A customer reported that with version 1.4.7 he could use a datastore that is part of a datastore cluster as a vSphere volume. After upgrading to 1.6.0, the exact same path no longer works and throws a "datastore not found" error.
This PR adds support for using a datastore that resides within a datastore cluster for volume provisioning.
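Conceptually, the fix treats a `datastore` value of the form `<datastore-cluster>/<datastore>` as a two-level path and resolves the datastore inside the cluster folder. A minimal sketch of that parsing step (`splitDatastorePath` is a hypothetical helper for illustration, not the PR's actual code):

```go
package main

import (
	"fmt"
	"strings"
)

// splitDatastorePath splits a StorageClass `datastore` parameter of the
// form "DatastoreCluster/sharedVmfs-0" into its cluster and datastore
// components. A value without a "/" is treated as a plain (non-clustered)
// datastore name, preserving the pre-existing behavior.
func splitDatastorePath(path string) (cluster, datastore string) {
	if i := strings.Index(path, "/"); i >= 0 {
		return path[:i], path[i+1:]
	}
	return "", path
}

func main() {
	c, d := splitDatastorePath("DatastoreCluster/sharedVmfs-0")
	fmt.Printf("cluster=%q datastore=%q\n", c, d) // cluster="DatastoreCluster" datastore="sharedVmfs-0"
	c, d = splitDatastorePath("sharedVmfs-0")
	fmt.Printf("cluster=%q datastore=%q\n", c, d) // cluster="" datastore="sharedVmfs-0"
}
```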
**Which issue this PR fixes** :
fixes https://github.com/kubernetes/kubernetes/issues/44007
**Special notes for your reviewer**:
**Created a datastore cluster as below.**

**Verified dynamic PV provisioning and pod creation using datastore (sharedVmfs-0) in a cluster (DatastoreCluster).**
```
$ cat thin_sc.yaml
kind: StorageClass
apiVersion: storage.k8s.io/v1
metadata:
  name: thin
provisioner: kubernetes.io/vsphere-volume
parameters:
  diskformat: thin
  datastore: DatastoreCluster/sharedVmfs-0
```
```
$ kubectl create -f thin_sc.yaml
storageclass "thin" created
$ kubectl describe storageclass thin
Name:            thin
IsDefaultClass:  No
Annotations:     <none>
Provisioner:     kubernetes.io/vsphere-volume
Parameters:      datastore=DatastoreCluster/sharedVmfs-0,diskformat=thin
No events.
$
```
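The `thin_pvc.yaml` manifest is not shown in the PR description; a minimal claim against the `thin` StorageClass could look like the following sketch (the 1.6-era beta annotation is assumed, since the actual file isn't included):

```yaml
kind: PersistentVolumeClaim
apiVersion: v1
metadata:
  name: thinclaim
  annotations:
    volume.beta.kubernetes.io/storage-class: thin
spec:
  accessModes:
    - ReadWriteOnce
  resources:
    requests:
      storage: 2Gi
```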
```
$ kubectl create -f thin_pvc.yaml
persistentvolumeclaim "thinclaim" created
```
```
$ kubectl get pvc
NAME        STATUS    VOLUME                                     CAPACITY   ACCESSMODES   AGE
thinclaim   Bound     pvc-581805e3-290d-11e7-9ad8-005056bd81ef   2Gi        RWO           1m
```
```
$ kubectl get pv
NAME                                       CAPACITY   ACCESSMODES   RECLAIMPOLICY   STATUS    CLAIM               REASON    AGE
pvc-581805e3-290d-11e7-9ad8-005056bd81ef   2Gi        RWO           Delete          Bound     default/thinclaim             1m
```
```
$ kubectl describe pvc thinclaim
Name:          thinclaim
Namespace:     default
StorageClass:  thin
Status:        Bound
Volume:        pvc-581805e3-290d-11e7-9ad8-005056bd81ef
Labels:        <none>
Capacity:      2Gi
Access Modes:  RWO
Events:
  FirstSeen  LastSeen  Count  From                           SubObjectPath  Type    Reason                 Message
  ---------  --------  -----  ----                           -------------  ----    ------                 -------
  39s        39s       1      {persistentvolume-controller }                Normal  ProvisioningSucceeded  Successfully provisioned volume pvc-581805e3-290d-11e7-9ad8-005056bd81ef using kubernetes.io/vsphere-volume
```
```
$ kubectl describe pv pvc-581805e3-290d-11e7-9ad8-005056bd81ef
Name:            pvc-581805e3-290d-11e7-9ad8-005056bd81ef
Labels:          <none>
StorageClass:
Status:          Bound
Claim:           default/thinclaim
Reclaim Policy:  Delete
Access Modes:    RWO
Capacity:        2Gi
Message:
Source:
    Type:        vSphereVolume (a Persistent Disk resource in vSphere)
    VolumePath:  [DatastoreCluster/sharedVmfs-0] kubevols/kubernetes-dynamic-pvc-581805e3-290d-11e7-9ad8-005056bd81ef.vmdk
    FSType:      ext4
No events.
```
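The `thin_pod.yaml` manifest used below is not included in the PR description; a pod consuming the claim might look like this sketch (container name and command assumed from the `kubectl describe pod` output that follows):

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: thinclaimpod
spec:
  containers:
  - name: test-container
    image: gcr.io/google_containers/busybox:1.24
    command: ["/bin/sh", "-c", "echo 'hello' > /mnt/volume1/index.html && chmod o+rX /mnt /mnt/volume1/index.html && while true ; do sleep 2 ; done"]
    volumeMounts:
    - name: test-volume
      mountPath: /mnt/volume1
  volumes:
  - name: test-volume
    persistentVolumeClaim:
      claimName: thinclaim
```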
```
$ kubectl create -f thin_pod.yaml
pod "thinclaimpod" created
```
```
$ kubectl get pod
NAME           READY     STATUS    RESTARTS   AGE
thinclaimpod   1/1       Running   0          1m
```
```
$ kubectl describe pod thinclaimpod
Name:         thinclaimpod
Namespace:    default
Node:         node3/172.1.56.0
Start Time:   Mon, 24 Apr 2017 09:46:56 -0700
Labels:       <none>
Status:       Running
IP:           172.1.56.3
Controllers:  <none>
Containers:
  test-container:
    Container ID:  docker://487f77d92b92ee3d833b43967c8d42433e61cd45a58d8d6f462717301597c84f
    Image:         gcr.io/google_containers/busybox:1.24
    Image ID:      docker://sha256:0cb40641836c461bc97c793971d84d758371ed682042457523e4ae701efe7ec9
    Port:
    Command:
      /bin/sh
      -c
      echo 'hello' > /mnt/volume1/index.html && chmod o+rX /mnt /mnt/volume1/index.html && while true ; do sleep 2 ; done
    State:          Running
      Started:      Mon, 24 Apr 2017 09:47:16 -0700
    Ready:          True
    Restart Count:  0
    Volume Mounts:
      /mnt/volume1 from test-volume (rw)
      /var/run/secrets/kubernetes.io/serviceaccount from default-token-cqcq1 (ro)
    Environment Variables:  <none>
Conditions:
  Type          Status
  Initialized   True
  Ready         True
  PodScheduled  True
Volumes:
  test-volume:
    Type:       PersistentVolumeClaim (a reference to a PersistentVolumeClaim in the same namespace)
    ClaimName:  thinclaim
    ReadOnly:   false
  default-token-cqcq1:
    Type:        Secret (a volume populated by a Secret)
    SecretName:  default-token-cqcq1
QoS Class:    BestEffort
Tolerations:  <none>
Events:
  FirstSeen  LastSeen  Count  From                 SubObjectPath                    Type    Reason     Message
  ---------  --------  -----  ----                 -------------                    ----    ------     -------
  40s        40s       1      {default-scheduler }                                  Normal  Scheduled  Successfully assigned thinclaimpod to node3
  22s        22s       1      {kubelet node3}      spec.containers{test-container}  Normal  Pulling    pulling image "gcr.io/google_containers/busybox:1.24"
  21s        21s       1      {kubelet node3}      spec.containers{test-container}  Normal  Pulled     Successfully pulled image "gcr.io/google_containers/busybox:1.24"
  21s        21s       1      {kubelet node3}      spec.containers{test-container}  Normal  Created    Created container with id 487f77d92b92ee3d833b43967c8d42433e61cd45a58d8d6f462717301597c84f
  21s        21s       1      {kubelet node3}      spec.containers{test-container}  Normal  Started    Started container with id 487f77d92b92ee3d833b43967c8d42433e61cd45a58d8d6f462717301597c84f
```
```
$ kubectl delete pod thinclaimpod
pod "thinclaimpod" deleted
```
Verified the disk is detached from the node.
```
$ kubectl delete pvc thinclaim
persistentvolumeclaim "thinclaim" deleted
$ kubectl get pv
No resources found.
```
Verified the disk is deleted from the datastore.
Also verified the above life cycle using a non-clustered datastore.
**Verified using a static PV in the datastore cluster for pod provisioning.**
```
# pwd
/vmfs/volumes/sharedVmfs-0/kubevols
# vmkfstools -c 2g test.vmdk
Create: 100% done
# ls
test-flat.vmdk test.vmdk
```
```
$ cat pod.yaml
apiVersion: v1
kind: Pod
metadata:
  name: inject-pod
spec:
  containers:
  - name: test-container
    image: gcr.io/google_containers/busybox:1.24
    command: ["/bin/sh", "-c", "echo 'hello' > /mnt/volume1/index.html && chmod o+rX /mnt /mnt/volume1/index.html && while true ; do sleep 2 ; done"]
    volumeMounts:
    - name: test-volume
      mountPath: /mnt/volume1
    securityContext:
      seLinuxOptions:
        level: "s0:c0,c1"
  restartPolicy: Never
  volumes:
  - name: test-volume
    vsphereVolume:
      volumePath: "[DatastoreCluster/sharedVmfs-0] kubevols/test.vmdk"
      fsType: ext4
```
```
$ kubectl create -f pod.yaml
pod "inject-pod" created
$ kubectl get pod
NAME         READY     STATUS    RESTARTS   AGE
inject-pod   1/1       Running   0          19s
$ kubectl describe pod inject-pod
Name:         inject-pod
Namespace:    default
Node:         node3/172.1.56.0
Start Time:   Mon, 24 Apr 2017 10:27:22 -0700
Labels:       <none>
Status:       Running
IP:           172.1.56.3
Controllers:  <none>
Containers:
  test-container:
    Container ID:  docker://ed14e058fbcc9c2d8d30ff67bd614e45cf086afbbff070744c5a461e87c45103
    Image:         gcr.io/google_containers/busybox:1.24
    Image ID:      docker://sha256:0cb40641836c461bc97c793971d84d758371ed682042457523e4ae701efe7ec9
    Port:
    Command:
      /bin/sh
      -c
      echo 'hello' > /mnt/volume1/index.html && chmod o+rX /mnt /mnt/volume1/index.html && while true ; do sleep 2 ; done
    State:          Running
      Started:      Mon, 24 Apr 2017 10:27:40 -0700
    Ready:          True
    Restart Count:  0
    Volume Mounts:
      /mnt/volume1 from test-volume (rw)
      /var/run/secrets/kubernetes.io/serviceaccount from default-token-cqcq1 (ro)
    Environment Variables:  <none>
Conditions:
  Type          Status
  Initialized   True
  Ready         True
  PodScheduled  True
Volumes:
  test-volume:
    Type:        vSphereVolume (a Persistent Disk resource in vSphere)
    VolumePath:  [DatastoreCluster/sharedVmfs-0] kubevols/test.vmdk
    FSType:      ext4
  default-token-cqcq1:
    Type:        Secret (a volume populated by a Secret)
    SecretName:  default-token-cqcq1
QoS Class:    BestEffort
Tolerations:  <none>
Events:
  FirstSeen  LastSeen  Count  From                 SubObjectPath                    Type    Reason     Message
  ---------  --------  -----  ----                 -------------                    ----    ------     -------
  44s        44s       1      {default-scheduler }                                  Normal  Scheduled  Successfully assigned inject-pod to node3
  26s        26s       1      {kubelet node3}      spec.containers{test-container}  Normal  Pulled     Container image "gcr.io/google_containers/busybox:1.24" already present on machine
  26s        26s       1      {kubelet node3}      spec.containers{test-container}  Normal  Created    Created container with id ed14e058fbcc9c2d8d30ff67bd614e45cf086afbbff070744c5a461e87c45103
  26s        26s       1      {kubelet node3}      spec.containers{test-container}  Normal  Started    Started container with id ed14e058fbcc9c2d8d30ff67bd614e45cf086afbbff070744c5a461e87c45103
```
**Release note**:
```release-note
none
```
cc: @BaluDontu @moserke @tusharnt @pdhamdhere