- Add reclaim policy to newVolume() call.
- Implement reactor Volumes().Get().
- Implement mock volume plugin.
- Add recycler tests.
- Add a synchronization condition to controller.scheduleOperation - we need
to pause the controller here, let the test do some bad things to the
controller, and test error cases in recycleVolumeOperation.
The test framework gets more and more complicated... But this is the last
piece, I promise.
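A minimal sketch of such a synchronization hook, assuming a hook field that only tests set (the `preOperationHook` field and the trimmed `scheduleOperation` signature below are illustrative, not the actual code in this PR):

```go
package volumecontroller

// PersistentVolumeController is a trimmed, hypothetical stand-in for the
// real controller type; only the test hook is shown.
type PersistentVolumeController struct {
	// preOperationHook, when set by a test, is called before every
	// scheduled operation so the test can pause the controller, mutate
	// its state, and exercise error paths in recycleVolumeOperation.
	// It is nil in production.
	preOperationHook func(operationName string)
}

// scheduleOperation runs the operation asynchronously, giving tests a
// chance to interpose via the hook first.
func (ctrl *PersistentVolumeController) scheduleOperation(operationName string, operation func()) {
	if ctrl.preOperationHook != nil {
		ctrl.preOperationHook(operationName)
	}
	go operation()
}
```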
We need to keep a list of running recyclers, deleters, and provisioners in
memory so that we don't start recycling/deleting/provisioning the same
volume/claim twice.
This will eventually be replaced by GoRoutineMap from PR #24838.
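A minimal sketch of this bookkeeping; all identifiers below (`runningOperations`, `start`, `finish`) are illustrative placeholders for what GoRoutineMap will eventually provide:

```go
package volumecontroller

import "sync"

// runningOperations tracks which recycle/delete/provision operations are
// currently in flight, keyed by volume or claim name.
type runningOperations struct {
	mu  sync.Mutex
	ops map[string]bool
}

// start marks the operation as running and reports whether it was already
// in progress, in which case the caller must not start it again.
func (r *runningOperations) start(name string) (alreadyRunning bool) {
	r.mu.Lock()
	defer r.mu.Unlock()
	if r.ops[name] {
		return true
	}
	if r.ops == nil {
		r.ops = map[string]bool{}
	}
	r.ops[name] = true
	return false
}

// finish removes the operation so it can be scheduled again later.
func (r *runningOperations) finish(name string) {
	r.mu.Lock()
	defer r.mu.Unlock()
	delete(r.ops, name)
}
```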
There is a race in dynamic provisioning:
1. User creates a claim to dynamically provision a volume.
2. Kubernetes starts provisioning the volume.
3. User deletes the claim before 2. is finished.
4. Kubernetes finishes provisioning the volume.
The volume is now in the Pending state. It should be deleted instead of
moving to the Available state.
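A hedged sketch of the intended behaviour after provisioning finishes; the `claimGetter` interface and the `finishProvisioning` helper are hypothetical, introduced only to illustrate the check:

```go
package volumecontroller

// claimGetter abstracts the API server lookup; in the real controller this
// is backed by the Kubernetes client.
type claimGetter interface {
	ClaimExists(namespace, name string) (bool, error)
}

// finishProvisioning checks whether the claim still exists once the cloud
// volume has been created. If the user deleted the claim while provisioning
// was in flight, the new volume is deleted instead of becoming Available.
func finishProvisioning(claims claimGetter, namespace, claimName string,
	deleteVolume func() error, markAvailable func() error) error {
	exists, err := claims.ClaimExists(namespace, claimName)
	if err != nil {
		return err
	}
	if !exists {
		// Claim is gone; release the provisioned volume so it does not
		// become an orphan in the cloud.
		return deleteVolume()
	}
	return markAvailable()
}
```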
Volume names now have the format <cluster-name>-dynamic-<pv-name>.
pv-name is guaranteed to be unique within the Kubernetes cluster; adding
<cluster-name> ensures we don't conflict with any other cluster running in
the same cloud project (kube-controller-manager --cluster-name=XXX).
'kubernetes' is the default cluster name.
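A small illustrative helper showing the naming scheme (the function name is made up for this sketch):

```go
package volumecontroller

import "fmt"

// provisionedVolumeName builds the cloud volume name described above:
// <cluster-name>-dynamic-<pv-name>. clusterName comes from the
// kube-controller-manager --cluster-name flag ("kubernetes" by default),
// pvName is the unique name of the PersistentVolume object.
func provisionedVolumeName(clusterName, pvName string) string {
	return fmt.Sprintf("%s-dynamic-%s", clusterName, pvName)
}
```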
The recycle controller tries to recycle or delete a PV several times.
It stores the count of failed attempts and the timestamp of the last attempt
in the PV's annotations.
By default, the controller tries to recycle/delete a PV 3 times, with a
10-minute interval between attempts. These values are configurable via the
kube-controller-manager --pv-recycler-maximum-retry=X
--pvclaimbinder-sync-period=Y arguments.
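A sketch of the retry bookkeeping; the annotation keys and the helper below are illustrative, the real names and defaults come from the controller and its flags:

```go
package volumecontroller

import (
	"strconv"
	"time"
)

// Illustrative annotation keys for the retry bookkeeping.
const (
	annFailedRecycleAttempts = "example.kubernetes.io/failed-recycle-attempts"
	annLastRecycleAttempt    = "example.kubernetes.io/last-recycle-attempt"
)

// shouldRetryRecycle reads the number of failed attempts and the time of
// the last attempt from the PV's annotations and allows a new attempt only
// while the count is below maxRetries and the configured interval has
// elapsed since the last failure.
func shouldRetryRecycle(annotations map[string]string, maxRetries int, interval time.Duration, now time.Time) bool {
	attempts, _ := strconv.Atoi(annotations[annFailedRecycleAttempts])
	if attempts >= maxRetries {
		return false
	}
	last, err := time.Parse(time.RFC3339, annotations[annLastRecycleAttempt])
	if err == nil && now.Sub(last) < interval {
		return false // too soon after the previous failed attempt
	}
	return true
}
```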
Fixes #19860 (it may be easier to look at the issue to see the exact
sequence that reproduces the bug and to understand the fix).
When PersistentVolumeProvisionerController.reconcileClaim() is called with
the same claim in short succession (e.g. the claim is created by a user and
at the same time a periodic check of all claims is scheduled), the second
reconcileClaim() call gets an old copy of the claim as its parameter.
The method should always reload the claim to get a fresh copy with all
annotations, possibly added by a previous reconcileClaim() call.
The same applies to PersistentVolumeClaimBinder.syncClaim().
Also update all the tests to store claims in the "fake" API server before
calling syncClaim and reconcileClaim.
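A sketch of the reload-before-processing pattern; the `claimLoader` interface and the trimmed types below are hypothetical stand-ins for the Kubernetes client and API types:

```go
package volumecontroller

// PersistentVolumeClaim stands in for the real API type in this sketch.
type PersistentVolumeClaim struct {
	Namespace, Name string
	Annotations     map[string]string
}

// claimLoader abstracts fetching the latest claim from the API server.
type claimLoader interface {
	LatestClaim(namespace, name string) (*PersistentVolumeClaim, error)
}

// reconcileClaim ignores the possibly stale claim passed in by the caller
// and reloads it first, so it sees any annotations added by a previous
// reconcileClaim() call.
func reconcileClaim(loader claimLoader, stale *PersistentVolumeClaim) error {
	claim, err := loader.LatestClaim(stale.Namespace, stale.Name)
	if err != nil {
		return err
	}
	// ... continue reconciliation using the fresh copy `claim`,
	// never the stale parameter.
	_ = claim
	return nil
}
```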
We should always load the newest version of the volume from the API server
before processing it.
When PersistentVolumeProvisionerController.reconcileVolume() is called with
the same volume in short succession (e.g. the volume is created by a
provisioner and at the same time a periodic check of all volumes is
scheduled), the second reconcileVolume() call gets an old copy of the volume
as its parameter and does not see annotations updated by the previous call.
This may result in one volume being provisioned several times, creating
orphaned volumes in the cloud.
The same error is in PersistentVolumeClaimBinder.syncVolume().
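The same reload-first pattern for volumes, sketched with an illustrative annotation key and a hypothetical `volumeLoader` interface:

```go
package volumecontroller

// annAlreadyProvisioned is an illustrative annotation key marking a volume
// whose cloud resource has already been created.
const annAlreadyProvisioned = "example.kubernetes.io/provisioned"

// volumeLoader abstracts fetching the newest volume from the API server.
type volumeLoader interface {
	LatestVolume(name string) (annotations map[string]string, err error)
}

// shouldProvision decides on a freshly loaded copy of the volume, so a
// previous reconcileVolume() call that already provisioned the cloud
// resource (and recorded that in an annotation) is not repeated, avoiding
// orphaned cloud volumes.
func shouldProvision(volumes volumeLoader, name string) (bool, error) {
	annotations, err := volumes.LatestVolume(name)
	if err != nil {
		return false, err
	}
	_, alreadyDone := annotations[annAlreadyProvisioned]
	return !alreadyDone, nil
}
```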