During volume detach, the following might happen in the reconciler:
1. A pod is being deleted.
2. The volume is removed from reportedAsAttached, so the node status updater
   will update the volumesAttached list.
3. Detach fails due to some issue.
4. The volume is added back to reportedAsAttached.
5. The reconciler loops over the volume again and removes it from
   reportedAsAttached.
6. Detach is not triggered because of exponential backoff; the detach call
   fails with an exponential backoff error.
7. Another pod using the same volume is added on the same node.
8. The reconciler loops again and will NOT try to trigger detach anymore.
At this point the volume is still attached and present in the actual state,
but the volumesAttached list in node status no longer contains it, which
blocks the volume mount from kubelet.
The first-round fix was to add the volume back into the list of volumes to be
reported as attached at step 6, when the detach call fails with an
(exponential backoff) error. However, this might cause a performance issue if
detach keeps failing for a while: during that time the volume would be
repeatedly removed from and added back to node status, causing a surge of API
calls.
So we changed the logic to first check whether the operation is safe to retry,
meaning there is no pending operation and it is not within the exponential
backoff period, before calling detach. This way we avoid repeatedly removing
and re-adding the volume in node status.
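A minimal sketch of the reordered check; the interface and method names here
are illustrative stand-ins for the real reconciler, operation executor, and
actual-state-of-world code:

    // Sketch only: all identifiers are illustrative stand-ins for the
    // attach/detach controller's reconciler and operation executor.
    package reconciler

    type operationExecutor interface {
        // IsOperationSafeToRetry reports whether no operation is pending
        // for this volume/node and it is outside the backoff window.
        IsOperationSafeToRetry(volume, node string) bool
        DetachVolume(volume, node string) error
    }

    type actualStateOfWorld interface {
        RemoveVolumeFromReportedAsAttached(volume, node string)
        AddVolumeToReportedAsAttached(volume, node string)
    }

    type reconciler struct {
        oe  operationExecutor
        asw actualStateOfWorld
    }

    // maybeDetach shows the reordered logic: check retry safety first,
    // and only then touch reportedAsAttached and call detach.
    func (rc *reconciler) maybeDetach(volume, node string) {
        // While an operation is pending or backoff is active, do nothing:
        // the volume stays in reportedAsAttached, so node status is not
        // churned by remove/add cycles (the problem with the first fix).
        if !rc.oe.IsOperationSafeToRetry(volume, node) {
            return
        }
        // Safe to retry: drop the volume from reportedAsAttached so the
        // node status updater removes it from volumesAttached, then detach.
        rc.asw.RemoveVolumeFromReportedAsAttached(volume, node)
        if err := rc.oe.DetachVolume(volume, node); err != nil {
            // Detach failed for another reason: report the volume as
            // attached again so kubelet can still see and mount it.
            rc.asw.AddVolumeToReportedAsAttached(volume, node)
        }
    }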
Change-Id: I5d4e760c880d72937d34b9d3e904ecad125f802e
This cleans up a log message that looks like:
I0312 14:36:50.280018 12866 operation_generator.go:869] UnmountDevice succeeded for volume "my-volume" %!(EXTRA string=UnmountDevice succeeded for volume "my-volume" (UniqueName: "kubernetes.io/csi/smb.csi.k8s.io^my-volume") on node "my-node")
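The %!(EXTRA string=...) tail is how Go's fmt package reports surplus
arguments: the short message was used as the format string while the detailed
message was passed again as an argument. A minimal reproduction with plain
fmt (klog formats through the same rules):

    package main

    import "fmt"

    func main() {
        short := `UnmountDevice succeeded for volume "my-volume"`
        full := short + ` (UniqueName: "kubernetes.io/csi/smb.csi.k8s.io^my-volume") on node "my-node"`

        // Bug: the short message is used as the format string even though
        // it contains no %-verbs, and the full message is passed as an
        // argument, so fmt appends %!(EXTRA string=...) to the output.
        fmt.Printf(short, full)
        fmt.Println()

        // Fix: log the detailed message once, with no stray arguments.
        fmt.Println(full)
    }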
Kubelet should mark the volume mount in the actual state even if volume
expansion fails, so that the reconciler can tear down the volume when needed.
To prevent pods from starting to use it, mark the volume as uncertain instead
of mounted.
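A sketch of that flow, assuming hypothetical actual-state-of-world markers
(MarkVolumeAsMounted, MarkVolumeMountAsUncertain):

    // Sketch only: the interface stands in for kubelet's actual state
    // of the world bookkeeping.
    package operationgenerator

    import "fmt"

    type actualStateOfWorld interface {
        MarkVolumeAsMounted(volume string) error
        MarkVolumeMountAsUncertain(volume string) error
    }

    // mountAndExpand records the mount even when expansion fails: as
    // uncertain, the volume stays visible to the reconciler for teardown,
    // but pods will not start using it.
    func mountAndExpand(asw actualStateOfWorld, volume string, expand func() error) error {
        if expandErr := expand(); expandErr != nil {
            if markErr := asw.MarkVolumeMountAsUncertain(volume); markErr != nil {
                return fmt.Errorf("expand failed: %v; marking uncertain failed: %v", expandErr, markErr)
            }
            return expandErr
        }
        // Expansion succeeded: safe to report the volume as mounted.
        return asw.MarkVolumeAsMounted(volume)
    }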
Will add unit test after the logic is reviewed.
Change-Id: I5aebfa11ec93235a87af8f17bea7f7b1570b603d
When UnmountDevice fails, kubelet treats the volume mount as uncertain,
because it does not know at which stage UnmountDevice failed; the device may
already be partially unmounted / destroyed.
As a result, MountDevice will be performed when a new Pod is started on the
node after an UnmountDevice failure.
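A sketch of the failure path, with hypothetical device-state markers:

    // Sketch only: deviceState stands in for the actual state of the
    // world's device-mount bookkeeping.
    package operationgenerator

    type deviceState interface {
        MarkDeviceAsUnmounted(volume string) error
        MarkDeviceAsUncertain(volume string) error
    }

    // unmountDevice records an uncertain state on failure, which forces
    // MountDevice to run again before a new Pod can use the volume.
    func unmountDevice(ds deviceState, volume string, unmount func() error) error {
        if err := unmount(); err != nil {
            // Unknown how far UnmountDevice got; the device may be
            // partially unmounted or destroyed, so do not assume it is
            // either still mounted or fully unmounted.
            _ = ds.MarkDeviceAsUncertain(volume)
            return err
        }
        return ds.MarkDeviceAsUnmounted(volume)
    }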
When NodeStage times out without preparing the destination device and the
user deletes the corresponding pod, the driver may continue staging the
volume in the background. Kubernetes must call NodeUnstage to "cancel" this
operation. Therefore TearDownDevice should be called even when the target
directory does not exist (yet).
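A sketch of the fixed teardown, with a hypothetical nodeUnstage callback; the
point is that a missing staging directory is no longer an early return:

    package csi

    import "os"

    // tearDownDevice always issues NodeUnstage, even when the staging
    // directory was never created, to cancel a NodeStage that timed out
    // but may still be running inside the driver.
    func tearDownDevice(stagingPath string, nodeUnstage func(path string) error) error {
        // Old behavior (wrong): return nil here when stagingPath does
        // not exist, leaving a background NodeStage running forever.
        if err := nodeUnstage(stagingPath); err != nil {
            return err
        }
        // Clean up the directory only if it actually exists.
        if _, err := os.Stat(stagingPath); err == nil {
            return os.Remove(stagingPath)
        }
        return nil
    }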
Volume mount should be marked as uncertain after a NodeStage / NodePublish
timeout or similar error, when the driver may continue with the operation in
the background.
Filesystem mismatch is a special event. It could indicate either that the
user has asked for an incorrect filesystem or that there is an error from
which the mount operation cannot recover on retry.
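One possible shape of this classification, sketched with gRPC status codes
(the real error plumbing in kubernetes differs): a timeout leaves the result
unknown, while a filesystem mismatch is final:

    package csi

    import (
        "google.golang.org/grpc/codes"
        "google.golang.org/grpc/status"
    )

    // shouldMarkUncertain sketches the split described above: a gRPC
    // DeadlineExceeded means the driver may still be staging/publishing
    // in the background, so the mount state is uncertain; a filesystem
    // mismatch is a final error that a retry cannot fix.
    func shouldMarkUncertain(err error, fsMismatch bool) bool {
        if fsMismatch {
            return false
        }
        if s, ok := status.FromError(err); ok {
            return s.Code() == codes.DeadlineExceeded
        }
        return false
    }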
Co-Authored-By: Jordan Liggitt <jordan@liggitt.net>
- Move SetUpDevice to BlockVolumeStager
- Move MapPodDevice to BlockVolumePublisher
- Move TearDownDevice to BlockVolumeUnstager
- Move UnmapPodDevice to BlockVolumeUnpublisher
- Implement BlockVolumePublisher only in the local and csi plugins
- Implement BlockVolumeUnstager only in the fc, iscsi, rbd, and csi plugins
- Implement BlockVolumeStager and BlockVolumeUnpublisher only in the csi plugin
- Rename MapDevice to MapPodDevice in BlockVolumeMapper
- Add UnmapPodDevice in BlockVolumeUnmapper (This will be used by csi driver later)
- Add CustomBlockVolumeMapper and CustomBlockVolumeUnmapper interface
- Move SetUpDevice and MapPodDevice to CustomBlockVolumeMapper
- Move TearDownDevice and UnmapPodDevice to CustomBlockVolumeUnmapper
- Implement CustomBlockVolumeMapper only in the local and csi plugins
- Implement CustomBlockVolumeUnmapper only in the fc, iscsi, rbd, and csi plugins
- Change MapPodDevice to return a path and SetUpDevice to not return a path
  (see the interface sketch below)
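A sketch of the resulting interface split, reconstructed from the list above;
the signatures are illustrative rather than the exact ones in pkg/volume:

    package volume

    // Base interfaces, trimmed to what the sketch needs.
    type BlockVolumeMapper interface{}
    type BlockVolumeUnmapper interface{}

    // CustomBlockVolumeMapper is implemented only by the local and csi
    // plugins, which stage/publish block devices themselves.
    type CustomBlockVolumeMapper interface {
        BlockVolumeMapper
        // SetUpDevice stages the volume; after this change it no longer
        // returns a path.
        SetUpDevice() error
        // MapPodDevice publishes the volume for a pod and now returns
        // the path the pod should use.
        MapPodDevice() (string, error)
    }

    // CustomBlockVolumeUnmapper is implemented only by the fc, iscsi,
    // rbd, and csi plugins.
    type CustomBlockVolumeUnmapper interface {
        BlockVolumeUnmapper
        // TearDownDevice unstages the volume from the node.
        TearDownDevice(mapPath, devicePath string) error
        // UnmapPodDevice unpublishes the volume for a pod.
        UnmapPodDevice() error
    }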