In the following code pattern, the log message gets logged with v=0 in the JSON
output even though conceptually it has a higher verbosity:
if klog.V(5).Enabled() {
    klog.Info("hello world")
}
Having the actual verbosity in the JSON output is relevant, for example when
filtering for only the important info messages. The solution is to use
klog.V(5).Info or something similar.
Whether the outer if is necessary at all depends on how complex the parameters
are. The return value of klog.V can be captured in a variable and used
multiple times to avoid the overhead of that function call and to avoid
repeating the verbosity level.
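A minimal sketch of both variants, assuming the k8s.io/klog/v2 API (the exact
import path depends on the branch):
```
package main

import (
	"flag"

	"k8s.io/klog/v2"
)

func main() {
	klog.InitFlags(nil)
	flag.Parse()
	defer klog.Flush()

	// Logged at v=5, so the verbosity is preserved in the structured output.
	klog.V(5).Info("hello world")

	// Capture the Verbose value once and reuse it: Enabled() still guards
	// expensive argument construction, without dropping back to v=0.
	if logger := klog.V(5); logger.Enabled() {
		logger.Info("hello world, with expensive details")
	}
}
```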
During volume detach, the following might happen in the reconciler:
1. Pod is being deleted
2. The volume is removed from reportedAsAttached, so the node status updater
   will update the volumeAttached list
3. Detach fails due to some issue
4. The volume is added back to reportedAsAttached
5. The reconciler processes the volume again and removes it from
   reportedAsAttached
6. Detach is not triggered because of exponential backoff; the detach call
   fails with an exponential backoff error
7. Another pod is added that uses the same volume on the same node
8. The reconciler loops again and will NOT try to trigger detach anymore
At this point, the volume is still attached and in the actual state, but the
volumeAttached list in the node status no longer contains this volume, which
will block the volume mount from kubelet.
The fix in the first round was to add the volume back into the list of volumes
that need to be reported as attached at step 6, when the detach call fails
with an error (exponential backoff). However, this might cause a performance
issue if detach keeps failing for a while: during that time the volume would
keep being removed from and added back to the node status, causing a surge of
API calls.
So we changed the logic to first check whether the operation is safe to retry,
meaning there is no pending operation or it is not in the exponential backoff
time period, before calling detach. This way we can avoid repeatedly removing
and adding the volume from/to the node status.
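A minimal sketch of that retry check, using illustrative stand-in types (the
real logic lives in the attach/detach controller's reconciler and the
operation executor; pendingOperations and isOperationSafeToRetry here are
hypothetical names):
```
package main

import (
	"fmt"
	"time"
)

// pendingOperations is an illustrative stand-in for the operation executor's
// bookkeeping of previously failed detach operations.
type pendingOperations struct {
	backoffUntil map[string]time.Time // volume/node key -> end of backoff window
}

// isOperationSafeToRetry reports whether detach may be attempted: either
// there is no pending operation for this volume/node, or its exponential
// backoff period has already expired.
func (p *pendingOperations) isOperationSafeToRetry(key string) bool {
	until, exists := p.backoffUntil[key]
	if !exists {
		return true
	}
	return time.Now().After(until)
}

func main() {
	ops := &pendingOperations{
		backoffUntil: map[string]time.Time{
			"volume-name/node1": time.Now().Add(30 * time.Second),
		},
	}
	key := "volume-name/node1"

	// Only remove the volume from reportedAsAttached and trigger detach when
	// the operation is safe to retry; otherwise leave the node status
	// untouched to avoid a surge of API calls.
	if ops.isOperationSafeToRetry(key) {
		fmt.Println("remove from reportedAsAttached and trigger detach:", key)
	} else {
		fmt.Println("skip detach, still in exponential backoff:", key)
	}
}
```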
Change-Id: I5d4e760c880d72937d34b9d3e904ecad125f802e
- Move from the old github.com/golang/glog to k8s.io/klog
- klog has an explicit InitFlags(), so we add the flags as necessary
- we update the other repositories that we vendor that made a similar
change from glog to klog
* github.com/kubernetes/repo-infra
* k8s.io/gengo/
* k8s.io/kube-openapi/
* github.com/google/cadvisor
- Entirely remove all references to glog
- Fix some tests by calling InitFlags explicitly in their init() methods
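  For example, a test package's init() can register klog's flags explicitly
  (a minimal sketch; the package name is hypothetical):
```
package foo_test // hypothetical test package

import (
	"k8s.io/klog"
)

func init() {
	// Register klog's command-line flags (-v, -logtostderr, ...) on the
	// default flag set so the test binary still understands them after the
	// glog -> klog move.
	klog.InitFlags(nil)
}
```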
Change-Id: I92db545ff36fcec83afe98f550c9e630098b3135
Automatic merge from submit-queue. If you want to cherry-pick this change to another branch, please follow the instructions [here](https://github.com/kubernetes/community/blob/master/contributors/devel/cherry-picks.md).
Add list of pods that use a volume to multiattach events
So users know which pods are blocking a volume and can recognize their error.
**Release note**:
```release-note
NONE
```
UX:
* Users can get one of the following events, depending on what other pod(s) are already using the volume and in which namespace they are:
```
Multi-Attach error for volume"volume-name" Volume is already exclusively attached to one node and can't be attached to another
Multi-Attach error for volume "volume-name" Volume is already used by pod(s) pod3 and 1 pod(s) in different namespaces
```
* controller-manager always gets full logs:
  * When the node where the volume is attached is known:
```
Multi-Attach error for volume "volume-name" (UniqueName: "fake-plugin/volume-name") from node "node1" Volume is already used by pods ns2/pod2, ns1/pod3 on node node2, node3
```
  * When the node where the volume is attached is not known:
```
Multi-Attach error for volume "volume-name" (UniqueName: "fake-plugin/volume-name") from node "node1" Volume is already exclusively attached to node node2 and can't be attached to another
```
/kind bug
/sig storage
/assign @gnufied
It is possible that, by the time we check for the multiattach
error on the node, the reconciler loop may not have processed the second
volume yet, and hence we retry checking for the multiattach error
on the node before giving up and marking the test as failed.
Relying only on the NewAttacher/Detacher call counts is not enough, as they
happen in parallel with the testing/verification code and thus the actual
attaching/detaching may not be done yet, resulting in flaky test results.
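A minimal sketch of the kind of poll-with-timeout the test relies on instead
of asserting immediately after the call counts; checkForMultiAttachError is a
hypothetical stand-in for the actual verification step:
```
package main

import (
	"fmt"
	"time"
)

// checkForMultiAttachError is a hypothetical stand-in for the test's
// verification step, e.g. inspecting the fake node status or recorded events.
func checkForMultiAttachError() bool {
	// ... inspect the reconciler's output here ...
	return false
}

// waitForMultiAttachError polls until the condition is observed or the
// timeout expires, since the reconciler runs in parallel with the test.
func waitForMultiAttachError(timeout, interval time.Duration) error {
	deadline := time.Now().Add(timeout)
	for time.Now().Before(deadline) {
		if checkForMultiAttachError() {
			return nil
		}
		time.Sleep(interval)
	}
	return fmt.Errorf("timed out waiting for multiattach error")
}

func main() {
	if err := waitForMultiAttachError(10*time.Second, 100*time.Millisecond); err != nil {
		fmt.Println("test would be marked failed:", err)
	}
}
```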
Fixes #46244
Automatic merge from submit-queue (batch tested with PRs 42252, 42251, 42249, 47512, 47887)
volumes: promote some logs from info -> warning
Part of #40583
```release-note
NONE
```