Automatic merge from submit-queue (batch tested with PRs 46076, 43879, 44897, 46556, 46654)
kubelet/network: report but tolerate errors returned from GetNetNS()
Runtimes should never return an empty netns path together with a nil error, since network plugin
drivers need to treat the netns differently in different cases. So return
an error when we can't get the netns, and fix up the plugins to do the
right thing.
Namely, we don't need a netns on pod network teardown, but we do need
a netns for pod Status checks and for network setup.
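A minimal sketch of the intended behavior, assuming a simplified plugin interface; the `runtimeHost` and `plugin` types below are illustrative stand-ins, not the kubelet's real types:

```go
package netnsdemo

import "fmt"

// runtimeHost stands in for the container runtime side: it should now return
// an error instead of ("", nil) when it cannot determine a sandbox's netns path.
type runtimeHost interface {
	GetNetNS(containerID string) (string, error)
}

type plugin struct {
	host runtimeHost
}

// Network setup (and, likewise, pod Status checks) needs a netns,
// so a GetNetNS failure is fatal here.
func (p *plugin) SetUpPod(podName, containerID string) error {
	netnsPath, err := p.host.GetNetNS(containerID)
	if err != nil {
		return fmt.Errorf("cannot set up pod %q: %v", podName, err)
	}
	return configureInterfaces(netnsPath)
}

// Teardown does not strictly need a netns: report the error but keep
// releasing whatever state (IP reservations, iptables rules) we can.
func (p *plugin) TearDownPod(podName, containerID string) error {
	netnsPath, err := p.host.GetNetNS(containerID)
	if err != nil {
		fmt.Printf("warning: no netns for pod %q during teardown: %v\n", podName, err)
		netnsPath = ""
	}
	return releaseResources(podName, netnsPath)
}

// Placeholders for the real per-plugin work.
func configureInterfaces(netnsPath string) error       { return nil }
func releaseResources(podName, netnsPath string) error { return nil }
```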
@kubernetes/rh-networking @kubernetes/sig-network-bugs @DirectXMan12
** reason for this change **
CNI has recently introduced a new configuration list feature, which allows
plugin chaining and supports plugins of varied versions.
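A hedged sketch of what such a chained configuration looks like; the conflist JSON and the `confList` struct below are illustrative examples of the public schema, not code from this change:

```go
package main

import (
	"encoding/json"
	"fmt"
)

// confList mirrors the top-level shape of a CNI configuration list:
// one network name, one CNI version, and an ordered chain of plugins.
type confList struct {
	CNIVersion string            `json:"cniVersion"`
	Name       string            `json:"name"`
	Plugins    []json.RawMessage `json:"plugins"`
}

const example = `{
  "cniVersion": "0.3.1",
  "name": "mynet",
  "plugins": [
    { "type": "bridge", "bridge": "cni0",
      "ipam": { "type": "host-local", "subnet": "10.22.0.0/16" } },
    { "type": "portmap", "capabilities": { "portMappings": true } }
  ]
}`

func main() {
	var list confList
	if err := json.Unmarshal([]byte(example), &list); err != nil {
		panic(err)
	}
	// Plugins run in order: "bridge" sets up the interface, then "portmap"
	// adds host-port mappings for the same pod.
	fmt.Printf("network %q chains %d plugins\n", list.Name, len(list.Plugins))
}
```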
This fixes the race in rktnetes that happens when pod B invokes
'kubenet.SetUpPod()' before another pod A is actually running.
The second 'kubenet.SetUpPod()' call does not pick up pod A and thus
overwrites the host-port iptables rules, which breaks pod A.
This PR fixes the case by listing all 'active pods' (all non-exited
pods) instead of only running pods.
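A minimal sketch of the idea, using made-up pod and rule types rather than the kubelet's real ones: host-port rules are rebuilt from every non-exited pod, so a pod that is still starting keeps its mappings.

```go
package main

type podPhase int

const (
	podPending podPhase = iota
	podRunning
	podExited
)

type pod struct {
	name      string
	phase     podPhase
	hostPorts []int
}

// activePods returns every pod that has not exited; this is the set whose
// host ports must survive a concurrent SetUpPod for another pod.
func activePods(pods []pod) []pod {
	var out []pod
	for _, p := range pods {
		if p.phase != podExited {
			out = append(out, p)
		}
	}
	return out
}

// syncHostPortRules rebuilds the rule set from scratch; iterating over the
// active set (instead of only running pods) avoids dropping rules for a pod
// whose SetUpPod has completed but which is not yet marked Running.
func syncHostPortRules(pods []pod) map[string][]int {
	rules := map[string][]int{}
	for _, p := range activePods(pods) {
		rules[p.name] = p.hostPorts
	}
	return rules
}

func main() {
	pods := []pod{
		{name: "podA", phase: podPending, hostPorts: []int{8080}}, // still starting
		{name: "podB", phase: podRunning, hostPorts: []int{9090}},
	}
	_ = syncHostPortRules(pods) // podA's port 8080 is preserved
}
```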
MTU selection is difficult, and may be impossible if a transport such as
IPsec is in use. So we allow the MTU to be specified with the
network-plugin-mtu flag, and we pass this down into the network
provider.
Currently this is implemented by kubenet.
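A hedged sketch of threading such a flag into a plugin's initialization; the `networkPlugin` interface and `kubenetStub` type below are simplified stand-ins for the kubelet's real plugin types:

```go
package main

import (
	"flag"
	"fmt"
)

// networkPlugin is a simplified stand-in for the kubelet's plugin interface;
// the real Init method takes additional arguments.
type networkPlugin interface {
	Init(mtu int) error
}

type kubenetStub struct{ mtu int }

func (k *kubenetStub) Init(mtu int) error {
	// mtu == 0 means "not set": let the plugin choose its own default.
	k.mtu = mtu
	return nil
}

func main() {
	mtu := flag.Int("network-plugin-mtu", 0, "MTU to pass to the network plugin (0 = auto)")
	flag.Parse()

	var plugin networkPlugin = &kubenetStub{}
	if err := plugin.Init(*mtu); err != nil {
		panic(err)
	}
	fmt.Println("network plugin initialized with MTU", *mtu)
}
```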