Due to a recent os.Stat() issue on Windows 20H2, add logic to fall back
to os.Lstat() when os.Stat() fails on Windows.
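A minimal sketch of the fallback, using a hypothetical helper name
(statWithFallback); the real change applies the same pattern at the
existing stat call sites:

    package main

    import (
        "fmt"
        "os"
        "runtime"
    )

    // statWithFallback is a hypothetical helper illustrating the change:
    // on Windows, fall back to os.Lstat when os.Stat fails.
    func statWithFallback(path string) (os.FileInfo, error) {
        info, err := os.Stat(path)
        if err != nil && runtime.GOOS == "windows" {
            // os.Lstat does not follow symlinks and can succeed
            // where os.Stat fails on Windows 20H2.
            return os.Lstat(path)
        }
        return info, err
    }

    func main() {
        info, err := statWithFallback(".")
        if err != nil {
            fmt.Println("stat failed:", err)
            return
        }
        fmt.Println("name:", info.Name())
    }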
Change-Id: Ic9fc07ecc6e9c2984e854ac77121ff61c8e0d041
Improved testing turned these up:
1) Headless+Selectorless, on a single-stack cluster, policy=PreferDual
Prior to this commit, the result was a single IPFamily (because we
checked that the 2nd allocator was present). This changes that case to
populate both families (we don't care if the allocator exists), which is
the same as RequireDual.
2) ClusterIP, user specifies 2 families but no IPs
Prior to this commit, the policy was inferred to be SingleStack. This
changes that case to correctly default to RequireDual when two families
are requested but no IPs are specified. Both fixes are sketched below.
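A minimal Go sketch of both fixes, using hypothetical local stand-ins
for the real API types (v1.IPFamily, v1.IPFamilyPolicy) and function
names; the real logic lives in the service IP allocation code:

    package main

    import "fmt"

    // Hypothetical stand-ins for the Kubernetes API types.
    type IPFamily string
    type IPFamilyPolicy string

    const (
        IPv4             IPFamily       = "IPv4"
        IPv6             IPFamily       = "IPv6"
        SingleStack      IPFamilyPolicy = "SingleStack"
        PreferDualStack  IPFamilyPolicy = "PreferDualStack"
        RequireDualStack IPFamilyPolicy = "RequireDualStack"
    )

    // inferPolicy sketches case 2: two requested families with no
    // explicit IPs now default to RequireDualStack, not SingleStack.
    func inferPolicy(requestedFamilies []IPFamily, clusterIPs []string) IPFamilyPolicy {
        if len(requestedFamilies) == 2 && len(clusterIPs) == 0 {
            return RequireDualStack
        }
        return SingleStack
    }

    // headlessFamilies sketches case 1: a headless, selectorless
    // service with PreferDualStack now gets both families populated
    // without checking whether a second allocator exists, matching
    // RequireDualStack behavior.
    func headlessFamilies(policy IPFamilyPolicy) []IPFamily {
        if policy == PreferDualStack || policy == RequireDualStack {
            return []IPFamily{IPv4, IPv6}
        }
        return []IPFamily{IPv4}
    }

    func main() {
        fmt.Println(inferPolicy([]IPFamily{IPv4, IPv6}, nil)) // RequireDualStack
        fmt.Println(headlessFamilies(PreferDualStack))        // [IPv4 IPv6]
    }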
Without this error, kube-scheduler simply ignored the special volume
source and scheduled the pod. This was unlikely to work in practice,
because the volume might have needed binding, or the feature might also
be disabled on kubelet, which then does not know what to do with the
volume.
Silently ignoring the unsupported volume type leads to:
Warning FailedMount 8s kubelet Unable to attach or mount volumes: unmounted volumes=[my-csi-volume default-token-bsnbz], unattached volumes=[my-csi-volume default-token-bsnbz]: failed to get Plugin from volumeSpec for volume "my-csi-volume" err=no volume plugin matched
The new message is easier to understand:
Warning FailedMount 6s (x5 over 49s) kubelet Unable to attach or mount volumes: unmounted volumes=[my-csi-volume], unattached volumes=[my-csi-volume default-token-rwlpp]: volume my-csi-volume is a generic ephemeral volume, but that feature is disabled in kubelet
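A hedged sketch of the new check, with stand-in types and a stand-in
feature-gate flag (the real code uses the Kubernetes feature-gate
machinery and API types):

    package main

    import "fmt"

    // Volume is a stand-in for the API type; a non-nil Ephemeral field
    // marks a generic ephemeral volume source.
    type Volume struct {
        Name      string
        Ephemeral *struct{}
    }

    // genericEphemeralEnabled stands in for the GenericEphemeralVolume
    // feature gate.
    var genericEphemeralEnabled = false

    // checkEphemeralVolume sketches the new behavior: instead of
    // silently ignoring the unsupported volume source, return an
    // explicit error naming the disabled feature, as in the event
    // shown above.
    func checkEphemeralVolume(vol Volume) error {
        if vol.Ephemeral != nil && !genericEphemeralEnabled {
            return fmt.Errorf("volume %s is a generic ephemeral volume, but that feature is disabled", vol.Name)
        }
        return nil
    }

    func main() {
        if err := checkEphemeralVolume(Volume{Name: "my-csi-volume", Ephemeral: &struct{}{}}); err != nil {
            fmt.Println(err)
        }
    }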
This updates the EndpointSlice controller to make use of the
EndpointSlice tracker to identify when expected changes are not present
in the cache yet. If this is detected, the controller will wait to sync
until all expected updates have been received. This should help avoid
race conditions that would result in duplicate EndpointSlices or failed
attempts to update stale EndpointSlices. To simplify this logic, this
also moves the EndpointSlice tracker from relying on resource versions
to generations.
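A minimal sketch of generation-based staleness tracking, with
hypothetical names; the real tracker in the EndpointSlice controller is
more involved, but the core idea is the same:

    package main

    import "fmt"

    // generationTracker records the generation the controller expects
    // for each EndpointSlice; the cache is treated as stale until the
    // informer has delivered at least that generation.
    type generationTracker struct {
        expected map[string]int64 // slice name -> expected generation
    }

    func newGenerationTracker() *generationTracker {
        return &generationTracker{expected: map[string]int64{}}
    }

    // ExpectGeneration records the generation the controller just wrote.
    func (t *generationTracker) ExpectGeneration(name string, gen int64) {
        t.expected[name] = gen
    }

    // StaleSlices reports whether a sync should wait because the cached
    // generation lags behind what the controller expects.
    func (t *generationTracker) StaleSlices(cached map[string]int64) bool {
        for name, want := range t.expected {
            if got, ok := cached[name]; !ok || got < want {
                return true
            }
        }
        return false
    }

    func main() {
        t := newGenerationTracker()
        t.ExpectGeneration("slice-a", 3)
        fmt.Println(t.StaleSlices(map[string]int64{"slice-a": 2})) // true: wait to sync
        fmt.Println(t.StaleSlices(map[string]int64{"slice-a": 3})) // false: safe to sync
    }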
The `apparmor_parser` binary is not strictly required for a system to
run AppArmor from a Kubernetes perspective. Applying the profile is the
responsibility of lower-level runtimes like CRI-O and containerd, which
may perform the binary check on their own.
This synchronizes the current libcontainer implementation with the
vendored Kubernetes source code and allows distributions to use
AppArmor, even when they do not have the parser available in
`/sbin/apparmor_parser`.
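A sketch of the relaxed check, assuming the kernel interface under
/sys/module/apparmor is consulted instead of requiring the parser
binary to exist:

    package main

    import (
        "fmt"
        "os"
        "strings"
    )

    // isAppArmorEnabled sketches the relaxed check: ask the kernel
    // whether AppArmor is enabled rather than testing for
    // /sbin/apparmor_parser.
    func isAppArmorEnabled() bool {
        data, err := os.ReadFile("/sys/module/apparmor/parameters/enabled")
        return err == nil && strings.HasPrefix(string(data), "Y")
    }

    func main() {
        fmt.Println("AppArmor enabled:", isAppArmorEnabled())
    }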
Signed-off-by: Sascha Grunert <mail@saschagrunert.de>