When adding functionality to the kubelet package and its test file, it
is kind of painful to run the unit tests locally today.
We usually can't run go test against just the test file: since
xx_test.go and xx.go are in the same package, we would have to list all
of that file's dependencies on the command line as well. As soon as
xx.go uses the Kubelet type (which we need in order to fake a kubelet
in the unit tests), this becomes impossible in practice.
So the options left are to run the unit tests for the whole package or
to run only a specific function. Running a single function can work in
some cases, but it is painful when we want to test all the functions we
wrote. On the other hand, running the tests for the whole package is
very slow.
Today some unit tests try to connect to the API server (with retries),
create and list lots of pods/volumes, etc. This makes running the unit
tests for the kubelet package slow.
This patch tries to make running the unit tests for the whole package
more palatable. It adds a skip when short mode is requested
(go test -short ...), so we don't try to connect to the API server and
we skip other slow tests as well.
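The mechanism is the standard short-mode check from the testing
package. A minimal sketch of the pattern (the test name and skip
message here are illustrative, not the exact ones touched by the
patch):

    func TestSyncPodsCreatesMirrorPod(t *testing.T) {
        if testing.Short() {
            // This test ends up connecting to the API server (with
            // retries), so skip it when -short is given.
            t.Skip("skipping test in short mode")
        }
        // ... rest of the test unchanged ...
    }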
Before this patch, running the unit tests took the following on my
computer (I've run it several times so the compilation is already
done):
$ time go test -v
real 0m21.303s
user 0m9.033s
sys 0m2.052s
With this patch it takes ~1/3 of the time:
$ time go test -short -v
real 0m7.825s
user 0m9.588s
sys 0m1.723s
Waiting around 8 seconds to run the tests is something I can live with :)
Signed-off-by: Rodrigo Campos <rodrigoca@microsoft.com>
A number of race conditions exist when pods are terminated early in
their lifecycle, because components in the kubelet need to know "no
running containers" or "containers can't be started from now on" but
have been relying on outdated state.
Only the pod worker knows whether containers are being started for
a given pod, which is required to know when a pod is "terminated"
(no running containers, none coming). Move that responsibility and the
podKiller function into the pod workers, and have everything that was
killing the pod go through the UpdatePod loop. Split syncPod into
three phases - setup, terminate containers, and cleanup pod - and
have transitions between those methods be visible to other
components. After this change, to kill a pod you tell the pod worker
to UpdatePod({UpdateType: SyncPodKill, Pod: pod}).
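In code, a caller that previously invoked the pod killer directly now
does roughly the following (a sketch only; the exact shape of the
options struct may differ):

    kl.podWorkers.UpdatePod(UpdatePodOptions{
        UpdateType: kubetypes.SyncPodKill,
        Pod:        pod,
    })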
Several places in the kubelet were incorrect about whether they
were handling terminating (should stop running, might have
containers) or terminated (no running containers) pods. The pod worker
exposes methods that allow other loops to know when to set up or tear
down resources based on the state of the pod - these methods remove
the possibility of race conditions by ensuring a single component is
responsible for knowing each pod's allowed state, while other
components simply check, by UID, whether a pod is still within that
window.
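For example, a cleanup loop can ask the pod worker whether a pod may
still have running containers before reclaiming its resources. A
sketch of that kind of query (the method name is indicative of the
style of the exposed helpers, not an exhaustive list):

    // Only tear down pod-scoped resources once the pod worker says no
    // containers can be running or started for this UID.
    if kl.podWorkers.CouldHaveRunningContainers(pod.UID) {
        // Still terminating; try again on the next housekeeping pass.
        continue
    }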
Removing containers no longer blocks final pod deletion in the API
server; container removal is handled as background cleanup. Node
shutdown no longer marks pods as failed, since they can be restarted
in the next step.
See https://docs.google.com/document/d/1Pic5TPntdJnYfIpBeZndDelM-AbS4FN9H2GTLFhoJ04/edit# for details
podVolumesExist() should also consider uncertain volumes (where the
kubelet does not know whether a volume was fully unmounted) when
checking a pod's volumes. Added GetPossiblyMountedVolumesForPod for that.
Adding uncertain mounts to GetMountedVolumesForPod would potentially break
other callers (e.g. `verifyVolumesMountedFunc`).
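A simplified sketch of the intended check, with package aliases and
further checks trimmed (treat this as an outline of the behaviour
rather than the exact code):

    func (kl *Kubelet) podVolumesExist(podUID types.UID) bool {
        if mountedVolumes := kl.volumeManager.GetPossiblyMountedVolumesForPod(
            volumetypes.UniquePodName(podUID)); len(mountedVolumes) > 0 {
            return true
        }
        // The real function continues with additional checks (for example
        // volume directories still present on disk); omitted here.
        return false
    }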
Volumes that are provisioned with `VolumeMode: Block` often have a
MetricsProvider interface declared in their type. However, the
MetricsProvider is expected to implement a GetMetrics() function. In
cases where a storage driver does not implement GetMetrics(), a panic
can occur.
The usual type assertions are not sufficient in this case: all
assertions assume the interface is present, and there is no
straightforward way to verify that a valid GetMetrics() function is
actually provided.
By adding SupportsMetrics(), storage driver implementations require
careful review for metrics support.
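Callers can then guard the metrics call explicitly instead of relying
on a type assertion. A minimal sketch (the mapper and volName
identifiers are illustrative, not the exact names in the tree):

    // Only ask for metrics when the plugin explicitly reports support,
    // instead of assuming the embedded MetricsProvider is usable.
    if !mapper.SupportsMetrics() {
        return nil, fmt.Errorf("metrics are not supported for volume %q", volName)
    }
    return mapper.GetMetrics()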
Check the in-memory cache for whether volumes are still mounted, and
check the disk directory for the volume paths, instead of only
checking mounted volumes.
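One way to read that combined check, as a rough sketch (helper names
borrowed from elsewhere in the kubelet; the exact wiring in this
change may differ):

    // Consult the volume manager's in-memory cache first, then look at
    // the pod's volume directories that still exist on disk.
    if volumes := kl.volumeManager.GetMountedVolumesForPod(podName); len(volumes) > 0 {
        return true
    }
    volumePaths, err := kl.getPodVolumePathListFromDisk(podUID)
    if err == nil && len(volumePaths) > 0 {
        return true
    }
    return false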
Signed-off-by: Mucahit Kurt <mucahitkurt@gmail.com>
- Rename MapDevice to MapPodDevice in BlockVolumeMapper
- Add UnmapPodDevice in BlockVolumeUnmapper (this will be used by the csi driver later)
- Add CustomBlockVolumeMapper and CustomBlockVolumeUnmapper interfaces (sketched below)
- Move SetUpDevice and MapPodDevice to CustomBlockVolumeMapper
- Move TearDownDevice and UnmapPodDevice to CustomBlockVolumeUnmapper
- Implement CustomBlockVolumeMapper only in local and csi plugin
- Implement CustomBlockVolumeUnmapper only in fc, iscsi, rbd, and csi plugin
- Change MapPodDevice to return a path and SetUpDevice to not return a path
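A condensed sketch of the resulting interfaces (signatures
abbreviated; the real interfaces in the volume package carry the full
parameter lists):

    // Every block plugin implements BlockVolumeMapper/Unmapper; only
    // plugins that need custom device handling (local, csi, fc, iscsi,
    // rbd) implement the Custom* variants.
    type CustomBlockVolumeMapper interface {
        BlockVolumeMapper
        SetUpDevice() error
        MapPodDevice() (string, error) // returns the mapped path
    }

    type CustomBlockVolumeUnmapper interface {
        BlockVolumeUnmapper
        TearDownDevice(mapPath, devicePath string) error
        UnmapPodDevice() error
    }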
DesiredStateOfWorldPopulator should skip a volume that is not used in any
pod. "Used" means either mounted (via volumeMounts) or used as a raw
block device (via volumeDevices).
Especially when the block feature is disabled, a block volume must not
get into DesiredStateOfWorld, because it would be formatted and mounted
there.
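A minimal sketch of the "used by the pod" test, assuming the standard
Pod API types (the helper name is illustrative; the real populator
also walks init containers via the shared pod utilities):

    func volumeIsUsedByPod(pod *v1.Pod, volumeName string) bool {
        for _, container := range pod.Spec.Containers {
            for _, mount := range container.VolumeMounts {
                if mount.Name == volumeName {
                    return true
                }
            }
            for _, device := range container.VolumeDevices {
                if device.Name == volumeName {
                    return true
                }
            }
        }
        return false
    }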
Automatic merge from submit-queue (batch tested with PRs 37228, 40146, 40075, 38789, 40189)
Cleanup temp dirs
So, funny story: my /tmp ran out of space running the unit tests, so I
am cleaning up all the temp dirs we create.
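The fix is the usual pattern of removing the directory when the test
finishes; a sketch (the prefix and error message are illustrative):

    tmpDir, err := ioutil.TempDir("", "kubelet_test.")
    if err != nil {
        t.Fatalf("can't make a temp dir: %v", err)
    }
    // Remove the directory even if the test fails, so /tmp doesn't fill
    // up across repeated runs.
    defer os.RemoveAll(tmpDir)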