Clarify the error message returned when trying to use the docker runtime
on a Kubelet that was compiled without Docker.
We remove the "w/" and "w/o" abbreviations, which can be confusing, and
also add slightly more detail on the actual error.
We do not want to run the `cadvisor/docker` init when we are using the
`dockerless` build tag. We can ensure this by isolating the init into a
separate file with the proper build tag constraints.
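As a sketch of how that isolation might look (assuming cadvisor's `container/docker/install` side-effect package; the file name here is illustrative):

```go
// cadvisor_docker.go (hypothetical file name)
// +build !dockerless

package cadvisor

import (
	// Importing for side effects registers the Docker container handler
	// with cadvisor at init time. The "!dockerless" constraint above
	// drops this file, and the docker dependency it pulls in, from any
	// build made with -tags=dockerless.
	_ "github.com/google/cadvisor/container/docker/install"
)
```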
As the final step, add the `dockerless` tags to all files in the
dockershim. Using `-tags=dockerless` in `go build`, we can compile
kubelet without the dockershim.
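Concretely, each dockershim file would carry a constraint like the following (a sketch, using the pre-Go-1.17 `// +build` syntax) so that the whole package is excluded from a `dockerless` build:

```go
// +build !dockerless

// Package dockershim is only compiled when the "dockerless" tag is
// absent; `go build -tags=dockerless ./cmd/kubelet/...` skips it.
package dockershim
```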
Once cadvisor no longer depends on `docker/docker`, building with
`-tags=dockerless` should be sufficient to compile the Kubelet without a
dependency on `docker/docker`.
The `DockerLegacyService` interface is used throughout `pkg/kubelet`.
It used to live in the `pkg/kubelet/dockershim` package. While we
would eventually like to remove it entirely, we need to give users some form
of warning.
By including the interface in
`pkg/kubelet/legacy/logs.go`, we ensure the interface is
available to `pkg/kubelet`, even when we are building with the `dockerless`
tag (i.e. not compiling the dockershim).
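For reference, a sketch of roughly what the relocated interface looks like (treat the exact signatures as approximate):

```go
package legacy

import (
	"context"
	"io"

	v1 "k8s.io/api/core/v1"
	kubecontainer "k8s.io/kubernetes/pkg/kubelet/container"
)

// DockerLegacyService covers the docker-specific paths that pkg/kubelet
// still needs when the CRI logging integration cannot be used.
type DockerLegacyService interface {
	// GetContainerLogs reads logs directly from docker for containers
	// whose log driver is not supported by the CRI log path.
	GetContainerLogs(ctx context.Context, pod *v1.Pod,
		containerID kubecontainer.ContainerID, logOptions *v1.PodLogOptions,
		stdout, stderr io.Writer) error

	// IsCRISupportedLogDriver reports whether docker is using a log
	// driver that the CRI integration can read natively.
	IsCRISupportedLogDriver() (bool, error)
}
```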
While the interface always exists, there will be no implementations of the
interface when building with the `dockerless` tag. The lack of
implementations should not be an issue, as we only expect `pkg/kubelet` code
to need an implementation of the `DockerLegacyService` when we are using
docker. If we are using docker but building with the `dockerless` tag, then
this will be just one of many things that break.
`pkg/kubelet/legacy` might not be the best name for the package... I'm
very open to finding a different package name or even an already
existing package.
We can remove all uses of `dockershim` from `cmd/kubelet` by passing the
docker options to the kubelet in their pure form, instead of using them
to create a `dockerClientConfig` (which is defined in dockershim). We can
then construct the `dockerClientConfig` only when we actually need it.
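A sketch of the deferred construction (the helper name is hypothetical, and the `ClientConfig` fields shown are assumptions about what dockershim exposes):

```go
package app

import (
	"time"

	"k8s.io/kubernetes/pkg/kubelet/dockershim"
)

// newDockerClientConfig is a hypothetical helper: cmd/kubelet hands the
// raw docker options through, and only the code path that actually needs
// a docker client builds the dockershim-defined config from them.
func newDockerClientConfig(endpoint string, pullProgressDeadline time.Duration) *dockershim.ClientConfig {
	return &dockershim.ClientConfig{
		DockerEndpoint:            endpoint,
		ImagePullProgressDeadline: pullProgressDeadline,
	}
}
```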
Extract a `runDockershim` function into a file outside of `kubelet.go`.
We can use build tags to compile two separate functions... one which
actually runs dockershim and one that is a no-op.
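As a sketch (the file name and the elided parameter list are placeholders), the no-op half might look like:

```go
// kubelet_dockershim_nodocker.go (hypothetical file name)
// +build dockerless

package app

import "fmt"

// runDockershim is the stub compiled in by -tags=dockerless; the real
// implementation lives in a sibling file constrained with !dockerless.
// The parameter list is elided here for brevity.
func runDockershim() error {
	return fmt.Errorf("dockershim is not supported in this build of the kubelet")
}
```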
Remove one of two uses of Dockershim in `cmd/kubelet`. The other is for
creating a docker client which we pass to the Kubelet... we will handle
that refactor in a separate diff.
I'm fairly confident, though need to double check, that no one is
actually using this experimental dockershim behavior. If they are, I
think we will want to find a new way to support it (that doesn't require
using the Kubelet only to launch Dockershim).
Following the changes in #87730, the Kubelet calls hcsshim directly to gather
stats. However, unlike the `docker stats` API that was used before, hcsshim
does not keep information about exited containers.
When the Kubelet lists containers (`docker_container.go:ListContainers()`),
it sets `All: true`, retrieving non-running containers.
When `docker stats` is called with such a container ID, it returns valid JSON
with all values set to 0; these non-running containers are filtered out later in the process.
When hcsshim is called with such a container ID, it returns an error, effectively
stopping the stats retrieval for all containers.
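A sketch of the straightforward fix, with hypothetical helper names, is to filter out non-running containers before asking hcsshim for stats:

```go
package stats

import (
	runtimeapi "k8s.io/cri-api/pkg/apis/runtime/v1alpha2"
)

// listHCSStats is a hypothetical sketch: it queries stats only for
// running containers, so a missing hcsshim record for an exited
// container can no longer abort stats retrieval for everything else.
func listHCSStats(containers []*runtimeapi.Container,
	getStats func(id string) (*runtimeapi.ContainerStats, error)) []*runtimeapi.ContainerStats {
	var result []*runtimeapi.ContainerStats
	for _, c := range containers {
		// hcsshim keeps no state for exited containers; the old docker
		// stats path returned zeroed JSON for them instead of an error.
		if c.State != runtimeapi.ContainerState_CONTAINER_RUNNING {
			continue
		}
		stats, err := getStats(c.Id)
		if err != nil {
			continue // skip this container rather than failing the whole call
		}
		result = append(result, stats)
	}
	return result
}
```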
YAML supports comments, so we can explain why we have certain rules or
certain prefixes.
For the files that weren't already commented YAML, I converted them to
YAML and took a best guess at the comments based on the PRs that
introduced or updated them.
The expectation is that exclusive CPU allocations happen at pod
creation time. When a container restarts, it should not have its
exclusive CPU allocations removed, and it should not need to
re-allocate CPUs.
There are a few places in the current code that look for containers
that have exited and call `CpuManager.RemoveContainer()` to clean up
the container. This ends up deleting any exclusive CPU
allocations for that container, and if the container restarts within
the same pod it will end up using the default cpuset rather than
what should be exclusive CPUs.
Removing those calls and adding resource cleanup at allocation
time should get rid of the problem.
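A rough sketch of the allocation-time cleanup (the names and the flat map keyed by container ID are illustrative, not the actual CPU manager API):

```go
package cpumanager

import "k8s.io/kubernetes/pkg/kubelet/cm/cpuset"

// reclaimStaleAssignments is a hypothetical sketch: called from the
// allocation path, it releases exclusive CPUs only for containers that
// no longer exist at all, leaving assignments intact across restarts
// of a container within the same pod.
func reclaimStaleAssignments(assignments map[string]cpuset.CPUSet,
	active map[string]bool, release func(containerID string)) {
	for containerID := range assignments {
		if !active[containerID] {
			// The container is gone for good (not merely restarting),
			// so its exclusive CPUs return to the shared pool.
			release(containerID)
		}
	}
}
```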
Signed-off-by: Chris Friesen <chris.friesen@windriver.com>