Automatic merge from submit-queue
Kubelet: Start using the official docker engine-api
For #23563.
This is the **first step** in the roadmap of switching to docker [engine-api](https://github.com/docker/engine-api).
In this PR, I keep the old `DockerInterface` and implement it with the new engine-api.
With this approach (sketched below), we can switch to engine-api with minimal changes, so that we can:
* Test the engine-api without huge refactoring.
* Send follow-up PRs that refactor the functions in `DockerInterface` separately, so as to avoid one huge change in a single PR.
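As a rough sketch of the adapter idea (not the actual PR code), the snippet below implements an illustrative one-method stand-in for `DockerInterface` on top of the engine-api client; the `containerLister` interface and type names are assumptions for the example:
```go
// Hedged sketch: adapting the engine-api client to an existing, narrow
// docker interface. containerLister is illustrative, not the real kubelet
// DockerInterface, which is much larger.
package main

import (
	"fmt"

	"github.com/docker/engine-api/client"
	"github.com/docker/engine-api/types"
	"golang.org/x/net/context"
)

// containerLister stands in for one method of the legacy DockerInterface.
type containerLister interface {
	ListContainers(all bool) ([]types.Container, error)
}

// engineAPIClient wraps the official engine-api client so that it satisfies
// the legacy interface without touching the callers.
type engineAPIClient struct {
	client *client.Client
}

func (c *engineAPIClient) ListContainers(all bool) ([]types.Container, error) {
	return c.client.ContainerList(context.Background(), types.ContainerListOptions{All: all})
}

func main() {
	cli, err := client.NewEnvClient() // reads DOCKER_HOST etc. from the environment
	if err != nil {
		panic(err)
	}
	var lister containerLister = &engineAPIClient{client: cli}
	containers, err := lister.ListContainers(true)
	if err != nil {
		panic(err)
	}
	for _, c := range containers {
		fmt.Println(c.ID, c.Names)
	}
}
```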
I've tested this PR locally; it passed all the node conformance tests:
```
make test_e2e_node
Ran 19 of 19 Specs in 823.395 seconds
SUCCESS! -- 19 Passed | 0 Failed | 0 Pending | 0 Skipped PASS
Ginkgo ran 1 suite in 13m49.429979585s
Test Suite Passed
```
It also passed the Jenkins GCE e2e tests:
```
go run hack/e2e.go -test -v --test_args="--ginkgo.skip=\[Slow\]|\[Serial\]|\[Disruptive\]|\[Flaky\]|\[Feature:.+\]"
Ran 161 of 268 Specs in 4570.214 seconds
SUCCESS! -- 161 Passed | 0 Failed | 0 Pending | 107 Skipped PASS
Ginkgo ran 1 suite in 1h16m16.325934558s
Test Suite Passed
2016/03/25 15:12:42 e2e.go:196: Step 'Ginkgo tests' finished in 1h16m18.918754301s
```
I'm writing the design document, and will post the switching roadmap in an umbrella issue soon.
@kubernetes/sig-node
This is step one for cross-region ECR support and has no visible effects yet.
I'm not crazy about the name LazyProvide. Perhaps the interface method could
keep that name and the package-level function of the same name could become
LateBind(). I still don't understand why the credential provider has a
DockerConfigEntry that has the same fields as, yet is distinct from,
docker.AuthConfiguration. I had to write a converter now that we do that
conversion in more than one place.
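For illustration, a minimal sketch of such a converter, using simplified stand-in structs rather than the real DockerConfigEntry and docker.AuthConfiguration types:
```go
// Hedged sketch of the kind of converter mentioned above. The two structs are
// simplified stand-ins; the real types carry more fields and context.
package main

import "fmt"

// dockerConfigEntry mirrors the credential provider's view of a registry login.
type dockerConfigEntry struct {
	Username string
	Password string
	Email    string
}

// authConfiguration mirrors the docker client's view of the same data.
type authConfiguration struct {
	Username      string
	Password      string
	Email         string
	ServerAddress string
}

// toAuthConfiguration copies the shared fields; the server address has to be
// supplied by the caller because the config entry does not carry it.
func toAuthConfiguration(entry dockerConfigEntry, server string) authConfiguration {
	return authConfiguration{
		Username:      entry.Username,
		Password:      entry.Password,
		Email:         entry.Email,
		ServerAddress: server,
	}
}

func main() {
	entry := dockerConfigEntry{Username: "user", Password: "secret", Email: "user@example.com"}
	fmt.Printf("%+v\n", toAuthConfiguration(entry, "https://index.docker.io/v1/"))
}
```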
In step two, I'll add another intermediate, lazy provider for each AWS region,
whose empty LazyAuthConfiguration will have a refresh time of months or years.
Behind the scenes, it'll use an actual ecrProvider with the usual ~12-hour
credentials, which will be created (and later refreshed) only when the kubelet is
attempting to pull an image. If we simply turned ecrProvider directly into a
lazy provider, we would bypass all the caching and get new credentials for
each image pulled.
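A minimal sketch of the caching idea, with illustrative names (lazyRegionProvider, credentials) rather than the real credentialprovider types:
```go
// Hedged sketch: a lazy wrapper that only builds (and periodically refreshes)
// the real provider when an image pull actually asks for credentials.
package main

import (
	"fmt"
	"sync"
	"time"
)

type credentials struct {
	Username string
	Password string
}

// provider is a stand-in for the credential provider interface.
type provider interface {
	Provide() credentials
}

// ecrProvider stands in for the real per-region ECR provider whose tokens are
// only valid for roughly 12 hours.
type ecrProvider struct {
	region string
}

func (p *ecrProvider) Provide() credentials {
	// In reality this would call ECR's GetAuthorizationToken.
	return credentials{Username: "AWS", Password: "token-for-" + p.region}
}

// lazyRegionProvider defers construction of the underlying ecrProvider and
// caches its result so repeated pulls do not request new tokens every time.
type lazyRegionProvider struct {
	region string

	mu       sync.Mutex
	inner    provider
	cached   credentials
	expires  time.Time
	lifetime time.Duration
}

func (l *lazyRegionProvider) Provide() credentials {
	l.mu.Lock()
	defer l.mu.Unlock()
	if l.inner == nil {
		l.inner = &ecrProvider{region: l.region} // created only on first use
	}
	if time.Now().After(l.expires) {
		l.cached = l.inner.Provide()
		l.expires = time.Now().Add(l.lifetime)
	}
	return l.cached
}

func main() {
	p := &lazyRegionProvider{region: "us-west-2", lifetime: 11 * time.Hour}
	fmt.Println(p.Provide())
	fmt.Println(p.Provide()) // served from cache, no new token request
}
```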
This scopes down the initially ambitious PR
https://github.com/kubernetes/kubernetes/pull/14960 to only switching
`pause` and `fluentd-elasticsearch` to come through `beta.gcr.io`.
The v2 versions have been pushed under new tags, `pause:2.0` and
`fluentd-elasticsearch:1.12`.
NOTE: `beta.gcr.io` will still serve images using v1 until they are repushed with v2. Pulls through `gcr.io` will still work after pushing through `beta.gcr.io`, but will be served over v1 (via compat logic).
Remove GetDockerServerVersion() from the DockerContainerCommandRunner interface,
replacing it with runtime.Version(). Also add a Version type in runtime for version
comparison.
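For illustration only, a tiny hand-rolled version type with the kind of comparison described; the real runtime.Version may differ:
```go
// Hedged sketch of a Version type: parse a dotted version string and compare
// it component by component.
package main

import (
	"fmt"
	"strconv"
	"strings"
)

type version []int

func parseVersion(s string) (version, error) {
	parts := strings.Split(s, ".")
	v := make(version, 0, len(parts))
	for _, p := range parts {
		n, err := strconv.Atoi(p)
		if err != nil {
			return nil, fmt.Errorf("invalid version %q: %v", s, err)
		}
		v = append(v, n)
	}
	return v, nil
}

// Compare returns -1, 0, or 1 if v is less than, equal to, or greater than other.
func (v version) Compare(other version) int {
	for i := 0; i < len(v) || i < len(other); i++ {
		a, b := 0, 0
		if i < len(v) {
			a = v[i]
		}
		if i < len(other) {
			b = other[i]
		}
		if a != b {
			if a < b {
				return -1
			}
			return 1
		}
	}
	return 0
}

func main() {
	got, _ := parseVersion("1.22")
	min, _ := parseVersion("1.18")
	fmt.Println(got.Compare(min) >= 0) // true: daemon is new enough
}
```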
- When 'getent hosts localhost' returns '::1', the creation of the
  listener fails because the port parsing uses ":" as a separator
- Use of net.SplitHostPort() to do the job instead (see the sketch after this list)
- Adding unit tests to ensure that the creation succeeds
- On docker.go: adds a test for the presence of the socat command, which was
  failing silently if not installed
- Code Review 1
- Fixed typo on Expected
- The UT now fails if the PortForwarder could not be created
- Code Review 2
- Simplify socat error message
- Changing t.Fatal to t.Error in unit tests
- Code Review 3
- Removing useless use cases in unit tests
- Code Review 4
- Removing useless initialization of PortForwarder
- Changing error message
- Code Review 5
- Simplifying the TestCase struct
- Adding an additional test in one test case
- Closing the listener
- Code Review 6
- Improving unit test
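A small sketch of the parsing fix referenced in the list above, contrasting a naive ":" split with net.SplitHostPort():
```go
// Hedged sketch: splitting on ":" breaks for IPv6 literals such as "::1",
// while net.SplitHostPort handles the bracketed IPv6 form correctly.
package main

import (
	"fmt"
	"net"
	"strings"
)

func main() {
	for _, addr := range []string{"127.0.0.1:5000", "[::1]:5000"} {
		// Naive split: wrong for IPv6, where ":" also appears inside the host.
		naive := strings.Split(addr, ":")
		fmt.Printf("naive split of %q -> %v\n", addr, naive)

		// net.SplitHostPort understands the bracketed IPv6 form.
		host, port, err := net.SplitHostPort(addr)
		if err != nil {
			fmt.Println("SplitHostPort error:", err)
			continue
		}
		fmt.Printf("SplitHostPort of %q -> host=%q port=%q\n", addr, host, port)
	}
}
```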
Use go-dockerclient's APIVersion to check the minimum required Docker
version, as it contains methods for parsing the ApiVersion response from
the Docker daemon and for comparing two APIVersion objects.
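A hedged sketch of that check using go-dockerclient; the version strings are illustrative and the exact API usage reflects my reading of the library rather than the PR itself:
```go
// Hedged sketch of the daemon version check using go-dockerclient's
// APIVersion parsing and comparison helpers.
package main

import (
	"fmt"

	docker "github.com/fsouza/go-dockerclient"
)

func main() {
	// The minimum and reported versions below are illustrative values.
	minimum, err := docker.NewAPIVersion("1.18")
	if err != nil {
		panic(err)
	}
	reported, err := docker.NewAPIVersion("1.22")
	if err != nil {
		panic(err)
	}
	if reported.GreaterThanOrEqualTo(minimum) {
		fmt.Println("docker daemon API version is new enough")
	} else {
		fmt.Println("docker daemon API version is too old")
	}
}
```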
Remove kubelet.getPodInfraContainer().
Remove dockertools.RemoveContainerWithID().
Remove dockertools.FindContainersByPod().
Also replace the useless test with a test for GetPods().
We want to stop leaking more docker details into kubelet, and we also want to
consolidate some of the existing docker interfaces/structs. This change creates
DockerManager as the new home of some functions in dockertools/docker.go. It
also absorbs containerRunner. In addition, GetDockerPodStatus is renamed to
GetPodStatus, with the entire pod passed to it so that it is similar to what
is defined in the container Runtime interface.
Eventually, DockerManager should implement the container Runtime interface, and
integrate DockerCache with a flag to turn on/off caching. Code in kubelet.go
should not be using docker client directly.
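A rough sketch of the signature change, using simplified stand-in types for the pod, status, and Runtime interface:
```go
// Hedged sketch: GetPodStatus takes the whole pod, mirroring the container
// Runtime interface. The pod/status types here are simplified stand-ins.
package main

import "fmt"

type pod struct {
	Name      string
	Namespace string
	UID       string
}

type podStatus struct {
	Phase string
}

// runtime is a pared-down stand-in for the container Runtime interface.
type runtime interface {
	GetPodStatus(p *pod) (*podStatus, error)
}

// dockerManager would wrap the docker client plus the helpers that used to
// live as free functions in dockertools/docker.go.
type dockerManager struct{}

func (dm *dockerManager) GetPodStatus(p *pod) (*podStatus, error) {
	// Real code would inspect the pod's containers via the docker client.
	return &podStatus{Phase: "Running"}, nil
}

func main() {
	var rt runtime = &dockerManager{}
	status, _ := rt.GetPodStatus(&pod{Name: "nginx", Namespace: "default", UID: "1234"})
	fmt.Println(status.Phase)
}
```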
The Build/ParseDockerName functions now work with a struct instead of a long
list of arguments. This new struct is also reused in kubelet.go
in place of the auxiliary podContainer struct.
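For illustration, a minimal sketch of the struct-based signature; the field names are assumptions, not a copy of the real type:
```go
// Hedged sketch: the values that used to be passed to
// BuildDockerName/ParseDockerName as separate arguments travel together.
package main

import "fmt"

// kubeletContainerName groups the naming components into one value.
type kubeletContainerName struct {
	PodFullName   string
	PodUID        string
	ContainerName string
}

// buildDockerName shows how the grouped fields replace a long argument list.
func buildDockerName(n kubeletContainerName) string {
	return fmt.Sprintf("k8s_%s_%s_%s", n.ContainerName, n.PodFullName, n.PodUID)
}

func main() {
	n := kubeletContainerName{PodFullName: "nginx_default", PodUID: "1234", ContainerName: "web"}
	fmt.Println(buildDockerName(n))
}
```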
There are two main goals for this change.
1. Fix the naming scheme in kubelet so that it accepts DNS subdomain
name/namespaces correctly (#4920). The design is discussed in #3453.
2. Prepare for syncing the static pods back to the apiserver(#4090). This
includes
- Eliminate the source component in the internal full pod name (#4922). Pods
no longer need sources as they will all be sync'd via apiserver.
- Changing the naming scheme for the static (file-, http-, and etcd-based)
pods such that they are distinguishable when syncing back to the apiserver.
The changes include:
* name = <pod.Name>-<hostname>
* namespace = <cluster_namespace> (i.e. "default" for now).
* container_name = k8s_<container_name>.<hash_of_container>_<pod_name>_<namespace>_<uid>_<random>
Note that this is not backward-compatible, meaning the kubelet won't recognize
existing running containers using the old naming scheme.
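A minimal sketch of how names under this scheme could be built and parsed; the hash and random suffix are simplified placeholders:
```go
// Hedged sketch of the naming scheme listed above, showing how the container
// name components could be assembled and split back apart.
package main

import (
	"fmt"
	"strings"
)

// buildContainerName follows
// k8s_<container_name>.<hash_of_container>_<pod_name>_<namespace>_<uid>_<random>.
func buildContainerName(containerName, hash, podName, namespace, uid, random string) string {
	return fmt.Sprintf("k8s_%s.%s_%s_%s_%s_%s", containerName, hash, podName, namespace, uid, random)
}

// parseContainerName reverses the scheme, returning an error instead of empty
// strings when the name does not match.
func parseContainerName(name string) (containerName, podName, namespace, uid string, err error) {
	parts := strings.Split(name, "_")
	if len(parts) != 6 || parts[0] != "k8s" {
		return "", "", "", "", fmt.Errorf("name %q does not match the k8s naming scheme", name)
	}
	containerName = strings.SplitN(parts[1], ".", 2)[0] // drop the hash suffix
	return containerName, parts[2], parts[3], parts[4], nil
}

func main() {
	name := buildContainerName("web", "abcd1234", "nginx-host1", "default", "1234-uid", "42")
	fmt.Println(name)
	fmt.Println(parseContainerName(name))
}
```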
Since the parsing function doesn't return an error, a failed parse left all the
components as empty strings. This caused us to enforce the MaxContainerLimit
as a global limit instead of a per-container limit.
Fixes #4413.
Sometimes it is important for external applications to identify
exactly which images are running. Since tags can be moved to point
to newer builds, this information can be used to identify old images
that are still running.
Signed-off-by: Federico Simoncelli <fsimonce@redhat.com>
This should only have been triggered by tests, and those should now be fixed.
I tested by calling panic() if the UID was blank in BuildDockerName() or if the number
of fields was < 5 in ParseDockerName(). All errors were fixed.
Make all kubelet config sources ensure that UID and Namespace are defaulted, if
need be.
We can *almost* disable the "if blank" logic for UID, except for tests that
call APIs that do not run through SyncPods. We really ought to be enforcing
invariants better.
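A minimal sketch of that defaulting step, with a simplified pod type and an assumed source-plus-name hash for the UID:
```go
// Hedged sketch: every config source passes pods through a step that fills in
// a UID and Namespace if they are blank, so downstream code can rely on them.
package main

import (
	"crypto/md5"
	"fmt"
)

type pod struct {
	Name      string
	Namespace string
	UID       string
}

// applyDefaults mutates the pod so that UID and Namespace are always set.
func applyDefaults(p *pod, source string) {
	if p.UID == "" {
		// Derive a stable UID from the source and name so restarts of a
		// static pod keep the same identity (illustrative derivation).
		p.UID = fmt.Sprintf("%x", md5.Sum([]byte(source+"/"+p.Name)))
	}
	if p.Namespace == "" {
		p.Namespace = "default"
	}
}

func main() {
	p := &pod{Name: "static-web"}
	applyDefaults(p, "file:/etc/kubernetes/manifests/web.yaml")
	fmt.Printf("%+v\n", *p)
}
```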