We have two scenarios in which we set up a pod's /etc/hosts (sketched
below):
- with host network (we just copy /etc/hosts from the node)
- without host network (we create a fresh /etc/hosts from pod info)
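A minimal sketch of the two paths, in Go (makeHostsContent and the
exact entries in the fresh file are illustrative, not the kubelet's
actual code):

    package hostsfile

    import (
        "fmt"
        "os"
    )

    // makeHostsContent picks between the two scenarios above.
    func makeHostsContent(hostNetwork bool, podIP, hostname string) ([]byte, error) {
        if hostNetwork {
            // Host network: reuse the node's own /etc/hosts as-is.
            return os.ReadFile("/etc/hosts")
        }
        // No host network: synthesize a fresh file from pod info.
        entries := fmt.Sprintf("127.0.0.1\tlocalhost\n%s\t%s\n", podIP, hostname)
        return []byte(entries), nil
    }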
It is hard to tell whether the /etc/hosts in a given pod/container has
been "fixed up" by the kubelet, and whether it came from the host
network or was generated fresh, in the various ways we start up the
tests, which are:
- VM/box against a remote cluster
- As a container inside the k8s cluster
- DIND scenario in CI where the test runs inside a managed container
Please see the previous misguided attempt to fix this problem at
ba20e63446. This commit reverts the code from that attempt as well.
So we should make sure:
- we always add a header if we touched the file
- we add slightly different headers so we can figure out whether we
  used the host network or not (see the sketch after this list)
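Continuing the sketch in the same hypothetical package (the plain
header string is the one the test checks for below; the host-network
variant's exact text is an assumption for illustration):

    const (
        managedHostsHeader                = "# Kubernetes-managed hosts file.\n"
        managedHostsHeaderWithHostNetwork = "# Kubernetes-managed hosts file (host network).\n"
    )

    // addHeader prepends the appropriate header so anyone inspecting
    // the file can tell both that the kubelet touched it and which
    // scenario produced it.
    func addHeader(content []byte, hostNetwork bool) []byte {
        header := managedHostsHeader
        if hostNetwork {
            header = managedHostsHeaderWithHostNetwork
        }
        return append([]byte(header), content...)
    }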
Update the test case to also inject the node's /etc/hosts at a second
path (/etc/hosts-original) and use that copy for comparison.
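On the test side, the extra mount could look roughly like this
(volume and function names are illustrative; assumes the e2e test's
package imports k8s.io/api/core/v1 as v1):

    import v1 "k8s.io/api/core/v1"

    // etcHostsOriginalMount exposes the node's /etc/hosts at a second
    // path so the test can diff it against the in-container /etc/hosts.
    func etcHostsOriginalMount() (v1.Volume, v1.VolumeMount) {
        vol := v1.Volume{
            Name: "host-etc-hosts",
            VolumeSource: v1.VolumeSource{
                HostPath: &v1.HostPathVolumeSource{Path: "/etc/hosts"},
            },
        }
        mnt := v1.VolumeMount{
            Name:      "host-etc-hosts",
            MountPath: "/etc/hosts-original",
        }
        return vol, mnt
    }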
"KubeletManagedEtcHosts should test kubelet managed /etc/hosts file"
conformance test fails in the CI's Docker-In-Docker environment.
This test mounts a /etc/hosts file and checks if "# Kubernetes-managed
hosts file." string is present or not under various conditions. The
specific failure with DIND happens when the /etc/hosts picked up
from the box where e2e test are running already has this string. This
happens because our CI runs on kubernetes and the e2e tests are running
in a container that was started on kubernetes (and hence already has
that string)
To avoid this situation, we create a new /etc/hosts file with known
contents (one that does not contain the "# Kubernetes-managed hosts
file." string).
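A rough sketch of that step (the helper name and the entry itself are
made up for illustration; assumes "os" is imported as in the first
sketch):

    // writeKnownEtcHosts writes a hosts file with known contents and
    // no managed header, to be injected instead of the CI container's
    // own /etc/hosts.
    func writeKnownEtcHosts(path string) error {
        known := []byte("123.45.67.89\tsome.fake.host\n")
        return os.WriteFile(path, known, 0644)
    }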
Note: the test will still fail if a retry occurs, but this will give
us more information about whether the test flake could be occurring
due to a delay in mounting /etc/hosts.