separated from the apiserver running locally on the master node so that it
can be optionally enabled or disabled as needed.
Also, fix the healthchecking configuration for the master components, which
was previously only working by coincidence:
If a kubelet doesn't register with a master, it never bothers to figure out
what its local address is, in which case it ends up constructing a URL like
http://:8080/healthz for the HTTP probe. This happens to work on the master
because all of the pods are using host networking and explicitly binding to
127.0.0.1. Once the kubelet is registered with the master and determines
the local node address, it tries to healthcheck an address where the pod
isn't listening, and the kubelet periodically restarts each master component
when the liveness probe fails.
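As a rough illustration, assuming the default insecure apiserver port 8080,
the probe URL built from an empty node address looks like this (the variable
name is illustrative, not the kubelet's actual field):

    # Before registration the node address is empty, so the probe URL has
    # no host; such a URL ends up being dialed locally, which only succeeds
    # while the component binds to 127.0.0.1 with host networking.
    NODE_ADDRESS=""
    echo "http://${NODE_ADDRESS}:8080/healthz"  # prints http://:8080/healthz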
When deploying Kubernetes using Ubuntu's script, the value of the
`DOCKER_OPTS` configuration item is not written to `/etc/default/docker`.
This commit fixes that bug.
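A minimal sketch of the kind of fix involved, assuming the provisioning
script already has `DOCKER_OPTS` in its environment; the exact handling in
the script is not quoted here:

    # Persist DOCKER_OPTS into Docker's defaults file so the daemon picks
    # it up on restart. Appending is simplistic but shows the idea.
    echo "DOCKER_OPTS=\"${DOCKER_OPTS}\"" >> /etc/default/docker
    service docker restart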
Currently make-ca-cert.sh uses (the equivalent of)

    mktemp -d --tmpdir kube.XXXXX

but --tmpdir is not a valid option on OS X. Switch to

    mktemp -d -t kube.XXXXX

which is valid on both, but subtly different between OS X and Linux: the
directory you get back will be different on each.
Linux: ${tmpdir}/kube.y5Bsu/
OS X:  ${tmpdir}/kube.XXXXX.VQ81oOui/
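For context, a portable usage sketch relying only on the -t flag that both
GNU and BSD mktemp accept; the cleanup trap is illustrative:

    # Create the scratch directory portably; since the resulting directory
    # name differs between GNU and BSD mktemp, always use the returned path
    # rather than reconstructing it.
    cert_dir=$(mktemp -d -t kube.XXXXX)
    trap 'rm -rf "${cert_dir}"' EXIT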
Instead of hard-coding kube-cert and /srv/kubernetes, allow these to be
overridden by environment variables. / is immutable on some systems, so
/srv is not a possible location to store data.
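A minimal sketch of the override pattern, with illustrative variable names
(the actual names in the script may differ):

    # Fall back to the old hard-coded defaults when the env vars are unset.
    CERT_GROUP=${CERT_GROUP:-kube-cert}
    CERT_DIR=${CERT_DIR:-/srv/kubernetes}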
Not every cluster can be validated the same way. Factoring out the
validate-cluster call into a kube-util.sh function allows customization.
This allows GoogleCloudPlatform/kubernetes#10049 to proceed before the
mid/long-term unified cluster validation in GoogleCloudPlatform/kubernetes#11908
is implemented. Otherwise, the latter blocks the former.
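A hypothetical sketch of the factoring, using illustrative function and
path names: kube-util.sh defines a default that a provider's util.sh,
sourced later, can redefine:

    # Default implementation in kube-util.sh; a provider's util.sh can
    # override this function with its own validation logic.
    function validate-cluster {
      "${KUBE_ROOT}/cluster/validate-cluster.sh"
    }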
When executing kube-up on an Ubuntu cluster I'm getting the following error:
bash: /root/kube/make-ca-cert: No such file or directory
Removed the line, as it is invalid and is duplicated by another line.
This will allow more kube-up.sh executions to succeed, since kube-apiserver
doesn't start on the first try after etcd first starts up, possibly due to a
lack of resources on my server.
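A rough sketch of the retry idea, with illustrative retry counts and
service-management calls (the real script's commands are assumed, not
quoted):

    # Retry starting kube-apiserver a few times instead of failing on the
    # first attempt right after etcd comes up.
    attempt=0
    until service kube-apiserver start; do
      attempt=$((attempt + 1))
      [ "${attempt}" -ge 5 ] && exit 1
      sleep 5
    done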
The AWS API requires a signature on method calls, including the
timestamp to prevent replay attacks. A time drift of up to 5 minutes
between client and server is tolerated.
However, if the client clock drifts by >5 minutes, the server will start
to reject API calls (with the cryptic "AWS was not able to validate the
provided access credentials").
To prevent this from happening, we install ntp on all nodes.
Fixes #11371
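A minimal sketch of the install step, assuming a Debian/Ubuntu node image;
package and service names vary by distribution:

    # Install and start ntp so the node clock stays within AWS's 5-minute
    # signature tolerance.
    apt-get update
    apt-get install -y ntp
    service ntp restart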
Previously we would rely on the S3 bucket's region being configured
correctly, at least for the existence check. By querying for the bucket's
region and then going directly to the correct region, we avoid errors and
potential eventual-consistency problems.
May be related to issue #12109
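A hypothetical sketch of the lookup using the AWS CLI; the bucket variable
is illustrative, and note that get-bucket-location reports us-east-1 as a
null LocationConstraint:

    # Discover the bucket's actual region, then do the existence check there.
    region=$(aws s3api get-bucket-location --bucket "${BUCKET}" \
      --query LocationConstraint --output text)
    # A null LocationConstraint (printed as "None") means us-east-1.
    if [ "${region}" = "None" ] || [ -z "${region}" ]; then
      region=us-east-1
    fi
    aws s3api head-bucket --bucket "${BUCKET}" --region "${region}"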