With some regularity, if the root certificate file needed to be generated,
the apiserver could come up on the insecure port before the cert
was generated.
`hack/local-up-cluster.sh` requires that apiserver.crt exist
before the replication controller starts; otherwise service accounts
and secrets don't work.
This change simply moves the certificate handling code out of the `go`
statement, so the cert is generated synchronously before the server
starts listening.
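A minimal sketch of the ordering fix; the helper name, ports, and serving
code below are illustrative stand-ins, not the apiserver's actual startup path:

```go
package main

import (
	"log"
	"net/http"
)

// ensureCert stands in for the real helper: it would create
// apiserver.crt/apiserver.key if they do not already exist.
func ensureCert(certFile, keyFile string) error {
	// ... generate a self-signed cert when certFile is missing ...
	return nil
}

func main() {
	certFile, keyFile := "apiserver.crt", "apiserver.key"

	// Before (racy): cert generation lived inside the `go` statement,
	// so the insecure listener below could accept traffic before
	// apiserver.crt existed:
	//
	//   go func() {
	//       ensureCert(certFile, keyFile)
	//       log.Fatal(http.ListenAndServeTLS(":6443", certFile, keyFile, nil))
	//   }()

	// After: generate the cert synchronously, outside the goroutine,
	// so the file exists before either port comes up.
	if err := ensureCert(certFile, keyFile); err != nil {
		log.Fatalf("generating serving cert: %v", err)
	}
	go func() {
		log.Fatal(http.ListenAndServeTLS(":6443", certFile, keyFile, nil))
	}()
	log.Fatal(http.ListenAndServe("127.0.0.1:8080", nil)) // insecure port
}
```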
Also add related new flags to apiserver to enable OpenID Connect
authentication: `--oidc-issuer-url`, `--oidc-client-id`,
`--oidc-ca-file`, and `--oidc-username-claim`.
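A sketch of how these flags might be registered with pflag; the options
struct, field names, and the `sub` default are assumptions for
illustration, not the apiserver's actual types:

```go
package options

import "github.com/spf13/pflag"

// OIDCOptions is a hypothetical grouping of the new flags; the real
// apiserver keeps them on its server options, but the wiring is alike.
type OIDCOptions struct {
	IssuerURL     string
	ClientID      string
	CAFile        string
	UsernameClaim string
}

func (o *OIDCOptions) AddFlags(fs *pflag.FlagSet) {
	fs.StringVar(&o.IssuerURL, "oidc-issuer-url", "", "URL of the OpenID Connect issuer, used to discover and verify ID tokens")
	fs.StringVar(&o.ClientID, "oidc-client-id", "", "client ID that ID tokens must be issued for")
	fs.StringVar(&o.CAFile, "oidc-ca-file", "", "CA bundle for verifying the issuer's serving certificate")
	fs.StringVar(&o.UsernameClaim, "oidc-username-claim", "sub", "ID token claim to use as the username")
}
```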
Since pflag can handle net.IPNet arguments, use that code. This means
that our code no longer has casts back and forth and just natively uses
net.IPNet.
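For example, with pflag's net.IPNet support the flag variable is the
parsed CIDR itself (the flag name below is only illustrative):

```go
package main

import (
	"fmt"
	"net"

	"github.com/spf13/pflag"
)

func main() {
	var serviceRange net.IPNet
	fs := pflag.NewFlagSet("apiserver", pflag.ExitOnError)
	// pflag parses the CIDR straight into a net.IPNet; no string
	// round-tripping or casting left in our code.
	fs.IPNetVar(&serviceRange, "service-cluster-ip-range", net.IPNet{}, "CIDR range for service cluster IPs")
	fs.Parse([]string{"--service-cluster-ip-range=10.0.0.0/16"})
	fmt.Println(serviceRange.String()) // 10.0.0.0/16
}
```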
pflag can also handle IP addresses, so use the pflag code instead of
doing it ourselves. This means our code just uses net.IP and drops all
of the useless casting back and forth!
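Same idea for plain addresses, again with an illustrative flag name:
the variable is a net.IP from the start instead of a string that gets
re-parsed at every use.

```go
package main

import (
	"fmt"
	"net"

	"github.com/spf13/pflag"
)

func main() {
	var bindAddress net.IP
	fs := pflag.NewFlagSet("apiserver", pflag.ExitOnError)
	fs.IPVar(&bindAddress, "bind-address", net.ParseIP("0.0.0.0"), "IP address on which to serve")
	fs.Parse([]string{"--bind-address=127.0.0.1"})
	fmt.Println(bindAddress) // 127.0.0.1
}
```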
PR #10643 started adding the DNS names for the Kubernetes master to the
self-signed certs it created. The kubelet uses this same code, and thus
the kubelet cert started saying it was valid for these names as well.
While harmless, the kubelet cert shouldn't claim to be these things, so
make the caller explicitly list both its IP and DNS subject alt names.
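A minimal sketch of the new call shape, assuming a helper built on
crypto/x509; the function name and signature are illustrative, not the
actual util code:

```go
package cert

import (
	"crypto/rand"
	"crypto/rsa"
	"crypto/x509"
	"crypto/x509/pkix"
	"math/big"
	"net"
	"time"
)

// generateSelfSignedCert puts only the SANs the caller explicitly
// passes into the cert, so a kubelet caller no longer inherits the
// master's kubernetes.default.* names.
func generateSelfSignedCert(host string, alternateIPs []net.IP, alternateDNS []string) ([]byte, error) {
	key, err := rsa.GenerateKey(rand.Reader, 2048)
	if err != nil {
		return nil, err
	}
	tmpl := x509.Certificate{
		SerialNumber: big.NewInt(1),
		Subject:      pkix.Name{CommonName: host},
		NotBefore:    time.Now(),
		NotAfter:     time.Now().Add(365 * 24 * time.Hour),
		KeyUsage:     x509.KeyUsageKeyEncipherment | x509.KeyUsageDigitalSignature,
		ExtKeyUsage:  []x509.ExtKeyUsage{x509.ExtKeyUsageServerAuth},
	}
	// The host itself goes in as either an IP or a DNS SAN.
	if ip := net.ParseIP(host); ip != nil {
		tmpl.IPAddresses = append(tmpl.IPAddresses, ip)
	} else {
		tmpl.DNSNames = append(tmpl.DNSNames, host)
	}
	tmpl.IPAddresses = append(tmpl.IPAddresses, alternateIPs...)
	tmpl.DNSNames = append(tmpl.DNSNames, alternateDNS...)
	// Returns the DER-encoded certificate, self-signed with the key above.
	return x509.CreateCertificate(rand.Reader, &tmpl, &tmpl, &key.PublicKey, key)
}
```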
A cert from GCE shows:
- IP Address:23.236.49.122
- IP Address:10.0.0.1
- DNS:kubernetes
- DNS:kubernetes.default
- DNS:kubernetes.default.svc
- DNS:kubernetes.default.svc.cluster.local
- DNS:e2e-test-zml-master
A similarly configured self signed cert shows:
- IP Address:23.236.49.122
- IP Address:10.0.0.1
- DNS:kubernetes
- DNS:kubernetes.default
- DNS:kubernetes.default.svc
So we are missing the FQDN kubernetes.default.svc.cluster.local. The
apiserver does not even know the FQDN; it's defined entirely by the
kubelet! We also do not have the cluster name in the certificate. This
may be the --cluster-name= argument to the apiserver, but it will take
a bit more research.