On bionic, the ethernet interface is not hard-coded as eth0 (see the
example below), so we use `ip route` to figure out the default ethernet
interface:
```
dims@kubernetes-master:~$ ip link
1: lo: <LOOPBACK,UP,LOWER_UP> mtu 65536 qdisc noqueue state UNKNOWN mode DEFAULT group default qlen 1000
link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00
2: ens4: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1460 qdisc fq_codel state UP mode DEFAULT group default qlen 1000
link/ether 42:01:0a:80:00:23 brd ff:ff:ff:ff:ff:ff
3: docker0: <NO-CARRIER,BROADCAST,MULTICAST,UP> mtu 1500 qdisc noqueue state DOWN mode DEFAULT group default
link/ether 02:42:b2:4e:dd:86 brd ff:ff:ff:ff:ff:ff
```
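A minimal sketch of that lookup, assuming a single default route as in the output above (the variable name is only for illustration):
```
# Pick the device of the default route instead of assuming eth0;
# on the node above this prints "ens4".
default_dev=$(ip route show default | awk '/^default/ {for (i = 1; i < NF; i++) if ($i == "dev") {print $(i + 1); exit}}')
echo "${default_dev}"
```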
Also, bionic uses systemd-resolved by default and adds entries to
/etc/resolv.conf that CoreDNS does not like, so we follow the
recommendation in the documentation and specify resolv.conf explicitly.
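For illustration only, the usual way to follow that recommendation is to point the kubelet's `--resolv-conf` flag at the upstream file maintained by systemd-resolved; how the flag is actually threaded through the scripts is an assumption here:
```
# /etc/resolv.conf points at the local 127.0.0.53 stub resolver, which
# CoreDNS would loop on; the real upstream servers live in the file below.
KUBELET_ARGS="${KUBELET_ARGS} --resolv-conf=/run/systemd/resolve/resolv.conf"
```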
- Makes Windows Server 2019 the default version for Windows clusters on
GCP, since 1809 will be EOL in a few months.
- Adds Windows Server version 1909 as a Windows node choice.
- Uses Windows images with updates from January 2020.
- Cleans up the code that sets the node image.
"Shielded" nodes have a virtual TPM attached which is used for
generating the client certificate, instead of using a bootstrap
kubeconfig. Determining which to use happens during node startup based
on the instance metadata.
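A rough sketch of that startup-time decision against the GCE metadata server; the attribute name and the two helpers are hypothetical, not the actual implementation:
```
# Check instance metadata to decide between the vTPM flow and the
# bootstrap kubeconfig (attribute name "shielded-tpm-bootstrap" is made up).
use_tpm=$(curl -sf -H "Metadata-Flavor: Google" \
  "http://metadata.google.internal/computeMetadata/v1/instance/attributes/shielded-tpm-bootstrap" || echo "false")
if [[ "${use_tpm}" == "true" ]]; then
  request-client-cert-via-tpm    # hypothetical helper
else
  use-bootstrap-kubeconfig       # hypothetical helper
fi
```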
The NodeLocalDNS addon listens on both DNS_SERVER_IP and LOCAL_DNS_IP, so the kubelet's cluster-dns flag can continue to be DNS_SERVER_IP in all cases.
Documented the various variables in the yaml.
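As a sketch of why the flag is unchanged (the flag wiring below is illustrative, not the exact script change):
```
# The node-local-dns pods bind both addresses, so kubelet can keep
# pointing at the cluster DNS service IP whether or not the addon is on.
KUBELET_ARGS="${KUBELET_ARGS} --cluster-dns=${DNS_SERVER_IP}"
```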
Got the proxy-server coming up in the master.
Added certs and have it coming up with those certs.
Added a daemonset to run the network-agent.
Added support for the agent running as a daemon set on every node.
Added quick hack to test that proxy server/agent were correctly
tunneling traffic to the kubelet.
Added more WIP for reading network proxy configuration.
Got flags set correctly and fixed connection services.
Added missing ApplyTo.
Added ConnectivityService.
Fixed build directives. Added connectivity service configuration.
Fixed log levels.
Fixed minor issues for feature turned off.
Fixed boilerplate and format.
Moved log dialer initialization earlier as per Liggitt's suggestion.
Fixed a few minor issues in the configuration for GCE.
Fixed scheme allocation.
Added unit test.
Added test for direct connectivity service.
Switched to injecting the Lookup method rather than using a Singleton.
First round of mikedanese's feedback.
Fixed deployment to use yaml and other changes suggested by MikeDanese.
Switched network proxy server/agent names to kebab-case rather than camelCase.
Picked up DIAL_RSP fix.
Factored in deads2k feedback.
Factored in feedback from mikedanese.
Factored in second round of feedback from David.
Fixed path in verify.
Factored in anfernee's feedback.
First part of lavalamp's feedback.
Factored in more changes from lavalamp and mikedanese.
Renamed network-proxy to konnectivity-server and konnectivity-agent.
Fixed tolerations and config file checking.
Added missing strptr.
Finished lavalamp's requested rename.
Disambiguated the konnectivity service by renaming it the egress selector.
Switched the feature flag to KUBE_ENABLE_EGRESS_VIA_KONNECTIVITY_SERVICE.
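For reference, a hedged example of opting in from the GCE scripts; exactly where the variable is consumed is an assumption:
```
# Turn the egress-via-konnectivity path on before bringing the cluster up.
export KUBE_ENABLE_EGRESS_VIA_KONNECTIVITY_SERVICE=true
./cluster/kube-up.sh
```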
The feature caused tests to fail when it was enabled.
- https://github.com/kubernetes/kubernetes/issues/78628
Work is in progress to fix the feature, but until that work is complete,
we will disable it in the GCE scripts.
Until a few days ago, it was possible to ssh into the master and access the cluster via the insecure master port.
Now that the master's insecure port has been disabled, we are no longer able to do that.
This PR aims to fix that by uploading the kubeconfig to the master metadata during cluster setup in tests.
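A sketch of what that upload could look like with gcloud; the metadata key and variable names are placeholders rather than what the PR necessarily uses:
```
# Attach the generated kubeconfig to the master VM's metadata so tests
# can retrieve it later (key name "kubeconfig" is illustrative).
gcloud compute instances add-metadata "${MASTER_NAME}" \
  --zone "${ZONE}" \
  --metadata-from-file kubeconfig="${KUBECONFIG}"
```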