This should ensure all load balancers get deleted even if a reordering of
watch events causes us to strand one after its service has been deleted,
because the sync will notice that the service controller's cache contains a
service that no longer exists in the apiserver. A load balancer could still
leak if the controller manager is killed between when the leak occurs and
the next sync runs, but this should improve things.

Creating a cluster from scratch takes about 7 minutes. But if you have just
rebuilt the binaries and want to update them, you don't want to rerun the
entire thing. There is an ansible tag 'binary-update' for exactly this.
Now one can run:
```
ANSIBLE_TAGS=binary-update vagrant provision
```
And it will push the new binaries.

If you are using locally built binaries as a developer, you likely want to
push just those binaries to an existing cluster, not rerun the entire
playbook. Add a tag to do just that (see the sketch below).
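
For developers not going through vagrant at all, the same tag should work
when invoking the playbook directly. A minimal sketch, assuming a
hypothetical inventory path and playbook name:

```
# Push freshly built binaries to an existing cluster without rerunning
# the full provisioning run. The inventory path and playbook name here
# are illustrative, not necessarily what the repo uses.
ansible-playbook -i inventory cluster.yml --tags binary-update
```
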
Do the /etc/hosts creation with vagrant, so it uses internal instead of
external IPs (the hostmanager plugin only knew about the public IP).
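
Roughly, the provisioning step amounts to shell like the following; the
hostnames and private addresses are invented for illustration:

```
# Append cluster entries to /etc/hosts using the private-network
# addresses rather than the public ones. (Hypothetical names/IPs.)
echo "10.245.1.2 kube-master" >> /etc/hosts
echo "10.245.1.3 kube-node-1" >> /etc/hosts
```
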
Ignore errors on docker failure when 'restarting' docker in the flannel
handler. If this is a clean install, we haven't run the 'node' play yet, so
docker isn't installed and doesn't need to be restarted. It would be better
to be more specific about which errors we ignore, though...
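
In shell terms the handler currently amounts to the first line below; a
more targeted version would first check that docker is actually present.
This is a sketch assuming a systemd host, not the actual handler:

```
# Current behavior: attempt the restart and swallow any failure.
systemctl restart docker || true

# More specific alternative: only restart docker if its unit exists.
if systemctl list-unit-files | grep -q '^docker\.service'; then
  systemctl restart docker
fi
```
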
This was originally submitted to pick up v0.3.1 of the cloud logging
plugin, which had a fix for the name 'metadata' failing to resolve.
Since new releases of google-fluentd include this fix, it is no longer
required.

I've done some additional testing of 'gem update' behavior in the interim,
and I think it is ok to use in targeted situations, but we should not be
doing an unconstrained update in general. The issue is that updating a
gem may pull in new dependencies, some of which may include native code,
so the update may try to launch a compiler, which is both undesirable and
prone to failure.

If we do need to grab an updated gem in the future, we should specify an
explicit version and the --minimal-deps flag.
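
For example, something along these lines; the gem name is an assumption
on my part, with the v0.3.1 mentioned above standing in as the pinned
version:

```
# Install a pinned version of the logging plugin without upgrading
# dependencies that already satisfy their version requirements.
gem install fluent-plugin-google-cloud -v 0.3.1 --minimal-deps
```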