e2e auth breakage it caused. The fix is to not set project/zone/kube_master
to the empty string partway through the script, which I should have
realized was a bad idea in the first place.
The Go coverage tool does not currently support recording a single
coverage profile across packages, so when KUBE_COVER is nonempty we
manually combine the per-package coverage profiles and use the combined
profile to produce an HTML report. The exact value of KUBE_COVER is now
ignored; KUBE_COVERMODE can be used to change the coverage mode from the
default of "atomic".
Additionally, if KUBE_GOVERALLS_BIN is set, hack/test-go.sh will attempt
to report coverage results to Coveralls.io. This is intended to be used
with the Travis build.
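The combining step itself is just concatenating the per-package profiles
behind a single mode header and handing the result to go tool cover. A
rough sketch of the idea (the file paths and the goveralls invocation are
illustrative, not the exact hack/test-go.sh code):

    # Assumed layout: one .cov profile per package under /tmp/coverage/.
    mode="${KUBE_COVERMODE:-atomic}"
    combined="/tmp/coverage/combined.out"

    echo "mode: ${mode}" > "${combined}"
    # Each per-package profile carries its own "mode:" line; keep only one.
    grep -h -v "^mode:" /tmp/coverage/*.cov >> "${combined}"

    # Produce the HTML report from the combined profile.
    go tool cover -html="${combined}" -o /tmp/coverage/coverage.html

    # Optionally report to Coveralls.io (e.g. from the Travis build).
    if [[ -n "${KUBE_GOVERALLS_BIN:-}" ]]; then
      "${KUBE_GOVERALLS_BIN}" -coverprofile="${combined}" || true
    fi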
Since Jenkins has hopefully been set up properly to read test failures
from junit*.xml files, only exit with a nonzero status when there are
infrastructure failures. If there are only test failures, their nonzero
exit status is ignored and the script still exits zero.
Also, disable Ginkgo's colors to make the Jenkins console logs more
readable.
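Put differently, the script captures the test exit code instead of
letting it abort the run, and reserves a nonzero exit for real
infrastructure problems. A sketch of that behavior (the flag names and
paths here are assumptions, not the actual runner):

    # Illustrative only. -noColor keeps the Jenkins console log readable.
    ginkgo -noColor ./test/e2e -- "${@}" && testexit=0 || testexit=$?
    if [[ "${testexit}" -ne 0 ]]; then
      echo "Tests failed; Jenkins will pick the failures up from junit*.xml."
    fi
    # Infrastructure failures elsewhere (cluster bring-up, teardown, ...)
    # still exit nonzero under errexit and fail the build.
    exit 0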
For now, keep the finishRunning() wrapper but use a straight cmd.Run()
call instead of the convoluted goroutine trying to catch signals.
It turns out that Unix process group handling is enough to interrupt
pending child processes when the run is stopped with something like a
Ctrl+C, so no extra signal plumbing is needed.
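The mechanism being relied on here: a Ctrl+C makes the terminal send
SIGINT to the entire foreground process group, and children started with
a plain cmd.Run() stay in that group, so they receive the signal without
any explicit forwarding. A toy shell illustration of the same behavior
(the child command is just a stand-in):

    # Toy demonstration, not project code. The sleep below shares this
    # script's foreground process group, so a Ctrl+C interrupts both the
    # script and its child -- no trap or signal relay needed.
    echo "starting a long-running child; press Ctrl+C to stop everything"
    sleep 600    # stands in for build-release.sh or a test binary
    echo "only printed if the child ran to completion"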
Tested:
- Full e2e run with hack/e2e-test.sh; two tests failed, but it looks like
they were already failing before this change.
- Started a hack/e2e.go -v -build and interrupted it with Ctrl+C,
confirmed that build-release.sh was killed in the process.
This does away with the giant dump from cobra for kubectl and instead
generates md files which contain similar information, but one per verb.
This might work well as part of the cobra project, instead of doing it
in kube, but this gets us nice, linked documentation right now. If
people like it, I will try to get something similar into cobra.
Try to clean up if there is a failure in the script at any point.
Handle undefined vars in cleanup.
Wait longer for apiserver.
Exit if apiserver doesn't come up.
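Concretely, that combination is an EXIT trap for cleanup, ':-' defaults
so the trap survives 'set -u' even when it runs early, and a bounded wait
loop that gives up if the apiserver never answers. A sketch under those
assumptions (the URL, timeout, and variable names are illustrative):

    # Illustrative sketch, not the actual script.
    set -o errexit
    set -o nounset

    cleanup() {
      # The ${...:-} defaults keep nounset from failing the trap if we
      # die before these variables are ever assigned.
      [[ -n "${APISERVER_PID:-}" ]] && kill "${APISERVER_PID}" || true
      [[ -n "${ETCD_PID:-}" ]] && kill "${ETCD_PID}" || true
    }
    trap cleanup EXIT    # fires on any failure thanks to errexit

    # Wait (longer) for the apiserver, and bail out if it never comes up.
    up=0
    for _ in {1..60}; do
      if curl -fs http://127.0.0.1:8080/healthz >/dev/null; then
        up=1
        break
      fi
      sleep 1
    done
    if [[ "${up}" -ne 1 ]]; then
      echo "apiserver never came up" >&2
      exit 1
    fi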
And actually, make it better: Go ahead and tear down the cluster
even when tests fail, but (hopefully) relay the test exit status
correctly. This fails if there's a double error (if -down *also*
fails, we'll fail due to errexit), but either way is a build failure,
and this means that the teardown of a test failure build isn't getting
charged to the next run.
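The pattern boils down to capturing the test exit code without tripping
errexit, always running the teardown, and then exiting with the saved
code. A sketch; the -test flag name is an assumption, the rest follows
the e2e.go invocations mentioned above:

    # Illustrative; assumes 'set -o errexit' is in effect.
    # The '&& ... || ...' keeps a test failure from aborting the script here.
    go run hack/e2e.go -v -test && exitcode=0 || exitcode=$?

    # Always tear the cluster down, so a failed run's teardown isn't
    # charged to the next build. If -down itself fails, errexit still
    # fails the build -- either way it's red.
    go run hack/e2e.go -v -down

    exit "${exitcode}"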