* Add a test/e2e/shell.go that slurps up everything in hack/e2e-suite
and runs it as a bash test, borrowing all the code from hack/e2e.go.
* Rip out all the crap in hack/e2e.go that deals with multiple tests
* Move hack/e2e-suite/goe2e.sh to hack/ginkgo-e2e.sh so that it
doesn't get slurped up.
Simply incorporate some of the boilerplate from hack/e2e.go into the
scripts in hack/e2e-suite.
Use environment variables with default values to allow overriding the
kubectl command line and to use a versioned package root.
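As a minimal Go sketch of the same env-with-default pattern (the variable
names KUBECTL and KUBE_VERSION_ROOT here are hypothetical illustrations,
not necessarily what the scripts use):

    package main

    import (
    	"fmt"
    	"os"
    )

    // getEnvOrDefault returns the value of an environment variable,
    // falling back to a default when the variable is unset or empty.
    func getEnvOrDefault(key, def string) string {
    	if v := os.Getenv(key); v != "" {
    		return v
    	}
    	return def
    }

    func main() {
    	// Hypothetical names; the real scripts may use different variables.
    	kubectl := getEnvOrDefault("KUBECTL", "cluster/kubectl.sh")
    	versionRoot := getEnvOrDefault("KUBE_VERSION_ROOT", ".")
    	fmt.Println(kubectl, versionRoot)
    }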
Tested:
- Ran `go run hack/e2e.go -test` on a test cluster.
- Ran the test cases individually.
- Ran hack/e2e-suite/goe2e.sh -t Pods to confirm it takes arguments.
- Also fixed cluster/test-network.sh (which should become more and more irrelevant).
This is another step in removing external dependencies of the Go e2e tests.
Remove references to this file on list of files required to run e2e tests.
Also use a unique name for the pod, so that a failure to clean up a
previous run does not break a new run with a name conflict.
Tested by running cmd/e2e -t TestPodUpdate against an API server in GCE.
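A minimal sketch of the unique-name idea (illustrative only; the actual
suffix scheme in the test may differ):

    package main

    import (
    	"fmt"
    	"math/rand"
    	"time"
    )

    // uniquePodName appends a random suffix to a base name so that leftovers
    // from a previous, uncleaned run cannot collide with the new pod.
    func uniquePodName(base string) string {
    	r := rand.New(rand.NewSource(time.Now().UnixNano()))
    	return fmt.Sprintf("%s-%08x", base, r.Uint32())
    }

    func main() {
    	fmt.Println(uniquePodName("update-demo")) // e.g. update-demo-1a2b3c4d
    }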
This will increase build times on Jenkins, but should make the build
times consistent and make them pull from sources every time versus
leftover artifacts. Also upping timeout. (Try to address some recent
aborted builds.)
This is another step in removing external dependencies of the Go e2e tests.
Also remove other references to this file.
Tested by running cmd/e2e -t TestKubeletSendsEvent against an API server in GCE.
Use the E2E_REPORT_DIR global environment variable to define the
location where the JUnit XML reports should be saved.
Modify the Jenkins e2e.sh script to export that variable pointing to the
top of the Jenkins build tree.
Tested by running `E2E_REPORT_DIR=${PWD}/.. hack/e2e-test.sh` and
confirmed ../junit.xml was generated and looked good.
Removed auth for Grafana to facilitate usage via service proxy on the api-server.
Added a Grafana service.
Removed the Elasticsearch dependency for monitoring - faster startup times.
This was staring at me yesterday, and I even commented that "huh,
there's got to be something wrong with the firewall rules," but then
job/kubernetes-e2e-gce/1002/tapResults/ made it obvious: If you have
two e2e jobs running at the same time in the same project (hint:
Jenkins does), they'll race with each other, since resource names are
project scoped.
And a config description. This doesn't yet have much - first want to
make sure I can do the build job. Next I'll submit the e2e script with
its twiddles and switch those over. (After going to 3 e2es, I think
it's finally time for version control.)
This exposes the proper v1beta3 API endpoint when the user specifies
the --runtime_config=api/v1beta3 argument to the apiserver. v1beta3
is still considered experimental and subject to change.
--runtime_config is a map of string keys and values that can be
specified by providing
--runtime_config=a=b,b=c,d,e
Only the key must be specified; the value can be omitted.
Enables v1beta3 in hack/local-up-cluster.sh and hack/test-cmd.sh
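A sketch of how such a flag value could be parsed into a map (illustrative
only; the real apiserver flag handling may differ):

    package main

    import (
    	"fmt"
    	"strings"
    )

    // parseRuntimeConfig turns "a=b,b=c,d,e" into a map where keys without
    // an explicit value (here "d" and "e") map to the empty string.
    func parseRuntimeConfig(s string) map[string]string {
    	cfg := map[string]string{}
    	for _, part := range strings.Split(s, ",") {
    		if part == "" {
    			continue
    		}
    		kv := strings.SplitN(part, "=", 2)
    		if len(kv) == 2 {
    			cfg[kv[0]] = kv[1]
    		} else {
    			cfg[kv[0]] = ""
    		}
    	}
    	return cfg
    }

    func main() {
    	fmt.Println(parseRuntimeConfig("api/v1beta3,a=b")) // map[a:b api/v1beta3:]
    }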
Scheduler uses Reflector from pkg/client/cache.
It defines some helper types.
I'd like to use those helpers with pkg/client/cache
in kube-proxy and kubelet too.
There are quite a few 'composite literal uses unkeyed fields' errors that I have kept out of this patch.
And there are a couple where vet just seems confused. These are the easiest ones.
What I really want is
https://github.com/GoogleCloudPlatform/kubernetes/issues/2953, but
haven't had a chance to code that yet. Maybe it's time. (Then I'd
remove the provider-specific test and just say "is it > 0.7.2, or does
it claim to be capable of something from the future?" The latter
covers the HEAD server case ... though just bumping the server version
immediately after release might accomplish that, too.)
After this, DNS is resolvable from the host, if the DNS server is targeted
explicitly. This does NOT add the cluster DNS to the host's resolv.conf. That
is a larger problem, with distro-specific tie-ins and circular deps.
Add test artifacts to the build. This lets you do:
tar -xzf kubernetes.tar.gz
tar -xzf kubernetes-test.tar.gz
cd kubernetes
go run ./hack/e2e.go -up -test -down
without having a git checkout.
by adding support for testing official release versions and writing a
simple script to first bring up and test the old version before
upgrading it and testing it again. Related to supporting in-place
upgrades of masters (#2524).
Make each test available for execution --times times. Acts like a
multi-deck shoe. (Otherwise this is easily scriptable outside the
e2e.go command).
Useful for "I want to walk away for a few hours leaving some end-to-ends
running", until we have more stress tests. Also useful for detecting
flakes.
Reporting is changed to break out Passed, Flaky and Failed. I chose to
keep all three lines even if --times isn't on, just for consistency in
scraping. Similarly, it always outputs the counts now. A report looks
like:
2014/12/09 07:31:21 Passed tests: goe2e.sh[100/100] guestbook.sh[100/100] private.sh[100/100] services.sh[100/100]
2014/12/09 07:31:21 Flaky tests: basic.sh[99/100] certs.sh[99/100] monitoring.sh[98/100] pd.sh[98/100] update.sh[98/100]
2014/12/09 07:31:21 Failed tests:
2014/12/09 07:31:21 8 test(s) failed.
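The Passed/Flaky/Failed breakdown follows naturally from per-test pass
counts; a rough sketch of that classification (hypothetical helper, not the
e2e.go code itself):

    package main

    import "fmt"

    // classify buckets a test by how many of its runs passed:
    // all runs passed => Passed, some => Flaky, none => Failed.
    func classify(passed, runs int) string {
    	switch {
    	case passed == runs:
    		return "Passed"
    	case passed > 0:
    		return "Flaky"
    	default:
    		return "Failed"
    	}
    }

    func main() {
    	fmt.Println(classify(100, 100)) // Passed
    	fmt.Println(classify(98, 100))  // Flaky
    	fmt.Println(classify(0, 100))   // Failed
    }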
Replaces the client public interface but leaves old references to "minions"
for a later refactor. Selects the path "nodes" for v1beta3 and "minions"
for older versions.
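The version-dependent path selection amounts to something like this
(a simplified sketch, not the actual client code):

    package main

    import "fmt"

    // nodesResourcePath returns the REST resource path for node objects:
    // "nodes" for v1beta3 and "minions" for older API versions.
    func nodesResourcePath(apiVersion string) string {
    	if apiVersion == "v1beta3" {
    		return "nodes"
    	}
    	return "minions"
    }

    func main() {
    	fmt.Println(nodesResourcePath("v1beta3"), nodesResourcePath("v1beta1"))
    }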
Currently, we run the e2e tests in whatever order readdir happens to
return, which is random on some filesystems, name sorted on others,
create order on others, etc. Eventually, we may want to be
automatically hermetic between e2e tests (especially as we introduce
more resource-destructive tests), but until then, it would be useful
to permute the test order randomly between runs to ensure that
developers don't accidentally rely on a particular order. This
introduces a form of forced hermeticism, since improper state cleanup
from one test may not perturb a given test, but there's probably *a*
test in the suite that the order will perturb, so the RNG will find
that order eventually.
Adds logging of the generated seed, and an --orderseed argument that
can be used to re-run in the same order. Also sorts the pass/fail list
now for easier human reading.
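A minimal sketch of the seeded-permutation idea (the --orderseed flag name
comes from the description above; everything else is illustrative):

    package main

    import (
    	"flag"
    	"log"
    	"math/rand"
    	"time"
    )

    func main() {
    	orderseed := flag.Int64("orderseed", 0, "seed for the test order; 0 picks a new one")
    	flag.Parse()

    	seed := *orderseed
    	if seed == 0 {
    		seed = time.Now().UnixNano()
    	}
    	// Log the seed so a failing order can be reproduced exactly.
    	log.Printf("Using test order seed %d (pass --orderseed=%d to reproduce)", seed, seed)

    	tests := []string{"basic.sh", "guestbook.sh", "pd.sh", "services.sh"}
    	r := rand.New(rand.NewSource(seed))
    	r.Shuffle(len(tests), func(i, j int) { tests[i], tests[j] = tests[j], tests[i] })
    	log.Printf("Test order: %v", tests)
    }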
Minor usability nuisance: If you run:
go run hack/e2e.go -v -test
... and you don't happen to have a running e2e cluster, it should fail
fast, rather than chugging through every test and having them fall
over.
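One way to fail fast would be a cheap reachability probe against the
apiserver before running anything (a sketch under that assumption; the
address, port, and endpoint below are illustrative):

    package main

    import (
    	"fmt"
    	"net/http"
    	"os"
    	"time"
    )

    // clusterIsUp does a quick probe of the apiserver's healthz endpoint so
    // the runner can bail out early instead of letting every test time out.
    func clusterIsUp(apiServer string) bool {
    	client := &http.Client{Timeout: 5 * time.Second}
    	resp, err := client.Get(apiServer + "/healthz")
    	if err != nil {
    		return false
    	}
    	defer resp.Body.Close()
    	return resp.StatusCode == http.StatusOK
    }

    func main() {
    	if !clusterIsUp("http://127.0.0.1:8080") {
    		fmt.Fprintln(os.Stderr, "e2e cluster does not appear to be up; aborting")
    		os.Exit(1)
    	}
    }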
This commit brings two main changes, notably:
Two new options that can be set as environment variables
- DOCKER_OPTS: any arbitrary set of docker options. Example: --tlsverify
- DOCKER_NATIVE: This forces the use of the locally installed native docker.
  This is most useful if you're on OS X and do not want to use boot2docker.
Now uses 'docker cp' instead of tar piping to transfer files. This
currently must be done by copying the binaries off of the docker volume
and into a local filesystem (/tmp) before a docker cp is done. This
workaround will no longer be necessary after bug fix
https://github.com/docker/docker/pull/8509 makes it into stable.
This was necessary because the tar | tar method was creating corrupted
archives on OS X even with the < /dev/null workaround.
This change refactors the way Kubelet's DockerPuller handles the docker config credentials to utilize a new credentialprovider library.
The credentialprovider library is based on several of the files from the Kubelet's dockertools directory, but supports a new pluggable model for retrieving a .dockercfg-compatible JSON blob with credentials.
With this change, the Kubelet will lazily ask for the docker config from a set of DockerConfigProvider extensions each time it needs a credential.
This change provides common implementations of DockerConfigProvider for:
- "Default": load .dockercfg from disk
- "Caching": wraps another provider in a cache that expires after a pre-specified lifetime.
GCP-only:
- "google-dockercfg": reads a .dockercfg from a GCE instance's metadata
- "google-dockercfg-url": reads a .dockercfg from a URL specified in a GCE instance's metadata.
- "google-container-registry": reads an access token from GCE metadata into a password field.
Also fix up cert generation. It was failing during the first salt highstate when trying to chown the certs, because the apiserver user didn't exist yet. Fix this by creating a 'kube-cert' group and chgrp'ing the files to that group, then making the apiserver a member of that group.
Fixes #2365. Fixes #2368.
apiserver becomes kube-apiserver
controller-manager -> kube-controller-manager
scheduler and proxy similarly.
Only thing I promise is that right now hack/build-go.sh and
build/release.sh exit with 0. That's it. Who knows if any of this
actually works....