This is the last piece of Clayton's #26179 to be implemented with file tags.
All diffs are accounted for. A followup will use this to streamline some
packages.
Also add some V(5) debugging; it was helpful in diagnosing various issues and
may be helpful again.
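The added debugging follows the usual verbosity-gated logging pattern; a minimal sketch, assuming the glog package the tree already uses (the call site and message are illustrative):

```go
package main

import (
	"flag"

	"github.com/golang/glog"
)

func main() {
	// glog registers the -v flag; run with -logtostderr -v=5 to see the message.
	flag.Parse()

	// Verbosity-gated debug logging: cheap to leave in place, visible only
	// when explicitly requested.
	glog.V(5).Infof("considering package %q for deep-copy generation", "k8s.io/kubernetes/pkg/api")
}
```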
This drives most of the logic of deep-copy generation from tags like:
// +deepcopy-gen=package
...rather than hardcoded lists of packages. This will make it possible to
subsequently generate code ONLY for packages that need it *right now*, rather
than all of them always.
Also remove packages that really do not need deep-copies (no symbols used
anywhere).
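To illustrate, a package opts in by carrying the tag in a doc comment; a minimal sketch (package name hypothetical):

```go
// +deepcopy-gen=package

// Package example carries the tag above, so deep-copy functions are
// generated for all of its types rather than via a hardcoded package list.
package example
```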
Automatic merge from submit-queue
Dedent
Adding the dedent package and then applying it to the kubectl help commands. Also updating the documentation to reflect the use of dedent.
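A minimal sketch of the pattern, assuming the renstrom/dedent package (the import path and help text here are illustrative):

```go
package main

import (
	"fmt"

	"github.com/renstrom/dedent"
)

// The long description can be indented naturally alongside the code;
// Dedent strips the common leading whitespace before the text is shown.
var longDesc = dedent.Dedent(`
	Display one or many resources.

	This example text stands in for a real kubectl help string.`)

func main() {
	fmt.Println(longDesc)
}
```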
Automatic merge from submit-queue
Allow conformance tests to run on non-GCE providers
fixes https://github.com/kubernetes/kubernetes/issues/26869
Creates a skeleton provider which has all the required function stubs, but allows a previously set "skeleton" KUBERNETES_PROVIDER to not be overridden with "gce".
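A minimal sketch of the stub idea, with hypothetical names (the e2e framework wires providers differently, but the shape is the same): every provider hook is present and simply reports that the operation is unsupported.

```go
package skeleton

import "fmt"

// Provider is a hypothetical stand-in for the hooks the e2e framework
// expects a cloud provider to implement.
type Provider interface {
	ResizeGroup(group string, size int32) error
	GroupSize(group string) (int, error)
}

// skeletonProvider satisfies Provider with no-ops so conformance tests can
// run without any cloud-specific behavior.
type skeletonProvider struct{}

// Compile-time check that the stubs cover the full interface.
var _ Provider = skeletonProvider{}

func (skeletonProvider) ResizeGroup(string, int32) error {
	return fmt.Errorf("provider does not support resizing groups")
}

func (skeletonProvider) GroupSize(string) (int, error) {
	return -1, fmt.Errorf("provider does not support getting group size")
}
```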
Also includes other improvements:
- Makefile rule to run tests against remote instance using existing host or image
- Makefile will reuse an instance created from an image if it was not torn down
- Runner starts GCE instances in parallel with building source
- Runner uses the instance IP instead of the hostname so that it doesn't need to be resolved
- Runner supports cleaning up files and processes on an instance without stopping / deleting it
- Runner runs tests using `ginkgo` binary to support running tests in parallel
Unified the skydns templates using a simple underscore-based template and
added sed transform scripts to generate the salt and sed yaml templates (a
sketch of the placeholder idea follows below).
Moved all content out of cluster/addons/dns into build/kube-dns and
saltbase/salt/kube-dns.
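A minimal Go sketch of the underscore-placeholder idea (the marker and replacement strings are illustrative; the real transforms are sed scripts):

```go
package main

import (
	"fmt"
	"strings"
)

func main() {
	// One base template carries underscore markers...
	base := "--domain=__PILLAR__DNS__DOMAIN__."

	// ...which the salt transform rewrites into a pillar lookup...
	salt := strings.Replace(base, "__PILLAR__DNS__DOMAIN__", "{{ pillar['dns_domain'] }}", -1)

	// ...and the sed transform rewrites into a shell-style variable.
	sedOut := strings.Replace(base, "__PILLAR__DNS__DOMAIN__", "$DNS_DOMAIN", -1)

	fmt.Println(salt)
	fmt.Println(sedOut)
}
```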
Automatic merge from submit-queue
Introduce node memory pressure condition to scheduler
Following the work done by @derekwaynecarr in https://github.com/kubernetes/kubernetes/pull/21274, this introduces a memory pressure predicate for the scheduler.
Missing:
* write unit tests
* test the implementation
At the moment this is a heads-up for further discussion of how the new node memory pressure condition should be handled in the generic scheduler.
**Additional info**
* Based on [1], only best-effort pods are subject to filtering.
* Based on [2], a pod is best effort "iff requests & limits are not specified for any resource across all containers" (a sketch follows the references below).
[1] 542668cc79/docs/proposals/kubelet-eviction.md (scheduler)
[2] https://github.com/kubernetes/kubernetes/pull/14943
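A minimal sketch of the predicate described above, with hypothetical names and simplified types (the real scheduler types and plumbing differ):

```go
package predicates

import "errors"

// Simplified stand-ins for the real API types.
type Container struct {
	Requests map[string]string
	Limits   map[string]string
}

type Pod struct {
	Containers []Container
}

type Node struct {
	UnderMemoryPressure bool
}

// isBestEffort follows the definition cited in [2]: a pod is best effort
// iff requests and limits are not specified for any resource across all
// containers.
func isBestEffort(pod *Pod) bool {
	for _, c := range pod.Containers {
		if len(c.Requests) != 0 || len(c.Limits) != 0 {
			return false
		}
	}
	return true
}

// CheckNodeMemoryPressure filters only best-effort pods, per [1]: they are
// rejected when the node reports the memory pressure condition.
func CheckNodeMemoryPressure(pod *Pod, node *Node) error {
	if isBestEffort(pod) && node.UnderMemoryPressure {
		return errors.New("node reports memory pressure")
	}
	return nil
}
```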
Automatic merge from submit-queue
devel/ tree further minor edits
Addresses line-wrap issue #1488. Also cleans up other minor editing issues in the docs/devel/* tree, such as spelling errors, links, and tables of contents.
Signed-off-by: Mike Brown <brownwm@us.ibm.com>
Automatic merge from submit-queue
devel/ tree more minor edits
Addresses line-wrap issue #1488. Also cleans up other minor editing issues in the docs/devel/* tree, such as spelling errors, links, and tables of contents.
Signed-off-by: Mike Brown <brownwm@us.ibm.com>