Automatic merge from submit-queue
[Flaky PR Test] Fix summary test
fixes issue: #46797
As we can see in the [example failure build log](https://storage.googleapis.com/kubernetes-jenkins/logs/ci-kubernetes-node-kubelet/4319/build-log.txt), the summary containers are pinging google hundreds of times per second. This occasionally causes the summary container to be killed, which fails the test. According to the current test, the summary containers are only supposed to ping every 10 seconds. As it turns out, we were missing a semicolon and were not sleeping between pings. For background, we ping google to generate network traffic so that the summary test can validate network metrics.
This PR adds the semicolon so the container sleeps between calls, and decreases the sleep time from 10 seconds to 1 second, as one call every 10 seconds did not produce enough activity.
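For illustration, roughly what the corrected container command looks like (a sketch only; the exact command string in the test may differ):

```go
// Illustrative sketch (not the actual test source): the shell command the
// summary test's containers run to generate network traffic. The missing
// semicolon meant "sleep" was passed as extra arguments to ping instead of
// being executed, so the loop never paused between pings.
package main

import "fmt"

// pingCommand is the corrected form: ping once, then sleep for one second.
const pingCommand = "while true; do ping -c 1 google.com; sleep 1; done"

func main() {
	fmt.Println(pingCommand)
}
```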
cc @kubernetes/kubernetes-build-cops @dchen1107
Automatic merge from submit-queue (batch tested with PRs 46648, 46500, 46238, 46668, 46557)
Add an e2e test for AdvancedAuditing
Enable a simple "advanced auditing" setup for e2e tests running on GCE, and add an e2e test that creates & deletes a pod, a secret, and verifies that they're audited.
Includes https://github.com/kubernetes/kubernetes/pull/46548
For https://github.com/kubernetes/features/issues/22
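For illustration only, a rough sketch of the kind of verification such a test performs; the log path, matching logic, and helper names below are assumptions, not the actual e2e code:

```go
// Hypothetical sketch of verifying audit entries after creating and deleting
// a pod and a secret; the real e2e test reads the audit log from the master
// and matches expected events in a more structured way.
package main

import (
	"bufio"
	"fmt"
	"os"
	"strings"
)

// expectedEvent describes an audit entry we require to be present.
type expectedEvent struct {
	verb     string // e.g. "create", "delete"
	resource string // e.g. "pods", "secrets"
}

func main() {
	expected := []expectedEvent{
		{"create", "pods"}, {"delete", "pods"},
		{"create", "secrets"}, {"delete", "secrets"},
	}

	f, err := os.Open("/var/log/kube-apiserver-audit.log") // assumed log path
	if err != nil {
		fmt.Fprintln(os.Stderr, err)
		os.Exit(1)
	}
	defer f.Close()

	found := make(map[expectedEvent]bool)
	scanner := bufio.NewScanner(f)
	for scanner.Scan() {
		line := scanner.Text()
		for _, e := range expected {
			// Crude string match; a real check parses structured audit events.
			if strings.Contains(line, `"verb":"`+e.verb+`"`) &&
				strings.Contains(line, `"resource":"`+e.resource+`"`) {
				found[e] = true
			}
		}
	}
	for _, e := range expected {
		if !found[e] {
			fmt.Printf("missing audit event: %s %s\n", e.verb, e.resource)
		}
	}
}
```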
/cc @ericchiang @sttts @soltysh @ihmccreery
Automatic merge from submit-queue (batch tested with PRs 46648, 46500, 46238, 46668, 46557)
Support validating package versions in node conformance test
**What this PR does / why we need it**:
This PR adds a package validator to the node conformance test that checks whether the locally installed packages meet the image spec.
**Special notes for your reviewer**:
The image spec for GKE (which has the package spec) will be in a separate PR. Then we will publish a new node conformance test image for GKE whose name should use the convention in https://github.com/kubernetes/kubernetes/issues/45760 and have `gke` in it.
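For illustration, a minimal sketch of what such a package validation loop might look like; the spec contents and the dpkg-based version lookup below are assumptions rather than the actual validator:

```go
// Hypothetical sketch of validating locally installed package versions
// against an image spec; the real node conformance test defines its own
// spec format and lookup mechanisms.
package main

import (
	"fmt"
	"os/exec"
	"strings"
)

// packageSpec maps a package name to the version prefix the image must ship.
// The entries here are examples only.
var packageSpec = map[string]string{
	"docker-ce": "17.03",
	"iptables":  "1.6",
}

// installedVersion asks dpkg for the installed version of a package
// (Debian-based images; other distros would need a different query).
func installedVersion(pkg string) (string, error) {
	out, err := exec.Command("dpkg-query", "--showformat=${Version}", "--show", pkg).Output()
	return strings.TrimSpace(string(out)), err
}

func main() {
	for pkg, want := range packageSpec {
		got, err := installedVersion(pkg)
		if err != nil {
			fmt.Printf("FAIL %s: not installed (%v)\n", pkg, err)
			continue
		}
		if !strings.HasPrefix(got, want) {
			fmt.Printf("FAIL %s: want %s, got %s\n", pkg, want, got)
			continue
		}
		fmt.Printf("PASS %s %s\n", pkg, got)
	}
}
```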
**Release note**:
```
NONE
```
Respect PDBs during node upgrades and add test coverage to the
ServiceTest upgrade test. That test is modified to include pod
anti-affinity constraints and a PDB.
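As a sketch of the kind of object the upgraded test now creates, here is a hypothetical PodDisruptionBudget; the labels, names, and counts are made up, and the import paths reflect current client libraries rather than the 1.7 tree:

```go
// Illustrative only: a PodDisruptionBudget of the sort the upgraded
// ServiceTest creates so that node upgrades must respect it. Anti-affinity
// on the pods themselves spreads them across nodes so the PDB can actually
// be honored during a rolling node upgrade.
package main

import (
	"fmt"

	policyv1beta1 "k8s.io/api/policy/v1beta1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/apimachinery/pkg/util/intstr"
)

func main() {
	minAvailable := intstr.FromInt(2)
	pdb := &policyv1beta1.PodDisruptionBudget{
		ObjectMeta: metav1.ObjectMeta{Name: "service-test-pdb"},
		Spec: policyv1beta1.PodDisruptionBudgetSpec{
			MinAvailable: &minAvailable,
			// Selects the service's pods by label.
			Selector: &metav1.LabelSelector{
				MatchLabels: map[string]string{"app": "service-test"},
			},
		},
	}
	fmt.Printf("PDB %s requires at least %s pod(s) available during disruption\n",
		pdb.Name, pdb.Spec.MinAvailable.String())
}
```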
This PR adds a check of the local storage request when admitting pods.
If the local storage request exceeds the available resource, the pod
will be rejected.
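Conceptually, the admission check sums the pod's local storage requests and rejects the pod if the total does not fit in the node's remaining allocatable storage; a simplified sketch (not the actual kubelet code):

```go
// Simplified sketch of rejecting a pod whose local storage (scratch space)
// request exceeds the node's allocatable storage. Quantities are plain bytes
// here; the real kubelet works with resource.Quantity values.
package main

import "fmt"

type container struct {
	name           string
	storageRequest int64 // requested scratch space in bytes
}

type pod struct {
	name       string
	containers []container
}

// admitPod returns an error if the pod's total local storage request does
// not fit in the node's remaining allocatable storage.
func admitPod(p pod, allocatable, alreadyRequested int64) error {
	var request int64
	for _, c := range p.containers {
		request += c.storageRequest
	}
	if alreadyRequested+request > allocatable {
		return fmt.Errorf("pod %s rejected: requests %d bytes of local storage, only %d available",
			p.name, request, allocatable-alreadyRequested)
	}
	return nil
}

func main() {
	allocatable := int64(10 << 30) // 10 GiB allocatable scratch space
	p := pod{name: "big-scratch", containers: []container{{"main", 12 << 30}}}
	if err := admitPod(p, allocatable, 0); err != nil {
		fmt.Println(err)
	}
}
```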
This PR adds support for allocatable local storage (scratch space).
This feature only covers the root file system, which is shared by
Kubernetes components, users' containers, and/or images. Users can use
the --kube-reserved flag to reserve storage for kube system components.
If the storage allocatable to users' pods is used up, some pods will be
evicted to free storage.
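The allocatable figure follows the usual reservation arithmetic, allocatable = capacity - kube-reserved; a toy sketch of that computation and the resulting eviction trigger (values and names are illustrative):

```go
// Toy sketch of the allocatable computation and the eviction trigger for
// root-filesystem scratch space. Values are illustrative; the real kubelet
// uses resource.Quantity and its eviction manager.
package main

import "fmt"

func main() {
	capacity := int64(100 << 30)    // 100 GiB root filesystem
	kubeReserved := int64(10 << 30) // storage reserved via --kube-reserved
	allocatable := capacity - kubeReserved

	used := int64(95 << 30) // storage consumed by users' pods
	fmt.Printf("allocatable for pods: %d GiB\n", allocatable>>30)
	if used > allocatable {
		fmt.Println("storage pressure: evicting pods to reclaim scratch space")
	}
}
```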
Automatic merge from submit-queue (batch tested with PRs 43505, 45168, 46439, 46677, 46623)
Add TPR to CRD migration helper.
This is a helper for migrating TPR data to CustomResource. It's rather hacky because it requires crossing apiserver boundaries, but doing it this way keeps the mess contained to the TPR code, which is scheduled for deletion anyway.
It's also not completely hands-free because making it resilient enough to be completely automated is too involved to be worth it for an alpha-to-beta migration, and would require investing significant effort to fix up soon-to-be-deleted TPR code. Instead, this feature will be documented as a best-effort helper whose results should be verified by hand.
The intended benefit of this over a totally manual process is that it should be possible to copy TPR data into a CRD without having to tear everything down in the middle. The process would look like this:
1. Upgrade to k8s 1.7. Nothing happens to your TPRs.
1. Create CRD with group/version and resource names that match the TPR. Still nothing happens to your TPRs, as the CRD is hidden by the overlapping TPR.
1. Delete the TPR. The TPR data is converted to CustomResource data, and the CRD begins serving at the same REST path.
Note that the old TPR data is left behind by this process, so watchers should not receive DELETE events. This also means the user can revert to the pre-migration state by recreating the TPR definition.
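Step 2 hinges on the CRD's group, version, and resource names matching the TPR exactly; a hedged sketch of such a CRD object (the group and names are examples, and the import paths are the current apiextensions-apiserver layout):

```go
// Illustrative only: a CRD whose group/version and resource names match an
// existing TPR, so that deleting the TPR lets the CRD take over the same
// REST path as described above.
package main

import (
	"fmt"

	apiextensionsv1beta1 "k8s.io/apiextensions-apiserver/pkg/apis/apiextensions/v1beta1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)

func main() {
	crd := &apiextensionsv1beta1.CustomResourceDefinition{
		ObjectMeta: metav1.ObjectMeta{Name: "foos.example.com"},
		Spec: apiextensionsv1beta1.CustomResourceDefinitionSpec{
			Group:   "example.com",
			Version: "v1", // must match the TPR's served version
			Scope:   apiextensionsv1beta1.NamespaceScoped,
			Names: apiextensionsv1beta1.CustomResourceDefinitionNames{
				Plural: "foos", // must match the TPR's resource name
				Kind:   "Foo",
			},
		},
	}
	fmt.Printf("CRD %s serves /apis/%s/%s/namespaces/{ns}/%s\n",
		crd.Name, crd.Spec.Group, crd.Spec.Version, crd.Spec.Names.Plural)
}
```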
Ref. https://github.com/kubernetes/kubernetes/issues/45728
Centralize the creation of the dialer during startup.
The dialer is then passed to both the APIServer and the Aggregator.
The Aggregator then sets the dialer on its Transport base.
This should allow the SSH tunnel to be used while also allowing the
Aggregation Auth to work with it.
Depending on the environment, InsecureSkipTLSVerify *may* need to be set
to true.
Fixed a few tests to call CreateDialer as part of start-up.
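A rough sketch of the shape of this wiring; the types and function names below are simplified stand-ins, not the actual kube-apiserver code:

```go
// Simplified stand-in for the startup wiring described above: one dialer is
// created up front and handed to both the API server configuration and the
// aggregator's proxy transport.
package main

import (
	"context"
	"crypto/tls"
	"fmt"
	"net"
	"net/http"
	"time"
)

type dialFunc func(ctx context.Context, network, addr string) (net.Conn, error)

// createDialer stands in for the centralized CreateDialer step; in a real
// setup this would return the tunnel-aware dialer when tunnels are enabled.
func createDialer() dialFunc {
	d := &net.Dialer{Timeout: 30 * time.Second}
	return d.DialContext
}

func main() {
	dial := createDialer()

	// API server side: use the dialer for connections to nodes/pods.
	apiServerTransport := &http.Transport{DialContext: dial}

	// Aggregator side: the same dialer on its Transport base, so aggregated
	// API traffic follows the same path; depending on the environment,
	// InsecureSkipVerify may need to be true for the tunneled case.
	aggregatorTransport := &http.Transport{
		DialContext:     dial,
		TLSClientConfig: &tls.Config{InsecureSkipVerify: false},
	}

	fmt.Println(apiServerTransport != nil, aggregatorTransport != nil)
}
```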
Automatic merge from submit-queue (batch tested with PRs 46076, 43879, 44897, 46556, 46654)
Local storage plugin
**What this PR does / why we need it**:
Volume plugin implementation for local persistent volumes. A scheduler predicate directs pods with already-bound PVCs to the node where the local PV resides. PVC binding still happens independently.
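A simplified sketch of that scheduling constraint (types and fields are illustrative; the real plugin records the node in the PV's node-affinity metadata):

```go
// Hypothetical sketch of the constraint: a pod using a PVC bound to a local
// PV may only land on the node that hosts that PV.
package main

import "fmt"

type localPV struct {
	name     string
	nodeName string // node whose local directory backs this PV
	path     string
}

type pvc struct {
	name    string
	boundPV *localPV // nil until the PVC is bound
}

// fitsNode reports whether a pod using the given PVCs can run on nodeName.
func fitsNode(claims []pvc, nodeName string) bool {
	for _, c := range claims {
		if c.boundPV != nil && c.boundPV.nodeName != nodeName {
			return false
		}
	}
	return true
}

func main() {
	pv := &localPV{name: "local-pv-1", nodeName: "node-a", path: "/mnt/disks/ssd1"}
	claims := []pvc{{name: "data", boundPV: pv}}
	fmt.Println("fits node-a:", fitsNode(claims, "node-a")) // true
	fmt.Println("fits node-b:", fitsNode(claims, "node-b")) // false
}
```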
**Which issue this PR fixes** *(optional, in `fixes #<issue number>(, fixes #<issue_number>, ...)` format, will close that issue when PR gets merged)*:
Part of #43640
**Release note**:
```
Alpha feature: Local volume plugin allows local directories to be created and consumed as a Persistent Volume. These volumes have node affinity, and pods will only be scheduled to the node where the volume resides.
```
Automatic merge from submit-queue
Node authorizer
This PR implements the authorization portion of https://github.com/kubernetes/community/blob/master/contributors/design-proposals/kubelet-authorizer.md and kubernetes/features#279:
* Adds a new authorization mode (`Node`) that authorizes requests from nodes based on a graph of related pods, secrets, configmaps, PVCs, and PVs:
* Watches pods, adds edges (secret -> pod, configmap -> pod, pvc -> pod, pod -> node)
* Watches pvs, adds edges (secret -> pv, pv -> pvc)
* When both Node and RBAC authorization modes are enabled, the default RBAC binding that grants the `system:node` role to the `system:nodes` group is not automatically created.
* Tightens the `NodeRestriction` admission plugin to require identifiable nodes for requests from users in the `system:nodes` group.
This authorization mode is intended to be used in combination with the `NodeRestriction` admission plugin, which limits the pods and nodes a node may modify. To enable in combination with RBAC authorization and the NodeRestriction admission plugin:
* start the API server with `--authorization-mode=Node,RBAC --admission-control=...,NodeRestriction,...`
* start kubelets with TLS bootstrapping or with client credentials that place them in the `system:nodes` group with a username of `system:node:<nodeName>`
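For intuition, a toy version of the authorization graph described above (types and helpers are illustrative, not the actual authorizer code): a node may access an object only if a path of edges connects that object to the node.

```go
// Toy version of the node-authorization graph: edges run secret -> pod,
// configmap -> pod, pvc -> pod, pod -> node (and secret -> pv, pv -> pvc),
// and a node may read an object only if the graph connects it to that node.
package main

import "fmt"

type graph struct {
	edges map[string][]string // object -> objects it points at
}

func (g *graph) addEdge(from, to string) {
	if g.edges == nil {
		g.edges = map[string][]string{}
	}
	g.edges[from] = append(g.edges[from], to)
}

// authorized reports whether a path of edges leads from obj to node.
func (g *graph) authorized(obj, node string) bool {
	return g.reach(obj, node, map[string]bool{})
}

func (g *graph) reach(obj, node string, seen map[string]bool) bool {
	if obj == node {
		return true
	}
	if seen[obj] {
		return false
	}
	seen[obj] = true
	for _, next := range g.edges[obj] {
		if g.reach(next, node, seen) {
			return true
		}
	}
	return false
}

func main() {
	var g graph
	// Edges are built by watching pods and PVs, as described above.
	g.addEdge("secret/db-creds", "pod/web-1")
	g.addEdge("configmap/web-config", "pod/web-1")
	g.addEdge("pod/web-1", "node/node-a")

	fmt.Println(g.authorized("secret/db-creds", "node/node-a")) // true: node runs a pod using it
	fmt.Println(g.authorized("secret/db-creds", "node/node-b")) // false: unrelated node
}
```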
```release-note
kube-apiserver: a new authorization mode (`--authorization-mode=Node`) authorizes nodes to access secrets, configmaps, persistent volume claims and persistent volumes related to their pods.
* Nodes must use client credentials that place them in the `system:nodes` group with a username of `system:node:<nodeName>` in order to be authorized by the node authorizer (the credentials obtained by the kubelet via TLS bootstrapping satisfy these requirements)
* When used in combination with the `RBAC` authorization mode (`--authorization-mode=Node,RBAC`), the `system:node` role is no longer automatically granted to the `system:nodes` group.
```
```release-note
RBAC: the automatic binding of the `system:node` role to the `system:nodes` group is deprecated and will not be created in future releases. It is recommended that nodes be authorized using the new `Node` authorization mode instead. Installations that wish to continue giving all members of the `system:nodes` group the `system:node` role (which grants broad read access, including all secrets and configmaps) must create an installation-specific ClusterRoleBinding.
```
Follow-up:
- [ ] enable e2e CI environment with admission and authorizer enabled (blocked by kubelet TLS bootstrapping enablement in https://github.com/kubernetes/kubernetes/pull/40760)
- [ ] optionally enable this authorizer and admission plugin in kubeadm
- [ ] optionally enable this authorizer and admission plugin in kube-up
Automatic merge from submit-queue
Switch gcloud compute copy-files to scp
gcloud is deprecating `gcloud compute copy-files` and switching to `gcloud compute scp`. Make the change before things start to break.
https://cloud.google.com/sdk/gcloud/reference/compute/copy-files
Warnings we get: `W0529 10:28:59.097] WARNING: gcloud compute copy-files is deprecated. Please use gcloud compute scp instead. Note that gcloud compute scp does not have recursive copy on by default. To turn on recursion, use the --recurse flag.`
/cc @jlowdermilk
Automatic merge from submit-queue
Support grabbing test suite metrics
**What this PR does / why we need it**:
Add support for grabbing metrics that cover the entire test suite's execution.
Update the "interesting" controller-manager metrics to match the
current names for the garbage collector, and add namespace controller
metrics to the list.
If you enable `--gather-suite-metrics-at-teardown`, the metrics are written to a file with a name such as `MetricsForE2ESuite_2017-05-25T20:25:57Z.json` in the `--report-dir`. If you don't specify `--report-dir`, the metrics are written to the test log output.
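A hypothetical sketch of the teardown step described above; the helper and its behavior are illustrative, while the real implementation uses the e2e framework's metrics grabbing:

```go
// Hypothetical sketch of the teardown step: gather suite-wide metrics and
// write them to a timestamped JSON file under the report directory, or to
// the log when no report directory is configured.
package main

import (
	"encoding/json"
	"fmt"
	"os"
	"path/filepath"
	"time"
)

func dumpSuiteMetrics(reportDir string, metrics map[string]interface{}) error {
	data, err := json.MarshalIndent(metrics, "", "  ")
	if err != nil {
		return err
	}
	if reportDir == "" {
		fmt.Printf("suite metrics:\n%s\n", data) // fall back to the test log
		return nil
	}
	name := fmt.Sprintf("MetricsForE2ESuite_%s.json", time.Now().UTC().Format(time.RFC3339))
	return os.WriteFile(filepath.Join(reportDir, name), data, 0644)
}

func main() {
	_ = dumpSuiteMetrics("", map[string]interface{}{
		"namespace_controller": map[string]int{"queue_depth": 0},
	})
}
```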
I'd like to enable this for some of the `pull-*` CI jobs, which will require a separate PR to test-infra.
**Which issue this PR fixes** *(optional, in `fixes #<issue number>(, fixes #<issue_number>, ...)` format, will close that issue when PR gets merged)*: fixes #
**Special notes for your reviewer**:
**Release note**:
```release-note
NONE
```
@kubernetes/sig-testing-pr-reviews @smarterclayton @wojtek-t @gmarek @derekwaynecarr @timothysc
Automatic merge from submit-queue (batch tested with PRs 45534, 37212, 46613, 46350)
[e2e] Fix redundant parameter definition
When reaching the HTTP service times out, a redundant parameter caused
the returned error to be nil.
Automatic merge from submit-queue
no-snat test
This test checks that Pods can communicate with each other in the same cluster without SNAT.
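In essence, the check is that when one pod talks to another, the server observes a source IP equal to the client pod's own IP; a self-contained sketch of that comparison (the endpoint and helpers are made up, not the actual test code):

```go
// Minimal sketch of the no-SNAT check: a "target pod" echoes the source IP
// it observed for the connection, and the "client pod" verifies that it
// matches its own IP.
package main

import (
	"fmt"
	"io"
	"net"
	"net/http"
	"net/http/httptest"
)

func main() {
	// "Target pod": reports the remote (source) address of each request.
	target := httptest.NewServer(http.HandlerFunc(func(w http.ResponseWriter, r *http.Request) {
		host, _, _ := net.SplitHostPort(r.RemoteAddr)
		io.WriteString(w, host)
	}))
	defer target.Close()

	// "Client pod": asks the target what source IP it saw.
	resp, err := http.Get(target.URL + "/whoami")
	if err != nil {
		panic(err)
	}
	defer resp.Body.Close()
	observed, _ := io.ReadAll(resp.Body)

	clientIP := "127.0.0.1" // in the real test, the client pod's PodIP
	if string(observed) == clientIP {
		fmt.Println("no SNAT: observed source IP matches the client pod IP")
	} else {
		fmt.Printf("SNAT detected: observed %s, expected %s\n", observed, clientIP)
	}
}
```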
I intend to create a job that runs this in small clusters (\~3 nodes) at a low frequency (\~once per day) so that we have a signal as we work on allowing multiple non-masquerade CIDRs to be configured (see [kubernetes-incubator/ip-masq-agent](https://github.com/kubernetes-incubator/ip-masq-agent), for example).
/cc @dnardo