Commit Graph

7534 Commits

Kubernetes Submit Queue
98e5496aa2 Merge pull request #46677 from enisoc/tpr-migrate-etcd
Automatic merge from submit-queue (batch tested with PRs 43505, 45168, 46439, 46677, 46623)

Add TPR to CRD migration helper.

This is a helper for migrating TPR data to CustomResource. It's rather hacky because it requires crossing apiserver boundaries, but doing it this way keeps the mess contained to the TPR code, which is scheduled for deletion anyway.

It's also not completely hands-free because making it resilient enough to be completely automated is too involved to be worth it for an alpha-to-beta migration, and would require investing significant effort to fix up soon-to-be-deleted TPR code. Instead, this feature will be documented as a best-effort helper whose results should be verified by hand.

The intended benefit of this over a totally manual process is that it should be possible to copy TPR data into a CRD without having to tear everything down in the middle. The process would look like this:

1. Upgrade to k8s 1.7. Nothing happens to your TPRs.
2. Create a CRD with group/version and resource names that match the TPR. Still nothing happens to your TPRs, as the CRD is hidden by the overlapping TPR.
3. Delete the TPR. The TPR data is converted to CustomResource data, and the CRD begins serving at the same REST path.

Note that the old TPR data is left behind by this process, so watchers should not receive DELETE events. This also means the user can revert to the pre-migration state by recreating the TPR definition.
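As a minimal sketch of steps 2 and 3 (all names here are hypothetical; CRDs are served from `apiextensions.k8s.io/v1beta1` in 1.7):

```
# Step 2: create a CRD whose group/version and resource names match the
# hypothetical TPR "example.stable.example.com".
cat <<EOF | kubectl create -f -
apiVersion: apiextensions.k8s.io/v1beta1
kind: CustomResourceDefinition
metadata:
  name: examples.stable.example.com
spec:
  group: stable.example.com
  version: v1
  scope: Namespaced
  names:
    plural: examples
    kind: Example
EOF

# Step 3: deleting the TPR triggers the copy into CustomResource storage,
# after which the CRD serves at the same REST path.
kubectl delete thirdpartyresource example.stable.example.com
```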

Ref. https://github.com/kubernetes/kubernetes/issues/45728
2017-06-01 05:43:44 -07:00
Anthony Yeh
ba59e14d44 Add TPR to CRD migration helper. 2017-05-31 19:07:38 -07:00
Shyam Jeedigunta
52ef3e6e94 Performance tests also cover configmaps now 2017-05-31 13:13:15 +02:00
Kubernetes Submit Queue
0aad9d30e3 Merge pull request #44897 from msau42/local-storage-plugin
Automatic merge from submit-queue (batch tested with PRs 46076, 43879, 44897, 46556, 46654)

Local storage plugin

**What this PR does / why we need it**:
Volume plugin implementation for local persistent volumes. A scheduler predicate directs already-bound PVCs to the node where the local PV resides. PVC binding still happens independently.

**Which issue this PR fixes** *(optional, in `fixes #<issue number>(, fixes #<issue_number>, ...)` format, will close that issue when PR gets merged)*: 
Part of #43640

**Release note**:

```
Alpha feature: the local volume plugin allows local directories to be created and consumed as a Persistent Volume. These volumes have node affinity, and pods will only be scheduled to the node where the volume is located.
```
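As a hedged sketch of what consuming this alpha feature might look like (the node name and path are hypothetical, and the alpha node-affinity annotation form is an assumption about the alpha API, not part of this PR's text):

```
cat <<EOF | kubectl create -f -
apiVersion: v1
kind: PersistentVolume
metadata:
  name: example-local-pv
  annotations:
    # Assumed alpha annotation pinning the PV to a single node; pods using
    # the PV are then only schedulable on that node.
    volume.alpha.kubernetes.io/node-affinity: >
      {"requiredDuringSchedulingIgnoredDuringExecution": {"nodeSelectorTerms":
        [{"matchExpressions": [{"key": "kubernetes.io/hostname",
          "operator": "In", "values": ["example-node"]}]}]}}
spec:
  capacity:
    storage: 5Gi
  accessModes: ["ReadWriteOnce"]
  persistentVolumeReclaimPolicy: Retain
  storageClassName: local-storage
  local:
    path: /mnt/disks/vol1
EOF
```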
2017-05-30 23:20:02 -07:00
Kubernetes Submit Queue
5995690396 Merge pull request #46076 from liggitt/node-authorizer
Automatic merge from submit-queue

Node authorizer

This PR implements the authorization portion of https://github.com/kubernetes/community/blob/master/contributors/design-proposals/kubelet-authorizer.md and kubernetes/features#279:
* Adds a new authorization mode (`Node`) that authorizes requests from nodes based on a graph of related pods, secrets, configmaps, PVCs, and PVs:
  * Watches pods, adds edges (secret -> pod, configmap -> pod, pvc -> pod, pod -> node)
  * Watches pvs, adds edges (secret -> pv, pv -> pvc)
  * When both Node and RBAC authorization modes are enabled, the default RBAC binding that grants the `system:node` role to the `system:nodes` group is not automatically created.
* Tightens the `NodeRestriction` admission plugin to require identifiable nodes for requests from users in the `system:nodes` group.

This authorization mode is intended to be used in combination with the `NodeRestriction` admission plugin, which limits the pods and nodes a node may modify. To enable in combination with RBAC authorization and the NodeRestriction admission plugin:
* start the API server with `--authorization-mode=Node,RBAC --admission-control=...,NodeRestriction,...`
* start kubelets with TLS bootstrapping or with client credentials that place them in the `system:nodes` group with a username of `system:node:<nodeName>`

```release-note
kube-apiserver: a new authorization mode (`--authorization-mode=Node`) authorizes nodes to access secrets, configmaps, persistent volume claims and persistent volumes related to their pods.
* Nodes must use client credentials that place them in the `system:nodes` group with a username of `system:node:<nodeName>` in order to be authorized by the node authorizer (the credentials obtained by the kubelet via TLS bootstrapping satisfy these requirements)
* When used in combination with the `RBAC` authorization mode (`--authorization-mode=Node,RBAC`), the `system:node` role is no longer automatically granted to the `system:nodes` group.
```

```release-note
RBAC: the automatic binding of the `system:node` role to the `system:nodes` group is deprecated and will not be created in future releases. It is recommended that nodes be authorized using the new `Node` authorization mode instead. Installations that wish to continue giving all members of the `system:nodes` group the `system:node` role (which grants broad read access, including all secrets and configmaps) must create an installation-specific ClusterRoleBinding.
```
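For installations that choose to keep the legacy grant, a minimal sketch of such a ClusterRoleBinding (the binding name is arbitrary):

```
# Recreates the broad legacy grant explicitly; note this re-grants every node
# read access to all secrets and configmaps.
kubectl create clusterrolebinding node-legacy-binding \
  --clusterrole=system:node \
  --group=system:nodes
```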

Follow-up:
- [ ] enable e2e CI environment with admission and authorizer enabled (blocked by kubelet TLS bootstrapping enablement in https://github.com/kubernetes/kubernetes/pull/40760)
- [ ] optionally enable this authorizer and admission plugin in kubeadm
- [ ] optionally enable this authorizer and admission plugin in kube-up
2017-05-30 22:42:54 -07:00
Kubernetes Submit Queue
1f213765f6 Merge pull request #46521 from dashpole/summary_container_restart
Automatic merge from submit-queue

Fix cross-build, and reduce test to 1 restart to reduce flakiness

In response to https://github.com/kubernetes/kubernetes/pull/46308#issuecomment-304248450

This fixes the error `test/e2e_node/summary_test.go:138: constant 100000000000 overflows int` from the cross build (on 32-bit cross-compile targets Go's `int` is 32 bits, so the constant needs an explicitly 64-bit type).

This [recent flake](https://k8s-gubernator.appspot.com/build/kubernetes-jenkins/logs/ci-kubernetes-node-kubelet/4179) occurred because the container restarted during the period in which the test expected to continually see the container in the Summary API.

/assign @dchen1107 
cc @gmarek @luxas 

/release-note-none
2017-05-30 21:45:56 -07:00
Kubernetes Submit Queue
32bce030d8 Merge pull request #46635 from krzyzacy/copy-files
Automatic merge from submit-queue

Switch gcloud compute copy-files to scp

gcloud is deprecating `gcloud compute copy-files` and switching to `gcloud compute scp`. Make the change before things start to break.

https://cloud.google.com/sdk/gcloud/reference/compute/copy-files

The warning we get: `W0529 10:28:59.097] WARNING: 'gcloud compute copy-files' is deprecated. Please use 'gcloud compute scp' instead. Note that 'gcloud compute scp' does not have recursive copy on by default. To turn on recursion, use the '--recurse' flag.`
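For example (instance name, paths, and zone hypothetical), a recursive copy changes like so:

```
# Before (deprecated):
gcloud compute copy-files ./artifacts my-node:/tmp/artifacts --zone=us-central1-b
# After; scp does not recurse by default, so --recurse must be passed explicitly:
gcloud compute scp --recurse ./artifacts my-node:/tmp/artifacts --zone=us-central1-b
```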

/cc @jlowdermilk
2017-05-30 19:35:50 -07:00
Kubernetes Submit Queue
40dcbc4eb3 Merge pull request #46461 from ncdc/e2e-suite-metrics
Automatic merge from submit-queue

Support grabbing test suite metrics

**What this PR does / why we need it**:
Add support for grabbing metrics that cover the entire test suite's execution.

Update the "interesting" controller-manager metrics to match the
current names for the garbage collector, and add namespace controller
metrics to the list.

If you enable `--gather-suite-metrics-at-teardown`, the metrics file is written to a file with a name such as `MetricsForE2ESuite_2017-05-25T20:25:57Z.json` in the `--report-dir`. If you don't specify `--report-dir`, the metrics are written to the test log output.
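A sketch of how a run might enable this (the e2e wrapper invocation is an assumption; only the two flags are from this PR):

```
# Hypothetical invocation; --gather-suite-metrics-at-teardown and --report-dir
# are the flags described above.
go run hack/e2e.go -- --test \
  --test_args="--gather-suite-metrics-at-teardown=true --report-dir=/tmp/e2e-artifacts"
```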

I'd like to enable this for some of the `pull-*` CI jobs, which will require a separate PR to test-infra.

**Release note**:

```release-note
NONE
```

@kubernetes/sig-testing-pr-reviews @smarterclayton @wojtek-t @gmarek @derekwaynecarr @timothysc
2017-05-30 16:41:49 -07:00
Jordan Liggitt
fc8e915a4b Add Node authorization mode based on graph of node-related objects 2017-05-30 16:53:03 -04:00
David Ashpole
e2718f3bc5 fix crossbuild, verify container restarts, and restart only once 2017-05-30 13:15:22 -07:00
Sen Lu
d237e54a24 Switch gcloud compute copy-files to scp 2017-05-30 10:19:33 -07:00
Kubernetes Submit Queue
36548b07cd Merge pull request #46605 from shyamjvs/fix-perfdata-subresource
Automatic merge from submit-queue (batch tested with PRs 46552, 46608, 46390, 46605, 46459)

Make kubemark scripts fail fast

Fixes https://github.com/kubernetes/kubernetes/issues/46601
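Fail-fast in shell scripts conventionally means the standard bash strict-mode preamble; presumably something along these lines (a sketch, not this PR's diff):

```
#!/usr/bin/env bash
# Exit on any command failure, on use of unset variables, and on failures
# anywhere in a pipeline.
set -o errexit
set -o nounset
set -o pipefail
```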

/cc @wojtek-t @gmarek
2017-05-30 08:42:00 -07:00
Kubernetes Submit Queue
38b26db33a Merge pull request #46613 from FengyunPan/fix-e2e-service
Automatic merge from submit-queue (batch tested with PRs 45534, 37212, 46613, 46350)

[e2e] Fix redundant parameter definition

When a timeout occurs reaching the HTTP service, a redundant parameter causes the returned error to be nil.
2017-05-30 04:46:04 -07:00
Shyam Jeedigunta
02092312bb Make kubemark scripts fail fast 2017-05-30 11:59:13 +02:00
gmarek
0cc1999e16 Make log-monitor give up on trying to ssh to a dead node after some time 2017-05-30 10:27:10 +02:00
FengyunPan
38e8c32a26 [e2e] Fix redundant parameter definition
When a timeout occurs reaching the HTTP service, a redundant parameter causes the returned error to be nil.
2017-05-30 16:09:33 +08:00
Kubernetes Submit Queue
755d368c4a Merge pull request #45782 from mtaufen/no-snat-test
Automatic merge from submit-queue

no-snat test

This test checks that Pods can communicate with each other in the same cluster without SNAT.

I intend to create a job that runs this in small clusters (~3 nodes) at a low frequency (~once per day) so that we have a signal as we work on allowing multiple non-masquerade CIDRs to be configured (see [kubernetes-incubator/ip-masq-agent](https://github.com/kubernetes-incubator/ip-masq-agent), for example).

/cc @dnardo
2017-05-29 16:19:46 -07:00
Kubernetes Submit Queue
52337d5db6 Merge pull request #46502 from gmarek/run_kubemark_tests
Automatic merge from submit-queue

Fix kubemark/run-e2e-tests.sh

This should make most common arguments work.

cc @shyamjvs
2017-05-29 09:35:01 -07:00
Kubernetes Submit Queue
d9f3ea5191 Merge pull request #46593 from shyamjvs/fix-perfdata-subresource
Automatic merge from submit-queue

Fix minor bugs in setting API call metrics with subresource

Based on changes from https://github.com/kubernetes/kubernetes/pull/46354

/cc @wojtek-t @smarterclayton
2017-05-29 08:45:02 -07:00
gmarek
0ca6aeb95c Fix kubemark/run-e2e-tests.sh 2017-05-29 15:20:54 +02:00
Shyam Jeedigunta
e897b21506 Fix minor bugs in setting API call metrics with subresource 2017-05-29 15:04:52 +02:00
Wojciech Tyczynski
1583912dd0 Fix panics in load test 2017-05-29 13:09:53 +02:00
Kubernetes Submit Queue
451d0a436c Merge pull request #46509 from k82cn/add_k82cn_as_approver
Automatic merge from submit-queue

Added k82cn as a scheduler approver.

According to the requirements for Approvers in [community-membership.md](https://github.com/kubernetes/community/blob/master/community-membership.md), I meet the criteria listed below, so I'd like to add myself as an approver of the scheduler.

* Reviewer of the codebase for at least 3 months
  [k82cn]: [~3 months](6cc40678b6)
* Primary reviewer for at least 10 substantial PRs to the codebase
  [k82cn]: Reviewed [40 PRs](https://github.com/issues?q=assignee%3Ak82cn+is%3Aclosed)
* Reviewed or merged at least 30 PRs to the codebase
  [k82cn]: 71 merged PRs in kubernetes/kubernetes, and ~100 PRs in kubernetes at https://goo.gl/j2D1fR

As an approver,

* I agree to only approve PRs that I'm familiar with
* I agree to be responsive to review/approve requests, per community expectations
* I agree to continue my reviewer work, per community expectations
* I agree to continue my contributions, e.g. submitting PRs and mentoring contributors
2017-05-28 22:01:32 -07:00
Kubernetes Submit Queue
1444d252e1 Merge pull request #46457 from nicksardo/gce-api-refactor
Automatic merge from submit-queue (batch tested with PRs 46407, 46457)

GCE - Refactor API for firewall and backend service creation

**What this PR does / why we need it**:
 - Currently, the firewall creation function instantiates the firewall object itself; this is inconsistent with the rest of the GCE API calls, which normally get passed an existing object.
 - Necessary information for firewall creation (`computeHostTags`, `nodeTags`, `networkURL`, `subnetworkURL`, `region`) was private to the package. These now have public getters.
 - Consumers might need to know whether the cluster is running on a cross-project network. A new `OnXPN` func makes that information available.
 - Backend services for regions have been added. Global ones have been renamed to specify global.
 - NamedPort management of instance groups has been changed from an `AddPortsToInstanceGroup` func (which lacked a complementary `Remove...`) to a single, simple `SetNamedPortsOfInstanceGroup`.
 - Addressed nitpick review comments of #45524 

ILB needs the regional backend services and firewall refactor.  The ingress controller needs the new `OnXPN` func to decide whether to create a firewall.

**Release note**:
```release-note
NONE
```
2017-05-28 13:16:58 -07:00
Kubernetes Submit Queue
382a170054 Merge pull request #39164 from danwinship/networkpolicy-v1
Automatic merge from submit-queue

Move NetworkPolicy to v1

Move NetworkPolicy to v1

@kubernetes/sig-network-misc 

**Release note**:
```release-note
NetworkPolicy has been moved from `extensions/v1beta1` to the new
`networking.k8s.io/v1` API group. The structure remains unchanged from
the beta1 API.

The `net.beta.kubernetes.io/network-policy` annotation on Namespaces
to opt in to isolation has been removed. Instead, isolation is now
determined at a per-pod level, with pods being isolated if there is
any NetworkPolicy whose spec.podSelector targets them. Pods that are
targeted by NetworkPolicies accept traffic that is accepted by any of
the NetworkPolicies (and nothing else), and pods that are not targeted
by any NetworkPolicy accept all traffic by default.

Action Required:

When upgrading to Kubernetes 1.7 (and a network plugin that supports
the new NetworkPolicy v1 semantics), to ensure full behavioral
compatibility with v1beta1:

    1. In Namespaces that previously had the "DefaultDeny" annotation,
       you can create equivalent v1 semantics by creating a
       NetworkPolicy that matches all pods but does not allow any
       traffic:

           kind: NetworkPolicy
           apiVersion: networking.k8s.io/v1
           metadata:
             name: default-deny
           spec:
             podSelector: {}

       This will ensure that pods that aren't matched by any other
       NetworkPolicy will continue to be fully isolated, as they were
       before.

    2. In Namespaces that previously did not have the "DefaultDeny"
       annotation, you should delete any existing NetworkPolicy
       objects. These would have had no effect before, but with v1
       semantics they might cause some traffic to be blocked that you
       didn't intend to be blocked.
```
2017-05-28 11:13:14 -07:00
Dan Winship
0683e55fc1 Add networking.k8s.io v1 API, with NetworkPolicy 2017-05-28 10:11:01 -04:00
Kubernetes Submit Queue
1fd6e97ad9 Merge pull request #46538 from shyamjvs/kubemark-chmod
Automatic merge from submit-queue

chmod +x kubemark scripts

Just noticed. We don't need to do chmod each time.

/cc @wojtek-t @gmarek
2017-05-27 23:01:48 -07:00
Kubernetes Submit Queue
f219f3c153 Merge pull request #46558 from MrHohn/esipp-endpoint-waittime
Automatic merge from submit-queue

Apply KubeProxyEndpointLagTimeout to ESIPP tests

Fixes #46533.

The previous construction of the ESIPP tests was odd, so I reworked it a bit.

A 30-second `KubeProxyEndpointLagTimeout` is introduced; as these tests aren't verifying performance, it may be better not to make the timeout too tight.

/assign @thockin 

**Release note**:

```release-note
NONE
```
2017-05-27 11:17:51 -07:00
Nick Sardo
9063526dfb GCE: Refactor firewalls/backendservices api; other small changes 2017-05-27 10:25:03 -07:00
Kubernetes Submit Queue
daee6d4826 Merge pull request #45524 from MrHohn/l4-lb-healthcheck
Automatic merge from submit-queue (batch tested with PRs 46252, 45524, 46236, 46277, 46522)

Make GCE load-balancers create health checks for nodes

From #14661. Proposal on kubernetes/community#552. Fixes #46313.

Bullet points:
- Create a nodes health check and firewall (for health checking) for non-OnlyLocal services.
- Create a local-traffic health check and firewall (for health checking) for OnlyLocal services.
- Version skew:
   - Don't create the nodes health check if any node has a version < 1.7.0.
   - Don't backfill the nodes health check on existing LBs unless users explicitly trigger it.

**Release note**:

```release-note
GCE Cloud Provider: newly created LoadBalancer-type Services now have health checks for nodes by default.
An existing LoadBalancer will have a health check attached to it when:
- Service.Spec.Type is changed from LoadBalancer to another type and flipped back.
- There is any effective change to Service.Spec.ExternalTrafficPolicy.
```
2017-05-26 19:47:57 -07:00
Zihong Zheng
e332828690 Apply KubeProxyEndpointLagTimeout to ESIPP tests 2017-05-26 18:14:20 -07:00
Kubernetes Submit Queue
2b084af6dd Merge pull request #46484 from guoyunxian/remove
Automatic merge from submit-queue (batch tested with PRs 45809, 46515, 46484, 46516, 45614)

Remove the duplicated case judgment

This patch removes the duplicated case judgment.
2017-05-26 16:59:04 -07:00
Kubernetes Submit Queue
bd1311a0a4 Merge pull request #46515 from ncdc/vet
Automatic merge from submit-queue (batch tested with PRs 45809, 46515, 46484, 46516, 45614)

Fix incorrect printf format

**What this PR does / why we need it**: changes `%s` to `%d` for something that is actually an `int` (found via `make vet`).

**Release note**:

```release-note
NONE
```
2017-05-26 16:59:02 -07:00
Michael Taufen
a653603e13 no-snat test
Test checks that Pods can communicate with each other in the same
cluster without SNAT.
2017-05-26 13:45:10 -07:00
Zihong Zheng
897da549bc Autogenerated files 2017-05-26 13:19:14 -07:00
Zihong Zheng
a61cc7f477 Update firewall e2e test for LB healthcheck firewall 2017-05-26 13:18:50 -07:00
Shyam Jeedigunta
b72cbc074c chmod +x kubemark scripts 2017-05-26 22:03:12 +02:00
Michelle Au
f385dfcb3b Address review comments 2017-05-26 11:48:31 -07:00
Andy Goldstein
ab76f7320a Fix incorrect printf format 2017-05-26 11:36:52 -04:00
Andy Goldstein
41345418cb Support grabbing test suite metrics
Update the "interesting" controller-manager metrics to match the
current names for the garbage collector, and add namespace controller
metrics to the list.
2017-05-26 11:21:27 -04:00
Klaus Ma
68a34c1baf Added k82cn as kube-scheduler approver. 2017-05-26 22:26:20 +08:00
guoyunxian
0bf96a3ca4 Remove the duplicated case judgment
This patch removes the duplicated case judgment.
2017-05-26 17:28:53 +08:00
Chao Xu
262799f91f serve the api in kube-apiserver 2017-05-25 23:55:15 -07:00
Kubernetes Submit Queue
7d37a2685c Merge pull request #45867 from kow3ns/controller-history
Automatic merge from submit-queue (batch tested with PRs 46429, 46308, 46395, 45867, 45492)

Controller history

**What this PR does / why we need it**:
Implements the ControllerRevision API object and clientset to allow for the implementation of StatefulSet update and DaemonSet history

```release-note
ControllerRevision type added for StatefulSet and DaemonSet history.
```
2017-05-25 22:42:08 -07:00
Kubernetes Submit Queue
54a47a6f1d Merge pull request #46308 from dashpole/summary_container_restart
Automatic merge from submit-queue (batch tested with PRs 46429, 46308, 46395, 45867, 45492)

Summary Test looks at pods that have containers that restart.

Occasionally, the node can report, through the Summary API, extra containers that had been restarted.
This change tests a pod that restarts, and should allow us to reproduce and debug this behavior.

/assign @dchen1107 

/release-note-none
2017-05-25 22:42:04 -07:00
Kubernetes Submit Queue
59ee250ced Merge pull request #46429 from wojtek-t/bump_go_to_183
Automatic merge from submit-queue (batch tested with PRs 46429, 46308, 46395, 45867, 45492)

Bump Go version to 1.8.3

This PR also removes the patched version of Go 1.8.1 that we had been using to work around a performance problem in Go 1.8.1.

Fix https://github.com/kubernetes/kubernetes/issues/45216
Ref #46391

@timothysc @bradfitz
2017-05-25 22:42:01 -07:00
Kubernetes Submit Queue
c60bc53921 Merge pull request #46434 from shyamjvs/kubemark-config-upload
Automatic merge from submit-queue (batch tested with PRs 46124, 46434, 46089, 45589, 46045)

Copy kubeconfig to kubemark master

This should save the effort of digging through the Jenkins agent and its container to get the kubeconfig.
Ideally we should have kubectl working directly on the kubemark master, but I'm facing some issues due to an older version of kubectl being present by default on the node.

cc @wojtek-t @gmarek
2017-05-25 21:39:59 -07:00
Kubernetes Submit Queue
b8dc4915f7 Merge pull request #46423 from gmarek/fix_perf
Automatic merge from submit-queue (batch tested with PRs 45949, 46009, 46320, 46423, 46437)

Fix performance test issues

Fix #46198
2017-05-25 19:41:04 -07:00
Kubernetes Submit Queue
b9416c2c91 Merge pull request #46320 from vmware/e2evSphereStoragePolicySupport
Automatic merge from submit-queue (batch tested with PRs 45949, 46009, 46320, 46423, 46437)

e2e tests for storage policy support in Kubernetes

This PR covers e2e test cases for vSphere storage policy support in Kubernetes - #46176.

The following test scenarios have been implemented (a sample StorageClass sketch follows the list):
- Specify only the SPBM storage policy name.
     - Verify that the disk is provisioned on a compatible datastore with the most free space.
- Specify a storage policy name that is not defined on the VC.
    - Verify that PVC creation errors out because no PBM profile with this policy is found.
- Specify both an SPBM storage policy name and VSAN capabilities together.
    - Verify that PVC creation errors out because an SPBM policy name can't be combined with VSAN capabilities; only one may be specified.
- Specify an SPBM storage policy name with a user-specified datastore that is non-compatible.
   - Verify that PVC creation errors out because a disk can't be provisioned on a non-compatible datastore.
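A sketch of the StorageClass shape these scenarios exercise (the policy and datastore names are hypothetical):

```
cat <<EOF | kubectl create -f -
kind: StorageClass
apiVersion: storage.k8s.io/v1
metadata:
  name: vsphere-gold
provisioner: kubernetes.io/vsphere-volume
parameters:
  storagePolicyName: gold        # SPBM policy defined on the VC
  # datastore: VSANDatastore     # optional; must be compatible with the policy
EOF
```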

@jeffvance @divyenpatel

**Release note**:

```release-note
None
```
2017-05-25 19:41:02 -07:00
Kubernetes Submit Queue
470a6a45d5 Merge pull request #45949 from NickrenREN/kubelet-metric
Automatic merge from submit-queue (batch tested with PRs 45949, 46009, 46320, 46423, 46437)

Unregister some metrics

Delete some registered metrics since they are not being observed.


**Release note**:
```release-note
NONE
```
2017-05-25 19:40:58 -07:00