Automatic merge from submit-queue (batch tested with PRs 40796, 40878, 36033, 40838, 41210)
Implement TTL controller and use the ttl annotation attached to node in secret manager
For every secret attached to a pod as a volume, Kubelet tries to refresh it every sync period. Kubelet currently keeps a TTL cache of its pods' secrets, with the TTL set to 1 minute. In the large clusters we are targeting (5k nodes, 30 pods/node), given that each pod has a secret associated with the ServiceAccount from its namespace, and with a large enough number of namespaces (where on each node (almost) every pod is from a different namespace), this results in ~30 GETs per node to refresh all secrets every minute, which adds up to ~2500 QPS of secret GETs against the apiserver.
The apiserver cannot easily keep up with that load.
The desired solution would be to watch for secret changes, but for security reasons we don't want a node watching all secrets, and it is not currently possible to watch only the secrets attached to pods on a given node.
So as a temporary solution, we are introducing an annotation that serves as a suggestion to Kubelet for the TTL of secrets in its cache, together with a very simple controller that sets this annotation based on the cluster size (the larger the cluster, the bigger the TTL).
This workaround means that only very local changes are needed in Kubelet, that we are creating a well-separated and very simple controller, and that once watching "my secrets" becomes possible it will be easy to remove the controller and switch to that. And it will allow us to reach our scalability goals.
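As a rough illustration of the controller's job, here is a minimal sketch of a size-stepped TTL mapping; the function name, thresholds, and TTL values are made up for illustration and are not the controller's actual tuning:
```go
package ttl

import "time"

// ttlForClusterSize returns the secret-cache TTL that a controller could
// suggest to Kubelet via a node annotation. Larger clusters get longer
// TTLs so that secret refreshes generate fewer GETs per second overall.
func ttlForClusterSize(nodeCount int) time.Duration {
	switch {
	case nodeCount > 2000:
		return 5 * time.Minute
	case nodeCount > 500:
		return 1 * time.Minute
	case nodeCount > 100:
		return 30 * time.Second
	default:
		return 0 // small cluster: keep refreshing every sync period
	}
}
```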
@dchen1107 @thockin @liggitt
Automatic merge from submit-queue (batch tested with PRs 40917, 41181, 41123, 36592, 41183)
fix scheduler performance test script
**What this PR does / why we need it**:
`test-performance.sh` is in the dir `kubernetes/test/integration/scheduler_perf`;
the dir `kubernetes/test/component/scheduler/perf` does not exist.
Thanks.
**Special notes for your reviewer**:
**Release note**:
```release-note
```
Automatic merge from submit-queue (batch tested with PRs 41074, 41147, 40854, 41167, 40045)
Fix some funky funcs.
This is code cleanup: fix function declarations and remove a stale comment.
Automatic merge from submit-queue (batch tested with PRs 41074, 41147, 40854, 41167, 40045)
Upgrade test for deployments
Upgrade test for Deployments. Should prevent issues like https://github.com/kubernetes/kubernetes/issues/40415 in the future.
@krousey @janetkuo @soltysh
Haven't managed to run it locally...
```
$ go run hack/e2e.go --up --test --test_args="--ginkgo.focus=\[Feature:MasterUpgrade\] --upgrade-target=ci/latest --upgrade-image=gci"
2017/02/02 11:43:22 e2e.go:946: Running: ./hack/e2e-internal/e2e-down.sh
2017/02/02 11:43:22 e2e.go:948: Step './hack/e2e-internal/e2e-down.sh' finished in 7.278236ms
2017/02/02 11:43:22 e2e.go:946: Running: ./hack/e2e-internal/e2e-up.sh
2017/02/02 11:43:22 e2e.go:948: Step './hack/e2e-internal/e2e-up.sh' finished in 5.286328ms
2017/02/02 11:43:22 e2e.go:946: Running: ./cluster/kubectl.sh version --match-server-version=false
2017/02/02 11:43:22 e2e.go:948: Step './cluster/kubectl.sh version --match-server-version=false' finished in 213.847259ms
2017/02/02 11:43:22 e2e.go:946: Running: ./hack/e2e-internal/e2e-status.sh
2017/02/02 11:43:22 e2e.go:948: Step './hack/e2e-internal/e2e-status.sh' finished in 103.253183ms
2017/02/02 11:43:22 e2e.go:230: Something went wrong: encountered 2 errors: [exit status 1 exit status 1]
exit status 1
```
@krousey any ETA for when the upgrade framework will be integrated into the PR builder?
Automatic merge from submit-queue (batch tested with PRs 41121, 40048, 40502, 41136, 40759)
add k8s.io/sample-apiserver to demonstrate how to build an aggregated API server
builds on https://github.com/kubernetes/kubernetes/pull/41093
This creates a sample API server in a separate staging repo to guarantee there is no cheating with `k8s.io/kubernetes` dependencies. The sample is run during integration tests (only simple tests on it so far) to ensure that it continues to work.
@sttts @kubernetes/sig-api-machinery-misc ptal
@pwittrock @pmorie @kris-nova an aggregated API server example that will stay up to date.
Automatic merge from submit-queue (batch tested with PRs 41121, 40048, 40502, 41136, 40759)
Remove deprecated kubelet flags that look safe to remove
Removes:
```
--config
--auth-path
--resource-container
--system-container
```
which have all been marked deprecated since at least 1.4 and look safe to remove.
```release-note
The deprecated flags --config, --auth-path, --resource-container, and --system-container were removed.
```
Automatic merge from submit-queue (batch tested with PRs 41145, 38771, 41003, 41089, 40365)
Use privileged containers for statefulset e2e tests
Test containers need to run as spc_t in order to interact with the host
filesystem under /tmp, as the StatefulSet tests do. Docker transitions
the container into this domain when the container is run as privileged.
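For reference, a minimal sketch of the change in terms of the Kubernetes API types (the helper name is illustrative, and the import path is shown as in the tree of that era):
```go
import "k8s.io/kubernetes/pkg/api/v1"

// markPrivileged runs a test container as privileged so that Docker
// transitions it into the spc_t SELinux domain, which permits writes
// to the host filesystem under /tmp.
func markPrivileged(c *v1.Container) {
	privileged := true
	c.SecurityContext = &v1.SecurityContext{Privileged: &privileged}
}
```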
Signed-off-by: Steve Kuznetsov <skuznets@redhat.com>
**Release note**:
```release-note
NONE
```
/cc @ncdc @soltysh @pmorie
1. pcb and pcb controller are removed and their functionality is
encapsulated in StatefulPodControlInterface.
2. IdentityMappers has been removed to clarify what properties of a Pod are
mutated by the controller. All mutations are performed in the
UpdateStatefulPod method of the StatefulPodControlInterface.
3. The statefulSetIterator and petQueue classes are removed. These classes
sorted Pods by CreationTimestamp, which is brittle and not resilient to
clock skew. The current control loop, which implements the same logic,
is in stateful_set_control.go. The Pods are now sorted and considered by
their ordinal indices, as outlined in the documentation (see the sketch
after this list).
4. StatefulSetController now checks whether the Pods matching a
StatefulSet's Selector also match the Name of the StatefulSet. This makes
the controller resilient to overlapping StatefulSets, and will be enhanced
by the addition of ControllerRefs.
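A rough sketch of the ordinal-based ordering (helper names here are illustrative, not the actual control-loop code):
```go
import (
	"sort"
	"strconv"
	"strings"
)

// getOrdinal parses the ordinal index from a StatefulSet Pod name of the
// form <statefulset-name>-<ordinal>, returning -1 if the name doesn't match.
func getOrdinal(name string) int {
	idx := strings.LastIndex(name, "-")
	if idx < 0 {
		return -1
	}
	ordinal, err := strconv.Atoi(name[idx+1:])
	if err != nil {
		return -1
	}
	return ordinal
}

// sortByOrdinal orders Pod names deterministically by ordinal index,
// unlike CreationTimestamp ordering, which is brittle under clock skew.
func sortByOrdinal(podNames []string) {
	sort.Slice(podNames, func(i, j int) bool {
		return getOrdinal(podNames[i]) < getOrdinal(podNames[j])
	})
}
```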
Automatic merge from submit-queue (batch tested with PRs 40175, 41107, 41111, 40893, 40919)
[Federation][e2e] Move Cluster Registration to federation-up.sh
**What this PR does / why we need it**:
Remove cluster register/unregister calls from test case BeforeEach/AfterEach blocks.
Register clusters once in federation-up.sh
**Which issue this PR fixes** *(optional, in `fixes #<issue number>(, fixes #<issue_number>, ...)` format, will close that issue when PR gets merged)*: fixes #40768
**Special notes for your reviewer**:
**Release note**: `NONE`
cc: @madhusudancs @kubernetes/sig-federation-pr-reviews
Automatic merge from submit-queue (batch tested with PRs 38796, 40823, 40756, 41083, 41105)
e2e tests for vSphere cloud provider
**What this PR does / why we need it**:
This PR contains changes to the existing e2e volume provisioning test cases so that they run on the vSphere cloud provider.
**Following is a summary of changes made to existing e2e test cases**
**Added test/e2e/persistent_volumes-vsphere.go**
- This test verifies that deleting a PVC before the pod does not cause pod deletion to fail on PD detach, and that deleting the PV before the pod does not cause pod deletion to fail on PD detach.
**test/e2e/volume_provisioning.go**
- This test creates a StorageClass and a claim with the dynamic provisioning and alpha dynamic provisioning annotations, and verifies that the required volumes get created. The test also verifies that the created volume is readable and retains data.
- Added vSphere as a supported cloud provider. Also set pluginName to "kubernetes.io/vsphere-volume" for the vSphere cloud provider.
**test/e2e/volumes.go**
- Added a test spec for vSphere.
- This test creates the requested volume, mounts it on the pod, writes some random content to /opt/0/index.html, and verifies the file contents to make sure we don't see content from previous test runs.
- This test also passes "1234" as the fsGroup when mounting the volume and verifies that the fsGroup is set correctly.
**added test/e2e/vsphere_utils.go**
- Added function verifyVSphereDiskAttached - verifies the persistent disk is attached to the node.
- Added function waitForVSphereDiskToDetach - waits until the vSphere VMDK is detached from the given node, or times out after 5 minutes.
- Added getVSpherePersistentVolumeSpec - creates a vSphere volume spec with the given VMDK volume path, reclaim policy, and labels (see the sketch after this list).
- Added getVSpherePersistentVolumeClaimSpec - gets a vSphere persistent volume claim spec with the given selector labels.
- createVSphereVolume - function to create a VMDK volume.
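For example, a plausible shape for `getVSpherePersistentVolumeSpec` (a sketch only; the capacity, fstype, and import paths — shown as in today's client libraries — are assumptions, not the test code itself):
```go
import (
	v1 "k8s.io/api/core/v1"
	"k8s.io/apimachinery/pkg/api/resource"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)

// getVSpherePersistentVolumeSpec builds a PV backed by the given VMDK
// volume path, with the requested reclaim policy and labels.
func getVSpherePersistentVolumeSpec(volumePath string, policy v1.PersistentVolumeReclaimPolicy, labels map[string]string) *v1.PersistentVolume {
	return &v1.PersistentVolume{
		ObjectMeta: metav1.ObjectMeta{GenerateName: "vspherepv-", Labels: labels},
		Spec: v1.PersistentVolumeSpec{
			PersistentVolumeReclaimPolicy: policy,
			Capacity:                      v1.ResourceList{v1.ResourceStorage: resource.MustParse("2Gi")},
			AccessModes:                   []v1.PersistentVolumeAccessMode{v1.ReadWriteOnce},
			PersistentVolumeSource: v1.PersistentVolumeSource{
				VsphereVolume: &v1.VsphereVirtualDiskVolumeSource{
					VolumePath: volumePath,
					FSType:     "ext4",
				},
			},
		},
	}
}
```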
**Following is the summary of new e2e tests added with this PR**
**test/e2e/vsphere_volume_placement.go**
- Contains volume placement tests using a node label selector.
- Test back-to-back pod creation/deletion with the same volume source on the same worker node.
- Test back-to-back pod creation/deletion with the same volume source attached to and detached from different worker nodes.
**test/e2e/pv_reclaimpolicy.go**
- Contains tests for the PV/PVC reclaim policy.
- Verifies that the persistent volume is deleted when the reclaimPolicy on the PV is set to Delete and the associated claim is deleted.
- Also verifies that the persistent volume is retained when the reclaimPolicy on the PV is set to Retain and the associated claim is deleted.
**test/e2e/pvc_label_selector.go**
- This is a functional test for the Selector-Label Volume Binding feature.
- Verifies that the volume with the matching label is bound to the PVC.
Other changes
Updated pkg/cloudprovider/providers/vsphere/BUILD and test/e2e/BUILD
**Which issue this PR fixes**: fixes #41087
**Special notes for your reviewer**:
The updated tests were executed against the Kubernetes v1.4.8 release on vSphere.
Test steps are provided in comments.
@kerneltime @BaluDontu
Automatic merge from submit-queue (batch tested with PRs 40345, 38183, 40236, 40861, 40900)
Remove checks for pods responding in deployment e2e tests
Fixes #39879
Remove these checks because they sometimes caused deployment e2e tests to time out waiting for pods to respond, and pod responsiveness is neither related to the deployment controller nor a prerequisite for deployment e2e tests.
@kargakis
Addressed review comments
Addressed review comment for pv_reclaimpolicy.go to verify the content of the volume
Addressed 2nd round of review comments
Addressed 3rd round of review comments from jeffvance
Automatic merge from submit-queue (batch tested with PRs 41023, 41031, 40947)
scrub aggregator names to eliminate discovery
Clean up old uses of `discovery`. Also remove the legacy functionality.
@kubernetes/sig-api-machinery-misc @sttts
Automatic merge from submit-queue (batch tested with PRs 40971, 41027, 40709, 40903, 39369)
Promote SubjectAccessReview to v1
We have multiple features that depend on the SubjectAccessReview API:
- [webhook authorization](https://kubernetes.io/docs/admin/authorization/#webhook-mode)
- [kubelet delegated authorization](https://kubernetes.io/docs/admin/kubelet-authentication-authorization/#kubelet-authorization)
- add-on API server delegated authorization
The API has been in use since 1.3 in beta status (v1beta1) with negligible changes:
- Added a status field for reporting errors evaluating access
- A typo was discovered in the SubjectAccessReviewSpec Groups field name
This PR promotes the existing v1beta1 API to v1, with the only change being the typo fix to the groups field. (fixes https://github.com/kubernetes/kubernetes/issues/32709)
Because the API does not persist data (it is a query/response-style API), there are no data migration concerns.
This positions us to promote the features that depend on this API to stable in 1.7
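For reference, a minimal sketch of issuing a v1 SubjectAccessReview through client-go (import paths as in current client libraries; the helper and its arguments are illustrative, and newer client-go versions also require context/options arguments on Create):
```go
import (
	authorizationv1 "k8s.io/api/authorization/v1"
	"k8s.io/client-go/kubernetes"
)

// canGetPods asks the apiserver whether the given user may get pods in
// the default namespace; this is a sketch, not the PR's code.
func canGetPods(clientset kubernetes.Interface, user string) (bool, error) {
	sar := &authorizationv1.SubjectAccessReview{
		Spec: authorizationv1.SubjectAccessReviewSpec{
			User:   user,
			Groups: []string{"developers"}, // the field whose serialized name the v1 promotion fixed
			ResourceAttributes: &authorizationv1.ResourceAttributes{
				Namespace: "default",
				Verb:      "get",
				Resource:  "pods",
			},
		},
	}
	resp, err := clientset.AuthorizationV1().SubjectAccessReviews().Create(sar)
	if err != nil {
		return false, err
	}
	return resp.Status.Allowed, nil
}
```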
cc @kubernetes/sig-auth-api-reviews @kubernetes/sig-auth-misc
```release-note
The authorization.k8s.io API group was promoted to v1
```
Automatic merge from submit-queue (batch tested with PRs 40971, 41027, 40709, 40903, 39369)
Bump GCI to gci-beta-56-9000-80-0
cc/ @Random-Liu @adityakali
Changes since gci-dev-56-8977-0-0 (currently used in Kubernetes):
- "net.ipv4.conf.eth0.forwarding" and "net.ipv4.ip_forward" may get reset to 0
- Track CVE-2016-9962 in Docker in GCI
- Linux kernel CVE-2016-7097
- Linux kernel CVE-2015-8964
- Linux kernel CVE-2016-6828
- Linux kernel CVE-2016-7917
- Linux kernel CVE-2016-7042
- Linux kernel CVE-2016-9793
- Linux kernel CVE-2016-7039 and CVE-2016-8666
- Linux kernel CVE-2016-8655
- Toolbox: allow docker image to be loaded from local tarball
- Update compute-image-package in GCI
- Change the product name on /etc/os-release (to COS)
- Remove 'dogfood' from HWID_OVERRIDE in /etc/lsb-release
- Include Google NVME extensions to optimize LocalSSD performance.
- /proc/<pid>/io missing on GCI (enables process stats accounting)
- Enable BLK_DEV_THROTTLING
cc/ @roberthbailey @fabioy for GKE cluster update
Automatic merge from submit-queue
federation: Refactoring namespaced resources deletion code from kube ns controller and sharing it with fed ns controller
Ref https://github.com/kubernetes/kubernetes/issues/33612
This refactors the code in the kube namespace controller that deletes all resources in a namespace when the namespace is deleted. That code moves into a separate NamespacedResourcesDeleter class, which is now called from the federation namespace controller.
This is required for enabling cascading deletion of namespaced resources in federation apiserver.
Before this PR, we were directly deleting the namespaced resources and assuming that they go away immediately. With cascading deletion, we will have to wait for the corresponding controllers to first delete the resources from the underlying clusters, and only then delete the resource from the federation control plane. NamespacedResourcesDeleter holds this waiting logic.
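A rough sketch of that waiting logic (the helper and its signature are illustrative, not the actual NamespacedResourcesDeleter API):
```go
import (
	"time"

	"k8s.io/apimachinery/pkg/util/wait"
)

// waitForNamespaceEmpty polls until no namespaced resources remain (i.e.
// the controllers have deleted them from the underlying clusters), after
// which the resource can be removed from the federation control plane.
func waitForNamespaceEmpty(ns string, countResources func(string) (int, error)) error {
	return wait.PollImmediate(2*time.Second, 5*time.Minute, func() (bool, error) {
		remaining, err := countResources(ns)
		if err != nil {
			return false, err
		}
		return remaining == 0, nil
	})
}
```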
cc @kubernetes/sig-federation-misc @caesarxuchao @derekwaynecarr @mwielgus
Automatic merge from submit-queue (batch tested with PRs 40385, 40786, 40999, 41026, 40996)
Refactor federated services tests a bit to move a test that requires no cluster creation to a separate block.
Follow up to PR #40769.
cc @kubernetes/sig-federation-pr-reviews
Automatic merge from submit-queue
Replace hand-written informers with generated ones
Replace existing uses of hand-written informers with generated ones.
Follow-up commits will switch the use of one-off informers to shared
informers.
This is a precursor to #40097. That PR will switch one-off informers to shared informers for the majority of the code base (but not quite all of it...).
NOTE: this does create a second set of shared informers in the kube-controller-manager. This will be resolved back down to a single factory once #40097 is reviewed and merged.
There are a couple of places where I expanded the # of caches we wait for in the calls to `WaitForCacheSync` - please pay attention to those. I also added in a commented-out wait in the attach/detach controller. If @kubernetes/sig-storage-pr-reviews is ok with enabling the waiting, I'll do it (I'll just need to tweak an integration test slightly).
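For context, the shared-informer wiring this moves toward looks roughly like the sketch below with today's generated informers (illustrative, not the controller-manager's exact code):
```go
import (
	"time"

	"k8s.io/client-go/informers"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/cache"
)

func startPodInformer(client kubernetes.Interface, stopCh <-chan struct{}) cache.SharedIndexInformer {
	factory := informers.NewSharedInformerFactory(client, 30*time.Second)
	podInformer := factory.Core().V1().Pods().Informer()

	factory.Start(stopCh)

	// Controllers must not process work items before the caches they
	// read from have synced; this is what WaitForCacheSync guards.
	if !cache.WaitForCacheSync(stopCh, podInformer.HasSynced) {
		return nil // stopCh closed before the cache synced
	}
	return podInformer
}
```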
@deads2k @sttts @smarterclayton @liggitt @soltysh @timothysc @lavalamp @wojtek-t @gmarek @sjenning @derekwaynecarr @kubernetes/sig-scalability-pr-reviews