Automatic merge from submit-queue
[Federation] Expose autoscaling apis through federation api server
This PR implements the first part of the federated pod autoscaler.
The issue to handle the whole feature is https://github.com/kubernetes/kubernetes/issues/38974
cc @kubernetes/sig-cluster-federation
@shashidharatd @kshafiee @deepak-vij
**Release note**:
```release-note
Federation users can now use federated autoscaling resources and create federated HorizontalPodAutoscalers.
```
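For illustration, a hedged sketch of what this enables: creating an HPA against the federation API server. The `federation` context name and the target deployment are placeholders, not taken from this PR.
```console
$ kubectl --context=federation apply -f - <<EOF
apiVersion: autoscaling/v1
kind: HorizontalPodAutoscaler
metadata:
  name: my-hpa
spec:
  scaleTargetRef:
    apiVersion: extensions/v1beta1
    kind: Deployment
    name: my-deployment
  minReplicas: 2
  maxReplicas: 10
  targetCPUUtilizationPercentage: 80
EOF
```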
Automatic merge from submit-queue (batch tested with PRs 37228, 40146, 40075, 38789, 40189)
kubeadm: add optional self-hosted deployment
**What this PR does / why we need it**: add an optional self-hosted deployment type to `kubeadm`, for master components only, namely `apiserver`, `controller-manager` and `scheduler`.
**Which issue this PR fixes**: closes #38407
**Special notes for your reviewer**: /cc @aaronlevy @luxas @dgoodwin
**Release note**:
```release-note
kubeadm: add optional self-hosted deployment for apiserver, controller-manager and scheduler.
```
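A minimal sketch of opting in; the `--self-hosted` flag name is assumed from this PR's description, so verify against the merged code.
```console
# Bring up a master whose apiserver, controller-manager, and scheduler
# run as self-hosted pods instead of static pods (flag name assumed).
$ kubeadm init --self-hosted
```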
Automatic merge from submit-queue (batch tested with PRs 37228, 40146, 40075, 38789, 40189)
kubelet: storage: teardown terminated pod volumes
This is a continuation of the work done in https://github.com/kubernetes/kubernetes/pull/36779
There really is no reason to keep volumes for terminated pods attached on the node. This PR extends the removal of volumes on the node from memory-backed volumes (the current policy) to all volumes.
@pmorie raised a concern that removing terminated pod volumes could make it harder to debug volume-related issues. To address this, the PR adds a `--keep-terminated-pod-volumes` flag to the kubelet (usage sketched below) and sets it in `hack/local-up-cluster.sh`.
For consideration in 1.6.
Fixes #35406
@derekwaynecarr @vishh @dashpole
```release-note
kubelet tears down pod volumes on pod termination rather than pod deletion
```
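A minimal usage sketch of the new flag for anyone debugging volume issues (other required kubelet flags omitted):
```console
# Opt out of the new teardown behavior so terminated pod volumes
# stay attached for inspection.
$ kubelet --keep-terminated-pod-volumes=true
```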
Automatic merge from submit-queue (batch tested with PRs 40168, 40165, 39158, 39966, 40190)
CRI: upgrade protobuf to v3
For #38854, this PR upgrades the CRI protobuf version to v3 and updates related packages to conform to the new API.
**Release note**:
```release-note
CRI: upgrade protobuf version to v3.
```
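For illustration, one quick way to confirm the upgrade; the file path is an assumption for this release and may differ.
```console
# proto3 files must declare their syntax on the first line.
$ head -n 1 pkg/kubelet/api/v1alpha1/runtime/api.proto
syntax = "proto3";
```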
Automatic merge from submit-queue (batch tested with PRs 39772, 39831, 39481, 40167, 40149)
Add //hack:verify-boilerplate rule.
This pattern is working well in test-infra. I'll add the gofmt and go vet rules next.
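A sketch of invoking the new rule; whether it is wired up as a test or run target depends on the rule definition, so the command form is an assumption.
```console
$ bazel test //hack:verify-boilerplate
```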
Automatic merge from submit-queue
promote certificates api to beta
Mostly posting to see what breaks, but this API is also ready to be promoted.
```release-note
Promote certificates.k8s.io to beta and enable it by default. Users of the alpha certificates API should delete v1alpha1 CSRs from the API before upgrading and recreate them as v1beta1 CSRs after upgrading.
```
@kubernetes/api-approvers @jcbsmpsn @pipejakob
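A hedged sketch of the upgrade steps from the release note (the manifest name is a placeholder):
```console
# Before upgrading: remove all v1alpha1 CSRs from the API.
$ kubectl delete csr --all
# After upgrading: recreate them against the v1beta1 API.
$ kubectl create -f my-csr-v1beta1.yaml
```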
Automatic merge from submit-queue
Add authorization mode to kubeadm
This PR adds an option in `kubeadm` to allow a user to specify an [authorization plugin](https://kubernetes.io/docs/admin/authorization/). It defaults to RBAC.
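A minimal sketch of overriding the default; the `--authorization-mode` flag name is assumed from this PR's description.
```console
# Use ABAC instead of the RBAC default.
$ kubeadm init --authorization-mode=ABAC
```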
Automatic merge from submit-queue
Build release tars using bazel
**What this PR does / why we need it**: builds equivalents of the various kubernetes release tarballs, solely using bazel.
For example, you can now do
```console
$ make bazel-release
$ hack/e2e.go -v -up -test -down
```
**Special notes for your reviewer**: this is currently dependent on 3b29803eb5, which I have yet to turn into a pull request, since I'm still trying to figure out if this is the best approach.
Basically, the issue comes up with the way we generate the various server docker image tarfiles and load them on nodes:
* we `md5sum` the binary being encapsulated (e.g. kube-proxy) and save that to `$binary.docker_tag` in the server tarball
* we then build the docker image and tag using that md5sum (e.g. `gcr.io/google_containers/kube-proxy:$MD5SUM`)
* we `docker save` this image, which embeds the full tag in the `$binary.tar` file.
* on cluster startup, we `docker load` these tarballs, which come in with the tag we created at build time. The nodes then use the `$binary.docker_tag` file to find the right image.
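A shell sketch of that flow, with illustrative names and paths:
```console
# Build time:
$ TAG=$(md5sum kube-proxy | cut -d' ' -f1)
$ echo "$TAG" > kube-proxy.docker_tag            # shipped in the server tarball
$ docker build -t "gcr.io/google_containers/kube-proxy:$TAG" .
$ docker save "gcr.io/google_containers/kube-proxy:$TAG" > kube-proxy.tar
# Node startup:
$ docker load < kube-proxy.tar                   # only useful if the tag survived the save
```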
With the current bazel `docker_build` rule, the tag isn't saved in the docker image tar, so the node is unable to find the image after `docker load`ing it.
My changes to the rule save the tag in the docker image tar, though I don't know if there are subtle issues with it. (Maybe we want to only tag when `--stamp` is given?)
Also, the docker images produced by bazel have the timestamp set to the unix epoch, which is not great for debugging. Might be another thing to change with a `--stamp`.
Long story short, we probably need to follow up with bazel folks on the best way to solve this problem.
**Release note**:
```release-note
NONE
```
Automatic merge from submit-queue (batch tested with PRs 39625, 39842)
Add RBAC v1beta1
Add `rbac.authorization.k8s.io/v1beta1`. This scrubs `v1alpha1` to remove cruft, then adds `v1beta1`. We'll update other bits of infrastructure to use `v1beta1` as a separate step.
```release-note
The `attributeRestrictions` field has been removed from the PolicyRule type in the rbac.authorization.k8s.io/v1alpha1 API. The field was not used by the RBAC authorizer.
```
@kubernetes/sig-auth-misc @liggitt @erictune
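For illustration, a minimal `v1beta1` object (names are placeholders):
```console
$ kubectl apply -f - <<EOF
apiVersion: rbac.authorization.k8s.io/v1beta1
kind: Role
metadata:
  namespace: default
  name: pod-reader
rules:
- apiGroups: [""]
  resources: ["pods"]
  verbs: ["get", "list", "watch"]
EOF
```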
Automatic merge from submit-queue
Remove packages which are now apimachinery
Removes all the content from the packages that were moved to `apimachinery`. This will force all vendoring projects to figure out what's wrong. I had to leave many empty marker packages behind to have verify-godep succeed on vendoring heapster.
@sttts straight deletes and simple adds
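For vendoring projects, the symptom will be imports resolving to now-empty packages. A hedged way to locate them; the old path shown is one example of a relocated package, whose new home is under `k8s.io/apimachinery` (e.g. `k8s.io/apimachinery/pkg/apis/meta/v1`).
```console
$ grep -rn 'k8s.io/kubernetes/pkg/api/unversioned' --include='*.go' .
```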
Automatic merge from submit-queue (batch tested with PRs 39911, 40002, 39969, 40012, 40009)
kubectl: fix rollback dryrun when version is not specified
@kubernetes/sig-cli-misc
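The fixed path, for reference: a dry-run rollback with no `--to-revision`, which defaults to the previous revision (deployment name is a placeholder):
```console
$ kubectl rollout undo deployment/nginx --dry-run
```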
Automatic merge from submit-queue (batch tested with PRs 39609, 39105)
Stop running most unit tests outside of bazel.
Let's not duplicate our efforts. The two I still run here are the two we currently skip in bazel. We should fix those.
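A sketch of the bazel-side equivalent (target patterns assumed):
```console
$ bazel test //cmd/... //pkg/... //plugin/...
```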
Automatic merge from submit-queue (batch tested with PRs 39803, 39698, 39537, 39478)
include bootstrap admin in super-user group, ensure tokens file is correct on upgrades
Fixes https://github.com/kubernetes/kubernetes/issues/39532
Possible issues with cluster bring-up scripts:
- [x] known_tokens.csv and basic_auth.csv are not rewritten if the files already exist
* new users (like the controller manager) are not available on upgrade
* changed users (like the kubelet username change) are not reflected
* group additions (like the addition of admin to the superuser group) don't take effect on upgrade
* this PR updates the token and basic-auth files line-by-line to preserve user additions while also ensuring new data is persisted (file format sketched below)
- [x] existing 1.5 clusters may depend on more permissive ABAC permissions (or customized ABAC policies). This PR adds an option to enable existing ABAC policy files for clusters that are upgrading
Follow-ups:
- [ ] both scripts load e2e role-bindings, which should only be loaded in e2e tests, not in normal kube-up scenarios
- [ ] when upgrading, set the option to use existing ABAC policy files
- [ ] update bootstrap superuser client certs to add superuser group? ("We also have a certificate that "used to be" a super-user. On GCE, it has CN "kubecfg", on GKE it's "client"")
- [ ] define (but do not load by default) a relaxed set of RBAC roles/rolebindings matching legacy ABAC, and document how to load that for new clusters that do not want to isolate user permissions
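For reference, a sketch of the token file format the line-by-line update has to preserve. Values are placeholders, and the path is the conventional GCE location, which is an assumption here.
```console
$ cat /srv/kubernetes/known_tokens.csv
abc123token,admin,admin,system:masters
def456token,kubelet,kubelet
```
Each line is `token,user,uid[,"group1,group2"]`; the superuser group change means the admin line must carry `system:masters` after upgrade.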