Automatic merge from submit-queue
Unify fluentd-gcp configurations
There are two different configs and two different pod specs for the fluentd agent for GCL: one for GCI and one for CVM. This PR makes it possible to use only one config and only one pod spec.
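For context, a minimal sketch of what a single shared pod spec could look like; the image tag and mount paths below are illustrative assumptions, not the exact manifest from this PR:
```yaml
# Hypothetical unified fluentd-gcp pod spec fragment; image/tag and paths
# are illustrative, not the exact manifest introduced by this change.
apiVersion: v1
kind: Pod
metadata:
  name: fluentd-gcp
  namespace: kube-system
spec:
  containers:
  - name: fluentd-gcp
    image: gcr.io/google_containers/fluentd-gcp:1.0   # assumed image/tag
    volumeMounts:
    - name: varlog
      mountPath: /var/log
    - name: varlibdockercontainers
      mountPath: /var/lib/docker/containers
      readOnly: true
  volumes:
  - name: varlog
    hostPath:
      path: /var/log
  - name: varlibdockercontainers
    hostPath:
      path: /var/lib/docker/containers
```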
CC @piosz
Automatic merge from submit-queue
Set strategy spec for kube-dns to support zero downtime rolling update
From #37728 and coreos/kube-aws#111.
Set `maxUnavailable` to 0 to prevent a DNS service outage during updates when the replica count is only 1.
Also keeps all kube-dns yaml files in sync.
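For reference, a minimal sketch of the strategy stanza described above; the `maxSurge` value is an assumption for the example:
```yaml
# Deployment update strategy allowing zero unavailable kube-dns pods during
# a rolling update; the maxSurge value here is illustrative.
spec:
  strategy:
    type: RollingUpdate
    rollingUpdate:
      maxSurge: 10%
      maxUnavailable: 0
```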
@bowei @thockin
Automatic merge from submit-queue
Update Stateful Set example files for 1.5
1. Remove initialized annotation from statefulset examples
2. Update storage class annotation to beta in statefulset examples (see the sketch below)
3. Remove alpha limitation on PetSet in cassandra example
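A minimal sketch of a volumeClaimTemplate using the beta storage class annotation; the claim name, class name, and size are illustrative:
```yaml
# Illustrative volumeClaimTemplate fragment; only the beta annotation key
# is the point here, the rest are example values.
volumeClaimTemplates:
- metadata:
    name: data
    annotations:
      volume.beta.kubernetes.io/storage-class: standard
  spec:
    accessModes: [ "ReadWriteOnce" ]
    resources:
      requests:
        storage: 1Gi
```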
cc @erictune @foxish @kow3ns @enisoc @chrislovecnm @kubernetes/sig-apps
```release-note
NONE
```
Automatic merge from submit-queue
Fixes Addon Manager's pruning issue for old Deployments
Fixes #37641.
Attaches the `last-applied` annotations to the existing Deployments for pruning.
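Roughly, this means the existing objects get the annotation that `kubectl apply --prune` keys off; the object name below is an example and the annotation value is abbreviated:
```yaml
# Sketch: last-applied-configuration annotation attached to an existing
# Deployment so the Addon Manager's prune step recognizes it; value abbreviated.
apiVersion: extensions/v1beta1
kind: Deployment
metadata:
  name: kube-dns              # example object
  namespace: kube-system
  annotations:
    kubectl.kubernetes.io/last-applied-configuration: |
      {"apiVersion":"extensions/v1beta1","kind":"Deployment",...}
```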
Below images are built and pushed:
- gcr.io/google-containers/kube-addon-manager:v6.1
- gcr.io/google-containers/kube-addon-manager-amd64:v6.1
- gcr.io/google-containers/kube-addon-manager-arm:v6.1
- gcr.io/google-containers/kube-addon-manager-arm64:v6.1
- gcr.io/google-containers/kube-addon-manager-ppc64le:v6.1
@mikedanese
cc @saad-ali @krousey
Automatic merge from submit-queue
Set Dashboard UI version to v1.5.0-beta1
There will be one more such PR coming for the 1.5 release, in one week.
Setting the release note to none; notes will be set in the final version PR.
Github release info:
https://github.com/kubernetes/dashboard/releases/tag/v1.5.0-beta1
Automatic merge from submit-queue
Deploy a default StorageClass instance on AWS and GCE
This needs a newer kubectl in the kube-addons-manager container. It's quite tricky to test, as I cannot push a new container image to gcr.io and must copy the newer container manually. A sketch of the resulting default StorageClass is included after the release note below.
cc @kubernetes/sig-storage
**Release note**:
```release-note
Kubernetes now installs a default StorageClass object when deployed on AWS, GCE and
OpenStack with the kube-up.sh scripts. This StorageClass will automatically provision
a PersistentVolume in the corresponding cloud for a PersistentVolumeClaim that cannot be
satisfied by any existing matching PersistentVolume in Kubernetes.
To override this default provisioning, administrators must manually delete this default StorageClass.
```
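For illustration, a minimal sketch of such a default StorageClass as it might be installed on GCE; the object name and parameters are examples following the 1.5-era beta API and annotation:
```yaml
# Sketch of a default StorageClass for GCE; the beta is-default-class
# annotation marks it as the default, name and parameters are illustrative.
apiVersion: storage.k8s.io/v1beta1
kind: StorageClass
metadata:
  name: standard
  annotations:
    storageclass.beta.kubernetes.io/is-default-class: "true"
provisioner: kubernetes.io/gce-pd
parameters:
  type: pd-standard
```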
Automatic merge from submit-queue
Bumps up Addon Manager to v6.0 with full support of kubectl apply
Below images are built and pushed:
- gcr.io/google-containers/kube-addon-manager:v6.0
- gcr.io/google-containers/kube-addon-manager-amd64:v6.0
- gcr.io/google-containers/kube-addon-manager-arm:v6.0
- gcr.io/google-containers/kube-addon-manager-arm64:v6.0
- gcr.io/google-containers/kube-addon-manager-ppc64le:v6.0
The actual change is upgrading the kubectl version from `v1.5.0-alpha.1` to `v1.5.0-beta.1`, which was released today.
@mikedanese
@saad-ali This needs to get into 1.5 because Addon Manager v6.0-alpha.1 (currently in use) does not have full support for `kubectl apply --prune`.
Automatic merge from submit-queue
fix: elasticsearch template mapping to parse kubernetes.labels
**What this PR does / why we need it**:
This PR updates the field mappings for the elasticsearch template that ships with the EFK stack implementation.
Specifically, elasticsearch cannot parse the `kubernetes.labels` object because it attempts to treat it as a string and produces an error. This update treats `kubernetes.labels` as an object and all of the properties within as strings, allowing accurate indexing and allowing users in kibana to search on `kubernetes.labels.*`.
**Release note**:
```release-note
Fluentd/Elasticsearch add-on: correctly parse and index kubernetes labels
```
Automatic merge from submit-queue
Add kube-proxy logs to fluentd configs
Related to https://github.com/kubernetes/kubernetes/issues/37107
Makes fluentd collect logs from kube-proxy. It's a completely backward-compatible change that does not cause problems currently, so I suggest not bumping the version.
cc @piosz
- Adds command line flags --config-map, --config-map-ns (see the sketch after this list).
- Fixes 36194 (https://github.com/kubernetes/kubernetes/issues/36194)
- Update kube-dns yamls
- Update bazel (hack/update-bazel.sh)
- Update known command line flags
- Temporarily reference new kube-dns image (this will be fixed with
a separate commit when the DNS image is created)
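A minimal sketch of how the two new flags might appear in the kube-dns container spec; the image tag and the remaining args are assumptions for the example:
```yaml
# Illustrative kube-dns container fragment; only --config-map and
# --config-map-ns come from this change, everything else is a placeholder.
containers:
- name: kubedns
  image: gcr.io/google_containers/kubedns-amd64:1.9   # assumed image/tag
  args:
  - --domain=cluster.local.
  - --dns-port=10053
  - --config-map=kube-dns        # new flag
  - --config-map-ns=kube-system  # new flag
```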
Automatic merge from submit-queue
Update Dashboard UI version to 1.4.2
**What this PR does / why we need it**:
Dashboard 1.4.2 contains a fix for an XSS security bug, so I think it would be prudent to update the Dashboard version shipped with Kubernetes to this version.
**Special notes for your reviewer**:
**Release note**:
```release-note
Updated dashboard version in addons to 1.4.2
```
Automatic merge from submit-queue
Fix Docker Registry image version to 2.5.1
`registry:2` is constantly being updated with new versions. This means there's a possibility that the image may change unintentionally. For example, when the Pod is rescheduled on a node that does not already have the image, then depending on the time of the pull, `registry:2` may resolve to a different image.
Pin this to the latest `registry:2.5.1` instead to avoid this problem.
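In practice the change is just pinning the image tag in the registry pod spec, roughly:
```yaml
# Sketch: pin the registry image to a fixed tag instead of the floating
# `registry:2` tag.
containers:
- name: registry
  image: registry:2.5.1
```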
@uluyol @freehan
Automatic merge from submit-queue
Migrates addons from RCs to Deployments
Fixes #33698.
Below addons are being migrated:
- kube-dns
- GLBC default backend
- Dashboard UI
- Kibana
For the new Deployments, the version suffixes are removed from their names. Version related labels are also removed because they are confusing and no longer needed with regard to how Deployments and the new Addon Manager work.
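For illustration, the post-migration shape of one of the addons looks roughly like this; the labels and selector below are a sketch, not the exact manifest:
```yaml
# Sketch of an addon Deployment after migration: no version suffix in the
# name and no version label, only the stable k8s-app label (fragment).
apiVersion: extensions/v1beta1
kind: Deployment
metadata:
  name: kube-dns              # previously kube-dns-v<N>
  namespace: kube-system
  labels:
    k8s-app: kube-dns
    kubernetes.io/cluster-service: "true"
spec:
  selector:
    matchLabels:
      k8s-app: kube-dns
  template:
    metadata:
      labels:
        k8s-app: kube-dns
```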
The `replica` field in the `kube-dns` Deployment manifest is removed for the incoming DNS horizontal autoscaling feature #33239.
The `replica` field in the `Dashboard` Deployment manifest is also removed because the rescheduler e2e test is manually scaling it.
Some resource limit related fields in `heapster-controller.yaml` are removed, as they will be set up by the `addon resizer` containers. Detailed reasons in #34513.
Three e2e tests are modified:
- `rescheduler.go`: Changed to resize Dashboard UI Deployment instead of ReplicationController.
- `addon_update.go`: Some namespace related changes in order to make it compatible with the new Addon Manager.
- `dns_autoscaling.go`: Changed to examine kube-dns Deployment instead of ReplicationController.
Both of the above tests passed on my own cluster. The upgrade process from old addons with RCs to new addons with Deployments was also tested and worked as expected.
The last commit upgrades Addon Manager to v6.0. It is still a work in progress and currently waiting for #35220 to be finished. (The Addon Manager image in use comes from a non-official registry, but it mostly works except for some corner cases.)
@piosz @gmarek could you please review the heapster part and the rescheduler test?
@mikedanese @thockin
cc @kubernetes/sig-cluster-lifecycle
---
Notes:
- The kube-dns manifest still uses *-rc.yaml for the new Deployment. The stale file names are preserved here to allow a faster review. I may send out a PR to re-organize kube-dns's file names after this.
- The Heapster Deployment's name remains in the old fashion (with the `-v1.2.0` suffix) to avoid describing this upgrade transition explicitly. This way we don't need to attach fake apply labels to the old Deployments.
Automatic merge from submit-queue
Fix startup script bug in kibana image
Big thanks to @lhopki01 for noticing this!
As mentioned in the discussion in https://github.com/kubernetes/kubernetes/pull/36103, the current image crashes if we don't want to work behind a proxy, because of string interpolation in bash.
@piosz
Automatic merge from submit-queue
Fixes token_found bug in addon manager
From #35832.
The above PR exposed the addon manager's logs on Jenkins, and the below error was found in the GCE e2e test artifacts:
```
Error from server: serviceaccounts "default" not found
error executing template "{{with index .secrets 0}}{{.name}}{{end}}": template: output:1:7: executing "output" at <index .secrets 0>: error calling index: index of untyped nil
== default service account in the kube-system namespace has token Error executing template: template: output:1:7: executing "output" at <index .secrets 0>: error calling index: index of untyped nil. Printing more information for debugging the template:
template was:
{{with index .secrets 0}}{{.name}}{{end}}
raw data was:
{"kind":"ServiceAccount","apiVersion":"v1","metadata":{"name":"default","namespace":"kube-system","selfLink":"/api/v1/namespaces/kube-system/serviceaccounts/default","uid":"de3f2f85-9d6a-11e6-9df3-42010af00002","resourceVersion":"48","creationTimestamp":"2016-10-29T00:01:40Z"}}
object given to template engine was:
map[apiVersion:v1 metadata:map[selfLink:/api/v1/namespaces/kube-system/serviceaccounts/default uid:de3f2f85-9d6a-11e6-9df3-42010af00002 resourceVersion:48 creationTimestamp:2016-10-29T00:01:40Z name:default namespace:kube-system] kind:ServiceAccount] ==
```
It seems the script failed to retrieve the service token on the first attempt and mistakenly used the error message as the token content. Fixed by replacing `|| true` with an if condition.
https://hub.docker.com/r/library/registry/tags/
`registry:2` is constantly being updated with new versions. This means there's a possibility that the image may change unintentionally. For example, when the Pod is rescheduled on a node that does not already have the image, then depending on the time of the pull, `registry:2` may resolve to a different image.
Pin this to the latest `registry:2.5.1` instead to avoid this problem.
Automatic merge from submit-queue
Update fluentd-gcp configuration
Related to #32762
Though it's not a final solution to the fluentd OOM problems, it increases the number of logs that can be handled without losses by
- switching to file buffering, making the buffering mechanism more resilient
- decreasing the size of the buffer, decreasing the amount of memory needed
- decreasing the number of threads handling the load, since the number of chunks is lower than the previous number of threads
which results in a decrease in theoretical throughput. Tests to confirm the cases covered by this change will follow.
cc @piosz @edsiper @repeatedly please take a look and confirm that all of these changes are meaningful.