Automatic merge from submit-queue
Add support for limiting grace period during soft eviction
Adds eviction manager support in the kubelet for capping a pod's graceful termination period when a soft eviction threshold is met.
```release-note
Kubelet evicts pods when available memory falls below configured eviction thresholds
```
/cc @vishh
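For illustration, a minimal sketch of the capping logic this describes; the names (`gracePeriodFor`, `maxPodGracePeriodSeconds`) are stand-ins, not the kubelet's actual API:

```go
package eviction

// Sketch of grace-period capping during eviction. Hard thresholds kill
// immediately; soft thresholds honor the pod's own
// terminationGracePeriodSeconds, capped at the configured maximum.
func gracePeriodFor(softEviction bool, podGracePeriod *int64, maxPodGracePeriodSeconds int64) int64 {
	if !softEviction {
		return 0 // hard eviction: no grace
	}
	if podGracePeriod == nil || *podGracePeriod > maxPodGracePeriodSeconds {
		return maxPodGracePeriodSeconds
	}
	return *podGracePeriod
}
```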
Automatic merge from submit-queue
Use protobufs by default to communicate with apiserver (still store JSONs in etcd)
@lavalamp @kubernetes/sig-api-machinery
Automatic merge from submit-queue
Cache Webhook Authentication responses
Add a simple LRU cache with a 2-minute TTL to the webhook authenticator.
Kubectl is a little spammy, with >= 4 API requests per command. This also prevents a single unauthenticated user from being able to DoS the remote authenticator.
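A minimal sketch of an LRU cache with a fixed TTL along these lines (stand-in types; the real authenticator caches token-review responses):

```go
package webhookcache

import (
	"container/list"
	"sync"
	"time"
)

// entry pairs a cached authentication result with its expiry time.
type entry struct {
	key     string
	ok      bool
	expires time.Time
}

// ttlLRU is a minimal LRU cache with a fixed TTL, in the spirit of the
// cache added in front of the webhook authenticator.
type ttlLRU struct {
	mu    sync.Mutex
	ttl   time.Duration
	max   int
	order *list.List // front = most recently used
	items map[string]*list.Element
}

func newTTLLRU(max int, ttl time.Duration) *ttlLRU {
	return &ttlLRU{ttl: ttl, max: max, order: list.New(), items: map[string]*list.Element{}}
}

// Get returns the cached result for key, dropping it if the TTL expired.
func (c *ttlLRU) Get(key string) (ok, found bool) {
	c.mu.Lock()
	defer c.mu.Unlock()
	el, found := c.items[key]
	if !found {
		return false, false
	}
	e := el.Value.(*entry)
	if time.Now().After(e.expires) {
		c.order.Remove(el)
		delete(c.items, key)
		return false, false
	}
	c.order.MoveToFront(el)
	return e.ok, true
}

// Add stores a result, evicting the least recently used entry when full.
func (c *ttlLRU) Add(key string, ok bool) {
	c.mu.Lock()
	defer c.mu.Unlock()
	if el, exists := c.items[key]; exists {
		c.order.Remove(el)
		delete(c.items, key)
	}
	c.items[key] = c.order.PushFront(&entry{key: key, ok: ok, expires: time.Now().Add(c.ttl)})
	if c.order.Len() > c.max {
		last := c.order.Back()
		c.order.Remove(last)
		delete(c.items, last.Value.(*entry).key)
	}
}
```

On a cache miss the authenticator would fall through to the remote webhook and then `Add` the result.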
Automatic merge from submit-queue
Add NetworkPolicy API Resource
API implementation of https://github.com/kubernetes/kubernetes/pull/24154
Still to do:
- [x] Get it working (See comments)
- [x] Make sure user-facing comments are correct.
- [x] Update naming in response to #24154
- [x] kubectl / client support
- [x] Release note.
```release-note
Implement NetworkPolicy v1beta1 API object / client support.
```
Next Steps:
- UTs in separate PR.
- e2e test in separate PR.
- make `Ports` + `From` pointers to slices (TODOs in code - to be done when auto-gen is fixed)
CC @thockin
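As a rough illustration of the API shape, a hedged sketch in Go (field names follow the v1beta1 proposal, but these are simplified stand-ins, not the generated types, which use `LabelSelector` and live under `pkg/apis/extensions`):

```go
package networkpolicy

// NetworkPolicySpec sketches the v1beta1 shape. Per the TODO above,
// Ports and From are plain slices here rather than pointers to slices.
type NetworkPolicySpec struct {
	// PodSelector selects the pods this policy applies to
	// (simplified to a label map in this sketch).
	PodSelector map[string]string `json:"podSelector"`
	// Ingress rules; traffic is allowed if it matches any rule.
	Ingress []IngressRule `json:"ingress,omitempty"`
}

type IngressRule struct {
	Ports []Port `json:"ports,omitempty"` // allowed destination ports
	From  []Peer `json:"from,omitempty"`  // allowed sources
}

type Port struct {
	Protocol string `json:"protocol,omitempty"` // e.g. "TCP"
	Port     int    `json:"port,omitempty"`
}

type Peer struct {
	// At most one of these selects source pods or namespaces.
	PodSelector       map[string]string `json:"podSelector,omitempty"`
	NamespaceSelector map[string]string `json:"namespaceSelector,omitempty"`
}
```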
Automatic merge from submit-queue
Adding support objects for integrating dynamic client the kubectl builder
Kubectl will try to decode into `runtime.VersionedObjects`, so the `UnstructuredJSONScheme` needs to handle that intelligently.
Kubectl's builder also needs a `meta.RESTMapper` and a `runtime.Typer`. The `meta.RESTMapper` requires a `runtime.ObjectConvertor` that works with `runtime.Unstructured`. The mapper and typer require discovery info, so I just put that in the kubectl util package since it didn't really seem to fit anywhere else.
Subsequent PRs will be using these in kubectl.
cc @kubernetes/sig-api-machinery @smarterclayton @liggitt @lavalamp
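A self-contained sketch of the `VersionedObjects` special case described above, using stand-in types for illustration (the real code works with `runtime.Object`, `runtime.VersionedObjects`, and `runtime.Unstructured`):

```go
package dynamichelpers

import "encoding/json"

// Stand-ins for the runtime types named above.
type Object interface{}

type VersionedObjects struct {
	Objects []Object
}

type Unstructured struct {
	Object map[string]interface{}
}

// decode mirrors the special case: when kubectl asks to decode into a
// VersionedObjects wrapper, unmarshal the JSON into an Unstructured
// value and append it to the wrapper instead of failing.
func decode(data []byte, into Object) (Object, error) {
	u := &Unstructured{}
	if err := json.Unmarshal(data, &u.Object); err != nil {
		return nil, err
	}
	if versioned, ok := into.(*VersionedObjects); ok {
		versioned.Objects = append(versioned.Objects, u)
		return versioned, nil
	}
	return u, nil
}
```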
Automatic merge from submit-queue
Add support for PersistentVolumeClaim in Attacher/Detacher interface
The attach/detach interface does not support volumes that are referenced through PVCs. This PR adds that support.
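Conceptually the change is a dereference step before attach; a hedged sketch with stand-in types (the real code resolves the claim via the API server and operates on the bound PV's spec):

```go
package attachdetach

import "fmt"

// Volume is a stand-in for api.Volume: only the PVC reference matters here.
type Volume struct {
	PersistentVolumeClaim *struct{ ClaimName string }
}

// PV is a stand-in for the bound api.PersistentVolume.
type PV struct{ Name string }

// ClaimLister abstracts "look up the PV bound to this claim".
type ClaimLister interface {
	BoundPV(namespace, claimName string) (*PV, error)
}

// resolveVolume dereferences a PVC-backed volume to its underlying PV
// so the attacher/detacher can operate on the real volume source.
func resolveVolume(vol Volume, namespace string, lister ClaimLister) (*PV, error) {
	if vol.PersistentVolumeClaim == nil {
		return nil, fmt.Errorf("not a PVC-backed volume")
	}
	pv, err := lister.BoundPV(namespace, vol.PersistentVolumeClaim.ClaimName)
	if err != nil {
		return nil, fmt.Errorf("resolving claim %q: %v", vol.PersistentVolumeClaim.ClaimName, err)
	}
	return pv, nil
}
```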
Automatic merge from submit-queue
Only expose top N images in `NodeStatus`
Fixes #25209
Sorts the images by size and only exposes the top 50 in the node status.
cc @vishh
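The core of the change is a sort-and-truncate; a sketch, assuming an `Image` record with a `SizeBytes` field (names illustrative):

```go
package nodestatus

import "sort"

// Image is a stand-in for the node-status image record.
type Image struct {
	Names     []string
	SizeBytes int64
}

const maxImagesInNodeStatus = 50

// topImages sorts images by size, largest first, and keeps at most
// maxImagesInNodeStatus of them for the node status.
func topImages(images []Image) []Image {
	sort.Slice(images, func(i, j int) bool {
		return images[i].SizeBytes > images[j].SizeBytes
	})
	if len(images) > maxImagesInNodeStatus {
		images = images[:maxImagesInNodeStatus]
	}
	return images
}
```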
Automatic merge from submit-queue
Extend secrets volumes with path control
As per [1], this PR extends secrets mapped into volumes with:
* key-to-path mapping, the same way as for configmap. E.g.
```json
{
  "apiVersion": "v1",
  "kind": "Pod",
  "metadata": {
    "name": "mypod",
    "namespace": "default"
  },
  "spec": {
    "containers": [{
      "name": "mypod",
      "image": "redis",
      "volumeMounts": [{
        "name": "foo",
        "mountPath": "/etc/foo",
        "readOnly": true
      }]
    }],
    "volumes": [{
      "name": "foo",
      "secret": {
        "secretName": "mysecret",
        "items": [{
          "key": "username",
          "path": "my-username"
        }]
      }
    }]
  }
}
```
Here the ``spec.volumes[0].secret.items`` field is added, changing the original target ``/etc/foo/username`` to ``/etc/foo/my-username``.
* secondly, refactoring the ``pkg/volumes/secrets/secrets.go`` volume plugin to use ``AtomicWriter`` to project a secret into files (see the sketch below).
[1] https://github.com/kubernetes/kubernetes/blob/master/docs/design/configmap.md#changes-to-secret
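The atomic projection works roughly like this sketch: write the payload into a fresh timestamped directory, then swap a symlink in one rename, so readers never see a half-written secret (paths and names are illustrative, not the actual `AtomicWriter` code):

```go
package atomicwrite

import (
	"fmt"
	"os"
	"path/filepath"
	"time"
)

// writeAtomically projects payload (relative path -> contents) into dir.
// Readers follow dir/..data/<path>; because the final step is a single
// symlink rename, they see either the old or the new payload, never a mix.
func writeAtomically(dir string, payload map[string][]byte) error {
	tsDir := filepath.Join(dir, fmt.Sprintf("..%s", time.Now().Format("2006_01_02_15_04_05.000000")))
	if err := os.MkdirAll(tsDir, 0700); err != nil {
		return err
	}
	for rel, data := range payload {
		p := filepath.Join(tsDir, rel)
		if err := os.MkdirAll(filepath.Dir(p), 0700); err != nil {
			return err
		}
		if err := os.WriteFile(p, data, 0600); err != nil {
			return err
		}
	}
	// Build the new link under a temporary name, then rename it over
	// "..data": rename(2) is atomic, which makes the whole update atomic.
	tmpLink := filepath.Join(dir, "..data_tmp")
	if err := os.Symlink(filepath.Base(tsDir), tmpLink); err != nil {
		return err
	}
	return os.Rename(tmpLink, filepath.Join(dir, "..data"))
}
```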
Automatic merge from submit-queue
volume recycler: Don't start a new recycler pod if one already exists.
Recycling is a long-running process, so when the recycler controller is restarted in the meantime, it should not start a new recycler pod if one is already running.
This means that the recycler pod must have a deterministic name based on the name of the recycled PV; we then get a name conflict when creating a second pod for the same PV.
Two things need to be changed:
- recycler controller and recycler plugins must pass the PV.Name to the place where the pod is created. This is most of the patch and it should be pretty straightforward.
- create recycler pod with deterministic name and check "already exists" error.
While at it, remove the useless 'resourceVersion' argument and make log messages start with lowercase.
There is a unit test to check the behavior + there is an e2e test that checks that regular recycling is not broken (it does not try to run two recycler pods in parallel, as the recycler is single-threaded now).
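A hedged sketch of the creation path: derive the pod name from the PV name and treat an "already exists" conflict as "a recycler is already running" (stand-in types and naming scheme; the real code checks the API server's already-exists error):

```go
package recycler

import (
	"errors"
	"fmt"
)

// errAlreadyExists stands in for the API server's "already exists" error.
var errAlreadyExists = errors.New("already exists")

// podCreator is a stand-in for the client used to create pods.
type podCreator interface {
	Create(name string) error
}

// startRecyclerPod creates the recycler pod for a PV. Because the name is
// derived deterministically from the PV name, a restarted controller that
// tries to recycle the same PV gets a conflict instead of a second pod.
func startRecyclerPod(pvName string, client podCreator) error {
	name := "recycler-for-" + pvName // hypothetical naming scheme
	err := client.Create(name)
	if errors.Is(err, errAlreadyExists) {
		return fmt.Errorf("recycler pod %s already exists, another recycler is running", name)
	}
	return err
}
```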
Automatic merge from submit-queue
Updating QoS policy to be at the pod level
Quality of Service will be derived from the entire Pod spec, instead of from the resource specifications of individual containers.
A Pod is `Guaranteed` iff all its containers have limits == requests for all the first-class resources (cpu, memory as of now).
A Pod is `BestEffort` iff requests & limits are not specified for any resource across all containers.
A Pod is `Burstable` otherwise.
Note: Existing pods might be more susceptible to OOM Kills on the node due to this PR! To protect pods from being OOM killed on the node, set `limits` for all resources across all containers in a pod.
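The three rules above translate almost directly into code; a minimal sketch with stand-in types (the real implementation covers all first-class resources and the full `ResourceRequirements` types):

```go
package qos

// Resources is a stand-in for a container's requests/limits, keyed by
// resource name ("cpu", "memory").
type Resources struct {
	Requests map[string]string
	Limits   map[string]string
}

const (
	Guaranteed = "Guaranteed"
	Burstable  = "Burstable"
	BestEffort = "BestEffort"
)

var firstClassResources = []string{"cpu", "memory"}

// qosClass derives the pod-level QoS class from all containers:
// Guaranteed iff limits == requests everywhere, BestEffort iff nothing
// is specified anywhere, Burstable otherwise.
func qosClass(containers []Resources) string {
	guaranteed, bestEffort := true, true
	for _, c := range containers {
		for _, res := range firstClassResources {
			req, hasReq := c.Requests[res]
			lim, hasLim := c.Limits[res]
			if hasReq || hasLim {
				bestEffort = false
			}
			if !hasReq || !hasLim || req != lim {
				guaranteed = false
			}
		}
	}
	if guaranteed {
		return Guaranteed
	}
	if bestEffort {
		return BestEffort
	}
	return Burstable
}
```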
Automatic merge from submit-queue
add CIDR allocator for NodeController
This PR:
* uses pkg/controller/framework to watch nodes and reduce list calls when allocating CIDRs for nodes
* decouples the CIDR allocation logic from the status monitoring logic
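The allocation itself is subnetting arithmetic: carve the cluster CIDR into fixed-size per-node subnets and hand out the i-th one. A standard-library sketch (parameters illustrative, IPv4 only):

```go
package cidralloc

import (
	"encoding/binary"
	"fmt"
	"net"
)

// subnetAt returns the i-th subnet of size /subnetMaskSize carved out of
// clusterCIDR, e.g. subnetAt("10.244.0.0/16", 24, 3) -> 10.244.3.0/24.
func subnetAt(clusterCIDR *net.IPNet, subnetMaskSize, i int) (*net.IPNet, error) {
	clusterMaskSize, bits := clusterCIDR.Mask.Size()
	if bits != 32 {
		return nil, fmt.Errorf("this sketch handles IPv4 only")
	}
	if i >= 1<<uint(subnetMaskSize-clusterMaskSize) {
		return nil, fmt.Errorf("subnet index %d out of range", i)
	}
	base := binary.BigEndian.Uint32(clusterCIDR.IP.To4())
	offset := uint32(i) << uint(32-subnetMaskSize)
	ip := make(net.IP, 4)
	binary.BigEndian.PutUint32(ip, base+offset)
	return &net.IPNet{IP: ip, Mask: net.CIDRMask(subnetMaskSize, 32)}, nil
}
```

With this, the allocator only needs to track which indices are in use, so releasing a node's CIDR frees its index for reuse.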
Automatic merge from submit-queue
Add 'kubectl set image'
```release-note
Add "kubectl set image" for easier updating container images (for pods or resources with pod templates).
```
**Usage:**
```
kubectl set image (-f FILENAME | TYPE NAME) CONTAINER_NAME_1=CONTAINER_IMAGE_1 ... CONTAINER_NAME_N=CONTAINER_IMAGE_N
```
**Example:**
```console
# Set a deployment's nginx container image to 'nginx:1.9.1', and its busybox container image to 'busybox'.
$ kubectl set image deployment/nginx busybox=busybox nginx=nginx:1.9.1
# Update all deployments' nginx container's image to 'nginx:1.9.1'
$ kubectl set image deployments nginx=nginx:1.9.1 --all
# Update image of all containers of daemonset abc to 'nginx:1.9.1'
$ kubectl set image daemonset abc *=nginx:1.9.1
# Print result (in yaml format) of updating nginx container image from local file, without hitting the server
$ kubectl set image -f path/to/file.yaml nginx=nginx:1.9.1 --local -o yaml
```
I abandoned the `--container=xxx --image=xxx` flags in the [deploy proposal](https://github.com/kubernetes/kubernetes/blob/master/docs/proposals/deploy.md#kubectl-set) since it's much easier to use with just KEY=VALUE (CONTAINER_NAME=CONTAINER_IMAGE) pairs.
Ref #21648
@kubernetes/kubectl @bgrant0607 @kubernetes/sig-config
Automatic merge from submit-queue
kubelet: Don't attempt to apply the oom score if container exited already
Containers can terminate before the kubelet applies the OOM score adjustment. This is normal,
and the function should not error out.
This addresses #25844 partially.
/cc @smarterclayton @Random-Liu
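A hedged sketch of the tolerant write (the real code goes through the kubelet's OOM adjuster; writing `/proc/<pid>/oom_score_adj` is the underlying mechanism):

```go
package oom

import (
	"fmt"
	"os"
	"strconv"
)

// applyOOMScoreAdj writes the score for a pid, treating a vanished
// /proc/<pid> as success: the container already exited, which is normal
// and should not surface as an error.
func applyOOMScoreAdj(pid, value int) error {
	path := fmt.Sprintf("/proc/%d/oom_score_adj", pid)
	err := os.WriteFile(path, []byte(strconv.Itoa(value)), 0644)
	if os.IsNotExist(err) {
		return nil // process is gone; nothing to adjust
	}
	return err
}
```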
Automatic merge from submit-queue
Fixes panic in round tripper when using TLS under a proxy
When under a proxy with a valid cert from a trusted authority, the `SpdyRoundTripper` will likely not have a `*tls.Config` (no cert verification nor `InsecureSkipVerify` happened), which will result in a panic. So we have to create a new `*tls.Config` to be able to create a TLS client right after. If `RootCAs` in that new config is nil, the system pool will be used.
@ncdc PTAL
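The fix boils down to substituting an empty config before dialing; a sketch of the idea (an empty `tls.Config` with nil `RootCAs` makes `crypto/tls` verify against the system pool):

```go
package spdy

import (
	"crypto/tls"
	"net"
)

// tlsClientFor wraps rawConn in a TLS client connection. When no config
// was built (e.g. a proxied connection to a host with a publicly trusted
// cert), fall back to an empty one instead of dereferencing nil.
func tlsClientFor(rawConn net.Conn, cfg *tls.Config, serverName string) *tls.Conn {
	if cfg == nil {
		cfg = &tls.Config{} // nil RootCAs => system CA pool is used
	}
	if cfg.ServerName == "" {
		cfg.ServerName = serverName // needed for hostname verification
	}
	return tls.Client(rawConn, cfg)
}
```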
[]()
Automatic merge from submit-queue
NodeController doesn't evict Pods if no Nodes are Ready
Fixes #13412 #24597
When the NodeController doesn't see any Ready Node, it goes into "network segmentation mode". In this mode it cancels all evictions and doesn't evict any Pods.
It leaves network segmentation mode when it sees at least one Ready Node. When leaving, it resets all timers, so each Node has the full grace period to reconnect to the cluster.
cc @lavalamp @davidopp @mml @wojtek-t @fgrzadkowski
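In sketch form, the two behaviors described above (names and fields are illustrative stand-ins, not the controller's actual state):

```go
package nodecontroller

import "time"

// shouldEvict captures "network segmentation mode": if no Node at all is
// Ready, assume the master is partitioned from the cluster and stop
// evicting instead of draining every node's pods.
func shouldEvict(readyNodes int) bool {
	return readyNodes > 0
}

// resetGracePeriods is the "reset all timers" step taken when leaving
// segmentation mode: every node gets the full grace period again.
func resetGracePeriods(deadlines map[string]time.Time, grace time.Duration) {
	deadline := time.Now().Add(grace)
	for node := range deadlines {
		deadlines[node] = deadline
	}
}
```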