Automatic merge from submit-queue
plumb Update resthandler to allow old/new comparisons in admission
Rework how updated objects are passed to rest storage Update methods (first pass at https://github.com/kubernetes/kubernetes/pull/23928#discussion_r61444342)
* allows centralizing precondition checks (uid and resourceVersion)
* allows admission to have the old and new objects on patch/update operations (sets us up for field level authorization, differential quota updates, etc)
* allows patch operations to avoid double-GETting the object to apply the patch
Overview of important changes:
* pkg/api/rest/rest.go
* changes the `rest.Update` interface to give rest storage an `UpdatedObjectInfo` interface instead of the object directly. To get the updated object, the storage must call `UpdatedObject()`, passing in the current object (see the sketch at the end of this description)
* pkg/api/rest/update.go
* provides a default `UpdatedObjectInfo` impl
* passes a copy of the updated object through any provided transforming functions and returns it when asked
* builds UID preconditions from the updated object if they can be extracted
* pkg/apiserver/resthandler.go
* Reworks update and patch operations to give old objects to admission
* pkg/registry/generic/registry/store.go
* Calls `UpdatedObject()` inside `GuaranteedUpdate` so it can provide the old object
Todo:
- [x] Update rest.Update interface:
* Given the name of the object being updated
* To get the updated object data, the rest storage must pass the current object (fetched using the name) to an `UpdatedObject(ctx, oldObject) (newObject, error)` func. This is typically done inside a `GuaranteedUpdate` call.
- [x] Add old object to admission attributes interface
- [x] Update resthandler Update to move admission into the UpdatedObject() call
- [x] Update resthandler Patch to move the patch application and admission into the UpdatedObject() call
- [x] Add resttest tests to make sure oldObj is correctly passed to UpdatedObject(), and errors propagate back up
Follow-up:
* populate oldObject in admission for delete operations?
* update quota plugin to use `GetOldObject()` in admission attributes
* admission plugin to gate ownerReference modification on delete permission
* Decide how to handle preconditions (does that belong in the storage layer or in the resthandler layer?)
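To make the new flow concrete, here is a minimal, self-contained sketch of the `UpdatedObjectInfo` pattern described above. The types and the transform signature are illustrative stand-ins (it does not use the real `runtime.Object` or admission types), but the shape matches the idea: storage fetches the old object, then asks the info object for the new one, which gives transforms such as admission access to both.

```go
package main

import (
	"context"
	"fmt"
)

// Object stands in for runtime.Object in this sketch.
type Object interface{}

// TransformFunc can inspect both the old and the new object, which is what
// admission needs on update and patch.
type TransformFunc func(ctx context.Context, newObj, oldObj Object) (Object, error)

// UpdatedObjectInfo is a simplified version of the interface added to
// rest.Update: instead of receiving the new object directly, storage asks for
// it by supplying the object it just read.
type UpdatedObjectInfo interface {
	UpdatedObject(ctx context.Context, oldObj Object) (Object, error)
}

// defaultUpdatedObjectInfo mirrors the default impl in pkg/api/rest/update.go:
// it wraps a literal new object plus transforming functions (names here are
// illustrative).
type defaultUpdatedObjectInfo struct {
	obj        Object
	transforms []TransformFunc
}

func (i *defaultUpdatedObjectInfo) UpdatedObject(ctx context.Context, oldObj Object) (Object, error) {
	newObj := i.obj
	var err error
	for _, t := range i.transforms {
		if newObj, err = t(ctx, newObj, oldObj); err != nil {
			return nil, err
		}
	}
	return newObj, nil
}

func main() {
	// An admission-style transform that now sees both old and new objects.
	admit := func(ctx context.Context, newObj, oldObj Object) (Object, error) {
		fmt.Printf("admission sees old=%v new=%v\n", oldObj, newObj)
		return newObj, nil
	}
	info := &defaultUpdatedObjectInfo{obj: "new", transforms: []TransformFunc{admit}}

	// Inside a GuaranteedUpdate-style callback, storage passes the object it
	// just fetched and persists whatever comes back.
	updated, err := info.UpdatedObject(context.Background(), "old")
	if err != nil {
		panic(err)
	}
	fmt.Println("persisting:", updated)
}
```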
Previously keys were sorted as strings, so it was possible
to see an order such as 1, 10, 2, 3, 4, 5.
An Ints64 helper was implemented in the util/slice module to sort []int64 numerically.
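A minimal sketch of what such a helper looks like, using the standard `sort` package; the type name here is illustrative and may not match the actual helper in util/slice:

```go
package main

import (
	"fmt"
	"sort"
)

// Int64Slice attaches sort.Interface to []int64 so values sort numerically
// rather than lexically.
type Int64Slice []int64

func (s Int64Slice) Len() int           { return len(s) }
func (s Int64Slice) Less(i, j int) bool { return s[i] < s[j] }
func (s Int64Slice) Swap(i, j int)      { s[i], s[j] = s[j], s[i] }

func main() {
	keys := []int64{1, 10, 2, 3, 4, 5}
	sort.Sort(Int64Slice(keys))
	fmt.Println(keys) // [1 2 3 4 5 10] instead of the string order 1, 10, 2, ...
}
```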
Instead of applying rate limits only to operation polling, send all API calls
through a rate-limited RoundTripper.
This isn't a perfect solution, since the QPS is obviously getting
split between different controllers, etc., but it's also spread across
different APIs, which, in practice, rate limit differently.
Fixes #26119 (hopefully)
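As a rough illustration of the approach (not the actual cloud-provider code), a rate-limited `http.RoundTripper` can be sketched like this, assuming the `golang.org/x/time/rate` limiter:

```go
package main

import (
	"net/http"

	"golang.org/x/time/rate"
)

// rateLimitedRoundTripper blocks every request on a shared token bucket
// before delegating, so all API calls made through the client are rate
// limited, not just operation polling.
type rateLimitedRoundTripper struct {
	limiter  *rate.Limiter
	delegate http.RoundTripper
}

func (r *rateLimitedRoundTripper) RoundTrip(req *http.Request) (*http.Response, error) {
	// Wait for a token; honors the request's context for cancellation.
	if err := r.limiter.Wait(req.Context()); err != nil {
		return nil, err
	}
	return r.delegate.RoundTrip(req)
}

func main() {
	client := &http.Client{
		Transport: &rateLimitedRoundTripper{
			limiter:  rate.NewLimiter(10, 5), // ~10 QPS with a burst of 5 (illustrative numbers)
			delegate: http.DefaultTransport,
		},
	}
	_ = client // every call through this client now shares one QPS budget
}
```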
This provides a basic implementation for setting a stage1 on a per-pod
basis via an annotation.
It's possible this feature should be gated behind additional knobs, such
as a kubelet flag to filter the allowed stage1s, or a check akin to what
privileged gets in the apiserver.
Currently, it checks `AllowPrivileged` as a means to let people disable
this feature, though overloading one setting for both stage1 and
privileged isn't ideal.
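A rough sketch of the annotation-based selection, gated on the allow-privileged setting as described above; the annotation key and types are illustrative, not necessarily what the rkt runtime actually uses:

```go
package main

import "fmt"

// stage1Annotation is an illustrative annotation key; the real key used by
// the rkt runtime may differ.
const stage1Annotation = "rkt.alpha.kubernetes.io/stage1-name-override"

// Pod is a minimal stand-in for the Kubernetes pod type in this sketch.
type Pod struct {
	Annotations map[string]string
}

// stage1Override returns the per-pod stage1 image, if any, refusing the
// override when privileged use is not allowed on this kubelet.
func stage1Override(pod *Pod, allowPrivileged bool) (string, bool) {
	name, ok := pod.Annotations[stage1Annotation]
	if !ok || !allowPrivileged {
		return "", false
	}
	return name, true
}

func main() {
	pod := &Pod{Annotations: map[string]string{stage1Annotation: "stage1-fly.aci"}}
	if img, ok := stage1Override(pod, true); ok {
		fmt.Println("using stage1:", img)
	}
}
```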
Automatic merge from submit-queue
Handle federated service name lookups in kube-dns.
For the domain name queries that fail to match any records in the local
kube-dns cache, we check if the queried name matches the federation
pattern. If it does, we send a CNAME response to the federated name.
For more details look at the comments in the code.
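A simplified sketch of the match-and-rewrite idea (the real kube-dns code also folds the cluster's zone and region into the CNAME target, so the target format and federation map below are purely illustrative):

```go
package main

import (
	"fmt"
	"strings"
)

// federationNames maps a federation name to its domain, e.g.
// "myfed" -> "federation.example.com" (illustrative values).
var federationNames = map[string]string{"myfed": "federation.example.com"}

const clusterZone = "cluster.local"

// federationCNAME checks whether a query that missed the local cache looks
// like "<service>.<namespace>.<federation>.svc.cluster.local" and, if so,
// returns the federated name to answer with as a CNAME target.
func federationCNAME(query string) (string, bool) {
	labels := strings.Split(strings.TrimSuffix(query, "."), ".")
	// service.namespace.federation.svc.cluster.local -> 6 labels.
	if len(labels) != 6 || labels[3] != "svc" || strings.Join(labels[4:], ".") != clusterZone {
		return "", false
	}
	domain, ok := federationNames[labels[2]]
	if !ok {
		return "", false
	}
	// CNAME to the federation-wide name served by the federated DNS provider.
	return fmt.Sprintf("%s.%s.%s.svc.%s.", labels[0], labels[1], labels[2], domain), true
}

func main() {
	if target, ok := federationCNAME("myservice.myns.myfed.svc.cluster.local."); ok {
		fmt.Println("CNAME ->", target)
	}
}
```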
Tests are coming ...
Also, this PR is based on @ArtfulCoder's PR #23930. So please review only the last commit here.
PTAL @ArtfulCoder @thockin @quinton-hoole @nikhiljindal
Automatic merge from submit-queue
Support sort-by timestamp in kubectl get
**Before:**
```console
$ kubectl get svc --sort-by='{.metadata.creationTimestamp}'
proto: no encoder for TypeMeta unversioned.TypeMeta [GetProperties]
proto: tag has too few fields: "-"
proto: no coders for struct *reflect.rtype
proto: no encoder for sec int64 [GetProperties]
proto: no encoder for nsec int32 [GetProperties]
proto: no encoder for loc *time.Location [GetProperties]
proto: no encoder for Time time.Time [GetProperties]
proto: no coders for intstr.Type
proto: no encoder for Type intstr.Type [GetProperties]
F0513 16:46:49.499894 29562 sorting_printer.go:182] Field {.metadata.creationTimestamp} in TypeMeta:<kind:"Service" apiVersion:"v1" > metadata:<name:"kubernetes" generateName:"" namespace:"default" selfLink:"/api/v1/namespaces/default/services/kubernetes" uid:"b88b4739-1964-11e6-9ac3-64510658e388" resourceVersion:"8" generation:0 creationTimestamp:<2016-05-13T16:45:06-07:00> labels:<key:"component" value:"apiserver" > labels:<key:"provider" value:"kubernetes" > > spec:<ports:<name:"https" protocol:"TCP" port:443 targetPort:<type:0 intVal:443 strVal:"" > nodePort:0 > clusterIP:"10.0.0.1" type:"ClusterIP" sessionAffinity:"ClientIP" loadBalancerIP:"" > status:<loadBalancer:<> > is an unsortable type: struct, err: unsortable type: struct
```
**After:**
```console
$ kubectl get svc --sort-by='{.metadata.creationTimestamp}'
NAME CLUSTER-IP EXTERNAL-IP PORT(S) AGE
kubernetes 10.0.0.1 <none> 443/TCP 48s
frontend 10.0.0.108 <none> 80/TCP 10s
```
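The underlying fix is to special-case time values in the sorting printer instead of treating them as generic (and therefore unsortable) structs. A rough, self-contained sketch of that comparison, not the actual `sorting_printer.go` code:

```go
package main

import (
	"fmt"
	"reflect"
	"sort"
	"time"
)

// lessValues compares two reflected field values. Structs are normally
// unsortable, but time values (like metadata.creationTimestamp) get a
// special case, which is the essence of the fix.
func lessValues(a, b reflect.Value) (bool, error) {
	if t1, ok := a.Interface().(time.Time); ok {
		t2, _ := b.Interface().(time.Time)
		return t1.Before(t2), nil
	}
	switch a.Kind() {
	case reflect.String:
		return a.String() < b.String(), nil
	default:
		return false, fmt.Errorf("unsortable type: %s", a.Kind())
	}
}

func main() {
	created := []time.Time{time.Now(), time.Now().Add(-48 * time.Second)}
	sort.Slice(created, func(i, j int) bool {
		less, _ := lessValues(reflect.ValueOf(created[i]), reflect.ValueOf(created[j]))
		return less
	})
	fmt.Println(created[0].Before(created[1])) // true: oldest first
}
```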
@kubernetes/kubectl
Automatic merge from submit-queue
cache: reflector should never stop watching
A recent change tries to separate resync and relist. The motivation
was to avoid triggering relist when a resync is required.
However, the change is not effective since it stops the watcher. As hongchao
mentioned in the original comment, today's storage interface will not deliver
any progress notification to the watch chan. So any watcher that does not receive
events for the last few seconds will not be able to catch up from the previous
index after a hard close since the index of the last received event is out of
the cache window inside etcd2.
This pull request fixes this issue by not stopping the watcher when a resync is
required.
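A rough sketch of the resulting loop shape, with illustrative types rather than the real reflector code: the resync timer fires in the same select as watch events, so a resync no longer tears down the watch:

```go
package main

import (
	"fmt"
	"time"
)

// watchLoop sketches the fix: the resync timer fires inside the same select
// as watch events, so a resync never stops the watch and the reflector does
// not fall behind the etcd cache window.
func watchLoop(events <-chan string, resyncPeriod time.Duration, store interface{ Resync() }) {
	ticker := time.NewTicker(resyncPeriod)
	defer ticker.Stop()
	for {
		select {
		case ev, ok := <-events:
			if !ok {
				return // the watch itself ended; only then do we exit
			}
			fmt.Println("event:", ev)
		case <-ticker.C:
			// Resync from the local store without closing the watch channel.
			store.Resync()
		}
	}
}

type fakeStore struct{}

func (fakeStore) Resync() { fmt.Println("resync without relist") }

func main() {
	events := make(chan string, 1)
	events <- "ADDED pod/foo"
	close(events)
	watchLoop(events, 50*time.Millisecond, fakeStore{})
}
```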
/cc @hongchaodeng @wojtek-t @timothysc @rrati @smarterclayton
Automatic merge from submit-queue
vSphere Volume Plugin Implementation
This PR implements vSphere Volume plugin support in Kubernetes (ref. issue #23932).