The result of this was that an update to a Service would temporarily release its NodePort (the repair loop would restore it within a minute). During that window, another Service could be allocated the same port.
Automatic merge from submit-queue
Add websocket support for port forwarding
#32880
**Release note**:
```release-note
Port forwarding can forward over websockets or SPDY.
```
- adjust ports to int32
- CRI flows the websocket ports as query params
- Do not validate ports since the protocol is unknown at that point: SPDY flows the ports as headers, while websockets use query params
- Only flow query params if there is at least one port query param
- split out port forwarding into its own package
Allow multiple port forwarding ports
- Make it easy to determine which port is tied to which channel
- odd channels are for data
- even channels are for errors
- allow comma-separated ports to specify multiple ports (see the sketch below)
Add portforwardtester 1.2 to whitelist
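As a rough, hypothetical sketch of the comma-separated port handling and the query-param flow described above (the function name `buildWebsocketQuery` and its placement are illustrative, not the actual client or CRI code):

```go
package main

import (
	"fmt"
	"net/url"
	"strings"
)

// buildWebsocketQuery turns a user-supplied comma-separated port list into
// the "port" query parameters carried on the websocket request URL.
// Illustrative only; the real client and CRI shims have their own plumbing.
func buildWebsocketQuery(portSpec string) (url.Values, error) {
	query := url.Values{}
	for _, p := range strings.Split(portSpec, ",") {
		p = strings.TrimSpace(p)
		if p == "" {
			return nil, fmt.Errorf("empty port in %q", portSpec)
		}
		// Ports are deliberately not validated here: at this point the
		// protocol (SPDY vs. websocket) is not yet known.
		query.Add("port", p)
	}
	return query, nil
}

func main() {
	q, err := buildWebsocketQuery("8080,8443")
	if err != nil {
		panic(err)
	}
	fmt.Println(q.Encode()) // port=8080&port=8443
}
```

Each requested port then gets its own pair of channels on the websocket stream, one for data and one for errors, which is what makes it easy to tell which channel belongs to which port.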
Automatic merge from submit-queue (batch tested with PRs 40638, 40742, 40710, 40718, 40763)
move pkg/storage to apiserver
Mechanical move of `pkg/storage` (not sub packages) to `k8s.io/apiserver`.
@sttts
Automatic merge from submit-queue (batch tested with PRs 40111, 40368, 40342, 40274, 39443)
Eliminate "Unknown service type: ExternalName"
When creating an ExternalName Service, rest.go still generates the warning message "Unknown service type: ExternalName". This warning should be eliminated, since that service type is now supported.
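A minimal, self-contained sketch of the kind of switch that produces the warning (the type and function here are simplified stand-ins for the real rest.go code): listing `ExternalName` explicitly keeps it out of the default branch.

```go
package main

import "fmt"

// ServiceType is a simplified stand-in for api.ServiceType, for illustration only.
type ServiceType string

const (
	ServiceTypeClusterIP    ServiceType = "ClusterIP"
	ServiceTypeNodePort     ServiceType = "NodePort"
	ServiceTypeLoadBalancer ServiceType = "LoadBalancer"
	ServiceTypeExternalName ServiceType = "ExternalName"
)

// shouldAssignNodePorts sketches the switch: before the fix, ExternalName
// fell through to the default branch and triggered the "Unknown service
// type" warning; handling it explicitly avoids that.
func shouldAssignNodePorts(t ServiceType) bool {
	switch t {
	case ServiceTypeLoadBalancer, ServiceTypeNodePort:
		return true
	case ServiceTypeClusterIP, ServiceTypeExternalName:
		return false
	default:
		fmt.Printf("Unknown service type: %v\n", t)
		return false
	}
}

func main() {
	fmt.Println(shouldAssignNodePorts(ServiceTypeExternalName)) // false, and no warning
}
```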
Automatic merge from submit-queue (batch tested with PRs 38739, 40480, 40495, 40172, 40393)
Rename controller pkg/registry/core/controller to pkg/registry/core/replicationcontroller
**What this PR does / why we need it**:
Rename controller pkg/registry/core/controller to pkg/registry/core/replicationcontroller
This clarifies the purpose of the package, since it is specifically for the replicationcontroller.
Please refer to
https://github.com/kubernetes/kubernetes/issues/17648
**Which issue this PR fixes** *(optional, in `fixes #<issue number>(, fixes #<issue_number>, ...)` format, will close that issue when PR gets merged)*: fixes #
**Special notes for your reviewer**:
NONE
**Release note**:
```release-note
NONE
```
Automatic merge from submit-queue
move pkg/fields to apimachinery
Purely mechanical move of `pkg/fields` to apimachinery.
Discussed with @lavalamp on slack. Moving this and `labels` to apimachinery.
@liggitt any concerns? I think the idea of field selection should become generic and this ends up shared between client and server, so this is a more logical location.
Automatic merge from submit-queue
run staging client-go update
Chasing to see what real problems we have in staging-client-go.
@sttts you get similar results?
Automatic merge from submit-queue
replace global registry in apimachinery with global registry in k8s.io/kubernetes
We'd like to remove all globals, but our immediate problem is that a shared registry between k8s.io/kubernetes and k8s.io/client-go doesn't work. Since client-go makes a copy, we can actually keep a global registry with other globals in pkg/api for now.
@kubernetes/sig-api-machinery-misc @lavalamp @smarterclayton @sttts
Automatic merge from submit-queue (batch tested with PRs 38354, 38371)
Add GetOptions parameter to Get() calls in client library
Ref #37473
This PR is super mechanical - the non-trivial commits are:
- Update client generator
- Register GetOptions in batch/v2alpha1 group
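Roughly, the call-site change looks like the snippet below (exact package paths and method sets vary by release, and newer client-go versions additionally take a context.Context as the first argument):

```go
// before: Get takes only the object name
pod, err := client.Core().Pods("default").Get("my-pod")

// after: every Get also takes a GetOptions value
pod, err := client.Core().Pods("default").Get("my-pod", metav1.GetOptions{})
```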
Automatic merge from submit-queue (batch tested with PRs 35300, 36709, 37643, 37813, 37697)
[etcd] test cleanup: remove unnecessary AddPrefix()
What?
Remove etcdtest.AddPrefix() from tests. The prefix is now prepended automatically by the etcd storage layer.
Why?
ref: #36290, #36374
After that change, keeping AddPrefix() in the tests would double-prepend the prefix.
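For illustration (assuming the default test prefix of `/registry`), the change at a typical test call site looks roughly like this:

```go
// Before the storage layer prepended the prefix itself, tests built keys like:
key := etcdtest.AddPrefix("/pods/ns/foo") // e.g. "/registry/pods/ns/foo"

// Now the raw key is enough; keeping AddPrefix would produce a doubly
// prefixed key (e.g. "/registry/registry/pods/ns/foo") and the lookup
// would miss.
key = "/pods/ns/foo"
```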
Automatic merge from submit-queue
Reuse fields and labels
This should significantly reduce memory allocations in the apiserver in large clusters.
Explanation:
- every kubelet refreshes its watch every 5-10 minutes (this generally does not cause a relist - it just renews the watch)
- that means that, in a 5000-node cluster, we are issuing ~10 watches per second
- since we don't have "watch heartbeats", the watch is issued from the previously received resourceVersion
- to put some numbers on it, let's assume pods are evenly spread across nodes and writes to them are evenly spread - that means a given kubelet is interested in 1 in 5000 pod changes
- with that assumption, each watch has to process 2500 (on average) previous watch events
- for each such event, we are currently computing fields.
This PR fixes that problem by reusing fields and labels instead of recomputing them for each event.
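A hedged sketch of the general idea, not the actual cacher internals (type names, fields, and the `getAttrs` callback are illustrative, and the import paths reflect the current apimachinery layout): compute the label and field sets once when an event enters the watch cache, and let every watcher reuse them.

```go
package cacher

import (
	"k8s.io/apimachinery/pkg/fields"
	"k8s.io/apimachinery/pkg/labels"
	"k8s.io/apimachinery/pkg/runtime"
)

// watchCacheEvent carries the label and field sets computed once, when the
// event enters the watch cache, so every watcher can reuse them instead of
// recomputing them per watcher.
type watchCacheEvent struct {
	Object runtime.Object
	Labels labels.Set
	Fields fields.Set
}

// attrFunc extracts the label and field sets from an object.
type attrFunc func(runtime.Object) (labels.Set, fields.Set, error)

func newWatchCacheEvent(obj runtime.Object, getAttrs attrFunc) (*watchCacheEvent, error) {
	l, f, err := getAttrs(obj) // computed exactly once per event
	if err != nil {
		return nil, err
	}
	return &watchCacheEvent{Object: obj, Labels: l, Fields: f}, nil
}

// matches is what each watcher evaluates; it only reads the cached sets.
func (e *watchCacheEvent) matches(labelSel labels.Selector, fieldSel fields.Selector) bool {
	return labelSel.Matches(e.Labels) && fieldSel.Matches(e.Fields)
}
```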