Automatic merge from submit-queue
More RC/RS controller logging updates
We were comparing the addresses of the old and new RC.spec.replicas, but we need to compare the values. This only affects logging.
Update the RS controller to match the RC controller: log when spec.replicas changes, not status.replicas.
@kargakis @janetkuo @sttts @liggitt
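For illustration, a minimal sketch of the value-vs-pointer distinction described above, assuming a `*int32` replicas field as in the RC/RS spec; the helper is hypothetical, not the controller's actual code:
```go
package main

import "fmt"

// replicasChanged compares the pointed-to values, not the pointer addresses.
func replicasChanged(oldReplicas, newReplicas *int32) bool {
	if oldReplicas == nil || newReplicas == nil {
		return oldReplicas != newReplicas
	}
	return *oldReplicas != *newReplicas
}

func main() {
	a, b := int32(3), int32(3)
	// Comparing the fields directly compares addresses and is almost always "different".
	fmt.Println("pointers differ:", &a != &b) // true, misleading
	// Comparing the values gives the answer the log message actually needs.
	fmt.Println("replicas changed:", replicasChanged(&a, &b)) // false
}
```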
Automatic merge from submit-queue
Minor cleanups
Minor improvements:
- `ValidateNoNewFinalizers`: remove unused const
- Mention that mutation of the `spec.initContainers[*].image` field is allowed
- Improve godoc comments
Automatic merge from submit-queue
Print conditions of RC/RS in 'kubectl describe' command
**What this PR does / why we need it**:
If conditions of RC/RS exist, print them in 'kubectl describe' command.
**Which issue this PR fixes** *(optional, in `fixes #<issue number>(, fixes #<issue_number>, ...)` format, will close that issue when PR gets merged)*: fixes #
**Special notes for your reviewer**:
**Release note**:
```release-note
Print conditions of RC/RS in 'kubectl describe' command.
```
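A rough sketch of the kind of output addition described, using simplified stand-in types rather than the real API types or kubectl's describe printer:
```go
package main

import (
	"fmt"
	"os"
	"text/tabwriter"
)

type condition struct {
	Type, Status, Reason string
}

// printConditions appends a Conditions table to the describe output, but only
// when the RC/RS actually has conditions.
func printConditions(conds []condition) {
	if len(conds) == 0 {
		return
	}
	w := tabwriter.NewWriter(os.Stdout, 0, 8, 2, ' ', 0)
	fmt.Fprintln(w, "Conditions:")
	fmt.Fprintln(w, "  Type\tStatus\tReason")
	fmt.Fprintln(w, "  ----\t------\t------")
	for _, c := range conds {
		fmt.Fprintf(w, "  %s\t%s\t%s\n", c.Type, c.Status, c.Reason)
	}
	w.Flush()
}

func main() {
	printConditions([]condition{{Type: "ReplicaFailure", Status: "True", Reason: "FailedCreate"}})
}
```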
Automatic merge from submit-queue (batch tested with PRs 41498, 44487)
Use len of pods in stateful set error
**What this PR does / why we need it**:
Syncing a stateful set reports the wrong error; we need to fix it.
**Release note**:
```release-note
`NONE`
```
Automatic merge from submit-queue
cinder: Add support for the KVM virtio-scsi driver
**What this PR does / why we need it**:
The virtio-scsi driver for KVM changes the way disks appear in /dev/disk/by-id.
This adds support for the new format.
Without this, attaching volumes on an OpenStack cluster that uses this KVM driver does not work.
**Special notes for your reviewer**:
Does this need e2e tests? I couldn't find anywhere to add another openstack configuration used in the e2e tests.
Wiki page about this: https://wiki.openstack.org/wiki/Virtio-scsi-for-bdm
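As a rough illustration of what the "new format" means, a sketch that probes both by-id layouts; the exact prefixes and the 20-character truncation are assumptions for illustration, not taken from this PR:
```go
package main

import (
	"fmt"
	"os"
	"path/filepath"
)

// candidateDevicePaths lists the /dev/disk/by-id names a Cinder volume might
// appear under, depending on which KVM block driver backs the guest.
// Prefixes and truncation length are illustrative assumptions.
func candidateDevicePaths(volumeID string) []string {
	serial := volumeID
	if len(serial) > 20 {
		serial = serial[:20] // hypervisors commonly truncate the exposed serial
	}
	return []string{
		filepath.Join("/dev/disk/by-id", "virtio-"+serial),                   // virtio-blk layout
		filepath.Join("/dev/disk/by-id", "scsi-0QEMU_QEMU_HARDDISK_"+serial), // virtio-scsi layout
	}
}

// findDevicePath returns the first candidate that actually exists on the node.
func findDevicePath(volumeID string) (string, error) {
	for _, p := range candidateDevicePaths(volumeID) {
		if _, err := os.Stat(p); err == nil {
			return p, nil
		}
	}
	return "", fmt.Errorf("volume %s not found under /dev/disk/by-id", volumeID)
}

func main() {
	if p, err := findDevicePath("0df75b6b-1f4a-4e38-9c12"); err == nil {
		fmt.Println("attached at", p)
	}
}
```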
**Release note**:
```release-note
cinder: Add support for the KVM virtio-scsi driver
```
Automatic merge from submit-queue (batch tested with PRs 44722, 44704, 44681, 44494, 39732)
Fix issue #34242: Attach/detach should recover from a crash
When the attach/detach controller crashes and a pod with an attached PV is deleted afterwards, the controller will never detach the pod's attached volumes. To prevent this, the controller should try to recover the state from the nodes' status and figure out which volumes to detach. This requires some changes in the volume providers too: the only information available from the nodes is the volume name and the device path. The controller needs to find the correct volume plugin and reconstruct the volume spec just from the name. This also required a small change in the volume plugin interface.
Fixes Issue #34242.
cc: @jsafrane @jingxu97
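A hedged sketch of the kind of plugin-interface change this describes: a method that rebuilds a minimal volume spec from the volume name alone. Names and signatures here are illustrative, not the actual in-tree interface.
```go
package volume

// Spec is a simplified stand-in for the volume spec the controller needs in
// order to issue a Detach.
type Spec struct {
	Name string
	// ...provider-specific fields (volume ID, device path, etc.)
}

// AttachableVolumePlugin sketches the subset of plugin behavior the
// attach/detach controller needs after a restart.
type AttachableVolumePlugin interface {
	// GetPluginName lets the controller match a volume name reported in
	// node.Status.VolumesAttached (e.g. "kubernetes.io/cinder/<volume-id>")
	// to the owning plugin.
	GetPluginName() string

	// ConstructVolumeSpec rebuilds a minimal Spec from the volume name alone,
	// so the controller can detach a volume even though the pod that mounted
	// it (and its full spec) is already gone.
	ConstructVolumeSpec(volumeName string) (*Spec, error)
}
```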
Automatic merge from submit-queue (batch tested with PRs 44722, 44704, 44681, 44494, 39732)
Don't rebuild endpoints map in iptables kube-proxy all the time.
@thockin - I think that this PR should help with yours https://github.com/kubernetes/kubernetes/pull/41030 - besides the performance improvements, it clearly defines when an update is needed because of endpoints. If we do the same for services (I'm happy to help with it), I think it should be much simpler.
But please take a look if it makes sense from your perspective too.
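A minimal sketch of the edge-triggered idea, with illustrative types rather than kube-proxy's real structures: only the entries for the Endpoints object that changed are touched, and the caller learns whether an iptables resync is needed at all.
```go
package main

import "fmt"

type endpointsMap map[string][]string // "namespace/name:port" -> endpoint addresses

// updateEndpointsEntry replaces the entries belonging to one Endpoints object
// and returns true only if something actually changed, so callers can skip the
// expensive iptables resync otherwise.
func updateEndpointsEntry(m endpointsMap, key string, newAddrs []string) bool {
	if equal(m[key], newAddrs) {
		return false
	}
	if len(newAddrs) == 0 {
		delete(m, key)
	} else {
		m[key] = newAddrs
	}
	return true
}

func equal(a, b []string) bool {
	if len(a) != len(b) {
		return false
	}
	for i := range a {
		if a[i] != b[i] {
			return false
		}
	}
	return true
}

func main() {
	m := endpointsMap{"default/web:http": {"10.0.0.1:80"}}
	changed := updateEndpointsEntry(m, "default/web:http", []string{"10.0.0.1:80", "10.0.0.2:80"})
	fmt.Println("needs iptables resync:", changed)
}
```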
Automatic merge from submit-queue (batch tested with PRs 44594, 44651)
Remove strings.Compare(), use native string operations
I noticed we use strings.Compare() in some code; we can remove it and use the native comparison operators instead.
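For illustration, the shape of the change:
```go
package main

import "fmt"

func main() {
	a, b := "apple", "banana"

	// Before: three-way compare via the strings package.
	// if strings.Compare(a, b) < 0 { ... }

	// After: native operators, clearer and no extra import needed.
	if a < b {
		fmt.Println(a, "sorts before", b)
	}
	if a == b {
		fmt.Println("equal")
	}
}
```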
Automatic merge from submit-queue
Delete deprecated node phase in kubectl describe node.
**What this PR does / why we need it**:
Since NodePhase is no longer used, delete it from the `kubectl describe node` result.
**Which issue this PR fixes** *(optional, in `fixes #<issue number>(, fixes #<issue_number>, ...)` format, will close that issue when PR gets merged)*: fixes #
**Special notes for your reviewer**:
ref: https://github.com/kubernetes/kubernetes/pull/44388
**Release note**:
```release-note
NONE
```
Automatic merge from submit-queue (batch tested with PRs 42177, 42176, 44721)
Job: Respect ControllerRef
**What this PR does / why we need it**:
This is part of the completion of the [ControllerRef](https://github.com/kubernetes/community/blob/master/contributors/design-proposals/controller-ref.md) proposal. It brings Job into full compliance with ControllerRef. See the individual commit messages for details.
**Which issue this PR fixes**:
This ensures that Job does not fight with other controllers over control of Pods.
Ref: #24433
**Special notes for your reviewer**:
**Release note**:
```release-note
Job controller now respects ControllerRef to avoid fighting over Pods.
```
cc @erictune @kubernetes/sig-apps-pr-reviews
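As a rough sketch of what "respecting ControllerRef" means for a controller, using simplified stand-ins for the real metav1 types:
```go
package main

import "fmt"

type OwnerReference struct {
	Kind       string
	Name       string
	UID        string
	Controller bool
}

type Pod struct {
	Name            string
	OwnerReferences []OwnerReference
}

// getControllerOf mirrors the idea behind metav1.GetControllerOf: at most one
// owner reference may have Controller=true.
func getControllerOf(pod Pod) *OwnerReference {
	for i := range pod.OwnerReferences {
		if pod.OwnerReferences[i].Controller {
			return &pod.OwnerReferences[i]
		}
	}
	return nil
}

func main() {
	pod := Pod{
		Name: "pi-x7k2f",
		OwnerReferences: []OwnerReference{
			{Kind: "Job", Name: "pi", UID: "1234", Controller: true},
		},
	}
	if ref := getControllerOf(pod); ref != nil && ref.Kind == "Job" && ref.UID == "1234" {
		fmt.Println("pod", pod.Name, "is controlled by Job", ref.Name)
	} else {
		fmt.Println("pod", pod.Name, "belongs to another controller; leave it alone")
	}
}
```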
Automatic merge from submit-queue (batch tested with PRs 42177, 42176, 44721)
CronJob: Respect ControllerRef
**What this PR does / why we need it**:
This is part of the completion of the [ControllerRef](https://github.com/kubernetes/community/blob/master/contributors/design-proposals/controller-ref.md) proposal. It brings CronJob into compliance with ControllerRef. See the individual commit messages for details.
**Which issue this PR fixes**:
This ensures that other controllers do not fight over control of objects that a CronJob owns.
**Special notes for your reviewer**:
**Release note**:
```release-note
CronJob controller now respects ControllerRef to avoid fighting with other controllers.
```
cc @erictune @kubernetes/sig-apps-pr-reviews
Automatic merge from submit-queue (batch tested with PRs 44555, 44238)
openstack: remove field flavor_to_resource
I believe there is no usage of `flavor_to_resource`, and I think there is no need to build that information either.
cc @anguslees
**Release note**:
```
NONE
```
Automatic merge from submit-queue
Refactoring: reorganize taint functions in kubectl to expose operations
**What this PR does / why we need it**:
This adds some UX functionality when specifying taints using kubectl.
For example:
```
./kubectl.sh taint nodes XYZ dedicated1=abca2:NoSchedule
node "XYZ" tainted
./kubectl.sh taint nodes XYZ dedicated1=abca1:NoSchedule --overwrite=True
node "XYZ overwritten
./kubectl.sh taint nodes XYZ dedicated1-
node "XYZ" untainted
./kubectl.sh taint nodes XYZ dedicated=abca1:NoSchedule dedicated1-
node "XYZ" modified
```
**Which issue this PR fixes** *(optional, in `fixes #<issue number>(, fixes #<issue_number>, ...)` format, will close that issue when PR gets merged)*: fixes #43167
**Release note**:
```
Fixed the output of the kubectl taint node command with minor improvements.
```
Automatic merge from submit-queue (batch tested with PRs 44687, 44689, 44661)
Fix panic when using `kubeadm init` with vsphere cloud-provider
**What this PR does / why we need it**:
Check if the reference is nil when finding machine reference by UUID.
**Which issue this PR fixes** *(optional, in `fixes #<issue number>(, fixes #<issue_number>, ...)` format, will close that issue when PR gets merged)*: fixes #44603
**Special notes for your reviewer**:
This is just a quick fix for the panic.
**Release note**:
```release-note
NONE
```
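A minimal sketch of the guard, with a hypothetical stand-in for the vSphere UUID lookup:
```go
package main

import "fmt"

type machineRef struct{ Value string }

// findByUUID stands in for the vSphere client call; it may legitimately
// return nil when no VM matches the UUID.
func findByUUID(uuid string) *machineRef {
	return nil // e.g. the node's UUID is unknown to vCenter
}

func main() {
	ref := findByUUID("4237a720-...")
	if ref == nil {
		// Bail out cleanly instead of dereferencing a nil reference and panicking.
		fmt.Println("error: no VM found for UUID")
		return
	}
	fmt.Println("found VM:", ref.Value)
}
```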
Automatic merge from submit-queue
Implement LRU for AWS device allocator
On failure to attach, do not reuse the device name from the pool. In an AWS environment, when an attach fails on the node, avoiding that device name keeps a bigger pool of devices available.
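A hedged sketch of a least-recently-used device-name allocator in this spirit; the names and bookkeeping are illustrative, not the actual AWS volume code:
```go
package main

import "fmt"

type deviceAllocator struct {
	lastUsed map[string]int // device suffix -> logical clock of last assignment
	clock    int
}

func newDeviceAllocator(suffixes []string) *deviceAllocator {
	d := &deviceAllocator{lastUsed: map[string]int{}}
	for _, s := range suffixes {
		d.lastUsed[s] = 0
	}
	return d
}

// GetNext returns the free device suffix that was used least recently, so a
// name that just failed to attach is not handed out again immediately.
func (d *deviceAllocator) GetNext(inUse map[string]bool) (string, error) {
	best, bestClock := "", -1
	for s, used := range d.lastUsed {
		if inUse[s] {
			continue
		}
		if bestClock == -1 || used < bestClock {
			best, bestClock = s, used
		}
	}
	if best == "" {
		return "", fmt.Errorf("no free device names")
	}
	d.clock++
	d.lastUsed[best] = d.clock
	return best, nil
}

func main() {
	alloc := newDeviceAllocator([]string{"ba", "bb", "bc"})
	name, _ := alloc.GetNext(map[string]bool{})
	fmt.Println("allocate suffix", name)
}
```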
The Job Listers still use selectors, because this is the
behavior expected by callers. This clarifies the meaning of the
returned list. Some callers may need to switch to using
GetControllerOf() instead, but that is a separate, case-by-case issue.
Automatic merge from submit-queue
Edge based services in proxy
This is a sibling effort to what I did for endpoints in kube-proxy.
This PR is the first one (changing config & iptables); the userspace proxy will follow.
Automatic merge from submit-queue
Fixes an issue in cidr_set.go
Function getBeginingAndEndIndices may return an end index that is too big.
**What this PR does / why we need it**:
Fixes getBeginingAndEndIndices() in cidr_set.go.
The end index is off by one when s.clusterMaskSize >= maskSize.
**Which issue this PR fixes** *(optional, in `fixes #<issue number>(, fixes #<issue_number>, ...)` format, will close that issue when PR gets merged)*: fixes #44558
**Special notes for your reviewer**:
**Release note**:
```release-note
```
Automatic merge from submit-queue (batch tested with PRs 44645, 44639, 43510)
Add support for Azure internal load balancer
**Which issue this PR fixes**
Fixes https://github.com/kubernetes/kubernetes/issues/38901
**What this PR does / why we need it**:
This PR is to add support for Azure internal load balancer
Currently, when exposing a service with the LoadBalancer type, the Azure provider assumes that it requires a public load balancer.
Thus it will request a public IP address resource and expose the service via that public IP.
In this case we're not able to use private IP addresses (within the cluster virtual network) for the service.
**Special notes for your reviewer**:
1. Clarification:
a. 'LoadBalancer' refers to an option for the 'type' field under ServiceSpec. See https://kubernetes.io/docs/resources-reference/v1.5/#servicespec-v1
b. 'Azure LoadBalancer' refers to a type of Azure resource. See https://docs.microsoft.com/en-us/azure/load-balancer/load-balancer-overview
2. For a single Azure LoadBalancer, all frontend IPs must reference either a subnet or a publicIpAddress, which means it can be either an Internet-facing load balancer or an internal one.
The current provider creates an Azure LoadBalancer with the generated name '${loadBalancerName}' for all services with the 'LoadBalancer' type.
This PR introduces the name '${loadBalancerName}-internal' for a separate Azure LoadBalancer resource, used by all services that require an internal load balancer.
3. This PR introduces a new annotation for the internal load balancer type behaviour:
a. When the annotation value is set to 'false' or not set, it falls back to the original behaviour, assuming the user is requesting a public load balancer;
b. When the annotation value is set to 'true', the following rules apply depending on the 'loadBalancerIP' field of the ServiceSpec:
- If 'loadBalancerIP' is not set, it will create a load balancer rule with a dynamically assigned frontend IP within the cluster subnet;
- If 'loadBalancerIP' is set, it will create a load balancer rule with the frontend IP set to the given value. If the given value is not valid, that is, it does not fall within the cluster subnet range, the creation will fail.
4. Users may change the load balancer type by applying the annotation to the service at runtime.
In this case, the load balancer rules need to be 'switched' between the internal and the external load balancer.
For example, if we have a service with an internal load balancer and the user then removes the annotation, making it a public one, we need to clean up the rules in the internal Azure LoadBalancer before creating rules in the public one. A minimal sketch of the annotation-driven selection follows below.
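The sketch below assumes an annotation key and naming scheme for illustration; they are not necessarily the exact strings the PR adds.
```go
package main

import "fmt"

// internalLBAnnotation is an illustrative annotation key; check the provider
// docs for the exact string.
const internalLBAnnotation = "service.beta.kubernetes.io/azure-load-balancer-internal"

type service struct {
	Annotations    map[string]string
	LoadBalancerIP string
}

// loadBalancerName picks which Azure LoadBalancer resource a service's rules
// belong to: the shared public one, or the separate "-internal" one.
func loadBalancerName(clusterLBName string, svc service) string {
	if svc.Annotations[internalLBAnnotation] == "true" {
		return clusterLBName + "-internal"
	}
	return clusterLBName
}

func main() {
	svc := service{Annotations: map[string]string{internalLBAnnotation: "true"}}
	fmt.Println(loadBalancerName("kubernetes", svc))
	// Flipping the annotation at runtime means rules must be removed from the
	// old LB before being created on the new one, as noted above.
}
```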
**Release note**:
Automatic merge from submit-queue (batch tested with PRs 44222, 44614, 44292, 44638)
Smarter generic getters and describers
Makes printers and describers smarter for generic resources.
This traverses unstructured objects and prints their attributes for generic resources (TPR, federated APIs, etc.) in `kubectl get` and `kubectl describe`. It makes use of the object's field names to come up with a best guess for describer labels and get headers, and uses the field value types to decide how best to print and indent them.
A nice intermediate solution while we don't have [get and describe extensions](https://github.com/kubernetes/community/pull/308).
Examples:
```
$ kubectl get serviceclasses
NAME                    KIND                                           BINDABLE   BROKER NAME   OSB GUID
user-provided-service   ServiceClass.v1alpha1.servicecatalog.k8s.io    false      ups-broker    4f6e6cf6-ffdd-425f-a2c7-3c9258ad2468
```
```
$ kubectl describe serviceclasses/user-provided-service
Name:            user-provided-service
Namespace:
Labels:          <none>
Annotations:     FOO=BAR
                 openshift.io/deployment.phase=test
OSB Metadata:    <nil>
Kind:            ServiceClass
Metadata:
  Self Link:            /apis/servicecatalog.k8s.io/v1alpha1/serviceclassesuser-provided-service
  UID:                  1509bd96-1b05-11e7-98bd-0242ac110006
  Resource Version:     256
  Creation Timestamp:   2017-04-06T20:10:29Z
Broker Name:     ups-broker
Bindable:        false
Plan Updatable:  false
OSB GUID:        4f6e6cf6-ffdd-425f-a2c7-3c9258ad2468
API Version:     servicecatalog.k8s.io/v1alpha1
Plans:
  Name:         default
  OSB GUID:     86064792-7ea2-467b-af93-ac9694d96d52
  OSB Free:     true
  OSB Metadata: <nil>
Events:          <none>
```
**Release note**:
```release-note
Improved output on 'kubectl get' and 'kubectl describe' for generic objects.
```
PTAL @pmorie @pwittrock @kubernetes/sig-cli-pr-reviews
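A rough sketch of the traversal idea, far simpler than the actual kubectl code: walk an unstructured object (`map[string]interface{}`), derive a best-guess label from each field name, and indent nested maps.
```go
package main

import (
	"fmt"
	"sort"
	"strings"
)

// describe prints an unstructured object's fields with indentation, using the
// field name itself as a best-guess human-readable label.
func describe(obj map[string]interface{}, indent int) {
	keys := make([]string, 0, len(obj))
	for k := range obj {
		keys = append(keys, k)
	}
	sort.Strings(keys)
	for _, k := range keys {
		label := k
		if label != "" {
			label = strings.ToUpper(label[:1]) + label[1:]
		}
		pad := strings.Repeat("  ", indent)
		switch v := obj[k].(type) {
		case map[string]interface{}:
			fmt.Printf("%s%s:\n", pad, label)
			describe(v, indent+1)
		default:
			fmt.Printf("%s%s:\t%v\n", pad, label, v)
		}
	}
}

func main() {
	describe(map[string]interface{}{
		"metadata":   map[string]interface{}{"name": "user-provided-service"},
		"brokerName": "ups-broker",
		"bindable":   false,
	}, 0)
}
```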
This should only happen if the Jobs were created by an older version
of the CronJob controller, since from now on we add ControllerRef upon
creation.
CronJob doesn't do actual adoption because it doesn't use label
selectors to find its Jobs. However, we should apply ControllerRef
for potential server-side cascading deletion, and to advise other
controllers we own these objects.