Automatic merge from submit-queue
Add E2E tests for Azure internal load balancer support; fix an issue with public IP resource deletion.
**What this PR does / why we need it**:
- Add E2E tests for Azure internal loadbalancer support: https://github.com/kubernetes/kubernetes/pull/43510
- Fix an issue where the public IP resource did not get deleted when switching from an external load balancer to an internal load balancer with a static IP.
**Special notes for your reviewer**:
1. Add a new Azure resource tag to Public IP resources to mark them as Kubernetes-managed.
Currently we determine whether the public IP resource should be deleted by looking at the LoadBalancerIP property on the service spec. In the 'switching from external load balancer to internal load balancer with static IP' scenario, that value may already have been updated for the internal load balancer, so we add an explicit tag to mark Kubernetes-managed resources instead (see the sketch after this list).
2. Merge cleanupPublicIP logic into cleanupLoadBalancer
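A minimal sketch of the tagging idea from point 1 above, assuming a hypothetical tag key and helper names (the provider's actual tag key may differ):
```go
// Hypothetical sketch: mark a Public IP resource as Kubernetes-managed so that
// cleanup can rely on an explicit tag instead of the LoadBalancerIP field.
package main

import "fmt"

const managedTagKey = "kubernetes-managed" // assumed tag key, for illustration only

// tagAsManaged records ownership in the resource's tag map.
func tagAsManaged(tags map[string]*string) map[string]*string {
	if tags == nil {
		tags = map[string]*string{}
	}
	v := "true"
	tags[managedTagKey] = &v
	return tags
}

// isManaged reports whether the resource carries the ownership tag,
// which is what cleanup would check before deleting the Public IP.
func isManaged(tags map[string]*string) bool {
	v, ok := tags[managedTagKey]
	return ok && v != nil && *v == "true"
}

func main() {
	tags := tagAsManaged(nil)
	fmt.Println(isManaged(tags)) // true
}
```
With an explicit ownership tag, cleanup no longer has to infer ownership from the LoadBalancerIP field, which can change during the external-to-internal switch.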
**Release note**:
NONE
CC @brendandburns @colemickens
Automatic merge from submit-queue
Azure for cloud-controller-manager
**What this PR does / why we need it**:
This implements the NodeAddressesByProviderID and InstanceTypeByProviderID methods, used by the cloud-controller-manager, in the Azure provider.
**Release note**:
```release-note
NONE
```
Addresses #47257
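A rough sketch of the delegation pattern these *ByProviderID methods imply: extract the VM name from the Azure providerID and reuse the existing nodename-based lookups. The providerID format and the regex are assumptions for illustration, not the provider's exact code:
```go
// Sketch: derive the node name from an Azure providerID so that the
// *ByProviderID methods can delegate to the existing NodeAddresses/InstanceType code.
package main

import (
	"fmt"
	"regexp"
)

// Assumed Azure providerID shape:
//   azure:///subscriptions/<sub>/resourceGroups/<rg>/providers/Microsoft.Compute/virtualMachines/<vm-name>
var providerIDRE = regexp.MustCompile(`.*/virtualMachines/(.+)$`)

// nodeNameFromProviderID extracts the VM name from the providerID.
func nodeNameFromProviderID(providerID string) (string, error) {
	m := providerIDRE.FindStringSubmatch(providerID)
	if len(m) != 2 {
		return "", fmt.Errorf("unexpected providerID format %q", providerID)
	}
	return m[1], nil
}

func main() {
	name, err := nodeNameFromProviderID(
		"azure:///subscriptions/sub/resourceGroups/rg/providers/Microsoft.Compute/virtualMachines/node-0")
	fmt.Println(name, err) // node-0 <nil>
}
```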
- leveraging the Config struct (--cloud-config) to store backoff and rate-limit on/off switches and performance configuration
- added additional error logging
- enabled backoff for VM GET requests
- added info and error logs for the appropriate backoff conditions/states
- rationalized log idioms across all resource requests that are backoff-enabled
- processRetryResponse, as a wait.ConditionFunc, needs to suppress errors if it wants the caller to continue backing off (see the sketch below)
An initial attempt at engaging exponential backoff for API error responses.
Uses k8s.io/client-go/util/flowcontrol; implementation inspired by GCE
cloudprovider backoff.
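To illustrate the wait.ConditionFunc point from the list above (the flowcontrol-based rate limiting itself is not shown), here is a minimal sketch; doRequest is a hypothetical stand-in for an ARM call:
```go
package main

import (
	"errors"
	"fmt"
	"time"

	"k8s.io/apimachinery/pkg/util/wait"
)

// doRequest stands in for an ARM call that keeps returning a retriable error.
func doRequest() error { return errors.New("429 Too Many Requests") }

func main() {
	backoff := wait.Backoff{Duration: time.Second, Factor: 2.0, Jitter: 0.1, Steps: 4}
	err := wait.ExponentialBackoff(backoff, func() (bool, error) {
		if reqErr := doRequest(); reqErr != nil {
			fmt.Printf("backoff: retriable error, will retry: %v\n", reqErr)
			return false, nil // suppress the error so the caller keeps backing off
		}
		return true, nil // success: stop retrying
	})
	fmt.Println("final:", err) // wait.ErrWaitTimeout after all steps fail
}
```
Returning a non-nil error from the condition would abort the loop immediately, which is why retriable errors have to be suppressed.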
Automatic merge from submit-queue (batch tested with PRs 45382, 45384, 44781, 45333, 45543)
azure: improve user agent string
**What this PR does / why we need it**: the UA string doesn't actually contain "kubernetes" in it
**Which issue this PR fixes** *(optional, in `fixes #<issue number>(, fixes #<issue_number>, ...)` format, will close that issue when PR gets merged)*: fixes #
**Special notes for your reviewer**: none
**Release note**:
```release-note
NONE
```
cc: @brendandburns
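A minimal sketch of the intent, assuming the go-autorest Client's UserAgent field; the helper name and suffix format are illustrative:
```go
package main

import (
	"fmt"

	"github.com/Azure/go-autorest/autorest"
)

// configureUserAgent appends a kubernetes identifier to whatever user agent
// the Azure SDK already set, so ARM request logs attribute traffic to the cloudprovider.
func configureUserAgent(c *autorest.Client, kubeVersion string) {
	c.UserAgent = fmt.Sprintf("%s; kubernetes-cloudprovider/%s", c.UserAgent, kubeVersion)
}

func main() {
	c := autorest.Client{UserAgent: "Azure-SDK-For-Go/v10"}
	configureUserAgent(&c, "v1.7.0")
	fmt.Println(c.UserAgent)
}
```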
Fixes support for multiple instances of loadBalancerSourceRanges.
Previously, the names of the rules for each address range conflicted,
causing only one to be applied. Now each gets a unique name.
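An illustrative sketch of the fix: fold the source CIDR into the security-rule name so that multiple ranges no longer collide. The exact naming scheme here is an assumption:
```go
package main

import (
	"fmt"
	"strings"
)

// securityRuleName makes the rule name unique per source range by folding the
// CIDR into it (Azure resource names cannot contain '/', '.', or ':').
func securityRuleName(base, sourceCIDR string) string {
	safe := strings.NewReplacer("/", "_", ".", "_", ":", "_").Replace(sourceCIDR)
	return fmt.Sprintf("%s-%s", base, safe)
}

func main() {
	for _, cidr := range []string{"10.0.0.0/8", "192.168.0.0/16"} {
		fmt.Println(securityRuleName("a1b2c3-TCP-80", cidr))
	}
	// Two distinct rule names instead of one shared name per port.
}
```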
Automatic merge from submit-queue (batch tested with PRs 44645, 44639, 43510)
Add support for Azure internal load balancer
**Which issue this PR fixes**
Fixes https://github.com/kubernetes/kubernetes/issues/38901
**What this PR does / why we need it**:
This PR is to add support for Azure internal load balancer
Currently, when exposing a service with LoadBalancer type, the Azure provider assumes that it requires a public load balancer.
Thus it will request a public IP address resource, and expose the service via that public IP.
In this case we're not able to assign private IP addresses (within the cluster virtual network) to the service.
**Special notes for your reviewer**:
1. Clarification:
a. 'LoadBalancer' refers to an option for 'type' field under ServiceSpec. See https://kubernetes.io/docs/resources-reference/v1.5/#servicespec-v1
b. 'Azure LoadBalancer' refers to a type of Azure resource. See https://docs.microsoft.com/en-us/azure/load-balancer/load-balancer-overview
2. For a single Azure LoadBalancer, all frontend IP configurations should reference either a subnet or a publicIPAddress, which means a given load balancer can be either Internet-facing or internal, but not both.
Currently the provider creates one Azure LoadBalancer, with the generated name '${loadBalancerName}', for all services of type 'LoadBalancer'.
This PR introduces the name '${loadBalancerName}-internal' for a separate Azure LoadBalancer resource, used by all services that require internal load balancers.
3. This PR introduces a new annotation for the internal load balancer type behaviour (see the sketch after this list):
a. When the annotation value is set to 'false' or not set, it falls back to the original behaviour, assuming that the user is requesting a public load balancer;
b. When the annotation value is set to 'true', the following rules apply depending on the 'loadBalancerIP' field on ServiceSpec:
- If 'loadBalancerIP' is not set, it will create a load balancer rule with a dynamically assigned frontend IP under the cluster subnet;
- If 'loadBalancerIP' is set, it will create a load balancer rule with the frontend IP set to the given value. If the given value is not valid, that is, it does not fall within the cluster subnet range, the creation will fail.
4. Users may change the load balancer type by applying or removing the annotation on the service at runtime.
In this case, the load balancer rules need to be 'switched' between the internal and the external load balancer.
For example, if we have a service with an internal load balancer and the user then removes the annotation, making it a public one, we'll need to clean up the rules in the internal Azure LoadBalancer before creating rules in the public one.
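A minimal sketch of the annotation check from point 3, assuming the annotation key below; the helper is illustrative rather than the provider's exact code:
```go
package main

import (
	"fmt"
	"strings"
)

// Assumed annotation key for requesting an internal Azure load balancer.
const serviceAnnotationLoadBalancerInternal = "service.beta.kubernetes.io/azure-load-balancer-internal"

// requiresInternalLoadBalancer returns true only when the annotation is explicitly
// set to "true"; missing or "false" keeps the original public-LB behaviour.
func requiresInternalLoadBalancer(annotations map[string]string) bool {
	return strings.EqualFold(annotations[serviceAnnotationLoadBalancerInternal], "true")
}

func main() {
	svc := map[string]string{serviceAnnotationLoadBalancerInternal: "true"}
	fmt.Println(requiresInternalLoadBalancer(svc))                 // true -> use '${loadBalancerName}-internal'
	fmt.Println(requiresInternalLoadBalancer(map[string]string{})) // false -> public load balancer
}
```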
**Release note**:
The cloudprovider is being refactored out of kubernetes core. This is being
done by moving all the cloud-specific calls from kube-apiserver, kubelet and
kube-controller-manager into a separate binary (maintained by vendors) called
cloud-controller-manager. The Kubelet relies on the cloudprovider to detect information
about the node that it is running on. Some of the cloudproviders worked by
querying local sources to obtain this information. In the new model,
local information cannot be relied on, since cloud-controller-manager will not
run on every node. Only one active instance of it will be run in the cluster.
Today, all calls to the cloudprovider are based on the nodename. Nodenames are
unique within the kubernetes cluster, but generally not unique within the cloud.
This model of addressing nodes by nodename will not work in the future because
local services cannot be queried to uniquely identify a node in the cloud. Therefore,
I propose that we perform all cloudprovider calls based on ProviderID. This ID is
a unique identifier for the node in the cloud provider's own records (such as
the instanceID in AWS).
Automatic merge from submit-queue (batch tested with PRs 43642, 43170, 41813, 42170, 41581)
Enable storage class support in Azure File volume
**What this PR does / why we need it**:
Support StorageClass in Azure file volume
**Which issue this PR fixes** *(optional, in `fixes #<issue number>(, fixes #<issue_number>, ...)` format, will close that issue when PR gets merged)*: fixes #
**Special notes for your reviewer**:
**Release note**:
```release-note
Support StorageClass in Azure file volume
```
Automatic merge from submit-queue (batch tested with PRs 41139, 41186, 38882, 37698, 42034)
Add support for bring-your-own IP address for Services on Azure
@colemickens @codablock
Automatic merge from submit-queue
Set custom PollingDelay of 5 seconds for Azure VirtualMachinesClient
The default polling delay of 1 minute results in very long delays when
an Azure Disk is attached to a node. It gets worse as go-autorest
doubles the default delay to 2 minutes.
Please see: https://github.com/kubernetes/kubernetes/issues/35180#issuecomment-273085063
Only the PollingDelay for VirtualMachinesClient is modified here to
avoid too much pressure on Azure quotas.
Release notes:
```release-note
Reduce time needed to attach Azure disks
```
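A minimal sketch of the change, assuming the arm/compute client API as vendored at the time; only the VirtualMachinesClient's polling delay is shortened:
```go
package main

import (
	"time"

	"github.com/Azure/azure-sdk-for-go/arm/compute"
)

func newVMClient(subscriptionID string) compute.VirtualMachinesClient {
	vmClient := compute.NewVirtualMachinesClient(subscriptionID)
	// The default delay is about 1 minute, and go-autorest may double it;
	// 5 seconds lets disk attach operations complete much sooner. Only this
	// client is changed, to limit extra polling pressure on Azure quotas.
	vmClient.PollingDelay = 5 * time.Second
	return vmClient
}

func main() {
	_ = newVMClient("<subscription-id>")
}
```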