Commit Graph

1273 Commits

Kubernetes Submit Queue
3adb9b428b Merge pull request #46660 from jackfrancis/azure-cloudprovider-backoff
Automatic merge from submit-queue (batch tested with PRs 43005, 46660, 46385, 46991, 47103)

Azure cloudprovider retry using flowcontrol

An initial attempt at engaging exponential backoff for API error responses.

Addresses #47048

Uses k8s.io/client-go/util/flowcontrol; implementation inspired by GCE
cloudprovider backoff.



**What this PR does / why we need it**:

The existing Azure cloudprovider implementation has no guard rails in place to adapt to unexpected underlying operational conditions (e.g., clogs in the resource plumbing between the k8s runtime and the cloud API). The purpose of these changes is to support exponential backoff wrapping around API calls, and to support targeted rate limiting. Both of these options are configurable via `--cloud-config`.

The implementation is inspired by the GCE cloudprovider's use of `k8s.io/client-go/util/flowcontrol` and `k8s.io/apimachinery/pkg/util/wait`; this PR likewise uses `flowcontrol` for rate limiting and `wait` to thinly wrap backoff retry attempts to the API.
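
A minimal sketch of how these two packages can be combined, assuming a hypothetical `callAzureAPI` stand-in and illustrative limiter/backoff parameters (this is not the PR's actual code):

```go
package main

import (
	"fmt"
	"time"

	"k8s.io/apimachinery/pkg/util/wait"
	"k8s.io/client-go/util/flowcontrol"
)

// callAzureAPI is a hypothetical stand-in for an Azure SDK request.
func callAzureAPI() error { return nil }

func main() {
	// Token-bucket rate limiter: 1 QPS with a burst of 5 (illustrative values).
	limiter := flowcontrol.NewTokenBucketRateLimiter(1.0, 5)

	// Exponential backoff: 1s, 2s, 4s, ... for up to 6 attempts (illustrative values).
	backoff := wait.Backoff{Duration: time.Second, Factor: 2.0, Steps: 6}

	err := wait.ExponentialBackoff(backoff, func() (bool, error) {
		limiter.Accept() // block until the rate limiter allows another request
		if callErr := callAzureAPI(); callErr != nil {
			return false, nil // retry-able failure: keep backing off
		}
		return true, nil // success: stop retrying
	})
	if err != nil {
		fmt.Println("gave up after retries:", err)
	}
}
```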

**Special notes for your reviewer**:


Pay special attention to the declaration of retry-able conditions for an unsuccessful HTTP request (a minimal sketch of this classification follows the lists below):
- all `4xx` and `5xx` HTTP responses
- non-nil error responses

And the declaration of retry success conditions:
- `2xx` HTTP responses
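
A minimal sketch of that classification as a predicate over the HTTP status code, using `net/http` constants (hypothetical helper name, not necessarily the PR's exact code):

```go
package azureretry // illustrative package name

import "net/http"

// shouldRetryHTTPRequest (hypothetical name) applies the rules above: retry on
// any non-nil error or on 4xx/5xx responses; treat 2xx responses as success.
func shouldRetryHTTPRequest(statusCode int, err error) (retry bool, success bool) {
	if err != nil {
		return true, false // non-nil error: retry
	}
	if statusCode >= http.StatusBadRequest { // 4xx and 5xx
		return true, false
	}
	if statusCode >= http.StatusOK && statusCode < http.StatusMultipleChoices { // 2xx
		return false, true
	}
	return false, false // 1xx/3xx: neither retried nor counted as success in this sketch
}
```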

Tests updated to include additions to `Config`.

These tests may be incomplete, or otherwise non-representative.

**Release note**:

```release-note
Added exponential backoff to Azure cloudprovider
```
2017-06-07 13:30:58 -07:00
Jack Francis
acb65170f3 preferring float32 for rate limit QPS param 2017-06-06 22:21:14 -07:00
Jack Francis
2accbbd618 go vet errata 2017-06-06 22:12:49 -07:00
Jack Francis
6d73a09dcc rate limiting everywhere
rather than waiting to rate limit until we get an error response from the API, do so on the initial request for all API requests
2017-06-06 22:09:57 -07:00
Jack Francis
148e923f65 az.getVirtualMachine already rate-limited
we don’t need to rate limit the calls _to_ it
2017-06-06 14:55:07 -07:00
Jack Francis
ac931aa1e0 rate limiting on all azure sdk GET requests 2017-06-06 11:19:29 -07:00
Jack Francis
af5ce2fcc5 test coverage
We want to ensure that backoff and rate limit configuration is opt-in
2017-06-06 09:50:28 -07:00
Christoph Blecker
1bdc7a29ae Update docs/ URLs to point to proper locations 2017-06-05 22:13:54 -07:00
Jack Francis
3f3aa279b9 configurable backoff
- leveraging the Config struct (`--cloud-config`) to store backoff and rate limit on/off and tuning configuration (see the sketch below)
- added additional error logging
- enabled backoff for VM GET requests
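
A minimal sketch of what such opt-in knobs could look like on the provider's `Config` struct; the field and JSON tag names below are assumptions for illustration, not necessarily the merged names:

```go
package azure // illustrative placement

// Illustrative backoff/rate-limit fields for the Azure --cloud-config file.
// Field and tag names are hypothetical.
type backoffRateLimitConfig struct {
	CloudProviderBackoff         bool    `json:"cloudProviderBackoff"`
	CloudProviderBackoffRetries  int     `json:"cloudProviderBackoffRetries"`
	CloudProviderBackoffExponent float64 `json:"cloudProviderBackoffExponent"`
	CloudProviderBackoffDuration int     `json:"cloudProviderBackoffDuration"` // seconds
	CloudProviderBackoffJitter   float64 `json:"cloudProviderBackoffJitter"`
	CloudProviderRateLimit       bool    `json:"cloudProviderRateLimit"`
	CloudProviderRateLimitQPS    float32 `json:"cloudProviderRateLimitQPS"`
	CloudProviderRateLimitBucket int     `json:"cloudProviderRateLimitBucket"`
}
```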
2017-06-05 16:06:50 -07:00
Nick Sardo
025f178b7e Use new kubelet apis pkg for labels 2017-06-04 10:26:33 -07:00
Nick Sardo
7248c61ea5 Update test utilities & build file 2017-06-04 10:25:05 -07:00
Nick Sardo
05aaef3edc Hook external & internal lb together 2017-06-04 10:25:05 -07:00
Nick Sardo
660452dee1 Add internal LB logic 2017-06-04 10:25:05 -07:00
Nick Sardo
1283d65538 Modify external LB logic 2017-06-04 10:25:05 -07:00
Nick Sardo
2cdaf1f32b Refactor compute API calls 2017-06-04 10:25:05 -07:00
Nick Sardo
b631061f05 Rename gce_staticip.go to gce_addresses.go 2017-06-04 10:25:05 -07:00
Nick Sardo
66773fea4b Rename gce_loadbalancer.go to gce_loadbalancer_external.go 2017-06-04 10:25:05 -07:00
Kubernetes Submit Queue
4220b7303e Merge pull request #45500 from nbutton23/nbutton-aws-elb-security-group
Automatic merge from submit-queue (batch tested with PRs 36721, 46483, 45500, 46724, 46036)

AWS: Allow configuration of a single security group for ELBs

**What this PR does / why we need it**:
AWS has a hard limit on the number of Security Groups (500). Right now, every time an ELB is created, Kubernetes creates a new Security Group. This change allows specifying a single Security Group to use for all ELBs.

**Which issue this PR fixes** *(optional, in `fixes #<issue number>(, fixes #<issue_number>, ...)` format, will close that issue when PR gets merged)*: fixes #

**Special notes for your reviewer**:
For some reason the diff tool makes this look like far more changes than it really is.
**Release note**:

```release-note
```
2017-06-03 08:08:40 -07:00
Kubernetes Submit Queue
348bf1e032 Merge pull request #46627 from deads2k/api-12-labels
Automatic merge from submit-queue (batch tested with PRs 46239, 46627, 46346, 46388, 46524)

move labels to components which own the APIs

During the apimachinery split in 1.6, we accidentally moved several label APIs into apimachinery.  They don't belong there, since the individual APIs are not general machinery concerns, but instead are the concern of particular components: most commonly the kubelet.  This pull moves the labels into their owning components and out of API machinery.

@kubernetes/sig-api-machinery-misc @kubernetes/api-reviewers @kubernetes/api-approvers 
@derekwaynecarr  since most of these are related to the kubelet
2017-06-02 23:37:38 -07:00
Kubernetes Submit Queue
0d4fda7746 Merge pull request #46462 from vmware/vsphere-storage-metrics
Automatic merge from submit-queue (batch tested with PRs 41563, 45251, 46265, 46462, 46721)

Add metric collections for vSphere cloud provider operations

**What this PR does / why we need it**:
This PR adds metric collections for vSphere Cloud Provider Operations.

**Which issue this PR fixes** 
fixes #

**Special notes for your reviewer**:
Verified that a Prometheus pod is able to scrape vSphere metrics from the Kubernetes controller's URL.

`providers/vsphere/vsphere.go` is intentionally left unformatted by gofmt to keep the diff reviewable.

After review is complete, I will apply the formatting.
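
As a rough sketch of how such metrics can be registered and observed with `prometheus/client_golang` (the metric and label names match the scraped output below; the helper wiring is illustrative):

```go
package vsphere // illustrative placement

import (
	"time"

	"github.com/prometheus/client_golang/prometheus"
)

var (
	apiRequestDuration = prometheus.NewHistogramVec(prometheus.HistogramOpts{
		Name: "cloudprovider_vsphere_api_request_duration_seconds",
		Help: "Latency of vsphere api call",
	}, []string{"request"})

	apiRequestErrors = prometheus.NewCounterVec(prometheus.CounterOpts{
		Name: "cloudprovider_vsphere_api_request_errors",
		Help: "vsphere Api errors",
	}, []string{"request"})
)

func init() {
	prometheus.MustRegister(apiRequestDuration, apiRequestErrors)
}

// recordAPICall times one API request and bumps the error counter on failure.
// Illustrative helper, not the PR's exact code.
func recordAPICall(request string, fn func() error) error {
	start := time.Now()
	err := fn()
	apiRequestDuration.WithLabelValues(request).Observe(time.Since(start).Seconds())
	if err != nil {
		apiRequestErrors.WithLabelValues(request).Inc()
	}
	return err
}
```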

**Release note**:

```release-note
None
```

@BaluDontu @tusharnt

@gnufied Verified by executing various operations on the Kubernetes cluster deployed using this change.

```
$ curl -s 10.160.18.128:10252/metrics | grep "cloudprovider_vsphere"
# HELP cloudprovider_vsphere_api_request_duration_seconds Latency of vsphere api call
# TYPE cloudprovider_vsphere_api_request_duration_seconds histogram
cloudprovider_vsphere_api_request_duration_seconds_bucket{request="AttachVolume",le="0.005"} 0
cloudprovider_vsphere_api_request_duration_seconds_bucket{request="AttachVolume",le="0.01"} 0
cloudprovider_vsphere_api_request_duration_seconds_bucket{request="AttachVolume",le="0.025"} 0
cloudprovider_vsphere_api_request_duration_seconds_bucket{request="AttachVolume",le="0.05"} 0
cloudprovider_vsphere_api_request_duration_seconds_bucket{request="AttachVolume",le="0.1"} 0
cloudprovider_vsphere_api_request_duration_seconds_bucket{request="AttachVolume",le="0.25"} 0
cloudprovider_vsphere_api_request_duration_seconds_bucket{request="AttachVolume",le="0.5"} 0
cloudprovider_vsphere_api_request_duration_seconds_bucket{request="AttachVolume",le="1"} 1
cloudprovider_vsphere_api_request_duration_seconds_bucket{request="AttachVolume",le="2.5"} 3
cloudprovider_vsphere_api_request_duration_seconds_bucket{request="AttachVolume",le="5"} 3
cloudprovider_vsphere_api_request_duration_seconds_bucket{request="AttachVolume",le="10"} 3
cloudprovider_vsphere_api_request_duration_seconds_bucket{request="AttachVolume",le="+Inf"} 3
cloudprovider_vsphere_api_request_duration_seconds_sum{request="AttachVolume"} 3.9742241939999996
cloudprovider_vsphere_api_request_duration_seconds_count{request="AttachVolume"} 3
cloudprovider_vsphere_api_request_duration_seconds_bucket{request="CreateVolume",le="0.005"} 0
cloudprovider_vsphere_api_request_duration_seconds_bucket{request="CreateVolume",le="0.01"} 0
cloudprovider_vsphere_api_request_duration_seconds_bucket{request="CreateVolume",le="0.025"} 0
cloudprovider_vsphere_api_request_duration_seconds_bucket{request="CreateVolume",le="0.05"} 0
cloudprovider_vsphere_api_request_duration_seconds_bucket{request="CreateVolume",le="0.1"} 0
cloudprovider_vsphere_api_request_duration_seconds_bucket{request="CreateVolume",le="0.25"} 0
cloudprovider_vsphere_api_request_duration_seconds_bucket{request="CreateVolume",le="0.5"} 0
cloudprovider_vsphere_api_request_duration_seconds_bucket{request="CreateVolume",le="1"} 1
cloudprovider_vsphere_api_request_duration_seconds_bucket{request="CreateVolume",le="2.5"} 1
cloudprovider_vsphere_api_request_duration_seconds_bucket{request="CreateVolume",le="5"} 1
cloudprovider_vsphere_api_request_duration_seconds_bucket{request="CreateVolume",le="10"} 1
cloudprovider_vsphere_api_request_duration_seconds_bucket{request="CreateVolume",le="+Inf"} 1
cloudprovider_vsphere_api_request_duration_seconds_sum{request="CreateVolume"} 0.920856776
cloudprovider_vsphere_api_request_duration_seconds_count{request="CreateVolume"} 1
cloudprovider_vsphere_api_request_duration_seconds_bucket{request="DeleteVolume",le="0.005"} 0
cloudprovider_vsphere_api_request_duration_seconds_bucket{request="DeleteVolume",le="0.01"} 0
cloudprovider_vsphere_api_request_duration_seconds_bucket{request="DeleteVolume",le="0.025"} 0
cloudprovider_vsphere_api_request_duration_seconds_bucket{request="DeleteVolume",le="0.05"} 0
cloudprovider_vsphere_api_request_duration_seconds_bucket{request="DeleteVolume",le="0.1"} 0
cloudprovider_vsphere_api_request_duration_seconds_bucket{request="DeleteVolume",le="0.25"} 0
cloudprovider_vsphere_api_request_duration_seconds_bucket{request="DeleteVolume",le="0.5"} 2
cloudprovider_vsphere_api_request_duration_seconds_bucket{request="DeleteVolume",le="1"} 3
cloudprovider_vsphere_api_request_duration_seconds_bucket{request="DeleteVolume",le="2.5"} 3
cloudprovider_vsphere_api_request_duration_seconds_bucket{request="DeleteVolume",le="5"} 3
cloudprovider_vsphere_api_request_duration_seconds_bucket{request="DeleteVolume",le="10"} 3
cloudprovider_vsphere_api_request_duration_seconds_bucket{request="DeleteVolume",le="+Inf"} 3
cloudprovider_vsphere_api_request_duration_seconds_sum{request="DeleteVolume"} 1.3301585450000002
cloudprovider_vsphere_api_request_duration_seconds_count{request="DeleteVolume"} 3
cloudprovider_vsphere_api_request_duration_seconds_bucket{request="DetachVolume",le="0.005"} 0
cloudprovider_vsphere_api_request_duration_seconds_bucket{request="DetachVolume",le="0.01"} 0
cloudprovider_vsphere_api_request_duration_seconds_bucket{request="DetachVolume",le="0.025"} 0
cloudprovider_vsphere_api_request_duration_seconds_bucket{request="DetachVolume",le="0.05"} 0
cloudprovider_vsphere_api_request_duration_seconds_bucket{request="DetachVolume",le="0.1"} 0
cloudprovider_vsphere_api_request_duration_seconds_bucket{request="DetachVolume",le="0.25"} 0
cloudprovider_vsphere_api_request_duration_seconds_bucket{request="DetachVolume",le="0.5"} 1
cloudprovider_vsphere_api_request_duration_seconds_bucket{request="DetachVolume",le="1"} 4
cloudprovider_vsphere_api_request_duration_seconds_bucket{request="DetachVolume",le="2.5"} 6
cloudprovider_vsphere_api_request_duration_seconds_bucket{request="DetachVolume",le="5"} 6
cloudprovider_vsphere_api_request_duration_seconds_bucket{request="DetachVolume",le="10"} 6
cloudprovider_vsphere_api_request_duration_seconds_bucket{request="DetachVolume",le="+Inf"} 6
cloudprovider_vsphere_api_request_duration_seconds_sum{request="DetachVolume"} 5.350829375
cloudprovider_vsphere_api_request_duration_seconds_count{request="DetachVolume"} 6
# HELP cloudprovider_vsphere_api_request_errors vsphere Api errors
# TYPE cloudprovider_vsphere_api_request_errors counter
cloudprovider_vsphere_api_request_errors{request="DeleteVolume"} 4
# HELP cloudprovider_vsphere_operation_duration_seconds Latency of vsphere operation call
# TYPE cloudprovider_vsphere_operation_duration_seconds histogram
cloudprovider_vsphere_operation_duration_seconds_bucket{operation="AttachVolumeOperation",le="0.005"} 0
cloudprovider_vsphere_operation_duration_seconds_bucket{operation="AttachVolumeOperation",le="0.01"} 0
cloudprovider_vsphere_operation_duration_seconds_bucket{operation="AttachVolumeOperation",le="0.025"} 0
cloudprovider_vsphere_operation_duration_seconds_bucket{operation="AttachVolumeOperation",le="0.05"} 0
cloudprovider_vsphere_operation_duration_seconds_bucket{operation="AttachVolumeOperation",le="0.1"} 0
cloudprovider_vsphere_operation_duration_seconds_bucket{operation="AttachVolumeOperation",le="0.25"} 0
cloudprovider_vsphere_operation_duration_seconds_bucket{operation="AttachVolumeOperation",le="0.5"} 0
cloudprovider_vsphere_operation_duration_seconds_bucket{operation="AttachVolumeOperation",le="1"} 1
cloudprovider_vsphere_operation_duration_seconds_bucket{operation="AttachVolumeOperation",le="2.5"} 3
cloudprovider_vsphere_operation_duration_seconds_bucket{operation="AttachVolumeOperation",le="5"} 3
cloudprovider_vsphere_operation_duration_seconds_bucket{operation="AttachVolumeOperation",le="10"} 3
cloudprovider_vsphere_operation_duration_seconds_bucket{operation="AttachVolumeOperation",le="+Inf"} 3
cloudprovider_vsphere_operation_duration_seconds_sum{operation="AttachVolumeOperation"} 4.732579923
cloudprovider_vsphere_operation_duration_seconds_count{operation="AttachVolumeOperation"} 3
cloudprovider_vsphere_operation_duration_seconds_bucket{operation="CreateVolumeOperation",le="0.005"} 0
cloudprovider_vsphere_operation_duration_seconds_bucket{operation="CreateVolumeOperation",le="0.01"} 0
cloudprovider_vsphere_operation_duration_seconds_bucket{operation="CreateVolumeOperation",le="0.025"} 0
cloudprovider_vsphere_operation_duration_seconds_bucket{operation="CreateVolumeOperation",le="0.05"} 0
cloudprovider_vsphere_operation_duration_seconds_bucket{operation="CreateVolumeOperation",le="0.1"} 0
cloudprovider_vsphere_operation_duration_seconds_bucket{operation="CreateVolumeOperation",le="0.25"} 0
cloudprovider_vsphere_operation_duration_seconds_bucket{operation="CreateVolumeOperation",le="0.5"} 0
cloudprovider_vsphere_operation_duration_seconds_bucket{operation="CreateVolumeOperation",le="1"} 0
cloudprovider_vsphere_operation_duration_seconds_bucket{operation="CreateVolumeOperation",le="2.5"} 1
cloudprovider_vsphere_operation_duration_seconds_bucket{operation="CreateVolumeOperation",le="5"} 1
cloudprovider_vsphere_operation_duration_seconds_bucket{operation="CreateVolumeOperation",le="10"} 1
cloudprovider_vsphere_operation_duration_seconds_bucket{operation="CreateVolumeOperation",le="+Inf"} 1
cloudprovider_vsphere_operation_duration_seconds_sum{operation="CreateVolumeOperation"} 1.2753096990000001
cloudprovider_vsphere_operation_duration_seconds_count{operation="CreateVolumeOperation"} 1
cloudprovider_vsphere_operation_duration_seconds_bucket{operation="CreateVolumeWithPolicyOperation",le="0.005"} 0
cloudprovider_vsphere_operation_duration_seconds_bucket{operation="CreateVolumeWithPolicyOperation",le="0.01"} 0
cloudprovider_vsphere_operation_duration_seconds_bucket{operation="CreateVolumeWithPolicyOperation",le="0.025"} 0
cloudprovider_vsphere_operation_duration_seconds_bucket{operation="CreateVolumeWithPolicyOperation",le="0.05"} 0
cloudprovider_vsphere_operation_duration_seconds_bucket{operation="CreateVolumeWithPolicyOperation",le="0.1"} 0
cloudprovider_vsphere_operation_duration_seconds_bucket{operation="CreateVolumeWithPolicyOperation",le="0.25"} 0
cloudprovider_vsphere_operation_duration_seconds_bucket{operation="CreateVolumeWithPolicyOperation",le="0.5"} 0
cloudprovider_vsphere_operation_duration_seconds_bucket{operation="CreateVolumeWithPolicyOperation",le="1"} 0
cloudprovider_vsphere_operation_duration_seconds_bucket{operation="CreateVolumeWithPolicyOperation",le="2.5"} 0
cloudprovider_vsphere_operation_duration_seconds_bucket{operation="CreateVolumeWithPolicyOperation",le="5"} 0
cloudprovider_vsphere_operation_duration_seconds_bucket{operation="CreateVolumeWithPolicyOperation",le="10"} 0
cloudprovider_vsphere_operation_duration_seconds_bucket{operation="CreateVolumeWithPolicyOperation",le="+Inf"} 1
cloudprovider_vsphere_operation_duration_seconds_sum{operation="CreateVolumeWithPolicyOperation"} 15.066558008
cloudprovider_vsphere_operation_duration_seconds_count{operation="CreateVolumeWithPolicyOperation"} 1
cloudprovider_vsphere_operation_duration_seconds_bucket{operation="CreateVolumeWithRawVSANPolicyOperation",le="0.005"} 0
cloudprovider_vsphere_operation_duration_seconds_bucket{operation="CreateVolumeWithRawVSANPolicyOperation",le="0.01"} 0
cloudprovider_vsphere_operation_duration_seconds_bucket{operation="CreateVolumeWithRawVSANPolicyOperation",le="0.025"} 0
cloudprovider_vsphere_operation_duration_seconds_bucket{operation="CreateVolumeWithRawVSANPolicyOperation",le="0.05"} 0
cloudprovider_vsphere_operation_duration_seconds_bucket{operation="CreateVolumeWithRawVSANPolicyOperation",le="0.1"} 0
cloudprovider_vsphere_operation_duration_seconds_bucket{operation="CreateVolumeWithRawVSANPolicyOperation",le="0.25"} 0
cloudprovider_vsphere_operation_duration_seconds_bucket{operation="CreateVolumeWithRawVSANPolicyOperation",le="0.5"} 0
cloudprovider_vsphere_operation_duration_seconds_bucket{operation="CreateVolumeWithRawVSANPolicyOperation",le="1"} 0
cloudprovider_vsphere_operation_duration_seconds_bucket{operation="CreateVolumeWithRawVSANPolicyOperation",le="2.5"} 0
cloudprovider_vsphere_operation_duration_seconds_bucket{operation="CreateVolumeWithRawVSANPolicyOperation",le="5"} 0
cloudprovider_vsphere_operation_duration_seconds_bucket{operation="CreateVolumeWithRawVSANPolicyOperation",le="10"} 0
cloudprovider_vsphere_operation_duration_seconds_bucket{operation="CreateVolumeWithRawVSANPolicyOperation",le="+Inf"} 2
cloudprovider_vsphere_operation_duration_seconds_sum{operation="CreateVolumeWithRawVSANPolicyOperation"} 21.805354686
cloudprovider_vsphere_operation_duration_seconds_count{operation="CreateVolumeWithRawVSANPolicyOperation"} 2
cloudprovider_vsphere_operation_duration_seconds_bucket{operation="DeleteVolumeOperation",le="0.005"} 0
cloudprovider_vsphere_operation_duration_seconds_bucket{operation="DeleteVolumeOperation",le="0.01"} 0
cloudprovider_vsphere_operation_duration_seconds_bucket{operation="DeleteVolumeOperation",le="0.025"} 0
cloudprovider_vsphere_operation_duration_seconds_bucket{operation="DeleteVolumeOperation",le="0.05"} 0
cloudprovider_vsphere_operation_duration_seconds_bucket{operation="DeleteVolumeOperation",le="0.1"} 0
cloudprovider_vsphere_operation_duration_seconds_bucket{operation="DeleteVolumeOperation",le="0.25"} 0
cloudprovider_vsphere_operation_duration_seconds_bucket{operation="DeleteVolumeOperation",le="0.5"} 1
cloudprovider_vsphere_operation_duration_seconds_bucket{operation="DeleteVolumeOperation",le="1"} 3
cloudprovider_vsphere_operation_duration_seconds_bucket{operation="DeleteVolumeOperation",le="2.5"} 3
cloudprovider_vsphere_operation_duration_seconds_bucket{operation="DeleteVolumeOperation",le="5"} 3
cloudprovider_vsphere_operation_duration_seconds_bucket{operation="DeleteVolumeOperation",le="10"} 3
cloudprovider_vsphere_operation_duration_seconds_bucket{operation="DeleteVolumeOperation",le="+Inf"} 3
cloudprovider_vsphere_operation_duration_seconds_sum{operation="DeleteVolumeOperation"} 1.4869503179999999
cloudprovider_vsphere_operation_duration_seconds_count{operation="DeleteVolumeOperation"} 3
cloudprovider_vsphere_operation_duration_seconds_bucket{operation="DetachVolumeOperation",le="0.005"} 0
cloudprovider_vsphere_operation_duration_seconds_bucket{operation="DetachVolumeOperation",le="0.01"} 0
cloudprovider_vsphere_operation_duration_seconds_bucket{operation="DetachVolumeOperation",le="0.025"} 0
cloudprovider_vsphere_operation_duration_seconds_bucket{operation="DetachVolumeOperation",le="0.05"} 0
cloudprovider_vsphere_operation_duration_seconds_bucket{operation="DetachVolumeOperation",le="0.1"} 0
cloudprovider_vsphere_operation_duration_seconds_bucket{operation="DetachVolumeOperation",le="0.25"} 0
cloudprovider_vsphere_operation_duration_seconds_bucket{operation="DetachVolumeOperation",le="0.5"} 0
cloudprovider_vsphere_operation_duration_seconds_bucket{operation="DetachVolumeOperation",le="1"} 2
cloudprovider_vsphere_operation_duration_seconds_bucket{operation="DetachVolumeOperation",le="2.5"} 6
cloudprovider_vsphere_operation_duration_seconds_bucket{operation="DetachVolumeOperation",le="5"} 6
cloudprovider_vsphere_operation_duration_seconds_bucket{operation="DetachVolumeOperation",le="10"} 6
cloudprovider_vsphere_operation_duration_seconds_bucket{operation="DetachVolumeOperation",le="+Inf"} 6
cloudprovider_vsphere_operation_duration_seconds_sum{operation="DetachVolumeOperation"} 7.15601343
cloudprovider_vsphere_operation_duration_seconds_count{operation="DetachVolumeOperation"} 6
cloudprovider_vsphere_operation_duration_seconds_bucket{operation="DiskIsAttachedOperation",le="0.005"} 0
cloudprovider_vsphere_operation_duration_seconds_bucket{operation="DiskIsAttachedOperation",le="0.01"} 0
cloudprovider_vsphere_operation_duration_seconds_bucket{operation="DiskIsAttachedOperation",le="0.025"} 0
cloudprovider_vsphere_operation_duration_seconds_bucket{operation="DiskIsAttachedOperation",le="0.05"} 0
cloudprovider_vsphere_operation_duration_seconds_bucket{operation="DiskIsAttachedOperation",le="0.1"} 0
cloudprovider_vsphere_operation_duration_seconds_bucket{operation="DiskIsAttachedOperation",le="0.25"} 1
cloudprovider_vsphere_operation_duration_seconds_bucket{operation="DiskIsAttachedOperation",le="0.5"} 2
cloudprovider_vsphere_operation_duration_seconds_bucket{operation="DiskIsAttachedOperation",le="1"} 3
cloudprovider_vsphere_operation_duration_seconds_bucket{operation="DiskIsAttachedOperation",le="2.5"} 3
cloudprovider_vsphere_operation_duration_seconds_bucket{operation="DiskIsAttachedOperation",le="5"} 3
cloudprovider_vsphere_operation_duration_seconds_bucket{operation="DiskIsAttachedOperation",le="10"} 3
cloudprovider_vsphere_operation_duration_seconds_bucket{operation="DiskIsAttachedOperation",le="+Inf"} 3
cloudprovider_vsphere_operation_duration_seconds_sum{operation="DiskIsAttachedOperation"} 1.0603705730000001
cloudprovider_vsphere_operation_duration_seconds_count{operation="DiskIsAttachedOperation"} 3
cloudprovider_vsphere_operation_duration_seconds_bucket{operation="DisksAreAttachedOperation",le="0.005"} 0
cloudprovider_vsphere_operation_duration_seconds_bucket{operation="DisksAreAttachedOperation",le="0.01"} 0
cloudprovider_vsphere_operation_duration_seconds_bucket{operation="DisksAreAttachedOperation",le="0.025"} 0
cloudprovider_vsphere_operation_duration_seconds_bucket{operation="DisksAreAttachedOperation",le="0.05"} 0
cloudprovider_vsphere_operation_duration_seconds_bucket{operation="DisksAreAttachedOperation",le="0.1"} 0
cloudprovider_vsphere_operation_duration_seconds_bucket{operation="DisksAreAttachedOperation",le="0.25"} 4
cloudprovider_vsphere_operation_duration_seconds_bucket{operation="DisksAreAttachedOperation",le="0.5"} 12
cloudprovider_vsphere_operation_duration_seconds_bucket{operation="DisksAreAttachedOperation",le="1"} 12
cloudprovider_vsphere_operation_duration_seconds_bucket{operation="DisksAreAttachedOperation",le="2.5"} 12
cloudprovider_vsphere_operation_duration_seconds_bucket{operation="DisksAreAttachedOperation",le="5"} 12
cloudprovider_vsphere_operation_duration_seconds_bucket{operation="DisksAreAttachedOperation",le="10"} 12
cloudprovider_vsphere_operation_duration_seconds_bucket{operation="DisksAreAttachedOperation",le="+Inf"} 12
cloudprovider_vsphere_operation_duration_seconds_sum{operation="DisksAreAttachedOperation"} 3.282661207
cloudprovider_vsphere_operation_duration_seconds_count{operation="DisksAreAttachedOperation"} 12
# HELP cloudprovider_vsphere_operation_errors vsphere operation errors
# TYPE cloudprovider_vsphere_operation_errors counter
cloudprovider_vsphere_operation_errors{operation="DeleteVolumeOperation"} 4
```
2017-06-02 19:53:42 -07:00
Jack Francis
7e6c689e58 backoff logging, error handling, wait.ConditionFunc
- added info and error logs for appropriate backoff conditions/states
- rationalized log idioms across all resource requests that are backoff-enabled
- processRetryResponse as a wait.ConditionFunc needs to suppress errors if it wants the caller to continue backing off (see the sketch below)
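
That last point matters because `wait.ExponentialBackoff` aborts as soon as its `ConditionFunc` returns a non-nil error; a minimal sketch of the distinction (the signature and body here are illustrative):

```go
package azure // illustrative placement

// processRetryResponse (illustrative body) decides whether backoff should continue.
// Returning (false, nil) keeps the wait.ExponentialBackoff loop retrying;
// returning a non-nil error would abort the loop immediately.
func processRetryResponse(statusCode int, err error) (bool, error) {
	if err != nil || statusCode >= 400 {
		// Swallow the error so the caller continues backing off.
		return false, nil
	}
	if statusCode >= 200 && statusCode < 300 {
		return true, nil // done: stop retrying
	}
	return false, nil
}
```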
2017-06-02 15:35:20 -07:00
Jack Francis
c5dd95fc22 update-bazel.sh mods 2017-06-02 09:59:07 -07:00
Jack Francis
17f8dc53af two optimizations
- removed unnecessary return statements
- optimized HTTP response code evaluations as numeric comparisons
2017-06-01 13:58:11 -07:00
Kubernetes Submit Queue
5c048ac258 Merge pull request #45168 from redbaron/fix-aws-tagging
Automatic merge from submit-queue (batch tested with PRs 43505, 45168, 46439, 46677, 46623)

fix AWS tagging to add missing tags only

It seems the intention of the original code was to build a map of missing tags and call the AWS API to add just those, but due to a typo the full set of tags was always (re)added.
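
A minimal sketch of the intended "missing tags only" computation (names are illustrative, not the exact code in this fix):

```go
package aws // illustrative placement

// missingTags returns only the desired tags that are absent, or present with a
// different value, on the resource. Illustrative helper.
func missingTags(desired, current map[string]string) map[string]string {
	missing := map[string]string{}
	for k, v := range desired {
		if existing, ok := current[k]; !ok || existing != v {
			missing[k] = v
		}
	}
	return missing
}
```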

```release-note
NONE
```
2017-06-01 05:43:39 -07:00
Kubernetes Submit Queue
43ac38e29e Merge pull request #45049 from wongma7/volumeinuse
Automatic merge from submit-queue (batch tested with PRs 46686, 45049, 46323, 45708, 46487)

Log an EBS vol's instance when attaching fails because VolumeInUse

Messages now look something like this:
E0427 15:44:37.617134   16932 attacher.go:73] Error attaching volume "vol-00095ddceae1a96ed": Error attaching EBS volume "vol-00095ddceae1a96ed" to instance "i-245203b7": VolumeInUse: vol-00095ddceae1a96ed is already attached to an instance
        status code: 400, request id: f510c439-64fe-43ea-b3ef-f496a5cd0577. The volume is currently attached to instance "i-072d9328131bcd9cd"
It is odd that AWS doesn't include that information in the error for us (it does when you try to delete a volume that's in use).
```release-note
NONE
```
2017-06-01 03:42:05 -07:00
Jack Francis
c95af06154 errata
arg cruft in CreateOrUpdateSGWithRetry function declaration
2017-05-31 12:03:22 -07:00
Jack Francis
c6c6cc790e errata, wait.ExponentialBackoff, regex HTTP codes
- corrected Copyright copy/paste
- now actually implementing exponential backoff instead of regular interval retries
- using more general HTTP response code success/failure determination (e.g., 5xx for retry)
- net/http constants ftw
2017-05-31 11:53:02 -07:00
deads2k
954eb3ceb9 move labels to components which own the APIs 2017-05-31 10:32:06 -04:00
Kubernetes Submit Queue
0ff75d74d3 Merge pull request #46436 from rootfs/openstack-client
Automatic merge from submit-queue (batch tested with PRs 46394, 46650, 46436, 46673, 46212)

refactor and export openstack service clients

**What this PR does / why we need it**:
Refactor and export openstack service client.
Exporting the OpenStack clients so other projects can use them to call functions that are not implemented in the OpenStack cloud provider yet.

**Which issue this PR fixes** *(optional, in `fixes #<issue number>(, fixes #<issue_number>, ...)` format, will close that issue when PR gets merged)*: fixes #

**Special notes for your reviewer**:

**Release note**:

```release-note
NONE
```
2017-05-31 00:14:07 -07:00
Huamin Chen
4d4bdf11de refactor and export openstack service clients
Signed-off-by: Huamin Chen <hchen@redhat.com>
2017-05-31 00:36:33 +00:00
divyenpatel
85dcf6d52c Adding vsphere Storage API Latency and Error Metrics support
fix bazel failure
2017-05-30 16:54:30 -07:00
Jack Francis
f200f9a1e8 Azure cloudprovider retry using flowcontrol
An initial attempt at engaging exponential backoff for API error responses.

Uses k8s.io/client-go/util/flowcontrol; implementation inspired by GCE
cloudprovider backoff.
2017-05-30 14:50:31 -07:00
Kubernetes Submit Queue
222d247489 Merge pull request #46463 from wongma7/getinstances
Automatic merge from submit-queue (batch tested with PRs 46489, 46281, 46463, 46114, 43946)

AWS: consider instances of all states in DisksAreAttached, not just "running"

Require callers of `getInstancesByNodeNames(Cached)` to specify the states they want to filter instances by, if any. DisksAreAttached cannot consider only "running" instances because of the following attach/detach bug we discovered:

1. Node A stops (or reboots) and stays down for x amount of time
2. Kube reschedules all pods to different nodes; the ones using ebs volumes cannot run because their volumes are still attached to node A
3. Verify volumes are attached check happens while node A is down
4. Since the AWS EBS bulk verify filters by running nodes, it assumes the volumes attached to node A are detached and removes them all from ASW
5. Node A comes back; its volumes are still attached to it but the attach detach controller has removed them all from asw and so will never detach them even though they are no longer desired on this node and in fact desired elsewhere
6. Pods cannot run because their volumes are still attached to node A

So the idea here is to remove the wrong assumption that callers of `getInstancesByNodeNames(Cached)` only want "running" nodes.
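
A rough sketch of that shape of API, where callers pass the instance states to filter by and an empty list means "all states"; the helper name and EC2 filter wiring below are assumptions:

```go
package aws // illustrative placement

import (
	"github.com/aws/aws-sdk-go/aws"
	"github.com/aws/aws-sdk-go/service/ec2"
)

// instanceStateFilter builds an EC2 DescribeInstances filter for the given
// instance states; an empty slice means "do not filter by state at all", so
// stopped or rebooting nodes are still considered. Illustrative helper.
func instanceStateFilter(states []string) []*ec2.Filter {
	if len(states) == 0 {
		return nil // consider instances in every state
	}
	return []*ec2.Filter{{
		Name:   aws.String("instance-state-name"),
		Values: aws.StringSlice(states),
	}}
}
```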

I hope this isn't too confusing, open to alternative ways of fixing the bug + making the code nice.

ping @gnufied @kubernetes/sig-storage-bugs

```release-note
Fix AWS EBS volumes not getting detached from node if routine to verify volumes are attached runs while the node is down
```
2017-05-30 11:59:04 -07:00
Kubernetes Submit Queue
aee0ced31f Merge pull request #43585 from foolusion/add-health-check-node-port-to-aws-loadbalancer
Automatic merge from submit-queue

AWS: support node port health check

**What this PR does / why we need it**:
If a custom health check is set via the beta annotation on a service, it should be used for the ELB health check. This patch adds support for that.
**Which issue this PR fixes** *(optional, in `fixes #<issue number>(, fixes #<issue_number>, ...)` format, will close that issue when PR gets merged)*: fixes #

**Special notes for your reviewer**:
Let me know if any tests need to be added.
**Release note**:

```release-note
```
2017-05-29 15:29:51 -07:00
Nick Sardo
9063526dfb GCE: Refactor firewalls/backendservices api; other small changes 2017-05-27 10:25:03 -07:00
Kubernetes Submit Queue
daee6d4826 Merge pull request #45524 from MrHohn/l4-lb-healthcheck
Automatic merge from submit-queue (batch tested with PRs 46252, 45524, 46236, 46277, 46522)

Make GCE load-balancers create health checks for nodes

From #14661. Proposal on kubernetes/community#552. Fixes #46313.

Bullet points:
- Create nodes health check and firewall (for health checking) for non-OnlyLocal service.
- Create local traffic health check and firewall (for health checking) for OnlyLocal service.
- Version skew: 
   - Don't create the nodes health check if any node has version < 1.7.0.
   - Don't backfill nodes health check on existing LBs unless users explicitly trigger it.

**Release note**:

```release-note
GCE Cloud Provider: Newly created LoadBalancer-type Services now have health checks for nodes by default.
An existing LoadBalancer will have a health check attached to it when:
- Service.Spec.Type is changed from LoadBalancer to another type and flipped back.
- Any effective change is made to Service.Spec.ExternalTrafficPolicy.
```
2017-05-26 19:47:57 -07:00
Kubernetes Submit Queue
58e98cfc25 Merge pull request #46545 from nicksardo/gce-reviewers
Automatic merge from submit-queue

Add reviewers for GCE cloud provider

**Release note**:
```release-note
NONE
```
2017-05-26 17:43:11 -07:00
Nick Sardo
5b00c38fd9 Add approvers for GCE cloud provider 2017-05-26 16:42:20 -07:00
Zihong Zheng
897da549bc Autogenerated files 2017-05-26 13:19:14 -07:00
Zihong Zheng
b4633b0600 Create nodes health checks for non-OnlyLocal services 2017-05-26 13:18:50 -07:00
Kubernetes Submit Queue
f8cfeef174 Merge pull request #44884 from verult/master
Automatic merge from submit-queue (batch tested with PRs 46383, 45645, 45923, 44884, 46294)

Created unit tests for GCE cloud provider storage interface.

- Currently covers CreateDisk, DeleteDisk, and GetAutoLabelsForPD
- Created ServiceManager interface in gce.go to facilitate mocking in tests.



**What this PR does / why we need it**:
Increasing test coverage for GCE Persistent Disk.
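
A minimal sketch of the mocking pattern such an interface enables (the method set shown here is illustrative, not the exact interface added to gce.go):

```go
package gce // illustrative placement

// serviceManager is an illustrative seam over the GCE compute API so tests can
// substitute a fake; the real interface's method set may differ.
type serviceManager interface {
	CreateDisk(name, zone string, sizeGb int64) error
	DeleteDisk(name, zone string) error
}

// fakeServiceManager records calls instead of talking to GCE.
type fakeServiceManager struct {
	created map[string]int64
}

func (f *fakeServiceManager) CreateDisk(name, zone string, sizeGb int64) error {
	if f.created == nil {
		f.created = map[string]int64{}
	}
	f.created[zone+"/"+name] = sizeGb
	return nil
}

func (f *fakeServiceManager) DeleteDisk(name, zone string) error {
	delete(f.created, zone+"/"+name)
	return nil
}
```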

**Which issue this PR fixes** *(optional, in `fixes #<issue number>(, fixes #<issue_number>, ...)` format, will close that issue when PR gets merged)*: fixes #44573 

**Release note**:

```release-note
NONE
```
2017-05-26 12:58:05 -07:00
Matthew Wong
319c608fdd Get instances of all states in DisksAreAttached, not just "running" 2017-05-25 17:08:30 -04:00
Matthew Wong
9afbb356de Log an EBS vol's instance when attaching fails because VolumeInUse 2017-05-25 15:07:12 -04:00
Kubernetes Submit Queue
29b3bb44ba Merge pull request #45932 from lpabon/elbtag_pr
Automatic merge from submit-queue (batch tested with PRs 45518, 46127, 46146, 45932, 45003)

aws: Support for ELB tagging by users

This PR provides support for tagging AWS ELBs using information provided in an annotation as a list of comma-separated key-value pairs.
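
A minimal sketch of parsing such an annotation value into a tag map (the helper name and error handling are illustrative):

```go
package aws // illustrative placement

import (
	"fmt"
	"strings"
)

// parseTagAnnotation turns "Key1=Val1,Key2=Val2" into a map of ELB tags.
// Illustrative parser for the comma-separated key-value format.
func parseTagAnnotation(value string) (map[string]string, error) {
	tags := map[string]string{}
	for _, pair := range strings.Split(value, ",") {
		kv := strings.SplitN(strings.TrimSpace(pair), "=", 2)
		if len(kv) != 2 || kv[0] == "" {
			return nil, fmt.Errorf("invalid tag entry %q", pair)
		}
		tags[kv[0]] = kv[1]
	}
	return tags, nil
}
```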

Closes https://github.com/kubernetes/community/pull/404
2017-05-25 11:46:06 -07:00
Kubernetes Submit Queue
9c1480bb61 Merge pull request #46366 from nicksardo/gce-subnetwork-url
Automatic merge from submit-queue (batch tested with PRs 45573, 46354, 46376, 46162, 46366)

GCE - Retrieve subnetwork name/url from gce.conf 

**What this PR does / why we need it**:
Features like ILB require specifying the subnetwork if the network is type manual.

**Notes:**
The network URL can be [constructed](68e7e18698/pkg/cloudprovider/providers/gce/gce.go (L211-L217)) by fetching instance metadata; however, the subnetwork is not provided through this feature. Users must specify the subnetwork name/url through the gce.conf.

Although multiple subnets can exist in the same region for a network, the cloud provider will only use one subnet url for creating LBs. 


**Release note**:
```release-note
NONE
```
2017-05-25 03:14:05 -07:00
Cheng Xing
2141b0fb80 Created unit tests for GCE cloud provider storage interface.
- Currently covers CreateDisk, DeleteDisk, and GetAutoLabelsForPD
- Created ServiceManager interface in gce.go to facilitate mocking in tests.
2017-05-24 15:50:22 -07:00
Kubernetes Submit Queue
89a76b8c8b Merge pull request #46128 from jagosan/master
Automatic merge from submit-queue

Added deprecation notice and guidance for cloud providers.

**What this PR does / why we need it**:
Adding context/background and general guidance for incoming cloud providers. 

**Which issue this PR fixes** 

**Special notes for your reviewer**:
Generalized message per discussion with @bgrant0607
2017-05-24 14:19:01 -07:00
Nick Sardo
435303c647 Add subnetworkURL to GCE provider 2017-05-24 09:35:51 -07:00
Kubernetes Submit Queue
6f7eac63c2 Merge pull request #46315 from wongma7/gcepdalready
Automatic merge from submit-queue (batch tested with PRs 38505, 41785, 46315)

Fix provisioned GCE PD not being reused if already exists

@jsafrane PTAL 

This is another attempt at https://github.com/kubernetes/kubernetes/pull/38702 . We have observed that `gce.service.Disks.Insert(gce.projectID, zone, diskToCreate).Do()` instantly gets an error response of alreadyExists, so we must check for it.

I am not sure if we still need to check for the error after `waitForZoneOp`; I think that if there is an alreadyExists error, the `Do()` above will always respond with it instantly. But because I'm not sure, and to be safe, I will leave it.
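
A minimal sketch of the kind of check involved, using the `googleapi.Error` type returned by the GCE client (the helper name and exact matching are illustrative):

```go
package gce // illustrative placement

import "google.golang.org/api/googleapi"

// isAlreadyExistsError reports whether err is a GCE API error whose reason is
// "alreadyExists", so the provisioner can reuse the existing disk instead of
// failing. Illustrative helper; the PR's actual check may differ.
func isAlreadyExistsError(err error) bool {
	apiErr, ok := err.(*googleapi.Error)
	if !ok {
		return false
	}
	for _, e := range apiErr.Errors {
		if e.Reason == "alreadyExists" {
			return true
		}
	}
	return false
}
```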
2017-05-24 06:47:03 -07:00
Kubernetes Submit Queue
70dd10cc50 Merge pull request #41785 from jamiehannaford/cinder-performance
Automatic merge from submit-queue (batch tested with PRs 38505, 41785, 46315)

Only retrieve relevant volumes

**What this PR does / why we need it**:

Improves performance for Cinder volume attach/detach calls. 

Currently, when Cinder volumes are attached or detached, functions try to retrieve details about the volume from the Nova API. Because some callers only have the volume name, not its UUID, they use the list function in gophercloud to iterate over all volumes to find a match. This incurs severe performance problems on OpenStack projects with lots of volumes (sometimes thousands), since a new request must be sent whenever the current page does not contain a match. A better way of doing this is to use the `?name=XXX` query parameter to refine the results.
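
A minimal sketch of the refined lookup using gophercloud's block storage `ListOpts` (client construction omitted; package paths and the vendored gophercloud version may differ from what this PR uses):

```go
package openstack // illustrative placement

import (
	"fmt"

	"github.com/gophercloud/gophercloud"
	"github.com/gophercloud/gophercloud/openstack/blockstorage/v2/volumes"
	"github.com/gophercloud/gophercloud/pagination"
)

// findVolumeByName asks Cinder only for volumes matching name instead of
// paging through every volume in the project. Illustrative helper.
func findVolumeByName(client *gophercloud.ServiceClient, name string) (*volumes.Volume, error) {
	var found *volumes.Volume
	err := volumes.List(client, volumes.ListOpts{Name: name}).EachPage(
		func(page pagination.Page) (bool, error) {
			vols, err := volumes.ExtractVolumes(page)
			if err != nil {
				return false, err
			}
			for i := range vols {
				if vols[i].Name == name {
					found = &vols[i]
					return false, nil // stop paging
				}
			}
			return true, nil // continue to the next page
		})
	if err != nil {
		return nil, err
	}
	if found == nil {
		return nil, fmt.Errorf("volume %q not found", name)
	}
	return found, nil
}
```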

**Which issue this PR fixes**:

https://github.com/kubernetes/kubernetes/issues/26404

**Special notes for your reviewer**:

There were 2 ways of addressing this problem:

1. Use the `name` query parameter
2. Instead of using the list function, switch to using volume UUIDs and use the GET function instead. You'd need to change the signature of a few functions though, such as [`DeleteVolume`](https://github.com/kubernetes/kubernetes/blob/master/pkg/volume/cinder/cinder.go#L49), so I'm not sure how backwards compatible that is.

Since option 1 does effectively the same as option 2, I went with it because it ensures backwards compatibility.

One assumption that is made is that the `volumeName` being retrieved matches exactly the name of the volume in Cinder. I'm not sure how accurate that is, but I see no reason why cloud providers would want to append/prefix things arbitrarily. 

**Release note**:
```release-note
Improves performance of Cinder volume attach/detach operations
```
2017-05-24 06:46:59 -07:00