Commit Graph

1309 Commits

Author SHA1 Message Date
Kubernetes Submit Queue
ead8c98cdb Merge pull request #45987 from nicksardo/cloud-init-kubeclient
Automatic merge from submit-queue

Initialize cloud providers with a K8s clientBuilder

**What this PR does / why we need it**:
This PR provides each cloud provider the ability to generate Kubernetes clients. Either the full-access or the service-account client builder is passed from the controller manager. Cloud providers may need to retrieve information from the cluster that isn't provided through defined interfaces, and this seems preferable to adding parameters.
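
As a hedged illustration of the pattern (not the PR's exact diff), a provider could stash a client during initialization. `controller.ControllerClientBuilder` and its `ClientOrDie` method exist in the Kubernetes tree; the provider type below and the import paths (which reflect current layouts, not the 2017 ones) are assumptions:

```go
package mycloud

import (
	clientset "k8s.io/client-go/kubernetes"
	"k8s.io/kubernetes/pkg/controller"
)

// myCloud is a hypothetical provider that keeps a client built from the
// injected builder for later cluster lookups.
type myCloud struct {
	kubeClient clientset.Interface
}

// Initialize receives the builder (full-access or service-account scoped)
// from the controller manager and builds a client named after the provider.
func (c *myCloud) Initialize(clientBuilder controller.ControllerClientBuilder) {
	c.kubeClient = clientBuilder.ClientOrDie("my-cloud-provider")
}
```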

Please leave your thoughts/comments.

**Release note**:
```release-note
NONE
```
2017-05-18 20:51:24 -07:00
Kubernetes Submit Queue
be71ec717b Merge pull request #45201 from vmware/network_id
Automatic merge from submit-queue

Same internal and external ip for vSphere Cloud Provider

Currently, the vSphere Cloud Provider reports container IP addresses as the node's internal IP. This PR modifies the vSphere Cloud Provider to report the same address, provided by the VMware infrastructure, as both the internal and external IP.
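
A minimal sketch of what "same internal and external IP" means in node-address terms, written against today's `k8s.io/api/core/v1` types rather than the PR-era package:

```go
package main

import v1 "k8s.io/api/core/v1"

// nodeAddresses reports one infrastructure-provided IP as both the
// internal and the external address of the node.
func nodeAddresses(ip string) []v1.NodeAddress {
	return []v1.NodeAddress{
		{Type: v1.NodeInternalIP, Address: ip},
		{Type: v1.NodeExternalIP, Address: ip},
	}
}
```
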
cc @pdhamdhere @tusharnt @BaluDontu @divyenpatel @luomiao
2017-05-18 13:31:02 -07:00
Kubernetes Submit Queue
f231576f29 Merge pull request #45443 from abrarshivani/owners_cloud_providers
Automatic merge from submit-queue

Add approvers to vsphere cloudprovider

This PR adds approvers for vSphere Cloud provider.
cc @pdhamdhere @tusharnt @BaluDontu @divyenpatel @luomiao
2017-05-18 11:36:25 -07:00
Kubernetes Submit Queue
f760d5a592 Merge pull request #46001 from bowei/alpha-to-beta
Automatic merge from submit-queue

Use beta GCP API instead of alpha in CloudCIDR controller

The feature we are using has been promoted to beta.

```release-note
NONE
```
2017-05-18 11:36:19 -07:00
NickrenREN
9370808a35 Add myself to openstack review pool 2017-05-18 13:37:48 +08:00
Bowei Du
c77ffb2685 Use beta GCP API instead of alpha in CloudCIDR controller
The feature we are using has been promoted to beta.
2017-05-17 16:18:29 -07:00
Nick Sardo
87a5edd2cd Initialize cloud providers with a K8s clientBuilder 2017-05-17 14:38:25 -07:00
Kubernetes Submit Queue
bcf5837c94 Merge pull request #45912 from nicksardo/gce-src-ip-dedupe
Automatic merge from submit-queue (batch tested with PRs 45884, 45879, 45912, 45444, 45874)

[GCE] Removed duplicate CIDR

**What this PR does / why we need it**:
Removes a duplicate CIDR in the list of LB source CIDRs.
https://cloud.google.com/compute/docs/load-balancing/network/ and https://cloud.google.com/compute/docs/load-balancing/http/ both list `35.191.0.0/16`.  Only one is needed.
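
The diff itself just drops the repeated literal, but a general de-duplication of such a source-range list looks like this (illustrative, not from the PR):

```go
package main

// dedupeCIDRs returns the list with duplicate entries removed while
// preserving the original order.
func dedupeCIDRs(cidrs []string) []string {
	seen := make(map[string]bool, len(cidrs))
	out := make([]string, 0, len(cidrs))
	for _, c := range cidrs {
		if !seen[c] {
			seen[c] = true
			out = append(out, c)
		}
	}
	return out
}
```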

**Release note**:

```release-note
NONE
```
2017-05-16 22:18:54 -07:00
Kubernetes Submit Queue
f171683242 Merge pull request #44537 from FengyunPan/fix-volume-bug
Automatic merge from submit-queue (batch tested with PRs 45374, 44537, 45739, 44474, 45888)

Fix attach volume to instance repeatedly

1. When a volume's status is 'attaching', the controller-manager will attach it again and return an error, so it is necessary to check the volume's status before attaching/detaching a volume (a sketch of this guard follows below).

2. When a volume's status is 'attaching', its attachments will be None, so the controller-manager can't get the device path and emits failure events. This is normal, so don't return an error when attachments is None.
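
A sketch of the status guard from item 1, using Cinder's status strings; the helper's shape is an assumption, not the PR's code:

```go
package main

import "fmt"

// canAttach decides whether an attach call should be issued based on the
// Cinder volume status, instead of blindly re-attaching.
func canAttach(status string) (bool, error) {
	switch status {
	case "available":
		return true, nil
	case "attaching", "in-use":
		// Already attaching or attached: don't issue another attach call.
		return false, nil
	default:
		return false, fmt.Errorf("volume in unexpected status %q", status)
	}
}
```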

Fix bug: #44536
2017-05-16 18:10:55 -07:00
Nick Sardo
908bcc3b24 Removed duplicate CIDR 2017-05-16 14:24:57 -07:00
Abrar Shivani
c7a22a588f Made internal-and-external-ip-same 2017-05-12 18:04:15 -07:00
Kubernetes Submit Queue
35eba22cc7 Merge pull request #41162 from MrHohn/esipp-ga
Automatic merge from submit-queue (batch tested with PRs 45623, 45241, 45460, 41162)

Promotes Source IP preservation for Virtual IPs from Beta to GA

Fixes #33625. Feature issue: kubernetes/features#27.

Bullet points:
- Declare 2 fields (ExternalTraffic and HealthCheckNodePort) that mirror the ESIPP annotations.
- ESIPP alpha annotations will be ignored.
- Existing ESIPP beta annotations will still be fully supported.
- Allow promoting beta annotations to first-class fields, or the reverse.
- Disallow setting invalid ExternalTraffic and HealthCheckNodePort on services. Default ExternalTraffic field for nodePort or loadBalancer type service to "Global" if not set.
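
For illustration, setting the promoted fields through client-go types as they exist in later releases (the PR-era field was named ExternalTraffic; the constants below come from `k8s.io/api/core/v1`, so treat the exact names as an assumption):

```go
package main

import v1 "k8s.io/api/core/v1"

// useLocalTraffic opts a LoadBalancer service into source-IP preservation.
func useLocalTraffic(svc *v1.Service) {
	svc.Spec.ExternalTrafficPolicy = v1.ServiceExternalTrafficPolicyTypeLocal
	// Spec.HealthCheckNodePort is allocated by the apiserver for such
	// services; it can also be set explicitly.
}
```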

**Release note**:

```release-note
Promotes Source IP preservation for Virtual IPs to GA.

Two api fields are defined correspondingly:
- Service.Spec.ExternalTrafficPolicy <- 'service.beta.kubernetes.io/external-traffic' annotation.
- Service.Spec.HealthCheckNodePort <- 'service.beta.kubernetes.io/healthcheck-nodeport' annotation.
```
2017-05-12 15:00:46 -07:00
Zihong Zheng
7ed716a997 Change to use ESIPP first class fields and update comments 2017-05-12 10:59:00 -07:00
FengyunPan
4a6e1f2a1d Don't return err when volume's status is 'attaching'
When a volume's status is 'attaching', its attachments will be None,
so the controller-manager can't get the device path and emits failure
events. This is normal, so let's fix it.
2017-05-12 19:53:50 +08:00
fangyuhao [方宇浩]
5976b9c8a3 client.go: format err 2017-05-11 18:17:33 +08:00
Kubernetes Submit Queue
b0d024fee1 Merge pull request #45569 from vmware/fix_VolumesAreAttached
Automatic merge from submit-queue (batch tested with PRs 45569, 45602, 45604, 45478, 45550)

Fixing VolumesAreAttached and DisksAreAttached functions in vSphere

**What this PR does / why we need it**:

In vSphere HA, when a node fails over, the node VM momentarily goes into a "not connected" state. During this time, if Kubernetes calls the VolumesAreAttached function, we return an incorrect map, with the status for each volume set to false - the detached state.

Volumes attached to the previous node must be detached before they can attach to the new node. Kubernetes attempts to check volume attachment. When the node VM is not accessible, or when for any reason we cannot determine whether a disk is attached, we were returning a map of volume paths with attachment status set to false. This was misinterpreted as the disks already being detached from the node, and Kubernetes marked the volumes as detached after the orphaned pod was cleaned up. This caused volumes to remain attached to the previous node, and pod creation stayed in the "ContainerCreating" state; since both nodes are powered on, the volumes cannot be attached to the new node.

**Logs before fix**

```
{"log":"E0508 21:31:20.902501       1 vsphere.go:1053] disk uuid not found for [vsanDatastore] kubevols/kubernetes-dynamic-pvc-8b75170e-342d-11e7-bab5-0050568aeb0a.vmdk. err: No disk UUID fou
nd\n","stream":"stderr","time":"2017-05-08T21:31:20.902792337Z"}
{"log":"E0508 21:31:20.902552       1 vsphere.go:1041] Failed to check whether disk is attached. err: No disk UUID found\n","stream":"stderr","time":"2017-05-08T21:31:20.902842673Z"}
{"log":"I0508 21:31:20.902575       1 attacher.go:114] VolumesAreAttached: check volume \"[vsanDatastore] kubevols/kubernetes-dynamic-pvc-8b75170e-342d-11e7-bab5-0050568aeb0a.vmdk\" (specName
: \"pvc-8b75170e-342d-11e7-bab5-0050568aeb0a\") is no longer attached\n","stream":"stderr","time":"2017-05-08T21:31:20.902849717Z"}
{"log":"I0508 21:31:20.902596       1 operation_generator.go:166] VerifyVolumesAreAttached determined volume \"kubernetes.io/vsphere-volume/[vsanDatastore] kubevols/kubernetes-dynamic-pvc-8b7
5170e-342d-11e7-bab5-0050568aeb0a.vmdk\" (spec.Name: \"pvc-8b75170e-342d-11e7-bab5-0050568aeb0a\") is no longer attached to node \"node3\", therefore it was marked as detached.\n","stream":"s
tderr","time":"2017-05-08T21:31:20.902863097Z"}
```



In this change, we make sure the correct volume attachment map is returned, and if any error occurs while checking a disk's status, we return a nil map.
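
A sketch of that contract (helper name hypothetical): return a nil map on error so callers treat the state as unknown rather than detached:

```go
package main

// diskIsAttached is a hypothetical stand-in for the provider's check.
func diskIsAttached(path, node string) (bool, error) { return false, nil }

// volumesAreAttached returns nil (unknown) on any check error instead of a
// volume->false map, which the attach/detach controller would read as
// "safe to mark detached".
func volumesAreAttached(paths []string, node string) (map[string]bool, error) {
	attached := make(map[string]bool, len(paths))
	for _, p := range paths {
		ok, err := diskIsAttached(p, node)
		if err != nil {
			return nil, err
		}
		attached[p] = ok
	}
	return attached, nil
}
```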


**Logs after fix**
```
{"log":"E0509 20:25:37.982152       1 vsphere.go:1067] Failed to check whether disk is attached. err: No disk UUID found\n","stream":"stderr","time":"2017-05-09T20:25:37.982516134Z"}
{"log":"E0509 20:25:37.982190       1 attacher.go:104] Error checking if volumes ([[vsanDatastore] kubevols/kubernetes-dynamic-pvc-c26fcae8-34f2-11e7-9303-0050568a3ac1.vmdk [vsanDatastore] kubevols/kubernetes-dynamic-pvc-c268f141-34f2-11e7-9303-0050568a3ac1.vmdk [vsanDatastore] kubevols/kubernetes-dynamic-pvc-c25d08d3-34f2-11e7-9303-0050568a3ac1.vmdk]) are attached to current node (\"node3\"). err=No disk UUID found\n","stream":"stderr","time":"2017-05-09T20:25:37.982521101Z"}
{"log":"E0509 20:25:37.982220       1 operation_generator.go:158] VolumesAreAttached failed for checking on node \"node3\" with: No disk UUID found\n","stream":"stderr","time":"2017-05-09T20:25:37.982526285Z"}
{"log":"I0509 20:25:39.157279       1 attacher.go:115] VolumesAreAttached: volume \"[vsanDatastore] kubevols/kubernetes-dynamic-pvc-c268f141-34f2-11e7-9303-0050568a3ac1.vmdk\" (specName: \"pvc-c268f141-34f2-11e7-9303-0050568a3ac1\") is attached\n","stream":"stderr","time":"2017-05-09T20:25:39.157724393Z"}
{"log":"I0509 20:25:39.157329       1 attacher.go:115] VolumesAreAttached: volume \"[vsanDatastore] kubevols/kubernetes-dynamic-pvc-c25d08d3-34f2-11e7-9303-0050568a3ac1.vmdk\" (specName: \"pvc-c25d08d3-34f2-11e7-9303-0050568a3ac1\") is attached\n","stream":"stderr","time":"2017-05-09T20:25:39.157787946Z"}
{"log":"I0509 20:25:39.157367       1 attacher.go:115] VolumesAreAttached: volume \"[vsanDatastore] kubevols/kubernetes-dynamic-pvc-c26fcae8-34f2-11e7-9303-0050568a3ac1.vmdk\" (specName: \"pvc-c26fcae8-34f2-11e7-9303-0050568a3ac1\") is attached\n","stream":"stderr","time":"2017-05-09T20:25:39.157794586Z"}
```

```
{"log":"I0509 20:25:41.267425       1 reconciler.go:173] Started DetachVolume for volume \"kubernetes.io/vsphere-volume/[vsanDatastore] kubevols/kubernetes-dynamic-pvc-c26fcae8-34f2-11e7-9303-0050568a3ac1.vmdk\" from node \"node3\"\n","stream":"stderr","time":"2017-05-09T20:25:41.267883567Z"}
{"log":"I0509 20:25:41.271836       1 operation_generator.go:694] Verified volume is safe to detach for volume \"pvc-c26fcae8-34f2-11e7-9303-0050568a3ac1\" (UniqueName: \"kubernetes.io/vsphere-volume/[vsanDatastore] kubevols/kubernetes-dynamic-pvc-c26fcae8-34f2-11e7-9303-0050568a3ac1.vmdk\") on node \"node3\" \n","stream":"stderr","time":"2017-05-09T20:25:41.272703255Z"}
{"log":"I0509 20:25:47.928021       1 operation_generator.go:341] DetachVolume.Detach succeeded for volume \"pvc-c26fcae8-34f2-11e7-9303-0050568a3ac1\" (UniqueName: \"kubernetes.io/vsphere-volume/[vsanDatastore] kubevols/kubernetes-dynamic-pvc-c26fcae8-34f2-11e7-9303-0050568a3ac1.vmdk\") on node \"node3\" \n","stream":"stderr","time":"2017-05-09T20:25:47.928348553Z"}

{"log":"I0509 20:26:12.535962       1 operation_generator.go:694] Verified volume is safe to detach for volume \"pvc-c25d08d3-34f2-11e7-9303-0050568a3ac1\" (UniqueName: \"kubernetes.io/vsphere-volume/[vsanDatastore] kubevols/kubernetes-dynamic-pvc-c25d08d3-34f2-11e7-9303-0050568a3ac1.vmdk\") on node \"node3\" \n","stream":"stderr","time":"2017-05-09T20:26:12.536055214Z"}
{"log":"I0509 20:26:14.188580       1 operation_generator.go:341] DetachVolume.Detach succeeded for volume \"pvc-c25d08d3-34f2-11e7-9303-0050568a3ac1\" (UniqueName: \"kubernetes.io/vsphere-volume/[vsanDatastore] kubevols/kubernetes-dynamic-pvc-c25d08d3-34f2-11e7-9303-0050568a3ac1.vmdk\") on node \"node3\" \n","stream":"stderr","time":"2017-05-09T20:26:14.188792677Z"}

{"log":"I0509 20:26:40.355656       1 reconciler.go:173] Started DetachVolume for volume \"kubernetes.io/vsphere-volume/[vsanDatastore] kubevols/kubernetes-dynamic-pvc-c268f141-34f2-11e7-9303-0050568a3ac1.vmdk\" from node \"node3\"\n","stream":"stderr","time":"2017-05-09T20:26:40.355922165Z"}
{"log":"I0509 20:26:40.357988       1 operation_generator.go:694] Verified volume is safe to detach for volume \"pvc-c268f141-34f2-11e7-9303-0050568a3ac1\" (UniqueName: \"kubernetes.io/vsphere-volume/[vsanDatastore] kubevols/kubernetes-dynamic-pvc-c268f141-34f2-11e7-9303-0050568a3ac1.vmdk\") on node \"node3\" \n","stream":"stderr","time":"2017-05-09T20:26:40.358177953Z"}

```




**Which issue this PR fixes**
fixes #45464, https://github.com/vmware/kubernetes/issues/116

**Special notes for your reviewer**:
Verified this change on locally built hyperkube image - v1.7.0-alpha.3.147+3c0526cb64bdf5-dirty

**Performed many failovers with large volumes (30GB) attached to the pod.**

$ kubectl describe pod
Name:		wordpress-mysql-2789807967-3xcvc
Node:		node3/172.1.87.0
Status:		Running

Powered Off node3's host. pod failed over to node2. Verified all 3 disks detached from node3 and attached to node2.

$ kubectl describe pod
Name:		wordpress-mysql-2789807967-qx0b0
Node:		node2/172.1.9.0
Status:		Running

Powered Off node2's host. pod failed over to node3. Verified all 3 disks detached from node2 and attached to node3.

$ kubectl describe pod
Name:		wordpress-mysql-2789807967-7849s
Node:		node3/172.1.87.0
Status:		Running

Powered Off node3's host. pod failed over to node1. Verified all 3 disks detached from node3 and attached to node1.

$ kubectl describe pod
Name:		wordpress-mysql-2789807967-26lp1
Node:		node1/172.1.98.0
Status:		Running

Powered off node1's host. pod failed over to node3. Verified all 3 disks detached from node1 and attached to node3.

$ kubectl describe pods
Name:		wordpress-mysql-2789807967-4pdtl
Node:		node3/172.1.87.0
Status:		Running


Powered off node3's host. pod failed over to node1. Verified all 3 disks detached from node3 and attached to node1.

$ kubectl describe pod
Name:		wordpress-mysql-2789807967-t375f
Node:		node1/172.1.98.0
Status:		Running

Powered off node1's host. pod failed over to node3. Verified all 3 disks detached from node1 and attached to node3.

$ kubectl describe pods
Name:		wordpress-mysql-2789807967-pn6ps
Node:		node3/172.1.87.0
Status:		Running

powered off node3's host. pod failed over to node1. Verified all 3 disks detached from node3 and attached to node1

$ kubectl describe pods
Name:		wordpress-mysql-2789807967-0wqc1
Node:		node1/172.1.98.0
Status:		Running

powered off node1's host. pod failed over to node3. Verified all 3 disks detached from node1 and attached to node3.

$ kubectl describe pods
Name:		wordpress-mysql-2789807967-821nc
Node:		node3/172.1.87.0
Status:		Running


**Release note**:

```release-note
NONE
```

CC:  @BaluDontu @abrarshivani @luomiao @tusharnt @pdhamdhere
2017-05-10 21:34:37 -07:00
Kubernetes Submit Queue
b0399114fe Merge pull request #38636 from dhawal55/internal-elb
Automatic merge from submit-queue (batch tested with PRs 43067, 45586, 45590, 38636, 45599)

AWS: Remove check that forces loadBalancerSourceRanges to be 0.0.0.0/0. 

fixes #38633

Remove the check that forces loadBalancerSourceRanges to be 0.0.0.0/0. Also remove the check that forces the service.beta.kubernetes.io/aws-load-balancer-internal annotation to be 0.0.0.0/0. Ideally it should be a boolean, but for backward compatibility we leave it as any non-empty value.
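
A sketch of the resulting annotation semantics (the annotation key is the AWS provider's; the helper name is assumed):

```go
package main

// ServiceAnnotationLoadBalancerInternal is the AWS provider's annotation
// for requesting an internal ELB.
const ServiceAnnotationLoadBalancerInternal = "service.beta.kubernetes.io/aws-load-balancer-internal"

// isInternalELB treats any non-empty value as true, for backward
// compatibility with the old 0.0.0.0/0 convention.
func isInternalELB(annotations map[string]string) bool {
	return annotations[ServiceAnnotationLoadBalancerInternal] != ""
}
```
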
2017-05-10 19:31:45 -07:00
Kubernetes Submit Queue
a86392a326 Merge pull request #45333 from colemickens/cmpr-cpfix
Automatic merge from submit-queue (batch tested with PRs 45382, 45384, 44781, 45333, 45543)

azure: improve user agent string

**What this PR does / why we need it**: the UA string doesn't actually contain "kubernetes" in it

**Which issue this PR fixes** *(optional, in `fixes #<issue number>(, fixes #<issue_number>, ...)` format, will close that issue when PR gets merged)*: fixes #

**Special notes for your reviewer**: none 

**Release note**:

```release-note
NONE
```

cc: @brendandburns
2017-05-10 17:47:45 -07:00
divyenpatel
9f89b57b74 fix implementation of VolumesAreAttached function 2017-05-10 10:16:13 -07:00
Kubernetes Submit Queue
3fbfafdd0a Merge pull request #45523 from colemickens/cmpr-cpfix3
Automatic merge from submit-queue

azure: load balancer: support UDP, fix multiple loadBalancerSourceRanges support, respect sessionAffinity

**What this PR does / why we need it**:

1. Adds support for UDP ports
2. Fixes support for multiple `loadBalancerSourceRanges`
3. Adds support the Service spec's `sessionAffinity`
4. Removes dead code from the Instances file

**Which issue this PR fixes** *(optional, in `fixes #<issue number>(, fixes #<issue_number>, ...)` format, will close that issue when PR gets merged)*: fixes #43683

**Special notes for your reviewer**: n/a

**Release note**:

```release-note
azure: add support for UDP ports
azure: fix support for multiple `loadBalancerSourceRanges`
azure: support the Service spec's `sessionAffinity`
```
2017-05-09 22:07:55 -07:00
Kubernetes Submit Queue
7c3f8c9bcf Merge pull request #45181 from vmware/NodeAddressesIPV6IssueNew
Automatic merge from submit-queue

Filter out IPV6 addresses from NodeAddresses() returned by vSphere

The vSphere CP returns both IPv6 and IPv4 addresses for a Node as part of the NodeAddresses() implementation. However, Kubelet fails due to a duplicate api.NodeAddress value when the node has an IPv6 address associated with it. This issue is tracked in #42690. The following are observed:

- when we enabled the logs and checked the addresses sent by vSphere CP to Kubelet, we don't see any duplicate addresses at all.
- Also, kubelet_node_status doesn’t receive any duplicate address from cloud provider.

However, when we filter out the IPV6 addresses and only return IPV4 addresses to the Kubelet, it works perfectly fine. 

Even though the Kubelet receives non-duplicate node addresses, it still errors out with duplicate node addresses. It might be an issue in how the kubelet propagates these addresses to the API server, or the API server may be unable to handle IPv6 addresses.
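
The filtering itself reduces to a standard-library check; a self-contained sketch:

```go
package main

import "net"

// filterIPv4 drops anything that doesn't parse as an IPv4 address.
func filterIPv4(addrs []string) []string {
	var out []string
	for _, a := range addrs {
		if ip := net.ParseIP(a); ip != nil && ip.To4() != nil {
			out = append(out, a)
		}
	}
	return out
}
```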

@divyenpatel @abrarshivani @pdhamdhere @tusharnt

**Release note**:

```release-note
None
```
2017-05-09 18:16:03 -07:00
Dhawal Patel
0e57b912a6 Update comment on ServiceAnnotationLoadBalancerInternal 2017-05-09 13:41:15 -07:00
Kubernetes Submit Queue
49626c975b Merge pull request #44798 from zetaab/master
Automatic merge from submit-queue

Statefulsets for cinder: allow multi-AZ deployments, spread pods across zones

**What this PR does / why we need it**: Currently, if we do not specify an availability zone in the Cinder storageclass, the volume is provisioned to a zone called nova. However, as mentioned in the issue, we have a situation where we want to spread a statefulset across 3 different zones. Currently this is not possible with statefulsets and the Cinder storageclass. In this new solution, if we leave the zone empty, the algorithm will choose the zone for the Cinder drive, similar to the AWS and GCE storageclass solutions.
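
A hedged sketch of that zone-spreading idea; Kubernetes ships a similar helper (ChooseZoneForVolume), and this standalone hash-based version is only illustrative:

```go
package main

import (
	"hash/fnv"
	"sort"
)

// chooseZone picks a zone deterministically from the claim name so that a
// statefulset's claims (storage-mysql-0, storage-mysql-1, ...) spread
// across zones.
func chooseZone(zones []string, pvcName string) string {
	sort.Strings(zones) // stable order regardless of discovery order
	h := fnv.New32a()
	h.Write([]byte(pvcName))
	return zones[int(h.Sum32())%len(zones)]
}
```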

**Which issue this PR fixes** fixes #44735

**Special notes for your reviewer**:

example:

```
kind: StorageClass
apiVersion: storage.k8s.io/v1beta1
metadata:
  name: all
provisioner: kubernetes.io/cinder
---
apiVersion: v1
kind: Service
metadata:
  annotations:
    service.alpha.kubernetes.io/tolerate-unready-endpoints: "true"
  name: galera
  labels:
    app: mysql
spec:
  ports:
  - port: 3306
    name: mysql
  clusterIP: None
  selector:
    app: mysql
---
apiVersion: apps/v1beta1
kind: StatefulSet
metadata:
  name: mysql
spec:
  serviceName: "galera"
  replicas: 3
  template:
    metadata:
      labels:
        app: mysql
      annotations:
        pod.alpha.kubernetes.io/initialized: "true"
    spec:
      containers:
      - name: mysql
        image: adfinissygroup/k8s-mariadb-galera-centos:v002
        imagePullPolicy: Always
        ports:
        - containerPort: 3306
          name: mysql
        - containerPort: 4444
          name: sst
        - containerPort: 4567
          name: replication
        - containerPort: 4568
          name: ist
        volumeMounts:
        - name: storage
          mountPath: /data
        readinessProbe:
          exec:
            command:
            - /usr/share/container-scripts/mysql/readiness-probe.sh
          initialDelaySeconds: 15
          timeoutSeconds: 5
        env:
          - name: POD_NAMESPACE
            valueFrom:
              fieldRef:
                apiVersion: v1
                fieldPath: metadata.namespace
  volumeClaimTemplates:
  - metadata:
      name: storage
      annotations:
        volume.beta.kubernetes.io/storage-class: all
    spec:
      accessModes: [ "ReadWriteOnce" ]
      resources:
        requests:
          storage: 12Gi
```

If this example is deployed it will automatically create one replica per AZ. This helps us a lot making HA databases.

Current storageclass for Cinder is not perfect in the case of statefulsets. Let's assume that the Cinder storageclass is defined to be in a zone called nova, but because labels are not added to the PV, pods can be started in any zone. The problem is that, at least in our OpenStack, it is not possible to use a Cinder drive located in zone x from zone y. However, should we have the possibility to choose between cross-zone Cinder mounts or not? In my opinion it is not a good way of doing things to mount a volume from a zone other than the one where the pod is located (it means more network traffic between zones). What do you think? The current new solution does not allow that anymore (should we have the possibility to allow it? That would mean removing the labels from the PV).

There might be some things that needs to be fixed still in this release and I need help for that. Some parts of the code is not perfect.

Issues what i am thinking about (I need some help for these):
1) Can everybody see in OpenStack which AZ their servers are in? Could there be an access policy that hides it? If the AZ is not found in the server specs, I have no idea how the code behaves.
2) In the GetAllZones() function, is it really needed to make a new serviceclient using openstack.NewComputeV2, or could I somehow use the existing one?
3) This fetches all servers from an OpenStack tenant (project). However, in some cases Kubernetes may be deployed only to a specific zone. If the kube servers are located in zone 1 and there are other servers in the same tenant in zone 2, there might be a use case where a Cinder drive is provisioned to zone-2 but the pod cannot start, because Kubernetes does not have any nodes in zone-2. Could we have a better way to fetch the Kubernetes nodes' zones? Currently that information is not added to Kubernetes node labels automatically in OpenStack (which I think it should be); I have added those labels manually to the nodes. If the zone information is not added to the nodes, the new solution does not start stateful pods at all, because it cannot target them.


cc @rootfs @anguslees @jsafrane 

```release-note
The default behaviour of the Cinder storageclass is changed. If availability is not specified, the zone is chosen by an algorithm, which makes it possible to spread stateful pods across many zones.
```
2017-05-09 08:10:44 -07:00
Cole Mickens
3fc0c05d83 azure: instances: remove dead code 2017-05-09 00:00:12 -07:00
Cole Mickens
c349d36da3 azure: loadbalancer: fix sourceAddrPrefix support
Fixes support for multiple instances of loadBalancerSourceRanges.
Previously, the names of the rules for each address range conflicted
causing only one to be applied. Now each gets a unique name.
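
A sketch of the naming idea: one deterministic rule name per (port, source range) so ranges stop overwriting each other. The provider's real naming scheme may differ:

```go
package main

import (
	"fmt"
	"strings"
)

// securityRuleName yields a unique name per port and source range.
func securityRuleName(svcName string, port int32, sourceRange string) string {
	sanitized := strings.NewReplacer("/", "_", ".", "_", ":", "_").Replace(sourceRange)
	return fmt.Sprintf("%s-%d-%s", svcName, port, sanitized)
}
```
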
2017-05-08 23:58:29 -07:00
Cole Mickens
355c2be7a0 azure: loadbalancer: support UDP svc ports+rules 2017-05-08 23:58:25 -07:00
Kubernetes Submit Queue
20fa30e4b5 Merge pull request #45330 from NickrenREN/openstack-backoff
Automatic merge from submit-queue (batch tested with PRs 45018, 45330)

Add exponential backoff to openstack loadbalancer functions

Using exponential backoff to lower OpenStack load and reduce API call throttling
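
A sketch wired through the apimachinery helper the tree commonly uses for this; the parameter values are illustrative, not the PR's:

```go
package main

import (
	"time"

	"k8s.io/apimachinery/pkg/util/wait"
)

// withBackoff retries op with exponentially growing delays; op returns
// (true, nil) when done and (false, nil) to retry.
func withBackoff(op func() (bool, error)) error {
	backoff := wait.Backoff{Duration: time.Second, Factor: 1.2, Steps: 5}
	return wait.ExponentialBackoff(backoff, op)
}
```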


**Release note**:

```release-note
NONE
```
2017-05-08 23:00:38 -07:00
Cole Mickens
8b50b83067 azure: loadbalancer: respect svc sessionaffinity
If the Service spec sets sessionAffinity, this is reflected in the
configuration specified for the Azure load balancer.
2017-05-08 20:08:05 -07:00
Balu Dontu
d05b279d9b Filter out IPV6 addresses from NodeAddresses() returned by vSphere 2017-05-08 18:23:06 -07:00
Kubernetes Submit Queue
a062782524 Merge pull request #44258 from wlan0/master
Automatic merge from submit-queue (batch tested with PRs 45508, 44258, 44126, 45441, 45320)

cloud initialize node in external cloud controller

@thockin This PR adds support in the `cloud-controller-manager` to initialize nodes (instead of kubelet, which did it previously)

This also adds support in the kubelet to skip node cloud initialization when `--cloud-provider=external`

Specifically,

Kubelet

1. The kubelet has a new flag called `--provider-id` which uniquely identifies a node in an external DB
2. The kubelet sets a node taint - called "ExternalCloudProvider=true:NoSchedule" if cloudprovider == "external"

Cloud-Controller-Manager

1. The cloud-controller-manager listens for "AddNode" events, and then processes nodes that carry the above taint. It performs the cloud node initialization steps that were previously done by the kubelet.
2. On addition of node, it figures out the zone, region, instance-type, removes the above taint and updates the node.
3. Then periodically queries the cloudprovider for node addresses (which was previously done by the kubelet) and updates the node if there are new addresses
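
A hedged sketch of step 2's taint removal once cloud initialization succeeds; the taint key is copied from the description above, and the merged code may use a different key:

```go
package main

import v1 "k8s.io/api/core/v1"

const cloudTaintKey = "ExternalCloudProvider"

// removeCloudTaint strips the initialization taint so the scheduler can
// place pods on the now-initialized node.
func removeCloudTaint(node *v1.Node) {
	kept := make([]v1.Taint, 0, len(node.Spec.Taints))
	for _, t := range node.Spec.Taints {
		if t.Key != cloudTaintKey {
			kept = append(kept, t)
		}
	}
	node.Spec.Taints = kept
}
```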

```release-note
NONE  
```
2017-05-08 16:34:43 -07:00
Kubernetes Submit Queue
52903829b1 Merge pull request #45311 from vmware/fix_fetch_VM_UUID
Automatic merge from submit-queue (batch tested with PRs 41903, 45311, 45474, 45472, 45501)

Fetch VM UUID from - /sys/class/dmi/id/product_serial

**What this PR does / why we need it**:
Current code fetches the VM UUID using the value reported at `/sys/devices/virtual/dmi/id/product_uuid`. This doesn't work on all distros, for example Ubuntu 16.04 and Fedora.

This PR updates the code to fetch the VM UUID from `/sys/class/dmi/id/product_serial`.
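
A sketch of reading and normalizing the serial (on vSphere guests it typically reads like `VMware-42 0e ...`); the provider's exact parsing may differ:

```go
package main

import (
	"os"
	"strings"
)

// vmUUID derives the VM UUID from the DMI product serial.
func vmUUID() (string, error) {
	b, err := os.ReadFile("/sys/class/dmi/id/product_serial")
	if err != nil {
		return "", err
	}
	s := strings.TrimPrefix(strings.TrimSpace(string(b)), "VMware-")
	return strings.NewReplacer(" ", "", "-", "").Replace(s), nil
}
```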



**Which issue this PR fixes**
fixes #

**Special notes for your reviewer**:
Verified the UUID matches the VM UUID on Ubuntu 16.04, CentOS 7.3, and Photon OS

@BaluDontu @tusharnt

**Release note**:

```release-note
NONE
```
2017-05-08 15:46:37 -07:00
Nathan Button
06779586cd Clean up and restructure. 2017-05-08 10:12:16 -06:00
Nathan Button
ddaac519dc If ElbSecurityGroup is set then use it instead of creating another SG 2017-05-08 10:12:16 -06:00
wlan0
45d2bc06b7 cloud initialize node in external cloud controller 2017-05-05 16:51:45 -07:00
Abrar Shivani
d6ba5d48c1 Add approvers to vsphere cloudprovider 2017-05-05 16:48:23 -07:00
Kubernetes Submit Queue
c6ce00968d Merge pull request #45392 from nicksardo/gce-get-stats
Automatic merge from submit-queue (batch tested with PRs 43006, 45305, 45390, 45412, 45392)

[GCE] Collect latency metric on get/list calls

**What this PR does / why we need it**:
Collects latency & count measurements on GET and LIST operations to GCE cloud.

**Release note**:
```release-note
NONE
```
2017-05-05 16:39:11 -07:00
Kubernetes Submit Queue
17d33ea82e Merge pull request #44830 from NickrenREN/remove-NodeLegacyHostIP
Automatic merge from submit-queue

Remove deprecated NodeLegacyHostIP

**Release note**:
```release-note
Remove deprecated node address type `NodeLegacyHostIP`.
```

ref #44807
2017-05-05 15:38:58 -07:00
David Constenla
a87d34ce40 added extra filter because in openestack/liberty gopher doesn't apply the indicated filters when querying pools and/or listeners
also added @FengyunPan modifications from PR#43055
2017-05-05 11:35:42 +02:00
NickrenREN
edea294ca2 Add exponential backoff to openstack loadbalancer functions
Using exponential backoff to lower OpenStack load and reduce API call throttling
2017-05-05 10:24:32 +08:00
Nick Sardo
63841dadb1 missed a file 2017-05-04 18:26:45 -07:00
Nick Sardo
48d58a15ec Add missing underscore 2017-05-04 18:07:53 -07:00
Nick Sardo
14d2cf85a6 Undo capture of list clusters 2017-05-04 18:06:10 -07:00
Nick Sardo
4a51f8a186 Add metric capture on GETs 2017-05-04 18:04:34 -07:00
divyenpatel
6886d69f12 change way to fetch VM UUID from VM 2017-05-04 12:27:32 -07:00
Cole Mickens
b224e85ebd azure: improve user agent string 2017-05-04 01:10:13 -07:00
Jesse Haka
66e49eecca add possibility to leave AZ empty, and it will automatically generate zone for it
update bazel

fix gofmt

make getzones function lowercase

add az to log
2017-05-03 16:37:20 +03:00
Maxim Ivanov
9ef85a7e6d Restore buildTags in createTags 2017-05-02 06:32:52 +01:00
Maxim Ivanov
54203aaa9e fix AWS tagging to add missing tags only
It seems that the intention of the original code was to build a map of
missing tags and call the AWS API to add just those, but due to a typo
the full set of tags was always (re)added
2017-05-01 16:29:37 +01:00
Kubernetes Submit Queue
fd19b6ce3f Merge pull request #44868 from vmware/dsclustersupport
Automatic merge from submit-queue

Adding datastore cluster support for dynamic and static pv

**What this PR does / why we need it**:

A customer reported that with version 1.4.7 they could use a datastore that is in a cluster as a vSphere volume. After upgrading to 1.6.0, this same exact path no longer works and throws a "datastore not found" error.

This PR is adding support to allow using datastore within cluster for volume provisioning.

**Which issue this PR fixes** : 
fixes https://github.com/kubernetes/kubernetes/issues/44007

**Special notes for your reviewer**:

**Created datastore cluster as below.**

![ds-cluster](https://cloud.githubusercontent.com/assets/22985595/25350381/d2652c24-28d9-11e7-8659-097bd9b844bb.jpg)


**Verified  dynamic PV provisioning and pod creation using datastore (sharedVmfs-0) in a cluster (DatastoreCluster).**
```
$ cat thin_sc.yaml 
kind: StorageClass
apiVersion: storage.k8s.io/v1
metadata:
  name: thin
provisioner: kubernetes.io/vsphere-volume
parameters:
    diskformat: thin
    datastore: DatastoreCluster/sharedVmfs-0
```


```
$ kubectl create -f thin_sc.yaml 
storageclass "thin" created
$ kubectl describe storageclass thin
Name:		thin
IsDefaultClass:	No
Annotations:	<none>
Provisioner:	kubernetes.io/vsphere-volume
Parameters:	datastore=DatastoreCluster/sharedVmfs-0,diskformat=thin
No events.
$ 
```


```
$ kubectl create -f thin_pvc.yaml 
persistentvolumeclaim "thinclaim" created
```

```
$ kubectl get pvc
NAME        STATUS    VOLUME                                     CAPACITY   ACCESSMODES   AGE
thinclaim   Bound     pvc-581805e3-290d-11e7-9ad8-005056bd81ef   2Gi        RWO           1m
```

```
$ kubectl get pv
NAME                                       CAPACITY   ACCESSMODES   RECLAIMPOLICY   STATUS    CLAIM               REASON    AGE
pvc-581805e3-290d-11e7-9ad8-005056bd81ef   2Gi        RWO           Delete          Bound     default/thinclaim             1m

```


```
$ kubectl describe pvc thinclaim
Name:		thinclaim
Namespace:	default
StorageClass:	thin
Status:		Bound
Volume:		pvc-581805e3-290d-11e7-9ad8-005056bd81ef
Labels:		<none>
Capacity:	2Gi
Access Modes:	RWO
Events:
  FirstSeen	LastSeen	Count	From				SubObjectPath	Type		Reason			Message
  ---------	--------	-----	----				-------------	--------	------			-------
  39s		39s		1	{persistentvolume-controller }			Normal		ProvisioningSucceeded	Successfully provisioned volume pvc-581805e3-290d-11e7-9ad8-005056bd81ef using kubernetes.io/vsphere-volume
```


```
$ kubectl describe pv pvc-581805e3-290d-11e7-9ad8-005056bd81ef
Name:		pvc-581805e3-290d-11e7-9ad8-005056bd81ef
Labels:		<none>
StorageClass:	
Status:		Bound
Claim:		default/thinclaim
Reclaim Policy:	Delete
Access Modes:	RWO
Capacity:	2Gi
Message:	
Source:
    Type:	vSphereVolume (a Persistent Disk resource in vSphere)
    VolumePath:	[DatastoreCluster/sharedVmfs-0] kubevols/kubernetes-dynamic-pvc-581805e3-290d-11e7-9ad8-005056bd81ef.vmdk
    FSType:	ext4
No events.

```
```

$ kubectl create -f thin_pod.yaml 
pod "thinclaimpod" created
```
```

$ kubectl get pod
NAME           READY     STATUS    RESTARTS   AGE
thinclaimpod   1/1       Running   0          1m
```


```
$ kubectl describe pod thinclaimpod
Name:		thinclaimpod
Namespace:	default
Node:		node3/172.1.56.0
Start Time:	Mon, 24 Apr 2017 09:46:56 -0700
Labels:		<none>
Status:		Running
IP:		172.1.56.3
Controllers:	<none>
Containers:
  test-container:
    Container ID:	docker://487f77d92b92ee3d833b43967c8d42433e61cd45a58d8d6f462717301597c84f
    Image:		gcr.io/google_containers/busybox:1.24
    Image ID:		docker://sha256:0cb40641836c461bc97c793971d84d758371ed682042457523e4ae701efe7ec9
    Port:		
    Command:
      /bin/sh
      -c
      echo 'hello' > /mnt/volume1/index.html  && chmod o+rX /mnt /mnt/volume1/index.html && while true ; do sleep 2 ; done
    State:		Running
      Started:		Mon, 24 Apr 2017 09:47:16 -0700
    Ready:		True
    Restart Count:	0
    Volume Mounts:
      /mnt/volume1 from test-volume (rw)
      /var/run/secrets/kubernetes.io/serviceaccount from default-token-cqcq1 (ro)
    Environment Variables:	<none>
Conditions:
  Type		Status
  Initialized 	True 
  Ready 	True 
  PodScheduled 	True 
Volumes:
  test-volume:
    Type:	PersistentVolumeClaim (a reference to a PersistentVolumeClaim in the same namespace)
    ClaimName:	thinclaim
    ReadOnly:	false
  default-token-cqcq1:
    Type:	Secret (a volume populated by a Secret)
    SecretName:	default-token-cqcq1
QoS Class:	BestEffort
Tolerations:	<none>
Events:
  FirstSeen	LastSeen	Count	From			SubObjectPath			Type		Reason		Message
  ---------	--------	-----	----			-------------			--------	------		-------
  40s		40s		1	{default-scheduler }					Normal		Scheduled	Successfully assigned thinclaimpod to node3
  22s		22s		1	{kubelet node3}		spec.containers{test-container}	Normal		Pulling		pulling image "gcr.io/google_containers/busybox:1.24"
  21s		21s		1	{kubelet node3}		spec.containers{test-container}	Normal		Pulled		Successfully pulled image "gcr.io/google_containers/busybox:1.24"
  21s		21s		1	{kubelet node3}		spec.containers{test-container}	Normal		Created		Created container with id 487f77d92b92ee3d833b43967c8d42433e61cd45a58d8d6f462717301597c84f
  21s		21s		1	{kubelet node3}		spec.containers{test-container}	Normal		Started		Started container with id 487f77d92b92ee3d833b43967c8d42433e61cd45a58d8d6f462717301597c84f
```


```
$ kubectl delete pod thinclaimpod
pod "thinclaimpod" deleted
```

Verified Disk is detached from the node

```
$ kubectl delete pvc thinclaim
persistentvolumeclaim "thinclaim" deleted
$ kubectl get pv
No resources found.
```
Verified Disk is deleted from the datastore.
Also verified above life cycle using non clustered datastore.

**Verified Using static PV in the datastore cluster for pod provisioning.**
```
# pwd
/vmfs/volumes/sharedVmfs-0/kubevols
# vmkfstools -c 2g test.vmdk
Create: 100% done
# ls
test-flat.vmdk  test.vmdk
```



```
$ cat pod.yaml 
apiVersion: v1
kind: Pod
metadata:
    name: inject-pod
spec:
    containers:
    - name: test-container
      image: gcr.io/google_containers/busybox:1.24
      command: ["/bin/sh", "-c", "echo 'hello' > /mnt/volume1/index.html  && chmod o+rX /mnt /mnt/volume1/index.html && while true ; do sleep 2 ; done"]
      volumeMounts:
      - name: test-volume
        mountPath: /mnt/volume1
    securityContext:
      seLinuxOptions:
        level: "s0:c0,c1"
    restartPolicy: Never
    volumes:
    - name: test-volume
      vsphereVolume:
          volumePath: "[DatastoreCluster/sharedVmfs-0] kubevols/test.vmdk"
          fsType: ext4
```

```
$ kubectl create -f pod.yaml 
pod "inject-pod" created

$ kubectl get pod
NAME         READY     STATUS    RESTARTS   AGE
inject-pod   1/1       Running   0          19s

$ kubectl describe pod inject-pod
Name:		inject-pod
Namespace:	default
Node:		node3/172.1.56.0
Start Time:	Mon, 24 Apr 2017 10:27:22 -0700
Labels:		<none>
Status:		Running
IP:		172.1.56.3
Controllers:	<none>
Containers:
  test-container:
    Container ID:	docker://ed14e058fbcc9c2d8d30ff67bd614e45cf086afbbff070744c5a461e87c45103
    Image:		gcr.io/google_containers/busybox:1.24
    Image ID:		docker://sha256:0cb40641836c461bc97c793971d84d758371ed682042457523e4ae701efe7ec9
    Port:		
    Command:
      /bin/sh
      -c
      echo 'hello' > /mnt/volume1/index.html  && chmod o+rX /mnt /mnt/volume1/index.html && while true ; do sleep 2 ; done
    State:		Running
      Started:		Mon, 24 Apr 2017 10:27:40 -0700
    Ready:		True
    Restart Count:	0
    Volume Mounts:
      /mnt/volume1 from test-volume (rw)
      /var/run/secrets/kubernetes.io/serviceaccount from default-token-cqcq1 (ro)
    Environment Variables:	<none>
Conditions:
  Type		Status
  Initialized 	True 
  Ready 	True 
  PodScheduled 	True 
Volumes:
  test-volume:
    Type:	vSphereVolume (a Persistent Disk resource in vSphere)
    VolumePath:	[DatastoreCluster/sharedVmfs-0] kubevols/test.vmdk
    FSType:	ext4
  default-token-cqcq1:
    Type:	Secret (a volume populated by a Secret)
    SecretName:	default-token-cqcq1
QoS Class:	BestEffort
Tolerations:	<none>
Events:
  FirstSeen	LastSeen	Count	From			SubObjectPath			Type		Reason		Message
  ---------	--------	-----	----			-------------			--------	------		-------
  44s		44s		1	{default-scheduler }					Normal		Scheduled	Successfully assigned inject-pod to node3
  26s		26s		1	{kubelet node3}		spec.containers{test-container}	Normal		Pulled		Container image "gcr.io/google_containers/busybox:1.24" already present on machine
  26s		26s		1	{kubelet node3}		spec.containers{test-container}	Normal		Created		Created container with id ed14e058fbcc9c2d8d30ff67bd614e45cf086afbbff070744c5a461e87c45103
  26s		26s		1	{kubelet node3}		spec.containers{test-container}	Normal		Started		Started container with id ed14e058fbcc9c2d8d30ff67bd614e45cf086afbbff070744c5a461e87c45103
```


**Release note**:

```release-note
none
```

cc: @BaluDontu @moserke @tusharnt @pdhamdhere
2017-04-28 11:38:59 -07:00
Kubernetes Submit Queue
9afeabb642 Merge pull request #43477 from gnufied/cloudprovider-aws-metrics
Automatic merge from submit-queue

Start recording cloud provider metrics for AWS

**What this PR does / why we need it**:

This PR implements support for emitting metrics from AWS about storage operations.

**Which issue this PR fixes** 

Fixes https://github.com/kubernetes/features/issues/182

**Release note**:
```
Add support for emitting metrics from AWS cloudprovider about storage operations.
```
2017-04-28 01:35:17 -07:00
divyenpatel
821f8cd9b9 datastore cluster support
fix verify-gofmt failure
2017-04-27 17:12:45 -07:00
Kubernetes Submit Queue
09747e6bee Merge pull request #44510 from bowei/gce-metrics
Automatic merge from submit-queue (batch tested with PRs 44124, 44510)

Add metrics to all major gce operations (latency, errors)

```release-note
Add metrics to all major gce operations {latency, errors}

The new metrics are:

  cloudprovider_gce_api_request_duration_seconds{request, region, zone}
  cloudprovider_gce_api_request_errors{request, region, zone}
 
`request` is the specific function that is used.
`region` is the target region (Will be "<n/a>" if not applicable)
`zone` is the target zone (Will be "<n/a>" if not applicable)

Note: this fixes some issues with the previous implementation of
metrics for disks:
- Time duration tracked was of the initial API call, not the entire
  operation.
- Metrics label tuple would have resulted in many independent
  histograms stored, one for each disk. (Did not aggregate well).
```
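
A sketch of metrics with exactly these names and labels via the Prometheus Go client; the registration and timing helper shown are illustrative, not the PR's code:

```go
package main

import (
	"time"

	"github.com/prometheus/client_golang/prometheus"
)

var apiLatency = prometheus.NewHistogramVec(
	prometheus.HistogramOpts{
		Name: "cloudprovider_gce_api_request_duration_seconds",
		Help: "Latency of GCE API calls.",
	},
	[]string{"request", "region", "zone"},
)

// observe records the full operation duration, not just the initial call.
func observe(request, region, zone string, start time.Time) {
	apiLatency.WithLabelValues(request, region, zone).
		Observe(time.Since(start).Seconds())
}

func init() { prometheus.MustRegister(apiLatency) }
```
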
2017-04-27 16:14:58 -07:00
Bowei Du
ee847ebf8a Add metrics to all major gce operations {latency, errors}
The new metrics are:

  cloudprovider_gce_api_request_duration_seconds{request, region, zone}
  cloudprovider_gce_api_request_errors{request, region, zone}

`request` is the specific function that is used.
`region` is the target region (Will be "<n/a>" if not applicable)
`zone` is the target zone (Will be "<n/a>" if not applicable)

Note: this fixes some issues with the previous implementation of
metrics for disks:
- Time duration tracked was of the initial API call, not the entire
  operation.
- Metrics label tuple would have resulted in many independent
  histograms stored, one for each disk. (Did not aggregate well).
2017-04-27 12:49:30 -07:00
Hemant Kumar
f2aa330a38 Start recording cloud provider metrics for AWS
Let's start recording storage metrics for AWS.
2017-04-27 15:26:32 -04:00
Balu Dontu
6228765b43 Optimize the time taken to create Persistent volumes with VSAN storage capabilities at scale and handle VPXD crashes 2017-04-26 13:33:21 -07:00
Kubernetes Submit Queue
ce2f0b1937 Merge pull request #44387 from jamiehannaford/fix-port-allocation
Automatic merge from submit-queue

Use provided VipPortID for OpenStack LB

**What this PR does / why we need it**:

When creating an OpenStack LoadBalancer, Kubernetes will search through the tenant trying to match the LB's VIP with a port. This is problematic because multiple ports may have the same fixed IP, leading to routing inconsistencies. We should use the port ID provided by the LB's response body instead.
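
A sketch of the fix with gophercloud types: associate the floating IP with the LB's own vip_port_id instead of a port found by scanning fixed IPs. The wiring shown is an assumption of the general shape, not the PR's diff:

```go
package main

import (
	"github.com/gophercloud/gophercloud/openstack/networking/v2/extensions/layer3/floatingips"
	"github.com/gophercloud/gophercloud/openstack/networking/v2/extensions/lbaas_v2/loadbalancers"
)

// floatingIPOpts targets the LB's VIP port directly.
func floatingIPOpts(lb *loadbalancers.LoadBalancer, floatingNetworkID string) floatingips.CreateOpts {
	return floatingips.CreateOpts{
		FloatingNetworkID: floatingNetworkID,
		PortID:            lb.VipPortID,
	}
}
```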

**Which issue this PR fixes**:

https://github.com/kubernetes/kubernetes/issues/43909

**Special notes for your reviewer**:

Since this involves non-deterministic testing, it'd be best if we can run this in a staging environment for a few days before merging (say until early next week).

**Release note**:
```release-note
Fixes an issue during LB creation where ports were incorrectly assigned to a floating IP
```
2017-04-23 20:50:49 -07:00
NickrenREN
7d00e5cfb6 remove deprecated NodeLegacyHostIP 2017-04-24 11:01:25 +08:00
Kubernetes Submit Queue
cdc0cbdac4 Merge pull request #41498 from mikebryant/cinder-virtio-scsi
Automatic merge from submit-queue

cinder: Add support for the KVM virtio-scsi driver

**What this PR does / why we need it**:

The VirtIO SCSI driver for KVM changes the way disks appear in /dev/disk/by-id.
This adds support for the new format.
Without this, volume attaching on an openstack cluster using this kvm driver doesn't work
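
A sketch of probing both formats; with virtio-scsi, disks surface under a scsi- prefix in /dev/disk/by-id, and the 20-character truncation mirrors the serial-length limit. Treat the details as assumptions of the general idea:

```go
package main

import "fmt"

// candidateDevicePaths lists the by-id paths to probe for a Cinder volume.
func candidateDevicePaths(volumeID string) []string {
	short := volumeID
	if len(short) > 20 {
		short = short[:20] // serial is truncated by the hypervisor/kernel
	}
	return []string{
		"/dev/disk/by-id/virtio-" + short,
		fmt.Sprintf("/dev/disk/by-id/scsi-0QEMU_QEMU_HARDDISK_%s", short),
	}
}
```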

**Special notes for your reviewer**:
Does this need e2e tests? I couldn't find anywhere to add another openstack configuration used in the e2e tests.

Wiki page about this: https://wiki.openstack.org/wiki/Virtio-scsi-for-bdm

**Release note**:

```release-note
cinder: Add support for the KVM virtio-scsi driver
```
2017-04-21 01:55:23 -07:00
Kubernetes Submit Queue
870585e8e1 Merge pull request #44651 from knightXun/string
Automatic merge from submit-queue (batch tested with PRs 44594, 44651)

remove strings.compare(), use string native operation

I noticed we use strings.Compare() in some code; we can remove it and use native string operations.
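
The replacement is mechanical; for the record:

```go
package main

// Native operations the PR prefers over strings.Compare:
func equal(a, b string) bool { return a == b } // strings.Compare(a, b) == 0
func less(a, b string) bool  { return a < b }  // strings.Compare(a, b) < 0
```
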
2017-04-20 14:08:59 -07:00
Kubernetes Submit Queue
223a8e598d Merge pull request #44238 from zhouhaibing089/no-flavor-usage
Automatic merge from submit-queue (batch tested with PRs 44555, 44238)

openstack: remove field flavor_to_resource

I believe there is no usage of `flavor_to_resource`, and I think there is no need to build that information either.

cc @anguslees 

**Release note:**

```
NONE
```
2017-04-20 11:02:58 -07:00
Kubernetes Submit Queue
fba605ce05 Merge pull request #44661 from xiangpengzhao/fix-vsphere-panic
Automatic merge from submit-queue (batch tested with PRs 44687, 44689, 44661)

Fix panic when using `kubeadm init` with vsphere cloud-provider

**What this PR does / why we need it**:
Check if the reference is nil when finding machine reference by UUID.

**Which issue this PR fixes** *(optional, in `fixes #<issue number>(, fixes #<issue_number>, ...)` format, will close that issue when PR gets merged)*: fixes #44603

**Special notes for your reviewer**:
This is just a quick fix for the panic.

**Release note**:

```release-note
NONE
```
2017-04-19 18:52:59 -07:00
Kubernetes Submit Queue
36c5d12cf4 Merge pull request #44452 from gnufied/fix-aws-device-failure-reuse
Automatic merge from submit-queue

Implement LRU for AWS device allocator

On failure to attach do not use device from pool
    
In an AWS environment, when an attach fails on the node,
let's not use the device from the pool. This makes sure
that a bigger pool of devices is available.
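
A hedged sketch of the idea: hand out the least-recently-used device name and push names that failed to attach to the back of the queue so they aren't immediately retried. Type and method names are illustrative:

```go
package main

// deviceAllocator hands out block device suffixes in LRU order.
type deviceAllocator struct {
	order []string // front = least recently used
}

// Get pops the least-recently-used name and re-queues it at the back.
func (d *deviceAllocator) Get() string {
	name := d.order[0]
	d.order = append(d.order[1:], name)
	return name
}

// Deprioritize moves a name that failed to attach to the back of the queue.
func (d *deviceAllocator) Deprioritize(name string) {
	out := make([]string, 0, len(d.order))
	for _, n := range d.order {
		if n != name {
			out = append(out, n)
		}
	}
	d.order = append(out, name)
}
```
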
2017-04-19 16:38:13 -07:00
Andrew O'Neill
e397ca4ba7 combine health check methods
I changed the function signature to contain protocol, port, and path.
When the service has a health check path and port set it will create an
HTTP health check that corresponds to the port and path. If those are
not set it will create a standard TCP health check on the first port
from the listeners that is not nil. As far as I know, there is no way to
tell if a Health Check should be HTTP vs HTTPS.
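
A sketch of that decision using classic ELB's target syntax (`HTTP:port/path`, `TCP:port`); the function shape is an assumption:

```go
package main

import "fmt"

// healthCheckTarget builds an ELB health check target string: HTTP with a
// path when one is configured, plain TCP on the port otherwise.
func healthCheckTarget(protocol string, port int64, path string) string {
	if path != "" {
		return fmt.Sprintf("%s:%d%s", protocol, port, path) // e.g. "HTTP:30123/healthz"
	}
	return fmt.Sprintf("TCP:%d", port)
}
```
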
2017-04-19 14:12:28 -07:00
Hemant Kumar
a16ee2f514 Implement LRU for AWS device allocator
In AWS environment when attach fails on the node
lets not use device from the pool. This makes sure we
don't reuse recently freed devices
2017-04-19 16:52:57 -04:00
Kubernetes Submit Queue
712ccf3fa4 Merge pull request #44082 from zetaab/fixzone2
Automatic merge from submit-queue

use availability_zone instead of availability (update godep for gophercloud)

**What this PR does / why we need it**: there is a typo in the JSON variable name

**Which issue this PR fixes**: fixes #44032

**Special notes for your reviewer**: our OpenStack environment's region name is not nova, so I tested this and it works now

All Cinder block storage APIs use the variable name availability_zone instead of availability. Docs:

v3:
https://developer.openstack.org/api-ref/block-storage/v3/index.html?expanded=create-a-volume-detail#create-a-volume

v2:
https://developer.openstack.org/api-ref/block-storage/v2/index.html?expanded=create-volume-detail#create-volume

I could not find v1 documentation anymore from openstack pages. However, https://developer.rackspace.com/docs/cloud-block-storage/v1/api-reference/cbs-volumes-operations/#create-a-volume documentation says also availability_zone is the correct one. 

As mentioned in https://github.com/kubernetes/kubernetes/issues/44032#issuecomment-291488494, the openstack CLI also uses availability_zone
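
The fix amounts to a struct-tag change in the request type; an illustrative shape (not gophercloud's actual struct):

```go
package main

// createOpts mirrors Cinder's create-volume request body.
type createOpts struct {
	Size             int    `json:"size"`
	AvailabilityZone string `json:"availability_zone"` // was "availability"
}
```
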
2017-04-19 03:26:25 -07:00
xiangpengzhao
be3fd5bb90 Add test case for getVMName 2017-04-19 17:16:39 +08:00
xiangpengzhao
d4cbea5902 Fix panic when using kubeadm init with vsphere cloud-provider 2017-04-19 16:03:08 +08:00
Kubernetes Submit Queue
d2060ade08 Merge pull request #43510 from karataliu/azurelb
Automatic merge from submit-queue (batch tested with PRs 44645, 44639, 43510)

Add support for Azure internal load balancer

**Which issue this PR fixes**
Fixes https://github.com/kubernetes/kubernetes/issues/38901

**What this PR does / why we need it**:
This PR is to add support for Azure internal load balancer

Currently, when exposing a service with LoadBalancer type, the Azure provider assumes that it requires a public load balancer.
Thus it will request a public IP address resource, and expose the service via that public IP.
In this case we're not able to apply private IP addresses (within the cluster virtual network) for the service.

**Special notes for your reviewer**:
1. Clarification:
a. 'LoadBalancer' refers to an option for 'type' field under ServiceSpec. See https://kubernetes.io/docs/resources-reference/v1.5/#servicespec-v1
b. 'Azure LoadBalancer' refers a type of Azure resource. See https://docs.microsoft.com/en-us/azure/load-balancer/load-balancer-overview

2. For a single Azure LoadBalancer, all frontend IPs should reference either a subnet or a publicIpAddress, which means that it could be either an Internet-facing load balancer or an internal one.
For current provider, it would create an Azure LoadBalancer with generated '${loadBalancerName}' for all services with 'LoadBalancer' type.
This PR introduces name '${loadBalancerName}-internal' for a separate Azure Load Balancer resource, used by all the service that requires internal load balancers.

3. This PR introduces a new annotation for the internal load balancer type behaviour:
a. When the annotation value is set to 'false' or not set, it falls back to the original behaviour, assuming that the user is requesting a public load balancer;
b. When the annotation value is set to 'true', the following rule applies depending on the 'loadBalancerIP' field of the ServiceSpec:
   - If 'loadBalancerIP' is not set, it will create a load balancer rule with a dynamically assigned frontend IP under the cluster subnet;
   - If 'loadBalancerIP' is set, it will create a load balancer rule with the frontend IP set to the given value. If the given value is not valid, that is, it does not fall into the cluster subnet range, then the creation will fail.

4. Users may change the load balancer type by applying the annotation to the service at runtime.
In this case, the load balancer rule would need to be 'switched' between the internal one and the external one.
For example, if we have a service with an internal load balancer and the user removes the annotation, making it a public one: before creating rules in the public Azure LoadBalancer, we'll need to clean up rules in the internal Azure LoadBalancer.
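
A sketch of the annotation check; the key follows the provider's naming convention, and the merged PR may differ:

```go
package main

const azureInternalLBAnnotation = "service.beta.kubernetes.io/azure-load-balancer-internal"

// wantsInternalLB reports whether the service asked for an internal
// Azure load balancer.
func wantsInternalLB(annotations map[string]string) bool {
	return annotations[azureInternalLBAnnotation] == "true"
}
```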

**Release note**:
2017-04-18 23:22:04 -07:00
xu fei
b0a3f492af remove strings.compare(), use string native operation 2017-04-19 09:32:29 +08:00
zhouhaibing089
8c021ea884 openstack: remove field flavor_to_resource 2017-04-17 14:01:04 +08:00
Chao Xu
d4850b6c2b move pkg/api/v1/helpers.go to subpackage 2017-04-14 14:25:11 -07:00
Mike Danese
a05c3c0efd autogenerated 2017-04-14 10:40:57 -07:00
Kubernetes Submit Queue
f1c0c0a73c Merge pull request #42395 from nicksardo/gce-src-ranges
Automatic merge from submit-queue

Adding load balancer src cidrs to GCE cloudprovider

**What this PR does / why we need it**:
As of January 31st, 2018, GCP will be sending health checks and L7 traffic from two CIDRs, and legacy health checks from three CIDRs. This PR moves them into the cloudprovider package and provides a flag for override.

Another PR will need to address firewall rule creation for external L4 network loadbalancing #40778

**Which issue this PR fixes**
Step one of #40778
Step one of https://github.com/kubernetes/ingress/issues/197

**Release note**:
```release-note
Add flags to GCE cloud provider to override known L4/L7 proxy & health check source cidrs
```
2017-04-12 19:57:43 -07:00
Jamie Hannaford
622c69c1e5 Use provided VipPortID for LB 2017-04-12 14:13:12 +02:00
Kubernetes Submit Queue
ceccd305ce Merge pull request #42147 from bowei/ip-alias-2
Automatic merge from submit-queue

Add support for IP aliases for pod IPs (GCP alpha feature)

```release-note
Adds support for allocation of pod IPs via IP aliases.

# Adds KUBE_GCE_ENABLE_IP_ALIASES flag to the cluster up scripts (`kube-{up,down}.sh`).

KUBE_GCE_ENABLE_IP_ALIASES=true will enable allocation of PodCIDR ips
using the ip alias mechanism rather than using routes. This feature is currently
only available on GCE.

## Usage
$ CLUSTER_IP_RANGE=10.100.0.0/16 KUBE_GCE_ENABLE_IP_ALIASES=true bash -x cluster/kube-up.sh

# Adds CloudAllocator to the node CIDR allocator (kubernetes-controller manager).

If CIDRAllocatorType is set to `CloudCIDRAllocator`, then allocation
of CIDRs is instead done by the external cloud provider and
the node controller is only responsible for reflecting the allocation
into the node spec.

- Splits off the rangeAllocator from the cidr_allocator.go file.
- Adds cloudCIDRAllocator, which is used when the cloud provider allocates
  the CIDR ranges externally. (GCE support only)
- Updates RBAC permission for node controller to include PATCH
```
2017-04-11 22:09:24 -07:00
Bowei Du
f61590c221 Adds support for PodCIDR allocation from the GCE cloud provider
If CIDRAllocatorType is set to `CloudCIDRAllocator`, then allocation
of CIDRs is instead done by the external cloud provider and
the node controller is only responsible for reflecting the allocation
into the node spec.

- Splits off the rangeAllocator from the cidr_allocator.go file.
- Adds cloudCIDRAllocator, which is used when the cloud provider allocates
  the CIDR ranges externally. (GCE support only)
- Updates RBAC permission for node controller to include PATCH
2017-04-11 14:07:54 -07:00
Kubernetes Submit Queue
6283077fb5 Merge pull request #43545 from luomiao/vsphere-remove-loginInfo-on-workers-update
Automatic merge from submit-queue (batch tested with PRs 43545, 44293, 44221, 43888)

Remove credentials on worker nodes for vSphere cloud provider.

**What this PR does / why we need it**:
Remove the dependency of login information on worker nodes for vsphere cloud provider:
1. VM Name is required to be set in the cloud provider configuration file.
2. Remove the requirement of login for Instance functions when querying local node information.

**Which issue this PR fixes**: fixes https://github.com/kubernetes/kubernetes/issues/35339

**Release note**:
2017-04-11 12:18:17 -07:00
Bowei Du
f5be63e0f7 Add PodCIDRs API for GCE (Google cloud alpha feature) 2017-04-10 12:05:02 -07:00
Kubernetes Submit Queue
41e9b80e5f Merge pull request #44235 from kubermatic/feature/configurable-aws-subnetid-routetableid
Automatic merge from submit-queue

Specify subnetid and routetableid via cloud provider config

**What this PR does / why we need it**:
This is a fix for https://github.com/kubernetes/kubernetes/pull/39996 which is needed since 1.6

Changes introduced in 1.6 partially broke (LoadBalancer) support for running the master components in a different environment (different AWS account / on premise). This PR adds support for specifying the Subnet & RouteTable to use via the cloud provider config.
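
The shape of the config addition (gcfg-style; field names follow the PR description, and the exact struct is an assumption):

```go
package main

// CloudConfig sketches the AWS provider's config file section.
type CloudConfig struct {
	Global struct {
		VPC          string
		SubnetID     string
		RouteTableID string
	}
}
```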

**Release note**:

```release-note
AWS cloud provider: fix support running the master with a different AWS account or even on a different cloud provider than the nodes.
```
2017-04-08 11:19:21 -07:00
Henrik Schmidt
1c1f02fde3 Specify subnetid and routetableid via cloud provider config 2017-04-08 11:44:45 +02:00
Jesse Haka
5aad93abf5 fix format 2017-04-08 11:08:08 +03:00
Jesse Haka
2fb9fc4647 use AvailabilityZone instead of Availability 2017-04-08 10:51:49 +03:00
Kubernetes Submit Queue
9c9326114c Merge pull request #43777 from wlan0/provider-id
Automatic merge from submit-queue

move ProviderID indexed methods to the right location

@bowei
2017-04-07 19:57:48 -07:00
Dong Liu
f20e9bf66d Update message log level for azure_loadbalancer. 2017-04-07 14:32:29 +08:00
Jan Safranek
67e1f2c08e Add e2e tests for storageclass
This reverts commit 22352d2844 and makes
gce.GetDiskByNameUnknownZone a public GCE cloud provider method.
2017-04-05 11:49:49 +02:00
Kubernetes Submit Queue
4ee6782db5 Merge pull request #42512 from kubermatic/scheeles-aws
Automatic merge from submit-queue (batch tested with PRs 43925, 42512)

AWS: add KubernetesClusterID as additional option when VPC is set

This is a small enhancement after the PRs https://github.com/kubernetes/kubernetes/pull/41695 and  https://github.com/kubernetes/kubernetes/pull/39996
## Release Notes
```release-note
AWS cloud provider: allow to set KubernetesClusterID or KubernetesClusterTag in combination with VPC.
```
2017-04-03 12:46:17 -07:00
Kubernetes Submit Queue
449a13c44c Merge pull request #40338 from gnufied/cloudprovider-gce-metrics
Automatic merge from submit-queue

Implement API usage metrics for gce storage

**What this PR does / why we need it**:

This PR implements support for emitting metrics from GCE about storage operations.

**Which issue this PR fixes** 

Fixes https://github.com/kubernetes/features/issues/182

**Release note**:
```
Add support for emitting metrics from GCE cloudprovider about storage operations.
```
2017-03-30 12:42:02 -07:00
Kubernetes Submit Queue
289ef62442 Merge pull request #43644 from nicksardo/gce-healthchecks
Automatic merge from submit-queue (batch tested with PRs 42617, 43247, 43509, 43644, 43820)

[GCE] Support legacy-https and generic health checks

**What this PR does / why we need it**:
- Adds CRUD functions to manage `compute.HttpsHealthChecks` 
The legacy HTTPS healthchecks will be used by the GLBC (GCE Load balancer Controller)

- Adds CRUD functions to manage `compute.HealthChecks`
These are required for the internal load balancer

- Removes the logic that disregards NotFound errors on DeleteHttpHealthChecks as this is useful information for callers. Here are the three known invocations within kubernetes: 
[gce/gce_loadbalancer.go#L457](bc6e77d42f/pkg/cloudprovider/providers/gce/gce_loadbalancer.go (L457)): Only prints warning that HC wasn't deleted  -> acceptable
[gce/gce_loadbalancer.go#L465](bc6e77d42f/pkg/cloudprovider/providers/gce/gce_loadbalancer.go (L465)): Err is ignored if not nil  -> acceptable
[e2e/framework/ingress_utils.go#L530](bc6e77d42f/test/e2e/framework/ingress_utils.go (L530)): Already checks if is NotFound error -> acceptable

**Which issue this PR fixes** *(optional, in `fixes #<issue number>(, fixes #<issue_number>, ...)` format, will close that issue when PR gets merged)*:
Step one of https://github.com/kubernetes/ingress/issues/494
Step one of #33483 

**Special notes for your reviewer**:

**Release note**:

```release-note
NONE
```
2017-03-29 16:05:25 -07:00
Miao Luo
6d1c4a3c49 Remove login info on workers for vsphere cloud provider.
Remove the dependency on login information for worker nodes in the vSphere cloud provider:
1. The VM name must be set in the cloud provider configuration file.
2. Instance functions no longer require a login when querying local node information.
2017-03-28 23:20:38 -07:00
Cole Mickens
21250f1748 azure: reduce poll delay for all clients to 5 sec 2017-03-28 18:18:36 -07:00
Cole Mickens
5c21498dbf run update-bazel.sh 2017-03-28 18:08:22 -07:00
Cole Mickens
6eb7a1a366 azure: add k8s info to user-agent string 2017-03-28 15:17:03 -07:00
wlan0
655dfd1196 move ProviderID indexed methods to the right location 2017-03-28 15:08:03 -07:00
Hemant Kumar
c4aaf47282 Implement API usage metrics for gce
This PR implements tracking of GCE API usage via prometheus metrics.
2017-03-28 16:33:21 -04:00
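As a rough sketch of the pattern, assuming client_golang and illustrative metric/label names (not necessarily the PR's exact ones):

```go
package gce

import (
	"time"

	"github.com/prometheus/client_golang/prometheus"
)

// apiLatency tracks GCE API call duration, labeled by request type and
// region. Metric and label names here are illustrative.
var apiLatency = prometheus.NewHistogramVec(
	prometheus.HistogramOpts{
		Name: "cloudprovider_gce_api_request_duration_seconds",
		Help: "Latency of GCE API calls made by the cloud provider.",
	},
	[]string{"request", "region"},
)

func init() {
	prometheus.MustRegister(apiLatency)
}

// observeAPICall runs a storage operation and records how long it took.
func observeAPICall(request, region string, op func() error) error {
	start := time.Now()
	err := op()
	apiLatency.WithLabelValues(request, region).Observe(time.Since(start).Seconds())
	return err
}
```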
wlan0
a68c783dc8 Use ProviderID to address nodes in the cloudprovider
The cloudprovider is being refactored out of kubernetes core. This is being
done by moving all the cloud-specific calls from kube-apiserver, kubelet and
kube-controller-manager into a separately maintained binary (by vendors) called
cloud-controller-manager. The kubelet relies on the cloudprovider to detect
information about the node it is running on. Some of the cloudproviders
obtained this information by querying the local host. In the new world of
things, local information cannot be relied on, since cloud-controller-manager
will not run on every node; only one active instance of it will run in the cluster.

Today, all calls to the cloudprovider are based on the nodename. Nodenames are
unique within the kubernetes cluster, but generally not unique within the cloud.
This model of addressing nodes by nodename will not work in the future because
local services cannot be queried to uniquely identify a node in the cloud. Therefore,
I propose that we perform all cloudprovider calls based on ProviderID. This ID is
a unique identifier for a node in an external database (such as the instance ID in AWS).
2017-03-27 23:13:13 -07:00
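A hedged sketch of what ProviderID-based addressing could look like; the interface and the parsing helper are simplified stand-ins, and the AWS-style example value is hypothetical.

```go
package cloudprovider

import (
	"fmt"
	"strings"
)

// SplitProviderID parses a ProviderID of the form "<provider>://<id-path>",
// for example "aws:///us-west-2a/i-0123456789abcdef0" (the path layout is
// cloud-specific; this example value is hypothetical).
func SplitProviderID(providerID string) (provider, id string, err error) {
	parts := strings.SplitN(providerID, "://", 2)
	if len(parts) != 2 || parts[0] == "" || parts[1] == "" {
		return "", "", fmt.Errorf("unparsable ProviderID %q", providerID)
	}
	return parts[0], parts[1], nil
}

// InstancesByProviderID sketches the direction described above: address
// instances by a cloud-unique ProviderID rather than by node name.
type InstancesByProviderID interface {
	NodeAddressesByProviderID(providerID string) ([]string, error)
	InstanceTypeByProviderID(providerID string) (string, error)
}
```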
Kubernetes Submit Queue
3843108081 Merge pull request #42974 from vmware/VSANPolicyProvisioningForKubernetesOnKubernetesRepo
Automatic merge from submit-queue (batch tested with PRs 42835, 42974)

VSAN policy support for storage volume provisioning inside kubernetes

vSphere users will have the ability to specify custom Virtual SAN storage capabilities during dynamic volume provisioning. You can now define storage requirements, such as performance and availability, in the form of storage capabilities during dynamic volume provisioning. The storage capability requirements are converted into a Virtual SAN policy, which is then pushed down to the Virtual SAN layer when a storage volume (virtual disk) is being created. The virtual disk is distributed across the Virtual SAN datastore to meet the requirements.

For example, User creates a storage class with VSAN storage capabilities:

```
kind: StorageClass
apiVersion: storage.k8s.io/v1beta1
metadata:
  name: slow
provisioner: kubernetes.io/vsphere-volume
parameters:
  hostFailuresToTolerate: "2"
  diskStripes: "1"
  cacheReservation: "20"
  datastore: VSANDatastore
```

The vSphere Cloud provider provisions a virtual disk (VMDK) on VSAN with the policy configured to the disk.

When you know the storage requirements of the application being deployed in a container, you can specify those capabilities when you create the storage class inside Kubernetes.

@pdhamdhere @tthole @abrarshivani @divyenpatel 

**Release note**:

```release-note
None
```
2017-03-27 17:00:23 -07:00
Kubernetes Submit Queue
31e596e5ba Merge pull request #40423 from mkutsevol/feature/openstack_cinder_v1_2_auto
Automatic merge from submit-queue (batch tested with PRs 43681, 40423, 43562, 43008, 43381)

Openstack cinder v1/v2/auto API support

**What this PR does / why we need it**:
It adds support for the v2 cinder API, plus autodetection of the available cinder API version (as is already done for load balancers).
**Which issue this PR fixes** *(optional, in `fixes #<issue number>(, fixes #<issue_number>, ...)` format, will close that issue when PR gets merged)*: fixes #39572

**Special notes for your reviewer**:
Based on work by @anguslees. The first two commits are just rebased from https://github.com/kubernetes/kubernetes/pull/36344 which already had a lgtm by @jbeda 

**Release note**:

```
Add support for v2 cinder API for openstack cloud provider. By default it autodetects the available version.
```
2017-03-27 12:49:22 -07:00
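A minimal sketch of the auto-detection pattern; probeV2/probeV1 are hypothetical stand-ins for endpoint discovery against the OpenStack service catalog, not gophercloud calls.

```go
package openstack

import "fmt"

// detectCinderVersion resolves the configured block storage API version.
// "auto" (or empty) probes v2 first and falls back to v1.
func detectCinderVersion(configured string, probeV2, probeV1 func() bool) (string, error) {
	switch configured {
	case "v1", "v2":
		return configured, nil
	case "", "auto":
		if probeV2() {
			return "v2", nil
		}
		if probeV1() {
			return "v1", nil
		}
		return "", fmt.Errorf("no supported cinder API version found")
	default:
		return "", fmt.Errorf("unknown blockstorage version %q", configured)
	}
}
```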
Balu Dontu
dbe94833eb VSAN policy support for storage volume provisioning inside kubernetes 2017-03-27 12:43:01 -07:00
Dong Liu
ed36aba8ba Add separate func 'cleanupLoadBalancer' and 'cleanupPublicIP' for Azure. 2017-03-27 15:19:16 +08:00
Dong Liu
54664d08dd Update reconcileSecurityGroup logic for Azure, add tests. 2017-03-27 12:52:21 +08:00
Dong Liu
4f44bf5e5a Update EnsureLoadBalancer, EnsureLoadBalancerDeleted for azure. 2017-03-27 12:51:56 +08:00
Dong Liu
7bf15f66fe Add annotation for internal load balancer type in Azure. 2017-03-27 12:39:29 +08:00
Kubernetes Submit Queue
3fcb7cb377 Merge pull request #42170 from rootfs/azure-file-prv
Automatic merge from submit-queue (batch tested with PRs 43642, 43170, 41813, 42170, 41581)

Enable storage class support in Azure File volume

**What this PR does / why we need it**:
Support StorageClass in Azure file volume

**Which issue this PR fixes** *(optional, in `fixes #<issue number>(, fixes #<issue_number>, ...)` format, will close that issue when PR gets merged)*: fixes #

**Special notes for your reviewer**:

**Release note**:

```release-note
Support StorageClass in Azure file volume

```
2017-03-24 19:04:28 -07:00
Nick Sardo
baab99b823 Adding load balancer src ranges; support flag overrides 2017-03-24 16:36:19 -07:00
Nick Sardo
93cb2b41de Adding HTTPS and generic health checks to GCE 2017-03-24 14:24:42 -07:00
Kubernetes Submit Queue
bc6e77d42f Merge pull request #43635 from bowei/gce-owner
Automatic merge from submit-queue

Add bowei to OWNERS of cloudproviders/gce

```release-note
none
```
2017-03-24 14:16:48 -07:00
Kubernetes Submit Queue
fb537762fc Merge pull request #42297 from YuPengZTE/devErrorf
Automatic merge from submit-queue (batch tested with PRs 42237, 42297, 42279, 42436, 42551)

should replace errors.New(fmt.Sprintf(...)) with fmt.Errorf(...)

Signed-off-by: yupengzte <yu.peng36@zte.com.cn>



**What this PR does / why we need it**:

**Which issue this PR fixes** *(optional, in `fixes #<issue number>(, fixes #<issue_number>, ...)` format, will close that issue when PR gets merged)*: fixes #

**Special notes for your reviewer**:

**Release note**:

```release-note
```
2017-03-24 14:16:23 -07:00
Bowei Du
0ab072dde8 Add bowei to OWNERS of cloudproviders/gce 2017-03-24 13:18:13 -07:00
Kubernetes Submit Queue
264c8b4340 Merge pull request #42034 from brendandburns/azure
Automatic merge from submit-queue (batch tested with PRs 41139, 41186, 38882, 37698, 42034)

Add support for bring-your-own ip address for Services on Azure

@colemickens @codablock
2017-03-24 12:33:29 -07:00
Kubernetes Submit Queue
92f8d9be38 Merge pull request #41696 from justinsb/rationalize_aws_owners
Automatic merge from submit-queue

Add approvers to the aws OWNERS file

Without this it was picking up reviewers from a much higher directory.

```release-note
NONE
```
2017-03-24 10:27:26 -07:00
Kubernetes Submit Queue
7eb02f54cd Merge pull request #42610 from timchenxiaoyu/wheretypo
Automatic merge from submit-queue

fix where typo
2017-03-24 10:26:10 -07:00
Bowei Du
dc1e614a72 Split the GCE cloud provider into more manageable chunks
Each major interface is now in its own file. Any package-private functions
referenced only by a particular module were also moved to the corresponding
file. All common helper functions were moved to gce_util.go.

This change is a pure movement of code; no semantic changes were made.
2017-03-23 14:40:16 -07:00
Andrew O'Neill
864ea2fafd pkg/cloudprovider/providers/aws: add node port health check
If a custom health check is set via the beta annotation on a service, it
should be used for the ELB health check. This patch adds support for that.
2017-03-23 12:55:29 -07:00
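A small sketch of the annotation-driven selection; the beta annotation keys below match the era's conventions but are assumptions in this sketch, and the real provider then wires the port into the ELB health check configuration.

```go
package aws

import "strconv"

// healthCheckNodePort returns the node port to use for the ELB health check
// when a service requests local-only traffic via beta annotations.
func healthCheckNodePort(annotations map[string]string) (int64, bool) {
	if annotations["service.beta.kubernetes.io/external-traffic"] != "OnlyLocal" {
		return 0, false
	}
	port, err := strconv.ParseInt(annotations["service.beta.kubernetes.io/healthcheck-nodeport"], 10, 64)
	if err != nil || port == 0 {
		return 0, false
	}
	return port, true
}
```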
Kubernetes Submit Queue
a84f100faa Merge pull request #42422 from vmware/fix-42399.kerneltime
Automatic merge from submit-queue

Fix adding disks to more than one scsi adapter. Fixes #42399

**What this PR does / why we need it**: Allows a single node to use more than 16 disks.

**Which issue this PR fixes** *(optional, in `fixes #<issue number>(, fixes #<issue_number>, ...)` format, will close that issue when PR gets merged)*: fixes #42399

**Special notes for your reviewer**: 

**Release note**:

```release-note
Fix adding disks to more than one scsi adapter.
```
2017-03-22 19:23:19 -07:00
Maxym Kutsevol
89f596f408 Update deps 2017-03-21 20:46:06 +02:00
Maxym Kutsevol
2c05bb5336 Support for v1/v2/autoprobe openstack cinder blockstorage
Support for the cinder v1/v2 API with the new gophercloud/gophercloud
library. The API version is configurable and defaults to autodetection.
2017-03-21 20:46:03 +02:00
Kubernetes Submit Queue
a2d74cda38 Merge pull request #42452 from jingxu97/Mar/nodeNamePrefix
Automatic merge from submit-queue (batch tested with PRs 42452, 43399)

Modify getInstanceByName to avoid calling getInstancesByNames

This PR modifies getInstanceByName to loop through all management zones
directly instead of calling getInstancesByNames. Currently
getInstancesByNames uses a node name prefix as a filter to list the
instances. If the prefix does not match, it returns all instances,
which is very wasteful since getInstanceByName only queries one instance
with a specific name.

Partially fix issue #42445
2017-03-20 15:23:33 -07:00
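A simplified sketch of the fixed lookup; the instance type and the per-zone getter are stand-ins for the provider's real helpers.

```go
package gce

import "fmt"

// gceInstance is a simplified stand-in for the provider's instance type.
type gceInstance struct {
	Name, Zone string
}

// getInstanceByName queries each managed zone for exactly the named
// instance, instead of listing every instance behind a name-prefix filter.
func getInstanceByName(zones []string, name string,
	getInZone func(zone, name string) (*gceInstance, error)) (*gceInstance, error) {
	for _, zone := range zones {
		if inst, err := getInZone(zone, name); err == nil && inst != nil {
			return inst, nil
		}
	}
	return nil, fmt.Errorf("instance %q not found in any managed zone", name)
}
```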
Hemant Kumar
1de4c5bbe0 Fix AWS untagged instances
To revert to 1.5 behaviour we need to consider untagged
instances if no clusterID has been specified or found.
2017-03-17 14:05:52 -04:00
Brendan Burns
ea23cabfa0 Add support for bring-your-own ip address. 2017-03-14 20:36:55 -07:00
Kubernetes Submit Queue
e2218290cf Merge pull request #42444 from jingxu97/Mar/deleteVolume
Automatic merge from submit-queue (batch tested with PRs 42608, 42444)

Return nil when deleting a non-existent GCE PD

When the GCE cloud tries to delete a disk and the disk cannot be found
in any of the zones, the function should return a nil error. This modified behavior is also consistent with AWS
2017-03-10 12:50:24 -08:00
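In sketch form, the idempotent-delete check might look like this, assuming the standard googleapi error type:

```go
package gce

import "google.golang.org/api/googleapi"

// ignoreNotFound makes disk deletion idempotent: if the PD is already gone
// from every zone searched, deletion has effectively succeeded, matching the
// AWS provider's behavior.
func ignoreNotFound(err error) error {
	if apiErr, ok := err.(*googleapi.Error); ok && apiErr.Code == 404 {
		return nil
	}
	return err
}
```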
Kubernetes Submit Queue
9498a1270f Merge pull request #42024 from luomiao/fix-vsphere-remove-port
Automatic merge from submit-queue

Remove VCenterPort from vsphere cloud provider.

**What this PR does / why we need it**:
Addresses a bug in the vSphere cloud provider when a port number other than 443 is specified in the config file.
The URL used for communicating with govmomi should not include a port number;
a port number other than 443 results in a 404 error.
VCenterPort stays in the VSphereConfig structure for backward compatibility.

**Which issue this PR fixes** : fixes https://github.com/kubernetes/kubernetes-anywhere/issues/338
2017-03-09 15:59:33 -08:00
timchenxiaoyu
61f2202c6b fix where typo 2017-03-07 09:37:41 +08:00
wlan0
9875620388 add external cloudprovider to clearly denote the offloading of cloudprovider tasks 2017-03-06 10:45:13 -08:00
yupengzte
363f321f32 should replace errors.New(fmt.Sprintf(...)) with fmt.Errorf(...)
Signed-off-by: yupengzte <yu.peng36@zte.com.cn>
2017-03-06 09:14:48 +08:00
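A runnable before/after example of the cleanup:

```go
package main

import (
	"errors"
	"fmt"
)

func main() {
	name := "node-1"

	// Before: allocate a formatted string, then wrap it in an error.
	before := errors.New(fmt.Sprintf("no instance found with name %q", name))

	// After: fmt.Errorf does both in one call, and go vet can check the verbs.
	after := fmt.Errorf("no instance found with name %q", name)

	fmt.Println(before, after)
}
```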
Sebastian Scheele
fd09bb6934 AWS: add KubernetesClusterID as additional option when VPC is set 2017-03-03 16:57:12 -08:00
Jing Xu
880de79376 Return nil when deleting a non-existent GCE PD
When the GCE cloud tries to delete a disk and the disk cannot be found
in any of the zones, the function should return a nil error. This modified behavior is also consistent with AWS
2017-03-03 15:06:39 -08:00
Jing Xu
92f05da1ff Modify getInstanceByName to avoid calling getInstancesByNames
This PR modifies getInstanceByName to loop through all management zones
directly instead of calling getInstancesByNames. Currently
getInstancesByNames uses a node name prefix as a filter to list the
instances. If the prefix does not match, it returns all instances,
which is very wasteful since getInstanceByName only queries one instance
with a specific name.
2017-03-03 11:37:08 -08:00
Kubernetes Submit Queue
e9bbfb81c1 Merge pull request #41306 from gnufied/implement-interface-bulk-volume-poll
Automatic merge from submit-queue (batch tested with PRs 41306, 42187, 41666, 42275, 42266)

Implement bulk polling of volumes

This implements bulk volume polling using ideas presented by
Justin in https://github.com/kubernetes/kubernetes/pull/39564

But it changes the implementation to use an interface
and doesn't affect other implementations.

cc @justinsb
2017-03-03 10:54:38 -08:00
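A sketch of the opt-in interface idea, with deliberately simplified types; providers that cannot batch their cloud API calls simply never implement it and keep their per-volume checks.

```go
package volume

// BulkVolumeVerifier is a sketch of the opt-in interface described above:
// a provider that can answer "which of these volumes are attached where?"
// in one batched cloud API call implements it; all other attachers are
// untouched.
type BulkVolumeVerifier interface {
	// BulkVerifyVolumes takes volume names grouped by node and reports,
	// per node, which of them are actually attached.
	BulkVerifyVolumes(volumesByNode map[string][]string) (map[string]map[string]bool, error)
}
```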
Ritesh H Shukla
383a42a4b4 Support adding disks to more than one scsi adapter. Fixes #42399 2017-03-02 20:19:05 +00:00
Hemant Kumar
786da1de12 Implement bulk polling of volumes
This implements bulk volume polling using ideas presented by
Justin in https://github.com/kubernetes/kubernetes/pull/39564

But it changes the implementation to use an interface
and doesn't affect other implementations.
2017-03-02 14:59:59 -05:00
Sebastian Scheele
0be5e6041b AWS: run k8s master in a different account or on another provider
Currently the master and the nodes must run in the same account. With this change the master can run in a different AWS account or somewhere else.
Set the vpcID when dummy is created (+1 squashed commit)
Squashed commits:
[0b1ac6e83e] Use the VPC flag and KubernetesClusterTag as identifier (+1 squashed commit)
Squashed commits:
[962bc56e38] Remove again availabilityZone and fix naming (+1 squashed commit)
Squashed commits:
[e3d1b41807] Use the VCID flag as identifier (+1 squashed commit)
Squashed commits:
[5b99fe6243] Add flag for external master
2017-03-01 08:46:46 -08:00
Kubernetes Submit Queue
c6d11c778f Merge pull request #41695 from justinsb/shared_tag
Automatic merge from submit-queue (batch tested with PRs 41921, 41695, 42139, 42090, 41949)

AWS: Support shared tag `kubernetes.io/cluster/<clusterid>`

We recognize an additional cluster tag:

kubernetes.io/cluster/<clusterid>

This now allows us to share resources, in particular subnets.

In addition, the value is used to track ownership/lifecycle.  When we
create objects, we record the value as "owned".

We also refactor out tags into its own file & class, as we are touching
most of these functions anyway.

```release-note
AWS: Support shared tag `kubernetes.io/cluster/<clusterid>`
```
2017-03-01 04:10:01 -08:00
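A minimal sketch of recognizing the shared tag; the "owned"/"shared" semantics follow the description above, and the helper is illustrative.

```go
package aws

import "strings"

const clusterTagPrefix = "kubernetes.io/cluster/"

// clusterTag extracts the cluster ID and lifecycle value from a resource's
// tags: "owned" means we created the resource and manage its lifecycle,
// while a value such as "shared" means we merely use it.
func clusterTag(tags map[string]string) (clusterID, value string, ok bool) {
	for k, v := range tags {
		if strings.HasPrefix(k, clusterTagPrefix) {
			return strings.TrimPrefix(k, clusterTagPrefix), v, true
		}
	}
	return "", "", false
}
```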
Kubernetes Submit Queue
7592564505 Merge pull request #41702 from justinsb/fix_34583
Automatic merge from submit-queue (batch tested with PRs 38676, 41765, 42103, 41833, 41702)

AWS: Skip instances that are tagged as a master

We recognize a few AWS tags, and skip over masters when finding zones
for dynamic volumes.  This will fix #34583.

This is not perfect, in that really the scheduler is the only component
that can correctly choose the zone, but should address the common
problem.

```release-note
AWS: Do not consider master instance zones for dynamic volume creation
```
2017-03-01 01:44:12 -08:00
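A sketch of the zone-discovery filter; the exact master-role tag keys recognized are assumptions here.

```go
package aws

// isMasterInstance sketches the filter: instances carrying a recognized
// master-role tag are skipped when collecting candidate zones for dynamic
// volume creation.
func isMasterInstance(tags map[string]string) bool {
	for _, key := range []string{"k8s.io/role/master", "kubernetes.io/role/master"} {
		if _, ok := tags[key]; ok {
			return true
		}
	}
	return false
}
```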
Justin Santa Barbara
0b5ae5391e AWS: Support shared tag
We recognize an additional cluster tag:

kubernetes.io/cluster/<clusterid>

This now allows us to share resources, in particular subnets.

In addition, the value is used to track ownership/lifecycle.  When we
create objects, we record the value as "owned".

We also refactor out tags into its own file & class, as we are touching
most of these functions anyway.
2017-02-27 16:30:12 -05:00
Huamin Chen
6782a48dfa Enable storage class support in Azure File volume
Signed-off-by: Huamin Chen <hchen@redhat.com>
2017-02-27 15:34:37 -05:00
Kubernetes Submit Queue
7224805c55 Merge pull request #41992 from colemickens/cmpr-azure-config-doc
Automatic merge from submit-queue (batch tested with PRs 35408, 41915, 41992, 41964, 41925)

azure: document config file (+ remove unused field)

**What this PR does / why we need it**:
* documents the config file used by the Azure cloudprovider
* removes an unused field that shouldn't have been added

```release-note
NONE
```
2017-02-26 18:07:57 -08:00
Kubernetes Submit Queue
9a218d406b Merge pull request #41309 from kars7e/add-cafile-openstack
Automatic merge from submit-queue (batch tested with PRs 40932, 41896, 41815, 41309, 41628)

Add custom CA file to openstack cloud provider config

**What this PR does / why we need it**: Adds the ability to specify a custom CA bundle file to verify the OpenStack endpoint against. Useful in tests and PoC deployments. Similar to what https://github.com/kubernetes/kubernetes/pull/35488 did for authentication.  


**Which issue this PR fixes**: None

**Special notes for your reviewer**: Based on https://github.com/kubernetes/kubernetes/pull/35488 which added support for custom CA file for authentication.

**Release note**:
2017-02-26 08:10:00 -08:00
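A self-contained sketch of wiring a custom CA bundle into TLS verification using only the standard library; how the provider threads this into its OpenStack client is omitted.

```go
package openstack

import (
	"crypto/tls"
	"crypto/x509"
	"fmt"
	"io/ioutil"
	"net/http"
)

// httpClientWithCA builds an HTTP client whose TLS verification trusts the
// given CA bundle, for reaching a private OpenStack endpoint.
func httpClientWithCA(caFile string) (*http.Client, error) {
	pem, err := ioutil.ReadFile(caFile)
	if err != nil {
		return nil, err
	}
	pool := x509.NewCertPool()
	if !pool.AppendCertsFromPEM(pem) {
		return nil, fmt.Errorf("no certificates parsed from %s", caFile)
	}
	return &http.Client{
		Transport: &http.Transport{
			TLSClientConfig: &tls.Config{RootCAs: pool},
		},
	}, nil
}
```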
Kubernetes Submit Queue
3adc12c5f5 Merge pull request #41113 from vmware/AddDatastoreParamForDynamicProvisioning
Automatic merge from submit-queue

Fix for Support selection of datastore for dynamic provisioning in vS…

Fixes #40558

The current vSphere Cloud Provider doesn't allow a user to select a datastore for dynamic provisioning. All volumes are created in the default datastore provided by the user in the global vSphere configuration file.

With this fix, the user will be able to provide the datastore in the storage class definition. This will allow the volumes to be created in the datastore specified by the user in the storage class definition. This field is optional. If no datastore is specified, the volume will be created in the default datastore specified in the global config file.

For example:

User creates a storage class with the datastore

```
kind: StorageClass
apiVersion: storage.k8s.io/v1beta1
metadata:
  name: slow
provisioner: kubernetes.io/vsphere-volume
parameters:
  diskformat: thin
  datastore: VMFSDatastore
```
Now the volume will be created in the datastore - "VMFSDatastore" specified by the user.

If the user creates a storage class without any datastore

```
kind: StorageClass
apiVersion: storage.k8s.io/v1beta1
metadata:
  name: slow
provisioner: kubernetes.io/vsphere-volume
parameters:
  diskformat: thin
```
Now the volume will be created in the datastore specified in the global configuration file (vsphere.conf).

@pdhamdhere @kerneltime
2017-02-23 22:10:42 -08:00
Miao Luo
6e96a1b8b0 Remove VCenterPort from vsphere cloud provider.
The URL used for communicating with govmomi should not include a port
number; a port number other than 443 results in a 404 error.
VCenterPort stays in the VSphereConfig structure for backward compatibility.
2017-02-23 16:04:22 -08:00
Cole Mickens
af1389e232 fixup: clarify what's optional and why 2017-02-23 11:46:16 -08:00
Cole Mickens
3b7ad5c7f6 azure: document config file 2017-02-23 10:59:04 -08:00
Kubernetes Submit Queue
616d929828 Merge pull request #38702 from jsafrane/gce-provisioning-existing
Automatic merge from submit-queue (batch tested with PRs 38702, 41810, 41778, 41858, 41872)

gce: Reuse unsuccessfully provisioned volumes.

GCE PD names generated by Kubernetes are guaranteed to be unique - they
contain the name of the cluster and the UID of the PVC behind them.
The presence of a GCE PD with the same name we want to provision
indicates that a previous provisioning attempt did not go well, most
probably because the controller manager process was restarted in the meantime.

Kubernetes should reuse this volume and not provision a new one.

Fixes #38681
Fixes #38681
2017-02-23 07:54:33 -08:00
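A simplified sketch of the reuse decision, with hypothetical get/create helpers standing in for the GCE disk API:

```go
package gce

import "fmt"

// disk is a simplified stand-in for the provider's disk type.
type disk struct {
	Name, Zone string
	SizeGB     int64
}

// getOrCreateDisk adopts a pre-existing disk with the requested name:
// because generated PD names embed the cluster name and PVC UID, such a
// disk is a leftover from an interrupted provision and can be reused,
// provided its parameters match the request.
func getOrCreateDisk(name, zone string, sizeGB int64,
	get func(name string) (*disk, error),
	create func(name, zone string, sizeGB int64) (*disk, error)) (*disk, error) {
	if existing, err := get(name); err == nil && existing != nil {
		if existing.Zone == zone && existing.SizeGB == sizeGB {
			return existing, nil // reuse instead of failing with "already exists"
		}
		return nil, fmt.Errorf("disk %q exists but does not match request", name)
	}
	return create(name, zone, sizeGB)
}
```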
Balu Dontu
12f75f0b86 Fix for Support selection of datastore for dynamic provisioning in vSphere 2017-02-21 19:04:45 +00:00
Kubernetes Submit Queue
9c0e46bdff Merge pull request #40843 from luomiao/photon-cloud-provider-authentication-update
Automatic merge from submit-queue (batch tested with PRs 41756, 36344, 34259, 40843, 41526)

Update Photon Controller cloud provider for authentication support

Resolve Issue: [#40755](https://github.com/kubernetes/kubernetes/issues/40755)
1. Update the configuration file for Photon Controller cloud provider
2. Only master nodes can communicate with Photon Controller endpoint
3. Enable support for authentication-enabled Photon Controller endpoint
4. Update the NodeAddresses function to query from the local node

New format of photon controller config file:
```
[Global]
target = https://[LOAD_BALANCER_IP]:443
project = [PROJECT ID]
overrideIP = true
vmID = [LOCAL VM ID]
authentication = true
```
This config file will be automatically created by Photon Controller cluster management.

If `authentication` is set to true, then a pc_login_info file with the username and password should be placed under /etc/kubernetes.
This file can be created by the user directly,
or the user can choose to use a kubernetes secret and a handling pod to avoid logging in directly to master nodes. That usage will be available with Photon Controller 1.2.
This is a temporary solution before the metadata service becomes available in Photon Controller.
2017-02-20 13:39:39 -08:00
Kubernetes Submit Queue
8738e36c70 Merge pull request #34259 from liggitt/node-dns
Automatic merge from submit-queue (batch tested with PRs 41756, 36344, 34259, 40843, 41526)

add InternalDNS/ExternalDNS node address types

This PR adds internal/external DNS names to the types of NodeAddresses that can be reported by the kubelet.

will spawn follow up issues for cloud provider owners to include these when possible

```release-note
Nodes can now report two additional address types in their status: InternalDNS and ExternalDNS. The apiserver can use `--kubelet-preferred-address-types` to give priority to the type of address it uses to reach nodes.
```
2017-02-20 13:39:37 -08:00
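For illustration, a node status a cloud provider could now report (modern import path shown; at the time these constants lived under pkg/api/v1, and the example addresses are hypothetical):

```go
package kubelet

import v1 "k8s.io/api/core/v1"

// nodeAddresses shows the two new DNS address types joining the existing
// IP and hostname types in a node's reported addresses.
func nodeAddresses() []v1.NodeAddress {
	return []v1.NodeAddress{
		{Type: v1.NodeInternalIP, Address: "10.0.0.12"},
		{Type: v1.NodeInternalDNS, Address: "ip-10-0-0-12.ec2.internal"},
		{Type: v1.NodeExternalDNS, Address: "ec2-54-12-34-56.compute-1.amazonaws.com"},
	}
}
```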
Justin Santa Barbara
20cb4f16b3 Add approvers to the aws OWNERS file
Without this it was picking up reviewers from a much higher directory.
2017-02-20 11:44:05 -05:00
Angus Lees
c077c30004 Migrate rackspace/gophercloud -> gophercloud/gophercloud
This change migrates the 'openstack' provider and 'keystone'
authenticator plugin to the newer gophercloud/gophercloud library.

Note the 'rackspace' provider still uses rackspace/gophercloud.

Fixes #30404
2017-02-20 11:03:05 +11:00
Justin Santa Barbara
b1079f8813 AWS: Skip instances that are tagged as a master
We recognize a few AWS tags, and skip over masters when finding zones
for dynamic volumes.  This will fix #34583.

This is not perfect, in that really the scheduler is the only component
that can correctly choose the zone, but should address the common
problem.
2017-02-19 01:45:20 -05:00
Kubernetes Submit Queue
6823803772 Merge pull request #41239 from vmware/e2eTestsUpdate-v2
Automatic merge from submit-queue (batch tested with PRs 37137, 41506, 41239, 41511, 37953)

e2e test for storage class diskformat verification for vsphere cloud provider

**What this PR does / why we need it**:
This PR adds a new e2e test for the vSphere cloud provider.
The test verifies that the diskformat specified in the storage class is honored during volume creation.

Steps:

1. Create StorageClass with diskformat set to valid type (supported options are `eagerzeroedthick`, `zeroedthick` and `thin`)
2. Create PVC which uses the StorageClass created in step 1.
3. Wait for PV to be provisioned.
4. Wait for PVC's status to become Bound
5. Create POD using PVC on specific node.
6. Wait for Disk to be attached to the node.
7. Get node VM's devices and find PV's Volume Disk.
8. Get Backing Info of the Volume Disk and obtain Property of `VirtualDiskFlatVer2BackingInfo` - `EagerlyScrub` and `ThinProvisioned`
9. Based on the value of `EagerlyScrub` and `ThinProvisioned`, verify if diskformat is correct.
10. Delete POD and Wait for Volume Disk to be detached from the Node.
11. Delete PVC, PV and Storage Class



**Which issue this PR fixes** *
fixes #

**Special notes for your reviewer**:
Test is executed against v1.6.0-alpha.1
Test is failing on v1.4.8

**Release Note**
```release-note
NONE
```
@kerneltime @BaluDontu @abrarshivani please review this PR
2017-02-15 20:05:09 -08:00
Mike Bryant
e2e924e023 cinder: Add support for virtio-scsi
The VirtIO SCSI driver for KVM changes the way disks appear in /dev/disk/by-id.
This adds support for the new format.
2017-02-15 17:27:31 +00:00
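A sketch of probing both naming schemes; the exact by-id prefixes are the common KVM forms and are assumptions here.

```go
package cinder

import (
	"fmt"
	"os"
)

// findDevice probes /dev/disk/by-id for a cinder volume serial under both
// naming schemes: plain virtio-blk ("virtio-<serial>") and virtio-scsi
// ("scsi-0QEMU_QEMU_HARDDISK_<serial>").
func findDevice(serial string) (string, error) {
	candidates := []string{
		"/dev/disk/by-id/virtio-" + serial,
		"/dev/disk/by-id/scsi-0QEMU_QEMU_HARDDISK_" + serial,
	}
	for _, path := range candidates {
		if _, err := os.Stat(path); err == nil {
			return path, nil
		}
	}
	return "", fmt.Errorf("volume with serial %q not found", serial)
}
```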