Commit Graph

1157 Commits

Author SHA1 Message Date
Kubernetes Submit Queue
fd19b6ce3f Merge pull request #44868 from vmware/dsclustersupport
Automatic merge from submit-queue

Adding datastore cluster support for dynamic and static pv

**What this PR does / why we need it**:

A customer reported that with version 1.4.7 he could use a datastore that is part of a datastore cluster as a vSphere volume. After upgrading to 1.6.0, the same exact path no longer works and throws a "datastore not found" error.

This PR adds support for using a datastore within a datastore cluster for volume provisioning.

**Which issue this PR fixes**: fixes https://github.com/kubernetes/kubernetes/issues/44007

**Special notes for your reviewer**:

**Created datastore cluster as below.**

![ds-cluster](https://cloud.githubusercontent.com/assets/22985595/25350381/d2652c24-28d9-11e7-8659-097bd9b844bb.jpg)


**Verified dynamic PV provisioning and pod creation using datastore (sharedVmfs-0) in a cluster (DatastoreCluster).**
```
$ cat thin_sc.yaml 
kind: StorageClass
apiVersion: storage.k8s.io/v1
metadata:
  name: thin
provisioner: kubernetes.io/vsphere-volume
parameters:
    diskformat: thin
    datastore: DatastoreCluster/sharedVmfs-0
```


```
$ kubectl create -f thin_sc.yaml 
storageclass "thin" created
$ kubectl describe storageclass thin
Name:		thin
IsDefaultClass:	No
Annotations:	<none>
Provisioner:	kubernetes.io/vsphere-volume
Parameters:	datastore=DatastoreCluster/sharedVmfs-0,diskformat=thin
No events.
```


```
$ kubectl create -f thin_pvc.yaml 
persistentvolumeclaim "thinclaim" created
```
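
The referenced thin_pvc.yaml is not shown in the PR description. A minimal claim consistent with the kubectl output below (a hypothetical reconstruction using the 2Gi capacity, RWO access mode, and "thin" storage class seen there) would look something like:

```
# Hypothetical reconstruction of thin_pvc.yaml (not included in the PR text);
# values taken from the kubectl output below: 2Gi, RWO, storage class "thin".
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: thinclaim
spec:
  storageClassName: thin   # assumes the storageClassName field (1.6+); the beta annotation also works
  accessModes:
    - ReadWriteOnce
  resources:
    requests:
      storage: 2Gi
```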

```
$ kubectl get pvc
NAME        STATUS    VOLUME                                     CAPACITY   ACCESSMODES   AGE
thinclaim   Bound     pvc-581805e3-290d-11e7-9ad8-005056bd81ef   2Gi        RWO           1m
```

```
$ kubectl get pv
NAME                                       CAPACITY   ACCESSMODES   RECLAIMPOLICY   STATUS    CLAIM               REASON    AGE
pvc-581805e3-290d-11e7-9ad8-005056bd81ef   2Gi        RWO           Delete          Bound     default/thinclaim             1m
```


```
$ kubectl describe pvc thinclaim
Name:		thinclaim
Namespace:	default
StorageClass:	thin
Status:		Bound
Volume:		pvc-581805e3-290d-11e7-9ad8-005056bd81ef
Labels:		<none>
Capacity:	2Gi
Access Modes:	RWO
Events:
  FirstSeen	LastSeen	Count	From				SubObjectPath	Type		Reason			Message
  ---------	--------	-----	----				-------------	--------	------			-------
  39s		39s		1	{persistentvolume-controller }			Normal		ProvisioningSucceeded	Successfully provisioned volume pvc-581805e3-290d-11e7-9ad8-005056bd81ef using kubernetes.io/vsphere-volume
```


```
$ kubectl describe pv pvc-581805e3-290d-11e7-9ad8-005056bd81ef
Name:		pvc-581805e3-290d-11e7-9ad8-005056bd81ef
Labels:		<none>
StorageClass:	
Status:		Bound
Claim:		default/thinclaim
Reclaim Policy:	Delete
Access Modes:	RWO
Capacity:	2Gi
Message:	
Source:
    Type:	vSphereVolume (a Persistent Disk resource in vSphere)
    VolumePath:	[DatastoreCluster/sharedVmfs-0] kubevols/kubernetes-dynamic-pvc-581805e3-290d-11e7-9ad8-005056bd81ef.vmdk
    FSType:	ext4
No events.

```
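
Similarly, thin_pod.yaml is not shown. A hypothetical equivalent, reconstructed from the `kubectl describe pod thinclaimpod` output below (busybox 1.24, claim thinclaim mounted at /mnt/volume1), would be:

```
# Hypothetical reconstruction of thin_pod.yaml, based on the
# "kubectl describe pod thinclaimpod" output below.
apiVersion: v1
kind: Pod
metadata:
  name: thinclaimpod
spec:
  containers:
  - name: test-container
    image: gcr.io/google_containers/busybox:1.24
    command: ["/bin/sh", "-c", "echo 'hello' > /mnt/volume1/index.html && chmod o+rX /mnt /mnt/volume1/index.html && while true ; do sleep 2 ; done"]
    volumeMounts:
    - name: test-volume
      mountPath: /mnt/volume1
  volumes:
  - name: test-volume
    persistentVolumeClaim:
      claimName: thinclaim
```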

```
$ kubectl create -f thin_pod.yaml
pod "thinclaimpod" created
```

```
$ kubectl get pod
NAME           READY     STATUS    RESTARTS   AGE
thinclaimpod   1/1       Running   0          1m
```


```
$ kubectl describe pod thinclaimpod
Name:		thinclaimpod
Namespace:	default
Node:		node3/172.1.56.0
Start Time:	Mon, 24 Apr 2017 09:46:56 -0700
Labels:		<none>
Status:		Running
IP:		172.1.56.3
Controllers:	<none>
Containers:
  test-container:
    Container ID:	docker://487f77d92b92ee3d833b43967c8d42433e61cd45a58d8d6f462717301597c84f
    Image:		gcr.io/google_containers/busybox:1.24
    Image ID:		docker://sha256:0cb40641836c461bc97c793971d84d758371ed682042457523e4ae701efe7ec9
    Port:		
    Command:
      /bin/sh
      -c
      echo 'hello' > /mnt/volume1/index.html  && chmod o+rX /mnt /mnt/volume1/index.html && while true ; do sleep 2 ; done
    State:		Running
      Started:		Mon, 24 Apr 2017 09:47:16 -0700
    Ready:		True
    Restart Count:	0
    Volume Mounts:
      /mnt/volume1 from test-volume (rw)
      /var/run/secrets/kubernetes.io/serviceaccount from default-token-cqcq1 (ro)
    Environment Variables:	<none>
Conditions:
  Type		Status
  Initialized 	True 
  Ready 	True 
  PodScheduled 	True 
Volumes:
  test-volume:
    Type:	PersistentVolumeClaim (a reference to a PersistentVolumeClaim in the same namespace)
    ClaimName:	thinclaim
    ReadOnly:	false
  default-token-cqcq1:
    Type:	Secret (a volume populated by a Secret)
    SecretName:	default-token-cqcq1
QoS Class:	BestEffort
Tolerations:	<none>
Events:
  FirstSeen	LastSeen	Count	From			SubObjectPath			Type		Reason		Message
  ---------	--------	-----	----			-------------			--------	------		-------
  40s		40s		1	{default-scheduler }					Normal		Scheduled	Successfully assigned thinclaimpod to node3
  22s		22s		1	{kubelet node3}		spec.containers{test-container}	Normal		Pulling		pulling image "gcr.io/google_containers/busybox:1.24"
  21s		21s		1	{kubelet node3}		spec.containers{test-container}	Normal		Pulled		Successfully pulled image "gcr.io/google_containers/busybox:1.24"
  21s		21s		1	{kubelet node3}		spec.containers{test-container}	Normal		Created		Created container with id 487f77d92b92ee3d833b43967c8d42433e61cd45a58d8d6f462717301597c84f
  21s		21s		1	{kubelet node3}		spec.containers{test-container}	Normal		Started		Started container with id 487f77d92b92ee3d833b43967c8d42433e61cd45a58d8d6f462717301597c84f
```


```
$ kubectl delete pod thinclaimpod
pod "thinclaimpod" deleted
```

Verified the disk is detached from the node.

```
$ kubectl delete pvc thinclaim
persistentvolumeclaim "thinclaim" deleted
$ kubectl get pv
No resources found.
```
Verified the disk is deleted from the datastore.
Also verified the above lifecycle using a non-clustered datastore.

**Verified Using static PV in the datastore cluster for pod provisioning.**
```
# pwd
/vmfs/volumes/sharedVmfs-0/kubevols
# vmkfstools -c 2g test.vmdk
Create: 100% done
# ls
test-flat.vmdk  test.vmdk
```
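
For completeness, the same pre-created VMDK could also be wrapped in a static PersistentVolume object for claim-based use. This is only a sketch using the paths above; such a PV object is not part of the verification in this PR:

```
# Sketch only: a static PersistentVolume wrapping the pre-created 2g VMDK.
apiVersion: v1
kind: PersistentVolume
metadata:
  name: static-test-pv
spec:
  capacity:
    storage: 2Gi
  accessModes:
    - ReadWriteOnce
  persistentVolumeReclaimPolicy: Retain
  vsphereVolume:
    volumePath: "[DatastoreCluster/sharedVmfs-0] kubevols/test.vmdk"
    fsType: ext4
```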



```
$ cat pod.yaml 
apiVersion: v1
kind: Pod
metadata:
    name: inject-pod
spec:
    containers:
    - name: test-container
      image: gcr.io/google_containers/busybox:1.24
      command: ["/bin/sh", "-c", "echo 'hello' > /mnt/volume1/index.html  && chmod o+rX /mnt /mnt/volume1/index.html && while true ; do sleep 2 ; done"]
      volumeMounts:
      - name: test-volume
        mountPath: /mnt/volume1
    securityContext:
      seLinuxOptions:
        level: "s0:c0,c1"
    restartPolicy: Never
    volumes:
    - name: test-volume
      vsphereVolume:
          volumePath: "[DatastoreCluster/sharedVmfs-0] kubevols/test.vmdk"
          fsType: ext4
```

```
$ kubectl create -f pod.yaml 
pod "inject-pod" created

$ kubectl get pod
NAME         READY     STATUS    RESTARTS   AGE
inject-pod   1/1       Running   0          19s

$ kubectl describe pod inject-pod
Name:		inject-pod
Namespace:	default
Node:		node3/172.1.56.0
Start Time:	Mon, 24 Apr 2017 10:27:22 -0700
Labels:		<none>
Status:		Running
IP:		172.1.56.3
Controllers:	<none>
Containers:
  test-container:
    Container ID:	docker://ed14e058fbcc9c2d8d30ff67bd614e45cf086afbbff070744c5a461e87c45103
    Image:		gcr.io/google_containers/busybox:1.24
    Image ID:		docker://sha256:0cb40641836c461bc97c793971d84d758371ed682042457523e4ae701efe7ec9
    Port:		
    Command:
      /bin/sh
      -c
      echo 'hello' > /mnt/volume1/index.html  && chmod o+rX /mnt /mnt/volume1/index.html && while true ; do sleep 2 ; done
    State:		Running
      Started:		Mon, 24 Apr 2017 10:27:40 -0700
    Ready:		True
    Restart Count:	0
    Volume Mounts:
      /mnt/volume1 from test-volume (rw)
      /var/run/secrets/kubernetes.io/serviceaccount from default-token-cqcq1 (ro)
    Environment Variables:	<none>
Conditions:
  Type		Status
  Initialized 	True 
  Ready 	True 
  PodScheduled 	True 
Volumes:
  test-volume:
    Type:	vSphereVolume (a Persistent Disk resource in vSphere)
    VolumePath:	[DatastoreCluster/sharedVmfs-0] kubevols/test.vmdk
    FSType:	ext4
  default-token-cqcq1:
    Type:	Secret (a volume populated by a Secret)
    SecretName:	default-token-cqcq1
QoS Class:	BestEffort
Tolerations:	<none>
Events:
  FirstSeen	LastSeen	Count	From			SubObjectPath			Type		Reason		Message
  ---------	--------	-----	----			-------------			--------	------		-------
  44s		44s		1	{default-scheduler }					Normal		Scheduled	Successfully assigned inject-pod to node3
  26s		26s		1	{kubelet node3}		spec.containers{test-container}	Normal		Pulled		Container image "gcr.io/google_containers/busybox:1.24" already present on machine
  26s		26s		1	{kubelet node3}		spec.containers{test-container}	Normal		Created		Created container with id ed14e058fbcc9c2d8d30ff67bd614e45cf086afbbff070744c5a461e87c45103
  26s		26s		1	{kubelet node3}		spec.containers{test-container}	Normal		Started		Started container with id ed14e058fbcc9c2d8d30ff67bd614e45cf086afbbff070744c5a461e87c45103
```


**Release note**:

```release-note
none
```

cc: @BaluDontu @moserke @tusharnt @pdhamdhere
2017-04-28 11:38:59 -07:00
Kubernetes Submit Queue
9afeabb642 Merge pull request #43477 from gnufied/cloudprovider-aws-metrics
Automatic merge from submit-queue

Start recording cloud provider metrics for AWS

**What this PR does / why we need it**:

This PR implements support for emitting metrics from AWS about storage operations.

**Which issue this PR fixes** 

Fixes https://github.com/kubernetes/features/issues/182

**Release note**:
```
Add support for emitting metrics from AWS cloudprovider about storage operations.
```
2017-04-28 01:35:17 -07:00
divyenpatel
821f8cd9b9 datastore cluster support
fix verify-gofmt failure
2017-04-27 17:12:45 -07:00
Kubernetes Submit Queue
09747e6bee Merge pull request #44510 from bowei/gce-metrics
Automatic merge from submit-queue (batch tested with PRs 44124, 44510)

Add metrics to all major gce operations (latency, errors)

```release-note
Add metrics to all major gce operations {latency, errors}

The new metrics are:

  cloudprovider_gce_api_request_duration_seconds{request, region, zone}
  cloudprovider_gce_api_request_errors{request, region, zone}
 
`request` is the specific function that is used.
`region` is the target region (Will be "<n/a>" if not applicable)
`zone` is the target zone (Will be "<n/a>" if not applicable)

Note: this fixes some issues with the previous implementation of
metrics for disks:
- Time duration tracked was of the initial API call, not the entire
  operation.
- Metrics label tuple would have resulted in many independent
  histograms stored, one for each disk. (Did not aggregate well).
```
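
Since the duration metric is a histogram (per the note above), consumers can aggregate away the per-resource labels. As a sketch, a Prometheus recording rule for p99 latency per request type (the rule file layout and record name here are illustrative, not part of this PR) could be:

```
# Illustrative Prometheus recording rule: p99 GCE API latency per request
# type, aggregated over region and zone. Assumes the standard _bucket
# series emitted by a Prometheus histogram.
groups:
- name: cloudprovider-gce
  rules:
  - record: cloudprovider_gce_api_request_duration_seconds:p99
    expr: |
      histogram_quantile(0.99,
        sum(rate(cloudprovider_gce_api_request_duration_seconds_bucket[5m])) by (request, le))
```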
2017-04-27 16:14:58 -07:00
Bowei Du
ee847ebf8a Add metrics to all major gce operations {latency, errors}
The new metrics are:

  cloudprovider_gce_api_request_duration_seconds{request, region, zone}
  cloudprovider_gce_api_request_errors{request, region, zone}

`request` is the specific function that is used.
`region` is the target region (Will be "<n/a>" if not applicable)
`zone` is the target zone (Will be "<n/a>" if not applicable)

Note: this fixes some issues with the previous implementation of
metrics for disks:
- Time duration tracked was of the initial API call, not the entire
  operation.
- Metrics label tuple would have resulted in many independent
  histograms stored, one for each disk. (Did not aggregate well).
2017-04-27 12:49:30 -07:00
Hemant Kumar
f2aa330a38 Start recording cloud provider metrics for AWS
Let's start recording storage metrics for AWS.
2017-04-27 15:26:32 -04:00
Balu Dontu
6228765b43 Optimize the time taken to create Persistent volumes with VSAN storage capabilities at scale and handle VPXD crashes 2017-04-26 13:33:21 -07:00
Kubernetes Submit Queue
ce2f0b1937 Merge pull request #44387 from jamiehannaford/fix-port-allocation
Automatic merge from submit-queue

Use provided VipPortID for OpenStack LB

**What this PR does / why we need it**:

When creating an OpenStack LoadBalancer, Kubernetes will search through the tenant trying to match the LB's VIP with a port. This is problematic because multiple ports may have the same fixed IP, therefore leading to routing inconsistencies. We should use the port ID provided by the LB's response body instead.

**Which issue this PR fixes**:

https://github.com/kubernetes/kubernetes/issues/43909

**Special notes for your reviewer**:

Since this involves non-deterministic testing, it'd be best if we can run this in a staging environment for a few days before merging (say until early next week).

**Release note**:
```release-note
Fixes an issue during LB creation where ports were incorrectly assigned to a floating IP
```
2017-04-23 20:50:49 -07:00
Kubernetes Submit Queue
cdc0cbdac4 Merge pull request #41498 from mikebryant/cinder-virtio-scsi
Automatic merge from submit-queue

cinder: Add support for the KVM virtio-scsi driver

**What this PR does / why we need it**:

The VirtIO SCSI driver for KVM changes the way disks appear in /dev/disk/by-id.
This adds support for the new format.
Without this, volume attaching on an OpenStack cluster using this KVM driver doesn't work.

**Special notes for your reviewer**:
Does this need e2e tests? I couldn't find anywhere to add another openstack configuration used in the e2e tests.

Wiki page about this: https://wiki.openstack.org/wiki/Virtio-scsi-for-bdm

**Release note**:

```release-note
cinder: Add support for the KVM virtio-scsi driver
```
2017-04-21 01:55:23 -07:00
Kubernetes Submit Queue
870585e8e1 Merge pull request #44651 from knightXun/string
Automatic merge from submit-queue (batch tested with PRs 44594, 44651)

remove strings.Compare(), use native string operations

I noticed we use strings.Compare() in some code; we can remove it and use native string comparison operators instead.
2017-04-20 14:08:59 -07:00
Kubernetes Submit Queue
223a8e598d Merge pull request #44238 from zhouhaibing089/no-flavor-usage
Automatic merge from submit-queue (batch tested with PRs 44555, 44238)

openstack: remove field flavor_to_resource

I believe there is no usage of `flavor_to_resource`, and I think there is no need to build that information either.

cc @anguslees 

**Release note:**

```
NONE
```
2017-04-20 11:02:58 -07:00
Kubernetes Submit Queue
fba605ce05 Merge pull request #44661 from xiangpengzhao/fix-vsphere-panic
Automatic merge from submit-queue (batch tested with PRs 44687, 44689, 44661)

Fix panic when using `kubeadm init` with vsphere cloud-provider

**What this PR does / why we need it**:
Check if the reference is nil when finding machine reference by UUID.

**Which issue this PR fixes** *(optional, in `fixes #<issue number>(, fixes #<issue_number>, ...)` format, will close that issue when PR gets merged)*: fixes #44603

**Special notes for your reviewer**:
This is just a quick fix for the panic.

**Release note**:

```release-note
NONE
```
2017-04-19 18:52:59 -07:00
Kubernetes Submit Queue
36c5d12cf4 Merge pull request #44452 from gnufied/fix-aws-device-failure-reuse
Automatic merge from submit-queue

Implement LRU for AWS device allocator

On failure to attach, do not reuse the device from the pool.

In an AWS environment, when attach fails on the node, let's not use that device from the pool. This makes sure that a bigger pool of devices is available.
2017-04-19 16:38:13 -07:00
Hemant Kumar
a16ee2f514 Implement LRU for AWS device allocator
In an AWS environment, when attach fails on the node, let's not use that device from the pool. This makes sure we don't reuse recently freed devices.
2017-04-19 16:52:57 -04:00
Kubernetes Submit Queue
712ccf3fa4 Merge pull request #44082 from zetaab/fixzone2
Automatic merge from submit-queue

use availability_zone instead of availability (update godep for gophercloud)

**What this PR does / why we need it**: there is a typo in the JSON variable name.

**Which issue this PR fixes**: fixes #44032

**Special notes for your reviewer**: our OpenStack environment's region name is not nova, so I tested this and it works now.

All Cinder block storage APIs use the variable name availability_zone instead of availability. Docs:

v3:
https://developer.openstack.org/api-ref/block-storage/v3/index.html?expanded=create-a-volume-detail#create-a-volume

v2:
https://developer.openstack.org/api-ref/block-storage/v2/index.html?expanded=create-volume-detail#create-volume

I could no longer find the v1 documentation on the OpenStack pages. However, the https://developer.rackspace.com/docs/cloud-block-storage/v1/api-reference/cbs-volumes-operations/#create-a-volume documentation also says availability_zone is the correct one.

As mentioned in https://github.com/kubernetes/kubernetes/issues/44032#issuecomment-291488494, the OpenStack CLI uses availability_zone.
2017-04-19 03:26:25 -07:00
xiangpengzhao
be3fd5bb90
Add test case for getVMName 2017-04-19 17:16:39 +08:00
xiangpengzhao
d4cbea5902
Fix panic when using kubeadm init with vsphere cloud-provider 2017-04-19 16:03:08 +08:00
Kubernetes Submit Queue
d2060ade08 Merge pull request #43510 from karataliu/azurelb
Automatic merge from submit-queue (batch tested with PRs 44645, 44639, 43510)

Add support for Azure internal load balancer

**Which issue this PR fixes**
Fixes https://github.com/kubernetes/kubernetes/issues/38901

**What this PR does / why we need it**:
This PR is to add support for Azure internal load balancer

Currently, when exposing a service with the LoadBalancer type, the Azure provider assumes that it requires a public load balancer.
Thus it requests a public IP address resource and exposes the service via that public IP.
In this case we're not able to apply private IP addresses (within the cluster virtual network) to the service.

**Special notes for your reviewer**:
1. Clarification:
a. 'LoadBalancer' refers to an option for 'type' field under ServiceSpec. See https://kubernetes.io/docs/resources-reference/v1.5/#servicespec-v1
b. 'Azure LoadBalancer' refers a type of Azure resource. See https://docs.microsoft.com/en-us/azure/load-balancer/load-balancer-overview

2. For a single Azure LoadBalancer, all frontend IPs should reference either a subnet or a publicIpAddress, which means it can be either an Internet-facing load balancer or an internal one.
The current provider creates an Azure LoadBalancer named with the generated '${loadBalancerName}' for all services with the 'LoadBalancer' type.
This PR introduces the name '${loadBalancerName}-internal' for a separate Azure LoadBalancer resource, used by all services that require an internal load balancer.

3. This PR introduces a new annotation for the internal load balancer type behaviour (see the sketch after this list):
a. When the annotation value is set to 'false' or not set, it falls back to the original behaviour, assuming that the user is requesting a public load balancer;
b. When the annotation value is set to 'true', the following rules apply depending on the 'loadBalancerIP' field of the ServiceSpec:
   - If 'loadBalancerIP' is not set, it creates a load balancer rule with a dynamically assigned frontend IP within the cluster subnet;
   - If 'loadBalancerIP' is set, it creates a load balancer rule with the frontend IP set to the given value. If the given value is not valid, that is, it does not fall within the cluster subnet range, the creation will fail.

4. Users may change the load balancer type by applying the annotation to the service at runtime.
In this case, the load balancer rule needs to be 'switched' between the internal one and the external one.
For example, if we have a service with an internal load balancer and the user removes the annotation, making it a public one, then before creating rules in the public Azure LoadBalancer we'll need to clean up the rules in the internal Azure LoadBalancer.
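
As a sketch, requesting an internal load balancer then comes down to annotating the Service. Assuming the annotation key introduced by this PR (service.beta.kubernetes.io/azure-load-balancer-internal), a Service manifest could look like:

```
# Sketch of a Service requesting an Azure internal load balancer.
# The loadBalancerIP value is illustrative; if set, it must fall
# within the cluster subnet range (see note 3b above).
apiVersion: v1
kind: Service
metadata:
  name: internal-app
  annotations:
    service.beta.kubernetes.io/azure-load-balancer-internal: "true"
spec:
  type: LoadBalancer
  loadBalancerIP: 10.240.0.25
  ports:
  - port: 80
  selector:
    app: internal-app
```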

**Release note**:
2017-04-18 23:22:04 -07:00
xu fei
b0a3f492af remove strings.Compare(), use native string operations 2017-04-19 09:32:29 +08:00
zhouhaibing089
8c021ea884 openstack: remove field flavor_to_resource 2017-04-17 14:01:04 +08:00
Chao Xu
d4850b6c2b move pkg/api/v1/helpers.go to subpackage 2017-04-14 14:25:11 -07:00
Mike Danese
a05c3c0efd autogenerated 2017-04-14 10:40:57 -07:00
Kubernetes Submit Queue
f1c0c0a73c Merge pull request #42395 from nicksardo/gce-src-ranges
Automatic merge from submit-queue

Adding load balancer src cidrs to GCE cloudprovider

**What this PR does / why we need it**:
As of January 31st, 2018, GCP will be sending health checks and L7 traffic from two CIDRs and legacy health checks from three CIDRs. This PR moves them into the cloudprovider package and provides a flag for overriding them.

Another PR will need to address firewall rule creation for external L4 network load balancing (#40778).

**Which issue this PR fixes**
Step one of #40778
Step one of https://github.com/kubernetes/ingress/issues/197

**Release note**:
```release-note
Add flags to GCE cloud provider to override known L4/L7 proxy & health check source cidrs
```
2017-04-12 19:57:43 -07:00
Jamie Hannaford
622c69c1e5 Use provided VipPortID for LB 2017-04-12 14:13:12 +02:00
Kubernetes Submit Queue
ceccd305ce Merge pull request #42147 from bowei/ip-alias-2
Automatic merge from submit-queue

Add support for IP aliases for pod IPs (GCP alpha feature)

```release-note
Adds support for allocation of pod IPs via IP aliases.

# Adds KUBE_GCE_ENABLE_IP_ALIASES flag to the cluster up scripts (`kube-{up,down}.sh`).

KUBE_GCE_ENABLE_IP_ALIASES=true will enable allocation of PodCIDR ips
using the ip alias mechanism rather than using routes. This feature is currently
only available on GCE.

## Usage
$ CLUSTER_IP_RANGE=10.100.0.0/16 KUBE_GCE_ENABLE_IP_ALIASES=true bash -x cluster/kube-up.sh

# Adds CloudAllocator to the node CIDR allocator (kubernetes-controller manager).

If CIDRAllocatorType is set to `CloudCIDRAllocator`, then allocation
of CIDRs is instead done by the external cloud provider, and
the node controller is only responsible for reflecting the allocation
into the node spec.

- Splits off the rangeAllocator from the cidr_allocator.go file.
- Adds cloudCIDRAllocator, which is used when the cloud provider allocates
  the CIDR ranges externally. (GCE support only)
- Updates RBAC permission for node controller to include PATCH
```
2017-04-11 22:09:24 -07:00
Bowei Du
f61590c221 Adds support for PodCIDR allocation from the GCE cloud provider
If CIDRAllocatorType is set to `CloudCIDRAllocator`, then allocation
of CIDRs is instead done by the external cloud provider, and
the node controller is only responsible for reflecting the allocation
into the node spec.

- Splits off the rangeAllocator from the cidr_allocator.go file.
- Adds cloudCIDRAllocator, which is used when the cloud provider allocates
  the CIDR ranges externally. (GCE support only)
- Updates RBAC permission for node controller to include PATCH
2017-04-11 14:07:54 -07:00
Kubernetes Submit Queue
6283077fb5 Merge pull request #43545 from luomiao/vsphere-remove-loginInfo-on-workers-update
Automatic merge from submit-queue (batch tested with PRs 43545, 44293, 44221, 43888)

Remove credentials on worker nodes for vSphere cloud provider.

**What this PR does / why we need it**:
Remove the dependency on login information on worker nodes for the vSphere cloud provider:
1. VM Name is required to be set in the cloud provider configuration file.
2. Remove the requirement of login for Instance functions when querying local node information.

**Which issue this PR fixes**: fixes https://github.com/kubernetes/kubernetes/issues/35339

**Release note**:
2017-04-11 12:18:17 -07:00
Bowei Du
f5be63e0f7 Add PodCIDRs API for GCE (Google cloud alpha feature) 2017-04-10 12:05:02 -07:00
Kubernetes Submit Queue
41e9b80e5f Merge pull request #44235 from kubermatic/feature/configurable-aws-subnetid-routetableid
Automatic merge from submit-queue

Specify subnetid and routetableid via cloud provider config

**What this PR does / why we need it**:
This is a fix for https://github.com/kubernetes/kubernetes/pull/39996 which is needed since 1.6

Changes introduced in 1.6 partially broke (LoadBalancer) support for running the master components in a different environment (different AWS account / on premises). This PR adds support for specifying the Subnet & RouteTable to use via the cloud provider config.

**Release note**:

```release-note
AWS cloud provider: fix support running the master with a different AWS account or even on a different cloud provider than the nodes.
```
2017-04-08 11:19:21 -07:00
Henrik Schmidt
1c1f02fde3 Specify subnetid and routetableid via cloud provider config 2017-04-08 11:44:45 +02:00
Jesse Haka
5aad93abf5 fix format 2017-04-08 11:08:08 +03:00
Jesse Haka
2fb9fc4647 use AvailabilityZone instead of Availability 2017-04-08 10:51:49 +03:00
Kubernetes Submit Queue
9c9326114c Merge pull request #43777 from wlan0/provider-id
Automatic merge from submit-queue

move ProviderID indexed methods to the right location

@bowei
2017-04-07 19:57:48 -07:00
Dong Liu
f20e9bf66d Update message log level for azure_loadbalancer. 2017-04-07 14:32:29 +08:00
Jan Safranek
67e1f2c08e Add e2e tests for storageclass
This reverts commit 22352d2844 and makes
gce.GetDiskByNameUnknownZone a public GCE cloud provider method.
2017-04-05 11:49:49 +02:00
Kubernetes Submit Queue
4ee6782db5 Merge pull request #42512 from kubermatic/scheeles-aws
Automatic merge from submit-queue (batch tested with PRs 43925, 42512)

AWS: add KubernetesClusterID as additional option when VPC is set

This is a small enhancement after the PRs https://github.com/kubernetes/kubernetes/pull/41695 and  https://github.com/kubernetes/kubernetes/pull/39996
## Release Notes
```release-note
AWS cloud provider: allow to set KubernetesClusterID or KubernetesClusterTag in combination with VPC.
```
2017-04-03 12:46:17 -07:00
Kubernetes Submit Queue
449a13c44c Merge pull request #40338 from gnufied/cloudprovider-gce-metrics
Automatic merge from submit-queue

Implement API usage metrics for gce storage

**What this PR does / why we need it**:

This PR implements support for emitting metrics from GCE about storage operations.

**Which issue this PR fixes** 

Fixes https://github.com/kubernetes/features/issues/182

**Release note**:
```
Add support for emitting metrics from GCE cloudprovider about storage operations.
```
2017-03-30 12:42:02 -07:00
Kubernetes Submit Queue
289ef62442 Merge pull request #43644 from nicksardo/gce-healthchecks
Automatic merge from submit-queue (batch tested with PRs 42617, 43247, 43509, 43644, 43820)

[GCE] Support legacy-https and generic health checks

**What this PR does / why we need it**:
- Adds CRUD functions to manage `compute.HttpsHealthChecks` 
The legacy HTTPS healthchecks will be used by the GLBC (GCE Load balancer Controller)

- Adds CRUD functions to manage `compute.HealthChecks`
These are required for the internal load balancer

- Removes the logic that disregards NotFound errors on DeleteHttpHealthChecks as this is useful information for callers. Here are the three known invocations within kubernetes: 
[gce/gce_loadbalancer.go#L457](bc6e77d42f/pkg/cloudprovider/providers/gce/gce_loadbalancer.go (L457)): Only prints warning that HC wasn't deleted  -> acceptable
[gce/gce_loadbalancer.go#L465](bc6e77d42f/pkg/cloudprovider/providers/gce/gce_loadbalancer.go (L465)): Err is ignored if not nil  -> acceptable
[e2e/framework/ingress_utils.go#L530](bc6e77d42f/test/e2e/framework/ingress_utils.go (L530)): Already checks if is NotFound error -> acceptable

**Which issue this PR fixes** *(optional, in `fixes #<issue number>(, fixes #<issue_number>, ...)` format, will close that issue when PR gets merged)*:
Step one of https://github.com/kubernetes/ingress/issues/494
Step one of #33483 

**Special notes for your reviewer**:

**Release note**:

```release-note
NONE
```
2017-03-29 16:05:25 -07:00
Miao Luo
6d1c4a3c49 Remove login info on workers for vsphere cloud provider.
Remove the dependency on login information on worker nodes for the vSphere cloud provider:
1. VM Name is required to be set in the cloud provider configuration file.
2. Remove the requirement of login for Instance functions when querying local node information.
2017-03-28 23:20:38 -07:00
Cole Mickens
21250f1748 azure: reduce poll delay for all clients to 5 sec 2017-03-28 18:18:36 -07:00
Cole Mickens
5c21498dbf run update-bazel.sh 2017-03-28 18:08:22 -07:00
Cole Mickens
6eb7a1a366 azure: add k8s info to user-agent string 2017-03-28 15:17:03 -07:00
wlan0
655dfd1196 move ProviderID indexed methods to the right location 2017-03-28 15:08:03 -07:00
Hemant Kumar
c4aaf47282 Implement API usage metrics for gce
This PR implements tracking of GCE API usage via Prometheus metrics.
2017-03-28 16:33:21 -04:00
wlan0
a68c783dc8 Use ProviderID to address nodes in the cloudprovider
The cloudprovider is being refactored out of kubernetes core. This is being
done by moving all the cloud-specific calls from kube-apiserver, kubelet and
kube-controller-manager into a separately maintained binary(by vendors) called
cloud-controller-manager. The Kubelet relies on the cloudprovider to detect information
about the node that it is running on. Some of the cloudproviders worked by
querying local information to obtain this information. In the new world of things,
local information cannot be relied on, since cloud-controller-manager will not
run on every node. Only one active instance of it will be run in the cluster.

Today, all calls to the cloudprovider are based on the nodename. Nodenames are
unique within the Kubernetes cluster, but generally not unique within the cloud.
This model of addressing nodes by nodename will not work in the future because
local services cannot be queried to uniquely identify a node in the cloud. Therefore,
I propose that we perform all cloudprovider calls based on ProviderID. This ID is
a unique identifier for a node in an external database (such as
the instanceID in the AWS cloud).
2017-03-27 23:13:13 -07:00
Kubernetes Submit Queue
3843108081 Merge pull request #42974 from vmware/VSANPolicyProvisioningForKubernetesOnKubernetesRepo
Automatic merge from submit-queue (batch tested with PRs 42835, 42974)

VSAN policy support for storage volume provisioning inside kubernetes

vSphere users will have the ability to specify custom Virtual SAN storage capabilities during dynamic volume provisioning. You can now define storage requirements, such as performance and availability, in the form of storage capabilities during dynamic volume provisioning. The storage capability requirements are converted into a Virtual SAN policy, which is then pushed down to the Virtual SAN layer when a storage volume (virtual disk) is being created. The virtual disk is distributed across the Virtual SAN datastore to meet the requirements.

For example, User creates a storage class with VSAN storage capabilities:

```
kind: StorageClass
apiVersion: storage.k8s.io/v1beta1
metadata:
  name: slow
provisioner: kubernetes.io/vsphere-volume
parameters:
  hostFailuresToTolerate: "2"
  diskStripes: "1"
  cacheReservation: "20"
  datastore: VSANDatastore
```

The vSphere cloud provider provisions a virtual disk (VMDK) on VSAN with the policy configured for the disk.

When you know storage requirements of your application that is being deployed on a container, you can specify these storage capabilities when you create a storage class inside Kubernetes.

@pdhamdhere @tthole @abrarshivani @divyenpatel 

**Release note**:

```release-note
None
```
2017-03-27 17:00:23 -07:00
Kubernetes Submit Queue
31e596e5ba Merge pull request #40423 from mkutsevol/feature/openstack_cinder_v1_2_auto
Automatic merge from submit-queue (batch tested with PRs 43681, 40423, 43562, 43008, 43381)

Openstack cinder v1/v2/auto API support

**What this PR does / why we need it**:
It adds support for the v2 Cinder API, plus autodetection of the available Cinder API level (as with LBs).
**Which issue this PR fixes** *(optional, in `fixes #<issue number>(, fixes #<issue_number>, ...)` format, will close that issue when PR gets merged)*: fixes #39572

**Special notes for your reviewer**:
Based on work by @anguslees. The first two commits are just rebased from https://github.com/kubernetes/kubernetes/pull/36344 which already had a lgtm by @jbeda 

**Release note**:

```
Add support for v2 cinder API for openstack cloud provider. By default it autodetects the available version.
```
2017-03-27 12:49:22 -07:00
Balu Dontu
dbe94833eb VSAN policy support for storage volume provisioning inside kubernetes 2017-03-27 12:43:01 -07:00
Dong Liu
ed36aba8ba Add separate func 'cleanupLoadBalancer' and 'cleanupPublicIP' for Azure. 2017-03-27 15:19:16 +08:00
Dong Liu
54664d08dd Update reconcileSecurityGroup logic for Azure, add tests. 2017-03-27 12:52:21 +08:00