Commit Graph

872 Commits

Author SHA1 Message Date
Kubernetes Submit Queue
5962b849f1 Merge pull request #43866 from kerneltime/patch-1
Automatic merge from submit-queue

Update owners to include kerneltime

**What this PR does / why we need it**: Update owners to include kerneltime to help with PRs
2017-04-10 13:40:35 -07:00
Kubernetes Submit Queue
97857e8390 Merge pull request #41687 from aliscott/fix_overwriting_err
Automatic merge from submit-queue

Fix original error being overwritten before returned
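For reference, a minimal sketch of the Go pattern being fixed (function names hypothetical, not the actual call sites):

```go
package main

import (
	"errors"
	"fmt"
)

func doMount(path string) error { return errors.New("mount failed: " + path) }
func cleanup(path string) error { return nil }

// Before: the original error from doMount is overwritten by the
// cleanup call, so the caller may see the wrong (or nil) error.
func mountBad(path string) error {
	err := doMount(path)
	if err != nil {
		err = cleanup(path) // bug: clobbers the mount error
	}
	return err
}

// After: the original error is preserved; a secondary failure is
// reported separately instead of masking the first one.
func mountGood(path string) error {
	if err := doMount(path); err != nil {
		if cleanupErr := cleanup(path); cleanupErr != nil {
			fmt.Printf("cleanup also failed: %v\n", cleanupErr)
		}
		return err
	}
	return nil
}

func main() {
	fmt.Println(mountBad("/mnt/a"))  // <nil>: the real error is lost
	fmt.Println(mountGood("/mnt/a")) // the mount error survives
}
```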
2017-04-09 23:16:32 -07:00
Kubernetes Submit Queue
c8f90171e4 Merge pull request #39678 from resouer/extract-resource
Automatic merge from submit-queue (batch tested with PRs 41775, 39678, 42629, 42524, 43028)

Extract resource functions that belong to api/util

Addresses: extract the kubelet resource functions that belong in `pkg/api/v1/resource_helpers.go`
2017-04-07 17:44:14 -07:00
Kubernetes Submit Queue
854441643f Merge pull request #38801 from nak3/nfs-mkdir
Automatic merge from submit-queue

Catch error when failing to make directory in NFS volume plugin

NFS: Catch error when failing to make directory

Currently, the NFS volume plugin doesn't catch the error from
os.MkdirAll. That makes it difficult to debug why making the
directory failed. This patch adds error handling to the os.MkdirAll call.
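A sketch of the kind of check the patch adds (simplified; the real plugin wraps this in its setup path):

```go
package main

import (
	"fmt"
	"os"
)

// ensureDir propagates the os.MkdirAll error instead of ignoring it,
// so a failed mount-point creation is visible in the pod's events/logs.
func ensureDir(dir string) error {
	if err := os.MkdirAll(dir, 0750); err != nil {
		return fmt.Errorf("failed to create directory %s: %v", dir, err)
	}
	return nil
}

func main() {
	if err := ensureDir("/proc/cannot-write-here"); err != nil {
		fmt.Println(err) // the underlying cause is now reported
	}
}
```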
2017-04-07 16:48:46 -07:00
Kubernetes Submit Queue
6198c469cd Merge pull request #39476 from rootfs/azure-logging
Automatic merge from submit-queue

azure disk: add logging on disk attach

**What this PR does / why we need it**:
While we were debugging a failed azure disk attach, we were missing logging information to identify the root cause. This fix logs information at each stage of attach to help identify where the problem is if it happens again.
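A sketch of the stage-by-stage logging style (messages and helper functions are illustrative, not the actual plugin code):

```go
package main

import "log"

func findDisk(name string) error        { return nil }
func issueAttach(name, vm string) error { return nil }

// attachDisk logs before and after each stage so a failed attach can
// be localized from the logs alone.
func attachDisk(diskName, vmName string) error {
	log.Printf("azureDisk - begin attach: disk %q to node %q", diskName, vmName)
	if err := findDisk(diskName); err != nil {
		log.Printf("azureDisk - find disk %q failed: %v", diskName, err)
		return err
	}
	log.Printf("azureDisk - found disk %q, issuing attach", diskName)
	if err := issueAttach(diskName, vmName); err != nil {
		log.Printf("azureDisk - attach of %q to %q failed: %v", diskName, vmName, err)
		return err
	}
	log.Printf("azureDisk - attach of %q to %q succeeded", diskName, vmName)
	return nil
}

func main() { _ = attachDisk("disk-1", "node-1") }
```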

**Which issue this PR fixes** *(optional, in `fixes #<issue number>(, fixes #<issue_number>, ...)` format, will close that issue when PR gets merged)*: fixes #

**Special notes for your reviewer**:

**Release note**:

NONE
2017-04-07 16:03:44 -07:00
Kubernetes Submit Queue
98a4c6ba7f Merge pull request #43396 from rootfs/iscsi-chap
Automatic merge from submit-queue (batch tested with PRs 44119, 42538, 43802, 42336, 43396)

iSCSI CHAP support

**What this PR does / why we need it**:
To support CHAP authentication in a multi-tenant setup
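A sketch of the resulting API shape (import path as in today's k8s.io/api — at the time of this PR the types lived under pkg/api/v1; the secret name is hypothetical):

```go
package main

import (
	"fmt"

	v1 "k8s.io/api/core/v1"
)

func main() {
	// iSCSI volume with CHAP enabled for both discovery and session;
	// credentials come from a Secret referenced by name.
	iscsi := v1.ISCSIVolumeSource{
		TargetPortal:      "10.0.0.10:3260",
		IQN:               "iqn.2017-04.com.example:storage",
		Lun:               0,
		DiscoveryCHAPAuth: true,
		SessionCHAPAuth:   true,
		SecretRef:         &v1.LocalObjectReference{Name: "chap-secret"},
	}
	fmt.Printf("%+v\n", iscsi)
}
```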
**Which issue this PR fixes** *(optional, in `fixes #<issue number>(, fixes #<issue_number>, ...)` format, will close that issue when PR gets merged)*: fixes #

**Special notes for your reviewer**:

**Release note**:

```release-note
Support iSCSI CHAP authentication
```
2017-04-07 14:09:42 -07:00
Huamin Chen
8eb6d6cfa7 update iSCSI README with CHAP instruction
Signed-off-by: Huamin Chen <hchen@redhat.com>
2017-04-07 16:38:29 +00:00
Huamin Chen
9298217126 Add iSCSI CHAP authentication
Signed-off-by: Huamin Chen <hchen@redhat.com>
2017-04-07 16:38:29 +00:00
Kubernetes Submit Queue
176eb0e509 Merge pull request #43861 from rootfs/fc-doc
Automatic merge from submit-queue

relocate FC multipath readme to examples from pkg/volume

Signed-off-by: rootfs <hchen@redhat.com>



**What this PR does / why we need it**:
`pkg/volume/README.md` is not a good place for Fibre Channel-specific docs. Move the block into the FC README.
**Which issue this PR fixes** *(optional, in `fixes #<issue number>(, fixes #<issue_number>, ...)` format, will close that issue when PR gets merged)*: fixes #

**Special notes for your reviewer**:

**Release note**:

```release-note
NONE
```
2017-04-05 08:50:27 -07:00
Kubernetes Submit Queue
5ef8148b5e Merge pull request #41929 from abrarshivani/fstype_in_storage_class
Automatic merge from submit-queue (batch tested with PRs 44008, 41929)

vSphere Cloud Provider: Fstype in storage class

This PR does the following:

1. Adds fstype support in storage class for vSphere Cloud Provider.
2. Modifies examples to include fstype in storage class.
3. Adds fstype support in storage class for Photon Controller Cloud Provider (@luomiao)

Internally reviewed [here](https://github.com/vmware/kubernetes/pull/88).

cc @pdhamdhere @tusharnt @kerneltime @BaluDontu @divyenpatel @luomiao
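The feature surfaces as an `fstype` parameter on the storage class; a sketch of the object (parameter value illustrative; at the time of this PR the type lived under storage.k8s.io/v1beta1):

```go
package main

import (
	"fmt"

	storagev1 "k8s.io/api/storage/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)

func main() {
	// StorageClass asking the vSphere provisioner to format the
	// provisioned virtual disk with ext3 instead of the default.
	sc := storagev1.StorageClass{
		ObjectMeta:  metav1.ObjectMeta{Name: "fast"},
		Provisioner: "kubernetes.io/vsphere-volume",
		Parameters:  map[string]string{"fstype": "ext3"},
	}
	fmt.Printf("%s: %v\n", sc.Name, sc.Parameters)
}
```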
2017-04-04 16:50:20 -07:00
Miao Luo
72a27daa3c Adds fstype support in storage class for Photon Cloud Provider. 2017-04-04 12:17:52 -07:00
Abrar Shivani
50c9cca487 Add support for fstype in Storage Class for vSphere Cloud Provider 2017-04-03 16:13:00 -07:00
Kubernetes Submit Queue
538c5c74b1 Merge pull request #42973 from gnufied/fix-vsphere-selinux
Automatic merge from submit-queue

Fix vsphere selinux support

The Managed flag must be true for SELinux relabelling to work
for vSphere.

Fixes #42972
2017-04-03 13:59:56 -07:00
Jan Safranek
3fbf9cb451 Fix deletion of Gluster volumes
GetClassForVolume should check pv.spec.storageClassName together
with the beta annotation.
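A sketch of the lookup described (the annotation key is the real beta key; in the upstream helper the annotation, when present, takes precedence):

```go
package main

import "fmt"

const betaStorageClassAnnotation = "volume.beta.kubernetes.io/storage-class"

// storageClassName mirrors GetClassForVolume: consult the beta
// annotation first, then fall back to the typed spec field.
func storageClassName(annotations map[string]string, specClassName string) string {
	if class, found := annotations[betaStorageClassAnnotation]; found {
		return class
	}
	return specClassName
}

func main() {
	fmt.Println(storageClassName(map[string]string{
		betaStorageClassAnnotation: "glusterfs",
	}, "")) // "glusterfs"
}
```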
2017-04-03 15:33:56 +02:00
Kubernetes Submit Queue
46343f37dd Merge pull request #42038 from humblec/glusterfs-backup-vol1
Automatic merge from submit-queue (batch tested with PRs 42038, 42083)

 Add backup-volfile-servers to mount option. 

This feature ensures the `backup servers` in the trusted pool are contacted if there is a failure in the connected server.
The mount command becomes:
mount -t glusterfs -o log-level=ERROR,log-file=/var/lib/kubelet/plugins/kubernetes.io/glusterfs/glustermount/glusterpod-glusterfs.log,backup-volfile-servers=192.168.100.0:192.168.200.0:192.168.43.149 ..

Signed-off-by: Humble Chirammal <hchiramm@redhat.com>
2017-04-03 04:07:19 -07:00
Harry Zhang
efb10b1821 Move extract resources to its pkg
Move ExtractContainerResourceValue
2017-04-03 13:06:48 +08:00
Kubernetes Submit Queue
b625085230 Merge pull request #42325 from tsmetana/remove-unused-method-from-og
Automatic merge from submit-queue

Remove unused method from operation_generator

This only removes the GerifyVolumeIsSafeToDetach [sic] method from operation_generator. The method is not called from anywhere; moreover, there is a private method named verifyVolumeIsSafeToDetach (which is being used). This looks like a copy-and-paste mistake that deserves to be cleaned up.
```release-note
NONE
```
2017-03-31 10:56:40 -07:00
Kubernetes Submit Queue
7543bac563 Merge pull request #41952 from justinsb/curate_volumes_aws_ebs
Automatic merge from submit-queue

Curate owners for pkg/volume/aws_ebs

The previous list was algorithmically generated; applying some curation.

```release-note
NONE
```
2017-03-30 16:57:30 -07:00
Ritesh H Shukla
1052432f4a Update owners to include kerneltime 2017-03-30 11:01:27 -07:00
rootfs
cb6a7c946d relocate FC multipath readme to examples from pkg/volume
Signed-off-by: rootfs <hchen@redhat.com>
2017-03-30 11:15:25 -04:00
wlan0
a68c783dc8 Use ProviderID to address nodes in the cloudprovider
The cloudprovider is being refactored out of kubernetes core. This is being
done by moving all the cloud-specific calls from kube-apiserver, kubelet and
kube-controller-manager into a separately maintained binary (by vendors) called
cloud-controller-manager. The Kubelet relies on the cloudprovider to detect information
about the node that it is running on. Some of the cloudproviders obtained this
information by querying local services. In the new world of things,
local information cannot be relied on, since cloud-controller-manager will not
run on every node. Only one active instance of it will be run in the cluster.

Today, all calls to the cloudprovider are based on the nodename. Nodenames are
unique within the kubernetes cluster, but generally not unique within the cloud.
This model of addressing nodes by nodename will not work in the future because
local services cannot be queried to uniquely identify a node in the cloud. Therefore,
I propose that we perform all cloudprovider calls based on ProviderID. This ID
uniquely identifies a node in an external database (such as
the instanceID in the AWS cloud).
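Illustrative only: a ProviderID embeds the cloud's own identifier, so it can be resolved without asking the node. A sketch with a hypothetical helper (the `aws:///<zone>/<instance-id>` shape matches the AWS provider):

```go
package main

import (
	"errors"
	"fmt"
	"strings"
)

// instanceIDFromProviderID extracts the cloud instance ID from a
// ProviderID such as "aws:///us-west-2a/i-0123456789abcdef0".
// Node names are unique only within one cluster; this ID is unique
// within the cloud, so cloud-controller-manager can use it directly.
func instanceIDFromProviderID(providerID string) (string, error) {
	if !strings.HasPrefix(providerID, "aws://") {
		return "", errors.New("unrecognized providerID: " + providerID)
	}
	parts := strings.Split(providerID, "/")
	return parts[len(parts)-1], nil
}

func main() {
	id, err := instanceIDFromProviderID("aws:///us-west-2a/i-0123456789abcdef0")
	fmt.Println(id, err) // i-0123456789abcdef0 <nil>
}
```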
2017-03-27 23:13:13 -07:00
Kubernetes Submit Queue
3843108081 Merge pull request #42974 from vmware/VSANPolicyProvisioningForKubernetesOnKubernetesRepo
Automatic merge from submit-queue (batch tested with PRs 42835, 42974)

VSAN policy support for storage volume provisioning inside kubernetes

vSphere users will have the ability to specify custom Virtual SAN Storage Capabilities during dynamic volume provisioning. You can now define storage requirements, such as performance and availability, in the form of storage capabilities during dynamic volume provisioning. The storage capability requirements are converted into a Virtual SAN policy which is then pushed down to the Virtual SAN layer when a storage volume (virtual disk) is being created. The virtual disk is distributed across the Virtual SAN datastore to meet the requirements.

For example, a user creates a storage class with VSAN storage capabilities:

> kind: StorageClass
> apiVersion: storage.k8s.io/v1beta1
> metadata:
>   name: slow
> provisioner: kubernetes.io/vsphere-volume
> parameters:
>   hostFailuresToTolerate: "2"
>   diskStripes: "1"
>   cacheReservation: "20"
>   datastore: VSANDatastore

The vSphere Cloud provider provisions a virtual disk (VMDK) on VSAN with the policy configured to the disk.

When you know the storage requirements of the application being deployed in a container, you can specify these storage capabilities when you create a storage class inside Kubernetes.

@pdhamdhere @tthole @abrarshivani @divyenpatel 

**Release note**:

```release-note
None
```
2017-03-27 17:00:23 -07:00
Balu Dontu
dbe94833eb VSAN policy support for storage volume provisioning inside kubernetes 2017-03-27 12:43:01 -07:00
Alistair Scott
fc62687b2c Fix original error being overwritten before returned 2017-03-27 13:29:59 +01:00
Kubernetes Submit Queue
3fcb7cb377 Merge pull request #42170 from rootfs/azure-file-prv
Automatic merge from submit-queue (batch tested with PRs 43642, 43170, 41813, 42170, 41581)

Enable storage class support in Azure File volume

**What this PR does / why we need it**:
Support StorageClass in Azure file volume

**Which issue this PR fixes** *(optional, in `fixes #<issue number>(, fixes #<issue_number>, ...)` format, will close that issue when PR gets merged)*: fixes #

**Special notes for your reviewer**:

**Release note**:

```release-note
Support StorageClass in Azure file volume

```
2017-03-24 19:04:28 -07:00
Kubernetes Submit Queue
803369b9cc Merge pull request #42006 from screeley44/error-events3
Automatic merge from submit-queue (batch tested with PRs 42522, 42545, 42556, 42006, 42631)

Fixes MountVolume.NewMounter errors not displayed to users via describe events

Fixes #42004 

This fixes the problem of mount errors being eaten and not displayed to users again. Specifically, errors caught in MountVolume.NewMounter (like missing endpoints, etc.).

Current behavior for any mount failure:

```
Events:
  FirstSeen    LastSeen    Count    From            SubObjectPath    Type        Reason        Message
  ---------    --------    -----    ----            -------------    --------    ------        -------
  12m        12m        1    default-scheduler            Normal        Scheduled    Successfully assigned glusterfs-bb-pod1 to 127.0.0.1
  10m        1m        5    kubelet, 127.0.0.1            Warning        FailedMount    Unable to mount volumes for pod "glusterfs-bb-pod1_default(67c9dfa7-f9f5-11e6-aee2-5254003a59cf)": timeout expired waiting for volumes to attach/mount for pod "default"/"glusterfs-bb-pod1". list of unattached/unmounted volumes=[glusterfsvol]
  10m        1m        5    kubelet, 127.0.0.1            Warning        FailedSync    Error syncing pod, skipping: timeout expired waiting for volumes to attach/mount for pod "default"/"glusterfs-bb-pod1". list of unattached/unmounted volumes=[glusterfsvol]
```

New Behavior:

For example, on glusterfs, endpoints were deliberately not created; now the correct message is displayed:
```
Events:
  FirstSeen	LastSeen	Count	From			SubObjectPath	Type		Reason		Message
  ---------	--------	-----	----			-------------	--------	------		-------
  2m		2m		1	default-scheduler			Normal		Scheduled	Successfully assigned glusterfs-bb-pod1 to 127.0.0.1
  54s		54s		1	kubelet, 127.0.0.1			Warning		FailedMount	Unable to mount volumes for pod "glusterfs-bb-pod1_default(8edd2c25-fa09-11e6-92ae-5254003a59cf)": timeout expired waiting for volumes to attach/mount for pod "default"/"glusterfs-bb-pod1". With error timed out waiting for the condition. list of unattached/unmounted volumes=[glusterfsvol]
  54s		54s		1	kubelet, 127.0.0.1			Warning		FailedSync	Error syncing pod, skipping: timeout expired waiting for volumes to attach/mount for pod "default"/"glusterfs-bb-pod1". With error timed out waiting for the condition. list of unattached/unmounted volumes=[glusterfsvol]
  2m		6s		814	kubelet, 127.0.0.1			Warning		FailedMount	MountVolume.NewMounter failed for volume "kubernetes.io/glusterfs/8edd2c25-fa09-11e6-92ae-5254003a59cf-glusterfsvol" (spec.Name: "glusterfsvol") pod "8edd2c25-fa09-11e6-92ae-5254003a59cf" (UID: "8edd2c25-fa09-11e6-92ae-5254003a59cf") with: endpoints "glusterfs-cluster" not found
```
2017-03-24 15:10:33 -07:00
Kubernetes Submit Queue
fb537762fc Merge pull request #42297 from YuPengZTE/devErrorf
Automatic merge from submit-queue (batch tested with PRs 42237, 42297, 42279, 42436, 42551)

should replace errors.New(fmt.Sprintf(...)) with fmt.Errorf(...)

Signed-off-by: yupengzte <yu.peng36@zte.com.cn>
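The mechanical rewrite, for reference:

```go
package main

import (
	"errors"
	"fmt"
)

func main() {
	name := "pv-1"

	// Before: formats a string, then wraps it in an error.
	errOld := errors.New(fmt.Sprintf("volume %s not found", name))

	// After: one call, same message.
	errNew := fmt.Errorf("volume %s not found", name)

	fmt.Println(errOld, "|", errNew)
}
```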



**What this PR does / why we need it**:

**Which issue this PR fixes** *(optional, in `fixes #<issue number>(, fixes #<issue_number>, ...)` format, will close that issue when PR gets merged)*: fixes #

**Special notes for your reviewer**:

**Release note**:

```release-note
```
2017-03-24 14:16:23 -07:00
Kubernetes Submit Queue
1aff24cb53 Merge pull request #43217 from SEJeff/fix-spelling-tyop
Automatic merge from submit-queue

Fix spelling of the word successfully

A serious business project like kubernetes necessitates serious business logs.
2017-03-24 10:26:54 -07:00
Kubernetes Submit Queue
11610d0ed6 Merge pull request #42160 from gnufied/gnufied-pkg-volume-reviewer
Automatic merge from submit-queue

Add gnufied as reviewer for pkg/volume

I have helped review and contributed code to this
area already.

cc @saad-ali @jsafrane @childsb
2017-03-24 10:25:20 -07:00
Kubernetes Submit Queue
2df943ce50 Merge pull request #36698 from fabiand/no-mpathconf
Automatic merge from submit-queue

fc: Drop multipath.conf snippet

**What this PR does / why we need it**:
Removes multipath.conf. The code does not make use of it, or ensure that it's getting used, and it should in addition be handled elsewhere.

**Which issue this PR fixes** *(optional, in `fixes #<issue number>(, #<issue_number>, ...)` format, will close that issue when PR gets merged)*: fixes #

**Special notes for your reviewer**:

**Release note**:

```release-note
```

A minimalistic multipath.conf was written, but it was useless: it is
unclear whether multipathd is running, and no config reload was
triggered either.

This patch drops this snippet. In general it's probably a better idea
to leave the multipath.conf to the component managing the host.

Signed-off-by: Fabian Deutsch <fabiand@fedoraproject.org>
2017-03-24 10:24:49 -07:00
Jeff Schroeder
a5afdfa17f Fix spelling of the word successfully
Auto-generated via:
    git grep -l [Ss]uccesfully  | xargs sed -ri 's/([sS])uccesfully/\1uccessfully/g'

I noticed this when running kube-scheduler with --v=4 and it is annoying.
Then manually reverted the changes to the vendored bits.
2017-03-22 18:33:11 -05:00
Kubernetes Submit Queue
754effe332 Merge pull request #42949 from wenlxie/master
Automatic merge from submit-queue

recycle pod can't get the event since the channel is closed

What this PR does / why we need it:
We created a hostPath-type PV with the "Recycle" persistentVolumeReclaimPolicy and bound a PVC to it, but after the PVC was deleted, the PV could not return to Available status. This started happening after we upgraded etcd to 3.0. The reason is:
If the channel used to get the pod's messages and events is closed abnormally (for example, the event channel may be closed because of a "required revision has been compacted" error), the function internalRecycleVolumeByWatchingPodUntilCompletion gets stuck in a loop, the recycle pod is never deleted, and the PV cannot get into Available status.
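The underlying Go behavior: receiving from a closed channel returns immediately with the zero value, so a loop that ignores the `ok` flag spins forever. A minimal sketch of the fix (names simplified from the real function):

```go
package main

import "fmt"

type podEvent struct{ reason string }

// waitForRecycleEvent consumes events until the recycle pod finishes.
// The fix is the ok check: when the watch channel is closed, return an
// error instead of looping on zero-value events forever.
func waitForRecycleEvent(ch <-chan podEvent) error {
	for {
		ev, ok := <-ch
		if !ok {
			return fmt.Errorf("recycle watch channel was closed")
		}
		if ev.reason == "Succeeded" {
			return nil
		}
	}
}

func main() {
	ch := make(chan podEvent)
	close(ch) // e.g. etcd compaction tore down the watch
	fmt.Println(waitForRecycleEvent(ch))
}
```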

Special notes for your reviewer:
None
Release note:
2017-03-16 02:41:11 -07:00
Vladimir Vivien
0715b32439 Update ScaleIO volume plugin default readOnly value
This commit updates the code to set the default value of the readOnly attribute to false.
It also updates the example docs to add the full list of supported plugin attributes and documentation.
2017-03-14 14:19:48 -04:00
wenlxie
33385214bc recycle pod can't get the event since the channel has been closed 2017-03-14 10:35:08 +08:00
Hemant Kumar
a4a3d20934 Fix vsphere selinux support
The Managed flag must be true for SELinux relabelling to work
for vSphere.
2017-03-12 23:21:07 -04:00
Hemant Kumar
12d6b87894 Validate PVs for mount options
We are going to move the validation into its own package
and we will call validation for individual volume types
as needed.
2017-03-09 18:24:37 -05:00
yupengzte
363f321f32 should replace errors.New(fmt.Sprintf(...)) with fmt.Errorf(...)
Signed-off-by: yupengzte <yu.peng36@zte.com.cn>
2017-03-06 09:14:48 +08:00
Kubernetes Submit Queue
f9ccee7714 Merge pull request #42435 from dashpole/timestamps_for_fsstats
Automatic merge from submit-queue (batch tested with PRs 42369, 42375, 42397, 42435, 42455)

[Bug Fix]: Avoid evicting more pods than necessary by adding Timestamps for fsstats and ignoring stale stats

Continuation of #33121.  Credit for most of this goes to @sjenning.  I added volume fs timestamps.

**why is this a bug** 
This PR attempts to fix part of https://github.com/kubernetes/kubernetes/issues/31362 which results in multiple pods getting evicted unnecessarily whenever the node runs into resource pressure. This PR reduces the chances of such disruptions by avoiding reacting to old/stale metrics.
Without this PR, kubernetes nodes under resource pressure will cause unnecessary disruptions to user workloads. 
This PR will also help deflake a node e2e test suite.

The eviction manager currently avoids evicting pods if metrics are old.  However, timestamp data is not available for filesystem data, and this causes lots of extra evictions.
See the [inode eviction test flakes](https://k8s-testgrid.appspot.com/google-node#kubelet-flaky-gce-e2e) for examples.
This should probably be treated as a bugfix, as it should help mitigate extra evictions.
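The core of the fix, sketched: each filesystem stat carries a timestamp, and the eviction manager ignores observations older than some staleness bound (type names and threshold here are hypothetical):

```go
package main

import (
	"fmt"
	"time"
)

type fsStats struct {
	availableBytes uint64
	timestamp      time.Time
}

// underPressure only trusts a measurement that is fresh enough; acting
// on stale stats is what caused the repeated, unnecessary evictions.
func underPressure(s fsStats, threshold uint64, staleness time.Duration) bool {
	if time.Since(s.timestamp) > staleness {
		return false // ignore the stale observation rather than act on it
	}
	return s.availableBytes < threshold
}

func main() {
	old := fsStats{availableBytes: 0, timestamp: time.Now().Add(-5 * time.Minute)}
	fmt.Println(underPressure(old, 1<<30, 2*time.Minute)) // false: too stale
}
```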

cc: @kubernetes/sig-storage-pr-reviews  @kubernetes/sig-node-pr-reviews @vishh @derekwaynecarr @sjenning
2017-03-03 23:21:48 -08:00
Vladimir Vivien
915a54180d Addition of ScaleIO Kubernetes Volume Plugin
This commit implements the Kubernetes volume plugin allowing pods to seamlessly access and use data stored on ScaleIO volumes.
2017-03-03 15:47:19 -05:00
Kubernetes Submit Queue
e9bbfb81c1 Merge pull request #41306 from gnufied/implement-interface-bulk-volume-poll
Automatic merge from submit-queue (batch tested with PRs 41306, 42187, 41666, 42275, 42266)

Implement bulk polling of volumes

This implements bulk volume polling using ideas presented by
Justin in https://github.com/kubernetes/kubernetes/pull/39564

But it changes the implementation to use an interface
and doesn't affect other implementations.
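The approach, sketched as an optional interface that an attacher can implement (names here are hypothetical; the real interface lives in pkg/volume):

```go
package main

import "fmt"

type nodeName string
type volumeSpec struct{ name string }

// bulkVerifier is the optional capability: verify all volumes for all
// nodes in one call instead of one cloud API call per volume.
type bulkVerifier interface {
	BulkVerifyVolumes(byNode map[nodeName][]volumeSpec) (map[nodeName]map[string]bool, error)
}

// verifyVolumes uses the bulk path when the plugin supports it and
// falls back to per-volume polling otherwise, so plugins that don't
// implement the interface are unaffected.
func verifyVolumes(plugin interface{}, byNode map[nodeName][]volumeSpec) {
	if bv, ok := plugin.(bulkVerifier); ok {
		attached, err := bv.BulkVerifyVolumes(byNode)
		fmt.Println(attached, err)
		return
	}
	fmt.Println("falling back to per-volume polling")
}

func main() { verifyVolumes(struct{}{}, nil) }
```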

cc @justinsb
2017-03-03 10:54:38 -08:00
Kubernetes Submit Queue
ff9296fcad Merge pull request #35055 from ivan4th/make-downward-api-test-table-driven
Automatic merge from submit-queue (batch tested with PRs 42365, 42429, 41770, 42018, 35055)

Make Downward API test table-driven
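The pattern in miniature (the function under test is a stand-in, not the actual downward API code):

```go
package downwardapi

import "testing"

// formatFieldPath is a stand-in for the code under test.
func formatFieldPath(path string) string {
	return "metadata." + path
}

// One loop over declared cases replaces N near-identical test bodies.
func TestFormatFieldPath(t *testing.T) {
	cases := []struct {
		name string
		in   string
		want string
	}{
		{"labels", "labels", "metadata.labels"},
		{"annotations", "annotations", "metadata.annotations"},
	}
	for _, tc := range cases {
		if got := formatFieldPath(tc.in); got != tc.want {
			t.Errorf("%s: got %q, want %q", tc.name, got, tc.want)
		}
	}
}
```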
2017-03-03 09:24:48 -08:00
David Ashpole
a90c7951d4 add volume timestamps 2017-03-02 15:01:59 -08:00
Hemant Kumar
786da1de12 Impement bulk polling of volumes
This implements Bulk volume polling using ideas presented by
justin in https://github.com/kubernetes/kubernetes/pull/39564

But it changes the implementation to use an interface
and doesn't affect other implementations.
2017-03-02 14:59:59 -05:00
Jan Safranek
9487552e41 Regenerate everything 2017-03-02 10:23:58 +01:00
Jan Safranek
7ae4152712 Move PV/PVC annotations to PV/PVC types.
They aren't part of storage.k8s.io/v1 or v1beta1 API.
Also move associated *GetClass functions.
2017-03-02 10:23:55 +01:00
Jan Safranek
a39bd53509 Explicitly use storage.k8s.io/v1beta1 everywhere.
v1 is not yet available on GKE and tests would fail.
2017-03-02 08:56:26 +01:00
Scott Creeley
762ca8e8a9 adding some debug 2017-03-01 13:30:21 -05:00
Hemant Kumar
2d3008fc56 Implement support for mount options in PVs
Add support for mount options via annotations on PVs
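At this point mount options ride on a PV annotation; a minimal sketch (the annotation key is the beta key used upstream; option values illustrative):

```go
package main

import "fmt"

const mountOptionAnnotation = "volume.beta.kubernetes.io/mount-options"

func main() {
	annotations := map[string]string{
		// comma-separated options passed through to the mount call
		mountOptionAnnotation: "hard,nolock,nfsvers=3",
	}
	fmt.Println(annotations[mountOptionAnnotation])
}
```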
2017-03-01 11:50:40 -05:00
Tomas Smetana
58edea18de Remove unused method from operation_generator 2017-03-01 10:42:53 +01:00
Kubernetes Submit Queue
4e46ae1d3b Merge pull request #41597 from rootfs/rbd-fencing2
Automatic merge from submit-queue (batch tested with PRs 41597, 42185, 42075, 42178, 41705)

force rbd image unlock if the image is not used

**What this PR does / why we need it**:
A Ceph RBD image can remain locked if the host that holds the lock goes down. In such a case, the image cannot be used by other Pods.

The fix is to detect the orphaned locks and force unlock.
**Which issue this PR fixes** *(optional, in `fixes #<issue number>(, fixes #<issue_number>, ...)` format, will close that issue when PR gets merged)*: fixes #31790

**Special notes for your reviewer**:

Note: previously, the RBD volume plugin mapped the image, mounted it, and created a lock on the image. Since the proposed fix uses `rbd status` output to determine whether the image is being used, the sequence has to change to: lock checking (through `rbd lock list`), mapping check (through `rbd status`), forced unlock if necessary (through `rbd lock rm`), image lock, image mapping, and mount.
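The reordered sequence, sketched with os/exec (pool, image, and lock identifiers are illustrative; output parsing omitted):

```go
package main

import (
	"fmt"
	"os/exec"
)

// checkAndUnlock mirrors the order described above: list lock holders,
// check whether the image is mapped anywhere, then remove an orphaned
// lock before taking our own.
func checkAndUnlock(pool, image, lockID, locker string) {
	steps := [][]string{
		{"rbd", "lock", "list", "--pool", pool, image},               // who holds locks?
		{"rbd", "status", "--pool", pool, image},                     // is it mapped/in use?
		{"rbd", "lock", "rm", "--pool", pool, image, lockID, locker}, // force unlock
	}
	for _, s := range steps {
		out, err := exec.Command(s[0], s[1:]...).CombinedOutput()
		fmt.Printf("%v -> %s (err: %v)\n", s, out, err)
	}
}

func main() { checkAndUnlock("rbd", "kube-image", "kubelet_lock_magic", "client.1234") }
```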




**Release note**:

```release-note
force unlock rbd image if the image is not used
```
2017-03-01 00:36:01 -08:00