Commit Graph

1230 Commits

Author SHA1 Message Date
Jan Safranek
7ae4152712 Move PV/PVC annotations to PV/PVC types.
They aren't part of storage.k8s.io/v1 or v1beta1 API.
Also move associated *GetClass functions.
2017-03-02 10:23:55 +01:00
Jan Safranek
a39bd53509 Explicitly use storage.k8s.io/v1beta1 everywhere.
v1 is not yet available on GKE and the tests would fail.
2017-03-02 08:56:26 +01:00
Jimeng Liu
5c53a906bd remove unused StatusFailure constant 2017-03-01 14:21:50 -08:00
Scott Creeley
762ca8e8a9 adding some debug 2017-03-01 13:30:21 -05:00
Hemant Kumar
2d3008fc56 Implement support for mount options in PVs
Add support for mount options via annotations on PVs
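For illustration, a minimal sketch of a PV carrying such a mount-options annotation; the annotation key `volume.beta.kubernetes.io/mount-options` and the NFS server/path are assumptions for this sketch, not taken from the commit itself:

```yaml
# Sketch only: annotation key and NFS details are illustrative assumptions.
apiVersion: v1
kind: PersistentVolume
metadata:
  name: pv-with-mount-options
  annotations:
    volume.beta.kubernetes.io/mount-options: "hard,nolock,nfsvers=3"
spec:
  capacity:
    storage: 5Gi
  accessModes:
    - ReadWriteMany
  nfs:
    server: nfs.example.com
    path: /exports/data
```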
2017-03-01 11:50:40 -05:00
Tomas Smetana
58edea18de Remove unused method from operation_generator 2017-03-01 10:42:53 +01:00
Kubernetes Submit Queue
4e46ae1d3b Merge pull request #41597 from rootfs/rbd-fencing2
Automatic merge from submit-queue (batch tested with PRs 41597, 42185, 42075, 42178, 41705)

force rbd image unlock if the image is not used

**What this PR does / why we need it**:
A Ceph RBD image could be locked if the host that holds the lock is down. In such a case, the image cannot be used by other Pods.

The fix is to detect the orphaned locks and force unlock.
**Which issue this PR fixes** *(optional, in `fixes #<issue number>(, fixes #<issue_number>, ...)` format, will close that issue when PR gets merged)*: fixes #31790

**Special notes for your reviewer**:

Note: previously, the RBD volume plugin mapped the image, mounted it, and created a lock on the image. Since the proposed fix uses `rbd status` output to determine whether the image is in use, the sequence has to change to: lock checking (through `rbd lock list`), mapping check (through `rbd status`), forced unlock if necessary (through `rbd lock rm`), image lock, image mapping, and mount.




**Release note**:

```release-note
force unlock rbd image if the image is not used
```
2017-03-01 00:36:01 -08:00
Aditya Dani
28df55fc31 Portworx Volume Driver in Kubernetes
- Add a new type PortworxVolumeSource
- Implement the kubernetes volume plugin for Portworx Volumes under pkg/volume/portworx
- The Portworx Volume Driver uses the libopenstorage/openstorage specifications and apis for volume operations.

Changes for k8s configuration and examples for portworx volumes.

- Add PortworxVolume hooks in kubectl, kube-controller-manager and validation.
- Add a README for PortworxVolume usage as PVs, PVCs and StorageClass.
- Add example spec files

Handle code review comments.

- Modified READMEs to incorporate to suggestions.
- Add a test for ReadWriteMany access mode.
- Use util.UnmountPath in TearDown.
- Add ReadOnly flag to PortworxVolumeSource
- Use hostname:port instead of unix sockets
- Delete the mount dir in TearDown.
- Fix link issue in persistentvolumes README
- In unit test check for mountpath after Setup is done.
- Add PVC Claim Name as a Portworx Volume Label

Generated code and documentation.
- Updated swagger spec
- Updated api-reference docs
- Updated generated code under pkg/api/v1

Godeps update for Portworx Volume Driver
- Adds github.com/libopenstorage/openstorage
- Adds go.pedge.io/pb/go/google/protobuf
- Updates Godep Licenses
2017-02-28 23:24:56 +00:00
Huamin Chen
6782a48dfa Enable storage class support in Azure File volume
Signed-off-by: Huamin Chen <hchen@redhat.com>
2017-02-27 15:34:37 -05:00
Hemant Kumar
54b0637a0e Add gnufied as reviewer for pkg/volume
I have helped review and contributed code to this
area already.
2017-02-27 09:12:15 -05:00
Kubernetes Submit Queue
d1f5331102 Merge pull request #41804 from chakri-nelluri/flex
Automatic merge from submit-queue (batch tested with PRs 41116, 41804, 42104, 42111, 42120)

Add support for attacher/detacher interface in Flex volume

Add support for attacher/detacher interface in Flex volume
This change breaks backward compatibility and requires a release note.

```release-note
Flex volume plugin is updated to support attach/detach interfaces. It broke backward compatibility. Please update your drivers and implement the new callouts. 
```
2017-02-27 04:10:25 -08:00
Kubernetes Submit Queue
cff3c99613 Merge pull request #41628 from humblec/glusterfs-refactor
Automatic merge from submit-queue (batch tested with PRs 40932, 41896, 41815, 41309, 41628)

Factor new GetClusterNodes() out of CreateVolume().
2017-02-26 08:10:02 -08:00
Jordan Liggitt
41c88e0455
Revert "Merge pull request #40088 from jsafrane/storage-ga-v1"
This reverts commit 5984607cb9, reversing
changes made to 067f92e789.
2017-02-25 22:35:15 -05:00
Chakravarthy Nelluri
0d2af70e95 Add support for attacher/detacher interface in Flex volume 2017-02-24 20:18:06 -05:00
Jan Safranek
fa93f1c411 Update imports 2017-02-24 13:52:16 +01:00
Jan Safranek
cea7a46de1 Regenerate everything 2017-02-24 13:34:18 +01:00
Jan Safranek
3f6caca97a Add storage.k8s.io/v1 2017-02-24 13:34:18 +01:00
Fabian Deutsch
2367d7de2f volume: Document multipath configuration
Add some lines about how to enable multipath for block storage.
A new README was added, because multipath is relevant for at least
FC and iSCSI.

Signed-off-by: Fabian Deutsch <fabiand@fedoraproject.org>
2017-02-24 09:59:11 +01:00
Fabian Deutsch
68c3502954 fc: Drop multipath.conf snippet
A minimalistic multipath.conf got written, but it was useless, as
it is unclear if multipathd is running and there was also no
config reload triggered.

This patch drops this snippet. In general it's probably a better idea
to leave the multipath.conf to the component managing the host.

Signed-off-by: Fabian Deutsch <fabiand@fedoraproject.org>
2017-02-24 09:59:11 +01:00
Humble Chirammal
43c0a6869d This feature ensures the backup servers in the trusted pool
is contacted if there is a failure in the connected server.
Mount option becomes:
mount -t glusterfs -o log-level=ERROR,log-file=/var/lib/kubelet/plugins/kubernetes.io/glusterfs/glustermount/glusterpod-glusterfs.log,backup-volfile-servers=192.168.100.0:192.168.200.0:192.168.43.149 ..

Signed-off-by: Humble Chirammal <hchiramm@redhat.com>
2017-02-24 13:00:34 +05:30
Kubernetes Submit Queue
3adc12c5f5 Merge pull request #41113 from vmware/AddDatastoreParamForDynamicProvisioning
Automatic merge from submit-queue

Fix for Support selection of datastore for dynamic provisioning in vS…

Fixes #40558

Current vSphere Cloud provider doesn't allow a user to select a datastore for dynamic provisioning. All the volumes are created in default datastore provided by the user in the global vsphere configuration file.

With this fix, the user will be able to provide the datastore in the storage class definition. This will allow the volumes to be created in the datastore specified by the user in the storage class definition. This field is optional. If no datastore is specified, the volume will be created in the default datastore specified in the global config file.

For example:

User creates a storage class with the datastore:

kind: StorageClass
apiVersion: storage.k8s.io/v1beta1
metadata:
  name: slow
provisioner: kubernetes.io/vsphere-volume
parameters:
  diskformat: thin
  datastore: VMFSDatastore

Now the volume will be created in the "VMFSDatastore" datastore specified by the user.

If the user creates a storage class without any datastore:

kind: StorageClass
apiVersion: storage.k8s.io/v1beta1
metadata:
  name: slow
provisioner: kubernetes.io/vsphere-volume
parameters:
  diskformat: thin

Now the volume will be created in the default datastore specified in the global configuration file (vsphere.conf).

@pdhamdhere @kerneltime
2017-02-23 22:10:42 -08:00
Kubernetes Submit Queue
b5d010d6a3 Merge pull request #40910 from justinsb/fix_35695
Automatic merge from submit-queue (batch tested with PRs 41667, 41820, 40910, 41645, 41361)

Allow multiple mounts in StatefulSet volume zone placement

We have some heuristics that ensure that volumes (and hence stateful set
pods) are spread out across zones.  Sadly they forgot to account for
multiple mounts.  This PR updates the heuristic to ignore the mount name
when we see something that looks like a statefulset volume, thus
ensuring that multiple mounts end up in the same AZ.

Fix #35695

```release-note
Fix zone placement heuristics so that multiple mounts in a StatefulSet pod are created in the same zone
```
2017-02-23 20:57:29 -08:00
Kubernetes Submit Queue
e373b5981a Merge pull request #41778 from NickrenREN/volume-typo
Automatic merge from submit-queue (batch tested with PRs 38702, 41810, 41778, 41858, 41872)

fix some typos and var style

**Release note**:

```NONE
```
2017-02-23 07:54:37 -08:00
Justin Santa Barbara
62b8010aa2 Curate owners for pkg/volume/aws_ebs
The previous list was algorithmically generated; applying some curation.
2017-02-22 22:51:08 -05:00
Kubernetes Submit Queue
ae8f537c87 Merge pull request #41688 from humblec/iscsi-reviewer
Automatic merge from submit-queue

Update reviewer list for iscsi volume plugin.

Contributed the nodiskconflict and multipath features, etc., to the iscsi volume plugin.
Signed-off-by: Humble Chirammal <hchiramm@redhat.com>
2017-02-22 18:18:22 -08:00
Humble Chirammal
3ade29ff73 Factor new GetClusterNodes() out of CreateVolume().
Signed-off-by: Humble Chirammal <hchiramm@redhat.com>
2017-02-22 22:44:55 +05:30
Balu Dontu
12f75f0b86 Fix for Support selection of datastore for dynamic provisioning in vSphere 2017-02-21 19:04:45 +00:00
Kubernetes Submit Queue
a67e78e4fa Merge pull request #40317 from kpgriffith/recycle-vol-plug-cleanup
Automatic merge from submit-queue (batch tested with PRs 41364, 40317, 41326, 41783, 41782)

changes to cleanup the volume plugin for recycle

**What this PR does / why we need it**:
Code cleanup. Changing from creating a new interface from the plugin, that then calls a function to recycle a volume, to adding the function to the plugin itself.

**Which issue this PR fixes** *(optional, in `fixes #<issue number>(, fixes #<issue_number>, ...)` format, will close that issue when PR gets merged)*: fixes #26230

**Special notes for your reviewer**:
Took same approach from closed PR #28432.

Do you want the approach to be the same for NewDeleter(), NewMounter(), NewUnMounter() and should they be in this same PR or submit different PR's for those?

**Release note**:

```NONE
```
2017-02-21 07:45:40 -08:00
Ivan Shvedunov
e80ae63028 Make Downward API test table-driven 2017-02-21 16:43:30 +03:00
Johannes Scheuermann
96e43e406e Remove unnecessary constants and add type to secret 2017-02-21 14:02:46 +01:00
NickrenREN
6899dd85d4 fix some typos and var style 2017-02-21 17:08:14 +08:00
Jeff Peeler
ec701a65e8 Generated files for projected volume driver 2017-02-20 13:09:41 -05:00
Jeff Peeler
8fb1b71c66 Implements projected volume driver
Proposal: kubernetes/kubernetes#35313
2017-02-20 12:56:04 -05:00
Kubernetes Submit Queue
7236af6162 Merge pull request #39373 from apprenda/fix_configmap
Automatic merge from submit-queue (batch tested with PRs 39373, 41585, 41617, 41707, 39958)

Fix ConfigMaps for Windows

**What this PR does / why we need it**: ConfigMaps were broken for Windows as the existing code used Linux-specific file paths. Updated the code in `kubelet_getters.go` to use `path/filepath` to get the directories. Also reverted the code in `secret.go`, as updating `kubelet_getters.go` to use `path/filepath` also fixes `secrets`.

**Which issue this PR fixes** *(optional, in `fixes #<issue number>(, fixes #<issue_number>, ...)` format, will close that issue when PR gets merged)*: fixes https://github.com/kubernetes/kubernetes/issues/39372

```release-note
Fix ConfigMap for Windows Containers.
```

cc: @pires
2017-02-19 13:50:37 -08:00
Justin Santa Barbara
bba343d066 Allow multiple mounts in StatefulSet volume zone placement
We have some heuristics that ensure that volumes (and hence stateful set
pods) are spread out across zones.  Sadly they forgot to account for
multiple mounts.  This PR updates the heuristic to ignore the mount name
when we see something that looks like a statefulset volume, thus
ensuring that multiple mounts end up in the same AZ.

Fix #35695
2017-02-19 02:20:04 -05:00
Humble Chirammal
1fd341ee72 Update reviewer list for iscsi volume plugin.
Signed-off-by: Humble Chirammal <hchiramm@redhat.com>
2017-02-18 13:12:03 +05:30
Kubernetes Submit Queue
34ffba6cd2 Merge pull request #40726 from humblec/gluster-provclean
Automatic merge from submit-queue (batch tested with PRs 40505, 34664, 37036, 40726, 41595)

Rename provisioner config struct
2017-02-16 17:05:15 -08:00
Huamin Chen
71406aa4a6 force rbd image unlock if the image is not used
Signed-off-by: Huamin Chen <hchen@redhat.com>
2017-02-16 15:55:55 -05:00
Kubernetes Submit Queue
f07112f78a Merge pull request #41461 from humblec/humble-reviewer
Automatic merge from submit-queue

Updating reviewer list.
2017-02-16 08:33:12 -08:00
Kubernetes Submit Queue
ddf4a0cad5 Merge pull request #40417 from jsravn/fix-reconciler-external-updates-race
Automatic merge from submit-queue (batch tested with PRs 41531, 40417, 41434)

Always detach volumes in operator executor

**What this PR does / why we need it**:

Instead of marking a volume as detached immediately in Kubelet's
reconciler, delegate the marking asynchronously to the operator
executor. This is necessary to prevent race conditions with other
operations mutating the same volume state.

An example of one such problem:

1. pod is created, volume is added to desired state of the world
2. reconciler process starts
3. reconciler starts MountVolume, which is kicked off asynchronously via
   operation_executor.go
4. MountVolume mounts the volume, but hasn't yet marked it as mounted
5. pod is deleted, volume is removed from desired state of the world
6. reconciler reaches detach volume section, detects volume is no longer in desired state of world,
   removes it from volumes in use
7. MountVolume tries to mark mount, throws an error because
   volume is no longer in actual state of world list. After this, kubelet isn't aware of the mount
   so doesn't try to unmount again.
8. controller-manager tries to detach the volume, this fails because it
   is still mounted to the OS.
9. EBS gets stuck indefinitely in busy state trying to detach.



**Which issue this PR fixes** *(optional, in `fixes #<issue number>(, fixes #<issue_number>, ...)` format, will close that issue when PR gets merged)*: fixes #32881, fixes #37854 (maybe)

**Special notes for your reviewer**:

**Release note**:

```release-note
```
2017-02-15 23:01:07 -08:00
Humble Chirammal
5b1aa04ccc Updating reviewer list.
Signed-off-by: Humble Chirammal <hchiramm@redhat.com>
2017-02-15 13:19:51 +05:30
Cristian Pop
b23b475498 Implemented suggestions for #39202 fix to facilitate kubelet upgrade. The detachDisk behavior is now preserved for pods that were created before the kubelet upgrade. 2017-02-14 22:50:26 +02:00
Cristian Pop
2aaeefeeb8 Updated TestExtractDeviceAndPrefix and added TestExtractIface to reflect the changes brought by the #39202 fix. 2017-02-14 11:34:03 +02:00
Cristian Pop
b0d285c706 Fix for Premature iSCSI logout #39202. 2017-02-14 11:34:03 +02:00
Kubernetes Submit Queue
a75b61d7a3 Merge pull request #39928 from humblec/iscsi-multipath-backuptp
Automatic merge from submit-queue

Add multipath support to iscsi plugin

#issue https://github.com/kubernetes/kubernetes/issues/39345
Signed-off-by: Humble Chirammal <hchiramm@redhat.com>
2017-02-13 12:18:55 -08:00
James Ravn
9992bd23c2 Mark detached only if no pending operations
To safely mark a volume detached when the volume controller manager is used.

An example of one such problem:

1. pod is created, volume is added to desired state of the world
2. reconciler process starts
3. reconciler starts MountVolume, which is kicked off asynchronously via
   operation_executor.go
4. MountVolume mounts the volume, but hasn't yet marked it as mounted
5. pod is deleted, volume is removed from desired state of the world
6. reconciler detects volume is no longer in desired state of world,
   removes it from volumes in use
7. MountVolume tries to mark volume in use, throws an error because
   volume is no longer in actual state of world list.
8. controller-manager tries to detach the volume, this fails because it
   is still mounted to the OS.
9. EBS gets stuck indefinitely in busy state trying to detach.
2017-02-13 11:51:44 +00:00
Ferdinand Hübner
8fd0624bc4 resolve udevadm from PATH 2017-02-10 22:22:32 +01:00
Kubernetes Submit Queue
6ea92b47eb Merge pull request #39998 from DukeXar/cinder_instance_id
Automatic merge from submit-queue (batch tested with PRs 41246, 39998)

Cinder volume attacher: use instanceID instead of NodeID when verifying attachment

**What this PR does / why we need it**: The Cinder volume attacher incorrectly uses the NodeID instead of the OpenStack instance ID, so reconciliation fails.

**Which issue this PR fixes** *(optional, in `fixes #<issue number>(, fixes #<issue_number>, ...)` format, will close that issue when PR gets merged)*: fixes #39978 

**Special notes for your reviewer**:

**Release note**:

```release-note
```
2017-02-10 07:53:58 -08:00
Kubernetes Submit Queue
3ed7394cb1 Merge pull request #41042 from gnufied/add-gnufied-reviewer-gce-aws-volume
Automatic merge from submit-queue

Add gnufied as reviewer for aws and gce volumes

Adding myself as reviewer for aws and gce volume plugins. I understand the code well enough and have helped with review in those areas already.

cc @childsb @justinsb @saad-ali
2017-02-07 22:12:23 -08:00
Kubernetes Submit Queue
5034d96bfb Merge pull request #40861 from lucab/to-k8s/bump-test-images
Automatic merge from submit-queue (batch tested with PRs 40345, 38183, 40236, 40861, 40900)

test: bump mounttest and mounttest-users images

This PR bumps two test images to latest versions:
 * mounttest to 0.8
 * mounttest-user to 0.5

It is a followup to https://github.com/kubernetes/kubernetes/pull/40613 and https://github.com/kubernetes/kubernetes/pull/40821.
2017-02-07 11:33:44 -08:00
Hemant Kumar
8fad1a6aec Add gnufied as reviewer for aws and gce volumes 2017-02-06 16:38:13 -05:00
Kevin Griffith
9448aa66ff cleanup the volume plugin for recycle
update commit to reflect changes
2017-02-06 10:38:49 -06:00
Humble Chirammal
332e26dc8c Add portals field to iscsi volume source to achieve multipathing.
Signed-off-by: Humble Chirammal <hchiramm@redhat.com>
2017-02-06 17:06:33 +05:30
Luca Bruno
85b1def175
test: update to use mounttest:0.8 and mounttest-user:0.5 2017-02-02 20:41:18 +00:00
Vladimir Vivien
8ebed57767 Prevent pv controller from forcefully overwrite provisioned volume name
This fix prevents the PV controller from forcefully overwriting the provisioned volume's name with the generated PV name.  Instead, it allows dynamic provisioner implementers to set the name of the volume to a value that they choose.
2017-02-01 12:19:20 -05:00
deads2k
8a12000402 move client/record 2017-01-31 19:14:13 -05:00
Kubernetes Submit Queue
1cd06fbcf0 Merge pull request #38797 from aaron12134/spell-obsession
Automatic merge from submit-queue (batch tested with PRs 38772, 38797, 40732, 40740)

Synchronous spellcheck for pkg/volume/*

**What this PR does / why we need it**: Increase code readability

**Which issue this PR fixes** *(optional, in `fixes #<issue number>(, fixes #<issue_number>, ...)` format, will close that issue when PR gets merged)*: fixes #

**Special notes for your reviewer**: Minor contribution 

**Release note**:

```release-note
```
2017-01-31 11:00:47 -08:00
Humble Chirammal
9c7c2dcd20 Renames provisioner config struct
Signed-off-by: Humble Chirammal <hchiramm@redhat.com>
2017-01-31 23:04:32 +05:30
Dr. Stefan Schimanski
44ea6b3f30 Update generated files 2017-01-29 21:41:45 +01:00
Dr. Stefan Schimanski
bc6fdd925d pkg/api/resource: move to apimachinery 2017-01-29 21:41:44 +01:00
deads2k
9488e2ba30 move testing/core to client-go 2017-01-26 13:54:40 -05:00
Dr. Stefan Schimanski
a0137e9b28 Update generated files 2017-01-25 19:49:45 +01:00
Dr. Stefan Schimanski
d7eb3b6870 pkg/util: move uuid and strategicpatch into k8s.io/apimachinery 2017-01-25 19:45:09 +01:00
Jitendra Bhurat
0cbf75c400
kubelet: Fix ConfigMap on Windows. 2017-01-24 18:40:49 +00:00
Kubernetes Submit Queue
f18a921a03 Merge pull request #40311 from deads2k/client-13-move-util
Automatic merge from submit-queue (batch tested with PRs 40299, 40311)

move authoritative client-go util out of pkg

Move `client-go/pkg/util` which are authoritative to `client-go/util` to make it easier to reason about what comes from where.
2017-01-24 08:59:59 -08:00
Kubernetes Submit Queue
68f123dfa0 Merge pull request #37275 from xiangfeiz/cinder-rescan-scsi
Automatic merge from submit-queue

Adding rescan scsi controller for cinder

For the lsilogic scsi controller, an attached cinder volume does not
appear under /dev/ automatically unless a rescan is done.
This approach was used in the vSphere volume provider before PR #27496
dropped support for the lsilogic scsi controller.
2017-01-24 06:24:59 -08:00
deads2k
5a8f075197 move authoritative client-go utils out of pkg 2017-01-24 08:59:18 -05:00
Kubernetes Submit Queue
43286a82c6 Merge pull request #39981 from fraenkel/optional_configmaps_secrets
Automatic merge from submit-queue

Optional configmaps and secrets

Allow configmaps and secrets for environment variables and volume sources to be optional

Implements approved proposal c9f881b7bb

Release note:
```release-note
Volumes and environment variables populated from ConfigMap and Secret objects can now tolerate the named source object or specific keys being missing, by adding `optional: true` to the volume or environment variable source specifications.
```
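For illustration, a minimal sketch of how `optional: true` might be set on both an environment variable source and a volume source, as described in the release note above; the ConfigMap name `app-config` and the key names are placeholder assumptions:

```yaml
# Sketch only: names are illustrative; `optional: true` lets the pod start
# even if the referenced ConfigMap or key does not exist.
apiVersion: v1
kind: Pod
metadata:
  name: optional-sources-demo
spec:
  containers:
    - name: app
      image: busybox
      env:
        - name: LOG_LEVEL
          valueFrom:
            configMapKeyRef:
              name: app-config
              key: log-level
              optional: true
      volumeMounts:
        - name: config
          mountPath: /etc/config
  volumes:
    - name: config
      configMap:
        name: app-config
        optional: true
```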
2017-01-23 23:06:35 -08:00
Michael Fraenkel
ca207be4a3 Generated code 2017-01-23 20:12:24 -07:00
Michael Fraenkel
4e466040d9 Allow Optional ConfigMap and Secrets
- ConfigMaps and Secrets for Env or Volumes are allowed to be optional
2017-01-23 18:59:49 -07:00
Clayton Coleman
469df12038
refactor: move ListOptions references to metav1 2017-01-23 17:52:46 -05:00
Clayton Coleman
245b592fac
Convert core code to metav1.ListOptions 2017-01-23 17:52:45 -05:00
Wojciech Tyczynski
bf7138652f SecretVolume using secret manager 2017-01-23 16:10:01 +01:00
Kubernetes Submit Queue
b5929bfb2b Merge pull request #38789 from jessfraz/cleanup-temp-dirs
Automatic merge from submit-queue (batch tested with PRs 37228, 40146, 40075, 38789, 40189)

Cleanup temp dirs

So, funny story: my /tmp ran out of space running the unit tests, so I am cleaning up all the temp dirs we create.
2017-01-20 12:34:58 -08:00
deads2k
ee6752ef20 find and replace 2017-01-20 08:04:53 -05:00
deads2k
11e8068d3f move pkg/fields to apimachinery 2017-01-19 09:50:16 -05:00
Kubernetes Submit Queue
8f99b74466 Merge pull request #40030 from colemickens/colemickens-dyn-disk-name-length
Automatic merge from submit-queue (batch tested with PRs 39826, 40030)

azure disk: restrict length of name

**What this PR does / why we need it**:
Fixes dynamic disk provisioning on Azure by properly truncating the disk name to conform to the Azure API spec.

**Which issue this PR fixes** *(optional, in `fixes #<issue number>(, fixes #<issue_number>, ...)` format, will close that issue when PR gets merged)*: fixes #
n/a

**Special notes for your reviewer**:
n/a

**Release note**:
```release-note
azure disk: restrict name length for Azure specifications
```

cc: @rootfs
2017-01-17 21:37:02 -08:00
Kubernetes Submit Queue
f56b606985 Merge pull request #36520 from apelisse/owners-pkg-volume
Automatic merge from submit-queue

Curating Owners: pkg/volume

cc @jsafrane @spothanis @agonzalezro @justinsb @johscheuer @simonswine @nelcy @pmorie @quofelix @sdminonne @thockin @saad-ali @rootfs

In an effort to expand the existing pool of reviewers and establish a
two-tiered review process (first someone lgtms and then someone
experienced in the project approves), we are adding new reviewers to
existing owners files.


If You Care About the Process:
------------------------------

We did this by algorithmically figuring out who’s contributed code to
the project and in what directories.  Unfortunately, that doesn’t work
well: people who have made mechanical code changes (e.g. changing the
copyright header across all directories) end up as reviewers in lots of
places.

Instead of using pure commit data, we generated an excessively large
list of reviewers and pruned based on all time commit data, recent
commit data and review data (number of PRs commented on).

At this point we have a decent list of reviewers, but it needs one last
pass for fine tuning.

Also, see https://github.com/kubernetes/contrib/issues/1389.

TLDR:
-----

As an owner of a sig/directory and a leader of the project, here’s what
we need from you:

1. Use PR https://github.com/kubernetes/kubernetes/pull/35715 as an example.

2. The pull-request is made editable; please edit the `OWNERS` file to
remove, from the **reviewers** section, the names of people who shouldn't
be reviewing code in the future. You probably do NOT need to modify the
**approvers** section. Names are sorted by relevance, using some secret
statistics. (A sketch of the OWNERS layout appears after this list.)

3. Notify me if you want some OWNERS file to be removed.  Being an
approver or reviewer of a parent directory makes you a reviewer/approver
of the subdirectories too, so not all OWNERS files may be necessary.

4. Please use ALIAS if you want to use the same list of people over and
over again (don't hesitate to ask me for help, or use the pull-request
above as an example)
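For reference, a minimal sketch of the OWNERS file layout being curated here; the names are placeholders, not actual reviewers:

```yaml
# Sketch only: placeholder names showing the reviewers/approvers sections.
reviewers:
  - alice
  - bob
approvers:
  - carol
```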
2017-01-17 19:56:39 -08:00
Anton Klautsan
084d801e0a Add unit-tests for DisksAreAttached 2017-01-18 01:55:39 +00:00
Anton Klautsan
2267588d95 Cinder volume attacher: use instanceID not NodeID 2017-01-18 01:52:46 +00:00
Saad Ali
9d08c9a3d9 Update OWNERS 2017-01-17 17:37:51 -08:00
Saad Ali
526578679b Update OWNERS 2017-01-17 17:36:34 -08:00
Saad Ali
6500f08e2f Update OWNERS 2017-01-17 17:32:49 -08:00
Saad Ali
5783a0ec4c Update OWNERS 2017-01-17 17:24:35 -08:00
Saad Ali
a420973187 Update OWNERS 2017-01-17 16:36:50 -08:00
Saad Ali
d25c20bc3f Update OWNERS 2017-01-17 16:35:21 -08:00
Saad Ali
d6ba2fc37a Update OWNERS 2017-01-17 16:34:34 -08:00
Saad Ali
7c67211734 Update OWNERS 2017-01-17 16:33:13 -08:00
Saad Ali
b0e588eec2 Update OWNERS 2017-01-17 16:31:46 -08:00
Saad Ali
cb1bbf14af Update OWNERS 2017-01-17 16:31:03 -08:00
Saad Ali
7918a8e8c2 Update OWNERS 2017-01-17 16:28:05 -08:00
Saad Ali
8e371e6dbf Update OWNERS 2017-01-17 16:26:16 -08:00
Saad Ali
c8fbfd93df Update OWNERS 2017-01-17 16:24:46 -08:00
Saad Ali
04f20a06a6 Update OWNERS 2017-01-17 16:24:25 -08:00
Saad Ali
9f1181dc55 Update OWNERS 2017-01-17 16:24:01 -08:00
Saad Ali
dc9eea2f3c Update OWNERS 2017-01-17 16:22:36 -08:00
Saad Ali
16cbb574e4 Update OWNERS 2017-01-17 16:20:24 -08:00
Saad Ali
70ed66cdf8 Update OWNERS 2017-01-17 16:14:14 -08:00
Saad Ali
8159e620a1 Update OWNERS 2017-01-17 16:13:17 -08:00
Saad Ali
602b682a2a Update OWNERS 2017-01-17 16:12:45 -08:00
Saad Ali
7ed31e4761 Update OWNERS 2017-01-17 16:08:57 -08:00
Saad Ali
b4ac15ae05 Update OWNERS 2017-01-17 16:05:40 -08:00
Clayton Coleman
9a2a50cda7
refactor: use metav1.ObjectMeta in other types 2017-01-17 16:17:19 -05:00
Clayton Coleman
36acd90aba
Move APIs and core code to use metav1.ObjectMeta 2017-01-17 16:17:18 -05:00
Cole Mickens
8adcf077f3 azure disk: restrict length of name 2017-01-17 09:38:53 -08:00
deads2k
8686d67c80 move pkg/util/rand 2017-01-16 16:04:03 -05:00
deads2k
77b4d55982 mechanical 2017-01-16 09:35:12 -05:00
Kubernetes Submit Queue
823d760ab5 Merge pull request #39844 from screeley44/replica_bug
Automatic merge from submit-queue (batch tested with PRs 39807, 37505, 39844, 39525, 39109)

fix bug not using volumetype config in create volume

fixes #39843 

@humblec 

We are building the volumetype config, but I don't see where we are using it in CreateVolume for dynamic provisioning. This is why the volumetype parameter from the StorageClass was being overlooked: we are hard-coding constants like replicaCount, which is always 3.

unless I'm missing something?
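For context, a hedged sketch of a StorageClass passing the volumetype parameter that this fix makes the provisioner honor; the resturl and the replica count are placeholder assumptions:

```yaml
# Sketch only: resturl and replication settings are illustrative assumptions.
kind: StorageClass
apiVersion: storage.k8s.io/v1beta1
metadata:
  name: gluster-replica
provisioner: kubernetes.io/glusterfs
parameters:
  resturl: "http://heketi.example.com:8081"
  volumetype: "replicate:3"
```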
2017-01-13 13:40:43 -08:00
Scott Creeley
164809c86e fix bug not using volumetype config in create volume 2017-01-12 22:14:04 -05:00
Kubernetes Submit Queue
ee49906c45 Merge pull request #39661 from NickrenREN/clientset-redundant-modify
Automatic merge from submit-queue

fix redundant alias clientset

remove redundant alias clientset
2017-01-12 13:29:16 -08:00
NickrenREN
a12dea14e0 fix redundant alias clientset 2017-01-12 10:21:05 +08:00
rkouj
32766e3b6d Check if path exists before performing unmount 2017-01-11 14:33:05 -08:00
deads2k
6a4d5cd7cc start the apimachinery repo 2017-01-11 09:09:48 -05:00
Kubernetes Submit Queue
c03ec462fd Merge pull request #39477 from dashpole/zombie_wc
Automatic merge from submit-queue (batch tested with PRs 39486, 37288, 39477, 39455, 39542)

Fix wc zombie goroutine issue in volume util

See [Cadvisor #1558](https://github.com/google/cadvisor/pull/1558).  This should solve problems for those using images that do not support "wc".
cc: @timstclair
2017-01-10 14:33:15 -08:00
Humble Chirammal
90266eb7ce Let admin configure the volume type and parameters for gluster DP volumes
Signed-off-by: Humble Chirammal <hchiramm@redhat.com>
2017-01-06 12:33:25 +05:30
Jeff Grafton
20d221f75c Enable auto-generating sources rules 2017-01-05 14:14:13 -08:00
David Ashpole
094cfd7244 Fixed wc zombie goroutine issue 2017-01-05 10:58:16 -08:00
Kubernetes Submit Queue
5503e5e6be Merge pull request #39413 from zdj6373/cinder
Automatic merge from submit-queue (batch tested with PRs 39433, 39413)

"Attach" function records information collation

In the "attach" function, the log information, for the variable "instanceid", has been described as "node", as well as recorded as "instance", recorded as "instance" should be better.
2017-01-05 10:35:18 -08:00
Huamin Chen
7dae0547ec azure disk: add logging on disk attach
Signed-off-by: Huamin Chen <hchen@redhat.com>
2017-01-05 18:29:06 +00:00
Kubernetes Submit Queue
fd7408d076 Merge pull request #39288 from rkouj/unit-test-operation-executor
Automatic merge from submit-queue

Add unit tests for operation_executor

Add unit test for `Unmount operations should start in parallel for all volume plugins`

cc: @saad-ali
2017-01-04 18:52:22 -08:00
Kubernetes Submit Queue
eb8739d3c1 Merge pull request #39311 from rkouj/refactor-tear-down-at
Automatic merge from submit-queue

Check if pathExists before performing Unmount

Unmount operation should not fail if path does not exist

Part two of: https://github.com/kubernetes/kubernetes/pull/38547
Plugins status captured here: https://github.com/kubernetes/kubernetes/issues/39251

cc: @saad-ali
2017-01-04 18:10:30 -08:00
Jess Frazelle
6f3212f831
cleanup flocker in /tmp
Signed-off-by: Jess Frazelle <acidburn@google.com>
2017-01-04 10:27:04 -08:00
Jess Frazelle
ce11f74961
cleanup flockerVolumeTest in /tmp
Signed-off-by: Jess Frazelle <acidburn@google.com>
2017-01-04 10:27:02 -08:00
Jess Frazelle
ba617fdd1b
cleanup metrics_du_test in /tmp
Signed-off-by: Jess Frazelle <acidburn@google.com>
2017-01-04 10:27:00 -08:00
Jess Frazelle
9183940293
cleanup atomic-write temp directories
Signed-off-by: Jess Frazelle <acidburn@google.com>
2017-01-04 10:26:22 -08:00
zdj6373
84316ad559 "Attach" function records information collation 2017-01-04 16:42:24 +08:00
Kubernetes Submit Queue
49fe0bea97 Merge pull request #37380 from jsafrane/rbd-errors
Automatic merge from submit-queue (batch tested with PRs 39092, 39126, 37380, 37093, 39237)

Improve error reporting in Ceph RBD provisioner.

- We should report an error when user references a secret that cannot be found
- We should report output of rbd create/delete commands, logging "exit code 1"
  is not enough.


Before:
```
Events:
  FirstSeen     LastSeen        Count   From                            SubobjectPath   Type            Reason                  Message
  ---------     --------        -----   ----                            -------------   --------        ------                  -------
  33m           33m             1       {persistentvolume-controller }                  Warning         ProvisioningFailed      Failed to provision volume with StorageClass "cephrbdprovisioner": rbd: create volume failed, err: exit status 1
```

After:

```
Events:
  FirstSeen     LastSeen        Count   From                            SubobjectPath   Type            Reason                  Message
  ---------     --------        -----   ----                            -------------   --------        ------                  -------
  33m           33m             1       {persistentvolume-controller }                  Warning         ProvisioningFailed      Failed to provision volume with StorageClass "cephrbdprovisioner": failed to create rbd image: exit status 1, command output: rbd: couldn't connect to the cluster
```


@rootfs, PTAL
2017-01-03 09:45:22 -08:00
Kubernetes Submit Queue
85bf256709 Merge pull request #39075 from ChenLingPeng/github-waitfor
Automatic merge from submit-queue

no need to sleep for last retry

break for needless sleep
2017-01-03 07:26:00 -08:00
Kubernetes Submit Queue
3fe288d74e Merge pull request #36221 from pospispa/86-5-add-checks-and-documentation-about-template-pods-for-recycling
Automatic merge from submit-queue (batch tested with PRs 37959, 36221)

Recycle Pod Template Check

The kube-controller-manager has two command-line arguments (--pv-recycler-pod-template-filepath-hostpath and --pv-recycler-pod-template-filepath-nfs) that specify a recycle pod template. The recycle pod template might not contain the volume that shall be recycled.

A check is added to make sure that the recycle pod template contains at least one volume.
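For illustration, a hedged sketch of a recycler pod template that could be passed via those flags; the image, command, and volume details are placeholder assumptions, and the point is that the template must declare at least one volume:

```yaml
# Sketch only: a recycler pod template must declare at least one volume,
# which the recycler then replaces with the volume being recycled.
apiVersion: v1
kind: Pod
metadata:
  name: pv-recycler
  namespace: default
spec:
  restartPolicy: Never
  containers:
    - name: pv-recycler
      image: busybox
      command: ["/bin/sh", "-c", "rm -rf /scrub/..?* /scrub/.[!.]* /scrub/*"]
      volumeMounts:
        - name: vol
          mountPath: /scrub
  volumes:
    - name: vol
      hostPath:
        path: /any/path/it/will/be/replaced
```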

cc: @jsafrane
2017-01-02 05:08:30 -08:00
rkouj
7ebcab8c19 Add unit tests for operation_executor 2016-12-29 18:37:22 -08:00
rkouj
8cec46e8ca Check if pathExists before performing Unmount 2016-12-29 18:06:43 -08:00
Mike Danese
161c391f44 autogenerated 2016-12-29 13:04:10 -08:00
Kubernetes Submit Queue
c3771cbaf9 Merge pull request #36922 from rkouj/refactor-operation-executor
Automatic merge from submit-queue

Refactor operation_executor to make it testable

**What this PR does / why we need it**:
To refactor operation_executor to make it unit testable

**Release note**:
`NONE`
2016-12-27 19:01:56 -08:00
Kubernetes Submit Queue
d4bf500e73 Merge pull request #39055 from anguslees/detach
Automatic merge from submit-queue (batch tested with PRs 39152, 39142, 39055)

openstack: Forcibly detach an attached cinder volume before attaching elsewhere

Fixes #33288



**What this PR does / why we need it**:
Without this fix, we can't preemptively reschedule pods with persistent volumes to other hosts (for rebalancing or hardware failure recovery).

**Which issue this PR fixes** *(optional, in `fixes #<issue number>(, fixes #<issue_number>, ...)` format, will close that issue when PR gets merged)*: fixes #33288

**Special notes for your reviewer**:
(This is a resurrection/cleanup of PR #33734, originally authored by @Rotwang)

**Release note**:
2016-12-27 17:10:14 -08:00
rkouj
e7e3c55ad7 Add unit tests for MountVolume() of operation executor 2016-12-27 16:07:06 -08:00
rkouj
d5f7610b82 Refactor operation_executor to make it unit testable 2016-12-27 15:12:16 -08:00
Tim Hockin
f75bed8682 Add a ForEach() to bitmap allocator 2016-12-26 21:59:27 -08:00
Harry Zhang
5a7661b483 Raise markVolMountedErr instead of mount err 2016-12-26 07:52:51 +00:00
Kubernetes Submit Queue
1c2a23e48c Merge pull request #39014 from resouer/fix-nil-glusterfs
Automatic merge from submit-queue (batch tested with PRs 39029, 39014)

[Glusterfs Vol Plugin]: Check kube client is invalid and return error

Fixes: #38939

In volume plugins, we need to create a kube client to make API calls, and this kube client can be nil when, for example, the api-server is misconfigured; the kubelet should not crash in this case.

I have also checked the other plugins and found that only glusterfs needs this fix.
2016-12-23 06:39:29 -08:00
Kubernetes Submit Queue
f1aa025837 Merge pull request #38655 from abrarshivani/fsGroupforvSphere
Automatic merge from submit-queue (batch tested with PRs 39059, 39175, 35676, 38655)

Fix fsGroup to vSphere

**What this PR does / why we need it**:
Fixes #34039 by adding support for fsGroup to vSphere Volume. 

**Special notes for your reviewer**:
Tested with example from http://stackoverflow.com/questions/35213589/docker-container-with-non-root-user-deployed-in-google-container-engine-can-not
Before this fix, this example got a ```Permission Denied``` error.

**Release note**:

`NONE`

cc @pdhamdhere @kerneltime @BaluDontu
2016-12-22 18:50:34 -08:00
pospispa
ef43f82de8 Recycle Pod Template Check
The kube-controller-manager has two command-line arguments (--pv-recycler-pod-template-filepath-hostpath and --pv-recycler-pod-template-filepath-nfs) that specify a recycle pod template. The recycle pod template might not contain the volume that shall be recycled.

A check is added to make sure that the recycle pod template contains at least one volume.
2016-12-22 17:44:32 +01:00
forrestchen
1d9f754565 no need to sleep for last retry
Signed-off-by: forrestchen <forrestchen@tencent.com>
2016-12-21 17:52:01 +08:00
Kubernetes Submit Queue
1abb8498aa Merge pull request #36888 from linki/patch-1
Automatic merge from submit-queue (batch tested with PRs 36888, 38180, 38855, 38590)

wrong pod reference in error message for volume attach timeout

**What this PR does / why we need it**:
when a disk mount times out you get the following error:

```
Warning		FailedSync	Error syncing pod, skipping: timeout expired waiting for volumes to attach/mount for pod "nginx"/"default". list of unattached/unmounted volumes=[data]
```

where the pod is referenced by "podname"/"namespace", but should be "namespace"/"podname".

**Which issue this PR fixes**
no issue number

**Special notes for your reviewer**:
untested :(
2016-12-20 20:33:52 -08:00
Kubernetes Submit Queue
abe2b3ce1c Merge pull request #38374 from NickrenREN/cinder-getDeviceMountPath-test
Automatic merge from submit-queue

cinder attacher GetDeviceMountPath
2016-12-20 19:16:26 -08:00
Harry Zhang
443ae87b7e Check kube client is valid 2016-12-21 10:38:50 +08:00
Angus Lees
fa1d6f3838 Forcibly detach an attached volume before attaching elsewhere
Fixes #33288

Co-Authored-By: @Rotwang
2016-12-21 11:57:10 +11:00
NickrenREN
430abfbdfe cinder attacher GetDeviceMountPath
add function to test GetDeviceMountPath func return value
2016-12-20 10:15:34 +08:00
rkouj
c14d47dffe Use common unmount util func for TearDownAt() 2016-12-19 16:40:55 -08:00
Kenjiro Nakayama
13660ef701 Catch error when failed to make directory in NFS volume plugin 2016-12-15 17:04:35 +09:00
aaronxu
37f5d4d719 Synchronous spellcheck for pkg/volume/* 2016-12-14 20:07:10 -08:00
Chao Xu
03d8820edc rename /release_1_5 to /clientset 2016-12-14 12:39:48 -08:00
Abrar Shivani
5cb7faac5e Fix fsGroup to vSphere 2016-12-12 14:35:13 -08:00
Mike Danese
c87de85347 autoupdate BUILD files 2016-12-12 13:30:07 -08:00
Kubernetes Submit Queue
aa51a165c1 Merge pull request #38378 from obnoxxx/glusterfs-gid-checks
Automatic merge from submit-queue (batch tested with PRs 38284, 38403, 38265, 38378)

glusterfs: properly check gidMin and gidMax values from SC individually

**What this PR does / why we need it**:

This fixes a misleading debug message, and also prevents the glusterfs provisioner from silently adjusting a misconfigured gid-range in the storage class. Instead it will fail with proper error messages.

**Which issue this PR fixes** *(optional, in `fixes #<issue number>(, fixes #<issue_number>, ...)` format, will close that issue when PR gets merged)*: fixes #

https://bugzilla.redhat.com/show_bug.cgi?id=1402286

**Special notes for your reviewer**:

**Release note**:
```release-note
```

Don't override an explicit out-of-max-range configuration, but
fail with an error message instead.

Signed-off-by: Michael Adam <obnox@redhat.com>
2016-12-09 09:31:09 -08:00
Wojciech Tyczynski
aa7da5231f Update bazel files 2016-12-09 09:42:02 +01:00
Wojciech Tyczynski
e8d1cba875 GetOptions in client calls 2016-12-09 09:42:01 +01:00
Kubernetes Submit Queue
b0b6f3c256 Merge pull request #38401 from liggitt/addressable-deep-copy
Automatic merge from submit-queue (batch tested with PRs 36071, 32752, 37998, 38350, 38401)

Pass addressable values to DeepCopy

Extracted from https://github.com/kubernetes/kubernetes/pull/35728

These are the places we are currently calling DeepCopy incorrectly, and we need to fix, even if we don't pick up the changes to DeepCopy in #35728:
* creating a new cloner means we have no generated functions registered
* passing non-addressable values doesn't pick up generated deep copy functions, and forces us into reflective mode
2016-12-08 16:26:00 -08:00
Kubernetes Submit Queue
f1995ad8f5 Merge pull request #38411 from jingxu97/Dec/fixgluster
Automatic merge from submit-queue

Fix unmountDevice issue caused by shared mount in GCI

This is a fix on top of #38124. In this fix, we move the logic to filter
out shared mount references into operation_executor's UnmountDevice
function, to avoid this logic being used by other volume types such as
rbd, azure, etc. This filter function should only be needed during
unmount device for the GCI image.
2016-12-08 15:37:00 -08:00
Michael Adam
bead60db0d glusterfs: unit-test the gidMin:gidMax parsing from the storage class
Signed-off-by: Michael Adam <obnox@redhat.com>
2016-12-08 23:41:31 +01:00
Kubernetes Submit Queue
809d259d68 Merge pull request #38338 from vmware/FixSpaceInVolumePathMaster
Automatic merge from submit-queue (batch tested with PRs 36310, 37349, 38319, 38402, 38338)

Fix space issue in volumePath with vSphere Cloud Provider

I tried to create a kubernetes deployment with vSphere volume with volume path
"[datastore] kubevols/redis-master".
In this case the cloud provider calls getDeviceNameFromMount() to return the path of the mounted volume. Since getDeviceNameFromMount() queries the filesystem to get the mount references, it returns the volume path "[datastore]\\040kubevols/redis-master". Later the kubelet searches for this volume path in both the actual and desired states. The actual and desired states contain a volume with path "[datastore] kubevols/redis-master", so it couldn't find such a volume path, and therefore Kubernetes stalls, unable to make any further progress, similar to the issue described in #37022.

This PR fixes the space issue in the volume path by replacing \\040 with a space. This fixes #37712.
Also fixes #38148
@kerneltime @pdhamdhere
2016-12-08 13:44:59 -08:00
Jing Xu
bb8b54af18 Fix unmountDevice issue caused by shared mount in GCI
This is a fix on top of #38124. In this fix, we move the logic to filter
out shared mount references into operation_executor's UnmountDevice
function, to avoid this logic being used by other volume types such as
rbd, azure, etc. This filter function should only be needed during
unmount device for the GCI image.
2016-12-08 13:34:45 -08:00
Jordan Liggitt
6819706adf
Pass addressable values to DeepCopy 2016-12-08 14:16:01 -05:00
Michael Adam
c84cba0440 glusterfs: properly check gidMin and gidMax values from SC individually
Don't override an explicit out-of-max-range configuration, but
fail with an error message instead.

Fixes https://bugzilla.redhat.com/show_bug.cgi?id=1402286

Signed-off-by: Michael Adam <obnox@redhat.com>
2016-12-08 12:02:19 +01:00
Balu Dontu
4d57ad6fca Fix space in volumePath in vSphere 2016-12-07 16:15:51 -08:00
Kubernetes Submit Queue
0f2c6fc7fc Merge pull request #37009 from sjenning/fix-perms-with-fsgroup
Automatic merge from submit-queue (batch tested with PRs 38294, 37009, 36778, 38130, 37835)

fix permissions when using fsGroup

Currently, when an fsGroup is specified, the permissions of the defaultMode are not respected and all files created by the atomic writer have mode 777.  This is because in `SetVolumeOwnership()` the `filepath.Walk` includes the symlinks created by the atomic writer.  The symlinks have mode 777 when read from `info.Mode()`.  However, when they are chmod'ed later, the chmod applies to the file the symlink points to, not the symlink itself, resulting in the wrong mode for the underlying file.

This PR skips chmod/chown for symlinks in the walk since those operations are carried out on the underlying file which will be included elsewhere in the walk.
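For illustration, a minimal sketch of a pod that exercises this code path, assuming a Secret volume with defaultMode set together with a pod-level fsGroup; all names are placeholders:

```yaml
# Sketch only: with fsGroup set, the kubelet chowns/chmods volume contents;
# the fix skips symlinks so defaultMode is applied to the underlying files.
apiVersion: v1
kind: Pod
metadata:
  name: fsgroup-demo
spec:
  securityContext:
    fsGroup: 2000
  containers:
    - name: app
      image: busybox
      volumeMounts:
        - name: creds
          mountPath: /etc/creds
  volumes:
    - name: creds
      secret:
        secretName: app-secret
        defaultMode: 0440
```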

xref https://bugzilla.redhat.com/show_bug.cgi?id=1384458

@derekwaynecarr @pmorie
2016-12-07 10:45:16 -08:00
Kubernetes Submit Queue
65ed735d4f Merge pull request #38124 from kubernetes/Dec/gluster
Automatic merge from submit-queue

Fix GCI mounter issue
2016-12-06 16:21:06 -08:00
Jing Xu
896e0b867e Fix unmount issue caused by GCI mounter
This is a workaround for the unmount device issue caused by the GCI mounter. In a GCI cluster, if the GCI mounter is used for mounting, the container started by the mounter script will cause additional mounts to be created in the container. Since these mounts are irrelevant to the original mounts, they should not be considered when checking the mount references. By comparing the mount path prefix, those additional mounts can be filtered out.

Plan to work on a better approach to solve this issue.
2016-12-06 12:24:07 -08:00
Seth Jennings
51ae5a34b9 fix permissions when using fsGroup 2016-12-06 14:04:16 -06:00
Kubernetes Submit Queue
0a7aadc282 Merge pull request #38146 from jessfraz/fix-lint-master
Automatic merge from submit-queue (batch tested with PRs 37328, 38102, 37261, 31321, 38146)

fix golint errors on master for 1.6

Needs to be merged before https://github.com/kubernetes/test-infra/pull/1299 is merged

updates https://github.com/kubernetes/kubernetes/issues/37254
2016-12-05 20:16:55 -08:00
Kubernetes Submit Queue
c5c1706f22 Merge pull request #38137 from obnoxxx/gluster-dp-gid-fix
Automatic merge from submit-queue (batch tested with PRs 38076, 38137, 36882, 37634, 37558)

glusterfs: Fix all gid types to int to prevent failures on 32bit systems

**What this PR does / why we need it**:

The glusterfs dynamic provisioner with GID security has an issue on 32 bit systems.
This fixes that issue by forcing all gid types to int internally.

**Release note**:
```release-note
Fix the glusterfs dynamic provisioner for 32bit systems by limiting the gids to type int internally, and allowing 2147483647 as the highest GID.
```

This makes all types int until we hand the GID to heketi/gluster,
at which point it's converted to int64.

It also limits the maximum usable GID to math.MaxInt32 = 2147483647.

Signed-off-by: Michael Adam <obnox@redhat.com>
2016-12-05 19:25:51 -08:00
Jess Frazelle
4d27212149
fix golint errors on master for 1.6
Signed-off-by: Jess Frazelle <acidburn@google.com>
2016-12-05 15:01:33 -08:00
Michael Adam
8a1752f2bb glusterfs: Fix all gid types to int to prevent failures on 32bit systems
This makes all types int until we hand the GID to heketi/gluster,
at which point it's converted to int64.

It also limits the maximum usable GID to math.MaxInt32 = 2147483647.

Signed-off-by: Michael Adam <obnox@redhat.com>
2016-12-05 22:46:12 +01:00
Kubernetes Submit Queue
2ac9c08781 Merge pull request #37064 from NickrenREN/vpmtest
Automatic merge from submit-queue

VolumePluginMgrFunc test

Add test func to test VolumePluginMgr funcs in pkg/volume/plugins_test.go
2016-12-05 09:19:06 -08:00
Humble Chirammal
e6a300d735 Allow glusterfs dp volume creation for empty clusterid parameter in sc.
Signed-off-by: Humble Chirammal <hchiramm@redhat.com>
2016-12-05 13:36:10 +05:30
Kubernetes Submit Queue
bc342006bf Merge pull request #37886 from obnoxxx/gluster-dp-gid
Automatic merge from submit-queue

Implement GID security for the GlusterFS dynamic provisioner.

**What this PR does / why we need it**:

This PR implements GID security for the glusterfs dynamic provisioner.
It is a reworked version of PR #37549 .

**Release note**:

```release-note
The glusterfs dynamic volume provisioner will now choose a unique GID for new persistent volumes from a range that can be configured in the storage class with the "gidMin" and "gidMax" parameters. The default range is 2000 - 4294967295 (max uint32).
```
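For illustration, a hedged sketch of a StorageClass using the gidMin/gidMax parameters described in the release note above; the resturl and the chosen range are placeholder assumptions:

```yaml
# Sketch only: restricts provisioned volume GIDs to the 2000-5000 range.
kind: StorageClass
apiVersion: storage.k8s.io/v1beta1
metadata:
  name: gluster-gid-range
provisioner: kubernetes.io/glusterfs
parameters:
  resturl: "http://heketi.example.com:8081"
  gidMin: "2000"
  gidMax: "5000"
```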
2016-12-04 14:34:01 -08:00
Clayton Coleman
3454a8d52c
refactor: update bazel, codec, and gofmt 2016-12-03 19:10:53 -05:00
Clayton Coleman
5df8cc39c9
refactor: generated 2016-12-03 19:10:46 -05:00
NickrenREN
6a4b671a64 volume pluginsmgr functions test
add function to test vpm functions in pkg/volume/plugins_test.go
2016-12-03 23:02:21 +08:00
Kubernetes Submit Queue
5698b50258 Merge pull request #37607 from NickrenREN/metricStatfs
Automatic merge from submit-queue (batch tested with PRs 37608, 37103, 37320, 37607, 37678)

MetricsStatfs GetMetrics() function test
2016-12-02 23:32:49 -08:00
Kubernetes Submit Queue
aaed3437fb Merge pull request #37209 from NickrenREN/cephfs-test
Automatic merge from submit-queue (batch tested with PRs 37945, 37498, 37391, 37209, 37169)

test cephfs spec construct function
2016-12-02 20:32:48 -08:00
Michael Adam
06ad835e48 glusterfs: implement GID security in the dynamic provisioner
Signed-off-by: Michael Adam <obnox@redhat.com>
2016-12-03 05:27:10 +01:00
Humble Chirammal
92167b5be8 glusterfs: teach provisioner to extract gid-range from storage class 2016-12-03 05:27:10 +01:00
Michael Adam
11a5e84aca glusterfs: add MinMaxAllocator
An allocator of integers that allows for changing the range.
Previously allocated numbers are not lost, and can  be
released later even if they have fallen outside of the range.

Signed-off-by: Michael Adam <obnox@redhat.com>
2016-12-03 05:27:10 +01:00
Kubernetes Submit Queue
8f07fc3d41 Merge pull request #36437 from humblec/glusterfs-clusterid-prov
Automatic merge from submit-queue

Add `clusterid`, an optional parameter to storageclass.

At present, the admin doesn't have the privilege to choose the
trusted storage pool from which the persistent gluster volume
has to be provided.

This patch introduces a new storage class parameter which allows
the admin to specify the storage pool/cluster if required.
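For illustration, a hedged sketch of a StorageClass using the new clusterid parameter; the resturl and the cluster ID value are placeholder assumptions:

```yaml
# Sketch only: clusterid selects which trusted storage pool to provision from.
kind: StorageClass
apiVersion: storage.k8s.io/v1beta1
metadata:
  name: gluster-pool-a
provisioner: kubernetes.io/glusterfs
parameters:
  resturl: "http://heketi.example.com:8081"
  clusterid: "630372ccdc720a92c681fb928f27b53f"
```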

Signed-off-by: Humble Chirammal <hchiramm@redhat.com>
2016-12-01 06:31:25 -08:00
Martin Linkhorst
001e75ba8c
fix(kubelet): reference pod by namespace/name 2016-12-01 11:22:55 +01:00
Kubernetes Submit Queue
5658addb9b Merge pull request #37413 from vmware/FixUnmountVolume
Automatic merge from submit-queue

kubernetes attempts to unmount a wrong vSphere volume and stops making any progress after that

This is in reference to the bug #37332 which was accidentally closed. So created this new PR.

The code is already reviewed as part of PR #37332 

Fixes issue #37022 

@saad-ali @jingxu97 @abrarshivani @kerneltime
2016-11-30 23:26:06 -08:00
NickrenREN
e08f263d72 test cephfs spec construct function
test ConstructVolumeSpec function in pkg/volume/cephfs/cephfs_test.go
2016-12-01 13:50:15 +08:00
Kubernetes Submit Queue
4c0781e962 Merge pull request #37167 from luomiao/fix-photon-plugin-ConstructVolumeSpec
Automatic merge from submit-queue

Fix photon controller plugin to construct with correct PdID

**What this PR does / why we need it**:
This PR fixes a mismatch of the unmount path in the photon volume plugin, which results from assigning the volume spec name to the persistent disk ID. Without this fix, the unmounting process stalls in the reconciler when a pod is deleted. Restarting the same pod will see a mount failure because the previous unmount is still in progress.

The input variable of the ConstructVolumeSpec function is the volume spec name, not the persistent disk ID. Previously the function directly constructed a new volume spec by assigning the volume spec name to the persistent disk ID, which results in a mismatched mount path. The fix finds the pdID according to the mount path and constructs the volume spec with the correct pdID.

I have tested the patch with back-to-back pod creation/deletion, and mounting/unmounting of the photon persistent disk volume source now performs normally.

This needs to be cherry-picked to the 1.5 release branch.
2016-11-30 21:11:11 -08:00
Pengfei Ni
f584ed4398 Fix package aliases to follow golang convention 2016-11-30 15:40:50 +08:00
NickrenREN
43be2d87e9 MetricsStatfs GetMetrics() function test
add test function to test GetMetrics() function in pkg/volume/metrics_statfs_test.go
2016-11-30 09:46:20 +08:00
Balu Dontu
fbd1390839 Fix for unmount volume to take in volumePath instead of volumeName 2016-11-28 18:21:12 -08:00
Kubernetes Submit Queue
68cd97a529 Merge pull request #35615 from jsafrane/fix-gluster-errors
Automatic merge from submit-queue

Improve error logging in glusterfs provisioner

- log `err` if it is known
- unify log message style
2016-11-28 12:30:38 -08:00
Kubernetes Submit Queue
e94118411c Merge pull request #36900 from vwfs/volume_reconciler_verbosity
Automatic merge from submit-queue

Reduce verbosity of volume reconciler

**What this PR does / why we need it**:
It reduces the log verbosity for attaching of volumes

**Which issue this PR fixes** *(optional, in `fixes #<issue number>(, fixes #<issue_number>, ...)` format, will close that issue when PR gets merged)*: fixes #

**Special notes for your reviewer**:

**Release note**:
```release-note
Reduce verbosity of volume reconciler when attaching volumes
```

Set the logging level for information about attaching of volumes from 1 to 4.
Otherwise the log is spammed with one line per 100 ms while attaching is
in progress, and afterwards for as long as the volume is attached.
2016-11-28 11:42:10 -08:00
Jan Safranek
9484de5d09 Improve error reporting in Ceph RBD provisioner.
- We should report an error when user references a secret that cannot be found
- We should report output of rbd create/delete commands, logging "exit code 1"
  is not enough.


Before:
```
Events:
  FirstSeen     LastSeen        Count   From                            SubobjectPath   Type            Reason                  Message
  ---------     --------        -----   ----                            -------------   --------        ------                  -------
  33m           33m             1       {persistentvolume-controller }                  Warning         ProvisioningFailed      Failed to provision volume with StorageClass "cephrbdprovisioner": rbd: create volume failed, err: exit status 1
```

After:

```
Events:
  FirstSeen     LastSeen        Count   From                            SubobjectPath   Type            Reason                  Message
  ---------     --------        -----   ----                            -------------   --------        ------                  -------
  33m           33m             1       {persistentvolume-controller }                  Warning         ProvisioningFailed      Failed to provision volume with StorageClass "cephrbdprovisioner": failed to create rbd image: exit status 1, command output: rbd: couldn't connect to the cluster
```
2016-11-28 12:08:09 +01:00
Miao Luo
c240042231 Fix photon controller plugin to construct with correct PdID
The input variable of the ConstructVolumeSpec function is the volume spec
name, not the persistent disk ID. Previously the function directly
constructed a new volume spec by assigning the volume spec name to the
persistent disk ID, which resulted in a mismatched mount path. The fix
finds the pdID for the mount path and constructs the volume spec with the
correct pdID.
2016-11-23 18:12:03 -08:00
Chao Xu
bcc783c594 run hack/update-all.sh 2016-11-23 15:53:09 -08:00
Chao Xu
bb675d395f dependencies: pkg/volume 2016-11-23 15:53:09 -08:00
Humble Chirammal
4aeb2a5771 At present, the admin doesn't have the ability to choose the
trusted storage pool from which a persistent Gluster volume
is provisioned.

This patch introduces a new storage class parameter which allows
the admin to specify a storage pool/cluster if required.

Signed-off-by: Humble Chirammal <hchiramm@redhat.com>
2016-11-23 12:57:35 +05:30
Kubernetes Submit Queue
1f82f2491a Merge pull request #37206 from gmarek/nodecontroller
Automatic merge from submit-queue

Add more logging around Pod deletion

After this PR we'll have at least V(2) level log near all Pod deletions.

@saad-ali - this is required by GKE to help with diagnosing a possible problem.

cc @dchen1107 @wojtek-t
2016-11-22 01:42:14 -08:00
Xiangfei Zhu
89c0aa735a Adding rescan scsi controller for cinder
For the lsilogic SCSI controller, an attached Cinder volume does not
appear under /dev/ automatically unless a rescan is done.
This approach was used in the vSphere volume provider before PR #27496
dropped support for the lsilogic SCSI controller.
2016-11-21 22:49:18 -08:00
Kubernetes Submit Queue
a47614dd15 Merge pull request #37122 from childsb/revert_gid
Automatic merge from submit-queue

Revert "Use Gid when provisioning Gluster Volumes."

On further inspection the design in #35460 was not secure enough.  This PR reverts the change. 

This reverts commit 7a0d219d12.
2016-11-21 12:11:31 -08:00
gmarek
795961f7e7 Add more logging around Pod deletion 2016-11-21 11:20:48 +01:00
Alexander Block
1c35e3c275 Add missing nodeName parameter to log call 2016-11-19 13:56:38 +01:00
Kubernetes Submit Queue
95ab8065c6 Merge pull request #36840 from jingxu97/Nov/aws-volumeid
Automatic merge from submit-queue

fix issue in converting aws volume id from mount paths

This PR is to fix the issue in converting aws volume id from mount
paths. Currently there are three aws volume id formats supported. The
following lists example of those three formats and their corresponding
global mount paths:
1. aws:///vol-123456
(/var/lib/kubelet/plugins/kubernetes.io/aws-ebs/mounts/aws/vol-123456)
2. aws://us-east-1/vol-123456
(/var/lib/kubelet/plugins/kubernetes.io/mounts/aws/us-est-1/vol-123455)
3. vol-123456
(/var/lib/kubelet/plugins/kubernetes.io/mounts/aws/us-est-1/vol-123455)

For the first two cases, we need to check the mount path and convert
them back to the original format.

This PR fixes #36269
2016-11-18 15:17:20 -08:00
childsb
f4ff79af7b Revert "Use Gid when provisioning Gluster Volumes."
This reverts commit 7a0d219d12.
2016-11-18 15:32:24 -06:00
Kubernetes Submit Queue
f90d879204 Merge pull request #36827 from jsafrane/fix-recycler-pod-name
Automatic merge from submit-queue

Fix recycler pod deletion race.

We should use a clone of the recycler pod template instead of reusing the same
one for two or more recyclers running in parallel.

Also add some logs to relevant places to spot the error easily next time.

Fixes https://bugzilla.redhat.com/show_bug.cgi?id=1392338
2016-11-18 09:05:11 -08:00
Jing Xu
1b89c79e55 Update aws_ebs.go
fix typo in glog
2016-11-17 11:14:55 -08:00
Jing Xu
3d3e44e77e fix issue in converting aws volume id from mount paths
This PR is to fix the issue in converting aws volume id from mount
paths. Currently there are three aws volume id formats supported. The
following lists example of those three formats and their corresponding
global mount paths:
1. aws:///vol-123456
(/var/lib/kubelet/plugins/kubernetes.io/aws-ebs/mounts/aws/vol-123456)
2. aws://us-east-1/vol-123456
(/var/lib/kubelet/plugins/kubernetes.io/mounts/aws/us-est-1/vol-123455)
3. vol-123456
(/var/lib/kubelet/plugins/kubernetes.io/mounts/aws/us-est-1/vol-123455)

For the first two cases, we need to check the mount path and convert
them back to the original format.
2016-11-16 22:35:48 -08:00
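As a rough illustration of the conversion the commit above describes, a minimal Go sketch follows. The helper name and the exact path-to-ID mapping are assumptions, not the actual kubelet code.

```go
// Sketch: recover an AWS volume ID from the trailing components of a global
// mount path. Illustrative only; the real plugin handles more edge cases.
package main

import (
	"fmt"
	"strings"
)

func volumeIDFromMountPath(mountPath string) (string, error) {
	const marker = "/mounts/"
	i := strings.Index(mountPath, marker)
	if i < 0 {
		return "", fmt.Errorf("unexpected mount path: %s", mountPath)
	}
	parts := strings.Split(mountPath[i+len(marker):], "/")
	switch {
	case len(parts) == 3 && parts[0] == "aws":
		// .../mounts/aws/<zone>/vol-... -> aws://<zone>/vol-...
		return fmt.Sprintf("aws://%s/%s", parts[1], parts[2]), nil
	case len(parts) == 2 && parts[0] == "aws":
		// .../mounts/aws/vol-... -> aws:///vol-... (no zone)
		return fmt.Sprintf("aws:///%s", parts[1]), nil
	case len(parts) == 1:
		// plain vol-... stays as-is
		return parts[0], nil
	}
	return "", fmt.Errorf("cannot parse volume ID from %s", mountPath)
}

func main() {
	id, _ := volumeIDFromMountPath(
		"/var/lib/kubelet/plugins/kubernetes.io/aws-ebs/mounts/aws/us-east-1/vol-123456")
	fmt.Println(id) // aws://us-east-1/vol-123456
}
```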
Humble Chirammal
7a0d219d12 Use Gid when provisioning Gluster Volumes.
BUG # https://github.com/openshift/origin/issues/11556

Signed-off-by: Humble Chirammal <hchiramm@redhat.com>
2016-11-16 19:43:51 +05:30
Jan Safranek
76755034a1 Fix recycler pod deletion race.
We should use a clone of the recycler pod template instead of reusing the same
one for two or more recyclers running in parallel.

Also add some logs to relevant places to spot the error easily next time.
2016-11-15 17:22:32 +01:00
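To illustrate the cloning idea in the commit above, here is a minimal sketch. It uses a plain struct standing in for the real Pod type; the copy-by-value stands in for a deep copy of the template and is an assumption, not the actual controller code.

```go
// Sketch: each recycler run gets its own copy of the pod template, so two
// recyclers running in parallel never mutate the same object. Illustrative only.
package main

import "fmt"

type podTemplate struct {
	Name   string
	Volume string
}

func newRecyclerPod(template podTemplate, pvName string) podTemplate {
	pod := template // value copy stands in for a DeepCopy of the real template
	pod.Name = "recycler-for-" + pvName
	pod.Volume = pvName
	return pod
}

func main() {
	tmpl := podTemplate{Name: "pv-recycler-template"}
	a := newRecyclerPod(tmpl, "nfs-pv-1")
	b := newRecyclerPod(tmpl, "nfs-pv-2")
	fmt.Println(a.Name, b.Name, tmpl.Name) // each run has its own name; template untouched
}
```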
Tim Hockin
69f6f8f680 tweak 2016-11-15 08:56:17 +01:00
rkouj
b85ac95143 Implement CanMount() for gfsMounter for linux 2016-11-14 12:18:06 -08:00
Jan Safranek
b3050bbdfa Improve error logging in glusterfs provisioner 2016-11-14 15:16:09 +01:00
Saad Ali
346c1e80e7 Update pkg/volume/OWNERS 2016-11-13 18:07:41 -08:00
Saad Ali
4259853f4a Update OWNERS for pkg/volume/gce_pd/ 2016-11-13 18:05:12 -08:00
Saad Ali
62d3ac2e73 Update OWNERS for pkg/volume/util/ 2016-11-13 18:02:15 -08:00
Saad Ali
df325cc5ca Remove reviwers from volume/vsphere_volume 2016-11-13 18:00:58 -08:00
Saad Ali
2924a8d62e Add approvers for pkg/volume/vsphere_volume 2016-11-13 17:59:43 -08:00
Kubernetes Submit Queue
193e2ae1d1 Merge pull request #36386 from sjenning/fix-secret-file-mode
Automatic merge from submit-queue

Avoid setting S_ISGID on files in volumes

Some applications are having issues with the S_ISGID bit being set on files in volumes.  We intend to do this for directories so that the group ID is inherited, but not for files, for which S_ISGID indicates mandatory file locking https://linux.die.net/man/2/stat

xref https://bugzilla.redhat.com/show_bug.cgi?id=1387306

@ncdc @derekwaynecarr @pmorie
2016-11-10 01:19:02 -08:00
Rajat Ramesh Koujalagi
d81e216fc6 Better messaging for missing volume components on host to perform mount 2016-11-09 15:16:11 -08:00
Antoine Pelisse
fd510b1207 Update OWNERS approvers and reviewers: pkg/volume 2016-11-09 10:17:36 -08:00
Miao Luo
20b9fc6905 Photon Controller support: Address github code review comments. 2016-11-08 09:37:20 -08:00
Miao Luo
b22ccc6780 Support persistent volume on Photon Controller platform
1. Enable Photon Controller as cloud provider
2. Support Photon persistent disk as volume source/persistent volume
source
2016-11-08 09:36:16 -08:00
Kubernetes Submit Queue
c61911267f Merge pull request #35616 from pospispa/85-refactor-newRecyclerFunc-from-volume-plugins
Automatic merge from submit-queue

Simplifies NFS and hostPath plugin code

Simplifies NFS and hostPath plugin code.

cc: @jsafrane
2016-11-08 07:18:16 -08:00
Kubernetes Submit Queue
d87dfa2723 Merge pull request #35669 from humblec/glusterfs-instead-gluster
Automatic merge from submit-queue

Make a consistent name ( GlusterFS instead of Gluster) in variables a…

Signed-off-by: Humble Chirammal hchiramm@redhat.com
2016-11-08 04:29:19 -08:00
Seth Jennings
67f3134232 Avoid setting S_ISGID on files in volumes.
Directories in volumes are set S_ISGID to ensure files created inside
them inherit group ownership.  Currently, files are also set S_ISGID
however this is not relevant to the original intent, and indicates
'mandatory file locking' (stat(2)).

With this commit, only directories are set S_ISGID.
2016-11-07 14:18:32 -06:00
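A rough sketch of the behaviour described above, assuming a simple walk over the volume directory; this is illustrative only, not the actual pkg/volume ownership code.

```go
// Sketch: walk a volume and add the setgid bit only to directories, so group
// ownership is inherited without marking files for mandatory locking.
package main

import (
	"os"
	"path/filepath"
)

func setGroupInheritance(root string) error {
	return filepath.Walk(root, func(path string, info os.FileInfo, err error) error {
		if err != nil {
			return err
		}
		if !info.IsDir() {
			return nil // leave files alone; S_ISGID on files means mandatory locking
		}
		return os.Chmod(path, info.Mode()|os.ModeSetgid)
	})
}

func main() {
	// hypothetical path, for illustration only
	_ = setGroupInheritance("/var/lib/kubelet/pods/example/volumes/demo")
}
```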
pospispa
dc9bb87ac7 Simplifies NFS and Host Path Plugin - Removed newRecyclerFunc, newDeleterFunc and newProvisionerFunc
The struct hostPathPlugin contains newRecyclerFunc, newDeleterFunc and newProvisionerFunc items that each have only one instance, i.e. the newRecycler, newDeleter or newProvisioner function.

That's why the newRecyclerFunc, newDeleterFunc and newProvisionerFunc items are removed and the newRecycler, newDeleter and newProvisioner functions are called directly.

In addition, the TestRecycler tests whether the NewFakeRecycler function is called and returns nil. This is no longer needed, so this particular part of the test is removed, along with the no-longer-used NewFakeRecycler function.

Similarly for the NFS plugin, the struct nfsPlugin contains a newRecyclerFunc item that has only one instance, i.e. the newRecycler function. That's why the newRecyclerFunc item is removed and the newRecycler function is called directly. In addition, the TestRecycler tests whether the newMockRecycler function is called and returns nil. This is no longer needed, so this particular part of the test is removed, along with the no-longer-used newMockRecycler function.
2016-11-07 10:39:04 +01:00
Lucas Käldström
190a513cf8 Fix the crossbuild that #35132 broke 2016-11-06 08:08:25 -08:00
Kubernetes Submit Queue
7acec071c3 Merge pull request #35430 from jsafrane/remove-pv-annotations
Automatic merge from submit-queue

Remove PV annotations for quobyte provisioner

This is the last provisioner that uses annotations to pass secrets from provisioner to deleter.

Fixes #34822

@johscheuer, I don't have access to Quobyte, please take a look and retest the plugin. An e2e test for Quobyte would be nice!

@kubernetes/sig-storage
2016-11-06 05:26:45 -08:00
Kubernetes Submit Queue
33dab1d555 Merge pull request #35629 from hpcloud/bug/33128-unused-waitfordetach
Automatic merge from submit-queue

Remove unused WaitForDetach from Detacher interface and plugins

See issue #33128 and PR #33270

We can't rely on the device name provided by OpenStack Cinder, and thus
must perform detection based on the drive serial number (aka its Cinder ID)
on the kubelet itself.

This needs to be removed now, as part of #33128, as the code can't be
updated to attempt device detection and fall back to the Cinder-provided
deviceName: detection "fails" when the device is gone, and if Cinder has
reported a deviceName that another volume has in reality used, then this
will block forever (or until the other, unrelated, volume has been
detached).
2016-11-06 04:52:23 -08:00
Kubernetes Submit Queue
43a915e628 Merge pull request #35491 from pmorie/byebye-getrootcontext
Automatic merge from submit-queue

Remove GetRootContext method from VolumeHost interface

Remove the `GetRootContext` call from the `VolumeHost` interface, since Kubernetes no longer needs to know the SELinux context of the Kubelet directory.

Per #33951 and #35127.

Depends on #33663; only the last commit is relevant to this PR.
2016-11-06 01:09:19 -08:00
Kubernetes Submit Queue
486a1ad3e4 Merge pull request #31707 from apprenda/windows_infra_container
Automatic merge from submit-queue

Initial work on running windows containers on Kubernetes

<!--  Thanks for sending a pull request!  Here are some tips for you:
1. If this is your first time, read our contributor guidelines https://github.com/kubernetes/kubernetes/blob/master/CONTRIBUTING.md and developer guide https://github.com/kubernetes/kubernetes/blob/master/docs/devel/development.md
2. If you want *faster* PR reviews, read how: https://github.com/kubernetes/kubernetes/blob/master/docs/devel/faster_reviews.md
3. Follow the instructions for writing a release note: https://github.com/kubernetes/kubernetes/blob/master/docs/devel/pull-requests.md#release-notes
-->

This is the first stab at getting the Kubelet running on Windows (fixes #30279), and getting it to deploy network-accessible pods that consist of Windows containers. Thanks @csrwng, @jbhurat for helping out.

The main challenge with Windows containers at this point is that container networking is not supported. In other words, each container in the pod will get its own IP address. For this reason, we had to make a couple of changes to the kubelet when it comes to setting the pod's IP in the Pod Status. Instead of using the infra-container's IP, we use the IP address of the first container.

Other approaches we investigated involved "disabling" the infra container, either conditionally on `runtime.GOOS` or having a separate windows-docker container runtime that re-implemented some of the methods (would require some refactoring to avoid maintainability nightmare). 

Other changes:
- The default docker endpoint was removed. This results in the docker client using the default for the specific underlying OS.

More detailed documentation on how to setup the Windows kubelet can be found at https://docs.google.com/document/d/1IjwqpwuRdwcuWXuPSxP-uIz0eoJNfAJ9MWwfY20uH3Q. 

cc: @ikester @brendandburns @jstarks
2016-11-06 01:30:11 -07:00
Kubernetes Submit Queue
f650ddf800 Merge pull request #35132 from dashpole/per_volume_inode
Automatic merge from submit-queue

Per Volume Inode Accounting

Collects volume inode stats using the same find command as cadvisor.  The command is "find _path_ -xdev -printf '.' | wc -c".  The output is passed to the summary api, and will be consumed by the eviction manager.

This cannot be merged yet, as it depends on changes adding the InodesUsed field to the summary api, and the eviction manager consuming this.  Expect tests to fail until this happens.
DEPENDS ON #35137
2016-11-05 23:45:44 -07:00
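A sketch of how such a find-based inode count could be driven from Go follows; the helper name is an assumption and this is not the actual summary/stats implementation.

```go
// Sketch: count inodes under a path the way described above, by printing one
// '.' per directory entry found and counting the bytes. Requires GNU find.
package main

import (
	"bytes"
	"fmt"
	"os/exec"
)

func countInodes(path string) (int, error) {
	// Equivalent of: find <path> -xdev -printf '.' | wc -c
	out, err := exec.Command("find", path, "-xdev", "-printf", ".").Output()
	if err != nil {
		return 0, err
	}
	return bytes.Count(out, []byte(".")), nil
}

func main() {
	n, err := countInodes("/var/lib/kubelet/pods") // hypothetical path
	fmt.Println(n, err)
}
```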
Kubernetes Submit Queue
f4738ff575 Merge pull request #35883 from justinsb/aws_strong_volumetype
Automatic merge from submit-queue

AWS: strong-typing for k8s vs aws volume ids
2016-11-05 02:29:17 -07:00
Kubernetes Submit Queue
00269a6c60 Merge pull request #35434 from rootfs/deviceopen
Automatic merge from submit-queue

refactor DeviceOpened() so it won't return error if device doesn't exist

<!--  Thanks for sending a pull request!  Here are some tips for you:
1. If this is your first time, read our contributor guidelines https://github.com/kubernetes/kubernetes/blob/master/CONTRIBUTING.md and developer guide https://github.com/kubernetes/kubernetes/blob/master/docs/devel/development.md
2. If you want *faster* PR reviews, read how: https://github.com/kubernetes/kubernetes/blob/master/docs/devel/faster_reviews.md
3. Follow the instructions for writing a release note: https://github.com/kubernetes/kubernetes/blob/master/docs/devel/pull-requests.md#release-notes
-->

**What this PR does / why we need it**:
DeviceOpened() is called after the device is unmounted but before it is detached. Some volumes such as rbd don't support third-party detach; they have to be detached during unmount. Once detached, the device path vanishes. This causes a false alarm when DeviceOpened() is called.

The fix is to ignore the IsNotExist error.

**Which issue this PR fixes** _(optional, in `fixes #<issue number>(, #<issue_number>, ...)` format, will close that issue when PR gets merged)_: fixes #

**Special notes for your reviewer**:
@kubernetes/sig-storage 

**Release note**:

<!--  Steps to write your release note:
1. Use the release-note-* labels to set the release note state (if you have access) 
2. Enter your extended release note in the below block; leaving it blank means using the PR title as the release note. If no release note is required, just write `NONE`. 
-->

``` release-note
```

Signed-off-by: Huamin Chen hchen@redhat.com
2016-11-04 07:40:03 -07:00
Huamin Chen
901e084a98 check whether the device path is valid before calling DeviceOpened(), to avoid a false negative on devices that no longer exist
Signed-off-by: Huamin Chen <hchen@redhat.com>
2016-11-03 15:02:15 -04:00
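A minimal sketch of the check described above, treating a missing device path as "not opened" instead of an error; the wrapper shape is an assumption, not the actual plugin code.

```go
// Sketch: some volume types detach during unmount, so a vanished device path
// should not be reported as an error by an "is this device open" check.
package main

import (
	"fmt"
	"os"
)

func deviceOpened(devicePath string, isOpened func(string) (bool, error)) (bool, error) {
	if _, err := os.Stat(devicePath); os.IsNotExist(err) {
		return false, nil // device already gone; nothing can be holding it open
	} else if err != nil {
		return false, err
	}
	return isOpened(devicePath)
}

func main() {
	opened, err := deviceOpened("/dev/rbd0", func(string) (bool, error) { return false, nil })
	fmt.Println(opened, err)
}
```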
Paul Morie
4722cb299b Remove GetRootContext from VolumeHost 2016-11-03 12:21:19 -04:00
Kiall Mac Innes
ce8eda94df Don't rely on device name provided by Cinder
See issue #33128

We can't rely on the device name provided by Cinder, and thus must perform
detection based on the drive serial number (aka its Cinder ID) on the
kubelet itself.

This patch re-works the cinder volume attacher to ignore the supplied
deviceName, and instead defer to the pre-existing GetDevicePath method to
discover the device path based on its serial number and the /dev/disk/by-id
mapping.

This new behavior is controlled by a config option, as falling back
to the Cinder value when we can't discover a device would risk devices
not showing up, falling back to Cinder's guess, and detecting the wrong
disk as attached.
2016-11-02 18:48:11 +01:00
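A rough sketch of serial-number-based discovery via /dev/disk/by-id follows. The helper name, the by-id naming convention, and the 20-character truncation are assumptions, not the actual attacher code.

```go
// Sketch: look up a block device by Cinder volume ID using /dev/disk/by-id
// symlinks instead of trusting the device name reported by Cinder.
package main

import (
	"fmt"
	"os"
	"path/filepath"
	"strings"
)

func minInt(a, b int) int {
	if a < b {
		return a
	}
	return b
}

func findDeviceByVolumeID(volumeID string) (string, error) {
	entries, err := os.ReadDir("/dev/disk/by-id")
	if err != nil {
		return "", err
	}
	for _, e := range entries {
		// Assumption: the by-id name embeds a prefix of the Cinder volume ID.
		if strings.Contains(e.Name(), volumeID[:minInt(20, len(volumeID))]) {
			return filepath.EvalSymlinks(filepath.Join("/dev/disk/by-id", e.Name()))
		}
	}
	return "", fmt.Errorf("device for volume %s not found", volumeID)
}

func main() {
	dev, err := findDeviceByVolumeID("0dfab1e3-example") // hypothetical ID
	fmt.Println(dev, err)
}
```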
Justin Santa Barbara
3cdbfc98af AWS: strong-typing for k8s vs aws volume ids
We are more liberal in what we accept as a volume id in k8s, and indeed
we ourselves generate names that look like `aws://<zone>/<id>` for
dynamic volumes.

This volume id (hereafter a KubernetesVolumeID) cannot directly be
compared to an AWS volume ID (hereafter an awsVolumeID).

We introduce types for each, to prevent accidental comparison or
confusion.

Issue #35746
2016-11-02 09:42:55 -04:00
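A minimal sketch of the type separation described above; the type names follow the commit text, but the mapping logic here is an assumption for illustration.

```go
// Sketch: distinct string types prevent accidentally comparing a Kubernetes
// volume name (aws://<zone>/<id>) with a raw AWS volume ID.
package main

import (
	"fmt"
	"strings"
)

type KubernetesVolumeID string // e.g. "aws://us-east-1d/vol-123456" or "vol-123456"
type awsVolumeID string        // e.g. "vol-123456"

// mapToAWSVolumeID strips the aws:// prefix and zone, if present.
func (k KubernetesVolumeID) mapToAWSVolumeID() (awsVolumeID, error) {
	s := string(k)
	if strings.HasPrefix(s, "aws://") {
		parts := strings.Split(strings.TrimPrefix(s, "aws://"), "/")
		s = parts[len(parts)-1]
	}
	if !strings.HasPrefix(s, "vol-") {
		return "", fmt.Errorf("invalid AWS volume ID in %q", k)
	}
	return awsVolumeID(s), nil
}

func main() {
	id, err := KubernetesVolumeID("aws://us-east-1d/vol-123456").mapToAWSVolumeID()
	fmt.Println(id, err) // vol-123456 <nil>
}
```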
Kiall Mac Innes
ccb8d53a39 Remove unused WaitForDetach from Detacher interface and plugins
This has been unused since 542f2dc7, and relies on deviceName, which
can no longer be relied upon (see issue #33128).

This needs to be removed now, as part of #33128, as the code can't be
updated to attempt device detection and fall back to the Cinder-provided
deviceName: detection "fails" when the device is gone, and if Cinder has
reported a deviceName that another volume has in reality used, then this
will block forever (or until the other, unrelated, volume has been
detached).
2016-11-02 11:59:13 +01:00
Kubernetes Submit Queue
3d33b45e43 Merge pull request #30091 from rootfs/azure-storage
Automatic merge from submit-queue

support Azure disk dynamic provisioning

azure disk dynamic provisioning

A screen shot 

``` console
$ kubectl create -f examples/experimental/persistent-volume-provisioning/azure-dd.yaml
storageclass "slow" created
$ kubectl create -f examples/experimental/persistent-volume-provisioning/claim1.json
persistentvolumeclaim "claim1" created
$ kubectl describe pvc
Name:       claim1
Namespace:  default
Status:     Bound
Volume:     pvc-de7150d1-6a37-11e6-aec9-000d3a12e034
Labels:     <none>
Capacity:   3Gi
Access Modes:   RWO
$ kubectl create -f pod.yaml
replicationcontroller "nfs-server" created
$ kubectl describe pod
Name:       nfs-server-b9w6x
Namespace:  default
Node:       rootfs-dev/172.24.0.4
Start Time: Wed, 24 Aug 2016 19:46:21 +0000
Labels:     role=nfs-server
Status:     Running
IP:     172.17.0.2
Controllers:    ReplicationController/nfs-server
Containers:
  nfs-server:
    Container ID:   docker://be6f8c0e26dc896d4c53ef0d21c9414982f0b39a10facd6b93a255f9e1c3806c
    Image:      nginx
    Image ID:       docker://bfdd4ced794ed276a28cf56b233ea58dec544e9ca329d796cf30b8bcf6d39b3f
    Port:       
    State:      Running
      Started:      Wed, 24 Aug 2016 19:49:19 +0000
    Ready:      True
    Restart Count:  0
    Volume Mounts:
      /exports from mypvc (rw)
      /var/run/secrets/kubernetes.io/serviceaccount from default-token-9o0fj (ro)
    Environment Variables:  <none>
Conditions:
  Type      Status
  Initialized   True 
  Ready     True 
  PodScheduled  True 
Volumes:
  mypvc:
    Type:   PersistentVolumeClaim (a reference to a PersistentVolumeClaim in the same namespace)
    ClaimName:  claim1
    ReadOnly:   false
  default-token-9o0fj:
    Type:   Secret (a volume populated by a Secret)
    SecretName: default-token-9o0fj
QoS Class:  BestEffort
Tolerations:    <none>
Events:
  FirstSeen LastSeen    Count   From            SubobjectPath           Type        Reason      Message
  --------- --------    -----   ----            -------------           --------    ------      -------
  11m       11m     1   {default-scheduler }                    Normal      Scheduled   Successfully assigned nfs-server-b9w6x to rootfs-dev
  9m        9m      1   {kubelet rootfs-dev}                    Warning     FailedMount Unable to mount volumes for pod "nfs-server-b9w6x_default(6eb7fd98-6a33-11e6-aec9-000d3a12e034)": timeout expired waiting for volumes to attach/mount for pod "nfs-server-b9w6x"/"default". list of unattached/unmounted volumes=[mypvc]
  9m        9m      1   {kubelet rootfs-dev}                    Warning     FailedSync  Error syncing pod, skipping: timeout expired waiting for volumes to attach/mount for pod "nfs-server-b9w6x"/"default". list of unattached/unmounted volumes=[mypvc]
  8m        8m      1   {kubelet rootfs-dev}    spec.containers{nfs-server} Normal      Pulling     pulling image "nginx"
  8m        8m      1   {kubelet rootfs-dev}    spec.containers{nfs-server} Normal      Pulled      Successfully pulled image "nginx"
  8m        8m      1   {kubelet rootfs-dev}    spec.containers{nfs-server} Normal      Created     Created container with docker id be6f8c0e26dc
  8m        8m      1   {kubelet rootfs-dev}    spec.containers{nfs-server} Normal      Started     Started container with docker id be6f8c0e26dc

```

@colemickens @brendandburns
2016-11-01 17:27:14 -07:00
Jitendra Bhurat
66a1ef25e0
Fixing Volumes on Windows 2016-11-01 15:48:37 -04:00
David Ashpole
d494ef66f0 Collect volume inode stats using the same find command that cadvisor uses; these are included in the summary 2016-11-01 10:51:11 -07:00
Kubernetes Submit Queue
44b684ad53 Merge pull request #33663 from pmorie/selinux-fixes
Automatic merge from submit-queue

SELinux Overhaul

Overhauls handling of SELinux in Kubernetes.  TLDR: Kubelet dir no longer has to be labeled `svirt_sandbox_file_t`.

Fixes #33351 and #33510.  Implements #33951.
2016-11-01 05:04:17 -07:00
Jan Safranek
472c2d6e8c Remove PV annotations for quobyte provisioner 2016-11-01 10:40:44 +01:00
Alexander Brand
244152544c
Changes to kubelet to support win containers 2016-10-31 14:20:49 -04:00
Cesar Wong
09285864db
Initial windows container runtime 2016-10-31 14:20:49 -04:00
Kubernetes Submit Queue
106492708a Merge pull request #35285 from humblec/glusterfs-stale-volumes
Automatic merge from submit-queue

Remove stale volumes if endpoint/svc creation fails.

Remove stale volumes if endpoint/svc creation fails.

Signed-off-by: Humble Chirammal hchiramm@redhat.com
2016-10-31 04:06:43 -07:00
Kubernetes Submit Queue
60dc2fa5d8 Merge pull request #35675 from liggitt/pv-secrets
Automatic merge from submit-queue

Require PV provisioner secrets to match type

In 1.5, PV provisioners allow targeting namespaced secrets via StorageClass params. This adds a requirement that those secrets' type match the volume provisioner plugin name, to prevent targeting and extraction of arbitrary secrets.

Helps limit secret targeting issues mentioned in https://github.com/kubernetes/kubernetes/issues/34822
2016-10-30 02:41:05 -07:00
Kubernetes Submit Queue
3e7172d49e Merge pull request #34859 from jingxu97/syncAttach-10-15
Automatic merge from submit-queue

Add sync state loop in master's volume reconciler

At the master volume reconciler, the information about which volumes are
attached to nodes is cached in the actual state of the world. However, this
information might be out of date if a node is terminated (the volume
is detached automatically). In this situation, the reconciler assumes the
volume is still attached and will not issue an attach operation when the
node comes back. Pods created on those nodes will fail to mount.
This PR adds logic to periodically sync up the truth for attached
volumes kept in
the actual state cache. If a volume is no longer attached to the node,
the actual state is updated to reflect the truth. In turn,
the reconciler takes action if needed.
To avoid issuing many concurrent operations on the cloud provider, this PR
adds a batch operation to check whether a list of volumes is
attached to the node, instead of one request per volume.
2016-10-28 18:33:29 -07:00
Jing Xu
abbde43374 Add sync state loop in master's volume reconciler
At the master volume reconciler, the information about which volumes are
attached to nodes is cached in the actual state of the world. However, this
information might be out of date if a node is terminated (the volume
is detached automatically). In this situation, the reconciler assumes the
volume is still attached and will not issue an attach operation when the
node comes back. Pods created on those nodes will fail to mount.

This PR adds logic to periodically sync up the truth for attached volumes kept in the actual state cache. If a volume is no longer attached to the node, the actual state is updated to reflect the truth. In turn, the reconciler takes action if needed.

To avoid issuing many concurrent operations on the cloud provider, this PR
adds a batch operation to check whether a list of volumes is
attached to the node, instead of one request per volume.

More details are explained in PR #33760
2016-10-28 09:24:53 -07:00
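A sketch of the batch-verification idea (one cloud-provider call per node rather than one per volume) follows; the interface name and shape here are assumptions, not the real operation executor API.

```go
// Sketch: periodic sync that verifies, in one batched cloud-provider call,
// whether volumes recorded as attached really are, and marks stale ones detached.
package main

import "fmt"

// BulkVolumeVerifier is an assumed interface: given node -> expected volumes,
// it returns which of them are actually attached.
type BulkVolumeVerifier interface {
	DisksAreAttached(expected map[string][]string) (map[string]map[string]bool, error)
}

func syncAttachedState(verifier BulkVolumeVerifier, expected map[string][]string,
	markDetached func(node, volume string)) error {

	attached, err := verifier.DisksAreAttached(expected)
	if err != nil {
		return fmt.Errorf("bulk verification failed: %v", err)
	}
	for node, volumes := range expected {
		for _, v := range volumes {
			if !attached[node][v] {
				// The cloud provider says the volume is gone (e.g. the node was
				// terminated): update the actual state so the reconciler can re-attach.
				markDetached(node, v)
			}
		}
	}
	return nil
}

func main() {}
```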
Huamin Chen
1d52719465 azure disk volume: support storage class and dynamic provisioning
Signed-off-by: Huamin Chen <hchen@redhat.com>
2016-10-28 13:31:47 +00:00
Janet Kuo
10aee82ae3 Rename PetSet API to StatefulSet 2016-10-27 17:25:10 -07:00
Jordan Liggitt
1dd73c59f3
Require PV provisioner secrets to match type 2016-10-27 02:45:05 -04:00
Humble Chirammal
12b7782240 Make a consistent name (GlusterFS instead of Gluster) in variables and error messages.
Signed-off-by: Humble Chirammal <hchiramm@redhat.com>
2016-10-27 09:28:08 +05:30
Paul Morie
aa855b9f24 Update bazel configurations 2016-10-26 10:39:51 -04:00
Paul Morie
69d7297a37 Remove use of RootContext in empty_dir.go 2016-10-26 10:39:50 -04:00
Paul Morie
7fb99442a6 Refactor pkg/util/selinux 2016-10-26 09:38:03 -04:00
Jing Xu
b02481708a Fix volume states out of sync problem after kubelet restarts
When kubelet restarts, all the information about the volumes is
gone from the actual/desired states. When updating the node status with
mounted volumes, the volume list might be empty even though volumes are
still mounted, in turn causing the master to detach those volumes since
they are not in the mounted volumes list. This fix makes sure the mounted
volumes list is only updated after the reconciler starts its sync states
process. This sync states process scans the existing volume directories
and reconstructs the actual state if it is missing.

This PR also fixes a problem with orphaned pods' directories. In
case a pod directory is unmounted but not yet deleted (e.g.,
interrupted by a kubelet restart), the cleanup routine will delete the
directory so that the pod directory can be cleaned up (it is safe to
delete the directory since it is no longer mounted).

The third issue this PR fixes is that during volume reconstruction in the
actual state, the mounter must not be nil since it is required for creating
container.VolumeMap. If it is nil, it might cause a nil pointer exception
in kubelet.

Details are in proposal PR #33203
2016-10-25 12:29:12 -07:00
Mike Danese
3b6a067afc autogenerated 2016-10-21 17:32:32 -07:00
Humble Chirammal
90263476d5 Remove stale volumes if endpoint/svc creation fails.
Signed-off-by: Humble Chirammal <hchiramm@redhat.com>
2016-10-21 16:43:07 +05:30
bradley childs
3add654451 Update pkg/volume/OWNERS to include Jan Safranek
Jan maintains the binder and volume driver code and should be listed as an owner of this package.
2016-10-20 12:21:06 -05:00
Kubernetes Submit Queue
ed60ee4072 Merge pull request #34705 from humblec/gluster-pvc-namespace-1
Automatic merge from submit-queue

Make use of the PVC namespace when provisioning gluster volumes.

Depends on https://github.com/kubernetes/kubernetes/pull/34611
2016-10-20 01:28:31 -07:00
Kubernetes Submit Queue
0b2674eac7 Merge pull request #34389 from guangxuli/k8s_configmap_test
Automatic merge from submit-queue

add a clean code for TestCanSupport
2016-10-19 23:26:28 -07:00
Humble Chirammal
0d080f986d Use PVC namespace when provisioning GlusterFS volumes.
Signed-off-by: Humble Chirammal <hchiramm@redhat.com>
2016-10-20 00:31:21 +05:30
Huamin Chen
10b29de55c remove pv annotation from rbd volume
Signed-off-by: Huamin Chen <hchen@redhat.com>
2016-10-19 13:30:33 -04:00
guangxuli
2c9e84f50f add a clean code for TestCanSupport
update other location

forgot two files need to be updated
2016-10-19 12:35:46 +08:00
Jan Safranek
2b2508ba15 Remove PV annotations for Gluster provisioner.
Don't store Gluster StorageClass parameters in annotations; it's insecure.
Instead, expect that the StorageClass is available at the time
it's needed by the Gluster deleter.
2016-10-18 09:54:35 +02:00
Jan Safranek
101602ab11 Pass whole PVC to provisioner plugin
The Gluster provisioner is interested in pvc.Namespace and I don't want to add
it as a new field in VolumeOptions - it would contain almost the whole PVC.

Let's pass a direct reference to the PVC instead and let the provisioner pick
the information it is interested in.
2016-10-12 12:22:01 +02:00
Jedrzej Nowak
f0988b95e7 Typos and englishify pkg/volume 2016-10-03 22:39:33 +02:00
Kubernetes Submit Queue
df064881d2 Merge pull request #31005 from simonswine/feature-flocker-dyn-provisioning
Automatic merge from submit-queue

Dynamic provisioning for flocker volume plugin

Refactor flocker volume plugin
* [x] Support provisioning beta (#29006)
* [x] Support deletion
* [x] Use bind mounts instead of /flocker in containers

* [x] support ownership management or SELinux relabeling.
* [x] adds volume specification via datasetUUID (this is guaranteed to be unique)

I based my refactoring work on replicating the GCE-PD behaviour, for the most part.

**Related issues**: #29006 #26908

@jsafrane @mattbates @wallrj @wallnerryan
2016-09-28 01:46:43 -07:00
Kubernetes Submit Queue
1854bdcb0c Merge pull request #29048 from justinsb/volumes_nodename_not_hostname
Automatic merge from submit-queue

Use strongly-typed types.NodeName for a node name

We had another bug where we confused the hostname with the NodeName.

Also, if we want to use different values for the Node.Name (which is
an important step for making installation easier), we need to keep
better control over this.

A tedious but mechanical commit therefore, to change all uses of the
node name to use types.NodeName
2016-09-27 17:58:41 -07:00
Justin Santa Barbara
54195d590f Use strongly-typed types.NodeName for a node name
We had another bug where we confused the hostname with the NodeName.

To avoid this happening again, and to make the code more
self-documenting, we use types.NodeName (a typedef alias for string)
whenever we are referring to the Node.Name.

A tedious but mechanical commit therefore, to change all uses of the
node name to use types.NodeName

Also clean up some of the (many) places where the NodeName is referred
to as a hostname (not true on AWS), or an instanceID (not true on GCE),
etc.
2016-09-27 10:47:31 -04:00
Kubernetes Submit Queue
81a1b0573b Merge pull request #31869 from jsafrane/gluster-secrets
Automatic merge from submit-queue

Use secrets for glusterfs provisioning passwords

- no plain password in StorageClass!
- fix the style along the way
- use PV annotations to pass the configuration from provisioners to deleters, inspired by Ceph RBD provisioning.

~~Proposing 1.4:~~

~~- GlusterFS provisioning is a new 1.4 feature~~
~~- if we release GlusterFS provisioner as it is now, we need to support it's API (i.e. plaintext passwords) until 2.0~~
~~- it can break only GlusterFS provisioning, nothing else~~
~~- it's easy to revert~~

@kubernetes/sig-storage

fixes #31871
2016-09-27 07:32:09 -07:00
Christian Simon
cd0897801b Refactor flocker volume plugin
* Support provisioning
* Support deletion
* Use bind mounts instead of /flocker in containers
* support ownership management or SELinux relabeling.
2016-09-27 13:19:45 +00:00
Kubernetes Submit Queue
4785f6f517 Merge pull request #31978 from jsafrane/detach-before-delete
Automatic merge from submit-queue

Do not report error when deleting an attached volume

Persistent volume controller should not send warning events to a PV and mark the PV as failed when the volume is still attached.

This happens when a user quickly deletes a pod and associated PVC - PV is slowly detaching, while the PVC is already deleted and the PV enters Failed phase.

`Deleter.Deleter` can now return `tryAgainError`, which is sent as INFO to the PV to let the user know we did not forget to delete the PV, however the PV stays in Released state. The controller tries again in the next sync (15 seconds by default).

Fixes #31511
2016-09-25 18:55:32 -07:00
Kubernetes Submit Queue
e9f4db2748 Merge pull request #27714 from jsafrane/event-recycle
Automatic merge from submit-queue

Send recycle events from pod to pv.

This allows users to diagnose what's wrong with recycler. Recycler pods are started automatically with a cryptic name and they are deleted immediately when they finish.

e.g, `kubectl describe pv` could show that NFS cannot be mounted (and how many pods have tried it):

```
  FirstSeen     LastSeen        Count   From                            SubobjectPath   Type            Reason          Message
  ---------     --------        -----   ----                            -------------   --------        ------          -------
  59m           59m             1       {persistentvolume-controller }                  Warning         RecyclerPod     Recycler pod: Unable to mount volumes for pod "recycler-for-nfs_default(5421800e-347b-11e6-a79b-3c970e965218)": timeout expired waiting for volumes to attach/mount for pod "recycler-for-nfs"/"default". list of unattached/unmounted volumes=[vol]
  53m           53m             1       {persistentvolume-controller }                  Warning         RecyclerPod     Recycler pod: Unable to mount volumes for pod "recycler-for-nfs_default(3c9809e5-347c-11e6-a79b-3c970e965218)": timeout expired waiting for volumes to attach/mount for pod "recycler-for-nfs"/"default". list of unattached/unmounted volumes=[vol]
  46m           46m             1       {persistentvolume-controller }                  Warning         RecyclerPod     Recycler pod: Unable to mount volumes for pod "recycler-for-nfs_default(250dd2a2-347d-11e6-a79b-3c970e965218)": timeout expired waiting for volumes to attach/mount for pod "recycler-for-nfs"/"default". list of unattached/unmounted volumes=[vol]
  40m           40m             1       {persistentvolume-controller }                  Warning         RecyclerPod     Recycler pod: Unable to mount volumes for pod "recycler-for-nfs_default(0d84ea33-347e-11e6-a79b-3c970e965218)": timeout expired waiting for volumes to attach/mount for pod "recycler-for-nfs"/"default". list of unattached/unmounted volumes=[vol]
  33m           33m             1       {persistentvolume-controller }                  Warning         RecyclerPod     Recycler pod: Unable to mount volumes for pod "recycler-for-nfs_default(f5fb63bf-347e-11e6-a79b-3c970e965218)": timeout expired waiting for volumes to attach/mount for pod "recycler-for-nfs"/"default". list of unattached/unmounted volumes=[vol]
  27m           27m             1       {persistentvolume-controller }                  Warning         RecyclerPod     Recycler pod: Unable to mount volumes for pod "recycler-for-nfs_default(de7128fd-347f-11e6-a79b-3c970e965218)": timeout expired waiting for volumes to attach/mount for pod "recycler-for-nfs"/"default". list of unattached/unmounted volumes=[vol]
  1h            3m              75      {persistentvolume-controller }                  Normal          RecyclerPod     Recycler pod: Successfully assigned recycler-for-nfs to 127.0.0.1
  1h            3m              76      {persistentvolume-controller }                  Normal          RecyclerPod     Recycler pod: Pod was active on the node longer than specified deadline
  1h            1m              12      {persistentvolume-controller }                  Warning         RecyclerPod     Recycler pod: Error syncing pod, skipping: timeout expired waiting for volumes to attach/mount for pod "recycler-for-nfs"/"default". list of unattached/unmounted volumes=[vol]
  20m           1m              4       {persistentvolume-controller }                  Warning         RecyclerPod     (events with common reason combined)
```

These steps were necessary:

- added event watcher to volume.RecycleVolumeByWatchingPodUntilCompletion
- pass all these events through volume plugins to volume controller
- rework volume.RecycleVolumeByWatchingPodUntilCompletion unit tests to a table (too much copy-paste)
- fix all unit tests along the way
2016-09-22 12:18:53 -07:00
Jan Safranek
1adf856735 Use secrets for glusterfs provisioning passwords
- no plain password in StorageClass!
- fix the style along the way
- use PV annotations to pass the configuration from provisioners to deleters
2016-09-20 16:24:30 +02:00
Kubernetes Submit Queue
aa0e8b9cc1 Merge pull request #31434 from johscheuer/quobyte-dynamic-prov
Automatic merge from submit-queue

Support Quobyte as StorageClass

This PR allows users to use Quobyte as a StorageClass for dynamic volume provisioning and implements the Provisioner/Deleter interface.

@quolix @kubernetes/sig-storage @rootfs
2016-09-19 02:39:41 -07:00
Johannes Scheuermann
02db13b620 Update quobyteApiServer to quobyteAPIServer 2016-09-17 10:08:52 +02:00
Abrar Shivani
57180093af Support for storage class for vSphere volume plugin. Custom disk format for dynamic provisioning. 2016-09-16 17:15:38 -07:00
Kubernetes Submit Queue
791116476f Merge pull request #32348 from asalkeld/metrics-nil-spammy
Automatic merge from submit-queue

Disambiguate unsupported metrics from metrics errors

**What this PR does / why we need it**:
Stop logging "metrics are not supported for MetricsNil Volumes" as it spams the log.

**Which issue this PR fixes** 
fixes #20676, fixes #27373

**Special notes for your reviewer**:
None

**Release note**:
```release-note
Don't log "metrics are not supported for MetricsNil Volumes"
```
2016-09-16 11:27:15 -07:00
Johannes Scheuermann
0b7cb5f2ae Inital Quobyte dynamic provision 2016-09-16 13:26:18 +02:00
Kubernetes Submit Queue
9a3429829c Merge pull request #32662 from humblec/glusterfs-default-volume
Automatic merge from submit-queue

Change the default volume type of GlusterFS provisioner.

At present the provisioner creates a 'Distribute' volume, and this patch changes the default
volume type to a 'Distribute Replica (3)' volume.
2016-09-15 18:07:14 -07:00
Humble Chirammal
b4fd7e5591 Change the default volume type of GlusterFS provisioner.
At present, the provisioner creates a Distribute volume; this patch
changes the default volume type to a Distribute-Replica(3) volume.

Signed-off-by: Humble Chirammal <hchiramm@redhat.com>
2016-09-15 19:00:21 +05:30
Jan Safranek
9903b389b3 Update cloud providers 2016-09-15 10:33:57 +02:00
Jan Safranek
a24e6a90bd Add new error 2016-09-15 09:39:30 +02:00
Angus Salkeld
a1b2fcb10f Disambiguate unsupported metrics from metrics errors 2016-09-15 10:05:30 +10:00
Kubernetes Submit Queue
6a9a93d469 Merge pull request #32242 from jingxu97/bug-wrongvolume-9-2
Automatic merge from submit-queue

Fix race condition in updating attached volume between master and node

This PR tries to fix issue #29324. The cause of this issue is a race
condition that happens when marking volumes as attached for the node status.
This PR tries to clean up the logic of when and where to mark volumes as
attached/detached. Basically the workflow is as follows:
1. When a volume is attached successfully, the volume and node info is
added into nodesToUpdateStatusFor to mark the volume as attached to the
node.
2. When a detach request comes in, it will check whether it is safe to
detach now. If the check passes, remove the volume from volumesToReportAsAttached
to indicate the volume is no longer considered attached.
Afterwards, the reconciler tries to update the node status and trigger the
detach operation. If any of these operations fails, the volume is added back
to the volumesToReportAsAttached list, showing that it is still attached.

These steps should make sure that kubelet gets the right (possibly
outdated) information about which volumes are attached. It also
guarantees that if a detach operation is pending, kubelet will not
trigger any mount operations.
2016-09-12 15:29:38 -07:00
Jing Xu
efaceb28cc Fix race condition in updating attached volume between master and node
This PR tries to fix issue #29324. The cause of this issue is a race
condition that happens when marking volumes as attached for the node status.
This PR tries to clean up the logic of when and where to mark volumes as
attached/detached. Basically the workflow is as follows:
1. When a volume is attached successfully, the volume and node info is
added into nodesToUpdateStatusFor to mark the volume as attached to the
node.
2. When a detach request comes in, it will check whether it is safe to
detach now. If the check passes, remove the volume from volumesToReportAsAttached
to indicate the volume is no longer considered attached.
Afterwards, the reconciler tries to update the node status and trigger the
detach operation. If any of these operations fails, the volume is added back
to the volumesToReportAsAttached list, showing that it is still attached.

These steps should make sure that kubelet gets the right (possibly
outdated) information about which volumes are attached. It also
guarantees that if a detach operation is pending, kubelet will not
trigger any mount operations.
2016-09-12 13:51:08 -07:00
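A compact sketch of the bookkeeping flow described above; the map name follows the commit text, but the surrounding structure is an assumption for illustration.

```go
// Sketch: a volume is removed from the "report as attached" set before a
// detach starts and added back if the detach (or node status update) fails.
package main

import "fmt"

type attachState struct {
	// node -> set of volumes to report as attached in the node status
	volumesToReportAsAttached map[string]map[string]bool
}

func (s *attachState) markAttached(node, volume string) {
	if s.volumesToReportAsAttached[node] == nil {
		s.volumesToReportAsAttached[node] = map[string]bool{}
	}
	s.volumesToReportAsAttached[node][volume] = true
}

// beginDetach removes the volume from the reported set; the caller invokes
// undo() if the subsequent node-status update or detach operation fails.
func (s *attachState) beginDetach(node, volume string) (undo func()) {
	delete(s.volumesToReportAsAttached[node], volume)
	return func() { s.markAttached(node, volume) }
}

func main() {
	s := &attachState{volumesToReportAsAttached: map[string]map[string]bool{}}
	s.markAttached("node-1", "vol-123456")
	undo := s.beginDetach("node-1", "vol-123456")
	if detachFailed := true; detachFailed { // pretend the detach failed
		undo() // volume is reported as attached again
	}
	fmt.Println(s.volumesToReportAsAttached["node-1"]["vol-123456"]) // true
}
```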
Kubernetes Submit Queue
34141a794d Merge pull request #31251 from rootfs/rbd-prov3
Automatic merge from submit-queue

support storage class in Ceph RBD volume

replace WIP PR #30959, using PV annotation idea from @jsafrane 

@kubernetes/sig-storage @johscheuer @elsonrodriguez
2016-09-10 07:03:14 -07:00
Jan Safranek
d7111b282f Send recycle events from pod to pv.
This allows users to diagnose what's wrong with recycler. Recycler pods are
started automatically with a cryptic name and they are deleted immediately
when they finish.

kubectl describe pods will show:

  FirstSeen     LastSeen        Count   From                            SubobjectPath   Type            Reason          Message
  ---------     --------        -----   ----                            -------------   --------        ------          -------
  59m           59m             1       {persistentvolume-controller }                  Warning         RecyclerPod     Recycler pod: Unable to mount volumes for pod "recycler-for-nfs_default(5421800e-347b-11e6-a79b-3c970e965218)": timeout expired waiting for volumes to attach/mount for pod "recycler-for-nfs"/"default". list of unattached/unmounted volumes=[vol]
  53m           53m             1       {persistentvolume-controller }                  Warning         RecyclerPod     Recycler pod: Unable to mount volumes for pod "recycler-for-nfs_default(3c9809e5-347c-11e6-a79b-3c970e965218)": timeout expired waiting for volumes to attach/mount for pod "recycler-for-nfs"/"default". list of unattached/unmounted volumes=[vol]
  46m           46m             1       {persistentvolume-controller }                  Warning         RecyclerPod     Recycler pod: Unable to mount volumes for pod "recycler-for-nfs_default(250dd2a2-347d-11e6-a79b-3c970e965218)": timeout expired waiting for volumes to attach/mount for pod "recycler-for-nfs"/"default". list of unattached/unmounted volumes=[vol]
  40m           40m             1       {persistentvolume-controller }                  Warning         RecyclerPod     Recycler pod: Unable to mount volumes for pod "recycler-for-nfs_default(0d84ea33-347e-11e6-a79b-3c970e965218)": timeout expired waiting for volumes to attach/mount for pod "recycler-for-nfs"/"default". list of unattached/unmounted volumes=[vol]
  33m           33m             1       {persistentvolume-controller }                  Warning         RecyclerPod     Recycler pod: Unable to mount volumes for pod "recycler-for-nfs_default(f5fb63bf-347e-11e6-a79b-3c970e965218)": timeout expired waiting for volumes to attach/mount for pod "recycler-for-nfs"/"default". list of unattached/unmounted volumes=[vol]
  27m           27m             1       {persistentvolume-controller }                  Warning         RecyclerPod     Recycler pod: Unable to mount volumes for pod "recycler-for-nfs_default(de7128fd-347f-11e6-a79b-3c970e965218)": timeout expired waiting for volumes to attach/mount for pod "recycler-for-nfs"/"default". list of unattached/unmounted volumes=[vol]
  1h            3m              75      {persistentvolume-controller }                  Normal          RecyclerPod     Recycler pod: Successfully assigned recycler-for-nfs to 127.0.0.1
  1h            3m              76      {persistentvolume-controller }                  Normal          RecyclerPod     Recycler pod: Pod was active on the node longer than specified deadline
  1h            1m              12      {persistentvolume-controller }                  Warning         RecyclerPod     Recycler pod: Error syncing pod, skipping: timeout expired waiting for volumes to attach/mount for pod "recycler-for-nfs"/"default". list of unattached/unmounted volumes=[vol]
  20m           1m              4       {persistentvolume-controller }                  Warning         RecyclerPod     (events with common reason combined)


These steps were necessary:

- added event watcher to volume.RecycleVolumeByWatchingPodUntilCompletion

- pass all these events through volume plugins to volume controller

- rework volume.RecycleVolumeByWatchingPodUntilCompletion unit tests to a table
  (too much copy-paste)

- fix all unit tests along the way
2016-09-08 12:57:57 +02:00
Kubernetes Submit Queue
54b47dcf0b Merge pull request #31303 from thockin/volume-owners
Automatic merge from submit-queue

Make @rootfs the assignee for various volumes

This, combined with the '/lgtm' capability of reviewers means you can approve
PRs. @rootfs - I assume you're OK with this?
2016-09-05 14:53:32 -07:00
Kubernetes Submit Queue
aad5c66792 Merge pull request #31837 from jingxu97/recorder
Automatic merge from submit-queue

Post event message for volume attachment

This PR adds an event message when attaching a volume fails, to help
users debug. Detach failures may be addressed in a different PR since
they require more data structure changes.
2016-09-01 23:30:57 -07:00
Jing Xu
b9157b7524 Post event message for volume attachment
This PR adds an event message when attaching a volume fails, to help
users debug. Detach failures may be addressed in a different PR since
they require more data structure changes.
2016-09-01 16:24:36 -07:00
Huamin Chen
0c3b2f44a4 review feedbacks
Signed-off-by: Huamin Chen <hchen@redhat.com>
2016-08-25 15:32:26 -04:00
Tim Hockin
d0a840798e Make rootfs the assignee for various volumes
This, combined with the '/lgtm' capability of reviewers means he can approve
PRs.
2016-08-23 14:40:05 -07:00
Huamin Chen
5445ccf4cb support storage class in Ceph RBD volume
Signed-off-by: Huamin Chen <hchen@redhat.com>
2016-08-23 11:05:51 -04:00
Huamin Chen
dea4b0226d support Azure data disk volume
Signed-off-by: Huamin Chen <hchen@redhat.com>
2016-08-23 13:23:07 +00:00
Kubernetes Submit Queue
c5d56ea356 Merge pull request #30535 from abrarshivani/vsphere_attach_detach_interface
Automatic merge from submit-queue

Implements Attacher Plugin Interface for vSphere

This PR does the following,

Fixes #29028 (vsphere volume should implement attacher interface):  Implements Attacher Plugin Interface for vSphere. 
See file: 
pkg/volume/vsphere_volume/vsphere_volume.go. - Removed attach and detach calls from SetupAt and TearDownAt.
pkg/volume/vsphere_volume/attacher.go. - Implements Attacher & Detacher Plugin Interface for vSphere. (Ref :- GCE_PD & AWS attacher.go)
pkg/cloudproviders/provider/vsphere.go - Added DiskIsAttach method.

The vSphere plugin code needs clean up. (ex: The code for getting vSphere instance is repeated in file pkg/cloudprovider/providers/vsphere.go). I will fix this in next PR.
2016-08-23 05:13:12 -07:00
Huamin Chen
259bce370e support storage class in Cinder provisioner
Signed-off-by: Huamin Chen <hchen@redhat.com>
2016-08-22 09:28:29 -04:00
Kubernetes Submit Queue
cfe7a4391a Merge pull request #31060 from rata/secret-configmap-file-mode
Automatic merge from submit-queue

Fix coding style

cc @pmorie

**What this PR does / why we need it**: Fixes the case of a variable name; it's simple and adjusts the code to the coding style.

**Release note**:
<!--  Steps to write your release note:
1. Use the release-note-* labels to set the release note state (if you have access) 
2. Enter your extended release note in the below block; leaving it blank means using the PR title as the release note. If no release note is required, just write `NONE`. 
-->
```NONE
```
2016-08-22 06:19:47 -07:00
Kubernetes Submit Queue
a316e6def2 Merge pull request #30880 from markturansky/add_encryption
Automatic merge from submit-queue

Add encryption to EBS dynamic provisioner

Resolves https://github.com/kubernetes/kubernetes/issues/30792

Adds encryption to the EBS cloud provider and provisioner.

Follow up to #29006 (all commits but the one in this PR will drop out).

@kubernetes/sig-storage 


```release-note
```
2016-08-21 21:29:55 -07:00
Kubernetes Submit Queue
ad6eed40ec Merge pull request #30888 from humblec/mypr/29006
Automatic merge from submit-queue

GlusterFS dynamic provisioner and deleter interface based on storageclass claims

This PR depends on PR#29006
2016-08-21 01:50:16 -07:00
Clayton Coleman
e1ebde9f92
Add spec.nodeName and spec.serviceAccountName to downward env var
The serviceAccountName is occasionally useful for clients running on
Kube that need to know who they are when talking to other components.

The nodeName is useful for PetSet or DaemonSet pods that need to make
calls back to the API to fetch info about their node.

Both fields are immutable, and cannot easily be retrieved in another
way.
2016-08-20 15:50:36 -04:00
Rodrigo Campos
3366821d9a Fix coding style 2016-08-20 14:58:56 -03:00
Kubernetes Submit Queue
d0cca393d7 Merge pull request #31034 from jingxu97/unmount-8-19
Automatic merge from submit-queue

Add ismounted check in unmountpath function

This change fixes PR #30930. The function should check whether the
mount path is still mounted or not. If it is not, it should continue with
removing the directory instead of returning an error.
2016-08-19 22:18:28 -07:00
Jing Xu
cafd126ecd Add ismounted check in unmountpath function
This change fixes PR #30930. The function should check whether the
mount path is still mounted or not. If it is not, it should continue with
removing the directory instead of returning an error.
2016-08-19 17:15:30 -07:00
Kubernetes Submit Queue
529edae1f6 Merge pull request #31006 from simonswine/flocker-owner
Automatic merge from submit-queue

Adds myself to the flocker volume plugin owners

I am happy to look after the flocker volume plugin and support @agonzalezro. Currently refactoring the volume plugin and adding dynamic provisioning features in #31005
2016-08-19 15:49:48 -07:00
Humble Chirammal
836ac6e403 GlusterFS dynamic provisioner and deleter interface based on StorageClass claims
Signed-off-by: Humble Chirammal <hchiramm@redhat.com>
2016-08-19 23:03:32 +05:30
Christian Simon
517b2f400c Adds myself to flocker volume plugin owners 2016-08-19 17:01:12 +01:00
Kubernetes Submit Queue
6ce405c6ee Merge pull request #27778 from screeley44/k8-vol-executor
Automatic merge from submit-queue

Add Events for operation_executor to show status of mounts, failed/successful to show in describe events

Fixes #27590 
@saad-ali @pmorie @erinboyd

After talking with @pmorie last week about the above issue, I decided to poke around and see if I could remedy it.  The refactoring broke my previous UXP merged PRs that correctly showed failed mount errors in the describe events.  I'm not sure I implemented this correctly, but it tested out and seems to be working; let me know what I missed or if this is not the correct approach.

```
Events:
  FirstSeen	LastSeen	Count	From			SubobjectPath	Type		Reason		Message
  ---------	--------	-----	----			-------------	--------	------		-------
  2m		2m		1	{default-scheduler }			Normal		Scheduled	Successfully assigned nfs-bb-pod1 to 127.0.0.1
  44s		44s		1	{kubelet 127.0.0.1}			Warning		FailedMount	Unable to mount volumes for pod "nfs-bb-pod1_default(a94f64f1-37c9-11e6-9aa5-52540073d346)": timeout expired waiting for volumes to attach/mount for pod "nfs-bb-pod1"/"default". list of unattached/unmounted volumes=[nfsvol]
  44s		44s		1	{kubelet 127.0.0.1}			Warning		FailedSync	Error syncing pod, skipping: timeout expired waiting for volumes to attach/mount for pod "nfs-bb-pod1"/"default". list of unattached/unmounted volumes=[nfsvol]
  38s		38s		1	{kubelet }				Warning		FailedMount	Unable to mount volumes for pod "a94f64f1-37c9-11e6-9aa5-52540073d346": Mount failed: exit status 32
Mounting arguments: nfs1.rhs:/opt/data99 /var/lib/kubelet/pods/a94f64f1-37c9-11e6-9aa5-52540073d346/volumes/kubernetes.io~nfs/nfsvol nfs []
Output: mount.nfs: Connection timed out

Resolution hint: Check and make sure the NFS Server exists (ensure that correct IPAddress/Hostname was given) and is available/reachable.
Also make sure firewall ports are open on both client and NFS Server (2049 v4 and 2049, 20048 and 111 for v3).
Use commands telnet <nfs server> <port> and showmount <nfs server> to help test connectivity.
```
2016-08-19 08:27:48 -07:00
Abrar Shivani
e89ad04422 Implements Attacher Plugin Interface for vSphere 2016-08-19 00:28:55 -07:00
markturansky
9a2645aa5e add encryption to aws provisioner and cloud provider 2016-08-18 15:42:44 -04:00
Kubernetes Submit Queue
dbc9063c17 Merge pull request #24977 from johscheuer/quobyte-plugin
Automatic merge from submit-queue

Quobyte Volume plugin

@quofelix and I developed a volume plugin for [Quobyte](http://www.quobyte.com), which is a software-defined storage solution. This PR allows Kubernetes users to mount a Quobyte volume inside their containers over Kubernetes.

Here is some further information about [Quobyte and Storage for containers](http://www.quobyte.com/containers)
2016-08-18 11:46:37 -07:00
Kubernetes Submit Queue
9d2a5fe5e8 Merge pull request #29006 from jsafrane/dynprov2
Automatic merge from submit-queue

Implement dynamic provisioning (beta) of PersistentVolumes via StorageClass

Implemented according to PR #26908. There are several patches in this PR with one huge code regen inside.

* Please review the API changes (the first patch) carefully, sometimes I don't know what the code is doing...

* `PV.Spec.Class` and `PVC.Spec.Class` are not implemented, use annotation `volume.alpha.kubernetes.io/storage-class`

* See e2e test and integration test changes - Kubernetes won't provision a thing without explicit configuration of at least one `StorageClass` instance!

* Multiple provisioning volume plugins can coexist together, e.g. HostPath and AWS EBS. This is important for Gluster and RBD provisioners in #25026

* Contradicting the proposal, `claim.Selector` and `volume.alpha.kubernetes.io/storage-class` annotation are **not** mutually exclusive. They're both used for matching existing PVs. However, only `volume.alpha.kubernetes.io/storage-class` is used for provisioning, configuration of provisioning with `Selector` is left for (near) future.

* Documentation is missing. Can please someone write some while I am out?

For now, AWS volume plugin accepts classes with these parameters:

```
kind: StorageClass
metadata:
  name: slow
provisionerType: kubernetes.io/aws-ebs
provisionerParameters:
  type: io1
  zone: us-east-1d
  iopsPerGB: 10
```

* parameters are case-insensitive
* `type`: `io1`, `gp2`, `sc1`, `st1`. See AWS docs for details
* `iopsPerGB`: only for `io1` volumes. I/O operations per second per GiB. AWS volume plugin multiplies this with size of requested volume to compute IOPS of the volume and caps it at 20 000 IOPS (maximum supported by AWS, see AWS docs).
* of course, the plugin will use some defaults when a parameter is omitted in a `StorageClass` instance (`gp2` in the same zone as in 1.3).

GCE:

```
apiVersion: extensions/v1beta1
kind: StorageClass
metadata:
  name: slow
provisionerType: kubernetes.io/gce-pd
provisionerParameters:
  type: pd-standard
  zone: us-central1-a
```

* `type`: `pd-standard` or `pd-ssd`
* `zone`: GCE zone
* of course, the plugin will use some defaults when a parameter is omitted in a `StorageClass` instance (SSD in the same zone as in 1.3 ?).


No OpenStack/Cinder yet

@kubernetes/sig-storage
2016-08-18 09:56:16 -07:00
Johannes Scheuermann
eed42380f9 Initial Quobyte support 2016-08-18 17:13:50 +02:00
Kubernetes Submit Queue
6824f4c08a Merge pull request #28936 from rata/secret-configmap-file-mode
Automatic merge from submit-queue

Allow setting permission mode bits on secrets, configmaps and downwardAPI files

cc @thockin @pmorie 

Here is the first round to implement: https://github.com/kubernetes/kubernetes/pull/28733.

I made two commits: one with the actual change and the other with the auto-generated code. I think it's easier to review this way, but let me know if you prefer in some other way.

I haven't written any tests yet; I wanted to get a first look and not write them until this (and the API) are closer to "LGTM" :)

There are some things:
 * I'm not sure where to do the "AND 0777". I'll look more closely at the code base, but suggestions are always welcome :)
 * The write permission for group and others is not set when you do an `ls -l` in the running container. It does work for the owner's write permission. Debugging seems to show that something is happening after the mode is correctly set on creation. Will look closer.
 * The default permissions (when the new fields are not specified) are the same as on Kubernetes v1.3
 * I do realize there are conflicts with master, but I think this is good enough to have a look. The conflicts are with the auto-generated code, so the actual code is the same (and it takes ~30 minutes to regenerate it here)
 * I didn't generate the docs (`generated-docs` and `generated-swagger-docs` from `hack/update-all.sh`) because my machine runs out of memory, so they aren't in this first PR; I'll investigate why that happens.

Other than that, this works fine here with some simple scripts I wrote to create a secret, configmap and downwardAPI volume plus a pod, and then check the file permissions. Tested "defaultMode" and "mode" for all of them. Of course, I will write proper tests once this is looking fine :)


Thanks a lot again!
Rodrigo
2016-08-18 05:59:48 -07:00
Kubernetes Submit Queue
9696a27aa0 Merge pull request #30737 from saad-ali/fix29358Round2
Automatic merge from submit-queue

Skip safe to detach check if node API object no longer exists

Fixes #29358
2016-08-18 04:00:05 -07:00
Jan Safranek
d94220810e GCE changes for the new provisioning model 2016-08-18 10:36:50 +02:00
Jan Safranek
4b97db202c AWS changes for new provisioning model 2016-08-18 10:36:49 +02:00
Jan Safranek
6e4d95f646 Dynamic provisioning V2 controller, provisioners, docs and tests. 2016-08-18 10:36:49 +02:00
Rodrigo Campos
5637569f74 Check return value from volume.SetVolumeOwnership() in downwardAPI
The function can fail, so we must check the return code.
2016-08-17 14:44:42 -04:00
Rodrigo Campos
568f4c2e63 Add mode permission bits to configmap, secrets and downwardAPI
This implements the proposal in:
docs/proposals/secret-configmap-downwarapi-file-mode.md

Fixes: #28317.

The mounttest image is updated so it returns the permissions of the linked file
and not the symlink itself.
2016-08-17 14:44:41 -04:00
Kubernetes Submit Queue
f3f818a190 Merge pull request #29639 from aveshagarwal/master-default-resources-limits-fix
Automatic merge from submit-queue

Fix default resource limits (node allocatable) for downward api volumes and env vars

@kubernetes/rh-cluster-infra  @pmorie @derekwaynecarr
2016-08-17 11:37:41 -07:00
Scott Creeley
782d7d9815 Add Events for operation_executor to show status of mounts, failed or successful 2016-08-17 09:53:47 -04:00
saadali
0c72568247 Skip safe to detach if node api obj doesn't exist 2016-08-16 21:30:51 -07:00
Avesh Agarwal
52a60fe3be Fix default resource limits (node capacities) for downward api volumes 2016-08-16 14:41:17 -04:00
saadali
e73c516366 Prevent device unmount from deleting dir on err
Prevent device unmount from deleting dir unless volume is successfully
unmounted first.
2016-08-15 16:58:31 -07:00
Kubernetes Submit Queue
79ed7064ca Merge pull request #27970 from jingxu97/restartKubelet-6-22
Automatic merge from submit-queue

Add volume reconstruct/cleanup logic in kubelet volume manager

Currently kubelet volume management works on the concept of a desired
and an actual world of states. The volume manager periodically compares the
two worlds and performs volume mount/unmount and/or attach/detach
operations. When kubelet restarts, the cache of those two worlds is
gone. Although the desired world can be recovered through the apiserver, the
actual world cannot, which means some volumes may never be cleaned
up if their information is deleted by the apiserver. This change adds
reconstruction of the actual world by reading the pod directories from
disk. The reconstructed volume information is added to both the desired
world and the actual world if it cannot be found in either. The rest of the
logic is the same as before: the desired world populator may clean up
the volume entry if it is no longer in the apiserver, and the volume
manager then invokes unmount to clean it up.

Fixes https://github.com/kubernetes/kubernetes/issues/27653
2016-08-15 13:48:43 -07:00
Jing Xu
f19a1148db This change supports robust kubelet volume cleanup
Currently kubelet volume management works on the concept of a desired
and an actual world of states. The volume manager periodically compares the
two worlds and performs volume mount/unmount and/or attach/detach
operations. When kubelet restarts, the cache of those two worlds is
gone. Although the desired world can be recovered through the apiserver, the
actual world cannot, which means some volumes may never be cleaned
up if their information is deleted by the apiserver. This change adds
reconstruction of the actual world by reading the pod directories from
disk. The reconstructed volume information is added to both the desired
world and the actual world if it cannot be found in either. The rest of the
logic is the same as before: the desired world populator may clean up
the volume entry if it is no longer in the apiserver, and the volume
manager then invokes unmount to clean it up.
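
A rough, self-contained sketch of the reconstruction idea. The directory layout follows the conventional /var/lib/kubelet/pods/<pod-uid>/volumes/<plugin>/<volume-name> path; names and types are illustrative, not the kubelet's actual code:

```go
package main

import (
	"fmt"
	"os"
	"path/filepath"
)

// reconstructedVolume holds what can be recovered from disk alone.
type reconstructedVolume struct {
	podUID     string
	pluginName string
	volumeName string
}

// reconstructVolumes walks <root>/pods/<pod-uid>/volumes/<plugin>/<volume-name>
// and returns one entry per volume directory found.
func reconstructVolumes(root string) ([]reconstructedVolume, error) {
	var found []reconstructedVolume
	pods, err := os.ReadDir(filepath.Join(root, "pods"))
	if err != nil {
		return nil, err
	}
	for _, pod := range pods {
		pluginsDir := filepath.Join(root, "pods", pod.Name(), "volumes")
		plugins, err := os.ReadDir(pluginsDir)
		if err != nil {
			continue // pod has no volumes directory
		}
		for _, plugin := range plugins {
			vols, err := os.ReadDir(filepath.Join(pluginsDir, plugin.Name()))
			if err != nil {
				continue
			}
			for _, vol := range vols {
				found = append(found, reconstructedVolume{
					podUID:     pod.Name(),
					pluginName: plugin.Name(),
					volumeName: vol.Name(),
				})
			}
		}
	}
	return found, nil
}

func main() {
	vols, err := reconstructVolumes("/var/lib/kubelet")
	if err != nil {
		fmt.Println("error:", err)
		return
	}
	for _, v := range vols {
		fmt.Printf("pod %s: plugin %s, volume %s\n", v.podUID, v.pluginName, v.volumeName)
	}
}
```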
2016-08-15 11:29:15 -07:00
Jess Frazelle
7e9d82129e
fix go vet errors
Signed-off-by: Jess Frazelle <jessfraz@google.com>

fix composites

Signed-off-by: Jess Frazelle <me@jessfraz.com>
2016-08-10 16:45:41 -07:00
Kubernetes Submit Queue
94905bd7c0 Merge pull request #29619 from dims/fix-issue-23163
Automatic merge from submit-queue

Verify volume.GetPath() never returns ""

Add a new helper method volume.GetPath(Mounter) instead of calling
the GetPath() of the Mounter directly. Check if GetPath() is returning
a "" and convert that into an error.

Fixes #23163
2016-08-06 01:44:15 -07:00
Kubernetes Submit Queue
2537f66f0e Merge pull request #29230 from luxas/goimport
Automatic merge from submit-queue

Run goimport for the whole repo

While removing GOMAXPROC and running goimports, I noticed quite a lot of other files also needed goimports formatting. Didn't commit `*.generated.go`, `*.deepcopy.go` or files in `vendor`

This is more for testing if it builds.
The only strange thing here is the gopkg.in/gcfg.v1 => github.com/scalingdata/gcfg replace.
cc @jfrazelle @thockin
2016-08-05 16:22:01 -07:00
Davanum Srinivas
e0edfebe82 Verify volume.GetPath() never returns ""
Add a new helper method volume.GetPath(Mounter) instead of calling
the GetPath() of the Mounter directly. Check if GetPath() is returning
a "" and convert that into an error. At this point, we only have
information about the type of the Mounter, so let's log that if
there is a problem

Fixes #23163
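
A minimal sketch of the helper described above, using a trimmed-down stand-in for the Mounter interface (names and the error wording are illustrative):

```go
package main

import (
	"errors"
	"fmt"
)

// Mounter is a trimmed-down stand-in for the volume Mounter interface; only
// GetPath is needed for this sketch.
type Mounter interface {
	GetPath() string
}

// GetPath wraps Mounter.GetPath so that an empty path is reported as an error
// instead of silently propagating "".
func GetPath(m Mounter) (string, error) {
	if m == nil {
		return "", errors.New("nil mounter passed to GetPath")
	}
	path := m.GetPath()
	if path == "" {
		return "", fmt.Errorf("mounter %T returned an empty path", m)
	}
	return path, nil
}

type fakeMounter struct{ path string }

func (f *fakeMounter) GetPath() string { return f.path }

func main() {
	if _, err := GetPath(&fakeMounter{path: ""}); err != nil {
		fmt.Println("error:", err) // an empty path surfaces as an error
	}
	p, _ := GetPath(&fakeMounter{path: "/var/lib/kubelet/pods/uid/volumes/foo"})
	fmt.Println("path:", p)
}
```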
2016-08-05 08:45:33 -04:00
Abrar Shivani
87e7535e94 - Updated vmware/govmomi godep (Needs for vsan support)
- Fix unmount for vsanDatastore
- Add support for vsan datastore
2016-08-03 16:37:56 -07:00
Lucas Käldström
c88a07ce1a Run goimports 2016-08-02 15:12:39 +03:00
k8s-merge-robot
01cd7f326e Merge pull request #29621 from resouer/uuid
Automatic merge from submit-queue

Refactor uuid into its own pkg util/uuid

Continuing my work ref #15634

Anyone can review this if he/she wants.
2016-08-01 22:21:30 -07:00
Michal Rostecki
59ca5986dd Print/log pointers of structs with %#v instead of %+v
There are many places in k8s where %+v is used to format a pointer
to a struct, which doesn't work as expected.

Fixes #26591
2016-08-01 22:27:56 +02:00
Harry Zhang
c495397cae Refactor uuid into its own pkg 2016-07-30 00:07:02 -04:00
k8s-merge-robot
5760acf603 Merge pull request #29596 from matttproud/fix/time-leaks/remainder
Automatic merge from submit-queue

pkg/various: plug leaky time.New{Timer,Ticker}s

According to the documentation for the Go time package, `time.Ticker` and
`time.Timer` are not reclaimed by the garbage collector; they
leak until explicitly stopped.  This commit ensures that all remaining
instances are stopped upon departure from their respective scopes.

Similar efforts were incrementally done in #29439 and #29114.

```release-note
* pkg/various: plugged various time.Ticker and time.Timer leaks.
```
2016-07-29 14:06:47 -07:00
k8s-merge-robot
15c0c2c901 Merge pull request #29532 from anish/iscsi_iface
Automatic merge from submit-queue

Check iscsi iface file for transport name

When checking for tcp vs hardware transports, check actual iscsi iface file to see if we are using tcp as a transport, rather than relying on just the transport name of 'default'.

This fixes the open-iscsi software iscsi initiator for non-default interfaces.
fixes #27131
2016-07-28 19:42:09 -07:00
k8s-merge-robot
62e7c57acc Merge pull request #29598 from matttproud/refactor/simplify/goroutinemap
Automatic merge from submit-queue

pkg/util/goroutinemap: apply idiomatic Go cleanups

Package goroutinemap can be structurally simplified to be more
idiomatic, concise, and free of error potential.  No structural changes
are made.

It is unconventional to declare a `sync.Mutex` as a pointer field
in a parent structure.  The `sync.Mutex` methods operate on a pointer
receiver; and by relying on that, the types that contain
a plain `sync.Mutex` field can be safely constructed using the zero-value
semantics (https://golang.org/ref/spec#The_zero_value).

The duration constants are already of type `time.Duration`, so
re-declaring that is redundant.

/CC: @saad-ali
2016-07-28 04:44:26 -07:00
k8s-merge-robot
1ae9b73cd3 Merge pull request #29673 from pmorie/mount-collision
Automatic merge from submit-queue

Fix mount collision timeout issue

Short- or medium-term workaround for #29555.  The root issue being fixed here is that the recent attach/detach work in the kubelet uses a unique volume name as a key that tracks the work that has to be done for each volume in a pod to attach/mount/umount/detach.  However, the non-attachable volume plugins do not report unique names for themselves, which causes collisions when a single secret or configmap is mounted multiple times in a pod.

This is still a WIP -- I need to add a couple E2E tests that ensure that tests break in the future if there is a regression -- but posting for early review.

cc @kubernetes/sig-storage 

Ultimately, I would like to refine this a bit further.  A couple things I would like to change:

1.  `GetUniqueVolumeName` should be a property ONLY of attachable volumes
2.  I would like to see the kubelet apparatus for attach/mount/umount/detach handle non-attachable volumes specifically to avoid things like the `WaitForControllerAttach` call that has to be done for those volume types now
2016-07-27 21:06:47 -07:00
k8s-merge-robot
75c93b4063 Merge pull request #29439 from matttproud/cleanups_volumeflocker
Automatic merge from submit-queue

volume/flocker: plug time.Ticker resource leak

This commit ensures that `flockerMounter.updateDatasetPrimary` does not leak
running `time.Ticker` instances.  Upon termination of the consuming routine, we
stop the tickers.

```release-note
* flockerMounter.updateDatasetPrimary no longer leaks running time.Ticker instances.
  Upon termination of the consuming routine, we stop the tickers.
```
2016-07-27 17:18:34 -07:00
Paul Morie
c884297990 Fix collisions issues / timeouts for mounts
For non-attachable volumes, do not call GetVolumeName on the plugin and instead
generate a unique name based on the identity of the pod and the name of the volume
within the pod.
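
A hedged sketch of that naming scheme (the format is illustrative, not the actual helper's output): for non-attachable volumes the unique name is derived from the pod's identity and the volume's name within the pod, so two mounts of the same secret or configmap in one pod no longer collide.

```go
package volumehelper // illustrative package name

import "fmt"

// uniqueNameForNonAttachableVolume derives a unique volume name from the pod's
// identity and the volume's name within the pod, rather than asking the plugin.
func uniqueNameForNonAttachableVolume(pluginName, podUID, volumeName string) string {
	return fmt.Sprintf("%s/%s-%s", pluginName, podUID, volumeName)
}
```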
2016-07-27 17:53:50 -04:00
Ivan Shvedunov
df1e925143 Fix wrapped volume race
This fixes race conditions in configmap, secret, downwardapi & git_repo
volume plugins.
wrappedVolumeSpec vars used by volume mounters and unmounters contained
pointers to api.Volume structs that were being patched by
NewWrapperMounter/NewWrapperUnmounter, causing a race condition during
volume mounts.
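
A minimal sketch of the pattern behind the fix, with an illustrative spec type (not the plugins' actual wrappedVolumeSpec): instead of sharing one package-level value that each wrapper call then mutates, every caller gets its own freshly constructed copy.

```go
package wrappedspec

// volumeSpec is an illustrative stand-in for the wrapped volume spec.
type volumeSpec struct {
	Name string
}

// Racy pattern: every caller receives the same pointer and may patch it
// concurrently.
var sharedWrappedVolumeSpec = &volumeSpec{Name: "wrapped-volume"}

// Safe pattern: a fresh value per call, so concurrent mounts cannot observe
// each other's modifications.
func wrappedVolumeSpec() volumeSpec {
	return volumeSpec{Name: "wrapped-volume"}
}
```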
2016-07-27 12:24:46 +03:00
Matt T. Proud
4e0a1858f9 pkg/util/goroutinemap: apply idiomatic Go cleanups
Package goroutinemap can be structurally simplified to be more
idiomatic, concise, and free of error potential.  No structural changes
are made.

It is unconventional to declare a `sync.Mutex` as a pointer field
in a parent structure.  The `sync.Mutex` methods operate on a pointer
receiver; and by relying on that, the types that contain
a plain `sync.Mutex` field can be safely constructed using the zero-value
semantics (https://golang.org/ref/spec#The_zero_value).

The duration constants are already of type `time.Duration`, so
re-declaring that is redundant.
2016-07-26 07:00:26 +02:00
Matt T. Proud
5c6292c074 pkg/various: plug leaky time.New{Timer,Ticker}s
According to the documentation for the Go time package, `time.Ticker` and
`time.Timer` are not reclaimed by the garbage collector; they
leak until explicitly stopped.  This commit ensures that all remaining
instances are stopped upon departure from their respective scopes.
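
A minimal illustration of the kind of leak being plugged (an illustrative helper, not the actual code): a running time.Ticker keeps firing until Stop is called, so stop it when the surrounding routine returns.

```go
package tickerleak

import "time"

// pollUntil runs f on every tick until done is closed. The deferred Stop is
// what the commit adds: without it, the ticker keeps running after return.
func pollUntil(done <-chan struct{}, interval time.Duration, f func()) {
	ticker := time.NewTicker(interval)
	defer ticker.Stop()
	for {
		select {
		case <-done:
			return
		case <-ticker.C:
			f()
		}
	}
}
```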
2016-07-26 06:20:31 +02:00
Anish Bhatt
531a961a96 Check iscsi iface file for transport name 2016-07-25 18:15:25 -07:00
k8s-merge-robot
4694a6dd71 Merge pull request #24797 from screeley44/vols_debug_mkfs
Automatic merge from submit-queue

add enhanced volume and mount logging for block devices

Fixes #24568 

Adding better logging and debugging for block device volumes and the shared SafeFormatAndMount (aws, gce, flex, rbd, cinder, etc...)
2016-07-21 17:12:33 -07:00
Scott Creeley
11d1289afa Add volume and mount logging 2016-07-21 09:10:00 -04:00
saadali
88d495026d Allow mounts to run in parallel for non-attachable
Allow mount volume operations to run in parallel for non-attachable
volume plugins.

Allow unmount volume operations to run in parallel for all volume
plugins.
2016-07-19 21:54:26 -07:00
Cindy Wang
e13c678e3b Make volume unmount more robust using exclusive mount w/ O_EXCL 2016-07-18 16:20:08 -07:00
Matt T. Proud
dbba1347c3 volume/flocker: plug time.Ticker resource leak
This commit ensures that `flockerMounter.updateDatasetPrimary` does not leak
running `time.Ticker` instances.  Upon termination of the consuming
routine, we stop the tickers.
2016-07-18 17:38:12 +02:00
k8s-merge-robot
fa174bcdaf Merge pull request #29042 from dims/fixup-imports
Automatic merge from submit-queue

Use Go canonical import paths

Add canonical imports only in existing doc.go files.
https://golang.org/doc/go1.4#canonicalimports

Fixes #29014
2016-07-18 07:23:38 -07:00
k8s-merge-robot
d168bbe3b8 Merge pull request #28767 from johscheuer/fix-volume-typos
Automatic merge from submit-queue

Fix typos in volume.go

Fixed some minor typos in the docs of `volume.go`.
2016-07-18 00:36:00 -07:00
Davanum Srinivas
2b0ed014b7 Use Go canonical import paths
Add canonical imports only in existing doc.go files.
https://golang.org/doc/go1.4#canonicalimports

Fixes #29014
2016-07-16 13:48:21 -04:00
xiangpengzhao
b2ab356ca5 Delete duplicated code. 2016-07-15 03:04:24 -04:00
joe2far
5ead89b5bb Fixed several typos 2016-07-13 15:06:24 +01:00
Johannes Scheuermann
07b81abb6c Fix typos in volume.go 2016-07-11 12:32:32 +02:00
Michael Rubin
8028e953b6 Revert "Mount r/w GCE PD disks with -o discard" 2016-07-07 16:47:35 -07:00
k8s-merge-robot
939b98481e Merge pull request #28448 from thockin/gce-pd-discard
Automatic merge from submit-queue

Mount r/w GCE PD disks with -o discard

As per https://cloud.google.com/compute/docs/disks/add-persistent-disk#formatting.

Fixes #23258
2016-07-07 11:01:43 -07:00
k8s-merge-robot
0c696dc95b Merge pull request #27848 from liubin/fix-typos
Automatic merge from submit-queue

fix some typos

Just a minor typos fix.


Signed-off-by: bin liu <liubin0329@gmail.com>
2016-07-06 23:36:49 -07:00
Angus Salkeld
d7150bfaea Add spec.Name() to the configmap GetVolumeName()
This is to base the name on the volume, not just on the
source configMap. If you have 2 volumes that both have the same
configMap as a source, the volumes are seen as being in the attached
state (their state is looked up based on GetVolumeName()).

See bug #28502
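
A hedged sketch of the resulting name (the format is illustrative, not the plugin's exact output): including the volume's own name alongside the source ConfigMap keeps two volumes backed by the same ConfigMap distinct.

```go
package configmapname

import "fmt"

// configMapVolumeName combines the volume's name (spec.Name()) with the source
// ConfigMap name so identical sources no longer collide.
func configMapVolumeName(specName, configMapName string) string {
	return fmt.Sprintf("kubernetes.io/configmap/%v/%v", specName, configMapName)
}
```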
2016-07-06 16:39:43 +02:00
Tim Hockin
8efefab9a3 Mount r/w GCE PD disks with -o discard
As per
https://cloud.google.com/compute/docs/disks/add-persistent-disk#formatting.
2016-07-03 21:30:18 -07:00
bin liu
426fdc431a Merge branch 'master' into fix-typos 2016-07-04 11:20:47 +08:00
saadali
0dd17fff22 Reorganize volume controllers and manager 2016-07-01 18:50:25 -07:00
Christian Simon
65180ea25a Fix problems with container restarts and flocker
* Removes the meta dir, which prevented detection of the correct mount
  path

* Fixes #22436
2016-06-30 05:49:15 +00:00
David McMahon
ef0c9f0c5b Remove "All rights reserved" from all the headers. 2016-06-29 17:47:36 -07:00
k8s-merge-robot
7f3da674f7 Merge pull request #26680 from olegshaldybin/fake-clientset-registry
Automatic merge from submit-queue

Track object modifications in fake clientset

Fake clientset is used by unit tests extensively but it has some
shortcomings:

- no filtering on namespace and name: tests that want to test objects in
  multiple namespaces end up getting all objects from this clientset,
  as it doesn't perform any filtering based on name and namespace;

- updates and deletes don't modify the clientset state, so some tests
  can get unexpected results if they modify/delete objects using the
  clientset;

- it's possible to insert multiple objects with the same
  kind/name/namespace; this leads to confusing behavior, as retrieval is
  based on the insertion order but anchors on the last added object as
  long as no more objects are added.

This change modifies the core.ObjectRetriever implementation to track object
adds, updates and deletes.

Some unit tests were depending on the previous (and somewhat incorrect)
behavior. These are fixed in the following few commits.
2016-06-29 06:04:33 -07:00
saadali
e06b32b1ef Mark VolumeInUse before checking if it is Attached
Ensure that the kubelet marks VolumeInUse before checking if it is Attached.
Also ensure that the attach/detach controller always fetches a fresh
copy of the node object before detach (instead of the kubelet relying on the node
informer cache).
2016-06-28 14:05:59 -07:00
Oleg Shaldybin
3b15d5be19 Use correct namespace in unit tests that use fake clientset
Fake clientset no longer needs to be prepopulated with records: keeping
them in leads to the name conflict on creates. Also, since fake
clientset now respects namespaces, we need to correctly populate them.
2016-06-28 11:26:34 -07:00
Rudi Chiarito
8db551f674 golint fixes for aws cloudprovider 2016-06-24 17:06:38 -04:00
k8s-merge-robot
3a29aa7941 Merge pull request #27496 from hpcloud/hpe/vsphere-scsidriver
Automatic merge from submit-queue

Adding SCSI controller type filter for vSphere disk attach

Hot plug of disks to a SCSI controller of type lsilogic doesn't work as expected. When a device is detached from the controller, it fails to remove the device from the /dev path, which makes subsequent attaches to the node fail. With SCSI controller types lsilogic-sas or paravirtual this seems to work well. This patch filters the existing controllers for these types and, if it doesn't find one, creates a new controller for disk attach.

This PR is dependent on https://github.com/kubernetes/kubernetes/pull/26658 (1st commit) also targeting this for 1.3
2016-06-23 08:09:43 -07:00
saadali
dfe8e606c1 Fix device path used by volume WaitForAttach 2016-06-22 12:56:58 -07:00
bin liu
fd27cd47f7 fix some typos
Signed-off-by: bin liu <liubin0329@gmail.com>
2016-06-22 18:14:26 +08:00
k8s-merge-robot
07471cf90f Merge pull request #27553 from justinsb/pvc_zone_spreading_2
Automatic merge from submit-queue

AWS/GCE: Spread PetSet volume creation across zones, create GCE volumes in non-master zones

Long term we plan on integrating this into the scheduler, but in the
short term we use the volume name to place it onto a zone.
    
We hash the volume name so we don't bias to the first few zones.
    
If the volume name "looks like" a PetSet volume name (ending with
-<number>) then we use the number as an offset.  In that case we hash
the base name.
2016-06-22 01:22:16 -07:00
k8s-merge-robot
d3a7daf449 Merge pull request #27353 from jsafrane/cinder-attach-test
Automatic merge from submit-queue

Add Cinder volume plugin attach tests.

@kubernetes/sig-storage
2016-06-22 00:15:17 -07:00
Justin Santa Barbara
dd94997619 Add comments & misc review fixes
Lots of comments describing the heuristics, how it fits together and the
limitations.

In particular, we can't guarantee correct volume placement if the set of
zones is changing between allocating volumes.
2016-06-21 15:22:16 -04:00
Jan Safranek
ba63590e04 Add AWS volume plugin attach tests. 2016-06-21 14:27:37 +02:00
Jan Safranek
6356d85db5 Add Cinder volume plugin attach tests. 2016-06-21 13:12:47 +02:00
saadali
e716ddc771 Controller wait for attach and exponential backoff
Modify attach/detach controller to keep track of volumes to report
attached in Node VolumeToAttach status.

Modify kubelet volume manager to wait for volume to show up in Node
VolumeToAttach status.

Implement exponential backoff for errors in volume manager and attach
detach controller
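
A minimal sketch of the exponential backoff idea mentioned above (the durations and step count are illustrative, not the controller's actual values):

```go
package backoff

import "time"

// retryWithExponentialBackoff retries op, doubling the delay between attempts
// up to max, and returns the last error if all attempts fail.
func retryWithExponentialBackoff(op func() error, initial, max time.Duration, steps int) error {
	delay := initial
	var err error
	for i := 0; i < steps; i++ {
		if err = op(); err == nil {
			return nil
		}
		time.Sleep(delay)
		delay *= 2
		if delay > max {
			delay = max
		}
	}
	return err
}
```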
2016-06-20 18:19:55 -07:00
Abitha Palaniappan
4a5ade213c Adding scsi controller type filter while attaching disks
Hot attach of a disk to a SCSI controller will work only if the
controller type is lsilogic-sas or paravirtual. This patch filters
the existing controllers for these types; if it doesn't find one, it
creates a new SCSI controller.
2016-06-20 09:54:55 -07:00
saadali
d72f88bf3a Modify Attach method to return device path 2016-06-19 23:54:02 -07:00
k8s-merge-robot
4fcbc0ada7 Merge pull request #26658 from hpcloud/hpe/vsphere-vol-bugfixes
Automatic merge from submit-queue

Fixing vSphere Volume plugin bugs

This PR fixes #26646 and targeted for 1.3
2016-06-19 21:06:13 -07:00
k8s-merge-robot
7e88b0ef0e Merge pull request #26781 from aveshagarwal/master-dapi-volume-annotations-labels-issue
Automatic merge from submit-queue

Remove an empty line being output when exposing annotations and labels via downward api volume

The issue is that the formatMap function (for annotations and labels) in pkg/fieldpath/fieldpath.go appends a "\n" after each key-value pair, which is correct for all pairs except the last one, because the complete string is then returned with a trailing "\n". This is inconsistent with the other strings being returned (metadata.name, namespace and resources), which don't end with "\n". The returned strings are processed by the sortLines function in pkg/volume/downwardapi/downwardapi.go, which appends "\n" to each line, but it incorrectly outputs an empty line if the input string already ends with "\n". To illustrate:

The sortLines function works as follows: let's say the input string is "a\nb\nc\n".

1. It splits it into "a", "b", "c", "" (note the empty string at the end).
2. It sorts them: "", "a", "b", "c"
3. It then appends "\n" again to each string: "\n", "a\n", "b\n", "c\n"

So we can see that it erroneously creates an empty string at the beginning when the input string to sortLines ends with "\n". As said above, it is not an issue for metadata.name, namespace and resources, as their input strings don't end with "\n".
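
Below is a minimal, runnable sketch of the split/sort/append behavior just described (an approximation of sortLines, not the actual code in pkg/volume/downwardapi/downwardapi.go):

```go
package main

import (
	"fmt"
	"sort"
	"strings"
)

func sortLines(values string) string {
	splitted := strings.Split(values, "\n") // step 1: split
	sort.Strings(splitted)                  // step 2: sort
	var out string
	for _, line := range splitted { // step 3: append "\n" to each line
		out += line + "\n"
	}
	return out
}

func main() {
	fmt.Printf("%q\n", sortLines("a\nb\nc\n")) // "\na\nb\nc\n" - leading blank line
	fmt.Printf("%q\n", sortLines("a\nb\nc"))   // "a\nb\nc\n"  - no blank line
}
```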

So now, the output in the downward API volume (using the example in http://kubernetes.io/docs/user-guide/downward-api/) is:

```
# cat /etc/annotations

 zone="us-est-coast"
 cluster="test-cluster1"
 rack="rack-22"
```

After this patch, the output will be correct, without the erroneous empty line at the beginning.
I could think of other ways to solve this, but the approach in this patch requires minimal code changes.

@kubernetes/rh-cluster-infra
2016-06-18 09:19:21 -07:00
Justin Santa Barbara
9c2566572d GCE Multizone: Allow volumes to be created in non-master zone
We had a long-lasting bug which prevented creation of volumes in
non-master zones, because the cloudprovider in the volume label
admission controller is not initialized with the multizone setting
(issue #27656).

This implements a simple workaround: if the volume is created with the
failure-domain zone label, we look for the volume in that zone.  This is
more efficient, avoids introducing a new semantic, and allows users (and
the dynamic provisioner) to create volumes in non-master zones.

Fixes #27657
2016-06-17 23:27:41 -04:00
Justin Santa Barbara
e711cbf912 GCE/AWS: Spread PetSet volume creation across zones
Long term we plan on integrating this into the scheduler, but in the
short term we use the volume name to place it onto a zone.

We hash the volume name so we don't bias to the first few zones.

If the volume name "looks like" a PetSet volume name (ending with
-<number>) then we use the number as an offset.  In that case we hash
the base name.

Fixes #27256
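
A hedged, self-contained sketch of the heuristic described above (the real implementation differs in detail): hash the volume name to pick a zone; if the name looks like a PetSet volume name (ending in -<number>), hash the base name and use the ordinal as an offset so members spread across zones.

```go
package main

import (
	"fmt"
	"hash/fnv"
	"regexp"
	"sort"
	"strconv"
)

var petSetNameRE = regexp.MustCompile(`^(.+)-(\d+)$`)

func chooseZone(volumeName string, zones []string) string {
	sort.Strings(zones) // stable ordering so the same name always maps to the same zone
	name, offset := volumeName, uint32(0)
	if m := petSetNameRE.FindStringSubmatch(volumeName); m != nil {
		name = m[1] // hash the base name...
		n, _ := strconv.Atoi(m[2])
		offset = uint32(n) // ...and use the ordinal as an offset
	}
	h := fnv.New32a()
	h.Write([]byte(name))
	return zones[(h.Sum32()+offset)%uint32(len(zones))]
}

func main() {
	zones := []string{"us-east-1a", "us-east-1b", "us-east-1d"}
	for _, v := range []string{"data-web-0", "data-web-1", "data-web-2", "myclaim"} {
		fmt.Println(v, "->", chooseZone(v, zones))
	}
}
```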
2016-06-17 23:27:31 -04:00
saadali
cfab5362d4 Remove spam log messages from gce pd
Fixes https://github.com/kubernetes/kubernetes/pull/27410
2016-06-15 09:34:08 -07:00
saadali
542f2dc708 Introduce new kubelet volume manager
This commit adds a new volume manager in kubelet that synchronizes
volume mount/unmount (and attach/detach, if attach/detach controller
is not enabled).

This eliminates the race conditions between the pod creation loop
and the orphaned volumes loops. It also removes the unmount/detach
from the `syncPod()` path so volume clean up never blocks the
`syncPod` loop.
2016-06-15 09:34:08 -07:00
saadali
9b6a505f8a Rename UniqueDeviceName to UniqueVolumeName
Rename UniqueDeviceName to UniqueVolumeName and move helper functions
from attacherdetacher to volumehelper package.
Introduce UniquePodName alias
2016-06-15 09:32:12 -07:00
k8s-merge-robot
abfe894385 Merge pull request #27301 from bprashanth/ps_dbg
Automatic merge from submit-queue

petset and volume debug messages

To help with https://github.com/kubernetes/kubernetes/issues/27299 https://github.com/kubernetes/kubernetes/issues/27058

simple enough that either reviewer can approve I guess.
2016-06-14 12:34:43 -07:00
Wojciech Tyczynski
5d702a32c1 Fix race in informer 2016-06-14 16:40:12 +02:00
Abitha Palaniappan
6a8cec1c5c Fix vSphere Volume plugin bugs
- replaces probeVolume with scsiHostRescan to scan hot attached disks
 - fixes substring match of UUID returned from AttachDisk
 - changes DetachDisk to take volumePath argument instead of diskID
 - fixes delayed failure at mount rather than attach disk
 - removes cloning of virtual disk in AttachDisk
2016-06-13 17:20:55 -07:00
Prashanth Balasubramanian
4e2f97a80e Add some logging around ro flag in GCE volume plugin 2016-06-13 13:55:49 -07:00
k8s-merge-robot
4793372a85 Merge pull request #25888 from rootfs/attacher-aws-cinder
Automatic merge from submit-queue

implement EBS and Cinder attacher/detacher 

follow up with #21709

@kubernetes/sig-storage
2016-06-10 05:39:22 -07:00
k8s-merge-robot
c9c4ada309 Merge pull request #26615 from jsafrane/gce-attach-tests
Automatic merge from submit-queue

GCE attach tests

Add basic tests for GCE attacher.

Looking at the code, it would deserve some refactoring as suggested in #25888, so mounting is not tested at all.
2016-06-09 06:00:56 -07:00
k8s-merge-robot
29c5d6c721 Merge pull request #26848 from pmorie/wrap-volumes
Automatic merge from submit-queue

Wrap more comments in pkg/volume

Wrap some more comments in `pkg/volume`
2016-06-09 01:15:52 -07:00
Avesh Agarwal
3c865e45a0 Remove an empty line being output when exposing annotations and
labels via downward api volume
2016-06-08 09:22:10 -04:00
Huamin Chen
d1e0a13924 support AWS and Cinder attacher
Signed-off-by: Huamin Chen <hchen@redhat.com>
2016-06-08 12:56:24 +00:00
Jan Safranek
5cd5ae8d82 Add GCE attacher unit tests. 2016-06-08 13:53:04 +02:00
Huamin Chen
4b4048a084 correction on rbd volume object and defaults
Signed-off-by: Huamin Chen <hchen@redhat.com>
2016-06-06 17:27:47 +00:00
Paul Morie
6415c2d288 Wrap more comments in pkg/volume 2016-06-04 14:14:00 -04:00
k8s-merge-robot
14f2763724 Merge pull request #26777 from jsafrane/fix-attach-errors
Automatic merge from submit-queue

Fix GCE attacher/detacher to ignore return value of failed calls.

The plugin should ignore any return value if err is set. Found when writing unit tests in #26615 - my dummy `DiskIsAttached` returned `false, errors.New('fake error')` and the volume was **not** detached although the log message `"Error checking if PD (%q) is already attached to current node (%q). Will continue and try detach anyway."` suggested otherwise 

@saad-ali, PTAL
@kubernetes/sig-storage
2016-06-03 22:34:56 -07:00
Paul Morie
029b97d5a1 Wrap comments in pkg/volume 2016-06-03 16:16:57 -04:00
Jan Safranek
eb5a68319e Fix GCE attacher/detacher to ignore return value of failed calls.
The plugin should ignore any return value if err is set.
2016-06-03 14:16:17 +02:00
Saad Ali
9dbe943491 Attach/Detach Controller Kubelet Changes
This PR contains Kubelet changes to enable attach/detach controller control.
* It introduces a new "enable-controller-attach-detach" kubelet flag to
  enable control by controller. Default enabled.
* It removes all references to the "SafeToDetach" annotation from the controller.
* It adds the new VolumesInUse field to the Node Status API object.
* It modifies the controller to use VolumesInUse instead of SafeToDetach
  annotation to gate detachment.
* There is a bug in node-problem-detector that causes VolumesInUse to
  get reset every 30 seconds. Issue https://github.com/kubernetes/node-problem-detector/issues/9
  opened to fix that.
2016-06-02 16:47:11 -07:00
k8s-merge-robot
0b7f8e5b74 Merge pull request #24808 from screeley44/gluster_errors
Automatic merge from submit-queue

read gluster log to surface glusterfs plugin errors properly in describe events

glusterfs.go does not properly expose errors, as all mount errors go to a log file; I propose we read the log file to expose the errors without asking users to 'go look at this log'

This PR does the following:
1.  adds a gluster option for log-level=ERROR to remove all noise from the log file
2.  changes the log file name and path based on PV + Pod name, so it is specific per PV and Pod
3.  creates a utility to read the last two lines of the log file when a failure occurs (see the sketch after the examples below)

old behavior:
```
  13s	13s	1	{kubelet 127.0.0.1}		Warning	FailedMount	Unable to mount volumes for pod "bb-gluster-pod2_default(34b18c6b-070d-11e6-8e95-52540092b5fb)": glusterfs: mount failed: Mount failed: exit status 1
Mounting arguments: 192.168.234.147:myVol2 /var/lib/kubelet/pods/34b18c6b-070d-11e6-8e95-52540092b5fb/volumes/kubernetes.io~glusterfs/pv-gluster glusterfs [log-file=/var/lib/kubelet/plugins/kubernetes.io/glusterfs/pv-gluster/glusterfs.log]
Output: Mount failed. Please check the log file for more details.
```

improved behavior: (updated after suggestions from community)
```
  34m		34m		1	{kubelet 127.0.0.1}			Warning		FailedMount	Unable to mount volumes for pod "bb-multi-pod1_default(e7d7f790-0d4b-11e6-a275-52540092b5fb)": glusterfs: mount failed: Mount failed: exit status 1
Mounting arguments: 192.168.123.222:myVol2 /var/lib/kubelet/pods/e7d7f790-0d4b-11e6-a275-52540092b5fb/volumes/kubernetes.io~glusterfs/pv-gluster2 glusterfs [log-level=ERROR log-file=/var/lib/kubelet/plugins/kubernetes.io/glusterfs/pv-gluster2/bb-multi-pod1-glusterfs.log]
Output: Mount failed. Please check the log file for more details.

 the following error information was pulled from the log to help resolve this issue: 
[2016-04-28 14:21:29.109697] E [socket.c:2332:socket_connect_finish] 0-glusterfs: connection to 192.168.123.222:24007 failed (Connection timed out)
[2016-04-28 14:21:29.109767] E [glusterfsd-mgmt.c:1819:mgmt_rpc_notify] 0-glusterfsd-mgmt: failed to connect with remote-host: 192.168.123.222 (Transport endpoint is not connected)

```
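
A hedged sketch (not the plugin's actual helper) of the log-tail utility referenced in item 3 above, used to enrich the mount failure event:

```go
package main

import (
	"fmt"
	"os"
	"strings"
)

// readGlusterLogTail returns the last `lines` lines of the given log file.
func readGlusterLogTail(logPath string, lines int) (string, error) {
	data, err := os.ReadFile(logPath)
	if err != nil {
		return "", err
	}
	all := strings.Split(strings.TrimRight(string(data), "\n"), "\n")
	if len(all) > lines {
		all = all[len(all)-lines:]
	}
	return strings.Join(all, "\n"), nil
}

func main() {
	// The path below is only an example log location in the style shown above.
	tail, err := readGlusterLogTail("/var/lib/kubelet/plugins/kubernetes.io/glusterfs/pv-gluster2/bb-multi-pod1-glusterfs.log", 2)
	if err != nil {
		fmt.Println("could not read gluster log:", err)
		return
	}
	fmt.Println("the following error information was pulled from the log to help resolve this issue:")
	fmt.Println(tail)
}
```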

This PR is also an alternate approach to #24624
2016-06-02 13:42:54 -07:00
Scott Creeley
a36cd3d55b read gluster log to surface glusterfs plugin errors properly 2016-06-02 09:09:14 -04:00
k8s-merge-robot
7030dca4c8 Merge pull request #25989 from jingxu97/bug-tmpdir
Automatic merge from submit-queue

use MkTmpDir instead of ioutil.TempDir in testing

fixes #20243
2016-05-29 06:32:36 -07:00
saadali
3c345abafd Fix DATA RACE in unit tests: reconciler_test.go 2016-05-27 01:19:25 -07:00
Alex Mohr
0a6178959f Merge pull request #25852 from vishh/network-volumes
Add metrics support for a GCE PD, EC2 EBS & Azure File volumes
2016-05-26 15:47:33 -07:00
k8s-merge-robot
bda0dc88aa Merge pull request #25457 from saad-ali/expectedStateOfWorldDataStructure
Automatic merge from submit-queue

Attach Detach Controller Business Logic

This PR adds the meat of the attach/detach controller proposed in #20262.

The PR splits the in-memory cache into a desired and actual state of the world.
2016-05-26 00:41:54 -07:00
saadali
92500a20d7 Attach detach controller business logic added
Split controller cache into actual and desired state of world.
Controller will only operate on volumes scheduled to nodes that
have the "volumes.kubernetes.io/controller-managed-attach" annotation.
2016-05-24 23:01:16 -07:00
Avesh Agarwal
1931931494 Downward API implementation for resources limits and requests 2016-05-24 12:22:35 -04:00
Vishnu Kannan
baa8ac4d6b Add metrics support for a few network based volumes.
Signed-off-by: Vishnu Kannan <vishnuk@google.com>
2016-05-23 09:33:12 -07:00
k8s-merge-robot
8b0e9c5739 Merge pull request #24947 from hpcloud/hpe/vsphere-volume
Automatic merge from submit-queue

vSphere Volume Plugin Implementation

This PR implements vSphere Volume plugin support in Kubernetes (ref. issue #23932).
2016-05-22 20:40:14 -07:00
Sami Wagiaalla
4858d0ab6f Detangle Attach/Detach from GCE PD 2016-05-22 08:28:29 -04:00
Abitha Palaniappan
95c009dbdb Adding vSphere Volume support for vSphere Cloud Provider 2016-05-21 11:00:14 -07:00
k8s-merge-robot
9c9bdb2494 Merge pull request #25502 from swagiaal/attach-interface-pvc
Automatic merge from submit-queue

Add support for PersistentVolumeClaim in Attacher/Detacher interface

The attach detach interface does not support volumes which are referenced through PVCs. This PR adds that support
2016-05-21 06:25:34 -07:00
k8s-merge-robot
eb733cbf45 Merge pull request #25285 from ingvagabund/extend-secrets-volumes-with-path-control
Automatic merge from submit-queue

Extend secrets volumes with path control

As per [1], this PR extends secrets mapped into a volume with:

* key-to-path mapping the same way as is for configmap. E.g.

```
{
 "apiVersion": "v1",
 "kind": "Pod",
  "metadata": {
    "name": "mypod",
    "namespace": "default"
  },
  "spec": {
    "containers": [{
      "name": "mypod",
      "image": "redis",
      "volumeMounts": [{
        "name": "foo",
        "mountPath": "/etc/foo",
        "readOnly": true
      }]
    }],
    "volumes": [{
      "name": "foo",
      "secret": {
        "secretName": "mysecret",
        "items": [{
          "key": "username",
          "path": "my-username"
        }]
      }
    }]
  }
}
```

Here ``spec.volumes[0].secret.items`` is added, changing the original target ``/etc/foo/username`` to ``/etc/foo/my-username``.

* secondly, refactoring the ``pkg/volumes/secrets/secrets.go`` volume plugin to use ``AtomicWriter`` to project a secret into files.

[1] https://github.com/kubernetes/kubernetes/blob/master/docs/design/configmap.md#changes-to-secret
2016-05-21 03:55:13 -07:00
k8s-merge-robot
62a8394eb4 Merge pull request #25263 from jsafrane/devel/adopt-recycle-pod
Automatic merge from submit-queue

volume recycler: Don't start a new recycler pod if one already exists.

Recycling is a long duration process and when the recycler controller is restarted in the meantime, it should not start a new recycler pod if there is one already running.

This means that the recycler pod must have a deterministic name based on the name of the recycled PV; we then get name conflicts when creating the pod.

Two things need to be changed:

- the recycler controller and recycler plugins must pass the PV.Name to the place where the pod is created. This is most of the patch and it should be pretty straightforward.

- create recycler pod with deterministic name and check "already exists" error.

While at it, remove the useless 'resourceVersion' argument and make log messages start with lowercase.

There is a unit test to check the behavior, plus an e2e test that checks that regular recycling is not broken (it does not try to run two recycler pods in parallel, as the recycler is single-threaded now).
2016-05-21 02:28:26 -07:00
Jing Xu
ffac5d73f6 use MkTmpDir instead of ioutil.TempDir in testing 2016-05-20 14:06:08 -07:00
Clayton Coleman
5e4308f91d
Update use of Quantity in other classes 2016-05-19 08:41:43 -04:00
Jan Safranek
0ee9160f88 volume recycler: Don't start a new recycler pod if one already exists.
Recycling is a long duration process and when the recycler controller is
restarted in the meantime, it should not start a new recycler pod if there is
one already running.

This means that the recycler pod must have a deterministic name based on the name
of the recycled PV; we then get name conflicts when creating the pod.

Two things need to be changed:
- the recycler controller and recycler plugins must pass the PV.Name to the place
  where the pod is created.

- create recycler pod with deterministic name and check "already exists" error.

While at it, remove the useless 'resourceVersion' argument and make log messages
start with lowercase.
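
A minimal sketch of the deterministic-name idea (the prefix is illustrative, not the controller's actual value): the recycler pod name is derived from the PV name, so a create call that fails with "already exists" tells a restarted controller that a recycler pod is still running and no new one should be started.

```go
package recycler

// recyclerPodNameForPV derives the recycler pod name from the PV name so the
// same PV always maps to the same pod name across controller restarts.
func recyclerPodNameForPV(pvName string) string {
	return "recycler-for-" + pvName
}
```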
2016-05-19 12:58:25 +02:00
k8s-merge-robot
c63ac4e664 Merge pull request #24331 from jsafrane/devel/refactor-binder
Automatic merge from submit-queue

Refactor persistent volume controller

Here is complete persistent controller as designed in https://github.com/pmorie/pv-haxxz/blob/master/controller.go

It's feature complete and compatible with current binder/recycler/provisioner. No new features, it *should* be much more stable and predictable.

Testing
--
The unit test framework is quite complicated, but it was necessary to reach reasonable coverage (78% in `persistentvolume_controller.go`). The untested parts are error cases, which are quite hard to test in a reasonable way - sure, I can inject a VersionConflictError on any object update and check that the error bubbles up to the appropriate places, but the real test would be to run `syncClaim`/`syncVolume` again and check that it recovers appropriately from the error in the next periodic sync. That's the hard part.

Organization
---
The PR starts with `rm -rf kubernetes/pkg/controller/persistentvolume`. I find it easier to read when I see only the new controller without old pieces scattered around.
[`types.go` from the old controller is reused to speed up matching a bit; the code looks solid and has 95% unit test coverage].

I tried to split the PR into smaller patches, let me know what you think.

~~TODO~~
--

* ~~Missing: provisioning, recycling~~.
* ~~Fix integration tests~~
* ~~Fix e2e tests~~

@kubernetes/sig-storage


Fixes #15632
2016-05-19 03:06:46 -07:00
Jan Chaloupka
ebe56f5ff9 Extend the current secrets mounting to volume implementation with key to path mapping.
The key to path mapping allows pod to specify different name (thus location) of each secret.
At the same time refactor the volume plugin to use AtomicWritter to project secrets to files in a volume.

Update the e2e Secrets test; the secret file permission has changed from 0444 to 0644.
Remove TestPluginIdempotent, as the AtomicWriter is responsible for secret creation.
2016-05-18 16:12:31 +02:00
Jan Safranek
75b0e2ad63 provisioning: Refactor volume plugins.
NewPersistentVolumeTemplate() and Provision() are merged into one call.
2016-05-18 10:06:51 +02:00
Tim Hockin
66d0d87829 Make IsValidLabelValue return error strings 2016-05-17 21:36:10 -07:00
k8s-merge-robot
4ac32179bf Merge pull request #24798 from thockin/validation_pt8-1
Automatic merge from submit-queue

Make IsQualifiedName return error strings

Part of the larger validation PR, broken out for easier review and merge.

@lavalamp FYI, but I know you're swamped, too.
2016-05-14 22:14:17 -07:00
k8s-merge-robot
6fe3498ef9 Merge pull request #24625 from pmorie/dapi-volume-atomic
Automatic merge from submit-queue

Refactor downward API volume to use AtomicWriter

Make the downward API plugin use `AtomicWriter` instead.

@thockin @saad-ali @sdminonne

2016-05-14 11:22:42 -07:00
k8s-merge-robot
4591aa0f3b Merge pull request #25306 from pmorie/configmap-medium
Automatic merge from submit-queue

Use local disk for ConfigMap volume instead of tmpfs

So that ConfigMap volumes are counted against pod's storage quota.

@kubernetes/sig-node 
cc @derekwaynecarr @vishh
2016-05-13 05:07:01 -07:00
Sami Wagiaalla
56ccd98db8 Add support for PersistentVolumeClaim in Attacher/Detacher interface
- Dereference PVCs in kubelet.
- Add getPersistentVolumebySpec to kubelet.
- Call getPersistentVolumebySpec from mount External volumes
- Add applyPVAnnotations to kubelet.
- Delete persistent_claim plugin.
2016-05-12 17:46:39 -04:00
saadali
bce708c22f Modify Detach method to take disk name 2016-05-12 12:19:24 -07:00
Saad Ali
4b564c95d7 Merge pull request #25325 from swagiaal/attacher-interface-update
Update Attacher/Detacher interfaces.
2016-05-11 11:36:19 -07:00
Paul Morie
3567b1f9c4 Use local disk for ConfigMap volume instead of tmpfs
So that ConfigMap volumes are counted against pod's storage quota.
2016-05-10 22:27:40 -04:00
Tim Hockin
72955770f3 Make IsQualifiedName return error strings 2016-05-10 11:23:23 -07:00
Sami Wagiaalla
5258392e6a Update Attacher/Detacher interfaces.
- Expand arguments for Attach/Detach interfaces
- Run waitForDetach asynchronously
2016-05-09 17:18:08 -04:00
Tim Hockin
527cb50583 Demand at least go1.6 2016-05-08 20:30:37 -07:00
k8s-merge-robot
d4b1b6776a Merge pull request #24557 from swagiaal/attacher-interface
Automatic merge from submit-queue

 Abstract node side functionality of attachable plugins

- Create PhysicalAttacher interface to abstract MountDevice and
  WaitForAttach.
- Create PhysicalDetacher interface to abstract WaitForDetach and
  UnmountDevice.
- Expand unit tests to check that Attach, Detach, WaitForAttach,
  WaitForDetach, MountDevice, and UnmountDevice get called where
  appropriate.

Physical{Attacher,Detacher} are working titles; suggestions welcome. Some other thoughts:
- NodeSideAttacher or NodeAttacher.
- AttachWatcher
- Call this Attacher and call the Current Attacher CloudAttacher.
- DeviceMounter (although there are way too many things called Mounter right now :/)

This is to address: https://github.com/kubernetes/kubernetes/pull/21709#issuecomment-192035382

@saad-ali
2016-05-08 14:04:44 -07:00
k8s-merge-robot
62ef6c9a34 Merge pull request #20490 from swagiaal/auto-supplemental-group-kubelet
Automatic merge from submit-queue

Automatically Add Supplemental Groups from Volumes to Pods

This adds support for a "GID" annotation that one can add to their PVs. When this annotation is seen the kubelet automatically adds the given GID to the list of supplemental groups for the pod to which the PV is attached. This allows admins to create volumes and suggest a GID to use to access the volume. This is needed for volumes which do not support ownership management such as NFS.

@markturansky PTAL
2016-05-08 08:08:01 -07:00
Sami Wagiaalla
d1aacfc059 Move getCloudProvider retries to getCloudProvider() 2016-05-04 16:43:38 -04:00
Sami Wagiaalla
71e7dba845 Abstract node side functionality of attachable plugins
- Expand Attacher/Detacher interfaces to break up work more
  explicitly.
- Add arguments to all functions to avoid having implementers store
  the data needed for operations.
- Expand unit tests to check that Attach, Detach, WaitForAttach,
  WaitForDetach, MountDevice, and UnmountDevice get called where
  appropriate.
2016-05-04 10:18:39 -04:00
Paul Morie
272066321c Refactor downward API volume to use AtomicWriter 2016-05-02 09:40:16 -04:00
Clayton Coleman
fdb110c859
Fix the rest of the code 2016-04-29 17:12:10 -04:00
Sami Wagiaalla
e1e7da2712 Add GIDs specified in a PV's annotations to pod's supplemental groups 2016-04-29 16:44:56 -04:00
k8s-merge-robot
06160b6abe Merge pull request #22023 from mkulke/rackspace-improvements
Automatic merge from submit-queue

Rackspace improvements (OpenStack Cinder)

This adds PV support via Cinder on Rackspace clusters. Rackspace Cloud Block Storage is pretty much vanilla OpenStack Cinder, so there is no need for a separate Volume Plugin. Instead I refactored the Cinder/OpenStack interaction a bit (by introducing a CinderProvider Interface and moving the device path detection logic to the OpenStack part).

Right now this is limited to `AttachDisk` and `DetachDisk`. Creation and deletion of Block Storage is not in scope of this PR.

Also the `ExternalID` and `InstanceID` cloud provider methods have been implemented for Rackspace.
2016-04-21 16:38:13 -07:00
k8s-merge-robot
e3dab39df0 Merge pull request #21304 from tobad357/iscsi-mpio-support
Automatic merge from submit-queue

Add mpio support for iscsi

This allows the iSCSI volume to check whether an iSCSI device belongs to an MPIO (multipath) device.
If it does, we make sure we mount the MPIO device instead of
the raw device.
The code is based on the current FibreChannel volume support for mpio

example
/dev/disk/by-path/iqn-example.com.2999 -> /dev/sde
Then we check
/sys/block/[dm-X]/slaves/xx
until we find the [dm-X] containing /dev/sde and mount it
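
A hedged sketch of that lookup (the paths follow the /sys/block layout described above; this is not the plugin's actual code):

```go
package main

import (
	"fmt"
	"path/filepath"
	"strings"
)

// findMultipathDevice returns the dm-X device that owns the given raw device
// (e.g. "sde"), so the dm device can be mounted instead of the raw path.
func findMultipathDevice(rawDev string) (string, error) {
	matches, err := filepath.Glob("/sys/block/dm-*/slaves/" + rawDev)
	if err != nil || len(matches) == 0 {
		return "", fmt.Errorf("no multipath device found for %s", rawDev)
	}
	// matches[0] looks like /sys/block/dm-3/slaves/sde; element 3 is the dm name.
	parts := strings.Split(matches[0], "/")
	return "/dev/" + parts[3], nil
}

func main() {
	dm, err := findMultipathDevice("sde")
	if err != nil {
		fmt.Println(err)
		return
	}
	fmt.Println("mount", dm, "instead of /dev/sde")
}
```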

Additional work that can be done in future
1. Add multiple portal support to iscsi
2. Move the FibreChannel volume provider to use the code that has been extracted
2016-04-21 15:40:50 -07:00
kulke
ba4d74f3c7 Added Block Storage support to Rackspace provider, improved Node discovery. 2016-04-21 10:31:37 +02:00