Commit Graph

1095 Commits

Kubernetes Submit Queue
2df943ce50 Merge pull request #36698 from fabiand/no-mpathconf
Automatic merge from submit-queue

fc: Drop multipath.conf snippet

**What this PR does / why we need it**:
Removes multipath.conf. The code does not make use of it, nor ensures that it's getting used, and it should in addition be handled elsewhere.

**Which issue this PR fixes** *(optional, in `fixes #<issue number>(, #<issue_number>, ...)` format, will close that issue when PR gets merged)*: fixes #

**Special notes for your reviewer**:

**Release note**:

```release-note
```

A minimalistic multipath.conf got written, but it was useless, as
it is unclear if multipathd is running and there was also no
config reload triggered.

This patch drops this snippet. In general it's probably a better idea
to leave the multipath.conf to the component managing the host.

Signed-off-by: Fabian Deutsch <fabiand@fedoraproject.org>
2017-03-24 10:24:49 -07:00
Jeff Schroeder
a5afdfa17f Fix spelling of the word successfully
Auto-generated via:
    git grep -l [Ss]uccesfully  | xargs sed -ri 's/([sS])uccesfully/\1uccessfully/g'

I noticed this when running kube-scheduler with --v=4 and it is annoying.
Then manually reverted the changes to the vendored bits.
2017-03-22 18:33:11 -05:00
Kubernetes Submit Queue
754effe332 Merge pull request #42949 from wenlxie/master
Automatic merge from submit-queue

recycle pod can't get the event since the channel closed

**What this PR does / why we need it**:
We created a hostPath-type PV with the "Recycle" persistentVolumeReclaimPolicy and bound a PVC to it, but after the PVC was deleted, the PV never returned to Available status. This started happening after we upgraded etcd to 3.0. The reason is:
If the channel used to receive the pod's messages and events is closed abnormally (for example, the event channel may be closed because of a "required revision has been compacted" error), the function internalRecycleVolumeByWatchingPodUntilCompletion gets stuck in a loop, the recycle pod is never deleted, and the PV cannot become Available.

**Special notes for your reviewer**:
None
**Release note**:
2017-03-16 02:41:11 -07:00
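As background for the fix above, here is a minimal Go sketch of the failure mode and the remedy: a receive from a closed channel returns immediately with ok == false, so a watch loop must check that flag or it spins forever on zero values. Types and names are illustrative stand-ins, not the actual kubelet code.

```go
package main

import (
	"fmt"
	"time"
)

// event is a stand-in for the pod events delivered by the watch
// (illustrative, not the actual Kubernetes types).
type event struct{ phase string }

// waitForPodCompletion drains events until the pod succeeds or the
// channel closes. The crucial detail: a receive from a closed channel
// returns immediately with ok == false, so the loop must detect that
// case and return an error instead of spinning forever.
func waitForPodCompletion(events <-chan event, timeout time.Duration) error {
	deadline := time.After(timeout)
	for {
		select {
		case ev, ok := <-events:
			if !ok {
				// Channel closed abnormally (e.g. the watch expired
				// with "required revision has been compacted"):
				// surface an error so the caller can re-establish
				// the watch or clean up the recycle pod.
				return fmt.Errorf("watch channel closed before pod completed")
			}
			if ev.phase == "Succeeded" {
				return nil
			}
		case <-deadline:
			return fmt.Errorf("timed out waiting for recycle pod")
		}
	}
}

func main() {
	events := make(chan event, 1)
	events <- event{phase: "Running"}
	close(events) // simulate the abnormal close described in the PR
	fmt.Println(waitForPodCompletion(events, time.Second))
}
```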
Vladimir Vivien
0715b32439 Update ScaleIO volume plugin default readOnly value
This commit updates the code to set the default value of the readOnly attribute to false.
It also updates the example docs to add the full list of supported plugin attributes and documentation.
2017-03-14 14:19:48 -04:00
wenlxie
33385214bc recycle pod can't get the event since the channel has been closed 2017-03-14 10:35:08 +08:00
Hemant Kumar
a4a3d20934 Fix vsphere selinux support
Managed flag must be true for SELinux relabelling to work
for vsphere.
2017-03-12 23:21:07 -04:00
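A minimal sketch of what such a change looks like, assuming the pkg/volume Attributes shape of this era (ReadOnly, Managed, SupportsSELinux); the types below are simplified stand-ins, not the actual plugin code.

```go
package main

import "fmt"

// Attributes mirrors the fields the kubelet inspects when deciding
// whether to relabel a volume for SELinux (simplified stand-in).
type Attributes struct {
	ReadOnly        bool
	Managed         bool
	SupportsSELinux bool
}

type vsphereVolumeMounter struct{ readOnly bool }

// GetAttributes must report Managed: true, or the kubelet skips
// SELinux relabelling for the volume entirely.
func (m *vsphereVolumeMounter) GetAttributes() Attributes {
	return Attributes{
		ReadOnly:        m.readOnly,
		Managed:         true, // was false; relabelling requires true
		SupportsSELinux: true,
	}
}

func main() {
	m := &vsphereVolumeMounter{}
	fmt.Printf("%+v\n", m.GetAttributes())
}
```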
Hemant Kumar
12d6b87894 Validate PVs for mount options
We are going to move the validation in its own package
and we will be calling validation for individual volume types
as needed.
2017-03-09 18:24:37 -05:00
yupengzte
363f321f32 should replace errors.New(fmt.Sprintf(...)) with fmt.Errorf(...)
Signed-off-by: yupengzte <yu.peng36@zte.com.cn>
2017-03-06 09:14:48 +08:00
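The mechanical rewrite this commit applies, shown as a runnable before/after:

```go
package main

import (
	"errors"
	"fmt"
)

func main() {
	name := "pv-1"

	// Before: builds the string, then wraps it in an error.
	before := errors.New(fmt.Sprintf("volume %q not found", name))

	// After: one call does both, same result.
	after := fmt.Errorf("volume %q not found", name)

	fmt.Println(before, "|", after)
}
```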
Kubernetes Submit Queue
f9ccee7714 Merge pull request #42435 from dashpole/timestamps_for_fsstats
Automatic merge from submit-queue (batch tested with PRs 42369, 42375, 42397, 42435, 42455)

[Bug Fix]: Avoid evicting more pods than necessary by adding Timestamps for fsstats and ignoring stale stats

Continuation of #33121.  Credit for most of this goes to @sjenning.  I added volume fs timestamps.

**why is this a bug** 
This PR attempts to fix part of https://github.com/kubernetes/kubernetes/issues/31362 which results in multiple pods getting evicted unnecessarily whenever the node runs into resource pressure. This PR reduces the chances of such disruptions by avoiding reacting to old/stale metrics.
Without this PR, kubernetes nodes under resource pressure will cause unnecessary disruptions to user workloads. 
This PR will also help deflake a node e2e test suite.

The eviction manager currently avoids evicting pods if metrics are old.  However, timestamp data is not available for filesystem data, and this causes lots of extra evictions.
See the [inode eviction test flakes](https://k8s-testgrid.appspot.com/google-node#kubelet-flaky-gce-e2e) for examples.
This should probably be treated as a bugfix, as it should help mitigate extra evictions.

cc: @kubernetes/sig-storage-pr-reviews  @kubernetes/sig-node-pr-reviews @vishh @derekwaynecarr @sjenning
2017-03-03 23:21:48 -08:00
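A minimal sketch, with illustrative types, of the idea behind the fix: each stats observation carries a collection timestamp, and observations older than the last eviction action are ignored rather than acted on.

```go
package main

import (
	"fmt"
	"time"
)

// fsStats is an illustrative stand-in for the kubelet's filesystem
// stats; the point of the PR is that each observation records when it
// was collected, not just the values.
type fsStats struct {
	AvailableBytes uint64
	Time           time.Time
}

// shouldEvict only acts on fresh data: stats that predate the last
// eviction decision are treated as stale and ignored, so one pressure
// event does not trigger repeated evictions off the same old reading.
func shouldEvict(s fsStats, lastAction time.Time, threshold uint64) bool {
	if !s.Time.After(lastAction) {
		return false // stale observation; wait for new stats
	}
	return s.AvailableBytes < threshold
}

func main() {
	lastEviction := time.Now()
	stale := fsStats{AvailableBytes: 0, Time: lastEviction.Add(-10 * time.Second)}
	fresh := fsStats{AvailableBytes: 0, Time: lastEviction.Add(10 * time.Second)}
	fmt.Println(shouldEvict(stale, lastEviction, 1<<20)) // false
	fmt.Println(shouldEvict(fresh, lastEviction, 1<<20)) // true
}
```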
Vladimir Vivien
915a54180d Addition of ScaleIO Kubernetes Volume Plugin
This commit implements the Kubernetes volume plugin, allowing pods to seamlessly access and use data stored on ScaleIO volumes.
2017-03-03 15:47:19 -05:00
Kubernetes Submit Queue
e9bbfb81c1 Merge pull request #41306 from gnufied/implement-interface-bulk-volume-poll
Automatic merge from submit-queue (batch tested with PRs 41306, 42187, 41666, 42275, 42266)

Implement bulk polling of volumes

This implements bulk volume polling using ideas presented by
Justin in https://github.com/kubernetes/kubernetes/pull/39564

But it changes the implementation to use an interface
and doesn't affect other implementations.

cc @justinsb
2017-03-03 10:54:38 -08:00
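A compressed sketch of the interface-based approach described above: a plugin that can verify attachments in bulk implements one method that checks all volumes on all nodes at once, while plugins without it keep their per-volume polling. The real code uses volume.Spec and types.NodeName, so the types here are stand-ins.

```go
package main

import "fmt"

// Illustrative stand-ins for types.NodeName and volume.Spec.
type nodeName string
type volumeSpec string

// bulkVolumeVerifier is the optional interface: one call verifies
// every volume on every node instead of one cloud API call per volume.
type bulkVolumeVerifier interface {
	BulkVerifyVolumes(volumesByNode map[nodeName][]volumeSpec) (map[nodeName]map[volumeSpec]bool, error)
}

// fakeVerifier pretends every volume is still attached.
type fakeVerifier struct{}

func (fakeVerifier) BulkVerifyVolumes(volumesByNode map[nodeName][]volumeSpec) (map[nodeName]map[volumeSpec]bool, error) {
	attached := make(map[nodeName]map[volumeSpec]bool)
	for node, vols := range volumesByNode {
		attached[node] = make(map[volumeSpec]bool)
		for _, v := range vols {
			attached[node][v] = true
		}
	}
	return attached, nil
}

func main() {
	var v bulkVolumeVerifier = fakeVerifier{}
	res, _ := v.BulkVerifyVolumes(map[nodeName][]volumeSpec{
		"node-a": {"vol-1", "vol-2"},
	})
	fmt.Println(res)
}
```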
Kubernetes Submit Queue
ff9296fcad Merge pull request #35055 from ivan4th/make-downward-api-test-table-driven
Automatic merge from submit-queue (batch tested with PRs 42365, 42429, 41770, 42018, 35055)

Make Downward API test table-driven
2017-03-03 09:24:48 -08:00
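For readers unfamiliar with the pattern, a table-driven test in Go looks like this: one slice of named cases and one loop, instead of a copy of the test body per case. The function under test is a made-up stand-in, not the Downward API code.

```go
package downwardapi_test

import "testing"

// extractLabel is a stand-in for the code under test.
func extractLabel(spec string) string {
	if len(spec) > 7 && spec[:7] == "labels/" {
		return spec[7:]
	}
	return ""
}

func TestExtractLabel(t *testing.T) {
	cases := []struct {
		name string
		in   string
		want string
	}{
		{"plain label", "labels/app", "app"},
		{"not a label path", "annotations/app", ""},
		{"empty", "", ""},
	}
	for _, tc := range cases {
		t.Run(tc.name, func(t *testing.T) {
			if got := extractLabel(tc.in); got != tc.want {
				t.Errorf("%s: got %q, want %q", tc.name, got, tc.want)
			}
		})
	}
}
```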
David Ashpole
a90c7951d4 add volume timestamps 2017-03-02 15:01:59 -08:00
Hemant Kumar
786da1de12 Implement bulk polling of volumes
This implements bulk volume polling using ideas presented by
Justin in https://github.com/kubernetes/kubernetes/pull/39564

But it changes the implementation to use an interface
and doesn't affect other implementations.
2017-03-02 14:59:59 -05:00
Jan Safranek
9487552e41 Regenerate everything 2017-03-02 10:23:58 +01:00
Jan Safranek
7ae4152712 Move PV/PVC annotations to PV/PVC types.
They aren't part of storage.k8s.io/v1 or v1beta1 API.
Also move associated *GetClass functions.
2017-03-02 10:23:55 +01:00
Jan Safranek
a39bd53509 Explicitly use storage.k8s.io/v1beta1 everywhere.
v1 is not yet available on GKE and tests would fail.
2017-03-02 08:56:26 +01:00
Jimeng Liu
5c53a906bd remove unused StatusFailure constant 2017-03-01 14:21:50 -08:00
Scott Creeley
762ca8e8a9 adding some debug 2017-03-01 13:30:21 -05:00
Hemant Kumar
2d3008fc56 Implement support for mount options in PVs
Add support for mount options via annotations on PVs
2017-03-01 11:50:40 -05:00
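A minimal sketch of consuming such an annotation. The key below is the one documented for this feature at the time (`volume.beta.kubernetes.io/mount-options`), but treat it as an assumption and verify against the release you run.

```go
package main

import (
	"fmt"
	"strings"
)

// Annotation key for mount options on a PV (assumed from the docs of
// this era; verify before relying on it).
const mountOptionAnnotation = "volume.beta.kubernetes.io/mount-options"

// mountOptionsFromAnnotations pulls the comma-separated option list
// off a PV's annotations, the way a volume plugin would before
// invoking mount.
func mountOptionsFromAnnotations(annotations map[string]string) []string {
	v, ok := annotations[mountOptionAnnotation]
	if !ok || v == "" {
		return nil
	}
	return strings.Split(v, ",")
}

func main() {
	ann := map[string]string{mountOptionAnnotation: "ro,soft"}
	fmt.Println(mountOptionsFromAnnotations(ann)) // [ro soft]
}
```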
Tomas Smetana
58edea18de Remove unused method from operation_generator 2017-03-01 10:42:53 +01:00
Kubernetes Submit Queue
4e46ae1d3b Merge pull request #41597 from rootfs/rbd-fencing2
Automatic merge from submit-queue (batch tested with PRs 41597, 42185, 42075, 42178, 41705)

force rbd image unlock if the image is not used

**What this PR does / why we need it**:
A Ceph RBD image can be locked if the host that holds the lock is down. In such a case, the image cannot be used by other Pods.

The fix is to detect the orphaned locks and force unlock.
**Which issue this PR fixes** *(optional, in `fixes #<issue number>(, fixes #<issue_number>, ...)` format, will close that issue when PR gets merged)*: fixes #31790

**Special notes for your reviewer**:

Note: previously, the RBD volume plugin mapped the image, mounted it, and created a lock on the image. Since the proposed fix uses `rbd status` output to determine whether the image is in use, the sequence has to change to: lock checking (through `rbd lock list`), mapping check (through `rbd status`), forced unlock if necessary (through `rbd lock rm`), image lock, image mapping, and mount.

**Release note**:

```release-note
force unlock rbd image if the image is not used
```
2017-03-01 00:36:01 -08:00
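A compressed, illustrative Go sketch of that reordered sequence. Arguments to the `rbd` CLI are abbreviated, output parsing is omitted, and error handling is collapsed; this is not the plugin's actual code.

```go
package main

import (
	"fmt"
	"os/exec"
)

// run shells out to the rbd CLI; real code also passes pool, id and
// keyring flags and parses the command output.
func run(args ...string) (string, error) {
	out, err := exec.Command("rbd", args...).CombinedOutput()
	return string(out), err
}

// fenceAndAttach follows the order described in the PR: check locks,
// check watchers, remove an orphaned lock, then lock, map and mount.
func fenceAndAttach(image, lockID, locker string) error {
	if _, err := run("lock", "list", image); err != nil { // who holds locks?
		return fmt.Errorf("lock list: %v", err)
	}
	if _, err := run("status", image); err != nil { // any live watchers?
		return fmt.Errorf("status: %v", err)
	}
	// If the lock holder no longer watches the image, the lock is
	// orphaned: force-remove it so this node can take over.
	if _, err := run("lock", "rm", image, lockID, locker); err != nil {
		return fmt.Errorf("lock rm: %v", err)
	}
	// ...then: rbd lock add, rbd map, and finally mount.
	return nil
}

func main() {
	fmt.Println(fenceAndAttach("kube/pv-1", "kubelet_lock_magic", "client.4567"))
}
```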
Aditya Dani
28df55fc31 Portworx Volume Driver in Kubernetes
- Add a new type PortworxVolumeSource
- Implement the kubernetes volume plugin for Portworx Volumes under pkg/volume/portworx
- The Portworx Volume Driver uses the libopenstorage/openstorage specifications and apis for volume operations.

Changes for k8s configuration and examples for portworx volumes.

- Add PortworxVolume hooks in kubectl, kube-controller-manager and validation.
- Add a README for PortworxVolume usage as PVs, PVCs and StorageClass.
- Add example spec files

Handle code review comments.

- Modified READMEs to incorporate suggestions.
- Add a test for ReadWriteMany access mode.
- Use util.UnmountPath in TearDown.
- Add ReadOnly flag to PortworxVolumeSource
- Use hostname:port instead of unix sockets
- Delete the mount dir in TearDown.
- Fix link issue in persistentvolumes README
- In unit test check for mountpath after Setup is done.
- Add PVC Claim Name as a Portworx Volume Label

Generated code and documentation.
- Updated swagger spec
- Updated api-reference docs
- Updated generated code under pkg/api/v1

Godeps update for Portworx Volume Driver
- Adds github.com/libopenstorage/openstorage
- Adds go.pedge.io/pb/go/google/protobuf
- Updates Godep Licenses
2017-02-28 23:24:56 +00:00
Huamin Chen
6782a48dfa Enable storage class support in Azure File volume
Signed-off-by: Huamin Chen <hchen@redhat.com>
2017-02-27 15:34:37 -05:00
Hemant Kumar
54b0637a0e Add gnufied as reviewer for pkg/volume
I have helped review and contributed code to this
area already.
2017-02-27 09:12:15 -05:00
Kubernetes Submit Queue
d1f5331102 Merge pull request #41804 from chakri-nelluri/flex
Automatic merge from submit-queue (batch tested with PRs 41116, 41804, 42104, 42111, 42120)

Add support for attacher/detacher interface in Flex volume

Add support for attacher/detacher interface in Flex volume
This change breaks backward compatibility and requires a release note.

```release-note
Flex volume plugin is updated to support attach/detach interfaces. It broke backward compatibility. Please update your drivers and implement the new callouts. 
```
2017-02-27 04:10:25 -08:00
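A minimal sketch of the attach/detach shape being added. The real interfaces in pkg/volume take *volume.Spec and types.NodeName and include more methods (device mount paths, wait-for-attach), so these are simplified stand-ins.

```go
package main

import "fmt"

// spec is a stand-in for *volume.Spec.
type spec struct{ name string }

// attacher and detacher capture the new call-outs a flex driver must
// implement in addition to mount/unmount.
type attacher interface {
	Attach(s spec, node string) (devicePath string, err error)
}

type detacher interface {
	Detach(devicePath, node string) error
}

// A flex driver is an out-of-tree executable; attach/detach become
// call-outs to that binary, which is why drivers written against the
// old mount-only callouts need updating.
type flexVolumeAttacher struct{ driver string }

func (a *flexVolumeAttacher) Attach(s spec, node string) (string, error) {
	// Real code execs the driver with an "attach" subcommand and a
	// JSON options blob, then parses the JSON reply for the device path.
	return "/dev/fake0", nil
}

func main() {
	var att attacher = &flexVolumeAttacher{driver: "example.com/flex"}
	dev, err := att.Attach(spec{name: "pv-1"}, "node-a")
	fmt.Println(dev, err)
}
```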
Kubernetes Submit Queue
cff3c99613 Merge pull request #41628 from humblec/glusterfs-refactor
Automatic merge from submit-queue (batch tested with PRs 40932, 41896, 41815, 41309, 41628)

Factor new GetClusterNodes() out of CreateVolume().
2017-02-26 08:10:02 -08:00
Jordan Liggitt
41c88e0455 Revert "Merge pull request #40088 from jsafrane/storage-ga-v1"
This reverts commit 5984607cb9, reversing
changes made to 067f92e789.
2017-02-25 22:35:15 -05:00
Chakravarthy Nelluri
0d2af70e95 Add support for attacher/detacher interface in Flex volume 2017-02-24 20:18:06 -05:00
Jan Safranek
fa93f1c411 Update imports 2017-02-24 13:52:16 +01:00
Jan Safranek
cea7a46de1 Regenerate everything 2017-02-24 13:34:18 +01:00
Jan Safranek
3f6caca97a Add storage.k8s.io/v1 2017-02-24 13:34:18 +01:00
Fabian Deutsch
2367d7de2f volume: Document multipath configuration
Add some lines about how to enable multipath for block storage.
A new README was added, because multipath is relevant for at least
FC and iSCSI.

Signed-off-by: Fabian Deutsch <fabiand@fedoraproject.org>
2017-02-24 09:59:11 +01:00
Fabian Deutsch
68c3502954 fc: Drop multipath.conf snippet
A minimalistic multipath.conf got written, but it was useless, as
it is unclear if multipathd is running and there was also no
config reload triggered.

This patch drops this snippet. In general it's probably a better idea
to leave the multipath.conf to the component managing the host.

Signed-off-by: Fabian Deutsch <fabiand@fedoraproject.org>
2017-02-24 09:59:11 +01:00
Humble Chirammal
43c0a6869d This feature ensures the backup servers in the trusted pool
are contacted if there is a failure in the connected server.
Mount option becomes:
mount -t glusterfs -o log-level=ERROR,log-file=/var/lib/kubelet/plugins/kubernetes.io/glusterfs/glustermount/glusterpod-glusterfs.log,backup-volfile-servers=192.168.100.0:192.168.200.0:192.168.43.149 ..

Signed-off-by: Humble Chirammal <hchiramm@redhat.com>
2017-02-24 13:00:34 +05:30
Kubernetes Submit Queue
3adc12c5f5 Merge pull request #41113 from vmware/AddDatastoreParamForDynamicProvisioning
Automatic merge from submit-queue

Fix for Support selection of datastore for dynamic provisioning in vSphere

Fixes #40558

The current vSphere cloud provider doesn't allow a user to select a datastore for dynamic provisioning. All volumes are created in the default datastore provided by the user in the global vSphere configuration file.

With this fix, the user will be able to provide the datastore in the storage class definition. This will allow the volumes to be created in the datastore specified by the user in the storage class definition. This field is optional. If no datastore is specified, the volume will be created in the default datastore specified in the global config file.

For example, the user creates a storage class with a datastore:

kind: StorageClass
apiVersion: storage.k8s.io/v1beta1
metadata:
  name: slow
provisioner: kubernetes.io/vsphere-volume
parameters:
  diskformat: thin
  datastore: VMFSDatastore

Now the volume will be created in the datastore specified by the user - "VMFSDatastore".

If the user creates a storage class without any datastore:

kind: StorageClass
apiVersion: storage.k8s.io/v1beta1
metadata:
  name: slow
provisioner: kubernetes.io/vsphere-volume
parameters:
  diskformat: thin

Now the volume will be created in the default datastore specified in the global configuration file (vsphere.conf).

@pdhamdhere @kerneltime
2017-02-23 22:10:42 -08:00
Kubernetes Submit Queue
b5d010d6a3 Merge pull request #40910 from justinsb/fix_35695
Automatic merge from submit-queue (batch tested with PRs 41667, 41820, 40910, 41645, 41361)

Allow multiple mounts in StatefulSet volume zone placement

We have some heuristics that ensure that volumes (and hence stateful set
pods) are spread out across zones.  Sadly they forgot to account for
multiple mounts.  This PR updates the heuristic to ignore the mount name
when we see something that looks like a statefulset volume, thus
ensuring that multiple mounts end up in the same AZ.

Fix #35695

```release-note
Fix zone placement heuristics so that multiple mounts in a StatefulSet pod are created in the same zone
```
2017-02-23 20:57:29 -08:00
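An illustrative reduction of that heuristic. StatefulSet PVCs get names like "<mount>-<set>-<ordinal>"; hashing the full name sends "data-web-0" and "logs-web-0" to different zones, while dropping the mount name makes every mount of the same pod land together. The real ChooseZoneForVolume handles more cases; the names and hashing scheme here are simplified.

```go
package main

import (
	"fmt"
	"hash/fnv"
	"strings"
)

// zoneForVolume picks a zone from the PVC name. For names that look
// like <mount>-<set>-<ordinal>, the mount name is ignored so that all
// mounts of one StatefulSet pod hash to the same zone.
func zoneForVolume(pvcName string, zones []string) string {
	key := pvcName
	if parts := strings.Split(pvcName, "-"); len(parts) >= 3 {
		key = strings.Join(parts[1:], "-") // drop the mount name
	}
	h := fnv.New32a()
	h.Write([]byte(key))
	return zones[h.Sum32()%uint32(len(zones))]
}

func main() {
	zones := []string{"us-east-1a", "us-east-1b", "us-east-1c"}
	fmt.Println(zoneForVolume("data-web-0", zones))
	fmt.Println(zoneForVolume("logs-web-0", zones)) // same zone as above
}
```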
Kubernetes Submit Queue
e373b5981a Merge pull request #41778 from NickrenREN/volume-typo
Automatic merge from submit-queue (batch tested with PRs 38702, 41810, 41778, 41858, 41872)

fix some typos and var style

**Release note**:

```NONE
```
2017-02-23 07:54:37 -08:00
Justin Santa Barbara
62b8010aa2 Curate owners for pkg/volume/aws_ebs
The previous list was algorithmically generated; applying some curation.
2017-02-22 22:51:08 -05:00
Kubernetes Submit Queue
ae8f537c87 Merge pull request #41688 from humblec/iscsi-reviewer
Automatic merge from submit-queue

Update reviewer list for iscsi volume plugin.

Contributed the nodiskconflict and multipath features, etc., to the iscsi volume plugin.
Signed-off-by: Humble Chirammal <hchiramm@redhat.com>
2017-02-22 18:18:22 -08:00
Humble Chirammal
3ade29ff73 Factor new GetClusterNodes() out of CreateVolume().
Signed-off-by: Humble Chirammal <hchiramm@redhat.com>
2017-02-22 22:44:55 +05:30
Balu Dontu
12f75f0b86 Fix for Support selection of datastore for dynamic provisioning in vSphere 2017-02-21 19:04:45 +00:00
Kubernetes Submit Queue
a67e78e4fa Merge pull request #40317 from kpgriffith/recycle-vol-plug-cleanup
Automatic merge from submit-queue (batch tested with PRs 41364, 40317, 41326, 41783, 41782)

changes to cleanup the volume plugin for recycle

**What this PR does / why we need it**:
Code cleanup: instead of creating a new interface from the plugin that then calls a function to recycle a volume, the recycle function is added to the plugin itself.

**Which issue this PR fixes** *(optional, in `fixes #<issue number>(, fixes #<issue_number>, ...)` format, will close that issue when PR gets merged)*: fixes #26230

**Special notes for your reviewer**:
Took same approach from closed PR #28432.

Do you want the approach to be the same for NewDeleter(), NewMounter(), and NewUnmounter(), and should they be in this same PR, or should I submit different PRs for those?

**Release note**:

```NONE
```
2017-02-21 07:45:40 -08:00
Ivan Shvedunov
e80ae63028 Make Downward API test table-driven 2017-02-21 16:43:30 +03:00
Johannes Scheuermann
96e43e406e Remove unnecessary constants and add type to secret 2017-02-21 14:02:46 +01:00
NickrenREN
6899dd85d4 fix some typos and var style 2017-02-21 17:08:14 +08:00
Jeff Peeler
ec701a65e8 Generated files for projected volume driver 2017-02-20 13:09:41 -05:00
Jeff Peeler
8fb1b71c66 Implements projected volume driver
Proposal: kubernetes/kubernetes#35313
2017-02-20 12:56:04 -05:00
Kubernetes Submit Queue
7236af6162 Merge pull request #39373 from apprenda/fix_configmap
Automatic merge from submit-queue (batch tested with PRs 39373, 41585, 41617, 41707, 39958)

Fix ConfigMaps for Windows

**What this PR does / why we need it**: ConfigMaps were broken on Windows because the existing code used Linux-specific file paths. Updated the code in `kubelet_getters.go` to use `path/filepath` to build the directories. Also reverted the earlier change to `secret.go`, since updating `kubelet_getters.go` to use `path/filepath` fixes `secrets` as well.

**Which issue this PR fixes** *(optional, in `fixes #<issue number>(, fixes #<issue_number>, ...)` format, will close that issue when PR gets merged)*: fixes https://github.com/kubernetes/kubernetes/issues/39372

```release-note
Fix ConfigMap for Windows Containers.
```

cc: @pires
2017-02-19 13:50:37 -08:00
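Why `path/filepath` matters here, as a runnable comparison: package `path` always joins with forward slashes, while `path/filepath` uses the host OS separator, so only the latter yields valid kubelet pod directories on Windows. Paths below are illustrative.

```go
package main

import (
	"fmt"
	"path"
	"path/filepath"
)

func main() {
	root, uid, vol := `C:\var\lib\kubelet`, "pod-uid", "my-configmap"

	// package path: slash-separated on every OS, broken on Windows.
	fmt.Println(path.Join(root, "pods", uid, "volumes", vol))
	// C:\var\lib\kubelet/pods/pod-uid/volumes/my-configmap

	// package path/filepath: uses the OS separator.
	fmt.Println(filepath.Join(root, "pods", uid, "volumes", vol))
	// on Windows: C:\var\lib\kubelet\pods\pod-uid\volumes\my-configmap
}
```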
Justin Santa Barbara
bba343d066 Allow multiple mounts in StatefulSet volume zone placement
We have some heuristics that ensure that volumes (and hence stateful set
pods) are spread out across zones.  Sadly they forgot to account for
multiple mounts.  This PR updates the heuristic to ignore the mount name
when we see something that looks like a statefulset volume, thus
ensuring that multiple mounts end up in the same AZ.

Fix #35695
2017-02-19 02:20:04 -05:00