Commit Graph

19368 Commits

Author SHA1 Message Date
Kubernetes Submit Queue
1359ffc502 Merge pull request #41818 from aveshagarwal/master-taints-tolerations-api-fields-pod-spec-updates
Automatic merge from submit-queue (batch tested with PRs 41701, 41818, 41897, 41119, 41562)

Allow updates to pod tolerations.

Opening this PR to continue the discussion about updating pod spec tolerations after a pod has already been scheduled. This PR is built on top of https://github.com/kubernetes/kubernetes/pull/38957.
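
As a rough sketch of what such an update amounts to (illustrative only, using current `k8s.io/api` types; the key and value below are made up, not from this PR):

```go
// Hedged sketch (not the PR's code): appending a toleration to an
// existing pod's spec; the taint key and value are illustrative.
package main

import (
	"fmt"

	v1 "k8s.io/api/core/v1"
)

func main() {
	pod := &v1.Pod{}
	pod.Spec.Tolerations = append(pod.Spec.Tolerations, v1.Toleration{
		Key:      "example.com/dedicated", // hypothetical taint key
		Operator: v1.TolerationOpEqual,
		Value:    "batch",
		Effect:   v1.TaintEffectNoSchedule,
	})
	fmt.Println(pod.Spec.Tolerations)
}
```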

@kubernetes/sig-scheduling-pr-reviews @liggitt @davidopp @derekwaynecarr @kubernetes/rh-cluster-infra
2017-02-26 14:02:51 -08:00
Kubernetes Submit Queue
a8b629d4ee Merge pull request #41701 from vishh/evict-non-static-critical-pods
Automatic merge from submit-queue

Admit critical pods under resource pressure

And evict critical pods that are not static.

Depends on #40952.

For #40573
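
For context, a hedged sketch of the rule being described, assuming the 1.6-era `scheduler.alpha.kubernetes.io/critical-pod` annotation convention; this is not the kubelet's actual implementation:

```go
// Hedged sketch: critical pods are admitted under pressure, but only
// *static* critical pods are exempt from eviction.
package criticalexample

import v1 "k8s.io/api/core/v1"

// Era-specific assumption: criticality was an annotation at the time.
const criticalPodAnnotation = "scheduler.alpha.kubernetes.io/critical-pod"

func isCritical(pod *v1.Pod) bool {
	_, annotated := pod.Annotations[criticalPodAnnotation]
	return pod.Namespace == "kube-system" && annotated
}

// evictable reports whether the eviction manager may evict this pod:
// everything except critical static pods.
func evictable(pod *v1.Pod, isStatic bool) bool {
	return !(isCritical(pod) && isStatic)
}
```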
2017-02-26 13:43:10 -08:00
Kubernetes Submit Queue
0bc16d8966 Merge pull request #40576 from nikhiljindal/kubectlcascDel
Automatic merge from submit-queue (batch tested with PRs 41994, 41969, 41997, 40952, 40576)

Updating kubectl to send delete requests with orphanDependents=false if --cascade is true

Ref https://github.com/kubernetes/kubernetes/issues/40568 #38897

Updating kubectl to always set `DeleteOptions.orphanDependents=false` when deleting a resource with `--cascade=true`.
This is primarily for federation where we want to use server side cascading deletion.

Impact on kubernetes: kubectl will do another GET after sending a DELETE and wait until the resource is actually deleted. This can have an impact if the resource has a finalizer: kubectl will wait until the finalizer is removed and the resource is deleted, which is the right thing to do but a notable change in behavior.
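
A hedged sketch of the option being set (the `OrphanDependents` field still exists on `DeleteOptions` but was later deprecated in favor of `PropagationPolicy`; this is not kubectl's actual code path):

```go
// Hedged sketch: requesting server-side cascading deletion by setting
// DeleteOptions.OrphanDependents=false.
package main

import (
	"fmt"

	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)

func main() {
	orphan := false
	opts := metav1.DeleteOptions{OrphanDependents: &orphan}
	fmt.Printf("delete with orphanDependents=%v\n", *opts.OrphanDependents)
}
```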

cc @caesarxuchao @lavalamp @smarterclayton @kubernetes/sig-federation-pr-reviews @kubernetes/sig-cli-pr-reviews
2017-02-26 12:58:01 -08:00
Kubernetes Submit Queue
16f87fe7d8 Merge pull request #40952 from dashpole/premption
Automatic merge from submit-queue (batch tested with PRs 41994, 41969, 41997, 40952, 40576)

Guaranteed admission for Critical Pods

This is the first step in implementing node-level preemption for critical pods.
It defines the AdmissionFailureHandler interface, which allows callers, like the kubelet, to define how failed predicates are handled, and take steps to correct failures if necessary.
In the kubelet's implementation, if the pod being admitted is critical and the only failed predicates are InsufficientResourceErrors, it triggers preemption (not yet implemented) of other pods to allow admission of the critical pod.
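
A plausible shape for that interface (the PR's actual signature may differ; `PredicateFailureReason` is stubbed out here):

```go
// Hedged sketch of the AdmissionFailureHandler idea; the real interface
// in the PR may have a different signature.
package admissionexample

import v1 "k8s.io/api/core/v1"

// PredicateFailureReason stands in for the scheduler's failure-reason type.
type PredicateFailureReason interface {
	GetReason() string
}

// AdmissionFailureHandler lets the caller (e.g. the kubelet) decide how to
// react when predicates fail during pod admission -- for example by
// preempting other pods when the incoming pod is critical. It returns the
// failure reasons it could not resolve.
type AdmissionFailureHandler interface {
	HandleAdmissionFailure(pod *v1.Pod, failureReasons []PredicateFailureReason) ([]PredicateFailureReason, error)
}
```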

cc: @vishh
2017-02-26 12:57:59 -08:00
Kubernetes Submit Queue
452420484c Merge pull request #41982 from deads2k/agg-18-ca-permissions
Automatic merge from submit-queue

Add namespaced role to inspect particular configmap for delegated authentication

Builds on https://github.com/kubernetes/kubernetes/pull/41814 and https://github.com/kubernetes/kubernetes/pull/41922 (those are already lgtm'ed) with the ultimate goal of making an extension API server zero-config for "normal" authentication cases.

This part creates a namespaced role in `kube-system` that can *only* look at the configmap which backs the delegated authentication check.  When a cluster-admin grants the SA running the extension API server the power to run delegated authentication checks, they should also bind this role in this namespace.
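
A hedged sketch of such a role using rbac/v1 types; the role name and exact rule are assumptions, not copied from the PR:

```go
// Hedged sketch of a namespaced role that can only read the delegated
// authentication configmap; name and verbs are illustrative.
package rbacroleexample

import (
	rbacv1 "k8s.io/api/rbac/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)

func authReaderRole() *rbacv1.Role {
	return &rbacv1.Role{
		ObjectMeta: metav1.ObjectMeta{
			Name:      "extension-apiserver-authentication-reader", // assumed name
			Namespace: "kube-system",
		},
		Rules: []rbacv1.PolicyRule{{
			Verbs:         []string{"get"},
			APIGroups:     []string{""},
			Resources:     []string{"configmaps"},
			ResourceNames: []string{"extension-apiserver-authentication"},
		}},
	}
}
```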

@sttts Should we add a flag to aggregated API servers to indicate they want to look this up so they can crashloop on startup?  The alternative is sometimes having it and sometimes not.  I guess we could try to key on explicit "disable front-proxy" which may make more sense.

@kubernetes/sig-api-machinery-misc 

@ncdc I spoke to @liggitt about this before he left and he was ok in concept.  Can you take a look at the details?
2017-02-26 12:12:49 -08:00
Kubernetes Submit Queue
c4835f2626 Merge pull request #41864 from marun/kubectl-drain-orphans
Automatic merge from submit-queue (batch tested with PRs 41857, 41864, 40522, 41835, 41991)

kubectl: Allow 'drain --force' to remove orphaned pods

If the managing resource of a given pod (e.g. DaemonSet/ReplicaSet/etc) is deleted (effectively orphaning the pod), and ``kubectl drain --force`` is invoked on the node hosting the pod, the command would fail with an error indicating that the managing resource was not found.  This PR reduces the error to a warning if ``--force`` is specified, allowing nodes with orphaned pods to be drained.   

Reference: https://bugzilla.redhat.com/show_bug.cgi?id=1424678

cc: @derekwaynecarr 

```release-note
Allow drain --force to remove pods whose managing resource is deleted.
```
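
A hedged sketch of the control-flow change (illustrative, not kubectl's actual code):

```go
// Hedged sketch: with --force, a missing managing resource becomes a
// warning instead of a fatal error.
package drainexample

import "fmt"

func checkManagingResource(found, force bool) error {
	if found {
		return nil
	}
	if force {
		fmt.Println("WARNING: pod's managing resource not found; deleting anyway because --force was given")
		return nil
	}
	return fmt.Errorf("pod's managing resource not found; use --force to delete anyway")
}
```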
2017-02-26 11:13:53 -08:00
Kubernetes Submit Queue
a1490926d6 Merge pull request #41077 from deads2k/cli-01-cani
Automatic merge from submit-queue (batch tested with PRs 41814, 41922, 41957, 41406, 41077)

add kubectl can-i to see if you can perform an action

Adds `kubectl auth can-i <verb> <resource> [<name>]` so that a user can see if they are allowed to perform an action.

@kubernetes/sig-cli-pr-reviews @fabianofranz 

This particular command satisfies the immediate need of knowing if you can perform an action without trying that action.  When using RBAC in a script that is adding permissions, there is a lag between adding the permission and the permission being realized in the RBAC cache.  As a user on the CLI, you almost never see it, but as a script adding a binding and then using that new power, you hit it quite often.

There are natural follow-ons to the same area (hence the `auth` subcommand) to figure out if someone else can perform an action, what actions you can perform in total, and who can perform a given action.  Someone else is an API we have already, what-can-i-do was a proposed API a while back and a very useful one for interfaces, and who-can is common question if someone is administering a namespace.
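
Checks like this map onto the SelfSubjectAccessReview API; a hedged sketch of the review object such a command would submit (using today's authorization/v1 types; the original may have used an earlier API version):

```go
// Hedged sketch: a `can-i create deployments`-style check expressed as a
// SelfSubjectAccessReview; the command's actual wiring differs.
package caniexample

import authorizationv1 "k8s.io/api/authorization/v1"

func canICreateDeployments() *authorizationv1.SelfSubjectAccessReview {
	return &authorizationv1.SelfSubjectAccessReview{
		Spec: authorizationv1.SelfSubjectAccessReviewSpec{
			ResourceAttributes: &authorizationv1.ResourceAttributes{
				Verb:     "create",
				Group:    "apps",
				Resource: "deployments",
			},
		},
	}
}
```

The apiserver fills in `status.allowed` on creation, which is what the command reports.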
2017-02-26 10:22:54 -08:00
Kubernetes Submit Queue
0b54264d3e Merge pull request #41406 from jsafrane/operation-backoff
Automatic merge from submit-queue (batch tested with PRs 41814, 41922, 41957, 41406, 41077)

pv_controller: Do not report exponential backoff as error.

It's not an error when recycle/delete/provision operation cannot be started
because it has failed recently. It will be restarted automatically when
backoff expires.

This just pollutes logs without any useful information:
```
E0214 08:00:30.428073   77288 pv_controller.go:1410] error scheduling operaion "delete-pvc-1fa0e8b4-f2b5-11e6-a8bb-fa163ecb84eb[1fbd52ee-f2b5-11e6-a8bb-fa163ecb84eb]": Failed to create operation with name "delete-pvc-1fa0e8b4-f2b5-11e6-a8bb-fa163ecb84eb[1fbd52ee-f2b5-11e6-a8bb-fa163ecb84eb]". An operation with that name failed at 2017-02-14 08:00:15.631133152 -0500 EST. No retries permitted until 2017-02-14 08:00:31.631133152 -0500 EST (16s). Last error: "Cannot delete the volume \"11a4faea-bfc7-4713-88b3-dec492480dba\", it's still attached to a node".
```

```release-note
NONE
```

@kubernetes/sig-storage-pr-reviews
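
A hedged sketch of the distinction being drawn, with an illustrative sentinel error rather than the controller's real types: backoff is an expected state, not an error.

```go
// Hedged sketch: only genuinely unexpected failures are logged as errors;
// an operation refused because its backoff has not expired is expected.
package pvexample

import (
	"errors"
	"log"
)

// errInBackoff stands in for the "no retries permitted until ..." error.
var errInBackoff = errors.New("operation is in exponential backoff")

func scheduleOperation(name string, run func() error) {
	if err := run(); err != nil {
		if errors.Is(err, errInBackoff) {
			log.Printf("operation %q postponed by backoff (expected)", name) // low severity
			return
		}
		log.Printf("ERROR: scheduling operation %q: %v", name, err)
	}
}
```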
2017-02-26 10:22:53 -08:00
Kubernetes Submit Queue
2eef3b1a14 Merge pull request #41957 from liggitt/mirror-pod-secrets
Automatic merge from submit-queue (batch tested with PRs 41814, 41922, 41957, 41406, 41077)

Use consistent helper for getting secret names from pod

Kubelet secret-manager and mirror-pod admission both need to know what secrets a pod spec references. Eventually, a node authorizer will also need to know the list of secrets.

This creates a single (well, double, because api versions) helper that can be used to traverse the secret names referenced from a pod, optionally short-circuiting (for places that are just looking to see if any secrets are referenced, like admission, or are looking for a particular secret ref, like authorization)

Fixes:
* secret manager not handling secrets used by env/envFrom in initcontainers
* admission allowing mirror pods with secret references
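
A hedged, much-simplified sketch of such a short-circuiting visitor (the real helper also covers volumes, image pull secrets, and more):

```go
// Hedged, simplified sketch of a short-circuiting secret-name visitor.
package podsecretsexample

import v1 "k8s.io/api/core/v1"

// VisitSecretNames calls visitor for each secret name referenced from the
// pod's containers (including init containers, which the old code missed);
// returning false from the visitor short-circuits the traversal.
func VisitSecretNames(pod *v1.Pod, visitor func(name string) bool) bool {
	for _, list := range [][]v1.Container{pod.Spec.InitContainers, pod.Spec.Containers} {
		for _, c := range list {
			for _, env := range c.EnvFrom {
				if env.SecretRef != nil && !visitor(env.SecretRef.Name) {
					return false
				}
			}
			for _, env := range c.Env {
				if env.ValueFrom != nil && env.ValueFrom.SecretKeyRef != nil &&
					!visitor(env.ValueFrom.SecretKeyRef.Name) {
					return false
				}
			}
		}
	}
	return true
}
```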

@smarterclayton @wojtek-t
2017-02-26 10:22:51 -08:00
Kubernetes Submit Queue
92c44c9d42 Merge pull request #41922 from deads2k/rbac-05-reconcile
Automatic merge from submit-queue

make reconciliation generic to handle roles and clusterroles

We have a need to reconcile regular roles, so this pull moves the reconciliation code to use interfaces (still tightly coupled) rather than structs.
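
A hedged sketch of the kind of interface extraction described; the names are illustrative, not the PR's:

```go
// Hedged sketch: reconciling Roles and ClusterRoles through a common
// interface instead of concrete structs.
package reconcileexample

import rbacv1 "k8s.io/api/rbac/v1"

// RuleOwner abstracts whatever owns a set of policy rules, so the same
// reconciliation logic can handle both Roles and ClusterRoles.
type RuleOwner interface {
	GetRules() []rbacv1.PolicyRule
	SetRules([]rbacv1.PolicyRule)
}

// reconcile overwrites the existing rules with the desired ones; the real
// code computes missing/extra rules rather than blindly replacing.
func reconcile(existing, desired RuleOwner) {
	existing.SetRules(desired.GetRules())
}
```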

@liggitt @kubernetes/sig-auth-pr-reviews
2017-02-26 10:17:34 -08:00
Kubernetes Submit Queue
1519422aba Merge pull request #41814 from deads2k/agg-06-cas
Automatic merge from submit-queue

add client-ca to configmap in kube-public

Client CA information is not secret and it's required for any API server trying to terminate a TLS connection.  This pull adds the information to configmaps in `kube-public` that look like this:


```yaml
apiVersion: v1
data:
  client-ca.crt: |
    -----BEGIN CERTIFICATE-----
    -----END CERTIFICATE-----
  requestheader-allowed-names: '["system:auth-proxy"]'
  requestheader-client-ca-file: |
    -----BEGIN CERTIFICATE-----
    -----END CERTIFICATE-----
  requestheader-extra-headers-prefix: '["X-Remote-Extra-"]'
  requestheader-group-headers: '["X-Remote-Group"]'
  requestheader-username-headers: '["X-Remote-User"]'
kind: ConfigMap
metadata:
  creationTimestamp: 2017-02-22T17:54:37Z
  name: extension-apiserver-authentication
  namespace: kube-system
  resourceVersion: "6"
  selfLink: /api/v1/namespaces/kube-system/configmaps/extension-apiserver-authentication
  uid: fa1dd328-f927-11e6-8b0e-28d2447dc82b

```

@kubernetes/sig-auth-api-reviews @liggitt @kubernetes/sig-api-machinery-pr-reviews @lavalamp @sttts 


There will need to be a corresponding pull for permissions
2017-02-26 09:32:44 -08:00
Kubernetes Submit Queue
cff3c99613 Merge pull request #41628 from humblec/glusterfs-refactor
Automatic merge from submit-queue (batch tested with PRs 40932, 41896, 41815, 41309, 41628)

Factor new GetClusterNodes() out of CreateVolume().
2017-02-26 08:10:02 -08:00
Kubernetes Submit Queue
9a218d406b Merge pull request #41309 from kars7e/add-cafile-openstack
Automatic merge from submit-queue (batch tested with PRs 40932, 41896, 41815, 41309, 41628)

Add custom CA file to openstack cloud provider config

**What this PR does / why we need it**: Adds ability to specify custom CA bundle file to verify OpenStack endpoint against. Useful in tests and PoC deployments. Similar to what https://github.com/kubernetes/kubernetes/pull/35488 did for authentication.  


**Which issue this PR fixes**: None

**Special notes for your reviewer**: Based on https://github.com/kubernetes/kubernetes/pull/35488 which added support for custom CA file for authentication.
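
On the Go side, honoring a custom CA bundle boils down to standard-library TLS setup; a hedged sketch (the provider's actual config plumbing differs):

```go
// Hedged sketch: building a TLS config that trusts a custom CA bundle,
// as a cloud provider would for a self-signed OpenStack endpoint.
package caexample

import (
	"crypto/tls"
	"crypto/x509"
	"fmt"
	"os"
)

func tlsConfigWithCA(caFile string) (*tls.Config, error) {
	pem, err := os.ReadFile(caFile)
	if err != nil {
		return nil, err
	}
	pool := x509.NewCertPool()
	if !pool.AppendCertsFromPEM(pem) {
		return nil, fmt.Errorf("no certificates found in %s", caFile)
	}
	return &tls.Config{RootCAs: pool}, nil
}
```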

**Release note**:
2017-02-26 08:10:00 -08:00
Kubernetes Submit Queue
dd29e6cdc7 Merge pull request #41896 from kevin-wangzefeng/daemonset-infinite-default-toleration-seconds
Automatic merge from submit-queue (batch tested with PRs 40932, 41896, 41815, 41309, 41628)

Make DaemonSets survive taint-based evictions when nodes turn unreachable/notReady

**What this PR does / why we need it**:
Daemon pods shouldn't be deleted by the NodeController in case of node problems.
This PR adds infinite tolerations for the Unreachable/NotReady NoExecute taints, so that daemon pods won't be deleted by the NodeController when a node goes unreachable/notReady.

**Which issue this PR fixes** :
fixes #41738 
Related PR: #41133


**Special notes for your reviewer**:

**Release note**:

```release-note
Make DaemonSets survive taint-based evictions when nodes turn unreachable/notReady.
```
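
A hedged sketch of the "infinite" tolerations: a nil `TolerationSeconds` on a NoExecute toleration means tolerate forever. The taint keys shown are the 1.6-era alpha names and have since changed:

```go
// Hedged sketch: NoExecute tolerations with nil TolerationSeconds never
// expire; taint keys are the 1.6-era alpha names.
package dsexample

import v1 "k8s.io/api/core/v1"

func foreverTolerations() []v1.Toleration {
	return []v1.Toleration{
		{Key: "node.alpha.kubernetes.io/notReady", Operator: v1.TolerationOpExists, Effect: v1.TaintEffectNoExecute},
		{Key: "node.alpha.kubernetes.io/unreachable", Operator: v1.TolerationOpExists, Effect: v1.TaintEffectNoExecute},
	}
}
```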
2017-02-26 08:09:56 -08:00
Kubernetes Submit Queue
80e6492f03 Merge pull request #40932 from peay/cronjob-max-finished-jobs
Automatic merge from submit-queue (batch tested with PRs 40932, 41896, 41815, 41309, 41628)

Modify CronJob API to add job history limits, cleanup jobs in controller

**What this PR does / why we need it**:
As discussed in #34710: this adds two limits to `CronJobSpec` that bound how many finished jobs created by a CronJob are kept.

**Which issue this PR fixes**: fixes #34710

**Special notes for your reviewer**:

cc @soltysh, please have a look and let me know what you think -- I'll then add end to end testing and update the doc in a separate commit. What is the timeline to get this into 1.6?

The plan:

- [x] API changes
  - [x] Changing versioned APIs
    - [x] `types.go`
    - [x] `defaults.go` (nothing to do)
    - [x] `conversion.go` (nothing to do?)
    - [x] `conversion_test.go` (nothing to do?)
  - [x] Changing the internal structure
    - [x] `types.go`
    - [x] `validation.go`
    - [x] `validation_test.go`
  - [x] Edit version conversions
    - [x] Edit (nothing to do?)
    - [x] Run `hack/update-codegen.sh`
  - [x] Generate protobuf objects
    - [x] Run `hack/update-generated-protobuf.sh`
  - [x] Generate json (un)marshaling code
    - [x] Run `hack/update-codecgen.sh`
  - [x] Update fuzzer
- [x] Actual logic
- [x] Unit tests
- [x] End to end tests
- [x] Documentation changes and API specs update in separate commit


**Release note**:

```release-note
Add configurable limits to CronJob resource to specify how many successful and failed jobs are preserved.
```
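
A hedged sketch of the two limits as they appear on the spec (shown with today's batch/v1 types; at the time CronJob lived in batch/v2alpha1). Leaving a limit nil preserves the old keep-everything behavior, at least as originally proposed:

```go
// Hedged sketch: the two history limits are optional (*int32) fields on
// the CronJob spec.
package cronexample

import batchv1 "k8s.io/api/batch/v1"

func withHistoryLimits(cj *batchv1.CronJob, succeeded, failed int32) {
	cj.Spec.SuccessfulJobsHistoryLimit = &succeeded
	cj.Spec.FailedJobsHistoryLimit = &failed
}
```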
2017-02-26 08:09:54 -08:00
Kubernetes Submit Queue
5c3791b9e0 Merge pull request #41729 from smarterclayton/refactor_printers
Automatic merge from submit-queue (batch tested with PRs 41621, 41946, 41941, 41250, 41729)

Refactor printers and describers into their own package.

This sets the stage for using printer code from the server side (decoupled from kubectl) and loosens the coupling between kubectl and the printers. `pkg/printers` contains interfaces and has an import restriction against pulling in API specific code, while `pkg/printers/internalversion` can be used for internal types.

Add a method on `Factory` for retrieving PrinterForCommand which uses the Scheme and RESTMapper from the Factory, not the hardcoded ones.  This further separates kubectl from the core API scheme and allows better composition.

Change NamePrinter to use RESTMapper (previously it was hardcoding those conversions). This means that we now return plural resource names (`pods/foo`), which is correct once aliases and shortnames start being returned by the mapper.

This is a prerequisite for server side get, but is pure refactor (contains no new features).

@deads2k @liggitt
2017-02-26 06:47:03 -08:00
Kubernetes Submit Queue
3f4ef9ae11 Merge pull request #41250 from kargakis/switch-get-from-cache
Automatic merge from submit-queue (batch tested with PRs 41621, 41946, 41941, 41250, 41729)

controller: poll replica sets from the cache
2017-02-26 06:47:00 -08:00
Kubernetes Submit Queue
8e531de1d5 Merge pull request #41946 from freehan/hostport-fix
Automatic merge from submit-queue (batch tested with PRs 41621, 41946, 41941, 41250, 41729)

bug fix for hostport-syncer

Fix a bug introduced by the previous refactoring of the hostport-syncer (https://github.com/kubernetes/kubernetes/pull/39443), and fix some nits.
2017-02-26 06:46:55 -08:00
Kubernetes Submit Queue
28a8d783e6 Merge pull request #41621 from derekwaynecarr/best-effort-qos-shares
Automatic merge from submit-queue

BestEffort QoS class has min cpu shares

**What this PR does / why we need it**:
BestEffort QoS class is given the minimum amount of CPU shares per the QoS design.
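
A hedged sketch of the idea: 2 is the kernel's minimum for cpu.shares, so BestEffort pods get that floor instead of an unset weight (illustrative, not the kubelet's code):

```go
// Hedged sketch: BestEffort pods get the kernel-minimum cpu.shares value.
package qosexample

const minShares = 2 // kernel minimum for cpu.shares

func cpuSharesForQoS(qosClass string, requestedShares uint64) uint64 {
	if qosClass == "BestEffort" {
		return minShares
	}
	return requestedShares
}
```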
2017-02-26 06:32:43 -08:00
Kubernetes Submit Queue
3c059c0a2f Merge pull request #42098 from kargakis/fix-rs-rc-validation
Automatic merge from submit-queue (batch tested with PRs 42106, 42094, 42069, 42098, 41852)

Fix availableReplicas validation

An available replica is a ready replica, not the other way around

@kubernetes/sig-apps-bugs caught while testing https://github.com/kubernetes/kubernetes/pull/42097
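
A hedged sketch of the corrected invariant:

```go
// Hedged sketch: available implies ready, so availableReplicas can never
// exceed readyReplicas.
package validateexample

import "fmt"

func validateReplicaCounts(available, ready int32) error {
	if available > ready {
		return fmt.Errorf("availableReplicas (%d) cannot be greater than readyReplicas (%d)", available, ready)
	}
	return nil
}
```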
2017-02-26 04:34:00 -08:00
Jordan Liggitt
41c88e0455 Revert "Merge pull request #40088 from jsafrane/storage-ga-v1"
This reverts commit 5984607cb9, reversing
changes made to 067f92e789.
2017-02-25 22:35:15 -05:00
Kubernetes Submit Queue
5984607cb9 Merge pull request #40088 from jsafrane/storage-ga-v1
Automatic merge from submit-queue (batch tested with PRs 41854, 41801, 40088, 41590, 41911)

Add storage.k8s.io/v1 API

The v1 API is a direct copy of the v1beta1 API. This v1 API gets installed and exposed in this PR; I tested that kubectl can create both v1beta1 and v1 StorageClass.

~~Rest of Kubernetes (controllers, examples, tests, ...) still uses the v1beta1 API; I will update it when this PR gets merged, as these changes would get lost among generated code.~~ Most parts use the v1 API now; it would not compile / run tests without it.

**Release note**:
```
Kubernetes API storage.k8s.io for storage objects is now fully supported and is available as storage.k8s.io/v1. Beta version of the API storage.k8s.io/v1beta1 is still available in this release, however it will be removed in a future Kubernetes release.

Together with the API endpoint, StorageClass annotation "storageclass.beta.kubernetes.io/is-default-class" is deprecated and "storageclass.kubernetes.io/is-default-class" should be used instead to mark a default storage class. The beta annotation is still working in this release, however it won't be supported in the next one.
```
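
A hedged sketch of a default class under the GA API, using the non-beta annotation from the release note (the name and provisioner are illustrative):

```go
// Hedged sketch: a StorageClass using the now-GA storage.k8s.io/v1 types,
// marked default via the non-beta annotation.
package scexample

import (
	storagev1 "k8s.io/api/storage/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)

func defaultClass() *storagev1.StorageClass {
	return &storagev1.StorageClass{
		ObjectMeta: metav1.ObjectMeta{
			Name: "standard", // illustrative name
			Annotations: map[string]string{
				"storageclass.kubernetes.io/is-default-class": "true",
			},
		},
		Provisioner: "kubernetes.io/aws-ebs", // illustrative provisioner
	}
}
```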

@kubernetes/sig-storage-misc
2017-02-25 05:02:55 -08:00
Kubernetes Submit Queue
067f92e789 Merge pull request #41801 from riverzhang/patch-1
Automatic merge from submit-queue (batch tested with PRs 41854, 41801, 40088, 41590, 41911)

Fix some typos

**Release note**:

```release-note
```
2017-02-25 05:02:53 -08:00
Michail Kargakis
f7fa286b65 Add status validation unit tests, validate updatedReplicas 2017-02-25 13:47:29 +01:00
Kubernetes Submit Queue
258a5cb3f1 Merge pull request #40665 from brendandburns/i18n
Automatic merge from submit-queue (batch tested with PRs 40665, 41094, 41351, 41721, 41843)

Update i18n tools and process.

@fabianofranz @zen @kubernetes/sig-cli-pr-reviews 

This is an update to the translation process based on feedback from folks.

The main changes are:
   * `msgctx` is being removed from the files.
   * String wrapping and string extraction have been separated.
   * More tools from the `gettext` family of tools are being used
   * Extracted strings are being sorted for canonical ordering
   * A `.pot` template has been added.
2017-02-25 03:56:51 -08:00
Michail Kargakis
e0288342ef Fix availableReplicas validation 2017-02-25 12:53:31 +01:00
peay
ca3c4b3993 Re-generate code and API spec for CronJob API 2017-02-25 06:51:59 -05:00
peay
2b33de0684 Modify CronJob API to add job history limits, cleanup jobs in controller 2017-02-25 06:51:54 -05:00
Kubernetes Submit Queue
a426904009 Merge pull request #31515 from jsafrane/format-error
Automatic merge from submit-queue (batch tested with PRs 41714, 41510, 42052, 41918, 31515)

Show specific error when a volume is formatted by unexpected filesystem.

The kubelet now detects that e.g. an xfs volume is being mounted as ext3 because of
a wrong volume.Spec.

The mount error is left in the error message to diagnose issues with mounting e.g. an
'ext3' volume as 'ext4' - they are different filesystems, however the kernel should
mount ext3 as ext4 without errors.

Example kubectl describe pod output:

```
  FirstSeen     LastSeen        Count   From                                    SubobjectPath   Type            Reason          Message
  41s           3s              7       {kubelet ip-172-18-3-82.ec2.internal}                   Warning         FailedMount     MountVolume.MountDevice failed for volume "kubernetes.io/aws-ebs/aws://us-east-1d/vol-ba79c81d" (spec.Name: "pvc-ce175cbb-6b82-11e6-9fe4-0e885cca73d3") pod "3d19cb64-6b83-11e6-9fe4-0e885cca73d3" (UID: "3d19cb64-6b83-11e6-9fe4-0e885cca73d3") with: failed to mount the volume as "ext4", it's already formatted with "xfs". Mount error: mount failed: exit status 32
Mounting arguments: /dev/xvdba /var/lib/kubelet/plugins/kubernetes.io/aws-ebs/mounts/aws/us-east-1d/vol-ba79c81d ext4 [defaults]
Output: mount: wrong fs type, bad option, bad superblock on /dev/xvdba,
       missing codepage or helper program, or other error

       In some cases useful info is found in syslog - try
       dmesg | tail or so.
```
2017-02-25 02:17:57 -08:00
Kubernetes Submit Queue
8e6af485f9 Merge pull request #41918 from ncdc/shared-informers-14-scheduler
Automatic merge from submit-queue (batch tested with PRs 41714, 41510, 42052, 41918, 31515)

Switch scheduler to use generated listers/informers

Where possible, switch the scheduler to use generated listers and
informers. There are still some places where it probably makes more
sense to use one-off reflectors/informers (listing/watching just a
single node, listing/watching scheduled & unscheduled pods using a field
selector).
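
A hedged sketch of the generated listers/informers pattern, written against today's client-go package layout (which has moved since this PR):

```go
// Hedged sketch: obtaining a generated pod lister from the shared
// informer factory; the scheduler's actual wiring differs.
package informerexample

import (
	"time"

	"k8s.io/client-go/informers"
	"k8s.io/client-go/kubernetes"
	v1listers "k8s.io/client-go/listers/core/v1"
)

func podLister(client kubernetes.Interface, stop <-chan struct{}) v1listers.PodLister {
	factory := informers.NewSharedInformerFactory(client, 30*time.Second)
	lister := factory.Core().V1().Pods().Lister()
	factory.Start(stop)            // start all requested informers
	factory.WaitForCacheSync(stop) // block until the caches are primed
	return lister
}
```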

I think this can wait until master is open for 1.7 pulls, given that we're close to the 1.6 freeze.

After this and #41482 go in, the only code left that references legacylisters will be federation, and 1 bit in a stateful set unit test (which I'll clean up in a follow-up).

@resouer I imagine this will conflict with your equivalence class work, so one of us will be doing some rebasing 😄 

cc @wojtek-t @gmarek  @timothysc @jayunit100 @smarterclayton @deads2k @liggitt @sttts @derekwaynecarr @kubernetes/sig-scheduling-pr-reviews @kubernetes/sig-scalability-pr-reviews
2017-02-25 02:17:55 -08:00
Kubernetes Submit Queue
a93904eaa5 Merge pull request #42052 from derekwaynecarr/disable-groups-per-qos
Automatic merge from submit-queue (batch tested with PRs 41714, 41510, 42052, 41918, 31515)

Disable cgroups-per-qos pending Burstable/cpu.shares being set

Disable cgroups-per-qos so that the kubemark problems can still be resolved.

Re-enable it once the following merge:
https://github.com/kubernetes/kubernetes/pull/41753
https://github.com/kubernetes/kubernetes/pull/41644
https://github.com/kubernetes/kubernetes/pull/41621

Enabling it before cpu.shares is set on qos tiers can cause regressions since Burstable and BestEffort pods are given equal time.
2017-02-25 02:17:54 -08:00
Kubernetes Submit Queue
734dfcb3d8 Merge pull request #41510 from kargakis/fix-progress-check-requeue
Automatic merge from submit-queue (batch tested with PRs 41714, 41510, 42052, 41918, 31515)

controller: fix requeueing progressing deployments

Drop the secondary queue and add either ratelimited or after the
required amount of time that we need to wait directly in the main
queue. In this way we can always be sure that we will sync back
the Deployment if its progress has yet to resolve into a complete
(NewReplicaSetAvailable) or TimedOut condition.

This should also simplify the deployment controller a bit.
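
A hedged sketch of the single-queue pattern, using the client-go workqueue primitives the description alludes to:

```go
// Hedged sketch: requeue into the one main queue, either rate-limited or
// after the exact wait needed for the progress-deadline check.
package requeueexample

import (
	"time"

	"k8s.io/client-go/util/workqueue"
)

func requeue(q workqueue.RateLimitingInterface, key string, after time.Duration) {
	if after > 0 {
		q.AddAfter(key, after) // re-sync once the deadline may have passed
		return
	}
	q.AddRateLimited(key)
}
```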

Fixes https://github.com/kubernetes/kubernetes/issues/39785. Once this change soaks, I will move the test out of the flaky suite.

@kubernetes/sig-apps-misc
2017-02-25 02:17:53 -08:00
Saad Ali
6ba69ed9a1 Merge pull request #42081 from Random-Liu/remove-extra-operation-when-start-podsandbox
Remove extra operations when generating pod sandbox configuration.
2017-02-24 20:42:21 -08:00
Kubernetes Submit Queue
46b20acba2 Merge pull request #41876 from kargakis/add-approvers-in-rc-rs-controllers
Automatic merge from submit-queue

controller: add approvers for rc/rs
2017-02-24 15:34:27 -08:00
Random-Liu
8380148d48 Remove extra operations when generating pod sandbox configuration. 2017-02-24 15:06:03 -08:00
deads2k
4a06b69579 add client-ca to configmap in kube-public 2017-02-24 14:51:12 -05:00
elipapa
136c90a7bf solving unknown file attribute error while sourcing completions
Sourcing the file with `zsh` > 4 resulted in an `unknown file attribute` error.
More details at http://stackoverflow.com/questions/37220495/zsh-unknown-file-attribute

Replacing `$@` with `$*` in `get_comp_words`, as suggested by @sttts, resolved the issue.
2017-02-24 16:29:41 +00:00
Derek Carr
36f4256afd Disable cgroups-per-qos pending Burstable/cpu.shares being set 2017-02-24 10:16:41 -05:00
Kubernetes Submit Queue
6edd079024 Merge pull request #42041 from yu-song/close-file-handle
Automatic merge from submit-queue

Add f.close for the opened file
2017-02-24 05:30:27 -08:00
Kubernetes Submit Queue
4c1b875ca0 Merge pull request #39196 from resouer/omit-dot
Automatic merge from submit-queue

kubelet config should ignore files starting with dots

Fixes: #39156

Ignore files starting with a dot.
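
A hedged sketch of the filter (illustrative helper, not the kubelet's actual code):

```go
// Hedged sketch: skip any config file whose base name starts with a dot.
package configexample

import (
	"path/filepath"
	"strings"
)

func isHiddenFile(path string) bool {
	return strings.HasPrefix(filepath.Base(path), ".")
}
```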
2017-02-24 05:30:21 -08:00
Jan Safranek
a1b6eeefc8 Update kubectl unit test 2017-02-24 13:52:16 +01:00
Jan Safranek
fa93f1c411 Update imports 2017-02-24 13:52:16 +01:00
Jan Safranek
3e7d6067da Install storage v1 API 2017-02-24 13:52:15 +01:00
Jan Safranek
cea7a46de1 Regenerate everything 2017-02-24 13:34:18 +01:00
Jan Safranek
3f6caca97a Add storage.k8s.io/v1 2017-02-24 13:34:18 +01:00
gmarek
f9d6086217 Fix leftover Taint-related helper function 2017-02-24 09:24:33 +01:00
gmarek
6637592b1d generated 2017-02-24 09:24:33 +01:00
gmarek
d88af7806c NodeController sets NodeTaints instead of deleting Pods 2017-02-24 09:24:33 +01:00
SongRuixia
6b1cf1d71c Add f.close for the opened file 2017-02-24 16:18:22 +08:00
Kubernetes Submit Queue
3adc12c5f5 Merge pull request #41113 from vmware/AddDatastoreParamForDynamicProvisioning
Automatic merge from submit-queue

Fix for Support selection of datastore for dynamic provisioning in vS…

Fixes #40558

The current vSphere cloud provider doesn't allow a user to select a datastore for dynamic provisioning. All volumes are created in the default datastore provided by the user in the global vSphere configuration file.

With this fix, the user will be able to provide the datastore in the storage class definition. This will allow the volumes to be created in the datastore specified by the user in the storage class definition. This field is optional. If no datastore is specified, the volume will be created in the default datastore specified in the global config file.

For example:

User creates a storage class with the datastore

```yaml
kind: StorageClass
apiVersion: storage.k8s.io/v1beta1
metadata:
  name: slow
provisioner: kubernetes.io/vsphere-volume
parameters:
  diskformat: thin
  datastore: VMFSDatastore
```

Now the volume will be created in the datastore "VMFSDatastore" specified by the user.

If the user creates a storage class without any datastore

```yaml
kind: StorageClass
apiVersion: storage.k8s.io/v1beta1
metadata:
  name: slow
provisioner: kubernetes.io/vsphere-volume
parameters:
  diskformat: thin
```

Now the volume will be created in the default datastore specified in the global configuration file (vsphere.conf).

@pdhamdhere @kerneltime
2017-02-23 22:10:42 -08:00