Commit Graph

201 Commits

Author SHA1 Message Date
Xiaoyu Zhang
e3d534b2c4 Fix a typo
2017-03-31 10:17:19 +08:00
Connor Doyle
364dbc0ca5 Revert "Revert "Pods pending due to insufficient OIR should get scheduled once sufficient OIR becomes available.""
- This reverts commit 60758f3fff.
- Disabled opaque integer resource end-to-end tests.
2017-03-06 17:48:09 -08:00
Dawn Chen
60758f3fff Revert "Pods pending due to insufficient OIR should get scheduled once sufficient OIR becomes available." 2017-03-06 14:27:17 -08:00
Connor Doyle
8a42189690 Fix unbounded growth of cached OIRs in sched cache
- Added schedulercache.Resource.SetOpaque helper.
- Amend kubelet allocatable sync so that when OIRs are removed from capacity
  they are also removed from allocatable.
- Fixes #41861.
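
A minimal Go sketch of the allocatable-sync fix described above, under illustrative types; the real code operates on the kubelet's node status, and the OIR name prefix below is an assumption matching the convention of this era:

```go
// Illustrative sketch: prune opaque integer resources (OIRs) from
// allocatable once they disappear from capacity, so the cached set
// cannot grow without bound. Names and the prefix are assumptions.
package main

import (
	"fmt"
	"strings"
)

const opaquePrefix = "pod.alpha.kubernetes.io/opaque-int-resource-"

// syncAllocatable removes any OIR from allocatable that no longer
// appears in capacity.
func syncAllocatable(capacity, allocatable map[string]int64) {
	for name := range allocatable {
		if !strings.HasPrefix(name, opaquePrefix) {
			continue // only opaque resources are pruned here
		}
		if _, ok := capacity[name]; !ok {
			delete(allocatable, name)
		}
	}
}

func main() {
	capacity := map[string]int64{"cpu": 4000}
	allocatable := map[string]int64{"cpu": 4000, opaquePrefix + "foo": 2}
	syncAllocatable(capacity, allocatable)
	fmt.Println(allocatable) // map[cpu:4000] — stale OIR removed
}
```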
2017-03-04 09:26:22 -08:00
Janet Kuo
4c882477e9 Make DaemonSet respect critical pods annotation when scheduling 2017-02-27 09:59:45 -08:00
Kubernetes Submit Queue
18fe59264c Merge pull request #41875 from wanghaoran1988/fix_issue
Automatic merge from submit-queue (batch tested with PRs 41954, 40528, 41875, 41165, 41877)

Move the scheduler Fake* to testing folder

Address this issue: https://github.com/kubernetes/kubernetes/issues/41662
@timothysc I moved the interface to types.go and the Fake* to the testing package; not sure whether you like it or not.
2017-02-26 14:54:52 -08:00
Kubernetes Submit Queue
1359ffc502 Merge pull request #41818 from aveshagarwal/master-taints-tolerations-api-fields-pod-spec-updates
Automatic merge from submit-queue (batch tested with PRs 41701, 41818, 41897, 41119, 41562)

Allow updates to pod tolerations.

Opening this PR to continue the discussion about updating pod spec tolerations after a pod has already been scheduled. This PR is built on top of https://github.com/kubernetes/kubernetes/pull/38957.

@kubernetes/sig-scheduling-pr-reviews @liggitt @davidopp @derekwaynecarr @kubernetes/rh-cluster-infra
2017-02-26 14:02:51 -08:00
Kubernetes Submit Queue
16f87fe7d8 Merge pull request #40952 from dashpole/premption
Automatic merge from submit-queue (batch tested with PRs 41994, 41969, 41997, 40952, 40576)

Guaranteed admission for Critical Pods

This is the first step in implementing node-level preemption for critical pods.
It defines the AdmissionFailureHandler interface, which allows callers, like the kubelet, to define how failed predicates are handled, and take steps to correct failures if necessary.
In the kubelet's implementation, if the pod being admitted is critical and the only failed predicates are InsufficientResourceErrors, it preempts (not yet implemented) other pods to allow admission of the critical pod.
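
A minimal Go sketch of the AdmissionFailureHandler idea described above; the types, annotation key, and handler logic are illustrative assumptions rather than the exact upstream API:

```go
// Sketch of the AdmissionFailureHandler pattern: a caller such as the
// kubelet decides how to react when a pod fails admission predicates.
package main

import "fmt"

type Pod struct {
	Name        string
	Annotations map[string]string
}

type PredicateFailure struct {
	Reason               string
	InsufficientResource bool
}

// AdmissionFailureHandler lets callers define how failed predicates are
// handled and correct failures if necessary.
type AdmissionFailureHandler interface {
	HandleAdmissionFailure(pod *Pod, failures []PredicateFailure) (admit bool, err error)
}

type criticalPodHandler struct{}

func (criticalPodHandler) HandleAdmissionFailure(pod *Pod, failures []PredicateFailure) (bool, error) {
	if _, critical := pod.Annotations["scheduler.alpha.kubernetes.io/critical-pod"]; !critical {
		return false, nil // only critical pods may trigger preemption
	}
	for _, f := range failures {
		if !f.InsufficientResource {
			return false, nil // preemption cannot fix non-resource failures
		}
	}
	// Here the kubelet would evict non-critical pods to free resources.
	fmt.Printf("preempting other pods to admit critical pod %q\n", pod.Name)
	return true, nil
}

func main() {
	var h AdmissionFailureHandler = criticalPodHandler{}
	pod := &Pod{Name: "kube-dns", Annotations: map[string]string{
		"scheduler.alpha.kubernetes.io/critical-pod": "",
	}}
	admit, _ := h.HandleAdmissionFailure(pod, []PredicateFailure{
		{Reason: "InsufficientResourceError", InsufficientResource: true},
	})
	fmt.Println("admit:", admit)
}
```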

cc: @vishh
2017-02-26 12:57:59 -08:00
Haoran Wang
4540bb9e77 move the lister.go to testing folder 2017-02-25 23:36:27 +08:00
David Ashpole
c58970e47c critical pods can preempt other pods to be admitted 2017-02-23 10:31:20 -08:00
Avesh Agarwal
b9d95b4426 Allow toleration updates via pod spec. 2017-02-23 11:06:13 -05:00
Andy Goldstein
9d8d6ad16c Switch scheduler to use generated listers/informers
Where possible, switch the scheduler to use generated listers and
informers. There are still some places where it probably makes more
sense to use one-off reflectors/informers (listing/watching just a
single node, listing/watching scheduled & unscheduled pods using a field
selector).
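
For the one-off case mentioned above, a hedged client-go sketch of listing/watching only unscheduled pods via a field selector; exact import paths and signatures differ between client-go versions, so treat this as an approximation:

```go
// Sketch: a one-off list/watch scoped by a field selector, as the text
// describes for unscheduled pods.
package scheduler

import (
	"time"

	v1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/apimachinery/pkg/fields"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/cache"
)

func unscheduledPodInformer(client kubernetes.Interface) cache.SharedInformer {
	// spec.nodeName == "" selects pods that no scheduler has placed yet.
	selector := fields.OneTermEqualSelector("spec.nodeName", "")
	lw := cache.NewListWatchFromClient(
		client.CoreV1().RESTClient(), "pods", metav1.NamespaceAll, selector)
	return cache.NewSharedInformer(lw, &v1.Pod{}, 30*time.Second)
}
```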
2017-02-23 09:57:12 -05:00
Avesh Agarwal
b4d3d24eaf Update tests. 2017-02-22 09:27:42 -05:00
Avesh Agarwal
9b640838a5 Change taint/toleration annotations to api fields. 2017-02-22 09:27:42 -05:00
Kubernetes Submit Queue
f2e234e47f Merge pull request #41398 from codablock/azure_max_pd
Automatic merge from submit-queue

Add scheduler predicate to filter for max Azure disks attached

**What this PR does / why we need it**: This PR adds a scheduler predicate for the maximum Azure Disk count. It allows the environment variable KUBE_MAX_PD_VOLS to be used with the scheduler, just as is already possible with GCE and AWS.

This is needed because we need a way to specify the maximum number of attachable disks on Azure, to avoid permanently failing disk attachments when k8s schedules too many pods with AzureDisk volumes onto the same node.

I've chosen 16 as the default value for DefaultMaxAzureDiskVolumes, even though it may be too high for many smaller VM types and too low for the larger ones. This means the default behavior may change for clusters with large VM types; for smaller VM types, the behavior will not change (attachment will keep failing).

In the future, the value should be determined at runtime on a per-node basis, depending on the VM size. I know this is already implemented in the ongoing Azure Managed Disks work, but I no longer remember where to find it or who was working on it. Maybe @colemickens can help here.
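
A small sketch of how such a predicate might resolve its per-node volume limit, with `KUBE_MAX_PD_VOLS` overriding the per-cloud default of 16; the function name is illustrative:

```go
// Sketch of resolving the per-node volume limit: KUBE_MAX_PD_VOLS, when
// set, overrides the per-cloud default (16 for Azure per this PR).
package main

import (
	"fmt"
	"os"
	"strconv"
)

const DefaultMaxAzureDiskVolumes = 16

func maxVolumesPerNode(cloudDefault int) int {
	if v := os.Getenv("KUBE_MAX_PD_VOLS"); v != "" {
		if n, err := strconv.Atoi(v); err == nil && n > 0 {
			return n // explicit operator override wins
		}
	}
	return cloudDefault
}

func main() {
	fmt.Println("max AzureDisk volumes per node:", maxVolumesPerNode(DefaultMaxAzureDiskVolumes))
}
```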

**Release note**:

```release-note
Support KUBE_MAX_PD_VOLS on Azure
```

CC @colemickens @brendandburns
2017-02-21 06:11:09 -08:00
Kubernetes Submit Queue
ba6dca94bc Merge pull request #41458 from humblec/iscsi-nodisk-conflict
Automatic merge from submit-queue

Adjust nodiskconflict support based on iscsi multipath.

With multipath support in place, determining whether two iscsi disks are the same needs to depend only on the IQN.
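
A minimal sketch of the resulting conflict check under illustrative types: with multipath, the same disk can be reached through different portals, so only the IQN is compared:

```go
// Sketch: after the multipath change, only the IQN identifies the disk;
// portals may legitimately differ for the same target.
package main

import "fmt"

type ISCSIVolume struct {
	Portal string // e.g. "10.0.0.1:3260"; ignored for conflict checks
	IQN    string // iSCSI Qualified Name identifying the target disk
}

// sameDisk reports whether two iSCSI volumes refer to the same disk.
func sameDisk(a, b ISCSIVolume) bool {
	return a.IQN == b.IQN
}

func main() {
	a := ISCSIVolume{Portal: "10.0.0.1:3260", IQN: "iqn.2017-02.com.example:disk1"}
	b := ISCSIVolume{Portal: "10.0.0.2:3260", IQN: "iqn.2017-02.com.example:disk1"}
	fmt.Println("conflict despite different portals:", sameDisk(a, b)) // true
}
```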

Signed-off-by: Humble Chirammal <hchiramm@redhat.com>
2017-02-20 03:54:41 -08:00
Alexander Block
73a0083a84 Add scheduler predicate to filter for max Azure disks attached 2017-02-20 09:00:18 +01:00
Timothy St. Clair
2bcd63c524 Cleanup work to enable feature gating annotations 2017-02-18 09:25:57 -06:00
Robert Rati
32c4683242 Feature-Gate affinity in annotations 2017-02-18 09:08:38 -06:00
Humble Chirammal
7a1ac6c6db Adjust nodiskconflict support based on iscsi multipath feature.
Signed-off-by: Humble Chirammal <hchiramm@redhat.com>
2017-02-16 16:24:53 +05:30
Kubernetes Submit Queue
4ed86f5d46 Merge pull request #41076 from gyliu513/port-forward
Automatic merge from submit-queue

Removed a space in portforward.go.

2017-02-08 07:59:10 -08:00
Guangya Liu
9607edc556 Clean up some typos.
1) Removed a space in portforward.go.
2) Renamed `lockAquisitionFunc` to `lockAcquisitionFunc` in
controller.go.
3) Fixed typo in predicates.go.
2017-02-08 09:39:03 +08:00
gmarek
37585b06e0 Scheduler doesn't schedule Pods not tolerating NoExecute Taints 2017-02-07 13:56:48 +01:00
Kevin
36dcb57407 forgiveness library changes 2017-01-31 21:39:17 +08:00
Kubernetes Submit Queue
3dbbd0bdf4 Merge pull request #40606 from deads2k/client-17-sync
Automatic merge from submit-queue (batch tested with PRs 34543, 40606)

sync client-go and move util/workqueue

The vision for client-go is that it provides enough utilities to build a reasonable controller. It has been carrying a copy of `util/workqueue`; this PR makes that copy authoritative.

@liggitt I'm getting really close to making client-go authoritative ptal.

approved based on https://github.com/kubernetes/kubernetes/issues/40363
2017-01-30 08:19:10 -08:00
Kubernetes Submit Queue
83791b0ee4 Merge pull request #34543 from ivan4th/dont-require-failure-domains-for-pod-affinity-checker
Automatic merge from submit-queue

Don't require failureDomains in PodAffinityChecker

`failureDomains` are only used for `PreferredDuringScheduling` pod
anti-affinity, which is ignored by `PodAffinityChecker`.
This unnecessary requirement was making it hard to move
`PodAffinityChecker` to `GeneralPredicates` because that would require
passing `--failure-domains` to both `kubelet` and `kube-controller-manager`.
2017-01-30 08:18:32 -08:00
deads2k
2c1c0f3f72 move workqueue to client-go 2017-01-30 09:08:21 -05:00
Dr. Stefan Schimanski
44ea6b3f30 Update generated files 2017-01-29 21:41:45 +01:00
Dr. Stefan Schimanski
bc6fdd925d pkg/api/resource: move to apimachinery 2017-01-29 21:41:44 +01:00
deads2k
1ce0637b27 move listers out of cache to reduce import tree 2017-01-20 15:01:38 -05:00
Kubernetes Submit Queue
b2e134a724 Merge pull request #36693 from ConnorDoyle/oir-cleanup
Automatic merge from submit-queue (batch tested with PRs 36693, 40154, 40170, 39033)

Minor hygiene in scheduler.

**What this PR does / why we need it**:

Minor cleanups in scheduler, related to PR #31652.

- Unified lazy opaque resource caching.
- Deleted a commented-out line of code.

**Release note**:
```release-note
N/A
```
2017-01-20 09:18:49 -08:00
Klaus Ma
aea7b1faab Improve code coverage for algorithm/predicates. 2017-01-18 23:23:26 +08:00
Clayton Coleman
bcde05753b Correct import statements 2017-01-17 16:18:18 -05:00
Clayton Coleman
9a2a50cda7 refactor: use metav1.ObjectMeta in other types 2017-01-17 16:17:19 -05:00
Connor Doyle
94b9c0e20c Minor hygiene in scheduler.
- Unified lazy opaque resource caching.
- Deleted a commented-out line of code.
2017-01-17 07:00:07 -08:00
Kubernetes Submit Queue
b1506004cc Merge pull request #39601 from mqliang/upstream-tolerates-taints-bugfix
Automatic merge from submit-queue (batch tested with PRs 39945, 39601)

bugfix for PodToleratesNodeTaints

`PodToleratesNodeTaints` predicate func should return true if the pod has no toleration annotations and the node's taint effect is `PreferNoSchedule`
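
A sketch of the fixed predicate logic under illustrative types: only `NoSchedule` taints can reject a pod, so a pod with no tolerations still passes on a node whose taints are all `PreferNoSchedule`:

```go
// Sketch of the corrected logic: NoSchedule taints are hard requirements;
// PreferNoSchedule never fails the predicate.
package main

import "fmt"

type Taint struct{ Key, Value, Effect string }
type Toleration struct{ Key, Value, Effect string }

func tolerates(tol Toleration, taint Taint) bool {
	return tol.Key == taint.Key && tol.Value == taint.Value &&
		(tol.Effect == "" || tol.Effect == taint.Effect)
}

func podToleratesNodeTaints(tolerations []Toleration, taints []Taint) bool {
	for _, taint := range taints {
		if taint.Effect != "NoSchedule" {
			continue // PreferNoSchedule is only a soft preference
		}
		tolerated := false
		for _, tol := range tolerations {
			if tolerates(tol, taint) {
				tolerated = true
				break
			}
		}
		if !tolerated {
			return false
		}
	}
	return true
}

func main() {
	taints := []Taint{{Key: "dedicated", Value: "gpu", Effect: "PreferNoSchedule"}}
	fmt.Println(podToleratesNodeTaints(nil, taints)) // true: no tolerations needed
}
```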
2017-01-17 04:08:47 -08:00
Klaus Ma
c184fef6e6 Fixed pod anti-affinity bugs. 2017-01-17 13:28:54 +08:00
Robert Rati
6a3ad93d6c [scheduling] Moved pod affinity and anti-affinity from annotations to api
fields. #25319
2017-01-12 14:54:29 -05:00
deads2k
6a4d5cd7cc start the apimachinery repo 2017-01-11 09:09:48 -05:00
Seth Jennings
4c30459e49 switch from local qos types to api types 2017-01-10 10:54:30 -06:00
mqliang
d473646855 bugfix for PodToleratesNodeTaints 2017-01-09 18:16:43 +08:00
Jeff Grafton
20d221f75c Enable auto-generating sources rules 2017-01-05 14:14:13 -08:00
Mike Danese
161c391f44 autogenerated 2016-12-29 13:04:10 -08:00
Kubernetes Submit Queue
69ddd8eb27 Merge pull request #39247 from wojtek-t/optimize_controller_manager_memory
Automatic merge from submit-queue

Avoid unnecessary memory allocations

Low-hanging fruit in saving memory allocations. During our 5000-node kubemark runs I've seen this:

ControllerManager:
- 40.17% k8s.io/kubernetes/pkg/util/system.IsMasterNode
- 19.04% k8s.io/kubernetes/pkg/controller.(*PodControllerRefManager).Classify

Scheduler:
- 42.74% k8s.io/kubernetes/plugin/pkg/scheduler/algorithm/predicates.(*MaxPDVolumeCountChecker).filterVolumes

This PR is eliminating all of those.
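
A sketch of the allocation-avoidance pattern behind fixes like this (compare "Avoid copying volumes in scheduler" below): ranging over a slice by value copies each element on every iteration, while indexing does not; the types here are illustrative:

```go
// Sketch of the copy-avoidance idea with an illustrative Volume type:
// `for _, v := range vols` copies every element; indexing avoids it.
package main

import "fmt"

type Volume struct {
	Name string
	// A large payload stands in for the many fields that make the real
	// struct expensive to copy on every predicate evaluation.
	payload [1024]byte
}

func countNamed(vols []Volume) int {
	n := 0
	for i := range vols { // index, not value: no per-element copy
		if vols[i].Name != "" {
			n++
		}
	}
	return n
}

func main() {
	vols := make([]Volume, 3)
	vols[0].Name = "pd-1"
	fmt.Println(countNamed(vols)) // 1
}
```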
2016-12-28 00:02:59 -08:00
Wojciech Tyczynski
ba07a36651 Avoid copying volumes in scheduler 2016-12-27 16:11:11 +01:00
Kubernetes Submit Queue
7b134995e5 Merge pull request #37513 from xiaolou86/podAffinity
Automatic merge from submit-queue

Optimize the pod affinity predicate

Optimize by returning as early as possible to avoid invoking priorityutil.PodMatchesTermsNamespaceAndSelector.
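
A minimal sketch of this early-return shape, with hypothetical types standing in for the real predicate code: the cheap namespace check runs first, so the expensive selector match is skipped whenever it cannot change the result:

```go
// Hypothetical shape of the optimization: fail fast on the namespace
// check before evaluating the (expensive) label selector.
package main

type Pod struct {
	Namespace string
	Labels    map[string]string
}

type Term struct {
	Namespaces []string
	// matches stands in for the real label-selector evaluation.
	matches func(labels map[string]string) bool
}

func podMatchesTerm(pod *Pod, term Term) bool {
	inNamespace := false
	for _, ns := range term.Namespaces {
		if ns == pod.Namespace {
			inNamespace = true
			break
		}
	}
	if !inNamespace {
		return false // early return: selector never evaluated
	}
	return term.matches(pod.Labels)
}

func main() {
	term := Term{
		Namespaces: []string{"default"},
		matches:    func(l map[string]string) bool { return l["app"] == "web" },
	}
	pod := &Pod{Namespace: "kube-system", Labels: map[string]string{"app": "web"}}
	println(podMatchesTerm(pod, term)) // false, without invoking the selector
}
```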
2016-12-27 06:46:54 -08:00
Joe Finney
b4c87a94a8 Remove two zany unit tests. 2016-12-16 14:49:05 -08:00
Robert Rati
91931c138e [scheduling] Moved node affinity from annotations to api fields. #35518 2016-12-16 11:42:43 -05:00
Kubernetes Submit Queue
59ad9a30ca Merge pull request #36060 from resouer/fix-service-affinity
Automatic merge from submit-queue

Add use case to service affinity

Also, as part of minor cleanups while refactoring the predicates, I found the explanation of `serviceaffinity` in its comment very hard to understand, so I added an example to help users and developers digest it.
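
A rough sketch of the behavior such an example would describe, under illustrative names: if a service's existing pods run on a node labeled `region=r1`, later pods of that service only fit nodes whose `region` label is also `r1`:

```go
// Illustrative sketch of the service-affinity idea: candidate nodes must
// match the label values of the node(s) already running the service's pods.
package main

import "fmt"

// nodeFitsServiceAffinity checks the affinity labels (e.g. "region")
// against the labels seen on nodes hosting the service's existing pods.
func nodeFitsServiceAffinity(candidate, existing map[string]string, affinityLabels []string) bool {
	for _, key := range affinityLabels {
		if candidate[key] != existing[key] {
			return false
		}
	}
	return true
}

func main() {
	existing := map[string]string{"region": "r1"} // first pod landed in r1
	fmt.Println(nodeFitsServiceAffinity(map[string]string{"region": "r1"}, existing, []string{"region"})) // true
	fmt.Println(nodeFitsServiceAffinity(map[string]string{"region": "r2"}, existing, []string{"region"})) // false
}
```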
2016-12-15 04:10:08 -08:00
Harry Zhang
a0e836a378 Add use case to service affinity 2016-12-14 16:59:35 +08:00