Commit Graph

3513 Commits

Kubernetes Submit Queue
2ba2cc1321
Merge pull request #61615 from janetkuo/rm-adopt-hash
Automatic merge from submit-queue (batch tested with PRs 61790, 61808, 60339, 61615, 61757). If you want to cherry-pick this change to another branch, please follow the instructions [here](https://github.com/kubernetes/community/blob/master/contributors/devel/cherry-picks.md).

Deployment to stop adding pod-template-hash labels/selector on adoption

**What this PR does / why we need it**: This is a blocker for #55714, because the ReplicaSet selector becomes immutable in `apps/v1`. With controller ref, a Deployment's ReplicaSets and Pods can avoid fighting with each other without a unique label/selector (pod-template-hash), so it's safe to stop adding the hash label/selector on adoption.
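
For illustration, a minimal Go sketch of adoption by controller ref alone (the `adopt` helper and its shape are assumptions for this example, not the actual deployment controller code):

```go
import (
	appsv1 "k8s.io/api/apps/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)

// adopt claims an existing ReplicaSet for the Deployment via an owner
// reference only; before this change, the controller also patched
// pod-template-hash into the ReplicaSet's labels and selector here.
func adopt(d *appsv1.Deployment, rs *appsv1.ReplicaSet) {
	ref := metav1.NewControllerRef(d, appsv1.SchemeGroupVersion.WithKind("Deployment"))
	rs.OwnerReferences = append(rs.OwnerReferences, *ref)
}
```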

**Which issue(s) this PR fixes** *(optional, in `fixes #<issue number>(, fixes #<issue_number>, ...)` format, will close the issue(s) when PR gets merged)*:
Fixes #61433

**Special notes for your reviewer**: This is a behavioral change to Deployment controller that will affect all versions of Deployment APIs (`apps/v1`, `extensions/v1beta1`, `apps/v1beta1`, `apps/v1beta2`). 

**Release note**:

```release-note
Deployment will stop adding pod-template-hash labels/selector to ReplicaSets and Pods it adopts. Resources created by Deployments are not affected (will still have pod-template-hash labels/selector). 
```
2018-03-28 09:39:18 -07:00
Janet Kuo
cda3f18b8c Remove unused Deployment util functions 2018-03-26 17:08:31 -07:00
Janet Kuo
647d8d8a22 Deployment to stop adding pod-template-hash labels/selector on adoption 2018-03-26 15:52:42 -07:00
Kubernetes Submit Queue
0ab01d19c9
Merge pull request #61375 from satyasm/cloud-cidr-bound-retries
Automatic merge from submit-queue (batch tested with PRs 60455, 61365, 61375, 61597, 61491). If you want to cherry-pick this change to another branch, please follow the instructions [here](https://github.com/kubernetes/community/blob/master/contributors/devel/cherry-picks.md).

Fix #61363, Bounded retries for cloud allocator.

**What this PR does / why we need it**:

**Which issue(s) this PR fixes** *(optional, in `fixes #<issue number>(, fixes #<issue_number>, ...)` format, will close the issue(s) when PR gets merged)*:
Fixes #61363

**Special notes for your reviewer**:
Changed the tracking of nodesInProcessing from a set to map[string]int so that we can count the
number of times we re-process a node and avoid re-queuing it once updateMaxRetries is exceeded.
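
As a hedged sketch of that change (field and constant names are taken from this description and the release note; the surrounding structure is an assumption):

```go
import "sync"

const updateMaxRetries = 10 // matches the release note: 10 retries

type cloudCIDRAllocator struct {
	lock              sync.Mutex
	nodesInProcessing map[string]int // node name -> processing attempts (was a set)
}

// insertNodeToProcessing counts an attempt and reports whether the node
// may be processed; once updateMaxRetries is exceeded it refuses, so the
// node is not re-queued indefinitely.
func (ca *cloudCIDRAllocator) insertNodeToProcessing(nodeName string) bool {
	ca.lock.Lock()
	defer ca.lock.Unlock()
	if ca.nodesInProcessing[nodeName] >= updateMaxRetries {
		return false
	}
	ca.nodesInProcessing[nodeName]++
	return true
}
```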

**Release note**:

```release-note
Bound cloud allocator to 10 retries with 100 ms delay between retries.
```
2018-03-26 15:34:45 -07:00
Satyadeep Musuvathy
adc71ff034 Fix #61363, Bounded retries for cloud allocator. 2018-03-23 12:17:31 -07:00
Kubernetes Submit Queue
2a3144e377
Merge pull request #61367 from enisoc/apps-v1-rs
Automatic merge from submit-queue (batch tested with PRs 60980, 61273, 60811, 61021, 61367). If you want to cherry-pick this change to another branch, please follow the instructions [here](https://github.com/kubernetes/community/blob/master/contributors/devel/cherry-picks.md).

Use apps/v1 ReplicaSet in controller and tests.

This updates the RS/RC controller and RS integration/e2e tests to use apps/v1 ReplicaSet, as part of #55714.

It does *not* update the Deployment controller, nor its integration/e2e tests, to use apps/v1 ReplicaSet. That will be done in a separate PR (#61419) because Deployment has many more tendrils embedded throughout the system.

```release-note
Conformance: ReplicaSet must be supported in the `apps/v1` version.
```

/assign @janetkuo
2018-03-22 02:08:27 -07:00
Kubernetes Submit Queue
3d4cd0ace3
Merge pull request #61362 from bskiba/test-em-missing
Automatic merge from submit-queue (batch tested with PRs 60373, 61098, 61352, 61359, 61362). If you want to cherry-pick this change to another branch, please follow the instructions [here](https://github.com/kubernetes/community/blob/master/contributors/devel/cherry-picks.md).

Add HPA test for FailedGetExternalMetric

**What this PR does / why we need it**:
Add an HPA test for missing external metrics.

**Release note**:

```
NONE
```
2018-03-21 22:39:22 -07:00
Kubernetes Submit Queue
971bd430d3
Merge pull request #61013 from andyxning/cronjob_remove_getNextStartTimeAfter
Automatic merge from submit-queue (batch tested with PRs 60632, 60806, 59471, 61251, 61013). If you want to cherry-pick this change to another branch, please follow the instructions [here](https://github.com/kubernetes/community/blob/master/contributors/devel/cherry-picks.md).

cronjob_remove_getNextStartTimeAfter

**What this PR does / why we need it**:
`getNextStartTimeAfter` has not been used anywhere in Kubernetes and, as it is a package-internal method, it is safe to remove it.
**Which issue(s) this PR fixes** *(optional, in `fixes #<issue number>(, fixes #<issue_number>, ...)` format, will close the issue(s) when PR gets merged)*:
Fixes #

**Special notes for your reviewer**:

**Release note**:

```release-note
NONE
```
2018-03-21 16:01:23 -07:00
Kubernetes Submit Queue
60836d0d37
Merge pull request #59471 from dmathieu/remove-cronjob-from-client
Automatic merge from submit-queue (batch tested with PRs 60632, 60806, 59471, 61251, 61013). If you want to cherry-pick this change to another branch, please follow the instructions [here](https://github.com/kubernetes/community/blob/master/contributors/devel/cherry-picks.md).

Remove method NewCronJobControllerFromClient

**What this PR does / why we need it**:

This method was originally introduced when cronjob was still called scheduledjob: 7a34347f7f
Back then, both init methods had different signatures.

Since the rename to cronjob (41d88d30dd), this method has been an alias of the normal initializer: it has the same signature and is not used anywhere in the codebase.

Since this method was never actually used for cronjobs, it doesn't seem removing it would need any deprecation notice.

**Which issue(s) this PR fixes** *(optional, in `fixes #<issue number>(, fixes #<issue_number>, ...)` format, will close the issue(s) when PR gets merged)*:
Fixes #

**Special notes for your reviewer**:

**Release note**:


```release-note
Remove the never-used NewCronJobControllerFromClient method
```
2018-03-21 16:01:16 -07:00
Kubernetes Submit Queue
4f0ec7b199
Merge pull request #61487 from gmarek/condition
Automatic merge from submit-queue. If you want to cherry-pick this change to another branch, please follow the instructions [here](https://github.com/kubernetes/community/blob/master/contributors/devel/cherry-picks.md).

Fix condition for using network unavailable taint in cloud_cidr_allocator

Ref. #61481

The 'networkUnavailable' condition has, in a sense, reverse logic: we should be trying to allocate a CIDR when the condition is "true", i.e. when the taint exists.
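
A sketch of the corrected check (a hypothetical `needsCIDR` helper, written against the core/v1 node conditions):

```go
import v1 "k8s.io/api/core/v1"

// needsCIDR reports whether the node still needs a CIDR allocated: the
// NetworkUnavailable condition being True (i.e. the taint still exists)
// means the node's network is not set up yet, so allocation should run.
func needsCIDR(node *v1.Node) bool {
	for _, cond := range node.Status.Conditions {
		if cond.Type == v1.NodeNetworkUnavailable {
			return cond.Status == v1.ConditionTrue
		}
	}
	return false
}
```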

```release-note
NONE
```

@shyamjvs @agabet @bowei
2018-03-21 13:40:33 -07:00
Kubernetes Submit Queue
e40ffd7197
Merge pull request #59172 from fisherxu/removeyear
Automatic merge from submit-queue. If you want to cherry-pick this change to another branch, please follow the instructions [here](https://github.com/kubernetes/community/blob/master/contributors/devel/cherry-picks.md).

Remove YEAR field of all generated files and fix kubernetes boilerplate checker

**What this PR does / why we need it**:
Remove YEAR field of all generated files and fix kubernetes boilerplate checker
xref: [remove YEAR fields in gengo #91](https://github.com/kubernetes/gengo/pull/91)

**Which issue(s) this PR fixes** *(optional, in `fixes #<issue number>(, fixes #<issue_number>, ...)` format, will close the issue(s) when PR gets merged)*:
Fixes [kubernetes/gengo#24](https://github.com/kubernetes/gengo/issues/24)

**Special notes for your reviewer**:
/cc @thockin @lavalamp @sttts 

**Release note**:

```release-note
NONE
```
2018-03-21 12:44:37 -07:00
Marek Grabowski
f93598eea5 Fix condition for using network unavailable taint in cloud_cidr_allocator 2018-03-21 17:55:26 +00:00
Kubernetes Submit Queue
d9d1364657
Merge pull request #61124 from satyasm/avoid-sync-delay
Automatic merge from submit-queue. If you want to cherry-pick this change to another branch, please follow the instructions [here](https://github.com/kubernetes/community/blob/master/contributors/devel/cherry-picks.md).

Fix Issue #61123, call syncer.Update on add event.

**What this PR does / why we need it**:

**Which issue(s) this PR fixes** *(optional, in `fixes #<issue number>(, fixes #<issue_number>, ...)` format, will close the issue(s) when PR gets merged)*:
Fixes #61123

**Special notes for your reviewer**:

**Release note**:

```release-note
Fixed #61123 by triggering syncer.Update in all cases, including when a syncer is created
on a new add event.
```
2018-03-21 08:25:44 -07:00
Kubernetes Submit Queue
a7d788d91f
Merge pull request #60886 from mattjmcnaughton/mattjmcnaughton/59975-object-metrics-ignore-unready-pods
Automatic merge from submit-queue. If you want to cherry-pick this change to another branch, please follow the instructions [here](https://github.com/kubernetes/community/blob/master/contributors/devel/cherry-picks.md).

Ignore unready pods when calculating desired replicas

**What this PR does / why we need it**:

This PR causes `GetExternalMetricReplicas` and `GetObjectMetricReplicas` to ignore unready pods when computing the number of desired replicas. If we don't ignore unready pods, there is a risk of overscaling. See the commit messages for examples and implementation info.

**Which issue(s) this PR fixes** *(optional, in `fixes #<issue number>(, fixes #<issue_number>, ...)` format, will close the issue(s) when PR gets merged)*:
Fixes #59975 

**Special notes for your reviewer**:
@MaciekPytel and I consciously chose to save `GetExternalPerPodMetricReplicas` for a separate PR, as we aren't certain yet what the preferred behavior should be.

**Release note**:

```release-note
Unready pods will no longer impact the number of desired replicas when using horizontal auto-scaling with external metrics or object metrics.
```
2018-03-21 01:34:23 -07:00
Kubernetes Submit Queue
a64a11d7e4
Merge pull request #61411 from liggitt/remove-ds-scheduling
Automatic merge from submit-queue. If you want to cherry-pick this change to another branch, please follow the instructions [here](https://github.com/kubernetes/community/blob/master/contributors/devel/cherry-picks.md).

disable DaemonSet scheduling feature for 1.10

The DaemonSet scheduling feature has blocked the alpha CI job from going green and is preventing a good CI signal for v1.10.

It still contains pod scheduling races (#61050) and fundamental issues with the affinity terms it creates (#61410).

As such, there is not significant value in having the feature available in 1.10 in its current state.

This PR disables the feature in order to regain a green signal on the alpha CI job (reverting the commits is likely to be more disruptive at this point).

related to https://github.com/kubernetes/kubernetes/issues/61050

```release-note
DaemonSet scheduling associated with the alpha ScheduleDaemonSetPods feature flag has been removed from the 1.10 release. See https://github.com/kubernetes/features/issues/548 for feature status.
```
2018-03-20 12:34:23 -07:00
Anthony Yeh
c4c6e4bbb8
hack/update-bazel.sh 2018-03-20 11:15:36 -07:00
Kubernetes Submit Queue
8c00efe653
Merge pull request #60831 from resouer/fix-race
Automatic merge from submit-queue (batch tested with PRs 60574, 60666, 60831, 60877, 60357). If you want to cherry-pick this change to another branch, please follow the instructions [here](https://github.com/kubernetes/community/blob/master/contributors/devel/cherry-picks.md).

Fix data race in node lifecycle controller

**What this PR does / why we need it**:
Encountered this bug while fixing https://github.com/kubernetes/kubernetes/pull/60753

There's a data race on `zoneNoExecuteTainter`.

```
--- PASS: TestTaintNodeByCondition (5.72s)
PASS
==================
WARNING: DATA RACE
Write at 0x00c421a8d2f0 by goroutine 1472:
  runtime.mapassign_faststr()
      /usr/local/go/src/runtime/hashmap_fast.go:598 +0x0
  k8s.io/kubernetes/pkg/controller/nodelifecycle.(*Controller).addPodEvictorForNewZone()
      /root/code/kubernetes/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/pkg/controller/nodelifecycle/node_lifecycle_controller.go:1053 +0x37d
  k8s.io/kubernetes/pkg/controller/nodelifecycle.(*Controller).monitorNodeStatus()
      

Previous read at 0x00c421a8d2f0 by goroutine 1471:
  runtime.mapiterinit()
      /usr/local/go/src/runtime/hashmap.go:709 +0x0
  k8s.io/kubernetes/pkg/controller/nodelifecycle.(*Controller).doNoExecuteTaintingPass()
      /root/code/kubernetes/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/pkg/controller/nodelifecycle/node_lifecycle_controller.go:459 +0xec
  k8s.io/kubernetes/pkg/controller/nodelifecycle.(*Controller).(k8s.io/kubernetes/pkg/controller/nodelifecycle.doNoExecuteTaintingPass)-fm()
```
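
The race report above shows the conflicting accesses but not the patch; a typical fix, sketched under the assumption that the controller guards its evictors with a shared mutex, is to take that lock on the write path too:

```go
// addPodEvictorForNewZone writes to the shared zoneNoExecuteTainter map
// that doNoExecuteTaintingPass concurrently iterates, so both paths must
// hold the same mutex (field and constructor names are illustrative).
func (nc *Controller) addPodEvictorForNewZone(zone string) {
	nc.evictorLock.Lock()
	defer nc.evictorLock.Unlock()
	if _, found := nc.zoneNoExecuteTainter[zone]; !found {
		nc.zoneNoExecuteTainter[zone] = newTaintQueue() // hypothetical constructor
	}
}
```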

**Which issue(s) this PR fixes** *(optional, in `fixes #<issue number>(, fixes #<issue_number>, ...)` format, will close the issue(s) when PR gets merged)*:
Fixes #

**Special notes for your reviewer**:

**Release note**:

```release-note
Fix data race in node lifecycle controller
```
2018-03-20 08:34:40 -07:00
Jordan Liggitt
05e4ccecb1
disable DaemonSet scheduling feature for 1.10 2018-03-20 10:50:37 -04:00
Kubernetes Submit Queue
854002c3e1
Merge pull request #59208 from augabet/CIDR_Taints
Automatic merge from submit-queue (batch tested with PRs 60363, 59208, 59465, 60581, 60702). If you want to cherry-pick this change to another branch, please follow the instructions [here](https://github.com/kubernetes/community/blob/master/contributors/devel/cherry-picks.md).

check taints when allocating CIDR for the cloud

Check the taint when allocating a CIDR for the cloud (for the shared informer cache).

**What this PR does / why we need it**:
Following issue #58406, this adds a taint check when allocating a CIDR for the cloud.

**Which issue(s) this PR fixes** *(optional, in `fixes #<issue number>(, fixes #<issue_number>, ...)` format, will close the issue(s) when PR gets merged)*:
Fixes #58406

**Special notes for your reviewer**:
/assign @yastij @gmarek

```release-note
None
```
2018-03-20 02:37:16 -07:00
Kubernetes Submit Queue
d60394eebd
Merge pull request #59542 from dmathieu/remove-cronjob-label-todo
Automatic merge from submit-queue (batch tested with PRs 60189, 59542, 59931, 60621, 60353). If you want to cherry-pick this change to another branch, please follow the instructions [here](https://github.com/kubernetes/community/blob/master/contributors/devel/cherry-picks.md).

Remove todo to consider adding the cronjob name as a label

**What this PR does / why we need it**:

It seems this label shouldn't be added automatically. If so, we should remove the comment.
See #59473

**Which issue(s) this PR fixes** *(optional, in `fixes #<issue number>(, fixes #<issue_number>, ...)` format, will close the issue(s) when PR gets merged)*:
Fixes #41633

**Special notes for your reviewer**:

**Release note**:

```release-note
NONE
```
2018-03-20 00:42:09 -07:00
Kubernetes Submit Queue
1be8e8bb59
Merge pull request #60562 from Pingan2017/cleanupnode
Automatic merge from submit-queue (batch tested with PRs 60457, 60331, 54970, 58731, 60562). If you want to cherry-pick this change to another branch, please follow the instructions [here](https://github.com/kubernetes/community/blob/master/contributors/devel/cherry-picks.md).

clean up unused const in node_lifecycle_controller.go

**What this PR does / why we need it**:

**Which issue(s) this PR fixes** *(optional, in `fixes #<issue number>(, fixes #<issue_number>, ...)` format, will close the issue(s) when PR gets merged)*:
Fixes #

**Special notes for your reviewer**:

**Release note**:

```release-note
NONE
```
2018-03-19 23:42:22 -07:00
Kubernetes Submit Queue
c64f19dd1b
Merge pull request #59728 from wgliang/master.append
Automatic merge from submit-queue (batch tested with PRs 59740, 59728, 60080, 60086, 58714). If you want to cherry-pick this change to another branch, please follow the instructions [here](https://github.com/kubernetes/community/blob/master/contributors/devel/cherry-picks.md).

Merge the slice more concisely

**What this PR does / why we need it**:
Merges the slice more concisely.
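
For context, merging slices "more concisely" in Go usually means replacing an element-by-element loop with a variadic `append` (an illustrative example, not this PR's exact diff):

```go
// Before: copy element by element.
merged := make([]string, 0, len(a)+len(b))
for _, s := range a {
	merged = append(merged, s)
}
for _, s := range b {
	merged = append(merged, s)
}

// After: one variadic append per slice does the same work; the empty
// literal avoids aliasing a's backing array.
merged = append(append([]string{}, a...), b...)
```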

**Special notes for your reviewer**:
2018-03-19 21:34:30 -07:00
Anthony Yeh
f3799fae36
ReplicationController: Use apps/v1 ReplicaSet in conversion layer. 2018-03-19 13:32:08 -07:00
Anthony Yeh
8c4341de4e
ReplicaSet: Use apps/v1 for RS controller. 2018-03-19 13:18:23 -07:00
Damien Mathieu
e8efc51c1c remove todo suggesting to add the cronjob start time 2018-03-19 19:22:14 +01:00
Damien Mathieu
c669ce440c remove todo to consider adding the cronjob name as a label
See #59473
2018-03-19 19:22:14 +01:00
Beata Skiba
dd24087aa3 Add test for FailedGetExternalMetric 2018-03-19 15:22:36 +01:00
Kubernetes Submit Queue
f8f67da082
Merge pull request #61201 from jennybuckley/fix-gc-empty-map
Automatic merge from submit-queue (batch tested with PRs 61284, 61119, 61201). If you want to cherry-pick this change to another branch, please follow the instructions [here](https://github.com/kubernetes/community/blob/master/contributors/devel/cherry-picks.md).

Prevent garbage collector from attempting to sync with 0 resources

**What this PR does / why we need it**:
As of #55259 we enabled garbagecollector.GetDeletableResources to return partial discovery results (including an empty set of discovery results).
This had the unintended consequence of allowing the garbage collector to enter a blocked state that can only be fixed by restarting.

From [this comment](https://github.com/kubernetes/kubernetes/issues/60037#issuecomment-372801088):

> 1. The Sync function periodically calls GetDeletableResources
>
> 2. According to the comment above GetDeletableResources ("All discovery errors are considered temporary. Upon encountering any error, GetDeletableResources will log and return any discovered resources it was able to process (which may be none)."), an error in discovery causes the discovery client to no longer discover resources in the cluster; but instead of failing and returning an error, it simply logs `garbagecollector.go:601] failed to discover preferred resources: the server was unable to return a response in the time allotted, but may still be processing the request` and returns an empty list of resources.
>
> 3. The Sync function, upon receiving an empty resource list from discovery, detects that the resources have changed and calls resyncMonitors, which calls dependencyGraphBuilder.syncMonitors with map[] as the argument (as shown in the log line `garbagecollector.go:189] syncing garbage collector with updated resources from discovery: map[]`), which sets the list of monitors to an empty list because it thinks there are no resources to monitor.
>
> 4. Lastly, the Sync function calls controller.WaitForCacheSync, which calls cache.WaitForCacheSync, which will continually retry the garbagecollector.IsSynced function until it returns true; but it will always return false because `len(gb.monitors)` is 0.

This PR prevents that specific race condition from arising.
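
A minimal sketch of the guard (assumed names; the real change lives in the garbage collector's Sync path):

```go
import (
	"fmt"

	"k8s.io/apimachinery/pkg/runtime/schema"
)

// syncIfDiscovered refuses to resync monitors against an empty resource
// map: zero discovered resources almost always means discovery failed,
// and replacing the monitors with an empty set would make IsSynced
// return false forever, wedging WaitForCacheSync.
func syncIfDiscovered(resources map[schema.GroupVersionResource]struct{}, resync func(map[schema.GroupVersionResource]struct{}) error) error {
	if len(resources) == 0 {
		return fmt.Errorf("no resources reported by discovery, skipping garbage collector sync")
	}
	return resync(resources)
}
```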

**Which issue(s) this PR fixes** *(optional, in `fixes #<issue number>(, fixes #<issue_number>, ...)` format, will close the issue(s) when PR gets merged)*:
Fixes #60037

**Release note**:
```release-note
Fix bug allowing garbage collector to enter a broken state that could only be fixed by restarting the controller-manager.
```
2018-03-16 16:56:03 -07:00
jennybuckley
455c6fb049 Prevent garbage collector from attempting to sync with 0 resources 2018-03-16 11:44:09 -07:00
jennybuckley
68e2a96016 Add unit test TestGarbageCollectorSync 2018-03-16 11:28:58 -07:00
Kubernetes Submit Queue
ca02c11887
Merge pull request #61161 from k82cn/k8s_59194_4
Automatic merge from submit-queue. If you want to cherry-pick this change to another branch, please follow the instructions [here](https://github.com/kubernetes/community/blob/master/contributors/devel/cherry-picks.md).

Added unschedulable taint

Signed-off-by: Da K. Ma <klaus1982.cn@gmail.com>

**What this PR does / why we need it**:

**Which issue(s) this PR fixes** *(optional, in `fixes #<issue number>(, fixes #<issue_number>, ...)` format, will close the issue(s) when PR gets merged)*:
part of #59194; fixes #61050

**Release note**:

```release-note
When `TaintNodesByCondition` is enabled, a `node.kubernetes.io/unschedulable:NoSchedule` taint is added to the node if `spec.Unschedulable` is true.

When `ScheduleDaemonSetPods` is enabled, the `node.kubernetes.io/unschedulable:NoSchedule` toleration is added automatically to DaemonSet Pods, so the `unschedulable` field of a node is not respected by the DaemonSet controller.
```
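
Illustratively, the taint from the release note can be reconciled like this (a sketch with assumed helper names, not the PR's code):

```go
import v1 "k8s.io/api/core/v1"

var unschedulableTaint = v1.Taint{
	Key:    "node.kubernetes.io/unschedulable",
	Effect: v1.TaintEffectNoSchedule,
}

// addUnschedulableTaint adds the taint when spec.Unschedulable is true
// (removing it again when the field flips back is elided for brevity).
func addUnschedulableTaint(node *v1.Node) {
	if !node.Spec.Unschedulable {
		return
	}
	for _, t := range node.Spec.Taints {
		if t.Key == unschedulableTaint.Key && t.Effect == unschedulableTaint.Effect {
			return // already tainted
		}
	}
	node.Spec.Taints = append(node.Spec.Taints, unschedulableTaint)
}
```
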
2018-03-16 11:22:05 -07:00
Kubernetes Submit Queue
5d67222592
Merge pull request #60985 from soltysh/issue59918
Automatic merge from submit-queue (batch tested with PRs 60978, 60985). If you want to cherry-pick this change to another branch, please follow the instructions [here](https://github.com/kubernetes/community/blob/master/contributors/devel/cherry-picks.md).

Backoff only when failed pod shows up

**What this PR does / why we need it**:
Upon introducing the backoff policy, we started to delay sync runs for a job when it had failed several times before. This leads to failed jobs not reporting status right away in cases that are not related to failed pods, e.g. a successful run. This PR ensures the backoff is applied only when `updatePod` receives a failed pod.
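
A hedged sketch of the resulting flow (the type, helper names, and queue usage are assumptions for illustration):

```go
import v1 "k8s.io/api/core/v1"

// enqueueForPodUpdate applies the backoff only when the pod has just
// transitioned to Failed; every other update syncs immediately, so a
// successful run reports job status without delay.
func (jm *JobController) enqueueForPodUpdate(jobKey string, old, cur *v1.Pod) {
	justFailed := cur.Status.Phase == v1.PodFailed && old.Status.Phase != v1.PodFailed
	if justFailed {
		jm.queue.AddRateLimited(jobKey) // back off on failed pods
		return
	}
	jm.queue.Add(jobKey)
}
```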

**Which issue(s) this PR fixes** *(optional, in `fixes #<issue number>(, fixes #<issue_number>, ...)` format, will close the issue(s) when PR gets merged)*:
Fixes #59918 #59527

/assign @janetkuo @kow3ns 

**Release note**:
```release-note
None
```
2018-03-15 22:55:02 -07:00
Da K. Ma
b23db30765 Added unschedulable taint.
Signed-off-by: Da K. Ma <klaus1982.cn@gmail.com>
2018-03-16 09:13:08 +08:00
Kubernetes Submit Queue
02611149c1
Merge pull request #60579 from gmarek/ss_logs
Automatic merge from submit-queue (batch tested with PRs 61118, 60579). If you want to cherry-pick this change to another branch, please follow the instructions [here](https://github.com/kubernetes/community/blob/master/contributors/devel/cherry-picks.md).

Increase logging verbosity for deleting StatefulSet pods

We should always log reasons for deleting StatefulSet Pods.
@jdumars - what's the current process for putting such changes into the release? It's literally a 0-risk change that helps with debugging.

cc @ttz21

```release-note
NONE
```
2018-03-14 09:49:52 -07:00
Maciej Szulik
1266252dc2
Backoff only when failed pod shows up 2018-03-14 11:49:13 +01:00
mattjmcnaughton
d33494d459 GetExternalMetricReplicas ignores unready pods
Similar to the change we made for `GetObjectMetricReplicas` in the
previous commit. Ensure that `GetExternalMetricReplicas` does not
include unready pods when it is determining how many replicas it desires.
Including unready pods can lead to over-scaling.

We did not change the behavior of `GetExternalPerPodMetricReplicas`, as
it is slightly less clear what is the desired behavior. We did make some
small naming refactorings to this method, which will make it easier to
ignore unready pods if we decide we want to.
2018-03-13 22:27:28 -04:00
Satyadeep Musuvathy
4b2de75679 Fix Issue #61123, call syncer.Update on add event. 2018-03-13 11:20:50 -07:00
Kubernetes Submit Queue
f7aafaeb40
Merge pull request #59862 from k82cn/k8s_59194_3
Automatic merge from submit-queue. If you want to cherry-pick this change to another branch, please follow the instructions [here](https://github.com/kubernetes/community/blob/master/contributors/devel/cherry-picks.md).

Task 2: Schedule DaemonSet Pods by default scheduler.

Signed-off-by: Da K. Ma <klaus1982.cn@gmail.com>


**Which issue(s) this PR fixes** *(optional, in `fixes #<issue number>(, fixes #<issue_number>, ...)` format, will close the issue(s) when PR gets merged)*:
part of #59194
https://github.com/kubernetes/features/issues/548

**Release note**:

```release-note
When ScheduleDaemonSetPods is enabled, the DaemonSet controller will delegate Pod scheduling to the default scheduler.
```
2018-03-11 06:19:27 -07:00
Andy Xie
8d16742a32 cronjob_remove_getNextStartTimeAfter 2018-03-11 11:49:11 +08:00
Shyam Jeedigunta
8ff1f05f7c Increase verbosity of frequently printed logline in scheduler_binder 2018-03-08 19:25:01 +01:00
fisherxu
b49ef6531c regenerated all files and remove all YEAR fields 2018-03-08 17:52:48 +08:00
Da K. Ma
5adb2bad45 Task 2: Schedule DaemonSet Pods by default scheduler.
Signed-off-by: Da K. Ma <klaus1982.cn@gmail.com>
2018-03-08 17:36:49 +08:00
mattjmcnaughton
7e3bce7b3e GetObjectMetricReplicas ignores unready pods
Previously, when `GetObjectMetricReplicas` calculated the desired
replica count, it multiplied the usage ratio by the current number of replicas.
This method caused over-scaling when there were pods that were not ready
for a long period of time. For example, if there were pods A, B, and C,
and only pod A was ready, and the usage ratio was 500%, we would
previously specify 15 pods as the desired replicas (even though really
only one pod was handling the load).

After this change, we now multiply the usage
ratio by the number of ready pods for `GetObjectMetricReplicas`.
In the example above, we'd only desire 5 replica pods.

This change gives `GetObjectMetricReplicas` the same behavior as the
other replica calculator methods. Only `GetExternalMetricReplicas` and
`GetExternalPerPodMetricReplicas` still allow unready pods to impact the
number of desired replicas. I will fix this issue in the following
commit.
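
The arithmetic above as a tiny runnable example (a hypothetical helper, not the real replica calculator):

```go
package main

import (
	"fmt"
	"math"
)

// desired scales by the given pod count: a 500% usage ratio over 3
// current pods gave 15 before the change; over 1 ready pod it gives 5.
func desired(usageRatio float64, pods int) int {
	return int(math.Ceil(usageRatio * float64(pods)))
}

func main() {
	const usageRatio = 5.0              // 500%
	fmt.Println(desired(usageRatio, 3)) // old behavior: 15
	fmt.Println(desired(usageRatio, 1)) // new behavior: 5
}
```
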
2018-03-07 08:13:01 -05:00
Harry Zhang
da29bd2cbe Fix data race in node lifecycle controller 2018-03-06 00:18:11 -08:00
Kubernetes Submit Queue
30eb1aa7c5
Merge pull request #60648 from bskiba/hpa-unready
Automatic merge from submit-queue (batch tested with PRs 60732, 60689, 60648, 60704). If you want to cherry-pick this change to another branch, please follow the instructions [here](https://github.com/kubernetes/community/blob/master/contributors/devel/cherry-picks.md).

Do not count failed pods as unready in HPA controller

**What this PR does / why we need it**:
Currently, when performing a scale-up, any failed pods (which can be present, for example, in case of evictions performed by the kubelet) will be treated as unready. Unready pods are treated as if they had 0% utilization, which will slow down or even block scale-up.

After this change, failed pods are ignored in all calculations. This way they influence neither scale-up nor scale-down replica calculations.
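
Sketched in Go (illustrative names; the real logic lives in the HPA's replica calculator), the change amounts to dropping failed pods before the ready/unready split:

```go
import v1 "k8s.io/api/core/v1"

// groupPods drops failed pods entirely, so they count neither as ready
// nor as unready (0%-utilization) pods in scaling decisions.
func groupPods(pods []v1.Pod) (ready, unready []v1.Pod) {
	for _, pod := range pods {
		if pod.Status.Phase == v1.PodFailed {
			continue // ignore failed pods in all calculations
		}
		if isPodReady(&pod) {
			ready = append(ready, pod)
		} else {
			unready = append(unready, pod)
		}
	}
	return ready, unready
}

func isPodReady(pod *v1.Pod) bool {
	for _, c := range pod.Status.Conditions {
		if c.Type == v1.PodReady {
			return c.Status == v1.ConditionTrue
		}
	}
	return false
}
```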

@MaciekPytel @DirectXMan12 

**Which issue(s) this PR fixes** *(optional, in `fixes #<issue number>(, fixes #<issue_number>, ...)` format, will close the issue(s) when PR gets merged)*:
Fixes #55630

**Special notes for your reviewer**:

**Release note**:
```
Stop counting failed pods as unready in HPA controller to avoid failed pods incorrectly affecting scale up replica count calculation.
```
2018-03-02 14:25:54 -08:00
Kubernetes Submit Queue
ae1fc13aee
Merge pull request #60386 from k82cn/k8s_60163
Automatic merge from submit-queue (batch tested with PRs 60683, 60386). If you want to cherry-pick this change to another branch, please follow the instructions [here](https://github.com/kubernetes/community/blob/master/contributors/devel/cherry-picks.md).

Added unschedulable predicate.

Signed-off-by: Da K. Ma <madaxa@cn.ibm.com>

**Which issue(s) this PR fixes** *(optional, in `fixes #<issue number>(, fixes #<issue_number>, ...)` format, will close the issue(s) when PR gets merged)*:
Fixes #60163

**Release note**:
```release-note
None
```
2018-03-02 03:41:50 -08:00
Beata Skiba
e5f8bfa023 Do not count failed pods as unready in HPA controller
Currently, when performing a scale-up, any failed pods (which can be present, for example, in case of evictions performed by the kubelet) will be treated as unready. Unready pods are treated as if they had 0% utilization, which will slow down or even block scale-up.

After this change, failed pods are ignored in all calculations. This way they influence neither scale-up nor scale-down replica calculations.
2018-03-01 16:21:02 +01:00
Marek Grabowski
b27157a271 Increase logging verbosity for deleting StatefulSet pods 2018-03-01 09:33:18 +00:00
Kubernetes Submit Queue
07240b7166
Merge pull request #60555 from zhangxiaoyu-zidif/add-unit-test-for-nodenames-slice-comparison
Automatic merge from submit-queue. If you want to cherry-pick this change to another branch, please follow the instructions [here](https://github.com/kubernetes/community/blob/master/contributors/devel/cherry-picks.md).

add unit test case for nodenames comparison

**What this PR does / why we need it**:
ref https://github.com/kubernetes/kubernetes/pull/60486

**Which issue(s) this PR fixes** *(optional, in `fixes #<issue number>(, fixes #<issue_number>, ...)` format, will close the issue(s) when PR gets merged)*:
Fixes #

**Special notes for your reviewer**:
please merge it after https://github.com/kubernetes/kubernetes/pull/60486

**Release note**:

```release-note
NONE
```
2018-02-28 10:39:18 -08:00
Kubernetes Submit Queue
c4f3102b1f
Merge pull request #60486 from zhangxiaoyu-zidif/fix-nodename-slice-cmp
Automatic merge from submit-queue (batch tested with PRs 60342, 60505, 59218, 52900, 60486). If you want to cherry-pick this change to another branch, please follow the instructions [here](https://github.com/kubernetes/community/blob/master/contributors/devel/cherry-picks.md).

Fix the nodenames slice comparison parameters.

**What this PR does / why we need it**:

**Which issue(s) this PR fixes** *(optional, in `fixes #<issue number>(, fixes #<issue_number>, ...)` format, will close the issue(s) when PR gets merged)*:
Fixes #

**Special notes for your reviewer**:

**Release note**:

```release-note
NONE
```
2018-02-28 06:07:34 -08:00