Commit Graph

387 Commits

Author SHA1 Message Date
Marek Siarkowicz
4ebc0c94a4 Remove legacy metrics client from podautoscaler 2021-06-04 23:06:32 +02:00
Mikkel Oscar Lyderik Larsen
fef092b417
hpa: Don't scale down if at least one metric was invalid
Signed-off-by: Mikkel Oscar Lyderik Larsen <mikkel.larsen@zalando.de>
2021-03-03 07:53:01 +01:00
Benjamin Elder
56e092e382 hack/update-bazel.sh 2021-02-28 15:17:29 -08:00
Joseph Burnett
16133c2b77 Up and down scale stabilize with envelope.
The HPA controller keeps a flat history of recommendations for
stabilization. However when both up and down scale stabilization are
configured, the interpretation of the history changes depending on the
direction of movement. What we want is to keep the stabilized
recommendation within the envelope of the minimum and maximum over
configured stabilization windows. We should only move when the
envelope forces a move.
2020-12-21 14:36:13 +01:00
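
A minimal Go sketch of the envelope described in the commit above, assuming a flat history of timestamped recommendations; the names are illustrative, not the controller's actual identifiers:

```go
package hpa

import "time"

type timestampedRecommendation struct {
	replicas  int32
	timestamp time.Time
}

// stabilize clamps a new recommendation into the envelope formed by the
// minimum over the scale-up window and the maximum over the scale-down
// window, so we only move when the envelope forces a move.
func stabilize(history []timestampedRecommendation, current, proposed int32,
	upWindow, downWindow time.Duration, now time.Time) int32 {
	upLimit, downLimit := proposed, proposed
	for _, rec := range history {
		if rec.timestamp.After(now.Add(-upWindow)) && rec.replicas < upLimit {
			upLimit = rec.replicas // never scale up past the window minimum
		}
		if rec.timestamp.After(now.Add(-downWindow)) && rec.replicas > downLimit {
			downLimit = rec.replicas // never scale down past the window maximum
		}
	}
	stabilized := current
	if stabilized < upLimit {
		stabilized = upLimit
	}
	if stabilized > downLimit {
		stabilized = downLimit
	}
	return stabilized
}
```
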
Kubernetes Prow Robot
c5efee02ac
Merge pull request #89465 from shibataka000/84142-cm
Fix HPA bug about unintentional scale out during updating deployment when using PodMetric.
2020-12-16 04:44:21 -08:00
Ben Hu
4e62298c1b Fix static checks for pkg/controller/podautoscaler 2020-10-23 18:53:07 +00:00
Kubernetes Prow Robot
ec453ffb1a
Merge pull request #90691 from arjunrn/container-resource-hpa
Add container based scaling to HPA
2020-10-23 05:51:51 -07:00
weiwei
b19a115f42 If we set SelectPolicy MinPolicySelect on the scaleUp or scaleDown behavior, the Horizontal Pod Autoscaler doesn't automatically scale the number of pods correctly
Signed-off-by: weiwei <weiwei@tenxcloud.com>
2020-10-22 18:00:49 +08:00
Arjun Naik
0fec7b0f7e Added functionality and API for pod autoscaling based on container resources
Signed-off-by: Arjun Naik <anaik@redhat.com>
2020-10-21 21:10:05 +02:00
Joseph Burnett
1ccaaa768d Ignore deleted pods.
When a pod is deleted, it is given a deletion timestamp. However the
pod might still run for some time during graceful shutdown. During
this time it might still produce CPU utilization metrics and be in a
Running phase.

Currently the HPA replica calculator attempts to ignore deleted pods
by skipping over them. However by not adding them to the ignoredPods
set, their metrics are not removed from the average utilization
calculation. This allows pods in the process of shutting down to drag
down the recommended number of replicas by producing near 0%
utilization metrics.

In fact, the ignoredPods set is a misnomer. Those pods are not fully
ignored. When the replica calculator recommends to scale up, 0%
utilization metrics are filled in for those pods to limit the scale
up. This prevents overscaling when pods take some time to startup. In
fact, there should be 4 sets considered (readyPods, unreadyPods,
missingPods, ignoredPods) not just 3.

This change renames ignoredPods to unreadyPods and keeps the scale-up
limiting semantics. Another set, the (actually) ignoredPods, is added, to
which deleted pods are added instead of being skipped during
grouping. Both ignoredPods and unreadyPods have their metrics removed
from consideration, but only unreadyPods have 0% utilization metrics
filled in upon scale-up.
2020-10-14 16:45:06 +02:00
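
A hedged sketch of the grouping after this change; the signature and the readiness test are simplified, illustrative stand-ins for the real grouping logic:

```go
package hpa

import (
	v1 "k8s.io/api/core/v1"
	"k8s.io/apimachinery/pkg/util/sets"
	podutil "k8s.io/kubernetes/pkg/api/v1/pod"
)

// groupPods sorts pods into the four sets described above.
func groupPods(pods []*v1.Pod, metrics map[string]int64) (ready, unready, missing, ignored sets.String) {
	ready, unready, missing, ignored = sets.NewString(), sets.NewString(), sets.NewString(), sets.NewString()
	for _, pod := range pods {
		if pod.DeletionTimestamp != nil || pod.Status.Phase == v1.PodFailed {
			// Deleted (or failed) pods: metrics are dropped entirely,
			// and no 0% utilization is filled in on scale-up.
			ignored.Insert(pod.Name)
			continue
		}
		if _, found := metrics[pod.Name]; !found {
			missing.Insert(pod.Name)
			continue
		}
		if !podutil.IsPodReady(pod) {
			// Unready pods: metrics dropped, but 0% filled in on scale-up.
			unready.Insert(pod.Name)
			continue
		}
		ready.Insert(pod.Name)
	}
	return ready, unready, missing, ignored
}
```
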
Kobayashi Daisuke
4ae11dac2e Replace StartLogging(klog.Infof) with StartStructuredLogging(0) 2020-06-15 17:48:35 +09:00
Davanum Srinivas
07d88617e5
Run hack/update-vendor.sh
Signed-off-by: Davanum Srinivas <davanum@gmail.com>
2020-05-16 07:54:33 -04:00
Davanum Srinivas
442a69c3bd
switch over k/k to use klog v2
Signed-off-by: Davanum Srinivas <davanum@gmail.com>
2020-05-16 07:54:27 -04:00
Kubernetes Prow Robot
2b2cf8df30
Merge pull request #80700 from mrkm4ntr/add-error-check
Add missing error check
2020-05-11 00:37:51 -07:00
shibataka000
dbd0d566ed Fix bug about unintentional scale out during updating deployment.
This commit fixes a bug in the calcPlainMetricReplicas function.
The same bug in the GetResourceReplicas function was fixed in #85027.
2020-03-25 15:47:40 +09:00
Kubernetes Prow Robot
c441a1a7dc
Merge pull request #85027 from shibataka000/fix-bug-about-unintentional-scale-out-during-updating-deployment
Fix HPA bug about unintentional scale out during updating deployment.
2020-03-24 04:50:46 -07:00
Kubernetes Prow Robot
1827fe444e
Merge pull request #87895 from alexzimmer96/68026-lint-pkg-controller-autoscaler
Fix Golint errors in pkg/controller/podautoscaler
2020-03-17 16:19:53 -07:00
Kubernetes Prow Robot
179fe40d06
Merge pull request #88599 from julianvmodesto/scale-ctx-opts
Add context and options to scale client
2020-03-06 13:17:08 -08:00
Julian V. Modesto
da3c3432d8 Add context and options to scale client 2020-03-02 00:03:26 -05:00
taesun_lee
79680b5d9b Fix pkg/controller typos in some error messages, comments etc
- applied review results by LuisSanchez
- Co-Authored-By: Luis Sanchez <sanchezl@redhat.com>

genernal -> general
iniital -> initial
initalObjects -> initialObjects
intentionaly -> intentionally
inforer -> informer
anotother -> another
triger -> trigger
mutli -> multi
Verifyies -> Verifies
valume -> volume
unexpect -> unexpected
unfulfiled -> unfulfilled
implenets -> implements
assignement -> assignment
expectataions -> expectations
nexpected -> unexpected
boundSatsified -> boundSatisfied
externel -> external
calcuates -> calculates
workes -> workers
unitialized -> uninitialized
afater -> after
Espected -> Expected
nodeMontiorGracePeriod -> NodeMonitorGracePeriod
estimateGrracefulTermination -> estimateGracefulTermination
secondrary -> secondary
ShouldRunDaemonPodOnUnscheduableNode -> ShouldRunDaemonPodOnUnschedulableNode
rrror -> error
expectatitons -> expectations
foud -> found
epackage -> package
succesfulJobs -> successfulJobs
namesapce -> namespace
ConfigMapResynce -> ConfigMapResync
2020-02-27 00:15:33 +09:00
Kubernetes Prow Robot
1a0f923a65
Merge pull request #87712 from alena1108/jan30kubelet
Ineffassign fixes for pkg/controller and kubelet
2020-02-14 14:29:27 -08:00
Mike Danese
25651408ae generated: run refactor 2020-02-08 12:30:21 -05:00
Mike Danese
3aa59f7f30 generated: run refactor 2020-02-07 18:16:47 -08:00
Alexander Zimmermann
e0b1e9206d
Clarified comments 2020-02-07 09:09:49 +01:00
Alexander Zimmermann
a1c837022c
Fixed Golint errors in pkg/controller/podautoscaler 2020-02-06 17:16:38 +01:00
Alena Prokharchyk
6c3093f970 Ineffassign fixes for pkg/controller and kubelet 2020-01-30 14:35:10 -08:00
Mike Danese
968adfa993 cleanup req.Context() and ResponseWrapper 2020-01-29 08:50:45 -08:00
Arjun Naik
8ab226263a Adds tests
Signed-off-by: Arjun Naik <arjun@arjunnaik.in>
2019-12-10 18:09:20 +01:00
Ivan Glushkov
27ffe439b6
Adds the algorithm implementation for the Configurable HPA 2019-12-10 20:37:33 +04:00
shibataka000
b7122770f8 Fix bug about unintentional scale out during updating deployment.
During a rolling update with maxSurge=1 and maxUnavailable=0,
len(metrics) is greater than currentReplicas,
which may cause an unintentional scale out.
2019-11-09 06:24:31 +00:00
yuxiaobo
81e9f21f83 Correct spelling mistakes
Signed-off-by: yuxiaobo <yuxiaobogo@163.com>
2019-11-06 20:25:19 +08:00
wojtekt
7b6bcdf780 Autogenerated code 2019-10-24 20:21:00 +02:00
Bob Killen
e37d702208
Prune inactive owners from autoscaling related OWNERS files. 2019-10-13 08:52:14 -04:00
tanjunchen
de3cf23414 remove repeated words in documents 2019-10-06 23:32:01 +08:00
Joseph Burnett
7bdb66f8d1
Fix reviewer typo. 2019-09-06 12:09:50 +02:00
Kubernetes Prow Robot
927f45191e
Merge pull request #81527 from yastij/move-controller-util
move WaitForCacheSync to the sharedInformer package
2019-08-27 00:52:54 -07:00
Yassine TIJANI
7e4c3096fe move WaitForCacheSync to the sharedInformer package
Signed-off-by: Yassine TIJANI <ytijani@vmware.com>
2019-08-22 16:13:41 +01:00
Joseph Burnett
a5354d04bb Test more replicas than spec.
During a Deployment update there may be more Pods in the scale target
ref status than in the spec. This test verifies that we do not scale
to the status value. Instead we should stay at the spec value.

Fails before #79035 and passes after.
2019-08-06 14:46:34 +02:00
Shintaro Murakami
4635f16dc1 Add missing error check 2019-07-29 14:37:48 +09:00
David Xia
fabfd950b1
cleanup: fix some log and error capitalizations
Part of https://github.com/kubernetes/kubernetes/issues/15863
2019-07-20 18:26:16 -04:00
Kubernetes Prow Robot
5ece88c4c8
Merge pull request #74526 from DXist/feature/hpa-scale-to-zero
Support scaling HPA to/from zero pods for object/external metrics
2019-07-16 10:11:24 -07:00
Rinat Shigapov
d55f037b7d HPA scale-to-zero for custom object/external metrics
Add support for scaling to zero pods

minReplicas is allowed to be zero

condition is set once

Based on https://github.com/kubernetes/kubernetes/pull/61423

set original valid condition

add scale to/from zero and invalid metric tests

Scaling up from zero pods ignores tolerance

validate metrics when minReplicas is 0

Document HPA behaviour when minReplicas is 0

Documented minReplicas field in autoscaling APIs
2019-07-16 08:46:21 -05:00
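
An illustrative sketch of the validation rule described above (not the repository's actual validation function): minReplicas may be zero only when at least one Object or External metric is configured.

```go
package hpa

import (
	"fmt"

	autoscalingv2 "k8s.io/api/autoscaling/v2beta2"
)

// validateScaleToZero sketches the rule above; the real validation lives
// in the API validation package and is more thorough.
func validateScaleToZero(minReplicas *int32, metrics []autoscalingv2.MetricSpec) error {
	if minReplicas == nil || *minReplicas != 0 {
		return nil
	}
	for _, m := range metrics {
		if m.Type == autoscalingv2.ObjectMetricSourceType ||
			m.Type == autoscalingv2.ExternalMetricSourceType {
			return nil // an object/external metric permits minReplicas=0
		}
	}
	return fmt.Errorf("minReplicas may be 0 only with at least one Object or External metric")
}
```
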
Joseph Burnett
7382fa464d Add josephburnett to podautoscaler OWNERS. 2019-07-12 10:20:16 +02:00
Kubernetes Prow Robot
b500c740ee
Merge pull request #79859 from sukeesh/hpa-error-log-fix
HPA incorrectly reported condition status
2019-07-11 07:28:55 -07:00
Kubernetes Prow Robot
57eef32041
Merge pull request #79657 from josephburnett/hpastuck
Ignore unschedulable pods
2019-07-10 11:34:29 -07:00
Joseph Burnett
80e279d353 Ignore pending pods.
This change adds pending pods to the ignored set first before
selecting pods missing metrics. Pending pods are always ignored when
calculating scale.

When the HPA decides which pods and metric values to take into account
when scaling, it divides the pods into three disjoint subsets: 1)
ready 2) missing metrics and 3) ignored. First the HPA selects pods
which are missing metrics. Then it selects pods should be ignored
because they are not ready yet, or are still consuming CPU during
initialization. All the remaining pods go into the ready set. After
the HPA has decided what direction it wants to scale based on the
ready pods, it considers what might have happened if it had the
missing metrics. It makes a conservative guess about what the missing
metrics might have been: 0% if it wants to scale up, 100% if it wants
to scale down. This is a good thing when scaling up, because newly
added pods will likely help reduce the usage ratio, even though their
metrics are missing at the moment. The HPA should wait to see the
results of its previous scale decision before it makes another
one. However when scaling down, it means that many missing metrics can
pin the HPA at high scale, even when load is completely removed. In
particular, when there are many unschedulable pods due to insufficient
cluster capacity, the many missing metrics (assumed to be 100%) can
cause the HPA to avoid scaling down indefinitely.
2019-07-10 12:16:33 +02:00
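
A hedged sketch of the conservative guess described above (names illustrative): pods with missing metrics are assumed to be at 0% of their request when the ready pods point to a scale up, and at 100% when they point to a scale down.

```go
package hpa

import "k8s.io/apimachinery/pkg/util/sets"

// fillMissing backfills utilization (in milli-units of the request) for
// pods with missing metrics, so they can only dampen the pending decision.
func fillMissing(metrics map[string]int64, missing sets.String, requestMilli int64, scalingUp bool) {
	for name := range missing {
		if scalingUp {
			metrics[name] = 0 // assume 0% utilization when scaling up
		} else {
			metrics[name] = requestMilli // assume 100% when scaling down
		}
	}
}
```
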
Sukeesh
44c3f0105f fix incorrect hpa status 2019-07-08 17:27:38 +09:00
Joseph Burnett
39c4875321 There are various reasons that the HPA will decide not to change the
current scale. Two important ones are when missing metrics might
change the direction of scaling, and when the recommended scale is
within tolerance of the current scale.

The way that the ReplicaCalculator signals its desire not to change the
current scale is by returning the current scale. However the current
scale is from scale.Status.Replicas and can be larger than
scale.Spec.Replicas (e.g. during Deployment rollout with configured
surge). This causes a positive feedback loop because
scale.Status.Replicas is written back into scale.Spec.Replicas,
further increasing the current scale.

This PR fixes the feedback loop by plumbing the replica count from
spec through horizontal.go and replica_calculator.go so the calculator
can punt with the right value.
2019-07-02 14:21:32 +02:00
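
A hedged sketch of the fix (parameter names illustrative): the calculator works from, and punts with, the spec replica count rather than the status count.

```go
package hpa

import "math"

// decideReplicas sketches the calculator's decision after the fix:
// specReplicas comes from scale.Spec.Replicas, plumbed through
// horizontal.go and replica_calculator.go.
func decideReplicas(specReplicas int32, usageRatio, tolerance float64) int32 {
	if math.Abs(1.0-usageRatio) <= tolerance {
		// Punt with the spec value so "no change" really is a no-op,
		// instead of writing status back into spec and feeding the loop.
		return specReplicas
	}
	return int32(math.Ceil(usageRatio * float64(specReplicas)))
}
```
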
waynepeking348
b8b1720f12 Fix bug in ObjectPerPodMetricReplicas by initializing replicaCount with currentReplicas 2019-06-05 11:54:03 +00:00
GuyTempleton
1efbde2815
Handle invalid metrics when scaling on multiple metrics
Handle a case in the Horizontal Pod Autoscaler Controller when scaling
on multiple metrics and one or more is missing or invalid.

If all metrics are missing, return an error and leave the isScalingActive
condition as that for the last invalid metric.

If some metrics are missing/invalid and some are valid and found:
if a scale up would be triggered by the valid metrics, ignore the missing
metrics and scale up; if a scale down would be triggered, return an error
and leave the isScalingActive condition as that for the last invalid metric.
2019-05-29 23:20:40 +01:00
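
An illustrative sketch of that policy (not the controller's actual code): take the maximum over the valid per-metric proposals, fail when none are valid, and refuse to scale down while any metric is invalid.

```go
package hpa

import "fmt"

// desiredFromMetrics combines per-metric replica proposals under the policy
// described above; errs[i] != nil marks metric i as missing or invalid.
func desiredFromMetrics(current int32, proposals []int32, errs []error) (int32, error) {
	invalid, anyValid := 0, false
	desired := int32(0)
	for i, p := range proposals {
		if errs[i] != nil {
			invalid++
			continue
		}
		anyValid = true
		if p > desired {
			desired = p
		}
	}
	if !anyValid {
		return 0, fmt.Errorf("all metrics missing or invalid")
	}
	if invalid > 0 && desired < current {
		// Scaling down on partial information is unsafe; surface the error.
		return 0, fmt.Errorf("cannot scale down with invalid metrics present")
	}
	return desired, nil // scale up (or hold) driven by the valid metrics
}
```
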
GuyTempleton
ee4dbbcbff
Add tests for handling scaling on unavailable metrics
Add three tests for handling invalid metrics when scaling on
multiple metrics - one for scaling up successfully (new behaviour)
and two for ensuring we don't scale down (existing behaviour).
2019-05-29 23:11:32 +01:00
Kubernetes Prow Robot
4ebe11a6cb
Merge pull request #76110 from DirectXMan12/infra/prune-owners
Prune directxman12 from metrics/autoscaling OWNERS
2019-04-29 14:35:36 -07:00
Davanum Srinivas
7b8c9acc09
remove unused code
Change-Id: If821920ec8872e326b7d85437ad8d2620807799d
2019-04-19 08:36:31 -04:00
Joel Smith
f50696adda Fix potential test flakes in HPA tests TestEventNotCreated and TestAvoidUncessaryUpdates
Also, re-work the code so that the lock is never held while writing to the chan
2019-04-17 08:10:33 -06:00
Kubernetes Prow Robot
deb48e331a
Merge pull request #76189 from soltysh/fix_legacy_podautoscaler
Fix flaky legacy pod autoscaler test
2019-04-05 14:34:05 -07:00
Kubernetes Prow Robot
1cdb4c965a
Merge pull request #74946 from ialidzhikov/clean-ineffectual-assignments
Clean ineffectual assignments
2019-04-05 14:33:53 -07:00
Maciej Szulik
bcfd48c29e
Fix flaky legacy pod autoscaler test
The reactor in runTest is set to catch all actions, but it eventually
only handles CreateAction without checking the action type, which might
sometimes fail when a Patch arrives. This fix ensures we handle only the
CreateAction.
2019-04-05 13:20:30 +02:00
Solly Ross
837976cb59 Prune directxman12 from metrics/autoscaling OWNERS
Since I'm not really working on metrics or autoscaling stuff any more, I
figured it was time to remove myself from the approvers list.
2019-04-03 16:24:51 -07:00
ialidzhikov
c3b2fb0d11 Clean ineffectual assignments
Signed-off-by: ialidzhikov <i.alidjikov@gmail.com>
2019-03-23 00:27:07 +02:00
Kubernetes Prow Robot
4499275cb9
Merge pull request #72800 from stewart-yu/stewart-component-base
Move config local to every controller in KCM
2019-03-21 19:26:19 -07:00
caiweidong
5fa000e5ed Change log: avoid printing the raw JSON response too frequently 2019-03-13 13:07:01 +08:00
stewart-yu
ecbd5427e7 auto-generated file 2019-03-02 12:55:26 +08:00
stewart-yu
e01ff1641c move config local to every controller in kube-controller-manager 2019-03-02 12:54:33 +08:00
Jordan Liggitt
d1e865ee34 Update client callers to use explicit versions 2019-02-26 08:36:30 -05:00
Kubernetes Prow Robot
808f2cf0ef
Merge pull request #72525 from justinsb/owners_should_not_be_executable
Remove executable file permission from OWNERS files
2019-02-14 23:55:45 -08:00
Roy Lenferink
b43c04452f Updated OWNERS files to include link to docs 2019-02-04 22:33:12 +01:00
Kubernetes Prow Robot
a4e3a5cb52
Merge pull request #71561 from anjensan/hpa-fix-current-metrics
Fix 'currentMetrics' field for HPA with 'AverageValue' target type
2019-02-04 03:34:52 -08:00
Kubernetes Prow Robot
a3f74bd583
Merge pull request #72872 from arjunrn/object-average-value
Added functionality for specifying target average value for object me…
2019-02-01 06:31:50 -08:00
Arjun Naik
c99d505001 Added functionality to use target average value for object metrics
Signed-off-by: Arjun Naik <arjun.rn@gmail.com>
2019-01-23 21:00:05 +01:00
Justin SB
dd19b923b7
Remove executable file permission from OWNERS files 2019-01-11 16:42:59 -08:00
Krzysztof Jastrzebski
7498c14218 Update comments in Horizontal Pod Autoscaler Controller. 2019-01-07 10:06:21 +01:00
Krzysztof Jastrzebski
c6ebd126a7 Re-add the HPA being processed into the queue after processing is finished.
This fixes a bug where a request inserted by resync was skipped because the previous one hadn't been processed yet.
2019-01-04 11:59:57 +01:00
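
A sketch of the worker loop after this fix, close to the standard client-go pattern (the reconcile hook is an illustrative stand-in): re-adding the key once processing finishes means a resync enqueued mid-flight is no longer lost to deduplication.

```go
package hpa

import (
	utilruntime "k8s.io/apimachinery/pkg/util/runtime"
	"k8s.io/client-go/util/workqueue"
)

type controller struct {
	queue        workqueue.RateLimitingInterface
	reconcileKey func(key string) (deleted bool, err error) // illustrative hook
}

func (c *controller) processNextWorkItem() bool {
	key, quit := c.queue.Get()
	if quit {
		return false
	}
	defer c.queue.Done(key)

	deleted, err := c.reconcileKey(key.(string))
	if err != nil {
		utilruntime.HandleError(err)
	}
	// Re-queue the HPA after processing, so a resync that fired while this
	// key was still being processed is not silently skipped.
	if !deleted {
		c.queue.AddRateLimited(key)
	}
	return true
}
```
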
Jordan Liggitt
0ff455e340 generated files 2018-12-19 11:19:12 -05:00
Jordan Liggitt
fd9e9b01b1 Remove uses of extensions/v1beta1 clients 2018-12-19 11:18:53 -05:00
danielqsj
3c055aa4b4 Fix typos like limitting 2018-12-04 11:01:40 +08:00
Andrei Zhlobich
a8c58bcd24 Fix updating 'currentMetrics' field for HPA with 'AverageValue' target 2018-11-29 11:50:33 +01:00
Davanum Srinivas
954996e231
Move from glog to klog
- Move from the old github.com/golang/glog to k8s.io/klog
- klog as explicit InitFlags() so we add them as necessary
- we update the other repositories that we vendor that made a similar
change from glog to klog
  * github.com/kubernetes/repo-infra
  * k8s.io/gengo/
  * k8s.io/kube-openapi/
  * github.com/google/cadvisor
- Entirely remove all references to glog
- Fix some tests by explicit InitFlags in their init() methods

Change-Id: I92db545ff36fcec83afe98f550c9e630098b3135
2018-11-10 07:50:31 -05:00
Christoph Blecker
97b2992dc1
Update gofmt for go1.11 2018-10-05 12:59:38 -07:00
Joachim Bartosik
7d7c48a647 HPA stabilizes initial recommendation
HPA will treat the initial size of the autoscalee as a recommendation, to avoid
hastily overriding recommendations made by HPA (if HPA set the size and was then
restarted) or by the user (the initial size should be treated as a
human-generated recommendation).
2018-09-19 14:54:55 +02:00
Krzysztof Jastrzebski
985ba931b1 Use informer cache instead of active pod gets in HPA controller. 2018-09-05 11:31:27 +02:00
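
A short illustration of the change (field names assumed): pod reads come from the shared informer's local cache instead of live API-server calls on every reconcile.

```go
package hpa

import (
	v1 "k8s.io/api/core/v1"
	"k8s.io/apimachinery/pkg/labels"
	corelisters "k8s.io/client-go/listers/core/v1"
)

type hpaController struct {
	podLister corelisters.PodLister // fed by a shared informer
}

// podsForSelector lists pods from the informer cache; previously each
// sync issued a live List against the API server.
func (c *hpaController) podsForSelector(namespace string, selector labels.Selector) ([]*v1.Pod, error) {
	return c.podLister.Pods(namespace).List(selector)
}
```
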
Krzysztof Jastrzebski
958cba1c82 Replace scale down forbidden window
The replacement is a scale-down stabilization window: HPA will scale down only
to the maximum of the recommendations it made during that window. More details in

https://docs.google.com/document/d/1IdG3sqgCEaRV3urPLA29IDudCufD89RYCohfBPNeWIM
2018-08-31 20:24:38 +02:00
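
A minimal sketch of the window (names illustrative, with the same timestamped-recommendation shape as the envelope sketch near the top): the effective scale-down target is the maximum recommendation made within the window.

```go
package hpa

import "time"

type timestampedRecommendation struct {
	replicas  int32
	timestamp time.Time
}

// scaleDownLimit returns the highest recommendation made within the
// stabilization window; the HPA scales down no lower than this.
func scaleDownLimit(history []timestampedRecommendation, proposed int32,
	window time.Duration, now time.Time) int32 {
	limit := proposed
	for _, rec := range history {
		if rec.timestamp.After(now.Add(-window)) && rec.replicas > limit {
			limit = rec.replicas
		}
	}
	return limit
}
```
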
Kubernetes Submit Queue
2548fb08cd
Merge pull request #68068 from krzysztof-jastrzebski/hpas2
Automatic merge from submit-queue. If you want to cherry-pick this change to another branch, please follow the instructions here: https://github.com/kubernetes/community/blob/master/contributors/devel/cherry-picks.md.

Change CPU sample sanitization in HPA.

**What this PR does / why we need it**:
Change CPU sample sanitization in HPA.
    Ignore samples if:
    - Pod is being initialized (within 5 minutes from start, as defined by flag):
        - pod is unready,
        - pod is ready but a full window of the metric hasn't been collected
          since the transition.
    - Pod is initialized (past 5 minutes from start, as defined by flag):
        - Pod has never been ready after the initial readiness period.

**Release notes:**
```release-note
Improve CPU sample sanitization in HPA by taking the metric's freshness into account.
```
2018-08-31 10:17:44 -07:00
Krzysztof Jastrzebski
5357bf9eac Change CPU sample sanitization in HPA.
Ignore samples if:
- Pod is being initialized (within 5 minutes from start, as defined by flag):
    - pod is unready,
    - pod is ready but a full window of the metric hasn't been collected
      since the transition.
- Pod is initialized (past 5 minutes from start, as defined by flag):
    - Pod has never been ready after the initial readiness period.
2018-08-30 23:13:14 +02:00
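
A hedged sketch of these rules (helper name, fields, and the never-ready test are simplified): a CPU sample is dropped during initialization unless the pod is ready and a full metric window has passed since it became ready, and dropped after initialization only if the pod has never been ready.

```go
package hpa

import (
	"time"

	v1 "k8s.io/api/core/v1"
)

// ignoreCPUSample sketches the rules above; it assumes pod.Status.StartTime
// is set and that lastReadyTransition is zero for a never-ready pod.
func ignoreCPUSample(pod *v1.Pod, ready bool, lastReadyTransition time.Time,
	metricWindow, initPeriod time.Duration, now time.Time) bool {
	started := pod.Status.StartTime.Time
	if now.Before(started.Add(initPeriod)) {
		// Still initializing (default 5 minutes, flag-controlled): keep the
		// sample only if the pod is ready and a full metric window has been
		// collected since the readiness transition.
		return !ready || now.Before(lastReadyTransition.Add(metricWindow))
	}
	// Initialized: ignore the pod only if it has never been ready.
	return !ready && lastReadyTransition.IsZero()
}
```
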
Kubernetes Submit Queue
42c6f1fb28
Merge pull request #67067 from moonek/master
Automatic merge from submit-queue (batch tested with PRs 67067, 67947). If you want to cherry-pick this change to another branch, please follow the instructions here: https://github.com/kubernetes/community/blob/master/contributors/devel/cherry-picks.md.

Do not count soft-deleted pods for scaling purposes in HPA controller

**What this PR does / why we need it**:
The metrics of "soft-deleted" pods (pods that, in general, are about to be deleted) should probably not matter for scaling purposes, since they'll be gone "soon", whether they're NodeLost or just normally deleted.

As long as soft-deleted pods still exist, they prevent normal scale up.


**Which issue(s) this PR fixes** *(optional, in `fixes #<issue number>(, fixes #<issue_number>, ...)` format, will close the issue(s) when PR gets merged)*:
Fixes https://github.com/kubernetes/kubernetes/issues/62845

**Special notes for your reviewer**:

**Release note**:

```release-note
Stop counting soft-deleted pods for scaling purposes in the HPA controller, to avoid soft-deleted pods incorrectly affecting the scale-up replica count calculation.
```
2018-08-28 15:08:01 -07:00
moonek
3fedbe48e3 Do not count soft-deleted pods for scaling purposes in HPA controller 2018-08-28 16:27:47 +00:00
Krzysztof Jastrzebski
dfd88dbde0 Remove incorrect glog error from Horizontal Pod Autoscaler. 2018-08-28 09:18:25 +02:00
Mike Dame
dd7e81a8cd Add dry run test for hpa v2beta2 2018-08-27 11:37:22 -04:00
Mike Dame
77d7f9cfa2 Generate files and modifications for autoscaling/v2beta2 and custom_metrics/v1beta2 2018-08-27 11:07:53 -04:00
Mike Dame
c7102ee5dc Implement autoscaling/v2beta2 features in HPA controller 2018-08-27 11:07:52 -04:00
Kubernetes Submit Queue
663551bebd
Merge pull request #67252 from jbartosik/metric-sanitization
Automatic merge from submit-queue (batch tested with PRs 66916, 67252, 67794, 67619, 67328). If you want to cherry-pick this change to another branch, please follow the instructions here: https://github.com/kubernetes/community/blob/master/contributors/devel/cherry-picks.md.

Fix HPA sample sanitization

**What this PR does / why we need it**: @mwielgus pointed out a case where HPA fails as a result of my changes to the HPA algorithm:
- Pods use a lot of CPU during initialization and become ready right after they initialize,
- This triggers a scale up,
- When the new pods become ready we count their usage (even though it's not related to any work that needs doing),
- This triggers another scale up, even though the existing pods can handle the work, no problem.

The fix is:
- Use all samples for non-CPU metrics.
- Only use CPU samples if:
  - Pod is ready and was started more than 2 minutes ago, or
  - Pod is unready and last readiness change happened more than 10s after it was started.

Reasoning behind this in: https://docs.google.com/document/d/1UdtYedhmCxjaJIQi6hwJMY0eHQQKxlVD8lSHZC1BPOA/edit

**Which issue(s) this PR fixes** *(optional, in `fixes #<issue number>(, fixes #<issue_number>, ...)` format, will close the issue(s) when PR gets merged)*:

**Special notes for your reviewer**:

**Release note**:
```release-note
Replace the scale-up forbidden window with disregarding CPU samples collected while the pod was initializing.
```
2018-08-24 15:25:07 -07:00
Joachim Bartosik
4fd6a1684d Make HPA more configurable
The duration of the CPU initialization taint and the window of initial
readiness are now controlled by flags.

Adding API violation exceptions, following the example of e50340ee23
2018-08-24 13:13:02 +02:00
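
The two knobs this commit adds surface as kube-controller-manager flags; a hedged sketch of their shape (the wiring here is illustrative, defaults as I recall them):

```go
package hpa

import (
	"flag"
	"time"
)

// Illustrative registration of the two flags described above; the real
// ones are wired through the kube-controller-manager options packages.
var (
	cpuInitializationPeriod = flag.Duration(
		"horizontal-pod-autoscaler-cpu-initialization-period", 5*time.Minute,
		"period after pod start during which CPU samples may be skipped")
	initialReadinessDelay = flag.Duration(
		"horizontal-pod-autoscaler-initial-readiness-delay", 30*time.Second,
		"period after pod start during which readiness changes are treated as initial readiness")
)
```
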
Davanum Srinivas
9b43d97cd4
Add Labels to various OWNERS files
Will reduce the burden of manually adding labels. Information pulled
from:
https://github.com/kubernetes/community/blob/master/sigs.yaml

Change-Id: I17e661e37719f0bccf63e41347b628269cef7c8b
2018-08-21 13:59:08 -04:00
Joachim Bartosik
7d6676eab1 Improve HPA sample sanitization
After my previous changes HPA wasn't behaving correctly in the following
situation:

- Pods use a lot of CPU during initialization, become ready right after they initialize,
- Scale up triggers,
- When new pods become ready HPA counts their usage (even though it's not related to any work that needs doing),
- Another scale up, even though existing pods can handle work, no problem.
2018-08-21 16:22:06 +02:00
liangwenguo
8f8a7bb83f make the log more readable 2018-08-07 10:00:31 +08:00
Joachim Bartosik
7681c284f5 Remove UpscaleForbiddenWindow
Instead discard metric values for pods that are unready and have never
been ready (they may report misleading values, which was the original reason
for introducing the scale-up forbidden window).

Use per pod metric when pod is:
- Ready, or
- Not ready but creation timestamp and last readiness change are more
  than 10s apart.

In the latter case we assume the pod was ready but later became unready.
We want to use metrics for such pods because sometimes such pods are
unready because they were getting too much load.
2018-08-01 17:47:23 +02:00
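
A minimal sketch of that rule (hedged; condition handling simplified): use a pod's CPU sample if the pod is ready, or if it is unready but its last readiness change came more than the delay (10s here) after creation, i.e. it was presumably ready once.

```go
package hpa

import (
	"time"

	v1 "k8s.io/api/core/v1"
	podutil "k8s.io/kubernetes/pkg/api/v1/pod"
)

// useCPUSample decides whether a pod's CPU metric should count, per the
// rule described above; delay would be the 10s threshold.
func useCPUSample(pod *v1.Pod, delay time.Duration) bool {
	if podutil.IsPodReady(pod) {
		return true
	}
	_, cond := podutil.GetPodCondition(&pod.Status, v1.PodReady)
	if cond == nil {
		return false
	}
	// Unready, but readiness last changed well after creation: assume the
	// pod was ready earlier and may be unready because of load.
	return cond.LastTransitionTime.Time.Sub(pod.CreationTimestamp.Time) > delay
}
```
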
Joachim Bartosik
086ed3c659 Rename desiredReplicas to expectedDesiredReplicas
Naming fields that specify values expected by tests as expected.* is a nice
convention to have; let's follow it.
2018-08-01 17:43:01 +02:00
Kubernetes Submit Queue
1e3d23c5c3
Merge pull request #65907 from jbartosik/hpa-improv-refactor-run-test
Automatic merge from submit-queue (batch tested with PRs 64681, 65907). If you want to cherry-pick this change to another branch, please follow the instructions here: https://github.com/kubernetes/community/blob/master/contributors/devel/cherry-picks.md.

Make runTest easier to understand

Fewer nested conditions, more checking for incorrect-looking test cases.

**What this PR does / why we need it**: Make HPA tests easier to understand.

**Which issue(s) this PR fixes** *(optional, in `fixes #<issue number>(, fixes #<issue_number>, ...)` format, will close the issue(s) when PR gets merged)*:

**Special notes for your reviewer**:

**Release note**:
```release-note
NONE
```
2018-07-24 16:28:13 -07:00
Joachim Bartosik
3d1b6b0f6e Make runTest easier to understand
Instead of deducing the metric type from details of the struct describing it,
test cases explicitly specify the metric type they use.
2018-07-24 17:27:17 +02:00
Kubernetes Submit Queue
ab00c609ee
Merge pull request #65901 from jbartosik/hpa-improv-refactor-replica-calc-test
Automatic merge from submit-queue (batch tested with PRs 66175, 66324, 65828, 65901, 66350). If you want to cherry-pick this change to another branch, please follow the instructions here: https://github.com/kubernetes/community/blob/master/contributors/devel/cherry-picks.md.

Hpa improv refactor replica calc test

**What this PR does / why we need it**: prepareTestClient generates 4 fake clients using the replicaCalcTestCase object. This PR extracts a separate helper for generating each fake independently.

**Which issue(s) this PR fixes**

**Special notes for your reviewer**:

**Release note**:
```release-note
NONE
```
2018-07-18 16:42:18 -07:00
Joachim Bartosik
9b91a89f3d Chop computeReplicasForMetrics into smaller pieces 2018-07-18 17:09:20 +02:00