Commit Graph

182 Commits

Author SHA1 Message Date
Kensei Nakada
543f15d10c HPA: expose the metrics "metric_computation_duration_seconds" and "metric_computation_total" from HPA controller 2023-03-14 22:47:24 +00:00
Kensei Nakada
b49b34c03a
HPA: expose the metrics "reconciliations_total" and "reconciliation_duration_seconds" from HPA controller (#116010) 2023-03-14 09:39:42 -07:00
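Taken together, the two commits above add four controller metrics. Below is a minimal sketch of the reconciliation pair using the plain Prometheus client rather than the component-base wrappers the real controller uses; the subsystem, labels, and buckets are assumptions, and the metric_computation_* pair would follow the same pattern.

```go
package main

import (
	"time"

	"github.com/prometheus/client_golang/prometheus"
)

// Illustrative metric pair; names follow the commit messages, but the
// subsystem, labels, and buckets here are assumptions.
var (
	reconciliationsTotal = prometheus.NewCounterVec(
		prometheus.CounterOpts{
			Subsystem: "horizontal_pod_autoscaler_controller",
			Name:      "reconciliations_total",
			Help:      "Number of reconciliations of the HPA controller.",
		},
		[]string{"action", "error"},
	)
	reconciliationDuration = prometheus.NewHistogramVec(
		prometheus.HistogramOpts{
			Subsystem: "horizontal_pod_autoscaler_controller",
			Name:      "reconciliation_duration_seconds",
			Help:      "Time spent on a single reconciliation.",
			Buckets:   prometheus.DefBuckets,
		},
		[]string{"action", "error"},
	)
)

// observeReconciliation records one reconciliation outcome and its latency.
func observeReconciliation(start time.Time, action, errLabel string) {
	reconciliationsTotal.WithLabelValues(action, errLabel).Inc()
	reconciliationDuration.WithLabelValues(action, errLabel).Observe(time.Since(start).Seconds())
}

func main() {
	prometheus.MustRegister(reconciliationsTotal, reconciliationDuration)
	observeReconciliation(time.Now(), "scale_up", "none")
}
```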
Kubernetes Prow Robot
c237ddb226
Merge pull request #116045 from sanposhiho/sanposhiho/message
fix(HPA): make a difference in SuccessfulRescale events between the resource metric and the container resource metric
2023-03-13 13:24:47 -07:00
Kensei Nakada
fafbed3b1d
fix the error message 2023-03-12 14:48:48 +09:00
Kensei Nakada
f76258f0ff fix based on the suggestion 2023-03-05 15:01:34 +00:00
Kensei Nakada
33daba24fb fix(HPA): ignore the container resource metrics in HPA controller when the feature gate is disabled 2023-02-25 23:04:07 +00:00
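A minimal sketch of the behavior this fix describes: while the HPAContainerMetrics feature gate is off, ContainerResource metric specs are skipped instead of failing the whole reconciliation. The metricSpec type and gate plumbing below are stand-ins, not the controller's real types.

```go
package main

import "fmt"

type metricSpec struct {
	metricType string // "Resource", "ContainerResource", ...
}

// filterMetrics drops ContainerResource specs when the gate is disabled,
// rather than treating them as an error.
func filterMetrics(specs []metricSpec, containerMetricsEnabled bool) []metricSpec {
	out := make([]metricSpec, 0, len(specs))
	for _, s := range specs {
		if s.metricType == "ContainerResource" && !containerMetricsEnabled {
			continue // gate off: ignore instead of erroring
		}
		out = append(out, s)
	}
	return out
}

func main() {
	specs := []metricSpec{{"Resource"}, {"ContainerResource"}}
	fmt.Println(filterMetrics(specs, false)) // only the Resource spec remains
}
```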
Kensei Nakada
2ea50fc200 fix(HPA): make a difference in SuccessfulRescale events between the resource metric and the container resource metric 2023-02-24 14:47:38 +00:00
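A hedged sketch of the distinction this commit introduces: the SuccessfulRescale event names the container for container resource metrics, so the two metric sources are distinguishable. The exact wording below is illustrative, not the controller's text.

```go
package main

import "fmt"

// rescaleMessage builds the event text; the container name is what makes
// the two metric sources distinguishable. Wording is illustrative.
func rescaleMessage(resource, container string, utilization int32) string {
	if container != "" {
		return fmt.Sprintf("%s resource utilization of container %q (percentage of request) above target: %d%%",
			resource, container, utilization)
	}
	return fmt.Sprintf("%s resource utilization (percentage of request) above target: %d%%",
		resource, utilization)
}

func main() {
	fmt.Println(rescaleMessage("cpu", "", 85))    // resource metric
	fmt.Println(rescaleMessage("cpu", "app", 85)) // container resource metric
}
```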
Freddie
dee494ece1 squashing without rebase 2023-02-17 01:47:52 +05:30
Pavel Beschetnov
caddfdd040 Add pod ambiguous selector check 2022-11-04 12:49:20 +00:00
Kubernetes Prow Robot
85643c0f93
Merge pull request #108501 from zroubalik/hpa
add `--concurrent-horizontal-pod-autoscaler-syncs` flag to kube-controller-manager
2022-10-17 14:13:18 -07:00
Zbynek Roubalik
1cefcdea2d add --concurrent-horizontal-pod-autoscaler-syncs flag to kube-controller-manager
Signed-off-by: Zbynek Roubalik <zroubalik@gmail.com>
2022-10-17 17:39:31 +02:00
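A minimal sketch of what the new flag governs: the number of goroutines draining the HPA work queue. The queue and worker below are stand-ins for the kube-controller-manager machinery, not its actual code.

```go
package main

import (
	"flag"
	"fmt"
	"sync"
)

func main() {
	// The flag name matches the commit; the default here is an assumption.
	concurrentSyncs := flag.Int("concurrent-horizontal-pod-autoscaler-syncs", 5,
		"The number of HPA objects that are allowed to sync concurrently.")
	flag.Parse()

	// Stand-in work queue of HPA keys.
	queue := make(chan string, 8)
	for _, hpa := range []string{"default/web", "default/api", "default/worker"} {
		queue <- hpa
	}
	close(queue)

	// One goroutine per allowed concurrent sync.
	var wg sync.WaitGroup
	for i := 0; i < *concurrentSyncs; i++ {
		wg.Add(1)
		go func(worker int) {
			defer wg.Done()
			for key := range queue {
				fmt.Printf("worker %d reconciling %s\n", worker, key)
			}
		}(i)
	}
	wg.Wait()
}
```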
Kubernetes Prow Robot
4245895261
Merge pull request #111463 from pbetkier/hpa-comment-fix
Fix comment in HPA's scale event replicaChange
2022-09-30 04:08:28 -07:00
Kubernetes Prow Robot
3a0dbe5749
Merge pull request #112335 from piotrnosek/fixcustomcrd
Fix HPA E2E CustomResourceDefinition test
2022-09-22 11:01:06 -07:00
Kushagra
01b553145c requested changes: fix return type variables 2022-09-22 08:59:02 +00:00
Piotr Nosek
96ff1b1bcb Fix HPA E2E CRD test 2022-09-21 22:39:47 +00:00
Kubernetes Prow Robot
239a19ecc1
Merge pull request #111170 from ping035627/k8s-220715
HandleError of updateStatusIfNeeded in func reconcileAutoscaler
2022-08-30 10:59:06 -07:00
Kubernetes Prow Robot
da6d8c997e
Merge pull request #109058 from oliviermichaelis/calculate-start-replicas
Fix replica calculation at start of HPA scaling policy period
2022-08-30 10:58:55 -07:00
Piotr Betkier
f428705ec6 Fix comment in HPA's scale event replicaChange
The field replicaChange in timestampedScaleEvent was wrongly described
as either positive or negative depending on the scale direction. In
fact, the change is stored as an unsigned value: positive or zero even
for downscales.
2022-07-27 15:28:09 +02:00
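A small sketch of the corrected semantics, with illustrative types mirroring the names in the commit message (the real controller stores more context around this struct):

```go
package main

import (
	"fmt"
	"time"
)

type timestampedScaleEvent struct {
	replicaChange int32 // magnitude of the change: positive or zero, even for downscales
	timestamp     time.Time
	outdated      bool
}

func main() {
	// A scale-down by 3 replicas is still recorded as +3.
	down := timestampedScaleEvent{replicaChange: 3, timestamp: time.Now()}
	fmt.Printf("%+v\n", down)
}
```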
PingWang
565d60ff15 HandleError of updateStatusIfNeeded in func reconcileAutoscaler
Signed-off-by: PingWang <wang.ping5@zte.com.cn>
2022-07-15 14:12:13 +08:00
wangyamei
187dcb5a59 Error message optimization for podautoscaler controller 2022-05-26 23:40:34 +08:00
Olivier Michaelis
3c07d3a20c
Fix replica calculation at start of HPA scaling policy period
When calculating the scale-up/scale-down limit, the number of replicas
at the start of the scaling policy period is now calculated correctly by
taking into account the number of scaled-up and scaled-down replicas.

Signed-off-by: Olivier Michaelis <38879457+oliviermichaelis@users.noreply.github.com>
2022-03-27 12:34:32 +02:00
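A minimal sketch of the corrected calculation, under the assumption that scale events inside the policy window are replayed backwards from the current count; the helper and values are illustrative.

```go
package main

import "fmt"

// replicasAtPolicyPeriodStart recovers the replica count at the start of
// the scaling policy period by undoing the events inside the window.
func replicasAtPolicyPeriodStart(current int32, scaleUpEvents, scaleDownEvents []int32) int32 {
	start := current
	for _, up := range scaleUpEvents {
		start -= up // undo scale-ups inside the window
	}
	for _, down := range scaleDownEvents {
		start += down // undo scale-downs inside the window
	}
	return start
}

func main() {
	// 10 replicas now; +4 and -1 happened during the policy period,
	// so the period started at 7.
	fmt.Println(replicasAtPolicyPeriodStart(10, []int32{4}, []int32{1}))
}
```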
Joseph Burnett
711f96e05e Watch HPA v2 instead of v1. 2021-11-16 11:13:21 +01:00
wangyysde
d2abddd909 rename v2beta2 to v2
Signed-off-by: wangyysde <net_use@bzhy.com>

Generate swagger.json.

Use v2 path for hpa_cpu_field.

run update-codegen.sh

Signed-off-by: wangyysde <net_use@bzhy.com>
2021-11-09 10:34:54 +08:00
Mike Dame
7780024916 Wire contexts to Autoscaling controllers 2021-10-12 14:34:05 -04:00
Mikkel Oscar Lyderik Larsen
fef092b417
hpa: Don't scale down if at least one metric was invalid
Signed-off-by: Mikkel Oscar Lyderik Larsen <mikkel.larsen@zalando.de>
2021-03-03 07:53:01 +01:00
Joseph Burnett
16133c2b77 Stabilize up and down scaling with an envelope.
The HPA controller keeps a flat history of recommendations for
stabilization. However, when both up and down scale stabilization are
configured, the interpretation of the history changes depending on the
direction of movement. What we want is to keep the stabilized
recommendation within the envelope of the minimum and maximum over the
configured stabilization windows. We should only move when the
envelope forces a move.
2020-12-21 14:36:13 +01:00
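A hedged sketch of the envelope logic described above: the minimum over the up-stabilization window is the floor, the maximum over the down-stabilization window is the ceiling, and the current count only moves when it falls outside. Window lengths and history values below are illustrative.

```go
package main

import (
	"fmt"
	"time"
)

// recommendation is one remembered desired-replica value.
type recommendation struct {
	replicas int32
	at       time.Time
}

// stabilize keeps the result inside the envelope formed by the two windows.
func stabilize(current, desired int32, history []recommendation, upWindow, downWindow time.Duration, now time.Time) int32 {
	envelopeMin := desired // min over the up window, including the fresh recommendation
	envelopeMax := desired // max over the down window, including the fresh recommendation
	for _, r := range history {
		age := now.Sub(r.at)
		if age <= upWindow && r.replicas < envelopeMin {
			envelopeMin = r.replicas
		}
		if age <= downWindow && r.replicas > envelopeMax {
			envelopeMax = r.replicas
		}
	}
	stabilized := current
	if stabilized < envelopeMin {
		stabilized = envelopeMin // envelope forces a move up
	}
	if stabilized > envelopeMax {
		stabilized = envelopeMax // envelope forces a move down
	}
	return stabilized
}

func main() {
	now := time.Now()
	history := []recommendation{
		{replicas: 5, at: now.Add(-time.Minute)},
		{replicas: 8, at: now.Add(-2 * time.Minute)},
	}
	fmt.Println(stabilize(6, 10, history, 3*time.Minute, 5*time.Minute, now)) // 6: inside [5, 10], no move
	fmt.Println(stabilize(3, 10, history, 3*time.Minute, 5*time.Minute, now)) // 5: pulled up to the floor
}
```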
Ben Hu
4e62298c1b Fix static checks for pkg/controller/podautoscaler 2020-10-23 18:53:07 +00:00
Kubernetes Prow Robot
ec453ffb1a
Merge pull request #90691 from arjunrn/container-resource-hpa
Add container based scaling to HPA
2020-10-23 05:51:51 -07:00
weiwei
b19a115f42 If we set SelectPolicy MinPolicySelect on scaleUp or scaleDown behavior, the Horizontal Pod Autoscaler doesn't automatically scale the number of pods correctly
Signed-off-by: weiwei <weiwei@tenxcloud.com>
2020-10-22 18:00:49 +08:00
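A minimal sketch of the selection rule at issue: with MinPolicySelect the smallest allowed change across policies must win, with MaxPolicySelect the largest. The scalingPolicy type below is a stand-in, not the API type.

```go
package main

import "fmt"

type scalingPolicy struct {
	allowedChange int32 // change permitted by this policy in the current period
}

// selectPolicyLimit picks the binding limit across all policies.
func selectPolicyLimit(policies []scalingPolicy, selectMin bool) int32 {
	limit := policies[0].allowedChange
	for _, p := range policies[1:] {
		if selectMin && p.allowedChange < limit {
			limit = p.allowedChange
		}
		if !selectMin && p.allowedChange > limit {
			limit = p.allowedChange
		}
	}
	return limit
}

func main() {
	policies := []scalingPolicy{{allowedChange: 4}, {allowedChange: 2}}
	fmt.Println(selectPolicyLimit(policies, true))  // MinPolicySelect -> 2
	fmt.Println(selectPolicyLimit(policies, false)) // MaxPolicySelect -> 4
}
```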
Arjun Naik
0fec7b0f7e Added functionality and API for pod autoscaling based on container resources
Signed-off-by: Arjun Naik <anaik@redhat.com>
2020-10-21 21:10:05 +02:00
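A hedged sketch of the API this commit added (then in autoscaling/v2beta2, shown here with today's autoscaling/v2 types): a ContainerResource metric targets one named container's usage rather than the pod-wide sum. The container name and target value are illustrative.

```go
package main

import (
	"fmt"

	autoscalingv2 "k8s.io/api/autoscaling/v2"
	corev1 "k8s.io/api/core/v1"
)

func main() {
	target := int32(60)
	metric := autoscalingv2.MetricSpec{
		Type: autoscalingv2.ContainerResourceMetricSourceType,
		ContainerResource: &autoscalingv2.ContainerResourceMetricSource{
			Name:      corev1.ResourceCPU,
			Container: "app", // scale on this container's CPU only
			Target: autoscalingv2.MetricTarget{
				Type:               autoscalingv2.UtilizationMetricType,
				AverageUtilization: &target,
			},
		},
	}
	fmt.Printf("%+v\n", metric)
}
```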
Kobayashi Daisuke
4ae11dac2e Replace StartLogging(klog.Infof) with StartStructuredLogging(0) 2020-06-15 17:48:35 +09:00
Davanum Srinivas
442a69c3bd
switch over k/k to use klog v2
Signed-off-by: Davanum Srinivas <davanum@gmail.com>
2020-05-16 07:54:27 -04:00
Kubernetes Prow Robot
2b2cf8df30
Merge pull request #80700 from mrkm4ntr/add-error-check
Add missing error check
2020-05-11 00:37:51 -07:00
Kubernetes Prow Robot
1827fe444e
Merge pull request #87895 from alexzimmer96/68026-lint-pkg-controller-autoscaler
Fix Golint errors in pkg/controller/podautoscaler
2020-03-17 16:19:53 -07:00
Julian V. Modesto
da3c3432d8 Add context and options to scale client 2020-03-02 00:03:26 -05:00
Kubernetes Prow Robot
1a0f923a65
Merge pull request #87712 from alena1108/jan30kubelet
Ineffassign fixes for pkg/controller and kubelet
2020-02-14 14:29:27 -08:00
Mike Danese
25651408ae generated: run refactor 2020-02-08 12:30:21 -05:00
Mike Danese
3aa59f7f30 generated: run refactor 2020-02-07 18:16:47 -08:00
Alexander Zimmermann
a1c837022c
Fixed Golint errors in pkg/controller/podautoscaler 2020-02-06 17:16:38 +01:00
Alena Prokharchyk
6c3093f970 Ineffassign fixes for pkg/controller and kubelet 2020-01-30 14:35:10 -08:00
Ivan Glushkov
27ffe439b6
Adds the algorithm implementation for the Configurable HPA 2019-12-10 20:37:33 +04:00
tanjunchen
de3cf23414 remove the repeated word in documents 2019-10-06 23:32:01 +08:00
Yassine TIJANI
7e4c3096fe move WaitForCacheSync to the sharedInformer package
Signed-off-by: Yassine TIJANI <ytijani@vmware.com>
2019-08-22 16:13:41 +01:00
Shintaro Murakami
4635f16dc1 Add missing error check 2019-07-29 14:37:48 +09:00
David Xia
fabfd950b1
cleanup: fix some log and error capitalizations
Part of https://github.com/kubernetes/kubernetes/issues/15863
2019-07-20 18:26:16 -04:00
Rinat Shigapov
d55f037b7d HPA scale-to-zero for custom object/external metrics
Add support for scaling to zero pods

minReplicas is allowed to be zero

condition is set once

Based on https://github.com/kubernetes/kubernetes/pull/61423

set original valid condition

add scale to/from zero and invalid metric tests

Scaling up from zero pods ignores tolerance

validate metrics when minReplicas is 0

Document HPA behaviour when minReplicas is 0

Documented minReplicas field in autoscaling APIs
2019-07-16 08:46:21 -05:00
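A simplified sketch of the scale-to-zero behavior squashed into this commit: minReplicas may be zero, and scaling up from zero bypasses the usual tolerance band. The tolerance constant and rounding below are illustrative, not the controller's exact arithmetic.

```go
package main

import "fmt"

const tolerance = 0.1 // illustrative; mirrors the controller's default band

// desiredReplicas sketches the decision: from zero, any positive usage
// wakes the workload; otherwise small deviations are tolerated.
func desiredReplicas(current int32, usageRatio float64, minReplicas int32) int32 {
	if current == 0 {
		if usageRatio > 0 {
			return 1 // ignore tolerance when scaling up from zero
		}
		return 0
	}
	if usageRatio > 1-tolerance && usageRatio < 1+tolerance {
		return current // within tolerance, no change
	}
	desired := int32(float64(current)*usageRatio + 0.5)
	if desired < minReplicas {
		desired = minReplicas
	}
	return desired
}

func main() {
	fmt.Println(desiredReplicas(0, 0.05, 0)) // 1: scaled up from zero despite tiny usage
	fmt.Println(desiredReplicas(4, 1.05, 0)) // 4: within tolerance, stays put
}
```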
Sukeesh
44c3f0105f fix incorrect hpa status 2019-07-08 17:27:38 +09:00
Joseph Burnett
39c4875321 There are various reasons that the HPA will decide not to change the
current scale. Two important ones are when missing metrics might
change the direction of scaling, and when the recommended scale is
within tolerance of the current scale.

The way that ReplicaCalculator signals its desire not to change the
current scale is by returning the current scale. However, the current
scale comes from scale.Status.Replicas and can be larger than
scale.Spec.Replicas (e.g. during a Deployment rollout with configured
surge). This causes a positive feedback loop because
scale.Status.Replicas is written back into scale.Spec.Replicas,
further increasing the current scale.

This PR fixes the feedback loop by plumbing the replica count from
spec through horizontal.go and replica_calculator.go so the calculator
can punt with the right value.
2019-07-02 14:21:32 +02:00
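A minimal sketch of the fix: when the calculator wants no change, it must answer with spec.replicas, since status.replicas may be surge-inflated and would otherwise ratchet the scale upward. The types below are stand-ins for the Scale subresource.

```go
package main

import "fmt"

type scale struct {
	specReplicas   int32 // desired count: what the HPA should treat as current
	statusReplicas int32 // observed count: may be inflated by rollout surge
}

// recommend punts with spec.replicas when no change is wanted, breaking
// the status -> spec feedback loop the commit describes.
func recommend(s scale, wantChange bool, recommended int32) int32 {
	if !wantChange {
		return s.specReplicas // punt with the right value
	}
	return recommended
}

func main() {
	s := scale{specReplicas: 5, statusReplicas: 7} // mid-rollout surge
	fmt.Println(recommend(s, false, 0))            // 5, not 7: no ratchet-up
}
```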
GuyTempleton
1efbde2815
Handle invalid metrics when scaling on multiple metrics
Handle a case in the Horizontal Pod Autoscaler Controller when scaling
on multiple metrics and one or more is missing or invalid.

If all metrics are missing - return an error and leave the isScalingActive
condition as that for the last invalid metric.

If some metrics are missing/invalid and some are valid and found -
if a scale up would be triggered by the valid metrics ignore the missing
metrics and scale up, if a scale down would be triggered, return an error
and leave the isScalingActive condition as that for the last invalid metric.
2019-05-29 23:20:40 +01:00
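A hedged sketch of the decision rule in this commit message; the helper below is illustrative, not the controller's code: missing/invalid metrics are tolerated only when the valid ones already call for a scale-up, while a scale-down on partial data (or all metrics failing) surfaces the error instead.

```go
package main

import (
	"errors"
	"fmt"
)

// combineRecommendations applies the policy from the commit message.
func combineRecommendations(current int32, valid []int32, invalidErr error) (int32, error) {
	if len(valid) == 0 {
		return current, invalidErr // all metrics invalid: return the error
	}
	highest := valid[0]
	for _, r := range valid[1:] {
		if r > highest {
			highest = r
		}
	}
	if highest > current || invalidErr == nil {
		return highest, nil // scale up (or nothing invalid): ignore failures
	}
	return current, invalidErr // would scale down on partial data: refuse
}

func main() {
	fmt.Println(combineRecommendations(3, []int32{5}, errors.New("metric x missing"))) // 5 <nil>
	fmt.Println(combineRecommendations(5, []int32{3}, errors.New("metric x missing"))) // 5 metric x missing
}
```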
ialidzhikov
c3b2fb0d11 Clean ineffectual assignments
Signed-off-by: ialidzhikov <i.alidjikov@gmail.com>
2019-03-23 00:27:07 +02:00