Commit Graph

325 Commits

Author SHA1 Message Date
Davanum Srinivas
07d88617e5
Run hack/update-vendor.sh
Signed-off-by: Davanum Srinivas <davanum@gmail.com>
2020-05-16 07:54:33 -04:00
Davanum Srinivas
442a69c3bd
switch over k/k to use klog v2
Signed-off-by: Davanum Srinivas <davanum@gmail.com>
2020-05-16 07:54:27 -04:00
Kubernetes Prow Robot
2b2cf8df30
Merge pull request #80700 from mrkm4ntr/add-error-check
Add missing error check
2020-05-11 00:37:51 -07:00
Kubernetes Prow Robot
c441a1a7dc
Merge pull request #85027 from shibataka000/fix-bug-about-unintentional-scale-out-during-updating-deployment
Fix HPA bug causing unintentional scale out during a deployment update.
2020-03-24 04:50:46 -07:00
Kubernetes Prow Robot
1827fe444e
Merge pull request #87895 from alexzimmer96/68026-lint-pkg-controller-autoscaler
Fix Golint errors in pkg/controller/podautoscaler
2020-03-17 16:19:53 -07:00
Kubernetes Prow Robot
179fe40d06
Merge pull request #88599 from julianvmodesto/scale-ctx-opts
Add context and options to scale client
2020-03-06 13:17:08 -08:00
Julian V. Modesto
da3c3432d8 Add context and options to scale client 2020-03-02 00:03:26 -05:00
taesun_lee
79680b5d9b Fix pkg/controller typos in some error messages, comments, etc.
- applied review results by LuisSanchez
- Co-Authored-By: Luis Sanchez <sanchezl@redhat.com>

genernal -> general
iniital -> initial
initalObjects -> initialObjects
intentionaly -> intentionally
inforer -> informer
anotother -> another
triger -> trigger
mutli -> multi
Verifyies -> Verifies
valume -> volume
unexpect -> unexpected
unfulfiled -> unfulfilled
implenets -> implements
assignement -> assignment
expectataions -> expectations
nexpected -> unexpected
boundSatsified -> boundSatisfied
externel -> external
calcuates -> calculates
workes -> workers
unitialized -> uninitialized
afater -> after
Espected -> Expected
nodeMontiorGracePeriod -> NodeMonitorGracePeriod
estimateGrracefulTermination -> estimateGracefulTermination
secondrary -> secondary
ShouldRunDaemonPodOnUnscheduableNode -> ShouldRunDaemonPodOnUnschedulableNode
rrror -> error
expectatitons -> expectations
foud -> found
epackage -> package
succesfulJobs -> successfulJobs
namesapce -> namespace
ConfigMapResynce -> ConfigMapResync
2020-02-27 00:15:33 +09:00
Kubernetes Prow Robot
1a0f923a65
Merge pull request #87712 from alena1108/jan30kubelet
Ineffassign fixes for pkg/controller and kubelet
2020-02-14 14:29:27 -08:00
Mike Danese
25651408ae generated: run refactor 2020-02-08 12:30:21 -05:00
Mike Danese
3aa59f7f30 generated: run refactor 2020-02-07 18:16:47 -08:00
Alexander Zimmermann
e0b1e9206d
Clarified comments 2020-02-07 09:09:49 +01:00
Alexander Zimmermann
a1c837022c
Fixed Golint errors in pkg/controller/podautoscaler 2020-02-06 17:16:38 +01:00
Alena Prokharchyk
6c3093f970 Ineffassign fixes for pkg/controller and kubelet 2020-01-30 14:35:10 -08:00
Mike Danese
968adfa993 cleanup req.Context() and ResponseWrapper 2020-01-29 08:50:45 -08:00
Arjun Naik
8ab226263a Adds tests
Signed-off-by: Arjun Naik <arjun@arjunnaik.in>
2019-12-10 18:09:20 +01:00
Ivan Glushkov
27ffe439b6
Adds the algorithm implementation for the Configurable HPA 2019-12-10 20:37:33 +04:00
shibataka000
b7122770f8 Fix bug causing unintentional scale out during a deployment update.
During a rolling update with maxSurge=1 and maxUnavailable=0,
len(metrics) is greater than currentReplicas,
which may cause an unintentional scale out.
2019-11-09 06:24:31 +00:00
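A minimal Go sketch of the guard this fix revolves around; capReadyPods and the usage-ratio arithmetic are illustrative stand-ins, not the actual replica_calculator.go change:

    package main

    import (
        "fmt"
        "math"
    )

    // capReadyPods illustrates the problem: during a rolling update with
    // maxSurge=1 and maxUnavailable=0 the metrics client can report one more
    // pod than spec.replicas, which would otherwise inflate the recommendation.
    func capReadyPods(metricsCount int, specReplicas int32) int {
        if int32(metricsCount) > specReplicas {
            return int(specReplicas)
        }
        return metricsCount
    }

    func main() {
        usageRatio := 1.0           // average usage exactly at target
        ready := capReadyPods(4, 3) // 4 metrics reported, spec asks for 3
        desired := int32(math.Ceil(usageRatio * float64(ready)))
        fmt.Println(desired) // 3: no spurious scale out
    }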
yuxiaobo
81e9f21f83 Correct spelling mistakes
Signed-off-by: yuxiaobo <yuxiaobogo@163.com>
2019-11-06 20:25:19 +08:00
wojtekt
7b6bcdf780 Autogenerated code 2019-10-24 20:21:00 +02:00
Bob Killen
e37d702208
Prune inactive owners from autoscaling related OWNERS files. 2019-10-13 08:52:14 -04:00
tanjunchen
de3cf23414 remove repeated words in documents 2019-10-06 23:32:01 +08:00
Joseph Burnett
7bdb66f8d1
Fix reviewer typo. 2019-09-06 12:09:50 +02:00
Kubernetes Prow Robot
927f45191e
Merge pull request #81527 from yastij/move-controller-util
move WaitForCacheSync to the sharedInformer package
2019-08-27 00:52:54 -07:00
Yassine TIJANI
7e4c3096fe move WaitForCacheSync to the sharedInformer package
Signed-off-by: Yassine TIJANI <ytijani@vmware.com>
2019-08-22 16:13:41 +01:00
Joseph Burnett
a5354d04bb Test more replicas than spec.
During a Deployment update there may be more Pods in the scale target
ref status than in the spec. This test verifies that we do not scale
to the status value. Instead we should stay at the spec value.

Fails before #79035 and passes after.
2019-08-06 14:46:34 +02:00
Shintaro Murakami
4635f16dc1 Add missing error check 2019-07-29 14:37:48 +09:00
David Xia
fabfd950b1
cleanup: fix some log and error capitalizations
Part of https://github.com/kubernetes/kubernetes/issues/15863
2019-07-20 18:26:16 -04:00
Kubernetes Prow Robot
5ece88c4c8
Merge pull request #74526 from DXist/feature/hpa-scale-to-zero
Support scaling HPA to/from zero pods for object/external metrics
2019-07-16 10:11:24 -07:00
Rinat Shigapov
d55f037b7d HPA scale-to-zero for custom object/external metrics
Add support for scaling to zero pods

minReplicas is allowed to be zero

condition is set once

Based on https://github.com/kubernetes/kubernetes/pull/61423

set original valid condition

add scale to/from zero and invalid metric tests

Scaling up from zero pods ignores tolerance

validate metrics when minReplicas is 0

Document HPA behaviour when minReplicas is 0

Documented minReplicas field in autoscaling APIs
2019-07-16 08:46:21 -05:00
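For illustration, a Go sketch of an HPA object with minReplicas set to zero. The object and target names are hypothetical; on a real cluster this is only expected to be accepted with the alpha HPAScaleToZero feature gate enabled and object/external metrics configured:

    package main

    import (
        "fmt"

        autoscalingv2 "k8s.io/api/autoscaling/v2beta2"
        metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
    )

    func main() {
        minReplicas := int32(0) // zero is the new lower bound introduced here
        hpa := autoscalingv2.HorizontalPodAutoscaler{
            ObjectMeta: metav1.ObjectMeta{Name: "queue-worker"},
            Spec: autoscalingv2.HorizontalPodAutoscalerSpec{
                ScaleTargetRef: autoscalingv2.CrossVersionObjectReference{
                    APIVersion: "apps/v1", Kind: "Deployment", Name: "queue-worker",
                },
                MinReplicas: &minReplicas,
                MaxReplicas: 10,
                // Metrics would normally list at least one Object or External metric.
            },
        }
        fmt.Println(*hpa.Spec.MinReplicas, hpa.Spec.MaxReplicas)
    }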
Joseph Burnett
7382fa464d Add josephburnett to podautoscaler OWNERS. 2019-07-12 10:20:16 +02:00
Kubernetes Prow Robot
b500c740ee
Merge pull request #79859 from sukeesh/hpa-error-log-fix
HPA incorrectly reported condition status
2019-07-11 07:28:55 -07:00
Kubernetes Prow Robot
57eef32041
Merge pull request #79657 from josephburnett/hpastuck
Ignore unschedulable pods
2019-07-10 11:34:29 -07:00
Joseph Burnett
80e279d353 Ignore pending pods.
This change adds pending pods to the ignored set before selecting
pods that are missing metrics. Pending pods are always ignored when
calculating scale.

When the HPA decides which pods and metric values to take into account
when scaling, it divides the pods into three disjoint subsets: 1)
ready 2) missing metrics and 3) ignored. First the HPA selects pods
which are missing metrics. Then it selects pods that should be ignored
because they are not ready yet, or are still consuming CPU during
initialization. All the remaining pods go into the ready set. After
the HPA has decided what direction it wants to scale based on the
ready pods, it considers what might have happened if it had the
missing metrics. It makes a conservative guess about what the missing
metrics might have been: 0% if it wants to scale up, 100% if it wants
to scale down. This is a good thing when scaling up, because newly
added pods will likely help reduce the usage ratio, even though their
metrics are missing at the moment. The HPA should wait to see the
results of its previous scale decision before it makes another
one. However when scaling down, it means that many missing metrics can
pin the HPA at high scale, even when load is completely removed. In
particular, when there are many unschedulable pods due to insufficient
cluster capacity, the many missing metrics (assumed to be 100%) can
cause the HPA to avoid scaling down indefinitely.
2019-07-10 12:16:33 +02:00
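A simplified sketch of the partitioning described above; podInfo and groupPods are illustrative names, and the real grouping in replica_calculator.go also accounts for CPU initialization windows:

    package main

    import "fmt"

    type podInfo struct {
        name      string
        phase     string // "Pending" or "Running"
        hasMetric bool
    }

    // groupPods puts pending (e.g. unschedulable) pods straight into the
    // ignored set, then collects pods without metrics, and treats the rest
    // as ready. Ignored pods never pin the scale with an assumed 100% usage.
    func groupPods(pods []podInfo) (ready, missing, ignored []string) {
        for _, p := range pods {
            switch {
            case p.phase == "Pending":
                ignored = append(ignored, p.name)
            case !p.hasMetric:
                missing = append(missing, p.name)
            default:
                ready = append(ready, p.name)
            }
        }
        return
    }

    func main() {
        ready, missing, ignored := groupPods([]podInfo{
            {"web-1", "Running", true},
            {"web-2", "Pending", false}, // unschedulable: ignored entirely
            {"web-3", "Running", false}, // missing metrics: guessed conservatively
        })
        fmt.Println(ready, missing, ignored)
    }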
Sukeesh
44c3f0105f fix incorrect hpa status 2019-07-08 17:27:38 +09:00
Joseph Burnett
39c4875321 There are various reasons that the HPA will decide not to change the
current scale. Two important ones are when missing metrics might
change the direction of scaling, and when the recommended scale is
within tolerance of the current scale.

The way that ReplicaCalculator signals its desire not to change the
current scale is by returning the current scale. However the current
scale is from scale.Status.Replicas and can be larger than
scale.Spec.Replicas (e.g. during Deployment rollout with configured
surge). This causes a positive feedback loop because
scale.Status.Replicas is written back into scale.Spec.Replicas,
further increasing the current scale.

This PR fixes the feedback loop by plumbing the replica count from
spec through horizontal.go and replica_calculator.go so the calculator
can punt with the right value.
2019-07-02 14:21:32 +02:00
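A rough sketch of the punt path this PR describes, with illustrative names and a deliberately simplified formula; the real calculator rounds up from per-pod usage and applies min/max bounds:

    package main

    import (
        "fmt"
        "math"
    )

    // recommend returns the spec replica count when the recommendation is
    // within tolerance, instead of the (possibly surged) status count, so the
    // controller never writes the surge back into spec and ratchets upward.
    func recommend(specReplicas, statusReplicas int32, usageRatio, tolerance float64) int32 {
        if math.Abs(1.0-usageRatio) <= tolerance {
            return specReplicas // before the fix this was effectively statusReplicas
        }
        return int32(math.Ceil(usageRatio * float64(statusReplicas)))
    }

    func main() {
        // Deployment rollout with surge: spec=3, status=4, load exactly at target.
        fmt.Println(recommend(3, 4, 1.0, 0.1)) // 3: stay at spec, no feedback loop
    }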
waynepeking348
b8b1720f12 Fix bug in ObjectPerPodMetricReplicas: initialize replicaCount with currentReplicas 2019-06-05 11:54:03 +00:00
GuyTempleton
1efbde2815
Handle invalid metrics when scaling on multiple metrics
Handle a case in the Horizontal Pod Autoscaler Controller when scaling
on multiple metrics and one or more is missing or invalid.

If all metrics are missing, return an error and leave the
isScalingActive condition as that of the last invalid metric.

If some metrics are missing or invalid and some are valid and found:
if the valid metrics would trigger a scale up, ignore the invalid
metrics and scale up; if they would trigger a scale down, return an
error and leave the isScalingActive condition as that of the last
invalid metric.
2019-05-29 23:20:40 +01:00
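A hedged sketch of that policy; pickReplicas is an illustrative helper, not the upstream code, and it assumes per-metric recommendations have already been computed:

    package main

    import (
        "errors"
        "fmt"
    )

    // pickReplicas takes the maximum over the valid metric recommendations,
    // fails when none are valid, and refuses to scale down while any metric
    // was invalid; scale ups driven by the valid metrics still go through.
    func pickReplicas(current int32, proposals []int32, invalidErr error) (int32, error) {
        if len(proposals) == 0 {
            return 0, errors.New("no valid metric recommendations")
        }
        desired := proposals[0]
        for _, p := range proposals[1:] {
            if p > desired {
                desired = p
            }
        }
        if invalidErr != nil && desired < current {
            return 0, fmt.Errorf("refusing to scale down with invalid metrics: %w", invalidErr)
        }
        return desired, nil
    }

    func main() {
        desired, err := pickReplicas(3, []int32{5}, errors.New("cpu metric unavailable"))
        fmt.Println(desired, err) // 5 <nil>: the scale up proceeds despite the invalid metric
    }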
GuyTempleton
ee4dbbcbff
Add tests for handling scaling on unavailable metrics
Add three tests for handling invalid metrics when scaling on
multiple metrics - one for scaling up successfully (new behaviour)
and two for ensuring we don't scale down (existing behaviour).
2019-05-29 23:11:32 +01:00
Kubernetes Prow Robot
4ebe11a6cb
Merge pull request #76110 from DirectXMan12/infra/prune-owners
Prune directxman12 from metrics/autoscaling OWNERS
2019-04-29 14:35:36 -07:00
Davanum Srinivas
7b8c9acc09
remove unused code
Change-Id: If821920ec8872e326b7d85437ad8d2620807799d
2019-04-19 08:36:31 -04:00
Joel Smith
f50696adda Fix potential test flakes in HPA tests TestEventNotCreated and TestAvoidUncessaryUpdates
Also, re-work the code so that the lock is never held while writing to the chan
2019-04-17 08:10:33 -06:00
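A small Go sketch of the locking pattern this commit describes (the recorder type is illustrative): mutate state under the lock, release it, and only then send on the channel so a slow receiver cannot block other lock holders:

    package main

    import (
        "fmt"
        "sync"
    )

    type recorder struct {
        mu     sync.Mutex
        events []string
        out    chan string
    }

    func (r *recorder) record(e string) {
        r.mu.Lock()
        r.events = append(r.events, e)
        r.mu.Unlock() // released before the potentially blocking send

        r.out <- e
    }

    func main() {
        r := &recorder{out: make(chan string, 1)}
        r.record("scale event")
        fmt.Println(<-r.out, len(r.events))
    }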
Kubernetes Prow Robot
deb48e331a
Merge pull request #76189 from soltysh/fix_legacy_podautoscaler
Fix flaky legacy pod autoscaler test
2019-04-05 14:34:05 -07:00
Kubernetes Prow Robot
1cdb4c965a
Merge pull request #74946 from ialidzhikov/clean-ineffectual-assignments
Clean ineffectual assignments
2019-04-05 14:33:53 -07:00
Maciej Szulik
bcfd48c29e
Fix flaky legacy pod autoscaler test
The reactor in runTest is set to catch all actions, but it only
handles CreateAction without checking the action type, which can
sometimes fail when a Patch arrives. This fix ensures we handle only
the CreateAction.
2019-04-05 13:20:30 +02:00
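An illustrative client-go fake reactor in the spirit of this fix, checking the action type before handling it; the verb, resource, and handling here are assumptions, not the actual runTest reactor:

    package main

    import (
        "fmt"

        autoscalingv1 "k8s.io/api/autoscaling/v1"
        "k8s.io/apimachinery/pkg/runtime"
        "k8s.io/client-go/kubernetes/fake"
        core "k8s.io/client-go/testing"
    )

    func main() {
        client := fake.NewSimpleClientset()
        client.PrependReactor("*", "horizontalpodautoscalers", func(action core.Action) (bool, runtime.Object, error) {
            create, ok := action.(core.CreateAction)
            if !ok {
                return false, nil, nil // not a create (e.g. a Patch): let the default tracker handle it
            }
            return true, create.GetObject().(*autoscalingv1.HorizontalPodAutoscaler), nil
        })
        fmt.Println("reactors registered:", len(client.ReactionChain))
    }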
Solly Ross
837976cb59 Prune directxman12 from metrics/autoscaling OWNERS
Since I'm not really working on metrics or autoscaling stuff any more, I
figured it was time to remove myself from the approvers list.
2019-04-03 16:24:51 -07:00
ialidzhikov
c3b2fb0d11 Clean ineffectual assignments
Signed-off-by: ialidzhikov <i.alidjikov@gmail.com>
2019-03-23 00:27:07 +02:00
Kubernetes Prow Robot
4499275cb9
Merge pull request #72800 from stewart-yu/stewart-component-base
Move config local to every controller in KCM
2019-03-21 19:26:19 -07:00
caiweidong
5fa000e5ed Change log: avoid printing the raw JSON response too frequently 2019-03-13 13:07:01 +08:00
stewart-yu
ecbd5427e7 auto-generated file 2019-03-02 12:55:26 +08:00