Commit Graph

152 Commits

Author SHA1 Message Date
Kobayashi Daisuke
4ae11dac2e Replace StartLogging(klog.Infof) with StartStructuredLogging(0) 2020-06-15 17:48:35 +09:00
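A minimal sketch of the change above, assuming the client-go `record.EventBroadcaster` API; the function and package names are illustrative, not the controller's actual wiring.

```go
package hpa // illustrative package name

import "k8s.io/client-go/tools/record"

// newEventBroadcaster sketches the logging change: the old call was
// broadcaster.StartLogging(klog.Infof); the replacement emits structured logs.
func newEventBroadcaster() record.EventBroadcaster {
	broadcaster := record.NewBroadcaster()
	broadcaster.StartStructuredLogging(0) // verbosity 0, as in the commit
	return broadcaster
}
```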
Davanum Srinivas
442a69c3bd
switch over k/k to use klog v2
Signed-off-by: Davanum Srinivas <davanum@gmail.com>
2020-05-16 07:54:27 -04:00
Kubernetes Prow Robot
2b2cf8df30
Merge pull request #80700 from mrkm4ntr/add-error-check
Add missing error check
2020-05-11 00:37:51 -07:00
Kubernetes Prow Robot
1827fe444e
Merge pull request #87895 from alexzimmer96/68026-lint-pkg-controller-autoscaler
Fix Golint errors in pkg/controller/podautoscaler
2020-03-17 16:19:53 -07:00
Julian V. Modesto
da3c3432d8 Add context and options to scale client 2020-03-02 00:03:26 -05:00
Kubernetes Prow Robot
1a0f923a65
Merge pull request #87712 from alena1108/jan30kubelet
Ineffassign fixes for pkg/controller and kubelet
2020-02-14 14:29:27 -08:00
Mike Danese
25651408ae generated: run refactor 2020-02-08 12:30:21 -05:00
Mike Danese
3aa59f7f30 generated: run refactor 2020-02-07 18:16:47 -08:00
Alexander Zimmermann
a1c837022c
Fixed Golint errors in pkg/controller/podautoscaler 2020-02-06 17:16:38 +01:00
Alena Prokharchyk
6c3093f970 Ineffassign fixes for pkg/controller and kubelet 2020-01-30 14:35:10 -08:00
Ivan Glushkov
27ffe439b6
Adds the algorithm implementation for the Configurable HPA 2019-12-10 20:37:33 +04:00
tanjunchen
de3cf23414 remove repeated words in documents 2019-10-06 23:32:01 +08:00
Yassine TIJANI
7e4c3096fe move WaitForCacheSync to the sharedInformer package
Signed-off-by: Yassine TIJANI <ytijani@vmware.com>
2019-08-22 16:13:41 +01:00
Shintaro Murakami
4635f16dc1 Add missing error check 2019-07-29 14:37:48 +09:00
David Xia
fabfd950b1
cleanup: fix some log and error capitalizations
Part of https://github.com/kubernetes/kubernetes/issues/15863
2019-07-20 18:26:16 -04:00
Rinat Shigapov
d55f037b7d HPA scale-to-zero for custom object/external metrics
Add support for scaling to zero pods

minReplicas is allowed to be zero

condition is set once

Based on https://github.com/kubernetes/kubernetes/pull/61423

set original valid condition

add scale to/from zero and invalid metric tests

Scaling up from zero pods ignores tolerance

validate metrics when minReplicas is 0

Document HPA behaviour when minReplicas is 0

Documented minReplicas field in autoscaling APIs
2019-07-16 08:46:21 -05:00
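A hypothetical Go sketch of the scale-to-zero rule described above; `shouldScale` and its parameters are illustrative names, not the controller's actual code.

```go
package hpa // illustrative only

import "math"

// shouldScale sketches the rule: scaling up from zero pods ignores the
// tolerance band; otherwise the usage ratio must fall outside the band
// before the HPA changes the replica count.
func shouldScale(currentReplicas, minReplicas int32, usageRatio, tolerance float64) bool {
	if currentReplicas == 0 && minReplicas == 0 {
		return usageRatio > 0 // any load triggers a scale up from zero
	}
	return math.Abs(usageRatio-1.0) > tolerance
}
```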
Sukeesh
44c3f0105f fix incorrect hpa status 2019-07-08 17:27:38 +09:00
Joseph Burnett
39c4875321 There are various reasons that the HPA will decide not to change the
current scale. Two important ones are when missing metrics might
change the direction of scaling, and when the recommended scale is
within tolerance of the current scale.

The way that ReplicaCalculator signals its desire not to change the
current scale is by returning the current scale. However, the current
scale is from scale.Status.Replicas and can be larger than
scale.Spec.Replicas (e.g. during Deployment rollout with configured
surge). This causes a positive feedback loop because
scale.Status.Replicas is written back into scale.Spec.Replicas,
further increasing the current scale.

This PR fixes the feedback loop by plumbing the replica count from
spec through horizontal.go and replica_calculator.go so the calculator
can punt with the right value.
2019-07-02 14:21:32 +02:00
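A hypothetical Go sketch of the fix described above: when the result is within tolerance, the calculator "punts" by returning the replica count from spec rather than from status, so a surge reflected in status replicas is not written back into spec. Names are illustrative.

```go
package hpa // illustrative only

import "math"

// computeDesired bases both the "no change" return and the desired-replica
// math on the spec replica count, avoiding the feedback loop through status.
func computeDesired(specReplicas int32, usageRatio, tolerance float64) int32 {
	if math.Abs(usageRatio-1.0) <= tolerance {
		return specReplicas // keep the currently specified scale
	}
	return int32(math.Ceil(usageRatio * float64(specReplicas)))
}
```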
GuyTempleton
1efbde2815
Handle invalid metrics when scaling on multiple metrics
Handle a case in the Horizontal Pod Autoscaler Controller when scaling
on multiple metrics and one or more of them is missing or invalid.

If all metrics are missing, return an error and leave the isScalingActive
condition as that for the last invalid metric.

If some metrics are missing or invalid and some are valid and found: if the
valid metrics would trigger a scale up, ignore the missing metrics and scale
up; if they would trigger a scale down, return an error and leave the
isScalingActive condition as that for the last invalid metric.
2019-05-29 23:20:40 +01:00
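A hypothetical Go sketch of the policy described above; the helper and its parameters are illustrative only.

```go
package hpa // illustrative only

import "errors"

// pickReplicas tolerates invalid metrics only when the valid ones would
// scale up; a scale down computed alongside invalid metrics is rejected.
func pickReplicas(currentReplicas int32, validProposals []int32, invalidCount int) (int32, error) {
	if len(validProposals) == 0 {
		return currentReplicas, errors.New("no valid metric values")
	}
	desired := validProposals[0]
	for _, p := range validProposals[1:] {
		if p > desired {
			desired = p
		}
	}
	if invalidCount > 0 && desired < currentReplicas {
		return currentReplicas, errors.New("invalid metrics present, refusing to scale down")
	}
	return desired, nil
}
```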
ialidzhikov
c3b2fb0d11 Clean ineffectual assignments
Signed-off-by: ialidzhikov <i.alidjikov@gmail.com>
2019-03-23 00:27:07 +02:00
Kubernetes Prow Robot
a4e3a5cb52
Merge pull request #71561 from anjensan/hpa-fix-current-metrics
Fix 'currentMetrics' field for HPA with 'AverageValue' target type
2019-02-04 03:34:52 -08:00
Kubernetes Prow Robot
a3f74bd583
Merge pull request #72872 from arjunrn/object-average-value
Added functionality for specifying target average value for object me…
2019-02-01 06:31:50 -08:00
Arjun Naik
c99d505001 Added functionality to use target average value for object metrics
Signed-off-by: Arjun Naik <arjun.rn@gmail.com>
2019-01-23 21:00:05 +01:00
Krzysztof Jastrzebski
7498c14218 Update comments in Horizontal Pod Autoscaler Controller. 2019-01-07 10:06:21 +01:00
Krzysztof Jastrzebski
c6ebd126a7 Add request processing HPA into the queue after processing is finished.
This fixes a bug where a request inserted by resync was skipped because the previous one had not been processed yet.
2019-01-04 11:59:57 +01:00
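A sketch of the re-queue pattern described above, assuming the client-go workqueue API; the surrounding function is illustrative, not the controller's actual code.

```go
package hpa // illustrative only

import "k8s.io/client-go/util/workqueue"

// processNextWorkItem re-adds the HPA key once processing finishes, so a
// resync request that arrived while the key was still in flight is not lost.
func processNextWorkItem(queue workqueue.RateLimitingInterface, reconcile func(key string) error) bool {
	key, quit := queue.Get()
	if quit {
		return false
	}
	defer queue.Done(key)

	if err := reconcile(key.(string)); err == nil {
		queue.AddRateLimited(key) // add the HPA back after processing is finished
	}
	return true
}
```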
danielqsj
3c055aa4b4 Fix typos like limitting 2018-12-04 11:01:40 +08:00
Andrei Zhlobich
a8c58bcd24 Fix updating 'currentMetrics' field for HPA with 'AverageValue' target 2018-11-29 11:50:33 +01:00
Davanum Srinivas
954996e231
Move from glog to klog
- Move from the old github.com/golang/glog to k8s.io/klog
- klog has explicit InitFlags(), so we add the flags as necessary
- we update the other repositories that we vendor that made a similar
change from glog to klog
  * github.com/kubernetes/repo-infra
  * k8s.io/gengo/
  * k8s.io/kube-openapi/
  * github.com/google/cadvisor
- Entirely remove all references to glog
- Fix some tests by explicit InitFlags in their init() methods

Change-Id: I92db545ff36fcec83afe98f550c9e630098b3135
2018-11-10 07:50:31 -05:00
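A minimal sketch of the explicit flag registration mentioned above; the package name is illustrative.

```go
package hpa // illustrative only

import "k8s.io/klog"

// init registers klog's flags explicitly: klog exposes InitFlags(), so
// binaries and tests add the flags where needed instead of relying on
// glog's implicit global registration.
func init() {
	klog.InitFlags(nil) // nil registers the flags on flag.CommandLine
}
```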
Christoph Blecker
97b2992dc1
Update gofmt for go1.11 2018-10-05 12:59:38 -07:00
Joachim Bartosik
7d7c48a647 HPA stabilizes initial recommendation
HPA will treat the initial size of the autoscalee as a recommendation, to avoid
hastily overriding recommendations made by HPA (if HPA set the size and was then
restarted) or by the user (the initial size should be treated as a human-generated
recommendation).
2018-09-19 14:54:55 +02:00
Krzysztof Jastrzebski
985ba931b1 Use informer cache instead of active pod gets in HPA controller. 2018-09-05 11:31:27 +02:00
Krzysztof Jastrzebski
958cba1c82 Replace scale down forbidden window
The replacement is a scale-down stabilization window. HPA will scale down only
to the max of the recommendations it made during that window. More details in
https://docs.google.com/document/d/1IdG3sqgCEaRV3urPLA29IDudCufD89RYCohfBPNeWIM
2018-08-31 20:24:38 +02:00
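A Go sketch of the scale-down stabilization window described above; the names mirror the idea but are illustrative, not the controller's actual implementation.

```go
package hpa // illustrative only

import "time"

type timestampedRecommendation struct {
	recommendation int32
	timestamp      time.Time
}

// stabilizeRecommendation scales down only to the maximum of the
// recommendations produced during the stabilization window.
func stabilizeRecommendation(history []timestampedRecommendation, window time.Duration, proposed int32, now time.Time) int32 {
	stabilized := proposed
	for _, rec := range history {
		if now.Sub(rec.timestamp) <= window && rec.recommendation > stabilized {
			stabilized = rec.recommendation
		}
	}
	return stabilized
}
```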
Mike Dame
c7102ee5dc Implement autoscaling/v2beta2 features in HPA controller 2018-08-27 11:07:52 -04:00
liangwenguo
8f8a7bb83f make the log more readable 2018-08-07 10:00:31 +08:00
Joachim Bartosik
7681c284f5 Remove UpscaleForbiddenWindow
Instead, discard metric values for pods that are unready and have never
been ready (they may report misleading values, which was the original reason
for introducing the scale-up forbidden window).

Use per pod metric when pod is:
- Ready, or
- Not ready but creation timestamp and last readiness change are more
  than 10s apart.

In the latter case we assume the pod was ready but later became unready.
We want to use metrics for such pods because such pods are sometimes unready
precisely because they were getting too much load.
2018-08-01 17:47:23 +02:00
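A hypothetical Go helper sketching the readiness rule above: count a pod's metric when it is ready, or when it is unready but its creation time and last readiness transition are more than 10s apart.

```go
package hpa // illustrative only

import (
	"time"

	v1 "k8s.io/api/core/v1"
)

// usePodMetric returns true when the pod's metric should be counted: the pod
// is ready, or it was presumably ready once and only later became unready.
func usePodMetric(pod *v1.Pod, readyCondition *v1.PodCondition) bool {
	if readyCondition == nil {
		return false
	}
	if readyCondition.Status == v1.ConditionTrue {
		return true
	}
	return readyCondition.LastTransitionTime.Time.Sub(pod.CreationTimestamp.Time) > 10*time.Second
}
```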
Joachim Bartosik
9b91a89f3d Chop computeReplicasForMetrics to smaller pieces 2018-07-18 17:09:20 +02:00
David Eads
9a48066749 update restmapping to indicate fully qualified resource 2018-05-01 16:34:49 -04:00
Mikhail Mazurskiy
468655b76a
Use typed events client directly 2018-04-01 18:57:29 +10:00
mattjmcnaughton
d33494d459 GetExternalMetricReplicas ignores unready pods
Similar to the change we made for `GetObjectMetricReplicas` in the
previous commit. Ensure that `GetExternalMetricReplicas` does not
include unready pods when it is determining how many replicas it desires.
Including unready pods can lead to over-scaling.

We did not change the behavior of `GetExternalPerPodMetricReplicas`, as
it is slightly less clear what the desired behavior is. We did make some
small naming refactorings to this method, which will make it easier to
ignore unready pods if we decide we want to.
2018-03-13 22:27:28 -04:00
mattjmcnaughton
7e3bce7b3e GetObjectMetricReplicas ignores unready pods
Previously, when `GetObjectMetricReplicas` calculated the desired
replica count, it multiplied the usage ratio by the current number of replicas.
This method caused over-scaling when there were pods that were not ready
for a long period of time. For example, if there were pods A, B, and C,
and only pod A was ready, and the usage ratio was 500%, we would
previously specify 15 pods as the desired replicas (even though really
only one pod was handling the load).

After this change, we now multiply the usage
ratio by the number of ready pods for `GetObjectMetricReplicas`.
In the example above, we'd only desire 5 replica pods.

This change gives `GetObjectMetricReplicas` the same behavior as the
other replica calculator methods. Only `GetExternalMetricReplicas` and
`GetExternalPerPodMetricReplicas` still allow unready pods to impact the
number of desired replicas. I will fix this issue in the following
commit.
2018-03-07 08:13:01 -05:00
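A hypothetical Go helper sketching the change above and the worked example from the commit message: with a 500% usage ratio and only 1 of 3 pods ready, the result is 5 desired replicas instead of 15.

```go
package hpa // illustrative only

import "math"

// objectMetricReplicas multiplies the usage ratio by the number of *ready*
// pods rather than by all current replicas, avoiding over-scaling when many
// pods are unready.
func objectMetricReplicas(usageRatio float64, readyPods int32) int32 {
	return int32(math.Ceil(usageRatio * float64(readyPods)))
}
```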
Aleksandra Malinowska
e58411c600 Implement external metrics in HPA 2018-02-27 14:10:29 +01:00
Matt Brown
151a7d2731 correct typo in HorizontalPodAutoscaler status condition
"succesfully" => "successfully"
2018-01-29 13:01:43 -05:00
mattjmcnaughton
e74838b6ab Refactor reconcileAutoscaler method in hpa
There have been a couple of recent bugs in the "normalizing" part of the
`reconcileAutoscaler` method. This part of the code base is responsible
for, among other things, taking the suggested desired replicas based on
the metrics, ensuring it conforms to certain conditions, and updating it
if it does not. Isolate the part that converts the desired replicas
based on a given set of rules into its own function.

We are refactoring this part of the code base to make the logic simpler
and to make it easier to write unit tests.
2017-11-16 09:42:49 -05:00
Dr. Stefan Schimanski
2b201ead11 Fix and update comment with api.Scheme 2017-10-30 19:54:02 +01:00
Kubernetes Submit Queue
ca8d97d673 Merge pull request #53743 from DirectXMan12/feature/polymorphic-scale-client
Automatic merge from submit-queue (batch tested with PRs 53743, 53564). If you want to cherry-pick this change to another branch, please follow the instructions here: https://github.com/kubernetes/community/blob/master/contributors/devel/cherry-picks.md

Polymorphic Scale Client

This PR introduces a polymorphic scale client based on discovery information that's able to scale scalable resources in arbitrary group-versions, as long as they present the scale subresource in their discovery information.

Currently, it supports `extensions/v1beta1.Scale` and `autoscaling/v1.Scale`, but supporting other versions of scale if/when we produce them should be fairly trivial.

It also updates the HPA to use this client, meaning the HPA will now work on any scalable resource, not just things in the `extensions/v1beta1` API group.

**Release note**:
```release-note
Introduces a polymorphic scale client, allowing HorizontalPodAutoscalers to properly function on scalable resources in any API group.
```

Unblocks #29698
Unblocks #38756
Unblocks #49504 
Fixes #38810
2017-10-23 13:39:07 -07:00
Solly Ross
d2b41120ea Make HPA controller use polymorphic scale client
This updates the HPA controller to use the polymorphic scale client from
client-go.  This should enable HPAs to work with arbitrary scalable
resources, instead of just those in the extensions API group (meaning we
can deprecate the copy of ReplicationController in extensions/v1beta1).
It also means that the HPA controller now pays attention to the
APIVersion field in `scaleTargetRef` (more specifically, the group part
of it).

Note that currently, discovery information on which resources are
available where is only fetched once (the first time that it's
requested).  In the future, we may want a refreshing discovery REST
mapper.
2017-10-19 13:21:02 -04:00
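A hedged sketch of how a controller can resize an arbitrary scalable resource through the polymorphic scale client in `k8s.io/client-go/scale`; the function is illustrative, and the exact method signatures vary by client-go version (a later commit in this log adds context and options parameters).

```go
package hpa // illustrative only

import (
	"k8s.io/apimachinery/pkg/runtime/schema"
	"k8s.io/client-go/scale"
)

// resizeTarget only needs the target's group/resource, not a typed client
// per API group, because the scale subresource is discovered polymorphically.
func resizeTarget(scales scale.ScalesGetter, namespace string, gr schema.GroupResource, name string, replicas int32) error {
	s, err := scales.Scales(namespace).Get(gr, name)
	if err != nil {
		return err
	}
	s.Spec.Replicas = replicas
	_, err = scales.Scales(namespace).Update(gr, s)
	return err
}
```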
hzxuzhonghu
96b48d4386 add event broadcaster logging for all controller managers 2017-10-19 09:18:43 +08:00
Dr. Stefan Schimanski
7773a30f67 pkg/api/legacyscheme: fixup imports 2017-10-18 17:23:55 +02:00
Kubernetes Submit Queue
03cb11f020 Merge pull request #52275 from mattjmcnaughton/mattjmcnaughton/18155-hpa-tolerance-should-be-flag
Automatic merge from submit-queue. If you want to cherry-pick this change to another branch, please follow the instructions here: https://github.com/kubernetes/community/blob/master/contributors/devel/cherry-picks.md

Make HPA tolerance a flag

**What this PR does / why we need it**:
Make HPA tolerance configurable as a flag. This change allows us to use
different tolerance values in production/testing.

**Which issue this PR fixes**: 
Fixes #18155

**Release note:**
```release-note
Control HPA tolerance through the `horizontal-pod-autoscaler-tolerance` flag.
```

Signed-off-by: mattjmcnaughton <mattjmcnaughton@gmail.com>
2017-10-16 16:47:43 -07:00
mattjmcnaughton
75c38777ad Fix hpa scaling above max replicas w/ scaleUpLimit
Fix #53670

Fix a bug where `desiredReplicas` could be greater than `maxReplicas`
if the original value for `desiredReplicas > scaleUpLimit` and
`scaleUpLimit > maxReplicas`. Previously, when that happened, we would
scale up to `scaleUpLimit`, and then in the next auto-scaling run, scale
down to `maxReplicas`. Address this issue and introduce a regression
test.
2017-10-11 08:35:31 -04:00
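A hypothetical Go helper sketching the fix above: the desired replica count is capped by both the scale-up limit and maxReplicas, so a scaleUpLimit larger than maxReplicas can no longer push the result above the configured maximum.

```go
package hpa // illustrative only

// clampDesired caps the desired replicas at min(scaleUpLimit, maxReplicas).
func clampDesired(desired, scaleUpLimit, maxReplicas int32) int32 {
	limit := maxReplicas
	if scaleUpLimit < limit {
		limit = scaleUpLimit
	}
	if desired > limit {
		desired = limit
	}
	return desired
}
```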