Resource Consumer

Overview

Resource Consumer is a tool which allows generating CPU/memory utilization in a container. It was created for testing Kubernetes autoscaling. Resource Consumer can help with autoscaling tests for:

  • cluster size autoscaling,
  • horizontal autoscaling of a pod - changing the number of replicas managed by a replication controller,
  • vertical autoscaling of a pod - changing its resource limits.

Usage

Resource Consumer starts an HTTP server and handles incoming requests. It listens on the port given as a flag (default 8080). The action of consuming resources is sent to the container by an HTTP POST request. Each HTTP request creates a new process. The HTTP request handler is in the file resource_consumer_handler.go.
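
The server can also be tried locally with Docker; a minimal sketch, assuming the image's default entrypoint starts the server on the default port 8080 (values in the request are illustrative):

docker run -d --rm -p 8080:8080 gcr.io/k8s-staging-e2e-test-images/resource-consumer:1.9   # assumes the entrypoint serves on port 8080
curl --data "millicores=100&durationSec=60" http://localhost:8080/ConsumeCPU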

The container consumes a specified amount of resources:

  • CPU in millicores,
  • Memory in megabytes,
  • Fake custom metrics.

Consume CPU HTTP request

  • suffix "ConsumeCPU",
  • parameters "millicores" and "durationSec".

Consumes the specified number of millicores for durationSec seconds. Consume CPU uses the "./consume-cpu/consume-cpu" binary (file consume-cpu/consume_cpu.go). When CPU consumption is too low, this binary burns CPU by calculating math.sqrt(0) 10^7 times; when consumption is too high, it sleeps for 10 milliseconds. One replica of Resource Consumer cannot consume more than 1 CPU.

Consume Memory HTTP request

  • suffix "ConsumeMem",
  • parameters "megabytes" and "durationSec".

Consumes the specified number of megabytes for durationSec seconds. Consume Memory uses the stress tool (stress -m 1 --vm-bytes <megabytes> --vm-hang 0 -t <durationSec>). A request that would consume more memory than the container limit will be ignored.
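
For example, the following request (illustrative values; <EXTERNAL-IP> as in the CURL example below) asks one replica to hold roughly 200 megabytes for 10 minutes:

curl --data "megabytes=200&durationSec=600" http://<EXTERNAL-IP>:8080/ConsumeMem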

Bump value of a fake custom metric

  • suffix "BumpMetric",
  • parameters "metric", "delta" and "durationSec".

Bumps the metric with the given name by delta for durationSec seconds. Custom metrics in Prometheus format are exposed on the "/metrics" endpoint.
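
For example, the first request below bumps a custom metric named QPS (an illustrative name) by 10 for 10 minutes, and the second reads it back from the metrics endpoint:

curl --data "metric=QPS&delta=10&durationSec=600" http://<EXTERNAL-IP>:8080/BumpMetric
curl http://<EXTERNAL-IP>:8080/metrics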

CURL example

kubectl run resource-consumer --image=gcr.io/k8s-staging-e2e-test-images/resource-consumer:1.9 --expose --service-overrides='{ "spec": { "type": "LoadBalancer" } }' --port 8080 --requests='cpu=500m,memory=256Mi'
kubectl get services resource-consumer

There are two IPs: the first one is the internal cluster IP, while the second one is the external load-balanced IP. Both serve port 8080. (Use the second one.)

curl --data "millicores=300&durationSec=600" http://<EXTERNAL-IP>:8080/ConsumeCPU

300 millicores will be consumed for 600 seconds.

Image

The Docker image of Resource Consumer can be found in the Google Container Registry as gcr.io/k8s-staging-e2e-test-images/resource-consumer:1.9

Use cases

Cluster size autoscaling

  1. Consume more resources on each node than is specified for the autoscaler
  2. Observe that the cluster size has increased

Horizontal autoscaling of pod

  1. Create a consuming RC and start consuming an appropriate amount of resources (a sketch follows this list)
  2. Observe that the RC has been resized
  3. Observe that usage on each replica has decreased
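
A minimal sketch of such a test, assuming a resource-consumer replication controller and service like the ones from the CURL example above, and a cluster where CPU metrics are available to the horizontal pod autoscaler:

kubectl autoscale rc resource-consumer --min=1 --max=5 --cpu-percent=50   # target 50% of the 500m CPU request
curl --data "millicores=400&durationSec=600" http://<EXTERNAL-IP>:8080/ConsumeCPU
kubectl get hpa resource-consumer   # watch the HPA raise the replica count
kubectl get rc resource-consumer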

Vertical autoscaling of pod

  1. Create a consuming pod and start consuming an appropriate amount of resources
  2. Observe that its limits have been increased