This PR removes a TODO comment by adding netmask tests. The TODO comment
was introduced by commit e768924a62 ("validate entry in ipset"):
// TODO: CIDR /32 may not be valid
The comment suggests that a /32 netmask may be invalid, but in reality
values from 0 to 32 are all valid, as the Linux ipset command itself
confirms:
$ sudo ipset create foo hash:ip,port,net
$ sudo ipset add foo 10.20.30.40,53,192.168.3.1/33
ipset v7.5: Syntax error: '33' is out of range 0-32
$ sudo ipset --version
ipset v7.5, protocol version: 7
Signed-off-by: Masashi Honma <masashi.honma@gmail.com>
In #56164, we split the reject rules for existing services without
endpoints into the KUBE-EXTERNAL-SERVICES chain in order to avoid calling
KUBE-SERVICES from INPUT. However, in #74394 KUBE-SERVICES was re-added to
INPUT. As noted in #56164, the kernel is sensitive to the size of the INPUT
chain. This patch refrains from calling the KUBE-SERVICES chain from INPUT
and FORWARD, and instead adds the load balancer reject rule to the
KUBE-EXTERNAL-SERVICES chain, which is called from INPUT and FORWARD.
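To make the intended layout concrete, here is a self-contained Go sketch
(illustrative, not the actual proxier code) that prints the rules in
iptables-restore form: INPUT and FORWARD jump only to
KUBE-EXTERNAL-SERVICES, which now carries the reject rule. The service
address 203.0.113.10:80 is made up for the example.

package main

import (
	"bytes"
	"fmt"
)

func main() {
	var filter bytes.Buffer
	fmt.Fprintln(&filter, "*filter")
	fmt.Fprintln(&filter, ":KUBE-EXTERNAL-SERVICES - [0:0]")
	// INPUT and FORWARD jump only to the small KUBE-EXTERNAL-SERVICES
	// chain, not to the much larger KUBE-SERVICES chain.
	fmt.Fprintln(&filter, "-A INPUT -m conntrack --ctstate NEW -j KUBE-EXTERNAL-SERVICES")
	fmt.Fprintln(&filter, "-A FORWARD -m conntrack --ctstate NEW -j KUBE-EXTERNAL-SERVICES")
	// The reject rule for a load balancer IP whose service has no
	// endpoints lives here instead of in KUBE-SERVICES.
	fmt.Fprintln(&filter, "-A KUBE-EXTERNAL-SERVICES -d 203.0.113.10/32 -p tcp --dport 80 -j REJECT")
	fmt.Fprintln(&filter, "COMMIT")
	fmt.Print(filter.String())
}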
The controlplane unit tests produce a warning caused by falling back to the
deprecated default service cluster IP range.
make test WHAT=./pkg/controlplane GOFLAGS=-v
W1015 07:42:59.203836 111754 services.go:37] No CIDR for service cluster IPs specified. Default value which was 10.0.0.0/24 is deprecated and will be removed in future releases. Please specify it using --service-cluster-ip-range on kube-apiserver.
This warning appears in seven tests: TestValidOpenAPISpec, TestLegacyRestStorageStrategies, TestCertificatesRestStorageStrategies, TestVersion, TestAPIVersionOfDiscoveryEndpoints, TestStorageVersionHashes, and TestStorageVersionHashEqualities.
This patch fixes the warning by passing ServiceIPRange explicitly.
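A minimal sketch of the fix's shape, assuming the test setup has access to
the apiserver config; the helper and the config field mentioned in the
comment are illustrative, not the exact patch.

package main

import (
	"fmt"
	"net"
)

// serviceIPRange parses the range a test should pass explicitly so the
// apiserver does not fall back to the deprecated 10.0.0.0/24 default.
func serviceIPRange(cidr string) (net.IPNet, error) {
	_, ipnet, err := net.ParseCIDR(cidr)
	if err != nil {
		return net.IPNet{}, err
	}
	return *ipnet, nil
}

func main() {
	r, err := serviceIPRange("10.0.0.0/24")
	if err != nil {
		panic(err)
	}
	// In the real tests this value would be set on the server config
	// (e.g. a ServiceIPRange field) before starting the test apiserver.
	fmt.Println(r.String())
}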
Signed-off-by: Masashi Honma <masashi.honma@gmail.com>
When the DefaultPodTopologySpread feature is enabled:
If SelectorSpreadPriority is in use, PodTopologySpread is inevitably enabled.
When only EvenPodsSpreadPriority is in use, PodTopologySpread is configured
without system defaults (see the sketch below).
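A hedged sketch of that decision logic. The names are hypothetical, not the
actual scheduler code, and the assumption that the SelectorSpreadPriority
case uses the system defaults is inferred from the contrast with the
EvenPodsSpreadPriority case above.

package main

import "fmt"

// podTopologySpreadConfig captures the two outcomes described above.
type podTopologySpreadConfig struct {
	enabled           bool
	useSystemDefaults bool
}

func translate(hasSelectorSpread, hasEvenPodsSpread bool) podTopologySpreadConfig {
	switch {
	case hasSelectorSpread:
		// SelectorSpreadPriority in use: PodTopologySpread is inevitably
		// enabled (assumed here to use the system defaults).
		return podTopologySpreadConfig{enabled: true, useSystemDefaults: true}
	case hasEvenPodsSpread:
		// Only EvenPodsSpreadPriority in use: PodTopologySpread is
		// configured without system defaults.
		return podTopologySpreadConfig{enabled: true, useSystemDefaults: false}
	}
	return podTopologySpreadConfig{}
}

func main() {
	fmt.Println(translate(true, false)) // {true true}
	fmt.Println(translate(false, true)) // {true false}
}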
Change-Id: I2389a585cd8ad0bd35b0d2acae1665cd46908b3e
When a pod is deleted, it is given a deletion timestamp. However, the
pod might still run for some time during graceful shutdown. During
this time it might still produce CPU utilization metrics and remain in
the Running phase.
Currently the HPA replica calculator attempts to ignore deleted pods
by skipping over them. However, because they are not added to the
ignoredPods set, their metrics are not removed from the average
utilization calculation. This allows pods in the process of shutting
down to drag down the recommended number of replicas by producing
near-0% utilization metrics.
In fact, the ignoredPods set is a misnomer: those pods are not fully
ignored. When the replica calculator recommends scaling up, 0%
utilization metrics are filled in for those pods to limit the scale-up.
This prevents overscaling when pods take some time to start up. There
should really be four sets considered (readyPods, unreadyPods,
missingPods, ignoredPods), not just three.
This change renames ignoredPods to unreadyPods and keeps the scale-up
limiting semantics. Another set, ignoredPods (this time actually
ignored), is added, to which deleted pods are added instead of being
skipped during grouping. Both ignoredPods and unreadyPods have their
metrics removed from consideration, but only unreadyPods have 0%
utilization metrics filled in upon scale-up.
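A simplified sketch of the resulting four-way grouping; the types and
names are illustrative, not the actual replica calculator code.

package main

import "fmt"

type podInfo struct {
	name      string
	deleted   bool // deletion timestamp set
	ready     bool
	hasMetric bool
}

// groupPods sorts pods into the four sets described above.
func groupPods(pods []podInfo) (ready, unready, missing, ignored []string) {
	for _, p := range pods {
		switch {
		case p.deleted:
			// Deleted pods are now truly ignored: metrics dropped, and
			// no 0% utilization filled in on scale-up.
			ignored = append(ignored, p.name)
		case !p.hasMetric:
			missing = append(missing, p.name)
		case !p.ready:
			// Unready pods keep the old ignoredPods semantics: metrics
			// dropped, but 0% utilization filled in on scale-up.
			unready = append(unready, p.name)
		default:
			ready = append(ready, p.name)
		}
	}
	return
}

func main() {
	pods := []podInfo{
		{name: "a", ready: true, hasMetric: true},
		{name: "b", deleted: true, hasMetric: true}, // shutting down
		{name: "c"},                                 // no metric yet
		{name: "d", hasMetric: true},                // unready
	}
	ready, unready, missing, ignored := groupPods(pods)
	fmt.Println(ready, unready, missing, ignored) // [a] [d] [c] [b]
}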
While trying to fix a dockershim issue, we found that there were no unit
tests for dockershim/exec.go, which made it difficult to add a
corresponding unit test for the bug.
This adds unit tests to avoid such a situation in the future.