When I first wrote TestInternalExternalMasquerade, I put "FIXME"
comments in all of the cases that seemed wrong to me, most of which
got removed as we fixed the corner cases. But there were two cases
where we decided that the implemented behavior, though odd, was
correct, and those FIXMEs never got removed.
All the code to deal with enabling/disabling the feature gate is gone,
but some of the tests were still specifying "this test case assumes
PTE is enabled".
Remove "EndpointSlice" from some unit test names, because they don't
need to clarify that they use EndpointSlices now, because all of the
tests use EndpointSlices now.
Likewise, remove TestEndpointSliceE2E entirely; it was originally an
EndpointSlice version of one of the other tests, but the other test
uses EndpointSlices now too.
TL;DR: we want to start failing the LB HC if a node is tainted with ToBeDeletedByClusterAutoscaler.
This signal might need refinement, but it is currently deemed our best way of knowing
that a node is about to be deleted. We want to do this only for eTP:Cluster services.
The goal is to connection-drain terminating nodes.
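Roughly, the check involved looks like the sketch below; the helper name and
wiring are illustrative, not the actual kube-proxy code:

    package healthcheck

    import v1 "k8s.io/api/core/v1"

    const toBeDeletedTaint = "ToBeDeletedByClusterAutoscaler"

    // isNodeBeingDeleted reports whether the node carries the cluster-autoscaler
    // deletion taint; such a node is about to be removed, so the LB health check
    // for eTP:Cluster services can start failing to drain connections away from it.
    func isNodeBeingDeleted(node *v1.Node) bool {
        for _, taint := range node.Spec.Taints {
            if taint.Key == toBeDeletedTaint {
                return true
            }
        }
        return false
    }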
Historically, IptablesRulesTotal could have been interpreted as either
"the total number of iptables rules kube-proxy is responsible for" or
"the number of iptables rules kube-proxy rewrote on the last sync".
Post-MinimizeIPTablesRestore, these are very different things (and
IptablesRulesTotal unintentionally became the latter).
Fix IptablesRulesTotal (sync_proxy_rules_iptables_total) to be "the
total number of iptables rules kube-proxy is responsible for" and add
IptablesRulesLastSync (sync_proxy_rules_iptables_last) to be "the
number of iptables rules kube-proxy rewrote on the last sync".
This required fixing a small bug in the metric, where it had
previously been counting the "-X" lines that had been passed to
iptables-restore to delete stale chains, rather than only counting the
actual rules.
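A simplified sketch of the two metrics as described above (the real
definitions live in kube-proxy's metrics package and use the component-base
metrics wrappers):

    package metrics

    import "github.com/prometheus/client_golang/prometheus"

    var (
        // sync_proxy_rules_iptables_total: the total number of iptables rules
        // kube-proxy is responsible for, regardless of how many were rewritten
        // on the most recent sync.
        IptablesRulesTotal = prometheus.NewGaugeVec(
            prometheus.GaugeOpts{Name: "sync_proxy_rules_iptables_total"},
            []string{"table"},
        )

        // sync_proxy_rules_iptables_last: the number of iptables rules written
        // by the most recent iptables-restore call.
        IptablesRulesLastSync = prometheus.NewGaugeVec(
            prometheus.GaugeOpts{Name: "sync_proxy_rules_iptables_last"},
            []string{"table"},
        )
    )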
getLocalDetector() used to pass a utiliptables.Interface to
NewDetectLocalByCIDR() so that NewDetectLocalByCIDR() could verify
that the passed-in CIDR was of the same family as the iptables
interface. It would make more sense for getLocalDetector() to verify
this itself and just *not call NewDetectLocalByCIDR* if the families
don't match, and that's what the code does now. So there's no longer
any need to pass the utiliptables.Interface to the local detector.
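A minimal sketch of the family check getLocalDetector() can now do itself
(when the families don't match it simply skips NewDetectLocalByCIDR(),
presumably falling back to a no-op detector); the helper below is an
illustrative stand-in:

    package app

    import (
        "fmt"
        "net"
    )

    // sameFamily reports whether the CIDR parses and is of the proxier's
    // IP family (wantIPv6 == true means IPv6).
    func sameFamily(cidr string, wantIPv6 bool) (bool, error) {
        ip, _, err := net.ParseCIDR(cidr)
        if err != nil {
            return false, fmt.Errorf("invalid cluster CIDR %q: %v", cidr, err)
        }
        return (ip.To4() == nil) == wantIPv6, nil
    }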
Rather than having GetNodeAddresses() return a special magic value
indicating that it matches all IPs, add a separate method to check
that. (And have GetNodeAddresses() just return the IPs as expected
instead.)
Both proxies handle IPv4 and IPv6 nodeport addresses separately, but
GetNodeAddresses went out of its way to make that difficult. Fix that.
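A sketch of the resulting API shape; the type and method names here are
illustrative, not necessarily the ones actually added:

    package proxy

    import "net"

    // NodeAddresses holds the node IPs of a single family that were selected
    // by --nodeport-addresses.
    type NodeAddresses struct {
        matchAll bool
        ips      []net.IP
    }

    // MatchAll reports whether the configured --nodeport-addresses effectively
    // select every IP of this family on the node; there is no longer a magic
    // value in the returned IPs to indicate this.
    func (n *NodeAddresses) MatchAll() bool { return n.matchAll }

    // IPs returns the concrete node IPs of this family.
    func (n *NodeAddresses) IPs() []net.IP { return n.ips }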
This commit does not change any externally-visible semantics, but it
makes the existing weird semantics more obvious. Specifically, if you
say "--nodeport-addresses 10.0.0.0/8,192.168.0.0/16", then the
dual-stack proxy code would have split that into a list of IPv4 CIDRs
(["10.0.0.0/8", "192.168.0.0/16"]) to pass to the IPv4 proxier, and a
list of IPv6 CIDRs ([]) to pass to the IPv6 proxier, and then the IPv6
proxier would say "well since the list of nodeport addresses is empty,
I'll listen on all IPv6 addresses", which probably isn't what you
meant, but that's what it did.
This touches cases where FromInt() is used on numeric constants, or
values which are already int32s, or int variables which are defined
close by and can be changed to int32s with little impact.
Signed-off-by: Stephen Kitt <skitt@redhat.com>
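A before/after sketch of the FromInt() change described above, assuming an
int32-taking helper such as intstr.FromInt32 exists alongside intstr.FromInt
in k8s.io/apimachinery/pkg/util/intstr:

    package example

    import "k8s.io/apimachinery/pkg/util/intstr"

    func targetPorts(port int32) (before, after intstr.IntOrString) {
        // Before: FromInt takes an int, so an int32 needs an explicit conversion.
        before = intstr.FromInt(int(port))
        // After: the value is already an int32, so no conversion is needed.
        after = intstr.FromInt32(port)
        return before, after
    }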
Now that the endpoint update fields have names that make it clear that
they only contain UDP objects, it's obvious that the "protocol == UDP"
checks in the iptables and ipvs proxiers were no-ops, so remove them.
Rather than calling fp.deleteEndpointConnection() directly, set up the
proxy to have syncProxyRules() call it, so that we are testing it in
the way that it actually gets called.
Squash the IPv4 and IPv6 unit tests together so we don't need to
duplicate all that code. Fix a tiny bug in NewFakeProxier() found
while doing this...
The APIs talked about "stale services" and "stale endpoints", but the
thing that is actually "stale" is the conntrack entries, not the
services/endpoints. Fix the names to indicate what they actually keep
track of.
Also, all three fields (2 in the endpoints update object and 1 in the
service update object) are currently UDP-specific, but only the
service one made that clear. Fix that too.
This commit did not actually work; in between when it was first
written and tested, and when it merged, the code in
pkg/proxy/endpoints.go was changed to only add UDP endpoints to the
"stale endpoints"/"stale services" lists, and so checking for "either
UDP or SCTP" rather than just UDP when processing those lists had no
effect.
This reverts most of commit aa8521df66
(but leaves the changes related to
ipvs.IsRsGracefulTerminationNeeded() since that actually did have the
effect it meant to have).
Today, the health check response to load balancers asking kube-proxy for
the status of ETP:Local services does not include the healthz state of
kube-proxy itself. This means that kube-proxy might indicate to load
balancers that they should forward traffic to the node in question simply
because an endpoint is running on the node, overlooking the fact that
kube-proxy might be unhealthy and might not have successfully written the
rules that enable traffic to reach that endpoint.
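A sketch of what taking kube-proxy's own health into account could look
like; the handler below is hypothetical, not the actual healthcheck package
code:

    package healthcheck

    import (
        "fmt"
        "net/http"
    )

    type serviceHealthServer struct {
        localEndpoints int
        proxyHealthy   func() bool // e.g. "did the last rules sync succeed recently?"
    }

    func (s *serviceHealthServer) ServeHTTP(w http.ResponseWriter, r *http.Request) {
        if s.localEndpoints > 0 && s.proxyHealthy() {
            w.WriteHeader(http.StatusOK)
            fmt.Fprintf(w, "%d local endpoints, proxy healthy\n", s.localEndpoints)
            return
        }
        // Either there are no local endpoints, or kube-proxy has not
        // successfully programmed its rules: tell the load balancer to
        // avoid this node.
        w.WriteHeader(http.StatusServiceUnavailable)
    }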
For some reason we were calculating the available nodeport IPs at the
top of syncProxyRules even though we didn't use them until the end.
(Well, the previous code avoided generating KUBE-NODEPORTS chain rules
if there were no node IPs available, but that case is considered an
error anyway, so there's no need to optimize it.)
(Also fix a stale `err` reference exposed by this move.)
In addition to actually updating their data from the provided list of
changes, EndpointsMap.Update() and ServicePortMap.Update() return a
struct with some information about things that changed because of that
update (eg services with stale conntrack entries).
For some reason, they were also returning information about
HealthCheckNodePorts, but they were returning *static* information
based on the current (post-Update) state of the map, not information
about what had *changed* in the update. Since this doesn't match how
the other data in the struct is used (and since there's no reason to
have the data only be returned when you call Update() anyway), split
it out.
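A sketch of the split, with illustrative names: the Update() result keeps
only change information, and the health check node ports become a query on
the map itself:

    package proxy

    // UpdateServiceMapResult describes only what changed as a result of a
    // particular Update() call.
    type UpdateServiceMapResult struct {
        // e.g. UDP services deleted in this update, whose conntrack entries
        // may now be stale. (Illustrative field name.)
        StaleUDPClusterIPs []string
    }

    type servicePort struct {
        healthCheckNodePort int
    }

    type ServicePortMap map[string]servicePort

    // HealthCheckNodePorts returns the current health check node ports; this
    // is a property of the map's current state, not of any particular
    // Update() call, so it no longer lives in the update result.
    func (m ServicePortMap) HealthCheckNodePorts() map[string]int {
        ports := make(map[string]int)
        for name, svc := range m {
            if svc.healthCheckNodePort != 0 {
                ports[name] = svc.healthCheckNodePort
            }
        }
        return ports
    }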
The unit tests were broken with MinimizeIPTablesRestore enabled
because syncProxyRules() assumed that needFullSync would be set on the
first (post-setInitialized()) run, but the unit tests didn't ensure
that.
(In fact, there was a race condition in the real Proxier case as well;
theoretically syncProxyRules() could be run by the
BoundedFrequencyRunner after OnServiceSynced() called setInitialized()
but before it called forceSyncProxyRules(), thus causing the first
real sync to try to do a partial sync and fail. This is now fixed as
well.)
In the dual-stack case, iptables.NewDualStackProxier and
ipvs.NewDualStackProxier filtered the nodeport addresses values by IP
family before creating the single-stack proxiers. But in the
single-stack case, the kube-proxy startup code just passed the value
to the single-stack proxiers without validation, so they had to
re-check it themselves. Fix that.
This rule was mistakenly added to kubelet even though it only applies
to kube-proxy's traffic. We do not want to remove it from kubelet yet
because other components may be depending on it for security, but we
should make kube-proxy output its own rule rather than depending on
kubelet.
Some of the chains kube-proxy creates are also created by kubelet; we
need to ensure that those chains exist but we should not delete them
in CleanupLeftovers().
We had a test that creating a Service with an SCTP port would create
an iptables rule with "-p sctp" in it, which let us test that
kube-proxy was doing vaguely the right thing with SCTP even if the e2e
environment didn't have SCTP support. But this would really make much
more sense as a unit test.
We currently invoke /sbin/iptables 24 times on each syncProxyRules
before calling iptables-restore. Since even trivial iptables
invocations are slow on hosts with lots of iptables rules, this adds a
lot of time to each sync. Since these checks are expected to be a
no-op 99% of the time, skip them on partial syncs.
iptables-restore requires that if you change any rule in a chain, you
have to rewrite the entire chain. But if you avoid mentioning a chain
at all, it will leave it untouched. Take advantage of this by not
rewriting the SVC, SVL, EXT, FW, and SEP chains for services that have
not changed since the last sync, which should drastically cut down on
the size of each iptables-restore in large clusters.
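A sketch of the iptables-restore property being exploited here; the rule
content is illustrative:

    package iptables

    import "bytes"

    // partialRestoreExample shows the shape of a partial restore input: a
    // chain that is declared is flushed and rewritten, while a chain that is
    // not mentioned at all is left exactly as it was.
    func partialRestoreExample() []byte {
        var buf bytes.Buffer
        buf.WriteString("*nat\n")
        // A service that changed since the last sync: declare its chains and
        // rewrite their rules.
        buf.WriteString(":KUBE-SVC-CHANGED - [0:0]\n")
        buf.WriteString(":KUBE-SEP-NEW - [0:0]\n")
        buf.WriteString("-A KUBE-SVC-CHANGED -j KUBE-SEP-NEW\n")
        // Services that did not change are simply not mentioned; their
        // existing SVC/SVL/EXT/FW/SEP chains survive the restore untouched.
        buf.WriteString("COMMIT\n")
        return buf.Bytes()
    }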
Back when iptables was first made the default, there were
theoretically some users who wouldn't have been able to support it due
to having an old /sbin/iptables. But kube-proxy no longer does the
things that didn't work with old iptables, and we removed that check a
long time ago. There is also a check for a new-enough kernel version,
but it's checking for a feature which was added in kernel 3.6, and no
one could possibly be running Kubernetes with a kernel that old. So the
fallback code never actually falls back any more, and it should just be
removed.
The proxies watch node labels for topology changes, but node labels
can change in bursts especially in larger clusters. This causes
pressure on all proxies because they can't filter the events, since
the topology could match on any label.
Change node event handling to queue the request rather than immediately
syncing. The sync runner can already handle short bursts which shouldn't
change behavior for most cases.
Signed-off-by: Dan Williams <dcbw@redhat.com>
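A minimal sketch of the change, with simplified types; the runner stands in
for something like async.BoundedFrequencyRunner:

    package proxy

    import (
        "sync"

        v1 "k8s.io/api/core/v1"
    )

    type proxier struct {
        mu         sync.Mutex
        hostname   string
        nodeLabels map[string]string
        syncRunner interface{ Run() } // bounded-frequency sync runner
    }

    func (p *proxier) OnNodeUpdate(node *v1.Node) {
        if node.Name != p.hostname {
            return
        }
        p.mu.Lock()
        p.nodeLabels = node.Labels
        p.mu.Unlock()

        // Queue a sync instead of syncing immediately; the runner rate-limits
        // and merges short bursts of label changes.
        p.syncRunner.Run()
    }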
Part of reorganizing the syncProxyRules loop to do:
1. figure out what chains are needed, mark them in activeNATChains
2. write servicePort jump rules to KUBE-SERVICES/KUBE-NODEPORTS
3. write servicePort-specific chains (SVC, SVL, EXT, FW, SEP)
This moves the FW chain creation to the end (rather than having it in
the middle of adding the jump rules for the LB IPs).
Part of reorganizing the syncProxyRules loop to do:
1. figure out what chains are needed, mark them in activeNATChains
2. write servicePort jump rules to KUBE-SERVICES/KUBE-NODEPORTS
3. write servicePort-specific chains (SVC, SVL, EXT, FW, SEP)
This fixes the jump rules for internal traffic. Previously we were
handling "jumping from kubeServices to internalTrafficChain" and
"adding masquerade rules to internalTrafficChain" in the same place.
Part of reorganizing the syncProxyRules loop to do:
1. figure out what chains are needed, mark them in activeNATChains
2. write servicePort jump rules to KUBE-SERVICES/KUBE-NODEPORTS
3. write servicePort-specific chains (SVC, SVL, EXT, FW, SEP)
This fixes the handling of the EXT chain.
Part of reorganizing the syncProxyRules loop to do:
1. figure out what chains are needed, mark them in activeNATChains
2. write servicePort jump rules to KUBE-SERVICES/KUBE-NODEPORTS
3. write servicePort-specific chains (SVC, SVL, EXT, FW, SEP)
This fixes the handling of the SVC and SVL chains. We were already
filling them in at the end of the loop; this fixes it to create them
at the bottom of the loop as well.
Part of reorganizing the syncProxyRules loop to do:
1. figure out what chains are needed, mark them in activeNATChains
2. write servicePort jump rules to KUBE-SERVICES/KUBE-NODEPORTS
3. write servicePort-specific chains (SVC, SVL, EXT, FW, SEP)
This fixes the handling of the endpoint chains. Previously they were
handled entirely at the top of the loop. Now we record which ones are
in use at the top but don't create them and fill them in until the
bottom.
We figure out early on whether we're going to end up outputting no
endpoints, so update the metrics then.
(Also remove a redundant feature gate check; svcInfo already checks
the ServiceInternalTrafficPolicy feature gate itself and so
svcInfo.InternalPolicyLocal() will always return false if the gate is
not enabled.)
Rather than marking packets to be dropped in the "nat" table and then
dropping them from the "filter" table later, just use rules in
"filter" to drop the packets we don't like directly.
Re-sync the rules from TestOverallIPTablesRulesWithMultipleServices to
make sure we're testing all the right kinds of rules. Remove a
duplicate copy of the KUBE-MARK-MASQ and KUBE-POSTROUTING rules.
Update the "REJECT" test to use the new svc6 from
TestOverallIPTablesRulesWithMultipleServices. (Previously it had used
a modified version of TOIPTRWMS's svc3.)
svc2b was using the same ClusterIP as svc3; change it and rename the
service to svc5 to make everything clearer.
Move the test of LoadBalancerSourceRanges from svc2 to svc5, so that
svc2 tests the rules for dropping packets due to
externalTrafficPolicy, and svc5 tests the rules for dropping packets
due to LoadBalancerSourceRanges, rather than having them both mixed
together in svc2.
Add svc6 with no endpoints.
"iptables-save" takes several seconds to run on machines with lots of
iptables rules, and we only use its result to figure out which chains
are no longer referenced by any rules. While it makes things less
confusing if we delete unused chains immediately, it's not actually
_necessary_ since they never get called during packet processing. So
in large clusters, make it so we only clean up chains periodically
rather than on every sync.
We don't need to parse out the counter values from the iptables-save
output (since they are always 0 for the chains we care about). Just
parse the chain names themselves.
Also, all of the callers of GetChainLines() pass it input that
contains only a single table, so just assume that, rather than
carefully parsing only a single table's worth of the input.
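A simplified sketch of the kind of parsing this leaves us with (not the real
GetChainLines replacement):

    package iptables

    import (
        "bufio"
        "bytes"
        "strings"
    )

    // chainNames extracts the names from iptables-save lines like
    //   :KUBE-SVC-XPGD46QRK7WJZT7O - [0:0]
    // ignoring the policy and the (always-zero) counters, and assuming the
    // input contains a single table.
    func chainNames(save []byte) []string {
        var names []string
        scanner := bufio.NewScanner(bytes.NewReader(save))
        for scanner.Scan() {
            line := scanner.Text()
            if !strings.HasPrefix(line, ":") {
                continue
            }
            if fields := strings.Fields(line[1:]); len(fields) > 0 {
                names = append(names, fields[0])
            }
        }
        return names
    }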
The iptables and ipvs proxies have code to try to preserve certain
iptables counters when modifying chains via iptables-restore, but the
counters in question only actually exist for the built-in chains (eg
INPUT, FORWARD, PREROUTING, etc), which we never modify via
iptables-restore (and in fact, *can't* safely modify via
iptables-restore), so we are really just doing a lot of unnecessary
work to copy the constant string "[0:0]" over from iptables-save
output to iptables-restore input. So stop doing that.
Also fix a confused error message when iptables-save fails.
kube-proxy generates iptables rules to forward traffic from Services to Endpoints.
kube-proxy uses iptables-restore to configure the rules atomically; however,
this has the downside that a large number of rules takes a long time to be processed,
causing disruption.
There are different parameters that influence the number of rules generated:
- ServiceType
- Number of Services
- Number of Endpoints per Service
This test will fail when the number of rules changes, so the person
modifying the code gets feedback about the performance impact of their
changes. It also runs test cases with several different numbers of rules,
to check whether the number of rules grows linearly.
Signed-off-by: gkarthiks <github.gkarthiks@gmail.com>
refactor: svc port name variable #108806
Signed-off-by: gkarthiks <github.gkarthiks@gmail.com>
refactor: rename struct for service port information to servicePortInfo and fields for more readability
Signed-off-by: gkarthiks <github.gkarthiks@gmail.com>
fix: drop chain rule
Signed-off-by: gkarthiks <github.gkarthiks@gmail.com>
Sort the ":CHAINNAME" lines in the same order as the "-A CHAINNAME"
lines (meaning, KUBE-NODEPORTS and KUBE-SERVICES come first).
(This will simplify IPTablesDump because it won't need to keep track
of the declaration order and the rule order separately.)
The various loops in the LoadBalancer rule section were mis-nested
such that if a service had multiple LoadBalancer IPs, we would write
out the firewall rules multiple times (and the allowFromNode rule for
the second and later IPs would end up being written after the "else
DROP" rule from the first IP).
The LoadBalancer rules change if the node IP is in one of the
LoadBalancerSourceRange subnets, so make sure to set nodeIP on the
fake proxier so we can test this, and add a second source range to
TestLoadBalancer containing the node IP. (This changes the result of
one flow test that previously expected that node-to-LB would be
dropped.)
Add TestInternalExternalMasquerade, which tests whether various
packets are considered internal or external for purposes of traffic
policy, and whether they get masqueraded, with and without
--masquerade-all, with and without a working LocalTrafficDetector.
(This extends and replaces the old TestMasqueradeAll.)
Add a new framework for testing out how particular packets would be
handled by a given set of iptables rules. (eg, "assert that a packet
from 10.180.0.2 to 172.30.0.41:80 gets NATted to 10.180.0.1:80 without
being masqueraded"). Add tests using this to all of the existing unit
tests.
This makes it easier to tell whether a given code change has any
effect on behavior, without having to carefully examine the diffs to
the generated iptables rules.
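A sketch of the kind of assertion the framework enables; the struct and
field names below are illustrative rather than the exact ones in the test
file:

    package iptables

    // packetFlowTest describes one packet and the behavior we expect the
    // generated iptables rules to apply to it.
    type packetFlowTest struct {
        name     string
        sourceIP string
        destIP   string
        destPort int
        output   string // expected destination after DNAT, e.g. "10.180.0.1:80"
        masq     bool   // whether the packet should also be masqueraded
    }

    // Example case, matching the description above: traffic from a pod to a
    // ClusterIP should be DNATted to an endpoint without being masqueraded.
    var exampleFlow = packetFlowTest{
        name:     "pod to ClusterIP",
        sourceIP: "10.180.0.2",
        destIP:   "172.30.0.41",
        destPort: 80,
        output:   "10.180.0.1:80",
        masq:     false,
    }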