Commit Graph

1767 Commits

Author SHA1 Message Date
kerthcet
2b7373f336 kube-proxy: code optimization
Signed-off-by: kerthcet <kerthcet@gmail.com>
2022-09-04 19:34:22 +08:00
Kubernetes Prow Robot
9924814270
Merge pull request #108460 from Nordix/issue-72236
Prevent host access on VIP addresses in proxy-mode=ipvs
2022-09-01 12:59:18 -07:00
Sanskar Jaiswal
8b5f263cd3 add tests for initialSync usage in syncEndpoint
Signed-off-by: Sanskar Jaiswal <jaiswalsanskar078@gmail.com>
2022-08-27 07:42:21 +00:00
Sanskar Jaiswal
b670656a09 update ipvs proxier to update realserver weights at startup
Update the IPVS proxier to have a bool `initialSync`, which is set to
true when a new proxier is initialized and then set to false on every
sync. This lets us run startup-only logic, which in turn lets us
update the real servers only when needed, avoiding expensive
operations.

Signed-off-by: Sanskar Jaiswal <jaiswalsanskar078@gmail.com>
2022-08-27 07:42:07 +00:00
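
A minimal sketch of the startup-only pattern this commit describes. The field name initialSync comes from the commit message; the struct shape and everything else below is illustrative, not the real IPVS Proxier:

    package main

    import (
        "fmt"
        "sync"
    )

    type proxier struct {
        mu          sync.Mutex
        initialSync bool // true until the first sync completes
    }

    func newProxier() *proxier {
        return &proxier{initialSync: true}
    }

    func (p *proxier) syncProxyRules() {
        p.mu.Lock()
        defer p.mu.Unlock()
        if p.initialSync {
            // Startup-only work, e.g. resetting stale real-server
            // weights left over from a previous kube-proxy process.
            fmt.Println("running startup-only logic")
        }
        // ... regular sync work ...
        p.initialSync = false // every later sync skips the startup path
    }

    func main() {
        p := newProxier()
        p.syncProxyRules() // runs the startup-only branch
        p.syncProxyRules() // skips it
    }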
Kubernetes Prow Robot
da112dda68
Merge pull request #111806 from danwinship/kube-proxy-no-mode-fallback
remove kube-proxy mode fallback
2022-08-24 05:52:03 -07:00
Kubernetes Prow Robot
ef25013252
Merge pull request #111842 from ialidzhikov/cleanup/pkg-proxy
pkg/proxy: Replace deprecated func usage from the `k8s.io/utils/pointer` pkg
2022-08-23 20:08:08 -07:00
Kubernetes Prow Robot
9efbe6eb9b
Merge pull request #111379 from muyangren2/describe_err
fix wrong description
2022-08-23 16:05:17 -07:00
Dan Winship
1609017f2b kube-proxy: remove ipvs-to-iptables fallback
If the user passes "--proxy-mode ipvs", and it is not possible to use
IPVS, then error out rather than falling back to iptables.

There was never any good reason to be doing fallback; this was
presumably erroneously added to parallel the iptables-to-userspace
fallback (which only existed because we had wanted iptables to be the
default but not all systems could support it).

In particular, if the user passed configuration options for ipvs, then
they presumably *didn't* pass configuration options for iptables, and
so even if the iptables proxy is able to run, it is likely to be
misconfigured.
2022-08-16 09:30:08 -04:00
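
An illustrative sketch of the new behavior (the helper name and shape are assumptions, not the actual kube-proxy code): an explicit --proxy-mode=ipvs request fails fast when IPVS cannot be used, instead of silently switching to iptables:

    package main

    import (
        "errors"
        "fmt"
    )

    // chooseProxyMode is a hypothetical helper: error out rather than
    // fall back when the explicitly requested mode is unusable.
    func chooseProxyMode(requested string, ipvsUsable bool) (string, error) {
        if requested == "ipvs" && !ipvsUsable {
            return "", errors.New("ipvs proxy mode requested, but IPVS is not usable on this host")
        }
        return requested, nil
    }

    func main() {
        if _, err := chooseProxyMode("ipvs", false); err != nil {
            fmt.Println("error:", err) // previously: silent fallback to iptables
        }
    }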
Dan Winship
9f69a3a9d4 kube-proxy: remove iptables-to-userspace fallback
Back when iptables was first made the default, there were
theoretically some users who wouldn't have been able to support it due
to having an old /sbin/iptables. But kube-proxy no longer does the
things that didn't work with old iptables, and we removed that check a
long time ago. There is also a check for a new-enough kernel version,
but it's checking for a feature which was added in kernel 3.6, and no
one could possibly be running Kubernetes with a kernel that old. The
fallback code therefore never actually falls back, and can simply be
removed.
2022-08-16 09:21:34 -04:00
ialidzhikov
f2bc2ed2da pkg/proxy: Replace deprecated func usage from the k8s.io/utils/pointer pkg 2022-08-14 18:27:33 +03:00
Kubernetes Prow Robot
e16ac34361
Merge pull request #110289 from danwinship/kep-3178-source-ranges-drop
Don't use KUBE-MARK-DROP for LoadBalancerSourceRanges
2022-07-28 10:21:10 -07:00
Dan Winship
f65fbc877b proxy/iptables: remove last references to KUBE-MARK-DROP 2022-07-28 09:03:49 -04:00
Dan Winship
9313188909 proxy/iptables: Don't use KUBE-MARK-DROP for LoadBalancerSourceRanges 2022-07-28 09:03:46 -04:00
Kubernetes Prow Robot
4e5711829c
Merge pull request #111228 from Abirdcfly/220716
clean unreachable code
2022-07-27 11:35:00 -07:00
Kubernetes Prow Robot
ce433f87b4
Merge pull request #110266 from danwinship/minimize-prep-reorg
iptables proxy reorg in preparation for minimizing iptables-restore
2022-07-27 04:06:30 -07:00
Davanum Srinivas
a9593d634c
Generate and format files
- Run hack/update-codegen.sh
- Run hack/update-generated-device-plugin.sh
- Run hack/update-generated-protobuf.sh
- Run hack/update-generated-runtime.sh
- Run hack/update-generated-swagger-docs.sh
- Run hack/update-openapi-spec.sh
- Run hack/update-gofmt.sh

Signed-off-by: Davanum Srinivas <davanum@gmail.com>
2022-07-26 13:14:05 -04:00
muyangren2
9b29c930a2 fix wrong description 2022-07-25 13:41:59 +08:00
Kubernetes Prow Robot
7d72ccf9a8
Merge pull request #110957 from papagalu/kp_remove_hnsv1
kube-proxy: windows: Removed hnsV1
2022-07-20 04:02:38 -07:00
BinacsLee
f1c9a70b47 cleanup: simplify the function implementation of IPSet 2022-07-20 10:13:57 +08:00
Abirdcfly
f71718d644 clean unreachable code
Signed-off-by: Abirdcfly <fp544037857@gmail.com>
2022-07-19 20:42:09 +08:00
Kubernetes Prow Robot
a521af7007
Merge pull request #111219 from dcbw/proxy-sync-on-node-events
proxy: queue syncs on node events rather than syncing immediately
2022-07-19 05:34:18 -07:00
Kubernetes Prow Robot
8af2c50201
Merge pull request #110762 from pandaamanda/windows_default_proxy
kube-proxy: announce kernelspace mode as the default for Windows
2022-07-18 11:45:15 -07:00
Dan Williams
f197509879 proxy: queue syncs on node events rather than syncing immediately
The proxies watch node labels for topology changes, but node labels
can change in bursts, especially in larger clusters. This puts
pressure on all proxies, because they can't filter the events: the
topology could match on any label.

Change node event handling to queue a sync request rather than
syncing immediately. The sync runner can already handle short bursts,
so this shouldn't change behavior in most cases.

Signed-off-by: Dan Williams <dcbw@redhat.com>
2022-07-18 09:21:52 -05:00
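
A toy sketch of "queue rather than sync immediately". Kubernetes actually uses async.BoundedFrequencyRunner for this; the one-slot channel below only illustrates how a burst of node events collapses into at most one queued sync at a time:

    package main

    import (
        "fmt"
        "time"
    )

    type syncRunner struct{ pending chan struct{} }

    func newSyncRunner() *syncRunner {
        return &syncRunner{pending: make(chan struct{}, 1)}
    }

    // Run records that a sync is wanted; while one is already queued,
    // further calls are no-ops, coalescing event bursts.
    func (r *syncRunner) Run() {
        select {
        case r.pending <- struct{}{}:
        default:
        }
    }

    func (r *syncRunner) loop(sync func()) {
        for range r.pending {
            sync()
        }
    }

    func main() {
        r := newSyncRunner()
        go r.loop(func() { fmt.Println("syncProxyRules") })
        for i := 0; i < 100; i++ { // burst of node label events
            r.Run()
        }
        time.Sleep(100 * time.Millisecond) // let queued syncs run
    }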
pandaamanda
fbe934da21 kube-proxy: announce kernelspace mode as the default for Windows 2022-07-18 01:04:56 +00:00
Dan Winship
367f18c49b proxy/iptables: move firewall chain setup
Part of reorganizing the syncProxyRules loop to do:
  1. figure out what chains are needed, mark them in activeNATChains
  2. write servicePort jump rules to KUBE-SERVICES/KUBE-NODEPORTS
  3. write servicePort-specific chains (SVC, SVL, EXT, FW, SEP)

This moves the FW chain creation to the end (rather than having it in
the middle of adding the jump rules for the LB IPs).
2022-07-09 07:08:42 -04:00
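
The three-phase loop shape in this and the following reorg commits, as a toy sketch (the chain names are from the commit messages; the data model and helpers are invented):

    package main

    import "fmt"

    type servicePort struct{ name, svcChain, sepChain string }

    func syncProxyRules(services []servicePort) {
        activeNATChains := map[string]bool{}
        var jumps, chains []string

        for _, sp := range services {
            // 1. figure out which chains are needed, mark them active
            activeNATChains[sp.svcChain] = true
            activeNATChains[sp.sepChain] = true
            // 2. write the servicePort jump rule into KUBE-SERVICES
            jumps = append(jumps,
                fmt.Sprintf("-A KUBE-SERVICES ... -j %s", sp.svcChain))
            // 3. write the servicePort-specific chains at the bottom
            chains = append(chains,
                fmt.Sprintf("-A %s ... -j %s", sp.svcChain, sp.sepChain))
        }
        for _, rule := range append(jumps, chains...) {
            fmt.Println(rule)
        }
        _ = activeNATChains // later used to delete chains no longer in use
    }

    func main() {
        syncProxyRules([]servicePort{
            {"ns/svc:http", "KUBE-SVC-AAAA", "KUBE-SEP-BBBB"},
        })
    }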
Dan Winship
2030591ce7 proxy/iptables: move internal traffic setup code
Part of reorganizing the syncProxyRules loop to do:
  1. figure out what chains are needed, mark them in activeNATChains
  2. write servicePort jump rules to KUBE-SERVICES/KUBE-NODEPORTS
  3. write servicePort-specific chains (SVC, SVL, EXT, FW, SEP)

This fixes the jump rules for internal traffic. Previously we were
handling "jumping from kubeServices to internalTrafficChain" and
"adding masquerade rules to internalTrafficChain" in the same place.
2022-07-09 07:07:48 -04:00
Dan Winship
00f789cd8d proxy/iptables: move EXT chain rule creation to the end
Part of reorganizing the syncProxyRules loop to do:
  1. figure out what chains are needed, mark them in activeNATChains
  2. write servicePort jump rules to KUBE-SERVICES/KUBE-NODEPORTS
  3. write servicePort-specific chains (SVC, SVL, EXT, FW, SEP)

This fixes the handling of the EXT chain.
2022-07-09 07:07:47 -04:00
Dan Winship
8906ab390e proxy/iptables: reorganize cluster/local chain creation
Part of reorganizing the syncProxyRules loop to do:
  1. figure out what chains are needed, mark them in activeNATChains
  2. write servicePort jump rules to KUBE-SERVICES/KUBE-NODEPORTS
  3. write servicePort-specific chains (SVC, SVL, EXT, FW, SEP)

This fixes the handling of the SVC and SVL chains. We were already
filling them in at the end of the loop; this fixes it to create them
at the bottom of the loop as well.
2022-07-09 07:05:05 -04:00
Dan Winship
da14a12fe5 proxy/iptables: move endpoint chain rule creation to the end
Part of reorganizing the syncProxyRules loop to do:
  1. figure out what chains are needed, mark them in activeNATChains
  2. write servicePort jump rules to KUBE-SERVICES/KUBE-NODEPORTS
  3. write servicePort-specific chains (SVC, SVL, EXT, FW, SEP)

This fixes the handling of the endpoint chains. Previously they were
handled entirely at the top of the loop. Now we record which ones are
in use at the top but don't create them and fill them in until the
bottom.
2022-07-09 06:51:47 -04:00
Dan Winship
8a5801996b proxy/iptables: belatedly simplify local traffic policy metrics
We figure out early on whether we're going to end up outputting no
endpoints, so update the metrics then.

(Also remove a redundant feature gate check; svcInfo already checks
the ServiceInternalTrafficPolicy feature gate itself and so
svcInfo.InternalPolicyLocal() will always return false if the gate is
not enabled.)
2022-07-09 06:50:16 -04:00
Dimitrie Mititelu
09ca06e875 kube-proxy: windows: Removed hnsV1
hnsV1 is no longer supported

Signed-off-by: Dimitrie Mititelu <dmititelu@cloudbasesolutions.com>
2022-07-05 22:24:23 +03:00
Dan Winship
95705350d5 proxy/iptables: Don't use KUBE-MARK-DROP for "no local endpoints"
Rather than marking packets to be dropped in the "nat" table and then
dropping them from the "filter" table later, just use rules in
"filter" to drop the packets we don't like directly.
2022-06-29 16:37:24 -04:00
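
A simplified illustration of the two-step vs. one-step rule shapes (the chain and mark names in angle brackets are placeholders, not the exact rules kube-proxy emits):

    package main

    import "fmt"

    func main() {
        // before: mark in nat, then drop the marked packet in filter
        before := []string{
            "-t nat    -A <some-chain> ... -j KUBE-MARK-DROP",
            "-t filter -A <kubelet-chain> -m mark --mark <dropbit> -j DROP",
        }
        // after: one rule, entirely in filter
        after := []string{
            "-t filter -A <some-chain> ... -j DROP",
        }
        fmt.Println("before:", before)
        fmt.Println("after: ", after)
    }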
Dan Winship
283218bd4c proxy/iptables: update TestTracePackets
Re-sync the rules from TestOverallIPTablesRulesWithMultipleServices to
make sure we're testing all the right kinds of rules. Remove a
duplicate copy of the KUBE-MARK-MASQ and KUBE-POSTROUTING rules.

Update the "REJECT" test to use the new svc6 from
TestOverallIPTablesRulesWithMultipleServices. (Previously it had used
a modified version of TOIPTRWMS's svc3.)
2022-06-29 16:33:13 -04:00
Dan Winship
59b7f969e8 proxy/iptables: fix up TestOverallIPTablesRulesWithMultipleServices
svc2b was using the same ClusterIP as svc3; change it and rename the
service to svc5 to make everything clearer.

Move the test of LoadBalancerSourceRanges from svc2 to svc5, so that
svc2 tests the rules for dropping packets due to
externalTrafficPolicy, and svc5 tests the rules for dropping packets
due to LoadBalancerSourceRanges, rather than having them both mixed
together in svc2.

Add svc6 with no endpoints.
2022-06-29 16:33:13 -04:00
Kubernetes Prow Robot
f045fb688f
Merge pull request #110334 from danwinship/iptables-fewer-saves
only clean up iptables chains periodically in large clusters
2022-06-29 09:48:06 -07:00
Dan Winship
7d3ba837f5 proxy/iptables: only clean up chains periodically in large clusters
"iptables-save" takes several seconds to run on machines with lots of
iptables rules, and we only use its result to figure out which chains
are no longer referenced by any rules. While it makes things less
confusing if we delete unused chains immediately, it's not actually
_necessary_ since they never get called during packet processing. So
in large clusters, make it so we only clean up chains periodically
rather than on every sync.
2022-06-29 11:14:38 -04:00
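
A sketch of the periodic-cleanup idea (the threshold, interval, and names are illustrative, not the real kube-proxy values):

    package main

    import (
        "fmt"
        "time"
    )

    const largeClusterThreshold = 1000 // hypothetical endpoint count

    type proxier struct {
        largeClusterMode bool
        lastChainCleanup time.Time
    }

    func (p *proxier) syncProxyRules(numEndpoints int) {
        p.largeClusterMode = numEndpoints > largeClusterThreshold
        // ... write rules via iptables-restore ...

        // iptables-save (used to find orphaned chains) is expensive, and
        // stale chains are unreferenced so they never see packets; in
        // large clusters clean them up only once in a while.
        if !p.largeClusterMode || time.Since(p.lastChainCleanup) > time.Hour {
            fmt.Println("cleaning up unused chains")
            p.lastChainCleanup = time.Now()
        }
    }

    func main() {
        p := &proxier{}
        p.syncProxyRules(5000) // first sync: cleanup runs
        p.syncProxyRules(5000) // skipped until the interval elapses
    }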
Dan Winship
1cd461bd24 proxy/iptables: abstract the "endpointChainsNumberThreshold" a bit
Turn this into a generic "large cluster mode" that determines whether
we optimize for performance or debuggability.
2022-06-29 11:14:38 -04:00
Dan Winship
c12da17838 proxy/iptables: Add a unit test with multiple resyncs 2022-06-29 11:14:38 -04:00
Kubernetes Prow Robot
0d9ed2c3e7
Merge pull request #110328 from danwinship/iptables-counters
Stop trying to "preserve" iptables counters that are always 0
2022-06-29 08:06:06 -07:00
Dan Winship
7c27cf0b9b Simplify iptables-save parsing
We don't need to parse out the counter values from the iptables-save
output (since they are always 0 for the chains we care about). Just
parse the chain names themselves.

Also, all of the callers of GetChainLines() pass it input that
contains only a single table, so just assume that, rather than
carefully parsing only a single table's worth of the input.
2022-06-28 08:39:32 -04:00
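
A minimal sketch of "just parse the chain names" (simplified relative to the real GetChainLines): in iptables-save output every chain declaration is a line like ":KUBE-SVC-XXX - [0:0]", so the name is what sits between the ":" and the first space:

    package main

    import (
        "bufio"
        "fmt"
        "strings"
    )

    func chainNames(save string) []string {
        var names []string
        s := bufio.NewScanner(strings.NewReader(save))
        for s.Scan() {
            line := s.Text()
            if !strings.HasPrefix(line, ":") {
                continue // not a chain declaration
            }
            if fields := strings.Fields(line[1:]); len(fields) > 0 {
                names = append(names, fields[0])
            }
        }
        return names
    }

    func main() {
        out := "*nat\n" +
            ":PREROUTING ACCEPT [0:0]\n" +
            ":KUBE-SERVICES - [0:0]\n" +
            ":KUBE-SVC-AAAA - [0:0]\n" +
            "COMMIT\n"
        fmt.Println(chainNames(out))
        // Output: [PREROUTING KUBE-SERVICES KUBE-SVC-AAAA]
    }

Note the trailing "[0:0]" counters on each declaration; as the next commit below explains, those are always zero for kube-proxy's own chains, so there is nothing worth preserving.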
Dan Winship
a3556edba1 Stop trying to "preserve" iptables counters that are always 0
The iptables and ipvs proxies have code to try to preserve certain
iptables counters when modifying chains via iptables-restore, but the
counters in question only actually exist for the built-in chains (eg
INPUT, FORWARD, PREROUTING, etc), which we never modify via
iptables-restore (and in fact, *can't* safely modify via
iptables-restore), so we are really just doing a lot of unnecessary
work to copy the constant string "[0:0]" over from iptables-save
output to iptables-restore input. So stop doing that.

Also fix a confused error message when iptables-save fails.
2022-06-28 08:39:32 -04:00
Kubernetes Prow Robot
832c4d8cb7
Merge pull request #110503 from aojea/iptables_rules
kube-proxy iptables test number of generated iptables rules
2022-06-27 18:10:08 -07:00
Lars Ekman
c1e5a9e6f0 Prevent host access on VIP addresses in proxy-mode=ipvs 2022-06-24 08:33:58 +02:00
lokichoggio
52280de403
fix comments in pkg/proxy/types.go 2022-06-24 09:50:02 +08:00
Dan Winship
28253f6030 proxy/ipvs: Use DROP directly rather than KUBE-MARK-DROP
The ipvs proxier was figuring out LoadBalancerSourceRanges matches in
the nat table and using KUBE-MARK-DROP to mark unmatched packets to be
dropped later. But with ipvs, unlike with iptables, DNAT happens after
the packet is "delivered" to the dummy interface, so the packet will
still be unmodified when it reaches the filter table (the first time)
so there's no reason to split the work between the nat and filter
tables; we can just do it all from the filter table and call DROP
directly.

Before:

  - KUBE-LOAD-BALANCER (in nat) uses kubeLoadBalancerFWSet to match LB
    traffic for services using LoadBalancerSourceRanges, and sends it
    to KUBE-FIREWALL.

  - KUBE-FIREWALL uses kubeLoadBalancerSourceCIDRSet and
    kubeLoadBalancerSourceIPSet to match allowed source/dest combos
    and calls "-j RETURN".

  - All remaining traffic that doesn't escape KUBE-FIREWALL is sent to
    KUBE-MARK-DROP.

  - Traffic sent to KUBE-MARK-DROP later gets dropped by chains in
    filter created by kubelet.

After:

  - All INPUT and FORWARD traffic gets routed to KUBE-PROXY-FIREWALL
    (in filter). (We don't use "KUBE-FIREWALL" any more because
    there's already a chain in filter by that name that belongs to
    kubelet.)

  - KUBE-PROXY-FIREWALL sends traffic matching kubeLoadBalancerFWSet
    to KUBE-SOURCE-RANGES-FIREWALL

  - KUBE-SOURCE-RANGES-FIREWALL uses kubeLoadBalancerSourceCIDRSet and
    kubeLoadBalancerSourceIPSet to match allowed source/dest combos
    and calls "-j RETURN".

  - All remaining traffic that doesn't escape
    KUBE-SOURCE-RANGES-FIREWALL is dropped (directly via "-j DROP").

  - (KUBE-LOAD-BALANCER in nat is now used only to set up masquerading)
2022-06-22 13:02:22 -04:00
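
Simplified, illustrative versions of the "After" rules described above (the chain and ipset names follow the commit message; the actual match arguments are elided with "..."):

    package main

    import "fmt"

    func main() {
        rules := []string{
            "-t filter -A INPUT -j KUBE-PROXY-FIREWALL",
            "-t filter -A FORWARD -j KUBE-PROXY-FIREWALL",
            "-t filter -A KUBE-PROXY-FIREWALL -m set --match-set kubeLoadBalancerFWSet ... -j KUBE-SOURCE-RANGES-FIREWALL",
            "-t filter -A KUBE-SOURCE-RANGES-FIREWALL -m set --match-set kubeLoadBalancerSourceCIDRSet ... -j RETURN",
            "-t filter -A KUBE-SOURCE-RANGES-FIREWALL -m set --match-set kubeLoadBalancerSourceIPSet ... -j RETURN",
            "-t filter -A KUBE-SOURCE-RANGES-FIREWALL -j DROP",
        }
        for _, r := range rules {
            fmt.Println(r)
        }
    }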
Dan Winship
a9cd57fa40 proxy/ipvs: add filter table support to ipsetWithIptablesChain 2022-06-22 12:53:18 -04:00
Antonio Ojea
3cb63833ff kube-proxy iptables test number of generated iptables rules
kube-proxy generates iptables rules to forward traffic from Services to Endpoints.
kube-proxy uses iptables-restore to configure the rules atomically; however,
this has the downside that a large number of rules takes a long time to be
processed, causing disruption.
There are different parameters that influence the number of rules generated:
- ServiceType
- Number of Services
- Number of Endpoints per Service
This test will fail when the number of rules changes, so whoever modifies the
code gets feedback about the performance impact of their changes. It also runs
test cases with varying numbers of rules to check that the rule count grows
linearly.
2022-06-14 11:55:42 +02:00
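
A toy version of the idea: count the generated rules and check how the count scales. countRules and fakeRules are illustrative stand-ins; the real test runs the proxier and inspects its iptables-restore buffer:

    package main

    import (
        "fmt"
        "strings"
    )

    // countRules counts "-A ..." lines in iptables-restore input.
    func countRules(data string) int {
        n := 0
        for _, line := range strings.Split(data, "\n") {
            if strings.HasPrefix(line, "-A ") {
                n++
            }
        }
        return n
    }

    // fakeRules stands in for kube-proxy output for a given cluster shape.
    func fakeRules(services, endpointsPerService int) string {
        var b strings.Builder
        for s := 0; s < services; s++ {
            b.WriteString("-A KUBE-SERVICES ...\n") // one jump per service
            for e := 0; e < endpointsPerService; e++ {
                b.WriteString("-A KUBE-SVC ...\n") // one per endpoint
            }
        }
        return b.String()
    }

    func main() {
        // Linear growth: doubling the services should double the rules.
        small := countRules(fakeRules(10, 3))
        large := countRules(fakeRules(20, 3))
        fmt.Println(small, large, large == 2*small) // 40 80 true
    }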
Dan Winship
400d474bac proxy/ipvs: fix some identifiers
kubeLoadbalancerFWSet was the only LoadBalancer-related identifier
with a lowercase "b", so fix that.

rename TestLoadBalanceSourceRanges to TestLoadBalancerSourceRanges to
match the field name (and the iptables proxier test).
2022-06-13 09:13:15 -04:00
Dan Winship
0b1e364814 proxy/ipvs: fix a few comments 2022-06-12 20:30:47 -04:00
Kubernetes Prow Robot
dc4e91a875
Merge pull request #109844 from danwinship/iptables-tests-new
improve parsing in iptables unit tests
2022-06-10 14:27:44 -07:00