Because the proxy.Provider interface included
proxyconfig.EndpointsHandler, all the backends needed to
implement its methods. But iptables, ipvs, and winkernel implemented
them as no-ops, and metaproxier had an implementation that wouldn't
actually work (because it couldn't handle Services with no active
Endpoints).
Since Endpoints processing in kube-proxy is deprecated (and can't be
re-enabled unless you're using a backend that doesn't support
EndpointSlice), remove proxyconfig.EndpointsHandler from the
definition of proxy.Provider and drop all the useless implementations.
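For context, a minimal sketch of the shape of the interface once the embed is dropped; the exact contents of proxy.Provider in pkg/proxy/types.go may differ (other handler embeds are elided), and the handler names come from pkg/proxy/config:

```go
package proxy

import (
	"k8s.io/kubernetes/pkg/proxy/config"
)

// Provider is the interface implemented by the iptables, ipvs, and winkernel
// backends. config.EndpointsHandler was previously embedded here as well,
// which forced every backend to carry no-op OnEndpoints* methods.
type Provider interface {
	config.EndpointSliceHandler
	config.ServiceHandler

	// Sync immediately synchronizes the Provider's current state to proxy rules.
	Sync()
	// SyncLoop runs periodic work and does not return.
	SyncLoop()
}
```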
1. Add API definitions;
2. Add feature gate and drop the field when the feature gate is not enabled (see the sketch after this list);
3. Set default values for the field;
4. Add API validation;
5. Add kube-proxy iptables and ipvs implementations;
6. Add tests.
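For step 2, the field-dropping usually follows the dropDisabledFields pattern used by the API strategy code. A minimal sketch, with placeholder types and a placeholder gate check standing in for the real API objects and feature gate (the commit does not name the field, so everything here is illustrative):

```go
package example

// Placeholder types standing in for the real API objects; the actual change
// would operate on the Service type in the strategy code of the API group
// that owns the new field.
type ServiceSpec struct {
	SomeNewField *string // hypothetical field guarded by the feature gate
}

type Service struct {
	Spec ServiceSpec
}

// someNewFeatureEnabled stands in for a real feature-gate check such as
// utilfeature.DefaultFeatureGate.Enabled(features.SomeNewFeature).
func someNewFeatureEnabled() bool { return false }

// dropDisabledFields clears the gated field on incoming objects when the
// feature gate is off, unless the old object already uses it, so that
// turning the gate off does not strip data from existing objects.
func dropDisabledFields(newSvc, oldSvc *Service) {
	if someNewFeatureEnabled() {
		return
	}
	if oldSvc != nil && oldSvc.Spec.SomeNewField != nil {
		return
	}
	newSvc.Spec.SomeNewField = nil
}
```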
The detectStaleConnections function in kube-proxy is very expensive in
terms of CPU utilization, and its results are only actually used for UDP
ports. This adds a protocol attribute to ServicePortName to make it simple
to run this function only for UDP connections. For clusters with primarily
TCP connections this can improve kube-proxy performance by 2x.
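A sketch of the resulting type; the real ServicePortName lives in pkg/proxy/types.go and may differ slightly, and isUDPService below is only an illustration of the check the new field enables:

```go
package proxy

import (
	v1 "k8s.io/api/core/v1"
	"k8s.io/apimachinery/pkg/types"
)

// ServicePortName uniquely identifies a load-balanced service port.
// The Protocol field is the addition described above: callers can check it
// and skip the stale-connection scan for anything that is not UDP.
type ServicePortName struct {
	types.NamespacedName
	Port     string
	Protocol v1.Protocol
}

// isUDPService shows the kind of check the protocol field makes cheap.
func isUDPService(spn ServicePortName) bool {
	return spn.Protocol == v1.ProtocolUDP
}
```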
Currently the BaseServiceInfo struct implements the ServicePort interface, but
the interface is only used some of the time. All the fields of BaseServiceInfo are
exported, and they are sometimes accessed through the interface and sometimes accessed directly.
I extended the ServicePort interface so that all relevant values can be accessed through
it, and unexported all the fields of BaseServiceInfo.
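A minimal sketch of the pattern, assuming accessor names like ClusterIP() and Port(); the real ServicePort interface and BaseServiceInfo in pkg/proxy/service.go cover many more fields:

```go
package proxy

import (
	"net"

	v1 "k8s.io/api/core/v1"
)

// ServicePort is the read-only view the proxy backends use; extending it
// with accessors lets BaseServiceInfo keep its fields unexported.
type ServicePort interface {
	ClusterIP() net.IP
	Port() int
	Protocol() v1.Protocol
	NodePort() int
}

// BaseServiceInfo's fields are no longer exported; all access goes through
// the interface above.
type BaseServiceInfo struct {
	clusterIP net.IP
	port      int
	protocol  v1.Protocol
	nodePort  int
}

func (info *BaseServiceInfo) ClusterIP() net.IP     { return info.clusterIP }
func (info *BaseServiceInfo) Port() int             { return info.port }
func (info *BaseServiceInfo) Protocol() v1.Protocol { return info.protocol }
func (info *BaseServiceInfo) NodePort() int         { return info.nodePort }
```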
Proxies should be able to cleanly figure out when endpoints have been synced,
so make all ProxyProviders also implement EndpointsHandler and pass the
endpoints events through to load balancers when required.
When using a NodePort to connect to an endpoint over UDP, if the endpoint is deleted
and later restored, traffic does not resume. This happens because conntrack holds
the state of the old connection and the proxy does not correctly clear the conntrack entry
for the stale endpoint.
Introduced a new conntrack function, ClearEntriesForPortNAT, that uses the endpoint IP
and the NodePort to remove the stale conntrack entry and allow traffic to resume when
the endpoint is restored.
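A sketch of what such a helper might look like, shelling out to the conntrack CLI directly; the real ClearEntriesForPortNAT in pkg/util/conntrack may differ in signature and in how it invokes the tool:

```go
package conntrack

import (
	"fmt"
	"os/exec"
	"strconv"
)

// clearEntriesForPortNAT deletes conntrack entries for UDP flows that were
// DNAT'ed from the given node port to the given (now stale) endpoint IP, so
// that new traffic to the NodePort is re-evaluated against the restored
// endpoint.
func clearEntriesForPortNAT(endpointIP string, nodePort int) error {
	if nodePort <= 0 {
		return fmt.Errorf("node port must be greater than zero, got %d", nodePort)
	}
	// Equivalent of: conntrack -D -p udp --dport <nodePort> --dst-nat <endpointIP>
	// Note: the conntrack tool exits non-zero when no entries match, so a
	// production helper would treat that case as non-fatal.
	cmd := exec.Command("conntrack",
		"-D", "-p", "udp",
		"--dport", strconv.Itoa(nodePort),
		"--dst-nat", endpointIP,
	)
	if out, err := cmd.CombinedOutput(); err != nil {
		return fmt.Errorf("error deleting conntrack entries for %s:%d: %v (output: %s)", endpointIP, nodePort, err, out)
	}
	return nil
}
```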
Signed-off-by: Jacob Tanenbaum <jtanenba@redhat.com>
Refactor `pkg/proxy/config`’s ServiceConfigHandler.OnUpdate and
EndpointsConfigHandler.OnUpdate to different method names as they have
different signatures.
This will let the new proxy
(https://github.com/GoogleCloudPlatform/kubernetes/issues/3760)
implement both interfaces.
Since we won’t need a separate loadbalancer structure (load balancing
is handled in the proxy rules), we will simply handle both event types
from the same object.
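A sketch of the renamed handler interfaces, with placeholder types standing in for the API objects of that era (the exact signatures may have differed):

```go
package config

// Service and Endpoints stand in for the API types used at the time.
type Service struct{ /* API Service object */ }
type Endpoints struct{ /* API Endpoints object */ }

// ServiceConfigHandler previously declared OnUpdate([]Service); the distinct
// method names below let a single object implement both interfaces.
type ServiceConfigHandler interface {
	OnServiceUpdate(services []Service)
}

type EndpointsConfigHandler interface {
	OnEndpointsUpdate(endpoints []Endpoints)
}
```

A single proxier object can then register itself as both the service and endpoints handler and rebuild its rules on either kind of event.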