This is an ugly-but-simple rewrite (particularly involving having to
rewrite "single Endpoints with multiple Subsets" as "multiple
EndpointSlices"). Can be cleaned up more later...
The slice code sorts the results slightly differently from the old
code in two cases, and it was simpler to just reorder the expectations
rather than fixing the comparison code. But other than that, the
expected results are exactly the same as before.
This exposed a bug in the EndpointSlice tracking code: we didn't
properly reset the "last change time" when a slice was deleted. (This
means kube-proxy would report an erroneous value in the "endpoint
programming time" metric if a service was added/updated, then deleted
before kube-proxy processed the add/update, then later added again.)
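For illustration, a minimal sketch of what the fix amounts to (the
type and field names here are hypothetical, not kube-proxy's actual
ones): the tracker has to drop its cached timestamp on delete, so a
later re-add isn't measured against stale state.

    package endpointtrack

    import (
        "sync"
        "time"
    )

    type changeTracker struct {
        mu         sync.Mutex
        lastChange map[string]time.Time // keyed by slice name (illustrative)
    }

    func newChangeTracker() *changeTracker {
        return &changeTracker{lastChange: make(map[string]time.Time)}
    }

    func (t *changeTracker) sliceUpdated(name string) {
        t.mu.Lock()
        defer t.mu.Unlock()
        t.lastChange[name] = time.Now()
    }

    func (t *changeTracker) sliceDeleted(name string) {
        t.mu.Lock()
        defer t.mu.Unlock()
        // The bug was the absence of this delete: the old timestamp
        // survived deletion and skewed the "endpoint programming time"
        // metric when the service was later re-added.
        delete(t.lastChange, name)
    }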
In the dual-stack case, iptables.NewDualStackProxier and
ipvs.NewDualStackProxier filtered the nodeport address values by IP
family before creating the single-stack proxiers. But in the
single-stack case, the kube-proxy startup code just passed the value
through to the single-stack proxiers without validation, so they had
to re-check it themselves. Fix that.
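A hedged sketch of the kind of filtering involved (standalone,
standard library only; not the actual kube-proxy helper):

    package proxyutil

    import "net"

    // filterByFamily keeps only the CIDRs matching the requested IP
    // family, the way the dual-stack constructors filter nodeport
    // addresses before handing them to each single-stack proxier.
    func filterByFamily(cidrs []string, wantIPv6 bool) []string {
        var out []string
        for _, s := range cidrs {
            ip, _, err := net.ParseCIDR(s)
            if err != nil {
                continue // real code should report invalid input instead
            }
            if (ip.To4() == nil) == wantIPv6 {
                out = append(out, s)
            }
        }
        return out
    }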
TestEndpointsToEndpointsMap tested code that only ran when using
Endpoints tracking rather than EndpointSlice tracking--which is to
say, code that never runs any more. (TestEndpointsMapFromESC in
endpointslicecache_test.go is an equivalent EndpointSlice test.)
This rule was mistakenly added to kubelet even though it only applies
to kube-proxy's traffic. We do not want to remove it from kubelet yet
because other components may be depending on it for security, but we
should make kube-proxy output its own rule rather than depending on
kubelet.
Some of the chains kube-proxy creates are also created by kubelet; we
need to ensure that those chains exist but we should not delete them
in CleanupLeftovers().
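Roughly, the intended split looks like this (a sketch with a stand-in
interface and an illustrative chain name, not kube-proxy's real
iptables utility API): shared chains get ensured during sync but are
skipped during cleanup.

    package cleanup

    type iptablesRunner interface {
        EnsureChain(table, chain string) error
        DeleteChain(table, chain string) error
    }

    // sharedChains are chains kubelet also creates; kube-proxy must
    // ensure they exist but must never remove them.
    var sharedChains = map[string]bool{
        "KUBE-MARK-DROP": true, // illustrative name
    }

    func cleanupLeftovers(ipt iptablesRunner, table string, chains []string) {
        for _, chain := range chains {
            if sharedChains[chain] {
                continue // jointly owned with kubelet; leave in place
            }
            _ = ipt.DeleteChain(table, chain) // errors ignored for brevity
        }
    }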
Server.Serve() always returns a non-nil error. If it exits because
the HTTP server was closed, it returns ErrServerClosed. To make that
error distinguishable from a real failure, this patch closes the HTTP
server instead of the listener when the Service is deleted. Closing
the HTTP server automatically closes the listener, so there is no
longer any need to cache the listeners.
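The pattern, as a minimal standalone sketch using only net/http:

    package serving

    import (
        "errors"
        "log"
        "net/http"
    )

    func serve(srv *http.Server) {
        // Serve/ListenAndServe always return a non-nil error; after
        // srv.Close() that error is http.ErrServerClosed, which we can
        // now tell apart from a real failure.
        if err := srv.ListenAndServe(); !errors.Is(err, http.ErrServerClosed) {
            log.Printf("unexpected server error: %v", err)
        }
    }

    func onServiceDeleted(srv *http.Server) {
        // Closing the server also closes its listener, so no separate
        // listener bookkeeping is required.
        _ = srv.Close()
    }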
Signed-off-by: Quan Tian <qtian@vmware.com>
Handle https://github.com/moby/ipvs/issues/27
A work-around was already in place, but a segfault would have
occurred once the upstream bug was fixed. That will not happen now.
Updating the scheduler without a node reboot now works.
The address for the probe VS is now 198.51.100.0, which is reserved
for documentation; please see RFC 5737. The comment about this has
been extended.
Checking for kernel modules is a bad way to check for functionality.
This commit simply removes the checks, which also makes it possible
to use a statically linked kernel.
Minimal updates to the unit tests are included.
We had a test asserting that creating a Service with an SCTP port
would create an iptables rule with "-p sctp" in it, which let us
verify that kube-proxy was doing vaguely the right thing with SCTP
even if the e2e environment didn't have SCTP support. But this really
makes much more sense as a unit test.
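Sketch of the unit-test shape (the helper is hypothetical stand-in
plumbing, not the real fake-proxier setup):

    package iptablestest

    import (
        "strings"
        "testing"
    )

    // syncRulesForSCTPService is a hypothetical helper: it would build
    // a fake proxier, add a Service with an SCTP port, run
    // syncProxyRules, and return the generated iptables-restore input.
    func syncRulesForSCTPService(t *testing.T) string {
        t.Helper()
        return "-A KUBE-SERVICES -p sctp ..." // placeholder output
    }

    func TestSCTPServiceRule(t *testing.T) {
        rules := syncRulesForSCTPService(t)
        if !strings.Contains(rules, "-p sctp") {
            t.Errorf("expected an SCTP rule in:\n%s", rules)
        }
    }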
We currently invoke /sbin/iptables 24 times on each syncProxyRules
call before running iptables-restore. Since even trivial iptables
invocations are slow on hosts with lots of iptables rules, this adds
a lot of time to each sync. Since these checks are expected to be a
no-op 99% of the time, skip them on partial syncs.
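Schematically (type and field names are illustrative, not the real
proxier's), the change amounts to gating those checks on full syncs:

    package proxier

    type chainSpec struct{ table, chain string }

    type iptablesIface interface {
        EnsureChain(table, chain string) error
    }

    type proxier struct {
        iptables     iptablesIface
        baseChains   []chainSpec
        needFullSync bool
    }

    func (p *proxier) syncProxyRules() {
        // The existence checks are a no-op ~99% of the time, so run
        // them only on full syncs; partial syncs skip straight to
        // building the iptables-restore input.
        if p.needFullSync {
            for _, c := range p.baseChains {
                _ = p.iptables.EnsureChain(c.table, c.chain)
            }
        }
        // ... assemble and apply the iptables-restore data as usual ...
    }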