As mentioned in issue #80061, under iptables lock contention we can see an
increasing rate of iptables restore failures, because each restore needs
to grab the iptables file lock.
The failure metric can provide administrators more insight into these errors.
Metrics will be collected in both the kube-proxy iptables and ipvs modes.
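A minimal sketch of the kind of counter involved, assuming the Prometheus
client library; the metric name and registration below are illustrative,
not necessarily the exact ones added here:

    // Hypothetical sketch: counts iptables restore failures so operators can
    // alert on lock-contention-driven sync problems.
    package metrics

    import "github.com/prometheus/client_golang/prometheus"

    var IptablesRestoreFailuresTotal = prometheus.NewCounter(
        prometheus.CounterOpts{
            Subsystem: "kubeproxy",
            Name:      "sync_proxy_rules_iptables_restore_failures_total",
            Help:      "Cumulative proxy iptables restore failures",
        },
    )

    func init() {
        // Register once at startup; the proxier would call
        // IptablesRestoreFailuresTotal.Inc() whenever iptables-restore fails.
        prometheus.MustRegister(IptablesRestoreFailuresTotal)
    }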
Signed-off-by: Hui Luo <luoh@vmware.com>
The ipvs `getProxyMode` test fails on Mac because `utilipvs.GetRequiredIPVSMods`
tries to read `/proc/sys/kernel/osrelease` to find the version of the running
Linux kernel. The kernel version is used to determine the list of required
kernel modules for ipvs.
The logic to determine the kernel version is moved to a GetKernelVersion
method on LinuxKernelHandler, which implements ipvs.KernelHandler.
A mock KernelHandler is used in the test cases.
Reading and parsing the file is now done in Go instead of execing cut.
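As a rough illustration of that last point, reading and parsing the osrelease
file in pure Go might look like the sketch below; the helper name and exact
parsing in the real GetKernelVersion may differ:

    package main

    import (
        "fmt"
        "os"
        "strings"
    )

    const kernelVersionFile = "/proc/sys/kernel/osrelease"

    // getKernelVersion reads /proc/sys/kernel/osrelease and returns its first
    // whitespace-separated field, replacing the previous exec of cut.
    func getKernelVersion() (string, error) {
        data, err := os.ReadFile(kernelVersionFile)
        if err != nil {
            return "", fmt.Errorf("error reading osrelease file %q: %v", kernelVersionFile, err)
        }
        fields := strings.Fields(string(data))
        if len(fields) == 0 {
            return "", fmt.Errorf("empty osrelease file %q", kernelVersionFile)
        }
        return fields[0], nil
    }

    func main() {
        version, err := getKernelVersion()
        if err != nil {
            fmt.Fprintln(os.Stderr, err)
            os.Exit(1)
        }
        fmt.Println(version)
    }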
Computing all node IPs twice would always happen when no node port
addresses were explicitly set. The GetNodeAddresses call would return
two zero CIDRs (IPv4 and IPv6) and we would then retrieve all node IPs
twice because the loop wouldn't break after the first time.
Also, it is possible for the user to set explicit node port addresses
including both a zero and a non-zero CIDR, but this wouldn't make sense
for nodeIPs since the zero CIDR would already cause nodeIPs to include
all IPs on the node.
Previously the same IP addresses would be computed for each nodePort
service, which could be CPU intensive for a large number of nodePort
services with a large number of IP addresses on the node.
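A simplified sketch of the resulting behavior, using stand-in names rather
than the real proxier helpers: once a zero CIDR is seen, all node IPs are
returned once and the remaining CIDRs are skipped, and the result can be
computed once per sync and shared by every nodePort service:

    package main

    import (
        "fmt"
        "net"
    )

    // isZeroCIDR reports whether the CIDR matches every address.
    func isZeroCIDR(cidr string) bool {
        return cidr == "0.0.0.0/0" || cidr == "::/0"
    }

    // nodeIPsForNodePorts is a stand-in for the proxier logic: a zero CIDR
    // already covers every node IP, so the loop stops instead of collecting
    // the same addresses again for a second zero CIDR.
    func nodeIPsForNodePorts(cidrs []string, allNodeIPs []net.IP) []net.IP {
        var ips []net.IP
        for _, cidr := range cidrs {
            if isZeroCIDR(cidr) {
                return allNodeIPs
            }
            _, ipNet, err := net.ParseCIDR(cidr)
            if err != nil {
                continue
            }
            for _, ip := range allNodeIPs {
                if ipNet.Contains(ip) {
                    ips = append(ips, ip)
                }
            }
        }
        return ips
    }

    func main() {
        all := []net.IP{net.ParseIP("10.0.0.1"), net.ParseIP("192.168.1.5")}
        // With two zero CIDRs configured, the node IPs are returned once, not twice.
        fmt.Println(nodeIPsForNodePorts([]string{"0.0.0.0/0", "::/0"}, all))
        fmt.Println(nodeIPsForNodePorts([]string{"192.168.1.0/24"}, all))
    }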
Currently the BaseServiceInfo struct implements the ServicePort interface, but
only uses that interface sometimes. All the elements of BaseServiceInfo are exported
and sometimes the interface is used to access them and other times not.
I extended the ServicePort interface so that all relevant values can be accessed through
it and unexported all the elements of BaseServiceInfo.
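A trimmed-down sketch of the pattern described above, with illustrative
method and field names rather than the full upstream set:

    package proxy

    import "net"

    // ServicePort exposes everything callers need, so the underlying struct
    // fields no longer have to be exported.
    type ServicePort interface {
        ClusterIP() net.IP
        Port() int
        Protocol() string
        NodePort() int
    }

    // BaseServiceInfo keeps its data unexported and is reached only through
    // the ServicePort accessors.
    type BaseServiceInfo struct {
        clusterIP net.IP
        port      int
        protocol  string
        nodePort  int
    }

    var _ ServicePort = &BaseServiceInfo{}

    func (info *BaseServiceInfo) ClusterIP() net.IP { return info.clusterIP }
    func (info *BaseServiceInfo) Port() int         { return info.port }
    func (info *BaseServiceInfo) Protocol() string  { return info.protocol }
    func (info *BaseServiceInfo) NodePort() int     { return info.nodePort }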
This adds some useful metrics around pending changes and last successful
sync time.
The goal is for administrators to be able to alert on proxies that, for
whatever reason, are quite stale.
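Roughly, the gauges look like the sketch below, assuming the Prometheus
client library; the names are approximations of the added metrics rather
than a definitive list:

    package metrics

    import "github.com/prometheus/client_golang/prometheus"

    var (
        // Pending changes that have not yet been applied by a sync.
        EndpointChangesPending = prometheus.NewGauge(prometheus.GaugeOpts{
            Subsystem: "kubeproxy",
            Name:      "sync_proxy_rules_endpoint_changes_pending",
            Help:      "Pending proxy rules Endpoint changes",
        })

        // Timestamp of the last successful rules sync, for staleness alerts.
        SyncProxyRulesLastTimestamp = prometheus.NewGauge(prometheus.GaugeOpts{
            Subsystem: "kubeproxy",
            Name:      "sync_proxy_rules_last_timestamp_seconds",
            Help:      "The last time proxy rules were successfully synced",
        })
    )

    func init() {
        prometheus.MustRegister(EndpointChangesPending, SyncProxyRulesLastTimestamp)
    }

    // recordSuccessfulSync would be called at the end of a successful sync.
    func recordSuccessfulSync() {
        SyncProxyRulesLastTimestamp.SetToCurrentTime()
    }

An alert on staleness can then compare the current time against that
timestamp, for example firing when time() minus the last-sync gauge exceeds
a few sync periods.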
Signed-off-by: Casey Callendrello <cdc@redhat.com>
As part of the endpoint creation process, conntrack entries are cleared when a service
goes from 0 -> 1 endpoints. This is to prevent an existing conntrack entry from blocking
traffic to the service. Currently the system ignores the existence of the service's external IP
addresses, which exposes that errant behavior.
This adds the externalIP addresses of UDP services to the list of conntrack entries that
get cleared, allowing traffic to flow.
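A simplified sketch of the idea, using stand-in types and helper names
rather than the real proxier and conntrack utility code:

    package main

    import "fmt"

    // serviceInfo is a minimal stand-in for the proxier's service record.
    type serviceInfo struct {
        clusterIP   string
        externalIPs []string
        protocol    string
    }

    // staleConntrackIPs lists the IPs whose UDP conntrack entries should be
    // flushed when a service goes from 0 to 1 endpoints. With this change the
    // external IPs are included alongside the cluster IP.
    func staleConntrackIPs(svc serviceInfo) []string {
        if svc.protocol != "UDP" {
            // Conntrack clearing only matters for UDP here.
            return nil
        }
        ips := []string{svc.clusterIP}
        ips = append(ips, svc.externalIPs...)
        return ips
    }

    func main() {
        svc := serviceInfo{
            clusterIP:   "10.96.0.10",
            externalIPs: []string{"203.0.113.7"},
            protocol:    "UDP",
        }
        fmt.Println(staleConntrackIPs(svc))
    }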
Signed-off-by: Jacob Tanenbaum <jtanenba@redhat.com>