* De-share the Handler struct in core API
An upcoming PR adds a handler that only applies to one of these two paths (probes vs. lifecycle hooks); having fields that can never work in the other context seems bad.
This never should have been shared. Lifecycle hooks are like a "write"
while probes are more like a "read". HTTPGet and TCPSocket don't really
make sense as lifecycle hooks (but I can't take that back). When we add
gRPC, it is EXPLICITLY a health check (defined by gRPC) not an arbitrary
RPC - so a probe makes sense but a hook does not.
In the future I can also see adding lifecycle hooks that don't make sense as
probes. E.g. 'sleep' is a common lifecycle request; today the only option is
`exec`, which requires having a sleep binary in your image.
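For illustration, a minimal sketch of what the de-shared types could look like, assuming the existing core v1 action types (type and field names here are illustrative, not the final API):

```go
// ProbeHandler holds the actions a probe may take. Probes are
// read-only health checks, so gRPC health checking fits here.
type ProbeHandler struct {
	Exec      *ExecAction      `json:"exec,omitempty"`
	HTTPGet   *HTTPGetAction   `json:"httpGet,omitempty"`
	TCPSocket *TCPSocketAction `json:"tcpSocket,omitempty"`
	// GRPC is explicitly the gRPC health-checking protocol,
	// not an arbitrary RPC, so it only makes sense for probes.
	GRPC *GRPCAction `json:"grpc,omitempty"`
}

// LifecycleHandler holds the actions a lifecycle hook may take.
// Hooks are "writes", so hook-only actions (e.g. a built-in sleep)
// could be added here later without leaking into probes.
type LifecycleHandler struct {
	Exec      *ExecAction      `json:"exec,omitempty"`
	HTTPGet   *HTTPGetAction   `json:"httpGet,omitempty"`
	TCPSocket *TCPSocketAction `json:"tcpSocket,omitempty"`
}
```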
* Run update scripts
The agnhost pods using netexec bind by default to UDP port 8081. Use a
different port for hostNetwork pods to avoid scheduling conflicts that
would fail the tests.
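For example, the test can move netexec off the default UDP port for hostNetwork pods via agnhost's --udp-port flag; the helper and port value below are only illustrative:

```go
import "fmt"

// netexecArgs builds agnhost netexec arguments with an overridden UDP port,
// so hostNetwork pods don't all contend for the default 8081.
func netexecArgs(udpPort int) []string {
	return []string{
		"netexec",
		"--http-port=8080",                    // netexec's default HTTP port
		fmt.Sprintf("--udp-port=%d", udpPort), // e.g. 8091 instead of the default 8081
	}
}
```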
Agnhost's serve-hostname returns the hostname at the /hostname endpoint,
while the pod's host node name may be an FQDN, so a comparison between
the two fails.
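A sketch of the kind of fix this implies: strip the domain before comparing, so an FQDN node name still matches the short hostname (helper name is illustrative):

```go
import "strings"

// shortHostname trims any domain suffix so a node name reported as an FQDN
// (e.g. "node-1.example.com") matches the short hostname ("node-1")
// returned by serve-hostname.
func shortHostname(name string) string {
	return strings.Split(name, ".")[0]
}
```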
Signed-off-by: Martin Kennelly <mkennell@redhat.com>
Pods using hostNetwork share the host network namespace, and therefore share
it with the rest of the host's processes and pods. If several pods try to
bind to the same port, the test will fail, so we use an uncommon port and run
the different scenarios within the same test. That way we only have to bind
once and we consume fewer ports, reducing the risk of port collisions.
A container without elevated privileges cannot bind to a host port below
1024; attempting to do so causes a bind permission denied error.
Increase the port number above 1024 to allow binding.
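A minimal sketch of the resulting test pod, assuming an agnhost image and an illustrative port above 1024 (the real test may use a different image tag and port):

```go
import (
	"fmt"

	v1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)

// hostNetworkPod builds a pod that shares the host network namespace and
// binds exactly one uncommon, unprivileged port (>1024). All scenarios in
// the test reuse this single pod, so the port is bound only once.
func hostNetworkPod(port int) *v1.Pod {
	return &v1.Pod{
		ObjectMeta: metav1.ObjectMeta{Name: "host-test-pod"},
		Spec: v1.PodSpec{
			HostNetwork:   true,
			RestartPolicy: v1.RestartPolicyNever,
			Containers: []v1.Container{{
				Name:  "agnhost",
				Image: "registry.k8s.io/e2e-test-images/agnhost:2.39",
				Args:  []string{"netexec", fmt.Sprintf("--http-port=%d", port)},
			}},
		},
	}
}
```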
Signed-off-by: Martin Kennelly <mkennell@redhat.com>
The current test for a Service with endpoints only validates the API objects;
it doesn't validate that the implementation of Services actually works.
This means that a cluster without any component implementing Services
can pass Conformance.
Based on this comment in
ea07644522/test/e2e/framework/pod/node_selection.go (L96-L101):
// pod.Spec.NodeName should not be set directly because
// it will bypass the scheduler, potentially causing
// kubelet to Fail the pod immediately if it's out of
// resources. Instead, we want the pod to remain
// pending in the scheduler until the node has resources
// freed up.
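Following that guidance, a sketch of pinning a test pod to a node without setting pod.Spec.NodeName directly, using node affinity so the pod still goes through the scheduler (the e2e framework has helpers for this; the inline version is only illustrative):

```go
import v1 "k8s.io/api/core/v1"

// pinToNode constrains the pod to a specific node via node affinity on the
// metadata.name field, instead of bypassing the scheduler with NodeName.
func pinToNode(pod *v1.Pod, nodeName string) {
	pod.Spec.Affinity = &v1.Affinity{
		NodeAffinity: &v1.NodeAffinity{
			RequiredDuringSchedulingIgnoredDuringExecution: &v1.NodeSelector{
				NodeSelectorTerms: []v1.NodeSelectorTerm{{
					MatchFields: []v1.NodeSelectorRequirement{{
						Key:      "metadata.name",
						Operator: v1.NodeSelectorOpIn,
						Values:   []string{nodeName},
					}},
				}},
			},
		},
	}
}
```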
The e2e test validates the following Service status endpoints:
- patchCoreV1NamespacedServiceStatus
- readCoreV1NamespacedServiceStatus
- replaceCoreV1NamespacedServiceStatus
It also includes the previously untested Service endpoint:
- patchCoreV1NamespacedService
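For illustration, exercising the status subresource with client-go could look like this; the namespace, name and annotation marker are made up:

```go
import (
	"context"
	"encoding/json"

	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/apimachinery/pkg/types"
	"k8s.io/client-go/kubernetes"
)

// patchServiceStatus patches the status subresource of a Service, which is
// what patchCoreV1NamespacedServiceStatus exercises.
func patchServiceStatus(ctx context.Context, c kubernetes.Interface, ns, name string) error {
	patch, err := json.Marshal(map[string]interface{}{
		"metadata": map[string]interface{}{
			"annotations": map[string]string{"patchedstatus": "true"}, // illustrative marker
		},
	})
	if err != nil {
		return err
	}
	// The trailing "status" argument targets the status subresource.
	_, err = c.CoreV1().Services(ns).Patch(ctx, name, types.MergePatchType, patch, metav1.PatchOptions{}, "status")
	return err
}
```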
The sig-network e2e tests related to Services have more than 3k lines.
Some of those tests cover load balancers, which are cloud-provider
specific and have special requirements.
We split up the services file, keeping the load balancer e2e tests
in their own file with their own tag, so they are easier to skip
for people who don't run e2e tests on cloud providers.
The ESIPP tests use a function to poll an HTTP endpoint.
This function failed the framework if the request to the HTTP endpoint
timed out, causing a panic that ginkgo couldn't recover from.
Also, this function is used inside a PollImmediate loop, so it should
return the error instead of failing.
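A sketch of the shape of the fix: the poll condition returns false (or an error) on transient failures instead of calling the framework's fail helpers, so wait.PollImmediate can keep retrying and ginkgo never has to recover from a panic mid-poll (URL, intervals and helper name are illustrative):

```go
import (
	"net/http"
	"time"

	"k8s.io/apimachinery/pkg/util/wait"
)

// waitForHTTPOK polls the endpoint until it answers 200 OK, treating request
// timeouts as "not ready yet" rather than failing the framework outright.
func waitForHTTPOK(url string, timeout time.Duration) error {
	client := &http.Client{Timeout: 5 * time.Second}
	return wait.PollImmediate(2*time.Second, timeout, func() (bool, error) {
		resp, err := client.Get(url)
		if err != nil {
			// Transient error (e.g. timeout): retry instead of panicking.
			return false, nil
		}
		defer resp.Body.Close()
		return resp.StatusCode == http.StatusOK, nil
	})
}
```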
The test creates a pod with a hostPort to expose an SCTP port, then
checks whether the iptables rules were installed correctly on the host.
The iptables rules MUST be checked on the same host where the pod
is running :)
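A hedged sketch of how that constraint can be met: read the node where the SCTP pod landed and constrain the host-checking pod to that node (helper names are illustrative, and the kubernetes.io/hostname label is assumed to match the node name, which is the common case):

```go
import (
	"context"

	v1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
)

// nodeOfPod returns the node where the SCTP hostPort pod is running.
func nodeOfPod(ctx context.Context, c kubernetes.Interface, ns, podName string) (string, error) {
	pod, err := c.CoreV1().Pods(ns).Get(ctx, podName, metav1.GetOptions{})
	if err != nil {
		return "", err
	}
	return pod.Spec.NodeName, nil
}

// pinCheckerToNode makes the iptables-checking pod share the host's network
// namespace and schedules it onto the same node as the SCTP pod.
func pinCheckerToNode(checker *v1.Pod, nodeName string) {
	checker.Spec.HostNetwork = true
	checker.Spec.NodeSelector = map[string]string{v1.LabelHostname: nodeName}
}
```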
This reverts commit 0ed8fd6dc9.
It turns out that ExternalIPs are not allowed to be reachable from
pods until the IP is present on the node.
However, due to a kube-proxy limitation it was working in environments
that used CNIs without bridges for the pods.