The test assumes that all nodes have Ceph client utilities installed.
The Ceph RBD container is hand-crafted to be as minimal as possible. It creates
a new RBD on startup, which can take up to several minutes on busy machines.
iSCSI and RBD volumes don't work through Kubernetes services: these protocols
are broken by the S-NAT performed by kube-proxy. iSCSI, at least, exchanges the
real IP address of the iSCSI target as part of the protocol.
This reverts commit 118004c166.
This helps with routing of TCP traffic between clients and servers when
flannel or a similar overlay network is not installed and pods cannot see each
other directly.
- The NFS server needs the 'insecure' option in /etc/exports to allow NFS
  clients on ports > 1024, since the Kubernetes service rewrites the client
  port to a random number (see the example export line after this list).
- glusterfs no longer needs an explicit endpoints definition; it uses the
  service instead.
- add appropriate server containers into contrib/for-tests/volumes-tester
- the tests are off by default (they need kubelet --allow_privileged=True)
- enable by 'go run hack/e2e.go ... --ginkgo.focus=Volume'
- add glusterfs tools to the list of installed packages on each node
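
For reference, a hypothetical /etc/exports line with the 'insecure' option could
look like the following; the exported path and the other options here are
illustrative, not necessarily the ones used by the test server container:

    # 'insecure' lets clients connect from source ports above 1024,
    # which is what they end up with after the kube-proxy port rewrite.
    /exports *(rw,insecure,no_root_squash)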
Instead of endpoints being a flat list, it is now a list of "subsets"
where each is a struct of {Addresses, Ports}. To generate the list of
endpoints you need to take the union of the Cartesian products of the
subsets. This is compact in the vast majority of cases, yet still
represents named ports and corner cases (e.g. each pod has a different
port number).
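
As a rough Go sketch of that expansion, using simplified stand-in types rather
than the real API structs (field sets and names here are illustrative only):

    // Simplified stand-in types; the real API structs carry more fields.
    package main

    import "fmt"

    type EndpointAddress struct{ IP string }

    type EndpointPort struct {
        Name string
        Port int
    }

    type EndpointSubset struct {
        Addresses []EndpointAddress
        Ports     []EndpointPort
    }

    // flatten builds the full endpoint list: the union of the Cartesian
    // products Addresses x Ports of every subset.
    func flatten(subsets []EndpointSubset) []string {
        var out []string
        for _, ss := range subsets {
            for _, a := range ss.Addresses {
                for _, p := range ss.Ports {
                    out = append(out, fmt.Sprintf("%s:%d", a.IP, p.Port))
                }
            }
        }
        return out
    }

    func main() {
        subsets := []EndpointSubset{
            // Common case: many pods share one named port.
            {
                Addresses: []EndpointAddress{{IP: "10.1.2.3"}, {IP: "10.1.2.4"}},
                Ports:     []EndpointPort{{Name: "http", Port: 8080}},
            },
            // Corner case: one pod listens on its own port number.
            {
                Addresses: []EndpointAddress{{IP: "10.1.2.5"}},
                Ports:     []EndpointPort{{Name: "http", Port: 9090}},
            },
        }
        fmt.Println(flatten(subsets))
        // Output: [10.1.2.3:8080 10.1.2.4:8080 10.1.2.5:9090]
    }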
This also stores subsets in a deterministic order (sorted by hash) to
avoid spurious updates and comparison problems.
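
A minimal sketch of one way such hash-based ordering can work; the hash
function below (FNV over a printed form of the subset) is purely illustrative
and not necessarily what the code in this change uses:

    // Illustrative only: order subsets by a hash of their contents so the
    // result does not depend on the order in which they were built.
    package main

    import (
        "fmt"
        "hash/fnv"
        "sort"
    )

    type subset struct {
        Addresses []string
        Ports     []int
    }

    func hashOf(s subset) uint64 {
        h := fnv.New64a()
        fmt.Fprintf(h, "%v", s)
        return h.Sum64()
    }

    func main() {
        subsets := []subset{
            {Addresses: []string{"10.1.2.5"}, Ports: []int{9090}},
            {Addresses: []string{"10.1.2.3", "10.1.2.4"}, Ports: []int{8080}},
        }
        // A content-derived order means repeated writes of the same logical
        // endpoints compare equal, avoiding spurious updates.
        sort.Slice(subsets, func(i, j int) bool {
            return hashOf(subsets[i]) < hashOf(subsets[j])
        })
        fmt.Println(subsets)
    }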
This is a fully compatible change: old objects and clients will keep working
as long as they don't need the new functionality.
This is the prep for multi-port Services, which will extend the API to produce
endpoints in this new structure.