The cloudprovider is being refactored out of the Kubernetes core. All cloud-specific calls are being moved from kube-apiserver, kubelet, and kube-controller-manager into a separate, vendor-maintained binary called cloud-controller-manager.

The kubelet relies on the cloudprovider to detect information about the node that it is running on, and some cloudproviders obtained this information by querying services local to the node. In the new world, local information cannot be relied on, since cloud-controller-manager will not run on every node; only one active instance of it will run in the cluster.

Today, all calls to the cloudprovider are keyed by nodename. Nodenames are unique within a Kubernetes cluster, but generally not unique within the cloud. This model of addressing nodes by nodename will not work in the future, because local services cannot be queried to uniquely identify a node in the cloud. Therefore, I propose that we perform all cloudprovider calls based on the ProviderID. This ID uniquely identifies a node in an external database (such as the instance ID in the AWS cloud).
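For illustration, the ProviderID is recorded in the Node spec and can be inspected with kubectl. This is a minimal sketch assuming a running cluster; the node name `node-1` and the AWS-style value in the output are hypothetical examples, not output from a real cluster:

```
# Print the ProviderID recorded on a node (node-1 is a hypothetical name).
$ kubectl get node node-1 -o jsonpath='{.spec.providerID}'
aws:///us-west-2a/i-0123456789abcdef0
```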
## Multipath

To leverage multiple paths for block storage, it is important to perform the
multipath configuration on the host.
If your distribution does not provide `/etc/multipath.conf`, then you can
either use the following minimalistic one:
```
defaults {
    find_multipaths yes
    user_friendly_names yes
}
```
or create a new one by running:

```
$ mpathconf --enable
```
Finally, you need to enable and start (or restart) multipathd:

```
$ systemctl enable multipathd.service
$ systemctl restart multipathd.service
```
Note: Any change to `multipath.conf` or enabling multipath can lead to
inaccessible block devices, because they will be claimed by multipath and
exposed as devices in `/dev/mapper/*`.

Additional information about multipath can be found in the iSCSI documentation.
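As a quick sanity check, and assuming the multipath tools are installed and a multipath-capable block device is attached, you can list the resulting multipath maps and the devices multipath has claimed:

```
$ sudo multipath -ll    # show multipath maps and their path groups
$ ls /dev/mapper/       # claimed devices are exposed here
```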