
WARNING WARNING WARNING WARNING WARNING

PLEASE NOTE: This document applies to the HEAD of the source tree

If you are using a released version of Kubernetes, you should refer to the docs that go with that version.

Documentation for other releases can be found at releases.k8s.io.

Network Plugins

Disclaimer: Network plugins are in alpha. The contents of this document will change rapidly.

Network plugins in Kubernetes come in a few flavors:

  • Plain vanilla exec plugins: deprecated in favor of CNI plugins.
  • CNI plugins: adhere to the appc/CNI specification, designed for interoperability.
  • Kubenet plugin: implements a basic cbr0 using the bridge and host-local CNI plugins.

Installation

The kubelet has a single default network plugin, and a default network common to the entire cluster. It probes for plugins when it starts up, remembers what it found, and executes the selected plugin at appropriate times in the pod lifecycle (this is only true for docker, as rkt manages its own CNI plugins). There are two kubelet command-line parameters to keep in mind when using plugins:

  • network-plugin-dir: Kubelet probes this directory for plugins on startup.
  • network-plugin: The network plugin to use from network-plugin-dir. It must match the name reported by a plugin probed from the plugin directory. For CNI plugins, this is simply "cni".
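
For illustration, a kubelet configured to probe /usr/lib/kubernetes and select a plugin that reports the name "bridge" might be started as follows (the directory and plugin name here are illustrative assumptions, not defaults):

    # Probe /usr/lib/kubernetes for plugins at startup and select the one
    # that reports the name "bridge".
    kubelet --network-plugin-dir=/usr/lib/kubernetes \
            --network-plugin=bridge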

Network Plugin Requirements

Besides providing the NetworkPlugin interface to configure and clean up pod networking, the plugin may also need specific support for kube-proxy. The iptables proxy obviously depends on iptables, and the plugin may need to ensure that container traffic is made available to iptables. For example, if the plugin connects containers to a Linux bridge, the plugin must set the net/bridge/bridge-nf-call-iptables sysctl to 1 to ensure that the iptables proxy functions correctly. If the plugin does not use a Linux bridge (but instead something like Open vSwitch or some other mechanism) it should ensure container traffic is appropriately routed for the proxy.
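
As a sketch of what that entails, a bridge-based plugin could enable the sysctl like this (on some kernels the br_netfilter module must be loaded first for the sysctl to exist):

    # Make traffic crossing a Linux bridge visible to iptables, so the
    # kube-proxy iptables rules apply to container traffic.
    sysctl -w net.bridge.bridge-nf-call-iptables=1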

By default, if no kubelet network plugin is specified, the noop plugin is used; it sets net/bridge/bridge-nf-call-iptables=1 to ensure that simple configurations (like docker with a bridge) work correctly with the iptables proxy.

Exec

Place plugins in network-plugin-dir/plugin-name/plugin-name, e.g., if you have a bridge plugin and network-plugin-dir is /usr/lib/kubernetes, you'd place the bridge plugin executable at /usr/lib/kubernetes/bridge/bridge. See this comment for more details.
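
For example, installing that bridge plugin executable (assuming it has been built as ./bridge in the current directory) would look like:

    # Layout is <network-plugin-dir>/<plugin-name>/<plugin-name>.
    mkdir -p /usr/lib/kubernetes/bridge
    cp ./bridge /usr/lib/kubernetes/bridge/bridge
    chmod +x /usr/lib/kubernetes/bridge/bridge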

CNI

The CNI plugin is selected by passing Kubelet the --network-plugin=cni command-line option. Kubelet reads the first CNI configuration file from --network-plugin-dir and uses the CNI configuration from that file to set up each pod's network. The CNI configuration file must match the CNI specification, and any required CNI plugins referenced by the configuration must be present in /opt/cni/bin.
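
As a minimal sketch, the following writes a CNI configuration that uses the standard bridge and host-local plugins and points kubelet at it; the directory /etc/cni/net.d, the network name mynet, the bridge name mynet0, and the 10.22.0.0/16 subnet are illustrative assumptions, not defaults:

    # kubelet uses the first CNI configuration file found in the plugin dir.
    mkdir -p /etc/cni/net.d
    cat >/etc/cni/net.d/10-mynet.conf <<'EOF'
    {
      "name": "mynet",
      "type": "bridge",
      "bridge": "mynet0",
      "isGateway": true,
      "ipMasq": true,
      "ipam": {
        "type": "host-local",
        "subnet": "10.22.0.0/16",
        "routes": [
          { "dst": "0.0.0.0/0" }
        ]
      }
    }
    EOF

    # The bridge and host-local binaries referenced above must be present
    # in /opt/cni/bin.
    kubelet --network-plugin=cni --network-plugin-dir=/etc/cni/net.d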

kubenet

The Linux-only kubenet plugin provides functionality similar to the --configure-cbr0 kubelet command-line option. It creates a Linux bridge named cbr0 and a veth pair for each pod, with the host end of each pair connected to cbr0. The pod end of the pair is assigned an IP address from a range allocated to the node, either through configuration or by the controller-manager. cbr0 is assigned an MTU matching the smallest MTU of an enabled normal interface on the host. The kubenet plugin is currently mutually exclusive with, and will eventually replace, the --configure-cbr0 option. It is also currently incompatible with the flannel experimental overlay.
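
Once kubenet is running on a node, its work can be inspected with ordinary Linux tooling; for instance (illustrative commands, not part of the plugin; brctl requires the bridge-utils package):

    # Inspect the bridge kubenet creates, including its address and MTU.
    ip addr show cbr0
    # List the host ends of the pod veth pairs attached to the bridge.
    brctl show cbr0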

The plugin requires a few things:

  • The standard CNI bridge and host-local plugins must be placed in /opt/cni/bin.
  • Kubelet must be run with the --network-plugin=kubenet argument to enable the plugin.
  • Kubelet must also be run with the --reconcile-cidr argument to ensure the IP subnet assigned to the node (by configuration or the controller-manager) is propagated to the plugin.
  • The node must be assigned an IP subnet through either the --pod-cidr kubelet command-line option or the --allocate-node-cidrs=true --cluster-cidr=<cidr> controller-manager command-line options.
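
Putting the above together, a sketch of the relevant flags might look like the following (the 10.100.0.0/16 cluster CIDR is an illustrative value):

    # On each node: enable kubenet and propagate the node's assigned
    # subnet to the plugin.
    kubelet --network-plugin=kubenet --reconcile-cidr=true

    # On the master: have the controller-manager allocate a pod subnet
    # to each node (alternatively, set --pod-cidr on each kubelet).
    kube-controller-manager --allocate-node-cidrs=true \
                            --cluster-cidr=10.100.0.0/16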
