Merge pull request #6778 from AkihiroSuda/docs-cri-simplify

Add `docs/snapshotters`; simplify `docs/cri`
This commit is contained in:
Kazuyoshi Kato
2022-04-06 09:23:55 -07:00
committed by GitHub
4 changed files with 148 additions and 0 deletions


@@ -6,7 +6,104 @@ path: `/etc/containerd/config.toml`).
See [here](https://github.com/containerd/containerd/blob/main/docs/ops.md)
for more information about containerd config.
Note that the `[plugins."io.containerd.grpc.v1.cri"]` section is specific to CRI,
and not recognized by other containerd clients such as `ctr`, `nerdctl`, and Docker/Moby.
## Basic configuration
### Cgroup Driver
While containerd and Kubernetes use the legacy `cgroupfs` driver for managing cgroups by default,
it is recommended to use the `systemd` driver on systemd-based hosts to comply with
[the "single-writer" rule](https://systemd.io/CGROUP_DELEGATION/) of cgroups.
To configure containerd to use the `systemd` driver, set the following option in `/etc/containerd/config.toml`:
```toml
version = 2

[plugins."io.containerd.grpc.v1.cri".containerd.runtimes.runc.options]
  SystemdCgroup = true
```
In addition to containerd, you have to configure the `KubeletConfiguration` to use the "systemd" cgroup driver.
The `KubeletConfiguration` is typically located at `/var/lib/kubelet/config.yaml`:
```yaml
kind: KubeletConfiguration
apiVersion: kubelet.config.k8s.io/v1beta1
cgroupDriver: "systemd"
```
kubeadm users should also see [the kubeadm documentation](https://kubernetes.io/docs/tasks/administer-cluster/kubeadm/configure-cgroup-driver/).
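After changing either file, restart the corresponding services so the new driver takes effect. A minimal sketch, assuming both containerd and the kubelet are managed as systemd units:

```bash
# Restart containerd first so the CRI runtime picks up SystemdCgroup = true,
# then restart the kubelet so it uses the matching cgroupDriver setting
sudo systemctl restart containerd
sudo systemctl restart kubelet
```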
### Snapshotter
The default snapshotter is set to `overlayfs` (akin to Docker's `overlay2` storage driver):
```toml
version = 2

[plugins."io.containerd.grpc.v1.cri".containerd]
  snapshotter = "overlayfs"
```
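To confirm which snapshotter the CRI plugin is actually using, you can inspect the runtime status; a quick sketch, assuming `crictl` is installed and pointed at containerd's CRI socket:

```bash
# Dump the CRI runtime status/config as JSON and look for the snapshotter field
crictl info | grep snapshotter
```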
See [here](https://github.com/containerd/containerd/blob/main/docs/snapshotters) for other supported snapshotters.
### Runtime classes
The following example registers custom runtimes into containerd:
```toml
version = 2

[plugins."io.containerd.grpc.v1.cri".containerd]
  default_runtime_name = "crun"
[plugins."io.containerd.grpc.v1.cri".containerd.runtimes]
  # crun: https://github.com/containers/crun
  [plugins."io.containerd.grpc.v1.cri".containerd.runtimes.crun]
    runtime_type = "io.containerd.runc.v2"
    [plugins."io.containerd.grpc.v1.cri".containerd.runtimes.crun.options]
      BinaryName = "/usr/local/bin/crun"
  # gVisor: https://gvisor.dev/
  [plugins."io.containerd.grpc.v1.cri".containerd.runtimes.gvisor]
    runtime_type = "io.containerd.runsc.v1"
  # Kata Containers: https://katacontainers.io/
  [plugins."io.containerd.grpc.v1.cri".containerd.runtimes.kata]
    runtime_type = "io.containerd.kata.v2"
```
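After editing the file and restarting containerd, you can print the merged configuration to verify that the runtimes were registered; a quick check, assuming a standard installation:

```bash
# Dump the fully merged containerd configuration and show the crun runtime entry
containerd config dump | grep -A 2 'runtimes.crun'
```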
In addition, you have to install the following `RuntimeClass` resources into the cluster
with the `cluster-admin` role:
```yaml
apiVersion: node.k8s.io/v1
kind: RuntimeClass
metadata:
  name: crun
handler: crun
---
apiVersion: node.k8s.io/v1
kind: RuntimeClass
metadata:
  name: gvisor
handler: gvisor
---
apiVersion: node.k8s.io/v1
kind: RuntimeClass
metadata:
  name: kata
handler: kata
```
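These manifests can be applied with `kubectl`; for example, assuming they are saved to a hypothetical file named `runtimeclasses.yaml`:

```bash
# Register the runtime classes cluster-wide (requires cluster-admin)
kubectl apply -f runtimeclasses.yaml
# List the registered runtime classes and their handlers
kubectl get runtimeclasses
```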
To apply a runtime class to a pod, set `.spec.runtimeClassName`:
```yaml
apiVersion: v1
kind: Pod
spec:
  runtimeClassName: crun
```
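As a sanity check, you can read the runtime class back from a running pod; a sketch, assuming a pod named `test-crun` (hypothetical):

```bash
# Print the runtime class recorded in the pod spec
kubectl get pod test-crun -o jsonpath='{.spec.runtimeClassName}'
```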
See also [the Kubernetes documentation](https://kubernetes.io/docs/concepts/containers/runtime-class/).
## Full configuration
The explanation and default value of each configuration item are as follows:
<details>
<p>
```toml
# Use config version 2 to enable new configuration fields.
# Config file is parsed as version 1 by default.
@@ -324,6 +421,9 @@ version = 2
config_path = ""
```
</p>
</details>
## Registry Configuration
Here is a simple example of a default registry hosts configuration. Set
@@ -344,6 +444,18 @@ server = "https://docker.io"
capabilities = ["pull", "resolve"]
```
To specify a custom certificate:
```
$ cat /etc/containerd/certs.d/192.168.12.34:5000/hosts.toml
server = "https://192.168.12.34:5000"

[host."https://192.168.12.34:5000"]
  ca = "/path/to/ca.crt"
```
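A sketch of creating that layout from scratch, reusing the example registry address and CA path from above:

```bash
# Create the per-registry directory under the configured certs.d path
sudo mkdir -p "/etc/containerd/certs.d/192.168.12.34:5000"

# Write the hosts.toml shown above; hosts.toml files are read when
# images are pulled, so no containerd restart should be needed
sudo tee "/etc/containerd/certs.d/192.168.12.34:5000/hosts.toml" <<'EOF'
server = "https://192.168.12.34:5000"

[host."https://192.168.12.34:5000"]
  ca = "/path/to/ca.crt"
EOF
```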
See [`docs/hosts.md`](https://github.com/containerd/containerd/blob/main/docs/hosts.md) for further information.
## Untrusted Workload
The recommended way to run an untrusted workload is to use


@@ -10,6 +10,13 @@ should now use the form
config_path = "/etc/containerd/certs.d"
```
- - -
<!-- TODO: remove in containerd 2.0 -->
<details>
<summary>Show the original content (<strong>DEPRECATED</strong>)</summary>
<p>
## Configure Registry Endpoint
With containerd, `docker.io` is the default image registry. You can also set up other image registries, similar to Docker.
@@ -193,3 +200,6 @@ Image is up to date for sha256:78096d0a54788961ca68393e5f8038704b97d8af374249dc5
---
NOTE: The configuration syntax used in this doc is version 2, which has been recommended since `containerd` 1.3. For the previous config format you can reference [https://github.com/containerd/cri/blob/release/1.2/docs/registry.md](https://github.com/containerd/cri/blob/release/1.2/docs/registry.md).
</p>
</details>


@@ -0,0 +1,26 @@
# Snapshotters
Snapshotters manage the snapshots of the container filesystems.
The available snapshotters can be inspected by running `ctr plugins ls` or `nerdctl info`.
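For example, a quick way to filter the plugin list down to snapshotters (output columns may vary by version):

```bash
# Show each snapshotter plugin and whether it loaded ("ok" in the STATUS column)
ctr plugins ls | grep io.containerd.snapshotter.v1
```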
## Core snapshotter plugins
Generic:
- `overlayfs` (default): OverlayFS. This driver is akin to Docker/Moby's "overlay2" storage driver, but containerd's implementation is not called "overlay2".
- `native`: Native file copying driver. Akin to Docker/Moby's "vfs" driver.
Filesystem-specific:
- `btrfs`: btrfs. Needs the plugin root (`/var/lib/containerd/io.containerd.snapshotter.v1.btrfs`) to be mounted as btrfs.
- `zfs`: ZFS. Needs the plugin root (`/var/lib/containerd/io.containerd.snapshotter.v1.zfs`) to be mounted as ZFS. See also https://github.com/containerd/zfs .
- `devmapper`: ext4/xfs device mapper. See [`devmapper.md`](./devmapper.md).
[Deprecated](https://github.com/containerd/containerd/blob/main/RELEASES.md#deprecated-features):
- `aufs`: AUFS. Deprecated since containerd 1.5. Planned to be removed in containerd 2.0. See also https://github.com/containerd/aufs .
## Non-core snapshotter plugins
- `fuse-overlayfs`: [FUSE-OverlayFS Snapshotter](https://github.com/containerd/fuse-overlayfs-snapshotter)
- `nydus`: [Nydus Snapshotter](https://github.com/containerd/nydus-snapshotter)
- `overlaybd`: [OverlayBD Snapshotter](https://github.com/containerd/accelerated-container-image)
- `stargz`: [Stargz Snapshotter](https://github.com/containerd/stargz-snapshotter)
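A snapshotter can also be selected per operation rather than globally; for example with `ctr` and the built-in `native` snapshotter (any other installed and running snapshotter plugin works the same way):

```bash
# Pull and run an image on a specific snapshotter instead of the default overlayfs
ctr images pull --snapshotter native docker.io/library/hello-world:latest
ctr run --rm --snapshotter native docker.io/library/hello-world:latest demo
```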


@@ -0,0 +1,182 @@
## Devmapper snapshotter
Devmapper is a `containerd` snapshotter plugin that stores snapshots in ext4-formatted filesystem images
in a devicemapper thin pool.
## Setup
To make it work, you need to prepare a `thin-pool` in advance and update containerd's configuration file.
This file is typically located at `/etc/containerd/config.toml`.
Here's a minimal sample entry that can be made in the configuration file:
```toml
version = 2

[plugins]
  ...
  [plugins."io.containerd.snapshotter.v1.devmapper"]
    pool_name = "containerd-pool"
    base_image_size = "8192MB"
  ...
```
The following configuration flags are supported:
* `root_path` - a directory where the metadata will be available (if empty, the
default location for `containerd` plugins will be used)
* `pool_name` - a name to use for the devicemapper thin pool. The pool name
should match the name in the `/dev/mapper/` directory
* `base_image_size` - defines how much space to allocate when creating the base device
* `async_remove` - whether to remove devices asynchronously using the snapshot GC's cleanup callback
* `discard_blocks` - whether to discard blocks when removing a device. This is especially useful for returning disk space to the filesystem when using loopback devices.
* `fs_type` - defines the file system to use for snapshot device mount. Valid values are `ext4` and `xfs`. Defaults to `ext4` if unspecified.
* `fs_options` - optionally defines the file system options. This is currently only applicable to `ext4` file system.
Pool name and base image size are required snapshotter parameters.
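As an illustration, here is a sketch that appends a fuller entry with several of the optional flags set (all values are example choices, not recommendations; adjust them for your setup):

```bash
# Append an example devmapper section to containerd's configuration
sudo tee -a /etc/containerd/config.toml <<'EOF'
[plugins."io.containerd.snapshotter.v1.devmapper"]
  pool_name = "containerd-pool"
  base_image_size = "8192MB"
  fs_type = "ext4"
  async_remove = true
  discard_blocks = true
EOF
```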
## Run
Give it a try with the following commands:
```bash
ctr images pull --snapshotter devmapper docker.io/library/hello-world:latest
ctr run --snapshotter devmapper docker.io/library/hello-world:latest test
```
## Requirements
The devicemapper snapshotter requires the `dmsetup` (>= 1.02.110) command line tool to be installed and
available on your computer. On Ubuntu, it can be installed with the `apt-get install dmsetup` command.
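A quick way to confirm the tool is present and recent enough (the exact output format may differ across distributions):

```bash
# Print the dmsetup library and kernel driver versions (the driver query needs root)
sudo dmsetup version
```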
### How to setup device mapper thin-pool
There are many ways to configure a devmapper thin-pool, depending on your requirements, disk configuration,
and environment.
In a local dev environment you can utilize loopback devices. This type of configuration is simple and suits
development and testing well (please note that this configuration is slow and not recommended for production use).
Run the following script to create a thin-pool device:
```bash
#!/bin/bash
set -ex
DATA_DIR=/var/lib/containerd/devmapper
POOL_NAME=devpool
mkdir -p ${DATA_DIR}
# Create data file
sudo touch "${DATA_DIR}/data"
sudo truncate -s 100G "${DATA_DIR}/data"
# Create metadata file
sudo touch "${DATA_DIR}/meta"
sudo truncate -s 10G "${DATA_DIR}/meta"
# Allocate loop devices
DATA_DEV=$(sudo losetup --find --show "${DATA_DIR}/data")
META_DEV=$(sudo losetup --find --show "${DATA_DIR}/meta")
# Define thin-pool parameters.
# See https://www.kernel.org/doc/Documentation/device-mapper/thin-provisioning.txt for details.
SECTOR_SIZE=512
DATA_SIZE="$(sudo blockdev --getsize64 -q ${DATA_DEV})"
LENGTH_IN_SECTORS=$(bc <<< "${DATA_SIZE}/${SECTOR_SIZE}")
DATA_BLOCK_SIZE=128
LOW_WATER_MARK=32768
# Create a thin-pool device
sudo dmsetup create "${POOL_NAME}" \
--table "0 ${LENGTH_IN_SECTORS} thin-pool ${META_DEV} ${DATA_DEV} ${DATA_BLOCK_SIZE} ${LOW_WATER_MARK}"
cat << EOF
#
# Add this to your config.toml configuration file and restart containerd daemon
#
[plugins]
  [plugins.devmapper]
    pool_name = "${POOL_NAME}"
    root_path = "${DATA_DIR}"
    base_image_size = "10GB"
    discard_blocks = true
EOF
```
Use `dmsetup` to verify that the thin-pool was created successfully:
```bash
sudo dmsetup ls
devpool (253:0)
```
Once you've configured and restarted `containerd`, you'll see output like the following:
```
INFO[2020-03-17T20:24:45.532604888Z] loading plugin "io.containerd.snapshotter.v1.devmapper"... type=io.containerd.snapshotter.v1
INFO[2020-03-17T20:24:45.532672738Z] initializing pool device "devpool"
```
Another way to setup a thin-pool is via [container-storage-setup](https://github.com/projectatomic/container-storage-setup)
tool (formerly known as `docker-storage-setup`). It is a script to configure CoW file systems like devicemapper:
```bash
#!/bin/bash
set -ex
# Block device to use for devmapper thin-pool
BLOCK_DEV=/dev/sdf
POOL_NAME=devpool
VG_NAME=containerd
# Install container-storage-setup tool
git clone https://github.com/projectatomic/container-storage-setup.git
cd container-storage-setup/
sudo make install-core
echo "Using version $(container-storage-setup -v)"
# Create configuration file
# Refer to `man container-storage-setup` to see available options
sudo tee /etc/sysconfig/docker-storage-setup <<EOF
DEVS=${BLOCK_DEV}
VG=${VG_NAME}
CONTAINER_THINPOOL=${POOL_NAME}
EOF
# Run the script
sudo container-storage-setup
cat << EOF
#
# Add this to your config.toml configuration file and restart containerd daemon
#
[plugins]
  [plugins.devmapper]
    pool_name = "${VG_NAME}-${POOL_NAME}"
    base_image_size = "10GB"
EOF
```
If successful, `container-storage-setup` will output:
```
+ echo VG=containerd
+ sudo container-storage-setup
INFO: Volume group backing root filesystem could not be determined
INFO: Writing zeros to first 4MB of device /dev/xvdf
4+0 records in
4+0 records out
4194304 bytes (4.2 MB) copied, 0.0162906 s, 257 MB/s
INFO: Device node /dev/xvdf1 exists.
Physical volume "/dev/xvdf1" successfully created.
Volume group "containerd" successfully created
Rounding up size to full physical extent 12.00 MiB
Thin pool volume with chunk size 512.00 KiB can address at most 126.50 TiB of data.
Logical volume "devpool" created.
Logical volume containerd/devpool changed.
...
```
And `dmsetup` will produce the following output:
```bash
sudo dmsetup ls
containerd-devpool (253:2)
containerd-devpool_tdata (253:1)
containerd-devpool_tmeta (253:0)
```