Background:
With the current design, the content backend uses a key-lock for long-lived
write transactions. If a content reference has already been marked for a
write transaction, other requests on the same reference fail fast with an
unavailable error. Since the metadata plugin is based on boltdb, which
only supports a single writer, the content backend can't block or hold
the request for long. The client has to handle the retry by itself, for
example with the OpenWriter backoff-retry helper, whose maximum retry
interval can be up to 2 seconds. If there are several concurrent requests
for the same image, the waiters may wake up at the same time but only one
of them can continue. The rest go back to sleep, so it takes a long time
to finish all the pull jobs, and it gets worse when the image has many
layers, as described in issue #4937.
After fetching, the containerd.Pull API allows several handlers to commit
a snapshot for the same ChainID, but only one of them can succeed. Since
unpacking a tar.gz layer is a time-consuming job, unpacking the same
ChainID in parallel hurts performance.
For instance, Request 2 below doesn't need to prepare and commit; it
should just wait for Request 1 to finish, as described in pull
request #6318.
```text
Request 1                Request 2

Prepare
   |
   |
   |
   |                     Prepare
Commit                      |
                            |
                            |
                            |
                         Commit(failed on exist)
```
Both the content backoff retry and the unnecessary unpack hurt performance.
Solution:
Introduce duplicate suppression in the fetch and unpack contexts. The
duplicate suppression uses a key-mutex and single-waiter-notify to
provide singleflight semantics. Callers can use the duplicate
suppression in the different PullImage handlers so that we avoid both
the unnecessary unpack and the spin-lock in OpenWriter.
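A minimal sketch of the keyed-lock idea, for illustration only: the type and
method names are assumptions, not the actual containerd implementation, and
it wakes all waiters on release (a simplification of the single-waiter-notify
described above). The point is that waiters block on the key instead of
failing fast and retrying with backoff.
```go
// Package kmutexsketch: a keyed lock where concurrent callers for the
// same key wait for the current holder instead of failing fast.
package kmutexsketch

import (
	"context"
	"sync"
)

// KeyedLock serializes work per key (e.g. a content ref or ChainID).
type KeyedLock struct {
	mu    sync.Mutex
	locks map[string]chan struct{} // closed when the key is released
}

func New() *KeyedLock {
	return &KeyedLock{locks: map[string]chan struct{}{}}
}

// Lock acquires the lock for key, waiting for the current holder if any,
// or returning early if the context is cancelled.
func (l *KeyedLock) Lock(ctx context.Context, key string) error {
	for {
		l.mu.Lock()
		ch, busy := l.locks[key]
		if !busy {
			l.locks[key] = make(chan struct{})
			l.mu.Unlock()
			return nil
		}
		l.mu.Unlock()

		select {
		case <-ch: // holder released the key; try to acquire again
		case <-ctx.Done():
			return ctx.Err()
		}
	}
}

// Unlock releases the key and wakes the waiters.
func (l *KeyedLock) Unlock(key string) {
	l.mu.Lock()
	defer l.mu.Unlock()
	if ch, ok := l.locks[key]; ok {
		delete(l.locks, key)
		close(ch)
	}
}
```
A pull handler would Lock the ref or ChainID, and after acquiring it re-check
whether the content or snapshot already exists before doing any work, so only
the first caller actually fetches or unpacks.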
Test Result:
Before enhancement:
```bash
➜ /tmp sudo bash testing.sh "localhost:5000/redis:latest" 20
crictl pull localhost:5000/redis:latest (x20) takes ...
real 1m6.172s
user 0m0.268s
sys 0m0.193s
docker pull localhost:5000/redis:latest (x20) takes ...
real 0m1.324s
user 0m0.441s
sys 0m0.316s
➜ /tmp sudo bash testing.sh "localhost:5000/golang:latest" 20
crictl pull localhost:5000/golang:latest (x20) takes ...
real 1m47.657s
user 0m0.284s
sys 0m0.224s
docker pull localhost:5000/golang:latest (x20) takes ...
real 0m6.381s
user 0m0.488s
sys 0m0.358s
```
With this enhancement:
```bash
➜ /tmp sudo bash testing.sh "localhost:5000/redis:latest" 20
crictl pull localhost:5000/redis:latest (x20) takes ...
real 0m1.140s
user 0m0.243s
sys 0m0.178s
docker pull localhost:5000/redis:latest (x20) takes ...
real 0m1.239s
user 0m0.463s
sys 0m0.275s
➜ /tmp sudo bash testing.sh "localhost:5000/golang:latest" 20
crictl pull localhost:5000/golang:latest (x20) takes ...
real 0m5.546s
user 0m0.217s
sys 0m0.219s
docker pull localhost:5000/golang:latest (x20) takes ...
real 0m6.090s
user 0m0.501s
sys 0m0.331s
```
Test Script:
localhost:5000/{redis|golang}:latest is the same image as
docker.io/library/{redis|golang}:latest. The images are served by a local
registry started with `docker run -d -p 5000:5000 --name registry registry:2`.
```bash
image_name="${1}"
pull_times="${2:-10}"

cleanup() {
    ctr image rmi "${image_name}"
    ctr -n k8s.io image rmi "${image_name}"
    crictl rmi "${image_name}"
    docker rmi "${image_name}"
    sleep 2
}

crictl_testing() {
    for idx in $(seq 1 ${pull_times}); do
        crictl pull "${image_name}" > /dev/null 2>&1 &
    done
    wait
}

docker_testing() {
    for idx in $(seq 1 ${pull_times}); do
        docker pull "${image_name}" > /dev/null 2>&1 &
    done
    wait
}

cleanup > /dev/null 2>&1

echo 3 > /proc/sys/vm/drop_caches
sleep 3
echo "crictl pull $image_name (x${pull_times}) takes ..."
time crictl_testing

echo
echo 3 > /proc/sys/vm/drop_caches
sleep 3
echo "docker pull $image_name (x${pull_times}) takes ..."
time docker_testing
```
Fixes: #4937
Close: #4985
Close: #6318
Signed-off-by: Wei Fu <fuweid89@gmail.com>
For LCOW we currently copy (or create) the scratch.vhdx for every single snapshot,
so a sandbox.vhdx ends up in every directory seemingly unnecessarily. With the default
scratch size of 20GB the size on disk is about 17MB, so there's a 17MB overhead per
layer plus the time to copy the file with every snapshot. Only the final sandbox.vhdx
is actually used, so this is a nice little optimization.
For WCOW we essentially do the exact same thing, except we copy the blank vhdx from
the base layer.
Signed-off-by: Daniel Canter <dcanter@microsoft.com>
Dependencies may be switching to use the new `%w` formatting
option to wrap errors; switching to use `errors.Is()` makes
sure that we are still able to unwrap the error and detect the
underlying cause.
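A small illustration of the difference; os.ErrNotExist is used here only as a
stand-in for whatever sentinel error the code actually checks:
```go
package main

import (
	"errors"
	"fmt"
	"os"
)

func main() {
	// A dependency wrapping a sentinel error with %w.
	wrapped := fmt.Errorf("open config: %w", os.ErrNotExist)

	// Direct comparison no longer matches once the error is wrapped.
	fmt.Println(wrapped == os.ErrNotExist) // false

	// errors.Is unwraps the chain and still detects the underlying cause.
	fmt.Println(errors.Is(wrapped, os.ErrNotExist)) // true
}
```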
Signed-off-by: Sebastiaan van Stijn <github@gone.nl>
For remote snapshotter cases there is often a need to pass extra info
from the client (for instance, the registry URL to query the remote layer from,
credentials, etc.).
This commit slightly extends WithPullSnapshotter to pass extra labels to a snapshotter.
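A rough usage sketch, assuming the extended option accepts snapshots.Opt values;
the snapshotter name and label keys below are made up for illustration:
```go
package main

import (
	"context"

	"github.com/containerd/containerd"
	"github.com/containerd/containerd/namespaces"
	"github.com/containerd/containerd/snapshots"
)

func pull(ctx context.Context, client *containerd.Client) error {
	ctx = namespaces.WithNamespace(ctx, "default")

	// Extra labels for the remote snapshotter, e.g. where to fetch
	// layer data from. The label keys here are hypothetical.
	_, err := client.Pull(ctx, "registry.example.com/app:latest",
		containerd.WithPullUnpack,
		containerd.WithPullSnapshotter("my-remote-snapshotter",
			snapshots.WithLabels(map[string]string{
				"example.com/registry-url": "https://registry.example.com",
			}),
		),
	)
	return err
}
```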
Signed-off-by: Maksym Pavlenko <makpav@amazon.com>
Though containerd gives the ChainID to backend snapshotters during unpack so
they can search for snapshots whose contents can be skipped from downloading,
the ChainID alone isn't enough for some snapshotters, which require additional
information about the layers.
Some examples are remote snapshotters based on the stargz filesystem
(which need image-related information to query the contents from the docker
registry) and those based on CernVM-FS (which need the manifest digest, etc.
to provide the squashed rootfs).
This commit solves this issue by enabling a handler to inject additional
layer information into snapshotters during unpack.
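A hedged sketch of what such a handler could look like, built on containerd's
images.HandlerFunc: it annotates layer descriptors with extra information that
a snapshotter could then read during unpack. The annotation keys are
hypothetical, and how the labels reach the snapshotter depends on the pull
configuration.
```go
// Package layerinfo sketches an image-handler wrapper that attaches
// extra per-layer information for a remote snapshotter.
package layerinfo

import (
	"context"

	"github.com/containerd/containerd/images"
	ocispec "github.com/opencontainers/image-spec/specs-go/v1"
)

// AnnotateLayers wraps an image handler and adds annotations to every
// layer descriptor it sees. A real remote snapshotter defines its own
// keys and expects them to be forwarded as snapshot labels during unpack.
func AnnotateLayers(imageRef string) func(images.Handler) images.Handler {
	return func(h images.Handler) images.Handler {
		return images.HandlerFunc(func(ctx context.Context, desc ocispec.Descriptor) ([]ocispec.Descriptor, error) {
			children, err := h.Handle(ctx, desc)
			if err != nil {
				return nil, err
			}
			for i := range children {
				if !images.IsLayerType(children[i].MediaType) {
					continue
				}
				if children[i].Annotations == nil {
					children[i].Annotations = map[string]string{}
				}
				// Hypothetical keys the snapshotter would consume.
				children[i].Annotations["example.com/image-ref"] = imageRef
				children[i].Annotations["example.com/layer-digest"] = children[i].Digest.String()
			}
			return children, nil
		})
	}
}
```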
Signed-off-by: Kohei Tokunaga <ktokunaga.mail@gmail.com>
Moves the content fetching into the unpack process
and defers the download until the snapshot needs it
and is ready to apply. As soon as a layer is reached
which requires fetching, all remaining layers are
fetched.
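The control flow described above, as a self-contained sketch; the types and
helpers are stand-ins for the real unpacker, snapshotter, and content store
calls, not containerd's actual API:
```go
package main

import (
	"context"
	"fmt"
)

// Layer stands in for a layer descriptor plus its ChainID.
type Layer struct {
	ChainID string
}

// Stubs standing in for snapshotter Stat, content fetch, and diff apply.
func snapshotExists(ctx context.Context, chainID string) bool { return false }
func fetchAll(ctx context.Context, layers []Layer)            { fmt.Println("fetching", len(layers), "layers") }
func applyAll(ctx context.Context, layers []Layer) error      { return nil }

// unpackLayers sketches the deferred-fetch flow: skip layers whose
// snapshots already exist, and once the first missing layer is reached,
// fetch it and all remaining layers, then apply them in order.
func unpackLayers(ctx context.Context, layers []Layer) error {
	for i, layer := range layers {
		if snapshotExists(ctx, layer.ChainID) {
			continue // already unpacked, nothing to download
		}
		fetchAll(ctx, layers[i:])
		return applyAll(ctx, layers[i:])
	}
	return nil
}

func main() {
	_ = unpackLayers(context.Background(), []Layer{{ChainID: "sha256:abc"}})
}
```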
Signed-off-by: Derek McGowan <derek@mcgstyle.net>
Avoid directly handling media types with "+" attributes,
instead handling the base and passing through the full
media type to the appropriate stream processor or decompression.
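For illustration, a simplified stand-in (not containerd's actual helper) for
splitting a media type into its base and "+" attribute, so dispatch can key on
the base while the full media type is still passed through:
```go
package main

import (
	"fmt"
	"strings"
)

// parseMediaType splits a media type such as
// "application/vnd.oci.image.layer.v1.tar+gzip" into the base type and
// the "+" attribute.
func parseMediaType(mt string) (base, ext string) {
	if i := strings.LastIndex(mt, "+"); i >= 0 {
		return mt[:i], mt[i+1:]
	}
	return mt, ""
}

func main() {
	base, ext := parseMediaType("application/vnd.oci.image.layer.v1.tar+gzip")
	fmt.Println(base) // application/vnd.oci.image.layer.v1.tar
	fmt.Println(ext)  // gzip
}
```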
Signed-off-by: Derek McGowan <derek@mcgstyle.net>