When an upstream client (e.g. kubelet) stops or restarts, the CRI
connection to containerd gets interrupted. This is treated as a context
cancellation, which cancels any ongoing operation, including an image
pull. Containerd's GC routine then tries to delete the prepared
snapshots for the image layer(s) belonging to the cancelled pull.
However, if the upstream client immediately retries (or starts a new)
image pull, containerd initiates a new pull and starts unpacking the
image layers into snapshots. This can create a race: the GC routine
(from the failed pull) may clean up the same snapshot that the new pull
is preparing, leading to the "parent snapshot does not exist: not
found" error.
Race Condition Scenario:
Assume an image consisting of 2 layers (L1 and L2, L1 being the bottom
layer) that are supposed to get unpacked into snapshots S1 and S2
respectively.
During an image pull operation, containerd calls unpack(L1), which
involves Stat()'ing the chainID. The Stat() fails because the chainID
does not exist, so Prepare(L1) gets called. Once S1 is prepared,
containerd processes L2: unpack(L2) again Stat()'s the chainID, which
fails because the chainID for S2 does not exist, resulting in a call to
Prepare(L2). However, if the image pull operation gets cancelled before
Prepare(L2) is called, the GC routine tries to clean up S1.
When the image pull operation is retried by the upstream client,
containerd follows the same series of operations: unpack(L1) calls
Stat(chainID) for L1. This time, however, Stat(L1) succeeds because S1
already exists (from the previous image pull operation), so containerd
moves on to the next iteration, unpack(L2). Now GC cleans up S1, and
when Prepare(L2) gets called, it returns the "parent snapshot does not
exist: not found" error.
Fix:
Removing the "Stat() + early return" fixes the race condition. Now
during the image pull operation corresponding to the client retry,
although the chainID (for L1) already exists, containerd does not
return early and goes on to Prepare(L1). Since L1 is already prepared,
it adds a new lease to S1 and then returns `ErrAlreadyExists`. This
new lease prevents GC from cleaning up S1 when containerd processes
L2 (unpack(L2) -> Prepare(L2)).
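A minimal sketch of the resulting per-layer flow (the names and helper
below are illustrative, not containerd's actual unpacker code):

```go
// Illustrative sketch of the post-fix unpack flow; not containerd's
// actual unpacker implementation.
package unpacksketch

import (
	"context"

	"github.com/containerd/containerd/v2/core/snapshots"
	"github.com/containerd/errdefs"
)

// prepareLayer always calls Prepare instead of Stat()'ing the chainID
// and returning early. If the snapshot already exists, the Prepare call
// adds a new lease to it and ErrAlreadyExists is treated as success,
// so GC cannot remove the snapshot while later layers are unpacked.
func prepareLayer(ctx context.Context, sn snapshots.Snapshotter, chainID, parent string) error {
	_, err := sn.Prepare(ctx, chainID, parent)
	if err == nil || errdefs.IsAlreadyExists(err) {
		return nil
	}
	return err
}
```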
Fixes: https://github.com/containerd/containerd/issues/3787
Signed-off-by: Saket Jajoo <saketjajoo@google.com>
This is needed so we can build the runc shim without grpc as a
transitive dependency.
With this change, the runc shim binary went from 14MB to 11MB.
The RSS of an idle shim went from about 17MB to 14MB (back to around
where it was in 1.7).
Signed-off-by: Brian Goff <cpuguy83@gmail.com>
Core should not have a dependency on API types.
This was causing a transitive dependency on grpc when importing the core
snapshots package.
Signed-off-by: Brian Goff <cpuguy83@gmail.com>
pkg/oci is a general utility package with dependency chains that are
unnecessary for the shim.
The shim only actually used it for a convenience function that reads an
OCI spec file.
Instead of pulling in those deps, re-implement that internally in
the shim command.
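A rough sketch of the kind of helper this amounts to (the readSpec name
and bundle layout here are illustrative, not the exact code):

```go
// Illustrative helper replacing the pkg/oci import in the shim.
package shimspec

import (
	"encoding/json"
	"os"
	"path/filepath"

	specs "github.com/opencontainers/runtime-spec/specs-go"
)

// readSpec reads the bundle's config.json into an OCI runtime spec.
func readSpec(bundle string) (*specs.Spec, error) {
	data, err := os.ReadFile(filepath.Join(bundle, "config.json"))
	if err != nil {
		return nil, err
	}
	var spec specs.Spec
	if err := json.Unmarshal(data, &spec); err != nil {
		return nil, err
	}
	return &spec, nil
}
```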
Signed-off-by: Brian Goff <cpuguy83@gmail.com>
This adds trace context propagation over the grpc/ttrpc calls to a shim.
It also adds the otlp plugin to the runc shim so that it will send
traces to the configured tracer (which is inherited from containerd's
config).
It doesn't look like this adds any real overhead to the runc shim's
memory usage; however, it does add 2MB to the binary size.
As such, this is gated by a build tag, `shim_tracing`.
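For illustration only (the file and package layout are assumed, not
taken from the change itself), the gating uses a standard Go build
constraint:

```go
//go:build shim_tracing

// Files guarded by the shim_tracing constraint (for example, one that
// registers the otlp plugin) are only compiled in when the shim is
// built with the tag:
//
//	go build -tags shim_tracing ./cmd/containerd-shim-runc-v2
package shimtracing
```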
Signed-off-by: Brian Goff <cpuguy83@gmail.com>
However, when an image has multiple tags, the image originally obtained
may not be the one actually specified by the user.
Starting from cri-api v0.28.0, a UserSpecifiedImage field was added to
ImageSpec. It is more appropriate to use UserSpecifiedImage.
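As an illustration (hypothetical helper, not the actual CRI plugin
code), the preference looks like this:

```go
// Illustrative helper: prefer the image reference the user actually
// specified, falling back to the resolved Image field when
// UserSpecifiedImage is unset (e.g. older kubelets).
package imageref

import (
	runtime "k8s.io/cri-api/pkg/apis/runtime/v1"
)

// imageReference returns the name to associate with the container image.
func imageReference(spec *runtime.ImageSpec) string {
	if ref := spec.GetUserSpecifiedImage(); ref != "" {
		return ref
	}
	return spec.GetImage()
}
```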
Signed-off-by: jinda.ljd <jinda.ljd@alibaba-inc.com>
Bumps the golang-x group with 1 update in the / directory: [golang.org/x/mod](https://github.com/golang/mod).
Updates `golang.org/x/mod` from 0.20.0 to 0.21.0
- [Commits](https://github.com/golang/mod/compare/v0.20.0...v0.21.0)
---
updated-dependencies:
- dependency-name: golang.org/x/mod
dependency-type: direct:production
update-type: version-update:semver-minor
dependency-group: golang-x
...
Signed-off-by: dependabot[bot] <support@github.com>
Motivation:
For pod-level user namespaces, it's impossible to force the container runtime
to join an existing network namespace after creating a new user namespace.
According to the capabilities section in [user_namespaces(7)][1], a network
namespace created by containerd is owned by the root user namespace. When the
container runtime (like runc or crun) creates a new user namespace, it becomes
a child of the root user namespace. Processes within this child user namespace
are not permitted to access resources owned by the parent user namespace.
If the network namespace is not owned by the new user namespace, the container
runtime will fail to mount /sys due to the [sysfs: Restrict mounting sysfs][2]
patch.
Referencing the [cap_capable][3] function in Linux, a process can access a
resource if:
* The resource is owned by the process's user namespace, and the process has
the required capability.
* The resource is owned by a child of the process's user namespace, and the
owner's user namespace was created by the process's UID.
In the context of pod-level user namespaces, the CRI plugin delegates the
creation of the network namespace to the container runtime when running the
pause container. After the pause container is initialized, the CRI plugin pins
the pause container's network namespace into `/run/netns` and then executes
the `CNI_ADD` command over it.
However, if the pause container is terminated during the pinning process, the
CRI plugin might encounter a PID cycle, leading to the `CNI_ADD` command
operating on an incorrect network namespace.
Moreover, rolling back the `RunPodSandbox` API is complex due to the delegation
of network namespace creation. As highlighted in issue #10363, the CRI plugin
can lose IP information after a containerd restart, making it challenging to
maintain robustness in the RunPodSandbox API.
Solution:
Allow containerd to create a new user namespace and then create the network
namespace within that user namespace. This way, the CRI plugin can force the
container runtime to join both the user namespace and the network namespace.
Since the network namespace is owned by the newly created user namespace,
the container runtime will have the necessary permissions to mount `/sys` on
the container's root filesystem. As a result, delegation of network namespace
creation is no longer needed.
NOTE:
* The CRI plugin does not need to pin the newly created user namespace as it
does with the network namespace, because the kernel allows retrieving a user
namespace reference via [ioctl_ns(2)][4]. As a result, the podsandbox
implementation can obtain the user namespace using the `netnsPath` parameter.
[1]: <https://man7.org/linux/man-pages/man7/user_namespaces.7.html>
[2]: <7dc5dbc879>
[3]: <2c85ebc57b/security/commoncap.c (L65)>
[4]: <https://man7.org/linux/man-pages/man2/ioctl_ns.2.html>
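A minimal sketch of that retrieval (the helper name is illustrative and
assumes golang.org/x/sys/unix for the ioctl constant):

```go
// Illustrative sketch: obtain the user namespace that owns a pinned
// network namespace via ioctl_ns(2); not the actual implementation.
package usernsutil

import (
	"fmt"
	"os"

	"golang.org/x/sys/unix"
)

// ownerUserNS opens the user namespace owning the namespace pinned at
// nsPath (e.g. a file under /run/netns).
func ownerUserNS(nsPath string) (*os.File, error) {
	ns, err := os.Open(nsPath)
	if err != nil {
		return nil, err
	}
	defer ns.Close()

	// NS_GET_USERNS returns a new fd referring to the owning user namespace.
	fd, err := unix.IoctlRetInt(int(ns.Fd()), unix.NS_GET_USERNS)
	if err != nil {
		return nil, fmt.Errorf("NS_GET_USERNS %s: %w", nsPath, err)
	}
	return os.NewFile(uintptr(fd), "userns"), nil
}
```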
Signed-off-by: Wei Fu <fuweid89@gmail.com>
- https://github.com/golang/go/issues?q=milestone%3AGo1.23.1+label%3ACherryPickApproved
- full diff: https://github.com/golang/go/compare/go1.23.0...go1.23.1
These minor releases include 3 security fixes following the security policy:
- go/parser: stack exhaustion in all Parse* functions
Calling any of the Parse functions on Go source code which contains
deeply nested literals can cause a panic due to stack exhaustion.
This is CVE-2024-34155 and Go issue https://go.dev/issue/69138.
- encoding/gob: stack exhaustion in Decoder.Decode
Calling Decoder.Decode on a message which contains deeply nested
structures can cause a panic due to stack exhaustion.
This is a follow-up to CVE-2022-30635.
Thanks to Md Sakib Anwar of The Ohio State University for reporting
this issue.
This is CVE-2024-34156 and Go issue https://go.dev/issue/69139.
- go/build/constraint: stack exhaustion in Parse
Calling Parse on a "// +build" build tag line with deeply nested
expressions can cause a panic due to stack exhaustion.
This is CVE-2024-34158 and Go issue https://go.dev/issue/69141.
View the release notes for more information:
https://go.dev/doc/devel/release#go1.23.1
Signed-off-by: Sebastiaan van Stijn <github@gone.nl>