However, when an image has multiple tags, the image originally obtained may not be the one actually specified by the user.
Starting from cri-api v0.28.0, a UserSpecifiedImage field has been added to
ImageSpec, so it is more appropriate to use UserSpecifiedImage.
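As a rough illustration (not the exact containerd change), resolving the image reference could prefer the new field when it is set. The helper below is hypothetical and assumes the generated `GetUserSpecifiedImage` accessor that comes with the field in cri-api v0.28.0+:

```go
package example

import (
	runtime "k8s.io/cri-api/pkg/apis/runtime/v1"
)

// imageReference prefers the reference the user actually specified over the
// (possibly multi-tagged) resolved image name.
func imageReference(spec *runtime.ImageSpec) string {
	if spec.GetUserSpecifiedImage() != "" {
		return spec.GetUserSpecifiedImage()
	}
	return spec.GetImage()
}
```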
Signed-off-by: jinda.ljd <jinda.ljd@alibaba-inc.com>
The detached unmount is less likely to fail in our case, but if we do see any
failure to unmount, we should just skip the removal of the directories.
Signed-off-by: Rodrigo Campos <rodrigoca@microsoft.com>
Using os.RemoveAll() is quite risky: if the unmount failed, we could end up
deleting files from the container rootfs. In fact, we were doing just that.
Let's use os.Remove() to make sure we only delete empty dirs.
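A minimal sketch of the resulting cleanup behavior, with a hypothetical helper name (this is not the actual containerd snapshotter code):

```go
package example

import (
	"log"
	"os"
)

// cleanupLayerDir removes the per-layer directory only when the preceding
// unmount succeeded. os.Remove deletes empty directories only, so if a
// mount is somehow still active the call fails instead of recursively
// deleting container rootfs files the way os.RemoveAll would.
func cleanupLayerDir(dir string, unmountErr error) {
	if unmountErr != nil {
		log.Printf("failed to unmount %s, skipping removal: %v", dir, unmountErr)
		return
	}
	if err := os.Remove(dir); err != nil {
		log.Printf("failed to remove %s: %v", dir, err)
	}
}
```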
Big kudos to @mbaynton for reporting this issue with lots of detail,
nailing it down to containerd lines of code and showing all the log
lines to understand the big picture.
Fixes: #10704
Signed-off-by: Rodrigo Campos <rodrigoca@microsoft.com>
Overlayfs needs to do an idmap mount of each layer, and the cleanup
function just unmounts and deletes the directories. However, when the
resource is busy, the umount fails.
Let's make the unmount detached: the unmount call no longer fails while
the resource is busy, and the kernel finishes the unmount once the mount
is no longer in use.
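For illustration, a detached (lazy) unmount with golang.org/x/sys/unix looks like the sketch below; the function name and target path are placeholders, not containerd's code:

```go
package example

import (
	"log"

	"golang.org/x/sys/unix"
)

// detachMount performs a detached (lazy) unmount: MNT_DETACH disconnects the
// mount point immediately and the kernel completes the unmount once the
// filesystem is no longer busy, so the call does not fail with EBUSY while
// the idmapped layer is still in use.
func detachMount(target string) {
	if err := unix.Unmount(target, unix.MNT_DETACH); err != nil {
		log.Printf("detached unmount of %s failed: %v", target, err)
	}
}
```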
Big kudos to @mbaynton for reporting this issue with lots of detail,
nailing it down to containerd lines of code and showing all the log
lines to understand the big picture.
Fixes: #10704
Signed-off-by: Rodrigo Campos <rodrigoca@microsoft.com>
Bumps the golang-x group with 1 update in the / directory: [golang.org/x/mod](https://github.com/golang/mod).
Updates `golang.org/x/mod` from 0.20.0 to 0.21.0
- [Commits](https://github.com/golang/mod/compare/v0.20.0...v0.21.0)
---
updated-dependencies:
- dependency-name: golang.org/x/mod
dependency-type: direct:production
update-type: version-update:semver-minor
dependency-group: golang-x
...
Signed-off-by: dependabot[bot] <support@github.com>
Motivation:
For pod-level user namespaces, it's impossible to force the container runtime
to join an existing network namespace after creating a new user namespace.
According to the capabilities section in [user_namespaces(7)][1], a network
namespace created by containerd is owned by the root user namespace. When the
container runtime (like runc or crun) creates a new user namespace, it becomes
a child of the root user namespace. Processes within this child user namespace
are not permitted to access resources owned by the parent user namespace.
If the network namespace is not owned by the new user namespace, the container
runtime will fail to mount /sys due to the [sysfs: Restrict mounting sysfs][2]
patch.
Referencing the [cap_capable][3] function in Linux, a process can access a
resource if (a toy model follows the list):
* The resource is owned by the process's user namespace, and the process has
the required capability.
* The resource is owned by a child of the process's user namespace, and the
owner's user namespace was created by the process's UID.
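The sketch below is a purely illustrative Go model of those two rules; the real check lives in the kernel's cap_capable and is more involved:

```go
package example

// userNS is a minimal stand-in for a kernel user namespace: its parent and
// the UID that created it.
type userNS struct {
	parent     *userNS
	creatorUID int
}

// mayAccess models the two cases above: walk up from the namespace owning
// the resource; access is allowed if we reach the process's own user
// namespace (and it holds the required capability), or if the owning chain
// passes through a direct child of the process's namespace that was created
// by the process's UID.
func mayAccess(procNS, ownerNS *userNS, procUID int, hasCap bool) bool {
	for ns := ownerNS; ns != nil; ns = ns.parent {
		if ns == procNS {
			return hasCap
		}
		if ns.parent == procNS && ns.creatorUID == procUID {
			return true
		}
	}
	return false
}
```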
In the context of pod-level user namespaces, the CRI plugin delegates the
creation of the network namespace to the container runtime when running the
pause container. After the pause container is initialized, the CRI plugin pins
the pause container's network namespace into `/run/netns` and then executes
the `CNI_ADD` command over it.
However, if the pause container is terminated during the pinning process, the
CRI plugin might encounter a PID cycle, leading to the `CNI_ADD` command
operating on an incorrect network namespace.
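For reference, "pinning" a network namespace as described above is essentially a bind mount of the pause container's netns file onto a file under `/run/netns`. The helper below is a hypothetical sketch, not the CRI plugin's code; it also shows where the PID-reuse race sits:

```go
package example

import (
	"fmt"
	"os"

	"golang.org/x/sys/unix"
)

// pinNetNS bind-mounts /proc/<pid>/ns/net onto a file under /run/netns so
// the namespace stays alive independently of the pause container's process.
// If <pid> dies and is recycled before the bind mount, we can end up pinning
// a different process's network namespace, which is the race described above.
func pinNetNS(pid int, target string) error {
	src := fmt.Sprintf("/proc/%d/ns/net", pid)
	f, err := os.Create(target)
	if err != nil {
		return err
	}
	f.Close()
	return unix.Mount(src, target, "none", unix.MS_BIND, "")
}
```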
Moreover, rolling back the `RunPodSandbox` API is complex due to the delegation
of network namespace creation. As highlighted in issue #10363, the CRI plugin
can lose IP information after a containerd restart, making it challenging to
maintain robustness in the RunPodSandbox API.
Solution:
Allow containerd to create a new user namespace and then create the network
namespace within that user namespace. This way, the CRI plugin can force the
container runtime to join both the user namespace and the network namespace.
Since the network namespace is owned by the newly created user namespace,
the container runtime will have the necessary permissions to mount `/sys` on
the container's root filesystem. As a result, delegation of network namespace
creation is no longer needed.
NOTE:
* The CRI plugin does not need to pin the newly created user namespace as it
does with the network namespace, because the kernel allows retrieving a user
namespace reference via [ioctl_ns(2)][4]. As a result, the podsandbox
implementation can obtain the user namespace using the `netnsPath` parameter.
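A sketch of how that lookup can be done with golang.org/x/sys/unix; the function name is illustrative and `netnsPath` mirrors the parameter mentioned above (this is not the exact containerd code):

```go
package example

import (
	"fmt"
	"os"

	"golang.org/x/sys/unix"
)

// userNSFromNetNS returns a file referring to the user namespace that owns
// the pinned network namespace, using the NS_GET_USERNS ioctl described in
// ioctl_ns(2).
func userNSFromNetNS(netnsPath string) (*os.File, error) {
	netns, err := os.Open(netnsPath)
	if err != nil {
		return nil, err
	}
	defer netns.Close()

	usernsFD, err := unix.IoctlRetInt(int(netns.Fd()), unix.NS_GET_USERNS)
	if err != nil {
		return nil, fmt.Errorf("NS_GET_USERNS on %s: %w", netnsPath, err)
	}
	return os.NewFile(uintptr(usernsFD), "userns"), nil
}
```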
[1]: <https://man7.org/linux/man-pages/man7/user_namespaces.7.html>
[2]: <7dc5dbc879>
[3]: <2c85ebc57b/security/commoncap.c (L65)>
[4]: <https://man7.org/linux/man-pages/man2/ioctl_ns.2.html>
Signed-off-by: Wei Fu <fuweid89@gmail.com>
- https://github.com/golang/go/issues?q=milestone%3AGo1.23.1+label%3ACherryPickApproved
- full diff: https://github.com/golang/go/compare/go1.23.0...go1.23.1
These minor releases include 3 security fixes following the security policy:
- go/parser: stack exhaustion in all Parse* functions
Calling any of the Parse functions on Go source code which contains
deeply nested literals can cause a panic due to stack exhaustion.
This is CVE-2024-34155 and Go issue https://go.dev/issue/69138.
- encoding/gob: stack exhaustion in Decoder.Decode
Calling Decoder.Decode on a message which contains deeply nested
structures can cause a panic due to stack exhaustion.
This is a follow-up to CVE-2022-30635.
Thanks to Md Sakib Anwar of The Ohio State University for reporting
this issue.
This is CVE-2024-34156 and Go issue https://go.dev/issue/69139.
- go/build/constraint: stack exhaustion in Parse
Calling Parse on a "// +build" build tag line with deeply nested
expressions can cause a panic due to stack exhaustion.
This is CVE-2024-34158 and Go issue https://go.dev/issue/69141.
View the release notes for more information:
https://go.dev/doc/devel/release#go1.23.1
Signed-off-by: Sebastiaan van Stijn <github@gone.nl>
This issue was caused by a race between init exits and new exec process
tracking inside the shim. The test operates by controlling the time
between when the shim invokes "runc exec" and when the actual "runc exec"
is triggered. This allows validating that there are no races in the shim's
state tracking between pre- and post-start of the exec process.
Relates to https://github.com/containerd/containerd/issues/10589
Signed-off-by: Samuel Karp <samuelkarp@google.com>
This commit rewrites and simplifies a lot of this logic to reduce its
complexity, and also handles the case where the container doesn't have
its own pid namespace, which means that we're not guaranteed to receive
the init exit last.
This is achieved by replacing `s.pendingExecs` with `s.runningExecs`,
which counts both (previously) pending and de facto running execs.
The new exit handling logic can be summed up as follows (see the sketch
after the list):
- when we receive an init exit, stash it in `s.containerInitExit`,
and if a container's init process has exited, refuse new execs.
- (if the container does not have its own pidns) kill all running
processes (if the container has a private pid-namespace, then all
processes will be dead already).
- wait for the container's running exec count (which includes execs
which have been started but might still early exit) to get to 0.
- publish the stashed away init exit.
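A condensed sketch of that flow is below. The names `containerInitExit` and `runningExecs` come from the description above; everything else (`hasOwnPidNS`, `killRunningExecs`, `publishExit`, the event type) is illustrative rather than the shim's actual API:

```go
package example

import "sync"

type exitEvent struct{ pid, status int }

type container struct {
	id           string
	hasOwnPidNS  bool
	runningExecs sync.WaitGroup // incremented for every exec that has started
}

type service struct {
	mu                sync.Mutex
	containerInitExit map[string]exitEvent // stashed init exits; new execs are refused once set
}

func (s *service) handleInitExit(c *container, e exitEvent) {
	s.mu.Lock()
	s.containerInitExit[c.id] = e // exec creation checks this and refuses new execs
	s.mu.Unlock()

	go func() {
		if !c.hasOwnPidNS {
			// Without a private pid namespace the init exit doesn't take the
			// other processes with it, so kill any execs still running.
			killRunningExecs(c)
		}
		// Wait for every started exec (including early exits) to be accounted for.
		c.runningExecs.Wait()
		publishExit(c, e) // finally surface the stashed init exit
	}()
}

// killRunningExecs and publishExit stand in for the shim's real logic.
func killRunningExecs(c *container)         {}
func publishExit(c *container, e exitEvent) {}
```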
Signed-off-by: Laura Brehm <laurabrehm@hey.com>