`docker.Authorizer` requires library clients to configure scope via context.
It is helpful for clients to be able to use the (currently private) helper
function for generating the scope string, in combination with the other
scope-related helpers (e.g. `docker.WithScope`).
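For illustration, a minimal sketch of how a client might combine the exported helpers; the names `RepositoryScope` and `WithScope` and their signatures are assumptions about the exported API and may differ by containerd version:

```go
// Hypothetical usage once the helpers are exported.
refspec, err := reference.Parse("docker.io/library/busybox:latest")
if err != nil {
    return err
}
scope, err := docker.RepositoryScope(refspec, false) // pull-only scope
if err != nil {
    return err
}
// Attach the scope so the Authorizer can request a matching token.
ctx = docker.WithScope(ctx, scope)
```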
Signed-off-by: Kohei Tokunaga <ktokunaga.mail@gmail.com>
This accomplishes a few long-standing TODO items, and also helps users
by surfacing exact registry error messages.
Signed-off-by: Ilya Dmitrichenko <errordeveloper@gmail.com>
Proxy registries are designed to serve content from upstreams.
However, the proxy hostname will usually not match the hostname
of the upstream, requiring the proxy to only use a single
upstream or use its own pattern matching to determine the upstream.
To solve this issue, the client will pass along the namespace which
is being used for the request, allowing mirrors to easily map
to multiple upstreams. This query parameter can safely be ignored
if multiple upstreams are not supported.
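As a sketch, a request to a mirror can carry the upstream namespace as a query parameter (containerd uses `ns` for this); the mirror URL below is a placeholder:

```go
// Build a manifest request against the mirror, annotated with the
// upstream namespace so the mirror can map it to the right registry.
u, err := url.Parse("https://mirror.example.com/v2/library/busybox/manifests/latest")
if err != nil {
    return err
}
q := u.Query()
q.Set("ns", "docker.io") // namespace of the upstream being proxied
u.RawQuery = q.Encode()
req, err := http.NewRequestWithContext(ctx, http.MethodGet, u.String(), nil)
```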
Signed-off-by: Derek McGowan <derek@mcg.dev>
The Authorizer interface can't really be implemented because
scopes are passed in on a side channel, via a private value in the context.
Signed-off-by: Tonis Tiigi <tonistiigi@gmail.com>
When a server is specified at the top level, there is a bug
that prevents the keys from being checked properly.
When no server is provided, the parser attempts to parse
with an empty host, leaving partial values and a defaulted
skip-verify configuration.
Signed-off-by: Derek McGowan <derek@mcg.dev>
The `DualStack` option was deprecated in Go 1.12, and is now enabled by default
(through commit github.com/golang/go@efc185029bf770894defe63cec2c72a4c84b2ee9).
> The Dialer.DualStack field is now meaningless and documented as deprecated.
>
> To disable fallback, set FallbackDelay to a negative value.
The default `FallbackDelay` is 300ms; to make this more explicit, this patch
sets `FallbackDelay` to the default value.
Note that Docker Hub currently does not support IPv6 (DNS for registry-1.docker.io
has no AAAA records, so we should not hit the 300ms delay).
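A minimal sketch of the dialer configuration this implies; the timeout and keep-alive values shown are illustrative, not necessarily those used in the patch:

```go
dialer := &net.Dialer{
    Timeout:   30 * time.Second,
    KeepAlive: 30 * time.Second,
    // 300ms is Go's documented default for Happy Eyeballs fallback; set it
    // explicitly here. A negative value would disable the fallback.
    FallbackDelay: 300 * time.Millisecond,
}
```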
Signed-off-by: Sebastiaan van Stijn <github@gone.nl>
Dependencies may be switching to use the new `%w` formatting
option to wrap errors; switching to use `errors.Is()` makes
sure that we are still able to unwrap the error and detect the
underlying cause.
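A sketch of the pattern (the sentinel `context.Canceled` is only an example):

```go
// Before: direct comparison misses errors wrapped with %w.
if err == context.Canceled {
    // handle cancellation
}

// After: errors.Is unwraps the chain and still detects the cause.
if errors.Is(err, context.Canceled) {
    // handle cancellation
}
```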
Signed-off-by: Sebastiaan van Stijn <github@gone.nl>
If there is no specific host config, as is the case with ctr, the resolver will
fail to get the host path. This patch adds a default host config when
needed.
The default host config should have all capabilities for pull and push, as
sketched below.
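For illustration, a default host entry with full capabilities could look like this; field and constant names follow `remotes/docker` and may vary by version:

```go
host := docker.RegistryHost{
    Host:   "registry-1.docker.io",
    Scheme: "https",
    Path:   "/v2",
    Capabilities: docker.HostCapabilityPull |
        docker.HostCapabilityResolve |
        docker.HostCapabilityPush,
}
```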
Signed-off-by: Wei Fu <fuweid89@gmail.com>
Move registry host configuration to the config package
and support loading configurations from a directory
when the hosts are being resolved.
Signed-off-by: Derek McGowan <derek@mcgstyle.net>
Add a TOML configuration file format and a configuration
function to configure registry hosts from a directory-based
configuration. Compatible with Docker registry
certificate loading.
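A hedged sketch of wiring the directory-based configuration into a resolver; the helper names follow `remotes/docker/config` and the certs directory is a placeholder:

```go
hostOptions := config.HostOptions{
    // Look up per-host directories such as <dir>/<hostname>/hosts.toml.
    HostDir: config.HostDirFromRoot("/etc/containerd/certs.d"),
}
resolver := docker.NewResolver(docker.ResolverOptions{
    Hosts: config.ConfigureHosts(ctx, hostOptions),
})
```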
Signed-off-by: Derek McGowan <derek@mcgstyle.net>
Add `remotes/certutil` functions for loading `ca.crt`, `client.cert`, and `client.key` into `tls.Config` from a directory like `/etc/docker/certs.d/<hostname>`.
See https://docs.docker.com/engine/security/certificates/ .
Client applications including CRI plugin are expected to configure the resolver using these functions.
As an example, the `ctr` tool is extended to support `ctr images pull --certs-dir=/etc/docker/certs.d example.com/foo/bar:baz`.
Tested with Harbor 1.8.
Signed-off-by: Akihiro Suda <akihiro.suda.cz@hco.ntt.co.jp>
This commit improves the fallback behaviour when resolving and
fetching images with multiple hosts. If an error is encountered
when resolving and fetching images, and more than one host is being
used, we will try the same operation on the next host. The error
from the first host is preserved so that if all hosts fail, we can
display the error from the first host.
Fixes #3850
Signed-off-by: Alex Price <aprice@atlassian.com>
Registries may allow using token authorization without
explicitly setting the scope. This may cover use cases where
no scope is required for an endpoint or the registry is only
covering authentication using the token. This aligns with the
OAuth 2.0 spec, which specifies the scope as optional.
Signed-off-by: Derek McGowan <derek@mcgstyle.net>
Avoid directly handling media types with "+" attributes,
instead handling the base and passing through the full
media type to the appropriate stream processor or decompression.
Signed-off-by: Derek McGowan <derek@mcgstyle.net>
Currently auth.docker.io uses a custom auth scope for (docker) plugins
`repository(plugin):<repo>:<perms>`.
This makes it impossible to use containerd distribution tooling to fetch
plugins without also supplying a totally custom authorizer.
This change allows clients to set the correct scope on the context.
It's a little bit nasty but "works".
I'm also a bit suspicious of a couple of these unexported context
functions. Before this change, the primary one used, `contextWithRepositoryScope`,
overwrites any existing scope value, while another one appends to the
scope value.
With this change they both append...
Signed-off-by: Brian Goff <cpuguy83@gmail.com>
Adds a method for setting a custom media type key prefix used by the
fetch handler.
This allows both overwriting a built-in prefix (for reasons?) and
supplying a custom media type.
I added this because I was getting an error in `FetchHandler` when
pulling Docker plugin images, which have their own media type.
Signed-off-by: Brian Goff <cpuguy83@gmail.com>
When pushing a manifest list, all manifests should be pushed by digest
and only the final manifest pushed by tag. The Pusher was preventing
this by mistakenly disallowing objects to contain a digest. When objects
have a digest, only push tags associated with that digest.
Signed-off-by: Derek McGowan <derek@mcgstyle.net>
Go documentation says
`Use of GetBody still requires setting Body`.
This change ensures the body is always set in
addition to GetBody. This fixes a bug where
sometimes the body is nil.
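A minimal sketch of the pattern: set both `Body` and `GetBody` so the request can be replayed, for example after an auth challenge or redirect:

```go
data := buf.Bytes() // request payload (illustrative)
req.Body = io.NopCloser(bytes.NewReader(data))
req.GetBody = func() (io.ReadCloser, error) {
    return io.NopCloser(bytes.NewReader(data)), nil
}
```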
Signed-off-by: Derek McGowan <derek@mcgstyle.net>
Adds support for registry mirrors
Adds support for multiple pull endpoints
Adds capabilities to limit trust in public mirrors
Fixes missing user agent header
Signed-off-by: Derek McGowan <derek@mcgstyle.net>
Retry 5 times in case of StatusRequestTimeout or StatusTooManyRequests.
This fixes issue #2680 ("Make content fetch retry more robust").
Signed-off-by: Konstantin Maksimov <kmaksimov@gmail.com>
Using the distribution source label in the content store, select the longest
common prefix components as the candidate mount blob source and try to push
with a blob mount.
Fixes #2964
Signed-off-by: Wei Fu <fuweid89@gmail.com>
Currently the user agent is only being used on the initial
resolve request, then switching to the default user agent.
This ensures the correct user agent is always used. There is
a larger fix in progress which does this in a cleaner way, but
the scope of this change is fixing the user agent issue.
Signed-off-by: Derek McGowan <derek@mcgstyle.net>
Docker registries return errors in a known format, so this change now checks for these
errors and returns the message field. If the error is not in the expected format, fall
back to the original behaviour.
https://github.com/containerd/containerd/issues/3076
Signed-off-by: Jack Baines <jack.baines@uk.ibm.com>
We can use the cross repository push feature to reuse existing blobs in
the same registry. Before we can make push fast, we need to know where the
blob comes from.
Use `containerd.io/distribution.source.<registry> = <repo>[,<repo>]` as the
label format. For example, if the blob was downloaded via
docker.io/library/busybox:latest, the label will be
containerd.io/distribution.source.docker.io = library/busybox
If the blob is shared by different repos in the same registry, the repo
name will be appended, like:
containerd.io/distribution.source.docker.io = library/busybox,x/y
NOTE:
1. No need to apply this for legacy Docker image schema 1.
2. Concurrent fetch actions might miss some repo names in the label, but
that is OK.
3. It is optional; there is no need to add the label if the engine only
pulls images and does not push.
Signed-off-by: Wei Fu <fuweid89@gmail.com>
Gives clients more control of the pull process, allowing
the client to operate on a descriptor after it has been
pulled. This could be useful for filtering output or
tracking children before they are dispatched to. This can
also be used to call custom unpackers to have visibility
into a pulled config in parallel to the downloads.
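As a sketch, such a hook can be expressed as a wrapper around an `images.Handler` using the `images.HandlerFunc` adapter; whether the client option looks exactly like this is an assumption:

```go
wrap := func(h images.Handler) images.Handler {
    return images.HandlerFunc(func(ctx context.Context, desc ocispec.Descriptor) ([]ocispec.Descriptor, error) {
        children, err := h.Handle(ctx, desc)
        // Inspect desc or filter children here before they are dispatched
        // to, e.g. to track progress or start a custom unpacker when the
        // image config is seen.
        return children, err
    })
}
```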
Signed-off-by: Derek McGowan <derek@mcgstyle.net>
megacheck, gosimple and unused have been deprecated and subsumed by
staticcheck. staticcheck has also been upgraded, so we need to update the
code for the new linter issues.
Closes #2945
Signed-off-by: Wei Fu <fuweid89@gmail.com>
Even though the application/octet-stream issue has been fixed in Docker,
there are lots of images which contain the invalid media type.
In order to pull those images, the containerd client modifies the
manifest content before inserting or updating the image reference.
Signed-off-by: Wei Fu <fuweid89@gmail.com>
containerd should cache the empty label for Docker schema 1 images.
If not, an originally empty layer will become a non-empty layer and the
image config will change too; in that case, the image ID changes.
Check the blob's empty label to avoid changing the image ID when re-pulling
a Docker schema 1 image.
Signed-off-by: Wei Fu <fuweid89@gmail.com>
Adds a new platform interface for matching and comparing platforms.
This new interface allows both filtering and ordering of platforms
to support running multiple platforms and choosing the best one.
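Sketch of the matcher/comparer shape this describes; the names follow the `platforms` package and should be treated as illustrative rather than verbatim from the patch:

```go
type Matcher interface {
    Match(platform specs.Platform) bool
}

type MatchComparer interface {
    Matcher
    // Less reports whether platform a is preferred over b, allowing
    // candidates to be ordered and the best match selected.
    Less(a, b specs.Platform) bool
}
```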
Signed-off-by: Derek McGowan <derek@mcgstyle.net>
This change allows implementations to resolve the location of the actual data
using OCI descriptor fields such as MediaType.
No OCI descriptor field is written to the store.
No change on gRPC API.
Signed-off-by: Akihiro Suda <suda.akihiro@lab.ntt.co.jp>
Updates the blob writer helper to use the new open and to ensure
unavailable errors are always handled.
Removes duplication of the unavailable-handling code.
Signed-off-by: Derek McGowan <derek@mcgstyle.net>
This fix adds support for image registries that expect authentication for POST /v2/token, as is already used for GET. For example, JFrog Artifactory has been observed to respond with a 401 (Unauthorized) in that case. Handling 401 in addition to the current handling of 405 and 404 in the resolver solves the authentication problem and enables image pulls from Artifactory.
Signed-off-by: Ruediger Maass <ruediger.maass@de.ibm.com>
Fix issue where manifest content must always be fetched
even if it is already fully downloaded or shared locally.
Simplify children label setting and platform filtering.
Prevent getting a fetcher when content is shared locally.
Signed-off-by: Derek McGowan <derek@mcgstyle.net>
- Use lease API (previously, GC was not supported)
- Refactored interfaces for ease of future Docker v1 importer support
For usage, please refer to `ctr images import --help`.
Signed-off-by: Akihiro Suda <suda.akihiro@lab.ntt.co.jp>
Schema1 manifests did not set a size in the digest for the blobs,
breaking the expectations of the http seeking reader. Now
the http seeker has been updated to support unknown size as a
value of negative 1 and the schema1 puller sets the unknown size
accordingly.
Signed-off-by: Derek McGowan <derek@mcgstyle.net>
Currently pushing a new tag to a manifest which already
exists in the registry skips the tag push because it
only checks that the manifest exists. This updates the
logic to instead check if the tag exists and is at the
same digest.
Signed-off-by: Derek McGowan <derek@mcgstyle.net>
To support resumable download, the fetcher for a remote must implement
`io.Seeker`. If implemented the `content.Copy` function will detect the
seeker and begin from where the download was terminated by a previous
attempt.
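A minimal sketch of the detection, assuming an open reader `rc` and a resume `offset` left by a previous attempt:

```go
if seeker, ok := rc.(io.Seeker); ok && offset > 0 {
    // Skip the bytes that were already written by the previous attempt.
    if _, err := seeker.Seek(offset, io.SeekStart); err != nil {
        return err
    }
}
```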
Signed-off-by: Stephen J Day <stephen.day@docker.com>
To allow concurrent pull of images of the v1 persuasion, we need to
backoff when multiple pullers are trying to operate on the same
resource. The back off logic is ported to v1 pull to match the behavior
for other images.
A little randomness is also added to the backoff to prevent thundering
herd and to reduce expected recovery time.
Signed-off-by: Stephen J Day <stephen.day@docker.com>
Prevents a server from sending a large response causing containerd to
allocate too much RAM and potentially OOM.
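A sketch of the guard, capping how much of a response body may be read; the limit shown is an assumption, not the value from the patch:

```go
const maxRespBody = 8 << 20 // 8 MiB, illustrative
limited := io.LimitReader(resp.Body, maxRespBody+1)
data, err := io.ReadAll(limited)
if err != nil {
    return err
}
if int64(len(data)) > maxRespBody {
    return fmt.Errorf("response body exceeds %d bytes", maxRespBody)
}
```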
Signed-off-by: Brian Goff <cpuguy83@gmail.com>
Add support for downloading layers with external URLs and
foreign/non-distributable media types. This ensures that encountered
Windows images are downloaded correctly. We still need to filter out the
extra Windows resources when pulling Linux, but this is a step towards
correctly supporting multi-platform images.
Signed-off-by: Stephen J Day <stephen.day@docker.com>
Content commit is updated to take in a context, allowing
content to be committed within the same context the writer
was in. This is useful when commit may be able to use more
context to complete the action rather than creating its own.
An example of this being useful is for the metadata implementation
of content, having a context allows tests to fully create
content in one database transaction by making use of the context.
Signed-off-by: Derek McGowan <derek@mcgstyle.net>
Support registries returning 204 or 200 in place of 201/202.
Ensure body is closed when request is retried.
Signed-off-by: Derek McGowan <derek@mcgstyle.net>
Add commit options which allow for setting labels on commit.
Prevents potential race between garbage collector reading labels
after commit and labels getting set.
Signed-off-by: Derek McGowan <derek@mcgstyle.net>
After some analysis, it was found that Content.Reader was generally
redundant to an io.ReaderAt. This change removes `Content.Reader` in
favor of a `Content.ReaderAt`. In general, `ReaderAt` can perform better
over interfaces with indeterminate latency because it avoids remote
state for reads. Where a reader is required, a helper is provided to
convert it into an `io.SectionReader`.
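Where sequential access is still needed, the conversion is essentially the standard library's section reader; a sketch, noting that the helper added by the patch may wrap this:

```go
// ra is a content ReaderAt, size is the blob size from its descriptor.
r := io.NewSectionReader(ra, 0, size)
if _, err := io.Copy(dst, r); err != nil {
    return err
}
```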
Signed-off-by: Stephen J Day <stephen.day@docker.com>
Now that we have most of the services required for use with containerd,
it was found that common patterns were used throughout services. By
defining a central `errdefs` package, we ensure that services will map
errors to and from grpc consistently and cleanly. One can decorate an
error with as much context as necessary, using `pkg/errors` and still
have the error mapped correctly via grpc.
We make a few sacrifices. At this point, the common errors we use across
the repository all map directly to grpc error codes. While this seems
positively crazy, it actually works out quite well. The error conditions
that were specific weren't super necessary and the ones that were
necessary now simply have better context information. We lose the
ability to add new codes, but this constraint may not be a bad thing.
Effectively, as long as one uses the errors defined in `errdefs`, the
error class will be mapped correctly across the grpc boundary and
everything will be good. If you don't use those definitions, the error
maps to "unknown" and the error message is preserved.
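A sketch of the pattern described above, decorating an `errdefs` sentinel with `pkg/errors` and checking the class on the caller's side; the image lookup is only an example:

```go
// Service side: wrap the sentinel with as much context as needed.
if img == nil {
    return errors.Wrapf(errdefs.ErrNotFound, "image %q", name)
}

// Client side: the error class survives the gRPC round trip.
if errdefs.IsNotFound(err) {
    // handle the missing image
}
```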
Signed-off-by: Stephen J Day <stephen.day@docker.com>
This PR ensures that we can pull images with manifest lists, aka OCI
indexes. After this change, when pulling such an image, the resources
will all be available for creating the image.
Further support is required to do platform based selection for rootfs
creation, so such images may not yet be runnable. This is mostly useful
for checkpoint transfers, which use an OCI index for assembling the
component set.
Signed-off-by: Stephen J Day <stephen.day@docker.com>
These interfaces allow us to preserve both the checking of error "cause"
as well as messages returned from the gRPC API so that the client gets
full error reason instead of a default "metadata: not found" in the case
of a missing image.
Signed-off-by: Phil Estes <estesp@linux.vnet.ibm.com>
The size and throwaway fields in the history can both be
omitted, making the emptiness of a layer ambiguous. In these
cases, download the layer and check whether the content is empty.
Signed-off-by: Derek McGowan <derek@mcgstyle.net>
Updates content service to handle lock errors and return
them to the client. The client remote handler has been
updated to retry when a resource is locked until the
resource is unlocked or the expected resource exists.
Signed-off-by: Derek McGowan <derek@mcgstyle.net>