The local store could end up in a state where the writer is
closed but the reference remains locked after a commit on an
existing object.
Cleans up Commit logic to always close the writer even after
an error occurs, guaranteeing the reference is unlocked after commit.
Adds a test to the content test suite to verify this behavior.
Updates the content store interface definitions to clarify the behavior.
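Roughly, the pattern being enforced looks like the following sketch
(simplified, hypothetical names; not the actual containerd code):
```
package content

import (
	"context"
	"fmt"
	"os"
	"sync"
)

// refLocks stands in for the store's per-reference locks.
var refLocks sync.Map

func unlock(ref string) { refLocks.Delete(ref) }

// localWriter is a trimmed-down stand-in for the local store's writer.
type localWriter struct {
	ref  string
	file *os.File
}

// Commit illustrates the fix: cleanup runs on every return path, so an
// error (such as committing an object that already exists) no longer
// leaves the reference locked.
func (w *localWriter) Commit(ctx context.Context) error {
	defer func() {
		w.file.Close() // always close the writer...
		unlock(w.ref)  // ...and release the reference lock
	}()

	if err := finalize(w); err != nil {
		return fmt.Errorf("commit failed: %w", err)
	}
	return nil
}

// finalize is a placeholder for the digest verification and the rename
// into the blob directory.
func finalize(w *localWriter) error { return nil }
```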
Signed-off-by: Derek McGowan <derek@mcgstyle.net>
This change allows implementations to resolve the location of the actual data
using OCI descriptor fields such as MediaType.
No OCI descriptor field is written to the store.
No change to the gRPC API.
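For illustration, an implementation might branch on the descriptor like
this (hypothetical provider, illustrative scheme strings; only the
digest-addressed bytes live in the store):
```
package example

import (
	"context"

	ocispec "github.com/opencontainers/image-spec/specs-go/v1"
)

// remoteProvider sketches an implementation that uses descriptor fields
// such as MediaType to decide where the bytes actually live. The
// descriptor itself is never persisted to the store.
type remoteProvider struct{}

func (p *remoteProvider) resolve(ctx context.Context, desc ocispec.Descriptor) string {
	switch desc.MediaType {
	case ocispec.MediaTypeImageLayerGzip:
		// e.g. serve compressed layers lazily from a remote blob service
		return "remote://" + desc.Digest.String()
	default:
		// everything else is read from the local blob directory
		return "local://" + desc.Digest.String()
	}
}
```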
Signed-off-by: Akihiro Suda <suda.akihiro@lab.ntt.co.jp>
Content commit is updated to take in a context, allowing
content to be committed within the same context the writer
was in. This is useful when commit may be able to use more
context to complete the action rather than creating its own.
An example of where this is useful is the metadata implementation
of content: having a context allows tests to fully create
content in one database transaction by making use of the context.
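The shape of the change, as a sketch (illustrative, not the exact
signature):
```
package content

import (
	"context"
	"io"

	"github.com/opencontainers/go-digest"
)

// Writer sketches the interface after this change: Commit receives the
// caller's context instead of creating its own, so the metadata
// implementation can commit content inside the same database
// transaction carried by ctx.
type Writer interface {
	io.WriteCloser

	// Commit finalizes the write, verifying it against the expected
	// size and digest, within the caller's context.
	Commit(ctx context.Context, size int64, expected digest.Digest) error
}
```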
Signed-off-by: Derek McGowan <derek@mcgstyle.net>
Add commit options which allow for setting labels on commit.
Prevents potential race between garbage collector reading labels
after commit and labels getting set.
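As a sketch of the option shape (trimmed-down types, not necessarily
the exact containerd definitions):
```
package content

// Info is a trimmed-down stand-in for the metadata recorded for a blob.
type Info struct {
	Labels map[string]string
}

// Opt mutates the Info recorded at commit time.
type Opt func(info *Info) error

// WithLabels applies labels atomically with the commit itself, so the
// garbage collector can never observe the blob before its labels are set.
func WithLabels(labels map[string]string) Opt {
	return func(info *Info) error {
		info.Labels = labels
		return nil
	}
}
```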
Signed-off-by: Derek McGowan <derek@mcgstyle.net>
After some analysis, it was found that Content.Reader was generally
redundant to an io.ReaderAt. This change removes `Content.Reader` in
favor of a `Content.ReaderAt`. In general, `ReaderAt` can perform better
over interfaces with indeterminate latency because it avoids remote
state for reads. Where a reader is required, a helper is provided to
convert it into an `io.SectionReader`.
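Sketch of that helper, assuming a ReaderAt that also reports its size:
```
package content

import "io"

// ReaderAt combines random-access reads with the blob's size, which is
// all a sequential consumer needs.
type ReaderAt interface {
	io.ReaderAt
	io.Closer
	Size() int64
}

// NewSectionReader shows the conversion described above: when a plain
// reader is still required, wrap the ReaderAt in an io.SectionReader
// spanning the whole blob.
func NewSectionReader(ra ReaderAt) *io.SectionReader {
	return io.NewSectionReader(ra, 0, ra.Size())
}
```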
Signed-off-by: Stephen J Day <stephen.day@docker.com>
Export as a tar (Note: "-" can be used for stdout):
$ ctr images export /tmp/oci-busybox.tar docker.io/library/busybox:latest
Import a tar (Note: "-" can be used for stdin):
$ ctr images import foo/new:latest /tmp/oci-busybox.tar
Note: media types are not converted at the moment: e.g.
application/vnd.docker.image.rootfs.diff.tar.gzip
-> application/vnd.oci.image.layer.v1.tar+gzip
Signed-off-by: Akihiro Suda <suda.akihiro@lab.ntt.co.jp>
Update list content command to support filters
Add label subcommand to content in dist tool to update labels
Add uncompressed label on unpack
Signed-off-by: Derek McGowan <derek@mcgstyle.net>
Move content status to list statuses and add single status
to interface.
Updates API to support list statuses and status
Updates snapshot key creation to be generic
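The split looks roughly like this (trimmed-down types; illustrative
signatures):
```
package content

import "context"

// Status is a trimmed-down view of one in-flight ingest.
type Status struct {
	Ref    string
	Offset int64
	Total  int64
}

// IngestManager sketches the two calls: a single Status lookup by ref,
// and ListStatuses for walking all active ingests, optionally narrowed
// by filters.
type IngestManager interface {
	Status(ctx context.Context, ref string) (Status, error)
	ListStatuses(ctx context.Context, filters ...string) ([]Status, error)
}
```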
Signed-off-by: Derek McGowan <derek@mcgstyle.net>
These interfaces allow us to preserve both the checking of the error "cause"
and the messages returned from the gRPC API so that the client gets
full error reason instead of a default "metadata: not found" in the case
of a missing image.
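The general technique, sketched here with modern error wrapping rather
than the exact interfaces used in this change: map the typed cause onto
a gRPC code on the way out, and on the way back in rebuild an error that
satisfies the cause check while carrying the server's message.
```
package errutil

import (
	"errors"

	"google.golang.org/grpc/codes"
	"google.golang.org/grpc/status"
)

// ErrNotFound is the sentinel "cause" that callers test against.
var ErrNotFound = errors.New("not found")

// notFoundError keeps the server's full message while still matching
// errors.Is(err, ErrNotFound).
type notFoundError struct{ msg string }

func (e *notFoundError) Error() string        { return e.msg }
func (e *notFoundError) Is(target error) bool { return target == ErrNotFound }

// toGRPC encodes the cause as a status code without losing the message.
func toGRPC(err error) error {
	if errors.Is(err, ErrNotFound) {
		return status.Error(codes.NotFound, err.Error())
	}
	return err
}

// fromGRPC restores both: the client sees the original reason, e.g.
// `image "foo": not found`, instead of a generic "metadata: not found".
func fromGRPC(err error) error {
	if status.Code(err) == codes.NotFound {
		return &notFoundError{msg: status.Convert(err).Message()}
	}
	return err
}
```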
Signed-off-by: Phil Estes <estesp@linux.vnet.ibm.com>
Updates content service to handle lock errors and return
them to the client. The client remote handler has been
updated to retry when a resource is locked until the
resource is unlocked or the expected resource exists.
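Client-side, the retry amounts to something like this sketch
(hypothetical names and backoff interval):
```
package remote

import (
	"context"
	"errors"
	"time"
)

// errLocked stands in for the lock error surfaced by the content
// service while another writer holds the ingest reference.
var errLocked = errors.New("resource locked")

// openWriterWithRetry sketches the handler behavior: when the ref is
// locked, back off and retry until the lock is released, the expected
// resource exists, or the context is cancelled.
func openWriterWithRetry(ctx context.Context, open func(context.Context) error) error {
	for {
		err := open(ctx)
		if err == nil || !errors.Is(err, errLocked) {
			return err
		}
		select {
		case <-time.After(20 * time.Millisecond):
		case <-ctx.Done():
			return ctx.Err()
		}
	}
}
```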
Signed-off-by: Derek McGowan <derek@mcgstyle.net>
The split between provider and ingester was a long standing division
reflecting the client-side use cases. For the most part, we were
differentiating these for the algorithms that operate on them, but it made
instantiation and use of the types challenging. On the server-side, this
distinction is generally less important. This change unifies these types
and in the process we get a few benefits.
The first is that we now completely access the content store over GRPC.
This was the initial intent and we have now satisfied this goal
completely. There are a few issues around listing content and getting
status, but we resolve these with simple streaming and regexp filters.
More can probably be done to polish this but the result is clean.
Several other content-oriented methods were polished in the process of
unification. We have now properly separated out the `Abort` method to
cancel ongoing or stalled ingest processes. We have also replaced the
`Active` method with a single status method.
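Sketched with illustrative method shapes (not the exact signatures of
the time), the unified type reads roughly as:
```
package content

import (
	"context"
	"io"

	"github.com/opencontainers/go-digest"
)

// Status is a trimmed-down view of one in-flight ingest.
type Status struct {
	Ref    string
	Offset int64
}

// Store sketches the unification: reads (formerly Provider) and ingests
// (formerly Ingester) sit behind one interface, with Abort to cancel
// stalled ingests and a single Status call replacing Active.
type Store interface {
	Reader(ctx context.Context, dgst digest.Digest) (io.ReadCloser, error)
	Writer(ctx context.Context, ref string, size int64, expected digest.Digest) (io.WriteCloser, error)
	Status(ctx context.Context, ref string) (Status, error)
	Abort(ctx context.Context, ref string) error
}
```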
The transition went extremely smoothly. Once the clients were updated to
use the new methods, everything worked as expected on the first
compile.
Signed-off-by: Stephen J Day <stephen.day@docker.com>
To make restarting after failed pull less racy, we define `Truncate(size
int64) error` on `content.Writer` for the zero offset. Truncating a
writer will dump any existing data and digest state and start from the
beginning. All subsequent writes will start from the zero offset.
For the service, we support this by defining the behavior for a write
that changes the offset. To keep this narrow, we only support writes out
of order at the offset 0, which causes the writer to dump existing data
and reset the local hash.
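The added surface is small; as a sketch:
```
package content

import "io"

// Writer sketches the addition described above. Truncate(0) dumps any
// data already written and resets the digest state, so a restarted pull
// can begin again at offset zero while still holding the ref's write
// lock. Only the zero offset is supported.
type Writer interface {
	io.WriteCloser

	Truncate(size int64) error
}
```
A client recovering from a failed download calls Truncate(0) on the
already-open writer and re-copies from the source.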
This makes restarting failed pulls much smoother when there was a
previously encountered error and the source doesn't support arbitrary
seeks or reads at arbitrary offsets. By allowing this to be done while
holding the write lock on a ref, we can restart the full download
without causing a race condition.
Once we implement seeking on the `io.Reader` returned by the fetcher,
this will be less useful, but it is good to ensure that our protocol
properly supports this use case for when streaming is the only option.
Signed-off-by: Stephen J Day <stephen.day@docker.com>
After implementing pull, a few changes are required to the content store
interface to make sure that the implementation works smoothly.
Specifically, we work to make sure the predeclaration path for digests
works the same between remote and local writers. Before, we were
hesitant to require the size and digest up front, but it became
clear that having this provided significant benefit.
There are also several cleanups related to naming. We now call the
expected digest `Expected` consistently across the board and `Total` is
used to mark the expected size.
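Concretely, the status reported for an ingest now reads along these
lines (field set trimmed for illustration):
```
package content

import (
	"time"

	"github.com/opencontainers/go-digest"
)

// Status sketches the naming settled on here: Expected is the digest the
// ingested content must match, Total the predeclared size, and Offset
// how far the write has progressed, which is what drives pull and push
// progress reporting.
type Status struct {
	Ref       string
	Offset    int64
	Total     int64
	Expected  digest.Digest
	StartedAt time.Time
	UpdatedAt time.Time
}
```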
This whole effort comes together to provide a very smooth status
reporting workflow for image pull and push. This will be more obvious
when the bulk of pull code lands.
There are a few other changes to make `content.WriteBlob` more broadly
useful. In accordance with the addition of predeclaring the expected size
when getting a `Writer`, `WriteBlob` now supports this fully. It will also
resume downloads if provided an `io.Seeker` or `io.ReaderAt`. Coupled
with the `httpReadSeeker` from `docker/distribution`, we should only be
a few lines of code away from resumable downloads.
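Put together, `WriteBlob` behaves roughly like the following sketch
(minimal local interfaces standing in for the real ones; the Offset
accessor is a simplification):
```
package content

import (
	"context"
	"io"

	"github.com/opencontainers/go-digest"
)

// Minimal shapes assumed by this sketch; not the exact containerd types.
type Writer interface {
	io.WriteCloser
	Offset() int64 // bytes already ingested for this ref
	Commit(ctx context.Context, size int64, expected digest.Digest) error
}

type Ingester interface {
	Writer(ctx context.Context, ref string, size int64, expected digest.Digest) (Writer, error)
}

// WriteBlob predeclares the total size and expected digest, and resumes
// from the writer's current offset when the source can seek, instead of
// restarting the download from scratch.
func WriteBlob(ctx context.Context, ing Ingester, ref string, r io.Reader, size int64, expected digest.Digest) error {
	w, err := ing.Writer(ctx, ref, size, expected)
	if err != nil {
		return err
	}
	defer w.Close()

	if offset := w.Offset(); offset > 0 {
		if seeker, ok := r.(io.Seeker); ok {
			// skip the bytes the writer already holds
			if _, err := seeker.Seek(offset, io.SeekStart); err != nil {
				return err
			}
		}
	}

	if _, err := io.Copy(w, r); err != nil {
		return err
	}
	return w.Commit(ctx, size, expected)
}
```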
Signed-off-by: Stephen J Day <stephen.day@docker.com>
Signed-off-by: Michael Crosby <crosbymichael@gmail.com>
Add registration for more subsystems via plugins
Signed-off-by: Michael Crosby <crosbymichael@gmail.com>
Move content service to separate package
Signed-off-by: Michael Crosby <crosbymichael@gmail.com>
After iterating on the GRPC API, the changes required for the actual
implementation are now included in the content store. The main change
is the move to a single, atomic `Ingester.Writer` method for locking
content ingestion on a key. From this come several new interface
definitions.
The main benefit here is the clarification between `Status` and `Info`
that came out of the gRPC API. `Status` tells the status of a write,
whereas `Info` is for querying metadata about various blobs.
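As a sketch of the distinction (illustrative field sets):
```
package content

import (
	"time"

	"github.com/opencontainers/go-digest"
)

// Status answers "how is this write going?" and is keyed by the ingest
// ref of an in-flight write.
type Status struct {
	Ref       string
	Offset    int64
	StartedAt time.Time
	UpdatedAt time.Time
}

// Info answers "what do we know about this blob?" and describes content
// that is already committed, keyed by digest.
type Info struct {
	Digest      digest.Digest
	Size        int64
	CommittedAt time.Time
}
```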
Signed-off-by: Stephen J Day <stephen.day@docker.com>
With this change, we add the following commands to the dist tool:
- `ingest`: verify and accept content into storage
- `active`: display active ingest processes
- `list`: list content in storage
- `path`: provide a path to a blob by digest
- `delete`: remove a piece of content from storage
We demonstrate the utility with the following shell pipeline:
```
$ ./dist fetch docker.io/library/redis latest mediatype:application/vnd.docker.distribution.manifest.v2+json | \
jq -r '.layers[] | "./dist fetch docker.io/library/redis "+.digest + "| ./dist ingest --expected-digest "+.digest+" --expected-size "+(.size | tostring) +" docker.io/library/redis@"+.digest' | xargs -I{} -P10 -n1 sh -c "{}"
```
The above fetches a manifest, pipes it to jq, which assembles a shell
pipeline to ingest each layer into the content store. Because the
transactions are keyed by their digest, concurrent downloads and
downloads of repeated content are ignored. Each process is then executed
in parallel using xargs.
Put simply, this is a parallel layer download.
In a separate shell session, you can monitor the active downloads with the
following:
```
$ watch -n0.2 ./dist active
```
For now, the content is downloaded into `.content` in the current
working directory. To watch the contents of this directory, you can use
the following:
```
$ watch -n0.2 tree .content
```
This will help to understand what is going on internally.
To get access to the layers, you can use the path command:
```
$ ./dist path sha256:010c454d55e53059beaba4044116ea4636f8dd8181e975d893931c7e7204fffa
sha256:010c454d55e53059beaba4044116ea4636f8dd8181e975d893931c7e7204fffa /home/sjd/go/src/github.com/docker/containerd/.content/blobs/sha256/010c454d55e53059beaba4044116ea4636f8dd8181e975d893931c7e7204fffa
```
When you are done, you can clear out the content with the classic xargs
pipeline:
```
$ ./dist list -q | xargs ./dist delete
```
Note that this is mostly a POC. Things like failed downloads and
abandoned download cleanup aren't quite handled. We'll probably make
adjustments around how content store transactions are handled to address
this.
From here, we'll build out full image pull and create tooling to get
runtime bundles from the fetched content.
Signed-off-by: Stephen J Day <stephen.day@docker.com>
The package exports are now cleaned up to remove a lot of stuttering in
the API. We also remove direct mapping of refs to the filesystem, opting
for a hash-based approach. This *does* affect insertion performance,
since it requires more individual file IOs. A benchmark baseline has
been added and we can fix this later.
Signed-off-by: Stephen J Day <stephen.day@docker.com>
Allow content stores to ingest content without coordination of a daemon
to manage locks. Supports coordinated ingest and cross-process ingest
status.
Signed-off-by: Stephen J Day <stephen.day@docker.com>
Break up the content store prototype into a few logical files. We have a
file for the store, the writer and helpers.
Also, the writer has been modified to remove write and exec permissions
on blobs in the store.
Signed-off-by: Stephen J Day <stephen.day@docker.com>
After experimenting with pull, we've defined a transactional content
store for verified storage of fetched items. A base component of
containerkit, this will interact with both the runtime and distribution
sides of the system, avoiding coupling.
Blob access is provided through direct access to readonly files.
Signed-off-by: Stephen J Day <stephen.day@docker.com>