The split between provider and ingester was a long-standing division
reflecting the client-side use cases. For the most part, we were
differentiating these for the algorithms that operate on them, but it
made instantiation and use of the types challenging. On the server side,
this distinction is generally less important. This change unifies these
types, and in the process we gain a few benefits.
The first is that we now access the content store entirely over GRPC.
This was the initial intent, and that goal is now fully satisfied. There
are a few issues around listing content and getting status, but we
resolve these with simple streaming and regexp filters. More can
probably be done to polish this, but the result is clean.
Several other content-oriented methods were polished in the process of
unification. We have now properly separated out the `Abort` method to
cancel ongoing or stalled ingest processes. We have also replaced the
`Active` method with a single status method.
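As a rough sketch of what the unified surface looks like on the ingest
side (the names and signatures below are illustrative assumptions, not
the exact API):
```go
// Illustrative sketch only; these names and signatures are assumptions,
// not the exact containerd API.
package content

import (
	"context"
	"io"
	"time"
)

// Status describes a single in-flight ingest, replacing the old Active
// listing with per-ref status reporting.
type Status struct {
	Ref       string
	Offset    int64
	Total     int64
	StartedAt time.Time
	UpdatedAt time.Time
}

// Store folds the old provider and ingester roles into one type.
type Store interface {
	// Reader covers the old provider role: read committed content.
	Reader(ctx context.Context, dgst string) (io.ReadCloser, error)

	// Writer covers the old ingester role: begin or resume an ingest
	// keyed by ref.
	Writer(ctx context.Context, ref string) (io.WriteCloser, error)

	// Status reports progress for a single ingest ref.
	Status(ctx context.Context, ref string) (Status, error)

	// Abort cancels an ongoing or stalled ingest.
	Abort(ctx context.Context, ref string) error
}
```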
The transition went extremely smoothly. Once the clients were updated to
use the new methods, everything worked as expected on the first
compile.
Signed-off-by: Stephen J Day <stephen.day@docker.com>
This adds pause and unpause to containerd's execution service and the
same commands to the `ctr` client.
Signed-off-by: Phil Estes <estesp@linux.vnet.ibm.com>
This allows one to edit content in the content store with their favorite
editor. It is as simple as this:
```console
$ dist content edit sha256:58e1a1bb75db1b5a24a462dd5e2915277ea06438c3f105138f97eb53149673c4
```
The above will pop up your $EDITOR, where you can make changes to the content.
When you are done, save and the new version will be added to the content store.
The digest of the new content will be printed to stdout:
```console
sha256:247f30ac320db65f3314b63b908a3aeaac5813eade6cabc9198b5883b22807bc
```
We can then retrieve the content quite easily:
```console
$ dist content get sha256:247f30ac320db65f3314b63b908a3aeaac5813eade6cabc9198b5883b22807bc
{
  "schemaVersion": 2,
  "mediaType": "application/vnd.docker.distribution.manifest.v2+json",
  "config": {
    "mediaType": "application/vnd.docker.container.image.v1+json",
    "size": 1278,
    "digest": "sha256:4a415e3663882fbc554ee830889c68a33b3585503892cc718a4698e91ef2a526"
  },
  "annotations": {},
  "layers": [
    {
      "mediaType": "application/vnd.docker.image.rootfs.diff.tar.gzip",
      "size": 1905270,
      "digest": "sha256:627beaf3eaaff1c0bc3311d60fb933c17ad04fe377e1043d9593646d8ae3bfe1"
    }
  ]
}
```
In this case, an annotations field was added to the original manifest.
While this implementation is very simple, we can add all sorts of validation
and tooling to allow one to edit images inline. Coupled with declaring the
mediatype, we could return specific errors that can allow a user to craft
valid, working modifications to images for testing and profit.
Signed-off-by: Stephen J Day <stephen.day@docker.com>
Add functionality for restoring containers after containerd dies and is
restarted with terminated shims.
This ensures that on restore, if a container no longer has a running
shim, containerd will kill and clean up the container.
Signed-off-by: Michael Crosby <crosbymichael@gmail.com>
For some reason, when I wrote this, I forgot about the `View` and
`Update` helpers on boltdb. These are now used and make the code much
easier to follow.
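For reference, this is the usage pattern those helpers enable; the
bucket and key names here are just for illustration:
```go
// Example of boltdb's View/Update helpers, which wrap transaction
// begin/commit/rollback so callers only supply the body.
package main

import (
	"fmt"
	"log"

	"github.com/boltdb/bolt"
)

func main() {
	db, err := bolt.Open("meta.db", 0600, nil)
	if err != nil {
		log.Fatal(err)
	}
	defer db.Close()

	// Update runs the function in a read-write transaction and commits
	// it if no error is returned.
	if err := db.Update(func(tx *bolt.Tx) error {
		bkt, err := tx.CreateBucketIfNotExists([]byte("images"))
		if err != nil {
			return err
		}
		return bkt.Put([]byte("docker.io/stevvooe/foo"), []byte("sha256:..."))
	}); err != nil {
		log.Fatal(err)
	}

	// View runs the function in a read-only transaction.
	if err := db.View(func(tx *bolt.Tx) error {
		v := tx.Bucket([]byte("images")).Get([]byte("docker.io/stevvooe/foo"))
		fmt.Printf("%s\n", v)
		return nil
	}); err != nil {
		log.Fatal(err)
	}
}
```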
Signed-off-by: Stephen J Day <stephen.day@docker.com>
With this changeset, the image store is now completely accessible over
GRPC. No clients manipulate the image store database
directly and the GRPC client is fully featured. The metadata database is
now managed by the daemon and access coordinated via services.
Signed-off-by: Stephen J Day <stephen.day@docker.com>
Server and client implementations of the image store are now provided.
We have created an image metadata interface and converted the bolt
functions to implement that interface over a transaction. A remote client
implementation is provided that implements the same interface.
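A rough sketch of the shape this takes; the interface and field names
below are assumptions for illustration, not the definitions in the tree:
```go
// Sketch of the image metadata interface described above; names and
// fields are illustrative assumptions, not the exact definitions.
package images

import "context"

type Image struct {
	Name   string // e.g. "docker.io/stevvooe/foo"
	Digest string // digest of the image's root descriptor
}

// Store is implemented both over a bolt transaction (server side) and by
// a GRPC-backed remote client, so callers are indifferent to the backing.
type Store interface {
	Get(ctx context.Context, name string) (Image, error)
	Put(ctx context.Context, name string, digest string) error
	List(ctx context.Context) ([]Image, error)
	Delete(ctx context.Context, name string) error
}
```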
Signed-off-by: Stephen J Day <stephen.day@docker.com>
This is a first pass at the metadata required for supporting an image
store. We use a shallow approach to the problem, allowing this
component to centralize the naming. Resources for an image can then be
"snowballed" in by actual implementations. This is better understood
through example.
Let's take pull. One could register the name "docker.io/stevvooe/foo" as
pointing at a particular digest. When instructed to pull or fetch, the
system will notice that no components of that image are present locally.
It can then recursively resolve the resources for that image and fetch
them into the content store. Next time the instruction is issued, the
content will be present so no action will be taken.
Another example is preparing the rootfs. The requirements for a rootfs
can be resolved from a name. These "diff ids" will then be compared with
what is available in the snapshot manager. Any parts of the rootfs, such
as a layer, that aren't available in the snapshotter can be unpacked.
Once this process is satisfied, the image will be runnable as a
container.
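To make the pull example concrete, here is a minimal sketch of that
resolution loop, assuming hypothetical ImageStore and ContentStore
interfaces (none of these names are real containerd APIs):
```go
// Minimal sketch of the pull flow; every type and method here is a
// hypothetical stand-in, not a real containerd API.
package pull

import "context"

type ImageStore interface {
	// Get resolves a registered name, e.g. "docker.io/stevvooe/foo",
	// to the digest it points at.
	Get(ctx context.Context, name string) (digest string, err error)
}

type ContentStore interface {
	Exists(ctx context.Context, digest string) bool
	Fetch(ctx context.Context, digest string) error
}

// pull resolves the name and "snowballs" in any resources that are not
// yet present locally. Running it a second time is a no-op.
func pull(ctx context.Context, images ImageStore, content ContentStore,
	children func(digest string) []string, name string) error {
	root, err := images.Get(ctx, name)
	if err != nil {
		return err
	}
	queue := []string{root}
	for len(queue) > 0 {
		dgst := queue[0]
		queue = queue[1:]
		if !content.Exists(ctx, dgst) {
			if err := content.Fetch(ctx, dgst); err != nil {
				return err
			}
		}
		// Recurse into manifests, configs and layers referenced by dgst.
		queue = append(queue, children(dgst)...)
	}
	return nil
}
```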
Signed-off-by: Stephen J Day <stephen.day@docker.com>
The service can use the snapshotter directly to get the rootfs.
Removed debug line for mount response.
Signed-off-by: Derek McGowan <derek@mcgstyle.net> (github: dmcgowan)
The message was defined but the method was returning empty; plumb
through the result from the shim layer.
Compile-tested only.
Signed-off-by: Ian Campbell <ian.campbell@docker.com>
To make restarting after failed pull less racy, we define `Truncate(size
int64) error` on `content.Writer` for the zero offset. Truncating a
writer will dump any existing data and digest state and start from the
beginning. All subsequent writes will start from the zero offset.
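In interface terms, this adds something along the lines of the following
to the writer (a simplified sketch, not the exact definition):
```go
// Simplified sketch of the writer contract described above; the real
// content.Writer carries more methods than shown here.
package content

import "io"

type Writer interface {
	io.WriteCloser

	// Truncate dumps any data and digest state accumulated so far.
	// Only size == 0 is supported; subsequent writes start over from
	// the zero offset.
	Truncate(size int64) error
}
```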
For the service, we support this by defining the behavior for a write
that changes the offset. To keep this narrow, we only support writes out
of order at the offset 0, which causes the writer to dump existing data
and reset the local hash.
This makes restarting failed pulls much smoother when a previous error
was encountered and the source doesn't support seeking or reading at
arbitrary offsets. By allowing this to be done while
holding the write lock on a ref, we can restart the full download
without causing a race condition.
Once we implement seeking on the `io.Reader` returned by the fetcher,
this will be less useful, but it is good to ensure that our protocol
properly supports this use case for when streaming is the only option.
Signed-off-by: Stephen J Day <stephen.day@docker.com>
Allow deletion of content over the GRPC interface. For now, we are going
with a model that conducts reference management outside of the content
store, in the metadata store, but this design is valid either way.
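A sketch of how that split could look from the caller's side, with
reference counting handled by the metadata store before the content
store is asked to delete (all names here are illustrative):
```go
// Sketch of reference management living outside the content store; the
// interfaces and method names are assumptions for illustration.
package cleanup

import "context"

type MetadataStore interface {
	// RemoveReference drops one reference and reports how many remain.
	RemoveReference(ctx context.Context, digest string) (remaining int, err error)
}

type ContentStore interface {
	Delete(ctx context.Context, digest string) error
}

// release removes a reference and deletes the blob once unreferenced.
func release(ctx context.Context, md MetadataStore, cs ContentStore, digest string) error {
	remaining, err := md.RemoveReference(ctx, digest)
	if err != nil || remaining > 0 {
		return err
	}
	return cs.Delete(ctx, digest)
}
```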
Signed-off-by: Stephen J Day <stephen.day@docker.com>
For clients which only want to know about one container, this is simpler than
searching the result of execution.List.
Signed-off-by: Ian Campbell <ian.campbell@docker.com>
After implementing pull, a few changes are required to the content store
interface to make sure that the implementation works smoothly.
Specifically, we work to make sure the predeclaration path for digests
works the same between remote and local writers. Before, we were
hesitant to require the size and digest up front, but it became clear
that having this provided significant benefit.
There are also several cleanups related to naming. We now call the
expected digest `Expected` consistently across the board and `Total` is
used to mark the expected size.
This whole effort comes together to provide a very smooth status
reporting workflow for image pull and push. This will be more obvious
when the bulk of pull code lands.
There are a few other changes to make `content.WriteBlob` more broadly
useful. In accordance with the addition of predeclaring the expected
size when getting a `Writer`, `WriteBlob` now supports this fully. It
will also resume downloads if provided an `io.Seeker` or `io.ReaderAt`.
Coupled with the `httpReadSeeker` from `docker/distribution`, we should
only be a few lines of code away from resumable downloads.
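As a rough illustration of the resume path (the writer type and commit
step below are simplified assumptions, not the real `content` package;
the actual helper also handles `io.ReaderAt`):
```go
// Simplified sketch of resumable writes; Writer here is a stand-in, not
// the real content.Writer, and error handling is abbreviated.
package content

import (
	"context"
	"fmt"
	"io"
)

type Writer interface {
	io.Writer
	// Offset reports how many bytes this ingest already holds, so a
	// restarted copy can skip them in the source.
	Offset() int64
	// Commit validates the predeclared size (Total) and digest (Expected).
	Commit(total int64, expected string) error
}

// writeBlob resumes from the writer's current offset when the reader can
// seek, then commits against the predeclared size and digest.
func writeBlob(ctx context.Context, w Writer, r io.Reader, total int64, expected string) error {
	if offset := w.Offset(); offset > 0 {
		if seeker, ok := r.(io.Seeker); ok {
			if _, err := seeker.Seek(offset, io.SeekStart); err != nil {
				return fmt.Errorf("resuming at offset %d: %v", offset, err)
			}
		}
	}
	if _, err := io.Copy(w, r); err != nil {
		return err
	}
	return w.Commit(total, expected)
}
```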
Signed-off-by: Stephen J Day <stephen.day@docker.com>
Signed-off-by: Michael Crosby <crosbymichael@gmail.com>
Add registration for more subsystems via plugins
Signed-off-by: Michael Crosby <crosbymichael@gmail.com>
Move content service to separate package
Signed-off-by: Michael Crosby <crosbymichael@gmail.com>