Signed-off-by: Michael Crosby <crosbymichael@gmail.com>
Add registration for more subsystems via plugins
Signed-off-by: Michael Crosby <crosbymichael@gmail.com>
Move content service to separate package
Signed-off-by: Michael Crosby <crosbymichael@gmail.com>
Following from the rest of the work in this branch, we are now porting
the dist command to work directly against the containerd content API.
Signed-off-by: Stephen J Day <stephen.day@docker.com>
After iterating on the GRPC API, the changes required for the actual
implementation are now included in the content store. The main change
is the move to a single, atomic `Ingester.Writer` method for locking
content ingestion on a key. From this come several new interface
definitions.
The main benefit here is the clarification between `Status` and `Info`
that came out of the GRPC API. `Status` tells the status of a write,
whereas `Info` is for querying metadata about various blobs.
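A rough sketch of the shape these definitions take (`Ingester.Writer`,
`Status`, and `Info` are named above; the field and method details here
are illustrative, not the final API):

```
package content

import (
	"io"
	"time"

	"github.com/opencontainers/go-digest"
)

// Status reports on an in-flight write, keyed by the ingest ref.
type Status struct {
	Ref       string
	Offset    int64
	StartedAt time.Time
	UpdatedAt time.Time
}

// Info carries queryable metadata about a committed blob.
type Info struct {
	Digest      digest.Digest
	Size        int64
	CommittedAt time.Time
}

// Ingester locks content ingestion on a key through a single,
// atomic Writer call.
type Ingester interface {
	Writer(ref string) (Writer, error)
}

// Writer extends io.WriteCloser with write status and commit.
type Writer interface {
	io.WriteCloser
	Status() (Status, error)
	Commit(size int64, expected digest.Digest) error
}
```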
Signed-off-by: Stephen J Day <stephen.day@docker.com>
We now define the `snapshot.Driver` interface based on earlier work.
Many details of the model are worked out, such as snapshot lifecycle and
parentage of commits against "Active" snapshots.
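A sketch of the interface implied by that lifecycle (method names and
the `Mount` type here are illustrative, not the committed API):

```
package snapshot

// Mount describes how to mount a snapshot into place.
type Mount struct {
	Type    string
	Source  string
	Options []string
}

// Driver manages the snapshot lifecycle: an "Active" snapshot is
// prepared from a committed parent, mutated through its mounts, and
// then either committed under a name or discarded.
type Driver interface {
	// Prepare creates an Active snapshot identified by key, layered
	// on the committed snapshot named by parent.
	Prepare(key, parent string) ([]Mount, error)

	// Commit captures the Active snapshot identified by key as a
	// committed snapshot under name, inheriting the Active
	// snapshot's parentage.
	Commit(name, key string) error

	// Remove discards an Active snapshot without committing it.
	Remove(key string) error
}
```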
The impetus of this change is to provide a snapshot POC that does a
complete push/pull workflow. The beginnings of a test suite for snapshot
drivers are included, which we can use to verify the assumptions of
drivers. The intent is to port the existing tests over to this test
suite and start scaling contributions and testing to the snapshot driver
subsystem.
There are still some details that need to be worked out, such as listing
and metadata access. We can do this activity as we further integrate
with tooling.
Signed-off-by: Stephen J Day <stephen.day@docker.com>
With this change, we add the following commands to the dist tool:
- `ingest`: verify and accept content into storage
- `active`: display active ingest processes
- `list`: list content in storage
- `path`: provide a path to a blob by digest
- `delete`: remove a piece of content from storage
We demonstrate the utility with the following shell pipeline:
```
$ ./dist fetch docker.io/library/redis latest mediatype:application/vnd.docker.distribution.manifest.v2+json | \
    jq -r '.layers[] | "./dist fetch docker.io/library/redis "+.digest + "| ./dist ingest --expected-digest "+.digest+" --expected-size "+(.size | tostring) +" docker.io/library/redis@"+.digest' | \
    xargs -I{} -P10 -n1 sh -c "{}"
```
The above fetches a manifest, pipes it to jq, which assembles a shell
pipeline to ingest each layer into the content store. Because the
transactions are keyed by their digest, concurrent downloads and
downloads of repeated content are ignored. Each process is then executed
in parallel using xargs.
In short, this is a parallel layer download.
In a separate shell session, you can monitor the active downloads with
the following:
```
$ watch -n0.2 ./dist active
```
For now, the content is downloaded into `.content` in the current
working directory. To watch the contents of this directory, you can use
the following:
```
$ watch -n0.2 tree .content
```
This will help to understand what is going on internally.
To get access to the layers, you can use the path command:
```
$ ./dist path sha256:010c454d55e53059beaba4044116ea4636f8dd8181e975d893931c7e7204fffa
sha256:010c454d55e53059beaba4044116ea4636f8dd8181e975d893931c7e7204fffa /home/sjd/go/src/github.com/docker/containerd/.content/blobs/sha256/010c454d55e53059beaba4044116ea4636f8dd8181e975d893931c7e7204fffa
```
When you are done, you can clear out the content with the classic xargs
pipeline:
```
$ ./dist list -q | xargs ./dist delete
```
Note that this is mostly a POC. Things like failed downloads and
abandoned download cleanup aren't quite handled. We'll probably make
adjustments around how content store transactions are handled to address
this.
From here, we'll build out full image pull and create tooling to get
runtime bundles from the fetched content.
Signed-off-by: Stephen J Day <stephen.day@docker.com>
The package exports are now cleaned up to remove a lot of stuttering in
the API. We also remove direct mapping of refs to the filesystem, opting
for a hash-based approach. This *does* affect insertion performance,
since it requires more individual file IOs. A benchmark baseline has
been added and we can fix this later.
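As a sketch of the hash-based approach (the layout below is
illustrative, not the store's actual one), a ref is hashed to derive
its on-disk name instead of being used directly as a path:

```
package content

import (
	"crypto/sha256"
	"fmt"
	"path/filepath"
)

// ingestPath maps an arbitrary ref to a stable ingest directory by
// hashing it, so refs never need to be sanitized into filesystem
// names. Illustrative only; the store's real layout may differ.
func ingestPath(root, ref string) string {
	dgst := sha256.Sum256([]byte(ref))
	return filepath.Join(root, "ingest", fmt.Sprintf("%x", dgst))
}
```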
Signed-off-by: Stephen J Day <stephen.day@docker.com>
Allow content stores to ingest content without requiring a daemon to
manage locks. Supports coordinated ingest and cross-process ingest
status.
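One daemon-free way to coordinate ingest across processes, sketched
here with an exclusive lock file (illustrative; not necessarily the
exact mechanism used):

```
package content

import (
	"os"
	"path/filepath"
)

// tryLock coordinates ingest across processes without a daemon by
// creating an exclusive lock file inside the ingest directory. A
// sketch of the general idea, not the store's actual scheme.
func tryLock(ingestDir string) (release func(), err error) {
	lock := filepath.Join(ingestDir, "lock")
	f, err := os.OpenFile(lock, os.O_CREATE|os.O_EXCL|os.O_WRONLY, 0644)
	if err != nil {
		return nil, err // another process holds this ingest
	}
	f.Close()
	return func() { os.Remove(lock) }, nil
}
```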
Signed-off-by: Stephen J Day <stephen.day@docker.com>
Break up the content store prototype into a few logical files. We have a
file for the store, the writer and helpers.
Also, the writer has been modified to remove write and exec permissions
on blobs in the store.
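The commit-time permission change might look roughly like this (the
helper name is ours):

```
package content

import "os"

// makeReadonly strips write and exec bits from a committed blob,
// leaving it world-readable (0444). A sketch of the commit-time step.
func makeReadonly(blobPath string) error {
	return os.Chmod(blobPath, 0444)
}
```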
Signed-off-by: Stephen J Day <stephen.day@docker.com>
After experimenting with pull, we've defined a transactional content
store for verified storage of fetched items. A base component of
containerkit, this will interact with both the runtime and distribution
sides of the system, avoiding coupling.
Blob access is provided through direct access to readonly files.
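Resolving a digest to its readonly file could look like the following
sketch, matching the `blobs/<algorithm>/<hex>` layout visible in the
`dist path` example earlier in this log (helper names are ours):

```
package content

import (
	"os"
	"path/filepath"

	"github.com/opencontainers/go-digest"
)

// blobPath resolves a digest to its readonly file in the store.
func blobPath(root string, dgst digest.Digest) string {
	return filepath.Join(root, "blobs", dgst.Algorithm().String(), dgst.Hex())
}

// open gives callers direct read access to the blob for dgst.
func open(root string, dgst digest.Digest) (*os.File, error) {
	return os.Open(blobPath(root, dgst))
}
```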
Signed-off-by: Stephen J Day <stephen.day@docker.com>