Following from the rest of the work in this branch, we are now porting
the dist command to work directly against the containerd content API.
Signed-off-by: Stephen J Day <stephen.day@docker.com>
After iterating on the GRPC API, the changes required for the actual
implementation are now included in the content store. The main change
is the move to a single, atomic `Ingester.Writer` method for locking
content ingestion on a key. From this come several new interface
definitions.
The main benefit here is the clearer distinction between `Status` and
`Info` that came out of the GRPC API. `Status` reports the status of a
write, whereas `Info` is for querying metadata about various blobs.
Signed-off-by: Stephen J Day <stephen.day@docker.com>
This adds a config file for containerd configuration. It is hard to
represent structured data with CLI flags, so the config file should be
used for the majority of fields when configuring containerd.
There are still a few flags on the daemon that override config file
values, but flags should take a back seat going forward and should be
kept to a minimum.
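A minimal sketch of pointing the daemon at a config file (the flag and
path here are illustrative, not taken from this changeset):
```
$ containerd --config /etc/containerd/config.toml
```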
Signed-off-by: Michael Crosby <crosbymichael@gmail.com>
It duplicates --log-level, and since --log-level takes precedence over
--debug and has a default value, --debug no longer has any effect.
Signed-off-by: Qiang Huang <h.huangqiang@huawei.com>
Removed the unused requires-root test function and updated the tar
requires function to use the lookup method.
Signed-off-by: Derek McGowan <derek@mcgstyle.net> (github: dmcgowan)
After trying to explain the complexities of developing with protobuf, I
have now created a command that correctly calculates the import paths
for each package and runs the protobuf command.
The Makefile has been updated accordingly, except that we no longer use
`go generate`. A new target `protos` has been defined. We alias the two,
for the lazy. We leave `go generate` in place for cases where we will
actually use `go generate`.
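Assuming the alias mentioned above is a `generate` target (the names
below follow this description rather than a verified Makefile), the two
invocations would be:
```
$ make protos
$ make generate   # assumed alias for the protos target
```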
Signed-off-by: Stephen J Day <stephen.day@docker.com>
This changeset adds the simple apply command. It consumes a tar layer
and applies that layer to the specified directory. For the most part, it
is a direct call into Docker's `pkg/archive.ApplyLayer`.
The following demonstrates unpacking the wordpress rootfs into a local
directory `wordpress`:
```
$ ./dist fetch docker.io/library/wordpress 4.5 mediatype:application/vnd.docker.distribution.manifest.v2+json | \
jq -r '.layers[] | "sudo ./dist apply ./wordpress < $(./dist path -n "+.digest+")"' | xargs -I{} -n1 sh -c "{}"
```
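For a single layer tarball already on disk, the direct invocation is
just a redirect into `apply` (the file name here is only an example):
```
$ sudo ./dist apply ./wordpress < layer.tar
```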
Note that you should have fetched the layers into the local content
store before running the above. Alternatively, you can just read the
manifest from the content store, rather than fetching it. We use fetch
above to avoid having to look up the manifest digest for our demo.
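If the manifest is already in the content store, the alternative would
look roughly like this (a sketch; `<manifest-digest>` is a placeholder
you would need to know in advance):
```
$ jq -r '.layers[].digest' "$(./dist path -n sha256:<manifest-digest>)"
```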
This tool has a long way to go. We still need to incorporate
snapshotting, as well as the ability to calculate the `ChainID` under
subsequent unpacking. Once we have some tools to play around with
snapshotting, we'll be able to incorporate our `rootfs.ApplyLayer`
algorithm that will get us a lot closer to a production worthy system.
Signed-off-by: Stephen J Day <stephen.day@docker.com>
With this change, we add the following commands to the dist tool:
- `ingest`: verify and accept content into storage
- `active`: display active ingest processes
- `list`: list content in storage
- `path`: provide a path to a blob by digest
- `delete`: remove a piece of content from storage
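Each command's basic invocation, with placeholders in angle brackets,
looks roughly like this (flags mirror the demos below):
```
$ ./dist ingest --expected-digest <digest> --expected-size <size> <ref>
$ ./dist active
$ ./dist list -q
$ ./dist path <digest>
$ ./dist delete <digest>
```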
We demonstrate the utility with the following shell pipeline:
```
$ ./dist fetch docker.io/library/redis latest mediatype:application/vnd.docker.distribution.manifest.v2+json | \
jq -r '.layers[] | "./dist fetch docker.io/library/redis "+.digest + "| ./dist ingest --expected-digest "+.digest+" --expected-size "+(.size | tostring) +" docker.io/library/redis@"+.digest' | xargs -I{} -P10 -n1 sh -c "{}"
```
The above fetches a manifest, pipes it to jq, which assembles a shell
pipeline to ingest each layer into the content store. Because the
transactions are keyed by their digest, concurrent downloads and
downloads of repeated content are ignored. Each process is then executed
in parallel using xargs.
In short, this is a parallel layer download.
In a separate shell session, you can monitor the active downloads with
the following:
```
$ watch -n0.2 ./dist active
```
For now, the content is downloaded into `.content` in the current
working directory. To watch the contents of this directory, you can use
the following:
```
$ watch -n0.2 tree .content
```
This will help to understand what is going on internally.
To get access to the layers, you can use the path command:
```
$ ./dist path sha256:010c454d55e53059beaba4044116ea4636f8dd8181e975d893931c7e7204fffa
sha256:010c454d55e53059beaba4044116ea4636f8dd8181e975d893931c7e7204fffa /home/sjd/go/src/github.com/docker/containerd/.content/blobs/sha256/010c454d55e53059beaba4044116ea4636f8dd8181e975d893931c7e7204fffa
```
When you are done, you can clear out the content with the classic xargs
pipeline:
```
$ ./dist list -q | xargs ./dist delete
```
Note that this is mostly a POC. Things like failed downloads and
abandoned download cleanup aren't quite handled. We'll probably make
adjustments around how content store transactions are handled to address
this.
From here, we'll build out full image pull and create tooling to get
runtime bundles from the fetched content.
Signed-off-by: Stephen J Day <stephen.day@docker.com>