To allow us to differentiate between fetching an image, fetching a part
of an image, and pulling an image, we now call the `fetch` command the
`fetch-object` command. We can now introduce a command that does the
complete image fetch without creating snapshots, allowing `pull` to
perform the entire process.
Signed-off-by: Stephen J Day <stephen.day@docker.com>
When using the fetcher concurrently, the loop modifying the closed-over
`base` parameter was causing URLs from different digests to be returned
randomly. We copy the value and then modify the copy to make it work
correctly.
Luckily, we are using content-addressable storage, or this would have
been undetectable.
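The failure mode is the classic Go pitfall of mutating a value shared
across concurrent loop iterations. A minimal sketch of the bug and the
fix, with illustrative names rather than the actual fetcher code:
```go
package main

import (
	"fmt"
	"net/url"
	"sync"
)

func main() {
	base, _ := url.Parse("https://registry.example.com/v2/library/redis")
	digests := []string{"sha256:aaa", "sha256:bbb", "sha256:ccc"}

	var wg sync.WaitGroup
	for _, d := range digests {
		wg.Add(1)
		go func(d string) {
			defer wg.Done()
			// WRONG (the old behavior): mutating the shared *base means a
			// concurrent fetch can observe the path for another digest:
			//   base.Path = base.Path + "/blobs/" + d
			// Fix: copy the value, then modify only the copy.
			u := *base
			u.Path = u.Path + "/blobs/" + d
			fmt.Println(u.String())
		}(d)
	}
	wg.Wait()
}
```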
Signed-off-by: Stephen J Day <stephen.day@docker.com>
After implementing pull, a few changes are required to the content store
interface to make sure that the implementation works smoothly.
Specifically, we work to make sure the predeclaration path for digests
works the same between remote and local writers. Before, we were
hesitant to require the size and digest up front, but it became clear
that having this information provided a significant benefit.
There are also several cleanups related to naming. We now call the
expected digest `Expected` consistently across the board and `Total` is
used to mark the expected size.
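As a rough sketch of the resulting shape (hypothetical names,
signatures, and import path, not the exact containerd interfaces), both
fields now appear in the writer's status:
```go
package content

import digest "github.com/opencontainers/go-digest"

// Status reports progress on an in-flight write. Total is the size
// declared up front; Offset is how many bytes have been ingested.
type Status struct {
	Ref      string
	Offset   int64
	Total    int64
	Expected digest.Digest
}

// Writer ingests a blob whose size and digest were predeclared.
type Writer interface {
	Write(p []byte) (int, error)
	Status() (Status, error)
	// Commit fails unless exactly `size` bytes matching `expected`
	// have been written.
	Commit(size int64, expected digest.Digest) error
	Close() error
}
```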
This whole effort comes together to provide a very smooth status
reporting workflow for image pull and push. This will be more obvious
when the bulk of pull code lands.
There are a few other changes to make `content.WriteBlob` more broadly
useful. In keeping with the addition of predeclaring the expected size
when getting a `Writer`, `WriteBlob` now supports this fully. It will
also resume downloads if provided an `io.Seeker` or `io.ReaderAt`.
Coupled with the `httpReadSeeker` from `docker/distribution`, we should
only be a few lines of code away from resumable downloads.
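A minimal sketch of the resumption logic, assuming the `Writer` shape
sketched above; the real `content.WriteBlob` differs in detail:
```go
package content

import (
	"io"

	digest "github.com/opencontainers/go-digest"
)

// writeBlob copies r into w, resuming from the writer's current offset
// when the source can seek. Illustrative, not the actual implementation.
func writeBlob(w Writer, r io.Reader, total int64, expected digest.Digest) error {
	st, err := w.Status()
	if err != nil {
		return err
	}
	if st.Offset > 0 {
		switch s := r.(type) {
		case io.Seeker:
			// Skip the bytes we already have instead of re-downloading.
			if _, err := s.Seek(st.Offset, io.SeekStart); err != nil {
				return err
			}
		case io.ReaderAt:
			// Read only the remaining section of the source.
			r = io.NewSectionReader(s, st.Offset, total-st.Offset)
		}
	}
	if _, err := io.Copy(w, r); err != nil {
		return err
	}
	return w.Commit(total, expected)
}
```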
Signed-off-by: Stephen J Day <stephen.day@docker.com>
With this changeset we introduce several new things. The first is the
top-level dist command. This is a toolkit that implements various
distribution primitives, such as fetching, unpacking and ingesting.
The first component of this is a simple `fetch` command. It is a
low-level command that takes a "remote", identified by a `locator`, and
an object identifier. Keyed by the locator, this tool can identify a
remote implementation to fetch the content and write it back to standard
out. By allowing this to be the unit of pluggability in fetching
content, we can have quite a bit of flexibility in how we retrieve
images.
The current `fetch` implementation provides anonymous access to Docker
Hub images through the namespace `docker.io`. As an example, one can
fetch the manifest for `redis` with the following command:
```
$ ./dist fetch docker.io/library/redis latest mediatype:application/vnd.docker.distribution.manifest.v2+json
```
Note that we have provided a mediatype "hint", nudging the fetch
implementation to grab the correct endpoint. We can hash the output of
that to fetch the same content by digest:
```
$ ./dist fetch docker.io/library/redis sha256:$(./dist fetch docker.io/library/redis latest mediatype:application/vnd.docker.distribution.manifest.v2+json | shasum -a 256 | cut -d " " -f 1)
```
Note that the hint is now elided, since we have affixed the content to a
particular hash.
If you are not yet entertained, let's bring `jq` and `xargs` into the
mix for maximum fun. The following incantation fetches the same manifest
and downloads all layers into the convenience of `/dev/null`:
```
$ ./dist fetch docker.io/library/redis sha256:a027a470aa2b9b41cc2539847a97b8a14794ebd0a4c7c5d64e390df6bde56c73 | jq -r '.layers[] | .digest' | xargs -n1 -P10 ./dist fetch docker.io/library/redis > /dev/null
```
This is just the beginning. We should be able to centralize
configuration around fetch to implement a number of distribution
methodologies that have been challenging or impossible up to this point.
The `locator`, mentioned earlier, is a schemaless URL that provides a
host and path that can be used to resolve the remote. By dispatching on
this common identifier, we should be able to support almost any protocol
and discovery mechanism imaginable.
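To make that concrete, here is a hedged sketch of dispatching on the
locator's host; the `Fetcher` interface and resolver registry are
invented for illustration:
```go
package dist

import (
	"fmt"
	"io"
	"strings"
)

// Fetcher retrieves the named object from a remote and writes it to w.
type Fetcher interface {
	Fetch(w io.Writer, object string) error
}

// resolvers keys a remote implementation off the locator's host. A
// locator like "docker.io/library/redis" has host "docker.io".
var resolvers = map[string]func(locator string) (Fetcher, error){
	// "docker.io": newDockerFetcher, // hypothetical registry fetcher
}

// resolve dispatches on the host portion of a schemaless-URL locator.
func resolve(locator string) (Fetcher, error) {
	host := locator
	if i := strings.Index(locator, "/"); i >= 0 {
		host = locator[:i]
	}
	f, ok := resolvers[host]
	if !ok {
		return nil, fmt.Errorf("no resolver for host %q", host)
	}
	return f(locator)
}
```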
When this is more solidified, we can roll these up into higher-level
operations that can be orchestrated through the `dist` tool or via gRPC.
What a time to be alive!
Signed-off-by: Stephen J Day <stephen.day@docker.com>