With the rename of fetch to fetch-object, we now introduce the `fetch`
command. It will fetch all of the resources required for an image into
the content store. We'll still need to follow this up with metadata
registration, but this is a good start.
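Conceptually, the command resolves the manifest and then ingests everything it references. The sketch below illustrates that walk; the `Fetcher` and `Ingester` interfaces and the `fetchImage` helper are illustrative assumptions, not the actual API:

    package fetch

    import (
        "bytes"
        "context"
        "encoding/json"
        "io"

        ocispec "github.com/opencontainers/image-spec/specs-go/v1"
    )

    // Fetcher is an assumed helper that returns the content for a descriptor.
    type Fetcher interface {
        Fetch(ctx context.Context, desc ocispec.Descriptor) (io.ReadCloser, error)
    }

    // Ingester is an assumed hook into the content store.
    type Ingester interface {
        Ingest(ctx context.Context, desc ocispec.Descriptor, r io.Reader) error
    }

    // fetchImage ingests the manifest, config and all layers for an image.
    // Metadata registration is intentionally left out; it is a follow-up step.
    func fetchImage(ctx context.Context, f Fetcher, cs Ingester, manifestDesc ocispec.Descriptor) error {
        rc, err := f.Fetch(ctx, manifestDesc)
        if err != nil {
            return err
        }
        p, err := io.ReadAll(rc)
        rc.Close()
        if err != nil {
            return err
        }
        if err := cs.Ingest(ctx, manifestDesc, bytes.NewReader(p)); err != nil {
            return err
        }

        var manifest ocispec.Manifest
        if err := json.Unmarshal(p, &manifest); err != nil {
            return err
        }

        // The config and every layer are just more blobs to pull into the store.
        for _, desc := range append([]ocispec.Descriptor{manifest.Config}, manifest.Layers...) {
            blob, err := f.Fetch(ctx, desc)
            if err != nil {
                return err
            }
            err = cs.Ingest(ctx, desc, blob)
            blob.Close()
            if err != nil {
                return err
            }
        }
        return nil
    }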
Signed-off-by: Stephen J Day <stephen.day@docker.com>
To make using the `fetch-object` command for demonstrations much easier,
the mediatypes are defaulted when a non-digest object identifier is
provided. We also add support for OCI mediatypes, although they are
mostly unavailable.
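A rough sketch of that defaulting logic, assuming a helper along these lines (the function name and the exact set of defaults are illustrative, not the tool's actual code):

    package fetch

    import (
        "github.com/opencontainers/go-digest"
        ocispec "github.com/opencontainers/image-spec/specs-go/v1"
    )

    // defaultMediaTypes returns the user-supplied media types, or, when the
    // object is identified by something other than a digest, a default set of
    // manifest media types (Docker schema 2 plus the OCI equivalents).
    func defaultMediaTypes(object string, mediaTypes []string) []string {
        if len(mediaTypes) > 0 {
            return mediaTypes
        }
        if _, err := digest.Parse(object); err == nil {
            // An explicit digest was given; don't guess the media type.
            return nil
        }
        return []string{
            "application/vnd.docker.distribution.manifest.v2+json",
            "application/vnd.docker.distribution.manifest.list.v2+json",
            ocispec.MediaTypeImageManifest,
            ocispec.MediaTypeImageIndex,
        }
    }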
Signed-off-by: Stephen J Day <stephen.day@docker.com>
To allow us to differentiate between fetching an image, fetching part of
an image, and pulling an image, we now call the `fetch` command the
`fetch-object` command. We can now introduce a command that does the
complete image fetch without creating snapshots, allowing `pull` to
perform the entire process.
Signed-off-by: Stephen J Day <stephen.day@docker.com>
That is, e55af28bae.
Since f77ece9cb5, the license files regex is case insensitive, which
picks up one more file.
Signed-off-by: Ian Campbell <ian.campbell@docker.com>
The message was defined but the method was returning an empty result;
plumb through the result from the shim layer.
Compile tested only.
Signed-off-by: Ian Campbell <ian.campbell@docker.com>
To make restarting after a failed pull less racy, we define `Truncate(size
int64) error` on `content.Writer` for the zero offset. Truncating a
writer will dump any existing data and digest state and start from the
beginning. All subsequent writes will start from the zero offset.
For the service, we support this by defining the behavior for a write
that changes the offset. To keep this narrow, we only support writes out
of order at the offset 0, which causes the writer to dump existing data
and reset the local hash.
This makes restarting failed pulls much smoother when a previous attempt
encountered an error and the source doesn't support seeks or reads at
arbitrary offsets. By allowing this to be done while
holding the write lock on a ref, we can restart the full download
without causing a race condition.
Once we implement seeking on the `io.Reader` returned by the fetcher,
this will be less useful, but it is good to ensure that our protocol
properly supports this use case for when streaming is the only option.
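A minimal sketch of a writer honoring this contract; the interface below is trimmed to the parts discussed here, and the file-backed implementation is an assumption for illustration:

    package content

    import (
        "errors"
        "io"
        "os"

        "github.com/opencontainers/go-digest"
    )

    // Writer is trimmed to the methods relevant here; Truncate(0) discards any
    // written data and digest state so that writes restart at offset 0.
    type Writer interface {
        io.WriteCloser
        Truncate(size int64) error
    }

    // fileWriter is a hypothetical file-backed implementation.
    type fileWriter struct {
        f        *os.File
        offset   int64
        digester digest.Digester
    }

    var _ Writer = (*fileWriter)(nil)

    func (w *fileWriter) Write(p []byte) (int, error) {
        n, err := w.f.Write(p)
        w.digester.Hash().Write(p[:n]) // keep the running digest in sync
        w.offset += int64(n)
        return n, err
    }

    func (w *fileWriter) Close() error { return w.f.Close() }

    func (w *fileWriter) Truncate(size int64) error {
        if size != 0 {
            return errors.New("only truncation to offset 0 is supported")
        }
        if err := w.f.Truncate(0); err != nil {
            return err
        }
        if _, err := w.f.Seek(0, io.SeekStart); err != nil {
            return err
        }
        w.offset = 0
        w.digester = digest.SHA256.Digester() // reset the local hash state
        return nil
    }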
Signed-off-by: Stephen J Day <stephen.day@docker.com>
Allow deletion of content over the GRPC interface. For now, we are going
with a model that conducts reference management outside of the content
store, in the metadata store, but this design is valid either way.
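On the service side this looks roughly like the sketch below; the request type, the `Store` interface, and the error mapping are assumptions for illustration rather than the generated API:

    package content

    import (
        "context"

        "github.com/opencontainers/go-digest"
        "google.golang.org/grpc/codes"
        "google.golang.org/grpc/status"
        "google.golang.org/protobuf/types/known/emptypb"
    )

    // Store is the minimal local store surface assumed for this sketch.
    type Store interface {
        Delete(ctx context.Context, dgst digest.Digest) error
    }

    // DeleteContentRequest stands in for the generated proto message.
    type DeleteContentRequest struct {
        Digest string
    }

    type service struct {
        store Store
    }

    func (s *service) Delete(ctx context.Context, req *DeleteContentRequest) (*emptypb.Empty, error) {
        dgst, err := digest.Parse(req.Digest)
        if err != nil {
            return nil, status.Errorf(codes.InvalidArgument, "invalid digest: %v", err)
        }
        // Reference counting lives in the metadata store, so the content
        // service only removes the blob itself.
        if err := s.store.Delete(ctx, dgst); err != nil {
            return nil, err
        }
        return &emptypb.Empty{}, nil
    }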
Signed-off-by: Stephen J Day <stephen.day@docker.com>
Two test cases added (sketched below):
- Prepare, then Stat on the new layer, should return an Active layer.
- Prepare & Commit, then Stat on the new layer, should return a Committed layer.
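Sketched in Go below; the `Snapshotter` surface, the `Info`/`Kind` types, and the key names are simplified assumptions, not the exact snapshot API:

    package snapshot

    import "testing"

    // Kind and Info mirror the idea of the snapshot metadata checked here.
    type Kind int

    const (
        KindActive Kind = iota
        KindCommitted
    )

    type Info struct {
        Name string
        Kind Kind
    }

    // Snapshotter is the minimal surface assumed by these checks.
    type Snapshotter interface {
        Prepare(key, parent string) error
        Commit(name, key string) error
        Stat(key string) (Info, error)
    }

    func checkStatActive(t *testing.T, s Snapshotter) {
        if err := s.Prepare("active-1", ""); err != nil {
            t.Fatal(err)
        }
        info, err := s.Stat("active-1")
        if err != nil {
            t.Fatal(err)
        }
        if info.Kind != KindActive {
            t.Fatalf("expected active snapshot, got %v", info.Kind)
        }
    }

    func checkStatCommitted(t *testing.T, s Snapshotter) {
        if err := s.Prepare("active-2", ""); err != nil {
            t.Fatal(err)
        }
        if err := s.Commit("committed-1", "active-2"); err != nil {
            t.Fatal(err)
        }
        info, err := s.Stat("committed-1")
        if err != nil {
            t.Fatal(err)
        }
        if info.Kind != KindCommitted {
            t.Fatalf("expected committed snapshot, got %v", info.Kind)
        }
    }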
Signed-off-by: Kunal Kushwaha <kushwaha_kunal_v7@lab.ntt.co.jp>
Allow creating actives without an upper directory for
capturing changes. Actives without an upper directory
will always be mounted read-only. Read-only actives
must have a parent.
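For an overlay-backed snapshotter, the difference roughly comes down to which mount options get assembled; the sketch below is an assumption about how that might look, not the actual snapshotter code:

    package snapshot

    import "strings"

    // Mount is a trimmed-down stand-in for containerd's mount type.
    type Mount struct {
        Type    string
        Source  string
        Options []string
    }

    // overlayMount shows the difference between a writable active, which gets
    // an upperdir/workdir pair, and a read-only active, which has only its
    // parent's lower directories and is mounted ro.
    func overlayMount(lowers []string, upper, work string, readonly bool) Mount {
        var opts []string
        if readonly {
            // No upper directory: changes cannot be captured, so mount read-only.
            opts = append(opts, "ro", "lowerdir="+strings.Join(lowers, ":"))
        } else {
            opts = append(opts,
                "upperdir="+upper,
                "workdir="+work,
                "lowerdir="+strings.Join(lowers, ":"),
            )
        }
        return Mount{Type: "overlay", Source: "overlay", Options: opts}
    }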
Signed-off-by: Derek McGowan <derek@mcgstyle.net> (github: dmcgowan)
When using the fetcher concurrently, the loop modifying the closed-over
`base` parameter was causing URLs from different digests to be returned
randomly. We now copy the value and then modify the copy, which makes
this work correctly.
Luckily, we are using content-addressable storage, or this would have
been undetectable.
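The pattern of the fix, as a sketch (the names here are illustrative, not the actual fetcher code):

    package remotes

    import "net/url"

    // buildURLs copies the base URL before modifying it, so each returned URL
    // is independent; previously the shared base was mutated in place and
    // concurrent fetches could observe each other's paths.
    func buildURLs(base url.URL, paths []string) []*url.URL {
        urls := make([]*url.URL, 0, len(paths))
        for _, p := range paths {
            u := base   // copy the value; the caller's base stays untouched
            u.Path = p
            urls = append(urls, &u)
        }
        return urls
    }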
Signed-off-by: Stephen J Day <stephen.day@docker.com>
A previous PR placed the version string replacement in the `init`
function of the other commands. This makes the same change in the
`dist` tool for consistency.
Signed-off-by: Stephen J Day <stephen.day@docker.com>
Because plugins are found automatically and the default runtime is built
in, this is much better for development and testing.
Signed-off-by: Michael Crosby <crosbymichael@gmail.com>