Commit graph

3 commits

Author SHA1 Message Date
Stephen J Day
d99756a8a2
content: allow reset via Truncate
To make restarting after a failed pull less racy, we define `Truncate(size
int64) error` on `content.Writer` for the zero offset. Truncating a
writer will dump any existing data and digest state and start from the
beginning. All subsequent writes will start from the zero offset.

For the service, we support this by defining the behavior for a write
that changes the offset. To keep this narrow, we only support out-of-order
writes at offset 0, which causes the writer to dump existing data and
reset the local hash.

This makes restarting failed pulls much smoother when there was a
previously encountered error and the source doesn't support arbitrary
seeks or reads at arbitrary offsets. By allowing this to be done while
holding the write lock on a ref, we can restart the full download
without causing a race condition.

Once we implement seeking on the `io.Reader` returned by the fetcher,
this will be less useful, but it is good to ensure that our protocol
properly supports this use case for when streaming is the only option.

Signed-off-by: Stephen J Day <stephen.day@docker.com>
2017-02-28 10:40:02 -08:00
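
Below is a minimal, hypothetical sketch of the reset behavior this commit describes: a writer that accumulates data and digest state, plus a `Truncate` that only accepts size 0 and drops both so the transfer can restart from the beginning while the caller still holds the ref lock. The `writer` type and its methods are illustrative stand-ins, not containerd's actual `content.Writer` implementation.

```go
package main

import (
	"crypto/sha256"
	"fmt"
	"hash"
)

// writer is a simplified, hypothetical stand-in for a content writer that
// supports Truncate(0): it drops buffered data and digest state so a failed
// pull can restart from offset zero under the same ref lock.
type writer struct {
	buf    []byte
	digest hash.Hash
}

func newWriter() *writer {
	return &writer{digest: sha256.New()}
}

func (w *writer) Write(p []byte) (int, error) {
	w.buf = append(w.buf, p...)
	return w.digest.Write(p)
}

// Truncate only supports size == 0, mirroring the narrow behavior described
// in the commit: out-of-order writes are accepted only at offset 0.
func (w *writer) Truncate(size int64) error {
	if size != 0 {
		return fmt.Errorf("truncate only supported at size 0, got %d", size)
	}
	w.buf = nil
	w.digest.Reset()
	return nil
}

func main() {
	w := newWriter()
	w.Write([]byte("partial, failed download"))

	// A failure occurred mid-pull; reset and start again from offset zero.
	if err := w.Truncate(0); err != nil {
		panic(err)
	}
	w.Write([]byte("complete content"))
	fmt.Printf("sha256:%x over %d bytes\n", w.digest.Sum(nil), len(w.buf))
}
```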
Stephen J Day
c062a85782
content: cleanup service and interfaces
After implementing pull, a few changes are required to the content store
interface to make sure that the implementation works smoothly.
Specifically, we work to make sure the predeclaration path for digests
works the same between remote and local writers. Before, we were
hesitant to require the size and digest up front, but it became clear
that providing them yields significant benefit.

There are also several cleanups related to naming. We now call the
expected digest `Expected` consistently across the board and `Total` is
used to mark the expected size.

This whole effort comes together to provide a very smooth status
reporting workflow for image pull and push. This will be more obvious
when the bulk of pull code lands.

There are a few other changes to make `content.WriteBlob` more broadly
useful. In keeping with the addition of predeclared expected sizes when
getting a `Writer`, `WriteBlob` now supports this fully. It will also
resume downloads if provided an `io.Seeker` or `io.ReaderAt`. Coupled
with the `httpReadSeeker` from `docker/distribution`, we should only be
a few lines of code away from resumable downloads.

Signed-off-by: Stephen J Day <stephen.day@docker.com>
2017-02-22 13:30:01 -08:00
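
The following is a hedged sketch of the resume behavior this commit describes: the total size is declared up front (the commit's `Total`), and when the source supports seeking the copy resumes from the writer's current offset instead of starting over. The function name and parameters are invented for illustration and are not containerd's actual `WriteBlob` signature; verifying the declared digest (`Expected`) would layer on in the same way as the size check shown here.

```go
package main

import (
	"bytes"
	"fmt"
	"io"
)

// writeBlob is a hypothetical helper: if the writer already holds `offset`
// bytes and the source is seekable, skip ahead rather than re-downloading,
// then verify the declared total size at the end.
func writeBlob(dst io.Writer, src io.Reader, offset, total int64) error {
	if offset > 0 {
		s, ok := src.(io.Seeker)
		if !ok {
			return fmt.Errorf("cannot resume at offset %d: source is not seekable", offset)
		}
		if _, err := s.Seek(offset, io.SeekStart); err != nil {
			return err
		}
	}
	n, err := io.Copy(dst, src)
	if err != nil {
		return err
	}
	if got := offset + n; got != total {
		return fmt.Errorf("unexpected size: wrote %d bytes, expected total %d", got, total)
	}
	return nil
}

func main() {
	src := bytes.NewReader([]byte("hello, content store"))
	var dst bytes.Buffer

	// Pretend the first 7 bytes were already committed before a failure.
	if err := writeBlob(&dst, src, 7, int64(src.Len())); err != nil {
		panic(err)
	}
	fmt.Printf("resumed write: %q\n", dst.String())
}
```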
Stephen J Day
baaf7543dc
api/services/content: define the content service
Bring the content service into the containerd API. This allows the
content store to be coordinated in the containerd daemon with minimal
effort. For the most part, this API follows the conventions and behavior
of the existing content store implementation with a few caveats.
Specifically, we remove the object oriented transaction mechanism in
favor of a very rich `Write` call.

Pains are taken to reduce race conditions that arise when multiple
writers target a single piece of content. Clients should be able to race
towards getting a write lock on a reference, then wait on each other.

For the most part, this should be generically pluggable to allow
implementations of the content store to be swapped out.

We'll follow this up with an implementation to validate the model.

Signed-off-by: Stephen J Day <stephen.day@docker.com>
2017-02-21 13:10:10 -08:00
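
Below is a hypothetical illustration of the locking model this commit describes: clients race to acquire a per-reference write lock, and the losers simply wait on the winner. The `refLocker` type and its methods are invented for this sketch and do not reflect the service's actual implementation.

```go
package main

import (
	"fmt"
	"sync"
)

// refLocker hands out one mutex per reference so that concurrent writers to
// the same content serialize on a single write lock.
type refLocker struct {
	mu   sync.Mutex
	refs map[string]*sync.Mutex
}

func newRefLocker() *refLocker {
	return &refLocker{refs: make(map[string]*sync.Mutex)}
}

// Lock returns the per-reference lock, creating it on first use, and acquires it.
func (l *refLocker) Lock(ref string) *sync.Mutex {
	l.mu.Lock()
	m, ok := l.refs[ref]
	if !ok {
		m = &sync.Mutex{}
		l.refs[ref] = m
	}
	l.mu.Unlock()
	m.Lock()
	return m
}

func main() {
	locker := newRefLocker()
	var wg sync.WaitGroup

	// Several writers race towards the same ref; only one holds the lock at a time.
	for i := 0; i < 3; i++ {
		wg.Add(1)
		go func(id int) {
			defer wg.Done()
			m := locker.Lock("ref/example")
			defer m.Unlock()
			fmt.Printf("writer %d holds the ref lock\n", id)
		}(i)
	}
	wg.Wait()
}
```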