Add functionality for restoring containers after containerd dies and is
restarted with terminated shims.
This ensures that on restore, if a container no longer has a running
shim, containerd will kill the container and clean up its state.
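Roughly, the restore path behaves like the sketch below; the `container`
interface and helper names are illustrative stand-ins, not the real
containerd types.

```go
package supervisor

import (
	"context"
	"log"
	"syscall"
)

// container stands in for whatever state containerd reloads per
// container; the methods below are assumptions for illustration, not
// the real containerd API.
type container interface {
	ID() string
	ShimAlive() bool
	Kill(ctx context.Context, sig syscall.Signal) error
	Delete(ctx context.Context) error
}

// restore walks the containers found on disk after the daemon restarts
// and cleans up any whose shim is no longer running.
func restore(ctx context.Context, containers []container) {
	for _, c := range containers {
		if c.ShimAlive() {
			continue
		}
		if err := c.Kill(ctx, syscall.SIGKILL); err != nil {
			log.Printf("kill %s: %v", c.ID(), err)
		}
		if err := c.Delete(ctx); err != nil {
			log.Printf("cleanup %s: %v", c.ID(), err)
		}
	}
}
```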
Signed-off-by: Michael Crosby <crosbymichael@gmail.com>
For some reason, when I wrote this, I forgot about the `View` and
`Update` helpers on boltdb. These are now used and make the code much
easier to follow.
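For reference, the helpers wrap the transaction lifecycle as in the
example below; the `images` bucket and key layout here are made up.

```go
package metadata

import (
	"errors"

	"github.com/boltdb/bolt"
)

// putImage uses Update to run a read-write transaction; commit and
// rollback are handled by the helper.
func putImage(db *bolt.DB, name string, digest []byte) error {
	return db.Update(func(tx *bolt.Tx) error {
		b, err := tx.CreateBucketIfNotExists([]byte("images"))
		if err != nil {
			return err
		}
		return b.Put([]byte(name), digest)
	})
}

// getImage uses View to run a read-only transaction.
func getImage(db *bolt.DB, name string) ([]byte, error) {
	var digest []byte
	err := db.View(func(tx *bolt.Tx) error {
		b := tx.Bucket([]byte("images"))
		if b == nil {
			return errors.New("images bucket not found")
		}
		digest = append(digest, b.Get([]byte(name))...)
		return nil
	})
	return digest, err
}
```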
Signed-off-by: Stephen J Day <stephen.day@docker.com>
With this changeset, the image store is now completely accessible over
GRPC. No clients manipulate the image store database directly and the
GRPC client is fully featured. The metadata database is now managed by
the daemon and access is coordinated via services.
Signed-off-by: Stephen J Day <stephen.day@docker.com>
Server and client sides of the image store are now provided. We have
created an image metadata interface and converted the bolt functions to
implement that interface over a transaction. A remote client
implementation is provided that implements the same interface.
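A rough sketch of the shape of that shared interface, with made-up
method and field names rather than the committed API:

```go
package images

import (
	"context"

	"github.com/opencontainers/go-digest"
)

// Image pairs a name with the digest it resolves to; the field set is
// illustrative only.
type Image struct {
	Name   string
	Digest digest.Digest
}

// Store sketches the shared metadata interface: the bolt-backed side
// implements it over a transaction and the GRPC client implements it
// over the wire.
type Store interface {
	Put(ctx context.Context, image Image) error
	Get(ctx context.Context, name string) (Image, error)
	List(ctx context.Context) ([]Image, error)
	Delete(ctx context.Context, name string) error
}
```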
Signed-off-by: Stephen J Day <stephen.day@docker.com>
This is a first pass at the metadata required for supporting an image
store. We use a shallow approach to the problem, allowing this
component to centralize the naming. Resources for this image can then be
"snowballed" in for actual implementations. This is better understood
through example.
Let's take pull. One could register the name "docker.io/stevvooe/foo" as
pointing at a particular digest. When instructed to pull or fetch, the
system will notice that no components of that image are present locally.
It can then recursively resolve the resources for that image and fetch
them into the content store. Next time the instruction is issued, the
content will be present so no action will be taken.
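Sketched in code, the flow looks roughly like this; the store
interfaces and method names are stand-ins, not the real API.

```go
package pull

import "context"

// Descriptor and the two small interfaces below are stand-ins for the
// image and content store APIs, used only to illustrate the flow.
type Descriptor struct {
	Digest string
	Size   int64
}

type imageStore interface {
	// Resolve maps a name such as "docker.io/stevvooe/foo" to the
	// descriptors (manifest, config, layers) that make up the image.
	Resolve(ctx context.Context, name string) ([]Descriptor, error)
}

type contentStore interface {
	Exists(ctx context.Context, dgst string) bool
	Fetch(ctx context.Context, desc Descriptor) error
}

// Pull fetches any resources of the named image that are not yet
// present in the content store. Issuing it again once everything is
// local is a no-op.
func Pull(ctx context.Context, name string, images imageStore, content contentStore) error {
	descs, err := images.Resolve(ctx, name)
	if err != nil {
		return err
	}
	for _, d := range descs {
		if content.Exists(ctx, d.Digest) {
			continue
		}
		if err := content.Fetch(ctx, d); err != nil {
			return err
		}
	}
	return nil
}
```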
Another example is preparing the rootfs. The requirements for a rootfs
can be resolved from a name. These "diff ids" will then be compared with
what is available in the snapshot manager. Any parts of the rootfs, such
as a layer, that aren't available in the snapshotter can be unpacked.
Once this process is satisfied, the image will be runnable as a
container.
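A rough sketch of that preparation step, with the snapshotter and
applier reduced to assumed stand-in interfaces and the chain ID
derivation simplified:

```go
package rootfs

import "context"

// Layer and the interfaces below are illustrative stand-ins for the
// snapshotter and diff applier, not the real containerd types.
type Layer struct {
	DiffID string // content-addressed identity of the uncompressed layer
}

type snapshotter interface {
	Exists(ctx context.Context, chainID string) bool
	Prepare(ctx context.Context, chainID, parent string) error
}

type applier interface {
	Apply(ctx context.Context, chainID string, layer Layer) error
}

// Prepare compares the diff IDs required by an image against what the
// snapshotter already has and unpacks only the missing layers. Once it
// returns nil, the rootfs is ready to run as a container.
func Prepare(ctx context.Context, layers []Layer, sn snapshotter, ap applier) error {
	parent := ""
	for _, l := range layers {
		// Real chain IDs are derived from the parent chain; simplified here.
		chainID := l.DiffID
		if !sn.Exists(ctx, chainID) {
			if err := sn.Prepare(ctx, chainID, parent); err != nil {
				return err
			}
			if err := ap.Apply(ctx, chainID, l); err != nil {
				return err
			}
		}
		parent = chainID
	}
	return nil
}
```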
Signed-off-by: Stephen J Day <stephen.day@docker.com>
The service can use the snapshotter directly to get the rootfs.
Removed debug line for mount response.
Signed-off-by: Derek McGowan <derek@mcgstyle.net> (github: dmcgowan)
The message was defined but the method was returning an empty result;
plumb through the result from the shim layer.
Compile tested only.
Signed-off-by: Ian Campbell <ian.campbell@docker.com>
To make restarting after failed pull less racy, we define `Truncate(size
int64) error` on `content.Writer` for the zero offset. Truncating a
writer will dump any existing data and digest state and start from the
beginning. All subsequent writes will start from the zero offset.
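A minimal sketch of how a caller might restart a failed write, assuming
the writer also satisfies `io.Writer`; only `Truncate(size int64) error`
comes from this change.

```go
package example

import "io"

// writer captures just the part of content.Writer exercised here: the
// io.Writer side (assumed) plus the Truncate method this change adds.
type writer interface {
	io.Writer
	Truncate(size int64) error
}

// restartWrite dumps the partial data and digest state from a failed
// attempt and replays the content from offset zero.
func restartWrite(w writer, r io.Reader) error {
	if err := w.Truncate(0); err != nil {
		return err
	}
	_, err := io.Copy(w, r)
	return err
}
```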
For the service, we support this by defining the behavior for a write
that changes the offset. To keep this narrow, we only support writes out
of order at the offset 0, which causes the writer to dump existing data
and reset the local hash.
This makes restarting failed pulls much smoother when there was a
previously encountered error and the source doesn't support arbitrary
seeks or reads at arbitrary offsets. By allowing this to be done while
holding the write lock on a ref, we can restart the full download
without causing a race condition.
Once we implement seeking on the `io.Reader` returned by the fetcher,
this will be less useful, but it is good to ensure that our protocol
properly supports this use case for when streaming is the only option.
Signed-off-by: Stephen J Day <stephen.day@docker.com>
Allow deletion of content over the GRPC interface. For now, we are going
with a model that conducts reference management outside of the content
store, in the metadata store, but this design is valid either way.
Signed-off-by: Stephen J Day <stephen.day@docker.com>
For clients which only want to know about one container, this is simpler
than searching the result of execution.List.
Signed-off-by: Ian Campbell <ian.campbell@docker.com>
After implementing pull, a few changes are required to the content store
interface to make sure that the implementation works smoothly.
Specifically, we work to make sure the predeclaration path for digests
works the same between remote and local writers. Before, we were
hesitant to require the size and digest up front, but it became
clear that having this provided a significant benefit.
There are also several cleanups related to naming. We now call the
expected digest `Expected` consistently across the board and `Total` is
used to mark the expected size.
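As an illustration of where the names land; only the `Expected`/`Total`
naming comes from this change, the surrounding fields are assumptions.

```go
package example

// writeStatus is an illustrative sketch of the naming convention.
type writeStatus struct {
	Ref      string // reference the write is happening under
	Expected string // expected digest of the completed blob
	Total    int64  // expected total size in bytes
	Offset   int64  // current committed offset
}
```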
This whole effort comes together to provide a very smooth status
reporting workflow for image pull and push. This will be more obvious
when the bulk of pull code lands.
There are a few other changes to make `content.WriteBlob` more broadly
useful. In line with the addition of predeclaring the expected size when
getting a `Writer`, `WriteBlob` now supports this fully. It will also
resume downloads if provided an `io.Seeker` or `io.ReaderAt`. Coupled
with the `httpReadSeeker` from `docker/distribution`, we should only be
a few lines of code away from resumable downloads.
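Roughly, the resume path looks like the sketch below, simplified to the
`io.Seeker` case and with an assumed `Offset` accessor on the writer.

```go
package example

import (
	"errors"
	"io"
)

// blobWriter abridges content.Writer to the pieces used here; Offset
// is an assumed accessor for how much data is already committed.
type blobWriter interface {
	io.Writer
	Offset() int64
}

// writeBlob sketches the resume behaviour: when some data is already
// present, seek the source ahead instead of re-downloading it, then
// copy only what remains of the declared total.
func writeBlob(w blobWriter, r io.Reader, total int64) error {
	remaining := total
	if offset := w.Offset(); offset > 0 {
		seeker, ok := r.(io.Seeker)
		if !ok {
			return errors.New("cannot resume: source does not support seeking")
		}
		if _, err := seeker.Seek(offset, io.SeekStart); err != nil {
			return err
		}
		remaining -= offset
	}
	_, err := io.CopyN(w, r, remaining)
	return err
}
```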
Signed-off-by: Stephen J Day <stephen.day@docker.com>
Add registration for more subsystems via plugins
Signed-off-by: Michael Crosby <crosbymichael@gmail.com>
Move content service to separate package
Signed-off-by: Michael Crosby <crosbymichael@gmail.com>