Rather than accept the result of a single layer validation, we retry up to
three times, backing off 100ms after each attempt. The thought is that this
gives s3 files time to make their way into the correct location, increasing
the likelihood that verification can proceed, if possible.
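A minimal sketch of the retry loop (the function name and signature here
are illustrative assumptions, not the actual code):

    package storage

    import "time"

    // retryValidate runs validate up to three times, sleeping 100ms
    // after each failure so files on an eventually consistent backend
    // such as S3 have time to land in the expected location.
    func retryValidate(validate func() error) error {
        var err error
        for attempts := 0; attempts < 3; attempts++ {
            if err = validate(); err == nil {
                return nil
            }
            time.Sleep(100 * time.Millisecond)
        }
        return err
    }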
Signed-off-by: Stephen J Day <stephen.day@docker.com>
Thanks to @dmcgowan for noticing.
Added a test case to make sure Save() can create the dir and then
read from it.
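A sketch of the shape of such a test; the save and load parameters stand
in for the real Save()/Load() entry points, which are assumptions here:

    package cliconfig

    import (
        "os"
        "path/filepath"
        "testing"
    )

    // save and load stand in for the real entry points under test.
    func testSaveCreatesDir(t *testing.T, save, load func(dir string) error) {
        dir := filepath.Join(t.TempDir(), "not-yet-created")
        if err := save(dir); err != nil {
            t.Fatalf("save: %v", err)
        }
        if _, err := os.Stat(dir); err != nil {
            t.Fatalf("save did not create %s: %v", dir, err)
        }
        if err := load(dir); err != nil {
            t.Fatalf("read back: %v", err)
        }
    }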
Signed-off-by: Doug Davis <dug@us.ibm.com>
This PR does the following:
- migrates ~/.dockercfg to ~/.docker/config.json. The data is migrated,
but the old file remains in case it's needed
- moves the auth json in that file into an "auths" property so we can add
new top-level properties w/o messing with the auth stuff
- adds support for an HttpHeaders property in ~/.docker/config.json
which adds these HTTP headers to all messages from the CLI (sketched below)
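Roughly, the migrated file ends up shaped like this (registry URL, email,
and header values are illustrative):

    {
      "auths": {
        "https://index.docker.io/v1/": {
          "auth": "<base64 of username:password>",
          "email": "user@example.com"
        }
      },
      "HttpHeaders": {
        "MyHeader": "MyValue"
      }
    }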
In a follow-on PR I'll move the config file processing out from under
"registry" since it's not specific to that any more. I didn't do it here
because I wanted the diff to be smaller so people can make sure I didn't
break/miss any auth code during my edits.
Signed-off-by: Doug Davis <dug@us.ibm.com>
When the registry starts, a background timer periodically scans the
upload directories on the file system every 24 hours and deletes any
files older than one week. An initial jitter is intended to avoid
contention on the filesystem when multiple registries with the same
storage driver are started simultaneously.
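A sketch of the loop; purgeUploads and the one-hour jitter bound are
assumptions for illustration:

    package registry

    import (
        "math/rand"
        "time"
    )

    // startUploadPurger runs the scan in the background: jitter once,
    // then purge week-old upload files every 24 hours.
    func startUploadPurger(purgeUploads func(olderThan time.Time)) {
        go func() {
            // Initial jitter so registries started simultaneously
            // against the same backend don't all scan at once.
            time.Sleep(time.Duration(rand.Intn(3600)) * time.Second)
            for {
                purgeUploads(time.Now().Add(-7 * 24 * time.Hour))
                time.Sleep(24 * time.Hour)
            }
        }()
    }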
Ensure that the status is logged in the context by instantiating it before
the request is routed to handlers. While this requires some level of hacking
to achieve, the result is that the context value of "http.request.status" is
as accurate as possible for each request.
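The idea, sketched (not the actual handler code):

    package handlers

    import "net/http"

    // statusRecorder captures the status code written by downstream
    // handlers so it can be logged as "http.request.status".
    type statusRecorder struct {
        http.ResponseWriter
        status int
    }

    func (w *statusRecorder) WriteHeader(code int) {
        w.status = code
        w.ResponseWriter.WriteHeader(code)
    }

    // withStatus wraps the handler before routing so every request
    // writes through the recorder.
    func withStatus(h http.Handler) http.Handler {
        return http.HandlerFunc(func(w http.ResponseWriter, r *http.Request) {
            rec := &statusRecorder{ResponseWriter: w, status: http.StatusOK}
            h.ServeHTTP(rec, r)
            // rec.status now holds the value to put in the context.
        })
    }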
Signed-off-by: Stephen J Day <stephen.day@docker.com>
Registry is intended to be used as a repository service rather than an abstract collection of repositories. Namespace better describes a collection of repositories retrievable by name.
The registry service serves any repository in the global scope.
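A sketch of the renamed interface; Repository is stubbed out here and the
method set is illustrative:

    package distribution

    import "context"

    // Repository stands in for the full repository API.
    type Repository interface{}

    // Namespace is a collection of repositories retrievable by name.
    type Namespace interface {
        Repository(ctx context.Context, name string) (Repository, error)
    }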
Signed-off-by: Derek McGowan <derek@mcgstyle.net> (github: dmcgowan)
This moves the instance id out of the app so that it is associated with an
instantiation of the runtime. The instance id is stored on the background
context. This allows contexts using the main background context to
include an instance id for log messages. It also simplifies the application
slightly.
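A sketch of the idea; the key name and id format are assumptions for
illustration:

    package main

    import (
        "context"
        "fmt"
        "math/rand"
    )

    type ctxKey string

    // background is created once per process, so every context derived
    // from it carries the same instance id into log messages.
    var background = context.WithValue(
        context.Background(), ctxKey("instance.id"),
        fmt.Sprintf("%x", rand.Int63()))

    // Background returns the process-wide context.
    func Background() context.Context { return background }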
Signed-off-by: Stephen J Day <stephen.day@docker.com>
The original implementation wrote to different locations in a shared slice.
While this is theoretically okay, we end up thrashing the CPU cache since
multiple slice members may be on the same cache line. So, even though each
thread has its own memory location, there may be contention over the cache
line. This changes the code to aggregate to a slice in a single goroutine.
In reality, this change likely won't have any performance impact. The theory
proposed above hasn't really even been tested. Either way, we can consider it
and possibly go forward.
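The new shape, sketched with illustrative names: workers send results over
a channel and only one goroutine appends to the slice, so no two goroutines
write to neighboring memory.

    package main

    import "sync"

    func gather(jobs []func() int) []int {
        results := make(chan int)
        var wg sync.WaitGroup
        for _, job := range jobs {
            wg.Add(1)
            go func(job func() int) {
                defer wg.Done()
                results <- job()
            }(job)
        }
        go func() { wg.Wait(); close(results) }()

        var out []int // appended to by this goroutine only
        for r := range results {
            out = append(out, r)
        }
        return out
    }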
Signed-off-by: Stephen J Day <stephen.day@docker.com>
Rather than enforce lowercase paths for all drivers, support for
case-sensitivity has been deferred to the driver. There are a few caveats to
this approach:
1. There are possible security implications for tags that only differ in their
case. For instance, a tag "A" may be equivalent to tag "a" on certain file
system backends.
2. Where possible, system paths should avoid case-sensitive identifiers.
This might be problematic in a blob store that uses case-sensitive ids. For
now, since digest hex ids are all case-insensitive, this will not be an issue.
The recommended workaround is to not run the registry on a case-insensitive
filesystem driver in security-sensitive applications.
Signed-off-by: Stephen J Day <stephen.day@docker.com>
To avoid compounded round trips leading to slow retrieval of manifests with a
large number of signatures, the fetch of signatures has been parallelized. This
simply spawns a goroutine for each path, coordinated with a sync.WaitGroup.
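Sketched below; fetchSignature stands in for the per-path blob read:

    package storage

    import "sync"

    // fetchAll spawns one goroutine per signature path and joins them
    // with a WaitGroup; each goroutine fills its own slot.
    func fetchAll(paths []string, fetchSignature func(string) []byte) [][]byte {
        signatures := make([][]byte, len(paths))
        var wg sync.WaitGroup
        for i, p := range paths {
            wg.Add(1)
            go func(i int, p string) {
                defer wg.Done()
                signatures[i] = fetchSignature(p)
            }(i, p)
        }
        wg.Wait()
        return signatures
    }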
Signed-off-by: Stephen J Day <stephen.day@docker.com>
registry/SearchResults was missing the "is_automated" field, so I
added it back in.
Pulled this 'table' removal out from the others because it fixed
a bug too.
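Roughly, the relevant part of the struct (other fields elided):

    package registry

    // SearchResult, with the json tag that was missing restored.
    type SearchResult struct {
        Name        string `json:"name"`
        Description string `json:"description"`
        IsOfficial  bool   `json:"is_official"`
        IsAutomated bool   `json:"is_automated"`
    }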
Signed-off-by: Doug Davis <dug@us.ibm.com>
For consistency with other systems, the redis and caching monitoring data has
been moved under the "registry" section in expvar. This ensures the entire
registry state is kept to a single section.
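For illustration, the expvar pattern (the key names are assumptions):

    package main

    import "expvar"

    // Publish one top-level "registry" map and hang the cache counters
    // off of it, instead of exporting them at the top level.
    var registryVars = expvar.NewMap("registry")

    func init() {
        cache := new(expvar.Map).Init()
        cache.Add("hits", 0)
        cache.Add("misses", 0)
        registryVars.Set("cache", cache)
    }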
Signed-off-by: Stephen J Day <stephen.day@docker.com>
This allows one to better control the usage of the cache and turn it off
completely. The storage configuration module was modified to allow parameters
to be passed to just the storage implementation, rather than to the driver.
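As an example, the cache can be selected under the storage section roughly
like this (the exact key names are an assumption from this era of the
config format):

    storage:
      filesystem:
        rootdirectory: /var/lib/registry
      cache:
        layerinfo: redis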
Signed-off-by: Stephen J Day <stephen.day@docker.com>
This changeset integrates the layer info cache with the registry webapp and
storage backend. The main benefit is to cache immutable layer metadata,
reducing backend roundtrips. The cache can be configured to use either redis or
an inmemory cache.
This provides massive performance benefits for HTTP HEAD checks on layer blobs
and manifest verification.
Signed-off-by: Stephen J Day <stephen.day@docker.com>
This changeset defines the interface for layer info caches. Layer info caches
speed up access to layer metadata kept in storage driver backends. The
two main operations are tests for repository membership and resolving path and
size information for backend blobs.
Two implementations are available. The main implementation leverages redis to
store layer info. An alternative implementation simply caches layer info in
maps, which should speed up resolution for less sophisticated deployments.
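A sketch of the interface shape described above; names are illustrative
and digests are plain strings here for brevity:

    package cache

    // LayerMeta carries the path and size information resolved for a
    // backend blob.
    type LayerMeta struct {
        Path string // blob path in the storage backend
        Size int64
    }

    type LayerInfoCache interface {
        // Contains reports whether the named repository is known to
        // contain the layer with the given digest.
        Contains(repo, dgst string) (bool, error)

        // Add records that the repository contains the layer.
        Add(repo, dgst string) error

        // Meta resolves cached path and size information for a blob.
        Meta(dgst string) (LayerMeta, error)

        // SetMeta stores path and size information for a blob.
        SetMeta(dgst string, meta LayerMeta) error
    }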
Signed-off-by: Stephen J Day <stephen.day@docker.com>