---
title: "Configuring a registry"
description: "Explains how to configure a registry"
keywords: ["registry, on-prem, images, tags, repository, distribution, configuration"]
---
# Registry Configuration Reference
The Registry configuration is based on a YAML file, detailed below. While it comes with sane default values out of the box, you are heavily encouraged to review it exhaustively before moving your systems to production.
## Override specific configuration options
In a typical setup where you run your Registry from the official image, you can specify a configuration variable from the environment by passing `-e` arguments to your `docker run` stanza, or from within a Dockerfile using the `ENV` instruction.
To override a configuration option, create an environment variable named
`REGISTRY_variable` where *`variable`* is the name of the configuration option
and the `_` (underscore) represents indention levels. For example, you can
configure the `rootdirectory` of the `filesystem` storage backend:
```
storage:
filesystem:
rootdirectory: /var/lib/registry
```
To override this value, set an environment variable like this:
```
REGISTRY_STORAGE_FILESYSTEM_ROOTDIRECTORY=/somewhere
```
This variable overrides the `/var/lib/registry` value with the `/somewhere`
directory.
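For example, you can pass the override to the official image on the command
line (a sketch; the `/somewhere` path is only an illustration):

```
docker run -d -p 5000:5000 --restart=always --name registry \
  -e REGISTRY_STORAGE_FILESYSTEM_ROOTDIRECTORY=/somewhere \
  registry:2
```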
>**NOTE**: It is highly recommended to create a base configuration file with which environment variables can be used to tweak individual values. Overriding configuration sections with environment variables is not recommended.
## Overriding the entire configuration file
If the default configuration is not a sound basis for your usage, or if you are having issues overriding keys from the environment, you can specify an alternate YAML configuration file by mounting it as a volume in the container.
Typically, create a new configuration file from scratch, and call it `config.yml`, then:
```
docker run -d -p 5000:5000 --restart=always --name registry \
  -v `pwd`/config.yml:/etc/docker/registry/config.yml \
registry:2
```
You can (and probably should) use [this as a starting point](https://github.com/docker/distribution/blob/master/cmd/registry/config-example.yml).
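If you prefer to start small, a minimal `config.yml` for a local,
filesystem-backed registry might look like the following sketch (all values are
illustrative):

```
version: 0.1
log:
  fields:
    service: registry
storage:
  filesystem:
    rootdirectory: /var/lib/registry
http:
  addr: :5000
  headers:
    X-Content-Type-Options: [nosniff]
```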
## List of configuration options
This section lists all the registry configuration options. Some options in
the list are mutually exclusive, so make sure to read the detailed reference
information about each option that appears later in this page.
```
version: 0.1
log:
accesslog:
disabled: true
level: debug
formatter: text
fields:
service: registry
environment: staging
hooks:
- type: mail
disabled: true
levels:
- panic
options:
smtp:
addr: mail.example.com:25
username: mailuser
password: password
insecure: true
from: sender@example.com
to:
- errors@example.com
loglevel: debug # deprecated: use "log"
storage:
filesystem:
rootdirectory: /var/lib/registry
maxthreads: 100
azure:
accountname: accountname
accountkey: base64encodedaccountkey
container: containername
gcs:
bucket: bucketname
keyfile: /path/to/keyfile
rootdirectory: /gcs/object/name/prefix
chunksize: 5242880
s3:
accesskey: awsaccesskey
secretkey: awssecretkey
region: us-west-1
regionendpoint: http://myobjects.local
bucket: bucketname
encrypt: true
keyid: mykeyid
secure: true
v4auth: true
chunksize: 5242880
multipartcopychunksize: 33554432
multipartcopymaxconcurrency: 100
multipartcopythresholdsize: 33554432
rootdirectory: /s3/object/name/prefix
swift:
username: username
password: password
authurl: https://storage.myprovider.com/auth/v1.0 or https://storage.myprovider.com/v2.0 or https://storage.myprovider.com/v3/auth
tenant: tenantname
tenantid: tenantid
domain: domain name for Openstack Identity v3 API
domainid: domain id for Openstack Identity v3 API
insecureskipverify: true
region: fr
container: containername
rootdirectory: /swift/object/name/prefix
oss:
accesskeyid: accesskeyid
accesskeysecret: accesskeysecret
region: OSS region name
endpoint: optional endpoints
internal: optional internal endpoint
bucket: OSS bucket
encrypt: optional data encryption setting
secure: optional ssl setting
    chunksize: optional size value
rootdirectory: optional root directory
inmemory: # This driver takes no parameters
delete:
enabled: false
redirect:
disable: false
cache:
blobdescriptor: redis
maintenance:
uploadpurging:
enabled: true
age: 168h
interval: 24h
dryrun: false
readonly:
enabled: false
auth:
silly:
realm: silly-realm
service: silly-service
token:
realm: token-realm
service: token-service
issuer: registry-token-issuer
rootcertbundle: /root/certs/bundle
htpasswd:
realm: basic-realm
path: /path/to/htpasswd
middleware:
registry:
- name: ARegistryMiddleware
options:
foo: bar
repository:
- name: ARepositoryMiddleware
options:
foo: bar
storage:
- name: cloudfront
options:
baseurl: https://my.cloudfronted.domain.com/
privatekey: /path/to/pem
keypairid: cloudfrontkeypairid
duration: 3000s
storage:
- name: redirect
options:
baseurl: https://example.com/
reporting:
bugsnag:
apikey: bugsnagapikey
releasestage: bugsnagreleasestage
endpoint: bugsnagendpoint
newrelic:
licensekey: newreliclicensekey
name: newrelicname
verbose: true
http:
addr: localhost:5000
prefix: /my/nested/registry/
host: https://myregistryaddress.org:5000
secret: asecretforlocaldevelopment
relativeurls: false
tls:
certificate: /path/to/x509/public
key: /path/to/x509/private
clientcas:
- /path/to/ca.pem
- /path/to/another/ca.pem
letsencrypt:
cachefile: /path/to/cache-file
email: emailused@letsencrypt.com
debug:
addr: localhost:5001
headers:
X-Content-Type-Options: [nosniff]
http2:
disabled: false
notifications:
endpoints:
- name: alistener
disabled: false
url: https://my.listener.com/event
      headers: <http.Header>
timeout: 500
threshold: 5
backoff: 1000
ignoredmediatypes:
- application/octet-stream
redis:
addr: localhost:6379
password: asecret
db: 0
dialtimeout: 10ms
readtimeout: 10ms
writetimeout: 10ms
pool:
maxidle: 16
maxactive: 64
idletimeout: 300s
health:
storagedriver:
enabled: true
interval: 10s
threshold: 3
file:
- file: /path/to/checked/file
interval: 10s
http:
- uri: http://server.to.check/must/return/200
headers:
Authorization: [Basic QWxhZGRpbjpvcGVuIHNlc2FtZQ==]
statuscode: 200
timeout: 3s
interval: 10s
threshold: 3
tcp:
- addr: redis-server.domain.com:6379
timeout: 3s
interval: 10s
threshold: 3
proxy:
remoteurl: https://registry-1.docker.io
username: [username]
password: [password]
compatibility:
schema1:
signingkeyfile: /etc/registry/key.json
validation:
manifests:
urls:
allow:
- ^https?://([^/]+\.)*example\.com/
deny:
- ^https?://www\.example\.com/
```
In some instances a configuration option is **optional** but it contains child
options marked as **required**. This indicates that you can omit the parent with
all its children. However, if the parent is included, you must also include all
the children marked **required**.
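For example, `auth` is optional as a whole, but if you include its `token`
provider you must also supply all of that provider's required children (a
sketch with placeholder values):

```
auth:
  token:
    realm: token-realm
    service: token-service
    issuer: registry-token-issuer
    rootcertbundle: /root/certs/bundle
```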
## version
```
version: 0.1
```
The `version` option is **required**. It specifies the configuration's version.
It is expected to remain a top-level field, to allow for a consistent version
check before parsing the remainder of the configuration file.
## log
The `log` subsection configures the behavior of the logging system. The logging
system outputs everything to stdout. You can adjust the granularity and format
with this configuration section.
```
log:
accesslog:
disabled: true
level: debug
formatter: text
fields:
service: registry
environment: staging
```
| Parameter | Required | Description |
|:----------|:---------|:------------|
| `level` | no | Sets the sensitivity of logging output. Permitted values are `error`, `warn`, `info` and `debug`. The default is `info`. |
| `formatter` | no | This selects the format of logging output. The format primarily affects how keyed attributes for a log line are encoded. Options are `text`, `json` or `logstash`. The default is `text`. |
| `fields` | no | A map of field names to values. These are added to every log line for the context. This is useful for identifying a log message's source after being mixed in with other systems. |
### accesslog
```
accesslog:
disabled: true
```
Within `log`, `accesslog` configures the behavior of the access logging
system. By default, the access logging system outputs to stdout in
[Combined Log Format](https://httpd.apache.org/docs/2.4/logs.html#combined).
Access logging can be disabled by setting the boolean flag `disabled` to `true`.
## hooks
```
hooks:
- type: mail
levels:
- panic
options:
smtp:
addr: smtp.sendhost.com:25
username: sendername
password: password
insecure: true
from: name@sendhost.com
to:
- name@receivehost.com
```
The `hooks` subsection configures the logging hooks' behavior. This subsection
includes a sequence handler which you can use for sending mail, for example.
Refer to `loglevel` to configure the level of messages printed.
## loglevel
> **DEPRECATED:** Please use [log](#log) instead.
```
loglevel: debug
```
Permitted values are `error`, `warn`, `info` and `debug`. The default is
`info`.
## storage
```
storage:
filesystem:
rootdirectory: /var/lib/registry
azure:
accountname: accountname
accountkey: base64encodedaccountkey
container: containername
gcs:
bucket: bucketname
keyfile: /path/to/keyfile
rootdirectory: /gcs/object/name/prefix
s3:
accesskey: awsaccesskey
secretkey: awssecretkey
region: us-west-1
regionendpoint: http://myobjects.local
bucket: bucketname
encrypt: true
keyid: mykeyid
secure: true
v4auth: true
chunksize: 5242880
multipartcopychunksize: 33554432
multipartcopymaxconcurrency: 100
multipartcopythresholdsize: 33554432
rootdirectory: /s3/object/name/prefix
swift:
username: username
password: password
authurl: https://storage.myprovider.com/auth/v1.0 or https://storage.myprovider.com/v2.0 or https://storage.myprovider.com/v3/auth
tenant: tenantname
tenantid: tenantid
domain: domain name for Openstack Identity v3 API
domainid: domain id for Openstack Identity v3 API
insecureskipverify: true
region: fr
container: containername
rootdirectory: /swift/object/name/prefix
oss:
accesskeyid: accesskeyid
accesskeysecret: accesskeysecret
region: OSS region name
endpoint: optional endpoints
internal: optional internal endpoint
bucket: OSS bucket
encrypt: optional data encryption setting
secure: optional ssl setting
    chunksize: optional size value
rootdirectory: optional root directory
inmemory:
delete:
enabled: false
cache:
blobdescriptor: inmemory
maintenance:
uploadpurging:
enabled: true
age: 168h
interval: 24h
dryrun: false
redirect:
disable: false
```
The storage option is **required** and defines which storage backend is in use.
You must configure one backend; if you configure more, the registry returns an error. You can choose any of these backend storage drivers:

| Storage driver | Description |
|:---------------|:------------|
| `filesystem` | Uses the local disk to store registry files. It is ideal for development and may be appropriate for some small-scale production applications. See the [driver's reference documentation](https://github.com/docker/docker.github.io/tree/master/registry/storage-drivers/filesystem.md). |
| `azure` | Uses Microsoft's Azure Blob Storage. See the [driver's reference documentation](https://github.com/docker/docker.github.io/tree/master/registry/storage-drivers/azure.md). |
| `gcs` | Uses Google Cloud Storage. See the [driver's reference documentation](https://github.com/docker/docker.github.io/tree/master/registry/storage-drivers/gcs.md). |
| `s3` | Uses Amazon's Simple Storage Service (S3) and compatible Storage Services. See the [driver's reference documentation](https://github.com/docker/docker.github.io/tree/master/registry/storage-drivers/s3.md). |
| `swift` | Uses Openstack Swift object storage. See the [driver's reference documentation](https://github.com/docker/docker.github.io/tree/master/registry/storage-drivers/swift.md). |
| `oss` | Uses Aliyun OSS for object storage. See the [driver's reference documentation](https://github.com/docker/docker.github.io/tree/master/registry/storage-drivers/oss.md). |
For testing purposes only, you can use the [`inmemory` storage
driver](https://github.com/docker/docker.github.io/tree/master/registry/storage-drivers/inmemory.md). If you would like to run a registry from
volatile memory, use the [`filesystem` driver](https://github.com/docker/docker.github.io/tree/master/registry/storage-drivers/filesystem.md) on
a ramdisk.
If you are deploying a registry on Windows, be aware that a Windows volume
mounted from the host is not recommended. Instead, you can use an S3 or Azure
backing data-store. If you do use a Windows volume, you must ensure that the
`PATH` to the mount point is within Windows' `MAX_PATH` limits (typically 255
characters). Failure to do so can result in the following error message:
```
mkdir /XXX protocol error and your registry will not function properly.
```
### Maintenance
Currently upload purging and read-only mode are the only maintenance functions available.
These and future maintenance functions which are related to storage can be configured under
the maintenance section.
### Upload Purging
Upload purging is a background process that periodically removes orphaned files from the upload
directories of the registry. Upload purging is enabled by default. To
configure upload directory purging, the following parameters
must be set.

| Parameter | Required | Description |
|:-----------|:---------|:---------------------------------------------------------------------------------------------------|
| `enabled` | yes | Set to true to enable upload purging. Default=true. |
| `age` | yes | Upload directories which are older than this age will be deleted. Default=168h (1 week). |
| `interval` | yes | The interval between upload directory purging. Default=24h. |
| `dryrun` | yes | dryrun can be set to true to obtain a summary of what directories will be deleted. Default=false. |
Note: `age` and `interval` are strings containing a number with optional fraction and a unit suffix: e.g. 45m, 2h10m, 168h (1 week).
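For example, a sketch of the corresponding configuration block, using the
default values:

```
storage:
  maintenance:
    uploadpurging:
      enabled: true
      age: 168h
      interval: 24h
      dryrun: false
```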
### Read-only mode
If the `readonly` section under `maintenance` has `enabled` set to `true` ,
clients will not be allowed to write to the registry. This mode is useful to
temporarily prevent writes to the backend storage so a garbage collection pass
can be run. Before running garbage collection, the registry should be
restarted with readonly's `enabled` set to true. After the garbage collection
pass finishes, the registry may be restarted again, this time with `readonly`
removed from the configuration (or set to false).
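A sketch of a configuration that enables read-only mode under `maintenance`:

```
storage:
  maintenance:
    readonly:
      enabled: true
```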
### delete
Use the `delete` subsection to enable the deletion of image blobs and manifests
by digest. It defaults to false, but it can be enabled by adding the following
to the configuration file:
```
delete:
enabled: true
```
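With deletion enabled, a client can then remove a manifest through the registry
API by digest; for example (a sketch assuming a registry on `localhost:5000`, a
repository named `myimage`, and a known manifest digest):

```
curl -X DELETE http://localhost:5000/v2/myimage/manifests/sha256:<digest>
```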
### cache
Use the `cache` subsection to enable caching of data accessed in the storage
backend. Currently, the only available cache provides fast access to layer
metadata. This, if configured, uses the `blobdescriptor` field.
You can set the `blobdescriptor` field to `redis` or `inmemory`. The `redis`
value uses a Redis pool to cache layer metadata. The `inmemory` value uses an
in-memory map.
>**NOTE**: Formerly, `blobdescriptor` was known as `layerinfo`. While these
>are equivalent, `layerinfo` has been deprecated, in favor of
>`blobdescriptor`.
### redirect
The `redirect` subsection provides configuration for managing redirects from
content backends. For backends that support it, redirecting is enabled by
default. Certain deployment scenarios may prefer to route all data through the
Registry, rather than redirecting to the backend. This may be more efficient
when using a backend that is not co-located or when a registry instance is
doing aggressive caching.

Redirects can be disabled by adding a single flag `disable`, set to `true`
under the `redirect` section:
```
redirect:
disable: true
```
## auth
```
auth:
silly:
realm: silly-realm
service: silly-service
token:
realm: token-realm
service: token-service
issuer: registry-token-issuer
rootcertbundle: /root/certs/bundle
htpasswd:
realm: basic-realm
path: /path/to/htpasswd
```
The `auth` option is **optional**. There are currently 3 possible auth
providers, `silly`, `token` and `htpasswd`. You can configure only one `auth`
provider.
### silly
The `silly` auth is only for development purposes. It simply checks for the
existence of the `Authorization` header in the HTTP request. It has no regard for
the header's value. If the header does not exist, the `silly` auth responds with a
challenge response, echoing back the realm, service, and scope that access was
denied for.
The following values are used to configure the response:

| Parameter | Required | Description |
|:----------|:---------|:------------|
| `realm` | yes | The realm in which the registry server authenticates. |
| `service` | yes | The service being authenticated. |
### token
Token based authentication allows the authentication system to be decoupled from
the registry. It is a well established authentication paradigm with a high
degree of security.

| Parameter | Required | Description |
|:----------|:---------|:------------|
| `realm` | yes | The realm in which the registry server authenticates. |
| `service` | yes | The service being authenticated. |
| `issuer` | yes | The name of the token issuer. The issuer inserts this into the token so it must match the value configured for the issuer. |
| `rootcertbundle` | yes | The absolute path to the root certificate bundle. This bundle contains the public part of the certificates used to sign authentication tokens. |
For more information about Token based authentication configuration, see the [specification](spec/auth/token.md).
### htpasswd
The _htpasswd_ authentication backend allows you to configure basic
authentication using an
[Apache htpasswd file](https://httpd.apache.org/docs/2.4/programs/htpasswd.html). Only
[`bcrypt`](http://en.wikipedia.org/wiki/Bcrypt) format passwords are supported.
Entries with other hash types will be ignored. The htpasswd file is loaded once,
at startup. If the file is invalid, the registry will display an error and will
not start.
> __WARNING:__ This authentication scheme should only be used with TLS
> configured, since basic authentication sends passwords as part of the http
> header.

| Parameter | Required | Description |
|:----------|:---------|:------------|
| `realm` | yes | The realm in which the registry server authenticates. |
| `path` | yes | Path to the htpasswd file to load at startup. |
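As a sketch, you can generate a suitable bcrypt entry with Apache's `htpasswd`
tool (the user name, password, and path below are placeholders):

```
htpasswd -Bbn testuser testpassword > /path/to/htpasswd
```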
## middleware
The `middleware` option is **optional**. Use this option to inject middleware at
named hook points. All middleware must implement the same interface as the
object they're wrapping. This means a registry middleware must implement the
`distribution.Namespace` interface, repository middleware must implement
`distribution.Repository`, and storage middleware must implement
`driver.StorageDriver`.
An example configuration of the `cloudfront` middleware, a storage middleware:
```
middleware:
registry:
- name: ARegistryMiddleware
options:
foo: bar
repository:
- name: ARepositoryMiddleware
options:
foo: bar
storage:
- name: cloudfront
options:
baseurl: https://my.cloudfronted.domain.com/
privatekey: /path/to/pem
keypairid: cloudfrontkeypairid
duration: 3000s
```
Each middleware entry has `name` and `options` entries. The `name` must
correspond to the name under which the middleware registers itself. The
`options` field is a map that details custom configuration required to
initialize the middleware. It is treated as a `map[string]interface{}`. As such,
it supports any interesting structures desired, leaving it up to the middleware
initialization function to best determine how to handle the specific
interpretation of the options.
### cloudfront
| Parameter | Required | Description |
|:----------|:---------|:------------|
| `baseurl` | yes | `SCHEME://HOST[/PATH]` at which Cloudfront is served. |
| `privatekey` | yes | Private Key for Cloudfront provided by AWS. |
| `keypairid` | yes | Key pair ID provided by AWS. |
| `duration` | no | Specify a `duration` by providing an integer and a unit. Valid time units are `ns`, `us` (or `µs`), `ms`, `s`, `m`, `h`. For example, `3000s` is a valid duration; there should be no space between the integer and unit. If you do not specify a `duration` or specify an integer without a time unit, this defaults to 20 minutes. |
### redirect
In place of the `cloudfront` storage middleware, the `redirect`
storage middleware can be used to specify a custom URL to a location
of a proxy for the layer stored by the S3 storage driver.

| Parameter | Required | Description |
|:----------|:---------|:--------------------------------------------------------------------------------------------------------------|
| `baseurl` | yes | `SCHEME://HOST` at which layers are served. Can also contain port. For example, `https://example.com:5443`. |
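A sketch of the corresponding configuration (the base URL is illustrative):

```
middleware:
  storage:
    - name: redirect
      options:
        baseurl: https://example.com/
```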
## reporting
```
reporting:
bugsnag:
apikey: bugsnagapikey
releasestage: bugsnagreleasestage
endpoint: bugsnagendpoint
newrelic:
licensekey: newreliclicensekey
name: newrelicname
verbose: true
```
The `reporting` option is **optional** and configures error and metrics
reporting tools. At the moment only two services are supported: [New
Relic](http://newrelic.com/) and [Bugsnag](http://bugsnag.com). A valid
configuration may contain both.
### bugsnag
| Parameter | Required | Description |
|:----------|:---------|:------------|
| `apikey` | yes | API Key provided by Bugsnag. |
| `releasestage` | no | Tracks where the registry is deployed, for example, `production`, `staging`, or `development`. |
| `endpoint` | no | Specify the enterprise Bugsnag endpoint. |
### newrelic
| Parameter | Required | Description |
|:----------|:---------|:------------|
| `licensekey` | yes | License key provided by New Relic. |
| `name` | no | New Relic application name. |
| `verbose` | no | Enable New Relic debugging output on stdout. |
## http
```
http:
addr: localhost:5000
net: tcp
prefix: /my/nested/registry/
host: https://myregistryaddress.org:5000
secret: asecretforlocaldevelopment
relativeurls: false
tls:
certificate: /path/to/x509/public
key: /path/to/x509/private
clientcas:
- /path/to/ca.pem
- /path/to/another/ca.pem
letsencrypt:
cachefile: /path/to/cache-file
email: emailused@letsencrypt.com
debug:
addr: localhost:5001
headers:
X-Content-Type-Options: [nosniff]
http2:
disabled: false
```
The `http` option details the configuration for the HTTP server that hosts the registry.

| Parameter | Required | Description |
|:----------|:---------|:------------|
| `addr` | yes | The address for which the server should accept connections. The form depends on a network type (see the `net` option): `HOST:PORT` for tcp and `FILE` for a unix socket. |
| `net` | no | The network which is used to create a listening socket. Known networks are `unix` and `tcp`. The default empty value means tcp. |
| `prefix` | no | If the server does not run at the root path use this value to specify the prefix. The root path is the section before `v2`. It should have both preceding and trailing slashes, for example `/path/`. |
| `host` | no | This parameter specifies an externally-reachable address for the registry, as a fully qualified URL. If present, it is used when creating generated URLs. Otherwise, these URLs are derived from client requests. |
| `secret` | yes | A random piece of data. This is used to sign state that may be stored with the client to protect against tampering. For production environments you should generate a random piece of data using a cryptographically secure random generator. This configuration parameter may be omitted, in which case the registry will automatically generate a secret at launch. **WARNING: If you are building a cluster of registries behind a load balancer, you MUST ensure the secret is the same for all registries.** |
| `relativeurls` | no | Specifies that the registry should return relative URLs in Location headers. The client is responsible for resolving the correct URL. This option is not compatible with Docker 1.7 and earlier. |
### tls
The `tls` struct within `http` is **optional**. Use this to configure TLS
for the server. If you already have a server such as Nginx or Apache running on
the same host as the registry, you may prefer to configure TLS termination there
and proxy connections to the registry server.

| Parameter | Required | Description |
|:----------|:---------|:------------|
| `certificate` | yes | Absolute path to the x509 certificate file. |
| `key` | yes | Absolute path to the x509 private key file. |
| `clientcas` | no | An array of absolute paths to x509 CA files. |
### letsencrypt
The `letsencrypt` struct within `tls` is **optional**. Use this to configure TLS
certificates provided by [Let's Encrypt](https://letsencrypt.org/how-it-works/).
>**NOTE**: When using Let's Encrypt, ensure that the outward-facing address is
> accessible on port `443`. The registry defaults to listening on port `5000`; if
> run as a container, consider adding the flag `-p 443:5000` to the `docker run`
> command or a similar setting in your cloud configuration.

| Parameter | Required | Description |
|:----------|:---------|:------------|
| `cachefile` | yes | Absolute path to a file for the Let's Encrypt agent to cache data. |
| `email` | yes | Email used to register with Let's Encrypt. |
### debug
The `debug` option is **optional**. Use it to configure a debug server that
can be helpful in diagnosing problems. The debug endpoint can be used for
monitoring registry metrics and health, as well as profiling. Sensitive
information may be available via the debug endpoint. Please be certain that
access to the debug endpoint is locked down in a production environment.
The `debug` section takes a single, required `addr` parameter. This parameter
specifies the `HOST:PORT` on which the debug server should accept connections.
### headers
The `headers` option is **optional**. Use it to specify headers that the HTTP
server should include in responses. This can be used for security headers such
as `Strict-Transport-Security`.
The `headers` option should contain an option for each header to include, where
the parameter name is the header's name, and the parameter value a list of the
header's payload values.
Including `X-Content-Type-Options: [nosniff]` is recommended, so that browsers
will not interpret content as HTML if they are directed to load a page from the
registry. This header is included in the example configuration files.
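For example, a sketch that sets both of the headers mentioned above (the
`max-age` value is illustrative):

```
http:
  headers:
    X-Content-Type-Options: [nosniff]
    Strict-Transport-Security: [max-age=31536000]
```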
### http2
The `http2` struct within `http` is **optional**. Use this to control http2
settings for the registry.

| Parameter | Required | Description |
|:----------|:---------|:------------|
| `disabled` | no | A boolean that determines if http2 support should be disabled. |
## notifications
```
notifications:
endpoints:
- name: alistener
disabled: false
url: https://my.listener.com/event
      headers: <http.Header>
timeout: 500
threshold: 5
backoff: 1000
ignoredmediatypes:
- application/octet-stream
```
The notifications option is **optional** and currently may contain a single
option, `endpoints`.
### endpoints
Endpoints is a list of named services (URLs) that can accept event notifications.

| Parameter | Required | Description |
|:----------|:---------|:------------|
| `name` | yes | A human readable name for the service. |
| `disabled` | no | A boolean to enable/disable notifications for a service. |
| `url` | yes | The URL to which events should be published. |
| `headers` | yes | Static headers to add to each request. Each header's name should be a key underneath headers, and each value is a list of payloads for that header name. Note that values must always be lists. |
| `timeout` | yes | An HTTP timeout value. This field takes a positive integer and an optional suffix indicating the unit of time. Possible units are `ns` (nanoseconds), `us` (microseconds), `ms` (milliseconds), `s` (seconds), `m` (minutes), and `h` (hours). If you omit the suffix, the system interprets the value as nanoseconds. |
| `threshold` | yes | An integer specifying how long to wait before backing off a failure. |
| `backoff` | yes | How long the system backs off before retrying. This field takes a positive integer and an optional suffix indicating the unit of time. Possible units are `ns` (nanoseconds), `us` (microseconds), `ms` (milliseconds), `s` (seconds), `m` (minutes), and `h` (hours). If you omit the suffix, the system interprets the value as nanoseconds. |
| `ignoredmediatypes` | no | List of target media types to ignore. An event whose target media type is present in this list will not be published to the endpoint. |
## redis
```
redis:
addr: localhost:6379
password: asecret
db: 0
dialtimeout: 10ms
readtimeout: 10ms
writetimeout: 10ms
pool:
maxidle: 16
maxactive: 64
idletimeout: 300s
```
Declare parameters for constructing the redis connections. Registry instances
may use the Redis instance for several applications. The current purpose is
caching information about immutable blobs. Most of the options below control
how the registry connects to redis. You can control the pool's behavior
with the [pool](#pool) subsection.
It's advisable to configure Redis itself with the **allkeys-lru** eviction policy
as the registry does not set an expire value on keys.

| Parameter | Required | Description |
|:----------|:---------|:------------|
| `addr` | yes | Address (host and port) of redis instance. |
| `password` | no | A password used to authenticate to the redis instance. |
| `db` | no | Selects the db for each connection. |
| `dialtimeout` | no | Timeout for connecting to a redis instance. |
| `readtimeout` | no | Timeout for reading from redis connections. |
| `writetimeout` | no | Timeout for writing to redis connections. |
2015-04-01 23:27:24 +00:00
### pool
```
pool:
maxidle: 16
maxactive: 64
idletimeout: 300s
```
Configure the behavior of the Redis connection pool.

<table>
  <tr>
    <th>Parameter</th>
    <th>Required</th>
    <th>Description</th>
  </tr>
  <tr>
    <td><code>maxidle</code></td>
    <td>no</td>
    <td>Sets the maximum number of idle connections.</td>
  </tr>
  <tr>
    <td><code>maxactive</code></td>
    <td>no</td>
    <td>
      Sets the maximum number of connections that should be opened before
      blocking a connection request.
    </td>
  </tr>
  <tr>
    <td><code>idletimeout</code></td>
    <td>no</td>
    <td>Sets the amount of time to wait before closing inactive connections.</td>
  </tr>
</table>

## health
```
health:
storagedriver:
enabled: true
interval: 10s
threshold: 3
file:
- file: /path/to/checked/file
interval: 10s
http:
- uri: http://server.to.check/must/return/200
headers:
Authorization: [Basic QWxhZGRpbjpvcGVuIHNlc2FtZQ==]
statuscode: 200
timeout: 3s
interval: 10s
threshold: 3
tcp:
- addr: redis-server.domain.com:6379
timeout: 3s
interval: 10s
threshold: 3
```
The health option is **optional**. It may contain preferences for a periodic
health check on the storage driver's backend storage, and optional periodic
checks on local files, HTTP URIs, and/or TCP servers. The results of the health
checks are available at `/debug/health` on the debug HTTP server if the debug
HTTP server is enabled (see the http section).
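
For example, assuming the debug server is listening on `localhost:5001` (as in
the development configuration shown later on this page), you could inspect the
current health status with:

```
curl http://localhost:5001/debug/health
```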
### storagedriver

The `storagedriver` subsection contains options for a health check on the
configured storage driver's backend storage. `enabled` must be set to `true`
for this health check to be active.

<table>
  <tr>
    <th>Parameter</th>
    <th>Required</th>
    <th>Description</th>
  </tr>
  <tr>
    <td><code>enabled</code></td>
    <td>yes</td>
    <td>"true" to enable the storage driver health check or "false" to disable it.</td>
  </tr>
  <tr>
    <td><code>interval</code></td>
    <td>no</td>
    <td>
      The length of time to wait between repetitions of the check. This field
      takes a positive integer and an optional suffix indicating the unit of
      time. Possible units are:
      <ul>
        <li><code>ns</code> (nanoseconds)</li>
        <li><code>us</code> (microseconds)</li>
        <li><code>ms</code> (milliseconds)</li>
        <li><code>s</code> (seconds)</li>
        <li><code>m</code> (minutes)</li>
        <li><code>h</code> (hours)</li>
      </ul>
      If you omit the suffix, the system interprets the value as nanoseconds.
      The default value is 10 seconds if this field is omitted.
    </td>
  </tr>
  <tr>
    <td><code>threshold</code></td>
    <td>no</td>
    <td>
      An integer specifying the number of times the check must fail before the
      check triggers an unhealthy state. If this field is not specified, a
      single failure will trigger an unhealthy state.
    </td>
  </tr>
</table>

### file

The `file` option is a list of paths to be periodically checked for the
existence of a file. If a file exists at the given path, the health check will
fail. This can be used as a way of bringing a registry out of rotation by
creating a file.
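
For example, if a file check is configured for `/path/to/checked/file` (as in
the health example above), creating that file takes the registry out of
rotation, and removing it brings the registry back:

```
# Take this registry out of rotation: the file health check starts failing.
touch /path/to/checked/file

# Put the registry back into rotation.
rm /path/to/checked/file
```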
<table>
  <tr>
    <th>Parameter</th>
    <th>Required</th>
    <th>Description</th>
  </tr>
  <tr>
    <td><code>file</code></td>
    <td>yes</td>
    <td>The path to check for the existence of a file.</td>
  </tr>
  <tr>
    <td><code>interval</code></td>
    <td>no</td>
    <td>
      The length of time to wait between repetitions of the check. This field
      takes a positive integer and an optional suffix indicating the unit of
      time. Possible units are:
      <ul>
        <li><code>ns</code> (nanoseconds)</li>
        <li><code>us</code> (microseconds)</li>
        <li><code>ms</code> (milliseconds)</li>
        <li><code>s</code> (seconds)</li>
        <li><code>m</code> (minutes)</li>
        <li><code>h</code> (hours)</li>
      </ul>
      If you omit the suffix, the system interprets the value as nanoseconds.
      The default value is 10 seconds if this field is omitted.
    </td>
  </tr>
</table>

### http

The `http` option is a list of HTTP URIs to be periodically checked with HEAD
requests. If a HEAD request doesn't complete or returns an unexpected status
code, the health check will fail.

<table>
  <tr>
    <th>Parameter</th>
    <th>Required</th>
    <th>Description</th>
  </tr>
  <tr>
    <td><code>uri</code></td>
    <td>yes</td>
    <td>The URI to check.</td>
  </tr>
  <tr>
    <td><code>headers</code></td>
    <td>no</td>
    <td>
      Static headers to add to each request. Each header's name should be a key
      underneath headers, and each value is a list of payloads for that
      header name. Note that values must always be lists.
    </td>
  </tr>
  <tr>
    <td><code>statuscode</code></td>
    <td>no</td>
    <td>Expected status code from the HTTP URI. Defaults to 200.</td>
  </tr>
  <tr>
    <td><code>timeout</code></td>
    <td>no</td>
    <td>
      The length of time to wait before timing out the HTTP request. This field
      takes a positive integer and an optional suffix indicating the unit of
      time. Possible units are:
      <ul>
        <li><code>ns</code> (nanoseconds)</li>
        <li><code>us</code> (microseconds)</li>
        <li><code>ms</code> (milliseconds)</li>
        <li><code>s</code> (seconds)</li>
        <li><code>m</code> (minutes)</li>
        <li><code>h</code> (hours)</li>
      </ul>
      If you omit the suffix, the system interprets the value as nanoseconds.
    </td>
  </tr>
  <tr>
    <td><code>interval</code></td>
    <td>no</td>
    <td>
      The length of time to wait between repetitions of the check. This field
      takes a positive integer and an optional suffix indicating the unit of
      time. Possible units are:
      <ul>
        <li><code>ns</code> (nanoseconds)</li>
        <li><code>us</code> (microseconds)</li>
        <li><code>ms</code> (milliseconds)</li>
        <li><code>s</code> (seconds)</li>
        <li><code>m</code> (minutes)</li>
        <li><code>h</code> (hours)</li>
      </ul>
      If you omit the suffix, the system interprets the value as nanoseconds.
      The default value is 10 seconds if this field is omitted.
    </td>
  </tr>
  <tr>
    <td><code>threshold</code></td>
    <td>no</td>
    <td>
      An integer specifying the number of times the check must fail before the
      check triggers an unhealthy state. If this field is not specified, a
      single failure will trigger an unhealthy state.
    </td>
  </tr>
</table>

### tcp

The `tcp` option is a list of TCP addresses to be periodically checked with
connection attempts. The addresses must include port numbers. If a connection
attempt fails, the health check will fail.

<table>
  <tr>
    <th>Parameter</th>
    <th>Required</th>
    <th>Description</th>
  </tr>
  <tr>
    <td><code>addr</code></td>
    <td>yes</td>
    <td>The TCP address to connect to, including a port number.</td>
  </tr>
  <tr>
    <td><code>timeout</code></td>
    <td>no</td>
    <td>
      The length of time to wait before timing out the TCP connection. This
      field takes a positive integer and an optional suffix indicating the unit
      of time. Possible units are:
      <ul>
        <li><code>ns</code> (nanoseconds)</li>
        <li><code>us</code> (microseconds)</li>
        <li><code>ms</code> (milliseconds)</li>
        <li><code>s</code> (seconds)</li>
        <li><code>m</code> (minutes)</li>
        <li><code>h</code> (hours)</li>
      </ul>
      If you omit the suffix, the system interprets the value as nanoseconds.
    </td>
  </tr>
  <tr>
    <td><code>interval</code></td>
    <td>no</td>
    <td>
      The length of time to wait between repetitions of the check. This field
      takes a positive integer and an optional suffix indicating the unit of
      time. Possible units are:
      <ul>
        <li><code>ns</code> (nanoseconds)</li>
        <li><code>us</code> (microseconds)</li>
        <li><code>ms</code> (milliseconds)</li>
        <li><code>s</code> (seconds)</li>
        <li><code>m</code> (minutes)</li>
        <li><code>h</code> (hours)</li>
      </ul>
      If you omit the suffix, the system interprets the value as nanoseconds.
      The default value is 10 seconds if this field is omitted.
    </td>
  </tr>
  <tr>
    <td><code>threshold</code></td>
    <td>no</td>
    <td>
      An integer specifying the number of times the check must fail before the
      check triggers an unhealthy state. If this field is not specified, a
      single failure will trigger an unhealthy state.
    </td>
  </tr>
</table>

## Proxy
```
proxy:
remoteurl: https://registry-1.docker.io
username: [username]
password: [password]
```
Proxy enables a registry to be configured as a pull through cache to the
official Docker Hub. See
[mirror](https://github.com/docker/docker.github.io/tree/master/registry/recipes/mirror.md)
for more information. Pushing to a registry configured as a pull through cache
is currently unsupported.

<table>
  <tr>
    <th>Parameter</th>
    <th>Required</th>
    <th>Description</th>
  </tr>
  <tr>
    <td><code>remoteurl</code></td>
    <td>yes</td>
    <td>The URL of the official Docker Hub</td>
  </tr>
  <tr>
    <td><code>username</code></td>
    <td>no</td>
    <td>The username of the Docker Hub account</td>
  </tr>
  <tr>
    <td><code>password</code></td>
    <td>no</td>
    <td>The password for the official Docker Hub account</td>
  </tr>
</table>

To enable pulling private repositories (e.g. `batman/robin`), a username and
password for user `batman` must be specified. Note: these private repositories
will be stored in the proxy cache's storage, so take appropriate measures to
protect access to it.
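
The same proxy settings can also be supplied as environment variables when
starting the container, following the environment-variable override convention.
A rough sketch, where the container name and credentials are placeholders:

```
docker run -d -p 5000:5000 --restart=always --name registry-mirror \
  -e REGISTRY_PROXY_REMOTEURL=https://registry-1.docker.io \
  -e REGISTRY_PROXY_USERNAME=batman \
  -e REGISTRY_PROXY_PASSWORD=asecretpassword \
  registry:2
```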
## Compatibility
```
compatibility:
schema1:
signingkeyfile: /etc/registry/key.json
```
Configure handling of older and deprecated features. Each subsection
defines such a feature with configurable behavior.
### Schema1
<table>
  <tr>
    <th>Parameter</th>
    <th>Required</th>
    <th>Description</th>
  </tr>
  <tr>
    <td><code>signingkeyfile</code></td>
    <td>no</td>
    <td>
      The signing private key used for adding signatures to schema1 manifests.
      If no signing key is provided, a new ECDSA key will be generated on
      startup.
    </td>
  </tr>
</table>

## Validation
```
validation:
manifests:
urls:
allow:
- ^https?://([^/]+\.)*example\.com/
deny:
- ^https?://www\.example\.com/
```
### disabled
Use the `disabled` flag to disable the other options in the `validation`
section. They are enabled by default.
This option deprecates the `enabled` flag.
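
For example, a minimal sketch that turns validation off entirely:

```
validation:
  disabled: true
```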
### manifests
Use the `manifests` subsection to configure validation of manifests. If
`disabled` is `false`, the validation allows nothing.
#### urls
The `allow` and `deny` options are both lists of
[regular expressions](https://godoc.org/regexp/syntax) that restrict the URLs in
pushed manifests.
If `allow` is unset, pushing a manifest containing URLs will fail.
If `allow` is set, pushing a manifest will succeed only if all URLs within match
one of the `allow` regular expressions and one of the following holds:

1. `deny` is unset.
2. `deny` is set but no URLs within the manifest match any of the `deny` regular expressions.
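
For example, with the configuration shown above, a manifest URL such as
`https://foo.example.com/some/object` is accepted because it matches an `allow`
expression and no `deny` expression, while `https://www.example.com/some/object`
is rejected because it also matches the `deny` expression.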
## Example: Development configuration
The following is a simple example you can use for local development:
```
version: 0.1
log:
level: debug
storage:
filesystem:
rootdirectory: /var/lib/registry
http:
addr: localhost:5000
secret: asecretforlocaldevelopment
debug:
addr: localhost:5001
```
The above configures the registry instance to run on port `5000`, binding to
`localhost`, with the `debug` server enabled. Registry data storage is in the
`/var/lib/registry` directory. Logging is in `debug` mode, which is the most
verbose.

A similar simple configuration is available at
[config-example.yml](https://github.com/docker/distribution/blob/master/cmd/registry/config-example.yml).
Both are generally useful for local development.
## Example: Middleware configuration

This example illustrates how to configure storage middleware in a registry.
Middleware allows the registry to serve layers via a content delivery network
(CDN). This is useful for reducing requests to the storage layer.

The registry supports [Amazon Cloudfront](http://aws.amazon.com/cloudfront/).
You can only use Cloudfront in conjunction with the S3 storage driver.

<table>
  <tr>
    <th>Parameter</th>
    <th>Description</th>
  </tr>
  <tr>
    <td><code>name</code></td>
    <td>The storage middleware name. Currently <code>cloudfront</code> is an accepted value.</td>
  </tr>
  <tr>
    <td><code>disabled</code></td>
    <td>Set to <code>true</code> to easily disable the middleware.</td>
  </tr>
  <tr>
    <td><code>options:</code></td>
    <td>
      A set of key/value options to configure the middleware.
      <ul>
        <li><code>baseurl:</code> The Cloudfront base URL.</li>
        <li><code>privatekey:</code> The location of your AWS private key on the filesystem.</li>
        <li><code>keypairid:</code> The ID of your Cloudfront keypair.</li>
        <li><code>duration:</code> The duration in minutes for which the URL is valid. Default is 20.</li>
      </ul>
    </td>
  </tr>
</table>

The following example illustrates these values:
```
middleware:
storage:
- name: cloudfront
disabled: false
options:
baseurl: http://d111111abcdef8.cloudfront.net
privatekey: /path/to/asecret.pem
keypairid: asecret
duration: 60
```
>**Note**: Cloudfront keys exist separately from other AWS keys. See
>[the documentation on AWS credentials](http://docs.aws.amazon.com/general/latest/gr/aws-security-credentials.html)
>for more information.