update vendor

Signed-off-by: Jess Frazelle <acidburn@microsoft.com>
Jess Frazelle 2018-09-25 12:27:46 -04:00
parent 19a32db84d
commit 94d1cfbfbf
10501 changed files with 2307943 additions and 29279 deletions

vendor/github.com/docker/cli/docs/extend/EBS_volume.md

@@ -0,0 +1,165 @@
---
description: Volume plugin for Amazon EBS
keywords: "API, Usage, plugins, documentation, developer, amazon, ebs, rexray, volume"
---
<!-- This file is maintained within the docker/cli GitHub
repository at https://github.com/docker/cli/. Make all
pull requests against that repo. If you see this file in
another repository, consider it read-only there, as it will
periodically be overwritten by the definitive file. Pull
requests which include edits to this file in other repositories
will be rejected.
-->
# Volume plugin for Amazon EBS
## A proof-of-concept Rexray plugin
In this example, a simple Rexray plugin is created for use on an Amazon EC2
instance with EBS. It is not meant to be a complete Rexray plugin.
The example source is available at [https://github.com/tiborvass/rexray-plugin](https://github.com/tiborvass/rexray-plugin).
To learn more about Rexray: [https://github.com/codedellemc/rexray](https://github.com/codedellemc/rexray)
## 1. Make a Docker image
The following is the Dockerfile used to containerize rexray.
```Dockerfile
FROM debian:jessie
RUN apt-get update && apt-get install -y --no-install-recommends wget ca-certificates
RUN wget https://dl.bintray.com/emccode/rexray/stable/0.6.4/rexray-Linux-x86_64-0.6.4.tar.gz -O rexray.tar.gz && tar -xvzf rexray.tar.gz -C /usr/bin && rm rexray.tar.gz
RUN mkdir -p /run/docker/plugins /var/lib/libstorage/volumes
ENTRYPOINT ["rexray"]
CMD ["--help"]
```
To build it, you can run `image=$(cat Dockerfile | docker build -q -)`; `$image`
will then reference the containerized rexray image.
## 2. Extract rootfs
```sh
$ TMPDIR=/tmp/rexray # for the purpose of this example
$ # create container without running it, to extract the rootfs from image
$ docker create --name rexray "$image"
$ # save the rootfs to a tar archive
$ docker export -o $TMPDIR/rexray.tar rexray
$ # extract rootfs from tar archive to a rootfs folder
$ ( mkdir -p $TMPDIR/rootfs; cd $TMPDIR/rootfs; tar xf ../rexray.tar )
```
## 3. Add plugin configuration
Save the following JSON as `$TMPDIR/config.json`:
```json
{
  "Args": {
    "Description": "",
    "Name": "",
    "Settable": null,
    "Value": null
  },
  "Description": "A proof-of-concept EBS plugin (using rexray) for Docker",
  "Documentation": "https://github.com/tiborvass/rexray-plugin",
  "Entrypoint": [
    "/usr/bin/rexray", "service", "start", "-f"
  ],
  "Env": [
    {
      "Description": "",
      "Name": "REXRAY_SERVICE",
      "Settable": [
        "value"
      ],
      "Value": "ebs"
    },
    {
      "Description": "",
      "Name": "EBS_ACCESSKEY",
      "Settable": [
        "value"
      ],
      "Value": ""
    },
    {
      "Description": "",
      "Name": "EBS_SECRETKEY",
      "Settable": [
        "value"
      ],
      "Value": ""
    }
  ],
  "Interface": {
    "Socket": "rexray.sock",
    "Types": [
      "docker.volumedriver/1.0"
    ]
  },
  "Linux": {
    "AllowAllDevices": true,
    "Capabilities": ["CAP_SYS_ADMIN"],
    "Devices": null
  },
  "Mounts": [
    {
      "Source": "/dev",
      "Destination": "/dev",
      "Type": "bind",
      "Options": ["rbind"]
    }
  ],
  "Network": {
    "Type": "host"
  },
  "PropagatedMount": "/var/lib/libstorage/volumes",
  "User": {},
  "WorkDir": ""
}
```
Note the following points:
- `PropagatedMount` is needed so that the Docker daemon can see mounts made by the
rexray plugin from within the container; otherwise the Docker daemon is not able
to mount a Docker volume.
- The rexray plugin needs dynamic access to host devices. For that reason, we
give it access to all devices under `/dev` and set `AllowAllDevices` to true.
- The user of this simple plugin can change only three settings: `REXRAY_SERVICE`,
`EBS_ACCESSKEY` and `EBS_SECRETKEY`. This is because of the reduced scope of this
plugin. Ideally, other rexray parameters could also be settable.
## 4. Create plugin
`docker plugin create tiborvass/rexray-plugin "$TMPDIR"` will create the plugin.
```sh
$ docker plugin ls
ID NAME DESCRIPTION ENABLED
2475a4bd0ca5 tiborvass/rexray-plugin:latest A rexray volume plugin for Docker false
```
## 5. Test plugin
```sh
$ docker plugin set tiborvass/rexray-plugin EBS_ACCESSKEY=$AWS_ACCESSKEY EBS_SECRETKEY=$AWS_SECRETKEY
$ docker plugin enable tiborvass/rexray-plugin
$ docker volume create -d tiborvass/rexray-plugin my-ebs-volume
$ docker volume ls
DRIVER VOLUME NAME
tiborvass/rexray-plugin:latest my-ebs-volume
$ docker run --rm -v my-ebs-volume:/volume busybox sh -c 'echo bye > /volume/hi'
$ docker run --rm -v my-ebs-volume:/volume busybox cat /volume/hi
bye
```
## 6. Push plugin
First, ensure you are logged in with `docker login`. Then run
`docker plugin push tiborvass/rexray-plugin` to push the plugin to a registry
like a regular Docker image, making it available for others to install via
`docker plugin install tiborvass/rexray-plugin EBS_ACCESSKEY=$AWS_ACCESSKEY EBS_SECRETKEY=$AWS_SECRETKEY`.

vendor/github.com/docker/cli/docs/extend/config.md

@@ -0,0 +1,237 @@
---
description: "How develop and use a plugin with the managed plugin system"
keywords: "API, Usage, plugins, documentation, developer"
---
<!-- This file is maintained within the docker/cli GitHub
repository at https://github.com/docker/cli/. Make all
pull requests against that repo. If you see this file in
another repository, consider it read-only there, as it will
periodically be overwritten by the definitive file. Pull
requests which include edits to this file in other repositories
will be rejected.
-->
# Plugin Config Version 1 of Plugin V2
This document outlines the format of the V0 plugin configuration. The plugin
config described herein was introduced in the Docker daemon in the [v1.12.0
release](https://github.com/docker/docker/commit/f37117045c5398fd3dca8016ea8ca0cb47e7312b).
Plugin configs describe the various constituents of a docker plugin. Plugin
configs can be serialized to JSON format with the following media types:
Config Type | Media Type
------------- | -------------
config | "application/vnd.docker.plugin.v1+json"
## *Config* Field Descriptions
Config provides the base accessible fields for working with V0 plugin format
in the registry.
- **`description`** *string*

    description of the plugin

- **`documentation`** *string*

    link to the documentation about the plugin

- **`interface`** *PluginInterface*

    interface implemented by the plugins, struct consisting of the following fields

    - **`types`** *string array*

        types indicate what interface(s) the plugin currently implements.

        currently supported:

        - **docker.volumedriver/1.0**
        - **docker.networkdriver/1.0**
        - **docker.ipamdriver/1.0**
        - **docker.authz/1.0**
        - **docker.logdriver/1.0**
        - **docker.metricscollector/1.0**

    - **`socket`** *string*

        socket is the name of the socket the engine should use to communicate with the plugins.
        the socket will be created in `/run/docker/plugins`.

- **`entrypoint`** *string array*

    entrypoint of the plugin, see [`ENTRYPOINT`](../reference/builder.md#entrypoint)

- **`workdir`** *string*

    workdir of the plugin, see [`WORKDIR`](../reference/builder.md#workdir)

- **`network`** *PluginNetwork*

    network of the plugin, struct consisting of the following fields

    - **`type`** *string*

        network type.

        currently supported:

        - **bridge**
        - **host**
        - **none**

- **`mounts`** *PluginMount array*

    mounts of the plugin, struct consisting of the following fields, see [`MOUNTS`](https://github.com/opencontainers/runtime-spec/blob/master/config.md#mounts)

    - **`name`** *string*

        name of the mount.

    - **`description`** *string*

        description of the mount.

    - **`source`** *string*

        source of the mount.

    - **`destination`** *string*

        destination of the mount.

    - **`type`** *string*

        mount type.

    - **`options`** *string array*

        options of the mount.

- **`ipchost`** *boolean*

    Access to host ipc namespace.

- **`pidhost`** *boolean*

    Access to host pid namespace.

- **`propagatedMount`** *string*

    path to be mounted as rshared, so that mounts under that path are visible to docker. This is useful for volume plugins. This path will be bind-mounted outside of the plugin rootfs so its contents are preserved on upgrade.

- **`env`** *PluginEnv array*

    env of the plugin, struct consisting of the following fields

    - **`name`** *string*

        name of the env.

    - **`description`** *string*

        description of the env.

    - **`value`** *string*

        value of the env.

- **`args`** *PluginArgs*

    args of the plugin, struct consisting of the following fields

    - **`name`** *string*

        name of the args.

    - **`description`** *string*

        description of the args.

    - **`value`** *string array*

        values of the args.

- **`linux`** *PluginLinux*

    - **`capabilities`** *string array*

        capabilities of the plugin (*Linux only*), see list [`here`](https://github.com/opencontainers/runc/blob/master/libcontainer/SPEC.md#security)

    - **`allowAllDevices`** *boolean*

        If `/dev` is bind mounted from the host, and allowAllDevices is set to true, the plugin will have `rwm` access to all devices on the host.

    - **`devices`** *PluginDevice array*

        devices of the plugin (*Linux only*), struct consisting of the following fields, see [`DEVICES`](https://github.com/opencontainers/runtime-spec/blob/master/config-linux.md#devices)

        - **`name`** *string*

            name of the device.

        - **`description`** *string*

            description of the device.

        - **`path`** *string*

            path of the device.
## Example Config
*Example showing the 'tiborvass/sample-volume-plugin' plugin config.*
```json
{
  "Args": {
    "Description": "",
    "Name": "",
    "Settable": null,
    "Value": null
  },
  "Description": "A sample volume plugin for Docker",
  "Documentation": "https://docs.docker.com/engine/extend/plugins/",
  "Entrypoint": [
    "/usr/bin/sample-volume-plugin",
    "/data"
  ],
  "Env": [
    {
      "Description": "",
      "Name": "DEBUG",
      "Settable": [
        "value"
      ],
      "Value": "0"
    }
  ],
  "Interface": {
    "Socket": "plugin.sock",
    "Types": [
      "docker.volumedriver/1.0"
    ]
  },
  "Linux": {
    "Capabilities": null,
    "AllowAllDevices": false,
    "Devices": null
  },
  "Mounts": null,
  "Network": {
    "Type": ""
  },
  "PropagatedMount": "/data",
  "User": {},
  "Workdir": ""
}
```

5 binary image files added (45 KiB, 33 KiB, 32 KiB, 38 KiB, 26 KiB); contents not shown.

vendor/github.com/docker/cli/docs/extend/index.md

@@ -0,0 +1,263 @@
---
description: Develop and use a plugin with the managed plugin system
keywords: "API, Usage, plugins, documentation, developer"
---
<!-- This file is maintained within the docker/cli GitHub
repository at https://github.com/docker/cli/. Make all
pull requests against that repo. If you see this file in
another repository, consider it read-only there, as it will
periodically be overwritten by the definitive file. Pull
requests which include edits to this file in other repositories
will be rejected.
-->
# Docker Engine managed plugin system
* [Installing and using a plugin](index.md#installing-and-using-a-plugin)
* [Developing a plugin](index.md#developing-a-plugin)
* [Debugging plugins](index.md#debugging-plugins)
Docker Engine's plugin system allows you to install, start, stop, and remove
plugins using Docker Engine.
For information about the legacy plugin system available in Docker Engine 1.12
and earlier, see [Understand legacy Docker Engine plugins](legacy_plugins.md).
> **Note**: Docker Engine managed plugins are currently not supported
> on Windows daemons.
## Installing and using a plugin
Plugins are distributed as Docker images and can be hosted on Docker Hub or on
a private registry.
To install a plugin, use the `docker plugin install` command, which pulls the
plugin from Docker Hub or your private registry, prompts you to grant
permissions or capabilities if necessary, and enables the plugin.
To check the status of installed plugins, use the `docker plugin ls` command.
Plugins that start successfully are listed as enabled in the output.
After a plugin is installed, you can use it as an option for another Docker
operation, such as creating a volume.
In the following example, you install the `sshfs` plugin, verify that it is
enabled, and use it to create a volume.
> **Note**: This example is intended for instructional purposes only. Once the volume is created, your SSH password to the remote host will be exposed as plaintext when inspecting the volume. You should delete the volume as soon as you are done with the example.
1. Install the `sshfs` plugin.
```bash
$ docker plugin install vieux/sshfs
Plugin "vieux/sshfs" is requesting the following privileges:
- network: [host]
- capabilities: [CAP_SYS_ADMIN]
Do you grant the above permissions? [y/N] y
vieux/sshfs
```
The plugin requests 2 privileges:
- It needs access to the `host` network.
- It needs the `CAP_SYS_ADMIN` capability, which allows the plugin to run
the `mount` command.
2. Check that the plugin is enabled in the output of `docker plugin ls`.
```bash
$ docker plugin ls
ID NAME TAG DESCRIPTION ENABLED
69553ca1d789 vieux/sshfs latest the `sshfs` plugin true
```
3. Create a volume using the plugin.
This example mounts the `/remote` directory on host `1.2.3.4` into a
volume named `sshvolume`.
This volume can now be mounted into containers.
```bash
$ docker volume create \
-d vieux/sshfs \
--name sshvolume \
-o sshcmd=user@1.2.3.4:/remote \
-o password=$(cat file_containing_password_for_remote_host)
sshvolume
```
4. Verify that the volume was created successfully.
```bash
$ docker volume ls
DRIVER NAME
vieux/sshfs sshvolume
```
5. Start a container that uses the volume `sshvolume`.
```bash
$ docker run --rm -v sshvolume:/data busybox ls /data
<content of /remote on machine 1.2.3.4>
```
6. Remove the volume `sshvolume`
```bash
$ docker volume rm sshvolume
sshvolume
```
To disable a plugin, use the `docker plugin disable` command. To completely
remove it, use the `docker plugin remove` command. For other available
commands and options, see the
[command line reference](../reference/commandline/index.md).
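For example, to disable and then remove the `sshfs` plugin used above:
```bash
$ docker plugin disable vieux/sshfs
$ docker plugin rm vieux/sshfs
```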
## Developing a plugin
#### The rootfs directory
The `rootfs` directory represents the root filesystem of the plugin. In this
example, it was created from a Dockerfile:
>**Note:** The `/run/docker/plugins` directory is mandatory inside of the
plugin's filesystem for docker to communicate with the plugin.
```bash
$ git clone https://github.com/vieux/docker-volume-sshfs
$ cd docker-volume-sshfs
$ docker build -t rootfsimage .
$ id=$(docker create rootfsimage true) # id was cd851ce43a403 when the image was created
$ sudo mkdir -p myplugin/rootfs
$ sudo docker export "$id" | sudo tar -x -C myplugin/rootfs
$ docker rm -vf "$id"
$ docker rmi rootfsimage
```
#### The config.json file
The `config.json` file describes the plugin. See the [plugins config reference](config.md).
Consider the following `config.json` file.
```json
{
  "description": "sshFS plugin for Docker",
  "documentation": "https://docs.docker.com/engine/extend/plugins/",
  "entrypoint": ["/docker-volume-sshfs"],
  "network": {
    "type": "host"
  },
  "interface": {
    "types": ["docker.volumedriver/1.0"],
    "socket": "sshfs.sock"
  },
  "linux": {
    "capabilities": ["CAP_SYS_ADMIN"]
  }
}
```
This plugin is a volume driver. It requires a `host` network and the
`CAP_SYS_ADMIN` capability. It depends upon the `/docker-volume-sshfs`
entrypoint and uses the `/run/docker/plugins/sshfs.sock` socket to communicate
with Docker Engine. This plugin has no runtime parameters.
#### Creating the plugin
A new plugin can be created by running
`docker plugin create <plugin-name> ./path/to/plugin/data` where the plugin
data contains a plugin configuration file `config.json` and a root filesystem
in subdirectory `rootfs`.
After that the plugin `<plugin-name>` will show up in `docker plugin ls`.
Plugins can be pushed to remote registries with
`docker plugin push <plugin-name>`.
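As a sketch, continuing from the `myplugin` directory assembled above (the plugin name here is illustrative, not a real published plugin):
```bash
# Create the plugin from the directory holding config.json and rootfs/
$ docker plugin create example/sshfs-plugin ./myplugin
# The new plugin shows up (disabled) in the plugin list
$ docker plugin ls
# Push it to a registry; requires a prior `docker login`
$ docker plugin push example/sshfs-plugin
```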
## Debugging plugins
Stdout of a plugin is redirected to dockerd logs. Such entries have a
`plugin=<ID>` suffix. Here are a few examples of commands for pluginID
`f52a3df433b9aceee436eaada0752f5797aab1de47e5485f1690a073b860ff62` and their
corresponding log entries in the docker daemon logs.
```bash
$ docker plugin install tiborvass/sample-volume-plugin
INFO[0036] Starting... Found 0 volumes on startup plugin=f52a3df433b9aceee436eaada0752f5797aab1de47e5485f1690a073b860ff62
```
```bash
$ docker volume create -d tiborvass/sample-volume-plugin samplevol
INFO[0193] Create Called... Ensuring directory /data/samplevol exists on host... plugin=f52a3df433b9aceee436eaada0752f5797aab1de47e5485f1690a073b860ff62
INFO[0193] open /var/lib/docker/plugin-data/local-persist.json: no such file or directory plugin=f52a3df433b9aceee436eaada0752f5797aab1de47e5485f1690a073b860ff62
INFO[0193] Created volume samplevol with mountpoint /data/samplevol plugin=f52a3df433b9aceee436eaada0752f5797aab1de47e5485f1690a073b860ff62
INFO[0193] Path Called... Returned path /data/samplevol plugin=f52a3df433b9aceee436eaada0752f5797aab1de47e5485f1690a073b860ff62
```
```bash
$ docker run -v samplevol:/tmp busybox sh
INFO[0421] Get Called... Found samplevol plugin=f52a3df433b9aceee436eaada0752f5797aab1de47e5485f1690a073b860ff62
INFO[0421] Mount Called... Mounted samplevol plugin=f52a3df433b9aceee436eaada0752f5797aab1de47e5485f1690a073b860ff62
INFO[0421] Path Called... Returned path /data/samplevol plugin=f52a3df433b9aceee436eaada0752f5797aab1de47e5485f1690a073b860ff62
INFO[0421] Unmount Called... Unmounted samplevol plugin=f52a3df433b9aceee436eaada0752f5797aab1de47e5485f1690a073b860ff62
```
#### Using docker-runc to obtain logfiles and shell into the plugin.
`docker-runc`, the default docker container runtime, can be used for debugging
plugins. This is specifically useful to collect plugin logs if they are
redirected to a file.
```bash
$ sudo docker-runc --root /var/run/docker/plugins/runtime-root/moby-plugins list
ID PID STATUS BUNDLE CREATED OWNER
93f1e7dbfe11c938782c2993628c895cf28e2274072c4a346a6002446c949b25 15806 running /run/docker/containerd/daemon/io.containerd.runtime.v1.linux/moby-plugins/93f1e7dbfe11c938782c2993628c895cf28e2274072c4a346a6002446c949b25 2018-02-08T21:40:08.621358213Z root
9b4606d84e06b56df84fadf054a21374b247941c94ce405b0a261499d689d9c9 14992 running /run/docker/containerd/daemon/io.containerd.runtime.v1.linux/moby-plugins/9b4606d84e06b56df84fadf054a21374b247941c94ce405b0a261499d689d9c9 2018-02-08T21:35:12.321325872Z root
c5bb4b90941efcaccca999439ed06d6a6affdde7081bb34dc84126b57b3e793d 14984 running /run/docker/containerd/daemon/io.containerd.runtime.v1.linux/moby-plugins/c5bb4b90941efcaccca999439ed06d6a6affdde7081bb34dc84126b57b3e793d 2018-02-08T21:35:12.321288966Z root
```
```bash
$ sudo docker-runc --root /var/run/docker/plugins/runtime-root/moby-plugins exec 93f1e7dbfe11c938782c2993628c895cf28e2274072c4a346a6002446c949b25 cat /var/log/plugin.log
```
If the plugin has a built-in shell, you can exec into the plugin as
follows:
```bash
$ sudo docker-runc --root /var/run/docker/plugins/runtime-root/moby-plugins exec -t 93f1e7dbfe11c938782c2993628c895cf28e2274072c4a346a6002446c949b25 sh
```
#### Using curl to debug plugin socket issues.
To verify whether the plugin API socket that the docker daemon communicates with
is responsive, use curl. In this example, we make API calls from the
docker host to volume and network plugins using curl 7.47.0 to ensure that
the plugin is listening on that socket. For a well-functioning plugin,
these basic requests should work. Note that plugin sockets are available on the host under `/var/run/docker/plugins/<pluginID>`.
```bash
curl -H "Content-Type: application/json" -XPOST -d '{}' --unix-socket /var/run/docker/plugins/e8a37ba56fc879c991f7d7921901723c64df6b42b87e6a0b055771ecf8477a6d/plugin.sock http:/VolumeDriver.List
{"Mountpoint":"","Err":"","Volumes":[{"Name":"myvol1","Mountpoint":"/data/myvol1"},{"Name":"myvol2","Mountpoint":"/data/myvol2"}],"Volume":null}
```
```bash
curl -H "Content-Type: application/json" -XPOST -d '{}' --unix-socket /var/run/docker/plugins/45e00a7ce6185d6e365904c8bcf62eb724b1fe307e0d4e7ecc9f6c1eb7bcdb70/plugin.sock http:/NetworkDriver.GetCapabilities
{"Scope":"local"}
```
When using curl 7.5 and above, the URL should be of the form
`http://hostname/APICall`, where `hostname` is the valid hostname where the
plugin is installed and `APICall` is the call to the plugin API.
For example, `http://localhost/VolumeDriver.List`

vendor/github.com/docker/cli/docs/extend/legacy_plugins.md

@@ -0,0 +1,104 @@
---
redirect_from:
- "/engine/extend/plugins/"
description: "How to add additional functionality to Docker with plugins extensions"
keywords: "Examples, Usage, plugins, docker, documentation, user guide"
---
<!-- This file is maintained within the docker/cli GitHub
repository at https://github.com/docker/cli/. Make all
pull requests against that repo. If you see this file in
another repository, consider it read-only there, as it will
periodically be overwritten by the definitive file. Pull
requests which include edits to this file in other repositories
will be rejected.
-->
# Use Docker Engine plugins
This document describes the Docker Engine plugins generally available in Docker
Engine. To view information on plugins managed by Docker,
refer to [Docker Engine plugin system](index.md).
You can extend the capabilities of the Docker Engine by loading third-party
plugins. This page explains the types of plugins and provides links to several
volume and network plugins for Docker.
## Types of plugins
Plugins extend Docker's functionality. They come in specific types. For
example, a [volume plugin](plugins_volume.md) might enable Docker
volumes to persist across multiple Docker hosts and a
[network plugin](plugins_network.md) might provide network plumbing.
Currently Docker supports authorization, volume and network driver plugins. In the future it
will support additional plugin types.
## Installing a plugin
Follow the instructions in the plugin's documentation.
## Finding a plugin
The sections below provide an inexhaustive overview of available plugins.
<style>
#DocumentationText tr td:first-child { white-space: nowrap;}
</style>
### Network plugins
| Plugin | Description |
|:-----------------------------------------------------------------------------------|:-------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------|
| [Contiv Networking](https://github.com/contiv/netplugin) | An open source network plugin to provide infrastructure and security policies for a multi-tenant micro services deployment, while providing an integration to physical network for non-container workload. Contiv Networking implements the remote driver and IPAM APIs available in Docker 1.9 onwards. |
| [Kuryr Network Plugin](https://github.com/openstack/kuryr) | A network plugin is developed as part of the OpenStack Kuryr project and implements the Docker networking (libnetwork) remote driver API by utilizing Neutron, the OpenStack networking service. It includes an IPAM driver as well. |
| [Weave Network Plugin](https://www.weave.works/docs/net/latest/introducing-weave/) | A network plugin that creates a virtual network that connects your Docker containers - across multiple hosts or clouds and enables automatic discovery of applications. Weave networks are resilient, partition tolerant, secure and work in partially connected networks, and other adverse environments - all configured with delightful simplicity. |
### Volume plugins
| Plugin | Description |
|:---------------------------------------------------------------------------------------------------|:--------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------|
| [Azure File Storage plugin](https://github.com/Azure/azurefile-dockervolumedriver) | Lets you mount Microsoft [Azure File Storage](https://azure.microsoft.com/blog/azure-file-storage-now-generally-available/) shares to Docker containers as volumes using the SMB 3.0 protocol. [Learn more](https://azure.microsoft.com/blog/persistent-docker-volumes-with-azure-file-storage/). |
| [BeeGFS Volume Plugin](https://github.com/RedCoolBeans/docker-volume-beegfs) | An open source volume plugin to create persistent volumes in a BeeGFS parallel file system. |
| [Blockbridge plugin](https://github.com/blockbridge/blockbridge-docker-volume) | A volume plugin that provides access to an extensible set of container-based persistent storage options. It supports single and multi-host Docker environments with features that include tenant isolation, automated provisioning, encryption, secure deletion, snapshots and QoS. |
| [Contiv Volume Plugin](https://github.com/contiv/volplugin) | An open source volume plugin that provides multi-tenant, persistent, distributed storage with intent based consumption. It has support for Ceph and NFS. |
| [Convoy plugin](https://github.com/rancher/convoy) | A volume plugin for a variety of storage back-ends including device mapper and NFS. It's a simple standalone executable written in Go and provides the framework to support vendor-specific extensions such as snapshots, backups and restore. |
| [DigitalOcean Block Storage plugin](https://github.com/omallo/docker-volume-plugin-dostorage) | Integrates DigitalOcean's [block storage solution](https://www.digitalocean.com/products/storage/) into the Docker ecosystem by automatically attaching a given block storage volume to a DigitalOcean droplet and making the contents of the volume available to Docker containers running on that droplet. |
| [DRBD plugin](https://www.drbd.org/en/supported-projects/docker) | A volume plugin that provides highly available storage replicated by [DRBD](https://www.drbd.org). Data written to the docker volume is replicated in a cluster of DRBD nodes. |
| [Flocker plugin](https://github.com/ScatterHQ/flocker) | A volume plugin that provides multi-host portable volumes for Docker, enabling you to run databases and other stateful containers and move them around across a cluster of machines. |
| [Fuxi Volume Plugin](https://github.com/openstack/fuxi) | A volume plugin that is developed as part of the OpenStack Kuryr project and implements the Docker volume plugin API by utilizing Cinder, the OpenStack block storage service. |
| [gce-docker plugin](https://github.com/mcuadros/gce-docker) | A volume plugin able to attach, format and mount Google Compute [persistent-disks](https://cloud.google.com/compute/docs/disks/persistent-disks). |
| [GlusterFS plugin](https://github.com/calavera/docker-volume-glusterfs) | A volume plugin that provides multi-host volumes management for Docker using GlusterFS. |
| [Horcrux Volume Plugin](https://github.com/muthu-r/horcrux) | A volume plugin that allows on-demand, version controlled access to your data. Horcrux is an open-source plugin, written in Go, and supports SCP, [Minio](https://www.minio.io) and Amazon S3. |
| [HPE 3Par Volume Plugin](https://github.com/hpe-storage/python-hpedockerplugin/) | A volume plugin that supports HPE 3Par and StoreVirtual iSCSI storage arrays. |
| [Infinit volume plugin](https://infinit.sh/documentation/docker/volume-plugin) | A volume plugin that makes it easy to mount and manage Infinit volumes using Docker. |
| [IPFS Volume Plugin](http://github.com/vdemeester/docker-volume-ipfs) | An open source volume plugin that allows using an [ipfs](https://ipfs.io/) filesystem as a volume. |
| [Keywhiz plugin](https://github.com/calavera/docker-volume-keywhiz) | A plugin that provides credentials and secret management using Keywhiz as a central repository. |
| [Local Persist Plugin](https://github.com/CWSpear/local-persist) | A volume plugin that extends the default `local` driver's functionality by allowing you specify a mountpoint anywhere on the host, which enables the files to *always persist*, even if the volume is removed via `docker volume rm`. |
| [NetApp Plugin](https://github.com/NetApp/netappdvp) (nDVP) | A volume plugin that provides direct integration with the Docker ecosystem for the NetApp storage portfolio. The nDVP package supports the provisioning and management of storage resources from the storage platform to Docker hosts, with a robust framework for adding additional platforms in the future. |
| [Netshare plugin](https://github.com/ContainX/docker-volume-netshare) | A volume plugin that provides volume management for NFS 3/4, AWS EFS and CIFS file systems. |
| [Nimble Storage Volume Plugin](https://connect.nimblestorage.com/community/app-integration/docker) | A volume plug-in that integrates with Nimble Storage Unified Flash Fabric arrays. The plug-in abstracts array volume capabilities to the Docker administrator to allow self-provisioning of secure multi-tenant volumes and clones. |
| [OpenStorage Plugin](https://github.com/libopenstorage/openstorage) | A cluster-aware volume plugin that provides volume management for file and block storage solutions. It implements a vendor neutral specification for implementing extensions such as CoS, encryption, and snapshots. It has example drivers based on FUSE, NFS, NBD and EBS to name a few. |
| [Portworx Volume Plugin](https://github.com/portworx/px-dev) | A volume plugin that turns any server into a scale-out converged compute/storage node, providing container granular storage and highly available volumes across any node, using a shared-nothing storage backend that works with any docker scheduler. |
| [Quobyte Volume Plugin](https://github.com/quobyte/docker-volume) | A volume plugin that connects Docker to [Quobyte](http://www.quobyte.com/containers)'s data center file system, a general-purpose scalable and fault-tolerant storage platform. |
| [REX-Ray plugin](https://github.com/emccode/rexray) | A volume plugin which is written in Go and provides advanced storage functionality for many platforms including VirtualBox, EC2, Google Compute Engine, OpenStack, and EMC. |
| [Virtuozzo Storage and Ploop plugin](https://github.com/virtuozzo/docker-volume-ploop) | A volume plugin with support for Virtuozzo Storage distributed cloud file system as well as ploop devices. |
| [VMware vSphere Storage Plugin](https://github.com/vmware/docker-volume-vsphere) | Docker Volume Driver for vSphere enables customers to address persistent storage requirements for Docker containers in vSphere environments. |
### Authorization plugins
| Plugin | Description |
|:---------------------------------------------------------------------|:-------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------|
| [Casbin AuthZ Plugin](https://github.com/casbin/casbin-authz-plugin) | An authorization plugin based on [Casbin](https://github.com/casbin/casbin), which supports access control models like ACL, RBAC, ABAC. The access control model can be customized. The policy can be persisted into file or DB. |
| [HBM plugin](https://github.com/kassisol/hbm) | An authorization plugin that prevents execution of commands with certain parameters. |
| [Twistlock AuthZ Broker](https://github.com/twistlock/authz) | A basic extendable authorization plugin that runs directly on the host or inside a container. This plugin allows you to define user policies that it evaluates during authorization. Basic authorization is provided if Docker daemon is started with the --tlsverify flag (username is extracted from the certificate common name). |
## Troubleshooting a plugin
If you are having problems with Docker after loading a plugin, ask the authors
of the plugin for help. The Docker team may not be able to assist you.
## Writing a plugin
If you are interested in writing a plugin for Docker, or seeing how they work
under the hood, see the [docker plugins reference](plugin_api.md).

vendor/github.com/docker/cli/docs/extend/plugin_api.md

@@ -0,0 +1,195 @@
---
description: "How to write Docker plugins extensions "
keywords: "API, Usage, plugins, documentation, developer"
---
<!-- This file is maintained within the docker/cli GitHub
repository at https://github.com/docker/cli/. Make all
pull requests against that repo. If you see this file in
another repository, consider it read-only there, as it will
periodically be overwritten by the definitive file. Pull
requests which include edits to this file in other repositories
will be rejected.
-->
# Docker Plugin API
Docker plugins are out-of-process extensions which add capabilities to the
Docker Engine.
This document describes the Docker Engine plugin API. To view information on
plugins managed by Docker Engine, refer to [Docker Engine plugin system](index.md).
This page is intended for people who want to develop their own Docker plugin.
If you just want to learn about or use Docker plugins, look
[here](legacy_plugins.md).
## What plugins are
A plugin is a process running on the same or a different host as the docker daemon,
which registers itself by placing a file on the same docker host in one of the plugin
directories described in [Plugin discovery](#plugin-discovery).
Plugins have human-readable names, which are short, lowercase strings. For
example, `flocker` or `weave`.
Plugins can run inside or outside containers. Currently running them outside
containers is recommended.
## Plugin discovery
Docker discovers plugins by looking for them in the plugin directory whenever a
user or container tries to use one by name.
There are three types of files which can be put in the plugin directory.
* `.sock` files are UNIX domain sockets.
* `.spec` files are text files containing a URL, such as `unix:///other.sock` or `tcp://localhost:8080`.
* `.json` files are text files containing a full json specification for the plugin.
Plugins with UNIX domain socket files must run on the same docker host, whereas
plugins with spec or json files can run on a different host if a remote URL is specified.
UNIX domain socket files must be located under `/run/docker/plugins`, whereas
spec files can be located either under `/etc/docker/plugins` or `/usr/lib/docker/plugins`.
The name of the file (excluding the extension) determines the plugin name.
For example, the `flocker` plugin might create a UNIX socket at
`/run/docker/plugins/flocker.sock`.
You can define each plugin in a separate subdirectory if you want to isolate definitions from each other.
For example, you can create the `flocker` socket under `/run/docker/plugins/flocker/flocker.sock` and only
mount `/run/docker/plugins/flocker` inside the `flocker` container.
Docker always searches for unix sockets in `/run/docker/plugins` first. It checks for spec or json files under
`/etc/docker/plugins` and `/usr/lib/docker/plugins` if the socket doesn't exist. The directory scan stops as
soon as it finds the first plugin definition with the given name.
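For instance, a hypothetical plugin named `myplugin` listening on TCP could be registered with a one-line `.spec` file (name and address are illustrative):
```bash
# The file name (minus the .spec extension) determines the plugin name.
$ echo "tcp://localhost:8080" | sudo tee /etc/docker/plugins/myplugin.spec
```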
### JSON specification
This is the JSON format for a plugin:
```json
{
  "Name": "plugin-example",
  "Addr": "https://example.com/docker/plugin",
  "TLSConfig": {
    "InsecureSkipVerify": false,
    "CAFile": "/usr/shared/docker/certs/example-ca.pem",
    "CertFile": "/usr/shared/docker/certs/example-cert.pem",
    "KeyFile": "/usr/shared/docker/certs/example-key.pem"
  }
}
```
The `TLSConfig` field is optional and TLS will only be verified if this configuration is present.
## Plugin lifecycle
Plugins should be started before Docker, and stopped after Docker. For
example, when packaging a plugin for a platform which supports `systemd`, you
might use [`systemd` dependencies](
http://www.freedesktop.org/software/systemd/man/systemd.unit.html#Before=) to
manage startup and shutdown order.
When upgrading a plugin, you should first stop the Docker daemon, upgrade the
plugin, then start Docker again.
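For example, on a `systemd` host the upgrade sequence might look like this (unit and package names are placeholders):
```bash
$ sudo systemctl stop docker
# ... upgrade the plugin binary or package here ...
$ sudo systemctl start docker
```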
## Plugin activation
When a plugin is first referred to -- either by a user referring to it by name
(e.g. `docker run --volume-driver=foo`) or a container already configured to
use a plugin being started -- Docker looks for the named plugin in the plugin
directory and activates it with a handshake. See Handshake API below.
Plugins are *not* activated automatically at Docker daemon startup. Rather,
they are activated only lazily, or on-demand, when they are needed.
## Systemd socket activation
Plugins may also be socket activated by `systemd`. The official [Plugins helpers](https://github.com/docker/go-plugins-helpers)
natively supports socket activation. In order for a plugin to be socket activated it needs
a `service` file and a `socket` file.
The `service` file (for example `/lib/systemd/system/your-plugin.service`):
```
[Unit]
Description=Your plugin
Before=docker.service
After=network.target your-plugin.socket
Requires=your-plugin.socket docker.service
[Service]
ExecStart=/usr/lib/docker/your-plugin
[Install]
WantedBy=multi-user.target
```
The `socket` file (for example `/lib/systemd/system/your-plugin.socket`):
```
[Unit]
Description=Your plugin
[Socket]
ListenStream=/run/docker/plugins/your-plugin.sock
[Install]
WantedBy=sockets.target
```
This allows plugins to be started only when the Docker daemon actually connects to
the sockets they're listening on (for instance the first time the daemon uses them
or if one of the plugins goes down accidentally).
## API design
The Plugin API is RPC-style JSON over HTTP, much like webhooks.
Requests flow *from* the Docker daemon *to* the plugin. So the plugin needs to
implement an HTTP server and bind this to the UNIX socket mentioned in the
"plugin discovery" section.
All requests are HTTP `POST` requests.
The API is versioned via an Accept header, which currently is always set to
`application/vnd.docker.plugins.v1+json`.
## Handshake API
Plugins are activated via the following "handshake" API call.
### /Plugin.Activate
**Request:** empty body
**Response:**
```
{
"Implements": ["VolumeDriver"]
}
```
Responds with a list of Docker subsystems which this plugin implements.
After activation, the plugin will then be sent events from this subsystem.
Possible values are:
* [`authz`](plugins_authorization.md)
* [`NetworkDriver`](plugins_network.md)
* [`VolumeDriver`](plugins_volume.md)
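For illustration, the handshake can be exercised by hand with curl (assuming curl 7.50 or newer; the socket path is hypothetical):
```bash
# POST an empty body to /Plugin.Activate over the plugin's UNIX socket.
$ curl -s -XPOST \
    -H "Accept: application/vnd.docker.plugins.v1+json" \
    --unix-socket /run/docker/plugins/myplugin.sock \
    http://localhost/Plugin.Activate
```
A volume plugin would answer with the `{"Implements": ["VolumeDriver"]}` body shown above.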
## Plugin retries
Attempts to call a method on a plugin are retried with an exponential backoff
for up to 30 seconds. This may help when packaging plugins as containers, since
it gives plugin containers a chance to start up before failing any user
containers which depend on them.
## Plugins helpers
To ease plugins development, we're providing an `sdk` for each kind of plugins
currently supported by Docker at [docker/go-plugins-helpers](https://github.com/docker/go-plugins-helpers).

vendor/github.com/docker/cli/docs/extend/plugins_authorization.md

@@ -0,0 +1,259 @@
---
description: "How to create authorization plugins to manage access control to your Docker daemon."
keywords: "security, authorization, authentication, docker, documentation, plugin, extend"
redirect_from:
- "/engine/extend/authorization/"
---
<!-- This file is maintained within the docker/cli GitHub
repository at https://github.com/docker/cli/. Make all
pull requests against that repo. If you see this file in
another repository, consider it read-only there, as it will
periodically be overwritten by the definitive file. Pull
requests which include edits to this file in other repositories
will be rejected.
-->
# Access authorization plugin
This document describes the Docker Engine plugins generally available in Docker
Engine. To view information on plugins managed by Docker Engine,
refer to [Docker Engine plugin system](index.md).
Docker's out-of-the-box authorization model is all or nothing. Any user with
permission to access the Docker daemon can run any Docker client command. The
same is true for callers using Docker's Engine API to contact the daemon. If you
require greater access control, you can create authorization plugins and add
them to your Docker daemon configuration. Using an authorization plugin, a
Docker administrator can configure granular access policies for managing access
to the Docker daemon.
Anyone with the appropriate skills can develop an authorization plugin. These
skills, at their most basic, are knowledge of Docker, understanding of REST, and
sound programming knowledge. This document describes the architecture, state,
and methods information available to an authorization plugin developer.
## Basic principles
Docker's [plugin infrastructure](plugin_api.md) enables
extending Docker by loading, removing and communicating with
third-party components using a generic API. The access authorization subsystem
was built using this mechanism.
Using this subsystem, you don't need to rebuild the Docker daemon to add an
authorization plugin. You can add a plugin to an installed Docker daemon. You do
need to restart the Docker daemon to add a new plugin.
An authorization plugin approves or denies requests to the Docker daemon based
on both the current authentication context and the command context. The
authentication context contains all user details and the authentication method.
The command context contains all the relevant request data.
Authorization plugins must follow the rules described in [Docker Plugin API](plugin_api.md).
Each plugin must reside within directories described under the
[Plugin discovery](plugin_api.md#plugin-discovery) section.
**Note**: the abbreviations `AuthZ` and `AuthN` mean authorization and authentication
respectively.
## Default user authorization mechanism
If TLS is enabled in the [Docker daemon](https://docs.docker.com/engine/security/https/), the default user authorization flow extracts the user details from the certificate subject name.
That is, the `User` field is set to the client certificate subject common name, and the `AuthenticationMethod` field is set to `TLS`.
## Basic architecture
You are responsible for registering your plugin as part of the Docker daemon
startup. You can install multiple plugins and chain them together. This chain
can be ordered. Each request to the daemon passes in order through the chain.
Only when all the plugins grant access to the resource is the access granted.
When an HTTP request is made to the Docker daemon through the CLI or via the
Engine API, the authentication subsystem passes the request to the installed
authentication plugin(s). The request contains the user (caller) and command
context. The plugin is responsible for deciding whether to allow or deny the
request.
The sequence diagrams below depict an allow and deny authorization flow:
![Authorization Allow flow](images/authz_allow.png)
![Authorization Deny flow](images/authz_deny.png)
Each request sent to the plugin includes the authenticated user, the HTTP
headers, and the request/response body. Only the user name and the
authentication method used are passed to the plugin. Most importantly, no user
credentials or tokens are passed. Finally, not all request/response bodies
are sent to the authorization plugin. Only those request/response bodies where
the `Content-Type` is either `text/*` or `application/json` are sent.
For commands that can potentially hijack the HTTP connection (`HTTP
Upgrade`), such as `exec`, the authorization plugin is only called for the
initial HTTP requests. Once the plugin approves the command, authorization is
not applied to the rest of the flow. Specifically, the streaming data is not
passed to the authorization plugins. For commands that return chunked HTTP
response, such as `logs` and `events`, only the HTTP request is sent to the
authorization plugins.
During request/response processing, some authorization flows might
need to do additional queries to the Docker daemon. To complete such flows,
plugins can call the daemon API similar to a regular user. To enable these
additional queries, the plugin must provide the means for an administrator to
configure proper authentication and security policies.
## Docker client flows
To enable and configure the authorization plugin, the plugin developer must
support the Docker client interactions detailed in this section.
### Setting up Docker daemon
Enable the authorization plugin with a dedicated command line flag in the
`--authorization-plugin=PLUGIN_ID` format. The flag supplies a `PLUGIN_ID`
value. This value can be the plugin's socket or a path to a specification file.
Authorization plugins can be loaded without restarting the daemon. Refer
to the [`dockerd` documentation](../reference/commandline/dockerd.md#configuration-reloading) for more information.
```bash
$ dockerd --authorization-plugin=plugin1 --authorization-plugin=plugin2,...
```
Docker's authorization subsystem supports multiple `--authorization-plugin` parameters.
### Calling authorized command (allow)
```bash
$ docker pull centos
...
f1b10cd84249: Pull complete
...
```
### Calling unauthorized command (deny)
```bash
$ docker pull centos
...
docker: Error response from daemon: authorization denied by plugin PLUGIN_NAME: volumes are not allowed.
```
### Error from plugins
```bash
$ docker pull centos
...
docker: Error response from daemon: plugin PLUGIN_NAME failed with error: AuthZPlugin.AuthZReq: Cannot connect to the Docker daemon. Is the docker daemon running on this host?.
```
## API schema and implementation
In addition to Docker's standard plugin registration method, each plugin
should implement the following two methods:
* `/AuthZPlugin.AuthZReq` This authorize request method is called before the Docker daemon processes the client request.
* `/AuthZPlugin.AuthZRes` This authorize response method is called before the response is returned from Docker daemon to the client.
#### /AuthZPlugin.AuthZReq
**Request**:
```json
{
  "User": "The user identification",
  "UserAuthNMethod": "The authentication method used",
  "RequestMethod": "The HTTP method",
  "RequestURI": "The HTTP request URI",
  "RequestBody": "Byte array containing the raw HTTP request body",
  "RequestHeader": "Byte array containing the raw HTTP request header as a map[string][]string"
}
```
**Response**:
```json
{
  "Allow": "Determines whether the user is allowed or not",
  "Msg": "The authorization message",
  "Err": "The error message if things go wrong"
}
```
#### /AuthZPlugin.AuthZRes
**Request**:
```json
{
  "User": "The user identification",
  "UserAuthNMethod": "The authentication method used",
  "RequestMethod": "The HTTP method",
  "RequestURI": "The HTTP request URI",
  "RequestBody": "Byte array containing the raw HTTP request body",
  "RequestHeader": "Byte array containing the raw HTTP request header as a map[string][]string",
  "ResponseBody": "Byte array containing the raw HTTP response body",
  "ResponseHeader": "Byte array containing the raw HTTP response header as a map[string][]string",
  "ResponseStatusCode": "Response status code"
}
```
**Response**:
```json
{
  "Allow": "Determines whether the user is allowed or not",
  "Msg": "The authorization message",
  "Err": "The error message if things go wrong"
}
```
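To make the request shape concrete, here is a sketch that probes a plugin's `AuthZReq` endpoint directly over its socket; the socket path and all payload values are illustrative placeholders, not a real daemon request:
```bash
# Illustrative probe of an authorization plugin's socket with an empty-ish payload.
$ curl -s -XPOST \
    -H "Content-Type: application/json" \
    --unix-socket /run/docker/plugins/authz-plugin.sock \
    -d '{"User":"alice","UserAuthNMethod":"TLS","RequestMethod":"POST","RequestURI":"/v1.30/containers/create"}' \
    http://localhost/AuthZPlugin.AuthZReq
```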
### Request authorization
Each plugin must support two request authorization message formats, one from the daemon to the plugin and one from the plugin to the daemon. The tables below detail the content expected in each message.
#### Daemon -> Plugin
Name | Type | Description
-----------------------|-------------------|-------------------------------------------------------
User | string | The user identification
Authentication method | string | The authentication method used
Request method | enum | The HTTP method (GET/DELETE/POST)
Request URI | string | The HTTP request URI including API version (e.g., v1.17/containers/json)
Request headers | map[string]string | Request headers as key value pairs (without the authorization header)
Request body | []byte | Raw request body
#### Plugin -> Daemon
Name | Type | Description
--------|--------|----------------------------------------------------------------------------------
Allow | bool | Boolean value indicating whether the request is allowed or denied
Msg | string | Authorization message (will be returned to the client in case the access is denied)
Err | string | Error message (will be returned to the client in case the plugin encounters an error. The string value supplied may appear in logs, so it should not include confidential information)
### Response authorization
The plugin must support two authorization message formats, one from the daemon to the plugin and one from the plugin to the daemon. The tables below detail the content expected in each message.
#### Daemon -> Plugin
Name | Type | Description
----------------------- |------------------ |----------------------------------------------------
User | string | The user identification
Authentication method | string | The authentication method used
Request method | string | The HTTP method (GET/DELETE/POST)
Request URI | string | The HTTP request URI including API version (e.g., v1.17/containers/json)
Request headers | map[string]string | Request headers as key value pairs (without the authorization header)
Request body | []byte | Raw request body
Response status code | int | Status code from the docker daemon
Response headers | map[string]string | Response headers as key value pairs
Response body | []byte | Raw docker daemon response body
#### Plugin -> Daemon
Name | Type | Description
--------|--------|----------------------------------------------------------------------------------
Allow | bool | Boolean value indicating whether the response is allowed or denied
Msg | string | Authorization message (will be returned to the client in case the access is denied)
Err | string | Error message (will be returned to the client in case the plugin encounters an error. The string value supplied may appear in logs, so it should not include confidential information)

vendor/github.com/docker/cli/docs/extend/plugins_graphdriver.md

@@ -0,0 +1,403 @@
---
description: "How to manage image and container filesystems with external plugins"
keywords: "Examples, Usage, storage, image, docker, data, graph, plugin, api"
advisory: experimental
---
<!-- This file is maintained within the docker/cli GitHub
repository at https://github.com/docker/cli/. Make all
pull requests against that repo. If you see this file in
another repository, consider it read-only there, as it will
periodically be overwritten by the definitive file. Pull
requests which include edits to this file in other repositories
will be rejected.
-->
# Graphdriver plugins
## Changelog
### 1.13.0
- Support v2 plugins
# Docker graph driver plugins
Docker graph driver plugins enable admins to use an external/out-of-process
graph driver for use with Docker engine. This is an alternative to using the
built-in storage drivers, such as aufs/overlay/devicemapper/btrfs.
You need to install and enable the plugin and then restart the Docker daemon
before using the plugin. See the following example for the correct ordering
of steps.
```
$ docker plugin install cpuguy83/docker-overlay2-graphdriver-plugin # this command also enables the driver
<output suppressed>
$ pkill dockerd
$ dockerd --experimental -s cpuguy83/docker-overlay2-graphdriver-plugin
```
# Write a graph driver plugin
See the [plugin documentation](https://docs.docker.com/engine/extend/) for detailed information
on the underlying plugin protocol.
## Graph Driver plugin protocol
If a plugin registers itself as a `GraphDriver` when activated, then it is
expected to provide the rootfs for containers as well as image layer storage.
### /GraphDriver.Init
**Request**:
```json
{
  "Home": "/graph/home/path",
  "Opts": [],
  "UIDMaps": [],
  "GIDMaps": []
}
```
Initialize the graph driver plugin with a home directory and array of options.
These are passed through from the user, but the plugin is not required to parse
or honor them.
The request also includes a list of UID and GID mappings, structured as follows:
```json
{
  "ContainerID": 0,
  "HostID": 0,
  "Size": 0
}
```
**Response**:
```json
{
"Err": ""
}
```
Respond with a non-empty string error if an error occurred.
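As with the curl-based socket debugging shown for other plugin types, this call can be simulated by hand; the socket path below is hypothetical:
```bash
# Hypothetical graphdriver plugin socket; the payload mirrors the request above.
$ curl -s -XPOST \
    -H "Content-Type: application/json" \
    --unix-socket /run/docker/plugins/mygraphdriver.sock \
    -d '{"Home":"/graph/home/path","Opts":[],"UIDMaps":[],"GIDMaps":[]}' \
    http://localhost/GraphDriver.Init
```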
### /GraphDriver.Capabilities
**Request**:
```json
{}
```
Get behavioral characteristics of the graph driver. If a plugin does not handle
this request, the engine will use default values for all capabilities.
**Response**:
```json
{
  "ReproducesExactDiffs": false
}
```
Respond with values of capabilities:
* **ReproducesExactDiffs** Defaults to false. Flags that this driver is capable
of reproducing exactly equivalent diffs for read-only filesystem layers.
### /GraphDriver.Create
**Request**:
```json
{
  "ID": "46fe8644f2572fd1e505364f7581e0c9dbc7f14640bd1fb6ce97714fb6fc5187",
  "Parent": "2cd9c322cb78a55e8212aa3ea8425a4180236d7106938ec921d0935a4b8ca142",
  "MountLabel": "",
  "StorageOpt": {}
}
```
Create a new, empty, read-only filesystem layer with the specified
`ID`, `Parent` and `MountLabel`. If `Parent` is an empty string, there is no
parent layer. `StorageOpt` is a map of strings which indicates storage options.
**Response**:
```json
{
"Err": ""
}
```
Respond with a non-empty string error if an error occurred.
### /GraphDriver.CreateReadWrite
**Request**:
```json
{
  "ID": "46fe8644f2572fd1e505364f7581e0c9dbc7f14640bd1fb6ce97714fb6fc5187",
  "Parent": "2cd9c322cb78a55e8212aa3ea8425a4180236d7106938ec921d0935a4b8ca142",
  "MountLabel": "",
  "StorageOpt": {}
}
```
Similar to `/GraphDriver.Create` but creates a read-write filesystem layer.
### /GraphDriver.Remove
**Request**:
```json
{
"ID": "46fe8644f2572fd1e505364f7581e0c9dbc7f14640bd1fb6ce97714fb6fc5187"
}
```
Remove the filesystem layer with this given `ID`.
**Response**:
```json
{
"Err": ""
}
```
Respond with a non-empty string error if an error occurred.
### /GraphDriver.Get
**Request**:
```json
{
  "ID": "46fe8644f2572fd1e505364f7581e0c9dbc7f14640bd1fb6ce97714fb6fc5187",
  "MountLabel": ""
}
```
Get the mountpoint for the layered filesystem referred to by the given `ID`.
**Response**:
```json
{
  "Dir": "/var/mygraph/46fe8644f2572fd1e505364f7581e0c9dbc7f14640bd1fb6ce97714fb6fc5187",
  "Err": ""
}
```
Respond with the absolute path to the mounted layered filesystem.
Respond with a non-empty string error if an error occurred.
### /GraphDriver.Put
**Request**:
```json
{
"ID": "46fe8644f2572fd1e505364f7581e0c9dbc7f14640bd1fb6ce97714fb6fc5187"
}
```
Release the system resources for the specified `ID`, such as unmounting the
filesystem layer.
**Response**:
```json
{
"Err": ""
}
```
Respond with a non-empty string error if an error occurred.
### /GraphDriver.Exists
**Request**:
```json
{
"ID": "46fe8644f2572fd1e505364f7581e0c9dbc7f14640bd1fb6ce97714fb6fc5187"
}
```
Determine if a filesystem layer with the specified `ID` exists.
**Response**:
```json
{
"Exists": true
}
```
Respond with a boolean for whether or not the filesystem layer with the specified
`ID` exists.
### /GraphDriver.Status
**Request**:
```json
{}
```
Get low-level diagnostic information about the graph driver.
**Response**:
```json
{
"Status": [[]]
}
```
Respond with a 2-D array with key/value pairs for the underlying status
information.
### /GraphDriver.GetMetadata
**Request**:
```json
{
"ID": "46fe8644f2572fd1e505364f7581e0c9dbc7f14640bd1fb6ce97714fb6fc5187"
}
```
Get low-level diagnostic information about the layered filesystem with the
specified `ID`.
**Response**:
```json
{
  "Metadata": {},
  "Err": ""
}
```
Respond with a set of key/value pairs containing the low-level diagnostic
information about the layered filesystem.
Respond with a non-empty string error if an error occurred.
### /GraphDriver.Cleanup
**Request**:
```json
{}
```
Perform necessary tasks to release resources held by the plugin, such as
unmounting all the layered file systems.
**Response**:
```json
{
"Err": ""
}
```
Respond with a non-empty string error if an error occurred.
### /GraphDriver.Diff
**Request**:
```json
{
  "ID": "46fe8644f2572fd1e505364f7581e0c9dbc7f14640bd1fb6ce97714fb6fc5187",
  "Parent": "2cd9c322cb78a55e8212aa3ea8425a4180236d7106938ec921d0935a4b8ca142"
}
```
Get an archive of the changes between the filesystem layers specified by the `ID`
and `Parent`. `Parent` may be an empty string, in which case there is no parent.
**Response**:
```
{% raw %}
{{ TAR STREAM }}
{% endraw %}
```
### /GraphDriver.Changes
**Request**:
```json
{
  "ID": "46fe8644f2572fd1e505364f7581e0c9dbc7f14640bd1fb6ce97714fb6fc5187",
  "Parent": "2cd9c322cb78a55e8212aa3ea8425a4180236d7106938ec921d0935a4b8ca142"
}
```
Get a list of changes between the filesystem layers specified by the `ID` and
`Parent`. If `Parent` is an empty string, there is no parent.
**Response**:
```json
{
"Changes": [{}],
"Err": ""
}
```
Respond with a list of changes. The structure of a change is:
```json
{
  "Path": "/some/path",
  "Kind": 0
}
```
Where the `Path` is the filesystem path within the layered filesystem that is
changed and `Kind` is an integer specifying the type of change that occurred:
- 0 - Modified
- 1 - Added
- 2 - Deleted
Respond with a non-empty string error if an error occurred.
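For reference, these change kinds map naturally onto a small Go enum. A sketch
with illustrative type and constant names (only the numeric values are defined
by the protocol):

```go
package graphdriver

// ChangeKind is the type of change reported by /GraphDriver.Changes.
type ChangeKind int

const (
	ChangeModify ChangeKind = iota // 0 - Modified
	ChangeAdd                      // 1 - Added
	ChangeDelete                   // 2 - Deleted
)

// Change is one entry in the Changes list.
type Change struct {
	Path string     // path within the layered filesystem
	Kind ChangeKind // type of change that occurred
}
```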
### /GraphDriver.ApplyDiff
**Request**:
```
{% raw %}
{{ TAR STREAM }}
{% endraw %}
```
Extract the changeset from the given diff into the layer with the specified
`ID` and `Parent`. A sketch of a handler follows the response below.
**Query Parameters**:
- id (required) - the `ID` of the new filesystem layer to extract the diff to
- parent (required) - the `Parent` of the given `ID`
**Response**:
```json
{
"Size": 512366,
"Err": ""
}
```
Respond with the size of the new layer in bytes.
Respond with a non-empty string error if an error occurred.
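One way an `ApplyDiff` handler might look in Go, assuming `ApplyLayer` from
docker's `pkg/archive` to unpack the stream and report its size; the
`/var/mygraph` layout and the handler name are illustrative only.

```go
package main

import (
	"encoding/json"
	"net/http"
	"path/filepath"

	"github.com/docker/docker/pkg/archive"
)

type applyDiffResponse struct {
	Size int64
	Err  string
}

func applyDiffHandler(w http.ResponseWriter, r *http.Request) {
	// id and parent arrive as query parameters; the body is the tar stream.
	id := r.URL.Query().Get("id")
	dest := filepath.Join("/var/mygraph", id) // illustrative on-disk layout

	var resp applyDiffResponse
	size, err := archive.ApplyLayer(dest, r.Body)
	if err != nil {
		resp.Err = err.Error()
	}
	resp.Size = size
	json.NewEncoder(w).Encode(resp)
}

func main() {
	http.HandleFunc("/GraphDriver.ApplyDiff", applyDiffHandler)
	http.ListenAndServe("127.0.0.1:8080", nil)
}
```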
### /GraphDriver.DiffSize
**Request**:
```json
{
"ID": "46fe8644f2572fd1e505364f7581e0c9dbc7f14640bd1fb6ce97714fb6fc5187",
"Parent": "2cd9c322cb78a55e8212aa3ea8425a4180236d7106938ec921d0935a4b8ca142"
}
```
Calculate the size of the changes between the layers specified by `ID` and `Parent`.
**Response**:
```json
{
"Size": 512366,
"Err": ""
}
```
Respond with the size of the changes between the specified `ID` and `Parent`.
Respond with a non-empty string error if an error occurred.
---
description: "Log driver plugins."
keywords: "Examples, Usage, plugins, docker, documentation, user guide, logging"
---
<!-- This file is maintained within the docker/cli GitHub
repository at https://github.com/docker/cli/. Make all
pull requests against that repo. If you see this file in
another repository, consider it read-only there, as it will
periodically be overwritten by the definitive file. Pull
requests which include edits to this file in other repositories
will be rejected.
-->
# Docker log driver plugins
This document describes logging driver plugins for Docker.
Logging drivers enable users to forward container logs to another service for
processing. Docker includes several logging drivers as built-ins, but can
never hope to support all use cases with built-in drivers. Plugins allow Docker
to support a wide range of logging services without having to embed client
libraries for these services in the main Docker codebase. See the
[plugin documentation](legacy_plugins.md) for more information.
## Create a logging plugin
The main interface for logging plugins uses the same JSON+HTTP RPC protocol used
by other plugin types. See the
[example](https://github.com/cpuguy83/docker-log-driver-test) plugin for a
reference implementation of a logging plugin. The example wraps the built-in
`jsonfilelog` log driver.
## LogDriver protocol
Logging plugins must register as a `LogDriver` during plugin activation. Once
activated, users can specify the plugin as a log driver.
There are two HTTP endpoints that logging plugins must implement:
### `/LogDriver.StartLogging`
Signals to the plugin that a container is starting and that the plugin should
start receiving its logs.
Logs will be streamed over the file defined in the request. On Linux this file
is a FIFO. Logging plugins are not currently supported on Windows.
**Request**:
```json
{
"File": "/path/to/file/stream",
"Info": {
"ContainerID": "123456"
}
}
```
`File` is the path to the log stream that needs to be consumed. Each call to
`StartLogging` should provide a different file path, even for a container
that the plugin has already received logs for. The file is created by
Docker with a randomly generated name.
`Info` contains details about the container being logged. It is fairly
free-form, but is defined by the following struct definition:
```go
type Info struct {
Config map[string]string
ContainerID string
ContainerName string
ContainerEntrypoint string
ContainerArgs []string
ContainerImageID string
ContainerImageName string
ContainerCreated time.Time
ContainerEnv []string
ContainerLabels map[string]string
LogPath string
DaemonName string
}
```
`ContainerID` will always be supplied with this struct, but other fields may be
empty or missing.
**Response**
```json
{
"Err": ""
}
```
If an error occurred during this request, add an error message to the `Err` field
in the response. If no error occurred, you can either send an empty response (`{}`)
or an empty value for the `Err` field.
The driver should at this point be consuming log messages from the passed-in file.
If messages are unconsumed, the container may block while trying to
write to its stdio streams.
Log stream messages are encoded as protocol buffers. The protobuf definitions are
in the
[docker repository](https://github.com/docker/docker/blob/master/api/types/plugins/logdriver/entry.proto).
Since protocol buffers are not self-delimited, you must decode them from the
stream using the following format:
```
[size][message]
```
Here `size` is a 4-byte, big-endian, binary-encoded uint32 that defines the
size of the next message, and `message` is the actual log entry.
A reference Go implementation of a stream encoder/decoder can be found
[here](https://github.com/docker/docker/blob/master/api/types/plugins/logdriver/io.go).
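A decoder for this framing is short. The sketch below uses the generated
`LogEntry` type from the protobuf definitions linked above, and assumes the
gogo protobuf runtime that the docker codebase itself uses:

```go
package logplugin

import (
	"encoding/binary"
	"io"

	"github.com/docker/docker/api/types/plugins/logdriver"
	"github.com/gogo/protobuf/proto"
)

// consume reads framed LogEntry messages from r until EOF, calling
// handle for each one. The reference io.go linked above additionally
// handles buffer reuse and maximum message sizes.
func consume(r io.Reader, handle func(*logdriver.LogEntry) error) error {
	sizeBuf := make([]byte, 4)
	for {
		// Each frame starts with a 4-byte big endian uint32 length prefix.
		if _, err := io.ReadFull(r, sizeBuf); err != nil {
			if err == io.EOF {
				return nil // clean end of stream
			}
			return err
		}
		msg := make([]byte, binary.BigEndian.Uint32(sizeBuf))
		if _, err := io.ReadFull(r, msg); err != nil {
			return err
		}
		var entry logdriver.LogEntry
		if err := proto.Unmarshal(msg, &entry); err != nil {
			return err
		}
		if err := handle(&entry); err != nil {
			return err
		}
	}
}
```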
### `/LogDriver.StopLogging`
Signals to the plugin to stop collecting logs from the defined file.
Once a response is received, the file will be removed by Docker. You must make
sure to collect all logs on the stream before responding to this request or risk
losing log data.
A request on this endpoint does not mean that the container has been removed,
only that it has stopped.
**Request**:
```json
{
"File": "/path/to/file/stream"
}
```
**Response**:
```json
{
"Err": ""
}
```
If an error occurred during this request, add an error message to the `Err` field
in the response. If no error occurred, you can either send an empty response (`{}`)
or an empty value for the `Err` field.
## Optional endpoints
Logging plugins can implement two extra logging endpoints:
### `/LogDriver.Capabilities`
Defines the capabilities of the log driver. You must implement this endpoint for
Docker to be able to take advantage of any of the defined capabilities.
**Request**:
```json
{}
```
**Response**:
```json
{
"ReadLogs": true
}
```
Supported capabilities:
- `ReadLogs` - this tells Docker that the plugin is capable of reading back logs
to clients. Plugins that report that they support `ReadLogs` must implement the
`/LogDriver.ReadLogs` endpoint
### `/LogDriver.ReadLogs`
Reads back logs to the client. This is used when `docker logs <container>` is
called.
In order for Docker to use this endpoint, the plugin must advertise this
capability when `/LogDriver.Capabilities` is called.
**Request**:
```json
{
"ReadConfig": {},
"Info": {
"ContainerID": "123456"
}
}
```
`ReadConfig` is the set of options for reading. It is defined by the following
Go struct:
```go
type ReadConfig struct {
Since time.Time
Tail int
Follow bool
}
```
- `Since` defines the oldest log that should be sent.
- `Tail` defines the number of lines to read (e.g. like the command `tail -n 10`)
- `Follow` signals that the client wants to stay attached to receive new log messages
as they come in once the existing logs have been read.
`Info` is the same type defined in `/LogDriver.StartLogging`. It should be used
to determine what set of logs to read.
**Response**:
```
{% raw %}{{ log stream }}{% endraw %}
```
The response should be the encoded log messages, using the same format as the
messages that the plugin consumed from Docker.
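Writing that stream is the inverse of the decoding shown for
`/LogDriver.StartLogging`; a hedged sketch, again assuming the gogo protobuf
runtime:

```go
package logplugin

import (
	"encoding/binary"
	"io"

	"github.com/docker/docker/api/types/plugins/logdriver"
	"github.com/gogo/protobuf/proto"
)

// writeEntry frames one LogEntry as [size][message], mirroring the
// format the plugin consumes in StartLogging.
func writeEntry(w io.Writer, entry *logdriver.LogEntry) error {
	msg, err := proto.Marshal(entry)
	if err != nil {
		return err
	}
	var size [4]byte
	binary.BigEndian.PutUint32(size[:], uint32(len(msg)))
	if _, err := w.Write(size[:]); err != nil {
		return err
	}
	_, err = w.Write(msg)
	return err
}
```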
---
description: "Metrics plugins."
keywords: "Examples, Usage, plugins, docker, documentation, user guide, metrics"
---
<!-- This file is maintained within the docker/cli GitHub
repository at https://github.com/docker/cli/. Make all
pull requests against that repo. If you see this file in
another repository, consider it read-only there, as it will
periodically be overwritten by the definitive file. Pull
requests which include edits to this file in other repositories
will be rejected.
-->
# Docker metrics collector plugins
Docker exposes internal metrics in the Prometheus format. Metrics plugins
enable accessing these metrics in a consistent way by providing a Unix
socket at a predefined path where the plugin can scrape the metrics.
> **Note**: While the plugin interface for metrics is non-experimental, the naming
of the metrics and metric labels is still considered experimental and may change
in a future version.
## Creating a metrics plugin
You must currently set `PropagatedMount` in the plugin `config.json` to
`/run/docker`. This allows the plugin to receive updated mounts
(the bind-mounted socket) from Docker after the plugin is already configured.
## MetricsCollector protocol
Metrics plugins must register as implementing the `MetricsCollector` interface
in `config.json`.
On Unix platforms, the socket is located at `/run/docker/metrics.sock` in the
plugin's rootfs.
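From inside the plugin's rootfs, scraping is then an ordinary HTTP GET over
that unix socket. A sketch, assuming the daemon serves the usual Prometheus
`/metrics` path on it:

```go
package main

import (
	"context"
	"fmt"
	"io"
	"net"
	"net/http"
)

func main() {
	// Dial the propagated metrics socket regardless of the URL's host.
	client := &http.Client{
		Transport: &http.Transport{
			DialContext: func(ctx context.Context, _, _ string) (net.Conn, error) {
				var d net.Dialer
				return d.DialContext(ctx, "unix", "/run/docker/metrics.sock")
			},
		},
	}
	resp, err := client.Get("http://localhost/metrics")
	if err != nil {
		panic(err)
	}
	defer resp.Body.Close()
	body, err := io.ReadAll(resp.Body)
	if err != nil {
		panic(err)
	}
	fmt.Print(string(body)) // Prometheus text-format samples
}
```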
`MetricsCollector` must implement two endpoints:
### `MetricsCollector.StartMetrics`
Signals to the plugin that the metrics socket is now available for scraping.
**Request**
```json
{}
```
The request has no payload.
**Response**
```json
{
"Err": ""
}
```
If an error occurred during this request, add an error message to the `Err` field
in the response. If no error occurred, you can either send an empty response (`{}`)
or an empty value for the `Err` field. Errors will only be logged.
### `MetricsCollector.StopMetrics`
Signals to the plugin that the metrics socket is no longer available.
This may happen when the daemon is shutting down.
**Request**
```json
{}
```
The request has no payload.
**Response**
```json
{
"Err": ""
}
```
If an error occurred during this request, add an error message to the `Err` field
in the response. If no error occurred, you can either send an empty response (`{}`)
or an empty value for the `Err` field. Errors will only be logged.
---
description: "Network driver plugins."
keywords: "Examples, Usage, plugins, docker, documentation, user guide"
---
<!-- This file is maintained within the docker/cli GitHub
repository at https://github.com/docker/cli/. Make all
pull requests against that repo. If you see this file in
another repository, consider it read-only there, as it will
periodically be overwritten by the definitive file. Pull
requests which include edits to this file in other repositories
will be rejected.
-->
# Docker network driver plugins
This document describes Docker Engine network driver plugins generally
available in Docker Engine. To view information on plugins
managed by Docker Engine, refer to [Docker Engine plugin system](index.md).
Docker Engine network plugins enable Engine deployments to be extended to
support a wide range of networking technologies, such as VXLAN, IPVLAN, MACVLAN
or something completely different. Network driver plugins are supported via the
LibNetwork project. Each plugin is implemented as a "remote driver" for
LibNetwork, which shares plugin infrastructure with Engine. Effectively, network
driver plugins are activated in the same way as other plugins, and use the same
kind of protocol.
## Network plugins and swarm mode
[Legacy plugins](legacy_plugins.md) do not work in swarm mode. However,
plugins written using the [v2 plugin system](index.md) do work in swarm mode, as
long as they are installed on each swarm worker node.
## Use network driver plugins
The means of installing and running a network driver plugin depend on the
particular plugin. So, be sure to install your plugin according to the
instructions obtained from the plugin developer.
Once running however, network driver plugins are used just like the built-in
network drivers: by being mentioned as a driver in network-oriented Docker
commands. For example,
$ docker network create --driver weave mynet
Some network driver plugins are listed in [plugins](legacy_plugins.md).
The `mynet` network is now owned by `weave`, so subsequent commands
referring to that network will be sent to the plugin:
$ docker run --network=mynet busybox top
## Find network plugins
Network plugins are written by third parties, and are published by those
third parties, either on
[Docker Store](https://store.docker.com/search?category=network&q=&type=plugin)
or on the third party's site.
## Write a network plugin
Network plugins implement the [Docker plugin
API](plugin_api.md) and the network plugin protocol.
## Network plugin protocol
The network driver protocol, in addition to the plugin activation call, is
documented as part of libnetwork:
[https://github.com/docker/libnetwork/blob/master/docs/remote.md](https://github.com/docker/libnetwork/blob/master/docs/remote.md).
## Related Information
To interact with the Docker maintainers and other interested users, see the IRC channel `#docker-network`.
- [Docker networks feature overview](https://docs.docker.com/engine/userguide/networking/)
- The [LibNetwork](https://github.com/docker/libnetwork) project
---
keywords: "API, Usage, plugins, documentation, developer"
title: Plugins and Services
---
<!-- This file is maintained within the docker/cli GitHub
repository at https://github.com/docker/cli/. Make all
pull requests against that repo. If you see this file in
another repository, consider it read-only there, as it will
periodically be overwritten by the definitive file. Pull
requests which include edits to this file in other repositories
will be rejected.
-->
# Using Volume and Network plugins in Docker services
In swarm mode, it is possible to create a service that allows for attaching
to networks or mounting volumes that are backed by plugins. Swarm schedules
services based on plugin availability on a node.
### Volume plugins
In this example, a volume plugin is installed on a swarm worker and a volume
is created using the plugin. In the manager, a service is created with the
relevant mount options. It can be observed that the service is scheduled to
run on the worker node with the said volume plugin and volume. Note that,
node1 is the manager and node2 is the worker.
1. Prepare the manager. On node 1:
```bash
$ docker swarm init
Swarm initialized: current node (dxn1zf6l61qsb1josjja83ngz) is now a manager.
```
2. Join the swarm, then install the plugin and create a volume on the worker. On node 2:
```bash
$ docker swarm join \
--token SWMTKN-1-49nj1cmql0jkz5s954yi3oex3nedyz0fb0xx14ie39trti4wxv-8vxv8rssmk743ojnwacrr2e7c \
192.168.99.100:2377
```
```bash
$ docker plugin install tiborvass/sample-volume-plugin
latest: Pulling from tiborvass/sample-volume-plugin
eb9c16fbdc53: Download complete
Digest: sha256:00b42de88f3a3e0342e7b35fa62394b0a9ceb54d37f4c50be5d3167899994639
Status: Downloaded newer image for tiborvass/sample-volume-plugin:latest
Installed plugin tiborvass/sample-volume-plugin
```
```bash
$ docker volume create -d tiborvass/sample-volume-plugin --name pluginVol
```
3. Create a service using the plugin and volume. On node1:
```bash
$ docker service create --name my-service --mount type=volume,volume-driver=tiborvass/sample-volume-plugin,source=pluginVol,destination=/tmp busybox top
$ docker service ls
z1sj8bb8jnfn my-service replicated 1/1 busybox:latest
```
`docker service ls` shows 1 instance of the service running.
4. Observe the task getting scheduled on node 2:
```bash
{% raw %}
$ docker ps --format '{{.ID}}\t {{.Status}} {{.Names}} {{.Command}}'
83fc1e842599 Up 2 days my-service.1.9jn59qzn7nbc3m0zt1hij12xs "top"
{% endraw %}
```
### Network plugins
In this example, a globally scoped network plugin is installed on both the
swarm manager and worker. A service is created with replicated instances
using the installed plugin. We will observe how the availability of the
plugin determines network creation and container scheduling.
Note that node1 is the manager and node2 is the worker.
1. Install a globally scoped network plugin on both the manager and the
worker. On node1 and node2:
```bash
$ docker plugin install bboreham/weave2
Plugin "bboreham/weave2" is requesting the following privileges:
- network: [host]
- capabilities: [CAP_SYS_ADMIN CAP_NET_ADMIN]
Do you grant the above permissions? [y/N] y
latest: Pulling from bboreham/weave2
7718f575adf7: Download complete
Digest: sha256:2780330cc15644b60809637ee8bd68b4c85c893d973cb17f2981aabfadfb6d72
Status: Downloaded newer image for bboreham/weave2:latest
Installed plugin bboreham/weave2
```
2. Create a network using the plugin on the manager. On node1:
```bash
$ docker network create --driver=bboreham/weave2:latest globalnet
$ docker network ls
NETWORK ID NAME DRIVER SCOPE
qlj7ueteg6ly globalnet bboreham/weave2:latest swarm
```
3. Create a service on the manager with replicas set to 8. Observe that
containers get scheduled on both the manager and the worker.
On node 1:
```bash
$ docker service create --network globalnet --name myservice --replicas=8 mrjana/simpleweb simpleweb
w90drnfzw85nygbie9kb89vpa
```
```bash
$ docker ps
CONTAINER ID IMAGE COMMAND CREATED STATUS PORTS NAMES
87520965206a mrjana/simpleweb@sha256:317d7f221d68c86d503119b0ea12c29de42af0a22ca087d522646ad1069a47a4 "simpleweb" 5 seconds ago Up 4 seconds myservice.4.ytdzpktmwor82zjxkh118uf1v
15e24de0f7aa mrjana/simpleweb@sha256:317d7f221d68c86d503119b0ea12c29de42af0a22ca087d522646ad1069a47a4 "simpleweb" 5 seconds ago Up 4 seconds myservice.2.kh7a9n3iauq759q9mtxyfs9hp
c8c8f0144cdc mrjana/simpleweb@sha256:317d7f221d68c86d503119b0ea12c29de42af0a22ca087d522646ad1069a47a4 "simpleweb" 5 seconds ago Up 4 seconds myservice.6.sjhpj5gr3xt33e3u2jycoj195
2e8e4b2c5c08 mrjana/simpleweb@sha256:317d7f221d68c86d503119b0ea12c29de42af0a22ca087d522646ad1069a47a4 "simpleweb" 5 seconds ago Up 4 seconds myservice.8.2z29zowsghx66u2velublwmrh
```
On node 2:
```bash
$ docker ps
CONTAINER ID IMAGE COMMAND CREATED STATUS PORTS NAMES
53c0ae7c1dae mrjana/simpleweb@sha256:317d7f221d68c86d503119b0ea12c29de42af0a22ca087d522646ad1069a47a4 "simpleweb" 2 seconds ago Up Less than a second myservice.7.x44tvvdm3iwkt9kif35f7ykz1
9b56c627fee0 mrjana/simpleweb@sha256:317d7f221d68c86d503119b0ea12c29de42af0a22ca087d522646ad1069a47a4 "simpleweb" 2 seconds ago Up Less than a second myservice.1.x7n1rm6lltw5gja3ueikze57q
d4f5927ba52c mrjana/simpleweb@sha256:317d7f221d68c86d503119b0ea12c29de42af0a22ca087d522646ad1069a47a4 "simpleweb" 2 seconds ago Up 1 second myservice.5.i97bfo9uc6oe42lymafs9rz6k
478c0d395bd7 mrjana/simpleweb@sha256:317d7f221d68c86d503119b0ea12c29de42af0a22ca087d522646ad1069a47a4 "simpleweb" 2 seconds ago Up Less than a second myservice.3.yr7nkffa48lff1vrl2r1m1ucs
```
4. Scale down the number of instances. On node1:
```bash
$ docker service scale myservice=0
myservice scaled to 0
```
5. Disable and uninstall the plugin on the worker. On node2:
```bash
$ docker plugin rm -f bboreham/weave2
bboreham/weave2
```
6. Scale up the number of instances again. Observe that all containers are
scheduled on the manager and not on the worker, because the plugin is no longer available on the worker.
On node 1:
```bash
$ docker service scale myservice=8
myservice scaled to 8
```
```bash
$ docker ps
CONTAINER ID IMAGE COMMAND CREATED STATUS PORTS NAMES
cf4b0ec2415e mrjana/simpleweb@sha256:317d7f221d68c86d503119b0ea12c29de42af0a22ca087d522646ad1069a47a4 "simpleweb" 39 seconds ago Up 36 seconds myservice.3.r7p5o208jmlzpcbm2ytl3q6n1
57c64a6a2b88 mrjana/simpleweb@sha256:317d7f221d68c86d503119b0ea12c29de42af0a22ca087d522646ad1069a47a4 "simpleweb" 39 seconds ago Up 36 seconds myservice.4.dwoezsbb02ccstkhlqjy2xe7h
3ac68cc4e7b8 mrjana/simpleweb@sha256:317d7f221d68c86d503119b0ea12c29de42af0a22ca087d522646ad1069a47a4 "simpleweb" 39 seconds ago Up 35 seconds myservice.5.zx4ezdrm2nwxzkrwnxthv0284
006c3cb318fc mrjana/simpleweb@sha256:317d7f221d68c86d503119b0ea12c29de42af0a22ca087d522646ad1069a47a4 "simpleweb" 39 seconds ago Up 36 seconds myservice.8.q0e3umt19y3h3gzo1ty336k5r
dd2ffebde435 mrjana/simpleweb@sha256:317d7f221d68c86d503119b0ea12c29de42af0a22ca087d522646ad1069a47a4 "simpleweb" 39 seconds ago Up 36 seconds myservice.7.a77y3u22prjipnrjg7vzpv3ba
a86c74d8b84b mrjana/simpleweb@sha256:317d7f221d68c86d503119b0ea12c29de42af0a22ca087d522646ad1069a47a4 "simpleweb" 39 seconds ago Up 36 seconds myservice.6.z9nbn14bagitwol1biveeygl7
2846a7850ba0 mrjana/simpleweb@sha256:317d7f221d68c86d503119b0ea12c29de42af0a22ca087d522646ad1069a47a4 "simpleweb" 39 seconds ago Up 37 seconds myservice.2.ypufz2eh9fyhppgb89g8wtj76
e2ec01efcd8a mrjana/simpleweb@sha256:317d7f221d68c86d503119b0ea12c29de42af0a22ca087d522646ad1069a47a4 "simpleweb" 39 seconds ago Up 38 seconds myservice.1.8w7c4ttzr6zcb9sjsqyhwp3yl
```
On node 2:
```bash
$ docker ps
CONTAINER ID IMAGE COMMAND CREATED STATUS PORTS NAMES
```
---
description: "How to manage data with external volume plugins"
keywords: "Examples, Usage, volume, docker, data, volumes, plugin, api"
---
<!-- This file is maintained within the docker/cli GitHub
repository at https://github.com/docker/cli/. Make all
pull requests against that repo. If you see this file in
another repository, consider it read-only there, as it will
periodically be overwritten by the definitive file. Pull
requests which include edits to this file in other repositories
will be rejected.
-->
# Docker volume plugins
Docker Engine volume plugins enable Engine deployments to be integrated with
external storage systems such as Amazon EBS, and enable data volumes to persist
beyond the lifetime of a single Docker host. See the
[plugin documentation](legacy_plugins.md) for more information.
## Changelog
### 1.13.0
- If used as part of the v2 plugin architecture, mountpoints that are part of
paths returned by the plugin must be mounted under the directory specified by
`PropagatedMount` in the plugin configuration
([#26398](https://github.com/docker/docker/pull/26398))
### 1.12.0
- Add `Status` field to `VolumeDriver.Get` response
([#21006](https://github.com/docker/docker/pull/21006#))
- Add `VolumeDriver.Capabilities` to get capabilities of the volume driver
([#22077](https://github.com/docker/docker/pull/22077))
### 1.10.0
- Add `VolumeDriver.Get` which gets the details about the volume
([#16534](https://github.com/docker/docker/pull/16534))
- Add `VolumeDriver.List` which lists all volumes owned by the driver
([#16534](https://github.com/docker/docker/pull/16534))
### 1.8.0
- Initial support for volume driver plugins
([#14659](https://github.com/docker/docker/pull/14659))
## Command-line changes
To give a container access to a volume, use the `--volume` and `--volume-driver`
flags on the `docker container run` command. The `--volume` (or `-v`) flag
accepts a volume name and path on the host, and the `--volume-driver` flag
accepts a driver type.
```bash
$ docker volume create --driver=flocker volumename
$ docker container run -it --volume volumename:/data busybox sh
```
### `--volume`
The `--volume` (or `-v`) flag takes a value that is in the format
`<volume_name>:<mountpoint>`. The two parts of the value are
separated by a colon (`:`) character.
- The volume name is a human-readable name for the volume, and cannot begin with
a `/` character. It is referred to as `volume_name` in the rest of this topic.
- The `Mountpoint` is the path on the host (v1) or in the plugin (v2) where the
volume has been made available.
### `volumedriver`
Specifying a `volumedriver` in conjunction with a `volumename` allows you to
use plugins such as [Flocker](https://github.com/ScatterHQ/flocker) to manage
volumes external to a single host, such as those on EBS.
## Create a VolumeDriver
The container creation endpoint (`/containers/create`) accepts a `VolumeDriver`
field of type `string`, allowing you to specify the name of the driver. If not
specified, it defaults to `"local"` (the default driver for local volumes).
## Volume plugin protocol
If a plugin registers itself as a `VolumeDriver` when activated, it must
provide the Docker daemon with writable paths on the host filesystem. The Docker
daemon provides these paths to containers to consume. The Docker daemon makes
the volumes available by bind-mounting the provided paths into the containers.
> **Note**: Volume plugins should *not* write data to the `/var/lib/docker/`
> directory, including `/var/lib/docker/volumes`. The `/var/lib/docker/`
> directory is reserved for Docker.
### `/VolumeDriver.Create`
**Request**:
```json
{
"Name": "volume_name",
"Opts": {}
}
```
Instruct the plugin that the user wants to create a volume, given a
user-specified volume name. The plugin does not need to actually manifest the
volume on the filesystem yet (until `Mount` is called); see the sketch at the
end of this section.
`Opts` is a map of driver specific options passed through from the user request.
**Response**:
```json
{
"Err": ""
}
```
Respond with a string error if an error occurred.
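Because the volume only has to exist by `Mount` time, `Create` can be as
simple as recording the name and options. A sketch with illustrative names:

```go
package volplugin

import "sync"

// driver records volumes and their Opts at Create time and defers any
// real provisioning until Mount.
type driver struct {
	mu      sync.Mutex
	volumes map[string]map[string]string // volume name -> creation opts
}

func (d *driver) Create(name string, opts map[string]string) error {
	d.mu.Lock()
	defer d.mu.Unlock()
	if d.volumes == nil {
		d.volumes = make(map[string]map[string]string)
	}
	d.volumes[name] = opts // remember the request; provision lazily in Mount
	return nil
}
```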
### `/VolumeDriver.Remove`
**Request**:
```json
{
"Name": "volume_name"
}
```
Delete the specified volume from disk. This request is issued when a user
invokes `docker rm -v` to remove volumes associated with a container.
**Response**:
```json
{
"Err": ""
}
```
Respond with a string error if an error occurred.
### `/VolumeDriver.Mount`
**Request**:
```json
{
"Name": "volume_name",
"ID": "b87d7442095999a92b65b3d9691e697b61713829cc0ffd1bb72e4ccd51aa4d6c"
}
```
Docker requires the plugin to provide a volume, given a user-specified volume
name. `Mount` is called once per container start. If the same `volume_name` is requested
more than once, the plugin may need to keep track of each new mount request and provision
at the first mount request and deprovision at the last corresponding unmount request
(see the reference-counting sketch at the end of this section).
**Response**:
- **v1**:
```json
{
"Mountpoint": "/path/to/directory/on/host",
"Err": ""
}
```
- **v2**:
```json
{
"Mountpoint": "/path/under/PropagatedMount",
"Err": ""
}
```
`Mountpoint` is the path on the host (v1) or in the plugin (v2) where the volume
has been made available.
`Err` is either empty or contains an error string.
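The request tracking suggested above amounts to per-volume reference counting.
A hedged sketch, in which a plain counter stands in for tracking the
individual `ID`s and provisioning is left abstract:

```go
package volplugin

import "sync"

// mounts counts how many active callers hold each volume: provision on
// the first Mount, deprovision on the last Unmount.
type mounts struct {
	mu     sync.Mutex
	counts map[string]int
}

func (m *mounts) mount(name string, provision func() error) error {
	m.mu.Lock()
	defer m.mu.Unlock()
	if m.counts == nil {
		m.counts = make(map[string]int)
	}
	if m.counts[name] == 0 {
		if err := provision(); err != nil { // first mount: provision
			return err
		}
	}
	m.counts[name]++
	return nil
}

func (m *mounts) unmount(name string, deprovision func() error) error {
	m.mu.Lock()
	defer m.mu.Unlock()
	if m.counts[name] == 0 {
		return nil // not mounted; nothing to release
	}
	m.counts[name]--
	if m.counts[name] == 0 {
		return deprovision() // last unmount: deprovision
	}
	return nil
}
```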
### `/VolumeDriver.Path`
**Request**:
```json
{
"Name": "volume_name"
}
```
Request the path to the volume with the given `volume_name`.
**Response**:
- **v1**:
```json
{
"Mountpoint": "/path/to/directory/on/host",
"Err": ""
}
```
- **v2**:
```json
{
"Mountpoint": "/path/under/PropagatedMount",
"Err": ""
}
```
Respond with the path on the host (v1) or inside the plugin (v2) where the
volume has been made available, and/or a string error if an error occurred.
`Mountpoint` is optional. However, the plugin may be queried again later if one
is not provided.
### `/VolumeDriver.Unmount`
**Request**:
```json
{
"Name": "volume_name",
"ID": "b87d7442095999a92b65b3d9691e697b61713829cc0ffd1bb72e4ccd51aa4d6c"
}
```
Docker is no longer using the named volume. `Unmount` is called once per
container stop. The plugin may deduce that it is safe to deprovision the volume
at this point.
`ID` is the unique ID of the caller that requested the corresponding mount.
**Response**:
```json
{
"Err": ""
}
```
Respond with a string error if an error occurred.
### `/VolumeDriver.Get`
**Request**:
```json
{
"Name": "volume_name"
}
```
Get info about `volume_name`.
**Response**:
- **v1**:
```json
{
"Volume": {
"Name": "volume_name",
"Mountpoint": "/path/to/directory/on/host",
"Status": {}
},
"Err": ""
}
```
- **v2**:
```json
{
"Volume": {
"Name": "volume_name",
"Mountpoint": "/path/under/PropagatedMount",
"Status": {}
},
"Err": ""
}
```
Respond with a string error if an error occurred. `Mountpoint` and `Status` are
optional.
### `/VolumeDriver.List`
**Request**:
```json
{}
```
Get the list of volumes registered with the plugin.
**Response**:
- **v1**:
```json
{
"Volumes": [
{
"Name": "volume_name",
"Mountpoint": "/path/to/directory/on/host"
}
],
"Err": ""
}
```
- **v2**:
```json
{
"Volumes": [
{
"Name": "volume_name",
"Mountpoint": "/path/under/PropagatedMount"
}
],
"Err": ""
}
```
Respond with a string error if an error occurred. `Mountpoint` is optional.
### `/VolumeDriver.Capabilities`
**Request**:
```json
{}
```
Get the list of capabilities the driver supports.
The driver is not required to implement `Capabilities`. If it is not
implemented, the default values are used.
**Response**:
```json
{
"Capabilities": {
"Scope": "global"
}
}
```
Supported scopes are `global` and `local`. Any other value in `Scope` will be
ignored, and `local` is used. `Scope` allows cluster managers to handle the
volume in different ways. For instance, a scope of `global` signals to the
cluster manager that it only needs to create the volume once instead of on each
Docker host. More capabilities may be added in the future.
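For a cluster-wide driver, the corresponding handler is little more than a
fixed response. A sketch with illustrative names:

```go
package volplugin

import (
	"encoding/json"
	"net/http"
)

type capabilitiesResponse struct {
	Capabilities struct {
		Scope string
	}
}

func capabilitiesHandler(w http.ResponseWriter, r *http.Request) {
	var resp capabilitiesResponse
	resp.Capabilities.Scope = "global" // create the volume once per cluster
	json.NewEncoder(w).Encode(resp)
}
```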