add better generate

Signed-off-by: Jess Frazelle <acidburn@microsoft.com>
Jess Frazelle 2018-03-20 01:33:56 -04:00
parent 3fc6abf56b
commit cdd93563f5
5655 changed files with 1187011 additions and 392 deletions


@ -0,0 +1,45 @@
# ARM support
The ARM support should be considered experimental. It will be extended step by step in the coming weeks.
Building a Docker Development Image works in the same fashion as for the Intel platform (x86_64).
Currently we have initial support for 32-bit ARMv7 devices.
To work with the Docker Development Image you have to clone the docker/docker repo on a supported device.
The device needs to have a Docker Engine installed to build the Docker Development Image.
From the root of the docker/docker repo, one can use `make` to execute the following make targets:
- make validate
- make binary
- make build
- make deb
- make bundles
- make default
- make shell
- make test-unit
- make test-integration
- make
The Makefile does include logic to determine on which OS and architecture the Docker Development Image is built.
Based on OS and architecture it chooses the correct Dockerfile.
For the ARM 32bit architecture it uses `Dockerfile.armhf`.
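As a sketch, the selection amounts to something like the following (the exact mapping is an assumption here; the Makefile itself is the authoritative source):

```shell
# pick_dockerfile: sketch of the Makefile's architecture dispatch.
# The armv7l/armhf mapping is assumed; check the Makefile for the
# authoritative rules.
pick_dockerfile() {
    case "$1" in
        armv7l|armhf) echo "Dockerfile.armhf" ;;
        *)            echo "Dockerfile" ;;
    esac
}

pick_dockerfile "$(uname -m)"
```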
So for example in order to build a Docker binary one has to:
1. clone the Docker/Docker repository on an ARM device `git clone https://github.com/docker/docker.git`
2. change into the checked out repository with `cd docker`
3. execute `make binary` to create a Docker Engine binary for ARM
## Kernel modules
A few libnetwork integration tests require that the kernel be
configured with the "dummy" network interface module and that the
module be loaded. However, the dummy module may not be loaded automatically.
To load the kernel module permanently, run these commands as `root`:
```bash
modprobe dummy
echo "dummy" >> /etc/modules
```
On some systems you also have to sync your kernel modules:
```bash
oc-sync-kernel-modules
depmod
```


@ -0,0 +1,35 @@
Branches and tags
=================
Note: details of the release process for the Engine are documented in the
[RELEASE-CHECKLIST](https://github.com/docker/docker/blob/master/project/RELEASE-CHECKLIST.md).
# Branches
The docker/docker repository should normally have only three living branches at all times, including
the regular `master` branch:
## `docs` branch
The `docs` branch supports documentation updates between product releases. This branch allows us to
decouple documentation releases from product releases.
## `release` branch
The `release` branch contains the last _released_ version of the code for the project.
The `release` branch is only updated at each public release of the project. The mechanism for this
is that the release is materialized by a pull request against the `release` branch which lives for
the duration of the code freeze period. When this pull request is merged, the `release` branch gets
updated, and its new state is tagged accordingly.
# Tags
Any public release of a compiled binary, with the logical exception of nightly builds, should have
a corresponding tag in the repository.
The general format of a tag is `vX.Y.Z[-suffix[N]]`:
- All of `X`, `Y`, `Z` must be specified (example: `v1.0.0`)
- First release candidate for version `1.8.0` should be tagged `v1.8.0-rc1`
- Second alpha release of a product should be tagged `v1.0.0-alpha2`
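A quick way to sanity-check a tag against this format is a sketch like the following (the regular expression is one reading of the rules above, not an official check):

```shell
# valid_tag: true if the tag matches vX.Y.Z with an optional
# -suffix[N] (regex derived from the format described above).
valid_tag() {
    echo "$1" | grep -Eq '^v[0-9]+\.[0-9]+\.[0-9]+(-[a-z]+[0-9]*)?$'
}

valid_tag "v1.8.0-rc1" && echo "ok"
```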


@ -0,0 +1 @@
../CONTRIBUTING.md


@ -0,0 +1,120 @@
# Moby project governance
Moby projects are governed by the [Moby Technical Steering Committee (TSC)](https://github.com/moby/tsc).
See the Moby TSC [charter](https://github.com/moby/tsc/blob/master/README.md) for
further information on the role of the TSC and procedures for escalation
of technical issues or concerns.
Contact [any Moby TSC member](https://github.com/moby/tsc/blob/master/MEMBERS.md) with your questions/concerns about the governance or a specific technical
issue that you feel requires escalation.
## Project maintainers
The current maintainers of the moby/moby repository are listed in the
[MAINTAINERS](/MAINTAINERS) file.
There are different types of maintainers, with different responsibilities, but
all maintainers have 3 things in common:
1. They share responsibility in the project's success.
2. They have made a long-term, recurring time investment to improve the project.
3. They spend that time doing whatever needs to be done, not necessarily what is the most interesting or fun.
Maintainers are often under-appreciated, because their work is less visible.
It's easy to recognize a really cool and technically advanced feature. It's harder
to appreciate the absence of bugs, the slow but steady improvement in stability,
or the reliability of a release process. But those things distinguish a good
project from a great one.
### Adding maintainers
Maintainers are first and foremost contributors who have shown their
commitment to the long term success of a project. Contributors who want to
become maintainers first demonstrate commitment to the project by contributing
code, reviewing others' work, and triaging issues on a regular basis for at
least three months.
The contributions alone don't make you a maintainer. You also need to earn the
trust of the current maintainers and other project contributors that your
decisions and actions are in the best interest of the project.
Periodically, the existing maintainers curate a list of contributors who have
shown regular activity on the project over the prior months. From this
list, maintainer candidates are selected and proposed on the maintainers
mailing list.
After a candidate is announced on the maintainers mailing list, the
existing maintainers discuss the candidate over the next 5 business days,
provide feedback, and vote. At least 66% of the current maintainers must
vote in the affirmative.
If a candidate is approved, a maintainer contacts the candidate to
invite them to open a pull request that adds the contributor to
the MAINTAINERS file. The candidate becomes a maintainer once the pull
request is merged.
### Removing maintainers
Maintainers can be removed from the project, either at their own request
or due to [project inactivity](#inactive-maintainer-policy).
#### How to step down
Life priorities, interests, and passions can change. If you're a maintainer but
feel you must remove yourself from the list, inform other maintainers that you
intend to step down, and if possible, help find someone to pick up your work.
At the very least, ensure your work can be continued where you left off.
After you've informed other maintainers, create a pull request to remove
yourself from the MAINTAINERS file.
#### Inactive maintainer policy
An existing maintainer can be removed if they do not show significant activity
on the project. Periodically, the maintainers review the list of maintainers
and their activity over the last three months.
If a maintainer has shown insufficient activity over this period, a project
representative will contact the maintainer to ask if they want to continue
being a maintainer. If the maintainer decides to step down as a maintainer,
they open a pull request to be removed from the MAINTAINERS file.
If the maintainer wants to continue in this role, but is unable to perform the
required duties, they can be removed with a vote by at least 66% of the current
maintainers. The maintainer under discussion will not be allowed to vote. An
e-mail is sent to the mailing list, inviting maintainers of the project to
vote. The voting period is five business days. Issues related to a maintainer's
performance should be discussed with them among the other maintainers so that
they are not surprised by a pull request removing them. This discussion should
be handled objectively with no ad hominem attacks.
## Project decision making
Short answer: **Everything is a pull request**.
The Moby core engine project is an open-source project with an open design
philosophy. This means that the repository is the source of truth for **every**
aspect of the project, including its philosophy, design, road map, and APIs.
*If it's part of the project, it's in the repo. If it's in the repo, it's part
of the project.*
As a result, each decision can be expressed as a change to the repository. An
implementation change is expressed as a change to the source code. An API
change is a change to the API specification. A philosophy change is a change
to the philosophy manifesto, and so on.
All decisions affecting the moby/moby repository, both big and small, follow
the same steps:
* **Step 1**: Open a pull request. Anyone can do this.
* **Step 2**: Discuss the pull request. Anyone can do this.
* **Step 3**: Maintainers merge, close or reject the pull request.
Pull requests are reviewed by the current maintainers of the moby/moby
repository. Weekly meetings are organized to synchronously discuss tricky
PRs, as well as design and architecture decisions. When
technical agreement cannot be reached among the maintainers of the project,
escalation or concerns can be raised by opening an issue to be handled
by the [Moby Technical Steering Committee](https://github.com/moby/tsc).


@ -0,0 +1,37 @@
# Freenode IRC Administration Guidelines and Tips
This is not meant to be a general "Here's how to IRC" document, so if you're
looking for that, check Google instead. ♥
If you've been charged with helping maintain one of Docker's now many IRC
channels, this might turn out to be useful. If there's information that you
wish you'd known about how a particular channel is organized, you should add
deets here! :)
## `ChanServ`
Most channel maintenance happens by talking to Freenode's `ChanServ` bot. For
example, `/msg ChanServ ACCESS <channel> LIST` will show you a list of everyone
with "access" privileges for a particular channel.
A similar command is used to give someone a particular access level. For
example, to add a new maintainer to the `#docker-maintainers` access list so
that they can contribute to the discussions (after they've been merged
appropriately in a `MAINTAINERS` file, of course), one would use `/msg ChanServ
ACCESS #docker-maintainers ADD <nick> maintainer`.
To set up a new channel with a similar `maintainer` access template, use a
command like `/msg ChanServ TEMPLATE <channel> maintainer +AV` (`+A` for letting
them view the `ACCESS LIST`, `+V` for auto-voice; see `/msg ChanServ HELP FLAGS`
for more details).
## Troubleshooting
The most common cause of not-getting-auto-`+v` woes is people not being
`IDENTIFY`ed with `NickServ` (or their current nickname not being `GROUP`ed with
their main nickname) -- often manifested by `ChanServ` responding to an `ACCESS
ADD` request with something like `xyz is not registered.`.
This is easily fixed by doing `/msg NickServ IDENTIFY OldNick SecretPassword`
followed by `/msg NickServ GROUP` to group the two nicknames together. See
`/msg NickServ HELP GROUP` for more information.


@ -0,0 +1,132 @@
Triaging of issues
------------------
Triage provides an important way to contribute to an open source project. Triage helps ensure issues resolve quickly by:
- Ensuring the issue's intent and purpose are conveyed precisely. This is necessary because it can be difficult for an issue's author to explain how an end user experiences a problem and what actions they took.
- Giving a contributor the information they need before they commit to resolving an issue.
- Lowering the issue count by preventing duplicate issues.
- Streamlining the development process by preventing duplicate discussions.
If you don't have time to code, consider helping with triage. The community will thank you for saving them time by spending some of yours.
### 1. Ensure the issue contains basic information
Before triaging an issue very far, make sure that the issue's author provided the standard issue information. This will help you make an educated recommendation on how to categorize the issue. Standard information that *must* be included in most issues includes things such as:
- the output of `docker version`
- the output of `docker info`
- the output of `uname -a`
- a reproducible case if this is a bug, Dockerfiles FTW
- host distribution and version (Ubuntu 14.04, RHEL, Fedora 23)
- page URL if this is a docs issue or the name of a man page
Depending on the issue, you might not feel all this information is needed. Use your best judgement. If you cannot triage an issue using what its author provided, explain kindly to the author that they must provide the above information to clarify the problem.
If the author provides the standard information but you are still unable to triage the issue, request additional information. Do this kindly and politely because you are asking for more of the author's time.
If the author does not respond with the requested information within a week, close the issue with a kind note stating that the author can request for the issue to be
reopened when the necessary information is provided.
### 2. Classify the Issue
An issue can have multiple of the following labels. Typically, a properly classified issue should
have:
- One label identifying its kind (`kind/*`).
- One or multiple labels identifying the functional areas of interest (`area/*`).
- Where applicable, one label categorizing its difficulty (`exp/*`).
#### Issue kind
| Kind | Description |
|------------------|---------------------------------------------------------------------------------------------------------------------------------|
| kind/bug         | Bugs are bugs. The cause may or may not be known at triage time, so debugging should be taken into account in the time estimate.  |
| kind/enhancement | Enhancements are not bugs or new features but can drastically improve usability or performance of a project component. |
| kind/feature | Functionality or other elements that the project does not currently support. Features are new and shiny. |
| kind/question | Contains a user or contributor question requiring a response. |
#### Functional area
| Area |
|---------------------------|
| area/api |
| area/builder |
| area/bundles |
| area/cli |
| area/daemon |
| area/distribution |
| area/docs |
| area/kernel |
| area/logging |
| area/networking |
| area/plugins |
| area/project |
| area/runtime |
| area/security |
| area/security/apparmor |
| area/security/seccomp |
| area/security/selinux |
| area/security/trust |
| area/storage |
| area/storage/aufs |
| area/storage/btrfs |
| area/storage/devicemapper |
| area/storage/overlay |
| area/storage/zfs |
| area/swarm |
| area/testing |
| area/volumes |
#### Platform
| Platform |
|---------------------------|
| platform/arm |
| platform/darwin |
| platform/ibm-power |
| platform/ibm-z |
| platform/windows |
#### Experience level
Experience level is a way for a contributor to find an issue based on their
skill set. Experience types are applied to the issue or pull request using
labels.
| Level | Experience level guideline |
|------------------|-----------------------------------------------------------------------------------------------------------------------------------------------------------------------------|
| exp/beginner | New to Docker, and possibly Golang, and is looking to help while learning the basics. |
| exp/intermediate | Comfortable with golang and understands the core concepts of Docker and looking to dive deeper into the project. |
| exp/expert | Proficient with Docker and Golang and has been following, and active in, the community to understand the rationale behind design decisions and where the project is headed. |
As the table states, these labels are meant as guidelines. You might have
written a whole plugin for Docker in a personal project and never contributed to
Docker. With that kind of experience, you could take on an <strong
class="gh-label expert">exp/expert</strong> level task.
#### Triage status
To communicate the triage status with other collaborators, you can apply status
labels to issues. These labels prevent duplicating effort.
| Status | Description |
|-------------------------------|-----------------------------------------------------------------------------------------------------------------------------------------------------------------------------|
| status/confirmed | You triaged the issue, and were able to reproduce it. Always leave a comment describing how you reproduced it, so that the person working on resolving the issue has a way to set up a test case.
| status/accepted | Apply to enhancements / feature requests that we think are good to have. Adding this label helps contributors find things to work on.
| status/more-info-needed | Apply this to issues that are missing information (e.g. no `docker version` or `docker info` output, or no steps to reproduce), or require feedback from the reporter. If the issue is not updated after a week, it can generally be closed.
| status/needs-attention | Apply this label if an issue (or PR) needs more eyes.
### 3. Prioritizing issues
When, and only when, an issue is attached to a specific milestone, the issue can be labeled with the
following labels to indicate its degree of priority (from most urgent to least urgent).
| Priority | Description |
|-------------|-----------------------------------------------------------------------------------------------------------------------------------|
| priority/P0 | Urgent: Security, critical bugs, blocking issues. P0 basically means drop everything you are doing until this issue is addressed. |
| priority/P1 | Important: P1 issues are a top priority and a must-have for the next release. |
| priority/P2 | Normal priority: default priority applied. |
| priority/P3 | Best effort: those are nice to have / minor issues. |
And that's it. That should be all the information required for a new or existing contributor to come in and resolve an issue.


@ -0,0 +1,74 @@
# Apt & Yum Repository Maintenance
## A maintainer's guide to managing Docker's package repos
### How to clean up old experimental debs and rpms
We release debs and rpms for nightly experimental builds, so these can build up.
To remove old experimental debs and rpms, and _ONLY_ keep the latest, follow the
steps below.
1. Check out docker master
2. Run clean scripts
```bash
docker build --rm --force-rm -t docker-dev:master .
docker run --rm -it --privileged \
-v /path/to/your/repos/dir:/volumes/repos \
-v $HOME/.gnupg:/root/.gnupg \
-e GPG_PASSPHRASE \
-e DOCKER_RELEASE_DIR=/volumes/repos \
docker-dev:master hack/make.sh clean-apt-repo clean-yum-repo generate-index-listing sign-repos
```
3. Upload the changed repos to `s3` (if you host on s3)
4. Purge the cache, PURGE the cache, PURGE THE CACHE!
### How to get out of a sticky situation
Sh\*t happens. We know. Below are steps to get out of any "hash-sum mismatch" or
"gpg sig error" or the likes error that might happen to the apt repo.
**NOTE:** These are apt-repo specific; we have had no experience with anything similar
happening to the yum repo in the past, so you can rest easy.
For each step listed below, move on to the next if the previous didn't work.
Otherwise CELEBRATE!
1. Purge the cache.
2. Did you remember to sign the debs after releasing?
Re-sign the repo with your gpg key:
```bash
docker build --rm --force-rm -t docker-dev:master .
docker run --rm -it --privileged \
-v /path/to/your/repos/dir:/volumes/repos \
-v $HOME/.gnupg:/root/.gnupg \
-e GPG_PASSPHRASE \
-e DOCKER_RELEASE_DIR=/volumes/repos \
docker-dev:master hack/make.sh sign-repos
```
Upload the changed repo to `s3` (if that is where you host)
PURGE THE CACHE.
3. Run Jess' magical, save all, only in case of extreme emergencies, "you are
going to have to break this glass to get it" script.
```bash
docker build --rm --force-rm -t docker-dev:master .
docker run --rm -it --privileged \
-v /path/to/your/repos/dir:/volumes/repos \
-v $HOME/.gnupg:/root/.gnupg \
-e GPG_PASSPHRASE \
-e DOCKER_RELEASE_DIR=/volumes/repos \
docker-dev:master hack/make.sh update-apt-repo generate-index-listing sign-repos
```
4. Upload the changed repo to `s3` (if that is where you host)
PURGE THE CACHE.


@ -0,0 +1,307 @@
# Dear Packager,
If you are looking to make Docker available on your favorite software
distribution, this document is for you. It summarizes the requirements for
building and running the Docker client and the Docker daemon.
## Getting Started
We want to help you package Docker successfully. Before doing any packaging, a
good first step is to introduce yourself on the [docker-dev mailing
list](https://groups.google.com/d/forum/docker-dev), explain what you're trying
to achieve, and tell us how we can help. Don't worry, we don't bite! There might
even be someone already working on packaging for the same distro!
You can also join the IRC channel - #docker and #docker-dev on Freenode are both
active and friendly.
We like to refer to Tianon ("@tianon" on GitHub and "tianon" on IRC) as our
"Packagers Relations", since he's always working to make sure our packagers have
a good, healthy upstream to work with (both in our communication and in our
build scripts). If you're having any kind of trouble, feel free to ping him
directly. He also likes to keep track of what distributions we have packagers
for, so feel free to reach out to him even just to say "Hi!"
## Package Name
If possible, your package should be called "docker". If that name is already
taken, a second choice is "docker-engine". Another possible choice is "docker.io".
## Official Build vs Distro Build
The Docker project maintains its own build and release toolchain. It is pretty
neat and entirely based on Docker (surprise!). This toolchain is the canonical
way to build Docker. We encourage you to give it a try, and if the circumstances
allow you to use it, we recommend that you do.
You might not be able to use the official build toolchain - usually because your
distribution has a toolchain and packaging policy of its own. We get it! Your
house, your rules. The rest of this document should give you the information you
need to package Docker your way, without denaturing it in the process.
## Build Dependencies
To build Docker, you will need the following:
* A recent version of Git and Mercurial
* Go version 1.6 or later
* A clean checkout of the source added to a valid [Go
workspace](https://golang.org/doc/code.html#Workspaces) under the path
*src/github.com/docker/docker* (unless you plan to use `AUTO_GOPATH`,
explained in more detail below)
To build the Docker daemon, you will additionally need:
* An amd64/x86_64 machine running Linux
* SQLite version 3.7.9 or later
* libdevmapper version 1.02.68-cvs (2012-01-26) or later from lvm2 version
2.02.89 or later
* btrfs-progs version 3.16.1 or later (unless using an older version is
absolutely necessary, in which case 3.8 is the minimum)
* libseccomp version 2.2.1 or later (for build tag seccomp)
Be sure to also check out Docker's Dockerfile for the most up-to-date list of
these build-time dependencies.
### Go Dependencies
All Go dependencies are vendored under "./vendor". They are used by the official
build, so the source of truth for the current version of each dependency is
whatever is in "./vendor".
To use the vendored dependencies, simply make sure the path to "./vendor" is
included in `GOPATH` (or use `AUTO_GOPATH`, as explained below).
If you would rather (or must, due to distro policy) package these dependencies
yourself, take a look at "vendor.conf" for an easy-to-parse list of the
exact version for each.
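For instance, pulling out the package/revision pairs could look like this sketch (the exact vendor.conf layout, whitespace-separated fields with `#` comments, is assumed here):

```shell
# parse_vendor: print the first two fields ("package revision") from
# vendor.conf-style input, skipping comments and blank lines
# (format assumed; verify against the real vendor.conf).
parse_vendor() {
    grep -Ev '^(#|[[:space:]]*$)' | awk 'NF >= 2 { print $1, $2 }'
}

printf '# a comment\ngithub.com/example/dep abc1234\n' | parse_vendor
```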
NOTE: if you're not able to package the exact version (to the exact commit) of a
given dependency, please get in touch so we can remediate! Who knows what
discrepancies can be caused by even the slightest deviation. We promise to do
our best to make everybody happy.
## Stripping Binaries
Please, please, please do not strip any compiled binaries. This is really
important.
In our own testing, stripping the resulting binaries sometimes results in a
binary that appears to work, but more often causes random panics, segfaults, and
other issues. Even if the binary appears to work, please don't strip.
See the following quotes from Dave Cheney, which explain this position better
from the upstream Golang perspective.
### [go issue #5855, comment #3](https://code.google.com/p/go/issues/detail?id=5855#c3)
> Super super important: Do not strip go binaries or archives. It isn't tested,
> often breaks, and doesn't work.
### [launchpad golang issue #1200255, comment #8](https://bugs.launchpad.net/ubuntu/+source/golang/+bug/1200255/comments/8)
> To quote myself: "Please do not strip Go binaries, it is not supported, not
> tested, is often broken, and doesn't do what you want"
>
> To unpack that a bit
>
> * not supported, as in, we don't support it, and recommend against it when
> asked
> * not tested, we don't test stripped binaries as part of the build CI process
> * is often broken, stripping a go binary will produce anywhere from no, to
> subtle, to outright execution failure, see above
### [launchpad golang issue #1200255, comment #13](https://bugs.launchpad.net/ubuntu/+source/golang/+bug/1200255/comments/13)
> To clarify my previous statements.
>
> * I do not disagree with the debian policy, it is there for a good reason
> * Having said that, it stripping Go binaries doesn't work, and nobody is
> looking at making it work, so there is that.
>
> Thanks for patching the build formula.
## Building Docker
Please use our build script ("./hack/make.sh") for all your compilation of
Docker. If there's something you need that it isn't doing, or something it could
be doing to make your life as a packager easier, please get in touch with Tianon
and help us rectify the situation. Chances are good that other packagers have
probably run into the same problems and a fix might already be in the works, but
none of us will know for sure unless you harass Tianon about it. :)
All the commands listed within this section should be run with the Docker source
checkout as the current working directory.
### `AUTO_GOPATH`
If you'd rather not be bothered with the hassle of setting up `GOPATH`
appropriately, and prefer to just get a "build that works", you should
add something similar to this to whatever script or process you're using to
build Docker:
```bash
export AUTO_GOPATH=1
```
This will cause the build scripts to set up a reasonable `GOPATH` that
automatically and properly includes both docker/docker from the local
directory, and the local "./vendor" directory as necessary.
### `DOCKER_BUILDTAGS`
If you're building a binary that may need to be used on platforms that include
AppArmor, you will need to set `DOCKER_BUILDTAGS` as follows:
```bash
export DOCKER_BUILDTAGS='apparmor'
```
If you're building a binary that may need to be used on platforms that include
SELinux, you will need to use the `selinux` build tag:
```bash
export DOCKER_BUILDTAGS='selinux'
```
If you're building a binary that may need to be used on platforms that include
seccomp, you will need to use the `seccomp` build tag:
```bash
export DOCKER_BUILDTAGS='seccomp'
```
There are build tags for disabling graphdrivers as well. By default, support
for all graphdrivers is built in.
To disable btrfs:
```bash
export DOCKER_BUILDTAGS='exclude_graphdriver_btrfs'
```
To disable devicemapper:
```bash
export DOCKER_BUILDTAGS='exclude_graphdriver_devicemapper'
```
To disable aufs:
```bash
export DOCKER_BUILDTAGS='exclude_graphdriver_aufs'
```
NOTE: if you need to set more than one build tag, space-separate them:
```bash
export DOCKER_BUILDTAGS='apparmor selinux exclude_graphdriver_aufs'
```
### Static Daemon
If it is feasible within the constraints of your distribution, you should
seriously consider packaging Docker as a single static binary. A good comparison
is Busybox, which is often packaged statically as a feature to enable mass
portability. Because of the unique way Docker operates, being similarly static
is a "feature".
To build a static Docker daemon binary, run the following command (first
ensuring that all the necessary libraries are available in static form for
linking - see the "Build Dependencies" section above, and the relevant lines
within Docker's own Dockerfile that set up our official build environment):
```bash
./hack/make.sh binary
```
This will create a static binary under
"./bundles/$VERSION/binary/docker-$VERSION", where "$VERSION" is the contents of
the file "./VERSION". This binary is usually installed somewhere like
"/usr/bin/docker".
### Dynamic Daemon / Client-only Binary
If you are only interested in a Docker client binary, you can build using:
```bash
./hack/make.sh binary-client
```
If you need to (due to distro policy, distro library availability, or for other
reasons) create a dynamically compiled daemon binary, or if you are only
interested in creating a client binary for Docker, use something similar to the
following:
```bash
./hack/make.sh dynbinary-client
```
This will create "./bundles/$VERSION/dynbinary-client/docker-$VERSION", which for
client-only builds is the important file to grab and install as appropriate.
## System Dependencies
### Runtime Dependencies
To function properly, the Docker daemon needs the following software to be
installed and available at runtime:
* iptables version 1.4 or later
* procps (or similar provider of a "ps" executable)
* e2fsprogs version 1.4.12 or later (in use: mkfs.ext4, tune2fs)
* xfsprogs (in use: mkfs.xfs)
* XZ Utils version 4.9 or later
* a [properly
mounted](https://github.com/tianon/cgroupfs-mount/blob/master/cgroupfs-mount)
cgroupfs hierarchy (having a single, all-encompassing "cgroup" mount point
[is](https://github.com/docker/docker/issues/2683)
[not](https://github.com/docker/docker/issues/3485)
[sufficient](https://github.com/docker/docker/issues/4568))
Additionally, the Docker client needs the following software to be installed and
available at runtime:
* Git version 1.7 or later
### Kernel Requirements
The Docker daemon has very specific kernel requirements. Most pre-packaged
kernels already include the necessary options enabled. If you are building your
own kernel, you will either need to discover the options necessary via trial and
error, or check out the [Gentoo
ebuild](https://github.com/tianon/docker-overlay/blob/master/app-emulation/docker/docker-9999.ebuild),
in which a list is maintained (and if there are any issues or discrepancies in
that list, please contact Tianon so they can be rectified).
Note that in client mode, there are no specific kernel requirements, and that
the client will even run on alternative platforms such as Mac OS X / Darwin.
### Optional Dependencies
Some of Docker's features are activated by using optional command-line flags or
by having support for them in the kernel or userspace. A few examples include:
* AUFS graph driver (requires AUFS patches/support enabled in the kernel, and at
least the "auplink" utility from aufs-tools)
* BTRFS graph driver (requires BTRFS support enabled in the kernel)
* ZFS graph driver (requires userspace zfs-utils and a corresponding kernel module)
* Libseccomp to allow running seccomp profiles with containers
## Daemon Init Script
Docker expects to run as a daemon at machine startup. Your package will need to
include a script for your distro's process supervisor of choice. Be sure to
check out the "contrib/init" folder in case a suitable init script already
exists (and if one does not, contact Tianon about whether it might be
appropriate for your distro's init script to live there too!).
In general, Docker should be run as root, similar to the following:
```bash
dockerd
```
Generally, a `DOCKER_OPTS` variable of some kind is available for adding more
flags (such as changing the graph driver to use BTRFS, switching the location of
"/var/lib/docker", etc).
## Communicate
As a final note, please do feel free to reach out to Tianon at any time for
pretty much anything. He really does love hearing from our packagers and wants
to make sure we're not being a "hostile upstream". As should be a given, we
appreciate the work our packagers do to make sure we have broad distribution!
# Docker patch (bugfix) release process
Patch releases (the 'Z' in vX.Y.Z) are intended to fix major issues in a
release. Docker open source projects follow these procedures when creating a
patch release:
After each release (both "major" (vX.Y.0) and "patch" releases (vX.Y.Z)), a
patch release milestone (vX.Y.Z + 1) is created.
The creation of a patch release milestone carries no obligation to actually
*create* a patch release. The purpose of these milestones is to collect
issues and pull requests that can *justify* a patch release:
- Any maintainer may add issues and PRs to the milestone; when doing so,
  preferably leave a comment on the issue or PR explaining *why* you think it
  should be considered for inclusion in a patch release.
- Issues introduced in version vX.Y.0 get added to milestone X.Y.Z+1
- Only *regressions* should be added. Issues *discovered* in version vX.Y.0,
but already present in version vX.Y-1.Z should not be added, unless
critical.
- Patch releases can *only* contain bug-fixes. New features should
*never* be added to a patch release.
The release captain of the "major" (X.Y.0) release is also responsible for
patch releases. The release captain, together with another maintainer, will
review issues and PRs on the milestone, and assign `priority/` labels. These
review sessions take place on a weekly basis, or more frequently if needed:
- A P0 priority is assigned to critical issues. A maintainer *must* be
assigned to these issues. Maintainers should strive to fix a P0 within a week.
- A P1 priority is assigned to major, but not critical, issues. A maintainer
  *must* be assigned to these issues.
- P2 and P3 priorities are assigned to other issues. A maintainer can be
assigned.
- Non-critical issues and PRs can be removed from the milestone. Minor
  changes, such as typo fixes or omissions in the documentation, can be
  considered for inclusion in a patch release.
## Deciding if a patch release should be done
- Only a P0 justifies proceeding with the patch release.
- P1, P2, and P3 issues/PRs should not influence the decision, and
should be moved to the X.Y.Z+1 milestone, or removed from the
milestone.
> **Note**: If the next "major" release is imminent, the release captain
> can decide to cancel a patch release, and include the patches in the
> upcoming major release.
> **Note**: Security releases are also "patch releases", but follow
> a different procedure. Security releases are developed in a private
> repository, released and tested under embargo before they become
> publicly available.
## Deciding on the content of a patch release
When the criteria for moving forward with a patch release are met, the release
manager will decide on the exact content of the release.
- Fixes to all P0 issues *must* be included in the release.
- Fixes to *some* P1, P2, and P3 issues *may* be included as part of the patch
release depending on the severity of the issue and the risk associated with
the patch.
Any code delivered as part of a patch release should make life easier for a
significant number of users, with zero chance of degrading anybody's experience.
A good rule of thumb for that is to limit cherry-picking to small patches, which
fix well-understood issues, and which come with verifiable tests.
# Docker principles
In the design and development of Docker we try to follow these principles:
(Work in progress)
* Don't try to replace every tool. Instead, be an ingredient to improve them.
* Less code is better.
* Fewer components are better. Do you really need to add one more class?
* 50 lines of straightforward, readable code is better than 10 lines of magic that nobody can understand.
* Don't do later what you can do now. "//FIXME: refactor" is not acceptable in new code.
* When hesitating between 2 options, choose the one that is easier to reverse.
* No is temporary, Yes is forever. If you're not sure about a new feature, say no. You can change your mind later.
* Containers must be portable to the greatest possible number of machines. Be suspicious of any change which makes machines less interchangeable.
* The less moving parts in a container, the better.
* Don't merge it unless you document it.
* Don't document it unless you can keep it up-to-date.
* Don't merge it unless you test it!
* Everyone's problem is slightly different. Focus on the part that is the same for everyone, and solve that.
# Hacking on Docker
The `project/` directory holds information and tools for everyone involved in the process of creating and
distributing Docker, specifically:
## Guides
If you're a *contributor* or aspiring contributor, you should read [CONTRIBUTING.md](../CONTRIBUTING.md).
If you're a *maintainer* or aspiring maintainer, you should read [MAINTAINERS](../MAINTAINERS).
If you're a *packager* or aspiring packager, you should read [PACKAGERS.md](./PACKAGERS.md).
If you're a maintainer in charge of a *release*, you should read [RELEASE-CHECKLIST.md](./RELEASE-CHECKLIST.md).
## Roadmap
A high-level roadmap is available at [ROADMAP.md](../ROADMAP.md).
## Build tools
[hack/make.sh](../hack/make.sh) is the primary build tool for Docker. It is used for compiling the official binary,
running the test suite, and pushing releases.
# Release Checklist
## A maintainer's guide to releasing Docker
So you're in charge of a Docker release? Cool. Here's what to do.
If your experience deviates from this document, please document the changes
to keep it up-to-date.
It is important to note that this document assumes that the git remote in your
repository that corresponds to "https://github.com/docker/docker" is named
"origin". If yours is not (for example, if you've chosen to name it "upstream"
or something similar instead), be sure to adjust the listed snippets for your
local environment accordingly. If you are not sure what your upstream remote is
named, use a command like `git remote -v` to find out.
If you don't have an upstream remote, you can add one easily using something
like:
```bash
export GITHUBUSER="YOUR_GITHUB_USER"
git remote add origin https://github.com/docker/docker.git
git remote add $GITHUBUSER git@github.com:$GITHUBUSER/docker.git
```
### 1. Pull from master and create a release branch
All release version numbers will be of the form vX.Y.Z, where X is the major
version number, Y is the minor version number, and Z is the patch release version number.
#### Major releases
The release branch name is just vX.Y because it's going to be the basis for all .Z releases.
```bash
export BASE=vX.Y
export VERSION=vX.Y.Z
git fetch origin
git checkout --track origin/master
git checkout -b release/$BASE
```
This new branch is going to be the base for the release. We need to push it to origin so we
can track the cherry-picked changes and the version bump:
```bash
git push origin release/$BASE
```
Once the major release branch is in origin, we need to create the bump branch
that we'll push to our fork:
```bash
git checkout -b bump_$VERSION
```
#### Patch releases
If we have the release branch in origin, we can create the forked bump branch from it directly:
```bash
export BASE=vX.Y
export VERSION=vX.Y.Z
export PATCH=vX.Y.Z+1
git fetch origin
git checkout --track origin/release/$BASE
git checkout -b bump_$PATCH
```
We cherry-pick only the commits we want into the bump branch:
```bash
# get the commits ids we want to cherry-pick
git log
# cherry-pick the commits starting from the oldest one, without including merge commits
git cherry-pick -s -x <commit-id>
git cherry-pick -s -x <commit-id>
...
```
### 2. Update the VERSION files and API version on master
We don't want to stop contributions to master just because we are releasing.
So, after the release branch is up, we bump the VERSION and API version to mark
the start of the "next" release.
#### 2.1 Update the VERSION files
Update the content of the `VERSION` file to be the next minor (incrementing Y)
and add the `-dev` suffix. For example, after the release branch for 1.5.0 is
created, the `VERSION` file gets updated to `1.6.0-dev` (as in "1.6.0 in the
making").
#### 2.2 Update API version on master
We don't want API changes to go to the now frozen API version. Create a new
entry in `docs/reference/api/` by copying the latest and bumping the version
number (in both the file's name and content), and submit this in a PR against
master.
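Mechanically this is a copy-and-substitute job. A self-contained sketch in a scratch directory (the file name and version numbers below are hypothetical; in the real repo you would run the `cp`/`sed` pair inside `docs/reference/api/`):

```shell
# Demonstrate the copy-and-bump against a throwaway copy of the API docs dir.
api_dir=$(mktemp -d)
echo "# Docker Remote API v1.22" > "$api_dir/docker_remote_api_v1.22.md"

old=v1.22
new=v1.23
# Copy the latest reference file under the new version's name...
cp "$api_dir/docker_remote_api_$old.md" "$api_dir/docker_remote_api_$new.md"
# ...and bump the version string inside the copied file as well.
sed -i "s/$old/$new/g" "$api_dir/docker_remote_api_$new.md"

cat "$api_dir/docker_remote_api_$new.md"   # prints "# Docker Remote API v1.23"
```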
### 3. Update CHANGELOG.md
For reference, with git 2.0 you can run this command:
```bash
git fetch --tags
LAST_VERSION=$(git tag -l --sort=-version:refname "v*" | grep -E 'v[0-9\.]+$' | head -1)
git log --stat $LAST_VERSION..bump_$VERSION
```
If you don't have git 2.0 but have a sort command that supports `-V`:
```bash
git fetch --tags
LAST_VERSION=$(git tag -l | grep -E 'v[0-9\.]+$' | sort -rV | head -1)
git log --stat $LAST_VERSION..bump_$VERSION
```
If releasing a major version (X or Y increased in vX.Y.Z), simply listing notable user-facing features is sufficient.
```markdown
#### Notable features since <last major version>
* New docker command to do something useful
* Engine API change (deprecating old version)
* Performance improvements in some use cases
* ...
```
For minor releases (only Z increases in vX.Y.Z), provide a list of user-facing changes.
Each change should be listed under a category heading formatted as `#### CATEGORY`.
`CATEGORY` should describe which part of the project is affected.
Valid categories are:
* Builder
* Documentation
* Hack
* Packaging
* Engine API
* Runtime
* Other (please use this category sparingly)
Each change should be formatted as `BULLET DESCRIPTION`, given:
* BULLET: either `-`, `+` or `*`, to indicate a bugfix, new feature or
upgrade, respectively.
* DESCRIPTION: a concise description of the change that is relevant to the
end-user, using the present tense. Changes should be described in terms
of how they affect the user, for example "Add new feature X which allows Y",
"Fix bug which caused X", "Increase performance of Y".
EXAMPLES:
```markdown
## 0.3.6 (1995-12-25)
#### Builder
+ 'docker build -t FOO .' applies the tag FOO to the newly built image
#### Engine API
- Fix a bug in the optional unix socket transport
#### Runtime
* Improve detection of kernel version
```
If you need a list of contributors between the last major release and the
current bump branch, use something like:
```bash
git log --format='%aN <%aE>' v0.7.0...bump_v0.8.0 | sort -uf
```
Obviously, you'll need to adjust version numbers as necessary. If you just need
a count, add a simple `| wc -l`.
### 4. Change the contents of the VERSION file
Before the big thing, you'll want to make successive release candidates and get
people to test. The release candidate number `N` should be part of the version:
```bash
export RC_VERSION=${VERSION}-rcN
echo ${RC_VERSION#v} > VERSION
```
### 5. Test the docs
Make sure that your tree includes documentation for any modified or
new features, syntax or semantic changes.
To test locally:
```bash
make docs
```
To make a shared test build available at https://beta-docs.docker.io
(you will need the `awsconfig` file added to the `docs/` dir):
```bash
make AWS_S3_BUCKET=beta-docs.docker.io BUILD_ROOT=yes docs-release
```
### 6. Commit and create a pull request to the "release" branch
```bash
git add VERSION CHANGELOG.md
git commit -m "Bump version to $VERSION"
git push $GITHUBUSER bump_$VERSION
echo "https://github.com/$GITHUBUSER/docker/compare/docker:release/$BASE...$GITHUBUSER:bump_$VERSION?expand=1"
```
That last command will give you the proper link to visit to ensure that you
open the PR against the "release" branch instead of accidentally against
"master" (like so many brave souls before you already have).
### 7. Create a PR to update the AUTHORS file for the release
Update the AUTHORS file, by running the `hack/generate-authors.sh` on the
release branch. To prevent duplicate entries, you may need to update the
`.mailmap` file accordingly.
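For reference, `.mailmap` entries map alternate commit identities onto a single canonical one, so the generated AUTHORS file lists each contributor once; the names and addresses below are made up:

```
Jane Doe <jane@example.com> <jane@work.example.com>
Jane Doe <jane@example.com> Janie <jane@oldcorp.example.com>
```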
### 8. Build release candidate rpms and debs
**NOTE**: The build will be a lot faster if you pass a graphdriver other than
`vfs` via `DOCKER_GRAPHDRIVER`.
```bash
docker build -t docker .
docker run \
--rm -t --privileged \
-e DOCKER_GRAPHDRIVER=aufs \
-v $(pwd)/bundles:/go/src/github.com/docker/docker/bundles \
docker \
hack/make.sh binary build-deb build-rpm
```
### 9. Publish release candidate rpms and debs
With the rpms and debs you built in the last step, you can release them on the
same server or, ideally, move them via scp to a dedicated release box, into
another docker/docker directory in bundles. This next step assumes you have
a checkout of the docker source code at the same commit you used to build, with
the artifacts from the last step in `bundles`.
**NOTE:** If you put a space before the command, your `.bash_history` will
not save it (useful for the `GPG_PASSPHRASE`).
```bash
docker build -t docker .
docker run --rm -it --privileged \
-v /volumes/repos:/volumes/repos \
-v $(pwd)/bundles:/go/src/github.com/docker/docker/bundles \
-v $HOME/.gnupg:/root/.gnupg \
-e DOCKER_RELEASE_DIR=/volumes/repos \
-e GPG_PASSPHRASE \
-e KEEPBUNDLE=1 \
docker \
hack/make.sh release-deb release-rpm sign-repos generate-index-listing
```
### 10. Upload the changed repos to wherever you host
For example, above we bind mounted `/volumes/repos` as the storage for
`DOCKER_RELEASE_DIR`. In this case `/volumes/repos/apt` can be synced with
a specific s3 bucket for the apt repo and `/volumes/repos/yum` can be synced with
a s3 bucket for the yum repo.
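With the AWS CLI, for instance, that sync might look like this (bucket names are placeholders for the real release buckets):

```bash
aws s3 sync /volumes/repos/apt s3://apt.dockerproject.org --delete
aws s3 sync /volumes/repos/yum s3://yum.dockerproject.org --delete
```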
### 11. Publish release candidate binaries
To run this you will need access to the release credentials. Get them from the
Core maintainers.
```bash
docker build -t docker .
# static binaries are still pushed to s3
docker run \
-e AWS_S3_BUCKET=test.docker.com \
-e AWS_ACCESS_KEY_ID \
-e AWS_SECRET_ACCESS_KEY \
-e AWS_DEFAULT_REGION \
-i -t --privileged \
docker \
hack/release.sh
```
It will run the test suite, build the binaries, and upload them to the specified
bucket, so this is a good time to verify that you're running against **test**.docker.com.
### 12. Purge the cache!
After the binaries are uploaded to test.docker.com and the packages are on
apt.dockerproject.org and yum.dockerproject.org, make sure
they get tested on both Ubuntu and Debian for any obvious installation
or runtime issues.
If everything looks good, it's time to create a git tag for this candidate:
```bash
git tag -a $RC_VERSION -m $RC_VERSION bump_$VERSION
git push origin $RC_VERSION
```
Announcing on multiple channels is the best way to get help testing! An easy
way to get some useful links for sharing:
```bash
echo "Ubuntu/Debian: curl -sSL https://test.docker.com/ | sh"
echo "Linux 64bit binary: https://test.docker.com/builds/Linux/x86_64/docker-${VERSION#v}"
echo "Darwin/OSX 64bit client binary: https://test.docker.com/builds/Darwin/x86_64/docker-${VERSION#v}"
echo "Linux 64bit tgz: https://test.docker.com/builds/Linux/x86_64/docker-${VERSION#v}.tgz"
echo "Windows 64bit client binary: https://test.docker.com/builds/Windows/x86_64/docker-${VERSION#v}.exe"
echo "Windows 32bit client binary: https://test.docker.com/builds/Windows/i386/docker-${VERSION#v}.exe"
```
### 13. Announce the release candidate
The release candidate should be announced on:
- IRC on #docker, #docker-dev, #docker-maintainers
- In a comment on the pull request to notify subscribed people on GitHub
- The [docker-dev](https://groups.google.com/forum/#!forum/docker-dev) group
- The [docker-maintainers](https://groups.google.com/a/dockerproject.org/forum/#!forum/maintainers) group
- (Optional) Any social media that can bring some attention to the release candidate
### 14. Iterate on successive release candidates
Spend several days along with the community explicitly investing time and
resources to try and break Docker in every possible way, documenting any
findings pertinent to the release. This time should be spent testing and
finding ways in which the release might have caused various features or upgrade
environments to have issues, not coding. During this time, the release is in
code freeze, and any additional code changes will be pushed out to the next
release.
It should include various levels of breaking Docker, beyond just using Docker
by the book.
Any issues found may remain in this release, but they should be
documented, with appropriate warnings given.
During this phase, the `bump_$VERSION` branch will keep evolving as you will
produce new release candidates. The frequency of new candidates is up to the
release manager: use your best judgement, taking into account the severity of
reported issues, testers' availability, and the time to the scheduled release date.
Each time you want to produce a new release candidate, start by
adding commits to the branch, usually by cherry-picking from master:
```bash
git cherry-pick -s -x -m0 <commit_id>
```
You want your "bump commit" (the one that updates the CHANGELOG and VERSION
files) to remain on top, so you'll have to `git rebase -i` to bring it back up.
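A self-contained illustration of that reorder in a scratch repository, using `GIT_SEQUENCE_EDITOR` as a non-interactive stand-in for editing the rebase todo list by hand (all commit names and versions are made up):

```shell
# Demo: a cherry-picked fix lands on top of the bump commit, then a scripted
# `git rebase -i` swaps the two so the bump commit is back on top.
repo=$(mktemp -d) && cd "$repo"
git init -q
git config user.name demo
git config user.email demo@example.com

echo base > base.txt && git add base.txt && git commit -qm "base"
echo 1.2.3 > VERSION && git add VERSION  && git commit -qm "Bump version to v1.2.3"
echo fix > fix.txt   && git add fix.txt  && git commit -qm "fix: cherry-picked bugfix"

# Swap the first two lines of the rebase todo list, i.e. apply the fix first
# and the bump commit last. Interactively you would reorder the lines by hand.
GIT_SEQUENCE_EDITOR='sed -i "1{h;d};2{G}"' git rebase -i HEAD~2

git log -1 --format=%s   # prints "Bump version to v1.2.3"
```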
Now that your bump commit is back on top, you will need to update the CHANGELOG
file (if appropriate for this particular release candidate), and update the
VERSION file to increment the RC number:
```bash
export RC_VERSION=$VERSION-rcN
echo $RC_VERSION > VERSION
```
You can now amend your last commit and update the bump branch:
```bash
git commit --amend
git push -f $GITHUBUSER bump_$VERSION
```
Repeat steps 6 to 14 to tag the code, publish new binaries, announce availability, and
get help testing.
### 15. Finalize the bump branch
When you're happy with the quality of a release candidate, you can move on and
create the real thing.
You will first have to amend the "bump commit" to drop the release candidate
suffix in the VERSION file:
```bash
echo $VERSION > VERSION
git add VERSION
git commit --amend
```
You will then repeat step 6 to publish the binaries to test.
### 16. Get 2 other maintainers to validate the pull request
### 17. Build final rpms and debs
```bash
docker build -t docker .
docker run \
--rm -t --privileged \
-v $(pwd)/bundles:/go/src/github.com/docker/docker/bundles \
docker \
hack/make.sh binary build-deb build-rpm
```
### 18. Publish final rpms and debs
With the rpms and debs you built in the last step, you can release them on the
same server or, ideally, move them via scp to a dedicated release box, into
another docker/docker directory in bundles. This next step assumes you have
a checkout of the docker source code at the same commit you used to build, with
the artifacts from the last step in `bundles`.
**NOTE:** If you put a space before the command, your `.bash_history` will
not save it (useful for the `GPG_PASSPHRASE`).
```bash
docker build -t docker .
docker run --rm -it --privileged \
-v /volumes/repos:/volumes/repos \
-v $(pwd)/bundles:/go/src/github.com/docker/docker/bundles \
-v $HOME/.gnupg:/root/.gnupg \
-e DOCKER_RELEASE_DIR=/volumes/repos \
-e GPG_PASSPHRASE \
-e KEEPBUNDLE=1 \
docker \
hack/make.sh release-deb release-rpm sign-repos generate-index-listing
```
### 19. Upload the changed repos to wherever you host
For example, above we bind mounted `/volumes/repos` as the storage for
`DOCKER_RELEASE_DIR`. In this case `/volumes/repos/apt` can be synced with
a specific s3 bucket for the apt repo and `/volumes/repos/yum` can be synced with
a s3 bucket for the yum repo.
### 20. Publish final binaries
Once they're tested and reasonably believed to be working, run against
get.docker.com:
```bash
docker build -t docker .
# static binaries are still pushed to s3
docker run \
-e AWS_S3_BUCKET=get.docker.com \
-e AWS_ACCESS_KEY_ID \
-e AWS_SECRET_ACCESS_KEY \
-e AWS_DEFAULT_REGION \
-i -t --privileged \
docker \
hack/release.sh
```
### 21. Purge the cache!
### 22. Apply tag and create release
It's very important that we don't make the tag until after the official
release is uploaded to get.docker.com!
```bash
git tag -a $VERSION -m $VERSION bump_$VERSION
git push origin $VERSION
```
Once the tag is pushed, go to GitHub and create a [new release](https://github.com/docker/docker/releases/new).
If the tag is for an RC make sure you check `This is a pre-release` at the bottom of the form.
Select the tag that you just pushed as the version and paste the changelog in the description of the release.
You can see examples in these two links:
https://github.com/docker/docker/releases/tag/v1.8.0
https://github.com/docker/docker/releases/tag/v1.8.0-rc3
### 23. Go to github to merge the `bump_$VERSION` branch into release
Don't forget to push that pretty blue button to delete the leftover
branch afterwards!
### 24. Update the docs branch
You will need to point the docs branch to the newly created release tag:
```bash
git checkout origin/docs
git reset --hard origin/$VERSION
git push -f origin docs
```
The docs will appear on https://docs.docker.com/ (though there may be cached
versions, so it's worth checking http://docs.docker.com.s3-website-us-east-1.amazonaws.com/).
For more information about documentation releases, see `docs/README.md`.
Note that the new docs will not appear live on the site until the cache (a complex,
distributed CDN system) is flushed. The `make docs-release` command will do this
_if_ the `DISTRIBUTION_ID` is set correctly - this will take at least 15 minutes to run,
and you can check its progress with the CDN Cloudfront Chrome add-on.
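For example (the bucket and CloudFront distribution ID below are placeholders):

```bash
make AWS_S3_BUCKET=docs.docker.com BUILD_ROOT=yes DISTRIBUTION_ID=EXXXXXXXXXXXX docs-release
```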
### 25. Create a new pull request to merge your bump commit back into master
```bash
git checkout master
git fetch
git reset --hard origin/master
git cherry-pick -s -x $VERSION
git push $GITHUBUSER master:merge_release_$VERSION
echo "https://github.com/$GITHUBUSER/docker/compare/docker:master...$GITHUBUSER:merge_release_$VERSION?expand=1"
```
Again, get two maintainers to validate, then merge, then push that pretty
blue button to delete your branch.
### 26. Rejoice and Evangelize!
Congratulations! You're done.
Go forth and announce the glad tidings of the new release in `#docker`,
`#docker-dev`, on the [dev mailing list](https://groups.google.com/forum/#!forum/docker-dev),
the [announce mailing list](https://groups.google.com/forum/#!forum/docker-announce),
and on Twitter!
# Docker Release Process
This document describes how the Docker project is released. The Docker project
release process targets the Engine, Compose, Kitematic, Machine, Swarm,
Distribution, Notary, and their underlying dependencies (libnetwork, libkv,
etc.).
Step-by-step technical details of the process are described in
[RELEASE-CHECKLIST.md](https://github.com/docker/docker/blob/master/project/RELEASE-CHECKLIST.md).
## Release cycle
The Docker project follows a **time-based release cycle** and ships every nine
weeks. A release cycle starts the same day the previous release cycle ends.
The first six weeks of the cycle are dedicated to development and review. During
this phase, new features and bugfixes submitted to any of the projects are
**eligible** to be shipped as part of the next release. However, no changeset
submitted during this period is guaranteed to be merged for the current release
cycle.
## The freeze period
Six weeks after the beginning of the cycle, the codebase is officially frozen
and the codebase reaches a state close to the final release. A Release Candidate
(RC) gets created at the same time. The freeze period is used to find bugs and
get feedback on the state of the RC before the release.
During this freeze period, while the `master` branch will continue its normal
development cycle, no new features are accepted into the RC. As bugs are fixed
in `master`, the release owner will selectively cherry-pick critical ones into
the RC. As the RC changes, new candidates are made available for the
community to test and review.
This period lasts for three weeks.
## How to maximize chances of being merged before the freeze date?
First of all, there is never a guarantee that a specific changeset will
be merged. However, there are several actions to follow to maximize the chances
of a changeset being merged:
- The team gives priority to review the PRs aligned with the Roadmap (usually
defined by a ROADMAP.md file at the root of the repository).
- The earlier a PR is opened, the more time the maintainers have to review it. For
  example, if a PR is opened the day before the freeze date, it's very unlikely
  that it will be merged for the release.
- Constant communication with the maintainers (mailing list, IRC, GitHub issues,
  etc.) allows you to get early feedback on the design before getting into the
  implementation, which usually reduces the time needed to discuss a changeset.
- If the code is commented, fully tested, and by extension follows every single
  rule defined by the [CONTRIBUTING guide](
  https://github.com/docker/docker/blob/master/CONTRIBUTING.md), this will help
  the maintainers by speeding up the review.
## The release
At the end of the freeze (nine weeks after the start of the cycle), all the
projects are released together.
```
Codebase Release
Start of is frozen (end of the
the Cycle (7th week) 9th week)
+---------------------------------------+---------------------+
| | |
| Development phase | Freeze phase |
| | |
+---------------------------------------+---------------------+
6 weeks 3 weeks
<---------------------------------------><-------------------->
```
## Exceptions
If a critical issue is found at the end of the freeze period and more time is
needed to address it, the release will be pushed back. When a release gets
pushed back, the next release cycle gets delayed as well.
# Pull request reviewing process
## Labels
Labels are carefully picked to optimize for:
- Readability: maintainers must immediately know the state of a PR
- Filtering simplicity: different labels represent many different aspects of
the reviewing work, and can even be targeted at different maintainers groups.
A pull request should only be given labels documented in this section; other labels that may
exist on the repository should apply to issues.
### DCO labels
* `dco/no`: automatically set by a bot when one of the commits lacks proper signature
### Status labels
* `status/0-triage`
* `status/1-design-review`
* `status/2-code-review`
* `status/3-docs-review`
* `status/4-ready-to-merge`
Special status labels:
* `status/failing-ci`: indicates that the PR in its current state fails the test suite
* `status/needs-attention`: calls for a collective discussion during a review session
### Impact labels (apply to merged pull requests)
* `impact/api`
* `impact/changelog`
* `impact/cli`
* `impact/deprecation`
* `impact/distribution`
* `impact/dockerfile`
### Process labels (apply to merged pull requests)
Process labels are to assist in preparing (patch) releases. These labels should only be used for pull requests.
Label | Use for
------------------------------- | -------------------------------------------------------------------------
`process/cherry-pick` | PRs that should be cherry-picked in the bump/release branch. These pull-requests must also be assigned to a milestone.
`process/cherry-picked`         | PRs that have been cherry-picked. This label is helpful to find PRs that have been added to release candidates, and to update the change log
`process/docs-cherry-pick` | PRs that should be cherry-picked in the docs branch. Only apply this label for changes that apply to the *current* release, and generic documentation fixes, such as Markdown and spelling fixes.
`process/docs-cherry-picked` | PRs that have been cherry-picked in the docs branch
`process/merge-to-master` | PRs that are opened directly on the bump/release branch, but also need to be merged back to "master"
`process/merged-to-master` | PRs that have been merged back to "master"
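These labels combine naturally with GitHub's issue search. For instance, a release captain could list merged PRs still awaiting a cherry-pick with a filter like (query is illustrative):

```
is:pr is:merged label:process/cherry-pick -label:process/cherry-picked
```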
## Workflow
An opened pull request can be in one of five distinct states, for each of which there is a corresponding
label that needs to be applied.
### Triage - `status/0-triage`
Maintainers are expected to triage new incoming pull requests by removing the `status/0-triage`
label and adding the correct labels (e.g. `status/1-design-review`) before any other interaction
with the PR. The starting label may potentially skip some steps depending on the kind of pull
request: use your best judgement.
Maintainers should perform an initial, high-level, overview of the pull request before moving it to
the next appropriate stage:
- Has DCO
- Contains sufficient justification (e.g., use cases) for the proposed change
- References the GitHub issue it fixes (if any) in the commit or the first GitHub comment
Possible transitions from this state:
* Close: e.g., unresponsive contributor without DCO
* `status/1-design-review`: general case
* `status/2-code-review`: e.g. trivial bugfix
* `status/3-docs-review`: non-proposal documentation-only change
### Design review - `status/1-design-review`
Maintainers are expected to comment on the design of the pull request. Review of documentation is
expected only in the context of design validation, not for stylistic changes.
Ideally, documentation should reflect the expected behavior of the code. No code review should
take place in this step.
There are no strict rules on the way a design is validated: we usually aim for a consensus,
although a single maintainer approval is often sufficient for obviously reasonable changes. In
general, strong disagreement expressed by any of the maintainers should not be taken lightly.
Once design is approved, a maintainer should make sure to remove this label and add the next one.
Possible transitions from this state:
* Close: design rejected
* `status/2-code-review`: general case
* `status/3-docs-review`: proposals with only documentation changes
### Code review - `status/2-code-review`
Maintainers are expected to review the code and ensure that it is good quality and in accordance
with the documentation in the PR.
New testcases are expected to be added. Ideally, those testcases should fail when the new code is
absent and pass when it is present. The testcases should strive to test as many variants and code paths as
possible to ensure maximum coverage.
Changes to code must be reviewed and approved (LGTM'd) by a minimum of two code maintainers. When
the author of a PR is a maintainer, they still need the approval of two other maintainers.
Once code is approved according to the rules of the subsystem, a maintainer should make sure to
remove this label and add the next one. If documentation is absent but expected, maintainers should
ask for documentation and move to status `status/3-docs-review` for docs maintainer to follow.
Possible transitions from this state:
* Close
* `status/1-design-review`: new design concerns are raised
* `status/3-docs-review`: general case
* `status/4-ready-to-merge`: change not impacting documentation
### Docs review - `status/3-docs-review`
Maintainers are expected to review the documentation in its bigger context, ensuring consistency,
completeness, validity, and breadth of coverage across all existing and new documentation.
They should ask for any editorial change that makes the documentation more consistent and easier to
understand.
The docker/docker repository only contains _reference documentation_; all
"narrative" documentation is kept in a [unified documentation
repository](https://github.com/docker/docker.github.io). Reviewers must
therefore verify which parts of the documentation need to be updated. Any
contribution that may require changing the narrative should get the
`impact/documentation` label: this is the signal for documentation maintainers
that a change will likely need to happen on the unified documentation
repository. When in doubt, it's better to add the label and leave it to
documentation maintainers to decide whether it's OK to skip. In all cases,
leave a comment to explain what documentation changes you think might be needed.
- If the pull request does not impact the documentation at all, the docs review
step is skipped, and the pull request is ready to merge.
- If the changes in
the pull request require changes to the reference documentation (either
command-line reference, or API reference), those changes must be included as
part of the pull request and will be reviewed now. Keep in mind that the
narrative documentation may contain output examples of commands, so may need
to be updated as well, in which case the `impact/documentation` label must
be applied.
- If the PR has the `impact/documentation` label, merging is delayed until a
documentation maintainer acknowledges that a corresponding documentation PR
(or issue) is opened on the documentation repository. Once a documentation
maintainer acknowledges the change, they will move the PR to `status/4-ready-to-merge`
for a code maintainer to push the green button.
Changes and additions to docs must be reviewed and approved (LGTM'd) by a minimum of two docs
sub-project maintainers. If the docs change originates with a docs maintainer, only one additional
LGTM is required (since we assume a docs maintainer approves of their own PR).
Once documentation is approved, a maintainer should make sure to remove this label and
add the next one.
Possible transitions from this state:
* Close
* `status/1-design-review`: new design concerns are raised
* `status/2-code-review`: requires more code changes
* `status/4-ready-to-merge`: general case
### Merge - `status/4-ready-to-merge`
Maintainers are expected to merge this pull request as soon as possible. They can ask for a rebase
or carry the pull request themselves.
Possible transitions from this state:
* Merge: general case
* Close: carry PR
After merging a pull request, the maintainer should consider applying one or multiple impact labels
to ease future classification:
* `impact/api` signifies the patch impacted the Engine API
* `impact/changelog` signifies the change is significant enough to make it in the changelog
* `impact/cli` signifies the patch impacted a CLI command
* `impact/dockerfile` signifies the patch impacted the Dockerfile syntax
* `impact/deprecation` signifies the patch participates in deprecating an existing feature
### Close
If a pull request is closed, it is expected that sufficient justification will be provided. In
particular, if there are alternative ways of achieving the same net result, those need to be
spelled out. If the pull request is trying to solve a use case that is not one that we (as a
community) want to support, a justification for why should be provided.
The number of maintainers it takes to decide and close a PR is deliberately left unspecified. We
assume that the group of maintainers is bound by mutual trust and respect, and that opposition from
any single maintainer should be taken into consideration. Similarly, we expect maintainers to
justify their reasoning and to accept debate.
## Escalation process
Despite the previously described reviewing process, some PRs might not show any progress for various
reasons:
- No strong opinion for or against the proposed patch
- Debates about the proper way to solve the problem at hand
- Lack of consensus
- ...
All these will eventually lead to stalled PRs, where no apparent progress is made across several
weeks, or even months.
Maintainers should use their best judgement and apply the `status/needs-attention` label. It must
be used sparingly, as each PR with such label will be discussed by a group of maintainers during a
review session. The goal of that session is to agree on one of the following outcomes for the PR:
* Close, explaining the rationale for not pursuing further
* Continue, either by pushing the PR further in the workflow, or by deciding to carry the patch
(ideally, a maintainer should be immediately assigned to make sure that the PR keeps continued
attention)
* Escalate to Solomon by formulating a few specific questions whose answers will allow
  maintainers to decide.
## Milestones
Typically, every merged pull request gets shipped naturally with the next release cut from the
`master` branch (either the next minor or major version, as indicated by the
[`VERSION`](https://github.com/docker/docker/blob/master/VERSION) file at the root of the
repository). However, the time-based nature of the release process provides no guarantee that a
given pull request will get merged in time. In other words, all open pull requests are implicitly
considered part of the next minor or major release milestone, and this won't be materialized on
GitHub.
A merged pull request must be attached to the milestone corresponding to the release in which it
will be shipped: this is both useful for tracking, and to help the release manager with the
changelog generation.
An open pull request may exceptionally get attached to a milestone to express a particular intent to
get it merged in time for that release. This may for example be the case for an important feature to
be included in a minor release, or a critical bugfix to be included in a patch release.
Finally, and as documented by the [`PATCH-RELEASES.md`](PATCH-RELEASES.md) process, the existence of
a milestone is not a guarantee that a release will happen, as some milestones will be created purely
for the purpose of bookkeeping.
# Tools
This page describes the tools we use and infrastructure that is in place for
the Docker project.
### CI
The Docker project uses [Jenkins](https://jenkins.dockerproject.org/) as our
continuous integration server. Each Pull Request to Docker is tested by running the
equivalent of `make all`. We chose Jenkins because we can host it ourselves and
run Docker-in-Docker for testing.
#### Leeroy
Leeroy is a Go application which integrates Jenkins with
GitHub pull requests. Leeroy uses
[GitHub hooks](https://developer.github.com/v3/repos/hooks/)
to listen for pull request notifications and starts jobs on your Jenkins
server. Using the Jenkins
[notification plugin](https://wiki.jenkins-ci.org/display/JENKINS/Notification+Plugin),
Leeroy updates the pull request using GitHub's
[status API](https://developer.github.com/v3/repos/statuses/)
with pending, success, failure, or error statuses.
The leeroy repository is maintained at
[github.com/docker/leeroy](https://github.com/docker/leeroy).
#### GordonTheTurtle IRC Bot
The GordonTheTurtle IRC Bot lives in the
[#docker-maintainers](https://botbot.me/freenode/docker-maintainers/) channel
on Freenode. He is built in Go and is based on the project at
[github.com/fabioxgn/go-bot](https://github.com/fabioxgn/go-bot).
His main command is `!rebuild`, which rebuilds a given Pull Request for a repository.
This command works by integrating with Leeroy. He has a few other commands too, such
as `!gif` or `!godoc`, but we are always looking for more fun commands to add.
The gordon-bot repository is maintained at
[github.com/docker/gordon-bot](https://github.com/docker/gordon-bot)
### NSQ
We use [NSQ](https://github.com/bitly/nsq) for various aspects of the project
infrastructure.
#### Hooks
The hooks project,
[github.com/crosbymichael/hooks](https://github.com/crosbymichael/hooks),
is a small Go application that manages webhooks from GitHub, hub.docker.com, and
other third-party services.
It can be used for listening to GitHub webhooks and pushing them to a queue,
archiving hooks to RethinkDB for processing, and broadcasting hooks to various
jobs.
#### Docker Master Binaries
One of the things queued from the hooks is the building of the master
binaries. This happens on every push to the master branch of Docker. The
repository for this is maintained at
[github.com/docker/docker-bb](https://github.com/docker/docker-bb).