Compare commits

master...v2.7.0-rc.0

No commits in common. "master" and "v2.7.0-rc.0" have entirely different histories.

1193 changed files with 62737 additions and 366487 deletions


@@ -1,3 +0,0 @@
-## Docker Distribution Community Code of Conduct
-
-Docker Distribution follows the [CNCF Code of Conduct](https://github.com/cncf/foundation/blob/master/code-of-conduct.md).


@@ -1,20 +0,0 @@
-linters:
-  enable:
-    - structcheck
-    - varcheck
-    - staticcheck
-    - unconvert
-    - gofmt
-    - goimports
-    - golint
-    - ineffassign
-    - vet
-    - unused
-    - misspell
-  disable:
-    - errcheck
-
-run:
-  deadline: 2m
-  skip-dirs:
-    - vendor

.gometalinter.json (new file)

@@ -0,0 +1,16 @@
+{
+  "Vendor": true,
+  "Deadline": "2m",
+  "Sort": ["linter", "severity", "path", "line"],
+  "EnableGC": true,
+  "Enable": [
+    "structcheck",
+    "staticcheck",
+    "unconvert",
+    "gofmt",
+    "goimports",
+    "golint",
+    "vet"
+  ]
+}


@@ -1,18 +1,13 @@
-dist: bionic
+dist: trusty
 sudo: required
 # setup travis so that we can run containers for integration tests
 services:
   - docker
-jobs:
-  include:
-    - arch: amd64
-    - arch: s390x
 language: go
 go:
-  - "1.14.x"
+  - "1.11.x"
 go_import_path: github.com/docker/distribution
@@ -30,7 +25,7 @@ before_install:
   - sudo apt-get -q update
 install:
-  - cd /tmp && go get -u github.com/vbatts/git-validation
+  - go get -u github.com/vbatts/git-validation
   # TODO: Add enforcement of license
   # - go get -u github.com/kunalkushwaha/ltag
   - cd $TRAVIS_BUILD_DIR
@@ -39,7 +34,7 @@ script:
   - export GOOS=$TRAVIS_GOOS
   - export CGO_ENABLED=$TRAVIS_CGO_ENABLED
   - DCO_VERBOSITY=-q script/validate/dco
-  - GOOS=linux GO111MODULE=on script/setup/install-dev-tools
+  - GOOS=linux script/setup/install-dev-tools
   - script/validate/vendor
   - go build -i .
   - make check


@@ -4,12 +4,11 @@
 ### If your problem is with...
-- automated builds or your [Docker Hub](https://hub.docker.com/) account
-  - Report it to [Hub Support](https://hub.docker.com/support/)
-- Distributions of Docker for desktop or Linux
-  - Report [Mac Desktop issues](https://github.com/docker/for-mac)
-  - Report [Windows Desktop issues](https://github.com/docker/for-win)
-  - Report [Linux issues](https://github.com/docker/for-linux)
+- automated builds
+- your account on the [Docker Hub](https://hub.docker.com/)
+- any other [Docker Hub](https://hub.docker.com/) issue
+
+Then please do not report your issue here - you should instead report it to [https://support.docker.com](https://support.docker.com)
 ### If you...
@@ -17,8 +16,10 @@
 - can't figure out something
 - are not sure what's going on or what your problem is
-Please ask first in the #distribution channel on Docker community slack.
-[Click here for an invite to Docker community slack](https://dockr.ly/slack)
+Then please do not open an issue here yet - you should first try one of the following support forums:
+- irc: #docker-distribution on freenode
+- mailing-list: <distribution@dockerproject.org> or https://groups.google.com/a/dockerproject.org/forum/#!forum/distribution
 ### Reporting security issues
@@ -58,72 +59,90 @@ By following these simple rules you will get better and faster feedback on your
 7. provide any relevant detail about your specific Registry configuration (e.g., storage backend used)
 8. indicate if you are using an enterprise proxy, Nginx, or anything else between you and your Registry
-## Contributing Code
-Contributions should be made via pull requests. Pull requests will be reviewed
-by one or more maintainers or reviewers and merged when acceptable.
+## Contributing a patch for a known bug, or a small correction
 You should follow the basic GitHub workflow:
-1. Use your own [fork](https://help.github.com/en/articles/about-forks)
-2. Create your [change](https://github.com/containerd/project/blob/master/CONTRIBUTING.md#successful-changes)
-3. Test your code
-4. [Commit](https://github.com/containerd/project/blob/master/CONTRIBUTING.md#commit-messages) your work, always [sign your commits](https://github.com/containerd/project/blob/master/CONTRIBUTING.md#commit-messages)
-5. Push your change to your fork and create a [Pull Request](https://help.github.com/en/github/collaborating-with-issues-and-pull-requests/creating-a-pull-request-from-a-fork)
+1. fork
+2. commit a change
+3. make sure the tests pass
+4. PR
-Refer to [containerd's contribution guide](https://github.com/containerd/project/blob/master/CONTRIBUTING.md#successful-changes)
-for tips on creating a successful contribution.
+Additionally, you must [sign your commits](https://github.com/docker/docker/blob/master/CONTRIBUTING.md#sign-your-work). It's very simple:
-## Sign your work
+- configure your name with git: `git config user.name "Real Name" && git config user.email mail@example.com`
+- sign your commits using `-s`: `git commit -s -m "My commit"`
-The sign-off is a simple line at the end of the explanation for the patch. Your
-signature certifies that you wrote the patch or otherwise have the right to pass
-it on as an open-source patch. The rules are pretty simple: if you can certify
-the below (from [developercertificate.org](http://developercertificate.org/)):
+Some simple rules to ensure quick merge:
-```
-Developer Certificate of Origin
-Version 1.1
-Copyright (C) 2004, 2006 The Linux Foundation and its contributors.
-660 York Street, Suite 102,
-San Francisco, CA 94110 USA
-Everyone is permitted to copy and distribute verbatim copies of this
-license document, but changing it is not allowed.
-Developer's Certificate of Origin 1.1
-By making a contribution to this project, I certify that:
-(a) The contribution was created in whole or in part by me and I
-    have the right to submit it under the open source license
-    indicated in the file; or
-(b) The contribution is based upon previous work that, to the best
-    of my knowledge, is covered under an appropriate open source
-    license and I have the right under that license to submit that
-    work with modifications, whether created in whole or in part
-    by me, under the same open source license (unless I am
-    permitted to submit under a different license), as indicated
-    in the file; or
-(c) The contribution was provided directly to me by some other
-    person who certified (a), (b) or (c) and I have not modified
-    it.
-(d) I understand and agree that this project and the contribution
-    are public and that a record of the contribution (including all
-    personal information I submit with it, including my sign-off) is
-    maintained indefinitely and may be redistributed consistent with
-    this project or the open source license(s) involved.
-```
-Then you just add a line to every git commit message:
-    Signed-off-by: Joe Smith <joe.smith@email.com>
-Use your real name (sorry, no pseudonyms or anonymous contributions.)
-If you set your `user.name` and `user.email` git configs, you can sign your
-commit automatically with `git commit -s`.
+- clearly point to the issue(s) you want to fix in your PR comment (e.g., `closes #12345`)
+- prefer multiple (smaller) PRs addressing individual issues over a big one trying to address multiple issues at once
+- if you need to amend your PR following comments, please squash instead of adding more commits
+## Contributing new features
+You are heavily encouraged to first discuss what you want to do. You can do so on the irc channel, or by opening an issue that clearly describes the use case you want to fulfill, or the problem you are trying to solve.
+If this is a major new feature, you should then submit a proposal that describes your technical solution and reasoning.
+If you did discuss it first, this will likely be greenlighted very fast. It's advisable to address all feedback on this proposal before starting actual work.
+Then you should submit your implementation, clearly linking to the issue (and possible proposal).
+Your PR will be reviewed by the community, then ultimately by the project maintainers, before being merged.
+It's mandatory to:
+- interact respectfully with other community members and maintainers - more generally, you are expected to abide by the [Docker community rules](https://github.com/docker/docker/blob/master/CONTRIBUTING.md#docker-community-guidelines)
+- address maintainers' comments and modify your submission accordingly
+- write tests for any new code
+Complying to these simple rules will greatly accelerate the review process, and will ensure you have a pleasant experience in contributing code to the Registry.
+Have a look at a great, successful contribution: the [Swift driver PR](https://github.com/docker/distribution/pull/493)
+## Coding Style
+Unless explicitly stated, we follow all coding guidelines from the Go
+community. While some of these standards may seem arbitrary, they somehow seem
+to result in a solid, consistent codebase.
+It is possible that the code base does not currently comply with these
+guidelines. We are not looking for a massive PR that fixes this, since that
+goes against the spirit of the guidelines. All new contributions should make a
+best effort to clean up and make the code base better than they left it.
+Obviously, apply your best judgement. Remember, the goal here is to make the
+code base easier for humans to navigate and understand. Always keep that in
+mind when nudging others to comply.
+The rules:
+1. All code should be formatted with `gofmt -s`.
+2. All code should pass the default levels of
+   [`golint`](https://github.com/golang/lint).
+3. All code should follow the guidelines covered in [Effective
+   Go](http://golang.org/doc/effective_go.html) and [Go Code Review
+   Comments](https://github.com/golang/go/wiki/CodeReviewComments).
+4. Comment the code. Tell us the why, the history and the context.
+5. Document _all_ declarations and methods, even private ones. Declare
+   expectations, caveats and anything else that may be important. If a type
+   gets exported, having the comments already there will ensure it's ready.
+6. Variable name length should be proportional to its context and no longer.
+   `noCommaALongVariableNameLikeThisIsNotMoreClearWhenASimpleCommentWouldDo`.
+   In practice, short methods will have short variable names and globals will
+   have longer names.
+7. No underscores in package names. If you need a compound name, step back,
+   and re-examine why you need a compound name. If you still think you need a
+   compound name, lose the underscore.
+8. No utils or helpers packages. If a function is not general enough to
+   warrant its own package, it has not been written generally enough to be a
+   part of a util package. Just leave it unexported and well-documented.
+9. All tests should run with `go test` and outside tooling should not be
+   required. No, we don't need another unit testing framework. Assertion
+   packages are acceptable if they provide _real_ incremental value.
+10. Even though we call these "rules" above, they are actually just
+    guidelines. Since you've read all the rules, you now know that.
+If you are having trouble getting into the mood of idiomatic Go, we recommend
+reading through [Effective Go](http://golang.org/doc/effective_go.html). The
+[Go Blog](http://blog.golang.org/) is also a great resource. Drinking the
+kool-aid is a lot easier than going thirsty.
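
The sign-off requirement added in this diff boils down to a one-flag git workflow. A minimal sketch, assuming git is installed; the temporary repository, "Jane Doe" identity, and file name are made-up placeholders, not part of the project:

```shell
# Sketch of the DCO sign-off flow: configure an identity, commit with -s,
# and verify the Signed-off-by trailer that git appends to the message.
set -e
repo=$(mktemp -d)
cd "$repo"
git init -q .
git config user.name "Jane Doe"
git config user.email "jane@example.com"
echo "demo" > file.txt
git add file.txt
# -s appends a trailer derived from user.name/user.email
git commit -q -s -m "Add demo file"
git log -1 --format=%B   # last line: Signed-off-by: Jane Doe <jane@example.com>
```

`git commit --amend -s` retrofits the trailer onto an existing commit, which is the fix maintainers usually ask for when the check fails.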


@ -1,30 +1,20 @@
ARG GO_VERSION=1.13.8 FROM golang:1.10-alpine
FROM golang:${GO_VERSION}-alpine3.11 AS build
ENV DISTRIBUTION_DIR /go/src/github.com/docker/distribution ENV DISTRIBUTION_DIR /go/src/github.com/docker/distribution
ENV BUILDTAGS include_oss include_gcs ENV DOCKER_BUILDTAGS include_oss include_gcs
ARG GOOS=linux ARG GOOS=linux
ARG GOARCH=amd64 ARG GOARCH=amd64
ARG GOARM=6
ARG VERSION
ARG REVISION
RUN set -ex \ RUN set -ex \
&& apk add --no-cache make git file && apk add --no-cache make git
WORKDIR $DISTRIBUTION_DIR WORKDIR $DISTRIBUTION_DIR
COPY . $DISTRIBUTION_DIR COPY . $DISTRIBUTION_DIR
RUN CGO_ENABLED=0 make PREFIX=/go clean binaries && file ./bin/registry | grep "statically linked"
FROM alpine:3.11
RUN set -ex \
&& apk add --no-cache ca-certificates apache2-utils
COPY cmd/registry/config-dev.yml /etc/docker/registry/config.yml COPY cmd/registry/config-dev.yml /etc/docker/registry/config.yml
COPY --from=build /go/src/github.com/docker/distribution/bin/registry /bin/registry
RUN make PREFIX=/go clean binaries
VOLUME ["/var/lib/registry"] VOLUME ["/var/lib/registry"]
EXPOSE 5000 EXPOSE 5000
ENTRYPOINT ["registry"] ENTRYPOINT ["registry"]


@@ -1,144 +0,0 @@
# docker/distribution Project Governance
Docker distribution abides by the [CNCF Code of Conduct](https://github.com/cncf/foundation/blob/master/code-of-conduct.md).
For specific guidance on practical contribution steps please
see our [CONTRIBUTING.md](./CONTRIBUTING.md) guide.
## Maintainership
There are different types of maintainers, with different responsibilities, but
all maintainers have 3 things in common:
1) They share responsibility in the project's success.
2) They have made a long-term, recurring time investment to improve the project.
3) They spend that time doing whatever needs to be done, not necessarily what
is the most interesting or fun.
Maintainers are often under-appreciated, because their work is harder to appreciate.
It's easy to appreciate a really cool and technically advanced feature. It's harder
to appreciate the absence of bugs, the slow but steady improvement in stability,
or the reliability of a release process. But those things distinguish a good
project from a great one.
## Reviewers
A reviewer is a core role within the project.
They share in reviewing issues and pull requests and their LGTM counts towards the
required LGTM count to merge a code change into the project.
Reviewers are part of the organization but do not have write access.
Becoming a reviewer is a core aspect in the journey to becoming a maintainer.
## Adding maintainers
Maintainers are first and foremost contributors that have shown they are
committed to the long term success of a project. Contributors wanting to become
maintainers are expected to be deeply involved in contributing code, pull
request review, and triage of issues in the project for more than three months.
Just contributing does not make you a maintainer, it is about building trust
with the current maintainers of the project and being a person that they can
depend on and trust to make decisions in the best interest of the project.
Periodically, the existing maintainers curate a list of contributors that have
shown regular activity on the project over the prior months. From this list,
maintainer candidates are selected and proposed in a pull request or a
maintainers communication channel.
After a candidate has been announced to the maintainers, the existing
maintainers are given five business days to discuss the candidate, raise
objections and cast their vote. Votes may take place on the communication
channel or via pull request comment. Candidates must be approved by at least 66%
of the current maintainers by adding their vote on the mailing list. The
reviewer role has the same process but only requires 33% of current maintainers.
Only maintainers of the repository that the candidate is proposed for are
allowed to vote.
If a candidate is approved, a maintainer will contact the candidate to invite
the candidate to open a pull request that adds the contributor to the
MAINTAINERS file. The voting process may take place inside a pull request if a
maintainer has already discussed the candidacy with the candidate and a
maintainer is willing to be a sponsor by opening the pull request. The candidate
becomes a maintainer once the pull request is merged.
## Stepping down policy
Life priorities, interests, and passions can change. If you're a maintainer but
feel you must remove yourself from the list, inform other maintainers that you
intend to step down, and if possible, help find someone to pick up your work.
At the very least, ensure your work can be continued where you left off.
After you've informed other maintainers, create a pull request to remove
yourself from the MAINTAINERS file.
## Removal of inactive maintainers
Similar to the procedure for adding new maintainers, existing maintainers can
be removed from the list if they do not show significant activity on the
project. Periodically, the maintainers review the list of maintainers and their
activity over the last three months.
If a maintainer has shown insufficient activity over this period, a neutral
person will contact the maintainer to ask if they want to continue being
a maintainer. If the maintainer decides to step down as a maintainer, they
open a pull request to be removed from the MAINTAINERS file.
If the maintainer wants to remain a maintainer, but is unable to perform the
required duties they can be removed with a vote of at least 66% of the current
maintainers. In this case, maintainers should first propose the change to
maintainers via the maintainers communication channel, then open a pull request
for voting. The voting period is five business days. The voting pull request
should not come as a surprise to any maintainer and any discussion related to
performance must not be discussed on the pull request.
## How are decisions made?
Docker distribution is an open-source project with an open design philosophy.
This means that the repository is the source of truth for EVERY aspect of the
project, including its philosophy, design, road map, and APIs. *If it's part of
the project, it's in the repo. If it's in the repo, it's part of the project.*
As a result, all decisions can be expressed as changes to the repository. An
implementation change is a change to the source code. An API change is a change
to the API specification. A philosophy change is a change to the philosophy
manifesto, and so on.
All decisions affecting distribution, big and small, follow the same 3 steps:
* Step 1: Open a pull request. Anyone can do this.
* Step 2: Discuss the pull request. Anyone can do this.
* Step 3: Merge or refuse the pull request. Who does this depends on the nature
of the pull request and which areas of the project it affects.
## Helping contributors with the DCO
The [DCO or `Sign your work`](./CONTRIBUTING.md#sign-your-work)
requirement is not intended as a roadblock or speed bump.
Some contributors are not as familiar with `git`, or have used a web
based editor, and thus asking them to `git commit --amend -s` is not the best
way forward.
In this case, maintainers can update the commits based on clause (c) of the DCO.
The most trivial way for a contributor to allow the maintainer to do this, is to
add a DCO signature in a pull requests's comment, or a maintainer can simply
note that the change is sufficiently trivial that it does not substantially
change the existing contribution - i.e., a spelling change.
When you add someone's DCO, please also add your own to keep a log.
## I'm a maintainer. Should I make pull requests too?
Yes. Nobody should ever push to master directly. All changes should be
made through a pull request.
## Conflict Resolution
If you have a technical dispute that you feel has reached an impasse with a
subset of the community, any contributor may open an issue, specifically
calling for a resolution vote of the current core maintainers to resolve the
dispute. The same voting quorums required (2/3) for adding and removing
maintainers will apply to conflict resolution.


@@ -1,16 +1,243 @@
-# Docker distribution project maintainers & reviewers
+# Distribution maintainers file
 #
-# See GOVERNANCE.md for maintainer versus reviewer roles
+# This file describes who runs the docker/distribution project and how.
+# This is a living document - if you see something out of date or missing, speak up!
 #
-# MAINTAINERS
-# GitHub ID, Name, Email address
-"dmcgowan","Derek McGowan","derek@mcgstyle.net"
-"manishtomar","Manish Tomar","manish.tomar@docker.com"
-"stevvooe","Stephen Day","stevvooe@gmail.com"
+# It is structured to be consumable by both humans and programs.
+# To extract its contents programmatically, use any TOML-compliant parser.
 #
-# REVIEWERS
-# GitHub ID, Name, Email address
-"caervs","Ryan Abrams","rdabrams@gmail.com"
-"davidswu","David Wu","dwu7401@gmail.com"
-"RobbKistler","Robb Kistler","robb.kistler@docker.com"
-"thajeztah","Sebastiaan van Stijn","github@gone.nl"
+[Rules]
+[Rules.maintainers]
+title = "What is a maintainer?"
text = """
There are different types of maintainers, with different responsibilities, but
all maintainers have 3 things in common:
1) They share responsibility in the project's success.
2) They have made a long-term, recurring time investment to improve the project.
3) They spend that time doing whatever needs to be done, not necessarily what
is the most interesting or fun.
Maintainers are often under-appreciated, because their work is harder to appreciate.
It's easy to appreciate a really cool and technically advanced feature. It's harder
to appreciate the absence of bugs, the slow but steady improvement in stability,
or the reliability of a release process. But those things distinguish a good
project from a great one.
"""
[Rules.reviewer]
title = "What is a reviewer?"
text = """
A reviewer is a core role within the project.
They share in reviewing issues and pull requests and their LGTM count towards the
required LGTM count to merge a code change into the project.
Reviewers are part of the organization but do not have write access.
Becoming a reviewer is a core aspect in the journey to becoming a maintainer.
"""
[Rules.adding-maintainers]
title = "How are maintainers added?"
text = """
Maintainers are first and foremost contributors that have shown they are
committed to the long term success of a project. Contributors wanting to become
maintainers are expected to be deeply involved in contributing code, pull
request review, and triage of issues in the project for more than three months.
Just contributing does not make you a maintainer, it is about building trust
with the current maintainers of the project and being a person that they can
depend on and trust to make decisions in the best interest of the project.
Periodically, the existing maintainers curate a list of contributors that have
shown regular activity on the project over the prior months. From this list,
maintainer candidates are selected and proposed on the maintainers mailing list.
After a candidate has been announced on the maintainers mailing list, the
existing maintainers are given five business days to discuss the candidate,
raise objections and cast their vote. Candidates must be approved by at least 66% of the current maintainers by adding their vote on the mailing
list. Only maintainers of the repository that the candidate is proposed for are
allowed to vote.
If a candidate is approved, a maintainer will contact the candidate to invite
the candidate to open a pull request that adds the contributor to the
MAINTAINERS file. The candidate becomes a maintainer once the pull request is
merged.
"""
[Rules.stepping-down-policy]
title = "Stepping down policy"
text = """
Life priorities, interests, and passions can change. If you're a maintainer but
feel you must remove yourself from the list, inform other maintainers that you
intend to step down, and if possible, help find someone to pick up your work.
At the very least, ensure your work can be continued where you left off.
After you've informed other maintainers, create a pull request to remove
yourself from the MAINTAINERS file.
"""
[Rules.inactive-maintainers]
title = "Removal of inactive maintainers"
text = """
Similar to the procedure for adding new maintainers, existing maintainers can
be removed from the list if they do not show significant activity on the
project. Periodically, the maintainers review the list of maintainers and their
activity over the last three months.
If a maintainer has shown insufficient activity over this period, a neutral
person will contact the maintainer to ask if they want to continue being
a maintainer. If the maintainer decides to step down as a maintainer, they
open a pull request to be removed from the MAINTAINERS file.
If the maintainer wants to remain a maintainer, but is unable to perform the
required duties they can be removed with a vote of at least 66% of
the current maintainers. An e-mail is sent to the
mailing list, inviting maintainers of the project to vote. The voting period is
five business days. Issues related to a maintainer's performance should be
discussed with them among the other maintainers so that they are not surprised
by a pull request removing them.
"""
[Rules.decisions]
title = "How are decisions made?"
text = """
Short answer: EVERYTHING IS A PULL REQUEST.
distribution is an open-source project with an open design philosophy. This means
that the repository is the source of truth for EVERY aspect of the project,
including its philosophy, design, road map, and APIs. *If it's part of the
project, it's in the repo. If it's in the repo, it's part of the project.*
As a result, all decisions can be expressed as changes to the repository. An
implementation change is a change to the source code. An API change is a change
to the API specification. A philosophy change is a change to the philosophy
manifesto, and so on.
All decisions affecting distribution, big and small, follow the same 3 steps:
* Step 1: Open a pull request. Anyone can do this.
* Step 2: Discuss the pull request. Anyone can do this.
* Step 3: Merge or refuse the pull request. Who does this depends on the nature
of the pull request and which areas of the project it affects.
"""
[Rules.DCO]
title = "Helping contributors with the DCO"
text = """
The [DCO or `Sign your work`](
https://github.com/moby/moby/blob/master/CONTRIBUTING.md#sign-your-work)
requirement is not intended as a roadblock or speed bump.
Some distribution contributors are not as familiar with `git`, or have used a web
based editor, and thus asking them to `git commit --amend -s` is not the best
way forward.
In this case, maintainers can update the commits based on clause (c) of the DCO.
The most trivial way for a contributor to allow the maintainer to do this, is to
add a DCO signature in a pull requests's comment, or a maintainer can simply
note that the change is sufficiently trivial that it does not substantially
change the existing contribution - i.e., a spelling change.
When you add someone's DCO, please also add your own to keep a log.
"""
[Rules."no direct push"]
title = "I'm a maintainer. Should I make pull requests too?"
text = """
Yes. Nobody should ever push to master directly. All changes should be
made through a pull request.
"""
[Rules.tsc]
title = "Conflict Resolution and technical disputes"
text = """
distribution defers to the [Technical Steering Committee](https://github.com/moby/tsc) for escalations and resolution on disputes for technical matters."
"""
[Rules.meta]
title = "How is this process changed?"
text = "Just like everything else: by making a pull request :)"
# Current project organization
[Org]
[Org.Maintainers]
people = [
"dmcgowan",
"dmp42",
"stevvooe",
]
[Org.Reviewers]
people = [
"manishtomar",
"caervs",
"davidswu",
"RobbKistler"
]
[people]
# A reference list of all people associated with the project.
# All other sections should refer to people by their canonical key
# in the people section.
# ADD YOURSELF HERE IN ALPHABETICAL ORDER
[people.caervs]
Name = "Ryan Abrams"
Email = "rdabrams@gmail.com"
GitHub = "caervs"
[people.davidswu]
Name = "David Wu"
Email = "dwu7401@gmail.com"
GitHub = "davidswu"
[people.dmcgowan]
Name = "Derek McGowan"
Email = "derek@mcgstyle.net"
GitHub = "dmcgowan"
[people.dmp42]
Name = "Olivier Gambier"
Email = "olivier@docker.com"
GitHub = "dmp42"
[people.manishtomar]
Name = "Manish Tomar"
Email = "manish.tomar@docker.com"
GitHub = "manishtomar"
[people.RobbKistler]
Name = "Robb Kistler"
Email = "robb.kistler@docker.com"
GitHub = "RobbKistler"
[people.stevvooe]
Name = "Stephen Day"
Email = "stephen.day@docker.com"
GitHub = "stevvooe"


@@ -2,8 +2,8 @@
 ROOTDIR=$(dir $(abspath $(lastword $(MAKEFILE_LIST))))
 # Used to populate version variable in main package.
-VERSION ?= $(shell git describe --match 'v[0-9]*' --dirty='.m' --always)
-REVISION ?= $(shell git rev-parse HEAD)$(shell if ! git diff --no-ext-diff --quiet --exit-code; then echo .m; fi)
+VERSION=$(shell git describe --match 'v[0-9]*' --dirty='.m' --always)
+REVISION=$(shell git rev-parse HEAD)$(shell if ! git diff --no-ext-diff --quiet --exit-code; then echo .m; fi)
 PKG=github.com/docker/distribution
@@ -50,7 +50,7 @@ version/version.go:
 check: ## run all linters (TODO: enable "unused", "varcheck", "ineffassign", "unconvert", "staticheck", "goimports", "structcheck")
 	@echo "$(WHALE) $@"
-	@GO111MODULE=off golangci-lint run
+	gometalinter --config .gometalinter.json ./...
 test: ## run tests, except integration test with test.short
 	@echo "$(WHALE) $@"
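
The `VERSION` and `REVISION` variables in this Makefile are populated by shelling out to git. A sketch of what the same expressions evaluate to, run against a throwaway repository; the repo path, identity, and `v0.1.0` tag are made up for the demo:

```shell
# Reproduces the Makefile's VERSION/REVISION logic outside of make.
set -e
repo=$(mktemp -d)
cd "$repo"
git init -q .
git config user.name "Demo"
git config user.email "demo@example.com"
echo "hello" > file.txt
git add file.txt
git commit -q -m "initial"
git tag -a v0.1.0 -m "version 0.1.0"

# Same expressions as the Makefile:
VERSION=$(git describe --match 'v[0-9]*' --dirty='.m' --always)
REVISION=$(git rev-parse HEAD)$(if ! git diff --no-ext-diff --quiet --exit-code; then echo .m; fi)
echo "VERSION=$VERSION"      # clean checkout on the tag -> v0.1.0

echo "dirty" >> file.txt     # an uncommitted change marks the build dirty
VERSION=$(git describe --match 'v[0-9]*' --dirty='.m' --always)
echo "VERSION=$VERSION"      # -> v0.1.0.m
```

`--dirty='.m'` appends the `.m` (modified) suffix whenever the working tree differs from HEAD, and `--always` falls back to an abbreviated commit hash when no matching annotated tag exists.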


@ -2,32 +2,31 @@
The Docker toolset to pack, ship, store, and deliver content. The Docker toolset to pack, ship, store, and deliver content.
This repository's main product is the Open Source Docker Registry implementation This repository's main product is the Docker Registry 2.0 implementation
for storing and distributing Docker and OCI images using the for storing and distributing Docker images. It supersedes the
[OCI Distribution Specification](https://github.com/opencontainers/distribution-spec). [docker/docker-registry](https://github.com/docker/docker-registry)
The goal of this project is to provide a simple, secure, and scalable base project with a new API design, focused around security and performance.
for building a registry solution or running a simple private registry.
<img src="https://www.docker.com/sites/default/files/oyster-registry-3.png" width=200px/> <img src="https://www.docker.com/sites/default/files/oyster-registry-3.png" width=200px/>
[![Build Status](https://travis-ci.org/docker/distribution.svg?branch=master)](https://travis-ci.org/docker/distribution) [![Circle CI](https://circleci.com/gh/docker/distribution/tree/master.svg?style=svg)](https://circleci.com/gh/docker/distribution/tree/master)
[![GoDoc](https://godoc.org/github.com/docker/distribution?status.svg)](https://godoc.org/github.com/docker/distribution) [![GoDoc](https://godoc.org/github.com/docker/distribution?status.svg)](https://godoc.org/github.com/docker/distribution)
This repository contains the following components: This repository contains the following components:
|**Component** |Description | |**Component** |Description |
|--------------------|----------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------| |--------------------|----------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------|
| **registry** | An implementation of the [OCI Distribution Specification](https://github.com/opencontainers/distribution-spec). | | **registry** | An implementation of the [Docker Registry HTTP API V2](docs/spec/api.md) for use with docker 1.6+. |
| **libraries** | A rich set of libraries for interacting with distribution components. Please see [godoc](https://godoc.org/github.com/docker/distribution) for details. **Note**: The interfaces for these libraries are **unstable**. | | **libraries** | A rich set of libraries for interacting with distribution components. Please see [godoc](https://godoc.org/github.com/docker/distribution) for details. **Note**: These libraries are **unstable**. |
| **specifications** | _Distribution_ related specifications are available in [docs/spec](docs/spec) |
| **documentation** | Docker's full documentation set is available at [docs.docker.com](https://docs.docker.com). This repository [contains the subset](docs/) related just to the registry. | | **documentation** | Docker's full documentation set is available at [docs.docker.com](https://docs.docker.com). This repository [contains the subset](docs/) related just to the registry. |
### How does this integrate with Docker, containerd, and other OCI clients? ### How does this integrate with Docker engine?
Clients implement against the OCI specification and communicate with the This project should provide an implementation to a V2 API for use in the [Docker
registry using HTTP. This project contains a client implementation which core project](https://github.com/docker/docker). The API should be embeddable
is currently in use by Docker; however, it is deprecated for the and simplify the process of securely pulling and pushing content from `docker`
[implementation in containerd](https://github.com/containerd/containerd/tree/master/remotes/docker) daemons.
and will not support new features.
### What are the long term goals of the Distribution project? ### What are the long term goals of the Distribution project?
@ -44,6 +43,18 @@ system that allow users to:
* Implement their own home made solution through good specs, and solid * Implement their own home made solution through good specs, and solid
extensions mechanism. extensions mechanism.
## More about Registry 2.0
The new registry implementation provides the following benefits:
- faster push and pull
- new, more efficient implementation
- simplified deployment
- pluggable storage backend
- webhook notifications
For information on upcoming functionality, please see [ROADMAP.md](ROADMAP.md).
### Who needs to deploy a registry? ### Who needs to deploy a registry?
By default, Docker users pull images from Docker's public registry instance. By default, Docker users pull images from Docker's public registry instance.
@ -67,25 +78,53 @@ For those who have previously deployed their own registry based on the Registry
data migration is required. A tool to assist with migration efforts has been data migration is required. A tool to assist with migration efforts has been
created. For more information see [docker/migrator](https://github.com/docker/migrator). created. For more information see [docker/migrator](https://github.com/docker/migrator).
## Contribution ## Contribute
Please see [CONTRIBUTING.md](CONTRIBUTING.md) for details on how to contribute Please see [CONTRIBUTING.md](CONTRIBUTING.md) for details on how to contribute
issues, fixes, and patches to this project. If you are contributing code, see issues, fixes, and patches to this project. If you are contributing code, see
the instructions for [building a development environment](BUILDING.md). the instructions for [building a development environment](BUILDING.md).
## Communication ## Support
For async communication and long-running discussions, please use issues and pull requests on the GitHub repo. If any issues are encountered while using the _Distribution_ project, several
This will be the best place to discuss design and implementation. avenues are available for support:
For sync communication, we have a community Slack with a #distribution channel that everyone is welcome to join and chat about development. <table>
<tr>
<th align="left">
IRC
</th>
<td>
#docker-distribution on FreeNode
</td>
</tr>
<tr>
<th align="left">
Issue Tracker
</th>
<td>
github.com/docker/distribution/issues
</td>
</tr>
<tr>
<th align="left">
Google Groups
</th>
<td>
https://groups.google.com/a/dockerproject.org/forum/#!forum/distribution
</td>
</tr>
<tr>
<th align="left">
Mailing List
</th>
<td>
docker@dockerproject.org
</td>
</tr>
</table>
**Slack:** Catch us in the #distribution channels on dockercommunity.slack.com.
[Click here for an invite to Docker community slack.](https://dockr.ly/slack)
## Licenses ## License
The distribution codebase is released under the [Apache 2.0 license](LICENSE). This project is distributed under [Apache License, Version 2.0](LICENSE).
The README.md file and files in the "docs" folder are licensed under the
Creative Commons Attribution 4.0 International License. You may obtain a
copy of the license, titled CC-BY-4.0, at http://creativecommons.org/licenses/by/4.0/.
View file
@ -10,7 +10,7 @@ import (
"github.com/docker/distribution/reference" "github.com/docker/distribution/reference"
"github.com/opencontainers/go-digest" "github.com/opencontainers/go-digest"
v1 "github.com/opencontainers/image-spec/specs-go/v1" "github.com/opencontainers/image-spec/specs-go/v1"
) )
var ( var (
View file
@ -21,7 +21,7 @@ import (
"text/template" "text/template"
"github.com/docker/distribution/registry/api/errcode" "github.com/docker/distribution/registry/api/errcode"
v2 "github.com/docker/distribution/registry/api/v2" "github.com/docker/distribution/registry/api/v2"
) )
var spaceRegex = regexp.MustCompile(`\n\s*`) var spaceRegex = regexp.MustCompile(`\n\s*`)
View file
@ -12,7 +12,6 @@ import (
_ "github.com/docker/distribution/registry/storage/driver/filesystem" _ "github.com/docker/distribution/registry/storage/driver/filesystem"
_ "github.com/docker/distribution/registry/storage/driver/gcs" _ "github.com/docker/distribution/registry/storage/driver/gcs"
_ "github.com/docker/distribution/registry/storage/driver/inmemory" _ "github.com/docker/distribution/registry/storage/driver/inmemory"
_ "github.com/docker/distribution/registry/storage/driver/middleware/alicdn"
_ "github.com/docker/distribution/registry/storage/driver/middleware/cloudfront" _ "github.com/docker/distribution/registry/storage/driver/middleware/cloudfront"
_ "github.com/docker/distribution/registry/storage/driver/middleware/redirect" _ "github.com/docker/distribution/registry/storage/driver/middleware/redirect"
_ "github.com/docker/distribution/registry/storage/driver/oss" _ "github.com/docker/distribution/registry/storage/driver/oss"
View file
@ -108,9 +108,6 @@ type Configuration struct {
// A file may contain multiple CA certificates encoded as PEM // A file may contain multiple CA certificates encoded as PEM
ClientCAs []string `yaml:"clientcas,omitempty"` ClientCAs []string `yaml:"clientcas,omitempty"`
// Specifies the lowest TLS version allowed
MinimumTLS string `yaml:"minimumtls,omitempty"`
// LetsEncrypt is used to configuration setting up TLS through // LetsEncrypt is used to configuration setting up TLS through
// Let's Encrypt instead of manually specifying certificate and // Let's Encrypt instead of manually specifying certificate and
// key. If a TLS certificate is specified, the Let's Encrypt // key. If a TLS certificate is specified, the Let's Encrypt
@ -391,7 +388,7 @@ func (loglevel *Loglevel) UnmarshalYAML(unmarshal func(interface{}) error) error
switch loglevelString { switch loglevelString {
case "error", "warn", "info", "debug": case "error", "warn", "info", "debug":
default: default:
return fmt.Errorf("invalid loglevel %s Must be one of [error, warn, info, debug]", loglevelString) return fmt.Errorf("Invalid loglevel %s Must be one of [error, warn, info, debug]", loglevelString)
} }
*loglevel = Loglevel(loglevelString) *loglevel = Loglevel(loglevelString)
@ -466,7 +463,7 @@ func (storage *Storage) UnmarshalYAML(unmarshal func(interface{}) error) error {
} }
if len(types) > 1 { if len(types) > 1 {
return fmt.Errorf("must provide exactly one storage type. Provided: %v", types) return fmt.Errorf("Must provide exactly one storage type. Provided: %v", types)
} }
} }
*storage = storageMap *storage = storageMap
@ -668,11 +665,11 @@ func Parse(rd io.Reader) (*Configuration, error) {
v0_1.Loglevel = Loglevel("") v0_1.Loglevel = Loglevel("")
} }
if v0_1.Storage.Type() == "" { if v0_1.Storage.Type() == "" {
return nil, errors.New("no storage configuration provided") return nil, errors.New("No storage configuration provided")
} }
return (*Configuration)(v0_1), nil return (*Configuration)(v0_1), nil
} }
return nil, fmt.Errorf("expected *v0_1Configuration, received %#v", c) return nil, fmt.Errorf("Expected *v0_1Configuration, received %#v", c)
}, },
}, },
}) })
View file
@ -83,7 +83,6 @@ var configStruct = Configuration{
Certificate string `yaml:"certificate,omitempty"` Certificate string `yaml:"certificate,omitempty"`
Key string `yaml:"key,omitempty"` Key string `yaml:"key,omitempty"`
ClientCAs []string `yaml:"clientcas,omitempty"` ClientCAs []string `yaml:"clientcas,omitempty"`
MinimumTLS string `yaml:"minimumtls,omitempty"`
LetsEncrypt struct { LetsEncrypt struct {
CacheFile string `yaml:"cachefile,omitempty"` CacheFile string `yaml:"cachefile,omitempty"`
Email string `yaml:"email,omitempty"` Email string `yaml:"email,omitempty"`
@ -106,7 +105,6 @@ var configStruct = Configuration{
Certificate string `yaml:"certificate,omitempty"` Certificate string `yaml:"certificate,omitempty"`
Key string `yaml:"key,omitempty"` Key string `yaml:"key,omitempty"`
ClientCAs []string `yaml:"clientcas,omitempty"` ClientCAs []string `yaml:"clientcas,omitempty"`
MinimumTLS string `yaml:"minimumtls,omitempty"`
LetsEncrypt struct { LetsEncrypt struct {
CacheFile string `yaml:"cachefile,omitempty"` CacheFile string `yaml:"cachefile,omitempty"`
Email string `yaml:"email,omitempty"` Email string `yaml:"email,omitempty"`
@ -542,7 +540,9 @@ func copyConfig(config Configuration) *Configuration {
} }
configCopy.Notifications = Notifications{Endpoints: []Endpoint{}} configCopy.Notifications = Notifications{Endpoints: []Endpoint{}}
configCopy.Notifications.Endpoints = append(configCopy.Notifications.Endpoints, config.Notifications.Endpoints...) for _, v := range config.Notifications.Endpoints {
configCopy.Notifications.Endpoints = append(configCopy.Notifications.Endpoints, v)
}
configCopy.HTTP.Headers = make(http.Header) configCopy.HTTP.Headers = make(http.Header)
for k, v := range config.HTTP.Headers { for k, v := range config.HTTP.Headers {
View file
@ -122,7 +122,7 @@ func (p *Parser) Parse(in []byte, v interface{}) error {
parseInfo, ok := p.mapping[versionedStruct.Version] parseInfo, ok := p.mapping[versionedStruct.Version]
if !ok { if !ok {
return fmt.Errorf("unsupported version: %q", versionedStruct.Version) return fmt.Errorf("Unsupported version: %q", versionedStruct.Version)
} }
parseAs := reflect.New(parseInfo.ParseAs) parseAs := reflect.New(parseInfo.ParseAs)
@ -220,7 +220,7 @@ func (p *Parser) overwriteStruct(v reflect.Value, fullpath string, path []string
} }
case reflect.Ptr: case reflect.Ptr:
if field.IsNil() { if field.IsNil() {
field.Set(reflect.New(field.Type().Elem())) field.Set(reflect.New(sf.Type))
} }
} }
View file
@ -1,70 +0,0 @@
package configuration
import (
"os"
"reflect"
. "gopkg.in/check.v1"
)
type localConfiguration struct {
Version Version `yaml:"version"`
Log *Log `yaml:"log"`
}
type Log struct {
Formatter string `yaml:"formatter,omitempty"`
}
var expectedConfig = localConfiguration{
Version: "0.1",
Log: &Log{
Formatter: "json",
},
}
type ParserSuite struct{}
var _ = Suite(new(ParserSuite))
func (suite *ParserSuite) TestParserOverwriteIninitializedPoiner(c *C) {
config := localConfiguration{}
os.Setenv("REGISTRY_LOG_FORMATTER", "json")
defer os.Unsetenv("REGISTRY_LOG_FORMATTER")
p := NewParser("registry", []VersionedParseInfo{
{
Version: "0.1",
ParseAs: reflect.TypeOf(config),
ConversionFunc: func(c interface{}) (interface{}, error) {
return c, nil
},
},
})
err := p.Parse([]byte(`{version: "0.1", log: {formatter: "text"}}`), &config)
c.Assert(err, IsNil)
c.Assert(config, DeepEquals, expectedConfig)
}
func (suite *ParserSuite) TestParseOverwriteUnininitializedPoiner(c *C) {
config := localConfiguration{}
os.Setenv("REGISTRY_LOG_FORMATTER", "json")
defer os.Unsetenv("REGISTRY_LOG_FORMATTER")
p := NewParser("registry", []VersionedParseInfo{
{
Version: "0.1",
ParseAs: reflect.TypeOf(config),
ConversionFunc: func(c interface{}) (interface{}, error) {
return c, nil
},
},
})
err := p.Parse([]byte(`{version: "0.1"}`), &config)
c.Assert(err, IsNil)
c.Assert(config, DeepEquals, expectedConfig)
}
View file
@ -70,7 +70,7 @@ to the 1.0 registry. Requests from newer clients will route to the 2.0 registry.
Removing intermediate container edb84c2b40cb Removing intermediate container edb84c2b40cb
Successfully built 74acc70fa106 Successfully built 74acc70fa106
The command outputs its progress until it completes. The commmand outputs its progress until it completes.
4. Start your configuration with compose. 4. Start your configuration with compose.
@ -133,7 +133,7 @@ to the 1.0 registry. Requests from newer clients will route to the 2.0 registry.
> Accept: */* > Accept: */*
> >
< HTTP/1.1 200 OK < HTTP/1.1 200 OK
< Content-Type: application/json < Content-Type: application/json; charset=utf-8
< Docker-Distribution-Api-Version: registry/2.0 < Docker-Distribution-Api-Version: registry/2.0
< Date: Tue, 14 Apr 2015 22:34:13 GMT < Date: Tue, 14 Apr 2015 22:34:13 GMT
< Content-Length: 39 < Content-Length: 39
View file
@ -35,7 +35,7 @@ the [release page](https://github.com/docker/golem/releases/tag/v0.1).
#### Running golem with docker #### Running golem with docker
Additionally golem can be run as a docker image requiring no additional Additionally golem can be run as a docker image requiring no additonal
installation. installation.
`docker run --privileged -v "$GOPATH/src/github.com/docker/distribution/contrib/docker-integration:/test" -w /test distribution/golem golem -rundaemon .` `docker run --privileged -v "$GOPATH/src/github.com/docker/distribution/contrib/docker-integration:/test" -w /test distribution/golem golem -rundaemon .`
View file
@ -245,7 +245,7 @@ func (ts *tokenServer) getToken(ctx context.Context, w http.ResponseWriter, r *h
// Get response context. // Get response context.
ctx, w = dcontext.WithResponseWriter(ctx, w) ctx, w = dcontext.WithResponseWriter(ctx, w)
challenge.SetHeaders(r, w) challenge.SetHeaders(w)
handleError(ctx, errcode.ErrorCodeUnauthorized.WithDetail(challenge.Error()), w) handleError(ctx, errcode.ErrorCodeUnauthorized.WithDetail(challenge.Error()), w)
dcontext.GetResponseLogger(ctx).Info("get token authentication challenge") dcontext.GetResponseLogger(ctx).Info("get token authentication challenge")
View file
@ -1,80 +0,0 @@
package main
import (
"crypto/rand"
"crypto/rsa"
"encoding/base64"
"errors"
"testing"
"time"
"strings"
"github.com/docker/distribution/registry/auth"
"github.com/docker/libtrust"
)
func TestCreateJWTSuccessWithEmptyACL(t *testing.T) {
key, err := rsa.GenerateKey(rand.Reader, 1024)
if err != nil {
t.Fatal(err)
}
pk, err := libtrust.FromCryptoPrivateKey(key)
if err != nil {
t.Fatal(err)
}
tokenIssuer := TokenIssuer{
Expiration: time.Duration(100),
Issuer: "localhost",
SigningKey: pk,
}
grantedAccessList := make([]auth.Access, 0)
token, err := tokenIssuer.CreateJWT("test", "test", grantedAccessList)
if err != nil {
t.Fatal(err)
}
tokens := strings.Split(token, ".")
if len(token) == 0 {
t.Fatal("token not generated.")
}
json, err := decodeJWT(tokens[1])
if err != nil {
t.Fatal(err)
}
if !strings.Contains(json, "test") {
t.Fatal("Valid token was not generated.")
}
}
func decodeJWT(rawToken string) (string, error) {
data, err := joseBase64Decode(rawToken)
if err != nil {
return "", errors.New("Error in Decoding base64 String")
}
return data, nil
}
func joseBase64Decode(s string) (string, error) {
switch len(s) % 4 {
case 0:
case 2:
s += "=="
case 3:
s += "="
default:
{
return "", errors.New("Invalid base64 String")
}
}
data, err := base64.StdEncoding.DecodeString(s)
if err != nil {
return "", err //errors.New("Error in Decoding base64 String")
}
return string(data), nil
}
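The deleted test helper above pads JOSE (JWT) base64 segments back to a multiple of four before decoding, since JWT encoding strips the trailing `=` padding. The same technique in a standalone sketch (a simplified illustration of the removed helper, not registry code):

```go
package main

import (
	"encoding/base64"
	"fmt"
)

// padBase64 restores the '=' padding that JOSE encoding strips,
// so the segment decodes with the standard library.
func padBase64(s string) (string, error) {
	switch len(s) % 4 {
	case 0:
	case 2:
		s += "=="
	case 3:
		s += "="
	default:
		return "", fmt.Errorf("invalid base64 string")
	}
	return s, nil
}

func decodeSegment(s string) (string, error) {
	padded, err := padBase64(s)
	if err != nil {
		return "", err
	}
	data, err := base64.StdEncoding.DecodeString(padded)
	if err != nil {
		return "", err
	}
	return string(data), nil
}

func main() {
	// "eyJmb28iOiJiYXIifQ" is `{"foo":"bar"}` with its padding stripped.
	out, err := decodeSegment("eyJmb28iOiJiYXIifQ")
	if err != nil {
		panic(err)
	}
	fmt.Println(out)
}
```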
View file
@ -147,8 +147,7 @@ storage:
endpoint: optional endpoints endpoint: optional endpoints
internal: optional internal endpoint internal: optional internal endpoint
bucket: OSS bucket bucket: OSS bucket
encrypt: optional enable server-side encryption encrypt: optional data encryption setting
encryptionkeyid: optional KMS key id for encryption
secure: optional ssl setting secure: optional ssl setting
chunksize: optional size value chunksize: optional size value
rootdirectory: optional root directory rootdirectory: optional root directory
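The new `encryptionkeyid` option in the list above pairs with `encrypt` in an `oss` storage block. A hedged sketch with placeholder values (only the options shown in the list above; the driver's credential settings are omitted):

```yaml
storage:
  oss:
    bucket: my-registry-bucket
    encrypt: true
    # New in this release: select a specific KMS key for server-side
    # encryption instead of the default OSS-managed key.
    encryptionkeyid: my-kms-key-id
    secure: true
    rootdirectory: /registry
```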
@ -172,7 +171,6 @@ auth:
realm: silly-realm realm: silly-realm
service: silly-service service: silly-service
token: token:
autoredirect: true
realm: token-realm realm: token-realm
service: token-service service: token-service
issuer: registry-token-issuer issuer: registry-token-issuer
@ -198,7 +196,7 @@ middleware:
duration: 3000s duration: 3000s
ipfilteredby: awsregion ipfilteredby: awsregion
awsregion: us-east-1, use-east-2 awsregion: us-east-1, use-east-2
updatefrequency: 12h updatefrenquency: 12h
iprangesurl: https://ip-ranges.amazonaws.com/ip-ranges.json iprangesurl: https://ip-ranges.amazonaws.com/ip-ranges.json
storage: storage:
- name: redirect - name: redirect
@ -448,8 +446,7 @@ storage:
endpoint: optional endpoints endpoint: optional endpoints
internal: optional internal endpoint internal: optional internal endpoint
bucket: OSS bucket bucket: OSS bucket
encrypt: optional enable server-side encryption encrypt: optional data encryption setting
encryptionkeyid: optional KMS key id for encryption
secure: optional ssl setting secure: optional ssl setting
chunksize: optional size value chunksize: optional size value
rootdirectory: optional root directory rootdirectory: optional root directory
@ -626,7 +623,6 @@ security.
| `service` | yes | The service being authenticated. | | `service` | yes | The service being authenticated. |
| `issuer` | yes | The name of the token issuer. The issuer inserts this into the token so it must match the value configured for the issuer. | | `issuer` | yes | The name of the token issuer. The issuer inserts this into the token so it must match the value configured for the issuer. |
| `rootcertbundle` | yes | The absolute path to the root certificate bundle. This bundle contains the public part of the certificates used to sign authentication tokens. | | `rootcertbundle` | yes | The absolute path to the root certificate bundle. This bundle contains the public part of the certificates used to sign authentication tokens. |
| `autoredirect` | no | When set to `true`, `realm` will automatically be set using the Host header of the request as the domain and a path of `/auth/token/`|
For more information about Token based authentication configuration, see the For more information about Token based authentication configuration, see the
@ -685,7 +681,7 @@ middleware:
duration: 3000s duration: 3000s
ipfilteredby: awsregion ipfilteredby: awsregion
awsregion: us-east-1, use-east-2 awsregion: us-east-1, use-east-2
updatefrequency: 12h updatefrenquency: 12h
iprangesurl: https://ip-ranges.amazonaws.com/ip-ranges.json iprangesurl: https://ip-ranges.amazonaws.com/ip-ranges.json
``` ```
@ -705,31 +701,15 @@ interpretation of the options.
| `baseurl` | yes | The `SCHEME://HOST[/PATH]` at which Cloudfront is served. | | `baseurl` | yes | The `SCHEME://HOST[/PATH]` at which Cloudfront is served. |
| `privatekey` | yes | The private key for Cloudfront, provided by AWS. | | `privatekey` | yes | The private key for Cloudfront, provided by AWS. |
| `keypairid` | yes | The key pair ID provided by AWS. | | `keypairid` | yes | The key pair ID provided by AWS. |
| `duration` | no | An integer and unit for the duration of the Cloudfront session. Valid time units are `ns`, `us` (or `µs`), `ms`, `s`, `m`, or `h`. For example, `3000s` is valid, but `3000 s` is not. If you do not specify a `duration` or you specify an integer without a time unit, the duration defaults to `20m` (20 minutes). | | `duration` | no | An integer and unit for the duration of the Cloudfront session. Valid time units are `ns`, `us` (or `µs`), `ms`, `s`, `m`, or `h`. For example, `3000s` is valid, but `3000 s` is not. If you do not specify a `duration` or you specify an integer without a time unit, the duration defaults to `20m` (20 minutes).|
| `ipfilteredby` | no | A string with the following value `none`, `aws` or `awsregion`. | |`ipfilteredby`|no | A string with the following value `none|aws|awsregion`. |
| `awsregion` | no | A comma separated string of AWS regions, only available when `ipfilteredby` is `awsregion`. For example, `us-east-1, us-west-2` | |`awsregion`|no | A comma separated string of AWS regions, only available when `ipfilteredby` is `awsregion`. For example, `us-east-1, us-west-2`|
| `updatefrequency` | no | The frequency to update AWS IP regions, default: `12h` | |`updatefrenquency`|no | The frequency to update AWS IP regions, default: `12h`|
| `iprangesurl` | no | The URL contains the AWS IP ranges information, default: `https://ip-ranges.amazonaws.com/ip-ranges.json` | |`iprangesurl`|no | The URL contains the AWS IP ranges information, default: `https://ip-ranges.amazonaws.com/ip-ranges.json`|
Value of `ipfilteredby` can be: Then value of ipfilteredby:
| Value | Description | `none`: default, do not filter by IP
|-------------|------------------------------------| `aws`: IP from AWS goes to S3 directly
| `none` | default, do not filter by IP | `awsregion`: IP from certain AWS regions goes to S3 directly, use together with `awsregion`
| `aws` | IP from AWS goes to S3 directly |
| `awsregion` | IP from certain AWS regions goes to S3 directly, use together with `awsregion`. |
### `alicdn`
`alicdn` storage middleware allows the registry to serve layers via a content delivery network provided by Alibaba Cloud. Alicdn requires the OSS storage driver.
| Parameter | Required | Description |
|--------------|----------|-------------------------------------------------------------------------|
| `baseurl` | yes | The `SCHEME://HOST` at which Alicdn is served. |
| `authtype` | yes | The URL authentication type for Alicdn, which should be `a`, `b` or `c`. See the [Authentication configuration](https://www.alibabacloud.com/help/doc-detail/85117.htm).|
| `privatekey` | yes | The URL authentication key for Alicdn. |
| `duration` | no | An integer and unit for the duration of the Alicdn session. Valid time units are `ns`, `us` (or `µs`), `ms`, `s`, `m`, or `h`.|
### `redirect` ### `redirect`
@ -795,7 +775,6 @@ http:
clientcas: clientcas:
- /path/to/ca.pem - /path/to/ca.pem
- /path/to/another/ca.pem - /path/to/another/ca.pem
minimumtls: tls1.0
letsencrypt: letsencrypt:
cachefile: /path/to/cache-file cachefile: /path/to/cache-file
email: emailused@letsencrypt.com email: emailused@letsencrypt.com
@ -832,9 +811,8 @@ and proxy connections to the registry server.
| Parameter | Required | Description | | Parameter | Required | Description |
|-----------|----------|-------------------------------------------------------| |-----------|----------|-------------------------------------------------------|
| `certificate` | yes | Absolute path to the x509 certificate file. | | `certificate` | yes | Absolute path to the x509 certificate file. |
| `key` | yes | Absolute path to the x509 private key file. | | `key` | yes | Absolute path to the x509 private key file. |
| `clientcas` | no | An array of absolute paths to x509 CA files. | | `clientcas` | no | An array of absolute paths to x509 CA files. |
| `minimumtls` | no | Minimum TLS version allowed (tls1.0, tls1.1, tls1.2). Defaults to tls1.0 |
### `letsencrypt` ### `letsencrypt`
@ -848,9 +826,7 @@ TLS certificates provided by
> to the `docker run` command or using a similar setting in a cloud > to the `docker run` command or using a similar setting in a cloud
> configuration. You should also set the `hosts` option to the list of hostnames > configuration. You should also set the `hosts` option to the list of hostnames
> that are valid for this registry to avoid trying to get certificates for random > that are valid for this registry to avoid trying to get certificates for random
> hostnames due to malicious clients connecting with bogus SNI hostnames. Please > hostnames due to malicious clients connecting with bogus SNI hostnames.
> ensure that you have the `ca-certificates` package installed in order to verify
> letsencrypt certificates.
| Parameter | Required | Description | | Parameter | Required | Description |
|-----------|----------|-------------------------------------------------------| |-----------|----------|-------------------------------------------------------|
View file
@ -2,8 +2,6 @@
title: "HTTP API V2" title: "HTTP API V2"
description: "Specification for the Registry API." description: "Specification for the Registry API."
keywords: registry, on-prem, images, tags, repository, distribution, api, advanced keywords: registry, on-prem, images, tags, repository, distribution, api, advanced
redirect_from:
- /reference/api/registry_api/
--- ---
# Docker Registry HTTP API V2 # Docker Registry HTTP API V2
@ -1208,7 +1206,7 @@ The registry does not implement the V2 API.
401 Unauthorized 401 Unauthorized
WWW-Authenticate: <scheme> realm="<realm>", ..." WWW-Authenticate: <scheme> realm="<realm>", ..."
Content-Length: <length> Content-Length: <length>
Content-Type: application/json Content-Type: application/json; charset=utf-8
{ {
"errors:" [ "errors:" [
@ -1246,7 +1244,7 @@ The error codes that may be included in the response body are enumerated below:
``` ```
429 Too Many Requests 429 Too Many Requests
Content-Length: <length> Content-Length: <length>
Content-Type: application/json Content-Type: application/json; charset=utf-8
{ {
"errors:" [ "errors:" [
@ -1318,7 +1316,7 @@ The following parameters should be specified on the request:
``` ```
200 OK 200 OK
Content-Length: <length> Content-Length: <length>
Content-Type: application/json Content-Type: application/json; charset=utf-8
{ {
"name": <name>, "name": <name>,
@ -1346,7 +1344,7 @@ The following headers will be returned with the response:
401 Unauthorized 401 Unauthorized
WWW-Authenticate: <scheme> realm="<realm>", ..." WWW-Authenticate: <scheme> realm="<realm>", ..."
Content-Length: <length> Content-Length: <length>
Content-Type: application/json Content-Type: application/json; charset=utf-8
{ {
"errors:" [ "errors:" [
@ -1384,7 +1382,7 @@ The error codes that may be included in the response body are enumerated below:
``` ```
404 Not Found 404 Not Found
Content-Length: <length> Content-Length: <length>
Content-Type: application/json Content-Type: application/json; charset=utf-8
{ {
"errors:" [ "errors:" [
@ -1421,7 +1419,7 @@ The error codes that may be included in the response body are enumerated below:
``` ```
403 Forbidden 403 Forbidden
Content-Length: <length> Content-Length: <length>
Content-Type: application/json Content-Type: application/json; charset=utf-8
{ {
"errors:" [ "errors:" [
@ -1458,7 +1456,7 @@ The error codes that may be included in the response body are enumerated below:
``` ```
429 Too Many Requests 429 Too Many Requests
Content-Length: <length> Content-Length: <length>
Content-Type: application/json Content-Type: application/json; charset=utf-8
{ {
"errors:" [ "errors:" [
@ -1516,7 +1514,7 @@ The following parameters should be specified on the request:
200 OK 200 OK
Content-Length: <length> Content-Length: <length>
Link: <<url>?n=<last n value>&last=<last entry from response>>; rel="next" Link: <<url>?n=<last n value>&last=<last entry from response>>; rel="next"
Content-Type: application/json Content-Type: application/json; charset=utf-8
{ {
"name": <name>, "name": <name>,
@ -1545,7 +1543,7 @@ The following headers will be returned with the response:
401 Unauthorized 401 Unauthorized
WWW-Authenticate: <scheme> realm="<realm>", ..." WWW-Authenticate: <scheme> realm="<realm>", ..."
Content-Length: <length> Content-Length: <length>
Content-Type: application/json Content-Type: application/json; charset=utf-8
{ {
"errors:" [ "errors:" [
@ -1583,7 +1581,7 @@ The error codes that may be included in the response body are enumerated below:
``` ```
404 Not Found 404 Not Found
Content-Length: <length> Content-Length: <length>
Content-Type: application/json Content-Type: application/json; charset=utf-8
{ {
"errors:" [ "errors:" [
@ -1620,7 +1618,7 @@ The error codes that may be included in the response body are enumerated below:
``` ```
403 Forbidden 403 Forbidden
Content-Length: <length> Content-Length: <length>
Content-Type: application/json Content-Type: application/json; charset=utf-8
{ {
"errors:" [ "errors:" [
@ -1657,7 +1655,7 @@ The error codes that may be included in the response body are enumerated below:
``` ```
429 Too Many Requests 429 Too Many Requests
Content-Length: <length> Content-Length: <length>
Content-Type: application/json Content-Type: application/json; charset=utf-8
{ {
"errors:" [ "errors:" [
@ -1761,7 +1759,7 @@ The following headers will be returned with the response:
``` ```
400 Bad Request 400 Bad Request
Content-Type: application/json Content-Type: application/json; charset=utf-8
{ {
"errors:" [ "errors:" [
@ -1794,7 +1792,7 @@ The error codes that may be included in the response body are enumerated below:
401 Unauthorized 401 Unauthorized
WWW-Authenticate: <scheme> realm="<realm>", ..." WWW-Authenticate: <scheme> realm="<realm>", ..."
Content-Length: <length> Content-Length: <length>
Content-Type: application/json Content-Type: application/json; charset=utf-8
{ {
"errors:" [ "errors:" [
@ -1832,7 +1830,7 @@ The error codes that may be included in the response body are enumerated below:
``` ```
404 Not Found 404 Not Found
Content-Length: <length> Content-Length: <length>
-Content-Type: application/json
+Content-Type: application/json; charset=utf-8
 {
     "errors:" [
@@ -1869,7 +1867,7 @@ The error codes that may be included in the response body are enumerated below:
 ```
 403 Forbidden
 Content-Length: <length>
-Content-Type: application/json
+Content-Type: application/json; charset=utf-8
 {
     "errors:" [
@@ -1906,7 +1904,7 @@ The error codes that may be included in the response body are enumerated below:
 ```
 429 Too Many Requests
 Content-Length: <length>
-Content-Type: application/json
+Content-Type: application/json; charset=utf-8
 {
     "errors:" [
@@ -2007,7 +2005,7 @@ The following headers will be returned with the response:
 ```
 400 Bad Request
-Content-Type: application/json
+Content-Type: application/json; charset=utf-8
 {
     "errors:" [
@@ -2043,7 +2041,7 @@ The error codes that may be included in the response body are enumerated below:
 401 Unauthorized
 WWW-Authenticate: <scheme> realm="<realm>", ..."
 Content-Length: <length>
-Content-Type: application/json
+Content-Type: application/json; charset=utf-8
 {
     "errors:" [
@@ -2081,7 +2079,7 @@ The error codes that may be included in the response body are enumerated below:
 ```
 404 Not Found
 Content-Length: <length>
-Content-Type: application/json
+Content-Type: application/json; charset=utf-8
 {
     "errors:" [
@@ -2118,7 +2116,7 @@ The error codes that may be included in the response body are enumerated below:
 ```
 403 Forbidden
 Content-Length: <length>
-Content-Type: application/json
+Content-Type: application/json; charset=utf-8
 {
     "errors:" [
@@ -2155,7 +2153,7 @@ The error codes that may be included in the response body are enumerated below:
 ```
 429 Too Many Requests
 Content-Length: <length>
-Content-Type: application/json
+Content-Type: application/json; charset=utf-8
 {
     "errors:" [
@@ -2191,7 +2189,7 @@ The error codes that may be included in the response body are enumerated below:
 ```
 400 Bad Request
-Content-Type: application/json
+Content-Type: application/json; charset=utf-8
 {
     "errors:" [{
@@ -2279,7 +2277,7 @@ The following parameters should be specified on the request:
 ```
 400 Bad Request
-Content-Type: application/json
+Content-Type: application/json; charset=utf-8
 {
     "errors:" [
@@ -2312,7 +2310,7 @@ The error codes that may be included in the response body are enumerated below:
 401 Unauthorized
 WWW-Authenticate: <scheme> realm="<realm>", ..."
 Content-Length: <length>
-Content-Type: application/json
+Content-Type: application/json; charset=utf-8
 {
     "errors:" [
@@ -2350,7 +2348,7 @@ The error codes that may be included in the response body are enumerated below:
 ```
 404 Not Found
 Content-Length: <length>
-Content-Type: application/json
+Content-Type: application/json; charset=utf-8
 {
     "errors:" [
@@ -2387,7 +2385,7 @@ The error codes that may be included in the response body are enumerated below:
 ```
 403 Forbidden
 Content-Length: <length>
-Content-Type: application/json
+Content-Type: application/json; charset=utf-8
 {
     "errors:" [
@@ -2424,7 +2422,7 @@ The error codes that may be included in the response body are enumerated below:
 ```
 429 Too Many Requests
 Content-Length: <length>
-Content-Type: application/json
+Content-Type: application/json; charset=utf-8
 {
     "errors:" [
@@ -2460,7 +2458,7 @@ The error codes that may be included in the response body are enumerated below:
 ```
 404 Not Found
-Content-Type: application/json
+Content-Type: application/json; charset=utf-8
 {
     "errors:" [
@@ -2585,7 +2583,7 @@ The following headers will be returned with the response:
 ```
 400 Bad Request
-Content-Type: application/json
+Content-Type: application/json; charset=utf-8
 {
     "errors:" [
@@ -2616,7 +2614,7 @@ The error codes that may be included in the response body are enumerated below:
 ```
 404 Not Found
-Content-Type: application/json
+Content-Type: application/json; charset=utf-8
 {
     "errors:" [
@@ -2649,7 +2647,7 @@ The error codes that may be included in the response body are enumerated below:
 401 Unauthorized
 WWW-Authenticate: <scheme> realm="<realm>", ..."
 Content-Length: <length>
-Content-Type: application/json
+Content-Type: application/json; charset=utf-8
 {
     "errors:" [
@@ -2687,7 +2685,7 @@ The error codes that may be included in the response body are enumerated below:
 ```
 404 Not Found
 Content-Length: <length>
-Content-Type: application/json
+Content-Type: application/json; charset=utf-8
 {
     "errors:" [
@@ -2724,7 +2722,7 @@ The error codes that may be included in the response body are enumerated below:
 ```
 403 Forbidden
 Content-Length: <length>
-Content-Type: application/json
+Content-Type: application/json; charset=utf-8
 {
     "errors:" [
@@ -2761,7 +2759,7 @@ The error codes that may be included in the response body are enumerated below:
 ```
 429 Too Many Requests
 Content-Length: <length>
-Content-Type: application/json
+Content-Type: application/json; charset=utf-8
 {
     "errors:" [
@@ -2845,7 +2843,7 @@ The following headers will be returned with the response:
 ```
 400 Bad Request
-Content-Type: application/json
+Content-Type: application/json; charset=utf-8
 {
     "errors:" [
@@ -2876,7 +2874,7 @@ The error codes that may be included in the response body are enumerated below:
 ```
 404 Not Found
-Content-Type: application/json
+Content-Type: application/json; charset=utf-8
 {
     "errors:" [
@@ -2919,7 +2917,7 @@ The range specification cannot be satisfied for the requested content. This can
 401 Unauthorized
 WWW-Authenticate: <scheme> realm="<realm>", ..."
 Content-Length: <length>
-Content-Type: application/json
+Content-Type: application/json; charset=utf-8
 {
     "errors:" [
@@ -2957,7 +2955,7 @@ The error codes that may be included in the response body are enumerated below:
 ```
 404 Not Found
 Content-Length: <length>
-Content-Type: application/json
+Content-Type: application/json; charset=utf-8
 {
     "errors:" [
@@ -2994,7 +2992,7 @@ The error codes that may be included in the response body are enumerated below:
 ```
 403 Forbidden
 Content-Length: <length>
-Content-Type: application/json
+Content-Type: application/json; charset=utf-8
 {
     "errors:" [
@@ -3031,7 +3029,7 @@ The error codes that may be included in the response body are enumerated below:
 ```
 429 Too Many Requests
 Content-Length: <length>
-Content-Type: application/json
+Content-Type: application/json; charset=utf-8
 {
     "errors:" [
@@ -3134,7 +3132,7 @@ The error codes that may be included in the response body are enumerated below:
 ```
 404 Not Found
-Content-Type: application/json
+Content-Type: application/json; charset=utf-8
 {
     "errors:" [
@@ -3165,7 +3163,7 @@ The error codes that may be included in the response body are enumerated below:
 ```
 405 Method Not Allowed
-Content-Type: application/json
+Content-Type: application/json; charset=utf-8
 {
     "errors:" [
@@ -3197,7 +3195,7 @@ The error codes that may be included in the response body are enumerated below:
 401 Unauthorized
 WWW-Authenticate: <scheme> realm="<realm>", ..."
 Content-Length: <length>
-Content-Type: application/json
+Content-Type: application/json; charset=utf-8
 {
     "errors:" [
@@ -3235,7 +3233,7 @@ The error codes that may be included in the response body are enumerated below:
 ```
 404 Not Found
 Content-Length: <length>
-Content-Type: application/json
+Content-Type: application/json; charset=utf-8
 {
     "errors:" [
@@ -3272,7 +3270,7 @@ The error codes that may be included in the response body are enumerated below:
 ```
 403 Forbidden
 Content-Length: <length>
-Content-Type: application/json
+Content-Type: application/json; charset=utf-8
 {
     "errors:" [
@@ -3309,7 +3307,7 @@ The error codes that may be included in the response body are enumerated below:
 ```
 429 Too Many Requests
 Content-Length: <length>
-Content-Type: application/json
+Content-Type: application/json; charset=utf-8
 {
     "errors:" [
@@ -3447,7 +3445,7 @@ The error codes that may be included in the response body are enumerated below:
 401 Unauthorized
 WWW-Authenticate: <scheme> realm="<realm>", ..."
 Content-Length: <length>
-Content-Type: application/json
+Content-Type: application/json; charset=utf-8
 {
     "errors:" [
@@ -3485,7 +3483,7 @@ The error codes that may be included in the response body are enumerated below:
 ```
 404 Not Found
 Content-Length: <length>
-Content-Type: application/json
+Content-Type: application/json; charset=utf-8
 {
     "errors:" [
@@ -3522,7 +3520,7 @@ The error codes that may be included in the response body are enumerated below:
 ```
 403 Forbidden
 Content-Length: <length>
-Content-Type: application/json
+Content-Type: application/json; charset=utf-8
 {
     "errors:" [
@@ -3559,7 +3557,7 @@ The error codes that may be included in the response body are enumerated below:
 ```
 429 Too Many Requests
 Content-Length: <length>
-Content-Type: application/json
+Content-Type: application/json; charset=utf-8
 {
     "errors:" [
@@ -3664,7 +3662,7 @@ The error codes that may be included in the response body are enumerated below:
 401 Unauthorized
 WWW-Authenticate: <scheme> realm="<realm>", ..."
 Content-Length: <length>
-Content-Type: application/json
+Content-Type: application/json; charset=utf-8
 {
     "errors:" [
@@ -3702,7 +3700,7 @@ The error codes that may be included in the response body are enumerated below:
 ```
 404 Not Found
 Content-Length: <length>
-Content-Type: application/json
+Content-Type: application/json; charset=utf-8
 {
     "errors:" [
@@ -3739,7 +3737,7 @@ The error codes that may be included in the response body are enumerated below:
 ```
 403 Forbidden
 Content-Length: <length>
-Content-Type: application/json
+Content-Type: application/json; charset=utf-8
 {
     "errors:" [
@@ -3776,7 +3774,7 @@ The error codes that may be included in the response body are enumerated below:
 ```
 429 Too Many Requests
 Content-Length: <length>
-Content-Type: application/json
+Content-Type: application/json; charset=utf-8
 {
     "errors:" [
@@ -3899,7 +3897,7 @@ The error codes that may be included in the response body are enumerated below:
 401 Unauthorized
 WWW-Authenticate: <scheme> realm="<realm>", ..."
 Content-Length: <length>
-Content-Type: application/json
+Content-Type: application/json; charset=utf-8
 {
     "errors:" [
@@ -3937,7 +3935,7 @@ The error codes that may be included in the response body are enumerated below:
 ```
 404 Not Found
 Content-Length: <length>
-Content-Type: application/json
+Content-Type: application/json; charset=utf-8
 {
     "errors:" [
@@ -3974,7 +3972,7 @@ The error codes that may be included in the response body are enumerated below:
 ```
 403 Forbidden
 Content-Length: <length>
-Content-Type: application/json
+Content-Type: application/json; charset=utf-8
 {
     "errors:" [
@@ -4011,7 +4009,7 @@ The error codes that may be included in the response body are enumerated below:
 ```
 429 Too Many Requests
 Content-Length: <length>
-Content-Type: application/json
+Content-Type: application/json; charset=utf-8
 {
     "errors:" [
@@ -4104,7 +4102,7 @@ The following headers will be returned with the response:
 ```
 400 Bad Request
-Content-Type: application/json
+Content-Type: application/json; charset=utf-8
 {
     "errors:" [
@@ -4136,7 +4134,7 @@ The error codes that may be included in the response body are enumerated below:
 ```
 404 Not Found
-Content-Type: application/json
+Content-Type: application/json; charset=utf-8
 {
     "errors:" [
@@ -4168,7 +4166,7 @@ The error codes that may be included in the response body are enumerated below:
 401 Unauthorized
 WWW-Authenticate: <scheme> realm="<realm>", ..."
 Content-Length: <length>
-Content-Type: application/json
+Content-Type: application/json; charset=utf-8
 {
     "errors:" [
@@ -4206,7 +4204,7 @@ The error codes that may be included in the response body are enumerated below:
 ```
 404 Not Found
 Content-Length: <length>
-Content-Type: application/json
+Content-Type: application/json; charset=utf-8
 {
     "errors:" [
@@ -4243,7 +4241,7 @@ The error codes that may be included in the response body are enumerated below:
 ```
 403 Forbidden
 Content-Length: <length>
-Content-Type: application/json
+Content-Type: application/json; charset=utf-8
 {
     "errors:" [
@@ -4280,7 +4278,7 @@ The error codes that may be included in the response body are enumerated below:
 ```
 429 Too Many Requests
 Content-Length: <length>
-Content-Type: application/json
+Content-Type: application/json; charset=utf-8
 {
     "errors:" [
@@ -4372,7 +4370,7 @@ The following headers will be returned with the response:
 ```
 400 Bad Request
-Content-Type: application/json
+Content-Type: application/json; charset=utf-8
 {
     "errors:" [
@@ -4404,7 +4402,7 @@ The error codes that may be included in the response body are enumerated below:
 ```
 404 Not Found
-Content-Type: application/json
+Content-Type: application/json; charset=utf-8
 {
     "errors:" [
@@ -4436,7 +4434,7 @@ The error codes that may be included in the response body are enumerated below:
 401 Unauthorized
 WWW-Authenticate: <scheme> realm="<realm>", ..."
 Content-Length: <length>
-Content-Type: application/json
+Content-Type: application/json; charset=utf-8
 {
     "errors:" [
@@ -4474,7 +4472,7 @@ The error codes that may be included in the response body are enumerated below:
 ```
 404 Not Found
 Content-Length: <length>
-Content-Type: application/json
+Content-Type: application/json; charset=utf-8
 {
     "errors:" [
@@ -4511,7 +4509,7 @@ The error codes that may be included in the response body are enumerated below:
 ```
 403 Forbidden
 Content-Length: <length>
-Content-Type: application/json
+Content-Type: application/json; charset=utf-8
 {
     "errors:" [
@@ -4548,7 +4546,7 @@ The error codes that may be included in the response body are enumerated below:
 ```
 429 Too Many Requests
 Content-Length: <length>
-Content-Type: application/json
+Content-Type: application/json; charset=utf-8
 {
     "errors:" [
@@ -4638,7 +4636,7 @@ The following headers will be returned with the response:
 ```
 400 Bad Request
-Content-Type: application/json
+Content-Type: application/json; charset=utf-8
 {
     "errors:" [
@@ -4670,7 +4668,7 @@ The error codes that may be included in the response body are enumerated below:
 ```
 404 Not Found
-Content-Type: application/json
+Content-Type: application/json; charset=utf-8
 {
     "errors:" [
@@ -4712,7 +4710,7 @@ The `Content-Range` specification cannot be accepted, either because it does not
 401 Unauthorized
 WWW-Authenticate: <scheme> realm="<realm>", ..."
 Content-Length: <length>
-Content-Type: application/json
+Content-Type: application/json; charset=utf-8
 {
     "errors:" [
@@ -4750,7 +4748,7 @@ The error codes that may be included in the response body are enumerated below:
 ```
 404 Not Found
 Content-Length: <length>
-Content-Type: application/json
+Content-Type: application/json; charset=utf-8
 {
     "errors:" [
@@ -4787,7 +4785,7 @@ The error codes that may be included in the response body are enumerated below:
 ```
 403 Forbidden
 Content-Length: <length>
-Content-Type: application/json
+Content-Type: application/json; charset=utf-8
 {
     "errors:" [
@@ -4824,7 +4822,7 @@ The error codes that may be included in the response body are enumerated below:
 ```
 429 Too Many Requests
 Content-Length: <length>
-Content-Type: application/json
+Content-Type: application/json; charset=utf-8
 {
     "errors:" [
@@ -4918,7 +4916,7 @@ The following headers will be returned with the response:
 ```
 400 Bad Request
-Content-Type: application/json
+Content-Type: application/json; charset=utf-8
 {
     "errors:" [
@@ -4951,7 +4949,7 @@ The error codes that may be included in the response body are enumerated below:
 ```
 404 Not Found
-Content-Type: application/json
+Content-Type: application/json; charset=utf-8
 {
     "errors:" [
@@ -4983,7 +4981,7 @@ The error codes that may be included in the response body are enumerated below:
 401 Unauthorized
 WWW-Authenticate: <scheme> realm="<realm>", ..."
 Content-Length: <length>
-Content-Type: application/json
+Content-Type: application/json; charset=utf-8
 {
     "errors:" [
@@ -5021,7 +5019,7 @@ The error codes that may be included in the response body are enumerated below:
 ```
 404 Not Found
 Content-Length: <length>
-Content-Type: application/json
+Content-Type: application/json; charset=utf-8
 {
     "errors:" [
@@ -5058,7 +5056,7 @@ The error codes that may be included in the response body are enumerated below:
 ```
 403 Forbidden
 Content-Length: <length>
-Content-Type: application/json
+Content-Type: application/json; charset=utf-8
 {
     "errors:" [
@@ -5095,7 +5093,7 @@ The error codes that may be included in the response body are enumerated below:
 ```
 429 Too Many Requests
 Content-Length: <length>
-Content-Type: application/json
+Content-Type: application/json; charset=utf-8
 {
     "errors:" [
@@ -5179,7 +5177,7 @@ The following headers will be returned with the response:
 ```
 400 Bad Request
-Content-Type: application/json
+Content-Type: application/json; charset=utf-8
 {
     "errors:" [
@@ -5210,7 +5208,7 @@ The error codes that may be included in the response body are enumerated below:
 ```
 404 Not Found
-Content-Type: application/json
+Content-Type: application/json; charset=utf-8
 {
     "errors:" [
@@ -5242,7 +5240,7 @@ The error codes that may be included in the response body are enumerated below:
 401 Unauthorized
 WWW-Authenticate: <scheme> realm="<realm>", ..."
 Content-Length: <length>
-Content-Type: application/json
+Content-Type: application/json; charset=utf-8
 {
     "errors:" [
@@ -5280,7 +5278,7 @@ The error codes that may be included in the response body are enumerated below:
 ```
 404 Not Found
 Content-Length: <length>
-Content-Type: application/json
+Content-Type: application/json; charset=utf-8
 {
     "errors:" [
@@ -5317,7 +5315,7 @@ The error codes that may be included in the response body are enumerated below:
 ```
 403 Forbidden
 Content-Length: <length>
-Content-Type: application/json
+Content-Type: application/json; charset=utf-8
 {
     "errors:" [
@@ -5354,7 +5352,7 @@ The error codes that may be included in the response body are enumerated below:
 ```
 429 Too Many Requests
 Content-Length: <length>
-Content-Type: application/json
+Content-Type: application/json; charset=utf-8
 {
     "errors:" [
@@ -5416,7 +5414,7 @@ Request an unabridged list of repositories available. The implementation may im
 ```
 200 OK
 Content-Length: <length>
-Content-Type: application/json
+Content-Type: application/json; charset=utf-8
 {
     "repositories": [
@@ -5461,7 +5459,7 @@ The following parameters should be specified on the request:
 200 OK
 Content-Length: <length>
 Link: <<url>?n=<last n value>&last=<last entry from response>>; rel="next"
-Content-Type: application/json
+Content-Type: application/json; charset=utf-8
 {
     "repositories": [


@@ -2,8 +2,6 @@
 title: "HTTP API V2"
 description: "Specification for the Registry API."
 keywords: registry, on-prem, images, tags, repository, distribution, api, advanced
-redirect_from:
-- /reference/api/registry_api/
 ---
 # Docker Registry HTTP API V2


@@ -60,7 +60,7 @@ return this response:
 ```
 HTTP/1.1 401 Unauthorized
-Content-Type: application/json
+Content-Type: application/json; charset=utf-8
 Docker-Distribution-Api-Version: registry/2.0
 Www-Authenticate: Bearer realm="https://auth.docker.io/token",service="registry.docker.io",scope="repository:samalba/my-app:pull,push"
 Date: Thu, 10 Sep 2015 19:32:31 GMT
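The 401 challenge in the hunk above tells the client where to request a bearer token. A minimal sketch of splitting such a `Www-Authenticate` header into its `realm`, `service`, and `scope` fields; the helper name is illustrative, and a production client should use a tested header parser rather than this regex:

```python
import re

def parse_bearer_challenge(header):
    """Split a 'Bearer key="value",key="value"' challenge into a dict.

    Assumes the quoted key="value" shape shown in the spec's example;
    returns None for non-Bearer schemes.
    """
    scheme, _, params = header.partition(" ")
    if scheme != "Bearer":
        return None
    return dict(re.findall(r'(\w+)="([^"]*)"', params))

challenge = ('Bearer realm="https://auth.docker.io/token",'
             'service="registry.docker.io",'
             'scope="repository:samalba/my-app:pull,push"')
fields = parse_bearer_challenge(challenge)
print(fields["realm"], fields["service"], fields["scope"])
```

The client would then GET `<realm>?service=<service>&scope=<scope>` and retry the original request with `Authorization: Bearer <token>`.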


@@ -1,41 +0,0 @@
---
title: Update deprecated schema image manifest version 2, v1 images
description: Update deprecated schema v1 iamges
keywords: registry, on-prem, images, tags, repository, distribution, api, advanced, manifest
---
## Image manifest version 2, schema 1
With the release of image manifest version 2, schema 2, image manifest version
2, schema 1 has been deprecated. This could lead to compatibility and
vulnerability issues in images that haven't been updated to image manifest
version 2, schema 2.
This page contains information on how to update from image manifest version 2,
schema 1. However, these instructions will not ensure your new image will run
successfully. There may be several other issues to troubleshoot that are
associated with the deprecated image manifest that will block your image from
running succesfully. A list of possible methods to help update your image is
also included below.
### Update to image manifest version 2, schema 2
One way to upgrade an image from image manifest version 2, schema 1 to
schema 2 is to `docker pull` the image and then `docker push` the image with a
current version of Docker. Doing so will automatically convert the image to use
the latest image manifest specification.
Converting an image to image manifest version 2, schema 2 converts the
manifest format, but does not update the contents within the image. Images
using manifest version 2, schema 1 may contain unpatched vulnerabilities. We
recommend looking for an alternative image or rebuilding it.
### Update FROM statement
You can rebuild the image by updating the `FROM` statement in your
`Dockerfile`. If your image manifest is out-of-date, there is a chance the
image pulled from your `FROM` statement in your `Dockerfile` is also
out-of-date. See the [Dockerfile reference](https://docs.docker.com/engine/reference/builder/#from)
and the [Dockerfile best practices guide](https://docs.docker.com/develop/develop-images/dockerfile_best-practices/)
for more information on how to update the `FROM` statement in your
`Dockerfile`.
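The pull-and-push conversion described in the removed page can be sketched as the command sequence below; the image reference `registry.example.com/myorg/myimage` is a placeholder, and actually running the commands requires a current Docker engine and push access to the registry:

```python
def conversion_commands(image, tag="latest"):
    """Return the docker CLI invocations that re-push an image so a
    current engine re-uploads it using manifest version 2, schema 2."""
    ref = f"{image}:{tag}"
    return [
        ["docker", "pull", ref],  # fetch the schema 1 image
        ["docker", "push", ref],  # re-push; a modern engine converts the manifest
    ]

for cmd in conversion_commands("registry.example.com/myorg/myimage"):
    print(" ".join(cmd))
```

As the removed page notes, this converts only the manifest format; the layers themselves are unchanged, so rebuilding from an updated `FROM` line is still preferable for old images.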

go.mod (47 lines removed)
@@ -1,47 +0,0 @@
module github.com/docker/distribution
go 1.12
require (
github.com/Azure/azure-sdk-for-go v16.2.1+incompatible
github.com/Azure/go-autorest v10.8.1+incompatible // indirect
github.com/Shopify/logrus-bugsnag v0.0.0-20171204204709-577dee27f20d
github.com/aws/aws-sdk-go v1.34.9
github.com/bitly/go-simplejson v0.5.0 // indirect
github.com/bmizerany/assert v0.0.0-20160611221934-b7ed37b82869 // indirect
github.com/bshuster-repo/logrus-logstash-hook v0.4.1
github.com/bugsnag/bugsnag-go v0.0.0-20141110184014-b1d153021fcd
github.com/bugsnag/osext v0.0.0-20130617224835-0dd3f918b21b // indirect
github.com/bugsnag/panicwrap v0.0.0-20151223152923-e2c28503fcd0 // indirect
github.com/denverdino/aliyungo v0.0.0-20190125010748-a747050bb1ba
github.com/dgrijalva/jwt-go v0.0.0-20170104182250-a601269ab70c // indirect
github.com/dnaeon/go-vcr v1.0.1 // indirect
github.com/docker/go-events v0.0.0-20190806004212-e31b211e4f1c
github.com/docker/go-metrics v0.0.1
github.com/docker/libtrust v0.0.0-20150114040149-fa567046d9b1
github.com/garyburd/redigo v0.0.0-20150301180006-535138d7bcd7
github.com/gorilla/handlers v0.0.0-20150720190736-60c7bfde3e33
github.com/gorilla/mux v1.7.2
github.com/inconshreveable/mousetrap v1.0.0 // indirect
github.com/kr/pretty v0.1.0 // indirect
github.com/marstr/guid v1.1.0 // indirect
github.com/mitchellh/mapstructure v1.1.2
github.com/mitchellh/osext v0.0.0-20151018003038-5e2d6d41470f // indirect
github.com/ncw/swift v1.0.47
github.com/opencontainers/go-digest v1.0.0
github.com/opencontainers/image-spec v1.0.1
github.com/satori/go.uuid v1.2.0 // indirect
github.com/sirupsen/logrus v1.6.0
github.com/spf13/cobra v0.0.3
github.com/spf13/pflag v1.0.3 // indirect
github.com/yvasiyarov/go-metrics v0.0.0-20140926110328-57bccd1ccd43 // indirect
github.com/yvasiyarov/gorelic v0.0.0-20141212073537-a9bba5b9ab50
github.com/yvasiyarov/newrelic_platform_go v0.0.0-20140908184405-b21fdbd4370f // indirect
golang.org/x/crypto v0.0.0-20200128174031-69ecbb4d6d5d
golang.org/x/oauth2 v0.0.0-20190604053449-0f29369cfe45
google.golang.org/api v0.0.0-20160322025152-9bf6e6e569ff
google.golang.org/cloud v0.0.0-20151119220103-975617b05ea8
google.golang.org/grpc v0.0.0-20160317175043-d3ddb4469d5a // indirect
gopkg.in/check.v1 v1.0.0-20141024133853-64131543e789
gopkg.in/yaml.v2 v2.2.2
)

go.sum (176 lines removed)
@@ -1,176 +0,0 @@
cloud.google.com/go v0.34.0 h1:eOI3/cP2VTU6uZLDYAoic+eyzzB9YyGmJ7eIjl8rOPg=
cloud.google.com/go v0.34.0/go.mod h1:aQUYkXzVsufM+DwF1aE+0xfcU+56JwCaLick0ClmMTw=
github.com/Azure/azure-sdk-for-go v16.2.1+incompatible h1:KnPIugL51v3N3WwvaSmZbxukD1WuWXOiE9fRdu32f2I=
github.com/Azure/azure-sdk-for-go v16.2.1+incompatible/go.mod h1:9XXNKU+eRnpl9moKnB4QOLf1HestfXbmab5FXxiDBjc=
github.com/Azure/go-autorest v10.8.1+incompatible h1:u0jVQf+a6k6x8A+sT60l6EY9XZu+kHdnZVPAYqpVRo0=
github.com/Azure/go-autorest v10.8.1+incompatible/go.mod h1:r+4oMnoxhatjLLJ6zxSWATqVooLgysK6ZNox3g/xq24=
github.com/Shopify/logrus-bugsnag v0.0.0-20171204204709-577dee27f20d h1:UrqY+r/OJnIp5u0s1SbQ8dVfLCZJsnvazdBP5hS4iRs=
github.com/Shopify/logrus-bugsnag v0.0.0-20171204204709-577dee27f20d/go.mod h1:HI8ITrYtUY+O+ZhtlqUnD8+KwNPOyugEhfP9fdUIaEQ=
github.com/alecthomas/template v0.0.0-20160405071501-a0175ee3bccc/go.mod h1:LOuyumcjzFXgccqObfd/Ljyb9UuFJ6TxHnclSeseNhc=
github.com/alecthomas/units v0.0.0-20151022065526-2efee857e7cf/go.mod h1:ybxpYRFXyAe+OPACYpWeL0wqObRcbAqCMya13uyzqw0=
github.com/aws/aws-sdk-go v1.34.9 h1:cUGBW9CVdi0mS7K1hDzxIqTpfeWhpoQiguq81M1tjK0=
github.com/aws/aws-sdk-go v1.34.9/go.mod h1:5zCpMtNQVjRREroY7sYe8lOMRSxkhG6MZveU8YkpAk0=
github.com/beorn7/perks v0.0.0-20180321164747-3a771d992973/go.mod h1:Dwedo/Wpr24TaqPxmxbtue+5NUziq4I4S80YR8gNf3Q=
github.com/beorn7/perks v1.0.0/go.mod h1:KWe93zE9D1o94FZ5RNwFwVgaQK1VOXiVxmqh+CedLV8=
github.com/beorn7/perks v1.0.1 h1:VlbKKnNfV8bJzeqoa4cOKqO6bYr3WgKZxO8Z16+hsOM=
github.com/beorn7/perks v1.0.1/go.mod h1:G2ZrVWU2WbWT9wwq4/hrbKbnv/1ERSJQ0ibhJ6rlkpw=
github.com/bitly/go-simplejson v0.5.0 h1:6IH+V8/tVMab511d5bn4M7EwGXZf9Hj6i2xSwkNEM+Y=
github.com/bitly/go-simplejson v0.5.0/go.mod h1:cXHtHw4XUPsvGaxgjIAn8PhEWG9NfngEKAMDJEczWVA=
github.com/bmizerany/assert v0.0.0-20160611221934-b7ed37b82869 h1:DDGfHa7BWjL4YnC6+E63dPcxHo2sUxDIu8g3QgEJdRY=
github.com/bmizerany/assert v0.0.0-20160611221934-b7ed37b82869/go.mod h1:Ekp36dRnpXw/yCqJaO+ZrUyxD+3VXMFFr56k5XYrpB4=
github.com/bshuster-repo/logrus-logstash-hook v0.4.1 h1:pgAtgj+A31JBVtEHu2uHuEx0n+2ukqUJnS2vVe5pQNA=
github.com/bshuster-repo/logrus-logstash-hook v0.4.1/go.mod h1:zsTqEiSzDgAa/8GZR7E1qaXrhYNDKBYy5/dWPTIflbk=
github.com/bugsnag/bugsnag-go v0.0.0-20141110184014-b1d153021fcd h1:rFt+Y/IK1aEZkEHchZRSq9OQbsSzIT/OrI8YFFmRIng=
github.com/bugsnag/bugsnag-go v0.0.0-20141110184014-b1d153021fcd/go.mod h1:2oa8nejYd4cQ/b0hMIopN0lCRxU0bueqREvZLWFrtK8=
github.com/bugsnag/osext v0.0.0-20130617224835-0dd3f918b21b h1:otBG+dV+YK+Soembjv71DPz3uX/V/6MMlSyD9JBQ6kQ=
github.com/bugsnag/osext v0.0.0-20130617224835-0dd3f918b21b/go.mod h1:obH5gd0BsqsP2LwDJ9aOkm/6J86V6lyAXCoQWGw3K50=
github.com/bugsnag/panicwrap v0.0.0-20151223152923-e2c28503fcd0 h1:nvj0OLI3YqYXer/kZD8Ri1aaunCxIEsOst1BVJswV0o=
github.com/bugsnag/panicwrap v0.0.0-20151223152923-e2c28503fcd0/go.mod h1:D/8v3kj0zr8ZAKg1AQ6crr+5VwKN5eIywRkfhyM/+dE=
github.com/davecgh/go-spew v1.1.0/go.mod h1:J7Y8YcW2NihsgmVo/mv3lAwl/skON4iLHjSsI+c5H38=
github.com/davecgh/go-spew v1.1.1 h1:vj9j/u1bqnvCEfJOwUhtlOARqs3+rkHYY13jYWTU97c=
github.com/davecgh/go-spew v1.1.1/go.mod h1:J7Y8YcW2NihsgmVo/mv3lAwl/skON4iLHjSsI+c5H38=
github.com/denverdino/aliyungo v0.0.0-20190125010748-a747050bb1ba h1:p6poVbjHDkKa+wtC8frBMwQtT3BmqGYBjzMwJ63tuR4=
github.com/denverdino/aliyungo v0.0.0-20190125010748-a747050bb1ba/go.mod h1:dV8lFg6daOBZbT6/BDGIz6Y3WFGn8juu6G+CQ6LHtl0=
github.com/dgrijalva/jwt-go v0.0.0-20170104182250-a601269ab70c h1:KJAnOBuY9cTKVqB5cfbynpvFgeHRTREkRk8C977oFu4=
github.com/dgrijalva/jwt-go v0.0.0-20170104182250-a601269ab70c/go.mod h1:E3ru+11k8xSBh+hMPgOLZmtrrCbhqsmaPHjLKYnJCaQ=
github.com/dnaeon/go-vcr v1.0.1 h1:r8L/HqC0Hje5AXMu1ooW8oyQyOFv4GxqpL0nRP7SLLY=
github.com/dnaeon/go-vcr v1.0.1/go.mod h1:aBB1+wY4s93YsC3HHjMBMrwTj2R9FHDzUr9KyGc8n1E=
github.com/docker/go-events v0.0.0-20190806004212-e31b211e4f1c h1:+pKlWGMw7gf6bQ+oDZB4KHQFypsfjYlq/C4rfL7D3g8=
github.com/docker/go-events v0.0.0-20190806004212-e31b211e4f1c/go.mod h1:Uw6UezgYA44ePAFQYUehOuCzmy5zmg/+nl2ZfMWGkpA=
github.com/docker/go-metrics v0.0.1 h1:AgB/0SvBxihN0X8OR4SjsblXkbMvalQ8cjmtKQ2rQV8=
github.com/docker/go-metrics v0.0.1/go.mod h1:cG1hvH2utMXtqgqqYE9plW6lDxS3/5ayHzueweSI3Vw=
github.com/docker/libtrust v0.0.0-20150114040149-fa567046d9b1 h1:ZClxb8laGDf5arXfYcAtECDFgAgHklGI8CxgjHnXKJ4=
github.com/docker/libtrust v0.0.0-20150114040149-fa567046d9b1/go.mod h1:cyGadeNEkKy96OOhEzfZl+yxihPEzKnqJwvfuSUqbZE=
github.com/garyburd/redigo v0.0.0-20150301180006-535138d7bcd7 h1:LofdAjjjqCSXMwLGgOgnE+rdPuvX9DxCqaHwKy7i/ko=
github.com/garyburd/redigo v0.0.0-20150301180006-535138d7bcd7/go.mod h1:NR3MbYisc3/PwhQ00EMzDiPmrwpPxAn5GI05/YaO1SY=
github.com/go-kit/kit v0.8.0/go.mod h1:xBxKIO96dXMWWy0MnWVtmwkA9/13aqxPnvrjFYMA2as=
github.com/go-logfmt/logfmt v0.3.0/go.mod h1:Qt1PoO58o5twSAckw1HlFXLmHsOX5/0LbT9GBnD5lWE=
github.com/go-logfmt/logfmt v0.4.0/go.mod h1:3RMwSq7FuexP4Kalkev3ejPJsZTpXXBr9+V4qmtdjCk=
github.com/go-sql-driver/mysql v1.5.0/go.mod h1:DCzpHaOWr8IXmIStZouvnhqoel9Qv2LBy8hT2VhHyBg=
github.com/go-stack/stack v1.8.0/go.mod h1:v0f6uXyyMGvRgIKkXu+yp6POWl0qKG85gN/melR3HDY=
github.com/gogo/protobuf v1.1.1/go.mod h1:r8qH/GZQm5c6nD/R0oafs1akxWv10x8SbQlK7atdtwQ=
github.com/golang/protobuf v1.2.0/go.mod h1:6lQm79b+lXiMfvg/cZm0SGofjICqVBUtrP5yJMmIC1U=
github.com/golang/protobuf v1.3.1/go.mod h1:6lQm79b+lXiMfvg/cZm0SGofjICqVBUtrP5yJMmIC1U=
github.com/golang/protobuf v1.3.2 h1:6nsPYzhq5kReh6QImI3k5qWzO4PEbvbIW2cwSfR/6xs=
github.com/golang/protobuf v1.3.2/go.mod h1:6lQm79b+lXiMfvg/cZm0SGofjICqVBUtrP5yJMmIC1U=
github.com/google/go-cmp v0.3.0/go.mod h1:8QqcDgzrUqlUb/G2PQTWiueGozuR1884gddMywk6iLU=
github.com/google/gofuzz v1.0.0/go.mod h1:dBl0BpW6vV/+mYPU4Po3pmUjxk6FQPldtuIdl/M65Eg=
github.com/gorilla/handlers v0.0.0-20150720190736-60c7bfde3e33 h1:893HsJqtxp9z1SF76gg6hY70hRY1wVlTSnC/h1yUDCo=
github.com/gorilla/handlers v0.0.0-20150720190736-60c7bfde3e33/go.mod h1:Qkdc/uu4tH4g6mTK6auzZ766c4CA0Ng8+o/OAirnOIQ=
github.com/gorilla/mux v1.7.2 h1:zoNxOV7WjqXptQOVngLmcSQgXmgk4NMz1HibBchjl/I=
github.com/gorilla/mux v1.7.2/go.mod h1:1lud6UwP+6orDFRuTfBEV8e9/aOM/c4fVVCaMa2zaAs=
github.com/inconshreveable/mousetrap v1.0.0 h1:Z8tu5sraLXCXIcARxBp/8cbvlwVa7Z1NHg9XEKhtSvM=
github.com/inconshreveable/mousetrap v1.0.0/go.mod h1:PxqpIevigyE2G7u3NXJIT2ANytuPF1OarO4DADm73n8=
github.com/jmespath/go-jmespath v0.3.0 h1:OS12ieG61fsCg5+qLJ+SsW9NicxNkg3b25OyT2yCeUc=
github.com/jmespath/go-jmespath v0.3.0/go.mod h1:9QtRXoHjLGCJ5IBSaohpXITPlowMeeYCZ7fLUTSywik=
github.com/json-iterator/go v1.1.6/go.mod h1:+SdeFBvtyEkXs7REEP0seUULqWtbJapLOCVDaaPEHmU=
github.com/json-iterator/go v1.1.7/go.mod h1:KdQUCv79m/52Kvf8AW2vK1V8akMuk1QjK/uOdHXbAo4=
github.com/julienschmidt/httprouter v1.2.0/go.mod h1:SYymIcj16QtmaHHD7aYtjjsJG7VTCxuUUipMqKk8s4w=
github.com/konsorten/go-windows-terminal-sequences v1.0.1 h1:mweAR1A6xJ3oS2pRaGiHgQ4OO8tzTaLawm8vnODuwDk=
github.com/konsorten/go-windows-terminal-sequences v1.0.1/go.mod h1:T0+1ngSBFLxvqU3pZ+m/2kptfBszLMUkC4ZK/EgS/cQ=
github.com/konsorten/go-windows-terminal-sequences v1.0.3 h1:CE8S1cTafDpPvMhIxNJKvHsGVBgn1xWYf1NbHQhywc8=
github.com/konsorten/go-windows-terminal-sequences v1.0.3/go.mod h1:T0+1ngSBFLxvqU3pZ+m/2kptfBszLMUkC4ZK/EgS/cQ=
github.com/kr/logfmt v0.0.0-20140226030751-b84e30acd515/go.mod h1:+0opPa2QZZtGFBFZlji/RkVcI2GknAs/DXo4wKdlNEc=
github.com/kr/pretty v0.1.0 h1:L/CwN0zerZDmRFUapSPitk6f+Q3+0za1rQkzVuMiMFI=
github.com/kr/pretty v0.1.0/go.mod h1:dAy3ld7l9f0ibDNOQOHHMYYIIbhfbHSm3C4ZsoJORNo=
github.com/kr/pty v1.1.1/go.mod h1:pFQYn66WHrOpPYNljwOMqo10TkYh1fy3cYio2l3bCsQ=
github.com/kr/text v0.1.0 h1:45sCR5RtlFHMR4UwH9sdQ5TC8v0qDQCHnXt+kaKSTVE=
github.com/kr/text v0.1.0/go.mod h1:4Jbv+DJW3UT/LiOwJeYQe1efqtUx/iVham/4vfdArNI=
github.com/marstr/guid v1.1.0 h1:/M4H/1G4avsieL6BbUwCOBzulmoeKVP5ux/3mQNnbyI=
github.com/marstr/guid v1.1.0/go.mod h1:74gB1z2wpxxInTG6yaqA7KrtM0NZ+RbrcqDvYHefzho=
github.com/matttproud/golang_protobuf_extensions v1.0.1 h1:4hp9jkHxhMHkqkrB3Ix0jegS5sx/RkqARlsWZ6pIwiU=
github.com/matttproud/golang_protobuf_extensions v1.0.1/go.mod h1:D8He9yQNgCq6Z5Ld7szi9bcBfOoFv/3dc6xSMkL2PC0=
github.com/mitchellh/mapstructure v1.1.2 h1:fmNYVwqnSfB9mZU6OS2O6GsXM+wcskZDuKQzvN1EDeE=
github.com/mitchellh/mapstructure v1.1.2/go.mod h1:FVVH3fgwuzCH5S8UJGiWEs2h04kUh9fWfEaFds41c1Y=
github.com/mitchellh/osext v0.0.0-20151018003038-5e2d6d41470f h1:2+myh5ml7lgEU/51gbeLHfKGNfgEQQIWrlbdaOsidbQ=
github.com/mitchellh/osext v0.0.0-20151018003038-5e2d6d41470f/go.mod h1:OkQIRizQZAeMln+1tSwduZz7+Af5oFlKirV/MSYes2A=
github.com/modern-go/concurrent v0.0.0-20180228061459-e0a39a4cb421/go.mod h1:6dJC0mAP4ikYIbvyc7fijjWJddQyLn8Ig3JB5CqoB9Q=
github.com/modern-go/concurrent v0.0.0-20180306012644-bacd9c7ef1dd/go.mod h1:6dJC0mAP4ikYIbvyc7fijjWJddQyLn8Ig3JB5CqoB9Q=
github.com/modern-go/reflect2 v0.0.0-20180701023420-4b7aa43c6742/go.mod h1:bx2lNnkwVCuqBIxFjflWJWanXIb3RllmbCylyMrvgv0=
github.com/modern-go/reflect2 v1.0.1/go.mod h1:bx2lNnkwVCuqBIxFjflWJWanXIb3RllmbCylyMrvgv0=
github.com/mwitkow/go-conntrack v0.0.0-20161129095857-cc309e4a2223/go.mod h1:qRWi+5nqEBWmkhHvq77mSJWrCKwh8bxhgT7d/eI7P4U=
github.com/ncw/swift v1.0.47 h1:4DQRPj35Y41WogBxyhOXlrI37nzGlyEcsforeudyYPQ=
github.com/ncw/swift v1.0.47/go.mod h1:23YIA4yWVnGwv2dQlN4bB7egfYX6YLn0Yo/S6zZO/ZM=
github.com/opencontainers/go-digest v1.0.0 h1:apOUWs51W5PlhuyGyz9FCeeBIOUDA/6nW8Oi/yOhh5U=
github.com/opencontainers/go-digest v1.0.0/go.mod h1:0JzlMkj0TRzQZfJkVvzbP0HBR3IKzErnv2BNG4W4MAM=
github.com/opencontainers/image-spec v1.0.1 h1:JMemWkRwHx4Zj+fVxWoMCFm/8sYGGrUVojFA6h/TRcI=
github.com/opencontainers/image-spec v1.0.1/go.mod h1:BtxoFyWECRxE4U/7sNtV5W15zMzWCbyJoFRP3s7yZA0=
github.com/pkg/errors v0.8.0/go.mod h1:bwawxfHBFNV+L2hUp1rHADufV3IMtnDRdf1r5NINEl0=
github.com/pkg/errors v0.9.1/go.mod h1:bwawxfHBFNV+L2hUp1rHADufV3IMtnDRdf1r5NINEl0=
github.com/pmezard/go-difflib v1.0.0 h1:4DBwDE0NGyQoBHbLQYPwSUPoCMWR5BEzIk/f1lZbAQM=
github.com/pmezard/go-difflib v1.0.0/go.mod h1:iKH77koFhYxTK1pcRnkKkqfTogsbg7gZNVY4sRDYZ/4=
github.com/prometheus/client_golang v0.9.1/go.mod h1:7SWBe2y4D6OKWSNQJUaRYU/AaXPKyh/dDVn+NZz0KFw=
github.com/prometheus/client_golang v1.0.0/go.mod h1:db9x61etRT2tGnBNRi70OPL5FsnadC4Ky3P0J6CfImo=
github.com/prometheus/client_golang v1.1.0 h1:BQ53HtBmfOitExawJ6LokA4x8ov/z0SYYb0+HxJfRI8=
github.com/prometheus/client_golang v1.1.0/go.mod h1:I1FGZT9+L76gKKOs5djB6ezCbFQP1xR9D75/vuwEF3g=
github.com/prometheus/client_model v0.0.0-20180712105110-5c3871d89910/go.mod h1:MbSGuTsp3dbXC40dX6PRTWyKYBIrTGTE9sqQNg2J8bo=
github.com/prometheus/client_model v0.0.0-20190129233127-fd36f4220a90 h1:S/YWwWx/RA8rT8tKFRuGUZhuA90OyIBpPCXkcbwU8DE=
github.com/prometheus/client_model v0.0.0-20190129233127-fd36f4220a90/go.mod h1:xMI15A0UPsDsEKsMN9yxemIoYk6Tm2C1GtYGdfGttqA=
github.com/prometheus/common v0.4.1/go.mod h1:TNfzLD0ON7rHzMJeJkieUDPYmFC7Snx/y86RQel1bk4=
github.com/prometheus/common v0.6.0 h1:kRhiuYSXR3+uv2IbVbZhUxK5zVD/2pp3Gd2PpvPkpEo=
github.com/prometheus/common v0.6.0/go.mod h1:eBmuwkDJBwy6iBfxCBob6t6dR6ENT/y+J+Zk0j9GMYc=
github.com/prometheus/procfs v0.0.0-20181005140218-185b4288413d/go.mod h1:c3At6R/oaqEKCNdg8wHV1ftS6bRYblBhIjjI8uT2IGk=
github.com/prometheus/procfs v0.0.2/go.mod h1:TjEm7ze935MbeOT/UhFTIMYKhuLP4wbCsTZCD3I8kEA=
github.com/prometheus/procfs v0.0.3 h1:CTwfnzjQ+8dS6MhHHu4YswVAD99sL2wjPqP+VkURmKE=
github.com/prometheus/procfs v0.0.3/go.mod h1:4A/X28fw3Fc593LaREMrKMqOKvUAntwMDaekg4FpcdQ=
github.com/satori/go.uuid v1.2.0 h1:0uYX9dsZ2yD7q2RtLRtPSdGDWzjeM3TbMJP9utgA0ww=
github.com/satori/go.uuid v1.2.0/go.mod h1:dA0hQrYB0VpLJoorglMZABFdXlWrHn1NEOzdhQKdks0=
github.com/sirupsen/logrus v1.2.0/go.mod h1:LxeOpSwHxABJmUn/MG1IvRgCAasNZTLOkJPxbbu5VWo=
github.com/sirupsen/logrus v1.6.0 h1:UBcNElsrwanuuMsnGSlYmtmgbb23qDR5dG+6X6Oo89I=
github.com/sirupsen/logrus v1.6.0/go.mod h1:7uNnSEd1DgxDLC74fIahvMZmmYsHGZGEOFrfsX/uA88=
github.com/spf13/cobra v0.0.3 h1:ZlrZ4XsMRm04Fr5pSFxBgfND2EBVa1nLpiy1stUsX/8=
github.com/spf13/cobra v0.0.3/go.mod h1:1l0Ry5zgKvJasoi3XT1TypsSe7PqH0Sj9dhYf7v3XqQ=
github.com/spf13/pflag v1.0.3 h1:zPAT6CGy6wXeQ7NtTnaTerfKOsV6V6F8agHXFiazDkg=
github.com/spf13/pflag v1.0.3/go.mod h1:DYY7MBk1bdzusC3SYhjObp+wFpr4gzcvqqNjLnInEg4=
github.com/stretchr/objx v0.1.0/go.mod h1:HFkY916IF+rwdDfMAkV7OtwuqBVzrE8GR6GFx+wExME=
github.com/stretchr/objx v0.1.1/go.mod h1:HFkY916IF+rwdDfMAkV7OtwuqBVzrE8GR6GFx+wExME=
github.com/stretchr/testify v1.2.2/go.mod h1:a8OnRcib4nhh0OaRAV+Yts87kKdq0PP7pXfy6kDkUVs=
github.com/stretchr/testify v1.3.0 h1:TivCn/peBQ7UY8ooIcPgZFpTNSz0Q2U6UrFlUfqbe0Q=
github.com/stretchr/testify v1.3.0/go.mod h1:M5WIy9Dh21IEIfnGCwXGc5bZfKNJtfHm1UVUgZn+9EI=
github.com/stretchr/testify v1.5.1 h1:nOGnQDM7FYENwehXlg/kFVnos3rEvtKTjRvOWSzb6H4=
github.com/stretchr/testify v1.5.1/go.mod h1:5W2xD1RspED5o8YsWQXVCued0rvSQ+mT+I5cxcmMvtA=
github.com/yvasiyarov/go-metrics v0.0.0-20140926110328-57bccd1ccd43 h1:+lm10QQTNSBd8DVTNGHx7o/IKu9HYDvLMffDhbyLccI=
github.com/yvasiyarov/go-metrics v0.0.0-20140926110328-57bccd1ccd43/go.mod h1:aX5oPXxHm3bOH+xeAttToC8pqch2ScQN/JoXYupl6xs=
github.com/yvasiyarov/gorelic v0.0.0-20141212073537-a9bba5b9ab50 h1:hlE8//ciYMztlGpl/VA+Zm1AcTPHYkHJPbHqE6WJUXE=
github.com/yvasiyarov/gorelic v0.0.0-20141212073537-a9bba5b9ab50/go.mod h1:NUSPSUX/bi6SeDMUh6brw0nXpxHnc96TguQh0+r/ssA=
github.com/yvasiyarov/newrelic_platform_go v0.0.0-20140908184405-b21fdbd4370f h1:ERexzlUfuTvpE74urLSbIQW0Z/6hF9t8U4NsJLaioAY=
github.com/yvasiyarov/newrelic_platform_go v0.0.0-20140908184405-b21fdbd4370f/go.mod h1:GlGEuHIJweS1mbCqG+7vt2nvWLzLLnRHbXz5JKd/Qbg=
golang.org/x/crypto v0.0.0-20180904163835-0709b304e793/go.mod h1:6SG95UA2DQfeDnfUPMdvaQW0Q7yPrPDi9nlGo2tz2b4=
golang.org/x/crypto v0.0.0-20190308221718-c2843e01d9a2/go.mod h1:djNgcEr1/C05ACkg1iLfiJU5Ep61QUkGW8qpdssI0+w=
golang.org/x/crypto v0.0.0-20200128174031-69ecbb4d6d5d h1:9FCpayM9Egr1baVnV1SX0H87m+XB0B8S0hAMi99X/3U=
golang.org/x/crypto v0.0.0-20200128174031-69ecbb4d6d5d/go.mod h1:LzIPMQfyMNhhGPhUkYOs5KpL4U8rLKemX1yGLhDgUto=
golang.org/x/net v0.0.0-20180724234803-3673e40ba225/go.mod h1:mL1N/T3taQHkDXs73rZJwtUhF3w3ftmwwsq0BUmARs4=
golang.org/x/net v0.0.0-20181114220301-adae6a3d119a/go.mod h1:mL1N/T3taQHkDXs73rZJwtUhF3w3ftmwwsq0BUmARs4=
golang.org/x/net v0.0.0-20190108225652-1e06a53dbb7e/go.mod h1:mL1N/T3taQHkDXs73rZJwtUhF3w3ftmwwsq0BUmARs4=
golang.org/x/net v0.0.0-20190404232315-eb5bcb51f2a3/go.mod h1:t9HGtf8HONx5eT2rtn7q6eTqICYqUVnKs3thJo3Qplg=
golang.org/x/net v0.0.0-20190613194153-d28f0bde5980/go.mod h1:z5CRVTTTmAJ677TzLLGU+0bjPO0LkuOLi4/5GtJWs/s=
golang.org/x/net v0.0.0-20200202094626-16171245cfb2 h1:CCH4IOTTfewWjGOlSp+zGcjutRKlBEZQ6wTn8ozI/nI=
golang.org/x/net v0.0.0-20200202094626-16171245cfb2/go.mod h1:z5CRVTTTmAJ677TzLLGU+0bjPO0LkuOLi4/5GtJWs/s=
golang.org/x/oauth2 v0.0.0-20190604053449-0f29369cfe45 h1:SVwTIAaPC2U/AvvLNZ2a7OVsmBpC8L5BlwK1whH3hm0=
golang.org/x/oauth2 v0.0.0-20190604053449-0f29369cfe45/go.mod h1:gOpvHmFTYa4IltrdGE7lF6nIHvwfUNPOp7c8zoXwtLw=
golang.org/x/sync v0.0.0-20181108010431-42b317875d0f/go.mod h1:RxMgew5VJxzue5/jJTE5uejpjVlOe/izrB70Jof72aM=
golang.org/x/sync v0.0.0-20181221193216-37e7f081c4d4/go.mod h1:RxMgew5VJxzue5/jJTE5uejpjVlOe/izrB70Jof72aM=
golang.org/x/sys v0.0.0-20180905080454-ebe1bf3edb33/go.mod h1:STP8DvDyc/dI5b8T5hshtkjS+E42TnysNCUPdjciGhY=
golang.org/x/sys v0.0.0-20181116152217-5ac8a444bdc5/go.mod h1:STP8DvDyc/dI5b8T5hshtkjS+E42TnysNCUPdjciGhY=
golang.org/x/sys v0.0.0-20190215142949-d0b11bdaac8a/go.mod h1:STP8DvDyc/dI5b8T5hshtkjS+E42TnysNCUPdjciGhY=
golang.org/x/sys v0.0.0-20190412213103-97732733099d/go.mod h1:h1NjWce9XRLGQEsW7wpKNCjG9DtNlClVuFLEZdDNbEs=
golang.org/x/sys v0.0.0-20190422165155-953cdadca894/go.mod h1:h1NjWce9XRLGQEsW7wpKNCjG9DtNlClVuFLEZdDNbEs=
golang.org/x/sys v0.0.0-20190801041406-cbf593c0f2f3 h1:4y9KwBHBgBNwDbtu44R5o1fdOCQUEXhbk/P4A9WmJq0=
golang.org/x/sys v0.0.0-20190801041406-cbf593c0f2f3/go.mod h1:h1NjWce9XRLGQEsW7wpKNCjG9DtNlClVuFLEZdDNbEs=
golang.org/x/text v0.3.0 h1:g61tztE5qeGQ89tm6NTjjM9VPIm088od1l6aSorWRWg=
golang.org/x/text v0.3.0/go.mod h1:NqM8EUOU14njkJ3fqMW+pc6Ldnwhi/IjpwHt7yyuwOQ=
google.golang.org/api v0.0.0-20160322025152-9bf6e6e569ff h1:mk5zS3XLqVUzdF/CQCZ5ERujSF/8JFo+Wpkp/5I93NA=
google.golang.org/api v0.0.0-20160322025152-9bf6e6e569ff/go.mod h1:4mhQ8q/RsB7i+udVvVy5NUi08OU8ZlA0gRVgrF7VFY0=
google.golang.org/appengine v1.4.0 h1:/wp5JvzpHIxhs/dumFmF7BXTf3Z+dd4uXta4kVyO508=
google.golang.org/appengine v1.4.0/go.mod h1:xpcJRLb0r/rnEns0DIKYYv+WjYCduHsrkT7/EB5XEv4=
google.golang.org/cloud v0.0.0-20151119220103-975617b05ea8 h1:Cpp2P6TPjujNoC5M2KHY6g7wfyLYfIWRZaSdIKfDasA=
google.golang.org/cloud v0.0.0-20151119220103-975617b05ea8/go.mod h1:0H1ncTHf11KCFhTc/+EFRbzSCOZx+VUbRMk55Yv5MYk=
google.golang.org/grpc v0.0.0-20160317175043-d3ddb4469d5a h1:zo0EaRwJM6T5UQ+QEt2dDSgEmbFJ4pZr/Rzsjpu7zgI=
google.golang.org/grpc v0.0.0-20160317175043-d3ddb4469d5a/go.mod h1:yo6s7OP7yaDglbqo1J04qKzAhqBH6lvTonzMVmEdcZw=
gopkg.in/alecthomas/kingpin.v2 v2.2.6/go.mod h1:FMv+mEhP44yOT+4EoQTLFTRgOQ1FBLkstjWtayDeSgw=
gopkg.in/check.v1 v0.0.0-20161208181325-20d25e280405/go.mod h1:Co6ibVJAznAaIkqp8huTwlJQCZ016jof/cbN4VW5Yz0=
gopkg.in/check.v1 v1.0.0-20141024133853-64131543e789 h1:NMiUjDZiD6qDVeBOzpImftxXzQHCp2Y2QLdmaqU9MRk=
gopkg.in/check.v1 v1.0.0-20141024133853-64131543e789/go.mod h1:Co6ibVJAznAaIkqp8huTwlJQCZ016jof/cbN4VW5Yz0=
gopkg.in/yaml.v2 v2.2.1/go.mod h1:hI93XBmqTisBFMUTm0b8Fm+jr3Dg1NNxqwp+5A1VGuI=
gopkg.in/yaml.v2 v2.2.2 h1:ZCJp+EgiOT7lHqUV2J862kp8Qj64Jo6az82+3Td9dZw=
gopkg.in/yaml.v2 v2.2.2/go.mod h1:hI93XBmqTisBFMUTm0b8Fm+jr3Dg1NNxqwp+5A1VGuI=


@@ -14,7 +14,7 @@ var (
 // DownHandler registers a manual_http_status that always returns an Error
 func DownHandler(w http.ResponseWriter, r *http.Request) {
 	if r.Method == "POST" {
-		updater.Update(errors.New("manual Check"))
+		updater.Update(errors.New("Manual Check"))
 	} else {
 		w.WriteHeader(http.StatusNotFound)
 	}
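The only substantive change in this hunk is the capitalization of the error string: golint's convention is that error strings start lowercase and carry no trailing punctuation, because callers typically wrap them mid-sentence. A minimal sketch of why (the `manualCheck` helper is hypothetical, not from the registry code):

```go
package main

import (
	"errors"
	"fmt"
)

// manualCheck is a hypothetical stand-in for the health updater call above.
// A lowercase error string reads naturally once a caller wraps it.
func manualCheck() error {
	return errors.New("manual check requested")
}

func main() {
	if err := manualCheck(); err != nil {
		// The wrapped message reads as one sentence, which is why golint
		// asks for a lowercase first word.
		fmt.Printf("health endpoint: %v\n", err)
	}
}
```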


@@ -291,7 +291,7 @@ func statusResponse(w http.ResponseWriter, r *http.Request, status int, checks m
 		}
 	}
-	w.Header().Set("Content-Type", "application/json")
+	w.Header().Set("Content-Type", "application/json; charset=utf-8")
 	w.Header().Set("Content-Length", fmt.Sprint(len(p)))
 	w.WriteHeader(status)
 	if _, err := w.Write(p); err != nil {


@@ -8,7 +8,7 @@ import (
 	"github.com/docker/distribution"
 	"github.com/docker/distribution/manifest"
 	"github.com/opencontainers/go-digest"
-	v1 "github.com/opencontainers/image-spec/specs-go/v1"
+	"github.com/opencontainers/image-spec/specs-go/v1"
 )

 const (
@@ -163,7 +163,7 @@ func FromDescriptorsWithMediaType(descriptors []ManifestDescriptor, mediaType st
 		},
 	}

-	m.Manifests = make([]ManifestDescriptor, len(descriptors))
+	m.Manifests = make([]ManifestDescriptor, len(descriptors), len(descriptors))
 	copy(m.Manifests, descriptors)

 	deserialized := DeserializedManifestList{
@@ -177,7 +177,7 @@ func FromDescriptorsWithMediaType(descriptors []ManifestDescriptor, mediaType st
 // UnmarshalJSON populates a new ManifestList struct from JSON data.
 func (m *DeserializedManifestList) UnmarshalJSON(b []byte) error {
-	m.canonical = make([]byte, len(b))
+	m.canonical = make([]byte, len(b), len(b))

 	// store manifest list in canonical
 	copy(m.canonical, b)


@@ -7,7 +7,7 @@ import (
 	"testing"

 	"github.com/docker/distribution"
-	v1 "github.com/opencontainers/image-spec/specs-go/v1"
+	"github.com/opencontainers/image-spec/specs-go/v1"
 )

 var expectedManifestListSerialization = []byte(`{


@@ -7,7 +7,7 @@ import (
 	"github.com/docker/distribution"
 	"github.com/docker/distribution/manifest"
 	"github.com/opencontainers/go-digest"
-	v1 "github.com/opencontainers/image-spec/specs-go/v1"
+	"github.com/opencontainers/image-spec/specs-go/v1"
 )

 // Builder is a type for constructing manifests.
@@ -48,7 +48,7 @@ func NewManifestBuilder(bs distribution.BlobService, configJSON []byte, annotati
 // valid media type for oci image manifests currently: "" or "application/vnd.oci.image.manifest.v1+json"
 func (mb *Builder) SetMediaType(mediaType string) error {
 	if mediaType != "" && mediaType != v1.MediaTypeImageManifest {
-		return errors.New("invalid media type for OCI image manifest")
+		return errors.New("Invalid media type for OCI image manifest")
 	}

 	mb.mediaType = mediaType


@@ -7,7 +7,7 @@ import (
 	"github.com/docker/distribution"
 	"github.com/opencontainers/go-digest"
-	v1 "github.com/opencontainers/image-spec/specs-go/v1"
+	"github.com/opencontainers/image-spec/specs-go/v1"
 )

 type mockBlobService struct {


@@ -8,7 +8,7 @@ import (
 	"github.com/docker/distribution"
 	"github.com/docker/distribution/manifest"
 	"github.com/opencontainers/go-digest"
-	v1 "github.com/opencontainers/image-spec/specs-go/v1"
+	"github.com/opencontainers/image-spec/specs-go/v1"
 )

 var (
@@ -87,7 +87,7 @@ func FromStruct(m Manifest) (*DeserializedManifest, error) {
 // UnmarshalJSON populates a new Manifest struct from JSON data.
 func (m *DeserializedManifest) UnmarshalJSON(b []byte) error {
-	m.canonical = make([]byte, len(b))
+	m.canonical = make([]byte, len(b), len(b))

 	// store manifest in canonical
 	copy(m.canonical, b)
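Several hunks here reduce `make([]byte, len(b), len(b))` to `make([]byte, len(b))`: when the capacity argument is omitted it defaults to the length, so the two forms are identical and linters flag the longer one as redundant. The defensive copy-on-unmarshal idiom itself looks like this (hypothetical `cloneBytes` helper):

```go
package main

import "fmt"

// cloneBytes makes a private copy of its input -- the same idiom
// UnmarshalJSON uses so later mutation of the caller's buffer cannot
// change the stored canonical bytes.
func cloneBytes(b []byte) []byte {
	c := make([]byte, len(b)) // identical to make([]byte, len(b), len(b))
	copy(c, b)
	return c
}

func main() {
	orig := []byte(`{"schemaVersion":2}`)
	canonical := cloneBytes(orig)
	orig[0] = 'X'                  // caller scribbles on its buffer...
	fmt.Println(string(canonical)) // ...but the stored copy is unchanged
}
```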


@@ -8,7 +8,7 @@ import (
 	"github.com/docker/distribution"
 	"github.com/docker/distribution/manifest"
-	v1 "github.com/opencontainers/image-spec/specs-go/v1"
+	"github.com/opencontainers/image-spec/specs-go/v1"
 )

 var expectedManifestSerialization = []byte(`{


@@ -108,7 +108,7 @@ type SignedManifest struct {
 // UnmarshalJSON populates a new SignedManifest struct from JSON data.
 func (sm *SignedManifest) UnmarshalJSON(b []byte) error {
-	sm.all = make([]byte, len(b))
+	sm.all = make([]byte, len(b), len(b))

 	// store manifest and signatures in all
 	copy(sm.all, b)
@@ -124,7 +124,7 @@ func (sm *SignedManifest) UnmarshalJSON(b []byte) error {
 	}

 	// sm.Canonical stores the canonical manifest JSON
-	sm.Canonical = make([]byte, len(bytes))
+	sm.Canonical = make([]byte, len(bytes), len(bytes))
 	copy(sm.Canonical, bytes)

 	// Unmarshal canonical JSON into Manifest object


@@ -58,7 +58,7 @@ func (mb *referenceManifestBuilder) Build(ctx context.Context) (distribution.Man
 func (mb *referenceManifestBuilder) AppendReference(d distribution.Describable) error {
 	r, ok := d.(Reference)
 	if !ok {
-		return fmt.Errorf("unable to add non-reference type to v1 builder")
+		return fmt.Errorf("Unable to add non-reference type to v1 builder")
 	}

 	// Entries need to be prepended


@@ -106,7 +106,7 @@ func FromStruct(m Manifest) (*DeserializedManifest, error) {
 // UnmarshalJSON populates a new Manifest struct from JSON data.
 func (m *DeserializedManifest) UnmarshalJSON(b []byte) error {
-	m.canonical = make([]byte, len(b))
+	m.canonical = make([]byte, len(b), len(b))

 	// store manifest in canonical
 	copy(m.canonical, b)


@@ -87,7 +87,7 @@ func ManifestMediaTypes() (mediaTypes []string) {
 // UnmarshalFunc implements manifest unmarshalling a given MediaType
 type UnmarshalFunc func([]byte) (Manifest, Descriptor, error)

-var mappings = make(map[string]UnmarshalFunc)
+var mappings = make(map[string]UnmarshalFunc, 0)

 // UnmarshalManifest looks up manifest unmarshal functions based on
 // MediaType
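The `make(map[string]UnmarshalFunc, 0)` form is likewise redundant: the optional second argument to `make` for maps is only a size hint for preallocation, and a hint of 0 changes nothing. Registration into such a package-level map can be sketched as follows (simplified stand-in types, not the real distribution API):

```go
package main

import "fmt"

// UnmarshalFunc is a simplified stand-in for the real signature.
type UnmarshalFunc func([]byte) (interface{}, error)

var mappings = make(map[string]UnmarshalFunc) // equivalent to make(..., 0)

// register refuses to overwrite an existing media type, mirroring how
// manifest unmarshalers are registered once at init time.
func register(mediaType string, u UnmarshalFunc) error {
	if _, ok := mappings[mediaType]; ok {
		return fmt.Errorf("media type already registered: %s", mediaType)
	}
	mappings[mediaType] = u
	return nil
}

func main() {
	u := func(b []byte) (interface{}, error) { return len(b), nil }
	fmt.Println(register("application/vnd.example+json", u)) // first call succeeds
	fmt.Println(register("application/vnd.example+json", u)) // duplicate rejected
}
```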


@@ -10,7 +10,4 @@ const (
 var (
 	// StorageNamespace is the prometheus namespace of blob/cache related operations
 	StorageNamespace = metrics.NewNamespace(NamespacePrefix, "storage", nil)
-
-	// NotificationsNamespace is the prometheus namespace of notification related metrics
-	NotificationsNamespace = metrics.NewNamespace(NamespacePrefix, "notifications", nil)
 )


@@ -8,7 +8,6 @@ import (
 	"github.com/docker/distribution/context"
 	"github.com/docker/distribution/reference"
 	"github.com/docker/distribution/uuid"
-	events "github.com/docker/go-events"
 	"github.com/opencontainers/go-digest"
 )

@@ -18,7 +17,7 @@ type bridge struct {
 	actor   ActorRecord
 	source  SourceRecord
 	request RequestRecord
-	sink    events.Sink
+	sink    Sink
 }

 var _ Listener = &bridge{}
@@ -33,7 +32,7 @@ type URLBuilder interface {
 // using the actor and source. Any urls populated in the events created by
 // this bridge will be created using the URLBuilder.
 // TODO(stevvooe): Update this to simply take a context.Context object.
-func NewBridge(ub URLBuilder, source SourceRecord, actor ActorRecord, request RequestRecord, sink events.Sink, includeReferences bool) Listener {
+func NewBridge(ub URLBuilder, source SourceRecord, actor ActorRecord, request RequestRecord, sink Sink, includeReferences bool) Listener {
 	return &bridge{
 		ub:                ub,
 		includeReferences: includeReferences,
@@ -126,6 +125,15 @@ func (b *bridge) RepoDeleted(repo reference.Named) error {
 	return b.sink.Write(*event)
 }

+func (b *bridge) createManifestEventAndWrite(action string, repo reference.Named, sm distribution.Manifest) error {
+	manifestEvent, err := b.createManifestEvent(action, repo, sm)
+	if err != nil {
+		return err
+	}
+
+	return b.sink.Write(*manifestEvent)
+}
+
 func (b *bridge) createManifestDeleteEventAndWrite(action string, repo reference.Named, dgst digest.Digest) error {
 	event := b.createEvent(action)
 	event.Target.Repository = repo.Name()


@@ -6,9 +6,8 @@ import (
 	"github.com/docker/distribution"
 	"github.com/docker/distribution/manifest/schema1"
 	"github.com/docker/distribution/reference"
-	v2 "github.com/docker/distribution/registry/api/v2"
+	"github.com/docker/distribution/registry/api/v2"
 	"github.com/docker/distribution/uuid"
-	events "github.com/docker/go-events"
 	"github.com/docker/libtrust"
 	"github.com/opencontainers/go-digest"
 )
@@ -47,8 +46,8 @@ var (
 )

 func TestEventBridgeManifestPulled(t *testing.T) {
-	l := createTestEnv(t, testSinkFn(func(event events.Event) error {
-		checkCommonManifest(t, EventActionPull, event)
+	l := createTestEnv(t, testSinkFn(func(events ...Event) error {
+		checkCommonManifest(t, EventActionPull, events...)

 		return nil
 	}))
@@ -60,8 +59,8 @@ func TestEventBridgeManifestPulled(t *testing.T) {
 }

 func TestEventBridgeManifestPushed(t *testing.T) {
-	l := createTestEnv(t, testSinkFn(func(event events.Event) error {
-		checkCommonManifest(t, EventActionPush, event)
+	l := createTestEnv(t, testSinkFn(func(events ...Event) error {
+		checkCommonManifest(t, EventActionPush, events...)

 		return nil
 	}))
@@ -73,10 +72,10 @@ func TestEventBridgeManifestPushed(t *testing.T) {
 }

 func TestEventBridgeManifestPushedWithTag(t *testing.T) {
-	l := createTestEnv(t, testSinkFn(func(event events.Event) error {
-		checkCommonManifest(t, EventActionPush, event)
-		if event.(Event).Target.Tag != "latest" {
-			t.Fatalf("missing or unexpected tag: %#v", event.(Event).Target)
+	l := createTestEnv(t, testSinkFn(func(events ...Event) error {
+		checkCommonManifest(t, EventActionPush, events...)
+		if events[0].Target.Tag != "latest" {
+			t.Fatalf("missing or unexpected tag: %#v", events[0].Target)
 		}

 		return nil
@@ -89,10 +88,10 @@ func TestEventBridgeManifestPushedWithTag(t *testing.T) {
 }

 func TestEventBridgeManifestPulledWithTag(t *testing.T) {
-	l := createTestEnv(t, testSinkFn(func(event events.Event) error {
-		checkCommonManifest(t, EventActionPull, event)
-		if event.(Event).Target.Tag != "latest" {
-			t.Fatalf("missing or unexpected tag: %#v", event.(Event).Target)
+	l := createTestEnv(t, testSinkFn(func(events ...Event) error {
+		checkCommonManifest(t, EventActionPull, events...)
+		if events[0].Target.Tag != "latest" {
+			t.Fatalf("missing or unexpected tag: %#v", events[0].Target)
 		}

 		return nil
@@ -105,10 +104,10 @@ func TestEventBridgeManifestPulledWithTag(t *testing.T) {
 }

 func TestEventBridgeManifestDeleted(t *testing.T) {
-	l := createTestEnv(t, testSinkFn(func(event events.Event) error {
-		checkDeleted(t, EventActionDelete, event)
-		if event.(Event).Target.Digest != dgst {
-			t.Fatalf("unexpected digest on event target: %q != %q", event.(Event).Target.Digest, dgst)
+	l := createTestEnv(t, testSinkFn(func(events ...Event) error {
+		checkDeleted(t, EventActionDelete, events...)
+		if events[0].Target.Digest != dgst {
+			t.Fatalf("unexpected digest on event target: %q != %q", events[0].Target.Digest, dgst)
 		}
 		return nil
 	}))
@@ -120,10 +119,10 @@ func TestEventBridgeManifestDeleted(t *testing.T) {
 }

 func TestEventBridgeTagDeleted(t *testing.T) {
-	l := createTestEnv(t, testSinkFn(func(event events.Event) error {
-		checkDeleted(t, EventActionDelete, event)
-		if event.(Event).Target.Tag != m.Tag {
-			t.Fatalf("unexpected tag on event target: %q != %q", event.(Event).Target.Tag, m.Tag)
+	l := createTestEnv(t, testSinkFn(func(events ...Event) error {
+		checkDeleted(t, EventActionDelete, events...)
+		if events[0].Target.Tag != m.Tag {
+			t.Fatalf("unexpected tag on event target: %q != %q", events[0].Target.Tag, m.Tag)
 		}
 		return nil
 	}))
@@ -135,8 +134,8 @@ func TestEventBridgeTagDeleted(t *testing.T) {
 }

 func TestEventBridgeRepoDeleted(t *testing.T) {
-	l := createTestEnv(t, testSinkFn(func(event events.Event) error {
-		checkDeleted(t, EventActionDelete, event)
+	l := createTestEnv(t, testSinkFn(func(events ...Event) error {
+		checkDeleted(t, EventActionDelete, events...)
 		return nil
 	}))
@@ -163,29 +162,36 @@ func createTestEnv(t *testing.T, fn testSinkFn) Listener {
 	return NewBridge(ub, source, actor, request, fn, true)
 }

-func checkDeleted(t *testing.T, action string, event events.Event) {
-	if event.(Event).Source != source {
-		t.Fatalf("source not equal: %#v != %#v", event.(Event).Source, source)
+func checkDeleted(t *testing.T, action string, events ...Event) {
+	if len(events) != 1 {
+		t.Fatalf("unexpected number of events: %v != 1", len(events))
 	}

-	if event.(Event).Request != request {
-		t.Fatalf("request not equal: %#v != %#v", event.(Event).Request, request)
+	event := events[0]
+
+	if event.Source != source {
+		t.Fatalf("source not equal: %#v != %#v", event.Source, source)
 	}

-	if event.(Event).Actor != actor {
-		t.Fatalf("request not equal: %#v != %#v", event.(Event).Actor, actor)
+	if event.Request != request {
+		t.Fatalf("request not equal: %#v != %#v", event.Request, request)
 	}

-	if event.(Event).Target.Repository != repo {
-		t.Fatalf("unexpected repository: %q != %q", event.(Event).Target.Repository, repo)
+	if event.Actor != actor {
+		t.Fatalf("request not equal: %#v != %#v", event.Actor, actor)
+	}
+
+	if event.Target.Repository != repo {
+		t.Fatalf("unexpected repository: %q != %q", event.Target.Repository, repo)
 	}
 }

-func checkCommonManifest(t *testing.T, action string, event events.Event) {
-	checkCommon(t, event)
+func checkCommonManifest(t *testing.T, action string, events ...Event) {
+	checkCommon(t, events...)

-	if event.(Event).Action != action {
-		t.Fatalf("unexpected event action: %q != %q", event.(Event).Action, action)
+	event := events[0]
+	if event.Action != action {
+		t.Fatalf("unexpected event action: %q != %q", event.Action, action)
 	}

 	repoRef, _ := reference.WithName(repo)
@@ -195,51 +201,57 @@ func checkCommonManifest(t *testing.T, action string, event events.Event) {
 		t.Fatalf("error building expected url: %v", err)
 	}

-	if event.(Event).Target.URL != u {
-		t.Fatalf("incorrect url passed: \n%q != \n%q", event.(Event).Target.URL, u)
+	if event.Target.URL != u {
+		t.Fatalf("incorrect url passed: \n%q != \n%q", event.Target.URL, u)
 	}

-	if len(event.(Event).Target.References) != len(layers) {
-		t.Fatalf("unexpected number of references %v != %v", len(event.(Event).Target.References), len(layers))
+	if len(event.Target.References) != len(layers) {
+		t.Fatalf("unexpected number of references %v != %v", len(event.Target.References), len(layers))
 	}

-	for i, targetReference := range event.(Event).Target.References {
+	for i, targetReference := range event.Target.References {
 		if targetReference.Digest != layers[i].BlobSum {
 			t.Fatalf("unexpected reference: %q != %q", targetReference.Digest, layers[i].BlobSum)
 		}
 	}
 }

-func checkCommon(t *testing.T, event events.Event) {
-	if event.(Event).Source != source {
-		t.Fatalf("source not equal: %#v != %#v", event.(Event).Source, source)
+func checkCommon(t *testing.T, events ...Event) {
+	if len(events) != 1 {
+		t.Fatalf("unexpected number of events: %v != 1", len(events))
 	}

-	if event.(Event).Request != request {
-		t.Fatalf("request not equal: %#v != %#v", event.(Event).Request, request)
+	event := events[0]
+
+	if event.Source != source {
+		t.Fatalf("source not equal: %#v != %#v", event.Source, source)
 	}

-	if event.(Event).Actor != actor {
-		t.Fatalf("request not equal: %#v != %#v", event.(Event).Actor, actor)
+	if event.Request != request {
+		t.Fatalf("request not equal: %#v != %#v", event.Request, request)
 	}

-	if event.(Event).Target.Digest != dgst {
-		t.Fatalf("unexpected digest on event target: %q != %q", event.(Event).Target.Digest, dgst)
+	if event.Actor != actor {
+		t.Fatalf("request not equal: %#v != %#v", event.Actor, actor)
 	}

-	if event.(Event).Target.Length != int64(len(payload)) {
-		t.Fatalf("unexpected target length: %v != %v", event.(Event).Target.Length, len(payload))
+	if event.Target.Digest != dgst {
+		t.Fatalf("unexpected digest on event target: %q != %q", event.Target.Digest, dgst)
 	}

-	if event.(Event).Target.Repository != repo {
-		t.Fatalf("unexpected repository: %q != %q", event.(Event).Target.Repository, repo)
+	if event.Target.Length != int64(len(payload)) {
+		t.Fatalf("unexpected target length: %v != %v", event.Target.Length, len(payload))
+	}
+
+	if event.Target.Repository != repo {
+		t.Fatalf("unexpected repository: %q != %q", event.Target.Repository, repo)
 	}
 }

-type testSinkFn func(event events.Event) error
+type testSinkFn func(events ...Event) error

-func (tsf testSinkFn) Write(event events.Event) error {
-	return tsf(event)
+func (tsf testSinkFn) Write(events ...Event) error {
+	return tsf(events...)
 }

 func (tsf testSinkFn) Close() error { return nil }
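The entire test-file diff tracks one interface change: master delivers a single `events.Event` per `Write` call (the `github.com/docker/go-events` signature), while v2.7 kept a local variadic `Write(events ...Event)`. The shape difference can be illustrated with stand-in types and a fan-out adapter (both hypothetical, not the registry's actual code):

```go
package main

import "fmt"

type Event struct{ Action string }

// variadicSink mimics the v2.7-style Sink: one Write call may carry many events.
type variadicSink func(events ...Event) error

func (f variadicSink) Write(events ...Event) error { return f(events...) }

// adapt turns a per-event function (the go-events style used on master)
// into the variadic shape by fanning out over the batch.
func adapt(write func(Event) error) variadicSink {
	return func(events ...Event) error {
		for _, e := range events {
			if err := write(e); err != nil {
				return err
			}
		}
		return nil
	}
}

func main() {
	var seen []string
	sink := adapt(func(e Event) error {
		seen = append(seen, e.Action)
		return nil
	})
	if err := sink.Write(Event{Action: "push"}, Event{Action: "pull"}); err != nil {
		panic(err)
	}
	fmt.Println(seen) // [push pull]
}
```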


@@ -5,7 +5,6 @@ import (
 	"time"

 	"github.com/docker/distribution/configuration"
-	events "github.com/docker/go-events"
 )

 // EndpointConfig covers the optional configuration parameters for an active
@@ -43,7 +42,7 @@ func (ec *EndpointConfig) defaults() {
 // services when events are written. Writes are non-blocking and always
 // succeed for callers but events may be queued internally.
 type Endpoint struct {
-	events.Sink
+	Sink

 	url  string
 	name string
@@ -59,13 +58,13 @@ func NewEndpoint(name, url string, config EndpointConfig) *Endpoint {
 	endpoint.url = url
 	endpoint.EndpointConfig = config
 	endpoint.defaults()
-	endpoint.metrics = newSafeMetrics(name)
+	endpoint.metrics = newSafeMetrics()

 	// Configures the inmemory queue, retry, http pipeline.
 	endpoint.Sink = newHTTPSink(
 		endpoint.url, endpoint.Timeout, endpoint.Headers,
 		endpoint.Transport, endpoint.metrics.httpStatusListener())
-	endpoint.Sink = events.NewRetryingSink(endpoint.Sink, events.NewBreaker(endpoint.Threshold, endpoint.Backoff))
+	endpoint.Sink = newRetryingSink(endpoint.Sink, endpoint.Threshold, endpoint.Backoff)
 	endpoint.Sink = newEventQueue(endpoint.Sink, endpoint.metrics.eventQueueListener())

 	mediaTypes := append(config.Ignore.MediaTypes, config.IgnoredMediaTypes...)
 	endpoint.Sink = newIgnoredSink(endpoint.Sink, mediaTypes, config.Ignore.Actions)


@@ -5,7 +5,6 @@ import (
 	"time"

 	"github.com/docker/distribution"
-	events "github.com/docker/go-events"
 )

 // EventAction constants used in action field of Event.
@@ -31,7 +30,7 @@ const (
 type Envelope struct {
 	// Events make up the contents of the envelope. Events present in a single
 	// envelope are not necessarily related.
-	Events []events.Event `json:"events,omitempty"`
+	Events []Event `json:"events,omitempty"`
 }

 // TODO(stevvooe): The event type should be separate from the json format. It
@@ -149,3 +148,16 @@ var (
 	// retries will not be successful.
 	ErrSinkClosed = fmt.Errorf("sink: closed")
 )
+
+// Sink accepts and sends events.
+type Sink interface {
+	// Write writes one or more events to the sink. If no error is returned,
+	// the caller will assume that all events have been committed and will not
+	// try to send them again. If an error is received, the caller may retry
+	// sending the event. The caller should cede the slice of memory to the
+	// sink and not modify it after calling this method.
+	Write(events ...Event) error
+
+	// Close the sink, possibly waiting for pending events to flush.
+	Close() error
+}


@@ -114,7 +114,8 @@ func TestEventEnvelopeJSONFormat(t *testing.T) {
 	prototype.Request.UserAgent = "test/0.1"
 	prototype.Source.Addr = "hostname.local:port"

-	var manifestPush = prototype
+	var manifestPush Event
+	manifestPush = prototype
 	manifestPush.ID = "asdf-asdf-asdf-asdf-0"
 	manifestPush.Target.Digest = "sha256:0123456789abcdef0"
 	manifestPush.Target.Length = 1
@@ -123,7 +124,8 @@ func TestEventEnvelopeJSONFormat(t *testing.T) {
 	manifestPush.Target.Repository = "library/test"
 	manifestPush.Target.URL = "http://example.com/v2/library/test/manifests/latest"

-	var layerPush0 = prototype
+	var layerPush0 Event
+	layerPush0 = prototype
 	layerPush0.ID = "asdf-asdf-asdf-asdf-1"
 	layerPush0.Target.Digest = "sha256:3b3692957d439ac1928219a83fac91e7bf96c153725526874673ae1f2023f8d5"
 	layerPush0.Target.Length = 2
@@ -132,7 +134,8 @@ func TestEventEnvelopeJSONFormat(t *testing.T) {
 	layerPush0.Target.Repository = "library/test"
 	layerPush0.Target.URL = "http://example.com/v2/library/test/manifests/latest"

-	var layerPush1 = prototype
+	var layerPush1 Event
+	layerPush1 = prototype
 	layerPush1.ID = "asdf-asdf-asdf-asdf-2"
 	layerPush1.Target.Digest = "sha256:3b3692957d439ac1928219a83fac91e7bf96c153725526874673ae1f2023f8d6"
 	layerPush1.Target.Length = 3


@@ -7,8 +7,6 @@ import (
 	"net/http"
 	"sync"
 	"time"
-
-	events "github.com/docker/go-events"
 )

 // httpSink implements a single-flight, http notification endpoint. This is
@@ -47,15 +45,15 @@ func newHTTPSink(u string, timeout time.Duration, headers http.Header, transport
 // httpStatusListener is called on various outcomes of sending notifications.
 type httpStatusListener interface {
-	success(status int, event events.Event)
-	failure(status int, events events.Event)
-	err(err error, events events.Event)
+	success(status int, events ...Event)
+	failure(status int, events ...Event)
+	err(err error, events ...Event)
 }

 // Accept makes an attempt to notify the endpoint, returning an error if it
 // fails. It is the caller's responsibility to retry on error. The events are
 // accepted or rejected as a group.
-func (hs *httpSink) Write(event events.Event) error {
+func (hs *httpSink) Write(events ...Event) error {
 	hs.mu.Lock()
 	defer hs.mu.Unlock()
 	defer hs.client.Transport.(*headerRoundTripper).CloseIdleConnections()
@@ -65,7 +63,7 @@ func (hs *httpSink) Write(event events.Event) error {
 	}

 	envelope := Envelope{
-		Events: []events.Event{event},
+		Events: events,
 	}

 	// TODO(stevvooe): It is not ideal to keep re-encoding the request body on
@@ -75,7 +73,7 @@ func (hs *httpSink) Write(event events.Event) error {
 	p, err := json.MarshalIndent(envelope, "", " ")
 	if err != nil {
 		for _, listener := range hs.listeners {
-			listener.err(err, event)
+			listener.err(err, events...)
 		}
 		return fmt.Errorf("%v: error marshaling event envelope: %v", hs, err)
 	}
@@ -84,7 +82,7 @@ func (hs *httpSink) Write(event events.Event) error {
 	resp, err := hs.client.Post(hs.url, EventsMediaType, body)
 	if err != nil {
 		for _, listener := range hs.listeners {
-			listener.err(err, event)
+			listener.err(err, events...)
 		}

 		return fmt.Errorf("%v: error posting: %v", hs, err)
@@ -96,7 +94,7 @@ func (hs *httpSink) Write(event events.Event) error {
 	switch {
 	case resp.StatusCode >= 200 && resp.StatusCode < 400:
 		for _, listener := range hs.listeners {
-			listener.success(resp.StatusCode, event)
+			listener.success(resp.StatusCode, events...)
 		}

 		// TODO(stevvooe): This is a little accepting: we may want to support
@@ -106,7 +104,7 @@ func (hs *httpSink) Write(event events.Event) error {
 		return nil
 	default:
 		for _, listener := range hs.listeners {
-			listener.failure(resp.StatusCode, event)
+			listener.failure(resp.StatusCode, events...)
 		}
 		return fmt.Errorf("%v: response status %v unaccepted", hs, resp.Status)
 	}
@@ -135,7 +133,8 @@ type headerRoundTripper struct {
 }

 func (hrt *headerRoundTripper) RoundTrip(req *http.Request) (*http.Response, error) {
-	var nreq = *req
+	var nreq http.Request
+	nreq = *req
 	nreq.Header = make(http.Header)

 	merge := func(headers http.Header) {


@@ -14,7 +14,6 @@ import (
 	"testing"

 	"github.com/docker/distribution/manifest/schema1"
-	events "github.com/docker/go-events"
 )

 // TestHTTPSink mocks out an http endpoint and notifies it under a couple of
@@ -64,14 +63,14 @@ func TestHTTPSink(t *testing.T) {
 	})
 	server := httptest.NewTLSServer(serverHandler)

-	metrics := newSafeMetrics("")
+	metrics := newSafeMetrics()
 	sink := newHTTPSink(server.URL, 0, nil, nil,
 		&endpointMetricsHTTPStatusListener{safeMetrics: metrics})

 	// first make sure that the default transport gives x509 untrusted cert error
-	event := Event{}
-	err := sink.Write(event)
-	if !strings.Contains(err.Error(), "x509") && !strings.Contains(err.Error(), "unknown ca") {
+	events := []Event{}
+	err := sink.Write(events...)
+	if !strings.Contains(err.Error(), "x509") {
 		t.Fatal("TLS server with default transport should give unknown CA error")
 	}
 	if err := sink.Close(); err != nil {
@@ -84,13 +83,12 @@ func TestHTTPSink(t *testing.T) {
 	}
 	sink = newHTTPSink(server.URL, 0, nil, tr,
 		&endpointMetricsHTTPStatusListener{safeMetrics: metrics})
-	err = sink.Write(event)
+	err = sink.Write(events...)
 	if err != nil {
-		t.Fatalf("unexpected error writing event: %v", err)
+		t.Fatalf("unexpected error writing events: %v", err)
 	}

 	// reset server to standard http server and sink to a basic sink
-	metrics = newSafeMetrics("")
 	server = httptest.NewServer(serverHandler)
 	sink = newHTTPSink(server.URL, 0, nil, nil,
 		&endpointMetricsHTTPStatusListener{safeMetrics: metrics})
@@ -113,52 +111,46 @@ func TestHTTPSink(t *testing.T) {
 	}()

 	for _, tc := range []struct {
-		event      events.Event // events to send
+		events     []Event // events to send
 		url        string
-		isFailure  bool // true if there should be a failure.
-		isError    bool // true if the request returns an error
+		failure    bool // true if there should be a failure.
 		statusCode int // if not set, no status code should be incremented.
 	}{
 		{
 			statusCode: http.StatusOK,
-			event:      createTestEvent("push", "library/test", schema1.MediaTypeSignedManifest),
+			events: []Event{
+				createTestEvent("push", "library/test", schema1.MediaTypeSignedManifest)},
 		},
 		{
 			statusCode: http.StatusOK,
-			event:      createTestEvent("push", "library/test", schema1.MediaTypeSignedManifest),
-		},
-		{
-			statusCode: http.StatusOK,
-			event:      createTestEvent("push", "library/test", layerMediaType),
-		},
-		{
-			statusCode: http.StatusOK,
-			event:      createTestEvent("push", "library/test", layerMediaType),
+			events: []Event{
+				createTestEvent("push", "library/test", schema1.MediaTypeSignedManifest),
+				createTestEvent("push", "library/test", layerMediaType),
+				createTestEvent("push", "library/test", layerMediaType),
+			},
 		},
 		{
 			statusCode: http.StatusTemporaryRedirect,
 		},
 		{
 			statusCode: http.StatusBadRequest,
-			isFailure:  true,
+			failure:    true,
 		},
 		{
 			// Case where connection is immediately closed
-			url:     "http://" + closeL.Addr().String(),
-			isError: true,
+			url:     closeL.Addr().String(),
+			failure: true,
 		},
 	} {

-		if tc.isFailure {
-			expectedMetrics.Failures++
-		} else if tc.isError {
-			expectedMetrics.Errors++
+		if tc.failure {
+			expectedMetrics.Failures += len(tc.events)
 		} else {
-			expectedMetrics.Successes++
+			expectedMetrics.Successes += len(tc.events)
 		}

 		if tc.statusCode > 0 {
-			expectedMetrics.Statuses[fmt.Sprintf("%d %s", tc.statusCode, http.StatusText(tc.statusCode))]++
+			expectedMetrics.Statuses[fmt.Sprintf("%d %s", tc.statusCode, http.StatusText(tc.statusCode))] += len(tc.events)
 		}

 		url := tc.url
@@ -169,11 +161,11 @@ func TestHTTPSink(t *testing.T) {
 		url += fmt.Sprintf("?status=%v", tc.statusCode)
 		sink.url = url

-		t.Logf("testcase: %v, fail=%v, error=%v", url, tc.isFailure, tc.isError)
+		t.Logf("testcase: %v, fail=%v", url, tc.failure)
 		// Try a simple event emission.
-		err := sink.Write(tc.event)
+		err := sink.Write(tc.events...)

-		if !tc.isFailure && !tc.isError {
+		if !tc.failure {
 			if err != nil {
 				t.Fatalf("unexpected error send event: %v", err)
 			}
@@ -181,7 +173,6 @@ func TestHTTPSink(t *testing.T) {
 			if err == nil {
 				t.Fatalf("the endpoint should have rejected the request")
 			}
-			t.Logf("write error: %v", err)
 		}

 		if !reflect.DeepEqual(metrics.EndpointMetrics, expectedMetrics) {


@@ -136,10 +136,11 @@ func checkExerciseRepository(t *testing.T, repository distribution.Repository, r
 	var blobDigests []digest.Digest
 	blobs := repository.Blobs(ctx)
 	for i := 0; i < 2; i++ {
-		rs, dgst, err := testutil.CreateRandomTarFile()
+		rs, ds, err := testutil.CreateRandomTarFile()
 		if err != nil {
 			t.Fatalf("error creating test layer: %v", err)
 		}
+		dgst := digest.Digest(ds)
 		blobDigests = append(blobDigests, dgst)

 		wr, err := blobs.Create(ctx)


@@ -5,19 +5,6 @@ import (
 	"fmt"
 	"net/http"
 	"sync"
-
-	prometheus "github.com/docker/distribution/metrics"
-	events "github.com/docker/go-events"
-	"github.com/docker/go-metrics"
-)
-
-var (
-	// eventsCounter counts total events of incoming, success, failure, and errors
-	eventsCounter = prometheus.NotificationsNamespace.NewLabeledCounter("events", "The number of total events", "type", "endpoint")
-	// pendingGauge measures the pending queue size
-	pendingGauge = prometheus.NotificationsNamespace.NewLabeledGauge("pending", "The gauge of pending events in queue", metrics.Total, "endpoint")
-	// statusCounter counts the total notification call per each status code
-	statusCounter = prometheus.NotificationsNamespace.NewLabeledCounter("status", "The number of status code", "code", "endpoint")
 )

 // EndpointMetrics track various actions taken by the endpoint, typically by
@@ -35,16 +22,14 @@ type EndpointMetrics struct {
 // safeMetrics guards the metrics implementation with a lock and provides a
 // safe update function.
 type safeMetrics struct {
-	EndpointName string
 	EndpointMetrics
 	sync.Mutex // protects statuses map
 }

 // newSafeMetrics returns safeMetrics with map allocated.
-func newSafeMetrics(name string) *safeMetrics {
+func newSafeMetrics() *safeMetrics {
 	var sm safeMetrics
 	sm.Statuses = make(map[string]int)
-	sm.EndpointName = name

 	return &sm
 }
@@ -71,32 +56,24 @@ type endpointMetricsHTTPStatusListener struct {

 var _ httpStatusListener = &endpointMetricsHTTPStatusListener{}

-func (emsl *endpointMetricsHTTPStatusListener) success(status int, event events.Event) {
+func (emsl *endpointMetricsHTTPStatusListener) success(status int, events ...Event) {
 	emsl.safeMetrics.Lock()
 	defer emsl.safeMetrics.Unlock()
-	emsl.Statuses[fmt.Sprintf("%d %s", status, http.StatusText(status))]++
-	emsl.Successes++
-
-	statusCounter.WithValues(fmt.Sprintf("%d %s", status, http.StatusText(status)), emsl.EndpointName).Inc(1)
-	eventsCounter.WithValues("Successes", emsl.EndpointName).Inc(1)
+	emsl.Statuses[fmt.Sprintf("%d %s", status, http.StatusText(status))] += len(events)
+	emsl.Successes += len(events)
 }

-func (emsl *endpointMetricsHTTPStatusListener) failure(status int, event events.Event) {
+func (emsl *endpointMetricsHTTPStatusListener) failure(status int, events ...Event) {
 	emsl.safeMetrics.Lock()
 	defer emsl.safeMetrics.Unlock()
-	emsl.Statuses[fmt.Sprintf("%d %s", status, http.StatusText(status))]++
-	emsl.Failures++
-
-	statusCounter.WithValues(fmt.Sprintf("%d %s", status, http.StatusText(status)), emsl.EndpointName).Inc(1)
-	eventsCounter.WithValues("Failures", emsl.EndpointName).Inc(1)
+	emsl.Statuses[fmt.Sprintf("%d %s", status, http.StatusText(status))] += len(events)
+	emsl.Failures += len(events)
 }

-func (emsl *endpointMetricsHTTPStatusListener) err(err error, event events.Event) {
+func (emsl *endpointMetricsHTTPStatusListener) err(err error, events ...Event) {
 	emsl.safeMetrics.Lock()
 	defer emsl.safeMetrics.Unlock()
-	emsl.Errors++
-
-	eventsCounter.WithValues("Errors", emsl.EndpointName).Inc(1)
+	emsl.Errors += len(events)
 }

 // endpointMetricsEventQueueListener maintains the incoming events counter and
@@ -105,22 +82,17 @@ type endpointMetricsEventQueueListener struct {
 	*safeMetrics
 }

-func (eqc *endpointMetricsEventQueueListener) ingress(event events.Event) {
+func (eqc *endpointMetricsEventQueueListener) ingress(events ...Event) {
 	eqc.Lock()
 	defer eqc.Unlock()
-	eqc.Events++
-	eqc.Pending++
-
-	eventsCounter.WithValues("Events", eqc.EndpointName).Inc()
-	pendingGauge.WithValues(eqc.EndpointName).Inc(1)
+	eqc.Events += len(events)
+	eqc.Pending += len(events)
 }

-func (eqc *endpointMetricsEventQueueListener) egress(event events.Event) {
+func (eqc *endpointMetricsEventQueueListener) egress(events ...Event) {
 	eqc.Lock()
 	defer eqc.Unlock()
-	eqc.Pending--
-
-	pendingGauge.WithValues(eqc.EndpointName).Dec(1)
+	eqc.Pending -= len(events)
 }

 // endpoints is global registry of endpoints used to report metrics to expvar
@@ -177,7 +149,4 @@ func init() {
 	}))

 	registry.(*expvar.Map).Set("notifications", &notifications)
-
-	// register prometheus metrics
-	metrics.Register(prometheus.NotificationsNamespace)
 }


@@ -4,16 +4,107 @@ import (
 	"container/list"
 	"fmt"
 	"sync"
+	"time"

-	events "github.com/docker/go-events"
 	"github.com/sirupsen/logrus"
 )

+// NOTE(stevvooe): This file contains definitions for several utility sinks.
+// Typically, the broadcaster is the only sink that should be required
+// externally, but others are suitable for export if the need arises. Albeit,
+// the tight integration with endpoint metrics should be removed.
+
+// Broadcaster sends events to multiple, reliable Sinks. The goal of this
+// component is to dispatch events to configured endpoints. Reliability can be
+// provided by wrapping incoming sinks.
+type Broadcaster struct {
+	sinks  []Sink
+	events chan []Event
+	closed chan chan struct{}
+}
+
+// NewBroadcaster ...
+// Add appends one or more sinks to the list of sinks. The broadcaster
+// behavior will be affected by the properties of the sink. Generally, the
+// sink should accept all messages and deal with reliability on its own. Use
+// of EventQueue and RetryingSink should be used here.
+func NewBroadcaster(sinks ...Sink) *Broadcaster {
+	b := Broadcaster{
+		sinks:  sinks,
+		events: make(chan []Event),
+		closed: make(chan chan struct{}),
+	}
+
+	// Start the broadcaster
+	go b.run()
+
+	return &b
+}
+
+// Write accepts a block of events to be dispatched to all sinks. This method
+// will never fail and should never block (hopefully!). The caller cedes the
+// slice memory to the broadcaster and should not modify it after calling
+// write.
+func (b *Broadcaster) Write(events ...Event) error {
+	select {
+	case b.events <- events:
+	case <-b.closed:
+		return ErrSinkClosed
+	}
+	return nil
+}
+
+// Close the broadcaster, ensuring that all messages are flushed to the
+// underlying sink before returning.
+func (b *Broadcaster) Close() error {
+	logrus.Infof("broadcaster: closing")
+	select {
+	case <-b.closed:
+		// already closed
+		return fmt.Errorf("broadcaster: already closed")
+	default:
+		// do a little chan handoff dance to synchronize closing
+		closed := make(chan struct{})
+		b.closed <- closed
+		close(b.closed)
+		<-closed
+		return nil
+	}
+}
+
+// run is the main broadcast loop, started when the broadcaster is created.
+// Under normal conditions, it waits for events on the event channel. After
+// Close is called, this goroutine will exit.
+func (b *Broadcaster) run() {
+	for {
+		select {
+		case block := <-b.events:
+			for _, sink := range b.sinks {
+				if err := sink.Write(block...); err != nil {
+					logrus.Errorf("broadcaster: error writing events to %v, these events will be lost: %v", sink, err)
+				}
+			}
+		case closing := <-b.closed:
+
+			// close all the underlying sinks
+			for _, sink := range b.sinks {
+				if err := sink.Close(); err != nil {
+					logrus.Errorf("broadcaster: error closing sink %v: %v", sink, err)
+				}
+			}
+			closing <- struct{}{}
+
+			logrus.Debugf("broadcaster: closed")
+			return
+		}
+	}
+}
 // eventQueue accepts all messages into a queue for asynchronous consumption
 // by a sink. It is unbounded and thread safe but the sink must be reliable or
 // events will be dropped.
 type eventQueue struct {
-	sink      events.Sink
+	sink      Sink
 	events    *list.List
 	listeners []eventQueueListener
 	cond      *sync.Cond
@@ -23,13 +114,13 @@ type eventQueue struct {

 // eventQueueListener is called when various events happen on the queue.
 type eventQueueListener interface {
-	ingress(event events.Event)
-	egress(event events.Event)
+	ingress(events ...Event)
+	egress(events ...Event)
 }

 // newEventQueue returns a queue to the provided sink. If the updater is non-
 // nil, it will be called to update pending metrics on ingress and egress.
-func newEventQueue(sink events.Sink, listeners ...eventQueueListener) *eventQueue {
+func newEventQueue(sink Sink, listeners ...eventQueueListener) *eventQueue {
 	eq := eventQueue{
 		sink:   sink,
 		events: list.New(),
@@ -43,7 +134,7 @@ func newEventQueue(sink Sink, listeners ...eventQueueListener) *eventQueu

 // Write accepts the events into the queue, only failing if the queue has
 // beend closed.
-func (eq *eventQueue) Write(event events.Event) error {
+func (eq *eventQueue) Write(events ...Event) error {
 	eq.mu.Lock()
 	defer eq.mu.Unlock()

@@ -52,9 +143,9 @@ func (eq *eventQueue) Write(events ...Event) error {
 	}

 	for _, listener := range eq.listeners {
-		listener.ingress(event)
+		listener.ingress(events...)
 	}
-	eq.events.PushBack(event)
+	eq.events.PushBack(events)
 	eq.cond.Signal() // signal waiters

 	return nil
@@ -80,18 +171,18 @@ func (eq *eventQueue) Close() error {
 // run is the main goroutine to flush events to the target sink.
 func (eq *eventQueue) run() {
 	for {
-		event := eq.next()
+		block := eq.next()

-		if event == nil {
+		if block == nil {
 			return // nil block means event queue is closed.
 		}

-		if err := eq.sink.Write(event); err != nil {
+		if err := eq.sink.Write(block...); err != nil {
 			logrus.Warnf("eventqueue: error writing events to %v, these events will be lost: %v", eq.sink, err)
 		}

 		for _, listener := range eq.listeners {
-			listener.egress(event)
+			listener.egress(block...)
 		}
 	}
 }
@@ -99,7 +190,7 @@ func (eq *eventQueue) run() {
 // next encompasses the critical section of the run loop. When the queue is
 // empty, it will block on the condition. If new data arrives, it will wake
 // and return a block. When closed, a nil slice will be returned.
-func (eq *eventQueue) next() events.Event {
+func (eq *eventQueue) next() []Event {
 	eq.mu.Lock()
 	defer eq.mu.Unlock()

@@ -113,7 +204,7 @@ func (eq *eventQueue) next() []Event {
 	}

 	front := eq.events.Front()
-	block := front.Value.(events.Event)
+	block := front.Value.([]Event)
 	eq.events.Remove(front)

 	return block
@@ -122,12 +213,12 @@ func (eq *eventQueue) next() []Event {
 // ignoredSink discards events with ignored target media types and actions.
 // passes the rest along.
 type ignoredSink struct {
-	events.Sink
+	Sink

 	ignoreMediaTypes map[string]bool
 	ignoreActions    map[string]bool
 }

-func newIgnoredSink(sink events.Sink, ignored []string, ignoreActions []string) events.Sink {
+func newIgnoredSink(sink Sink, ignored []string, ignoreActions []string) Sink {
 	if len(ignored) == 0 {
 		return sink
 	}
@@ -151,14 +242,151 @@ func newIgnoredSink(sink Sink, ignored []string, ignoreActions []string) Sink {
 // Write discards events with ignored target media types and passes the rest
 // along.
-func (imts *ignoredSink) Write(event events.Event) error {
-	if imts.ignoreMediaTypes[event.(Event).Target.MediaType] || imts.ignoreActions[event.(Event).Action] {
+func (imts *ignoredSink) Write(events ...Event) error {
+	var kept []Event
+	for _, e := range events {
+		if !imts.ignoreMediaTypes[e.Target.MediaType] {
+			kept = append(kept, e)
+		}
+	}
+	if len(kept) == 0 {
 		return nil
 	}
-	return imts.Sink.Write(event)
+
+	var results []Event
+	for _, e := range kept {
+		if !imts.ignoreActions[e.Action] {
+			results = append(results, e)
+		}
+	}
+	if len(results) == 0 {
+		return nil
+	}
+	return imts.Sink.Write(results...)
 }

-func (imts *ignoredSink) Close() error {
-	return nil
-}
+// retryingSink retries the write until success or an ErrSinkClosed is
+// returned. Underlying sink must have p > 0 of succeeding or the sink will
+// block. Internally, it is a circuit breaker retries to manage reset.
+// Concurrent calls to a retrying sink are serialized through the sink,
+// meaning that if one is in-flight, another will not proceed.
+type retryingSink struct {
+	mu     sync.Mutex
+	sink   Sink
+	closed bool
+
+	// circuit breaker heuristics
+	failures struct {
+		threshold int
+		recent    int
+		last      time.Time
+		backoff   time.Duration // time after which we retry after failure.
+	}
+}
+
+type retryingSinkListener interface {
+	active(events ...Event)
+	retry(events ...Event)
+}
+
+// TODO(stevvooe): We are using circuit break here, which actually doesn't
+// make a whole lot of sense for this use case, since we always retry. Move
+// this to use bounded exponential backoff.
+
+// newRetryingSink returns a sink that will retry writes to a sink, backing
+// off on failure. Parameters threshold and backoff adjust the behavior of the
+// circuit breaker.
+func newRetryingSink(sink Sink, threshold int, backoff time.Duration) *retryingSink {
+	rs := &retryingSink{
+		sink: sink,
+	}
+	rs.failures.threshold = threshold
+	rs.failures.backoff = backoff
+
+	return rs
+}
+
+// Write attempts to flush the events to the downstream sink until it succeeds
+// or the sink is closed.
+func (rs *retryingSink) Write(events ...Event) error {
+	rs.mu.Lock()
+	defer rs.mu.Unlock()
+
+retry:
+
+	if rs.closed {
+		return ErrSinkClosed
+	}
+
+	if !rs.proceed() {
+		logrus.Warnf("%v encountered too many errors, backing off", rs.sink)
+		rs.wait(rs.failures.backoff)
+		goto retry
+	}
+
+	if err := rs.write(events...); err != nil {
+		if err == ErrSinkClosed {
+			// terminal!
+			return err
+		}
+
+		logrus.Errorf("retryingsink: error writing events: %v, retrying", err)
+		goto retry
+	}
+
+	return nil
+}
+
+// Close closes the sink and the underlying sink.
+func (rs *retryingSink) Close() error {
+	rs.mu.Lock()
+	defer rs.mu.Unlock()
+
+	if rs.closed {
+		return fmt.Errorf("retryingsink: already closed")
+	}
+
+	rs.closed = true
+	return rs.sink.Close()
+}
+
+// write provides a helper that dispatches failure and success properly. Used
+// by write as the single-flight write call.
+func (rs *retryingSink) write(events ...Event) error {
+	if err := rs.sink.Write(events...); err != nil {
+		rs.failure()
+		return err
+	}
+
+	rs.reset()
+	return nil
+}
+
+// wait backoff time against the sink, unlocking so others can proceed. Should
+// only be called by methods that currently have the mutex.
+func (rs *retryingSink) wait(backoff time.Duration) {
+	rs.mu.Unlock()
+	defer rs.mu.Lock()
+
+	// backoff here
+	time.Sleep(backoff)
+}
+
+// reset marks a successful call.
+func (rs *retryingSink) reset() {
+	rs.failures.recent = 0
+	rs.failures.last = time.Time{}
+}
+
+// failure records a failure.
+func (rs *retryingSink) failure() {
+	rs.failures.recent++
+	rs.failures.last = time.Now().UTC()
+}
+
+// proceed returns true if the call should proceed based on circuit breaker
+// heuristics.
+func (rs *retryingSink) proceed() bool {
+	return rs.failures.recent < rs.failures.threshold ||
+		time.Now().UTC().After(rs.failures.last.Add(rs.failures.backoff))
+}


@@ -1,21 +1,72 @@
 package notifications
 
 import (
+	"fmt"
+	"math/rand"
 	"reflect"
 	"sync"
 	"time"
 
-	events "github.com/docker/go-events"
 	"github.com/sirupsen/logrus"
 	"testing"
 )
 
+func TestBroadcaster(t *testing.T) {
+	const nEvents = 1000
+	var sinks []Sink
+
+	for i := 0; i < 10; i++ {
+		sinks = append(sinks, &testSink{})
+	}
+
+	b := NewBroadcaster(sinks...)
+
+	var block []Event
+	var wg sync.WaitGroup
+	for i := 1; i <= nEvents; i++ {
+		block = append(block, createTestEvent("push", "library/test", "blob"))
+
+		if i%10 == 0 && i > 0 {
+			wg.Add(1)
+			go func(block ...Event) {
+				if err := b.Write(block...); err != nil {
+					t.Errorf("error writing block of length %d: %v", len(block), err)
+				}
+				wg.Done()
+			}(block...)
+
+			block = nil
+		}
+	}
+
+	wg.Wait() // Wait until writes complete
+	if t.Failed() {
+		t.FailNow()
+	}
+	checkClose(t, b)
+
+	// Iterate through the sinks and check that they all have the expected length.
+	for _, sink := range sinks {
+		ts := sink.(*testSink)
+		ts.mu.Lock()
+		defer ts.mu.Unlock()
+
+		if len(ts.events) != nEvents {
+			t.Fatalf("not all events ended up in testsink: len(testSink) == %d, not %d", len(ts.events), nEvents)
+		}
+
+		if !ts.closed {
+			t.Fatalf("sink should have been closed")
+		}
+	}
+}
+
 func TestEventQueue(t *testing.T) {
 	const nevents = 1000
 	var ts testSink
-	metrics := newSafeMetrics("")
+	metrics := newSafeMetrics()
 	eq := newEventQueue(
 		// delayed sync simulates destination slower than channel comms
 		&delayedSink{
@@ -24,16 +75,20 @@ func TestEventQueue(t *testing.T) {
 		}, metrics.eventQueueListener())
 
 	var wg sync.WaitGroup
-	var event events.Event
+	var block []Event
 	for i := 1; i <= nevents; i++ {
-		event = createTestEvent("push", "library/test", "blob")
-		wg.Add(1)
-		go func(event events.Event) {
-			if err := eq.Write(event); err != nil {
-				t.Errorf("error writing event block: %v", err)
-			}
-			wg.Done()
-		}(event)
+		block = append(block, createTestEvent("push", "library/test", "blob"))
+
+		if i%10 == 0 && i > 0 {
+			wg.Add(1)
+			go func(block ...Event) {
+				if err := eq.Write(block...); err != nil {
+					t.Errorf("error writing event block: %v", err)
+				}
+				wg.Done()
+			}(block...)
+
+			block = nil
+		}
 	}
 
 	wg.Wait()
@@ -47,8 +102,8 @@ func TestEventQueue(t *testing.T) {
 	metrics.Lock()
 	defer metrics.Unlock()
 
-	if ts.count != nevents {
-		t.Fatalf("events did not make it to the sink: %d != %d", ts.count, 1000)
+	if len(ts.events) != nevents {
+		t.Fatalf("events did not make it to the sink: %d != %d", len(ts.events), 1000)
 	}
 
 	if !ts.closed {
@@ -71,14 +126,16 @@ func TestIgnoredSink(t *testing.T) {
 	type testcase struct {
 		ignoreMediaTypes []string
 		ignoreActions    []string
-		expected         events.Event
+		expected         []Event
 	}
 
 	cases := []testcase{
-		{nil, nil, blob},
-		{[]string{"other"}, []string{"other"}, blob},
+		{nil, nil, []Event{blob, manifest}},
+		{[]string{"other"}, []string{"other"}, []Event{blob, manifest}},
+		{[]string{"blob"}, []string{"other"}, []Event{manifest}},
 		{[]string{"blob", "manifest"}, []string{"other"}, nil},
-		{[]string{"other"}, []string{"pull"}, blob},
+		{[]string{"other"}, []string{"push"}, []Event{manifest}},
+		{[]string{"other"}, []string{"pull"}, []Event{blob}},
 		{[]string{"other"}, []string{"pull", "push"}, nil},
 	}
@@ -86,54 +143,78 @@ func TestIgnoredSink(t *testing.T) {
 		ts := &testSink{}
 		s := newIgnoredSink(ts, c.ignoreMediaTypes, c.ignoreActions)
 
-		if err := s.Write(blob); err != nil {
+		if err := s.Write(blob, manifest); err != nil {
 			t.Fatalf("error writing event: %v", err)
 		}
 
 		ts.mu.Lock()
-		if !reflect.DeepEqual(ts.event, c.expected) {
-			t.Fatalf("unexpected event: %#v != %#v", ts.event, c.expected)
+		if !reflect.DeepEqual(ts.events, c.expected) {
+			t.Fatalf("unexpected events: %#v != %#v", ts.events, c.expected)
 		}
 		ts.mu.Unlock()
 	}
+}
 
-	cases = []testcase{
-		{nil, nil, manifest},
-		{[]string{"other"}, []string{"other"}, manifest},
-		{[]string{"blob"}, []string{"other"}, manifest},
-		{[]string{"blob", "manifest"}, []string{"other"}, nil},
-		{[]string{"other"}, []string{"push"}, manifest},
-		{[]string{"other"}, []string{"pull", "push"}, nil},
-	}
-
-	for _, c := range cases {
-		ts := &testSink{}
-		s := newIgnoredSink(ts, c.ignoreMediaTypes, c.ignoreActions)
-
-		if err := s.Write(manifest); err != nil {
-			t.Fatalf("error writing event: %v", err)
-		}
-
-		ts.mu.Lock()
-		if !reflect.DeepEqual(ts.event, c.expected) {
-			t.Fatalf("unexpected event: %#v != %#v", ts.event, c.expected)
-		}
-		ts.mu.Unlock()
-	}
-}
+func TestRetryingSink(t *testing.T) {
+	// Make a sync that fails most of the time, ensuring that all the events
+	// make it through.
+	var ts testSink
+	flaky := &flakySink{
+		rate: 1.0, // start out always failing.
+		Sink: &ts,
+	}
+	s := newRetryingSink(flaky, 3, 10*time.Millisecond)
+
+	var wg sync.WaitGroup
+	var block []Event
+	for i := 1; i <= 100; i++ {
+		block = append(block, createTestEvent("push", "library/test", "blob"))
+
+		// Above 50, set the failure rate lower
+		if i > 50 {
+			s.mu.Lock()
+			flaky.rate = 0.90
+			s.mu.Unlock()
+		}
+
+		if i%10 == 0 && i > 0 {
+			wg.Add(1)
+			go func(block ...Event) {
+				defer wg.Done()
+				if err := s.Write(block...); err != nil {
+					t.Errorf("error writing event block: %v", err)
+				}
+			}(block...)
+
+			block = nil
+		}
+	}
+
+	wg.Wait()
+	if t.Failed() {
+		t.FailNow()
+	}
+	checkClose(t, s)
+
+	ts.mu.Lock()
+	defer ts.mu.Unlock()
+
+	if len(ts.events) != 100 {
+		t.Fatalf("events not propagated: %d != %d", len(ts.events), 100)
+	}
+}
 type testSink struct {
-	event events.Event
-	count int
+	events []Event
 
 	mu     sync.Mutex
 	closed bool
 }
 
-func (ts *testSink) Write(event events.Event) error {
+func (ts *testSink) Write(events ...Event) error {
 	ts.mu.Lock()
 	defer ts.mu.Unlock()
-	ts.event = event
-	ts.count++
+	ts.events = append(ts.events, events...)
 	return nil
 }
@@ -147,16 +228,29 @@ func (ts *testSink) Close() error {
 }
 
 type delayedSink struct {
-	events.Sink
+	Sink
 	delay time.Duration
 }
 
-func (ds *delayedSink) Write(event events.Event) error {
+func (ds *delayedSink) Write(events ...Event) error {
 	time.Sleep(ds.delay)
-	return ds.Sink.Write(event)
+	return ds.Sink.Write(events...)
 }
 
-func checkClose(t *testing.T, sink events.Sink) {
+type flakySink struct {
+	Sink
+	rate float64
+}
+
+func (fs *flakySink) Write(events ...Event) error {
+	if rand.Float64() < fs.rate {
+		return fmt.Errorf("error writing %d events", len(events))
+	}
+
+	return fs.Sink.Write(events...)
+}
+
+func checkClose(t *testing.T, sink Sink) {
 	if err := sink.Close(); err != nil {
 		t.Fatalf("unexpected error closing: %v", err)
 	}
@@ -167,7 +261,7 @@ func checkClose(t *testing.T, sink events.Sink) {
 	}
 
 	// Write after closed should be an error
-	if err := sink.Write(Event{}); err == nil {
+	if err := sink.Write([]Event{}...); err == nil {
 		t.Fatalf("write after closed did not have an error")
 	} else if err != ErrSinkClosed {
 		t.Fatalf("error should be ErrSinkClosed")


@@ -56,35 +56,6 @@ func ParseNormalizedNamed(s string) (Named, error) {
 	return named, nil
 }
 
-// ParseDockerRef normalizes the image reference following the docker convention. This is added
-// mainly for backward compatibility.
-// The reference returned can only be either tagged or digested. For reference contains both tag
-// and digest, the function returns digested reference, e.g. docker.io/library/busybox:latest@
-// sha256:7cc4b5aefd1d0cadf8d97d4350462ba51c694ebca145b08d7d41b41acc8db5aa will be returned as
-// docker.io/library/busybox@sha256:7cc4b5aefd1d0cadf8d97d4350462ba51c694ebca145b08d7d41b41acc8db5aa.
-func ParseDockerRef(ref string) (Named, error) {
-	named, err := ParseNormalizedNamed(ref)
-	if err != nil {
-		return nil, err
-	}
-	if _, ok := named.(NamedTagged); ok {
-		if canonical, ok := named.(Canonical); ok {
-			// The reference is both tagged and digested, only
-			// return digested.
-			newNamed, err := WithName(canonical.Name())
-			if err != nil {
-				return nil, err
-			}
-			newCanonical, err := WithDigest(newNamed, canonical.Digest())
-			if err != nil {
-				return nil, err
-			}
-			return newCanonical, nil
-		}
-	}
-	return TagNameOnly(named), nil
-}
-
 // splitDockerDomain splits a repository name to domain and remotename string.
 // If no valid domain is found, the default domain is used. Repository name
 // needs to be already validated before.
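The normalization rule that `ParseDockerRef` implements (default to `:latest` when untagged; when a reference carries both a tag and a digest, the digest wins) can be sketched without the package. This toy `normalizeRef` works on plain strings and skips the domain/path normalization the real parser performs, so treat it as an illustration only:

```go
package main

import (
	"fmt"
	"strings"
)

// normalizeRef sketches the docker reference convention: a reference with
// both a tag and a digest keeps only the digest; a reference with neither
// gains ":latest".
func normalizeRef(ref string) string {
	if i := strings.Index(ref, "@"); i >= 0 {
		name := ref[:i]
		digest := ref[i+1:]
		// Drop the tag when a digest is present. The "/" check avoids
		// mistaking a registry port (localhost:5000/...) for a tag.
		if j := strings.LastIndex(name, ":"); j >= 0 && !strings.Contains(name[j:], "/") {
			name = name[:j]
		}
		return name + "@" + digest
	}
	if j := strings.LastIndex(ref, ":"); j >= 0 && !strings.Contains(ref[j:], "/") {
		return ref // already tagged
	}
	return ref + ":latest"
}

func main() {
	fmt.Println(normalizeRef("busybox"))
	fmt.Println(normalizeRef("busybox:1.36@sha256:abc"))
}
```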


@@ -623,83 +623,3 @@ func TestMatch(t *testing.T) {
 		}
 	}
 }
-
-func TestParseDockerRef(t *testing.T) {
-	testcases := []struct {
-		name     string
-		input    string
-		expected string
-	}{
-		{
-			name:     "nothing",
-			input:    "busybox",
-			expected: "docker.io/library/busybox:latest",
-		},
-		{
-			name:     "tag only",
-			input:    "busybox:latest",
-			expected: "docker.io/library/busybox:latest",
-		},
-		{
-			name:     "digest only",
-			input:    "busybox@sha256:e6693c20186f837fc393390135d8a598a96a833917917789d63766cab6c59582",
-			expected: "docker.io/library/busybox@sha256:e6693c20186f837fc393390135d8a598a96a833917917789d63766cab6c59582",
-		},
-		{
-			name:     "path only",
-			input:    "library/busybox",
-			expected: "docker.io/library/busybox:latest",
-		},
-		{
-			name:     "hostname only",
-			input:    "docker.io/busybox",
-			expected: "docker.io/library/busybox:latest",
-		},
-		{
-			name:     "no tag",
-			input:    "docker.io/library/busybox",
-			expected: "docker.io/library/busybox:latest",
-		},
-		{
-			name:     "no path",
-			input:    "docker.io/busybox:latest",
-			expected: "docker.io/library/busybox:latest",
-		},
-		{
-			name:     "no hostname",
-			input:    "library/busybox:latest",
-			expected: "docker.io/library/busybox:latest",
-		},
-		{
-			name:     "full reference with tag",
-			input:    "docker.io/library/busybox:latest",
-			expected: "docker.io/library/busybox:latest",
-		},
-		{
-			name:     "gcr reference without tag",
-			input:    "gcr.io/library/busybox",
-			expected: "gcr.io/library/busybox:latest",
-		},
-		{
-			name:     "both tag and digest",
-			input:    "gcr.io/library/busybox:latest@sha256:e6693c20186f837fc393390135d8a598a96a833917917789d63766cab6c59582",
-			expected: "gcr.io/library/busybox@sha256:e6693c20186f837fc393390135d8a598a96a833917917789d63766cab6c59582",
-		},
-	}
-	for _, test := range testcases {
-		t.Run(test.name, func(t *testing.T) {
-			normalized, err := ParseDockerRef(test.input)
-			if err != nil {
-				t.Fatal(err)
-			}
-			output := normalized.String()
-			if output != test.expected {
-				t.Fatalf("expected %q to be parsed as %v, got %v", test.input, test.expected, output)
-			}
-			_, err = Parse(output)
-			if err != nil {
-				t.Fatalf("%q should be a valid reference, but got an error: %v", output, err)
-			}
-		})
-	}
-}


@@ -205,7 +205,7 @@ func Parse(s string) (Reference, error) {
 	var repo repository
 
 	nameMatch := anchoredNameRegexp.FindStringSubmatch(matches[1])
-	if len(nameMatch) == 3 {
+	if nameMatch != nil && len(nameMatch) == 3 {
 		repo.domain = nameMatch[1]
 		repo.path = nameMatch[2]
 	} else {


@@ -639,7 +639,7 @@ func TestParseNamed(t *testing.T) {
 			failf("error parsing name: %s", err)
 			continue
 		} else if err == nil && testcase.err != nil {
-			failf("parsing succeeded: expected error %v", testcase.err)
+			failf("parsing succeded: expected error %v", testcase.err)
 			continue
 		} else if err != testcase.err {
 			failf("unexpected error %v, expected %v", err, testcase.err)


@@ -207,11 +207,11 @@ func (errs Errors) MarshalJSON() ([]byte, error) {
 	for _, daErr := range errs {
 		var err Error
 
-		switch daErr := daErr.(type) {
+		switch daErr.(type) {
 		case ErrorCode:
-			err = daErr.WithDetail(nil)
+			err = daErr.(ErrorCode).WithDetail(nil)
 		case Error:
-			err = daErr
+			err = daErr.(Error)
 		default:
 			err = ErrorCodeUnknown.WithDetail(daErr)


@@ -9,7 +9,7 @@ import (
 // and sets the content-type header to 'application/json'. It will handle
 // ErrorCoder and Errors, and if necessary will create an envelope.
 func ServeJSON(w http.ResponseWriter, err error) error {
-	w.Header().Set("Content-Type", "application/json")
+	w.Header().Set("Content-Type", "application/json; charset=utf-8")
 
 	var sc int
 
 	switch errs := err.(type) {


@@ -126,7 +126,7 @@ var (
 				},
 			},
 			Body: BodyDescriptor{
-				ContentType: "application/json",
+				ContentType: "application/json; charset=utf-8",
 				Format:      errorsBody,
 			},
 			ErrorCodes: []errcode.ErrorCode{
@@ -147,7 +147,7 @@ var (
 				},
 			},
 			Body: BodyDescriptor{
-				ContentType: "application/json",
+				ContentType: "application/json; charset=utf-8",
 				Format:      errorsBody,
 			},
 			ErrorCodes: []errcode.ErrorCode{
@@ -168,7 +168,7 @@ var (
 				},
 			},
 			Body: BodyDescriptor{
-				ContentType: "application/json",
+				ContentType: "application/json; charset=utf-8",
 				Format:      errorsBody,
 			},
 			ErrorCodes: []errcode.ErrorCode{
@@ -189,7 +189,7 @@ var (
 				},
 			},
 			Body: BodyDescriptor{
-				ContentType: "application/json",
+				ContentType: "application/json; charset=utf-8",
 				Format:      errorsBody,
 			},
 			ErrorCodes: []errcode.ErrorCode{
@@ -441,7 +441,7 @@ var routeDescriptors = []RouteDescriptor{
 				},
 			},
 			Body: BodyDescriptor{
-				ContentType: "application/json",
+				ContentType: "application/json; charset=utf-8",
 				Format: `{
     "name": <name>,
     "tags": [
@@ -478,7 +478,7 @@ var routeDescriptors = []RouteDescriptor{
 				linkHeader,
 			},
 			Body: BodyDescriptor{
-				ContentType: "application/json",
+				ContentType: "application/json; charset=utf-8",
 				Format: `{
     "name": <name>,
     "tags": [
@@ -541,7 +541,7 @@ var routeDescriptors = []RouteDescriptor{
 				ErrorCodeTagInvalid,
 			},
 			Body: BodyDescriptor{
-				ContentType: "application/json",
+				ContentType: "application/json; charset=utf-8",
 				Format:      errorsBody,
 			},
 		},
@@ -592,7 +592,7 @@ var routeDescriptors = []RouteDescriptor{
 			Description: "The received manifest was invalid in some way, as described by the error codes. The client should resolve the issue and retry the request.",
 			StatusCode:  http.StatusBadRequest,
 			Body: BodyDescriptor{
-				ContentType: "application/json",
+				ContentType: "application/json; charset=utf-8",
 				Format:      errorsBody,
 			},
 			ErrorCodes: []errcode.ErrorCode{
@@ -615,7 +615,7 @@ var routeDescriptors = []RouteDescriptor{
 				ErrorCodeBlobUnknown,
 			},
 			Body: BodyDescriptor{
-				ContentType: "application/json",
+				ContentType: "application/json; charset=utf-8",
 				Format: `{
     "errors:" [{
             "code": "BLOB_UNKNOWN",
@@ -669,7 +669,7 @@ var routeDescriptors = []RouteDescriptor{
 				ErrorCodeTagInvalid,
 			},
 			Body: BodyDescriptor{
-				ContentType: "application/json",
+				ContentType: "application/json; charset=utf-8",
 				Format:      errorsBody,
 			},
 		},
@@ -686,7 +686,7 @@ var routeDescriptors = []RouteDescriptor{
 				ErrorCodeManifestUnknown,
 			},
 			Body: BodyDescriptor{
-				ContentType: "application/json",
+				ContentType: "application/json; charset=utf-8",
 				Format:      errorsBody,
 			},
 		},
@@ -766,7 +766,7 @@ var routeDescriptors = []RouteDescriptor{
 				ErrorCodeDigestInvalid,
 			},
 			Body: BodyDescriptor{
-				ContentType: "application/json",
+				ContentType: "application/json; charset=utf-8",
 				Format:      errorsBody,
 			},
 		},
@@ -774,7 +774,7 @@ var routeDescriptors = []RouteDescriptor{
 			Description: "The blob, identified by `name` and `digest`, is unknown to the registry.",
 			StatusCode:  http.StatusNotFound,
 			Body: BodyDescriptor{
-				ContentType: "application/json",
+				ContentType: "application/json; charset=utf-8",
 				Format:      errorsBody,
 			},
 			ErrorCodes: []errcode.ErrorCode{
@@ -838,7 +838,7 @@ var routeDescriptors = []RouteDescriptor{
 				ErrorCodeDigestInvalid,
 			},
 			Body: BodyDescriptor{
-				ContentType: "application/json",
+				ContentType: "application/json; charset=utf-8",
 				Format:      errorsBody,
 			},
 		},
@@ -849,7 +849,7 @@ var routeDescriptors = []RouteDescriptor{
 				ErrorCodeBlobUnknown,
 			},
 			Body: BodyDescriptor{
-				ContentType: "application/json",
+				ContentType: "application/json; charset=utf-8",
 				Format:      errorsBody,
 			},
 		},
@@ -905,7 +905,7 @@ var routeDescriptors = []RouteDescriptor{
 			Description: "The blob, identified by `name` and `digest`, is unknown to the registry.",
 			StatusCode:  http.StatusNotFound,
 			Body: BodyDescriptor{
-				ContentType: "application/json",
+				ContentType: "application/json; charset=utf-8",
 				Format:      errorsBody,
 			},
 			ErrorCodes: []errcode.ErrorCode{
@@ -917,7 +917,7 @@ var routeDescriptors = []RouteDescriptor{
 			Description: "Blob delete is not allowed because the registry is configured as a pull-through cache or `delete` has been disabled",
 			StatusCode:  http.StatusMethodNotAllowed,
 			Body: BodyDescriptor{
-				ContentType: "application/json",
+				ContentType: "application/json; charset=utf-8",
 				Format:      errorsBody,
 			},
 			ErrorCodes: []errcode.ErrorCode{
@@ -1179,7 +1179,7 @@ var routeDescriptors = []RouteDescriptor{
 				ErrorCodeBlobUploadInvalid,
 			},
 			Body: BodyDescriptor{
-				ContentType: "application/json",
+				ContentType: "application/json; charset=utf-8",
 				Format:      errorsBody,
 			},
 		},
@@ -1190,7 +1190,7 @@ var routeDescriptors = []RouteDescriptor{
 				ErrorCodeBlobUploadUnknown,
 			},
 			Body: BodyDescriptor{
-				ContentType: "application/json",
+				ContentType: "application/json; charset=utf-8",
 				Format:      errorsBody,
 			},
 		},
@@ -1254,7 +1254,7 @@ var routeDescriptors = []RouteDescriptor{
 				ErrorCodeBlobUploadInvalid,
 			},
 			Body: BodyDescriptor{
-				ContentType: "application/json",
+				ContentType: "application/json; charset=utf-8",
 				Format:      errorsBody,
 			},
 		},
@@ -1265,7 +1265,7 @@ var routeDescriptors = []RouteDescriptor{
 				ErrorCodeBlobUploadUnknown,
 			},
 			Body: BodyDescriptor{
-				ContentType: "application/json",
+				ContentType: "application/json; charset=utf-8",
 				Format:      errorsBody,
 			},
 		},
@@ -1336,7 +1336,7 @@ var routeDescriptors = []RouteDescriptor{
 				ErrorCodeBlobUploadInvalid,
 			},
 			Body: BodyDescriptor{
-				ContentType: "application/json",
+				ContentType: "application/json; charset=utf-8",
 				Format:      errorsBody,
 			},
 		},
@@ -1347,7 +1347,7 @@ var routeDescriptors = []RouteDescriptor{
 				ErrorCodeBlobUploadUnknown,
 			},
 			Body: BodyDescriptor{
-				ContentType: "application/json",
+				ContentType: "application/json; charset=utf-8",
 				Format:      errorsBody,
 			},
 		},
@@ -1431,7 +1431,7 @@ var routeDescriptors = []RouteDescriptor{
 				errcode.ErrorCodeUnsupported,
 			},
 			Body: BodyDescriptor{
-				ContentType: "application/json",
+				ContentType: "application/json; charset=utf-8",
 				Format:      errorsBody,
 			},
 		},
@@ -1442,7 +1442,7 @@ var routeDescriptors = []RouteDescriptor{
 				ErrorCodeBlobUploadUnknown,
 			},
 			Body: BodyDescriptor{
-				ContentType: "application/json",
+				ContentType: "application/json; charset=utf-8",
 				Format:      errorsBody,
 			},
 		},
@@ -1488,7 +1488,7 @@ var routeDescriptors = []RouteDescriptor{
 				ErrorCodeBlobUploadInvalid,
 			},
 			Body: BodyDescriptor{
-				ContentType: "application/json",
+				ContentType: "application/json; charset=utf-8",
 				Format:      errorsBody,
 			},
 		},
@@ -1499,7 +1499,7 @@ var routeDescriptors = []RouteDescriptor{
 				ErrorCodeBlobUploadUnknown,
 			},
 			Body: BodyDescriptor{
-				ContentType: "application/json",
+				ContentType: "application/json; charset=utf-8",
 				Format:      errorsBody,
 			},
 		},
@@ -1539,7 +1539,7 @@ var routeDescriptors = []RouteDescriptor{
 				},
 			},
 			Body: BodyDescriptor{
-				ContentType: "application/json",
+				ContentType: "application/json; charset=utf-8",
 				Format: `{
     "repositories": [
         <name>,
@@ -1558,7 +1558,7 @@ var routeDescriptors = []RouteDescriptor{
 		{
 			StatusCode: http.StatusOK,
 			Body: BodyDescriptor{
-				ContentType: "application/json",
+				ContentType: "application/json; charset=utf-8",
 				Format: `{
     "repositories": [
         <name>,


@@ -252,3 +252,15 @@ func appendValuesURL(u *url.URL, values ...url.Values) *url.URL {
 	u.RawQuery = merged.Encode()
 	return u
 }
+
+// appendValues appends the parameters to the url. Panics if the string is not
+// a url.
+func appendValues(u string, values ...url.Values) string {
+	up, err := url.Parse(u)
+	if err != nil {
+		panic(err) // should never happen
+	}
+
+	return appendValuesURL(up, values...).String()
+}


@@ -182,6 +182,11 @@ func TestURLBuilderWithPrefix(t *testing.T) {
 	doTest(false)
 }
 
+type builderFromRequestTestCase struct {
+	request *http.Request
+	base    string
+}
+
 func TestBuilderFromRequest(t *testing.T) {
 	u, err := url.Parse("http://example.com")
 	if err != nil {


@@ -21,7 +21,7 @@
 //			if ctx, err := accessController.Authorized(ctx, access); err != nil {
 //				if challenge, ok := err.(auth.Challenge) {
 //					// Let the challenge write the response.
-//					challenge.SetHeaders(r, w)
+//					challenge.SetHeaders(w)
 //					w.WriteHeader(http.StatusUnauthorized)
 //					return
 //				} else {
@@ -87,7 +87,7 @@ type Challenge interface {
 	// adding the an HTTP challenge header on the response message. Callers
 	// are expected to set the appropriate HTTP status code (e.g. 401)
 	// themselves.
-	SetHeaders(r *http.Request, w http.ResponseWriter)
+	SetHeaders(w http.ResponseWriter)
 }
 
 // AccessController controls access to registry resources based on a request


@@ -111,7 +111,7 @@ type challenge struct {
 var _ auth.Challenge = challenge{}
 
 // SetHeaders sets the basic challenge header on the response.
-func (ch challenge) SetHeaders(r *http.Request, w http.ResponseWriter) {
+func (ch challenge) SetHeaders(w http.ResponseWriter) {
 	w.Header().Set("WWW-Authenticate", fmt.Sprintf("Basic realm=%q", ch.realm))
 }


@@ -50,7 +50,7 @@ func TestBasicAccessController(t *testing.T) {
 		if err != nil {
 			switch err := err.(type) {
 			case auth.Challenge:
-				err.SetHeaders(r, w)
+				err.SetHeaders(w)
 				w.WriteHeader(http.StatusUnauthorized)
 				return
 			default:


@@ -82,7 +82,7 @@ type challenge struct {
 var _ auth.Challenge = challenge{}
 
 // SetHeaders sets a simple bearer challenge on the response.
-func (ch challenge) SetHeaders(r *http.Request, w http.ResponseWriter) {
+func (ch challenge) SetHeaders(w http.ResponseWriter) {
 	header := fmt.Sprintf("Bearer realm=%q,service=%q", ch.realm, ch.service)
 
 	if ch.scope != "" {


@@ -21,7 +21,7 @@ func TestSillyAccessController(t *testing.T) {
 		if err != nil {
 			switch err := err.(type) {
 			case auth.Challenge:
-				err.SetHeaders(r, w)
+				err.SetHeaders(w)
 				w.WriteHeader(http.StatusUnauthorized)
 				return
 			default:


@@ -76,11 +76,10 @@ var (
 
 // authChallenge implements the auth.Challenge interface.
 type authChallenge struct {
-	err          error
-	realm        string
-	autoRedirect bool
-	service      string
-	accessSet    accessSet
+	err       error
+	realm     string
+	service   string
+	accessSet accessSet
 }
 
 var _ auth.Challenge = authChallenge{}
@@ -98,14 +97,8 @@ func (ac authChallenge) Status() int {
 // challengeParams constructs the value to be used in
 // the WWW-Authenticate response challenge header.
 // See https://tools.ietf.org/html/rfc6750#section-3
-func (ac authChallenge) challengeParams(r *http.Request) string {
-	var realm string
-	if ac.autoRedirect {
-		realm = fmt.Sprintf("https://%s/auth/token", r.Host)
-	} else {
-		realm = ac.realm
-	}
-	str := fmt.Sprintf("Bearer realm=%q,service=%q", realm, ac.service)
+func (ac authChallenge) challengeParams() string {
+	str := fmt.Sprintf("Bearer realm=%q,service=%q", ac.realm, ac.service)
 
 	if scope := ac.accessSet.scopeParam(); scope != "" {
 		str = fmt.Sprintf("%s,scope=%q", str, scope)
@@ -121,25 +114,23 @@ func (ac authChallenge) challengeParams(r *http.Request) string {
 }
 
 // SetChallenge sets the WWW-Authenticate value for the response.
-func (ac authChallenge) SetHeaders(r *http.Request, w http.ResponseWriter) {
-	w.Header().Add("WWW-Authenticate", ac.challengeParams(r))
+func (ac authChallenge) SetHeaders(w http.ResponseWriter) {
+	w.Header().Add("WWW-Authenticate", ac.challengeParams())
 }
 
 // accessController implements the auth.AccessController interface.
 type accessController struct {
-	realm        string
-	autoRedirect bool
-	issuer       string
-	service      string
-	rootCerts    *x509.CertPool
-	trustedKeys  map[string]libtrust.PublicKey
+	realm       string
+	issuer      string
+	service     string
+	rootCerts   *x509.CertPool
+	trustedKeys map[string]libtrust.PublicKey
 }
 
 // tokenAccessOptions is a convenience type for handling
 // options to the contstructor of an accessController.
 type tokenAccessOptions struct {
 	realm          string
-	autoRedirect   bool
 	issuer         string
 	service        string
 	rootCertBundle string
@@ -162,15 +153,6 @@ func checkOptions(options map[string]interface{}) (tokenAccessOptions, error) {
 
 	opts.realm, opts.issuer, opts.service, opts.rootCertBundle = vals[0], vals[1], vals[2], vals[3]
 
-	autoRedirectVal, ok := options["autoredirect"]
-	if ok {
-		autoRedirect, ok := autoRedirectVal.(bool)
-		if !ok {
-			return opts, fmt.Errorf("token auth requires a valid option bool: autoredirect")
-		}
-		opts.autoRedirect = autoRedirect
-	}
-
 	return opts, nil
 }
@@ -223,12 +205,11 @@ func newAccessController(options map[string]interface{}) (auth.AccessController,
 	}
 
 	return &accessController{
-		realm:        config.realm,
-		autoRedirect: config.autoRedirect,
-		issuer:       config.issuer,
-		service:      config.service,
-		rootCerts:    rootPool,
-		trustedKeys:  trustedKeys,
+		realm:       config.realm,
+		issuer:      config.issuer,
+		service:     config.service,
+		rootCerts:   rootPool,
+		trustedKeys: trustedKeys,
 	}, nil
 }
@@ -236,10 +217,9 @@ func newAccessController(options map[string]interface{}) (auth.AccessController,
 // for actions on resources described by the given access items.
 func (ac *accessController) Authorized(ctx context.Context, accessItems ...auth.Access) (context.Context, error) {
 	challenge := &authChallenge{
-		realm:        ac.realm,
-		autoRedirect: ac.autoRedirect,
-		service:      ac.service,
-		accessSet:    newAccessSet(accessItems...),
+		realm:     ac.realm,
+		service:   ac.service,
+		accessSet: newAccessSet(accessItems...),
 	}
 
 	req, err := dcontext.GetRequest(ctx)


@@ -333,7 +333,6 @@ func TestAccessController(t *testing.T) {
 		"issuer":         issuer,
 		"service":        service,
 		"rootcertbundle": rootCertBundleFilename,
-		"autoredirect":   false,
 	}
 
 	accessController, err := newAccessController(options)
@ -519,7 +518,6 @@ func TestNewAccessControllerPemBlock(t *testing.T) {
"issuer": issuer, "issuer": issuer,
"service": service, "service": service,
"rootcertbundle": rootCertBundleFilename, "rootcertbundle": rootCertBundleFilename,
"autoredirect": false,
} }
ac, err := newAccessController(options) ac, err := newAccessController(options)

View file

@@ -117,8 +117,8 @@ func init() {
		var t octetType
		isCtl := c <= 31 || c == 127
		isChar := 0 <= c && c <= 127
-		isSeparator := strings.ContainsRune(" \t\"(),/:;<=>?@[]\\{}", rune(c))
-		if strings.ContainsRune(" \t\r\n", rune(c)) {
+		isSeparator := strings.IndexRune(" \t\"(),/:;<=>?@[]\\{}", rune(c)) >= 0
+		if strings.IndexRune(" \t\r\n", rune(c)) >= 0 {
			t |= isSpace
		}
		if isChar && !isCtl && !isSeparator {
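The change above swaps `strings.IndexRune(s, r) >= 0` for `strings.ContainsRune(s, r)`; the two are equivalent, `ContainsRune` simply states the intent directly. A small sketch of the separator check as it classifies HTTP header octets:

```go
package main

import (
	"fmt"
	"strings"
)

// isSeparator reports whether c is one of the HTTP token separator
// characters, using ContainsRune as in the newer side of the diff.
func isSeparator(c byte) bool {
	return strings.ContainsRune(" \t\"(),/:;<=>?@[]\\{}", rune(c))
}

func main() {
	for _, c := range []byte{',', '@', 'a'} {
		fmt.Printf("%q separator=%v\n", c, isSeparator(c))
	}
}
```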

View file

@@ -366,10 +366,6 @@ func (th *tokenHandler) fetchTokenWithOAuth(realm *url.URL, refreshToken, servic
		return "", time.Time{}, fmt.Errorf("unable to decode token response: %s", err)
	}
-	if tr.AccessToken == "" {
-		return "", time.Time{}, ErrNoToken
-	}
	if tr.RefreshToken != "" && tr.RefreshToken != refreshToken {
		th.creds.SetRefreshToken(realm, service, tr.RefreshToken)
	}
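The guard removed here rejects an OAuth token response whose `access_token` field is empty instead of caching it. A condensed sketch of that validation, with `errNoToken` standing in for the package's sentinel error:

```go
package main

import (
	"encoding/json"
	"errors"
	"fmt"
)

// errNoToken is a stand-in for the client package's sentinel error.
var errNoToken = errors.New("authorization server did not include a token in the response")

type tokenResponse struct {
	AccessToken  string `json:"access_token"`
	RefreshToken string `json:"refresh_token"`
}

// decodeToken unmarshals the token endpoint body and, as in the diff above,
// treats a missing access_token as an error rather than a usable credential.
func decodeToken(body []byte) (tokenResponse, error) {
	var tr tokenResponse
	if err := json.Unmarshal(body, &tr); err != nil {
		return tr, fmt.Errorf("unable to decode token response: %s", err)
	}
	if tr.AccessToken == "" {
		return tr, errNoToken
	}
	return tr, nil
}

func main() {
	_, err := decodeToken([]byte(`{"refresh_token":"abc"}`))
	fmt.Println(err)
}
```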

View file

@@ -466,7 +466,7 @@ func TestEndpointAuthorizeTokenBasic(t *testing.T) {
		},
	})
-	authenicate1 := "Basic realm=localhost"
+	authenicate1 := fmt.Sprintf("Basic realm=localhost")
	basicCheck := func(a string) bool {
		return a == fmt.Sprintf("Basic %s", basicAuth(username, password))
	}
@@ -546,7 +546,7 @@ func TestEndpointAuthorizeTokenBasicWithExpiresIn(t *testing.T) {
		},
	})
-	authenicate1 := "Basic realm=localhost"
+	authenicate1 := fmt.Sprintf("Basic realm=localhost")
	tokenExchanges := 0
	basicCheck := func(a string) bool {
		tokenExchanges = tokenExchanges + 1
@@ -706,7 +706,7 @@ func TestEndpointAuthorizeTokenBasicWithExpiresInAndIssuedAt(t *testing.T) {
		},
	})
-	authenicate1 := "Basic realm=localhost"
+	authenicate1 := fmt.Sprintf("Basic realm=localhost")
	tokenExchanges := 0
	basicCheck := func(a string) bool {
		tokenExchanges = tokenExchanges + 1
@@ -835,7 +835,7 @@ func TestEndpointAuthorizeBasic(t *testing.T) {
	username := "user1"
	password := "funSecretPa$$word"
-	authenicate := "Basic realm=localhost"
+	authenicate := fmt.Sprintf("Basic realm=localhost")
	validCheck := func(a string) bool {
		return a == fmt.Sprintf("Basic %s", basicAuth(username, password))
	}

View file

@@ -64,8 +64,8 @@ func (hbu *httpBlobUpload) ReadFrom(r io.Reader) (n int64, err error) {
		return 0, fmt.Errorf("bad range format: %s", rng)
	}
-	hbu.offset += end - start + 1
	return (end - start + 1), nil
}
func (hbu *httpBlobUpload) Write(p []byte) (n int, err error) {
@@ -99,8 +99,8 @@ func (hbu *httpBlobUpload) Write(p []byte) (n int, err error) {
		return 0, fmt.Errorf("bad range format: %s", rng)
	}
-	hbu.offset += int64(end - start + 1)
	return (end - start + 1), nil
}
func (hbu *httpBlobUpload) Size() int64 {
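The lines removed here advance `hbu.offset` by the byte count the server acknowledged in its `Range` header, which has the inclusive form `start-end`. A sketch of that arithmetic:

```go
package main

import "fmt"

// ackedBytes parses an inclusive "start-end" Range value and returns how
// many bytes the server acknowledged, as in the offset bookkeeping above.
func ackedBytes(rng string) (int64, error) {
	var start, end int64
	if n, err := fmt.Sscanf(rng, "%d-%d", &start, &end); err != nil || n != 2 {
		return 0, fmt.Errorf("bad range format: %s", rng)
	}
	return end - start + 1, nil
}

func main() {
	n, err := ackedBytes("0-63")
	fmt.Println(n, err)
}
```

The `+ 1` is easy to miss: a range of `0-63` covers 64 bytes, since both endpoints are included.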

View file

@@ -8,7 +8,7 @@ import (
	"github.com/docker/distribution"
	"github.com/docker/distribution/registry/api/errcode"
-	v2 "github.com/docker/distribution/registry/api/v2"
+	"github.com/docker/distribution/registry/api/v2"
	"github.com/docker/distribution/testutil"
)
@@ -209,286 +209,3 @@ func TestUploadReadFrom(t *testing.T) {
		t.Fatalf("Unexpected response status: %s, expected %s", uploadErr.Status, expected)
	}
}
func TestUploadSize(t *testing.T) {
_, b := newRandomBlob(64)
readFromLocationPath := "/v2/test/upload/readfrom/uploads/testid"
writeLocationPath := "/v2/test/upload/readfrom/uploads/testid"
m := testutil.RequestResponseMap([]testutil.RequestResponseMapping{
{
Request: testutil.Request{
Method: "GET",
Route: "/v2/",
},
Response: testutil.Response{
StatusCode: http.StatusOK,
Headers: http.Header(map[string][]string{
"Docker-Distribution-API-Version": {"registry/2.0"},
}),
},
},
{
Request: testutil.Request{
Method: "PATCH",
Route: readFromLocationPath,
Body: b,
},
Response: testutil.Response{
StatusCode: http.StatusAccepted,
Headers: http.Header(map[string][]string{
"Docker-Upload-UUID": {"46603072-7a1b-4b41-98f9-fd8a7da89f9b"},
"Location": {readFromLocationPath},
"Range": {"0-63"},
}),
},
},
{
Request: testutil.Request{
Method: "PATCH",
Route: writeLocationPath,
Body: b,
},
Response: testutil.Response{
StatusCode: http.StatusAccepted,
Headers: http.Header(map[string][]string{
"Docker-Upload-UUID": {"46603072-7a1b-4b41-98f9-fd8a7da89f9b"},
"Location": {writeLocationPath},
"Range": {"0-63"},
}),
},
},
})
e, c := testServer(m)
defer c()
// Writing with ReadFrom
blobUpload := &httpBlobUpload{
client: &http.Client{},
location: e + readFromLocationPath,
}
if blobUpload.Size() != 0 {
t.Fatalf("Wrong size returned from Size: %d, expected 0", blobUpload.Size())
}
_, err := blobUpload.ReadFrom(bytes.NewReader(b))
if err != nil {
t.Fatalf("Error calling ReadFrom: %s", err)
}
if blobUpload.Size() != 64 {
t.Fatalf("Wrong size returned from Size: %d, expected 64", blobUpload.Size())
}
// Writing with Write
blobUpload = &httpBlobUpload{
client: &http.Client{},
location: e + writeLocationPath,
}
_, err = blobUpload.Write(b)
if err != nil {
t.Fatalf("Error calling Write: %s", err)
}
if blobUpload.Size() != 64 {
t.Fatalf("Wrong size returned from Size: %d, expected 64", blobUpload.Size())
}
}
func TestUploadWrite(t *testing.T) {
_, b := newRandomBlob(64)
repo := "test/upload/write"
locationPath := fmt.Sprintf("/v2/%s/uploads/testid", repo)
m := testutil.RequestResponseMap([]testutil.RequestResponseMapping{
{
Request: testutil.Request{
Method: "GET",
Route: "/v2/",
},
Response: testutil.Response{
StatusCode: http.StatusOK,
Headers: http.Header(map[string][]string{
"Docker-Distribution-API-Version": {"registry/2.0"},
}),
},
},
// Test Valid case
{
Request: testutil.Request{
Method: "PATCH",
Route: locationPath,
Body: b,
},
Response: testutil.Response{
StatusCode: http.StatusAccepted,
Headers: http.Header(map[string][]string{
"Docker-Upload-UUID": {"46603072-7a1b-4b41-98f9-fd8a7da89f9b"},
"Location": {locationPath},
"Range": {"0-63"},
}),
},
},
// Test invalid range
{
Request: testutil.Request{
Method: "PATCH",
Route: locationPath,
Body: b,
},
Response: testutil.Response{
StatusCode: http.StatusAccepted,
Headers: http.Header(map[string][]string{
"Docker-Upload-UUID": {"46603072-7a1b-4b41-98f9-fd8a7da89f9b"},
"Location": {locationPath},
"Range": {""},
}),
},
},
// Test 404
{
Request: testutil.Request{
Method: "PATCH",
Route: locationPath,
Body: b,
},
Response: testutil.Response{
StatusCode: http.StatusNotFound,
},
},
// Test 400 valid json
{
Request: testutil.Request{
Method: "PATCH",
Route: locationPath,
Body: b,
},
Response: testutil.Response{
StatusCode: http.StatusBadRequest,
Body: []byte(`
{ "errors":
[
{
"code": "BLOB_UPLOAD_INVALID",
"message": "blob upload invalid",
"detail": "more detail"
}
]
} `),
},
},
// Test 400 invalid json
{
Request: testutil.Request{
Method: "PATCH",
Route: locationPath,
Body: b,
},
Response: testutil.Response{
StatusCode: http.StatusBadRequest,
Body: []byte("something bad happened"),
},
},
// Test 500
{
Request: testutil.Request{
Method: "PATCH",
Route: locationPath,
Body: b,
},
Response: testutil.Response{
StatusCode: http.StatusInternalServerError,
},
},
})
e, c := testServer(m)
defer c()
blobUpload := &httpBlobUpload{
client: &http.Client{},
}
// Valid case
blobUpload.location = e + locationPath
n, err := blobUpload.Write(b)
if err != nil {
t.Fatalf("Error calling Write: %s", err)
}
if n != 64 {
t.Fatalf("Wrong length returned from Write: %d, expected 64", n)
}
// Bad range
blobUpload.location = e + locationPath
_, err = blobUpload.Write(b)
if err == nil {
t.Fatalf("Expected error when bad range received")
}
// 404
blobUpload.location = e + locationPath
_, err = blobUpload.Write(b)
if err == nil {
t.Fatalf("Expected error when not found")
}
if err != distribution.ErrBlobUploadUnknown {
t.Fatalf("Wrong error thrown: %s, expected %s", err, distribution.ErrBlobUploadUnknown)
}
// 400 valid json
blobUpload.location = e + locationPath
_, err = blobUpload.Write(b)
if err == nil {
t.Fatalf("Expected error when not found")
}
if uploadErr, ok := err.(errcode.Errors); !ok {
t.Fatalf("Wrong error type %T: %s", err, err)
} else if len(uploadErr) != 1 {
t.Fatalf("Unexpected number of errors: %d, expected 1", len(uploadErr))
} else {
v2Err, ok := uploadErr[0].(errcode.Error)
if !ok {
t.Fatalf("Not an 'Error' type: %#v", uploadErr[0])
}
if v2Err.Code != v2.ErrorCodeBlobUploadInvalid {
t.Fatalf("Unexpected error code: %s, expected %d", v2Err.Code.String(), v2.ErrorCodeBlobUploadInvalid)
}
if expected := "blob upload invalid"; v2Err.Message != expected {
t.Fatalf("Unexpected error message: %q, expected %q", v2Err.Message, expected)
}
if expected := "more detail"; v2Err.Detail.(string) != expected {
t.Fatalf("Unexpected error message: %q, expected %q", v2Err.Detail.(string), expected)
}
}
// 400 invalid json
blobUpload.location = e + locationPath
_, err = blobUpload.Write(b)
if err == nil {
t.Fatalf("Expected error when not found")
}
if uploadErr, ok := err.(*UnexpectedHTTPResponseError); !ok {
t.Fatalf("Wrong error type %T: %s", err, err)
} else {
respStr := string(uploadErr.Response)
if expected := "something bad happened"; respStr != expected {
t.Fatalf("Unexpected response string: %s, expected: %s", respStr, expected)
}
}
// 500
blobUpload.location = e + locationPath
_, err = blobUpload.Write(b)
if err == nil {
t.Fatalf("Expected error when not found")
}
if uploadErr, ok := err.(*UnexpectedHTTPStatusError); !ok {
t.Fatalf("Wrong error type %T: %s", err, err)
} else if expected := "500 " + http.StatusText(http.StatusInternalServerError); uploadErr.Status != expected {
t.Fatalf("Unexpected response status: %s, expected %s", uploadErr.Status, expected)
}
}

View file

@@ -16,7 +16,7 @@ import (
	"github.com/docker/distribution"
	"github.com/docker/distribution/reference"
-	v2 "github.com/docker/distribution/registry/api/v2"
+	"github.com/docker/distribution/registry/api/v2"
	"github.com/docker/distribution/registry/client/transport"
	"github.com/docker/distribution/registry/storage/cache"
	"github.com/docker/distribution/registry/storage/cache/memory"
@@ -667,28 +667,7 @@ func (bs *blobs) Open(ctx context.Context, dgst digest.Digest) (distribution.Rea
}
func (bs *blobs) ServeBlob(ctx context.Context, w http.ResponseWriter, r *http.Request, dgst digest.Digest) error {
-	desc, err := bs.statter.Stat(ctx, dgst)
-	if err != nil {
-		return err
-	}
-	w.Header().Set("Content-Length", strconv.FormatInt(desc.Size, 10))
-	w.Header().Set("Content-Type", desc.MediaType)
-	w.Header().Set("Docker-Content-Digest", dgst.String())
-	w.Header().Set("Etag", dgst.String())
-	if r.Method == http.MethodHead {
-		return nil
-	}
-	blob, err := bs.Open(ctx, dgst)
-	if err != nil {
-		return err
-	}
-	defer blob.Close()
-	_, err = io.CopyN(w, blob, desc.Size)
-	return err
+	panic("not implemented")
}
func (bs *blobs) Put(ctx context.Context, mediaType string, p []byte) (distribution.Descriptor, error) {
@@ -773,14 +752,6 @@ func (bs *blobs) Create(ctx context.Context, options ...distribution.BlobCreateO
	case http.StatusAccepted:
		// TODO(dmcgowan): Check for invalid UUID
		uuid := resp.Header.Get("Docker-Upload-UUID")
-		if uuid == "" {
-			parts := strings.Split(resp.Header.Get("Location"), "/")
-			uuid = parts[len(parts)-1]
-		}
-		if uuid == "" {
-			return nil, errors.New("cannot retrieve docker upload UUID")
-		}
		location, err := sanitizeLocation(resp.Header.Get("Location"), u)
		if err != nil {
			return nil, err
@@ -799,18 +770,7 @@ func (bs *blobs) Create(ctx context.Context, options ...distribution.BlobCreateO
}
func (bs *blobs) Resume(ctx context.Context, id string) (distribution.BlobWriter, error) {
-	location, err := bs.ub.BuildBlobUploadChunkURL(bs.name, id)
-	if err != nil {
-		return nil, err
-	}
-	return &httpBlobUpload{
-		statter:   bs.statter,
-		client:    bs.client,
-		uuid:      id,
-		startedAt: time.Now(),
-		location:  location,
-	}, nil
+	panic("not implemented")
}
func (bs *blobs) Delete(ctx context.Context, dgst digest.Digest) error {

View file

@@ -6,7 +6,6 @@ import (
	"encoding/json"
	"fmt"
	"io"
-	"io/ioutil"
	"log"
	"net/http"
	"net/http/httptest"
@@ -23,7 +22,7 @@ import (
	"github.com/docker/distribution/manifest/schema1"
	"github.com/docker/distribution/reference"
	"github.com/docker/distribution/registry/api/errcode"
-	v2 "github.com/docker/distribution/registry/api/v2"
+	"github.com/docker/distribution/registry/api/v2"
	"github.com/docker/distribution/testutil"
	"github.com/docker/distribution/uuid"
	"github.com/docker/libtrust"
@@ -58,7 +57,6 @@ func addTestFetch(repo string, dgst digest.Digest, content []byte, m *testutil.R
			Body: content,
			Headers: http.Header(map[string][]string{
				"Content-Length": {fmt.Sprint(len(content))},
-				"Content-Type":   {"application/octet-stream"},
				"Last-Modified":  {time.Now().Add(-1 * time.Second).Format(time.ANSIC)},
			}),
		},
@@ -73,7 +71,6 @@ func addTestFetch(repo string, dgst digest.Digest, content []byte, m *testutil.R
			StatusCode: http.StatusOK,
			Headers: http.Header(map[string][]string{
				"Content-Length": {fmt.Sprint(len(content))},
-				"Content-Type":   {"application/octet-stream"},
				"Last-Modified":  {time.Now().Add(-1 * time.Second).Format(time.ANSIC)},
			}),
		},
@@ -83,7 +80,7 @@ func addTestFetch(repo string, dgst digest.Digest, content []byte, m *testutil.R
func addTestCatalog(route string, content []byte, link string, m *testutil.RequestResponseMap) {
	headers := map[string][]string{
		"Content-Length": {strconv.Itoa(len(content))},
-		"Content-Type":   {"application/json"},
+		"Content-Type":   {"application/json; charset=utf-8"},
	}
	if link != "" {
		headers["Link"] = append(headers["Link"], link)
@@ -102,193 +99,6 @@ func addTestCatalog(route string, content []byte, link string, m *testutil.Reque
	})
}
func TestBlobServeBlob(t *testing.T) {
dgst, blob := newRandomBlob(1024)
var m testutil.RequestResponseMap
addTestFetch("test.example.com/repo1", dgst, blob, &m)
e, c := testServer(m)
defer c()
ctx := context.Background()
repo, _ := reference.WithName("test.example.com/repo1")
r, err := NewRepository(repo, e, nil)
if err != nil {
t.Fatal(err)
}
l := r.Blobs(ctx)
resp := httptest.NewRecorder()
req := httptest.NewRequest("GET", "/", nil)
err = l.ServeBlob(ctx, resp, req, dgst)
if err != nil {
t.Errorf("Error serving blob: %s", err.Error())
}
body, err := ioutil.ReadAll(resp.Body)
if err != nil {
t.Errorf("Error reading response body: %s", err.Error())
}
if string(body) != string(blob) {
t.Errorf("Unexpected response body. Got %q, expected %q", string(body), string(blob))
}
expectedHeaders := []struct {
Name string
Value string
}{
{Name: "Content-Length", Value: "1024"},
{Name: "Content-Type", Value: "application/octet-stream"},
{Name: "Docker-Content-Digest", Value: dgst.String()},
{Name: "Etag", Value: dgst.String()},
}
for _, h := range expectedHeaders {
if resp.Header().Get(h.Name) != h.Value {
t.Errorf("Unexpected %s. Got %s, expected %s", h.Name, resp.Header().Get(h.Name), h.Value)
}
}
}
func TestBlobServeBlobHEAD(t *testing.T) {
dgst, blob := newRandomBlob(1024)
var m testutil.RequestResponseMap
addTestFetch("test.example.com/repo1", dgst, blob, &m)
e, c := testServer(m)
defer c()
ctx := context.Background()
repo, _ := reference.WithName("test.example.com/repo1")
r, err := NewRepository(repo, e, nil)
if err != nil {
t.Fatal(err)
}
l := r.Blobs(ctx)
resp := httptest.NewRecorder()
req := httptest.NewRequest("HEAD", "/", nil)
err = l.ServeBlob(ctx, resp, req, dgst)
if err != nil {
t.Errorf("Error serving blob: %s", err.Error())
}
body, err := ioutil.ReadAll(resp.Body)
if err != nil {
t.Errorf("Error reading response body: %s", err.Error())
}
if string(body) != "" {
t.Errorf("Unexpected response body. Got %q, expected %q", string(body), "")
}
expectedHeaders := []struct {
Name string
Value string
}{
{Name: "Content-Length", Value: "1024"},
{Name: "Content-Type", Value: "application/octet-stream"},
{Name: "Docker-Content-Digest", Value: dgst.String()},
{Name: "Etag", Value: dgst.String()},
}
for _, h := range expectedHeaders {
if resp.Header().Get(h.Name) != h.Value {
t.Errorf("Unexpected %s. Got %s, expected %s", h.Name, resp.Header().Get(h.Name), h.Value)
}
}
}
func TestBlobResume(t *testing.T) {
dgst, b1 := newRandomBlob(1024)
id := uuid.Generate().String()
var m testutil.RequestResponseMap
repo, _ := reference.WithName("test.example.com/repo1")
m = append(m, testutil.RequestResponseMapping{
Request: testutil.Request{
Method: "PATCH",
Route: "/v2/" + repo.Name() + "/blobs/uploads/" + id,
Body: b1,
},
Response: testutil.Response{
StatusCode: http.StatusAccepted,
Headers: http.Header(map[string][]string{
"Docker-Content-Digest": {dgst.String()},
"Range": {fmt.Sprintf("0-%d", len(b1)-1)},
}),
},
})
m = append(m, testutil.RequestResponseMapping{
Request: testutil.Request{
Method: "PUT",
Route: "/v2/" + repo.Name() + "/blobs/uploads/" + id,
QueryParams: map[string][]string{
"digest": {dgst.String()},
},
},
Response: testutil.Response{
StatusCode: http.StatusCreated,
Headers: http.Header(map[string][]string{
"Docker-Content-Digest": {dgst.String()},
"Content-Range": {fmt.Sprintf("0-%d", len(b1)-1)},
}),
},
})
m = append(m, testutil.RequestResponseMapping{
Request: testutil.Request{
Method: "HEAD",
Route: "/v2/" + repo.Name() + "/blobs/" + dgst.String(),
},
Response: testutil.Response{
StatusCode: http.StatusOK,
Headers: http.Header(map[string][]string{
"Content-Length": {fmt.Sprint(len(b1))},
"Last-Modified": {time.Now().Add(-1 * time.Second).Format(time.ANSIC)},
}),
},
})
e, c := testServer(m)
defer c()
ctx := context.Background()
r, err := NewRepository(repo, e, nil)
if err != nil {
t.Fatal(err)
}
l := r.Blobs(ctx)
upload, err := l.Resume(ctx, id)
if err != nil {
t.Errorf("Error resuming blob: %s", err.Error())
}
if upload.ID() != id {
t.Errorf("Unexpected UUID %s; expected %s", upload.ID(), id)
}
n, err := upload.ReadFrom(bytes.NewReader(b1))
if err != nil {
t.Fatal(err)
}
if n != int64(len(b1)) {
t.Fatalf("Unexpected ReadFrom length: %d; expected: %d", n, len(b1))
}
blob, err := upload.Commit(ctx, distribution.Descriptor{
Digest: dgst,
Size: int64(len(b1)),
})
if err != nil {
t.Fatal(err)
}
if blob.Size != int64(len(b1)) {
t.Fatalf("Unexpected blob size: %d; expected: %d", blob.Size, len(b1))
}
}
func TestBlobDelete(t *testing.T) {
	dgst, _ := newRandomBlob(1024)
	var m testutil.RequestResponseMap
@@ -342,7 +152,7 @@ func TestBlobFetch(t *testing.T) {
	if err != nil {
		t.Fatal(err)
	}
-	if !bytes.Equal(b, b1) {
+	if bytes.Compare(b, b1) != 0 {
		t.Fatalf("Wrong bytes values fetched: [%d]byte != [%d]byte", len(b), len(b1))
	}
@@ -663,198 +473,6 @@ func TestBlobUploadMonolithic(t *testing.T) {
	}
}
func TestBlobUploadMonolithicDockerUploadUUIDFromURL(t *testing.T) {
dgst, b1 := newRandomBlob(1024)
var m testutil.RequestResponseMap
repo, _ := reference.WithName("test.example.com/uploadrepo")
uploadID := uuid.Generate().String()
m = append(m, testutil.RequestResponseMapping{
Request: testutil.Request{
Method: "POST",
Route: "/v2/" + repo.Name() + "/blobs/uploads/",
},
Response: testutil.Response{
StatusCode: http.StatusAccepted,
Headers: http.Header(map[string][]string{
"Content-Length": {"0"},
"Location": {"/v2/" + repo.Name() + "/blobs/uploads/" + uploadID},
"Range": {"0-0"},
}),
},
})
m = append(m, testutil.RequestResponseMapping{
Request: testutil.Request{
Method: "PATCH",
Route: "/v2/" + repo.Name() + "/blobs/uploads/" + uploadID,
Body: b1,
},
Response: testutil.Response{
StatusCode: http.StatusAccepted,
Headers: http.Header(map[string][]string{
"Location": {"/v2/" + repo.Name() + "/blobs/uploads/" + uploadID},
"Content-Length": {"0"},
"Docker-Content-Digest": {dgst.String()},
"Range": {fmt.Sprintf("0-%d", len(b1)-1)},
}),
},
})
m = append(m, testutil.RequestResponseMapping{
Request: testutil.Request{
Method: "PUT",
Route: "/v2/" + repo.Name() + "/blobs/uploads/" + uploadID,
QueryParams: map[string][]string{
"digest": {dgst.String()},
},
},
Response: testutil.Response{
StatusCode: http.StatusCreated,
Headers: http.Header(map[string][]string{
"Content-Length": {"0"},
"Docker-Content-Digest": {dgst.String()},
"Content-Range": {fmt.Sprintf("0-%d", len(b1)-1)},
}),
},
})
m = append(m, testutil.RequestResponseMapping{
Request: testutil.Request{
Method: "HEAD",
Route: "/v2/" + repo.Name() + "/blobs/" + dgst.String(),
},
Response: testutil.Response{
StatusCode: http.StatusOK,
Headers: http.Header(map[string][]string{
"Content-Length": {fmt.Sprint(len(b1))},
"Last-Modified": {time.Now().Add(-1 * time.Second).Format(time.ANSIC)},
}),
},
})
e, c := testServer(m)
defer c()
ctx := context.Background()
r, err := NewRepository(repo, e, nil)
if err != nil {
t.Fatal(err)
}
l := r.Blobs(ctx)
upload, err := l.Create(ctx)
if err != nil {
t.Fatal(err)
}
if upload.ID() != uploadID {
log.Fatalf("Unexpected UUID %s; expected %s", upload.ID(), uploadID)
}
n, err := upload.ReadFrom(bytes.NewReader(b1))
if err != nil {
t.Fatal(err)
}
if n != int64(len(b1)) {
t.Fatalf("Unexpected ReadFrom length: %d; expected: %d", n, len(b1))
}
blob, err := upload.Commit(ctx, distribution.Descriptor{
Digest: dgst,
Size: int64(len(b1)),
})
if err != nil {
t.Fatal(err)
}
if blob.Size != int64(len(b1)) {
t.Fatalf("Unexpected blob size: %d; expected: %d", blob.Size, len(b1))
}
}
func TestBlobUploadMonolithicNoDockerUploadUUID(t *testing.T) {
dgst, b1 := newRandomBlob(1024)
var m testutil.RequestResponseMap
repo, _ := reference.WithName("test.example.com/uploadrepo")
m = append(m, testutil.RequestResponseMapping{
Request: testutil.Request{
Method: "POST",
Route: "/v2/" + repo.Name() + "/blobs/uploads/",
},
Response: testutil.Response{
StatusCode: http.StatusAccepted,
Headers: http.Header(map[string][]string{
"Content-Length": {"0"},
"Location": {"/v2/" + repo.Name() + "/blobs/uploads/"},
"Range": {"0-0"},
}),
},
})
m = append(m, testutil.RequestResponseMapping{
Request: testutil.Request{
Method: "PATCH",
Route: "/v2/" + repo.Name() + "/blobs/uploads/",
Body: b1,
},
Response: testutil.Response{
StatusCode: http.StatusAccepted,
Headers: http.Header(map[string][]string{
"Location": {"/v2/" + repo.Name() + "/blobs/uploads/"},
"Content-Length": {"0"},
"Docker-Content-Digest": {dgst.String()},
"Range": {fmt.Sprintf("0-%d", len(b1)-1)},
}),
},
})
m = append(m, testutil.RequestResponseMapping{
Request: testutil.Request{
Method: "PUT",
Route: "/v2/" + repo.Name() + "/blobs/uploads/",
QueryParams: map[string][]string{
"digest": {dgst.String()},
},
},
Response: testutil.Response{
StatusCode: http.StatusCreated,
Headers: http.Header(map[string][]string{
"Content-Length": {"0"},
"Docker-Content-Digest": {dgst.String()},
"Content-Range": {fmt.Sprintf("0-%d", len(b1)-1)},
}),
},
})
m = append(m, testutil.RequestResponseMapping{
Request: testutil.Request{
Method: "HEAD",
Route: "/v2/" + repo.Name() + "/blobs/" + dgst.String(),
},
Response: testutil.Response{
StatusCode: http.StatusOK,
Headers: http.Header(map[string][]string{
"Content-Length": {fmt.Sprint(len(b1))},
"Last-Modified": {time.Now().Add(-1 * time.Second).Format(time.ANSIC)},
}),
},
})
e, c := testServer(m)
defer c()
ctx := context.Background()
r, err := NewRepository(repo, e, nil)
if err != nil {
t.Fatal(err)
}
l := r.Blobs(ctx)
upload, err := l.Create(ctx)
if err.Error() != "cannot retrieve docker upload UUID" {
log.Fatalf("expected rejection to retrieve docker upload UUID error. Got %q", err)
}
if upload != nil {
log.Fatal("Expected upload to be nil")
}
}
func TestBlobMount(t *testing.T) {
	dgst, content := newRandomBlob(1024)
	var m testutil.RequestResponseMap
@@ -1444,7 +1062,7 @@ func TestObtainsErrorForMissingTag(t *testing.T) {
			StatusCode: http.StatusNotFound,
			Body:       errBytes,
			Headers: http.Header(map[string][]string{
-				"Content-Type": {"application/json"},
+				"Content-Type": {"application/json; charset=utf-8"},
			}),
		},
	})
@@ -1718,7 +1336,7 @@ func TestSanitizeLocation(t *testing.T) {
			expected: "http://blahalaja.com/v2/foo/baasdf?_state=asdfasfdasdfasdf&digest=foo",
		},
		{
-			description: "ensure new hostname overridden",
+			description: "ensure new hostname overidden",
			location: "https://mwhahaha.com/v2/foo/baasdf?_state=asdfasfdasdfasdf",
			source:   "http://blahalaja.com/v1",
			expected: "https://mwhahaha.com/v2/foo/baasdf?_state=asdfasfdasdfasdf",
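The `TestSanitizeLocation` cases above (a relative `Location` resolved against the source host, an absolute one overriding it) are exactly `net/url` reference resolution. A sketch of the likely underlying behaviour, not necessarily the client's exact implementation:

```go
package main

import (
	"fmt"
	"net/url"
)

// sanitizeLocation resolves a Location header value against the URL the
// request was made to, RFC 3986 style.
func sanitizeLocation(location, source string) (string, error) {
	locationURL, err := url.Parse(location)
	if err != nil {
		return "", err
	}
	sourceURL, err := url.Parse(source)
	if err != nil {
		return "", err
	}
	return sourceURL.ResolveReference(locationURL).String(), nil
}

func main() {
	s, _ := sanitizeLocation("https://mwhahaha.com/v2/foo", "http://blahalaja.com/v1")
	fmt.Println(s)
}
```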

View file

@@ -6,13 +6,6 @@ import (
	"sync"
)
-func identityTransportWrapper(rt http.RoundTripper) http.RoundTripper {
-	return rt
-}
-
-// DefaultTransportWrapper allows a user to wrap every generated transport
-var DefaultTransportWrapper = identityTransportWrapper
-
// RequestModifier represents an object which will do an inplace
// modification of an HTTP request.
type RequestModifier interface {
@@ -38,11 +31,10 @@ func (h headerModifier) ModifyRequest(req *http.Request) error {
// NewTransport creates a new transport which will apply modifiers to
// the request on a RoundTrip call.
func NewTransport(base http.RoundTripper, modifiers ...RequestModifier) http.RoundTripper {
-	return DefaultTransportWrapper(
-		&transport{
-			Modifiers: modifiers,
-			Base:      base,
-		})
+	return &transport{
+		Modifiers: modifiers,
+		Base:      base,
+	}
}
// transport is an http.RoundTripper that makes HTTP requests after

View file

@ -28,7 +28,7 @@ import (
"github.com/docker/distribution/manifest/schema2" "github.com/docker/distribution/manifest/schema2"
"github.com/docker/distribution/reference" "github.com/docker/distribution/reference"
"github.com/docker/distribution/registry/api/errcode" "github.com/docker/distribution/registry/api/errcode"
v2 "github.com/docker/distribution/registry/api/v2" "github.com/docker/distribution/registry/api/v2"
storagedriver "github.com/docker/distribution/registry/storage/driver" storagedriver "github.com/docker/distribution/registry/storage/driver"
"github.com/docker/distribution/registry/storage/driver/factory" "github.com/docker/distribution/registry/storage/driver/factory"
_ "github.com/docker/distribution/registry/storage/driver/testdriver" _ "github.com/docker/distribution/registry/storage/driver/testdriver"
@ -65,7 +65,7 @@ func TestCheckAPI(t *testing.T) {
checkResponse(t, "issuing api base check", resp, http.StatusOK) checkResponse(t, "issuing api base check", resp, http.StatusOK)
checkHeaders(t, resp, http.Header{ checkHeaders(t, resp, http.Header{
"Content-Type": []string{"application/json"}, "Content-Type": []string{"application/json; charset=utf-8"},
"Content-Length": []string{"2"}, "Content-Length": []string{"2"},
}) })
@ -259,7 +259,7 @@ func TestURLPrefix(t *testing.T) {
checkResponse(t, "issuing api base check", resp, http.StatusOK) checkResponse(t, "issuing api base check", resp, http.StatusOK)
checkHeaders(t, resp, http.Header{ checkHeaders(t, resp, http.Header{
"Content-Type": []string{"application/json"}, "Content-Type": []string{"application/json; charset=utf-8"},
"Content-Length": []string{"2"}, "Content-Length": []string{"2"},
}) })
} }
@ -959,6 +959,7 @@ func testManifestWithStorageError(t *testing.T, env *testEnv, imageName referenc
defer resp.Body.Close() defer resp.Body.Close()
checkResponse(t, "getting non-existent manifest", resp, expectedStatusCode) checkResponse(t, "getting non-existent manifest", resp, expectedStatusCode)
checkBodyHasErrorCodes(t, "getting non-existent manifest", resp, expectedErrorCode) checkBodyHasErrorCodes(t, "getting non-existent manifest", resp, expectedErrorCode)
return
} }
func testManifestAPISchema1(t *testing.T, env *testEnv, imageName reference.Named) manifestArgs { func testManifestAPISchema1(t *testing.T, env *testEnv, imageName reference.Named) manifestArgs {
@ -1065,11 +1066,12 @@ func testManifestAPISchema1(t *testing.T, env *testEnv, imageName reference.Name
expectedLayers := make(map[digest.Digest]io.ReadSeeker) expectedLayers := make(map[digest.Digest]io.ReadSeeker)
for i := range unsignedManifest.FSLayers { for i := range unsignedManifest.FSLayers {
rs, dgst, err := testutil.CreateRandomTarFile() rs, dgstStr, err := testutil.CreateRandomTarFile()
if err != nil { if err != nil {
t.Fatalf("error creating random layer %d: %v", i, err) t.Fatalf("error creating random layer %d: %v", i, err)
} }
dgst := digest.Digest(dgstStr)
expectedLayers[dgst] = rs expectedLayers[dgst] = rs
unsignedManifest.FSLayers[i].BlobSum = dgst unsignedManifest.FSLayers[i].BlobSum = dgst
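The test refactor above drops the `dgst := digest.Digest(dgstStr)` conversion because `testutil.CreateRandomTarFile` now returns a typed `digest.Digest` directly. As a minimal sketch of what such a typed digest amounts to, here is the `"<algorithm>:<hex>"` shape built with only the standard library (the real code uses `github.com/opencontainers/go-digest`; the `Digest`/`fromBytes` names here are illustrative stand-ins):

```go
package main

import (
	"crypto/sha256"
	"fmt"
)

// Digest mirrors the shape of an OCI digest string: "<algorithm>:<hex>".
type Digest string

// fromBytes computes a sha256 digest in the same textual format go-digest
// produces for digest.FromBytes.
func fromBytes(p []byte) Digest {
	return Digest(fmt.Sprintf("sha256:%x", sha256.Sum256(p)))
}

func main() {
	fmt.Println(fromBytes([]byte("hello")))
	// → sha256:2cf24dba5fb0a30e26e83b2ac5b9e29e1b161e5c1fa7425e73043362938b9824
}
```

Making the digest a named string type (rather than a bare `string`) is what lets the new signature skip the explicit conversion at every call site.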
@ -1178,7 +1180,7 @@ func testManifestAPISchema1(t *testing.T, env *testEnv, imageName reference.Name
// charset. // charset.
resp = putManifest(t, "re-putting signed manifest", manifestDigestURL, schema1.MediaTypeSignedManifest, sm2) resp = putManifest(t, "re-putting signed manifest", manifestDigestURL, schema1.MediaTypeSignedManifest, sm2)
checkResponse(t, "re-putting signed manifest", resp, http.StatusCreated) checkResponse(t, "re-putting signed manifest", resp, http.StatusCreated)
resp = putManifest(t, "re-putting signed manifest", manifestDigestURL, "application/json", sm2) resp = putManifest(t, "re-putting signed manifest", manifestDigestURL, "application/json; charset=utf-8", sm2)
checkResponse(t, "re-putting signed manifest", resp, http.StatusCreated) checkResponse(t, "re-putting signed manifest", resp, http.StatusCreated)
resp = putManifest(t, "re-putting signed manifest", manifestDigestURL, "application/json", sm2) resp = putManifest(t, "re-putting signed manifest", manifestDigestURL, "application/json", sm2)
checkResponse(t, "re-putting signed manifest", resp, http.StatusCreated) checkResponse(t, "re-putting signed manifest", resp, http.StatusCreated)
@ -1403,11 +1405,12 @@ func testManifestAPISchema2(t *testing.T, env *testEnv, imageName reference.Name
expectedLayers := make(map[digest.Digest]io.ReadSeeker) expectedLayers := make(map[digest.Digest]io.ReadSeeker)
for i := range manifest.Layers { for i := range manifest.Layers {
rs, dgst, err := testutil.CreateRandomTarFile() rs, dgstStr, err := testutil.CreateRandomTarFile()
if err != nil { if err != nil {
t.Fatalf("error creating random layer %d: %v", i, err) t.Fatalf("error creating random layer %d: %v", i, err)
} }
dgst := digest.Digest(dgstStr)
expectedLayers[dgst] = rs expectedLayers[dgst] = rs
manifest.Layers[i].Digest = dgst manifest.Layers[i].Digest = dgst
@ -2012,7 +2015,6 @@ type testEnv struct {
} }
func newTestEnvMirror(t *testing.T, deleteEnabled bool) *testEnv { func newTestEnvMirror(t *testing.T, deleteEnabled bool) *testEnv {
upstreamEnv := newTestEnv(t, deleteEnabled)
config := configuration.Configuration{ config := configuration.Configuration{
Storage: configuration.Storage{ Storage: configuration.Storage{
"testdriver": configuration.Parameters{}, "testdriver": configuration.Parameters{},
@ -2022,7 +2024,7 @@ func newTestEnvMirror(t *testing.T, deleteEnabled bool) *testEnv {
}}, }},
}, },
Proxy: configuration.Proxy{ Proxy: configuration.Proxy{
RemoteURL: upstreamEnv.server.URL, RemoteURL: "http://example.com",
}, },
} }
config.Compatibility.Schema1.Enabled = true config.Compatibility.Schema1.Enabled = true
@ -2327,7 +2329,7 @@ func checkBodyHasErrorCodes(t *testing.T, msg string, resp *http.Response, error
// TODO(stevvooe): Shoot. The error setup is not working out. The content- // TODO(stevvooe): Shoot. The error setup is not working out. The content-
// type headers are being set after writing the status code. // type headers are being set after writing the status code.
// if resp.Header.Get("Content-Type") != "application/json" { // if resp.Header.Get("Content-Type") != "application/json; charset=utf-8" {
// t.Fatalf("unexpected content type: %v != 'application/json'", // t.Fatalf("unexpected content type: %v != 'application/json'",
// resp.Header.Get("Content-Type")) // resp.Header.Get("Content-Type"))
// } // }
@ -2355,7 +2357,7 @@ func checkBodyHasErrorCodes(t *testing.T, msg string, resp *http.Response, error
// Ensure that counts of expected errors were all non-zero // Ensure that counts of expected errors were all non-zero
for code := range expected { for code := range expected {
if counts[code] == 0 { if counts[code] == 0 {
t.Fatalf("expected error code %v not encountered during %s: %s", code, msg, string(p)) t.Fatalf("expected error code %v not encounterd during %s: %s", code, msg, string(p))
} }
} }
@ -2430,10 +2432,11 @@ func createRepository(env *testEnv, t *testing.T, imageName string, tag string)
expectedLayers := make(map[digest.Digest]io.ReadSeeker) expectedLayers := make(map[digest.Digest]io.ReadSeeker)
for i := range unsignedManifest.FSLayers { for i := range unsignedManifest.FSLayers {
rs, dgst, err := testutil.CreateRandomTarFile() rs, dgstStr, err := testutil.CreateRandomTarFile()
if err != nil { if err != nil {
t.Fatalf("error creating random layer %d: %v", i, err) t.Fatalf("error creating random layer %d: %v", i, err)
} }
dgst := digest.Digest(dgstStr)
expectedLayers[dgst] = rs expectedLayers[dgst] = rs
unsignedManifest.FSLayers[i].BlobSum = dgst unsignedManifest.FSLayers[i].BlobSum = dgst

View file

@ -24,7 +24,7 @@ import (
"github.com/docker/distribution/notifications" "github.com/docker/distribution/notifications"
"github.com/docker/distribution/reference" "github.com/docker/distribution/reference"
"github.com/docker/distribution/registry/api/errcode" "github.com/docker/distribution/registry/api/errcode"
v2 "github.com/docker/distribution/registry/api/v2" "github.com/docker/distribution/registry/api/v2"
"github.com/docker/distribution/registry/auth" "github.com/docker/distribution/registry/auth"
registrymiddleware "github.com/docker/distribution/registry/middleware/registry" registrymiddleware "github.com/docker/distribution/registry/middleware/registry"
repositorymiddleware "github.com/docker/distribution/registry/middleware/repository" repositorymiddleware "github.com/docker/distribution/registry/middleware/repository"
@ -36,7 +36,6 @@ import (
"github.com/docker/distribution/registry/storage/driver/factory" "github.com/docker/distribution/registry/storage/driver/factory"
storagemiddleware "github.com/docker/distribution/registry/storage/driver/middleware" storagemiddleware "github.com/docker/distribution/registry/storage/driver/middleware"
"github.com/docker/distribution/version" "github.com/docker/distribution/version"
events "github.com/docker/go-events"
"github.com/docker/go-metrics" "github.com/docker/go-metrics"
"github.com/docker/libtrust" "github.com/docker/libtrust"
"github.com/garyburd/redigo/redis" "github.com/garyburd/redigo/redis"
@ -71,7 +70,7 @@ type App struct {
// events contains notification related configuration. // events contains notification related configuration.
events struct { events struct {
sink events.Sink sink notifications.Sink
source notifications.SourceRecord source notifications.SourceRecord
} }
@ -329,7 +328,7 @@ func NewApp(ctx context.Context, config *configuration.Configuration) *App {
var ok bool var ok bool
app.repoRemover, ok = app.registry.(distribution.RepositoryRemover) app.repoRemover, ok = app.registry.(distribution.RepositoryRemover)
if !ok { if !ok {
dcontext.GetLogger(app).Warnf("Registry does not implement RepositoryRemover. Will not be able to delete repos and tags") dcontext.GetLogger(app).Warnf("Registry does not implement RempositoryRemover. Will not be able to delete repos and tags")
} }
return app return app
@ -447,7 +446,7 @@ func (app *App) register(routeName string, dispatch dispatchFunc) {
// configureEvents prepares the event sink for action. // configureEvents prepares the event sink for action.
func (app *App) configureEvents(configuration *configuration.Configuration) { func (app *App) configureEvents(configuration *configuration.Configuration) {
// Configure all of the endpoint sinks. // Configure all of the endpoint sinks.
var sinks []events.Sink var sinks []notifications.Sink
for _, endpoint := range configuration.Notifications.Endpoints { for _, endpoint := range configuration.Notifications.Endpoints {
if endpoint.Disabled { if endpoint.Disabled {
dcontext.GetLogger(app).Infof("endpoint %s disabled, skipping", endpoint.Name) dcontext.GetLogger(app).Infof("endpoint %s disabled, skipping", endpoint.Name)
@ -471,7 +470,7 @@ func (app *App) configureEvents(configuration *configuration.Configuration) {
// replacing broadcaster with a rabbitmq implementation. It's recommended // replacing broadcaster with a rabbitmq implementation. It's recommended
// that the registry instances also act as the workers to keep deployment // that the registry instances also act as the workers to keep deployment
// simple. // simple.
app.events.sink = events.NewBroadcaster(sinks...) app.events.sink = notifications.NewBroadcaster(sinks...)
// Populate registry event source // Populate registry event source
hostname, err := os.Hostname() hostname, err := os.Hostname()
@ -754,18 +753,20 @@ func (app *App) logError(ctx context.Context, errors errcode.Errors) {
for _, e1 := range errors { for _, e1 := range errors {
var c context.Context var c context.Context
switch e := e1.(type) { switch e1.(type) {
case errcode.Error: case errcode.Error:
e, _ := e1.(errcode.Error)
c = context.WithValue(ctx, errCodeKey{}, e.Code) c = context.WithValue(ctx, errCodeKey{}, e.Code)
c = context.WithValue(c, errMessageKey{}, e.Message) c = context.WithValue(c, errMessageKey{}, e.Message)
c = context.WithValue(c, errDetailKey{}, e.Detail) c = context.WithValue(c, errDetailKey{}, e.Detail)
case errcode.ErrorCode: case errcode.ErrorCode:
e, _ := e1.(errcode.ErrorCode)
c = context.WithValue(ctx, errCodeKey{}, e) c = context.WithValue(ctx, errCodeKey{}, e)
c = context.WithValue(c, errMessageKey{}, e.Message()) c = context.WithValue(c, errMessageKey{}, e.Message())
default: default:
// just normal go 'error' // just normal go 'error'
c = context.WithValue(ctx, errCodeKey{}, errcode.ErrorCodeUnknown) c = context.WithValue(ctx, errCodeKey{}, errcode.ErrorCodeUnknown)
c = context.WithValue(c, errMessageKey{}, e.Error()) c = context.WithValue(c, errMessageKey{}, e1.Error())
} }
c = dcontext.WithLogger(c, dcontext.GetLogger(c, c = dcontext.WithLogger(c, dcontext.GetLogger(c,
@ -846,7 +847,7 @@ func (app *App) authorized(w http.ResponseWriter, r *http.Request, context *Cont
switch err := err.(type) { switch err := err.(type) {
case auth.Challenge: case auth.Challenge:
// Add the appropriate WWW-Auth header // Add the appropriate WWW-Auth header
err.SetHeaders(r, w) err.SetHeaders(w)
if err := errcode.ServeJSON(w, errcode.ErrorCodeUnauthorized.WithDetail(accessRecords)); err != nil { if err := errcode.ServeJSON(w, errcode.ErrorCodeUnauthorized.WithDetail(accessRecords)); err != nil {
dcontext.GetLogger(context).Errorf("error serving error json: %v (from %v)", err, context.Errors) dcontext.GetLogger(context).Errorf("error serving error json: %v (from %v)", err, context.Errors)
@ -863,7 +864,7 @@ func (app *App) authorized(w http.ResponseWriter, r *http.Request, context *Cont
return err return err
} }
dcontext.GetLogger(ctx, auth.UserNameKey).Info("authorized request") dcontext.GetLogger(ctx).Info("authorized request")
// TODO(stevvooe): This pattern needs to be cleaned up a bit. One context // TODO(stevvooe): This pattern needs to be cleaned up a bit. One context
// should be replaced by another, rather than replacing the context on a // should be replaced by another, rather than replacing the context on a
// mutable object. // mutable object.
@ -897,7 +898,7 @@ func (app *App) nameRequired(r *http.Request) bool {
func apiBase(w http.ResponseWriter, r *http.Request) { func apiBase(w http.ResponseWriter, r *http.Request) {
const emptyJSON = "{}" const emptyJSON = "{}"
// Provide a simple /v2/ 200 OK response with empty json response. // Provide a simple /v2/ 200 OK response with empty json response.
w.Header().Set("Content-Type", "application/json") w.Header().Set("Content-Type", "application/json; charset=utf-8")
w.Header().Set("Content-Length", fmt.Sprint(len(emptyJSON))) w.Header().Set("Content-Length", fmt.Sprint(len(emptyJSON)))
fmt.Fprint(w, emptyJSON) fmt.Fprint(w, emptyJSON)

View file

@ -11,7 +11,7 @@ import (
"github.com/docker/distribution/configuration" "github.com/docker/distribution/configuration"
"github.com/docker/distribution/context" "github.com/docker/distribution/context"
"github.com/docker/distribution/registry/api/errcode" "github.com/docker/distribution/registry/api/errcode"
v2 "github.com/docker/distribution/registry/api/v2" "github.com/docker/distribution/registry/api/v2"
"github.com/docker/distribution/registry/auth" "github.com/docker/distribution/registry/auth"
_ "github.com/docker/distribution/registry/auth/silly" _ "github.com/docker/distribution/registry/auth/silly"
"github.com/docker/distribution/registry/storage" "github.com/docker/distribution/registry/storage"
@ -188,8 +188,8 @@ func TestNewApp(t *testing.T) {
t.Fatalf("unexpected status code during request: %v", err) t.Fatalf("unexpected status code during request: %v", err)
} }
if req.Header.Get("Content-Type") != "application/json" { if req.Header.Get("Content-Type") != "application/json; charset=utf-8" {
t.Fatalf("unexpected content-type: %v != %v", req.Header.Get("Content-Type"), "application/json") t.Fatalf("unexpected content-type: %v != %v", req.Header.Get("Content-Type"), "application/json; charset=utf-8")
} }
expectedAuthHeader := "Bearer realm=\"realm-test\",service=\"service-test\"" expectedAuthHeader := "Bearer realm=\"realm-test\",service=\"service-test\""

View file

@ -6,7 +6,7 @@ import (
"github.com/docker/distribution" "github.com/docker/distribution"
"github.com/docker/distribution/context" "github.com/docker/distribution/context"
"github.com/docker/distribution/registry/api/errcode" "github.com/docker/distribution/registry/api/errcode"
v2 "github.com/docker/distribution/registry/api/v2" "github.com/docker/distribution/registry/api/v2"
"github.com/gorilla/handlers" "github.com/gorilla/handlers"
"github.com/opencontainers/go-digest" "github.com/opencontainers/go-digest"
) )

View file

@ -9,7 +9,7 @@ import (
dcontext "github.com/docker/distribution/context" dcontext "github.com/docker/distribution/context"
"github.com/docker/distribution/reference" "github.com/docker/distribution/reference"
"github.com/docker/distribution/registry/api/errcode" "github.com/docker/distribution/registry/api/errcode"
v2 "github.com/docker/distribution/registry/api/v2" "github.com/docker/distribution/registry/api/v2"
"github.com/docker/distribution/registry/storage" "github.com/docker/distribution/registry/storage"
"github.com/gorilla/handlers" "github.com/gorilla/handlers"
"github.com/opencontainers/go-digest" "github.com/opencontainers/go-digest"
@ -36,8 +36,52 @@ func blobUploadDispatcher(ctx *Context, r *http.Request) http.Handler {
} }
if buh.UUID != "" { if buh.UUID != "" {
if h := buh.ResumeBlobUpload(ctx, r); h != nil { state, err := hmacKey(ctx.Config.HTTP.Secret).unpackUploadState(r.FormValue("_state"))
return h if err != nil {
return http.HandlerFunc(func(w http.ResponseWriter, r *http.Request) {
dcontext.GetLogger(ctx).Infof("error resolving upload: %v", err)
buh.Errors = append(buh.Errors, v2.ErrorCodeBlobUploadInvalid.WithDetail(err))
})
}
buh.State = state
if state.Name != ctx.Repository.Named().Name() {
return http.HandlerFunc(func(w http.ResponseWriter, r *http.Request) {
dcontext.GetLogger(ctx).Infof("mismatched repository name in upload state: %q != %q", state.Name, buh.Repository.Named().Name())
buh.Errors = append(buh.Errors, v2.ErrorCodeBlobUploadInvalid.WithDetail(err))
})
}
if state.UUID != buh.UUID {
return http.HandlerFunc(func(w http.ResponseWriter, r *http.Request) {
dcontext.GetLogger(ctx).Infof("mismatched uuid in upload state: %q != %q", state.UUID, buh.UUID)
buh.Errors = append(buh.Errors, v2.ErrorCodeBlobUploadInvalid.WithDetail(err))
})
}
blobs := ctx.Repository.Blobs(buh)
upload, err := blobs.Resume(buh, buh.UUID)
if err != nil {
dcontext.GetLogger(ctx).Errorf("error resolving upload: %v", err)
if err == distribution.ErrBlobUploadUnknown {
return http.HandlerFunc(func(w http.ResponseWriter, r *http.Request) {
buh.Errors = append(buh.Errors, v2.ErrorCodeBlobUploadUnknown.WithDetail(err))
})
}
return http.HandlerFunc(func(w http.ResponseWriter, r *http.Request) {
buh.Errors = append(buh.Errors, errcode.ErrorCodeUnknown.WithDetail(err))
})
}
buh.Upload = upload
if size := upload.Size(); size != buh.State.Offset {
defer upload.Close()
dcontext.GetLogger(ctx).Errorf("upload resumed at wrong offest: %d != %d", size, buh.State.Offset)
return http.HandlerFunc(func(w http.ResponseWriter, r *http.Request) {
buh.Errors = append(buh.Errors, v2.ErrorCodeBlobUploadInvalid.WithDetail(err))
upload.Cancel(buh)
})
} }
return closeResources(handler, buh.Upload) return closeResources(handler, buh.Upload)
} }
@ -128,7 +172,7 @@ func (buh *blobUploadHandler) PatchBlobData(w http.ResponseWriter, r *http.Reque
ct := r.Header.Get("Content-Type") ct := r.Header.Get("Content-Type")
if ct != "" && ct != "application/octet-stream" { if ct != "" && ct != "application/octet-stream" {
buh.Errors = append(buh.Errors, errcode.ErrorCodeUnknown.WithDetail(fmt.Errorf("bad Content-Type"))) buh.Errors = append(buh.Errors, errcode.ErrorCodeUnknown.WithDetail(fmt.Errorf("Bad Content-Type")))
// TODO(dmcgowan): encode error // TODO(dmcgowan): encode error
return return
} }
@ -238,57 +282,6 @@ func (buh *blobUploadHandler) CancelBlobUpload(w http.ResponseWriter, r *http.Re
w.WriteHeader(http.StatusNoContent) w.WriteHeader(http.StatusNoContent)
} }
func (buh *blobUploadHandler) ResumeBlobUpload(ctx *Context, r *http.Request) http.Handler {
state, err := hmacKey(ctx.Config.HTTP.Secret).unpackUploadState(r.FormValue("_state"))
if err != nil {
return http.HandlerFunc(func(w http.ResponseWriter, r *http.Request) {
dcontext.GetLogger(ctx).Infof("error resolving upload: %v", err)
buh.Errors = append(buh.Errors, v2.ErrorCodeBlobUploadInvalid.WithDetail(err))
})
}
buh.State = state
if state.Name != ctx.Repository.Named().Name() {
return http.HandlerFunc(func(w http.ResponseWriter, r *http.Request) {
dcontext.GetLogger(ctx).Infof("mismatched repository name in upload state: %q != %q", state.Name, buh.Repository.Named().Name())
buh.Errors = append(buh.Errors, v2.ErrorCodeBlobUploadInvalid.WithDetail(err))
})
}
if state.UUID != buh.UUID {
return http.HandlerFunc(func(w http.ResponseWriter, r *http.Request) {
dcontext.GetLogger(ctx).Infof("mismatched uuid in upload state: %q != %q", state.UUID, buh.UUID)
buh.Errors = append(buh.Errors, v2.ErrorCodeBlobUploadInvalid.WithDetail(err))
})
}
blobs := ctx.Repository.Blobs(buh)
upload, err := blobs.Resume(buh, buh.UUID)
if err != nil {
dcontext.GetLogger(ctx).Errorf("error resolving upload: %v", err)
if err == distribution.ErrBlobUploadUnknown {
return http.HandlerFunc(func(w http.ResponseWriter, r *http.Request) {
buh.Errors = append(buh.Errors, v2.ErrorCodeBlobUploadUnknown.WithDetail(err))
})
}
return http.HandlerFunc(func(w http.ResponseWriter, r *http.Request) {
buh.Errors = append(buh.Errors, errcode.ErrorCodeUnknown.WithDetail(err))
})
}
buh.Upload = upload
if size := upload.Size(); size != buh.State.Offset {
defer upload.Close()
dcontext.GetLogger(ctx).Errorf("upload resumed at wrong offset: %d != %d", size, buh.State.Offset)
return http.HandlerFunc(func(w http.ResponseWriter, r *http.Request) {
buh.Errors = append(buh.Errors, v2.ErrorCodeBlobUploadInvalid.WithDetail(err))
upload.Cancel(buh)
})
}
return nil
}
// blobUploadResponse provides a standard request for uploading blobs and // blobUploadResponse provides a standard request for uploading blobs and
// chunk responses. This sets the correct headers but the response status is // chunk responses. This sets the correct headers but the response status is
// left to the caller. The fresh argument is used to ensure that new blob // left to the caller. The fresh argument is used to ensure that new blob
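The refactor above moves the `_state` handling into a dedicated `ResumeBlobUpload` method; in both versions the resumable-upload state travels to the client as an HMAC-signed token in the `_state` query parameter and is verified by `hmacKey.unpackUploadState` before being trusted. A standard-library sketch of that sign-then-verify round trip, assuming a JSON payload (the field names on `uploadState` here are illustrative, not the registry's exact serialization):

```go
package main

import (
	"crypto/hmac"
	"crypto/sha256"
	"encoding/base64"
	"encoding/json"
	"errors"
	"fmt"
)

// uploadState is a hypothetical stand-in for the handler's upload state.
type uploadState struct {
	Name   string `json:"name"`
	UUID   string `json:"uuid"`
	Offset int64  `json:"offset"`
}

type hmacKey string

// packUploadState serializes and signs the state so the client can carry it
// opaquely without being able to tamper with it.
func (k hmacKey) packUploadState(s uploadState) (string, error) {
	p, err := json.Marshal(s)
	if err != nil {
		return "", err
	}
	mac := hmac.New(sha256.New, []byte(k))
	mac.Write(p)
	return base64.URLEncoding.EncodeToString(append(mac.Sum(nil), p...)), nil
}

// unpackUploadState rejects the token unless the signature verifies.
func (k hmacKey) unpackUploadState(token string) (uploadState, error) {
	var s uploadState
	raw, err := base64.URLEncoding.DecodeString(token)
	if err != nil {
		return s, err
	}
	if len(raw) < sha256.Size {
		return s, errors.New("state token too short")
	}
	mac := hmac.New(sha256.New, []byte(k))
	mac.Write(raw[sha256.Size:])
	if !hmac.Equal(mac.Sum(nil), raw[:sha256.Size]) {
		return s, errors.New("invalid state token signature")
	}
	err = json.Unmarshal(raw[sha256.Size:], &s)
	return s, err
}

func main() {
	k := hmacKey("secret")
	tok, _ := k.packUploadState(uploadState{Name: "library/hello", UUID: "u1", Offset: 512})
	s, err := k.unpackUploadState(tok)
	fmt.Println(s.Name, s.Offset, err)
}
```

This is why the dispatcher can compare `state.Name` and `state.UUID` against the request: a client cannot forge a state blob for another repository without the server's HTTP secret.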

View file

@ -55,7 +55,7 @@ func (ch *catalogHandler) GetCatalog(w http.ResponseWriter, r *http.Request) {
return return
} }
w.Header().Set("Content-Type", "application/json") w.Header().Set("Content-Type", "application/json; charset=utf-8")
// Add a link header if there are more entries to retrieve // Add a link header if there are more entries to retrieve
if moreEntries { if moreEntries {

View file

@ -8,7 +8,7 @@ import (
"github.com/docker/distribution" "github.com/docker/distribution"
dcontext "github.com/docker/distribution/context" dcontext "github.com/docker/distribution/context"
"github.com/docker/distribution/registry/api/errcode" "github.com/docker/distribution/registry/api/errcode"
v2 "github.com/docker/distribution/registry/api/v2" "github.com/docker/distribution/registry/api/v2"
"github.com/docker/distribution/registry/auth" "github.com/docker/distribution/registry/auth"
"github.com/opencontainers/go-digest" "github.com/opencontainers/go-digest"
) )

View file

@ -20,7 +20,7 @@ type logHook struct {
func (hook *logHook) Fire(entry *logrus.Entry) error { func (hook *logHook) Fire(entry *logrus.Entry) error {
addr := strings.Split(hook.Mail.Addr, ":") addr := strings.Split(hook.Mail.Addr, ":")
if len(addr) != 2 { if len(addr) != 2 {
return errors.New("invalid Mail Address") return errors.New("Invalid Mail Address")
} }
host := addr[0] host := addr[0]
subject := fmt.Sprintf("[%s] %s: %s", entry.Level, host, entry.Message) subject := fmt.Sprintf("[%s] %s: %s", entry.Level, host, entry.Message)
@ -37,7 +37,7 @@ func (hook *logHook) Fire(entry *logrus.Entry) error {
if err := t.Execute(b, entry); err != nil { if err := t.Execute(b, entry); err != nil {
return err return err
} }
body := b.String() body := fmt.Sprintf("%s", b)
return hook.Mail.sendMail(subject, body) return hook.Mail.sendMail(subject, body)
} }

View file

@ -17,7 +17,7 @@ type mailer struct {
func (mail *mailer) sendMail(subject, message string) error { func (mail *mailer) sendMail(subject, message string) error {
addr := strings.Split(mail.Addr, ":") addr := strings.Split(mail.Addr, ":")
if len(addr) != 2 { if len(addr) != 2 {
return errors.New("invalid Mail Address") return errors.New("Invalid Mail Address")
} }
host := addr[0] host := addr[0]
msg := []byte("To:" + strings.Join(mail.To, ";") + msg := []byte("To:" + strings.Join(mail.To, ";") +

View file

@ -3,7 +3,6 @@ package handlers
import ( import (
"bytes" "bytes"
"fmt" "fmt"
"mime"
"net/http" "net/http"
"strings" "strings"
@ -15,11 +14,11 @@ import (
"github.com/docker/distribution/manifest/schema2" "github.com/docker/distribution/manifest/schema2"
"github.com/docker/distribution/reference" "github.com/docker/distribution/reference"
"github.com/docker/distribution/registry/api/errcode" "github.com/docker/distribution/registry/api/errcode"
v2 "github.com/docker/distribution/registry/api/v2" "github.com/docker/distribution/registry/api/v2"
"github.com/docker/distribution/registry/auth" "github.com/docker/distribution/registry/auth"
"github.com/gorilla/handlers" "github.com/gorilla/handlers"
"github.com/opencontainers/go-digest" "github.com/opencontainers/go-digest"
v1 "github.com/opencontainers/image-spec/specs-go/v1" "github.com/opencontainers/image-spec/specs-go/v1"
) )
// These constants determine which architecture and OS to choose from a // These constants determine which architecture and OS to choose from a
@ -98,10 +97,14 @@ func (imh *manifestHandler) GetManifest(w http.ResponseWriter, r *http.Request)
// we need to split each header value on "," to get the full list of "Accept" values (per RFC 2616) // we need to split each header value on "," to get the full list of "Accept" values (per RFC 2616)
// https://www.w3.org/Protocols/rfc2616/rfc2616-sec14.html#sec14.1 // https://www.w3.org/Protocols/rfc2616/rfc2616-sec14.html#sec14.1
for _, mediaType := range strings.Split(acceptHeader, ",") { for _, mediaType := range strings.Split(acceptHeader, ",") {
if mediaType, _, err = mime.ParseMediaType(mediaType); err != nil { // remove "; q=..." if present
continue if i := strings.Index(mediaType, ";"); i >= 0 {
mediaType = mediaType[:i]
} }
// it's common (but not required) for Accept values to be space separated ("a/b, c/d, e/f")
mediaType = strings.TrimSpace(mediaType)
if mediaType == schema2.MediaTypeManifest { if mediaType == schema2.MediaTypeManifest {
supports[manifestSchema2] = true supports[manifestSchema2] = true
} }

View file

@ -6,7 +6,7 @@ import (
"github.com/docker/distribution" "github.com/docker/distribution"
"github.com/docker/distribution/registry/api/errcode" "github.com/docker/distribution/registry/api/errcode"
v2 "github.com/docker/distribution/registry/api/v2" "github.com/docker/distribution/registry/api/v2"
"github.com/gorilla/handlers" "github.com/gorilla/handlers"
) )
@ -49,7 +49,7 @@ func (th *tagsHandler) GetTags(w http.ResponseWriter, r *http.Request) {
return return
} }
w.Header().Set("Content-Type", "application/json") w.Header().Set("Content-Type", "application/json; charset=utf-8")
enc := json.NewEncoder(w) enc := json.NewEncoder(w)
if err := enc.Encode(tagsAPIResponse{ if err := enc.Encode(tagsAPIResponse{

View file

@ -6,6 +6,7 @@ import (
"net/http" "net/http"
"strconv" "strconv"
"sync" "sync"
"time"
"github.com/docker/distribution" "github.com/docker/distribution"
dcontext "github.com/docker/distribution/context" dcontext "github.com/docker/distribution/context"
@ -14,6 +15,9 @@ import (
"github.com/opencontainers/go-digest" "github.com/opencontainers/go-digest"
) )
// todo(richardscothern): from cache control header or config file
const blobTTL = 24 * 7 * time.Hour
type proxyBlobStore struct { type proxyBlobStore struct {
localStore distribution.BlobStore localStore distribution.BlobStore
remoteStore distribution.BlobService remoteStore distribution.BlobService
@ -72,8 +76,13 @@ func (pbs *proxyBlobStore) serveLocal(ctx context.Context, w http.ResponseWriter
return false, nil return false, nil
} }
proxyMetrics.BlobPush(uint64(localDesc.Size)) if err == nil {
return true, pbs.localStore.ServeBlob(ctx, w, r, dgst) proxyMetrics.BlobPush(uint64(localDesc.Size))
return true, pbs.localStore.ServeBlob(ctx, w, r, dgst)
}
return false, nil
} }
func (pbs *proxyBlobStore) storeLocal(ctx context.Context, dgst digest.Digest) error { func (pbs *proxyBlobStore) storeLocal(ctx context.Context, dgst digest.Digest) error {

View file

@ -193,7 +193,7 @@ func makeTestEnv(t *testing.T, name string) *testEnv {
} }
func makeBlob(size int) []byte { func makeBlob(size int) []byte {
blob := make([]byte, size) blob := make([]byte, size, size)
for i := 0; i < size; i++ { for i := 0; i < size; i++ {
blob[i] = byte('A' + rand.Int()%48) blob[i] = byte('A' + rand.Int()%48)
} }
@ -204,6 +204,16 @@ func init() {
rand.Seed(42) rand.Seed(42)
} }
func perm(m []distribution.Descriptor) []distribution.Descriptor {
for i := 0; i < len(m); i++ {
j := rand.Intn(i + 1)
tmp := m[i]
m[i] = m[j]
m[j] = tmp
}
return m
}
func populate(t *testing.T, te *testEnv, blobCount, size, numUnique int) { func populate(t *testing.T, te *testEnv, blobCount, size, numUnique int) {
var inRemote []distribution.Descriptor var inRemote []distribution.Descriptor

View file

@ -165,10 +165,11 @@ func populateRepo(ctx context.Context, t *testing.T, repository distribution.Rep
t.Fatalf("unexpected error creating test upload: %v", err) t.Fatalf("unexpected error creating test upload: %v", err)
} }
rs, dgst, err := testutil.CreateRandomTarFile() rs, ts, err := testutil.CreateRandomTarFile()
if err != nil { if err != nil {
t.Fatalf("unexpected error generating test layer file") t.Fatalf("unexpected error generating test layer file")
} }
dgst := digest.Digest(ts)
if _, err := io.Copy(wr, rs); err != nil { if _, err := io.Copy(wr, rs); err != nil {
t.Fatalf("unexpected error copying to upload: %v", err) t.Fatalf("unexpected error copying to upload: %v", err)
} }

View file

@ -118,7 +118,7 @@ func (ttles *TTLExpirationScheduler) Start() error {
} }
if !ttles.stopped { if !ttles.stopped {
return fmt.Errorf("scheduler already started") return fmt.Errorf("Scheduler already started")
} }
dcontext.GetLogger(ttles.ctx).Infof("Starting cached object TTL expiration scheduler...") dcontext.GetLogger(ttles.ctx).Infof("Starting cached object TTL expiration scheduler...")
@ -126,7 +126,7 @@ func (ttles *TTLExpirationScheduler) Start() error {
// Start timer for each deserialized entry // Start timer for each deserialized entry
for _, entry := range ttles.entries { for _, entry := range ttles.entries {
entry.timer = ttles.startTimer(entry, time.Until(entry.Expiry)) entry.timer = ttles.startTimer(entry, entry.Expiry.Sub(time.Now()))
} }
// Start a ticker to periodically save the entries index // Start a ticker to periodically save the entries index
@ -164,7 +164,7 @@ func (ttles *TTLExpirationScheduler) add(r reference.Reference, ttl time.Duratio
Expiry: time.Now().Add(ttl), Expiry: time.Now().Add(ttl),
EntryType: eType, EntryType: eType,
} }
dcontext.GetLogger(ttles.ctx).Infof("Adding new scheduler entry for %s with ttl=%s", entry.Key, time.Until(entry.Expiry)) dcontext.GetLogger(ttles.ctx).Infof("Adding new scheduler entry for %s with ttl=%s", entry.Key, entry.Expiry.Sub(time.Now()))
if oldEntry, present := ttles.entries[entry.Key]; present && oldEntry.timer != nil { if oldEntry, present := ttles.entries[entry.Key]; present && oldEntry.timer != nil {
oldEntry.timer.Stop() oldEntry.timer.Stop()
} }

View file

@ -12,18 +12,10 @@ import (
"syscall" "syscall"
"time" "time"
logrus_bugsnag "github.com/Shopify/logrus-bugsnag" "rsc.io/letsencrypt"
logstash "github.com/bshuster-repo/logrus-logstash-hook" logstash "github.com/bshuster-repo/logrus-logstash-hook"
"github.com/bugsnag/bugsnag-go" "github.com/bugsnag/bugsnag-go"
"github.com/docker/go-metrics"
gorhandlers "github.com/gorilla/handlers"
log "github.com/sirupsen/logrus"
"github.com/spf13/cobra"
"github.com/yvasiyarov/gorelic"
"golang.org/x/crypto/acme"
"golang.org/x/crypto/acme/autocert"
"github.com/docker/distribution/configuration" "github.com/docker/distribution/configuration"
dcontext "github.com/docker/distribution/context" dcontext "github.com/docker/distribution/context"
"github.com/docker/distribution/health" "github.com/docker/distribution/health"
@ -31,6 +23,11 @@ import (
"github.com/docker/distribution/registry/listener" "github.com/docker/distribution/registry/listener"
"github.com/docker/distribution/uuid" "github.com/docker/distribution/uuid"
"github.com/docker/distribution/version" "github.com/docker/distribution/version"
"github.com/docker/go-metrics"
gorhandlers "github.com/gorilla/handlers"
log "github.com/sirupsen/logrus"
"github.com/spf13/cobra"
"github.com/yvasiyarov/gorelic"
) )
// this channel gets notified when process receives signal. It is global to ease unit testing // this channel gets notified when process receives signal. It is global to ease unit testing
@ -98,8 +95,6 @@ func NewRegistry(ctx context.Context, config *configuration.Configuration) (*Reg
return nil, fmt.Errorf("error configuring logger: %v", err) return nil, fmt.Errorf("error configuring logger: %v", err)
} }
configureBugsnag(config)
// inject a logger into the uuid library. warns us if there is a problem // inject a logger into the uuid library. warns us if there is a problem
// with uuid generation under low entropy. // with uuid generation under low entropy.
uuid.Loggerf = dcontext.GetLogger(ctx).Warnf uuid.Loggerf = dcontext.GetLogger(ctx).Warnf
@@ -137,26 +132,10 @@ func (registry *Registry) ListenAndServe() error {
 	}
 
 	if config.HTTP.TLS.Certificate != "" || config.HTTP.TLS.LetsEncrypt.CacheFile != "" {
-		var tlsMinVersion uint16
-		if config.HTTP.TLS.MinimumTLS == "" {
-			tlsMinVersion = tls.VersionTLS10
-		} else {
-			switch config.HTTP.TLS.MinimumTLS {
-			case "tls1.0":
-				tlsMinVersion = tls.VersionTLS10
-			case "tls1.1":
-				tlsMinVersion = tls.VersionTLS11
-			case "tls1.2":
-				tlsMinVersion = tls.VersionTLS12
-			default:
-				return fmt.Errorf("unknown minimum TLS level '%s' specified for http.tls.minimumtls", config.HTTP.TLS.MinimumTLS)
-			}
-			dcontext.GetLogger(registry.app).Infof("restricting TLS to %s or higher", config.HTTP.TLS.MinimumTLS)
-		}
 		tlsConf := &tls.Config{
 			ClientAuth:               tls.NoClientCert,
 			NextProtos:               nextProtos(config),
-			MinVersion:               tlsMinVersion,
+			MinVersion:               tls.VersionTLS10,
 			PreferServerCipherSuites: true,
 			CipherSuites: []uint16{
 				tls.TLS_ECDHE_ECDSA_WITH_AES_128_GCM_SHA256,
@@ -172,14 +151,19 @@ func (registry *Registry) ListenAndServe() error {
 			if config.HTTP.TLS.Certificate != "" {
 				return fmt.Errorf("cannot specify both certificate and Let's Encrypt")
 			}
-			m := &autocert.Manager{
-				HostPolicy: autocert.HostWhitelist(config.HTTP.TLS.LetsEncrypt.Hosts...),
-				Cache:      autocert.DirCache(config.HTTP.TLS.LetsEncrypt.CacheFile),
-				Email:      config.HTTP.TLS.LetsEncrypt.Email,
-				Prompt:     autocert.AcceptTOS,
-			}
+			var m letsencrypt.Manager
+			if err := m.CacheFile(config.HTTP.TLS.LetsEncrypt.CacheFile); err != nil {
+				return err
+			}
+			if !m.Registered() {
+				if err := m.Register(config.HTTP.TLS.LetsEncrypt.Email, nil); err != nil {
+					return err
+				}
+			}
+			if len(config.HTTP.TLS.LetsEncrypt.Hosts) > 0 {
+				m.SetHosts(config.HTTP.TLS.LetsEncrypt.Hosts)
+			}
 			tlsConf.GetCertificate = m.GetCertificate
-			tlsConf.NextProtos = append(tlsConf.NextProtos, acme.ALPNProto)
 		} else {
 			tlsConf.Certificates = make([]tls.Certificate, 1)
 			tlsConf.Certificates[0], err = tls.LoadX509KeyPair(config.HTTP.TLS.Certificate, config.HTTP.TLS.Key)
@@ -198,7 +182,7 @@ func (registry *Registry) ListenAndServe() error {
 			}
 
 			if ok := pool.AppendCertsFromPEM(caPem); !ok {
-				return fmt.Errorf("could not add CA to pool")
+				return fmt.Errorf("Could not add CA to pool")
 			}
 		}
@@ -245,6 +229,19 @@ func configureReporting(app *handlers.App) http.Handler {
 	var handler http.Handler = app
 
 	if app.Config.Reporting.Bugsnag.APIKey != "" {
+		bugsnagConfig := bugsnag.Configuration{
+			APIKey: app.Config.Reporting.Bugsnag.APIKey,
+			// TODO(brianbland): provide the registry version here
+			// AppVersion: "2.0",
+		}
+		if app.Config.Reporting.Bugsnag.ReleaseStage != "" {
+			bugsnagConfig.ReleaseStage = app.Config.Reporting.Bugsnag.ReleaseStage
+		}
+		if app.Config.Reporting.Bugsnag.Endpoint != "" {
+			bugsnagConfig.Endpoint = app.Config.Reporting.Bugsnag.Endpoint
+		}
+		bugsnag.Configure(bugsnagConfig)
+
 		handler = bugsnag.Handler(handler)
 	}
@@ -322,32 +319,6 @@ func logLevel(level configuration.Loglevel) log.Level {
 	return l
 }
 
-// configureBugsnag configures bugsnag reporting, if enabled
-func configureBugsnag(config *configuration.Configuration) {
-	if config.Reporting.Bugsnag.APIKey == "" {
-		return
-	}
-
-	bugsnagConfig := bugsnag.Configuration{
-		APIKey: config.Reporting.Bugsnag.APIKey,
-	}
-	if config.Reporting.Bugsnag.ReleaseStage != "" {
-		bugsnagConfig.ReleaseStage = config.Reporting.Bugsnag.ReleaseStage
-	}
-	if config.Reporting.Bugsnag.Endpoint != "" {
-		bugsnagConfig.Endpoint = config.Reporting.Bugsnag.Endpoint
-	}
-	bugsnag.Configure(bugsnagConfig)
-
-	// configure logrus bugsnag hook
-	hook, err := logrus_bugsnag.NewBugsnagHook()
-	if err != nil {
-		log.Fatalln(err)
-	}
-	log.AddHook(hook)
-}
-
 // panicHandler add an HTTP handler to web app. The handler recover the happening
 // panic. logrus.Panic transmits panic message to pre-config log hooks, which is
 // defined in config.yml.

View file

@@ -418,7 +418,7 @@ func TestBlobMount(t *testing.T) {
 	bs := repository.Blobs(ctx)
 
 	// Test destination for existence.
-	_, err = bs.Stat(ctx, desc.Digest)
+	statDesc, err = bs.Stat(ctx, desc.Digest)
 	if err == nil {
 		t.Fatalf("unexpected non-error stating unmounted blob: %v", desc)
 	}
@@ -478,12 +478,12 @@ func TestBlobMount(t *testing.T) {
 		t.Fatalf("Unexpected error deleting blob")
 	}
 
-	_, err = bs.Stat(ctx, desc.Digest)
+	d, err := bs.Stat(ctx, desc.Digest)
 	if err != nil {
 		t.Fatalf("unexpected error stating blob deleted from source repository: %v", err)
 	}
 
-	d, err := sbs.Stat(ctx, desc.Digest)
+	d, err = sbs.Stat(ctx, desc.Digest)
 	if err == nil {
 		t.Fatalf("unexpected non-error stating deleted blob: %v", d)
 	}

View file

@@ -0,0 +1,66 @@
+package storage
+
+import (
+	"context"
+	"expvar"
+	"sync/atomic"
+
+	dcontext "github.com/docker/distribution/context"
+	"github.com/docker/distribution/registry/storage/cache"
+)
+
+type blobStatCollector struct {
+	metrics cache.Metrics
+}
+
+func (bsc *blobStatCollector) Hit() {
+	atomic.AddUint64(&bsc.metrics.Requests, 1)
+	atomic.AddUint64(&bsc.metrics.Hits, 1)
+}
+
+func (bsc *blobStatCollector) Miss() {
+	atomic.AddUint64(&bsc.metrics.Requests, 1)
+	atomic.AddUint64(&bsc.metrics.Misses, 1)
+}
+
+func (bsc *blobStatCollector) Metrics() cache.Metrics {
+	return bsc.metrics
+}
+
+func (bsc *blobStatCollector) Logger(ctx context.Context) cache.Logger {
+	return dcontext.GetLogger(ctx)
+}
+
+// blobStatterCacheMetrics keeps track of cache metrics for blob descriptor
+// cache requests. Note this is kept globally and made available via expvar.
+// For more detailed metrics, its recommend to instrument a particular cache
+// implementation.
+var blobStatterCacheMetrics cache.MetricsTracker = &blobStatCollector{}
+
+func init() {
+	registry := expvar.Get("registry")
+	if registry == nil {
+		registry = expvar.NewMap("registry")
+	}
+
+	cache := registry.(*expvar.Map).Get("cache")
+	if cache == nil {
+		cache = &expvar.Map{}
+		cache.(*expvar.Map).Init()
+		registry.(*expvar.Map).Set("cache", cache)
+	}
+
+	storage := cache.(*expvar.Map).Get("storage")
+	if storage == nil {
+		storage = &expvar.Map{}
+		storage.(*expvar.Map).Init()
+		cache.(*expvar.Map).Set("storage", storage)
+	}
+
+	storage.(*expvar.Map).Set("blobdescriptor", expvar.Func(func() interface{} {
+		// no need for synchronous access: the increments are atomic and
+		// during reading, we don't care if the data is up to date. The
+		// numbers will always *eventually* be reported correctly.
+		return blobStatterCacheMetrics
+	}))
+}

View file

@@ -152,6 +152,16 @@ func (bs *blobStore) readlink(ctx context.Context, path string) (digest.Digest,
 	return linked, nil
 }
 
+// resolve reads the digest link at path and returns the blob store path.
+func (bs *blobStore) resolve(ctx context.Context, path string) (string, error) {
+	dgst, err := bs.readlink(ctx, path)
+	if err != nil {
+		return "", err
+	}
+
+	return bs.path(dgst)
+}
+
 type blobStatter struct {
 	driver driver.StorageDriver
 }
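`resolve` above turns a digest link into a content-addressed blob path. A rough stdlib sketch of that digest-to-path step (the `blobPath` helper and the exact layout are illustrative approximations, not the registry's actual pathmapper):

```go
package main

import (
	"fmt"
	"strings"
)

// blobPath maps a digest like "sha256:<hex>" to a content-addressed
// location, following the registry's conventional
// /docker/registry/v2/blobs/<algo>/<first two hex chars>/<hex>/data shape.
func blobPath(dgst string) (string, error) {
	parts := strings.SplitN(dgst, ":", 2)
	if len(parts) != 2 || len(parts[1]) < 2 {
		return "", fmt.Errorf("invalid digest %q", dgst)
	}
	algo, hex := parts[0], parts[1]
	return fmt.Sprintf("/docker/registry/v2/blobs/%s/%s/%s/data", algo, hex[:2], hex), nil
}

func main() {
	p, _ := blobPath("sha256:3b1a7f") // hypothetical, truncated digest
	fmt.Println(p)
}
```

The two-character prefix directory keeps any single directory from accumulating millions of entries on large registries.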

View file

@@ -1,131 +0,0 @@
-package cache
-
-import (
-	"context"
-	"errors"
-	"testing"
-
-	"github.com/docker/distribution"
-	digest "github.com/opencontainers/go-digest"
-)
-
-func TestCacheSet(t *testing.T) {
-	cache := newTestStatter()
-	backend := newTestStatter()
-	st := NewCachedBlobStatter(cache, backend)
-	ctx := context.Background()
-
-	dgst := digest.Digest("dontvalidate")
-	_, err := st.Stat(ctx, dgst)
-	if err != distribution.ErrBlobUnknown {
-		t.Fatalf("Unexpected error %v, expected %v", err, distribution.ErrBlobUnknown)
-	}
-
-	desc := distribution.Descriptor{
-		Digest: dgst,
-	}
-	if err := backend.SetDescriptor(ctx, dgst, desc); err != nil {
-		t.Fatal(err)
-	}
-
-	actual, err := st.Stat(ctx, dgst)
-	if err != nil {
-		t.Fatal(err)
-	}
-	if actual.Digest != desc.Digest {
-		t.Fatalf("Unexpected descriptor %v, expected %v", actual, desc)
-	}
-
-	if len(cache.sets) != 1 || len(cache.sets[dgst]) == 0 {
-		t.Fatalf("Expected cache set")
-	}
-	if cache.sets[dgst][0].Digest != desc.Digest {
-		t.Fatalf("Unexpected descriptor %v, expected %v", cache.sets[dgst][0], desc)
-	}
-
-	desc2 := distribution.Descriptor{
-		Digest: digest.Digest("dontvalidate 2"),
-	}
-	cache.sets[dgst] = append(cache.sets[dgst], desc2)
-
-	actual, err = st.Stat(ctx, dgst)
-	if err != nil {
-		t.Fatal(err)
-	}
-	if actual.Digest != desc2.Digest {
-		t.Fatalf("Unexpected descriptor %v, expected %v", actual, desc)
-	}
-}
-
-func TestCacheError(t *testing.T) {
-	cache := newErrTestStatter(errors.New("cache error"))
-	backend := newTestStatter()
-	st := NewCachedBlobStatter(cache, backend)
-	ctx := context.Background()
-
-	dgst := digest.Digest("dontvalidate")
-	_, err := st.Stat(ctx, dgst)
-	if err != distribution.ErrBlobUnknown {
-		t.Fatalf("Unexpected error %v, expected %v", err, distribution.ErrBlobUnknown)
-	}
-
-	desc := distribution.Descriptor{
-		Digest: dgst,
-	}
-	if err := backend.SetDescriptor(ctx, dgst, desc); err != nil {
-		t.Fatal(err)
-	}
-
-	actual, err := st.Stat(ctx, dgst)
-	if err != nil {
-		t.Fatal(err)
-	}
-	if actual.Digest != desc.Digest {
-		t.Fatalf("Unexpected descriptor %v, expected %v", actual, desc)
-	}
-
-	if len(cache.sets) > 0 {
-		t.Fatalf("Set should not be called after stat error")
-	}
-}
-
-func newTestStatter() *testStatter {
-	return &testStatter{
-		stats: []digest.Digest{},
-		sets:  map[digest.Digest][]distribution.Descriptor{},
-	}
-}
-
-func newErrTestStatter(err error) *testStatter {
-	return &testStatter{
-		sets: map[digest.Digest][]distribution.Descriptor{},
-		err:  err,
-	}
-}
-
-type testStatter struct {
-	stats []digest.Digest
-	sets  map[digest.Digest][]distribution.Descriptor
-	err   error
-}
-
-func (s *testStatter) Stat(ctx context.Context, dgst digest.Digest) (distribution.Descriptor, error) {
-	if s.err != nil {
-		return distribution.Descriptor{}, s.err
-	}
-
-	if set := s.sets[dgst]; len(set) > 0 {
-		return set[len(set)-1], nil
-	}
-
-	return distribution.Descriptor{}, distribution.ErrBlobUnknown
-}
-
-func (s *testStatter) SetDescriptor(ctx context.Context, dgst digest.Digest, desc distribution.Descriptor) error {
-	s.sets[dgst] = append(s.sets[dgst], desc)
-	return s.err
-}
-
-func (s *testStatter) Clear(ctx context.Context, dgst digest.Digest) error {
-	return s.err
-}

View file

@@ -54,10 +54,6 @@ func checkBlobDescriptorCacheEmptyRepository(ctx context.Context, t *testing.T,
 		t.Fatalf("expected error checking for cache item with empty digest: %v", err)
 	}
 
-	if _, err := cache.Stat(ctx, "sha384:cba111111111111111111111111111111111111111111111111111111111111111111111111111111111111111111111"); err != distribution.ErrBlobUnknown {
-		t.Fatalf("expected unknown blob error with uncached repo: %v", err)
-	}
-
 	if _, err := cache.Stat(ctx, "sha384:abc111111111111111111111111111111111111111111111111111111111111111111111111111111111111111111111"); err != distribution.ErrBlobUnknown {
 		t.Fatalf("expected unknown blob error with empty repo: %v", err)
 	}
@@ -177,7 +173,8 @@ func checkBlobDescriptorCacheClear(ctx context.Context, t *testing.T, provider c
 		t.Error(err)
 	}
 
-	if _, err = cache.Stat(ctx, localDigest); err == nil {
+	desc, err = cache.Stat(ctx, localDigest)
+	if err == nil {
 		t.Fatalf("expected error statting deleted blob: %v", err)
 	}
 }

View file

@@ -4,14 +4,39 @@ import (
 	"context"
 
 	"github.com/docker/distribution"
-	dcontext "github.com/docker/distribution/context"
 	prometheus "github.com/docker/distribution/metrics"
 	"github.com/opencontainers/go-digest"
 )
 
+// Metrics is used to hold metric counters
+// related to the number of times a cache was
+// hit or missed.
+type Metrics struct {
+	Requests uint64
+	Hits     uint64
+	Misses   uint64
+}
+
+// Logger can be provided on the MetricsTracker to log errors.
+//
+// Usually, this is just a proxy to dcontext.GetLogger.
+type Logger interface {
+	Errorf(format string, args ...interface{})
+}
+
+// MetricsTracker represents a metric tracker
+// which simply counts the number of hits and misses.
+type MetricsTracker interface {
+	Hit()
+	Miss()
+	Metrics() Metrics
+	Logger(context.Context) Logger
+}
+
 type cachedBlobStatter struct {
 	cache   distribution.BlobDescriptorService
 	backend distribution.BlobDescriptorService
+	tracker MetricsTracker
 }
 
 var (
@@ -28,36 +53,47 @@ func NewCachedBlobStatter(cache distribution.BlobDescriptorService, backend dist
 	}
 }
 
+// NewCachedBlobStatterWithMetrics creates a new statter which prefers a cache and
+// falls back to a backend. Hits and misses will send to the tracker.
+func NewCachedBlobStatterWithMetrics(cache distribution.BlobDescriptorService, backend distribution.BlobDescriptorService, tracker MetricsTracker) distribution.BlobStatter {
+	return &cachedBlobStatter{
+		cache:   cache,
+		backend: backend,
+		tracker: tracker,
+	}
+}
+
 func (cbds *cachedBlobStatter) Stat(ctx context.Context, dgst digest.Digest) (distribution.Descriptor, error) {
 	cacheCount.WithValues("Request").Inc(1)
-	// try getting from cache
-	desc, cacheErr := cbds.cache.Stat(ctx, dgst)
-	if cacheErr == nil {
-		cacheCount.WithValues("Hit").Inc(1)
-		return desc, nil
+	desc, err := cbds.cache.Stat(ctx, dgst)
+	if err != nil {
+		if err != distribution.ErrBlobUnknown {
+			logErrorf(ctx, cbds.tracker, "error retrieving descriptor from cache: %v", err)
+		}
+
+		goto fallback
 	}
 
-	// couldn't get from cache; get from backend
-	desc, err := cbds.backend.Stat(ctx, dgst)
+	cacheCount.WithValues("Hit").Inc(1)
+	if cbds.tracker != nil {
+		cbds.tracker.Hit()
+	}
+	return desc, nil
+fallback:
+	cacheCount.WithValues("Miss").Inc(1)
+	if cbds.tracker != nil {
+		cbds.tracker.Miss()
+	}
+	desc, err = cbds.backend.Stat(ctx, dgst)
 	if err != nil {
 		return desc, err
 	}
 
-	if cacheErr == distribution.ErrBlobUnknown {
-		// cache doesn't have info. update it with info got from backend
-		cacheCount.WithValues("Miss").Inc(1)
-		if err := cbds.cache.SetDescriptor(ctx, dgst, desc); err != nil {
-			dcontext.GetLoggerWithField(ctx, "blob", dgst).WithError(err).Error("error from cache setting desc")
-		}
-		// we don't need to return cache error upstream if any. continue returning value from backend
-	} else {
-		// unknown error from cache. just log and error. do not store cache as it may be trigger many set calls
-		dcontext.GetLoggerWithField(ctx, "blob", dgst).WithError(cacheErr).Error("error from cache stat(ing) blob")
-		cacheCount.WithValues("Error").Inc(1)
+	if err := cbds.cache.SetDescriptor(ctx, dgst, desc); err != nil {
+		logErrorf(ctx, cbds.tracker, "error adding descriptor %v to cache: %v", desc.Digest, err)
 	}
 
-	return desc, nil
+	return desc, err
 }
 
 func (cbds *cachedBlobStatter) Clear(ctx context.Context, dgst digest.Digest) error {
@@ -75,7 +111,19 @@ func (cbds *cachedBlobStatter) Clear(ctx context.Context, dgst digest.Digest) er
 func (cbds *cachedBlobStatter) SetDescriptor(ctx context.Context, dgst digest.Digest, desc distribution.Descriptor) error {
 	if err := cbds.cache.SetDescriptor(ctx, dgst, desc); err != nil {
-		dcontext.GetLoggerWithField(ctx, "blob", dgst).WithError(err).Error("error from cache setting desc")
+		logErrorf(ctx, cbds.tracker, "error adding descriptor %v to cache: %v", desc.Digest, err)
 	}
 	return nil
 }
+
+func logErrorf(ctx context.Context, tracker MetricsTracker, format string, args ...interface{}) {
+	if tracker == nil {
+		return
+	}
+
+	logger := tracker.Logger(ctx)
+	if logger == nil {
+		return
+	}
+
+	logger.Errorf(format, args...)
+}
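The `MetricsTracker` interface on the v2.7 side (and the `blobStatCollector` earlier in this diff) is essentially an atomic hit/miss counter. A stdlib-only sketch of such a tracker, with type and field names of our choosing:

```go
package main

import (
	"fmt"
	"sync/atomic"
)

// Metrics mirrors the cache.Metrics struct in the diff above.
type Metrics struct {
	Requests, Hits, Misses uint64
}

// counter is a minimal MetricsTracker-style implementation, analogous
// to blobStatCollector: every Hit or Miss also bumps Requests.
type counter struct{ m Metrics }

func (c *counter) Hit() {
	atomic.AddUint64(&c.m.Requests, 1)
	atomic.AddUint64(&c.m.Hits, 1)
}

func (c *counter) Miss() {
	atomic.AddUint64(&c.m.Requests, 1)
	atomic.AddUint64(&c.m.Misses, 1)
}

func main() {
	c := &counter{}
	c.Hit()
	c.Miss()
	c.Miss()
	fmt.Println(c.m.Requests, c.m.Hits, c.m.Misses) // 3 1 2
}
```

The atomic increments make the tracker safe to share across concurrent `Stat` calls without a mutex.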

View file

@@ -1,69 +0,0 @@
-package metrics
-
-import (
-	"context"
-	"time"
-
-	"github.com/docker/distribution"
-	prometheus "github.com/docker/distribution/metrics"
-	"github.com/docker/distribution/registry/storage/cache"
-	"github.com/docker/go-metrics"
-	"github.com/opencontainers/go-digest"
-)
-
-type prometheusCacheProvider struct {
-	cache.BlobDescriptorCacheProvider
-	latencyTimer metrics.LabeledTimer
-}
-
-func NewPrometheusCacheProvider(wrap cache.BlobDescriptorCacheProvider, name, help string) cache.BlobDescriptorCacheProvider {
-	return &prometheusCacheProvider{
-		wrap,
-		// TODO: May want to have fine grained buckets since redis calls are generally <1ms and the default minimum bucket is 5ms.
-		prometheus.StorageNamespace.NewLabeledTimer(name, help, "operation"),
-	}
-}
-
-func (p *prometheusCacheProvider) Stat(ctx context.Context, dgst digest.Digest) (distribution.Descriptor, error) {
-	start := time.Now()
-	d, e := p.BlobDescriptorCacheProvider.Stat(ctx, dgst)
-	p.latencyTimer.WithValues("Stat").UpdateSince(start)
-	return d, e
-}
-
-func (p *prometheusCacheProvider) SetDescriptor(ctx context.Context, dgst digest.Digest, desc distribution.Descriptor) error {
-	start := time.Now()
-	e := p.BlobDescriptorCacheProvider.SetDescriptor(ctx, dgst, desc)
-	p.latencyTimer.WithValues("SetDescriptor").UpdateSince(start)
-	return e
-}
-
-type prometheusRepoCacheProvider struct {
-	distribution.BlobDescriptorService
-	latencyTimer metrics.LabeledTimer
-}
-
-func (p *prometheusRepoCacheProvider) Stat(ctx context.Context, dgst digest.Digest) (distribution.Descriptor, error) {
-	start := time.Now()
-	d, e := p.BlobDescriptorService.Stat(ctx, dgst)
-	p.latencyTimer.WithValues("RepoStat").UpdateSince(start)
-	return d, e
-}
-
-func (p *prometheusRepoCacheProvider) SetDescriptor(ctx context.Context, dgst digest.Digest, desc distribution.Descriptor) error {
-	start := time.Now()
-	e := p.BlobDescriptorService.SetDescriptor(ctx, dgst, desc)
-	p.latencyTimer.WithValues("RepoSetDescriptor").UpdateSince(start)
-	return e
-}
-
-func (p *prometheusCacheProvider) RepositoryScoped(repo string) (distribution.BlobDescriptorService, error) {
-	s, err := p.BlobDescriptorCacheProvider.RepositoryScoped(repo)
-	if err != nil {
-		return nil, err
-	}
-
-	return &prometheusRepoCacheProvider{
-		s,
-		p.latencyTimer,
-	}, nil
-}

View file

@@ -7,7 +7,6 @@ import (
 	"github.com/docker/distribution"
 	"github.com/docker/distribution/reference"
 	"github.com/docker/distribution/registry/storage/cache"
-	"github.com/docker/distribution/registry/storage/cache/metrics"
 	"github.com/garyburd/redigo/redis"
 	"github.com/opencontainers/go-digest"
 )
@@ -35,13 +34,9 @@ type redisBlobDescriptorService struct {
 // NewRedisBlobDescriptorCacheProvider returns a new redis-based
 // BlobDescriptorCacheProvider using the provided redis connection pool.
 func NewRedisBlobDescriptorCacheProvider(pool *redis.Pool) cache.BlobDescriptorCacheProvider {
-	return metrics.NewPrometheusCacheProvider(
-		&redisBlobDescriptorService{
-			pool: pool,
-		},
-		"cache_redis",
-		"Number of seconds taken by redis",
-	)
+	return &redisBlobDescriptorService{
+		pool: pool,
+	}
 }
 
 // RepositoryScoped returns the scoped cache.
@@ -186,10 +181,6 @@ func (rsrbds *repositoryScopedRedisBlobDescriptorService) Stat(ctx context.Conte
 	// We allow a per repository mediatype, let's look it up here.
 	mediatype, err := redis.String(conn.Do("HGET", rsrbds.blobDescriptorHashKey(dgst), "mediatype"))
 	if err != nil {
-		if err == redis.ErrNil {
-			return distribution.Descriptor{}, distribution.ErrBlobUnknown
-		}
-
 		return distribution.Descriptor{}, err
 	}
Some files were not shown because too many files have changed in this diff.