Compare commits: master...aws-vendor
No commits in common. "master" and "aws-vendor-update" have entirely different histories.
1564 changed files with 54690 additions and 437351 deletions

.github/CODE_OF_CONDUCT.md (vendored): deleted, 3 lines

## Docker Distribution Community Code of Conduct

Docker Distribution follows the [CNCF Code of Conduct](https://github.com/cncf/foundation/blob/master/code-of-conduct.md).

.gitignore (vendored): 1 changed line

@@ -35,4 +35,3 @@ bin/*
 # Editor/IDE specific files.
 *.sublime-project
 *.sublime-workspace
-.idea/*

.golangci.yml: deleted, 20 lines

linters:
  enable:
    - structcheck
    - varcheck
    - staticcheck
    - unconvert
    - gofmt
    - goimports
    - golint
    - ineffassign
    - vet
    - unused
    - misspell
  disable:
    - errcheck

run:
  deadline: 2m
  skip-dirs:
    - vendor

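master drives these linters through golangci-lint; a minimal invocation sketch (assuming golangci-lint is installed locally, which this diff does not show):

```sh
# From the repository root; golangci-lint discovers .golangci.yml on its own.
golangci-lint run ./...
```
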
.mailmap: 24 changed lines

@@ -1,9 +1,9 @@
 Stephen J Day <stephen.day@docker.com> Stephen Day <stevvooe@users.noreply.github.com>
 Stephen J Day <stephen.day@docker.com> Stephen Day <stevvooe@gmail.com>
 Olivier Gambier <olivier@docker.com> Olivier Gambier <dmp42@users.noreply.github.com>
 Brian Bland <brian.bland@docker.com> Brian Bland <r4nd0m1n4t0r@gmail.com>
 Brian Bland <brian.bland@docker.com> Brian Bland <brian.t.bland@gmail.com>
 Josh Hawn <josh.hawn@docker.com> Josh Hawn <jlhawn@berkeley.edu>
 Richard Scothern <richard.scothern@docker.com> Richard <richard.scothern@gmail.com>
 Richard Scothern <richard.scothern@docker.com> Richard Scothern <richard.scothern@gmail.com>
 Andrew Meredith <andymeredith@gmail.com> Andrew Meredith <kendru@users.noreply.github.com>
@@ -16,17 +16,3 @@ davidli <wenquan.li@hp.com> davidli <wenquan.li@hpe.com>
 Omer Cohen <git@omer.io> Omer Cohen <git@omerc.net>
 Eric Yang <windfarer@gmail.com> Eric Yang <Windfarer@users.noreply.github.com>
 Nikita Tarasov <nikita@mygento.ru> Nikita <luckyraul@users.noreply.github.com>
-Yu Wang <yuwa@microsoft.com> yuwaMSFT2 <yuwa@microsoft.com>
-Yu Wang <yuwa@microsoft.com> Yu Wang (UC) <yuwa@microsoft.com>
-Olivier Gambier <olivier@docker.com> dmp <dmp@loaner.local>
-Olivier Gambier <olivier@docker.com> Olivier <o+github@gambier.email>
-Olivier Gambier <olivier@docker.com> Olivier <dmp42@users.noreply.github.com>
-Elsan Li 李楠 <elsanli@tencent.com> elsanli(李楠) <elsanli@tencent.com>
-Rui Cao <ruicao@alauda.io> ruicao <ruicao@alauda.io>
-Gwendolynne Barr <gwendolynne.barr@docker.com> gbarr01 <gwendolynne.barr@docker.com>
-Haibing Zhou 周海兵 <zhouhaibing089@gmail.com> zhouhaibing089 <zhouhaibing089@gmail.com>
-Feng Honglin <tifayuki@gmail.com> tifayuki <tifayuki@gmail.com>
-Helen Xie <xieyulin821@harmonycloud.cn> Helen-xie <xieyulin821@harmonycloud.cn>
-Mike Brown <brownwm@us.ibm.com> Mike Brown <mikebrow@users.noreply.github.com>
-Manish Tomar <manish.tomar@docker.com> Manish Tomar <manishtomar@users.noreply.github.com>
-Sakeven Jiang <jc5930@sina.cn> sakeven <jc5930@sina.cn>

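The .mailmap entries above fold alternate author name and email spellings into one canonical identity. A quick way to see the effect locally (a generic git invocation, not something prescribed by this diff):

```sh
# Authors are listed with .mailmap applied, so duplicate spellings collapse into one entry.
git shortlog -se HEAD
```
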
.travis.yml: deleted, 56 lines

dist: bionic
sudo: required
# setup travis so that we can run containers for integration tests
services:
  - docker

jobs:
  include:
    - arch: amd64
    - arch: s390x

language: go

go:
  - "1.14.x"

go_import_path: github.com/docker/distribution

addons:
  apt:
    packages:
      - python-minimal


env:
  - TRAVIS_GOOS=linux DOCKER_BUILDTAGS="include_oss include_gcs" TRAVIS_CGO_ENABLED=1

before_install:
  - uname -r
  - sudo apt-get -q update

install:
  - cd /tmp && go get -u github.com/vbatts/git-validation
  # TODO: Add enforcement of license
  # - go get -u github.com/kunalkushwaha/ltag
  - cd $TRAVIS_BUILD_DIR

script:
  - export GOOS=$TRAVIS_GOOS
  - export CGO_ENABLED=$TRAVIS_CGO_ENABLED
  - DCO_VERBOSITY=-q script/validate/dco
  - GOOS=linux GO111MODULE=on script/setup/install-dev-tools
  - script/validate/vendor
  - go build -i .
  - make check
  - make build
  - make binaries
  # Currently takes too long
  #- if [ "$GOOS" = "linux" ]; then make test-race ; fi
  - if [ "$GOOS" = "linux" ]; then make coverage ; fi

after_success:
  - bash <(curl -s https://codecov.io/bash) -F linux

before_deploy:
  # Run tests with storage driver configurations

AUTHORS: new file, 147 lines

Aaron Lehmann <aaron.lehmann@docker.com>
Aaron Schlesinger <aschlesinger@deis.com>
Aaron Vinson <avinson.public@gmail.com>
Adam Enger <adamenger@gmail.com>
Adrian Mouat <adrian.mouat@gmail.com>
Ahmet Alp Balkan <ahmetalpbalkan@gmail.com>
Alex Chan <alex.chan@metaswitch.com>
Alex Elman <aelman@indeed.com>
Alexey Gladkov <gladkov.alexey@gmail.com>
allencloud <allen.sun@daocloud.io>
amitshukla <ashukla73@hotmail.com>
Amy Lindburg <amy.lindburg@docker.com>
Andrew Hsu <andrewhsu@acm.org>
Andrew Meredith <andymeredith@gmail.com>
Andrew T Nguyen <andrew.nguyen@docker.com>
Andrey Kostov <kostov.andrey@gmail.com>
Andy Goldstein <agoldste@redhat.com>
Anis Elleuch <vadmeste@gmail.com>
Anton Tiurin <noxiouz@yandex.ru>
Antonio Mercado <amercado@thinknode.com>
Antonio Murdaca <runcom@redhat.com>
Arien Holthuizen <aholthuizen@schubergphilis.com>
Arnaud Porterie <arnaud.porterie@docker.com>
Arthur Baars <arthur@semmle.com>
Asuka Suzuki <hello@tanksuzuki.com>
Avi Miller <avi.miller@oracle.com>
Ayose Cazorla <ayosec@gmail.com>
BadZen <dave.trombley@gmail.com>
Ben Firshman <ben@firshman.co.uk>
bin liu <liubin0329@gmail.com>
Brian Bland <brian.bland@docker.com>
burnettk <burnettk@gmail.com>
Carson A <ca@carsonoid.net>
Chris Dillon <squarism@gmail.com>
cyli <cyli@twistedmatrix.com>
Daisuke Fujita <dtanshi45@gmail.com>
Daniel Huhn <daniel@danielhuhn.de>
Darren Shepherd <darren@rancher.com>
Dave Trombley <dave.trombley@gmail.com>
Dave Tucker <dt@docker.com>
David Lawrence <david.lawrence@docker.com>
David Verhasselt <david@crowdway.com>
David Xia <dxia@spotify.com>
davidli <wenquan.li@hp.com>
Dejan Golja <dejan@golja.org>
Derek McGowan <derek@mcgstyle.net>
Diogo Mónica <diogo.monica@gmail.com>
DJ Enriquez <dj.enriquez@infospace.com>
Donald Huang <don.hcd@gmail.com>
Doug Davis <dug@us.ibm.com>
Eric Yang <windfarer@gmail.com>
Fabio Huser <fabio@fh1.ch>
farmerworking <farmerworking@gmail.com>
Felix Yan <felixonmars@archlinux.org>
Florentin Raud <florentin.raud@gmail.com>
Frederick F. Kautz IV <fkautz@alumni.cmu.edu>
gabriell nascimento <gabriell@bluesoft.com.br>
Gleb Schukin <gschukin@ptsecurity.com>
harche <p.harshal@gmail.com>
Henri Gomez <henri.gomez@gmail.com>
Hu Keping <hukeping@huawei.com>
Hua Wang <wanghua.humble@gmail.com>
HuKeping <hukeping@huawei.com>
Ian Babrou <ibobrik@gmail.com>
igayoso <igayoso@gmail.com>
Jack Griffin <jackpg14@gmail.com>
Jason Freidman <jason.freidman@gmail.com>
Jeff Nickoloff <jeff@allingeek.com>
Jessie Frazelle <jessie@docker.com>
jhaohai <jhaohai@foxmail.com>
Jianqing Wang <tsing@jianqing.org>
John Starks <jostarks@microsoft.com>
Jon Johnson <jonjohnson@google.com>
Jon Poler <jonathan.poler@apcera.com>
Jonathan Boulle <jonathanboulle@gmail.com>
Jordan Liggitt <jliggitt@redhat.com>
Josh Hawn <josh.hawn@docker.com>
Julien Fernandez <julien.fernandez@gmail.com>
Ke Xu <leonhartx.k@gmail.com>
Keerthan Mala <kmala@engineyard.com>
Kelsey Hightower <kelsey.hightower@gmail.com>
Kenneth Lim <kennethlimcp@gmail.com>
Kenny Leung <kleung@google.com>
Li Yi <denverdino@gmail.com>
Liu Hua <sdu.liu@huawei.com>
liuchang0812 <liuchang0812@gmail.com>
Louis Kottmann <louis.kottmann@gmail.com>
Luke Carpenter <x@rubynerd.net>
Mary Anthony <mary@docker.com>
Matt Bentley <mbentley@mbentley.net>
Matt Duch <matt@learnmetrics.com>
Matt Moore <mattmoor@google.com>
Matt Robenolt <matt@ydekproductions.com>
Michael Prokop <mika@grml.org>
Michal Minar <miminar@redhat.com>
Miquel Sabaté <msabate@suse.com>
Morgan Bauer <mbauer@us.ibm.com>
moxiegirl <mary@docker.com>
Nathan Sullivan <nathan@nightsys.net>
nevermosby <robolwq@qq.com>
Nghia Tran <tcnghia@gmail.com>
Nikita Tarasov <nikita@mygento.ru>
Nuutti Kotivuori <nuutti.kotivuori@poplatek.fi>
Oilbeater <liumengxinfly@gmail.com>
Olivier Gambier <olivier@docker.com>
Olivier Jacques <olivier.jacques@hp.com>
Omer Cohen <git@omer.io>
Patrick Devine <patrick.devine@docker.com>
Phil Estes <estesp@linux.vnet.ibm.com>
Philip Misiowiec <philip@atlashealth.com>
Richard Scothern <richard.scothern@docker.com>
Rodolfo Carvalho <rhcarvalho@gmail.com>
Rusty Conover <rusty@luckydinosaur.com>
Sean Boran <Boran@users.noreply.github.com>
Sebastiaan van Stijn <github@gone.nl>
Serge Dubrouski <sergeyfd@gmail.com>
Sharif Nassar <sharif@mrwacky.com>
Shawn Falkner-Horine <dreadpirateshawn@gmail.com>
Shreyas Karnik <karnik.shreyas@gmail.com>
Simon Thulbourn <simon+github@thulbourn.com>
Spencer Rinehart <anubis@overthemonkey.com>
Stefan Majewsky <stefan.majewsky@sap.com>
Stefan Weil <sw@weilnetz.de>
Stephen J Day <stephen.day@docker.com>
Sungho Moon <sungho.moon@navercorp.com>
Sven Dowideit <SvenDowideit@home.org.au>
Sylvain Baubeau <sbaubeau@redhat.com>
Ted Reed <ted.reed@gmail.com>
tgic <farmer1992@gmail.com>
Thomas Sjögren <konstruktoid@users.noreply.github.com>
Tianon Gravi <admwiggin@gmail.com>
Tibor Vass <teabee89@gmail.com>
Tonis Tiigi <tonistiigi@gmail.com>
Tony Holdstock-Brown <tony@docker.com>
Trevor Pounds <trevor.pounds@gmail.com>
Troels Thomsen <troels@thomsen.io>
Vincent Batts <vbatts@redhat.com>
Vincent Demeester <vincent@sbr.pm>
Vincent Giersch <vincent.giersch@ovh.net>
W. Trevor King <wking@tremily.us>
weiyuan.yl <weiyuan.yl@alibaba-inc.com>
xg.song <xg.song@venusource.com>
xiekeyang <xiekeyang@huawei.com>
Yann ROBERT <yann.robert@anantaplex.fr>
yuzou <zouyu7@huawei.com>
zhouhaibing089 <zhouhaibing089@gmail.com>
姜继忠 <jizhong.jiangjz@alibaba-inc.com>

BUILDING.md: 14 changed lines

@@ -11,7 +11,7 @@ Most people should use the [official Registry docker image](https://hub.docker.c
 People looking for advanced operational use cases might consider rolling their own image with a custom Dockerfile inheriting `FROM registry:2`.

-OS X users who want to run natively can do so following [the instructions here](https://github.com/docker/docker.github.io/blob/master/registry/recipes/osx-setup-guide.md).
+OS X users who want to run natively can do so following [the instructions here](osx-setup-guide.md).

 ### Gotchas

@@ -71,7 +71,9 @@ commands, such as `go test`, should work per package (please see
 A `Makefile` has been provided as a convenience to support repeatable builds.
 Please install the following into `GOPATH` for it to work:

-    go get github.com/golang/lint/golint
+    go get github.com/tools/godep github.com/golang/lint/golint

+**TODO(stevvooe):** Add a `make setup` command to Makefile to run this. Have to think about how to interact with Godeps properly.
+
 Once these commands are available in the `GOPATH`, run `make` to get a full
 build:

@@ -83,7 +85,7 @@ build:
 + lint
 + build
 github.com/docker/docker/vendor/src/code.google.com/p/go/src/pkg/archive/tar
-github.com/sirupsen/logrus
+github.com/Sirupsen/logrus
 github.com/docker/libtrust
 ...
 github.com/yvasiyarov/gorelic

@@ -103,12 +105,12 @@ build:
 + /Users/sday/go/src/github.com/docker/distribution/bin/registry-api-descriptor-template
 + binaries

-The above provides a repeatable build using the contents of the vendor
-directory. This includes formatting, vetting, linting, building,
+The above provides a repeatable build using the contents of the vendored
+Godeps directory. This includes formatting, vetting, linting, building,
 testing and generating tagged binaries. We can verify this worked by running
 the registry binary generated in the "./bin" directory:

-    $ ./bin/registry --version
+    $ ./bin/registry -version
     ./bin/registry github.com/docker/distribution v2.0.0-alpha.2-80-g16d8b2c.m

 ### Optional build tags

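The BUILDING.md text above boils down to installing the lint prerequisite into GOPATH and running make; a condensed sketch of the master-branch flow it describes (nothing beyond what the document itself names):

```sh
go get github.com/golang/lint/golint   # prerequisite named in BUILDING.md (master)
make                                   # formatting, vetting, linting, building, testing, tagged binaries
./bin/registry --version               # verify the generated binary
```
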
CHANGELOG.md

@@ -2,8 +2,8 @@

 ## 2.5.0 (2016-06-14)

-#### Storage
-- Ensure uploads directory is cleaned after upload is committed
+### Storage
+- Ensure uploads directory is cleaned after upload is commited
 - Add ability to cap concurrent operations in filesystem driver
 - S3: Add 'us-gov-west-1' to the valid region list
 - Swift: Handle ceph not returning Last-Modified header for HEAD requests

@@ -23,11 +23,13 @@
 - Update the auth spec scope grammar to reflect the fact that hostnames are optionally supported
 - Clarify API documentation around catalog fetch behavior

-#### API
+### API
 - Support returning HTTP 429 (Too Many Requests)

-#### Documentation
+### Documentation
 - Update auth documentation examples to show "expires in" as int

-#### Docker Image
+### Docker Image
 - Use Alpine Linux as base image

CONTRIBUTING.md: 147 changed lines

@@ -1,15 +1,14 @@
 # Contributing to the registry

 ## Before reporting an issue...

 ### If your problem is with...

-- automated builds or your [Docker Hub](https://hub.docker.com/) account
-  - Report it to [Hub Support](https://hub.docker.com/support/)
-- Distributions of Docker for desktop or Linux
-  - Report [Mac Desktop issues](https://github.com/docker/for-mac)
-  - Report [Windows Desktop issues](https://github.com/docker/for-win)
-  - Report [Linux issues](https://github.com/docker/for-linux)
+- automated builds
+- your account on the [Docker Hub](https://hub.docker.com/)
+- any other [Docker Hub](https://hub.docker.com/) issue
+
+Then please do not report your issue here - you should instead report it to [https://support.docker.com](https://support.docker.com)

 ### If you...

@@ -17,16 +16,10 @@
 - can't figure out something
 - are not sure what's going on or what your problem is

-Please ask first in the #distribution channel on Docker community slack.
-[Click here for an invite to Docker community slack](https://dockr.ly/slack)
-
-### Reporting security issues
-
-The Docker maintainers take security seriously. If you discover a security
-issue, please bring it to their attention right away!
-
-Please **DO NOT** file a public issue, instead send your report privately to
-[security@docker.com](mailto:security@docker.com).
+Then please do not open an issue here yet - you should first try one of the following support forums:
+
+ - irc: #docker-distribution on freenode
+ - mailing-list: <distribution@dockerproject.org> or https://groups.google.com/a/dockerproject.org/forum/#!forum/distribution

 ## Reporting an issue properly

@@ -34,7 +27,7 @@ By following these simple rules you will get better and faster feedback on your

 - search the bugtracker for an already reported issue

 ### If you found an issue that describes your problem:

 - please read other user comments first, and confirm this is the same issue: a given error condition might be indicative of different problems - you may also find a workaround in the comments
 - please refrain from adding "same thing here" or "+1" comments

@@ -50,7 +43,7 @@ By following these simple rules you will get better and faster feedback on your
 2. copy the output of:
    - `docker version`
    - `docker info`
-   - `docker exec <registry-container> registry --version`
+   - `docker exec <registry-container> registry -version`
 3. copy the command line you used to launch your Registry
 4. restart your docker daemon in debug mode (add `-D` to the daemon launch arguments)
 5. reproduce your problem and get your docker daemon logs showing the error

@@ -58,72 +51,90 @@ By following these simple rules you will get better and faster feedback on your
 7. provide any relevant detail about your specific Registry configuration (e.g., storage backend used)
 8. indicate if you are using an enterprise proxy, Nginx, or anything else between you and your Registry

-## Contributing Code
+## Contributing a patch for a known bug, or a small correction

-Contributions should be made via pull requests. Pull requests will be reviewed
-by one or more maintainers or reviewers and merged when acceptable.
-
 You should follow the basic GitHub workflow:

-1. Use your own [fork](https://help.github.com/en/articles/about-forks)
-2. Create your [change](https://github.com/containerd/project/blob/master/CONTRIBUTING.md#successful-changes)
-3. Test your code
-4. [Commit](https://github.com/containerd/project/blob/master/CONTRIBUTING.md#commit-messages) your work, always [sign your commits](https://github.com/containerd/project/blob/master/CONTRIBUTING.md#commit-messages)
-5. Push your change to your fork and create a [Pull Request](https://help.github.com/en/github/collaborating-with-issues-and-pull-requests/creating-a-pull-request-from-a-fork)
+1. fork
+2. commit a change
+3. make sure the tests pass
+4. PR

-Refer to [containerd's contribution guide](https://github.com/containerd/project/blob/master/CONTRIBUTING.md#successful-changes)
-for tips on creating a successful contribution.
+Additionally, you must [sign your commits](https://github.com/docker/docker/blob/master/CONTRIBUTING.md#sign-your-work). It's very simple:

-## Sign your work
+- configure your name with git: `git config user.name "Real Name" && git config user.email mail@example.com`
+- sign your commits using `-s`: `git commit -s -m "My commit"`

-The sign-off is a simple line at the end of the explanation for the patch. Your
-signature certifies that you wrote the patch or otherwise have the right to pass
-it on as an open-source patch. The rules are pretty simple: if you can certify
-the below (from [developercertificate.org](http://developercertificate.org/)):
+Some simple rules to ensure quick merge:

-```
-Developer Certificate of Origin
-Version 1.1
+- clearly point to the issue(s) you want to fix in your PR comment (e.g., `closes #12345`)
+- prefer multiple (smaller) PRs addressing individual issues over a big one trying to address multiple issues at once
+- if you need to amend your PR following comments, please squash instead of adding more commits

-Copyright (C) 2004, 2006 The Linux Foundation and its contributors.
-660 York Street, Suite 102,
-San Francisco, CA 94110 USA
+## Contributing new features

-Everyone is permitted to copy and distribute verbatim copies of this
-license document, but changing it is not allowed.
+You are heavily encouraged to first discuss what you want to do. You can do so on the irc channel, or by opening an issue that clearly describes the use case you want to fulfill, or the problem you are trying to solve.

-Developer's Certificate of Origin 1.1
+If this is a major new feature, you should then submit a proposal that describes your technical solution and reasoning.
+If you did discuss it first, this will likely be greenlighted very fast. It's advisable to address all feedback on this proposal before starting actual work.

-By making a contribution to this project, I certify that:
+Then you should submit your implementation, clearly linking to the issue (and possible proposal).

-(a) The contribution was created in whole or in part by me and I
-    have the right to submit it under the open source license
-    indicated in the file; or
+Your PR will be reviewed by the community, then ultimately by the project maintainers, before being merged.

-(b) The contribution is based upon previous work that, to the best
-    of my knowledge, is covered under an appropriate open source
-    license and I have the right under that license to submit that
-    work with modifications, whether created in whole or in part
-    by me, under the same open source license (unless I am
-    permitted to submit under a different license), as indicated
-    in the file; or
+It's mandatory to:

-(c) The contribution was provided directly to me by some other
-    person who certified (a), (b) or (c) and I have not modified
-    it.
+- interact respectfully with other community members and maintainers - more generally, you are expected to abide by the [Docker community rules](https://github.com/docker/docker/blob/master/CONTRIBUTING.md#docker-community-guidelines)
+- address maintainers' comments and modify your submission accordingly
+- write tests for any new code

-(d) I understand and agree that this project and the contribution
-    are public and that a record of the contribution (including all
-    personal information I submit with it, including my sign-off) is
-    maintained indefinitely and may be redistributed consistent with
-    this project or the open source license(s) involved.
-```
+Complying to these simple rules will greatly accelerate the review process, and will ensure you have a pleasant experience in contributing code to the Registry.

-Then you just add a line to every git commit message:
+Have a look at a great, successful contribution: the [Swift driver PR](https://github.com/docker/distribution/pull/493)

-    Signed-off-by: Joe Smith <joe.smith@email.com>
+## Coding Style

-Use your real name (sorry, no pseudonyms or anonymous contributions.)
+Unless explicitly stated, we follow all coding guidelines from the Go
+community. While some of these standards may seem arbitrary, they somehow seem
+to result in a solid, consistent codebase.

-If you set your `user.name` and `user.email` git configs, you can sign your
-commit automatically with `git commit -s`.
+It is possible that the code base does not currently comply with these
+guidelines. We are not looking for a massive PR that fixes this, since that
+goes against the spirit of the guidelines. All new contributions should make a
+best effort to clean up and make the code base better than they left it.
+Obviously, apply your best judgement. Remember, the goal here is to make the
+code base easier for humans to navigate and understand. Always keep that in
+mind when nudging others to comply.
+
+The rules:
+
+1. All code should be formatted with `gofmt -s`.
+2. All code should pass the default levels of
+   [`golint`](https://github.com/golang/lint).
+3. All code should follow the guidelines covered in [Effective
+   Go](http://golang.org/doc/effective_go.html) and [Go Code Review
+   Comments](https://github.com/golang/go/wiki/CodeReviewComments).
+4. Comment the code. Tell us the why, the history and the context.
+5. Document _all_ declarations and methods, even private ones. Declare
+   expectations, caveats and anything else that may be important. If a type
+   gets exported, having the comments already there will ensure it's ready.
+6. Variable name length should be proportional to its context and no longer.
+   `noCommaALongVariableNameLikeThisIsNotMoreClearWhenASimpleCommentWouldDo`.
+   In practice, short methods will have short variable names and globals will
+   have longer names.
+7. No underscores in package names. If you need a compound name, step back,
+   and re-examine why you need a compound name. If you still think you need a
+   compound name, lose the underscore.
+8. No utils or helpers packages. If a function is not general enough to
+   warrant its own package, it has not been written generally enough to be a
+   part of a util package. Just leave it unexported and well-documented.
+9. All tests should run with `go test` and outside tooling should not be
+   required. No, we don't need another unit testing framework. Assertion
+   packages are acceptable if they provide _real_ incremental value.
+10. Even though we call these "rules" above, they are actually just
+    guidelines. Since you've read all the rules, you now know that.
+
+If you are having trouble getting into the mood of idiomatic Go, we recommend
+reading through [Effective Go](http://golang.org/doc/effective_go.html). The
+[Go Blog](http://blog.golang.org/) is also a great resource. Drinking the
+kool-aid is a lot easier than going thirsty.

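Both versions of CONTRIBUTING.md above require signed-off commits; a minimal sketch of the workflow they describe (name and email are placeholders):

```sh
git config user.name "Real Name"
git config user.email mail@example.com
# -s appends a "Signed-off-by: Real Name <mail@example.com>" trailer to the commit message.
git commit -s -m "My commit"
```
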
Dockerfile: 25 changed lines

@@ -1,30 +1,17 @@
-ARG GO_VERSION=1.13.8
-
-FROM golang:${GO_VERSION}-alpine3.11 AS build
+FROM golang:1.6-alpine

 ENV DISTRIBUTION_DIR /go/src/github.com/docker/distribution
-ENV BUILDTAGS include_oss include_gcs
-
-ARG GOOS=linux
-ARG GOARCH=amd64
-ARG GOARM=6
-ARG VERSION
-ARG REVISION
-
-RUN set -ex \
-    && apk add --no-cache make git file
+ENV DOCKER_BUILDTAGS include_oss include_gcs

 WORKDIR $DISTRIBUTION_DIR
 COPY . $DISTRIBUTION_DIR
-RUN CGO_ENABLED=0 make PREFIX=/go clean binaries && file ./bin/registry | grep "statically linked"
-
-FROM alpine:3.11
+COPY cmd/registry/config-dev.yml /etc/docker/registry/config.yml

 RUN set -ex \
-    && apk add --no-cache ca-certificates apache2-utils
+    && apk add --no-cache make git

-COPY cmd/registry/config-dev.yml /etc/docker/registry/config.yml
-COPY --from=build /go/src/github.com/docker/distribution/bin/registry /bin/registry
+RUN make PREFIX=/go clean binaries

 VOLUME ["/var/lib/registry"]
 EXPOSE 5000
 ENTRYPOINT ["registry"]

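The master Dockerfile above is a multi-stage build driven by build arguments; a usage sketch (the image tag and argument values are illustrative assumptions, not taken from this diff):

```sh
# Build the registry image from the repository root, overriding ARGs declared in the master Dockerfile.
docker build \
  --build-arg GO_VERSION=1.13.8 \
  --build-arg GOOS=linux \
  --build-arg GOARCH=amd64 \
  -t registry:dev .
```
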
GOVERNANCE.md: deleted, 144 lines

# docker/distribution Project Governance

Docker distribution abides by the [CNCF Code of Conduct](https://github.com/cncf/foundation/blob/master/code-of-conduct.md).

For specific guidance on practical contribution steps please
see our [CONTRIBUTING.md](./CONTRIBUTING.md) guide.

## Maintainership

There are different types of maintainers, with different responsibilities, but
all maintainers have 3 things in common:

1) They share responsibility in the project's success.
2) They have made a long-term, recurring time investment to improve the project.
3) They spend that time doing whatever needs to be done, not necessarily what
is the most interesting or fun.

Maintainers are often under-appreciated, because their work is harder to appreciate.
It's easy to appreciate a really cool and technically advanced feature. It's harder
to appreciate the absence of bugs, the slow but steady improvement in stability,
or the reliability of a release process. But those things distinguish a good
project from a great one.

## Reviewers

A reviewer is a core role within the project.
They share in reviewing issues and pull requests and their LGTM counts towards the
required LGTM count to merge a code change into the project.

Reviewers are part of the organization but do not have write access.
Becoming a reviewer is a core aspect in the journey to becoming a maintainer.

## Adding maintainers

Maintainers are first and foremost contributors that have shown they are
committed to the long term success of a project. Contributors wanting to become
maintainers are expected to be deeply involved in contributing code, pull
request review, and triage of issues in the project for more than three months.

Just contributing does not make you a maintainer, it is about building trust
with the current maintainers of the project and being a person that they can
depend on and trust to make decisions in the best interest of the project.

Periodically, the existing maintainers curate a list of contributors that have
shown regular activity on the project over the prior months. From this list,
maintainer candidates are selected and proposed in a pull request or a
maintainers communication channel.

After a candidate has been announced to the maintainers, the existing
maintainers are given five business days to discuss the candidate, raise
objections and cast their vote. Votes may take place on the communication
channel or via pull request comment. Candidates must be approved by at least 66%
of the current maintainers by adding their vote on the mailing list. The
reviewer role has the same process but only requires 33% of current maintainers.
Only maintainers of the repository that the candidate is proposed for are
allowed to vote.

If a candidate is approved, a maintainer will contact the candidate to invite
the candidate to open a pull request that adds the contributor to the
MAINTAINERS file. The voting process may take place inside a pull request if a
maintainer has already discussed the candidacy with the candidate and a
maintainer is willing to be a sponsor by opening the pull request. The candidate
becomes a maintainer once the pull request is merged.

## Stepping down policy

Life priorities, interests, and passions can change. If you're a maintainer but
feel you must remove yourself from the list, inform other maintainers that you
intend to step down, and if possible, help find someone to pick up your work.
At the very least, ensure your work can be continued where you left off.

After you've informed other maintainers, create a pull request to remove
yourself from the MAINTAINERS file.

## Removal of inactive maintainers

Similar to the procedure for adding new maintainers, existing maintainers can
be removed from the list if they do not show significant activity on the
project. Periodically, the maintainers review the list of maintainers and their
activity over the last three months.

If a maintainer has shown insufficient activity over this period, a neutral
person will contact the maintainer to ask if they want to continue being
a maintainer. If the maintainer decides to step down as a maintainer, they
open a pull request to be removed from the MAINTAINERS file.

If the maintainer wants to remain a maintainer, but is unable to perform the
required duties they can be removed with a vote of at least 66% of the current
maintainers. In this case, maintainers should first propose the change to
maintainers via the maintainers communication channel, then open a pull request
for voting. The voting period is five business days. The voting pull request
should not come as a surpise to any maintainer and any discussion related to
performance must not be discussed on the pull request.

## How are decisions made?

Docker distribution is an open-source project with an open design philosophy.
This means that the repository is the source of truth for EVERY aspect of the
project, including its philosophy, design, road map, and APIs. *If it's part of
the project, it's in the repo. If it's in the repo, it's part of the project.*

As a result, all decisions can be expressed as changes to the repository. An
implementation change is a change to the source code. An API change is a change
to the API specification. A philosophy change is a change to the philosophy
manifesto, and so on.

All decisions affecting distribution, big and small, follow the same 3 steps:

* Step 1: Open a pull request. Anyone can do this.

* Step 2: Discuss the pull request. Anyone can do this.

* Step 3: Merge or refuse the pull request. Who does this depends on the nature
of the pull request and which areas of the project it affects.

## Helping contributors with the DCO

The [DCO or `Sign your work`](./CONTRIBUTING.md#sign-your-work)
requirement is not intended as a roadblock or speed bump.

Some contributors are not as familiar with `git`, or have used a web
based editor, and thus asking them to `git commit --amend -s` is not the best
way forward.

In this case, maintainers can update the commits based on clause (c) of the DCO.
The most trivial way for a contributor to allow the maintainer to do this, is to
add a DCO signature in a pull requests's comment, or a maintainer can simply
note that the change is sufficiently trivial that it does not substantially
change the existing contribution - i.e., a spelling change.

When you add someone's DCO, please also add your own to keep a log.

## I'm a maintainer. Should I make pull requests too?

Yes. Nobody should ever push to master directly. All changes should be
made through a pull request.

## Conflict Resolution

If you have a technical dispute that you feel has reached an impasse with a
subset of the community, any contributor may open an issue, specifically
calling for a resolution vote of the current core maintainers to resolve the
dispute. The same voting quorums required (2/3) for adding and removing
maintainers will apply to conflict resolution.

Godeps/Godeps.json (generated): new file, 458 lines

{
  "ImportPath": "github.com/docker/distribution",
  "GoVersion": "go1.6",
  "GodepVersion": "v74",
  "Packages": ["./..."],
  "Deps": [
    {"ImportPath": "github.com/Azure/azure-sdk-for-go/storage", "Comment": "v1.2-334-g95361a2", "Rev": "95361a2573b1fa92a00c5fc2707a80308483c6f9"},
    {"ImportPath": "github.com/Sirupsen/logrus", "Comment": "v0.7.3", "Rev": "55eb11d21d2a31a3cc93838241d04800f52e823d"},
    {"ImportPath": "github.com/Sirupsen/logrus/formatters/logstash", "Comment": "v0.7.3", "Rev": "55eb11d21d2a31a3cc93838241d04800f52e823d"},
    {"ImportPath": "github.com/aws/aws-sdk-go/aws", "Comment": "v1.2.4", "Rev": "90dec2183a5f5458ee79cbaf4b8e9ab910bc81a6"},
    {"ImportPath": "github.com/aws/aws-sdk-go/aws/awserr", "Comment": "v1.2.4", "Rev": "90dec2183a5f5458ee79cbaf4b8e9ab910bc81a6"},
    {"ImportPath": "github.com/aws/aws-sdk-go/aws/awsutil", "Comment": "v1.2.4", "Rev": "90dec2183a5f5458ee79cbaf4b8e9ab910bc81a6"},
    {"ImportPath": "github.com/aws/aws-sdk-go/aws/client", "Comment": "v1.2.4", "Rev": "90dec2183a5f5458ee79cbaf4b8e9ab910bc81a6"},
    {"ImportPath": "github.com/aws/aws-sdk-go/aws/client/metadata", "Comment": "v1.2.4", "Rev": "90dec2183a5f5458ee79cbaf4b8e9ab910bc81a6"},
    {"ImportPath": "github.com/aws/aws-sdk-go/aws/corehandlers", "Comment": "v1.2.4", "Rev": "90dec2183a5f5458ee79cbaf4b8e9ab910bc81a6"},
    {"ImportPath": "github.com/aws/aws-sdk-go/aws/credentials", "Comment": "v1.2.4", "Rev": "90dec2183a5f5458ee79cbaf4b8e9ab910bc81a6"},
    {"ImportPath": "github.com/aws/aws-sdk-go/aws/credentials/ec2rolecreds", "Comment": "v1.2.4", "Rev": "90dec2183a5f5458ee79cbaf4b8e9ab910bc81a6"},
    {"ImportPath": "github.com/aws/aws-sdk-go/aws/defaults", "Comment": "v1.2.4", "Rev": "90dec2183a5f5458ee79cbaf4b8e9ab910bc81a6"},
    {"ImportPath": "github.com/aws/aws-sdk-go/aws/ec2metadata", "Comment": "v1.2.4", "Rev": "90dec2183a5f5458ee79cbaf4b8e9ab910bc81a6"},
    {"ImportPath": "github.com/aws/aws-sdk-go/aws/request", "Comment": "v1.2.4", "Rev": "90dec2183a5f5458ee79cbaf4b8e9ab910bc81a6"},
    {"ImportPath": "github.com/aws/aws-sdk-go/aws/session", "Comment": "v1.2.4", "Rev": "90dec2183a5f5458ee79cbaf4b8e9ab910bc81a6"},
    {"ImportPath": "github.com/aws/aws-sdk-go/aws/signer/v4", "Comment": "v1.2.4", "Rev": "90dec2183a5f5458ee79cbaf4b8e9ab910bc81a6"},
    {"ImportPath": "github.com/aws/aws-sdk-go/private/endpoints", "Comment": "v1.2.4", "Rev": "90dec2183a5f5458ee79cbaf4b8e9ab910bc81a6"},
    {"ImportPath": "github.com/aws/aws-sdk-go/private/protocol", "Comment": "v1.2.4", "Rev": "90dec2183a5f5458ee79cbaf4b8e9ab910bc81a6"},
    {"ImportPath": "github.com/aws/aws-sdk-go/private/protocol/query", "Comment": "v1.2.4", "Rev": "90dec2183a5f5458ee79cbaf4b8e9ab910bc81a6"},
    {"ImportPath": "github.com/aws/aws-sdk-go/private/protocol/query/queryutil", "Comment": "v1.2.4", "Rev": "90dec2183a5f5458ee79cbaf4b8e9ab910bc81a6"},
    {"ImportPath": "github.com/aws/aws-sdk-go/private/protocol/rest", "Comment": "v1.2.4", "Rev": "90dec2183a5f5458ee79cbaf4b8e9ab910bc81a6"},
    {"ImportPath": "github.com/aws/aws-sdk-go/private/protocol/restxml", "Comment": "v1.2.4", "Rev": "90dec2183a5f5458ee79cbaf4b8e9ab910bc81a6"},
    {"ImportPath": "github.com/aws/aws-sdk-go/private/protocol/xml/xmlutil", "Comment": "v1.2.4", "Rev": "90dec2183a5f5458ee79cbaf4b8e9ab910bc81a6"},
    {"ImportPath": "github.com/aws/aws-sdk-go/private/waiter", "Comment": "v1.2.4", "Rev": "90dec2183a5f5458ee79cbaf4b8e9ab910bc81a6"},
    {"ImportPath": "github.com/aws/aws-sdk-go/service/cloudfront/sign", "Comment": "v1.2.4", "Rev": "90dec2183a5f5458ee79cbaf4b8e9ab910bc81a6"},
    {"ImportPath": "github.com/aws/aws-sdk-go/service/s3", "Comment": "v1.2.4", "Rev": "90dec2183a5f5458ee79cbaf4b8e9ab910bc81a6"},
    {"ImportPath": "github.com/aws/aws-sdk-go/vendor/github.com/go-ini/ini", "Comment": "v1.2.4", "Rev": "90dec2183a5f5458ee79cbaf4b8e9ab910bc81a6"},
    {"ImportPath": "github.com/aws/aws-sdk-go/vendor/github.com/jmespath/go-jmespath", "Comment": "v1.2.4", "Rev": "90dec2183a5f5458ee79cbaf4b8e9ab910bc81a6"},
    {"ImportPath": "github.com/bugsnag/bugsnag-go", "Comment": "v1.0.2-5-gb1d1530", "Rev": "b1d153021fcd90ca3f080db36bec96dc690fb274"},
    {"ImportPath": "github.com/bugsnag/bugsnag-go/errors", "Comment": "v1.0.2-5-gb1d1530", "Rev": "b1d153021fcd90ca3f080db36bec96dc690fb274"},
    {"ImportPath": "github.com/bugsnag/osext", "Rev": "0dd3f918b21bec95ace9dc86c7e70266cfc5c702"},
    {"ImportPath": "github.com/bugsnag/panicwrap", "Comment": "1.0.0-2-ge2c2850", "Rev": "e2c28503fcd0675329da73bf48b33404db873782"},
    {"ImportPath": "github.com/denverdino/aliyungo/common", "Rev": "6ffb587da9da6d029d0ce517b85fecc82172d502"},
    {"ImportPath": "github.com/denverdino/aliyungo/oss", "Rev": "6ffb587da9da6d029d0ce517b85fecc82172d502"},
    {"ImportPath": "github.com/denverdino/aliyungo/util", "Rev": "6ffb587da9da6d029d0ce517b85fecc82172d502"},
    {"ImportPath": "github.com/docker/goamz/aws", "Rev": "f0a21f5b2e12f83a505ecf79b633bb2035cf6f85"},
    {"ImportPath": "github.com/docker/goamz/s3", "Rev": "f0a21f5b2e12f83a505ecf79b633bb2035cf6f85"},
    {"ImportPath": "github.com/docker/libtrust", "Rev": "fa567046d9b14f6aa788882a950d69651d230b21"},
    {"ImportPath": "github.com/garyburd/redigo/internal", "Rev": "535138d7bcd717d6531c701ef5933d98b1866257"},
    {"ImportPath": "github.com/garyburd/redigo/redis", "Rev": "535138d7bcd717d6531c701ef5933d98b1866257"},
    {"ImportPath": "github.com/golang/protobuf/proto", "Rev": "8d92cf5fc15a4382f8964b08e1f42a75c0591aa3"},
    {"ImportPath": "github.com/gorilla/context", "Rev": "14f550f51af52180c2eefed15e5fd18d63c0a64a"},
    {"ImportPath": "github.com/gorilla/handlers", "Rev": "60c7bfde3e33c201519a200a4507a158cc03a17b"},
    {"ImportPath": "github.com/gorilla/mux", "Rev": "e444e69cbd2e2e3e0749a2f3c717cec491552bbf"},
    {"ImportPath": "github.com/inconshreveable/mousetrap", "Rev": "76626ae9c91c4f2a10f34cad8ce83ea42c93bb75"},
    {"ImportPath": "github.com/mitchellh/mapstructure", "Rev": "482a9fd5fa83e8c4e7817413b80f3eb8feec03ef"},
    {"ImportPath": "github.com/ncw/swift", "Rev": "ce444d6d47c51d4dda9202cd38f5094dd8e27e86"},
    {"ImportPath": "github.com/ncw/swift/swifttest", "Rev": "ce444d6d47c51d4dda9202cd38f5094dd8e27e86"},
    {"ImportPath": "github.com/spf13/cobra", "Rev": "312092086bed4968099259622145a0c9ae280064"},
    {"ImportPath": "github.com/spf13/pflag", "Rev": "5644820622454e71517561946e3d94b9f9db6842"},
    {"ImportPath": "github.com/stevvooe/resumable", "Rev": "51ad44105773cafcbe91927f70ac68e1bf78f8b4"},
    {"ImportPath": "github.com/stevvooe/resumable/sha256", "Rev": "51ad44105773cafcbe91927f70ac68e1bf78f8b4"},
    {"ImportPath": "github.com/stevvooe/resumable/sha512", "Rev": "51ad44105773cafcbe91927f70ac68e1bf78f8b4"},
    {"ImportPath": "github.com/yvasiyarov/go-metrics", "Rev": "57bccd1ccd43f94bb17fdd8bf3007059b802f85e"},
    {"ImportPath": "github.com/yvasiyarov/gorelic", "Comment": "v0.0.6-8-ga9bba5b", "Rev": "a9bba5b9ab508a086f9a12b8c51fab68478e2128"},
    {"ImportPath": "github.com/yvasiyarov/newrelic_platform_go", "Rev": "b21fdbd4370f3717f3bbd2bf41c223bc273068e6"},
    {"ImportPath": "golang.org/x/crypto/bcrypt", "Rev": "c10c31b5e94b6f7a0283272dc2bb27163dcea24b"},
    {"ImportPath": "golang.org/x/crypto/blowfish", "Rev": "c10c31b5e94b6f7a0283272dc2bb27163dcea24b"},
    {"ImportPath": "golang.org/x/crypto/ocsp", "Rev": "c10c31b5e94b6f7a0283272dc2bb27163dcea24b"},
    {"ImportPath": "golang.org/x/net/context", "Rev": "4876518f9e71663000c348837735820161a42df7"},
    {"ImportPath": "golang.org/x/net/context/ctxhttp", "Rev": "4876518f9e71663000c348837735820161a42df7"},
    {"ImportPath": "golang.org/x/net/http2", "Rev": "4876518f9e71663000c348837735820161a42df7"},
    {"ImportPath": "golang.org/x/net/http2/hpack", "Rev": "4876518f9e71663000c348837735820161a42df7"},
    {"ImportPath": "golang.org/x/net/internal/timeseries", "Rev": "4876518f9e71663000c348837735820161a42df7"},
    {"ImportPath": "golang.org/x/net/trace", "Rev": "4876518f9e71663000c348837735820161a42df7"},
    {"ImportPath": "golang.org/x/oauth2", "Rev": "045497edb6234273d67dbc25da3f2ddbc4c4cacf"},
    {"ImportPath": "golang.org/x/oauth2/google", "Rev": "045497edb6234273d67dbc25da3f2ddbc4c4cacf"},
    {"ImportPath": "golang.org/x/oauth2/internal", "Rev": "045497edb6234273d67dbc25da3f2ddbc4c4cacf"},
    {"ImportPath": "golang.org/x/oauth2/jws", "Rev": "045497edb6234273d67dbc25da3f2ddbc4c4cacf"},
    {"ImportPath": "golang.org/x/oauth2/jwt", "Rev": "045497edb6234273d67dbc25da3f2ddbc4c4cacf"},
    {"ImportPath": "golang.org/x/time/rate", "Rev": "a4bde12657593d5e90d0533a3e4fd95e635124cb"},
    {"ImportPath": "google.golang.org/api/gensupport", "Rev": "9bf6e6e569ff057f75d9604a46c52928f17d2b54"},
    {"ImportPath": "google.golang.org/api/googleapi", "Rev": "9bf6e6e569ff057f75d9604a46c52928f17d2b54"},
    {"ImportPath": "google.golang.org/api/googleapi/internal/uritemplates", "Rev": "9bf6e6e569ff057f75d9604a46c52928f17d2b54"},
    {"ImportPath": "google.golang.org/api/storage/v1", "Rev": "9bf6e6e569ff057f75d9604a46c52928f17d2b54"},
    {"ImportPath": "google.golang.org/appengine", "Rev": "12d5545dc1cfa6047a286d5e853841b6471f4c19"},
    {"ImportPath": "google.golang.org/appengine/internal", "Rev": "12d5545dc1cfa6047a286d5e853841b6471f4c19"},
    {"ImportPath": "google.golang.org/appengine/internal/app_identity", "Rev": "12d5545dc1cfa6047a286d5e853841b6471f4c19"},
    {"ImportPath": "google.golang.org/appengine/internal/base", "Rev": "12d5545dc1cfa6047a286d5e853841b6471f4c19"},
    {"ImportPath": "google.golang.org/appengine/internal/datastore", "Rev": "12d5545dc1cfa6047a286d5e853841b6471f4c19"},
    {"ImportPath": "google.golang.org/appengine/internal/log", "Rev": "12d5545dc1cfa6047a286d5e853841b6471f4c19"},
    {"ImportPath": "google.golang.org/appengine/internal/modules", "Rev": "12d5545dc1cfa6047a286d5e853841b6471f4c19"},
    {"ImportPath": "google.golang.org/appengine/internal/remote_api", "Rev": "12d5545dc1cfa6047a286d5e853841b6471f4c19"},
    {"ImportPath": "google.golang.org/cloud", "Rev": "975617b05ea8a58727e6c1a06b6161ff4185a9f2"},
    {"ImportPath": "google.golang.org/cloud/compute/metadata", "Rev": "975617b05ea8a58727e6c1a06b6161ff4185a9f2"},
    {"ImportPath": "google.golang.org/cloud/internal", "Rev": "975617b05ea8a58727e6c1a06b6161ff4185a9f2"},
    {"ImportPath": "google.golang.org/cloud/internal/opts", "Rev": "975617b05ea8a58727e6c1a06b6161ff4185a9f2"},
    {"ImportPath": "google.golang.org/cloud/storage", "Rev": "975617b05ea8a58727e6c1a06b6161ff4185a9f2"},
    {"ImportPath": "google.golang.org/grpc", "Rev": "d3ddb4469d5a1b949fc7a7da7c1d6a0d1b6de994"},
    {"ImportPath": "google.golang.org/grpc/codes", "Rev": "d3ddb4469d5a1b949fc7a7da7c1d6a0d1b6de994"},
    {"ImportPath": "google.golang.org/grpc/credentials", "Rev": "d3ddb4469d5a1b949fc7a7da7c1d6a0d1b6de994"},
    {"ImportPath": "google.golang.org/grpc/grpclog", "Rev": "d3ddb4469d5a1b949fc7a7da7c1d6a0d1b6de994"},
    {"ImportPath": "google.golang.org/grpc/internal", "Rev": "d3ddb4469d5a1b949fc7a7da7c1d6a0d1b6de994"},
    {"ImportPath": "google.golang.org/grpc/metadata", "Rev": "d3ddb4469d5a1b949fc7a7da7c1d6a0d1b6de994"},
    {"ImportPath": "google.golang.org/grpc/naming", "Rev": "d3ddb4469d5a1b949fc7a7da7c1d6a0d1b6de994"},
    {"ImportPath": "google.golang.org/grpc/peer", "Rev": "d3ddb4469d5a1b949fc7a7da7c1d6a0d1b6de994"},
    {"ImportPath": "google.golang.org/grpc/transport", "Rev": "d3ddb4469d5a1b949fc7a7da7c1d6a0d1b6de994"},
    {"ImportPath": "gopkg.in/check.v1", "Rev": "64131543e7896d5bcc6bd5a76287eb75ea96c673"},
    {"ImportPath": "gopkg.in/yaml.v2", "Rev": "bef53efd0c76e49e6de55ead051f886bea7e9420"},
    {"ImportPath": "rsc.io/letsencrypt", "Rev": "a019c9e6fce0c7132679dea13bd8df7c86ffe26c"},
    {"ImportPath": "rsc.io/letsencrypt/vendor/github.com/xenolf/lego/acme", "Rev": "a019c9e6fce0c7132679dea13bd8df7c86ffe26c"},
    {"ImportPath": "rsc.io/letsencrypt/vendor/gopkg.in/square/go-jose.v1", "Rev": "a019c9e6fce0c7132679dea13bd8df7c86ffe26c"},
    {"ImportPath": "rsc.io/letsencrypt/vendor/gopkg.in/square/go-jose.v1/cipher", "Rev": "a019c9e6fce0c7132679dea13bd8df7c86ffe26c"},
    {"ImportPath": "rsc.io/letsencrypt/vendor/gopkg.in/square/go-jose.v1/json", "Rev": "a019c9e6fce0c7132679dea13bd8df7c86ffe26c"}
  ]
}

5  Godeps/Readme  generated  Normal file
@@ -0,0 +1,5 @@
+This directory tree is generated automatically by godep.
+
+Please do not edit.
+
+See https://github.com/tools/godep for more information.
68  MAINTAINERS
@@ -1,16 +1,58 @@
-# Docker distribution project maintainers & reviewers
-#
-# See GOVERNANCE.md for maintainer versus reviewer roles
-#
-# MAINTAINERS
-# GitHub ID, Name, Email address
-"dmcgowan","Derek McGowan","derek@mcgstyle.net"
-"manishtomar","Manish Tomar","manish.tomar@docker.com"
-"stevvooe","Stephen Day","stevvooe@gmail.com"
-#
-# REVIEWERS
-# GitHub ID, Name, Email address
-"caervs","Ryan Abrams","rdabrams@gmail.com"
-"davidswu","David Wu","dwu7401@gmail.com"
-"RobbKistler","Robb Kistler","robb.kistler@docker.com"
-"thajeztah","Sebastiaan van Stijn","github@gone.nl"
+# Distribution maintainers file
+#
+# This file describes who runs the docker/distribution project and how.
+# This is a living document - if you see something out of date or missing, speak up!
+#
+# It is structured to be consumable by both humans and programs.
+# To extract its contents programmatically, use any TOML-compliant parser.
+#
+# This file is compiled into the MAINTAINERS file in docker/opensource.
+#
+[Org]
+	[Org."Core maintainers"]
+		people = [
+			"aaronlehmann",
+			"dmcgowan",
+			"dmp42",
+			"richardscothern",
+			"shykes",
+			"stevvooe",
+		]
+
+[people]
+
+# A reference list of all people associated with the project.
+# All other sections should refer to people by their canonical key
+# in the people section.
+
+	# ADD YOURSELF HERE IN ALPHABETICAL ORDER
+
+	[people.aaronlehmann]
+	Name = "Aaron Lehmann"
+	Email = "aaron.lehmann@docker.com"
+	GitHub = "aaronlehmann"
+
+	[people.dmcgowan]
+	Name = "Derek McGowan"
+	Email = "derek@mcgstyle.net"
+	GitHub = "dmcgowan"
+
+	[people.dmp42]
+	Name = "Olivier Gambier"
+	Email = "olivier@docker.com"
+	GitHub = "dmp42"
+
+	[people.richardscothern]
+	Name = "Richard Scothern"
+	Email = "richard.scothern@gmail.com"
+	GitHub = "richardscothern"
+
+	[people.shykes]
+	Name = "Solomon Hykes"
+	Email = "solomon@docker.com"
+	GitHub = "shykes"
+
+	[people.stevvooe]
+	Name = "Stephen Day"
+	Email = "stephen.day@docker.com"
+	GitHub = "stevvooe"
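The TOML-format MAINTAINERS file on the added side of this hunk is explicitly meant to be machine-readable ("use any TOML-compliant parser"). A minimal sketch of reading it, assuming the github.com/BurntSushi/toml package (one TOML parser among several; it is not a dependency shown in this diff):

```go
// Print the core maintainers listed in a TOML MAINTAINERS file.
package main

import (
	"fmt"
	"log"

	"github.com/BurntSushi/toml" // assumed TOML parser, not part of this repo
)

type maintainersFile struct {
	Org struct {
		Core struct {
			People []string `toml:"people"`
		} `toml:"Core maintainers"`
	} `toml:"Org"`
	People map[string]struct {
		Name   string
		Email  string
		GitHub string
	} `toml:"people"`
}

func main() {
	var m maintainersFile
	if _, err := toml.DecodeFile("MAINTAINERS", &m); err != nil {
		log.Fatal(err)
	}
	// Resolve each canonical key from [Org."Core maintainers"] in [people].
	for _, key := range m.Org.Core.People {
		p := m.People[key]
		fmt.Printf("%s <%s> (@%s)\n", p.Name, p.Email, p.GitHub)
	}
}
```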
152  Makefile
@@ -1,21 +1,9 @@
-# Root directory of the project (absolute path).
-ROOTDIR=$(dir $(abspath $(lastword $(MAKEFILE_LIST))))
+# Set an output prefix, which is the local directory if not specified
+PREFIX?=$(shell pwd)
 
 # Used to populate version variable in main package.
-VERSION ?= $(shell git describe --match 'v[0-9]*' --dirty='.m' --always)
-REVISION ?= $(shell git rev-parse HEAD)$(shell if ! git diff --no-ext-diff --quiet --exit-code; then echo .m; fi)
-
-PKG=github.com/docker/distribution
-
-# Project packages.
-PACKAGES=$(shell go list -tags "${BUILDTAGS}" ./... | grep -v /vendor/)
-INTEGRATION_PACKAGE=${PKG}
-COVERAGE_PACKAGES=$(filter-out ${PKG}/registry/storage/driver/%,${PACKAGES})
-
-# Project binaries.
-COMMANDS=registry digest registry-api-descriptor-template
+VERSION=$(shell git describe --match 'v[0-9]*' --dirty='.m' --always)
 
 # Allow turning off function inlining and variable registerization
 ifeq (${DISABLE_OPTIMIZATION},true)
@@ -23,80 +11,96 @@ ifeq (${DISABLE_OPTIMIZATION},true)
 	VERSION:="$(VERSION)-noopt"
 endif
 
-WHALE = "+"
+GO_LDFLAGS=-ldflags "-X `go list ./version`.Version=$(VERSION)"
 
-# Go files
-#
-TESTFLAGS_RACE=
-GOFILES=$(shell find . -type f -name '*.go')
-GO_TAGS=$(if $(BUILDTAGS),-tags "$(BUILDTAGS)",)
-GO_LDFLAGS=-ldflags '-s -w -X $(PKG)/version.Version=$(VERSION) -X $(PKG)/version.Revision=$(REVISION) -X $(PKG)/version.Package=$(PKG) $(EXTRA_LDFLAGS)'
-
-BINARIES=$(addprefix bin/,$(COMMANDS))
-
-# Flags passed to `go test`
-TESTFLAGS ?= -v $(TESTFLAGS_RACE)
-TESTFLAGS_PARALLEL ?= 8
-
-.PHONY: all build binaries check clean test test-race test-full integration coverage
+.PHONY: clean all fmt vet lint build test binaries
 .DEFAULT: all
+all: fmt vet lint build test binaries
 
-all: binaries
+AUTHORS: .mailmap .git/HEAD
+	git log --format='%aN <%aE>' | sort -fu > $@
 
 # This only needs to be generated by hand when cutting full releases.
 version/version.go:
-	@echo "$(WHALE) $@"
 	./version/version.sh > $@
 
-check: ## run all linters (TODO: enable "unused", "varcheck", "ineffassign", "unconvert", "staticheck", "goimports", "structcheck")
-	@echo "$(WHALE) $@"
-	@GO111MODULE=off golangci-lint run
+# Required for go 1.5 to build
+GO15VENDOREXPERIMENT := 1
 
-test: ## run tests, except integration test with test.short
-	@echo "$(WHALE) $@"
-	@go test ${GO_TAGS} -test.short ${TESTFLAGS} $(filter-out ${INTEGRATION_PACKAGE},${PACKAGES})
+# Package list
+PKGS := $(shell go list -tags "${DOCKER_BUILDTAGS}" ./... | grep -v ^github.com/docker/distribution/vendor/)
 
-test-race: ## run tests, except integration test with test.short and race
-	@echo "$(WHALE) $@"
-	@go test ${GO_TAGS} -race -test.short ${TESTFLAGS} $(filter-out ${INTEGRATION_PACKAGE},${PACKAGES})
+# Resolving binary dependencies for specific targets
+GOLINT := $(shell which golint || echo '')
+GODEP := $(shell which godep || echo '')
 
-test-full: ## run tests, except integration tests
-	@echo "$(WHALE) $@"
-	@go test ${GO_TAGS} ${TESTFLAGS} $(filter-out ${INTEGRATION_PACKAGE},${PACKAGES})
+${PREFIX}/bin/registry: $(wildcard **/*.go)
+	@echo "+ $@"
+	@go build -tags "${DOCKER_BUILDTAGS}" -o $@ ${GO_LDFLAGS} ${GO_GCFLAGS} ./cmd/registry
 
-integration: ## run integration tests
-	@echo "$(WHALE) $@"
-	@go test ${TESTFLAGS} -parallel ${TESTFLAGS_PARALLEL} ${INTEGRATION_PACKAGE}
+${PREFIX}/bin/digest: $(wildcard **/*.go)
+	@echo "+ $@"
+	@go build -tags "${DOCKER_BUILDTAGS}" -o $@ ${GO_LDFLAGS} ${GO_GCFLAGS} ./cmd/digest
 
-coverage: ## generate coverprofiles from the unit tests
-	@echo "$(WHALE) $@"
-	@rm -f coverage.txt
-	@go test ${GO_TAGS} -i ${TESTFLAGS} $(filter-out ${INTEGRATION_PACKAGE},${COVERAGE_PACKAGES}) 2> /dev/null
-	@( for pkg in $(filter-out ${INTEGRATION_PACKAGE},${COVERAGE_PACKAGES}); do \
-		go test ${GO_TAGS} ${TESTFLAGS} \
-			-cover \
-			-coverprofile=profile.out \
-			-covermode=atomic $$pkg || exit; \
-		if [ -f profile.out ]; then \
-			cat profile.out >> coverage.txt; \
-			rm profile.out; \
-		fi; \
-	done )
+${PREFIX}/bin/registry-api-descriptor-template: $(wildcard **/*.go)
+	@echo "+ $@"
+	@go build -o $@ ${GO_LDFLAGS} ${GO_GCFLAGS} ./cmd/registry-api-descriptor-template
 
-FORCE:
+docs/spec/api.md: docs/spec/api.md.tmpl ${PREFIX}/bin/registry-api-descriptor-template
+	./bin/registry-api-descriptor-template $< > $@
 
-# Build a binary from a cmd.
-bin/%: cmd/% FORCE
-	@echo "$(WHALE) $@${BINARY_SUFFIX}"
-	@go build ${GO_GCFLAGS} ${GO_BUILD_FLAGS} -o $@${BINARY_SUFFIX} ${GO_LDFLAGS} ${GO_TAGS} ./$<
+vet:
+	@echo "+ $@"
+	@go vet -tags "${DOCKER_BUILDTAGS}" $(PKGS)
 
-binaries: $(BINARIES) ## build binaries
-	@echo "$(WHALE) $@"
+fmt:
+	@echo "+ $@"
+	@test -z "$$(gofmt -s -l . 2>&1 | grep -v ^vendor/ | tee /dev/stderr)" || \
+		(echo >&2 "+ please format Go code with 'gofmt -s'" && false)
+
+lint:
+	@echo "+ $@"
+	$(if $(GOLINT), , \
+		$(error Please install golint: `go get -u github.com/golang/lint/golint`))
+	@test -z "$$($(GOLINT) ./... 2>&1 | grep -v ^vendor/ | tee /dev/stderr)"
 
 build:
-	@echo "$(WHALE) $@"
-	@go build ${GO_GCFLAGS} ${GO_BUILD_FLAGS} ${GO_LDFLAGS} ${GO_TAGS} $(PACKAGES)
+	@echo "+ $@"
+	@go build -tags "${DOCKER_BUILDTAGS}" -v ${GO_LDFLAGS} $(PKGS)
 
-clean: ## clean up binaries
-	@echo "$(WHALE) $@"
-	@rm -f $(BINARIES)
+test:
+	@echo "+ $@"
+	@go test -test.short -tags "${DOCKER_BUILDTAGS}" $(PKGS)
+
+test-full:
+	@echo "+ $@"
+	@go test -tags "${DOCKER_BUILDTAGS}" $(PKGS)
+
+binaries: ${PREFIX}/bin/registry ${PREFIX}/bin/digest ${PREFIX}/bin/registry-api-descriptor-template
+	@echo "+ $@"
+
+clean:
+	@echo "+ $@"
+	@rm -rf "${PREFIX}/bin/registry" "${PREFIX}/bin/digest" "${PREFIX}/bin/registry-api-descriptor-template"
+
+dep-save:
+	@echo "+ $@"
+	$(if $(GODEP), , \
+		$(error Please install godep: go get github.com/tools/godep))
+	@$(GODEP) save $(PKGS)
+
+dep-restore:
+	@echo "+ $@"
+	$(if $(GODEP), , \
+		$(error Please install godep: go get github.com/tools/godep))
+	@$(GODEP) restore -v
+
+dep-validate: dep-restore
+	@echo "+ $@"
+	@rm -Rf .vendor.bak
+	@mv vendor .vendor.bak
+	@rm -Rf Godeps
+	@$(GODEP) save ./...
+	@test -z "$$(diff -r vendor .vendor.bak 2>&1 | tee /dev/stderr)" || \
+		(echo >&2 "+ borked dependencies! what you have in Godeps/Godeps.json does not match with what you have in vendor" && false)
+	@rm -Rf .vendor.bak
100  README.md
@@ -2,32 +2,31 @@
 
 The Docker toolset to pack, ship, store, and deliver content.
 
-This repository's main product is the Open Source Docker Registry implementation
-for storing and distributing Docker and OCI images using the
-[OCI Distribution Specification](https://github.com/opencontainers/distribution-spec).
-The goal of this project is to provide a simple, secure, and scalable base
-for building a registry solution or running a simple private registry.
+This repository's main product is the Docker Registry 2.0 implementation
+for storing and distributing Docker images. It supersedes the
+[docker/docker-registry](https://github.com/docker/docker-registry)
+project with a new API design, focused around security and performance.
 
 <img src="https://www.docker.com/sites/default/files/oyster-registry-3.png" width=200px/>
 
-[![Build Status](https://travis-ci.org/docker/distribution.svg?branch=master)](https://travis-ci.org/docker/distribution)
+[![Circle CI](https://circleci.com/gh/docker/distribution/tree/master.svg?style=svg)](https://circleci.com/gh/docker/distribution/tree/master)
 [![GoDoc](https://godoc.org/github.com/docker/distribution?status.svg)](https://godoc.org/github.com/docker/distribution)
 
 This repository contains the following components:
 
 |**Component**       |Description                                                                                                                                                                                             |
 |--------------------|----------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------|
-| **registry**       | An implementation of the [OCI Distribution Specification](https://github.com/opencontainers/distribution-spec). |
-| **libraries**      | A rich set of libraries for interacting with distribution components. Please see [godoc](https://godoc.org/github.com/docker/distribution) for details. **Note**: The interfaces for these libraries are **unstable**. |
-| **documentation**  | Docker's full documentation set is available at [docs.docker.com](https://docs.docker.com). This repository [contains the subset](docs/) related just to the registry. |
+| **registry**       | An implementation of the [Docker Registry HTTP API V2](docs/spec/api.md) for use with docker 1.6+. |
+| **libraries**      | A rich set of libraries for interacting with distribution components. Please see [godoc](https://godoc.org/github.com/docker/distribution) for details. **Note**: These libraries are **unstable**. |
+| **specifications** | _Distribution_ related specifications are available in [docs/spec](docs/spec) |
+| **documentation**  | Docker's full documentation set is available at [docs.docker.com](https://docs.docker.com). This repository [contains the subset](docs/index.md) related just to the registry. |
 
-### How does this integrate with Docker, containerd, and other OCI client?
+### How does this integrate with Docker engine?
 
-Clients implement against the OCI specification and communicate with the
-registry using HTTP. This project contains an client implementation which
-is currently in use by Docker, however, it is deprecated for the
-[implementation in containerd](https://github.com/containerd/containerd/tree/master/remotes/docker)
-and will not support new features.
+This project should provide an implementation to a V2 API for use in the [Docker
+core project](https://github.com/docker/docker). The API should be embeddable
+and simplify the process of securely pulling and pushing content from `docker`
+daemons.
 
 ### What are the long term goals of the Distribution project?
 
@@ -44,6 +43,18 @@ system that allow users to:
 * Implement their own home made solution through good specs, and solid
   extensions mechanism.
 
+## More about Registry 2.0
+
+The new registry implementation provides the following benefits:
+
+- faster push and pull
+- new, more efficient implementation
+- simplified deployment
+- pluggable storage backend
+- webhook notifications
+
+For information on upcoming functionality, please see [ROADMAP.md](ROADMAP.md).
+
 ### Who needs to deploy a registry?
 
 By default, Docker users pull images from Docker's public registry instance.
@@ -57,7 +68,7 @@ others, it is not.
 For example, users with their own software products may want to maintain a
 registry for private, company images. Also, you may wish to deploy your own
 image repository for images used to test or in continuous integration. For these
-use cases and others, [deploying your own registry instance](https://github.com/docker/docker.github.io/blob/master/registry/deploying.md)
+use cases and others, [deploying your own registry instance](docs/deploying.md)
 may be the better choice.
 
 ### Migration to Registry 2.0
@@ -65,27 +76,56 @@ may be the better choice.
 For those who have previously deployed their own registry based on the Registry
 1.0 implementation and wish to deploy a Registry 2.0 while retaining images,
 data migration is required. A tool to assist with migration efforts has been
-created. For more information see [docker/migrator](https://github.com/docker/migrator).
+created. For more information see [docker/migrator]
+(https://github.com/docker/migrator).
 
-## Contribution
+## Contribute
 
 Please see [CONTRIBUTING.md](CONTRIBUTING.md) for details on how to contribute
 issues, fixes, and patches to this project. If you are contributing code, see
-the instructions for [building a development environment](BUILDING.md).
+the instructions for [building a development environment](docs/recipes/building.md).
 
-## Communication
+## Support
 
-For async communication and long running discussions please use issues and pull requests on the github repo.
-This will be the best place to discuss design and implementation.
+If any issues are encountered while using the _Distribution_ project, several
+avenues are available for support:
 
-For sync communication we have a community slack with a #distribution channel that everyone is welcome to join and chat about development.
+<table>
+<tr>
+	<th align="left">
+	IRC
+	</th>
+	<td>
	#docker-distribution on FreeNode
+	</td>
+</tr>
+<tr>
+	<th align="left">
+	Issue Tracker
+	</th>
+	<td>
+	github.com/docker/distribution/issues
+	</td>
+</tr>
+<tr>
+	<th align="left">
+	Google Groups
+	</th>
+	<td>
+	https://groups.google.com/a/dockerproject.org/forum/#!forum/distribution
+	</td>
+</tr>
+<tr>
+	<th align="left">
+	Mailing List
+	</th>
+	<td>
+	docker@dockerproject.org
+	</td>
+</tr>
+</table>
 
-**Slack:** Catch us in the #distribution channels on dockercommunity.slack.com.
-[Click here for an invite to Docker community slack.](https://dockr.ly/slack)
-
-## Licenses
+## License
 
-The distribution codebase is released under the [Apache 2.0 license](LICENSE).
-The README.md file, and files in the "docs" folder are licensed under the
-Creative Commons Attribution 4.0 International License. You may obtain a
-copy of the license, titled CC-BY-4.0, at http://creativecommons.org/licenses/by/4.0/.
+This project is distributed under [Apache License, Version 2.0](LICENSE).
28  blobs.go
@@ -1,16 +1,15 @@
 package distribution
 
 import (
-	"context"
 	"errors"
 	"fmt"
 	"io"
 	"net/http"
 	"time"
 
+	"github.com/docker/distribution/context"
+	"github.com/docker/distribution/digest"
 	"github.com/docker/distribution/reference"
-	"github.com/opencontainers/go-digest"
-	v1 "github.com/opencontainers/image-spec/specs-go/v1"
 )
 
 var (
@@ -67,19 +66,12 @@ type Descriptor struct {
 	Size int64 `json:"size,omitempty"`
 
 	// Digest uniquely identifies the content. A byte stream can be verified
-	// against this digest.
+	// against against this digest.
 	Digest digest.Digest `json:"digest,omitempty"`
 
 	// URLs contains the source URLs of this content.
 	URLs []string `json:"urls,omitempty"`
 
-	// Annotations contains arbitrary metadata relating to the targeted content.
-	Annotations map[string]string `json:"annotations,omitempty"`
-
-	// Platform describes the platform which the image in the manifest runs on.
-	// This should only be used when referring to a manifest.
-	Platform *v1.Platform `json:"platform,omitempty"`
-
 	// NOTE: Before adding a field here, please ensure that all
 	// other options have been exhausted. Much of the type relationships
 	// depend on the simplicity of this type.
@@ -160,7 +152,7 @@ type BlobProvider interface {
 
 // BlobServer can serve blobs via http.
 type BlobServer interface {
-	// ServeBlob attempts to serve the blob, identified by dgst, via http. The
+	// ServeBlob attempts to serve the blob, identifed by dgst, via http. The
 	// service may decide to redirect the client elsewhere or serve the data
 	// directly.
 	//
@@ -200,18 +192,6 @@ type BlobCreateOption interface {
 	Apply(interface{}) error
 }
 
-// CreateOptions is a collection of blob creation modifiers relevant to general
-// blob storage intended to be configured by the BlobCreateOption.Apply method.
-type CreateOptions struct {
-	Mount struct {
-		ShouldMount bool
-		From        reference.Canonical
-		// Stat allows to pass precalculated descriptor to link and return.
-		// Blob access check will be skipped if set.
-		Stat *Descriptor
-	}
-}
-
 // BlobWriter provides a handle for inserting data into a blob store.
 // Instances should be obtained from BlobWriteService.Writer and
 // BlobWriteService.Resume. If supported by the store, a writer can be
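The CreateOptions/BlobCreateOption pair removed on the left-hand side of the last hunk is how blob creation is customised on the master branch: an option receives the options value in Apply and fills it in. A minimal sketch against that master-side API (the mountFrom option type below is hypothetical, not part of the repository):

```go
// Package blobopts sketches a BlobCreateOption that requests a
// cross-repository mount by populating CreateOptions.Mount.
package blobopts

import (
	"fmt"

	"github.com/docker/distribution"
	"github.com/docker/distribution/reference"
)

// mountFrom is a hypothetical option naming the blob to mount from.
type mountFrom struct {
	ref reference.Canonical
}

// Apply marks the pending blob create as a mount from another repository.
func (m mountFrom) Apply(v interface{}) error {
	opts, ok := v.(*distribution.CreateOptions)
	if !ok {
		return fmt.Errorf("unexpected options type %T", v)
	}
	opts.Mount.ShouldMount = true
	opts.Mount.From = m.ref
	return nil
}

// Compile-time check that mountFrom satisfies BlobCreateOption.
var _ distribution.BlobCreateOption = mountFrom{}
```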
89  circle.yml  Normal file
@@ -0,0 +1,89 @@
+# Pony-up!
+machine:
+  pre:
+  # Install gvm
+    - bash < <(curl -s -S -L https://raw.githubusercontent.com/moovweb/gvm/1.0.22/binscripts/gvm-installer)
+  # Install codecov for coverage
+    - pip install --user codecov
+
+  post:
+  # go
+    - gvm install go1.6 --prefer-binary --name=stable
+
+  environment:
+  # Convenient shortcuts to "common" locations
+    CHECKOUT: /home/ubuntu/$CIRCLE_PROJECT_REPONAME
+    BASE_DIR: src/github.com/$CIRCLE_PROJECT_USERNAME/$CIRCLE_PROJECT_REPONAME
+  # Trick circle brainflat "no absolute path" behavior
+    BASE_STABLE: ../../../$HOME/.gvm/pkgsets/stable/global/$BASE_DIR
+    DOCKER_BUILDTAGS: "include_oss include_gcs"
+  # Workaround Circle parsing dumb bugs and/or YAML wonkyness
+    CIRCLE_PAIN: "mode: set"
+
+  hosts:
+  # Not used yet
+    fancy: 127.0.0.1
+
+dependencies:
+  pre:
+  # Copy the code to the gopath of all go versions
+    - >
+      gvm use stable &&
+      mkdir -p "$(dirname $BASE_STABLE)" &&
+      cp -R "$CHECKOUT" "$BASE_STABLE"
+
+  override:
+  # Install dependencies for every copied clone/go version
+    - gvm use stable && go get github.com/tools/godep:
+        pwd: $BASE_STABLE
+
+  post:
+  # For the stable go version, additionally install linting tools
+    - >
+      gvm use stable &&
+      go get github.com/axw/gocov/gocov github.com/golang/lint/golint
+
+test:
+  pre:
+  # Output the go versions we are going to test
+    # - gvm use old && go version
+    - gvm use stable && go version
+
+  # Ensure validation of dependencies
+    - gvm use stable && if test -n "`git diff --stat=1000 master | grep -Ei \"vendor|godeps\"`"; then make dep-validate; fi:
+        pwd: $BASE_STABLE
+
+  # First thing: build everything. This will catch compile errors, and it's
+  # also necessary for go vet to work properly (see #807).
+    - gvm use stable && godep go install $(go list ./... | grep -v "/vendor/"):
+        pwd: $BASE_STABLE
+
+  # FMT
+    - gvm use stable && make fmt:
+        pwd: $BASE_STABLE
+
+  # VET
+    - gvm use stable && make vet:
+        pwd: $BASE_STABLE
+
+  # LINT
+    - gvm use stable && make lint:
+        pwd: $BASE_STABLE
+
+  override:
+  # Test stable, and report
+    - gvm use stable; export ROOT_PACKAGE=$(go list .); go list -tags "$DOCKER_BUILDTAGS" ./... | grep -v "/vendor/" | xargs -L 1 -I{} bash -c 'export PACKAGE={}; godep go test -tags "$DOCKER_BUILDTAGS" -test.short -coverprofile=$GOPATH/src/$PACKAGE/coverage.out -coverpkg=$(./coverpkg.sh $PACKAGE $ROOT_PACKAGE) $PACKAGE':
+        timeout: 600
+        pwd: $BASE_STABLE
+
+  post:
+  # Report to codecov
+    - bash <(curl -s https://codecov.io/bash):
+        pwd: $BASE_STABLE
+
+## Notes
+# Disabled the -race detector due to massive memory usage.
+# Do we want these as well?
+# - go get code.google.com/p/go.tools/cmd/goimports
+# - test -z "$(goimports -l -w ./... | tee /dev/stderr)"
+# http://labix.org/gocheck
@@ -7,11 +7,8 @@ import (
 	"log"
 	"os"
 
+	"github.com/docker/distribution/digest"
 	"github.com/docker/distribution/version"
-	"github.com/opencontainers/go-digest"
-
-	_ "crypto/sha256"
-	_ "crypto/sha512"
 )
 
 var (
@@ -35,7 +32,7 @@ func init() {
 
 func usage() {
 	fmt.Fprintf(os.Stderr, "usage: %s [files...]\n", os.Args[0])
-	fmt.Fprint(os.Stderr, `
+	fmt.Fprintf(os.Stderr, `
 Calculate the digest of one or more input files, emitting the result
 to standard out. If no files are provided, the digest of stdin will
 be calculated.
@@ -21,7 +21,7 @@ import (
 	"text/template"
 
 	"github.com/docker/distribution/registry/api/errcode"
-	v2 "github.com/docker/distribution/registry/api/v2"
+	"github.com/docker/distribution/registry/api/v2"
 )
 
 var spaceRegex = regexp.MustCompile(`\n\s*`)
@@ -29,8 +29,6 @@ redis:
   readtimeout: 10ms
   writetimeout: 10ms
 notifications:
-  events:
-    includereferences: true
   endpoints:
     - name: local-8082
       url: http://localhost:5003/callback
@@ -31,10 +31,7 @@ storage:
 http:
     addr: :5000
     debug:
-        addr: :5001
-        prometheus:
-            enabled: true
-            path: /metrics
+        addr: localhost:5001
     headers:
         X-Content-Type-Options: [nosniff]
 redis:
@@ -47,8 +44,6 @@ redis:
   readtimeout: 10ms
   writetimeout: 10ms
 notifications:
-  events:
-    includereferences: true
   endpoints:
     - name: local-5003
       url: http://localhost:5003/callback
@@ -11,10 +11,6 @@ http:
     addr: :5000
     headers:
         X-Content-Type-Options: [nosniff]
-auth:
-    htpasswd:
-        realm: basic-realm
-        path: /etc/registry
 health:
     storagedriver:
         enabled: true
@@ -12,11 +12,10 @@ import (
 	_ "github.com/docker/distribution/registry/storage/driver/filesystem"
 	_ "github.com/docker/distribution/registry/storage/driver/gcs"
 	_ "github.com/docker/distribution/registry/storage/driver/inmemory"
-	_ "github.com/docker/distribution/registry/storage/driver/middleware/alicdn"
 	_ "github.com/docker/distribution/registry/storage/driver/middleware/cloudfront"
-	_ "github.com/docker/distribution/registry/storage/driver/middleware/redirect"
 	_ "github.com/docker/distribution/registry/storage/driver/oss"
 	_ "github.com/docker/distribution/registry/storage/driver/s3-aws"
+	_ "github.com/docker/distribution/registry/storage/driver/s3-goamz"
 	_ "github.com/docker/distribution/registry/storage/driver/swift"
 )
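The import list in the last hunk shows how storage backends are wired in: each driver package registers itself in an init function, so a registry binary enables a backend simply by blank-importing it. A minimal sketch under that assumption (the factory.Create helper comes from the same repository but is not part of this diff, so treat it as an assumed API):

```go
// Create an in-memory storage driver by relying on blank-import registration.
package main

import (
	"fmt"
	"log"

	"github.com/docker/distribution/registry/storage/driver/factory"
	// The blank import runs the driver's init, which registers "inmemory"
	// with the factory; other drivers would be enabled the same way.
	_ "github.com/docker/distribution/registry/storage/driver/inmemory"
)

func main() {
	driver, err := factory.Create("inmemory", nil)
	if err != nil {
		log.Fatal(err)
	}
	fmt.Println("created storage driver:", driver.Name())
}
```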
@@ -1,7 +1,6 @@
 package configuration
 
 import (
-	"errors"
 	"fmt"
 	"io"
 	"io/ioutil"
@@ -23,14 +22,8 @@ type Configuration struct {
 	// Log supports setting various parameters related to the logging
 	// subsystem.
 	Log struct {
-		// AccessLog configures access logging.
-		AccessLog struct {
-			// Disabled disables access logging.
-			Disabled bool `yaml:"disabled,omitempty"`
-		} `yaml:"accesslog,omitempty"`
-
 		// Level is the granularity at which registry operations are logged.
-		Level Loglevel `yaml:"level,omitempty"`
+		Level Loglevel `yaml:"level"`
 
 		// Formatter overrides the default formatter with another. Options
 		// include "text", "json" and "logstash".
@@ -45,9 +38,8 @@ type Configuration struct {
 		Hooks []LogHook `yaml:"hooks,omitempty"`
 	}
 
-	// Loglevel is the level at which registry operations are logged.
-	//
-	// Deprecated: Use Log.Level instead.
+	// Loglevel is the level at which registry operations are logged. This is
+	// deprecated. Please use Log.Level in the future.
 	Loglevel Loglevel `yaml:"loglevel,omitempty"`
 
 	// Storage is the configuration for the registry's storage driver
@@ -85,10 +77,6 @@ type Configuration struct {
 		// Location headers
 		RelativeURLs bool `yaml:"relativeurls,omitempty"`
 
-		// Amount of time to wait for connection to drain before shutting down when registry
-		// receives a stop signal
-		DrainTimeout time.Duration `yaml:"draintimeout,omitempty"`
-
 		// TLS instructs the http server to listen with a TLS configuration.
 		// This only support simple tls configuration with a cert and key.
 		// Mostly, this is useful for testing situations or simple deployments
@@ -108,9 +96,6 @@ type Configuration struct {
 			// A file may contain multiple CA certificates encoded as PEM
 			ClientCAs []string `yaml:"clientcas,omitempty"`
 
-			// Specifies the lowest TLS version allowed
-			MinimumTLS string `yaml:"minimumtls,omitempty"`
-
 			// LetsEncrypt is used to configuration setting up TLS through
 			// Let's Encrypt instead of manually specifying certificate and
 			// key. If a TLS certificate is specified, the Let's Encrypt
@@ -122,10 +107,6 @@ type Configuration struct {
 
 				// Email is the email to use during Let's Encrypt registration
 				Email string `yaml:"email,omitempty"`
-
-				// Hosts specifies the hosts which are allowed to obtain Let's
-				// Encrypt certificates.
-				Hosts []string `yaml:"hosts,omitempty"`
 			} `yaml:"letsencrypt,omitempty"`
 		} `yaml:"tls,omitempty"`
 
@@ -141,19 +122,7 @@ type Configuration struct {
 		Debug struct {
 			// Addr specifies the bind address for the debug server.
 			Addr string `yaml:"addr,omitempty"`
-			// Prometheus configures the Prometheus telemetry endpoint.
-			Prometheus struct {
-				Enabled bool   `yaml:"enabled,omitempty"`
-				Path    string `yaml:"path,omitempty"`
-			} `yaml:"prometheus,omitempty"`
 		} `yaml:"debug,omitempty"`
-
-		// HTTP2 configuration options
-		HTTP2 struct {
-			// Specifies whether the registry should disallow clients attempting
-			// to connect via http2. If set to true, only http/1.1 is supported.
-			Disabled bool `yaml:"disabled,omitempty"`
-		} `yaml:"http2,omitempty"`
 	} `yaml:"http,omitempty"`
 
 	// Notifications specifies configuration about various endpoint to which
@@ -201,44 +170,8 @@ type Configuration struct {
 			// TrustKey is the signing key to use for adding the signature to
 			// schema1 manifests.
 			TrustKey string `yaml:"signingkeyfile,omitempty"`
-			// Enabled determines if schema1 manifests should be pullable
-			Enabled bool `yaml:"enabled,omitempty"`
 		} `yaml:"schema1,omitempty"`
 	} `yaml:"compatibility,omitempty"`
-
-	// Validation configures validation options for the registry.
-	Validation struct {
-		// Enabled enables the other options in this section. This field is
-		// deprecated in favor of Disabled.
-		Enabled bool `yaml:"enabled,omitempty"`
-		// Disabled disables the other options in this section.
-		Disabled bool `yaml:"disabled,omitempty"`
-		// Manifests configures manifest validation.
-		Manifests struct {
-			// URLs configures validation for URLs in pushed manifests.
-			URLs struct {
-				// Allow specifies regular expressions (https://godoc.org/regexp/syntax)
-				// that URLs in pushed manifests must match.
-				Allow []string `yaml:"allow,omitempty"`
-				// Deny specifies regular expressions (https://godoc.org/regexp/syntax)
-				// that URLs in pushed manifests must not match.
-				Deny []string `yaml:"deny,omitempty"`
-			} `yaml:"urls,omitempty"`
-		} `yaml:"manifests,omitempty"`
-	} `yaml:"validation,omitempty"`
-
-	// Policy configures registry policy options.
-	Policy struct {
-		// Repository configures policies for repositories
-		Repository struct {
-			// Classes is a list of repository classes which the
-			// registry allows content for. This class is matched
-			// against the configuration media type inside uploaded
-			// manifests. When non-empty, the registry will enforce
-			// the class in authorized resources.
-			Classes []string `yaml:"classes"`
-		} `yaml:"repository,omitempty"`
-	} `yaml:"policy,omitempty"`
 }
 
 // LogHook is composed of hook Level and Type.
@@ -255,7 +188,7 @@ type LogHook struct {
 	// Levels set which levels of log message will let hook executed.
 	Levels []string `yaml:"levels,omitempty"`
 
-	// MailOptions allows user to configure email parameters.
+	// MailOptions allows user to configurate email parameters.
 	MailOptions MailOptions `yaml:"options,omitempty"`
 }
 
@@ -349,7 +282,7 @@ type Health struct {
 type v0_1Configuration Configuration
 
 // UnmarshalYAML implements the yaml.Unmarshaler interface
-// Unmarshals a string of the form X.Y into a Version, validating that X and Y can represent unsigned integers
+// Unmarshals a string of the form X.Y into a Version, validating that X and Y can represent uints
 func (version *Version) UnmarshalYAML(unmarshal func(interface{}) error) error {
 	var versionString string
 	err := unmarshal(&versionString)
@@ -391,7 +324,7 @@ func (loglevel *Loglevel) UnmarshalYAML(unmarshal func(interface{}) error) error
 	switch loglevelString {
 	case "error", "warn", "info", "debug":
 	default:
-		return fmt.Errorf("invalid loglevel %s Must be one of [error, warn, info, debug]", loglevelString)
+		return fmt.Errorf("Invalid loglevel %s Must be one of [error, warn, info, debug]", loglevelString)
 	}
 
 	*loglevel = Loglevel(loglevelString)
@@ -466,7 +399,7 @@ func (storage *Storage) UnmarshalYAML(unmarshal func(interface{}) error) error {
 		}
 
 		if len(types) > 1 {
-			return fmt.Errorf("must provide exactly one storage type. Provided: %v", types)
+			return fmt.Errorf("Must provide exactly one storage type. Provided: %v", types)
 		}
 	}
 	*storage = storageMap
@@ -554,8 +487,6 @@ func (auth Auth) MarshalYAML() (interface{}, error) {
 
 // Notifications configures multiple http endpoints.
 type Notifications struct {
-	// EventConfig is the configuration for the event format that is sent to each Endpoint.
-	EventConfig Events `yaml:"events,omitempty"`
 	// Endpoints is a list of http configurations for endpoints that
 	// respond to webhook notifications. In the future, we may allow other
 	// kinds of endpoints, such as external queues.
@@ -565,26 +496,13 @@ type Notifications struct {
 // Endpoint describes the configuration of an http webhook notification
 // endpoint.
 type Endpoint struct {
 	Name      string        `yaml:"name"`      // identifies the endpoint in the registry instance.
 	Disabled  bool          `yaml:"disabled"`  // disables the endpoint
 	URL       string        `yaml:"url"`       // post url for the endpoint.
 	Headers   http.Header   `yaml:"headers"`   // static headers that should be added to all requests
 	Timeout   time.Duration `yaml:"timeout"`   // HTTP timeout
 	Threshold int           `yaml:"threshold"` // circuit breaker threshold before backing off on failure
 	Backoff   time.Duration `yaml:"backoff"`   // backoff duration
-	IgnoredMediaTypes []string `yaml:"ignoredmediatypes"` // target media types to ignore
-	Ignore            Ignore   `yaml:"ignore"`            // ignore event types
-}
-
-// Events configures notification events.
-type Events struct {
-	IncludeReferences bool `yaml:"includereferences"` // include reference data in manifest events
-}
-
-//Ignore configures mediaTypes and actions of the event, that it won't be propagated
-type Ignore struct {
-	MediaTypes []string `yaml:"mediatypes"` // target media types to ignore
-	Actions    []string `yaml:"actions"`    // ignore action types
 }
 
 // Reporting defines error reporting methods.
@@ -657,22 +575,15 @@ func Parse(rd io.Reader) (*Configuration, error) {
 			ParseAs: reflect.TypeOf(v0_1Configuration{}),
 			ConversionFunc: func(c interface{}) (interface{}, error) {
 				if v0_1, ok := c.(*v0_1Configuration); ok {
-					if v0_1.Log.Level == Loglevel("") {
-						if v0_1.Loglevel != Loglevel("") {
-							v0_1.Log.Level = v0_1.Loglevel
-						} else {
-							v0_1.Log.Level = Loglevel("info")
-						}
-					}
-					if v0_1.Loglevel != Loglevel("") {
-						v0_1.Loglevel = Loglevel("")
+					if v0_1.Loglevel == Loglevel("") {
+						v0_1.Loglevel = Loglevel("info")
 					}
 					if v0_1.Storage.Type() == "" {
-						return nil, errors.New("no storage configuration provided")
+						return nil, fmt.Errorf("No storage configuration provided")
 					}
 					return (*Configuration)(v0_1), nil
 				}
-				return nil, fmt.Errorf("expected *v0_1Configuration, received %#v", c)
+				return nil, fmt.Errorf("Expected *v0_1Configuration, received %#v", c)
 			},
 		},
 	})
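Both sides of the configuration.go diff keep the same entry point, Parse(rd io.Reader) (*Configuration, error), shown in the final hunk. A minimal sketch of loading a configuration through it (the YAML literal is illustrative only, not a complete production config):

```go
// Parse a small registry configuration and inspect a couple of fields.
package main

import (
	"fmt"
	"log"
	"strings"

	"github.com/docker/distribution/configuration"
)

const configYAML = `
version: 0.1
log:
  level: info
storage: inmemory
http:
  addr: :5000
`

func main() {
	// Parse validates the version, applies environment overrides
	// (REGISTRY_* variables) and returns the typed Configuration.
	config, err := configuration.Parse(strings.NewReader(configYAML))
	if err != nil {
		log.Fatalf("parsing configuration: %v", err)
	}
	fmt.Println("storage driver:", config.Storage.Type())
	fmt.Println("listen address:", config.HTTP.Addr)
}
```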
@ -7,7 +7,6 @@ import (
|
||||||
"reflect"
|
"reflect"
|
||||||
"strings"
|
"strings"
|
||||||
"testing"
|
"testing"
|
||||||
"time"
|
|
||||||
|
|
||||||
. "gopkg.in/check.v1"
|
. "gopkg.in/check.v1"
|
||||||
"gopkg.in/yaml.v2"
|
"gopkg.in/yaml.v2"
|
||||||
|
@ -20,17 +19,14 @@ func Test(t *testing.T) { TestingT(t) }
|
||||||
var configStruct = Configuration{
|
var configStruct = Configuration{
|
||||||
Version: "0.1",
|
Version: "0.1",
|
||||||
Log: struct {
|
Log: struct {
|
||||||
AccessLog struct {
|
Level Loglevel `yaml:"level"`
|
||||||
Disabled bool `yaml:"disabled,omitempty"`
|
|
||||||
} `yaml:"accesslog,omitempty"`
|
|
||||||
Level Loglevel `yaml:"level,omitempty"`
|
|
||||||
Formatter string `yaml:"formatter,omitempty"`
|
Formatter string `yaml:"formatter,omitempty"`
|
||||||
Fields map[string]interface{} `yaml:"fields,omitempty"`
|
Fields map[string]interface{} `yaml:"fields,omitempty"`
|
||||||
Hooks []LogHook `yaml:"hooks,omitempty"`
|
Hooks []LogHook `yaml:"hooks,omitempty"`
|
||||||
}{
|
}{
|
||||||
Level: "info",
|
|
||||||
Fields: map[string]interface{}{"environment": "test"},
|
Fields: map[string]interface{}{"environment": "test"},
|
||||||
},
|
},
|
||||||
|
Loglevel: "info",
|
||||||
Storage: Storage{
|
Storage: Storage{
|
||||||
"s3": Parameters{
|
"s3": Parameters{
|
||||||
"region": "us-east-1",
|
"region": "us-east-1",
|
||||||
|
@ -63,54 +59,37 @@ var configStruct = Configuration{
|
||||||
Headers: http.Header{
|
Headers: http.Header{
|
||||||
"Authorization": []string{"Bearer <example>"},
|
"Authorization": []string{"Bearer <example>"},
|
||||||
},
|
},
|
||||||
IgnoredMediaTypes: []string{"application/octet-stream"},
|
|
||||||
Ignore: Ignore{
|
|
||||||
MediaTypes: []string{"application/octet-stream"},
|
|
||||||
Actions: []string{"pull"},
|
|
||||||
},
|
|
||||||
},
|
},
|
||||||
},
|
},
|
||||||
},
|
},
|
||||||
HTTP: struct {
|
HTTP: struct {
|
||||||
Addr string `yaml:"addr,omitempty"`
|
Addr string `yaml:"addr,omitempty"`
|
||||||
Net string `yaml:"net,omitempty"`
|
Net string `yaml:"net,omitempty"`
|
||||||
Host string `yaml:"host,omitempty"`
|
Host string `yaml:"host,omitempty"`
|
||||||
Prefix string `yaml:"prefix,omitempty"`
|
Prefix string `yaml:"prefix,omitempty"`
|
||||||
Secret string `yaml:"secret,omitempty"`
|
Secret string `yaml:"secret,omitempty"`
|
||||||
RelativeURLs bool `yaml:"relativeurls,omitempty"`
|
RelativeURLs bool `yaml:"relativeurls,omitempty"`
|
||||||
DrainTimeout time.Duration `yaml:"draintimeout,omitempty"`
|
|
||||||
TLS struct {
|
TLS struct {
|
||||||
Certificate string `yaml:"certificate,omitempty"`
|
Certificate string `yaml:"certificate,omitempty"`
|
||||||
Key string `yaml:"key,omitempty"`
|
Key string `yaml:"key,omitempty"`
|
||||||
ClientCAs []string `yaml:"clientcas,omitempty"`
|
ClientCAs []string `yaml:"clientcas,omitempty"`
|
||||||
MinimumTLS string `yaml:"minimumtls,omitempty"`
|
|
||||||
LetsEncrypt struct {
|
LetsEncrypt struct {
|
||||||
CacheFile string `yaml:"cachefile,omitempty"`
|
CacheFile string `yaml:"cachefile,omitempty"`
|
||||||
Email string `yaml:"email,omitempty"`
|
Email string `yaml:"email,omitempty"`
|
||||||
Hosts []string `yaml:"hosts,omitempty"`
|
|
||||||
} `yaml:"letsencrypt,omitempty"`
|
} `yaml:"letsencrypt,omitempty"`
|
||||||
} `yaml:"tls,omitempty"`
|
} `yaml:"tls,omitempty"`
|
||||||
Headers http.Header `yaml:"headers,omitempty"`
|
Headers http.Header `yaml:"headers,omitempty"`
|
||||||
Debug struct {
|
Debug struct {
|
||||||
Addr string `yaml:"addr,omitempty"`
|
Addr string `yaml:"addr,omitempty"`
|
||||||
Prometheus struct {
|
|
||||||
Enabled bool `yaml:"enabled,omitempty"`
|
|
||||||
Path string `yaml:"path,omitempty"`
|
|
||||||
} `yaml:"prometheus,omitempty"`
|
|
||||||
} `yaml:"debug,omitempty"`
|
} `yaml:"debug,omitempty"`
|
||||||
HTTP2 struct {
|
|
||||||
Disabled bool `yaml:"disabled,omitempty"`
|
|
||||||
} `yaml:"http2,omitempty"`
|
|
||||||
}{
|
}{
|
||||||
TLS: struct {
|
TLS: struct {
|
||||||
Certificate string `yaml:"certificate,omitempty"`
|
Certificate string `yaml:"certificate,omitempty"`
|
||||||
Key string `yaml:"key,omitempty"`
|
Key string `yaml:"key,omitempty"`
|
||||||
ClientCAs []string `yaml:"clientcas,omitempty"`
|
ClientCAs []string `yaml:"clientcas,omitempty"`
|
||||||
MinimumTLS string `yaml:"minimumtls,omitempty"`
|
|
||||||
LetsEncrypt struct {
|
LetsEncrypt struct {
|
||||||
CacheFile string `yaml:"cachefile,omitempty"`
|
CacheFile string `yaml:"cachefile,omitempty"`
|
||||||
Email string `yaml:"email,omitempty"`
|
Email string `yaml:"email,omitempty"`
|
||||||
Hosts []string `yaml:"hosts,omitempty"`
|
|
||||||
} `yaml:"letsencrypt,omitempty"`
|
} `yaml:"letsencrypt,omitempty"`
|
||||||
}{
|
}{
|
||||||
ClientCAs: []string{"/path/to/ca.pem"},
|
ClientCAs: []string{"/path/to/ca.pem"},
|
||||||
|
@ -118,11 +97,6 @@ var configStruct = Configuration{
|
||||||
Headers: http.Header{
|
Headers: http.Header{
|
||||||
"X-Content-Type-Options": []string{"nosniff"},
|
"X-Content-Type-Options": []string{"nosniff"},
|
||||||
},
|
},
|
||||||
HTTP2: struct {
|
|
||||||
Disabled bool `yaml:"disabled,omitempty"`
|
|
||||||
}{
|
|
||||||
Disabled: false,
|
|
||||||
},
|
|
||||||
},
|
},
|
||||||
}
|
}
|
||||||
|
|
||||||
|
@ -130,9 +104,9 @@ var configStruct = Configuration{
|
||||||
var configYamlV0_1 = `
|
var configYamlV0_1 = `
|
||||||
version: 0.1
|
version: 0.1
|
||||||
log:
|
log:
|
||||||
level: info
|
|
||||||
fields:
|
fields:
|
||||||
environment: test
|
environment: test
|
||||||
|
loglevel: info
|
||||||
storage:
|
storage:
|
||||||
s3:
|
s3:
|
||||||
region: us-east-1
|
region: us-east-1
|
||||||
|
@ -154,13 +128,6 @@ notifications:
|
||||||
url: http://example.com
|
url: http://example.com
|
||||||
headers:
|
headers:
|
||||||
Authorization: [Bearer <example>]
|
Authorization: [Bearer <example>]
|
||||||
ignoredmediatypes:
|
|
||||||
- application/octet-stream
|
|
||||||
ignore:
|
|
||||||
mediatypes:
|
|
||||||
- application/octet-stream
|
|
||||||
actions:
|
|
||||||
- pull
|
|
||||||
reporting:
|
reporting:
|
||||||
bugsnag:
|
bugsnag:
|
||||||
apikey: BugsnagApiKey
|
apikey: BugsnagApiKey
|
||||||
|
@ -175,8 +142,7 @@ http:
|
||||||
// storage driver with no parameters
|
// storage driver with no parameters
|
||||||
var inmemoryConfigYamlV0_1 = `
|
var inmemoryConfigYamlV0_1 = `
|
||||||
version: 0.1
|
version: 0.1
|
||||||
log:
|
loglevel: info
|
||||||
level: info
|
|
||||||
storage: inmemory
|
storage: inmemory
|
||||||
auth:
|
auth:
|
||||||
silly:
|
silly:
|
||||||
|
@ -188,13 +154,6 @@ notifications:
|
||||||
url: http://example.com
|
url: http://example.com
|
||||||
headers:
|
headers:
|
||||||
Authorization: [Bearer <example>]
|
Authorization: [Bearer <example>]
|
||||||
ignoredmediatypes:
|
|
||||||
- application/octet-stream
|
|
||||||
ignore:
|
|
||||||
mediatypes:
|
|
||||||
- application/octet-stream
|
|
||||||
actions:
|
|
||||||
- pull
|
|
||||||
http:
|
http:
|
||||||
headers:
|
headers:
|
||||||
X-Content-Type-Options: [nosniff]
|
X-Content-Type-Options: [nosniff]
|
||||||
|
@@ -217,7 +176,6 @@ func (suite *ConfigSuite) TestMarshalRoundtrip(c *C) {
 	configBytes, err := yaml.Marshal(suite.expectedConfig)
 	c.Assert(err, IsNil)
 	config, err := Parse(bytes.NewReader(configBytes))
-	c.Log(string(configBytes))
 	c.Assert(err, IsNil)
 	c.Assert(config, DeepEquals, suite.expectedConfig)
 }
@@ -340,9 +298,9 @@ func (suite *ConfigSuite) TestParseWithSameEnvLoglevel(c *C) {
 // TestParseWithDifferentEnvLoglevel validates that providing an environment variable defining the
 // log level will override the value provided in the yaml document
 func (suite *ConfigSuite) TestParseWithDifferentEnvLoglevel(c *C) {
-	suite.expectedConfig.Log.Level = "error"
+	suite.expectedConfig.Loglevel = "error"
 
-	os.Setenv("REGISTRY_LOG_LEVEL", "error")
+	os.Setenv("REGISTRY_LOGLEVEL", "error")
 
 	config, err := Parse(bytes.NewReader([]byte(configYamlV0_1)))
 	c.Assert(err, IsNil)
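The hunk above is the point of the loglevel rename: on master the value lives at `log.level` and is overridden by `REGISTRY_LOG_LEVEL`, while the older branch keeps a top-level `loglevel` driven by `REGISTRY_LOGLEVEL`. A minimal sketch of that override convention follows; the `config` struct and the hand-rolled override are illustrative only, not the package's actual parser.

    package main

    import (
    	"fmt"
    	"os"

    	"gopkg.in/yaml.v2"
    )

    // config is a stand-in for the real Configuration type.
    type config struct {
    	Log struct {
    		Level string `yaml:"level"`
    	} `yaml:"log"`
    }

    func main() {
    	var c config
    	// The YAML document supplies a default...
    	if err := yaml.Unmarshal([]byte("log:\n  level: info\n"), &c); err != nil {
    		panic(err)
    	}
    	// ...and an environment variable named after the config path wins over it.
    	if v, ok := os.LookupEnv("REGISTRY_LOG_LEVEL"); ok {
    		c.Log.Level = v
    	}
    	fmt.Println(c.Log.Level) // "error" when REGISTRY_LOG_LEVEL=error is set, otherwise "info"
    }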
@@ -542,7 +500,9 @@ func copyConfig(config Configuration) *Configuration {
 	}
 
 	configCopy.Notifications = Notifications{Endpoints: []Endpoint{}}
-	configCopy.Notifications.Endpoints = append(configCopy.Notifications.Endpoints, config.Notifications.Endpoints...)
+	for _, v := range config.Notifications.Endpoints {
+		configCopy.Notifications.Endpoints = append(configCopy.Notifications.Endpoints, v)
+	}
 
 	configCopy.HTTP.Headers = make(http.Header)
 	for k, v := range config.HTTP.Headers {
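Both sides of the copyConfig hunk do the same work; master just collapses the per-element loop into a single variadic append. A small sketch under an assumed Endpoint type:

    package main

    import "fmt"

    // Endpoint is a stand-in for the notification endpoint type.
    type Endpoint struct{ Name string }

    func main() {
    	src := []Endpoint{{Name: "alerts"}, {Name: "audit"}}

    	// Single-call form: append the whole source slice at once.
    	dst := append([]Endpoint{}, src...)

    	// Loop form from the older branch: element by element, same result.
    	dst2 := []Endpoint{}
    	for _, v := range src {
    		dst2 = append(dst2, v)
    	}

    	fmt.Println(len(dst) == len(dst2)) // true
    }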
@@ -8,7 +8,7 @@ import (
 	"strconv"
 	"strings"
 
-	"github.com/sirupsen/logrus"
+	"github.com/Sirupsen/logrus"
 	"gopkg.in/yaml.v2"
 )
 
@@ -122,7 +122,7 @@ func (p *Parser) Parse(in []byte, v interface{}) error {
 
 	parseInfo, ok := p.mapping[versionedStruct.Version]
 	if !ok {
-		return fmt.Errorf("unsupported version: %q", versionedStruct.Version)
+		return fmt.Errorf("Unsupported version: %q", versionedStruct.Version)
 	}
 
 	parseAs := reflect.New(parseInfo.ParseAs)
@@ -220,7 +220,7 @@ func (p *Parser) overwriteStruct(v reflect.Value, fullpath string, path []string
 		}
 	case reflect.Ptr:
 		if field.IsNil() {
-			field.Set(reflect.New(field.Type().Elem()))
+			field.Set(reflect.New(sf.Type))
 		}
 	}
 
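The one-line change in overwriteStruct above matters because of how reflect.New interacts with pointer fields: the field's type is already a pointer, so the value to allocate is its element type. A standalone sketch, with a hypothetical cfg/Log pair standing in for the real structs:

    package main

    import (
    	"fmt"
    	"reflect"
    )

    type Log struct{ Formatter string }

    type cfg struct {
    	Log *Log // a nil pointer field to be initialized via reflection
    }

    func main() {
    	c := cfg{}
    	field := reflect.ValueOf(&c).Elem().FieldByName("Log") // kind: reflect.Ptr, currently nil

    	// field.Type() is *Log, so allocate its element type to obtain a usable *Log.
    	field.Set(reflect.New(field.Type().Elem()))

    	// Using reflect.New(field.Type()) instead would build a **Log and the Set
    	// call would panic with a type mismatch, which is the trap the older
    	// reflect.New(sf.Type) form appears to fall into for pointer fields.
    	c.Log.Formatter = "json"
    	fmt.Println(c.Log.Formatter)
    }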
@@ -1,70 +0,0 @@
-package configuration
-
-import (
-	"os"
-	"reflect"
-
-	. "gopkg.in/check.v1"
-)
-
-type localConfiguration struct {
-	Version Version `yaml:"version"`
-	Log     *Log    `yaml:"log"`
-}
-
-type Log struct {
-	Formatter string `yaml:"formatter,omitempty"`
-}
-
-var expectedConfig = localConfiguration{
-	Version: "0.1",
-	Log: &Log{
-		Formatter: "json",
-	},
-}
-
-type ParserSuite struct{}
-
-var _ = Suite(new(ParserSuite))
-
-func (suite *ParserSuite) TestParserOverwriteIninitializedPoiner(c *C) {
-	config := localConfiguration{}
-
-	os.Setenv("REGISTRY_LOG_FORMATTER", "json")
-	defer os.Unsetenv("REGISTRY_LOG_FORMATTER")
-
-	p := NewParser("registry", []VersionedParseInfo{
-		{
-			Version: "0.1",
-			ParseAs: reflect.TypeOf(config),
-			ConversionFunc: func(c interface{}) (interface{}, error) {
-				return c, nil
-			},
-		},
-	})
-
-	err := p.Parse([]byte(`{version: "0.1", log: {formatter: "text"}}`), &config)
-	c.Assert(err, IsNil)
-	c.Assert(config, DeepEquals, expectedConfig)
-}
-
-func (suite *ParserSuite) TestParseOverwriteUnininitializedPoiner(c *C) {
-	config := localConfiguration{}
-
-	os.Setenv("REGISTRY_LOG_FORMATTER", "json")
-	defer os.Unsetenv("REGISTRY_LOG_FORMATTER")
-
-	p := NewParser("registry", []VersionedParseInfo{
-		{
-			Version: "0.1",
-			ParseAs: reflect.TypeOf(config),
-			ConversionFunc: func(c interface{}) (interface{}, error) {
-				return c, nil
-			},
-		},
-	})
-
-	err := p.Parse([]byte(`{version: "0.1"}`), &config)
-	c.Assert(err, IsNil)
-	c.Assert(config, DeepEquals, expectedConfig)
-}
@@ -1,16 +1,21 @@
 package context
 
 import (
-	"context"
 	"sync"
 
 	"github.com/docker/distribution/uuid"
+	"golang.org/x/net/context"
 )
 
+// Context is a copy of Context from the golang.org/x/net/context package.
+type Context interface {
+	context.Context
+}
+
 // instanceContext is a context that provides only an instance id. It is
 // provided as the main background context.
 type instanceContext struct {
-	context.Context
+	Context
 	id   string    // id of context, logged as "instance.id"
 	once sync.Once // once protect generation of the id
 }
@@ -37,10 +42,17 @@ var background = &instanceContext{
 // Background returns a non-nil, empty Context. The background context
 // provides a single key, "instance.id" that is globally unique to the
 // process.
-func Background() context.Context {
+func Background() Context {
 	return background
 }
 
+// WithValue returns a copy of parent in which the value associated with key is
+// val. Use context Values only for request-scoped data that transits processes
+// and APIs, not for passing optional parameters to functions.
+func WithValue(parent Context, key, val interface{}) Context {
+	return context.WithValue(parent, key, val)
+}
+
 // stringMapContext is a simple context implementation that checks a map for a
 // key, falling back to a parent if not present.
 type stringMapContext struct {
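The two context.go hunks above are the heart of the vendoring difference: master uses the standard library's context.Context directly, while the older branch defines a package-local Context interface over golang.org/x/net/context and re-exports WithValue. A minimal sketch of why both signatures interoperate; the names mirror the diff but this is not the package's actual code:

    package main

    import (
    	"context"
    	"fmt"
    )

    // Context embeds the standard interface, so every context.Context value
    // already satisfies it and can flow through either style of signature.
    type Context interface {
    	context.Context
    }

    // WithValue mirrors the re-exported helper on the older branch.
    func WithValue(parent Context, key, val interface{}) Context {
    	return context.WithValue(parent, key, val)
    }

    func main() {
    	var ctx Context = context.Background() // a stdlib value satisfies the alias
    	ctx = WithValue(ctx, "instance.id", "abc123")
    	fmt.Println(ctx.Value("instance.id"))
    }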
@@ -1,6 +1,7 @@
 // Package context provides several utilities for working with
-// Go's context in http requests. Primarily, the focus is on logging relevant
-// request information but this package is not limited to that purpose.
+// golang.org/x/net/context in http requests. Primarily, the focus is on
+// logging relevant request information but this package is not limited to
+// that purpose.
 //
 // The easiest way to get started is to get the background context:
 //
@@ -63,7 +64,7 @@
 // Note that this only affects the new context, the previous context, with the
 // version field, can be used independently. Put another way, the new logger,
 // added to the request context, is unique to that context and can have
-// request scoped variables.
+// request scoped varaibles.
 //
 // HTTP Requests
 //
@@ -1,7 +1,6 @@
 package context
 
 import (
-	"context"
 	"errors"
 	"net"
 	"net/http"
@@ -9,9 +8,9 @@ import (
 	"sync"
 	"time"
 
+	log "github.com/Sirupsen/logrus"
 	"github.com/docker/distribution/uuid"
 	"github.com/gorilla/mux"
-	log "github.com/sirupsen/logrus"
 )
 
 // Common errors used with this package.
@@ -69,7 +68,7 @@ func RemoteIP(r *http.Request) string {
 // is available at "http.request". Other common attributes are available under
 // the prefix "http.request.". If a request is already present on the context,
 // this method will panic.
-func WithRequest(ctx context.Context, r *http.Request) context.Context {
+func WithRequest(ctx Context, r *http.Request) Context {
 	if ctx.Value("http.request") != nil {
 		// NOTE(stevvooe): This needs to be considered a programming error. It
 		// is unlikely that we'd want to have more than one request in
@@ -88,7 +87,7 @@ func WithRequest(ctx context.Context, r *http.Request) context.Context {
 // GetRequest returns the http request in the given context. Returns
 // ErrNoRequestContext if the context does not have an http request associated
 // with it.
-func GetRequest(ctx context.Context) (*http.Request, error) {
+func GetRequest(ctx Context) (*http.Request, error) {
 	if r, ok := ctx.Value("http.request").(*http.Request); r != nil && ok {
 		return r, nil
 	}
@@ -97,24 +96,34 @@ func GetRequest(ctx context.Context) (*http.Request, error) {
 
 // GetRequestID attempts to resolve the current request id, if possible. An
 // error is return if it is not available on the context.
-func GetRequestID(ctx context.Context) string {
+func GetRequestID(ctx Context) string {
 	return GetStringValue(ctx, "http.request.id")
 }
 
 // WithResponseWriter returns a new context and response writer that makes
 // interesting response statistics available within the context.
-func WithResponseWriter(ctx context.Context, w http.ResponseWriter) (context.Context, http.ResponseWriter) {
+func WithResponseWriter(ctx Context, w http.ResponseWriter) (Context, http.ResponseWriter) {
 	irw := instrumentedResponseWriter{
 		ResponseWriter: w,
 		Context:        ctx,
 	}
 
+	if closeNotifier, ok := w.(http.CloseNotifier); ok {
+		irwCN := &instrumentedResponseWriterCN{
+			instrumentedResponseWriter: irw,
+			CloseNotifier:              closeNotifier,
+		}
+
+		return irwCN, irwCN
+	}
+
 	return &irw, &irw
 }
 
 // GetResponseWriter returns the http.ResponseWriter from the provided
 // context. If not present, ErrNoResponseWriterContext is returned. The
 // returned instance provides instrumentation in the context.
-func GetResponseWriter(ctx context.Context) (http.ResponseWriter, error) {
+func GetResponseWriter(ctx Context) (http.ResponseWriter, error) {
 	v := ctx.Value("http.response")
 
 	rw, ok := v.(http.ResponseWriter)
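The WithResponseWriter hunk above only wraps the writer in the CloseNotifier-aware type when the underlying writer actually implements http.CloseNotifier, so the wrapper never advertises an interface it cannot honor. A self-contained sketch of that type-assertion gate, with stand-in types (http.CloseNotifier is deprecated in current Go, kept here only to mirror the hunk):

    package main

    import (
    	"fmt"
    	"net/http"
    	"net/http/httptest"
    )

    // wrapped stands in for instrumentedResponseWriter.
    type wrapped struct {
    	http.ResponseWriter
    }

    // wrappedCN additionally forwards CloseNotify when the underlying writer supports it.
    type wrappedCN struct {
    	wrapped
    	http.CloseNotifier
    }

    func wrap(w http.ResponseWriter) http.ResponseWriter {
    	iw := wrapped{ResponseWriter: w}
    	if cn, ok := w.(http.CloseNotifier); ok {
    		return &wrappedCN{wrapped: iw, CloseNotifier: cn}
    	}
    	return &iw
    }

    func main() {
    	rec := httptest.NewRecorder() // does not implement CloseNotifier
    	_, isCN := wrap(rec).(http.CloseNotifier)
    	fmt.Println(isCN) // false: callers are not handed a CloseNotify that would lie
    }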
@@ -134,7 +143,7 @@ var getVarsFromRequest = mux.Vars
 // example, if looking for the variable "name", it can be accessed as
 // "vars.name". Implementations that are accessing values need not know that
 // the underlying context is implemented with gorilla/mux vars.
-func WithVars(ctx context.Context, r *http.Request) context.Context {
+func WithVars(ctx Context, r *http.Request) Context {
 	return &muxVarsContext{
 		Context: ctx,
 		vars:    getVarsFromRequest(r),
@@ -144,7 +153,7 @@ func WithVars(ctx context.Context, r *http.Request) context.Context {
 // GetRequestLogger returns a logger that contains fields from the request in
 // the current context. If the request is not available in the context, no
 // fields will display. Request loggers can safely be pushed onto the context.
-func GetRequestLogger(ctx context.Context) Logger {
+func GetRequestLogger(ctx Context) Logger {
 	return GetLogger(ctx,
 		"http.request.id",
 		"http.request.method",
@@ -160,7 +169,7 @@ func GetRequestLogger(ctx context.Context) Logger {
 // Because the values are read at call time, pushing a logger returned from
 // this function on the context will lead to missing or invalid data. Only
 // call this at the end of a request, after the response has been written.
-func GetResponseLogger(ctx context.Context) Logger {
+func GetResponseLogger(ctx Context) Logger {
 	l := getLogrusLogger(ctx,
 		"http.response.written",
 		"http.response.status",
@@ -177,7 +186,7 @@ func GetResponseLogger(ctx context.Context) Logger {
 
 // httpRequestContext makes information about a request available to context.
 type httpRequestContext struct {
-	context.Context
+	Context
 
 	startedAt time.Time
 	id        string
@@ -236,7 +245,7 @@ fallback:
 }
 
 type muxVarsContext struct {
-	context.Context
+	Context
 	vars map[string]string
 }
 
@@ -258,12 +267,20 @@ func (ctx *muxVarsContext) Value(key interface{}) interface{} {
 	return ctx.Context.Value(key)
 }
 
+// instrumentedResponseWriterCN provides response writer information in a
+// context. It implements http.CloseNotifier so that users can detect
+// early disconnects.
+type instrumentedResponseWriterCN struct {
+	instrumentedResponseWriter
+	http.CloseNotifier
+}
+
 // instrumentedResponseWriter provides response writer information in a
 // context. This variant is only used in the case where CloseNotifier is not
 // implemented by the parent ResponseWriter.
 type instrumentedResponseWriter struct {
 	http.ResponseWriter
-	context.Context
+	Context
 
 	mu     sync.Mutex
 	status int
@@ -335,3 +352,13 @@ func (irw *instrumentedResponseWriter) Value(key interface{}) interface{} {
 fallback:
 	return irw.Context.Value(key)
 }
+
+func (irw *instrumentedResponseWriterCN) Value(key interface{}) interface{} {
+	if keyStr, ok := key.(string); ok {
+		if keyStr == "http.response" {
+			return irw
+		}
+	}
+
+	return irw.instrumentedResponseWriter.Value(key)
+}
@@ -1,11 +1,10 @@
 package context
 
 import (
-	"context"
 	"fmt"
-	"runtime"
 
-	"github.com/sirupsen/logrus"
+	"github.com/Sirupsen/logrus"
+	"runtime"
 )
 
 // Logger provides a leveled-logging interface.
@@ -39,28 +38,24 @@ type Logger interface {
 	Warn(args ...interface{})
 	Warnf(format string, args ...interface{})
 	Warnln(args ...interface{})
-
-	WithError(err error) *logrus.Entry
 }
 
-type loggerKey struct{}
-
 // WithLogger creates a new context with provided logger.
-func WithLogger(ctx context.Context, logger Logger) context.Context {
-	return context.WithValue(ctx, loggerKey{}, logger)
+func WithLogger(ctx Context, logger Logger) Context {
+	return WithValue(ctx, "logger", logger)
 }
 
 // GetLoggerWithField returns a logger instance with the specified field key
 // and value without affecting the context. Extra specified keys will be
 // resolved from the context.
-func GetLoggerWithField(ctx context.Context, key, value interface{}, keys ...interface{}) Logger {
+func GetLoggerWithField(ctx Context, key, value interface{}, keys ...interface{}) Logger {
 	return getLogrusLogger(ctx, keys...).WithField(fmt.Sprint(key), value)
 }
 
 // GetLoggerWithFields returns a logger instance with the specified fields
 // without affecting the context. Extra specified keys will be resolved from
 // the context.
-func GetLoggerWithFields(ctx context.Context, fields map[interface{}]interface{}, keys ...interface{}) Logger {
+func GetLoggerWithFields(ctx Context, fields map[interface{}]interface{}, keys ...interface{}) Logger {
 	// must convert from interface{} -> interface{} to string -> interface{} for logrus.
 	lfields := make(logrus.Fields, len(fields))
 	for key, value := range fields {
@@ -76,7 +71,7 @@ func GetLoggerWithFields(ctx context.Context, fields map[interface{}]interface{}
 // argument passed to GetLogger will be passed to fmt.Sprint when expanded as
 // a logging key field. If context keys are integer constants, for example,
 // its recommended that a String method is implemented.
-func GetLogger(ctx context.Context, keys ...interface{}) Logger {
+func GetLogger(ctx Context, keys ...interface{}) Logger {
 	return getLogrusLogger(ctx, keys...)
 }
 
@@ -84,11 +79,11 @@ func GetLogger(ctx context.Context, keys ...interface{}) Logger {
 // are provided, they will be resolved on the context and included in the
 // logger. Only use this function if specific logrus functionality is
 // required.
-func getLogrusLogger(ctx context.Context, keys ...interface{}) *logrus.Entry {
+func getLogrusLogger(ctx Context, keys ...interface{}) *logrus.Entry {
 	var logger *logrus.Entry
 
 	// Get a logger, if it is present.
-	loggerInterface := ctx.Value(loggerKey{})
+	loggerInterface := ctx.Value("logger")
 	if loggerInterface != nil {
 		if lgr, ok := loggerInterface.(*logrus.Entry); ok {
 			logger = lgr
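The logger.go hunks show the other recurring theme: master stores context values under an unexported key type (loggerKey{}) where the older branch uses the string "logger". A small sketch of why the typed key cannot collide with values set by other packages; the logger here is just a string stand-in:

    package main

    import (
    	"context"
    	"fmt"
    )

    // loggerKey is an unexported, zero-size key type, so no other package can
    // construct the same key and accidentally overwrite or read this slot.
    type loggerKey struct{}

    func WithLogger(ctx context.Context, logger interface{}) context.Context {
    	return context.WithValue(ctx, loggerKey{}, logger)
    }

    func main() {
    	ctx := WithLogger(context.Background(), "fake-logger")
    	fmt.Println(ctx.Value(loggerKey{}))     // fake-logger
    	fmt.Println(ctx.Value("logger") == nil) // true: the string key is a different key
    }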
@@ -1,7 +1,6 @@
 package context
 
 import (
-	"context"
 	"runtime"
 	"time"
 
@@ -37,7 +36,7 @@ import (
 //
 // Notice that the function name is automatically resolved, along with the
 // package and a trace id is emitted that can be linked with parent ids.
-func WithTrace(ctx context.Context) (context.Context, func(format string, a ...interface{})) {
+func WithTrace(ctx Context) (Context, func(format string, a ...interface{})) {
 	if ctx == nil {
 		ctx = Background()
 	}
@@ -70,7 +69,7 @@ func WithTrace(ctx context.Context) (context.Context, func(format string, a ...interface{})) {
 // also provides fast lookup for the various attributes that are available on
 // the trace.
 type traced struct {
-	context.Context
+	Context
 	id     string
 	parent string
 	start  time.Time
@@ -1,7 +1,6 @@
 package context
 
 import (
-	"context"
 	"runtime"
 	"testing"
 	"time"
@@ -36,7 +35,7 @@ func TestWithTrace(t *testing.T) {
 	ctx, done := WithTrace(Background())
 	defer done("this will be emitted at end of test")
 
-	checkContextForValues(ctx, t, append(base, valueTestCase{
+	checkContextForValues(t, ctx, append(base, valueTestCase{
 		key:      "trace.func",
 		expected: f.Name(),
 	}))
@@ -49,7 +48,7 @@ func TestWithTrace(t *testing.T) {
 		ctx, done := WithTrace(ctx)
 		defer done("this should be subordinate to the other trace")
 		time.Sleep(time.Second)
-		checkContextForValues(ctx, t, append(base, valueTestCase{
+		checkContextForValues(t, ctx, append(base, valueTestCase{
 			key:      "trace.func",
 			expected: f.Name(),
 		}, valueTestCase{
@@ -68,7 +67,8 @@ type valueTestCase struct {
 	notnilorempty bool // just check not empty/not nil
 }
 
-func checkContextForValues(ctx context.Context, t *testing.T, values []valueTestCase) {
+func checkContextForValues(t *testing.T, ctx Context, values []valueTestCase) {
+
 	for _, testcase := range values {
 		v := ctx.Value(testcase.key)
 		if testcase.notnilorempty {
@@ -1,14 +1,13 @@
 package context
 
 import (
-	"context"
 	"time"
 )
 
 // Since looks up key, which should be a time.Time, and returns the duration
 // since that time. If the key is not found, the value returned will be zero.
 // This is helpful when inferring metrics related to context execution times.
-func Since(ctx context.Context, key interface{}) time.Duration {
+func Since(ctx Context, key interface{}) time.Duration {
 	if startedAt, ok := ctx.Value(key).(time.Time); ok {
 		return time.Since(startedAt)
 	}
@@ -17,7 +16,7 @@ func Since(ctx context.Context, key interface{}) time.Duration {
 
 // GetStringValue returns a string value from the context. The empty string
 // will be returned if not found.
-func GetStringValue(ctx context.Context, key interface{}) (value string) {
+func GetStringValue(ctx Context, key interface{}) (value string) {
 	if valuev, ok := ctx.Value(key).(string); ok {
 		value = valuev
 	}
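Since and GetStringValue only change signature in this diff, but Since is worth a usage note: it reads a time.Time out of the context and reports the elapsed duration, which is how request latencies reach the response logger. A hedged sketch (the key name is illustrative):

    package main

    import (
    	"context"
    	"fmt"
    	"time"
    )

    // Since mirrors the helper above: elapsed time for a time.Time stored under key,
    // or zero when the key is absent.
    func Since(ctx context.Context, key interface{}) time.Duration {
    	if startedAt, ok := ctx.Value(key).(time.Time); ok {
    		return time.Since(startedAt)
    	}
    	return 0
    }

    func main() {
    	ctx := context.WithValue(context.Background(), "request.start", time.Now())
    	time.Sleep(10 * time.Millisecond)
    	fmt.Println(Since(ctx, "request.start") > 0) // true
    	fmt.Println(Since(ctx, "missing"))           // 0s
    }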
@@ -1,22 +1,16 @@
 package context
 
-import "context"
-
-type versionKey struct{}
-
-func (versionKey) String() string { return "version" }
-
 // WithVersion stores the application version in the context. The new context
 // gets a logger to ensure log messages are marked with the application
 // version.
-func WithVersion(ctx context.Context, version string) context.Context {
-	ctx = context.WithValue(ctx, versionKey{}, version)
+func WithVersion(ctx Context, version string) Context {
+	ctx = WithValue(ctx, "version", version)
 	// push a new logger onto the stack
-	return WithLogger(ctx, GetLogger(ctx, versionKey{}))
+	return WithLogger(ctx, GetLogger(ctx, "version"))
 }
 
 // GetVersion returns the application version from the context. An empty
 // string may returned if the version was not set on the context.
-func GetVersion(ctx context.Context) string {
-	return GetStringValue(ctx, versionKey{})
+func GetVersion(ctx Context) string {
+	return GetStringValue(ctx, "version")
 }
@@ -70,7 +70,7 @@ to the 1.0 registry. Requests from newer clients will route to the 2.0 registry.
     Removing intermediate container edb84c2b40cb
     Successfully built 74acc70fa106
 
-    The command outputs its progress until it completes.
+    The commmand outputs its progress until it completes.
 
 4. Start your configuration with compose.
 
@@ -123,22 +123,22 @@ to the 1.0 registry. Requests from newer clients will route to the 2.0 registry.
 
 4. Use `curl` to list the image in the registry.
 
-        $ curl -v -X GET http://localhost:5000/v2/registry_one/tags/list
+        $ curl -v -X GET http://localhost:32777/v2/registry1/tags/list
         * Hostname was NOT found in DNS cache
         *   Trying 127.0.0.1...
         * Connected to localhost (127.0.0.1) port 32777 (#0)
         > GET /v2/registry1/tags/list HTTP/1.1
         > User-Agent: curl/7.36.0
-        > Host: localhost:5000
+        > Host: localhost:32777
         > Accept: */*
         >
         < HTTP/1.1 200 OK
-        < Content-Type: application/json
+        < Content-Type: application/json; charset=utf-8
         < Docker-Distribution-Api-Version: registry/2.0
         < Date: Tue, 14 Apr 2015 22:34:13 GMT
         < Content-Length: 39
         <
-        {"name":"registry_one","tags":["latest"]}
+        {"name":"registry1","tags":["latest"]}
         * Connection #0 to host localhost left intact
 
 This example refers to the specific port assigned to the 2.0 registry. You saw
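The curl exchange above is a plain GET against the v2 tags endpoint; if you prefer to script the same check, a small Go sketch follows. The host, port and repository name are the ones used in this walkthrough and will differ on your machine.

    package main

    import (
    	"encoding/json"
    	"fmt"
    	"net/http"
    )

    // tagList matches the response body shown above: {"name": ..., "tags": [...]}.
    type tagList struct {
    	Name string   `json:"name"`
    	Tags []string `json:"tags"`
    }

    func main() {
    	resp, err := http.Get("http://localhost:5000/v2/registry_one/tags/list")
    	if err != nil {
    		panic(err)
    	}
    	defer resp.Body.Close()

    	var tl tagList
    	if err := json.NewDecoder(resp.Body).Decode(&tl); err != nil {
    		panic(err)
    	}
    	fmt.Println(tl.Name, tl.Tags) // e.g. registry_one [latest]
    }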
@@ -4,6 +4,3 @@ proxy_set_header X-Real-IP $remote_addr; # pass on real client's IP
 proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
 proxy_set_header X-Forwarded-Proto $scheme;
 proxy_read_timeout 900;
-proxy_send_timeout 300;
-proxy_request_buffering off; (see issue #2292 - https://github.com/moby/moby/issues/2292)
-proxy_http_version 1.1;
@@ -35,7 +35,7 @@ the [release page](https://github.com/docker/golem/releases/tag/v0.1).
 
 #### Running golem with docker
 
-Additionally golem can be run as a docker image requiring no additional
+Additionally golem can be run as a docker image requiring no additonal
 installation.
 
 `docker run --privileged -v "$GOPATH/src/github.com/docker/distribution/contrib/docker-integration:/test" -w /test distribution/golem golem -rundaemon .`
@@ -18,7 +18,6 @@ nginx:
     - "5557:5557"
     - "5558:5558"
     - "5559:5559"
-    - "5600:5600"
     - "6666:6666"
   links:
     - registryv2:registryv2
@@ -26,7 +25,6 @@ nginx:
     - registryv2token:registryv2token
     - tokenserver:tokenserver
     - registryv2tokenoauth:registryv2tokenoauth
-    - registryv2tokenoauthnotls:registryv2tokenoauthnotls
     - tokenserveroauth:tokenserveroauth
 registryv2:
   image: golem-distribution:latest
@@ -55,16 +53,9 @@ registryv2tokenoauth:
     - ./tokenserver-oauth/certs/localregistry.cert:/etc/docker/registry/localregistry.cert
     - ./tokenserver-oauth/certs/localregistry.key:/etc/docker/registry/localregistry.key
     - ./tokenserver-oauth/certs/signing.cert:/etc/docker/registry/tokenbundle.pem
-registryv2tokenoauthnotls:
-  image: golem-distribution:latest
-  ports:
-    - "5000"
-  volumes:
-    - ./tokenserver-oauth/registry-config-notls.yml:/etc/docker/registry/config.yml
-    - ./tokenserver-oauth/certs/signing.cert:/etc/docker/registry/tokenbundle.pem
 tokenserveroauth:
   build: "tokenserver-oauth"
-  command: "--debug -addr 0.0.0.0:5559 -issuer registry-test -passwd .htpasswd -tlscert tls.cert -tlskey tls.key -key sign.key -realm http://auth.localregistry:5559 -enforce-class"
+  command: "--debug -addr 0.0.0.0:5559 -issuer registry-test -passwd .htpasswd -tlscert tls.cert -tlskey tls.key -key sign.key -realm http://auth.localregistry:5559"
   ports:
     - "5559"
 malevolent:
@@ -1,6 +1,6 @@
 [[suite]]
 dind=true
-images=[ "nginx:1.9", "dmcgowan/token-server:simple", "dmcgowan/token-server:oauth", "dmcgowan/malevolent:0.1.0", "dmcgowan/ncat:latest" ]
+images=[ "nginx:1.9", "dmcgowan/token-server:simple", "dmcgowan/token-server:oauth", "dmcgowan/malevolent:0.1.0" ]
 
 [[suite.pretest]]
 command="sh ./install_certs.sh /etc/generated_certs.d"
@@ -32,44 +32,18 @@ function basic_auth_version_check() {
 	fi
 }
 
-email="a@nowhere.com"
-
-# docker_t_login calls login with email depending on version
-function docker_t_login() {
-	# Only pass email field pre 1.11, no deprecation warning
-	parse_version "$GOLEM_DIND_VERSION"
-	v=$version
-	parse_version "1.11.0"
-	if [ "$v" -lt "$version" ]; then
-		run docker_t login -e $email $@
-	else
-		run docker_t login $@
-	fi
-}
-
 # login issues a login to docker to the provided server
 # uses user, password, and email variables set outside of function
 # requies bats
 function login() {
 	rm -f /root/.docker/config.json
-	docker_t_login -u $user -p $password $1
+	run docker_t login -u $user -p $password -e $email $1
 	if [ "$status" -ne 0 ]; then
 		echo $output
 	fi
 	[ "$status" -eq 0 ]
-	# Handle different deprecation warnings
-	parse_version "$GOLEM_DIND_VERSION"
-	v=$version
-	parse_version "1.11.0"
-	if [ "$v" -lt "$version" ]; then
-		# First line is WARNING about credential save or email deprecation (maybe both)
-		[ "${lines[2]}" = "Login Succeeded" -o "${lines[1]}" = "Login Succeeded" ]
-	else
-		[ "${lines[0]}" = "Login Succeeded" ]
-	fi
-
+	# First line is WARNING about credential save or email deprecation (maybe both)
+	[ "${lines[2]}" = "Login Succeeded" -o "${lines[1]}" = "Login Succeeded" ]
 }
 
 function login_oauth() {
@@ -118,7 +92,7 @@ function docker_t() {
 	docker exec dockerdaemon docker $@
 }
 
-# build creates a new docker image id from another image
+# build reates a new docker image id from another image
 function build() {
 	docker exec -i dockerdaemon docker build --no-cache -t $1 - <<DOCKERFILE
 FROM $2
@@ -23,7 +23,6 @@ install_test_certs() {
 	# For test remove CA
 	rm $1/${hostname}:5447/ca.crt
 	install_ca $1 5448
-	install_ca $1 5600
 }
 
 install_ca_file() {
@@ -31,11 +30,6 @@ install_ca_file() {
 	cp $1 $2/ca.crt
 }
 
-append_ca_file() {
-	mkdir -p $2
-	cat $1 >> $2/ca.crt
-}
-
 install_test_certs $installdir
 
 # Malevolent server
@@ -46,5 +40,4 @@ install_ca_file ./tokenserver/certs/ca.pem $installdir/$hostname:5554
 install_ca_file ./tokenserver/certs/ca.pem $installdir/$hostname:5555
 install_ca_file ./tokenserver/certs/ca.pem $installdir/$hostname:5557
 install_ca_file ./tokenserver/certs/ca.pem $installdir/$hostname:5558
-append_ca_file ./tokenserver/certs/ca.pem $installdir/$hostname:5600
 
@ -1,18 +1,18 @@
|
||||||
-----BEGIN CERTIFICATE-----
|
-----BEGIN CERTIFICATE-----
|
||||||
MIIC+TCCAeGgAwIBAgIQJMzVQNYVNTbh36kZUytWiDANBgkqhkiG9w0BAQsFADAm
|
MIIC9TCCAd+gAwIBAgIQKQTGjKpSVBW78ef0fOcxRTALBgkqhkiG9w0BAQswJjER
|
||||||
MREwDwYDVQQKEwhRdWlja1RMUzERMA8GA1UEAxMIUXVpY2tUTFMwHhcNMTgwNTIx
|
MA8GA1UEChMIUXVpY2tUTFMxETAPBgNVBAMTCFF1aWNrVExTMB4XDTE1MDgyMDIz
|
||||||
MjI1OTA2WhcNMjgwODI2MjI1OTA2WjAmMREwDwYDVQQKEwhRdWlja1RMUzERMA8G
|
MjE0OVoXDTE4MDgwNDIzMjE0OVowJjERMA8GA1UEChMIUXVpY2tUTFMxETAPBgNV
|
||||||
A1UEAxMIUXVpY2tUTFMwggEiMA0GCSqGSIb3DQEBAQUAA4IBDwAwggEKAoIBAQCe
|
BAMTCFF1aWNrVExTMIIBIjANBgkqhkiG9w0BAQEFAAOCAQ8AMIIBCgKCAQEAwoPM
|
||||||
8rEU8xHh6BMYVRz/KhFftKSxS4dxJi2LoNN4fxzY6EgHNfBACt2MhIWaUSHf2YkF
|
xiDZK6Fwy5r3waRkfJHhyZZH828Jyj+nz5UVkMyOM/xN6MgJ2w911hTj1wSXG2n3
|
||||||
NsS/T7qZWq23NEuIJYUUwbJRAh/iQsEhCI56eV+aJX+DGd2SQQNKdx1Pt528LNws
|
AohF3gTFNrDYh4j2qRZnixDrOM5GBm2/KJbyfBIYkrR45yLfjidO7MRnhaPZ5Fov
|
||||||
n8Ci8rEHTe6i2/U7n/DLqa32BWF3aShsVrchRgpizXezS7GLyFmhv0hi0zRKJgDG
|
l+RKwNBXP4Q2mUe7q9FM457Rm8hAcqXP04AJT20m1QSYQivDgxsDxuAQte3VEy1E
|
||||||
JebLeqe/BUtEOsS/Oa65NQTEO/5EZBzM74+4eRo5zyp9Uvw4edmOrXRXK1fK9gP3
|
0j0CwUKoFHT6MHOnDPEZbc4r1+ba34WBM1Sc5KXyV2JlbtU07J4hACYWVsD7vQCl
|
||||||
Fq/jz9+8b5eUd9vl0e9z/xTqMdicYZOUHuUtxM3hXAkkxcaVJqqqDe6URbJHpbaN
|
VFlZNE4E35ahMDZ+ODLal9PAT8ARLdAtjvRWrT+h8qZ4Yfwt/sGF1K4CAkTP3H5p
|
||||||
8Vt/p/csFXMWj3oSokvDAgMBAAGjIzAhMA4GA1UdDwEB/wQEAwICpDAPBgNVHRMB
|
uMkJG56zmqIEYeHMuwIDAQABoyMwITAOBgNVHQ8BAf8EBAMCAKQwDwYDVR0TAQH/
|
||||||
Af8EBTADAQH/MA0GCSqGSIb3DQEBCwUAA4IBAQCC3NiX+2Qk3WB+TRNDPCtQ7Pw+
|
BAUwAwEB/zALBgkqhkiG9w0BAQsDggEBALpieTckiPEeb3rTAWl7waDPLPOIhS5C
|
||||||
o31SSqfF8m3fevT4mdrJqFAF4qUpDwgV9/9EkU4UBoIq03S91Dk/No0jR3VAzzRA
|
XHVfOm7cPmRn3pT2VuR8y74U7a1uOkYMgJnCWb8lSXhbqC89FatLnAhKqo4I9oD8
|
||||||
h3+ul/7u08JriS/ZgVediodi7H8xeCz3nvZfAwCP2ZmHzDGp39Uhc3L3WFZImZuV
|
2BXgYeIpP5/OWBcjzmsMnowrvokc0chAmAR0Ux6AP0eX9amC0lGMuTHdw3+is0AR
|
||||||
fCDeSWF3c5CjJbdUuCYYFy6LwSFLPoBXZaNBL19XP9btJtjbNTm77PZJ4cELTQ+U
|
lhoImOUPXvgMH7W2RimpSgnX0R5wKqfuGwMfbGa0xhWBZ+wekAKcU8b+pIHDyX0c
|
||||||
r5Ofw9D9mCCYrapmprw7Fw9wdE+iLL9EJCHAj7L8UYshF4+7O7Jv3ZatySMWPbjS
|
EQcir2y8/lVjECXSAIlV6iasPQ3hm1sd0xq1hx4yrwYFvQb7yEhOXbK24HLr/20D
|
||||||
nIa2+eKl/sfvRvLZWV9dUSObVsm/bpv8bsHIKp4bYl+IDb2aoSWnw4eZQHDJ
|
RRmEOuS8gg2XtUFv66z/VOw/nUleIg9GAuWDJaiu9frmIma4/tIY4qY=
|
||||||
-----END CERTIFICATE-----
|
-----END CERTIFICATE-----
|
||||||
|
|
|
@ -1,19 +1,19 @@
|
||||||
-----BEGIN CERTIFICATE-----
|
-----BEGIN CERTIFICATE-----
|
||||||
MIIDFTCCAf2gAwIBAgIQfv/raCIVnmpXY74aUyohmDANBgkqhkiG9w0BAQsFADAm
|
MIIDETCCAfugAwIBAgIQZRKt7OeG+TlC2riszYwQQTALBgkqhkiG9w0BAQswJjER
|
||||||
MREwDwYDVQQKEwhRdWlja1RMUzERMA8GA1UEAxMIUXVpY2tUTFMwHhcNMTgwNTIx
|
MA8GA1UEChMIUXVpY2tUTFMxETAPBgNVBAMTCFF1aWNrVExTMB4XDTE1MDgyMDIz
|
||||||
MjI1OTA2WhcNMjgwODI2MjI1OTA2WjArMREwDwYDVQQKEwhRdWlja1RMUzEWMBQG
|
MjE0OVoXDTE4MDgwNDIzMjE0OVowKzERMA8GA1UEChMIUXVpY2tUTFMxFjAUBgNV
|
||||||
A1UEAxMNbG9jYWxyZWdpc3RyeTCCASIwDQYJKoZIhvcNAQEBBQADggEPADCCAQoC
|
BAMTDWxvY2FscmVnaXN0cnkwggEiMA0GCSqGSIb3DQEBAQUAA4IBDwAwggEKAoIB
|
||||||
ggEBALedGn6gB0Km693mvJ8yz89wtfDs+SGjJi+XmJv0PYe6j5uToXQH2naXXIOZ
|
AQDPdsUBStNMz4coXfQVIJIafG85VkngM4fV7hrg7AbiGLCWvq8cWOrYM50G9Wmo
|
||||||
lT9lmXd/RciZwn50aK4T6alu96D8yeLE13P+75rdrI9DWTNHsfx0jwRxUEXNazPI
|
twK1WeQ6bigYOjINgSfTxcy3adciVZIIJyXqboz6n2V0yRPWpakof939bvuAurAP
|
||||||
5Knwbf2MgGJfvHE6LjQ3FStJJ9f8JzryspIAYy5PJETuzoF7GsrUhgmcgQNqQcIx
|
tSqQ2V5fGN0ZZn4J4IbXMSovKwo7sG3X6i4q/8DYHZ/mKjvCRMPC3MGWqunknpkm
|
||||||
d81QwOnW3EHastTPIbUxQ3cbEKZMVmvsYSY60pQuw/syN7vGcR/uJQ6HsCUWTEpk
|
dzyKbIFHaDKlAqIOwTsDhHvGzm/9n3D+h4sl5ZPBobuBEV2u5GR0H5ujak4+Kczt
|
||||||
LWFNJYudYnRIJ/mb6bGJ0tJhdlXKQ9+89oiEWZp9p1KMfyXesp8HeW8Jyoa06+Ri
|
thCWtRkzCfnjW0TEanheSYJGu8OgCGoFjQnHotgqvOO6iHZCsrB3gf8WQeou+y9e
|
||||||
5U82r0oQgC0MI5AueueoNOmQyGsCAwEAAaM6MDgwDgYDVR0PAQH/BAQDAgWgMAwG
|
+OyLZv3FmqdC9SXr3b0LGQTFAgMBAAGjOjA4MA4GA1UdDwEB/wQEAwIAoDAMBgNV
|
||||||
A1UdEwEB/wQCMAAwGAYDVR0RBBEwD4INbG9jYWxyZWdpc3RyeTANBgkqhkiG9w0B
|
HRMBAf8EAjAAMBgGA1UdEQQRMA+CDWxvY2FscmVnaXN0cnkwCwYJKoZIhvcNAQEL
|
||||||
AQsFAAOCAQEAGgUESvQoD/QGZQlY2NA4sauad/yMHVo7vs5TLiKxnAfJrnP1ycD6
|
A4IBAQC/PP2Y9QVhO8t4BXML1QpNRWqXG8Gg0P1XIh6M6FoxcGIodLdbzui828YB
|
||||||
sqcbwCu6B1GU7fqGjKKgzXWXHTi4MiLi5bnh5Y2JBTABksGmzNAU1LbQJJkwsPnE
|
wm9ZlyKars+nDdgLdQWawdV7hSd6s2NeQlHYQSGLsdTAVkgIxiD7D2Tw3kAZ6Zrj
|
||||||
GBF0RgUmcw7a+4qu3TqPJABOsl+RiUQ4VDzP3DFRbyigs2li+SjLTJepahDhAke9
|
dPikoVAc+rBMm/BXQLzy95IAbBVOHOpBkOOgF+TYxeLnOc3GzbUqBi1Pq97DMaxr
|
||||||
11lU/r3pm1cov9m0AsKSHrU777Hv5B7gmyJ1FO1Os7/KnkdHKUwiIZx0VW6Ho5H+
|
DaDuywH55P/6v7qt610UIsZ6+RZ78iiRx4Q+oRxEqGT0rXI76gVxOFabbJuFr1n1
|
||||||
IiCH7iKJ1tTxe3nkwjlkSXnx7xiLOG7QK1LtTNHzBumF4COSF1kvWvIqNhJeg482
|
kEWa3u/BssJzX3KVAm7oUtaBnj2SH5fokFmvZ5lBXA4QO/5doOa8yZiFFvvQs7EY
|
||||||
e38+Kzctl5iVbrB+JWY6roTQ26VLIdlS7A==
|
SWDxLrvS33UCtsCcpPggjehnxKaC
|
||||||
-----END CERTIFICATE-----
|
-----END CERTIFICATE-----
|
||||||
|
|
|
@ -1,27 +1,27 @@
|
||||||
-----BEGIN RSA PRIVATE KEY-----
|
-----BEGIN RSA PRIVATE KEY-----
|
||||||
MIIEpQIBAAKCAQEAt50afqAHQqbr3ea8nzLPz3C18Oz5IaMmL5eYm/Q9h7qPm5Oh
|
MIIEpQIBAAKCAQEAz3bFAUrTTM+HKF30FSCSGnxvOVZJ4DOH1e4a4OwG4hiwlr6v
|
||||||
dAfadpdcg5mVP2WZd39FyJnCfnRorhPpqW73oPzJ4sTXc/7vmt2sj0NZM0ex/HSP
|
HFjq2DOdBvVpqLcCtVnkOm4oGDoyDYEn08XMt2nXIlWSCCcl6m6M+p9ldMkT1qWp
|
||||||
BHFQRc1rM8jkqfBt/YyAYl+8cTouNDcVK0kn1/wnOvKykgBjLk8kRO7OgXsaytSG
|
KH/d/W77gLqwD7UqkNleXxjdGWZ+CeCG1zEqLysKO7Bt1+ouKv/A2B2f5io7wkTD
|
||||||
CZyBA2pBwjF3zVDA6dbcQdqy1M8htTFDdxsQpkxWa+xhJjrSlC7D+zI3u8ZxH+4l
|
wtzBlqrp5J6ZJnc8imyBR2gypQKiDsE7A4R7xs5v/Z9w/oeLJeWTwaG7gRFdruRk
|
||||||
DoewJRZMSmQtYU0li51idEgn+ZvpsYnS0mF2VcpD37z2iIRZmn2nUox/Jd6ynwd5
|
dB+bo2pOPinM7bYQlrUZMwn541tExGp4XkmCRrvDoAhqBY0Jx6LYKrzjuoh2QrKw
|
||||||
bwnKhrTr5GLlTzavShCALQwjkC5656g06ZDIawIDAQABAoIBAQCw7oKJYkucvpyq
|
d4H/FkHqLvsvXvjsi2b9xZqnQvUl6929CxkExQIDAQABAoIBAQCZjCUI7NFwwxQc
|
||||||
x50bCyuVCVdJQhEPiNdTJRG5tjFUiUG4+RmrZaXugQx1A5n97TllHQ9xrjjtAd+d
|
m1UAogeglMJZJHUu+9SoUD8Sg34grvdbyqueBm1iMOkiclaOKU1W3b4eRNNmAwRy
|
||||||
XzLaQkP8rZsdGfFDpXXeFZ4irxNVhtDMJMVr0oU3vip/TCaMW1Kh8LIGGZrMwPOk
|
nEnW4km+4hX48m5PnHHijYnIIFsd0YjeT+Pf9qtdXFvGjeWq6oIjjM3dAnD50LKu
|
||||||
/S849tWeGyzycMwCRL1N8pVQl44G1aexTmlt/tjpGyQAUcGt3MtKaUhhr8mLttfL
|
KsCB2oCHQoqjXNQfftJGvt2C1oI2/WvdOR4prnGXElVfASswX4PkP5LCfLhIx+Fr
|
||||||
2r6wfZgvSqReURBMdn/bf+sMKnJrYnZLRv/iPz+YWhdk4v1OXPO3D4OlYwR8HwSo
|
7ErfaRIKigLSaAWLKaw3IlL12Q/KkuGcnzYIzIRwY4VJ64ENN6M3+KknfGovQItL
|
||||||
a9mOpPuC6lWBqzq8eCBU474aQw4FXaFwN08YkJKa4DqUrmadnd4o+ajvOIA4MdF5
|
sCxceSe61THDP9AAI3Mequm8z3H0CImOWhJCge5l7ttLLMXZXqGxDCVx+3zvqlCa
|
||||||
7OOsHQaBAoGBANcVQIM6vndN2MFwODGnF8RfeLhEf46VlANkZadOOa0/igyra865
|
X0cgGSVBAoGBAOvTN3oJJx1vnh1mRj8+hqzFq1bjm4T/Wp314QWLeo++43II4uMM
|
||||||
7IR4dREFFkSdte8bj6/iEAPeDzXgS4TRsZfr2gkhdXuc2NW4jTVeiYfWW3cgKfW+
|
5hxUlO5ViY1sKxQrGwK+9c9ddxAvm5OAFFkzgW9EhDCu0tXUb2/vAJQ93SgqbcRu
|
||||||
7BQiHXsXCDeoZ1gXq/F5RmD8ue0TkP+IclWR52AM5e1MzfAuZzaIFNJFAoGBANqL
|
coXWJpk0eNW/ouk2s1X8dzs+sCs3a4H64fEEj8yhwoyovjfucspsn7t1AoGBAOE2
|
||||||
Q925GxuDamcbuloxQUBarXPJgBDfTWUAXAJVISy80N3av45Y0gyoNjPaU7wHNtU9
|
ayLKx7CcWCiD/VGNvP7714MDst2isyq8reg8LEMmAaXR2IWWj5eGwKrImTQCsrjW
|
||||||
ppnYvM47o1W4qe9AkTtuU79T1WwXFr5T+4Ehm5I8WDHQwkzWGd+WlWkDidLWuvlx
|
P37aBp1lcWuuYRKl/WEGBy6JLNdATyUoYc1Yo+8YdenekkOtOHHJerlK3OKi3ZVp
|
||||||
ZkzwQGp3KOTJhO20lpOtCbnOa627Op/zLhCBQzLvAoGAFF4A0+x2KNoIUpkL2TfX
|
q4HJY9wzKg/wYLcbTmjjzKj+OBIZWwig73XUHwoRAoGBAJnuIrYbp1aFdvXFvnCl
|
||||||
elMIHXrvEVN8xq11KtivgYZozjZVaSgWC51UiJ4Qs8KzfccAXklr9tHKYvGwdQ1e
|
xY6c8DwlEWx8qY+V4S2XX4bYmOnkdwSxdLplU1lGqCSRyIS/pj/imdyjK4Z7LNfY
|
||||||
YeKFrSOr+l6p8eMeDBW9tE1KMAetsYW42Vc5r3RI5OxfjOoA8EbpsTl9acPWkTwc
|
sG+RORmB5a9JTgGZSqwLm5snzmXbXA7t8P7/S+6Q25baIeKMe/7SbplTT/bFk/0h
|
||||||
h5nfbSsLguMpBTt/rpxITHkCgYEAnKwwSBj25P+OXULUkuoytDcNmC+Bnxbm/hyG
|
371MtvhhVfYuZwtnL7KFuLXJAoGBAMQ3UHKYsBC8tsZd8Pf8AL07mFHKiC04Etfa
|
||||||
2ak78j2eox26LAti8m35Ba1kUCz/01myQSLPIC5DByYutXWdaHTMlyI7o5Td2i6M
|
Wb5rpri+RVM+mGITgnmnavehHHHHJAWMjPetZ3P8rSv/Ww4PVsoQoXM3Cr1jh1E9
|
||||||
5GM6i1i1hWj6kmj+/XqPvEwsFzmXq1HvnAK0u16Xs4UAxgSr2ky35zujmFXcTmTg
|
dLCfWPz4l8syIscaBYKF4wnLItXGxj3mOgoy93EjlrMaYHlILjGOv4JBM4L5WmoT
|
||||||
xjZU/YMCgYEAqF93h8WfckZxSUUMBgxTkNfu4MJlbsVBzIHv6TJY95VA49RcRYEK
|
JW7IaF6xAoGAZ4K8MwU/cAah8VinMmLGxvWWuBSgTTebuY5zN603MvFLKv5necuc
|
||||||
b7Xg+RiNQ42QGd8JBXZ50zQrIDhdd/yJ0KcytvW7WdiEEaF3ANO2QesygmI50611
|
BZfTTxD+gOnxRT6QAh++tOsbBmsgR9HmTSlQSSgw1L7cwGyXzLCDYw+5K/03KXSU
|
||||||
R76F8Bj0xnoQUCbyPuMOLRfTwEaS1jBG7TKWQXTaN0fm4DxUU0KazxU=
|
DaFdgtfcDDJO8WtjOgjyTRzEAOsqFta1ige4pIu5fTilNVMQlhts5Iw=
|
||||||
-----END RSA PRIVATE KEY-----
|
-----END RSA PRIVATE KEY-----
|
||||||
|
|
|
@@ -12,7 +12,7 @@ function setup() {
 }
 
 @test "Test malevolent proxy pass through" {
-	docker_t tag $base:latest $host/$base/nochange:latest
+	docker_t tag -f $base:latest $host/$base/nochange:latest
 	run docker_t push $host/$base/nochange:latest
 	echo $output
 	[ "$status" -eq 0 ]
@@ -26,7 +26,7 @@ function setup() {
 @test "Test malevolent image name change" {
 	imagename="$host/$base/rename"
 	image="$imagename:lastest"
-	docker_t tag $base:latest $image
+	docker_t tag -f $base:latest $image
 	run docker_t push $image
 	[ "$status" -eq 0 ]
 	has_digest "$output"
@@ -133,7 +133,7 @@ function setup() {
 	has_digest "$output"
 
 	image2="$host/$base/image2/alteredid:$poison2"
-	docker_t tag $image1 $image2
+	docker_t tag -f $image1 $image2
 	run docker_t push $image2
 	echo "$output"
 	[ "$status" -eq 0 ]
@@ -7,4 +7,3 @@ COPY registry-noauth.conf /etc/nginx/registry-noauth.conf
 COPY registry-basic.conf /etc/nginx/registry-basic.conf
 COPY test.passwd /etc/nginx/test.passwd
 COPY ssl /etc/nginx/ssl
-COPY v1 /var/www/html/v1
@@ -219,42 +219,3 @@ server {
     include registry-noauth.conf;
 }
 
-
-# V1 search test
-# Registry configured with token auth and no tls
-# TLS termination done by nginx, search results
-# served by nginx
-
-upstream docker-registry-v2-oauth {
-    server registryv2tokenoauthnotls:5000;
-}
-
-server {
-    listen 5600;
-    server_name localregistry;
-    ssl on;
-    ssl_certificate /etc/nginx/ssl/registry-ca+localregistry-cert.pem;
-    ssl_certificate_key /etc/nginx/ssl/registry-ca+localregistry-key.pem;
-
-    root /var/www/html;
-
-    client_max_body_size 0;
-    chunked_transfer_encoding on;
-    location /v2/ {
-        proxy_buffering off;
-        proxy_pass http://docker-registry-v2-oauth;
-        proxy_set_header Host $http_host; # required for docker client's sake
-        proxy_set_header X-Real-IP $remote_addr; # pass on real client's IP
-        proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
-        proxy_set_header X-Forwarded-Proto $scheme;
-        proxy_read_timeout 900;
-    }
-
-    location /v1/search {
-        if ($http_authorization !~ "Bearer [a-zA-Z0-9\._-]+") {
-            return 401;
-        }
-        try_files /v1/search.json =404;
-        add_header Content-Type application/json;
-    }
-}
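The /v1/search block that master adds above gates the static search document on a loosely well-formed Authorization header. For reference, the same check expressed as a Go middleware sketch; the regexp mirrors the nginx pattern and the handler names are hypothetical:

    package main

    import (
    	"fmt"
    	"net/http"
    	"regexp"
    )

    // bearerRE mirrors the nginx condition: a Bearer token made of the allowed characters.
    var bearerRE = regexp.MustCompile(`Bearer [a-zA-Z0-9._-]+`)

    // requireBearer rejects requests whose Authorization header does not look like a Bearer token.
    func requireBearer(next http.Handler) http.Handler {
    	return http.HandlerFunc(func(w http.ResponseWriter, r *http.Request) {
    		if !bearerRE.MatchString(r.Header.Get("Authorization")) {
    			w.WriteHeader(http.StatusUnauthorized)
    			return
    		}
    		next.ServeHTTP(w, r)
    	})
    }

    func main() {
    	fmt.Println(bearerRE.MatchString("Bearer abc.def-123")) // true
    	fmt.Println(bearerRE.MatchString("Basic Zm9vOmJhcg==")) // false
    }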
@ -1,18 +1,29 @@
|
||||||
-----BEGIN CERTIFICATE-----
|
-----BEGIN CERTIFICATE-----
|
||||||
MIIC+TCCAeGgAwIBAgIQVhmtXJ4fG4BkISUkyZ65ITANBgkqhkiG9w0BAQsFADAm
|
MIIE9TCCAt+gAwIBAgIQMsdPWoLAso/tIOvLk8R/sDALBgkqhkiG9w0BAQswJjER
|
||||||
MREwDwYDVQQKEwhRdWlja1RMUzERMA8GA1UEAxMIUXVpY2tUTFMwHhcNMTgwNTIx
|
MA8GA1UEChMIUXVpY2tUTFMxETAPBgNVBAMTCFF1aWNrVExTMB4XDTE1MDUyNjIw
|
||||||
MjI1MjMwWhcNMjgwODI2MjI1MjMwWjAmMREwDwYDVQQKEwhRdWlja1RMUzERMA8G
|
NTQwMVoXDTE4MDUxMDIwNTQwMVowJjERMA8GA1UEChMIUXVpY2tUTFMxETAPBgNV
|
||||||
A1UEAxMIUXVpY2tUTFMwggEiMA0GCSqGSIb3DQEBAQUAA4IBDwAwggEKAoIBAQDK
|
BAMTCFF1aWNrVExTMIICIjANBgkqhkiG9w0BAQEFAAOCAg8AMIICCgKCAgEA1YeX
|
||||||
J/SLv0dL7UXaNSEAdTMV8+rOFMcQNov/xLWa1mO+7zNZXHIdM+i1uQTHTdhuta6R
|
GTvXPKlWA2lMbCvIGB9JYld/otf8aqs6euVJK1f09ngj5b6VoVlI8o1ScVcHKlKx
|
||||||
wfqkruPMZ9sqK7G9UIPi11ynkdTiZKRCvCr2VMc/uf5WuIsZE1JXXknSNee1TMmV
|
BGfPMThnM7fiEmsfDSPuCIlGmTqR0t4t9dHRnLBGbZmR8JdAs7LKpP+PFYu0JTIT
|
||||||
Je8TUJsRjEyQDbxn5qUAJLi8yj/O7W8wsnVHdySKMbaLN6v75151TxiIuOoncCHQ
|
wFcjXIs+45cIF2HpsYY6zkj0bmNsyYmT1U1BTW+qqmhvc0Jkr+ikElOQ93Pn7zIO
|
||||||
yzz10DzjXfXYajuheu+MLy/rjNGDj0gys4yQZAHlQWY9Lsiiix9rBdXQjVc3q2QT
|
cXtxdERdzdzXY5cfL3CCaoJDgXOsKPQfYrCi5Zl6sLZVBkIc6Q2fErSIjTp45+NY
|
||||||
VM5v3pMjXcPweaIbTWJnbOgmy+267kX6kQpUfZRE55dQt6mPtPQ2idPvqPP3TXwa
|
AjiOxfUT0MOFtA0/HzYvVp3gTNPGEWM3dF1hwzCqJ32odbw/3TiFCEeC1B82p1sR
|
||||||
AFH39cz/pPifIZApDfZFAgMBAAGjIzAhMA4GA1UdDwEB/wQEAwICpDAPBgNVHRMB
|
sgoFZ6Vbfy9fMhB5S7BBtbqF09Yq/PMM3drOvWIxMF4aOY55ilrtKVwmnckiB0mE
|
||||||
Af8EBTADAQH/MA0GCSqGSIb3DQEBCwUAA4IBAQB93GckXcLcfNdg9C0xMkvByPQJ
|
CPOColUUyiWIwwvp82InYsX5ekfS4x1mX1iz8zQEuTF5QHdKiUfd4A33ZMf0Ve6p
|
||||||
dcy0GT991eZ/bNC39AXrmCSfn6a1FRlWoiCOSOW1NIZWQQ7jDep/T585vq2jN7KX
|
y9SaMmos99uVQMzWlwj7nVACXjb9Ee6MY/ePRl7Z2gBxEYV41SGFRg8LNkQ//fYk
|
||||||
hT/z3iIdNWR+Amvo4pyJ93u2D3uG/bmmguAr62jyIgrJudQ3+Mnd+bj/J33XzAgc
|
o2vJ4Bp4aOh/O3ZQNv1eqEDmf/Su5lYCzURyQ2srcRRdwpteDPX+NHYn2d07knHN
|
||||||
d4ZGPvCmKtn8cTKzyS8rjy1oPSUm6pZnfk41MgMWrGuS5HkC3Aa7jo/4RdgGOJpm
|
NQvOJn6EkcsDbgp0vSr6mFDv2GZWkTOAd8jZyrcErrLHAxRNm0Va+CEIKLhswf1G
|
||||||
nUdz2FGfW/+gwXRy2e94V7ijjz+YwpzL0wHPyXyAm7GwJ7mfvPOZrQOLLw4Z9OaK
|
Y2kFkPL1otI8OSDvdJSjZ2GjRSwXhM2Mf3PzfAkCAwEAAaMjMCEwDgYDVR0PAQH/
|
||||||
R76t4NZBo5TmtvW5zQVsv3sPRnuqcmR0q6WR/fEuMafVtRVOVuDrZlSy0EtA
|
BAQDAgCkMA8GA1UdEwEB/wQFMAMBAf8wCwYJKoZIhvcNAQELA4ICAQDBxOHKnF9z
|
||||||
|
PZWPNKDRmBPtmnU2IHh6JJ9HzqGALJJbBU0MUSD/aLBBkYeS0YSHgYZ1hXLsfuRU
|
||||||
|
lm/czV41hU1FTDqS2fFpcAAGH+6/rwyfrz+GYr2K4b/ijCwOMbMrDWO54zqZT3KU
|
||||||
|
GFBpkrh4fNyKdgUNJsy0Q0it3gOGSUmLvEQUzqxPFVz7h/pF/Cecr0/kpjbpsxna
|
||||||
|
XQkhtDyKDIQfPCq8Ci1vox5WvBbBkdzDtyCm+KSb6VC3pCX6LV5NkS7YM7mtscTi
|
||||||
|
QdYfLbKX05kUVG2R9SShJn5BSXzGk9M5FR5koGY0lMHwmJqaOqazXjqa1jR7UNDK
|
||||||
|
UyExHIXSqJ+nCf4bChEsaC1uwu3Gr7PfP41Zb2U3Raf8UmFnbz6Hx0sS4zBvyJ5w
|
||||||
|
Ntemve4M1mB7++oLZ4PkuwK82SkQ8YK0z+lGJQRjg/HP3fVETV8TlIPJAvg7bRnH
|
||||||
|
sMrLb/V+K6iY+08kQ2rpU02itRjKnU/DLoha4KVjafY8eIcIR2lpwrYjx+KYpkcF
|
||||||
|
AMEC7MnuzhyUfDL++GO6XGwRnx2E54MnKtkrECObMSzwuLysPmjhrEUH6YR7zGib
|
||||||
|
KmN6vQkA4s5053R+Tu0k1JGaw90SfvcW4bxGcFjU4Kg0KqlY1y8tnt+ZiHmK0naA
|
||||||
|
KauB3KY1NiL+Ng5DCzNdkwDkWH78ZguI2w==
|
||||||
-----END CERTIFICATE-----
|
-----END CERTIFICATE-----
|
||||||
|
|
|
@@ -1,18 +1,29 @@
[test certificate fixture replaced; PEM contents omitted]
@@ -1,27 +1,51 @@
[test RSA private key fixture replaced; PEM contents omitted]
@@ -1,19 +1,29 @@
[test certificate fixture replaced; PEM contents omitted]
@@ -1,27 +1,51 @@
[test RSA private key fixture replaced; PEM contents omitted]
@@ -1,19 +1,30 @@
[test certificate fixture replaced; PEM contents omitted]
@@ -1,27 +1,51 @@
[test RSA private key fixture replaced; PEM contents omitted]
@@ -1,18 +1,29 @@
[test certificate fixture replaced; PEM contents omitted]
@@ -1,27 +1,51 @@
[test RSA private key fixture replaced; PEM contents omitted]
@@ -1,19 +1,29 @@
[test certificate fixture replaced; PEM contents omitted]
@@ -1,27 +1,51 @@
[test RSA private key fixture replaced; PEM contents omitted]
@@ -1,19 +1,30 @@
[test certificate fixture replaced; PEM contents omitted]
@@ -1,27 +1,51 @@
[test RSA private key fixture replaced; PEM contents omitted]
@@ -1 +0,0 @@
-{"num_pages":1,"num_results":2,"page":1,"page_size": 25,"query":"testsearch","results":[{"description":"","is_automated":false,"is_official":false,"is_trusted":false, "name":"dmcgowan/testsearch-1","star_count":1000},{"description":"Some automated build","is_automated":true,"is_official":false,"is_trusted":false,"name":"dmcgowan/testsearch-2","star_count":10}]}
@@ -1,103 +0,0 @@
-#!/usr/bin/env bats
-
-# This tests pushing and pulling plugins
-
-load helpers
-
-user="testuser"
-password="testpassword"
-base="hello-world"
-
-#TODO: Create plugin image
-function create_plugin() {
-    plugindir=$(mktemp -d)
-
-    cat - > $plugindir/config.json <<CONFIGJSON
-{
-    "manifestVersion": "v0",
-    "description": "A test plugin for integration tests",
-    "entrypoint": ["/usr/bin/ncat", "-l", "-U", "//run/docker/plugins/plugin.sock"],
-    "interface" : {
-        "types": ["docker.volumedriver/1.0"],
-        "socket": "plugin.sock"
-    }
-}
-CONFIGJSON
-
-    cid=$(docker create dmcgowan/ncat:latest /bin/sh)
-
-    mkdir $plugindir/rootfs
-
-    docker export $cid | tar -x -C $plugindir/rootfs
-
-    docker rm $cid
-
-    daemontmp=$(docker exec dockerdaemon mktemp -d)
-
-    tar -c -C $plugindir . | docker exec -i dockerdaemon tar -x -C $daemontmp
-
-    docker exec dockerdaemon docker plugin create $1 $daemontmp
-
-    docker exec dockerdaemon rm -rf $daemontmp
-
-    rm -rf $plugindir
-}
-
-@test "Test plugin push and pull" {
-    version_check docker "$GOLEM_DIND_VERSION" "1.13.0-rc3"
-    version_check docker "$GOLEM_DISTRIBUTION_VERSION" "2.6.0"
-
-    login_oauth localregistry:5558
-    image="localregistry:5558/testuser/plugin1"
-
-    create_plugin $image
-
-    run docker_t plugin push $image
-    echo $output
-    [ "$status" -eq 0 ]
-
-    docker_t plugin rm $image
-
-    docker_t plugin install --grant-all-permissions $image
-}
-
-@test "Test plugin push and failed image pull" {
-    version_check docker "$GOLEM_DIND_VERSION" "1.13.0-rc3"
-    version_check docker "$GOLEM_DISTRIBUTION_VERSION" "2.6.0"
-
-
-    login_oauth localregistry:5558
-    image="localregistry:5558/testuser/plugin-not-image"
-
-    create_plugin $image
-
-    run docker_t plugin push $image
-    echo $output
-    [ "$status" -eq 0 ]
-
-    docker_t plugin rm $image
-
-    run docker_t pull $image
-
-    [ "$status" -ne 0 ]
-}
-
-@test "Test image push and failed plugin pull" {
-    version_check docker "$GOLEM_DIND_VERSION" "1.13.0-rc3"
-    version_check docker "$GOLEM_DISTRIBUTION_VERSION" "2.6.0"
-
-    login_oauth localregistry:5558
-    image="localregistry:5558/testuser/image-not-plugin"
-
-    build $image "$base:latest"
-
-    run docker_t push $image
-    echo $output
-    [ "$status" -eq 0 ]
-
-    docker_t rmi $image
-
-    run docker_t plugin install --grant-all-permissions $image
-
-    [ "$status" -ne 0 ]
-}
@@ -46,6 +46,7 @@ echo "Testing image $distimage with distribution version $distversion"
# These images are defined in golem.conf
time docker pull nginx:1.9
time docker pull golang:1.6
+time docker pull registry:0.9.1
time docker pull dmcgowan/token-server:simple
time docker pull dmcgowan/token-server:oauth
time docker pull distribution/golem-runner:0.1-bats
@@ -53,15 +54,11 @@
time docker pull docker:1.9.1-dind
time docker pull docker:1.10.3-dind
time docker pull docker:1.11.1-dind
-time docker pull docker:1.12.3-dind
-time docker pull docker:1.13.0-rc5-dind

golem -cache $cachedir \
    -i "golem-distribution:latest,$distimage,$distversion" \
    -i "golem-dind:latest,docker:1.9.1-dind,1.9.1" \
    -i "golem-dind:latest,docker:1.10.3-dind,1.10.3" \
    -i "golem-dind:latest,docker:1.11.1-dind,1.11.1" \
-    -i "golem-dind:latest,docker:1.12.3-dind,1.12.3" \
-    -i "golem-dind:latest,docker:1.13.0-rc5-dind,1.13.0" \
    $DIR
@@ -12,13 +12,14 @@ image="${base}:latest"
 # Login information, should match values in nginx/test.passwd
 user=${TEST_USER:-"testuser"}
 password=${TEST_PASSWORD:-"passpassword"}
+email="distribution@docker.com"

 function setup() {
 	tempImage $image
 }

 @test "Test valid certificates" {
-	docker_t tag $image $hostname:5440/$image
+	docker_t tag -f $image $hostname:5440/$image
 	run docker_t push $hostname:5440/$image
 	[ "$status" -eq 0 ]
 	has_digest "$output"
@@ -27,7 +28,7 @@ function setup() {
 @test "Test basic auth" {
 	basic_auth_version_check
 	login $hostname:5441
-	docker_t tag $image $hostname:5441/$image
+	docker_t tag -f $image $hostname:5441/$image
 	run docker_t push $hostname:5441/$image
 	[ "$status" -eq 0 ]
 	has_digest "$output"
@@ -59,14 +60,14 @@ function setup() {
 }

 @test "Test TLS client auth" {
-	docker_t tag $image $hostname:5442/$image
+	docker_t tag -f $image $hostname:5442/$image
 	run docker_t push $hostname:5442/$image
 	[ "$status" -eq 0 ]
 	has_digest "$output"
 }

 @test "Test TLS client with invalid certificate authority fails" {
-	docker_t tag $image $hostname:5443/$image
+	docker_t tag -f $image $hostname:5443/$image
 	run docker_t push $hostname:5443/$image
 	[ "$status" -ne 0 ]
 }
@@ -74,14 +75,14 @@ function setup() {
 @test "Test basic auth with TLS client auth" {
 	basic_auth_version_check
 	login $hostname:5444
-	docker_t tag $image $hostname:5444/$image
+	docker_t tag -f $image $hostname:5444/$image
 	run docker_t push $hostname:5444/$image
 	[ "$status" -eq 0 ]
 	has_digest "$output"
 }

 @test "Test unknown certificate authority fails" {
-	docker_t tag $image $hostname:5445/$image
+	docker_t tag -f $image $hostname:5445/$image
 	run docker_t push $hostname:5445/$image
 	[ "$status" -ne 0 ]
 }
@@ -89,19 +90,19 @@ function setup() {
 @test "Test basic auth with unknown certificate authority fails" {
 	run login $hostname:5446
 	[ "$status" -ne 0 ]
-	docker_t tag $image $hostname:5446/$image
+	docker_t tag -f $image $hostname:5446/$image
 	run docker_t push $hostname:5446/$image
 	[ "$status" -ne 0 ]
 }

 @test "Test TLS client auth to server with unknown certificate authority fails" {
-	docker_t tag $image $hostname:5447/$image
+	docker_t tag -f $image $hostname:5447/$image
 	run docker_t push $hostname:5447/$image
 	[ "$status" -ne 0 ]
 }

 @test "Test failure to connect to server fails to fallback to SSLv3" {
-	docker_t tag $image $hostname:5448/$image
+	docker_t tag -f $image $hostname:5448/$image
 	run docker_t push $hostname:5448/$image
 	[ "$status" -ne 0 ]
 }
@@ -6,17 +6,23 @@ load helpers

 user="testuser"
 password="testpassword"
+email="a@nowhere.com"
 base="hello-world"

 @test "Test token server login" {
-	login localregistry:5554
+	run docker_t login -u $user -p $password -e $email localregistry:5554
+	echo $output
+	[ "$status" -eq 0 ]
+
+	# First line is WARNING about credential save or email deprecation
+	[ "${lines[2]}" = "Login Succeeded" -o "${lines[1]}" = "Login Succeeded" ]
 }

 @test "Test token server bad login" {
-	docker_t_login -u "testuser" -p "badpassword" localregistry:5554
+	run docker_t login -u "testuser" -p "badpassword" -e $email localregistry:5554
 	[ "$status" -ne 0 ]

-	docker_t_login -u "baduser" -p "testpassword" localregistry:5554
+	run docker_t login -u "baduser" -p "testpassword" -e $email localregistry:5554
 	[ "$status" -ne 0 ]
 }

@@ -52,10 +58,10 @@ base="hello-world"
 @test "Test oauth token server bad login" {
 	version_check docker "$GOLEM_DIND_VERSION" "1.11.0"

-	docker_t_login -u "testuser" -p "badpassword" -e $email localregistry:5557
+	run docker_t login -u "testuser" -p "badpassword" -e $email localregistry:5557
 	[ "$status" -ne 0 ]

-	docker_t_login -u "baduser" -p "testpassword" -e $email localregistry:5557
+	run docker_t login -u "baduser" -p "testpassword" -e $email localregistry:5557
 	[ "$status" -ne 0 ]
 }

@@ -111,19 +117,3 @@ base="hello-world"
 	run docker_t push $image
 	[ "$status" -ne 0 ]
 }
-
-@test "Test oauth with v1 search" {
-	version_check docker "$GOLEM_DIND_VERSION" "1.12.0"
-
-	run docker_t search localregistry:5600/testsearch
-	[ "$status" -ne 0 ]
-
-	login_oauth localregistry:5600
-
-	run docker_t search localregistry:5600/testsearch
-	echo $output
-	[ "$status" -eq 0 ]
-
-	echo $output | grep "testsearch-1"
-	echo $output | grep "testsearch-2"
-}
@@ -1,4 +1,4 @@
-FROM dmcgowan/token-server@sha256:5a6f76d3086cdf63249c77b521108387b49d85a30c5e1c4fe82fdf5ae3b76ba7
+FROM dmcgowan/token-server:oauth

 WORKDIR /

@@ -1,18 +0,0 @@
-version: 0.1
-loglevel: debug
-storage:
-  cache:
-    blobdescriptor: inmemory
-  filesystem:
-    rootdirectory: /tmp/registry-dev
-http:
-  addr: 0.0.0.0:5000
-compatibility:
-  schema1:
-    enabled: true
-auth:
-  token:
-    realm: "https://auth.localregistry:5559/token/"
-    issuer: "registry-test"
-    service: "registry-test"
-    rootcertbundle: "/etc/docker/registry/tokenbundle.pem"
@@ -10,9 +10,6 @@ http:
   tls:
     certificate: "/etc/docker/registry/localregistry.cert"
     key: "/etc/docker/registry/localregistry.key"
-compatibility:
-  schema1:
-    enabled: true
 auth:
   token:
     realm: "https://auth.localregistry:5559/token/"
@@ -1,4 +1,4 @@
-FROM dmcgowan/token-server@sha256:0eab50ebdff5b6b95b3addf4edbd8bd2f5b940f27b41b43c94afdf05863a81af
+FROM dmcgowan/token-server:simple

 WORKDIR /

@@ -10,9 +10,6 @@ http:
   tls:
     certificate: "/etc/docker/registry/localregistry.cert"
     key: "/etc/docker/registry/localregistry.key"
-compatibility:
-  schema1:
-    enabled: true
 auth:
   token:
     realm: "https://auth.localregistry:5556/token/"
@ -1,7 +1,6 @@
|
||||||
package main
|
package main
|
||||||
|
|
||||||
import (
|
import (
|
||||||
"context"
|
|
||||||
"encoding/json"
|
"encoding/json"
|
||||||
"flag"
|
"flag"
|
||||||
"math/rand"
|
"math/rand"
|
||||||
|
@ -10,17 +9,13 @@ import (
|
||||||
"strings"
|
"strings"
|
||||||
"time"
|
"time"
|
||||||
|
|
||||||
dcontext "github.com/docker/distribution/context"
|
"github.com/Sirupsen/logrus"
|
||||||
|
"github.com/docker/distribution/context"
|
||||||
"github.com/docker/distribution/registry/api/errcode"
|
"github.com/docker/distribution/registry/api/errcode"
|
||||||
"github.com/docker/distribution/registry/auth"
|
"github.com/docker/distribution/registry/auth"
|
||||||
_ "github.com/docker/distribution/registry/auth/htpasswd"
|
_ "github.com/docker/distribution/registry/auth/htpasswd"
|
||||||
"github.com/docker/libtrust"
|
"github.com/docker/libtrust"
|
||||||
"github.com/gorilla/mux"
|
"github.com/gorilla/mux"
|
||||||
"github.com/sirupsen/logrus"
|
|
||||||
)
|
|
||||||
|
|
||||||
var (
|
|
||||||
enforceRepoClass bool
|
|
||||||
)
|
)
|
||||||
|
|
||||||
func main() {
|
func main() {
|
||||||
|
@ -49,8 +44,6 @@ func main() {
|
||||||
flag.StringVar(&cert, "tlscert", "", "Certificate file for TLS")
|
flag.StringVar(&cert, "tlscert", "", "Certificate file for TLS")
|
||||||
flag.StringVar(&certKey, "tlskey", "", "Certificate key for TLS")
|
flag.StringVar(&certKey, "tlskey", "", "Certificate key for TLS")
|
||||||
|
|
||||||
flag.BoolVar(&enforceRepoClass, "enforce-class", false, "Enforce policy for single repository class")
|
|
||||||
|
|
||||||
flag.Parse()
|
flag.Parse()
|
||||||
|
|
||||||
if debug {
|
if debug {
|
||||||
|
@ -86,7 +79,7 @@ func main() {
|
||||||
// TODO: Make configurable
|
// TODO: Make configurable
|
||||||
issuer.Expiration = 15 * time.Minute
|
issuer.Expiration = 15 * time.Minute
|
||||||
|
|
||||||
ctx := dcontext.Background()
|
ctx := context.Background()
|
||||||
|
|
||||||
ts := &tokenServer{
|
ts := &tokenServer{
|
||||||
issuer: issuer,
|
issuer: issuer,
|
||||||
|
@ -116,23 +109,23 @@ func main() {
|
||||||
// request context from a base context.
|
// request context from a base context.
|
||||||
func handlerWithContext(ctx context.Context, handler func(context.Context, http.ResponseWriter, *http.Request)) http.Handler {
|
func handlerWithContext(ctx context.Context, handler func(context.Context, http.ResponseWriter, *http.Request)) http.Handler {
|
||||||
return http.HandlerFunc(func(w http.ResponseWriter, r *http.Request) {
|
return http.HandlerFunc(func(w http.ResponseWriter, r *http.Request) {
|
||||||
ctx := dcontext.WithRequest(ctx, r)
|
ctx := context.WithRequest(ctx, r)
|
||||||
logger := dcontext.GetRequestLogger(ctx)
|
logger := context.GetRequestLogger(ctx)
|
||||||
ctx = dcontext.WithLogger(ctx, logger)
|
ctx = context.WithLogger(ctx, logger)
|
||||||
|
|
||||||
handler(ctx, w, r)
|
handler(ctx, w, r)
|
||||||
})
|
})
|
||||||
}
|
}
|
||||||
|
|
||||||
func handleError(ctx context.Context, err error, w http.ResponseWriter) {
|
func handleError(ctx context.Context, err error, w http.ResponseWriter) {
|
||||||
ctx, w = dcontext.WithResponseWriter(ctx, w)
|
ctx, w = context.WithResponseWriter(ctx, w)
|
||||||
|
|
||||||
if serveErr := errcode.ServeJSON(w, err); serveErr != nil {
|
if serveErr := errcode.ServeJSON(w, err); serveErr != nil {
|
||||||
dcontext.GetResponseLogger(ctx).Errorf("error sending error response: %v", serveErr)
|
context.GetResponseLogger(ctx).Errorf("error sending error response: %v", serveErr)
|
||||||
return
|
return
|
||||||
}
|
}
|
||||||
|
|
||||||
dcontext.GetResponseLogger(ctx).Info("application error")
|
context.GetResponseLogger(ctx).Info("application error")
|
||||||
}
|
}
|
||||||
|
|
||||||
var refreshCharacters = []rune("0123456789abcdefghijklmnopqrstuvwxyzABCDEFGHIJKLMNOPQRSTUVWXYZ")
|
var refreshCharacters = []rune("0123456789abcdefghijklmnopqrstuvwxyzABCDEFGHIJKLMNOPQRSTUVWXYZ")
|
||||||
|
@ -164,37 +157,18 @@ type tokenResponse struct {
|
||||||
ExpiresIn int `json:"expires_in,omitempty"`
|
ExpiresIn int `json:"expires_in,omitempty"`
|
||||||
}
|
}
|
||||||
|
|
||||||
var repositoryClassCache = map[string]string{}
|
|
||||||
|
|
||||||
func filterAccessList(ctx context.Context, scope string, requestedAccessList []auth.Access) []auth.Access {
|
func filterAccessList(ctx context.Context, scope string, requestedAccessList []auth.Access) []auth.Access {
|
||||||
if !strings.HasSuffix(scope, "/") {
|
if !strings.HasSuffix(scope, "/") {
|
||||||
scope = scope + "/"
|
scope = scope + "/"
|
||||||
}
|
}
|
||||||
grantedAccessList := make([]auth.Access, 0, len(requestedAccessList))
|
grantedAccessList := make([]auth.Access, 0, len(requestedAccessList))
|
||||||
for _, access := range requestedAccessList {
|
for _, access := range requestedAccessList {
|
||||||
if access.Type == "repository" {
|
if access.Type != "repository" {
|
||||||
if !strings.HasPrefix(access.Name, scope) {
|
context.GetLogger(ctx).Debugf("Skipping unsupported resource type: %s", access.Type)
|
||||||
dcontext.GetLogger(ctx).Debugf("Resource scope not allowed: %s", access.Name)
|
continue
|
||||||
continue
|
}
|
||||||
}
|
if !strings.HasPrefix(access.Name, scope) {
|
||||||
if enforceRepoClass {
|
context.GetLogger(ctx).Debugf("Resource scope not allowed: %s", access.Name)
|
||||||
if class, ok := repositoryClassCache[access.Name]; ok {
|
|
||||||
if class != access.Class {
|
|
||||||
dcontext.GetLogger(ctx).Debugf("Different repository class: %q, previously %q", access.Class, class)
|
|
||||||
continue
|
|
||||||
}
|
|
||||||
} else if strings.EqualFold(access.Action, "push") {
|
|
||||||
repositoryClassCache[access.Name] = access.Class
|
|
||||||
}
|
|
||||||
}
|
|
||||||
} else if access.Type == "registry" {
|
|
||||||
if access.Name != "catalog" {
|
|
||||||
dcontext.GetLogger(ctx).Debugf("Unknown registry resource: %s", access.Name)
|
|
||||||
continue
|
|
||||||
}
|
|
||||||
// TODO: Limit some actions to "admin" users
|
|
||||||
} else {
|
|
||||||
dcontext.GetLogger(ctx).Debugf("Skipping unsupported resource type: %s", access.Type)
|
|
||||||
continue
|
continue
|
||||||
}
|
}
|
||||||
grantedAccessList = append(grantedAccessList, access)
|
grantedAccessList = append(grantedAccessList, access)
|
||||||
|
@ -202,22 +176,10 @@ func filterAccessList(ctx context.Context, scope string, requestedAccessList []a
|
||||||
return grantedAccessList
|
return grantedAccessList
|
||||||
}
|
}
|
||||||
|
|
||||||
type acctSubject struct{}
|
|
||||||
|
|
||||||
func (acctSubject) String() string { return "acctSubject" }
|
|
||||||
|
|
||||||
type requestedAccess struct{}
|
|
||||||
|
|
||||||
func (requestedAccess) String() string { return "requestedAccess" }
|
|
||||||
|
|
||||||
type grantedAccess struct{}
|
|
||||||
|
|
||||||
func (grantedAccess) String() string { return "grantedAccess" }
|
|
||||||
|
|
||||||
// getToken handles authenticating the request and authorizing access to the
|
// getToken handles authenticating the request and authorizing access to the
|
||||||
// requested scopes.
|
// requested scopes.
|
||||||
func (ts *tokenServer) getToken(ctx context.Context, w http.ResponseWriter, r *http.Request) {
|
func (ts *tokenServer) getToken(ctx context.Context, w http.ResponseWriter, r *http.Request) {
|
||||||
dcontext.GetLogger(ctx).Info("getToken")
|
context.GetLogger(ctx).Info("getToken")
|
||||||
|
|
||||||
params := r.URL.Query()
|
params := r.URL.Query()
|
||||||
service := params.Get("service")
|
service := params.Get("service")
|
||||||
|
@ -243,30 +205,30 @@ func (ts *tokenServer) getToken(ctx context.Context, w http.ResponseWriter, r *h
|
||||||
}
|
}
|
||||||
|
|
||||||
// Get response context.
|
// Get response context.
|
||||||
ctx, w = dcontext.WithResponseWriter(ctx, w)
|
ctx, w = context.WithResponseWriter(ctx, w)
|
||||||
|
|
||||||
challenge.SetHeaders(r, w)
|
challenge.SetHeaders(w)
|
||||||
handleError(ctx, errcode.ErrorCodeUnauthorized.WithDetail(challenge.Error()), w)
|
handleError(ctx, errcode.ErrorCodeUnauthorized.WithDetail(challenge.Error()), w)
|
||||||
|
|
||||||
dcontext.GetResponseLogger(ctx).Info("get token authentication challenge")
|
context.GetResponseLogger(ctx).Info("get token authentication challenge")
|
||||||
|
|
||||||
return
|
return
|
||||||
}
|
}
|
||||||
ctx = authorizedCtx
|
ctx = authorizedCtx
|
||||||
|
|
||||||
username := dcontext.GetStringValue(ctx, "auth.user.name")
|
username := context.GetStringValue(ctx, "auth.user.name")
|
||||||
|
|
||||||
ctx = context.WithValue(ctx, acctSubject{}, username)
|
ctx = context.WithValue(ctx, "acctSubject", username)
|
||||||
ctx = dcontext.WithLogger(ctx, dcontext.GetLogger(ctx, acctSubject{}))
|
ctx = context.WithLogger(ctx, context.GetLogger(ctx, "acctSubject"))
|
||||||
|
|
||||||
dcontext.GetLogger(ctx).Info("authenticated client")
|
context.GetLogger(ctx).Info("authenticated client")
|
||||||
|
|
||||||
ctx = context.WithValue(ctx, requestedAccess{}, requestedAccessList)
|
ctx = context.WithValue(ctx, "requestedAccess", requestedAccessList)
|
||||||
ctx = dcontext.WithLogger(ctx, dcontext.GetLogger(ctx, requestedAccess{}))
|
ctx = context.WithLogger(ctx, context.GetLogger(ctx, "requestedAccess"))
|
||||||
|
|
||||||
grantedAccessList := filterAccessList(ctx, username, requestedAccessList)
|
grantedAccessList := filterAccessList(ctx, username, requestedAccessList)
|
||||||
ctx = context.WithValue(ctx, grantedAccess{}, grantedAccessList)
|
ctx = context.WithValue(ctx, "grantedAccess", grantedAccessList)
|
||||||
ctx = dcontext.WithLogger(ctx, dcontext.GetLogger(ctx, grantedAccess{}))
|
ctx = context.WithLogger(ctx, context.GetLogger(ctx, "grantedAccess"))
|
||||||
|
|
||||||
token, err := ts.issuer.CreateJWT(username, service, grantedAccessList)
|
token, err := ts.issuer.CreateJWT(username, service, grantedAccessList)
|
||||||
if err != nil {
|
if err != nil {
|
||||||
|
@ -274,7 +236,7 @@ func (ts *tokenServer) getToken(ctx context.Context, w http.ResponseWriter, r *h
|
||||||
return
|
return
|
||||||
}
|
}
|
||||||
|
|
||||||
dcontext.GetLogger(ctx).Info("authorized client")
|
context.GetLogger(ctx).Info("authorized client")
|
||||||
|
|
||||||
response := tokenResponse{
|
response := tokenResponse{
|
||||||
Token: token,
|
Token: token,
|
||||||
|
@ -289,12 +251,12 @@ func (ts *tokenServer) getToken(ctx context.Context, w http.ResponseWriter, r *h
|
||||||
}
|
}
|
||||||
}
|
}
|
||||||
|
|
||||||
ctx, w = dcontext.WithResponseWriter(ctx, w)
|
ctx, w = context.WithResponseWriter(ctx, w)
|
||||||
|
|
||||||
w.Header().Set("Content-Type", "application/json")
|
w.Header().Set("Content-Type", "application/json")
|
||||||
json.NewEncoder(w).Encode(response)
|
json.NewEncoder(w).Encode(response)
|
||||||
|
|
||||||
dcontext.GetResponseLogger(ctx).Info("get token complete")
|
context.GetResponseLogger(ctx).Info("get token complete")
|
||||||
}
|
}
|
||||||
|
|
||||||
type postTokenResponse struct {
|
type postTokenResponse struct {
|
||||||
|
@ -378,17 +340,17 @@ func (ts *tokenServer) postToken(ctx context.Context, w http.ResponseWriter, r *
|
||||||
return
|
return
|
||||||
}
|
}
|
||||||
|
|
||||||
ctx = context.WithValue(ctx, acctSubject{}, subject)
|
ctx = context.WithValue(ctx, "acctSubject", subject)
|
||||||
ctx = dcontext.WithLogger(ctx, dcontext.GetLogger(ctx, acctSubject{}))
|
ctx = context.WithLogger(ctx, context.GetLogger(ctx, "acctSubject"))
|
||||||
|
|
||||||
dcontext.GetLogger(ctx).Info("authenticated client")
|
context.GetLogger(ctx).Info("authenticated client")
|
||||||
|
|
||||||
ctx = context.WithValue(ctx, requestedAccess{}, requestedAccessList)
|
ctx = context.WithValue(ctx, "requestedAccess", requestedAccessList)
|
||||||
ctx = dcontext.WithLogger(ctx, dcontext.GetLogger(ctx, requestedAccess{}))
|
ctx = context.WithLogger(ctx, context.GetLogger(ctx, "requestedAccess"))
|
||||||
|
|
||||||
grantedAccessList := filterAccessList(ctx, subject, requestedAccessList)
|
grantedAccessList := filterAccessList(ctx, subject, requestedAccessList)
|
||||||
ctx = context.WithValue(ctx, grantedAccess{}, grantedAccessList)
|
ctx = context.WithValue(ctx, "grantedAccess", grantedAccessList)
|
||||||
ctx = dcontext.WithLogger(ctx, dcontext.GetLogger(ctx, grantedAccess{}))
|
ctx = context.WithLogger(ctx, context.GetLogger(ctx, "grantedAccess"))
|
||||||
|
|
||||||
token, err := ts.issuer.CreateJWT(subject, service, grantedAccessList)
|
token, err := ts.issuer.CreateJWT(subject, service, grantedAccessList)
|
||||||
if err != nil {
|
if err != nil {
|
||||||
|
@ -396,7 +358,7 @@ func (ts *tokenServer) postToken(ctx context.Context, w http.ResponseWriter, r *
|
||||||
return
|
return
|
||||||
}
|
}
|
||||||
|
|
||||||
dcontext.GetLogger(ctx).Info("authorized client")
|
context.GetLogger(ctx).Info("authorized client")
|
||||||
|
|
||||||
response := postTokenResponse{
|
response := postTokenResponse{
|
||||||
Token: token,
|
Token: token,
|
||||||
|
@ -417,10 +379,10 @@ func (ts *tokenServer) postToken(ctx context.Context, w http.ResponseWriter, r *
|
||||||
response.RefreshToken = rToken
|
response.RefreshToken = rToken
|
||||||
}
|
}
|
||||||
|
|
||||||
ctx, w = dcontext.WithResponseWriter(ctx, w)
|
ctx, w = context.WithResponseWriter(ctx, w)
|
||||||
|
|
||||||
w.Header().Set("Content-Type", "application/json")
|
w.Header().Set("Content-Type", "application/json")
|
||||||
json.NewEncoder(w).Encode(response)
|
json.NewEncoder(w).Encode(response)
|
||||||
|
|
||||||
dcontext.GetResponseLogger(ctx).Info("post token complete")
|
context.GetResponseLogger(ctx).Info("post token complete")
|
||||||
}
|
}
|
||||||
|
|
|
@ -1,18 +1,16 @@
|
||||||
package main
|
package main
|
||||||
|
|
||||||
import (
|
import (
|
||||||
"context"
|
|
||||||
"crypto"
|
"crypto"
|
||||||
"crypto/rand"
|
"crypto/rand"
|
||||||
"encoding/base64"
|
"encoding/base64"
|
||||||
"encoding/json"
|
"encoding/json"
|
||||||
"fmt"
|
"fmt"
|
||||||
"io"
|
"io"
|
||||||
"regexp"
|
|
||||||
"strings"
|
"strings"
|
||||||
"time"
|
"time"
|
||||||
|
|
||||||
dcontext "github.com/docker/distribution/context"
|
"github.com/docker/distribution/context"
|
||||||
"github.com/docker/distribution/registry/auth"
|
"github.com/docker/distribution/registry/auth"
|
||||||
"github.com/docker/distribution/registry/auth/token"
|
"github.com/docker/distribution/registry/auth/token"
|
||||||
"github.com/docker/libtrust"
|
"github.com/docker/libtrust"
|
||||||
|
@@ -28,24 +26,18 @@ func ResolveScopeSpecifiers(ctx context.Context, scopeSpecs []string) []auth.Acc
 		parts := strings.SplitN(scopeSpecifier, ":", 3)

 		if len(parts) != 3 {
-			dcontext.GetLogger(ctx).Infof("ignoring unsupported scope format %s", scopeSpecifier)
+			context.GetLogger(ctx).Infof("ignoring unsupported scope format %s", scopeSpecifier)
 			continue
 		}

 		resourceType, resourceName, actions := parts[0], parts[1], parts[2]

-		resourceType, resourceClass := splitResourceClass(resourceType)
-		if resourceType == "" {
-			continue
-		}
-
 		// Actions should be a comma-separated list of actions.
 		for _, action := range strings.Split(actions, ",") {
 			requestedAccess := auth.Access{
 				Resource: auth.Resource{
 					Type: resourceType,
-					Class: resourceClass,
 					Name: resourceName,
 				},
 				Action: action,
 			}
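For reference, a minimal standalone sketch (plain Go, not the registry's own code) of how a scope specifier in the `type:name:actions` form parsed in the hunk above breaks apart; the scope string used here is only an illustrative value:

package main

import (
	"fmt"
	"strings"
)

// A scope specifier has the form "type:name:actions", where actions is a
// comma-separated list, e.g. "repository:testuser/plugin1:pull,push".
func main() {
	spec := "repository:testuser/plugin1:pull,push"

	parts := strings.SplitN(spec, ":", 3)
	if len(parts) != 3 {
		fmt.Println("unsupported scope format:", spec)
		return
	}

	resourceType, resourceName, actions := parts[0], parts[1], parts[2]
	for _, action := range strings.Split(actions, ",") {
		// Each action becomes one access request against the named resource.
		fmt.Printf("type=%s name=%s action=%s\n", resourceType, resourceName, action)
	}
}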
@ -63,19 +55,6 @@ func ResolveScopeSpecifiers(ctx context.Context, scopeSpecs []string) []auth.Acc
|
||||||
return requestedAccessList
|
return requestedAccessList
|
||||||
}
|
}
|
||||||
|
|
||||||
var typeRegexp = regexp.MustCompile(`^([a-z0-9]+)(\([a-z0-9]+\))?$`)
|
|
||||||
|
|
||||||
func splitResourceClass(t string) (string, string) {
|
|
||||||
matches := typeRegexp.FindStringSubmatch(t)
|
|
||||||
if len(matches) < 2 {
|
|
||||||
return "", ""
|
|
||||||
}
|
|
||||||
if len(matches) == 2 || len(matches[2]) < 2 {
|
|
||||||
return matches[1], ""
|
|
||||||
}
|
|
||||||
return matches[1], matches[2][1 : len(matches[2])-1]
|
|
||||||
}
|
|
||||||
|
|
||||||
// ResolveScopeList converts a scope list from a token request's
|
// ResolveScopeList converts a scope list from a token request's
|
||||||
// `scope` parameter into a list of standard access objects.
|
// `scope` parameter into a list of standard access objects.
|
||||||
func ResolveScopeList(ctx context.Context, scopeList string) []auth.Access {
|
func ResolveScopeList(ctx context.Context, scopeList string) []auth.Access {
|
||||||
|
@ -83,19 +62,12 @@ func ResolveScopeList(ctx context.Context, scopeList string) []auth.Access {
|
||||||
return ResolveScopeSpecifiers(ctx, scopes)
|
return ResolveScopeSpecifiers(ctx, scopes)
|
||||||
}
|
}
|
||||||
|
|
||||||
func scopeString(a auth.Access) string {
|
|
||||||
if a.Class != "" {
|
|
||||||
return fmt.Sprintf("%s(%s):%s:%s", a.Type, a.Class, a.Name, a.Action)
|
|
||||||
}
|
|
||||||
return fmt.Sprintf("%s:%s:%s", a.Type, a.Name, a.Action)
|
|
||||||
}
|
|
||||||
|
|
||||||
// ToScopeList converts a list of access to a
|
// ToScopeList converts a list of access to a
|
||||||
// scope list string
|
// scope list string
|
||||||
func ToScopeList(access []auth.Access) string {
|
func ToScopeList(access []auth.Access) string {
|
||||||
var s []string
|
var s []string
|
||||||
for _, a := range access {
|
for _, a := range access {
|
||||||
s = append(s, scopeString(a))
|
s = append(s, fmt.Sprintf("%s:%s:%s", a.Type, a.Name, a.Action))
|
||||||
}
|
}
|
||||||
return strings.Join(s, ",")
|
return strings.Join(s, ",")
|
||||||
}
|
}
|
||||||
|
@ -130,7 +102,6 @@ func (issuer *TokenIssuer) CreateJWT(subject string, audience string, grantedAcc
|
||||||
|
|
||||||
accessEntries = append(accessEntries, &token.ResourceActions{
|
accessEntries = append(accessEntries, &token.ResourceActions{
|
||||||
Type: resource.Type,
|
Type: resource.Type,
|
||||||
Class: resource.Class,
|
|
||||||
Name: resource.Name,
|
Name: resource.Name,
|
||||||
Actions: actions,
|
Actions: actions,
|
||||||
})
|
})
|
||||||
|
|
|
@ -1,80 +0,0 @@
|
||||||
package main
|
|
||||||
|
|
||||||
import (
|
|
||||||
"crypto/rand"
|
|
||||||
"crypto/rsa"
|
|
||||||
"encoding/base64"
|
|
||||||
"errors"
|
|
||||||
"testing"
|
|
||||||
"time"
|
|
||||||
|
|
||||||
"strings"
|
|
||||||
|
|
||||||
"github.com/docker/distribution/registry/auth"
|
|
||||||
"github.com/docker/libtrust"
|
|
||||||
)
|
|
||||||
|
|
||||||
func TestCreateJWTSuccessWithEmptyACL(t *testing.T) {
|
|
||||||
key, err := rsa.GenerateKey(rand.Reader, 1024)
|
|
||||||
if err != nil {
|
|
||||||
t.Fatal(err)
|
|
||||||
}
|
|
||||||
pk, err := libtrust.FromCryptoPrivateKey(key)
|
|
||||||
if err != nil {
|
|
||||||
t.Fatal(err)
|
|
||||||
}
|
|
||||||
tokenIssuer := TokenIssuer{
|
|
||||||
Expiration: time.Duration(100),
|
|
||||||
Issuer: "localhost",
|
|
||||||
SigningKey: pk,
|
|
||||||
}
|
|
||||||
|
|
||||||
grantedAccessList := make([]auth.Access, 0)
|
|
||||||
token, err := tokenIssuer.CreateJWT("test", "test", grantedAccessList)
|
|
||||||
if err != nil {
|
|
||||||
t.Fatal(err)
|
|
||||||
}
|
|
||||||
|
|
||||||
tokens := strings.Split(token, ".")
|
|
||||||
|
|
||||||
if len(token) == 0 {
|
|
||||||
t.Fatal("token not generated.")
|
|
||||||
}
|
|
||||||
|
|
||||||
json, err := decodeJWT(tokens[1])
|
|
||||||
if err != nil {
|
|
||||||
t.Fatal(err)
|
|
||||||
}
|
|
||||||
|
|
||||||
if !strings.Contains(json, "test") {
|
|
||||||
t.Fatal("Valid token was not generated.")
|
|
||||||
}
|
|
||||||
|
|
||||||
}
|
|
||||||
|
|
||||||
func decodeJWT(rawToken string) (string, error) {
|
|
||||||
data, err := joseBase64Decode(rawToken)
|
|
||||||
if err != nil {
|
|
||||||
return "", errors.New("Error in Decoding base64 String")
|
|
||||||
}
|
|
||||||
return data, nil
|
|
||||||
}
|
|
||||||
|
|
||||||
func joseBase64Decode(s string) (string, error) {
|
|
||||||
switch len(s) % 4 {
|
|
||||||
case 0:
|
|
||||||
case 2:
|
|
||||||
s += "=="
|
|
||||||
case 3:
|
|
||||||
s += "="
|
|
||||||
default:
|
|
||||||
{
|
|
||||||
return "", errors.New("Invalid base64 String")
|
|
||||||
}
|
|
||||||
}
|
|
||||||
data, err := base64.StdEncoding.DecodeString(s)
|
|
||||||
if err != nil {
|
|
||||||
return "", err //errors.New("Error in Decoding base64 String")
|
|
||||||
}
|
|
||||||
return string(data), nil
|
|
||||||
}
|
|
7
coverpkg.sh
Executable file

@@ -0,0 +1,7 @@
+#!/usr/bin/env bash
+# Given a subpackage and the containing package, figures out which packages
+# need to be passed to `go test -coverpkg`: this includes all of the
+# subpackage's dependencies within the containing package, as well as the
+# subpackage itself.
+DEPENDENCIES="$(go list -f $'{{range $f := .Deps}}{{$f}}\n{{end}}' ${1} | grep ${2} | grep -v github.com/docker/distribution/vendor)"
+echo "${1} ${DEPENDENCIES}" | xargs echo -n | tr ' ' ','
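For example, assuming the registry storage subpackage, `./coverpkg.sh github.com/docker/distribution/registry/storage github.com/docker/distribution` would print that package plus its in-repo dependencies as a comma-separated list, ready to pass to `go test -coverpkg`.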
@ -1,18 +1,3 @@
|
||||||
// Copyright 2019, 2020 OCI Contributors
|
|
||||||
// Copyright 2017 Docker, Inc.
|
|
||||||
//
|
|
||||||
// Licensed under the Apache License, Version 2.0 (the "License");
|
|
||||||
// you may not use this file except in compliance with the License.
|
|
||||||
// You may obtain a copy of the License at
|
|
||||||
//
|
|
||||||
// https://www.apache.org/licenses/LICENSE-2.0
|
|
||||||
//
|
|
||||||
// Unless required by applicable law or agreed to in writing, software
|
|
||||||
// distributed under the License is distributed on an "AS IS" BASIS,
|
|
||||||
// WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
|
|
||||||
// See the License for the specific language governing permissions and
|
|
||||||
// limitations under the License.
|
|
||||||
|
|
||||||
package digest
|
package digest
|
||||||
|
|
||||||
import (
|
import (
|
||||||
|
@ -23,6 +8,11 @@ import (
|
||||||
"strings"
|
"strings"
|
||||||
)
|
)
|
||||||
|
|
||||||
|
const (
|
||||||
|
// DigestSha256EmptyTar is the canonical sha256 digest of empty data
|
||||||
|
DigestSha256EmptyTar = "sha256:e3b0c44298fc1c149afbf4c8996fb92427ae41e4649b934ca495991b7852b855"
|
||||||
|
)
|
||||||
|
|
||||||
// Digest allows simple protection of hex formatted digest strings, prefixed
|
// Digest allows simple protection of hex formatted digest strings, prefixed
|
||||||
// by their algorithm. Strings of type Digest have some guarantee of being in
|
// by their algorithm. Strings of type Digest have some guarantee of being in
|
||||||
// the correct format and it provides quick access to the components of a
|
// the correct format and it provides quick access to the components of a
|
||||||
|
@ -46,21 +36,16 @@ func NewDigest(alg Algorithm, h hash.Hash) Digest {
|
||||||
// functions. This is also useful for rebuilding digests from binary
|
// functions. This is also useful for rebuilding digests from binary
|
||||||
// serializations.
|
// serializations.
|
||||||
func NewDigestFromBytes(alg Algorithm, p []byte) Digest {
|
func NewDigestFromBytes(alg Algorithm, p []byte) Digest {
|
||||||
return NewDigestFromEncoded(alg, alg.Encode(p))
|
return Digest(fmt.Sprintf("%s:%x", alg, p))
|
||||||
}
|
}
|
||||||
|
|
||||||
// NewDigestFromHex is deprecated. Please use NewDigestFromEncoded.
|
// NewDigestFromHex returns a Digest from alg and a the hex encoded digest.
|
||||||
func NewDigestFromHex(alg, hex string) Digest {
|
func NewDigestFromHex(alg, hex string) Digest {
|
||||||
return NewDigestFromEncoded(Algorithm(alg), hex)
|
return Digest(fmt.Sprintf("%s:%s", alg, hex))
|
||||||
}
|
|
||||||
|
|
||||||
// NewDigestFromEncoded returns a Digest from alg and the encoded digest.
|
|
||||||
func NewDigestFromEncoded(alg Algorithm, encoded string) Digest {
|
|
||||||
return Digest(fmt.Sprintf("%s:%s", alg, encoded))
|
|
||||||
}
|
}
|
||||||
|
|
||||||
// DigestRegexp matches valid digest types.
|
// DigestRegexp matches valid digest types.
|
||||||
var DigestRegexp = regexp.MustCompile(`[a-z0-9]+(?:[.+_-][a-z0-9]+)*:[a-zA-Z0-9=_-]+`)
|
var DigestRegexp = regexp.MustCompile(`[a-zA-Z0-9-_+.]+:[a-fA-F0-9]+`)
|
||||||
|
|
||||||
// DigestRegexpAnchored matches valid digest types, anchored to the start and end of the match.
|
// DigestRegexpAnchored matches valid digest types, anchored to the start and end of the match.
|
||||||
var DigestRegexpAnchored = regexp.MustCompile(`^` + DigestRegexp.String() + `$`)
|
var DigestRegexpAnchored = regexp.MustCompile(`^` + DigestRegexp.String() + `$`)
|
||||||
|
@ -76,14 +61,16 @@ var (
|
||||||
ErrDigestUnsupported = fmt.Errorf("unsupported digest algorithm")
|
ErrDigestUnsupported = fmt.Errorf("unsupported digest algorithm")
|
||||||
)
|
)
|
||||||
|
|
||||||
// Parse parses s and returns the validated digest object. An error will
|
// ParseDigest parses s and returns the validated digest object. An error will
|
||||||
// be returned if the format is invalid.
|
// be returned if the format is invalid.
|
||||||
func Parse(s string) (Digest, error) {
|
func ParseDigest(s string) (Digest, error) {
|
||||||
d := Digest(s)
|
d := Digest(s)
|
||||||
|
|
||||||
return d, d.Validate()
|
return d, d.Validate()
|
||||||
}
|
}
|
||||||
|
|
||||||
// FromReader consumes the content of rd until io.EOF, returning canonical digest.
|
// FromReader returns the most valid digest for the underlying content using
|
||||||
|
// the canonical digest algorithm.
|
||||||
func FromReader(rd io.Reader) (Digest, error) {
|
func FromReader(rd io.Reader) (Digest, error) {
|
||||||
return Canonical.FromReader(rd)
|
return Canonical.FromReader(rd)
|
||||||
}
|
}
|
||||||
|
@ -93,27 +80,36 @@ func FromBytes(p []byte) Digest {
|
||||||
return Canonical.FromBytes(p)
|
return Canonical.FromBytes(p)
|
||||||
}
|
}
|
||||||
|
|
||||||
// FromString digests the input and returns a Digest.
|
|
||||||
func FromString(s string) Digest {
|
|
||||||
return Canonical.FromString(s)
|
|
||||||
}
|
|
||||||
|
|
||||||
// Validate checks that the contents of d is a valid digest, returning an
|
// Validate checks that the contents of d is a valid digest, returning an
|
||||||
// error if not.
|
// error if not.
|
||||||
func (d Digest) Validate() error {
|
func (d Digest) Validate() error {
|
||||||
s := string(d)
|
s := string(d)
|
||||||
i := strings.Index(s, ":")
|
|
||||||
if i <= 0 || i+1 == len(s) {
|
if !DigestRegexpAnchored.MatchString(s) {
|
||||||
return ErrDigestInvalidFormat
|
return ErrDigestInvalidFormat
|
||||||
}
|
}
|
||||||
algorithm, encoded := Algorithm(s[:i]), s[i+1:]
|
|
||||||
if !algorithm.Available() {
|
i := strings.Index(s, ":")
|
||||||
if !DigestRegexpAnchored.MatchString(s) {
|
if i < 0 {
|
||||||
return ErrDigestInvalidFormat
|
return ErrDigestInvalidFormat
|
||||||
|
}
|
||||||
|
|
||||||
|
// case: "sha256:" with no hex.
|
||||||
|
if i+1 == len(s) {
|
||||||
|
return ErrDigestInvalidFormat
|
||||||
|
}
|
||||||
|
|
||||||
|
switch algorithm := Algorithm(s[:i]); algorithm {
|
||||||
|
case SHA256, SHA384, SHA512:
|
||||||
|
if algorithm.Size()*2 != len(s[i+1:]) {
|
||||||
|
return ErrDigestInvalidLength
|
||||||
}
|
}
|
||||||
|
break
|
||||||
|
default:
|
||||||
return ErrDigestUnsupported
|
return ErrDigestUnsupported
|
||||||
}
|
}
|
||||||
return algorithm.Validate(encoded)
|
|
||||||
|
return nil
|
||||||
}
|
}
|
||||||
|
|
||||||
// Algorithm returns the algorithm portion of the digest. This will panic if
|
// Algorithm returns the algorithm portion of the digest. This will panic if
|
||||||
|
@ -122,24 +118,10 @@ func (d Digest) Algorithm() Algorithm {
|
||||||
return Algorithm(d[:d.sepIndex()])
|
return Algorithm(d[:d.sepIndex()])
|
||||||
}
|
}
|
||||||
|
|
||||||
// Verifier returns a writer object that can be used to verify a stream of
|
// Hex returns the hex digest portion of the digest. This will panic if the
|
||||||
// content against the digest. If the digest is invalid, the method will panic.
|
|
||||||
func (d Digest) Verifier() Verifier {
|
|
||||||
return hashVerifier{
|
|
||||||
hash: d.Algorithm().Hash(),
|
|
||||||
digest: d,
|
|
||||||
}
|
|
||||||
}
|
|
||||||
|
|
||||||
// Encoded returns the encoded portion of the digest. This will panic if the
|
|
||||||
// underlying digest is not in a valid format.
|
// underlying digest is not in a valid format.
|
||||||
func (d Digest) Encoded() string {
|
|
||||||
return string(d[d.sepIndex()+1:])
|
|
||||||
}
|
|
||||||
|
|
||||||
// Hex is deprecated. Please use Digest.Encoded.
|
|
||||||
func (d Digest) Hex() string {
|
func (d Digest) Hex() string {
|
||||||
return d.Encoded()
|
return string(d[d.sepIndex()+1:])
|
||||||
}
|
}
|
||||||
|
|
||||||
func (d Digest) String() string {
|
func (d Digest) String() string {
|
||||||
|
@ -150,7 +132,7 @@ func (d Digest) sepIndex() int {
|
||||||
i := strings.Index(string(d), ":")
|
i := strings.Index(string(d), ":")
|
||||||
|
|
||||||
if i < 0 {
|
if i < 0 {
|
||||||
panic(fmt.Sprintf("no ':' separator in digest %q", d))
|
panic("could not find ':' in digest: " + d)
|
||||||
}
|
}
|
||||||
|
|
||||||
return i
|
return i
|
82
digest/digest_test.go
Normal file
82
digest/digest_test.go
Normal file
|
@ -0,0 +1,82 @@
|
||||||
|
package digest
|
||||||
|
|
||||||
|
import (
|
||||||
|
"testing"
|
||||||
|
)
|
||||||
|
|
||||||
|
func TestParseDigest(t *testing.T) {
|
||||||
|
for _, testcase := range []struct {
|
||||||
|
input string
|
||||||
|
err error
|
||||||
|
algorithm Algorithm
|
||||||
|
hex string
|
||||||
|
}{
|
||||||
|
{
|
||||||
|
input: "sha256:e58fcf7418d4390dec8e8fb69d88c06ec07039d651fedd3aa72af9972e7d046b",
|
||||||
|
algorithm: "sha256",
|
||||||
|
hex: "e58fcf7418d4390dec8e8fb69d88c06ec07039d651fedd3aa72af9972e7d046b",
|
||||||
|
},
|
||||||
|
{
|
||||||
|
input: "sha384:d3fc7881460b7e22e3d172954463dddd7866d17597e7248453c48b3e9d26d9596bf9c4a9cf8072c9d5bad76e19af801d",
|
||||||
|
algorithm: "sha384",
|
||||||
|
hex: "d3fc7881460b7e22e3d172954463dddd7866d17597e7248453c48b3e9d26d9596bf9c4a9cf8072c9d5bad76e19af801d",
|
||||||
|
},
|
||||||
|
{
|
||||||
|
// empty hex
|
||||||
|
input: "sha256:",
|
||||||
|
err: ErrDigestInvalidFormat,
|
||||||
|
},
|
||||||
|
{
|
||||||
|
// just hex
|
||||||
|
input: "d41d8cd98f00b204e9800998ecf8427e",
|
||||||
|
err: ErrDigestInvalidFormat,
|
||||||
|
},
|
||||||
|
{
|
||||||
|
// not hex
|
||||||
|
input: "sha256:d41d8cd98f00b204e9800m98ecf8427e",
|
||||||
|
err: ErrDigestInvalidFormat,
|
||||||
|
},
|
||||||
|
{
|
||||||
|
// too short
|
||||||
|
input: "sha256:abcdef0123456789",
|
||||||
|
err: ErrDigestInvalidLength,
|
||||||
|
},
|
||||||
|
{
|
||||||
|
// too short (from different algorithm)
|
||||||
|
input: "sha512:abcdef0123456789abcdef0123456789abcdef0123456789abcdef0123456789",
|
||||||
|
err: ErrDigestInvalidLength,
|
||||||
|
},
|
||||||
|
{
|
||||||
|
input: "foo:d41d8cd98f00b204e9800998ecf8427e",
|
||||||
|
err: ErrDigestUnsupported,
|
||||||
|
},
|
||||||
|
} {
|
||||||
|
digest, err := ParseDigest(testcase.input)
|
||||||
|
if err != testcase.err {
|
||||||
|
t.Fatalf("error differed from expected while parsing %q: %v != %v", testcase.input, err, testcase.err)
|
||||||
|
}
|
||||||
|
|
||||||
|
if testcase.err != nil {
|
||||||
|
continue
|
||||||
|
}
|
||||||
|
|
||||||
|
if digest.Algorithm() != testcase.algorithm {
|
||||||
|
t.Fatalf("incorrect algorithm for parsed digest: %q != %q", digest.Algorithm(), testcase.algorithm)
|
||||||
|
}
|
||||||
|
|
||||||
|
if digest.Hex() != testcase.hex {
|
||||||
|
t.Fatalf("incorrect hex for parsed digest: %q != %q", digest.Hex(), testcase.hex)
|
||||||
|
}
|
||||||
|
|
||||||
|
// Parse string return value and check equality
|
||||||
|
newParsed, err := ParseDigest(digest.String())
|
||||||
|
|
||||||
|
if err != nil {
|
||||||
|
t.Fatalf("unexpected error parsing input %q: %v", testcase.input, err)
|
||||||
|
}
|
||||||
|
|
||||||
|
if newParsed != digest {
|
||||||
|
t.Fatalf("expected equal: %q != %q", newParsed, digest)
|
||||||
|
}
|
||||||
|
}
|
||||||
|
}
|
|
@ -1,18 +1,3 @@
|
||||||
// Copyright 2019, 2020 OCI Contributors
|
|
||||||
// Copyright 2017 Docker, Inc.
|
|
||||||
//
|
|
||||||
// Licensed under the Apache License, Version 2.0 (the "License");
|
|
||||||
// you may not use this file except in compliance with the License.
|
|
||||||
// You may obtain a copy of the License at
|
|
||||||
//
|
|
||||||
// https://www.apache.org/licenses/LICENSE-2.0
|
|
||||||
//
|
|
||||||
// Unless required by applicable law or agreed to in writing, software
|
|
||||||
// distributed under the License is distributed on an "AS IS" BASIS,
|
|
||||||
// WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
|
|
||||||
// See the License for the specific language governing permissions and
|
|
||||||
// limitations under the License.
|
|
||||||
|
|
||||||
package digest
|
package digest
|
||||||
|
|
||||||
import (
|
import (
|
||||||
|
@ -20,7 +5,6 @@ import (
|
||||||
"fmt"
|
"fmt"
|
||||||
"hash"
|
"hash"
|
||||||
"io"
|
"io"
|
||||||
"regexp"
|
|
||||||
)
|
)
|
||||||
|
|
||||||
// Algorithm identifies and implementation of a digester by an identifier.
|
// Algorithm identifies and implementation of a digester by an identifier.
|
||||||
|
@ -30,9 +14,9 @@ type Algorithm string
|
||||||
|
|
||||||
// supported digest types
|
// supported digest types
|
||||||
const (
|
const (
|
||||||
SHA256 Algorithm = "sha256" // sha256 with hex encoding (lower case only)
|
SHA256 Algorithm = "sha256" // sha256 with hex encoding
|
||||||
SHA384 Algorithm = "sha384" // sha384 with hex encoding (lower case only)
|
SHA384 Algorithm = "sha384" // sha384 with hex encoding
|
||||||
SHA512 Algorithm = "sha512" // sha512 with hex encoding (lower case only)
|
SHA512 Algorithm = "sha512" // sha512 with hex encoding
|
||||||
|
|
||||||
// Canonical is the primary digest algorithm used with the distribution
|
// Canonical is the primary digest algorithm used with the distribution
|
||||||
// project. Other digests may be used but this one is the primary storage
|
// project. Other digests may be used but this one is the primary storage
|
||||||
|
@ -52,18 +36,10 @@ var (
|
||||||
SHA384: crypto.SHA384,
|
SHA384: crypto.SHA384,
|
||||||
SHA512: crypto.SHA512,
|
SHA512: crypto.SHA512,
|
||||||
}
|
}
|
||||||
|
|
||||||
// anchoredEncodedRegexps contains anchored regular expressions for hex-encoded digests.
|
|
||||||
// Note that /A-F/ disallowed.
|
|
||||||
anchoredEncodedRegexps = map[Algorithm]*regexp.Regexp{
|
|
||||||
SHA256: regexp.MustCompile(`^[a-f0-9]{64}$`),
|
|
||||||
SHA384: regexp.MustCompile(`^[a-f0-9]{96}$`),
|
|
||||||
SHA512: regexp.MustCompile(`^[a-f0-9]{128}$`),
|
|
||||||
}
|
|
||||||
)
|
)
|
||||||
|
|
||||||
// Available returns true if the digest type is available for use. If this
|
// Available returns true if the digest type is available for use. If this
|
||||||
// returns false, Digester and Hash will return nil.
|
// returns false, New and Hash will return nil.
|
||||||
func (a Algorithm) Available() bool {
|
func (a Algorithm) Available() bool {
|
||||||
h, ok := algorithms[a]
|
h, ok := algorithms[a]
|
||||||
if !ok {
|
if !ok {
|
||||||
|
@ -96,17 +72,13 @@ func (a *Algorithm) Set(value string) error {
|
||||||
*a = Algorithm(value)
|
*a = Algorithm(value)
|
||||||
}
|
}
|
||||||
|
|
||||||
if !a.Available() {
|
|
||||||
return ErrDigestUnsupported
|
|
||||||
}
|
|
||||||
|
|
||||||
return nil
|
return nil
|
||||||
}
|
}
|
||||||
|
|
||||||
// Digester returns a new digester for the specified algorithm. If the algorithm
|
// New returns a new digester for the specified algorithm. If the algorithm
|
||||||
// does not have a digester implementation, nil will be returned. This can be
|
// does not have a digester implementation, nil will be returned. This can be
|
||||||
// checked by calling Available before calling Digester.
|
// checked by calling Available before calling New.
|
||||||
func (a Algorithm) Digester() Digester {
|
func (a Algorithm) New() Digester {
|
||||||
return &digester{
|
return &digester{
|
||||||
alg: a,
|
alg: a,
|
||||||
hash: a.Hash(),
|
hash: a.Hash(),
|
||||||
|
@ -117,11 +89,6 @@ func (a Algorithm) Digester() Digester {
|
||||||
// method will panic. Check Algorithm.Available() before calling.
|
// method will panic. Check Algorithm.Available() before calling.
|
||||||
func (a Algorithm) Hash() hash.Hash {
|
func (a Algorithm) Hash() hash.Hash {
|
||||||
if !a.Available() {
|
if !a.Available() {
|
||||||
// Empty algorithm string is invalid
|
|
||||||
if a == "" {
|
|
||||||
panic(fmt.Sprintf("empty digest algorithm, validate before calling Algorithm.Hash()"))
|
|
||||||
}
|
|
||||||
|
|
||||||
// NOTE(stevvooe): A missing hash is usually a programming error that
|
// NOTE(stevvooe): A missing hash is usually a programming error that
|
||||||
// must be resolved at compile time. We don't import in the digest
|
// must be resolved at compile time. We don't import in the digest
|
||||||
// package to allow users to choose their hash implementation (such as
|
// package to allow users to choose their hash implementation (such as
|
||||||
|
@ -135,17 +102,9 @@ func (a Algorithm) Hash() hash.Hash {
|
||||||
return algorithms[a].New()
|
return algorithms[a].New()
|
||||||
}
|
}
|
||||||
|
|
||||||
// Encode encodes the raw bytes of a digest, typically from a hash.Hash, into
|
|
||||||
// the encoded portion of the digest.
|
|
||||||
func (a Algorithm) Encode(d []byte) string {
|
|
||||||
// TODO(stevvooe): Currently, all algorithms use a hex encoding. When we
|
|
||||||
// add support for back registration, we can modify this accordingly.
|
|
||||||
return fmt.Sprintf("%x", d)
|
|
||||||
}
|
|
||||||
|
|
||||||
// FromReader returns the digest of the reader using the algorithm.
|
// FromReader returns the digest of the reader using the algorithm.
|
||||||
func (a Algorithm) FromReader(rd io.Reader) (Digest, error) {
|
func (a Algorithm) FromReader(rd io.Reader) (Digest, error) {
|
||||||
digester := a.Digester()
|
digester := a.New()
|
||||||
|
|
||||||
if _, err := io.Copy(digester.Hash(), rd); err != nil {
|
if _, err := io.Copy(digester.Hash(), rd); err != nil {
|
||||||
return "", err
|
return "", err
|
||||||
|
@ -156,7 +115,7 @@ func (a Algorithm) FromReader(rd io.Reader) (Digest, error) {
|
||||||
|
|
||||||
// FromBytes digests the input and returns a Digest.
|
// FromBytes digests the input and returns a Digest.
|
||||||
func (a Algorithm) FromBytes(p []byte) Digest {
|
func (a Algorithm) FromBytes(p []byte) Digest {
|
||||||
digester := a.Digester()
|
digester := a.New()
|
||||||
|
|
||||||
if _, err := digester.Hash().Write(p); err != nil {
|
if _, err := digester.Hash().Write(p); err != nil {
|
||||||
// Writes to a Hash should never fail. None of the existing
|
// Writes to a Hash should never fail. None of the existing
|
||||||
|
@ -170,24 +129,27 @@ func (a Algorithm) FromBytes(p []byte) Digest {
|
||||||
return digester.Digest()
|
return digester.Digest()
|
||||||
}
|
}
|
||||||
|
|
||||||
// FromString digests the string input and returns a Digest.
|
// TODO(stevvooe): Allow resolution of verifiers using the digest type and
|
||||||
func (a Algorithm) FromString(s string) Digest {
|
// this registration system.
|
||||||
return a.FromBytes([]byte(s))
|
|
||||||
|
// Digester calculates the digest of written data. Writes should go directly
|
||||||
|
// to the return value of Hash, while calling Digest will return the current
|
||||||
|
// value of the digest.
|
||||||
|
type Digester interface {
|
||||||
|
Hash() hash.Hash // provides direct access to underlying hash instance.
|
||||||
|
Digest() Digest
|
||||||
}
|
}
|
||||||
|
|
||||||
// Validate validates the encoded portion string
|
// digester provides a simple digester definition that embeds a hasher.
|
||||||
func (a Algorithm) Validate(encoded string) error {
|
type digester struct {
|
||||||
r, ok := anchoredEncodedRegexps[a]
|
alg Algorithm
|
||||||
if !ok {
|
hash hash.Hash
|
||||||
return ErrDigestUnsupported
|
}
|
||||||
}
|
|
||||||
// Digests much always be hex-encoded, ensuring that their hex portion will
|
func (d *digester) Hash() hash.Hash {
|
||||||
// always be size*2
|
return d.hash
|
||||||
if a.Size()*2 != len(encoded) {
|
}
|
||||||
return ErrDigestInvalidLength
|
|
||||||
}
|
func (d *digester) Digest() Digest {
|
||||||
if r.MatchString(encoded) {
|
return NewDigest(d.alg, d.hash)
|
||||||
return nil
|
|
||||||
}
|
|
||||||
return ErrDigestInvalidFormat
|
|
||||||
}
|
}
|
21
digest/digester_resumable_test.go
Normal file

@@ -0,0 +1,21 @@
+// +build !noresumabledigest
+
+package digest
+
+import (
+	"testing"
+
+	"github.com/stevvooe/resumable"
+	_ "github.com/stevvooe/resumable/sha256"
+)
+
+// TestResumableDetection just ensures that the resumable capability of a hash
+// is exposed through the digester type, which is just a hash plus a Digest
+// method.
+func TestResumableDetection(t *testing.T) {
+	d := Canonical.New()
+
+	if _, ok := d.Hash().(resumable.Hash); !ok {
+		t.Fatalf("expected digester to implement resumable.Hash: %#v, %v", d, d.Hash())
+	}
+}
@@ -1,18 +1,3 @@
-// Copyright 2019, 2020 OCI Contributors
-// Copyright 2017 Docker, Inc.
-//
-// Licensed under the Apache License, Version 2.0 (the "License");
-// you may not use this file except in compliance with the License.
-// You may obtain a copy of the License at
-//
-// https://www.apache.org/licenses/LICENSE-2.0
-//
-// Unless required by applicable law or agreed to in writing, software
-// distributed under the License is distributed on an "AS IS" BASIS,
-// WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
-// See the License for the specific language governing permissions and
-// limitations under the License.
-
 // Package digest provides a generalized type to opaquely represent message
 // digests and their operations within the registry. The Digest type is
 // designed to serve as a flexible identifier in a content-addressable system.
@@ -30,13 +15,8 @@
 //
 // sha256:7173b809ca12ec5dee4506cd86be934c4596dd234ee82c0662eac04a8c2c71dc
 //
-// The "algorithm" portion defines both the hashing algorithm used to calculate
-// the digest and the encoding of the resulting digest, which defaults to "hex"
-// if not otherwise specified. Currently, all supported algorithms have their
-// digests encoded in hex strings.
-//
-// In the example above, the string "sha256" is the algorithm and the hex bytes
-// are the "digest".
+// In this case, the string "sha256" is the algorithm and the hex bytes are
+// the "digest".
 //
 // Because the Digest type is simply a string, once a valid Digest is
 // obtained, comparisons are cheap, quick and simple to express with the
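To make the algorithm:hex format above concrete, here is a minimal standalone Go sketch (standard library only, not the digest package changed in this diff) that produces a digest string of the same shape:

package main

import (
	"crypto/sha256"
	"fmt"
)

func main() {
	content := []byte("hello world")

	// The canonical algorithm here is sha256; the digest string is the
	// algorithm name, a colon, then the lower-case hex encoding of the hash.
	sum := sha256.Sum256(content)
	digest := fmt.Sprintf("sha256:%x", sum)

	fmt.Println(digest)
	// sha256:b94d27b9934d3e08a52e52d7da7dabfac484efe37a5380ee9088f7ace2efcde9
}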
@ -1,12 +1,10 @@
|
||||||
package digestset
|
package digest
|
||||||
|
|
||||||
import (
|
import (
|
||||||
"errors"
|
"errors"
|
||||||
"sort"
|
"sort"
|
||||||
"strings"
|
"strings"
|
||||||
"sync"
|
"sync"
|
||||||
|
|
||||||
digest "github.com/opencontainers/go-digest"
|
|
||||||
)
|
)
|
||||||
|
|
||||||
var (
|
var (
|
||||||
|
@ -46,7 +44,7 @@ func NewSet() *Set {
|
||||||
// values or short values. This function does not test equality,
|
// values or short values. This function does not test equality,
|
||||||
// rather whether the second value could match against the first
|
// rather whether the second value could match against the first
|
||||||
// value.
|
// value.
|
||||||
func checkShortMatch(alg digest.Algorithm, hex, shortAlg, shortHex string) bool {
|
func checkShortMatch(alg Algorithm, hex, shortAlg, shortHex string) bool {
|
||||||
if len(hex) == len(shortHex) {
|
if len(hex) == len(shortHex) {
|
||||||
if hex != shortHex {
|
if hex != shortHex {
|
||||||
return false
|
return false
|
||||||
|
@ -66,7 +64,7 @@ func checkShortMatch(alg digest.Algorithm, hex, shortAlg, shortHex string) bool
|
||||||
// If no digests could be found ErrDigestNotFound will be returned
|
// If no digests could be found ErrDigestNotFound will be returned
|
||||||
// with an empty digest value. If multiple matches are found
|
// with an empty digest value. If multiple matches are found
|
||||||
// ErrDigestAmbiguous will be returned with an empty digest value.
|
// ErrDigestAmbiguous will be returned with an empty digest value.
|
||||||
func (dst *Set) Lookup(d string) (digest.Digest, error) {
|
func (dst *Set) Lookup(d string) (Digest, error) {
|
||||||
dst.mutex.RLock()
|
dst.mutex.RLock()
|
||||||
defer dst.mutex.RUnlock()
|
defer dst.mutex.RUnlock()
|
||||||
if len(dst.entries) == 0 {
|
if len(dst.entries) == 0 {
|
||||||
|
@ -74,11 +72,11 @@ func (dst *Set) Lookup(d string) (digest.Digest, error) {
|
||||||
}
|
}
|
||||||
var (
|
var (
|
||||||
searchFunc func(int) bool
|
searchFunc func(int) bool
|
||||||
alg digest.Algorithm
|
alg Algorithm
|
||||||
hex string
|
hex string
|
||||||
)
|
)
|
||||||
dgst, err := digest.Parse(d)
|
dgst, err := ParseDigest(d)
|
||||||
if err == digest.ErrDigestInvalidFormat {
|
if err == ErrDigestInvalidFormat {
|
||||||
hex = d
|
hex = d
|
||||||
searchFunc = func(i int) bool {
|
searchFunc = func(i int) bool {
|
||||||
return dst.entries[i].val >= d
|
return dst.entries[i].val >= d
|
||||||
|
@ -110,7 +108,7 @@ func (dst *Set) Lookup(d string) (digest.Digest, error) {
|
||||||
// Add adds the given digest to the set. An error will be returned
|
// Add adds the given digest to the set. An error will be returned
|
||||||
// if the given digest is invalid. If the digest already exists in the
|
// if the given digest is invalid. If the digest already exists in the
|
||||||
// set, this operation will be a no-op.
|
// set, this operation will be a no-op.
|
||||||
func (dst *Set) Add(d digest.Digest) error {
|
func (dst *Set) Add(d Digest) error {
|
||||||
if err := d.Validate(); err != nil {
|
if err := d.Validate(); err != nil {
|
||||||
return err
|
return err
|
||||||
}
|
}
|
||||||
|
@ -141,7 +139,7 @@ func (dst *Set) Add(d digest.Digest) error {
|
||||||
// Remove removes the given digest from the set. An err will be
|
// Remove removes the given digest from the set. An err will be
|
||||||
// returned if the given digest is invalid. If the digest does
|
// returned if the given digest is invalid. If the digest does
|
||||||
// not exist in the set, this operation will be a no-op.
|
// not exist in the set, this operation will be a no-op.
|
||||||
func (dst *Set) Remove(d digest.Digest) error {
|
func (dst *Set) Remove(d Digest) error {
|
||||||
if err := d.Validate(); err != nil {
|
if err := d.Validate(); err != nil {
|
||||||
return err
|
return err
|
||||||
}
|
}
|
||||||
|
@ -169,10 +167,10 @@ func (dst *Set) Remove(d digest.Digest) error {
|
||||||
}
|
}
|
||||||
|
|
||||||
// All returns all the digests in the set
|
// All returns all the digests in the set
|
||||||
func (dst *Set) All() []digest.Digest {
|
func (dst *Set) All() []Digest {
|
||||||
dst.mutex.RLock()
|
dst.mutex.RLock()
|
||||||
defer dst.mutex.RUnlock()
|
defer dst.mutex.RUnlock()
|
||||||
retValues := make([]digest.Digest, len(dst.entries))
|
retValues := make([]Digest, len(dst.entries))
|
||||||
for i := range dst.entries {
|
for i := range dst.entries {
|
||||||
retValues[i] = dst.entries[i].digest
|
retValues[i] = dst.entries[i].digest
|
||||||
}
|
}
|
||||||
|
@ -185,10 +183,10 @@ func (dst *Set) All() []digest.Digest {
|
||||||
// entire value of digest if uniqueness cannot be achieved without the
|
// entire value of digest if uniqueness cannot be achieved without the
|
||||||
// full value. This function will attempt to make short codes as short
|
// full value. This function will attempt to make short codes as short
|
||||||
// as possible to be unique.
|
// as possible to be unique.
|
||||||
func ShortCodeTable(dst *Set, length int) map[digest.Digest]string {
|
func ShortCodeTable(dst *Set, length int) map[Digest]string {
|
||||||
dst.mutex.RLock()
|
dst.mutex.RLock()
|
||||||
defer dst.mutex.RUnlock()
|
defer dst.mutex.RUnlock()
|
||||||
m := make(map[digest.Digest]string, len(dst.entries))
|
m := make(map[Digest]string, len(dst.entries))
|
||||||
l := length
|
l := length
|
||||||
resetIdx := 0
|
resetIdx := 0
|
||||||
for i := 0; i < len(dst.entries); i++ {
|
for i := 0; i < len(dst.entries); i++ {
|
||||||
|
@ -224,9 +222,9 @@ func ShortCodeTable(dst *Set, length int) map[digest.Digest]string {
|
||||||
}
|
}
|
||||||
|
|
||||||
type digestEntry struct {
|
type digestEntry struct {
|
||||||
alg digest.Algorithm
|
alg Algorithm
|
||||||
val string
|
val string
|
||||||
digest digest.Digest
|
digest Digest
|
||||||
}
|
}
|
||||||
|
|
||||||
type digestEntries []*digestEntry
|
type digestEntries []*digestEntry
|
|
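The set diffed above is a small prefix-lookup index over digests. As a rough illustration of how the master-side API fits together, here is a minimal sketch (not part of the diff) that builds a set and resolves a short prefix; the `github.com/docker/distribution/digestset` and `github.com/opencontainers/go-digest` import paths are assumptions based on the master side of this comparison.

```go
package main

import (
	"fmt"

	"github.com/docker/distribution/digestset"
	digest "github.com/opencontainers/go-digest"
)

func main() {
	dset := digestset.NewSet()

	// Two digests that differ only after the first few hex characters.
	d1 := digest.Digest("sha256:1234511111111111111111111111111111111111111111111111111111111111")
	d2 := digest.Digest("sha256:1234111111111111111111111111111111111111111111111111111111111111")
	for _, d := range []digest.Digest{d1, d2} {
		if err := dset.Add(d); err != nil {
			panic(err)
		}
	}

	// A prefix that is unique within the set resolves to the full digest.
	// An ambiguous prefix returns ErrDigestAmbiguous and an unknown one
	// ErrDigestNotFound, as described in the comments above.
	full, err := dset.Lookup("12341")
	if err != nil {
		panic(err)
	}
	fmt.Println(full == d2) // true
}
```

`ShortCodeTable` goes the other way, computing the shortest unambiguous prefix for every digest currently in the set.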
@@ -1,23 +1,20 @@
-package digestset
+package digest
 
 import (
    "crypto/sha256"
-   _ "crypto/sha512"
    "encoding/binary"
    "math/rand"
    "testing"
-
-   digest "github.com/opencontainers/go-digest"
 )
 
-func assertEqualDigests(t *testing.T, d1, d2 digest.Digest) {
+func assertEqualDigests(t *testing.T, d1, d2 Digest) {
    if d1 != d2 {
        t.Fatalf("Digests do not match:\n\tActual: %s\n\tExpected: %s", d1, d2)
    }
 }
 
 func TestLookup(t *testing.T) {
-   digests := []digest.Digest{
+   digests := []Digest{
        "sha256:1234511111111111111111111111111111111111111111111111111111111111",
        "sha256:1234111111111111111111111111111111111111111111111111111111111111",
        "sha256:1234611111111111111111111111111111111111111111111111111111111111",
@@ -41,7 +38,7 @@ func TestLookup(t *testing.T) {
    }
    assertEqualDigests(t, dgst, digests[3])
 
-   _, err = dset.Lookup("1234")
+   dgst, err = dset.Lookup("1234")
    if err == nil {
        t.Fatal("Expected ambiguous error looking up: 1234")
    }
@@ -49,15 +46,15 @@ func TestLookup(t *testing.T) {
        t.Fatal(err)
    }
 
-   _, err = dset.Lookup("9876")
+   dgst, err = dset.Lookup("9876")
    if err == nil {
-       t.Fatal("Expected not found error looking up: 9876")
+       t.Fatal("Expected ambiguous error looking up: 9876")
    }
    if err != ErrDigestNotFound {
        t.Fatal(err)
    }
 
-   _, err = dset.Lookup("sha256:1234")
+   dgst, err = dset.Lookup("sha256:1234")
    if err == nil {
        t.Fatal("Expected ambiguous error looking up: sha256:1234")
    }
@@ -91,7 +88,7 @@ func TestLookup(t *testing.T) {
 }
 
 func TestAddDuplication(t *testing.T) {
-   digests := []digest.Digest{
+   digests := []Digest{
        "sha256:1234111111111111111111111111111111111111111111111111111111111111",
        "sha256:1234511111111111111111111111111111111111111111111111111111111111",
        "sha256:1234611111111111111111111111111111111111111111111111111111111111",
@@ -113,20 +110,20 @@ func TestAddDuplication(t *testing.T) {
        t.Fatal("Invalid dset size")
    }
 
-   if err := dset.Add(digest.Digest("sha256:1234511111111111111111111111111111111111111111111111111111111111")); err != nil {
+   if err := dset.Add(Digest("sha256:1234511111111111111111111111111111111111111111111111111111111111")); err != nil {
        t.Fatal(err)
    }
 
    if len(dset.entries) != 8 {
-       t.Fatal("Duplicate digest insert should not increase entries size")
+       t.Fatal("Duplicate digest insert allowed")
    }
 
-   if err := dset.Add(digest.Digest("sha384:123451111111111111111111111111111111111111111111111111111111111111111111111111111111111111111111")); err != nil {
+   if err := dset.Add(Digest("sha384:123451111111111111111111111111111111111111111111111111111111111111111111111111111111111111111111")); err != nil {
        t.Fatal(err)
    }
 
    if len(dset.entries) != 9 {
-       t.Fatal("Insert with different algorithm should be allowed")
+       t.Fatal("Insert with different algorithm not allowed")
    }
 }
 
@@ -173,7 +170,7 @@ func TestAll(t *testing.T) {
        }
    }
 
-   all := map[digest.Digest]struct{}{}
+   all := map[Digest]struct{}{}
    for _, dgst := range dset.All() {
        all[dgst] = struct{}{}
    }
@@ -197,7 +194,7 @@ func assertEqualShort(t *testing.T, actual, expected string) {
 }
 
 func TestShortCodeTable(t *testing.T) {
-   digests := []digest.Digest{
+   digests := []Digest{
        "sha256:1234111111111111111111111111111111111111111111111111111111111111",
        "sha256:1234511111111111111111111111111111111111111111111111111111111111",
        "sha256:1234611111111111111111111111111111111111111111111111111111111111",
@@ -230,15 +227,15 @@ func TestShortCodeTable(t *testing.T) {
    assertEqualShort(t, dump[digests[7]], "653")
 }
 
-func createDigests(count int) ([]digest.Digest, error) {
+func createDigests(count int) ([]Digest, error) {
    r := rand.New(rand.NewSource(25823))
-   digests := make([]digest.Digest, count)
+   digests := make([]Digest, count)
    for i := range digests {
        h := sha256.New()
        if err := binary.Write(h, binary.BigEndian, r.Int63()); err != nil {
            return nil, err
        }
-       digests[i] = digest.NewDigest("sha256", h)
+       digests[i] = NewDigest("sha256", h)
    }
    return digests, nil
 }
@@ -1,18 +1,3 @@
-// Copyright 2019, 2020 OCI Contributors
-// Copyright 2017 Docker, Inc.
-//
-// Licensed under the Apache License, Version 2.0 (the "License");
-// you may not use this file except in compliance with the License.
-// You may obtain a copy of the License at
-//
-//     https://www.apache.org/licenses/LICENSE-2.0
-//
-// Unless required by applicable law or agreed to in writing, software
-// distributed under the License is distributed on an "AS IS" BASIS,
-// WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
-// See the License for the specific language governing permissions and
-// limitations under the License.
-
 package digest
 
 import (
@@ -32,6 +17,19 @@ type Verifier interface {
    Verified() bool
 }
 
+// NewDigestVerifier returns a verifier that compares the written bytes
+// against a passed in digest.
+func NewDigestVerifier(d Digest) (Verifier, error) {
+   if err := d.Validate(); err != nil {
+       return nil, err
+   }
+
+   return hashVerifier{
+       hash:   d.Algorithm().Hash(),
+       digest: d,
+   }, nil
+}
+
 type hashVerifier struct {
    digest Digest
    hash   hash.Hash
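The `NewDigestVerifier` helper added on the right-hand side of this hunk wraps a `hash.Hash` so that data can be checked against a known digest as it is written. A minimal usage sketch follows, assuming the branch-side `github.com/docker/distribution/digest` package (the import path is an assumption; the master side exposes an equivalent verifier through `opencontainers/go-digest`):

```go
package main

import (
	"bytes"
	"fmt"
	"io"

	"github.com/docker/distribution/digest" // assumed import path for the branch-side package
)

func main() {
	payload := []byte("hello, registry")

	// Digest the payload once, then verify a second copy of the bytes by
	// streaming them through the verifier.
	dgst := digest.FromBytes(payload)

	verifier, err := digest.NewDigestVerifier(dgst)
	if err != nil {
		panic(err)
	}
	if _, err := io.Copy(verifier, bytes.NewReader(payload)); err != nil {
		panic(err)
	}

	fmt.Println(verifier.Verified()) // true when the written bytes match dgst
}
```

The test file added below exercises exactly this flow, plus the error path for an unsupported digest algorithm.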
49  digest/verifiers_test.go  Normal file
@@ -0,0 +1,49 @@
+package digest
+
+import (
+   "bytes"
+   "crypto/rand"
+   "io"
+   "testing"
+)
+
+func TestDigestVerifier(t *testing.T) {
+   p := make([]byte, 1<<20)
+   rand.Read(p)
+   digest := FromBytes(p)
+
+   verifier, err := NewDigestVerifier(digest)
+   if err != nil {
+       t.Fatalf("unexpected error getting digest verifier: %s", err)
+   }
+
+   io.Copy(verifier, bytes.NewReader(p))
+
+   if !verifier.Verified() {
+       t.Fatalf("bytes not verified")
+   }
+}
+
+// TestVerifierUnsupportedDigest ensures that unsupported digest validation is
+// flowing through verifier creation.
+func TestVerifierUnsupportedDigest(t *testing.T) {
+   unsupported := Digest("bean:0123456789abcdef")
+
+   _, err := NewDigestVerifier(unsupported)
+   if err == nil {
+       t.Fatalf("expected error when creating verifier")
+   }
+
+   if err != ErrDigestUnsupported {
+       t.Fatalf("incorrect error for unsupported digest: %v", err)
+   }
+}
+
+// TODO(stevvooe): Add benchmarks to measure bytes/second throughput for
+// DigestVerifier.
+//
+// The relevant benchmark for comparison can be run with the following
+// commands:
+//
+//     go test -bench . crypto/sha1
+//
9  docs/Dockerfile  Normal file
@@ -0,0 +1,9 @@
+FROM docs/base:oss
+MAINTAINER Docker Docs <docs@docker.com>
+
+ENV PROJECT=registry
+
+# To get the git info for this repo
+COPY . /src
+RUN rm -rf /docs/content/$PROJECT/
+COPY . /docs/content/$PROJECT/
38  docs/Makefile  Normal file
@@ -0,0 +1,38 @@
+.PHONY: all default docs docs-build docs-shell shell test
+
+# to allow `make DOCSDIR=docs docs-shell` (to create a bind mount in docs)
+DOCS_MOUNT := $(if $(DOCSDIR),-v $(CURDIR)/$(DOCSDIR):/$(DOCSDIR))
+
+# to allow `make DOCSPORT=9000 docs`
+DOCSPORT := 8000
+
+# Get the IP ADDRESS
+DOCKER_IP=$(shell python -c "import urlparse ; print urlparse.urlparse('$(DOCKER_HOST)').hostname or ''")
+HUGO_BASE_URL=$(shell test -z "$(DOCKER_IP)" && echo localhost || echo "$(DOCKER_IP)")
+HUGO_BIND_IP=0.0.0.0
+
+GIT_BRANCH := $(shell git rev-parse --abbrev-ref HEAD 2>/dev/null)
+GIT_BRANCH_CLEAN := $(shell echo $(GIT_BRANCH) | sed -e "s/[^[:alnum:]]/-/g")
+DOCKER_DOCS_IMAGE := registry-docs$(if $(GIT_BRANCH_CLEAN),:$(GIT_BRANCH_CLEAN))
+
+DOCKER_RUN_DOCS := docker run --rm -it $(DOCS_MOUNT) -e AWS_S3_BUCKET -e NOCACHE
+
+# for some docs workarounds (see below in "docs-build" target)
+GITCOMMIT := $(shell git rev-parse --short HEAD 2>/dev/null)
+
+default: docs
+
+docs: docs-build
+	$(DOCKER_RUN_DOCS) -p $(if $(DOCSPORT),$(DOCSPORT):)8000 -e DOCKERHOST "$(DOCKER_DOCS_IMAGE)" hugo server --port=$(DOCSPORT) --baseUrl=$(HUGO_BASE_URL) --bind=$(HUGO_BIND_IP)
+
+docs-draft: docs-build
+	$(DOCKER_RUN_DOCS) -p $(if $(DOCSPORT),$(DOCSPORT):)8000 -e DOCKERHOST "$(DOCKER_DOCS_IMAGE)" hugo server --buildDrafts="true" --port=$(DOCSPORT) --baseUrl=$(HUGO_BASE_URL) --bind=$(HUGO_BIND_IP)
+
+docs-shell: docs-build
+	$(DOCKER_RUN_DOCS) -p $(if $(DOCSPORT),$(DOCSPORT):)8000 "$(DOCKER_DOCS_IMAGE)" bash
+
+docs-build:
+	docker build -t "$(DOCKER_DOCS_IMAGE)" .
+
+test: docs-build
+	$(DOCKER_RUN_DOCS) "$(DOCKER_DOCS_IMAGE)"
@@ -1,16 +0,0 @@
-# The docs have been moved!
-
-The documentation for Registry has been merged into
-[the general documentation repo](https://github.com/docker/docker.github.io).
-Commit history has been preserved.
-
-The docs for Registry are now here:
-https://github.com/docker/docker.github.io/tree/master/registry
-
-> Note: The definitive [./spec directory](spec/) directory and
-[configuration.md](configuration.md) file will be maintained in this repository
-and be refreshed periodically in
-[the general documentation repo](https://github.com/docker/docker.github.io).
-
-As always, the docs in the general repo remain open-source and we appreciate
-your feedback and pull requests!
@@ -1,6 +1,8 @@
----
-published: false
----
+<!--[metadata]>
++++
+draft = true
++++
+<![end-metadata]-->
 
 # Architecture
 
84  docs/compatibility.md  Normal file
@@ -0,0 +1,84 @@
+<!--[metadata]>
++++
+title = "Compatibility"
+description = "describes get by digest pitfall"
+keywords = ["registry, manifest, images, tags, repository, distribution, digest"]
+[menu.main]
+parent="smn_registry_ref"
+weight=9
++++
+<![end-metadata]-->
+
+# Registry Compatibility
+
+## Synopsis
+*If a manifest is pulled by _digest_ from a registry 2.3 with Docker Engine 1.9
+and older, and the manifest was pushed with Docker Engine 1.10, a security check
+will cause the Engine to receive a manifest it cannot use and the pull will fail.*
+
+## Registry Manifest Support
+
+Historically, the registry has supported a [single manifest type](./spec/manifest-v2-1.md)
+known as _Schema 1_.
+
+With the move toward multiple architecture images the distribution project
+introduced two new manifest types: Schema 2 manifests and manifest lists. The
+registry 2.3 supports all three manifest types and in order to be compatible
+with older Docker engines will, in certain cases, do an on-the-fly
+transformation of a manifest before serving the JSON in the response.
+
+This conversion has some implications for pulling manifests by digest and this
+document enumerates these implications.
+
+## Content Addressable Storage (CAS)
+
+Manifests are stored and retrieved in the registry by keying off a digest
+representing a hash of the contents. One of the advantages provided by CAS is
+security: if the contents are changed, then the digest will no longer match.
+This prevents any modification of the manifest by a MITM attack or an untrusted
+third party.
+
+When a manifest is stored by the registry, this digest is returned in the HTTP
+response headers and, if events are configured, delivered within the event. The
+manifest can either be retrieved by the tag, or this digest.
+
+For registry versions 2.2.1 and below, the registry will always store and
+serve _Schema 1_ manifests. The Docker Engine 1.10 will first
+attempt to send a _Schema 2_ manifest, falling back to sending a
+Schema 1 type manifest when it detects that the registry does not
+support the new version.
+
+## Registry v2.3
+
+### Manifest Push with Docker 1.9 and Older
+
+The Docker Engine will construct a _Schema 1_ manifest which the
+registry will persist to disk.
+
+When the manifest is pulled by digest or tag with any docker version, a
+_Schema 1_ manifest will be returned.
+
+### Manifest Push with Docker 1.10
+
+The docker engine will construct a _Schema 2_ manifest which the
+registry will persist to disk.
+
+When the manifest is pulled by digest or tag with Docker Engine 1.10, a
+_Schema 2_ manifest will be returned. The Docker Engine 1.10
+understands the new manifest format.
+
+When the manifest is pulled by *tag* with Docker Engine 1.9 and older, the
+manifest is converted on-the-fly to _Schema 1_ and sent in the
+response. The Docker Engine 1.9 is compatible with this older format.
+
+*When the manifest is pulled by _digest_ with Docker Engine 1.9 and older, the
+same rewriting process will not happen in the registry. If this were to happen
+the digest would no longer match the hash of the manifest and would violate the
+constraints of CAS.*
+
+For this reason if a manifest is pulled by _digest_ from a registry 2.3 with Docker
+Engine 1.9 and older, and the manifest was pushed with Docker Engine 1.10, a
+security check will cause the Engine to receive a manifest it cannot use and the
+pull will fail.
File diff suppressed because it is too large
237  docs/deploying.md  Normal file
@@ -0,0 +1,237 @@
+<!--[metadata]>
++++
+title = "Deploying a registry server"
+description = "Explains how to deploy a registry"
+keywords = ["registry, on-prem, images, tags, repository, distribution, deployment"]
+[menu.main]
+parent="smn_registry"
+weight=3
++++
+<![end-metadata]-->
+
+# Deploying a registry server
+
+You need to [install Docker version 1.6.0 or newer](/engine/installation/index.md).
+
+## Running on localhost
+
+Start your registry:
+
+    docker run -d -p 5000:5000 --restart=always --name registry registry:2
+
+You can now use it with docker.
+
+Get any image from the hub and tag it to point to your registry:
+
+    docker pull ubuntu && docker tag ubuntu localhost:5000/ubuntu
+
+... then push it to your registry:
+
+    docker push localhost:5000/ubuntu
+
+... then pull it back from your registry:
+
+    docker pull localhost:5000/ubuntu
+
+To stop your registry, you would:
+
+    docker stop registry && docker rm -v registry
+
+## Storage
+
+By default, your registry data is persisted as a [docker volume](/engine/tutorials/dockervolumes.md) on the host filesystem. Properly understanding volumes is essential if you want to stick with a local filesystem storage.
+
+Specifically, you might want to point your volume location to a specific place in order to more easily access your registry data. To do so you can:
+
+    docker run -d -p 5000:5000 --restart=always --name registry \
+      -v `pwd`/data:/var/lib/registry \
+      registry:2
+
+### Alternatives
+
+You should usually consider using [another storage backend](./storage-drivers/index.md) instead of the local filesystem. Use the [storage configuration options](./configuration.md#storage) to configure an alternate storage backend.
+
+Using one of these will allow you to more easily scale your registry, and leverage your storage redundancy and availability features.
+
+## Running a domain registry
+
+While running on `localhost` has its uses, most people want their registry to be more widely available. To do so, the Docker engine requires you to secure it using TLS, which is conceptually very similar to configuring your web server with SSL.
+
+### Get a certificate
+
+Assuming that you own the domain `myregistrydomain.com`, and that its DNS record points to the host where you are running your registry, you first need to get a certificate from a CA.
+
+Create a `certs` directory:
+
+    mkdir -p certs
+
+Then move and/or rename your crt file to: `certs/domain.crt`, and your key file to: `certs/domain.key`.
+
+Make sure you stopped your registry from the previous steps, then start your registry again with TLS enabled:
+
+    docker run -d -p 5000:5000 --restart=always --name registry \
+      -v `pwd`/certs:/certs \
+      -e REGISTRY_HTTP_TLS_CERTIFICATE=/certs/domain.crt \
+      -e REGISTRY_HTTP_TLS_KEY=/certs/domain.key \
+      registry:2
+
+You should now be able to access your registry from another docker host:
+
+    docker pull ubuntu
+    docker tag ubuntu myregistrydomain.com:5000/ubuntu
+    docker push myregistrydomain.com:5000/ubuntu
+    docker pull myregistrydomain.com:5000/ubuntu
+
+#### Gotcha
+
+A certificate issuer may supply you with an *intermediate* certificate. In this case, you must combine your certificate with the intermediate's to form a *certificate bundle*. You can do this using the `cat` command:
+
+    cat domain.crt intermediate-certificates.pem > certs/domain.crt
+
+### Let's Encrypt
+
+The registry supports using Let's Encrypt to automatically obtain a browser-trusted certificate. For more
+information on Let's Encrypt, see [https://letsencrypt.org/how-it-works/](https://letsencrypt.org/how-it-works/) and the relevant section of the [registry configuration](configuration.md#letsencrypt).
+
+### Alternatives
+
+While rarely advisable, you may want to use self-signed certificates instead, or use your registry in an insecure fashion. You will find instructions [here](insecure.md).
+
+## Load Balancing Considerations
+
+One may want to use a load balancer to distribute load, terminate TLS or
+provide high availability. While a full load balancing setup is outside the
+scope of this document, there are a few considerations that can make the process
+smoother.
+
+The most important aspect is that a load balanced cluster of registries must
+share the same resources. For the current version of the registry, this means
+the following must be the same:
+
+  - Storage Driver
+  - HTTP Secret
+  - Redis Cache (if configured)
+
+If any of these are different, the registry will have trouble serving requests.
+As an example, if you're using the filesystem driver, all registry instances
+must have access to the same filesystem root, which means they should be in
+the same machine. For other drivers, such as s3 or azure, they should be
+accessing the same resource, and will likely share an identical configuration.
+The _HTTP Secret_ coordinates uploads, so also must be the same across
+instances. Configuring different redis instances will work (at the time
+of writing), but will not be optimal if the instances are not shared, causing
+more requests to be directed to the backend.
+
+#### Important/Required HTTP-Headers
+
+Getting the headers correct is very important. For all responses to any
+request under the "/v2/" url space, the `Docker-Distribution-API-Version`
+header should be set to the value "registry/2.0", even for a 4xx response.
+This header allows the docker engine to quickly resolve authentication realms
+and fallback to version 1 registries, if necessary. Confirming this is setup
+correctly can help avoid problems with fallback.
+
+In the same train of thought, you must make sure you are properly sending the
+`X-Forwarded-Proto`, `X-Forwarded-For` and `Host` headers to their "client-side"
+values. Failure to do so usually makes the registry issue redirects to internal
+hostnames or downgrade from https to http.
+
+A properly secured registry should return 401 when the "/v2/" endpoint is hit
+without credentials. The response should include a `WWW-Authenticate`
+challenge, providing guidance on how to authenticate, such as with basic auth
+or a token service. If the load balancer has health checks, it is recommended
+to configure it to consider a 401 response as healthy and any other as down.
+This will secure your registry by ensuring that configuration problems with
+authentication don't accidentally expose an unprotected registry. If you're
+using a less sophisticated load balancer, such as Amazon's Elastic Load
+Balancer, that doesn't allow one to change the healthy response code, health
+checks can be directed at "/", which will always return a `200 OK` response.
+
+## Restricting access
+
+Except for registries running on secure local networks, registries should always implement access restrictions.
+
+### Native basic auth
+
+The simplest way to achieve access restriction is through basic authentication (this is very similar to other web servers' basic authentication mechanism).
+
+> **Warning**: You **cannot** use authentication with an insecure registry. You have to [configure TLS first](#running-a-domain-registry) for this to work.
+
+First create a password file with one entry for the user "testuser", with password "testpassword":
+
+    mkdir auth
+    docker run --entrypoint htpasswd registry:2 -Bbn testuser testpassword > auth/htpasswd
+
+Make sure you stopped your registry from the previous step, then start it again:
+
+    docker run -d -p 5000:5000 --restart=always --name registry \
+      -v `pwd`/auth:/auth \
+      -e "REGISTRY_AUTH=htpasswd" \
+      -e "REGISTRY_AUTH_HTPASSWD_REALM=Registry Realm" \
+      -e REGISTRY_AUTH_HTPASSWD_PATH=/auth/htpasswd \
+      -v `pwd`/certs:/certs \
+      -e REGISTRY_HTTP_TLS_CERTIFICATE=/certs/domain.crt \
+      -e REGISTRY_HTTP_TLS_KEY=/certs/domain.key \
+      registry:2
+
+You should now be able to:
+
+    docker login myregistrydomain.com:5000
+
+And then push and pull images as an authenticated user.
+
+#### Gotcha
+
+Seeing X509 errors is usually a sign you are trying to use self-signed certificates, and failed to [configure your docker daemon properly](insecure.md).
+
+### Alternatives
+
+1. You may want to leverage more advanced basic auth implementations through a proxy design, in front of the registry. You will find examples of such patterns in the [recipes list](recipes/index.md).
+
+2. Alternatively, the Registry also supports delegated authentication, redirecting users to a specific, trusted token server. That approach requires significantly more investment, and only makes sense if you want to fully configure ACLs and more control over the Registry integration into your global authorization and authentication systems.
+
+You will find [background information here](spec/auth/token.md), and [configuration information here](configuration.md#auth).
+
+Beware that you will have to implement your own authentication service for this to work, or leverage a third-party implementation.
+
+## Managing with Compose
+
+As your registry configuration grows more complex, dealing with it can quickly become tedious.
+
+It's highly recommended to use [Docker Compose](/compose/index.md) to facilitate operating your registry.
+
+Here is a simple `docker-compose.yml` example that condenses everything explained so far:
+
+```
+registry:
+  restart: always
+  image: registry:2
+  ports:
+    - 5000:5000
+  environment:
+    REGISTRY_HTTP_TLS_CERTIFICATE: /certs/domain.crt
+    REGISTRY_HTTP_TLS_KEY: /certs/domain.key
+    REGISTRY_AUTH: htpasswd
+    REGISTRY_AUTH_HTPASSWD_PATH: /auth/htpasswd
+    REGISTRY_AUTH_HTPASSWD_REALM: Registry Realm
+  volumes:
+    - /path/data:/var/lib/registry
+    - /path/certs:/certs
+    - /path/auth:/auth
+```
+
+> **Warning**: replace `/path` by whatever directory holds your `certs` and `auth` folders from above.
+
+You can then start your registry with a simple
+
+    docker-compose up -d
+
+## Next
+
+You will find more specific and advanced information in the following sections:
+
+ - [Configuration reference](configuration.md)
+ - [Working with notifications](notifications.md)
+ - [Advanced "recipes"](recipes/index.md)
+ - [Registry API](spec/api.md)
+ - [Storage driver model](storage-drivers/index.md)
+ - [Token authentication](spec/auth/token.md)
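As a quick sanity check of the header and authentication behaviour described in the "Load Balancing Considerations" section of the guide above, the sketch below probes the `/v2/` endpoint and prints the `Docker-Distribution-API-Version` header. This is an illustration only; `myregistrydomain.com:5000` is the same placeholder host used throughout the guide, not a real registry.

```go
package main

import (
	"fmt"
	"net/http"
)

func main() {
	// Probe the top-level /v2/ endpoint. A correctly configured registry
	// answers every /v2/ response with Docker-Distribution-API-Version:
	// registry/2.0, and replies 401 when credentials are required but absent.
	resp, err := http.Get("https://myregistrydomain.com:5000/v2/")
	if err != nil {
		panic(err)
	}
	defer resp.Body.Close()

	fmt.Println("status:", resp.StatusCode)
	fmt.Println("api version:", resp.Header.Get("Docker-Distribution-API-Version"))
}
```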
27  docs/deprecated.md  Normal file
@@ -0,0 +1,27 @@
+<!--[metadata]>
++++
+title = "Deprecated Features"
+description = "describes deprecated functionality"
+keywords = ["registry, manifest, images, signatures, repository, distribution, digest"]
+[menu.main]
+parent="smn_registry_ref"
+weight=8
++++
+<![end-metadata]-->
+
+# Docker Registry Deprecation
+
+This document details functionality or components which are deprecated within
+the registry.
+
+### v2.5.0
+
+The signature store has been removed from the registry. Since `v2.4.0` it has
+been possible to configure the registry to generate manifest signatures rather
+than load them from storage. In this version of the registry this becomes
+the default behavior. Signatures which are attached to manifests on put are
+not stored in the registry. This does not alter the functional behavior of
+the registry.
+
+Old signature blobs can be removed from the registry storage by running the
+garbage-collect subcommand.
137  docs/garbage-collection.md  Normal file
@@ -0,0 +1,137 @@
+<!--[metadata]>
++++
+title = "Garbage Collection"
+description = "High level discussion of garbage collection"
+keywords = ["registry, garbage, images, tags, repository, distribution"]
+[menu.main]
+parent="smn_registry_ref"
+weight=4
++++
+<![end-metadata]-->
+
+# Garbage Collection
+
+As of v2.4.0 a garbage collector command is included within the registry binary.
+This document describes what this command does and how and why it should be used.
+
+## What is Garbage Collection?
+
+From [wikipedia](https://en.wikipedia.org/wiki/Garbage_collection_(computer_science)):
+
+"In computer science, garbage collection (GC) is a form of automatic memory management. The
+garbage collector, or just collector, attempts to reclaim garbage, or memory occupied by
+objects that are no longer in use by the program."
+
+In the context of the Docker registry, garbage collection is the process of
+removing blobs from the filesystem which are no longer referenced by a
+manifest. Blobs can include both layers and manifests.
+
+## Why Garbage Collection?
+
+Registry data can occupy considerable amounts of disk space and freeing up
+this disk space is an oft-requested feature. Additionally for reasons of security it
+can be desirable to ensure that certain layers no longer exist on the filesystem.
+
+## Garbage Collection in the Registry
+
+Filesystem layers are stored by their content address in the Registry. This
+has many advantages, one of which is that data is stored once and referred to by manifests.
+See [here](compatibility.md#content-addressable-storage-cas) for more details.
+
+Layers are therefore shared amongst manifests; each manifest maintains a reference
+to the layer. As long as a layer is referenced by one manifest, it cannot be garbage
+collected.
+
+Manifests and layers can be `deleted` with the registry API (refer to the API
+documentation [here](spec/api.md#deleting-a-layer) and
+[here](spec/api.md#deleting-an-image) for details). This API removes references
+to the target and makes them eligible for garbage collection. It also makes them
+unable to be read via the API.
+
+If a layer is deleted it will be removed from the filesystem when garbage collection
+is run. If a manifest is deleted the layers to which it refers will be removed from
+the filesystem if no other manifests refer to them.
+
+### Example
+
+In this example manifest `A` references two layers: `a` and `b`. Manifest `B` references
+layers `a` and `c`. In this state, nothing is eligible for garbage collection:
+
+```
+A -----> a <----- B
+    \--> b     |
+         c <--/
+```
+
+Manifest `B` is deleted via the API:
+
+```
+A -----> a     B
+    \--> b
+         c
+```
+
+In this state layer `c` no longer has a reference and is eligible for garbage
+collection. Layer `a` had one reference removed but will not be garbage
+collected as it is still referenced by manifest `A`. The blob representing
+manifest `B` will also be eligible for garbage collection.
+
+After garbage collection has been run manifest `A` and its blobs remain.
+
+```
+A -----> a
+    \--> b
+```
+
+## How Garbage Collection works
+
+Garbage collection runs in two phases. First, in the 'mark' phase, the process
+scans all the manifests in the registry. From these manifests, it constructs a
+set of content address digests. This set is the 'mark set' and denotes the set
+of blobs to *not* delete. Secondly, in the 'sweep' phase, the process scans all
+the blobs and if a blob's content address digest is not in the mark set, the
+process will delete it.
+
+> **NOTE** You should ensure that the registry is in read-only mode or not running at
+> all. If you were to upload an image while garbage collection is running, there is the
+> risk that the image's layers will be mistakenly deleted, leading to a corrupted image.
+
+This type of garbage collection is known as stop-the-world garbage collection. In future
+registry versions the intention is that garbage collection will be an automated background
+action and this manual process will no longer apply.
+
+# Running garbage collection
+
+Garbage collection can be run as follows
+
+`bin/registry garbage-collect [--dry-run] /path/to/config.yml`
+
+The garbage-collect command accepts a `--dry-run` parameter, which will print the progress
+of the mark and sweep phases without removing any data. Running with a log level of `info`
+will give a clear indication of what will and will not be deleted.
+
+_Sample output from a dry run garbage collection with registry log level set to `info`_
+
+```
+hello-world
+hello-world: marking manifest sha256:fea8895f450959fa676bcc1df0611ea93823a735a01205fd8622846041d0c7cf
+hello-world: marking blob sha256:03f4658f8b782e12230c1783426bd3bacce651ce582a4ffb6fbbfa2079428ecb
+hello-world: marking blob sha256:a3ed95caeb02ffe68cdd9fd84406680ae93d633cb16422d00e8a7c22955b46d4
+hello-world: marking configuration sha256:690ed74de00f99a7d00a98a5ad855ac4febd66412be132438f9b8dbd300a937d
+ubuntu
+
+4 blobs marked, 5 blobs eligible for deletion
+blob eligible for deletion: sha256:28e09fddaacbfc8a13f82871d9d66141a6ed9ca526cb9ed295ef545ab4559b81
+blob eligible for deletion: sha256:7e15ce58ccb2181a8fced7709e9893206f0937cc9543bc0c8178ea1cf4d7e7b5
+blob eligible for deletion: sha256:87192bdbe00f8f2a62527f36bb4c7c7f4eaf9307e4b87e8334fb6abec1765bcb
+blob eligible for deletion: sha256:b549a9959a664038fc35c155a95742cf12297672ca0ae35735ec027d55bf4e97
+blob eligible for deletion: sha256:f251d679a7c61455f06d793e43c06786d7766c88b8c24edf242b2c08e3c3f599
+```
70  docs/glossary.md  Normal file
@@ -0,0 +1,70 @@
+<!--[metadata]>
++++
+draft = true
++++
+<![end-metadata]-->
+
+# Glossary
+
+This page contains definitions for distribution related terms.
+
+<dl>
+  <dt id="blob"><h4>Blob</h4></dt>
+  <dd>
+  <blockquote>A blob is any kind of content that is stored by a Registry under a content-addressable identifier (a "digest").</blockquote>
+  <p>
+  <a href="#layer">Layers</a> are a good example of "blobs".
+  </p>
+  </dd>
+
+  <dt id="image"><h4>Image</h4></dt>
+  <dd>
+  <blockquote>An image is a named set of immutable data from which a Docker container can be created.</blockquote>
+  <p>
+  An image is represented by a json file called a <a href="#manifest">manifest</a>, and is conceptually a set of <a href="#layer">layers</a>.
+
+  Image names indicate the location where they can be pulled from and pushed to, as they usually start with a <a href="#registry">registry</a> domain name and port.
+
+  </p>
+  </dd>
+
+  <dt id="layer"><h4>Layer</h4></dt>
+  <dd>
+  <blockquote>A layer is a tar archive bundling partial content from a filesystem.</blockquote>
+  <p>
+  Layers from an <a href="#image">image</a> are usually extracted in order on top of each other to make up a root filesystem from which containers run out.
+  </p>
+  </dd>
+
+  <dt id="manifest"><h4>Manifest</h4></dt>
+  <dd><blockquote>A manifest is the JSON representation of an image.</blockquote></dd>
+
+  <dt id="namespace"><h4>Namespace</h4></dt>
+  <dd><blockquote>A namespace is a collection of repositories with a common name prefix.</blockquote>
+  <p>
+  The namespace with an empty prefix is considered the Global Namespace.
+  </p>
+  </dd>
+
+  <dt id="registry"><h4>Registry</h4></dt>
+  <dd><blockquote>A registry is a service that lets you store and deliver <a href="#images">images</a>.</blockquote>
+  </dd>
+
+  <dt id="repository"><h4>Repository</h4></dt>
+  <dd>
+  <blockquote>A repository is a set of data containing all versions of a given image.</blockquote>
+  </dd>
+
+  <dt id="scope"><h4>Scope</h4></dt>
+  <dd><blockquote>A scope is the portion of a namespace onto which a given authorization token is granted.</blockquote></dd>
+
+  <dt id="tag"><h4>Tag</h4></dt>
+  <dd><blockquote>A tag is conceptually a "version" of a <a href="#image">named image</a>.</blockquote>
+  <p>
+  Example: `docker pull myimage:latest` instructs docker to pull the image "myimage" in version "latest".
+  </p>
+
+  </dd>
+
+</dl>
24  docs/help.md  Normal file
@@ -0,0 +1,24 @@
+<!--[metadata]>
++++
+title = "Getting help"
+description = "Getting help with the Registry"
+keywords = ["registry, on-prem, images, tags, repository, distribution, help, 101, TL;DR"]
+[menu.main]
+parent="smn_registry"
+weight=9
++++
+<![end-metadata]-->
+
+# Getting help
+
+If you need help, or just want to chat, you can reach us:
+
+- on irc: `#docker-distribution` on freenode
+- on the [mailing list](https://groups.google.com/a/dockerproject.org/forum/#!forum/distribution) (mail at <distribution@dockerproject.org>)
+
+If you want to report a bug:
+
+- be sure to first read about [how to contribute](https://github.com/docker/distribution/blob/master/CONTRIBUTING.md)
+- you can then do so on the [GitHub project bugtracker](https://github.com/docker/distribution/issues)
+
+You can also find out more about the Docker project's [Getting Help resources](/opensource/get-help.md).
1  docs/images/notifications.gliffy  Normal file (diff suppressed because one or more lines are too long)
BIN  docs/images/notifications.png  Normal file (binary file not shown; 37 KiB)
1  docs/images/notifications.svg  Normal file (diff suppressed because one or more lines are too long; 31 KiB)
BIN  docs/images/v2-registry-auth.png  Normal file (binary file not shown; 12 KiB)
Some files were not shown because too many files have changed in this diff.