Switch to github.com/golang/dep for vendoring

Signed-off-by: Mrunal Patel <mrunalp@gmail.com>
Author: Mrunal Patel <mrunalp@gmail.com>
Date: 2017-01-31 16:45:59 -08:00
Parent: d6ab91be27
Commit: 8e5b17cf13
15431 changed files with 3971413 additions and 8881 deletions

3
vendor/github.com/containers/storage/.dockerignore generated vendored Normal file

@@ -0,0 +1,3 @@
bundles
.gopath
vendor/pkg

30
vendor/github.com/containers/storage/.gitignore generated vendored Normal file

@@ -0,0 +1,30 @@
# Docker project generated files to ignore
# if you want to ignore files created by your editor/tools,
# please consider a global .gitignore https://help.github.com/articles/ignoring-files
*.exe
*.exe~
*.orig
*.test
.*.swp
.DS_Store
# a .bashrc may be added to customize the build environment
.bashrc
.gopath/
autogen/
bundles/
cmd/dockerd/dockerd
cmd/docker/docker
dockerversion/version_autogen.go
docs/AWS_S3_BUCKET
docs/GITCOMMIT
docs/GIT_BRANCH
docs/VERSION
docs/_build
docs/_static
docs/_templates
docs/changed-files
# generated by man/md2man-all.sh
man/man1
man/man5
man/man8
vendor/pkg/

254
vendor/github.com/containers/storage/.mailmap generated vendored Normal file

@@ -0,0 +1,254 @@
# Generate AUTHORS: hack/generate-authors.sh
# Tip for finding duplicates (besides scanning the output of AUTHORS for name
# duplicates that aren't also email duplicates): scan the output of:
# git log --format='%aE - %aN' | sort -uf
#
# For explanation on this file format: man git-shortlog
Patrick Stapleton <github@gdi2290.com>
Shishir Mahajan <shishir.mahajan@redhat.com> <smahajan@redhat.com>
Erwin van der Koogh <info@erronis.nl>
Ahmed Kamal <email.ahmedkamal@googlemail.com>
Tejesh Mehta <tejesh.mehta@gmail.com> <tj@init.me>
Cristian Staretu <cristian.staretu@gmail.com>
Cristian Staretu <cristian.staretu@gmail.com> <unclejacksons@gmail.com>
Cristian Staretu <cristian.staretu@gmail.com> <unclejack@users.noreply.github.com>
Marcus Linke <marcus.linke@gmx.de>
Aleksandrs Fadins <aleks@s-ko.net>
Christopher Latham <sudosurootdev@gmail.com>
Hu Keping <hukeping@huawei.com>
Wayne Chang <wayne@neverfear.org>
Chen Chao <cc272309126@gmail.com>
Daehyeok Mun <daehyeok@gmail.com>
<daehyeok@gmail.com> <daehyeok@daehyeokui-MacBook-Air.local>
<jt@yadutaf.fr> <admin@jtlebi.fr>
<jeff@docker.com> <jefferya@programmerq.net>
<charles.hooper@dotcloud.com> <chooper@plumata.com>
<daniel.mizyrycki@dotcloud.com> <daniel@dotcloud.com>
<daniel.mizyrycki@dotcloud.com> <mzdaniel@glidelink.net>
Guillaume J. Charmes <guillaume.charmes@docker.com> <charmes.guillaume@gmail.com>
<guillaume.charmes@docker.com> <guillaume@dotcloud.com>
<guillaume.charmes@docker.com> <guillaume@docker.com>
<guillaume.charmes@docker.com> <guillaume.charmes@dotcloud.com>
<guillaume.charmes@docker.com> <guillaume@charmes.net>
<kencochrane@gmail.com> <KenCochrane@gmail.com>
Thatcher Peskens <thatcher@docker.com>
Thatcher Peskens <thatcher@docker.com> <thatcher@dotcloud.com>
Thatcher Peskens <thatcher@docker.com> dhrp <thatcher@gmx.net>
Jérôme Petazzoni <jerome.petazzoni@dotcloud.com> jpetazzo <jerome.petazzoni@dotcloud.com>
Jérôme Petazzoni <jerome.petazzoni@dotcloud.com> <jp@enix.org>
Joffrey F <joffrey@docker.com>
Joffrey F <joffrey@docker.com> <joffrey@dotcloud.com>
Joffrey F <joffrey@docker.com> <f.joffrey@gmail.com>
Tim Terhorst <mynamewastaken+git@gmail.com>
Andy Smith <github@anarkystic.com>
<kalessin@kalessin.fr> <louis@dotcloud.com>
<victor.vieux@docker.com> <victor.vieux@dotcloud.com>
<victor.vieux@docker.com> <victor@dotcloud.com>
<victor.vieux@docker.com> <dev@vvieux.com>
<victor.vieux@docker.com> <victor@docker.com>
<victor.vieux@docker.com> <vieux@docker.com>
<victor.vieux@docker.com> <victorvieux@gmail.com>
<dominik@honnef.co> <dominikh@fork-bomb.org>
<ehanchrow@ine.com> <eric.hanchrow@gmail.com>
Walter Stanish <walter@pratyeka.org>
<daniel@gasienica.ch> <dgasienica@zynga.com>
Roberto Hashioka <roberto_hashioka@hotmail.com>
Konstantin Pelykh <kpelykh@zettaset.com>
David Sissitka <me@dsissitka.com>
Nolan Darilek <nolan@thewordnerd.info>
<mastahyeti@gmail.com> <mastahyeti@users.noreply.github.com>
Benoit Chesneau <bchesneau@gmail.com>
Jordan Arentsen <blissdev@gmail.com>
Daniel Garcia <daniel@danielgarcia.info>
Miguel Angel Fernández <elmendalerenda@gmail.com>
Bhiraj Butala <abhiraj.butala@gmail.com>
Faiz Khan <faizkhan00@gmail.com>
Victor Lyuboslavsky <victor@victoreda.com>
Jean-Baptiste Barth <jeanbaptiste.barth@gmail.com>
Matthew Mueller <mattmuelle@gmail.com>
<mosoni@ebay.com> <mohitsoni1989@gmail.com>
Shih-Yuan Lee <fourdollars@gmail.com>
Daniel Mizyrycki <daniel.mizyrycki@dotcloud.com> root <root@vagrant-ubuntu-12.10.vagrantup.com>
Jean-Baptiste Dalido <jeanbaptiste@appgratis.com>
<proppy@google.com> <proppy@aminche.com>
<michael@docker.com> <michael@crosbymichael.com>
<michael@docker.com> <crosby.michael@gmail.com>
<michael@docker.com> <crosbymichael@gmail.com>
<github@developersupport.net> <github@metaliveblog.com>
<brandon@ifup.org> <brandon@ifup.co>
<dano@spotify.com> <daniel.norberg@gmail.com>
<danny@codeaholics.org> <Danny.Yates@mailonline.co.uk>
<gurjeet@singh.im> <singh.gurjeet@gmail.com>
<shawn@churchofgit.com> <shawnlandden@gmail.com>
<sjoerd-github@linuxonly.nl> <sjoerd@byte.nl>
<solomon@docker.com> <solomon.hykes@dotcloud.com>
<solomon@docker.com> <solomon@dotcloud.com>
<solomon@docker.com> <s@docker.com>
Sven Dowideit <SvenDowideit@home.org.au>
Sven Dowideit <SvenDowideit@home.org.au> <SvenDowideit@fosiki.com>
Sven Dowideit <SvenDowideit@home.org.au> <SvenDowideit@docker.com>
Sven Dowideit <SvenDowideit@home.org.au> <¨SvenDowideit@home.org.au¨>
Sven Dowideit <SvenDowideit@home.org.au> <SvenDowideit@home.org.au>
Sven Dowideit <SvenDowideit@home.org.au> <SvenDowideit@users.noreply.github.com>
Sven Dowideit <SvenDowideit@home.org.au> <sven@t440s.home.gateway>
<alexl@redhat.com> <alexander.larsson@gmail.com>
Alexander Morozov <lk4d4@docker.com> <lk4d4math@gmail.com>
Alexander Morozov <lk4d4@docker.com>
<git.nivoc@neverbox.com> <kuehnle@online.de>
O.S. Tezer <ostezer@gmail.com>
<ostezer@gmail.com> <ostezer@users.noreply.github.com>
Roberto G. Hashioka <roberto.hashioka@docker.com> <roberto_hashioka@hotmail.com>
<justin.p.simonelis@gmail.com> <justin.simonelis@PTS-JSIMON2.toronto.exclamation.com>
<taim@bosboot.org> <maztaim@users.noreply.github.com>
<viktor.vojnovski@amadeus.com> <vojnovski@gmail.com>
<vbatts@redhat.com> <vbatts@hashbangbash.com>
<altsysrq@gmail.com> <iamironbob@gmail.com>
Sridhar Ratnakumar <sridharr@activestate.com>
Sridhar Ratnakumar <sridharr@activestate.com> <github@srid.name>
Liang-Chi Hsieh <viirya@gmail.com>
Aleksa Sarai <asarai@suse.de>
Aleksa Sarai <asarai@suse.de> <asarai@suse.com>
Aleksa Sarai <asarai@suse.de> <cyphar@cyphar.com>
Will Weaver <monkey@buildingbananas.com>
Timothy Hobbs <timothyhobbs@seznam.cz>
Nathan LeClaire <nathan.leclaire@docker.com> <nathan.leclaire@gmail.com>
Nathan LeClaire <nathan.leclaire@docker.com> <nathanleclaire@gmail.com>
<github@hollensbe.org> <erik+github@hollensbe.org>
<github@albersweb.de> <albers@users.noreply.github.com>
<lsm5@fedoraproject.org> <lsm5@redhat.com>
<marc@marc-abramowitz.com> <msabramo@gmail.com>
Matthew Heon <mheon@redhat.com> <mheon@mheonlaptop.redhat.com>
<bernat@luffy.cx> <vincent@bernat.im>
<bernat@luffy.cx> <Vincent.Bernat@exoscale.ch>
<p@pwaller.net> <peter@scraperwiki.com>
<andrew.weiss@outlook.com> <andrew.weiss@microsoft.com>
Francisco Carriedo <fcarriedo@gmail.com>
<julienbordellier@gmail.com> <git@julienbordellier.com>
<ahmetb@microsoft.com> <ahmetalpbalkan@gmail.com>
<arnaud.porterie@docker.com> <icecrime@gmail.com>
<baloo@gandi.net> <superbaloo+registrations.github@superbaloo.net>
Brian Goff <cpuguy83@gmail.com>
<cpuguy83@gmail.com> <bgoff@cpuguy83-mbp.home>
<eric@windisch.us> <ewindisch@docker.com>
<frank.rosquin+github@gmail.com> <frank.rosquin@gmail.com>
Hollie Teal <hollie@docker.com>
<hollie@docker.com> <hollie.teal@docker.com>
<hollie@docker.com> <hollietealok@users.noreply.github.com>
<huu@prismskylabs.com> <whoshuu@gmail.com>
Jessica Frazelle <jess@mesosphere.com>
Jessica Frazelle <jess@mesosphere.com> <jfrazelle@users.noreply.github.com>
Jessica Frazelle <jess@mesosphere.com> <acidburn@docker.com>
Jessica Frazelle <jess@mesosphere.com> <jess@docker.com>
Jessica Frazelle <jess@mesosphere.com> <princess@docker.com>
<konrad.wilhelm.kleine@gmail.com> <kwk@users.noreply.github.com>
<tintypemolly@gmail.com> <tintypemolly@Ohui-MacBook-Pro.local>
<estesp@linux.vnet.ibm.com> <estesp@gmail.com>
<github@gone.nl> <thaJeztah@users.noreply.github.com>
Thomas LEVEIL <thomasleveil@gmail.com> Thomas LÉVEIL <thomasleveil@users.noreply.github.com>
<oi@truffles.me.uk> <timruffles@googlemail.com>
<Vincent.Bernat@exoscale.ch> <bernat@luffy.cx>
Antonio Murdaca <antonio.murdaca@gmail.com> <amurdaca@redhat.com>
Antonio Murdaca <antonio.murdaca@gmail.com> <runcom@redhat.com>
Antonio Murdaca <antonio.murdaca@gmail.com> <me@runcom.ninja>
Antonio Murdaca <antonio.murdaca@gmail.com> <runcom@linux.com>
Antonio Murdaca <antonio.murdaca@gmail.com> <runcom@users.noreply.github.com>
Darren Shepherd <darren.s.shepherd@gmail.com> <darren@rancher.com>
Deshi Xiao <dxiao@redhat.com> <dsxiao@dataman-inc.com>
Deshi Xiao <dxiao@redhat.com> <xiaods@gmail.com>
Doug Davis <dug@us.ibm.com> <duglin@users.noreply.github.com>
Jacob Atzen <jacob@jacobatzen.dk> <jatzen@gmail.com>
Jeff Nickoloff <jeff.nickoloff@gmail.com> <jeff@allingeek.com>
John Howard (VM) <John.Howard@microsoft.com> <jhowardmsft@users.noreply.github.com>
John Howard (VM) <John.Howard@microsoft.com>
John Howard (VM) <John.Howard@microsoft.com> <john.howard@microsoft.com>
John Howard (VM) <John.Howard@microsoft.com> <jhoward@microsoft.com>
Madhu Venugopal <madhu@socketplane.io> <madhu@docker.com>
Mary Anthony <mary.anthony@docker.com> <mary@docker.com>
Mary Anthony <mary.anthony@docker.com> moxiegirl <mary@docker.com>
Mary Anthony <mary.anthony@docker.com> <moxieandmore@gmail.com>
mattyw <mattyw@me.com> <gh@mattyw.net>
resouer <resouer@163.com> <resouer@gmail.com>
AJ Bowen <aj@gandi.net> soulshake <amy@gandi.net>
AJ Bowen <aj@gandi.net> soulshake <aj@gandi.net>
Tibor Vass <teabee89@gmail.com> <tibor@docker.com>
Tibor Vass <teabee89@gmail.com> <tiborvass@users.noreply.github.com>
Vincent Bernat <bernat@luffy.cx> <Vincent.Bernat@exoscale.ch>
Yestin Sun <sunyi0804@gmail.com> <yestin.sun@polyera.com>
bin liu <liubin0329@users.noreply.github.com> <liubin0329@gmail.com>
John Howard (VM) <John.Howard@microsoft.com> jhowardmsft <jhoward@microsoft.com>
Ankush Agarwal <ankushagarwal11@gmail.com> <ankushagarwal@users.noreply.github.com>
Tangi COLIN <tangicolin@gmail.com> tangicolin <tangicolin@gmail.com>
Allen Sun <allen.sun@daocloud.io>
Adrien Gallouët <adrien@gallouet.fr> <angt@users.noreply.github.com>
<aanm90@gmail.com> <martins@noironetworks.com>
Anuj Bahuguna <anujbahuguna.dev@gmail.com>
Anusha Ragunathan <anusha.ragunathan@docker.com> <anusha@docker.com>
Avi Miller <avi.miller@oracle.com> <avi.miller@gmail.com>
Brent Salisbury <brent.salisbury@docker.com> <brent@docker.com>
Chander G <chandergovind@gmail.com>
Chun Chen <ramichen@tencent.com> <chenchun.feed@gmail.com>
Ying Li <cyli@twistedmatrix.com>
Daehyeok Mun <daehyeok@gmail.com> <daehyeok@daehyeok-ui-MacBook-Air.local>
<dqminh@cloudflare.com> <dqminh89@gmail.com>
Daniel, Dao Quang Minh <dqminh@cloudflare.com>
Daniel Nephin <dnephin@docker.com> <dnephin@gmail.com>
Dave Tucker <dt@docker.com> <dave@dtucker.co.uk>
Doug Tangren <d.tangren@gmail.com>
Frederick F. Kautz IV <fkautz@redhat.com> <fkautz@alumni.cmu.edu>
Ben Golub <ben.golub@dotcloud.com>
Harold Cooper <hrldcpr@gmail.com>
hsinko <21551195@zju.edu.cn> <hsinko@users.noreply.github.com>
Josh Hawn <josh.hawn@docker.com> <jlhawn@berkeley.edu>
Justin Cormack <justin.cormack@docker.com>
<justin.cormack@docker.com> <justin.cormack@unikernel.com>
<justin.cormack@docker.com> <justin@specialbusservice.com>
Kamil Domański <kamil@domanski.co>
Lei Jitang <leijitang@huawei.com>
<leijitang@huawei.com> <leijitang@gmail.com>
Linus Heckemann <lheckemann@twig-world.com>
<lheckemann@twig-world.com> <anonymouse2048@gmail.com>
Lynda O'Leary <lyndaoleary29@gmail.com>
<lyndaoleary29@gmail.com> <lyndaoleary@hotmail.com>
Marianna Tessel <mtesselh@gmail.com>
Michael Huettermann <michael@huettermann.net>
Moysés Borges <moysesb@gmail.com>
<moysesb@gmail.com> <moyses.furtado@wplex.com.br>
Nigel Poulton <nigelpoulton@hotmail.com>
Qiang Huang <h.huangqiang@huawei.com>
<h.huangqiang@huawei.com> <qhuang@10.0.2.15>
Boaz Shuster <ripcurld.github@gmail.com>
Shuwei Hao <haosw@cn.ibm.com>
<haosw@cn.ibm.com> <haoshuwei24@gmail.com>
Soshi Katsuta <soshi.katsuta@gmail.com>
<soshi.katsuta@gmail.com> <katsuta_soshi@cyberagent.co.jp>
Stefan Berger <stefanb@linux.vnet.ibm.com>
<stefanb@linux.vnet.ibm.com> <stefanb@us.ibm.com>
Stephen Day <stephen.day@docker.com>
<stephen.day@docker.com> <stevvooe@users.noreply.github.com>
Toli Kuznets <toli@docker.com>
Tristan Carel <tristan@cogniteev.com>
<tristan@cogniteev.com> <tristan.carel@gmail.com>
Vincent Demeester <vincent@sbr.pm>
<vincent@sbr.pm> <vincent+github@demeester.fr>
Vishnu Kannan <vishnuk@google.com>
xlgao-zju <xlgao@zju.edu.cn> xlgao <xlgao@zju.edu.cn>
yuchangchun <yuchangchun1@huawei.com> y00277921 <yuchangchun1@huawei.com>
<zij@case.edu> <zjaffee@us.ibm.com>
<anujbahuguna.dev@gmail.com> <abahuguna@fiberlink.com>
<eungjun.yi@navercorp.com> <semtlenori@gmail.com>
<haosw@cn.ibm.com> <haoshuwei1989@163.com>
Hao Shu Wei <haosw@cn.ibm.com>
<matt.bentley@docker.com> <mbentley@mbentley.net>
<MihaiBorob@gmail.com> <MihaiBorobocea@gmail.com>
<redmond.martin@gmail.com> <xgithub@redmond5.com>
<redmond.martin@gmail.com> <martin@tinychat.com>
<srbrahma@us.ibm.com> <sbrahma@us.ibm.com>
<suda.akihiro@lab.ntt.co.jp> <suda.kyoto@gmail.com>
<thomas@gazagnaire.org> <thomas@gazagnaire.com>
Shengbo Song <thomassong@tencent.com> mYmNeo <mymneo@163.com>
Shengbo Song <thomassong@tencent.com>
<sylvain@ascribe.io> <sylvain.bellemare@ezeep.com>
Sylvain Bellemare <sylvain@ascribe.io>

19
vendor/github.com/containers/storage/.tool/lint generated vendored Executable file

@@ -0,0 +1,19 @@
#!/usr/bin/env bash
set -o errexit
set -o nounset
set -o pipefail

for d in $(find . -type d -not -iwholename '*.git*' -a -not -iname '.tool' -a -not -iwholename '*vendor*'); do
  gometalinter \
    --exclude='error return value not checked.*(Close|Log|Print).*\(errcheck\)$' \
    --exclude='.*_test\.go:.*error return value not checked.*\(errcheck\)$' \
    --exclude='duplicate of.*_test.go.*\(dupl\)$' \
    --disable=aligncheck \
    --disable=gotype \
    --disable=gas \
    --cyclo-over=50 \
    --dupl-threshold=100 \
    --tests \
    --deadline=30s "${d}"
done

18
vendor/github.com/containers/storage/.travis.yml generated vendored Normal file

@@ -0,0 +1,18 @@
language: go
go:
- tip
- 1.7
- 1.6
dist: trusty
sudo: required
before_install:
- sudo apt-get -qq update
- sudo apt-get -qq install btrfs-tools libdevmapper-dev
script:
- AUTO_GOPATH=1 make install.tools
- AUTO_GOPATH=1 ./hack/make.sh validate-gofmt validate-pkg validate-lint validate-test validate-toml validate-vet validate-vendor
- AUTO_GOPATH=1 make .gitvalidation
- AUTO_GOPATH=1 make build-binary
- AUTO_GOPATH=1 ./hack/make.sh cross
- sudo env AUTO_GOPATH=1 PATH="$PATH" ./hack/make.sh test-unit
- AUTO_GOPATH=1 make docs

1522
vendor/github.com/containers/storage/AUTHORS generated vendored Normal file

File diff suppressed because it is too large

88
vendor/github.com/containers/storage/Makefile generated vendored Normal file

@@ -0,0 +1,88 @@
.PHONY: all binary build build-binary build-gccgo bundles cross default docs gccgo test test-integration-cli test-unit validate help win tgz

# set the graph driver as the current graphdriver if not set
DRIVER := $(if $(STORAGE_DRIVER),$(STORAGE_DRIVER),$(if $(DOCKER_GRAPHDRIVER),$(DOCKER_GRAPHDRIVER),$(shell docker info 2>&1 | grep "Storage Driver" | sed 's/.*: //')))
GIT_BRANCH := $(shell git rev-parse --abbrev-ref HEAD 2>/dev/null)
GIT_BRANCH_CLEAN := $(shell echo $(GIT_BRANCH) | sed -e "s/[^[:alnum:]]/-/g")
EPOCH_TEST_COMMIT := 0418ebf59f9e1f564831c0ba9378b7f8e40a1c73
SYSTEM_GOPATH := ${GOPATH}
RUNINVM := vagrant/runinvm.sh

default all: build ## validate all checks, build linux binaries, run all tests\ncross build non-linux binaries and generate archives\nusing VMs
	$(RUNINVM) hack/make.sh

build build-binary: bundles ## build using go on the host
	hack/make.sh binary

build-gccgo: bundles ## build using gccgo on the host
	hack/make.sh gccgo

binary: bundles
	$(RUNINVM) hack/make.sh binary

bundles:
	mkdir -p bundles

cross: build ## cross build the binaries for darwin, freebsd and windows\nusing VMs
	$(RUNINVM) hack/make.sh binary cross

win: build ## cross build the binary for windows using VMs
	$(RUNINVM) hack/make.sh win

tgz: build ## build the archives (.zip on windows and .tgz otherwise)\ncontaining the binaries on the host
	hack/make.sh binary cross tgz

docs: ## build the docs on the host
	$(MAKE) -C docs docs

gccgo: build-gccgo ## build the gcc-go linux binaries using VMs
	$(RUNINVM) hack/make.sh gccgo

test: build ## run the unit and integration tests using VMs
	$(RUNINVM) hack/make.sh binary cross test-unit test-integration-cli

test-integration-cli: build ## run the integration tests using VMs
	$(RUNINVM) hack/make.sh binary test-integration-cli

test-unit: build ## run the unit tests using VMs
	$(RUNINVM) hack/make.sh test-unit

validate: build ## validate DCO, Seccomp profile generation, gofmt,\n./pkg/ isolation, golint, tests, tomls, go vet and vendor\nusing VMs
	$(RUNINVM) hack/make.sh validate-dco validate-gofmt validate-pkg validate-lint validate-test validate-toml validate-vet validate-vendor

lint:
	@which gometalinter > /dev/null 2>/dev/null || (echo "ERROR: gometalinter not found. Consider 'make install.tools' target" && false)
	@echo "checking lint"
	@./.tool/lint

.PHONY: .gitvalidation
# When this is running in travis, it will only check the travis commit range
.gitvalidation:
	@which git-validation > /dev/null 2>/dev/null || (echo "ERROR: git-validation not found. Consider 'make install.tools' target" && false)
ifeq ($(TRAVIS_EVENT_TYPE),pull_request)
	git-validation -q -run DCO,short-subject
else ifeq ($(TRAVIS_EVENT_TYPE),push)
	git-validation -q -run DCO,short-subject -no-travis -range $(EPOCH_TEST_COMMIT)..$(TRAVIS_BRANCH)
else
	git-validation -q -run DCO,short-subject -range $(EPOCH_TEST_COMMIT)..HEAD
endif

.PHONY: install.tools
install.tools: .install.gitvalidation .install.gometalinter .install.md2man

.install.gitvalidation:
	GOPATH=${SYSTEM_GOPATH} go get github.com/vbatts/git-validation

.install.gometalinter:
	GOPATH=${SYSTEM_GOPATH} go get github.com/alecthomas/gometalinter
	GOPATH=${SYSTEM_GOPATH} gometalinter --install

.install.md2man:
	GOPATH=${SYSTEM_GOPATH} go get github.com/cpuguy83/go-md2man

help: ## this help
	@awk 'BEGIN {FS = ":.*?## "} /^[a-z A-Z_-]+:.*?## / {gsub(" ",",",$$1);gsub("\\\\n",sprintf("\n%22c"," "), $$2);printf "\033[36m%-21s\033[0m %s\n", $$1, $$2}' $(MAKEFILE_LIST)

19
vendor/github.com/containers/storage/NOTICE generated vendored Normal file

@@ -0,0 +1,19 @@
Docker
Copyright 2012-2016 Docker, Inc.

This product includes software developed at Docker, Inc. (https://www.docker.com).

This product contains software (https://github.com/kr/pty) developed
by Keith Rarick, licensed under the MIT License.

The following is courtesy of our legal counsel:

Use and transfer of Docker may be subject to certain restrictions by the
United States and other governments.
It is your responsibility to ensure that your use and/or transfer does not
violate applicable laws.

For more information, please see https://www.bis.doc.gov

See also https://www.apache.org/dev/crypto.html and/or seek legal counsel.

45
vendor/github.com/containers/storage/README.md generated vendored Normal file

@@ -0,0 +1,45 @@
`storage` is a Go library which aims to provide methods for storing filesystem
layers, container images, and containers. An `oci-storage` CLI wrapper is also
included for manual and scripting use.

To build the CLI wrapper, use `make build-binary`, optionally passing
`AUTO_GOPATH=1` as an additional argument to avoid having to set `$GOPATH`
manually. For information on other recognized targets, run `make help`.

Operations which use VMs expect to launch them using `vagrant`, defaulting to
using its `libvirt` provider. The boxes used are also available for the
`virtualbox` provider, and can be selected by setting `$VAGRANT_PROVIDER` to
`virtualbox` before kicking off the build.

The library manages three types of items: layers, images, and containers.

A *layer* is a copy-on-write filesystem which is notionally stored as a set of
changes relative to its *parent* layer, if it has one. A given layer can only
have one parent, but any layer can be the parent of multiple layers. Layers
which are parents of other layers should be treated as read-only.

An *image* is a reference to a particular layer (its _top_ layer), along with
other information which the library can manage for the convenience of its
caller. This information typically includes configuration templates for
running a binary contained within the image's layers, and may include
cryptographic signatures. Multiple images can reference the same layer, as the
differences between two images may not be in their layer contents.

A *container* is a read-write layer which is a child of an image's top layer,
along with information which the library can manage for the convenience of its
caller. This information typically includes configuration information for
running the specific container. Multiple containers can be derived from a
single image.

Layers, images, and containers are represented primarily by 32-character
hexadecimal IDs, but items of each kind can also have one or more arbitrary
names attached to them, which the library will automatically resolve to IDs
when they are passed in to API calls which expect IDs.

The library can store what it calls *metadata* for each of these types of
items. This is expected to be a small piece of data, since it is cached in
memory and stored along with the library's own bookkeeping information.

Additionally, the library can store one or more of what it calls *big data* for
images and containers. This is a named chunk of larger data, which is only in
memory when it is being read from or being written to its own disk file.
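The ID-or-name lookup behavior described above can be sketched as a minimal, self-contained Go example. This is illustrative only: the `itemStore` type and its `resolve` method are inventions of this sketch, not the library's actual API; only the rule (a reference may be either an ID or a name, and names resolve to IDs) comes from the text.

```go
package main

import "fmt"

// itemStore is a hypothetical stand-in for the library's bookkeeping:
// items are keyed by hexadecimal IDs, and arbitrary names map onto IDs.
type itemStore struct {
	ids   map[string]bool   // known item IDs
	names map[string]string // name -> ID
}

// resolve accepts either an ID or a name, mirroring API calls that expect IDs.
func (s *itemStore) resolve(nameOrID string) (string, error) {
	if s.ids[nameOrID] {
		return nameOrID, nil
	}
	if id, ok := s.names[nameOrID]; ok {
		return id, nil
	}
	return "", fmt.Errorf("no item with name or ID %q", nameOrID)
}

func main() {
	s := &itemStore{
		ids:   map[string]bool{"49e95af5c2c8e4f2cb5ee2e47b41c24e": true},
		names: map[string]string{"base-layer": "49e95af5c2c8e4f2cb5ee2e47b41c24e"},
	}
	// Both forms of reference resolve to the same ID.
	for _, ref := range []string{"base-layer", "49e95af5c2c8e4f2cb5ee2e47b41c24e"} {
		id, err := s.resolve(ref)
		if err != nil {
			panic(err)
		}
		fmt.Println(ref, "->", id)
	}
}
```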

1
vendor/github.com/containers/storage/VERSION generated vendored Normal file

@@ -0,0 +1 @@
0.1-dev

25
vendor/github.com/containers/storage/Vagrantfile generated vendored Normal file

@@ -0,0 +1,25 @@
# -*- mode: ruby -*-
# vi: set ft=ruby :
#
# The fedora/23-cloud-base and debian/jessie64 boxes are also available for
# the "virtualbox" provider. Set the VAGRANT_PROVIDER environment variable to
# "virtualbox" to use them instead.
#
Vagrant.configure("2") do |config|
  config.vm.define "fedora" do |c|
    c.vm.box = "fedora/23-cloud-base"
    c.vm.synced_folder ".", "/vagrant", type: "rsync",
            rsync__exclude: "bundles", rsync__args: "-vadz"
    c.vm.provision "shell", inline: <<-SHELL
      sudo /vagrant/vagrant/provision.sh
    SHELL
  end
  config.vm.define "debian" do |c|
    c.vm.box = "debian/jessie64"
    c.vm.synced_folder ".", "/vagrant", type: "rsync",
            rsync__exclude: "bundles", rsync__args: "-vadz"
    c.vm.provision "shell", inline: <<-SHELL
      sudo /vagrant/vagrant/provision.sh
    SHELL
  end
end
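Assuming `vagrant` and one of the two supported providers are installed, bringing up one of the boxes defined above might look like the following. This is a hypothetical session, not part of this commit:

```console
$ VAGRANT_PROVIDER=virtualbox vagrant up fedora
$ vagrant ssh fedora
```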


@@ -0,0 +1,31 @@
This is `oci-storage`, a command line tool for manipulating a layer store.

It depends on `storage`, which is a pretty barebones wrapping of the
graph drivers that exposes the create/mount/unmount/delete operations
and adds enough bookkeeping to know about the relationships between
layers.

On top of that, `storage` provides a notion of a reference to a layer
which is paired with arbitrary user data (i.e., an `image`, that data
being history and configuration metadata). It also provides a notion of
a type of layer, which is typically the child of an image's topmost
layer, to which arbitrary data is directly attached (i.e., a
`container`, where the data is typically configuration).

Layers, images, and containers are each identified using IDs which can
be set when they are created (if not set, random values are generated),
and can optionally be assigned names which are resolved to IDs
automatically by the various APIs.

The oci-storage tool is a CLI that wraps that as thinly as possible, so
that other tooling can use it to import layers from images. Those other
tools can then either manage the concept of images on their own, or let
the API/CLI handle storing the image metadata and/or configuration.
Likewise, other tools can create container layers and manage them on
their own or use the API/CLI for storing what I assume will be container
metadata and/or configurations.

Logic for importing images and creating and managing containers will
most likely be implemented elsewhere, and if that implementation ends up
not needing the API/CLI to provide a place to store data about images
and containers, that functionality can be dropped.
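Based on the command names and flags registered later in this commit, a session with the tool might look like the transcript below. The container name `mycontainer` and data name `configdata` are hypothetical placeholders:

```console
$ oci-storage containers -json
$ oci-storage container mycontainer
$ oci-storage list-container-data mycontainer
$ oci-storage get-container-data -f config.json mycontainer configdata
```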


@@ -0,0 +1,220 @@
package main

import (
	"encoding/json"
	"fmt"
	"io/ioutil"
	"os"

	"github.com/containers/storage/pkg/mflag"
	"github.com/containers/storage/storage"
)

var (
	paramContainerDataFile = ""
)

func container(flags *mflag.FlagSet, action string, m storage.Store, args []string) int {
	images, err := m.Images()
	if err != nil {
		fmt.Fprintf(os.Stderr, "%v\n", err)
		return 1
	}
	containers, err := m.Containers()
	if err != nil {
		fmt.Fprintf(os.Stderr, "%v\n", err)
		return 1
	}
	matches := []storage.Container{}
	for _, container := range containers {
	nextContainer:
		for _, arg := range args {
			if container.ID == arg {
				matches = append(matches, container)
				break nextContainer
			}
			for _, name := range container.Names {
				if name == arg {
					matches = append(matches, container)
					break nextContainer
				}
			}
		}
	}
	if jsonOutput {
		json.NewEncoder(os.Stdout).Encode(matches)
	} else {
		for _, container := range matches {
			fmt.Printf("ID: %s\n", container.ID)
			for _, name := range container.Names {
				fmt.Printf("Name: %s\n", name)
			}
			fmt.Printf("Image: %s\n", container.ImageID)
			for _, image := range images {
				if image.ID == container.ImageID {
					for _, name := range image.Names {
						fmt.Printf("Image name: %s\n", name)
					}
					break
				}
			}
			fmt.Printf("Layer: %s\n", container.LayerID)
			for _, name := range container.BigDataNames {
				fmt.Printf("Data: %s\n", name)
			}
		}
	}
	if len(matches) != len(args) {
		return 1
	}
	return 0
}

func listContainerBigData(flags *mflag.FlagSet, action string, m storage.Store, args []string) int {
	container, err := m.GetContainer(args[0])
	if err != nil {
		fmt.Fprintf(os.Stderr, "%v\n", err)
		return 1
	}
	d, err := m.ListContainerBigData(container.ID)
	if err != nil {
		fmt.Fprintf(os.Stderr, "%v\n", err)
		return 1
	}
	if jsonOutput {
		json.NewEncoder(os.Stdout).Encode(d)
	} else {
		for _, name := range d {
			fmt.Printf("%s\n", name)
		}
	}
	return 0
}

func getContainerBigData(flags *mflag.FlagSet, action string, m storage.Store, args []string) int {
	container, err := m.GetContainer(args[0])
	if err != nil {
		fmt.Fprintf(os.Stderr, "%v\n", err)
		return 1
	}
	output := os.Stdout
	if paramContainerDataFile != "" {
		f, err := os.Create(paramContainerDataFile)
		if err != nil {
			fmt.Fprintf(os.Stderr, "%v\n", err)
			return 1
		}
		output = f
	}
	b, err := m.GetContainerBigData(container.ID, args[1])
	if err != nil {
		fmt.Fprintf(os.Stderr, "%v\n", err)
		return 1
	}
	output.Write(b)
	output.Close()
	return 0
}

func setContainerBigData(flags *mflag.FlagSet, action string, m storage.Store, args []string) int {
	container, err := m.GetContainer(args[0])
	if err != nil {
		fmt.Fprintf(os.Stderr, "%v\n", err)
		return 1
	}
	input := os.Stdin
	if paramContainerDataFile != "" {
		f, err := os.Open(paramContainerDataFile)
		if err != nil {
			fmt.Fprintf(os.Stderr, "%v\n", err)
			return 1
		}
		input = f
	}
	b, err := ioutil.ReadAll(input)
	if err != nil {
		fmt.Fprintf(os.Stderr, "%v\n", err)
		return 1
	}
	err = m.SetContainerBigData(container.ID, args[1], b)
	if err != nil {
		fmt.Fprintf(os.Stderr, "%v\n", err)
		return 1
	}
	return 0
}

func getContainerDir(flags *mflag.FlagSet, action string, m storage.Store, args []string) int {
	path, err := m.GetContainerDirectory(args[0])
	if err != nil {
		fmt.Fprintf(os.Stderr, "%v\n", err)
		return 1
	}
	fmt.Printf("%s\n", path)
	return 0
}

func getContainerRunDir(flags *mflag.FlagSet, action string, m storage.Store, args []string) int {
	path, err := m.GetContainerRunDirectory(args[0])
	if err != nil {
		fmt.Fprintf(os.Stderr, "%v\n", err)
		return 1
	}
	fmt.Printf("%s\n", path)
	return 0
}

func init() {
	commands = append(commands,
		command{
			names:       []string{"container"},
			optionsHelp: "[options [...]] containerNameOrID [...]",
			usage:       "Examine a container",
			action:      container,
			minArgs:     1,
			addFlags: func(flags *mflag.FlagSet, cmd *command) {
				flags.BoolVar(&jsonOutput, []string{"-json", "j"}, jsonOutput, "Prefer JSON output")
			},
		},
		command{
			names:       []string{"list-container-data", "listcontainerdata"},
			optionsHelp: "[options [...]] containerNameOrID",
			usage:       "List data items that are attached to a container",
			action:      listContainerBigData,
			minArgs:     1,
			maxArgs:     1,
			addFlags: func(flags *mflag.FlagSet, cmd *command) {
				flags.BoolVar(&jsonOutput, []string{"-json", "j"}, jsonOutput, "Prefer JSON output")
			},
		},
		command{
			names:       []string{"get-container-data", "getcontainerdata"},
			optionsHelp: "[options [...]] containerNameOrID dataName",
			usage:       "Get data that is attached to a container",
			action:      getContainerBigData,
			minArgs:     2,
			addFlags: func(flags *mflag.FlagSet, cmd *command) {
				flags.StringVar(&paramContainerDataFile, []string{"-file", "f"}, paramContainerDataFile, "Write data to file")
			},
		},
		command{
			names:       []string{"set-container-data", "setcontainerdata"},
			optionsHelp: "[options [...]] containerNameOrID dataName",
			usage:       "Set data that is attached to a container",
			action:      setContainerBigData,
			minArgs:     2,
			addFlags: func(flags *mflag.FlagSet, cmd *command) {
				flags.StringVar(&paramContainerDataFile, []string{"-file", "f"}, paramContainerDataFile, "Read data from file")
			},
		},
		command{
			names:       []string{"get-container-dir", "getcontainerdir"},
			optionsHelp: "[options [...]] containerNameOrID",
			usage:       "Find the container's associated data directory",
			action:      getContainerDir,
			minArgs:     1,
		},
		command{
			names:       []string{"get-container-run-dir", "getcontainerrundir"},
			optionsHelp: "[options [...]] containerNameOrID",
			usage:       "Find the container's associated runtime directory",
			action:      getContainerRunDir,
			minArgs:     1,
		})
}


@@ -0,0 +1,45 @@
package main

import (
	"encoding/json"
	"fmt"
	"os"

	"github.com/containers/storage/pkg/mflag"
	"github.com/containers/storage/storage"
)

func containers(flags *mflag.FlagSet, action string, m storage.Store, args []string) int {
	containers, err := m.Containers()
	if err != nil {
		fmt.Fprintf(os.Stderr, "%v\n", err)
		return 1
	}
	if jsonOutput {
		json.NewEncoder(os.Stdout).Encode(containers)
	} else {
		for _, container := range containers {
			fmt.Printf("%s\n", container.ID)
			for _, name := range container.Names {
				fmt.Printf("\tname: %s\n", name)
			}
			for _, name := range container.BigDataNames {
				fmt.Printf("\tdata: %s\n", name)
			}
		}
	}
	return 0
}

func init() {
	commands = append(commands, command{
		names:       []string{"containers"},
		optionsHelp: "[options [...]]",
		usage:       "List containers",
		action:      containers,
		maxArgs:     0,
		addFlags: func(flags *mflag.FlagSet, cmd *command) {
			flags.BoolVar(&jsonOutput, []string{"-json", "j"}, jsonOutput, "Prefer JSON output")
		},
	})
}


@@ -0,0 +1,201 @@
package main

import (
	"encoding/json"
	"fmt"
	"io"
	"io/ioutil"
	"os"

	"github.com/containers/storage/opts"
	"github.com/containers/storage/pkg/mflag"
	"github.com/containers/storage/storage"
)

var (
	paramMountLabel   = ""
	paramNames        = []string{}
	paramID           = ""
	paramLayer        = ""
	paramMetadata     = ""
	paramMetadataFile = ""
	paramCreateRO     = false
)

func createLayer(flags *mflag.FlagSet, action string, m storage.Store, args []string) int {
	parent := ""
	if len(args) > 0 {
		parent = args[0]
	}
	layer, err := m.CreateLayer(paramID, parent, paramNames, paramMountLabel, !paramCreateRO)
	if err != nil {
		fmt.Fprintf(os.Stderr, "%v\n", err)
		return 1
	}
	if jsonOutput {
		json.NewEncoder(os.Stdout).Encode(layer)
	} else {
		fmt.Printf("%s", layer.ID)
		for _, name := range layer.Names {
			fmt.Printf("\t%s\n", name)
		}
		fmt.Printf("\n")
	}
	return 0
}

func importLayer(flags *mflag.FlagSet, action string, m storage.Store, args []string) int {
	parent := ""
	if len(args) > 0 {
		parent = args[0]
	}
	diffStream := io.Reader(os.Stdin)
	if applyDiffFile != "" {
		if f, err := os.Open(applyDiffFile); err != nil {
			fmt.Fprintf(os.Stderr, "%v\n", err)
			return 1
		} else {
			diffStream = f
			defer f.Close()
		}
	}
	layer, _, err := m.PutLayer(paramID, parent, paramNames, paramMountLabel, !paramCreateRO, diffStream)
	if err != nil {
		fmt.Fprintf(os.Stderr, "%v\n", err)
		return 1
	}
	if jsonOutput {
		json.NewEncoder(os.Stdout).Encode(layer)
	} else {
		fmt.Printf("%s", layer.ID)
		for _, name := range layer.Names {
			fmt.Printf("\t%s\n", name)
		}
		fmt.Printf("\n")
	}
	return 0
}

func createImage(flags *mflag.FlagSet, action string, m storage.Store, args []string) int {
	if paramMetadataFile != "" {
		f, err := os.Open(paramMetadataFile)
		if err != nil {
			fmt.Fprintf(os.Stderr, "%v\n", err)
			return 1
		}
		b, err := ioutil.ReadAll(f)
		if err != nil {
			fmt.Fprintf(os.Stderr, "%v\n", err)
			return 1
		}
		paramMetadata = string(b)
	}
	image, err := m.CreateImage(paramID, paramNames, args[0], paramMetadata, nil)
	if err != nil {
		fmt.Fprintf(os.Stderr, "%v\n", err)
		return 1
	}
	if jsonOutput {
		json.NewEncoder(os.Stdout).Encode(image)
	} else {
		fmt.Printf("%s", image.ID)
		for _, name := range image.Names {
			fmt.Printf("\t%s\n", name)
		}
		fmt.Printf("\n")
	}
	return 0
}

func createContainer(flags *mflag.FlagSet, action string, m storage.Store, args []string) int {
	if paramMetadataFile != "" {
		f, err := os.Open(paramMetadataFile)
		if err != nil {
			fmt.Fprintf(os.Stderr, "%v\n", err)
			return 1
		}
		b, err := ioutil.ReadAll(f)
		if err != nil {
			fmt.Fprintf(os.Stderr, "%v\n", err)
			return 1
		}
		paramMetadata = string(b)
	}
	container, err := m.CreateContainer(paramID, paramNames, args[0], paramLayer, paramMetadata, nil)
if err != nil {
fmt.Fprintf(os.Stderr, "%v\n", err)
return 1
}
if jsonOutput {
json.NewEncoder(os.Stdout).Encode(container)
} else {
fmt.Printf("%s", container.ID)
for _, name := range container.Names {
fmt.Printf("\t%s\n", name)
}
fmt.Printf("\n")
}
return 0
}
func init() {
commands = append(commands, command{
names: []string{"create-layer", "createlayer"},
optionsHelp: "[options [...]] [parentLayerNameOrID]",
usage: "Create a new layer",
maxArgs: 1,
action: createLayer,
addFlags: func(flags *mflag.FlagSet, cmd *command) {
flags.StringVar(&paramMountLabel, []string{"-label", "l"}, "", "Mount Label")
flags.Var(opts.NewListOptsRef(&paramNames, nil), []string{"-name", "n"}, "Layer name")
flags.StringVar(&paramID, []string{"-id", "i"}, "", "Layer ID")
flags.BoolVar(&paramCreateRO, []string{"-readonly", "r"}, false, "Mark as read-only")
flags.BoolVar(&jsonOutput, []string{"-json", "j"}, jsonOutput, "Prefer JSON output")
},
})
commands = append(commands, command{
names: []string{"import-layer", "importlayer"},
optionsHelp: "[options [...]] [parentLayerNameOrID]",
usage: "Import a new layer",
maxArgs: 1,
action: importLayer,
addFlags: func(flags *mflag.FlagSet, cmd *command) {
flags.StringVar(&paramMountLabel, []string{"-label", "l"}, "", "Mount Label")
flags.Var(opts.NewListOptsRef(&paramNames, nil), []string{"-name", "n"}, "Layer name")
flags.StringVar(&paramID, []string{"-id", "i"}, "", "Layer ID")
flags.BoolVar(&paramCreateRO, []string{"-readonly", "r"}, false, "Mark as read-only")
flags.BoolVar(&jsonOutput, []string{"-json", "j"}, jsonOutput, "Prefer JSON output")
flags.StringVar(&applyDiffFile, []string{"-file", "f"}, "", "Read from file instead of stdin")
},
})
commands = append(commands, command{
names: []string{"create-image", "createimage"},
optionsHelp: "[options [...]] topLayerNameOrID",
usage: "Create a new image using layers",
minArgs: 1,
maxArgs: 1,
action: createImage,
addFlags: func(flags *mflag.FlagSet, cmd *command) {
flags.Var(opts.NewListOptsRef(&paramNames, nil), []string{"-name", "n"}, "Image name")
flags.StringVar(&paramID, []string{"-id", "i"}, "", "Image ID")
flags.StringVar(&paramMetadata, []string{"-metadata", "m"}, "", "Metadata")
flags.StringVar(&paramMetadataFile, []string{"-metadata-file", "f"}, "", "Metadata File")
flags.BoolVar(&jsonOutput, []string{"-json", "j"}, jsonOutput, "Prefer JSON output")
},
})
commands = append(commands, command{
names: []string{"create-container", "createcontainer"},
optionsHelp: "[options [...]] parentImageNameOrID",
usage: "Create a new container from an image",
minArgs: 1,
maxArgs: 1,
action: createContainer,
addFlags: func(flags *mflag.FlagSet, cmd *command) {
flags.Var(opts.NewListOptsRef(&paramNames, nil), []string{"-name", "n"}, "Container name")
flags.StringVar(&paramID, []string{"-id", "i"}, "", "Container ID")
flags.StringVar(&paramMetadata, []string{"-metadata", "m"}, "", "Metadata")
flags.StringVar(&paramMetadataFile, []string{"-metadata-file", "f"}, "", "Metadata File")
flags.BoolVar(&jsonOutput, []string{"-json", "j"}, jsonOutput, "Prefer JSON output")
},
})
}
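The `createImage` and `createContainer` actions above share one step: when a metadata file is given it overrides the inline metadata string. A minimal, self-contained sketch of that resolution step (the helper name `resolveMetadata` is illustrative, not part of the tool, and it uses stdlib `os.ReadFile` in place of the `ioutil` calls above):

```go
package main

import (
	"fmt"
	"os"
)

// resolveMetadata returns the inline metadata string unless a file
// path is given, in which case the file's contents win -- mirroring
// the precedence used by createImage/createContainer above.
func resolveMetadata(inline, file string) (string, error) {
	if file == "" {
		return inline, nil
	}
	b, err := os.ReadFile(file)
	if err != nil {
		return "", err
	}
	return string(b), nil
}

func main() {
	// With no file set, the inline value passes through unchanged.
	m, err := resolveMetadata("inline-value", "")
	fmt.Println(m, err)
}
```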

View file

@ -0,0 +1,188 @@
package main
import (
"encoding/json"
"fmt"
"os"
"github.com/containers/storage/pkg/mflag"
"github.com/containers/storage/storage"
)
var testDeleteImage = false
func deleteThing(flags *mflag.FlagSet, action string, m storage.Store, args []string) int {
if len(args) < 1 {
return 1
}
deleted := make(map[string]string)
for _, what := range args {
err := m.Delete(what)
if err != nil {
deleted[what] = fmt.Sprintf("%v", err)
} else {
deleted[what] = ""
}
}
if jsonOutput {
json.NewEncoder(os.Stdout).Encode(deleted)
} else {
for what, err := range deleted {
if err != "" {
fmt.Fprintf(os.Stderr, "%s: %s\n", what, err)
}
}
}
for _, err := range deleted {
if err != "" {
return 1
}
}
return 0
}
func deleteLayer(flags *mflag.FlagSet, action string, m storage.Store, args []string) int {
if len(args) < 1 {
return 1
}
deleted := make(map[string]string)
for _, what := range args {
err := m.DeleteLayer(what)
if err != nil {
deleted[what] = fmt.Sprintf("%v", err)
} else {
deleted[what] = ""
}
}
if jsonOutput {
json.NewEncoder(os.Stdout).Encode(deleted)
} else {
for what, err := range deleted {
if err != "" {
fmt.Fprintf(os.Stderr, "%s: %s\n", what, err)
}
}
}
for _, err := range deleted {
if err != "" {
return 1
}
}
return 0
}
type deletedImage struct {
DeletedLayers []string `json:"deleted-layers,omitempty"`
Error string `json:"error,omitempty"`
}
func deleteImage(flags *mflag.FlagSet, action string, m storage.Store, args []string) int {
if len(args) < 1 {
return 1
}
deleted := make(map[string]deletedImage)
for _, what := range args {
layers, err := m.DeleteImage(what, !testDeleteImage)
errText := ""
if err != nil {
errText = fmt.Sprintf("%v", err)
}
deleted[what] = deletedImage{
DeletedLayers: layers,
Error: errText,
}
}
if jsonOutput {
json.NewEncoder(os.Stdout).Encode(deleted)
} else {
for what, record := range deleted {
if record.Error != "" {
fmt.Fprintf(os.Stderr, "%s: %s\n", what, record.Error)
} else {
for _, layer := range record.DeletedLayers {
fmt.Fprintf(os.Stderr, "%s: %s\n", what, layer)
}
}
}
}
for _, record := range deleted {
if record.Error != "" {
return 1
}
}
return 0
}
func deleteContainer(flags *mflag.FlagSet, action string, m storage.Store, args []string) int {
if len(args) < 1 {
return 1
}
deleted := make(map[string]string)
for _, what := range args {
err := m.DeleteContainer(what)
if err != nil {
deleted[what] = fmt.Sprintf("%v", err)
} else {
deleted[what] = ""
}
}
if jsonOutput {
json.NewEncoder(os.Stdout).Encode(deleted)
} else {
for what, err := range deleted {
if err != "" {
fmt.Fprintf(os.Stderr, "%s: %s\n", what, err)
}
}
}
for _, err := range deleted {
if err != "" {
return 1
}
}
return 0
}
func init() {
commands = append(commands, command{
names: []string{"delete"},
optionsHelp: "[LayerOrImageOrContainerNameOrID [...]]",
usage: "Delete a layer or image or container, with no safety checks",
minArgs: 1,
action: deleteThing,
addFlags: func(flags *mflag.FlagSet, cmd *command) {
flags.BoolVar(&jsonOutput, []string{"-json", "j"}, jsonOutput, "Prefer JSON output")
},
})
commands = append(commands, command{
names: []string{"delete-layer", "deletelayer"},
optionsHelp: "[LayerNameOrID [...]]",
usage: "Delete a layer, with safety checks",
minArgs: 1,
action: deleteLayer,
addFlags: func(flags *mflag.FlagSet, cmd *command) {
flags.BoolVar(&jsonOutput, []string{"-json", "j"}, jsonOutput, "Prefer JSON output")
},
})
commands = append(commands, command{
names: []string{"delete-image", "deleteimage"},
optionsHelp: "[ImageNameOrID [...]]",
usage: "Delete an image, with safety checks",
minArgs: 1,
action: deleteImage,
addFlags: func(flags *mflag.FlagSet, cmd *command) {
flags.BoolVar(&testDeleteImage, []string{"-test", "t"}, testDeleteImage, "Only test removal")
flags.BoolVar(&jsonOutput, []string{"-json", "j"}, jsonOutput, "Prefer JSON output")
},
})
commands = append(commands, command{
names: []string{"delete-container", "deletecontainer"},
optionsHelp: "[ContainerNameOrID [...]]",
usage: "Delete a container, with safety checks",
minArgs: 1,
action: deleteContainer,
addFlags: func(flags *mflag.FlagSet, cmd *command) {
flags.BoolVar(&jsonOutput, []string{"-json", "j"}, jsonOutput, "Prefer JSON output")
},
})
}

View file

@ -0,0 +1,190 @@
package main
import (
"encoding/json"
"fmt"
"io"
"os"
"github.com/containers/storage/pkg/archive"
"github.com/containers/storage/pkg/mflag"
"github.com/containers/storage/storage"
)
var (
applyDiffFile = ""
diffFile = ""
diffGzip = false
diffBzip2 = false
diffXz = false
)
func changes(flags *mflag.FlagSet, action string, m storage.Store, args []string) int {
if len(args) < 1 {
return 1
}
to := args[0]
from := ""
if len(args) >= 2 {
from = args[1]
}
changes, err := m.Changes(from, to)
if err != nil {
fmt.Fprintf(os.Stderr, "%v\n", err)
return 1
}
if jsonOutput {
json.NewEncoder(os.Stdout).Encode(changes)
} else {
for _, change := range changes {
what := "?"
switch change.Kind {
case archive.ChangeAdd:
what = "Add"
case archive.ChangeModify:
what = "Modify"
case archive.ChangeDelete:
what = "Delete"
}
fmt.Printf("%s %q\n", what, change.Path)
}
}
return 0
}
func diff(flags *mflag.FlagSet, action string, m storage.Store, args []string) int {
if len(args) < 1 {
return 1
}
to := args[0]
from := ""
if len(args) >= 2 {
from = args[1]
}
diffStream := io.Writer(os.Stdout)
if diffFile != "" {
if f, err := os.Create(diffFile); err != nil {
fmt.Fprintf(os.Stderr, "%v\n", err)
return 1
} else {
diffStream = f
defer f.Close()
}
}
reader, err := m.Diff(from, to)
if err != nil {
fmt.Fprintf(os.Stderr, "%v\n", err)
return 1
}
if diffGzip || diffBzip2 || diffXz {
compression := archive.Uncompressed
if diffGzip {
compression = archive.Gzip
} else if diffBzip2 {
compression = archive.Bzip2
} else if diffXz {
compression = archive.Xz
}
compressor, err := archive.CompressStream(diffStream, compression)
if err != nil {
fmt.Fprintf(os.Stderr, "%v\n", err)
return 1
}
diffStream = compressor
defer compressor.Close()
}
_, err = io.Copy(diffStream, reader)
reader.Close()
if err != nil {
fmt.Fprintf(os.Stderr, "%v\n", err)
return 1
}
return 0
}
func applyDiff(flags *mflag.FlagSet, action string, m storage.Store, args []string) int {
if len(args) < 1 {
return 1
}
diffStream := io.Reader(os.Stdin)
if applyDiffFile != "" {
if f, err := os.Open(applyDiffFile); err != nil {
fmt.Fprintf(os.Stderr, "%v\n", err)
return 1
} else {
diffStream = f
defer f.Close()
}
}
_, err := m.ApplyDiff(args[0], diffStream)
if err != nil {
fmt.Fprintf(os.Stderr, "%v\n", err)
return 1
}
return 0
}
func diffSize(flags *mflag.FlagSet, action string, m storage.Store, args []string) int {
if len(args) < 1 {
return 1
}
to := args[0]
from := ""
if len(args) >= 2 {
from = args[1]
}
n, err := m.DiffSize(from, to)
if err != nil {
fmt.Fprintf(os.Stderr, "%v\n", err)
return 1
}
fmt.Printf("%d\n", n)
return 0
}
func init() {
commands = append(commands, command{
names: []string{"changes"},
usage: "Compare two layers",
optionsHelp: "[options [...]] layerNameOrID [referenceLayerNameOrID]",
minArgs: 1,
maxArgs: 2,
action: changes,
addFlags: func(flags *mflag.FlagSet, cmd *command) {
flags.BoolVar(&jsonOutput, []string{"-json", "j"}, jsonOutput, "Prefer JSON output")
},
})
commands = append(commands, command{
names: []string{"diffsize", "diff-size"},
usage: "Compare two layers",
optionsHelp: "[options [...]] layerNameOrID [referenceLayerNameOrID]",
minArgs: 1,
maxArgs: 2,
action: diffSize,
})
commands = append(commands, command{
names: []string{"diff"},
usage: "Compare two layers",
optionsHelp: "[options [...]] layerNameOrID [referenceLayerNameOrID]",
minArgs: 1,
maxArgs: 2,
action: diff,
addFlags: func(flags *mflag.FlagSet, cmd *command) {
flags.StringVar(&diffFile, []string{"-file", "f"}, "", "Write to file instead of stdout")
flags.BoolVar(&diffGzip, []string{"-gzip", "c"}, diffGzip, "Compress using gzip")
flags.BoolVar(&diffBzip2, []string{"-bzip2", "-bz2", "b"}, diffBzip2, "Compress using bzip2")
flags.BoolVar(&diffXz, []string{"-xz", "x"}, diffXz, "Compress using xz")
},
})
commands = append(commands, command{
names: []string{"applydiff", "apply-diff"},
optionsHelp: "[options [...]] layerNameOrID [referenceLayerNameOrID]",
usage: "Apply a diff to a layer",
minArgs: 1,
maxArgs: 1,
action: applyDiff,
addFlags: func(flags *mflag.FlagSet, cmd *command) {
flags.StringVar(&applyDiffFile, []string{"-file", "f"}, "", "Read from file instead of stdin")
},
})
}

View file

@ -0,0 +1,77 @@
package main
import (
"encoding/json"
"fmt"
"os"
"github.com/containers/storage/pkg/mflag"
"github.com/containers/storage/storage"
)
var (
existLayer = false
existImage = false
existContainer = false
existQuiet = false
)
func exist(flags *mflag.FlagSet, action string, m storage.Store, args []string) int {
if len(args) < 1 {
return 1
}
anyMissing := false
existDict := make(map[string]bool)
for _, what := range args {
exists := m.Exists(what)
existDict[what] = exists
if existContainer {
if c, err := m.GetContainer(what); c == nil || err != nil {
exists = false
}
}
if existImage {
if i, err := m.GetImage(what); i == nil || err != nil {
exists = false
}
}
if existLayer {
if l, err := m.GetLayer(what); l == nil || err != nil {
exists = false
}
}
if !exists {
anyMissing = true
}
}
if jsonOutput {
json.NewEncoder(os.Stdout).Encode(existDict)
} else {
if !existQuiet {
for what, exists := range existDict {
fmt.Printf("%s: %v\n", what, exists)
}
}
}
if anyMissing {
return 1
}
return 0
}
func init() {
commands = append(commands, command{
names: []string{"exists"},
optionsHelp: "[LayerOrImageOrContainerNameOrID [...]]",
usage: "Check if a layer or image or container exists",
minArgs: 1,
action: exist,
addFlags: func(flags *mflag.FlagSet, cmd *command) {
flags.BoolVar(&existQuiet, []string{"-quiet", "q"}, existQuiet, "Don't print names")
flags.BoolVar(&existLayer, []string{"-layer", "l"}, existLayer, "Only succeed if the match is a layer")
flags.BoolVar(&existImage, []string{"-image", "i"}, existImage, "Only succeed if the match is an image")
flags.BoolVar(&existContainer, []string{"-container", "c"}, existContainer, "Only succeed if the match is a container")
flags.BoolVar(&jsonOutput, []string{"-json", "j"}, jsonOutput, "Prefer JSON output")
},
})
}

View file

@ -0,0 +1,172 @@
package main
import (
"encoding/json"
"fmt"
"io/ioutil"
"os"
"github.com/containers/storage/pkg/mflag"
"github.com/containers/storage/storage"
)
var (
paramImageDataFile = ""
)
func image(flags *mflag.FlagSet, action string, m storage.Store, args []string) int {
images, err := m.Images()
if err != nil {
fmt.Fprintf(os.Stderr, "%v\n", err)
return 1
}
matched := []storage.Image{}
for _, image := range images {
nextImage:
for _, arg := range args {
if image.ID == arg {
matched = append(matched, image)
break nextImage
}
for _, name := range image.Names {
if name == arg {
matched = append(matched, image)
break nextImage
}
}
}
}
if jsonOutput {
json.NewEncoder(os.Stdout).Encode(matched)
} else {
for _, image := range matched {
fmt.Printf("ID: %s\n", image.ID)
for _, name := range image.Names {
fmt.Printf("Name: %s\n", name)
}
fmt.Printf("Top Layer: %s\n", image.TopLayer)
for _, name := range image.BigDataNames {
fmt.Printf("Data: %s\n", name)
}
}
}
if len(matched) != len(args) {
return 1
}
return 0
}
func listImageBigData(flags *mflag.FlagSet, action string, m storage.Store, args []string) int {
image, err := m.GetImage(args[0])
if err != nil {
fmt.Fprintf(os.Stderr, "%v\n", err)
return 1
}
d, err := m.ListImageBigData(image.ID)
if err != nil {
fmt.Fprintf(os.Stderr, "%v\n", err)
return 1
}
if jsonOutput {
json.NewEncoder(os.Stdout).Encode(d)
} else {
for _, name := range d {
fmt.Printf("%s\n", name)
}
}
return 0
}
func getImageBigData(flags *mflag.FlagSet, action string, m storage.Store, args []string) int {
image, err := m.GetImage(args[0])
if err != nil {
fmt.Fprintf(os.Stderr, "%v\n", err)
return 1
}
output := os.Stdout
if paramImageDataFile != "" {
f, err := os.Create(paramImageDataFile)
if err != nil {
fmt.Fprintf(os.Stderr, "%v\n", err)
return 1
}
output = f
}
b, err := m.GetImageBigData(image.ID, args[1])
if err != nil {
fmt.Fprintf(os.Stderr, "%v\n", err)
return 1
}
output.Write(b)
output.Close()
return 0
}
func setImageBigData(flags *mflag.FlagSet, action string, m storage.Store, args []string) int {
image, err := m.GetImage(args[0])
if err != nil {
fmt.Fprintf(os.Stderr, "%v\n", err)
return 1
}
input := os.Stdin
if paramImageDataFile != "" {
f, err := os.Open(paramImageDataFile)
if err != nil {
fmt.Fprintf(os.Stderr, "%v\n", err)
return 1
}
input = f
}
b, err := ioutil.ReadAll(input)
if err != nil {
fmt.Fprintf(os.Stderr, "%v\n", err)
return 1
}
err = m.SetImageBigData(image.ID, args[1], b)
if err != nil {
fmt.Fprintf(os.Stderr, "%v\n", err)
return 1
}
return 0
}
func init() {
commands = append(commands,
command{
names: []string{"image"},
optionsHelp: "[options [...]] imageNameOrID [...]",
usage: "Examine an image",
action: image,
minArgs: 1,
addFlags: func(flags *mflag.FlagSet, cmd *command) {
flags.BoolVar(&jsonOutput, []string{"-json", "j"}, jsonOutput, "Prefer JSON output")
},
},
command{
names: []string{"list-image-data", "listimagedata"},
optionsHelp: "[options [...]] imageNameOrID",
usage: "List data items that are attached to an image",
action: listImageBigData,
minArgs: 1,
maxArgs: 1,
addFlags: func(flags *mflag.FlagSet, cmd *command) {
flags.BoolVar(&jsonOutput, []string{"-json", "j"}, jsonOutput, "Prefer JSON output")
},
},
command{
names: []string{"get-image-data", "getimagedata"},
optionsHelp: "[options [...]] imageNameOrID dataName",
usage: "Get data that is attached to an image",
action: getImageBigData,
minArgs: 2,
addFlags: func(flags *mflag.FlagSet, cmd *command) {
flags.StringVar(&paramImageDataFile, []string{"-file", "f"}, paramImageDataFile, "Write data to file")
},
},
command{
names: []string{"set-image-data", "setimagedata"},
optionsHelp: "[options [...]] imageNameOrID dataName",
usage: "Set data that is attached to an image",
action: setImageBigData,
minArgs: 2,
addFlags: func(flags *mflag.FlagSet, cmd *command) {
flags.StringVar(&paramImageDataFile, []string{"-file", "f"}, paramImageDataFile, "Read data from file")
},
})
}

View file

@ -0,0 +1,45 @@
package main
import (
"encoding/json"
"fmt"
"os"
"github.com/containers/storage/pkg/mflag"
"github.com/containers/storage/storage"
)
func images(flags *mflag.FlagSet, action string, m storage.Store, args []string) int {
images, err := m.Images()
if err != nil {
fmt.Fprintf(os.Stderr, "%v\n", err)
return 1
}
if jsonOutput {
json.NewEncoder(os.Stdout).Encode(images)
} else {
for _, image := range images {
fmt.Printf("%s\n", image.ID)
for _, name := range image.Names {
fmt.Printf("\tname: %s\n", name)
}
for _, name := range image.BigDataNames {
fmt.Printf("\tdata: %s\n", name)
}
}
}
return 0
}
func init() {
commands = append(commands, command{
names: []string{"images"},
optionsHelp: "[options [...]]",
usage: "List images",
action: images,
maxArgs: 0,
addFlags: func(flags *mflag.FlagSet, cmd *command) {
flags.BoolVar(&jsonOutput, []string{"-json", "j"}, jsonOutput, "Prefer JSON output")
},
})
}

View file

@ -0,0 +1,113 @@
package main
import (
"encoding/json"
"fmt"
"os"
"github.com/containers/storage/pkg/mflag"
"github.com/containers/storage/storage"
)
var listLayersTree = false
func layers(flags *mflag.FlagSet, action string, m storage.Store, args []string) int {
layers, err := m.Layers()
if err != nil {
fmt.Fprintf(os.Stderr, "%v\n", err)
return 1
}
if jsonOutput {
json.NewEncoder(os.Stdout).Encode(layers)
return 0
}
imageMap := make(map[string]*[]storage.Image)
if images, err := m.Images(); err == nil {
for _, image := range images {
if ilist, ok := imageMap[image.TopLayer]; ok && ilist != nil {
list := append(*ilist, image)
imageMap[image.TopLayer] = &list
} else {
list := []storage.Image{image}
imageMap[image.TopLayer] = &list
}
}
}
containerMap := make(map[string]storage.Container)
if containers, err := m.Containers(); err == nil {
for _, container := range containers {
containerMap[container.LayerID] = container
}
}
nodes := []treeNode{}
for _, layer := range layers {
if listLayersTree {
node := treeNode{
left: string(layer.Parent),
right: string(layer.ID),
notes: []string{},
}
if node.left == "" {
node.left = "(base)"
}
for _, name := range layer.Names {
node.notes = append(node.notes, "name: "+name)
}
if layer.MountPoint != "" {
node.notes = append(node.notes, "mount: "+layer.MountPoint)
}
if imageList, ok := imageMap[layer.ID]; ok && imageList != nil {
for _, image := range *imageList {
node.notes = append(node.notes, fmt.Sprintf("image: %s", image.ID))
for _, name := range image.Names {
node.notes = append(node.notes, fmt.Sprintf("image name: %s", name))
}
}
}
if container, ok := containerMap[layer.ID]; ok {
node.notes = append(node.notes, fmt.Sprintf("container: %s", container.ID))
for _, name := range container.Names {
node.notes = append(node.notes, fmt.Sprintf("container name: %s", name))
}
}
nodes = append(nodes, node)
} else {
fmt.Printf("%s\n", layer.ID)
for _, name := range layer.Names {
fmt.Printf("\tname: %s\n", name)
}
if imageList, ok := imageMap[layer.ID]; ok && imageList != nil {
for _, image := range *imageList {
fmt.Printf("\timage: %s\n", image.ID)
for _, name := range image.Names {
fmt.Printf("\t\tname: %s\n", name)
}
}
}
if container, ok := containerMap[layer.ID]; ok {
fmt.Printf("\tcontainer: %s\n", container.ID)
for _, name := range container.Names {
fmt.Printf("\t\tname: %s\n", name)
}
}
}
}
if listLayersTree {
printTree(nodes)
}
return 0
}
func init() {
commands = append(commands, command{
names: []string{"layers"},
optionsHelp: "[options [...]]",
usage: "List layers",
action: layers,
maxArgs: 0,
addFlags: func(flags *mflag.FlagSet, cmd *command) {
flags.BoolVar(&listLayersTree, []string{"-tree", "t"}, listLayersTree, "Use a tree")
flags.BoolVar(&jsonOutput, []string{"-json", "j"}, jsonOutput, "Prefer JSON output")
},
})
}

View file

@ -0,0 +1,126 @@
package main
import (
"fmt"
"os"
"github.com/Sirupsen/logrus"
"github.com/containers/storage/opts"
"github.com/containers/storage/pkg/mflag"
"github.com/containers/storage/pkg/reexec"
"github.com/containers/storage/storage"
)
type command struct {
names []string
optionsHelp string
minArgs int
maxArgs int
usage string
addFlags func(*mflag.FlagSet, *command)
action func(*mflag.FlagSet, string, storage.Store, []string) int
}
var (
commands = []command{}
jsonOutput = false
)
func main() {
if reexec.Init() {
return
}
options := storage.DefaultStoreOptions
debug := false
makeFlags := func(command string, eh mflag.ErrorHandling) *mflag.FlagSet {
flags := mflag.NewFlagSet(command, eh)
flags.StringVar(&options.RunRoot, []string{"-run", "R"}, options.RunRoot, "Root of the runtime state tree")
flags.StringVar(&options.GraphRoot, []string{"-graph", "g"}, options.GraphRoot, "Root of the storage tree")
flags.StringVar(&options.GraphDriverName, []string{"-storage-driver", "s"}, options.GraphDriverName, "Storage driver to use ($STORAGE_DRIVER)")
flags.Var(opts.NewListOptsRef(&options.GraphDriverOptions, nil), []string{"-storage-opt"}, "Set storage driver options ($STORAGE_OPTS)")
flags.BoolVar(&debug, []string{"-debug", "D"}, debug, "Print debugging information")
return flags
}
flags := makeFlags("oci-storage", mflag.ContinueOnError)
flags.Usage = func() {
fmt.Printf("Usage: oci-storage command [options [...]]\n\n")
fmt.Printf("Commands:\n\n")
for _, command := range commands {
fmt.Printf(" %-22s%s\n", command.names[0], command.usage)
}
fmt.Printf("\nOptions:\n")
flags.PrintDefaults()
}
if len(os.Args) < 2 {
flags.Usage()
os.Exit(1)
}
if err := flags.ParseFlags(os.Args[1:], true); err != nil {
fmt.Printf("%v while parsing arguments (1)\n", err)
flags.Usage()
os.Exit(1)
}
args := flags.Args()
if len(args) < 1 {
flags.Usage()
os.Exit(1)
return
}
cmd := args[0]
for _, command := range commands {
for _, name := range command.names {
if cmd == name {
flags := makeFlags(cmd, mflag.ExitOnError)
if command.addFlags != nil {
command.addFlags(flags, &command)
}
flags.Usage = func() {
fmt.Printf("Usage: oci-storage %s %s\n\n", cmd, command.optionsHelp)
fmt.Printf("%s\n", command.usage)
fmt.Printf("\nOptions:\n")
flags.PrintDefaults()
}
if err := flags.ParseFlags(args[1:], false); err != nil {
fmt.Printf("%v while parsing arguments (3)\n", err)
flags.Usage()
os.Exit(1)
}
args = flags.Args()
if command.minArgs != 0 && len(args) < command.minArgs {
fmt.Printf("%s: more arguments required.\n", cmd)
flags.Usage()
os.Exit(1)
}
if command.maxArgs != 0 && len(args) > command.maxArgs {
fmt.Printf("%s: too many arguments (%s).\n", cmd, args)
flags.Usage()
os.Exit(1)
}
if debug {
logrus.SetLevel(logrus.DebugLevel)
logrus.Debugf("RunRoot: %s", options.RunRoot)
logrus.Debugf("GraphRoot: %s", options.GraphRoot)
logrus.Debugf("GraphDriverName: %s", options.GraphDriverName)
logrus.Debugf("GraphDriverOptions: %s", options.GraphDriverOptions)
} else {
logrus.SetLevel(logrus.ErrorLevel)
}
store, err := storage.GetStore(options)
if err != nil {
fmt.Printf("error initializing: %v\n", err)
os.Exit(1)
}
os.Exit(command.action(flags, cmd, store, args))
break
}
}
}
fmt.Printf("%s: unrecognized command.\n", cmd)
os.Exit(1)
}
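The dispatch loop in `main` above is a table-driven pattern: each command declares its argument bounds, and the driver validates `minArgs`/`maxArgs` (where a zero `maxArgs` means unlimited) before invoking the action. A minimal, self-contained sketch of the same shape (names here are illustrative, not taken from the tool):

```go
package main

import "fmt"

// cmd is a stripped-down analogue of the command struct above.
type cmd struct {
	name             string
	minArgs, maxArgs int
	action           func(args []string) int
}

// dispatch finds the named command, checks argument bounds the same
// way main() does (maxArgs of 0 means "no upper limit"), and returns
// the action's exit code, or 1 on any validation failure.
func dispatch(commands []cmd, name string, args []string) int {
	for _, c := range commands {
		if c.name != name {
			continue
		}
		if c.minArgs != 0 && len(args) < c.minArgs {
			return 1 // too few arguments
		}
		if c.maxArgs != 0 && len(args) > c.maxArgs {
			return 1 // too many arguments
		}
		return c.action(args)
	}
	return 1 // unrecognized command
}

func main() {
	commands := []cmd{{
		name:    "echo",
		minArgs: 1,
		action: func(a []string) int {
			fmt.Println(a[0])
			return 0
		},
	}}
	fmt.Println(dispatch(commands, "echo", []string{"hi"}))
}
```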

View file

@ -0,0 +1,98 @@
package main
import (
"encoding/json"
"fmt"
"io/ioutil"
"os"
"strings"
"github.com/containers/storage/pkg/mflag"
"github.com/containers/storage/storage"
)
var metadataQuiet = false
func metadata(flags *mflag.FlagSet, action string, m storage.Store, args []string) int {
if len(args) < 1 {
return 1
}
metadataDict := make(map[string]string)
missingAny := false
for _, what := range args {
if metadata, err := m.GetMetadata(what); err == nil {
metadataDict[what] = strings.TrimSuffix(metadata, "\n")
} else {
missingAny = true
}
}
if jsonOutput {
json.NewEncoder(os.Stdout).Encode(metadataDict)
} else {
for _, what := range args {
if metadataQuiet {
fmt.Printf("%s\n", metadataDict[what])
} else {
fmt.Printf("%s: %s\n", what, metadataDict[what])
}
}
}
if missingAny {
return 1
}
return 0
}
func setMetadata(flags *mflag.FlagSet, action string, m storage.Store, args []string) int {
if len(args) < 1 {
return 1
}
if paramMetadataFile == "" && paramMetadata == "" {
fmt.Fprintf(os.Stderr, "no new metadata provided\n")
return 1
}
if paramMetadataFile != "" {
f, err := os.Open(paramMetadataFile)
if err != nil {
fmt.Fprintf(os.Stderr, "%v\n", err)
return 1
}
b, err := ioutil.ReadAll(f)
if err != nil {
fmt.Fprintf(os.Stderr, "%v\n", err)
return 1
}
paramMetadata = string(b)
}
if err := m.SetMetadata(args[0], paramMetadata); err != nil {
fmt.Fprintf(os.Stderr, "%v\n", err)
return 1
}
return 0
}
func init() {
commands = append(commands, command{
names: []string{"metadata"},
optionsHelp: "[LayerOrImageOrContainerNameOrID [...]]",
usage: "Retrieve layer, image, or container metadata",
minArgs: 1,
action: metadata,
addFlags: func(flags *mflag.FlagSet, cmd *command) {
flags.BoolVar(&metadataQuiet, []string{"-quiet", "q"}, metadataQuiet, "Omit names and IDs")
flags.BoolVar(&jsonOutput, []string{"-json", "j"}, jsonOutput, "Prefer JSON output")
},
})
commands = append(commands, command{
names: []string{"set-metadata", "setmetadata"},
optionsHelp: "[options [...]] layerOrImageOrContainerNameOrID",
usage: "Set layer, image, or container metadata",
minArgs: 1,
maxArgs: 1,
action: setMetadata,
addFlags: func(flags *mflag.FlagSet, cmd *command) {
flags.StringVar(&paramMetadata, []string{"-metadata", "m"}, "", "Metadata")
flags.StringVar(&paramMetadataFile, []string{"-metadata-file", "f"}, "", "Metadata File")
},
})
}

View file

@ -0,0 +1,99 @@
package main
import (
"encoding/json"
"fmt"
"os"
"github.com/containers/storage/pkg/mflag"
"github.com/containers/storage/storage"
)
type mountPointOrError struct {
ID string `json:"id"`
MountPoint string `json:"mountpoint"`
Error string `json:"error"`
}
type mountPointError struct {
ID string `json:"id"`
Error string `json:"error"`
}
func mount(flags *mflag.FlagSet, action string, m storage.Store, args []string) int {
moes := []mountPointOrError{}
for _, arg := range args {
result, err := m.Mount(arg, paramMountLabel)
errText := ""
if err != nil {
errText = fmt.Sprintf("%v", err)
}
moes = append(moes, mountPointOrError{arg, result, errText})
}
if jsonOutput {
json.NewEncoder(os.Stdout).Encode(moes)
} else {
for _, mountOrError := range moes {
if mountOrError.Error != "" {
fmt.Fprintf(os.Stderr, "%s while mounting %s\n", mountOrError.Error, mountOrError.ID)
} else {
fmt.Printf("%s\n", mountOrError.MountPoint)
}
}
}
for _, mountOrErr := range moes {
if mountOrErr.Error != "" {
return 1
}
}
return 0
}
func unmount(flags *mflag.FlagSet, action string, m storage.Store, args []string) int {
mes := []mountPointError{}
errors := false
for _, arg := range args {
err := m.Unmount(arg)
errText := ""
if err != nil {
errText = fmt.Sprintf("%v", err)
errors = true
}
mes = append(mes, mountPointError{arg, errText})
}
if jsonOutput {
json.NewEncoder(os.Stdout).Encode(mes)
} else {
for _, me := range mes {
if me.Error != "" {
fmt.Fprintf(os.Stderr, "%s while unmounting %s\n", me.Error, me.ID)
}
}
}
if errors {
return 1
}
return 0
}
func init() {
commands = append(commands, command{
names: []string{"mount"},
optionsHelp: "[options [...]] LayerOrContainerNameOrID",
usage: "Mount a layer or container",
minArgs: 1,
action: mount,
addFlags: func(flags *mflag.FlagSet, cmd *command) {
flags.StringVar(&paramMountLabel, []string{"-label", "l"}, "", "Mount Label")
flags.BoolVar(&jsonOutput, []string{"-json", "j"}, jsonOutput, "Prefer JSON output")
},
})
commands = append(commands, command{
names: []string{"unmount", "umount"},
optionsHelp: "LayerOrContainerNameOrID",
usage: "Unmount a layer or container",
minArgs: 1,
action: unmount,
addFlags: func(flags *mflag.FlagSet, cmd *command) {
flags.BoolVar(&jsonOutput, []string{"-json", "j"}, jsonOutput, "Prefer JSON output")
},
})
}

View file

@ -0,0 +1,96 @@
package main
import (
"encoding/json"
"fmt"
"os"
"github.com/containers/storage/opts"
"github.com/containers/storage/pkg/mflag"
"github.com/containers/storage/storage"
)
func addNames(flags *mflag.FlagSet, action string, m storage.Store, args []string) int {
if len(args) < 1 {
return 1
}
id, err := m.Lookup(args[0])
if err != nil {
fmt.Fprintf(os.Stderr, "%v\n", err)
return 1
}
oldnames, err := m.GetNames(id)
if err != nil {
fmt.Fprintf(os.Stderr, "%v\n", err)
return 1
}
newNames := []string{}
if oldnames != nil {
newNames = append(newNames, oldnames...)
}
if paramNames != nil {
newNames = append(newNames, paramNames...)
}
if err := m.SetNames(id, newNames); err != nil {
fmt.Fprintf(os.Stderr, "%v\n", err)
return 1
}
names, err := m.GetNames(id)
if err != nil {
fmt.Fprintf(os.Stderr, "%v\n", err)
return 1
}
if jsonOutput {
json.NewEncoder(os.Stdout).Encode(names)
}
return 0
}
func setNames(flags *mflag.FlagSet, action string, m storage.Store, args []string) int {
if len(args) < 1 {
return 1
}
id, err := m.Lookup(args[0])
if err != nil {
fmt.Fprintf(os.Stderr, "%v\n", err)
return 1
}
if err := m.SetNames(id, paramNames); err != nil {
fmt.Fprintf(os.Stderr, "%v\n", err)
return 1
}
names, err := m.GetNames(id)
if err != nil {
fmt.Fprintf(os.Stderr, "%v\n", err)
return 1
}
if jsonOutput {
json.NewEncoder(os.Stdout).Encode(names)
}
return 0
}
func init() {
commands = append(commands, command{
names: []string{"add-names", "addnames"},
optionsHelp: "[options [...]] imageOrContainerNameOrID",
usage: "Add layer, image, or container name or names",
minArgs: 1,
action: addNames,
addFlags: func(flags *mflag.FlagSet, cmd *command) {
flags.Var(opts.NewListOptsRef(&paramNames, nil), []string{"-name", "n"}, "New name")
flags.BoolVar(&jsonOutput, []string{"-json", "j"}, jsonOutput, "Prefer JSON output")
},
})
commands = append(commands, command{
names: []string{"set-names", "setnames"},
optionsHelp: "[options [...]] imageOrContainerNameOrID",
usage: "Set layer, image, or container name or names",
minArgs: 1,
action: setNames,
addFlags: func(flags *mflag.FlagSet, cmd *command) {
flags.Var(opts.NewListOptsRef(&paramNames, nil), []string{"-name", "n"}, "New name")
flags.BoolVar(&jsonOutput, []string{"-json", "j"}, jsonOutput, "Prefer JSON output")
},
})
}


@ -0,0 +1,46 @@
package main
import (
"encoding/json"
"fmt"
"os"
"github.com/containers/storage/pkg/mflag"
"github.com/containers/storage/storage"
)
var (
forceShutdown = false
)
func shutdown(flags *mflag.FlagSet, action string, m storage.Store, args []string) int {
_, err := m.Shutdown(forceShutdown)
if jsonOutput {
if err == nil {
json.NewEncoder(os.Stdout).Encode(string(""))
} else {
json.NewEncoder(os.Stdout).Encode(err)
}
} else {
if err != nil {
fmt.Fprintf(os.Stderr, "%s: %v\n", action, err)
}
}
if err != nil {
return 1
}
return 0
}
func init() {
commands = append(commands, command{
names: []string{"shutdown"},
usage: "Shut down layer storage",
minArgs: 0,
action: shutdown,
addFlags: func(flags *mflag.FlagSet, cmd *command) {
flags.BoolVar(&jsonOutput, []string{"-json", "j"}, jsonOutput, "Prefer JSON output")
flags.BoolVar(&forceShutdown, []string{"-force", "f"}, forceShutdown, "Unmount mounted layers first")
},
})
}


@ -0,0 +1,38 @@
package main
import (
"encoding/json"
"fmt"
"os"
"github.com/containers/storage/pkg/mflag"
"github.com/containers/storage/storage"
)
func status(flags *mflag.FlagSet, action string, m storage.Store, args []string) int {
status, err := m.Status()
if err != nil {
fmt.Fprintf(os.Stderr, "status: %v\n", err)
return 1
}
if jsonOutput {
json.NewEncoder(os.Stdout).Encode(status)
} else {
for _, pair := range status {
fmt.Fprintf(os.Stderr, "%s: %s\n", pair[0], pair[1])
}
}
return 0
}
func init() {
commands = append(commands, command{
names: []string{"status"},
usage: "Check on graph driver status",
minArgs: 0,
action: status,
addFlags: func(flags *mflag.FlagSet, cmd *command) {
flags.BoolVar(&jsonOutput, []string{"-json", "j"}, jsonOutput, "Prefer JSON output")
},
})
}


@ -0,0 +1,88 @@
package main
import (
"fmt"
"strings"
)
const treeIndentStep = 2
const treeStemWidth = treeIndentStep - 1
const treeVertical = '\u2502'
const treeThisAndMore = "\u251c"
const treeJustThis = "\u2514"
const treeStem = "\u2500"
type treeNode struct {
left, right string
notes []string
}
func selectRoot(nodes []treeNode) string {
children := make(map[string][]string)
areChildren := make(map[string]bool)
for _, node := range nodes {
areChildren[node.right] = true
if childlist, ok := children[node.left]; ok {
children[node.left] = append(childlist, node.right)
} else {
children[node.left] = []string{node.right}
}
}
favorite := ""
for left, right := range children {
if areChildren[left] {
continue
}
if favorite == "" {
favorite = left
} else if len(right) < len(children[favorite]) {
favorite = left
}
}
return favorite
}
func printSubTree(root string, nodes []treeNode, indent int, continued []int) []treeNode {
leftovers := []treeNode{}
children := []treeNode{}
for _, node := range nodes {
if node.left != root {
leftovers = append(leftovers, node)
continue
}
children = append(children, node)
}
for n, child := range children {
istring := []rune(strings.Repeat(" ", indent))
for _, column := range continued {
istring[column] = treeVertical
}
subc := continued[:]
header := treeJustThis
noteHeader := " "
if n < len(children)-1 {
subc = append(subc, indent)
header = treeThisAndMore
noteHeader = string(treeVertical)
}
fmt.Printf("%s%s%s%s\n", string(istring), header, strings.Repeat(treeStem, treeStemWidth), child.right)
for _, note := range child.notes {
fmt.Printf("%s%s%s%s\n", string(istring), noteHeader, strings.Repeat(" ", treeStemWidth), note)
}
leftovers = printSubTree(child.right, leftovers, indent+treeIndentStep, subc)
}
return leftovers
}
func printTree(nodes []treeNode) {
for len(nodes) > 0 {
root := selectRoot(nodes)
fmt.Printf("%s\n", root)
oldLength := len(nodes)
nodes = printSubTree(root, nodes, 0, []int{})
newLength := len(nodes)
if oldLength == newLength {
break
}
}
}


@ -0,0 +1,25 @@
package main
import "testing"
func TestTree(*testing.T) {
nodes := []treeNode{
{"F", "H", []string{}},
{"F", "I", []string{}},
{"F", "J", []string{}},
{"A", "B", []string{}},
{"A", "C", []string{}},
{"A", "K", []string{}},
{"C", "F", []string{}},
{"C", "G", []string{"beware", "the", "scary", "thing"}},
{"C", "L", []string{}},
{"B", "D", []string{}},
{"B", "E", []string{}},
{"B", "M", []string{}},
{"K", "N", []string{}},
{"W", "X", []string{}},
{"Y", "Z", []string{}},
{"X", "Y", []string{}},
}
printTree(nodes)
}


@ -0,0 +1,38 @@
package main
import (
"encoding/json"
"fmt"
"os"
"github.com/containers/storage/pkg/mflag"
"github.com/containers/storage/storage"
)
func version(flags *mflag.FlagSet, action string, m storage.Store, args []string) int {
version, err := m.Version()
if err != nil {
fmt.Fprintf(os.Stderr, "version: %v\n", err)
return 1
}
if jsonOutput {
json.NewEncoder(os.Stdout).Encode(version)
} else {
for _, pair := range version {
fmt.Fprintf(os.Stderr, "%s: %s\n", pair[0], pair[1])
}
}
return 0
}
func init() {
commands = append(commands, command{
names: []string{"version"},
usage: "Return oci-storage version information",
minArgs: 0,
action: version,
addFlags: func(flags *mflag.FlagSet, cmd *command) {
flags.BoolVar(&jsonOutput, []string{"-json", "j"}, jsonOutput, "Prefer JSON output")
},
})
}


@ -0,0 +1,41 @@
package main
import (
"encoding/json"
"fmt"
"os"
"github.com/containers/storage/pkg/mflag"
"github.com/containers/storage/storage"
)
func wipe(flags *mflag.FlagSet, action string, m storage.Store, args []string) int {
err := m.Wipe()
if jsonOutput {
if err == nil {
json.NewEncoder(os.Stdout).Encode(string(""))
} else {
json.NewEncoder(os.Stdout).Encode(err)
}
} else {
if err != nil {
fmt.Fprintf(os.Stderr, "%s: %v\n", action, err)
}
}
if err != nil {
return 1
}
return 0
}
func init() {
commands = append(commands, command{
names: []string{"wipe"},
usage: "Wipe all layers, images, and containers",
minArgs: 0,
action: wipe,
addFlags: func(flags *mflag.FlagSet, cmd *command) {
flags.BoolVar(&jsonOutput, []string{"-json", "j"}, jsonOutput, "Prefer JSON output")
},
})
}

6
vendor/github.com/containers/storage/docs/Makefile generated vendored Normal file

@ -0,0 +1,6 @@
GOMD2MAN = go-md2man
docs: $(patsubst %.md,%.1,$(wildcard *.md))
%.1: %.md
$(GOMD2MAN) -in $^ -out $@


@ -0,0 +1,25 @@
## oci-storage-add-names 1 "August 2016"
## NAME
oci-storage add-names - Add names to a layer/image/container
## SYNOPSIS
**oci-storage** **add-names** [*options* [...]] *layerOrImageOrContainerNameOrID*
## DESCRIPTION
In addition to IDs, *layers*, *images*, and *containers* can have
human-readable names assigned to them in *oci-storage*. The *add-names*
command can be used to add one or more names to them.
## OPTIONS
**-n | --name** *name*
Specifies a name to add to the layer, image, or container. If a specified name
is already used by another layer, image, or container, it is removed from that
other layer, image, or container.
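The collision behavior above (a name belongs to at most one item, so adding it here removes it from any other holder) can be sketched as a toy model. The `nameStore` type and its methods are hypothetical illustrations, not the containers/storage implementation:

```go
package main

import "fmt"

// nameStore is a toy model of oci-storage name handling: each name maps
// to at most one item ID, so assigning a name to one item "steals" it
// from any previous owner.
type nameStore struct {
	owner map[string]string   // name -> item ID
	names map[string][]string // item ID -> names, in assignment order
}

func newNameStore() *nameStore {
	return &nameStore{owner: map[string]string{}, names: map[string][]string{}}
}

// addNames appends names to an item, removing each name from any other
// item that currently holds it.
func (s *nameStore) addNames(id string, add []string) {
	for _, n := range add {
		if prev, ok := s.owner[n]; ok {
			if prev == id {
				continue // already ours
			}
			s.removeName(prev, n) // steal from the other item
		}
		s.owner[n] = id
		s.names[id] = append(s.names[id], n)
	}
}

func (s *nameStore) removeName(id, name string) {
	kept := s.names[id][:0]
	for _, n := range s.names[id] {
		if n != name {
			kept = append(kept, n)
		}
	}
	s.names[id] = kept
}

func main() {
	s := newNameStore()
	s.addNames("c1", []string{"web"})
	s.addNames("c2", []string{"web", "db"}) // "web" moves from c1 to c2
	fmt.Println(s.names["c1"], s.names["c2"]) // [] [web db]
}
```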
## EXAMPLE
**oci-storage add-names -n my-awesome-container -n my-for-realsies-awesome-container f3be6c6134d0d980936b4c894f1613b69a62b79588fdeda744d0be3693bde8ec**
## SEE ALSO
oci-storage-set-names(1)


@ -0,0 +1,32 @@
## oci-storage-apply-diff 1 "August 2016"
## NAME
oci-storage apply-diff - Apply a layer diff to a layer
## SYNOPSIS
**oci-storage** **apply-diff** [*options* [...]] *layerNameOrID* [*referenceLayerNameOrID*]
## DESCRIPTION
When a layer is first created, it contains no changes relative to its parent
layer. The layer can either be mounted read-write and its contents modified
directly, or contents can be added (or removed) by applying a layer diff. A
layer diff takes the form of a (possibly compressed) tar archive with
additional information present in its headers, and can be produced by running
*oci-storage diff* or an equivalent.
Layer diffs are not typically applied manually. More often they are applied by
a tool which is being used to import an entire image, such as **skopeo**.
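Because a diff is "a (possibly compressed) tar archive", a consumer can sniff the gzip magic bytes before handing the stream to a tar reader. A minimal Go sketch, assuming hypothetical helpers `openDiff` and `listDiff` (this is illustrative, not the library's actual decoder):

```go
package main

import (
	"archive/tar"
	"bufio"
	"bytes"
	"compress/gzip"
	"fmt"
	"io"
)

// openDiff peeks at the first two bytes of a diff stream to decide
// whether it is gzip-compressed, then returns a tar reader over the
// (possibly decompressed) stream.
func openDiff(r io.Reader) (*tar.Reader, error) {
	br := bufio.NewReader(r)
	magic, err := br.Peek(2)
	if err != nil {
		return nil, err
	}
	var src io.Reader = br
	if magic[0] == 0x1f && magic[1] == 0x8b { // gzip magic number
		gz, err := gzip.NewReader(br)
		if err != nil {
			return nil, err
		}
		src = gz
	}
	return tar.NewReader(src), nil
}

// listDiff returns the entry names found in a layer diff stream.
func listDiff(r io.Reader) ([]string, error) {
	tr, err := openDiff(r)
	if err != nil {
		return nil, err
	}
	var names []string
	for {
		hdr, err := tr.Next()
		if err == io.EOF {
			break
		}
		if err != nil {
			return nil, err
		}
		names = append(names, hdr.Name)
	}
	return names, nil
}

// makeSampleDiff builds a tiny gzipped tar archive in memory as a
// stand-in for a real layer diff.
func makeSampleDiff() *bytes.Buffer {
	var buf bytes.Buffer
	gz := gzip.NewWriter(&buf)
	tw := tar.NewWriter(gz)
	body := []byte("hello\n")
	tw.WriteHeader(&tar.Header{Name: "etc/motd", Mode: 0644, Size: int64(len(body))})
	tw.Write(body)
	tw.Close()
	gz.Close()
	return &buf
}

func main() {
	names, err := listDiff(makeSampleDiff())
	if err != nil {
		panic(err)
	}
	fmt.Println(names) // [etc/motd]
}
```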
## OPTIONS
**-f | --file** *filename*
Specifies the name of a file from which the diff should be read. If this
option is not used, the diff is read from standard input.
## EXAMPLE
**oci-storage apply-diff -f 71841c97e320d6cde.tar.gz layer1**
## SEE ALSO
oci-storage-changes(1)
oci-storage-diff(1)
oci-storage-diffsize(1)


@ -0,0 +1,21 @@
## oci-storage-changes 1 "August 2016"
## NAME
oci-storage changes - Produce a list of changes in a layer
## SYNOPSIS
**oci-storage** **changes** *layerNameOrID* [*referenceLayerNameOrID*]
## DESCRIPTION
When a layer is first created, it contains no changes relative to its parent
layer. Once changes have been made to it, the *oci-storage changes* command
can be used to obtain a summary of which files have been added, deleted, or
modified in the layer.
## EXAMPLE
**oci-storage changes f3be6c6134d0d980936b4c894f1613b69a62b79588fdeda744d0be3693bde8ec**
## SEE ALSO
oci-storage-applydiff(1)
oci-storage-diff(1)
oci-storage-diffsize(1)


@ -0,0 +1,18 @@
## oci-storage-container 1 "August 2016"
## NAME
oci-storage container - Examine a single container
## SYNOPSIS
**oci-storage** **container** *containerNameOrID*
## DESCRIPTION
Retrieve information about a container: any names it has, which image was used
to create it, any names that image has, and the ID of the container's layer.
## EXAMPLE
**oci-storage container f3be6c6134d0d980936b4c894f1613b69a62b79588fdeda744d0be3693bde8ec**
**oci-storage container my-awesome-container**
## SEE ALSO
oci-storage-containers(1)


@ -0,0 +1,16 @@
## oci-storage-containers 1 "August 2016"
## NAME
oci-storage containers - List known containers
## SYNOPSIS
**oci-storage** **containers**
## DESCRIPTION
Retrieves information about all known containers and lists their IDs and names.
## EXAMPLE
**oci-storage containers**
## SEE ALSO
oci-storage-container(1)


@ -0,0 +1,37 @@
## oci-storage-create-container 1 "August 2016"
## NAME
oci-storage create-container - Create a container
## SYNOPSIS
**oci-storage** **create-container** [*options*...] *imageNameOrID*
## DESCRIPTION
Creates a container, using the specified image as the starting point for its
root filesystem.
## OPTIONS
**-n | --name** *name*
Sets an optional name for the container. If a name is already in use, an error
is returned.
**-i | --id** *ID*
Sets the ID for the container. If none is specified, one is generated.
**-m | --metadata** *metadata-value*
Sets the metadata for the container to the specified value.
**-f | --metadata-file** *metadata-file*
Sets the metadata for the container to the contents of the specified file.
## EXAMPLE
**oci-storage create-container -f manifest.json -n new-container goodimage**
## SEE ALSO
oci-storage-create-image(1)
oci-storage-create-layer(1)
oci-storage-delete-container(1)


@ -0,0 +1,37 @@
## oci-storage-create-image 1 "August 2016"
## NAME
oci-storage create-image - Create an image
## SYNOPSIS
**oci-storage** **create-image** [*options*...] *topLayerNameOrID*
## DESCRIPTION
Creates an image, referring to the specified layer as the one which should be
used as the basis for containers which will be based on the image.
## OPTIONS
**-n | --name** *name*
Sets an optional name for the image. If a name is already in use, an error is
returned.
**-i | --id** *ID*
Sets the ID for the image. If none is specified, one is generated.
**-m | --metadata** *metadata-value*
Sets the metadata for the image to the specified value.
**-f | --metadata-file** *metadata-file*
Sets the metadata for the image to the contents of the specified file.
## EXAMPLE
**oci-storage create-image -f manifest.json -n new-image somelayer**
## SEE ALSO
oci-storage-create-container(1)
oci-storage-create-layer(1)
oci-storage-delete-image(1)


@ -0,0 +1,42 @@
## oci-storage-create-layer 1 "August 2016"
## NAME
oci-storage create-layer - Create a layer
## SYNOPSIS
**oci-storage** **create-layer** [*options* [...]] [*parentLayerNameOrID*]
## DESCRIPTION
Creates a new layer which either has a specified layer as its parent, or if no
parent is specified, is empty.
## OPTIONS
**-n** *name*
Sets an optional name for the layer. If a name is already in use, an error is
returned.
**-i | --id** *ID*
Sets the ID for the layer. If none is specified, one is generated.
**-m | --metadata** *metadata-value*
Sets the metadata for the layer to the specified value.
**-f | --metadata-file** *metadata-file*
Sets the metadata for the layer to the contents of the specified file.
**-l | --label** *mount-label*
Sets the label which should be assigned as an SELinux context when mounting the
layer.
## EXAMPLE
**oci-storage create-layer -f manifest.json -n new-layer somelayer**
## SEE ALSO
oci-storage-create-container(1)
oci-storage-create-image(1)
oci-storage-delete-layer(1)


@ -0,0 +1,18 @@
## oci-storage-delete-container 1 "August 2016"
## NAME
oci-storage delete-container - Delete a container
## SYNOPSIS
**oci-storage** **delete-container** *containerNameOrID*
## DESCRIPTION
Deletes a container and its layer.
## EXAMPLE
**oci-storage delete-container my-awesome-container**
## SEE ALSO
oci-storage-create-container(1)
oci-storage-delete-image(1)
oci-storage-delete-layer(1)


@ -0,0 +1,21 @@
## oci-storage-delete-image 1 "August 2016"
## NAME
oci-storage delete-image - Delete an image
## SYNOPSIS
**oci-storage** **delete-image** *imageNameOrID*
## DESCRIPTION
Deletes an image if it is not currently being used by any containers. If the
image's top layer is not being used by any other images, it will be removed.
If that layer's parent is then not being used by other images, it, too, will
be removed, and this will be repeated for each parent's parent.
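The cascade above can be sketched as a toy reference-counting walk up the parent chain. The `layerInfo` type and the counting scheme (each layer's `users` counts the images, containers, and child layers referring to it) are illustrative assumptions, not the library's code:

```go
package main

import "fmt"

// layerInfo is a toy record for one layer: its parent's ID and how many
// images, containers, and child layers currently refer to it.
type layerInfo struct {
	parent string
	users  int
}

// deleteImageLayers removes the deleted image's top layer and then each
// successive parent, stopping as soon as a layer is still in use. The
// first decrement drops the image's reference; later decrements drop the
// just-removed child layer's reference.
func deleteImageLayers(top string, layers map[string]*layerInfo) []string {
	removed := []string{}
	for id := top; id != ""; {
		l := layers[id]
		l.users--
		if l.users > 0 {
			break // still needed elsewhere; stop the cascade
		}
		removed = append(removed, id)
		delete(layers, id)
		id = l.parent
	}
	return removed
}

func main() {
	// base <- mid <- top; the image being deleted uses "top", and a
	// second image shares "base".
	layers := map[string]*layerInfo{
		"base": {users: 2},                // child "mid" + another image
		"mid":  {parent: "base", users: 1}, // child "top"
		"top":  {parent: "mid", users: 1},  // the deleted image
	}
	fmt.Println(deleteImageLayers("top", layers)) // [top mid]
}
```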
## EXAMPLE
**oci-storage delete-image my-base-image**
## SEE ALSO
oci-storage-create-image(1)
oci-storage-delete-container(1)
oci-storage-delete-layer(1)


@ -0,0 +1,19 @@
## oci-storage-delete-layer 1 "August 2016"
## NAME
oci-storage delete-layer - Delete a layer
## SYNOPSIS
**oci-storage** **delete-layer** *layerNameOrID*
## DESCRIPTION
Deletes a layer if it is not currently being used by any images or containers,
and is not the parent of any other layers.
## EXAMPLE
**oci-storage delete-layer my-base-layer**
## SEE ALSO
oci-storage-create-layer(1)
oci-storage-delete-image(1)
oci-storage-delete-container(1)


@ -0,0 +1,19 @@
## oci-storage-delete 1 "August 2016"
## NAME
oci-storage delete - Force deletion of a layer, image, or container
## SYNOPSIS
**oci-storage** **delete** *layerOrImageOrContainerNameOrID*
## DESCRIPTION
Deletes a specified layer, image, or container, with no safety checking. This
can corrupt data, and the command may be removed in a future release.
## EXAMPLE
**oci-storage delete my-base-layer**
## SEE ALSO
oci-storage-delete-container(1)
oci-storage-delete-image(1)
oci-storage-delete-layer(1)


@ -0,0 +1,32 @@
## oci-storage-diff 1 "August 2016"
## NAME
oci-storage diff - Generate a layer diff
## SYNOPSIS
**oci-storage** **diff** [*options* [...]] *layerNameOrID*
## DESCRIPTION
Generates a layer diff representing the changes made in the specified layer.
If the layer was populated using a layer diff, the result aims to be
bit-for-bit identical with the one that was applied, including the type of
compression which was applied.
## OPTIONS
**-f | --file** *file*
Write the diff to the specified file instead of stdout.
**-c | --gzip**
Compress the diff using gzip compression. If the layer was populated by a
layer diff, and that layer diff was compressed, this will be done
automatically.
## EXAMPLE
**oci-storage diff my-base-layer**
## SEE ALSO
oci-storage-applydiff(1)
oci-storage-changes(1)
oci-storage-diffsize(1)


@ -0,0 +1,19 @@
## oci-storage-diffsize 1 "August 2016"
## NAME
oci-storage diffsize - Compute the size of a layer diff
## SYNOPSIS
**oci-storage** **diffsize** *layerNameOrID*
## DESCRIPTION
Computes the expected size of the layer diff which would be generated for the
specified layer.
## EXAMPLE
**oci-storage diffsize my-base-layer**
## SEE ALSO
oci-storage-applydiff(1)
oci-storage-changes(1)
oci-storage-diff(1)


@ -0,0 +1,31 @@
## oci-storage-exists 1 "August 2016"
## NAME
oci-storage exists - Check if a layer, image, or container exists
## SYNOPSIS
**oci-storage** **exists** [*options* [...]] *layerOrImageOrContainerNameOrID* [...]
## DESCRIPTION
Checks if there are layers, images, or containers which have the specified
names or IDs.
## OPTIONS
**-c | --container**
Only succeed if the names or IDs are that of containers.
**-i | --image**
Only succeed if the names or IDs are that of images.
**-l | --layer**
Only succeed if the names or IDs are that of layers.
**-q | --quiet**
Suppress output.
## EXAMPLE
**oci-storage exists my-base-layer**


@ -0,0 +1,22 @@
## oci-storage-get-container-data 1 "August 2016"
## NAME
oci-storage get-container-data - Retrieve lookaside data for a container
## SYNOPSIS
**oci-storage** **get-container-data** [*options* [...]] *containerNameOrID* *dataName*
## DESCRIPTION
Retrieves a piece of named data which is associated with a container.
## OPTIONS
**-f | --file** *file*
Write the data to a file instead of stdout.
## EXAMPLE
**oci-storage get-container-data -f config.json my-container configuration**
## SEE ALSO
oci-storage-list-container-data(1)
oci-storage-set-container-data(1)


@ -0,0 +1,17 @@
## oci-storage-get-container-dir 1 "September 2016"
## NAME
oci-storage get-container-dir - Find lookaside directory for a container
## SYNOPSIS
**oci-storage** **get-container-dir** [*options* [...]] *containerNameOrID*
## DESCRIPTION
Prints the location of a directory which the caller can use to store lookaside
information which should be cleaned up when the container is deleted.
## EXAMPLE
**oci-storage get-container-dir my-container**
## SEE ALSO
oci-storage-get-container-run-dir(1)


@ -0,0 +1,17 @@
## oci-storage-get-container-run-dir 1 "September 2016"
## NAME
oci-storage get-container-run-dir - Find runtime lookaside directory for a container
## SYNOPSIS
**oci-storage** **get-container-run-dir** [*options* [...]] *containerNameOrID*
## DESCRIPTION
Prints the location of a directory which the caller can use to store lookaside
information which should be cleaned up when the host is rebooted.
## EXAMPLE
**oci-storage get-container-run-dir my-container**
## SEE ALSO
oci-storage-get-container-dir(1)


@ -0,0 +1,22 @@
## oci-storage-get-image-data 1 "August 2016"
## NAME
oci-storage get-image-data - Retrieve lookaside data for an image
## SYNOPSIS
**oci-storage** **get-image-data** [*options* [...]] *imageNameOrID* *dataName*
## DESCRIPTION
Retrieves a piece of named data which is associated with an image.
## OPTIONS
**-f | --file** *file*
Write the data to a file instead of stdout.
## EXAMPLE
**oci-storage get-image-data -f manifest.json my-image manifest**
## SEE ALSO
oci-storage-list-image-data(1)
oci-storage-set-image-data(1)


@ -0,0 +1,18 @@
## oci-storage-image 1 "August 2016"
## NAME
oci-storage image - Examine a single image
## SYNOPSIS
**oci-storage** **image** *imageNameOrID*
## DESCRIPTION
Retrieve information about an image: its ID, any names it has, and the ID of
its top layer.
## EXAMPLE
**oci-storage image 49bff34e4baf9378c01733d02276a731a4c4771ebeab305020c5303679f88bb8**
**oci-storage image my-favorite-image**
## SEE ALSO
oci-storage-images(1)


@ -0,0 +1,16 @@
## oci-storage-images 1 "August 2016"
## NAME
oci-storage images - List known images
## SYNOPSIS
**oci-storage** **images**
## DESCRIPTION
Retrieves information about all known images and lists their IDs and names.
## EXAMPLE
**oci-storage images**
## SEE ALSO
oci-storage-image(1)


@ -0,0 +1,23 @@
## oci-storage-layers 1 "August 2016"
## NAME
oci-storage layers - List known layers
## SYNOPSIS
**oci-storage** [*options* [...]] **layers**
## DESCRIPTION
Retrieves information about all known layers and lists their IDs and names, the
IDs and names of any images which list those layers as their top layer, and the
IDs and names of any containers for which the layer serves as the container's
own layer.
## OPTIONS
**-t | --tree**
Display results using a tree to show the hierarchy of parent-child
relationships between layers.
## EXAMPLE
**oci-storage layers**
**oci-storage layers -t**


@ -0,0 +1,17 @@
## oci-storage-list-container-data 1 "August 2016"
## NAME
oci-storage list-container-data - List lookaside data for a container
## SYNOPSIS
**oci-storage** **list-container-data** *containerNameOrID*
## DESCRIPTION
List the pieces of named data which are associated with a container.
## EXAMPLE
**oci-storage list-container-data my-container**
## SEE ALSO
oci-storage-get-container-data(1)
oci-storage-set-container-data(1)


@ -0,0 +1,17 @@
## oci-storage-list-image-data 1 "August 2016"
## NAME
oci-storage list-image-data - List lookaside data for an image
## SYNOPSIS
**oci-storage** **list-image-data** *imageNameOrID*
## DESCRIPTION
List the pieces of named data which are associated with an image.
## EXAMPLE
**oci-storage list-image-data my-image**
## SEE ALSO
oci-storage-get-image-data(1)
oci-storage-set-image-data(1)


@ -0,0 +1,22 @@
## oci-storage-metadata 1 "August 2016"
## NAME
oci-storage metadata - Retrieve metadata for a layer, image, or container
## SYNOPSIS
**oci-storage** **metadata** [*options* [...]] *layerOrImageOrContainerNameOrID*
## DESCRIPTION
Outputs metadata associated with a layer, image, or container. Metadata is
intended to be small, and is expected to be cached in memory.
## OPTIONS
**-q | --quiet**
Don't print the ID or name of the item with which the metadata is associated.
## EXAMPLE
**oci-storage metadata -q my-image > my-image.txt**
## SEE ALSO
oci-storage-set-metadata(1)


@ -0,0 +1,22 @@
## oci-storage-mount 1 "August 2016"
## NAME
oci-storage mount - Mount a layer or a container's layer for manipulation
## SYNOPSIS
**oci-storage** **mount** [*options* [...]] *layerOrContainerNameOrID*
## DESCRIPTION
Mounts a layer or a container's layer on the host's filesystem and prints the
mountpoint.
## OPTIONS
**-l | --label** *label*
Specify an SELinux context for the mounted layer.
## EXAMPLE
**oci-storage mount my-container**
## SEE ALSO
oci-storage-unmount(1)

View file

@ -0,0 +1,22 @@
## oci-storage-set-container-data 1 "August 2016"
## NAME
oci-storage set-container-data - Set lookaside data for a container
## SYNOPSIS
**oci-storage** **set-container-data** [*options* [...]] *containerNameOrID* *dataName*
## DESCRIPTION
Sets a piece of named data which is associated with a container.
## OPTIONS
**-f | --file** *filename*
Read the data contents from a file instead of stdin.
## EXAMPLE
**oci-storage set-container-data -f ./config.json my-container configuration**
## SEE ALSO
oci-storage-get-container-data(1)
oci-storage-list-container-data(1)


@ -0,0 +1,22 @@
## oci-storage-set-image-data 1 "August 2016"
## NAME
oci-storage set-image-data - Set lookaside data for an image
## SYNOPSIS
**oci-storage** **set-image-data** [*options* [...]] *imageNameOrID* *dataName*
## DESCRIPTION
Sets a piece of named data which is associated with an image.
## OPTIONS
**-f | --file** *filename*
Read the data contents from a file instead of stdin.
## EXAMPLE
**oci-storage set-image-data -f ./manifest.json my-image manifest**
## SEE ALSO
oci-storage-get-image-data(1)
oci-storage-list-image-data(1)

View file

@ -0,0 +1,26 @@
## oci-storage-set-metadata 1 "August 2016"
## NAME
oci-storage set-metadata - Set metadata for a layer, image, or container
## SYNOPSIS
**oci-storage** **set-metadata** [*options* [...]] *layerOrImageOrContainerNameOrID*
## DESCRIPTION
Updates the metadata associated with a layer, image, or container. Metadata is
intended to be small, and is expected to be cached in memory.
## OPTIONS
**-f | --metadata-file** *filename*
Use the contents of the specified file as the metadata.
**-m | --metadata** *value*
Use the specified value as the metadata.
## EXAMPLE
**oci-storage set-metadata -m "compression: gzip" my-layer**
## SEE ALSO
oci-storage-metadata(1)


@ -0,0 +1,27 @@
## oci-storage-set-names 1 "August 2016"
## NAME
oci-storage set-names - Set names for a layer/image/container
## SYNOPSIS
**oci-storage** **set-names** [**-n** *name* [...]] *layerOrImageOrContainerNameOrID*
## DESCRIPTION
In addition to IDs, *layers*, *images*, and *containers* can have
human-readable names assigned to them in *oci-storage*. The *set-names*
command can be used to reset the list of names for any of them.
## OPTIONS
**-n | --name** *name*
Specifies a name to set on the layer, image, or container. If a specified name
is already used by another layer, image, or container, it is removed from that
other layer, image, or container. Any names which are currently assigned to
this layer, image, or container, and which are not specified using this option,
will be removed from the layer, image, or container.
## EXAMPLE
**oci-storage set-names -n my-one-and-only-name f3be6c6134d0d980936b4c894f1613b69a62b79588fdeda744d0be3693bde8ec**
## SEE ALSO
oci-storage-add-names(1)


@ -0,0 +1,20 @@
## oci-storage-shutdown 1 "October 2016"
## NAME
oci-storage shutdown - Shut down layer storage
## SYNOPSIS
**oci-storage** **shutdown** [*options* [...]]
## DESCRIPTION
Shuts down the layer storage driver, which may be using kernel resources.
## OPTIONS
**-f | --force**
Attempt to unmount any mounted layers before attempting to shut down the
driver. If this option is not specified, if any layers are mounted, shutdown
will not be attempted.
## EXAMPLE
**oci-storage shutdown**


@ -0,0 +1,16 @@
## oci-storage-status 1 "August 2016"
## NAME
oci-storage status - Output status information from the storage library's driver
## SYNOPSIS
**oci-storage** **status**
## DESCRIPTION
Queries the storage library's driver for status information.
## EXAMPLE
**oci-storage status**
## SEE ALSO
oci-storage-version(1)


@ -0,0 +1,17 @@
## oci-storage-unmount 1 "August 2016"
## NAME
oci-storage unmount - Unmount a layer or a container's layer
## SYNOPSIS
**oci-storage** **unmount** *layerOrContainerMountpointOrNameOrID*
## DESCRIPTION
Unmounts a layer or a container's layer from the host's filesystem.
## EXAMPLE
**oci-storage unmount my-container**
**oci-storage unmount /var/lib/oci-storage/mounts/my-container**
## SEE ALSO
oci-storage-mount(1)


@ -0,0 +1,16 @@
## oci-storage-version 1 "August 2016"
## NAME
oci-storage version - Output version information about the storage library
## SYNOPSIS
**oci-storage** **version**
## DESCRIPTION
Outputs version information about the storage library and *oci-storage*.
## EXAMPLE
**oci-storage version**
## SEE ALSO
oci-storage-status(1)


@ -0,0 +1,14 @@
## oci-storage-wipe 1 "August 2016"
## NAME
oci-storage wipe - Delete all containers, images, and layers
## SYNOPSIS
**oci-storage** **wipe**
## DESCRIPTION
Deletes all known containers, images, and layers. Depending on your use case,
use with caution or abandon.
## EXAMPLE
**oci-storage wipe**


@ -0,0 +1,131 @@
## oci-storage 1 "August 2016"
## NAME
oci-storage - Manage layer/image/container storage
## SYNOPSIS
**oci-storage** [**subcommand**] [**--help**]
## DESCRIPTION
The *oci-storage* command is a front-end for the *containers/storage* library.
While it can be used to manage storage for filesystem layers, images, and
containers directly, its main use cases are centered around troubleshooting and
querying the state of storage which is being managed by other processes.
Notionally, a complete filesystem layer is composed of a container filesystem
and some bookkeeping information. Other layers, *children* of that layer,
default to sharing its contents, but any changes made to the contents of the
children are not reflected in the *parent*. This arrangement is intended to
save disk space: by storing the *child* layer only as a set of changes relative
to its *parent*, the *parent*'s contents should not need to be duplicated for
each of the *parent*'s *children*. Of course, each *child* can have its own
*children*. The contents of *parent* layers should not be modified.
An *image* is a reference to a particular *layer*, along with some bookkeeping
information. Presumably, the *image* points to a *layer* which has been
modified, possibly in multiple steps, from some general-purpose *parent*, so
that it is suitable for running an intended application. Multiple *images* can
reference a single *layer*, while differing only in the additional bookkeeping
information that they carry. The contents of *images* should be considered
read-only.
A *container* is essentially a *layer* which is a *child* of a *layer* which is
referred to by an *image* (put another way, a *container* is instantiated from
an *image*), along with some bookkeeping information. They do not have
*children* and their *layers* can not be directly referred to by *images*.
This ensures that changes to the contents of a *container*'s layer do not
affect other *images* or *layers*, so they are considered writeable.
All of *layers*, *images*, and *containers* can have metadata which
*oci-storage* manages attached to them. Generally this metadata is not
expected to be large, as it is cached in memory.
*Images* and *containers* can also have arbitrarily-named data items attached
to them. Generally, this data can be larger than metadata, and is not kept in
memory unless it is being retrieved or written.
It is expected that signatures which can be used to verify an *image*'s
contents will be stored as data items for that *image*, along with any template
configuration data which is recommended for use in *containers* which derive
from the *image*. It is also expected that a *container*'s run-time
configuration will be stored as data items.
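The parent/child sharing described above can be sketched as a toy copy-on-write lookup: a child layer stores only its own changes, and resolving a path walks up the parent chain until some layer defines it. This is illustrative only (real layers hold filesystem diffs, not maps):

```go
package main

import "fmt"

// layer is a toy copy-on-write layer: it records only the paths changed
// in this layer, and defers everything else to its parent.
type layer struct {
	parent *layer
	files  map[string]string // path -> contents changed in THIS layer
}

// read walks up the parent chain and returns the nearest definition of
// path, mirroring how a child shares its parent's contents until it
// overrides them.
func (l *layer) read(path string) (string, bool) {
	for cur := l; cur != nil; cur = cur.parent {
		if v, ok := cur.files[path]; ok {
			return v, true
		}
	}
	return "", false
}

func main() {
	base := &layer{files: map[string]string{"/etc/os-release": "fedora"}}
	app := &layer{parent: base, files: map[string]string{"/app/bin": "v1"}}
	// A container's layer is a writable child of the image's top layer.
	ctr := &layer{parent: app, files: map[string]string{"/app/bin": "v1-patched"}}

	v, _ := ctr.read("/app/bin")        // overridden in the container
	w, _ := ctr.read("/etc/os-release") // inherited from the base layer
	fmt.Println(v, w)                   // v1-patched fedora
}
```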
## SUB-COMMANDS
The *oci-storage* command's features are broken down into several subcommands:
**oci-storage add-names(1)** Add layer, image, or container name or names
**oci-storage applydiff(1)** Apply a diff to a layer
**oci-storage changes(1)** Produce a list of changes in a layer
**oci-storage container(1)** Examine a container
**oci-storage containers(1)** List containers
**oci-storage create-container(1)** Create a new container from an image
**oci-storage create-image(1)** Create a new image using layers
**oci-storage create-layer(1)** Create a new layer
**oci-storage delete(1)** Delete a layer or image or container, with no safety checks
**oci-storage delete-container(1)** Delete a container, with safety checks
**oci-storage delete-image(1)** Delete an image, with safety checks
**oci-storage delete-layer(1)** Delete a layer, with safety checks
**oci-storage diff(1)** Generate a layer diff
**oci-storage diffsize(1)** Compute the size of a layer diff
**oci-storage exists(1)** Check if a layer or image or container exists
**oci-storage get-container-data(1)** Get data that is attached to a container
**oci-storage get-image-data(1)** Get data that is attached to an image
**oci-storage image(1)** Examine an image
**oci-storage images(1)** List images
**oci-storage layers(1)** List layers
**oci-storage list-container-data(1)** List data items that are attached to a container
**oci-storage list-image-data(1)** List data items that are attached to an image
**oci-storage metadata(1)** Retrieve layer, image, or container metadata
**oci-storage mount(1)** Mount a layer or container
**oci-storage set-container-data(1)** Set data that is attached to a container
**oci-storage set-image-data(1)** Set data that is attached to an image
**oci-storage set-metadata(1)** Set layer, image, or container metadata
**oci-storage set-names(1)** Set layer, image, or container name or names
**oci-storage shutdown(1)** Shut down graph driver
**oci-storage status(1)** Check on graph driver status
**oci-storage unmount(1)** Unmount a layer or container
**oci-storage version(1)** Return oci-storage version information
**oci-storage wipe(1)** Wipe all layers, images, and containers
## OPTIONS
**--help**
Print the list of available sub-commands. When a sub-command is specified,
provide information about that command.
**--debug, -D**
Increases the amount of debugging information which is printed.
**--graph, -g=/var/lib/oci-storage**
Overrides the root of the storage tree, used for storing layer contents and
information about layers, images, and containers.
**--run, -R=/var/run/oci-storage**
Overrides the root of the runtime state tree, currently used mainly for noting
the location where a given layer is mounted (see **oci-storage mount**) so that
it can be unmounted by path name as an alternative to unmounting by ID or name.
**--storage-driver, -s**
Specifies which storage driver to use. If not set, but *$STORAGE_DRIVER* is
set in the environment, its value is used. If the storage tree has previously
been initialized, neither needs to be provided. If the tree has not previously
been initialized and neither is set, a hard-coded default is selected.
**--storage-opt=[]**
Set options which will be passed to the storage driver. If not set, but
*$STORAGE_OPTS* is set in the environment, its value is treated as a
comma-separated list and used instead. If the storage tree has previously been
initialized, these need not be provided.
## EXAMPLES
**oci-storage layers -t**
## BUGS
This is still a work in progress, so some functionality may not yet be
implemented, and some will be removed if it is found to be unnecessary. That
said, if anything isn't working correctly, please report it to [the project's
issue tracker](https://github.com/containers/storage/issues).


@@ -279,7 +279,7 @@ func (a *Driver) Remove(id string) error {
}
// Atomically remove each directory in turn by first moving it out of the
// way (so that docker doesn't find it anymore) before doing removal of
// way (so that container runtimes don't find it anymore) before doing removal of
// the whole tree.
tmpMntPath := path.Join(a.mntPath(), fmt.Sprintf("%s-removing", id))
if err := os.Rename(mountpoint, tmpMntPath); err != nil && !os.IsNotExist(err) {
@@ -559,14 +559,14 @@ func (a *Driver) aufsMount(ro []string, rw, target, mountLabel string) (err erro
// version of aufs.
func useDirperm() bool {
enableDirpermLock.Do(func() {
base, err := ioutil.TempDir("", "docker-aufs-base")
base, err := ioutil.TempDir("", "storage-aufs-base")
if err != nil {
logrus.Errorf("error checking dirperm1: %v", err)
return
}
defer os.RemoveAll(base)
union, err := ioutil.TempDir("", "docker-aufs-union")
union, err := ioutil.TempDir("", "storage-aufs-union")
if err != nil {
logrus.Errorf("error checking dirperm1: %v", err)
return


@@ -0,0 +1,801 @@
// +build linux
package aufs
import (
"crypto/sha256"
"encoding/hex"
"fmt"
"io/ioutil"
"os"
"path"
"sync"
"testing"
"github.com/containers/storage/drivers"
"github.com/containers/storage/pkg/archive"
"github.com/containers/storage/pkg/reexec"
"github.com/containers/storage/pkg/stringid"
)
var (
tmpOuter = path.Join(os.TempDir(), "aufs-tests")
tmp = path.Join(tmpOuter, "aufs")
)
func init() {
reexec.Init()
}
func testInit(dir string, t testing.TB) graphdriver.Driver {
d, err := Init(dir, nil, nil, nil)
if err != nil {
if err == graphdriver.ErrNotSupported {
t.Skip(err)
} else {
t.Fatal(err)
}
}
return d
}
func newDriver(t testing.TB) *Driver {
if err := os.MkdirAll(tmp, 0755); err != nil {
t.Fatal(err)
}
d := testInit(tmp, t)
return d.(*Driver)
}
func TestNewDriver(t *testing.T) {
if err := os.MkdirAll(tmp, 0755); err != nil {
t.Fatal(err)
}
d := testInit(tmp, t)
defer os.RemoveAll(tmp)
if d == nil {
t.Fatalf("Driver should not be nil")
}
}
func TestAufsString(t *testing.T) {
d := newDriver(t)
defer os.RemoveAll(tmp)
if d.String() != "aufs" {
t.Fatalf("Expected aufs got %s", d.String())
}
}
func TestCreateDirStructure(t *testing.T) {
newDriver(t)
defer os.RemoveAll(tmp)
paths := []string{
"mnt",
"layers",
"diff",
}
for _, p := range paths {
if _, err := os.Stat(path.Join(tmp, p)); err != nil {
t.Fatal(err)
}
}
}
// We should be able to create two drivers with the same dir structure
func TestNewDriverFromExistingDir(t *testing.T) {
if err := os.MkdirAll(tmp, 0755); err != nil {
t.Fatal(err)
}
testInit(tmp, t)
testInit(tmp, t)
os.RemoveAll(tmp)
}
func TestCreateNewDir(t *testing.T) {
d := newDriver(t)
defer os.RemoveAll(tmp)
if err := d.Create("1", "", "", nil); err != nil {
t.Fatal(err)
}
}
func TestCreateNewDirStructure(t *testing.T) {
d := newDriver(t)
defer os.RemoveAll(tmp)
if err := d.Create("1", "", "", nil); err != nil {
t.Fatal(err)
}
paths := []string{
"mnt",
"diff",
"layers",
}
for _, p := range paths {
if _, err := os.Stat(path.Join(tmp, p, "1")); err != nil {
t.Fatal(err)
}
}
}
func TestRemoveImage(t *testing.T) {
d := newDriver(t)
defer os.RemoveAll(tmp)
if err := d.Create("1", "", "", nil); err != nil {
t.Fatal(err)
}
if err := d.Remove("1"); err != nil {
t.Fatal(err)
}
paths := []string{
"mnt",
"diff",
"layers",
}
for _, p := range paths {
if _, err := os.Stat(path.Join(tmp, p, "1")); err == nil {
t.Fatalf("Error should not be nil because dirs with id 1 should be deleted: %s", p)
}
}
}
func TestGetWithoutParent(t *testing.T) {
d := newDriver(t)
defer os.RemoveAll(tmp)
if err := d.Create("1", "", "", nil); err != nil {
t.Fatal(err)
}
diffPath, err := d.Get("1", "")
if err != nil {
t.Fatal(err)
}
expected := path.Join(tmp, "diff", "1")
if diffPath != expected {
t.Fatalf("Expected path %s got %s", expected, diffPath)
}
}
func TestCleanupWithNoDirs(t *testing.T) {
d := newDriver(t)
defer os.RemoveAll(tmp)
if err := d.Cleanup(); err != nil {
t.Fatal(err)
}
}
func TestCleanupWithDir(t *testing.T) {
d := newDriver(t)
defer os.RemoveAll(tmp)
if err := d.Create("1", "", "", nil); err != nil {
t.Fatal(err)
}
if err := d.Cleanup(); err != nil {
t.Fatal(err)
}
}
func TestMountedFalseResponse(t *testing.T) {
d := newDriver(t)
defer os.RemoveAll(tmp)
if err := d.Create("1", "", "", nil); err != nil {
t.Fatal(err)
}
response, err := d.mounted(d.getDiffPath("1"))
if err != nil {
t.Fatal(err)
}
if response != false {
t.Fatalf("Response if dir id 1 is mounted should be false")
}
}
func TestMountedTrueResponse(t *testing.T) {
d := newDriver(t)
defer os.RemoveAll(tmp)
defer d.Cleanup()
if err := d.Create("1", "", "", nil); err != nil {
t.Fatal(err)
}
if err := d.Create("2", "1", "", nil); err != nil {
t.Fatal(err)
}
_, err := d.Get("2", "")
if err != nil {
t.Fatal(err)
}
response, err := d.mounted(d.pathCache["2"])
if err != nil {
t.Fatal(err)
}
if response != true {
t.Fatalf("Response if dir id 2 is mounted should be true")
}
}
func TestMountWithParent(t *testing.T) {
d := newDriver(t)
defer os.RemoveAll(tmp)
if err := d.Create("1", "", "", nil); err != nil {
t.Fatal(err)
}
if err := d.Create("2", "1", "", nil); err != nil {
t.Fatal(err)
}
defer func() {
if err := d.Cleanup(); err != nil {
t.Fatal(err)
}
}()
mntPath, err := d.Get("2", "")
if err != nil {
t.Fatal(err)
}
if mntPath == "" {
t.Fatal("mntPath should not be empty string")
}
expected := path.Join(tmp, "mnt", "2")
if mntPath != expected {
t.Fatalf("Expected %s got %s", expected, mntPath)
}
}
func TestRemoveMountedDir(t *testing.T) {
d := newDriver(t)
defer os.RemoveAll(tmp)
if err := d.Create("1", "", "", nil); err != nil {
t.Fatal(err)
}
if err := d.Create("2", "1", "", nil); err != nil {
t.Fatal(err)
}
defer func() {
if err := d.Cleanup(); err != nil {
t.Fatal(err)
}
}()
mntPath, err := d.Get("2", "")
if err != nil {
t.Fatal(err)
}
if mntPath == "" {
t.Fatal("mntPath should not be empty string")
}
mounted, err := d.mounted(d.pathCache["2"])
if err != nil {
t.Fatal(err)
}
if !mounted {
t.Fatalf("Dir id 2 should be mounted")
}
if err := d.Remove("2"); err != nil {
t.Fatal(err)
}
}
func TestCreateWithInvalidParent(t *testing.T) {
d := newDriver(t)
defer os.RemoveAll(tmp)
if err := d.Create("1", "storage", "", nil); err == nil {
t.Fatalf("Error should not be nil when parent does not exist")
}
}
func TestGetDiff(t *testing.T) {
d := newDriver(t)
defer os.RemoveAll(tmp)
if err := d.CreateReadWrite("1", "", "", nil); err != nil {
t.Fatal(err)
}
diffPath, err := d.Get("1", "")
if err != nil {
t.Fatal(err)
}
// Add a file to the diff path with a fixed size
size := int64(1024)
f, err := os.Create(path.Join(diffPath, "test_file"))
if err != nil {
t.Fatal(err)
}
if err := f.Truncate(size); err != nil {
t.Fatal(err)
}
f.Close()
a, err := d.Diff("1", "")
if err != nil {
t.Fatal(err)
}
if a == nil {
t.Fatalf("Archive should not be nil")
}
}
func TestChanges(t *testing.T) {
d := newDriver(t)
defer os.RemoveAll(tmp)
if err := d.Create("1", "", "", nil); err != nil {
t.Fatal(err)
}
if err := d.CreateReadWrite("2", "1", "", nil); err != nil {
t.Fatal(err)
}
defer func() {
if err := d.Cleanup(); err != nil {
t.Fatal(err)
}
}()
mntPoint, err := d.Get("2", "")
if err != nil {
t.Fatal(err)
}
// Create a file to save in the mountpoint
f, err := os.Create(path.Join(mntPoint, "test.txt"))
if err != nil {
t.Fatal(err)
}
if _, err := f.WriteString("testline"); err != nil {
t.Fatal(err)
}
if err := f.Close(); err != nil {
t.Fatal(err)
}
changes, err := d.Changes("2", "")
if err != nil {
t.Fatal(err)
}
if len(changes) != 1 {
t.Fatalf("Dir 2 should have one change from parent got %d", len(changes))
}
change := changes[0]
expectedPath := "/test.txt"
if change.Path != expectedPath {
t.Fatalf("Expected path %s got %s", expectedPath, change.Path)
}
if change.Kind != archive.ChangeAdd {
t.Fatalf("Change kind should be ChangeAdd got %s", change.Kind)
}
if err := d.CreateReadWrite("3", "2", "", nil); err != nil {
t.Fatal(err)
}
mntPoint, err = d.Get("3", "")
if err != nil {
t.Fatal(err)
}
// Create a file to save in the mountpoint
f, err = os.Create(path.Join(mntPoint, "test2.txt"))
if err != nil {
t.Fatal(err)
}
if _, err := f.WriteString("testline"); err != nil {
t.Fatal(err)
}
if err := f.Close(); err != nil {
t.Fatal(err)
}
changes, err = d.Changes("3", "")
if err != nil {
t.Fatal(err)
}
if len(changes) != 1 {
t.Fatalf("Dir 3 should have one change from parent got %d", len(changes))
}
change = changes[0]
expectedPath = "/test2.txt"
if change.Path != expectedPath {
t.Fatalf("Expected path %s got %s", expectedPath, change.Path)
}
if change.Kind != archive.ChangeAdd {
t.Fatalf("Change kind should be ChangeAdd got %s", change.Kind)
}
}
func TestDiffSize(t *testing.T) {
d := newDriver(t)
defer os.RemoveAll(tmp)
if err := d.CreateReadWrite("1", "", "", nil); err != nil {
t.Fatal(err)
}
diffPath, err := d.Get("1", "")
if err != nil {
t.Fatal(err)
}
// Add a file to the diff path with a fixed size
size := int64(1024)
f, err := os.Create(path.Join(diffPath, "test_file"))
if err != nil {
t.Fatal(err)
}
if err := f.Truncate(size); err != nil {
t.Fatal(err)
}
s, err := f.Stat()
if err != nil {
t.Fatal(err)
}
size = s.Size()
if err := f.Close(); err != nil {
t.Fatal(err)
}
diffSize, err := d.DiffSize("1", "")
if err != nil {
t.Fatal(err)
}
if diffSize != size {
t.Fatalf("Expected size to be %d got %d", size, diffSize)
}
}
func TestChildDiffSize(t *testing.T) {
d := newDriver(t)
defer os.RemoveAll(tmp)
defer d.Cleanup()
if err := d.CreateReadWrite("1", "", "", nil); err != nil {
t.Fatal(err)
}
diffPath, err := d.Get("1", "")
if err != nil {
t.Fatal(err)
}
// Add a file to the diff path with a fixed size
size := int64(1024)
f, err := os.Create(path.Join(diffPath, "test_file"))
if err != nil {
t.Fatal(err)
}
if err := f.Truncate(size); err != nil {
t.Fatal(err)
}
s, err := f.Stat()
if err != nil {
t.Fatal(err)
}
size = s.Size()
if err := f.Close(); err != nil {
t.Fatal(err)
}
diffSize, err := d.DiffSize("1", "")
if err != nil {
t.Fatal(err)
}
if diffSize != size {
t.Fatalf("Expected size to be %d got %d", size, diffSize)
}
if err := d.Create("2", "1", "", nil); err != nil {
t.Fatal(err)
}
diffSize, err = d.DiffSize("2", "")
if err != nil {
t.Fatal(err)
}
// The diff size for the child should be zero
if diffSize != 0 {
t.Fatalf("Expected size to be %d got %d", 0, diffSize)
}
}
func TestExists(t *testing.T) {
d := newDriver(t)
defer os.RemoveAll(tmp)
defer d.Cleanup()
if err := d.Create("1", "", "", nil); err != nil {
t.Fatal(err)
}
if d.Exists("none") {
t.Fatal("id name should not exist in the driver")
}
if !d.Exists("1") {
t.Fatal("id 1 should exist in the driver")
}
}
func TestStatus(t *testing.T) {
d := newDriver(t)
defer os.RemoveAll(tmp)
defer d.Cleanup()
if err := d.Create("1", "", "", nil); err != nil {
t.Fatal(err)
}
status := d.Status()
if status == nil || len(status) == 0 {
t.Fatal("Status should not be nil or empty")
}
rootDir := status[0]
dirs := status[2]
if rootDir[0] != "Root Dir" {
t.Fatalf("Expected Root Dir got %s", rootDir[0])
}
if rootDir[1] != d.rootPath() {
t.Fatalf("Expected %s got %s", d.rootPath(), rootDir[1])
}
if dirs[0] != "Dirs" {
t.Fatalf("Expected Dirs got %s", dirs[0])
}
if dirs[1] != "1" {
t.Fatalf("Expected 1 got %s", dirs[1])
}
}
func TestApplyDiff(t *testing.T) {
d := newDriver(t)
defer os.RemoveAll(tmp)
defer d.Cleanup()
if err := d.CreateReadWrite("1", "", "", nil); err != nil {
t.Fatal(err)
}
diffPath, err := d.Get("1", "")
if err != nil {
t.Fatal(err)
}
// Add a file to the diff path with a fixed size
size := int64(1024)
f, err := os.Create(path.Join(diffPath, "test_file"))
if err != nil {
t.Fatal(err)
}
if err := f.Truncate(size); err != nil {
t.Fatal(err)
}
f.Close()
diff, err := d.Diff("1", "")
if err != nil {
t.Fatal(err)
}
if err := d.Create("2", "", "", nil); err != nil {
t.Fatal(err)
}
if err := d.Create("3", "2", "", nil); err != nil {
t.Fatal(err)
}
if err := d.applyDiff("3", diff); err != nil {
t.Fatal(err)
}
// Ensure that the file is in the mount point for id 3
mountPoint, err := d.Get("3", "")
if err != nil {
t.Fatal(err)
}
if _, err := os.Stat(path.Join(mountPoint, "test_file")); err != nil {
t.Fatal(err)
}
}
func hash(c string) string {
h := sha256.New()
fmt.Fprint(h, c)
return hex.EncodeToString(h.Sum(nil))
}
func testMountMoreThan42Layers(t *testing.T, mountPath string) {
if err := os.MkdirAll(mountPath, 0755); err != nil {
t.Fatal(err)
}
defer os.RemoveAll(mountPath)
d := testInit(mountPath, t).(*Driver)
defer d.Cleanup()
var last string
var expected int
for i := 1; i < 127; i++ {
expected++
var (
parent = fmt.Sprintf("%d", i-1)
current = fmt.Sprintf("%d", i)
)
if parent == "0" {
parent = ""
} else {
parent = hash(parent)
}
current = hash(current)
if err := d.CreateReadWrite(current, parent, "", nil); err != nil {
t.Logf("Current layer %d", i)
t.Error(err)
}
point, err := d.Get(current, "")
if err != nil {
t.Logf("Current layer %d", i)
t.Error(err)
}
f, err := os.Create(path.Join(point, current))
if err != nil {
t.Logf("Current layer %d", i)
t.Error(err)
}
f.Close()
if i%10 == 0 {
if err := os.Remove(path.Join(point, parent)); err != nil {
t.Logf("Current layer %d", i)
t.Error(err)
}
expected--
}
last = current
}
// Perform the actual mount for the top most image
point, err := d.Get(last, "")
if err != nil {
t.Error(err)
}
files, err := ioutil.ReadDir(point)
if err != nil {
t.Error(err)
}
if len(files) != expected {
t.Errorf("Expected %d got %d", expected, len(files))
}
}
func TestMountMoreThan42Layers(t *testing.T) {
os.RemoveAll(tmpOuter)
testMountMoreThan42Layers(t, tmp)
}
func TestMountMoreThan42LayersMatchingPathLength(t *testing.T) {
defer os.RemoveAll(tmpOuter)
zeroes := "0"
for {
// This finds a mount path such that, when combined into aufs mount options,
// the 4096-byte boundary falls between two paths or inside the permission
// section. For '/tmp' it will use '/tmp/aufs-tests/00000000/aufs'
mountPath := path.Join(tmpOuter, zeroes, "aufs")
pathLength := 77 + len(mountPath)
if mod := 4095 % pathLength; mod == 0 || mod > pathLength-2 {
t.Logf("Using path: %s", mountPath)
testMountMoreThan42Layers(t, mountPath)
return
}
zeroes += "0"
}
}
func BenchmarkConcurrentAccess(b *testing.B) {
b.StopTimer()
b.ResetTimer()
d := newDriver(b)
defer os.RemoveAll(tmp)
defer d.Cleanup()
numConcurrent := 256
// create a bunch of ids
var ids []string
for i := 0; i < numConcurrent; i++ {
ids = append(ids, stringid.GenerateNonCryptoID())
}
if err := d.Create(ids[0], "", "", nil); err != nil {
b.Fatal(err)
}
if err := d.Create(ids[1], ids[0], "", nil); err != nil {
b.Fatal(err)
}
parent := ids[1]
ids = ids[2:]
chErr := make(chan error, numConcurrent)
var outerGroup sync.WaitGroup
outerGroup.Add(len(ids))
b.StartTimer()
// here's the actual bench
for _, id := range ids {
go func(id string) {
defer outerGroup.Done()
if err := d.Create(id, parent, "", nil); err != nil {
b.Logf("Create %s failed", id)
chErr <- err
return
}
var innerGroup sync.WaitGroup
for i := 0; i < b.N; i++ {
innerGroup.Add(1)
go func() {
d.Get(id, "")
d.Put(id)
innerGroup.Done()
}()
}
innerGroup.Wait()
d.Remove(id)
}(id)
}
outerGroup.Wait()
b.StopTimer()
close(chErr)
for err := range chErr {
if err != nil {
b.Log(err)
b.Fail()
}
}
}


@@ -0,0 +1,63 @@
// +build linux
package btrfs
import (
"os"
"path"
"testing"
"github.com/containers/storage/drivers/graphtest"
)
// This avoids creating a new driver for each test if all tests are run
// Make sure to put new tests between TestBtrfsSetup and TestBtrfsTeardown
func TestBtrfsSetup(t *testing.T) {
graphtest.GetDriver(t, "btrfs")
}
func TestBtrfsCreateEmpty(t *testing.T) {
graphtest.DriverTestCreateEmpty(t, "btrfs")
}
func TestBtrfsCreateBase(t *testing.T) {
graphtest.DriverTestCreateBase(t, "btrfs")
}
func TestBtrfsCreateSnap(t *testing.T) {
graphtest.DriverTestCreateSnap(t, "btrfs")
}
func TestBtrfsSubvolDelete(t *testing.T) {
d := graphtest.GetDriver(t, "btrfs")
if err := d.CreateReadWrite("test", "", "", nil); err != nil {
t.Fatal(err)
}
defer graphtest.PutDriver(t)
dir, err := d.Get("test", "")
if err != nil {
t.Fatal(err)
}
defer d.Put("test")
if err := subvolCreate(dir, "subvoltest"); err != nil {
t.Fatal(err)
}
if _, err := os.Stat(path.Join(dir, "subvoltest")); err != nil {
t.Fatal(err)
}
if err := d.Remove("test"); err != nil {
t.Fatal(err)
}
if _, err := os.Stat(path.Join(dir, "subvoltest")); !os.IsNotExist(err) {
t.Fatalf("expected not exist error on nested subvol, got: %v", err)
}
}
func TestBtrfsTeardown(t *testing.T) {
graphtest.PutDriver(t)
}


@@ -0,0 +1,13 @@
// +build linux,!btrfs_noversion
package btrfs
import (
"testing"
)
func TestLibVersion(t *testing.T) {
if btrfsLibVersion() <= 0 {
t.Errorf("expected output from btrfs lib version > 0")
}
}


@@ -1135,7 +1135,7 @@ func (devices *DeviceSet) growFS(info *devInfo) error {
defer devices.deactivateDevice(info)
fsMountPoint := "/run/docker/mnt"
fsMountPoint := "/run/containers/mnt"
if _, err := os.Stat(fsMountPoint); os.IsNotExist(err) {
if err := os.MkdirAll(fsMountPoint, 0700); err != nil {
return err
@@ -1693,7 +1693,7 @@ func (devices *DeviceSet) initDevmapper(doInit bool) error {
return err
}
// Set the device prefix from the device id and inode of the docker root dir
// Set the device prefix from the device id and inode of the container root dir
st, err := os.Stat(devices.root)
if err != nil {
@@ -1702,11 +1702,11 @@ func (devices *DeviceSet) initDevmapper(doInit bool) error {
sysSt := st.Sys().(*syscall.Stat_t)
// "reg-" stands for "regular file".
// In the future we might use "dev-" for "device file", etc.
// docker-maj,min[-inode] stands for:
// - Managed by docker
// container-maj,min[-inode] stands for:
// - Managed by container storage
// - The target of this device is at major <maj> and minor <min>
// - If <inode> is defined, use that file inside the device as a loopback image. Otherwise use the device itself.
devices.devicePrefix = fmt.Sprintf("docker-%d:%d-%d", major(sysSt.Dev), minor(sysSt.Dev), sysSt.Ino)
devices.devicePrefix = fmt.Sprintf("container-%d:%d-%d", major(sysSt.Dev), minor(sysSt.Dev), sysSt.Ino)
logrus.Debugf("devmapper: Generated prefix: %s", devices.devicePrefix)
// Check for the existence of the thin-pool device
@@ -1826,7 +1826,7 @@ func (devices *DeviceSet) initDevmapper(doInit bool) error {
if devices.thinPoolDevice == "" {
if devices.metadataLoopFile != "" || devices.dataLoopFile != "" {
logrus.Warn("devmapper: Usage of loopback devices is strongly discouraged for production use. Please use `--storage-opt dm.thinpooldev` or use `man docker` to refer to dm.thinpooldev section.")
logrus.Warn("devmapper: Usage of loopback devices is strongly discouraged for production use. Please use `--storage-opt dm.thinpooldev`.")
}
}


@@ -0,0 +1,110 @@
// +build linux
package devmapper
import (
"fmt"
"testing"
"time"
"github.com/containers/storage/drivers"
"github.com/containers/storage/drivers/graphtest"
)
func init() {
// Reduce the size of the base fs and loopback for the tests
defaultDataLoopbackSize = 300 * 1024 * 1024
defaultMetaDataLoopbackSize = 200 * 1024 * 1024
defaultBaseFsSize = 300 * 1024 * 1024
defaultUdevSyncOverride = true
if err := graphtest.InitLoopbacks(); err != nil {
panic(err)
}
}
// This avoids creating a new driver for each test if all tests are run
// Make sure to put new tests between TestDevmapperSetup and TestDevmapperTeardown
func TestDevmapperSetup(t *testing.T) {
graphtest.GetDriver(t, "devicemapper")
}
func TestDevmapperCreateEmpty(t *testing.T) {
graphtest.DriverTestCreateEmpty(t, "devicemapper")
}
func TestDevmapperCreateBase(t *testing.T) {
graphtest.DriverTestCreateBase(t, "devicemapper")
}
func TestDevmapperCreateSnap(t *testing.T) {
graphtest.DriverTestCreateSnap(t, "devicemapper")
}
func TestDevmapperTeardown(t *testing.T) {
graphtest.PutDriver(t)
}
func TestDevmapperReduceLoopBackSize(t *testing.T) {
tenMB := int64(10 * 1024 * 1024)
testChangeLoopBackSize(t, -tenMB, defaultDataLoopbackSize, defaultMetaDataLoopbackSize)
}
func TestDevmapperIncreaseLoopBackSize(t *testing.T) {
tenMB := int64(10 * 1024 * 1024)
testChangeLoopBackSize(t, tenMB, defaultDataLoopbackSize+tenMB, defaultMetaDataLoopbackSize+tenMB)
}
func testChangeLoopBackSize(t *testing.T, delta, expectDataSize, expectMetaDataSize int64) {
driver := graphtest.GetDriver(t, "devicemapper").(*graphtest.Driver).Driver.(*graphdriver.NaiveDiffDriver).ProtoDriver.(*Driver)
defer graphtest.PutDriver(t)
// make sure the data and metadata loopback sizes are the defaults
if s := driver.DeviceSet.Status(); s.Data.Total != uint64(defaultDataLoopbackSize) || s.Metadata.Total != uint64(defaultMetaDataLoopbackSize) {
t.Fatalf("data or metadata loopback size is incorrect")
}
if err := driver.Cleanup(); err != nil {
t.Fatal(err)
}
// Reload
d, err := Init(driver.home, []string{
fmt.Sprintf("dm.loopdatasize=%d", defaultDataLoopbackSize+delta),
fmt.Sprintf("dm.loopmetadatasize=%d", defaultMetaDataLoopbackSize+delta),
}, nil, nil)
if err != nil {
t.Fatalf("error creating devicemapper driver: %v", err)
}
driver = d.(*graphdriver.NaiveDiffDriver).ProtoDriver.(*Driver)
if s := driver.DeviceSet.Status(); s.Data.Total != uint64(expectDataSize) || s.Metadata.Total != uint64(expectMetaDataSize) {
t.Fatalf("data or metadata loopback size is incorrect")
}
if err := driver.Cleanup(); err != nil {
t.Fatal(err)
}
}
// Make sure devices.Lock() has been released upon return from the cleanupDeletedDevices() function
func TestDevmapperLockReleasedDeviceDeletion(t *testing.T) {
driver := graphtest.GetDriver(t, "devicemapper").(*graphtest.Driver).Driver.(*graphdriver.NaiveDiffDriver).ProtoDriver.(*Driver)
defer graphtest.PutDriver(t)
// Call cleanupDeletedDevices() and after the call take and release
// DeviceSet Lock. If lock has not been released, this will hang.
driver.DeviceSet.cleanupDeletedDevices()
doneChan := make(chan bool)
go func() {
driver.DeviceSet.Lock()
defer driver.DeviceSet.Unlock()
doneChan <- true
}()
select {
case <-time.After(time.Second * 5):
// Timer expired. That means lock was not released upon
// function return and we are deadlocked. Release lock
// here so that cleanup could succeed and fail the test.
driver.DeviceSet.Unlock()
t.Fatalf("Could not acquire devices lock after call to cleanupDeletedDevices()")
case <-doneChan:
}
}


@@ -0,0 +1,264 @@
// +build linux freebsd
package graphtest
import (
"bytes"
"io"
"io/ioutil"
"path/filepath"
"testing"
"github.com/containers/storage/pkg/stringid"
)
// DriverBenchExists benchmarks calls to exist
func DriverBenchExists(b *testing.B, drivername string, driveroptions ...string) {
driver := GetDriver(b, drivername, driveroptions...)
defer PutDriver(b)
base := stringid.GenerateRandomID()
if err := driver.Create(base, "", "", nil); err != nil {
b.Fatal(err)
}
b.ResetTimer()
for i := 0; i < b.N; i++ {
if !driver.Exists(base) {
b.Fatal("Newly created image doesn't exist")
}
}
}
// DriverBenchGetEmpty benchmarks calls to get on an empty layer
func DriverBenchGetEmpty(b *testing.B, drivername string, driveroptions ...string) {
driver := GetDriver(b, drivername, driveroptions...)
defer PutDriver(b)
base := stringid.GenerateRandomID()
if err := driver.Create(base, "", "", nil); err != nil {
b.Fatal(err)
}
b.ResetTimer()
for i := 0; i < b.N; i++ {
_, err := driver.Get(base, "")
b.StopTimer()
if err != nil {
b.Fatalf("Error getting mount: %s", err)
}
if err := driver.Put(base); err != nil {
b.Fatalf("Error putting mount: %s", err)
}
b.StartTimer()
}
}
// DriverBenchDiffBase benchmarks calls to diff on a root layer
func DriverBenchDiffBase(b *testing.B, drivername string, driveroptions ...string) {
driver := GetDriver(b, drivername, driveroptions...)
defer PutDriver(b)
base := stringid.GenerateRandomID()
if err := driver.Create(base, "", "", nil); err != nil {
b.Fatal(err)
}
if err := addFiles(driver, base, 3); err != nil {
b.Fatal(err)
}
b.ResetTimer()
for i := 0; i < b.N; i++ {
arch, err := driver.Diff(base, "")
if err != nil {
b.Fatal(err)
}
_, err = io.Copy(ioutil.Discard, arch)
if err != nil {
b.Fatalf("Error copying archive: %s", err)
}
arch.Close()
}
}
// DriverBenchDiffN benchmarks calls to diff on two layers with
// a provided number of files on the lower and upper layers.
func DriverBenchDiffN(b *testing.B, bottom, top int, drivername string, driveroptions ...string) {
driver := GetDriver(b, drivername, driveroptions...)
defer PutDriver(b)
base := stringid.GenerateRandomID()
upper := stringid.GenerateRandomID()
if err := driver.Create(base, "", "", nil); err != nil {
b.Fatal(err)
}
if err := addManyFiles(driver, base, bottom, 3); err != nil {
b.Fatal(err)
}
if err := driver.Create(upper, base, "", nil); err != nil {
b.Fatal(err)
}
if err := addManyFiles(driver, upper, top, 6); err != nil {
b.Fatal(err)
}
b.ResetTimer()
for i := 0; i < b.N; i++ {
arch, err := driver.Diff(upper, "")
if err != nil {
b.Fatal(err)
}
_, err = io.Copy(ioutil.Discard, arch)
if err != nil {
b.Fatalf("Error copying archive: %s", err)
}
arch.Close()
}
}
// DriverBenchDiffApplyN benchmarks calls to diff and apply together
func DriverBenchDiffApplyN(b *testing.B, fileCount int, drivername string, driveroptions ...string) {
driver := GetDriver(b, drivername, driveroptions...)
defer PutDriver(b)
base := stringid.GenerateRandomID()
upper := stringid.GenerateRandomID()
if err := driver.Create(base, "", "", nil); err != nil {
b.Fatal(err)
}
if err := addManyFiles(driver, base, fileCount, 3); err != nil {
b.Fatal(err)
}
if err := driver.Create(upper, base, "", nil); err != nil {
b.Fatal(err)
}
if err := addManyFiles(driver, upper, fileCount, 6); err != nil {
b.Fatal(err)
}
diffSize, err := driver.DiffSize(upper, "")
if err != nil {
b.Fatal(err)
}
b.ResetTimer()
b.StopTimer()
for i := 0; i < b.N; i++ {
diff := stringid.GenerateRandomID()
if err := driver.Create(diff, base, "", nil); err != nil {
b.Fatal(err)
}
if err := checkManyFiles(driver, diff, fileCount, 3); err != nil {
b.Fatal(err)
}
b.StartTimer()
arch, err := driver.Diff(upper, "")
if err != nil {
b.Fatal(err)
}
applyDiffSize, err := driver.ApplyDiff(diff, "", arch)
if err != nil {
b.Fatal(err)
}
b.StopTimer()
arch.Close()
if applyDiffSize != diffSize {
// TODO: enforce this
//b.Fatalf("Apply diff size different, got %d, expected %s", applyDiffSize, diffSize)
}
if err := checkManyFiles(driver, diff, fileCount, 6); err != nil {
b.Fatal(err)
}
}
}
// DriverBenchDeepLayerDiff benchmarks calls to diff on top of a given number of layers.
func DriverBenchDeepLayerDiff(b *testing.B, layerCount int, drivername string, driveroptions ...string) {
driver := GetDriver(b, drivername, driveroptions...)
defer PutDriver(b)
base := stringid.GenerateRandomID()
if err := driver.Create(base, "", "", nil); err != nil {
b.Fatal(err)
}
if err := addFiles(driver, base, 50); err != nil {
b.Fatal(err)
}
topLayer, err := addManyLayers(driver, base, layerCount)
if err != nil {
b.Fatal(err)
}
b.ResetTimer()
for i := 0; i < b.N; i++ {
arch, err := driver.Diff(topLayer, "")
if err != nil {
b.Fatal(err)
}
_, err = io.Copy(ioutil.Discard, arch)
if err != nil {
b.Fatalf("Error copying archive: %s", err)
}
arch.Close()
}
}
// DriverBenchDeepLayerRead benchmarks calls to read a file under a given number of layers.
func DriverBenchDeepLayerRead(b *testing.B, layerCount int, drivername string, driveroptions ...string) {
driver := GetDriver(b, drivername, driveroptions...)
defer PutDriver(b)
base := stringid.GenerateRandomID()
if err := driver.Create(base, "", "", nil); err != nil {
b.Fatal(err)
}
content := []byte("test content")
if err := addFile(driver, base, "testfile.txt", content); err != nil {
b.Fatal(err)
}
topLayer, err := addManyLayers(driver, base, layerCount)
if err != nil {
b.Fatal(err)
}
root, err := driver.Get(topLayer, "")
if err != nil {
b.Fatal(err)
}
defer driver.Put(topLayer)
b.ResetTimer()
for i := 0; i < b.N; i++ {
// Read content
c, err := ioutil.ReadFile(filepath.Join(root, "testfile.txt"))
if err != nil {
b.Fatal(err)
}
b.StopTimer()
if bytes.Compare(c, content) != 0 {
b.Fatalf("Wrong content in file %v, expected %v", c, content)
}
b.StartTimer()
}
}


@@ -0,0 +1,350 @@
// +build linux freebsd
package graphtest
import (
"bytes"
"io/ioutil"
"math/rand"
"os"
"path"
"reflect"
"syscall"
"testing"
"unsafe"
"github.com/containers/storage/drivers"
"github.com/containers/storage/pkg/stringid"
"github.com/docker/go-units"
)
var (
drv *Driver
)
// Driver conforms to the graphdriver.Driver interface and
// contains information such as the root and a reference count of the number of clients using it.
// This helps in testing drivers added into the framework.
type Driver struct {
graphdriver.Driver
root string
refCount int
}
func newDriver(t testing.TB, name string, options []string) *Driver {
root, err := ioutil.TempDir("", "storage-graphtest-")
if err != nil {
t.Fatal(err)
}
if err := os.MkdirAll(root, 0755); err != nil {
t.Fatal(err)
}
d, err := graphdriver.GetDriver(name, root, options, nil, nil)
if err != nil {
t.Logf("graphdriver: %v\n", err)
if err == graphdriver.ErrNotSupported || err == graphdriver.ErrPrerequisites || err == graphdriver.ErrIncompatibleFS {
t.Skipf("Driver %s not supported", name)
}
t.Fatal(err)
}
return &Driver{d, root, 1}
}
func cleanup(t testing.TB, d *Driver) {
if err := d.Cleanup(); err != nil {
t.Fatal(err)
}
os.RemoveAll(d.root)
}
// GetDriver creates a new driver with the given name, or returns an existing driver with that name, updating the reference count.
func GetDriver(t testing.TB, name string, options ...string) graphdriver.Driver {
if drv == nil {
drv = newDriver(t, name, options)
} else {
drv.refCount++
}
return drv
}
// PutDriver decrements the driver's reference count and removes the driver once it is no longer used.
func PutDriver(t testing.TB) {
if drv == nil {
t.Skip("No driver to put!")
}
drv.refCount--
if drv.refCount == 0 {
cleanup(t, drv)
drv = nil
}
}
// DriverTestCreateEmpty creates a new image and verifies that it is empty and has the right metadata.
func DriverTestCreateEmpty(t testing.TB, drivername string, driverOptions ...string) {
driver := GetDriver(t, drivername, driverOptions...)
defer PutDriver(t)
if err := driver.Create("empty", "", "", nil); err != nil {
t.Fatal(err)
}
defer func() {
if err := driver.Remove("empty"); err != nil {
t.Fatal(err)
}
}()
if !driver.Exists("empty") {
t.Fatal("Newly created image doesn't exist")
}
dir, err := driver.Get("empty", "")
if err != nil {
t.Fatal(err)
}
verifyFile(t, dir, 0755|os.ModeDir, 0, 0)
// Verify that the directory is empty
fis, err := readDir(dir)
if err != nil {
t.Fatal(err)
}
if len(fis) != 0 {
t.Fatal("New directory not empty")
}
driver.Put("empty")
}
// DriverTestCreateBase creates a base layer and verifies it.
func DriverTestCreateBase(t testing.TB, drivername string, driverOptions ...string) {
driver := GetDriver(t, drivername, driverOptions...)
defer PutDriver(t)
createBase(t, driver, "Base")
defer func() {
if err := driver.Remove("Base"); err != nil {
t.Fatal(err)
}
}()
verifyBase(t, driver, "Base")
}
// DriverTestCreateSnap creates a base layer and a snapshot of it, then verifies the snapshot.
func DriverTestCreateSnap(t testing.TB, drivername string, driverOptions ...string) {
driver := GetDriver(t, drivername, driverOptions...)
defer PutDriver(t)
createBase(t, driver, "Base")
defer func() {
if err := driver.Remove("Base"); err != nil {
t.Fatal(err)
}
}()
if err := driver.Create("Snap", "Base", "", nil); err != nil {
t.Fatal(err)
}
defer func() {
if err := driver.Remove("Snap"); err != nil {
t.Fatal(err)
}
}()
verifyBase(t, driver, "Snap")
}
// DriverTestDeepLayerRead reads a file from a lower layer under a given number of layers
func DriverTestDeepLayerRead(t testing.TB, layerCount int, drivername string, driverOptions ...string) {
driver := GetDriver(t, drivername, driverOptions...)
defer PutDriver(t)
base := stringid.GenerateRandomID()
if err := driver.Create(base, "", "", nil); err != nil {
t.Fatal(err)
}
content := []byte("test content")
if err := addFile(driver, base, "testfile.txt", content); err != nil {
t.Fatal(err)
}
topLayer, err := addManyLayers(driver, base, layerCount)
if err != nil {
t.Fatal(err)
}
err = checkManyLayers(driver, topLayer, layerCount)
if err != nil {
t.Fatal(err)
}
if err := checkFile(driver, topLayer, "testfile.txt", content); err != nil {
t.Fatal(err)
}
}
// DriverTestDiffApply tests that diffing and applying the diff produce the same layer.
func DriverTestDiffApply(t testing.TB, fileCount int, drivername string, driverOptions ...string) {
driver := GetDriver(t, drivername, driverOptions...)
defer PutDriver(t)
base := stringid.GenerateRandomID()
upper := stringid.GenerateRandomID()
deleteFile := "file-remove.txt"
deleteFileContent := []byte("This file should get removed in upper!")
if err := driver.Create(base, "", "", nil); err != nil {
t.Fatal(err)
}
if err := addManyFiles(driver, base, fileCount, 3); err != nil {
t.Fatal(err)
}
if err := addFile(driver, base, deleteFile, deleteFileContent); err != nil {
t.Fatal(err)
}
if err := driver.Create(upper, base, "", nil); err != nil {
t.Fatal(err)
}
if err := addManyFiles(driver, upper, fileCount, 6); err != nil {
t.Fatal(err)
}
if err := removeFile(driver, upper, deleteFile); err != nil {
t.Fatal(err)
}
diffSize, err := driver.DiffSize(upper, "")
if err != nil {
t.Fatal(err)
}
diff := stringid.GenerateRandomID()
if err := driver.Create(diff, base, "", nil); err != nil {
t.Fatal(err)
}
if err := checkManyFiles(driver, diff, fileCount, 3); err != nil {
t.Fatal(err)
}
if err := checkFile(driver, diff, deleteFile, deleteFileContent); err != nil {
t.Fatal(err)
}
arch, err := driver.Diff(upper, base)
if err != nil {
t.Fatal(err)
}
buf := bytes.NewBuffer(nil)
if _, err := buf.ReadFrom(arch); err != nil {
t.Fatal(err)
}
if err := arch.Close(); err != nil {
t.Fatal(err)
}
applyDiffSize, err := driver.ApplyDiff(diff, base, bytes.NewReader(buf.Bytes()))
if err != nil {
t.Fatal(err)
}
if applyDiffSize != diffSize {
t.Fatalf("Apply diff size different, got %d, expected %d", applyDiffSize, diffSize)
}
if err := checkManyFiles(driver, diff, fileCount, 6); err != nil {
t.Fatal(err)
}
if err := checkFileRemoved(driver, diff, deleteFile); err != nil {
t.Fatal(err)
}
}
// DriverTestChanges tests that the computed changes on a layer match the changes made to it.
func DriverTestChanges(t testing.TB, drivername string, driverOptions ...string) {
driver := GetDriver(t, drivername, driverOptions...)
defer PutDriver(t)
base := stringid.GenerateRandomID()
upper := stringid.GenerateRandomID()
if err := driver.Create(base, "", "", nil); err != nil {
t.Fatal(err)
}
if err := addManyFiles(driver, base, 20, 3); err != nil {
t.Fatal(err)
}
if err := driver.Create(upper, base, "", nil); err != nil {
t.Fatal(err)
}
expectedChanges, err := changeManyFiles(driver, upper, 20, 6)
if err != nil {
t.Fatal(err)
}
changes, err := driver.Changes(upper, base)
if err != nil {
t.Fatal(err)
}
if err = checkChanges(expectedChanges, changes); err != nil {
t.Fatal(err)
}
}
func writeRandomFile(path string, size uint64) error {
buf := make([]int64, size/8)
r := rand.NewSource(0)
for i := range buf {
buf[i] = r.Int63()
}
// Cast to []byte
header := *(*reflect.SliceHeader)(unsafe.Pointer(&buf))
header.Len *= 8
header.Cap *= 8
data := *(*[]byte)(unsafe.Pointer(&header))
return ioutil.WriteFile(path, data, 0700)
}
// DriverTestSetQuota Create a driver and test setting quota.
func DriverTestSetQuota(t *testing.T, drivername string) {
driver := GetDriver(t, drivername)
defer PutDriver(t)
createBase(t, driver, "Base")
storageOpt := make(map[string]string, 1)
storageOpt["size"] = "50M"
if err := driver.Create("zfsTest", "Base", "", storageOpt); err != nil {
t.Fatal(err)
}
mountPath, err := driver.Get("zfsTest", "")
if err != nil {
t.Fatal(err)
}
quota := uint64(50 * units.MiB)
err = writeRandomFile(path.Join(mountPath, "file"), quota*2)
if pathError, ok := err.(*os.PathError); ok && pathError.Err != syscall.EDQUOT {
t.Fatalf("expect write() to fail with %v, got %v", syscall.EDQUOT, err)
}
}


@@ -0,0 +1 @@
package graphtest


@@ -0,0 +1,327 @@
package graphtest
import (
"bytes"
"fmt"
"io/ioutil"
"math/rand"
"os"
"path"
"sort"
"github.com/containers/storage/drivers"
"github.com/containers/storage/pkg/archive"
"github.com/containers/storage/pkg/stringid"
)
func randomContent(size int, seed int64) []byte {
s := rand.NewSource(seed)
content := make([]byte, size)
for i := 0; i < len(content); i += 7 {
val := s.Int63()
for j := 0; i+j < len(content) && j < 7; j++ {
content[i+j] = byte(val)
val >>= 8
}
}
return content
}
func addFiles(drv graphdriver.Driver, layer string, seed int64) error {
root, err := drv.Get(layer, "")
if err != nil {
return err
}
defer drv.Put(layer)
if err := ioutil.WriteFile(path.Join(root, "file-a"), randomContent(64, seed), 0755); err != nil {
return err
}
if err := os.MkdirAll(path.Join(root, "dir-b"), 0755); err != nil {
return err
}
if err := ioutil.WriteFile(path.Join(root, "dir-b", "file-b"), randomContent(128, seed+1), 0755); err != nil {
return err
}
return ioutil.WriteFile(path.Join(root, "file-c"), randomContent(128*128, seed+2), 0755)
}
func checkFile(drv graphdriver.Driver, layer, filename string, content []byte) error {
root, err := drv.Get(layer, "")
if err != nil {
return err
}
defer drv.Put(layer)
fileContent, err := ioutil.ReadFile(path.Join(root, filename))
if err != nil {
return err
}
if !bytes.Equal(fileContent, content) {
return fmt.Errorf("mismatched file content %v, expecting %v", fileContent, content)
}
return nil
}
func addFile(drv graphdriver.Driver, layer, filename string, content []byte) error {
root, err := drv.Get(layer, "")
if err != nil {
return err
}
defer drv.Put(layer)
return ioutil.WriteFile(path.Join(root, filename), content, 0755)
}
func removeFile(drv graphdriver.Driver, layer, filename string) error {
root, err := drv.Get(layer, "")
if err != nil {
return err
}
defer drv.Put(layer)
return os.Remove(path.Join(root, filename))
}
func checkFileRemoved(drv graphdriver.Driver, layer, filename string) error {
root, err := drv.Get(layer, "")
if err != nil {
return err
}
defer drv.Put(layer)
if _, err := os.Stat(path.Join(root, filename)); err == nil {
return fmt.Errorf("file still exists: %s", path.Join(root, filename))
} else if !os.IsNotExist(err) {
return err
}
return nil
}
func addManyFiles(drv graphdriver.Driver, layer string, count int, seed int64) error {
root, err := drv.Get(layer, "")
if err != nil {
return err
}
defer drv.Put(layer)
for i := 0; i < count; i += 100 {
dir := path.Join(root, fmt.Sprintf("directory-%d", i))
if err := os.MkdirAll(dir, 0755); err != nil {
return err
}
for j := 0; i+j < count && j < 100; j++ {
file := path.Join(dir, fmt.Sprintf("file-%d", i+j))
if err := ioutil.WriteFile(file, randomContent(64, seed+int64(i+j)), 0755); err != nil {
return err
}
}
}
return nil
}
func changeManyFiles(drv graphdriver.Driver, layer string, count int, seed int64) ([]archive.Change, error) {
root, err := drv.Get(layer, "")
if err != nil {
return nil, err
}
defer drv.Put(layer)
changes := []archive.Change{}
for i := 0; i < count; i += 100 {
archiveRoot := fmt.Sprintf("/directory-%d", i)
if err := os.MkdirAll(path.Join(root, archiveRoot), 0755); err != nil {
return nil, err
}
for j := 0; i+j < count && j < 100; j++ {
if j == 0 {
changes = append(changes, archive.Change{
Path: archiveRoot,
Kind: archive.ChangeModify,
})
}
var change archive.Change
switch j % 3 {
// Update file
case 0:
change.Path = path.Join(archiveRoot, fmt.Sprintf("file-%d", i+j))
change.Kind = archive.ChangeModify
if err := ioutil.WriteFile(path.Join(root, change.Path), randomContent(64, seed+int64(i+j)), 0755); err != nil {
return nil, err
}
// Add file
case 1:
change.Path = path.Join(archiveRoot, fmt.Sprintf("file-%d-%d", seed, i+j))
change.Kind = archive.ChangeAdd
if err := ioutil.WriteFile(path.Join(root, change.Path), randomContent(64, seed+int64(i+j)), 0755); err != nil {
return nil, err
}
// Remove file
case 2:
change.Path = path.Join(archiveRoot, fmt.Sprintf("file-%d", i+j))
change.Kind = archive.ChangeDelete
if err := os.Remove(path.Join(root, change.Path)); err != nil {
return nil, err
}
}
changes = append(changes, change)
}
}
return changes, nil
}
func checkManyFiles(drv graphdriver.Driver, layer string, count int, seed int64) error {
root, err := drv.Get(layer, "")
if err != nil {
return err
}
defer drv.Put(layer)
for i := 0; i < count; i += 100 {
dir := path.Join(root, fmt.Sprintf("directory-%d", i))
for j := 0; i+j < count && j < 100; j++ {
file := path.Join(dir, fmt.Sprintf("file-%d", i+j))
fileContent, err := ioutil.ReadFile(file)
if err != nil {
return err
}
content := randomContent(64, seed+int64(i+j))
if !bytes.Equal(fileContent, content) {
return fmt.Errorf("mismatched file content %v, expecting %v", fileContent, content)
}
}
}
return nil
}
type changeList []archive.Change
func (c changeList) Less(i, j int) bool {
if c[i].Path == c[j].Path {
return c[i].Kind < c[j].Kind
}
return c[i].Path < c[j].Path
}
func (c changeList) Len() int { return len(c) }
func (c changeList) Swap(i, j int) { c[j], c[i] = c[i], c[j] }
func checkChanges(expected, actual []archive.Change) error {
if len(expected) != len(actual) {
return fmt.Errorf("unexpected number of changes, expected %d, got %d", len(expected), len(actual))
}
sort.Sort(changeList(expected))
sort.Sort(changeList(actual))
for i := range expected {
if expected[i] != actual[i] {
return fmt.Errorf("unexpected change, expecting %v, got %v", expected[i], actual[i])
}
}
return nil
}
func addLayerFiles(drv graphdriver.Driver, layer, parent string, i int) error {
root, err := drv.Get(layer, "")
if err != nil {
return err
}
defer drv.Put(layer)
if err := ioutil.WriteFile(path.Join(root, "top-id"), []byte(layer), 0755); err != nil {
return err
}
layerDir := path.Join(root, fmt.Sprintf("layer-%d", i))
if err := os.MkdirAll(layerDir, 0755); err != nil {
return err
}
if err := ioutil.WriteFile(path.Join(layerDir, "layer-id"), []byte(layer), 0755); err != nil {
return err
}
if err := ioutil.WriteFile(path.Join(layerDir, "parent-id"), []byte(parent), 0755); err != nil {
return err
}
return nil
}
func addManyLayers(drv graphdriver.Driver, baseLayer string, count int) (string, error) {
lastLayer := baseLayer
for i := 1; i <= count; i++ {
nextLayer := stringid.GenerateRandomID()
if err := drv.Create(nextLayer, lastLayer, "", nil); err != nil {
return "", err
}
if err := addLayerFiles(drv, nextLayer, lastLayer, i); err != nil {
return "", err
}
lastLayer = nextLayer
}
return lastLayer, nil
}
func checkManyLayers(drv graphdriver.Driver, layer string, count int) error {
root, err := drv.Get(layer, "")
if err != nil {
return err
}
defer drv.Put(layer)
layerIDBytes, err := ioutil.ReadFile(path.Join(root, "top-id"))
if err != nil {
return err
}
if !bytes.Equal(layerIDBytes, []byte(layer)) {
return fmt.Errorf("mismatched file content %v, expecting %v", layerIDBytes, []byte(layer))
}
for i := count; i > 0; i-- {
layerDir := path.Join(root, fmt.Sprintf("layer-%d", i))
thisLayerIDBytes, err := ioutil.ReadFile(path.Join(layerDir, "layer-id"))
if err != nil {
return err
}
if !bytes.Equal(thisLayerIDBytes, layerIDBytes) {
return fmt.Errorf("mismatched file content %v, expecting %v", thisLayerIDBytes, layerIDBytes)
}
layerIDBytes, err = ioutil.ReadFile(path.Join(layerDir, "parent-id"))
if err != nil {
return err
}
}
return nil
}
// readDir reads a directory just like ioutil.ReadDir(),
// then hides specific files (currently "lost+found")
// so the tests don't "see" them
func readDir(dir string) ([]os.FileInfo, error) {
a, err := ioutil.ReadDir(dir)
if err != nil {
return nil, err
}
b := a[:0]
for _, x := range a {
if x.Name() != "lost+found" { // ext4 always has this dir
b = append(b, x)
}
}
return b, nil
}


@@ -0,0 +1,143 @@
// +build linux freebsd
package graphtest
import (
"fmt"
"io/ioutil"
"os"
"path"
"syscall"
"testing"
"github.com/containers/storage/drivers"
)
// InitLoopbacks ensures that the loopback devices are properly created within
// the system running the device mapper tests.
func InitLoopbacks() error {
statT, err := getBaseLoopStats()
if err != nil {
return err
}
// create at least 8 loopback files
for i := 0; i < 8; i++ {
loopPath := fmt.Sprintf("/dev/loop%d", i)
// only create new loopback files if they don't exist
if _, err := os.Stat(loopPath); err != nil {
if mkerr := syscall.Mknod(loopPath,
uint32(statT.Mode|syscall.S_IFBLK), int((7<<8)|(i&0xff)|((i&0xfff00)<<12))); mkerr != nil {
return mkerr
}
os.Chown(loopPath, int(statT.Uid), int(statT.Gid))
}
}
return nil
}
// getBaseLoopStats inspects /dev/loop0 to collect uid, gid, and mode for the
// loop0 device on the system. If it does not exist we assume 0, 0, 0660 for the
// stat data.
func getBaseLoopStats() (*syscall.Stat_t, error) {
loop0, err := os.Stat("/dev/loop0")
if err != nil {
if os.IsNotExist(err) {
return &syscall.Stat_t{
Uid: 0,
Gid: 0,
Mode: 0660,
}, nil
}
return nil, err
}
return loop0.Sys().(*syscall.Stat_t), nil
}
func verifyFile(t testing.TB, path string, mode os.FileMode, uid, gid uint32) {
fi, err := os.Stat(path)
if err != nil {
t.Fatal(err)
}
if fi.Mode()&os.ModeType != mode&os.ModeType {
t.Fatalf("Expected %s type 0x%x, got 0x%x", path, mode&os.ModeType, fi.Mode()&os.ModeType)
}
if fi.Mode()&os.ModePerm != mode&os.ModePerm {
t.Fatalf("Expected %s mode %o, got %o", path, mode&os.ModePerm, fi.Mode()&os.ModePerm)
}
if fi.Mode()&os.ModeSticky != mode&os.ModeSticky {
t.Fatalf("Expected %s sticky 0x%x, got 0x%x", path, mode&os.ModeSticky, fi.Mode()&os.ModeSticky)
}
if fi.Mode()&os.ModeSetuid != mode&os.ModeSetuid {
t.Fatalf("Expected %s setuid 0x%x, got 0x%x", path, mode&os.ModeSetuid, fi.Mode()&os.ModeSetuid)
}
if fi.Mode()&os.ModeSetgid != mode&os.ModeSetgid {
t.Fatalf("Expected %s setgid 0x%x, got 0x%x", path, mode&os.ModeSetgid, fi.Mode()&os.ModeSetgid)
}
if stat, ok := fi.Sys().(*syscall.Stat_t); ok {
if stat.Uid != uid {
t.Fatalf("%s not owned by uid %d", path, uid)
}
if stat.Gid != gid {
t.Fatalf("%s not owned by gid %d", path, gid)
}
}
}
func createBase(t testing.TB, driver graphdriver.Driver, name string) {
// We need to be able to set any perms
oldmask := syscall.Umask(0)
defer syscall.Umask(oldmask)
if err := driver.CreateReadWrite(name, "", "", nil); err != nil {
t.Fatal(err)
}
dir, err := driver.Get(name, "")
if err != nil {
t.Fatal(err)
}
defer driver.Put(name)
subdir := path.Join(dir, "a subdir")
if err := os.Mkdir(subdir, 0705|os.ModeSticky); err != nil {
t.Fatal(err)
}
if err := os.Chown(subdir, 1, 2); err != nil {
t.Fatal(err)
}
file := path.Join(dir, "a file")
if err := ioutil.WriteFile(file, []byte("Some data"), 0222|os.ModeSetuid); err != nil {
t.Fatal(err)
}
}
func verifyBase(t testing.TB, driver graphdriver.Driver, name string) {
dir, err := driver.Get(name, "")
if err != nil {
t.Fatal(err)
}
defer driver.Put(name)
subdir := path.Join(dir, "a subdir")
verifyFile(t, subdir, 0705|os.ModeDir|os.ModeSticky, 1, 2)
file := path.Join(dir, "a file")
verifyFile(t, file, 0222|os.ModeSetuid, 0, 0)
fis, err := readDir(dir)
if err != nil {
t.Fatal(err)
}
if len(fis) != 2 {
t.Fatal("Unexpected files in base image")
}
}
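InitLoopbacks above builds the loop device number with the expression `(7<<8)|(i&0xff)|((i&0xfff00)<<12)`. Pulled out into a helper (mkdev here is an illustrative name, assuming the glibc makedev bit layout), the encoding reads as:

```go
package main

import "fmt"

// mkdev mirrors the device-number encoding used in InitLoopbacks: major in
// bits 8-19, minor split between the low byte and bits 20 and up, matching
// the glibc makedev layout.
func mkdev(major, minor int) int {
	return (major << 8) | (minor & 0xff) | ((minor & 0xfff00) << 12)
}

func main() {
	// loop devices use major 7; minors above 255 spill into the high bits
	fmt.Println(mkdev(7, 0), mkdev(7, 255), mkdev(7, 256))
}
```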


@@ -0,0 +1,93 @@
// +build linux
package overlay
import (
"testing"
"github.com/containers/storage/drivers"
"github.com/containers/storage/drivers/graphtest"
"github.com/containers/storage/pkg/archive"
)
func init() {
// Do not use chroot, to speed up run time and to allow archive
// errors or hangs to be debugged directly from the test process.
graphdriver.ApplyUncompressedLayer = archive.ApplyUncompressedLayer
}
// This avoids creating a new driver for each test if all tests are run
// Make sure to put new tests between TestOverlaySetup and TestOverlayTeardown
func TestOverlaySetup(t *testing.T) {
graphtest.GetDriver(t, "overlay")
}
func TestOverlayCreateEmpty(t *testing.T) {
graphtest.DriverTestCreateEmpty(t, "overlay")
}
func TestOverlayCreateBase(t *testing.T) {
graphtest.DriverTestCreateBase(t, "overlay")
}
func TestOverlayCreateSnap(t *testing.T) {
graphtest.DriverTestCreateSnap(t, "overlay")
}
func TestOverlay50LayerRead(t *testing.T) {
graphtest.DriverTestDeepLayerRead(t, 50, "overlay")
}
// Fails due to bug in calculating changes after apply
// likely related to https://github.com/docker/docker/issues/21555
func TestOverlayDiffApply10Files(t *testing.T) {
t.Skipf("Fails to compute changes after apply intermittently")
graphtest.DriverTestDiffApply(t, 10, "overlay")
}
func TestOverlayChanges(t *testing.T) {
t.Skipf("Fails to compute changes intermittently")
graphtest.DriverTestChanges(t, "overlay")
}
func TestOverlayTeardown(t *testing.T) {
graphtest.PutDriver(t)
}
// Benchmarks should always set up a new driver
func BenchmarkExists(b *testing.B) {
graphtest.DriverBenchExists(b, "overlay")
}
func BenchmarkGetEmpty(b *testing.B) {
graphtest.DriverBenchGetEmpty(b, "overlay")
}
func BenchmarkDiffBase(b *testing.B) {
graphtest.DriverBenchDiffBase(b, "overlay")
}
func BenchmarkDiffSmallUpper(b *testing.B) {
graphtest.DriverBenchDiffN(b, 10, 10, "overlay")
}
func BenchmarkDiff10KFileUpper(b *testing.B) {
graphtest.DriverBenchDiffN(b, 10, 10000, "overlay")
}
func BenchmarkDiff10KFilesBottom(b *testing.B) {
graphtest.DriverBenchDiffN(b, 10000, 10, "overlay")
}
func BenchmarkDiffApply100(b *testing.B) {
graphtest.DriverBenchDiffApplyN(b, 100, "overlay")
}
func BenchmarkDiff20Layers(b *testing.B) {
graphtest.DriverBenchDeepLayerDiff(b, 20, "overlay")
}
func BenchmarkRead20Layers(b *testing.B) {
graphtest.DriverBenchDeepLayerRead(b, 20, "overlay")
}


@@ -15,7 +15,7 @@ import (
)
func init() {
- reexec.Register("docker-mountfrom", mountFromMain)
+ reexec.Register("storage-mountfrom", mountFromMain)
}
func fatal(err error) {
@@ -40,7 +40,7 @@ func mountFrom(dir, device, target, mType, label string) error {
Label: label,
}
- cmd := reexec.Command("docker-mountfrom", dir)
+ cmd := reexec.Command("storage-mountfrom", dir)
w, err := cmd.StdinPipe()
if err != nil {
return fmt.Errorf("mountfrom error on pipe creation: %v", err)
@@ -65,7 +65,7 @@ func mountFrom(dir, device, target, mType, label string) error {
return nil
}
- // mountfromMain is the entry-point for docker-mountfrom on re-exec.
+ // mountfromMain is the entry-point for storage-mountfrom on re-exec.
func mountFromMain() {
runtime.LockOSThread()
flag.Parse()


@@ -0,0 +1,106 @@
// +build linux
package overlay2
import (
"os"
"syscall"
"testing"
"github.com/containers/storage/drivers"
"github.com/containers/storage/drivers/graphtest"
"github.com/containers/storage/pkg/archive"
"github.com/containers/storage/pkg/reexec"
)
func init() {
// Do not use chroot, to speed up run time and to allow archive
// errors or hangs to be debugged directly from the test process.
untar = archive.UntarUncompressed
graphdriver.ApplyUncompressedLayer = archive.ApplyUncompressedLayer
reexec.Init()
}
func cdMountFrom(dir, device, target, mType, label string) error {
wd, err := os.Getwd()
if err != nil {
return err
}
if err := os.Chdir(dir); err != nil {
return err
}
defer os.Chdir(wd)
return syscall.Mount(device, target, mType, 0, label)
}
// This avoids creating a new driver for each test if all tests are run
// Make sure to put new tests between TestOverlaySetup and TestOverlayTeardown
func TestOverlaySetup(t *testing.T) {
graphtest.GetDriver(t, driverName)
}
func TestOverlayCreateEmpty(t *testing.T) {
graphtest.DriverTestCreateEmpty(t, driverName)
}
func TestOverlayCreateBase(t *testing.T) {
graphtest.DriverTestCreateBase(t, driverName)
}
func TestOverlayCreateSnap(t *testing.T) {
graphtest.DriverTestCreateSnap(t, driverName)
}
func TestOverlay128LayerRead(t *testing.T) {
graphtest.DriverTestDeepLayerRead(t, 128, driverName)
}
func TestOverlayDiffApply10Files(t *testing.T) {
graphtest.DriverTestDiffApply(t, 10, driverName)
}
func TestOverlayChanges(t *testing.T) {
graphtest.DriverTestChanges(t, driverName)
}
func TestOverlayTeardown(t *testing.T) {
graphtest.PutDriver(t)
}
// Benchmarks should always set up a new driver
func BenchmarkExists(b *testing.B) {
graphtest.DriverBenchExists(b, driverName)
}
func BenchmarkGetEmpty(b *testing.B) {
graphtest.DriverBenchGetEmpty(b, driverName)
}
func BenchmarkDiffBase(b *testing.B) {
graphtest.DriverBenchDiffBase(b, driverName)
}
func BenchmarkDiffSmallUpper(b *testing.B) {
graphtest.DriverBenchDiffN(b, 10, 10, driverName)
}
func BenchmarkDiff10KFileUpper(b *testing.B) {
graphtest.DriverBenchDiffN(b, 10, 10000, driverName)
}
func BenchmarkDiff10KFilesBottom(b *testing.B) {
graphtest.DriverBenchDiffN(b, 10000, 10, driverName)
}
func BenchmarkDiffApply100(b *testing.B) {
graphtest.DriverBenchDiffApplyN(b, 100, driverName)
}
func BenchmarkDiff20Layers(b *testing.B) {
graphtest.DriverBenchDeepLayerDiff(b, 20, driverName)
}
func BenchmarkRead20Layers(b *testing.B) {
graphtest.DriverBenchDeepLayerRead(b, 20, driverName)
}


@@ -0,0 +1,37 @@
// +build linux
package vfs
import (
"testing"
"github.com/containers/storage/drivers/graphtest"
"github.com/containers/storage/pkg/reexec"
)
func init() {
reexec.Init()
}
// This avoids creating a new driver for each test if all tests are run
// Make sure to put new tests between TestVfsSetup and TestVfsTeardown
func TestVfsSetup(t *testing.T) {
graphtest.GetDriver(t, "vfs")
}
func TestVfsCreateEmpty(t *testing.T) {
graphtest.DriverTestCreateEmpty(t, "vfs")
}
func TestVfsCreateBase(t *testing.T) {
graphtest.DriverTestCreateBase(t, "vfs")
}
func TestVfsCreateSnap(t *testing.T) {
graphtest.DriverTestCreateSnap(t, "vfs")
}
func TestVfsTeardown(t *testing.T) {
graphtest.PutDriver(t)
}


@@ -0,0 +1,779 @@
//+build windows
package windows
import (
"bufio"
"bytes"
"encoding/json"
"fmt"
"io"
"io/ioutil"
"os"
"path"
"path/filepath"
"strconv"
"strings"
"sync"
"syscall"
"unsafe"
"github.com/Microsoft/go-winio"
"github.com/Microsoft/go-winio/archive/tar"
"github.com/Microsoft/go-winio/backuptar"
"github.com/Microsoft/hcsshim"
"github.com/Sirupsen/logrus"
"github.com/containers/storage/drivers"
"github.com/containers/storage/pkg/archive"
"github.com/containers/storage/pkg/idtools"
"github.com/containers/storage/pkg/ioutils"
"github.com/containers/storage/pkg/longpath"
"github.com/containers/storage/pkg/reexec"
"github.com/containers/storage/pkg/system"
"github.com/vbatts/tar-split/tar/storage"
)
// filterDriver is an HCSShim driver type for the Windows Filter driver.
const filterDriver = 1
// init registers the windows graph drivers to the register.
func init() {
graphdriver.Register("windowsfilter", InitFilter)
reexec.Register("storage-windows-write-layer", writeLayer)
}
type checker struct {
}
func (c *checker) IsMounted(path string) bool {
return false
}
// Driver represents a windows graph driver.
type Driver struct {
// info stores the shim driver information
info hcsshim.DriverInfo
ctr *graphdriver.RefCounter
// it is safe for windows to use a cache here because it does not support
// restoring containers when the daemon dies.
cacheMu sync.Mutex
cache map[string]string
}
func isTP5OrOlder() bool {
return system.GetOSVersion().Build <= 14300
}
// InitFilter returns a new Windows storage filter driver.
func InitFilter(home string, options []string, uidMaps, gidMaps []idtools.IDMap) (graphdriver.Driver, error) {
logrus.Debugf("WindowsGraphDriver InitFilter at %s", home)
d := &Driver{
info: hcsshim.DriverInfo{
HomeDir: home,
Flavour: filterDriver,
},
cache: make(map[string]string),
ctr: graphdriver.NewRefCounter(&checker{}),
}
return d, nil
}
// String returns the string representation of a driver. This should match
// the name the graph driver has been registered with.
func (d *Driver) String() string {
return "windowsfilter"
}
// Status returns the status of the driver.
func (d *Driver) Status() [][2]string {
return [][2]string{
{"Windows", ""},
}
}
// Exists returns true if the given id is registered with this driver.
func (d *Driver) Exists(id string) bool {
rID, err := d.resolveID(id)
if err != nil {
return false
}
result, err := hcsshim.LayerExists(d.info, rID)
if err != nil {
return false
}
return result
}
// CreateReadWrite creates a layer that is writable for use as a container
// file system.
func (d *Driver) CreateReadWrite(id, parent, mountLabel string, storageOpt map[string]string) error {
return d.create(id, parent, mountLabel, false, storageOpt)
}
// Create creates a new read-only layer with the given id.
func (d *Driver) Create(id, parent, mountLabel string, storageOpt map[string]string) error {
return d.create(id, parent, mountLabel, true, storageOpt)
}
func (d *Driver) create(id, parent, mountLabel string, readOnly bool, storageOpt map[string]string) error {
if len(storageOpt) != 0 {
return fmt.Errorf("--storage-opt is not supported for windows")
}
rPId, err := d.resolveID(parent)
if err != nil {
return err
}
parentChain, err := d.getLayerChain(rPId)
if err != nil {
return err
}
var layerChain []string
if rPId != "" {
parentPath, err := hcsshim.GetLayerMountPath(d.info, rPId)
if err != nil {
return err
}
if _, err := os.Stat(filepath.Join(parentPath, "Files")); err == nil {
// This is a legitimate parent layer (not the empty "-init" layer),
// so include it in the layer chain.
layerChain = []string{parentPath}
}
}
layerChain = append(layerChain, parentChain...)
if readOnly {
if err := hcsshim.CreateLayer(d.info, id, rPId); err != nil {
return err
}
} else {
var parentPath string
if len(layerChain) != 0 {
parentPath = layerChain[0]
}
if isTP5OrOlder() {
// Pre-create the layer directory, providing an ACL to give the Hyper-V Virtual Machines
// group access. This is necessary to ensure that Hyper-V containers can access the
// virtual machine data. This is not necessary post-TP5.
path, err := syscall.UTF16FromString(filepath.Join(d.info.HomeDir, id))
if err != nil {
return err
}
// Give system and administrators full control, and VMs read, write, and execute.
// Mark these ACEs as inherited.
sd, err := winio.SddlToSecurityDescriptor("D:(A;OICI;FA;;;SY)(A;OICI;FA;;;BA)(A;OICI;FRFWFX;;;S-1-5-83-0)")
if err != nil {
return err
}
err = syscall.CreateDirectory(&path[0], &syscall.SecurityAttributes{
Length: uint32(unsafe.Sizeof(syscall.SecurityAttributes{})),
SecurityDescriptor: uintptr(unsafe.Pointer(&sd[0])),
})
if err != nil {
return err
}
}
if err := hcsshim.CreateSandboxLayer(d.info, id, parentPath, layerChain); err != nil {
return err
}
}
if _, err := os.Lstat(d.dir(parent)); err != nil {
if err2 := hcsshim.DestroyLayer(d.info, id); err2 != nil {
logrus.Warnf("Failed to DestroyLayer %s: %s", id, err2)
}
return fmt.Errorf("Cannot create layer with missing parent %s: %s", parent, err)
}
if err := d.setLayerChain(id, layerChain); err != nil {
if err2 := hcsshim.DestroyLayer(d.info, id); err2 != nil {
logrus.Warnf("Failed to DestroyLayer %s: %s", id, err2)
}
return err
}
return nil
}
// dir returns the absolute path to the layer.
func (d *Driver) dir(id string) string {
return filepath.Join(d.info.HomeDir, filepath.Base(id))
}
// Remove unmounts and removes the dir information.
func (d *Driver) Remove(id string) error {
rID, err := d.resolveID(id)
if err != nil {
return err
}
os.RemoveAll(filepath.Join(d.info.HomeDir, "sysfile-backups", rID)) // ok to fail
return hcsshim.DestroyLayer(d.info, rID)
}
// Get returns the rootfs path for the id. This will mount the dir at its given path.
func (d *Driver) Get(id, mountLabel string) (string, error) {
logrus.Debugf("WindowsGraphDriver Get() id %s mountLabel %s", id, mountLabel)
var dir string
rID, err := d.resolveID(id)
if err != nil {
return "", err
}
if count := d.ctr.Increment(rID); count > 1 {
return d.cache[rID], nil
}
// Getting the layer paths must be done outside of the lock.
layerChain, err := d.getLayerChain(rID)
if err != nil {
d.ctr.Decrement(rID)
return "", err
}
if err := hcsshim.ActivateLayer(d.info, rID); err != nil {
d.ctr.Decrement(rID)
return "", err
}
if err := hcsshim.PrepareLayer(d.info, rID, layerChain); err != nil {
d.ctr.Decrement(rID)
if err2 := hcsshim.DeactivateLayer(d.info, rID); err2 != nil {
logrus.Warnf("Failed to Deactivate %s: %s", id, err2)
}
return "", err
}
mountPath, err := hcsshim.GetLayerMountPath(d.info, rID)
if err != nil {
d.ctr.Decrement(rID)
if err2 := hcsshim.DeactivateLayer(d.info, rID); err2 != nil {
logrus.Warnf("Failed to Deactivate %s: %s", id, err2)
}
return "", err
}
d.cacheMu.Lock()
d.cache[rID] = mountPath
d.cacheMu.Unlock()
// If the layer has a mount path, use that. Otherwise, use the
// folder path.
if mountPath != "" {
dir = mountPath
} else {
dir = d.dir(id)
}
return dir, nil
}
// Put releases the layer: it decrements the reference count and, once no callers remain, unprepares and deactivates it.
func (d *Driver) Put(id string) error {
logrus.Debugf("WindowsGraphDriver Put() id %s", id)
rID, err := d.resolveID(id)
if err != nil {
return err
}
if count := d.ctr.Decrement(rID); count > 0 {
return nil
}
d.cacheMu.Lock()
delete(d.cache, rID)
d.cacheMu.Unlock()
if err := hcsshim.UnprepareLayer(d.info, rID); err != nil {
return err
}
return hcsshim.DeactivateLayer(d.info, rID)
}
// Cleanup ensures the information the driver stores is properly removed.
func (d *Driver) Cleanup() error {
return nil
}
// Diff produces an archive of the changes between the specified
// layer and its parent layer which may be "".
// The layer should be mounted when calling this function
func (d *Driver) Diff(id, parent string) (_ archive.Archive, err error) {
rID, err := d.resolveID(id)
if err != nil {
return
}
layerChain, err := d.getLayerChain(rID)
if err != nil {
return
}
// this is assuming that the layer is unmounted
if err := hcsshim.UnprepareLayer(d.info, rID); err != nil {
return nil, err
}
prepare := func() {
if err := hcsshim.PrepareLayer(d.info, rID, layerChain); err != nil {
logrus.Warnf("Failed to re-PrepareLayer %s: %s", rID, err)
}
}
arch, err := d.exportLayer(rID, layerChain)
if err != nil {
prepare()
return
}
return ioutils.NewReadCloserWrapper(arch, func() error {
err := arch.Close()
prepare()
return err
}), nil
}
// Changes produces a list of changes between the specified layer
// and its parent layer. If parent is "", then all changes will be ADD changes.
// The layer should be mounted when calling this function.
func (d *Driver) Changes(id, parent string) ([]archive.Change, error) {
rID, err := d.resolveID(id)
if err != nil {
return nil, err
}
parentChain, err := d.getLayerChain(rID)
if err != nil {
return nil, err
}
// Dismount the layer so its files can be read directly; it is re-prepared in the deferred call.
if err := hcsshim.UnprepareLayer(d.info, rID); err != nil {
return nil, err
}
defer func() {
if err := hcsshim.PrepareLayer(d.info, rID, parentChain); err != nil {
logrus.Warnf("Failed to Prepare %s: %s", rID, err)
}
}()
var changes []archive.Change
err = winio.RunWithPrivilege(winio.SeBackupPrivilege, func() error {
r, err := hcsshim.NewLayerReader(d.info, id, parentChain)
if err != nil {
return err
}
defer r.Close()
for {
name, _, fileInfo, err := r.Next()
if err == io.EOF {
return nil
}
if err != nil {
return err
}
name = filepath.ToSlash(name)
if fileInfo == nil {
changes = append(changes, archive.Change{Path: name, Kind: archive.ChangeDelete})
} else {
// Currently there is no way to distinguish between an add and a modify.
changes = append(changes, archive.Change{Path: name, Kind: archive.ChangeModify})
}
}
})
if err != nil {
return nil, err
}
return changes, nil
}
// ApplyDiff extracts the changeset from the given diff into the
// layer with the specified id and parent, returning the size of the
// new layer in bytes.
// The layer should not be mounted when calling this function
func (d *Driver) ApplyDiff(id, parent string, diff archive.Reader) (int64, error) {
var layerChain []string
if parent != "" {
rPId, err := d.resolveID(parent)
if err != nil {
return 0, err
}
parentChain, err := d.getLayerChain(rPId)
if err != nil {
return 0, err
}
parentPath, err := hcsshim.GetLayerMountPath(d.info, rPId)
if err != nil {
return 0, err
}
layerChain = append(layerChain, parentPath)
layerChain = append(layerChain, parentChain...)
}
size, err := d.importLayer(id, diff, layerChain)
if err != nil {
return 0, err
}
if err = d.setLayerChain(id, layerChain); err != nil {
return 0, err
}
return size, nil
}
// DiffSize calculates the changes between the specified layer
// and its parent and returns the size in bytes of the changes
// relative to its base filesystem directory.
func (d *Driver) DiffSize(id, parent string) (size int64, err error) {
rPId, err := d.resolveID(parent)
if err != nil {
return
}
changes, err := d.Changes(id, rPId)
if err != nil {
return
}
layerFs, err := d.Get(id, "")
if err != nil {
return
}
defer d.Put(id)
return archive.ChangesSize(layerFs, changes), nil
}
// GetMetadata returns custom driver information.
func (d *Driver) GetMetadata(id string) (map[string]string, error) {
m := make(map[string]string)
m["dir"] = d.dir(id)
return m, nil
}
func writeTarFromLayer(r hcsshim.LayerReader, w io.Writer) error {
t := tar.NewWriter(w)
for {
name, size, fileInfo, err := r.Next()
if err == io.EOF {
break
}
if err != nil {
return err
}
if fileInfo == nil {
// Write a whiteout file.
hdr := &tar.Header{
Name: filepath.ToSlash(filepath.Join(filepath.Dir(name), archive.WhiteoutPrefix+filepath.Base(name))),
}
err := t.WriteHeader(hdr)
if err != nil {
return err
}
} else {
err = backuptar.WriteTarFileFromBackupStream(t, r, name, size, fileInfo)
if err != nil {
return err
}
}
}
return t.Close()
}
// exportLayer generates an archive from a layer based on the given ID.
func (d *Driver) exportLayer(id string, parentLayerPaths []string) (archive.Archive, error) {
archive, w := io.Pipe()
go func() {
err := winio.RunWithPrivilege(winio.SeBackupPrivilege, func() error {
r, err := hcsshim.NewLayerReader(d.info, id, parentLayerPaths)
if err != nil {
return err
}
err = writeTarFromLayer(r, w)
cerr := r.Close()
if err == nil {
err = cerr
}
return err
})
w.CloseWithError(err)
}()
return archive, nil
}
func writeLayerFromTar(r archive.Reader, w hcsshim.LayerWriter) (int64, error) {
t := tar.NewReader(r)
hdr, err := t.Next()
totalSize := int64(0)
buf := bufio.NewWriter(nil)
for err == nil {
base := path.Base(hdr.Name)
if strings.HasPrefix(base, archive.WhiteoutPrefix) {
name := path.Join(path.Dir(hdr.Name), base[len(archive.WhiteoutPrefix):])
err = w.Remove(filepath.FromSlash(name))
if err != nil {
return 0, err
}
hdr, err = t.Next()
} else if hdr.Typeflag == tar.TypeLink {
err = w.AddLink(filepath.FromSlash(hdr.Name), filepath.FromSlash(hdr.Linkname))
if err != nil {
return 0, err
}
hdr, err = t.Next()
} else {
var (
name string
size int64
fileInfo *winio.FileBasicInfo
)
name, size, fileInfo, err = backuptar.FileInfoFromHeader(hdr)
if err != nil {
return 0, err
}
err = w.Add(filepath.FromSlash(name), fileInfo)
if err != nil {
return 0, err
}
buf.Reset(w)
// Add the Hyper-V Virtual Machine group ACE to the security descriptor
// for TP5 so that Xenons can access all files. This is not necessary
// for post-TP5 builds.
if isTP5OrOlder() {
if sddl, ok := hdr.Winheaders["sd"]; ok {
var ace string
if hdr.Typeflag == tar.TypeDir {
ace = "(A;OICI;0x1200a9;;;S-1-5-83-0)"
} else {
ace = "(A;;0x1200a9;;;S-1-5-83-0)"
}
if hdr.Winheaders["sd"], ok = addAceToSddlDacl(sddl, ace); !ok {
logrus.Debugf("failed to add VM ACE to %s", sddl)
}
}
}
hdr, err = backuptar.WriteBackupStreamFromTarFile(buf, t, hdr)
ferr := buf.Flush()
if ferr != nil {
err = ferr
}
totalSize += size
}
}
if err != io.EOF {
return 0, err
}
return totalSize, nil
}
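The deletion entries in the two helpers above are AUFS-style whiteouts: writeTarFromLayer emits a `.wh.`-prefixed header for a removed file, and writeLayerFromTar maps such a name back to a Remove of the original path. A minimal sketch of just the name mapping, assuming `archive.WhiteoutPrefix` is the usual `.wh.` (the helper names here are hypothetical):

```go
package main

import (
	"fmt"
	"path"
	"strings"
)

const whiteoutPrefix = ".wh." // assumed value of archive.WhiteoutPrefix

// whiteoutName converts a deleted path into its whiteout tar entry name,
// as writeTarFromLayer does when fileInfo is nil.
func whiteoutName(name string) string {
	return path.Join(path.Dir(name), whiteoutPrefix+path.Base(name))
}

// deletedName recovers the original path from a whiteout entry, or returns
// ok=false if the entry is not a whiteout, mirroring writeLayerFromTar.
func deletedName(name string) (string, bool) {
	base := path.Base(name)
	if !strings.HasPrefix(base, whiteoutPrefix) {
		return name, false
	}
	return path.Join(path.Dir(name), base[len(whiteoutPrefix):]), true
}

func main() {
	w := whiteoutName("a/b/file.txt")
	orig, ok := deletedName(w)
	fmt.Println(w, orig, ok) // a/b/.wh.file.txt a/b/file.txt true
}
```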
func addAceToSddlDacl(sddl, ace string) (string, bool) {
daclStart := strings.Index(sddl, "D:")
if daclStart < 0 {
return sddl, false
}
dacl := sddl[daclStart:]
daclEnd := strings.Index(dacl, "S:")
if daclEnd < 0 {
daclEnd = len(dacl)
}
dacl = dacl[:daclEnd]
if strings.Contains(dacl, ace) {
return sddl, true
}
i := 2
for i+1 < len(dacl) {
if dacl[i] != '(' {
return sddl, false
}
if dacl[i+1] == 'A' {
break
}
i += 2
for p := 1; i < len(dacl) && p > 0; i++ {
if dacl[i] == '(' {
p++
} else if dacl[i] == ')' {
p--
}
}
}
return sddl[:daclStart+i] + ace + sddl[daclStart+i:], true
}
// importLayer adds a new layer to the store by re-exec'ing a helper process
// that writes the tar data, returning the size of the new layer in bytes.
func (d *Driver) importLayer(id string, layerData archive.Reader, parentLayerPaths []string) (size int64, err error) {
cmd := reexec.Command(append([]string{"storage-windows-write-layer", d.info.HomeDir, id}, parentLayerPaths...)...)
output := bytes.NewBuffer(nil)
cmd.Stdin = layerData
cmd.Stdout = output
cmd.Stderr = output
if err = cmd.Start(); err != nil {
return
}
if err = cmd.Wait(); err != nil {
return 0, fmt.Errorf("re-exec error: %v: output: %s", err, output)
}
return strconv.ParseInt(output.String(), 10, 64)
}
// writeLayer is the re-exec entry point for writing a layer from a tar file
func writeLayer() {
home := os.Args[1]
id := os.Args[2]
parentLayerPaths := os.Args[3:]
err := func() error {
err := winio.EnableProcessPrivileges([]string{winio.SeBackupPrivilege, winio.SeRestorePrivilege})
if err != nil {
return err
}
info := hcsshim.DriverInfo{
Flavour: filterDriver,
HomeDir: home,
}
w, err := hcsshim.NewLayerWriter(info, id, parentLayerPaths)
if err != nil {
return err
}
size, err := writeLayerFromTar(os.Stdin, w)
if err != nil {
return err
}
err = w.Close()
if err != nil {
return err
}
fmt.Fprint(os.Stdout, size)
return nil
}()
if err != nil {
fmt.Fprint(os.Stderr, err)
os.Exit(1)
}
}
// resolveID returns the layer ID recorded in the layerID file, falling back
// to the given id if the file does not exist.
func (d *Driver) resolveID(id string) (string, error) {
content, err := ioutil.ReadFile(filepath.Join(d.dir(id), "layerID"))
if os.IsNotExist(err) {
return id, nil
} else if err != nil {
return "", err
}
return string(content), nil
}
// setID stores the layer ID on disk.
func (d *Driver) setID(id, altID string) error {
return ioutil.WriteFile(filepath.Join(d.dir(id), "layerId"), []byte(altID), 0600)
}
// getLayerChain returns the layer chain information.
func (d *Driver) getLayerChain(id string) ([]string, error) {
jPath := filepath.Join(d.dir(id), "layerchain.json")
content, err := ioutil.ReadFile(jPath)
if os.IsNotExist(err) {
return nil, nil
} else if err != nil {
return nil, fmt.Errorf("Unable to read layerchain file - %s", err)
}
var layerChain []string
err = json.Unmarshal(content, &layerChain)
if err != nil {
return nil, fmt.Errorf("Failed to unmarshal layerchain json - %s", err)
}
return layerChain, nil
}
// setLayerChain stores the layer chain information on disk.
func (d *Driver) setLayerChain(id string, chain []string) error {
content, err := json.Marshal(&chain)
if err != nil {
return fmt.Errorf("Failed to marshal layerchain json - %s", err)
}
jPath := filepath.Join(d.dir(id), "layerchain.json")
err = ioutil.WriteFile(jPath, content, 0600)
if err != nil {
return fmt.Errorf("Unable to write layerchain file - %s", err)
}
return nil
}
type fileGetCloserWithBackupPrivileges struct {
path string
}
func (fg *fileGetCloserWithBackupPrivileges) Get(filename string) (io.ReadCloser, error) {
var f *os.File
// Open the file while holding the Windows backup privilege. This ensures that the
// file can be opened even if the caller does not actually have access to it according
// to the security descriptor.
err := winio.RunWithPrivilege(winio.SeBackupPrivilege, func() error {
path := longpath.AddPrefix(filepath.Join(fg.path, filename))
p, err := syscall.UTF16FromString(path)
if err != nil {
return err
}
h, err := syscall.CreateFile(&p[0], syscall.GENERIC_READ, syscall.FILE_SHARE_READ, nil, syscall.OPEN_EXISTING, syscall.FILE_FLAG_BACKUP_SEMANTICS, 0)
if err != nil {
return &os.PathError{Op: "open", Path: path, Err: err}
}
f = os.NewFile(uintptr(h), path)
return nil
})
return f, err
}
func (fg *fileGetCloserWithBackupPrivileges) Close() error {
return nil
}
type fileGetDestroyCloser struct {
storage.FileGetter
path string
}
func (f *fileGetDestroyCloser) Close() error {
// TODO: activate layers and release here?
return os.RemoveAll(f.path)
}
// DiffGetter returns a FileGetCloser that can read files from the directory that
// contains files for the layer differences. Used for direct access for tar-split.
func (d *Driver) DiffGetter(id string) (graphdriver.FileGetCloser, error) {
id, err := d.resolveID(id)
if err != nil {
return nil, err
}
return &fileGetCloserWithBackupPrivileges{d.dir(id)}, nil
}


@@ -0,0 +1,18 @@
package windows
import "testing"
func TestAddAceToSddlDacl(t *testing.T) {
cases := [][3]string{
{"D:", "(A;;;)", "D:(A;;;)"},
{"D:(A;;;)", "(A;;;)", "D:(A;;;)"},
{"O:D:(A;;;stuff)", "(A;;;new)", "O:D:(A;;;new)(A;;;stuff)"},
{"O:D:(D;;;no)(A;;;stuff)", "(A;;;new)", "O:D:(D;;;no)(A;;;new)(A;;;stuff)"},
}
for _, c := range cases {
if newSddl, worked := addAceToSddlDacl(c[0], c[1]); !worked || newSddl != c[2] {
t.Errorf("%s + %s == %s, expected %s (%v)", c[0], c[1], newSddl, c[2], worked)
}
}
}


@@ -0,0 +1,35 @@
// +build linux
package zfs
import (
"testing"
"github.com/containers/storage/drivers/graphtest"
)
// This avoids creating a new driver for each test if all tests are run
// Make sure to put new tests between TestZfsSetup and TestZfsTeardown
func TestZfsSetup(t *testing.T) {
graphtest.GetDriver(t, "zfs")
}
func TestZfsCreateEmpty(t *testing.T) {
graphtest.DriverTestCreateEmpty(t, "zfs")
}
func TestZfsCreateBase(t *testing.T) {
graphtest.DriverTestCreateBase(t, "zfs")
}
func TestZfsCreateSnap(t *testing.T) {
graphtest.DriverTestCreateSnap(t, "zfs")
}
func TestZfsSetQuota(t *testing.T) {
graphtest.DriverTestSetQuota(t, "zfs")
}
func TestZfsTeardown(t *testing.T) {
graphtest.PutDriver(t)
}

120
vendor/github.com/containers/storage/hack/.vendor-helpers.sh generated vendored Executable file

@@ -0,0 +1,120 @@
#!/usr/bin/env bash
PROJECT=github.com/containers/storage
# Downloads dependencies into vendor/ directory
mkdir -p vendor
if ! go list github.com/containers/storage/storage &> /dev/null; then
rm -rf .gopath
mkdir -p .gopath/src/github.com/containers
ln -sf ../../../.. .gopath/src/${PROJECT}
export GOPATH="${PWD}/.gopath:${PWD}/vendor"
fi
export GOPATH="$GOPATH:${PWD}/vendor"
find='find'
if [ "$(go env GOHOSTOS)" = 'windows' ]; then
find='/usr/bin/find'
fi
clone() {
local vcs="$1"
local pkg="$2"
local rev="$3"
local url="$4"
: ${url:=https://$pkg}
local target="vendor/src/$pkg"
echo -n "$pkg @ $rev: "
if [ -d "$target" ]; then
echo -n 'rm old, '
rm -rf "$target"
fi
echo -n 'clone, '
case "$vcs" in
git)
git clone --quiet --no-checkout "$url" "$target"
( cd "$target" && git checkout --quiet "$rev" && git reset --quiet --hard "$rev" )
;;
hg)
hg clone --quiet --updaterev "$rev" "$url" "$target"
;;
esac
echo -n 'rm VCS, '
( cd "$target" && rm -rf .{git,hg} )
echo -n 'rm vendor, '
( cd "$target" && rm -rf vendor Godeps/_workspace )
echo done
}
clean() {
local packages=(
"${PROJECT}/cmd/oci-storage"
)
local storagePlatforms=( ${STORAGE_OSARCH:="linux/amd64 linux/i386 linux/arm freebsd/amd64 freebsd/386 freebsd/arm windows/amd64"} )
local buildTagCombos=(
''
'experimental'
)
echo
echo -n 'collecting import graph, '
local IFS=$'\n'
local imports=( $(
for platform in "${storagePlatforms[@]}"; do
export GOOS="${platform%/*}";
export GOARCH="${platform##*/}";
for buildTags in "${buildTagCombos[@]}"; do
go list -e -tags "$buildTags" -f '{{join .Deps "\n"}}' "${packages[@]}"
go list -e -tags "$buildTags" -f '{{join .TestImports "\n"}}' "${packages[@]}"
done
done | grep -vE "^${PROJECT}/" | sort -u
) )
imports=( $(go list -e -f '{{if not .Standard}}{{.ImportPath}}{{end}}' "${imports[@]}") )
unset IFS
echo -n 'pruning unused packages, '
local findArgs=( -false )
for import in "${imports[@]}"; do
[ "${#findArgs[@]}" -eq 0 ] || findArgs+=( -or )
findArgs+=( -path "vendor/src/$import" )
done
local IFS=$'\n'
local prune=( $($find vendor -depth -type d -not '(' "${findArgs[@]}" ')') )
unset IFS
for dir in "${prune[@]}"; do
$find "$dir" -maxdepth 1 -not -type d -not -name 'LICENSE*' -not -name 'COPYING*' -exec rm -v -f '{}' ';'
rmdir "$dir" 2>/dev/null || true
done
echo -n 'pruning unused files, '
$find vendor -type f -name '*_test.go' -exec rm -v '{}' ';'
$find vendor -type f -name 'Vagrantfile' -exec rm -v '{}' ';'
# These are the files that are left over after fix_rewritten_imports is run.
echo -n 'pruning .orig files, '
$find vendor -type f -name '*.orig' -exec rm -v '{}' ';'
echo done
}
# Fix up hard-coded imports that refer to Godeps paths so they'll work with our vendoring
fix_rewritten_imports () {
local pkg="$1"
local remove="${pkg}/Godeps/_workspace/src/"
local target="vendor/src/$pkg"
echo "$pkg: fixing rewritten imports"
$find "$target" -name \*.go -exec sed -i'.orig' -e "s|\"${remove}|\"|g" {} \;
}


@@ -0,0 +1,35 @@
set +x
set +e
echo ""
echo ""
echo "---"
echo "Now starting POST-BUILD steps"
echo "---"
echo ""
echo INFO: Pointing to $DOCKER_HOST
if [ ! $(docker ps -aq | wc -l) -eq 0 ]; then
echo INFO: Removing containers...
! docker rm -vf $(docker ps -aq)
fi
# Remove all images which don't have docker or debian in the name
if [ ! $(docker images | sed -n '1!p' | grep -v 'docker' | grep -v 'debian' | awk '{ print $3 }' | wc -l) -eq 0 ]; then
echo INFO: Removing images...
! docker rmi -f $(docker images | sed -n '1!p' | grep -v 'docker' | grep -v 'debian' | awk '{ print $3 }')
fi
# Kill off any instances of git, go and docker, just in case
! taskkill -F -IM git.exe -T >& /dev/null
! taskkill -F -IM go.exe -T >& /dev/null
! taskkill -F -IM docker.exe -T >& /dev/null
# Remove everything
! cd /c/jenkins/gopath/src/github.com/docker/docker
! rm -rfd * >& /dev/null
! rm -rfd .* >& /dev/null
echo INFO: Cleanup complete
exit 0


@@ -0,0 +1,309 @@
# Jenkins CI script for Windows to Linux CI.
# Heavily modified by John Howard (@jhowardmsft) December 2015 to try to make it more reliable.
set +xe
SCRIPT_VER="Wed Apr 20 18:30:19 UTC 2016"
# TODO to make (even) more resilient:
# - Wait for daemon to be running before executing docker commands
# - Check if jq is installed
# - Make sure bash is v4.3 or later. Can't do until all Azure nodes on the latest version
# - Make sure we are not running as local system. Can't do until all Azure nodes are updated.
# - Error if docker versions are not equal. Can't do until all Azure nodes are updated
# - Error if go versions are not equal. Can't do until all Azure nodes are updated.
# - Error if running 32-bit posix tools. Probably can take from bash --version and check contains "x86_64"
# - Warn if the CI directory cannot be deleted afterwards. Otherwise turdlets are left behind
# - Use %systemdrive% ($SYSTEMDRIVE) rather than hard code to c: for TEMP
# - Consider cross-building the Windows binary and copying it across. That's a bit of a heavy lift. Only reason
# for doing that is that it mirrors the actual release process for docker.exe which is cross-built.
# However, should absolutely not be a problem if built natively, so nit-picking.
# - Tidy up of images and containers. Either here, or in the teardown script.
ec=0
uniques=1
echo INFO: Started at `date`. Script version $SCRIPT_VER
# !README!
# There are two daemons running on the remote Linux host:
# - outer: specified by DOCKER_HOST, this is the daemon that will build and run the inner docker daemon
# from the sources matching the PR.
# - inner: runs on the host network, on a port number similar to that of DOCKER_HOST but the last two digits are inverted
# (2357 if DOCKER_HOST had port 2375; and 2367 if DOCKER_HOST had port 2376).
# The windows integration tests are run against this inner daemon.
# get the ip, inner and outer ports.
ip="${DOCKER_HOST#*://}"
port_outer="${ip#*:}"
# inner port is like outer port with last two digits inverted.
port_inner=$(echo "$port_outer" | sed -E 's/(.)(.)$/\2\1/')
ip="${ip%%:*}"
echo "INFO: IP=$ip PORT_OUTER=$port_outer PORT_INNER=$port_inner"
# If TLS is enabled
if [ -n "$DOCKER_TLS_VERIFY" ]; then
protocol=https
if [ -z "$DOCKER_MACHINE_NAME" ]; then
ec=1
echo "ERROR: DOCKER_MACHINE_NAME is undefined"
fi
certs=$(echo ~/.docker/machine/machines/$DOCKER_MACHINE_NAME)
curlopts="--cacert $certs/ca.pem --cert $certs/cert.pem --key $certs/key.pem"
run_extra_args="-v tlscerts:/etc/docker"
daemon_extra_args="--tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem"
else
protocol=http
fi
# Save for use by make.sh and scripts it invokes
export MAIN_DOCKER_HOST="tcp://$ip:$port_inner"
# Verify we can get the remote node to respond to _ping
if [ $ec -eq 0 ]; then
reply=`curl -s $curlopts $protocol://$ip:$port_outer/_ping`
if [ "$reply" != "OK" ]; then
ec=1
echo "ERROR: Failed to get an 'OK' response from the docker daemon on the Linux node"
echo " at $ip:$port_outer when called with an http request for '_ping'. This implies that"
echo " either the daemon has crashed/is not running, or the Linux node is unavailable."
echo
echo " A regular ping to the remote Linux node is below. It should reply. If not, the"
echo " machine cannot be reached at all and may have crashed. If it does reply, it is"
echo " likely a case of the Linux daemon not running or having crashed, which requires"
echo " further investigation."
echo
echo " Try re-running this CI job, or ask on #docker-dev or #docker-maintainers"
echo " for someone to perform further diagnostics, or take this node out of rotation."
echo
ping $ip
else
echo "INFO: The Linux node's outer daemon replied to a ping. Good!"
fi
fi
# Get the version from the remote node. Note this may fail if jq is not installed.
# That's probably worth checking to make sure, just in case.
if [ $ec -eq 0 ]; then
remoteVersion=`curl -s $curlopts $protocol://$ip:$port_outer/version | jq -c '.Version'`
echo "INFO: Remote daemon is running docker version $remoteVersion"
fi
# Compare versions. We should really fail if the result is not 1. Output at end of script.
if [ $ec -eq 0 ]; then
uniques=`docker version | grep Version | /usr/bin/sort -u | wc -l`
fi
# Make sure we are in repo
if [ $ec -eq 0 ]; then
if [ ! -d hack ]; then
echo "ERROR: Are you sure this is being launched from the root of the docker repository?"
echo " If this is a Windows CI machine, it should be c:\jenkins\gopath\src\github.com\docker\docker."
echo " Current directory is `pwd`"
ec=1
fi
fi
# Are we in split binary mode?
if [ `grep DOCKER_CLIENTONLY Makefile | wc -l` -gt 0 ]; then
splitBinary=0
echo "INFO: Running in single binary mode"
else
splitBinary=1
echo "INFO: Running in split binary mode"
fi
# Get the commit hash and verify we have something
if [ $ec -eq 0 ]; then
export COMMITHASH=$(git rev-parse --short HEAD)
echo INFO: Commit hash is $COMMITHASH
if [ -z $COMMITHASH ]; then
echo "ERROR: Failed to get commit hash. Are you sure this is a docker repository?"
ec=1
fi
fi
# Redirect to a temporary location. Check is here for local runs from Jenkins machines just in case not
# in the right directory where the repo is cloned. We also redirect TEMP to not use the environment
# TEMP as when running as a standard user (not local system), it otherwise exposes a bug in posix tar which
# will cause CI to fail from Windows to Linux. Obviously it's not best practice to ever run as local system...
if [ $ec -eq 0 ]; then
export TEMP=/c/CI/CI-$COMMITHASH
export TMP=$TEMP
/usr/bin/mkdir -p $TEMP # Make sure Linux mkdir for -p
fi
# Tidy up time
if [ $ec -eq 0 ]; then
echo INFO: Deleting pre-existing containers and images...
# Force remove all containers based on a previously built image with this commit
! docker rm -f $(docker ps -aq --filter "ancestor=docker:$COMMITHASH") &>/dev/null
# Force remove any container with this commithash as a name
! docker rm -f $(docker ps -aq --filter "name=docker-$COMMITHASH") &>/dev/null
# This SHOULD never happen, but just in case, also blow away any containers
# that might be around.
! if [ ! $(docker ps -aq | wc -l) -eq 0 ]; then
echo WARN: There were some leftover containers. Cleaning them up.
! docker rm -f $(docker ps -aq)
fi
# Force remove the image if it exists
! docker rmi -f "docker-$COMMITHASH" &>/dev/null
fi
# Provide the docker version for debugging purposes. If these fail, game over.
# as the Linux box isn't responding for some reason.
if [ $ec -eq 0 ]; then
echo INFO: Docker version and info of the outer daemon on the Linux node
echo
docker version
ec=$?
if [ 0 -ne $ec ]; then
echo "ERROR: The main linux daemon does not appear to be running. Has the Linux node crashed?"
fi
echo
fi
# Same as above, but docker info
if [ $ec -eq 0 ]; then
echo
docker info
ec=$?
if [ 0 -ne $ec ]; then
echo "ERROR: The main linux daemon does not appear to be running. Has the Linux node crashed?"
fi
echo
fi
# build the daemon image
if [ $ec -eq 0 ]; then
echo "INFO: Running docker build on Linux host at $DOCKER_HOST"
if [ $splitBinary -eq 0 ]; then
set -x
docker build --rm --force-rm --build-arg APT_MIRROR=cdn-fastly.deb.debian.org -t "docker:$COMMITHASH" .
cat <<EOF | docker build --rm --force-rm -t "docker:$COMMITHASH" -
FROM docker:$COMMITHASH
RUN hack/make.sh binary
RUN cp bundles/latest/binary/docker /bin/docker
CMD docker daemon -D -H tcp://0.0.0.0:$port_inner $daemon_extra_args
EOF
else
set -x
docker build --rm --force-rm --build-arg APT_MIRROR=cdn-fastly.deb.debian.org -t "docker:$COMMITHASH" .
cat <<EOF | docker build --rm --force-rm -t "docker:$COMMITHASH" -
FROM docker:$COMMITHASH
RUN hack/make.sh binary
RUN cp bundles/latest/binary-daemon/dockerd /bin/dockerd
CMD dockerd -D -H tcp://0.0.0.0:$port_inner $daemon_extra_args
EOF
fi
ec=$?
set +x
if [ 0 -ne $ec ]; then
echo "ERROR: docker build failed"
fi
fi
# Start the docker-in-docker daemon from the image we just built
if [ $ec -eq 0 ]; then
echo "INFO: Starting build of a Linux daemon to test against, and starting it..."
set -x
# aufs in aufs is faster than vfs in aufs
docker run -d $run_extra_args -e DOCKER_GRAPHDRIVER=aufs --pid host --privileged --name "docker-$COMMITHASH" --net host "docker:$COMMITHASH"
ec=$?
set +x
if [ 0 -ne $ec ]; then
echo "ERROR: Failed to compile and start the linux daemon"
fi
fi
# Build locally.
if [ $ec -eq 0 ]; then
echo "INFO: Starting local build of Windows binary..."
set -x
export TIMEOUT="120m"
export DOCKER_HOST="tcp://$ip:$port_inner"
# This can be removed
export DOCKER_TEST_HOST="tcp://$ip:$port_inner"
unset DOCKER_CLIENTONLY
export DOCKER_REMOTE_DAEMON=1
hack/make.sh binary
ec=$?
set +x
if [ 0 -ne $ec ]; then
echo "ERROR: Build of binary on Windows failed"
fi
fi
# Make a local copy of the built binary and ensure that is first in our path
if [ $ec -eq 0 ]; then
VERSION=$(< ./VERSION)
if [ $splitBinary -eq 0 ]; then
cp bundles/$VERSION/binary/docker.exe $TEMP
else
cp bundles/$VERSION/binary-client/docker.exe $TEMP
fi
ec=$?
if [ 0 -ne $ec ]; then
echo "ERROR: Failed to copy built binary to $TEMP"
fi
export PATH=$TEMP:$PATH
fi
# Run the integration tests
if [ $ec -eq 0 ]; then
echo "INFO: Running Integration tests..."
set -x
export DOCKER_TEST_TLS_VERIFY="$DOCKER_TLS_VERIFY"
export DOCKER_TEST_CERT_PATH="$DOCKER_CERT_PATH"
#export TESTFLAGS='-check.vv'
hack/make.sh test-integration-cli
ec=$?
set +x
if [ 0 -ne $ec ]; then
echo "ERROR: CLI test failed."
# Next line is useful, but very long winded if included
docker -H=$MAIN_DOCKER_HOST logs --tail 100 "docker-$COMMITHASH"
fi
fi
# Tidy up any temporary files from the CI run
if [ ! -z $COMMITHASH ]; then
rm -rf $TEMP
fi
# CI Integrity check - ensure we are using the same version of go as present in the Dockerfile
GOVER_DOCKERFILE=`grep 'ENV GO_VERSION' Dockerfile | awk '{print $3}'`
GOVER_INSTALLED=`go version | awk '{print $3}'`
if [ "${GOVER_INSTALLED:2}" != "$GOVER_DOCKERFILE" ]; then
#ec=1 # Uncomment to make CI fail once all nodes are updated.
echo
echo "---------------------------------------------------------------------------"
echo "WARN: CI should be using go version $GOVER_DOCKERFILE, but is using ${GOVER_INSTALLED:2}"
echo " Please ping #docker-maintainers on IRC to get this CI server updated."
echo "---------------------------------------------------------------------------"
echo
fi
# Check the Linux box is running a matching version of docker
if [ "$uniques" -ne 1 ]; then
#ec=1 # Uncomment to make CI fail once all nodes are updated.
echo
echo "---------------------------------------------------------------------------"
echo "ERROR: This CI node is not running the same version of docker as the daemon."
echo " This is a CI configuration issue."
echo "---------------------------------------------------------------------------"
echo
fi
# Tell the user how we did.
if [ $ec -eq 0 ]; then
echo INFO: Completed successfully at `date`.
else
echo ERROR: Failed with exitcode $ec at `date`.
fi
exit $ec
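The inner/outer port scheme described in the script's header comment (2375 becomes 2357, 2376 becomes 2367) is just the sed expression `s/(.)(.)$/\2\1/`: swap the last two digits of the port. An equivalent Go sketch (invertPort is an illustrative name, not from the script):

```go
package main

import "fmt"

// invertPort swaps the last two characters of a port string, mirroring the
// sed expression used to derive the inner daemon's port from the outer one.
func invertPort(port string) string {
	if len(port) < 2 {
		return port
	}
	b := []byte(port)
	n := len(b)
	b[n-1], b[n-2] = b[n-2], b[n-1]
	return string(b)
}

func main() {
	fmt.Println(invertPort("2375"), invertPort("2376")) // 2357 2367
}
```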

33
vendor/github.com/containers/storage/hack/dind generated vendored Executable file

@@ -0,0 +1,33 @@
#!/bin/bash
set -e
# DinD: a wrapper script which allows docker to be run inside a docker container.
# Original version by Jerome Petazzoni <jerome@docker.com>
# See the blog post: https://blog.docker.com/2013/09/docker-can-now-run-within-docker/
#
# This script should be executed inside a docker container in privileged mode
# ('docker run --privileged', introduced in docker 0.6).
# Usage: dind CMD [ARG...]
# apparmor sucks and Docker needs to know that it's in a container (c) @tianon
export container=docker
if [ -d /sys/kernel/security ] && ! mountpoint -q /sys/kernel/security; then
mount -t securityfs none /sys/kernel/security || {
echo >&2 'Could not mount /sys/kernel/security.'
echo >&2 'AppArmor detection and --privileged mode might break.'
}
fi
# Mount /tmp (conditionally)
if ! mountpoint -q /tmp; then
mount -t tmpfs none /tmp
fi
if [ $# -gt 0 ]; then
exec "$@"
fi
echo >&2 'ERROR: No command specified.'
echo >&2 'You probably want to run hack/make.sh, or maybe a shell?'


@@ -0,0 +1,15 @@
#!/bin/bash
set -e
cd "$(dirname "$(readlink -f "$BASH_SOURCE")")/.."
# see also ".mailmap" for how email addresses and names are deduplicated
{
cat <<-'EOH'
# This file lists all individuals having contributed content to the repository.
# For how it is generated, see `hack/generate-authors.sh`.
EOH
echo
git log --format='%aN <%aE>' | LC_ALL=C.UTF-8 sort -uf
} > AUTHORS

517
vendor/github.com/containers/storage/hack/install.sh generated vendored Normal file

@@ -0,0 +1,517 @@
#!/bin/sh
set -e
#
# This script is meant for quick & easy install via:
# 'curl -sSL https://get.docker.com/ | sh'
# or:
# 'wget -qO- https://get.docker.com/ | sh'
#
# For test builds (ie. release candidates):
# 'curl -fsSL https://test.docker.com/ | sh'
# or:
# 'wget -qO- https://test.docker.com/ | sh'
#
# For experimental builds:
# 'curl -fsSL https://experimental.docker.com/ | sh'
# or:
# 'wget -qO- https://experimental.docker.com/ | sh'
#
# Docker Maintainers:
# To update this script on https://get.docker.com,
# use hack/release.sh during a normal release,
# or the following one-liner for script hotfixes:
# aws s3 cp --acl public-read hack/install.sh s3://get.docker.com/index
#
url="https://get.docker.com/"
apt_url="https://apt.dockerproject.org"
yum_url="https://yum.dockerproject.org"
gpg_fingerprint="58118E89F3A912897C070ADBF76221572C52609D"
key_servers="
ha.pool.sks-keyservers.net
pgp.mit.edu
keyserver.ubuntu.com
"
command_exists() {
command -v "$@" > /dev/null 2>&1
}
echo_docker_as_nonroot() {
if command_exists docker && [ -e /var/run/docker.sock ]; then
(
set -x
$sh_c 'docker version'
) || true
fi
your_user=your-user
[ "$user" != 'root' ] && your_user="$user"
# intentionally mixed spaces and tabs here -- tabs are stripped by "<<-EOF", spaces are kept in the output
cat <<-EOF
If you would like to use Docker as a non-root user, you should now consider
adding your user to the "docker" group with something like:
sudo usermod -aG docker $your_user
Remember that you will have to log out and back in for this to take effect!
EOF
}
# Check if this is a forked Linux distro
check_forked() {
# Check for lsb_release command existence, it usually exists in forked distros
if command_exists lsb_release; then
# Check if the `-u` option is supported
set +e
lsb_release -a -u > /dev/null 2>&1
lsb_release_exit_code=$?
set -e
# Check if the command has exited successfully, it means we're in a forked distro
if [ "$lsb_release_exit_code" = "0" ]; then
# Print info about current distro
cat <<-EOF
You're using '$lsb_dist' version '$dist_version'.
EOF
# Get the upstream release info
lsb_dist=$(lsb_release -a -u 2>&1 | tr '[:upper:]' '[:lower:]' | grep -E 'id' | cut -d ':' -f 2 | tr -d '[[:space:]]')
dist_version=$(lsb_release -a -u 2>&1 | tr '[:upper:]' '[:lower:]' | grep -E 'codename' | cut -d ':' -f 2 | tr -d '[[:space:]]')
# Print info about upstream distro
cat <<-EOF
Upstream release is '$lsb_dist' version '$dist_version'.
EOF
else
if [ -r /etc/debian_version ] && [ "$lsb_dist" != "ubuntu" ]; then
# We're Debian and don't even know it!
lsb_dist=debian
dist_version="$(cat /etc/debian_version | sed 's/\/.*//' | sed 's/\..*//')"
case "$dist_version" in
8|'Kali Linux 2')
dist_version="jessie"
;;
7)
dist_version="wheezy"
;;
esac
fi
fi
fi
}
rpm_import_repository_key() {
local key=$1; shift
local tmpdir=$(mktemp -d)
chmod 600 "$tmpdir"
for key_server in $key_servers ; do
gpg --homedir "$tmpdir" --keyserver "$key_server" --recv-keys "$key" && break
done
gpg --homedir "$tmpdir" -k "$key" >/dev/null
gpg --homedir "$tmpdir" --export --armor "$key" > "$tmpdir"/repo.key
rpm --import "$tmpdir"/repo.key
rm -rf "$tmpdir"
}
semverParse() {
major="${1%%.*}"
minor="${1#$major.}"
minor="${minor%%.*}"
patch="${1#$major.$minor.}"
patch="${patch%%[-.]*}"
}
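semverParse above peels `major.minor.patch` apart with shell parameter expansion, stripping any pre-release suffix from the patch component, and do_install then warns when the installed version is older than 1.10. The same logic as a Go sketch (the function names are illustrative):

```go
package main

import (
	"fmt"
	"strconv"
	"strings"
)

// parseSemver mirrors the shell semverParse: split "major.minor.patch",
// tolerating a pre-release suffix on the patch (e.g. "1.9.1-rc2").
func parseSemver(v string) (major, minor, patch int) {
	parts := strings.SplitN(v, ".", 3)
	get := func(i int) int {
		if i >= len(parts) {
			return 0
		}
		s := parts[i]
		if j := strings.IndexAny(s, "-."); j >= 0 {
			s = s[:j] // strip at the first '-' or '.', like ${patch%%[-.]*}
		}
		n, _ := strconv.Atoi(s)
		return n
	}
	return get(0), get(1), get(2)
}

// shouldWarn reproduces the pre-1.10 migration warning check.
func shouldWarn(major, minor int) bool {
	return major < 1 || (major <= 1 && minor < 10)
}

func main() {
	maj, mnr, pat := parseSemver("1.9.1-rc2")
	fmt.Println(maj, mnr, pat, shouldWarn(maj, mnr)) // 1 9 1 true
}
```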
do_install() {
case "$(uname -m)" in
*64)
;;
*)
cat >&2 <<-'EOF'
Error: you are not using a 64bit platform.
Docker currently only supports 64bit platforms.
EOF
exit 1
;;
esac
if command_exists docker; then
version="$(docker -v | awk -F '[ ,]+' '{ print $3 }')"
MAJOR_W=1
MINOR_W=10
semverParse $version
shouldWarn=0
if [ $major -lt $MAJOR_W ]; then
shouldWarn=1
fi
if [ $major -le $MAJOR_W ] && [ $minor -lt $MINOR_W ]; then
shouldWarn=1
fi
cat >&2 <<-'EOF'
Warning: the "docker" command appears to already exist on this system.
If you already have Docker installed, this script can cause trouble, which is
why we're displaying this warning and providing the opportunity to cancel the
installation.
If you installed the current Docker package using this script and are using it
EOF
if [ $shouldWarn -eq 1 ]; then
cat >&2 <<-'EOF'
again to update Docker, we urge you to migrate your image store before upgrading
to v1.10+.
You can find instructions for this here:
https://github.com/docker/docker/wiki/Engine-v1.10.0-content-addressability-migration
EOF
else
cat >&2 <<-'EOF'
again to update Docker, you can safely ignore this message.
EOF
fi
cat >&2 <<-'EOF'
You may press Ctrl+C now to abort this script.
EOF
( set -x; sleep 20 )
fi
user="$(id -un 2>/dev/null || true)"
sh_c='sh -c'
if [ "$user" != 'root' ]; then
if command_exists sudo; then
sh_c='sudo -E sh -c'
elif command_exists su; then
sh_c='su -c'
else
cat >&2 <<-'EOF'
Error: this installer needs the ability to run commands as root.
We are unable to find either "sudo" or "su" available to make this happen.
EOF
exit 1
fi
fi
curl=''
if command_exists curl; then
curl='curl -sSL'
elif command_exists wget; then
curl='wget -qO-'
elif command_exists busybox && busybox --list-modules | grep -q wget; then
curl='busybox wget -qO-'
fi
# check to see which repo they are trying to install from
if [ -z "$repo" ]; then
repo='main'
if [ "https://test.docker.com/" = "$url" ]; then
repo='testing'
elif [ "https://experimental.docker.com/" = "$url" ]; then
repo='experimental'
fi
fi
# perform some very rudimentary platform detection
lsb_dist=''
dist_version=''
if command_exists lsb_release; then
lsb_dist="$(lsb_release -si)"
fi
if [ -z "$lsb_dist" ] && [ -r /etc/lsb-release ]; then
lsb_dist="$(. /etc/lsb-release && echo "$DISTRIB_ID")"
fi
if [ -z "$lsb_dist" ] && [ -r /etc/debian_version ]; then
lsb_dist='debian'
fi
if [ -z "$lsb_dist" ] && [ -r /etc/fedora-release ]; then
lsb_dist='fedora'
fi
if [ -z "$lsb_dist" ] && [ -r /etc/oracle-release ]; then
lsb_dist='oracleserver'
fi
if [ -z "$lsb_dist" ] && [ -r /etc/centos-release ]; then
lsb_dist='centos'
fi
if [ -z "$lsb_dist" ] && [ -r /etc/redhat-release ]; then
lsb_dist='redhat'
fi
if [ -z "$lsb_dist" ] && [ -r /etc/os-release ]; then
lsb_dist="$(. /etc/os-release && echo "$ID")"
fi
lsb_dist="$(echo "$lsb_dist" | tr '[:upper:]' '[:lower:]')"
# Special case redhatenterpriseserver
if [ "${lsb_dist}" = "redhatenterpriseserver" ]; then
# Set it to redhat, it will be changed to centos below anyways
lsb_dist='redhat'
fi
case "$lsb_dist" in
ubuntu)
if command_exists lsb_release; then
dist_version="$(lsb_release --codename | cut -f2)"
fi
if [ -z "$dist_version" ] && [ -r /etc/lsb-release ]; then
dist_version="$(. /etc/lsb-release && echo "$DISTRIB_CODENAME")"
fi
;;
debian)
dist_version="$(sed 's/\/.*//; s/\..*//' /etc/debian_version)"
case "$dist_version" in
8)
dist_version="jessie"
;;
7)
dist_version="wheezy"
;;
esac
;;
oracleserver)
# need to switch lsb_dist to match yum repo URL
lsb_dist="oraclelinux"
dist_version="$(rpm -q --whatprovides redhat-release --queryformat "%{VERSION}\n" | sed 's/\/.*//' | sed 's/\..*//' | sed 's/Server*//')"
;;
fedora|centos|redhat)
dist_version="$(rpm -q --whatprovides ${lsb_dist}-release --queryformat "%{VERSION}\n" | sed 's/\/.*//' | sed 's/\..*//' | sed 's/Server*//' | sort | tail -1)"
;;
*)
if command_exists lsb_release; then
dist_version="$(lsb_release --codename | cut -f2)"
fi
if [ -z "$dist_version" ] && [ -r /etc/os-release ]; then
dist_version="$(. /etc/os-release && echo "$VERSION_ID")"
fi
;;
esac
# Check if this is a forked Linux distro
check_forked
# Run setup for each distro accordingly
case "$lsb_dist" in
amzn)
(
set -x
$sh_c 'sleep 3; yum -y -q install docker'
)
echo_docker_as_nonroot
exit 0
;;
'opensuse project'|opensuse)
echo 'Going to perform the following operations:'
if [ "$repo" != 'main' ]; then
echo ' * add repository obs://Virtualization:containers'
fi
echo ' * install Docker'
$sh_c 'echo "Press CTRL-C to abort"; sleep 3'
if [ "$repo" != 'main' ]; then
# install experimental packages from OBS://Virtualization:containers
(
set -x
zypper -n ar -f obs://Virtualization:containers Virtualization:containers
rpm_import_repository_key 55A0B34D49501BB7CA474F5AA193FBB572174FC2
)
fi
(
set -x
zypper -n install docker
)
echo_docker_as_nonroot
exit 0
;;
'suse linux'|sle[sd])
echo 'Going to perform the following operations:'
if [ "$repo" != 'main' ]; then
echo ' * add repository obs://Virtualization:containers'
echo ' * install experimental Docker using packages NOT supported by SUSE'
else
echo ' * add the "Containers" module'
echo ' * install Docker using packages supported by SUSE'
fi
$sh_c 'echo "Press CTRL-C to abort"; sleep 3'
if [ "$repo" != 'main' ]; then
# install experimental packages from OBS://Virtualization:containers
echo >&2 'Warning: installing experimental packages from OBS; these packages are NOT supported by SUSE'
(
set -x
zypper -n ar -f obs://Virtualization:containers/SLE_12 Virtualization:containers
rpm_import_repository_key 55A0B34D49501BB7CA474F5AA193FBB572174FC2
)
else
# Add the containers module
# Note well-1: the SLE machine must already be registered against SUSE Customer Center
# Note well-2: the `-r ""` is required to work around a known issue of SUSEConnect
(
set -x
SUSEConnect -p sle-module-containers/12/x86_64 -r ""
)
fi
(
set -x
zypper -n install docker
)
echo_docker_as_nonroot
exit 0
;;
ubuntu|debian)
export DEBIAN_FRONTEND=noninteractive
did_apt_get_update=
apt_get_update() {
if [ -z "$did_apt_get_update" ]; then
( set -x; $sh_c 'sleep 3; apt-get update' )
did_apt_get_update=1
fi
}
# aufs is preferred over devicemapper; try to ensure the driver is available.
if ! grep -q aufs /proc/filesystems && ! $sh_c 'modprobe aufs'; then
if uname -r | grep -q -- '-generic' && dpkg -l 'linux-image-*-generic' 2>/dev/null | grep -qE '^ii|^hi'; then
kern_extras="linux-image-extra-$(uname -r) linux-image-extra-virtual"
apt_get_update
( set -x; $sh_c 'sleep 3; apt-get install -y -q '"$kern_extras" ) || true
if ! grep -q aufs /proc/filesystems && ! $sh_c 'modprobe aufs'; then
echo >&2 'Warning: tried to install '"$kern_extras"' (for AUFS)'
echo >&2 ' but we still have no AUFS. Docker may not work. Proceeding anyways!'
( set -x; sleep 10 )
fi
else
echo >&2 'Warning: current kernel is not supported by the linux-image-extra-virtual'
echo >&2 ' package. We have no AUFS support. Consider installing the'
echo >&2 ' linux-image-virtual and linux-image-extra-virtual packages for AUFS support.'
( set -x; sleep 10 )
fi
fi
# install apparmor utils if they're missing and apparmor is enabled in the kernel
# otherwise Docker will fail to start
if [ "$(cat /sys/module/apparmor/parameters/enabled 2>/dev/null)" = 'Y' ]; then
if command -v apparmor_parser >/dev/null 2>&1; then
echo 'apparmor is enabled in the kernel and apparmor utils were already installed'
else
echo 'apparmor is enabled in the kernel, but apparmor_parser is missing'
apt_get_update
( set -x; $sh_c 'sleep 3; apt-get install -y -q apparmor' )
fi
fi
if [ ! -e /usr/lib/apt/methods/https ]; then
apt_get_update
( set -x; $sh_c 'sleep 3; apt-get install -y -q apt-transport-https ca-certificates' )
fi
if [ -z "$curl" ]; then
apt_get_update
( set -x; $sh_c 'sleep 3; apt-get install -y -q curl ca-certificates' )
curl='curl -sSL'
fi
(
set -x
for key_server in $key_servers ; do
$sh_c "apt-key adv --keyserver hkp://${key_server}:80 --recv-keys ${gpg_fingerprint}" && break
done
$sh_c "apt-key adv -k ${gpg_fingerprint} >/dev/null"
$sh_c "mkdir -p /etc/apt/sources.list.d"
$sh_c "echo deb \[arch=$(dpkg --print-architecture)\] ${apt_url}/repo ${lsb_dist}-${dist_version} ${repo} > /etc/apt/sources.list.d/docker.list"
$sh_c 'sleep 3; apt-get update; apt-get install -y -q docker-engine'
)
echo_docker_as_nonroot
exit 0
;;
fedora|centos|redhat|oraclelinux)
if [ "${lsb_dist}" = "redhat" ]; then
# we use the centos repository for both redhat and centos releases
lsb_dist='centos'
fi
$sh_c "cat >/etc/yum.repos.d/docker-${repo}.repo" <<-EOF
[docker-${repo}-repo]
name=Docker ${repo} Repository
baseurl=${yum_url}/repo/${repo}/${lsb_dist}/${dist_version}
enabled=1
gpgcheck=1
gpgkey=${yum_url}/gpg
EOF
if [ "$lsb_dist" = "fedora" ] && [ "$dist_version" -ge "22" ]; then
(
set -x
$sh_c 'sleep 3; dnf -y -q install docker-engine'
)
else
(
set -x
$sh_c 'sleep 3; yum -y -q install docker-engine'
)
fi
echo_docker_as_nonroot
exit 0
;;
gentoo)
if [ "$url" = "https://test.docker.com/" ]; then
# intentionally mixed spaces and tabs here -- tabs are stripped by "<<-'EOF'", spaces are kept in the output
cat >&2 <<-'EOF'
You appear to be trying to install the latest nightly build in Gentoo.
The portage tree should contain the latest stable release of Docker, but
if you want something more recent, you can always use the live ebuild
provided in the "docker" overlay available via layman. For more
instructions, please see the following URL:
https://github.com/tianon/docker-overlay#using-this-overlay
After adding the "docker" overlay, you should be able to:
emerge -av =app-emulation/docker-9999
EOF
exit 1
fi
(
set -x
$sh_c 'sleep 3; emerge app-emulation/docker'
)
exit 0
;;
esac
# intentionally mixed spaces and tabs here -- tabs are stripped by "<<-'EOF'", spaces are kept in the output
cat >&2 <<-'EOF'
Either your platform is not easily detectable, is not supported by this
installer script (yet - PRs welcome! [hack/install.sh]), or does not yet have
a package for Docker. Please visit the following URL for more detailed
installation instructions:
https://docs.docker.com/engine/installation/
EOF
exit 1
}
# wrapped up in a function so that we have some protection against only getting
# half the file during "curl | sh"
do_install
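The comment above explains why everything lives inside `do_install`: a connection dropped mid-download leaves the shell with an unparsed or unfinished function definition instead of a half-executed installer. A minimal standalone sketch of the same pattern:

```shell
#!/bin/sh
# Sketch of the wrap-in-a-function pattern used by this installer.
# Nothing runs while the function body is being parsed; work starts
# only at the very last line, so a truncated "curl | sh" download
# either fails to parse or simply never reaches the call to main.
main() {
	echo "step 1"
	echo "step 2"
}

# wrapped up in a function so a partial download executes nothing
main "$@"
```

If the transfer cuts off anywhere before the final line, the shell reports a syntax error (unterminated function) or exits having only defined `main`, never having run it.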

vendor/github.com/containers/storage/hack/make.sh generated vendored Executable file

@ -0,0 +1,262 @@
#!/usr/bin/env bash
set -e
# This script builds various binary artifacts from a checkout of the storage
# source code.
#
# Requirements:
# - The current directory should be a checkout of the storage source code
# (https://github.com/containers/storage). Whatever version is checked out will
# be built.
# - The VERSION file, at the root of the repository, should exist, and
# will be used as the oci-storage binary version and package version.
# - The hash of the git commit will also be included in the oci-storage binary,
# with the suffix -unsupported if the repository isn't clean.
# - The right way to call this script is to invoke "make" from
# your checkout of the storage repository.
set -o pipefail
export PKG='github.com/containers/storage'
export SCRIPTDIR="$( cd "$( dirname "${BASH_SOURCE[0]}" )" && pwd )"
export MAKEDIR="$SCRIPTDIR/make"
export PKG_CONFIG=${PKG_CONFIG:-pkg-config}
: ${TEST_REPEAT:=0}
# List of bundles to create when no argument is passed
DEFAULT_BUNDLES=(
validate-dco
validate-gofmt
validate-lint
validate-pkg
validate-test
validate-toml
validate-vet
binary
test-unit
gccgo
cross
)
VERSION=$(< ./VERSION)
if command -v git &> /dev/null && [ -d .git ] && git rev-parse &> /dev/null; then
GITCOMMIT=$(git rev-parse --short HEAD)
if [ -n "$(git status --porcelain --untracked-files=no)" ]; then
GITCOMMIT="$GITCOMMIT-unsupported"
echo "#~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~"
echo "# GITCOMMIT = $GITCOMMIT"
echo "# The version you are building is listed as unsupported because"
echo "# there are some files in the git repository that are in an uncommitted state."
echo "# Commit these changes, or add them to .gitignore, to remove the -unsupported suffix from the version."
echo "# Here is the current list:"
git status --porcelain --untracked-files=no
echo "#~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~"
fi
! BUILDTIME=$(date --rfc-3339 ns 2> /dev/null | sed -e 's/ /T/') &> /dev/null
if [ -z "$BUILDTIME" ]; then
# If using bash 3.1, which doesn't support --rfc-3339 (e.g. Windows CI)
BUILDTIME=$(date -u)
fi
elif [ -n "$GITCOMMIT" ]; then
:
else
echo >&2 'error: .git directory missing and GITCOMMIT not specified'
echo >&2 ' Please either build with the .git directory accessible, or specify the'
echo >&2 ' exact (--short) commit hash you are building using GITCOMMIT for'
echo >&2 ' future accountability in diagnosing build issues. Thanks!'
exit 1
fi
if [ "$AUTO_GOPATH" ]; then
rm -rf .gopath
mkdir -p .gopath/src/"$(dirname "${PKG}")"
ln -sf ../../../.. .gopath/src/"${PKG}"
export GOPATH="${PWD}/.gopath:${PWD}/vendor"
if [ "$(go env GOOS)" = 'solaris' ]; then
# sys/unix is installed outside the standard library on solaris
# TODO need to allow for version change, need to get version from go
export GOPATH="${GOPATH}:/usr/lib/gocode/1.6.2"
fi
fi
if [ ! "$GOPATH" ]; then
echo >&2 'error: missing GOPATH; please see https://golang.org/doc/code.html#GOPATH'
echo >&2 ' alternatively, set AUTO_GOPATH=1'
exit 1
fi
if [ "$EXPERIMENTAL" ]; then
echo >&2 '# WARNING! EXPERIMENTAL is set: building experimental features'
echo >&2
BUILDTAGS+=" experimental"
fi
# test whether "btrfs/version.h" exists and apply btrfs_noversion appropriately
if \
command -v gcc &> /dev/null \
&& ! gcc -E - -o /dev/null &> /dev/null <<<'#include <btrfs/version.h>' \
; then
BUILDTAGS+=' btrfs_noversion'
fi
# test whether "libdevmapper.h" is new enough to support deferred remove
# functionality.
if \
command -v gcc &> /dev/null \
&& ! ( echo -e '#include <libdevmapper.h>\nint main() { dm_task_deferred_remove(NULL); }'| gcc -xc - -o /dev/null -ldevmapper &> /dev/null ) \
; then
BUILDTAGS+=' libdm_no_deferred_remove'
fi
# Use these flags when compiling the tests and final binary
source "$SCRIPTDIR/make/.go-autogen"
if [ -z "$DEBUG" ]; then
LDFLAGS='-w'
fi
BUILDFLAGS=( $BUILDFLAGS "${ORIG_BUILDFLAGS[@]}" )
if [ "$(uname -s)" = 'FreeBSD' ]; then
# Tell cgo the compiler is Clang, not GCC
# https://code.google.com/p/go/source/browse/src/cmd/cgo/gcc.go?spec=svne77e74371f2340ee08622ce602e9f7b15f29d8d3&r=e6794866ebeba2bf8818b9261b54e2eef1c9e588#752
export CC=clang
# "-extld clang" is a workaround for
# https://code.google.com/p/go/issues/detail?id=6845
LDFLAGS="$LDFLAGS -extld clang"
fi
HAVE_GO_TEST_COVER=
if \
go help testflag | grep -- -cover > /dev/null \
&& go tool -n cover > /dev/null 2>&1 \
; then
HAVE_GO_TEST_COVER=1
fi
TIMEOUT=5m
# If $TESTFLAGS is set in the environment, it is passed as extra arguments to 'go test'.
# You can use this to select certain tests to run, eg.
#
# TESTFLAGS='-test.run ^TestBuild$' ./hack/make.sh test-unit
#
# For integration-cli tests, we use [gocheck](https://labix.org/gocheck); if you want
# to run certain tests on your local host, you should run with command:
#
# TESTFLAGS='-check.f DockerSuite.TestBuild*' ./hack/make.sh binary test-integration-cli
#
go_test_dir() {
dir=$1
coverpkg=$2
testcover=()
testcoverprofile=()
testbinary="$DEST/test.main"
if [ "$HAVE_GO_TEST_COVER" ]; then
# if our current go install has -cover, we want to use it :)
mkdir -p "$DEST/coverprofiles"
coverprofile="storage${dir#.}"
coverprofile="$ABS_DEST/coverprofiles/${coverprofile//\//-}"
testcover=( -test.cover )
testcoverprofile=( -test.coverprofile "$coverprofile" $coverpkg )
fi
(
echo '+ go test' $TESTFLAGS "${PKG}${dir#.}"
cd "$dir"
export DEST="$ABS_DEST" # we're in a subshell, so this is safe -- our integration-cli tests need DEST, and "cd" screws it up
go test -c -o "$testbinary" ${testcover[@]} -ldflags "$LDFLAGS" "${BUILDFLAGS[@]}"
i=0
while ((++i)); do
test_env "$testbinary" ${testcoverprofile[@]} $TESTFLAGS
if [ $i -gt "$TEST_REPEAT" ]; then
break
fi
echo "Repeating test ($i)"
done
)
}
test_env() {
# use "env -i" to tightly control the environment variables that bleed into the tests
env -i \
DEST="$DEST" \
GOPATH="$GOPATH" \
GOTRACEBACK=all \
HOME="$ABS_DEST/fake-HOME" \
PATH="$PATH" \
TEMP="$TEMP" \
"$@"
}
# a helper to provide ".exe" when it's appropriate
binary_extension() {
echo -n $(go env GOEXE)
}
hash_files() {
while [ $# -gt 0 ]; do
f="$1"
shift
dir="$(dirname "$f")"
base="$(basename "$f")"
for hashAlgo in md5 sha256; do
if command -v "${hashAlgo}sum" &> /dev/null; then
(
# subshell and cd so that we get output files like:
# $HASH oci-storage-$VERSION
# instead of:
# $HASH /go/src/github.com/.../$VERSION/binary/oci-storage-$VERSION
cd "$dir"
"${hashAlgo}sum" "$base" > "$base.$hashAlgo"
)
fi
done
done
}
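As a usage sketch of `hash_files` above (re-declared here in a trimmed, standalone copy, with a hypothetical temp path and binary name), checksum sidecar files land next to each argument:

```shell
# Trimmed standalone copy of hash_files from above: write $f.md5 and
# $f.sha256 next to each argument, using whichever hash tools exist.
hash_files() {
	while [ $# -gt 0 ]; do
		f="$1"
		shift
		dir="$(dirname "$f")"
		base="$(basename "$f")"
		for hashAlgo in md5 sha256; do
			if command -v "${hashAlgo}sum" > /dev/null 2>&1; then
				(
					# subshell + cd so the checksum file records the bare
					# basename, not the full build path
					cd "$dir"
					"${hashAlgo}sum" "$base" > "$base.$hashAlgo"
				)
			fi
		done
	done
}

tmp="$(mktemp -d)"
echo 'fake binary' > "$tmp/oci-storage-0.1"
hash_files "$tmp/oci-storage-0.1"
ls "$tmp"
```

The `cd` into the directory is the important detail: it keeps paths out of the checksum files so `sha256sum -c` works from wherever the artifacts are unpacked.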
bundle() {
local bundle="$1"; shift
echo "---> Making bundle: $(basename "$bundle") (in $DEST)"
source "$SCRIPTDIR/make/$bundle" "$@"
}
main() {
# We want this to fail if the bundles already exist and cannot be removed.
# This is to avoid mixing bundles from different versions of the code.
mkdir -p bundles
if [ -e "bundles/$VERSION" ] && [ -z "$KEEPBUNDLE" ]; then
echo "bundles/$VERSION already exists. Removing."
rm -fr "bundles/$VERSION" && mkdir "bundles/$VERSION" || exit 1
echo
fi
if [ "$(go env GOHOSTOS)" != 'windows' ]; then
# Windows and symlinks don't get along well
rm -f bundles/latest
ln -s "$VERSION" bundles/latest
fi
if [ $# -lt 1 ]; then
bundles=(${DEFAULT_BUNDLES[@]})
else
bundles=($@)
fi
for bundle in ${bundles[@]}; do
export DEST="bundles/$VERSION/$(basename "$bundle")"
# Cygdrive paths don't play well with go build -o.
if [[ "$(uname -s)" == CYGWIN* ]]; then
export DEST="$(cygpath -mw "$DEST")"
fi
mkdir -p "$DEST"
ABS_DEST="$(cd "$DEST" && pwd -P)"
bundle "$bundle"
echo
done
}
main "$@"
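`main` above lays bundles out as `bundles/$VERSION/<bundle>` with a `bundles/latest` symlink pointing at the current version. A standalone sketch of that layout logic, using a temp directory and a hypothetical version and bundle list:

```shell
# Standalone sketch of the directory layout produced by main() above.
# The version number and bundle names here are illustrative only.
root="$(mktemp -d)"
VERSION="0.1"
bundles="binary test-unit"

mkdir -p "$root/bundles/$VERSION"
# "latest" is a relative symlink to the current version directory
ln -sf "$VERSION" "$root/bundles/latest"
for bundle in $bundles; do
	DEST="$root/bundles/$VERSION/$bundle"
	mkdir -p "$DEST"
done
find "$root/bundles" -mindepth 1 | sort
```

Keeping the symlink relative means the whole `bundles` tree can be moved or archived without breaking `latest`, which is why the script links to `"$VERSION"` rather than an absolute path.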

vendor/github.com/containers/storage/hack/make/.binary generated vendored Normal file

@ -0,0 +1,64 @@
#!/bin/bash
set -e
BINARY_NAME="$BINARY_SHORT_NAME-$VERSION"
BINARY_EXTENSION="$(binary_extension)"
BINARY_FULLNAME="$BINARY_NAME$BINARY_EXTENSION"
source "${MAKEDIR}/.go-autogen"
(
export GOGC=${DOCKER_BUILD_GOGC:-1000}
if [ "$(go env GOOS)/$(go env GOARCH)" != "$(go env GOHOSTOS)/$(go env GOHOSTARCH)" ]; then
# must be cross-compiling!
case "$(go env GOOS)/$(go env GOARCH)" in
windows/amd64)
export CC=x86_64-w64-mingw32-gcc
export CGO_ENABLED=1
;;
esac
fi
if [ "$(go env GOOS)" == "linux" ] ; then
case "$(go env GOARCH)" in
arm*|386)
# linking for Linux on arm or x86 needs external linking to avoid
# https://github.com/golang/go/issues/9510 until we move to Go 1.6
if [ "$IAMSTATIC" == "true" ] ; then
export EXTLDFLAGS_STATIC="$EXTLDFLAGS_STATIC -zmuldefs"
export LDFLAGS_STATIC_DOCKER="$LDFLAGS_STATIC -extldflags \"$EXTLDFLAGS_STATIC\""
else
export LDFLAGS="$LDFLAGS -extldflags -zmuldefs"
fi
;;
esac
fi
if [ "$IAMSTATIC" == "true" ] && [ "$(go env GOHOSTOS)" == "linux" ]; then
if [ "${GOOS}/${GOARCH}" == "darwin/amd64" ]; then
export CGO_ENABLED=1
export CC=o64-clang
export LDFLAGS='-linkmode external -s'
export LDFLAGS_STATIC_DOCKER='-extld='${CC}
else
export BUILDFLAGS=( "${BUILDFLAGS[@]/pkcs11 /}" ) # we cannot dlopen in pkcs11 in a static binary
fi
fi
echo "Building: $DEST/$BINARY_FULLNAME"
go build \
-o "$DEST/$BINARY_FULLNAME" \
"${BUILDFLAGS[@]}" ${BUILDTAGS:+-tags "${BUILDTAGS}"} \
-ldflags "
$LDFLAGS
$LDFLAGS_STATIC_DOCKER
" \
$SOURCE_PATH
)
echo "Created binary: $DEST/$BINARY_FULLNAME"
ln -sf "$BINARY_FULLNAME" "$DEST/$BINARY_SHORT_NAME$BINARY_EXTENSION"
hash_files "$DEST/$BINARY_FULLNAME"


@ -0,0 +1,5 @@
#!/bin/bash
DOCKER_CLIENT_BINARY_NAME='docker'
DOCKER_DAEMON_BINARY_NAME='dockerd'
DOCKER_PROXY_BINARY_NAME='docker-proxy'


@ -0,0 +1 @@
9


@ -0,0 +1,29 @@
Source: docker-engine
Section: admin
Priority: optional
Maintainer: Docker <support@docker.com>
Standards-Version: 3.9.6
Homepage: https://dockerproject.org
Vcs-Browser: https://github.com/docker/docker
Vcs-Git: git://github.com/docker/docker.git
Package: docker-engine
Architecture: linux-any
Depends: iptables, ${misc:Depends}, ${perl:Depends}, ${shlibs:Depends}
Recommends: aufs-tools,
ca-certificates,
cgroupfs-mount | cgroup-lite,
git,
xz-utils,
${apparmor:Recommends}
Conflicts: docker (<< 1.5~), docker.io, lxc-docker, lxc-docker-virtual-package, docker-engine-cs
Description: Docker: the open-source application container engine
Docker is an open source project to build, ship and run any application as a
lightweight container
.
Docker containers are both hardware-agnostic and platform-agnostic. This means
they can run anywhere, from your laptop to the largest EC2 compute instance and
everything in between - and they don't require you to use a particular
language, framework or packaging system. That makes them great building blocks
for deploying and scaling web apps, databases, and backend services without
depending on a particular stack or provider.


@ -0,0 +1 @@
contrib/completion/bash/docker


@ -0,0 +1,12 @@
#contrib/syntax/vim/doc/* /usr/share/vim/vimfiles/doc/
#contrib/syntax/vim/ftdetect/* /usr/share/vim/vimfiles/ftdetect/
#contrib/syntax/vim/syntax/* /usr/share/vim/vimfiles/syntax/
contrib/*-integration usr/share/docker-engine/contrib/
contrib/check-config.sh usr/share/docker-engine/contrib/
contrib/completion/fish/docker.fish usr/share/fish/vendor_completions.d/
contrib/completion/zsh/_docker usr/share/zsh/vendor-completions/
contrib/init/systemd/docker.service lib/systemd/system/
contrib/init/systemd/docker.socket lib/systemd/system/
contrib/mk* usr/share/docker-engine/contrib/
contrib/nuke-graph-directory.sh usr/share/docker-engine/contrib/
contrib/syntax/nano/Dockerfile.nanorc usr/share/nano/


@ -0,0 +1 @@
man/man*/*
