Compare commits


No commits in common. "master" and "v1.0.0-rc1" have entirely different histories.

2299 changed files with 616885 additions and 187482 deletions

.github/CODEOWNERS (vendored, 7 changed lines)

@ -1,7 +0,0 @@
# GitHub code owners
# See https://help.github.com/articles/about-codeowners/
#
# KEEP THIS FILE SORTED. Order is important. Last match takes precedence.
* @mrunalp @runcom
pkg/storage/** @nalind @runcom @rhatdan


@ -1,58 +0,0 @@
<!--
If you are reporting a new issue, make sure that we do not have any duplicates
already open. You can ensure this by searching the issue list for this
repository. If there is a duplicate, please close your issue and add a comment
to the existing issue instead.
If you suspect your issue is a bug, please edit your issue description to
include the BUG REPORT INFORMATION shown below. If you fail to provide this
information within 7 days, we cannot debug your issue and will close it. We
will, however, reopen it if you later provide the information.
For more information about reporting issues, see
https://github.com/kubernetes-incubator/cri-o/blob/master/CONTRIBUTING.md#reporting-issues
---------------------------------------------------
GENERAL SUPPORT INFORMATION
---------------------------------------------------
The GitHub issue tracker is for bug reports and feature requests.
General support for **CRI-O** can be found at the following locations:
- IRC - #cri-o channel on irc.freenode.org
- Slack - kubernetes.slack.com #sig-node channel
- Post a question on StackOverflow, using the CRI-O tag
---------------------------------------------------
BUG REPORT INFORMATION
---------------------------------------------------
Use the commands below to provide key information from your environment:
You do NOT have to include this information if this is a FEATURE REQUEST
-->
**Description**
<!--
Briefly describe the problem you are having in a few paragraphs.
-->
**Steps to reproduce the issue:**
1.
2.
3.
**Describe the results you received:**
**Describe the results you expected:**
**Additional information you deem important (e.g. issue happens only occasionally):**
**Output of `crio --version`:**
```
(paste your output here)
```
**Additional environment details (AWS, VirtualBox, physical, etc.):**


@ -1,23 +0,0 @@
<!--
Please make sure you've read and understood our contributing guidelines;
https://github.com/kubernetes-incubator/cri-o/blob/master/CONTRIBUTING.md
** Make sure all your commits include a signature generated with `git commit -s` **
If this is a bug fix, make sure your description includes "fixes #xxxx", or
"closes #xxxx"
Please provide the following information:
-->
**- What I did**
**- How I did it**
**- How to verify it**
**- Description for the changelog**
<!--
Write a short (one line) summary that describes the changes in this
pull request for inclusion in the changelog:
-->

.gitignore (vendored, 11 changed lines)

@ -1,18 +1,17 @@
/.artifacts/
/_output/
/conmon/conmon
/conmon/conmon.o
/docs/*.[158]
/docs/*.[158].gz
/kpod
/crioctl
/crio
/crio.conf
*.o
*.orig
/pause/pause
/pause/pause.o
/bin/
/test/bin2img/bin2img
/test/checkseccomp/checkseccomp
/test/copyimg/copyimg
Vagrantfile
.vagrant/
.vscode/


@ -1,10 +0,0 @@
Aleksa Sarai <asarai@suse.de> <asarai@suse.com>
Antonio Murdaca <runcom@redhat.com> <runcom@users.noreply.github.com>
CuiHaozhi <cuihaozhi@chinacloud.com.cn> <cuihz@wise2c.com>
Daniel J Walsh <dwalsh@redhat.com>
Haiyan Meng <hmeng@redhat.com> <haiyanalady@gmail.com>
Lorenzo Fontana <lo@linux.com> <fontanalorenz@gmail.com>
Mrunal Patel <mrunalp@gmail.com> <mpatel@redhat.com>
Mrunal Patel <mrunalp@gmail.com> <mrunal@me.com>
Pengfei Ni <feiskyer@gmail.com> <feiskyer@users.noreply.github.com>
Tobias Klauser <tklauser@distanz.ch> <tobias.klauser@gmail.com>


@ -8,7 +8,7 @@ services:
before_install:
- sudo apt-get -qq update
- sudo apt-get -qq install btrfs-tools libdevmapper-dev libgpgme11-dev libapparmor-dev libseccomp-dev
- sudo apt-get -qq install autoconf automake bison e2fslibs-dev libfuse-dev libtool liblzma-dev gettext
- sudo apt-get -qq install autoconf automake bison e2fslibs-dev libfuse-dev libtool liblzma-dev
install:
- make install.tools
@ -32,22 +32,13 @@ jobs:
- make .gitvalidation
- make gofmt
- make lint
- make testunit
- make docs
- make
go: 1.8.x
- stage: Build and Verify
script:
- script:
- make .gitvalidation
- make gofmt
- make lint
- make testunit
- make docs
- make
go: 1.9.x
- script:
- make .gitvalidation
- make testunit
- make docs
- make
go: tip


@ -1,142 +0,0 @@
# Contributing to CRI-O
We'd love to have you join the community! Below summarizes the processes
that we follow.
## Topics
* [Reporting Issues](#reporting-issues)
* [Submitting Pull Requests](#submitting-pull-requests)
* [Communications](#communications)
* [Becoming a Maintainer](#becoming-a-maintainer)
## Reporting Issues
Before reporting an issue, check our backlog of
[open issues](https://github.com/kubernetes-incubator/cri-o/issues)
to see if someone else has already reported it. If so, feel free to add
your scenario, or additional information, to the discussion. Or simply
"subscribe" to it to be notified when it is updated.
If you find a new issue with the project we'd love to hear about it! The most
important aspect of a bug report is that it includes enough information for
us to reproduce it. So, please include as much detail as possible and try
to remove the extra stuff that doesn't really relate to the issue itself.
The easier it is for us to reproduce it, the faster it'll be fixed!
Please don't include any private/sensitive information in your issue!
## Submitting Pull Requests
No Pull Request (PR) is too small! Typos, additional comments in the code,
new testcases, bug fixes, new features, more documentation, ... it's all
welcome!
While bug fixes can first be identified via an "issue", that is not required.
It's ok to just open up a PR with the fix, but make sure you include the same
information you would have included in an issue - like how to reproduce it.
PRs for new features should include some background on what use cases the
new code is trying to address. When possible and when it makes sense, try to break up
larger PRs into smaller ones - it's easier to review smaller
code changes. But only if those smaller ones make sense as stand-alone PRs.
Regardless of the type of PR, all PRs should include:
* well documented code changes
* additional testcases. Ideally, they should fail w/o your code change applied
* documentation changes
Squash your commits into logical pieces of work that might want to be reviewed
separately from the rest of the PR. But, squashing down to just one commit is ok
too since in the end the entire PR will be reviewed anyway. When in doubt,
squash.
PRs that fix issues should include a reference like `Closes #XXXX` in the
commit message so that github will automatically close the referenced issue
when the PR is merged.
<!--
All PRs require at least two LGTMs (Looks Good To Me) from maintainers.
-->
### Sign your PRs
The sign-off is a line at the end of the explanation for the patch. Your
signature certifies that you wrote the patch or otherwise have the right to pass
it on as an open-source patch. The rules are simple: if you can certify
the below (from [developercertificate.org](http://developercertificate.org/)):
```
Developer Certificate of Origin
Version 1.1
Copyright (C) 2004, 2006 The Linux Foundation and its contributors.
660 York Street, Suite 102,
San Francisco, CA 94110 USA
Everyone is permitted to copy and distribute verbatim copies of this
license document, but changing it is not allowed.
Developer's Certificate of Origin 1.1
By making a contribution to this project, I certify that:
(a) The contribution was created in whole or in part by me and I
have the right to submit it under the open source license
indicated in the file; or
(b) The contribution is based upon previous work that, to the best
of my knowledge, is covered under an appropriate open source
license and I have the right under that license to submit that
work with modifications, whether created in whole or in part
by me, under the same open source license (unless I am
permitted to submit under a different license), as indicated
in the file; or
(c) The contribution was provided directly to me by some other
person who certified (a), (b) or (c) and I have not modified
it.
(d) I understand and agree that this project and the contribution
are public and that a record of the contribution (including all
personal information I submit with it, including my sign-off) is
maintained indefinitely and may be redistributed consistent with
this project or the open source license(s) involved.
```
Then you just add a line to every git commit message:
Signed-off-by: Joe Smith <joe.smith@email.com>
Use your real name (sorry, no pseudonyms or anonymous contributions.)
If you set your `user.name` and `user.email` git configs, you can sign your
commit automatically with `git commit -s`.
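As a quick, illustrative sketch (the name, email address, and commit message below are placeholders, not anything the project requires):
```
# one-time setup: substitute your own identity
git config user.name "Joe Smith"
git config user.email "joe.smith@email.com"

# -s appends the Signed-off-by trailer using the configured identity
git commit -s -m "fix a typo in the README"
```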
## Communications
For general questions or discussions, please use the
IRC group on `irc.freenode.net` called `cri-o`
that has been set up.
For discussions around issues/bugs and features, you can use the github
[issues](https://github.com/kubernetes-incubator/cri-o/issues)
and
[PRs](https://github.com/kubernetes-incubator/cri-o/pulls)
tracking system.
<!--
## Becoming a Maintainer
To become a maintainer you must first be nominated by an existing maintainer.
If a majority (>50%) of maintainers agree then the proposal is adopted and
you will be added to the list.
Removing a maintainer requires at least 75% of the remaining maintainers
approval, or if the person requests to be removed then it is automatic.
Normally, a maintainer will only be removed if they are considered to be
inactive for a long period of time or are viewed as disruptive to the community.
The current list of maintainers can be found in the
[MAINTAINERS](MAINTAINERS) file.
-->


@ -12,7 +12,6 @@ RUN apt-get update && apt-get install -y \
curl \
e2fslibs-dev \
gawk \
gettext \
iptables \
pkg-config \
libaio-dev \
@ -24,7 +23,6 @@ RUN apt-get update && apt-get install -y \
libseccomp2/jessie-backports \
libseccomp-dev/jessie-backports \
libtool \
libudev-dev \
protobuf-c-compiler \
protobuf-compiler \
python-minimal \
@ -36,9 +34,7 @@ RUN apt-get update && apt-get install -y \
libgpgme11-dev \
liblzma-dev \
netcat \
socat \
--no-install-recommends \
bsdmainutils \
&& apt-get clean
# install bats
@ -57,7 +53,7 @@ RUN mkdir -p /usr/src/criu \
&& rm -rf /usr/src/criu
# Install runc
ENV RUNC_COMMIT c6e4a1ebeb1a72b529c6f1b6ee2b1ae5b868b14f
ENV RUNC_COMMIT 84a082bfef6f932de921437815355186db37aeb1
RUN set -x \
&& export GOPATH="$(mktemp -d)" \
&& git clone https://github.com/opencontainers/runc.git "$GOPATH/src/github.com/opencontainers/runc" \
@ -65,7 +61,7 @@ RUN set -x \
&& git fetch origin --tags \
&& git checkout -q "$RUNC_COMMIT" \
&& make static BUILDTAGS="seccomp selinux" \
&& cp runc /usr/bin/runc \
&& cp runc /usr/local/bin/runc \
&& rm -rf "$GOPATH"
# Install CNI plugins
@ -98,7 +94,7 @@ RUN set -x \
&& rm -rf "$GOPATH"
# Install crictl
ENV CRICTL_COMMIT b42fc3f364dd48f649d55926c34492beeb9b2e99
ENV CRICTL_COMMIT 16e6fe4d7199c5689db4630a9330e6a8a12cecd1
RUN set -x \
&& export GOPATH="$(mktemp -d)" \
&& git clone https://github.com/kubernetes-incubator/cri-tools.git "$GOPATH/src/github.com/kubernetes-incubator/cri-tools" \
@ -111,8 +107,11 @@ RUN set -x \
# Make sure we have some policy for pulling images
RUN mkdir -p /etc/containers
COPY test/policy.json /etc/containers/policy.json
COPY test/redhat_sigstore.yaml /etc/containers/registries.d/registry.access.redhat.com.yaml
WORKDIR /go/src/github.com/kubernetes-incubator/cri-o
ADD . /go/src/github.com/kubernetes-incubator/cri-o
RUN make test/copyimg/copyimg \
&& mkdir -p .artifacts/redis-image \
&& ./test/copyimg/copyimg --import-from=docker://redis --export-to=dir:.artifacts/redis-image --signature-policy ./test/policy.json


@ -11,19 +11,15 @@ LIBEXECDIR ?= ${PREFIX}/libexec
MANDIR ?= ${PREFIX}/share/man
ETCDIR ?= ${DESTDIR}/etc
ETCDIR_CRIO ?= ${ETCDIR}/crio
BUILDTAGS ?= seccomp $(shell hack/btrfs_tag.sh) $(shell hack/libdm_installed.sh) $(shell hack/libdm_no_deferred_remove_tag.sh) $(shell hack/btrfs_installed_tag.sh) $(shell hack/ostree_tag.sh) $(shell hack/selinux_tag.sh)
CRICTL_CONFIG_DIR=${DESTDIR}/etc
BUILDTAGS ?= selinux seccomp $(shell hack/btrfs_tag.sh) $(shell hack/libdm_tag.sh)
BASHINSTALLDIR=${PREFIX}/share/bash-completion/completions
OCIUMOUNTINSTALLDIR=$(PREFIX)/share/oci-umount/oci-umount.d
SELINUXOPT ?= $(shell test -x /usr/sbin/selinuxenabled && selinuxenabled && echo -Z)
SELINUXOPT ?= $(shell test -x /usr/sbin/selinuxenabled && selinuxenabled && echo -Z)
PACKAGES ?= $(shell go list -tags "${BUILDTAGS}" ./... | grep -v github.com/kubernetes-incubator/cri-o/vendor)
COMMIT_NO := $(shell git rev-parse HEAD 2> /dev/null || true)
GIT_COMMIT := $(if $(shell git status --porcelain --untracked-files=no),"${COMMIT_NO}-dirty","${COMMIT_NO}")
GIT_COMMIT := $(shell git rev-parse --short HEAD)
BUILD_INFO := $(shell date +%s)
VERSION := ${shell cat ./VERSION}
# If GOPATH not specified, use one in the local directory
ifeq ($(GOPATH),)
export GOPATH := $(CURDIR)/_output
@ -34,9 +30,8 @@ GOPKGBASEDIR := $(shell dirname "$(GOPKGDIR)")
# Update VPATH so make finds .gopathok
VPATH := $(VPATH):$(GOPATH)
SHRINKFLAGS := -s -w
BASE_LDFLAGS := ${SHRINKFLAGS} -X main.gitCommit=${GIT_COMMIT} -X main.buildInfo=${BUILD_INFO}
LDFLAGS := -ldflags '${BASE_LDFLAGS}'
LDFLAGS := -ldflags '-X main.gitCommit=${GIT_COMMIT} -X main.buildInfo=${BUILD_INFO} -X main.version=${VERSION}'
all: binaries crio.conf docs
@ -46,7 +41,7 @@ help:
@echo "Usage: make <target>"
@echo
@echo " * 'install' - Install binaries to system locations"
@echo " * 'binaries' - Build crio, conmon and pause"
@echo " * 'binaries' - Build crio, conmon and crioctl"
@echo " * 'integration' - Execute integration tests"
@echo " * 'clean' - Clean artifacts"
@echo " * 'lint' - Execute the source code linter"
@ -64,8 +59,7 @@ lint: .gopathok
@./.tool/lint
gofmt:
find . -name '*.go' ! -path './vendor/*' -exec gofmt -s -w {} \+
git diff --exit-code
@./hack/verify-gofmt.sh
conmon:
$(MAKE) -C $@
@ -74,30 +68,36 @@ pause:
$(MAKE) -C $@
test/bin2img/bin2img: .gopathok $(wildcard test/bin2img/*.go)
$(GO) build -i $(LDFLAGS) -tags "$(BUILDTAGS) containers_image_ostree_stub" -o $@ $(PROJECT)/test/bin2img
$(GO) build $(LDFLAGS) -tags "$(BUILDTAGS)" -o $@ $(PROJECT)/test/bin2img
test/copyimg/copyimg: .gopathok $(wildcard test/copyimg/*.go)
$(GO) build -i $(LDFLAGS) -tags "$(BUILDTAGS) containers_image_ostree_stub" -o $@ $(PROJECT)/test/copyimg
$(GO) build $(LDFLAGS) -tags "$(BUILDTAGS)" -o $@ $(PROJECT)/test/copyimg
test/checkseccomp/checkseccomp: .gopathok $(wildcard test/checkseccomp/*.go)
$(GO) build -i $(LDFLAGS) -tags "$(BUILDTAGS) containers_image_ostree_stub" -o $@ $(PROJECT)/test/checkseccomp
$(GO) build $(LDFLAGS) -tags "$(BUILDTAGS)" -o $@ $(PROJECT)/test/checkseccomp
crio: .gopathok $(shell hack/find-godeps.sh $(GOPKGDIR) cmd/crio $(PROJECT))
$(GO) build -i $(LDFLAGS) -tags "$(BUILDTAGS) containers_image_ostree_stub" -o bin/$@ $(PROJECT)/cmd/crio
$(GO) build $(LDFLAGS) -tags "$(BUILDTAGS)" -o $@ $(PROJECT)/cmd/crio
crioctl: .gopathok $(shell hack/find-godeps.sh $(GOPKGDIR) cmd/crioctl $(PROJECT))
$(GO) build $(LDFLAGS) -tags "$(BUILDTAGS)" -o $@ $(PROJECT)/cmd/crioctl
kpod: .gopathok $(shell hack/find-godeps.sh $(GOPKGDIR) cmd/kpod $(PROJECT))
$(GO) build $(LDFLAGS) -tags "$(BUILDTAGS)" -o $@ $(PROJECT)/cmd/kpod
crio.conf: crio
./bin/crio --config="" config --default > crio.conf
./crio --config="" config --default > crio.conf
clean:
ifneq ($(GOPATH),)
rm -f "$(GOPATH)/.gopathok"
endif
rm -rf _output
rm -f docs/*.5 docs/*.8
rm -f docs/*.1 docs/*.5 docs/*.8
rm -fr test/testdata/redis-image
find . -name \*~ -delete
find . -name \#\* -delete
rm -f bin/crio
rm -f crioctl crio kpod
make -C conmon clean
make -C pause clean
rm -f test/bin2img/bin2img
@ -108,23 +108,22 @@ crioimage:
docker build -t ${CRIO_IMAGE} .
dbuild: crioimage
docker run --name=${CRIO_INSTANCE} -e BUILDTAGS --privileged -v ${PWD}:/go/src/${PROJECT} --rm ${CRIO_IMAGE} make binaries
docker run --name=${CRIO_INSTANCE} --privileged ${CRIO_IMAGE} -v ${PWD}:/go/src/${PROJECT} --rm make binaries
integration: crioimage
docker run -e STORAGE_OPTIONS="--storage-driver=vfs" -e TESTFLAGS -e TRAVIS -t --privileged --rm -v ${CURDIR}:/go/src/${PROJECT} ${CRIO_IMAGE} make localintegration
docker run -e TESTFLAGS -e TRAVIS -t --privileged --rm -v ${CURDIR}:/go/src/${PROJECT} ${CRIO_IMAGE} make localintegration
testunit:
$(GO) test -tags "$(BUILDTAGS)" -cover $(PACKAGES)
localintegration: clean binaries test-binaries
localintegration: clean binaries
./test/test_runner.sh ${TESTFLAGS}
binaries: crio conmon pause
test-binaries: test/bin2img/bin2img test/copyimg/copyimg test/checkseccomp/checkseccomp
binaries: crio crioctl kpod conmon pause test/bin2img/bin2img test/copyimg/copyimg test/checkseccomp/checkseccomp
MANPAGES_MD := $(wildcard docs/*.md)
MANPAGES := $(MANPAGES_MD:%.md=%)
docs/%.1: docs/%.1.md .gopathok
(go-md2man -in $< -out $@.tmp && touch $@.tmp && mv $@.tmp $@) || ($(GOPATH)/bin/go-md2man -in $< -out $@.tmp && touch $@.tmp && mv $@.tmp $@)
docs/%.5: docs/%.5.md .gopathok
(go-md2man -in $< -out $@.tmp && touch $@.tmp && mv $@.tmp $@) || ($(GOPATH)/bin/go-md2man -in $< -out $@.tmp && touch $@.tmp && mv $@.tmp $@)
@ -133,27 +132,26 @@ docs/%.8: docs/%.8.md .gopathok
docs: $(MANPAGES)
install: .gopathok install.bin install.man
install.bin:
install ${SELINUXOPT} -D -m 755 bin/crio $(BINDIR)/crio
install ${SELINUXOPT} -D -m 755 bin/conmon $(LIBEXECDIR)/crio/conmon
install ${SELINUXOPT} -D -m 755 bin/pause $(LIBEXECDIR)/crio/pause
install.man:
install: .gopathok
install ${SELINUXOPT} -D -m 755 crio $(BINDIR)/crio
install ${SELINUXOPT} -D -m 755 crioctl $(BINDIR)/crioctl
install ${SELINUXOPT} -D -m 755 kpod $(BINDIR)/kpod
install ${SELINUXOPT} -D -m 755 conmon/conmon $(LIBEXECDIR)/crio/conmon
install ${SELINUXOPT} -D -m 755 pause/pause $(LIBEXECDIR)/crio/pause
install ${SELINUXOPT} -d -m 755 $(MANDIR)/man1
install ${SELINUXOPT} -d -m 755 $(MANDIR)/man5
install ${SELINUXOPT} -d -m 755 $(MANDIR)/man8
install ${SELINUXOPT} -m 644 $(filter %.1,$(MANPAGES)) -t $(MANDIR)/man1
install ${SELINUXOPT} -m 644 $(filter %.5,$(MANPAGES)) -t $(MANDIR)/man5
install ${SELINUXOPT} -m 644 $(filter %.8,$(MANPAGES)) -t $(MANDIR)/man8
install.config: crio.conf
install.config:
install ${SELINUXOPT} -D -m 644 crio.conf $(ETCDIR_CRIO)/crio.conf
install ${SELINUXOPT} -D -m 644 seccomp.json $(ETCDIR_CRIO)/seccomp.json
install ${SELINUXOPT} -D -m 644 crio-umount.conf $(OCIUMOUNTINSTALLDIR)/crio-umount.conf
install ${SELINUXOPT} -D -m 644 crictl.yaml $(CRICTL_CONFIG_DIR)
install.completions:
install ${SELINUXOPT} -d -m 755 ${BASHINSTALLDIR}
install ${SELINUXOPT} -m 644 -D completions/bash/kpod ${BASHINSTALLDIR}
install.systemd:
install ${SELINUXOPT} -D -m 644 contrib/systemd/crio.service $(PREFIX)/lib/systemd/system/crio.service
@ -162,6 +160,7 @@ install.systemd:
uninstall:
rm -f $(BINDIR)/crio
rm -f $(BINDIR)/crioctl
rm -f $(LIBEXECDIR)/crio/conmon
rm -f $(LIBEXECDIR)/crio/pause
for i in $(filter %.1,$(MANPAGES)); do \

OWNERS (2 changed lines)

@ -1,4 +1,4 @@
approvers:
assignees:
- mrunalp
- runcom
- cyphar


@ -1,32 +1,18 @@
![CRI-O logo](https://cdn.rawgit.com/kubernetes-incubator/cri-o/master/logo/crio-logo.svg)
# CRI-O - OCI-based implementation of Kubernetes Container Runtime Interface
![cri-o logo](https://cdn.rawgit.com/kubernetes-incubator/cri-o/master/logo/crio-logo.svg)
# cri-o - OCI-based implementation of Kubernetes Container Runtime Interface
[![Build Status](https://img.shields.io/travis/kubernetes-incubator/cri-o.svg?maxAge=2592000&style=flat-square)](https://travis-ci.org/kubernetes-incubator/cri-o)
[![Go Report Card](https://goreportcard.com/badge/github.com/kubernetes-incubator/cri-o?style=flat-square)](https://goreportcard.com/report/github.com/kubernetes-incubator/cri-o)
### Status: Stable
## Compatibility matrix: CRI-O <-> Kubernetes clusters
| Version - Branch | Kubernetes branch/version | Maintenance status |
|----------------------------|-------------------------------|--------------------|
| CRI-O 1.0.x - release-1.0 | Kubernetes 1.7 branch, v1.7.x | = |
| CRI-O 1.8.x - release-1.8 | Kubernetes 1.8 branch, v1.8.x | = |
| CRI-O 1.9.x - release-1.9 | Kubernetes 1.9 branch, v1.9.x | = |
| CRI-O HEAD - master | Kubernetes master branch | ✓ |
Key:
* `✓` Changes in main Kubernetes repo about CRI are actively implemented in CRI-O
* `=` Maintenance is manual, only bugs will be patched.
### Status: beta
## What is the scope of this project?
CRI-O is meant to provide an integration path between OCI conformant runtimes and the kubelet.
cri-o is meant to provide an integration path between OCI conformant runtimes and the kubelet.
Specifically, it implements the Kubelet [Container Runtime Interface (CRI)](https://github.com/kubernetes/community/blob/master/contributors/devel/container-runtime-interface.md) using OCI conformant runtimes.
The scope of CRI-O is tied to the scope of the CRI.
The scope of cri-o is tied to the scope of the CRI.
At a high level, we expect the scope of CRI-O to be restricted to the following functionalities:
At a high level, we expect the scope of cri-o to be restricted to the following functionalities:
* Support multiple image formats including the existing Docker image format
* Support for multiple means to download images including trust & image verification
@ -38,7 +24,7 @@ At a high level, we expect the scope of CRI-O to be restricted to the following
## What is not in scope for this project?
* Building, signing and pushing images to various image storages
* A CLI utility for interacting with CRI-O. Any CLIs built as part of this project are only meant for testing this project and there will be no guarantees on the backward compatibility with it.
* A CLI utility for interacting with cri-o. Any CLIs built as part of this project are only meant for testing this project and there will be no guarantees on the backward compatibility with it.
This is an implementation of the Kubernetes Container Runtime Interface (CRI) that will allow Kubernetes to directly launch and manage Open Container Initiative (OCI) containers.
@ -54,8 +40,28 @@ It is currently in active development in the Kubernetes community through the [d
| Command | Description | Demo|
| ---------------------------------------------------- | --------------------------------------------------------------------------|-----|
| [crio(8)](/docs/crio.8.md) | OCI Kubernetes Container Runtime daemon ||
Note that kpod and its container management and debugging commands have moved to a separate repository, located [here](https://github.com/projectatomic/libpod).
| [kpod(1)](/docs/kpod.1.md) | Simple management tool for pods and images ||
| [kpod-cp(1)](/docs/kpod-cp.1.md) | Copy files/folders between a container and the local filesystem ||
| [kpod-diff(1)](/docs/kpod-diff.1.md) | Inspect changes on a container or image's filesystem ||
| [kpod-export(1)](/docs/kpod-export.1.md) | Export container's filesystem contents as a tar archive ||
| [kpod-history(1)](/docs/kpod-history.1.md) | Shows the history of an image |[![...](/docs/play.png)](https://asciinema.org/a/bCvUQJ6DkxInMELZdc5DinNSx)|
| [kpod-images(1)](/docs/kpod-images.1.md) | List images in local storage |[![...](/docs/play.png)](https://asciinema.org/a/133649)|
| [kpod-info(1)](/docs/kpod-info.1.md) | Display system information ||
| [kpod-inspect(1)](/docs/kpod-inspect.1.md) | Display the configuration of a container or image |[![...](/docs/play.png)](https://asciinema.org/a/133418)|
| [kpod-load(1)](/docs/kpod-load.1.md) | Load an image from docker archive or oci |[![...](/docs/play.png)](https://asciinema.org/a/kp8kOaexEhEa20P1KLZ3L5X4g)|
| [kpod-logs(1)](/docs/kpod-logs.1.md) | Display the logs of a container ||
| [kpod-mount(1)](/docs/kpod-mount.1.md) | Mount a working container's root filesystem ||
| [kpod-ps(1)](/docs/kpod-ps.1.md) | Prints out information about containers ||
| [kpod-pull(1)](/docs/kpod-pull.1.md) | Pull an image from a registry |[![...](/docs/play.png)](https://asciinema.org/a/lr4zfoynHJOUNu1KaXa1dwG2X)|
| [kpod-push(1)](/docs/kpod-push.1.md) | Push an image to a specified destination |[![...](/docs/play.png)](https://asciinema.org/a/133276)|
| [kpod-rename(1)](/docs/kpod-rename.1.md) | Rename a container ||
| [kpod-rm(1)](/docs/kpod-rm.1.md) | Removes one or more containers ||
| [kpod-rmi(1)](/docs/kpod-rmi.1.md) | Removes one or more images |[![...](/docs/play.png)](https://asciinema.org/a/133799)|
| [kpod-save(1)](/docs/kpod-save.1.md) | Saves an image to an archive |[![...](/docs/play.png)](https://asciinema.org/a/kp8kOaexEhEa20P1KLZ3L5X4g)|
| [kpod-stats(1)](/docs/kpod-stats.1.md) | Display a live stream of one or more containers' resource usage statistics||
| [kpod-tag(1)](/docs/kpod-tag.1.md) | Add an additional name to a local image |[![...](/docs/play.png)](https://asciinema.org/a/133803)|
| [kpod-umount(1)](/docs/kpod-umount.1.md) | Unmount a working container's root filesystem ||
| [kpod-version(1)](/docs/kpod-version.1.md) | Display the version information |[![...](/docs/play.png)](https://asciinema.org/a/mfrn61pjZT9Fc8L4NbfdSqfgu)|
## Configuration
| File | Description |
@ -66,28 +72,23 @@ Note that kpod and its container management and debugging commands have moved to
[CRI-O configures OCI Hooks to run when launching a container](./hooks.md)
## CRI-O Usage Transfer
## cri-o Usage Transfer
[Useful information for ops and dev transfer as it relates to infrastructure that utilizes CRI-O](/transfer.md)
[Useful information for ops and dev transfer as it relates to infrastructure that utilizes cri-o](/transfer.md)
## Communication
For async communication and long running discussions please use issues and pull requests on the github repo. This will be the best place to discuss design and implementation.
For sync communication we have an IRC channel #CRI-O, on chat.freenode.net, that everyone is welcome to join and chat about development.
For sync communication we have an IRC channel #cri-o, on chat.freenode.net, that everyone is welcome to join and chat about development.
## Getting started
### Runtime dependencies
### Prerequisites
- runc, Clear Containers runtime, or any other OCI compatible runtime
- socat
- iproute
- iptables
Latest version of `runc` is expected to be installed on the system. It is picked up as the default runtime by crio.
Latest version of `runc` is expected to be installed on the system. It is picked up as the default runtime by CRI-O.
### Build and Run Dependencies
### Build Dependencies
**Required**
@ -97,12 +98,9 @@ Fedora, CentOS, RHEL, and related distributions:
yum install -y \
btrfs-progs-devel \
device-mapper-devel \
git \
glib2-devel \
glibc-devel \
glibc-static \
go \
golang-github-cpuguy83-go-md2man \
gpgme-devel \
libassuan-devel \
libgpg-error-devel \
@ -110,8 +108,7 @@ yum install -y \
libselinux-devel \
ostree-devel \
pkgconfig \
runc \
skopeo-containers
runc
```
Debian, Ubuntu, and related distributions:
@ -119,8 +116,6 @@ Debian, Ubuntu, and related distributions:
```bash
apt-get install -y \
btrfs-tools \
git \
golang-go \
libassuan-dev \
libdevmapper-dev \
libglib2.0-dev \
@ -130,19 +125,13 @@ apt-get install -y \
libseccomp-dev \
libselinux1-dev \
pkg-config \
go-md2man \
runc \
skopeo-containers
runc
```
Debian, Ubuntu, and related distributions will also need a copy of the development libraries for `ostree`, either in the form of the `libostree-dev` package from the [flatpak](https://launchpad.net/~alexlarsson/+archive/ubuntu/flatpak) PPA, or built [from source](https://github.com/ostreedev/ostree) (more on that [here](https://ostree.readthedocs.io/en/latest/#building)).
If using an older release or a long-term support release, be careful to double-check that the version of `runc` is new enough (running `runc --version` should produce `spec: 1.0.0`), or else build your own.
**NOTE**
Be careful to double-check that the version of golang is new enough; version 1.8.x or higher is required. If needed, golang kits are available at https://golang.org/dl/
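As a quick sanity check before building (output varies by distribution; the versions in the comments are just the minimums mentioned above):
```bash
# should report spec: 1.0.0 (or newer)
runc --version

# should report go1.8 or newer
go version
```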
**Optional**
Fedora, CentOS, RHEL, and related distributions:
@ -158,7 +147,7 @@ apt-get install -y \
### Get Source Code
As with other Go projects, CRI-O must be cloned into a directory structure like:
As with other Go projects, cri-o must be cloned into a directory structure like:
```
GOPATH
@ -192,7 +181,7 @@ make
sudo make install
```
Otherwise, if you do not want to build `CRI-O` with seccomp support you can add `BUILDTAGS=""` when running make.
Otherwise, if you do not want to build `cri-o` with seccomp support you can add `BUILDTAGS=""` when running make.
```bash
make BUILDTAGS=""
@ -201,7 +190,7 @@ sudo make install
#### Build Tags
`CRI-O` supports optional build tags for compiling support of various features.
`cri-o` supports optional build tags for compiling support of various features.
To add build tags to the make option the `BUILDTAGS` variable must be set.
```bash
@ -227,15 +216,14 @@ your system.
### Running with kubernetes
You can run a local version of kubernetes with CRI-O using `local-up-cluster.sh`:
You can run a local version of kubernetes with cri-o using `local-up-cluster.sh`:
1. Clone the [kubernetes repository](https://github.com/kubernetes/kubernetes)
1. Start the CRI-O daemon (`crio`)
1. Start the cri-o daemon (`crio`)
1. From the kubernetes project directory, run:
```shell
CGROUP_DRIVER=systemd \
CONTAINER_RUNTIME=remote \
CONTAINER_RUNTIME_ENDPOINT='/var/run/crio/crio.sock --runtime-request-timeout=15m' \
CONTAINER_RUNTIME_ENDPOINT='/var/run/crio.sock --runtime-request-timeout=15m' \
./hack/local-up-cluster.sh
```
@ -249,4 +237,5 @@ To run a full cluster, see [the instructions](kubernetes.md).
1. Support for log management, networking integration using CNI, pluggable image/storage management (done)
1. Support for exec/attach (done)
1. Target fully automated kubernetes testing without failures [e2e status](https://github.com/kubernetes-incubator/cri-o/issues/533)
1. Release 1.0
1. Track upstream k8s releases

VERSION (new file, 1 line)

@ -0,0 +1 @@
1.0.0-rc1


@ -1,103 +0,0 @@
package client
import (
"encoding/json"
"fmt"
"net"
"net/http"
"syscall"
"time"
"github.com/kubernetes-incubator/cri-o/types"
)
const (
maxUnixSocketPathSize = len(syscall.RawSockaddrUnix{}.Path)
)
// CrioClient is an interface to get information from crio daemon endpoint.
type CrioClient interface {
DaemonInfo() (types.CrioInfo, error)
ContainerInfo(string) (*types.ContainerInfo, error)
}
type crioClientImpl struct {
client *http.Client
crioSocketPath string
}
func configureUnixTransport(tr *http.Transport, proto, addr string) error {
if len(addr) > maxUnixSocketPathSize {
return fmt.Errorf("Unix socket path %q is too long", addr)
}
// No need for compression in local communications.
tr.DisableCompression = true
tr.Dial = func(_, _ string) (net.Conn, error) {
return net.DialTimeout(proto, addr, 32*time.Second)
}
return nil
}
// New returns a crio client
func New(crioSocketPath string) (CrioClient, error) {
tr := new(http.Transport)
configureUnixTransport(tr, "unix", crioSocketPath)
c := &http.Client{
Transport: tr,
}
return &crioClientImpl{
client: c,
crioSocketPath: crioSocketPath,
}, nil
}
func (c *crioClientImpl) getRequest(path string) (*http.Request, error) {
req, err := http.NewRequest("GET", path, nil)
if err != nil {
return nil, err
}
// For local communications over a unix socket, it doesn't matter what
// the host is. We just need a valid and meaningful host name.
req.Host = "crio"
req.URL.Host = c.crioSocketPath
req.URL.Scheme = "http"
return req, nil
}
// DaemonInfo return cri-o daemon info from the cri-o
// info endpoint.
func (c *crioClientImpl) DaemonInfo() (types.CrioInfo, error) {
info := types.CrioInfo{}
req, err := c.getRequest("/info")
if err != nil {
return info, err
}
resp, err := c.client.Do(req)
if err != nil {
return info, err
}
defer resp.Body.Close()
if err := json.NewDecoder(resp.Body).Decode(&info); err != nil {
return info, err
}
return info, nil
}
// ContainerInfo returns container info by querying
// the cri-o container endpoint.
func (c *crioClientImpl) ContainerInfo(id string) (*types.ContainerInfo, error) {
req, err := c.getRequest("/containers/" + id)
if err != nil {
return nil, err
}
resp, err := c.client.Do(req)
if err != nil {
return nil, err
}
defer resp.Body.Close()
cInfo := types.ContainerInfo{}
if err := json.NewDecoder(resp.Body).Decode(&cInfo); err != nil {
return nil, err
}
return &cInfo, nil
}


@ -28,7 +28,8 @@ storage_driver = "{{ .Storage }}"
storage_option = [
{{ range $opt := .StorageOptions }}{{ printf "\t%q,\n" $opt }}{{ end }}]
# The "crio.api" table contains settings for the kubelet/gRPC interface.
# The "crio.api" table contains settings for the kubelet/gRPC
# interface (which is also used by crioctl).
[crio.api]
# listen is the path to the AF_LOCAL socket on which crio will listen.
@ -76,9 +77,6 @@ runtime_untrusted_workload = "{{ .RuntimeUntrustedWorkload }}"
# container runtime for all containers.
default_workload_trust = "{{ .DefaultWorkloadTrust }}"
# no_pivot instructs the runtime to not use pivot_root, but instead use MS_MOVE
no_pivot = {{ .NoPivot }}
# conmon is the path to conmon binary, used for managing the runtime.
conmon = "{{ .Conmon }}"
@ -107,20 +105,9 @@ cgroup_manager = "{{ .CgroupManager }}"
# hooks_dir_path is the oci hooks directory for automatically executed hooks
hooks_dir_path = "{{ .HooksDirPath }}"
# default_mounts is the mounts list to be mounted for the container when created
default_mounts = [
{{ range $mount := .DefaultMounts }}{{ printf "\t%q, \n" $mount }}{{ end }}]
# pids_limit is the number of processes allowed in a container
pids_limit = {{ .PidsLimit }}
# enable using a shared PID namespace for containers in a pod
enable_shared_pid_namespace = {{ .EnableSharedPIDNamespace }}
# log_size_max is the max limit for the container log size in bytes.
# Negative values indicate that no limit is imposed.
log_size_max = {{ .LogSizeMax }}
# The "crio.image" table contains settings pertaining to the
# management of OCI images.
[crio.image]


@ -8,15 +8,12 @@ import (
_ "net/http/pprof"
"os"
"os/signal"
"path/filepath"
"sort"
"strings"
"time"
"github.com/containers/storage/pkg/reexec"
"github.com/kubernetes-incubator/cri-o/lib"
"github.com/kubernetes-incubator/cri-o/libkpod"
"github.com/kubernetes-incubator/cri-o/server"
"github.com/kubernetes-incubator/cri-o/version"
"github.com/opencontainers/selinux/go-selinux"
"github.com/sirupsen/logrus"
"github.com/soheilhy/cmux"
@ -26,24 +23,23 @@ import (
"k8s.io/kubernetes/pkg/kubelet/apis/cri/v1alpha1/runtime"
)
// This is populated by the Makefile from the VERSION file
// in the repository
var version = ""
// gitCommit is the commit that the binary is being built from.
// It will be populated by the Makefile.
var gitCommit = ""
func validateConfig(config *server.Config) error {
switch config.ImageVolumes {
case lib.ImageVolumesMkdir:
case lib.ImageVolumesIgnore:
case lib.ImageVolumesBind:
case libkpod.ImageVolumesMkdir:
case libkpod.ImageVolumesIgnore:
case libkpod.ImageVolumesBind:
default:
return fmt.Errorf("Unrecognized image volume type specified")
}
// This needs to match the read buffer size in conmon
if config.LogSizeMax >= 0 && config.LogSizeMax < 8192 {
return fmt.Errorf("log size max should be negative or >= 8192")
}
return nil
}
@ -126,18 +122,9 @@ func mergeConfig(config *server.Config, ctx *cli.Context) error {
if ctx.GlobalIsSet("hooks-dir-path") {
config.HooksDirPath = ctx.GlobalString("hooks-dir-path")
}
if ctx.GlobalIsSet("default-mounts") {
config.DefaultMounts = ctx.GlobalStringSlice("default-mounts")
}
if ctx.GlobalIsSet("pids-limit") {
config.PidsLimit = ctx.GlobalInt64("pids-limit")
}
if ctx.GlobalIsSet("enable-shared-pid-namespace") {
config.EnableSharedPIDNamespace = ctx.GlobalBool("enable-shared-pid-namespace")
}
if ctx.GlobalIsSet("log-size-max") {
config.LogSizeMax = ctx.GlobalInt64("log-size-max")
}
if ctx.GlobalIsSet("cni-config-dir") {
config.NetworkDir = ctx.GlobalString("cni-config-dir")
}
@ -145,7 +132,7 @@ func mergeConfig(config *server.Config, ctx *cli.Context) error {
config.PluginDir = ctx.GlobalString("cni-plugin-dir")
}
if ctx.GlobalIsSet("image-volumes") {
config.ImageVolumes = lib.ImageVolumesType(ctx.GlobalString("image-volumes"))
config.ImageVolumes = libkpod.ImageVolumesType(ctx.GlobalString("image-volumes"))
}
return nil
}
@ -166,7 +153,8 @@ func catchShutdown(gserver *grpc.Server, sserver *server.Server, hserver *http.S
*signalled = true
gserver.GracefulStop()
hserver.Shutdown(context.Background())
sserver.StopStreamServer()
// TODO(runcom): enable this after https://github.com/kubernetes/kubernetes/pull/51377
//sserver.StopStreamServer()
sserver.StopExitMonitor()
if err := sserver.Shutdown(); err != nil {
logrus.Warnf("error shutting down main service %v", err)
@ -183,7 +171,9 @@ func main() {
app := cli.NewApp()
var v []string
v = append(v, version.Version)
if version != "" {
v = append(v, version)
}
if gitCommit != "" {
v = append(v, fmt.Sprintf("commit: %s", gitCommit))
}
@ -204,6 +194,10 @@ func main() {
Name: "conmon",
Usage: "path to the conmon executable",
},
cli.BoolFlag{
Name: "debug",
Usage: "enable debug output for logging",
},
cli.StringFlag{
Name: "listen",
Usage: "path to crio socket",
@ -226,11 +220,6 @@ func main() {
Value: "text",
Usage: "set the format used by logs ('text' (default), or 'json')",
},
cli.StringFlag{
Name: "log-level",
Usage: "log messages above specified level: debug, info (default), warn, error, fatal or panic",
},
cli.StringFlag{
Name: "pause-command",
Usage: "name of the pause command in the pause image",
@ -297,18 +286,9 @@ func main() {
},
cli.Int64Flag{
Name: "pids-limit",
Value: lib.DefaultPidsLimit,
Value: libkpod.DefaultPidsLimit,
Usage: "maximum number of processes allowed in a container",
},
cli.BoolFlag{
Name: "enable-shared-pid-namespace",
Usage: "enable using a shared PID namespace for containers in a pod",
},
cli.Int64Flag{
Name: "log-size-max",
Value: lib.DefaultLogSizeMax,
Usage: "maximum log size in bytes for a container",
},
cli.StringFlag{
Name: "cni-config-dir",
Usage: "CNI configuration files directory",
@ -319,18 +299,13 @@ func main() {
},
cli.StringFlag{
Name: "image-volumes",
Value: string(lib.ImageVolumesMkdir),
Usage: "image volume handling ('mkdir', 'bind', or 'ignore')",
Value: string(libkpod.ImageVolumesMkdir),
Usage: "image volume handling ('mkdir' or 'ignore')",
},
cli.StringFlag{
Name: "hooks-dir-path",
Usage: "set the OCI hooks directory path",
Value: lib.DefaultHooksDirPath,
Hidden: true,
},
cli.StringSliceFlag{
Name: "default-mounts",
Usage: "add one or more default mount paths in the form host:container",
Value: libkpod.DefaultHooksDirPath,
Hidden: true,
},
cli.BoolFlag{
@ -378,13 +353,8 @@ func main() {
logrus.SetFormatter(cf)
if loglevel := c.GlobalString("log-level"); loglevel != "" {
level, err := logrus.ParseLevel(loglevel)
if err != nil {
return err
}
logrus.SetLevel(level)
if c.GlobalBool("debug") {
logrus.SetLevel(logrus.DebugLevel)
}
if path := c.GlobalString("log"); path != "" {
@ -416,16 +386,6 @@ func main() {
}()
}
args := c.Args()
if len(args) > 0 {
for _, command := range app.Commands {
if args[0] == command.Name {
break
}
}
return fmt.Errorf("command %q not supported", args[0])
}
config := c.App.Metadata["config"].(*server.Config)
if !config.SELinux {
@ -437,10 +397,6 @@ func main() {
return fmt.Errorf("invalid --runtime value %q", err)
}
if err := os.MkdirAll(filepath.Dir(config.Listen), 0755); err != nil {
return err
}
// Remove the socket if it already exists
if _, err := os.Stat(config.Listen); err == nil {
if err := os.Remove(config.Listen); err != nil {
@ -492,8 +448,7 @@ func main() {
infoMux := service.GetInfoMux()
srv := &http.Server{
Handler: infoMux,
ReadTimeout: 5 * time.Second,
Handler: infoMux,
}
graceful := false
@ -509,23 +464,26 @@ func main() {
if graceful && strings.Contains(strings.ToLower(err.Error()), "use of closed network connection") {
err = nil
} else {
logrus.Errorf("Failed to serve grpc request: %v", err)
logrus.Errorf("Failed to serve grpc grpc request: %v", err)
}
}
}()
streamServerCloseCh := service.StreamingServerCloseChan()
// TODO(runcom): enable this after https://github.com/kubernetes/kubernetes/pull/51377
//streamServerCloseCh := service.StreamingServerCloseChan()
serverExitMonitorCh := service.ExitMonitorCloseChan()
select {
case <-streamServerCloseCh:
// TODO(runcom): enable this after https://github.com/kubernetes/kubernetes/pull/51377
//case <-streamServerCloseCh:
case <-serverExitMonitorCh:
case <-serverCloseCh:
}
service.Shutdown()
<-streamServerCloseCh
logrus.Debug("closed stream server")
// TODO(runcom): enable this after https://github.com/kubernetes/kubernetes/pull/51377
//<-streamServerCloseCh
//logrus.Debug("closed stream server")
<-serverExitMonitorCh
logrus.Debug("closed exit monitor")
<-serverCloseCh

cmd/crioctl/container.go (new file, 619 lines)

@ -0,0 +1,619 @@
package main
import (
"fmt"
"log"
"net/url"
"os"
"strings"
"time"
"github.com/urfave/cli"
"golang.org/x/net/context"
remocommandconsts "k8s.io/apimachinery/pkg/util/remotecommand"
restclient "k8s.io/client-go/rest"
"k8s.io/client-go/tools/remotecommand"
pb "k8s.io/kubernetes/pkg/kubelet/apis/cri/v1alpha1/runtime"
)
var containerCommand = cli.Command{
Name: "container",
Aliases: []string{"ctr"},
Subcommands: []cli.Command{
createContainerCommand,
startContainerCommand,
stopContainerCommand,
removeContainerCommand,
containerStatusCommand,
listContainersCommand,
execSyncCommand,
execCommand,
},
}
type createOptions struct {
// configPath is path to the config for container
configPath string
// name sets the container name
name string
// podID of the container
podID string
// labels for the container
labels map[string]string
}
var createContainerCommand = cli.Command{
Name: "create",
Usage: "create a container",
Flags: []cli.Flag{
cli.StringFlag{
Name: "pod",
Usage: "the id of the pod sandbox to which the container belongs",
},
cli.StringFlag{
Name: "config",
Value: "config.json",
Usage: "the path of a container config file",
},
cli.StringFlag{
Name: "name",
Value: "",
Usage: "the name of the container",
},
cli.StringSliceFlag{
Name: "label",
Usage: "add key=value labels to the container",
},
},
Action: func(context *cli.Context) error {
// Set up a connection to the server.
conn, err := getClientConnection(context)
if err != nil {
return fmt.Errorf("failed to connect: %v", err)
}
defer conn.Close()
client := pb.NewRuntimeServiceClient(conn)
if !context.IsSet("pod") {
return fmt.Errorf("Please specify the id of the pod sandbox to which the container belongs via the --pod option")
}
opts := createOptions{
configPath: context.String("config"),
name: context.String("name"),
podID: context.String("pod"),
labels: make(map[string]string),
}
for _, l := range context.StringSlice("label") {
pair := strings.Split(l, "=")
if len(pair) != 2 {
return fmt.Errorf("incorrectly specified label: %v", l)
}
opts.labels[pair[0]] = pair[1]
}
// Test RuntimeServiceClient.CreateContainer
err = CreateContainer(client, opts)
if err != nil {
return fmt.Errorf("Creating container failed: %v", err)
}
return nil
},
}
var startContainerCommand = cli.Command{
Name: "start",
Usage: "start a container",
Flags: []cli.Flag{
cli.StringFlag{
Name: "id",
Value: "",
Usage: "id of the container",
},
},
Action: func(context *cli.Context) error {
// Set up a connection to the server.
conn, err := getClientConnection(context)
if err != nil {
return fmt.Errorf("failed to connect: %v", err)
}
defer conn.Close()
client := pb.NewRuntimeServiceClient(conn)
err = StartContainer(client, context.String("id"))
if err != nil {
return fmt.Errorf("Starting the container failed: %v", err)
}
return nil
},
}
var stopContainerCommand = cli.Command{
Name: "stop",
Usage: "stop a container",
Flags: []cli.Flag{
cli.StringFlag{
Name: "id",
Value: "",
Usage: "id of the container",
},
cli.Int64Flag{
Name: "timeout",
Value: 10,
Usage: "seconds to wait to kill the container after a graceful stop is requested",
},
},
Action: func(context *cli.Context) error {
// Set up a connection to the server.
conn, err := getClientConnection(context)
if err != nil {
return fmt.Errorf("failed to connect: %v", err)
}
defer conn.Close()
client := pb.NewRuntimeServiceClient(conn)
err = StopContainer(client, context.String("id"), context.Int64("timeout"))
if err != nil {
return fmt.Errorf("Stopping the container failed: %v", err)
}
return nil
},
}
var removeContainerCommand = cli.Command{
Name: "remove",
Usage: "remove a container",
Flags: []cli.Flag{
cli.StringFlag{
Name: "id",
Value: "",
Usage: "id of the container",
},
},
Action: func(context *cli.Context) error {
// Set up a connection to the server.
conn, err := getClientConnection(context)
if err != nil {
return fmt.Errorf("failed to connect: %v", err)
}
defer conn.Close()
client := pb.NewRuntimeServiceClient(conn)
err = RemoveContainer(client, context.String("id"))
if err != nil {
return fmt.Errorf("Removing the container failed: %v", err)
}
return nil
},
}
var containerStatusCommand = cli.Command{
Name: "status",
Usage: "get the status of a container",
Flags: []cli.Flag{
cli.StringFlag{
Name: "id",
Value: "",
Usage: "id of the container",
},
},
Action: func(context *cli.Context) error {
// Set up a connection to the server.
conn, err := getClientConnection(context)
if err != nil {
return fmt.Errorf("failed to connect: %v", err)
}
defer conn.Close()
client := pb.NewRuntimeServiceClient(conn)
err = ContainerStatus(client, context.String("id"))
if err != nil {
return fmt.Errorf("Getting the status of the container failed: %v", err)
}
return nil
},
}
var execSyncCommand = cli.Command{
Name: "execsync",
Usage: "exec a command synchronously in a container",
Flags: []cli.Flag{
cli.StringFlag{
Name: "id",
Value: "",
Usage: "id of the container",
},
cli.Int64Flag{
Name: "timeout",
Value: 0,
Usage: "timeout for the command",
},
},
Action: func(context *cli.Context) error {
// Set up a connection to the server.
conn, err := getClientConnection(context)
if err != nil {
return fmt.Errorf("failed to connect: %v", err)
}
defer conn.Close()
client := pb.NewRuntimeServiceClient(conn)
err = ExecSync(client, context.String("id"), context.Args(), context.Int64("timeout"))
if err != nil {
return fmt.Errorf("execing command in container failed: %v", err)
}
return nil
},
}
var execCommand = cli.Command{
Name: "exec",
Usage: "prepare a streaming endpoint to execute a command in the container",
Flags: []cli.Flag{
cli.StringFlag{
Name: "id",
Value: "",
Usage: "id of the container",
},
cli.BoolFlag{
Name: "tty",
Usage: "whether to use tty",
},
cli.BoolFlag{
Name: "stdin",
Usage: "whether to stream to stdin",
},
cli.BoolFlag{
Name: "url",
Usage: "do not exec command, just prepare streaming endpoint",
},
},
Action: func(context *cli.Context) error {
// Set up a connection to the server.
conn, err := getClientConnection(context)
if err != nil {
return fmt.Errorf("failed to connect: %v", err)
}
defer conn.Close()
client := pb.NewRuntimeServiceClient(conn)
err = Exec(client, context.String("id"), context.Bool("tty"), context.Bool("stdin"), context.Bool("url"), context.Args())
if err != nil {
return fmt.Errorf("execing command in container failed: %v", err)
}
return nil
},
}
type listOptions struct {
// id of the container
id string
// podID of the container
podID string
// state of the container
state string
// quiet is for listing just container IDs
quiet bool
// labels are selectors for the container
labels map[string]string
}
var listContainersCommand = cli.Command{
Name: "list",
Usage: "list containers",
Flags: []cli.Flag{
cli.BoolFlag{
Name: "quiet",
Usage: "list only container IDs",
},
cli.StringFlag{
Name: "id",
Value: "",
Usage: "filter by container id",
},
cli.StringFlag{
Name: "pod",
Value: "",
Usage: "filter by container pod id",
},
cli.StringFlag{
Name: "state",
Value: "",
Usage: "filter by container state",
},
cli.StringSliceFlag{
Name: "label",
Usage: "filter by key=value label",
},
},
Action: func(context *cli.Context) error {
// Set up a connection to the server.
conn, err := getClientConnection(context)
if err != nil {
return fmt.Errorf("failed to connect: %v", err)
}
defer conn.Close()
client := pb.NewRuntimeServiceClient(conn)
opts := listOptions{
id: context.String("id"),
podID: context.String("pod"),
state: context.String("state"),
quiet: context.Bool("quiet"),
labels: make(map[string]string),
}
for _, l := range context.StringSlice("label") {
pair := strings.Split(l, "=")
if len(pair) != 2 {
return fmt.Errorf("incorrectly specified label: %v", l)
}
opts.labels[pair[0]] = pair[1]
}
err = ListContainers(client, opts)
if err != nil {
return fmt.Errorf("listing containers failed: %v", err)
}
return nil
},
}
// CreateContainer sends a CreateContainerRequest to the server, and parses
// the returned CreateContainerResponse.
func CreateContainer(client pb.RuntimeServiceClient, opts createOptions) error {
config, err := loadContainerConfig(opts.configPath)
if err != nil {
return err
}
// Override the name by the one specified through CLI
if opts.name != "" {
config.Metadata.Name = opts.name
}
for k, v := range opts.labels {
config.Labels[k] = v
}
r, err := client.CreateContainer(context.Background(), &pb.CreateContainerRequest{
PodSandboxId: opts.podID,
Config: config,
// TODO(runcom): this is missing PodSandboxConfig!!!
// we should/could find a way to retrieve it from the fs and set it here
})
if err != nil {
return err
}
fmt.Println(r.ContainerId)
return nil
}
// StartContainer sends a StartContainerRequest to the server, and parses
// the returned StartContainerResponse.
func StartContainer(client pb.RuntimeServiceClient, ID string) error {
if ID == "" {
return fmt.Errorf("ID cannot be empty")
}
_, err := client.StartContainer(context.Background(), &pb.StartContainerRequest{
ContainerId: ID,
})
if err != nil {
return err
}
fmt.Println(ID)
return nil
}
// StopContainer sends a StopContainerRequest to the server, and parses
// the returned StopContainerResponse.
func StopContainer(client pb.RuntimeServiceClient, ID string, timeout int64) error {
if ID == "" {
return fmt.Errorf("ID cannot be empty")
}
_, err := client.StopContainer(context.Background(), &pb.StopContainerRequest{
ContainerId: ID,
Timeout: timeout,
})
if err != nil {
return err
}
fmt.Println(ID)
return nil
}
// RemoveContainer sends a RemoveContainerRequest to the server, and parses
// the returned RemoveContainerResponse.
func RemoveContainer(client pb.RuntimeServiceClient, ID string) error {
if ID == "" {
return fmt.Errorf("ID cannot be empty")
}
_, err := client.RemoveContainer(context.Background(), &pb.RemoveContainerRequest{
ContainerId: ID,
})
if err != nil {
return err
}
fmt.Println(ID)
return nil
}
// ContainerStatus sends a ContainerStatusRequest to the server, and parses
// the returned ContainerStatusResponse.
func ContainerStatus(client pb.RuntimeServiceClient, ID string) error {
if ID == "" {
return fmt.Errorf("ID cannot be empty")
}
r, err := client.ContainerStatus(context.Background(), &pb.ContainerStatusRequest{
ContainerId: ID})
if err != nil {
return err
}
fmt.Printf("ID: %s\n", r.Status.Id)
if r.Status.Metadata != nil {
if r.Status.Metadata.Name != "" {
fmt.Printf("Name: %s\n", r.Status.Metadata.Name)
}
fmt.Printf("Attempt: %v\n", r.Status.Metadata.Attempt)
}
// TODO(mzylowski): print it prettier
fmt.Printf("Status: %s\n", r.Status.State)
ctm := time.Unix(0, r.Status.CreatedAt)
fmt.Printf("Created: %v\n", ctm)
stm := time.Unix(0, r.Status.StartedAt)
fmt.Printf("Started: %v\n", stm)
ftm := time.Unix(0, r.Status.FinishedAt)
fmt.Printf("Finished: %v\n", ftm)
fmt.Printf("Exit Code: %v\n", r.Status.ExitCode)
fmt.Printf("Reason: %v\n", r.Status.Reason)
if r.Status.Image != nil {
fmt.Printf("Image: %v\n", r.Status.Image.Image)
}
//
// TODO: https://github.com/kubernetes-incubator/cri-o/issues/531
//
//fmt.Printf("ImageRef: %v\n", r.Status.ImageRef)
return nil
}
// ExecSync sends an ExecSyncRequest to the server, and parses
// the returned ExecSyncResponse.
func ExecSync(client pb.RuntimeServiceClient, ID string, cmd []string, timeout int64) error {
if ID == "" {
return fmt.Errorf("ID cannot be empty")
}
r, err := client.ExecSync(context.Background(), &pb.ExecSyncRequest{
ContainerId: ID,
Cmd: cmd,
Timeout: timeout,
})
if err != nil {
return err
}
fmt.Println("Stdout:")
fmt.Println(string(r.Stdout))
fmt.Println("Stderr:")
fmt.Println(string(r.Stderr))
fmt.Printf("Exit code: %v\n", r.ExitCode)
return nil
}
// Exec sends an ExecRequest to the server, and parses
// the returned ExecResponse.
func Exec(client pb.RuntimeServiceClient, ID string, tty bool, stdin bool, urlOnly bool, cmd []string) error {
if ID == "" {
return fmt.Errorf("ID cannot be empty")
}
r, err := client.Exec(context.Background(), &pb.ExecRequest{
ContainerId: ID,
Cmd: cmd,
Tty: tty,
Stdin: stdin,
})
if err != nil {
return err
}
if urlOnly {
fmt.Println("URL:")
fmt.Println(r.Url)
return nil
}
execURL, err := url.Parse(r.Url)
if err != nil {
return err
}
streamExec, err := remotecommand.NewExecutor(&restclient.Config{}, "GET", execURL)
if err != nil {
return err
}
options := remotecommand.StreamOptions{
SupportedProtocols: remocommandconsts.SupportedStreamingProtocols,
Stdout: os.Stdout,
Stderr: os.Stderr,
Tty: tty,
}
if stdin {
options.Stdin = os.Stdin
}
return streamExec.Stream(options)
}
// ListContainers sends a ListContainerRequest to the server, and parses
// the returned ListContainerResponse.
func ListContainers(client pb.RuntimeServiceClient, opts listOptions) error {
filter := &pb.ContainerFilter{}
if opts.id != "" {
filter.Id = opts.id
}
if opts.podID != "" {
filter.PodSandboxId = opts.podID
}
if opts.state != "" {
st := &pb.ContainerStateValue{}
st.State = pb.ContainerState_CONTAINER_UNKNOWN
switch opts.state {
case "created":
st.State = pb.ContainerState_CONTAINER_CREATED
filter.State = st
case "running":
st.State = pb.ContainerState_CONTAINER_RUNNING
filter.State = st
case "stopped":
st.State = pb.ContainerState_CONTAINER_EXITED
filter.State = st
default:
log.Fatalf("--state should be one of created, running or stopped")
}
}
if opts.labels != nil {
filter.LabelSelector = opts.labels
}
r, err := client.ListContainers(context.Background(), &pb.ListContainersRequest{
Filter: filter,
})
if err != nil {
return err
}
for _, c := range r.GetContainers() {
if opts.quiet {
fmt.Println(c.Id)
continue
}
fmt.Printf("ID: %s\n", c.Id)
fmt.Printf("Pod: %s\n", c.PodSandboxId)
if c.Metadata != nil {
if c.Metadata.Name != "" {
fmt.Printf("Name: %s\n", c.Metadata.Name)
}
fmt.Printf("Attempt: %v\n", c.Metadata.Attempt)
}
fmt.Printf("Status: %s\n", c.State)
if c.Image != nil {
fmt.Printf("Image: %s\n", c.Image.Image)
}
ctm := time.Unix(0, c.CreatedAt)
fmt.Printf("Created: %v\n", ctm)
if c.Labels != nil {
fmt.Println("Labels:")
for _, k := range getSortedKeys(c.Labels) {
fmt.Printf("\t%s -> %s\n", k, c.Labels[k])
}
}
if c.Annotations != nil {
fmt.Println("Annotations:")
for _, k := range getSortedKeys(c.Annotations) {
fmt.Printf("\t%s -> %s\n", k, c.Annotations[k])
}
}
fmt.Println()
}
return nil
}

cmd/crioctl/image.go (new file, 173 lines)

@ -0,0 +1,173 @@
package main
import (
"fmt"
"github.com/urfave/cli"
"golang.org/x/net/context"
pb "k8s.io/kubernetes/pkg/kubelet/apis/cri/v1alpha1/runtime"
)
var imageCommand = cli.Command{
Name: "image",
Subcommands: []cli.Command{
pullImageCommand,
listImageCommand,
imageStatusCommand,
removeImageCommand,
},
}
var pullImageCommand = cli.Command{
Name: "pull",
Usage: "pull an image",
Action: func(context *cli.Context) error {
// Set up a connection to the server.
conn, err := getClientConnection(context)
if err != nil {
return fmt.Errorf("failed to connect: %v", err)
}
defer conn.Close()
client := pb.NewImageServiceClient(conn)
_, err = PullImage(client, context.Args().Get(0))
if err != nil {
return fmt.Errorf("pulling image failed: %v", err)
}
return nil
},
}
var listImageCommand = cli.Command{
Name: "list",
Usage: "list images",
Flags: []cli.Flag{
cli.BoolFlag{
Name: "quiet",
Usage: "list only image IDs",
},
},
Action: func(context *cli.Context) error {
// Set up a connection to the server.
conn, err := getClientConnection(context)
if err != nil {
return fmt.Errorf("failed to connect: %v", err)
}
defer conn.Close()
client := pb.NewImageServiceClient(conn)
r, err := ListImages(client, context.Args().Get(0))
if err != nil {
return fmt.Errorf("listing images failed: %v", err)
}
quiet := context.Bool("quiet")
for _, image := range r.Images {
if quiet {
fmt.Printf("%s\n", image.Id)
continue
}
fmt.Printf("ID: %s\n", image.Id)
for _, tag := range image.RepoTags {
fmt.Printf("Tag: %s\n", tag)
}
for _, digest := range image.RepoDigests {
fmt.Printf("Digest: %s\n", digest)
}
if image.Size_ != 0 {
fmt.Printf("Size: %d\n", image.Size_)
}
}
return nil
},
}
var imageStatusCommand = cli.Command{
Name: "status",
Usage: "return the status of an image",
Flags: []cli.Flag{
cli.StringFlag{
Name: "id",
Usage: "id of the image",
},
},
Action: func(context *cli.Context) error {
// Set up a connection to the server.
conn, err := getClientConnection(context)
if err != nil {
return fmt.Errorf("failed to connect: %v", err)
}
defer conn.Close()
client := pb.NewImageServiceClient(conn)
r, err := ImageStatus(client, context.String("id"))
if err != nil {
return fmt.Errorf("image status request failed: %v", err)
}
image := r.Image
if image == nil {
return fmt.Errorf("no such image present")
}
fmt.Printf("ID: %s\n", image.Id)
for _, tag := range image.RepoTags {
fmt.Printf("Tag: %s\n", tag)
}
for _, digest := range image.RepoDigests {
fmt.Printf("Digest: %s\n", digest)
}
fmt.Printf("Size: %d\n", image.Size_)
return nil
},
}
var removeImageCommand = cli.Command{
Name: "remove",
Usage: "remove an image",
Flags: []cli.Flag{
cli.StringFlag{
Name: "id",
Value: "",
Usage: "id of the image",
},
},
Action: func(context *cli.Context) error {
// Set up a connection to the server.
conn, err := getClientConnection(context)
if err != nil {
return fmt.Errorf("failed to connect: %v", err)
}
defer conn.Close()
client := pb.NewImageServiceClient(conn)
_, err = RemoveImage(client, context.String("id"))
if err != nil {
return fmt.Errorf("removing the image failed: %v", err)
}
return nil
},
}
// PullImage sends a PullImageRequest to the server, and parses
// the returned PullImageResponse.
func PullImage(client pb.ImageServiceClient, image string) (*pb.PullImageResponse, error) {
return client.PullImage(context.Background(), &pb.PullImageRequest{Image: &pb.ImageSpec{Image: image}})
}
// ListImages sends a ListImagesRequest to the server, and parses
// the returned ListImagesResponse.
func ListImages(client pb.ImageServiceClient, image string) (*pb.ListImagesResponse, error) {
return client.ListImages(context.Background(), &pb.ListImagesRequest{Filter: &pb.ImageFilter{Image: &pb.ImageSpec{Image: image}}})
}
// ImageStatus sends an ImageStatusRequest to the server, and parses
// the returned ImageStatusResponse.
func ImageStatus(client pb.ImageServiceClient, image string) (*pb.ImageStatusResponse, error) {
return client.ImageStatus(context.Background(), &pb.ImageStatusRequest{Image: &pb.ImageSpec{Image: image}})
}
// RemoveImage sends a RemoveImageRequest to the server, and parses
// the returned RemoveImageResponse.
func RemoveImage(client pb.ImageServiceClient, image string) (*pb.RemoveImageResponse, error) {
if image == "" {
return nil, fmt.Errorf("ID cannot be empty")
}
return client.RemoveImage(context.Background(), &pb.RemoveImageRequest{Image: &pb.ImageSpec{Image: image}})
}
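Each wrapper above maps to a single image-service RPC; composing them from Go mirrors what the CLI actions earlier in this file do. A minimal sketch, assuming the helpers defined above (the printed summary is illustrative):
// Illustrative sketch, not part of this changeset.
func pullAndInspect(client pb.ImageServiceClient, name string) error {
    if _, err := PullImage(client, name); err != nil {
        return fmt.Errorf("pulling image failed: %v", err)
    }
    r, err := ImageStatus(client, name)
    if err != nil {
        return fmt.Errorf("image status request failed: %v", err)
    }
    if r.Image == nil {
        return fmt.Errorf("no such image present")
    }
    fmt.Printf("pulled %s (%d bytes)\n", r.Image.Id, r.Image.Size_)
    return nil
}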

112
cmd/crioctl/main.go Normal file

@ -0,0 +1,112 @@
package main
import (
"encoding/json"
"fmt"
"net"
"os"
"strings"
"time"
"github.com/sirupsen/logrus"
"github.com/urfave/cli"
"google.golang.org/grpc"
pb "k8s.io/kubernetes/pkg/kubelet/apis/cri/v1alpha1/runtime"
)
// This is populated by the Makefile from the VERSION file
// in the repository
var version = ""
// gitCommit is the commit that the binary is being built from.
// It will be populated by the Makefile.
var gitCommit = ""
func getClientConnection(context *cli.Context) (*grpc.ClientConn, error) {
conn, err := grpc.Dial(context.GlobalString("connect"), grpc.WithInsecure(), grpc.WithTimeout(context.GlobalDuration("timeout")),
grpc.WithDialer(func(addr string, timeout time.Duration) (net.Conn, error) {
return net.DialTimeout("unix", addr, timeout)
}))
if err != nil {
return nil, fmt.Errorf("failed to connect: %v", err)
}
return conn, nil
}
func openFile(path string) (*os.File, error) {
f, err := os.Open(path)
if err != nil {
if os.IsNotExist(err) {
return nil, fmt.Errorf("config at %s not found", path)
}
return nil, err
}
return f, nil
}
func loadPodSandboxConfig(path string) (*pb.PodSandboxConfig, error) {
f, err := openFile(path)
if err != nil {
return nil, err
}
defer f.Close()
var config pb.PodSandboxConfig
if err := json.NewDecoder(f).Decode(&config); err != nil {
return nil, err
}
return &config, nil
}
func loadContainerConfig(path string) (*pb.ContainerConfig, error) {
f, err := openFile(path)
if err != nil {
return nil, err
}
defer f.Close()
var config pb.ContainerConfig
if err := json.NewDecoder(f).Decode(&config); err != nil {
return nil, err
}
return &config, nil
}
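// Illustrative sketch, not part of this changeset: the loaders above simply decode
// the CRI-generated structs from JSON, so a pod sandbox config file can be as small
// as the literal below. The JSON field names are assumptions based on the
// pb.PodSandboxConfig fields referenced elsewhere in this changeset.
func examplePodSandboxConfig() (*pb.PodSandboxConfig, error) {
    const sample = `{
  "metadata": {"name": "example", "uid": "example-uid", "namespace": "default", "attempt": 1},
  "labels": {"tier": "backend"}
}`
    var config pb.PodSandboxConfig
    if err := json.NewDecoder(strings.NewReader(sample)).Decode(&config); err != nil {
        return nil, err
    }
    return &config, nil
}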
func main() {
app := cli.NewApp()
var v []string
if version != "" {
v = append(v, version)
}
if gitCommit != "" {
v = append(v, fmt.Sprintf("commit: %s", gitCommit))
}
app.Name = "crioctl"
app.Usage = "client for crio"
app.Version = strings.Join(v, "\n")
app.Commands = []cli.Command{
podSandboxCommand,
containerCommand,
runtimeVersionCommand,
imageCommand,
}
app.Flags = []cli.Flag{
cli.StringFlag{
Name: "connect",
Value: "/var/run/crio.sock",
Usage: "Socket to connect to",
},
cli.DurationFlag{
Name: "timeout",
Value: 10 * time.Second,
Usage: "Timeout of connecting to server",
},
}
if err := app.Run(os.Args); err != nil {
logrus.Fatal(err)
}
}

386
cmd/crioctl/sandbox.go Normal file
View file

@ -0,0 +1,386 @@
package main
import (
"fmt"
"log"
"sort"
"strings"
"time"
"github.com/urfave/cli"
"golang.org/x/net/context"
pb "k8s.io/kubernetes/pkg/kubelet/apis/cri/v1alpha1/runtime"
)
var podSandboxCommand = cli.Command{
Name: "pod",
Subcommands: []cli.Command{
runPodSandboxCommand,
stopPodSandboxCommand,
removePodSandboxCommand,
podSandboxStatusCommand,
listPodSandboxCommand,
},
}
var runPodSandboxCommand = cli.Command{
Name: "run",
Usage: "run a pod",
Flags: []cli.Flag{
cli.StringFlag{
Name: "config",
Value: "",
Usage: "the path of a pod sandbox config file",
},
cli.StringFlag{
Name: "name",
Value: "",
Usage: "the name of the pod sandbox",
},
cli.StringSliceFlag{
Name: "label",
Usage: "add key=value labels to the container",
},
},
Action: func(context *cli.Context) error {
// Set up a connection to the server.
conn, err := getClientConnection(context)
if err != nil {
return fmt.Errorf("failed to connect: %v", err)
}
defer conn.Close()
client := pb.NewRuntimeServiceClient(conn)
opts := createOptions{
configPath: context.String("config"),
name: context.String("name"),
labels: make(map[string]string),
}
for _, l := range context.StringSlice("label") {
pair := strings.Split(l, "=")
if len(pair) != 2 {
return fmt.Errorf("incorrectly specified label: %v", l)
}
opts.labels[pair[0]] = pair[1]
}
// Test RuntimeServiceClient.RunPodSandbox
err = RunPodSandbox(client, opts)
if err != nil {
return fmt.Errorf("Creating the pod sandbox failed: %v", err)
}
return nil
},
}
var stopPodSandboxCommand = cli.Command{
Name: "stop",
Usage: "stop a pod sandbox",
Flags: []cli.Flag{
cli.StringFlag{
Name: "id",
Value: "",
Usage: "id of the pod sandbox",
},
},
Action: func(context *cli.Context) error {
// Set up a connection to the server.
conn, err := getClientConnection(context)
if err != nil {
return fmt.Errorf("failed to connect: %v", err)
}
defer conn.Close()
client := pb.NewRuntimeServiceClient(conn)
err = StopPodSandbox(client, context.String("id"))
if err != nil {
return fmt.Errorf("stopping the pod sandbox failed: %v", err)
}
return nil
},
}
var removePodSandboxCommand = cli.Command{
Name: "remove",
Usage: "remove a pod sandbox",
Flags: []cli.Flag{
cli.StringFlag{
Name: "id",
Value: "",
Usage: "id of the pod sandbox",
},
},
Action: func(context *cli.Context) error {
// Set up a connection to the server.
conn, err := getClientConnection(context)
if err != nil {
return fmt.Errorf("failed to connect: %v", err)
}
defer conn.Close()
client := pb.NewRuntimeServiceClient(conn)
err = RemovePodSandbox(client, context.String("id"))
if err != nil {
return fmt.Errorf("removing the pod sandbox failed: %v", err)
}
return nil
},
}
var podSandboxStatusCommand = cli.Command{
Name: "status",
Usage: "return the status of a pod",
Flags: []cli.Flag{
cli.StringFlag{
Name: "id",
Value: "",
Usage: "id of the pod",
},
},
Action: func(context *cli.Context) error {
// Set up a connection to the server.
conn, err := getClientConnection(context)
if err != nil {
return fmt.Errorf("failed to connect: %v", err)
}
defer conn.Close()
client := pb.NewRuntimeServiceClient(conn)
err = PodSandboxStatus(client, context.String("id"))
if err != nil {
return fmt.Errorf("getting the pod sandbox status failed: %v", err)
}
return nil
},
}
var listPodSandboxCommand = cli.Command{
Name: "list",
Usage: "list pod sandboxes",
Flags: []cli.Flag{
cli.StringFlag{
Name: "id",
Value: "",
Usage: "filter by pod sandbox id",
},
cli.StringFlag{
Name: "state",
Value: "",
Usage: "filter by pod sandbox state",
},
cli.StringSliceFlag{
Name: "label",
Usage: "filter by key=value label",
},
cli.BoolFlag{
Name: "quiet",
Usage: "list only pod IDs",
},
},
Action: func(context *cli.Context) error {
// Set up a connection to the server.
conn, err := getClientConnection(context)
if err != nil {
return fmt.Errorf("failed to connect: %v", err)
}
defer conn.Close()
client := pb.NewRuntimeServiceClient(conn)
opts := listOptions{
id: context.String("id"),
state: context.String("state"),
quiet: context.Bool("quiet"),
labels: make(map[string]string),
}
for _, l := range context.StringSlice("label") {
pair := strings.Split(l, "=")
if len(pair) != 2 {
return fmt.Errorf("incorrectly specified label: %v", l)
}
opts.labels[pair[0]] = pair[1]
}
err = ListPodSandboxes(client, opts)
if err != nil {
return fmt.Errorf("listing pod sandboxes failed: %v", err)
}
return nil
},
}
// RunPodSandbox sends a RunPodSandboxRequest to the server, and parses
// the returned RunPodSandboxResponse.
func RunPodSandbox(client pb.RuntimeServiceClient, opts createOptions) error {
config, err := loadPodSandboxConfig(opts.configPath)
if err != nil {
return err
}
// Override the name by the one specified through CLI
if opts.name != "" {
config.Metadata.Name = opts.name
}
for k, v := range opts.labels {
config.Labels[k] = v
}
r, err := client.RunPodSandbox(context.Background(), &pb.RunPodSandboxRequest{Config: config})
if err != nil {
return err
}
fmt.Println(r.PodSandboxId)
return nil
}
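// Illustrative sketch, not part of this changeset: driving RunPodSandbox from Go,
// overriding the name from the config file and adding a label, exactly as the
// "run" CLI action above does with its flags. The values are illustrative.
func runExampleSandbox(client pb.RuntimeServiceClient, configPath string) error {
    return RunPodSandbox(client, createOptions{
        configPath: configPath,
        name:       "example-sandbox",
        labels:     map[string]string{"tier": "backend"},
    })
}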
// StopPodSandbox sends a StopPodSandboxRequest to the server, and parses
// the returned StopPodSandboxResponse.
func StopPodSandbox(client pb.RuntimeServiceClient, ID string) error {
if ID == "" {
return fmt.Errorf("ID cannot be empty")
}
_, err := client.StopPodSandbox(context.Background(), &pb.StopPodSandboxRequest{PodSandboxId: ID})
if err != nil {
return err
}
fmt.Println(ID)
return nil
}
// RemovePodSandbox sends a RemovePodSandboxRequest to the server, and parses
// the returned RemovePodSandboxResponse.
func RemovePodSandbox(client pb.RuntimeServiceClient, ID string) error {
if ID == "" {
return fmt.Errorf("ID cannot be empty")
}
_, err := client.RemovePodSandbox(context.Background(), &pb.RemovePodSandboxRequest{PodSandboxId: ID})
if err != nil {
return err
}
fmt.Println(ID)
return nil
}
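// Illustrative sketch, not part of this changeset: StopPodSandbox and RemovePodSandbox
// are typically chained by callers such as the kubelet, since a sandbox is stopped
// before it is removed.
func stopAndRemovePodSandbox(client pb.RuntimeServiceClient, id string) error {
    if err := StopPodSandbox(client, id); err != nil {
        return err
    }
    return RemovePodSandbox(client, id)
}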
// PodSandboxStatus sends a PodSandboxStatusRequest to the server, and parses
// the returned PodSandboxStatusResponse.
func PodSandboxStatus(client pb.RuntimeServiceClient, ID string) error {
if ID == "" {
return fmt.Errorf("ID cannot be empty")
}
r, err := client.PodSandboxStatus(context.Background(), &pb.PodSandboxStatusRequest{PodSandboxId: ID})
if err != nil {
return err
}
fmt.Printf("ID: %s\n", r.Status.Id)
if r.Status.Metadata != nil {
if r.Status.Metadata.Name != "" {
fmt.Printf("Name: %s\n", r.Status.Metadata.Name)
}
if r.Status.Metadata.Uid != "" {
fmt.Printf("UID: %s\n", r.Status.Metadata.Uid)
}
if r.Status.Metadata.Namespace != "" {
fmt.Printf("Namespace: %s\n", r.Status.Metadata.Namespace)
}
fmt.Printf("Attempt: %v\n", r.Status.Metadata.Attempt)
}
fmt.Printf("Status: %s\n", r.Status.State)
ctm := time.Unix(0, r.Status.CreatedAt)
fmt.Printf("Created: %v\n", ctm)
if r.Status.Network != nil {
fmt.Printf("IP Address: %v\n", r.Status.Network.Ip)
}
if r.Status.Labels != nil {
fmt.Println("Labels:")
for _, k := range getSortedKeys(r.Status.Labels) {
fmt.Printf("\t%s -> %s\n", k, r.Status.Labels[k])
}
}
if r.Status.Annotations != nil {
fmt.Println("Annotations:")
for _, k := range getSortedKeys(r.Status.Annotations) {
fmt.Printf("\t%s -> %s\n", k, r.Status.Annotations[k])
}
}
return nil
}
// ListPodSandboxes sends a ListPodSandboxRequest to the server, and parses
// the returned ListPodSandboxResponse.
func ListPodSandboxes(client pb.RuntimeServiceClient, opts listOptions) error {
filter := &pb.PodSandboxFilter{}
if opts.id != "" {
filter.Id = opts.id
}
if opts.state != "" {
st := &pb.PodSandboxStateValue{}
st.State = pb.PodSandboxState_SANDBOX_NOTREADY
switch opts.state {
case "ready":
st.State = pb.PodSandboxState_SANDBOX_READY
filter.State = st
case "notready":
st.State = pb.PodSandboxState_SANDBOX_NOTREADY
filter.State = st
default:
log.Fatalf("--state should be ready or notready")
}
}
if opts.labels != nil {
filter.LabelSelector = opts.labels
}
r, err := client.ListPodSandbox(context.Background(), &pb.ListPodSandboxRequest{
Filter: filter,
})
if err != nil {
return err
}
for _, pod := range r.Items {
if opts.quiet {
fmt.Println(pod.Id)
continue
}
fmt.Printf("ID: %s\n", pod.Id)
if pod.Metadata != nil {
if pod.Metadata.Name != "" {
fmt.Printf("Name: %s\n", pod.Metadata.Name)
}
if pod.Metadata.Uid != "" {
fmt.Printf("UID: %s\n", pod.Metadata.Uid)
}
if pod.Metadata.Namespace != "" {
fmt.Printf("Namespace: %s\n", pod.Metadata.Namespace)
}
fmt.Printf("Attempt: %v\n", pod.Metadata.Attempt)
}
fmt.Printf("Status: %s\n", pod.State)
ctm := time.Unix(0, pod.CreatedAt)
fmt.Printf("Created: %v\n", ctm)
if pod.Labels != nil {
fmt.Println("Labels:")
for _, k := range getSortedKeys(pod.Labels) {
fmt.Printf("\t%s -> %s\n", k, pod.Labels[k])
}
}
if pod.Annotations != nil {
fmt.Println("Annotations:")
for _, k := range getSortedKeys(pod.Annotations) {
fmt.Printf("\t%s -> %s\n", k, pod.Annotations[k])
}
}
fmt.Println()
}
return nil
}
func getSortedKeys(m map[string]string) []string {
var keys []string
for k := range m {
keys = append(keys, k)
}
sort.Strings(keys)
return keys
}

41
cmd/crioctl/system.go Normal file

@ -0,0 +1,41 @@
package main
import (
"fmt"
"github.com/urfave/cli"
"golang.org/x/net/context"
pb "k8s.io/kubernetes/pkg/kubelet/apis/cri/v1alpha1/runtime"
)
var runtimeVersionCommand = cli.Command{
Name: "runtimeversion",
Usage: "get runtime version information",
Action: func(context *cli.Context) error {
// Set up a connection to the server.
conn, err := getClientConnection(context)
if err != nil {
return fmt.Errorf("failed to connect: %v", err)
}
defer conn.Close()
client := pb.NewRuntimeServiceClient(conn)
// Test RuntimeServiceClient.Version
version := "v1alpha1"
err = Version(client, version)
if err != nil {
return fmt.Errorf("Getting the runtime version failed: %v", err)
}
return nil
},
}
// Version sends a VersionRequest to the server, and parses the returned VersionResponse.
func Version(client pb.RuntimeServiceClient, version string) error {
r, err := client.Version(context.Background(), &pb.VersionRequest{Version: version})
if err != nil {
return err
}
fmt.Printf("VersionResponse: Version: %s, RuntimeName: %s, RuntimeVersion: %s, RuntimeApiVersion: %s\n", r.Version, r.RuntimeName, r.RuntimeVersion, r.RuntimeApiVersion)
return nil
}

16
cmd/kpod/README.md Normal file

@ -0,0 +1,16 @@
# kpod - Simple debugging tool for pods and images
kpod is a simple, client-only tool that helps with debugging issues when daemons such as the CRI runtime or the kubelet
are not responding or are failing. A shared API layer could be created to share code between the daemon and kpod. kpod
does not require any daemon to be running. kpod utilizes the same underlying components that crio uses, i.e.
containers/image, containers/storage, oci-runtime-tool/generate, and runc or any other OCI-compatible runtime. kpod
shares state with crio and so has the capability to debug pods and images created by crio.
## Use cases
1. List pods.
2. Launch simple pods (that require no daemon support).
3. Exec commands in a container in a pod.
4. Launch additional containers in a pod.
5. List images.
6. Remove images not in use.
7. Pull images.
8. Check image size.
9. Report pod disk resource usage.

102
cmd/kpod/common.go Normal file

@ -0,0 +1,102 @@
package main
import (
"os"
"strings"
is "github.com/containers/image/storage"
"github.com/containers/storage"
"github.com/fatih/camelcase"
"github.com/kubernetes-incubator/cri-o/libkpod"
"github.com/kubernetes-incubator/cri-o/libpod"
"github.com/kubernetes-incubator/cri-o/server"
"github.com/pkg/errors"
"github.com/urfave/cli"
)
var (
stores = make(map[storage.Store]struct{})
)
func getStore(c *libkpod.Config) (storage.Store, error) {
options := storage.DefaultStoreOptions
options.GraphRoot = c.Root
options.RunRoot = c.RunRoot
options.GraphDriverName = c.Storage
options.GraphDriverOptions = c.StorageOptions
store, err := storage.GetStore(options)
if err != nil {
return nil, err
}
is.Transport.SetStore(store)
stores[store] = struct{}{}
return store, nil
}
func getRuntime(c *cli.Context) (*libpod.Runtime, error) {
config, err := getConfig(c)
if err != nil {
return nil, errors.Wrapf(err, "could not get config")
}
options := storage.DefaultStoreOptions
options.GraphRoot = config.Root
options.RunRoot = config.RunRoot
options.GraphDriverName = config.Storage
options.GraphDriverOptions = config.StorageOptions
return libpod.NewRuntime(libpod.WithStorageConfig(options))
}
func shutdownStores() {
for store := range stores {
if _, err := store.Shutdown(false); err != nil {
break
}
}
}
func getConfig(c *cli.Context) (*libkpod.Config, error) {
config := libkpod.DefaultConfig()
var configFile string
if c.GlobalIsSet("config") {
configFile = c.GlobalString("config")
} else if _, err := os.Stat(server.CrioConfigPath); err == nil {
configFile = server.CrioConfigPath
}
// load and merge the configfile from the commandline or use
// the default crio config file
if configFile != "" {
err := config.UpdateFromFile(configFile)
if err != nil {
return config, err
}
}
if c.GlobalIsSet("root") {
config.Root = c.GlobalString("root")
}
if c.GlobalIsSet("runroot") {
config.RunRoot = c.GlobalString("runroot")
}
if c.GlobalIsSet("storage-driver") {
config.Storage = c.GlobalString("storage-driver")
}
if c.GlobalIsSet("storage-opt") {
opts := c.GlobalStringSlice("storage-opt")
if len(opts) > 0 {
config.StorageOptions = opts
}
}
if c.GlobalIsSet("runtime") {
config.Runtime = c.GlobalString("runtime")
}
return config, nil
}
func splitCamelCase(src string) string {
entries := camelcase.Split(src)
return strings.Join(entries, " ")
}

60
cmd/kpod/common_test.go Normal file
View file

@ -0,0 +1,60 @@
package main
import (
"os/exec"
"os/user"
"testing"
"flag"
"github.com/containers/storage"
"github.com/urfave/cli"
)
func TestGetStore(t *testing.T) {
// Make sure the tests are running as root
failTestIfNotRoot(t)
set := flag.NewFlagSet("test", 0)
globalSet := flag.NewFlagSet("test", 0)
globalSet.String("root", "", "path to the root directory in which data, including images, is stored")
globalCtx := cli.NewContext(nil, globalSet, nil)
command := cli.Command{Name: "imagesCommand"}
c := cli.NewContext(nil, set, globalCtx)
c.Command = command
config, err := getConfig(c)
if err != nil {
t.Fatal(err)
}
_, err = getStore(config)
if err != nil {
t.Error(err)
}
}
func failTestIfNotRoot(t *testing.T) {
u, err := user.Current()
if err != nil {
t.Log("Could not determine user. Running without root may cause tests to fail")
} else if u.Uid != "0" {
t.Fatal("tests will fail unless run as root")
}
}
func getStoreForTests() (storage.Store, error) {
set := flag.NewFlagSet("test", 0)
globalSet := flag.NewFlagSet("test", 0)
globalSet.String("root", "", "path to the root directory in which data, including images, is stored")
globalCtx := cli.NewContext(nil, globalSet, nil)
command := cli.Command{Name: "testCommand"}
c := cli.NewContext(nil, set, globalCtx)
c.Command = command
config, err := getConfig(c)
if err != nil {
return nil, err
}
return getStore(config)
}
func pullTestImage(name string) error {
cmd := exec.Command("crioctl", "image", "pull", name)
err := cmd.Run()
if err != nil {
return err
}
return nil
}

128
cmd/kpod/diff.go Normal file

@ -0,0 +1,128 @@
package main
import (
"fmt"
"github.com/containers/storage/pkg/archive"
"github.com/kubernetes-incubator/cri-o/cmd/kpod/formats"
"github.com/kubernetes-incubator/cri-o/libkpod"
"github.com/pkg/errors"
"github.com/urfave/cli"
)
type diffJSONOutput struct {
Changed []string `json:"changed,omitempty"`
Added []string `json:"added,omitempty"`
Deleted []string `json:"deleted,omitempty"`
}
type diffOutputParams struct {
Change archive.ChangeType
Path string
}
type stdoutStruct struct {
output []diffOutputParams
}
func (so stdoutStruct) Out() error {
for _, d := range so.output {
fmt.Printf("%s %s\n", d.Change, d.Path)
}
return nil
}
var (
diffFlags = []cli.Flag{
cli.BoolFlag{
Name: "archive",
Usage: "Save the diff as a tar archive",
Hidden: true,
},
cli.StringFlag{
Name: "format",
Usage: "Change the output format.",
},
}
diffDescription = fmt.Sprint(`Displays changes on a container or image's filesystem. The
container or image will be compared to its parent layer`)
diffCommand = cli.Command{
Name: "diff",
Usage: "Inspect changes on container's file systems",
Description: diffDescription,
Flags: diffFlags,
Action: diffCmd,
ArgsUsage: "ID-NAME",
}
)
func formatJSON(output []diffOutputParams) (diffJSONOutput, error) {
jsonStruct := diffJSONOutput{}
for _, output := range output {
switch output.Change {
case archive.ChangeModify:
jsonStruct.Changed = append(jsonStruct.Changed, output.Path)
case archive.ChangeAdd:
jsonStruct.Added = append(jsonStruct.Added, output.Path)
case archive.ChangeDelete:
jsonStruct.Deleted = append(jsonStruct.Deleted, output.Path)
default:
return jsonStruct, errors.Errorf("output kind %q not recognized", output.Change.String())
}
}
return jsonStruct, nil
}
func diffCmd(c *cli.Context) error {
if len(c.Args()) != 1 {
return errors.Errorf("container, layer, or image name must be specified: kpod diff [options [...]] ID-NAME")
}
config, err := getConfig(c)
if err != nil {
return errors.Wrapf(err, "could not get config")
}
server, err := libkpod.New(config)
if err != nil {
return errors.Wrapf(err, "could not get container server")
}
defer server.Shutdown()
to := c.Args().Get(0)
changes, err := server.GetDiff("", to)
if err != nil {
return errors.Wrapf(err, "could not get changes for %q", to)
}
diffOutput := []diffOutputParams{}
outputFormat := c.String("format")
for _, change := range changes {
params := diffOutputParams{
Change: change.Kind,
Path: change.Path,
}
diffOutput = append(diffOutput, params)
}
var out formats.Writer
if outputFormat != "" {
switch outputFormat {
case formats.JSONString:
data, err := formatJSON(diffOutput)
if err != nil {
return err
}
out = formats.JSONStruct{Output: data}
default:
return errors.New("only valid format for diff is 'json'")
}
} else {
out = stdoutStruct{output: diffOutput}
}
formats.Writer(out).Out()
return nil
}

271
cmd/kpod/docker/types.go Normal file

@ -0,0 +1,271 @@
package docker
//
// Types extracted from Docker
//
import (
"time"
"github.com/containers/image/pkg/strslice"
"github.com/opencontainers/go-digest"
)
// TypeLayers github.com/docker/docker/image/rootfs.go
const TypeLayers = "layers"
// V2S2MediaTypeManifest github.com/docker/distribution/manifest/schema2/manifest.go
const V2S2MediaTypeManifest = "application/vnd.docker.distribution.manifest.v2+json"
// V2S2MediaTypeImageConfig github.com/docker/distribution/manifest/schema2/manifest.go
const V2S2MediaTypeImageConfig = "application/vnd.docker.container.image.v1+json"
// V2S2MediaTypeLayer github.com/docker/distribution/manifest/schema2/manifest.go
const V2S2MediaTypeLayer = "application/vnd.docker.image.rootfs.diff.tar.gzip"
// V2S2MediaTypeUncompressedLayer github.com/docker/distribution/manifest/schema2/manifest.go
const V2S2MediaTypeUncompressedLayer = "application/vnd.docker.image.rootfs.diff.tar"
// V2S2RootFS describes an image's root filesystem
// This is currently a placeholder that only supports layers. In the future
// this can be made into an interface that supports different implementations.
// github.com/docker/docker/image/rootfs.go
type V2S2RootFS struct {
Type string `json:"type"`
DiffIDs []digest.Digest `json:"diff_ids,omitempty"`
}
// V2S2History stores build commands that were used to create an image
// github.com/docker/docker/image/image.go
type V2S2History struct {
// Created is the timestamp at which the image was created
Created time.Time `json:"created"`
// Author is the name of the author that was specified when committing the image
Author string `json:"author,omitempty"`
// CreatedBy keeps the Dockerfile command used while building the image
CreatedBy string `json:"created_by,omitempty"`
// Comment is the commit message that was set when committing the image
Comment string `json:"comment,omitempty"`
// EmptyLayer is set to true if this history item did not generate a
// layer. Otherwise, the history item is associated with the next
// layer in the RootFS section.
EmptyLayer bool `json:"empty_layer,omitempty"`
}
// ID is the content-addressable ID of an image.
// github.com/docker/docker/image/image.go
type ID digest.Digest
// HealthConfig holds configuration settings for the HEALTHCHECK feature.
// github.com/docker/docker/api/types/container/config.go
type HealthConfig struct {
// Test is the test to perform to check that the container is healthy.
// An empty slice means to inherit the default.
// The options are:
// {} : inherit healthcheck
// {"NONE"} : disable healthcheck
// {"CMD", args...} : exec arguments directly
// {"CMD-SHELL", command} : run command with system's default shell
Test []string `json:",omitempty"`
// Zero means to inherit. Durations are expressed as integer nanoseconds.
Interval time.Duration `json:",omitempty"` // Interval is the time to wait between checks.
Timeout time.Duration `json:",omitempty"` // Timeout is the time to wait before considering the check to have hung.
// Retries is the number of consecutive failures needed to consider a container as unhealthy.
// Zero means inherit.
Retries int `json:",omitempty"`
}
// PortSet is a collection of structs indexed by Port
// github.com/docker/go-connections/nat/nat.go
type PortSet map[Port]struct{}
// Port is a string containing port number and protocol in the format "80/tcp"
// github.com/docker/go-connections/nat/nat.go
type Port string
// Config contains the configuration data about a container.
// It should hold only portable information about the container.
// Here, "portable" means "independent from the host we are running on".
// Non-portable information *should* appear in HostConfig.
// All fields added to this struct must be marked `omitempty` to keep getting
// predictable hashes from the old `v1Compatibility` configuration.
// github.com/docker/docker/api/types/container/config.go
type Config struct {
Hostname string // Hostname
Domainname string // Domainname
User string // User that will run the command(s) inside the container, also support user:group
AttachStdin bool // Attach the standard input, makes possible user interaction
AttachStdout bool // Attach the standard output
AttachStderr bool // Attach the standard error
ExposedPorts PortSet `json:",omitempty"` // List of exposed ports
Tty bool // Attach standard streams to a tty, including stdin if it is not closed.
OpenStdin bool // Open stdin
StdinOnce bool // If true, close stdin after the 1 attached client disconnects.
Env []string // List of environment variable to set in the container
Cmd strslice.StrSlice // Command to run when starting the container
Healthcheck *HealthConfig `json:",omitempty"` // Healthcheck describes how to check the container is healthy
ArgsEscaped bool `json:",omitempty"` // True if command is already escaped (Windows specific)
Image string // Name of the image as it was passed by the operator (e.g. could be symbolic)
Volumes map[string]struct{} // List of volumes (mounts) used for the container
WorkingDir string // Current working directory (PWD) in which the command will be launched
Entrypoint strslice.StrSlice // Entrypoint to run when starting the container
NetworkDisabled bool `json:",omitempty"` // Is network disabled
MacAddress string `json:",omitempty"` // Mac Address of the container
OnBuild []string // ONBUILD metadata that were defined on the image Dockerfile
Labels map[string]string // List of labels set to this container
StopSignal string `json:",omitempty"` // Signal to stop a container
StopTimeout *int `json:",omitempty"` // Timeout (in seconds) to stop a container
Shell strslice.StrSlice `json:",omitempty"` // Shell for shell-form of RUN, CMD, ENTRYPOINT
}
// V1Compatibility - For non-top-level layers, create fake V1Compatibility
// strings that fit the format and don't collide with anything else, but
// don't result in runnable images on their own.
// github.com/docker/distribution/manifest/schema1/config_builder.go
type V1Compatibility struct {
ID string `json:"id"`
Parent string `json:"parent,omitempty"`
Comment string `json:"comment,omitempty"`
Created time.Time `json:"created"`
ContainerConfig struct {
Cmd []string
} `json:"container_config,omitempty"`
Author string `json:"author,omitempty"`
ThrowAway bool `json:"throwaway,omitempty"`
}
// V1Image stores the V1 image configuration.
// github.com/docker/docker/image/image.go
type V1Image struct {
// ID is a unique 64 character identifier of the image
ID string `json:"id,omitempty"`
// Parent is the ID of the parent image
Parent string `json:"parent,omitempty"`
// Comment is the commit message that was set when committing the image
Comment string `json:"comment,omitempty"`
// Created is the timestamp at which the image was created
Created time.Time `json:"created"`
// Container is the id of the container used to commit
Container string `json:"container,omitempty"`
// ContainerConfig is the configuration of the container that is committed into the image
ContainerConfig Config `json:"container_config,omitempty"`
// DockerVersion specifies the version of Docker that was used to build the image
DockerVersion string `json:"docker_version,omitempty"`
// Author is the name of the author that was specified when committing the image
Author string `json:"author,omitempty"`
// Config is the configuration of the container received from the client
Config *Config `json:"config,omitempty"`
// Architecture is the hardware that the image is built and runs on
Architecture string `json:"architecture,omitempty"`
// OS is the operating system used to build and run the image
OS string `json:"os,omitempty"`
// Size is the total size of the image including all layers it is composed of
Size int64 `json:",omitempty"`
}
// V2Image stores the image configuration
// github.com/docker/docker/image/image.go
type V2Image struct {
V1Image
Parent ID `json:"parent,omitempty"`
RootFS *V2S2RootFS `json:"rootfs,omitempty"`
History []V2S2History `json:"history,omitempty"`
OSVersion string `json:"os.version,omitempty"`
OSFeatures []string `json:"os.features,omitempty"`
// rawJSON caches the immutable JSON associated with this image.
//rawJSON []byte
// computedID is the ID computed from the hash of the image config.
// Not to be confused with the legacy V1 ID in V1Image.
//computedID ID
}
// V2Versioned provides a struct with the manifest schemaVersion and mediaType.
// Incoming content with unknown schema version can be decoded against this
// struct to check the version.
// github.com/docker/distribution/manifest/versioned.go
type V2Versioned struct {
// SchemaVersion is the image manifest schema that this image follows
SchemaVersion int `json:"schemaVersion"`
// MediaType is the media type of this schema.
MediaType string `json:"mediaType,omitempty"`
}
// V2S1FSLayer is a container struct for BlobSums defined in an image manifest
// github.com/docker/distribution/manifest/schema1/manifest.go
type V2S1FSLayer struct {
// BlobSum is the tarsum of the referenced filesystem image layer
BlobSum digest.Digest `json:"blobSum"`
}
// V2S1History stores unstructured v1 compatibility information
// github.com/docker/distribution/manifest/schema1/manifest.go
type V2S1History struct {
// V1Compatibility is the raw v1 compatibility information
V1Compatibility string `json:"v1Compatibility"`
}
// V2S1Manifest provides the base accessible fields for working with V2 image
// format in the registry.
// github.com/docker/distribution/manifest/schema1/manifest.go
type V2S1Manifest struct {
V2Versioned
// Name is the name of the image's repository
Name string `json:"name"`
// Tag is the tag of the image specified by this manifest
Tag string `json:"tag"`
// Architecture is the host architecture on which this image is intended to
// run
Architecture string `json:"architecture"`
// FSLayers is a list of filesystem layer blobSums contained in this image
FSLayers []V2S1FSLayer `json:"fsLayers"`
// History is a list of unstructured historical data for v1 compatibility
History []V2S1History `json:"history"`
}
// V2S2Descriptor describes targeted content. Used in conjunction with a blob
// store, a descriptor can be used to fetch, store and target any kind of
// blob. The struct also describes the wire protocol format. Fields should
// only be added but never changed.
// github.com/docker/distribution/blobs.go
type V2S2Descriptor struct {
// MediaType describe the type of the content. All text based formats are
// encoded as utf-8.
MediaType string `json:"mediaType,omitempty"`
// Size in bytes of content.
Size int64 `json:"size,omitempty"`
// Digest uniquely identifies the content. A byte stream can be verified
// against this digest.
Digest digest.Digest `json:"digest,omitempty"`
// URLs contains the source URLs of this content.
URLs []string `json:"urls,omitempty"`
// NOTE: Before adding a field here, please ensure that all
// other options have been exhausted. Much of the type relationships
// depend on the simplicity of this type.
}
// V2S2Manifest defines a schema2 manifest.
// github.com/docker/distribution/manifest/schema2/manifest.go
type V2S2Manifest struct {
V2Versioned
// Config references the image configuration as a blob.
Config V2S2Descriptor `json:"config"`
// Layers lists descriptors for the layers referenced by the
// configuration.
Layers []V2S2Descriptor `json:"layers"`
}
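As the comment on V2Versioned notes, content of an unknown schema version can be decoded against that struct first and then re-decoded into the matching manifest type. A minimal sketch, assuming encoding/json and fmt are available (not part of this file):
// Illustrative sketch, not part of this changeset.
func decodeManifest(raw []byte) (interface{}, error) {
    var versioned V2Versioned
    if err := json.Unmarshal(raw, &versioned); err != nil {
        return nil, err
    }
    switch versioned.SchemaVersion {
    case 1:
        m := &V2S1Manifest{}
        return m, json.Unmarshal(raw, m)
    case 2:
        m := &V2S2Manifest{}
        return m, json.Unmarshal(raw, m)
    default:
        return nil, fmt.Errorf("unsupported manifest schema version %d", versioned.SchemaVersion)
    }
}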

103
cmd/kpod/export.go Normal file

@ -0,0 +1,103 @@
package main
import (
"io"
"os"
"fmt"
"github.com/containers/storage"
"github.com/containers/storage/pkg/archive"
"github.com/pkg/errors"
"github.com/sirupsen/logrus"
"github.com/urfave/cli"
)
type exportOptions struct {
output string
container string
}
var (
exportFlags = []cli.Flag{
cli.StringFlag{
Name: "output, o",
Usage: "Write to a file, default is STDOUT",
Value: "/dev/stdout",
},
}
exportDescription = "Exports container's filesystem contents as a tar archive" +
" and saves it on the local machine."
exportCommand = cli.Command{
Name: "export",
Usage: "Export container's filesystem contents as a tar archive",
Description: exportDescription,
Flags: exportFlags,
Action: exportCmd,
ArgsUsage: "CONTAINER",
}
)
// exportCmd saves a container to a tarball on disk
func exportCmd(c *cli.Context) error {
args := c.Args()
if len(args) == 0 {
return errors.Errorf("container id must be specified")
}
if len(args) > 1 {
return errors.Errorf("too many arguments given, need 1 at most.")
}
container := args[0]
config, err := getConfig(c)
if err != nil {
return errors.Wrapf(err, "could not get config")
}
store, err := getStore(config)
if err != nil {
return err
}
output := c.String("output")
if output == "/dev/stdout" {
file := os.Stdout
if logrus.IsTerminal(file) {
return errors.Errorf("refusing to export to terminal. Use -o flag or redirect")
}
}
opts := exportOptions{
output: output,
container: container,
}
return exportContainer(store, opts)
}
// exportContainer exports the contents of a container and saves it as
// a tarball on disk
func exportContainer(store storage.Store, opts exportOptions) error {
mountPoint, err := store.Mount(opts.container, "")
if err != nil {
return errors.Wrapf(err, "error finding container %q", opts.container)
}
defer func() {
if err := store.Unmount(opts.container); err != nil {
fmt.Printf("error unmounting container %q: %v\n", opts.container, err)
}
}()
input, err := archive.Tar(mountPoint, archive.Uncompressed)
if err != nil {
return errors.Wrapf(err, "error reading container directory %q", opts.container)
}
outFile, err := os.Create(opts.output)
if err != nil {
return errors.Wrapf(err, "error creating file %q", opts.output)
}
defer outFile.Close()
_, err = io.Copy(outFile, input)
return err
}

132
cmd/kpod/formats/formats.go Normal file

@ -0,0 +1,132 @@
package formats
import (
"encoding/json"
"fmt"
"os"
"strings"
"text/tabwriter"
"text/template"
"github.com/ghodss/yaml"
"github.com/pkg/errors"
)
const (
// JSONString const to save on duplicate variable names
JSONString = "json"
// IDString const to save on duplicates for Go templates
IDString = "{{.ID}}"
)
// Writer interface for outputs
type Writer interface {
Out() error
}
// JSONStructArray for JSON output
type JSONStructArray struct {
Output []interface{}
}
// StdoutTemplateArray for Go template output
type StdoutTemplateArray struct {
Output []interface{}
Template string
Fields map[string]string
}
// JSONStruct for JSON output
type JSONStruct struct {
Output interface{}
}
// StdoutTemplate for Go template output
type StdoutTemplate struct {
Output interface{}
Template string
Fields map[string]string
}
// YAMLStruct for YAML output
type YAMLStruct struct {
Output interface{}
}
// Out method for JSON Arrays
func (j JSONStructArray) Out() error {
data, err := json.MarshalIndent(j.Output, "", " ")
if err != nil {
return err
}
fmt.Printf("%s\n", data)
return nil
}
// Out method for Go templates
func (t StdoutTemplateArray) Out() error {
w := tabwriter.NewWriter(os.Stdout, 0, 0, 3, ' ', 0)
if strings.HasPrefix(t.Template, "table") {
// replace any spaces with tabs in template so that tabwriter can align it
t.Template = strings.Replace(strings.TrimSpace(t.Template[5:]), " ", "\t", -1)
headerTmpl, err := template.New("header").Funcs(headerFunctions).Parse(t.Template)
if err != nil {
return errors.Wrapf(err, "Template parsing error")
}
err = headerTmpl.Execute(w, t.Fields)
if err != nil {
return err
}
fmt.Fprintln(w, "")
}
t.Template = strings.Replace(t.Template, " ", "\t", -1)
tmpl, err := template.New("image").Funcs(basicFunctions).Parse(t.Template)
if err != nil {
return errors.Wrapf(err, "Template parsing error")
}
for _, img := range t.Output {
basicTmpl := tmpl.Funcs(basicFunctions)
err = basicTmpl.Execute(w, img)
if err != nil {
return err
}
fmt.Fprintln(w, "")
}
return w.Flush()
}
// Out method for JSON struct
func (j JSONStruct) Out() error {
data, err := json.MarshalIndent(j.Output, "", " ")
if err != nil {
return err
}
fmt.Printf("%s\n", data)
return nil
}
// Out method for Go templates
func (t StdoutTemplate) Out() error {
tmpl, err := template.New("image").Parse(t.Template)
if err != nil {
return errors.Wrapf(err, "template parsing error")
}
err = tmpl.Execute(os.Stdout, t.Output)
if err != nil {
return err
}
fmt.Println()
return nil
}
// Out method for YAML
func (y YAMLStruct) Out() error {
var buf []byte
var err error
buf, err = yaml.Marshal(y.Output)
if err != nil {
return err
}
fmt.Println(string(buf))
return nil
}
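The kpod commands pick one of the Writer implementations above based on --format. For the template writers, a template prefixed with "table" is rendered once per element through a tabwriter, with Fields supplying the header row. A minimal sketch of driving StdoutTemplateArray directly (the row type and values are illustrative):
// Illustrative sketch, not part of this changeset.
type exampleRow struct {
    ID   string
    Name string
}

func printExampleTable() error {
    out := StdoutTemplateArray{
        Output:   []interface{}{exampleRow{ID: "1234567890ab", Name: "example"}},
        Template: "table {{.ID}}\t{{.Name}}",
        Fields:   map[string]string{"ID": "IMAGE ID", "Name": "IMAGE NAME"},
    }
    return out.Out()
}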


@ -1,4 +1,4 @@
package templates
package formats
import (
"bytes"
@ -14,7 +14,7 @@ var basicFunctions = template.FuncMap{
buf := &bytes.Buffer{}
enc := json.NewEncoder(buf)
enc.SetEscapeHTML(false)
enc.Encode(v)
_ = enc.Encode(v)
// Remove the trailing new line added by the encoder
return strings.TrimSpace(buf.String())
},
@ -31,7 +31,7 @@ var basicFunctions = template.FuncMap{
// This is a replacement of basicFunctions for header generation
// because we want the header to remain intact.
// Some functions like `split` are irrelevant so not added.
var HeaderFunctions = template.FuncMap{
var headerFunctions = template.FuncMap{
"json": func(v string) string {
return v
},

272
cmd/kpod/history.go Normal file

@ -0,0 +1,272 @@
package main
import (
"reflect"
"strconv"
"strings"
"time"
is "github.com/containers/image/storage"
"github.com/containers/image/types"
"github.com/containers/storage"
units "github.com/docker/go-units"
"github.com/kubernetes-incubator/cri-o/cmd/kpod/formats"
"github.com/kubernetes-incubator/cri-o/libpod/common"
"github.com/opencontainers/image-spec/specs-go/v1"
"github.com/pkg/errors"
"github.com/urfave/cli"
)
const (
createdByTruncLength = 45
idTruncLength = 13
)
// historyTemplateParams stores info about each layer
type historyTemplateParams struct {
ID string
Created string
CreatedBy string
Size string
Comment string
}
// historyJSONParams is only used when the JSON format is specified,
// and is better for data processing from JSON.
// historyJSONParams will be populated by data from v1.History and types.BlobInfo;
// the members of the struct are the same data types as their sources.
type historyJSONParams struct {
ID string `json:"id"`
Created *time.Time `json:"created"`
CreatedBy string `json:"createdBy"`
Size int64 `json:"size"`
Comment string `json:"comment"`
}
// historyOptions stores cli flag values
type historyOptions struct {
image string
human bool
noTrunc bool
quiet bool
format string
}
var (
historyFlags = []cli.Flag{
cli.BoolTFlag{
Name: "human, H",
Usage: "Display sizes and dates in human readable format",
},
cli.BoolFlag{
Name: "no-trunc, notruncate",
Usage: "Do not truncate the output",
},
cli.BoolFlag{
Name: "quiet, q",
Usage: "Display the numeric IDs only",
},
cli.StringFlag{
Name: "format",
Usage: "Change the output to JSON or a Go template",
},
}
historyDescription = "Displays the history of an image. The information can be printed out in an easy to read, " +
"or user specified format, and can be truncated."
historyCommand = cli.Command{
Name: "history",
Usage: "Show history of a specified image",
Description: historyDescription,
Flags: historyFlags,
Action: historyCmd,
ArgsUsage: "",
}
)
func historyCmd(c *cli.Context) error {
config, err := getConfig(c)
if err != nil {
return errors.Wrapf(err, "Could not get config")
}
store, err := getStore(config)
if err != nil {
return err
}
format := genHistoryFormat(c.Bool("quiet"))
if c.IsSet("format") {
format = c.String("format")
}
args := c.Args()
if len(args) == 0 {
return errors.Errorf("an image name must be specified")
}
if len(args) > 1 {
return errors.Errorf("Kpod history takes at most 1 argument")
}
imgName := args[0]
opts := historyOptions{
image: imgName,
human: c.BoolT("human"),
noTrunc: c.Bool("no-trunc"),
quiet: c.Bool("quiet"),
format: format,
}
return generateHistoryOutput(store, opts)
}
func genHistoryFormat(quiet bool) (format string) {
if quiet {
return formats.IDString
}
return "table {{.ID}}\t{{.Created}}\t{{.CreatedBy}}\t{{.Size}}\t{{.Comment}}\t"
}
// historyToGeneric makes an empty array of interfaces for output
func historyToGeneric(templParams []historyTemplateParams, JSONParams []historyJSONParams) (genericParams []interface{}) {
if len(templParams) > 0 {
for _, v := range templParams {
genericParams = append(genericParams, interface{}(v))
}
return
}
for _, v := range JSONParams {
genericParams = append(genericParams, interface{}(v))
}
return
}
// generate the header based on the template provided
func (h *historyTemplateParams) headerMap() map[string]string {
v := reflect.Indirect(reflect.ValueOf(h))
values := make(map[string]string)
for h := 0; h < v.NumField(); h++ {
key := v.Type().Field(h).Name
value := key
values[key] = strings.ToUpper(splitCamelCase(value))
}
return values
}
// getHistory gets the history of an image and information about its layers
func getHistory(store storage.Store, image string) ([]v1.History, []types.BlobInfo, string, error) {
ref, err := is.Transport.ParseStoreReference(store, image)
if err != nil {
return nil, nil, "", errors.Wrapf(err, "error parsing reference to image %q", image)
}
img, err := is.Transport.GetStoreImage(store, ref)
if err != nil {
return nil, nil, "", errors.Wrapf(err, "no such image %q", image)
}
systemContext := common.GetSystemContext("")
src, err := ref.NewImage(systemContext)
if err != nil {
return nil, nil, "", errors.Wrapf(err, "error instantiating image %q", image)
}
oci, err := src.OCIConfig()
if err != nil {
return nil, nil, "", err
}
return oci.History, src.LayerInfos(), img.ID, nil
}
// getHistoryTemplateOutput gets the modified history information to be printed in a human-readable format
func getHistoryTemplateOutput(history []v1.History, layers []types.BlobInfo, imageID string, opts historyOptions) (historyOutput []historyTemplateParams) {
var (
outputSize string
createdTime string
createdBy string
count = 1
)
for i := len(history) - 1; i >= 0; i-- {
if i != len(history)-1 {
imageID = "<missing>"
}
if !opts.noTrunc && i == len(history)-1 {
imageID = imageID[:idTruncLength]
}
var size int64
if !history[i].EmptyLayer {
size = layers[len(layers)-count].Size
count++
}
if opts.human {
createdTime = units.HumanDuration(time.Since((*history[i].Created))) + " ago"
outputSize = units.HumanSize(float64(size))
} else {
createdTime = (history[i].Created).Format(time.RFC3339)
outputSize = strconv.FormatInt(size, 10)
}
createdBy = strings.Join(strings.Fields(history[i].CreatedBy), " ")
if !opts.noTrunc && len(createdBy) > createdByTruncLength {
createdBy = createdBy[:createdByTruncLength-3] + "..."
}
params := historyTemplateParams{
ID: imageID,
Created: createdTime,
CreatedBy: createdBy,
Size: outputSize,
Comment: history[i].Comment,
}
historyOutput = append(historyOutput, params)
}
return
}
// getHistoryJSONOutput returns the history information in its raw form
func getHistoryJSONOutput(history []v1.History, layers []types.BlobInfo, imageID string) (historyOutput []historyJSONParams) {
count := 1
for i := len(history) - 1; i >= 0; i-- {
var size int64
if !history[i].EmptyLayer {
size = layers[len(layers)-count].Size
count++
}
params := historyJSONParams{
ID: imageID,
Created: history[i].Created,
CreatedBy: history[i].CreatedBy,
Size: size,
Comment: history[i].Comment,
}
historyOutput = append(historyOutput, params)
}
return
}
// generateHistoryOutput generates the history based on the format given
func generateHistoryOutput(store storage.Store, opts historyOptions) error {
history, layers, imageID, err := getHistory(store, opts.image)
if err != nil {
return errors.Wrapf(err, "error getting history of image %q", opts.image)
}
if len(history) == 0 {
return nil
}
var out formats.Writer
switch opts.format {
case formats.JSONString:
historyOutput := getHistoryJSONOutput(history, layers, imageID)
out = formats.JSONStructArray{Output: historyToGeneric([]historyTemplateParams{}, historyOutput)}
default:
historyOutput := getHistoryTemplateOutput(history, layers, imageID, opts)
out = formats.StdoutTemplateArray{Output: historyToGeneric(historyOutput, []historyJSONParams{}), Template: opts.format, Fields: historyOutput[0].headerMap()}
}
return formats.Writer(out).Out()
}

206
cmd/kpod/images.go Normal file

@ -0,0 +1,206 @@
package main
import (
"reflect"
"strings"
"github.com/containers/storage"
"github.com/kubernetes-incubator/cri-o/cmd/kpod/formats"
libpod "github.com/kubernetes-incubator/cri-o/libpod/images"
digest "github.com/opencontainers/go-digest"
"github.com/pkg/errors"
"github.com/urfave/cli"
)
var (
imagesFlags = []cli.Flag{
cli.BoolFlag{
Name: "quiet, q",
Usage: "display only image IDs",
},
cli.BoolFlag{
Name: "noheading, n",
Usage: "do not print column headings",
},
cli.BoolFlag{
Name: "no-trunc, notruncate",
Usage: "do not truncate output",
},
cli.BoolFlag{
Name: "digests",
Usage: "show digests",
},
cli.StringFlag{
Name: "format",
Usage: "Change the output format to JSON or a Go template",
},
cli.StringFlag{
Name: "filter, f",
Usage: "filter output based on conditions provided (default [])",
},
}
imagesDescription = "lists locally stored images."
imagesCommand = cli.Command{
Name: "images",
Usage: "list images in local storage",
Description: imagesDescription,
Flags: imagesFlags,
Action: imagesCmd,
ArgsUsage: "",
}
)
func imagesCmd(c *cli.Context) error {
config, err := getConfig(c)
if err != nil {
return errors.Wrapf(err, "Could not get config")
}
store, err := getStore(config)
if err != nil {
return err
}
quiet := false
if c.IsSet("quiet") {
quiet = c.Bool("quiet")
}
noheading := false
if c.IsSet("noheading") {
noheading = c.Bool("noheading")
}
truncate := true
if c.IsSet("no-trunc") {
truncate = !c.Bool("no-trunc")
}
digests := false
if c.IsSet("digests") {
digests = c.Bool("digests")
}
outputFormat := genImagesFormat(quiet, truncate, digests)
if c.IsSet("format") {
outputFormat = c.String("format")
}
name := ""
if len(c.Args()) == 1 {
name = c.Args().Get(0)
} else if len(c.Args()) > 1 {
return errors.New("'kpod images' requires at most 1 argument")
}
var params *libpod.FilterParams
if c.IsSet("filter") {
params, err = libpod.ParseFilter(store, c.String("filter"))
if err != nil {
return errors.Wrapf(err, "error parsing filter")
}
} else {
params = nil
}
imageList, err := libpod.GetImagesMatchingFilter(store, params, name)
if err != nil {
return errors.Wrapf(err, "could not get list of images matching filter")
}
return outputImages(store, imageList, truncate, digests, quiet, outputFormat, noheading)
}
func genImagesFormat(quiet, truncate, digests bool) (format string) {
if quiet {
return formats.IDString
}
if truncate {
format = "table {{ .ID | printf \"%-20.12s\" }} "
} else {
format = "table {{ .ID | printf \"%-64s\" }} "
}
format += "{{ .Name | printf \"%-56s\" }} "
if digests {
format += "{{ .Digest | printf \"%-71s \"}} "
}
format += "{{ .CreatedAt | printf \"%-22s\" }} {{.Size}}"
return
}
func outputImages(store storage.Store, images []storage.Image, truncate, digests, quiet bool, outputFormat string, noheading bool) error {
imageOutput := []imageOutputParams{}
lastID := ""
for _, img := range images {
if quiet && lastID == img.ID {
continue // quiet should not show the same ID multiple times
}
createdTime := img.Created
names := []string{""}
if len(img.Names) > 0 {
names = img.Names
}
info, imageDigest, size, _ := libpod.InfoAndDigestAndSize(store, img)
if info != nil {
createdTime = info.Created
}
params := imageOutputParams{
ID: img.ID,
Name: names,
Digest: imageDigest,
CreatedAt: createdTime.Format("Jan 2, 2006 15:04"),
Size: libpod.FormattedSize(float64(size)),
}
imageOutput = append(imageOutput, params)
}
var out formats.Writer
switch outputFormat {
case formats.JSONString:
out = formats.JSONStructArray{Output: toGeneric(imageOutput)}
default:
if len(imageOutput) == 0 {
out = formats.StdoutTemplateArray{}
} else {
out = formats.StdoutTemplateArray{Output: toGeneric(imageOutput), Template: outputFormat, Fields: imageOutput[0].headerMap()}
}
}
formats.Writer(out).Out()
return nil
}
type imageOutputParams struct {
ID string `json:"id"`
Name []string `json:"names"`
Digest digest.Digest `json:"digest"`
CreatedAt string `json:"created"`
Size string `json:"size"`
}
func toGeneric(params []imageOutputParams) []interface{} {
genericParams := make([]interface{}, len(params))
for i, v := range params {
genericParams[i] = interface{}(v)
}
return genericParams
}
func (i *imageOutputParams) headerMap() map[string]string {
v := reflect.Indirect(reflect.ValueOf(i))
values := make(map[string]string)
for i := 0; i < v.NumField(); i++ {
key := v.Type().Field(i).Name
value := key
if value == "ID" || value == "Name" {
value = "Image" + value
}
values[key] = strings.ToUpper(splitCamelCase(value))
}
return values
}

195
cmd/kpod/info.go Normal file

@ -0,0 +1,195 @@
package main
import (
"bytes"
"fmt"
"io/ioutil"
"os"
"runtime"
"github.com/docker/docker/pkg/system"
"github.com/kubernetes-incubator/cri-o/cmd/kpod/formats"
"github.com/pkg/errors"
"github.com/urfave/cli"
)
var (
infoDescription = "display system information"
infoCommand = cli.Command{
Name: "info",
Usage: infoDescription,
Description: `Information displayed here pertains to the host, current storage stats, and the build of kpod. Useful for the user and when reporting issues.`,
Flags: infoFlags,
Action: infoCmd,
ArgsUsage: "",
}
infoFlags = []cli.Flag{
cli.BoolFlag{
Name: "debug, D",
Usage: "display additional debug information",
},
cli.StringFlag{
Name: "format",
Usage: "Change the output format to JSON or a Go template",
},
}
)
func infoCmd(c *cli.Context) error {
info := map[string]interface{}{}
infoGivers := []infoGiverFunc{
storeInfo,
hostInfo,
}
if c.Bool("debug") {
infoGivers = append(infoGivers, debugInfo)
}
for _, giver := range infoGivers {
thisName, thisInfo, err := giver(c)
if err != nil {
info[thisName] = infoErr(err)
continue
}
info[thisName] = thisInfo
}
var out formats.Writer
infoOutputFormat := c.String("format")
switch infoOutputFormat {
case formats.JSONString:
out = formats.JSONStruct{Output: info}
case "":
out = formats.YAMLStruct{Output: info}
default:
out = formats.StdoutTemplate{Output: info, Template: infoOutputFormat}
}
formats.Writer(out).Out()
return nil
}
func infoErr(err error) map[string]interface{} {
return map[string]interface{}{
"error": err.Error(),
}
}
type infoGiverFunc func(c *cli.Context) (name string, info map[string]interface{}, err error)
// top-level "debug" info
func debugInfo(c *cli.Context) (string, map[string]interface{}, error) {
info := map[string]interface{}{}
info["compiler"] = runtime.Compiler
info["go version"] = runtime.Version()
return "debug", info, nil
}
// top-level "host" info
func hostInfo(c *cli.Context) (string, map[string]interface{}, error) {
// lets say OS, arch, number of cpus, amount of memory, maybe os distribution/version, hostname, kernel version, uptime
info := map[string]interface{}{}
info["os"] = runtime.GOOS
info["arch"] = runtime.GOARCH
info["cpus"] = runtime.NumCPU()
mi, err := system.ReadMemInfo()
if err != nil {
info["meminfo"] = infoErr(err)
} else {
// TODO this might be a place for github.com/dustin/go-humanize
info["MemTotal"] = mi.MemTotal
info["MemFree"] = mi.MemFree
info["SwapTotal"] = mi.SwapTotal
info["SwapFree"] = mi.SwapFree
}
if kv, err := readKernelVersion(); err != nil {
info["kernel"] = infoErr(err)
} else {
info["kernel"] = kv
}
if up, err := readUptime(); err != nil {
info["uptime"] = infoErr(err)
} else {
info["uptime"] = up
}
if host, err := os.Hostname(); err != nil {
info["hostname"] = infoErr(err)
} else {
info["hostname"] = host
}
return "host", info, nil
}
// top-level "store" info
func storeInfo(c *cli.Context) (string, map[string]interface{}, error) {
storeStr := "store"
config, err := getConfig(c)
if err != nil {
return storeStr, nil, errors.Wrapf(err, "Could not get config")
}
store, err := getStore(config)
if err != nil {
return storeStr, nil, err
}
// lets say storage driver in use, number of images, number of containers
info := map[string]interface{}{}
info["GraphRoot"] = store.GraphRoot()
info["RunRoot"] = store.RunRoot()
info["GraphDriverName"] = store.GraphDriverName()
info["GraphOptions"] = store.GraphOptions()
statusPairs, err := store.Status()
if err != nil {
return storeStr, nil, err
}
status := map[string]string{}
for _, pair := range statusPairs {
status[pair[0]] = pair[1]
}
info["GraphStatus"] = status
images, err := store.Images()
if err != nil {
info["ImageStore"] = infoErr(err)
} else {
info["ImageStore"] = map[string]interface{}{
"number": len(images),
}
}
containers, err := store.Containers()
if err != nil {
info["ContainerStore"] = infoErr(err)
} else {
info["ContainerStore"] = map[string]interface{}{
"number": len(containers),
}
}
return storeStr, info, nil
}
func readKernelVersion() (string, error) {
buf, err := ioutil.ReadFile("/proc/version")
if err != nil {
return "", err
}
f := bytes.Fields(buf)
if len(f) < 3 {
return string(bytes.TrimSpace(buf)), nil
}
return string(f[2]), nil
}
func readUptime() (string, error) {
buf, err := ioutil.ReadFile("/proc/uptime")
if err != nil {
return "", err
}
f := bytes.Fields(buf)
if len(f) < 1 {
return "", fmt.Errorf("invalid uptime")
}
return string(f[0]), nil
}

117
cmd/kpod/inspect.go Normal file

@ -0,0 +1,117 @@
package main
import (
"github.com/kubernetes-incubator/cri-o/cmd/kpod/formats"
"github.com/kubernetes-incubator/cri-o/libkpod"
"github.com/kubernetes-incubator/cri-o/libpod/images"
"github.com/pkg/errors"
"github.com/urfave/cli"
)
const (
inspectTypeContainer = "container"
inspectTypeImage = "image"
inspectAll = "all"
)
var (
inspectFlags = []cli.Flag{
cli.StringFlag{
Name: "type, t",
Value: inspectAll,
Usage: "Return JSON for specified type, (e.g image, container or task)",
},
cli.StringFlag{
Name: "format, f",
Usage: "Change the output format to a Go template",
},
cli.BoolFlag{
Name: "size",
Usage: "Display total file size if the type is container",
},
}
inspectDescription = "This displays the low-level information on containers and images identified by name or ID. By default, this will render all results in a JSON array. If the container and image have the same name, this will return container JSON for unspecified type."
inspectCommand = cli.Command{
Name: "inspect",
Usage: "Displays the configuration of a container or image",
Description: inspectDescription,
Flags: inspectFlags,
Action: inspectCmd,
ArgsUsage: "CONTAINER-OR-IMAGE",
}
)
func inspectCmd(c *cli.Context) error {
args := c.Args()
if len(args) == 0 {
return errors.Errorf("container or image name must be specified: kpod inspect [options [...]] name")
}
if len(args) > 1 {
return errors.Errorf("too many arguments specified")
}
itemType := c.String("type")
size := c.Bool("size")
switch itemType {
case inspectTypeContainer:
case inspectTypeImage:
case inspectAll:
default:
return errors.Errorf("the only recognized types are %q, %q, and %q", inspectTypeContainer, inspectTypeImage, inspectAll)
}
name := args[0]
config, err := getConfig(c)
if err != nil {
return errors.Wrapf(err, "Could not get config")
}
server, err := libkpod.New(config)
if err != nil {
return errors.Wrapf(err, "could not get container server")
}
defer server.Shutdown()
if err = server.Update(); err != nil {
return errors.Wrapf(err, "could not update list of containers")
}
outputFormat := c.String("format")
var data interface{}
switch itemType {
case inspectTypeContainer:
data, err = server.GetContainerData(name, size)
if err != nil {
return errors.Wrapf(err, "error parsing container data")
}
case inspectTypeImage:
data, err = images.GetData(server.Store(), name)
if err != nil {
return errors.Wrapf(err, "error parsing image data")
}
case inspectAll:
ctrData, err := server.GetContainerData(name, size)
if err != nil {
imgData, err := images.GetData(server.Store(), name)
if err != nil {
return errors.Wrapf(err, "error parsing container or image data")
}
data = imgData
} else {
data = ctrData
}
}
var out formats.Writer
if outputFormat != "" && outputFormat != formats.JSONString {
//template
out = formats.StdoutTemplate{Output: data, Template: outputFormat}
} else {
// default is json output
out = formats.JSONStruct{Output: data}
}
return formats.Writer(out).Out()
}

109
cmd/kpod/load.go Normal file
View file

@ -0,0 +1,109 @@
package main
import (
"io"
"os"
"io/ioutil"
"github.com/containers/storage"
"github.com/kubernetes-incubator/cri-o/libpod/common"
"github.com/kubernetes-incubator/cri-o/libpod/images"
"github.com/pkg/errors"
"github.com/urfave/cli"
)
type loadOptions struct {
input string
quiet bool
}
var (
loadFlags = []cli.Flag{
cli.StringFlag{
Name: "input, i",
Usage: "Read from archive file, default is STDIN",
Value: "/dev/stdin",
},
cli.BoolFlag{
Name: "quiet, q",
Usage: "Suppress the output",
},
}
loadDescription = "Loads the image from docker-archive stored on the local machine."
loadCommand = cli.Command{
Name: "load",
Usage: "load an image from docker archive",
Description: loadDescription,
Flags: loadFlags,
Action: loadCmd,
ArgsUsage: "",
}
)
// loadCmd gets the image/file to be loaded from the command line
// and calls loadImage to load the image to containers-storage
func loadCmd(c *cli.Context) error {
config, err := getConfig(c)
if err != nil {
return errors.Wrapf(err, "could not get config")
}
store, err := getStore(config)
if err != nil {
return err
}
args := c.Args()
if len(args) > 0 {
return errors.New("too many arguments. Requires exactly 1")
}
input := c.String("input")
quiet := c.Bool("quiet")
if input == "/dev/stdin" {
fi, err := os.Stdin.Stat()
if err != nil {
return err
}
// checking if loading from pipe
if !fi.Mode().IsRegular() {
outFile, err := ioutil.TempFile("/var/tmp", "kpod")
if err != nil {
return errors.Errorf("error creating file %v", err)
}
defer outFile.Close()
defer os.Remove(outFile.Name())
inFile, err := os.OpenFile(input, 0, 0666)
if err != nil {
return errors.Errorf("error reading file %v", err)
}
defer inFile.Close()
_, err = io.Copy(outFile, inFile)
if err != nil {
return errors.Errorf("error copying file %v", err)
}
input = outFile.Name()
}
}
opts := loadOptions{
input: input,
quiet: quiet,
}
return loadImage(store, opts)
}
// loadImage loads the image from docker-archive or oci to containers-storage
// using the pullImage function
func loadImage(store storage.Store, opts loadOptions) error {
systemContext := common.GetSystemContext("")
src := dockerArchive + opts.input
return images.PullImage(store, src, false, opts.quiet, systemContext)
}

89
cmd/kpod/logs.go Normal file
View file

@ -0,0 +1,89 @@
package main
import (
"fmt"
"time"
"github.com/kubernetes-incubator/cri-o/libkpod"
"github.com/pkg/errors"
"github.com/urfave/cli"
)
var (
logsFlags = []cli.Flag{
cli.BoolFlag{
Name: "details",
Usage: "Show extra details provided to the logs",
Hidden: true,
},
cli.BoolFlag{
Name: "follow, f",
Usage: "Follow log output. The default is false",
},
cli.StringFlag{
Name: "since",
Usage: "Show logs since TIMESTAMP",
},
cli.Uint64Flag{
Name: "tail",
Usage: "Output the specified number of LINES at the end of the logs. Defaults to 0, which prints all lines",
},
}
logsDescription = "The kpod logs command batch-retrieves whatever logs are present for a container at the time of execution. This does not guarantee execution" +
"order when combined with kpod run (i.e. your run may not have generated any logs at the time you execute kpod logs"
logsCommand = cli.Command{
Name: "logs",
Usage: "Fetch the logs of a container",
Description: logsDescription,
Flags: logsFlags,
Action: logsCmd,
ArgsUsage: "CONTAINER",
}
)
func logsCmd(c *cli.Context) error {
args := c.Args()
if len(args) != 1 {
return errors.Errorf("'kpod logs' requires exactly one container name/ID")
}
container := c.Args().First()
var opts libkpod.LogOptions
opts.Details = c.Bool("details")
opts.Follow = c.Bool("follow")
opts.SinceTime = time.Time{}
if c.IsSet("since") {
// parse time, error out if something is wrong
since, err := time.Parse("2006-01-02T15:04:05.999999999-07:00", c.String("since"))
if err != nil {
return errors.Wrapf(err, "could not parse time: %q", c.String("since"))
}
opts.SinceTime = since
}
opts.Tail = c.Uint64("tail")
config, err := getConfig(c)
if err != nil {
return errors.Wrapf(err, "could not get config")
}
server, err := libkpod.New(config)
if err != nil {
return errors.Wrapf(err, "could not create container server")
}
defer server.Shutdown()
err = server.Update()
if err != nil {
return errors.Wrapf(err, "could not update list of containers")
}
logs := make(chan string)
errChan := make(chan error, 1)
go func() {
// send the result over a channel instead of assigning to a shared variable, avoiding a data race
errChan <- server.GetLogs(container, logs, opts)
}()
printLogs(logs)
return <-errChan
}
func printLogs(logs chan string) {
for line := range logs {
fmt.Println(line)
}
}
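
The --since value is parsed with the layout 2006-01-02T15:04:05.999999999-07:00, i.e. an RFC 3339 timestamp with optional nanoseconds and a numeric zone offset. A standalone stdlib check of an acceptable value (the timestamp itself is hypothetical):

package main

import (
    "fmt"
    "time"
)

func main() {
    // Same reference layout the logs command passes to time.Parse for --since.
    const layout = "2006-01-02T15:04:05.999999999-07:00"
    since, err := time.Parse(layout, "2017-08-25T13:45:00.123456789-04:00")
    if err != nil {
        fmt.Println("unparseable --since value:", err)
        return
    }
    fmt.Println("showing logs since", since.UTC())
}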

94
cmd/kpod/main.go Normal file
View file

@ -0,0 +1,94 @@
package main
import (
"os"
"github.com/containers/storage/pkg/reexec"
"github.com/sirupsen/logrus"
"github.com/urfave/cli"
)
func main() {
if reexec.Init() {
return
}
app := cli.NewApp()
app.Name = "kpod"
app.Usage = "manage pods and images"
app.Version = "0.0.1"
app.Commands = []cli.Command{
diffCommand,
exportCommand,
historyCommand,
imagesCommand,
infoCommand,
inspectCommand,
loadCommand,
logsCommand,
mountCommand,
psCommand,
pullCommand,
pushCommand,
renameCommand,
rmCommand,
rmiCommand,
saveCommand,
statsCommand,
tagCommand,
umountCommand,
versionCommand,
}
app.Before = func(c *cli.Context) error {
logrus.SetLevel(logrus.ErrorLevel)
if c.GlobalBool("debug") {
logrus.SetLevel(logrus.DebugLevel)
}
return nil
}
app.After = func(*cli.Context) error {
// called by Run() when the command handler succeeds
shutdownStores()
return nil
}
cli.OsExiter = func(code int) {
// called by Run() when the command fails, bypassing After()
shutdownStores()
os.Exit(code)
}
app.Flags = []cli.Flag{
cli.StringFlag{
Name: "config, c",
Usage: "path of a config file detailing container server configuration options",
},
cli.BoolFlag{
Name: "debug",
Usage: "print debugging information",
},
cli.StringFlag{
Name: "root",
Usage: "path to the root directory in which data, including images, is stored",
},
cli.StringFlag{
Name: "runroot",
Usage: "path to the 'run directory' where all state information is stored",
},
cli.StringFlag{
Name: "runtime",
Usage: "path to the OCI-compatible binary used to run containers, default is /usr/bin/runc",
},
cli.StringFlag{
Name: "storage-driver, s",
Usage: "select which storage driver is used to manage storage of images and containers (default is overlay2)",
},
cli.StringSliceFlag{
Name: "storage-opt",
Usage: "used to pass an option to the storage driver",
},
}
if err := app.Run(os.Args); err != nil {
logrus.Error(err.Error())
cli.OsExiter(1)
}
}

118
cmd/kpod/mount.go Normal file
View file

@ -0,0 +1,118 @@
package main
import (
js "encoding/json"
"fmt"
of "github.com/kubernetes-incubator/cri-o/cmd/kpod/formats"
"github.com/pkg/errors"
"github.com/urfave/cli"
)
var (
mountDescription = `
kpod mount
Lists all mounted containers mount points
kpod mount CONTAINER-NAME-OR-ID
Mounts the specified container and outputs the mountpoint
`
mountFlags = []cli.Flag{
cli.BoolFlag{
Name: "notruncate",
Usage: "do not truncate output",
},
cli.StringFlag{
Name: "label",
Usage: "SELinux label for the mount point",
},
cli.StringFlag{
Name: "format",
Usage: "Change the output format to Go template",
},
}
mountCommand = cli.Command{
Name: "mount",
Usage: "Mount a working container's root filesystem",
Description: mountDescription,
Action: mountCmd,
ArgsUsage: "[CONTAINER-NAME-OR-ID]",
Flags: mountFlags,
}
)
// jsonMountPoint stores info about each mounted container's layer
type jsonMountPoint struct {
ID string `json:"id"`
Names []string `json:"names"`
MountPoint string `json:"mountpoint"`
}
func mountCmd(c *cli.Context) error {
formats := map[string]bool{
"": true,
of.JSONString: true,
}
args := c.Args()
json := c.String("format") == of.JSONString
if !formats[c.String("format")] {
return errors.Errorf("%q is not a supported format", c.String("format"))
}
if len(args) > 1 {
return errors.Errorf("too many arguments specified")
}
config, err := getConfig(c)
if err != nil {
return errors.Wrapf(err, "Could not get config")
}
store, err := getStore(config)
if err != nil {
return errors.Wrapf(err, "error getting store")
}
if len(args) == 1 {
if json {
return errors.Errorf("json option cannot be used with a container id")
}
mountPoint, err := store.Mount(args[0], c.String("label"))
if err != nil {
return errors.Wrapf(err, "error finding container %q", args[0])
}
fmt.Printf("%s\n", mountPoint)
} else {
jsonMountPoints := []jsonMountPoint{}
containers, err2 := store.Containers()
if err2 != nil {
return errors.Wrapf(err2, "error reading list of all containers")
}
for _, container := range containers {
layer, err := store.Layer(container.LayerID)
if err != nil {
return errors.Wrapf(err, "error finding layer %q for container %q", container.LayerID, container.ID)
}
if layer.MountPoint == "" {
continue
}
if json {
jsonMountPoints = append(jsonMountPoints, jsonMountPoint{ID: container.ID, Names: container.Names, MountPoint: layer.MountPoint})
continue
}
if c.Bool("notruncate") {
fmt.Printf("%-64s %s\n", container.ID, layer.MountPoint)
} else {
fmt.Printf("%-12.12s %s\n", container.ID, layer.MountPoint)
}
}
if json {
data, err := js.MarshalIndent(jsonMountPoints, "", " ")
if err != nil {
return err
}
fmt.Printf("%s\n", data)
}
}
return nil
}
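
With --format json and no container argument, every mounted container is reported as a jsonMountPoint. A standalone sketch of the resulting output shape, marshaling the same struct layout with made-up sample values:

package main

import (
    "encoding/json"
    "fmt"
)

// Same field layout as jsonMountPoint above.
type jsonMountPoint struct {
    ID         string   `json:"id"`
    Names      []string `json:"names"`
    MountPoint string   `json:"mountpoint"`
}

func main() {
    points := []jsonMountPoint{
        {ID: "8abf2e07b1bf", Names: []string{"webserver"}, MountPoint: "/var/lib/containers/storage/overlay/1234/merged"},
    }
    data, err := json.MarshalIndent(points, "", "    ")
    if err != nil {
        panic(err)
    }
    fmt.Printf("%s\n", data)
}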

582
cmd/kpod/ps.go Normal file
View file

@ -0,0 +1,582 @@
package main
import (
"reflect"
"regexp"
"strconv"
"strings"
"time"
"github.com/docker/go-units"
specs "github.com/opencontainers/runtime-spec/specs-go"
"github.com/sirupsen/logrus"
"k8s.io/apimachinery/pkg/fields"
"github.com/kubernetes-incubator/cri-o/cmd/kpod/formats"
"github.com/kubernetes-incubator/cri-o/libkpod"
"github.com/kubernetes-incubator/cri-o/oci"
"github.com/pkg/errors"
"github.com/urfave/cli"
)
type psOptions struct {
all bool
filter string
format string
last int
latest bool
noTrunc bool
quiet bool
size bool
label string
}
type psTemplateParams struct {
ID string
Image string
Command string
CreatedAt string
RunningFor string
Status string
Ports string
Size string
Names string
Labels string
Mounts string
}
// psJSONParams is only used when the JSON format is specified,
// and is better for data processing from JSON.
// psJSONParams will be populated by data from libkpod.ContainerData,
// the members of the struct are the same data types as their sources.
type psJSONParams struct {
ID string `json:"id"`
Image string `json:"image"`
ImageID string `json:"image_id"`
Command string `json:"command"`
CreatedAt time.Time `json:"createdAt"`
RunningFor time.Duration `json:"runningFor"`
Status string `json:"status"`
Ports map[string]struct{} `json:"ports"`
Size uint `json:"size"`
Names string `json:"names"`
Labels fields.Set `json:"labels"`
Mounts []specs.Mount `json:"mounts"`
ContainerRunning bool `json:"ctrRunning"`
}
const runningState = "running"
var (
psFlags = []cli.Flag{
cli.BoolFlag{
Name: "all, a",
Usage: "Show all the containers, default is only running containers",
},
cli.StringFlag{
Name: "filter, f",
Usage: "Filter output based on conditions given",
},
cli.StringFlag{
Name: "format",
Usage: "Pretty-print containers to JSON or using a Go template",
},
cli.IntFlag{
Name: "last, n",
Usage: "Print the n last created containers (all states)",
Value: -1,
},
cli.BoolFlag{
Name: "latest, l",
Usage: "Show the latest container created (all states)",
},
cli.BoolFlag{
Name: "no-trunc",
Usage: "Display the extended information",
},
cli.BoolFlag{
Name: "quiet, q",
Usage: "Print the numeric IDs of the containers only",
},
cli.BoolFlag{
Name: "size, s",
Usage: "Display the total file sizes",
},
}
psDescription = "Prints out information about the containers"
psCommand = cli.Command{
Name: "ps",
Usage: "List containers",
Description: psDescription,
Flags: psFlags,
Action: psCmd,
ArgsUsage: "",
}
)
func psCmd(c *cli.Context) error {
config, err := getConfig(c)
if err != nil {
return errors.Wrapf(err, "could not get config")
}
server, err := libkpod.New(config)
if err != nil {
return errors.Wrapf(err, "error creating server")
}
if err := server.Update(); err != nil {
return errors.Wrapf(err, "error updating list of containers")
}
if len(c.Args()) > 0 {
return errors.Errorf("too many arguments, ps takes no arguments")
}
format := genPsFormat(c.Bool("quiet"), c.Bool("size"))
if c.IsSet("format") {
format = c.String("format")
}
opts := psOptions{
all: c.Bool("all"),
filter: c.String("filter"),
format: format,
last: c.Int("last"),
latest: c.Bool("latest"),
noTrunc: c.Bool("no-trunc"),
quiet: c.Bool("quiet"),
size: c.Bool("size"),
}
// all, latest, and last are mutually exclusive. Only one flag can be used at a time
exclusiveOpts := 0
if opts.last >= 0 {
exclusiveOpts++
}
if opts.latest {
exclusiveOpts++
}
if opts.all {
exclusiveOpts++
}
if exclusiveOpts > 1 {
return errors.Errorf("Last, latest and all are mutually exclusive")
}
containers, err := server.ListContainers()
if err != nil {
return errors.Wrapf(err, "error getting containers from server")
}
var params *FilterParamsPS
if opts.filter != "" {
params, err = parseFilter(opts.filter, containers)
if err != nil {
return errors.Wrapf(err, "error parsing filter")
}
} else {
params = nil
}
containerList := getContainersMatchingFilter(containers, params, server)
return generatePsOutput(containerList, server, opts)
}
// generate the template based on conditions given
func genPsFormat(quiet, size bool) (format string) {
if quiet {
return formats.IDString
}
format = "table {{.ID}}\t{{.Image}}\t{{.Command}}\t{{.CreatedAt}}\t{{.Status}}\t{{.Ports}}\t{{.Names}}\t"
if size {
format += "{{.Size}}\t"
}
return
}
func psToGeneric(templParams []psTemplateParams, JSONParams []psJSONParams) (genericParams []interface{}) {
if len(templParams) > 0 {
for _, v := range templParams {
genericParams = append(genericParams, interface{}(v))
}
return
}
for _, v := range JSONParams {
genericParams = append(genericParams, interface{}(v))
}
return
}
// generate the accurate header based on template given
func (p *psTemplateParams) headerMap() map[string]string {
v := reflect.Indirect(reflect.ValueOf(p))
values := make(map[string]string)
for i := 0; i < v.NumField(); i++ {
key := v.Type().Field(i).Name
value := key
if value == "ID" {
value = "Container" + value
}
values[key] = strings.ToUpper(splitCamelCase(value))
}
return values
}
// getContainers gets the containers that match the flags given
func getContainers(containers []*libkpod.ContainerData, opts psOptions) []*libkpod.ContainerData {
var containersOutput []*libkpod.ContainerData
if opts.last >= 0 && opts.last < len(containers) {
for i := 0; i < opts.last; i++ {
containersOutput = append(containersOutput, containers[i])
}
return containersOutput
}
if opts.latest {
return []*libkpod.ContainerData{containers[0]}
}
if opts.all || opts.last >= len(containers) {
return containers
}
for _, ctr := range containers {
if ctr.State.Status == runningState {
containersOutput = append(containersOutput, ctr)
}
}
return containersOutput
}
// getTemplateOutput returns the modified container information
func getTemplateOutput(containers []*libkpod.ContainerData, opts psOptions) (psOutput []psTemplateParams) {
var status string
for _, ctr := range containers {
ctrID := ctr.ID
runningFor := units.HumanDuration(time.Since(ctr.State.Created))
createdAt := runningFor + " ago"
command := getCommand(ctr.ImageCreatedBy)
imageName := ctr.FromImage
mounts := getMounts(ctr.Mounts, opts.noTrunc)
ports := getPorts(ctr.Config.ExposedPorts)
size := units.HumanSize(float64(ctr.SizeRootFs))
labels := getLabels(ctr.Labels)
switch ctr.State.Status {
case "stopped":
status = "Exited (" + strconv.FormatInt(int64(ctr.State.ExitCode), 10) + ") " + runningFor + " ago"
case runningState:
status = "Up " + runningFor + " ago"
default:
status = "Created"
}
if !opts.noTrunc {
ctrID = ctr.ID[:idTruncLength]
imageName = getImageName(ctr.FromImage)
}
params := psTemplateParams{
ID: ctrID,
Image: imageName,
Command: command,
CreatedAt: createdAt,
RunningFor: runningFor,
Status: status,
Ports: ports,
Size: size,
Names: ctr.Name,
Labels: labels,
Mounts: mounts,
}
psOutput = append(psOutput, params)
}
return
}
// getJSONOutput returns the container info in its raw form
func getJSONOutput(containers []*libkpod.ContainerData) (psOutput []psJSONParams) {
for _, ctr := range containers {
params := psJSONParams{
ID: ctr.ID,
Image: ctr.FromImage,
ImageID: ctr.FromImageID,
Command: getCommand(ctr.ImageCreatedBy),
CreatedAt: ctr.State.Created,
RunningFor: time.Since(ctr.State.Created),
Status: ctr.State.Status,
Ports: ctr.Config.ExposedPorts,
Size: ctr.SizeRootFs,
Names: ctr.Name,
Labels: ctr.Labels,
Mounts: ctr.Mounts,
ContainerRunning: ctr.State.Status == runningState,
}
psOutput = append(psOutput, params)
}
return
}
func generatePsOutput(containers []*libkpod.ContainerData, server *libkpod.ContainerServer, opts psOptions) error {
containersOutput := getContainers(containers, opts)
if len(containersOutput) == 0 {
return nil
}
var out formats.Writer
switch opts.format {
case formats.JSONString:
psOutput := getJSONOutput(containersOutput)
out = formats.JSONStructArray{Output: psToGeneric([]psTemplateParams{}, psOutput)}
default:
psOutput := getTemplateOutput(containersOutput, opts)
out = formats.StdoutTemplateArray{Output: psToGeneric(psOutput, []psJSONParams{}), Template: opts.format, Fields: psOutput[0].headerMap()}
}
return formats.Writer(out).Out()
}
// getCommand gets the actual command from the whole command
func getCommand(cmd string) string {
reg, err := regexp.Compile(".*\\[|\\].*")
if err != nil {
return ""
}
arr := strings.Split(reg.ReplaceAllLiteralString(cmd, ""), ",")
return strings.Join(arr, ",")
}
// getImageName shortens the image name
func getImageName(img string) string {
arr := strings.Split(img, "/")
if arr[0] == "docker.io" && arr[1] == "library" {
img = strings.Join(arr[2:], "/")
} else if arr[0] == "docker.io" {
img = strings.Join(arr[1:], "/")
}
return img
}
// getLabels converts the labels to a string of the form "key=value, key2=value2"
func getLabels(labels fields.Set) string {
var arr []string
if len(labels) > 0 {
for key, val := range labels {
temp := key + "=" + val
arr = append(arr, temp)
}
return strings.Join(arr, ",")
}
return ""
}
// getMounts converts the volumes mounted to a string of the form "mount1, mount2"
// it truncates it if noTrunc is false
func getMounts(mounts []specs.Mount, noTrunc bool) string {
var arr []string
if len(mounts) == 0 {
return ""
}
for _, mount := range mounts {
if noTrunc {
arr = append(arr, mount.Source)
continue
}
tempArr := strings.SplitAfter(mount.Source, "/")
if len(tempArr) >= 3 {
arr = append(arr, strings.Join(tempArr[:3], ""))
} else {
arr = append(arr, mount.Source)
}
}
return strings.Join(arr, ",")
}
// getPorts converts the ports used to a string of the from "port1, port2"
func getPorts(ports map[string]struct{}) string {
var arr []string
if len(ports) == 0 {
return ""
}
for key := range ports {
arr = append(arr, key)
}
return strings.Join(arr, ",")
}
// FilterParamsPS contains the filter options for ps
type FilterParamsPS struct {
id string
label string
name string
exited int32
status string
ancestor string
before time.Time
since time.Time
volume string
}
// parseFilter takes a filter string and a list of containers and filters it
func parseFilter(filter string, containers []*oci.Container) (*FilterParamsPS, error) {
params := new(FilterParamsPS)
allFilters := strings.Split(filter, ",")
for _, param := range allFilters {
pair := strings.SplitN(param, "=", 2)
if len(pair) != 2 {
return nil, errors.Errorf("invalid filter %q, expected key=value", param)
}
switch strings.TrimSpace(pair[0]) {
case "id":
params.id = pair[1]
case "label":
params.label = pair[1]
case "name":
params.name = pair[1]
case "exited":
exitedCode, err := strconv.ParseInt(pair[1], 10, 32)
if err != nil {
return nil, errors.Errorf("exited code out of range %q", pair[1])
}
params.exited = int32(exitedCode)
case "status":
params.status = pair[1]
case "ancestor":
params.ancestor = pair[1]
case "before":
if ctr, err := findContainer(containers, pair[1]); err == nil {
params.before = ctr.CreatedAt()
} else {
return nil, errors.Wrapf(err, "no such container %q", pair[1])
}
case "since":
if ctr, err := findContainer(containers, pair[1]); err == nil {
params.since = ctr.CreatedAt()
} else {
return nil, errors.Wrapf(err, "no such container %q", pair[1])
}
case "volume":
params.volume = pair[1]
default:
return nil, errors.Errorf("invalid filter %q", pair[0])
}
}
return params, nil
}
// findContainer finds a container with a specific name or id from a list of containers
func findContainer(containers []*oci.Container, ref string) (*oci.Container, error) {
for _, ctr := range containers {
if strings.HasPrefix(ctr.ID(), ref) || ctr.Name() == ref {
return ctr, nil
}
}
return nil, errors.Errorf("could not find container")
}
// matchesFilter checks if a container matches all the filter parameters
func matchesFilter(ctrData *libkpod.ContainerData, params *FilterParamsPS) bool {
if params == nil {
return true
}
if params.id != "" && !matchesID(ctrData, params.id) {
return false
}
if params.name != "" && !matchesName(ctrData, params.name) {
return false
}
if !params.before.IsZero() && !matchesBeforeContainer(ctrData, params.before) {
return false
}
if !params.since.IsZero() && !matchesSinceContainer(ctrData, params.since) {
return false
}
if params.exited > 0 && !matchesExited(ctrData, params.exited) {
return false
}
if params.status != "" && !matchesStatus(ctrData, params.status) {
return false
}
if params.ancestor != "" && !matchesAncestor(ctrData, params.ancestor) {
return false
}
if params.label != "" && !matchesLabel(ctrData, params.label) {
return false
}
if params.volume != "" && !matchesVolume(ctrData, params.volume) {
return false
}
return true
}
// getContainersMatchingFilter returns a slice of all the containers that match the provided filter parameters
func getContainersMatchingFilter(containers []*oci.Container, filter *FilterParamsPS, server *libkpod.ContainerServer) []*libkpod.ContainerData {
var filteredCtrs []*libkpod.ContainerData
for _, ctr := range containers {
ctrData, err := server.GetContainerData(ctr.ID(), true)
if err != nil {
logrus.Warnf("unable to get container data for container %s: %v", ctr.ID(), err)
continue
}
if filter == nil || matchesFilter(ctrData, filter) {
filteredCtrs = append(filteredCtrs, ctrData)
}
}
return filteredCtrs
}
// matchesID returns true if the filter ID is a prefix of the container ID
func matchesID(ctrData *libkpod.ContainerData, id string) bool {
return strings.HasPrefix(ctrData.ID, id)
}
// matchesBeforeContainer returns true if the container was created before the filter container
func matchesBeforeContainer(ctrData *libkpod.ContainerData, beforeTime time.Time) bool {
return ctrData.State.Created.Before(beforeTime)
}
// matchesSinceContainer returns true if the container was created after the filter container
func matchesSinceContainer(ctrData *libkpod.ContainerData, sinceTime time.Time) bool {
return ctrData.State.Created.After(sinceTime)
}
// matchesLabel returns true if the container label matches that of the filter label
func matchesLabel(ctrData *libkpod.ContainerData, label string) bool {
pair := strings.SplitN(label, "=", 2)
if val, ok := ctrData.Labels[pair[0]]; ok {
if len(pair) == 2 && val == pair[1] {
return true
}
if len(pair) == 1 {
return true
}
return false
}
return false
}
// matchesName returns true if the names are identical
func matchesName(ctrData *libkpod.ContainerData, name string) bool {
return ctrData.Name == name
}
// matchesExited returns true if the exit codes are identical
func matchesExited(ctrData *libkpod.ContainerData, exited int32) bool {
return ctrData.State.ExitCode == exited
}
// matchesStatus returns true if the container status matches that of filter status
func matchesStatus(ctrData *libkpod.ContainerData, status string) bool {
return ctrData.State.Status == status
}
// matchesAncestor returns true if filter ancestor is in container image name
func matchesAncestor(ctrData *libkpod.ContainerData, ancestor string) bool {
return strings.Contains(ctrData.FromImage, ancestor)
}
// matchesVolume returns true if a volume mounted in the container, or its source path, matches the filter volume
func matchesVolume(ctrData *libkpod.ContainerData, volume string) bool {
for _, vol := range ctrData.Mounts {
if strings.Contains(vol.Source, volume) {
return true
}
}
return false
}
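
parseFilter expects the --filter value as comma-separated key=value pairs, e.g. status=running,name=web; a label filter may itself contain a second =, which is why SplitN with a limit of 2 is used. A standalone, stdlib-only sketch of that splitting step, detached from the libkpod types (the filter string is hypothetical):

package main

import (
    "fmt"
    "strings"
)

func main() {
    filter := "status=running,name=web,label=tier=frontend"
    for _, param := range strings.Split(filter, ",") {
        // Limit of 2 keeps any '=' inside the value (e.g. label=key=value) intact.
        pair := strings.SplitN(param, "=", 2)
        if len(pair) != 2 {
            fmt.Printf("malformed filter %q\n", param)
            continue
        }
        fmt.Printf("key=%q value=%q\n", strings.TrimSpace(pair[0]), pair[1])
    }
}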

56
cmd/kpod/pull.go Normal file
View file

@ -0,0 +1,56 @@
package main
import (
"os"
"github.com/pkg/errors"
"github.com/sirupsen/logrus"
"github.com/urfave/cli"
)
var (
pullFlags = []cli.Flag{
cli.BoolFlag{
// all-tags is hidden since it has not been implemented yet
Name: "all-tags, a",
Hidden: true,
Usage: "Download all tagged images in the repository",
},
}
pullDescription = "Pulls an image from a registry and stores it locally.\n" +
"An image can be pulled using its tag or digest. If a tag is not\n" +
"specified, the image with the 'latest' tag (if it exists) is pulled."
pullCommand = cli.Command{
Name: "pull",
Usage: "pull an image from a registry",
Description: pullDescription,
Flags: pullFlags,
Action: pullCmd,
ArgsUsage: "",
}
)
// pullCmd gets the data from the command line and calls pullImage
// to copy an image from a registry to a local machine
func pullCmd(c *cli.Context) error {
args := c.Args()
if len(args) == 0 {
return errors.Errorf("an image name must be specified")
}
if len(args) > 1 {
return errors.Errorf("too many arguments. Requires exactly 1")
}
image := args[0]
runtime, err := getRuntime(c)
if err != nil {
return errors.Wrapf(err, "could not create runtime")
}
if err := runtime.PullImage(image, c.Bool("all-tags"), os.Stdout); err != nil {
return errors.Errorf("error pulling image from %q: %v", image, err)
}
return nil
}

124
cmd/kpod/push.go Normal file
View file

@ -0,0 +1,124 @@
package main
import (
"fmt"
"os"
"github.com/containers/image/types"
"github.com/containers/storage/pkg/archive"
"github.com/kubernetes-incubator/cri-o/libpod/common"
"github.com/kubernetes-incubator/cri-o/libpod/images"
"github.com/pkg/errors"
"github.com/urfave/cli"
"golang.org/x/crypto/ssh/terminal"
)
var (
pushFlags = []cli.Flag{
cli.StringFlag{
Name: "signature-policy",
Usage: "`pathname` of signature policy file (not usually used)",
Hidden: true,
},
cli.StringFlag{
Name: "creds",
Usage: "`credentials` (USERNAME:PASSWORD) to use for authenticating to a registry",
},
cli.StringFlag{
Name: "cert-dir",
Usage: "`pathname` of a directory containing TLS certificates and keys",
},
cli.BoolTFlag{
Name: "tls-verify",
Usage: "require HTTPS and verify certificates when contacting registries (default: true)",
},
cli.BoolFlag{
Name: "remove-signatures",
Usage: "discard any pre-existing signatures in the image",
},
cli.StringFlag{
Name: "sign-by",
Usage: "add a signature at the destination using the specified key",
},
cli.BoolFlag{
Name: "quiet, q",
Usage: "don't output progress information when pushing images",
},
}
pushDescription = fmt.Sprintf(`
Pushes an image to a specified location.
The Image "DESTINATION" uses a "transport":"details" format.
See kpod-push(1) section "DESTINATION" for the expected format`)
pushCommand = cli.Command{
Name: "push",
Usage: "push an image to a specified destination",
Description: pushDescription,
Flags: pushFlags,
Action: pushCmd,
ArgsUsage: "IMAGE DESTINATION",
}
)
func pushCmd(c *cli.Context) error {
var registryCreds *types.DockerAuthConfig
args := c.Args()
if len(args) < 2 {
return errors.New("kpod push requires exactly 2 arguments")
}
srcName := c.Args().Get(0)
destName := c.Args().Get(1)
signaturePolicy := c.String("signature-policy")
registryCredsString := c.String("creds")
certPath := c.String("cert-dir")
skipVerify := !c.BoolT("tls-verify")
removeSignatures := c.Bool("remove-signatures")
signBy := c.String("sign-by")
if registryCredsString != "" {
creds, err := common.ParseRegistryCreds(registryCredsString)
if err != nil {
if err == common.ErrNoPassword {
fmt.Print("Password: ")
password, err := terminal.ReadPassword(0)
if err != nil {
return errors.Wrapf(err, "could not read password from terminal")
}
creds.Password = string(password)
} else {
return err
}
}
registryCreds = creds
}
config, err := getConfig(c)
if err != nil {
return errors.Wrapf(err, "Could not get config")
}
store, err := getStore(config)
if err != nil {
return err
}
options := images.CopyOptions{
Compression: archive.Uncompressed,
SignaturePolicyPath: signaturePolicy,
Store: store,
DockerRegistryOptions: common.DockerRegistryOptions{
DockerRegistryCreds: registryCreds,
DockerCertPath: certPath,
DockerInsecureSkipTLSVerify: skipVerify,
},
SigningOptions: common.SigningOptions{
RemoveSignatures: removeSignatures,
SignBy: signBy,
},
}
if !c.Bool("quiet") {
options.ReportWriter = os.Stderr
}
return images.PushImage(srcName, destName, options)
}
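
The DESTINATION argument follows the transport:details convention mentioned in the description, for example docker://registry.example.com/myrepo:latest or docker-archive:/tmp/myimage.tar (both destinations here are hypothetical). A standalone sketch of separating the transport prefix from the details part:

package main

import (
    "fmt"
    "strings"
)

func main() {
    destinations := []string{
        "docker://registry.example.com/myrepo/myimage:latest",
        "docker-archive:/tmp/myimage.tar",
    }
    for _, dest := range destinations {
        parts := strings.SplitN(dest, ":", 2)
        if len(parts) != 2 {
            fmt.Printf("%q has no transport prefix\n", dest)
            continue
        }
        fmt.Printf("transport=%q details=%q\n", parts[0], parts[1])
    }
}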

46
cmd/kpod/rename.go Normal file
View file

@ -0,0 +1,46 @@
package main
import (
"github.com/kubernetes-incubator/cri-o/libkpod"
"github.com/pkg/errors"
"github.com/urfave/cli"
)
var (
renameDescription = "Rename a container. Container may be created, running, paused, or stopped"
renameFlags = []cli.Flag{}
renameCommand = cli.Command{
Name: "rename",
Usage: "rename a container",
Description: renameDescription,
Action: renameCmd,
ArgsUsage: "CONTAINER NEW-NAME",
Flags: renameFlags,
}
)
func renameCmd(c *cli.Context) error {
if len(c.Args()) != 2 {
return errors.Errorf("Rename requires a src container name/ID and a dest container name")
}
config, err := getConfig(c)
if err != nil {
return errors.Wrapf(err, "Could not get config")
}
server, err := libkpod.New(config)
if err != nil {
return errors.Wrapf(err, "could not get container server")
}
defer server.Shutdown()
err = server.Update()
if err != nil {
return errors.Wrapf(err, "could not update list of containers")
}
err = server.ContainerRename(c.Args().Get(0), c.Args().Get(1))
if err != nil {
return errors.Wrapf(err, "could not rename container")
}
return nil
}

65
cmd/kpod/rm.go Normal file
View file

@ -0,0 +1,65 @@
package main
import (
"fmt"
"github.com/kubernetes-incubator/cri-o/libkpod"
"github.com/pkg/errors"
"github.com/urfave/cli"
)
var (
rmFlags = []cli.Flag{
cli.BoolFlag{
Name: "force, f",
Usage: "Force removal of a running container. The default is false",
},
}
rmDescription = "Remove one or more containers"
rmCommand = cli.Command{
Name: "rm",
Usage: fmt.Sprintf(`kpod rm will remove one or more containers from the host. The container name or ID can be used.
This does not remove images. Running containers will not be removed without the -f option.`),
Description: rmDescription,
Flags: rmFlags,
Action: rmCmd,
ArgsUsage: "",
}
)
// rmCmd removes one or more containers
func rmCmd(c *cli.Context) error {
args := c.Args()
if len(args) == 0 {
return errors.Errorf("specify one or more containers to remove")
}
config, err := getConfig(c)
if err != nil {
return errors.Wrapf(err, "could not get config")
}
server, err := libkpod.New(config)
if err != nil {
return errors.Wrapf(err, "could not get container server")
}
defer server.Shutdown()
err = server.Update()
if err != nil {
return errors.Wrapf(err, "could not update list of containers")
}
force := c.Bool("force")
for _, container := range c.Args() {
id, err2 := server.Remove(container, force)
if err2 != nil {
if err == nil {
err = err2
} else {
err = errors.Wrapf(err, "%v. Stop the container before attempting removal or use -f\n", err2)
}
} else {
fmt.Println(id)
}
}
return err
}

123
cmd/kpod/rmi.go Normal file
View file

@ -0,0 +1,123 @@
package main
import (
"fmt"
"github.com/containers/storage"
"github.com/kubernetes-incubator/cri-o/libpod/images"
"github.com/pkg/errors"
"github.com/urfave/cli"
)
var (
rmiDescription = "removes one or more locally stored images."
rmiFlags = []cli.Flag{
cli.BoolFlag{
Name: "force, f",
Usage: "force removal of the image",
},
}
rmiCommand = cli.Command{
Name: "rmi",
Usage: "removes one or more images from local storage",
Description: rmiDescription,
Action: rmiCmd,
ArgsUsage: "IMAGE-NAME-OR-ID [...]",
Flags: rmiFlags,
}
)
func rmiCmd(c *cli.Context) error {
force := false
if c.IsSet("force") {
force = c.Bool("force")
}
args := c.Args()
if len(args) == 0 {
return errors.Errorf("image name or ID must be specified")
}
config, err := getConfig(c)
if err != nil {
return errors.Wrapf(err, "Could not get config")
}
store, err := getStore(config)
if err != nil {
return err
}
for _, id := range args {
image, err := images.FindImage(store, id)
if err != nil {
return errors.Wrapf(err, "could not get image %q", id)
}
if image != nil {
ctrIDs, err := runningContainers(image, store)
if err != nil {
return errors.Wrapf(err, "error getting running containers for image %q", id)
}
if len(ctrIDs) > 0 && len(image.Names) <= 1 {
if force {
if err := removeContainers(ctrIDs, store); err != nil {
return err
}
} else {
return fmt.Errorf("could not remove image %q (must force) - container %q is using its reference image", id, ctrIDs[0])
}
}
// If the user supplied an ID, we cannot delete the image if it is referred to by multiple tags
if images.MatchesID(image.ID, id) {
if len(image.Names) > 1 && !force {
return fmt.Errorf("unable to delete %s (must force) - image is referred to in multiple tags", image.ID)
}
// If it is forced, we have to untag the image so that it can be deleted
image.Names = image.Names[:0]
} else {
name, err2 := images.UntagImage(store, image, id)
if err2 != nil {
return err2
}
fmt.Printf("untagged: %s\n", name)
}
if len(image.Names) > 0 {
continue
}
id, err := images.RemoveImage(image, store)
if err != nil {
return err
}
fmt.Printf("%s\n", id)
}
}
return nil
}
// Returns a list of running containers associated with the given ImageReference
// TODO: replace this with something in libkpod
func runningContainers(image *storage.Image, store storage.Store) ([]string, error) {
ctrIDs := []string{}
containers, err := store.Containers()
if err != nil {
return nil, err
}
for _, ctr := range containers {
if ctr.ImageID == image.ID {
ctrIDs = append(ctrIDs, ctr.ID)
}
}
return ctrIDs, nil
}
// TODO: replace this with something in libkpod
func removeContainers(ctrIDs []string, store storage.Store) error {
for _, ctrID := range ctrIDs {
if err := store.DeleteContainer(ctrID); err != nil {
return errors.Wrapf(err, "could not remove container %q", ctrID)
}
}
return nil
}

100
cmd/kpod/save.go Normal file
View file

@ -0,0 +1,100 @@
package main
import (
"os"
"github.com/containers/storage"
"github.com/kubernetes-incubator/cri-o/libpod/images"
"github.com/pkg/errors"
"github.com/sirupsen/logrus"
"github.com/urfave/cli"
)
const (
dockerArchive = "docker-archive:"
)
type saveOptions struct {
output string
quiet bool
images []string
}
var (
saveFlags = []cli.Flag{
cli.StringFlag{
Name: "output, o",
Usage: "Write to a file, default is STDOUT",
Value: "/dev/stdout",
},
cli.BoolFlag{
Name: "quiet, q",
Usage: "Suppress the output",
},
}
saveDescription = "Save an image to docker-archive on the local machine"
saveCommand = cli.Command{
Name: "save",
Usage: "Save image to an archive",
Description: saveDescription,
Flags: saveFlags,
Action: saveCmd,
ArgsUsage: "",
}
)
// saveCmd saves the image to either docker-archive or oci
func saveCmd(c *cli.Context) error {
args := c.Args()
if len(args) == 0 {
return errors.Errorf("need at least 1 argument")
}
config, err := getConfig(c)
if err != nil {
return errors.Wrapf(err, "could not get config")
}
store, err := getStore(config)
if err != nil {
return err
}
output := c.String("output")
quiet := c.Bool("quiet")
if output == "/dev/stdout" {
fi := os.Stdout
if logrus.IsTerminal(fi) {
return errors.Errorf("refusing to save to terminal. Use -o flag or redirect")
}
}
opts := saveOptions{
output: output,
quiet: quiet,
images: args,
}
return saveImage(store, opts)
}
// saveImage pushes the image to docker-archive or oci by
// calling pushImage
func saveImage(store storage.Store, opts saveOptions) error {
dst := dockerArchive + opts.output
pushOpts := images.CopyOptions{
SignaturePolicyPath: "",
Store: store,
}
// only one image is supported for now
// future pull requests will fix this
for _, image := range opts.images {
dest := dst + ":" + image
if err := images.PushImage(image, dest, pushOpts); err != nil {
return errors.Wrapf(err, "unable to save %q", image)
}
}
return nil
}
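
saveImage writes each image to a docker-archive destination of the form docker-archive:<output-path>:<image-name>, built by simple string concatenation. A tiny standalone sketch of that reference construction (path and image name are hypothetical):

package main

import "fmt"

const dockerArchive = "docker-archive:"

func main() {
    output := "/tmp/alpine.tar"
    image := "alpine"
    // Mirrors saveImage: dst is the archive path, dest appends the image reference.
    dst := dockerArchive + output
    dest := dst + ":" + image
    fmt.Println(dest) // docker-archive:/tmp/alpine.tar:alpine
}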

241
cmd/kpod/stats.go Normal file
View file

@ -0,0 +1,241 @@
package main
import (
"encoding/json"
"fmt"
"os"
"strings"
"text/template"
"time"
tm "github.com/buger/goterm"
"github.com/kubernetes-incubator/cri-o/libkpod"
"github.com/kubernetes-incubator/cri-o/libpod/images"
"github.com/kubernetes-incubator/cri-o/oci"
"github.com/pkg/errors"
"github.com/urfave/cli"
)
var printf func(format string, a ...interface{}) (n int, err error)
var println func(a ...interface{}) (n int, err error)
type statsOutputParams struct {
Container string
ID string
CPUPerc string
MemUsage string
MemPerc string
NetIO string
BlockIO string
PIDs uint64
}
var (
statsFlags = []cli.Flag{
cli.BoolFlag{
Name: "all, a",
Usage: "show all containers. Only running containers are shown by default. The default is false",
},
cli.BoolFlag{
Name: "no-stream",
Usage: "disable streaming stats and only pull the first result, default setting is false",
},
cli.StringFlag{
Name: "format",
Usage: "pretty-print container statistics using a Go template",
},
cli.BoolFlag{
Name: "json",
Usage: "output container statistics in json format",
},
}
statsDescription = "display a live stream of one or more containers' resource usage statistics"
statsCommand = cli.Command{
Name: "stats",
Usage: "Display percentage of CPU, memory, network I/O, block I/O and PIDs for one or more containers",
Description: statsDescription,
Flags: statsFlags,
Action: statsCmd,
ArgsUsage: "",
}
)
func statsCmd(c *cli.Context) error {
config, err := getConfig(c)
if err != nil {
return errors.Wrapf(err, "could not read config")
}
containerServer, err := libkpod.New(config)
if err != nil {
return errors.Wrapf(err, "could not create container server")
}
defer containerServer.Shutdown()
err = containerServer.Update()
if err != nil {
return errors.Wrapf(err, "could not update list of containers")
}
times := -1
if c.Bool("no-stream") {
times = 1
}
statsChan := make(chan []*libkpod.ContainerStats)
// iterate over the channel until it is closed
go func() {
// print using goterm
printf = tm.Printf
println = tm.Println
for stats := range statsChan {
// Continually refresh statistics
tm.Clear()
tm.MoveCursor(1, 1)
outputStats(stats, c.String("format"), c.Bool("json"))
tm.Flush()
time.Sleep(time.Second)
}
}()
return getStats(containerServer, c.Args(), c.Bool("all"), statsChan, times)
}
func getStats(server *libkpod.ContainerServer, args []string, all bool, statsChan chan []*libkpod.ContainerStats, times int) error {
ctrs, err := server.ListContainers(isRunning, ctrInList(args))
if err != nil {
return err
}
containerStats := map[string]*libkpod.ContainerStats{}
for _, ctr := range ctrs {
initialStats, err := server.GetContainerStats(ctr, &libkpod.ContainerStats{})
if err != nil {
return err
}
containerStats[ctr.ID()] = initialStats
}
step := 1
if times == -1 {
times = 1
step = 0
}
for i := 0; i < times; i += step {
reportStats := []*libkpod.ContainerStats{}
for _, ctr := range ctrs {
id := ctr.ID()
if _, ok := containerStats[ctr.ID()]; !ok {
initialStats, err := server.GetContainerStats(ctr, &libkpod.ContainerStats{})
if err != nil {
return err
}
containerStats[id] = initialStats
}
stats, err := server.GetContainerStats(ctr, containerStats[id])
if err != nil {
return err
}
// replace the previous measurement with the current one
containerStats[id] = stats
reportStats = append(reportStats, stats)
}
statsChan <- reportStats
err := server.Update()
if err != nil {
return err
}
ctrs, err = server.ListContainers(isRunning, ctrInList(args))
if err != nil {
return err
}
}
return nil
}
func outputStats(stats []*libkpod.ContainerStats, format string, json bool) error {
if format == "" {
outputStatsHeader()
}
if json {
return outputStatsAsJSON(stats)
}
var err error
for _, s := range stats {
if format == "" {
outputStatsUsingFormatString(s)
} else {
params := getStatsOutputParams(s)
err2 := outputStatsUsingTemplate(format, params)
if err2 != nil {
if err == nil {
err = err2
} else {
err = errors.Wrapf(err, err2.Error())
}
}
}
}
return err
}
func outputStatsHeader() {
printf("%-64s %-16s %-32s %-16s %-24s %-24s %s\n", "CONTAINER", "CPU %", "MEM USAGE / MEM LIMIT", "MEM %", "NET I/O", "BLOCK I/O", "PIDS")
}
func outputStatsUsingFormatString(stats *libkpod.ContainerStats) {
printf("%-64s %-16s %-32s %-16s %-24s %-24s %d\n", stats.Container, floatToPercentString(stats.CPU), combineHumanValues(stats.MemUsage, stats.MemLimit), floatToPercentString(stats.MemPerc), combineHumanValues(stats.NetInput, stats.NetOutput), combineHumanValues(stats.BlockInput, stats.BlockOutput), stats.PIDs)
}
func combineHumanValues(a, b uint64) string {
return fmt.Sprintf("%s / %s", images.FormattedSize(float64(a)), images.FormattedSize(float64(b)))
}
func floatToPercentString(f float64) string {
return fmt.Sprintf("%.2f %s", f, "%")
}
func getStatsOutputParams(stats *libkpod.ContainerStats) statsOutputParams {
return statsOutputParams{
Container: stats.Container,
ID: stats.Container,
CPUPerc: floatToPercentString(stats.CPU),
MemUsage: combineHumanValues(stats.MemUsage, stats.MemLimit),
MemPerc: floatToPercentString(stats.MemPerc),
NetIO: combineHumanValues(stats.NetInput, stats.NetOutput),
BlockIO: combineHumanValues(stats.BlockInput, stats.BlockOutput),
PIDs: stats.PIDs,
}
}
func outputStatsUsingTemplate(format string, params statsOutputParams) error {
tmpl, err := template.New("stats").Parse(format)
if err != nil {
return errors.Wrapf(err, "template parsing error")
}
err = tmpl.Execute(os.Stdout, params)
if err != nil {
return err
}
println()
return nil
}
func outputStatsAsJSON(stats []*libkpod.ContainerStats) error {
s, err := json.Marshal(stats)
if err != nil {
return err
}
println(string(s))
return nil
}
func isRunning(ctr *oci.Container) bool {
return ctr.State().Status == "running"
}
func ctrInList(idsOrNames []string) func(ctr *oci.Container) bool {
if len(idsOrNames) == 0 {
return func(*oci.Container) bool { return true }
}
return func(ctr *oci.Container) bool {
for _, idOrName := range idsOrNames {
if strings.HasPrefix(ctr.ID(), idOrName) || strings.HasSuffix(ctr.Name(), idOrName) {
return true
}
}
return false
}
}
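
--format takes a Go template executed against statsOutputParams, so fields such as {{.ID}}, {{.CPUPerc}} and {{.MemUsage}} can be selected. A standalone stdlib sketch of how such a template renders against sample values (the numbers are made up):

package main

import (
    "os"
    "text/template"
)

// Same field layout as statsOutputParams above.
type statsOutputParams struct {
    Container string
    ID        string
    CPUPerc   string
    MemUsage  string
    MemPerc   string
    NetIO     string
    BlockIO   string
    PIDs      uint64
}

func main() {
    format := "{{.ID}}\t{{.CPUPerc}}\t{{.MemUsage}}\n"
    tmpl := template.Must(template.New("stats").Parse(format))
    params := statsOutputParams{
        ID:       "8abf2e07b1bf",
        CPUPerc:  "1.25 %",
        MemUsage: "12.5MB / 2GB",
        PIDs:     3,
    }
    if err := tmpl.Execute(os.Stdout, params); err != nil {
        panic(err)
    }
}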

78
cmd/kpod/tag.go Normal file
View file

@ -0,0 +1,78 @@
package main
import (
"github.com/containers/image/docker/reference"
"github.com/containers/storage"
"github.com/kubernetes-incubator/cri-o/libpod/images"
"github.com/pkg/errors"
"github.com/urfave/cli"
)
var (
tagDescription = "Adds one or more additional names to locally-stored image"
tagCommand = cli.Command{
Name: "tag",
Usage: "Add an additional name to a local image",
Description: tagDescription,
Action: tagCmd,
ArgsUsage: "IMAGE-NAME [IMAGE-NAME ...]",
}
)
func tagCmd(c *cli.Context) error {
args := c.Args()
if len(args) < 2 {
return errors.Errorf("image name and at least one new name must be specified")
}
config, err := getConfig(c)
if err != nil {
return errors.Wrapf(err, "Could not get config")
}
store, err := getStore(config)
if err != nil {
return err
}
img, err := images.FindImage(store, args[0])
if err != nil {
return err
}
if img == nil {
return errors.New("null image")
}
err = addImageNames(store, img, args[1:])
if err != nil {
return errors.Wrapf(err, "error adding names %v to image %q", args[1:], args[0])
}
return nil
}
func addImageNames(store storage.Store, image *storage.Image, addNames []string) error {
// Add tags to the names if applicable
names, err := expandedTags(addNames)
if err != nil {
return err
}
err = store.SetNames(image.ID, append(image.Names, names...))
if err != nil {
return errors.Wrapf(err, "error adding names (%v) to image %q", names, image.ID)
}
return nil
}
func expandedTags(tags []string) ([]string, error) {
expandedNames := []string{}
for _, tag := range tags {
name, err := reference.ParseNormalizedNamed(tag)
if err != nil {
return nil, errors.Wrapf(err, "error parsing tag %q", name)
}
name = reference.TagNameOnly(name)
newTag := ""
if tagged, ok := name.(reference.NamedTagged); ok {
newTag = tagged.Tag()
}
expandedNames = append(expandedNames, name.Name()+":"+newTag)
}
return expandedNames, nil
}
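
expandedTags leans on the reference package to normalize short names before they are stored: presumably a bare name such as alpine expands to docker.io/library/alpine:latest once ParseNormalizedNamed and TagNameOnly have run. A hedged standalone sketch reusing only the calls already imported above (the expected output is an assumption, not verified here):

package main

import (
    "fmt"

    "github.com/containers/image/docker/reference"
)

func main() {
    name, err := reference.ParseNormalizedNamed("alpine")
    if err != nil {
        panic(err)
    }
    // TagNameOnly adds the default tag when the name has none.
    name = reference.TagNameOnly(name)
    tag := ""
    if tagged, ok := name.(reference.NamedTagged); ok {
        tag = tagged.Tag()
    }
    fmt.Println(name.Name() + ":" + tag) // assumed: docker.io/library/alpine:latest
}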

41
cmd/kpod/umount.go Normal file
View file

@ -0,0 +1,41 @@
package main
import (
"github.com/pkg/errors"
"github.com/urfave/cli"
)
var (
umountCommand = cli.Command{
Name: "umount",
Aliases: []string{"unmount"},
Usage: "Unmount a working container's root filesystem",
Description: "Unmounts a working container's root filesystem",
Action: umountCmd,
ArgsUsage: "CONTAINER-NAME-OR-ID",
}
)
func umountCmd(c *cli.Context) error {
args := c.Args()
if len(args) == 0 {
return errors.Errorf("container ID must be specified")
}
if len(args) > 1 {
return errors.Errorf("too many arguments specified")
}
config, err := getConfig(c)
if err != nil {
return errors.Wrapf(err, "Could not get config")
}
store, err := getStore(config)
if err != nil {
return err
}
err = store.Unmount(args[0])
if err != nil {
return errors.Wrapf(err, "error unmounting container %q", args[0])
}
return nil
}

44
cmd/kpod/version.go Normal file
View file

@ -0,0 +1,44 @@
package main
import (
"fmt"
"runtime"
"strconv"
"time"
"github.com/urfave/cli"
)
// Overwritten at build time
var (
gitCommit string
buildInfo string
)
// versionCmd gets and prints version info for version command
func versionCmd(c *cli.Context) error {
fmt.Println("Version: ", c.App.Version)
fmt.Println("Go Version: ", runtime.Version())
if gitCommit != "" {
fmt.Println("Git Commit: ", gitCommit)
}
if buildInfo != "" {
// Converts unix time from string to int64
buildTime, err := strconv.ParseInt(buildInfo, 10, 64)
if err != nil {
return err
}
// Prints out the build time in readable format
fmt.Println("Built: ", time.Unix(buildTime, 0).Format(time.ANSIC))
}
fmt.Println("OS/Arch: ", runtime.GOOS+"/"+runtime.GOARCH)
return nil
}
// Cli command to print out the full version of kpod
var versionCommand = cli.Command{
Name: "version",
Usage: "Display the KPOD Version Information",
Action: versionCmd,
}

View file

@ -1,3 +1,55 @@
# Kubernetes Community Code of Conduct
## Kubernetes Community Code of Conduct
Please refer to our [Kubernetes Community Code of Conduct](https://git.k8s.io/community/code-of-conduct.md)
### Contributor Code of Conduct
As contributors and maintainers of this project, and in the interest of fostering
an open and welcoming community, we pledge to respect all people who contribute
through reporting issues, posting feature requests, updating documentation,
submitting pull requests or patches, and other activities.
We are committed to making participation in this project a harassment-free experience for
everyone, regardless of level of experience, gender, gender identity and expression,
sexual orientation, disability, personal appearance, body size, race, ethnicity, age,
religion, or nationality.
Examples of unacceptable behavior by participants include:
* The use of sexualized language or imagery.
* Personal attacks.
* Trolling or insulting/derogatory comments.
* Public or private harassment.
* Publishing other's private information, such as physical or electronic addresses,
without explicit permission.
* Other unethical or unprofessional conduct.
Project maintainers have the right and responsibility to remove, edit, or reject
comments, commits, code, wiki edits, issues, and other contributions that are not
aligned to this Code of Conduct. By adopting this Code of Conduct, project maintainers
commit themselves to fairly and consistently applying these principles to every aspect
of managing this project. Project maintainers who do not follow or enforce the Code of
Conduct may be permanently removed from the project team.
This code of conduct applies both within project spaces and in public spaces
when an individual is representing the project or its community.
Instances of abusive, harassing, or otherwise unacceptable behavior may be reported by contacting a Kubernetes maintainer, Sarah Novotny <sarahnovotny@google.com>, and/or Dan Kohn <dan@linuxfoundation.org>.
This Code of Conduct is adapted from the Contributor Covenant
(http://contributor-covenant.org), version 1.2.0, available at
http://contributor-covenant.org/version/1/2/0/
### Kubernetes Events Code of Conduct
Kubernetes events are working conferences intended for professional networking and collaboration in the
Kubernetes community. Attendees are expected to behave according to professional standards and in accordance
with their employer's policies on appropriate workplace behavior.
While at Kubernetes events or related social networking opportunities, attendees should not engage in
discriminatory or offensive speech or actions regarding gender, sexuality, race, or religion. Speakers should
be especially aware of these concerns.
The Kubernetes team does not condone any statements by speakers contrary to these standards. The Kubernetes
team reserves the right to deny entrance and/or eject from an event (without refund) any individual found to
be engaging in discriminatory or offensive speech or actions.
Please bring any concerns to the immediate attention of the Kubernetes event staff.

479
completions/bash/kpod Normal file
View file

@ -0,0 +1,479 @@
#! /bin/bash
: ${PROG:=$(basename ${BASH_SOURCE})}
__kpod_list_images() {
COMPREPLY=($(compgen -W "$(kpod images -q)" -- $cur))
}
__kpod_list_containers() {
COMPREPLY=($(compgen -W "$(kpod ps -aq)" -- $cur))
}
_kpod_diff() {
local options_with_args="
--format
"
local boolean_options="
"
_complete_ "$options_with_args" "$boolean_options"
case "$cur" in
-*)
COMPREPLY=($(compgen -W "$boolean_options $options_with_args" -- "$cur"))
;;
*)
__kpod_list_images
;;
esac
}
_kpod_export() {
local options_with_args="
--output
-o
"
local boolean_options="
"
_complete_ "$options_with_args" "$boolean_options"
case "$cur" in
-*)
COMPREPLY=($(compgen -W "$boolean_options $options_with_args" -- "$cur"))
;;
*)
__kpod_list_images
;;
esac
}
_kpod_history() {
local options_with_args="
--format
"
local boolean_options="
--human -H
--no-trunc
--quiet -q
"
_complete_ "$options_with_args" "$boolean_options"
case "$cur" in
-*)
COMPREPLY=($(compgen -W "$boolean_options $options_with_args" -- "$cur"))
;;
*)
__kpod_list_images
;;
esac
}
_kpod_info() {
local boolean_options="
--help
-h
--debug
"
local options_with_args="
--format
"
local all_options="$options_with_args $boolean_options"
case "$cur" in
-*)
COMPREPLY=($(compgen -W "$boolean_options $options_with_args" -- "$cur"))
;;
*)
__kpod_list_images
;;
esac
}
_kpod_images() {
local boolean_options="
--help
-h
--quiet
-q
--noheading
-n
--no-trunc
--digests
--filter
-f
"
local options_with_args="
--format
"
local all_options="$options_with_args $boolean_options"
case "$cur" in
-*)
COMPREPLY=($(compgen -W "$boolean_options $options_with_args" -- "$cur"))
;;
esac
}
_kpod_inspect() {
local boolean_options="
--help
-h
"
local options_with_args="
--format
-f
--type
-t
--size
"
local all_options="$options_with_args $boolean_options"
case "$cur" in
-*)
COMPREPLY=($(compgen -W "$boolean_options $options_with_args" -- "$cur"))
;;
esac
}
_kpod_logs() {
local options_with_args="
--since
--tail
"
local boolean_options="
--follow
-f
"
_complete_ "$options_with_args" "$boolean_options"
case "$cur" in
-*)
COMPREPLY=($(compgen -W "$boolean_options $options_with_args" -- "$cur"))
;;
*)
__kpod_list_containers
;;
esac
}
_kpod_pull() {
local options_with_args="
"
local boolean_options="
--all-tags -a
"
_complete_ "$options_with_args" "$boolean_options"
}
_kpod_unmount() {
_kpod_umount $@
}
_kpod_umount() {
local boolean_options="
--help
-h
"
local options_with_args="
"
local all_options="$options_with_args $boolean_options"
case "$cur" in
-*)
COMPREPLY=($(compgen -W "$boolean_options $options_with_args" -- "$cur"))
;;
esac
}
_kpod_mount() {
local boolean_options="
--help
-h
--notruncate
"
local options_with_args="
--label
--format
"
local all_options="$options_with_args $boolean_options"
case "$cur" in
-*)
COMPREPLY=($(compgen -W "$boolean_options $options_with_args" -- "$cur"))
;;
esac
}
_kpod_push() {
local boolean_options="
--disable-compression
-D
--quiet
-q
--signature-policy
--certs
--tls-verify
--remove-signatures
--sign-by
"
local options_with_args="
"
local all_options="$options_with_args $boolean_options"
case "$cur" in
-*)
COMPREPLY=($(compgen -W "$boolean_options $options_with_args" -- "$cur"))
;;
esac
}
_kpod_rename() {
local boolean_options="
--help
-h
"
local options_with_args="
"
local all_options="$options_with_args $boolean_options"
case "$cur" in
-*)
COMPREPLY=($(compgen -W "$boolean_options $options_with_args" -- "$cur"))
;;
*)
__kpod_list_containers
;;
esac
}
_kpod_rm() {
local boolean_options="
--force
-f
"
local options_with_args="
"
local all_options="$options_with_args $boolean_options"
case "$cur" in
-*)
COMPREPLY=($(compgen -W "$boolean_options $options_with_args" -- "$cur"))
;;
*)
__kpod_list_containers
;;
esac
}
_kpod_rmi() {
local boolean_options="
--help
-h
--force
-f
"
case "$cur" in
-*)
COMPREPLY=($(compgen -W "$boolean_options $options_with_args" -- "$cur"))
;;
*)
__kpod_list_images
;;
esac
}
_kpod_stats() {
local boolean_options="
--help
--all
-a
--no-stream
--format
"
case "$cur" in
-*)
COMPREPLY=($(compgen -W "$boolean_options $options_with_args" -- "$cur"))
;;
*)
__kpod_list_containers
;;
esac
}
_kpod_tag() {
local options_with_args="
"
local boolean_options="
"
_complete_ "$options_with_args" "$boolean_options"
}
_kpod_version() {
local options_with_args="
"
local boolean_options="
"
_complete_ "$options_with_args" "$boolean_options"
}
_kpod_save() {
local options_with_args="
--output -o
"
local boolean_options="
--quiet -q
"
_complete_ "$options_with_args" "$boolean_options"
}
_kpod_export() {
local options_with_args="
--output -o
"
local boolean_options="
"
_complete_ "$options_with_args" "$boolean_options"
}
_kpod_ps() {
local options_with_args="
--filter -f
--format
--last -n
"
local boolean_options="
--all -a
--latest -l
--no-trunc
--quiet -q
--size -s
"
_complete_ "$options_with_args" "$boolean_options"
}
_complete_() {
local options_with_args=$1
local boolean_options="$2 -h --help"
case "$prev" in
$options_with_args)
return
;;
esac
case "$cur" in
-*)
COMPREPLY=( $( compgen -W "$boolean_options $options_with_args" -- "$cur" ) )
;;
esac
}
_kpod_load() {
local options_with_args="
--input -i
"
local boolean_options="
--quiet -q
"
_complete_ "$options_with_args" "$boolean_options"
}
_kpod_kpod() {
local options_with_args="
--config -c
--root
--runroot
--storage-driver
--storage-opt
"
local boolean_options="
--debug
--help -h
--version -v
"
commands="
diff
export
history
images
info
inspect
load
logs
mount
ps
pull
push
rename
rm
rmi
save
stats
tag
umount
unmount
version
"
case "$prev" in
$main_options_with_args_glob )
return
;;
esac
case "$cur" in
-*)
COMPREPLY=( $( compgen -W "$boolean_options $options_with_args" -- "$cur" ) )
;;
*)
COMPREPLY=( $( compgen -W "${commands[*]} help" -- "$cur" ) )
;;
esac
}
_cli_bash_autocomplete() {
local cur prev words cword opts base
COMPREPLY=()
_get_comp_words_by_ref -n : cur prev words cword
local command=${PROG} cpos=0
local counter=1
while [ $counter -lt $cword ]; do
case "!${words[$counter]}" in
*)
command=$(echo "${words[$counter]}" | sed 's/-/_/g')
cpos=$counter
(( cpos++ ))
break
;;
esac
(( counter++ ))
done
local completions_func=_kpod_${command}
declare -F $completions_func >/dev/null && $completions_func
eval "$previous_extglob_setting"
return 0
}
complete -F _cli_bash_autocomplete $PROG
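# Usage note (added commentary, not part of the original script): the final
# line registers the completion for whatever $PROG expands to, so PROG must
# be defined by the time this file is sourced (it may already be set earlier
# in the full script). A hedged sketch, assuming the script is saved as "kpod":
#
#     PROG=kpod source ./kpod
#
# After sourcing, "kpod <TAB>" offers the sub-commands listed in _kpod_kpod().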

View file

@ -5,8 +5,8 @@ override LIBS += $(shell pkg-config --libs glib-2.0)
override CFLAGS += -std=c99 -Os -Wall -Wextra $(shell pkg-config --cflags glib-2.0)
conmon: $(obj)
$(CC) -o ../bin/$@ $^ $(CFLAGS) $(LIBS)
$(CC) -o $@ $^ $(CFLAGS) $(LIBS)
.PHONY: clean
clean:
rm -f $(obj) ../bin/conmon
rm -f $(obj) conmon

View file

@ -7,11 +7,11 @@
#include <stdio.h>
#include <stdlib.h>
#include <string.h>
#include <stdint.h>
#include <sys/prctl.h>
#include <sys/socket.h>
#include <sys/types.h>
#include <sys/un.h>
#include <sys/stat.h>
#include <sys/wait.h>
#include <sys/eventfd.h>
#include <sys/stat.h>
@ -20,7 +20,6 @@
#include <termios.h>
#include <syslog.h>
#include <unistd.h>
#include <inttypes.h>
#include <glib.h>
#include <glib-unix.h>
@ -95,8 +94,6 @@ static inline void strv_cleanup(char ***strv)
#define CMD_SIZE 1024
#define MAX_EVENTS 10
#define DEFAULT_SOCKET_PATH "/var/lib/crio"
static bool opt_terminal = false;
static bool opt_stdin = false;
static char *opt_cid = NULL;
@ -105,14 +102,11 @@ static char *opt_runtime_path = NULL;
static char *opt_bundle_path = NULL;
static char *opt_pid_file = NULL;
static bool opt_systemd_cgroup = false;
static bool opt_no_pivot = false;
static char *opt_exec_process_spec = NULL;
static bool opt_exec = false;
static char *opt_log_path = NULL;
static char *opt_exit_dir = NULL;
static int opt_timeout = 0;
static int64_t opt_log_size_max = -1;
static char *opt_socket_path = DEFAULT_SOCKET_PATH;
static GOptionEntry opt_entries[] =
{
{ "terminal", 't', 0, G_OPTION_ARG_NONE, &opt_terminal, "Terminal", NULL },
@ -120,7 +114,6 @@ static GOptionEntry opt_entries[] =
{ "cid", 'c', 0, G_OPTION_ARG_STRING, &opt_cid, "Container ID", NULL },
{ "cuuid", 'u', 0, G_OPTION_ARG_STRING, &opt_cuuid, "Container UUID", NULL },
{ "runtime", 'r', 0, G_OPTION_ARG_STRING, &opt_runtime_path, "Runtime path", NULL },
{ "no-pivot", 0, 0, G_OPTION_ARG_NONE, &opt_no_pivot, "do not use pivot_root", NULL },
{ "bundle", 'b', 0, G_OPTION_ARG_STRING, &opt_bundle_path, "Bundle path", NULL },
{ "pidfile", 'p', 0, G_OPTION_ARG_STRING, &opt_pid_file, "PID file", NULL },
{ "systemd-cgroup", 's', 0, G_OPTION_ARG_NONE, &opt_systemd_cgroup, "Enable systemd cgroup manager", NULL },
@ -129,8 +122,6 @@ static GOptionEntry opt_entries[] =
{ "exit-dir", 0, 0, G_OPTION_ARG_STRING, &opt_exit_dir, "Path to the directory where exit files are written", NULL },
{ "log-path", 'l', 0, G_OPTION_ARG_STRING, &opt_log_path, "Log file path", NULL },
{ "timeout", 'T', 0, G_OPTION_ARG_INT, &opt_timeout, "Timeout in seconds", NULL },
{ "log-size-max", 0, 0, G_OPTION_ARG_INT64, &opt_log_size_max, "Maximum size of log file", NULL },
{ "socket-dir-path", 0, 0, G_OPTION_ARG_STRING, &opt_socket_path, "Location of container attach sockets", NULL },
{ NULL }
};
@ -139,8 +130,6 @@ static GOptionEntry opt_entries[] =
#define CGROUP_ROOT "/sys/fs/cgroup"
static int log_fd = -1;
static ssize_t write_all(int fd, const void *buf, size_t count)
{
size_t remaining = count;
@ -292,12 +281,11 @@ const char *stdpipe_name(stdpipe_t pipe)
* line in buf, and will partially write the final line of the log if buf is
* not terminated by a newline.
*/
static int write_k8s_log(int fd, stdpipe_t pipe, const char *buf, ssize_t buflen)
int write_k8s_log(int fd, stdpipe_t pipe, const char *buf, ssize_t buflen)
{
char tsbuf[TSBUFLEN];
static stdpipe_t trailing_line = NO_PIPE;
writev_buffer_t bufv = {0};
static int64_t bytes_written = 0;
int64_t bytes_to_be_written = 0;
/*
* Use the same timestamp for every line of the log in this buffer.
@ -311,63 +299,30 @@ static int write_k8s_log(int fd, stdpipe_t pipe, const char *buf, ssize_t buflen
while (buflen > 0) {
const char *line_end = NULL;
ptrdiff_t line_len = 0;
bool partial = FALSE;
/* Find the end of the line, or alternatively the end of the buffer. */
line_end = memchr(buf, '\n', buflen);
if (line_end == NULL) {
if (line_end == NULL)
line_end = &buf[buflen-1];
partial = TRUE;
}
line_len = line_end - buf + 1;
/* This is line_len bytes + TSBUFLEN - 1 + 2 (- 1 is for ignoring \0). */
bytes_to_be_written = line_len + TSBUFLEN + 1;
/* If partial, then we add a \n */
if (partial) {
bytes_to_be_written += 1;
}
/*
* We re-open the log file if writing out the bytes will exceed the max
* log size. We also reset the state so that the new file is started with
* a timestamp.
* Write the (timestamp, stream) tuple if there isn't any trailing
* output from the previous line (or if there is trailing output but
* the current buffer being printed is from a different pipe).
*/
if ((opt_log_size_max > 0) && (bytes_written + bytes_to_be_written) > opt_log_size_max) {
ninfo("Creating new log file");
bytes_written = 0;
/* Close the existing fd */
close(fd);
/* Unlink the file */
if (unlink(opt_log_path) < 0) {
pexit("Failed to unlink log file");
}
/* Open the log path file again */
log_fd = open(opt_log_path, O_WRONLY | O_APPEND | O_CREAT | O_CLOEXEC, 0600);
if (log_fd < 0)
pexit("Failed to open log file %s: %s", opt_log_path, strerror(errno));
fd = log_fd;
}
/* Output the timestamp */
if (writev_buffer_append_segment(fd, &bufv, tsbuf, -1) < 0) {
nwarn("failed to write (timestamp, stream) to log");
goto next;
}
/* Output log tag for partial or newline */
if (partial) {
if (writev_buffer_append_segment(fd, &bufv, "P ", -1) < 0) {
nwarn("failed to write partial log tag");
goto next;
}
} else {
if (writev_buffer_append_segment(fd, &bufv, "F ", -1) < 0) {
nwarn("failed to write end log tag");
if (trailing_line != pipe) {
/*
* If there was a trailing line from a different pipe, prepend a
* newline to split it properly. This technically breaks the flow
* of the previous line (adding a newline in the log where there
* wasn't one output) but without modifying the file in a
* non-append-only way there's not much we can do.
*/
if ((trailing_line != NO_PIPE &&
writev_buffer_append_segment(fd, &bufv, "\n", -1) < 0) ||
writev_buffer_append_segment(fd, &bufv, tsbuf, -1) < 0) {
nwarn("failed to write (timestamp, stream) to log");
goto next;
}
}
@ -378,15 +333,9 @@ static int write_k8s_log(int fd, stdpipe_t pipe, const char *buf, ssize_t buflen
goto next;
}
/* Output a newline for partial */
if (partial) {
if (writev_buffer_append_segment(fd, &bufv, "\n", -1) < 0) {
nwarn("failed to write newline to log");
goto next;
}
}
/* If we did not output a full line, then we are a trailing_line. */
trailing_line = (*line_end == '\n') ? NO_PIPE : pipe;
bytes_written += bytes_to_be_written;
next:
/* Update the head of the buffer remaining to output. */
buf += line_len;
@ -397,8 +346,6 @@ next:
nwarn("failed to flush buffer to log");
}
ninfo("Total bytes written: %"PRId64"", bytes_written);
return 0;
}
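/*
 * Illustrative note (added commentary, not part of either revision): each
 * line emitted by write_k8s_log() has the shape
 *     <RFC3339Nano timestamp> <stream name> <payload>
 * and one revision of the function additionally inserts a "P " (partial) or
 * "F " (full) tag between the stream name and the payload, e.g.
 *     2017-09-07T00:17:09.669794202Z stdout F hello from the container
 */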
@ -534,6 +481,7 @@ static int conn_sock = -1;
static int conn_sock_readable;
static int conn_sock_writable;
static int log_fd = -1;
static int oom_event_fd = -1;
static int attach_socket_fd = -1;
static int console_socket_fd = -1;
@ -983,14 +931,14 @@ static char *setup_attach_socket(void)
* Create a symlink so we don't exceed unix domain socket
* path length limit.
*/
attach_symlink_dir_path = g_build_filename(opt_socket_path, opt_cuuid, NULL);
attach_symlink_dir_path = g_build_filename("/var/run/crio", opt_cuuid, NULL);
if (unlink(attach_symlink_dir_path) == -1 && errno != ENOENT)
pexit("Failed to remove existing symlink for attach socket directory");
if (symlink(opt_bundle_path, attach_symlink_dir_path) == -1)
pexit("Failed to create symlink for attach socket");
attach_sock_path = g_build_filename(opt_socket_path, opt_cuuid, "attach", NULL);
attach_sock_path = g_build_filename("/var/run/crio", opt_cuuid, "attach", NULL);
ninfo("attach sock path: %s", attach_sock_path);
strncpy(attach_addr.sun_path, attach_sock_path, sizeof(attach_addr.sun_path) - 1);
@ -1120,8 +1068,6 @@ int main(int argc, char *argv[])
if (opt_runtime_path == NULL)
nexit("Runtime path not provided. Use --runtime");
if (access(opt_runtime_path, X_OK) < 0)
pexit("Runtime path %s is not valid: %s", opt_runtime_path, strerror(errno));
if (!opt_exec && opt_exit_dir == NULL)
nexit("Container exit directory not provided. Use --exit-dir");
@ -1263,12 +1209,6 @@ int main(int argc, char *argv[])
NULL);
}
if (!opt_exec && opt_no_pivot) {
add_argv(runtime_argv,
"--no-pivot",
NULL);
}
if (csname != NULL) {
add_argv(runtime_argv,
"--console-socket", csname,

14
contrib/rpm/Makefile Normal file
View file

@ -0,0 +1,14 @@
.PHONY: dist
dist: crio.spec
spectool -g crio.spec
.PHONY: rpm
rpm: dist
rpmbuild --define "_sourcedir `pwd`" --define "_specdir `pwd`" \
--define "_rpmdir `pwd`" --define "_srcrpmdir `pwd`" -ba crio.spec
all: rpm
clean:
rm -f *rpm *gz
rm -rf x86_64
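A hedged usage sketch for the Makefile above (assuming `rpm-build` and `rpmdevtools`, which provides `spectool`, are installed):
```
cd contrib/rpm
make rpm        # fetches the source tarball via spectool, then runs rpmbuild -ba
ls x86_64/      # binary RPMs typically land here; the SRPM stays in the current directory
```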

72
contrib/rpm/crio.spec Normal file
View file

@ -0,0 +1,72 @@
%define debug_package %{nil}
%global provider github
%global provider_tld com
%global project kubernetes-incubator
%global repo cri-o
%global Name crio
# https://github.com/kubernetes-incubator/cri-o
%global provider_prefix %{provider}.%{provider_tld}/%{project}/%{repo}
%global import_path %{provider_prefix}
%global commit 8ba639952a95f2e24cc98987689138b67545576c
%global shortcommit %(c=%{commit}; echo ${c:0:7})
Name: %{Name}
Version: 0.0.1
Release: 1.git%{shortcommit}%{?dist}
Summary: Kubelet Container Runtime Interface (CRI) for OCI runtimes.
Group: Applications/Text
License: Apache 2.0
URL: https://%{provider_prefix}
Source0: https://%{provider_prefix}/archive/%{commit}/%{repo}-%{shortcommit}.tar.gz
Provides: %{repo}
BuildRequires: golang-github-cpuguy83-go-md2man
%description
The crio package provides an implementation of the
Kubelet Container Runtime Interface (CRI) using OCI conformant runtimes.
crio provides the following functionalities:
Support for multiple image formats, including the existing Docker image format
Support for multiple means to download images, including trust & image verification
Container image management (managing image layers, overlay filesystems, etc.)
Container process lifecycle management
Monitoring and logging required to satisfy the CRI
Resource isolation as required by the CRI
%prep
%setup -q -n %{repo}-%{commit}
%build
make all
%install
%make_install
%make_install install.systemd
#define license tag if not already defined
%{!?_licensedir:%global license %doc}
%files
%{_bindir}/crio
%{_bindir}/crioctl
%{_mandir}/man5/crio.conf.5*
%{_mandir}/man8/crio.8*
%{_sysconfdir}/crio.conf
%dir /%{_libexecdir}/crio
/%{_libexecdir}/crio/conmon
/%{_libexecdir}/crio/pause
%{_unitdir}/crio.service
%doc README.md
%license LICENSE
%preun
%systemd_preun %{Name}
%postun
%systemd_postun_with_restart %{Name}
%changelog
* Mon Oct 31 2016 Dan Walsh <dwalsh@redhat.com> - 0.0.1
- Initial RPM release

View file

@ -1,29 +0,0 @@
FROM centos
ENV VERSION=0 RELEASE=1 ARCH=x86_64
LABEL com.redhat.component="cri-o" \
name="$FGC/cri-o" \
version="$VERSION" \
release="$RELEASE.$DISTTAG" \
architecture="$ARCH" \
usage="atomic install --system --system-package=no crio && systemctl start crio" \
summary="The cri-o daemon as a system container." \
maintainer="Yu Qi Zhang <jzehrarnyg@gmail.com>" \
atomic.type="system"
RUN yum-config-manager --nogpgcheck --add-repo https://cbs.centos.org/repos/virt7-container-common-candidate/x86_64/os/ && \
yum install --disablerepo=extras --nogpgcheck --setopt=tsflags=nodocs -y iptables cri-o socat iproute runc && \
rpm -V iptables cri-o iproute runc && \
yum clean all && \
mkdir -p /exports/hostfs/etc/crio /exports/hostfs/opt/cni/bin/ /exports/hostfs/var/lib/containers/storage/ && \
cp /etc/crio/* /exports/hostfs/etc/crio && \
if test -e /usr/libexec/cni; then cp -Lr /usr/libexec/cni/* /exports/hostfs/opt/cni/bin/; fi
RUN sed -i '/storage_option =/s/.*/&\n"overlay.override_kernel_check=1",/' /exports/hostfs/etc/crio/crio.conf
COPY manifest.json tmpfiles.template config.json.template service.template /exports/
COPY set_mounts.sh /
COPY run.sh /usr/bin/
CMD ["/usr/bin/run.sh"]

View file

@ -1,57 +0,0 @@
# cri-o
This is the cri-o daemon as a system container.
## Building the image from source:
```
# git clone https://github.com/projectatomic/atomic-system-containers
# cd atomic-system-containers/cri-o
# docker build -t crio .
```
## Running the system container, with the atomic CLI:
Pull from registry into ostree:
```
# atomic pull --storage ostree $REGISTRY/crio
```
Or alternatively, pull from local docker:
```
# atomic pull --storage ostree docker:crio:latest
```
Install the container:
Currently we recommend using --system-package=no to avoid having rpmbuild create an rpm file
during installation. This flag will tell the atomic CLI to fall back to copying files to the
host instead.
```
# atomic install --system --system-package=no --name=crio ($REGISTRY)/crio
```
Start as a systemd service:
```
# systemctl start crio
```
Stopping the service
```
# systemctl stop crio
```
Removing the container
```
# atomic uninstall crio
```
## Binary version
You can find the image automatically built as: registry.centos.org/projectatomic/cri-o:latest

View file

@ -1,41 +0,0 @@
# This is for the purpose of building containers on the CentOS Community Container
# Pipeline. The containers are built, tested and delivered to registry.centos.org and
# lifecycled as well. A corresponding entry must exist in the container index itself,
# located at https://github.com/CentOS/container-index/tree/master/index.d
# You can know more at the following links:
# * https://github.com/CentOS/container-pipeline-service/blob/master/README.md
# * https://github.com/CentOS/container-index/blob/master/README.rst
# * https://wiki.centos.org/ContainerPipeline
# This will be part of the name of the container. It should match the job-id in the index entry
job-id: cri-o
#the following are optional, can be left blank
#defaults, where applicable are filled in
#nulecule-file : nulecule
# This flag tells the container pipeline to skip user defined tests on their container
test-skip : True
# This is the path of the script that initiates the user-defined tests. It must be able to
# return an exit code.
test-script : null
# This is the path of custom build script.
build-script : null
# This is the path of the custom delivery script
delivery-script : null
# This flag tells the pipeline to deliver this container to docker hub.
docker-index : True
# This flag can be used to enable or disable the custom delivery
custom-delivery : False
# This flag can be used to enable or disable delivery of container to local registry
local-delivery : True
Upstreams :
- ref :
url :

View file

@ -1,427 +0,0 @@
{
"ociVersion": "1.0.0",
"platform": {
"arch": "amd64",
"os": "linux"
},
"process": {
"args": [
"/usr/bin/run.sh"
],
"capabilities": {
"ambient": [
"CAP_CHOWN",
"CAP_FOWNER",
"CAP_FSETID",
"CAP_KILL",
"CAP_SETGID",
"CAP_SETUID",
"CAP_SETPCAP",
"CAP_LINUX_IMMUTABLE",
"CAP_NET_BIND_SERVICE",
"CAP_NET_BROADCAST",
"CAP_NET_ADMIN",
"CAP_NET_RAW",
"CAP_IPC_LOCK",
"CAP_IPC_OWNER",
"CAP_SYS_MODULE",
"CAP_SYS_RAWIO",
"CAP_SYS_CHROOT",
"CAP_SYS_PTRACE",
"CAP_SYS_PACCT",
"CAP_SYS_ADMIN",
"CAP_SYS_BOOT",
"CAP_SYS_NICE",
"CAP_SYS_RESOURCE",
"CAP_SYS_TIME",
"CAP_SYS_TTY_CONFIG",
"CAP_MKNOD",
"CAP_LEASE",
"CAP_AUDIT_WRITE",
"CAP_AUDIT_CONTROL",
"CAP_SETFCAP",
"CAP_DAC_OVERRIDE",
"CAP_MAC_OVERRIDE",
"CAP_DAC_READ_SEARCH",
"CAP_MAC_ADMIN",
"CAP_SYSLOG",
"CAP_WAKE_ALARM",
"CAP_BLOCK_SUSPEND"
],
"bounding": [
"CAP_CHOWN",
"CAP_FOWNER",
"CAP_FSETID",
"CAP_KILL",
"CAP_SETGID",
"CAP_SETUID",
"CAP_SETPCAP",
"CAP_LINUX_IMMUTABLE",
"CAP_NET_BIND_SERVICE",
"CAP_NET_BROADCAST",
"CAP_NET_ADMIN",
"CAP_NET_RAW",
"CAP_IPC_LOCK",
"CAP_IPC_OWNER",
"CAP_SYS_MODULE",
"CAP_SYS_RAWIO",
"CAP_SYS_CHROOT",
"CAP_SYS_PTRACE",
"CAP_SYS_PACCT",
"CAP_SYS_ADMIN",
"CAP_SYS_BOOT",
"CAP_SYS_NICE",
"CAP_SYS_RESOURCE",
"CAP_SYS_TIME",
"CAP_SYS_TTY_CONFIG",
"CAP_MKNOD",
"CAP_LEASE",
"CAP_AUDIT_WRITE",
"CAP_AUDIT_CONTROL",
"CAP_SETFCAP",
"CAP_DAC_OVERRIDE",
"CAP_MAC_OVERRIDE",
"CAP_DAC_READ_SEARCH",
"CAP_MAC_ADMIN",
"CAP_SYSLOG",
"CAP_WAKE_ALARM",
"CAP_BLOCK_SUSPEND"
],
"effective": [
"CAP_CHOWN",
"CAP_FOWNER",
"CAP_FSETID",
"CAP_KILL",
"CAP_SETGID",
"CAP_SETUID",
"CAP_SETPCAP",
"CAP_LINUX_IMMUTABLE",
"CAP_NET_BIND_SERVICE",
"CAP_NET_BROADCAST",
"CAP_NET_ADMIN",
"CAP_NET_RAW",
"CAP_IPC_LOCK",
"CAP_IPC_OWNER",
"CAP_SYS_MODULE",
"CAP_SYS_RAWIO",
"CAP_SYS_CHROOT",
"CAP_SYS_PTRACE",
"CAP_SYS_PACCT",
"CAP_SYS_ADMIN",
"CAP_SYS_BOOT",
"CAP_SYS_NICE",
"CAP_SYS_RESOURCE",
"CAP_SYS_TIME",
"CAP_SYS_TTY_CONFIG",
"CAP_MKNOD",
"CAP_LEASE",
"CAP_AUDIT_WRITE",
"CAP_AUDIT_CONTROL",
"CAP_SETFCAP",
"CAP_DAC_OVERRIDE",
"CAP_MAC_OVERRIDE",
"CAP_DAC_READ_SEARCH",
"CAP_MAC_ADMIN",
"CAP_SYSLOG",
"CAP_WAKE_ALARM",
"CAP_BLOCK_SUSPEND"
],
"inheritable": [
"CAP_CHOWN",
"CAP_FOWNER",
"CAP_FSETID",
"CAP_KILL",
"CAP_SETGID",
"CAP_SETUID",
"CAP_SETPCAP",
"CAP_LINUX_IMMUTABLE",
"CAP_NET_BIND_SERVICE",
"CAP_NET_BROADCAST",
"CAP_NET_ADMIN",
"CAP_NET_RAW",
"CAP_IPC_LOCK",
"CAP_IPC_OWNER",
"CAP_SYS_MODULE",
"CAP_SYS_RAWIO",
"CAP_SYS_CHROOT",
"CAP_SYS_PTRACE",
"CAP_SYS_PACCT",
"CAP_SYS_ADMIN",
"CAP_SYS_BOOT",
"CAP_SYS_NICE",
"CAP_SYS_RESOURCE",
"CAP_SYS_TIME",
"CAP_SYS_TTY_CONFIG",
"CAP_MKNOD",
"CAP_LEASE",
"CAP_AUDIT_WRITE",
"CAP_AUDIT_CONTROL",
"CAP_SETFCAP",
"CAP_DAC_OVERRIDE",
"CAP_MAC_OVERRIDE",
"CAP_DAC_READ_SEARCH",
"CAP_MAC_ADMIN",
"CAP_SYSLOG",
"CAP_WAKE_ALARM",
"CAP_BLOCK_SUSPEND"
],
"permitted": [
"CAP_CHOWN",
"CAP_FOWNER",
"CAP_FSETID",
"CAP_KILL",
"CAP_SETGID",
"CAP_SETUID",
"CAP_SETPCAP",
"CAP_LINUX_IMMUTABLE",
"CAP_NET_BIND_SERVICE",
"CAP_NET_BROADCAST",
"CAP_NET_ADMIN",
"CAP_NET_RAW",
"CAP_IPC_LOCK",
"CAP_IPC_OWNER",
"CAP_SYS_MODULE",
"CAP_SYS_RAWIO",
"CAP_SYS_CHROOT",
"CAP_SYS_PTRACE",
"CAP_SYS_PACCT",
"CAP_SYS_ADMIN",
"CAP_SYS_BOOT",
"CAP_SYS_NICE",
"CAP_SYS_RESOURCE",
"CAP_SYS_TIME",
"CAP_SYS_TTY_CONFIG",
"CAP_MKNOD",
"CAP_LEASE",
"CAP_AUDIT_WRITE",
"CAP_AUDIT_CONTROL",
"CAP_SETFCAP",
"CAP_DAC_OVERRIDE",
"CAP_MAC_OVERRIDE",
"CAP_DAC_READ_SEARCH",
"CAP_MAC_ADMIN",
"CAP_SYSLOG",
"CAP_WAKE_ALARM",
"CAP_BLOCK_SUSPEND"
]
},
"selinuxLabel": "system_u:system_r:container_runtime_t:s0",
"cwd": "/",
"env": [
"PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/usr/local/go/bin:/root/go/bin",
"TERM=xterm",
"LOG_LEVEL=$LOG_LEVEL",
"NAME=$NAME"
],
"noNewPrivileges": false,
"terminal": false,
"user": {
"gid": 0,
"uid": 0
}
},
"root": {
"path": "rootfs",
"readonly": true
},
"hooks": {},
"linux": {
"namespaces": [
{
"type": "mount"
}
],
"resources": {
"devices": [
{
"access": "rwm",
"allow": true
}
]
},
"rootfsPropagation": "private"
},
"mounts": [
{
"destination": "/tmp",
"options": [
"private",
"bind",
"rw",
"mode=755"
],
"source": "/tmp",
"type": "bind"
},
{
"destination": "/etc",
"options": [
"rbind",
"rprivate",
"rw",
"mode=755"
],
"source": "/etc",
"type": "bind"
},
{
"destination": "/lib/modules",
"options": [
"rbind",
"rprivate",
"rw",
"mode=755"
],
"source": "/lib/modules",
"type": "bind"
},
{
"destination": "/root",
"options": [
"rbind",
"rprivate",
"rw",
"mode=755"
],
"source": "/root",
"type": "bind"
},
{
"destination": "/home",
"options": [
"rbind",
"rprivate",
"rw",
"mode=755"
],
"source": "/home",
"type": "bind"
},
{
"destination": "/mnt",
"options": [
"rbind",
"rw",
"rprivate",
"mode=755"
],
"source": "/mnt",
"type": "bind"
},
{
"type": "bind",
"source": "${RUN_DIRECTORY}",
"destination": "/run",
"options": [
"rshared",
"rbind",
"rw",
"mode=755"
]
},
{
"type": "bind",
"source": "${RUN_DIRECTORY}/systemd",
"destination": "/run/systemd",
"options": [
"rslave",
"bind",
"rw",
"mode=755"
]
},
{
"destination": "/var/log",
"options": [
"rbind",
"rslave",
"rw"
],
"source": "/var/log",
"type": "bind"
},
{
"destination": "/var/lib",
"options": [
"rbind",
"rprivate",
"rw"
],
"source": "${STATE_DIRECTORY}",
"type": "bind"
},
{
"destination": "/var/lib/containers/storage",
"options": [
"rbind",
"rshared",
"rw"
],
"source": "${VAR_LIB_CONTAINERS_STORAGE}",
"type": "bind"
},
{
"destination": "/var/lib/origin",
"options": [
"rshared",
"bind",
"rw"
],
"source": "${VAR_LIB_ORIGIN}",
"type": "bind"
},
{
"destination": "/var/lib/kubelet",
"options": [
"rshared",
"bind",
"rw"
],
"source": "${VAR_LIB_KUBE}",
"type": "bind"
},
{
"destination": "/opt/cni",
"options": [
"rbind",
"rprivate",
"ro",
"mode=755"
],
"source": "${OPT_CNI}",
"type": "bind"
},
{
"destination": "/dev",
"options": [
"rprivate",
"rbind",
"rw",
"mode=755"
],
"source": "/dev",
"type": "bind"
},
{
"destination": "/sys",
"options": [
"rprivate",
"rbind",
"rw",
"mode=755"
],
"source": "/sys",
"type": "bind"
},
{
"destination": "/proc",
"options": [
"rbind",
"rw",
"mode=755"
],
"source": "/proc",
"type": "proc"
}
]
}

View file

@ -1,10 +0,0 @@
{
"version": "1.0",
"defaultValues": {
"LOG_LEVEL" : "info",
"OPT_CNI" : "/opt/cni",
"VAR_LIB_CONTAINERS_STORAGE" : "/var/lib/containers/storage",
"VAR_LIB_ORIGIN" : "/var/lib/origin",
"VAR_LIB_KUBE" : "/var/lib/kubelet"
}
}

View file

@ -1,11 +0,0 @@
#!/bin/sh
# Ensure that new processes maintain this SELinux label
PID=$$
LABEL=`tr -d '\000' < /proc/$PID/attr/current`
printf %s $LABEL > /proc/self/attr/exec
test -e /etc/sysconfig/crio-storage && source /etc/sysconfig/crio-storage
test -e /etc/sysconfig/crio-network && source /etc/sysconfig/crio-network
exec /usr/bin/crio --log-level=$LOG_LEVEL

View file

@ -1,20 +0,0 @@
[Unit]
Description=crio daemon
After=network.target
[Service]
Type=notify
ExecStartPre=/bin/sh $DESTDIR/rootfs/set_mounts.sh
ExecStart=$EXEC_START
ExecStop=$EXEC_STOP
Restart=on-failure
WorkingDirectory=$DESTDIR
RuntimeDirectory=${NAME}
TasksMax=infinity
LimitNOFILE=1048576
LimitNPROC=1048576
LimitCORE=infinity
TimeoutStartSec=0
[Install]
WantedBy=multi-user.target

View file

@ -1,7 +0,0 @@
#!/bin/sh
findmnt /var/lib/containers/storage > /dev/null || mount --rbind --make-shared /var/lib/containers/storage /var/lib/containers/storage
findmnt /var/lib/origin > /dev/null || mount --bind --make-shared /var/lib/origin /var/lib/origin
findmnt /var/lib/kubelet > /dev/null || mount --bind --make-shared /var/lib/kubelet /var/lib/kubelet
mount --make-shared /run
findmnt /run/systemd > /dev/null || mount --bind --make-rslave /run/systemd /run/systemd

View file

@ -1,5 +0,0 @@
d ${RUN_DIRECTORY}/crio - - - - -
d /etc/crio - - - - -
Z /etc/crio - - - - -
d ${STATE_DIRECTORY}/origin - - - - -
d ${STATE_DIRECTORY}/kubelet - - - - -

View file

@ -1,30 +0,0 @@
FROM registry.fedoraproject.org/fedora:27
ENV VERSION=0 RELEASE=1 ARCH=x86_64
LABEL com.redhat.component="cri-o" \
name="$FGC/cri-o" \
version="$VERSION" \
release="$RELEASE.$DISTTAG" \
architecture="$ARCH" \
usage="atomic install --system --system-package=no crio && systemctl start crio" \
summary="The cri-o daemon as a system container." \
maintainer="Yu Qi Zhang <jzehrarnyg@gmail.com>" \
atomic.type="system"
COPY README.md /
RUN dnf install --enablerepo=updates-testing --setopt=tsflags=nodocs -y iptables cri-o socat iproute runc && \
rpm -V iptables cri-o iproute runc && \
dnf clean all && \
mkdir -p /exports/hostfs/etc/crio /exports/hostfs/opt/cni/bin/ /exports/hostfs/var/lib/containers/storage/ && \
cp /etc/crio/* /exports/hostfs/etc/crio && \
if test -e /usr/libexec/cni; then cp -Lr /usr/libexec/cni/* /exports/hostfs/opt/cni/bin/; fi
RUN sed -i '/storage_option =/s/.*/&\n"overlay.override_kernel_check=1",/' /exports/hostfs/etc/crio/crio.conf
COPY manifest.json tmpfiles.template config.json.template service.template /exports/
COPY set_mounts.sh /
COPY run.sh /usr/bin/
CMD ["/usr/bin/run.sh"]

View file

@ -1,53 +0,0 @@
# cri-o
This is the cri-o daemon as a system container.
## Building the image from source:
```
# git clone https://github.com/projectatomic/atomic-system-containers
# cd atomic-system-containers/cri-o
# docker build -t crio .
```
## Running the system container, with the atomic CLI:
Pull from registry into ostree:
```
# atomic pull --storage ostree $REGISTRY/crio
```
Or alternatively, pull from local docker:
```
# atomic pull --storage ostree docker:crio:latest
```
Install the container:
Currently we recommend using --system-package=no to avoid having rpmbuild create an rpm file
during installation. This flag will tell the atomic CLI to fall back to copying files to the
host instead.
```
# atomic install --system --system-package=no --name=crio ($REGISTRY)/crio
```
Start as a systemd service:
```
# systemctl start crio
```
Stopping the service
```
# systemctl stop crio
```
Removing the container
```
# atomic uninstall crio
```

View file

@ -1,432 +0,0 @@
{
"ociVersion": "1.0.0",
"platform": {
"arch": "amd64",
"os": "linux"
},
"process": {
"args": [
"/usr/bin/run.sh"
],
"selinuxLabel": "system_u:system_r:container_runtime_t:s0",
"capabilities": {
"ambient": [
"CAP_CHOWN",
"CAP_FOWNER",
"CAP_FSETID",
"CAP_KILL",
"CAP_SETGID",
"CAP_SETUID",
"CAP_SETPCAP",
"CAP_LINUX_IMMUTABLE",
"CAP_NET_BIND_SERVICE",
"CAP_NET_BROADCAST",
"CAP_NET_ADMIN",
"CAP_NET_RAW",
"CAP_IPC_LOCK",
"CAP_IPC_OWNER",
"CAP_SYS_MODULE",
"CAP_SYS_RAWIO",
"CAP_SYS_CHROOT",
"CAP_SYS_PTRACE",
"CAP_SYS_PACCT",
"CAP_SYS_ADMIN",
"CAP_SYS_BOOT",
"CAP_SYS_NICE",
"CAP_SYS_RESOURCE",
"CAP_SYS_TIME",
"CAP_SYS_TTY_CONFIG",
"CAP_MKNOD",
"CAP_LEASE",
"CAP_AUDIT_WRITE",
"CAP_AUDIT_CONTROL",
"CAP_SETFCAP",
"CAP_DAC_OVERRIDE",
"CAP_MAC_OVERRIDE",
"CAP_DAC_READ_SEARCH",
"CAP_MAC_ADMIN",
"CAP_SYSLOG",
"CAP_WAKE_ALARM",
"CAP_BLOCK_SUSPEND",
"CAP_AUDIT_READ"
],
"bounding": [
"CAP_CHOWN",
"CAP_FOWNER",
"CAP_FSETID",
"CAP_KILL",
"CAP_SETGID",
"CAP_SETUID",
"CAP_SETPCAP",
"CAP_LINUX_IMMUTABLE",
"CAP_NET_BIND_SERVICE",
"CAP_NET_BROADCAST",
"CAP_NET_ADMIN",
"CAP_NET_RAW",
"CAP_IPC_LOCK",
"CAP_IPC_OWNER",
"CAP_SYS_MODULE",
"CAP_SYS_RAWIO",
"CAP_SYS_CHROOT",
"CAP_SYS_PTRACE",
"CAP_SYS_PACCT",
"CAP_SYS_ADMIN",
"CAP_SYS_BOOT",
"CAP_SYS_NICE",
"CAP_SYS_RESOURCE",
"CAP_SYS_TIME",
"CAP_SYS_TTY_CONFIG",
"CAP_MKNOD",
"CAP_LEASE",
"CAP_AUDIT_WRITE",
"CAP_AUDIT_CONTROL",
"CAP_SETFCAP",
"CAP_DAC_OVERRIDE",
"CAP_MAC_OVERRIDE",
"CAP_DAC_READ_SEARCH",
"CAP_MAC_ADMIN",
"CAP_SYSLOG",
"CAP_WAKE_ALARM",
"CAP_BLOCK_SUSPEND",
"CAP_AUDIT_READ"
],
"effective": [
"CAP_CHOWN",
"CAP_FOWNER",
"CAP_FSETID",
"CAP_KILL",
"CAP_SETGID",
"CAP_SETUID",
"CAP_SETPCAP",
"CAP_LINUX_IMMUTABLE",
"CAP_NET_BIND_SERVICE",
"CAP_NET_BROADCAST",
"CAP_NET_ADMIN",
"CAP_NET_RAW",
"CAP_IPC_LOCK",
"CAP_IPC_OWNER",
"CAP_SYS_MODULE",
"CAP_SYS_RAWIO",
"CAP_SYS_CHROOT",
"CAP_SYS_PTRACE",
"CAP_SYS_PACCT",
"CAP_SYS_ADMIN",
"CAP_SYS_BOOT",
"CAP_SYS_NICE",
"CAP_SYS_RESOURCE",
"CAP_SYS_TIME",
"CAP_SYS_TTY_CONFIG",
"CAP_MKNOD",
"CAP_LEASE",
"CAP_AUDIT_WRITE",
"CAP_AUDIT_CONTROL",
"CAP_SETFCAP",
"CAP_DAC_OVERRIDE",
"CAP_MAC_OVERRIDE",
"CAP_DAC_READ_SEARCH",
"CAP_MAC_ADMIN",
"CAP_SYSLOG",
"CAP_WAKE_ALARM",
"CAP_BLOCK_SUSPEND",
"CAP_AUDIT_READ"
],
"inheritable": [
"CAP_CHOWN",
"CAP_FOWNER",
"CAP_FSETID",
"CAP_KILL",
"CAP_SETGID",
"CAP_SETUID",
"CAP_SETPCAP",
"CAP_LINUX_IMMUTABLE",
"CAP_NET_BIND_SERVICE",
"CAP_NET_BROADCAST",
"CAP_NET_ADMIN",
"CAP_NET_RAW",
"CAP_IPC_LOCK",
"CAP_IPC_OWNER",
"CAP_SYS_MODULE",
"CAP_SYS_RAWIO",
"CAP_SYS_CHROOT",
"CAP_SYS_PTRACE",
"CAP_SYS_PACCT",
"CAP_SYS_ADMIN",
"CAP_SYS_BOOT",
"CAP_SYS_NICE",
"CAP_SYS_RESOURCE",
"CAP_SYS_TIME",
"CAP_SYS_TTY_CONFIG",
"CAP_MKNOD",
"CAP_LEASE",
"CAP_AUDIT_WRITE",
"CAP_AUDIT_CONTROL",
"CAP_SETFCAP",
"CAP_DAC_OVERRIDE",
"CAP_MAC_OVERRIDE",
"CAP_DAC_READ_SEARCH",
"CAP_MAC_ADMIN",
"CAP_SYSLOG",
"CAP_WAKE_ALARM",
"CAP_BLOCK_SUSPEND",
"CAP_AUDIT_READ"
],
"permitted": [
"CAP_CHOWN",
"CAP_FOWNER",
"CAP_FSETID",
"CAP_KILL",
"CAP_SETGID",
"CAP_SETUID",
"CAP_SETPCAP",
"CAP_LINUX_IMMUTABLE",
"CAP_NET_BIND_SERVICE",
"CAP_NET_BROADCAST",
"CAP_NET_ADMIN",
"CAP_NET_RAW",
"CAP_IPC_LOCK",
"CAP_IPC_OWNER",
"CAP_SYS_MODULE",
"CAP_SYS_RAWIO",
"CAP_SYS_CHROOT",
"CAP_SYS_PTRACE",
"CAP_SYS_PACCT",
"CAP_SYS_ADMIN",
"CAP_SYS_BOOT",
"CAP_SYS_NICE",
"CAP_SYS_RESOURCE",
"CAP_SYS_TIME",
"CAP_SYS_TTY_CONFIG",
"CAP_MKNOD",
"CAP_LEASE",
"CAP_AUDIT_WRITE",
"CAP_AUDIT_CONTROL",
"CAP_SETFCAP",
"CAP_DAC_OVERRIDE",
"CAP_MAC_OVERRIDE",
"CAP_DAC_READ_SEARCH",
"CAP_MAC_ADMIN",
"CAP_SYSLOG",
"CAP_WAKE_ALARM",
"CAP_BLOCK_SUSPEND",
"CAP_AUDIT_READ"
]
},
"cwd": "/",
"env": [
"PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/usr/local/go/bin:/root/go/bin",
"TERM=xterm",
"LOG_LEVEL=$LOG_LEVEL",
"NAME=$NAME"
],
"noNewPrivileges": false,
"terminal": false,
"user": {
"gid": 0,
"uid": 0
}
},
"root": {
"path": "rootfs",
"readonly": true
},
"hooks": {},
"linux": {
"namespaces": [
{
"type": "mount"
}
],
"resources": {
"devices": [
{
"access": "rwm",
"allow": true
}
]
},
"rootfsPropagation": "private"
},
"mounts": [
{
"destination": "/tmp",
"options": [
"private",
"bind",
"rw",
"mode=755"
],
"source": "/tmp",
"type": "bind"
},
{
"destination": "/etc",
"options": [
"rbind",
"rprivate",
"rw",
"mode=755"
],
"source": "/etc",
"type": "bind"
},
{
"destination": "/lib/modules",
"options": [
"rbind",
"rprivate",
"rw",
"mode=755"
],
"source": "/lib/modules",
"type": "bind"
},
{
"destination": "/root",
"options": [
"rbind",
"rprivate",
"rw",
"mode=755"
],
"source": "/root",
"type": "bind"
},
{
"destination": "/home",
"options": [
"rbind",
"rprivate",
"rw",
"mode=755"
],
"source": "/home",
"type": "bind"
},
{
"destination": "/mnt",
"options": [
"rbind",
"rw",
"rprivate",
"mode=755"
],
"source": "/mnt",
"type": "bind"
},
{
"type": "bind",
"source": "${RUN_DIRECTORY}",
"destination": "/run",
"options": [
"rshared",
"rbind",
"rw",
"mode=755"
]
},
{
"type": "bind",
"source": "${RUN_DIRECTORY}/systemd",
"destination": "/run/systemd",
"options": [
"rslave",
"bind",
"rw",
"mode=755"
]
},
{
"destination": "/var/log",
"options": [
"rbind",
"rslave",
"rw"
],
"source": "/var/log",
"type": "bind"
},
{
"destination": "/var/lib",
"options": [
"rbind",
"rprivate",
"rw"
],
"source": "${STATE_DIRECTORY}",
"type": "bind"
},
{
"destination": "/var/lib/containers/storage",
"options": [
"rbind",
"rshared",
"rw"
],
"source": "${VAR_LIB_CONTAINERS_STORAGE}",
"type": "bind"
},
{
"destination": "/var/lib/origin",
"options": [
"rshared",
"bind",
"rw"
],
"source": "${VAR_LIB_ORIGIN}",
"type": "bind"
},
{
"destination": "/var/lib/kubelet",
"options": [
"rshared",
"bind",
"rw"
],
"source": "${VAR_LIB_KUBE}",
"type": "bind"
},
{
"destination": "/opt/cni",
"options": [
"rbind",
"rprivate",
"ro",
"mode=755"
],
"source": "${OPT_CNI}",
"type": "bind"
},
{
"destination": "/dev",
"options": [
"rprivate",
"rbind",
"rw",
"mode=755"
],
"source": "/dev",
"type": "bind"
},
{
"destination": "/sys",
"options": [
"rprivate",
"rbind",
"rw",
"mode=755"
],
"source": "/sys",
"type": "bind"
},
{
"destination": "/proc",
"options": [
"rbind",
"rw",
"mode=755"
],
"source": "/proc",
"type": "proc"
}
]
}

View file

@ -1,10 +0,0 @@
{
"version": "1.0",
"defaultValues": {
"LOG_LEVEL" : "info",
"OPT_CNI" : "/opt/cni",
"VAR_LIB_CONTAINERS_STORAGE" : "/var/lib/containers/storage",
"VAR_LIB_ORIGIN" : "/var/lib/origin",
"VAR_LIB_KUBE" : "/var/lib/kubelet"
}
}

View file

@ -1,11 +0,0 @@
#!/bin/sh
# Ensure that new processes maintain this SELinux label
PID=$$
LABEL=`tr -d '\000' < /proc/$PID/attr/current`
printf %s $LABEL > /proc/self/attr/exec
test -e /etc/sysconfig/crio-storage && source /etc/sysconfig/crio-storage
test -e /etc/sysconfig/crio-network && source /etc/sysconfig/crio-network
exec /usr/bin/crio --log-level=$LOG_LEVEL

View file

@ -1,20 +0,0 @@
[Unit]
Description=crio daemon
After=network.target
[Service]
Type=notify
ExecStartPre=/bin/sh $DESTDIR/rootfs/set_mounts.sh
ExecStart=$EXEC_START
ExecStop=$EXEC_STOP
Restart=on-failure
WorkingDirectory=$DESTDIR
RuntimeDirectory=${NAME}
TasksMax=infinity
LimitNOFILE=1048576
LimitNPROC=1048576
LimitCORE=infinity
TimeoutStartSec=0
[Install]
WantedBy=multi-user.target

View file

@ -1,7 +0,0 @@
#!/bin/sh
findmnt /var/lib/containers/storage > /dev/null || mount --rbind --make-shared /var/lib/containers/storage /var/lib/containers/storage
findmnt /var/lib/origin > /dev/null || mount --bind --make-shared /var/lib/origin /var/lib/origin
findmnt /var/lib/kubelet > /dev/null || mount --bind --make-shared /var/lib/kubelet /var/lib/kubelet
mount --make-shared /run
findmnt /run/systemd > /dev/null || mount --bind --make-rslave /run/systemd /run/systemd

View file

@ -1,5 +0,0 @@
d ${RUN_DIRECTORY}/crio - - - - -
d /etc/crio - - - - -
Z /etc/crio - - - - -
d ${STATE_DIRECTORY}/origin - - - - -
d ${STATE_DIRECTORY}/kubelet - - - - -

View file

@ -1,41 +0,0 @@
#oit## This file is managed by the OpenShift Image Tool
#oit## by the OpenShift Continuous Delivery team.
#oit##
#oit## Any yum repos listed in this file will effectively be ignored during CD builds.
#oit## Yum repos must be enabled in the oit configuration files.
#oit## Some aspects of this file may be managed programmatically. For example, the image name, labels (version,
#oit## release, and other), and the base FROM. Changes made directly in distgit may be lost during the next
#oit## reconciliation.
#oit##
FROM rhel7:7-released
RUN \
yum install --setopt=tsflags=nodocs -y socat iptables cri-o iproute runc skopeo-containers container-selinux && \
rpm -V socat iptables cri-o iproute runc skopeo-containers container-selinux && \
yum clean all && \
mkdir -p /exports/hostfs/etc/crio /exports/hostfs/opt/cni/bin/ /exports/hostfs/var/lib/containers/storage/ && \
cp /etc/crio/* /exports/hostfs/etc/crio && \
if test -e /usr/libexec/cni; then cp -Lr /usr/libexec/cni/* /exports/hostfs/opt/cni/bin/; fi
COPY manifest.json tmpfiles.template config.json.template service.template /exports/
COPY set_mounts.sh /
COPY run.sh /usr/bin/
CMD ["/usr/bin/run.sh"]
LABEL \
com.redhat.component="cri-o-docker" \
io.k8s.description="CRI-O is an implementation of the Kubernetes CRI. It is a lightweight, OCI-compliant runtime that is native to kubernetes. CRI-O supports OCI container images and can pull from any container registry." \
maintainer="Jhon Honce <jhonce@redhat.com>" \
name="openshift3/cri-o" \
License="GPLv2+" \
io.k8s.display-name="CRI-O" \
summary="OCI-based implementation of Kubernetes Container Runtime Interface" \
release="0.13.0.0" \
version="v3.8.0" \
architecture="x86_64" \
usage="atomic install --system --system-package=no crio && systemctl start crio" \
vendor="Red Hat" \
io.openshift.tags="cri-o system rhel7" \
atomic.type="system"

View file

@ -1,422 +0,0 @@
{
"ociVersion": "1.0.0",
"platform": {
"arch": "amd64",
"os": "linux"
},
"process": {
"args": [
"/usr/bin/run.sh"
],
"capabilities": {
"ambient": [
"CAP_CHOWN",
"CAP_FOWNER",
"CAP_FSETID",
"CAP_KILL",
"CAP_SETGID",
"CAP_SETUID",
"CAP_SETPCAP",
"CAP_LINUX_IMMUTABLE",
"CAP_NET_BIND_SERVICE",
"CAP_NET_BROADCAST",
"CAP_NET_ADMIN",
"CAP_NET_RAW",
"CAP_IPC_LOCK",
"CAP_IPC_OWNER",
"CAP_SYS_MODULE",
"CAP_SYS_RAWIO",
"CAP_SYS_CHROOT",
"CAP_SYS_PTRACE",
"CAP_SYS_PACCT",
"CAP_SYS_ADMIN",
"CAP_SYS_BOOT",
"CAP_SYS_NICE",
"CAP_SYS_RESOURCE",
"CAP_SYS_TIME",
"CAP_SYS_TTY_CONFIG",
"CAP_MKNOD",
"CAP_LEASE",
"CAP_AUDIT_WRITE",
"CAP_AUDIT_CONTROL",
"CAP_SETFCAP",
"CAP_DAC_OVERRIDE",
"CAP_MAC_OVERRIDE",
"CAP_DAC_READ_SEARCH",
"CAP_MAC_ADMIN",
"CAP_SYSLOG",
"CAP_WAKE_ALARM",
"CAP_BLOCK_SUSPEND"
],
"bounding": [
"CAP_CHOWN",
"CAP_FOWNER",
"CAP_FSETID",
"CAP_KILL",
"CAP_SETGID",
"CAP_SETUID",
"CAP_SETPCAP",
"CAP_LINUX_IMMUTABLE",
"CAP_NET_BIND_SERVICE",
"CAP_NET_BROADCAST",
"CAP_NET_ADMIN",
"CAP_NET_RAW",
"CAP_IPC_LOCK",
"CAP_IPC_OWNER",
"CAP_SYS_MODULE",
"CAP_SYS_RAWIO",
"CAP_SYS_CHROOT",
"CAP_SYS_PTRACE",
"CAP_SYS_PACCT",
"CAP_SYS_ADMIN",
"CAP_SYS_BOOT",
"CAP_SYS_NICE",
"CAP_SYS_RESOURCE",
"CAP_SYS_TIME",
"CAP_SYS_TTY_CONFIG",
"CAP_MKNOD",
"CAP_LEASE",
"CAP_AUDIT_WRITE",
"CAP_AUDIT_CONTROL",
"CAP_SETFCAP",
"CAP_DAC_OVERRIDE",
"CAP_MAC_OVERRIDE",
"CAP_DAC_READ_SEARCH",
"CAP_MAC_ADMIN",
"CAP_SYSLOG",
"CAP_WAKE_ALARM",
"CAP_BLOCK_SUSPEND"
],
"effective": [
"CAP_CHOWN",
"CAP_FOWNER",
"CAP_FSETID",
"CAP_KILL",
"CAP_SETGID",
"CAP_SETUID",
"CAP_SETPCAP",
"CAP_LINUX_IMMUTABLE",
"CAP_NET_BIND_SERVICE",
"CAP_NET_BROADCAST",
"CAP_NET_ADMIN",
"CAP_NET_RAW",
"CAP_IPC_LOCK",
"CAP_IPC_OWNER",
"CAP_SYS_MODULE",
"CAP_SYS_RAWIO",
"CAP_SYS_CHROOT",
"CAP_SYS_PTRACE",
"CAP_SYS_PACCT",
"CAP_SYS_ADMIN",
"CAP_SYS_BOOT",
"CAP_SYS_NICE",
"CAP_SYS_RESOURCE",
"CAP_SYS_TIME",
"CAP_SYS_TTY_CONFIG",
"CAP_MKNOD",
"CAP_LEASE",
"CAP_AUDIT_WRITE",
"CAP_AUDIT_CONTROL",
"CAP_SETFCAP",
"CAP_DAC_OVERRIDE",
"CAP_MAC_OVERRIDE",
"CAP_DAC_READ_SEARCH",
"CAP_MAC_ADMIN",
"CAP_SYSLOG",
"CAP_WAKE_ALARM",
"CAP_BLOCK_SUSPEND"
],
"inheritable": [
"CAP_CHOWN",
"CAP_FOWNER",
"CAP_FSETID",
"CAP_KILL",
"CAP_SETGID",
"CAP_SETUID",
"CAP_SETPCAP",
"CAP_LINUX_IMMUTABLE",
"CAP_NET_BIND_SERVICE",
"CAP_NET_BROADCAST",
"CAP_NET_ADMIN",
"CAP_NET_RAW",
"CAP_IPC_LOCK",
"CAP_IPC_OWNER",
"CAP_SYS_MODULE",
"CAP_SYS_RAWIO",
"CAP_SYS_CHROOT",
"CAP_SYS_PTRACE",
"CAP_SYS_PACCT",
"CAP_SYS_ADMIN",
"CAP_SYS_BOOT",
"CAP_SYS_NICE",
"CAP_SYS_RESOURCE",
"CAP_SYS_TIME",
"CAP_SYS_TTY_CONFIG",
"CAP_MKNOD",
"CAP_LEASE",
"CAP_AUDIT_WRITE",
"CAP_AUDIT_CONTROL",
"CAP_SETFCAP",
"CAP_DAC_OVERRIDE",
"CAP_MAC_OVERRIDE",
"CAP_DAC_READ_SEARCH",
"CAP_MAC_ADMIN",
"CAP_SYSLOG",
"CAP_WAKE_ALARM",
"CAP_BLOCK_SUSPEND"
],
"permitted": [
"CAP_CHOWN",
"CAP_FOWNER",
"CAP_FSETID",
"CAP_KILL",
"CAP_SETGID",
"CAP_SETUID",
"CAP_SETPCAP",
"CAP_LINUX_IMMUTABLE",
"CAP_NET_BIND_SERVICE",
"CAP_NET_BROADCAST",
"CAP_NET_ADMIN",
"CAP_NET_RAW",
"CAP_IPC_LOCK",
"CAP_IPC_OWNER",
"CAP_SYS_MODULE",
"CAP_SYS_RAWIO",
"CAP_SYS_CHROOT",
"CAP_SYS_PTRACE",
"CAP_SYS_PACCT",
"CAP_SYS_ADMIN",
"CAP_SYS_BOOT",
"CAP_SYS_NICE",
"CAP_SYS_RESOURCE",
"CAP_SYS_TIME",
"CAP_SYS_TTY_CONFIG",
"CAP_MKNOD",
"CAP_LEASE",
"CAP_AUDIT_WRITE",
"CAP_AUDIT_CONTROL",
"CAP_SETFCAP",
"CAP_DAC_OVERRIDE",
"CAP_MAC_OVERRIDE",
"CAP_DAC_READ_SEARCH",
"CAP_MAC_ADMIN",
"CAP_SYSLOG",
"CAP_WAKE_ALARM",
"CAP_BLOCK_SUSPEND"
]
},
"selinuxLabel": "system_u:system_r:container_runtime_t:s0",
"cwd": "/",
"env": [
"PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/usr/local/go/bin:/root/go/bin",
"TERM=xterm",
"LOG_LEVEL=$LOG_LEVEL",
"NAME=$NAME"
],
"noNewPrivileges": false,
"terminal": false,
"user": {
"gid": 0,
"uid": 0
}
},
"root": {
"path": "rootfs",
"readonly": true
},
"hooks": {},
"linux": {
"namespaces": [{
"type": "mount"
}],
"resources": {
"devices": [{
"access": "rwm",
"allow": true
}]
},
"rootfsPropagation": "private"
},
"mounts": [{
"destination": "/tmp",
"options": [
"private",
"bind",
"rw",
"mode=755"
],
"source": "/tmp",
"type": "bind"
},
{
"destination": "/etc",
"options": [
"rbind",
"rprivate",
"rw",
"mode=755"
],
"source": "/etc",
"type": "bind"
},
{
"destination": "/lib/modules",
"options": [
"rbind",
"rprivate",
"rw",
"mode=755"
],
"source": "/lib/modules",
"type": "bind"
},
{
"destination": "/root",
"options": [
"rbind",
"rprivate",
"rw",
"mode=755"
],
"source": "/root",
"type": "bind"
},
{
"destination": "/home",
"options": [
"rbind",
"rprivate",
"rw",
"mode=755"
],
"source": "/home",
"type": "bind"
},
{
"destination": "/mnt",
"options": [
"rbind",
"rw",
"rprivate",
"mode=755"
],
"source": "/mnt",
"type": "bind"
},
{
"type": "bind",
"source": "${RUN_DIRECTORY}",
"destination": "/run",
"options": [
"rshared",
"rbind",
"rw",
"mode=755"
]
},
{
"type": "bind",
"source": "${RUN_DIRECTORY}/systemd",
"destination": "/run/systemd",
"options": [
"rslave",
"bind",
"rw",
"mode=755"
]
},
{
"destination": "/var/log",
"options": [
"rbind",
"rslave",
"rw"
],
"source": "/var/log",
"type": "bind"
},
{
"destination": "/var/lib",
"options": [
"rbind",
"rprivate",
"rw"
],
"source": "${STATE_DIRECTORY}",
"type": "bind"
},
{
"destination": "/var/lib/containers/storage",
"options": [
"rbind",
"rshared",
"rw"
],
"source": "${VAR_LIB_CONTAINERS_STORAGE}",
"type": "bind"
},
{
"destination": "/var/lib/origin",
"options": [
"rshared",
"bind",
"rw"
],
"source": "${VAR_LIB_ORIGIN}",
"type": "bind"
},
{
"destination": "/var/lib/kubelet",
"options": [
"rshared",
"bind",
"rw"
],
"source": "${VAR_LIB_KUBE}",
"type": "bind"
},
{
"destination": "/opt/cni",
"options": [
"rbind",
"rprivate",
"ro",
"mode=755"
],
"source": "${OPT_CNI}",
"type": "bind"
},
{
"destination": "/dev",
"options": [
"rprivate",
"rbind",
"rw",
"mode=755"
],
"source": "/dev",
"type": "bind"
},
{
"destination": "/sys",
"options": [
"rprivate",
"rbind",
"rw",
"mode=755"
],
"source": "/sys",
"type": "bind"
},
{
"destination": "/proc",
"options": [
"rbind",
"rw",
"mode=755"
],
"source": "/proc",
"type": "proc"
}
]
}

View file

@ -1,37 +0,0 @@
% CRI-O (1) Container Image Pages
% Jhon Honce
% September 7, 2017
# NAME
cri-o - OCI-based implementation of Kubernetes Container Runtime Interface
# DESCRIPTION
CRI-O is an implementation of the Kubernetes CRI. It is a lightweight, OCI-compliant runtime that is native to kubernetes. CRI-O supports OCI container images and can pull from any container registry.
You can find more information on the CRI-O project at <https://github.com/kubernetes-incubator/cri-o/>
# USAGE
Pull from local docker and install system container:
```
# atomic pull --storage ostree docker:openshift3/cri-o:latest
# atomic install --system --system-package=no --name cri-o openshift3/cri-o
```
Start and enable as a systemd service:
```
# systemctl enable --now cri-o
```
Stopping the service
```
# systemctl stop cri-o
```
Removing the container
```
# atomic uninstall cri-o
```
# SEE ALSO
man systemd(1)

View file

@ -1,10 +0,0 @@
{
"version": "1.0",
"defaultValues": {
"LOG_LEVEL": "info",
"OPT_CNI": "/opt/cni",
"VAR_LIB_CONTAINERS_STORAGE": "/var/lib/containers/storage",
"VAR_LIB_ORIGIN": "/var/lib/origin",
"VAR_LIB_KUBE": "/var/lib/kubelet"
}
}

View file

@ -1,11 +0,0 @@
#!/bin/sh
# Ensure that new processes maintain this SELinux label
PID=$$
LABEL=`tr -d '\000' < /proc/$PID/attr/current`
printf %s $LABEL > /proc/self/attr/exec
test -e /etc/sysconfig/crio-storage && source /etc/sysconfig/crio-storage
test -e /etc/sysconfig/crio-network && source /etc/sysconfig/crio-network
exec /usr/bin/crio --log-level=$LOG_LEVEL

View file

@ -1,20 +0,0 @@
[Unit]
Description=crio daemon
After=network.target
[Service]
Type=notify
ExecStartPre=/bin/sh $DESTDIR/rootfs/set_mounts.sh
ExecStart=$EXEC_START
ExecStop=$EXEC_STOP
Restart=on-failure
WorkingDirectory=$DESTDIR
RuntimeDirectory=${NAME}
TasksMax=infinity
LimitNOFILE=1048576
LimitNPROC=1048576
LimitCORE=infinity
TimeoutStartSec=0
[Install]
WantedBy=multi-user.target

View file

@ -1,7 +0,0 @@
#!/bin/sh
findmnt /var/lib/containers/storage > /dev/null || mount --rbind --make-shared /var/lib/containers/storage /var/lib/containers/storage
findmnt /var/lib/origin > /dev/null || mount --bind --make-shared /var/lib/origin /var/lib/origin
findmnt /var/lib/kubelet > /dev/null || mount --bind --make-shared /var/lib/kubelet /var/lib/kubelet
mount --make-shared /run
findmnt /run/systemd > /dev/null || mount --bind --make-rslave /run/systemd /run/systemd

View file

@ -1,5 +0,0 @@
d ${RUN_DIRECTORY}/crio - - - - -
d /etc/crio - - - - -
Z /etc/crio - - - - -
d ${STATE_DIRECTORY}/origin - - - - -
d ${STATE_DIRECTORY}/kubelet - - - - -

View file

@ -12,7 +12,7 @@ ExecStart=/usr/local/bin/crio \
$CRIO_STORAGE_OPTIONS \
$CRIO_NETWORK_OPTIONS
ExecReload=/bin/kill -s HUP $MAINPID
TasksMax=infinity
TasksMax=8192
LimitNOFILE=1048576
LimitNPROC=1048576
LimitCORE=infinity
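Because the unit above defines `ExecReload=/bin/kill -s HUP $MAINPID`, a reload can be issued without restarting the daemon; a minimal sketch (unit name assumed to be `crio.service`):
```
sudo systemctl daemon-reload   # re-read the edited unit file
sudo systemctl reload crio     # runs ExecReload, i.e. sends SIGHUP to the main crio process
```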

View file

@ -0,0 +1,419 @@
## This playbook expects --extra-vars "commit=<commit>"
## and either --extra-vars "pullrequest=<PR #>" or
## --skip-tags pr
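## Example invocation (added note; the inventory and playbook file names are
## placeholders, not part of this change):
##   ansible-playbook -i <inventory> <this-playbook>.yml \
##     --extra-vars "commit=<commit>" --extra-vars "pullrequest=<PR #>"
## or, when not testing a pull request:
##   ansible-playbook -i <inventory> <this-playbook>.yml \
##     --extra-vars "commit=<commit>" --skip-tags pr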
- hosts: all
remote_user: root
vars:
xunit: false
cni_commit: dcf7368eeab15e2affc6256f0bb1e84dd46a34de
tasks:
- name: Update all packages
yum:
name: '*'
state: latest
async: 600
poll: 10
when: (ansible_distribution == 'RedHat' or ansible_distribution == 'CentOS')
ignore_errors: true
- name: Update all packages on Fedora
dnf:
name: '*'
state: latest
async: 600
poll: 10
when: ansible_distribution == 'Fedora'
- name: Make sure we have all required packages
yum:
name: "{{ item }}"
state: latest
with_items:
- wget
- git
- make
- gcc
- tar
- libseccomp-devel
- golang
- glib2-devel
- glibc-static
- container-selinux
- btrfs-progs-devel
- device-mapper-devel
- ostree-devel
- glibc-devel
- gpgme-devel
- libassuan-devel
- libgpg-error-devel
- pkgconfig
- skopeo-containers
- oci-systemd-hook
- oci-register-machine
- oci-umount
async: 600
poll: 10
when: (ansible_distribution == 'RedHat' or ansible_distribution == 'CentOS')
- name: Make sure we have all required packages on Fedora
dnf:
name: "{{ item }}"
state: latest
with_items:
- wget
- git
- make
- gcc
- tar
- libseccomp-devel
- golang
- glib2-devel
- glibc-static
- container-selinux
- btrfs-progs-devel
- device-mapper-devel
- ostree-devel
- glibc-devel
- gpgme-devel
- libassuan-devel
- libgpg-error-devel
- pkgconfig
- skopeo-containers
- oci-systemd-hook
- oci-register-machine
- oci-umount
async: 600
poll: 10
when: ansible_distribution == 'Fedora'
- name: Set up swap to prevent the kernel from firing off the OOM killer
shell: |
truncate -s 8G /root/swap && \
export SWAPDEV=$(losetup --show -f /root/swap | head -1) && \
mkswap $SWAPDEV && \
swapon $SWAPDEV && \
swapon --show
- name: Make testing directories to conform to testing standards
file:
path: "{{ item }}"
state: directory
with_items:
- /root/src/github.com/kubernetes-incubator
- /root/src/github.com/opencontainers
- /opt/cni/bin
- /etc/cni/net.d
- /usr/local/go
- name: install Golang upstream in CentOS
shell: |
curl -fsSL "https://golang.org/dl/go1.8.1.linux-amd64.tar.gz" \
| tar -xzC /usr/local
when: ansible_distribution == 'CentOS'
- name: Set custom Golang path for CentOS
lineinfile:
dest: /root/.bashrc
line: 'export PATH=/usr/local/go/bin:$PATH'
insertafter: 'EOF'
regexp: 'export PATH=/usr/local/go/bin:$PATH'
state: present
when: ansible_distribution == 'CentOS'
- name: set sysctl vm.overcommit_memory=1 for CentOS
shell: |
sysctl -w vm.overcommit_memory=1
when: ansible_distribution == 'CentOS'
- name: disable selinux on CentOS :(
shell: |
setenforce 0
when: ansible_distribution == 'CentOS'
- name: git clone bats repo
git:
repo: https://github.com/sstephenson/bats.git
dest: /root/src/bats
async: 600
poll: 10
- name: Fetch the xunit feature PR for bats
shell: "git fetch origin +refs/pull/161/head:refs/remotes/origin/pr/161"
args:
chdir: /root/src/bats
async: 600
poll: 10
when: xunit
- name: Git checkout the xunit PR for bats
shell: "git checkout origin/pr/161"
args:
chdir: /root/src/bats
async: 600
poll: 10
when: xunit
- name: git clone crictl repo
git:
repo: https://github.com/kubernetes-incubator/cri-tools
dest: /root/src/github.com/kubernetes-incubator/cri-tools
version: 16e6fe4d7199c5689db4630a9330e6a8a12cecd1
async: 600
poll: 10
- name: git clone runc repo
git:
repo: https://github.com/opencontainers/runc
dest: /root/src/github.com/opencontainers/runc
async: 600
poll: 10
- name: git clone cri-o repo
git:
repo: https://github.com/kubernetes-incubator/cri-o
dest: /root/src/github.com/kubernetes-incubator/cri-o
async: 600
poll: 10
- name: git clone cni repo
git:
repo: https://github.com/containernetworking/plugins
dest: /root/src/github.com/containernetworking/plugins
version: "{{ cni_commit }}"
- name: Git fetch the PR
shell: "git fetch origin +refs/pull/{{ pullrequest }}/head:refs/remotes/origin/pr/{{ pullrequest }}"
args:
chdir: /root/src/github.com/kubernetes-incubator/cri-o
tags:
- pr
async: 600
poll: 10
- name: Git checkout the commit into working branch
shell: "git checkout {{ commit }}"
args:
chdir: /root/src/github.com/kubernetes-incubator/cri-o
async: 600
poll: 10
- name: Install bats
command: bats/install.sh /usr/local
args:
chdir: /root/src
- name: Add go testing dir to bashrc files
lineinfile:
dest: /root/.bashrc
line: 'export GOPATH=/root'
insertafter: 'EOF'
regexp: 'export GOPATH=/root'
state: present
- name: Source the bashrc file
shell: source /root/.bashrc
- name: Build cni networking
shell: ./build.sh
args:
chdir: /root/src/github.com/containernetworking/plugins
- name: cp bin to cni bin dir
shell: cp /root/src/github.com/containernetworking/plugins/bin/* /opt/cni/bin
- name: curl crio bridge conf file for cni networking
get_url:
url: https://raw.githubusercontent.com/kubernetes-incubator/cri-o/{{ commit }}/contrib/cni/10-crio-bridge.conf
dest: /etc/cni/net.d/10-crio-bridge.conf
- name: curl loopback conf for cni networking
get_url:
url: https://raw.githubusercontent.com/kubernetes-incubator/cri-o/{{ commit }}/contrib/cni/99-loopback.conf
dest: /etc/cni/net.d/99-loopback.conf
- name: make clean
make:
target: clean
chdir: /root/src/github.com/opencontainers/runc
async: 600
poll: 10
- name: make crictl
shell: |
go install github.com/kubernetes-incubator/cri-tools/cmd/crictl && \
cp $GOPATH/bin/crictl /usr/bin/crictl
args:
chdir: /root/src/github.com/kubernetes-incubator/cri-o/
- name: make runc
make:
params: BUILDTAGS="seccomp selinux"
chdir: /root/src/github.com/opencontainers/runc
async: 600
poll: 10
- name: install runc
make:
target: install
chdir: /root/src/github.com/opencontainers/runc
async: 600
poll: 10
- name: Change test_runner.sh to use bats xunit output
lineinfile:
dest: /root/src/github.com/kubernetes-incubator/cri-o/test/test_runner.sh
line: 'execute time bats --tap --junit $TESTS'
regexp: 'execute time bats --tap \$TESTS'
state: present
when: xunit
- name: git clone cni test repo
git:
repo: https://github.com/runcom/plugins
dest: /root/src/github.com/containernetworking/plugins
version: "custom-bridge"
force: yes
- name: Build cni test networking
shell: ./build.sh
args:
chdir: /root/src/github.com/containernetworking/plugins
- name: cp custom-bridge to opt bin
shell: cp /root/src/github.com/containernetworking/plugins/bin/bridge /opt/cni/bin/bridge-custom
# k8s builds with go1.8.x; RHEL and Fedora don't ship it yet
- name: install Golang upstream in Fedora/RHEL
shell: |
curl -fsSL "https://golang.org/dl/go1.8.3.linux-amd64.tar.gz" \
| tar -xzC /usr/local
when: ansible_distribution == 'Fedora' or ansible_distribution == 'RedHat'
- name: Set custom Golang path for Fedora/RHEL
lineinfile:
dest: /root/.bashrc
line: 'export PATH=/usr/local/go/bin:$PATH'
insertafter: 'EOF'
regexp: 'export PATH=/usr/local/go/bin:$PATH'
state: present
when: ansible_distribution == 'Fedora' or ansible_distribution == 'RedHat'
- name: run integration tests RHEL
shell: 'CGROUP_MANAGER=cgroupfs STORAGE_OPTS="--storage-driver=overlay2 --storage-opt overlay2.override_kernel_check=1" make localintegration > testout.txt 2>&1'
args:
chdir: /root/src/github.com/kubernetes-incubator/cri-o
async: 3600
poll: 10
ignore_errors: yes
when: ansible_distribution == 'RedHat' or ansible_distribution == 'CentOS'
- name: run integration tests RHEL with xunit results
shell: 'CGROUP_MANAGER=cgroupfs STORAGE_OPTS="--storage-driver=overlay2 --storage-opt overlay2.override_kernel_check=1" make localintegration'
args:
chdir: /root/src/github.com/kubernetes-incubator/cri-o
async: 3600
poll: 10
ignore_errors: yes
when: (ansible_distribution == 'RedHat' or ansible_distribution == 'CentOS') and xunit
- name: run integration tests Fedora
shell: 'CGROUP_MANAGER=cgroupfs STORAGE_OPTS="--storage-driver=overlay2" make localintegration > testout.txt 2>&1'
args:
chdir: /root/src/github.com/kubernetes-incubator/cri-o
async: 3600
poll: 10
ignore_errors: yes
when: ansible_distribution == 'Fedora'
- name: run integration tests Fedora with xunit results
shell: 'CGROUP_MANAGER=cgroupfs STORAGE_OPTS="--storage-driver=overlay2" make localintegration'
args:
chdir: /root/src/github.com/kubernetes-incubator/cri-o
async: 3600
poll: 10
ignore_errors: yes
when: (ansible_distribution == 'Fedora' and xunit)
- name: Make testing output directory
file:
path: /root/src/github.com/kubernetes-incubator/cri-o/reports
state: directory
ignore_errors: yes
when: xunit
- name: Move all xunit files into one dir to scp
shell: 'mv /root/src/github.com/kubernetes-incubator/cri-o/test/TestReport-bats*.xml /root/src/github.com/kubernetes-incubator/cri-o/reports/'
when: xunit
# XXX: kube tests from now on
- name: git clone k8s repo
git:
repo: https://github.com/runcom/kubernetes
dest: /root/src/k8s.io/kubernetes
# based on kube upstream v1.7.4
version: cri-o-node-e2e-patched
force: yes
async: 600
poll: 10
- name: make and install CRI-O
shell: |
make install.tools && \
make && \
make install && \
make install.systemd && \
make install.config
args:
chdir: /root/src/github.com/kubernetes-incubator/cri-o
async: 600
poll: 10
- name: link runc
file: src=/usr/local/sbin/runc dest=/usr/bin/runc state=link
- name: run with overlay2
replace:
regexp: 'storage_driver = ""'
replace: 'storage_driver = "overlay2"'
name: /etc/crio/crio.conf
backup: yes
- name: run with systemd cgroup manager
replace:
regexp: 'cgroup_manager = "cgroupfs"'
replace: 'cgroup_manager = "systemd"'
name: /etc/crio/crio.conf
backup: yes
- name: add docker.io default registry
lineinfile:
dest: /etc/crio/crio.conf
line: '"docker.io"'
insertafter: 'registries = \['
regexp: 'docker\.io'
state: present
- name: add overlay2 storage opts on RHEL/CentOS
lineinfile:
dest: /etc/crio/crio.conf
line: '"overlay2.override_kernel_check=1"'
insertafter: 'storage_option = \['
regexp: 'overlay2\.override_kernel_check=1'
state: present
when: ansible_distribution == 'RedHat' or ansible_distribution == 'CentOS'
- name: enable and start CRI-O
systemd:
name: crio
state: started
enabled: yes
daemon_reload: yes
async: 600
poll: 10
# see https://github.com/kubernetes-incubator/cri-o/issues/528
- name: disable selinux for k8s conformance tests
shell: |
setenforce 0
async: 600
poll: 10
- name: Go get the go-bindata file
shell: go get -u github.com/jteeuwen/go-bindata/go-bindata
args:
chdir: /root/src/k8s.io/kubernetes
async: 600
poll: 10
- name: Install etcd
command: hack/install-etcd.sh
args:
chdir: /root/src/k8s.io/kubernetes
async: 600
poll: 10
- name: Install necessary github go packages
shell: go get github.com/onsi/ginkgo/ginkgo ; go get github.com/onsi/gomega ; go get -u github.com/cloudflare/cfssl/cmd/...
args:
chdir: /root/src/k8s.io/kubernetes
async: 600
poll: 10
- name: Add path to bashrc files
lineinfile:
dest: /root/.bashrc
line: 'export PATH=$PATH:/root/src/k8s.io/kubernetes/third_party/etcd'
insertafter: 'EOF'
regexp: 'export PATH=\$PATH:/root/src/k8s.io/kubernetes/third_party/etcd'
state: present
- name: gather correct hostname
shell: |
cat /etc/hostname
register: hostname
- name: inject hostname into /etc/hosts
lineinfile:
dest: /etc/hosts
line: '127.0.0.1 {{ hostname.stdout }}'
insertafter: 'EOF'
regexp: '127\.0\.0\.1\s+{{ hostname.stdout }}'
state: present
- name: Flush the iptables
command: iptables -F
async: 600
poll: 10
- name: run k8s tests
shell: |
make test-e2e-node PARALLELISM=1 RUNTIME=remote CONTAINER_RUNTIME_ENDPOINT=/var/run/crio.sock IMAGE_SERVICE_ENDPOINT=/var/run/crio.sock TEST_ARGS='--prepull-images=true --kubelet-flags="--cgroup-driver=systemd"' FOCUS="\[Conformance\]" > node-e2e.log 2>&1
args:
chdir: /root/src/k8s.io/kubernetes
async: 7200
poll: 10
ignore_errors: true
# XXX: tests on RHEL/CentOS are unreliable for now
when: ansible_distribution == 'Fedora' or ansible_distribution == 'RedHat'

View file

@ -1,21 +0,0 @@
# Fedora and RHEL Integration and End-to-End Tests
This directory contains playbooks to set up for and run the integration and
end-to-end tests for CRI-O on RHEL and Fedora hosts. Two entrypoints exist:
- `main.yml`: sets up the machine and runs tests
- `results.yml`: gathers test output to `/tmp/artifacts`
When running `main.yml`, three tags are present (an example invocation follows the list):
- `setup`: run all tasks to set up the system for testing
- `e2e`: build CRI-O from source and run Kubernetes node E2Es
- `integration`: build CRI-O from source and run the local integration suite
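A hedged example of driving these playbooks (the inventory file name `hosts` is an assumption, not part of this repo):
```
# set up the host and run the local integration suite
ansible-playbook -i hosts main.yml --tags "setup,integration"
# collect the test output afterwards
ansible-playbook -i hosts results.yml
```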
The playbooks assume the following things about your system:
- on RHEL, the server and extras repos are configured and certs are present
- `ansible` is installed and the host is boot-strapped to allow `ansible` to run against it
- the `$GOPATH` is set and present for all shells (*e.g.* written in `/etc/environment`)
- CRI-O is checked out to the correct state at `${GOPATH}/src/github.com/kubernetes-incubator/cri-o`
- the user running the playbook has access to passwordless `sudo`

View file

@ -1,359 +0,0 @@
# config file for ansible -- http://ansible.com/
# ==============================================
# nearly all parameters can be overridden in ansible-playbook
# or with command line flags. ansible will read ANSIBLE_CONFIG,
# ansible.cfg in the current working directory, .ansible.cfg in
# the home directory or /etc/ansible/ansible.cfg, whichever it
# finds first
[defaults]
# some basic default values...
#inventory = inventory
#library = /usr/share/my_modules/
#remote_tmp = $HOME/.ansible/tmp
#local_tmp = .ansible/tmp
#forks = 5
forks = 10
#poll_interval = 15
#sudo_user = root
#ask_sudo_pass = True
ask_sudo_pass = False
#ask_pass = True
ask_pass = False
#transport = smart
#remote_port = 22
#module_lang = C
#module_set_locale = True
# plays will gather facts by default, which contain information about
# the remote system.
#
# smart - gather by default, but don't regather if already gathered
# implicit - gather by default, turn off with gather_facts: False
# explicit - do not gather by default, must say gather_facts: True
#gathering = implicit
gathering = smart
# by default retrieve all facts subsets
# all - gather all subsets
# network - gather min and network facts
# hardware - gather hardware facts (longest facts to retrieve)
# virtual - gather min and virtual facts
# facter - import facts from facter
# ohai - import facts from ohai
# You can combine them using comma (ex: network,virtual)
# You can negate them using ! (ex: !hardware,!facter,!ohai)
# A minimal set of facts is always gathered.
gather_subset = network
# additional paths to search for roles in, colon separated
# N/B: This depends on how ansible is called
#roles_path = $WORKSPACE/kommandir_workspace/roles
# uncomment this to disable SSH key host checking
#host_key_checking = False
host_key_checking = False
# change the default callback
#stdout_callback = skippy
# enable additional callbacks
#callback_whitelist = timer, mail
# Determine whether includes in tasks and handlers are "static" by
# default. As of 2.0, includes are dynamic by default. Setting these
# values to True will make includes behave more like they did in the
# 1.x versions.
task_includes_static = True
handler_includes_static = True
# change this for alternative sudo implementations
#sudo_exe = sudo
# What flags to pass to sudo
# WARNING: leaving out the defaults might create unexpected behaviours
#sudo_flags = -H -S -n
# SSH timeout
#timeout = 10
# default user to use for playbooks if user is not specified
# (/usr/bin/ansible will use current user as default)
#remote_user = root
remote_user = root
# logging is off by default unless this path is defined
# if so defined, consider logrotate
log_path = $ARTIFACTS/main.log
# default module name for /usr/bin/ansible
#module_name = command
# use this shell for commands executed under sudo
# you may need to change this to bin/bash in rare instances
# if sudo is constrained
# executable = /bin/sh
# if inventory variables overlap, does the higher precedence one win
# or are hash values merged together? The default is 'replace' but
# this can also be set to 'merge'.
hash_behaviour = replace
# by default, variables from roles will be visible in the global variable
# scope. To prevent this, the following option can be enabled, and only
# tasks and handlers within the role will see the variables there
private_role_vars = False
# list any Jinja2 extensions to enable here:
#jinja2_extensions = jinja2.ext.do,jinja2.ext.i18n
# if set, always use this private key file for authentication, same as
# if passing --private-key to ansible or ansible-playbook
#private_key_file = /path/to/file
# If set, configures the path to the Vault password file as an alternative to
# specifying --vault-password-file on the command line.
#vault_password_file = /path/to/vault_password_file
# format of the string {{ ansible_managed }} available within Jinja2
# templates; it warns users editing template files that their changes will be
# replaced, substituting {file}, {host}, {uid} and strftime codes with proper values.
#ansible_managed = Ansible managed: {file} modified on %Y-%m-%d %H:%M:%S by {uid} on {host}
# This short version is better used in templates as it won't flag the file as changed every run.
#ansible_managed = Ansible managed: {file} on {host}
# by default, ansible-playbook will display "Skipping [host]" if it determines a task
# should not be run on a host. Set this to "False" if you don't want to see these "Skipping"
# messages. NOTE: the task header will still be shown regardless of whether or not the
# task is skipped.
#display_skipped_hosts = True
display_skipped_hosts = False
# by default, if a task in a playbook does not include a name: field then
# ansible-playbook will construct a header that includes the task's action but
# not the task's args. This is a security feature because ansible cannot know
# if the *module* considers an argument to be no_log at the time that the
# header is printed. If your environment doesn't have a problem securing
# stdout from ansible-playbook (or you have manually specified no_log in your
# playbook on all of the tasks where you have secret information) then you can
# safely set this to True to get more informative messages.
display_args_to_stdout = False
# by default (as of 1.3), Ansible will raise errors when attempting to dereference
# Jinja2 variables that are not set in templates or action lines. Uncomment this line
# to revert the behavior to pre-1.3.
#error_on_undefined_vars = False
# by default (as of 1.6), Ansible may display warnings based on the configuration of the
# system running ansible itself. This may include warnings about 3rd party packages or
# other conditions that should be resolved if possible.
# to disable these warnings, set the following value to False:
system_warnings = False
# by default (as of 1.4), Ansible may display deprecation warnings for language
# features that should no longer be used and will be removed in future versions.
# to disable these warnings, set the following value to False:
deprecation_warnings = False
# (as of 1.8), Ansible can optionally warn when usage of the shell and
# command module appear to be simplified by using a default Ansible module
# instead. These warnings can be silenced by adjusting the following
# setting or adding warn=yes or warn=no to the end of the command line
# parameter string. This will for example suggest using the git module
# instead of shelling out to the git command.
command_warnings = False
# set plugin path directories here, separate with colons
#action_plugins = /usr/share/ansible/plugins/action
#callback_plugins = /usr/share/ansible/plugins/callback
#connection_plugins = /usr/share/ansible/plugins/connection
#lookup_plugins = /usr/share/ansible/plugins/lookup
#vars_plugins = /usr/share/ansible/plugins/vars
#filter_plugins = /usr/share/ansible/plugins/filter
#test_plugins = /usr/share/ansible/plugins/test
#strategy_plugins = /usr/share/ansible/plugins/strategy
# Most callbacks shipped with Ansible are disabled by default
# and need to be whitelisted in your ansible.cfg file in order to function.
callback_whitelist = default
# by default callbacks are not loaded for /bin/ansible, enable this if you
# want, for example, a notification or logging callback to also apply to
# /bin/ansible runs
#bin_ansible_callbacks = False
# don't like cows? that's unfortunate.
# set to 1 if you don't want cowsay support or export ANSIBLE_NOCOWS=1
#nocows = 1
# set which cowsay stencil you'd like to use by default. When set to 'random',
# a random stencil will be selected for each task. The selection will be filtered
# against the `cow_whitelist` option below.
#cow_selection = default
#cow_selection = random
# when using the 'random' option for cowsay, stencils will be restricted to this list.
# it should be formatted as a comma-separated list with no spaces between names.
# NOTE: line continuations here are for formatting purposes only, as the INI parser
# in python does not support them.
#cow_whitelist=bud-frogs,bunny,cheese,daemon,default,dragon,elephant-in-snake,elephant,eyes,\
# hellokitty,kitty,luke-koala,meow,milk,moofasa,moose,ren,sheep,small,stegosaurus,\
# stimpy,supermilker,three-eyes,turkey,turtle,tux,udder,vader-koala,vader,www
# don't like colors either?
# set to 1 if you don't want colors, or export ANSIBLE_NOCOLOR=1
nocolor = 0
# if set to a persistent type (not 'memory', for example 'redis') fact values
# from previous runs in Ansible will be stored. This may be useful when
# wanting to use, for example, IP information from one group of servers
# without having to talk to them in the same playbook run to get their
# current IP information.
#fact_caching = memory
# retry files
# When a playbook fails by default a .retry file will be created in ~/
# You can disable this feature by setting retry_files_enabled to False
# and you can change the location of the files by setting retry_files_save_path
#retry_files_enabled = False
retry_files_enabled = False
# squash actions
# Ansible can optimise actions that call modules with list parameters
# when looping. Instead of calling the module once per with_ item, the
# module is called once with all items at once. Currently this only works
# under limited circumstances, and only with parameters named 'name'.
squash_actions = apk,apt,dnf,package,pacman,pkgng,yum,zypper
# prevents logging of task data, off by default
#no_log = False
# prevents logging of tasks, but only on the targets, data is still logged on the master/controller
no_target_syslog = True
# controls whether Ansible will raise an error or warning if a task has no
# choice but to create world readable temporary files to execute a module on
# the remote machine. This option is False by default for security. Users may
# turn this on to have behaviour more like Ansible prior to 2.1.x. See
# https://docs.ansible.com/ansible/become.html#becoming-an-unprivileged-user
# for more secure ways to fix this than enabling this option.
#allow_world_readable_tmpfiles = False
# controls the compression level of variables sent to
# worker processes. At the default of 0, no compression
# is used. This value must be an integer from 0 to 9.
#var_compression_level = 9
# controls what compression method is used for new-style ansible modules when
# they are sent to the remote system. The compression types depend on having
# support compiled into both the controller's python and the client's python.
# The names should match with the python Zipfile compression types:
# * ZIP_STORED (no compression. available everywhere)
# * ZIP_DEFLATED (uses zlib, the default)
# These values may be set per host via the ansible_module_compression inventory
# variable
#module_compression = 'ZIP_DEFLATED'
# This controls the cutoff point (in bytes) on --diff for files
# set to 0 for unlimited (RAM may suffer!).
#max_diff_size = 1048576
[privilege_escalation]
#become=True
#become_method=sudo
#become_user=root
become_user=root
#become_ask_pass=False
[paramiko_connection]
# uncomment this line to cause the paramiko connection plugin to not record new host
# keys encountered. Increases performance on new host additions. Setting works independently of the
# host key checking setting above.
#record_host_keys=False
# by default, Ansible requests a pseudo-terminal for commands executed under sudo. Uncomment this
# line to disable this behaviour.
#pty=False
[ssh_connection]
# ssh arguments to use
# Leaving off ControlPersist will result in poor performance, so use
# paramiko on older platforms rather than removing it
ssh_args = -o ControlMaster=auto -o ControlPersist=60s -o UserKnownHostsFile=/dev/null -o PreferredAuthentications=publickey -o ConnectTimeout=13
# The path to use for the ControlPath sockets. This defaults to
# "%(directory)s/ansible-ssh-%%h-%%p-%%r", however on some systems with
# very long hostnames or very long path names (caused by long user names or
# deeply nested home directories) this can exceed the character limit on
# file socket names (108 characters for most platforms). In that case, you
# may wish to shorten the string below.
#
# Example:
# control_path = %(directory)s/%%h-%%r
#control_path = %(directory)s/ansible-ssh-%%h-%%p-%%r
# Enabling pipelining reduces the number of SSH operations required to
# execute a module on the remote server. This can result in a significant
# performance improvement when enabled, however when using "sudo:" you must
# first disable 'requiretty' in /etc/sudoers
#
# By default, this option is disabled to preserve compatibility with
# sudoers configurations that have requiretty (the default on many distros).
#
#pipelining = False
pipelining=True
# if True, make ansible use scp if the connection type is ssh
# (default is sftp)
#scp_if_ssh = True
# if False, sftp will not use batch mode to transfer files. This may cause some
# types of file transfer failures impossible to catch however, and should
# only be disabled if your sftp version has problems with batch mode
#sftp_batch_mode = False
[accelerate]
#accelerate_port = 5099
#accelerate_timeout = 30
#accelerate_connect_timeout = 5.0
# The daemon timeout is measured in minutes. This time is measured
# from the last activity to the accelerate daemon.
#accelerate_daemon_timeout = 30
# If set to yes, accelerate_multi_key will allow multiple
# private keys to be uploaded to it, though each user must
# have access to the system via SSH to add a new key. The default
# is "no".
#accelerate_multi_key = yes
[selinux]
# file systems that require special treatment when dealing with security context
# the default behaviour that copies the existing context or uses the user default
# needs to be changed to use the file system dependent context.
#special_context_filesystems=nfs,vboxsf,fuse,ramfs
# Set this to yes to allow libvirt_lxc connections to work without SELinux.
#libvirt_lxc_noseclabel = yes
[colors]
#highlight = white
#verbose = blue
#warn = bright purple
#error = red
#debug = dark gray
#deprecate = purple
#skip = cyan
#unreachable = red
#ok = green
#changed = yellow
#diff_add = green
#diff_remove = red
#diff_lines = cyan

View file

@ -1,17 +0,0 @@
---
- name: clone bats source repo
git:
repo: "https://github.com/sstephenson/bats.git"
dest: "{{ ansible_env.GOPATH }}/src/github.com/sstephenson/bats"
- name: install bats
command: "./install.sh /usr/local"
args:
chdir: "{{ ansible_env.GOPATH }}/src/github.com/sstephenson/bats"
- name: link bats
file:
src: /usr/local/bin/bats
dest: /usr/bin/bats
state: link

View file

@ -1,79 +0,0 @@
---
- name: stat the expected cri-o directory
stat:
path: "{{ ansible_env.GOPATH }}/src/github.com/kubernetes-incubator/cri-o"
register: dir_stat
- name: expect cri-o to be cloned already
fail:
msg: "Expected cri-o to be cloned at {{ ansible_env.GOPATH }}/src/github.com/kubernetes-incubator/cri-o but it wasn't!"
when: not dir_stat.stat.exists
- name: install cri-o tools
make:
target: install.tools
chdir: "{{ ansible_env.GOPATH }}/src/github.com/kubernetes-incubator/cri-o"
- name: build cri-o
make:
chdir: "{{ ansible_env.GOPATH }}/src/github.com/kubernetes-incubator/cri-o"
- name: install cri-o
make:
target: install
chdir: "{{ ansible_env.GOPATH }}/src/github.com/kubernetes-incubator/cri-o"
- name: install cri-o systemd files
make:
target: install.systemd
chdir: "{{ ansible_env.GOPATH }}/src/github.com/kubernetes-incubator/cri-o"
- name: install cri-o config
make:
target: install.config
chdir: "{{ ansible_env.GOPATH }}/src/github.com/kubernetes-incubator/cri-o"
- name: install configs
copy:
src: "{{ ansible_env.GOPATH }}/src/github.com/kubernetes-incubator/cri-o/{{ item.src }}"
dest: "{{ item.dest }}"
remote_src: yes
with_items:
- src: contrib/cni/10-crio-bridge.conf
dest: /etc/cni/net.d/10-crio-bridge.conf
- src: contrib/cni/99-loopback.conf
dest: /etc/cni/net.d/99-loopback.conf
- src: test/redhat_sigstore.yaml
dest: /etc/containers/registries.d/registry.access.redhat.com.yaml
- name: run with overlay
replace:
regexp: 'storage_driver = ""'
replace: 'storage_driver = "overlay"'
name: /etc/crio/crio.conf
backup: yes
- name: run with systemd cgroup manager
replace:
regexp: 'cgroup_manager = "cgroupfs"'
replace: 'cgroup_manager = "systemd"'
name: /etc/crio/crio.conf
backup: yes
- name: add docker.io default registry
lineinfile:
dest: /etc/crio/crio.conf
line: '"docker.io"'
insertafter: 'registries = \['
regexp: 'docker\.io'
state: present
- name: add overlay storage opts on RHEL/CentOS
lineinfile:
dest: /etc/crio/crio.conf
line: '"overlay.override_kernel_check=1"'
insertafter: 'storage_option = \['
regexp: 'overlay\.override_kernel_check=1'
state: present
when: ansible_distribution == 'RedHat' or ansible_distribution == 'CentOS'

View file

@ -1,26 +0,0 @@
---
- name: clone cri-tools source repo
git:
repo: "https://github.com/kubernetes-incubator/cri-tools.git"
dest: "{{ ansible_env.GOPATH }}/src/github.com/kubernetes-incubator/cri-tools"
version: "{{ cri_tools_git_version }}"
force: "{{ force_clone | default(False) | bool}}"
- name: install crictl
command: "/usr/bin/go install github.com/kubernetes-incubator/cri-tools/cmd/crictl"
- name: install critest
command: "/usr/bin/go install github.com/kubernetes-incubator/cri-tools/cmd/critest"
- name: link crictl
file:
src: "{{ ansible_env.GOPATH }}/bin/crictl"
dest: /usr/bin/crictl
state: link
- name: link critest
file:
src: "{{ ansible_env.GOPATH }}/bin/critest"
dest: /usr/bin/critest
state: link

View file

@ -1,67 +0,0 @@
---
- name: clone kubernetes source repo
git:
repo: "https://github.com/{{ k8s_github_fork }}/kubernetes.git"
dest: "{{ ansible_env.GOPATH }}/src/k8s.io/kubernetes"
# based on kube v1.9.0-alpha.2, update as needed
version: "{{ k8s_git_version }}"
force: "{{ force_clone | default(False) | bool}}"
- name: install etcd
command: "hack/install-etcd.sh"
args:
chdir: "{{ ansible_env.GOPATH }}/src/k8s.io/kubernetes"
- name: build kubernetes
make:
chdir: "{{ ansible_env.GOPATH }}/src/k8s.io/kubernetes"
- name: Add custom cluster service file for the e2e testing
copy:
dest: /etc/systemd/system/customcluster.service
content: |
[Unit]
After=network-online.target
Wants=network-online.target
[Service]
WorkingDirectory={{ ansible_env.GOPATH }}/src/k8s.io/kubernetes
ExecStart=/usr/local/bin/createcluster.sh
User=root
[Install]
WantedBy=multi-user.target
- name: Add create cluster background script for e2e testing
copy:
dest: /usr/local/bin/createcluster.sh
content: |
#!/bin/bash
export PATH=/usr/local/go/bin:/usr/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/root/bin:{{ ansible_env.GOPATH }}/bin:{{ ansible_env.GOPATH }}/src/k8s.io/kubernetes/third_party/etcd:{{ ansible_env.GOPATH }}/src/k8s.io/kubernetes/_output/local/bin/linux/amd64/
export CONTAINER_RUNTIME=remote
export CGROUP_DRIVER=systemd
export CONTAINER_RUNTIME_ENDPOINT='{{ crio_socket }} --runtime-request-timeout=5m'
export ALLOW_SECURITY_CONTEXT=","
export ALLOW_PRIVILEGED=1
export DNS_SERVER_IP={{ ansible_default_ipv4.address }}
export API_HOST={{ ansible_default_ipv4.address }}
export API_HOST_IP={{ ansible_default_ipv4.address }}
export KUBE_ENABLE_CLUSTER_DNS=true
export ENABLE_HOSTPATH_PROVISIONER=true
export KUBE_ENABLE_CLUSTER_DASHBOARD=true
./hack/local-up-cluster.sh
mode: "u=rwx,g=rwx,o=x"
- name: Set kubernetes_provider to be local
lineinfile:
dest: /etc/environment
line: 'KUBERNETES_PROVIDER=local'
regexp: 'KUBERNETES_PROVIDER='
state: present
- name: Set KUBECONFIG
lineinfile:
dest: /etc/environment
line: 'KUBECONFIG=/var/run/kubernetes/admin.kubeconfig'
regexp: 'KUBECONFIG='
state: present

View file

@ -1,50 +0,0 @@
---
- name: clone plugins source repo
git:
repo: "https://github.com/containernetworking/plugins.git"
dest: "{{ ansible_env.GOPATH }}/src/github.com/containernetworking/plugins"
version: "dcf7368eeab15e2affc6256f0bb1e84dd46a34de"
- name: build plugins
command: "./build.sh"
args:
chdir: "{{ ansible_env.GOPATH }}/src/github.com/containernetworking/plugins"
- name: install plugins
copy:
src: "{{ ansible_env.GOPATH }}/src/github.com/containernetworking/plugins/bin/{{ item }}"
dest: "/opt/cni/bin"
mode: "o=rwx,g=rx,o=rx"
remote_src: yes
with_items:
- bridge
- dhcp
- flannel
- host-local
- ipvlan
- loopback
- macvlan
- ptp
- sample
- tuning
- vlan
- name: clone runcom plugins source repo
git:
repo: "https://github.com/runcom/plugins.git"
dest: "{{ ansible_env.GOPATH }}/src/github.com/containernetworking/plugins"
version: "custom-bridge"
force: yes
- name: build plugins
command: "./build.sh"
args:
chdir: "{{ ansible_env.GOPATH }}/src/github.com/containernetworking/plugins"
- name: install custom bridge
copy:
src: "{{ ansible_env.GOPATH }}/src/github.com/containernetworking/plugins/bin/bridge"
dest: "/opt/cni/bin/bridge-custom"
mode: "o=rwx,g=rx,o=rx"
remote_src: yes

View file

@ -1,23 +0,0 @@
---
- name: clone runc source repo
git:
repo: "https://github.com/opencontainers/runc.git"
dest: "{{ ansible_env.GOPATH }}/src/github.com/opencontainers/runc"
version: "c6e4a1ebeb1a72b529c6f1b6ee2b1ae5b868b14f"
- name: build runc
make:
params: BUILDTAGS="seccomp selinux"
chdir: "{{ ansible_env.GOPATH }}/src/github.com/opencontainers/runc"
- name: install runc
make:
target: "install"
chdir: "{{ ansible_env.GOPATH }}/src/github.com/opencontainers/runc"
- name: link runc
file:
src: /usr/local/sbin/runc
dest: /usr/bin/runc
state: link

View file

@ -1,156 +0,0 @@
'''Plugin to override the default output logic.'''
# upstream: https://gist.github.com/cliffano/9868180
# This program is free software: you can redistribute it and/or modify
# it under the terms of the GNU General Public License as published by
# the Free Software Foundation, either version 3 of the License, or
# (at your option) any later version.
#
# This program is distributed in the hope that it will be useful,
# but WITHOUT ANY WARRANTY; without even the implied warranty of
# MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
# GNU General Public License for more details.
#
# You should have received a copy of the GNU General Public License
# along with this program. If not, see <http://www.gnu.org/licenses/>.
# For some reason this has to be done
import imp
import os
# constants and color helpers used by the v2_runner_on_* and
# v2_playbook_on_stats overrides below (C.COLOR_*, colorize(), hostcolor())
from ansible import constants as C
from ansible.utils.color import colorize, hostcolor
ANSIBLE_PATH = imp.find_module('ansible')[1]
DEFAULT_PATH = os.path.join(ANSIBLE_PATH, 'plugins/callback/default.py')
DEFAULT_MODULE = imp.load_source(
'ansible.plugins.callback.default',
DEFAULT_PATH
)
try:
from ansible.plugins.callback import CallbackBase
BASECLASS = CallbackBase
except ImportError: # < ansible 2.1
BASECLASS = DEFAULT_MODULE.CallbackModule
class CallbackModule(DEFAULT_MODULE.CallbackModule): # pylint: disable=too-few-public-methods,no-init
'''
Override for the default callback module.
Render std err/out outside of the rest of the result which it prints with
indentation.
'''
CALLBACK_VERSION = 2.0
CALLBACK_TYPE = 'stdout'
CALLBACK_NAME = 'default'
def __init__(self, *args, **kwargs):
# pylint: disable=non-parent-init-called
BASECLASS.__init__(self, *args, **kwargs)
self.failed_task = []
self.result_file = os.environ.get('AHT_RESULT_FILE')
def _dump_results(self, result):
'''Return the text to output for a result.'''
result['_ansible_verbose_always'] = True
save = {}
for key in ['stdout', 'stdout_lines', 'stderr', 'stderr_lines', 'msg']:
if key in result:
save[key] = result.pop(key)
output = BASECLASS._dump_results(self, result) # pylint: disable=protected-access
for key in ['stdout', 'stderr', 'msg']:
if key in save and save[key]:
output += '\n\n%s:\n---\n%s\n---' % (key.upper(), save[key])
for key, value in save.items():
result[key] = value
return output
def v2_runner_on_unreachable(self, result):
self.failed_task = result
if self._play.strategy == 'free' and self._last_task_banner != result._task._uuid:
self._print_task_banner(result._task)
delegated_vars = result._result.get('_ansible_delegated_vars', None)
if delegated_vars:
self._display.display("fatal: [%s -> %s]: UNREACHABLE! => %s" % (result._host.get_name(), delegated_vars['ansible_host'], self._dump_results(result._result)), color=C.COLOR_UNREACHABLE)
else:
self._display.display("fatal: [%s]: UNREACHABLE! => %s" % (result._host.get_name(), self._dump_results(result._result)), color=C.COLOR_UNREACHABLE)
def v2_runner_on_failed(self, result, ignore_errors=False):
if ignore_errors is not True:
# Sets environment variable for test failures for use in playbooks.
# Handlers tasks can conditionalize themselves using this variable
# to run only on failure.
os.environ["AHT_FAILURE"] = "1"
# Save last failure
self.failed_task = result
if self._play.strategy == 'free' and self._last_task_banner != result._task._uuid:
self._print_task_banner(result._task)
delegated_vars = result._result.get('_ansible_delegated_vars', None)
if 'exception' in result._result:
if self._display.verbosity < 3:
# extract just the actual error message from the exception text
error = result._result['exception'].strip().split('\n')[-1]
msg = "An exception occurred during task execution. To see the full traceback, use -vvv. The error was: %s" % error
else:
msg = "An exception occurred during task execution. The full traceback is:\n" + result._result['exception']
self._display.display(msg, color=C.COLOR_ERROR)
if result._task.loop and 'results' in result._result:
self._process_items(result)
else:
if delegated_vars:
self._display.display("fatal: [%s -> %s]: FAILED! => %s" % (result._host.get_name(), delegated_vars['ansible_host'], self._dump_results(result._result)), color=C.COLOR_ERROR)
else:
self._display.display("fatal: [%s]: FAILED! => %s" % (result._host.get_name(), self._dump_results(result._result)), color=C.COLOR_ERROR)
if ignore_errors:
self._display.display("...ignoring", color=C.COLOR_SKIP)
def v2_playbook_on_stats(self, stats):
self._display.banner("PLAY RECAP")
hosts = sorted(stats.processed.keys())
for h in hosts:
t = stats.summarize(h)
self._display.display(u"%s : %s %s %s %s" % (
hostcolor(h, t),
colorize(u'ok', t['ok'], C.COLOR_OK),
colorize(u'changed', t['changed'], C.COLOR_CHANGED),
colorize(u'unreachable', t['unreachable'], C.COLOR_UNREACHABLE),
colorize(u'failed', t['failures'], C.COLOR_ERROR)),
screen_only=True
)
self._display.display(u"%s : %s %s %s %s" % (
hostcolor(h, t, False),
colorize(u'ok', t['ok'], None),
colorize(u'changed', t['changed'], None),
colorize(u'unreachable', t['unreachable'], None),
colorize(u'failed', t['failures'], None)),
log_only=True
)
self._display.display("", screen_only=True)
# Save result to file if environment variable exists
if self.result_file is not None:
if self.failed_task:
with open(self.result_file, 'w') as f:
f.write("PLAY: %s\n%s\n%s" % (self._play, \
self.failed_task._task, \
self._dump_results(self.failed_task._result)))
else:
open(self.result_file, 'w').close()

View file

@ -1,45 +0,0 @@
---
- name: enable and start CRI-O
systemd:
name: crio
state: started
enabled: yes
daemon_reload: yes
- name: Flush the iptables
command: iptables -F
- name: Enable localnet routing
command: sysctl -w net.ipv4.conf.all.route_localnet=1
- name: Add masquerade for localhost
command: iptables -t nat -I POSTROUTING -s 127.0.0.1 ! -d 127.0.0.1 -j MASQUERADE
- name: run critest validation
shell: "critest -c --runtime-endpoint /var/run/crio/crio.sock --image-endpoint /var/run/crio/crio.sock v"
args:
chdir: "{{ ansible_env.GOPATH }}/src/github.com/kubernetes-incubator/cri-o"
async: 5400
poll: 30
when: ansible_distribution not in ['RedHat', 'CentOS']
# XXX: RHEL has an additional test which fails because of selinux but disabling
# it doesn't solve the issue.
# TODO(runcom): enable skipped tests once we fix them (selinux)
# https://bugzilla.redhat.com/show_bug.cgi?id=1414236
# https://access.redhat.com/solutions/2897781
- name: run critest validation
shell: "critest -c --runtime-endpoint /var/run/crio/crio.sock --image-endpoint /var/run/crio/crio.sock -s 'should not allow privilege escalation when true' v"
args:
chdir: "{{ ansible_env.GOPATH }}/src/github.com/kubernetes-incubator/cri-o"
async: 5400
poll: 30
when: ansible_distribution in ['RedHat', 'CentOS']
- name: run critest benchmarks
shell: "critest -c --runtime-endpoint /var/run/crio/crio.sock --image-endpoint /var/run/crio/crio.sock b"
args:
chdir: "{{ ansible_env.GOPATH }}/src/github.com/kubernetes-incubator/cri-o"
async: 5400
poll: 30

View file

@ -1,58 +0,0 @@
---
- name: enable and start CRI-O
systemd:
name: crio
state: started
enabled: yes
daemon_reload: yes
- name: update the server address for the custom cluster
lineinfile:
dest: /usr/local/bin/createcluster.sh
line: "export {{ item }}={{ ansible_default_ipv4.address }}"
regexp: "^export {{ item }}="
state: present
with_items:
- DNS_SERVER_IP
- API_HOST
- API_HOST_IP
- name: enable and start the custom cluster
systemd:
name: customcluster.service
state: started
enabled: yes
daemon_reload: yes
- name: wait for the cluster to be running
command: "{{ ansible_env.GOPATH }}/src/k8s.io/kubernetes/_output/bin/kubectl get service kubernetes --namespace default"
register: kube_poll
until: kube_poll | succeeded
retries: 100
delay: 30
- name: ensure directory exists for e2e reports
file:
path: "{{ artifacts }}"
state: directory
# TODO remove the last test skipped once https://github.com/kubernetes-incubator/cri-o/pull/1217 is merged
- name: Buffer the e2e testing command to work around Ansible YAML folding "feature"
set_fact:
e2e_shell_cmd: >
/usr/bin/go run hack/e2e.go
--test
--test_args="-host=https://{{ ansible_default_ipv4.address }}:6443
--ginkgo.skip=\[Slow\]|\[Serial\]|\[Disruptive\]|\[Flaky\]|\[Feature:.+\]|PersistentVolumes|\[HPA\]|should.support.building.a.client.with.a.CSR|should.support.inline.execution.and.attach
--report-dir={{ artifacts }}"
&> {{ artifacts }}/e2e.log
# Fix vim syntax highlighting: "
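# Note: when the "run e2e tests" task below expands e2e_shell_cmd, the
# regex_replace('\\s+', ' ') filter collapses the folded block above into one
# shell line, roughly (illustrative values only):
#   /usr/bin/go run hack/e2e.go --test --test_args="-host=https://<api-host>:6443 --ginkgo.skip=... --report-dir=/tmp/artifacts" &> /tmp/artifacts/e2e.log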
- name: disable SELinux
command: setenforce 0
- name: run e2e tests
shell: "{{ e2e_shell_cmd | regex_replace('\\s+', ' ') }}"
args:
chdir: "{{ ansible_env.GOPATH }}/src/k8s.io/kubernetes"

View file

@ -1,55 +0,0 @@
---
- name: ensure Golang dir is empty first
file:
path: /usr/local/go
state: absent
- name: fetch Golang
unarchive:
remote_src: yes
src: "https://storage.googleapis.com/golang/go{{ version }}.linux-amd64.tar.gz"
dest: /usr/local
- name: link go toolchain
file:
src: "/usr/local/go/bin/{{ item }}"
dest: "/usr/bin/{{ item }}"
state: link
with_items:
- go
- gofmt
- godoc
- name: ensure user profile exists
file:
path: "{{ ansible_user_dir }}/.profile"
state: touch
- name: set up PATH for Go toolchain and built binaries
lineinfile:
dest: "{{ ansible_user_dir }}/.profile"
line: 'PATH={{ ansible_env.PATH }}:{{ ansible_env.GOPATH }}/bin:/usr/local/go/bin'
regexp: '^PATH='
state: present
- name: set up directories
file:
path: "{{ item }}"
state: directory
with_items:
- "{{ ansible_env.GOPATH }}/src/github.com/containernetworking"
- "{{ ansible_env.GOPATH }}/src/github.com/kubernetes-incubator"
- "{{ ansible_env.GOPATH }}/src/github.com/k8s.io"
- "{{ ansible_env.GOPATH }}/src/github.com/sstephenson"
- "{{ ansible_env.GOPATH }}/src/github.com/opencontainers"
- name: install Go tools and dependencies
shell: /usr/bin/go get -u "github.com/{{ item }}"
with_items:
- tools/godep
- onsi/ginkgo/ginkgo
- onsi/gomega
- cloudflare/cfssl/cmd/...
- jteeuwen/go-bindata/go-bindata
- cpuguy83/go-md2man

View file

@ -1,125 +0,0 @@
- hosts: all
remote_user: root
vars_files:
- "{{ playbook_dir }}/vars.yml"
tags:
- setup
tasks:
- name: set up the system
include: system.yml
- name: install Golang tools
include: golang.yml
vars:
version: "1.8.5"
- name: clone build and install bats
include: "build/bats.yml"
- name: clone build and install cri-tools
include: "build/cri-tools.yml"
vars:
cri_tools_git_version: "b42fc3f364dd48f649d55926c34492beeb9b2e99"
- name: clone build and install kubernetes
include: "build/kubernetes.yml"
vars:
k8s_git_version: "cri-o-node-e2e-patched-logs"
k8s_github_fork: "runcom"
crio_socket: "/var/run/crio.sock"
- name: clone build and install runc
include: "build/runc.yml"
- name: clone build and install networking plugins
include: "build/plugins.yml"
- hosts: all
remote_user: root
vars_files:
- "{{ playbook_dir }}/vars.yml"
tags:
- integration
- e2e
- node-e2e
- critest
tasks:
- name: clone build and install cri-o
include: "build/cri-o.yml"
- hosts: all
remote_user: root
vars_files:
- "{{ playbook_dir }}/vars.yml"
tags:
- integration
tasks:
- name: clone build and install cri-tools
include: "build/cri-tools.yml"
vars:
force_clone: True
cri_tools_git_version: "a9e38a4a000bc1a4052fb33de1c967b8cfe9ad40"
- name: run cri-o integration tests
include: test.yml
- hosts: all
remote_user: root
vars_files:
- "{{ playbook_dir }}/vars.yml"
tags:
- critest
tasks:
- name: install Golang tools
include: golang.yml
vars:
version: "1.9.2"
- name: setup critest
include: "build/cri-tools.yml"
vars:
force_clone: True
cri_tools_git_version: "a9e38a4a000bc1a4052fb33de1c967b8cfe9ad40"
- name: run critest validation and benchmarks
include: critest.yml
- hosts: all
remote_user: root
vars_files:
- "{{ playbook_dir }}/vars.yml"
tags:
- node-e2e
tasks:
- name: install Golang tools
include: golang.yml
vars:
version: "1.9.2"
- name: clone build and install kubernetes
include: "build/kubernetes.yml"
vars:
force_clone: True
k8s_git_version: "master"
k8s_github_fork: "kubernetes"
crio_socket: "/var/run/crio/crio.sock"
- name: run k8s node-e2e tests
include: node-e2e.yml
- hosts: all
remote_user: root
vars_files:
- "{{ playbook_dir }}/vars.yml"
tags:
- e2e
tasks:
- name: install Golang tools
include: golang.yml
vars:
version: "1.9.2"
- name: clone build and install kubernetes
include: "build/kubernetes.yml"
vars:
force_clone: True
# master as of 12/11/2017
k8s_git_version: "master-nfs-fix"
k8s_github_fork: "runcom"
crio_socket: "/var/run/crio/crio.sock"
- name: run k8s e2e tests
include: e2e.yml

View file

@ -1,26 +0,0 @@
---
- name: enable and start CRI-O
systemd:
name: crio
state: started
enabled: yes
daemon_reload: yes
- name: disable SELinux
command: setenforce 0
- name: Flush the iptables
command: iptables -F
- name: run node-e2e tests
shell: |
# parametrize crio socket
# cgroup-driver???
# TODO(runcom): remove conformance focus, we want everything for testgrid
make test-e2e-node PARALLELISM=1 RUNTIME=remote CONTAINER_RUNTIME_ENDPOINT=/var/run/crio.sock IMAGE_SERVICE_ENDPOINT=/var/run/crio/crio.sock TEST_ARGS='--prepull-images=true --kubelet-flags="--cgroup-driver=systemd"' FOCUS="\[Conformance\]" &> {{ artifacts }}/node-e2e.log
args:
chdir: "{{ ansible_env.GOPATH }}/src/k8s.io/kubernetes"
async: 7200
poll: 10
ignore_errors: true

View file

@ -1,62 +0,0 @@
---
# vim-syntax: ansible
- hosts: '{{ hosts | default("all") }}'
vars_files:
- "{{ playbook_dir }}/vars.yml"
vars:
_result_filepaths: [] # do not use
_dstfnbuff: [] # do not use
tasks:
- name: The crio_integration_filepath is required
tags:
- integration
set_fact:
_result_filepaths: "{{ _result_filepaths + [crio_integration_filepath] }}"
- name: The crio_node_e2e_filepath is required
tags:
- e2e
set_fact:
_result_filepaths: "{{ _result_filepaths + [crio_node_e2e_filepath] }}"
- name: Verify expectations
assert:
that:
- 'result_dest_basedir | default(False, True)'
- '_result_filepaths | default(False, True)'
- '_dstfnbuff == []'
- 'results_fetched is undefined'
- name: Results directory exists
file:
path: "{{ result_dest_basedir }}"
state: directory
delegate_to: localhost
- name: destination file paths are buffered for overwrite-checking and jUnit conversion
set_fact:
_dstfnbuff: >
{{ _dstfnbuff |
union( [result_dest_basedir ~ "/" ~ inventory_hostname ~ "/" ~ item | basename] ) }}
with_items: '{{ _result_filepaths }}'
- name: Overwriting existing results assumed very very bad
fail:
msg: "Cowardly refusing to overwrite {{ item }}"
when: item | exists
delegate_to: localhost
with_items: '{{ _dstfnbuff }}'
# fetch module doesn't support directories
- name: Retrieve results from all hosts
synchronize:
checksum: True # Don't rely on date/time being in sync
archive: False # Don't bother with permissions or times
copy_links: True # We want files, not links to files
recursive: True
mode: pull
dest: '{{ result_dest_basedir }}/{{ inventory_hostname }}/' # must end in /
src: '{{ item }}'
register: results_fetched
with_items: '{{ _result_filepaths }}'

View file

@ -1,134 +0,0 @@
---
- name: Make sure we have all required packages
package:
name: "{{ item }}"
state: present
with_items:
- atomic-registries
- container-selinux
- curl
- device-mapper-devel
- expect
- findutils
- gcc
- git
- glib2-devel
- glibc-devel
- glibc-static
- gpgme-devel
- hostname
- iproute
- iptables
- krb5-workstation
- libassuan-devel
- libffi-devel
- libgpg-error-devel
- libguestfs-tools
- libseccomp-devel
- libvirt-client
- libvirt-python
- libxml2-devel
- libxslt-devel
- make
- mlocate
- nfs-utils
- nmap-ncat
- oci-register-machine
- oci-systemd-hook
- oci-umount
- openssl
- openssl-devel
- ostree-devel
- pkgconfig
- python
- python2-crypto
- python-devel
- python-rhsm-certificates
- python-virtualenv
- PyYAML
- redhat-rpm-config
- rpcbind
- rsync
- sed
- skopeo-containers
- socat
- tar
- wget
async: 600
poll: 10
- name: Add python2-boto for Fedora
package:
name: "{{ item }}"
state: present
with_items:
- python2-boto
when: ansible_distribution in ['Fedora']
- name: Add python-boto for RHEL and CentOS
package:
name: "{{ item }}"
state: present
with_items:
- python-boto
when: ansible_distribution in ['RedHat', 'CentOS']
- name: Add Btrfs for Fedora
package:
name: "{{ item }}"
state: present
with_items:
- btrfs-progs-devel
when: ansible_distribution in ['Fedora']
- name: Update all packages
package:
name: '*'
state: latest
async: 600
poll: 10
- name: Setup swap to prevent kernel firing off the OOM killer
shell: |
truncate -s 8G /root/swap && \
export SWAPDEV=$(losetup --show -f /root/swap | head -1) && \
mkswap $SWAPDEV && \
swapon $SWAPDEV && \
swapon --show
- name: ensure directories exist as needed
file:
path: "{{ item }}"
state: directory
with_items:
- /opt/cni/bin
- /etc/cni/net.d
- name: set sysctl vm.overcommit_memory=1 for CentOS
sysctl:
name: vm.overcommit_memory
state: present
value: 1
when: ansible_distribution == 'CentOS'
- name: inject hostname into /etc/hosts
lineinfile:
dest: /etc/hosts
line: '{{ ansible_default_ipv4.address }} {{ ansible_nodename }}'
insertafter: 'EOF'
regexp: '{{ ansible_default_ipv4.address }}\s+{{ ansible_nodename }}'
state: present
- name: Flush the iptables
command: iptables -F
- name: Enable localnet routing
command: sysctl -w net.ipv4.conf.all.route_localnet=1
- name: Add masquerade for localhost
command: iptables -t nat -I POSTROUTING -s 127.0.0.1 ! -d 127.0.0.1 -j MASQUERADE
- name: Update the kernel cmdline to include quota support
command: grubby --update-kernel=ALL --args="rootflags=pquota"
when: ansible_distribution in ['RedHat', 'CentOS']

View file

@ -1,25 +0,0 @@
---
- name: Make testing output verbose so it can be converted to xunit
lineinfile:
dest: "{{ ansible_env.GOPATH }}/src/k8s.io/kubernetes/hack/make-rules/test.sh"
line: ' go test -v "${goflags[@]:+${goflags[@]}}" \'
regexp: ' go test \"\$'
state: present
- name: set extra storage options
set_fact:
extra_storage_opts: " --storage-opt overlay.override_kernel_check=1"
when: ansible_distribution == 'RedHat' or ansible_distribution == 'CentOS'
- name: ensure directory exists for e2e reports
file:
path: "{{ artifacts }}"
state: directory
- name: run integration tests
shell: "CGROUP_MANAGER=cgroupfs STORAGE_OPTIONS='--storage-driver=overlay{{ extra_storage_opts | default('') }}' make localintegration >& {{ artifacts }}/testout.txt"
args:
chdir: "{{ ansible_env.GOPATH }}/src/github.com/kubernetes-incubator/cri-o"
async: 5400
poll: 30

View file

@ -1,8 +0,0 @@
---
# For results.yml Paths use rsync 'source' conventions
artifacts: "/tmp/artifacts" # Base-directory for collection
crio_integration_filepath: "{{ artifacts }}/testout.txt"
crio_node_e2e_filepath: "{{ artifacts }}/junit_01.xml"
result_dest_basedir: '{{ lookup("env","WORKSPACE") |
default(playbook_dir, True) }}/artifacts'
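A minimal sketch of how these variables are typically consumed when collecting results; the `WORKSPACE` value and the inventory file name are assumptions for illustration:
```
# pull /tmp/artifacts from the test hosts into $WORKSPACE/artifacts on the controller
WORKSPACE=$PWD ansible-playbook -i hosts results.yml
```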

View file

@ -46,61 +46,54 @@ else
trap 'rm -rf "$PIPCACHE"' EXIT
fi
# Create a directory to contain logs and test artifacts
export ARTIFACTS=$(mkdir -pv $WORKSPACE/artifacts | tail -1 | cut -d \' -f 2)
[ -d "$ARTIFACTS" ] || exit 3
# All command failures from now on are fatal
set -e
echo
echo "Bootstrapping trusted virtual environment, this may take a few minutes, depending on networking."
echo "(logs: \"$ARTIFACTS/crio_venv_setup_log.txt\")"
echo "(logs: \"$WORKSPACE/crio_venv_setup_log.txt\")"
echo
(
set -x
cd "$WORKSPACE"
# When running more than once, make it fast by skipping the bootstrap
if [ ! -d "./.cri-o_venv" ]; then
# N/B: local system's virtualenv binary - uncontrolled version fixed below
virtualenv --no-site-packages --python=python2.7 ./.venvbootstrap
# Set up paths to install/operate out of $WORKSPACE/.venvbootstrap
source ./.venvbootstrap/bin/activate
# N/B: local system's pip binary - uncontrolled version fixed below
# pip may not support --cache-dir, force its location into $WORKSPACE the ugly way
OLD_HOME="$HOME"
export HOME="$WORKSPACE"
export PIPCACHE="$WORKSPACE/.cache/pip"
pip install --force-reinstall --upgrade pip==9.0.1
# Undo --cache-dir workaround
export HOME="$OLD_HOME"
# Install fixed, trusted, hashed versions of all requirements (including pip and virtualenv)
pip --cache-dir="$PIPCACHE" install --require-hashes \
--requirement "$SCRIPT_PATH/requirements.txt"
# N/B: local system's virtualenv binary - uncontrolled version fixed below
virtualenv --no-site-packages --python=python2.7 ./.venvbootstrap
# Set up paths to install/operate out of $WORKSPACE/.venvbootstrap
source ./.venvbootstrap/bin/activate
# N/B: local system's pip binary - uncontrolled version fixed below
# pip may not support --cache-dir, force its location into $WORKSPACE the ugly way
OLD_HOME="$HOME"
export HOME="$WORKSPACE"
export PIPCACHE="$WORKSPACE/.cache/pip"
pip install --force-reinstall --upgrade pip==9.0.1
# Undo --cache-dir workaround
export HOME="$OLD_HOME"
# Install fixed, trusted, hashed versions of all requirements (including pip and virtualenv)
pip --cache-dir="$PIPCACHE" install --require-hashes \
--requirement "$SCRIPT_PATH/requirements.txt"
# Setup trusted virtualenv using hashed binary from requirements.txt
./.venvbootstrap/bin/virtualenv --no-site-packages --python=python2.7 ./.cri-o_venv
# Exit untrusted virtualenv
deactivate
# Setup trusted virtualenv using hashed binary from requirements.txt
./.venvbootstrap/bin/virtualenv --no-site-packages --python=python2.7 ./.cri-o_venv
# Exit untrusted virtualenv
deactivate
fi
# Enter trusted virtualenv
source ./.cri-o_venv/bin/activate
# Upgrade stock-pip to support hashes
pip install --force-reinstall --cache-dir="$PIPCACHE" --upgrade pip==9.0.1
# Re-install from cache but validate all hashes (including on pip itself)
# Re-install from cache
pip install --force-reinstall --upgrade pip==9.0.1
pip --cache-dir="$PIPCACHE" install --require-hashes \
--requirement "$SCRIPT_PATH/requirements.txt"
# Remove temporary bootstrap virtualenv
rm -rf ./.venvbootstrap
# Exit trusted virtualenv
) &> $ARTIFACTS/crio_venv_setup_log.txt;
) &> $WORKSPACE/crio_venv_setup_log.txt;
echo
echo "Executing \"$WORKSPACE/.cri-o_venv/bin/ansible-playbook $@\""
echo
# Execute command-line arguments under virtualenv
source ${WORKSPACE}/.cri-o_venv/bin/activate
${WORKSPACE}/.cri-o_venv/bin/ansible-playbook $@
cd "$WORKSPACE"
source ./.cri-o_venv/bin/activate
./.cri-o_venv/bin/ansible-playbook $@

View file

@ -1 +0,0 @@
runtime-endpoint: /var/run/crio/crio.sock

Some files were not shown because too many files have changed in this diff.