

CRI-O - OCI-based implementation of Kubernetes Container Runtime Interface


Status: Stable

Compatibility matrix: CRI-O <-> Kubernetes clusters

Version - Branch            Kubernetes branch/version       Maintenance status
CRI-O 1.0.x - release-1.0   Kubernetes 1.7 branch, v1.7.x   =
CRI-O 1.8.x - release-1.8   Kubernetes 1.8 branch, v1.8.x   =
CRI-O 1.9.x - release-1.9   Kubernetes 1.9 branch, v1.9.x   =
CRI-O HEAD - master         Kubernetes master branch        ✓

Key:

  • ✓ Changes in the main Kubernetes repo about the CRI are actively implemented in CRI-O
  • = Maintenance is manual; only bugs will be patched.

What is the scope of this project?

CRI-O is meant to provide an integration path between OCI conformant runtimes and the kubelet. Specifically, it implements the Kubelet Container Runtime Interface (CRI) using OCI conformant runtimes. The scope of CRI-O is tied to the scope of the CRI.

At a high level, we expect the scope of CRI-O to be restricted to the following functionalities:

  • Support multiple image formats including the existing Docker image format
  • Support for multiple means to download images including trust & image verification
  • Container image management (managing image layers, overlay filesystems, etc)
  • Container process lifecycle management
  • Monitoring and logging required to satisfy the CRI
  • Resource isolation as required by the CRI

What is not in scope for this project?

  • Building, signing and pushing images to various image storages
  • A CLI utility for interacting with CRI-O. Any CLIs built as part of this project are only meant for testing this project, and there is no guarantee of backward compatibility for them.

This is an implementation of the Kubernetes Container Runtime Interface (CRI) that will allow Kubernetes to directly launch and manage Open Container Initiative (OCI) containers.

The plan is to use OCI projects and best-of-breed libraries for different aspects:

  • Runtime: runc (or any OCI runtime-spec implementation) and oci runtime tools
  • Images: image management using containers/image
  • Storage: storage and management of image layers using containers/storage
  • Networking: networking support through the use of CNI

It is currently in active development in the Kubernetes community through the design proposal. Questions and issues should be raised in the Kubernetes sig-node Slack channel.

Commands

Command   Description                               Demo
crio(8)   OCI Kubernetes Container Runtime daemon

Note that kpod and its container management and debugging commands have moved to a separate repository, located here.

Configuration

File           Description
crio.conf(5)   CRI-O Configuration file

OCI Hooks Support

CRI-O configures OCI Hooks to run when launching a container.

CRI-O Usage Transfer

Useful information for ops and dev transfer as it relates to infrastructure that utilizes CRI-O.

Communication

For async communication and long-running discussions, please use issues and pull requests on the GitHub repo. This is the best place to discuss design and implementation.

For sync communication we have an IRC channel, #CRI-O, on chat.freenode.net, where everyone is welcome to join and chat about development.

Getting started

Runtime dependencies

  • runc, Clear Containers runtime, or any other OCI compatible runtime
  • socat
  • iproute
  • iptables

The latest version of runc is expected to be installed on the system; it is picked up as the default runtime by CRI-O.

Build and Run Dependencies

Required

Fedora, CentOS, RHEL, and related distributions:

yum install -y \
  btrfs-progs-devel \
  device-mapper-devel \
  git \
  glib2-devel \
  glibc-devel \
  glibc-static \
  go \
  golang-github-cpuguy83-go-md2man \
  gpgme-devel \
  libassuan-devel \
  libgpg-error-devel \
  libseccomp-devel \
  libselinux-devel \
  ostree-devel \
  pkgconfig \
  runc \
  skopeo-containers

Debian, Ubuntu, and related distributions:

apt-get install -y \
  btrfs-tools \
  git \
  golang-go \
  libassuan-dev \
  libdevmapper-dev \
  libglib2.0-dev \
  libc6-dev \
  libgpgme11-dev \
  libgpg-error-dev \
  libseccomp-dev \
  libselinux1-dev \
  pkg-config \
  go-md2man \
  runc \
  skopeo-containers

Debian, Ubuntu, and related distributions will also need a copy of the development libraries for ostree, either in the form of the libostree-dev package from the flatpak PPA, or built from source (more on that here).

If using an older release or a long-term support release, be careful to double-check that the version of runc is new enough (running runc --version should produce spec: 1.0.0); otherwise, build your own.
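That spec-version check can be scripted. A minimal sketch, using a hard-coded sample string in place of real runc --version output (this example does not invoke runc):

```shell
# Hypothetical sketch: validate the "spec:" line that `runc --version` prints.
# On a real host you would capture it with: runc --version | grep '^spec:'
spec_line="spec: 1.0.0"

case "$spec_line" in
    "spec: 1.0.0") result="runc spec OK" ;;
    *)             result="runc too old, build your own" ;;
esac
echo "$result"
```

Running it against the sample string prints "runc spec OK".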

NOTE

Be careful to double-check that the version of golang is new enough; version 1.8.x or higher is required. If needed, golang kits are available at https://golang.org/dl/
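The minimum-version comparison can be done mechanically with sort -V. A sketch, using a hard-coded sample instead of calling go version:

```shell
# Hypothetical sketch: compare a Go version against the 1.8 minimum using sort -V.
# On a real host: current=$(go version | awk '{print substr($3, 3)}')
required="1.8"
current="1.9.2"

# sort -V orders version strings numerically; if the required version sorts
# first (or ties), the current version is new enough.
lowest=$(printf '%s\n%s\n' "$required" "$current" | sort -V | head -n1)
if [ "$lowest" = "$required" ]; then
    result="go version OK"
else
    result="go too old"
fi
echo "$result"
```

With the sample values this prints "go version OK"; sort -V also handles cases like 1.10 vs 1.8 correctly, which a plain string comparison would not.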

Optional

Fedora, CentOS, RHEL, and related distributions:

(no optional packages)

Debian, Ubuntu, and related distributions:

apt-get install -y \
  libapparmor-dev

Get Source Code

As with other Go projects, CRI-O must be cloned into a directory structure like:

GOPATH
└── src
    └── github.com
        └── kubernetes-incubator
            └── cri-o

First, configure a GOPATH (if you are using go1.8 or later, this defaults to ~/go).

export GOPATH=~/go
mkdir -p $GOPATH

Next, clone the source code using:

mkdir -p $GOPATH/src/github.com/kubernetes-incubator
cd $_ # or cd $GOPATH/src/github.com/kubernetes-incubator
git clone https://github.com/kubernetes-incubator/cri-o # or your fork
cd cri-o

Build

make install.tools
make
sudo make install

If you do not want to build CRI-O with seccomp support, you can add BUILDTAGS="" when running make.

make BUILDTAGS=""
sudo make install

Build Tags

CRI-O supports optional build tags for compiling support of various features. To add build tags, set the BUILDTAGS variable when running make.

make BUILDTAGS='seccomp apparmor'
Build Tag   Feature                              Dependency
seccomp     syscall filtering                    libseccomp
selinux     SELinux process and mount labeling   libselinux
apparmor    AppArmor profile support             libapparmor
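How the BUILDTAGS variable reaches the Go toolchain can be sketched roughly as follows; this is a simplification, not the project's actual Makefile invocation, which carries additional flags:

```shell
# Hypothetical sketch of how a BUILDTAGS-style variable is plumbed into `go build`.
# Only the -tags handling is shown; the real Makefile does more.
BUILDTAGS="seccomp apparmor"
cmd="go build -tags \"$BUILDTAGS\""
echo "$cmd"
```

This prints the command that would be run: go build -tags "seccomp apparmor". Source files guarded by a matching // +build line are then included in the compilation.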

Running pods and containers

Follow this tutorial to get started with CRI-O.

Setup CNI networking

A proper description of setting up CNI networking is given in the contrib/cni README, but the gist is that you need to have some basic network configurations enabled and CNI plugins installed on your system.
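As one illustration, a minimal bridge configuration in the style of contrib/cni/10-crio-bridge.conf can be written like this; the subnet, bridge name, and destination directory here are illustrative stand-ins, and on a real host the file belongs under /etc/cni/net.d/:

```shell
# Hypothetical sketch: write a minimal bridge CNI config to a scratch directory.
# Subnet, bridge name, and path are illustrative, not CRI-O defaults.
demo_dir=$(mktemp -d)
cat > "$demo_dir/10-crio-bridge.conf" <<'EOF'
{
    "cniVersion": "0.3.0",
    "name": "crio-bridge",
    "type": "bridge",
    "bridge": "cni0",
    "isGateway": true,
    "ipMasq": true,
    "ipam": {
        "type": "host-local",
        "subnet": "10.88.0.0/16",
        "routes": [
            { "dst": "0.0.0.0/0" }
        ]
    }
}
EOF
result="wrote $(basename "$demo_dir"/*.conf)"
echo "$result"
```

The host-local IPAM plugin hands out addresses from the given subnet, and the default route entry gives containers outbound connectivity via the bridge.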

Running with kubernetes

You can run a local version of kubernetes with CRI-O using local-up-cluster.sh:

  1. Clone the kubernetes repository
  2. Start the CRI-O daemon (crio)
  3. From the kubernetes project directory, run:
CGROUP_DRIVER=systemd \
CONTAINER_RUNTIME=remote \
CONTAINER_RUNTIME_ENDPOINT='/var/run/crio/crio.sock  --runtime-request-timeout=15m' \
./hack/local-up-cluster.sh

To run a full cluster, see the instructions.

Current Roadmap

  1. Basic pod/container lifecycle, basic image pull (done)
  2. Support for tty handling and state management (done)
  3. Basic integration with kubelet once client side changes are ready (done)
  4. Support for log management, networking integration using CNI, pluggable image/storage management (done)
  5. Support for exec/attach (done)
  6. Target fully automated Kubernetes testing without failures
  7. Track upstream k8s releases