cri-o logo

cri-o - OCI-based implementation of Kubernetes Container Runtime Interface


Status: pre-alpha

What is the scope of this project?

cri-o is meant to provide an integration path between OCI conformant runtimes and the kubelet. Specifically, it implements the Kubelet Container Runtime Interface (CRI) using OCI conformant runtimes. The scope of cri-o is tied to the scope of the CRI.

At a high level, we expect the scope of cri-o to be restricted to the following functionalities:

  • Support multiple image formats including the existing Docker image format
  • Support for multiple means to download images including trust & image verification
  • Container image management (managing image layers, overlay filesystems, etc)
  • Container process lifecycle management
  • Monitoring and logging required to satisfy the CRI
  • Resource isolation as required by the CRI

What is not in scope for this project?

  • Building, signing and pushing images to various image storages
  • A CLI utility for interacting with cri-o. Any CLIs built as part of this project are only meant for testing it, with no guarantees of backwards compatibility.

This is an implementation of the Kubernetes Container Runtime Interface (CRI) that will allow Kubernetes to directly launch and manage Open Container Initiative (OCI) containers.

The plan is to use OCI projects and best-of-breed libraries for the different aspects of container management.

It is currently in active development in the Kubernetes community through the design proposal. Questions and issues should be raised in the Kubernetes sig-node Slack channel.

Getting started

Prerequisites

runc version 1.0.0-rc1 or greater is expected to be installed on the system; ocid picks it up as the default runtime.
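As a quick sanity check, a sketch like the following can verify the prerequisite (it assumes runc is on $PATH and that GNU sort's -V version comparison is available; the `version_ge` helper is ours, not part of runc):

```shell
# version_ge returns success when version $1 >= version $2 (uses GNU sort -V)
version_ge() {
  [ "$(printf '%s\n%s\n' "$1" "$2" | sort -V | head -n1)" = "$2" ]
}

# If runc is installed, pull the version out of `runc --version` and compare
if command -v runc >/dev/null 2>&1; then
  installed=$(runc --version | head -n1 | awk '{print $NF}')
  if version_ge "$installed" "1.0.0-rc1"; then
    echo "runc $installed is new enough"
  else
    echo "runc $installed is too old"
  fi
else
  echo "runc not found on PATH"
fi
```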

Build

On CentOS/Fedora, the btrfs-progs-devel, device-mapper-devel, glib2-devel, glibc-devel, glibc-static, gpgme-devel, libassuan-devel, libgpg-error-devel, and pkgconfig packages are required. On Ubuntu, install the equivalents: btrfs-tools, libassuan-dev, libc6-dev, libdevmapper-dev, libglib2.0-dev, libgpg-error-dev, libgpgme11-dev, and pkg-config.

To enable seccomp support, install the development files for libseccomp on your platform (libseccomp-devel for CentOS/Fedora, libseccomp-dev for Ubuntu). To enable apparmor support, install the development files for libapparmor (e.g. libapparmor-dev for Ubuntu).
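The package lists above can be folded into a small helper that prints the install command for the detected package manager. This is only an illustrative sketch (`deps_command` is a hypothetical helper, and actually running the printed command needs root):

```shell
# Print the dependency-install command for this system's package manager.
# Package names are taken from the lists above; add libseccomp-devel /
# libseccomp-dev (and libapparmor-dev on Ubuntu) for the optional features.
deps_command() {
  if command -v dnf >/dev/null 2>&1 || command -v yum >/dev/null 2>&1; then
    echo "dnf install btrfs-progs-devel device-mapper-devel glib2-devel glibc-devel glibc-static gpgme-devel libassuan-devel libgpg-error-devel pkgconfig"
  else
    echo "apt-get install btrfs-tools libassuan-dev libc6-dev libdevmapper-dev libglib2.0-dev libgpg-error-dev libgpgme11-dev pkg-config"
  fi
}

deps_command
```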

$ export GOPATH=/path/to/gopath
$ mkdir $GOPATH
$ go get -d github.com/kubernetes-incubator/cri-o
$ cd $GOPATH/src/github.com/kubernetes-incubator/cri-o
$ make install.tools
$ make
$ sudo make install

Otherwise, if you do not want to build cri-o with seccomp support, you can pass BUILDTAGS="" when running make:

# create a 'github.com/kubernetes-incubator' directory in your $GOPATH/src
cd github.com/kubernetes-incubator
git clone https://github.com/kubernetes-incubator/cri-o
cd cri-o

make BUILDTAGS=""
sudo make install

Build Tags

cri-o supports optional build tags for compiling support of various features. To enable them, set the BUILDTAGS variable when running make:

make BUILDTAGS='seccomp apparmor'

Build Tag | Feature                            | Dependency
--------- | ---------------------------------- | -----------
seccomp   | syscall filtering                  | libseccomp
selinux   | selinux process and mount labeling |
apparmor  | apparmor profile support           | libapparmor

Running pods and containers

Follow the tutorial in tutorial.md to get started with CRI-O.

Setup CNI networking

A full description of how to set up CNI networking is given in the contrib/cni README. The gist is that you need some basic network configuration files and the CNI plugins installed on your system.
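For illustration, a minimal bridge configuration might look like the following sketch (the name, bridge device, and subnet are placeholders; see contrib/cni in this repository for the maintained example configurations). On a real host the file would go in /etc/cni/net.d, which needs root, so this example writes to a local directory instead:

```shell
# Write a hypothetical minimal CNI bridge config; use /etc/cni/net.d on a real host
mkdir -p ./cni-example
cat > ./cni-example/10-mynet.conf <<'EOF'
{
    "cniVersion": "0.2.0",
    "name": "mynet",
    "type": "bridge",
    "bridge": "cni0",
    "isGateway": true,
    "ipMasq": true,
    "ipam": {
        "type": "host-local",
        "subnet": "10.88.0.0/16",
        "routes": [ { "dst": "0.0.0.0/0" } ]
    }
}
EOF
```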

Running with kubernetes

You can run a local version of kubernetes with cri-o using local-up-cluster.sh. After starting the ocid daemon:

EXPERIMENTAL_CRI=true CONTAINER_RUNTIME=remote CONTAINER_RUNTIME_ENDPOINT='/var/run/ocid.sock --runtime-request-timeout=15m' ./hack/local-up-cluster.sh

For running on a kubernetes cluster, see the instructions in kubernetes.md.

Current Roadmap

  1. Basic pod/container lifecycle, basic image pull (already works)
  2. Support for tty handling and state management
  3. Basic integration with kubelet once client side changes are ready
  4. Support for log management, networking integration using CNI, pluggable image/storage management
  5. Support for exec/attach
  6. Target fully automated kubernetes testing without failures