cri-o - OCI-based implementation of Kubernetes Container Runtime Interface


Status: pre-alpha

What is the scope of this project?

cri-o is meant to provide an integration path between OCI conformant runtimes and the kubelet. Specifically, it implements the Kubelet Container Runtime Interface (CRI) using OCI conformant runtimes. The scope of cri-o is tied to the scope of the CRI.

At a high level, we expect the scope of cri-o to be restricted to the following functionalities:

  • Support multiple image formats including the existing Docker image format
  • Support for multiple means to download images including trust & image verification
  • Container image management (managing image layers, overlay filesystems, etc)
  • Container process lifecycle management
  • Monitoring and logging required to satisfy the CRI
  • Resource isolation as required by the CRI

What is not in scope for this project?

  • Building, signing and pushing images to various image storages
  • A CLI utility for interacting with cri-o. Any CLIs built as part of this project are only meant for testing this project, and there are no guarantees of backwards compatibility for them.

This is an implementation of the Kubernetes Container Runtime Interface (CRI) that will allow Kubernetes to directly launch and manage Open Container Initiative (OCI) containers.

The plan is to use OCI projects and best-of-breed libraries for the different aspects of container management.

It is currently in active development in the Kubernetes community through the design proposal. Questions and issues should be raised in the Kubernetes sig-node Slack channel.

Getting started

Prerequisites

runc version 1.0.0.rc1 or greater is expected to be installed on the system. It is picked up as the default runtime by ocid.
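As a quick sanity check, you can confirm that runc is present before starting ocid. This is only a sketch; the exact version string format varies between runc releases:

```shell
# Check whether runc is on the PATH; ocid picks it up as the
# default runtime if it is.
RUNC_BIN=$(command -v runc || echo "")
if [ -n "$RUNC_BIN" ]; then
  runc --version
else
  echo "runc not found on PATH"
fi
```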

Build

The glib2-devel and glibc-static packages (Fedora) or libglib2.0-dev (Ubuntu), or their equivalents, are required. To enable seccomp support, you will need to install libseccomp on your platform.

e.g. libseccomp-devel for CentOS/Fedora, or libseccomp-dev for Ubuntu

$ export GOPATH=/path/to/gopath
$ mkdir $GOPATH
$ go get -d github.com/kubernetes-incubator/cri-o
$ cd $GOPATH/src/github.com/kubernetes-incubator/cri-o
$ make install.tools
$ make
$ sudo make install

Alternatively, if you do not want to build cri-o with seccomp support, add BUILDTAGS="" when running make:

# create a 'github.com/kubernetes-incubator' directory in your $GOPATH/src
cd $GOPATH/src/github.com/kubernetes-incubator
git clone https://github.com/kubernetes-incubator/cri-o
cd cri-o

make BUILDTAGS=""
sudo make install
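The binaries and tools installed above land in $GOPATH/bin, so it is worth making sure that directory is on your PATH. A minimal sketch (the $HOME/go fallback is just an example default; substitute your own GOPATH):

```shell
# Ensure $GOPATH/bin is on the PATH so the freshly installed
# binaries and tools can be found.
export GOPATH="${GOPATH:-$HOME/go}"   # $HOME/go is only an example default
case ":$PATH:" in
  *":$GOPATH/bin:"*) ;;               # already present, nothing to do
  *) export PATH="$PATH:$GOPATH/bin" ;;
esac
echo "using GOPATH=$GOPATH"
```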

Build Tags

cri-o supports optional build tags for compiling support of various features. To add build tags, set the BUILDTAGS variable when running make.

make BUILDTAGS='seccomp apparmor'
Build Tag   Feature                              Dependency
seccomp     syscall filtering                    libseccomp
selinux     selinux process and mount labeling
apparmor    apparmor profile support             libapparmor

Running pods and containers

Start the server

# ocid --debug

If the default --runtime value does not point to your runtime:

# ocid --runtime $(which runc)

Create a pod

# ocic pod run --config test/testdata/sandbox_config.json

Get pod status

# ocic pod status --id <pod_id>

Run a container inside a pod

# ocic ctr create --pod <pod_id> --config test/testdata/container_redis.json

Start a container

# ocic ctr start --id <ctr_id>

Get container status

# ocic ctr status --id <ctr_id>

Stop a container

# ocic ctr stop --id <ctr_id>

Remove a container

# ocic ctr remove --id <ctr_id>

Stop a pod

# ocic pod stop --id <pod_id>

Remove a pod

# ocic pod remove --id <pod_id>

List pods

# ocic pod list

List containers

# ocic ctr list
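The commands above can be chained into a single illustrative script. This is only a sketch: it assumes an ocid server is already running, and that `ocic pod run` and `ocic ctr create` print the newly created ID on stdout so it can be captured into a variable.

```shell
# Illustrative pod/container lifecycle; skipped when ocic is not installed.
if command -v ocic >/dev/null 2>&1; then
  POD_ID=$(ocic pod run --config test/testdata/sandbox_config.json)
  CTR_ID=$(ocic ctr create --pod "$POD_ID" --config test/testdata/container_redis.json)
  ocic ctr start --id "$CTR_ID"
  ocic ctr status --id "$CTR_ID"
  ocic ctr stop --id "$CTR_ID"
  ocic ctr remove --id "$CTR_ID"
  ocic pod stop --id "$POD_ID"
  ocic pod remove --id "$POD_ID"
  DEMO_STATUS=ran
else
  echo "ocic not installed; skipping lifecycle demo"
  DEMO_STATUS=skipped
fi
```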

Setup CNI networking

Follow the steps below to set up networking in your pods using the CNI bridge plugin. Nothing else is required after this, since cri-o automatically sets up networking if it finds any CNI plugin.

$ go get -d github.com/containernetworking/cni
$ cd $GOPATH/src/github.com/containernetworking/cni
$ sudo mkdir -p /etc/cni/net.d
$ sudo sh -c 'cat >/etc/cni/net.d/10-mynet.conf <<-EOF
{
    "cniVersion": "0.2.0",
    "name": "mynet",
    "type": "bridge",
    "bridge": "cni0",
    "isGateway": true,
    "ipMasq": true,
    "ipam": {
        "type": "host-local",
        "subnet": "10.88.0.0/16",
        "routes": [
            { "dst": "0.0.0.0/0"  }
        ]
    }
}
EOF'
$ sudo sh -c 'cat >/etc/cni/net.d/99-loopback.conf <<-EOF
{
    "cniVersion": "0.2.0",
    "type": "loopback"
}
EOF'
$ ./build
$ sudo mkdir -p /opt/cni/bin
$ sudo cp bin/* /opt/cni/bin/
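If you want to inspect the bridge config before touching /etc/cni/net.d, the same heredoc can be written to a temporary directory first. This is just a dry-run sketch; the final copy into /etc/cni/net.d still needs the sudo steps above.

```shell
# Write the bridge config to a scratch directory for inspection.
NETCONF_DIR=$(mktemp -d)
cat > "$NETCONF_DIR/10-mynet.conf" <<EOF
{
    "cniVersion": "0.2.0",
    "name": "mynet",
    "type": "bridge",
    "bridge": "cni0",
    "isGateway": true,
    "ipMasq": true,
    "ipam": {
        "type": "host-local",
        "subnet": "10.88.0.0/16",
        "routes": [ { "dst": "0.0.0.0/0" } ]
    }
}
EOF
echo "wrote $NETCONF_DIR/10-mynet.conf"
```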

Current Roadmap

  1. Basic pod/container lifecycle, basic image pull (already works)
  2. Support for tty handling and state management
  3. Basic integration with kubelet once client side changes are ready
  4. Support for log management, networking integration using CNI, pluggable image/storage management
  5. Support for exec/attach
  6. Target fully automated kubernetes testing without failures