4ae8606edf
Add an intermediate API layer that uses containers/storage, and a containers/image that has been patched to use it, to manage images and containers, storing the data that we need to know about containers and pods in the metadata fields provided by containers/storage.

While ocid manages pods and containers as different types of items, with disjoint sets of IDs and names, it remains true that every pod includes at least one container. When a container's only purpose is to serve as a home for namespaces that are shared with the other containers in the pod, it is referred to as the pod's infrastructure container.

At the storage level, a pod is stored as its set of containers. We keep track of both pod IDs and container IDs in the metadata field of Container objects that the storage library manages for us. Containers which bear the same pod ID are members of the pod which has that ID. Other information about the pod, which ocid needs to remember in order to answer requests for information about the pod, is also kept in the metadata field of its member containers.

The container's runtime configuration should be stored in the container's ContainerDirectory and used as a template. Each time the container is about to be started, its layer should be mounted, that configuration template should be read, the template's rootfs location should be replaced with the mountpoint for the container's layer, and the result should be saved to the container's ContainerRunDirectory, for use as the configuration for the container.

Signed-off-by: Nalin Dahyabhai <nalin@redhat.com>
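For illustration, here is a minimal sketch of what that bookkeeping could look like against the containers/storage Store API. CreateContainer, Mount, ContainerDirectory, and ContainerRunDirectory are real Store methods; the metadata struct, its field names, and the config.json handling are assumptions made for the sketch, not ocid's actual schema.

```go
package storagedemo

import (
	"encoding/json"
	"io/ioutil"
	"path/filepath"

	// Import path as in current containers/storage; older trees vendored it
	// under a different layout.
	"github.com/containers/storage"
)

// containerMetadata is a hypothetical shape for the blob kept in each
// Container's metadata field. Containers bearing the same PodID are
// members of the same pod.
type containerMetadata struct {
	PodID   string `json:"pod-id"`
	PodName string `json:"pod-name"`
}

// createPodMember records pod membership in the metadata field when the
// container is created.
func createPodMember(store storage.Store, id, image, layer, podID, podName string) error {
	blob, err := json.Marshal(&containerMetadata{PodID: podID, PodName: podName})
	if err != nil {
		return err
	}
	_, err = store.CreateContainer(id, nil, image, layer, string(blob), nil)
	return err
}

// prepareRuntimeConfig mounts the container's layer, points the stored
// configuration template's root at the mountpoint, and writes the result
// to the per-boot ContainerRunDirectory, as described above.
func prepareRuntimeConfig(store storage.Store, id string) error {
	mountpoint, err := store.Mount(id, "")
	if err != nil {
		return err
	}
	dir, err := store.ContainerDirectory(id) // persistent: holds the template
	if err != nil {
		return err
	}
	rundir, err := store.ContainerRunDirectory(id) // cleared across reboots
	if err != nil {
		return err
	}
	template, err := ioutil.ReadFile(filepath.Join(dir, "config.json"))
	if err != nil {
		return err
	}
	var config map[string]interface{}
	if err := json.Unmarshal(template, &config); err != nil {
		return err
	}
	// Replace the template's rootfs location with the layer's mountpoint.
	config["root"] = map[string]interface{}{"path": mountpoint}
	out, err := json.Marshal(config)
	if err != nil {
		return err
	}
	return ioutil.WriteFile(filepath.Join(rundir, "config.json"), out, 0600)
}
```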
cri-o - OCI-based implementation of Kubernetes Container Runtime Interface
Status: pre-alpha
What is the scope of this project?
cri-o is meant to provide an integration path between OCI conformant runtimes and the kubelet. Specifically, it implements the Kubelet Container Runtime Interface (CRI) using OCI conformant runtimes. The scope of cri-o is tied to the scope of the CRI.
At a high level, we expect the scope of cri-o to be restricted to the following functionalities:
- Support for multiple image formats, including the existing Docker image format
- Support for multiple means to download images including trust & image verification
- Container image management (managing image layers, overlay filesystems, etc)
- Container process lifecycle management
- Monitoring and logging required to satisfy the CRI
- Resource isolation as required by the CRI
What is not in scope for this project?
- Building, signing and pushing images to various image storages
- A CLI utility for interacting with cri-o. Any CLIs built as part of this project are only meant for testing this project, and there are no guarantees of backwards compatibility with them.
This is an implementation of the Kubernetes Container Runtime Interface (CRI) that will allow Kubernetes to directly launch and manage Open Container Initiative (OCI) containers.
The plan is to use OCI projects and best of breed libraries for different aspects:
- Runtime: runc (or any OCI runtime-spec implementation) and OCI runtime tools
- Images: Image management using containers/image
- Storage: Storage and management of image layers using containers/storage
- Networking: Networking support through use of CNI
It is currently in active development in the Kubernetes community through the design proposal. Questions and issues should be raised in the Kubernetes sig-node Slack channel.
Getting started
Prerequisites
runc
version 1.0.0-rc1 or greater is expected to be installed on the system. It is picked up as the default runtime by ocid.
Build
The glib2-devel and glibc-static packages on Fedora, or libglib2.0-dev on Ubuntu (or equivalent), are required.
In order to enable seccomp support you will need to install libseccomp on your platform:
e.g. libseccomp-devel for CentOS/Fedora, or libseccomp-dev for Ubuntu.
$ export GOPATH=/path/to/gopath
$ mkdir $GOPATH
$ go get -d github.com/kubernetes-incubator/cri-o
$ cd $GOPATH/src/github.com/kubernetes-incubator/cri-o
$ make install.tools
$ make
$ sudo make install
Otherwise, if you do not want to build cri-o with seccomp support you can add BUILDTAGS="" when running make.
# create the 'github.com/kubernetes-incubator' directory in your $GOPATH/src
mkdir -p $GOPATH/src/github.com/kubernetes-incubator
cd $GOPATH/src/github.com/kubernetes-incubator
git clone https://github.com/kubernetes-incubator/cri-o
cd cri-o
make BUILDTAGS=""
sudo make install
Build Tags
cri-o supports optional build tags for compiling support of various features. To add build tags to the make option, set the BUILDTAGS variable:
make BUILDTAGS='seccomp apparmor'
Build Tag | Feature | Dependency |
---|---|---|
seccomp | syscall filtering | libseccomp |
selinux | selinux process and mount labeling | |
apparmor | apparmor profile support | libapparmor |
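As an illustration of the mechanism (the file and symbol names here are hypothetical, not cri-o's actual sources), an optional feature is typically gated by a pair of Go files: passing `-tags seccomp` to go build (which `make BUILDTAGS='seccomp'` does) compiles the first file, and a build without the tag compiles the fallback:

```go
// File: seccomp_enabled.go -- compiled only when the "seccomp" tag is set.

// +build seccomp

package sandbox

// seccompEnabled reports at compile time whether seccomp support was built in.
const seccompEnabled = true
```

```go
// File: seccomp_unsupported.go -- the fallback built when the tag is absent.

// +build !seccomp

package sandbox

const seccompEnabled = false
```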
Running pods and containers
Start the server
# ocid --debug
If the default --runtime value does not point to your runtime:
# ocid --runtime $(which runc)
Create a pod
# ocic pod run --config test/testdata/sandbox_config.json
Get pod status
# ocic pod status --id <pod_id>
Run a container inside a pod
# ocic ctr create --pod <pod_id> --config test/testdata/container_redis.json
Start a container
# ocic ctr start --id <ctr_id>
Get container status
# ocic ctr status --id <ctr_id>
Stop a container
# ocic ctr stop --id <ctr_id>
Remove a container
# ocic ctr remove --id <ctr_id>
Stop a pod
# ocic pod stop --id <pod_id>
Remove a pod
# ocic pod remove --id <pod_id>
List pods
# ocic pod list
List containers
# ocic ctr list
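Under the hood, ocic drives ocid over the CRI's gRPC interface. For illustration, here is a hedged sketch of the pod lifecycle steps above spoken directly over gRPC; it is written against the modern k8s.io/cri-api v1 package (the code of this era used an earlier alpha revision of the CRI), and the socket path is an assumption based on ocid's default listen address:

```go
package main

import (
	"context"
	"fmt"
	"log"
	"time"

	"google.golang.org/grpc"
	runtime "k8s.io/cri-api/pkg/apis/runtime/v1"
)

func main() {
	// Assumed default socket that ocid serves the CRI on.
	conn, err := grpc.Dial("unix:///var/run/ocid.sock", grpc.WithInsecure())
	if err != nil {
		log.Fatal(err)
	}
	defer conn.Close()
	client := runtime.NewRuntimeServiceClient(conn)

	ctx, cancel := context.WithTimeout(context.Background(), 30*time.Second)
	defer cancel()

	// Equivalent of `ocic pod run`: ask the server to set up a sandbox.
	run, err := client.RunPodSandbox(ctx, &runtime.RunPodSandboxRequest{
		Config: &runtime.PodSandboxConfig{
			Metadata: &runtime.PodSandboxMetadata{
				Name:      "example",
				Namespace: "default",
				Uid:       "example-uid",
			},
		},
	})
	if err != nil {
		log.Fatal(err)
	}
	fmt.Println("pod sandbox:", run.PodSandboxId)

	// Equivalents of `ocic pod stop` and `ocic pod remove`.
	if _, err := client.StopPodSandbox(ctx, &runtime.StopPodSandboxRequest{PodSandboxId: run.PodSandboxId}); err != nil {
		log.Fatal(err)
	}
	if _, err := client.RemovePodSandbox(ctx, &runtime.RemovePodSandboxRequest{PodSandboxId: run.PodSandboxId}); err != nil {
		log.Fatal(err)
	}
}
```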
Setup CNI networking
Follow the steps below in order to set up networking in your pods using the CNI bridge plugin. Nothing else is required after this, since CRI-O automatically sets up networking if it finds any CNI plugin.
$ go get -d github.com/containernetworking/cni
$ cd $GOPATH/src/github.com/containernetworking/cni
$ sudo mkdir -p /etc/cni/net.d
$ sudo sh -c 'cat >/etc/cni/net.d/10-mynet.conf <<-EOF
{
"cniVersion": "0.2.0",
"name": "mynet",
"type": "bridge",
"bridge": "cni0",
"isGateway": true,
"ipMasq": true,
"ipam": {
"type": "host-local",
"subnet": "10.88.0.0/16",
"routes": [
{ "dst": "0.0.0.0/0" }
]
}
}
EOF'
$ sudo sh -c 'cat >/etc/cni/net.d/99-loopback.conf <<-EOF
{
"cniVersion": "0.2.0",
"type": "loopback"
}
EOF'
$ ./build
$ sudo mkdir -p /opt/cni/bin
$ sudo cp bin/* /opt/cni/bin/
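The discovery step is what the libcni package in containernetworking/cni provides. As a hedged sketch of how a runtime might consume the configuration written above (the container ID and netns path are placeholders, and recent libcni versions take a context; older ones did not):

```go
package main

import (
	"context"
	"fmt"
	"log"
	"sort"

	"github.com/containernetworking/cni/libcni"
)

func main() {
	// Discover configs the way a CRI runtime typically does: lexical order
	// in /etc/cni/net.d, so 10-mynet.conf wins over 99-loopback.conf.
	files, err := libcni.ConfFiles("/etc/cni/net.d", []string{".conf"})
	if err != nil || len(files) == 0 {
		log.Fatal("no CNI configuration found: ", err)
	}
	sort.Strings(files)

	netconf, err := libcni.ConfFromFile(files[0])
	if err != nil {
		log.Fatal(err)
	}

	cninet := &libcni.CNIConfig{Path: []string{"/opt/cni/bin"}}

	// ContainerID and NetNS are placeholders; a real runtime passes the
	// sandbox's ID and the path to its network namespace.
	rt := &libcni.RuntimeConf{
		ContainerID: "example",
		NetNS:       "/var/run/netns/example",
		IfName:      "eth0",
	}
	result, err := cninet.AddNetwork(context.Background(), netconf, rt)
	if err != nil {
		log.Fatal(err)
	}
	fmt.Println(result)
}
```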
Current Roadmap
- Basic pod/container lifecycle, basic image pull (already works)
- Support for tty handling and state management
- Basic integration with kubelet once client side changes are ready
- Support for log management, networking integration using CNI, pluggable image/storage management
- Support for exec/attach
- Target fully automated Kubernetes testing without failures