# Running CRI-O on a Kubernetes cluster

## Switching the runtime from Docker to CRI-O
In a standard Docker-based Kubernetes cluster, the kubelet runs on each node as a systemd service and takes care of the communication between the runtime and the API server.
It is responsible for starting the microservice pods (such as `kube-proxy`, `kubedns`, etc.; they can differ between ways of deploying Kubernetes) as well as the user pods.
The kubelet's configuration determines which runtime is used and in what way.
The kubelet itself runs in a Docker container (as we can see in `kubelet.service`), but, importantly, it is not a Kubernetes pod (at least for now),
so we can keep the kubelet running inside a container (or directly on the host) and, regardless of that, run the pods in the chosen runtime.

Below you can find instructions on how to switch one or more nodes of a running Kubernetes cluster from Docker to CRI-O.
## Preparing crio

You must prepare and install `crio` on each node you would like to switch.
Besides the files installed by `make install install.config`, the following files must be provided:
| File path | Description | Location |
| --- | --- | --- |
| `/etc/containers/policy.json` | containers policy | Example stored in the cri-o repository |
| `/bin/runc` | `runc` or other OCI runtime | Can be built from source: [opencontainers/runc](https://github.com/opencontainers/runc) |
| `/opt/cni/bin/{flannel, bridge,...}` | CNI plugin binaries | Can be built from source: [containernetworking/plugins](https://github.com/containernetworking/plugins) |
| `/etc/cni/net.d/...` | CNI network config | Example configurations in the cri-o repository (`contrib/cni`) |
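
If you just want something to start with, a minimal, fully permissive `/etc/containers/policy.json` looks like the sketch below. It accepts images from anywhere, so tighten it if you need signature verification:

```
# cat /etc/containers/policy.json
{
    "default": [
        {
            "type": "insecureAcceptAnything"
        }
    ]
}
```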
The `crio` binary can be executed directly on the host, inside a container, or in any other way.
However, the recommended way is to run it as a systemd service.
Here's an example unit file:
```
# cat /etc/systemd/system/crio.service
[Unit]
Description=CRI-O daemon
Documentation=https://github.com/kubernetes-incubator/cri-o

[Service]
ExecStart=/bin/crio --runtime /bin/runc --log /root/crio.log --log-level debug
Restart=always
RestartSec=10s

[Install]
WantedBy=multi-user.target
```
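
After placing the unit file, reload systemd and enable the service so that `crio` starts on boot. This is a typical sequence; adapt it to your distribution:

```
# systemctl daemon-reload
# systemctl enable crio
```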
## Preparing kubelet
First, stop the kubelet service running on the node:

```
# systemctl stop kubelet
```

and stop all kubelet-managed docker containers that are still running:

```
# docker stop $(docker ps | grep k8s_ | awk '{print $1}')
```
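
If your docker CLI supports filters, an equivalent and slightly quieter variant of the command above is this sketch:

```
# docker stop $(docker ps -q --filter "name=k8s_")
```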
We have to be sure that `kubelet.service` will start after `crio.service`.
This can be done by adding `crio.service` to the `Wants=` line in `/etc/systemd/system/kubelet.service`:

```
# cat /etc/systemd/system/kubelet.service | grep Wants
Wants=docker.socket crio.service
```
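
If you prefer not to edit `kubelet.service` directly, the same ordering can be expressed with a systemd drop-in. This is only a sketch; the drop-in file name `10-crio.conf` is an arbitrary choice:

```
# cat /etc/systemd/system/kubelet.service.d/10-crio.conf
[Unit]
Wants=crio.service
After=crio.service
```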
If you'd like to change the way the kubelet is started (e.g. directly on the host instead of in a Docker container), you can change it here, but, as mentioned, that is not necessary.

Kubelet parameters are stored in the `/etc/kubernetes/kubelet.env` file.
```
# cat /etc/kubernetes/kubelet.env | grep KUBELET_ARGS
KUBELET_ARGS="--pod-manifest-path=/etc/kubernetes/manifests
--pod-infra-container-image=gcr.io/google_containers/pause-amd64:3.0
--cluster_dns=10.233.0.3 --cluster_domain=cluster.local
--resolv-conf=/etc/resolv.conf --kubeconfig=/etc/kubernetes/node-kubeconfig.yaml
--require-kubeconfig"
```
You need to add the following parameters to `KUBELET_ARGS` (a combined example is sketched after this list):

* `--experimental-cri=true` - Use the Container Runtime Interface. Will be `true` by default from the Kubernetes 1.6 release.
* `--container-runtime=remote` - Use a remote runtime with the provided socket.
* `--container-runtime-endpoint=/var/run/crio/crio.sock` - Socket for the remote runtime (the default `crio` socket location).
* `--runtime-request-timeout=10m` - Optional but useful. Some requests, especially pulling huge images, may take longer than the default (2 minutes) and would cause an error.
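
Putting that together, the resulting line in `/etc/kubernetes/kubelet.env` might look like the following. This is only a sketch based on the example above; the cluster-specific values (`--cluster_dns`, paths, etc.) are illustrative:

```
KUBELET_ARGS="--pod-manifest-path=/etc/kubernetes/manifests
--pod-infra-container-image=gcr.io/google_containers/pause-amd64:3.0
--cluster_dns=10.233.0.3 --cluster_domain=cluster.local
--resolv-conf=/etc/resolv.conf --kubeconfig=/etc/kubernetes/node-kubeconfig.yaml
--require-kubeconfig
--experimental-cri=true
--container-runtime=remote
--container-runtime-endpoint=/var/run/crio/crio.sock
--runtime-request-timeout=10m"
```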
The kubelet is now prepared.
## Flannel network
If your cluster is using the flannel network, your CNI network configuration should look like this:

```
# cat /etc/cni/net.d/10-mynet.conf
{
    "name": "mynet",
    "type": "flannel"
}
```
Then the flannel plugin will take its parameters from `/run/flannel/subnet.env`, a file generated by the flannel kubelet microservice.
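
That file typically defines the overlay network, the node's subnet, and the MTU; the exact values below are only illustrative:

```
# cat /run/flannel/subnet.env
FLANNEL_NETWORK=10.233.64.0/18
FLANNEL_SUBNET=10.233.66.1/24
FLANNEL_MTU=1450
FLANNEL_IPMASQ=true
```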
## Starting kubelet with CRI-O
Start crio first, then the kubelet. If you created the `crio` service:

```
# systemctl start crio
# systemctl start kubelet
```
You can follow the progress of preparing the node using `kubectl get nodes` or `kubectl get pods --all-namespaces` on the Kubernetes master.
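
Once the node reports `Ready`, an optional sanity check is to look at the runtime the node advertises; it should mention cri-o rather than docker. The node name `mynode` below is just a placeholder:

```
# kubectl describe node mynode | grep -i 'container runtime'
```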