doc: Add instructions to run cri-o with kubernetes
Signed-off-by: Jacek J. Łakis <jacek.lakis@intel.com>
@@ -126,6 +126,14 @@ $ sudo mkdir -p /opt/cni/bin
$ sudo cp bin/* /opt/cni/bin/
```

### Running with kubernetes

You can run a local version of kubernetes using `local-up-cluster.sh`. After starting the `ocid` daemon, run:
```sh
EXPERIMENTAL_CRI=true CONTAINER_RUNTIME=remote CONTAINER_RUNTIME_ENDPOINT='/var/run/ocid.sock --runtime-request-timeout=15m' ./hack/local-up-cluster.sh
```
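
Once the cluster is up, a quick sanity check against it might look like this (a sketch; `local-up-cluster.sh` prints the exact `kubectl` setup to use when it starts):

```sh
$ cluster/kubectl.sh get nodes
```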
To run cri-o on a kubernetes cluster, see the [instructions](kubernetes.md).

### Current Roadmap

1. Basic pod/container lifecycle, basic image pull (already works)
kubernetes.md (new file, 105 lines)
@@ -0,0 +1,105 @@
# Running cri-o on a kubernetes cluster

## Switching the runtime from docker to cri-o

In a standard docker-based kubernetes cluster, kubelet runs on each node as a systemd service and takes care of the communication between the runtime and the API server.
It is responsible for starting the microservice pods (such as `kube-proxy`, `kubedns`, etc. - these can differ between ways of deploying k8s) as well as the user pods.
The kubelet configuration determines which runtime is used and in what way.

Kubelet itself is executed in a docker container (as we can see in `kubelet.service`), but, importantly, **it is not** a kubernetes pod (at least for now),
so we can keep kubelet running inside a container (or directly on the host) and, regardless of this, run pods with the runtime of our choice.

Below you can find instructions on how to switch one or more nodes of a running kubernetes cluster from docker to cri-o.
### Preparing ocid

You must prepare and install `ocid` on each node you would like to switch. Here is the list of files that must be provided:

| File path                              | Description                | Location                                                |
|----------------------------------------|----------------------------|---------------------------------------------------------|
| `/etc/ocid/ocid.conf`                  | ocid configuration         | Generated on cri-o `make install`                       |
| `/etc/ocid/seccomp.conf`               | seccomp config             | Example stored in the cri-o repository                  |
| `/etc/containers/policy.json`          | containers policy          | Example stored in the cri-o repository                  |
| `/bin/{ocid, runc}`                    | `ocid` and `runc` binaries | Built from the cri-o repository                         |
| `/usr/libexec/ocid/conmon`             | `conmon` binary            | Built from the cri-o repository                         |
| `/opt/cni/bin/{flannel, bridge, ...}`  | CNI plugin binaries        | Can be built from the `containernetworking/cni` sources |
| `/etc/cni/net.d/10-mynet.conf`         | Network config             | Example stored in the [README file](README.md)          |
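
A rough sketch of putting these files in place on a node could look like the following (the destination paths come from the table above; the source file names are illustrative and depend on where you built or copied the artifacts):

```
# on each node, as root; source paths are illustrative
mkdir -p /etc/ocid /etc/containers /etc/cni/net.d /opt/cni/bin /usr/libexec/ocid
cp ocid runc /bin/
cp conmon /usr/libexec/ocid/
cp ocid.conf /etc/ocid/ocid.conf
cp seccomp.conf /etc/ocid/seccomp.conf
cp policy.json /etc/containers/policy.json
cp flannel bridge /opt/cni/bin/
cp 10-mynet.conf /etc/cni/net.d/
```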
The `ocid` binary can be executed directly on the host, inside a container or in any other way.
However, the recommended way is to set it up as a systemd service.
Here is an example unit file:
```
# cat /etc/systemd/system/ocid.service
[Unit]
Description=CRI-O daemon
Documentation=https://github.com/kubernetes-incubator/cri-o

[Service]
ExecStart=/bin/ocid --runtime /bin/runc --log /root/ocid.log --debug
Restart=always
RestartSec=10s

[Install]
WantedBy=multi-user.target
```
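
After installing the unit file, the usual systemd steps apply (a short sketch):

```
# systemctl daemon-reload
# systemctl enable ocid
```

Starting the service (together with kubelet) is covered below.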
### Preparing kubelet

First, you need to stop the kubelet service running on the node:
```
# systemctl stop kubelet
```
and stop all the kubelet-managed docker containers that are still running:

```
# docker stop $(docker ps | grep k8s_ | awk '{print $1}')
```

We have to make sure that `kubelet.service` starts after `ocid.service`.
This can be done by adding `ocid.service` to the `Wants=` section in `/etc/systemd/system/kubelet.service`:

```
# cat /etc/systemd/system/kubelet.service | grep Wants
Wants=docker.socket ocid.service
```
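
If you prefer not to edit the unit file directly, a systemd drop-in gives the same result (a sketch; the drop-in file name is arbitrary):

```
# cat /etc/systemd/system/kubelet.service.d/10-ocid.conf
[Unit]
Wants=ocid.service
After=ocid.service
```

Remember to run `systemctl daemon-reload` after adding or changing it.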
If you'd like to change the way kubelet is started (e.g. directly on the host instead of in a docker container), you can do that in `kubelet.service` as well, but, as mentioned, it is not necessary.

Kubelet parameters are stored in the `/etc/kubernetes/kubelet.env` file.
```
# cat /etc/kubernetes/kubelet.env | grep KUBELET_ARGS
KUBELET_ARGS="--pod-manifest-path=/etc/kubernetes/manifests
--pod-infra-container-image=gcr.io/google_containers/pause-amd64:3.0
--cluster_dns=10.233.0.3 --cluster_domain=cluster.local
--resolv-conf=/etc/resolv.conf --kubeconfig=/etc/kubernetes/node-kubeconfig.yaml
--require-kubeconfig"
```

You need to add the following parameters to `KUBELET_ARGS` (see the example after this list):
* `--experimental-cri=true` - Use the Container Runtime Interface. This will be true by default from the kubernetes 1.6 release.
* `--container-runtime=remote` - Use a remote runtime with the provided socket.
* `--container-runtime-endpoint=/var/run/ocid.sock` - Socket for the remote runtime (the default `ocid` socket location).
* `--runtime-request-timeout=10m` - Optional but useful. Some requests, especially pulling huge images, may take longer than the default (2 minutes) and would cause an error.
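
With these flags appended, the `KUBELET_ARGS` line from the example above would end up looking roughly like this:

```
KUBELET_ARGS="--pod-manifest-path=/etc/kubernetes/manifests
--pod-infra-container-image=gcr.io/google_containers/pause-amd64:3.0
--cluster_dns=10.233.0.3 --cluster_domain=cluster.local
--resolv-conf=/etc/resolv.conf --kubeconfig=/etc/kubernetes/node-kubeconfig.yaml
--require-kubeconfig
--experimental-cri=true --container-runtime=remote
--container-runtime-endpoint=/var/run/ocid.sock --runtime-request-timeout=10m"
```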
Kubelet is now prepared.

## Flannel network

If your cluster is using a flannel network, your network configuration should look like this:
```
# cat /etc/cni/net.d/10-mynet.conf
{
    "name": "mynet",
    "type": "flannel"
}
```
Then kubelet will take its network parameters from `/run/flannel/subnet.env`, a file generated by the flannel kubelet microservice.
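
The contents of that file typically look something like this (the addresses and MTU below are purely illustrative):

```
# cat /run/flannel/subnet.env
FLANNEL_NETWORK=10.233.64.0/18
FLANNEL_SUBNET=10.233.65.1/24
FLANNEL_MTU=1450
FLANNEL_IPMASQ=true
```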
## Starting kubelet with cri-o

Start `ocid` first, then kubelet. If you created the `ocid` service:
```
# systemctl start ocid
# systemctl start kubelet
```

You can follow the progress of the node preparation using `kubectl get nodes` or `kubectl get pods --all-namespaces` on the kubernetes master.
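
To double-check that a switched node is really using cri-o, you can also look at the runtime version reported in the node description (a sketch; the exact output format depends on your kubernetes version):

```
# kubectl describe node <node-name> | grep -i "container runtime version"
```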