Need to move to the latest released and supported version of logrus:
switch github.com/Sirupsen/logrus to github.com/sirupsen/logrus.
Also vendor in the latest containers/storage and containers/image.
Signed-off-by: Daniel J Walsh <dwalsh@redhat.com>
The syscall package is locked down and the comment in [1] advises to
switch code to use the corresponding package from golang.org/x/sys. Do
so and replace usage of package syscall where possible (leave
syscall.SysProcAttr and syscall.Stat_t).
[1] https://github.com/golang/go/blob/master/src/syscall/syscall.go#L21-L24
This will also allow us to get updates and fixes just by re-vendoring
golang.org/x/sys/unix instead of having to update to a new Go version.
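As an illustration (not a snippet from the tree), a call site that used
package syscall typically only needs the import and constants swapped:

    package main

    import (
        "fmt"
        "os"

        "golang.org/x/sys/unix"
    )

    func main() {
        // Before: syscall.Kill(pid, syscall.Signal(0))
        // After:  both the wrapper and the constant come from x/sys/unix.
        pid := os.Getpid()
        if err := unix.Kill(pid, unix.Signal(0)); err != nil { // signal 0 only checks the pid
            fmt.Fprintln(os.Stderr, err)
        }
    }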
Signed-off-by: Tobias Klauser <tklauser@distanz.ch>
This moves the timeout handling from the Go code to conmon, which
removes some of the complexity from criod, and additionally it will
make it possible to do the double-fork in the exec case too.
Signed-off-by: Alexander Larsson <alexl@redhat.com>
Currently, when creating containers we never call Wait on the
conmon exec.Command, which means that the child hangs around
forever as a zombie after it dies.
However, instead of doing this waitpid() in the parent, we do a
double-fork in conmon to daemonize it. That makes a lot of sense,
as conmon is not really tied to the launcher, but needs to outlive
it if e.g. the cri-o daemon restarts.
However, this makes an already existing race condition even more
obvious: when crio-d puts the conmon pid in a cgroup, there is a
race where conmon could already have spawned a child, which would
then not be part of the cgroup. In order to fix this
we add another synchronization pipe to conmon, which it blocks on
before creating any children. The parent then makes sure the
pid is in the cgroup before letting conmon continue.
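A rough sketch of the launcher side of this handshake (the helper here
is hypothetical, not the actual crio code):

    package sketch

    import (
        "os"
        "os/exec"
    )

    // addConmonToCgroup is a hypothetical stand-in for the daemon's real
    // cgroup bookkeeping.
    func addConmonToCgroup(pid int) error { return nil }

    // startConmon sketches the handshake: conmon blocks on the extra fd
    // until the launcher has placed its pid in the pod cgroup.
    func startConmon() error {
        startPipeR, startPipeW, err := os.Pipe()
        if err != nil {
            return err
        }
        defer startPipeR.Close()

        cmd := exec.Command("conmon" /* ...other flags... */)
        // The read end shows up as an extra fd inside conmon, which blocks
        // reading from it before forking any children.
        cmd.ExtraFiles = append(cmd.ExtraFiles, startPipeR)
        if err := cmd.Start(); err != nil {
            return err
        }
        if err := addConmonToCgroup(cmd.Process.Pid); err != nil {
            return err
        }
        // Only unblock conmon once its pid is in the cgroup, so any children
        // it spawns end up in the same cgroup.
        return startPipeW.Close()
    }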
Signed-off-by: Alexander Larsson <alexl@redhat.com>
Container runtimes provide different levels of isolation, from kernel
namespaces to hardware virtualization. When starting a specific
container, one may want to decide which level of isolation to use
depending on how much we trust the container workload. Fully verified
and signed containers may not need the hardware isolation layer, but
e.g. CI jobs pulling packages from many untrusted sources should
probably not rely on kernel namespace isolation alone.
Here we allow CRI-O users to define a container runtime for trusted
containers and another one for untrusted containers, and also to define
a general, default trust level. This anticipates future kubelet
implementations that would be able to tag containers as trusted or
untrusted. When the kubelet hint is missing, containers are trusted
by default.
A container becomes untrusted if we get a hint in that direction from
kubelet or if the default trust level is set to "untrusted" and the
container is not privileged. In both cases CRI-O will try to use the
untrusted container runtime. In all other cases, it will use the
trusted one.
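The selection logic boils down to something like the following sketch
(field and value names are illustrative, not the actual config keys):

    package sketch

    // runtimeConfig mirrors the idea of a trusted and an untrusted runtime
    // path plus a default trust level.
    type runtimeConfig struct {
        Runtime          string // trusted runtime, e.g. runc
        RuntimeUntrusted string // e.g. a hypervisor-based runtime
        DefaultTrust     string // "trusted" or "untrusted"
    }

    // pickRuntime returns the runtime to use given the kubelet trust hint
    // (empty when the kubelet provides none) and the privileged flag.
    func pickRuntime(cfg runtimeConfig, trustHint string, privileged bool) string {
        untrusted := trustHint == "untrusted" ||
            (trustHint == "" && cfg.DefaultTrust == "untrusted" && !privileged)
        if untrusted && cfg.RuntimeUntrusted != "" {
            return cfg.RuntimeUntrusted
        }
        return cfg.Runtime
    }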
Signed-off-by: Samuel Ortiz <sameo@linux.intel.com>
We use a SOCK_SEQPACKET socket for the attach unix domain socket, which
means the kernel ensures that the reading side only ever gets the data
from one write operation. We use this for framing, where the first
byte indicates which pipe the following bytes are for. We have to make
sure that all reads from the socket use a buffer at least as large as
the one used on the write side, because otherwise the extra data in
the message will be dropped.
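On the reading side that means every Read() has to use a buffer at
least as large as the largest single write; a sketch of the demuxing
(buffer size and prefix byte values are illustrative):

    package sketch

    import (
        "io"
        "net"
    )

    // demuxAttach reads one message per Read() from the "unixpacket"
    // (SOCK_SEQPACKET) attach connection and routes it by the first byte.
    // If the buffer were smaller than a write, the kernel would silently
    // drop the excess bytes of that message.
    func demuxAttach(conn net.Conn, stdout, stderr io.Writer) error {
        buf := make([]byte, 8192+1) // assumed maximum write size plus the pipe byte
        for {
            n, err := conn.Read(buf)
            if err != nil {
                return err
            }
            if n < 1 {
                continue
            }
            switch buf[0] {
            case 2: // stdout (illustrative constant)
                stdout.Write(buf[1:n])
            case 3: // stderr (illustrative constant)
                stderr.Write(buf[1:n])
            }
        }
    }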
This also adds a stdin pipe for the container, similar to the ones we
use for stdout/err, because we need a way for an attached client
to write to stdin, even if not using a tty.
This fixes https://github.com/kubernetes-incubator/cri-o/issues/569
Signed-off-by: Alexander Larsson <alexl@redhat.com>
conmon has many flags that are parsed when it is executed; one of them
is "-c". In PR #510, where we vendor the latest kube master code,
upstream changed a test to call "ctr execsync" with a command of
"sh -c command ...".
Turns out:
a) conmon has a "-c" flag which refers to the container name/id
b) the exec command has a "-c" flag, but it's for "sh"
That leads to conmon parsing the second "-c" flag from the exec
command, causing an error. The executed command looks like:
conmon -c [..other flags..] CONTAINERID -e sh -c echo hello world
This patch rewrites the exec sync code to no longer pass the exec
command down to conmon on the command line. Instead, we now create an
OCI runtime process spec in a temp file, pass _the path_ down to
conmon, and have runc execute the command via "runc exec --process
/path/to/process-spec.json CONTAINERID". This is far better, since we
no longer need to worry about conflicts with conmon's own flags.
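Roughly, the new flow looks like the sketch below (simplified: the real
code fills in the full OCI process spec and goes through conmon instead
of invoking runc directly):

    package sketch

    import (
        "encoding/json"
        "os"
        "os/exec"

        specs "github.com/opencontainers/runtime-spec/specs-go"
    )

    // execSync writes the exec command as an OCI process spec to a temp
    // file and hands runc the path, so no user-controlled flags ever end
    // up on conmon's own command line.
    func execSync(containerID string, command []string) error {
        p := specs.Process{
            Args: command,
            Cwd:  "/",
        }
        f, err := os.CreateTemp("", "exec-process-")
        if err != nil {
            return err
        }
        defer os.Remove(f.Name())
        if err := json.NewEncoder(f).Encode(&p); err != nil {
            return err
        }
        f.Close()
        // Ultimately equivalent to:
        //   runc exec --process /path/to/process-spec.json CONTAINERID
        return exec.Command("runc", "exec", "--process", f.Name(), containerID).Run()
    }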
Added and fixed some tests also.
Signed-off-by: Antonio Murdaca <runcom@redhat.com>
The ocid project was renamed to CRI-O months ago; it is time that we moved
all of the code to the new name. We want to eliminate the name ocid from
use and move fully to crio.
Also, ocic is being renamed to crioctl for the time being.
Signed-off-by: Dan Walsh <dwalsh@redhat.com>
Now that conmon splits std{out,err} for !terminal containers, ExecSync
can parse that output to return the correct std{out,err} split to the
kubelet. Invalid log lines are ignored but complained about.
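A sketch of that parsing, assuming log lines of the form
"<timestamp> <stream> <content>" (the exact conmon format may differ):

    package sketch

    import "bytes"

    // parseExecOutput splits conmon's log-style output back into stdout
    // and stderr; lines that don't have all three fields are skipped.
    func parseExecOutput(raw []byte) (stdout, stderr []byte) {
        for _, line := range bytes.Split(raw, []byte("\n")) {
            fields := bytes.SplitN(line, []byte(" "), 3)
            if len(fields) != 3 {
                continue // invalid log line, ignore (and complain about)
            }
            switch string(fields[1]) {
            case "stdout":
                stdout = append(stdout, fields[2]...)
                stdout = append(stdout, '\n')
            case "stderr":
                stderr = append(stderr, fields[2]...)
                stderr = append(stderr, '\n')
            }
        }
        return stdout, stderr
    }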
Signed-off-by: Aleksa Sarai <asarai@suse.de>
Previously we returned an internal error result when a program had a
non-zero exit code, which was incorrect. Fix this as well as change the
tests to actually check the "ExitCode" response from ExecSync (rather
than expecting ocic-ctr to return an internal error).
Signed-off-by: Aleksa Sarai <asarai@suse.de>
This adds a very simple implementation of logging within conmon, where
every buffer read from the masterfd of the container is also written to
the log file (with errors during writing to the log file ignored).
Signed-off-by: Aleksa Sarai <asarai@suse.de>
If I create a sandbox pod and then restart the ocid service, the
pod ends up in a stopped state without an exit file. Whether this is
a bug in ocid or not, we should handle the case where a container
exits so that we can clean it up.
This change just defaults the exit code to -1 if the container is not
running and does not have an exit file.
Signed-off-by: Daniel J Walsh <dwalsh@redhat.com>
The sandbox privileged flag is set to true only if either the
pod configuration privileged flag is set to true or when any
of the pod namespaces are the host ones.
A container inherits its privileged flag from its sandbox, and
will be run by the privileged runtime only if that flag is set to true.
In other words, the privileged runtime (when defined) will be used
when one of the conditions below is true:
- The sandbox will be asked to run at least one privileged container.
- The sandbox requires access to either the host IPC or networking
namespaces.
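In rough Go terms the decision looks like this (names are illustrative):

    package sketch

    // sandboxPrivileged: either the pod configuration asks for privileged,
    // or the pod shares host namespaces such as IPC or networking.
    func sandboxPrivileged(podPrivileged, hostIPC, hostNetwork bool) bool {
        return podPrivileged || hostIPC || hostNetwork
    }

    // runtimeFor picks the privileged runtime, when one is configured, for
    // privileged sandboxes/containers, and the default runtime otherwise.
    func runtimeFor(defaultRuntime, privilegedRuntime string, privileged bool) string {
        if privileged && privilegedRuntime != "" {
            return privilegedRuntime
        }
        return defaultRuntime
    }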
Signed-off-by: Samuel Ortiz <sameo@linux.intel.com>
We add a privileged flag to the container and sandbox structures
and can now select the appropriate runtime path for any container
operations depending on that flag.
Here again, the default runtime will be used for non-privileged
containers, and for privileged ones when no privileged runtime is
defined.
Signed-off-by: Samuel Ortiz <sameo@linux.intel.com>
Use containers/storage to store images, pod sandboxes, and containers.
A pod sandbox's infrastructure container has the same ID as the pod to
which it belongs, and all containers also keep track of their pod's ID.
The container configuration that we build using the data in a
CreateContainerRequest is stored in the container's ContainerDirectory
and ContainerRunDirectory.
We catch SIGTERM and SIGINT, and when we receive either, we gracefully
exit the grpc loop. If we also think that there aren't any container
filesystems in use, we attempt to do a clean shutdown of the storage
driver.
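A minimal sketch of that shutdown path, assuming a standard
google.golang.org/grpc server:

    package main

    import (
        "os"
        "os/signal"

        "golang.org/x/sys/unix"
        "google.golang.org/grpc"
    )

    func main() {
        s := grpc.NewServer()
        // ... register services and call s.Serve(listener) in a goroutine ...

        sig := make(chan os.Signal, 1)
        signal.Notify(sig, unix.SIGTERM, unix.SIGINT)
        <-sig

        // Finish in-flight RPCs and leave the grpc loop; the caller can then
        // attempt a clean shutdown of the storage driver if nothing is in use.
        s.GracefulStop()
    }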
The test harness now waits for ocid to exit before attempting to delete
the storage root directory.
Signed-off-by: Nalin Dahyabhai <nalin@redhat.com>
Some OCI container runtimes (in particular the hypervisor
based ones) will typically create a shim process between
the hypervisor and the runtime caller, in order to not
rely on the hypervisor process for e.g. forwarding the
output streams or getting a command exit code.
When executing a command inside a running container those
runtimes will create that shim process and terminate.
Therefore calling and monitoring them directly from
ExecSync() will fail. Instead we need to have a subreaper
calling the runtime and monitoring the shim process.
This change uses conmon as the subreaper from ExecSync(),
monitors the shim process, and reads the exec'ed command's
exit code from the synchronization pipe.
Signed-off-by: Samuel Ortiz <sameo@linux.intel.com>
waitpid fills its second argument with a value that
contains the process exit code in the 8 least significant
bits. Instead of returning the complete value and then
converting it in ocid, return the exit status directly
by using WEXITSTATUS in conmon.
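For reference, the Go-side equivalent of that extraction (conmon itself
does it in C with waitpid() and WEXITSTATUS()):

    package sketch

    import (
        "fmt"

        "golang.org/x/sys/unix"
    )

    // waitExitStatus returns only the 8-bit exit status, not the raw
    // wait status value.
    func waitExitStatus(pid int) (int, error) {
        var ws unix.WaitStatus
        if _, err := unix.Wait4(pid, &ws, 0, nil); err != nil {
            return -1, err
        }
        if ws.Exited() {
            return ws.ExitStatus(), nil // the WEXITSTATUS() equivalent
        }
        return -1, fmt.Errorf("pid %d did not exit normally", pid)
    }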
Signed-off-by: Samuel Ortiz <sameo@linux.intel.com>
In later versions of runC, `runc kill` *requires* the signal parameter
to know what signal needs to be sent.
Signed-off-by: Aleksa Sarai <asarai@suse.com>
Because they need to prepare the hypervisor networking interfaces
and have them match the ones created in the pod networking
namespace (typically to bridge TAP and veth interfaces), hypervisor
based container runtimes need the sandbox pod networking namespace
to be set up before the pod virtual machine is created. They can then
prepare and start the hypervisor interfaces when creating the VM.
In order to do so, we need to create per-pod persistent networking
namespaces that we pass to the CNI plugin. This patch leverages
the CNI ns package to create such namespaces under /var/run/netns,
and assigns them to all pod containers.
The persistent namespace is removed when either the pod is stopped
or removed.
Since the StopPodSandbox() API can be called multiple times from
kubelet, we track the pod networking namespace state (closed or
not) so that we don't get a containernetworking/ns package error
when calling its Close() routine multiple times as well.
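The closed-state tracking is essentially this pattern (a sketch; the
real code wraps the containernetworking ns handle):

    package sketch

    import "sync"

    // netNs tracks whether the persistent pod namespace has already been
    // torn down, since StopPodSandbox() may be called multiple times.
    type netNs struct {
        mu     sync.Mutex
        closed bool
        // ns ns.NetNS // the actual containernetworking handle, omitted here
    }

    func (n *netNs) Close() error {
        n.mu.Lock()
        defer n.mu.Unlock()
        if n.closed {
            return nil // already closed; avoid the package's double-Close error
        }
        // err := n.ns.Close() ...
        n.closed = true
        return nil
    }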
Signed-off-by: Samuel Ortiz <sameo@linux.intel.com>