We create two pods in two different networking namespaces and
check whether we can ping one from the other.
Signed-off-by: Samuel Ortiz <sameo@linux.intel.com>
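A minimal Go sketch of the connectivity check, assuming the first sandbox's netns path and the second pod's IP address are already known; the function name and ping options are illustrative, and the real test drives ocid end to end rather than calling this directly:

```go
package network

import (
	"fmt"
	"os/exec"

	"github.com/containernetworking/cni/pkg/ns"
)

// pingFromPod enters the first pod's network namespace and pings the
// second pod's IP address from inside it.
func pingFromPod(netnsPath, targetIP string) error {
	podNS, err := ns.GetNS(netnsPath)
	if err != nil {
		return err
	}
	defer podNS.Close()

	return podNS.Do(func(ns.NetNS) error {
		out, err := exec.Command("ping", "-c", "1", "-W", "1", targetIP).CombinedOutput()
		if err != nil {
			return fmt.Errorf("ping %s failed: %v: %s", targetIP, err, out)
		}
		return nil
	})
}
```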
When removing a pod sandbox or container, remove the item's ID from
the corresponding ID index, so that we can correctly determine whether
it was us or another actor that cleaned it up.
Signed-off-by: Nalin Dahyabhai <nalin@redhat.com>
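A sketch of the bookkeeping, assuming the daemon keeps docker/pkg/truncindex indexes for container and pod IDs; the Server fields and method names here are illustrative:

```go
package server

import "github.com/docker/docker/pkg/truncindex"

// Server stubs: just the two ID indexes the sketch needs.
type Server struct {
	ctrIDIndex *truncindex.TruncIndex
	podIDIndex *truncindex.TruncIndex
}

// removeContainerID drops a deleted container's ID from the index so a
// later removal request can tell "already gone" apart from "still registered".
func (s *Server) removeContainerID(id string) error {
	return s.ctrIDIndex.Delete(id)
}

// removePodID does the same for a removed pod sandbox.
func (s *Server) removePodID(id string) error {
	return s.podIDIndex.Delete(id)
}
```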
The size field that the client gets back when it inspects an image is a
pointer to a number, not a plain number, so we need to dereference it for
display.
Signed-off-by: Nalin Dahyabhai <nalin@redhat.com>
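A minimal sketch of the fix; the ImageInfo type and its *uint64 Size field are stand-ins for whatever the inspect response actually carries:

```go
package display

import "fmt"

// ImageInfo is a hypothetical stand-in for the inspect response.
type ImageInfo struct {
	Size *uint64 // nil when the size is unknown
}

func printSize(info *ImageInfo) {
	if info.Size == nil {
		fmt.Println("Size: <unknown>")
		return
	}
	// Dereference the pointer; printing info.Size directly would
	// show an address instead of the size.
	fmt.Printf("Size: %d\n", *info.Size)
}
```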
We create temporary CNI networking configurations and run two
functional tests (see the sketch below):
- Verify that the networking namespace interface has a valid CIDR
- Ping the networking namespace interface from the host
Signed-off-by: Samuel Ortiz <sameo@linux.intel.com>
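The first check could look like this in Go, assuming the CNI plugin names the in-namespace interface eth0; the actual tests are shell-based and exercise ocid itself:

```go
package network

import (
	"fmt"
	"net"

	"github.com/containernetworking/cni/pkg/ns"
)

// checkCIDR verifies that eth0 inside the namespace carries an address
// within the expected CIDR range.
func checkCIDR(netnsPath, expectedCIDR string) error {
	_, expected, err := net.ParseCIDR(expectedCIDR)
	if err != nil {
		return err
	}
	podNS, err := ns.GetNS(netnsPath)
	if err != nil {
		return err
	}
	defer podNS.Close()

	return podNS.Do(func(ns.NetNS) error {
		iface, err := net.InterfaceByName("eth0")
		if err != nil {
			return err
		}
		addrs, err := iface.Addrs()
		if err != nil {
			return err
		}
		for _, a := range addrs {
			if ip, _, err := net.ParseCIDR(a.String()); err == nil && expected.Contains(ip) {
				return nil // found an address in the expected range
			}
		}
		return fmt.Errorf("no address within %s on eth0", expectedCIDR)
	})
}
```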
We add two ocid options for choosing the CNI configuration and plugin
binaries directories: --cni-config-dir and --cni-plugin-dir.
Signed-off-by: Samuel Ortiz <sameo@linux.intel.com>
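A sketch of the flag wiring, assuming ocid's urfave/cli setup; the defaults shown mirror the conventional CNI locations:

```go
package main

import (
	"os"

	"github.com/urfave/cli"
)

func main() {
	app := cli.NewApp()
	app.Name = "ocid"
	app.Flags = []cli.Flag{
		cli.StringFlag{
			Name:  "cni-config-dir",
			Value: "/etc/cni/net.d/",
			Usage: "directory to search for CNI network configurations",
		},
		cli.StringFlag{
			Name:  "cni-plugin-dir",
			Value: "/opt/cni/bin/",
			Usage: "directory to search for CNI plugin binaries",
		},
	}
	_ = app.Run(os.Args)
}
```

Keeping both directories configurable lets the functional tests point the daemon at the temporary configurations they create.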
Add an intermediate API layer that uses containers/storage, together with
a version of containers/image that has been patched to use it, to manage
images and containers, storing the data that we need to know about
containers and pods in the metadata fields provided by containers/storage.
While ocid manages pods and containers as different types of items, with
disjoint sets of IDs and names, it remains true that every pod includes
at least one container. When a container's only purpose is to serve as
a home for namespaces that are shared with the other containers in the
pod, it is referred to as the pod's infrastructure container.
At the storage level, a pod is stored as its set of containers. We keep
track of both pod IDs and container IDs in the metadata field of
Container objects that the storage library manages for us. Containers
which bear the same pod ID are members of the pod which has that ID.
Other information about the pod, which ocid needs to remember in order
to answer requests for information about the pod, is also kept in the
metadata field of its member containers.
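To make the scheme concrete, here is a hypothetical shape for the blob kept in each member container's metadata field; the field names are illustrative, not the exact ones ocid stores:

```go
package metadata

import "encoding/json"

// containerMetadata is a hypothetical record stored with each member
// container; the pod-level fields are duplicated across the pod's members.
type containerMetadata struct {
	PodName   string `json:"pod-name"`
	PodID     string `json:"pod-id"`
	Name      string `json:"name"`       // this container's own name
	CreatedAt int64  `json:"created-at"` // creation time, unix seconds
}

// encode produces the string handed to containers/storage at creation
// time; storage preserves it and returns it on later lookups.
func encode(md containerMetadata) (string, error) {
	b, err := json.Marshal(&md)
	return string(b), err
}
```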
The container's runtime configuration should be stored in the
container's ContainerDirectory, and used as a template. Each time the
container is about to be started, its layer should be mounted, that
configuration template should be read, the template's rootfs location
should be replaced with the mountpoint for the container's layer, and
the result should be saved to the container's ContainerRunDirectory,
for use as the configuration for the container.
Signed-off-by: Nalin Dahyabhai <nalin@redhat.com>
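A sketch of that start-time flow; the containerDir and runDir arguments stand in for the storage layer's ContainerDirectory and ContainerRunDirectory accessors, and the spec type comes from opencontainers/runtime-spec:

```go
package storage

import (
	"encoding/json"
	"os"
	"path/filepath"

	rspec "github.com/opencontainers/runtime-spec/specs-go"
)

// prepareConfig reads the configuration template saved at creation time,
// points its rootfs at the freshly mounted layer, and writes the result
// into the per-run directory for this start of the container.
func prepareConfig(containerDir, runDir, mountPoint string) error {
	data, err := os.ReadFile(filepath.Join(containerDir, "config.json"))
	if err != nil {
		return err
	}
	var spec rspec.Spec
	if err := json.Unmarshal(data, &spec); err != nil {
		return err
	}
	if spec.Root == nil {
		spec.Root = &rspec.Root{}
	}
	spec.Root.Path = mountPoint // the mounted layer becomes the rootfs
	out, err := json.Marshal(&spec)
	if err != nil {
		return err
	}
	return os.WriteFile(filepath.Join(runDir, "config.json"), out, 0o600)
}
```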
Any binary that will be managing storage needs to initialize the reexec
package in order to be able to apply or read image layers.
Signed-off-by: Nalin Dahyabhai <nalin@redhat.com>
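The usual pattern looks like this; the import path assumes the reexec package vendored through containers/storage:

```go
package main

import "github.com/containers/storage/pkg/reexec"

func main() {
	// If this process was re-executed to run a storage helper (for
	// example, to apply a layer in a clean environment), reexec.Init()
	// runs that helper and reports that there is nothing left to do.
	if reexec.Init() {
		return
	}
	// ... normal daemon startup continues here ...
}
```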
Update the versions of containers/storage and containers/image, and add
new dependencies that they pull in.
Signed-off-by: Nalin Dahyabhai <nalin@redhat.com>
Add the necessary build tags and configuration so that integration tests
can properly build against device mapper and btrfs libraries.
Signed-off-by: Nalin Dahyabhai <nalin@redhat.com>
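As an illustration of the tag mechanics, modeled on containers/storage's driver registration (the tag and package names may differ), a tag-gated file pulls a driver in only when the matching libraries and -tags are available at build time:

```go
// +build linux,cgo,!exclude_graphdriver_btrfs

package register

import (
	// Imported for its side effect: the driver's init() registers the
	// btrfs graph driver, which links against the btrfs C library.
	_ "github.com/containers/storage/drivers/btrfs"
)
```

The integration tests then need to be built with the same tags, plus the device mapper and btrfs development headers installed.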
ns.Close() will not unmount and remove the networking namespace
unless it is currently marked as mounted.
When we restore a sandbox, we recreate its netns handle with
ns.GetNS(), which does not mark the namespace as mounted.
There is currently an open PR to fix this in the ns package:
https://github.com/containernetworking/cni/pull/342
but in the meantime this patch fixes a netns leak when restoring a pod.
Signed-off-by: Samuel Ortiz <sameo@linux.intel.com>
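A sketch of the interim fix, assuming the sandbox records whether its handle came from ns.GetNS() during a restore; the type and field names are illustrative:

```go
package sandbox

import (
	"os"

	"github.com/containernetworking/cni/pkg/ns"
	"golang.org/x/sys/unix"
)

// Sandbox is a stub carrying only what the sketch needs.
type Sandbox struct {
	netns    ns.NetNS
	restored bool // true when netns came from ns.GetNS() on restore
}

// removeNetNS closes the namespace handle and, for restored sandboxes,
// detaches and deletes the bind mount that ns.Close() skipped.
func (s *Sandbox) removeNetNS() error {
	path := s.netns.Path()
	if err := s.netns.Close(); err != nil {
		return err
	}
	if s.restored {
		if err := unix.Unmount(path, unix.MNT_DETACH); err != nil {
			return err
		}
		return os.RemoveAll(path)
	}
	return nil
}
```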
In order to work around a bug introduced with runc commit bc84f833,
we create a symbolic link to our permanent networking namespace so
that runC realizes that this is not the host namespace.
Although this bug is now fixed upstream (see commit f33de5ab4), this
patch keeps things working with pre-rc3 runC versions.
We may want to revert this patch once runC 1.0.0 is released.
Signed-off-by: Samuel Ortiz <sameo@linux.intel.com>
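A sketch of the workaround; the naming scheme under /var/run/netns is illustrative. The OCI runtime configuration is then pointed at the symlink rather than at the bind-mounted path:

```go
package sandbox

import (
	"crypto/rand"
	"encoding/hex"
	"fmt"
	"os"
	"path/filepath"
)

// symlinkNetNS links the permanent netns mount under a unique name and
// returns the path to put in the OCI runtime configuration.
func symlinkNetNS(nsPath string) (string, error) {
	b := make([]byte, 4)
	if _, err := rand.Read(b); err != nil {
		return "", err
	}
	link := filepath.Join("/var/run/netns",
		fmt.Sprintf("%s-%s", filepath.Base(nsPath), hex.EncodeToString(b)))
	if err := os.Symlink(nsPath, link); err != nil {
		return "", err
	}
	return link, nil
}
```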