Because we need a working CNI plugin to set up a correct netns so that
sandbox_run can grab a working IP address.
Signed-off-by: Antonio Murdaca <runcom@redhat.com>
Add a new directory, /etc/crio/hooks.d, where packagers can drop a JSON config
file to specify a hook.
The JSON must specify a valid executable to run.
The JSON must also specify in which stage(s) to run the hook:
prestart, poststart, poststop
The JSON must also specify the criteria under which the hook should be launched:
If the container HasBindMounts
If the container cmd matches a list of regular expressions
If the container's annotations match a list of regular expressions.
If any of these match, the hook will be launched.
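A rough sketch of the shape such a dropped-in file might decode into on the
Go side (the field names and JSON tags here are illustrative, not the exact
schema):

    package hooks

    import (
        "encoding/json"
        "os"
    )

    // HookParams approximates one JSON file dropped into /etc/crio/hooks.d.
    type HookParams struct {
        Hook          string   `json:"hook"`          // executable to run
        Stages        []string `json:"stage"`         // prestart, poststart, poststop
        Cmds          []string `json:"cmd"`           // regexes matched against the container cmd
        Annotations   []string `json:"annotation"`    // regexes matched against annotations
        HasBindMounts bool     `json:"hasbindmounts"` // launch when the container has bind mounts
    }

    // readHook decodes a single hook config file.
    func readHook(path string) (HookParams, error) {
        var h HookParams
        b, err := os.ReadFile(path)
        if err != nil {
            return h, err
        }
        return h, json.Unmarshal(b, &h)
    }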
Signed-off-by: Daniel J Walsh <dwalsh@redhat.com>
Basically none of the clever storage drivers will work when we're on top
of AUFS, so if we find ourselves in that situation when running tests,
default to storage options of "--storage-driver vfs".
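The check itself lives in the bash test helpers; as an illustration of the
idea in Go (assuming the standard AUFS superblock magic, 0x61756673):

    package testutil

    import "syscall"

    const aufsSuperMagic = 0x61756673 // "aufs"

    // storageOptions returns the extra daemon flags to use when the test
    // root sits on AUFS, where the fancier storage drivers won't work.
    func storageOptions(root string) ([]string, error) {
        var st syscall.Statfs_t
        if err := syscall.Statfs(root, &st); err != nil {
            return nil, err
        }
        if st.Type == aufsSuperMagic {
            return []string{"--storage-driver", "vfs"}, nil
        }
        return nil, nil
    }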
Signed-off-by: Nalin Dahyabhai <nalin@redhat.com>
conmon has many flags that are parsed when it's executed, one of them
is "-c". During PR #510 where we vendor latest kube master code,
upstream has changed a test to call a "ctr execsync" with a command of
"sh -c commmand ...".
Turns out:
a) conmon has a "-c" flag which refers to the container name/id
b) the exec command has a "-c" flag, but it's meant for "sh"
That leads to conmon parsing the second "-c" flag from the exec
command, causing an error. The executed command looks like:
conmon -c [..other flags..] CONTAINERID -e sh -c echo hello world
This patch rewrites the exec sync code so that the exec command is no
longer passed down to conmon on the command line. Instead, we now create
an OCI runtime process spec in a temp file, pass _the path_ down to
conmon, and have runc exec the command using "runc exec --process
/path/to/process-spec.json CONTAINERID". This is far better because we
no longer need to worry about conflicts with conmon's own flags.
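A simplified sketch of the new flow (not the exact crio code; error handling
and the conmon plumbing are trimmed):

    package oci

    import (
        "encoding/json"
        "os"
        "os/exec"
        "path/filepath"

        specs "github.com/opencontainers/runtime-spec/specs-go"
    )

    // execSync writes the command into an OCI process spec on disk and lets
    // runc read it from there, so conmon never parses the command's flags.
    func execSync(containerID string, command []string, tty bool) error {
        p := specs.Process{
            Args:     command,
            Terminal: tty,
            Cwd:      "/",
        }
        dir, err := os.MkdirTemp("", "exec-process-")
        if err != nil {
            return err
        }
        defer os.RemoveAll(dir)

        processFile := filepath.Join(dir, "process.json")
        b, err := json.Marshal(p)
        if err != nil {
            return err
        }
        if err := os.WriteFile(processFile, b, 0o644); err != nil {
            return err
        }
        // Equivalent to: runc exec --process /path/to/process.json CONTAINERID
        return exec.Command("runc", "exec", "--process", processFile, containerID).Run()
    }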
Added and fixed some tests also.
Signed-off-by: Antonio Murdaca <runcom@redhat.com>
The ocid project was renamed to CRI-O months ago; it is time that we moved
all of the code to the new name. We want to eliminate the name ocid from use.
Move fully to crio.
Also, ocic is being renamed to crioctl for the time being.
Signed-off-by: Dan Walsh <dwalsh@redhat.com>
Two issues:
1) pod Namespace was always set to "", which prevents plugins from figuring out
what the actual pod is, and from getting more info about that pod from the
runtime via out-of-band mechanisms
2) the pod Name and ID arguments were switched, further preventing #1
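In other words, the data handed to the network plugin should end up shaped
like this (hypothetical struct and helper names, only to show the namespace
and name/ID ordering fix):

    package server

    // podNetworkArgs is a stand-in for whatever the network plugin expects.
    type podNetworkArgs struct {
        Namespace string
        Name      string
        ID        string
        NetNSPath string
    }

    func networkArgsForSandbox(namespace, name, id, netnsPath string) podNetworkArgs {
        return podNetworkArgs{
            Namespace: namespace, // was always "" before this fix
            Name:      name,      // was passed where the ID belonged
            ID:        id,        // was passed where the Name belonged
            NetNSPath: netnsPath,
        }
    }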
Signed-off-by: Dan Williams <dcbw@redhat.com>
Since we no longer fall back to the noop plugin when
CNI configuration files are missing, and since the default
sandbox_config.json test file is running without host
networking, we must install the bridge and loopback
configuration files by default for tests to pass.
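For reference, the kind of configuration pair the test setup now installs;
the file names, network name, and subnet below are assumptions, not
necessarily what the tests use:

    package testutil

    import (
        "os"
        "path/filepath"
    )

    // writeTestCNIConfig drops a minimal bridge config and a loopback config
    // into confDir so CNI has something to load during the tests.
    func writeTestCNIConfig(confDir string) error {
        bridge := `{
            "cniVersion": "0.2.0",
            "name": "crionet-test",
            "type": "bridge",
            "bridge": "cni0",
            "isGateway": true,
            "ipMasq": true,
            "ipam": {
                "type": "host-local",
                "subnet": "10.88.0.0/16",
                "routes": [{ "dst": "0.0.0.0/0" }]
            }
        }`
        loopback := `{
            "cniVersion": "0.2.0",
            "name": "lo",
            "type": "loopback"
        }`
        if err := os.MkdirAll(confDir, 0o755); err != nil {
            return err
        }
        if err := os.WriteFile(filepath.Join(confDir, "10-bridge.conf"), []byte(bridge), 0o644); err != nil {
            return err
        }
        return os.WriteFile(filepath.Join(confDir, "99-loopback.conf"), []byte(loopback), 0o644)
    }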
Signed-off-by: Samuel Ortiz <sameo@linux.intel.com>
The main purpose of these tests is to make sure that the log actually
contains output from the container. At the moment we don't test the
timestamps or the stream that's stated.
Signed-off-by: Aleksa Sarai <asarai@suse.de>
go install acts incredibly weirdly and rarely does what you want, not to
mention that it's just bad for distribution build setups. Switch back to
go build, which works properly and doesn't have half as many issues.
Fixes: 6c9628cdb1 ("Build and install from GOPATH")
Signed-off-by: Aleksa Sarai <asarai@suse.de>
When generating an ocid.conf for use when running tests, make sure we
don't pick up any defaults from an installed copy of ocid by forcing our
copy to read /dev/null as its configuration file.
Signed-off-by: Nalin Dahyabhai <nalin@redhat.com>
When we restart ocid as part of a test, wait for the daemon to exit when
we send it a SIGTERM, just as we do when we try to stop it for good.
Signed-off-by: Nalin Dahyabhai <nalin@redhat.com>
kubelet sends a request to create a container with an image ID (as
opposed to an image name). That ID comes from the ImageStatus response.
This patch fixes that by setting the image ID as well as the image name,
and fixes the lookup logic to resolve image IDs as well.
Found while running `make test-e2e-node`.
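Conceptually, the lookup now has to accept either form; a hedged sketch with
a hypothetical resolver interface (not the actual crio image service API):

    package server

    // imageResolver is a hypothetical view of the image store: it can look
    // an image up by name or by ID.
    type imageResolver interface {
        ImageByName(name string) (string, error)
        ImageByID(id string) (string, error)
    }

    // resolveImage accepts either an image name (e.g. "redis:latest") or the
    // image ID kubelet got back from ImageStatus.
    func resolveImage(r imageResolver, nameOrID string) (string, error) {
        if img, err := r.ImageByName(nameOrID); err == nil {
            return img, nil
        }
        // Not a known name; kubelet may have handed us an ID instead.
        return r.ImageByID(nameOrID)
    }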
Signed-off-by: Antonio Murdaca <runcom@redhat.com>
When running integration tests on the host, we can now specify
an alternate runtime by setting the RUNTIME variable. For example:
make localintegration RUNTIME=cc-oci-runtime
to use Clear Containers instead of runC.
Obviously, runC is still the default.
Signed-off-by: Samuel Ortiz <sameo@linux.intel.com>
When calling copyimg to pull down an image in the integration tests,
don't forget to pass in the test signature policy.
Signed-off-by: Nalin Dahyabhai <nalin@redhat.com>
The CRI doesn't expect us to implicitly pull an image if it isn't
already present before we're asked to use it to create a container, and
the tests no longer depend on us doing so, either.
Limit the logic which attempts to pull an image, if it isn't present, to
only pulling the configured "pause" image, since our use of that image
for running pod sandboxes is an implementation detail that our clients
can't be expected to know or care about. Include the name of the image
that we didn't pull in the error we return when we don't pull one.
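Sketched out, the remaining implicit-pull path is deliberately narrow
(hypothetical names; the actual error wording differs):

    package server

    import "fmt"

    // imagePuller is a stand-in for the daemon state this logic needs.
    type imagePuller struct {
        pauseImage string
        present    func(image string) bool
        pull       func(image string) error
    }

    // ensureImageExists only pulls the configured pause image on demand; any
    // other missing image is reported back by name instead of being pulled.
    func (p *imagePuller) ensureImageExists(image string) error {
        if p.present(image) {
            return nil
        }
        if image != p.pauseImage {
            return fmt.Errorf("image %s is not present and will not be pulled implicitly", image)
        }
        return p.pull(image)
    }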
Signed-off-by: Nalin Dahyabhai <nalin@redhat.com>
Add a basic tool for copying images from one location to another,
optionally adding a name if the destination is local storage. Ideally we could use
skopeo for this, but we don't want to build it.
Use it to initially populate the test/testdata/redis-image directory, if
it's not been cleaned out, with a copy of "docker://redis:latest", and
to copy it into the storage that ocid is using before we start up ocid.
Signed-off-by: Nalin Dahyabhai <nalin@redhat.com>
Add tests which exercise image pulling, listing, and removal. When running
tests, prepopulate the store with an image with the default infrastructure
container's name, using the locally-built "pause" binary, so that tests won't
have to pull it down from the network.
Signed-off-by: Nalin Dahyabhai <nalin@redhat.com>
Use containers/storage to store images, pod sandboxes, and containers.
A pod sandbox's infrastructure container has the same ID as the pod to
which it belongs, and all containers also keep track of their pod's ID.
The container configuration that we build using the data in a
CreateContainerRequest is stored in the container's ContainerDirectory
and ContainerRunDirectory.
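As a sketch of that idea, assuming the store exposes a ContainerDirectory
lookup along these lines:

    package server

    import (
        "encoding/json"
        "os"
        "path/filepath"

        cstorage "github.com/containers/storage"
    )

    // saveContainerConfig writes the generated container configuration into
    // the per-container directory that containers/storage manages for us.
    func saveContainerConfig(store cstorage.Store, containerID string, config interface{}) error {
        dir, err := store.ContainerDirectory(containerID)
        if err != nil {
            return err
        }
        b, err := json.Marshal(config)
        if err != nil {
            return err
        }
        return os.WriteFile(filepath.Join(dir, "config.json"), b, 0o644)
    }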
We catch SIGTERM and SIGINT, and when we receive either, we gracefully
exit the grpc loop. If we also think that there aren't any container
filesystems in use, we attempt to do a clean shutdown of the storage
driver.
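A minimal sketch of the shape of that handling (not the exact crio code):

    package server

    import (
        "os"
        "os/signal"
        "syscall"

        "google.golang.org/grpc"
    )

    // handleShutdown stops the gRPC loop when SIGTERM or SIGINT arrives, then
    // gives the caller a chance to shut the storage driver down cleanly.
    func handleShutdown(grpcServer *grpc.Server, shutdownStorage func()) {
        sig := make(chan os.Signal, 1)
        signal.Notify(sig, syscall.SIGTERM, syscall.SIGINT)
        go func() {
            <-sig
            // GracefulStop lets in-flight RPCs finish before Serve returns.
            grpcServer.GracefulStop()
            shutdownStorage()
        }()
    }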
The test harness now waits for ocid to exit before attempting to delete
the storage root directory.
Signed-off-by: Nalin Dahyabhai <nalin@redhat.com>
* Rename 'vendor/src' -> 'vendor'
* Ignore vendor/ instead of vendor/src/ for lint
* Rename 'cmd/client' -> 'cmd/ocic' to make it 'go install'able
* Rename 'cmd/server' -> 'cmd/ocid' to make it 'go install'able
* Update Makefile to build and install from GOPATH
* Update tests to locate ocid/ocic in GOPATH/bin
* Search for binaries in GOPATH/bin instead of PATH
* Install tools using `go get -u`, so they are updated on each run
Signed-off-by: Jonathan Yu <jawnsy@redhat.com>
We create 2 pods in 2 different networking namespaces and
check that we can ping one from the other.
Signed-off-by: Samuel Ortiz <sameo@linux.intel.com>
We create temporary CNI networking configurations and run 2
functional tests:
- Verify that the networking namespace interface has a valid CIDR
- Ping the networking namespace interface from the host
Signed-off-by: Samuel Ortiz <sameo@linux.intel.com>