Previously the wrapper changed directory to $WORKSPACE prior to
executing the ansible-playbook command. This had the unintended
consequence of preventing the use of relative paths. Fix this by using
absolute paths in the wrapper script instead of changing directories.
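For illustration only, a minimal sketch of the idea (the variable name
and ``realpath`` usage are assumptions, not the wrapper's exact
contents):

    # Resolve to an absolute path up front; never cd away from the
    # caller's working directory, so relative arguments keep working.
    WORKSPACE="$(realpath "${WORKSPACE:-$(mktemp -d)}")"
    ansible-playbook "$@"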
Signed-off-by: Chris Evich <cevich@redhat.com>
It's unsightly and hard to maintain collections of references and long
lists across multiple playbooks/include files. Centralize them all
in ``vars.yml``, then include that in all plays.
Minor: Update all files with a newline at the start and end.
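A minimal sketch of the pattern (the play shown is illustrative):

    # In every play, pull in the shared definitions:
    - hosts: all
      vars_files:
          - vars.yml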
Signed-off-by: Chris Evich <cevich@redhat.com>
Add a playbook to pull down the integration and e2e testing
logs/xml. By default they will appear in an ``artifacts`` subdirectory
of wherever the ``results.yml`` playbook lives. If the ``$WORKSPACE``
environment variable is set and non-empty, the subdirectory will be
created there instead.
Inside the ``artifacts`` directory, further sub-directories are created,
one for each subject's Ansible inventory name. Within those
sub-directories are all the collected logs from that host. In this way,
automation may simply archive the entire ``artifacts`` directory to
capture the important log files.
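A rough sketch of the collection pattern (paths, task name and the
``artifacts`` variable are illustrative, not the exact playbook
contents):

    - hosts: subjects
      tasks:
          - name: Pull logs into artifacts/<inventory_hostname>/
            fetch:
                src: "{{ item }}"
                dest: "{{ artifacts }}/{{ inventory_hostname }}/"
                flat: True
                fail_on_missing: False
            with_items:
                - /tmp/integration.log
                - /tmp/junit/junit_01.xml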
(Depends on PR #935)
Signed-off-by: Chris Evich <cevich@redhat.com>
Processing node-e2e.log into jUnit format is insane; it's chock-full
of terminal escape codes that would either need to be scraped/removed
or disabled somehow. Instead, take advantage of the
``e2e.go --report-dir=`` option. This causes native jUnit results to
be stored in the specified directory for later collection. The jUnit
results are also needed for the Google test grid.
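A sketch of how the option might be wired into the e2e task (the
command and variable names are assumptions):

    - name: Run the e2e suite, writing native jUnit XML for collection
      command: "{{ e2e_test_command }} --report-dir={{ artifacts }}/{{ inventory_hostname }}"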
Signed-off-by: Chris Evich <cevich@redhat.com>
When run by hand, it's much easier to spot things going wrong when
they're colored in red. Add an ansible.cfg to make that happen. It
also sets a default output log file (``$ARTIFACTS/main.log``) that
doesn't contain color codes.
When executing against multiple hosts, the output can sometimes become
difficult to read, especially with lots of async tasks. The callback
plugin reorganizes the console and log output, making it clearer which
host did what, and when.
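A sketch of the relevant ``ansible.cfg`` settings (exact values and
the plugin path are assumptions about the actual file):

    [defaults]
    # Colorize output so failures stand out in red when run by hand.
    force_color = True
    # Plain-text (color-free) record of the complete run.
    log_path = $ARTIFACTS/main.log
    # Custom plugin that groups console/log output per host.
    callback_plugins = ./callback_plugins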
Signed-off-by: Chris Evich <cevich@redhat.com>
There are no tasks that we need to run after the suite has finished,
like we do with the integration suite, so it does not make sense to
ignore the errors coming out of the e2e suite.
Signed-off-by: Steve Kuznetsov <skuznets@redhat.com>
Both the base and extras repos are required. Rather than fuss around
with subscription-manager, require two variables to be defined
pointing to the baseurls to use. Assert that these variables are set
and non-empty.
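A sketch of the assertion (the variable names are illustrative, not
necessarily the ones used):

    - name: Verify the base and extras repo baseurls were provided
      assert:
          that:
              - base_repo_baseurl is defined and base_repo_baseurl | trim != ''
              - extras_repo_baseurl is defined and extras_repo_baseurl | trim != ''
          msg: "Both repo baseurl variables must be set and non-empty"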
Signed-off-by: Chris Evich <cevich@redhat.com>
Depending on circumstances out of our control, the integration tests
may take longer than an hour (3600 seconds). Since the maximum time is
referenced in several places, define a variable with a larger value,
then reference it from the affected tasks.
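A sketch of the pattern (the variable name and value are illustrative):

    # vars.yml
    extended_test_timeout: 7200    # seconds; comfortably more than an hour

    # affected tasks reference it instead of a hard-coded 3600
    - name: Run integration tests
      command: make localintegration
      async: "{{ extended_test_timeout }}"
      poll: 30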
Signed-off-by: Chris Evich <cevich@redhat.com>
Previously, an internal playbook installed many extra
necessary/unnecessary packages before this playbook even started.
Since this is a terrible design, move all dependencies here so that
nothing is left implicit. This includes installing some dependencies
for Ansible itself (which must be done with raw commands).
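A sketch of that bootstrap step (the package names are assumptions
about what the Ansible modules need on the subject):

    - name: Install python bits Ansible needs (raw; no python modules yet)
      raw: $(type -P dnf || type -P yum) install -y python2 python2-dnf libselinux-python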
Signed-off-by: Chris Evich <cevich@redhat.com>
If running a playbook more than once, there's no need to re-bootstrap
the virtual environment. Assume that if the verified crio directory
already exists, it should be re-used (after re-asserting the hashes of
the requirements).
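A sketch of the re-use check (the directory name and helper function
are hypothetical):

    if [ -d "$WORKSPACE/.cri-o_venv" ]; then
        # Re-verify the pinned requirements before trusting the old venv.
        "$WORKSPACE/.cri-o_venv/bin/pip" install --require-hashes -r requirements.txt
    else
        bootstrap_virtualenv   # hypothetical helper that builds it from scratch
    fi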
Signed-off-by: Chris Evich <cevich@redhat.com>
Our CI tests on RHEL and Fedora, and we want to test the systemd
cgroup driver. However, the kubelet needs to run with the systemd
cgroup driver in tests as well, or tests fail. This patch fixes the
broken CI caused by a mismatched cgroup driver between CRI-O and the
kubelet.
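For reference, the two settings that must agree (where exactly they
are set in the test setup is an assumption):

    # /etc/crio/crio.conf ([crio.runtime] section)
    cgroup_manager = "systemd"

    # kubelet started for the tests
    kubelet --cgroup-driver=systemd ...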
Signed-off-by: Antonio Murdaca <runcom@redhat.com>
Because we need a working CNI plugin to set up a correct netns so
that sandbox_run can grab a working IP address.
Signed-off-by: Antonio Murdaca <runcom@redhat.com>
Add a new directory, /etc/crio/hooks.d, where packagers can drop a
JSON config file to specify a hook.
The JSON must specify a valid executable to run.
The JSON must also specify which stage(s) to run the hook in:
prestart, poststart, poststop.
The JSON must specify under which criteria the hook should be
launched:
  - if the container HasBindMounts
  - if the container cmd matches a list of regular expressions
  - if the container's annotations match a list of regular expressions
If any of these match, the hook will be launched.
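A hypothetical example of such a file (the exact key names may differ
from the implemented schema):

    {
        "hook": "/usr/libexec/oci/hooks.d/example-hook",
        "stages": ["prestart", "poststop"],
        "cmds": [".*/bin/httpd.*"],
        "annotations": ["^com\\.example\\..*"],
        "hasbindmounts": true
    }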
Signed-off-by: Daniel J Walsh <dwalsh@redhat.com>
The environment executing the test playbooks matters. Establish a
script to bootstrap a known-good, fixed-version python virtual
environment. Spell out the precise execution requirements in a
standard pip ``requirements.txt`` file, including version numbers and
hashes.
Upon executing the ``venv-ansible-playbook.sh`` wrapper, a virtual
environment is set up and contained within a fixed (or temporary)
directory, with full logs of the setup. If this is to be preserved
across executions, the ``$WORKSPACE`` environment variable must be set
and exported beforehand.
An example execution command line is provided in the script file.
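For example (the inventory and playbook paths are placeholders):

    # Preserve the virtual environment and its setup logs across runs
    export WORKSPACE=/path/to/persistent/workspace
    ./venv-ansible-playbook.sh -i inventory main.yml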
Signed-off-by: Chris Evich <cevich@redhat.com>
Without any swap space enabled, it's possible some intensive operation
can chew up all the memory on the test VM. Enabling swap space will
prevent this in minor cases, but could lead to disk-thrashing if the
memory demand is excessive.
Since the test system never reboots, using a file-backed swap should
suffice. Though not ideal, it's easy to set up and doesn't require any
interaction with the cloud that owns the VM or the job that created it.
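A sketch of the file-backed swap setup (the path and size are
illustrative):

    - name: Create and enable a file-backed swap
      shell: |
          dd if=/dev/zero of=/root/swapfile bs=1M count=4096
          chmod 0600 /root/swapfile
          mkswap /root/swapfile
          swapon /root/swapfile
      args:
          creates: /root/swapfile   # skip if it already exists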
Signed-off-by: Chris Evich <cevich@redhat.com>