Chris Evich 2017-11-03 19:55:19 +00:00 committed by GitHub
commit 54ed84b00b
20 changed files with 780 additions and 195 deletions

View file

@@ -1,21 +1,93 @@
# Fedora and RHEL Integration and End-to-End Tests
This directory contains playbooks to set up for and run the integration and
end-to-end tests for CRI-O on RHEL and Fedora hosts. Two entrypoints exist:
end-to-end tests for CRI-O on RHEL and Fedora hosts. The expected entry point
is ``main.yml``.
- `main.yml`: sets up the machine and runs tests
- `results.yml`: gathers test output to `/tmp/artifacts`
## Definitions:
When running `main.yml`, three tags are present:
Control-host: The system from which the ``ansible-playbook`` or
``venv-ansible-playbook.sh`` command is executed.
- `setup`: run all tasks to set up the system for testing
- `e2e`: build CRI-O from source and run Kubernetes node E2Es
- `integration`: build CRI-O from source and run the local integration suite
Subject-host(s): The target systems, on which actual playbook tasks are
being carried out.
The playbooks assume the following things about your system:
## Topology:
- on RHEL, the server and extras repos are configured and certs are present
- `ansible` is installed and the host is boot-strapped to allow `ansible` to run against it
- the `$GOPATH` is set and present for all shells (*e.g.* written in `/etc/environment`)
- CRI-O is checked out to the correct state at `${GOPATH}/src/github.com/kubernetes-incubator/cri-o`
- the user running the playbook has access to passwordless `sudo`
The control-host:
- May be the subject.
- Is based on either RHEL/CentOS 6 (or later), or Fedora 24 (or later).
- Runs ``main.yml`` from within the cri-o repository already in the
desired state for testing.
The subject-host(s):
- May be the control-host.
- May be executing the ``main.yml`` playbook against itself.
- If RHEL-like, has the ``server``, ``extras``, and ``EPEL`` repositories available
and enabled.
- Has remote password-less ssh configured for access by the control-host.
- When ssh-access is for a regular user, that user has password-less
sudo access to root.
## Runtime Requirements:
Execution of the ``main.yml`` playbook:
- Should occur through the ``cri-o/contrib/test/venv-ansible-playbook.sh`` wrapper.
- Execution may target localhost, or one or more subjects via standard Ansible
inventory arguments.
- Should use a combination (including none) of the following tags:
- ``setup``: Run all tasks to set up the system for testing. Final state must
be self-contained and independent from other tags (i.e. support
stage-caching).
- ``integration``: Assumes 'setup' previously completed successfully.
May be executed from cached-state of ``setup``.
Not required to execute coincident with other tags.
Must build CRI-O from source and run the
integration test suite.
- ``e2e``: Assumes 'setup' previously completed successfully. May be executed
from cached-state of ``setup``. Not required to execute coincident with
other tags. Must build CRI-O from source and run Kubernetes node
E2E tests.
Execution of the ``results.yml`` playbook:
- Assumes 'setup' previously completed successfully.
- Either ``integration``, ``e2e``, or other testing steps
must have completed (even if in failure).
- Must be the authoritative collector and producer of results for the run,
whether or not the control-host is the subject.
- Must gather all important/relevant artifacts into a central location.
- Must not duplicate, rename, or obfuscate any other results or artifact files
from this run or any others. Must not fail due to missing files or failed commands.
- May add test-run identification details so long as they don't interfere with
downstream processing or any of the above requirements.
- Must be executed using the ``venv-ansible-playbook.sh`` wrapper (because of
the ``junitparser`` requirement).
``cri-o/contrib/test/venv-ansible-playbook.sh`` Wrapper:
- May be executed on the control-host to both hide and version-lock playbook
execution dependencies, ansible and otherwise.
- Must accept all of the valid Ansible command-line options.
- Must sandbox dependencies under a python virtual environment ``.cri-o_venv``
with packages as specified in ``requirements.txt``.
- Requires the control-host to have the following fundamental dependencies installed
(or equivalent): ``python2-virtualenv gcc openssl-devel
redhat-rpm-config libffi-devel python-devel libselinux-python rsync
yum-utils python3-pycurl python-simplejson``.
For example:
Given a populated '/path/to/inventory' file, a control-host could run:
./venv-ansible-playbook.sh -i /path/to/inventory ./integration/main.yml
-or-
From a subject-host without an inventory:
./venv-ansible-playbook.sh -i localhost, ./integration/main.yml
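Tags may also be combined to limit a run, and ``results.yml`` (assumed here to
live alongside ``main.yml``) may be pointed at the same inventory afterward to
collect artifacts; both invocations below are illustrative, not required:
./venv-ansible-playbook.sh --tags setup,integration -i /path/to/inventory ./integration/main.yml
./venv-ansible-playbook.sh -i /path/to/inventory ./integration/results.yml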

View file

@@ -57,11 +57,6 @@ gather_subset = network
#host_key_checking = False
host_key_checking = False
# change the default callback
#stdout_callback = skippy
# enable additional callbacks
#callback_whitelist = timer, mail
# Determine whether includes in tasks and handlers are "static" by
# default. As of 2.0, includes are dynamic by default. Setting these
# values to True will make includes behave more like they did in the
@@ -165,7 +160,6 @@ deprecation_warnings = False
# instead of shelling out to the git command.
command_warnings = False
# set plugin path directories here, separate with colons
#action_plugins = /usr/share/ansible/plugins/action
#callback_plugins = /usr/share/ansible/plugins/callback
@@ -219,7 +213,6 @@ nocolor = 0
# When a playbook fails by default a .retry file will be created in ~/
# You can disable this feature by setting retry_files_enabled to False
# and you can change the location of the files by setting retry_files_save_path
#retry_files_enabled = False
retry_files_enabled = False
@@ -248,6 +241,7 @@ no_target_syslog = True
# worker processes. At the default of 0, no compression
# is used. This value must be an integer from 0 to 9.
#var_compression_level = 9
var_compression_level = 3
# controls what compression method is used for new-style ansible modules when
# they are sent to the remote system. The compression types depend on having
@@ -298,6 +292,15 @@ ssh_args = -o ControlMaster=auto -o ControlPersist=60s -o UserKnownHostsFile=/de
# Example:
# control_path = %(directory)s/%%h-%%r
#control_path = %(directory)s/ansible-ssh-%%h-%%p-%%r
# Using ssh's ControlPersist feature is desirable because of wide
# compatibility and not needing to mess with /etc/sudoers
# for pipelining (see below). Unfortunately, in cloud environments,
# auto-assigned VM hostnames tend to be rather long. Worse, in a CI
# context, the default home-directory path may also be lengthy. Fix
# this to a short name, so Ansible doesn't fall back to opening new
# connections for every task.
control_path = /tmp/crio-%%n-%%p
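# N/B: '%%' escapes a literal '%' in this file, so ssh sees %n and %p,
# its tokens for the original host name and port (explanatory note).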
# Enabling pipelining reduces the number of SSH operations required to
# execute a module on the remote server. This can result in a significant
@@ -308,7 +311,6 @@ ssh_args = -o ControlMaster=auto -o ControlPersist=60s -o UserKnownHostsFile=/de
# sudoers configurations that have requiretty (the default on many distros).
#
#pipelining = False
pipelining = True
# if True, make ansible use scp if the connection type is ssh
# (default is sftp)

View file

@@ -3,12 +3,12 @@
- name: clone bats source repo
git:
repo: "https://github.com/sstephenson/bats.git"
dest: "{{ ansible_env.GOPATH }}/src/github.com/sstephenson/bats"
dest: "{{ go_path }}/src/github.com/sstephenson/bats"
- name: install bats
command: "./install.sh /usr/local"
args:
chdir: "{{ ansible_env.GOPATH }}/src/github.com/sstephenson/bats"
chdir: "{{ go_path }}/src/github.com/sstephenson/bats"
- name: link bats
file:

View file

@@ -1,42 +1,42 @@
---
- name: stat the expected cri-o directory
- name: stat the Makefile in the expected cri-o directory
stat:
path: "{{ ansible_env.GOPATH }}/src/github.com/kubernetes-incubator/cri-o"
register: dir_stat
path: "{{ cri_o_dest_path }}/Makefile"
register: crio_stat
- name: expect cri-o to be cloned already
- name: Verify cri-o Makefile exists in expected location
fail:
msg: "Expected cri-o to be cloned at {{ ansible_env.GOPATH }}/src/github.com/kubernetes-incubator/cri-o but it wasn't!"
when: not dir_stat.stat.exists
msg: "Expected cri-o to be cloned at {{ cri_o_dest_path }}, but its 'Makefile' seems to be missing."
when: not crio_stat.stat.exists or not crio_stat.stat.isreg
- name: install cri-o tools
make:
target: install.tools
chdir: "{{ ansible_env.GOPATH }}/src/github.com/kubernetes-incubator/cri-o"
chdir: "{{ cri_o_dest_path }}"
- name: build cri-o
make:
chdir: "{{ ansible_env.GOPATH }}/src/github.com/kubernetes-incubator/cri-o"
chdir: "{{ cri_o_dest_path }}"
- name: install cri-o
make:
target: install
chdir: "{{ ansible_env.GOPATH }}/src/github.com/kubernetes-incubator/cri-o"
chdir: "{{ cri_o_dest_path }}"
- name: install cri-o systemd files
make:
target: install.systemd
chdir: "{{ ansible_env.GOPATH }}/src/github.com/kubernetes-incubator/cri-o"
chdir: "{{ cri_o_dest_path }}"
- name: install cri-o config
make:
target: install.config
chdir: "{{ ansible_env.GOPATH }}/src/github.com/kubernetes-incubator/cri-o"
chdir: "{{ cri_o_dest_path }}"
- name: install configs
copy:
src: "{{ ansible_env.GOPATH }}/src/github.com/kubernetes-incubator/cri-o/{{ item.src }}"
src: "{{ cri_o_dest_path }}/{{ item.src }}"
dest: "{{ item.dest }}"
remote_src: yes
with_items:

View file

@@ -3,7 +3,7 @@
- name: clone cri-tools source repo
git:
repo: "https://github.com/kubernetes-incubator/cri-tools.git"
dest: "{{ ansible_env.GOPATH }}/src/github.com/kubernetes-incubator/cri-tools"
dest: "{{ go_path }}/src/github.com/kubernetes-incubator/cri-tools"
version: "9ff5e8f78a4182ab8d5ba9bcccdda5f338600eab"
- name: install crictl
@@ -11,6 +11,6 @@
- name: link crictl
file:
src: "{{ ansible_env.GOPATH }}/bin/crictl"
src: "{{ go_path }}/bin/crictl"
dest: /usr/bin/crictl
state: link

View file

@@ -3,17 +3,17 @@
- name: clone kubernetes source repo
git:
repo: "https://github.com/runcom/kubernetes.git"
dest: "{{ ansible_env.GOPATH }}/src/k8s.io/kubernetes"
dest: "{{ go_path }}/src/k8s.io/kubernetes"
version: "cri-o-patched-1.8"
- name: install etcd
command: "hack/install-etcd.sh"
args:
chdir: "{{ ansible_env.GOPATH }}/src/k8s.io/kubernetes"
chdir: "{{ go_path }}/src/k8s.io/kubernetes"
- name: build kubernetes
make:
chdir: "{{ ansible_env.GOPATH }}/src/k8s.io/kubernetes"
chdir: "{{ go_path }}/src/k8s.io/kubernetes"
- name: Add custom cluster service file for the e2e testing
copy:
@@ -23,7 +23,7 @@
After=network-online.target
Wants=network-online.target
[Service]
WorkingDirectory={{ ansible_env.GOPATH }}/src/k8s.io/kubernetes
WorkingDirectory={{ go_path }}/src/k8s.io/kubernetes
ExecStart=/usr/local/bin/createcluster.sh
User=root
[Install]
@@ -35,7 +35,7 @@
content: |
#!/bin/bash
export PATH=/usr/local/go/bin:/usr/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/root/bin:{{ ansible_env.GOPATH }}/bin:{{ ansible_env.GOPATH }}/src/k8s.io/kubernetes/third_party/etcd:{{ ansible_env.GOPATH }}/src/k8s.io/kubernetes/_output/local/bin/linux/amd64/
export PATH=/usr/local/go/bin:/usr/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/root/bin:{{ go_path }}/bin:{{ go_path }}/src/k8s.io/kubernetes/third_party/etcd:{{ go_path }}/src/k8s.io/kubernetes/_output/local/bin/linux/amd64/
export CONTAINER_RUNTIME=remote
export CGROUP_DRIVER=systemd
export CONTAINER_RUNTIME_ENDPOINT='/var/run/crio.sock --runtime-request-timeout=5m'
@@ -47,17 +47,3 @@
export KUBE_ENABLE_CLUSTER_DNS=true
./hack/local-up-cluster.sh
mode: "u=rwx,g=rwx,o=x"
- name: Set kubernetes_provider to be local
lineinfile:
dest: /etc/environment
line: 'KUBERNETES_PROVIDER=local'
regexp: 'KUBERNETES_PROVIDER='
state: present
- name: Set KUBECONFIG
lineinfile:
dest: /etc/environment
line: 'KUBECONFIG=/var/run/kubernetes/admin.kubeconfig'
regexp: 'KUBECONFIG='
state: present

View file

@@ -3,17 +3,17 @@
- name: clone plugins source repo
git:
repo: "https://github.com/containernetworking/plugins.git"
dest: "{{ ansible_env.GOPATH }}/src/github.com/containernetworking/plugins"
dest: "{{ go_path }}/src/github.com/containernetworking/plugins"
version: "dcf7368eeab15e2affc6256f0bb1e84dd46a34de"
- name: build plugins
command: "./build.sh"
args:
chdir: "{{ ansible_env.GOPATH }}/src/github.com/containernetworking/plugins"
chdir: "{{ go_path }}/src/github.com/containernetworking/plugins"
- name: install plugins
copy:
src: "{{ ansible_env.GOPATH }}/src/github.com/containernetworking/plugins/bin/{{ item }}"
src: "{{ go_path }}/src/github.com/containernetworking/plugins/bin/{{ item }}"
dest: "/opt/cni/bin"
mode: "o=rwx,g=rx,o=rx"
remote_src: yes
@@ -33,18 +33,18 @@
- name: clone runcom plugins source repo
git:
repo: "https://github.com/runcom/plugins.git"
dest: "{{ ansible_env.GOPATH }}/src/github.com/containernetworking/plugins"
dest: "{{ go_path }}/src/github.com/containernetworking/plugins"
version: "custom-bridge"
force: yes
- name: build plugins
command: "./build.sh"
args:
chdir: "{{ ansible_env.GOPATH }}/src/github.com/containernetworking/plugins"
chdir: "{{ go_path }}/src/github.com/containernetworking/plugins"
- name: install custom bridge
copy:
src: "{{ ansible_env.GOPATH }}/src/github.com/containernetworking/plugins/bin/bridge"
src: "{{ go_path }}/src/github.com/containernetworking/plugins/bin/bridge"
dest: "/opt/cni/bin/bridge-custom"
mode: "o=rwx,g=rx,o=rx"
remote_src: yes

View file

@@ -3,18 +3,18 @@
- name: clone runc source repo
git:
repo: "https://github.com/opencontainers/runc.git"
dest: "{{ ansible_env.GOPATH }}/src/github.com/opencontainers/runc"
dest: "{{ go_path }}/src/github.com/opencontainers/runc"
version: "84a082bfef6f932de921437815355186db37aeb1"
- name: build runc
make:
params: BUILDTAGS="seccomp selinux"
chdir: "{{ ansible_env.GOPATH }}/src/github.com/opencontainers/runc"
chdir: "{{ go_path }}/src/github.com/opencontainers/runc"
- name: install runc
make:
target: "install"
chdir: "{{ ansible_env.GOPATH }}/src/github.com/opencontainers/runc"
chdir: "{{ go_path }}/src/github.com/opencontainers/runc"
- name: link runc
file:

View file

@@ -1,8 +1,5 @@
---
- name: clone build and install kubernetes
include: "build/kubernetes.yml"
- name: enable and start CRI-O
systemd:
name: crio
@@ -29,7 +26,7 @@
daemon_reload: yes
- name: wait for the cluster to be running
command: "{{ ansible_env.GOPATH }}/src/k8s.io/kubernetes/_output/bin/kubectl get service kubernetes --namespace default"
command: "{{ go_path }}/src/k8s.io/kubernetes/_output/bin/kubectl get service kubernetes --namespace default"
register: kube_poll
until: kube_poll | succeeded
retries: 100
@@ -51,10 +48,25 @@
&> {{ artifacts }}/e2e.log
# Fix vim syntax highlighting: "
- name: disable SELinux
command: setenforce 0
- block:
- name: Disable swap during e2e tests
command: 'swapoff -a'
when: not e2e_swap_enabled
- name: Disable SELinux during e2e tests
command: 'setenforce 0'
when: not e2e_selinux_enabled
- name: run e2e tests
shell: "{{ e2e_shell_cmd | regex_replace('\\s+', ' ') }}"
args:
chdir: "{{ ansible_env.GOPATH }}/src/k8s.io/kubernetes"
chdir: "{{ go_path }}/src/k8s.io/kubernetes"
always:
- name: Re-enable SELinux after e2e tests
command: 'setenforce 1'
- name: Re-enable swap after e2e tests
command: 'swapon -a'

View file

@@ -0,0 +1,27 @@
---
- name: Verify expectations
assert:
that:
- 'cri_o_dest_path is defined'
- 'cri_o_src_path is defined'
- name: The cri-o repository directory exists
file:
path: "{{ cri_o_dest_path }}"
state: directory
mode: 0777
- name: Synchronize cri-o from control-host to remote subject
synchronize:
archive: False
checksum: True
delete: True
dest: "{{ cri_o_dest_path }}/"
links: True
recursive: True
src: "{{ cri_o_src_path }}/"
times: True
# This task is excessively noisy, logging every change to every file :(
no_log: True

View file

@@ -16,28 +16,16 @@
- gofmt
- godoc
- name: ensure user profile exists
file:
path: "{{ ansible_user_dir }}/.profile"
state: touch
- name: set up PATH for Go toolchain and built binaries
lineinfile:
dest: "{{ ansible_user_dir }}/.profile"
line: 'PATH={{ ansible_env.PATH }}:{{ ansible_env.GOPATH }}/bin:/usr/local/go/bin'
regexp: '^PATH='
state: present
- name: set up directories
file:
path: "{{ item }}"
path: "{{ go_path }}/src/github.com/{{ item }}"
state: directory
with_items:
- "{{ ansible_env.GOPATH }}/src/github.com/containernetworking"
- "{{ ansible_env.GOPATH }}/src/github.com/kubernetes-incubator"
- "{{ ansible_env.GOPATH }}/src/github.com/k8s.io"
- "{{ ansible_env.GOPATH }}/src/github.com/sstephenson"
- "{{ ansible_env.GOPATH }}/src/github.com/opencontainers"
- "containernetworking"
- "kubernetes-incubator"
- "k8s.io"
- "sstephenson"
- "opencontainers"
- name: install Go tools and dependencies
shell: /usr/bin/go get -u "github.com/{{ item }}"

View file

@@ -1,7 +1,53 @@
- hosts: all
remote_user: root
---
- hosts: '{{ subjects | default("all") }}'
gather_facts: False # fact gathering requires low-level Ansible dependencies (installed below)
# Cannot use vars.yml - it references magic variables from setup module
tags:
- always
tasks:
- name: Ansible setup-module dependencies are installed, ignoring errors (setup runs next).
raw: $(type -P dnf || type -P yum) install -y python2 python2-dnf libselinux-python
ignore_errors: True
- name: Gather only networking facts for speed
setup:
gather_subset: network
- name: Variables from vars.yml are hauled in after setup
include_vars: "{{ playbook_dir }}/vars.yml"
- name: Global environment variables are defined, but can be overridden on a task-by-task basis.
set_fact:
extra_storage_opts: >
{%- if ansible_distribution in ["RedHat", "CentOS"] -%}
"--storage-opt overlay.override_kernel_check=1"
{%- else -%}
""
{%- endif -%}
environment_variables:
PATH: "/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:{{ go_path }}/bin:/usr/local/go/bin"
GOPATH: "{{ go_path }}"
KUBERNETES_PROVIDER: "local"
KUBECONFIG: "/var/run/kubernetes/admin.kubeconfig"
CGROUP_MANAGER: "cgroupfs"
STORAGE_OPTS: '--storage-driver=overlay {{ extra_storage_opts | default("") | trim }}'
- hosts: '{{ subjects | default("none") }}'
vars_files:
- "{{ playbook_dir }}/vars.yml"
tags:
- setup
tasks:
- name: CRI-O source is available on every subject
include: github.yml
- hosts: '{{ subjects | default("all") }}'
vars_files:
- "{{ playbook_dir }}/vars.yml"
environment: '{{ environment_variables }}'
tags:
- setup
tasks:
@@ -17,42 +63,34 @@
- name: clone build and install cri-tools
include: "build/cri-tools.yml"
- name: clone build and install kubernetes
include: "build/kubernetes.yml"
- name: clone build and install runc
include: "build/runc.yml"
- name: clone build and install networking plugins
include: "build/plugins.yml"
- hosts: all
remote_user: root
vars_files:
- "{{ playbook_dir }}/vars.yml"
tags:
- integration
- e2e
tasks:
- name: clone build and install cri-o
include: "build/cri-o.yml"
- hosts: all
remote_user: root
- hosts: '{{ subjects | default("all") }}'
vars_files:
- "{{ playbook_dir }}/vars.yml"
tags:
- integration
environment: '{{ environment_variables }}'
tasks:
- name: clone build and install kubernetes
include: "build/kubernetes.yml"
tags:
- e2e
- name: Build and install cri-o
include: "build/cri-o.yml"
tags:
- always
- name: run cri-o integration tests
include: test.yml
- hosts: all
remote_user: root
vars_files:
- "{{ playbook_dir }}/vars.yml"
tags:
- e2e
tasks:
- integration
- name: run k8s e2e tests
include: e2e.yml
tags:
- e2e

View file

@@ -1,62 +1,99 @@
---
# vim-syntax: ansible
- hosts: '{{ hosts | default("all") }}'
- hosts: '{{ subjects | default("all") }}'
vars_files:
- "{{ playbook_dir }}/vars.yml"
vars:
_result_filepaths: [] # do not use
_dstfnbuff: [] # do not use
environment: '{{ environment_variables }}'
tasks:
- name: The crio_integration_filepath is required
tags:
- integration
set_fact:
_result_filepaths: "{{ _result_filepaths + [crio_integration_filepath] }}"
- name: The crio_node_e2e_filepath is required
tags:
- e2e
set_fact:
_result_filepaths: "{{ _result_filepaths + [crio_node_e2e_filepath] }}"
- name: Verify expectations
assert:
that:
- 'result_dest_basedir | default(False, True)'
- '_result_filepaths | default(False, True)'
- '_dstfnbuff == []'
- 'results_fetched is undefined'
# Combined "is defined" and "isn't blank" check
- 'artifacts | default("", True) | trim | length'
- 'generated_artifacts | default("", True) | trim | length'
- 'extra_artifact_filepaths is defined'
- 'parsed_artifacts is defined'
- 'canonical_junit is defined'
- 'playbook_dir ~ "/../parse2junit.py" | is_file'
- name: Results directory exists
- name: artifacts directory exists
file:
path: "{{ result_dest_basedir }}"
path: "{{ artifacts }}"
state: directory
delegate_to: localhost
- name: destination file paths are buffered for overwrite-checking and jUnit conversion
set_fact:
_dstfnbuff: >
{{ _dstfnbuff |
union( [result_dest_basedir ~ "/" ~ inventory_hostname ~ "/" ~ item | basename] ) }}
with_items: '{{ _result_filepaths }}'
- name: Extra artifacts are collected, skipping missing files and clashing filenames
command: 'cp --no-clobber --verbose "{{ item }}" "{{ artifacts }}/"'
ignore_errors: True
with_items: '{{ extra_artifact_filepaths }}'
- name: Overwriting existing results assumed very very bad
fail:
msg: "Cowardly refusing to overwrite {{ item }}"
when: item | exists
delegate_to: localhost
with_items: '{{ _dstfnbuff }}'
- name: Generated artifacts directory exists
file:
path: "{{ artifacts }}/generated"
state: directory
# fetch module doesn't support directories
- name: Retrieve results from all hosts
- name: Generated artifacts are produced
shell: '({{ item.value }}) &> {{ item.key }} || true'
args:
chdir: "{{ artifacts }}/generated"
creates: "{{ artifacts }}/generated/{{ item.key }}"
ignore_errors: True
with_dict: "{{ generated_artifacts }}"
- name: Subject produces a single canonical jUnit file by combining parsed_artifacts
script: '{{ playbook_dir }}/../parse2junit.py {{ parsed_artifacts | join(" ") }} "{{ canonical_junit }}"'
args:
chdir: "{{ artifacts }}"
- hosts: '{{ control_host | default("none") }}'
vars_files:
- "{{ playbook_dir }}/vars.yml"
environment: '{{ environment_variables }}'
tasks:
- name: Verify expectations
assert:
that:
# Combined "is defined" and "isn't blank" check
- 'artifacts | default("", True) | trim | length'
- 'canonical_junit is defined'
- 'playbook_dir ~ "/../parse2junit.py" | is_file'
- name: A subdirectory exists for this subject's artifacts
file:
path: "{{ collection_dirpath }}"
state: directory
- name: Artifacts are retrieved from subjects
synchronize:
checksum: True # Don't rely on date/time being in sync
archive: False # Don't bother with permissions or times
checksum: True # Don't rely on date/time being in sync
copy_links: True # We want files, not links to files
recursive: True
mode: pull
dest: '{{ result_dest_basedir }}/{{ inventory_hostname }}/' # must end in /
src: '{{ item }}'
register: results_fetched
with_items: '{{ _result_filepaths }}'
dest: '{{ collection_dirpath }}'
src: '{{ artifacts }}'
rsync_opts: '--ignore-missing-args'
delegate_to: '{{ item }}'
with_inventory_hostnames:
- '{{ subjects | default("all:!localhost") }}'
- name: The paths of canonical_junit files from all subjects are found
find:
paths:
- '{{ collection_dirpath }}'
patterns: "{{ canonical_junit | basename }}"
recurse: True
register: result
- name: Found paths are joined together into a single string
set_fact:
result: '{{ result.files | map(attribute="path") | join(" ") }}'
- name: The control host produces a top-level junit, combining all subjects' canonical_junits
script: '{{ playbook_dir }}/../parse2junit.py {{ result }} "./{{ canonical_junit | basename }}"'
args:
chdir: "{{ collection_dirpath }}"
when: result | trim | length

View file

@@ -0,0 +1,42 @@
---
- name: Obtain current state of swap
command: swapon --noheadings --show=NAME
register: swapon
- name: Set up swap if none exists already, to prevent the kernel firing off the OOM killer
block:
- name: A unique swapfile path is generated
command: mktemp --tmpdir=/root swapfile_XXX
register: swapfilepath
- name: Swap file path is buffered
set_fact:
swapfilepath: '{{ swapfilepath.stdout | trim }}'
- name: Set swap file permissions
file:
path: "{{ swapfilepath }}"
owner: root
group: root
mode: 0600
- name: Swap file is padded to swapfileGB gigabytes & timed to help debug any performance problems
shell: 'time dd if=/dev/zero of={{ swapfilepath }} bs={{ swapfileGB }}M count=1024'
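# N/B: dd writes 1024 blocks of swapfileGB megabytes each, i.e. a
# swapfileGB-gigabyte file (8GB with the vars.yml default).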
- name: Swap file is formatted
command: 'mkswap {{ swapfilepath }}'
- name: Write swap entry in fstab
mount:
path: none
src: "{{ swapfilepath }}"
fstype: swap
opts: sw
state: present
- name: Mount swap
command: "swapon -a"
when: not (swapon.stdout_lines | length)

View file

@@ -1,5 +1,12 @@
---
- name: Update all packages
package:
name: '*'
state: latest
async: 600
poll: 10
- name: Make sure we have all required packages
package:
name: "{{ item }}"
@@ -54,7 +61,7 @@
- socat
- tar
- wget
async: 600
async: '{{ 20 * 60 }}'
poll: 10
- name: Add Btrfs for Fedora
@@ -63,22 +70,11 @@
state: present
with_items:
- btrfs-progs-devel
- python2-virtualenv
when: ansible_distribution in ['Fedora']
- name: Update all packages
package:
name: '*'
state: latest
async: 600
poll: 10
- name: Setup swap to prevent kernel firing off the OOM killer
shell: |
truncate -s 8G /root/swap && \
export SWAPDEV=$(losetup --show -f /root/swap | head -1) && \
mkswap $SWAPDEV && \
swapon $SWAPDEV && \
swapon --show
- name: Check / set up swap
include: "swap.yml"
- name: ensure directories exist as needed
file:

View file

@@ -5,24 +5,37 @@
- name: Make testing output verbose so it can be converted to xunit
lineinfile:
dest: "{{ ansible_env.GOPATH }}/src/k8s.io/kubernetes/hack/make-rules/test.sh"
dest: "{{ go_path }}/src/k8s.io/kubernetes/hack/make-rules/test.sh"
line: ' go test -v "${goflags[@]:+${goflags[@]}}" \'
regexp: ' go test \"\$'
state: present
- name: set extra storage options
set_fact:
extra_storage_opts: " --storage-opt overlay.override_kernel_check=1"
when: ansible_distribution == 'RedHat' or ansible_distribution == 'CentOS'
- name: ensure directory exists for e2e reports
- name: ensure directory exists for integration results
file:
path: "{{ artifacts }}"
state: directory
- block:
- name: Disable swap during integration tests
command: 'swapoff -a'
when: not integration_swap_enabled
- name: Disable SELinux during integration tests
command: 'setenforce 0'
when: not integration_selinux_enabled
- name: run integration tests
shell: "CGROUP_MANAGER=cgroupfs STORAGE_OPTIONS='--storage-driver=overlay{{ extra_storage_opts | default('') }}' make localintegration >& {{ artifacts }}/testout.txt"
shell: "make localintegration >& {{ artifacts }}/testout.txt"
args:
chdir: "{{ ansible_env.GOPATH }}/src/github.com/kubernetes-incubator/cri-o"
chdir: "{{ cri_o_dest_path }}"
async: 5400
poll: 30
always:
- name: Re-enable SELinux after integration tests
command: 'setenforce 1'
- name: Re-enable swap after integration tests
command: 'swapon -a'

View file

@@ -1,8 +1,61 @@
---
# When swap setup is necessary, make it this size
swapfileGB: 8
# When False, turn off all swapping on the system during the indicated test.
integration_swap_enabled: False
e2e_swap_enabled: True
# When False, disable SELinux on the system only during
# particular tests.
integration_selinux_enabled: True
e2e_selinux_enabled: False
# Base directory for all go-related source, build, and install.
go_path: "/go"
# Absolute path on control-host where the cri-o source exists
cri_o_src_path: "{{ playbook_dir }}/../../../"
# Absolute path on subjects where cri-o source is expected
cri_o_dest_path: "{{ go_path }}/src/github.com/kubernetes-incubator/cri-o"
# For results.yml: paths use rsync 'source' conventions
artifacts: "/tmp/artifacts" # Base-directory for collection
crio_integration_filepath: "{{ artifacts }}/testout.txt"
crio_node_e2e_filepath: "{{ artifacts }}/junit_01.xml"
result_dest_basedir: '{{ lookup("env","WORKSPACE") |
default(playbook_dir, True) }}/artifacts'
# List of absolute paths to extra filenames to collect into {{ artifacts }}.
# Non-existing files and any name-collisions will be skipped.
extra_artifact_filepaths:
- "/go/src/k8s.io/kubernetes/e2e.log"
- "/tmp/kubelet.log"
- "/tmp/kube-apiserver.log"
- "/tmp/kube-controller-manager.log"
- "/tmp/kube-proxy.log"
- "/tmp/kube-proxy.yaml"
- "/tmp/kube-scheduler.log"
# Mapping of generated artifact filenames and their commands. All
# are relative to {{ artifacts }}/generated/
generated_artifacts:
installed_packages.log: '$(type -P dnf || type -P yum) list installed'
avc_denials.log: 'ausearch -m AVC -m SELINUX_ERR -m USER_AVC'
filesystem.info: 'df -h && sudo pvs && sudo vgs && sudo lvs'
pid1.journal: 'journalctl _PID=1 --no-pager --all --lines=all'
crio.service: 'journalctl --unit crio.service --no-pager --all --lines=all'
customcluster.service: 'journalctl --unit customcluster.service --no-pager --all --lines=all'
systemd-journald.service: 'journalctl --unit systemd-journald.service --no-pager --all --lines=all'
# Use ``parse2junit.py`` on these artifact files
# to produce the '{{ canonical_junit }}' file.
parsed_artifacts:
- "./testout.txt"
- "./junit_01.xml"
# jUnit artifact file for ``parse2junit.py`` output
canonical_junit: "./junit_01.xml"
# When subject != localhost, synchronize "{{ artifacts }}" from
# all subjects into this directory on the control-host.
collection_dirpath: '{{ lookup("env","WORKSPACE") |
default(playbook_dir, True) }}/artifacts/{{ inventory_hostname }}'

contrib/test/parse2junit.py (new executable file, 313 lines)
View file

@@ -0,0 +1,313 @@
#!/usr/bin/env python2
# encoding: utf-8
# N/B: Assumes script was called from cri-o repository on the test subject,
# with a remote name of 'origin'. It's executed under the results.yml
# playbook, which in turn was executed by venv-ansible-playbook.sh
# i.e. everything in requirements.txt is already available
#
# Also Requires:
# python 2.7+
# git
import os
import sys
import argparse
import re
import contextlib
import uuid
from socket import gethostname
import subprocess
from tempfile import NamedTemporaryFile
# Ref: https://github.com/gastlygem/junitparser
import junitparser
# Parser function suffixes and regex patterns of supported input filenames
TEST_TYPE_FILE_RE = dict(integration=re.compile(r'testout\.txt'),
e2e=re.compile(r'junit_\d+.xml'))
INTEGRATION_TEST_COUNT_RE = re.compile(r'^(?P<start>\d+)\.\.(?P<end>\d+)')
INTEGRATION_SKIP_RE = re.compile(r'^(?P<stat>ok|not ok) (?P<tno>\d+) # skip'
r' (?P<sreason>\(.+\)) (?P<desc>.+)')
INTEGRATION_RESULT_RE = re.compile(r'^(?P<stat>ok|not ok) (?P<tno>\d+) (?P<desc>.+)')
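# For reference, TAP-style lines the patterns above are written against
# (illustrative examples, not real output):
#   1..42                               -> INTEGRATION_TEST_COUNT_RE
#   ok 3 # skip (needs root) some test  -> INTEGRATION_SKIP_RE
#   not ok 4 some other test            -> INTEGRATION_RESULT_RE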
def d(msg):
if msg:
try:
sys.stderr.write('{}\n'.format(msg))
sys.stderr.flush()
except IOError:
pass
@contextlib.contextmanager
def if_match(line, regex):
# __enter__
match = regex.search(line)
if match:
yield match
else:
yield None
# __exit__
pass # Do nothing
def if_case_add(suite, line_parser, *parser_args, **parser_dargs):
case = line_parser(*parser_args, **parser_dargs)
if case:
suite.add_testcase(case)
def parse_integration_line(line, classname):
name_fmt = "[CRI-O] [integration] #{} {}"
with if_match(line, INTEGRATION_SKIP_RE) as match:
if match:
name = name_fmt.format(match.group('tno'), match.group('desc'))
case = junitparser.TestCase(name)
case.classname = classname
case.result = junitparser.Skipped(message=match.group('sreason'))
case.system_err = match.group('stat')
return case
with if_match(line, INTEGRATION_RESULT_RE) as match:
if match:
name = name_fmt.format(match.group('tno'), match.group('desc'))
case = junitparser.TestCase(name)
case.classname = classname
case.system_err = match.group('stat')
if match.group('stat') == 'not ok':
# Can't think of anything better to put here
case.result = junitparser.Failed('not ok')
elif not match.group('stat') == 'ok':
case.result = junitparser.Error(match.group('stat'))
return case
return None
# N/B: name suffix corresponds to key in TEST_TYPE_FILE_RE
def parse_integration(input_file_path, hostname):
suite = junitparser.TestSuite('CRI-O Integration suite')
suite.hostname = hostname
suite_stdout = []
classname = 'CRI-O integration suite'
n_tests = -1 # No tests ran
d(" Processing integration results for {}".format(suite.hostname))
with open(input_file_path) as testout_txt:
for line in testout_txt:
line = line.strip()
suite_stdout.append(line) # Basically a copy of the file
# n_tests must come first
with if_match(line, INTEGRATION_TEST_COUNT_RE) as match:
if match:
n_tests = int(match.group('end')) - int(match.group('start')) + 1
d(" Collecting results from {} tests".format(n_tests))
break
if n_tests > 0:
for line in testout_txt:
line = line.strip()
suite_stdout.append(line)
if_case_add(suite, parse_integration_line,
line=line, classname=classname)
else:
d(" Uh oh, no results found, skipping.")
return None
# TODO: No date/time recorded in file
#stat = os.stat(input_file_path)
#test_start = stat.st_mtime
#test_end = stat.st_atime
#duration = test_end - test_start
suite.time = 0
suite.add_property('stdout', '\n'.join(suite_stdout))
d(" Parsed {} integration test cases".format(len(suite)))
return suite
def flatten_testsuites(testsuites):
# The jUnit format allows nesting testsuites, squash into a list for simplicity
if isinstance(testsuites, junitparser.TestSuite):
testsuite = testsuites # for clarity
return [testsuite]
result = []
for testsuite in testsuites:
if isinstance(testsuite, junitparser.TestSuite):
result.append(testsuite)
elif isinstance(testsuite, junitparser.JUnitXml):
nested_suites = flatten_testsuites(testsuite)
if nested_suites:
result += nested_suites
return result
def find_k8s_e2e_suite(testsuites):
testsuites = flatten_testsuites(testsuites)
for testsuite in testsuites:
if testsuite.name and 'Kubernetes e2e' in testsuite.name:
return testsuite
# Name could be None or wrong, check classnames of all tests
classnames = ['Kubernetes e2e' in x.classname.strip() for x in testsuite]
if all(classnames):
return testsuite
return None
# N/B: name suffix corresponds to key in TEST_TYPE_FILE_RE
def parse_e2e(input_file_path, hostname):
# Load junit_xx.xml file, update contents with more identifying info.
try:
testsuites = junitparser.JUnitXml.fromfile(input_file_path)
suite = find_k8s_e2e_suite(testsuites)
except junitparser.JUnitXmlError, xcept:
d(" Error parsing {}, skipping it.: {}".format(input_file_path, xcept))
return None
if not suite:
d(" Failed to find any e2e results in {}".format(input_file_path))
return None
if not suite.hostname:
suite.hostname = hostname
if not suite.name:
suite.name = 'Kubernetes e2e suite'
d(" Processing e2e results for {}".format(suite.hostname))
for testcase in suite:
if not testcase.classname:
d(" Adding missing classname to case {}".format(testcase.name))
testcase.classname = "Kubernetes e2e suite"
d(" Parsed {} e2e test cases".format(len(suite)))
if not suite.time:
stat = os.stat(input_file_path)
test_start = stat.st_ctime
test_end = stat.st_mtime
duration = test_end - test_start
if duration:
suite.time = duration
return testsuites # Retain original structure
def parse_test_output(ifps, results_name, hostname):
time_total = 0
testsuites = junitparser.JUnitXml(results_name)
# Cheat, lookup parser function name suffix from global namespace
_globals = globals()
for input_file_path in ifps:
if not os.path.isfile(input_file_path):
d(" The file {} doesn't appear to exist, skipping it.".format(input_file_path))
continue
parser = None
for tname, regex in TEST_TYPE_FILE_RE.items():
if regex.search(input_file_path):
parser = _globals.get('parse_{}'.format(tname))
break
else:
d(" Could not find parser to handle input"
" file {}, skipping.".format(input_file_path))
continue
d(" Parsing {} using {}".format(input_file_path, parser))
for parsed_testsuite in flatten_testsuites(parser(input_file_path, hostname)):
d(" Adding {} suite for {}".format(parsed_testsuite.name, parsed_testsuite.hostname))
testsuites.add_testsuite(parsed_testsuite)
if parsed_testsuite.time:
time_total += parsed_testsuite.time
testsuites.time = time_total
return testsuites
def make_host_name():
subject = '{}'.format(gethostname())
# Origin-CI doesn't use very distinguishable hostnames :(
if 'openshiftdevel' in subject or 'ip-' in subject:
try:
with open('/etc/machine-id') as machineid:
subject = 'machine-id-{}'.format(machineid.read().strip())
except IOError: # Worst-case, but we gotta pick sumpfin
subject = 'uuid-{}'.format(uuid.uuid4())
return subject
def make_results_name(argv):
script_dir = os.path.dirname(argv[0])
spco = lambda cmd: subprocess.check_output(cmd.split(' '),
stderr=subprocess.STDOUT,
close_fds=True,
cwd=script_dir,
universal_newlines=True)
pr_no = None
head_id = None
try:
head_id = spco('git rev-parse HEAD').strip()  # strip newline so substring match below works
for line in spco('git ls-remote origin refs/pull/[0-9]*/head').strip().splitlines():
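# Each line looks like '<sha>\trefs/pull/<number>/head'; when HEAD's sha
# matches, the PR number is the third '/'-separated field of the ref.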
cid, ref = line.strip().split(None, 1)
if head_id in cid:
pr_no = ref.strip().split('/')[2]
break
except subprocess.CalledProcessError:
pass
if pr_no:
return "CRI-O Pull Request {}".format(pr_no)
elif head_id:
return "CRI-O Commit {}".format(head_id[:8])
else: # Worst-case, but we gotta pick sumpfin
return "CRI-O Run ID {}".format(uuid.uuid4())
def main(argv):
reload(sys)
sys.setdefaultencoding('utf8')
parser = argparse.ArgumentParser(epilog='Note: The parent directory of input files is '
'assumed to be the test suite name')
parser.add_argument('-f', '--fqdn',
help="Alternative hostname to add to results if none present",
default=make_host_name())
parser.add_argument('-b', '--backup', action="store_true",
help="If output file name matches any input file, backup with"
" 'original_' prefix",
default=False)
parser.add_argument('ifps', nargs='+',
help='Input file paths to test output from {}.'
''.format(TEST_TYPE_FILE_RE.keys()))
parser.add_argument('ofp', nargs=1,
default='-',
help='Output file path for jUnit XML, or "-" for stdout')
options = parser.parse_args(argv[1:])
ofp = options.ofp[0] # nargs==1 still puts it into a list
results_name = make_results_name(argv)
d("Parsing {} to {}".format(options.ifps, ofp))
d("Using results name: {} and hostname {}".format(results_name, options.fqdn))
# Parse all results
new_testsuites = parse_test_output(options.ifps, results_name, options.fqdn)
if not len(new_testsuites):
d("Uh oh, doesn't look like anything was processed. Bailing out")
return None
d("Parsed {} suites".format(len(new_testsuites)))
# etree can't handle files w/o filenames :(
tmp = NamedTemporaryFile(suffix='.tmp', prefix=results_name, bufsize=1)
new_testsuites.write(tmp.name)
tmp.seek(0)
del new_testsuites # close up any open files
if ofp == '-':
sys.stdout.write('\n{}\n'.format(tmp.read()))
else:
for ifp in options.ifps:
if not os.path.isfile(ofp):
break
if os.path.samefile(ifp, ofp):
if not options.backup:
d("Warning {} will be will be combined with other input files."
"".format(ofp))
break
dirname = os.path.dirname(ofp)
basename = os.path.basename(ofp)
origname = 'original_{}'.format(basename)
os.rename(ofp, os.path.join(dirname, origname))
break
with open(ofp, 'w', 1) as output_file:
output_file.truncate(0)
output_file.flush()
d("Writing {}".format(ofp))
output_file.write(tmp.read())
if __name__ == '__main__':
main(sys.argv)

View file

@@ -52,3 +52,7 @@ virtualenv==15.1.0 --hash=sha256:39d88b533b422825d644087a21e78c45cf5af0ef7a99a1f
--hash=sha256:02f8102c2436bb03b3ee6dede1919d1dac8a427541652e5ec95171ec8adbc93a
pip==9.0.1 --hash=sha256:690b762c0a8460c303c089d5d0be034fb15a5ea2b75bdf565f40421f542fefb0
future==0.16.0 --hash=sha256:e39ced1ab767b5936646cedba8bcce582398233d6a627067d4c6a454c90cfedb
junitparser==1.0.0 --hash=sha256:5b0f0ffeef3548878b5ae2cac40b5b128ae18337e2a260a8265f5519b52c907c

View file

@@ -13,8 +13,9 @@
# All errors are fatal
set -e
SCRIPT_PATH=`realpath $(dirname $0)`
REQUIREMENTS="$SCRIPT_PATH/requirements.txt"
export SCRIPT_PATH=`realpath $(dirname $0)`
export REQUIREMENTS="$SCRIPT_PATH/requirements.txt"
export ANSIBLE_CONFIG="$SCRIPT_PATH/integration/ansible.cfg"
echo
@@ -47,7 +48,8 @@ else
fi
# Create a directory to contain logs and test artifacts
export ARTIFACTS=$(mkdir -pv $WORKSPACE/artifacts | tail -1 | cut -d \' -f 2)
[ -n "$ARTIFACTS" ] || export ARTIFACTS="$WORKSPACE/artifacts"
[ -d "$ARTIFACTS" ] || mkdir -pv "$ARTIFACTS"
[ -d "$ARTIFACTS" ] || exit 3
# All command failures from now on are fatal