linux-kselftest-kunit-next-6.2-rc1

This KUnit next update for Linux 6.2-rc1 consists of several enhancements,
 fixes, clean-ups, documentation updates, improvements to logging and KTAP
 compliance of KUnit test output:
 
 - log numbers in decimal and hex
 - parse KTAP compliant test output
 - allow conditionally exposing static symbols to tests
   when KUNIT is enabled
 - make static symbols visible during kunit testing
 - clean-ups to remove unused structure definition
 -----BEGIN PGP SIGNATURE-----
 
 iQIzBAABCgAdFiEEPZKym/RZuOCGeA/kCwJExA0NQxwFAmOXnPYACgkQCwJExA0N
 Qxwf9RAAwdBKxgPZuKZ40v69Jm8YhaO3vyKUkyYRH59/HQGFUHMA2f2ONez4krEX
 iXPgBFQ+7pB63FdgQi2HSg2z/u3xY02AaGgZGXDuNJDmg2xYjNDfZ0GjN6tuavlN
 Liz01DGZkjZoVVXM6oV2xT8woBg/0BbdkKNL1OBO9RBZFHzwDryRzfXmQb8cKlNr
 S+tkeZTlCA/s7UW2LNj4VlTzn6wgni4Y9gSk4wbQmSGWn3OX3rHaqAb7GiZ/yPGb
 1WjbMeE8FwyydLU40aOZZ8V6AJRiw5VGPJyFzWJyWZ21xOgN9Z95b+I36z8RXraA
 i/wnazO/FJsrhzvKL83rQkrSW6bpmVY+jGvk+L6deFM6Ro/vEWHJ4DgyKsIdMiJy
 gUM1Q69szptq+ZRHGrZWPlVONBkBXMOL+fePbCbGcMzlaEAS/zsFYW9IBKcvLzwP
 uHzzMS/cMmSUq52ZIyl9jhHQFVSoErCpJwQjAaZBQpYXPmE7yLcZItxnCaSUQTay
 bRwyps5ph5md0oJTTFJKZ4Zx5FJ2ItjbC4y9BIexb9gYRDdRq723ivDoVENZl/Zk
 DFIV95AY+mSxadS5vFagwWwX0ZN0KFKxeM8Tw7VTimal/0Sbglqp+oflsuKFD6JQ
 b5HUixYifKMbWxkH5xrUb8NdjmBj561TYa8U4N+j3oOiaPYu5Ss=
 =UQNn
 -----END PGP SIGNATURE-----

Merge tag 'linux-kselftest-kunit-next-6.2-rc1' of git://git.kernel.org/pub/scm/linux/kernel/git/shuah/linux-kselftest

Pull KUnit updates from Shuah Khan:
 "Several enhancements, fixes, clean-ups, documentation updates,
  improvements to logging and KTAP compliance of KUnit test output:

   - log numbers in decimal and hex

   - parse KTAP compliant test output

   - allow conditionally exposing static symbols to tests when KUNIT is
     enabled

   - make static symbols visible during kunit testing

   - clean-ups to remove unused structure definition"

* tag 'linux-kselftest-kunit-next-6.2-rc1' of git://git.kernel.org/pub/scm/linux/kernel/git/shuah/linux-kselftest: (29 commits)
  Documentation: dev-tools: Clarify requirements for result description
  apparmor: test: make static symbols visible during kunit testing
  kunit: add macro to allow conditionally exposing static symbols to tests
  kunit: tool: make parser preserve whitespace when printing test log
  Documentation: kunit: Fix "How Do I Use This" / "Next Steps" sections
  kunit: tool: don't include KTAP headers and the like in the test log
  kunit: improve KTAP compliance of KUnit test output
  kunit: tool: parse KTAP compliant test output
  mm: slub: test: Use the kunit_get_current_test() function
  kunit: Use the static key when retrieving the current test
  kunit: Provide a static key to check if KUnit is actively running tests
  kunit: tool: make --json do nothing if --raw_output is set
  kunit: tool: tweak error message when no KTAP found
  kunit: remove KUNIT_INIT_MEM_ASSERTION macro
  Documentation: kunit: Remove redundant 'tips.rst' page
  Documentation: KUnit: reword description of assertions
  Documentation: KUnit: make usage.rst a superset of tips.rst, remove duplication
  kunit: eliminate KUNIT_INIT_*_ASSERT_STRUCT macros
  kunit: tool: remove redundant file.close() call in unit test
  kunit: tool: unit tests all check parser errors, standardize formatting a bit
  ...
Linus Torvalds 2022-12-12 16:42:57 -08:00
commit e2ed78d5d9
30 changed files with 900 additions and 699 deletions


@@ -80,8 +80,8 @@ have the number 1 and the number then must increase by 1 for each additional
 subtest within the same test at the same nesting level.
 The description is a description of the test, generally the name of
-the test, and can be any string of words (can't include #). The
-description is optional, but recommended.
+the test, and can be any string of characters other than # or a
+newline. The description is optional, but recommended.
 The directive and any diagnostic data is optional. If either are present, they
 must follow a hash sign, "#".
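
For illustration, result lines that satisfy these rules could look like the following (the test names and the SKIP reason are made up, not taken from the patch):

ok 1 test_case_name
not ok 2 parser handles empty input
ok 3 hardware_init_test # SKIP required hardware not present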


@@ -4,16 +4,17 @@
 KUnit Architecture
 ==================
-The KUnit architecture can be divided into two parts:
+The KUnit architecture is divided into two parts:
 - `In-Kernel Testing Framework`_
-- `kunit_tool (Command Line Test Harness)`_
+- `kunit_tool (Command-line Test Harness)`_
 In-Kernel Testing Framework
 ===========================
 The kernel testing library supports KUnit tests written in C using
-KUnit. KUnit tests are kernel code. KUnit does several things:
+KUnit. These KUnit tests are kernel code. KUnit performs the following
+tasks:
 - Organizes tests
 - Reports test results
@@ -22,19 +23,17 @@ KUnit. KUnit tests are kernel code. KUnit does several things:
 Test Cases
 ----------
-The fundamental unit in KUnit is the test case. The KUnit test cases are
-grouped into KUnit suites. A KUnit test case is a function with type
-signature ``void (*)(struct kunit *test)``.
-These test case functions are wrapped in a struct called
-struct kunit_case.
+The test case is the fundamental unit in KUnit. KUnit test cases are organised
+into suites. A KUnit test case is a function with type signature
+``void (*)(struct kunit *test)``. These test case functions are wrapped in a
+struct called struct kunit_case.
 .. note:
    ``generate_params`` is optional for non-parameterized tests.
-Each KUnit test case gets a ``struct kunit`` context
-object passed to it that tracks a running test. The KUnit assertion
-macros and other KUnit utilities use the ``struct kunit`` context
-object. As an exception, there are two fields:
+Each KUnit test case receives a ``struct kunit`` context object that tracks a
+running test. The KUnit assertion macros and other KUnit utilities use the
+``struct kunit`` context object. As an exception, there are two fields:
 - ``->priv``: The setup functions can use it to store arbitrary test
   user data.
@@ -77,12 +76,13 @@ Executor
 The KUnit executor can list and run built-in KUnit tests on boot.
 The Test suites are stored in a linker section
-called ``.kunit_test_suites``. For code, see:
-https://git.kernel.org/pub/scm/linux/kernel/git/torvalds/linux.git/tree/include/asm-generic/vmlinux.lds.h?h=v5.15#n945.
+called ``.kunit_test_suites``. For the code, see ``KUNIT_TABLE()`` macro
+definition in
+`include/asm-generic/vmlinux.lds.h <https://git.kernel.org/pub/scm/linux/kernel/git/torvalds/linux.git/tree/include/asm-generic/vmlinux.lds.h?h=v6.0#n950>`_.
 The linker section consists of an array of pointers to
 ``struct kunit_suite``, and is populated by the ``kunit_test_suites()``
-macro. To run all tests compiled into the kernel, the KUnit executor
-iterates over the linker section array.
+macro. The KUnit executor iterates over the linker section array in order to
+run all the tests that are compiled into the kernel.
 .. kernel-figure:: kunit_suitememorydiagram.svg
    :alt: KUnit Suite Memory
@@ -90,17 +90,17 @@ iterates over the linker section array.
 KUnit Suite Memory Diagram
 On the kernel boot, the KUnit executor uses the start and end addresses
-of this section to iterate over and run all tests. For code, see:
-https://git.kernel.org/pub/scm/linux/kernel/git/torvalds/linux.git/tree/lib/kunit/executor.c
+of this section to iterate over and run all tests. For the implementation of the
+executor, see
+`lib/kunit/executor.c <https://git.kernel.org/pub/scm/linux/kernel/git/torvalds/linux.git/tree/lib/kunit/executor.c>`_.
 When built as a module, the ``kunit_test_suites()`` macro defines a
 ``module_init()`` function, which runs all the tests in the compilation
 unit instead of utilizing the executor.
 In KUnit tests, some error classes do not affect other tests
 or parts of the kernel, each KUnit case executes in a separate thread
-context. For code, see:
-https://git.kernel.org/pub/scm/linux/kernel/git/torvalds/linux.git/tree/lib/kunit/try-catch.c?h=v5.15#n58
+context. See the ``kunit_try_catch_run()`` function in
+`lib/kunit/try-catch.c <https://git.kernel.org/pub/scm/linux/kernel/git/torvalds/linux.git/tree/lib/kunit/try-catch.c?h=v5.15#n58>`_.
 Assertion Macros
 ----------------
@@ -111,37 +111,36 @@ All expectations/assertions are formatted as:
 - ``{EXPECT|ASSERT}`` determines whether the check is an assertion or an
   expectation.
-  - For an expectation, if the check fails, marks the test as failed
-    and logs the failure.
-  - An assertion, on failure, causes the test case to terminate
-    immediately.
+  In the event of a failure, the testing flow differs as follows:
+  - For expectations, the test is marked as failed and the failure is logged.
+  - Failing assertions, on the other hand, result in the test case being
+    terminated immediately.
-    - Assertions call function:
+    - Assertions call the function:
       ``void __noreturn kunit_abort(struct kunit *)``.
-    - ``kunit_abort`` calls function:
+    - ``kunit_abort`` calls the function:
       ``void __noreturn kunit_try_catch_throw(struct kunit_try_catch *try_catch)``.
-    - ``kunit_try_catch_throw`` calls function:
+    - ``kunit_try_catch_throw`` calls the function:
       ``void kthread_complete_and_exit(struct completion *, long) __noreturn;``
      and terminates the special thread context.
 - ``<op>`` denotes a check with options: ``TRUE`` (supplied property
-  has the boolean value “true”), ``EQ`` (two supplied properties are
+  has the boolean value "true"), ``EQ`` (two supplied properties are
   equal), ``NOT_ERR_OR_NULL`` (supplied pointer is not null and does not
-  contain an “err” value).
+  contain an "err" value).
 - ``[_MSG]`` prints a custom message on failure.
 Test Result Reporting
 ---------------------
-KUnit prints test results in KTAP format. KTAP is based on TAP14, see:
-https://github.com/isaacs/testanything.github.io/blob/tap14/tap-version-14-specification.md.
-KTAP (yet to be standardized format) works with KUnit and Kselftest.
-The KUnit executor prints KTAP results to dmesg, and debugfs
-(if configured).
+KUnit prints the test results in KTAP format. KTAP is based on TAP14, see
+Documentation/dev-tools/ktap.rst.
+KTAP works with KUnit and Kselftest. The KUnit executor prints KTAP results to
+dmesg, and debugfs (if configured).
 Parameterized Tests
 -------------------
@@ -150,33 +149,35 @@ Each KUnit parameterized test is associated with a collection of
 parameters. The test is invoked multiple times, once for each parameter
 value and the parameter is stored in the ``param_value`` field.
 The test case includes a KUNIT_CASE_PARAM() macro that accepts a
-generator function.
-The generator function is passed the previous parameter and returns the next
-parameter. It also provides a macro to generate common-case generators based on
-arrays.
+generator function. The generator function is passed the previous parameter
+and returns the next parameter. It also includes a macro for generating
+array-based common-case generators.
-kunit_tool (Command Line Test Harness)
+kunit_tool (Command-line Test Harness)
 ======================================
-kunit_tool is a Python script ``(tools/testing/kunit/kunit.py)``
-that can be used to configure, build, exec, parse and run (runs other
-commands in order) test results. You can either run KUnit tests using
-kunit_tool or can include KUnit in kernel and parse manually.
+``kunit_tool`` is a Python script, found in ``tools/testing/kunit/kunit.py``. It
+is used to configure, build, execute, parse test results and run all of the
+previous commands in correct order (i.e., configure, build, execute and parse).
+You have two options for running KUnit tests: either build the kernel with KUnit
+enabled and manually parse the results (see
+Documentation/dev-tools/kunit/run_manual.rst) or use ``kunit_tool``
+(see Documentation/dev-tools/kunit/run_wrapper.rst).
 - ``configure`` command generates the kernel ``.config`` from a
   ``.kunitconfig`` file (and any architecture-specific options).
-  For some architectures, additional config options are specified in the
-  ``qemu_config`` Python script
-  (For example: ``tools/testing/kunit/qemu_configs/powerpc.py``).
+  The Python scripts available in ``qemu_configs`` folder
+  (for example, ``tools/testing/kunit/qemu_configs/powerpc.py``) contains
+  additional configuration options for specific architectures.
   It parses both the existing ``.config`` and the ``.kunitconfig`` files
-  and ensures that ``.config`` is a superset of ``.kunitconfig``.
-  If this is not the case, it will combine the two and run
-  ``make olddefconfig`` to regenerate the ``.config`` file. It then
-  verifies that ``.config`` is now a superset. This checks if all
-  Kconfig dependencies are correctly specified in ``.kunitconfig``.
-  ``kunit_config.py`` includes the parsing Kconfigs code. The code which
-  runs ``make olddefconfig`` is a part of ``kunit_kernel.py``. You can
-  invoke this command via: ``./tools/testing/kunit/kunit.py config`` and
+  to ensure that ``.config`` is a superset of ``.kunitconfig``.
+  If not, it will combine the two and run ``make olddefconfig`` to regenerate
+  the ``.config`` file. It then checks to see if ``.config`` has become a superset.
+  This verifies that all the Kconfig dependencies are correctly specified in the
+  file ``.kunitconfig``. The ``kunit_config.py`` script contains the code for parsing
+  Kconfigs. The code which runs ``make olddefconfig`` is part of the
+  ``kunit_kernel.py`` script. You can invoke this command through:
+  ``./tools/testing/kunit/kunit.py config`` and
   generate a ``.config`` file.
 - ``build`` runs ``make`` on the kernel tree with required options
   (depends on the architecture and some options, for example: build_dir)
@@ -184,8 +185,8 @@ kunit_tool or can include KUnit in kernel and parse manually.
 To build a KUnit kernel from the current ``.config``, you can use the
 ``build`` argument: ``./tools/testing/kunit/kunit.py build``.
 - ``exec`` command executes kernel results either directly (using
-  User-mode Linux configuration), or via an emulator such
-  as QEMU. It reads results from the log via standard
+  User-mode Linux configuration), or through an emulator such
+  as QEMU. It reads results from the log using standard
   output (stdout), and passes them to ``parse`` to be parsed.
 If you already have built a kernel with built-in KUnit tests,
 you can run the kernel and display the test results with the ``exec``
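
As a concrete companion to the architecture described above, a minimal suite with one ordinary case and one parameterized case might look like the sketch below. The ``square()`` function, the test names and the parameter array are made up; ``KUNIT_CASE()``, ``KUNIT_CASE_PARAM()``, ``KUNIT_ARRAY_PARAM()``, ``struct kunit_suite`` and ``kunit_test_suites()`` are the KUnit APIs being illustrated.

#include <kunit/test.h>

/* Hypothetical function under test. */
static int square(int x) { return x * x; }

/* An ordinary test case: a void function taking a struct kunit *. */
static void square_basic_test(struct kunit *test)
{
	KUNIT_EXPECT_EQ(test, square(3), 9);
}

/*
 * A parameterized case: KUNIT_ARRAY_PARAM() builds a generator that walks
 * this array; the current value is available in test->param_value.
 */
static const int square_inputs[] = { 0, 1, 2, 5 };
KUNIT_ARRAY_PARAM(square, square_inputs, NULL);

static void square_param_test(struct kunit *test)
{
	const int x = *(const int *)test->param_value;

	KUNIT_EXPECT_EQ(test, square(x), x * x);
}

static struct kunit_case square_test_cases[] = {
	KUNIT_CASE(square_basic_test),
	KUNIT_CASE_PARAM(square_param_test, square_gen_params),
	{}
};

static struct kunit_suite square_test_suite = {
	.name = "square",
	.test_cases = square_test_cases,
};

/*
 * Places the suite in the .kunit_test_suites linker section when built in,
 * or registers a module_init() when built as a module, as described above.
 */
kunit_test_suites(&square_test_suite);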


@@ -16,7 +16,6 @@ KUnit - Linux Kernel Unit Testing
    api/index
    style
    faq
-   tips
    running_tips
 This section details the kernel unit testing framework.
@@ -100,14 +99,11 @@ Read also :ref:`kinds-of-tests`.
 How do I use it?
 ================
-* Documentation/dev-tools/kunit/start.rst - for KUnit new users.
-* Documentation/dev-tools/kunit/architecture.rst - KUnit architecture.
-* Documentation/dev-tools/kunit/run_wrapper.rst - run kunit_tool.
-* Documentation/dev-tools/kunit/run_manual.rst - run tests without kunit_tool.
-* Documentation/dev-tools/kunit/usage.rst - write tests.
-* Documentation/dev-tools/kunit/tips.rst - best practices with
-  examples.
-* Documentation/dev-tools/kunit/api/index.rst - KUnit APIs
-  used for testing.
-* Documentation/dev-tools/kunit/faq.rst - KUnit common questions and
-  answers.
+You can find a step-by-step guide to writing and running KUnit tests in
+Documentation/dev-tools/kunit/start.rst
+
+Alternatively, feel free to look through the rest of the KUnit documentation,
+or to experiment with tools/testing/kunit/kunit.py and the example test under
+lib/kunit/kunit-example-test.c
+
+Happy testing!


@@ -294,13 +294,11 @@ Congrats! You just wrote your first KUnit test.
 Next Steps
 ==========
-* Documentation/dev-tools/kunit/architecture.rst - KUnit architecture.
-* Documentation/dev-tools/kunit/run_wrapper.rst - run kunit_tool.
-* Documentation/dev-tools/kunit/run_manual.rst - run tests without kunit_tool.
-* Documentation/dev-tools/kunit/usage.rst - write tests.
-* Documentation/dev-tools/kunit/tips.rst - best practices with
-  examples.
-* Documentation/dev-tools/kunit/api/index.rst - KUnit APIs
-  used for testing.
-* Documentation/dev-tools/kunit/faq.rst - KUnit common questions and
-  answers.
+If you're interested in using some of the more advanced features of kunit.py,
+take a look at Documentation/dev-tools/kunit/run_wrapper.rst
+
+If you'd like to run tests without using kunit.py, check out
+Documentation/dev-tools/kunit/run_manual.rst
+
+For more information on writing KUnit tests (including some common techniques
+for testing different things), see Documentation/dev-tools/kunit/usage.rst


@@ -1,190 +0,0 @@
.. SPDX-License-Identifier: GPL-2.0
============================
Tips For Writing KUnit Tests
============================
Exiting early on failed expectations
------------------------------------
``KUNIT_EXPECT_EQ`` and friends will mark the test as failed and continue
execution. In some cases, it's unsafe to continue and you can use the
``KUNIT_ASSERT`` variant to exit on failure.
.. code-block:: c
void example_test_user_alloc_function(struct kunit *test)
{
void *object = alloc_some_object_for_me();
/* Make sure we got a valid pointer back. */
KUNIT_ASSERT_NOT_ERR_OR_NULL(test, object);
do_something_with_object(object);
}
Allocating memory
-----------------
Where you would use ``kzalloc``, you should prefer ``kunit_kzalloc`` instead.
KUnit will ensure the memory is freed once the test completes.
This is particularly useful since it lets you use the ``KUNIT_ASSERT_EQ``
macros to exit early from a test without having to worry about remembering to
call ``kfree``.
Example:
.. code-block:: c
void example_test_allocation(struct kunit *test)
{
char *buffer = kunit_kzalloc(test, 16, GFP_KERNEL);
/* Ensure allocation succeeded. */
KUNIT_ASSERT_NOT_ERR_OR_NULL(test, buffer);
KUNIT_ASSERT_STREQ(test, buffer, "");
}
Testing static functions
------------------------
If you don't want to expose functions or variables just for testing, one option
is to conditionally ``#include`` the test file at the end of your .c file, e.g.
.. code-block:: c
/* In my_file.c */
static int do_interesting_thing();
#ifdef CONFIG_MY_KUNIT_TEST
#include "my_kunit_test.c"
#endif
Injecting test-only code
------------------------
Similarly to the above, it can be useful to add test-specific logic.
.. code-block:: c
/* In my_file.h */
#ifdef CONFIG_MY_KUNIT_TEST
/* Defined in my_kunit_test.c */
void test_only_hook(void);
#else
void test_only_hook(void) { }
#endif
This test-only code can be made more useful by accessing the current kunit
test, see below.
Accessing the current test
--------------------------
In some cases, you need to call test-only code from outside the test file, e.g.
like in the example above or if you're providing a fake implementation of an
ops struct.
There is a ``kunit_test`` field in ``task_struct``, so you can access it via
``current->kunit_test``.
Here's a slightly in-depth example of how one could implement "mocking":
.. code-block:: c
#include <linux/sched.h> /* for current */
struct test_data {
int foo_result;
int want_foo_called_with;
};
static int fake_foo(int arg)
{
struct kunit *test = current->kunit_test;
struct test_data *test_data = test->priv;
KUNIT_EXPECT_EQ(test, test_data->want_foo_called_with, arg);
return test_data->foo_result;
}
static void example_simple_test(struct kunit *test)
{
/* Assume priv is allocated in the suite's .init */
struct test_data *test_data = test->priv;
test_data->foo_result = 42;
test_data->want_foo_called_with = 1;
/* In a real test, we'd probably pass a pointer to fake_foo somewhere
* like an ops struct, etc. instead of calling it directly. */
KUNIT_EXPECT_EQ(test, fake_foo(1), 42);
}
Note: here we're able to get away with using ``test->priv``, but if you wanted
something more flexible you could use a named ``kunit_resource``, see
Documentation/dev-tools/kunit/api/test.rst.
Failing the current test
------------------------
But sometimes, you might just want to fail the current test. In that case, we
have ``kunit_fail_current_test(fmt, args...)`` which is defined in ``<kunit/test-bug.h>`` and
doesn't require pulling in ``<kunit/test.h>``.
E.g. say we had an option to enable some extra debug checks on some data structure:
.. code-block:: c
#include <kunit/test-bug.h>
#ifdef CONFIG_EXTRA_DEBUG_CHECKS
static void validate_my_data(struct data *data)
{
if (is_valid(data))
return;
kunit_fail_current_test("data %p is invalid", data);
/* Normal, non-KUnit, error reporting code here. */
}
#else
static void my_debug_function(void) { }
#endif
Customizing error messages
--------------------------
Each of the ``KUNIT_EXPECT`` and ``KUNIT_ASSERT`` macros have a ``_MSG`` variant.
These take a format string and arguments to provide additional context to the automatically generated error messages.
.. code-block:: c
char some_str[41];
generate_sha1_hex_string(some_str);
/* Before. Not easy to tell why the test failed. */
KUNIT_EXPECT_EQ(test, strlen(some_str), 40);
/* After. Now we see the offending string. */
KUNIT_EXPECT_EQ_MSG(test, strlen(some_str), 40, "some_str='%s'", some_str);
Alternatively, one can take full control over the error message by using ``KUNIT_FAIL()``, e.g.
.. code-block:: c
/* Before */
KUNIT_EXPECT_EQ(test, some_setup_function(), 0);
/* After: full control over the failure message. */
if (some_setup_function())
KUNIT_FAIL(test, "Failed to setup thing for testing");
Next Steps
==========
* Optional: see the Documentation/dev-tools/kunit/usage.rst page for a more
in-depth explanation of KUnit.


@@ -112,11 +112,45 @@ terminates the test case if the condition is not satisfied. For example:
 		KUNIT_EXPECT_LE(test, a[i], a[i + 1]);
 	}
-In this example, the method under test should return pointer to a value. If the
-pointer returns null or an errno, we want to stop the test since the following
-expectation could crash the test case. `ASSERT_NOT_ERR_OR_NULL(...)` allows us
-to bail out of the test case if the appropriate conditions are not satisfied to
-complete the test.
+In this example, we need to be able to allocate an array to test the ``sort()``
+function. So we use ``KUNIT_ASSERT_NOT_ERR_OR_NULL()`` to abort the test if
+there's an allocation error.
+
+.. note::
+   In other test frameworks, ``ASSERT`` macros are often implemented by calling
+   ``return`` so they only work from the test function. In KUnit, we stop the
+   current kthread on failure, so you can call them from anywhere.
+
+Customizing error messages
+--------------------------
+
+Each of the ``KUNIT_EXPECT`` and ``KUNIT_ASSERT`` macros have a ``_MSG``
+variant. These take a format string and arguments to provide additional
+context to the automatically generated error messages.
+
+.. code-block:: c
+
+	char some_str[41];
+	generate_sha1_hex_string(some_str);
+
+	/* Before. Not easy to tell why the test failed. */
+	KUNIT_EXPECT_EQ(test, strlen(some_str), 40);
+
+	/* After. Now we see the offending string. */
+	KUNIT_EXPECT_EQ_MSG(test, strlen(some_str), 40, "some_str='%s'", some_str);
+
+Alternatively, one can take full control over the error message by using
+``KUNIT_FAIL()``, e.g.
+
+.. code-block:: c
+
+	/* Before */
+	KUNIT_EXPECT_EQ(test, some_setup_function(), 0);
+
+	/* After: full control over the failure message. */
+	if (some_setup_function())
+		KUNIT_FAIL(test, "Failed to setup thing for testing");
 Test Suites
 ~~~~~~~~~~~
@@ -546,24 +580,6 @@ By reusing the same ``cases`` array from above, we can write the test as a
 		{}
 	};
-Exiting Early on Failed Expectations
-------------------------------------
-
-We can use ``KUNIT_EXPECT_EQ`` to mark the test as failed and continue
-execution. In some cases, it is unsafe to continue. We can use the
-``KUNIT_ASSERT`` variant to exit on failure.
-
-.. code-block:: c
-
-	void example_test_user_alloc_function(struct kunit *test)
-	{
-		void *object = alloc_some_object_for_me();
-
-		/* Make sure we got a valid pointer back. */
-		KUNIT_ASSERT_NOT_ERR_OR_NULL(test, object);
-		do_something_with_object(object);
-	}
 Allocating Memory
 -----------------
@@ -625,17 +641,23 @@ as shown in next section: *Accessing The Current Test*.
 Accessing The Current Test
 --------------------------
-In some cases, we need to call test-only code from outside the test file.
-For example, see example in section *Injecting Test-Only Code* or if
-we are providing a fake implementation of an ops struct. Using
-``kunit_test`` field in ``task_struct``, we can access it via
-``current->kunit_test``.
-The example below includes how to implement "mocking":
+In some cases, we need to call test-only code from outside the test file. This
+is helpful, for example, when providing a fake implementation of a function, or
+to fail any current test from within an error handler.
+
+We can do this via the ``kunit_test`` field in ``task_struct``, which we can
+access using the ``kunit_get_current_test()`` function in ``kunit/test-bug.h``.
+
+``kunit_get_current_test()`` is safe to call even if KUnit is not enabled. If
+KUnit is not enabled, was built as a module (``CONFIG_KUNIT=m``), or no test is
+running in the current task, it will return ``NULL``. This compiles down to
+either a no-op or a static key check, so will have a negligible performance
+impact when no test is running.
+
+The example below uses this to implement a "mock" implementation of a function, ``foo``:
 .. code-block:: c
-	#include <linux/sched.h> /* for current */
+	#include <kunit/test-bug.h> /* for kunit_get_current_test */
 	struct test_data {
 		int foo_result;
@@ -644,7 +666,7 @@ The example below includes how to implement "mocking":
 	static int fake_foo(int arg)
 	{
-		struct kunit *test = current->kunit_test;
+		struct kunit *test = kunit_get_current_test();
 		struct test_data *test_data = test->priv;
 		KUNIT_EXPECT_EQ(test, test_data->want_foo_called_with, arg);
@@ -675,7 +697,7 @@ Each test can have multiple resources which have string names providing the same
 flexibility as a ``priv`` member, but also, for example, allowing helper
 functions to create resources without conflicting with each other. It is also
 possible to define a clean up function for each resource, making it easy to
-avoid resource leaks. For more information, see Documentation/dev-tools/kunit/api/test.rst.
+avoid resource leaks. For more information, see Documentation/dev-tools/kunit/api/resource.rst.
 Failing The Current Test
 ------------------------
@@ -703,3 +725,9 @@ structures as shown below:
 	static void my_debug_function(void) { }
 	#endif
+
+``kunit_fail_current_test()`` is safe to call even if KUnit is not enabled. If
+KUnit is not enabled, was built as a module (``CONFIG_KUNIT=m``), or no test is
+running in the current task, it will do nothing. This compiles down to either a
+no-op or a static key check, so will have a negligible performance impact when
+no test is running.
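
As a minimal sketch of the named-resource approach mentioned above (the resource name, the counter and the helper are hypothetical; kunit_add_named_resource(), kunit_find_named_resource(), kunit_put_resource() and kunit_get_current_test() are the KUnit APIs assumed here):

#include <kunit/test.h>
#include <kunit/test-bug.h>	/* kunit_get_current_test() */
#include <kunit/resource.h>	/* named resources */

static struct kunit_resource my_counter_res;	/* static storage for the resource */
static int my_counter;

/* Registered as the suite's .init callback. */
static int my_test_init(struct kunit *test)
{
	my_counter = 0;
	/* Publish my_counter under a name so non-test code can find it. */
	return kunit_add_named_resource(test, NULL, NULL, &my_counter_res,
					"my_counter", &my_counter);
}

/* Called from code outside the test file, e.g. an error path under test. */
static void bump_counter_if_testing(void)
{
	struct kunit *test = kunit_get_current_test();
	struct kunit_resource *res;

	if (!test)
		return;

	res = kunit_find_named_resource(test, "my_counter");
	if (res) {
		*(int *)res->data += 1;
		kunit_put_resource(res);
	}
}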


@@ -315,7 +315,7 @@ static void drm_test_fb_xrgb8888_to_gray8(struct kunit *test)
 	iosys_map_set_vaddr(&src, xrgb8888);
 	drm_fb_xrgb8888_to_gray8(&dst, &result->dst_pitch, &src, &fb, &params->clip);
-	KUNIT_EXPECT_EQ(test, memcmp(buf, result->expected, dst_size), 0);
+	KUNIT_EXPECT_MEMEQ(test, buf, result->expected, dst_size);
 }
 static void drm_test_fb_xrgb8888_to_rgb332(struct kunit *test)
@@ -345,7 +345,7 @@ static void drm_test_fb_xrgb8888_to_rgb332(struct kunit *test)
 	iosys_map_set_vaddr(&src, xrgb8888);
 	drm_fb_xrgb8888_to_rgb332(&dst, &result->dst_pitch, &src, &fb, &params->clip);
-	KUNIT_EXPECT_EQ(test, memcmp(buf, result->expected, dst_size), 0);
+	KUNIT_EXPECT_MEMEQ(test, buf, result->expected, dst_size);
 }
 static void drm_test_fb_xrgb8888_to_rgb565(struct kunit *test)
@@ -375,10 +375,10 @@ static void drm_test_fb_xrgb8888_to_rgb565(struct kunit *test)
 	iosys_map_set_vaddr(&src, xrgb8888);
 	drm_fb_xrgb8888_to_rgb565(&dst, &result->dst_pitch, &src, &fb, &params->clip, false);
-	KUNIT_EXPECT_EQ(test, memcmp(buf, result->expected, dst_size), 0);
+	KUNIT_EXPECT_MEMEQ(test, buf, result->expected, dst_size);
 	drm_fb_xrgb8888_to_rgb565(&dst, &result->dst_pitch, &src, &fb, &params->clip, true);
-	KUNIT_EXPECT_EQ(test, memcmp(buf, result->expected_swab, dst_size), 0);
+	KUNIT_EXPECT_MEMEQ(test, buf, result->expected_swab, dst_size);
 }
 static void drm_test_fb_xrgb8888_to_rgb888(struct kunit *test)
@@ -408,7 +408,7 @@ static void drm_test_fb_xrgb8888_to_rgb888(struct kunit *test)
 	iosys_map_set_vaddr(&src, xrgb8888);
 	drm_fb_xrgb8888_to_rgb888(&dst, &result->dst_pitch, &src, &fb, &params->clip);
-	KUNIT_EXPECT_EQ(test, memcmp(buf, result->expected, dst_size), 0);
+	KUNIT_EXPECT_MEMEQ(test, buf, result->expected, dst_size);
 }
 static void drm_test_fb_xrgb8888_to_xrgb2101010(struct kunit *test)
@@ -439,7 +439,7 @@ static void drm_test_fb_xrgb8888_to_xrgb2101010(struct kunit *test)
 	drm_fb_xrgb8888_to_xrgb2101010(&dst, &result->dst_pitch, &src, &fb, &params->clip);
 	buf = le32buf_to_cpu(test, buf, dst_size / sizeof(u32));
-	KUNIT_EXPECT_EQ(test, memcmp(buf, result->expected, dst_size), 0);
+	KUNIT_EXPECT_MEMEQ(test, buf, result->expected, dst_size);
 }
 static struct kunit_case drm_format_helper_test_cases[] = {


@@ -90,19 +90,6 @@ void kunit_unary_assert_format(const struct kunit_assert *assert,
 			       const struct va_format *message,
 			       struct string_stream *stream);
-/**
- * KUNIT_INIT_UNARY_ASSERT_STRUCT() - Initializes &struct kunit_unary_assert.
- * @cond: A string representation of the expression asserted true or false.
- * @expect_true: True if of type KUNIT_{EXPECT|ASSERT}_TRUE, false otherwise.
- *
- * Initializes a &struct kunit_unary_assert. Intended to be used in
- * KUNIT_EXPECT_* and KUNIT_ASSERT_* macros.
- */
-#define KUNIT_INIT_UNARY_ASSERT_STRUCT(cond, expect_true) { \
-	.condition = cond, \
-	.expected_true = expect_true \
-}
 /**
  * struct kunit_ptr_not_err_assert - An expectation/assertion that a pointer is
  * not NULL and not a -errno.
@@ -123,20 +110,6 @@ void kunit_ptr_not_err_assert_format(const struct kunit_assert *assert,
 				     const struct va_format *message,
 				     struct string_stream *stream);
-/**
- * KUNIT_INIT_PTR_NOT_ERR_ASSERT_STRUCT() - Initializes a
- * &struct kunit_ptr_not_err_assert.
- * @txt: A string representation of the expression passed to the expectation.
- * @val: The actual evaluated pointer value of the expression.
- *
- * Initializes a &struct kunit_ptr_not_err_assert. Intended to be used in
- * KUNIT_EXPECT_* and KUNIT_ASSERT_* macros.
- */
-#define KUNIT_INIT_PTR_NOT_ERR_STRUCT(txt, val) { \
-	.text = txt, \
-	.value = val \
-}
 /**
  * struct kunit_binary_assert_text - holds strings for &struct
  * kunit_binary_assert and friends to try and make the structs smaller.
@@ -173,27 +146,6 @@ void kunit_binary_assert_format(const struct kunit_assert *assert,
 				const struct va_format *message,
 				struct string_stream *stream);
-/**
- * KUNIT_INIT_BINARY_ASSERT_STRUCT() - Initializes a binary assert like
- * kunit_binary_assert, kunit_binary_ptr_assert, etc.
- *
- * @text_: Pointer to a kunit_binary_assert_text.
- * @left_val: The actual evaluated value of the expression in the left slot.
- * @right_val: The actual evaluated value of the expression in the right slot.
- *
- * Initializes a binary assert like kunit_binary_assert,
- * kunit_binary_ptr_assert, etc. This relies on these structs having the same
- * fields but with different types for left_val/right_val.
- * This is ultimately used by binary assertion macros like KUNIT_EXPECT_EQ, etc.
- */
-#define KUNIT_INIT_BINARY_ASSERT_STRUCT(text_, \
-					left_val, \
-					right_val) { \
-	.text = text_, \
-	.left_value = left_val, \
-	.right_value = right_val \
-}
 /**
  * struct kunit_binary_ptr_assert - An expectation/assertion that compares two
  * pointer values (for example, KUNIT_EXPECT_PTR_EQ(test, foo, bar)).
@@ -240,4 +192,30 @@ void kunit_binary_str_assert_format(const struct kunit_assert *assert,
 				    const struct va_format *message,
 				    struct string_stream *stream);
+/**
+ * struct kunit_mem_assert - An expectation/assertion that compares two
+ * memory blocks.
+ * @assert: The parent of this type.
+ * @text: Holds the textual representations of the operands and comparator.
+ * @left_value: The actual evaluated value of the expression in the left slot.
+ * @right_value: The actual evaluated value of the expression in the right slot.
+ * @size: Size of the memory block analysed in bytes.
+ *
+ * Represents an expectation/assertion that compares two memory blocks. For
+ * example, to expect that the first three bytes of foo is equal to the
+ * first three bytes of bar, you can use the expectation
+ * KUNIT_EXPECT_MEMEQ(test, foo, bar, 3);
+ */
+struct kunit_mem_assert {
+	struct kunit_assert assert;
+	const struct kunit_binary_assert_text *text;
+	const void *left_value;
+	const void *right_value;
+	const size_t size;
+};
+
+void kunit_mem_assert_format(const struct kunit_assert *assert,
+			     const struct va_format *message,
+			     struct string_stream *stream);
+
 #endif /* _KUNIT_ASSERT_H */


@@ -9,16 +9,63 @@
 #ifndef _KUNIT_TEST_BUG_H
 #define _KUNIT_TEST_BUG_H
-#define kunit_fail_current_test(fmt, ...) \
-	__kunit_fail_current_test(__FILE__, __LINE__, fmt, ##__VA_ARGS__)
 #if IS_BUILTIN(CONFIG_KUNIT)
+#include <linux/jump_label.h> /* For static branch */
+#include <linux/sched.h>
+
+/* Static key if KUnit is running any tests. */
+DECLARE_STATIC_KEY_FALSE(kunit_running);
+
+/**
+ * kunit_get_current_test() - Return a pointer to the currently running
+ *			      KUnit test.
+ *
+ * If a KUnit test is running in the current task, returns a pointer to its
+ * associated struct kunit. This pointer can then be passed to any KUnit
+ * function or assertion. If no test is running (or a test is running in a
+ * different task), returns NULL.
+ *
+ * This function is safe to call even when KUnit is disabled. If CONFIG_KUNIT
+ * is not enabled, it will compile down to nothing and will return quickly no
+ * test is running.
+ */
+static inline struct kunit *kunit_get_current_test(void)
+{
+	if (!static_branch_unlikely(&kunit_running))
+		return NULL;
+
+	return current->kunit_test;
+}
+
+/**
+ * kunit_fail_current_test() - If a KUnit test is running, fail it.
+ *
+ * If a KUnit test is running in the current task, mark that test as failed.
+ *
+ * This macro will only work if KUnit is built-in (though the tests
+ * themselves can be modules). Otherwise, it compiles down to nothing.
+ */
+#define kunit_fail_current_test(fmt, ...) do { \
+		if (static_branch_unlikely(&kunit_running)) { \
+			__kunit_fail_current_test(__FILE__, __LINE__, \
+						  fmt, ##__VA_ARGS__); \
+		} \
+	} while (0)
+
 extern __printf(3, 4) void __kunit_fail_current_test(const char *file, int line,
 						     const char *fmt, ...);
 #else
+static inline struct kunit *kunit_get_current_test(void) { return NULL; }
+
+/* We define this with an empty helper function so format string warnings work */
+#define kunit_fail_current_test(fmt, ...) \
+	__kunit_fail_current_test(__FILE__, __LINE__, fmt, ##__VA_ARGS__)
+
 static inline __printf(3, 4) void __kunit_fail_current_test(const char *file, int line,
 						const char *fmt, ...)
 {


@@ -16,6 +16,7 @@
 #include <linux/container_of.h>
 #include <linux/err.h>
 #include <linux/init.h>
+#include <linux/jump_label.h>
 #include <linux/kconfig.h>
 #include <linux/kref.h>
 #include <linux/list.h>
@@ -27,6 +28,9 @@
 #include <asm/rwonce.h>
+/* Static key: true if any KUnit tests are currently running */
+DECLARE_STATIC_KEY_FALSE(kunit_running);
+
 struct kunit;
 /* Size of log associated with test. */
@@ -515,22 +519,25 @@ void kunit_do_failed_assertion(struct kunit *test,
 			       fmt, \
 			       ##__VA_ARGS__)
+/* Helper to safely pass around an initializer list to other macros. */
+#define KUNIT_INIT_ASSERT(initializers...) { initializers }
+
 #define KUNIT_UNARY_ASSERTION(test, \
 			      assert_type, \
-			      condition, \
-			      expected_true, \
+			      condition_, \
+			      expected_true_, \
 			      fmt, \
 			      ...) \
 do { \
-	if (likely(!!(condition) == !!expected_true)) \
+	if (likely(!!(condition_) == !!expected_true_)) \
 		break; \
 \
 	_KUNIT_FAILED(test, \
 		      assert_type, \
 		      kunit_unary_assert, \
 		      kunit_unary_assert_format, \
-		      KUNIT_INIT_UNARY_ASSERT_STRUCT(#condition, \
-						     expected_true), \
+		      KUNIT_INIT_ASSERT(.condition = #condition_, \
+					.expected_true = expected_true_), \
 		      fmt, \
 		      ##__VA_ARGS__); \
 } while (0)
@@ -590,9 +597,9 @@ do { \
 		      assert_type, \
 		      assert_class, \
 		      format_func, \
-		      KUNIT_INIT_BINARY_ASSERT_STRUCT(&__text, \
-						      __left, \
-						      __right), \
+		      KUNIT_INIT_ASSERT(.text = &__text, \
+					.left_value = __left, \
+					.right_value = __right), \
 		      fmt, \
 		      ##__VA_ARGS__); \
 } while (0)
@@ -651,9 +658,42 @@ do { \
 		      assert_type, \
 		      kunit_binary_str_assert, \
 		      kunit_binary_str_assert_format, \
-		      KUNIT_INIT_BINARY_ASSERT_STRUCT(&__text, \
-						      __left, \
-						      __right), \
+		      KUNIT_INIT_ASSERT(.text = &__text, \
+					.left_value = __left, \
+					.right_value = __right), \
+		      fmt, \
+		      ##__VA_ARGS__); \
+} while (0)
+
+#define KUNIT_MEM_ASSERTION(test, \
+			    assert_type, \
+			    left, \
+			    op, \
+			    right, \
+			    size_, \
+			    fmt, \
+			    ...) \
+do { \
+	const void *__left = (left); \
+	const void *__right = (right); \
+	const size_t __size = (size_); \
+	static const struct kunit_binary_assert_text __text = { \
+		.operation = #op, \
+		.left_text = #left, \
+		.right_text = #right, \
+	}; \
+\
+	if (likely(memcmp(__left, __right, __size) op 0)) \
+		break; \
+\
+	_KUNIT_FAILED(test, \
+		      assert_type, \
+		      kunit_mem_assert, \
+		      kunit_mem_assert_format, \
+		      KUNIT_INIT_ASSERT(.text = &__text, \
+					.left_value = __left, \
+					.right_value = __right, \
+					.size = __size), \
 		      fmt, \
 		      ##__VA_ARGS__); \
 } while (0)
@@ -673,7 +713,7 @@ do { \
 		      assert_type, \
 		      kunit_ptr_not_err_assert, \
 		      kunit_ptr_not_err_assert_format, \
-		      KUNIT_INIT_PTR_NOT_ERR_STRUCT(#ptr, __ptr), \
+		      KUNIT_INIT_ASSERT(.text = #ptr, .value = __ptr), \
 		      fmt, \
 		      ##__VA_ARGS__); \
 } while (0)
@@ -928,6 +968,60 @@ do { \
 				   fmt, \
 				   ##__VA_ARGS__)
+/**
+ * KUNIT_EXPECT_MEMEQ() - Expects that the first @size bytes of @left and @right are equal.
+ * @test: The test context object.
+ * @left: An arbitrary expression that evaluates to the specified size.
+ * @right: An arbitrary expression that evaluates to the specified size.
+ * @size: Number of bytes compared.
+ *
+ * Sets an expectation that the values that @left and @right evaluate to are
+ * equal. This is semantically equivalent to
+ * KUNIT_EXPECT_TRUE(@test, !memcmp((@left), (@right), (@size))). See
+ * KUNIT_EXPECT_TRUE() for more information.
+ *
+ * Although this expectation works for any memory block, it is not recommended
+ * for comparing more structured data, such as structs. This expectation is
+ * recommended for comparing, for example, data arrays.
+ */
+#define KUNIT_EXPECT_MEMEQ(test, left, right, size) \
+	KUNIT_EXPECT_MEMEQ_MSG(test, left, right, size, NULL)
+
+#define KUNIT_EXPECT_MEMEQ_MSG(test, left, right, size, fmt, ...) \
+	KUNIT_MEM_ASSERTION(test, \
+			    KUNIT_EXPECTATION, \
+			    left, ==, right, \
+			    size, \
+			    fmt, \
+			    ##__VA_ARGS__)
+
+/**
+ * KUNIT_EXPECT_MEMNEQ() - Expects that the first @size bytes of @left and @right are not equal.
+ * @test: The test context object.
+ * @left: An arbitrary expression that evaluates to the specified size.
+ * @right: An arbitrary expression that evaluates to the specified size.
+ * @size: Number of bytes compared.
+ *
+ * Sets an expectation that the values that @left and @right evaluate to are
+ * not equal. This is semantically equivalent to
+ * KUNIT_EXPECT_TRUE(@test, memcmp((@left), (@right), (@size))). See
+ * KUNIT_EXPECT_TRUE() for more information.
+ *
+ * Although this expectation works for any memory block, it is not recommended
+ * for comparing more structured data, such as structs. This expectation is
+ * recommended for comparing, for example, data arrays.
+ */
+#define KUNIT_EXPECT_MEMNEQ(test, left, right, size) \
+	KUNIT_EXPECT_MEMNEQ_MSG(test, left, right, size, NULL)
+
+#define KUNIT_EXPECT_MEMNEQ_MSG(test, left, right, size, fmt, ...) \
+	KUNIT_MEM_ASSERTION(test, \
+			    KUNIT_EXPECTATION, \
+			    left, !=, right, \
+			    size, \
+			    fmt, \
+			    ##__VA_ARGS__)
+
 /**
  * KUNIT_EXPECT_NULL() - Expects that @ptr is null.
  * @test: The test context object.
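
As with the other ``_MSG`` variants, the format string lets a caller append context to the generated failure message. A hypothetical use of the new macro (``buf``, ``golden`` and ``mode`` are made-up names, not part of the patch) might look like:

	/* buf and golden are u8 arrays of at least 16 bytes. */
	KUNIT_EXPECT_MEMEQ_MSG(test, buf, golden, 16,
			       "frame header mismatch for mode %d", mode);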


@@ -0,0 +1,33 @@
/* SPDX-License-Identifier: GPL-2.0 */
/*
* KUnit API to allow symbols to be conditionally visible during KUnit
* testing
*
* Copyright (C) 2022, Google LLC.
* Author: Rae Moar <rmoar@google.com>
*/
#ifndef _KUNIT_VISIBILITY_H
#define _KUNIT_VISIBILITY_H
#if IS_ENABLED(CONFIG_KUNIT)
/**
* VISIBLE_IF_KUNIT - A macro that sets symbols to be static if
* CONFIG_KUNIT is not enabled. Otherwise if CONFIG_KUNIT is enabled
* there is no change to the symbol definition.
*/
#define VISIBLE_IF_KUNIT
/**
* EXPORT_SYMBOL_IF_KUNIT(symbol) - Exports symbol into
* EXPORTED_FOR_KUNIT_TESTING namespace only if CONFIG_KUNIT is
* enabled. Must use MODULE_IMPORT_NS(EXPORTED_FOR_KUNIT_TESTING)
* in test file in order to use symbols.
*/
#define EXPORT_SYMBOL_IF_KUNIT(symbol) EXPORT_SYMBOL_NS(symbol, \
EXPORTED_FOR_KUNIT_TESTING)
#else
#define VISIBLE_IF_KUNIT static
#define EXPORT_SYMBOL_IF_KUNIT(symbol)
#endif
#endif /* _KUNIT_VISIBILITY_H */
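
To illustrate how these macros are meant to be used, here is a sketch of the intended pattern (the function name and files are hypothetical; the pattern mirrors the AppArmor conversion later in this series, and only VISIBLE_IF_KUNIT, EXPORT_SYMBOL_IF_KUNIT and MODULE_IMPORT_NS are real APIs):

/* In foo.c (code under test): static unless KUnit is enabled. */
#include <kunit/visibility.h>

VISIBLE_IF_KUNIT int foo_parse_header(const u8 *buf, size_t len)
{
	/* ... implementation being unit tested ... */
	return len >= 4 ? 0 : -EINVAL;
}
EXPORT_SYMBOL_IF_KUNIT(foo_parse_header);

/* In foo.h: declare the symbol only when tests can see it. */
#if IS_ENABLED(CONFIG_KUNIT)
int foo_parse_header(const u8 *buf, size_t len);
#endif

/* In foo_kunit_test.c: import the KUnit-only export namespace. */
MODULE_IMPORT_NS(EXPORTED_FOR_KUNIT_TESTING);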


@@ -127,13 +127,15 @@ void kunit_binary_assert_format(const struct kunit_assert *assert,
 			  binary_assert->text->right_text);
 	if (!is_literal(stream->test, binary_assert->text->left_text,
 			binary_assert->left_value, stream->gfp))
-		string_stream_add(stream, KUNIT_SUBSUBTEST_INDENT "%s == %lld\n",
+		string_stream_add(stream, KUNIT_SUBSUBTEST_INDENT "%s == %lld (0x%llx)\n",
 				  binary_assert->text->left_text,
+				  binary_assert->left_value,
 				  binary_assert->left_value);
 	if (!is_literal(stream->test, binary_assert->text->right_text,
 			binary_assert->right_value, stream->gfp))
-		string_stream_add(stream, KUNIT_SUBSUBTEST_INDENT "%s == %lld",
+		string_stream_add(stream, KUNIT_SUBSUBTEST_INDENT "%s == %lld (0x%llx)",
 				  binary_assert->text->right_text,
+				  binary_assert->right_value,
 				  binary_assert->right_value);
 	kunit_assert_print_msg(message, stream);
 }
@@ -204,3 +206,59 @@ void kunit_binary_str_assert_format(const struct kunit_assert *assert,
 	kunit_assert_print_msg(message, stream);
 }
 EXPORT_SYMBOL_GPL(kunit_binary_str_assert_format);
+
+/* Adds a hexdump of a buffer to a string_stream comparing it with
+ * a second buffer. The different bytes are marked with <>.
+ */
+static void kunit_assert_hexdump(struct string_stream *stream,
+				 const void *buf,
+				 const void *compared_buf,
+				 const size_t len)
+{
+	size_t i;
+	const u8 *buf1 = buf;
+	const u8 *buf2 = compared_buf;
+
+	string_stream_add(stream, KUNIT_SUBSUBTEST_INDENT);
+
+	for (i = 0; i < len; ++i) {
+		if (!(i % 16) && i)
+			string_stream_add(stream, "\n" KUNIT_SUBSUBTEST_INDENT);
+
+		if (buf1[i] != buf2[i])
+			string_stream_add(stream, "<%02x>", buf1[i]);
+		else
+			string_stream_add(stream, " %02x ", buf1[i]);
+	}
+}
+
+void kunit_mem_assert_format(const struct kunit_assert *assert,
+			     const struct va_format *message,
+			     struct string_stream *stream)
+{
+	struct kunit_mem_assert *mem_assert;
+
+	mem_assert = container_of(assert, struct kunit_mem_assert,
+				  assert);
+
+	string_stream_add(stream,
+			  KUNIT_SUBTEST_INDENT "Expected %s %s %s, but\n",
+			  mem_assert->text->left_text,
+			  mem_assert->text->operation,
+			  mem_assert->text->right_text);
+
+	string_stream_add(stream, KUNIT_SUBSUBTEST_INDENT "%s ==\n",
+			  mem_assert->text->left_text);
+	kunit_assert_hexdump(stream, mem_assert->left_value,
+			     mem_assert->right_value, mem_assert->size);
+
+	string_stream_add(stream, "\n");
+
+	string_stream_add(stream, KUNIT_SUBSUBTEST_INDENT "%s ==\n",
+			  mem_assert->text->right_text);
+	kunit_assert_hexdump(stream, mem_assert->right_value,
+			     mem_assert->left_value, mem_assert->size);
+
+	kunit_assert_print_msg(message, stream);
+}
+EXPORT_SYMBOL_GPL(kunit_mem_assert_format);


@@ -63,7 +63,7 @@ static int debugfs_print_results(struct seq_file *seq, void *v)
 	kunit_suite_for_each_test_case(suite, test_case)
 		debugfs_print_result(seq, suite, test_case);
-	seq_printf(seq, "%s %d - %s\n",
+	seq_printf(seq, "%s %d %s\n",
 		   kunit_status_to_ok_not_ok(success), 1, suite->name);
 	return 0;
 }


@@ -166,7 +166,7 @@ static void kunit_exec_run_tests(struct suite_set *suite_set)
 {
 	size_t num_suites = suite_set->end - suite_set->start;
-	pr_info("TAP version 14\n");
+	pr_info("KTAP version 1\n");
 	pr_info("1..%zu\n", num_suites);
 	__kunit_test_suites_init(suite_set->start, num_suites);
@@ -177,8 +177,8 @@ static void kunit_exec_list_tests(struct suite_set *suite_set)
 	struct kunit_suite * const *suites;
 	struct kunit_case *test_case;
-	/* Hack: print a tap header so kunit.py can find the start of KUnit output. */
-	pr_info("TAP version 14\n");
+	/* Hack: print a ktap header so kunit.py can find the start of KUnit output. */
+	pr_info("KTAP version 1\n");
 	for (suites = suite_set->start; suites < suite_set->end; suites++)
 		kunit_suite_for_each_test_case((*suites), test_case) {


@@ -86,6 +86,9 @@ static void example_mark_skipped_test(struct kunit *test)
  */
 static void example_all_expect_macros_test(struct kunit *test)
 {
+	const u32 array1[] = { 0x0F, 0xFF };
+	const u32 array2[] = { 0x1F, 0xFF };
+
 	/* Boolean assertions */
 	KUNIT_EXPECT_TRUE(test, true);
 	KUNIT_EXPECT_FALSE(test, false);
@@ -109,6 +112,10 @@ static void example_all_expect_macros_test(struct kunit *test)
 	KUNIT_EXPECT_STREQ(test, "hi", "hi");
 	KUNIT_EXPECT_STRNEQ(test, "hi", "bye");
+	/* Memory block assertions */
+	KUNIT_EXPECT_MEMEQ(test, array1, array1, sizeof(array1));
+	KUNIT_EXPECT_MEMNEQ(test, array1, array2, sizeof(array1));
+
 	/*
 	 * There are also ASSERT variants of all of the above that abort test
 	 * execution if they fail. Useful for memory allocations, etc.


@@ -131,11 +131,6 @@ bool string_stream_is_empty(struct string_stream *stream)
 	return list_empty(&stream->fragments);
 }
-struct string_stream_alloc_context {
-	struct kunit *test;
-	gfp_t gfp;
-};
-
 struct string_stream *alloc_string_stream(struct kunit *test, gfp_t gfp)
 {
 	struct string_stream *stream;


@@ -20,6 +20,8 @@
 #include "string-stream.h"
 #include "try-catch-impl.h"
+DEFINE_STATIC_KEY_FALSE(kunit_running);
+
 #if IS_BUILTIN(CONFIG_KUNIT)
 /*
  * Fail the current test and print an error message to the log.
@@ -149,6 +151,7 @@ EXPORT_SYMBOL_GPL(kunit_suite_num_test_cases);
 static void kunit_print_suite_start(struct kunit_suite *suite)
 {
+	kunit_log(KERN_INFO, suite, KUNIT_SUBTEST_INDENT "KTAP version 1\n");
 	kunit_log(KERN_INFO, suite, KUNIT_SUBTEST_INDENT "# Subtest: %s",
 		  suite->name);
 	kunit_log(KERN_INFO, suite, KUNIT_SUBTEST_INDENT "1..%zd",
@@ -175,13 +178,13 @@ static void kunit_print_ok_not_ok(void *test_or_suite,
 	 * representation.
 	 */
 	if (suite)
-		pr_info("%s %zd - %s%s%s\n",
+		pr_info("%s %zd %s%s%s\n",
 			kunit_status_to_ok_not_ok(status),
 			test_number, description, directive_header,
 			(status == KUNIT_SKIPPED) ? directive : "");
 	else
 		kunit_log(KERN_INFO, test,
-			  KUNIT_SUBTEST_INDENT "%s %zd - %s%s%s",
+			  KUNIT_SUBTEST_INDENT "%s %zd %s%s%s",
 			  kunit_status_to_ok_not_ok(status),
 			  test_number, description, directive_header,
 			  (status == KUNIT_SKIPPED) ? directive : "");
@@ -542,6 +545,8 @@ int kunit_run_tests(struct kunit_suite *suite)
 			/* Get initial param. */
 			param_desc[0] = '\0';
 			test.param_value = test_case->generate_params(NULL, param_desc);
+			kunit_log(KERN_INFO, &test, KUNIT_SUBTEST_INDENT KUNIT_SUBTEST_INDENT
+				  "KTAP version 1\n");
 			kunit_log(KERN_INFO, &test, KUNIT_SUBTEST_INDENT KUNIT_SUBTEST_INDENT
 				  "# Subtest: %s", test_case->name);
@@ -555,7 +560,7 @@ int kunit_run_tests(struct kunit_suite *suite)
 				kunit_log(KERN_INFO, &test,
 					  KUNIT_SUBTEST_INDENT KUNIT_SUBTEST_INDENT
-					  "%s %d - %s",
+					  "%s %d %s",
 					  kunit_status_to_ok_not_ok(test.status),
 					  test.param_index + 1, param_desc);
@@ -612,10 +617,14 @@ int __kunit_test_suites_init(struct kunit_suite * const * const suites, int num_
 		return 0;
 	}
+	static_branch_inc(&kunit_running);
+
 	for (i = 0; i < num_suites; i++) {
 		kunit_init_suite(suites[i]);
 		kunit_run_tests(suites[i]);
 	}
+
+	static_branch_dec(&kunit_running);
+
 	return 0;
 }
 EXPORT_SYMBOL_GPL(__kunit_test_suites_init);
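
Putting the logging changes above together, the console output for a single built-in suite now looks roughly like the sketch below (suite and test names are illustrative; note the nested "KTAP version 1" lines and the dropped "- " separator in result lines):

KTAP version 1
1..1
    KTAP version 1
    # Subtest: example
    1..2
    ok 1 example_simple_test
    ok 2 example_skip_test # SKIP this test should be skipped
ok 1 example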


@@ -1,5 +1,6 @@
 // SPDX-License-Identifier: GPL-2.0
 #include <kunit/test.h>
+#include <kunit/test-bug.h>
 #include <linux/mm.h>
 #include <linux/slab.h>
 #include <linux/module.h>


@@ -39,6 +39,7 @@
 #include <linux/memcontrol.h>
 #include <linux/random.h>
 #include <kunit/test.h>
+#include <kunit/test-bug.h>
 #include <linux/sort.h>
 #include <linux/debugfs.h>
@@ -618,7 +619,7 @@ static bool slab_add_kunit_errors(void)
 {
 	struct kunit_resource *resource;
-	if (likely(!current->kunit_test))
+	if (!kunit_get_current_test())
 		return false;
 	resource = kunit_find_named_resource(current->kunit_test, "slab_errors");


@ -71,11 +71,11 @@ static void dev_addr_test_basic(struct kunit *test)
memset(addr, 2, sizeof(addr)); memset(addr, 2, sizeof(addr));
eth_hw_addr_set(netdev, addr); eth_hw_addr_set(netdev, addr);
KUNIT_EXPECT_EQ(test, 0, memcmp(netdev->dev_addr, addr, sizeof(addr))); KUNIT_EXPECT_MEMEQ(test, netdev->dev_addr, addr, sizeof(addr));
memset(addr, 3, sizeof(addr)); memset(addr, 3, sizeof(addr));
dev_addr_set(netdev, addr); dev_addr_set(netdev, addr);
KUNIT_EXPECT_EQ(test, 0, memcmp(netdev->dev_addr, addr, sizeof(addr))); KUNIT_EXPECT_MEMEQ(test, netdev->dev_addr, addr, sizeof(addr));
} }
static void dev_addr_test_sync_one(struct kunit *test) static void dev_addr_test_sync_one(struct kunit *test)
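
KUNIT_EXPECT_MEMEQ replaces the memcmp()-returns-zero pattern, so a failure can report the buffer contents rather than just a non-zero return value. A minimal sketch of the idiom (test name and buffer size invented):

	static void buffer_compare_example(struct kunit *test)
	{
		u8 expected[6], actual[6];

		memset(expected, 2, sizeof(expected));
		memset(actual, 2, sizeof(actual));

		/* Previously: KUNIT_EXPECT_EQ(test, 0, memcmp(actual, expected, sizeof(expected))); */
		KUNIT_EXPECT_MEMEQ(test, actual, expected, sizeof(expected));
	}
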


@ -106,8 +106,8 @@ config SECURITY_APPARMOR_PARANOID_LOAD
Disabling the check will speed up policy loads. Disabling the check will speed up policy loads.
config SECURITY_APPARMOR_KUNIT_TEST config SECURITY_APPARMOR_KUNIT_TEST
bool "Build KUnit tests for policy_unpack.c" if !KUNIT_ALL_TESTS tristate "Build KUnit tests for policy_unpack.c" if !KUNIT_ALL_TESTS
depends on KUNIT=y && SECURITY_APPARMOR depends on KUNIT && SECURITY_APPARMOR
default KUNIT_ALL_TESTS default KUNIT_ALL_TESTS
help help
This builds the AppArmor KUnit tests. This builds the AppArmor KUnit tests.


@ -8,6 +8,9 @@ apparmor-y := apparmorfs.o audit.o capability.o task.o ipc.o lib.o match.o \
resource.o secid.o file.o policy_ns.o label.o mount.o net.o resource.o secid.o file.o policy_ns.o label.o mount.o net.o
apparmor-$(CONFIG_SECURITY_APPARMOR_HASH) += crypto.o apparmor-$(CONFIG_SECURITY_APPARMOR_HASH) += crypto.o
obj-$(CONFIG_SECURITY_APPARMOR_KUNIT_TEST) += apparmor_policy_unpack_test.o
apparmor_policy_unpack_test-objs += policy_unpack_test.o
clean-files := capability_names.h rlim_names.h net_names.h clean-files := capability_names.h rlim_names.h net_names.h
# Build a lower case string table of address family names # Build a lower case string table of address family names


@ -48,6 +48,43 @@ enum {
AAFS_LOADDATA_NDENTS /* count of entries */ AAFS_LOADDATA_NDENTS /* count of entries */
}; };
/*
* The AppArmor interface treats data as a type byte followed by the
* actual data. The interface has the notion of a named entry
* which has a name (AA_NAME typecode followed by name string) followed by
* the entries typecode and data. Named types allow for optional
* elements and extensions to be added and tested for without breaking
* backwards compatibility.
*/
enum aa_code {
AA_U8,
AA_U16,
AA_U32,
AA_U64,
AA_NAME, /* same as string except it is items name */
AA_STRING,
AA_BLOB,
AA_STRUCT,
AA_STRUCTEND,
AA_LIST,
AA_LISTEND,
AA_ARRAY,
AA_ARRAYEND,
};
/*
* aa_ext is the read of the buffer containing the serialized profile. The
* data is copied into a kernel buffer in apparmorfs and then handed off to
* the unpack routines.
*/
struct aa_ext {
void *start;
void *end;
void *pos; /* pointer to current position in the buffer */
u32 version;
};
/* /*
* struct aa_loaddata - buffer of policy raw_data set * struct aa_loaddata - buffer of policy raw_data set
* *
@ -126,4 +163,17 @@ static inline void aa_put_loaddata(struct aa_loaddata *data)
kref_put(&data->count, aa_loaddata_kref); kref_put(&data->count, aa_loaddata_kref);
} }
#if IS_ENABLED(CONFIG_KUNIT)
bool aa_inbounds(struct aa_ext *e, size_t size);
size_t aa_unpack_u16_chunk(struct aa_ext *e, char **chunk);
bool aa_unpack_X(struct aa_ext *e, enum aa_code code);
bool aa_unpack_nameX(struct aa_ext *e, enum aa_code code, const char *name);
bool aa_unpack_u32(struct aa_ext *e, u32 *data, const char *name);
bool aa_unpack_u64(struct aa_ext *e, u64 *data, const char *name);
size_t aa_unpack_array(struct aa_ext *e, const char *name);
size_t aa_unpack_blob(struct aa_ext *e, char **blob, const char *name);
int aa_unpack_str(struct aa_ext *e, const char **string, const char *name);
int aa_unpack_strdup(struct aa_ext *e, char **string, const char *name);
#endif
#endif /* __POLICY_INTERFACE_H */ #endif /* __POLICY_INTERFACE_H */
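
The comment block moved into this header describes the serialized policy format precisely enough to sketch an example. Purely for illustration (entry name and value invented), a named 32-bit entry is laid out as:

	/* Illustration only: a named AA_U32 entry as consumed by aa_unpack_u32(e, &v, "count"). */
	static const u8 example_named_u32[] = {
		AA_NAME,			/* next chunk is the entry's name */
		0x06, 0x00,			/* le16 length of the name chunk */
		'c', 'o', 'u', 'n', 't', '\0',	/* NUL-terminated name */
		AA_U32,				/* typecode of the payload */
		0x2a, 0x00, 0x00, 0x00,		/* le32 value (42) */
	};

On any mismatch the unpack helpers reset e->pos, which is what lets optional elements simply be probed for in order.
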


@ -14,6 +14,7 @@
*/ */
#include <asm/unaligned.h> #include <asm/unaligned.h>
#include <kunit/visibility.h>
#include <linux/ctype.h> #include <linux/ctype.h>
#include <linux/errno.h> #include <linux/errno.h>
#include <linux/zlib.h> #include <linux/zlib.h>
@ -37,43 +38,6 @@
#define v7 7 #define v7 7
#define v8 8 /* full network masking */ #define v8 8 /* full network masking */
/*
* The AppArmor interface treats data as a type byte followed by the
* actual data. The interface has the notion of a named entry
* which has a name (AA_NAME typecode followed by name string) followed by
* the entries typecode and data. Named types allow for optional
* elements and extensions to be added and tested for without breaking
* backwards compatibility.
*/
enum aa_code {
AA_U8,
AA_U16,
AA_U32,
AA_U64,
AA_NAME, /* same as string except it is items name */
AA_STRING,
AA_BLOB,
AA_STRUCT,
AA_STRUCTEND,
AA_LIST,
AA_LISTEND,
AA_ARRAY,
AA_ARRAYEND,
};
/*
* aa_ext is the read of the buffer containing the serialized profile. The
* data is copied into a kernel buffer in apparmorfs and then handed off to
* the unpack routines.
*/
struct aa_ext {
void *start;
void *end;
void *pos; /* pointer to current position in the buffer */
u32 version;
};
/* audit callback for unpack fields */ /* audit callback for unpack fields */
static void audit_cb(struct audit_buffer *ab, void *va) static void audit_cb(struct audit_buffer *ab, void *va)
{ {
@ -199,10 +163,11 @@ struct aa_loaddata *aa_loaddata_alloc(size_t size)
} }
/* test if read will be in packed data bounds */ /* test if read will be in packed data bounds */
static bool inbounds(struct aa_ext *e, size_t size) VISIBLE_IF_KUNIT bool aa_inbounds(struct aa_ext *e, size_t size)
{ {
return (size <= e->end - e->pos); return (size <= e->end - e->pos);
} }
EXPORT_SYMBOL_IF_KUNIT(aa_inbounds);
static void *kvmemdup(const void *src, size_t len) static void *kvmemdup(const void *src, size_t len)
{ {
@ -214,22 +179,22 @@ static void *kvmemdup(const void *src, size_t len)
} }
/** /**
* unpack_u16_chunk - test and do bounds checking for a u16 size based chunk * aa_unpack_u16_chunk - test and do bounds checking for a u16 size based chunk
* @e: serialized data read head (NOT NULL) * @e: serialized data read head (NOT NULL)
* @chunk: start address for chunk of data (NOT NULL) * @chunk: start address for chunk of data (NOT NULL)
* *
* Returns: the size of chunk found with the read head at the end of the chunk. * Returns: the size of chunk found with the read head at the end of the chunk.
*/ */
static size_t unpack_u16_chunk(struct aa_ext *e, char **chunk) VISIBLE_IF_KUNIT size_t aa_unpack_u16_chunk(struct aa_ext *e, char **chunk)
{ {
size_t size = 0; size_t size = 0;
void *pos = e->pos; void *pos = e->pos;
if (!inbounds(e, sizeof(u16))) if (!aa_inbounds(e, sizeof(u16)))
goto fail; goto fail;
size = le16_to_cpu(get_unaligned((__le16 *) e->pos)); size = le16_to_cpu(get_unaligned((__le16 *) e->pos));
e->pos += sizeof(__le16); e->pos += sizeof(__le16);
if (!inbounds(e, size)) if (!aa_inbounds(e, size))
goto fail; goto fail;
*chunk = e->pos; *chunk = e->pos;
e->pos += size; e->pos += size;
@ -239,20 +204,22 @@ fail:
e->pos = pos; e->pos = pos;
return 0; return 0;
} }
EXPORT_SYMBOL_IF_KUNIT(aa_unpack_u16_chunk);
/* unpack control byte */ /* unpack control byte */
static bool unpack_X(struct aa_ext *e, enum aa_code code) VISIBLE_IF_KUNIT bool aa_unpack_X(struct aa_ext *e, enum aa_code code)
{ {
if (!inbounds(e, 1)) if (!aa_inbounds(e, 1))
return false; return false;
if (*(u8 *) e->pos != code) if (*(u8 *) e->pos != code)
return false; return false;
e->pos++; e->pos++;
return true; return true;
} }
EXPORT_SYMBOL_IF_KUNIT(aa_unpack_X);
/** /**
* unpack_nameX - check is the next element is of type X with a name of @name * aa_unpack_nameX - check is the next element is of type X with a name of @name
* @e: serialized data extent information (NOT NULL) * @e: serialized data extent information (NOT NULL)
* @code: type code * @code: type code
* @name: name to match to the serialized element. (MAYBE NULL) * @name: name to match to the serialized element. (MAYBE NULL)
@ -267,7 +234,7 @@ static bool unpack_X(struct aa_ext *e, enum aa_code code)
* *
* Returns: false if either match fails, the read head does not move * Returns: false if either match fails, the read head does not move
*/ */
static bool unpack_nameX(struct aa_ext *e, enum aa_code code, const char *name) VISIBLE_IF_KUNIT bool aa_unpack_nameX(struct aa_ext *e, enum aa_code code, const char *name)
{ {
/* /*
* May need to reset pos if name or type doesn't match * May need to reset pos if name or type doesn't match
@ -277,9 +244,9 @@ static bool unpack_nameX(struct aa_ext *e, enum aa_code code, const char *name)
* Check for presence of a tagname, and if present name size * Check for presence of a tagname, and if present name size
* AA_NAME tag value is a u16. * AA_NAME tag value is a u16.
*/ */
if (unpack_X(e, AA_NAME)) { if (aa_unpack_X(e, AA_NAME)) {
char *tag = NULL; char *tag = NULL;
size_t size = unpack_u16_chunk(e, &tag); size_t size = aa_unpack_u16_chunk(e, &tag);
/* if a name is specified it must match. otherwise skip tag */ /* if a name is specified it must match. otherwise skip tag */
if (name && (!size || tag[size-1] != '\0' || strcmp(name, tag))) if (name && (!size || tag[size-1] != '\0' || strcmp(name, tag)))
goto fail; goto fail;
@ -289,20 +256,21 @@ static bool unpack_nameX(struct aa_ext *e, enum aa_code code, const char *name)
} }
/* now check if type code matches */ /* now check if type code matches */
if (unpack_X(e, code)) if (aa_unpack_X(e, code))
return true; return true;
fail: fail:
e->pos = pos; e->pos = pos;
return false; return false;
} }
EXPORT_SYMBOL_IF_KUNIT(aa_unpack_nameX);
static bool unpack_u8(struct aa_ext *e, u8 *data, const char *name) static bool unpack_u8(struct aa_ext *e, u8 *data, const char *name)
{ {
void *pos = e->pos; void *pos = e->pos;
if (unpack_nameX(e, AA_U8, name)) { if (aa_unpack_nameX(e, AA_U8, name)) {
if (!inbounds(e, sizeof(u8))) if (!aa_inbounds(e, sizeof(u8)))
goto fail; goto fail;
if (data) if (data)
*data = *((u8 *)e->pos); *data = *((u8 *)e->pos);
@ -315,12 +283,12 @@ fail:
return false; return false;
} }
static bool unpack_u32(struct aa_ext *e, u32 *data, const char *name) VISIBLE_IF_KUNIT bool aa_unpack_u32(struct aa_ext *e, u32 *data, const char *name)
{ {
void *pos = e->pos; void *pos = e->pos;
if (unpack_nameX(e, AA_U32, name)) { if (aa_unpack_nameX(e, AA_U32, name)) {
if (!inbounds(e, sizeof(u32))) if (!aa_inbounds(e, sizeof(u32)))
goto fail; goto fail;
if (data) if (data)
*data = le32_to_cpu(get_unaligned((__le32 *) e->pos)); *data = le32_to_cpu(get_unaligned((__le32 *) e->pos));
@ -332,13 +300,14 @@ fail:
e->pos = pos; e->pos = pos;
return false; return false;
} }
EXPORT_SYMBOL_IF_KUNIT(aa_unpack_u32);
static bool unpack_u64(struct aa_ext *e, u64 *data, const char *name) VISIBLE_IF_KUNIT bool aa_unpack_u64(struct aa_ext *e, u64 *data, const char *name)
{ {
void *pos = e->pos; void *pos = e->pos;
if (unpack_nameX(e, AA_U64, name)) { if (aa_unpack_nameX(e, AA_U64, name)) {
if (!inbounds(e, sizeof(u64))) if (!aa_inbounds(e, sizeof(u64)))
goto fail; goto fail;
if (data) if (data)
*data = le64_to_cpu(get_unaligned((__le64 *) e->pos)); *data = le64_to_cpu(get_unaligned((__le64 *) e->pos));
@ -350,14 +319,15 @@ fail:
e->pos = pos; e->pos = pos;
return false; return false;
} }
EXPORT_SYMBOL_IF_KUNIT(aa_unpack_u64);
static size_t unpack_array(struct aa_ext *e, const char *name) VISIBLE_IF_KUNIT size_t aa_unpack_array(struct aa_ext *e, const char *name)
{ {
void *pos = e->pos; void *pos = e->pos;
if (unpack_nameX(e, AA_ARRAY, name)) { if (aa_unpack_nameX(e, AA_ARRAY, name)) {
int size; int size;
if (!inbounds(e, sizeof(u16))) if (!aa_inbounds(e, sizeof(u16)))
goto fail; goto fail;
size = (int)le16_to_cpu(get_unaligned((__le16 *) e->pos)); size = (int)le16_to_cpu(get_unaligned((__le16 *) e->pos));
e->pos += sizeof(u16); e->pos += sizeof(u16);
@ -368,18 +338,19 @@ fail:
e->pos = pos; e->pos = pos;
return 0; return 0;
} }
EXPORT_SYMBOL_IF_KUNIT(aa_unpack_array);
static size_t unpack_blob(struct aa_ext *e, char **blob, const char *name) VISIBLE_IF_KUNIT size_t aa_unpack_blob(struct aa_ext *e, char **blob, const char *name)
{ {
void *pos = e->pos; void *pos = e->pos;
if (unpack_nameX(e, AA_BLOB, name)) { if (aa_unpack_nameX(e, AA_BLOB, name)) {
u32 size; u32 size;
if (!inbounds(e, sizeof(u32))) if (!aa_inbounds(e, sizeof(u32)))
goto fail; goto fail;
size = le32_to_cpu(get_unaligned((__le32 *) e->pos)); size = le32_to_cpu(get_unaligned((__le32 *) e->pos));
e->pos += sizeof(u32); e->pos += sizeof(u32);
if (inbounds(e, (size_t) size)) { if (aa_inbounds(e, (size_t) size)) {
*blob = e->pos; *blob = e->pos;
e->pos += size; e->pos += size;
return size; return size;
@ -390,15 +361,16 @@ fail:
e->pos = pos; e->pos = pos;
return 0; return 0;
} }
EXPORT_SYMBOL_IF_KUNIT(aa_unpack_blob);
static int unpack_str(struct aa_ext *e, const char **string, const char *name) VISIBLE_IF_KUNIT int aa_unpack_str(struct aa_ext *e, const char **string, const char *name)
{ {
char *src_str; char *src_str;
size_t size = 0; size_t size = 0;
void *pos = e->pos; void *pos = e->pos;
*string = NULL; *string = NULL;
if (unpack_nameX(e, AA_STRING, name)) { if (aa_unpack_nameX(e, AA_STRING, name)) {
size = unpack_u16_chunk(e, &src_str); size = aa_unpack_u16_chunk(e, &src_str);
if (size) { if (size) {
/* strings are null terminated, length is size - 1 */ /* strings are null terminated, length is size - 1 */
if (src_str[size - 1] != 0) if (src_str[size - 1] != 0)
@ -413,12 +385,13 @@ fail:
e->pos = pos; e->pos = pos;
return 0; return 0;
} }
EXPORT_SYMBOL_IF_KUNIT(aa_unpack_str);
static int unpack_strdup(struct aa_ext *e, char **string, const char *name) VISIBLE_IF_KUNIT int aa_unpack_strdup(struct aa_ext *e, char **string, const char *name)
{ {
const char *tmp; const char *tmp;
void *pos = e->pos; void *pos = e->pos;
int res = unpack_str(e, &tmp, name); int res = aa_unpack_str(e, &tmp, name);
*string = NULL; *string = NULL;
if (!res) if (!res)
@ -432,6 +405,7 @@ static int unpack_strdup(struct aa_ext *e, char **string, const char *name)
return res; return res;
} }
EXPORT_SYMBOL_IF_KUNIT(aa_unpack_strdup);
/** /**
@ -446,7 +420,7 @@ static struct aa_dfa *unpack_dfa(struct aa_ext *e)
size_t size; size_t size;
struct aa_dfa *dfa = NULL; struct aa_dfa *dfa = NULL;
size = unpack_blob(e, &blob, "aadfa"); size = aa_unpack_blob(e, &blob, "aadfa");
if (size) { if (size) {
/* /*
* The dfa is aligned with in the blob to 8 bytes * The dfa is aligned with in the blob to 8 bytes
@ -482,10 +456,10 @@ static bool unpack_trans_table(struct aa_ext *e, struct aa_profile *profile)
void *saved_pos = e->pos; void *saved_pos = e->pos;
/* exec table is optional */ /* exec table is optional */
if (unpack_nameX(e, AA_STRUCT, "xtable")) { if (aa_unpack_nameX(e, AA_STRUCT, "xtable")) {
int i, size; int i, size;
size = unpack_array(e, NULL); size = aa_unpack_array(e, NULL);
/* currently 4 exec bits and entries 0-3 are reserved iupcx */ /* currently 4 exec bits and entries 0-3 are reserved iupcx */
if (size > 16 - 4) if (size > 16 - 4)
goto fail; goto fail;
@ -497,8 +471,8 @@ static bool unpack_trans_table(struct aa_ext *e, struct aa_profile *profile)
profile->file.trans.size = size; profile->file.trans.size = size;
for (i = 0; i < size; i++) { for (i = 0; i < size; i++) {
char *str; char *str;
int c, j, pos, size2 = unpack_strdup(e, &str, NULL); int c, j, pos, size2 = aa_unpack_strdup(e, &str, NULL);
/* unpack_strdup verifies that the last character is /* aa_unpack_strdup verifies that the last character is
* null termination byte. * null termination byte.
*/ */
if (!size2) if (!size2)
@ -521,7 +495,7 @@ static bool unpack_trans_table(struct aa_ext *e, struct aa_profile *profile)
goto fail; goto fail;
/* beginning with : requires an embedded \0, /* beginning with : requires an embedded \0,
* verify that exactly 1 internal \0 exists * verify that exactly 1 internal \0 exists
* trailing \0 already verified by unpack_strdup * trailing \0 already verified by aa_unpack_strdup
* *
* convert \0 back to : for label_parse * convert \0 back to : for label_parse
*/ */
@ -533,9 +507,9 @@ static bool unpack_trans_table(struct aa_ext *e, struct aa_profile *profile)
/* fail - all other cases with embedded \0 */ /* fail - all other cases with embedded \0 */
goto fail; goto fail;
} }
if (!unpack_nameX(e, AA_ARRAYEND, NULL)) if (!aa_unpack_nameX(e, AA_ARRAYEND, NULL))
goto fail; goto fail;
if (!unpack_nameX(e, AA_STRUCTEND, NULL)) if (!aa_unpack_nameX(e, AA_STRUCTEND, NULL))
goto fail; goto fail;
} }
return true; return true;
@ -550,21 +524,21 @@ static bool unpack_xattrs(struct aa_ext *e, struct aa_profile *profile)
{ {
void *pos = e->pos; void *pos = e->pos;
if (unpack_nameX(e, AA_STRUCT, "xattrs")) { if (aa_unpack_nameX(e, AA_STRUCT, "xattrs")) {
int i, size; int i, size;
size = unpack_array(e, NULL); size = aa_unpack_array(e, NULL);
profile->xattr_count = size; profile->xattr_count = size;
profile->xattrs = kcalloc(size, sizeof(char *), GFP_KERNEL); profile->xattrs = kcalloc(size, sizeof(char *), GFP_KERNEL);
if (!profile->xattrs) if (!profile->xattrs)
goto fail; goto fail;
for (i = 0; i < size; i++) { for (i = 0; i < size; i++) {
if (!unpack_strdup(e, &profile->xattrs[i], NULL)) if (!aa_unpack_strdup(e, &profile->xattrs[i], NULL))
goto fail; goto fail;
} }
if (!unpack_nameX(e, AA_ARRAYEND, NULL)) if (!aa_unpack_nameX(e, AA_ARRAYEND, NULL))
goto fail; goto fail;
if (!unpack_nameX(e, AA_STRUCTEND, NULL)) if (!aa_unpack_nameX(e, AA_STRUCTEND, NULL))
goto fail; goto fail;
} }
@ -580,8 +554,8 @@ static bool unpack_secmark(struct aa_ext *e, struct aa_profile *profile)
void *pos = e->pos; void *pos = e->pos;
int i, size; int i, size;
if (unpack_nameX(e, AA_STRUCT, "secmark")) { if (aa_unpack_nameX(e, AA_STRUCT, "secmark")) {
size = unpack_array(e, NULL); size = aa_unpack_array(e, NULL);
profile->secmark = kcalloc(size, sizeof(struct aa_secmark), profile->secmark = kcalloc(size, sizeof(struct aa_secmark),
GFP_KERNEL); GFP_KERNEL);
@ -595,12 +569,12 @@ static bool unpack_secmark(struct aa_ext *e, struct aa_profile *profile)
goto fail; goto fail;
if (!unpack_u8(e, &profile->secmark[i].deny, NULL)) if (!unpack_u8(e, &profile->secmark[i].deny, NULL))
goto fail; goto fail;
if (!unpack_strdup(e, &profile->secmark[i].label, NULL)) if (!aa_unpack_strdup(e, &profile->secmark[i].label, NULL))
goto fail; goto fail;
} }
if (!unpack_nameX(e, AA_ARRAYEND, NULL)) if (!aa_unpack_nameX(e, AA_ARRAYEND, NULL))
goto fail; goto fail;
if (!unpack_nameX(e, AA_STRUCTEND, NULL)) if (!aa_unpack_nameX(e, AA_STRUCTEND, NULL))
goto fail; goto fail;
} }
@ -624,26 +598,26 @@ static bool unpack_rlimits(struct aa_ext *e, struct aa_profile *profile)
void *pos = e->pos; void *pos = e->pos;
/* rlimits are optional */ /* rlimits are optional */
if (unpack_nameX(e, AA_STRUCT, "rlimits")) { if (aa_unpack_nameX(e, AA_STRUCT, "rlimits")) {
int i, size; int i, size;
u32 tmp = 0; u32 tmp = 0;
if (!unpack_u32(e, &tmp, NULL)) if (!aa_unpack_u32(e, &tmp, NULL))
goto fail; goto fail;
profile->rlimits.mask = tmp; profile->rlimits.mask = tmp;
size = unpack_array(e, NULL); size = aa_unpack_array(e, NULL);
if (size > RLIM_NLIMITS) if (size > RLIM_NLIMITS)
goto fail; goto fail;
for (i = 0; i < size; i++) { for (i = 0; i < size; i++) {
u64 tmp2 = 0; u64 tmp2 = 0;
int a = aa_map_resource(i); int a = aa_map_resource(i);
if (!unpack_u64(e, &tmp2, NULL)) if (!aa_unpack_u64(e, &tmp2, NULL))
goto fail; goto fail;
profile->rlimits.limits[a].rlim_max = tmp2; profile->rlimits.limits[a].rlim_max = tmp2;
} }
if (!unpack_nameX(e, AA_ARRAYEND, NULL)) if (!aa_unpack_nameX(e, AA_ARRAYEND, NULL))
goto fail; goto fail;
if (!unpack_nameX(e, AA_STRUCTEND, NULL)) if (!aa_unpack_nameX(e, AA_STRUCTEND, NULL))
goto fail; goto fail;
} }
return true; return true;
@ -691,9 +665,9 @@ static struct aa_profile *unpack_profile(struct aa_ext *e, char **ns_name)
*ns_name = NULL; *ns_name = NULL;
/* check that we have the right struct being passed */ /* check that we have the right struct being passed */
if (!unpack_nameX(e, AA_STRUCT, "profile")) if (!aa_unpack_nameX(e, AA_STRUCT, "profile"))
goto fail; goto fail;
if (!unpack_str(e, &name, NULL)) if (!aa_unpack_str(e, &name, NULL))
goto fail; goto fail;
if (*name == '\0') if (*name == '\0')
goto fail; goto fail;
@ -713,10 +687,10 @@ static struct aa_profile *unpack_profile(struct aa_ext *e, char **ns_name)
return ERR_PTR(-ENOMEM); return ERR_PTR(-ENOMEM);
/* profile renaming is optional */ /* profile renaming is optional */
(void) unpack_str(e, &profile->rename, "rename"); (void) aa_unpack_str(e, &profile->rename, "rename");
/* attachment string is optional */ /* attachment string is optional */
(void) unpack_str(e, &profile->attach, "attach"); (void) aa_unpack_str(e, &profile->attach, "attach");
/* xmatch is optional and may be NULL */ /* xmatch is optional and may be NULL */
profile->xmatch = unpack_dfa(e); profile->xmatch = unpack_dfa(e);
@ -728,7 +702,7 @@ static struct aa_profile *unpack_profile(struct aa_ext *e, char **ns_name)
} }
/* xmatch_len is not optional if xmatch is set */ /* xmatch_len is not optional if xmatch is set */
if (profile->xmatch) { if (profile->xmatch) {
if (!unpack_u32(e, &tmp, NULL)) { if (!aa_unpack_u32(e, &tmp, NULL)) {
info = "missing xmatch len"; info = "missing xmatch len";
goto fail; goto fail;
} }
@ -736,15 +710,15 @@ static struct aa_profile *unpack_profile(struct aa_ext *e, char **ns_name)
} }
/* disconnected attachment string is optional */ /* disconnected attachment string is optional */
(void) unpack_str(e, &profile->disconnected, "disconnected"); (void) aa_unpack_str(e, &profile->disconnected, "disconnected");
/* per profile debug flags (complain, audit) */ /* per profile debug flags (complain, audit) */
if (!unpack_nameX(e, AA_STRUCT, "flags")) { if (!aa_unpack_nameX(e, AA_STRUCT, "flags")) {
info = "profile missing flags"; info = "profile missing flags";
goto fail; goto fail;
} }
info = "failed to unpack profile flags"; info = "failed to unpack profile flags";
if (!unpack_u32(e, &tmp, NULL)) if (!aa_unpack_u32(e, &tmp, NULL))
goto fail; goto fail;
if (tmp & PACKED_FLAG_HAT) if (tmp & PACKED_FLAG_HAT)
profile->label.flags |= FLAG_HAT; profile->label.flags |= FLAG_HAT;
@ -752,7 +726,7 @@ static struct aa_profile *unpack_profile(struct aa_ext *e, char **ns_name)
profile->label.flags |= FLAG_DEBUG1; profile->label.flags |= FLAG_DEBUG1;
if (tmp & PACKED_FLAG_DEBUG2) if (tmp & PACKED_FLAG_DEBUG2)
profile->label.flags |= FLAG_DEBUG2; profile->label.flags |= FLAG_DEBUG2;
if (!unpack_u32(e, &tmp, NULL)) if (!aa_unpack_u32(e, &tmp, NULL))
goto fail; goto fail;
if (tmp == PACKED_MODE_COMPLAIN || (e->version & FORCE_COMPLAIN_FLAG)) { if (tmp == PACKED_MODE_COMPLAIN || (e->version & FORCE_COMPLAIN_FLAG)) {
profile->mode = APPARMOR_COMPLAIN; profile->mode = APPARMOR_COMPLAIN;
@ -766,16 +740,16 @@ static struct aa_profile *unpack_profile(struct aa_ext *e, char **ns_name)
} else { } else {
goto fail; goto fail;
} }
if (!unpack_u32(e, &tmp, NULL)) if (!aa_unpack_u32(e, &tmp, NULL))
goto fail; goto fail;
if (tmp) if (tmp)
profile->audit = AUDIT_ALL; profile->audit = AUDIT_ALL;
if (!unpack_nameX(e, AA_STRUCTEND, NULL)) if (!aa_unpack_nameX(e, AA_STRUCTEND, NULL))
goto fail; goto fail;
/* path_flags is optional */ /* path_flags is optional */
if (unpack_u32(e, &profile->path_flags, "path_flags")) if (aa_unpack_u32(e, &profile->path_flags, "path_flags"))
profile->path_flags |= profile->label.flags & profile->path_flags |= profile->label.flags &
PATH_MEDIATE_DELETED; PATH_MEDIATE_DELETED;
else else
@ -783,38 +757,38 @@ static struct aa_profile *unpack_profile(struct aa_ext *e, char **ns_name)
profile->path_flags = PATH_MEDIATE_DELETED; profile->path_flags = PATH_MEDIATE_DELETED;
info = "failed to unpack profile capabilities"; info = "failed to unpack profile capabilities";
if (!unpack_u32(e, &(profile->caps.allow.cap[0]), NULL)) if (!aa_unpack_u32(e, &(profile->caps.allow.cap[0]), NULL))
goto fail; goto fail;
if (!unpack_u32(e, &(profile->caps.audit.cap[0]), NULL)) if (!aa_unpack_u32(e, &(profile->caps.audit.cap[0]), NULL))
goto fail; goto fail;
if (!unpack_u32(e, &(profile->caps.quiet.cap[0]), NULL)) if (!aa_unpack_u32(e, &(profile->caps.quiet.cap[0]), NULL))
goto fail; goto fail;
if (!unpack_u32(e, &tmpcap.cap[0], NULL)) if (!aa_unpack_u32(e, &tmpcap.cap[0], NULL))
goto fail; goto fail;
info = "failed to unpack upper profile capabilities"; info = "failed to unpack upper profile capabilities";
if (unpack_nameX(e, AA_STRUCT, "caps64")) { if (aa_unpack_nameX(e, AA_STRUCT, "caps64")) {
/* optional upper half of 64 bit caps */ /* optional upper half of 64 bit caps */
if (!unpack_u32(e, &(profile->caps.allow.cap[1]), NULL)) if (!aa_unpack_u32(e, &(profile->caps.allow.cap[1]), NULL))
goto fail; goto fail;
if (!unpack_u32(e, &(profile->caps.audit.cap[1]), NULL)) if (!aa_unpack_u32(e, &(profile->caps.audit.cap[1]), NULL))
goto fail; goto fail;
if (!unpack_u32(e, &(profile->caps.quiet.cap[1]), NULL)) if (!aa_unpack_u32(e, &(profile->caps.quiet.cap[1]), NULL))
goto fail; goto fail;
if (!unpack_u32(e, &(tmpcap.cap[1]), NULL)) if (!aa_unpack_u32(e, &(tmpcap.cap[1]), NULL))
goto fail; goto fail;
if (!unpack_nameX(e, AA_STRUCTEND, NULL)) if (!aa_unpack_nameX(e, AA_STRUCTEND, NULL))
goto fail; goto fail;
} }
info = "failed to unpack extended profile capabilities"; info = "failed to unpack extended profile capabilities";
if (unpack_nameX(e, AA_STRUCT, "capsx")) { if (aa_unpack_nameX(e, AA_STRUCT, "capsx")) {
/* optional extended caps mediation mask */ /* optional extended caps mediation mask */
if (!unpack_u32(e, &(profile->caps.extended.cap[0]), NULL)) if (!aa_unpack_u32(e, &(profile->caps.extended.cap[0]), NULL))
goto fail; goto fail;
if (!unpack_u32(e, &(profile->caps.extended.cap[1]), NULL)) if (!aa_unpack_u32(e, &(profile->caps.extended.cap[1]), NULL))
goto fail; goto fail;
if (!unpack_nameX(e, AA_STRUCTEND, NULL)) if (!aa_unpack_nameX(e, AA_STRUCTEND, NULL))
goto fail; goto fail;
} }
@ -833,7 +807,7 @@ static struct aa_profile *unpack_profile(struct aa_ext *e, char **ns_name)
goto fail; goto fail;
} }
if (unpack_nameX(e, AA_STRUCT, "policydb")) { if (aa_unpack_nameX(e, AA_STRUCT, "policydb")) {
/* generic policy dfa - optional and may be NULL */ /* generic policy dfa - optional and may be NULL */
info = "failed to unpack policydb"; info = "failed to unpack policydb";
profile->policy.dfa = unpack_dfa(e); profile->policy.dfa = unpack_dfa(e);
@ -845,7 +819,7 @@ static struct aa_profile *unpack_profile(struct aa_ext *e, char **ns_name)
error = -EPROTO; error = -EPROTO;
goto fail; goto fail;
} }
if (!unpack_u32(e, &profile->policy.start[0], "start")) if (!aa_unpack_u32(e, &profile->policy.start[0], "start"))
/* default start state */ /* default start state */
profile->policy.start[0] = DFA_START; profile->policy.start[0] = DFA_START;
/* setup class index */ /* setup class index */
@ -855,7 +829,7 @@ static struct aa_profile *unpack_profile(struct aa_ext *e, char **ns_name)
profile->policy.start[0], profile->policy.start[0],
i); i);
} }
if (!unpack_nameX(e, AA_STRUCTEND, NULL)) if (!aa_unpack_nameX(e, AA_STRUCTEND, NULL))
goto fail; goto fail;
} else } else
profile->policy.dfa = aa_get_dfa(nulldfa); profile->policy.dfa = aa_get_dfa(nulldfa);
@ -868,7 +842,7 @@ static struct aa_profile *unpack_profile(struct aa_ext *e, char **ns_name)
info = "failed to unpack profile file rules"; info = "failed to unpack profile file rules";
goto fail; goto fail;
} else if (profile->file.dfa) { } else if (profile->file.dfa) {
if (!unpack_u32(e, &profile->file.start, "dfa_start")) if (!aa_unpack_u32(e, &profile->file.start, "dfa_start"))
/* default start state */ /* default start state */
profile->file.start = DFA_START; profile->file.start = DFA_START;
} else if (profile->policy.dfa && } else if (profile->policy.dfa &&
@ -883,7 +857,7 @@ static struct aa_profile *unpack_profile(struct aa_ext *e, char **ns_name)
goto fail; goto fail;
} }
if (unpack_nameX(e, AA_STRUCT, "data")) { if (aa_unpack_nameX(e, AA_STRUCT, "data")) {
info = "out of memory"; info = "out of memory";
profile->data = kzalloc(sizeof(*profile->data), GFP_KERNEL); profile->data = kzalloc(sizeof(*profile->data), GFP_KERNEL);
if (!profile->data) if (!profile->data)
@ -901,7 +875,7 @@ static struct aa_profile *unpack_profile(struct aa_ext *e, char **ns_name)
goto fail; goto fail;
} }
while (unpack_strdup(e, &key, NULL)) { while (aa_unpack_strdup(e, &key, NULL)) {
data = kzalloc(sizeof(*data), GFP_KERNEL); data = kzalloc(sizeof(*data), GFP_KERNEL);
if (!data) { if (!data) {
kfree_sensitive(key); kfree_sensitive(key);
@ -909,7 +883,7 @@ static struct aa_profile *unpack_profile(struct aa_ext *e, char **ns_name)
} }
data->key = key; data->key = key;
data->size = unpack_blob(e, &data->data, NULL); data->size = aa_unpack_blob(e, &data->data, NULL);
data->data = kvmemdup(data->data, data->size); data->data = kvmemdup(data->data, data->size);
if (data->size && !data->data) { if (data->size && !data->data) {
kfree_sensitive(data->key); kfree_sensitive(data->key);
@ -921,13 +895,13 @@ static struct aa_profile *unpack_profile(struct aa_ext *e, char **ns_name)
profile->data->p); profile->data->p);
} }
if (!unpack_nameX(e, AA_STRUCTEND, NULL)) { if (!aa_unpack_nameX(e, AA_STRUCTEND, NULL)) {
info = "failed to unpack end of key, value data table"; info = "failed to unpack end of key, value data table";
goto fail; goto fail;
} }
} }
if (!unpack_nameX(e, AA_STRUCTEND, NULL)) { if (!aa_unpack_nameX(e, AA_STRUCTEND, NULL)) {
info = "failed to unpack end of profile"; info = "failed to unpack end of profile";
goto fail; goto fail;
} }
@ -960,7 +934,7 @@ static int verify_header(struct aa_ext *e, int required, const char **ns)
*ns = NULL; *ns = NULL;
/* get the interface version */ /* get the interface version */
if (!unpack_u32(e, &e->version, "version")) { if (!aa_unpack_u32(e, &e->version, "version")) {
if (required) { if (required) {
audit_iface(NULL, NULL, NULL, "invalid profile format", audit_iface(NULL, NULL, NULL, "invalid profile format",
e, error); e, error);
@ -979,7 +953,7 @@ static int verify_header(struct aa_ext *e, int required, const char **ns)
} }
/* read the namespace if present */ /* read the namespace if present */
if (unpack_str(e, &name, "namespace")) { if (aa_unpack_str(e, &name, "namespace")) {
if (*name == '\0') { if (*name == '\0') {
audit_iface(NULL, NULL, NULL, "invalid namespace name", audit_iface(NULL, NULL, NULL, "invalid namespace name",
e, error); e, error);
@ -1251,7 +1225,3 @@ fail:
return error; return error;
} }
#ifdef CONFIG_SECURITY_APPARMOR_KUNIT_TEST
#include "policy_unpack_test.c"
#endif /* CONFIG_SECURITY_APPARMOR_KUNIT_TEST */
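
Instead of #include-ing the test file into policy_unpack.c, the helpers are now declared with VISIBLE_IF_KUNIT and exported into a dedicated symbol namespace, so the test can be built as a separate module. A rough sketch of what the <kunit/visibility.h> macros expand to (an approximation, not the verbatim header):

	#if IS_ENABLED(CONFIG_KUNIT)
	#define VISIBLE_IF_KUNIT		/* empty: drops 'static' so the test can link to it */
	#define EXPORT_SYMBOL_IF_KUNIT(symbol)	EXPORT_SYMBOL_NS(symbol, EXPORTED_FOR_KUNIT_TESTING)
	#else
	#define VISIBLE_IF_KUNIT		static
	#define EXPORT_SYMBOL_IF_KUNIT(symbol)
	#endif

That namespace is why the rebuilt test module below carries MODULE_IMPORT_NS(EXPORTED_FOR_KUNIT_TESTING).
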


@ -4,6 +4,7 @@
*/ */
#include <kunit/test.h> #include <kunit/test.h>
#include <kunit/visibility.h>
#include "include/policy.h" #include "include/policy.h"
#include "include/policy_unpack.h" #include "include/policy_unpack.h"
@ -43,6 +44,8 @@
#define TEST_ARRAY_BUF_OFFSET \ #define TEST_ARRAY_BUF_OFFSET \
(TEST_NAMED_ARRAY_BUF_OFFSET + 3 + strlen(TEST_ARRAY_NAME) + 1) (TEST_NAMED_ARRAY_BUF_OFFSET + 3 + strlen(TEST_ARRAY_NAME) + 1)
MODULE_IMPORT_NS(EXPORTED_FOR_KUNIT_TESTING);
struct policy_unpack_fixture { struct policy_unpack_fixture {
struct aa_ext *e; struct aa_ext *e;
size_t e_size; size_t e_size;
@ -125,16 +128,16 @@ static void policy_unpack_test_inbounds_when_inbounds(struct kunit *test)
{ {
struct policy_unpack_fixture *puf = test->priv; struct policy_unpack_fixture *puf = test->priv;
KUNIT_EXPECT_TRUE(test, inbounds(puf->e, 0)); KUNIT_EXPECT_TRUE(test, aa_inbounds(puf->e, 0));
KUNIT_EXPECT_TRUE(test, inbounds(puf->e, puf->e_size / 2)); KUNIT_EXPECT_TRUE(test, aa_inbounds(puf->e, puf->e_size / 2));
KUNIT_EXPECT_TRUE(test, inbounds(puf->e, puf->e_size)); KUNIT_EXPECT_TRUE(test, aa_inbounds(puf->e, puf->e_size));
} }
static void policy_unpack_test_inbounds_when_out_of_bounds(struct kunit *test) static void policy_unpack_test_inbounds_when_out_of_bounds(struct kunit *test)
{ {
struct policy_unpack_fixture *puf = test->priv; struct policy_unpack_fixture *puf = test->priv;
KUNIT_EXPECT_FALSE(test, inbounds(puf->e, puf->e_size + 1)); KUNIT_EXPECT_FALSE(test, aa_inbounds(puf->e, puf->e_size + 1));
} }
static void policy_unpack_test_unpack_array_with_null_name(struct kunit *test) static void policy_unpack_test_unpack_array_with_null_name(struct kunit *test)
@ -144,7 +147,7 @@ static void policy_unpack_test_unpack_array_with_null_name(struct kunit *test)
puf->e->pos += TEST_ARRAY_BUF_OFFSET; puf->e->pos += TEST_ARRAY_BUF_OFFSET;
array_size = unpack_array(puf->e, NULL); array_size = aa_unpack_array(puf->e, NULL);
KUNIT_EXPECT_EQ(test, array_size, (u16)TEST_ARRAY_SIZE); KUNIT_EXPECT_EQ(test, array_size, (u16)TEST_ARRAY_SIZE);
KUNIT_EXPECT_PTR_EQ(test, puf->e->pos, KUNIT_EXPECT_PTR_EQ(test, puf->e->pos,
@ -159,7 +162,7 @@ static void policy_unpack_test_unpack_array_with_name(struct kunit *test)
puf->e->pos += TEST_NAMED_ARRAY_BUF_OFFSET; puf->e->pos += TEST_NAMED_ARRAY_BUF_OFFSET;
array_size = unpack_array(puf->e, name); array_size = aa_unpack_array(puf->e, name);
KUNIT_EXPECT_EQ(test, array_size, (u16)TEST_ARRAY_SIZE); KUNIT_EXPECT_EQ(test, array_size, (u16)TEST_ARRAY_SIZE);
KUNIT_EXPECT_PTR_EQ(test, puf->e->pos, KUNIT_EXPECT_PTR_EQ(test, puf->e->pos,
@ -175,7 +178,7 @@ static void policy_unpack_test_unpack_array_out_of_bounds(struct kunit *test)
puf->e->pos += TEST_NAMED_ARRAY_BUF_OFFSET; puf->e->pos += TEST_NAMED_ARRAY_BUF_OFFSET;
puf->e->end = puf->e->start + TEST_ARRAY_BUF_OFFSET + sizeof(u16); puf->e->end = puf->e->start + TEST_ARRAY_BUF_OFFSET + sizeof(u16);
array_size = unpack_array(puf->e, name); array_size = aa_unpack_array(puf->e, name);
KUNIT_EXPECT_EQ(test, array_size, 0); KUNIT_EXPECT_EQ(test, array_size, 0);
KUNIT_EXPECT_PTR_EQ(test, puf->e->pos, KUNIT_EXPECT_PTR_EQ(test, puf->e->pos,
@ -189,7 +192,7 @@ static void policy_unpack_test_unpack_blob_with_null_name(struct kunit *test)
size_t size; size_t size;
puf->e->pos += TEST_BLOB_BUF_OFFSET; puf->e->pos += TEST_BLOB_BUF_OFFSET;
size = unpack_blob(puf->e, &blob, NULL); size = aa_unpack_blob(puf->e, &blob, NULL);
KUNIT_ASSERT_EQ(test, size, TEST_BLOB_DATA_SIZE); KUNIT_ASSERT_EQ(test, size, TEST_BLOB_DATA_SIZE);
KUNIT_EXPECT_TRUE(test, KUNIT_EXPECT_TRUE(test,
@ -203,7 +206,7 @@ static void policy_unpack_test_unpack_blob_with_name(struct kunit *test)
size_t size; size_t size;
puf->e->pos += TEST_NAMED_BLOB_BUF_OFFSET; puf->e->pos += TEST_NAMED_BLOB_BUF_OFFSET;
size = unpack_blob(puf->e, &blob, TEST_BLOB_NAME); size = aa_unpack_blob(puf->e, &blob, TEST_BLOB_NAME);
KUNIT_ASSERT_EQ(test, size, TEST_BLOB_DATA_SIZE); KUNIT_ASSERT_EQ(test, size, TEST_BLOB_DATA_SIZE);
KUNIT_EXPECT_TRUE(test, KUNIT_EXPECT_TRUE(test,
@ -222,7 +225,7 @@ static void policy_unpack_test_unpack_blob_out_of_bounds(struct kunit *test)
puf->e->end = puf->e->start + TEST_BLOB_BUF_OFFSET puf->e->end = puf->e->start + TEST_BLOB_BUF_OFFSET
+ TEST_BLOB_DATA_SIZE - 1; + TEST_BLOB_DATA_SIZE - 1;
size = unpack_blob(puf->e, &blob, TEST_BLOB_NAME); size = aa_unpack_blob(puf->e, &blob, TEST_BLOB_NAME);
KUNIT_EXPECT_EQ(test, size, 0); KUNIT_EXPECT_EQ(test, size, 0);
KUNIT_EXPECT_PTR_EQ(test, puf->e->pos, start); KUNIT_EXPECT_PTR_EQ(test, puf->e->pos, start);
@ -235,7 +238,7 @@ static void policy_unpack_test_unpack_str_with_null_name(struct kunit *test)
size_t size; size_t size;
puf->e->pos += TEST_STRING_BUF_OFFSET; puf->e->pos += TEST_STRING_BUF_OFFSET;
size = unpack_str(puf->e, &string, NULL); size = aa_unpack_str(puf->e, &string, NULL);
KUNIT_EXPECT_EQ(test, size, strlen(TEST_STRING_DATA) + 1); KUNIT_EXPECT_EQ(test, size, strlen(TEST_STRING_DATA) + 1);
KUNIT_EXPECT_STREQ(test, string, TEST_STRING_DATA); KUNIT_EXPECT_STREQ(test, string, TEST_STRING_DATA);
@ -247,7 +250,7 @@ static void policy_unpack_test_unpack_str_with_name(struct kunit *test)
const char *string = NULL; const char *string = NULL;
size_t size; size_t size;
size = unpack_str(puf->e, &string, TEST_STRING_NAME); size = aa_unpack_str(puf->e, &string, TEST_STRING_NAME);
KUNIT_EXPECT_EQ(test, size, strlen(TEST_STRING_DATA) + 1); KUNIT_EXPECT_EQ(test, size, strlen(TEST_STRING_DATA) + 1);
KUNIT_EXPECT_STREQ(test, string, TEST_STRING_DATA); KUNIT_EXPECT_STREQ(test, string, TEST_STRING_DATA);
@ -263,7 +266,7 @@ static void policy_unpack_test_unpack_str_out_of_bounds(struct kunit *test)
puf->e->end = puf->e->pos + TEST_STRING_BUF_OFFSET puf->e->end = puf->e->pos + TEST_STRING_BUF_OFFSET
+ strlen(TEST_STRING_DATA) - 1; + strlen(TEST_STRING_DATA) - 1;
size = unpack_str(puf->e, &string, TEST_STRING_NAME); size = aa_unpack_str(puf->e, &string, TEST_STRING_NAME);
KUNIT_EXPECT_EQ(test, size, 0); KUNIT_EXPECT_EQ(test, size, 0);
KUNIT_EXPECT_PTR_EQ(test, puf->e->pos, start); KUNIT_EXPECT_PTR_EQ(test, puf->e->pos, start);
@ -276,7 +279,7 @@ static void policy_unpack_test_unpack_strdup_with_null_name(struct kunit *test)
size_t size; size_t size;
puf->e->pos += TEST_STRING_BUF_OFFSET; puf->e->pos += TEST_STRING_BUF_OFFSET;
size = unpack_strdup(puf->e, &string, NULL); size = aa_unpack_strdup(puf->e, &string, NULL);
KUNIT_EXPECT_EQ(test, size, strlen(TEST_STRING_DATA) + 1); KUNIT_EXPECT_EQ(test, size, strlen(TEST_STRING_DATA) + 1);
KUNIT_EXPECT_FALSE(test, KUNIT_EXPECT_FALSE(test,
@ -291,7 +294,7 @@ static void policy_unpack_test_unpack_strdup_with_name(struct kunit *test)
char *string = NULL; char *string = NULL;
size_t size; size_t size;
size = unpack_strdup(puf->e, &string, TEST_STRING_NAME); size = aa_unpack_strdup(puf->e, &string, TEST_STRING_NAME);
KUNIT_EXPECT_EQ(test, size, strlen(TEST_STRING_DATA) + 1); KUNIT_EXPECT_EQ(test, size, strlen(TEST_STRING_DATA) + 1);
KUNIT_EXPECT_FALSE(test, KUNIT_EXPECT_FALSE(test,
@ -310,7 +313,7 @@ static void policy_unpack_test_unpack_strdup_out_of_bounds(struct kunit *test)
puf->e->end = puf->e->pos + TEST_STRING_BUF_OFFSET puf->e->end = puf->e->pos + TEST_STRING_BUF_OFFSET
+ strlen(TEST_STRING_DATA) - 1; + strlen(TEST_STRING_DATA) - 1;
size = unpack_strdup(puf->e, &string, TEST_STRING_NAME); size = aa_unpack_strdup(puf->e, &string, TEST_STRING_NAME);
KUNIT_EXPECT_EQ(test, size, 0); KUNIT_EXPECT_EQ(test, size, 0);
KUNIT_EXPECT_NULL(test, string); KUNIT_EXPECT_NULL(test, string);
@ -324,7 +327,7 @@ static void policy_unpack_test_unpack_nameX_with_null_name(struct kunit *test)
puf->e->pos += TEST_U32_BUF_OFFSET; puf->e->pos += TEST_U32_BUF_OFFSET;
success = unpack_nameX(puf->e, AA_U32, NULL); success = aa_unpack_nameX(puf->e, AA_U32, NULL);
KUNIT_EXPECT_TRUE(test, success); KUNIT_EXPECT_TRUE(test, success);
KUNIT_EXPECT_PTR_EQ(test, puf->e->pos, KUNIT_EXPECT_PTR_EQ(test, puf->e->pos,
@ -338,7 +341,7 @@ static void policy_unpack_test_unpack_nameX_with_wrong_code(struct kunit *test)
puf->e->pos += TEST_U32_BUF_OFFSET; puf->e->pos += TEST_U32_BUF_OFFSET;
success = unpack_nameX(puf->e, AA_BLOB, NULL); success = aa_unpack_nameX(puf->e, AA_BLOB, NULL);
KUNIT_EXPECT_FALSE(test, success); KUNIT_EXPECT_FALSE(test, success);
KUNIT_EXPECT_PTR_EQ(test, puf->e->pos, KUNIT_EXPECT_PTR_EQ(test, puf->e->pos,
@ -353,7 +356,7 @@ static void policy_unpack_test_unpack_nameX_with_name(struct kunit *test)
puf->e->pos += TEST_NAMED_U32_BUF_OFFSET; puf->e->pos += TEST_NAMED_U32_BUF_OFFSET;
success = unpack_nameX(puf->e, AA_U32, name); success = aa_unpack_nameX(puf->e, AA_U32, name);
KUNIT_EXPECT_TRUE(test, success); KUNIT_EXPECT_TRUE(test, success);
KUNIT_EXPECT_PTR_EQ(test, puf->e->pos, KUNIT_EXPECT_PTR_EQ(test, puf->e->pos,
@ -368,7 +371,7 @@ static void policy_unpack_test_unpack_nameX_with_wrong_name(struct kunit *test)
puf->e->pos += TEST_NAMED_U32_BUF_OFFSET; puf->e->pos += TEST_NAMED_U32_BUF_OFFSET;
success = unpack_nameX(puf->e, AA_U32, name); success = aa_unpack_nameX(puf->e, AA_U32, name);
KUNIT_EXPECT_FALSE(test, success); KUNIT_EXPECT_FALSE(test, success);
KUNIT_EXPECT_PTR_EQ(test, puf->e->pos, KUNIT_EXPECT_PTR_EQ(test, puf->e->pos,
@ -389,7 +392,7 @@ static void policy_unpack_test_unpack_u16_chunk_basic(struct kunit *test)
*/ */
puf->e->end += TEST_U16_DATA; puf->e->end += TEST_U16_DATA;
size = unpack_u16_chunk(puf->e, &chunk); size = aa_unpack_u16_chunk(puf->e, &chunk);
KUNIT_EXPECT_PTR_EQ(test, chunk, KUNIT_EXPECT_PTR_EQ(test, chunk,
puf->e->start + TEST_U16_OFFSET + 2); puf->e->start + TEST_U16_OFFSET + 2);
@ -406,7 +409,7 @@ static void policy_unpack_test_unpack_u16_chunk_out_of_bounds_1(
puf->e->pos = puf->e->end - 1; puf->e->pos = puf->e->end - 1;
size = unpack_u16_chunk(puf->e, &chunk); size = aa_unpack_u16_chunk(puf->e, &chunk);
KUNIT_EXPECT_EQ(test, size, 0); KUNIT_EXPECT_EQ(test, size, 0);
KUNIT_EXPECT_NULL(test, chunk); KUNIT_EXPECT_NULL(test, chunk);
@ -428,7 +431,7 @@ static void policy_unpack_test_unpack_u16_chunk_out_of_bounds_2(
*/ */
puf->e->end = puf->e->pos + TEST_U16_DATA - 1; puf->e->end = puf->e->pos + TEST_U16_DATA - 1;
size = unpack_u16_chunk(puf->e, &chunk); size = aa_unpack_u16_chunk(puf->e, &chunk);
KUNIT_EXPECT_EQ(test, size, 0); KUNIT_EXPECT_EQ(test, size, 0);
KUNIT_EXPECT_NULL(test, chunk); KUNIT_EXPECT_NULL(test, chunk);
@ -443,7 +446,7 @@ static void policy_unpack_test_unpack_u32_with_null_name(struct kunit *test)
puf->e->pos += TEST_U32_BUF_OFFSET; puf->e->pos += TEST_U32_BUF_OFFSET;
success = unpack_u32(puf->e, &data, NULL); success = aa_unpack_u32(puf->e, &data, NULL);
KUNIT_EXPECT_TRUE(test, success); KUNIT_EXPECT_TRUE(test, success);
KUNIT_EXPECT_EQ(test, data, TEST_U32_DATA); KUNIT_EXPECT_EQ(test, data, TEST_U32_DATA);
@ -460,7 +463,7 @@ static void policy_unpack_test_unpack_u32_with_name(struct kunit *test)
puf->e->pos += TEST_NAMED_U32_BUF_OFFSET; puf->e->pos += TEST_NAMED_U32_BUF_OFFSET;
success = unpack_u32(puf->e, &data, name); success = aa_unpack_u32(puf->e, &data, name);
KUNIT_EXPECT_TRUE(test, success); KUNIT_EXPECT_TRUE(test, success);
KUNIT_EXPECT_EQ(test, data, TEST_U32_DATA); KUNIT_EXPECT_EQ(test, data, TEST_U32_DATA);
@ -478,7 +481,7 @@ static void policy_unpack_test_unpack_u32_out_of_bounds(struct kunit *test)
puf->e->pos += TEST_NAMED_U32_BUF_OFFSET; puf->e->pos += TEST_NAMED_U32_BUF_OFFSET;
puf->e->end = puf->e->start + TEST_U32_BUF_OFFSET + sizeof(u32); puf->e->end = puf->e->start + TEST_U32_BUF_OFFSET + sizeof(u32);
success = unpack_u32(puf->e, &data, name); success = aa_unpack_u32(puf->e, &data, name);
KUNIT_EXPECT_FALSE(test, success); KUNIT_EXPECT_FALSE(test, success);
KUNIT_EXPECT_PTR_EQ(test, puf->e->pos, KUNIT_EXPECT_PTR_EQ(test, puf->e->pos,
@ -493,7 +496,7 @@ static void policy_unpack_test_unpack_u64_with_null_name(struct kunit *test)
puf->e->pos += TEST_U64_BUF_OFFSET; puf->e->pos += TEST_U64_BUF_OFFSET;
success = unpack_u64(puf->e, &data, NULL); success = aa_unpack_u64(puf->e, &data, NULL);
KUNIT_EXPECT_TRUE(test, success); KUNIT_EXPECT_TRUE(test, success);
KUNIT_EXPECT_EQ(test, data, TEST_U64_DATA); KUNIT_EXPECT_EQ(test, data, TEST_U64_DATA);
@ -510,7 +513,7 @@ static void policy_unpack_test_unpack_u64_with_name(struct kunit *test)
puf->e->pos += TEST_NAMED_U64_BUF_OFFSET; puf->e->pos += TEST_NAMED_U64_BUF_OFFSET;
success = unpack_u64(puf->e, &data, name); success = aa_unpack_u64(puf->e, &data, name);
KUNIT_EXPECT_TRUE(test, success); KUNIT_EXPECT_TRUE(test, success);
KUNIT_EXPECT_EQ(test, data, TEST_U64_DATA); KUNIT_EXPECT_EQ(test, data, TEST_U64_DATA);
@ -528,7 +531,7 @@ static void policy_unpack_test_unpack_u64_out_of_bounds(struct kunit *test)
puf->e->pos += TEST_NAMED_U64_BUF_OFFSET; puf->e->pos += TEST_NAMED_U64_BUF_OFFSET;
puf->e->end = puf->e->start + TEST_U64_BUF_OFFSET + sizeof(u64); puf->e->end = puf->e->start + TEST_U64_BUF_OFFSET + sizeof(u64);
success = unpack_u64(puf->e, &data, name); success = aa_unpack_u64(puf->e, &data, name);
KUNIT_EXPECT_FALSE(test, success); KUNIT_EXPECT_FALSE(test, success);
KUNIT_EXPECT_PTR_EQ(test, puf->e->pos, KUNIT_EXPECT_PTR_EQ(test, puf->e->pos,
@ -538,7 +541,7 @@ static void policy_unpack_test_unpack_u64_out_of_bounds(struct kunit *test)
static void policy_unpack_test_unpack_X_code_match(struct kunit *test) static void policy_unpack_test_unpack_X_code_match(struct kunit *test)
{ {
struct policy_unpack_fixture *puf = test->priv; struct policy_unpack_fixture *puf = test->priv;
bool success = unpack_X(puf->e, AA_NAME); bool success = aa_unpack_X(puf->e, AA_NAME);
KUNIT_EXPECT_TRUE(test, success); KUNIT_EXPECT_TRUE(test, success);
KUNIT_EXPECT_TRUE(test, puf->e->pos == puf->e->start + 1); KUNIT_EXPECT_TRUE(test, puf->e->pos == puf->e->start + 1);
@ -547,7 +550,7 @@ static void policy_unpack_test_unpack_X_code_match(struct kunit *test)
static void policy_unpack_test_unpack_X_code_mismatch(struct kunit *test) static void policy_unpack_test_unpack_X_code_mismatch(struct kunit *test)
{ {
struct policy_unpack_fixture *puf = test->priv; struct policy_unpack_fixture *puf = test->priv;
bool success = unpack_X(puf->e, AA_STRING); bool success = aa_unpack_X(puf->e, AA_STRING);
KUNIT_EXPECT_FALSE(test, success); KUNIT_EXPECT_FALSE(test, success);
KUNIT_EXPECT_TRUE(test, puf->e->pos == puf->e->start); KUNIT_EXPECT_TRUE(test, puf->e->pos == puf->e->start);
@ -559,7 +562,7 @@ static void policy_unpack_test_unpack_X_out_of_bounds(struct kunit *test)
bool success; bool success;
puf->e->pos = puf->e->end; puf->e->pos = puf->e->end;
success = unpack_X(puf->e, AA_NAME); success = aa_unpack_X(puf->e, AA_NAME);
KUNIT_EXPECT_FALSE(test, success); KUNIT_EXPECT_FALSE(test, success);
} }
@ -605,3 +608,5 @@ static struct kunit_suite apparmor_policy_unpack_test_module = {
}; };
kunit_test_suite(apparmor_policy_unpack_test_module); kunit_test_suite(apparmor_policy_unpack_test_module);
MODULE_LICENSE("GPL");


@ -192,28 +192,30 @@ def _map_to_overall_status(test_status: kunit_parser.TestStatus) -> KunitStatus:
def parse_tests(request: KunitParseRequest, metadata: kunit_json.Metadata, input_data: Iterable[str]) -> Tuple[KunitResult, kunit_parser.Test]: def parse_tests(request: KunitParseRequest, metadata: kunit_json.Metadata, input_data: Iterable[str]) -> Tuple[KunitResult, kunit_parser.Test]:
parse_start = time.time() parse_start = time.time()
test_result = kunit_parser.Test()
if request.raw_output: if request.raw_output:
# Treat unparsed results as one passing test. # Treat unparsed results as one passing test.
test_result.status = kunit_parser.TestStatus.SUCCESS fake_test = kunit_parser.Test()
test_result.counts.passed = 1 fake_test.status = kunit_parser.TestStatus.SUCCESS
fake_test.counts.passed = 1
output: Iterable[str] = input_data output: Iterable[str] = input_data
if request.raw_output == 'all': if request.raw_output == 'all':
pass pass
elif request.raw_output == 'kunit': elif request.raw_output == 'kunit':
output = kunit_parser.extract_tap_lines(output, lstrip=False) output = kunit_parser.extract_tap_lines(output)
for line in output: for line in output:
print(line.rstrip()) print(line.rstrip())
parse_time = time.time() - parse_start
return KunitResult(KunitStatus.SUCCESS, parse_time), fake_test
else:
test_result = kunit_parser.parse_run_tests(input_data) # Actually parse the test results.
parse_end = time.time() test = kunit_parser.parse_run_tests(input_data)
parse_time = time.time() - parse_start
if request.json: if request.json:
json_str = kunit_json.get_json_result( json_str = kunit_json.get_json_result(
test=test_result, test=test,
metadata=metadata) metadata=metadata)
if request.json == 'stdout': if request.json == 'stdout':
print(json_str) print(json_str)
@ -223,10 +225,10 @@ def parse_tests(request: KunitParseRequest, metadata: kunit_json.Metadata, input
stdout.print_with_timestamp("Test results stored in %s" % stdout.print_with_timestamp("Test results stored in %s" %
os.path.abspath(request.json)) os.path.abspath(request.json))
if test_result.status != kunit_parser.TestStatus.SUCCESS: if test.status != kunit_parser.TestStatus.SUCCESS:
return KunitResult(KunitStatus.TEST_FAILURE, parse_end - parse_start), test_result return KunitResult(KunitStatus.TEST_FAILURE, parse_time), test
return KunitResult(KunitStatus.SUCCESS, parse_end - parse_start), test_result return KunitResult(KunitStatus.SUCCESS, parse_time), test
def run_tests(linux: kunit_kernel.LinuxSourceTree, def run_tests(linux: kunit_kernel.LinuxSourceTree,
request: KunitRequest) -> KunitResult: request: KunitRequest) -> KunitResult:
@ -359,14 +361,14 @@ def add_exec_opts(parser) -> None:
choices=['suite', 'test']) choices=['suite', 'test'])
def add_parse_opts(parser) -> None: def add_parse_opts(parser) -> None:
parser.add_argument('--raw_output', help='If set don\'t format output from kernel. ' parser.add_argument('--raw_output', help='If set don\'t parse output from kernel. '
'If set to --raw_output=kunit, filters to just KUnit output.', 'By default, filters to just KUnit output. Use '
'--raw_output=all to show everything',
type=str, nargs='?', const='all', default=None, choices=['all', 'kunit']) type=str, nargs='?', const='all', default=None, choices=['all', 'kunit'])
parser.add_argument('--json', parser.add_argument('--json',
nargs='?', nargs='?',
help='Stores test results in a JSON, and either ' help='Prints parsed test results as JSON to stdout or a file if '
'prints to stdout or saves to file if a ' 'a filename is specified. Does nothing if --raw_output is set.',
'filename is specified',
type=str, const='stdout', default=None, metavar='FILE') type=str, const='stdout', default=None, metavar='FILE')
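
The reworked help text matches the new control flow above: --raw_output means "don't parse", and --json then does nothing because the raw-output branch returns before the JSON handling. For example, from the top of a kernel tree (flags as defined in the argparse calls above):

	./tools/testing/kunit/kunit.py run --json=stdout        # parsed results, printed as JSON
	./tools/testing/kunit/kunit.py run --raw_output=kunit   # unparsed output, KUnit lines only
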


@ -10,8 +10,10 @@
# Author: Rae Moar <rmoar@google.com> # Author: Rae Moar <rmoar@google.com>
from __future__ import annotations from __future__ import annotations
from dataclasses import dataclass
import re import re
import sys import sys
import textwrap
from enum import Enum, auto from enum import Enum, auto
from typing import Iterable, Iterator, List, Optional, Tuple from typing import Iterable, Iterator, List, Optional, Tuple
@ -58,6 +60,10 @@ class Test:
self.counts.errors += 1 self.counts.errors += 1
stdout.print_with_timestamp(stdout.red('[ERROR]') + f' Test: {self.name}: {error_message}') stdout.print_with_timestamp(stdout.red('[ERROR]') + f' Test: {self.name}: {error_message}')
def ok_status(self) -> bool:
"""Returns true if the status was ok, i.e. passed or skipped."""
return self.status in (TestStatus.SUCCESS, TestStatus.SKIPPED)
class TestStatus(Enum): class TestStatus(Enum):
"""An enumeration class to represent the status of a test.""" """An enumeration class to represent the status of a test."""
SUCCESS = auto() SUCCESS = auto()
@ -67,27 +73,17 @@ class TestStatus(Enum):
NO_TESTS = auto() NO_TESTS = auto()
FAILURE_TO_PARSE_TESTS = auto() FAILURE_TO_PARSE_TESTS = auto()
@dataclass
class TestCounts: class TestCounts:
""" """
Tracks the counts of statuses of all test cases and any errors within Tracks the counts of statuses of all test cases and any errors within
a Test. a Test.
Attributes:
passed : int - the number of tests that have passed
failed : int - the number of tests that have failed
crashed : int - the number of tests that have crashed
skipped : int - the number of tests that have skipped
errors : int - the number of errors in the test and subtests
""" """
def __init__(self): passed: int = 0
"""Creates TestCounts object with counts of all test failed: int = 0
statuses and test errors set to 0. crashed: int = 0
""" skipped: int = 0
self.passed = 0 errors: int = 0
self.failed = 0
self.crashed = 0
self.skipped = 0
self.errors = 0
def __str__(self) -> str: def __str__(self) -> str:
"""Returns the string representation of a TestCounts object.""" """Returns the string representation of a TestCounts object."""
@ -213,12 +209,12 @@ class LineStream:
# Parsing helper methods: # Parsing helper methods:
KTAP_START = re.compile(r'KTAP version ([0-9]+)$') KTAP_START = re.compile(r'\s*KTAP version ([0-9]+)$')
TAP_START = re.compile(r'TAP version ([0-9]+)$') TAP_START = re.compile(r'\s*TAP version ([0-9]+)$')
KTAP_END = re.compile('(List of all partitions:|' KTAP_END = re.compile(r'\s*(List of all partitions:|'
'Kernel panic - not syncing: VFS:|reboot: System halted)') 'Kernel panic - not syncing: VFS:|reboot: System halted)')
def extract_tap_lines(kernel_output: Iterable[str], lstrip=True) -> LineStream: def extract_tap_lines(kernel_output: Iterable[str]) -> LineStream:
"""Extracts KTAP lines from the kernel output.""" """Extracts KTAP lines from the kernel output."""
def isolate_ktap_output(kernel_output: Iterable[str]) \ def isolate_ktap_output(kernel_output: Iterable[str]) \
-> Iterator[Tuple[int, str]]: -> Iterator[Tuple[int, str]]:
@ -244,11 +240,8 @@ def extract_tap_lines(kernel_output: Iterable[str], lstrip=True) -> LineStream:
# stop extracting KTAP lines # stop extracting KTAP lines
break break
elif started: elif started:
# remove the prefix and optionally any leading # remove the prefix, if any.
# whitespace. Our parsing logic relies on this.
line = line[prefix_len:] line = line[prefix_len:]
if lstrip:
line = line.lstrip()
yield line_num, line yield line_num, line
return LineStream(lines=isolate_ktap_output(kernel_output)) return LineStream(lines=isolate_ktap_output(kernel_output))
@ -300,10 +293,10 @@ def parse_ktap_header(lines: LineStream, test: Test) -> bool:
check_version(version_num, TAP_VERSIONS, 'TAP', test) check_version(version_num, TAP_VERSIONS, 'TAP', test)
else: else:
return False return False
test.log.append(lines.pop()) lines.pop()
return True return True
TEST_HEADER = re.compile(r'^# Subtest: (.*)$') TEST_HEADER = re.compile(r'^\s*# Subtest: (.*)$')
def parse_test_header(lines: LineStream, test: Test) -> bool: def parse_test_header(lines: LineStream, test: Test) -> bool:
""" """
@ -323,11 +316,11 @@ def parse_test_header(lines: LineStream, test: Test) -> bool:
match = TEST_HEADER.match(lines.peek()) match = TEST_HEADER.match(lines.peek())
if not match: if not match:
return False return False
test.log.append(lines.pop())
test.name = match.group(1) test.name = match.group(1)
lines.pop()
return True return True
TEST_PLAN = re.compile(r'1\.\.([0-9]+)') TEST_PLAN = re.compile(r'^\s*1\.\.([0-9]+)')
def parse_test_plan(lines: LineStream, test: Test) -> bool: def parse_test_plan(lines: LineStream, test: Test) -> bool:
""" """
@ -350,14 +343,14 @@ def parse_test_plan(lines: LineStream, test: Test) -> bool:
if not match: if not match:
test.expected_count = None test.expected_count = None
return False return False
test.log.append(lines.pop())
expected_count = int(match.group(1)) expected_count = int(match.group(1))
test.expected_count = expected_count test.expected_count = expected_count
lines.pop()
return True return True
TEST_RESULT = re.compile(r'^(ok|not ok) ([0-9]+) (- )?([^#]*)( # .*)?$') TEST_RESULT = re.compile(r'^\s*(ok|not ok) ([0-9]+) (- )?([^#]*)( # .*)?$')
TEST_RESULT_SKIP = re.compile(r'^(ok|not ok) ([0-9]+) (- )?(.*) # SKIP(.*)$') TEST_RESULT_SKIP = re.compile(r'^\s*(ok|not ok) ([0-9]+) (- )?(.*) # SKIP(.*)$')
def peek_test_name_match(lines: LineStream, test: Test) -> bool: def peek_test_name_match(lines: LineStream, test: Test) -> bool:
""" """
@ -414,7 +407,7 @@ def parse_test_result(lines: LineStream, test: Test,
# Check if line matches test result line format # Check if line matches test result line format
if not match: if not match:
return False return False
test.log.append(lines.pop()) lines.pop()
# Set name of test object # Set name of test object
if skip_match: if skip_match:
@ -446,6 +439,7 @@ def parse_diagnostic(lines: LineStream) -> List[str]:
- '# Subtest: [test name]' - '# Subtest: [test name]'
- '[ok|not ok] [test number] [-] [test name] [optional skip - '[ok|not ok] [test number] [-] [test name] [optional skip
directive]' directive]'
- 'KTAP version [version number]'
Parameters: Parameters:
lines - LineStream of KTAP output to parse lines - LineStream of KTAP output to parse
@ -454,8 +448,9 @@ def parse_diagnostic(lines: LineStream) -> List[str]:
Log of diagnostic lines Log of diagnostic lines
""" """
log = [] # type: List[str] log = [] # type: List[str]
while lines and not TEST_RESULT.match(lines.peek()) and not \ non_diagnostic_lines = [TEST_RESULT, TEST_HEADER, KTAP_START]
TEST_HEADER.match(lines.peek()): while lines and not any(re.match(lines.peek())
for re in non_diagnostic_lines):
log.append(lines.pop()) log.append(lines.pop())
return log return log
@ -501,17 +496,22 @@ def print_test_header(test: Test) -> None:
test - Test object representing current test being printed test - Test object representing current test being printed
""" """
message = test.name message = test.name
if message != "":
# Add a leading space before the subtest counts only if a test name
# is provided using a "# Subtest" header line.
message += " "
if test.expected_count: if test.expected_count:
if test.expected_count == 1: if test.expected_count == 1:
-			message += ' (1 subtest)'
+			message += '(1 subtest)'
		else:
-			message += f' ({test.expected_count} subtests)'
+			message += f'({test.expected_count} subtests)'
stdout.print_with_timestamp(format_test_divider(message, len(message))) stdout.print_with_timestamp(format_test_divider(message, len(message)))
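# Rough shape of the resulting header text (a sketch of the logic above, not
# the real helper): a test named by a "# Subtest" line gets a separating
# space before the count, a nameless KTAP-only header does not.
def header_message(name: str, expected_count: int) -> str:
    message = name
    if message != "":
        message += " "
    if expected_count == 1:
        return message + '(1 subtest)'
    return message + f'({expected_count} subtests)'

print(header_message('suite', 1))  # 'suite (1 subtest)'
print(header_message('', 3))       # '(3 subtests)'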
def print_log(log: Iterable[str]) -> None: def print_log(log: Iterable[str]) -> None:
"""Prints all strings in saved log for test in yellow.""" """Prints all strings in saved log for test in yellow."""
-	for m in log:
-		stdout.print_with_timestamp(stdout.yellow(m))
+	formatted = textwrap.dedent('\n'.join(log))
+	for line in formatted.splitlines():
+		stdout.print_with_timestamp(stdout.yellow(line))
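# Why textwrap.dedent: only the indentation common to the whole log is
# stripped, so relative indentation inside the test output survives when it
# is re-printed (the 'Indented more.' parser test below relies on this).
import textwrap

log = ['        Test output.', '          Indented more.']
for line in textwrap.dedent('\n'.join(log)).splitlines():
    print(repr(line))
# 'Test output.'
# '  Indented more.'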
def format_test_result(test: Test) -> str: def format_test_result(test: Test) -> str:
""" """
@ -565,6 +565,40 @@ def print_test_footer(test: Test) -> None:
stdout.print_with_timestamp(format_test_divider(message, stdout.print_with_timestamp(format_test_divider(message,
len(message) - stdout.color_len())) len(message) - stdout.color_len()))
def _summarize_failed_tests(test: Test) -> str:
"""Tries to summarize all the failing subtests in `test`."""
def failed_names(test: Test, parent_name: str) -> List[str]:
# Note: we use 'main' internally for the top-level test.
if not parent_name or parent_name == 'main':
full_name = test.name
else:
full_name = parent_name + '.' + test.name
if not test.subtests: # this is a leaf node
return [full_name]
# If all the children failed, just say this subtest failed.
# Don't summarize it down "the top-level test failed", though.
failed_subtests = [sub for sub in test.subtests if not sub.ok_status()]
if parent_name and len(failed_subtests) == len(test.subtests):
return [full_name]
all_failures = [] # type: List[str]
for t in failed_subtests:
all_failures.extend(failed_names(t, full_name))
return all_failures
failures = failed_names(test, '')
# If there are too many failures, printing them out will just be noisy.
if len(failures) > 10: # this is an arbitrary limit
return ''
return 'Failures: ' + ', '.join(failures)
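# Standalone sketch of the naming scheme above (FakeTest is a stand-in for
# the parser's Test objects, using only .name/.ok/.subtests): a suite whose
# subtests all failed is reported once by its own name, otherwise the failing
# leaves are reported as dotted paths.
from dataclasses import dataclass, field
from typing import List

@dataclass
class FakeTest:
    name: str
    ok: bool = True
    subtests: List['FakeTest'] = field(default_factory=list)

def failed_names(test: FakeTest, parent_name: str) -> List[str]:
    full_name = test.name if not parent_name or parent_name == 'main' \
        else parent_name + '.' + test.name
    if not test.subtests:
        return [full_name]
    failed = [sub for sub in test.subtests if not sub.ok]
    if parent_name and len(failed) == len(test.subtests):
        return [full_name]
    names: List[str] = []
    for sub in failed:
        names.extend(failed_names(sub, full_name))
    return names

suite = FakeTest('some_failed_suite', ok=False,
                 subtests=[FakeTest('test1'), FakeTest('test2', ok=False)])
main = FakeTest('main', ok=False, subtests=[suite])
print('Failures: ' + ', '.join(failed_names(main, '')))
# Failures: some_failed_suite.test2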
def print_summary_line(test: Test) -> None: def print_summary_line(test: Test) -> None:
""" """
Prints summary line of test object. Color of line is dependent on Prints summary line of test object. Color of line is dependent on
@ -587,6 +621,15 @@ def print_summary_line(test: Test) -> None:
color = stdout.red color = stdout.red
stdout.print_with_timestamp(color(f'Testing complete. {test.counts}')) stdout.print_with_timestamp(color(f'Testing complete. {test.counts}'))
# Summarize failures that might have gone off-screen since we had a lot
# of tests (arbitrarily defined as >=100 for now).
if test.ok_status() or test.counts.total() < 100:
return
summarized = _summarize_failed_tests(test)
if not summarized:
return
stdout.print_with_timestamp(color(summarized))
# Other methods: # Other methods:
def bubble_up_test_results(test: Test) -> None: def bubble_up_test_results(test: Test) -> None:
@ -609,7 +652,7 @@ def bubble_up_test_results(test: Test) -> None:
elif test.counts.get_status() == TestStatus.TEST_CRASHED: elif test.counts.get_status() == TestStatus.TEST_CRASHED:
test.status = TestStatus.TEST_CRASHED test.status = TestStatus.TEST_CRASHED
-def parse_test(lines: LineStream, expected_num: int, log: List[str]) -> Test:
+def parse_test(lines: LineStream, expected_num: int, log: List[str], is_subtest: bool) -> Test:
""" """
Finds next test to parse in LineStream, creates new Test object, Finds next test to parse in LineStream, creates new Test object,
parses any subtests of the test, populates Test object with all parses any subtests of the test, populates Test object with all
@ -627,15 +670,32 @@ def parse_test(lines: LineStream, expected_num: int, log: List[str]) -> Test:
1..4 1..4
[subtests] [subtests]
-	- Subtest header line
+	- Subtest header (must include either the KTAP version line or
+	  "# Subtest" header line)
-	Example:
+	Example (preferred format with both KTAP version line and
+	  "# Subtest" line):
KTAP version 1
# Subtest: name
1..3
[subtests]
ok 1 name
Example (only "# Subtest" line):
# Subtest: name # Subtest: name
1..3 1..3
[subtests] [subtests]
ok 1 name ok 1 name
Example (only KTAP version line, compliant with KTAP v1 spec):
KTAP version 1
1..3
[subtests]
ok 1 name
- Test result line - Test result line
Example: Example:
@ -647,28 +707,29 @@ def parse_test(lines: LineStream, expected_num: int, log: List[str]) -> Test:
expected_num - expected test number for test to be parsed expected_num - expected test number for test to be parsed
log - list of strings containing any preceding diagnostic lines log - list of strings containing any preceding diagnostic lines
corresponding to the current test corresponding to the current test
is_subtest - boolean indicating whether test is a subtest
Return: Return:
Test object populated with characteristics and any subtests Test object populated with characteristics and any subtests
""" """
test = Test() test = Test()
test.log.extend(log) test.log.extend(log)
-	parent_test = False
-	main = parse_ktap_header(lines, test)
-	if main:
-		# If KTAP/TAP header is found, attempt to parse
+	if not is_subtest:
+		# If parsing the main/top-level test, parse KTAP version line and
		# test plan
		test.name = "main"
+		ktap_line = parse_ktap_header(lines, test)
		parse_test_plan(lines, test)
		parent_test = True
	else:
-		# If KTAP/TAP header is not found, test must be subtest
-		# header or test result line so parse attempt to parser
-		# subtest header
-		parent_test = parse_test_header(lines, test)
+		# If not the main test, attempt to parse a test header containing
+		# the KTAP version line and/or subtest header line
+		ktap_line = parse_ktap_header(lines, test)
+		subtest_line = parse_test_header(lines, test)
+		parent_test = (ktap_line or subtest_line)
	if parent_test:
-		# If subtest header is found, attempt to parse
-		# test plan and print header
+		# If KTAP version line and/or subtest header is found, attempt
+		# to parse test plan and print test header
		parse_test_plan(lines, test)
		print_test_header(test)
parse_test_plan(lines, test) parse_test_plan(lines, test)
print_test_header(test) print_test_header(test)
expected_count = test.expected_count expected_count = test.expected_count
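# A minimal end-to-end view of the two branches (a sketch assuming
# kunit_parser is importable, e.g. when run from tools/testing/kunit); the
# stream uses the "preferred" subtest format from the docstring above:
import kunit_parser

output = [
    'KTAP version 1',
    '1..1',
    '  KTAP version 1',
    '  # Subtest: suite',
    '  1..2',
    '  ok 1 case_1',
    '  ok 2 case_2',
    'ok 1 suite',
]
result = kunit_parser.parse_run_tests(output)
print(result.name)                                    # main
print([t.name for t in result.subtests[0].subtests])  # ['case_1', 'case_2']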
@ -683,7 +744,7 @@ def parse_test(lines: LineStream, expected_num: int, log: List[str]) -> Test:
sub_log = parse_diagnostic(lines) sub_log = parse_diagnostic(lines)
sub_test = Test() sub_test = Test()
if not lines or (peek_test_name_match(lines, test) and if not lines or (peek_test_name_match(lines, test) and
-				not main):
+				is_subtest):
if expected_count and test_num <= expected_count: if expected_count and test_num <= expected_count:
# If parser reaches end of test before # If parser reaches end of test before
# parsing expected number of subtests, print # parsing expected number of subtests, print
@ -697,20 +758,19 @@ def parse_test(lines: LineStream, expected_num: int, log: List[str]) -> Test:
test.log.extend(sub_log) test.log.extend(sub_log)
break break
else: else:
-			sub_test = parse_test(lines, test_num, sub_log)
+			sub_test = parse_test(lines, test_num, sub_log, True)
subtests.append(sub_test) subtests.append(sub_test)
test_num += 1 test_num += 1
test.subtests = subtests test.subtests = subtests
-	if not main:
+	if is_subtest:
# If not main test, look for test result line # If not main test, look for test result line
test.log.extend(parse_diagnostic(lines)) test.log.extend(parse_diagnostic(lines))
-		if (parent_test and peek_test_name_match(lines, test)) or \
-				not parent_test:
-			parse_test_result(lines, test, expected_num)
-		else:
+		if test.name != "" and not peek_test_name_match(lines, test):
			test.add_error('missing subtest result line!')
+		else:
+			parse_test_result(lines, test, expected_num)
-	# Check for there being no tests
+	# Check for there being no subtests within parent test
if parent_test and len(subtests) == 0: if parent_test and len(subtests) == 0:
# Don't override a bad status if this test had one reported. # Don't override a bad status if this test had one reported.
# Assumption: no subtests means CRASHED is from Test.__init__() # Assumption: no subtests means CRASHED is from Test.__init__()
@ -720,11 +780,11 @@ def parse_test(lines: LineStream, expected_num: int, log: List[str]) -> Test:
# Add statuses to TestCounts attribute in Test object # Add statuses to TestCounts attribute in Test object
bubble_up_test_results(test) bubble_up_test_results(test)
-	if parent_test and not main:
+	if parent_test and is_subtest:
# If test has subtests and is not the main test object, print # If test has subtests and is not the main test object, print
# footer. # footer.
print_test_footer(test) print_test_footer(test)
-	elif not main:
+	elif is_subtest:
print_test_result(test) print_test_result(test)
return test return test
@ -744,10 +804,10 @@ def parse_run_tests(kernel_output: Iterable[str]) -> Test:
test = Test() test = Test()
if not lines: if not lines:
test.name = '<missing>' test.name = '<missing>'
-		test.add_error('could not find any KTAP output!')
+		test.add_error('Could not find any KTAP output. Did any KUnit tests run?')
test.status = TestStatus.FAILURE_TO_PARSE_TESTS test.status = TestStatus.FAILURE_TO_PARSE_TESTS
else: else:
-		test = parse_test(lines, 0, [])
+		test = parse_test(lines, 0, [], False)
if test.status != TestStatus.NO_TESTS: if test.status != TestStatus.NO_TESTS:
test.status = test.counts.get_status() test.status = test.counts.get_status()
stdout.print_with_timestamp(DIVIDER) stdout.print_with_timestamp(DIVIDER)
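# The empty-input path above in isolation (sketch; assumes kunit_parser is
# importable): with no KTAP lines at all, the result carries the new error
# message and the FAILURE_TO_PARSE_TESTS status checked by the unit tests below.
import kunit_parser

result = kunit_parser.parse_run_tests([])
print(result.name)                                                       # <missing>
print(result.status == kunit_parser.TestStatus.FAILURE_TO_PARSE_TESTS)  # True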


@ -80,6 +80,13 @@ class KconfigTest(unittest.TestCase):
self.assertEqual(actual_kconfig, expected_kconfig) self.assertEqual(actual_kconfig, expected_kconfig)
class KUnitParserTest(unittest.TestCase): class KUnitParserTest(unittest.TestCase):
def setUp(self):
self.print_mock = mock.patch('kunit_printer.Printer.print').start()
self.addCleanup(mock.patch.stopall)
def noPrintCallContains(self, substr: str):
for call in self.print_mock.mock_calls:
self.assertNotIn(substr, call.args[0])
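	# Example of how these helpers are meant to be used in the cases below
	# (sketch of a hypothetical test method, not one from this file; it
	# assumes the parser's "Testing complete." summary line is printed as
	# part of parse_run_tests):
	def test_example_print_assertions(self):
		result = kunit_parser.parse_run_tests(
			['TAP version 14', '1..1', 'ok 1 example'])
		self.print_mock.assert_any_call(StrContains('Testing complete.'))
		self.noPrintCallContains('not ok')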
def assertContains(self, needle: str, haystack: kunit_parser.LineStream): def assertContains(self, needle: str, haystack: kunit_parser.LineStream):
# Clone the iterator so we can print the contents on failure. # Clone the iterator so we can print the contents on failure.
@ -133,33 +140,29 @@ class KUnitParserTest(unittest.TestCase):
all_passed_log = test_data_path('test_is_test_passed-all_passed.log') all_passed_log = test_data_path('test_is_test_passed-all_passed.log')
with open(all_passed_log) as file: with open(all_passed_log) as file:
result = kunit_parser.parse_run_tests(file.readlines()) result = kunit_parser.parse_run_tests(file.readlines())
-		self.assertEqual(
-			kunit_parser.TestStatus.SUCCESS,
-			result.status)
+		self.assertEqual(kunit_parser.TestStatus.SUCCESS, result.status)
+		self.assertEqual(result.counts.errors, 0)
def test_parse_successful_nested_tests_log(self): def test_parse_successful_nested_tests_log(self):
all_passed_log = test_data_path('test_is_test_passed-all_passed_nested.log') all_passed_log = test_data_path('test_is_test_passed-all_passed_nested.log')
with open(all_passed_log) as file: with open(all_passed_log) as file:
result = kunit_parser.parse_run_tests(file.readlines()) result = kunit_parser.parse_run_tests(file.readlines())
-		self.assertEqual(
-			kunit_parser.TestStatus.SUCCESS,
-			result.status)
+		self.assertEqual(kunit_parser.TestStatus.SUCCESS, result.status)
+		self.assertEqual(result.counts.errors, 0)
def test_kselftest_nested(self): def test_kselftest_nested(self):
kselftest_log = test_data_path('test_is_test_passed-kselftest.log') kselftest_log = test_data_path('test_is_test_passed-kselftest.log')
with open(kselftest_log) as file: with open(kselftest_log) as file:
result = kunit_parser.parse_run_tests(file.readlines()) result = kunit_parser.parse_run_tests(file.readlines())
-		self.assertEqual(
-			kunit_parser.TestStatus.SUCCESS,
-			result.status)
+		self.assertEqual(kunit_parser.TestStatus.SUCCESS, result.status)
+		self.assertEqual(result.counts.errors, 0)
def test_parse_failed_test_log(self): def test_parse_failed_test_log(self):
failed_log = test_data_path('test_is_test_passed-failure.log') failed_log = test_data_path('test_is_test_passed-failure.log')
with open(failed_log) as file: with open(failed_log) as file:
result = kunit_parser.parse_run_tests(file.readlines()) result = kunit_parser.parse_run_tests(file.readlines())
-		self.assertEqual(
-			kunit_parser.TestStatus.FAILURE,
-			result.status)
+		self.assertEqual(kunit_parser.TestStatus.FAILURE, result.status)
+		self.assertEqual(result.counts.errors, 0)
def test_no_header(self): def test_no_header(self):
empty_log = test_data_path('test_is_test_passed-no_tests_run_no_header.log') empty_log = test_data_path('test_is_test_passed-no_tests_run_no_header.log')
@ -167,9 +170,8 @@ class KUnitParserTest(unittest.TestCase):
result = kunit_parser.parse_run_tests( result = kunit_parser.parse_run_tests(
kunit_parser.extract_tap_lines(file.readlines())) kunit_parser.extract_tap_lines(file.readlines()))
self.assertEqual(0, len(result.subtests)) self.assertEqual(0, len(result.subtests))
-		self.assertEqual(
-			kunit_parser.TestStatus.FAILURE_TO_PARSE_TESTS,
-			result.status)
+		self.assertEqual(kunit_parser.TestStatus.FAILURE_TO_PARSE_TESTS, result.status)
+		self.assertEqual(result.counts.errors, 1)
def test_missing_test_plan(self): def test_missing_test_plan(self):
missing_plan_log = test_data_path('test_is_test_passed-' missing_plan_log = test_data_path('test_is_test_passed-'
@ -179,12 +181,8 @@ class KUnitParserTest(unittest.TestCase):
kunit_parser.extract_tap_lines( kunit_parser.extract_tap_lines(
file.readlines())) file.readlines()))
# A missing test plan is not an error. # A missing test plan is not an error.
-		self.assertEqual(0, result.counts.errors)
-		# All tests should be accounted for.
-		self.assertEqual(10, result.counts.total())
-		self.assertEqual(
-			kunit_parser.TestStatus.SUCCESS,
-			result.status)
+		self.assertEqual(result.counts, kunit_parser.TestCounts(passed=10, errors=0))
+		self.assertEqual(kunit_parser.TestStatus.SUCCESS, result.status)
def test_no_tests(self): def test_no_tests(self):
header_log = test_data_path('test_is_test_passed-no_tests_run_with_header.log') header_log = test_data_path('test_is_test_passed-no_tests_run_with_header.log')
@ -192,9 +190,8 @@ class KUnitParserTest(unittest.TestCase):
result = kunit_parser.parse_run_tests( result = kunit_parser.parse_run_tests(
kunit_parser.extract_tap_lines(file.readlines())) kunit_parser.extract_tap_lines(file.readlines()))
self.assertEqual(0, len(result.subtests)) self.assertEqual(0, len(result.subtests))
-		self.assertEqual(
-			kunit_parser.TestStatus.NO_TESTS,
-			result.status)
+		self.assertEqual(kunit_parser.TestStatus.NO_TESTS, result.status)
+		self.assertEqual(result.counts.errors, 1)
def test_no_tests_no_plan(self): def test_no_tests_no_plan(self):
no_plan_log = test_data_path('test_is_test_passed-no_tests_no_plan.log') no_plan_log = test_data_path('test_is_test_passed-no_tests_no_plan.log')
@ -205,7 +202,7 @@ class KUnitParserTest(unittest.TestCase):
self.assertEqual( self.assertEqual(
kunit_parser.TestStatus.NO_TESTS, kunit_parser.TestStatus.NO_TESTS,
result.subtests[0].subtests[0].status) result.subtests[0].subtests[0].status)
-		self.assertEqual(1, result.counts.errors)
+		self.assertEqual(result.counts, kunit_parser.TestCounts(passed=1, errors=1))
def test_no_kunit_output(self): def test_no_kunit_output(self):
@ -214,9 +211,10 @@ class KUnitParserTest(unittest.TestCase):
with open(crash_log) as file: with open(crash_log) as file:
result = kunit_parser.parse_run_tests( result = kunit_parser.parse_run_tests(
kunit_parser.extract_tap_lines(file.readlines())) kunit_parser.extract_tap_lines(file.readlines()))
-			print_mock.assert_any_call(StrContains('could not find any KTAP output!'))
+			print_mock.assert_any_call(StrContains('Could not find any KTAP output.'))
print_mock.stop() print_mock.stop()
self.assertEqual(0, len(result.subtests)) self.assertEqual(0, len(result.subtests))
self.assertEqual(result.counts.errors, 1)
def test_skipped_test(self): def test_skipped_test(self):
skipped_log = test_data_path('test_skip_tests.log') skipped_log = test_data_path('test_skip_tests.log')
@ -224,18 +222,16 @@ class KUnitParserTest(unittest.TestCase):
result = kunit_parser.parse_run_tests(file.readlines()) result = kunit_parser.parse_run_tests(file.readlines())
# A skipped test does not fail the whole suite. # A skipped test does not fail the whole suite.
-		self.assertEqual(
-			kunit_parser.TestStatus.SUCCESS,
-			result.status)
+		self.assertEqual(kunit_parser.TestStatus.SUCCESS, result.status)
+		self.assertEqual(result.counts, kunit_parser.TestCounts(passed=4, skipped=1))
def test_skipped_all_tests(self): def test_skipped_all_tests(self):
skipped_log = test_data_path('test_skip_all_tests.log') skipped_log = test_data_path('test_skip_all_tests.log')
with open(skipped_log) as file: with open(skipped_log) as file:
result = kunit_parser.parse_run_tests(file.readlines()) result = kunit_parser.parse_run_tests(file.readlines())
-		self.assertEqual(
-			kunit_parser.TestStatus.SKIPPED,
-			result.status)
+		self.assertEqual(kunit_parser.TestStatus.SKIPPED, result.status)
+		self.assertEqual(result.counts, kunit_parser.TestCounts(skipped=5))
def test_ignores_hyphen(self): def test_ignores_hyphen(self):
hyphen_log = test_data_path('test_strip_hyphen.log') hyphen_log = test_data_path('test_strip_hyphen.log')
@ -243,71 +239,112 @@ class KUnitParserTest(unittest.TestCase):
result = kunit_parser.parse_run_tests(file.readlines()) result = kunit_parser.parse_run_tests(file.readlines())
# A skipped test does not fail the whole suite. # A skipped test does not fail the whole suite.
-		self.assertEqual(
-			kunit_parser.TestStatus.SUCCESS,
-			result.status)
+		self.assertEqual(kunit_parser.TestStatus.SUCCESS, result.status)
self.assertEqual( self.assertEqual(
"sysctl_test", "sysctl_test",
result.subtests[0].name) result.subtests[0].name)
self.assertEqual( self.assertEqual(
"example", "example",
result.subtests[1].name) result.subtests[1].name)
file.close()
def test_ignores_prefix_printk_time(self): def test_ignores_prefix_printk_time(self):
prefix_log = test_data_path('test_config_printk_time.log') prefix_log = test_data_path('test_config_printk_time.log')
with open(prefix_log) as file: with open(prefix_log) as file:
result = kunit_parser.parse_run_tests(file.readlines()) result = kunit_parser.parse_run_tests(file.readlines())
-		self.assertEqual(
-			kunit_parser.TestStatus.SUCCESS,
-			result.status)
-		self.assertEqual('kunit-resource-test', result.subtests[0].name)
+		self.assertEqual(kunit_parser.TestStatus.SUCCESS, result.status)
+		self.assertEqual('kunit-resource-test', result.subtests[0].name)
+		self.assertEqual(result.counts.errors, 0)
def test_ignores_multiple_prefixes(self): def test_ignores_multiple_prefixes(self):
prefix_log = test_data_path('test_multiple_prefixes.log') prefix_log = test_data_path('test_multiple_prefixes.log')
with open(prefix_log) as file: with open(prefix_log) as file:
result = kunit_parser.parse_run_tests(file.readlines()) result = kunit_parser.parse_run_tests(file.readlines())
-		self.assertEqual(
-			kunit_parser.TestStatus.SUCCESS,
-			result.status)
-		self.assertEqual('kunit-resource-test', result.subtests[0].name)
+		self.assertEqual(kunit_parser.TestStatus.SUCCESS, result.status)
+		self.assertEqual('kunit-resource-test', result.subtests[0].name)
+		self.assertEqual(result.counts.errors, 0)
def test_prefix_mixed_kernel_output(self): def test_prefix_mixed_kernel_output(self):
mixed_prefix_log = test_data_path('test_interrupted_tap_output.log') mixed_prefix_log = test_data_path('test_interrupted_tap_output.log')
with open(mixed_prefix_log) as file: with open(mixed_prefix_log) as file:
result = kunit_parser.parse_run_tests(file.readlines()) result = kunit_parser.parse_run_tests(file.readlines())
-		self.assertEqual(
-			kunit_parser.TestStatus.SUCCESS,
-			result.status)
-		self.assertEqual('kunit-resource-test', result.subtests[0].name)
+		self.assertEqual(kunit_parser.TestStatus.SUCCESS, result.status)
+		self.assertEqual('kunit-resource-test', result.subtests[0].name)
+		self.assertEqual(result.counts.errors, 0)
def test_prefix_poundsign(self): def test_prefix_poundsign(self):
pound_log = test_data_path('test_pound_sign.log') pound_log = test_data_path('test_pound_sign.log')
with open(pound_log) as file: with open(pound_log) as file:
result = kunit_parser.parse_run_tests(file.readlines()) result = kunit_parser.parse_run_tests(file.readlines())
-		self.assertEqual(
-			kunit_parser.TestStatus.SUCCESS,
-			result.status)
-		self.assertEqual('kunit-resource-test', result.subtests[0].name)
+		self.assertEqual(kunit_parser.TestStatus.SUCCESS, result.status)
+		self.assertEqual('kunit-resource-test', result.subtests[0].name)
+		self.assertEqual(result.counts.errors, 0)
def test_kernel_panic_end(self): def test_kernel_panic_end(self):
panic_log = test_data_path('test_kernel_panic_interrupt.log') panic_log = test_data_path('test_kernel_panic_interrupt.log')
with open(panic_log) as file: with open(panic_log) as file:
result = kunit_parser.parse_run_tests(file.readlines()) result = kunit_parser.parse_run_tests(file.readlines())
-		self.assertEqual(
-			kunit_parser.TestStatus.TEST_CRASHED,
-			result.status)
-		self.assertEqual('kunit-resource-test', result.subtests[0].name)
+		self.assertEqual(kunit_parser.TestStatus.TEST_CRASHED, result.status)
+		self.assertEqual('kunit-resource-test', result.subtests[0].name)
+		self.assertGreaterEqual(result.counts.errors, 1)
def test_pound_no_prefix(self): def test_pound_no_prefix(self):
pound_log = test_data_path('test_pound_no_prefix.log') pound_log = test_data_path('test_pound_no_prefix.log')
with open(pound_log) as file: with open(pound_log) as file:
result = kunit_parser.parse_run_tests(file.readlines()) result = kunit_parser.parse_run_tests(file.readlines())
-		self.assertEqual(
-			kunit_parser.TestStatus.SUCCESS,
-			result.status)
-		self.assertEqual('kunit-resource-test', result.subtests[0].name)
+		self.assertEqual(kunit_parser.TestStatus.SUCCESS, result.status)
+		self.assertEqual('kunit-resource-test', result.subtests[0].name)
+		self.assertEqual(result.counts.errors, 0)
def test_summarize_failures(self):
output = """
KTAP version 1
1..2
# Subtest: all_failed_suite
1..2
not ok 1 - test1
not ok 2 - test2
not ok 1 - all_failed_suite
# Subtest: some_failed_suite
1..2
ok 1 - test1
not ok 2 - test2
not ok 1 - some_failed_suite
"""
result = kunit_parser.parse_run_tests(output.splitlines())
self.assertEqual(kunit_parser.TestStatus.FAILURE, result.status)
self.assertEqual(kunit_parser._summarize_failed_tests(result),
'Failures: all_failed_suite, some_failed_suite.test2')
def test_ktap_format(self):
ktap_log = test_data_path('test_parse_ktap_output.log')
with open(ktap_log) as file:
result = kunit_parser.parse_run_tests(file.readlines())
self.assertEqual(result.counts, kunit_parser.TestCounts(passed=3))
self.assertEqual('suite', result.subtests[0].name)
self.assertEqual('case_1', result.subtests[0].subtests[0].name)
self.assertEqual('case_2', result.subtests[0].subtests[1].name)
def test_parse_subtest_header(self):
ktap_log = test_data_path('test_parse_subtest_header.log')
with open(ktap_log) as file:
result = kunit_parser.parse_run_tests(file.readlines())
self.print_mock.assert_any_call(StrContains('suite (1 subtest)'))
def test_show_test_output_on_failure(self):
output = """
KTAP version 1
1..1
Test output.
Indented more.
not ok 1 test1
"""
result = kunit_parser.parse_run_tests(output.splitlines())
self.assertEqual(kunit_parser.TestStatus.FAILURE, result.status)
self.print_mock.assert_any_call(StrContains('Test output.'))
self.print_mock.assert_any_call(StrContains(' Indented more.'))
self.noPrintCallContains('not ok 1 test1')
def line_stream_from_strs(strs: Iterable[str]) -> kunit_parser.LineStream: def line_stream_from_strs(strs: Iterable[str]) -> kunit_parser.LineStream:
return kunit_parser.LineStream(enumerate(strs, start=1)) return kunit_parser.LineStream(enumerate(strs, start=1))
@ -485,6 +522,9 @@ class LinuxSourceTreeTest(unittest.TestCase):
class KUnitJsonTest(unittest.TestCase): class KUnitJsonTest(unittest.TestCase):
def setUp(self):
self.print_mock = mock.patch('kunit_printer.Printer.print').start()
self.addCleanup(mock.patch.stopall)
def _json_for(self, log_file): def _json_for(self, log_file):
with open(test_data_path(log_file)) as file: with open(test_data_path(log_file)) as file:
@ -581,7 +621,7 @@ class KUnitMainTest(unittest.TestCase):
self.assertEqual(e.exception.code, 1) self.assertEqual(e.exception.code, 1)
self.assertEqual(self.linux_source_mock.build_reconfig.call_count, 1) self.assertEqual(self.linux_source_mock.build_reconfig.call_count, 1)
self.assertEqual(self.linux_source_mock.run_kernel.call_count, 1) self.assertEqual(self.linux_source_mock.run_kernel.call_count, 1)
-		self.print_mock.assert_any_call(StrContains('could not find any KTAP output!'))
+		self.print_mock.assert_any_call(StrContains('Could not find any KTAP output.'))
def test_exec_no_tests(self): def test_exec_no_tests(self):
self.linux_source_mock.run_kernel = mock.Mock(return_value=['TAP version 14', '1..0']) self.linux_source_mock.run_kernel = mock.Mock(return_value=['TAP version 14', '1..0'])


@ -0,0 +1,8 @@
KTAP version 1
1..1
KTAP version 1
1..3
ok 1 case_1
ok 2 case_2
ok 3 case_3
ok 1 suite


@ -0,0 +1,7 @@
KTAP version 1
1..1
KTAP version 1
# Subtest: suite
1..1
ok 1 test
ok 1 suite
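# What the parser is expected to make of the first of the two new fixtures
# above (inferred from test_ktap_format to be test_parse_ktap_output.log;
# path assumed relative to tools/testing/kunit):
import kunit_parser

with open('test_data/test_parse_ktap_output.log') as f:
    result = kunit_parser.parse_run_tests(f.readlines())
print(result.subtests[0].name)                             # suite
print([c.name for c in result.subtests[0].subtests])       # ['case_1', 'case_2', 'case_3']
print(result.counts == kunit_parser.TestCounts(passed=3))  # True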