
Test library design document

High-level picture

library process
+----------------------------+
| main                       |
|  tst_run_tcases            |
|   do_setup                 |
|   for_each_variant         |
|    for_each_filesystem     |   test process
|     fork_testrun ------------->+--------------------------------------------+
|      waitpid               |   | testrun                                    |
|                            |   |  do_test_setup                             |
|                            |   |   tst_test->setup                          |
|                            |   |  run_tests                                 |
|                            |   |   tst_test->test(i) or tst_test->test_all  |
|                            |   |  do_test_cleanup                           |
|                            |   |   tst_test->cleanup                        |
|                            |   |  exit(0)                                   |
|   do_exit                  |   +--------------------------------------------+
|    do_cleanup              |
|     exit(ret)              |
+----------------------------+

Test lifetime overview

When a test is executed the very first thing to happen is that we check for various test prerequisites. These are described in the tst_test structure and range from a simple '.needs_root' to more complicated kernel .config boolean expressions such as: "CONFIG_X86_INTEL_UMIP=y | CONFIG_X86_UMIP=y".
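For illustration, a minimal sketch of how such prerequisites can be declared (the field names are from the current tst_test API; the test body is only a placeholder):

#include "tst_test.h"

static void run(void)
{
	tst_res(TPASS, "placeholder test body");
}

static struct tst_test test = {
	.test_all = run,
	/* refuse to run unless we are root */
	.needs_root = 1,
	/* refuse to run unless the kernel .config satisfies the expression */
	.needs_kconfigs = (const char *[]) {
		"CONFIG_X86_INTEL_UMIP=y | CONFIG_X86_UMIP=y",
		NULL
	},
};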

If all checks pass, the process continues with setting up the test environment as requested in the tst_test structure. There are many different setup steps implemented in the test library, again ranging from the rather simple creation of a unique test temporary directory to more complicated ones such as preparing, formatting, and mounting a block device.
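Continuing the sketch above, a test could ask for such an environment roughly like this (a hedged fragment reusing the run() placeholder; the "mntpoint" directory name is arbitrary and .mount_device is expected to also imply preparing and formatting a test device):

static struct tst_test test = {
	.test_all = run,
	/* create a unique temporary directory and chdir into it */
	.needs_tmpdir = 1,
	/* prepare a block device, format it and mount it under ./mntpoint */
	.mount_device = 1,
	.mntpoint = "mntpoint",
};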

The test library also initializes shared memory used for IPC at this step.

Once all the prerequisites are checked and the test environment has been prepared, we can move on to executing the testcase itself. The actual test is executed in a forked process, however there are a few hops before we get there.

First of all there are test variants, which means that the test is re-executed several times with a slightly different setting. This is usually used to test a family of similar syscalls, where we test each of these syscalls in exactly the same way, but without re-executing the test binary itself. Test variants are implemented as a simple global counter that gets increased on each iteration. In the case of syscall tests we switch which syscall to call based on the global counter.
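A hedged sketch of a test with two variants; the .test_variants field and the tst_variant global are taken from the current API, and the two cases are purely illustrative:

static void run(void)
{
	/* tst_variant runs from 0 to .test_variants - 1 */
	switch (tst_variant) {
	case 0:
		tst_res(TINFO, "testing the libc wrapper");
		break;
	case 1:
		tst_res(TINFO, "testing the raw syscall");
		break;
	}

	/* ... common test logic ... */
	tst_res(TPASS, "variant finished");
}

static struct tst_test test = {
	.test_all = run,
	.test_variants = 2,
};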

Then there is the all_filesystems flag, which is mostly the same as test variants but executes the test once for each filesystem supported by the system. Note that we can get a Cartesian product of test variants and all filesystems as well.

In pseudocode it could be expressed as:

for test_variants:
	for all_filesystems:
		fork_testrun()
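
For example, a test that should be repeated on every supported filesystem could request (again an illustrative fragment reusing the placeholders above):

static struct tst_test test = {
	.test_all = run,
	.mount_device = 1,
	.mntpoint = "mntpoint",
	/* re-run setup(), the test and cleanup() once per supported filesystem */
	.all_filesystems = 1,
};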

Before we fork the test process, the test library installs the timeout and heartbeat signal handlers and arms alarm(2) according to the test timeout. When a test times out, the test library receives SIGALRM and the alarm handler mercilessly kills all forked children by sending SIGKILL to the whole process group. The heartbeat handler is used by the test process to reset this timer, for example when the test functions run in a loop.

With that done we finally fork() the test process. The test process first resets the signal handlers and makes itself a process group leader, so that we can slaughter all of its children if needed. The test library then suspends itself in the waitpid() syscall and waits for the child to finish.
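Roughly, the control flow around the fork could be sketched as below. This is an illustrative outline only, not the actual fork_testrun() code, and the timeout value is made up:

#include <signal.h>
#include <stdlib.h>
#include <unistd.h>
#include <sys/wait.h>

static pid_t test_pid;

static void alarm_handler(int sig)
{
	(void)sig;
	/* the test process made itself a process group leader, kill the group */
	if (test_pid > 0)
		kill(-test_pid, SIGKILL);
}

static void fork_testrun_sketch(void)
{
	signal(SIGALRM, alarm_handler);
	alarm(30); /* illustrative timeout, the library derives the real value */

	test_pid = fork();
	if (!test_pid) {
		/* child: drop the inherited handler, become a group leader */
		signal(SIGALRM, SIG_DFL);
		setpgid(0, 0);
		/* do_test_setup(); run_tests(); do_test_cleanup(); */
		exit(0);
	}

	waitpid(test_pid, NULL, 0);
	alarm(0);
}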

The test process goes ahead and calls the test setup() function if it is present in the tst_test structure. It's important that we execute all test callbacks after we have forked the process; that way they cannot crash the test library process. The setup can also cause the test to exit prematurely by either a direct or an indirect (SAFE_MACROS()) call to tst_brk(). In this case the fork_testrun() function exits, but the loops over test variants or filesystems carry on.
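A hedged example of a setup() callback; SAFE_OPEN() is one of the SAFE_MACROS() wrappers and reports TBROK on failure, while the explicit tst_brk(TCONF, ...) ends the testrun early. The file name and the sysfs path used for the check are arbitrary examples:

#include <fcntl.h>
#include <unistd.h>
#include "tst_test.h"

static int fd = -1;

static void setup(void)
{
	/* indirect tst_brk(TBROK | TERRNO, ...) on failure via the wrapper */
	fd = SAFE_OPEN("testfile", O_RDWR | O_CREAT, 0644);

	/* direct tst_brk(TCONF, ...) ends this testrun early */
	if (access("/sys/kernel/mm/transparent_hugepage", F_OK))
		tst_brk(TCONF, "transparent hugepages not supported");
}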

All that is left to be done is to actually execute the tests. What happens next depends on the -i and -I command line parameters, which can request that the run() or run_all() callbacks are executed N times or for N seconds. Again the test can exit at any time by a direct or indirect call to tst_brk().
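For completeness, a hedged sketch of the indexed callback form, where the library calls the test function once for each index from 0 to .tcnt - 1 on every iteration:

static void run(unsigned int i)
{
	tst_res(TINFO, "running subtest %u", i);
	tst_res(TPASS, "subtest %u done", i);
}

static struct tst_test test = {
	/* the library calls run(0), run(1), run(2) on each iteration */
	.test = run,
	.tcnt = 3,
};

Running such a binary with, for example, -i 10 then repeats the whole set of subtests ten times.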

Once the tests have finished, all that is left for the test process is the test cleanup(). So if there is a cleanup() callback in the tst_test structure, it is executed. The cleanup() callback runs in a special context where tst_brk(TBROK, ...) calls are converted into tst_res(TWARN, ...) calls. This is because we found out that carrying on with a partially broken cleanup is usually a better option than exiting in the middle of it.

The test cleanup() is also called by the tst_brk() handler in order to clean up before exiting the test process, hence it must be able to cope even with a partial test setup. Usually it suffices to clean up only the resources that have already been set up, and to do so in the reverse of the order used in setup().
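Continuing the setup() sketch above, a cleanup() that copes with a partially completed setup could look like this; the fd = -1 sentinel is the usual idiom for "not set up yet":

static void cleanup(void)
{
	/* setup() may have failed before the file was ever opened */
	if (fd >= 0)
		SAFE_CLOSE(fd);
}

static struct tst_test test = {
	.test_all = run,
	.setup = setup,
	.cleanup = cleanup,
	.needs_tmpdir = 1,
};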

Once the test process exits or leaves the run() or run_all() function the test library wakes up from the waitpid() call, and checks if the test process exited normally.

Once the testrun is finished, the test library runs its own cleanup to release the resources acquired in the library setup, reports the test results and finally exits the process.

Test library and fork()-ing

Things are a bit more complicated when fork()-ing is involved. However, the test results are stored in a page of shared memory and incremented with atomic operations, hence the results are recorded as soon as the test reporting function returns from the test library, and the access is, by definition, race-free as well.

On the other hand the test library, apart from sending a SIGKILL to the whole process group on timeout, does not track grandchildren.

This especially means that:

  • The test exits once the main test process exits.

  • While the test results are, by design, propagated to the test library, we may still miss a child that gets killed by a signal or exits unexpectedly.

Because of this, the test writer should take care of reaping these processes properly; in most cases this can simply be done by calling tst_reap_children() to collect and dissect deceased children.
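A hedged sketch of a forking test; SAFE_FORK(), tst_reap_children() and the .forks_child flag are taken from the current API, and the child's work is a placeholder:

#include <stdlib.h>
#include <sys/types.h>
#include "tst_test.h"

static void run(void)
{
	pid_t pid = SAFE_FORK();

	if (!pid) {
		/* child: results land in the shared page, so this TPASS is counted */
		tst_res(TPASS, "child did its part");
		exit(0);
	}

	/* parent: collect the children and report any unexpected deaths */
	tst_reap_children();
}

static struct tst_test test = {
	.test_all = run,
	/* required whenever the test calls SAFE_FORK() */
	.forks_child = 1,
};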

Also note that tst_brk() exits only the current process, so if a child process calls tst_brk() the counters are incremented and only that child exits.

Test library and exec()

The piece of mapped memory used to store the results is not preserved across exec(2), hence in order to use the test library from a binary started by exec() it has to be remapped. In this case the process must call tst_reinit() before calling any other library functions. To make this possible the program environment carries the LTP_IPC_PATH variable with the path to the backing file on tmpfs. This also allows us to use the test library from shell testcases.
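A hedged sketch of an exec()-ed helper binary; TST_NO_DEFAULT_MAIN and tst_reinit() come from the current library, and the parent test is assumed to have exported LTP_IPC_PATH (for a compiled test, typically by setting .child_needs_reinit = 1) before exec()-ing it:

/* helper binary started via exec() from an LTP test */
#define TST_NO_DEFAULT_MAIN
#include "tst_test.h"

int main(void)
{
	/* remap the shared results page using the path from LTP_IPC_PATH */
	tst_reinit();

	tst_res(TPASS, "reporting from the exec()-ed helper");

	return 0;
}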

Test library and process synchronization

The piece of mapped memory is also used as a base for futex-based synchronization primitives called checkpoints. And as said previously, the memory can be mapped into any process by calling the tst_reinit() function. As a matter of fact, there is even a tst_checkpoint binary that allows us to use the checkpoints from shell code as well.
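A hedged sketch of checkpoint usage between a parent and a forked child; .needs_checkpoints, TST_CHECKPOINT_WAIT() and TST_CHECKPOINT_WAKE() are taken from the current API:

#include <stdlib.h>
#include "tst_test.h"

static void run(void)
{
	if (!SAFE_FORK()) {
		/* child: block on checkpoint 0 until the parent wakes us */
		TST_CHECKPOINT_WAIT(0);
		tst_res(TPASS, "child woken up");
		exit(0);
	}

	/* parent: do some work, then release the child */
	TST_CHECKPOINT_WAKE(0);
	tst_reap_children();
}

static struct tst_test test = {
	.test_all = run,
	.forks_child = 1,
	.needs_checkpoints = 1,
};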