1. 3

i.e. you have Python and C++ executables, and you want to run them in a clean dir, log the output, show failures, and produce a summary

I’m looking for some sort of open source tool, and after searching for a while there doesn’t seem to be a common solution

This seems like an extremely basic question … Is the answer really “I don’t”? And you should write a shell script? :)

(For builds we use Ninja, for Python tests it’s stdlib unittest, for C++ it’s greatest.h)

Maybe something here, but I’ve never heard of anyone using it?

https://testanything.org/consumers.html

  1. 3

    gtest-parallel exists for googletest and I was thinking the other day about extending it to work with unittest. A brief search turns up unittest-parallel.

    Failing that, probably a bunch of Ninja jobs that run in their own target directories. You could use ninja -k to keep going after failure. Otherwise DIY :)
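
    For example, a rough sketch of the Ninja approach (the test.* target names are made up):

        # -k 0: keep going after failures; -j 8: run up to 8 tests at once
        ninja -k 0 -j 8 test.parse test.eval test.runtime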

    1. 2

      gtest-parallel is one small Python script and could probably be extended to support arbitrary test discovery.
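
      Its usage is something like this, if I remember the interface right (the --workers flag name is from memory, and the binaries are made up):

          # Shard the test cases of two googletest binaries across 8 workers.
          ./gtest-parallel --workers=8 ./parse_test ./eval_test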

      1. 1

        Yeah that looks somewhat similar to what I’d want …

        But I think a wrapper around Ninja might work well, which would make sense since Oil already has ./NINJA-config.sh as a step to generate the config

        • optionally clean the _test directory (sometimes you don’t want “incremental tests”, just parallel execution)
        • Ninja runs a wrapper for every test
          • change to a new clean directory
          • that logs to a file, and tests the exit code
          • print nothing or OK on success
          • print the log on failure
        • and then look at all the task files and print a summary (a rough sketch of the per-test wrapper follows)
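
        Roughly this, as a bash sketch (task-runner.sh and the _test/ layout are made up, just one possible arrangement):

            #!/usr/bin/env bash
            # task-runner.sh TEST_BINARY -- what Ninja would invoke, once per test:
            # run it in a clean dir, log everything, and report OK or the log.
            set -o nounset -o pipefail

            bin=$PWD/$1                        # absolute path, so cd doesn't break it
            name=$(basename "$1")
            dir=_test/$name.d
            log=$PWD/_test/$name.log

            rm -rf "$dir" && mkdir -p "$dir"   # fresh dir: no "incremental" state

            if (cd "$dir" && "$bin") > "$log" 2>&1; then
              echo "OK   $name"
            else
              echo "FAIL $name"
              cat "$log"                       # show output only on failure
              exit 1                           # nonzero exit makes Ninja count a failure
            fi

        The summary step can then just scan _test/*.log (or a status file per task) after Ninja finishes.
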
    2. 2

      I would propose taking a look at CTest. It is part of CMake and can run any tests; CTest is test-framework-agnostic. It allows running tests in parallel, rerunning tests that failed last time, filtering tests by label, etc.

      See the documentation for ctest(1) and for add_test.

      Feel free to ask questions about integration with CTest if you are interested in it.
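
      For instance, after registering each test binary in CMakeLists.txt with add_test(NAME foo COMMAND foo):

          ctest -j 8            # run tests in parallel
          ctest --rerun-failed  # only the tests that failed last time
          ctest -L unit         # only tests with the label "unit"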

      1. 2

        For Python tests based on unittest you can use a test runner that supports parallelization. Two that I know of are zope.testrunner, a very mature test runner (it comes from the Zope project but doesn’t actually have anything to do with Zope proper), and Green, which is somewhat newer and “thinner” but quite nice.
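
        With Green that looks something like this (I believe -s sets the worker subprocess count, but check its --help; mypackage is a placeholder):

            pip install green
            green -s 8 mypackage   # run mypackage's tests in 8 subprocesses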

        For aggregating across test runners, subunit is one I’ve used and liked. I’m not sure it’s still under active development.

        1. 1

          And if you’re using pytest, there’s pytest-xdist.
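
          i.e.:

              pip install pytest-xdist
              pytest -n auto   # one worker per CPU core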

          1. 1

            OK but that doesn’t do anything about C++ tests, right? Or would you have to wrap each of your C++ tests with a Python test that looks at the exit code?

            I would want to run them in parallel with each other, not just Python tests in parallel, then C++ tests in parallel :)

            1. 1

              There are cross-language test aggregators that do various things in this space. They may or may not solve the parallelization part, but they assist in aggregating runs.

              At a minimum you could do something like spawn 2 C++ test runs, each with half the tests and aggregate the results.
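
              A minimal bash version of that (cpp_tests_a/b are made-up binaries, each built with half the suite):

                  ./cpp_tests_a & pid_a=$!
                  ./cpp_tests_b & pid_b=$!

                  status=0
                  wait "$pid_a" || status=1   # collect each half's exit code
                  wait "$pid_b" || status=1
                  exit "$status"              # fail if either half failed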

              Wrapping C++ tests in Python might not be too bad if there aren’t any C++ test runners that help.

          2. 2

            As part of the build2 project we’ve developed a shell-like language for writing tests called Testscript. Besides all the things that you want (running in a clean directory, analyzing output), you also get parallel execution within a single file.

            1. 1

              OK interesting, that’s definitely solving the problem I’m talking about! It’s more of a “meta” test runner that accommodates different language-specific frameworks.

              Although this is for https://www.oilshell.org/ and I avoid the use of shell-like languages, in favor of shell :) Especially because I noticed CMake files are shell-like, and then they often literally embed shell.

              I understand why CMake is its own language, because it is portable to Windows, and it looks like the rationale for build2 is the same. There is always the “how do you build the build system?” problem. (Similar to “how do you install the package manager?” :-) And on Windows that is very difficult.


              FWIW oil-native is pure C++ now – it doesn’t require a build system to build, only 20 invocations of the C++ compiler, run in sequence. It compiles to less than 1 MB of code.

              But it’s not ready and not portable to Windows :)

              https://www.oilshell.org/release/0.12.4/

              https://www.oilshell.org/release/0.12.4/pub/metrics.wwz/line-counts/oil-cpp.txt


              I think I might go with shell wrappers around Ninja since we don’t need Windows portability right now (and if we do, then that would be motivation to port Oil to Windows.) But I understand more what build2 is now :)

            2. 2

              The first thing that comes to mind for me is bazel test (https://docs.bazel.build/versions/main/user-manual.html#test). I think you have to already be building your project with Bazel in order to use it, but once it is set up, it is highly parallel across languages.
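
              e.g.:

                  # Run every test target in the workspace, in parallel, across languages.
                  bazel test //...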

              1. 1

                Yup that’s probably why this is in my head :-) Because I used Bazel at work for many years

                But I think there is essentially no non-Bazel tool that works this way! lobste.rs prove me wrong :)

                Thanks for the response

              2. 1

                Probably with https://www.gnu.org/software/parallel/ or just pushing them into background jobs in a bash script
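
                Something like (py_tests.sh and cpp_tests are placeholders):

                    # GNU parallel: one job per core; exit status counts failed jobs.
                    parallel ::: ./py_tests.sh ./cpp_tests

                    # Or plain bash background jobs (note: bare `wait` discards statuses):
                    ./py_tests.sh & ./cpp_tests &
                    wait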

                1. 1

                  You might be interested in using pytest to run Python tests, and additionally the pytest-cpp plugin to discover and run C++ tests.

                  https://github.com/pytest-dev/pytest-cpp
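
                  Rough usage (the cpp_files option is from memory, so check the README):

                      pip install pytest pytest-cpp

                      # pytest.ini -- which executables pytest-cpp should collect:
                      #   [pytest]
                      #   cpp_files = *_test

                      pytest   # runs Python tests and matching C++ test binaries together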

                  1. 1

                    I guess the reason people don’t do this much is because they want fine-grained reporting from a single test framework (individual test cases), not just a report of which processes failed?