
Export machine-readable test results to a file #9

Open
aspotashev opened this issue Jul 3, 2024 · 3 comments

Comments


aspotashev commented Jul 3, 2024

As a follow-up to the testing-devex feedback on rust-lang/rust#123365 (draft), let me describe at a high level our use case for capturing JUnit output when running Rust tests via Bazel.

In our setup, Bazel (our build system) runs Rust test binaries. We use a slightly patched version of https://github.com/bazelbuild/rules_rust. Bazel is the mediator between the executed test and the terminal UI, IDEs, code-review websites, and remote build/test services. Bazel runs the test binary directly, i.e. it does not use cargo test. Bazel uses JUnit XML files to communicate machine-parsable, language-agnostic test results (a format already supported by GoogleTest, JUnit, and test frameworks in other languages). Right now, there is no mechanism to communicate Rust test results through Bazel to users (again, the terminal, IDE plugins, ...). See also the Bazel Rust rules FR.


One straightforward candidate solution consists of two steps:

  1. Define CLI flag(s) in Rust libtest to control the export of machine-readable test results (e.g. JUnit or JSON) from the Rust test processes. For context, several PRs have attempted to add or fix such CLI flag(s).
  2. Modify the Bazel Rust rules to pass the relevant CLI flags.

We also considered using a wrapper binary (Bazel runs the wrapper; the wrapper runs the Rust test binary and captures its stdout), but it has downsides that are hard or impossible to resolve:

  • Capturing an output stream (e.g. stdout, stderr) isn't reliable, because a Rust test may also write directly to stdout, and that garbles the XML/JSON test-results output from libtest.
  • Without a significant rework of Bazel, the systems that sit on top of it (e.g. IDEs) will treat the wrapper as the main application, so a naive attempt to debug a Rust test would attach the debugger to the wrapper instead of the test.

I'm looking forward to hearing your suggestions for solutions that would be acceptable from the perspective of T-testing-devex. If we can find a solution that doesn't require a significant rework of libtest / the Rust testing framework, that would be great, and I would then be interested in contributing the relevant code.

@heisen-li

What is the purpose of exporting test results to a file? My guess is to better manage the testing process and analyze test results. Perhaps consider generating a more comprehensive test report?

In addition to keeping the logfile consistent with stdout, perhaps consider displaying some other information in the file. For example:

  • Show the time spent in each test function: --report-time;
  • Even include CPU usage and more precise memory-allocation metrics: refer to Go's tooling?
  • Is it necessary to keep a consistent output order in multithreaded runs?


epage commented Jul 5, 2024

Wanted to highlight some things needed for moving this forward:

We need more information about current logfile usage patterns so that we can provide a recommendation on next steps.

Could you share some additional background about the issue you're facing with the current format discrepancy? Particularly:

  • Why does this need to be baked in, rather than something built on top of the JSON output?
  • What use cases leverage the logfile in its current format?


epage commented Jul 5, 2024

In particular, the unofficial (at the moment) goal is to narrowly focus the official libtest's functionality, to:

  • Reduce the minimal compatibility burden on alternative implementations;
  • Reduce the API (including CLI) burden on T-libs-api and the maintenance burden on T-libs.

We generally see new feature development happening in either:

  • Third party test harnesses (i.e. alternatives to libtest)
  • Tooling layers that sit above the test harness (e.g. cargo test or bazel)
