
[Bug] Test reports don't seem to work #5114

Closed
yurishkuro opened this issue Jan 18, 2024 · 7 comments · Fixed by #5126

Comments

@yurishkuro (Member)

@albertteoh maybe it's just me, but I am having no luck with this new test reports thing. On the workflow Summary page it only shows statistics with no links (nothing there is clickable):

https://github.com/jaegertracing/jaeger/actions/runs/7552405639?pr=5101

[screenshot: workflow summary showing test statistics only, with nothing clickable]

If instead I find a comment in the PR, it does have a link:

For more details on these failures, see this check.

However, on that page no error messages are displayed, just the names of the failed tests:

[screenshot: check page listing failed test names without any error output]

Overall, I am having a much harder time analyzing the failures than before: the logs of the run are now unreadable JSON, and these summaries are not helpful (and why are they so hard to find...).

Suggestion:

  • either find another test results uploader that has better integration with GitHub, or
  • go back to plain text logs

BTW, with plain logs my go-to trick was to search for the --- FAIL string.

@yurishkuro (Member Author) commented Jan 18, 2024

For comparison, running the tests on the failing package locally produces this output:

$ go test ./cmd/agent/app/
--- FAIL: TestCreateCollectorProxy (0.00s)
    --- FAIL: TestCreateCollectorProxy/#00 (0.00s)
panic: value method github.com/jaegertracing/jaeger/cmd/agent/app/reporter/grpc.ProxyBuilder.Close called using nil *ProxyBuilder pointer [recovered]
	panic: value method github.com/jaegertracing/jaeger/cmd/agent/app/reporter/grpc.ProxyBuilder.Close called using nil *ProxyBuilder pointer

goroutine 105 [running]:
testing.tRunner.func1.2({0x10133adc0, 0x140001dbba0})
	/Users/ysh/homebrew/Cellar/go/1.21.5/libexec/src/testing/testing.go:1545 +0x1c4
testing.tRunner.func1()
	/Users/ysh/homebrew/Cellar/go/1.21.5/libexec/src/testing/testing.go:1548 +0x360
panic({0x10133adc0?, 0x140001dbba0?})
	/Users/ysh/homebrew/Cellar/go/1.21.5/libexec/src/runtime/panic.go:914 +0x218
github.com/jaegertracing/jaeger/cmd/agent/app/reporter/grpc.(*ProxyBuilder).Close(0x0?)
	<autogenerated>:1 +0x68
github.com/jaegertracing/jaeger/cmd/agent/app.TestCreateCollectorProxy.func1(0x14000064798?)
	/Users/ysh/dev/jaegertracing/jaeger/cmd/agent/app/builder_test.go:275 +0x654
testing.tRunner(0x14000257380, 0x140001dba10)
	/Users/ysh/homebrew/Cellar/go/1.21.5/libexec/src/testing/testing.go:1595 +0xe8
created by testing.(*T).Run in goroutine 104
	/Users/ysh/homebrew/Cellar/go/1.21.5/libexec/src/testing/testing.go:1648 +0x33c
FAIL	github.com/jaegertracing/jaeger/cmd/agent/app	0.578s

It is a panic, not a normal test failure, but testing.T takes care of these things, and the -json output still contains the same stack dump, so one would think the report generator would show it in the report.
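(Aside: go test -json wraps each line of this text output in a small JSON event, the test2json format, so the stack dump is present in the stream, just split across many "output" events rather than being one multi-line entry. A rough sketch of the event shape, per the go doc cmd/test2json documentation; the package name below is illustrative only:)

```go
// Sketch only: the event shape that `go test -json` emits, one JSON object
// per line, as documented for `go tool test2json`. A panic inside a test
// arrives as a series of Action == "output" events (one per line of the
// stack trace), followed by "fail" events for the test and the package.
package testreport

import "time"

type TestEvent struct {
	Time    time.Time // RFC3339 timestamp
	Action  string    // "run", "output", "pass", "fail", "skip", ...
	Package string    // import path of the package under test
	Test    string    // test name; empty for package-level events
	Elapsed float64   // seconds; set on "pass"/"fail" events
	Output  string    // one line of the original text output
}
```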

@albertteoh (Contributor)

Thanks for those details; I agree it's not very usable. One thing I missed in the README of the GitHub action is that Go isn't officially supported but might work: https://github.com/EnricoMi/publish-unit-test-result-action?tab=readme-ov-file#generating-test-result-files.

I had a quick look around for alternatives but couldn't find anything suitable for Go.

I vote to revert back to plain old text logs.

What do you think?

@yurishkuro (Member Author)

Maybe go test does not produce the JSON in the format the tool expects. E.g. perhaps it wants the error output to be a single JSON entry with a multi-line string in it? Who knows.

I do recall seeing some relevant output in the reports in the past; not sure if something changed or if it's sensitive to how the tests run/fail.

@albertteoh (Contributor)

I suspect it works well with assertion failures, but not with panics.

yurishkuro pushed a commit that referenced this issue Jan 21, 2024
## Which problem is this PR solving?
- Resolves #5114

## Description of the changes
- We had a go at using a test summary reporter github action, but it
appeared to lack sufficient support for "unexpected" test failures such
as panics, while also making it harder to manually inspect test log
output.
- The original log output still satisfies our requirements, as it is
still easy to search through to identify the failing test.

## Checklist
- [x] I have read
https://github.com/jaegertracing/jaeger/blob/master/CONTRIBUTING_GUIDELINES.md
- [x] I have signed all commits

---------

Signed-off-by: Albert Teoh <albert@packsmith.io>
Co-authored-by: Albert Teoh <albert@packsmith.io>
@yurishkuro (Member Author)

@albertteoh alternative idea. When we run the tests with text output, if any of them fail the output will be visible in the logs. However, if we then immediately rerun the tests, it's possible that only the failing tests will be rerun (Go is supposed to cache successful test results and not rerun them if the code hasn't changed). In this case, we could do a rerun with -json output (on failure of the previous step) and use that to upload to the workflow summary.

Also, instead of doing all that complicated artifact juggling, there is a much simpler mechanism of writing output to a special variable so that it gets included in the summary, which I think is how the Dependency Review action works:

[screenshot: Dependency Review results rendered directly in the workflow summary]

So all we really need is some pretty-print of the JSON into Markdown.
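A minimal sketch of that idea, assuming the standard GITHUB_STEP_SUMMARY mechanism (Markdown appended to the file named by that environment variable is rendered on the workflow Summary page). The file name scripts/summarize.go and the exact Markdown layout are illustrative, not something that exists in the repo:

```go
// Hypothetical helper (not part of the jaeger CI): reads `go test -json`
// from stdin and appends a Markdown summary of the failed tests, with
// their captured output, to $GITHUB_STEP_SUMMARY.
//
// Sketch of a workflow step: go test -json ./... | go run ./scripts/summarize.go
package main

import (
	"bufio"
	"encoding/json"
	"fmt"
	"os"
	"time"
)

type TestEvent struct {
	Time    time.Time
	Action  string
	Package string
	Test    string
	Elapsed float64
	Output  string
}

func main() {
	counts := map[string]int{}      // "pass"/"fail"/"skip" -> number of tests
	output := map[string][]string{} // "pkg.Test" -> captured output lines
	var failed []string

	sc := bufio.NewScanner(os.Stdin)
	sc.Buffer(make([]byte, 0, 64*1024), 1024*1024) // tolerate long output lines
	for sc.Scan() {
		var ev TestEvent
		if json.Unmarshal(sc.Bytes(), &ev) != nil || ev.Test == "" {
			continue // skip malformed lines and package-level events
		}
		key := ev.Package + "." + ev.Test
		switch ev.Action {
		case "output":
			output[key] = append(output[key], ev.Output)
		case "pass", "fail", "skip":
			counts[ev.Action]++
			if ev.Action == "fail" {
				failed = append(failed, key)
			}
		}
	}

	// GITHUB_STEP_SUMMARY names a file that the runner renders as Markdown
	// on the workflow Summary page; fall back to stdout for local runs.
	out := os.Stdout
	if path := os.Getenv("GITHUB_STEP_SUMMARY"); path != "" {
		f, err := os.OpenFile(path, os.O_APPEND|os.O_CREATE|os.O_WRONLY, 0o644)
		if err != nil {
			fmt.Fprintln(os.Stderr, err)
			os.Exit(1)
		}
		defer f.Close()
		out = f
	}

	fmt.Fprintf(out, "### Test results\n\n%d passed, %d failed, %d skipped\n\n",
		counts["pass"], counts["fail"], counts["skip"])
	for _, key := range failed {
		fmt.Fprintf(out, "#### FAIL: %s\n\n", key)
		for _, line := range output[key] {
			fmt.Fprint(out, "    ", line) // 4-space indent renders as preformatted text
		}
		fmt.Fprintln(out)
	}
}
```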

We could also just write plain text output; it would still be more useful than the very verbose main run of the tests (again, assuming that caching would avoid rerunning other tests and printing all the package names).

@albertteoh (Contributor)

Thanks for sharing that idea, Yuri.

We could also just write plain text output

I'd vote for ☝🏼 (keep plain text output); I'm not sure we'd get much value out of prettified test output considering, from what I understand, that this summary would only appear on a re-run. I think it's better to have a consistent experience (just one way of viewing test output), while also keeping the CI pipeline simple.

@yurishkuro (Member Author)

To clarify, by "rerun" I meant an extra step in the workflow that is run automatically, not that we'd need to kick off a rerun manually.

One thing I like about having some sort of summary is seeing the counts of tests executed, succeeded, skipped, and failed.
