
Support for "related code" in tests #126932

Open · connor4312 opened this issue Jun 22, 2021 · 29 comments
Assignee: connor4312
Labels: api-proposal · feature-request (Request for new features or functionality) · on-testplan · testing (Built-in testing support)

Comments

@connor4312 (Member)

@connor4312 Is there any consideration of how I would present one or more tests as being related to a code item, like a function?

For instance, in Pester you can tag tests with anything. If I allowed a special tag that let you say a test was assigned to a certain function named Invoke-MyFunction, I'd want to expose a "play" button right next to the Invoke-MyFunction definition to run all tests "tagged" to that function. In this case, though, I don't want it tracked with a separate ID; I want it to invoke those individual tests as part of the tag "rollup", which might be scattered across different files. Also tricky is how you would present a failure to the user in this case.

Ideally this would work with AutoRun as well whenever I change the function.

At any rate, this is definitely a "nice to have" as opposed to a "must have" and could come in a later iteration; it's just a thought that would make testing really simple for people without losing the context of where they are directly working in the application. I was inspired by your markdown test adapter example.

With the existing implementation it could just be offered under a "Function Test Rollup" header or something and the results aggregated together somehow, but when run it wouldn't necessarily update the "referenced" tests (though I guess I could do this within the context of the extension), so it's fragmented and not ideal.

Originally posted by @JustinGrote in #107467 (comment)

@connor4312 connor4312 self-assigned this Jun 22, 2021
@connor4312 connor4312 added feature-request Request for new features or functionality testing Built-in testing support labels Jun 22, 2021
@connor4312 connor4312 added this to the Backlog milestone Jun 22, 2021
@firelizzard18

This is applicable to Go. go test recognizes three types of test functions: TestXxx, BenchmarkXxx, and ExampleXxx. Go has an established naming convention for associating examples with what they are an example of. If you follow this naming convention, pkg.go.dev, go doc, etc. will associate each example with the thing it is an example of. This naming convention could be used to attach examples (which are a kind of test) to functions/etc.

As far as I know, that naming convention isn't used for test and benchmark functions, but that's probably because those aren't included in documentation. I think it would be reasonable to use the same convention to attach tests and benchmarks to functions/etc.

I am the author of the Go Test Explorer extension, and I may be submitting PRs to vscode-go to implement the testing API.

@connor4312 (Member Author) commented Feb 17, 2022

@jasonrudolph recently did some work on a significant-other extension that implements something like this. Our friends on the Java team (@jdneo) released similar functionality in November, and @brettcannon wants this in Python too.

What would the API for this look like? Would it be something along the lines of related?: vscode.Location[] on the TestItem? Something more or less advanced?
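
For illustration, a minimal sketch of what that shape could look like from an extension's side, assuming the proposed related property existed (it does not in the stable API; the controller ID, paths, and ranges are made up):

import * as vscode from 'vscode';

// Hypothetical sketch only: `related` is the property floated above, not a
// real TestItem field, so it is assigned through a cast. The controller ID,
// file paths, and ranges are made up for illustration.
const ctrl = vscode.tests.createTestController('pester', 'Pester Tests');

const test = ctrl.createTestItem(
	'Invoke-MyFunction.basic',
	'Invoke-MyFunction returns expected output',
	vscode.Uri.file('/src/tests/MyFunction.Tests.ps1')
);

// Point the test at the implementation it covers, so the UI could offer a
// "run related tests" affordance next to the Invoke-MyFunction definition.
(test as any).related = [
	new vscode.Location(
		vscode.Uri.file('/src/MyFunction.ps1'),
		new vscode.Range(new vscode.Position(10, 0), new vscode.Position(42, 1))
	),
];

ctrl.items.add(test);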

@connor4312 connor4312 modified the milestones: Backlog, On Deck Feb 17, 2022
@JustinGrote (Contributor)

What would the API for this look like? Would it be something along the lines of related?: vscode.Location[] on the TestItem? Something more or less advanced?

I don't see why it would need to be more complicated than this, since the extension can also keep updating the location if the code/tests move.

An open question is how to handle "one to many" situations in both directions.

@jdneo (Member) commented Feb 18, 2022

An open question is how to handle "one to many" situations in both directions.

In Java, we leverage the reference-view to deal with the "one to many" case.

@firelizzard18

What would the API for this look like? Would it be something along the lines of related?: vscode.Location[] on the TestItem? Something more or less advanced?

@connor4312 I think Location[] is a good first step. I do think it would be valuable to have something like relatedSymbols?: vscode.DocumentSymbol[] (a rough shape is sketched after the list below). The most useful features that come to mind are:

  1. Auto-run a test whenever the user changes the referenced location. This can be done with Location.
  2. Allow the user to see what symbols are linked to a test, similar to (or incorporated into) find references. This would be much more straightforward if the test item can explicitly link to DocumentSymbols.
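
As a rough illustration only, the two shapes being compared might look like this; neither property exists on the real TestItem type:

import * as vscode from 'vscode';

// Hypothetical shapes mirroring the suggestion above.
interface TestItemWithRelations extends vscode.TestItem {
	// Plain source locations: enough for (1), auto-running tests on change.
	related?: vscode.Location[];
	// Explicit symbol links: enough for (2), surfacing related symbols in the UI.
	relatedSymbols?: vscode.DocumentSymbol[];
}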

@connor4312 (Member Author)

(2) is interesting; how would you envision the UI/UX for that?

@JustinGrote (Contributor)

Linking to symbols would be useful, since you wouldn't have to constantly update the location for add-on commands, but it could always be a dropdown menu on right-click for the test (maybe even a tree expansion).

@connor4312 (Member Author) commented Feb 22, 2022

The other simple alternative is a pair of functions such as provideImplementationRanges(testItem: vscode.TestItem): ProviderResult<vscode.Range[]> / provideAssociatedTests(range: vscode.Range): ProviderResult<vscode.TestItem>, which VS Code can call on demand when the user asks to go between tests and implementation. This is likely more efficient for implementors.
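
A sketch of that provider shape, simply mirroring the signatures named above (proposed, not part of the stable API):

import * as vscode from 'vscode';

// Proposed on-demand pair; the names and signatures mirror the comment above.
interface OnDemandRelatedCodeProvider {
	// "Where is the code this test exercises?"
	provideImplementationRanges(
		testItem: vscode.TestItem
	): vscode.ProviderResult<vscode.Range[]>;

	// "Which test covers this range?"
	provideAssociatedTests(
		range: vscode.Range
	): vscode.ProviderResult<vscode.TestItem>;
}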

@brettcannon (Member)

/cc @kimadeline

@firelizzard18

(2) is interesting; how would you envision the UI/UX for that?

@connor4312 IMO what would be most valuable would be to show the test play button in the gutter by the related code. That way, if I write a test for a function, I can edit the function and run the test without changing context. If autorun is enabled, then I can configure it to run related tests automatically, so I can edit the function, save, and then see the test results immediately (after the test completes).

Also I was thinking of a context menu option for tests. For example, if the user right clicks on the test item or the play button in the gutter, the context menu has "Go to related code", which either jumps or opens a peek view like "Go to definition"; and "Find all related code", which opens a side bar view like "Find all references".

The other simple alternative is a pair of functions such as provideImplementationRanges(testItem: vscode.TestItem): ProviderResult<vscode.Range[]> / provideAssociatedTests(range: vscode.Range): ProviderResult<vscode.TestItem>, which VS Code can call on demand when the user asks to go between tests and implementation. This is likely more efficient for implementors.

That should work. I would prefer provideAssociatedTests(symbol: vscode.DocumentSymbol): ProviderResult<vscode.TestItem>. Go associates functions with tests based on symbol name, so it would be useful not to have to get the symbol myself.

@brettcannon (Member)

what would be most valuable would be to show the test play button in the gutter by the related code. That way, if I write a test for a function, I can edit the function and run the test without changing context.

Would you expect a specific test to run, or just the test file that was found and believed to be associated with the code? While the latter would at least be possible in Python (based on file name matching), for instance, there's no way we would be able to associate code with a specific test.

@firelizzard18

@brettcannon Here's what I'm thinking:

  • The extension creates a TestItem for feature_test
  • The extension informs VSCode that this TestItem is associated with feature_func
  • VSCode shows ▷ in the gutter next to feature_func
  • When the user clicks ▷ next to feature_func, the TestItem feature_test is run

It is up to the extension to determine how to associate TestItems with DocumentSymbols (or ranges). Assuming Python creates a TestItem for the file, it can associate feature_func with the file's TestItem, such that clicking ▷ next to feature_func runs the file tests.

I am the author of the test explorer implementation for vscode-go, so my interest is how this relates to Go.

// ExecuteFeature does something
func ExecuteFeature() { ... }

// ExampleExecuteFeature is an example of how to use ExecuteFeature
func ExampleExecuteFeature() { ... }

// TestExecuteFeature tests ExecuteFeature
func TestExecuteFeature(t *testing.T) { ... }

In Go, examples are treated as tests, and are linked to functions or methods by matching the name. Therefore, it is reasonable for vscode-go to assert that ExampleExecuteFeature is an example of ExecuteFeature and that TestExecuteFeature is a test of ExecuteFeature, and thus that the example and test should be executed when ExecuteFeature is modified.

@connor4312 (Member Author) commented May 10, 2022

Thanks for the feedback, all.

From the feedback, I think one approach we can take is, like we do with tests, something of a hybrid (a rough shape is sketched below):

  • Have relatedCode?: vscode.Location[] on the TestItem.
    • I do not want to go with the approach of using the DocumentSymbol as @firelizzard18 suggested, since this type is something that LSP/languages understand but testing doesn't really. We don't generally share types across domains except for a few common types (like URI).
    • Having this as a standalone property allows test extensions to discover related code like they do tests today -- lazily based on editor visibility, eagerly if they're very quick, or on a stronger signal if they're not so fast. This problem of serving both fast and expensive test discovery is quite reminiscent of tests themselves, which resulted in the semi-lazy model we have today, so reusing this mechanism would make sense to me.
    • @firelizzard18 wants to be a user of this style of relations 🙂
  • Have an optional handler so that extensions can lazily serve related code only when explicitly asked for.
    • If discovery of related code cannot reasonably be done in realtime, allow a provider containing the proposed provideImplementations(testItem: vscode.TestItem): ProviderResult<vscode.Location[]> / provideAssociatedTests(location: vscode.Location): ProviderResult<vscode.TestItem[]> methods to be added to the TestController.
    • This is in addition to the relatedCode? property. These should return ranges and test items rather than adding to the relatedCode property, since the relations are point-in-time and should not be persisted and tracked like the TestItem property.
    • Feedback: do we need this handler -- who's interested in it? I dislike having two ways to do the same thing, but I think the former proposal would be unsuitable for any 'slower' test runners.

cc @jdneo / @kimadeline / @JustinGrote
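
A minimal sketch of the hybrid shape described above, with the names taken from the bullet points; none of this is stable API:

import * as vscode from 'vscode';

// Eager side: a property the extension fills in whenever it already knows
// the related code (proposed, not a real TestItem field).
interface ProposedTestItem extends vscode.TestItem {
	relatedCode?: vscode.Location[];
}

// Lazy side: an optional handler on the TestController, called on demand
// when realtime discovery is too expensive (also proposed only).
interface ProposedRelatedCodeHandler {
	provideImplementations(
		testItem: vscode.TestItem
	): vscode.ProviderResult<vscode.Location[]>;
	provideAssociatedTests(
		location: vscode.Location
	): vscode.ProviderResult<vscode.TestItem[]>;
}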

@firelizzard18

@connor4312 I've been thinking about how I would actually implement this kind of functionality for Go:

  • What source ranges are associated with test X?
    • To answer this, I would need to scan symbols for all files in the same directory as the test.
  • What tests are associated with source function/method X?
    • Comparing Function or Type.Method against already loaded tests is easy: all I have to do is find a test item in the same directory named TestFunction or TestType_Method, respectively.
    • However, to fully answer this question I would need to resolve all tests in the directory to ensure I didn't miss anything.

So the two provide handlers seem like the best choice when I have to scan. But I do want to have a push mechanism (such as the property) so I can automatically connect tests to source ranges when both files have been opened. Additionally, I am considering adding a configuration flag (disabled by default) that will automatically scan the entire directory for tests and source symbols when any file in the directory is opened. This should have reasonable performance for small projects and it can be done in the background. This would also require a push mechanism.
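
A rough sketch of the cheap name-based lookup described above, assuming the tests in question have already been loaded as test items; the helper and its traversal are illustrative, not vscode-go's actual code:

import * as path from 'path';
import * as vscode from 'vscode';

// Find an already-loaded test item in the same directory whose label follows
// Go's Test<Func> / Test<Type>_<Method> naming convention.
function findAssociatedTest(
	ctrl: vscode.TestController,
	sourceUri: vscode.Uri,
	funcName: string // e.g. "Sort" or "Tree.Insert"
): vscode.TestItem | undefined {
	const wanted = 'Test' + funcName.replace('.', '_');
	const dir = path.dirname(sourceUri.fsPath);
	let found: vscode.TestItem | undefined;
	const visit = (items: vscode.TestItemCollection) => {
		items.forEach(item => {
			if (!found && item.uri && path.dirname(item.uri.fsPath) === dir && item.label === wanted) {
				found = item;
			}
			visit(item.children);
		});
	};
	visit(ctrl.items);
	return found;
}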

@jrieken (Member) commented May 12, 2022

This should have reasonable performance for small projects and it can be done in the background. This would also require a push mechanism.

🤣 this is why we prefer pull instead of push. The workbench is usually better equipped to know when to ask for what. E.g., we should know that files have been opened and we should be asking for info. It should be two steps: statically describe the files for which you can provide something (e.g., a document selector) and later answer our question. That also enables non-obvious scenarios like LS, where we might need to know about tests that aren't opened on the machine you are running in.

@connor4312 (Member Author)

Perhaps we could actually go with a "pull-only" model here, and have a user setting to configure the "pull aggression." So by default, we would only pull related code when asked for via a command ("run tests for" or "implementations for"). But maybe there is a setting that does what you describe and pulls implementation ranges for tests in active editors.

I'm not super keen on this since there is complexity around dealing with invalidation and re-pulling, but the path is there.

@firelizzard18

@jrieken @connor4312 I see your points and I'm OK with letting VSCode/the user decide when to pull that info.

But in the specific case of both files being opened, I would very much like to automatically connect the source ranges. If Sort is defined in sort.go and TestSort is defined in sort_test.go, VSCode will already be fetching document symbols for highlighting, so it should be almost free for me to ask for the document symbols for both files and connect TestSort to Sort (a rough sketch follows below). Having the source file and the test file open at the same time seems like it would be a pretty common scenario, so I'd like to support it. And once we have the ability to autorun a test when a source range changes, it would be even more useful to support that scenario.
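
A minimal sketch of that "both files open" case, assuming the proposed relatedCode-style property from earlier in the thread; the vscode.executeDocumentSymbolProvider command is a real built-in, everything else is illustrative:

import * as vscode from 'vscode';

// When sort.go and sort_test.go are both open, reuse the document symbols
// VS Code already computes to link TestSort to Sort. `relatedCode` is the
// proposed (not yet existing) property, hence the cast.
async function linkTestToSource(
	test: vscode.TestItem, // represents TestSort, discovered from sort_test.go
	sourceUri: vscode.Uri  // sort.go
): Promise<void> {
	const symbols = await vscode.commands.executeCommand<vscode.DocumentSymbol[]>(
		'vscode.executeDocumentSymbolProvider',
		sourceUri
	);
	// Go convention: TestSort tests Sort, so match on the "Test" prefix.
	const target = symbols?.find(s => `Test${s.name}` === test.label);
	if (target) {
		(test as any).relatedCode = [new vscode.Location(sourceUri, target.range)];
	}
}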

@icetbr commented Sep 22, 2022

I hacked together a simple extension for my needs; it's very crude, but maybe it can help or inspire someone.

https://marketplace.visualstudio.com/items?itemName=icetbr.vscode-testing-nirvana

@abhijit-chikane (Contributor) commented May 6, 2024

I encountered an issue where I need to meet the 80% test coverage criteria. Unfortunately, I can't determine which test cases cover which parts of the code simply by looking at the coverage UI.

@connor4312 (Member Author)

That is a little different, but it's tracked in #212196.

@connor4312 connor4312 modified the milestones: On Deck, July 2024 Jul 2, 2024
@connor4312 (Member Author)

Hello, it's been a minute!

I've taken this up again and implemented a proposed API in #222252. This is a simpler, provider-based approach, where the provider is called on demand. There are no passive gutter decorations as in my previous implementation (perhaps an API point we can add in the future), but it should be a lot easier to implement.

In my initial PR this is surfaced in a few ways:

  • "Run tests at cursor", when called outside the range of a test, will automatically try to find tests related to the code and run those. I feel like this is a pretty nice and natural entrypoint.
  • Reusing the references infrastructure, there are commands for going to and peeking related tests (if you're in implementation) or related code (if you're in a test)

The PR implements this in the selfhost provider we use to run tests in the VS Code repo, using a simple file-level import graph.

@firelizzard18

@connor4312 What kind of time frame are you looking at for feedback? Coincidentally, I'm currently working on the Go test controller again, but I'm partway through a full rewrite, so I'm currently focused on feature parity with the existing implementation. Side note: I implemented a TestItemResolver that allows me to expose Go tests via a TreeDataProvider-style interface, which feels much more natural to me than my previous resolver. Also, this new implementation uses the Go language server (gopls) to discover tests, so my previous statements about document symbols are not applicable to the new implementation.

For anyone else looking for the type definitions, here they are (from the PR):

export interface TestController {
	/**
	 * A provider used for associating code location with tests.
	 */
	relatedCodeProvider?: TestRelatedCodeProvider;
}

export interface TestRelatedCodeProvider {
	/**
	 * Returns the tests related to the given code location. This may be called
	 * by the user either explicitly via a "go to test" action, or implicitly
	 * when running tests at a cursor position.
	 *
	 * @param document The document in which the code location is located.
	 * @param position The position in the document.
	 * @param token A cancellation token.
	 * @returns A list of tests related to the position in the code.
	 */
	provideRelatedTests?(document: TextDocument, position: Position, token: CancellationToken): ProviderResult<TestItem[]>;

	/**
	 * Returns the code related to the given test case.
	 *
	 * @param test The test for which to provide related code.
	 * @param token A cancellation token.
	 * @returns A list of locations related to the test.
	 */
	provideRelatedCode?(test: TestItem, token: CancellationToken): ProviderResult<Location[]>;
}
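
For context, a minimal usage sketch of the provider above; relatedCodeProvider is still proposed API (hence the cast), and the two lookup helpers are placeholders for extension-specific logic:

import * as vscode from 'vscode';

// Placeholders standing in for whatever mapping the extension maintains.
declare function findTestsCoveringPosition(
	uri: vscode.Uri,
	position: vscode.Position
): vscode.TestItem[] | undefined;
declare function locationsExercisedByTest(id: string): vscode.Location[] | undefined;

const ctrl = vscode.tests.createTestController('example', 'Example Tests');

// Proposed API: relatedCodeProvider is not in the stable typings yet.
(ctrl as any).relatedCodeProvider = {
	provideRelatedTests(document: vscode.TextDocument, position: vscode.Position) {
		// Map the code under the cursor back to known test items.
		return findTestsCoveringPosition(document.uri, position);
	},
	provideRelatedCode(test: vscode.TestItem) {
		// Return the source locations this test is known to exercise.
		return locationsExercisedByTest(test.id);
	},
};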

@connor4312 (Member Author)

I would expect that this would not be finalized sooner than September, as we're at the end of the July iteration and will probably want to let it bake in proposed for at least one iteration.

@firelizzard18

@connor4312 Here's a bug for your brain that's kind of related. Go doesn't have explicit support for table-driven tests, but there are common patterns such as:

func TestMath(t *testing.T) {
	cases := []struct {
		Name   string
		Expr   string
		Result string
	}{
		{"add", "1+1", "2"},
		{"sub", "1-1", "0"},
		{"mul", "1*1", "1"},
		{"div", "1/1", "1"},
	}

	for _, c := range cases {
		// This creates a sub-test with the given name
		t.Run(c.Name, func(t *testing.T) {
			// Example sub-test implementation
			result, err := script.Eval(c.Expr)
			if err != nil {
				t.Fatal(err)
			}
			if result != c.Result {
				t.Errorf("want %s, got %s", c.Result, result)
			}
		})
	}
}

I plan to add static analysis to detect cases like this, but this presents a dilemma: what do I report as the range for the test item? It seems to me that the "Go to test" action should jump to the body (func(t *testing.T)), since if I'm jumping to the test, I probably want to see the logic of the test. On the other hand, there are two issues (golang/vscode-go#1602, golang/vscode-go#2445) that request a way to run a specific case, which would be much easier if I reported the entry in the 'table' as the test item's range.

The best solution I can think of (that doesn't involve adding explicit table-driven test support to test items) is some kind of secondaryRanges: Range[] test item property that VSCode uses to add additional ▷ buttons (a rough shape is sketched below). I could use provideRelatedCode, but A) it sounds like that doesn't add ▷ buttons and B) it doesn't really fit the intent of related code as I understand it.
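
For concreteness, the hypothetical shape being floated here (not an existing or proposed vscode API):

import * as vscode from 'vscode';

// Hypothetical only: the `secondaryRanges` idea from the comment above.
interface TestItemWithSecondaryRanges extends vscode.TestItem {
	// Extra source ranges (e.g. each entry in the cases table) that should
	// also get a run (▷) button, in addition to the item's primary range.
	secondaryRanges?: vscode.Range[];
}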

@connor4312 (Member Author)

Hm, yea, we don't have a very good solution to that right now. I feel like related code might address that, but we'd want some more explicit "execute related tests" binding, such that a user could place their cursor in the case's struct and then run that. But it's also not very discoverable without its own 'play' buttons.

I think the way to go might be to actually declare the subtest's range as its case in the struct, rather than the t.Run call. That would mean no 'play' button on the t.Run(c.Name, ...) line, but if a user wanted to run all subtests at that point they could just run the parent test.
