
[TVMC] End to end benchmarking by default #10226

Closed
wants to merge 2 commits

Conversation

@tmoreau89 (Contributor) commented Feb 11, 2022

End-to-end benchmarking was added to the VM and graph executor in order to faithfully represent execution time by accounting for data transfer overheads. This can be particularly significant on discrete GPUs, where PCI-E transfers are important to account for in a typical model serving deployment.

This PR proposes to modify the default measurement done by TVMC to always benchmark end-to-end execution time from a CPU-local device context.
Another option is to expose a flag in TVMC that lets the user opt into end-to-end benchmarking. However, I recommend against benchmarking without data transfer overheads, as it presents an overly optimistic outlook on TVM performance.
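To make the distinction concrete, here is a minimal, hypothetical sketch (not TVM code; the function names are invented for illustration) of how an end-to-end measurement differs from a compute-only one: the timed region either includes or excludes the host-to-device and device-to-host transfers.

```python
import time


def benchmark(run_compute, transfer_in=None, transfer_out=None,
              end_to_end=False, repeat=10):
    """Toy benchmark loop contrasting end-to-end vs. compute-only timing.

    run_compute:  callable standing in for kernel execution on the device.
    transfer_in:  callable standing in for the host-to-device copy (e.g. over PCI-E).
    transfer_out: callable standing in for the device-to-host copy.
    end_to_end:   when True, the timed region includes both transfers,
                  mirroring what a model-serving deployment actually pays.
    Returns the mean wall-clock time per iteration in seconds.
    """
    times = []
    for _ in range(repeat):
        start = time.perf_counter()
        if end_to_end and transfer_in is not None:
            transfer_in()          # data transfer counted only in end-to-end mode
        run_compute()              # always counted
        if end_to_end and transfer_out is not None:
            transfer_out()
        times.append(time.perf_counter() - start)
    return sum(times) / len(times)


if __name__ == "__main__":
    # Simulated costs: transfers dominate compute, as they can on discrete GPUs.
    compute = lambda: time.sleep(0.001)
    transfer = lambda: time.sleep(0.005)

    device_only = benchmark(compute, repeat=5)
    e2e = benchmark(compute, transfer, transfer, end_to_end=True, repeat=5)
    print(f"device-only: {device_only:.4f}s  end-to-end: {e2e:.4f}s")
```

In this toy setup the end-to-end figure is several times the device-only figure, which is exactly the gap that compute-only benchmarks hide.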


@jwfromm (Contributor) commented Feb 15, 2022

Thanks for making this change @tmoreau89. I think changing the default behavior is likely going to cause issues for entrenched users. I just put up PR #10256 which exposes an end_to_end argument to the python and command line interface and defaults it to False. I think this is a less disruptive solution.

@tmoreau89 (Contributor Author) commented Feb 15, 2022

@jwfromm great points, I'll close this PR in favor of yours, which seems less disruptive.

@tmoreau89 tmoreau89 closed this Feb 15, 2022