[bugfix] Link against pytorch library by name rather than path (attempting to fix linking issue). #3246
Description
This is meant to fix #3240, which was caused by #3225, which in turn was an attempt to fix #3220: the issue of DGL being compiled against PyTorch builds with different CUDA versions (e.g., the pip default 1.9.0+cu102 vs. 1.9.0+cu111).
This explicitly links against `torch` by name rather than by path. This needs more scrutiny and testing, as I can't claim to be an expert on dynamic linking. In my testing below, it appears to achieve the desired result: PyTorch builds with different CUDA versions can work against the same tensor adapter binary.
Checklist
Please feel free to remove inapplicable items for your PR.
or have been fixed to be compatible with this change
Changes
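The change amounts to something like the following in the tensor adapter's CMake configuration. This is a minimal sketch, not DGL's actual build files: the target name `tensoradapter_pytorch` and the use of `TORCH_INSTALL_PREFIX` here are illustrative placeholders.

```cmake
# Before (sketch): linking against the full path to whatever libtorch was
# installed at build time bakes that specific path into the binary, so a
# PyTorch install with a different CUDA build cannot satisfy it.
# target_link_libraries(tensoradapter_pytorch
#     "${TORCH_INSTALL_PREFIX}/lib/libtorch.so")

# After (sketch): linking by name records only a NEEDED entry for "torch",
# so the dynamic loader resolves whichever libtorch the user's current
# PyTorch installation provides at run time.
target_link_directories(tensoradapter_pytorch PRIVATE
    "${TORCH_INSTALL_PREFIX}/lib")
target_link_libraries(tensoradapter_pytorch torch)
```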
Prior to #3225, when compiling, cmake would output the following:
After #3225, but without this PR, it would output:
With this PR:
When compiling DGL with this PR and torch==1.9.0+cu102:
Then, using the same DGL installation, and changing the PyTorch version to `1.9.0+cu111`:
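The mechanism being relied on here can be illustrated in miniature with `ctypes`. Loading a shared library by name (rather than by absolute path) lets the dynamic loader pick whichever matching library the current environment provides, which is exactly why a by-name link against `torch` works across CUDA builds. The example below uses `libm` purely as a stand-in, since it is available everywhere on Linux:

```python
import ctypes
import ctypes.util

# Resolve the math library *by name*: the loader searches its standard
# paths (and LD_LIBRARY_PATH), so whichever matching library the current
# environment provides is used. The "libm.so.6" fallback is a
# Linux-specific assumption for minimal environments without ldconfig.
libm_name = ctypes.util.find_library("m") or "libm.so.6"
libm = ctypes.CDLL(libm_name)

# Declare the C signature of cos() so ctypes marshals doubles correctly.
libm.cos.restype = ctypes.c_double
libm.cos.argtypes = [ctypes.c_double]

print(libm.cos(0.0))  # -> 1.0
```

Had the library been loaded by a hard-coded absolute path instead, swapping in a different build of it (the analogue of switching PyTorch CUDA versions) would fail unless the file happened to sit at that exact path.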