[GraphBolt][CUDA] GPUCachedFeature update bug when feature dimensions differ #7377
Labels
bug:confirmed (Something isn't working)
Release Blocker (Issues that block release)
🔨Work Item (Work items tracked in the project tracker: https://github.com/orgs/dmlc/projects/2)
Description
dgl/examples/sampling/graphbolt/node_classification.py
Line 224 in 67a897f
GPUCachedFeature does not currently support updating the feature with dimensions different from those it was initialized with. Our inference loop uses the update function, and since the layers have differing hidden dimensions, we get the error below.
We need to update GPUCachedFeature so that it reconstructs the GPUCache when the feature dimension changes.
This bug prevents us from using GPUCachedFeature in our single-GPU examples that have an inference loop.
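The proposed fix can be sketched in plain Python/NumPy. This is an illustrative stand-in, not GraphBolt's actual implementation: the class name, `cache_size` parameter, and `_build` helper are hypothetical, but the core idea matches the issue, i.e. `update` rebuilds the cache whenever the incoming feature's dimension differs from the current one instead of writing into the old cache.

```python
import numpy as np

class CachedFeatureSketch:
    """Hypothetical stand-in for GPUCachedFeature: keeps a fixed-size
    cache of the first `cache_size` rows of the feature matrix."""

    def __init__(self, feature, cache_size):
        self.cache_size = cache_size
        self._build(feature)

    def _build(self, feature):
        # The cache's row width is tied to the feature dimension,
        # so it must be recreated whenever that dimension changes.
        self.feature = feature
        self.cache = feature[: self.cache_size].copy()

    def update(self, feature):
        if feature.shape[1:] != self.feature.shape[1:]:
            # Dimension changed (e.g. a new layer's hidden size):
            # reconstruct the cache rather than writing into the
            # old, wrongly-sized buffer, which is what errors today.
            self._build(feature)
        else:
            # Same dimensions: refresh in place.
            self.feature = feature
            self.cache[:] = feature[: self.cache_size]

    def read(self, ids):
        # Simplified read path; the real class serves cache hits
        # from GPU memory and misses from the backing store.
        return self.feature[ids]

# Layer-to-layer dimension change during inference: 128 -> 256.
feat = CachedFeatureSketch(np.zeros((10, 128)), cache_size=4)
feat.update(np.ones((10, 256)))
print(feat.read(np.array([0, 1])).shape)  # (2, 256)
```

With this behavior, the inference loop can call `update` once per layer even though each layer's hidden dimension differs, at the cost of losing the cache contents on each rebuild.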