
[Model] Add dgl.nn.CuGraphSAGEConv model #5137

Merged 14 commits on Feb 22, 2023
7 changes: 5 additions & 2 deletions docs/source/api/python/nn-pytorch.rst
@@ -14,14 +14,12 @@ Conv Layers
~dgl.nn.pytorch.conv.GraphConv
~dgl.nn.pytorch.conv.EdgeWeightNorm
~dgl.nn.pytorch.conv.RelGraphConv
~dgl.nn.pytorch.conv.CuGraphRelGraphConv
~dgl.nn.pytorch.conv.TAGConv
~dgl.nn.pytorch.conv.GATConv
~dgl.nn.pytorch.conv.GATv2Conv
~dgl.nn.pytorch.conv.EGATConv
~dgl.nn.pytorch.conv.EdgeConv
~dgl.nn.pytorch.conv.SAGEConv
~dgl.nn.pytorch.conv.CuGraphSAGEConv
~dgl.nn.pytorch.conv.SGConv
~dgl.nn.pytorch.conv.APPNPConv
~dgl.nn.pytorch.conv.GINConv
@@ -43,6 +41,11 @@ Conv Layers
~dgl.nn.pytorch.conv.PNAConv
~dgl.nn.pytorch.conv.DGNConv

CuGraph Conv Layers
----------------------------------------
~dgl.nn.pytorch.conv.CuGraphRelGraphConv
~dgl.nn.pytorch.conv.CuGraphSAGEConv

Dense Conv Layers
----------------------------------------

18 changes: 11 additions & 7 deletions python/dgl/nn/pytorch/conv/cugraph_sageconv.py
@@ -17,9 +17,14 @@
class CuGraphSAGEConv(nn.Module):
r"""An accelerated GraphSAGE layer from `Inductive Representation Learning
on Large Graphs <https://arxiv.org/pdf/1706.02216.pdf>`__ that leverages the
highly-optimized aggregation primitives in cugraph-ops.
highly-optimized aggregation primitives in cugraph-ops:

See :class:`dgl.nn.pytorch.conv.SAGEConv` for mathematical model.
.. math::

    h_{\mathcal{N}(i)}^{(l+1)} &= \mathrm{aggregate}
    \left(\{h_{j}^{(l)}, \forall j \in \mathcal{N}(i)\}\right)

    h_{i}^{(l+1)} &= W \cdot \mathrm{concat}
    \left(h_{i}^{(l)}, h_{\mathcal{N}(i)}^{(l+1)}\right)

This module depends on the :code:`pylibcugraphops` package, which can be
installed via :code:`conda install -c nvidia 'pylibcugraphops>=23.02'`.
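The update above can be sketched in plain PyTorch — a hypothetical reference implementation with mean aggregation, not DGL's accelerated cugraph-ops kernel (the function name `sage_update` and the toy weight `W` are illustrative assumptions):

```python
import torch

def sage_update(h, in_nbrs, W):
    """Mean-aggregate in-neighbor features, concat with self, apply linear W."""
    # h: (N, D_in) node features; in_nbrs[d]: tensor of in-neighbor ids of node d
    agg = torch.stack([h[idx].mean(dim=0) for idx in in_nbrs])
    return torch.cat([h, agg], dim=1) @ W.T  # (N, D_out)

h = torch.tensor([[1.0, 0.0], [0.0, 1.0], [0.0, 0.0]])
in_nbrs = [torch.tensor([1, 2]), torch.tensor([0]), torch.tensor([0, 1])]
W = torch.eye(2, 4)  # toy weight that picks out the "self" half of the concat
out = sage_update(h, in_nbrs, W)
assert out.shape == (3, 2)
assert torch.equal(out, h)  # with this W, the output reduces to the self features
```

With a learned `W` of shape `(D_out, 2 * D_in)`, both the self and aggregated halves contribute to the output.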
@@ -45,7 +50,6 @@ class CuGraphSAGEConv(nn.Module):
>>> import dgl
>>> import torch
>>> from dgl.nn import CuGraphSAGEConv
...
>>> device = 'cuda'
>>> g = dgl.graph(([0,1,2,3,2,5], [1,2,3,4,0,3])).to(device)
>>> g = dgl.add_self_loop(g)
@@ -72,8 +76,8 @@ def __init__(
):
if has_pylibcugraphops is False:
raise ModuleNotFoundError(
f"{self.__class__.__name__} requires pylibcugraphops >= 23.02 "
f"to be installed."
f"{self.__class__.__name__} requires pylibcugraphops >= 23.02. "
f"Install via `conda install -c nvidia 'pylibcugraphops>=23.02'`."
)

valid_aggr_types = {"max", "min", "mean", "sum"}
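The validation that follows the set above is collapsed in this diff; it can be sketched as a standalone guard (a hypothetical reconstruction for illustration, not the exact merged code):

```python
VALID_AGGR_TYPES = {"max", "min", "mean", "sum"}

def check_aggregator_type(aggr: str) -> str:
    # fail fast on an unsupported reduction, mirroring the set above
    if aggr not in VALID_AGGR_TYPES:
        raise ValueError(
            f"Invalid aggregator_type {aggr!r}; "
            f"must be one of {sorted(VALID_AGGR_TYPES)}."
        )
    return aggr

print(check_aggregator_type("mean"))  # → mean
```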
@@ -102,7 +106,7 @@ def forward(self, g, feat, max_in_degree=None):
g : DGLGraph
The graph.
feat : torch.Tensor
Node features. Shape: :math:`(|V|, D_{in})`.
Node features. Shape: :math:`(N, D_{in})`.
max_in_degree : int
Maximum in-degree of destination nodes. It is only effective when
:attr:`g` is a :class:`DGLBlock`, i.e., bipartite graph. When
@@ -113,7 +117,7 @@
Returns
-------
torch.Tensor
Output node features. Shape: :math:`(|V|, D_{out})`.
Output node features. Shape: :math:`(N, D_{out})`.
"""
offsets, indices, _ = g.adj_sparse("csc")
