
Releases: dmlc/dgl

v2.4.0

03 Sep 04:16

Highlights

  • DGL 2.4 documentation can be found here: https://www.dgl.ai/dgl_docs/index.html
  • The dgl.distributed module is no longer imported by default with import dgl. Users must import it manually: import dgl.distributed (see the snippet after this list).
  • DistNodeDataLoader and DistEdgeDataLoader have moved from dgl.dataloading to dgl.distributed. Users are recommended to call dgl.distributed.DistNode/EdgeDataLoader; dgl.dataloading.DistNode/EdgeDataLoader remains available for backward compatibility but will be removed in the next release.
  • GraphBolt examples are now in examples/graphbolt.
  • Users with a CUDA-enabled torch installation are now required to install GraphBolt's CUDA wheels.
  • numpy 2.x is now supported.
  • torch 2.4 & CUDA 12.4 are now supported by @pyynb in #7629
  • Importing DGL no longer imports GraphBolt. by @Rhett-Ying in #7676, #7756
  • GraphBolt no longer depends on the deprecated torchdata package; this release is incompatible with torchdata. by @frozenbugs in #7638, #7609, #7667, #7688
  • [GraphBolt][CUDA] Use better memory allocation algorithm to avoid OOM. by @mfbalin in #7618
  • [GraphBolt] GPU utilization has been maximized by eliminating all (known) GPU synchronizations: #7528, #7682, #7709, #7707, #7712, #7705, #7602, #7603, #7634, #7757 by @mfbalin.
  • [GraphBolt][io_uring] gb.DiskBasedFeature is now ready to use for out-of-core training: #7506, #7713, #7562, #7515, #7530, #7518 by @mfbalin.
  • [GraphBolt] Users are now recommended to use gb.numpy_save_aligned instead of numpy.save to save their features for out-of-core training (see the snippet after this list). by @mfbalin in #7524
  • [GraphBolt] gb.CPUCachedFeature was added to speed up out-of-core training: #7492, #7508, #7520, #7526, #7525, #7531, #7537, #7538, #7581, #7723, #7644, #7731 and more by @mfbalin.
  • [GraphBolt] The feature fetching pipeline is fully parallelized by enabling all hardware components to run concurrently: #7546, #7547, #7548, #7550, #7549, #7553, #7551, #7552, #7554, #7555, #7559, #7540 and more by @mfbalin.
  • [GraphBolt][Temporal] Temporal sampling support is extended with more samplers and GPU support: #7500, #7503, #7677, #7678 by @mfbalin.
  • [GraphBolt][CUDA] Sampling pipeline parallelism optimizations in #7714, #7665 and example use in #7702, #7669, #7664, #7662 by @mfbalin.
  • [GraphBolt][PyG] Add to_pyg for layer input conversion. by @mfbalin in #7745 and #7747.
  • [Feature] Fixed sampler with limit on sampled nodes/edges in batch subgraph by @ayushnoori in #6668
  • [GraphBolt] Refactor and extend FeatureStore. by @mfbalin in #7558
  • [dev] Several build and setup improvements by @Rhett-Ying in #7565, #7567, #7570, #7571, #7574, #7684
  • [GraphBolt][CUDA] gb.indptr_edge_ids. by @mfbalin in #7592, #7593
  • [GraphBolt] Allow using multiple processes for GraphBolt partition conversion by @thvasilo in #7497
  • [GraphBolt][CUDA] Update CCCL to 2.6.0. by @mfbalin in #7636
  • [Performance] Change hash table for performance. by @mfbalin in #7658, #7631
  • [GraphBolt][CUDA] Refactor overlap_graph_fetch, simplify gb.DataLoader. by @mfbalin in #7681, #7732
  • [Build] Organize cmake file by @mfbalin in #7715
  • [GraphBolt] Feature.count(). by @mfbalin in #7730
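
A minimal sketch of the import-path changes above, assuming only that DGL 2.4 is installed:

    import dgl
    import dgl.distributed  # no longer imported automatically by `import dgl`

    # The distributed loaders now live under dgl.distributed; the old
    # dgl.dataloading path still works but will be removed next release.
    loader_cls = dgl.distributed.DistNodeDataLoader

And a hedged sketch of saving features for out-of-core training with gb.numpy_save_aligned; the argument order is assumed to mirror numpy.save(file, arr), and the feature shape and file name are illustrative:

    import numpy as np
    import dgl.graphbolt as gb

    # An aligned .npy file lets gb.DiskBasedFeature read the data
    # efficiently via io_uring during out-of-core training.
    feat = np.random.rand(1000, 128).astype(np.float32)
    gb.numpy_save_aligned("feat.npy", feat)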

New Examples

  • [GraphBolt] Add DiskBasedFeature example for DGL model by @Liu-rj in #7624
  • [GraphBolt][PyG] Heterogeneous example. by @mfbalin in #7722
  • [GraphBolt][PyG] Link prediction example. by @mfbalin in #7752

New Built-in Datasets

  • [GraphBolt] igb-hom-[tiny|small|medium] variants of IGB datasets are added. by @BowenYao18 in #7717

Full Changelog: v2.3.0...v2.4.0

v2.3.0

28 Jun 00:16

Highlights

  • torch 2.3.1 is supported. The supported torch versions range from 2.1 to 2.3.
  • numpy 2.0.0 is not yet fully supported, so we have added a dependency requirement of numpy<2.0.0. This limitation will be removed in the near future.
  • ItemSetDict has been replaced by HeteroItemSet. Please use the new class; an alias is kept for the deprecated name (see the snippet after this list).
  • Incremental GPU graph caching has been added to GraphBolt in #7470, #7475, #7483. An example use is shown in #7482.
  • exclude_edges is enabled for distributed DGL.
  • Starting with this release, we no longer provide prebuilt packages for Windows and Mac. Please build and install from source.
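
A minimal sketch of the rename, assuming DGL 2.3's dgl.graphbolt (the node types and ID ranges are illustrative):

    import torch
    import dgl.graphbolt as gb

    # HeteroItemSet replaces ItemSetDict; the deprecated name remains
    # available as an alias for now.
    train_set = gb.HeteroItemSet({
        "user": gb.ItemSet(torch.arange(100), names="seeds"),
        "item": gb.ItemSet(torch.arange(50), names="seeds"),
    })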

Bug Fixes

  • [DistDGL] enable exclude_edges for sample_neighbors() by @Rhett-Ying in #7425
  • [DistDGL] enable exclude_edges for sample_etype_neighbors() by @Rhett-Ying in #7427
  • [DistDGL] fix device mismatch when calling all_to_all with gloo backend by @Rhett-Ying in #7409
  • [GraphBolt][CUDA] GPUCachedFeature update fix. by @mfbalin in #7384
  • [GraphBolt][CUDA] Make dataloader pickleable. by @mfbalin in #7391
  • [graphbolt] skip non-existent types in input_nodes by @Rhett-Ying in #7386
  • [DistPart] Fix corner case in dist partition which always led to an assertion error being triggered. by @thvasilo in #7395
  • [GraphBolt] Fix blocks in minibatch when facing empty edges in the subgraph. by @yxy235 in #7413
  • [Feature] Add check for NNZ in COOToCSR by @Skeleton003 in #7459

New Examples

  • [GraphBolt] Labor (Layer-Neighbor Sampling) example by @mfbalin in #7437

Full Changelog: v2.2.1...v2.3.0

v2.2.1

11 May 02:59

We're thrilled to announce the release of DGL 2.2.1. 🎉🎉🎉

Major Changes

  • The supported PyTorch versions are 2.1.0/1/2, 2.2.0/1/2, and 2.3.0. See the install commands on the DGL website.
  • MiniBatch in GraphBolt is refactored: seed_nodes and node_pairs are replaced with a unified seeds attribute throughout the pipeline (see the snippet after this list). Refer to the latest examples for more details. by @yxy235
  • GraphBolt sampling is enabled in DistDGL for node classification. See the examples in the repository.
  • [GraphBolt] Optimize hetero sampling on CPU by @RamonZhou in #7360
  • [GraphBolt] torch.compile() support for gb.expand_indptr. by @mfbalin in #7188
  • [GraphBolt] Make unique_and_compact deterministic by @RamonZhou in #7217, #7239
  • [GraphBolt] Hyperlink support in subgraph_sampler. by @yxy235 in #7354
  • [GraphBolt] More features of dgl.dataloading.LaborSampler are available in gb.LayerNeighborSampler: added layer_dependency and batch_dependency parameters. #7205, #7208, #7212, #7220 by @mfbalin
  • [GraphBolt][CUDA] Faster GPU neighbor sampling and compaction kernels. #7239, #7215 by @mfbalin
  • [GraphBolt][CUDA] Better hetero CPU&GPU performance via fused kernels. #7223, #7312 by @mfbalin
  • [GraphBolt][CUDA] GPU synchronizations eliminated throughout the sampling pipeline. #7240, #7264 by @mfbalin
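
A minimal sketch of the seeds refactor, following the updated examples (the sizes below are illustrative):

    import torch
    import dgl.graphbolt as gb

    # Before: names="seed_nodes" for node tasks, names="node_pairs" for link tasks.
    # After: a unified "seeds" name is used throughout the pipeline.
    node_set = gb.ItemSet(torch.arange(1024), names="seeds")

    # For link prediction, seeds are (N, 2) node pairs under the same name.
    pairs = torch.stack([torch.arange(512), torch.arange(1, 513)], dim=1)
    edge_set = gb.ItemSet(pairs, names="seeds")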

Bug Fixes

  • [DistGB] revert toindex() but refine tests by @Rhett-Ying in #7197
  • [GraphBolt] PyG advanced example torch.compile() bug workaround. by @mfbalin in #7259
  • [CUDA][Bug] CSR transpose bug in CUDA 12 by @mfbalin in #7295
  • [Determinism] Enable environment var to use cusparse spmm deterministic algorithm by @TristonC in #7310

Full Changelog: v2.1.0...v2.2.1

v2.1.0

06 Mar 03:29

We're thrilled to announce the release of DGL 2.1.0. 🎉🎉🎉

Major Changes:

  1. CUDA backend of GraphBolt is now available. Thanks @mfbalin for the extraordinary effort. See the updated examples.
  2. PyTorch 1.13 is not supported any more. The supported PyTorch versions are 2.0.0/1, 2.1.0/1/2, 2.2.0/1.
  3. CUDA 11.6 is not supported any more. The supported CUDA versions are 11.7, 11.8, 12.1.
  4. Data loading performance improvements via pipeline parallelism in #7039 and #6954; see the new gb.DataLoader parameters and the sketch after this list.
  5. Miscellaneous operation/kernel optimizations.
  6. Add support for converting GraphBolt's sampling output to the PyG data format and training with PyG models seamlessly; see the examples in the repository.
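
A hedged sketch of a GPU-enabled GraphBolt pipeline following the updated examples; the dataset name, batch size, and fanouts are illustrative:

    import dgl.graphbolt as gb

    dataset = gb.BuiltinDataset("ogbn-arxiv").load()
    graph = dataset.graph.pin_memory_()    # expose the graph to the GPU via UVA
    feature = dataset.feature.pin_memory_()
    train_set = dataset.tasks[0].train_set

    datapipe = gb.ItemSampler(train_set, batch_size=1024, shuffle=True)
    datapipe = datapipe.copy_to("cuda")    # move seeds early so sampling runs on GPU
    datapipe = datapipe.sample_neighbor(graph, [10, 10])
    datapipe = datapipe.fetch_feature(feature, node_feature_keys=["feat"])
    dataloader = gb.DataLoader(datapipe, num_workers=0)

    for minibatch in dataloader:
        pass  # feed minibatch.blocks and features into the model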

Bug Fixes

  • [GraphBolt] Negative node pairs should be 2D by @peizhou001 in #6951
  • [GraphBolt] Fix fanouts setting in rgcn example by @RamonZhou in #6959
  • [GraphBolt] fix random generator for shuffle among all workers by @Rhett-Ying in #6982
  • [GraphBolt] fix preprocess issue for single ntype/etype graph by @Rhett-Ying in #7011
  • [GraphBolt] Fix gpu NegativeSampler for seeds. by @yxy235 in #7068
  • [GraphBolt][CUDA] Fix link prediction early-stop. by @mfbalin in #7083

New Examples

  • [Feature] ARGO: an easy-to-use runtime to improve GNN training performance on multi-core processors by @jasonlin316 in #7003

Acknowledgement

Thanks for all your contributions.
@drivanov @frozenbugs @LourensT @Skeleton003 @mfbalin @RamonZhou @Rhett-Ying @wkmyws @jasonlin316 @caojy1998 @czkkkkkk @hutiechuan @peizhou001 @rudongyu @xiangyuzhi @yxy235

v2.0.0

12 Jan 03:51

We're thrilled to announce the release of DGL 2.0.0, a major milestone in our mission to empower developers with cutting-edge tools for Graph Neural Networks (GNNs). 🎉🎉🎉

New Package: dgl.graphbolt

In this release, we introduce a brand new package: dgl.graphbolt, a revolutionary data loading framework that supercharges GNN training and inference by streamlining the data pipeline. Please refer to the documentation page for GraphBolt's overview and end-to-end notebooks. More end-to-end examples are available in the GitHub code base.

New Additions

  • A hetero-relational GCN example (#6157)
  • Add node explanation for the heterogeneous PGExplainer implementation (#6050)
  • Add peptides structural dataset in LRGB (#6337)
  • Add peptides functional dataset in LRGB (#6363)
  • Add VOCSuperpixels dataset in LRGB (#6389)
  • Add compact operator (#6352)
  • Add COCOsuperpixel dataset (#6407)
  • Add a graphSAGE example (#6481)
  • Add CIFAR10 MNIST dataset in benchmark-gnn (#6543)
  • Add ogc method (#6437)
  • Add a LADIES example (#6560)
  • Adjusted homophily and label informativeness (#6516)

System/Examples/Documentation Enhancements

  • Update README about DGL container access from NGC (#6133)
  • Use tcmalloc in the CPU Docker image (#5969)
  • Use scipy's eigs instead of numpy in lap_pe (#5855)
  • Add CMake changes from conda-forge build (#6189)
  • Upgrade googletest to v1.14.0 (#6273)
  • Fix typo in link prediction with sampling example (#6268)
  • Add sparse matrix slicing operator implementation (#6208)
  • Use torchrun instead of torch.distributed.launch (#6304)
  • Sparse sample implementation (#6303)
  • Add relabel python API (#6323)
  • Compact C++ API (#6334)
  • Fix compile warning (#6342)
  • Update Labor sampler docs, add NeurIPS acceptance (#6369)
  • Update docstring of LRGB (#6430)
  • Do not fuse neighbor sampler for 1 thread (#6421)
  • Fix graph_transformer example (#6471)
  • Add a --num_workers input parameter to the EEG_GCNN example (#6467)
  • Update doc network_emb.py (#6559)
  • Protect temporary changes from persisting if an error occurs during the yield block (#6506)
  • Provide options for bidirectional edge (#6566)
  • Improve the MLP example (#6593)
  • Improve the JKNET example (#6596)
  • Avoid calling IsPinned in the coo/csr constructor from every sampling process (#6568)
  • Add tutorial documentation for the graph transformer (#6889, #6949)
  • Refactor SpatialEncoder3d (#5894)

Bug Fixes

  • Fix cusparseCreateCsr format for cuda12 (#6121)
  • Fix a bug in standalone mode (#6179)
  • Fix extract_archive default parameter (#6333)
  • Fix device check (#6409)
  • Return batch-related IDs in g.idtype (#6578)
  • Fix typo in ShaDowKHopSampler (#6587)
  • Fix issue about integer overflow (#6586)
  • Fix the lazy device copy issue of DGL node/edge features (#6564)
  • Rename num_labels to num_classes in dataset files (#6666)
  • Fix Graphormer as key in state_dict has changed (#6806)
  • Fix distributed partition issue (#6847)

Note

Windows packages are not yet available; they will be ready soon.

Acknowledgement

DGL 2.0.0 has been achieved through the dedicated efforts of the DGL team and the invaluable contributions of our external collaborators.

@9rum @AndreaPrati98 @BarclayII @HernandoR @OlegPlatonov @RamonZhou @Rhett-Ying @SinuoXu @Skeleton003 @TristonC @anko-intel @ayushnoori @caojy1998 @chang-l @czkkkkkk @daniil-sizov @drivanov @frozenbugs @hmacdope @isratnisa @jermainewang @keli-wen @mfbalin @ndbaker1 @paoxiaode @peizhou001 @rudongyu @songqing @willarliss @xiangyuzhi @yaox12 @yxy235 @zheng-da

Your collective efforts have been key to the success of this release. We deeply appreciate every contribution, large and small, as they collectively shape and improve DGL. Thank you all for your dedication and hard work!

v1.1.3

11 Dec 09:43

Major changes

  • Added support for PyTorch 2.1.0 and 2.1.1 (except Windows); the supported versions are 1.13.0, 1.13.1, 2.0.0, 2.0.1, 2.1.0, 2.1.1.
  • Added support for CUDA 12.1; the supported versions are 11.6, 11.7, 11.8, 12.1.
  • Windows support for PyTorch 2.1.0 and 2.1.1 is blocked by a compilation issue and will be added as soon as the issue is resolved.

v1.1.2

15 Aug 07:31

Major changes

  • PyTorch 1.12.0 and 1.12.1 are deprecated; the supported versions are 1.13.0, 1.13.1, 2.0.0, 2.0.1.
  • CUDA 10.2 and 11.3 are deprecated; the supported versions are 11.6, 11.7, 11.8.
  • The C++ standard used in the build is upgraded to C++17.
  • Several performance improvements such as #5885, #5924 and so on.
  • Multiple examples are updated for better readability such as #6035, #6036 and so on.
  • A few bug fixes such as #6044, #6001 and so on.

v1.1.1

27 Jun 03:26

What's new

  • Add support for PyTorch 2.0.1.
  • Fix several bugs such as #5872 for DistDGL, #5754 for dgl.khop_adj() and so on.
  • Remove several unused third-party libraries such as xbyak, tvm.
  • A few performance improvements such as #5508, #5685.

v1.1.0

05 May 08:50

What's new

  • Sparse API improvement
  • Datasets for evaluating graph transformers and graph learning under heterophily
  • Modules and utilities, including Cugraph convolution modules and SubgraphX
  • Graph transformer deprecation
  • Performance improvement
  • Extended BF16 data type to support 4th Generation Intel® Xeon® Scalable Processors (#5497)

Detailed breakdown

Sparse API improvement (@czkkkkkk)

SparseMatrix class

  • Merge DiagMatrix class into SparseMatrix class, where the diagonal matrix is stored as a sparse matrix and inherits all the operators from sparse matrix. (#5367)
  • Support converting DGLGraph to SparseMatrix: g.adj(etype=None, eweight_name=None) returns the sparse matrix representation of the DGL graph g for the edge type etype and edge weight eweight_name. (#5372)
  • Enable zero-overhead conversion between PyTorch sparse tensors and SparseMatrix via dgl.sparse.to_torch_sparse_coo/csr/csc and dgl.sparse.from_torch_sparse. (#5373)

SparseMatrix operators

  • Support element-wise multiplication on two sparse matrices with different sparsities, e.g., A * B. (#5368)
  • Support element-wise division on two sparse matrices with the same sparsity, e.g., A / B. (#5369)
  • Support broadcast operators on a sparse matrix and a 1-D tensor via dgl.sparse.broadcast_add/sub/mul/div. (#5370)
  • Support column-wise softmax. (#5371)
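
A minimal sketch of the operators and conversions above, assuming this release's dgl.sparse (the indices and values are illustrative):

    import torch
    import dgl.sparse as dglsp

    # Two 3x3 matrices with different sparsity patterns.
    A = dglsp.spmatrix(torch.tensor([[0, 1], [1, 2]]),
                       torch.tensor([1.0, 2.0]), shape=(3, 3))
    B = dglsp.spmatrix(torch.tensor([[0, 1], [1, 0]]),
                       torch.tensor([3.0, 4.0]), shape=(3, 3))

    C = A * B                           # element-wise mul; sparsities may differ (#5368)
    coo = dglsp.to_torch_sparse_coo(A)  # zero-overhead conversion (#5373)
    A2 = dglsp.from_torch_sparse(coo)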

SparseMatrix examples

  • Example for Heterogeneous Graph Attention Networks (#5568, @mufeili)

Deprecation (#5100, @rudongyu)

  • laplacian_pe is deprecated and replaced by lap_pe
  • LaplacianPE is deprecated and replaced by LapPE
  • LaplacianPosEnc is deprecated and replaced by LapPosEncoder
  • BiasedMultiheadAttention is deprecated and replaced by BiasedMHA

Performance improvement

Speed up the CPU to_block function in graph sampling (#5305, @peizhou001).

  • Add a concurrent hash map to speed up the id mapping process by leveraging multi-thread capability (#5241, #5304).
  • Accelerate the expensive to_block by using the new hash map, improving performance by ~2.5x on average, and more when the batch size is large.

Breaking changes

  • Since the new .adj() function of DGLGraph produces a SparseMatrix, the original .adj(self, transpose=False, ctx=F.cpu(), scipy_fmt=None, etype=None) is renamed to .adj_external, which returns the sparse format from external packages such as SciPy and PyTorch. (#5372)
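
A minimal sketch of the change on a homogeneous graph (the edges are illustrative):

    import torch
    import dgl

    g = dgl.graph((torch.tensor([0, 1, 1]), torch.tensor([1, 2, 0])))
    A = g.adj()                           # now returns a dgl.sparse.SparseMatrix
    sp = g.adj_external(scipy_fmt="coo")  # previous behavior: external SciPy format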

v1.0.2

31 Mar 09:24

What's new

  • Added support for CUDA 11.8. Please install with
    pip install dgl -f https://data.dgl.ai/wheels/cu118/repo.html
    conda install -c dglteam/label/cu118 dgl
    
  • Added support for Python 3.11
  • Added support for PyTorch 2.0