Commit af00fcf

Deprecate torch_geometric.distributed (#10411)

Authored by puririshi98, pre-commit-ci[bot], and akihironitta
Co-authored-by: pre-commit-ci[bot] <66853113+pre-commit-ci[bot]@users.noreply.github.com>
Co-authored-by: Akihiro Nitta <[email protected]>

1 parent f4bca53 · commit af00fcf

File tree

13 files changed: +43 -62 lines changed

.github/workflows/testing_dist.yml

Lines changed: 0 additions & 56 deletions
This file was deleted.

CHANGELOG.md

Lines changed: 2 additions & 0 deletions
@@ -87,6 +87,8 @@ The format is based on [Keep a Changelog](http://keepachangelog.com/en/1.0.0/).
 
 ### Deprecated
 
+- Deprecated `torch_geometric.distributed` ([#10411](https://github.com/pyg-team/pytorch_geometric/pull/10411))
+
 ### Fixed
 
 - Fixed conversion to/from `cuGraph` graph objects by ensuring `cudf` column names are correctly specified ([#10343](https://github.com/pyg-team/pytorch_geometric/pull/10343))

docs/requirements.txt

Lines changed: 1 addition & 1 deletion
@@ -1,3 +1,3 @@
-https://download.pytorch.org/whl/cpu/torch-1.13.0%2Bcpu-cp39-cp39-linux_x86_64.whl
+https://download.pytorch.org/whl/cpu/torch-2.8.0%2Bcpu-cp310-cp310-manylinux_2_28_x86_64.whl
 numpy>=1.19.5
 git+https://github.com/pyg-team/pyg_sphinx_theme.git

docs/source/modules/distributed.rst

Lines changed: 6 additions & 0 deletions
@@ -1,6 +1,12 @@
 torch_geometric.distributed
 ===========================
 
+.. warning::
+   ``torch_geometric.distributed`` has been deprecated since 2.7.0 and will
+   no longer be maintained. For distributed training, refer to :ref:`our
+   tutorials on distributed training <distributed_tutorials>` or `cuGraph
+   examples <https://github.com/rapidsai/cugraph-gnn/tree/main/python/cugraph-pyg/cugraph_pyg/examples>`_.
+
 .. currentmodule:: torch_geometric.distributed
 
 .. autosummary::
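The hunk above covers only the documentation; the runtime side of a deprecation like this is typically a module-level warning. As a hypothetical sketch (the corresponding Python change is not shown in this commit excerpt), the package could emit:

import warnings

# Hypothetical sketch, not taken from this commit: warn when the deprecated
# package is imported, mirroring the docs notice above.
warnings.warn(
    "'torch_geometric.distributed' is deprecated since 2.7.0 and will no "
    "longer be maintained; see the distributed training tutorials or the "
    "cuGraph examples instead.",
    DeprecationWarning,
    stacklevel=2,
)
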
Lines changed: 3 additions & 1 deletion

@@ -1,9 +1,11 @@
+.. _distributed_tutorials:
+
 Distributed Training
 ====================
 
 .. nbgallery::
    :name: rst-gallery
 
-   distributed_pyg
    multi_gpu_vanilla
    multi_node_multi_gpu_vanilla
+   distributed_pyg

docs/source/tutorial/distributed_pyg.rst

Lines changed: 4 additions & 0 deletions
@@ -1,6 +1,10 @@
 Distributed Training in PyG
 ===========================
 
+.. warning::
+   ``torch_geometric.distributed`` has been deprecated and will no longer be maintained.
+   For distributed training with cuGraph, refer to `cuGraph examples <https://github.com/rapidsai/cugraph-gnn/tree/main/python/cugraph-pyg/cugraph_pyg/examples>`_.
+
 .. figure:: ../_figures/intel_kumo.png
    :width: 400px
 

docs/source/tutorial/multi_gpu_vanilla.rst

Lines changed: 4 additions & 0 deletions
@@ -1,6 +1,10 @@
 Multi-GPU Training in Pure PyTorch
 ==================================
 
+.. note::
+   For multi-GPU training with cuGraph, refer to `cuGraph examples <https://github.com/rapidsai/cugraph-gnn/tree/main/python/cugraph-pyg/cugraph_pyg/examples>`_.
+
+
 For many large scale, real-world datasets, it may be necessary to scale-up training across multiple GPUs.
 This tutorial goes over how to set up a multi-GPU training pipeline in :pyg:`PyG` with :pytorch:`PyTorch` via :class:`torch.nn.parallel.DistributedDataParallel`, without the need for any other third-party libraries (such as :lightning:`PyTorch Lightning`).
 Note that this approach is based on data-parallelism.
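The tutorial introduced above builds on :class:`torch.nn.parallel.DistributedDataParallel`. A minimal sketch of that setup, with a placeholder linear model and random data standing in for the tutorial's actual GNN and dataset:

import os

import torch
import torch.distributed as dist
import torch.multiprocessing as mp
from torch.nn.parallel import DistributedDataParallel as DDP


def run(rank: int, world_size: int):
    # One process per GPU; NCCL is the usual backend for GPU training.
    dist.init_process_group('nccl', rank=rank, world_size=world_size)
    torch.cuda.set_device(rank)

    model = torch.nn.Linear(16, 2).to(rank)  # placeholder for a GNN
    model = DDP(model, device_ids=[rank])    # gradients are all-reduced across ranks
    optimizer = torch.optim.Adam(model.parameters(), lr=0.01)

    x = torch.randn(8, 16, device=rank)      # placeholder mini-batch
    loss = model(x).sum()
    loss.backward()
    optimizer.step()

    dist.destroy_process_group()


if __name__ == '__main__':
    os.environ.setdefault('MASTER_ADDR', 'localhost')
    os.environ.setdefault('MASTER_PORT', '12355')
    world_size = torch.cuda.device_count()
    mp.spawn(run, args=(world_size,), nprocs=world_size)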

docs/source/tutorial/multi_node_multi_gpu_vanilla.rst

Lines changed: 4 additions & 0 deletions
@@ -1,6 +1,10 @@
 Multi-Node Training using SLURM
 ===============================
 
+.. note::
+   For multi-GPU training with cuGraph, refer to `cuGraph examples <https://github.com/rapidsai/cugraph-gnn/tree/main/python/cugraph-pyg/cugraph_pyg/examples>`_.
+
+
 This tutorial introduces a skeleton on how to perform distributed training on multiple GPUs over multiple nodes using the `SLURM workload manager <https://slurm.schedmd.com/>`_ available at many supercomputing centers.
 The code is based on `our tutorial on single-node multi-GPU training <multi_gpu_vanilla.html>`_.
 Please go there first to understand the basics if you are unfamiliar with the concepts of distributed training in :pytorch:`PyTorch`.
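On a SLURM cluster, each training process is launched by `srun` rather than spawned locally, so rank and world size come from SLURM's environment. A rough sketch of that wiring (assuming `MASTER_ADDR` and `MASTER_PORT` are exported by the job script; this is not the tutorial's exact code):

import os

import torch
import torch.distributed as dist

# SLURM sets these for every task launched via `srun`:
rank = int(os.environ['SLURM_PROCID'])         # global rank across all nodes
world_size = int(os.environ['SLURM_NTASKS'])   # total number of tasks
local_rank = int(os.environ['SLURM_LOCALID'])  # rank within this node

# MASTER_ADDR/MASTER_PORT are assumed to be exported by the sbatch script,
# e.g. derived from `scontrol show hostnames`.
dist.init_process_group('nccl', rank=rank, world_size=world_size)
torch.cuda.set_device(local_rank)

# ... build the model, wrap it in DistributedDataParallel, and train as in
# the single-node tutorial ...

dist.destroy_process_group()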

examples/distributed/README.md

Lines changed: 1 addition & 1 deletion
@@ -3,6 +3,6 @@
 This directory contains examples for distributed graph learning.
 The examples are organized into two subdirectories:
 
-1. [`pyg`](./pyg): Distributed training via PyG's own `torch_geometric.distributed` package.
+1. [`pyg`](./pyg): Distributed training via PyG's own `torch_geometric.distributed` package (deprecated).
 1. [`graphlearn_for_pytorch`](./graphlearn_for_pytorch): Distributed training via the external [GraphLearn-for-PyTorch (GLT)](https://github.com/alibaba/graphlearn-for-pytorch) package.
 1. [`kuzu`](./kuzu): Remote backend via the [Kùzu](https://kuzudb.com/) graph database.

examples/distributed/pyg/README.md

Lines changed: 1 addition & 1 deletion
@@ -1,6 +1,6 @@
 # Distributed Training with PyG
 
-**[`torch_geometric.distributed`](https://github.com/pyg-team/pytorch_geometric/tree/master/torch_geometric/distributed)** implements a scalable solution for distributed GNN training, built exclusively upon PyTorch and PyG.
+**[`torch_geometric.distributed`](https://github.com/pyg-team/pytorch_geometric/tree/master/torch_geometric/distributed)** (deprecated) implements a scalable solution for distributed GNN training, built exclusively upon PyTorch and PyG.
 
 Current application can be deployed on a cluster of arbitrary size using multiple CPUs.
 PyG native GPU application is under development and will be released soon.
