Commit 39b7cb8

carmocca and awaelchli authored
Remove the FairScale integration (#16400)
Co-authored-by: Adrian Wälchli <[email protected]>
1 parent 9346151 commit 39b7cb8

31 files changed, +44 -1585 lines changed

docs/source-pytorch/api_references.rst

Lines changed: 0 additions & 5 deletions
@@ -176,13 +176,11 @@ precision
     ColossalAIPrecisionPlugin
     DeepSpeedPrecisionPlugin
     DoublePrecisionPlugin
-    FullyShardedNativeMixedPrecisionPlugin
     FullyShardedNativeNativeMixedPrecisionPlugin
     HPUPrecisionPlugin
     IPUPrecisionPlugin
     MixedPrecisionPlugin
     PrecisionPlugin
-    ShardedNativeMixedPrecisionPlugin
     TPUBf16PrecisionPlugin
     TPUPrecisionPlugin

@@ -276,9 +274,6 @@ strategies
     BaguaStrategy
     ColossalAIStrategy
     DDPFullyShardedNativeStrategy
-    DDPFullyShardedStrategy
-    DDPShardedStrategy
-    DDPSpawnShardedStrategy
     DDPSpawnStrategy
     DDPStrategy
     DataParallelStrategy

docs/source-pytorch/conf.py

Lines changed: 0 additions & 1 deletion
@@ -294,7 +294,6 @@ def _transform_changelog(path_in: str, path_out: str) -> None:
     "numpy": ("https://numpy.org/doc/stable/", None),
     "PIL": ("https://pillow.readthedocs.io/en/stable/", None),
     "torchmetrics": ("https://torchmetrics.readthedocs.io/en/stable/", None),
-    "fairscale": ("https://fairscale.readthedocs.io/en/latest/", None),
     "graphcore": ("https://docs.graphcore.ai/en/latest/", None),
 }

docs/source-pytorch/extensions/plugins.rst

Lines changed: 0 additions & 2 deletions
@@ -55,13 +55,11 @@ The full list of built-in precision plugins is listed below.
     ColossalAIPrecisionPlugin
     DeepSpeedPrecisionPlugin
     DoublePrecisionPlugin
-    FullyShardedNativeMixedPrecisionPlugin
     FullyShardedNativeNativeMixedPrecisionPlugin
     HPUPrecisionPlugin
     IPUPrecisionPlugin
     MixedPrecisionPlugin
     PrecisionPlugin
-    ShardedNativeMixedPrecisionPlugin
     TPUBf16PrecisionPlugin
     TPUPrecisionPlugin

docs/source-pytorch/guides/speed.rst

Lines changed: 1 addition & 1 deletion
@@ -28,7 +28,7 @@ GPU Training
 Lightning supports a variety of plugins to speed up distributed GPU training. Most notably:

 * :class:`~pytorch_lightning.strategies.DDPStrategy`
-* :class:`~pytorch_lightning.strategies.DDPShardedStrategy`
+* :class:`~pytorch_lightning.strategies.DDPFullyShardedNativeStrategy`
 * :class:`~pytorch_lightning.strategies.DeepSpeedStrategy`

 .. code-block:: python
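
Not part of the commit, but as a minimal usage sketch of the strategy that now takes DDPShardedStrategy's place in this list (assuming the pytorch_lightning version at this commit and a multi-GPU machine):

from pytorch_lightning import Trainer
from pytorch_lightning.strategies import DDPFullyShardedNativeStrategy

# Shard parameters, gradients, and optimizer state with PyTorch's native FSDP.
trainer = Trainer(accelerator="gpu", devices=4, strategy=DDPFullyShardedNativeStrategy())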
Lines changed: 0 additions & 1 deletion
@@ -1,4 +1,3 @@
 if __name__ == "__main__":
     import bagua  # noqa: F401
     import deepspeed  # noqa: F401
-    import fairscale  # noqa: F401

requirements/pytorch/strategies.txt

Lines changed: 0 additions & 1 deletion
@@ -2,5 +2,4 @@
 # in case you want to preserve/enforce restrictions on the latest compatible version, add "strict" as an in-line comment

 # colossalai>=0.1.10  # TODO: uncomment when there's a stable version released
-fairscale>=0.4.5, <0.4.13
 deepspeed>=0.6.0, <=0.7.0

src/lightning_app/components/multi_node/trainer.py

Lines changed: 0 additions & 1 deletion
@@ -40,7 +40,6 @@ def run(
     try:
         pkg = importlib.import_module(pkg_name)
         trainers.append(pkg.Trainer)
-        strategies.append(pkg.strategies.DDPSpawnShardedStrategy)
         strategies.append(pkg.strategies.DDPSpawnStrategy)
         mps_accelerators.append(pkg.accelerators.MPSAccelerator)
     except (ImportError, ModuleNotFoundError):

src/pytorch_lightning/CHANGELOG.md

Lines changed: 8 additions & 0 deletions
@@ -44,6 +44,14 @@ The format is based on [Keep a Changelog](http://keepachangelog.com/en/1.0.0/).

 - Removed `Trainer(strategy='horovod')` support ([#16150](https://github.com/Lightning-AI/lightning/pull/16150))

+- `FairScale` removal (in favor of PyTorch's FSDP implementation) ([#16400](https://github.com/PyTorchLightning/pytorch-lightning/pull/16400))
+  * Removed the `pytorch_lightning.overrides.fairscale.LightningShardedDataParallel` class
+  * Removed the `pytorch_lightning.plugins.precision.fully_sharded_native_amp.FullyShardedNativeMixedPrecisionPlugin` class
+  * Removed the `pytorch_lightning.plugins.precision.sharded_native_amp.ShardedNativeMixedPrecisionPlugin` class
+  * Removed the `pytorch_lightning.strategies.fully_sharded.DDPFullyShardedStrategy` (fsdp) class
+  * Removed the `pytorch_lightning.strategies.sharded.DDPShardedStrategy` (ddp_sharded) class
+  * Removed the `pytorch_lightning.strategies.sharded_spawn.DDPSpawnShardedStrategy` (ddp_sharded_spawn) class
+
 - Removed legacy device arguments in Trainer ([#16171](https://github.com/Lightning-AI/lightning/pull/16171))
   * Removed the `Trainer(gpus=...)` argument
   * Removed the `Trainer(tpu_cores=...)` argument
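
A hedged migration sketch, not from the commit itself: the removed FairScale class names map onto the PyTorch-native counterparts that this release keeps, so affected imports would change roughly as follows (assuming the public exports listed in the docs diffs above):

# Imports removed by this commit (would now raise ImportError):
#     from pytorch_lightning.strategies import DDPShardedStrategy, DDPSpawnShardedStrategy, DDPFullyShardedStrategy
#     from pytorch_lightning.plugins.precision import ShardedNativeMixedPrecisionPlugin, FullyShardedNativeMixedPrecisionPlugin
# PyTorch-native counterparts that remain available in this release:
from pytorch_lightning.strategies import DDPFullyShardedNativeStrategy
from pytorch_lightning.plugins.precision import FullyShardedNativeNativeMixedPrecisionPlugin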

src/pytorch_lightning/callbacks/stochastic_weight_avg.py

Lines changed: 2 additions & 2 deletions
@@ -25,7 +25,7 @@
 import pytorch_lightning as pl
 from lightning_fabric.utilities.types import LRScheduler
 from pytorch_lightning.callbacks.callback import Callback
-from pytorch_lightning.strategies import DDPFullyShardedStrategy, DeepSpeedStrategy
+from pytorch_lightning.strategies import DeepSpeedStrategy
 from pytorch_lightning.strategies.fully_sharded_native import DDPFullyShardedNativeStrategy
 from pytorch_lightning.utilities.exceptions import MisconfigurationException
 from pytorch_lightning.utilities.rank_zero import rank_zero_info, rank_zero_warn

@@ -146,7 +146,7 @@ def pl_module_contains_batch_norm(pl_module: "pl.LightningModule") -> bool:
         return any(isinstance(module, nn.modules.batchnorm._BatchNorm) for module in pl_module.modules())

     def setup(self, trainer: "pl.Trainer", pl_module: "pl.LightningModule", stage: str) -> None:
-        if isinstance(trainer.strategy, (DDPFullyShardedStrategy, DDPFullyShardedNativeStrategy, DeepSpeedStrategy)):
+        if isinstance(trainer.strategy, (DDPFullyShardedNativeStrategy, DeepSpeedStrategy)):
             raise MisconfigurationException("SWA does not currently support sharded models.")

         # copy the model before moving it to accelerator device.
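
As an assumed usage sketch (not from the commit), the check above means the SWA callback still cannot be combined with the remaining sharded strategies:

from pytorch_lightning import Trainer
from pytorch_lightning.callbacks import StochasticWeightAveraging
from pytorch_lightning.strategies import DDPFullyShardedNativeStrategy

trainer = Trainer(
    accelerator="gpu",
    devices=2,
    strategy=DDPFullyShardedNativeStrategy(),
    callbacks=[StochasticWeightAveraging(swa_lrs=1e-2)],
)
# trainer.fit(model) would raise MisconfigurationException in SWA's setup():
# "SWA does not currently support sharded models."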

src/pytorch_lightning/overrides/fairscale.py

Lines changed: 0 additions & 42 deletions
This file was deleted.
