
Conversation


@awaelchli awaelchli commented Jun 21, 2022

What does this PR do?

Fixes #13079, #10535

Before:

# Picks the first two available GPUs
Trainer(devices=2, auto_select_gpus=True)

Now:

# Picks the first two available GPUs
Trainer(devices=2)

Does your PR introduce any breaking changes? If yes, please list them.

Yes. The auto_select_gpus argument in Trainer will be removed in v1.9. Users of auto_select_gpus can simply drop the flag: auto-selection is now the default behavior.
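The behavior being made the default, "pick the first N available GPUs", can be illustrated with a rough sketch. This is not Lightning's actual implementation (the real pick_multiple_gpus probes each CUDA device, e.g. by allocating a small tensor on it); gpu_is_free below is a hypothetical stand-in for that probe.

```python
# Sketch only: illustrates "pick the first N available GPUs" selection.
# `gpu_is_free` is a hypothetical stand-in for a real per-device probe.

from typing import Callable, List


def pick_multiple_gpus_sketch(
    n: int,
    num_gpus: int,
    gpu_is_free: Callable[[int], bool],
) -> List[int]:
    """Return the indices of the first `n` free GPUs, scanning 0..num_gpus-1."""
    picked: List[int] = []
    for index in range(num_gpus):
        if gpu_is_free(index):
            picked.append(index)
        if len(picked) == n:
            return picked
    raise RuntimeError(f"Could not find {n} free GPUs among {num_gpus} devices")


# Example: on a 4-GPU machine where GPU 1 is busy, devices=2 would
# resolve to GPUs 0 and 2.
busy = {1}
print(pick_multiple_gpus_sketch(2, 4, lambda i: i not in busy))  # [0, 2]
```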

Before submitting

  • Was this discussed/approved via a GitHub issue? (not for typos and docs)
  • Did you read the contributor guideline, Pull Request section?
  • Did you make sure your PR does only one thing, instead of bundling different changes together?
  • Did you make sure to update the documentation with your changes? (if necessary)
  • Did you write any new necessary tests? (not for typos and docs)
  • Did you verify new and existing tests pass locally with your changes?
  • Did you list all the breaking changes introduced by this pull request?
  • Did you update the CHANGELOG? (not for typos, docs, test updates, or minor internal changes/refactors)

PR review

Anyone in the community is welcome to review the PR.
Before you start reviewing, make sure you have read the review guidelines. In short, see the following bullet-list:

  • Is this pull request ready for review? (if not, please submit in draft mode)
  • Check that all items from Before submitting are resolved
  • Make sure the title is self-explanatory and the description concisely explains the PR
  • Add labels and milestones (and optionally projects) to the PR so it can be classified

Did you have fun?

Make sure you had fun coding 🙃

@@ -517,7 +515,7 @@ def _set_parallel_devices_and_init_accelerator(self) -> None:
self._gpus = self._devices_flag if not self._gpus else self._gpus
self._tpu_cores = self._devices_flag if not self._tpu_cores else self._tpu_cores

self._set_devices_flag_if_auto_select_gpus_passed()
self._set_devices_flag_if_count_passed()

@awaelchli awaelchli Jun 21, 2022

We could merge this with Accelerator.parse_devices (see line below). I.e., we would implement GPUAccelerator.parse_devices by calling the pick_multiple_gpus function there.

Any thoughts/objections on this?
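The suggested merge can be sketched as follows. Class and method names mirror the discussion above, but the bodies are stand-ins, not Lightning's real code; in particular, the stub pick_multiple_gpus here just returns the first n indices, whereas the real function checks device availability.

```python
# Hypothetical sketch of merging auto-selection into
# GPUAccelerator.parse_devices, as suggested in the review comment.
# All bodies are stand-ins for illustration only.

from typing import List, Optional, Union


def pick_multiple_gpus(n: int) -> List[int]:
    # Stand-in: the real function probes each CUDA device for availability.
    return list(range(n))


class GPUAcceleratorSketch:
    @staticmethod
    def parse_devices(devices: Union[int, List[int]]) -> Optional[List[int]]:
        """Resolve a requested device count into concrete GPU indices."""
        if isinstance(devices, int):
            # devices=2 -> automatically pick the first two available GPUs
            return pick_multiple_gpus(devices)
        return devices  # an explicit list of indices is used as-is


print(GPUAcceleratorSketch.parse_devices(2))       # [0, 1] with this stub
print(GPUAcceleratorSketch.parse_devices([1, 3]))  # [1, 3]
```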

@@ -49,20 +49,6 @@ def determine_root_gpu_device(gpus: List[_DEVICE]) -> Optional[_DEVICE]:
return root_gpu


def _parse_devices(

@awaelchli awaelchli Jun 21, 2022

removed dead code (unrelated to this PR)

@awaelchli awaelchli added the deprecation, accelerator: cuda, and trainer: argument labels Jun 21, 2022

stale bot commented Jul 10, 2022

This pull request has been automatically marked as stale because it has not had recent activity. It will be closed in 7 days if no further activity occurs. If you need further help see our docs: https://pytorch-lightning.readthedocs.io/en/latest/generated/CONTRIBUTING.html#pull-request or ask the assistance of a core contributor here or on Slack. Thank you for your contributions.

@stale stale bot added the won't fix label Jul 10, 2022
@awaelchli awaelchli added this to the future milestone Jul 11, 2022
@stale stale bot removed the won't fix label Jul 11, 2022
@github-actions github-actions bot added the pl label Oct 29, 2022

carmocca commented Nov 8, 2022

Is this blocked or something?

@awaelchli

This just needs more work. It requires looking at the tests, etc. We need to discuss its priority and then add it to a sprint.

@awaelchli awaelchli closed this Dec 20, 2022
@awaelchli awaelchli deleted the refactor/deprecate-auto-select-gpus branch December 20, 2022 23:08
@carmocca carmocca removed this from the future milestone Dec 22, 2022
@carmocca carmocca modified the milestone: v1.9 Dec 22, 2022
Labels
accelerator: cuda, deprecation, pl, trainer: argument
Projects
None yet
Development

Successfully merging this pull request may close these issues.

RFC: Deprecate auto_select_gpus Trainer argument
2 participants