PyTorch Profiler leaks memory #11480

@EricWiener

Description

🐛 Bug

When using `profiler="pytorch"`, memory usage (as measured by `vm_percent`) keeps increasing until the process runs out of memory.
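For context, a sketch of the configuration that triggers this, based on the standard Lightning 1.5 `Trainer` API; the model and datamodule names are placeholders, not from the report:

```python
import pytorch_lightning as pl

# Enabling the built-in PyTorch profiler via the Trainer flag.
# Host memory reportedly grows steadily while this is active.
trainer = pl.Trainer(
    gpus=1,
    max_epochs=10,
    profiler="pytorch",
)
# trainer.fit(MyModel(), datamodule=MyDataModule())  # placeholders
```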

To Reproduce

No code yet, but I will try to put together a minimal example. Filing this now to make the issue public.
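Since the leak shows up as steady growth in host memory, here is a stdlib-only sketch for catching it from inside a training loop; the `LeakDetector` name and threshold are mine, not part of Lightning or the report:

```python
import resource


def rss_mb():
    # Peak resident set size of this process in MB.
    # On Linux, ru_maxrss is reported in kilobytes.
    return resource.getrusage(resource.RUSAGE_SELF).ru_maxrss / 1024


class LeakDetector:
    """Record memory samples across training steps and flag steady growth."""

    def __init__(self, threshold_mb=50.0):
        self.samples = []
        self.threshold_mb = threshold_mb

    def sample(self):
        # Call once per training step (e.g. in on_train_batch_end).
        self.samples.append(rss_mb())

    def leaking(self):
        # Flag a leak if memory grew past the threshold since the first sample.
        if len(self.samples) < 2:
            return False
        return self.samples[-1] - self.samples[0] > self.threshold_mb
```

Sampling once per step and comparing against the first sample avoids false positives from one-off allocator spikes, at the cost of only detecting cumulative growth.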

Expected behavior

The profiler doesn't leak memory.

Environment

  • PyTorch Lightning Version (e.g., 1.5.0): 1.5.8
  • PyTorch Version (e.g., 1.10): 1.10.1
  • Python version (e.g., 3.9): 3.6
  • OS (e.g., Linux): Linux
  • CUDA/cuDNN version: 11.2
  • GPU models and configuration: GeForce RTX 2080 Ti
  • How you installed PyTorch (conda, pip, source): pip

Additional context

N/A

cc @carmocca @kaushikb11 @ninginthecloud @rohitgr7

Metadata

Assignees

No one assigned

    Labels

    bug (Something isn't working), profiler, won't fix (This will not be worked on)
