Status: Closed
Labels: bug · profiler · won't fix
🐛 Bug
When using `profiler="pytorch"`, memory usage (as measured by `vm_percent`) keeps increasing until the process runs out of memory.
To Reproduce
No reproduction code yet; I will try to put together a minimal example. Opening this issue now to make the problem public.
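Since the report relies on watching process memory grow, a stdlib-only helper can make the growth measurable per epoch. This is an illustrative sketch, not code from the issue: `rss_mb` and the `LeakCheck` callback name are hypothetical, and the commented Lightning usage assumes a trainer created with `Trainer(profiler="pytorch")`.

```python
import resource
import sys

def rss_mb() -> float:
    """Return this process's peak resident set size in MiB."""
    ru_maxrss = resource.getrusage(resource.RUSAGE_SELF).ru_maxrss
    # ru_maxrss is reported in KiB on Linux but in bytes on macOS.
    if sys.platform == "darwin":
        return ru_maxrss / (1024 * 1024)
    return ru_maxrss / 1024

# Hypothetical usage inside a Lightning run (names are illustrative):
#
# class LeakCheck(pl.Callback):
#     def on_train_epoch_end(self, trainer, pl_module):
#         # With profiler="pytorch", this number should stay roughly flat;
#         # a steady climb across epochs indicates the leak described above.
#         print(f"epoch {trainer.current_epoch}: RSS {rss_mb():.1f} MiB")
```

Printing this once per epoch is enough to distinguish a genuine leak (monotonic growth) from normal allocator warm-up (growth that plateaus after the first few epochs).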
Expected behavior
The profiler doesn't leak memory.
Environment
- PyTorch Lightning Version (e.g., 1.5.0): 1.5.8
- PyTorch Version (e.g., 1.10): 1.10.1
- Python version (e.g., 3.9): 3.6
- OS (e.g., Linux): Linux
- CUDA/cuDNN version: 11.2
- GPU models and configuration: GeForce RTX 2080 Ti
- How you installed PyTorch (`conda`, `pip`, source): pip
- If compiling from source, the output of `torch.__config__.show()`:
- Any other relevant information:
Additional context
N/A