Describe the bug
When running validate.py or inference.py, the script fails with a "CUDA out of memory" error. I have resolved the error, but I don't know if there's a problem with my solution.
To Reproduce
Steps to reproduce the behavior:
- set device=cuda:4 and no-prefetcher=False
- make sure cuda:0 has no free memory left
- run validate.py or inference.py
Additional context
CAUSE
At this line https://github.com/huggingface/pytorch-image-models/blob/main/timm/data/loader.py#L126, torch.cuda.Stream() is called. By default the stream is created on the current device (GPU 0), but my GPU 0 memory is full, so it raises the error.
SOLUTION
Wrap the call in the target device context: with torch.cuda.device(self.device):.
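A minimal sketch of the fix, assuming the surrounding code looks roughly like the PrefetchLoader in timm/data/loader.py (names simplified here):

```python
import torch

class PrefetchLoader:
    """Simplified excerpt for illustration; the real class lives in
    timm/data/loader.py."""

    def __init__(self, loader, device):
        self.loader = loader
        self.device = device  # e.g. torch.device('cuda:4')

    def __iter__(self):
        # The fix: select the target device before creating the stream,
        # so its CUDA context is initialized on self.device rather than
        # on the default cuda:0.
        with torch.cuda.device(self.device):
            stream = torch.cuda.Stream()

        for batch in self.loader:
            with torch.cuda.stream(stream):
                next_batch = batch.to(self.device, non_blocking=True)
            # Make the default stream wait for the copy to finish
            # before the batch is consumed.
            torch.cuda.current_stream(self.device).wait_stream(stream)
            yield next_batch
```

If I read the PyTorch API correctly, passing the device directly with torch.cuda.Stream(device=self.device) should be an equivalent one-line fix.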
OTHER
The same problem exists here: https://github.com/huggingface/pytorch-image-models/blob/main/timm/data/loader.py#L151
I don't know if there's a problem with my solution. Thank you for your answer.