Using torchsummary() in a multi-GPU (multi-card) environment
When visualizing a model with torchsummary(), the code reports an error:
RuntimeError: cuDNN error: CUDNN_STATUS_INTERNAL_ERROR
Specifying the GPU explicitly in the code, as below, still fails:
device = torch.device("cuda:3" if torch.cuda.is_available() else "cpu")
model = model.to(device)
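One likely cause (an assumption on my part, not confirmed in the original post) is a device mismatch: torchsummary creates its dummy input on the current CUDA device, which is still cuda:0 by default, while the model has been moved to cuda:3. A minimal diagnostic sketch, assuming a machine with at least four GPUs:

import torch
from torchvision.models import vgg11

model = vgg11(pretrained=False)
device = torch.device("cuda:3" if torch.cuda.is_available() else "cpu")
model = model.to(device)

# The model's parameters live on cuda:3 ...
print(next(model.parameters()).device)   # cuda:3
# ... but the current (default) CUDA device is still 0,
# which is where torchsummary allocates its dummy input.
print(torch.cuda.current_device())       # 0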
import torch
from torchsummary import summary
from torchvision.models import vgg11

# Make GPU 2 the current CUDA device before moving the model
torch.cuda.set_device(2)

model = vgg11(pretrained=False)
if torch.cuda.is_available():
    # device = torch.device("cuda:3")
    # model = model.to(device)
    model.cuda()

summary(model, (3, 224, 224))
Adding torch.cuda.set_device(2) to set the current device makes the code run successfully, and torchsummary prints the layer-by-layer summary shown in the figure below:
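As a quick sanity check (a hypothetical snippet, assuming a machine with at least three GPUs), torch.cuda.current_device() confirms that the default device has changed:

import torch

torch.cuda.set_device(2)
# After set_device(2), tensors created with .cuda() - including
# torchsummary's dummy input - land on GPU 2, matching the model.
print(torch.cuda.current_device())  # 2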
However, the official PyTorch documentation discourages using torch.cuda.set_device() to pick a GPU; in most cases it is better to use the CUDA_VISIBLE_DEVICES environment variable.
import os
# Expose only physical GPU 2 to this process;
# must be set before CUDA is initialized (here, before importing torch)
os.environ['CUDA_VISIBLE_DEVICES'] = "2"

import torch
from torchsummary import summary
from torchvision.models import vgg11

model = vgg11(pretrained=False)
if torch.cuda.is_available():
    model.cuda()

summary(model, (3, 224, 224))
Note
- The os.environ['CUDA_VISIBLE_DEVICES'] assignment must be executed before any code that uses the GPU (in practice, place it before importing torch, as in the example above).
- Once CUDA_VISIBLE_DEVICES restricts the visible cards, the device indices seen by the program differ from the physical ones. For example, with os.environ['CUDA_VISIBLE_DEVICES'] = "0,2", the program sees the two cards as devices 0 and 1; in other words, the indices used inside the program are remapped onto the real GPU indices.
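A minimal sketch of this remapping (hypothetical, assuming a machine with at least three physical GPUs):

import os
# Expose only physical GPUs 0 and 2 to this process
os.environ['CUDA_VISIBLE_DEVICES'] = "0,2"

import torch

# The process now sees exactly two devices, numbered 0 and 1:
# cuda:0 -> physical GPU 0, cuda:1 -> physical GPU 2
print(torch.cuda.device_count())        # 2
print(torch.cuda.get_device_name(0))    # name of physical GPU 0
print(torch.cuda.get_device_name(1))    # name of physical GPU 2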