Do we need self.loss.cuda()?
Reposted from: https://discuss.pytorch.org/t/why-do-we-need-to-do-loss-cuda-when-we-we-have-already-done-model-cuda/91023/5
https://discuss.pytorch.org/t/move-the-loss-function-to-gpu/20060
1. The problem
Some models also call cuda() on the loss functions:
if torch.cuda.is_available():
    net.cuda()
    softMax.cuda()
    CE_loss.cuda()
    Dice_loss.cuda()
However, if the loss function holds no parameters (or buffers) of its own, and the input tensors are already on CUDA, there is no need to call .cuda() on it.
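For example, a minimal sketch of this case (assuming a CUDA device is available; the shapes and the choice of nn.MSELoss here are illustrative, not from the original post):

import torch
import torch.nn as nn

# A stateless loss such as nn.MSELoss holds no parameters or buffers,
# so there is nothing for .cuda() to move.
criterion = nn.MSELoss()                    # no criterion.cuda() needed
pred = torch.randn(8, 3, device='cuda')     # inputs already live on the GPU
target = torch.randn(8, 3, device='cuda')
loss = criterion(pred, target)              # works; the loss tensor is on the GPU
print(loss.device)                          # -> cuda:0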
A case where the call is needed:
output = torch.randn(10, 10, requires_grad=True, device='cuda')
target = torch.randint(0, 10, (10,), device='cuda')
weight = torch.empty(10).uniform_(0, 1)
criterion = nn.CrossEntropyLoss(weight=weight)
loss = criterion(output, target)  # error
> RuntimeError: Expected object of device type cuda but got device type cpu for argument #3 'weight' in call to _thnn_nll_loss_forward

criterion.cuda()
loss = criterion(output, target)  # works
Because the loss computation uses the weight tensor stored on the criterion, the criterion itself also has to be moved to CUDA.
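In other words, the weight passed to nn.CrossEntropyLoss is kept inside the module (registered as a buffer), so criterion.cuda() moves it to the GPU together with the module. A small sketch of this, plus an equally valid alternative of putting the weight on the GPU up front (assuming a CUDA device is available; not part of the original post):

import torch
import torch.nn as nn

weight = torch.empty(10).uniform_(0, 1)         # created on the CPU
criterion = nn.CrossEntropyLoss(weight=weight)
print(criterion.weight.device)                  # cpu -> this is what triggers the error

criterion.cuda()                                # moves the stored 'weight' to the GPU
print(criterion.weight.device)                  # cuda:0

# Alternative: move the weight to the GPU before building the criterion,
# then no criterion.cuda() call is needed.
criterion2 = nn.CrossEntropyLoss(weight=weight.cuda())
print(criterion2.weight.device)                 # cuda:0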
Source: https://www.cnblogs.com/BlueBlueSea/p/15542320.html