libtorch 1.5: error when loading the lstmFC.pt model onto cuda:1
1. Scenario
A PyTorch model is converted to a .pt file with torch.jit.trace;
the .pt model is then loaded and called from C++ via libtorch.
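For context, a minimal sketch of the export step (the traceback below shows it happening in pth_to_pt.py / pth_topt; the model class name and input shape here are assumptions):

    import torch

    model = LSTMFCModel().eval().to("cuda:0")          # hypothetical model class, traced on cuda:0
    example = torch.randn(8, 20, 16, device="cuda:0")  # assumed (batch, seq, feature) shape
    traced = torch.jit.trace(model, example)
    traced.save("lstmFC.pt")                           # later loaded from C++ and moved to cuda:1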
2. Error message:
terminate called after throwing an instance of 'std::runtime_error'
  what():  Input and hidden tensors are not at the same device, found input tensor at cuda:1 and hidden tensor at cuda:0
The above operation failed in interpreter.
Traceback (most recent call last):
Serialized File "code/__torch__/models.py", line 20
    _4 = self.dense2
    _5 = self.dense1
    _6 = torch.slice((self.lstm).forward(input, ), 0, 0, 9223372036854775807, 1)
                      ~~~~~~~~~~~~~~~~~~ <--- HERE
    lstm_out = torch.slice(torch.select(_6, 1, -1), 1, 0, 9223372036854775807, 1)
    input0 = torch.cat([lstm_out, x], -1)
/root/anaconda3/envs/pytorch_1.5.1_cu92/lib/python3.8/site-packages/torch/nn/modules/rnn.py(569): forward
/root/anaconda3/envs/pytorch_1.5.1_cu92/lib/python3.8/site-packages/torch/nn/modules/module.py(534): _slow_forward
/root/anaconda3/envs/pytorch_1.5.1_cu92/lib/python3.8/site-packages/torch/nn/modules/module.py(548): __call__
/data/models.py(24): forward
/root/anaconda3/envs/pytorch_1.5.1_cu92/lib/python3.8/site-packages/torch/nn/modules/module.py(534): _slow_forward
/root/anaconda3/envs/pytorch_1.5.1_cu92/lib/python3.8/site-packages/torch/nn/modules/module.py(548): __call__
/root/anaconda3/envs/pytorch_1.5.1_cu92/lib/python3.8/site-packages/torch/jit/__init__.py(1027): trace_module
/root/anaconda3/envs/pytorch_1.5.1_cu92/lib/python3.8/site-packages/torch/jit/__init__.py(873): trace
pth_to_pt.py(48): pth_topt
pth_to_pt.py(68): <module>
Serialized File "code/__torch__/torch/nn/modules/rnn.py", line 16, in forward
    max_batch_size = ops.prim.NumToTensor(torch.size(input, 0))
    hx = torch.zeros([1, int(max_batch_size), 128], dtype=6, layout=0, device=torch.device("cuda:0"), pin_memory=False)
    lstm_out, _4, _5 = torch.lstm(input, [hx, hx], [_3, _2, _1, _0], True, 1, 0., False, False, True)
    ~~~~~~~~~~ <--- HERE
    return lstm_out
Aborted (core dumped)
3. Cause & fix
When hx is not supplied, nn.LSTM builds its all-zero hidden state inside forward, and tracing bakes the device into the graph as a constant: the serialized rnn.py above shows hx = torch.zeros(..., device=torch.device("cuda:0"), ...). Loading the module onto cuda:1 therefore leaves hx stranded on cuda:0, and libtorch 1.5 does not remap this constant.
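The failure can be reproduced in isolation with a bare nn.LSTM (a minimal sketch; the input size and shapes are assumed, only the hidden size 128 comes from the traceback):

    import torch
    import torch.nn as nn

    lstm = nn.LSTM(16, 128, batch_first=True).to("cuda:0")
    traced = torch.jit.trace(lstm, torch.randn(4, 10, 16, device="cuda:0"))
    traced.to("cuda:1")  # moves the weights, but not the hard-coded hx constant
    out, _ = traced(torch.randn(4, 10, 16, device="cuda:1"))
    # RuntimeError: Input and hidden tensors are not at the same device

The workaround is to register the zero states on the module itself, so they travel with it across devices: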
import torch
import torch.nn as nn
from torch.nn import Parameter

class XXModule(nn.Module):
    def __init__(self):
        super().__init__()
        ...
        self.lstm = nn.LSTM(input_size, hidden_size, batch_first=True)
        # (num_layers * num_directions, max_batch_size, hidden_size); as a
        # Parameter it moves with the module on .to(device) instead of being
        # serialized as a cuda:0 constant
        self.zeros = Parameter(torch.zeros(1 * 1, max_batch_size, hidden_size,
                                           dtype=torch.float), requires_grad=False)

    def forward(self, z):
        # slice the registered zeros to the actual batch size, pass as (h_0, c_0)
        lstm_out, (h_n, _) = self.lstm(z, hx=(self.zeros[:, :z.shape[0], :],
                                              self.zeros[:, :z.shape[0], :]))
        ...
        out = ...
        return torch.argmax(out, dim=-1)
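With the zeros registered on the module, .to(device) (or tracing directly on the target device) keeps the input and hidden tensors together. A usage sketch, with the same assumed shapes as above and the elided parts of XXModule filled in:

    model = XXModule().eval().to("cuda:1")
    assert model.zeros.device == torch.device("cuda:1")  # hx now follows the module
    traced = torch.jit.trace(model, torch.randn(4, 10, 16, device="cuda:1"))
    traced.save("lstmFC.pt")  # loads onto cuda:1 from libtorch without the device mismatch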
--
Source: https://www.cnblogs.com/xiaoniu-666/p/16379713.html