
A PyTorch 1.4 pitfall


The solution comes from this link.

In PyTorch 1.4 and earlier, backpropagation for two optimizers could be interleaved like this:

opt_1.zero_grad()
loss_1 = fun(...)
loss_1.backward(retain_graph=True)  # keep the graph in case the losses share it
opt_1.step()

opt_2.zero_grad()
loss_2 = fun(...)
loss_2.backward()
opt_2.step()

In PyTorch 1.5 and later, however, this structure raises the following error:

RuntimeError: one of the variables needed for gradient computation has been modified by an inplace operation: [torch.FloatTensor [200, 120]], which is output 0 of TBackward, is at version 3; expected version 2 instead. Hint: the backtrace further above shows the operation that failed to compute its gradient. The variable in question was changed in there or anywhere later. Good luck!
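The failure is easiest to see with a concrete sketch. Everything below (the two `nn.Linear` models, the `SGD` optimizers, the shared activation `h`) is an illustrative assumption, not code from the post:

```python
import torch
from torch import nn

torch.manual_seed(0)
x = torch.randn(4, 3, requires_grad=True)  # requires_grad so backward reaches model_1's weight
model_1 = nn.Linear(3, 2)
model_2 = nn.Linear(2, 1)
opt_1 = torch.optim.SGD(model_1.parameters(), lr=0.1)
opt_2 = torch.optim.SGD(model_2.parameters(), lr=0.1)

h = model_1(x)                       # loss_2's graph reuses model_1's output
loss_1 = h.pow(2).mean()
loss_2 = model_2(h).pow(2).mean()

opt_1.zero_grad()
loss_1.backward(retain_graph=True)
opt_1.step()                         # updates model_1's weight in place

opt_2.zero_grad()
caught = None
try:
    loss_2.backward()                # still needs the pre-step weight
except RuntimeError as e:            # version-counter mismatch in PyTorch >= 1.5
    caught = e

print(type(caught).__name__)
```

The `[200, 120]` tensor named in the error message is exactly such a linear-layer weight: `TBackward` is the transpose node that saved it for the backward pass, and `opt_1.step()` bumped its version counter before `loss_2.backward()` could read it.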

The cause is that opt_1.step() modifies the parameters in place, while the second backward pass still needs their pre-update values. Deferring every step() until all backward passes have run resolves the problem:

opt_1.zero_grad()
loss_1 = fun(...)
loss_1.backward(retain_graph=True)

opt_2.zero_grad()
loss_2 = fun(...)
loss_2.backward()

opt_1.step()
opt_2.step()
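Under the same illustrative setup as above (two `nn.Linear` models and `SGD` optimizers, assumed for demonstration and not taken from the post), the reordered pattern runs cleanly, because every gradient is computed while the parameters are still untouched:

```python
import torch
from torch import nn

torch.manual_seed(0)
x = torch.randn(4, 3, requires_grad=True)
model_1 = nn.Linear(3, 2)
model_2 = nn.Linear(2, 1)
opt_1 = torch.optim.SGD(model_1.parameters(), lr=0.1)
opt_2 = torch.optim.SGD(model_2.parameters(), lr=0.1)

h = model_1(x)                       # loss_2's graph reuses model_1's output
loss_1 = h.pow(2).mean()
loss_2 = model_2(h).pow(2).mean()

opt_1.zero_grad()
opt_2.zero_grad()

# All backward passes run before any in-place parameter update.
loss_1.backward(retain_graph=True)   # retain the shared graph for loss_2
loss_2.backward()

# Only now mutate the parameters in place.
opt_1.step()
opt_2.step()
```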

Source: https://www.cnblogs.com/tstk/p/15210548.html