[Neural Network Compression] A GRADIENT FLOW FRAMEWORK FOR ANALYZING NETWORK PRUNING


Paper notes

I. Importance criteria:

Notation: Θ(t) denotes the model parameters at time t; g(Θ(t)) is the gradient of the loss with respect to the parameters at time t; H(Θ(t)) is the Hessian; L(Θ(t)) is the loss; I(Θ_p(t)) is the importance of a candidate parameter (set) Θ_p.

1. Magnitude-based measures:

$I(\Theta_p(t)) = \|\Theta_p(t)\|_2^2$, i.e., the parameters with the smallest magnitude are considered least important and are removed first.
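To make this concrete, here is a minimal PyTorch sketch of magnitude-based scoring; the two-layer net and the 20% pruning ratio are illustrative assumptions, not values from the paper:

```python
import torch
import torch.nn as nn

# Hypothetical toy model; any nn.Module works the same way.
net = nn.Sequential(nn.Linear(10, 32), nn.ReLU(), nn.Linear(32, 2))

# Magnitude-based importance: I(theta_p) = theta_p^2 for every scalar parameter.
scores = torch.cat([(p.detach() ** 2).flatten() for p in net.parameters()])

# Remove the 20% least-important (smallest-magnitude) parameters.
k = int(0.2 * scores.numel())
threshold = scores.kthvalue(k).values
with torch.no_grad():
    for p in net.parameters():
        p *= (p ** 2 > threshold).float()  # zero out low-magnitude weights
```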

2. Loss-preservation-based measures:

$I(\Theta_p(t)) = |\Theta_p(t)^{\top} g(\Theta_p(t))|$, the first-order Taylor estimate of how much the loss changes when Θ_p is removed; the parameters whose removal perturbs the loss least are pruned.
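A matching sketch for loss preservation, using autograd to obtain g(Θ(t)); the model, data, and loss below are hypothetical stand-ins:

```python
import torch
import torch.nn as nn

# Hypothetical toy setup; model, data, and loss are illustrative stand-ins.
net = nn.Sequential(nn.Linear(10, 32), nn.ReLU(), nn.Linear(32, 2))
x, y = torch.randn(64, 10), torch.randint(0, 2, (64,))
loss = nn.functional.cross_entropy(net(x), y)

# g(Theta(t)): gradient of the loss w.r.t. every parameter tensor.
grads = torch.autograd.grad(loss, list(net.parameters()))

# Loss-preservation importance: I(theta_p) = |theta_p * g_p|, the first-order
# Taylor estimate of the loss change when theta_p is zeroed out.
scores = torch.cat([
    (p.detach() * g).abs().flatten()
    for p, g in zip(net.parameters(), grads)
])
# The smallest-scoring parameters change the loss least when removed.
```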

3. Increase-in-gradient-norm based measures:

$I(\Theta_p(t)) = \Theta_p(t)^{\top} H(\Theta(t))\, g(\Theta(t))$, a first-order estimate of the change in gradient norm caused by removing Θ_p (the measure used by GraSP-style pruning).
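This measure needs the Hessian-vector product H(Θ)g(Θ), which torch.autograd.grad can compute without materializing H by setting create_graph=True. Again a minimal sketch with a hypothetical model and data; the absolute-value variant at the end anticipates Observations 4 and 5 in the next section:

```python
import torch
import torch.nn as nn

# Hypothetical toy setup, as before.
net = nn.Sequential(nn.Linear(10, 32), nn.ReLU(), nn.Linear(32, 2))
x, y = torch.randn(64, 10), torch.randint(0, 2, (64,))
params = list(net.parameters())

loss = nn.functional.cross_entropy(net(x), y)
grads = torch.autograd.grad(loss, params, create_graph=True)  # keep graph for the HVP

# Hessian-vector product H(Theta) g(Theta): differentiate <g, stop_grad(g)> w.r.t. Theta.
dot = sum((g * g.detach()).sum() for g in grads)
hvp = torch.autograd.grad(dot, params)

# Gradient-norm-increase importance: I(theta_p) = theta_p * (H g)_p (signed).
increase = torch.cat([(p.detach() * h).flatten() for p, h in zip(params, hvp)])
# Gradient-norm *preservation* instead scores by the absolute value
# |theta_p * (H g)_p| (cf. Observations 4 and 5 below).
preserve = increase.abs()
```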

II. Section 4 of the paper

1. GRADIENT FLOW AND MAGNITUDE-BASED PRUNING

Under gradient flow (continuous-time gradient descent), the parameters evolve as $\dot{\Theta}(t) = -g(\Theta(t))$, so the squared parameter norm changes at the rate $\frac{d}{dt}\|\Theta(t)\|_2^2 = -2\,\Theta(t)^{\top} g(\Theta(t))$. The following observations build on this relation.
Observation 1: The larger the magnitude of parameters at a particular instant, the faster the model loss decreases at that instant. If these large-magnitude parameters are preserved while pruning (instead of smaller ones), the pruned model's loss decreases faster.

Observation 2: Up to a constant, the magnitude of the time-derivative of the norm of the model parameters (the score for magnitude-based pruning) is equal to the importance measure used for loss preservation (Equation 3). Further, loss preservation corresponds to removal of the slowest-changing parameters.
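Observation 2 can be checked numerically: one small Euler step of gradient flow, Θ ← Θ − η g, should change ‖Θ_p‖² at a rate of −2 Θ_pᵀ g. A quick sketch with a hypothetical linear model and random data:

```python
import torch
import torch.nn as nn

torch.manual_seed(0)
net = nn.Linear(10, 2)  # hypothetical toy model
x, y = torch.randn(64, 10), torch.randint(0, 2, (64,))
loss = nn.functional.cross_entropy(net(x), y)
grads = torch.autograd.grad(loss, list(net.parameters()))

eta = 1e-4  # small step size: Euler discretization of gradient flow
for p, g in zip(net.parameters(), grads):
    before = (p.detach() ** 2).sum()
    after = ((p.detach() - eta * g) ** 2).sum()
    finite_diff = (after - before) / eta        # ~ d||theta_p||^2 / dt
    analytic = -2 * (p.detach() * g).sum()      # -2 theta_p^T g, the loss-preservation score up to the factor 2
    print(float(finite_diff), float(analytic))  # the two columns nearly coincide
```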

Observation 3: Due to their closely related nature, when used with additional heuristics, magnitude-based importance measures preserve loss.

Observation 4: Increasing gradient-norm via pruning removes parameters that maximally increase model loss.

Observation 5: Preserving gradient-norm maintains second-order model evolution dynamics and results in better-performing models than increasing gradient-norm.
(To be continued)

Summary

Starting from its equations, this paper explains what the importance criteria proposed in previously successful pruning papers fundamentally measure, i.e., why these different pruning methods all achieve good results.

Source: https://blog.csdn.net/qu_learner/article/details/121149689