
02 Transformer Add & Norm (Residual Connection and Layer Normalization): Code Implementation


Python/PyTorch basics

https://www.cnblogs.com/nickchen121


[Figure: Add & Norm block diagram]

First, we have a norm function.

Inside norm we also apply the residual connection: it takes as input x and the light-pink z1 from the figure (the residual value), and outputs a new value, the purple-pink z1.

Normalization

\[ y = \frac{x - E(x)}{\sqrt{\mathrm{Var}(x) + \epsilon}} \cdot \gamma + \beta \]

\(E(x)\): the mean of x.

\(\mathrm{Var}(x)\): the variance of x.

\(\epsilon\): a small constant added to the variance to keep the denominator from being zero.

\(\gamma\) and \(\beta\): learnable parameters, both updated as training progresses.
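To make the formula concrete, here is a minimal sketch (my own example, not from the original post) that evaluates \(y\) by hand for a small tensor and checks it against PyTorch's built-in torch.nn.functional.layer_norm, which computes the same expression:

import torch
import torch.nn.functional as F

x = torch.randn(2, 4)                              # 2 tokens, feature size 4
eps = 1e-5

# Formula by hand: (x - E(x)) / sqrt(Var(x) + eps) * gamma + beta
mean = x.mean(-1, keepdim=True)                    # E(x) over the feature dimension
var = x.var(-1, keepdim=True, unbiased=False)      # Var(x), population variance
gamma = torch.ones(4)                              # learnable scale, initialized to 1
beta = torch.zeros(4)                              # learnable shift, initialized to 0
y_manual = (x - mean) / torch.sqrt(var + eps) * gamma + beta

# PyTorch's built-in layer norm over the last dimension
y_builtin = F.layer_norm(x, normalized_shape=(4,), weight=gamma, bias=beta, eps=eps)

print(torch.allclose(y_manual, y_builtin, atol=1e-6))  # True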

import torch
import torch.nn as nn


class LayerNorm(nn.Module):

    def __init__(self, feature, eps=1e-6):
        """
        :param feature: size of the last dimension of the self-attention input x (the feature dimension)
        :param eps: small constant added to the denominator to avoid division by zero
        """
        super(LayerNorm, self).__init__()
        # Learnable scale (gamma) and shift (beta), initialized to 1 and 0
        self.a_2 = nn.Parameter(torch.ones(feature))
        self.b_2 = nn.Parameter(torch.zeros(feature))
        self.eps = eps

    def forward(self, x):
        # Mean and standard deviation over the last (feature) dimension
        mean = x.mean(-1, keepdim=True)
        std = x.std(-1, keepdim=True)
        # (x - mean) / (std + eps), then scale and shift; dividing by (std + eps)
        # is a common simplification of sqrt(Var(x) + eps) in the formula above
        return self.a_2 * (x - mean) / (std + self.eps) + self.b_2
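Continuing from the class above, a quick usage sketch (the shapes are my own illustrative choice, not from the original post): apply this LayerNorm to a batch of token embeddings and check that each position is normalized over the feature dimension.

d_model = 512
layer_norm = LayerNorm(feature=d_model)

x = torch.randn(2, 10, d_model)          # (batch, sequence length, feature)
out = layer_norm(x)

print(out.shape)                          # torch.Size([2, 10, 512])
print(out.mean(-1)[0, 0].item())          # roughly 0 for every position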

Residual + normalization

class SublayerConnection(nn.Module):
    """
    This does not just apply the residual connection; it applies the residual
    connection and LayerNorm together.
    """
    def __init__(self, size, dropout=0.1):
        super(SublayerConnection, self).__init__()
        # Step 1: layer normalization
        self.layer_norm = LayerNorm(size)
        # Step 2: dropout
        self.dropout = nn.Dropout(p=dropout)

    def forward(self, x, sublayer):
        """
        :param x: the input to self-attention
        :param sublayer: the self-attention layer (any sub-layer mapping x to a tensor of the same shape)
        :return: dropout(LayerNorm(x + sublayer(x)))
        """
        # Residual connection x + sublayer(x), then layer norm, then dropout
        return self.dropout(self.layer_norm(x + sublayer(x)))
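As a usage sketch (the linear stand-in below is my own placeholder for the self-attention layer, not part of the original post), the sublayer is passed in as a callable, so the same wrapper can be reused around both the attention block and the feed-forward block:

d_model = 512
sublayer_connection = SublayerConnection(size=d_model, dropout=0.1)

# Stand-in for the self-attention layer: any module mapping (batch, seq, d_model)
# to the same shape works here
toy_sublayer = nn.Linear(d_model, d_model)

x = torch.randn(2, 10, d_model)
out = sublayer_connection(x, toy_sublayer)   # dropout(LayerNorm(x + toy_sublayer(x)))
print(out.shape)                              # torch.Size([2, 10, 512])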

Source: https://www.cnblogs.com/nickchen121/p/16518604.html