
Label Smoothing Demo (with PyTorch's NLLLoss() and gather())


LabelSmoothing.py

import torch
import torch.nn as nn
import torch.nn.functional as F

# Wangleiofficial
# https://github.com/pytorch/pytorch/issues/7455#issuecomment-720100742
class LabelSmoothingLoss(torch.nn.Module):
    def __init__(self, smoothing: float = 0.1,
                 reduction="mean", weight=None):
        super(LabelSmoothingLoss, self).__init__()
        self.smoothing   = smoothing
        self.reduction = reduction
        self.weight    = weight

    def reduce_loss(self, loss):
        if self.reduction == 'mean':
            return loss.mean()
        if self.reduction == 'sum':
            return loss.sum()
        return loss

    def linear_combination(self, x, y):
        return self.smoothing * x + (1 - self.smoothing) * y

    def forward(self, preds, target):
        assert 0 <= self.smoothing < 1

        if self.weight is not None:
            self.weight = self.weight.to(preds.device)

        n = preds.size(-1)  # number of classes
        log_preds = F.log_softmax(preds, dim=-1)
        # uniform/smoothing term: -sum_k log p(k), reduced over the batch
        loss = self.reduce_loss(-log_preds.sum(dim=-1))
        # standard NLL term: -log p(y)
        nll = F.nll_loss(
            log_preds, target, reduction=self.reduction, weight=self.weight
        )
        # smoothing * (uniform term / num_classes) + (1 - smoothing) * NLL
        return self.linear_combination(loss / n, nll)


# NVIDIA
# https://github.com/NVIDIA/DeepLearningExamples/blob/8d8b21a933fff3defb692e0527fca15532da5dc6/PyTorch/Classification/ConvNets/image_classification/smoothing.py#L18
class LabelSmoothing(nn.Module):
    """NLL loss with label smoothing.
    """
    def __init__(self, smoothing=0.0): # smoothing factor
        """Constructor for the LabelSmoothing module.
        :param smoothing: label smoothing factor
        """
        super(LabelSmoothing, self).__init__()
        self.confidence = 1.0 - smoothing
        self.smoothing = smoothing

    def forward(self, x, target):
        logprobs = torch.nn.functional.log_softmax(x, dim=-1) # x: (batch_size, num_classes); logprobs = log(p(k))
        nll_loss = -logprobs.gather(dim=-1, index=target.unsqueeze(1)) # target: (batch_size,) integer class labels
        # i.e. pick out, for each sample, the log-probability at the true-label position and negate it
        nll_loss = nll_loss.squeeze(1) # (batch_size, 1) squeezed back to (batch_size,); equals -sum_k log(p(k))*δ(k,y),
        # where δ(k,y) is 1 when k = y and 0 otherwise, so nll_loss = -log(p(y))
        smooth_loss = -logprobs.mean(dim=-1) # mean over the class dimension: averages -log(p(k)) over all classes per sample
        # smooth_loss = -sum_k log(p(k))*u(k), with the uniform distribution u(k) = 1/K

        loss = self.confidence * nll_loss + self.smoothing * smooth_loss # (batch_size,)
        # per-sample loss = -sum_{k=1..K} [(1-ε)*δ(k,y) + ε*u(k)] * log(p(k))
        return loss.mean() # average the per-sample loss over the batch



if __name__=="__main__":
    # Wangleiofficial
    crit = LabelSmoothingLoss(smoothing=0.3, reduction="mean")
    predict = torch.FloatTensor([[0, 0.2, 0.7, 0.1, 0],
                                 [0, 0.9, 0.2, 0.2, 1],
                                 [1, 0.2, 0.7, 0.9, 1]])

    v = crit(predict, torch.LongTensor([2, 1, 0]))
    print(v)

    # NVIDIA
    crit = LabelSmoothing(smoothing=0.3)
    predict = torch.FloatTensor([[0, 0.2, 0.7, 0.1, 0],
                                 [0, 0.9, 0.2, 0.2, 1],
                                 [1, 0.2, 0.7, 0.9, 1]])
    v = crit(predict, torch.LongTensor([2, 1, 0]))
    print(v)

# tensor(1.3883)
# tensor(1.3883)

The code above can be run as-is to reproduce the label smoothing (LS) results.

The two LS implementations come from:

# Wangleiofficial
# https://github.com/pytorch/pytorch/issues/7455#issuecomment-720100742
# NVIDIA
# https://github.com/NVIDIA/DeepLearningExamples/blob/8d8b21a933fff3defb692e0527fca15532da5dc6/PyTorch/Classification/ConvNets/image_classification/smoothing.py#L18

The walkthrough below focuses on the NVIDIA implementation.

class LabelSmoothing(nn.Module):
    """NLL loss with label smoothing.
    """
    def __init__(self, smoothing=0.0): # smoothing factor
        """Constructor for the LabelSmoothing module.
        :param smoothing: label smoothing factor
        """
        super(LabelSmoothing, self).__init__()
        self.confidence = 1.0 - smoothing
        self.smoothing = smoothing

    def forward(self, x, target):
        logprobs = torch.nn.functional.log_softmax(x, dim=-1) # x: (batch_size, num_classes); logprobs = log(p(k))
        nll_loss = -logprobs.gather(dim=-1, index=target.unsqueeze(1)) # target: (batch_size,) integer class labels
        # i.e. pick out, for each sample, the log-probability at the true-label position and negate it
        nll_loss = nll_loss.squeeze(1) # (batch_size, 1) squeezed back to (batch_size,); equals -sum_k log(p(k))*δ(k,y),
        # where δ(k,y) is 1 when k = y and 0 otherwise, so nll_loss = -log(p(y))
        smooth_loss = -logprobs.mean(dim=-1) # mean over the class dimension: averages -log(p(k)) over all classes per sample
        # smooth_loss = -sum_k log(p(k))*u(k), with the uniform distribution u(k) = 1/K

        loss = self.confidence * nll_loss + self.smoothing * smooth_loss # (batch_size,)
        # per-sample loss = -sum_{k=1..K} [(1-ε)*δ(k,y) + ε*u(k)] * log(p(k))
        return loss.mean() # average the per-sample loss over the batch
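In equation form (a sketch using the same symbols as the comments above: ε is the smoothing factor, K the number of classes, y the true label, and p(k) the softmax probability of class k), this forward pass is simply cross-entropy against a smoothed target distribution:

q'(k) = (1 - \epsilon)\,\delta_{k,y} + \epsilon\,u(k), \qquad u(k) = \frac{1}{K}

L_{\text{sample}} = -\sum_{k=1}^{K} q'(k)\,\log p(k)

and loss.mean() then averages this per-sample loss over the batch.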

 

An easy-to-follow derivation of LS

https://blog.csdn.net/weixin_41811314/article/details/115863126 (a very beginner-friendly, step-by-step walkthrough with code)

(Figure: the Dirac delta δk,y, equal to 1 when k = y and 0 otherwise)
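Using this Dirac delta, the key step of that derivation (sketched here with the same symbols as above) is that the smoothed cross-entropy splits into exactly the nll_loss and smooth_loss terms computed in forward:

-\sum_{k=1}^{K}\left[(1-\epsilon)\,\delta_{k,y} + \epsilon\,u(k)\right]\log p(k) = (1-\epsilon)\bigl(-\log p(y)\bigr) + \epsilon\left(-\frac{1}{K}\sum_{k=1}^{K}\log p(k)\right)

i.e. (1 - ε) * nll_loss + ε * smooth_loss, which is precisely what the code returns (before the batch mean).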

 

The difference between NLLLoss() and CrossEntropyLoss()

https://blog.csdn.net/qq_22210253/article/details/85229988/

When using NLLLoss() (Negative Log Likelihood Loss), you need to add the activation yourself on the network's final output layer (in practice LogSoftmax, so that the loss receives log-probabilities), which matches the definition of "likelihood": the network should output a probability. When using CrossEntropyLoss(), the network's final output should not have the activation added; CrossEntropyLoss() performs this step for us internally.

NLLLoss (Negative Log Likelihood Loss) translates as "negative log-likelihood loss", but it does not compute any logarithm itself; it only consumes values that have already been log-transformed (which is why it must be paired with LogSoftmax), and then combines them with the true labels to compute the "negative log-likelihood". "Likelihood" here means how similar the predicted distribution is to the true distribution; in classification it is simply how close the prediction is to the (one-hot) true label, so nothing essential changes.
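A minimal check of this point (the logits and labels here are made up for illustration, not taken from the demo above): CrossEntropyLoss() on raw logits gives the same value as NLLLoss() on log_softmax outputs.

import torch
import torch.nn as nn
import torch.nn.functional as F

logits = torch.tensor([[0.0, 0.2, 0.7, 0.1, 0.0],
                       [0.0, 0.9, 0.2, 0.2, 1.0]])
target = torch.tensor([2, 1])

# CrossEntropyLoss takes raw logits and applies log_softmax internally
ce = nn.CrossEntropyLoss()(logits, target)

# NLLLoss expects log-probabilities, so log_softmax must be applied first
nll = nn.NLLLoss()(F.log_softmax(logits, dim=-1), target)

print(ce, nll)  # both print the same value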

 

PyTorch's CrossEntropyLoss() internally combines LogSoftmax and NLLLoss:

apply softmax, take the natural log, pick out the value at the position given by the label, negate it, and then take the mean (divide by the number of samples).
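As a quick sketch of those steps (same made-up logits as before), computing them by hand reproduces F.cross_entropy:

import torch
import torch.nn.functional as F

logits = torch.tensor([[0.0, 0.2, 0.7, 0.1, 0.0],
                       [0.0, 0.9, 0.2, 0.2, 1.0]])
target = torch.tensor([2, 1])

probs = F.softmax(logits, dim=-1)           # softmax over the class dimension
logprobs = torch.log(probs)                 # take the natural log
picked = logprobs.gather(dim=1, index=target.unsqueeze(1)).squeeze(1)  # value at the true label
loss = -picked.mean()                       # negate, then average over the samples

print(loss, F.cross_entropy(logits, target))  # the two values match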


A detailed explanation of PyTorch's torch.gather, from abstract to concrete

https://blog.csdn.net/weixin_41811314/article/details/115869024

torch.gather(input, dim, index, *, sparse_grad=False, out=None)

 

With dim=1: out[i][j][k] = input[i][index[i][j][k]][k]

In other words, to get out[i][j][k], first look up index[i][j][k] and call it a; since dim is 1, the result is input[i][a][k].

Think of input as the warehouse.

A bigger-picture way to think about it

That is, for every position in out, we first find the corresponding point in the input warehouse. From that point we shoot a ray that lights up only the "dim" axis, and along that ray we use index to locate the exact point we actually need.
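A small gather example (made-up numbers, following the same pattern as the LabelSmoothing forward above: dim=1 and an index of shape (batch_size, 1)):

import torch

# the "warehouse": log-probabilities for 3 samples and 5 classes
logprobs = torch.tensor([[-1.0, -2.0, -0.5, -3.0, -4.0],
                         [-0.3, -1.5, -2.5, -0.8, -2.0],
                         [-2.2, -0.9, -1.1, -0.4, -3.3]])
target = torch.tensor([2, 1, 0])

# with dim=1: out[i][j] = logprobs[i][ index[i][j] ]
out = logprobs.gather(dim=1, index=target.unsqueeze(1))
print(out.squeeze(1))  # tensor([-0.5000, -1.5000, -2.2000])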

   

 

Source: https://www.cnblogs.com/jie-74/p/15686416.html