Label Smoothing Demo (with PyTorch's NLLLoss() and gather())



LabelSmoothing.py

import torch
import torch.nn as nn
import torch.nn.functional as F

# Wangleiofficial
# https://github.com/pytorch/pytorch/issues/7455#issuecomment-720100742
class LabelSmoothingLoss(torch.nn.Module):
    def __init__(self, smoothing: float = 0.1,
                 reduction="mean", weight=None):
        super(LabelSmoothingLoss, self).__init__()
        self.smoothing = smoothing
        self.reduction = reduction
        self.weight = weight

    def reduce_loss(self, loss):
        # 'mean' and 'sum' reduce over the batch; anything else returns per-sample losses.
        if self.reduction == 'mean':
            return loss.mean()
        return loss.sum() if self.reduction == 'sum' else loss

    def linear_combination(self, x, y):
        # Mix the smoothing term x with the NLL term y: eps * x + (1 - eps) * y
        return self.smoothing * x + (1 - self.smoothing) * y

    def forward(self, preds, target):
        assert 0 <= self.smoothing < 1

        if self.weight is not None:
            self.weight = self.weight.to(preds.device)

        n = preds.size(-1)  # number of classes K
        log_preds = F.log_softmax(preds, dim=-1)
        # Smoothing term: -sum_k log p(k), reduced over the batch
        loss = self.reduce_loss(-log_preds.sum(dim=-1))
        # Standard NLL term: -log p(y)
        nll = F.nll_loss(
            log_preds, target, reduction=self.reduction, weight=self.weight
        )
        # eps * (smoothing term / K) + (1 - eps) * NLL
        return self.linear_combination(loss / n, nll)


# NVIDIA
# https://github.com/NVIDIA/DeepLearningExamples/blob/8d8b21a933fff3defb692e0527fca15532da5dc6/PyTorch/Classification/ConvNets/image_classification/smoothing.py#L18
class LabelSmoothing(nn.Module):
    """NLL loss with label smoothing.
    """
    def __init__(self, smoothing=0.0):  # smoothing factor
        """Constructor for the LabelSmoothing module.
        :param smoothing: label smoothing factor
        """
        super(LabelSmoothing, self).__init__()
        self.confidence = 1.0 - smoothing
        self.smoothing = smoothing

    def forward(self, x, target):
        logprobs = torch.nn.functional.log_softmax(x, dim=-1)  # x: (batch_size, num_classes) logits -> log p(k)
        nll_loss = -logprobs.gather(dim=-1, index=target.unsqueeze(1))  # target: (batch_size,) integer labels
        # i.e. pick out, for each sample, the log-probability at the true-label position and negate it
        nll_loss = nll_loss.squeeze(1)  # (batch_size, 1) -> (batch_size,); this is -sum_k log p(k) * delta_{k,y},
        # where delta_{k,y} = 1 when k = y and 0 otherwise
        smooth_loss = -logprobs.mean(dim=-1)  # mean over the class dimension: average log-prob of each sample
        # smooth_loss = -sum_k log p(k) * u(k), with the uniform prior u(k) = 1/K

        loss = self.confidence * nll_loss + self.smoothing * smooth_loss  # (batch_size,)
        # loss = -(1 - eps) * log p(y) - eps * sum_k log p(k) * u(k)
        return loss.mean()  # batch mean of -sum_{k=1..K} [(1 - eps) * delta_{k,y} + eps * u(k)] * log p(k)



if __name__ == "__main__":
    # Wangleiofficial
    crit = LabelSmoothingLoss(smoothing=0.3, reduction="mean")
    predict = torch.FloatTensor([[0, 0.2, 0.7, 0.1, 0],
                                 [0, 0.9, 0.2, 0.2, 1],
                                 [1, 0.2, 0.7, 0.9, 1]])

    v = crit(predict,
             torch.LongTensor([2, 1, 0]))
    print(v)

    # NVIDIA
    crit = LabelSmoothing(smoothing=0.3)
    predict = torch.FloatTensor([[0, 0.2, 0.7, 0.1, 0],
                                 [0, 0.9, 0.2, 0.2, 1],
                                 [1, 0.2, 0.7, 0.9, 1]])
    v = crit(predict,
             torch.LongTensor([2, 1, 0]))
    print(v)

# tensor(1.3883)
# tensor(1.3883)

 

The code above can be run as-is to produce the label smoothing (LS) results.
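
For comparison, PyTorch (1.10 and later) builds label smoothing directly into CrossEntropyLoss. A minimal sanity-check sketch, assuming a sufficiently recent torch version:

import torch
import torch.nn as nn

# Built-in label smoothing (available since PyTorch 1.10)
crit = nn.CrossEntropyLoss(label_smoothing=0.3)
predict = torch.FloatTensor([[0, 0.2, 0.7, 0.1, 0],
                             [0, 0.9, 0.2, 0.2, 1],
                             [1, 0.2, 0.7, 0.9, 1]])
target = torch.LongTensor([2, 1, 0])
print(crit(predict, target))  # should also print tensor(1.3883)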

The two LS implementations come from:

# Wangleiofficial
# https://github.com/pytorch/pytorch/issues/7455#issuecomment-720100742
# NVIDIA
# https://github.com/NVIDIA/DeepLearningExamples/blob/8d8b21a933fff3defb692e0527fca15532da5dc6/PyTorch/Classification/ConvNets/image_classification/smoothing.py#L18


The NVIDIA implementation deserves a step-by-step walkthrough:

1. logprobs = F.log_softmax(x, dim=-1) turns the logits x, of shape (batch_size, num_classes), into log-probabilities log p(k).

2. logprobs.gather(dim=-1, index=target.unsqueeze(1)) picks out, for each sample, the log-probability at the true-label position; negating it gives the standard NLL term -log p(y). Here target has shape (batch_size,) and holds integer class labels. Written as a sum over classes, this term is -Σ_k log p(k) δ_{k,y}, where δ_{k,y} is 1 when k = y and 0 otherwise. squeeze(1) collapses the (batch_size, 1) result back to (batch_size,).

3. smooth_loss = -logprobs.mean(dim=-1) takes the mean over the class dimension, i.e. averages the log-probs over all classes of each sample, which equals -Σ_k log p(k) u(k) for the uniform prior u(k) = 1/K.

4. The per-sample loss mixes the two terms with weights confidence = 1 - ε and ε, and loss.mean() averages over the batch:

   loss = -Σ_{k=1..K} [(1 - ε) δ_{k,y} + ε u(k)] log p(k)

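To make the decomposition concrete, here is a minimal sketch (reusing the demo tensors above) that computes the two terms by hand, with plain indexing standing in for the gather() call:

import torch
import torch.nn.functional as F

eps = 0.3
x = torch.FloatTensor([[0, 0.2, 0.7, 0.1, 0],
                       [0, 0.9, 0.2, 0.2, 1],
                       [1, 0.2, 0.7, 0.9, 1]])
target = torch.LongTensor([2, 1, 0])

logprobs = F.log_softmax(x, dim=-1)
nll = -logprobs[torch.arange(3), target]        # -log p(y), same values as the gather() route
smooth = -logprobs.mean(dim=-1)                 # -(1/K) * sum_k log p(k)
print(((1 - eps) * nll + eps * smooth).mean())  # should print tensor(1.3883), matching the demo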

An accessible derivation of label smoothing:

https://blog.csdn.net/weixin_41811314/article/details/115863126 (step-by-step, hand-holding code)


Dirac delta: the δ_{k,y} above equals 1 when k = y and 0 otherwise.

 

The difference between NLLLoss() and CrossEntropyLoss()

https://blog.csdn.net/qq_22210253/article/details/85229988/

When using NLLLoss() (Negative Log Likelihood Loss), the network's final output must already be log-probabilities, so a LogSoftmax layer has to be appended to the network; this matches the "likelihood" in the name, i.e. the model outputs a probability distribution. When using CrossEntropyLoss(), the network's final layer outputs raw logits with no activation, because CrossEntropyLoss() applies the log-softmax internally.

Although NLLLoss translates to "negative log likelihood loss", it does not compute any logarithm itself; it consumes values that have already been log-transformed (hence the pairing with LogSoftmax) and combines them with the true labels to produce the negative log likelihood. "Likelihood" here means how similar the predicted distribution is to the true distribution; in classification, that is how close the prediction is to the one-hot true label (the essence is unchanged).

PyTorch's CrossEntropyLoss thus combines LogSoftmax and NLLLoss: apply softmax, take the natural log, pick out the entry at the label position, negate it, and average over the samples.
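
A minimal sketch of this equivalence (tensor values are illustrative):

import torch
import torch.nn.functional as F

x = torch.randn(3, 5)            # raw logits
y = torch.LongTensor([2, 1, 0])  # integer class labels

# cross_entropy == log_softmax followed by nll_loss
a = F.cross_entropy(x, y)
b = F.nll_loss(F.log_softmax(x, dim=-1), y)
print(torch.allclose(a, b))      # True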


A detailed explanation of torch.gather, from abstract to concrete

https://blog.csdn.net/weixin_41811314/article/details/115869024

torch.gather(input, dim, index, *, sparse_grad=False, out=None)

With dim=1: out[i][j][k] = input[i][index[i][j][k]][k]

That is, to fill out[i][j][k], first read a = index[i][j][k]; since dim is 1, the result is input[i][a][k].

input is the "warehouse".
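
A minimal sketch of the 2-D, dim=1 case, including the (batch_size, 1) index shape that the loss above uses (values are illustrative):

import torch

inp = torch.tensor([[1, 2, 3],
                    [4, 5, 6]])
idx = torch.tensor([[2], [0]])
# For a 2-D input with dim=1: out[i][j] = inp[i][idx[i][j]]
print(torch.gather(inp, 1, idx))  # tensor([[3], [4]])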


A higher-level intuition

For every position {m} in out, we first go to the input "warehouse" and find the corresponding point. From that point, we shoot out a ray that lights up only the "dim" axis, and along that ray we use index to find the exact element we need.

Source: https://www.cnblogs.com/jie-74/p/15686416.html
