Implementing Backpropagation in NumPy and Comparing with a torch Model

This post builds a single-layer network (a bias-free linear layer followed by a sigmoid, with MSE loss) by hand in NumPy, and checks the forward pass, the chain-rule gradient, and one optimizer step against the equivalent PyTorch model.

import numpy as np
import torch
import torch.nn as nn
import copy

class TorchModel(nn.Module):
    def __init__(self, input_dim):
        super(TorchModel, self).__init__()
        self.line_layer = nn.Linear(input_dim, input_dim, bias=False)
        self.activate = nn.Sigmoid()
        self.mse = nn.functional.mse_loss

    def forward(self, x, y=None):
        y_pred = self.line_layer(x)
        y_pred = self.activate(y_pred)
        if y is not None:
            loss = self.mse(y_pred, y)
            return loss
        else:
            return y_pred


class DiyModel():
    def __init__(self, weight):
        self.weight = weight

    def forward(self, x, y=None):
        y_pred = np.dot(self.weight, x)
        y_pred = self.sigmoid(y_pred)
        if y is not None:
            return self.mse(y_pred, y)
        else:
            return y_pred

    def sigmoid(self, x):
        return 1 / (1 + np.exp(-x))

    def mse(self, y_pred, y_true):
        return np.sum(np.square((y_pred - y_true))) / len(y_pred)

    def calculate_grad(self, y_pred, y_true, x):
        '''
        Backpropagation via the chain rule:
        dL/dW = dL/dy_pred * dy_pred/dz * dz/dW,
        where z = W·x, y_pred = sigmoid(z) and L = MSE(y_pred, y_true).
        '''
        # dL/dy_pred for MSE with mean reduction
        mse_grad = 2 * (y_pred - y_true) / len(y_pred)
        # dy_pred/dz for the sigmoid activation
        sigmoid_grad = y_pred * (1 - y_pred)

        grad = mse_grad * sigmoid_grad

        # dz/dW: the outer product with the input gives the weight gradient
        grad = np.dot(grad.reshape(len(x), 1), x.reshape(1, len(x)))

        return grad
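
# The chain-rule gradient above can be sanity-checked numerically. The helper
# below is a minimal finite-difference sketch (not part of the original post);
# it assumes the DiyModel interface defined above and approximates dL/dW by
# perturbing each weight entry in turn. A fairly large eps is used because the
# weights are float32. Example usage:
#   numerical_grad(diy_model, x, y) ≈ diy_model.calculate_grad(diy_model.forward(x), y, x)
def numerical_grad(model, x, y, eps=1e-4):
    num_grad = np.zeros_like(model.weight)
    for i in range(model.weight.shape[0]):
        for j in range(model.weight.shape[1]):
            model.weight[i, j] += eps
            loss_plus = model.forward(x, y)
            model.weight[i, j] -= 2 * eps
            loss_minus = model.forward(x, y)
            model.weight[i, j] += eps  # restore the original weight
            num_grad[i, j] = (loss_plus - loss_minus) / (2 * eps)
    return num_grad
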
# Gradient update: plain SGD step
def diy_sgd(grad, weight, learning_rate):
    return weight - grad * learning_rate
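
# The last section of the script references a diy_adam update in a
# commented-out line, but no such function is defined there. The sketch below
# shows what an Adam-style update could look like (an assumption, not the
# original author's code), using PyTorch's default Adam hyperparameters; the
# moment estimates and step counter are kept in the mutable default `state`
# dict across calls, which is acceptable for a single-weight demo like this.
def diy_adam(grad, weight, learning_rate=0.001, beta1=0.9, beta2=0.999,
             eps=1e-8, state={"t": 0, "m": 0.0, "v": 0.0}):
    state["t"] += 1
    state["m"] = beta1 * state["m"] + (1 - beta1) * grad
    state["v"] = beta2 * state["v"] + (1 - beta2) * grad ** 2
    m_hat = state["m"] / (1 - beta1 ** state["t"])  # bias-corrected first moment
    v_hat = state["v"] / (1 - beta2 ** state["t"])  # bias-corrected second moment
    return weight - learning_rate * m_hat / (np.sqrt(v_hat) + eps)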


x = np.array([1, 2, 3, 4])  # input
y = np.array([3, 2, 4, 5])  # expected output


# torch experiment
torch_model = TorchModel(len(x))
print(torch_model.state_dict())
torch_model_w = torch_model.state_dict()["line_layer.weight"]
print(torch_model_w, "initial weights")
numpy_model_w = copy.deepcopy(torch_model_w.numpy())

torch_x = torch.FloatTensor([x])
torch_y = torch.FloatTensor([y])
# torch forward pass, producing the loss
torch_loss = torch_model.forward(torch_x, torch_y)
print("torch model loss:", torch_loss)
# Hand-rolled loss computation
diy_model = DiyModel(numpy_model_w)
diy_loss = diy_model.forward(x, y)
print("diy model loss:", diy_loss)



# Set up the optimizer
learning_rate = 0.1
optimizer = torch.optim.SGD(torch_model.parameters(), lr=learning_rate)
# optimizer = torch.optim.Adam(torch_model.parameters())
optimizer.zero_grad()
# PyTorch backward pass
torch_loss.backward()
print(torch_model.line_layer.weight.grad, "torch gradient")  # inspect the gradient of this layer's weights

print(diy_model.forward(x), "diy forward output")
# Hand-rolled backward pass
grad = diy_model.calculate_grad(diy_model.forward(x), y, x)
# print(grad, "diy gradient")
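# The hand-derived gradient should match autograd's result; a quick check
# (not in the original):
print("gradients match:", np.allclose(grad, torch_model.line_layer.weight.grad.numpy()))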
# torch gradient update
optimizer.step()
# Inspect the updated weights
update_torch_model_w = torch_model.state_dict()["line_layer.weight"]
print(update_torch_model_w, "torch updated weights")
# Hand-rolled gradient update
diy_update_w = diy_sgd(grad, numpy_model_w, learning_rate)
# diy_update_w = diy_adam(grad, numpy_model_w)  # alternative: Adam-style update (see sketch above)
print(diy_update_w, "diy updated weights")
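# Final consistency check (not in the original): after one SGD step the two
# weight matrices should coincide.
print("updated weights match:", np.allclose(diy_update_w, update_torch_model_w.numpy()))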

Source: https://blog.csdn.net/weixin_43198168/article/details/120835127
