14. Loss Functions

L1Loss: the mean (default) or the sum of the absolute errors, selected via the reduction parameter ('mean' or 'sum').

https://pytorch.org/docs/stable/generated/torch.nn.L1Loss.html#torch.nn.L1Loss
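
A quick check of the two reduction modes (a minimal sketch with made-up tensors):

import torch
from torch import nn

a = torch.tensor([1., 2., 3.])
b = torch.tensor([2., 4., 6.])
print(nn.L1Loss(reduction='mean')(a, b))  # tensor(2.) -> (1 + 2 + 3) / 3
print(nn.L1Loss(reduction='sum')(a, b))   # tensor(6.) -> 1 + 2 + 3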

MSELoss (mean squared error): https://pytorch.org/docs/stable/generated/torch.nn.MSELoss.html#torch.nn.MSELoss

CrossEntropyLoss: cross-entropy loss.

https://pytorch.org/docs/stable/generated/torch.nn.CrossEntropyLoss.html#torch.nn.CrossEntropyLoss

It is useful when training a classification problem with C classes.

CrossEntropyLoss in PyTorch has softmax built in: it applies LogSoftmax to the raw scores and then takes the negative log-likelihood (NLLLoss), so it expects unnormalized logits as input, not probabilities.
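
Concretely, for a score vector x and target class index c it computes -x[c] + log(sum_j exp(x[j])). A small sketch (with made-up scores) to check that the fused version matches LogSoftmax followed by NLLLoss:

import torch
from torch import nn

scores = torch.tensor([[1.0, 2.0, 0.5]])  # raw logits for one sample, 3 classes
target = torch.tensor([1])

fused = nn.CrossEntropyLoss()(scores, target)
manual = nn.NLLLoss()(nn.LogSoftmax(dim=1)(scores), target)
print(fused, manual)  # the two values are identical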

For example, suppose there are 3 classes in total. The network's final output (which is also the input to CrossEntropyLoss) is a set of raw, unprocessed scores, one per class, and the target is 1, i.e. class 1: dog. The code below walks through this.

import torch
from torch import nn

inputs = torch.tensor([1, 2, 3], dtype=torch.float32)
targets = torch.tensor([2, 4, 6], dtype=torch.float32)

l1loss = nn.L1Loss()    # default reduction='mean'
mseloss = nn.MSELoss()
# l1loss = nn.L1Loss(reduction='sum')
# mseloss = nn.MSELoss(reduction='sum')

result_l1loss = l1loss(inputs, targets)
result_MSEloss = mseloss(inputs, targets)
print(result_l1loss)   # tensor(2.)     = (1 + 2 + 3) / 3
print(result_MSEloss)  # tensor(4.6667) = (1 + 4 + 9) / 3

x = torch.tensor([0.1, 0.2, 0.3])  # raw scores for the 3 classes
y = torch.tensor([1])              # target class index: 1 (dog)
x = torch.reshape(x, (1, 3))       # CrossEntropyLoss expects shape (N, C)
loss_cross = nn.CrossEntropyLoss()
result_cross = loss_cross(x, y)
print(result_cross)                # tensor(1.1019)
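
To see where tensor(1.1019) comes from, the formula above can be applied by hand to the same x and y:

import torch

x = torch.tensor([0.1, 0.2, 0.3])
# -x[class] + log(sum(exp(x))) with class = 1
manual = -x[1] + torch.log(torch.exp(x).sum())
print(manual)  # tensor(1.1019), matching result_cross above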
import torchvision
from torch import nn
from torch.nn import Conv2d, MaxPool2d, Linear, Flatten
from torch.utils.data import DataLoader

dataset = torchvision.datasets.CIFAR10('./dataset', train=False,
                                       transform=torchvision.transforms.ToTensor(),
                                       download=True)
dataloader = DataLoader(dataset, batch_size=1)

class DEMO(nn.Module):
    def __init__(self):
        super(DEMO, self).__init__()
        self.model = nn.Sequential(
            Conv2d(in_channels=3, out_channels=32, kernel_size=5, padding=2),
            MaxPool2d(kernel_size=2),
            Conv2d(in_channels=32, out_channels=32, kernel_size=5, padding=2),
            MaxPool2d(kernel_size=2),
            Conv2d(in_channels=32, out_channels=64, kernel_size=5, padding=2),
            MaxPool2d(kernel_size=2),
            Flatten(),
            Linear(in_features=1024, out_features=64),
            Linear(in_features=64, out_features=10),
        )

    def forward(self, x):
        x = self.model(x)
        return x

demo = DEMO()
loss_cross = nn.CrossEntropyLoss()
for data in dataloader:
    imgs, targets = data
    output = demo(imgs)
    # print(output)   raw scores for the 10 classes (not probabilities)
    # print(targets)  the target class index
    loss = loss_cross(output, targets)
    loss.backward()   # compute gradients for every parameter
    print(loss)

Stepping through with the debugger, you can inspect each layer's weight and bias.
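
The same inspection can also be done in code rather than in the debugger (a small sketch, reusing the demo instance defined above):

# list every parameter of the model with its shape
for name, param in demo.named_parameters():
    print(name, param.shape)   # e.g. model.0.weight torch.Size([32, 3, 5, 5])

# or grab a single layer directly
print(demo.model[0].weight.shape)  # weight of the first Conv2d
print(demo.model[0].bias.shape)    # bias of the first Conv2d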

Once the loss is computed, calling backward computes the gradients, which are then used to update the parameters.

After adding the backward call, continue debugging:

each parameter now holds an automatically computed gradient, grad (without backward it is never filled in), which the optimizer will use in the following step.
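
As a preview of that next step, here is a minimal sketch of a full update loop; the optimizer choice (torch.optim.SGD with lr=0.01) is my assumption, not from the note:

import torch

optimizer = torch.optim.SGD(demo.parameters(), lr=0.01)  # assumed optimizer and lr
for data in dataloader:
    imgs, targets = data
    output = demo(imgs)
    loss = loss_cross(output, targets)
    optimizer.zero_grad()  # clear gradients left over from the previous step
    loss.backward()        # fill param.grad for every parameter
    optimizer.step()       # update the parameters using the gradients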