14.lossfunction
L1Loss: returns either the sum of the errors or their mean, controlled by the reduction parameter
https://pytorch.org/docs/stable/generated/torch.nn.L1Loss.html#torch.nn.L1Loss
MSEloss:https://pytorch.org/docs/stable/generated/torch.nn.MSELoss.html#torch.nn.MSELoss
CrossEntropyLoss (cross-entropy)
https://pytorch.org/docs/stable/generated/torch.nn.CrossEntropyLoss.html#torch.nn.CrossEntropyLoss
It is useful when training a classification problem with C classes.
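To make the reduction parameter concrete, a minimal sketch (the tensors here are made up for illustration):

```python
import torch
from torch import nn

# Hypothetical tensors, just to compare the two reduction modes.
a = torch.tensor([1., 2., 3.])
b = torch.tensor([2., 4., 6.])

print(nn.L1Loss(reduction='mean')(a, b))   # tensor(2.): (1 + 2 + 3) / 3
print(nn.L1Loss(reduction='sum')(a, b))    # tensor(6.): 1 + 2 + 3
print(nn.MSELoss(reduction='mean')(a, b))  # tensor(4.6667): (1 + 4 + 9) / 3
print(nn.MSELoss(reduction='sum')(a, b))   # tensor(14.): 1 + 4 + 9
```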
PyTorch's CrossEntropyLoss has softmax built in: it applies LogSoftmax to the raw scores internally before taking the negative log-likelihood, so the network's output should not be passed through softmax first.
For example, suppose there are 3 classes in total. The network's final output (which is also the input to CrossEntropyLoss) is a set of raw, unnormalized scores, one per class, and the target is 1, i.e., class 1: dog.
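The "fused softmax" can be checked directly: CrossEntropyLoss is equivalent to LogSoftmax followed by NLLLoss, so the two computations below should produce the same value (the scores reuse the example from this section):

```python
import torch
from torch import nn

x = torch.tensor([[0.1, 0.2, 0.3]])  # raw scores for 3 classes, batch of 1
y = torch.tensor([1])                # target: class index 1

fused = nn.CrossEntropyLoss()(x, y)
manual = nn.NLLLoss()(torch.log_softmax(x, dim=1), y)
print(fused)   # same value as manual: softmax is applied inside CrossEntropyLoss
print(manual)
```

This is why the raw scores (logits) are fed to CrossEntropyLoss directly.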
```python
import torch
from torch import nn

input = torch.tensor([1, 2, 3], dtype=torch.float32)
output = torch.tensor([2, 4, 6], dtype=torch.float32)

l1loss = nn.L1Loss()
mseloss = nn.MSELoss()
# l1loss = nn.L1Loss(reduction='sum')
# mseloss = nn.MSELoss(reduction='sum')
result_l1loss = l1loss(input, output)
result_MSEloss = mseloss(input, output)
print(result_l1loss)   # tensor(2.): (|1-2| + |2-4| + |3-6|) / 3
print(result_MSEloss)  # tensor(4.6667): (1 + 4 + 9) / 3

x = torch.tensor([0.1, 0.2, 0.3])
y = torch.tensor([1])
x = torch.reshape(x, (1, 3))  # CrossEntropyLoss expects (batch, num_classes)
loss_cross = nn.CrossEntropyLoss()
result_cross = loss_cross(x, y)
print(result_cross)
```
```python
import torchvision
from torch import nn
from torch.nn import Conv2d, MaxPool2d, Linear, Flatten
from torch.utils.data import DataLoader

dataset = torchvision.datasets.CIFAR10('./dataset', train=False,
                                       transform=torchvision.transforms.ToTensor(),
                                       download=True)
dataloader = DataLoader(dataset, batch_size=1)

class DEMO(nn.Module):
    def __init__(self):
        super(DEMO, self).__init__()
        self.model = nn.Sequential(
            Conv2d(in_channels=3, out_channels=32, kernel_size=5, padding=2),
            MaxPool2d(kernel_size=2),
            Conv2d(in_channels=32, out_channels=32, kernel_size=5, padding=2),
            MaxPool2d(kernel_size=2),
            Conv2d(in_channels=32, out_channels=64, kernel_size=5, padding=2),
            MaxPool2d(kernel_size=2),
            Flatten(),
            Linear(in_features=1024, out_features=64),
            Linear(in_features=64, out_features=10),
        )

    def forward(self, x):
        x = self.model(x)
        return x

demo = DEMO()
loss_cross = nn.CrossEntropyLoss()
for data in dataloader:
    imgs, targets = data
    output = demo(imgs)
    # output: a row of 10 raw scores from the network (not probabilities)
    # targets: the ground-truth class index
    loss = loss_cross(output, targets)
    loss.backward()
    print(loss)
```
By pausing in the debugger you can inspect each layer's weight and bias.
Once the loss is computed, backward() computes the gradients, which are then used to update the parameters.
After adding loss.backward(), debug again:
each parameter now has an automatically computed grad attribute (without backward it is never filled in), which the optimizer uses in the next step.
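To see the grad attribute appear without stepping through a debugger, a minimal sketch using a stand-in nn.Linear model (hypothetical, to avoid the CIFAR10 download; the network above behaves the same way):

```python
import torch
from torch import nn

model = nn.Linear(3, 2)  # tiny stand-in for the CIFAR10 network
x = torch.randn(1, 3)
target = torch.tensor([1])

loss = nn.CrossEntropyLoss()(model(x), target)
print(model.weight.grad)        # None: no gradient before backward
loss.backward()
print(model.weight.grad.shape)  # torch.Size([2, 3]): grad is now populated
```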
