https://pytorch.org/tutorials/beginner/basics/buildmodel_tutorial.html

Preparation

Import packages

  import os
  import torch
  from torch import nn
  from torch.utils.data import DataLoader
  from torchvision import datasets, transforms

Get device for training

  device = 'cuda' if torch.cuda.is_available() else 'cpu'
  print(f'Using {device} device')
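On newer PyTorch builds the same idea extends to Apple-silicon GPUs. A minimal sketch, assuming a PyTorch version that ships torch.backends.mps:

  # Prefer CUDA, then Apple MPS, then fall back to the CPU (sketch, not part of the original tutorial)
  if torch.cuda.is_available():
      device = 'cuda'
  elif torch.backends.mps.is_available():
      device = 'mps'
  else:
      device = 'cpu'
  print(f'Using {device} device')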

Define the Class

A custom model is a class that inherits from nn.Module. We define our neural network by subclassing nn.Module and initialize the network layers in __init__. Every nn.Module subclass implements the operations on the input data in its forward method.

  class NeuralNetwork(nn.Module):
      def __init__(self):
          super(NeuralNetwork, self).__init__()
          self.flatten = nn.Flatten()
          self.linear_relu_stack = nn.Sequential(
              nn.Linear(28*28, 512),
              nn.ReLU(),
              nn.Linear(512, 512),
              nn.ReLU(),
              nn.Linear(512, 10),
          )

      def forward(self, x):
          x = self.flatten(x)
          logits = self.linear_relu_stack(x)
          return logits

We create an instance of NeuralNetwork, move it to the device, and print its structure.

  model = NeuralNetwork().to(device)
  print(model)

Use the model

Passing input data to the model executes the model's forward pass, along with some background operations. Do not call model.forward() directly.

  X = torch.rand(1, 28, 28, device=device)
  logits = model(X)
  pred_probab = nn.Softmax(dim=1)(logits)
  y_pred = pred_probab.argmax(1)
  print(f"Predicted class: {y_pred}")

Model layers

Let's break down the layers in the model. To follow the data through the network, we take a sample minibatch of 3 images of size 28x28 and see what happens to it as it passes through each layer.

  input_image = torch.rand(3, 28, 28)
  print(input_image.size())
  # out: torch.Size([3, 28, 28])

nn.Flatten

We initialize the nn.Flatten layer to convert each 2D 28x28 image into a contiguous array of 784 pixel values (the minibatch dimension, at dim=0, is maintained).

  flatten = nn.Flatten()
  flat_image = flatten(input_image)
  print(flat_image.size())
  # out: torch.Size([3, 784])

nn.Linear

The linear layer is a module that applies a linear transformation to the input using its stored weights and biases.

  layer1 = nn.Linear(in_features=28*28, out_features=20)
  hidden1 = layer1(flat_image)
  print(hidden1.size())
  # out: torch.Size([3, 20])
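To see the stored weights and biases mentioned above, you can inspect the layer's parameters directly; their shapes follow from in_features and out_features:

  # The layer stores a (out_features, in_features) weight matrix and an out_features-long bias vector
  print(layer1.weight.size())   # torch.Size([20, 784])
  print(layer1.bias.size())     # torch.Size([20])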

nn.ReLU

Non-linear activation functions are what create the complex mappings between the model's inputs and outputs. They are applied after linear transformations to introduce non-linearity, helping neural networks learn a wide variety of phenomena.

In this model we use nn.ReLU between our linear layers, but there are other activation functions that can introduce non-linearity in your model (a short sketch with a different activation follows the example below).

  1. print(f"Before ReLU: {hidden1}\n\n")
  2. hidden1 = nn.ReLU()(hidden1)
  3. print(f"After ReLU: {hidden1}")

nn.Sequential

nn.Sequential is an ordered container of modules. The data is passed through all the modules in the same order as defined. You can use sequential containers to put together a quick network like seq_modules.

  seq_modules = nn.Sequential(
      flatten,
      layer1,
      nn.ReLU(),
      nn.Linear(20, 10)
  )
  input_image = torch.rand(3, 28, 28)
  logits = seq_modules(input_image)
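As with the earlier layers, we can check the output shape: the final nn.Linear(20, 10) produces one row of 10 raw scores per image.

  print(logits.size())
  # out: torch.Size([3, 10])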

nn.Softmax
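The logits returned by the last linear layer are raw values; nn.Softmax scales them to the range [0, 1] along the dimension given by dim, so each row can be read as predicted probabilities. A short sketch continuing from the seq_modules example above:

  softmax = nn.Softmax(dim=1)
  pred_probab = softmax(logits)
  print(pred_probab.sum(dim=1))   # each row of probabilities sums to 1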