- Neural networks comprise layers/modules that perform operations on data. The `torch.nn` namespace provides all the building blocks you need to build your own neural network. Every module in PyTorch subclasses `nn.Module`. A neural network is a module itself that consists of other modules (layers). This nested structure allows building and managing complex architectures easily. In the following sections, we'll build a neural network to classify images in the FashionMNIST dataset.
```python
import os
import torch
from torch import nn
from torch.utils.data import DataLoader
from torchvision import datasets, transforms
```
1、Get Device for Training
- We want to be able to train our model on a hardware accelerator like the GPU, if it is available. Let's check whether `torch.cuda` is available; otherwise we continue to use the CPU.
- `torch.cuda`:[https://pytorch.org/docs/stable/notes/cuda.html](https://pytorch.org/docs/stable/notes/cuda.html)

```python
device = "cuda" if torch.cuda.is_available() else "cpu"
print(f"Using {device} device")

"""
Using cuda device
"""
```
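On machines without an NVIDIA GPU, the same pattern can fall back to other backends. Below is a minimal sketch (not part of the original tutorial) that also checks Apple's MPS backend; it assumes a PyTorch build recent enough (1.12+) to ship `torch.backends.mps`.

```python
# Sketch: pick the best available device, falling back from CUDA to Apple MPS to CPU.
# Assumes PyTorch >= 1.12, where torch.backends.mps exists.
if torch.cuda.is_available():
    device = "cuda"
elif torch.backends.mps.is_available():
    device = "mps"  # Apple-silicon GPU
else:
    device = "cpu"
print(f"Using {device} device")
```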
2、Define the Class
- To define a neural network in PyTorch, we subclass `nn.Module` (create a class that inherits from `nn.Module`) and initialize the neural network layers in `__init__` (define the layers of the network in the `__init__` function). Every `nn.Module` subclass implements the operations on input data in the `forward` method (specify how data will pass through the network in the `forward` function).
- To accelerate operations in the neural network, we move it to the GPU if available.

```python
class NeuralNetwork(nn.Module):
    def __init__(self):
        super(NeuralNetwork, self).__init__()
        self.flatten = nn.Flatten()
        self.linear_relu_stack = nn.Sequential(
            nn.Linear(28*28, 512),
            nn.ReLU(),
            nn.Linear(512, 512),
            nn.ReLU(),
            nn.Linear(512, 10)
        )

    def forward(self, x):
        x = self.flatten(x)
        logits = self.linear_relu_stack(x)
        return logits

# create an instance of NeuralNetwork, and move it to the device
model = NeuralNetwork().to(device)

# print its structure
print(model)

"""
NeuralNetwork(
  (flatten): Flatten(start_dim=1, end_dim=-1)
  (linear_relu_stack): Sequential(
    (0): Linear(in_features=784, out_features=512, bias=True)
    (1): ReLU()
    (2): Linear(in_features=512, out_features=512, bias=True)
    (3): ReLU()
    (4): Linear(in_features=512, out_features=10, bias=True)
  )
)
"""
```
- To use the model, we pass it the input data. This executes the model’s `forward`, along with some background operations.
- background operations:[https://github.com/pytorch/pytorch/blob/270111b7b611d174967ed204776985cefca9c144/torch/nn/modules/module.py#L866](https://github.com/pytorch/pytorch/blob/270111b7b611d174967ed204776985cefca9c144/torch/nn/modules/module.py#L866)
- Do not call `model.forward()` directly!
- Calling the model on the input returns a 2-dimensional tensor whose dim=1 holds the 10 raw predicted values (logits), one per class, for each sample. We get the prediction probabilities by passing it through an instance of the `nn.Softmax` module.
```python
X = torch.rand(1, 28, 28, device=device)
logits = model(X)
pred_probab = nn.Softmax(dim=1)(logits)
y_pred = pred_probab.argmax(1)
print(f"Predicted class: {y_pred}")
"""
Predicted class: tensor([6], device='cuda:0')
"""
3、Model Layers (a walkthrough of the initialization above)
Let’s break down the layers in the FashionMNIST model. To illustrate it, we will take a sample minibatch of 3 images of size 28x28 and see what happens to it as we pass it through the network.
```python
input_image = torch.rand(3, 28, 28)
print(input_image.size())

"""
torch.Size([3, 28, 28])
"""
```

### (1)nn.Flatten
- We initialize the `nn.Flatten` layer to convert each 2D 28x28 image into a contiguous array of 784 pixel values (the minibatch dimension (at dim=0) is maintained).
- `nn.Flatten`:[https://pytorch.org/docs/stable/generated/torch.nn.Flatten.html](https://pytorch.org/docs/stable/generated/torch.nn.Flatten.html)

```python
flatten = nn.Flatten()
flat_image = flatten(input_image)
print(flat_image.size())

"""
torch.Size([3, 784])
"""
```
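For comparison (not in the original tutorial), the same reshaping can be written with the functional `torch.flatten`; `nn.Flatten` uses `start_dim=1, end_dim=-1` by default, so the batch dimension is preserved.

```python
# Equivalent to flatten(input_image): keep dim=0 (the batch of 3), flatten the rest.
flat_alt = torch.flatten(input_image, start_dim=1)
print(flat_alt.size())                     # torch.Size([3, 784])
print(torch.equal(flat_alt, flat_image))   # True
```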
<a name="cBlif"></a>
### (2)nn.Linear
- The linear layer is a module that applies a linear transformation on the input using its stored weights and biases.
- `nn.Linear`:[https://pytorch.org/docs/stable/generated/torch.nn.Linear.html](https://pytorch.org/docs/stable/generated/torch.nn.Linear.html)
```python
layer1 = nn.Linear(in_features=28*28, out_features=20)
hidden1 = layer1(flat_image)
print(hidden1.size())
"""
torch.Size([3, 20])
"""
### (3)nn.ReLU
- Non-linear activations are what create the complex mappings between the model’s inputs and outputs. They are applied after linear transformations to introduce nonlinearity, helping neural networks learn a wide variety of phenomena.
- In this model, we use `nn.ReLU` between our linear layers, but there are other activations to introduce non-linearity in your model.
- `nn.ReLU`:[https://pytorch.org/docs/stable/generated/torch.nn.ReLU.html](https://pytorch.org/docs/stable/generated/torch.nn.ReLU.html)

```python
print(f"Before ReLU: {hidden1}\n\n")
hidden1 = nn.ReLU()(hidden1)
print(f"After ReLU: {hidden1}")

"""
Before ReLU: tensor([[-0.4585, -0.5496, -0.1878, -0.3496, 0.0372, -0.3017, 0.2689, 0.2769,
-0.3354, 0.1864, 0.0984, -0.0067, 0.0793, 0.3697, -0.1867, -0.6875,
0.0780, -0.3092, 0.0256, 0.5809],
[-0.5044, -0.3324, -0.6145, -0.1014, -0.3109, -0.1887, 0.0501, 0.4022,
-0.7231, -0.0712, 0.3662, 0.1972, 0.0829, 0.3120, -0.4535, -0.3210,
0.0898, -0.5004, -0.1779, 0.6412],
[-0.4653, -0.3751, -0.6258, 0.1099, -0.2998, -0.0065, -0.0028, 0.8566,
-0.2335, 0.3554, 0.1675, 0.2339, 0.2285, 0.4092, -0.1098, -0.5348,
0.2449, -0.2689, -0.3519, 0.7555]], grad_fn=<AddmmBackward0>)

After ReLU: tensor([[0.0000, 0.0000, 0.0000, 0.0000, 0.0372, 0.0000, 0.2689, 0.2769, 0.0000,
0.1864, 0.0984, 0.0000, 0.0793, 0.3697, 0.0000, 0.0000, 0.0780, 0.0000,
0.0256, 0.5809],
[0.0000, 0.0000, 0.0000, 0.0000, 0.0000, 0.0000, 0.0501, 0.4022, 0.0000,
0.0000, 0.3662, 0.1972, 0.0829, 0.3120, 0.0000, 0.0000, 0.0898, 0.0000,
0.0000, 0.6412],
[0.0000, 0.0000, 0.0000, 0.1099, 0.0000, 0.0000, 0.0000, 0.8566, 0.0000,
0.3554, 0.1675, 0.2339, 0.2285, 0.4092, 0.0000, 0.0000, 0.2449, 0.0000,
0.0000, 0.7555]], grad_fn=<ReluBackward0>)
"""
```
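As the bullet above notes, ReLU is only one choice; here is a quick illustrative comparison of a few other activation modules (the selection is arbitrary and not part of the tutorial):

```python
# Apply a few other non-linearities to fresh pre-activation values from layer1.
pre_act = layer1(flat_image)
for act in (nn.Sigmoid(), nn.Tanh(), nn.GELU()):
    print(act.__class__.__name__, act(pre_act)[0, :4])
```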
<a name="L9OPf"></a>
### (4)nn.Sequential
- `nn.Sequential` is an ordered container of modules. The data is passed through all the modules in the same order as defined. You can use sequential containers to put together a quick network like `seq_modules`.
- `nn.Sequential`:[https://pytorch.org/docs/stable/generated/torch.nn.Sequential.html](https://pytorch.org/docs/stable/generated/torch.nn.Sequential.html)
```python
seq_modules = nn.Sequential(
flatten,
layer1,
nn.ReLU(),
nn.Linear(20, 10)
)
input_image = torch.rand(3,28,28)
logits = seq_modules(input_image)
```
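If you prefer named sub-modules over numeric indices, `nn.Sequential` also accepts an `OrderedDict`; a small sketch (not part of the tutorial, layer names are arbitrary):

```python
from collections import OrderedDict

# Same pipeline as seq_modules, but each stage gets a readable name.
named_seq = nn.Sequential(OrderedDict([
    ("flatten", nn.Flatten()),
    ("hidden", nn.Linear(28*28, 20)),
    ("relu", nn.ReLU()),
    ("output", nn.Linear(20, 10)),
]))
print(named_seq(input_image).shape)  # torch.Size([3, 10])
```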
### (5)nn.Softmax

- The last linear layer of the neural network returns logits - raw values in [-infty, infty] - which are passed to the `nn.Softmax` module. The logits are scaled to values [0, 1] representing the model's predicted probabilities for each class. The `dim` parameter indicates the dimension along which the values must sum to 1.
- `nn.Softmax`:[https://pytorch.org/docs/stable/generated/torch.nn.Softmax.html](https://pytorch.org/docs/stable/generated/torch.nn.Softmax.html)

```python
softmax = nn.Softmax(dim=1)
pred_probab = softmax(logits)
```
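A quick sanity check (not in the tutorial): after `nn.Softmax(dim=1)`, every row of `pred_probab` should sum to 1.

```python
print(pred_probab.shape)       # torch.Size([3, 10])
print(pred_probab.sum(dim=1))  # tensor([1., 1., 1.]) up to floating-point rounding
```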
4、Model Parameters

- Many layers inside a neural network are parameterized, i.e. have associated weights and biases that are optimized during training. Subclassing `nn.Module` automatically tracks all fields defined inside your model object, and makes all parameters accessible using your model's `parameters()` or `named_parameters()` methods.
- In this example, we iterate over each parameter, and print its size and a preview of its values.

```python
print(f"Model structure: {model}\n\n")

for name, param in model.named_parameters():
    print(f"Layer: {name} | Size: {param.size()} | Values : {param[:2]} \n")

"""
Model structure: NeuralNetwork(
  (flatten): Flatten(start_dim=1, end_dim=-1)
  (linear_relu_stack): Sequential(
    (0): Linear(in_features=784, out_features=512, bias=True)
    (1): ReLU()
    (2): Linear(in_features=512, out_features=512, bias=True)
    (3): ReLU()
    (4): Linear(in_features=512, out_features=10, bias=True)
  )
)

Layer: linear_relu_stack.0.weight | Size: torch.Size([512, 784]) | Values : tensor([[-0.0344, -0.0281, 0.0077, ..., 0.0245, -0.0266, 0.0154],
        [-0.0348, -0.0107, -0.0055, ..., -0.0212, 0.0112, 0.0110]],
       device='cuda:0', grad_fn=<SliceBackward0>)

Layer: linear_relu_stack.0.bias | Size: torch.Size([512]) | Values : tensor([-0.0321, -0.0202], device='cuda:0', grad_fn=<SliceBackward0>)

Layer: linear_relu_stack.2.weight | Size: torch.Size([512, 512]) | Values : tensor([[-0.0415, -0.0306, -0.0079, ..., 0.0343, -0.0336, -0.0196],
        [-0.0379, 0.0097, 0.0202, ..., -0.0030, -0.0424, 0.0009]],
       device='cuda:0', grad_fn=<SliceBackward0>)

Layer: linear_relu_stack.2.bias | Size: torch.Size([512]) | Values : tensor([-0.0035, -0.0339], device='cuda:0', grad_fn=<SliceBackward0>)

Layer: linear_relu_stack.4.weight | Size: torch.Size([10, 512]) | Values : tensor([[ 0.0329, -0.0243, -0.0016, ..., -0.0199, 0.0009, 0.0424],
        [ 0.0107, -0.0241, -0.0397, ..., 0.0316, 0.0335, -0.0016]],
       device='cuda:0', grad_fn=<SliceBackward0>)

Layer: linear_relu_stack.4.bias | Size: torch.Size([10]) | Values : tensor([-0.0144, 0.0310], device='cuda:0', grad_fn=<SliceBackward0>)
"""
```
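A common companion to `named_parameters()` is a total parameter count; a minimal sketch (not part of the tutorial):

```python
# Count the trainable parameters of the model defined above:
# 784*512 + 512 + 512*512 + 512 + 512*10 + 10 = 669,706.
total = sum(p.numel() for p in model.parameters() if p.requires_grad)
print(f"Trainable parameters: {total}")
```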
5、References
- `torch.nn` API:[https://pytorch.org/docs/stable/nn.html](https://pytorch.org/docs/stable/nn.html)
