Reading notes on 《深度学习之Pytorch实战计算机视觉》
Chapter 6: PyTorch Basics
### 1. Tensor
- Data types
    - torch.FloatTensor: a Tensor of floating-point values
    - torch.IntTensor: a Tensor of integer values
    - torch.rand: generates a float Tensor with the specified dimensions, with values uniformly distributed in [0, 1)
    - torch.randn: generates a float Tensor with the specified dimensions, with values drawn from a normal distribution with mean 0 and variance 1
    - torch.range: generates a float Tensor over a user-defined start/end range (deprecated; see the torch.arange note after the code below)
    - torch.zeros: generates a float Tensor with the specified dimensions, filled with zeros

```python
import torch

# FloatTensor
a = torch.FloatTensor(2, 3)          # uninitialized 2x3 float Tensor
b = torch.FloatTensor([3, 4, 5, 6])  # built from a list

# IntTensor
a = torch.IntTensor(2, 3)
b = torch.IntTensor([2, 3, 4])

# rand
a = torch.rand(2, 3)

# randn
b = torch.randn(2, 3)

# range — arguments: start, end, step
a = torch.range(1, 20, 1)

# zeros
b = torch.zeros(2, 3)
```
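Note that torch.range is deprecated in recent PyTorch: unlike its replacement torch.arange, it includes the end point. A quick comparison:

```python
import torch

# torch.range(1, 20, 1) yields 1, 2, ..., 20 — the end point is included.
# torch.arange excludes the end point, so shift it by one step:
a = torch.arange(1, 21, 1)
print(a.shape)  # torch.Size([20])

# torch.arange infers an integer dtype from integer arguments;
# pass floats (e.g. torch.arange(1.0, 21.0)) to get a float Tensor.
```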
- **Basic operations**
    - torch.abs: returns the absolute value of the input
    - torch.add: returns the sum of the inputs; each input can be a Tensor or a scalar
    - torch.clamp: clips the input to a user-defined range and returns the result (arguments: Tensor, lower bound, upper bound; every element is compared against the two bounds: an element below the lower bound is rewritten to the lower bound, an element above the upper bound to the upper bound)
    - torch.div: returns the quotient of the inputs; each input can be a Tensor or a scalar
    - torch.mul: returns the product of the inputs; each input can be a Tensor or a scalar
    - torch.pow: returns the input raised to a power; each input can be a Tensor or a scalar
    - torch.mm: returns the matrix product of the inputs (matrix multiplication; the dimensions must satisfy the matrix-multiplication rule — see the shape sketch after the code below)
    - torch.mv: returns the matrix-vector product of the inputs; the first argument is the matrix, the second the vector

```python
import torch

# abs
a = torch.randn(2, 3)
b = torch.abs(a)

# add
c = torch.add(a, b)
d = torch.add(c, 10)   # Tensor + scalar

# clamp
c = torch.clamp(a, -0.1, 0.1)

# div
b = torch.randn(2, 3)
c = torch.div(a, b)
d = torch.div(a, 10)

# mul
c = torch.mul(a, b)
d = torch.mul(b, 10)

# pow
c = torch.pow(a, 2)

# mm: inner dimensions must match, so multiply (2,3) by (3,2)
c = torch.mm(a, b.t())

# mv: the second argument must be a vector of matching length
v = torch.randn(3)
c = torch.mv(a, v)
```
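The shape requirements of torch.mm and torch.mv are easiest to see on a concrete example (a minimal sketch; the names m, n and v are illustrative):

```python
import torch

m = torch.randn(2, 3)
n = torch.randn(3, 4)
v = torch.randn(3)

print(torch.mm(m, n).shape)  # torch.Size([2, 4]) — (2,3) x (3,4)
print(torch.mv(m, v).shape)  # torch.Size([2])    — matrix x vector
```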
- A simple neural network (two linear layers with a ReLU in between, trained with hand-derived backpropagation)

```python
import torch

# Number of samples in one batch
batch_n = 100
# Number of features kept after the hidden layer
hidden_layer = 100
# Number of features in each sample
input_data = 1000
# Number of output features
output_data = 10

x = torch.randn(batch_n, input_data)
y = torch.randn(batch_n, output_data)

# Weight parameters
w1 = torch.randn(input_data, hidden_layer)
w2 = torch.randn(hidden_layer, output_data)

# Total number of training iterations
epoch_n = 20
learning_rate = 1e-6

for epoch in range(epoch_n):
    # Forward pass
    h1 = x.mm(w1)
    h1 = h1.clamp(min=0)   # ReLU
    y_pred = h1.mm(w2)

    loss = (y_pred - y).pow(2).sum()
    print("Epoch:{},Loss:{:.4f}".format(epoch, loss))

    # Backward pass, derived by hand
    grad_y_pred = 2 * (y_pred - y)
    grad_w2 = h1.t().mm(grad_y_pred)

    grad_h = grad_y_pred.clone()
    grad_h = grad_h.mm(w2.t())
    grad_h.clamp_(min=0)   # in-place: zero out the negative part (ReLU backward)
    grad_w1 = x.t().mm(grad_h)

    # Gradient-descent update
    w1 -= learning_rate * grad_w1
    w2 -= learning_rate * grad_w2
```
<a name="6dc95cb0"></a>### 2. 自动梯度- **autograd**- 完成神经网络后向传播中的链式求导- 通过输入的Tensor数据类型的变量在神经网络的前向传播过程中生成一张计算图,然后根据这个计算图和输出结果准确计算出每个参数需要更新的梯度,并通过完成后向传播完成对参数的梯度更新```pythonimport torchfrom torch.autograd import Variablebatch_n=100hidden_layer=100input_data=1000output_data=10x=Variable(torch.randn(batch_n,input_data),requires_grad=False)y=Variable(torch.randn(batch_n,output_data),requires_grad=False)w1=Variable(torch,randn(input_data,hidden_layer),requires_grad=True)w2=Variable(torch.randn(hidden_layer,output_data),requires_grad=True)epoch_n=20learning_rate=1e-6for epoch in range(epoch_n):y_pred=x.mm(w1).clamp(min=0).mm(w2)loss=(y_pred-y).pow(2).sum()print("Epoch:{},Loss:{:.4f}".format(epoch,loss.data[0]))loss.backward()w1.data-=learning_rate*w1.grad.dataw2.data-=learning_rate*w2.grad.dataw1.grad.data.zero_()w2.grad.data.zero_()
- Custom propagation function

```python
import torch
from torch.autograd import Variable

batch_n = 64
hidden_layer = 100
input_data = 1000
output_data = 10

class Model(torch.nn.Module):
    def __init__(self):
        super(Model, self).__init__()

    def forward(self, input, w1, w2):
        x = torch.mm(input, w1)
        x = torch.clamp(x, min=0)
        x = torch.mm(x, w2)
        return x

    def backward(self):
        pass  # stub: autograd handles the actual backward pass

model = Model()

x = Variable(torch.randn(batch_n, input_data), requires_grad=False)
y = Variable(torch.randn(batch_n, output_data), requires_grad=False)

w1 = Variable(torch.randn(input_data, hidden_layer), requires_grad=True)
w2 = Variable(torch.randn(hidden_layer, output_data), requires_grad=True)

epoch_n = 30
learning_rate = 1e-6

for epoch in range(epoch_n):
    y_pred = model(x, w1, w2)

    loss = (y_pred - y).pow(2).sum()
    print("Epoch:{},Loss:{:.4f}".format(epoch, loss.item()))
    loss.backward()
    w1.data -= learning_rate * w1.grad.data
    w2.data -= learning_rate * w2.grad.data
    w1.grad.data.zero_()
    w2.grad.data.zero_()
```
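The backward stub above is never called; autograd derives the gradients from forward on its own. When a hand-written backward really is needed, the mechanism is torch.autograd.Function rather than torch.nn.Module (this goes beyond the book's example); a minimal sketch implementing ReLU:

```python
import torch

class MyReLU(torch.autograd.Function):
    @staticmethod
    def forward(ctx, input):
        ctx.save_for_backward(input)  # stash the input for the backward pass
        return input.clamp(min=0)

    @staticmethod
    def backward(ctx, grad_output):
        input, = ctx.saved_tensors
        grad_input = grad_output.clone()
        grad_input[input < 0] = 0     # gradient is zero where ReLU was inactive
        return grad_input

x = torch.randn(4, requires_grad=True)
MyReLU.apply(x).sum().backward()
print(x.grad)
```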
<a name="28694292"></a>### 3. 参数优化- **torch.nn.Sequential**<br />序列容器,通过在容器中嵌套各种神经网络中具体功能的类,完成对神经网络的搭建。传入参数会按照定义好的顺序自动传递下去。<br />**OrderedDict传入模块:**```pythoonmodels=torch.nn.Sequential(OrderedDict([("Line1",torch.nn.Linear(input_data,hidden_layer)),("Relu1",torch.nn.ReLU()),("Line2",torch.nn.Linear(hidden_layer,output_data))]))
- **torch.nn.Linear**<br />Defines a linear layer of the model; its arguments are the number of input features, the number of output features, and whether to use a bias. The torch.nn.Linear class automatically creates weight and bias parameters of the corresponding dimensions.
- Activation function
    - torch.nn.ReLU: nonlinear activation
- Loss functions (a short usage sketch follows this list)
    - torch.nn.MSELoss: mean squared error
    - torch.nn.L1Loss: mean absolute error
    - torch.nn.CrossEntropyLoss: cross entropy
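A quick sanity check of the three loss modules (a minimal sketch; the tensors are made-up examples):

```python
import torch

loss_mse = torch.nn.MSELoss()
loss_l1 = torch.nn.L1Loss()
loss_ce = torch.nn.CrossEntropyLoss()

pred = torch.randn(4, 10)             # batch of 4, 10 outputs
target = torch.randn(4, 10)           # regression targets
classes = torch.tensor([1, 0, 9, 3])  # class indices for cross entropy

print(loss_mse(pred, target))  # mean of squared differences
print(loss_l1(pred, target))   # mean of absolute differences
print(loss_ce(pred, classes))  # cross entropy over raw scores (logits)
```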
- **torch.optim**<br />Provides classes that implement automatic parameter optimization (e.g. SGD, Adam, RMSprop, Adagrad).

```python
import torch
from collections import OrderedDict
from torch.autograd import Variable

batch_n = 64
hidden_layer = 100
input_data = 1000
output_data = 10

# torch.nn.Sequential: sequential container
models = torch.nn.Sequential(
    torch.nn.Linear(input_data, hidden_layer),
    torch.nn.ReLU(),
    torch.nn.Linear(hidden_layer, output_data))

# Equivalent definition, naming each module through an OrderedDict
models = torch.nn.Sequential(OrderedDict([
    ("Line1", torch.nn.Linear(input_data, hidden_layer)),
    ("Relu1", torch.nn.ReLU()),
    ("Line2", torch.nn.Linear(hidden_layer, output_data))
]))

x = Variable(torch.randn(batch_n, input_data), requires_grad=False)
y = Variable(torch.randn(batch_n, output_data), requires_grad=False)

epoch_n = 10000
learning_rate = 1e-4
loss_fn = torch.nn.MSELoss()

optimizer = torch.optim.Adam(models.parameters(), lr=learning_rate)

for epoch in range(epoch_n):
    y_pred = models(x)
    loss = loss_fn(y_pred, y)
    if epoch % 1000 == 0:
        print("Epoch:{},Loss:{:.4f}".format(epoch, loss.item()))
    models.zero_grad()
    loss.backward()
    optimizer.step()

# Manual update, replaced by optimizer.step() above:
# for param in models.parameters():
#     param.data -= param.grad.data * learning_rate
```
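All optimizers in torch.optim share this interface, so swapping one in is a one-line change, e.g. plain SGD with momentum:

```python
optimizer = torch.optim.SGD(models.parameters(), lr=1e-4, momentum=0.9)
```

optimizer.zero_grad() could equally be used in place of models.zero_grad() above; both clear the gradients of the same parameters.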
### 4. Handwritten Digit Recognition

- **torchvision**<br />Handles data processing, loading and preview.
- **torchvision.transforms**<br />Applies transformations or data augmentation to the loaded data.
- **transforms.Normalize** (standardization):<br />Standardizes the data using the mean and standard deviation of the original data; after standardization, the data follow a standard normal distribution with mean 0 and standard deviation 1.<br />Formula: x_norm = (x - mean) / std (a verification sketch follows this list).
- **transforms.Resize**<br />Scales the loaded image to the required size.
- **transforms.Scale**<br />Scales the loaded data to the required size (an older, deprecated alias of Resize).
- **transforms.CenterCrop**<br />Crops the loaded image to the required size around its center.
- **transforms.RandomCrop**<br />Randomly crops the loaded image to the required size.
- **transforms.RandomHorizontalFlip**<br />Flips the loaded image horizontally with a given probability.
- **transforms.RandomVerticalFlip**<br />Flips the loaded image vertically with a given probability.
- **transforms.ToTensor**<br />Converts the loaded image data from a PIL image into a Tensor variable.
- **transforms.ToPILImage**<br />Converts Tensor data back into a PIL image.
- **DataLoader**
    - batch_size: the number of samples in each batch
    - shuffle: whether to shuffle the data while loading
- Data preview<br />MNIST images are grayscale with a single channel, so there is no need to normalize all three channels.

```python
# Preview one batch of images
img = img.numpy().transpose(1, 2, 0)  # CHW -> HWC for matplotlib
std = [0.5, 0.5, 0.5]
mean = [0.5, 0.5, 0.5]
img = img * std + mean                # undo the normalization
print([labels[i] for i in range(64)])
# plt.imshow() prepares the image for display;
# plt.show() renders what plt.imshow() prepared
plt.imshow(img)
plt.show()
```
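The standardization formula can be checked directly against the transform (a minimal sketch with a made-up single-channel image):

```python
import torch
from torchvision import transforms

norm = transforms.Normalize([0.5], [0.5])
img = torch.rand(1, 4, 4)                 # fake 1-channel image, values in [0, 1)
manual = (img - 0.5) / 0.5                # x_norm = (x - mean) / std
print(torch.allclose(norm(img), manual))  # True
```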
- Model building and training

```python
import torch
import torchvision
from torchvision import datasets, transforms
from torch.autograd import Variable
import numpy as np                # used by the preview snippet above
import matplotlib.pyplot as plt   # used by the preview snippet above

transform = transforms.Compose([
    transforms.ToTensor(),
    # grayscale images have a single channel
    transforms.Normalize([0.5], [0.5])
])

data_train = datasets.MNIST(root="./data", transform=transform, train=True, download=True)
data_test = datasets.MNIST(root="./data", transform=transform, train=False)
data_loader_train = torch.utils.data.DataLoader(dataset=data_train, batch_size=64, shuffle=True)
data_loader_test = torch.utils.data.DataLoader(dataset=data_test, batch_size=64, shuffle=True)

images, labels = next(iter(data_loader_train))
img = torchvision.utils.make_grid(images)

class Models(torch.nn.Module):
    def __init__(self):
        super(Models, self).__init__()
        self.conv1 = torch.nn.Sequential(
            torch.nn.Conv2d(1, 64, kernel_size=3, stride=1, padding=1),
            torch.nn.ReLU(),
            torch.nn.Conv2d(64, 128, kernel_size=3, stride=1, padding=1),
            torch.nn.ReLU(),
            torch.nn.MaxPool2d(stride=2, kernel_size=2))
        self.dense = torch.nn.Sequential(
            torch.nn.Linear(14 * 14 * 128, 1024),
            torch.nn.ReLU(),
            torch.nn.Dropout(p=0.5),
            torch.nn.Linear(1024, 10))

    def forward(self, x):
        x = self.conv1(x)
        x = x.view(-1, 14 * 14 * 128)  # flatten: 28x28 input pooled once -> 14x14x128
        x = self.dense(x)
        return x

model = Models()
cost = torch.nn.CrossEntropyLoss()
optimizer = torch.optim.Adam(model.parameters())

n_epochs = 5

for epoch in range(n_epochs):
    running_loss = 0.0
    running_correct = 0
    print("Epoch {}/{}".format(epoch, n_epochs))
    for data in data_loader_train:
        X_train, Y_train = data
        X_train, Y_train = Variable(X_train), Variable(Y_train)
        outputs = model(X_train)
        _, pred = torch.max(outputs.data, 1)
        optimizer.zero_grad()
        loss = cost(outputs, Y_train)

        loss.backward()
        optimizer.step()
        running_loss += loss.item()
        running_correct += torch.sum(pred == Y_train.data)

    testing_correct = 0
    for data in data_loader_test:
        X_test, Y_test = data
        X_test, Y_test = Variable(X_test), Variable(Y_test)
        outputs = model(X_test)
        _, pred = torch.max(outputs.data, 1)
        testing_correct += torch.sum(pred == Y_test.data)

    print("Loss:{:.4f},Train Accuracy:{:.4f},Test Accuracy:{:.4f}".format(
        running_loss / len(data_train),
        100 * running_correct / len(data_train),
        100 * testing_correct / len(data_test)))
```
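After training, a quick look at a handful of test predictions (a minimal sketch reusing model and data_loader_test from above):

```python
X_test, Y_test = next(iter(data_loader_test))
outputs = model(Variable(X_test))
_, pred = torch.max(outputs.data, 1)
print("Predicted:", pred[:8].tolist())
print("Actual:   ", Y_test[:8].tolist())
```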
