8.nn_conv
The purpose of the convolution operation is to extract features.
import torch
import torch.nn.functional as F

input = torch.tensor([[1, 2, 0, 3, 1],
                      [0, 1, 2, 3, 1],
                      [1, 2, 1, 0, 0],
                      [5, 2, 3, 1, 1],
                      [2, 1, 0, 1, 1]])
kernel = torch.tensor([[1, 2, 1],
                       [0, 1, 0],
                       [2, 1, 0]])
print(input.shape)
print(kernel.shape)

# reshape(input: Tensor, shape: _size) -> Tensor
input = torch.reshape(input, (1, 1, 5, 5))
kernel = torch.reshape(kernel, (1, 1, 3, 3))
print(input.shape)
print(kernel.shape)

# torch.nn.functional.conv2d(input, weight, bias=None, stride=1, padding=0, dilation=1, groups=1) -> Tensor
# input  – input tensor of shape (minibatch, in_channels, iH, iW)
# weight – filters of shape (out_channels, in_channels/groups, kH, kW)
print('output1------------------------------')
output1 = F.conv2d(input, kernel, stride=1)
print(output1)
print(output1.shape)

print('output2------------------------------')
output2 = F.conv2d(input, kernel, stride=2)
print(output2)
print(output2.shape)

print('output3------------------------------')
output3 = F.conv2d(input, kernel, stride=2, padding=1)
print(output3)
print(output3.shape)
Parameters of conv2d:
input must be 4-dimensional; its first two dimensions are minibatch and in_channels.
weight must be 4-dimensional; its first two dimensions are out_channels and in_channels/groups (groups defaults to 1).
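As a minimal sketch of these two shape conventions with multi-channel tensors (the sizes here — minibatch 2, 3 input channels, 4 output channels — are hypothetical, not from the example above):

```python
import torch
import torch.nn.functional as F

# input: (minibatch, in_channels, iH, iW)
x = torch.randn(2, 3, 5, 5)
# weight: (out_channels, in_channels/groups, kH, kW); groups defaults to 1
w = torch.randn(4, 3, 3, 3)

out = F.conv2d(x, w, stride=1)
# output: (minibatch, out_channels, oH, oW)
print(out.shape)  # torch.Size([2, 4, 3, 3])
```

Note that out_channels comes from the weight's first dimension, while minibatch is carried through from the input.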
Running the script above prints:
torch.Size([5, 5])
torch.Size([3, 3])
torch.Size([1, 1, 5, 5])
torch.Size([1, 1, 3, 3])
output1------------------------------
tensor([[[[10, 12, 12],
          [18, 16, 16],
          [13,  9,  3]]]])
torch.Size([1, 1, 3, 3])
output2------------------------------
tensor([[[[10, 12],
          [13,  3]]]])
torch.Size([1, 1, 2, 2])
output3------------------------------
tensor([[[[ 1,  4,  8],
          [ 7, 16,  8],
          [14,  9,  4]]]])
torch.Size([1, 1, 3, 3])
Taking output1 as an example: the input is 5×5 (reshaped to 4-D), and after passing through a 3×3 kernel (also 4-D) with stride 1, the output is a 4-D tensor whose spatial size is 3×3.
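The three output sizes above all follow the standard formula oH = floor((iH + 2·padding − kH) / stride) + 1. A quick check (helper function is my own, not part of the original note):

```python
def conv_out_size(i, k, stride=1, padding=0):
    # floor((i + 2*padding - k) / stride) + 1
    return (i + 2 * padding - k) // stride + 1

print(conv_out_size(5, 3, stride=1))             # 3 -> output1 is 3x3
print(conv_out_size(5, 3, stride=2))             # 2 -> output2 is 2x2
print(conv_out_size(5, 3, stride=2, padding=1))  # 3 -> output3 is 3x3
```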
