Basic Information

0. Notes

Having studied convolutional neural networks, I am writing down some informal notes to record and review what I learned.

1. Basic Concepts of Convolution

Convolution kernel

A convolution kernel is simply a smallish matrix of weights.

Cross-correlation and convolution

Both operations slide the kernel over the input, multiply the overlapping elements, sum them, and write the result into the output.
The difference between cross-correlation and convolution is simply that convolution flips the kernel vertically and horizontally first. For training a neural network this makes no practical difference: if the computation is "flipped", the kernel that gets learned simply comes out flipped as well.
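
As a minimal sketch (assuming PyTorch is available), the cross-correlation computation looks like this, and flipping the kernel turns it into a true convolution:

    import torch

    def corr2d(X, K):
        """Compute the 2D cross-correlation of input X with kernel K."""
        h, w = K.shape
        Y = torch.zeros(X.shape[0] - h + 1, X.shape[1] - w + 1)
        for i in range(Y.shape[0]):
            for j in range(Y.shape[1]):
                Y[i, j] = (X[i:i + h, j:j + w] * K).sum()
        return Y

    X = torch.tensor([[0., 1., 2.], [3., 4., 5.], [6., 7., 8.]])
    K = torch.tensor([[0., 1.], [2., 3.]])
    print(corr2d(X, K))                            # tensor([[19., 25.], [37., 43.]])
    print(corr2d(X, torch.flip(K, dims=[0, 1])))   # true convolution: flip K in both directions first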

2. Pooling

Pooling is mainly there to combat overfitting (and sensitivity to exact positions): a whole patch of values is represented by a single value.
The pooling layers I have met so far are max pooling and average pooling, which, as the names suggest, take the maximum or the average of each window as the pooled result.
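
A quick sketch with PyTorch's built-in pooling layers (2x2 window, stride 1 here, so the windows overlap):

    import torch
    from torch import nn

    X = torch.tensor([[0., 1., 2.], [3., 4., 5.], [6., 7., 8.]]).reshape(1, 1, 3, 3)
    print(nn.MaxPool2d(kernel_size=2, stride=1)(X))  # each 2x2 window -> its max:  [[4., 5.], [7., 8.]]
    print(nn.AvgPool2d(kernel_size=2, stride=1)(X))  # each 2x2 window -> its mean: [[2., 3.], [5., 6.]]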

3. Convolutional Networks

A convolutional network is usually assembled from convolutional layers, activation functions, and pooling layers.
The end of the network can use fully connected layers to map the features down to the output dimension.
On the interface between the convolutional block and the fully connected block:

The convolutional block outputs tensors of shape (batch size, channels, height, width). When this output is passed to the fully connected block, each sample in the mini-batch is flattened. That is, the input to the fully connected layers becomes two-dimensional: the first dimension indexes the samples in the mini-batch, and the second dimension is each sample's flattened vector representation, whose length is the product of channels, height, and width. The fully connected block contains three fully connected layers with 120, 84, and 10 outputs respectively, where 10 is the number of output classes.
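
A small shape check of this flattening step (the sizes here mirror the LeNet example below and are only illustrative):

    import torch

    feature = torch.randn(256, 16, 4, 4)        # (batch size, channels, height, width) from the conv block
    flat = feature.view(feature.shape[0], -1)   # flatten each sample
    print(flat.shape)                           # torch.Size([256, 256]); 16 * 4 * 4 = 256 per sample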

A concrete example is the LeNet implementation in the next section: conv is the convolutional block and the final fc is the fully connected block.


4. Understanding Convolution and the 1x1 Kernel

A convolutional layer made of 1x1 kernels can be viewed as a variant of a fully connected layer: at every spatial position it applies the same linear map across the channels.
The reasoning takes a moment to wrap your head around, but it does hold.
More generally, convolution can be understood as using kernels to extract features from the target data.
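
A small sketch of this equivalence (a toy check, not part of the original notes): copying the weights of a 1x1 convolution into an nn.Linear and applying it per pixel gives the same result.

    import torch
    from torch import nn

    x = torch.randn(1, 3, 4, 4)                          # (batch, in_channels, h, w)
    conv = nn.Conv2d(3, 5, kernel_size=1, bias=False)
    fc = nn.Linear(3, 5, bias=False)
    fc.weight.data = conv.weight.data.view(5, 3)         # reuse the same weights

    y_conv = conv(x)                                              # (1, 5, 4, 4)
    y_fc = fc(x.permute(0, 2, 3, 1)).permute(0, 3, 1, 2)          # treat each pixel as a sample
    print(torch.allclose(y_conv, y_fc, atol=1e-6))                # True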

5. Convolution Hyperparameters

The hyperparameters include the kernel size, the amount of padding applied to the input, the numbers of input and output channels, and the stride.
I cannot say I fully understand the trade-offs yet, but the intuition I gathered from looking things up is: a larger kernel extracts information over a larger region; a small stride revisits overlapping regions and increases the amount of computation, while a stride that is too large risks skipping over parts of the data.
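
For a single spatial dimension, the output size is floor((n + 2*padding - kernel_size) / stride) + 1; a quick check of the formula:

    import torch
    from torch import nn

    x = torch.randn(1, 3, 32, 32)   # (batch, in_channels, height, width)
    conv = nn.Conv2d(in_channels=3, out_channels=8, kernel_size=5, stride=2, padding=2)
    print(conv(x).shape)            # torch.Size([1, 8, 16, 16]): (32 + 2*2 - 5)//2 + 1 = 16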

The two-dimensional array output by a two-dimensional convolutional layer can be regarded as a representation of the input at some level in the spatial dimensions (width and height), also called a feature map. The receptive field of an element x consists of all the possible input regions (possibly larger than the actual size of the input) that affect the forward computation of x. Taking Figure 5.1 as an example, the four shaded elements of the input are the receptive field of the shaded element of the output. Denote the 2x2 output in Figure 5.1 by Y and consider a deeper convolutional network: perform a cross-correlation between Y and another 2x2 kernel to produce a single output element z. The receptive field of z on Y then includes all four elements of Y, while its receptive field on the input includes all nine input elements. Thus a deeper convolutional network makes the receptive field of a single feature-map element broader, so that it can capture features of larger size in the input. We often use the word "element" for members of an array or matrix; in neural-network terminology these elements may also be called "units". When the meaning is clear, the book does not strictly distinguish the two terms.
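
A minimal sketch of the stacking argument above: two 2x2 cross-correlations applied in a row reduce a 3x3 input to a single element z, whose receptive field on Y is 4 elements and on the original input is all 9 elements.

    import torch
    import torch.nn.functional as F

    x = torch.randn(1, 1, 3, 3)
    k1 = torch.randn(1, 1, 2, 2)
    k2 = torch.randn(1, 1, 2, 2)
    y = F.conv2d(x, k1)   # shape (1, 1, 2, 2): the feature map Y
    z = F.conv2d(y, k2)   # shape (1, 1, 1, 1): the single element z
    print(y.shape, z.shape)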

Convolutional Network Architectures

1. Le_Net

The convolutional block is built by combining convolutional layers, sigmoid activations, and max pooling layers; the fully connected block is then built from linear layers and sigmoid activations.

    import torch
    from torch import nn

    class LeNet(nn.Module):
        def __init__(self):
            super(LeNet, self).__init__()
            self.conv = nn.Sequential(
                nn.Conv2d(1, 6, 5),   # in_channels, out_channels, kernel_size
                nn.Sigmoid(),
                nn.MaxPool2d(2, 2),   # kernel_size, stride
                nn.Conv2d(6, 16, 5),
                nn.Sigmoid(),
                nn.MaxPool2d(2, 2)
            )
            self.fc = nn.Sequential(
                nn.Linear(16*4*4, 120),
                nn.Sigmoid(),
                nn.Linear(120, 84),
                nn.Sigmoid(),
                nn.Linear(84, 10)
            )

        def forward(self, img):
            feature = self.conv(img)
            output = self.fc(feature.view(img.shape[0], -1))
            return output

    """
    LeNet(
      (conv): Sequential(
        (0): Conv2d(1, 6, kernel_size=(5, 5), stride=(1, 1))
        (1): Sigmoid()
        (2): MaxPool2d(kernel_size=2, stride=2, padding=0, dilation=1, ceil_mode=False)
        (3): Conv2d(6, 16, kernel_size=(5, 5), stride=(1, 1))
        (4): Sigmoid()
        (5): MaxPool2d(kernel_size=2, stride=2, padding=0, dilation=1, ceil_mode=False)
      )
      (fc): Sequential(
        (0): Linear(in_features=256, out_features=120, bias=True)
        (1): Sigmoid()
        (2): Linear(in_features=120, out_features=84, bias=True)
        (3): Sigmoid()
        (4): Linear(in_features=84, out_features=10, bias=True)
      )
    )
    """
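
Assuming the LeNet class above and a Fashion-MNIST-sized 28x28 input, a quick shape trace explains the nn.Linear(16*4*4, 120) input size: 28 -> 24 -> 12 -> 8 -> 4, with 16 channels at the end.

    net = LeNet()
    X = torch.rand(1, 1, 28, 28)
    print(net.conv(X).shape)   # torch.Size([1, 16, 4, 4])
    print(net(X).shape)        # torch.Size([1, 10])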

2. Alex_Net

AlexNet first uses a fairly large kernel to extract features and then gradually shrinks the kernel size.
The differences from LeNet below are quoted from the source I am studying: https://tangshusen.me/Dive-into-DL-PyTorch/#/chapter05_CNN/5.6_alexnet

First, compared with the relatively small LeNet, AlexNet consists of 8 layers of transformations: 5 convolutional layers, 2 fully connected hidden layers, and 1 fully connected output layer. Their design is described in detail below.

The convolution window in AlexNet's first layer is 11x11. Because the height and width of most ImageNet images are more than ten times those of MNIST images, objects in ImageNet images occupy many more pixels, so a larger convolution window is needed to capture them. The window in the second layer shrinks to 5x5, and 3x3 windows are used thereafter. In addition, the first, second, and fifth convolutional layers are each followed by a max pooling layer with a 3x3 window and a stride of 2. AlexNet also uses tens of times more convolution channels than LeNet.

Right after the last convolutional layer come two fully connected layers with 4096 outputs each. These two huge fully connected layers account for nearly 1 GB of model parameters. Because of the limited GPU memory of the time, the original AlexNet used a dual data-stream design so that each GPU only had to handle half of the model. Fortunately, GPU memory has grown considerably since then, so this special design is usually no longer needed.

Second, AlexNet replaced the sigmoid activation with the simpler ReLU activation. On the one hand, ReLU is cheaper to compute, since it involves no exponentiation as sigmoid does. On the other hand, ReLU makes the model easier to train under different parameter initializations: when the sigmoid output is very close to 0 or 1, the gradient in those regions is almost 0, so backpropagation can no longer update some of the model parameters, whereas the gradient of ReLU on the positive interval is always 1. With a poor initialization, sigmoid may therefore produce nearly zero gradients on the positive interval and the model cannot be trained effectively.

Third, AlexNet uses dropout (see Section 3.13) to control the model complexity of the fully connected layers, whereas LeNet does not use dropout.

Fourth, AlexNet introduced extensive image augmentation, such as flipping, cropping, and color changes, to enlarge the dataset and mitigate overfitting. This method is covered in detail later, in Section 9.1 (image augmentation).

    import torch
    from torch import nn

    class AlexNet(nn.Module):
        def __init__(self):
            super(AlexNet, self).__init__()
            self.conv = nn.Sequential(
                nn.Conv2d(1, 96, 11, 4),   # in_channels, out_channels, kernel_size, stride
                nn.ReLU(),
                nn.MaxPool2d(3, 2),        # kernel_size, stride
                # Smaller window; padding of 2 keeps the height and width unchanged; more output channels
                nn.Conv2d(96, 256, 5, 1, 2),
                nn.ReLU(),
                nn.MaxPool2d(3, 2),
                # Three consecutive conv layers with an even smaller window. Except for the last one,
                # the number of output channels keeps growing. No pooling follows the first two of them,
                # so the height and width are not reduced there.
                nn.Conv2d(256, 384, 3, 1, 1),
                nn.ReLU(),
                nn.Conv2d(384, 384, 3, 1, 1),
                nn.ReLU(),
                nn.Conv2d(384, 256, 3, 1, 1),
                nn.ReLU(),
                nn.MaxPool2d(3, 2)
            )
            # The fully connected layers here have several times more outputs than in LeNet;
            # dropout layers are used to mitigate overfitting
            self.fc = nn.Sequential(
                nn.Linear(256*5*5, 4096),
                nn.ReLU(),
                nn.Dropout(0.5),
                nn.Linear(4096, 4096),
                nn.ReLU(),
                nn.Dropout(0.5),
                # Output layer. Fashion-MNIST is used here, so there are 10 classes rather than the paper's 1000
                nn.Linear(4096, 10),
            )

        def forward(self, img):
            feature = self.conv(img)
            output = self.fc(feature.view(img.shape[0], -1))
            return output

    """
    Net Struct as follows:
    AlexNet(
      (conv): Sequential(
        (0): Conv2d(1, 96, kernel_size=(11, 11), stride=(4, 4))
        (1): ReLU()
        (2): MaxPool2d(kernel_size=3, stride=2, padding=0, dilation=1, ceil_mode=False)
        (3): Conv2d(96, 256, kernel_size=(5, 5), stride=(1, 1), padding=(2, 2))
        (4): ReLU()
        (5): MaxPool2d(kernel_size=3, stride=2, padding=0, dilation=1, ceil_mode=False)
        (6): Conv2d(256, 384, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1))
        (7): ReLU()
        (8): Conv2d(384, 384, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1))
        (9): ReLU()
        (10): Conv2d(384, 256, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1))
        (11): ReLU()
        (12): MaxPool2d(kernel_size=3, stride=2, padding=0, dilation=1, ceil_mode=False)
      )
      (fc): Sequential(
        (0): Linear(in_features=6400, out_features=4096, bias=True)
        (1): ReLU()
        (2): Dropout(p=0.5, inplace=False)
        (3): Linear(in_features=4096, out_features=4096, bias=True)
        (4): ReLU()
        (5): Dropout(p=0.5, inplace=False)
        (6): Linear(in_features=4096, out_features=10, bias=True)
      )
    )
    """
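
Assuming the AlexNet class above and Fashion-MNIST images resized to 224x224 (as the book does), the conv block shrinks the feature maps 224 -> 54 -> 26 -> 12 -> 5, which is where nn.Linear(256*5*5, 4096) comes from.

    net = AlexNet()
    X = torch.rand(1, 1, 224, 224)
    print(net.conv(X).shape)   # torch.Size([1, 256, 5, 5])
    print(net(X).shape)        # torch.Size([1, 10])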

3. VGG_Net

VGG chains VGG blocks together and connects them to fully connected layers.

The composition rule of a VGG block is: several consecutive convolutional layers with padding 1 and a 3x3 window, followed by a max pooling layer with stride 2 and a 2x2 window. The convolutional layers keep the height and width of the input unchanged, while the pooling layer halves them. The vgg_block function implements this basic VGG block; it takes the number of convolutional layers and the input and output channel counts.

For a given receptive field (the local region of the input that the output depends on), stacking small kernels is better than using a single large kernel, because the extra depth lets the network learn more complex patterns, and it comes at a lower cost (fewer parameters). For example, VGG uses three 3x3 kernels in place of a 7x7 kernel and two 3x3 kernels in place of a 5x5 kernel; the main goal is to increase depth while keeping the same receptive field, which improves the network's performance to some extent.
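
A rough parameter count (ignoring biases, with C input and C output channels) shows the saving described above:

    C = 64
    params_7x7 = 7 * 7 * C * C          # one 7x7 conv layer: 200704 for C = 64
    params_3x3 = 3 * (3 * 3 * C * C)    # three stacked 3x3 conv layers, same receptive field: 110592
    print(params_7x7, params_3x3)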

    from torch import nn

    # FlattenLayer is the flatten helper from the book's d2lzh_pytorch package
    # (equivalent to nn.Flatten() in current PyTorch)

    def vgg_block(num_convs, in_channels, out_channels):
        blk = []
        for i in range(num_convs):
            if i == 0:
                blk.append(nn.Conv2d(in_channels, out_channels, kernel_size=3, padding=1))
            else:
                blk.append(nn.Conv2d(out_channels, out_channels, kernel_size=3, padding=1))
            blk.append(nn.ReLU())
        blk.append(nn.MaxPool2d(kernel_size=2, stride=2))  # This halves the height and width
        return nn.Sequential(*blk)

    def vgg(conv_arch, fc_features, fc_hidden_units=4096):
        net = nn.Sequential()
        # Convolutional part
        for i, (num_convs, in_channels, out_channels) in enumerate(conv_arch):
            # Each vgg_block halves the height and width
            net.add_module("vgg_block_" + str(i+1), vgg_block(num_convs, in_channels, out_channels))
        # Fully connected part
        net.add_module("fc", nn.Sequential(FlattenLayer(),
                                           nn.Linear(fc_features, fc_hidden_units),
                                           nn.ReLU(),
                                           nn.Dropout(0.5),
                                           nn.Linear(fc_hidden_units, fc_hidden_units),
                                           nn.ReLU(),
                                           nn.Dropout(0.5),
                                           nn.Linear(fc_hidden_units, 10)
                                           ))
        return net

    """
    Net Struct as follows:
    Sequential(
      (vgg_block_1): Sequential(
        (0): Conv2d(1, 8, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1))
        (1): ReLU()
        (2): MaxPool2d(kernel_size=2, stride=2, padding=0, dilation=1, ceil_mode=False)
      )
      (vgg_block_2): Sequential(
        (0): Conv2d(8, 16, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1))
        (1): ReLU()
        (2): MaxPool2d(kernel_size=2, stride=2, padding=0, dilation=1, ceil_mode=False)
      )
      (vgg_block_3): Sequential(
        (0): Conv2d(16, 32, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1))
        (1): ReLU()
        (2): Conv2d(32, 32, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1))
        (3): ReLU()
        (4): MaxPool2d(kernel_size=2, stride=2, padding=0, dilation=1, ceil_mode=False)
      )
      (vgg_block_4): Sequential(
        (0): Conv2d(32, 64, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1))
        (1): ReLU()
        (2): Conv2d(64, 64, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1))
        (3): ReLU()
        (4): MaxPool2d(kernel_size=2, stride=2, padding=0, dilation=1, ceil_mode=False)
      )
      (vgg_block_5): Sequential(
        (0): Conv2d(64, 64, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1))
        (1): ReLU()
        (2): Conv2d(64, 64, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1))
        (3): ReLU()
        (4): MaxPool2d(kernel_size=2, stride=2, padding=0, dilation=1, ceil_mode=False)
      )
      (fc): Sequential(
        (0): FlattenLayer()
        (1): Linear(in_features=3136, out_features=512, bias=True)
        (2): ReLU()
        (3): Dropout(p=0.5, inplace=False)
        (4): Linear(in_features=512, out_features=512, bias=True)
        (5): ReLU()
        (6): Dropout(p=0.5, inplace=False)
        (7): Linear(in_features=512, out_features=10, bias=True)
      )
    )
    """
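
The structure printed above looks like it was built with the book's reduced channel configuration (all VGG-11 channel counts divided by 8, 224x224 input so five halvings leave 7x7 maps); a sketch of that call, with the exact numbers treated as an assumption:

    ratio = 8
    small_conv_arch = [(1, 1, 64//ratio), (1, 64//ratio, 128//ratio),
                       (2, 128//ratio, 256//ratio), (2, 256//ratio, 512//ratio),
                       (2, 512//ratio, 512//ratio)]
    fc_features = 512 // ratio * 7 * 7   # 64 * 7 * 7 = 3136, matching the printout
    net = vgg(small_conv_arch, fc_features, fc_hidden_units=4096 // ratio)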

4. NiN_Net

NiN can be understood as bundling a convolutional layer together with "fully connected" layers into one block and then chaining several such blocks.
As mentioned earlier, a convolutional layer built from 1x1 kernels acts like a fully connected layer, and that is exactly how the block is constructed here.
There is one more difference:

Besides using NiN blocks, NiN has another design that differs significantly from AlexNet: NiN removes AlexNet's final three fully connected layers. Instead, NiN uses a NiN block whose number of output channels equals the number of label classes, followed by a global average pooling layer that averages all elements of each channel and uses the result directly for classification. The global average pooling layer here is simply an average pooling layer whose window shape equals the spatial shape of the input. The advantage of this design is that it significantly reduces the model parameter size and thus mitigates overfitting; however, it sometimes increases the training time needed to obtain an effective model.

    import torch
    from torch import nn
    import torch.nn.functional as F
    import d2lzh_pytorch as d2l   # the book's helper package (provides FlattenLayer)

    def nin_block(in_channels, out_channels, kernel_size, stride, padding):
        blk = nn.Sequential(nn.Conv2d(in_channels, out_channels, kernel_size, stride, padding),
                            nn.ReLU(),
                            nn.Conv2d(out_channels, out_channels, kernel_size=1),
                            nn.ReLU(),
                            nn.Conv2d(out_channels, out_channels, kernel_size=1),
                            nn.ReLU())
        return blk

    class GlobalAvgPool2d(nn.Module):
        # Global average pooling can be implemented by setting the pooling window
        # to the height and width of the input
        def __init__(self):
            super(GlobalAvgPool2d, self).__init__()
        def forward(self, x):
            return F.avg_pool2d(x, kernel_size=x.size()[2:])

    net = nn.Sequential(
        nin_block(1, 96, kernel_size=11, stride=4, padding=0),
        nn.MaxPool2d(kernel_size=3, stride=2),
        nin_block(96, 256, kernel_size=5, stride=1, padding=2),
        nn.MaxPool2d(kernel_size=3, stride=2),
        nin_block(256, 384, kernel_size=3, stride=1, padding=1),
        nn.MaxPool2d(kernel_size=3, stride=2),
        nn.Dropout(0.5),
        # The number of label classes is 10
        nin_block(384, 10, kernel_size=3, stride=1, padding=1),
        GlobalAvgPool2d(),
        # Convert the 4-D output into a 2-D output of shape (batch size, 10)
        d2l.FlattenLayer())

    """
    Net Struct as follows:
    Sequential(
      (0): Sequential(
        (0): Conv2d(1, 96, kernel_size=(11, 11), stride=(4, 4))
        (1): ReLU()
        (2): Conv2d(96, 96, kernel_size=(1, 1), stride=(1, 1))
        (3): ReLU()
        (4): Conv2d(96, 96, kernel_size=(1, 1), stride=(1, 1))
        (5): ReLU()
      )
      (1): MaxPool2d(kernel_size=3, stride=2, padding=0, dilation=1, ceil_mode=False)
      (2): Sequential(
        (0): Conv2d(96, 256, kernel_size=(5, 5), stride=(1, 1), padding=(2, 2))
        (1): ReLU()
        (2): Conv2d(256, 256, kernel_size=(1, 1), stride=(1, 1))
        (3): ReLU()
        (4): Conv2d(256, 256, kernel_size=(1, 1), stride=(1, 1))
        (5): ReLU()
      )
      (3): MaxPool2d(kernel_size=3, stride=2, padding=0, dilation=1, ceil_mode=False)
      (4): Sequential(
        (0): Conv2d(256, 384, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1))
        (1): ReLU()
        (2): Conv2d(384, 384, kernel_size=(1, 1), stride=(1, 1))
        (3): ReLU()
        (4): Conv2d(384, 384, kernel_size=(1, 1), stride=(1, 1))
        (5): ReLU()
      )
      (5): MaxPool2d(kernel_size=3, stride=2, padding=0, dilation=1, ceil_mode=False)
      (6): Dropout(p=0.5, inplace=False)
      (7): Sequential(
        (0): Conv2d(384, 10, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1))
        (1): ReLU()
        (2): Conv2d(10, 10, kernel_size=(1, 1), stride=(1, 1))
        (3): ReLU()
        (4): Conv2d(10, 10, kernel_size=(1, 1), stride=(1, 1))
        (5): ReLU()
      )
      (8): GlobalAvgPool2d()
      (9): FlattenLayer()
    )
    """

5. GoogLeNet

An Inception block contains four parallel paths. The first three paths use convolutional layers with window sizes of 1x1, 3x3, and 5x5 to extract information at different spatial scales; the middle two of them first apply a 1x1 convolution to the input to reduce the number of input channels and hence the model complexity. The fourth path uses a 3x3 max pooling layer followed by a 1x1 convolution to change the number of channels. All four paths use suitable padding so that the input and output have the same height and width. Finally, the outputs of the four paths are concatenated along the channel dimension and passed on to the next layer.

My take on this is that the network hands the choice over to itself: because several kernel sizes are used in parallel, the parameter optimization ultimately decides how much each path contributes, and whichever works better gets used.

    import torch
    from torch import nn
    import torch.nn.functional as F
    import d2lzh_pytorch as d2l   # the book's helper package (GlobalAvgPool2d, FlattenLayer)

    class Inception(nn.Module):
        # c1 - c4 are the output channel counts of the layers on each path
        def __init__(self, in_c, c1, c2, c3, c4):
            super(Inception, self).__init__()
            # Path 1: a single 1 x 1 convolution
            self.p1_1 = nn.Conv2d(in_c, c1, kernel_size=1)
            # Path 2: 1 x 1 convolution followed by 3 x 3 convolution
            self.p2_1 = nn.Conv2d(in_c, c2[0], kernel_size=1)
            self.p2_2 = nn.Conv2d(c2[0], c2[1], kernel_size=3, padding=1)
            # Path 3: 1 x 1 convolution followed by 5 x 5 convolution
            self.p3_1 = nn.Conv2d(in_c, c3[0], kernel_size=1)
            self.p3_2 = nn.Conv2d(c3[0], c3[1], kernel_size=5, padding=2)
            # Path 4: 3 x 3 max pooling followed by 1 x 1 convolution
            self.p4_1 = nn.MaxPool2d(kernel_size=3, stride=1, padding=1)
            self.p4_2 = nn.Conv2d(in_c, c4, kernel_size=1)

        def forward(self, x):
            p1 = F.relu(self.p1_1(x))
            p2 = F.relu(self.p2_2(F.relu(self.p2_1(x))))
            p3 = F.relu(self.p3_2(F.relu(self.p3_1(x))))
            p4 = F.relu(self.p4_2(self.p4_1(x)))
            return torch.cat((p1, p2, p3, p4), dim=1)  # Concatenate the outputs along the channel dimension

    b1 = nn.Sequential(nn.Conv2d(1, 64, kernel_size=7, stride=2, padding=3),
                       nn.ReLU(),
                       nn.MaxPool2d(kernel_size=3, stride=2, padding=1))
    b2 = nn.Sequential(nn.Conv2d(64, 64, kernel_size=1),
                       nn.Conv2d(64, 192, kernel_size=3, padding=1),
                       nn.MaxPool2d(kernel_size=3, stride=2, padding=1))
    b3 = nn.Sequential(Inception(192, 64, (96, 128), (16, 32), 32),
                       Inception(256, 128, (128, 192), (32, 96), 64),
                       nn.MaxPool2d(kernel_size=3, stride=2, padding=1))
    b4 = nn.Sequential(Inception(480, 192, (96, 208), (16, 48), 64),
                       Inception(512, 160, (112, 224), (24, 64), 64),
                       Inception(512, 128, (128, 256), (24, 64), 64),
                       Inception(512, 112, (144, 288), (32, 64), 64),
                       Inception(528, 256, (160, 320), (32, 128), 128),
                       nn.MaxPool2d(kernel_size=3, stride=2, padding=1))
    b5 = nn.Sequential(Inception(832, 256, (160, 320), (32, 128), 128),
                       Inception(832, 384, (192, 384), (48, 128), 128),
                       d2l.GlobalAvgPool2d())
    net = nn.Sequential(b1, b2, b3, b4, b5,
                        d2l.FlattenLayer(), nn.Linear(1024, 10))

    """
    Net Struct as follows:
    Sequential(
      (0): Sequential(
        (0): Conv2d(1, 64, kernel_size=(7, 7), stride=(2, 2), padding=(3, 3))
        (1): ReLU()
        (2): MaxPool2d(kernel_size=3, stride=2, padding=1, dilation=1, ceil_mode=False)
      )
      (1): Sequential(
        (0): Conv2d(64, 64, kernel_size=(1, 1), stride=(1, 1))
        (1): Conv2d(64, 192, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1))
        (2): MaxPool2d(kernel_size=3, stride=2, padding=1, dilation=1, ceil_mode=False)
      )
      (2): Sequential(
        (0): Inception(
          (p1_1): Conv2d(192, 64, kernel_size=(1, 1), stride=(1, 1))
          (p2_1): Conv2d(192, 96, kernel_size=(1, 1), stride=(1, 1))
          (p2_2): Conv2d(96, 128, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1))
          (p3_1): Conv2d(192, 16, kernel_size=(1, 1), stride=(1, 1))
          (p3_2): Conv2d(16, 32, kernel_size=(5, 5), stride=(1, 1), padding=(2, 2))
          (p4_1): MaxPool2d(kernel_size=3, stride=1, padding=1, dilation=1, ceil_mode=False)
          (p4_2): Conv2d(192, 32, kernel_size=(1, 1), stride=(1, 1))
        )
        (1): Inception(
          (p1_1): Conv2d(256, 128, kernel_size=(1, 1), stride=(1, 1))
          (p2_1): Conv2d(256, 128, kernel_size=(1, 1), stride=(1, 1))
          (p2_2): Conv2d(128, 192, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1))
          (p3_1): Conv2d(256, 32, kernel_size=(1, 1), stride=(1, 1))
          (p3_2): Conv2d(32, 96, kernel_size=(5, 5), stride=(1, 1), padding=(2, 2))
          (p4_1): MaxPool2d(kernel_size=3, stride=1, padding=1, dilation=1, ceil_mode=False)
          (p4_2): Conv2d(256, 64, kernel_size=(1, 1), stride=(1, 1))
        )
        (2): MaxPool2d(kernel_size=3, stride=2, padding=1, dilation=1, ceil_mode=False)
      )
      (3): Sequential(
        (0): Inception(
          (p1_1): Conv2d(480, 192, kernel_size=(1, 1), stride=(1, 1))
          (p2_1): Conv2d(480, 96, kernel_size=(1, 1), stride=(1, 1))
          (p2_2): Conv2d(96, 208, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1))
          (p3_1): Conv2d(480, 16, kernel_size=(1, 1), stride=(1, 1))
          (p3_2): Conv2d(16, 48, kernel_size=(5, 5), stride=(1, 1), padding=(2, 2))
          (p4_1): MaxPool2d(kernel_size=3, stride=1, padding=1, dilation=1, ceil_mode=False)
          (p4_2): Conv2d(480, 64, kernel_size=(1, 1), stride=(1, 1))
        )
        (1): Inception(
          (p1_1): Conv2d(512, 160, kernel_size=(1, 1), stride=(1, 1))
          (p2_1): Conv2d(512, 112, kernel_size=(1, 1), stride=(1, 1))
          (p2_2): Conv2d(112, 224, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1))
          (p3_1): Conv2d(512, 24, kernel_size=(1, 1), stride=(1, 1))
          (p3_2): Conv2d(24, 64, kernel_size=(5, 5), stride=(1, 1), padding=(2, 2))
          (p4_1): MaxPool2d(kernel_size=3, stride=1, padding=1, dilation=1, ceil_mode=False)
          (p4_2): Conv2d(512, 64, kernel_size=(1, 1), stride=(1, 1))
        )
        (2): Inception(
          (p1_1): Conv2d(512, 128, kernel_size=(1, 1), stride=(1, 1))
          (p2_1): Conv2d(512, 128, kernel_size=(1, 1), stride=(1, 1))
          (p2_2): Conv2d(128, 256, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1))
          (p3_1): Conv2d(512, 24, kernel_size=(1, 1), stride=(1, 1))
          (p3_2): Conv2d(24, 64, kernel_size=(5, 5), stride=(1, 1), padding=(2, 2))
          (p4_1): MaxPool2d(kernel_size=3, stride=1, padding=1, dilation=1, ceil_mode=False)
          (p4_2): Conv2d(512, 64, kernel_size=(1, 1), stride=(1, 1))
        )
        (3): Inception(
          (p1_1): Conv2d(512, 112, kernel_size=(1, 1), stride=(1, 1))
          (p2_1): Conv2d(512, 144, kernel_size=(1, 1), stride=(1, 1))
          (p2_2): Conv2d(144, 288, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1))
          (p3_1): Conv2d(512, 32, kernel_size=(1, 1), stride=(1, 1))
          (p3_2): Conv2d(32, 64, kernel_size=(5, 5), stride=(1, 1), padding=(2, 2))
          (p4_1): MaxPool2d(kernel_size=3, stride=1, padding=1, dilation=1, ceil_mode=False)
          (p4_2): Conv2d(512, 64, kernel_size=(1, 1), stride=(1, 1))
        )
        (4): Inception(
          (p1_1): Conv2d(528, 256, kernel_size=(1, 1), stride=(1, 1))
          (p2_1): Conv2d(528, 160, kernel_size=(1, 1), stride=(1, 1))
          (p2_2): Conv2d(160, 320, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1))
          (p3_1): Conv2d(528, 32, kernel_size=(1, 1), stride=(1, 1))
          (p3_2): Conv2d(32, 128, kernel_size=(5, 5), stride=(1, 1), padding=(2, 2))
          (p4_1): MaxPool2d(kernel_size=3, stride=1, padding=1, dilation=1, ceil_mode=False)
          (p4_2): Conv2d(528, 128, kernel_size=(1, 1), stride=(1, 1))
        )
        (5): MaxPool2d(kernel_size=3, stride=2, padding=1, dilation=1, ceil_mode=False)
      )
      (4): Sequential(
        (0): Inception(
          (p1_1): Conv2d(832, 256, kernel_size=(1, 1), stride=(1, 1))
          (p2_1): Conv2d(832, 160, kernel_size=(1, 1), stride=(1, 1))
          (p2_2): Conv2d(160, 320, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1))
          (p3_1): Conv2d(832, 32, kernel_size=(1, 1), stride=(1, 1))
          (p3_2): Conv2d(32, 128, kernel_size=(5, 5), stride=(1, 1), padding=(2, 2))
          (p4_1): MaxPool2d(kernel_size=3, stride=1, padding=1, dilation=1, ceil_mode=False)
          (p4_2): Conv2d(832, 128, kernel_size=(1, 1), stride=(1, 1))
        )
        (1): Inception(
          (p1_1): Conv2d(832, 384, kernel_size=(1, 1), stride=(1, 1))
          (p2_1): Conv2d(832, 192, kernel_size=(1, 1), stride=(1, 1))
          (p2_2): Conv2d(192, 384, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1))
          (p3_1): Conv2d(832, 48, kernel_size=(1, 1), stride=(1, 1))
          (p3_2): Conv2d(48, 128, kernel_size=(5, 5), stride=(1, 1), padding=(2, 2))
          (p4_1): MaxPool2d(kernel_size=3, stride=1, padding=1, dilation=1, ceil_mode=False)
          (p4_2): Conv2d(832, 128, kernel_size=(1, 1), stride=(1, 1))
        )
        (2): GlobalAvgPool2d()
      )
      (5): FlattenLayer()
      (6): Linear(in_features=1024, out_features=10, bias=True)
    )
    """

Another key point here is the choice of the channel ratios between the paths, which should also count as hyperparameters.
I am not going to worry about them: a large amount of experimentation has shown that this particular recipe works well, so I simply use it as given.
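
A quick shape check (assuming the Inception class above) shows how the branch channel counts simply add up after the concatenation:

    blk = Inception(192, 64, (96, 128), (16, 32), 32)
    X = torch.rand(1, 192, 28, 28)
    print(blk(X).shape)   # torch.Size([1, 256, 28, 28]); 64 + 128 + 32 + 32 = 256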

6. Batch_Normalization

The book's own introduction is detailed, but the following article explains it even more thoroughly:
https://zhuanlan.zhihu.com/p/89422962
An excerpt:

Let us first ask why a neural network needs its inputs standardized. The reason is that a neural network is essentially learning the distribution of the data: if the training and test sets have different distributions, the network's generalization suffers badly. Likewise, when training with mini-batches, different batches may have different distributions, so the network has to adapt to a new distribution at every iteration, which greatly slows down training. That is why we standardize the input data.

Training a deep network is a complex process: as soon as the first few layers change slightly, the change is amplified through the later layers. Once the input distribution of some layer changes, that layer has to adapt to the new distribution, so if the distributions keep shifting during training, the training speed suffers. Once the network starts training, the parameters are updated; apart from the input layer (whose data we have already normalized per sample), the input distribution of every later layer keeps changing, because updates to the earlier layers change the distribution of the inputs to the later layers. This change of the distribution of intermediate activations during training is called Internal Covariate Shift (an unstable input distribution), and Batch Normalization was created to address it.

Note that the BN computation contains not only the standardization step but also a final step known as scale-and-shift (transform and reconstruct). Why add it? If we only applied the normalization formula to the output of some layer A and then fed it into layer B, we would interfere with the features layer A has learned. For example, suppose the features learned by some intermediate layer are naturally distributed on the two sides of an S-shaped activation; forcing them to zero mean and unit standard deviation squeezes them into the middle of the sigmoid, which effectively destroys the feature distribution the layer had learned. The scale-and-shift step is therefore added to preserve the learned features.
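
In formulas (my own summary of the excerpt, for a mini-batch B with mean mu_B and variance sigma_B^2, and learnable scale gamma and shift beta):

    \hat{x}^{(k)} = \frac{x^{(k)} - \mu_{\mathcal{B}}}{\sqrt{\sigma_{\mathcal{B}}^{2} + \epsilon}},
    \qquad
    y^{(k)} = \gamma^{(k)} \hat{x}^{(k)} + \beta^{(k)}

The first equation is the standardization; the second is the scale-and-shift step that lets the network recover the original feature distribution if that is what works best.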

The code:

    import torch
    from torch import nn

    # FlattenLayer is the flatten helper from the book's d2lzh_pytorch package

    def batch_norm(is_training, X, gamma, beta, moving_mean, moving_var, eps, momentum):
        # Determine whether we are in training mode or prediction mode
        if not is_training:
            # In prediction mode, use the moving-average mean and variance directly
            X_hat = (X - moving_mean) / torch.sqrt(moving_var + eps)
        else:
            assert len(X.shape) in (2, 4)
            if len(X.shape) == 2:
                # Fully connected case: compute the mean and variance over the feature dimension
                mean = X.mean(dim=0)
                var = ((X - mean) ** 2).mean(dim=0)
            else:
                # 2-D convolution case: compute the mean and variance per channel (axis=1),
                # keeping X's shape so that broadcasting works later
                mean = X.mean(dim=0, keepdim=True).mean(dim=2, keepdim=True).mean(dim=3, keepdim=True)
                var = ((X - mean) ** 2).mean(dim=0, keepdim=True).mean(dim=2, keepdim=True).mean(dim=3, keepdim=True)
            # In training mode, standardize with the current mean and variance
            X_hat = (X - mean) / torch.sqrt(var + eps)
            # Update the moving-average mean and variance
            moving_mean = momentum * moving_mean + (1.0 - momentum) * mean
            moving_var = momentum * moving_var + (1.0 - momentum) * var
        Y = gamma * X_hat + beta  # Scale and shift
        return Y, moving_mean, moving_var

    class BatchNorm(nn.Module):
        def __init__(self, num_features, num_dims):
            super(BatchNorm, self).__init__()
            if num_dims == 2:
                shape = (1, num_features)
            else:
                shape = (1, num_features, 1, 1)
            # Scale and shift parameters that take part in gradient computation,
            # initialized to 1 (gamma) and 0 (beta)
            self.gamma = nn.Parameter(torch.ones(shape))
            self.beta = nn.Parameter(torch.zeros(shape))
            # Variables that do not take part in gradient computation, initialized to 0 in main memory
            self.moving_mean = torch.zeros(shape)
            self.moving_var = torch.zeros(shape)

        def forward(self, X):
            # If X is not in main memory, copy moving_mean and moving_var to the device X lives on
            if self.moving_mean.device != X.device:
                self.moving_mean = self.moving_mean.to(X.device)
                self.moving_var = self.moving_var.to(X.device)
            # Save the updated moving_mean and moving_var; a Module's training attribute is
            # True by default and becomes False after calling .eval()
            Y, self.moving_mean, self.moving_var = batch_norm(self.training,
                X, self.gamma, self.beta, self.moving_mean,
                self.moving_var, eps=1e-5, momentum=0.9)
            return Y

    # LeNet with the hand-written BatchNorm layers
    net = nn.Sequential(
        nn.Conv2d(1, 6, 5),   # in_channels, out_channels, kernel_size
        BatchNorm(6, num_dims=4),
        nn.Sigmoid(),
        nn.MaxPool2d(2, 2),   # kernel_size, stride
        nn.Conv2d(6, 16, 5),
        BatchNorm(16, num_dims=4),
        nn.Sigmoid(),
        nn.MaxPool2d(2, 2),
        FlattenLayer(),
        nn.Linear(16*4*4, 120),
        BatchNorm(120, num_dims=2),
        nn.Sigmoid(),
        nn.Linear(120, 84),
        BatchNorm(84, num_dims=2),
        nn.Sigmoid(),
        nn.Linear(84, 10)
    )

    # The same network using PyTorch's built-in BatchNorm layers
    net = nn.Sequential(
        nn.Conv2d(1, 6, 5),   # in_channels, out_channels, kernel_size
        nn.BatchNorm2d(6),
        nn.Sigmoid(),
        nn.MaxPool2d(2, 2),   # kernel_size, stride
        nn.Conv2d(6, 16, 5),
        nn.BatchNorm2d(16),
        nn.Sigmoid(),
        nn.MaxPool2d(2, 2),
        FlattenLayer(),
        nn.Linear(16*4*4, 120),
        nn.BatchNorm1d(120),
        nn.Sigmoid(),
        nn.Linear(120, 84),
        nn.BatchNorm1d(84),
        nn.Sigmoid(),
        nn.Linear(84, 10)
    )

    """
    Net Struct as follows:
    Sequential(
      (0): Conv2d(1, 6, kernel_size=(5, 5), stride=(1, 1))
      (1): BatchNorm2d(6, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
      (2): Sigmoid()
      (3): MaxPool2d(kernel_size=2, stride=2, padding=0, dilation=1, ceil_mode=False)
      (4): Conv2d(6, 16, kernel_size=(5, 5), stride=(1, 1))
      (5): BatchNorm2d(16, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
      (6): Sigmoid()
      (7): MaxPool2d(kernel_size=2, stride=2, padding=0, dilation=1, ceil_mode=False)
      (8): FlattenLayer()
      (9): Linear(in_features=256, out_features=120, bias=True)
      (10): BatchNorm1d(120, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
      (11): Sigmoid()
      (12): Linear(in_features=120, out_features=84, bias=True)
      (13): BatchNorm1d(84, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
      (14): Sigmoid()
      (15): Linear(in_features=84, out_features=10, bias=True)
    )
    """

7. ResNet

Residual networks were introduced to address the problem that simply making a network deeper can make it harder to train well.
An excellent reference:
https://zhuanlan.zhihu.com/p/42706477
The principle is roughly as shown in the figure below.
image.png
The result of passing through the convolutional layers is combined with the original input, so the output after training still retains the features that went in.
In the implementation, the input is added to the output of two convolutional layers and the sum is passed through ReLU; if the convolutions change the number of channels, the input must first go through a 1x1 convolution before the addition, as in the figure below.
image.png

    import torch
    from torch import nn
    import torch.nn.functional as F

    # GlobalAvgPool2d and FlattenLayer are the helpers defined/used earlier (also in d2lzh_pytorch)

    class Residual(nn.Module):  # This class is also saved in the d2lzh_pytorch package for later use
        def __init__(self, in_channels, out_channels, use_1x1conv=False, stride=1):
            super(Residual, self).__init__()
            self.conv1 = nn.Conv2d(in_channels, out_channels, kernel_size=3, padding=1, stride=stride)
            self.conv2 = nn.Conv2d(out_channels, out_channels, kernel_size=3, padding=1)
            if use_1x1conv:
                self.conv3 = nn.Conv2d(in_channels, out_channels, kernel_size=1, stride=stride)
            else:
                self.conv3 = None
            self.bn1 = nn.BatchNorm2d(out_channels)
            self.bn2 = nn.BatchNorm2d(out_channels)

        def forward(self, X):
            Y = F.relu(self.bn1(self.conv1(X)))
            Y = self.bn2(self.conv2(Y))
            if self.conv3:
                X = self.conv3(X)
            return F.relu(Y + X)

    net = nn.Sequential(
        nn.Conv2d(1, 64, kernel_size=7, stride=2, padding=3),
        nn.BatchNorm2d(64),
        nn.ReLU(),
        nn.MaxPool2d(kernel_size=3, stride=2, padding=1))

    def resnet_block(in_channels, out_channels, num_residuals, first_block=False):
        if first_block:
            assert in_channels == out_channels  # The first block keeps the channel count equal to the input channels
        blk = []
        for i in range(num_residuals):
            if i == 0 and not first_block:
                blk.append(Residual(in_channels, out_channels, use_1x1conv=True, stride=2))
            else:
                blk.append(Residual(out_channels, out_channels))
        return nn.Sequential(*blk)

    net.add_module("resnet_block1", resnet_block(64, 64, 2, first_block=True))
    net.add_module("resnet_block2", resnet_block(64, 128, 2))
    net.add_module("resnet_block3", resnet_block(128, 256, 2))
    net.add_module("resnet_block4", resnet_block(256, 512, 2))
    net.add_module("global_avg_pool", GlobalAvgPool2d())  # GlobalAvgPool2d output: (Batch, 512, 1, 1)
    net.add_module("fc", nn.Sequential(FlattenLayer(), nn.Linear(512, 10)))

    """
    Net Struct as follows:
    Sequential(
      (0): Conv2d(1, 64, kernel_size=(7, 7), stride=(2, 2), padding=(3, 3))
      (1): BatchNorm2d(64, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
      (2): ReLU()
      (3): MaxPool2d(kernel_size=3, stride=2, padding=1, dilation=1, ceil_mode=False)
      (resnet_block1): Sequential(
        (0): Residual(
          (conv1): Conv2d(64, 64, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1))
          (conv2): Conv2d(64, 64, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1))
          (bn1): BatchNorm2d(64, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
          (bn2): BatchNorm2d(64, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
        )
        (1): Residual(
          (conv1): Conv2d(64, 64, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1))
          (conv2): Conv2d(64, 64, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1))
          (bn1): BatchNorm2d(64, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
          (bn2): BatchNorm2d(64, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
        )
      )
      (resnet_block2): Sequential(
        (0): Residual(
          (conv1): Conv2d(64, 128, kernel_size=(3, 3), stride=(2, 2), padding=(1, 1))
          (conv2): Conv2d(128, 128, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1))
          (conv3): Conv2d(64, 128, kernel_size=(1, 1), stride=(2, 2))
          (bn1): BatchNorm2d(128, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
          (bn2): BatchNorm2d(128, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
        )
        (1): Residual(
          (conv1): Conv2d(128, 128, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1))
          (conv2): Conv2d(128, 128, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1))
          (bn1): BatchNorm2d(128, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
          (bn2): BatchNorm2d(128, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
        )
      )
      (resnet_block3): Sequential(
        (0): Residual(
          (conv1): Conv2d(128, 256, kernel_size=(3, 3), stride=(2, 2), padding=(1, 1))
          (conv2): Conv2d(256, 256, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1))
          (conv3): Conv2d(128, 256, kernel_size=(1, 1), stride=(2, 2))
          (bn1): BatchNorm2d(256, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
          (bn2): BatchNorm2d(256, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
        )
        (1): Residual(
          (conv1): Conv2d(256, 256, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1))
          (conv2): Conv2d(256, 256, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1))
          (bn1): BatchNorm2d(256, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
          (bn2): BatchNorm2d(256, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
        )
      )
      (resnet_block4): Sequential(
        (0): Residual(
          (conv1): Conv2d(256, 512, kernel_size=(3, 3), stride=(2, 2), padding=(1, 1))
          (conv2): Conv2d(512, 512, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1))
          (conv3): Conv2d(256, 512, kernel_size=(1, 1), stride=(2, 2))
          (bn1): BatchNorm2d(512, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
          (bn2): BatchNorm2d(512, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
        )
        (1): Residual(
          (conv1): Conv2d(512, 512, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1))
          (conv2): Conv2d(512, 512, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1))
          (bn1): BatchNorm2d(512, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
          (bn2): BatchNorm2d(512, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
        )
      )
      (global_avg_pool): GlobalAvgPool2d()
      (fc): Sequential(
        (0): FlattenLayer()
        (1): Linear(in_features=512, out_features=10, bias=True)
      )
    )
    """
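
Assuming the Residual class above, a quick shape check: without the 1x1 convolution the shape is preserved, and with use_1x1conv=True and stride=2 the channels change while the height and width are halved.

    blk = Residual(3, 3)
    print(blk(torch.rand(4, 3, 6, 6)).shape)   # torch.Size([4, 3, 6, 6])
    blk = Residual(3, 6, use_1x1conv=True, stride=2)
    print(blk(torch.rand(4, 3, 6, 6)).shape)   # torch.Size([4, 6, 3, 3])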

8. Dense_Net

One way DenseNet differs from the previous network is that it does not merge the two data sources by adding them directly; instead it places them in parallel, concatenating along the channel dimension (a tiny sketch follows below).
As a result, the number of channels grows quickly after the convolutional blocks, so there is an additional transition layer that shrinks the data and keeps the amount of computation down.
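
A tiny sketch of what "concatenating instead of adding" means: the channel counts accumulate.

    import torch

    X = torch.rand(4, 3, 8, 8)
    Y = torch.rand(4, 10, 8, 8)
    print(torch.cat((X, Y), dim=1).shape)   # torch.Size([4, 13, 8, 8]); 3 + 10 = 13 channels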

    import torch
    from torch import nn

    # GlobalAvgPool2d and FlattenLayer are the helpers defined/used earlier (also in d2lzh_pytorch)

    def conv_block(in_channels, out_channels):
        blk = nn.Sequential(nn.BatchNorm2d(in_channels),
                            nn.ReLU(),
                            nn.Conv2d(in_channels, out_channels, kernel_size=3, padding=1))
        return blk

    # Each pass through the loop adds out_channels more channels
    class DenseBlock(nn.Module):
        def __init__(self, num_convs, in_channels, out_channels):
            super(DenseBlock, self).__init__()
            net = []
            for i in range(num_convs):
                in_c = in_channels + i * out_channels
                net.append(conv_block(in_c, out_channels))
            self.net = nn.ModuleList(net)
            self.out_channels = in_channels + num_convs * out_channels  # Compute the number of output channels

        def forward(self, X):
            for blk in self.net:
                # print(blk)
                Y = blk(X)
                X = torch.cat((X, Y), dim=1)  # Concatenate the input and output along the channel dimension
            return X

    # The transition block reduces the number of channels with a 1x1 convolution and
    # halves the height and width with stride-2 average pooling
    def transition_block(in_channels, out_channels):
        blk = nn.Sequential(
            nn.BatchNorm2d(in_channels),
            nn.ReLU(),
            nn.Conv2d(in_channels, out_channels, kernel_size=1),
            nn.AvgPool2d(kernel_size=2, stride=2))
        return blk

    net = nn.Sequential(
        nn.Conv2d(1, 64, kernel_size=7, stride=2, padding=3),
        nn.BatchNorm2d(64),
        nn.ReLU(),
        nn.MaxPool2d(kernel_size=3, stride=2, padding=1))

    num_channels, growth_rate = 64, 32  # num_channels is the current number of channels
    num_convs_in_dense_blocks = [4, 4, 4, 4]
    for i, num_convs in enumerate(num_convs_in_dense_blocks):
        DB = DenseBlock(num_convs, num_channels, growth_rate)
        net.add_module("DenseBlock_%d" % i, DB)
        # The output channels of the previous dense block
        num_channels = DB.out_channels
        # Insert a transition layer between dense blocks that halves the number of channels
        if i != len(num_convs_in_dense_blocks) - 1:
            net.add_module("transition_block_%d" % i, transition_block(num_channels, num_channels // 2))
            num_channels = num_channels // 2

    net.add_module("BN", nn.BatchNorm2d(num_channels))
    net.add_module("relu", nn.ReLU())
    net.add_module("global_avg_pool", GlobalAvgPool2d())  # GlobalAvgPool2d output: (Batch, num_channels, 1, 1)
    net.add_module("fc", nn.Sequential(FlattenLayer(), nn.Linear(num_channels, 10)))

    """
    Net Struct as follows:
    Sequential(
      (0): Conv2d(1, 64, kernel_size=(7, 7), stride=(2, 2), padding=(3, 3))
      (1): BatchNorm2d(64, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
      (2): ReLU()
      (3): MaxPool2d(kernel_size=3, stride=2, padding=1, dilation=1, ceil_mode=False)
      (DenseBlock_0): DenseBlock(
        (net): ModuleList(
          (0): Sequential(
            (0): BatchNorm2d(64, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
            (1): ReLU()
            (2): Conv2d(64, 32, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1))
          )
          (1): Sequential(
            (0): BatchNorm2d(96, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
            (1): ReLU()
            (2): Conv2d(96, 32, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1))
          )
          (2): Sequential(
            (0): BatchNorm2d(128, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
            (1): ReLU()
            (2): Conv2d(128, 32, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1))
          )
          (3): Sequential(
            (0): BatchNorm2d(160, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
            (1): ReLU()
            (2): Conv2d(160, 32, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1))
          )
        )
      )
      (transition_block_0): Sequential(
        (0): BatchNorm2d(192, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
        (1): ReLU()
        (2): Conv2d(192, 96, kernel_size=(1, 1), stride=(1, 1))
        (3): AvgPool2d(kernel_size=2, stride=2, padding=0)
      )
      (DenseBlock_1): DenseBlock(
        (net): ModuleList(
          (0): Sequential(
            (0): BatchNorm2d(96, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
            (1): ReLU()
            (2): Conv2d(96, 32, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1))
          )
          (1): Sequential(
            (0): BatchNorm2d(128, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
            (1): ReLU()
            (2): Conv2d(128, 32, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1))
          )
          (2): Sequential(
            (0): BatchNorm2d(160, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
            (1): ReLU()
            (2): Conv2d(160, 32, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1))
          )
          (3): Sequential(
            (0): BatchNorm2d(192, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
            (1): ReLU()
            (2): Conv2d(192, 32, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1))
          )
        )
      )
      (transition_block_1): Sequential(
        (0): BatchNorm2d(224, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
        (1): ReLU()
        (2): Conv2d(224, 112, kernel_size=(1, 1), stride=(1, 1))
        (3): AvgPool2d(kernel_size=2, stride=2, padding=0)
      )
      (DenseBlock_2): DenseBlock(
        (net): ModuleList(
          (0): Sequential(
            (0): BatchNorm2d(112, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
            (1): ReLU()
            (2): Conv2d(112, 32, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1))
          )
          (1): Sequential(
            (0): BatchNorm2d(144, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
            (1): ReLU()
            (2): Conv2d(144, 32, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1))
          )
          (2): Sequential(
            (0): BatchNorm2d(176, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
            (1): ReLU()
            (2): Conv2d(176, 32, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1))
          )
          (3): Sequential(
            (0): BatchNorm2d(208, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
            (1): ReLU()
            (2): Conv2d(208, 32, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1))
          )
        )
      )
      (transition_block_2): Sequential(
        (0): BatchNorm2d(240, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
        (1): ReLU()
        (2): Conv2d(240, 120, kernel_size=(1, 1), stride=(1, 1))
        (3): AvgPool2d(kernel_size=2, stride=2, padding=0)
      )
      (DenseBlock_3): DenseBlock(
        (net): ModuleList(
          (0): Sequential(
            (0): BatchNorm2d(120, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
            (1): ReLU()
            (2): Conv2d(120, 32, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1))
          )
          (1): Sequential(
            (0): BatchNorm2d(152, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
            (1): ReLU()
            (2): Conv2d(152, 32, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1))
          )
          (2): Sequential(
            (0): BatchNorm2d(184, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
            (1): ReLU()
            (2): Conv2d(184, 32, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1))
          )
          (3): Sequential(
            (0): BatchNorm2d(216, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
            (1): ReLU()
            (2): Conv2d(216, 32, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1))
          )
        )
      )
      (BN): BatchNorm2d(248, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
      (relu): ReLU()
      (global_avg_pool): GlobalAvgPool2d()
      (fc): Sequential(
        (0): FlattenLayer()
        (1): Linear(in_features=248, out_features=10, bias=True)
      )
    )
    """