An effective way to share, reuse and break down the complexity of your models

Updated to PyTorch 0.4.1

You can find the code here

PyTorch is an open-source deep learning framework that provides a smart way to create ML models. Even though the documentation is well made, I still find that most people write bad, unorganized PyTorch code.

Today we are going to see how to use the four main building blocks of PyTorch: Module, Sequential, ModuleList and ModuleDict. We will start with an example and iteratively make it better.

All these four classes are contained in torch.nn

    import torch.nn as nn

    # nn.Module
    # nn.Sequential
    # nn.ModuleList
    # nn.ModuleDict

Module: the main building block

The Module is the main building block; it defines the base class for all neural networks, and you must subclass it.

Let's create a classic CNN classifier as an example:

    import torch
    import torch.nn.functional as F

    class MyCNNClassifier(nn.Module):
        def __init__(self, in_c, n_classes):
            super().__init__()
            self.conv1 = nn.Conv2d(in_c, 32, kernel_size=3, stride=1, padding=1)
            self.bn1 = nn.BatchNorm2d(32)
            self.conv2 = nn.Conv2d(32, 64, kernel_size=3, stride=1, padding=1)
            self.bn2 = nn.BatchNorm2d(64)  # must match the 64 output channels of conv2
            self.fc1 = nn.Linear(64 * 28 * 28, 1024)
            self.fc2 = nn.Linear(1024, n_classes)

        def forward(self, x):
            x = self.conv1(x)
            x = self.bn1(x)
            x = F.relu(x)
            x = self.conv2(x)
            x = self.bn2(x)
            x = F.relu(x)
            x = x.view(x.size(0), -1)  # flatten
            x = self.fc1(x)
            x = torch.sigmoid(x)
            x = self.fc2(x)
            return x
    model = MyCNNClassifier(1, 10)
    print(model)
    MyCNNClassifier(
      (conv1): Conv2d(1, 32, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1))
      (bn1): BatchNorm2d(32, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
      (conv2): Conv2d(32, 64, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1))
      (bn2): BatchNorm2d(64, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
      (fc1): Linear(in_features=50176, out_features=1024, bias=True)
      (fc2): Linear(in_features=1024, out_features=10, bias=True)
    )
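
As a quick sanity check, we can run a dummy batch through the model. This is a minimal sketch assuming single-channel 28x28 inputs such as MNIST, which is what the 28 * 28 factor in fc1 implies:

    x = torch.rand(4, 1, 28, 28)  # a dummy batch of four 1x28x28 images
    out = model(x)
    print(out.shape)  # torch.Size([4, 10]), one score per class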

This is a very simple classifier with an encoding part that uses two layers of 3x3 convs + batchnorm + relu, and a decoding part with two linear layers. If you are not new to PyTorch you may have seen this type of coding before, but there are two problems.

If we want to add a layer, we have to write a lot of code again, both in the __init__ and in the forward function. Also, if we have some common block that we want to use in another model, e.g. the 3x3 conv + batchnorm + relu, we have to write it again.

Sequential: stack and merge layers

Sequential is a container of Modules that can be stacked together and run at the same time.
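
For instance, here is a tiny sketch (the sizes are arbitrary) showing that the wrapped modules are applied in the given order:

    seq = nn.Sequential(
        nn.Linear(4, 8),
        nn.ReLU(),
        nn.Linear(8, 2)
    )
    y = seq(torch.rand(1, 4))  # applies Linear -> ReLU -> Linear, in order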

You can notice that we had to store everything into self. We can use Sequential to improve our code.

    class MyCNNClassifier(nn.Module):
        def __init__(self, in_c, n_classes):
            super().__init__()
            self.conv_block1 = nn.Sequential(
                nn.Conv2d(in_c, 32, kernel_size=3, stride=1, padding=1),
                nn.BatchNorm2d(32),
                nn.ReLU()
            )

            self.conv_block2 = nn.Sequential(
                nn.Conv2d(32, 64, kernel_size=3, stride=1, padding=1),
                nn.BatchNorm2d(64),
                nn.ReLU()
            )

            self.decoder = nn.Sequential(
                nn.Linear(64 * 28 * 28, 1024),
                nn.Sigmoid(),
                nn.Linear(1024, n_classes)
            )

        def forward(self, x):
            x = self.conv_block1(x)
            x = self.conv_block2(x)
            x = x.view(x.size(0), -1)  # flatten
            x = self.decoder(x)
            return x
    model = MyCNNClassifier(1, 10)
    print(model)
    MyCNNClassifier(
      (conv_block1): Sequential(
        (0): Conv2d(1, 32, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1))
        (1): BatchNorm2d(32, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
        (2): ReLU()
      )
      (conv_block2): Sequential(
        (0): Conv2d(32, 64, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1))
        (1): BatchNorm2d(64, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
        (2): ReLU()
      )
      (decoder): Sequential(
        (0): Linear(in_features=50176, out_features=1024, bias=True)
        (1): Sigmoid()
        (2): Linear(in_features=1024, out_features=10, bias=True)
      )
    )

Much better, isn't it?

Did you notice that conv_block1 and conv_block2 look almost the same? We could create a function that returns a nn.Sequential to simplify the code even more!

    def conv_block(in_f, out_f, *args, **kwargs):
        return nn.Sequential(
            nn.Conv2d(in_f, out_f, *args, **kwargs),
            nn.BatchNorm2d(out_f),
            nn.ReLU()
        )

Then we can just call this function in our Module:

    class MyCNNClassifier(nn.Module):
        def __init__(self, in_c, n_classes):
            super().__init__()
            self.conv_block1 = conv_block(in_c, 32, kernel_size=3, padding=1)
            self.conv_block2 = conv_block(32, 64, kernel_size=3, padding=1)

            self.decoder = nn.Sequential(
                nn.Linear(64 * 28 * 28, 1024),
                nn.Sigmoid(),
                nn.Linear(1024, n_classes)
            )

        def forward(self, x):
            x = self.conv_block1(x)
            x = self.conv_block2(x)
            x = x.view(x.size(0), -1)  # flatten
            x = self.decoder(x)
            return x
    model = MyCNNClassifier(1, 10)
    print(model)
    MyCNNClassifier(
      (conv_block1): Sequential(
        (0): Conv2d(1, 32, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1))
        (1): BatchNorm2d(32, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
        (2): ReLU()
      )
      (conv_block2): Sequential(
        (0): Conv2d(32, 64, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1))
        (1): BatchNorm2d(64, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
        (2): ReLU()
      )
      (decoder): Sequential(
        (0): Linear(in_features=50176, out_features=1024, bias=True)
        (1): Sigmoid()
        (2): Linear(in_features=1024, out_features=10, bias=True)
      )
    )

Even cleaner! conv_block1 and conv_block2 are still almost the same! We can merge them using nn.Sequential

    class MyCNNClassifier(nn.Module):
        def __init__(self, in_c, n_classes):
            super().__init__()
            self.encoder = nn.Sequential(
                conv_block(in_c, 32, kernel_size=3, padding=1),
                conv_block(32, 64, kernel_size=3, padding=1)
            )

            self.decoder = nn.Sequential(
                nn.Linear(64 * 28 * 28, 1024),
                nn.Sigmoid(),
                nn.Linear(1024, n_classes)
            )

        def forward(self, x):
            x = self.encoder(x)
            x = x.view(x.size(0), -1)  # flatten
            x = self.decoder(x)
            return x
    model = MyCNNClassifier(1, 10)
    print(model)
    MyCNNClassifier(
      (encoder): Sequential(
        (0): Sequential(
          (0): Conv2d(1, 32, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1))
          (1): BatchNorm2d(32, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
          (2): ReLU()
        )
        (1): Sequential(
          (0): Conv2d(32, 64, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1))
          (1): BatchNorm2d(64, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
          (2): ReLU()
        )
      )
      (decoder): Sequential(
        (0): Linear(in_features=50176, out_features=1024, bias=True)
        (1): Sigmoid()
        (2): Linear(in_features=1024, out_features=10, bias=True)
      )
    )

self.encoder now holds both conv_blocks. We have decoupled the logic of our model and made it easier to read and reuse. Our conv_block function can be imported and used in another model.
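
For example, assuming conv_block lives in a file called blocks.py (a hypothetical layout), another model could reuse it like this:

    # hypothetical reuse in a different project file
    from blocks import conv_block

    stem = conv_block(3, 16, kernel_size=3, padding=1)  # conv + batchnorm + relu, ready to use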

Dynamic Sequential: create multiple layers at once

What if we want to add new layers to self.encoder? Hardcoding them is not convenient:

    self.encoder = nn.Sequential(
        conv_block(in_c, 32, kernel_size=3, padding=1),
        conv_block(32, 64, kernel_size=3, padding=1),
        conv_block(64, 128, kernel_size=3, padding=1),
        conv_block(128, 256, kernel_size=3, padding=1),
    )

Wouldn't it be nice if we could define the sizes as an array and automatically create all the layers without writing each one of them? Fortunately, we can create an array and pass it to Sequential.

    class MyCNNClassifier(nn.Module):
        def __init__(self, in_c, n_classes):
            super().__init__()
            self.enc_sizes = [in_c, 32, 64]

            conv_blocks = [conv_block(in_f, out_f, kernel_size=3, padding=1)
                           for in_f, out_f in zip(self.enc_sizes, self.enc_sizes[1:])]

            self.encoder = nn.Sequential(*conv_blocks)

            self.decoder = nn.Sequential(
                nn.Linear(64 * 28 * 28, 1024),
                nn.Sigmoid(),
                nn.Linear(1024, n_classes)
            )

        def forward(self, x):
            x = self.encoder(x)
            x = x.view(x.size(0), -1)  # flatten
            x = self.decoder(x)
            return x
    model = MyCNNClassifier(1, 10)
    print(model)
    MyCNNClassifier(
      (encoder): Sequential(
        (0): Sequential(
          (0): Conv2d(1, 32, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1))
          (1): BatchNorm2d(32, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
          (2): ReLU()
        )
        (1): Sequential(
          (0): Conv2d(32, 64, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1))
          (1): BatchNorm2d(64, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
          (2): ReLU()
        )
      )
      (decoder): Sequential(
        (0): Linear(in_features=50176, out_features=1024, bias=True)
        (1): Sigmoid()
        (2): Linear(in_features=1024, out_features=10, bias=True)
      )
    )

Let’s break it down. We created an array self.enc_sizes that holds the sizes of our encoder. Then we create an array conv_blocks by iterating over the sizes. Since we have to give both an input size and an output size for each layer, we zip the sizes array with itself shifted by one.

Just to be clear, take a look at the following example:

    sizes = [1, 32, 64]

    for in_f, out_f in zip(sizes, sizes[1:]):
        print(in_f, out_f)

    1 32
    32 64

Then, since Sequential does not accept a list, we unpack it with the * operator.
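
A minimal illustration of the unpacking (the layer sizes here are arbitrary):

    blocks = [nn.Linear(2, 4), nn.Linear(4, 8)]
    seq = nn.Sequential(*blocks)  # same as nn.Sequential(nn.Linear(2, 4), nn.Linear(4, 8))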

Tada! Now, if we just want to add a size, we can easily add a new number to the list. It is common practice to make the sizes a parameter.

    class MyCNNClassifier(nn.Module):
        def __init__(self, in_c, enc_sizes, n_classes):
            super().__init__()
            self.enc_sizes = [in_c, *enc_sizes]

            conv_blocks = [conv_block(in_f, out_f, kernel_size=3, padding=1)
                           for in_f, out_f in zip(self.enc_sizes, self.enc_sizes[1:])]

            self.encoder = nn.Sequential(*conv_blocks)

            self.decoder = nn.Sequential(
                nn.Linear(self.enc_sizes[-1] * 28 * 28, 1024),  # flattened size follows the last encoder channel count
                nn.Sigmoid(),
                nn.Linear(1024, n_classes)
            )

        def forward(self, x):
            x = self.encoder(x)
            x = x.view(x.size(0), -1)  # flatten
            x = self.decoder(x)
            return x
    model = MyCNNClassifier(1, [32, 64, 128], 10)
    print(model)
    MyCNNClassifier(
      (encoder): Sequential(
        (0): Sequential(
          (0): Conv2d(1, 32, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1))
          (1): BatchNorm2d(32, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
          (2): ReLU()
        )
        (1): Sequential(
          (0): Conv2d(32, 64, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1))
          (1): BatchNorm2d(64, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
          (2): ReLU()
        )
        (2): Sequential(
          (0): Conv2d(64, 128, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1))
          (1): BatchNorm2d(128, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
          (2): ReLU()
        )
      )
      (decoder): Sequential(
        (0): Linear(in_features=100352, out_features=1024, bias=True)
        (1): Sigmoid()
        (2): Linear(in_features=1024, out_features=10, bias=True)
      )
    )

We can do the same for the decoder part.

    def dec_block(in_f, out_f):
        return nn.Sequential(
            nn.Linear(in_f, out_f),
            nn.Sigmoid()
        )

    class MyCNNClassifier(nn.Module):
        def __init__(self, in_c, enc_sizes, dec_sizes, n_classes):
            super().__init__()
            self.enc_sizes = [in_c, *enc_sizes]
            self.dec_sizes = [self.enc_sizes[-1] * 28 * 28, *dec_sizes]

            conv_blocks = [conv_block(in_f, out_f, kernel_size=3, padding=1)
                           for in_f, out_f in zip(self.enc_sizes, self.enc_sizes[1:])]

            self.encoder = nn.Sequential(*conv_blocks)

            dec_blocks = [dec_block(in_f, out_f)
                          for in_f, out_f in zip(self.dec_sizes, self.dec_sizes[1:])]

            self.decoder = nn.Sequential(*dec_blocks)

            self.last = nn.Linear(self.dec_sizes[-1], n_classes)

        def forward(self, x):
            x = self.encoder(x)
            x = x.view(x.size(0), -1)  # flatten
            x = self.decoder(x)
            x = self.last(x)  # final projection to the class scores
            return x
    model = MyCNNClassifier(1, [32, 64], [1024, 512], 10)
    print(model)
    MyCNNClassifier(
      (encoder): Sequential(
        (0): Sequential(
          (0): Conv2d(1, 32, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1))
          (1): BatchNorm2d(32, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
          (2): ReLU()
        )
        (1): Sequential(
          (0): Conv2d(32, 64, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1))
          (1): BatchNorm2d(64, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
          (2): ReLU()
        )
      )
      (decoder): Sequential(
        (0): Sequential(
          (0): Linear(in_features=50176, out_features=1024, bias=True)
          (1): Sigmoid()
        )
        (1): Sequential(
          (0): Linear(in_features=1024, out_features=512, bias=True)
          (1): Sigmoid()
        )
      )
      (last): Linear(in_features=512, out_features=10, bias=True)
    )

We followed the same pattern: we create a new block for the decoding part, linear + sigmoid, and we pass an array with the sizes. We had to add a self.last since we do not want to activate the output (the raw logits usually go straight into a loss such as nn.CrossEntropyLoss).

Now we can even break down our model in two: Encoder + Decoder!

    class MyEncoder(nn.Module):
        def __init__(self, enc_sizes):
            super().__init__()
            self.conv_blocks = nn.Sequential(*[conv_block(in_f, out_f, kernel_size=3, padding=1)
                                               for in_f, out_f in zip(enc_sizes, enc_sizes[1:])])

        def forward(self, x):
            return self.conv_blocks(x)

    class MyDecoder(nn.Module):
        def __init__(self, dec_sizes, n_classes):
            super().__init__()
            self.dec_blocks = nn.Sequential(*[dec_block(in_f, out_f)
                                              for in_f, out_f in zip(dec_sizes, dec_sizes[1:])])
            self.last = nn.Linear(dec_sizes[-1], n_classes)

        def forward(self, x):
            return self.last(self.dec_blocks(x))

    class MyCNNClassifier(nn.Module):
        def __init__(self, in_c, enc_sizes, dec_sizes, n_classes):
            super().__init__()
            self.enc_sizes = [in_c, *enc_sizes]
            self.dec_sizes = [self.enc_sizes[-1] * 28 * 28, *dec_sizes]

            self.encoder = MyEncoder(self.enc_sizes)
            self.decoder = MyDecoder(self.dec_sizes, n_classes)

        def forward(self, x):
            x = self.encoder(x)
            x = x.flatten(1)  # flatten
            x = self.decoder(x)
            return x
    model = MyCNNClassifier(1, [32, 64], [1024, 512], 10)
    print(model)
    MyCNNClassifier(
      (encoder): MyEncoder(
        (conv_blocks): Sequential(
          (0): Sequential(
            (0): Conv2d(1, 32, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1))
            (1): BatchNorm2d(32, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
            (2): ReLU()
          )
          (1): Sequential(
            (0): Conv2d(32, 64, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1))
            (1): BatchNorm2d(64, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
            (2): ReLU()
          )
        )
      )
      (decoder): MyDecoder(
        (dec_blocks): Sequential(
          (0): Sequential(
            (0): Linear(in_features=50176, out_features=1024, bias=True)
            (1): Sigmoid()
          )
          (1): Sequential(
            (0): Linear(in_features=1024, out_features=512, bias=True)
            (1): Sigmoid()
          )
        )
        (last): Linear(in_features=512, out_features=10, bias=True)
      )
    )

Be aware that MyEncoder and MyDecoder could also be functions that return a nn.Sequential. I prefer to use the first pattern for models and the second for building blocks.
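
For instance, here is a sketch of the functional variant of the encoder (the name my_encoder is just for illustration):

    def my_encoder(enc_sizes):
        # returns a plain nn.Sequential instead of a Module subclass
        return nn.Sequential(*[conv_block(in_f, out_f, kernel_size=3, padding=1)
                               for in_f, out_f in zip(enc_sizes, enc_sizes[1:])])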

By dividing our module into submodules, it is easier to share the code, debug it and test it.
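
For example, the encoder can now be shape-tested in isolation (again assuming 28x28 inputs):

    enc = MyEncoder([1, 32, 64])
    out = enc(torch.rand(4, 1, 28, 28))
    assert out.shape == (4, 64, 28, 28)  # the 3x3, padding=1 convs preserve the spatial size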

ModuleList: when we need to iterate

ModuleList allows you to store Modules as a list. It can be useful when you need to iterate through the layers and store or use some information, as in a U-Net.

The main difference from Sequential is that ModuleList does not have a forward method, so the inner layers are not connected. Assuming we need the output of each layer, we can store it like this:

    class MyModule(nn.Module):
        def __init__(self, sizes):
            super().__init__()
            self.layers = nn.ModuleList([nn.Linear(in_f, out_f)
                                         for in_f, out_f in zip(sizes, sizes[1:])])
            self.trace = []

        def forward(self, x):
            for layer in self.layers:
                x = layer(x)
                self.trace.append(x)
            return x
    import torch

    model = MyModule([1, 16, 32])
    model(torch.rand((4, 1)))

    [print(trace.shape) for trace in model.trace]

    torch.Size([4, 16])
    torch.Size([4, 32])

    [None, None]
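
One caveat of this sketch: self.trace keeps growing across calls, so a second forward pass appends its activations after the old ones. If you only want the trace of the latest call, reset the list at the top of forward, as in this hypothetical variant:

    class MyModuleLatest(MyModule):  # hypothetical subclass keeping only the last trace
        def forward(self, x):
            self.trace = []  # reset so trace holds only this call's activations
            for layer in self.layers:
                x = layer(x)
                self.trace.append(x)
            return x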

ModuleDict: when we need to choose

What if we want to switch to LeakyReLU in our conv_block? We can use ModuleDict to create a dictionary of Modules and dynamically switch between them when we want.

    def conv_block(in_f, out_f, activation='relu', *args, **kwargs):
        activations = nn.ModuleDict([
            ['lrelu', nn.LeakyReLU()],
            ['relu', nn.ReLU()]
        ])

        return nn.Sequential(
            nn.Conv2d(in_f, out_f, *args, **kwargs),
            nn.BatchNorm2d(out_f),
            activations[activation]
        )
    print(conv_block(1, 32, 'lrelu', kernel_size=3, padding=1))
    print(conv_block(1, 32, 'relu', kernel_size=3, padding=1))
    Sequential(
      (0): Conv2d(1, 32, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1))
      (1): BatchNorm2d(32, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
      (2): LeakyReLU(negative_slope=0.01)
    )
    Sequential(
      (0): Conv2d(1, 32, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1))
      (1): BatchNorm2d(32, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
      (2): ReLU()
    )

Final implementation

Let’s wrap everything up!

    def conv_block(in_f, out_f, activation='relu', *args, **kwargs):
        activations = nn.ModuleDict([
            ['lrelu', nn.LeakyReLU()],
            ['relu', nn.ReLU()]
        ])

        return nn.Sequential(
            nn.Conv2d(in_f, out_f, *args, **kwargs),
            nn.BatchNorm2d(out_f),
            activations[activation]
        )

    def dec_block(in_f, out_f):
        return nn.Sequential(
            nn.Linear(in_f, out_f),
            nn.Sigmoid()
        )

    class MyEncoder(nn.Module):
        def __init__(self, enc_sizes, *args, **kwargs):
            super().__init__()
            self.conv_blocks = nn.Sequential(*[conv_block(in_f, out_f, *args, kernel_size=3, padding=1, **kwargs)
                                               for in_f, out_f in zip(enc_sizes, enc_sizes[1:])])

        def forward(self, x):
            return self.conv_blocks(x)

    class MyDecoder(nn.Module):
        def __init__(self, dec_sizes, n_classes):
            super().__init__()
            self.dec_blocks = nn.Sequential(*[dec_block(in_f, out_f)
                                              for in_f, out_f in zip(dec_sizes, dec_sizes[1:])])
            self.last = nn.Linear(dec_sizes[-1], n_classes)

        def forward(self, x):
            return self.last(self.dec_blocks(x))

    class MyCNNClassifier(nn.Module):
        def __init__(self, in_c, enc_sizes, dec_sizes, n_classes, activation='relu'):
            super().__init__()
            self.enc_sizes = [in_c, *enc_sizes]
            self.dec_sizes = [self.enc_sizes[-1] * 28 * 28, *dec_sizes]

            self.encoder = MyEncoder(self.enc_sizes, activation=activation)
            self.decoder = MyDecoder(self.dec_sizes, n_classes)

        def forward(self, x):
            x = self.encoder(x)
            x = x.flatten(1)  # flatten
            x = self.decoder(x)
            return x
    model = MyCNNClassifier(1, [32, 64], [1024, 512], 10, activation='lrelu')
    print(model)
    MyCNNClassifier(
      (encoder): MyEncoder(
        (conv_blocks): Sequential(
          (0): Sequential(
            (0): Conv2d(1, 32, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1))
            (1): BatchNorm2d(32, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
            (2): LeakyReLU(negative_slope=0.01)
          )
          (1): Sequential(
            (0): Conv2d(32, 64, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1))
            (1): BatchNorm2d(64, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
            (2): LeakyReLU(negative_slope=0.01)
          )
        )
      )
      (decoder): MyDecoder(
        (dec_blocks): Sequential(
          (0): Sequential(
            (0): Linear(in_features=50176, out_features=1024, bias=True)
            (1): Sigmoid()
          )
          (1): Sequential(
            (0): Linear(in_features=1024, out_features=512, bias=True)
            (1): Sigmoid()
          )
        )
        (last): Linear(in_features=512, out_features=10, bias=True)
      )
    )

Conclusion

So, in summary.

  • Use Module when you have a big block composed of multiple smaller blocks

  • Use Sequential when you want to create a small block from layers

  • Use ModuleList when you need to iterate through some layers or building blocks and do something with them

  • Use ModuleDict when you need to parametrise some blocks of your model, for example an activation function

That’s all folks!

Thank you for reading