# Network in Network (NiN)

:label:`sec_nin`

LeNet, AlexNet, and VGG all share a common design pattern: extract features exploiting spatial structure via a sequence of convolution and pooling layers and then post-process the representations via fully-connected layers. The improvements upon LeNet by AlexNet and VGG mainly lie in how these later networks widen and deepen these two modules. Alternatively, one could imagine using fully-connected layers earlier in the process. However, a careless use of dense layers might give up the spatial structure of the representation entirely; *network in network* (*NiN*) blocks offer an alternative. They were proposed based on a very simple insight: to use an MLP on the channels for each pixel separately :cite:`Lin.Chen.Yan.2013`.

## NiN Blocks

Recall that the inputs and outputs of convolutional layers consist of four-dimensional tensors with axes corresponding to the example, channel, height, and width. Also recall that the inputs and outputs of fully-connected layers are typically two-dimensional tensors corresponding to the example and feature. The idea behind NiN is to apply a fully-connected layer at each pixel location (for each height and width). If we tie the weights across each spatial location, we could think of this as a $1\times 1$ convolutional layer (as described in :numref:`sec_channels`) or as a fully-connected layer acting independently on each pixel location. Another way to view this is to think of each element in the spatial dimension (height and width) as equivalent to an example and a channel as equivalent to a feature.
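For instance, the following minimal sketch (written in PyTorch with a small random tensor, purely for illustration) copies the weights of a $1\times 1$ convolution into a fully-connected layer and applies that layer to every pixel position; both paths produce the same output.

```python
import torch
from torch import nn

# A toy input: batch of 2 examples, 3 channels, 4x4 spatial resolution
X = torch.rand(2, 3, 4, 4)

# A 1x1 convolution mapping 3 channels to 5 channels
conv = nn.Conv2d(3, 5, kernel_size=1)

# A fully-connected layer with the same weights, mapping 3 features to 5
fc = nn.Linear(3, 5)
with torch.no_grad():
    fc.weight.copy_(conv.weight.reshape(5, 3))
    fc.bias.copy_(conv.bias)

Y_conv = conv(X)
# Treat every pixel as a separate "example" with 3 features: move channels
# last, apply the dense layer, then move channels back
Y_fc = fc(X.permute(0, 2, 3, 1)).permute(0, 3, 1, 2)
print(torch.allclose(Y_conv, Y_fc, atol=1e-6))  # True
```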

:numref:`fig_nin` illustrates the main structural differences between VGG and NiN, and their blocks. The NiN block consists of one convolutional layer followed by two $1\times 1$ convolutional layers that act as per-pixel fully-connected layers with ReLU activations. The convolution window shape of the first layer is typically set by the user. The subsequent window shapes are fixed to $1 \times 1$.

Comparing architectures of VGG and NiN, and their blocks.
:width:`600px`
:label:`fig_nin`

```{.python .input}
from d2l import mxnet as d2l
from mxnet import np, npx
from mxnet.gluon import nn
npx.set_np()

def nin_block(num_channels, kernel_size, strides, padding):
    blk = nn.Sequential()
    blk.add(nn.Conv2D(num_channels, kernel_size, strides, padding,
                      activation='relu'),
            nn.Conv2D(num_channels, kernel_size=1, activation='relu'),
            nn.Conv2D(num_channels, kernel_size=1, activation='relu'))
    return blk
```

```{.python .input}
#@tab pytorch
from d2l import torch as d2l
import torch
from torch import nn

def nin_block(in_channels, out_channels, kernel_size, strides, padding):
    return nn.Sequential(
        nn.Conv2d(in_channels, out_channels, kernel_size, strides, padding),
        nn.ReLU(),
        nn.Conv2d(out_channels, out_channels, kernel_size=1), nn.ReLU(),
        nn.Conv2d(out_channels, out_channels, kernel_size=1), nn.ReLU())
```

```{.python .input}
#@tab tensorflow
from d2l import tensorflow as d2l
import tensorflow as tf

def nin_block(num_channels, kernel_size, strides, padding):
    return tf.keras.models.Sequential([
        tf.keras.layers.Conv2D(num_channels, kernel_size, strides=strides,
                               padding=padding, activation='relu'),
        tf.keras.layers.Conv2D(num_channels, kernel_size=1,
                               activation='relu'),
        tf.keras.layers.Conv2D(num_channels, kernel_size=1,
                               activation='relu')])
```

## NiN Model

The original NiN network was proposed shortly after AlexNet and clearly draws some inspiration from it. NiN uses convolutional layers with window shapes of $11\times 11$, $5\times 5$, and $3\times 3$, and the corresponding numbers of output channels are the same as in AlexNet. Each NiN block is followed by a maximum pooling layer with a stride of 2 and a window shape of $3\times 3$.

One significant difference between NiN and AlexNet is that NiN avoids fully-connected layers altogether. Instead, NiN uses an NiN block with a number of output channels equal to the number of label classes, followed by a *global* average pooling layer, yielding a vector of logits. One advantage of NiN's design is that it significantly reduces the number of required model parameters. However, in practice, this design sometimes requires increased model training time.
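To make the last point concrete, here is a small sketch (in PyTorch, with a hypothetical $10\times 5\times 5$ feature map whose shape is chosen purely for illustration) of how global average pooling turns per-class feature maps into a vector of logits while contributing no parameters of its own.

```python
import torch
from torch import nn

# Hypothetical feature maps: batch of 1, 10 channels (one per label class),
# 5x5 spatial resolution
feature_maps = torch.rand(1, 10, 5, 5)

# Global average pooling averages each channel over all spatial locations,
# and flattening yields one logit per class
pool = nn.AdaptiveAvgPool2d((1, 1))
logits = nn.Flatten()(pool(feature_maps))
print(logits.shape)  # torch.Size([1, 10])

# Unlike a fully-connected head, this step has no learnable parameters
print(sum(p.numel() for p in pool.parameters()))  # 0
```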
```{.python .input}
net = nn.Sequential()
net.add(nin_block(96, kernel_size=11, strides=4, padding=0),
        nn.MaxPool2D(pool_size=3, strides=2),
        nin_block(256, kernel_size=5, strides=1, padding=2),
        nn.MaxPool2D(pool_size=3, strides=2),
        nin_block(384, kernel_size=3, strides=1, padding=1),
        nn.MaxPool2D(pool_size=3, strides=2),
        nn.Dropout(0.5),
        # There are 10 label classes
        nin_block(10, kernel_size=3, strides=1, padding=1),
        # The global average pooling layer automatically sets the window shape
        # to the height and width of the input
        nn.GlobalAvgPool2D(),
        # Transform the four-dimensional output into two-dimensional output
        # with a shape of (batch size, 10)
        nn.Flatten())
```

```{.python .input}
#@tab pytorch
net = nn.Sequential(
    nin_block(1, 96, kernel_size=11, strides=4, padding=0),
    nn.MaxPool2d(3, stride=2),
    nin_block(96, 256, kernel_size=5, strides=1, padding=2),
    nn.MaxPool2d(3, stride=2),
    nin_block(256, 384, kernel_size=3, strides=1, padding=1),
    nn.MaxPool2d(3, stride=2),
    nn.Dropout(0.5),
    # There are 10 label classes
    nin_block(384, 10, kernel_size=3, strides=1, padding=1),
    nn.AdaptiveAvgPool2d((1, 1)),
    # Transform the four-dimensional output into two-dimensional output with a
    # shape of (batch size, 10)
    nn.Flatten())
```
```{.python .input}
#@tab tensorflow
def net():
    return tf.keras.models.Sequential([
        nin_block(96, kernel_size=11, strides=4, padding='valid'),
        tf.keras.layers.MaxPool2D(pool_size=3, strides=2),
        nin_block(256, kernel_size=5, strides=1, padding='same'),
        tf.keras.layers.MaxPool2D(pool_size=3, strides=2),
        nin_block(384, kernel_size=3, strides=1, padding='same'),
        tf.keras.layers.MaxPool2D(pool_size=3, strides=2),
        tf.keras.layers.Dropout(0.5),
        # There are 10 label classes
        nin_block(10, kernel_size=3, strides=1, padding='same'),
        tf.keras.layers.GlobalAveragePooling2D(),
        tf.keras.layers.Reshape((1, 1, 10)),
        # Transform the four-dimensional output into two-dimensional output
        # with a shape of (batch size, 10)
        tf.keras.layers.Flatten(),
    ])
```

We create a data example to see the output shape of each block.

```{.python .input}
X = np.random.uniform(size=(1, 1, 224, 224))
net.initialize()
for layer in net:
    X = layer(X)
    print(layer.name, 'output shape:\t', X.shape)
```

```{.python .input}
#@tab pytorch
X = torch.rand(size=(1, 1, 224, 224))
for layer in net:
    X = layer(X)
    print(layer.__class__.__name__, 'output shape:\t', X.shape)
```

```{.python .input}
#@tab tensorflow
X = tf.random.uniform((1, 224, 224, 1))
for layer in net().layers:
    X = layer(X)
    print(layer.__class__.__name__, 'output shape:\t', X.shape)
```

## Training

As before, we use Fashion-MNIST to train the model. NiN's training is similar to that for AlexNet and VGG.

```{.python .input}
#@tab all
lr, num_epochs, batch_size = 0.1, 10, 128
train_iter, test_iter = d2l.load_data_fashion_mnist(batch_size, resize=224)
d2l.train_ch6(net, train_iter, test_iter, num_epochs, lr)
```

## Summary

* NiN uses blocks consisting of a convolutional layer and multiple $1\times 1$ convolutional layers. This can be used within the convolutional stack to allow for more per-pixel nonlinearity.
* NiN removes the fully-connected layers and replaces them with global average pooling (i.e., averaging over all spatial locations) after reducing the number of channels to the desired number of outputs (e.g., 10 for Fashion-MNIST).
* Removing the fully-connected layers reduces overfitting. NiN has dramatically fewer parameters.
* The NiN design influenced many subsequent CNN designs.

## Exercises

  1. Tune the hyperparameters to improve the classification accuracy.
  2. Why are there two $1\times 1$ convolutional layers in the NiN block? Remove one of them, and then observe and analyze the experimental phenomena.
  3. Calculate the resource usage for NiN.
    1. What is the number of parameters?
    2. What is the amount of computation?
    3. What is the amount of memory needed during training?
    4. What is the amount of memory needed during prediction?
  4. What are possible problems with reducing the $384 \times 5 \times 5$ representation to a $10 \times 5 \times 5$ representation in one step?

:begin_tab:`mxnet`
Discussions
:end_tab:

:begin_tab:`pytorch`
Discussions
:end_tab:

:begin_tab:`tensorflow`
Discussions
:end_tab: