Custom Layers

One factor behind deep learning’s success is the availability of a wide range of layers that can be composed in creative ways to design architectures suitable for a wide variety of tasks. For instance, researchers have invented layers specifically for handling images, text, looping over sequential data, and performing dynamic programming. Sooner or later, you will encounter or invent a layer that does not exist yet in the deep learning framework. In these cases, you must build a custom layer. In this section, we show you how.

Layers without Parameters

To start, we construct a custom layer that does not have any parameters of its own. This should look familiar if you recall our introduction to blocks in :numref:`sec_model_construction`. The following `CenteredLayer` class simply subtracts the mean from its input. To build it, we need only inherit from the base layer class and implement the forward propagation function.

```{.python .input}
from mxnet import gluon, np, npx
from mxnet.gluon import nn
npx.set_np()

class CenteredLayer(nn.Block):
    def __init__(self, **kwargs):
        super().__init__(**kwargs)

    def forward(self, X):
        return X - X.mean()
```

```{.python .input}
#@tab pytorch
import torch
from torch import nn

class CenteredLayer(nn.Module):
    def __init__(self):
        super().__init__()

    def forward(self, X):
        return X - X.mean()
```

```{.python .input}
#@tab tensorflow
import tensorflow as tf

class CenteredLayer(tf.keras.Model):
    def __init__(self):
        super().__init__()

    def call(self, inputs):
        return inputs - tf.reduce_mean(inputs)
```
Let us verify that our layer works as intended by feeding some data through it.

```{.python .input}
layer = CenteredLayer()
layer(np.array([1, 2, 3, 4, 5]))
```

```{.python .input}
#@tab pytorch
layer = CenteredLayer()
layer(torch.FloatTensor([1, 2, 3, 4, 5]))
```

```{.python .input}
#@tab tensorflow
layer = CenteredLayer()
layer(tf.constant([1, 2, 3, 4, 5]))
```

We can now incorporate our layer as a component in constructing more complex models.

```{.python .input}
net = nn.Sequential()
net.add(nn.Dense(128), CenteredLayer())
net.initialize()
```

```{.python .input}
#@tab pytorch
net = nn.Sequential(nn.Linear(8, 128), CenteredLayer())
```

```{.python .input}
#@tab tensorflow
net = tf.keras.Sequential([tf.keras.layers.Dense(128), CenteredLayer()])
```

As an extra sanity check, we can send random data through the network and check that the mean is in fact 0. Because we are dealing with floating point numbers, we may still see a very small nonzero number due to finite floating point precision.

```{.python .input}
Y = net(np.random.uniform(size=(4, 8)))
Y.mean()
```

```{.python .input}
#@tab pytorch
Y = net(torch.rand(4, 8))
Y.mean()
```

```{.python .input}
#@tab tensorflow
Y = net(tf.random.uniform((4, 8)))
tf.reduce_mean(Y)
```

Layers with Parameters

Now that we know how to define simple layers, let us move on to defining layers with parameters that can be adjusted through training. We can use built-in functions to create parameters, which provide some basic housekeeping functionality. In particular, they govern access, initialization, sharing, saving, and loading model parameters. This way, among other benefits, we will not need to write custom serialization routines for every custom layer.
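
As a quick illustration of this housekeeping, here is a minimal PyTorch sketch (the `ScaleLayer` class and the `'scale.params'` file name are invented for this example only): any parameter registered through the built-in mechanism shows up in the layer's `state_dict` and can be saved and restored without any custom serialization code.

```{.python .input}
#@tab pytorch
import torch
from torch import nn

class ScaleLayer(nn.Module):
    """Illustrative toy layer: a single learnable scale parameter."""
    def __init__(self):
        super().__init__()
        self.scale = nn.Parameter(torch.ones(1))  # registered automatically

    def forward(self, X):
        return self.scale * X

layer = ScaleLayer()
torch.save(layer.state_dict(), 'scale.params')    # built-in serialization
clone = ScaleLayer()
clone.load_state_dict(torch.load('scale.params'))  # parameters restored
```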

Now let us implement our own version of the fully-connected layer. Recall that this layer requires two parameters, one to represent the weight and the other for the bias. In this implementation, we bake in the ReLU activation as a default. This layer requires two input arguments: `in_units` and `units`, which denote the number of inputs and outputs, respectively.
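
Concretely, for a minibatch $\mathbf{X} \in \mathbb{R}^{n \times d_\text{in}}$, the version with the ReLU default computes

$$\mathbf{Y} = \mathrm{ReLU}(\mathbf{X}\mathbf{W} + \mathbf{b}), \quad \text{where } \mathbf{W} \in \mathbb{R}^{d_\text{in} \times d_\text{out}} \text{ and } \mathbf{b} \in \mathbb{R}^{d_\text{out}},$$

with $d_\text{in}$ corresponding to `in_units` and $d_\text{out}$ to `units`.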

```{.python .input}
class MyDense(nn.Block):
    def __init__(self, units, in_units, **kwargs):
        super().__init__(**kwargs)
        self.weight = self.params.get('weight', shape=(in_units, units))
        self.bias = self.params.get('bias', shape=(units,))

    def forward(self, x):
        linear = np.dot(x, self.weight.data(ctx=x.ctx)) + self.bias.data(
            ctx=x.ctx)
        return npx.relu(linear)
```
```{.python .input}
#@tab pytorch
class MyLinear(nn.Module):
    def __init__(self, in_units, units):
        super().__init__()
        self.weight = nn.Parameter(torch.randn(in_units, units))
        self.bias = nn.Parameter(torch.randn(units,))

    def forward(self, X):
        return torch.matmul(X, self.weight.data) + self.bias.data
```

```{.python .input}
#@tab tensorflow
class MyDense(tf.keras.Model):
    def __init__(self, units):
        super().__init__()
        self.units = units

    def build(self, X_shape):
        self.weight = self.add_weight(name='weight',
            shape=[X_shape[-1], self.units],
            initializer=tf.random_normal_initializer())
        self.bias = self.add_weight(
            name='bias', shape=[self.units],
            initializer=tf.zeros_initializer())

    def call(self, X):
        return tf.matmul(X, self.weight) + self.bias
```
Next, we instantiate the `MyDense` class and access its model parameters.

```{.python .input}
dense = MyDense(units=3, in_units=5)
dense.params
```

```{.python .input}
#@tab pytorch
dense = MyLinear(5, 3)
dense.weight
```

```{.python .input}
#@tab tensorflow
dense = MyDense(3)
dense(tf.random.uniform((2, 5)))
dense.get_weights()
```

We can directly carry out forward propagation calculations using custom layers.

```{.python .input}
dense.initialize()
dense(np.random.uniform(size=(2, 5)))
```

```{.python .input}
#@tab pytorch
dense(torch.randn(2, 5))
```

```{.python .input}
#@tab tensorflow
dense(tf.random.uniform((2, 5)))
```

We can also construct models using custom layers. Once defined, we can use them just like the built-in fully-connected layer.

```{.python .input}
net = nn.Sequential()
net.add(MyDense(8, in_units=64),
        MyDense(1, in_units=8))
net.initialize()
net(np.random.uniform(size=(2, 64)))
```

```{.python .input}
#@tab pytorch
net = nn.Sequential(MyLinear(64, 8), nn.ReLU(), MyLinear(8, 1))
net(torch.randn(2, 64))
```

```{.python .input}
#@tab tensorflow
net = tf.keras.models.Sequential([MyDense(8), MyDense(1)])
net(tf.random.uniform((2, 64)))
```
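
Custom layers can also be nested inside other custom blocks, just like the blocks of :numref:`sec_model_construction`. Below is a minimal sketch (PyTorch variant only; the `CenteredLinear` name is invented for illustration) that combines the parameterized `MyLinear` with the parameterless `CenteredLayer` from above.

```{.python .input}
#@tab pytorch
class CenteredLinear(nn.Module):
    """Illustrative block: a custom linear layer followed by mean-centering."""
    def __init__(self, in_units, units):
        super().__init__()
        self.linear = MyLinear(in_units, units)  # custom layer with parameters
        self.center = CenteredLayer()            # custom layer without parameters

    def forward(self, X):
        return self.center(self.linear(X))

net = nn.Sequential(CenteredLinear(64, 8), nn.ReLU(), MyLinear(8, 1))
net(torch.randn(2, 64))
```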

Summary

* We can design custom layers via the basic layer class. This allows us to define flexible new layers that behave differently from any existing layers in the library.
* Once defined, custom layers can be invoked in arbitrary contexts and architectures.
* Layers can have local parameters, which can be created through built-in functions.

Exercises

1. Design a layer that takes an input and computes a tensor reduction, i.e., it returns $y_k = \sum_{i, j} W_{ijk} x_i x_j$.
2. Design a layer that returns the leading half of the Fourier coefficients of the data.

:begin_tab:`mxnet`
Discussions
:end_tab:

:begin_tab:`pytorch`
Discussions
:end_tab:

:begin_tab:`tensorflow`
Discussions
:end_tab: