# Sentiment Analysis: Using Convolutional Neural Networks

:label:`sec_sentiment_cnn`

In :numref:`chap_cnn`, we investigated mechanisms for processing two-dimensional image data with two-dimensional CNNs, which were applied to local features such as adjacent pixels. Though originally designed for computer vision, CNNs are also widely used for natural language processing. Simply put, just think of any text sequence as a one-dimensional image. In this way, one-dimensional CNNs can process local features such as $n$-grams in text.

In this section, we will use the textCNN model to demonstrate how to design a CNN architecture for representing single text :cite:`Kim.2014`. Compared with :numref:`fig_nlp-map-sa-rnn` that uses an RNN architecture with GloVe pretraining for sentiment analysis, the only difference in :numref:`fig_nlp-map-sa-cnn` lies in the choice of the architecture.

![This section feeds pretrained GloVe to a CNN-based architecture for sentiment analysis.](../img/nlp-map-sa-cnn.svg)
:label:`fig_nlp-map-sa-cnn`

```{.python .input}
from d2l import mxnet as d2l
from mxnet import gluon, init, np, npx
from mxnet.gluon import nn
npx.set_np()

batch_size = 64
train_iter, test_iter, vocab = d2l.load_data_imdb(batch_size)
```

```{.python .input}
#@tab pytorch
from d2l import torch as d2l
import torch
from torch import nn

batch_size = 64
train_iter, test_iter, vocab = d2l.load_data_imdb(batch_size)
```

## One-Dimensional Convolutions

Before introducing the model, let’s see how a one-dimensional convolution works. Bear in mind that it is just a special case of a two-dimensional convolution based on the cross-correlation operation.

![One-dimensional cross-correlation operation. The shaded portions are the first output element as well as the input and kernel tensor elements used for the output computation: $0\times1+1\times2=2$.](../img/conv1d.svg)
:label:`fig_conv1d`

As shown in :numref:`fig_conv1d`, in the one-dimensional case, the convolution window slides from left to right across the input tensor. During sliding, the input subtensor (e.g., $0$ and $1$ in :numref:`fig_conv1d`) contained in the convolution window at a certain position and the kernel tensor (e.g., $1$ and $2$ in :numref:`fig_conv1d`) are multiplied elementwise. The sum of these multiplications gives the single scalar value (e.g., $0\times1+1\times2=2$ in :numref:`fig_conv1d`) at the corresponding position of the output tensor.

We implement one-dimensional cross-correlation in the following `corr1d` function. Given an input tensor `X` and a kernel tensor `K`, it returns the output tensor `Y`.

```{.python .input}
#@tab all
def corr1d(X, K):
    w = K.shape[0]
    Y = d2l.zeros(X.shape[0] - w + 1)
    for i in range(Y.shape[0]):
        Y[i] = (X[i: i + w] * K).sum()
    return Y
```

We can construct the input tensor `X` and the kernel tensor `K` from :numref:`fig_conv1d` to validate the output of the above one-dimensional cross-correlation implementation.

```{.python .input}
#@tab all
X, K = d2l.tensor([0, 1, 2, 3, 4, 5, 6]), d2l.tensor([1, 2])
corr1d(X, K)
```

For any one-dimensional input with multiple channels, the convolution kernel needs to have the same number of input channels. Then for each channel, perform a cross-correlation operation on the one-dimensional tensor of the input and the one-dimensional tensor of the convolution kernel, summing the results over all the channels to produce the one-dimensional output tensor. :numref:`fig_conv1d_channel` shows a one-dimensional cross-correlation operation with 3 input channels.

![One-dimensional cross-correlation operation with 3 input channels. The shaded portions are the first output element as well as the input and kernel tensor elements used for the output computation: $0\times1+1\times2+1\times3+2\times4+2\times(-1)+3\times(-3)=2$.](../img/conv1d-channel.svg)
:label:`fig_conv1d_channel`

We can implement the one-dimensional cross-correlation operation for multiple input channels and validate the results in :numref:`fig_conv1d_channel`.

```{.python .input}
#@tab all
def corr1d_multi_in(X, K):
    # First, iterate through the 0th dimension (channel dimension) of `X` and
    # `K`. Then, add them together
    return sum(corr1d(x, k) for x, k in zip(X, K))

X = d2l.tensor([[0, 1, 2, 3, 4, 5, 6],
                [1, 2, 3, 4, 5, 6, 7],
                [2, 3, 4, 5, 6, 7, 8]])
K = d2l.tensor([[1, 2], [3, 4], [-1, -3]])
corr1d_multi_in(X, K)
```

Note that multi-input-channel one-dimensional cross-correlations are equivalent to single-input-channel two-dimensional cross-correlations. To illustrate, an equivalent form of the multi-input-channel one-dimensional cross-correlation in :numref:`fig_conv1d_channel` is the single-input-channel two-dimensional cross-correlation in :numref:`fig_conv1d_2d`, where the height of the convolution kernel has to be the same as that of the input tensor.

![Two-dimensional cross-correlation operation with a single input channel. The shaded portions are the first output element as well as the input and kernel tensor elements used for the output computation: $2\times(-1)+3\times(-3)+1\times3+2\times4+0\times1+1\times2=2$.](../img/conv1d-2d.svg)
:label:`fig_conv1d_2d`
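To make this equivalence concrete, here is a small illustrative check (not part of the original text): it defines a minimal `corr2d` helper inline and applies it to the same `X` and `K` as above, now viewed as single-channel two-dimensional tensors. Since the kernel height equals the input height, the output has height 1 and matches `corr1d_multi_in(X, K)`.

```{.python .input}
#@tab all
def corr2d(X, K):
    # Minimal two-dimensional cross-correlation, just for this check
    h, w = K.shape
    Y = d2l.zeros((X.shape[0] - h + 1, X.shape[1] - w + 1))
    for i in range(Y.shape[0]):
        for j in range(Y.shape[1]):
            Y[i, j] = (X[i: i + h, j: j + w] * K).sum()
    return Y

corr2d(X, K)  # A single row equal to corr1d_multi_in(X, K)
```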
Both the outputs in :numref:`fig_conv1d` and :numref:`fig_conv1d_channel` have only one channel. Same as two-dimensional convolutions with multiple output channels described in :numref:`subsec_multi-output-channels`, we can also specify multiple output channels for one-dimensional convolutions.

## Max-Over-Time Pooling
Similarly, we can use pooling to extract the highest value from sequence representations as the most important feature across time steps. The *max-over-time pooling* used in textCNN works like the one-dimensional global max-pooling :cite:`Collobert.Weston.Bottou.ea.2011`. For a multi-channel input where each channel stores values at different time steps, the output at each channel is the maximum value for that channel. Note that max-over-time pooling allows different numbers of time steps at different channels.
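As a minimal illustration (not part of the original text), the snippet below applies max-over-time pooling to two channels with different numbers of time steps; each channel is reduced to its single largest value.

```{.python .input}
#@tab all
# Two channels with different numbers of time steps
channel_a = d2l.tensor([1.0, 5.0, 2.0])       # 3 time steps
channel_b = d2l.tensor([0.0, 7.0, 3.0, 4.0])  # 4 time steps
# Max-over-time pooling keeps one value per channel
channel_a.max(), channel_b.max()
```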
## The textCNN Model
Using the one-dimensional convolution and max-over-time pooling, the textCNN model takes individual pretrained token representations as the input, then obtains and transforms sequence representations for the downstream application.

For a single text sequence with $n$ tokens represented by $d$-dimensional vectors, the width, height, and number of channels of the input tensor are $n$, $1$, and $d$, respectively. The textCNN model transforms the input into the output as follows:

1. Define multiple one-dimensional convolution kernels and perform convolution operations separately on the inputs. Convolution kernels with different widths may capture local features among different numbers of adjacent tokens.
1. Perform max-over-time pooling on all the output channels, and then concatenate all the scalar pooling outputs as a vector.
1. Transform the concatenated vector into the output categories using the fully connected layer. Dropout can be used for reducing overfitting.

![The model architecture of textCNN.](../img/textcnn.svg)
:label:`fig_conv1d_textcnn`
:numref:`fig_conv1d_textcnn` illustrates the model architecture of textCNN with a concrete example. The input is a sentence with 11 tokens, where each token is represented by a 6-dimensional vector. So we have a 6-channel input with width 11. Define two one-dimensional convolution kernels of widths 2 and 4, with 4 and 5 output channels, respectively. They produce 4 output channels with width $11-2+1=10$ and 5 output channels with width $11-4+1=8$. Despite the different widths of these 9 channels, max-over-time pooling gives a concatenated 9-dimensional vector, which is finally transformed into a 2-dimensional output vector for binary sentiment predictions.
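As a quick sanity check of these widths and dimensions (an illustrative computation, not part of the original text):

```{.python .input}
#@tab all
n, kernel_widths, out_channels = 11, [2, 4], [4, 5]
conv_widths = [n - k + 1 for k in kernel_widths]  # [10, 8]
pooled_dim = sum(out_channels)  # 9-dimensional vector after pooling
conv_widths, pooled_dim
```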
### Defining the Model
We implement the textCNN model in the following class. Compared with the bidirectional RNN model in :numref:`sec_sentiment_rnn`, besides replacing recurrent layers with convolutional layers, we also use two embedding layers: one with trainable weights and the other with fixed weights.
```{.python .input}
class TextCNN(nn.Block):
    def __init__(self, vocab_size, embed_size, kernel_sizes, num_channels,
                 **kwargs):
        super(TextCNN, self).__init__(**kwargs)
        self.embedding = nn.Embedding(vocab_size, embed_size)
        # This embedding layer will not be trained
        self.constant_embedding = nn.Embedding(vocab_size, embed_size)
        self.dropout = nn.Dropout(0.5)
        self.decoder = nn.Dense(2)
        # The max-over-time pooling layer has no parameters, so this instance
        # can be shared
        self.pool = nn.GlobalMaxPool1D()
        # Create multiple one-dimensional convolutional layers
        self.convs = nn.Sequential()
        for c, k in zip(num_channels, kernel_sizes):
            self.convs.add(nn.Conv1D(c, k, activation='relu'))

    def forward(self, inputs):
        # Concatenate two embedding layer outputs with shape (batch size, no.
        # of tokens, token vector dimension) along vectors
        embeddings = np.concatenate((
            self.embedding(inputs), self.constant_embedding(inputs)), axis=2)
        # Per the input format of one-dimensional convolutional layers,
        # rearrange the tensor so that the second dimension stores channels
        embeddings = embeddings.transpose(0, 2, 1)
        # For each one-dimensional convolutional layer, after max-over-time
        # pooling, a tensor of shape (batch size, no. of channels, 1) is
        # obtained. Remove the last dimension and concatenate along channels
        encoding = np.concatenate([
            np.squeeze(self.pool(conv(embeddings)), axis=-1)
            for conv in self.convs], axis=1)
        outputs = self.decoder(self.dropout(encoding))
        return outputs
```

```{.python .input}
#@tab pytorch
class TextCNN(nn.Module):
    def __init__(self, vocab_size, embed_size, kernel_sizes, num_channels,
                 **kwargs):
        super(TextCNN, self).__init__(**kwargs)
        self.embedding = nn.Embedding(vocab_size, embed_size)
        # This embedding layer will not be trained
        self.constant_embedding = nn.Embedding(vocab_size, embed_size)
        self.dropout = nn.Dropout(0.5)
        self.decoder = nn.Linear(sum(num_channels), 2)
        # The max-over-time pooling layer has no parameters, so this instance
        # can be shared
        self.pool = nn.AdaptiveMaxPool1d(1)
        self.relu = nn.ReLU()
        # Create multiple one-dimensional convolutional layers
        self.convs = nn.ModuleList()
        for c, k in zip(num_channels, kernel_sizes):
            self.convs.append(nn.Conv1d(2 * embed_size, c, k))

    def forward(self, inputs):
        # Concatenate two embedding layer outputs with shape (batch size, no.
        # of tokens, token vector dimension) along vectors
        embeddings = torch.cat((
            self.embedding(inputs), self.constant_embedding(inputs)), dim=2)
        # Per the input format of one-dimensional convolutional layers,
        # rearrange the tensor so that the second dimension stores channels
        embeddings = embeddings.permute(0, 2, 1)
        # For each one-dimensional convolutional layer, after max-over-time
        # pooling, a tensor of shape (batch size, no. of channels, 1) is
        # obtained. Remove the last dimension and concatenate along channels
        encoding = torch.cat([
            torch.squeeze(self.relu(self.pool(conv(embeddings))), dim=-1)
            for conv in self.convs], dim=1)
        outputs = self.decoder(self.dropout(encoding))
        return outputs
```
Let's create a textCNN instance. It has 3 convolutional layers with kernel widths of 3, 4, and 5, all with 100 output channels.

```{.python .input}
embed_size, kernel_sizes, nums_channels = 100, [3, 4, 5], [100, 100, 100]
devices = d2l.try_all_gpus()
net = TextCNN(len(vocab), embed_size, kernel_sizes, nums_channels)
net.initialize(init.Xavier(), ctx=devices)
```

```{.python .input}
#@tab pytorch
embed_size, kernel_sizes, nums_channels = 100, [3, 4, 5], [100, 100, 100]
devices = d2l.try_all_gpus()
net = TextCNN(len(vocab), embed_size, kernel_sizes, nums_channels)

def init_weights(m):
    if type(m) in (nn.Linear, nn.Conv1d):
        nn.init.xavier_uniform_(m.weight)

net.apply(init_weights);
```
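As a quick smoke test (a hypothetical check, not part of the original text), a forward pass on a dummy minibatch of token indices should produce one score per sentiment class:

```{.python .input}
#@tab pytorch
# Hypothetical dummy minibatch: 16 sequences of 500 token indices each
dummy = torch.randint(0, len(vocab), (16, 500))
net(dummy).shape  # Expected: torch.Size([16, 2])
```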

### Loading Pretrained Word Vectors

Same as :numref:`sec_sentiment_rnn`, we load pretrained 100-dimensional GloVe embeddings as the initialized token representations. These token representations (embedding weights) will be trained in `embedding` and fixed in `constant_embedding`.

```{.python .input}
glove_embedding = d2l.TokenEmbedding('glove.6b.100d')
embeds = glove_embedding[vocab.idx_to_token]
net.embedding.weight.set_data(embeds)
net.constant_embedding.weight.set_data(embeds)
net.constant_embedding.collect_params().setattr('grad_req', 'null')
```

```{.python .input}
#@tab pytorch
glove_embedding = d2l.TokenEmbedding('glove.6b.100d')
embeds = glove_embedding[vocab.idx_to_token]
net.embedding.weight.data.copy_(embeds)
net.constant_embedding.weight.data.copy_(embeds)
net.constant_embedding.weight.requires_grad = False
```
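To confirm that only one of the two embedding layers will be updated during training (an illustrative check, not part of the original text):

```{.python .input}
#@tab pytorch
# The trainable layer should report True; the fixed one, False
net.embedding.weight.requires_grad, net.constant_embedding.weight.requires_grad
```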

### Training and Evaluating the Model

Now we can train the textCNN model for sentiment analysis.

```{.python .input}
lr, num_epochs = 0.001, 5
trainer = gluon.Trainer(net.collect_params(), 'adam', {'learning_rate': lr})
loss = gluon.loss.SoftmaxCrossEntropyLoss()
d2l.train_ch13(net, train_iter, test_iter, loss, trainer, num_epochs, devices)
```

```{.python .input}
#@tab pytorch
lr, num_epochs = 0.001, 5
trainer = torch.optim.Adam(net.parameters(), lr=lr)
loss = nn.CrossEntropyLoss(reduction="none")
d2l.train_ch13(net, train_iter, test_iter, loss, trainer, num_epochs, devices)
```

Below we use the trained model to predict the sentiment for two simple sentences.

```{.python .input}
#@tab all
d2l.predict_sentiment(net, vocab, 'this movie is so great')
```

```{.python .input}
#@tab all
d2l.predict_sentiment(net, vocab, 'this movie is so bad')
```

## Summary

* One-dimensional CNNs can process local features such as $n$-grams in text.
* Multi-input-channel one-dimensional cross-correlations are equivalent to single-input-channel two-dimensional cross-correlations.
* Max-over-time pooling allows different numbers of time steps at different channels.
* The textCNN model transforms individual token representations into downstream application outputs using one-dimensional convolutional layers and max-over-time pooling layers.

## Exercises

1. Tune hyperparameters and compare the two architectures for sentiment analysis in :numref:`sec_sentiment_rnn` and in this section, such as in classification accuracy and computational efficiency.
2. Can you further improve the classification accuracy of the model by using the methods introduced in the exercises of :numref:`sec_sentiment_rnn`?
3. Add positional encoding in the input representations. Does it improve the classification accuracy?

:begin_tab:`mxnet`
Discussions
:end_tab:

:begin_tab:`pytorch`
Discussions
:end_tab: