# Transformer
:label:`sec_transformer`

We compared CNNs, RNNs, and self-attention in :numref:`subsec_cnn-rnn-self-attention`. Notably, self-attention enjoys both parallel computation and the shortest maximum path length, which makes it appealing to design deep architectures around self-attention. Unlike earlier self-attention models that still relied on RNNs for input representations :cite:`Cheng.Dong.Lapata.2016,Lin.Feng.Santos.ea.2017,Paulus.Xiong.Socher.2017`, the transformer model is based solely on attention mechanisms, without any convolutional or recurrent layers :cite:`Vaswani.Shazeer.Parmar.ea.2017`. Though originally proposed for sequence-to-sequence learning on text data, transformers have since become pervasive across modern deep learning, for example in language, vision, speech, and reinforcement learning.

## Model

As an instance of the encoder-decoder architecture, the overall transformer architecture is presented in :numref:`fig_transformer`. As we can see, the transformer is composed of an encoder and a decoder. In contrast to sequence-to-sequence learning based on Bahdanau attention in :numref:`fig_s2s_attention_details`, the transformer encoder and decoder are stacks of modules based on self-attention, and the embeddings of the source (input) and target (output) sequences are added with positional encoding before being fed into the encoder and the decoder, respectively.

![The transformer architecture](../img/transformer.svg)
:width:`500px`
:label:`fig_transformer`

:numref:`fig_transformer` depicts the transformer architecture. At a high level, the transformer encoder is a stack of multiple identical layers, where each layer has two sublayers (either is denoted as $\mathrm{sublayer}$). The first is multi-head self-attention pooling and the second is a positionwise feed-forward network. Specifically, in the encoder self-attention, queries, keys, and values all come from the outputs of the previous encoder layer. Inspired by the ResNet design in :numref:`sec_resnet`, a residual connection is employed around both sublayers. In the transformer, for any input $\mathbf{x} \in \mathbb{R}^d$ at any position of the sequence, we require that $\mathrm{sublayer}(\mathbf{x}) \in \mathbb{R}^d$ so that the residual connection $\mathbf{x} + \mathrm{sublayer}(\mathbf{x}) \in \mathbb{R}^d$ is feasible. This addition from the residual connection is immediately followed by layer normalization :cite:`Ba.Kiros.Hinton.2016`. As a result, the transformer encoder outputs a $d$-dimensional vector representation for each position of the input sequence.
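
Written out, one encoder layer computes the following pair of "add & norm" steps, a compact restatement of the description above that matches the `EncoderBlock` implemented later in this section (dropout applied to each sublayer output before the addition is omitted); here $\mathrm{MHA}$ denotes multi-head self-attention and $\mathrm{FFN}$ the positionwise feed-forward network:

$$\begin{aligned}
\mathbf{Y} &= \mathrm{LayerNorm}\bigl(\mathbf{X} + \mathrm{MHA}(\mathbf{X}, \mathbf{X}, \mathbf{X})\bigr),\\
\mathbf{Z} &= \mathrm{LayerNorm}\bigl(\mathbf{Y} + \mathrm{FFN}(\mathbf{Y})\bigr).
\end{aligned}$$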

The transformer decoder is also a stack of multiple identical layers with residual connections and layer normalizations. Besides the two sublayers described for the encoder, the decoder inserts a third sublayer between them, known as the encoder-decoder attention. In the encoder-decoder attention, queries come from the outputs of the previous decoder layer, while the keys and values come from the transformer encoder outputs. In the decoder self-attention, queries, keys, and values all come from the outputs of the previous decoder layer. However, each position in the decoder is only allowed to attend to all positions up to that position. This *masked* attention preserves the auto-regressive property, ensuring that the prediction only depends on the output tokens that have already been generated.

Having described and implemented multi-head attention based on scaled dot products (:numref:`sec_multihead-attention`) and positional encoding (:numref:`subsec_positional-encoding`), we will now implement the rest of the transformer model.

```{.python .input}
from d2l import mxnet as d2l
import math
from mxnet import autograd, np, npx
from mxnet.gluon import nn
import pandas as pd
npx.set_np()
```

```{.python .input}
#@tab pytorch
from d2l import torch as d2l
import math
import pandas as pd
import torch
from torch import nn
```

```{.python .input}
#@tab tensorflow
from d2l import tensorflow as d2l
import numpy as np
import pandas as pd
import tensorflow as tf
```

## [**Positionwise Feed-Forward Networks**]

The positionwise feed-forward network transforms the representation at all the sequence positions using the same MLP. This is why we call it *positionwise*. In the implementation below, the input `X` with shape (batch size, number of time steps or sequence length in tokens, number of hidden units or feature dimension) will be transformed by a two-layer MLP into an output tensor of shape (batch size, number of time steps, `ffn_num_outputs`).

```{.python .input}
#@save
class PositionWiseFFN(nn.Block):
    """The positionwise feed-forward network"""
    def __init__(self, ffn_num_hiddens, ffn_num_outputs, **kwargs):
        super(PositionWiseFFN, self).__init__(**kwargs)
        self.dense1 = nn.Dense(ffn_num_hiddens, flatten=False,
                               activation='relu')
        self.dense2 = nn.Dense(ffn_num_outputs, flatten=False)

    def forward(self, X):
        return self.dense2(self.dense1(X))
```

```{.python .input}
#@tab pytorch
#@save
class PositionWiseFFN(nn.Module):
    """The positionwise feed-forward network"""
    def __init__(self, ffn_num_input, ffn_num_hiddens, ffn_num_outputs,
                 **kwargs):
        super(PositionWiseFFN, self).__init__(**kwargs)
        self.dense1 = nn.Linear(ffn_num_input, ffn_num_hiddens)
        self.relu = nn.ReLU()
        self.dense2 = nn.Linear(ffn_num_hiddens, ffn_num_outputs)

    def forward(self, X):
        return self.dense2(self.relu(self.dense1(X)))
```

```{.python .input}
#@tab tensorflow
#@save
class PositionWiseFFN(tf.keras.layers.Layer):
    """The positionwise feed-forward network"""
    def __init__(self, ffn_num_hiddens, ffn_num_outputs, **kwargs):
        super().__init__(**kwargs)
        self.dense1 = tf.keras.layers.Dense(ffn_num_hiddens)
        self.relu = tf.keras.layers.ReLU()
        self.dense2 = tf.keras.layers.Dense(ffn_num_outputs)

    def call(self, X):
        return self.dense2(self.relu(self.dense1(X)))
```
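
For reference, with a ReLU activation (as in all three implementations above) the transformation applied independently at every position $\mathbf{x}$ can be written as

$$\mathrm{FFN}(\mathbf{x}) = \max(0, \mathbf{x}\mathbf{W}_1 + \mathbf{b}_1)\,\mathbf{W}_2 + \mathbf{b}_2,$$

where the parameters $\mathbf{W}_1$, $\mathbf{b}_1$, $\mathbf{W}_2$, and $\mathbf{b}_2$ are shared across all positions of the sequence.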

The following example shows that [**the innermost dimension of a tensor changes**] to the number of outputs of the positionwise feed-forward network. Since the same MLP transforms the inputs at all positions, the outputs at all these positions are identical whenever their inputs are identical.

```{.python .input}
ffn = PositionWiseFFN(4, 8)
ffn.initialize()
ffn(np.ones((2, 3, 4)))[0]
```

```{.python .input}
#@tab pytorch
ffn = PositionWiseFFN(4, 4, 8)
ffn.eval()
ffn(d2l.ones((2, 3, 4)))[0]
```

```{.python .input}
#@tab tensorflow
ffn = PositionWiseFFN(4, 8)
ffn(tf.ones((2, 3, 4)))[0]
```

## Residual Connection and Layer Normalization

Now let us focus on the "*add & norm*" component in :numref:`fig_transformer`. As described at the beginning of this section, this is a residual connection immediately followed by layer normalization. Both are key to building effective deep architectures.

In :numref:`sec_batch_norm`, we explained how batch normalization recenters and rescales across the examples within a minibatch. Layer normalization has the same goal, but it normalizes across the feature dimension. Despite its pervasive use in computer vision, batch normalization is usually empirically less effective than layer normalization in natural language processing tasks, whose inputs are often variable-length sequences.

The following code snippet [**compares the normalization across different dimensions by layer normalization and batch normalization**].

```{.python .input}
ln = nn.LayerNorm()
ln.initialize()
bn = nn.BatchNorm()
bn.initialize()
X = d2l.tensor([[1, 2], [2, 3]])
# Compute the mean and variance of X in training mode
with autograd.record():
    print('layer norm:', ln(X), '\nbatch norm:', bn(X))
```

```{.python .input}
#@tab pytorch
ln = nn.LayerNorm(2)
bn = nn.BatchNorm1d(2)
X = d2l.tensor([[1, 2], [2, 3]], dtype=torch.float32)
# Compute the mean and variance of X in training mode
print('layer norm:', ln(X), '\nbatch norm:', bn(X))
```

```{.python .input}
#@tab tensorflow
ln = tf.keras.layers.LayerNormalization()
bn = tf.keras.layers.BatchNormalization()
X = tf.constant([[1, 2], [2, 3]], dtype=tf.float32)
print('layer norm:', ln(X), '\nbatch norm:', bn(X, training=True))
```
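
To make the difference concrete, the following PyTorch-only sketch (not one of the framework tabs above) reproduces the two results by hand: layer normalization standardizes each row of `X` (the feature dimension), whereas batch normalization in training mode standardizes each column (the batch dimension).

```python
import torch

X = torch.tensor([[1., 2.], [2., 3.]])
eps = 1e-5  # small constant for numerical stability, as in the built-in layers

# Layer norm: mean/variance over the last (feature) dimension, per example
ln_manual = (X - X.mean(dim=1, keepdim=True)) / torch.sqrt(
    X.var(dim=1, unbiased=False, keepdim=True) + eps)

# Batch norm (training mode): mean/variance over the batch dimension, per feature
bn_manual = (X - X.mean(dim=0, keepdim=True)) / torch.sqrt(
    X.var(dim=0, unbiased=False, keepdim=True) + eps)

print('manual layer norm:\n', ln_manual)
print('manual batch norm:\n', bn_manual)
```

Both manual results match the built-in layers above up to their learnable scale and shift parameters, which are initialized to 1 and 0.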

Now we can implement the `AddNorm` class [**using a residual connection followed by layer normalization**]. Dropout is also applied for regularization.

```{.python .input}
#@save
class AddNorm(nn.Block):
    """The residual connection followed by layer normalization"""
    def __init__(self, dropout, **kwargs):
        super(AddNorm, self).__init__(**kwargs)
        self.dropout = nn.Dropout(dropout)
        self.ln = nn.LayerNorm()

    def forward(self, X, Y):
        return self.ln(self.dropout(Y) + X)
```

```{.python .input}
#@tab pytorch
#@save
class AddNorm(nn.Module):
    """The residual connection followed by layer normalization"""
    def __init__(self, normalized_shape, dropout, **kwargs):
        super(AddNorm, self).__init__(**kwargs)
        self.dropout = nn.Dropout(dropout)
        self.ln = nn.LayerNorm(normalized_shape)

    def forward(self, X, Y):
        return self.ln(self.dropout(Y) + X)
```

```{.python .input}
#@tab tensorflow
#@save
class AddNorm(tf.keras.layers.Layer):
    """The residual connection followed by layer normalization"""
    def __init__(self, normalized_shape, dropout, **kwargs):
        super().__init__(**kwargs)
        self.dropout = tf.keras.layers.Dropout(dropout)
        self.ln = tf.keras.layers.LayerNormalization(normalized_shape)

    def call(self, X, Y, **kwargs):
        return self.ln(self.dropout(Y, **kwargs) + X)
```

The residual connection requires that the two inputs are of the same shape so that [**the output tensor also has the same shape after the addition operation**].

```{.python .input}
add_norm = AddNorm(0.5)
add_norm.initialize()
add_norm(d2l.ones((2, 3, 4)), d2l.ones((2, 3, 4))).shape
```

```{.python .input}
#@tab pytorch
add_norm = AddNorm([3, 4], 0.5)
add_norm.eval()
add_norm(d2l.ones((2, 3, 4)), d2l.ones((2, 3, 4))).shape
```

```{.python .input}
#@tab tensorflow
add_norm = AddNorm([1, 2], 0.5)
add_norm(tf.ones((2, 3, 4)), tf.ones((2, 3, 4)), training=False).shape
```

## Encoder

With all the essential components to assemble the transformer encoder, let us start by [**implementing a single layer within the encoder**]. The following `EncoderBlock` class contains two sublayers, multi-head self-attention and a positionwise feed-forward network, where a residual connection followed by layer normalization is employed around both sublayers.

```{.python .input}
#@save
class EncoderBlock(nn.Block):
    """Transformer encoder block"""
    def __init__(self, num_hiddens, ffn_num_hiddens, num_heads, dropout,
                 use_bias=False, **kwargs):
        super(EncoderBlock, self).__init__(**kwargs)
        self.attention = d2l.MultiHeadAttention(
            num_hiddens, num_heads, dropout, use_bias)
        self.addnorm1 = AddNorm(dropout)
        self.ffn = PositionWiseFFN(ffn_num_hiddens, num_hiddens)
        self.addnorm2 = AddNorm(dropout)

    def forward(self, X, valid_lens):
        Y = self.addnorm1(X, self.attention(X, X, X, valid_lens))
        return self.addnorm2(Y, self.ffn(Y))
```

```{.python .input}
#@tab pytorch
#@save
class EncoderBlock(nn.Module):
    """Transformer encoder block"""
    def __init__(self, key_size, query_size, value_size, num_hiddens,
                 norm_shape, ffn_num_input, ffn_num_hiddens, num_heads,
                 dropout, use_bias=False, **kwargs):
        super(EncoderBlock, self).__init__(**kwargs)
        self.attention = d2l.MultiHeadAttention(
            key_size, query_size, value_size, num_hiddens, num_heads, dropout,
            use_bias)
        self.addnorm1 = AddNorm(norm_shape, dropout)
        self.ffn = PositionWiseFFN(
            ffn_num_input, ffn_num_hiddens, num_hiddens)
        self.addnorm2 = AddNorm(norm_shape, dropout)

    def forward(self, X, valid_lens):
        Y = self.addnorm1(X, self.attention(X, X, X, valid_lens))
        return self.addnorm2(Y, self.ffn(Y))
```

```{.python .input}
#@tab tensorflow
#@save
class EncoderBlock(tf.keras.layers.Layer):
    """Transformer encoder block"""
    def __init__(self, key_size, query_size, value_size, num_hiddens,
                 norm_shape, ffn_num_hiddens, num_heads, dropout,
                 bias=False, **kwargs):
        super().__init__(**kwargs)
        self.attention = d2l.MultiHeadAttention(
            key_size, query_size, value_size, num_hiddens, num_heads, dropout,
            bias)
        self.addnorm1 = AddNorm(norm_shape, dropout)
        self.ffn = PositionWiseFFN(ffn_num_hiddens, num_hiddens)
        self.addnorm2 = AddNorm(norm_shape, dropout)

    def call(self, X, valid_lens, **kwargs):
        Y = self.addnorm1(X, self.attention(X, X, X, valid_lens, **kwargs),
                          **kwargs)
        return self.addnorm2(Y, self.ffn(Y), **kwargs)
```

As we can see, [**no layer in the transformer encoder changes the shape of its input**].

```{.python .input}
X = d2l.ones((2, 100, 24))
valid_lens = d2l.tensor([3, 2])
encoder_blk = EncoderBlock(24, 48, 8, 0.5)
encoder_blk.initialize()
encoder_blk(X, valid_lens).shape
```

```{.python .input}
#@tab pytorch
X = d2l.ones((2, 100, 24))
valid_lens = d2l.tensor([3, 2])
encoder_blk = EncoderBlock(24, 24, 24, 24, [100, 24], 24, 48, 8, 0.5)
encoder_blk.eval()
encoder_blk(X, valid_lens).shape
```

```{.python .input}
#@tab tensorflow
X = tf.ones((2, 100, 24))
valid_lens = tf.constant([3, 2])
norm_shape = [i for i in range(len(X.shape))][1:]
encoder_blk = EncoderBlock(24, 24, 24, 24, norm_shape, 48, 8, 0.5)
encoder_blk(X, valid_lens, training=False).shape
```

In the following [**transformer encoder**] implementation, we stack `num_layers` instances of the above `EncoderBlock` class. Since we use the fixed positional encoding whose values are always between $-1$ and $1$, we multiply the values of the learnable input embeddings by the square root of the embedding dimension to rescale them before summing up the input embedding and the positional encoding.
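
In symbols (matching the `forward` and `call` methods below, where the d2l `PositionalEncoding` module also applies dropout), the encoder input becomes

$$\mathbf{X} = \mathrm{PositionalEncoding}\bigl(\sqrt{d}\;\mathrm{Embedding}(\text{tokens})\bigr),$$

with $d$ equal to `num_hiddens`.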

```{.python .input}
#@save
class TransformerEncoder(d2l.Encoder):
    """Transformer encoder"""
    def __init__(self, vocab_size, num_hiddens, ffn_num_hiddens,
                 num_heads, num_layers, dropout, use_bias=False, **kwargs):
        super(TransformerEncoder, self).__init__(**kwargs)
        self.num_hiddens = num_hiddens
        self.embedding = nn.Embedding(vocab_size, num_hiddens)
        self.pos_encoding = d2l.PositionalEncoding(num_hiddens, dropout)
        self.blks = nn.Sequential()
        for _ in range(num_layers):
            self.blks.add(
                EncoderBlock(num_hiddens, ffn_num_hiddens, num_heads, dropout,
                             use_bias))

    def forward(self, X, valid_lens, *args):
        # Since positional encoding values are between -1 and 1, the embedding
        # values are multiplied by the square root of the embedding dimension
        # to rescale them before they are summed up
        X = self.pos_encoding(self.embedding(X) * math.sqrt(self.num_hiddens))
        self.attention_weights = [None] * len(self.blks)
        for i, blk in enumerate(self.blks):
            X = blk(X, valid_lens)
            self.attention_weights[
                i] = blk.attention.attention.attention_weights
        return X
```

```{.python .input}
#@tab pytorch
#@save
class TransformerEncoder(d2l.Encoder):
    """Transformer encoder"""
    def __init__(self, vocab_size, key_size, query_size, value_size,
                 num_hiddens, norm_shape, ffn_num_input, ffn_num_hiddens,
                 num_heads, num_layers, dropout, use_bias=False, **kwargs):
        super(TransformerEncoder, self).__init__(**kwargs)
        self.num_hiddens = num_hiddens
        self.embedding = nn.Embedding(vocab_size, num_hiddens)
        self.pos_encoding = d2l.PositionalEncoding(num_hiddens, dropout)
        self.blks = nn.Sequential()
        for i in range(num_layers):
            self.blks.add_module("block"+str(i),
                EncoderBlock(key_size, query_size, value_size, num_hiddens,
                             norm_shape, ffn_num_input, ffn_num_hiddens,
                             num_heads, dropout, use_bias))

    def forward(self, X, valid_lens, *args):
        # Since positional encoding values are between -1 and 1, the embedding
        # values are multiplied by the square root of the embedding dimension
        # to rescale them before they are summed up
        X = self.pos_encoding(self.embedding(X) * math.sqrt(self.num_hiddens))
        self.attention_weights = [None] * len(self.blks)
        for i, blk in enumerate(self.blks):
            X = blk(X, valid_lens)
            self.attention_weights[
                i] = blk.attention.attention.attention_weights
        return X
```

```{.python .input}
#@tab tensorflow
#@save
class TransformerEncoder(d2l.Encoder):
    """Transformer encoder"""
    def __init__(self, vocab_size, key_size, query_size, value_size,
                 num_hiddens, norm_shape, ffn_num_hiddens, num_heads,
                 num_layers, dropout, bias=False, **kwargs):
        super().__init__(**kwargs)
        self.num_hiddens = num_hiddens
        self.embedding = tf.keras.layers.Embedding(vocab_size, num_hiddens)
        self.pos_encoding = d2l.PositionalEncoding(num_hiddens, dropout)
        self.blks = [EncoderBlock(
            key_size, query_size, value_size, num_hiddens, norm_shape,
            ffn_num_hiddens, num_heads, dropout, bias) for _ in range(
            num_layers)]

    def call(self, X, valid_lens, **kwargs):
        # Since positional encoding values are between -1 and 1, the embedding
        # values are multiplied by the square root of the embedding dimension
        # to rescale them before they are summed up
        X = self.pos_encoding(self.embedding(X) * tf.math.sqrt(
            tf.cast(self.num_hiddens, dtype=tf.float32)), **kwargs)
        self.attention_weights = [None] * len(self.blks)
        for i, blk in enumerate(self.blks):
            X = blk(X, valid_lens, **kwargs)
            self.attention_weights[
                i] = blk.attention.attention.attention_weights
        return X
```

Below we specify hyperparameters to [**create a two-layer transformer encoder**].
The shape of the transformer encoder output is (batch size, number of time steps, `num_hiddens`).

```{.python .input}
encoder = TransformerEncoder(200, 24, 48, 8, 2, 0.5)
encoder.initialize()
encoder(np.ones((2, 100)), valid_lens).shape
```

```{.python .input}
#@tab pytorch
encoder = TransformerEncoder(
    200, 24, 24, 24, 24, [100, 24], 24, 48, 8, 2, 0.5)
encoder.eval()
encoder(d2l.ones((2, 100), dtype=torch.long), valid_lens).shape
```

```{.python .input}
#@tab tensorflow
encoder = TransformerEncoder(200, 24, 24, 24, 24, [1, 2], 48, 8, 2, 0.5)
encoder(tf.ones((2, 100)), valid_lens, training=False).shape
```

## Decoder

As shown in :numref:`fig_transformer`, [**the transformer decoder is also composed of multiple identical layers**]. Each layer is implemented in the following `DecoderBlock` class, which contains three sublayers: decoder self-attention, encoder-decoder attention, and a positionwise feed-forward network. These sublayers also employ residual connections around them, followed by layer normalization.
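
Analogously to the encoder layer, one decoder layer (matching the `DecoderBlock` implemented below; dropout before each addition is again omitted) computes

$$\begin{aligned}
\mathbf{Y} &= \mathrm{LayerNorm}\bigl(\mathbf{X} + \mathrm{MaskedMHA}(\mathbf{X}, \mathbf{X}, \mathbf{X})\bigr),\\
\mathbf{Z} &= \mathrm{LayerNorm}\bigl(\mathbf{Y} + \mathrm{MHA}(\mathbf{Y}, \mathbf{E}, \mathbf{E})\bigr),\\
\mathbf{O} &= \mathrm{LayerNorm}\bigl(\mathbf{Z} + \mathrm{FFN}(\mathbf{Z})\bigr),
\end{aligned}$$

where $\mathbf{E}$ denotes the encoder outputs (serving as keys and values of the encoder-decoder attention) and the attention arguments are ordered as (queries, keys, values).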

As described earlier in this section, in the masked multi-head decoder self-attention (the first sublayer), queries, keys, and values all come from the outputs of the previous decoder layer. In sequence-to-sequence models, tokens at all the positions (time steps) of the output sequence are known during training; during prediction, however, the output sequence is generated token by token. Thus, at any decoder time step only the already generated tokens can be used in the decoder self-attention. To preserve auto-regression in the decoder, its masked self-attention specifies `dec_valid_lens` so that any query only attends to positions in the decoder up to the query position.
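
To illustrate, here is a minimal PyTorch sketch (with a hypothetical batch of 2 sequences of 4 time steps) of the `dec_valid_lens` used during training in the implementations below, together with the lower-triangular attention pattern it induces: the $i$-th query may only attend to key positions $1, \ldots, i$.

```python
import torch

batch_size, num_steps = 2, 4  # hypothetical sizes for illustration

# Each row is [1, 2, ..., num_steps]: the i-th query has i valid key positions
dec_valid_lens = torch.arange(1, num_steps + 1).repeat(batch_size, 1)
print(dec_valid_lens)
# tensor([[1, 2, 3, 4],
#         [1, 2, 3, 4]])

# Equivalent boolean mask over (query, key) positions for one sequence:
# entry (i, j) is True iff key position j is visible to query position i
causal_mask = torch.arange(num_steps)[None, :] < dec_valid_lens[0][:, None]
print(causal_mask)
# tensor([[ True, False, False, False],
#         [ True,  True, False, False],
#         [ True,  True,  True, False],
#         [ True,  True,  True,  True]])
```

During prediction, `dec_valid_lens` is `None` instead: tokens are generated one at a time, and each new token may attend to everything cached so far in `state[2]`.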

```{.python .input}
class DecoderBlock(nn.Block):
    """The i-th block in the decoder"""
    def __init__(self, num_hiddens, ffn_num_hiddens, num_heads, dropout, i,
                 **kwargs):
        super(DecoderBlock, self).__init__(**kwargs)
        self.i = i
        self.attention1 = d2l.MultiHeadAttention(num_hiddens, num_heads,
                                                 dropout)
        self.addnorm1 = AddNorm(dropout)
        self.attention2 = d2l.MultiHeadAttention(num_hiddens, num_heads,
                                                 dropout)
        self.addnorm2 = AddNorm(dropout)
        self.ffn = PositionWiseFFN(ffn_num_hiddens, num_hiddens)
        self.addnorm3 = AddNorm(dropout)

    def forward(self, X, state):
        enc_outputs, enc_valid_lens = state[0], state[1]
        # During training, all the tokens of any output sequence are processed
        # at the same time, so state[2][self.i] is None as initialized.
        # During prediction, the output sequence is decoded token by token, so
        # state[2][self.i] contains representations of the decoded output at
        # the i-th block up to the current time step
        if state[2][self.i] is None:
            key_values = X
        else:
            key_values = np.concatenate((state[2][self.i], X), axis=1)
        state[2][self.i] = key_values
        if autograd.is_training():
            batch_size, num_steps, _ = X.shape
            # Shape of dec_valid_lens: (batch_size, num_steps), where every
            # row is [1, 2, ..., num_steps]
            dec_valid_lens = np.tile(np.arange(1, num_steps + 1, ctx=X.ctx),
                                     (batch_size, 1))
        else:
            dec_valid_lens = None

        # Self-attention
        X2 = self.attention1(X, key_values, key_values, dec_valid_lens)
        Y = self.addnorm1(X, X2)
        # Encoder-decoder attention. Shape of enc_outputs:
        # (batch_size, num_steps, num_hiddens)
        Y2 = self.attention2(Y, enc_outputs, enc_outputs, enc_valid_lens)
        Z = self.addnorm2(Y, Y2)
        return self.addnorm3(Z, self.ffn(Z)), state
```

```{.python .input}
#@tab pytorch
class DecoderBlock(nn.Module):
    """The i-th block in the decoder"""
    def __init__(self, key_size, query_size, value_size, num_hiddens,
                 norm_shape, ffn_num_input, ffn_num_hiddens, num_heads,
                 dropout, i, **kwargs):
        super(DecoderBlock, self).__init__(**kwargs)
        self.i = i
        self.attention1 = d2l.MultiHeadAttention(
            key_size, query_size, value_size, num_hiddens, num_heads, dropout)
        self.addnorm1 = AddNorm(norm_shape, dropout)
        self.attention2 = d2l.MultiHeadAttention(
            key_size, query_size, value_size, num_hiddens, num_heads, dropout)
        self.addnorm2 = AddNorm(norm_shape, dropout)
        self.ffn = PositionWiseFFN(ffn_num_input, ffn_num_hiddens,
                                   num_hiddens)
        self.addnorm3 = AddNorm(norm_shape, dropout)

    def forward(self, X, state):
        enc_outputs, enc_valid_lens = state[0], state[1]
        # During training, all the tokens of any output sequence are processed
        # at the same time, so state[2][self.i] is None as initialized.
        # During prediction, the output sequence is decoded token by token, so
        # state[2][self.i] contains representations of the decoded output at
        # the i-th block up to the current time step
        if state[2][self.i] is None:
            key_values = X
        else:
            key_values = torch.cat((state[2][self.i], X), axis=1)
        state[2][self.i] = key_values
        if self.training:
            batch_size, num_steps, _ = X.shape
            # Shape of dec_valid_lens: (batch_size, num_steps), where every
            # row is [1, 2, ..., num_steps]
            dec_valid_lens = torch.arange(
                1, num_steps + 1, device=X.device).repeat(batch_size, 1)
        else:
            dec_valid_lens = None

        # Self-attention
        X2 = self.attention1(X, key_values, key_values, dec_valid_lens)
        Y = self.addnorm1(X, X2)
        # Encoder-decoder attention. Shape of enc_outputs:
        # (batch_size, num_steps, num_hiddens)
        Y2 = self.attention2(Y, enc_outputs, enc_outputs, enc_valid_lens)
        Z = self.addnorm2(Y, Y2)
        return self.addnorm3(Z, self.ffn(Z)), state
```

```{.python .input}
#@tab tensorflow
class DecoderBlock(tf.keras.layers.Layer):
    """The i-th block in the decoder"""
    def __init__(self, key_size, query_size, value_size, num_hiddens,
                 norm_shape, ffn_num_hiddens, num_heads, dropout, i, **kwargs):
        super().__init__(**kwargs)
        self.i = i
        self.attention1 = d2l.MultiHeadAttention(
            key_size, query_size, value_size, num_hiddens, num_heads, dropout)
        self.addnorm1 = AddNorm(norm_shape, dropout)
        self.attention2 = d2l.MultiHeadAttention(
            key_size, query_size, value_size, num_hiddens, num_heads, dropout)
        self.addnorm2 = AddNorm(norm_shape, dropout)
        self.ffn = PositionWiseFFN(ffn_num_hiddens, num_hiddens)
        self.addnorm3 = AddNorm(norm_shape, dropout)

    def call(self, X, state, **kwargs):
        enc_outputs, enc_valid_lens = state[0], state[1]
        # During training, all the tokens of any output sequence are processed
        # at the same time, so state[2][self.i] is None as initialized.
        # During prediction, the output sequence is decoded token by token, so
        # state[2][self.i] contains representations of the decoded output at
        # the i-th block up to the current time step
        if state[2][self.i] is None:
            key_values = X
        else:
            key_values = tf.concat((state[2][self.i], X), axis=1)
        state[2][self.i] = key_values
        if kwargs["training"]:
            batch_size, num_steps, _ = X.shape
            # Shape of dec_valid_lens: (batch_size, num_steps), where every
            # row is [1, 2, ..., num_steps]
            dec_valid_lens = tf.repeat(
                tf.reshape(tf.range(1, num_steps + 1), shape=(-1, num_steps)),
                repeats=batch_size, axis=0)
        else:
            dec_valid_lens = None

        # Self-attention
        X2 = self.attention1(X, key_values, key_values, dec_valid_lens,
                             **kwargs)
        Y = self.addnorm1(X, X2, **kwargs)
        # Encoder-decoder attention. Shape of enc_outputs:
        # (batch_size, num_steps, num_hiddens)
        Y2 = self.attention2(Y, enc_outputs, enc_outputs, enc_valid_lens,
                             **kwargs)
        Z = self.addnorm2(Y, Y2, **kwargs)
        return self.addnorm3(Z, self.ffn(Z), **kwargs), state
```

To facilitate scaled dot-product operations in the encoder-decoder attention and addition operations in the residual connections, [**the feature dimension (`num_hiddens`) of the decoder is the same as that of the encoder.**]

```{.python .input}
decoder_blk = DecoderBlock(24, 48, 8, 0.5, 0)
decoder_blk.initialize()
X = np.ones((2, 100, 24))
state = [encoder_blk(X, valid_lens), valid_lens, [None]]
decoder_blk(X, state)[0].shape
```

```{.python .input}
#@tab pytorch
decoder_blk = DecoderBlock(24, 24, 24, 24, [100, 24], 24, 48, 8, 0.5, 0)
decoder_blk.eval()
X = d2l.ones((2, 100, 24))
state = [encoder_blk(X, valid_lens), valid_lens, [None]]
decoder_blk(X, state)[0].shape
```

```{.python .input}
#@tab tensorflow
decoder_blk = DecoderBlock(24, 24, 24, 24, [1, 2], 48, 8, 0.5, 0)
X = tf.ones((2, 100, 24))
state = [encoder_blk(X, valid_lens), valid_lens, [None]]
decoder_blk(X, state, training=False)[0].shape
```

Now we construct the entire [**transformer decoder**], composed of `num_layers` instances of `DecoderBlock`. In the end, a fully connected layer computes the prediction for all the `vocab_size` possible output tokens. Both the decoder self-attention weights and the encoder-decoder attention weights are stored for later visualization.

```{.python .input}
class TransformerDecoder(d2l.AttentionDecoder):
    """Transformer decoder"""
    def __init__(self, vocab_size, num_hiddens, ffn_num_hiddens, num_heads,
                 num_layers, dropout, **kwargs):
        super(TransformerDecoder, self).__init__(**kwargs)
        self.num_hiddens = num_hiddens
        self.num_layers = num_layers
        self.embedding = nn.Embedding(vocab_size, num_hiddens)
        self.pos_encoding = d2l.PositionalEncoding(num_hiddens, dropout)
        self.blks = nn.Sequential()
        for i in range(num_layers):
            self.blks.add(
                DecoderBlock(num_hiddens, ffn_num_hiddens, num_heads,
                             dropout, i))
        self.dense = nn.Dense(vocab_size, flatten=False)

    def init_state(self, enc_outputs, enc_valid_lens, *args):
        return [enc_outputs, enc_valid_lens, [None] * self.num_layers]

    def forward(self, X, state):
        X = self.pos_encoding(self.embedding(X) * math.sqrt(self.num_hiddens))
        self._attention_weights = [[None] * len(self.blks) for _ in range(2)]
        for i, blk in enumerate(self.blks):
            X, state = blk(X, state)
            # Decoder self-attention weights
            self._attention_weights[0][
                i] = blk.attention1.attention.attention_weights
            # Encoder-decoder attention weights
            self._attention_weights[1][
                i] = blk.attention2.attention.attention_weights
        return self.dense(X), state

    @property
    def attention_weights(self):
        return self._attention_weights
```

```{.python .input}
#@tab pytorch
class TransformerDecoder(d2l.AttentionDecoder):
    """Transformer decoder"""
    def __init__(self, vocab_size, key_size, query_size, value_size,
                 num_hiddens, norm_shape, ffn_num_input, ffn_num_hiddens,
                 num_heads, num_layers, dropout, **kwargs):
        super(TransformerDecoder, self).__init__(**kwargs)
        self.num_hiddens = num_hiddens
        self.num_layers = num_layers
        self.embedding = nn.Embedding(vocab_size, num_hiddens)
        self.pos_encoding = d2l.PositionalEncoding(num_hiddens, dropout)
        self.blks = nn.Sequential()
        for i in range(num_layers):
            self.blks.add_module("block"+str(i),
                DecoderBlock(key_size, query_size, value_size, num_hiddens,
                             norm_shape, ffn_num_input, ffn_num_hiddens,
                             num_heads, dropout, i))
        self.dense = nn.Linear(num_hiddens, vocab_size)

    def init_state(self, enc_outputs, enc_valid_lens, *args):
        return [enc_outputs, enc_valid_lens, [None] * self.num_layers]

    def forward(self, X, state):
        X = self.pos_encoding(self.embedding(X) * math.sqrt(self.num_hiddens))
        self._attention_weights = [[None] * len(self.blks) for _ in range(2)]
        for i, blk in enumerate(self.blks):
            X, state = blk(X, state)
            # Decoder self-attention weights
            self._attention_weights[0][
                i] = blk.attention1.attention.attention_weights
            # Encoder-decoder attention weights
            self._attention_weights[1][
                i] = blk.attention2.attention.attention_weights
        return self.dense(X), state

    @property
    def attention_weights(self):
        return self._attention_weights
```

```{.python .input}
#@tab tensorflow
class TransformerDecoder(d2l.AttentionDecoder):
    """Transformer decoder"""
    def __init__(self, vocab_size, key_size, query_size, value_size,
                 num_hiddens, norm_shape, ffn_num_hiddens, num_heads,
                 num_layers, dropout, **kwargs):
        super().__init__(**kwargs)
        self.num_hiddens = num_hiddens
        self.num_layers = num_layers
        self.embedding = tf.keras.layers.Embedding(vocab_size, num_hiddens)
        self.pos_encoding = d2l.PositionalEncoding(num_hiddens, dropout)
        self.blks = [DecoderBlock(key_size, query_size, value_size,
                                  num_hiddens, norm_shape, ffn_num_hiddens,
                                  num_heads, dropout, i)
                     for i in range(num_layers)]
        self.dense = tf.keras.layers.Dense(vocab_size)

    def init_state(self, enc_outputs, enc_valid_lens, *args):
        return [enc_outputs, enc_valid_lens, [None] * self.num_layers]

    def call(self, X, state, **kwargs):
        X = self.pos_encoding(self.embedding(X) * tf.math.sqrt(
            tf.cast(self.num_hiddens, dtype=tf.float32)), **kwargs)
        # Two attention layers in the decoder
        self._attention_weights = [[None] * len(self.blks) for _ in range(2)]
        for i, blk in enumerate(self.blks):
            X, state = blk(X, state, **kwargs)
            # Decoder self-attention weights
            self._attention_weights[0][
                i] = blk.attention1.attention.attention_weights
            # Encoder-decoder attention weights
            self._attention_weights[1][
                i] = blk.attention2.attention.attention_weights
        return self.dense(X), state

    @property
    def attention_weights(self):
        return self._attention_weights
```

## [**Training**]

Let us instantiate an encoder-decoder model by following the transformer architecture. Here we specify that both the transformer encoder and the transformer decoder have 2 layers using 4-head attention. Similar to :numref:`sec_seq2seq_training`, we train the transformer model for sequence-to-sequence learning on the English-French machine translation dataset.

```{.python .input}
num_hiddens, num_layers, dropout, batch_size, num_steps = 32, 2, 0.1, 64, 10
lr, num_epochs, device = 0.005, 200, d2l.try_gpu()
ffn_num_hiddens, num_heads = 64, 4

train_iter, src_vocab, tgt_vocab = d2l.load_data_nmt(batch_size, num_steps)

encoder = TransformerEncoder(
    len(src_vocab), num_hiddens, ffn_num_hiddens, num_heads, num_layers,
    dropout)
decoder = TransformerDecoder(
    len(tgt_vocab), num_hiddens, ffn_num_hiddens, num_heads, num_layers,
    dropout)
net = d2l.EncoderDecoder(encoder, decoder)
d2l.train_seq2seq(net, train_iter, lr, num_epochs, tgt_vocab, device)
```

```{.python .input}
#@tab pytorch
num_hiddens, num_layers, dropout, batch_size, num_steps = 32, 2, 0.1, 64, 10
lr, num_epochs, device = 0.005, 200, d2l.try_gpu()
ffn_num_input, ffn_num_hiddens, num_heads = 32, 64, 4
key_size, query_size, value_size = 32, 32, 32
norm_shape = [32]

train_iter, src_vocab, tgt_vocab = d2l.load_data_nmt(batch_size, num_steps)

encoder = TransformerEncoder(
    len(src_vocab), key_size, query_size, value_size, num_hiddens,
    norm_shape, ffn_num_input, ffn_num_hiddens, num_heads,
    num_layers, dropout)
decoder = TransformerDecoder(
    len(tgt_vocab), key_size, query_size, value_size, num_hiddens,
    norm_shape, ffn_num_input, ffn_num_hiddens, num_heads,
    num_layers, dropout)
net = d2l.EncoderDecoder(encoder, decoder)
d2l.train_seq2seq(net, train_iter, lr, num_epochs, tgt_vocab, device)
```

```{.python .input}
#@tab tensorflow
num_hiddens, num_layers, dropout, batch_size, num_steps = 32, 2, 0.1, 64, 10
lr, num_epochs, device = 0.005, 200, d2l.try_gpu()
ffn_num_hiddens, num_heads = 64, 4
key_size, query_size, value_size = 32, 32, 32
norm_shape = [2]

train_iter, src_vocab, tgt_vocab = d2l.load_data_nmt(batch_size, num_steps)

encoder = TransformerEncoder(
    len(src_vocab), key_size, query_size, value_size, num_hiddens, norm_shape,
    ffn_num_hiddens, num_heads, num_layers, dropout)
decoder = TransformerDecoder(
    len(tgt_vocab), key_size, query_size, value_size, num_hiddens, norm_shape,
    ffn_num_hiddens, num_heads, num_layers, dropout)
net = d2l.EncoderDecoder(encoder, decoder)
d2l.train_seq2seq(net, train_iter, lr, num_epochs, tgt_vocab, device)
```

After training, we use the transformer model to [**translate a few English sentences into French**] and compute their BLEU scores.

```{.python .input}
#@tab mxnet, pytorch
engs = ['go .', "i lost .", 'he\'s calm .', 'i\'m home .']
fras = ['va !', 'j\'ai perdu .', 'il est calme .', 'je suis chez moi .']
for eng, fra in zip(engs, fras):
    translation, dec_attention_weight_seq = d2l.predict_seq2seq(
        net, eng, src_vocab, tgt_vocab, num_steps, device, True)
    print(f'{eng} => {translation}, ',
          f'bleu {d2l.bleu(translation, fra, k=2):.3f}')
```

```{.python .input}
#@tab tensorflow
engs = ['go .', "i lost .", 'he\'s calm .', 'i\'m home .']
fras = ['va !', 'j\'ai perdu .', 'il est calme .', 'je suis chez moi .']
for eng, fra in zip(engs, fras):
    translation, dec_attention_weight_seq = d2l.predict_seq2seq(
        net, eng, src_vocab, tgt_vocab, num_steps, True)
    print(f'{eng} => {translation}, ',
          f'bleu {d2l.bleu(translation, fra, k=2):.3f}')
```

Let us [**visualize the transformer attention weights**] when translating the last English sentence into French. The shape of the encoder self-attention weights is (number of encoder layers, number of attention heads, `num_steps` or number of queries, `num_steps` or number of key-value pairs).

```{.python .input}
#@tab all
enc_attention_weights = d2l.reshape(
    d2l.concat(net.encoder.attention_weights, 0),
    (num_layers, num_heads, -1, num_steps))
enc_attention_weights.shape
```

In the encoder self-attention, both queries and keys come from the same input sequence. Since padding tokens do not carry meaning, specifying the valid length of the input sequence prevents any query from attending to positions of padding tokens. In the following, two layers of multi-head attention weights are presented row by row. Each head attends independently, based on a separate representation subspace of queries, keys, and values.

```{.python .input}
#@tab mxnet, tensorflow
d2l.show_heatmaps(
    enc_attention_weights, xlabel='Key positions', ylabel='Query positions',
    titles=['Head %d' % i for i in range(1, 5)], figsize=(7, 3.5))
```

```{.python .input}
#@tab pytorch
d2l.show_heatmaps(
    enc_attention_weights.cpu(), xlabel='Key positions',
    ylabel='Query positions', titles=['Head %d' % i for i in range(1, 5)],
    figsize=(7, 3.5))
```

[**To visualize both the decoder self-attention weights and the encoder-decoder attention weights, we need more data manipulations.**] For example, we fill the masked attention weights with zero. Note that the decoder self-attention weights and the encoder-decoder attention weights both have the same queries: the *beginning-of-sequence* (BOS) token followed by the subsequent output tokens.

```{.python .input}
dec_attention_weights_2d = [d2l.tensor(head[0]).tolist()
                            for step in dec_attention_weight_seq
                            for attn in step for blk in attn for head in blk]
dec_attention_weights_filled = d2l.tensor(
    pd.DataFrame(dec_attention_weights_2d).fillna(0.0).values)
dec_attention_weights = d2l.reshape(dec_attention_weights_filled,
                                    (-1, 2, num_layers, num_heads, num_steps))
dec_self_attention_weights, dec_inter_attention_weights = \
    dec_attention_weights.transpose(1, 2, 3, 0, 4)
dec_self_attention_weights.shape, dec_inter_attention_weights.shape
```

```{.python .input}
#@tab pytorch
dec_attention_weights_2d = [head[0].tolist()
                            for step in dec_attention_weight_seq
                            for attn in step for blk in attn for head in blk]
dec_attention_weights_filled = d2l.tensor(
    pd.DataFrame(dec_attention_weights_2d).fillna(0.0).values)
dec_attention_weights = d2l.reshape(dec_attention_weights_filled,
                                    (-1, 2, num_layers, num_heads, num_steps))
dec_self_attention_weights, dec_inter_attention_weights = \
    dec_attention_weights.permute(1, 2, 3, 0, 4)
dec_self_attention_weights.shape, dec_inter_attention_weights.shape
```

```{.python .input}
#@tab tensorflow
dec_attention_weights_2d = [head[0] for step in dec_attention_weight_seq
                            for attn in step
                            for blk in attn for head in blk]
dec_attention_weights_filled = tf.convert_to_tensor(
    np.asarray(pd.DataFrame(dec_attention_weights_2d).fillna(
        0.0).values).astype(np.float32))
dec_attention_weights = tf.reshape(dec_attention_weights_filled, shape=(
    -1, 2, num_layers, num_heads, num_steps))
dec_self_attention_weights, dec_inter_attention_weights = tf.transpose(
    dec_attention_weights, perm=(1, 2, 3, 0, 4))
print(dec_self_attention_weights.shape, dec_inter_attention_weights.shape)
```

Due to the auto-regressive property of the decoder self-attention, no query attends to key-value pairs after the query position.

```{.python .input}
#@tab all
# Plus one to include the beginning-of-sequence token
d2l.show_heatmaps(
    dec_self_attention_weights[:, :, :, :len(translation.split()) + 1],
    xlabel='Key positions', ylabel='Query positions',
    titles=['Head %d' % i for i in range(1, 5)], figsize=(7, 3.5))
```

Similar to the case in the encoder self-attention, via the specified valid length of the input sequence, [**no query from the output sequence attends to padding tokens from the input sequence**].

```{.python .input}
#@tab all
d2l.show_heatmaps(
    dec_inter_attention_weights, xlabel='Key positions',
    ylabel='Query positions', titles=['Head %d' % i for i in range(1, 5)],
    figsize=(7, 3.5))
```

Although the transformer architecture was proposed for sequence-to-sequence learning, as we will see later in the book, either the transformer encoder or the transformer decoder is often used individually for different deep learning tasks.

## Summary

* The transformer is an instance of the encoder-decoder architecture, though either the encoder or the decoder can be used individually in practice.
* In the transformer, multi-head self-attention is used for representing the input sequence and the output sequence, though the decoder has to preserve the auto-regressive property via a masked version.
* Both the residual connections and the layer normalization in the transformer are important for training very deep models.
* The positionwise feed-forward network in the transformer model transforms the representation at all sequence positions using the same MLP.

## Exercises

1. Train a deeper transformer in the experiments. How does it affect the training speed and the translation performance?
2. Is it a good idea to replace scaled dot-product attention with additive attention in the transformer? Why?
3. For language modeling, should we use the transformer encoder, decoder, or both? How would you design such a model?
4. What challenges can the transformer face if input sequences are very long? Why?
5. How can we improve the computational and memory efficiency of the transformer? Hint: you may refer to the survey paper :cite:`Tay.Dehghani.Bahri.ea.2020`.
6. How can we design transformer-based models for image classification without using CNNs? Hint: you may refer to the vision transformer :cite:`Dosovitskiy.Beyer.Kolesnikov.ea.2021`.
