Recall the previous section. The core of the skip-gram model is using the softmax operation to obtain the conditional probability of generating a context word $w_o$ given a center word $w_c$:
$$P(w_o \mid w_c) = \frac{\exp(\boldsymbol{u}_o^\top \boldsymbol{v}_c)}{\sum_{i \in \mathcal{V}} \exp(\boldsymbol{u}_i^\top \boldsymbol{v}_c)}.$$
The log loss corresponding to this conditional probability is
$$-\log P(w_o \mid w_c) = -\boldsymbol{u}_o^\top \boldsymbol{v}_c + \log\left(\sum_{i \in \mathcal{V}} \exp(\boldsymbol{u}_i^\top \boldsymbol{v}_c)\right).$$
Because the softmax operation considers that the context word may be any word in the dictionary $\mathcal{V}$, the loss above contains a sum with as many terms as the dictionary size. As we saw in the previous section, for both the skip-gram model and the continuous bag-of-words model, the conditional probability uses the softmax operation, so the gradient computation at every step involves a sum over the entire dictionary. For a large dictionary with hundreds of thousands or millions of words, the cost of each gradient computation can be prohibitive. To reduce this computational complexity, this section introduces two approximate training methods: negative sampling and hierarchical softmax. Since the skip-gram model and the continuous bag-of-words model are similar, we only use the skip-gram model as an example to describe the two methods.
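As a point of reference before introducing the two approximations, here is a minimal NumPy sketch of the exact skip-gram loss above, with hypothetical toy values for the vocabulary size and embedding dimension (not from the book). Note that both the normalizing sum and its gradient touch every row of the context-vector matrix, which is precisely the cost the approximations below avoid.

```python
import numpy as np

V, d = 10_000, 100                        # toy vocabulary size and embedding dimension (assumed values)
rng = np.random.default_rng(0)
U = rng.normal(scale=0.01, size=(V, d))   # context word vectors u_i, one row per dictionary word
v_c = rng.normal(scale=0.01, size=d)      # center word vector v_c
o = 42                                    # dictionary index of the context word w_o (arbitrary choice)

scores = U @ v_c                          # |V| inner products u_i^T v_c
log_Z = np.log(np.exp(scores).sum())      # log of the sum over the whole dictionary
loss = -scores[o] + log_Z                 # -log P(w_o | w_c)
# The gradient of log_Z with respect to v_c is a softmax-weighted sum over all |V| rows of U,
# so every training step costs O(|V| * d).
```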
10.2.1 Negative Sampling
Negative sampling modifies the original objective function. Given a context window of a center word $w_c$, we regard it as an event that a context word $w_o$ appears in that context window, and compute the probability of this event as
$$P(D=1 \mid w_c, w_o) = \sigma(\boldsymbol{u}_o^\top \boldsymbol{v}_c),$$
where the function $\sigma$ has the same definition as the sigmoid activation function:
$$\sigma(x) = \frac{1}{1+\exp(-x)}.$$
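As a small self-contained sketch (hypothetical vectors, not from the book), the probability of this positive event is just a sigmoid of one inner product:

```python
import numpy as np

def sigma(x):
    # sigmoid activation, as defined above
    return 1.0 / (1.0 + np.exp(-x))

d = 100
rng = np.random.default_rng(0)
u_o = rng.normal(scale=0.01, size=d)   # context vector of w_o
v_c = rng.normal(scale=0.01, size=d)   # center vector of w_c

p_positive = sigma(u_o @ v_c)          # P(D=1 | w_c, w_o)
```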
Let us first consider training the word vectors by maximizing the joint probability of all such events in a text sequence. Specifically, given a text sequence of length $T$, let the word at time step $t$ be $w^{(t)}$ and let the context window size be $m$. We consider maximizing the joint probability
$$\prod_{t=1}^{T} \prod_{-m \leq j \leq m,\ j \neq 0} P(D=1 \mid w^{(t)}, w^{(t+j)}).$$
However, the events in this model only consider positive examples. As a result, the joint probability above is maximized at 1 only when all word vectors are equal and their values go to infinity. Such word vectors are clearly meaningless. Negative sampling makes the objective function more meaningful by sampling and adding negative examples. Let the event that a context word $w_o$ appears in a context window of the center word $w_c$ be $P$. According to a distribution $P(w)$, we sample $K$ words that do not appear in that context window; these are called noise words. Let the event that a noise word $w_k$ ($k=1, \ldots, K$) does not appear in this context window of the center word $w_c$ be $N_k$. Assuming the events $P, N_1, \ldots, N_K$, which involve both positive and negative examples, are mutually independent, negative sampling rewrites the joint probability above, which considers only positive examples, as
$$\prod_{t=1}^{T} \prod_{-m \leq j \leq m,\ j \neq 0} P(w^{(t+j)} \mid w^{(t)}),$$
where the conditional probability is approximated as
$$P(w^{(t+j)} \mid w^{(t)}) = P(D=1 \mid w^{(t)}, w^{(t+j)}) \prod_{k=1,\ w_k \sim P(w)}^K P(D=0 \mid w^{(t)}, w_k).$$
Let the dictionary index of the word $w^{(t)}$ at time step $t$ of the text sequence be $i_t$, and let the dictionary index of the noise word $w_k$ be $h_k$. The log loss for the conditional probability above is
$$\begin{aligned}
-\log P(w^{(t+j)} \mid w^{(t)})
=& -\log P(D=1 \mid w^{(t)}, w^{(t+j)}) - \sum_{k=1,\ w_k \sim P(w)}^K \log P(D=0 \mid w^{(t)}, w_k)\\
=& -\log\, \sigma\left(\boldsymbol{u}_{i_{t+j}}^\top \boldsymbol{v}_{i_t}\right) - \sum_{k=1,\ w_k \sim P(w)}^K \log\left(1-\sigma\left(\boldsymbol{u}_{h_k}^\top \boldsymbol{v}_{i_t}\right)\right)\\
=& -\log\, \sigma\left(\boldsymbol{u}_{i_{t+j}}^\top \boldsymbol{v}_{i_t}\right) - \sum_{k=1,\ w_k \sim P(w)}^K \log \sigma\left(-\boldsymbol{u}_{h_k}^\top \boldsymbol{v}_{i_t}\right).
\end{aligned}$$
Now the gradient computation at each training step no longer depends on the dictionary size but is linear in $K$. When $K$ is a small constant, the per-step gradient computation cost of negative sampling is small.
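The sketch below (NumPy, hypothetical toy sizes; the noise words are drawn uniformly purely for illustration, whereas word2vec in practice typically uses a unigram distribution raised to the 3/4 power) computes this per-pair loss for one center word, one context word, and $K$ noise words, touching only $K+1$ rows of the context-vector matrix:

```python
import numpy as np

def sigma(x):
    return 1.0 / (1.0 + np.exp(-x))

V, d, K = 10_000, 100, 5                  # toy sizes; K is the number of noise words (assumed)
rng = np.random.default_rng(0)
U = rng.normal(scale=0.01, size=(V, d))   # context word vectors u
W = rng.normal(scale=0.01, size=(V, d))   # center word vectors v

i_t, i_tj = 7, 42                         # dictionary indices of w^(t) and w^(t+j) (arbitrary)
h = rng.integers(0, V, size=K)            # indices h_k of K noise words (uniform P(w) for illustration)

v_c = W[i_t]
pos = -np.log(sigma(U[i_tj] @ v_c))       # -log sigma(u_{i_{t+j}}^T v_{i_t})
neg = -np.log(sigma(-(U[h] @ v_c))).sum() # -sum_k log sigma(-u_{h_k}^T v_{i_t})
loss = pos + neg                          # per-step cost is O(K * d), independent of |V|
```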
10.2.2 Hierarchical Softmax
Hierarchical softmax is another approximate training method. It uses a binary tree data structure, in which each leaf node represents a word in the dictionary $\mathcal{V}$.
Let $L(w)$ be the number of nodes (including the root node and the leaf node) on the path from the root of the binary tree to the leaf node of the word $w$. Let $n(w,j)$ be the $j$-th node on this path, and let its context word vector be $\boldsymbol{u}_{n(w,j)}$. In Figure 10.3, for example, $L(w_3) = 4$. Hierarchical softmax approximates the conditional probability in the skip-gram model as
$$P(w_o \mid w_c) = \prod_{j=1}^{L(w_o)-1} \sigma\left( [\![ n(w_o, j+1) = \text{leftChild}(n(w_o,j)) ]\!] \cdot \boldsymbol{u}_{n(w_o,j)}^\top \boldsymbol{v}_c\right),$$
where the function $\sigma$ has the same definition as the sigmoid activation function in Section 3.8 (Multilayer Perceptron), and $\text{leftChild}(n)$ is the left child node of node $n$: if the statement $x$ is true, $[\![x]\!] = 1$; otherwise $[\![x]\!] = -1$.
Let us compute the conditional probability of generating the word $w_3$ given the word $w_c$ in Figure 10.3. We need to take the inner product of the word vector $\boldsymbol{v}_c$ of $w_c$ with each non-leaf node vector on the path from the root to $w_3$. Since the path from the root to the leaf node of $w_3$ in the binary tree goes left, right, and then left (the bold path in Figure 10.3), we obtain
$$P(w_3 \mid w_c) = \sigma(\boldsymbol{u}_{n(w_3,1)}^\top \boldsymbol{v}_c) \cdot \sigma(-\boldsymbol{u}_{n(w_3,2)}^\top \boldsymbol{v}_c) \cdot \sigma(\boldsymbol{u}_{n(w_3,3)}^\top \boldsymbol{v}_c).$$
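A minimal sketch of this product, assuming hypothetical node vectors; the signs $+1, -1, +1$ encode the left-right-left path of Figure 10.3:

```python
import numpy as np

def sigma(x):
    return 1.0 / (1.0 + np.exp(-x))

d = 100
rng = np.random.default_rng(0)
v_c = rng.normal(scale=0.01, size=d)              # center word vector v_c
path_nodes = rng.normal(scale=0.01, size=(3, d))  # u_{n(w_3,1)}, u_{n(w_3,2)}, u_{n(w_3,3)}
signs = np.array([1.0, -1.0, 1.0])                # +1 when the next node on the path is a left child, else -1

p_w3 = sigma(signs * (path_nodes @ v_c)).prod()   # P(w_3 | w_c)
```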
Since $\sigma(x)+\sigma(-x) = 1$, the condition that the conditional probabilities of generating any word in the dictionary $\mathcal{V}$ given the center word $w_c$ sum to 1 is also satisfied:
$$\sum_{w \in \mathcal{V}} P(w \mid w_c) = 1.$$
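This property can be checked numerically on a tiny tree. The sketch below (hypothetical vectors) uses a full binary tree with 4 leaves and 3 internal nodes; each leaf's path is encoded as a sequence of (internal node, sign) pairs, and the four leaf probabilities sum to 1:

```python
import numpy as np

def sigma(x):
    return 1.0 / (1.0 + np.exp(-x))

d = 100
rng = np.random.default_rng(0)
v_c = rng.normal(scale=0.01, size=d)
nodes = rng.normal(scale=0.01, size=(3, d))  # vectors of the 3 internal nodes: root, its left child, its right child

# Each leaf is described by (internal node index, +1 if the path goes to that node's left child, else -1).
paths = [
    [(0, +1), (1, +1)],   # leaf reached by going left, left
    [(0, +1), (1, -1)],   # left, right
    [(0, -1), (2, +1)],   # right, left
    [(0, -1), (2, -1)],   # right, right
]

total = sum(np.prod([sigma(s * (nodes[n] @ v_c)) for n, s in path]) for path in paths)
print(total)   # 1.0 up to floating-point error
```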
Moreover, since $L(w_o)-1$ is on the order of $\mathcal{O}(\log_2|\mathcal{V}|)$, when the dictionary $\mathcal{V}$ is large, the per-step gradient computation cost of hierarchical softmax is greatly reduced compared with training without approximation.
Summary
- Negative sampling constructs the loss function by considering mutually independent events that involve both positive and negative examples. Its per-step gradient computation cost during training is linear in the number of sampled noise words.
- Hierarchical softmax uses a binary tree and constructs the loss function from the path between the root node and a leaf node. Its per-step gradient computation cost during training is proportional to the logarithm of the dictionary size.
Note: This section is identical to the original book; see the original book for the source.