Perceptron and Large Margin Classifiers
This is the final section of these notes on learning theory, and in it we introduce a different model of machine learning. So far we have considered the batch learning setting, in which we are first given a training set to learn from, and the resulting hypothesis $h$ is then evaluated on separate test data. In this section we consider a new setting: online learning, in which the algorithm has to make predictions continuously, even while it is still learning.
In this setting, the learning algorithm is given a sequence of examples $(x^{(1)},y^{(1)}), (x^{(2)},y^{(2)}), \dots, (x^{(m)},y^{(m)})$ in order. Specifically, the algorithm first sees $x^{(1)}$ and is asked to predict $y^{(1)}$. After it has made its prediction, the true value of $y^{(1)}$ is revealed to the algorithm (which it may then use to perform some learning). The algorithm is then shown $x^{(2)}$ and asked to predict $y^{(2)}$, after which the true value of $y^{(2)}$ is revealed, and it can again learn from this information. The process continues until the last example $(x^{(m)},y^{(m)})$. In the online learning setting, we care about the total number of mistakes the algorithm makes along the way; this setting therefore models applications in which the algorithm must make predictions while it is still learning.
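To make the protocol concrete, here is a minimal sketch in Python (not part of the original notes). The `predict`/`update` interface on `learner` is a hypothetical convention introduced purely for illustration:

```python
def run_online(learner, examples):
    """Run the online learning protocol: predict first, then learn.

    `learner` is any object exposing the hypothetical methods
    predict(x) -> label and update(x, y); `examples` is a sequence
    of (x, y) pairs presented in order.
    """
    mistakes = 0
    for x, y in examples:
        y_hat = learner.predict(x)  # commit to a prediction before seeing y
        if y_hat != y:
            mistakes += 1           # count every error made along the way
        learner.update(x, y)        # true label revealed; learner may adapt
    return mistakes
```

The returned mistake count is exactly the quantity the online learning bounds below are about.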
Next, we will give a bound on the online learning error of the perceptron algorithm. To make the subsequent derivations easier, we adopt the convention of denoting the class labels by $y \in \{-1, 1\}$.
Recall that the perceptron algorithm (introduced in Chapter 2) has parameters $\theta \in \mathbb{R}^{n+1}$ and makes its predictions according to

$$
h_\theta(x) = g(\theta^T x) \qquad (1)
$$

where

$$
g(z)=\begin{cases} 1 & \text{if } z \ge 0 \\ -1 & \text{if } z < 0 \end{cases}
$$
Then, given a training example $(x, y)$, the perceptron learning rule updates the parameters as follows. If $h_\theta(x) = y$, it makes no change to the parameters. Otherwise, it performs the update¹

$$
\theta := \theta + yx
$$
¹ This looks slightly different from the update rule we saw earlier, because here we have changed the labels to $y \in \{-1, 1\}$. Also, the learning rate parameter $\alpha$ has been omitted. The only effect of the learning rate is to scale the parameters $\theta$ by some fixed constant, which does not affect the behavior of the perceptron.
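A minimal sketch of this perceptron as an online learner, assuming NumPy and the hypothetical `predict`/`update` interface from the protocol sketch above; as in the text, the weights start at zero, the learning rate is omitted, and updates happen only on mistakes:

```python
import numpy as np

class Perceptron:
    """Online perceptron with labels in {-1, +1} and no learning rate."""

    def __init__(self, n):
        self.theta = np.zeros(n)            # weights initialized to zero

    def predict(self, x):
        # h_theta(x) = g(theta^T x), with g as in Equation (1)
        return 1 if self.theta @ x >= 0 else -1

    def update(self, x, y):
        # theta := theta + y*x, but only if the current prediction is wrong
        if self.predict(x) != y:
            self.theta = self.theta + y * x
```

Combined with the earlier sketch, `run_online(Perceptron(n), examples)` returns the total number of mistakes made on the sequence, which is the quantity the theorem below bounds.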
The following theorem gives a bound on the online learning error of the perceptron algorithm when it is run as an online algorithm that performs an update each time it misclassifies an example. Note that the bound below on the number of mistakes has no explicit dependence on the number of examples $m$ in the sequence, or on the dimension $n$ of the inputs.
**Theorem (Block, 1962, and Novikoff, 1962).** Let a sequence of examples $(x^{(1)},y^{(1)}), (x^{(2)},y^{(2)}), \dots, (x^{(m)},y^{(m)})$ be given. Suppose that $\|x^{(i)}\| \le D$ for all $i$, and further that there exists a unit-length vector $u$ ($\|u\|_2 = 1$) such that $y^{(i)} \cdot (u^T x^{(i)}) \ge \gamma$ for all examples in the sequence (i.e., $u^T x^{(i)} \ge \gamma$ if $y^{(i)} = 1$, and $u^T x^{(i)} \le -\gamma$ if $y^{(i)} = -1$, so that $u$ separates the data with a margin of at least $\gamma$). Then the total number of mistakes that the perceptron algorithm makes on this sequence is at most $(D/\gamma)^2$.
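Before the proof, a quick empirical sanity check (my own illustration, not part of the original notes): generate a sequence separated with margin at least $\gamma$ by a known unit vector $u$, run one online pass of the perceptron, and compare the mistake count with $(D/\gamma)^2$:

```python
import numpy as np

rng = np.random.default_rng(0)
n, gamma = 5, 0.25

# A known unit-length separator u; keep only inputs whose margin
# |u^T x| is at least gamma, so the theorem's hypothesis holds.
u = rng.normal(size=n)
u /= np.linalg.norm(u)
X = np.array([x for x in rng.uniform(-1, 1, size=(2000, n))
              if abs(u @ x) >= gamma][:200])
y = np.sign(X @ u)                       # labels in {-1, +1}

D = np.linalg.norm(X, axis=1).max()      # max input norm: ||x^(i)|| <= D
bound = (D / gamma) ** 2

theta, mistakes = np.zeros(n), 0
for x_i, y_i in zip(X, y):
    if (1 if theta @ x_i >= 0 else -1) != y_i:
        mistakes += 1
        theta = theta + y_i * x_i        # update only on mistakes

print(f"mistakes = {mistakes}, bound (D/gamma)^2 = {bound:.1f}")
# Expected: the mistake count comes out no larger than the bound.
```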
**Proof.** The perceptron updates its weights only on those examples on which it makes a mistake. Let $\theta^{(k)}$ be the weights that were in use when it made its $k$-th mistake. Then $\theta^{(1)} = \vec{0}$ (since the weights are initialized to zero), and if the $k$-th mistake was on the example $(x^{(i)},y^{(i)})$, then $g((x^{(i)})^T \theta^{(k)}) \ne y^{(i)}$, which implies that

$$
(x^{(i)})^T \theta^{(k)} y^{(i)} \le 0 \qquad (2)
$$

Also, from the definition of the perceptron learning rule, we have $\theta^{(k+1)} = \theta^{(k)} + y^{(i)}x^{(i)}$.
We then have:

$$
\begin{aligned}
(\theta^{(k+1)})^T u &= (\theta^{(k)})^T u + y^{(i)}(x^{(i)})^T u \\
&\ge (\theta^{(k)})^T u + \gamma
\end{aligned}
$$

By a straightforward inductive argument (each of the $k$ mistakes increases $\theta^T u$ by at least $\gamma$, starting from $(\theta^{(1)})^T u = 0$), this implies:

$$
(\theta^{(k+1)})^T u \ge k\gamma \qquad (3)
$$
Also from the definition of the perceptron learning rule, we get:

$$
\begin{aligned}
\|\theta^{(k+1)}\|^2 &= \|\theta^{(k)} + y^{(i)}x^{(i)}\|^2 \\
&= \|\theta^{(k)}\|^2 + \|x^{(i)}\|^2 + 2y^{(i)}(x^{(i)})^T\theta^{(k)} \\
&\le \|\theta^{(k)}\|^2 + \|x^{(i)}\|^2 \\
&\le \|\theta^{(k)}\|^2 + D^2 \qquad\qquad (4)
\end{aligned}
$$

The third step above used Equation (2). Applying a straightforward inductive argument once more, inequality (4) implies:

$$
\|\theta^{(k+1)}\|^2 \le kD^2
$$
Putting (3) and (4) together:

$$
\begin{aligned}
\sqrt{k}D &\ge \|\theta^{(k+1)}\| \\
&\ge (\theta^{(k+1)})^T u \\
&\ge k\gamma
\end{aligned}
$$

The second inequality above follows from the fact that $u$ is a unit-length vector (and $z^T u = \|z\| \cdot \|u\| \cos\phi \le \|z\| \cdot \|u\|$, where $\phi$ is the angle between $z$ and $u$). This implies that $k \le (D/\gamma)^2$. Hence, if the perceptron made a $k$-th mistake, then $k \le (D/\gamma)^2$. $\blacksquare$
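As a concrete illustration (numbers chosen arbitrarily, not from the original notes): if every input satisfies $\|x^{(i)}\| \le D = 2$ and the data are separated with margin $\gamma = 0.5$, then

$$
k \le \left(\frac{D}{\gamma}\right)^2 = \left(\frac{2}{0.5}\right)^2 = 16
$$

so the perceptron makes at most $16$ mistakes on such a sequence, no matter how long the sequence is.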