When tackling overfitting, simplifying the model and enlarging the dataset are both effective, but neither is always practical: simplifying the model means discarding features, and enlarging the dataset runs into the difficulty of acquiring more data. Weight decay is a more commonly used technique.

Weight decay is equivalent to $L_2$-norm regularization.

The theory can be reviewed in the tutorial; the focus here is on implementing regularization in code and observing its effect.
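To see where the name "weight decay" comes from, here is the standard textbook derivation (restated for context, not specific to this code). With the penalized loss $\tilde\ell(\mathbf{w}, b) = \ell(\mathbf{w}, b) + \frac{\lambda}{2}\|\mathbf{w}\|^2$, a gradient step with learning rate $\eta$ becomes

$$\mathbf{w} \leftarrow \mathbf{w} - \eta \nabla_{\mathbf{w}} \tilde\ell = (1 - \eta\lambda)\,\mathbf{w} - \eta \nabla_{\mathbf{w}} \ell(\mathbf{w}, b).$$

Since $0 < \eta\lambda < 1$ in practice, every update first shrinks (decays) the weights by the factor $(1 - \eta\lambda)$ before applying the usual data gradient.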

Generating the Dataset

The underlying function of the original data is the following, where $p$ is the dimensionality of the feature vector:

$$y = 0.05 + \sum_{i=1}^{p} 0.01\, x_i + \epsilon, \qquad \epsilon \sim \mathcal{N}(0,\ 0.01^2)$$
To induce overfitting, we keep the training set small (20 examples) and raise the number of features to 200.

```python
import torch
import d2lzh

# Small training set, large number of features -> prone to overfitting
n_train, n_test, num_inputs = 20, 100, 200
true_w, true_b = torch.ones(num_inputs, 1) * 0.01, 0.05

features = torch.randn((n_train + n_test, num_inputs))
labels = torch.matmul(features, true_w) + true_b
labels += torch.normal(mean=0, std=0.01, size=labels.size())  # additive Gaussian noise

train_features, test_features = features[:n_train, :], features[n_train:, :]
train_labels, test_labels = labels[:n_train], labels[n_train:]
```

The L2 Norm Penalty Term

The $L_2$ norm penalty term is defined below. Only the model's weight parameters are penalized; the bias is left out.

```python
def l2_penalty(w):
    # Sum of squared weights, halved so that its gradient is simply w
    return (w ** 2).sum() / 2
```
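As a quick sanity check (an illustrative snippet, not part of the original notes): for $w = (3, 4)$ the penalty should be $(3^2 + 4^2)/2 = 12.5$.

```python
print(l2_penalty(torch.tensor([3.0, 4.0])))  # tensor(12.5000)
```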

Training and Testing

These are similar to the functions in the previous section. Since `fit_and_plot` will be run more than once, the parameters are (re)initialized inside it via `init_params`.

```python
batch_size, num_epochs, lr = 1, 100, 0.003
net, loss = d2lzh.linreg, d2lzh.squared_loss
dataset = torch.utils.data.TensorDataset(train_features, train_labels)
train_iter = torch.utils.data.DataLoader(dataset, batch_size, shuffle=True)

def init_params():
    # Fresh parameters for every run, so each call starts from scratch
    w = torch.randn((num_inputs, 1), requires_grad=True)
    b = torch.zeros(1, requires_grad=True)
    return w, b

def fit_and_plot(lambd):
    w, b = init_params()
    train_ls, test_ls = [], []
    for _ in range(num_epochs):
        for x, y in train_iter:
            # Add the L2 penalty, weighted by lambd, to the data loss
            l = loss(net(x, w, b), y) + lambd * l2_penalty(w)
            l = l.sum()
            if w.grad is not None:
                w.grad.data.zero_()
                b.grad.data.zero_()
            l.backward()
            d2lzh.sgd([w, b], lr, batch_size)
        train_ls.append(loss(net(train_features, w, b), train_labels).mean().item())
        test_ls.append(loss(net(test_features, w, b), test_labels).mean().item())
    d2lzh.semilogy(range(1, num_epochs + 1), train_ls, 'epochs', 'loss',
                   range(1, num_epochs + 1), test_ls, ['train', 'test'])
    print('L2 norm of w:', w.norm().item())
```
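For reference, the d2lzh helpers used above are small utilities from the book's companion package. A minimal sketch, assuming the standard definitions from the book (the packaged versions may differ slightly):

```python
def linreg(X, w, b):
    # Linear model: y_hat = Xw + b
    return torch.mm(X, w) + b

def squared_loss(y_hat, y):
    # Per-example squared error, halved; y is reshaped to match y_hat
    return (y_hat - y.view(y_hat.size())) ** 2 / 2

def sgd(params, lr, batch_size):
    # In-place mini-batch SGD; gradients are sums over the batch,
    # so divide by batch_size to average
    for param in params:
        param.data -= lr * param.grad / batch_size
```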

Observing Overfitting

First, look at the model's behavior without weight decay, i.e. with the penalty coefficient set to 0:

```python
if __name__ == "__main__":
    fit_and_plot(lambd=0)
```

Although the training error drops to nearly 0, the generalization (test) error remains high:

```
# L2 norm of the weight vector w
L2 norm of w: 13.747507095336914
```

[Figure: training vs. test loss (lambd=0)]
Now look at the effect with weight decay turned on:

```python
# With weight decay (penalty coefficient 10)
if __name__ == "__main__":
    fit_and_plot(lambd=10)
```

The generalization error drops substantially, and the norm of the learned weights is also much smaller.

```
# L2 norm of the weight vector w
L2 norm of w: 0.02657567523419857
```

[Figure: training vs. test loss (lambd=10)]
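A back-of-envelope check on why the norm collapses (my arithmetic, not from the original notes): with $\eta = 0.003$ and $\lambda = 10$, each update multiplies $\mathbf{w}$ by $(1 - \eta\lambda) = 0.97$ before the data gradient is applied, so over 100 epochs of 20 updates each, the penalty steadily pulls the weights toward 0 and prevents the model from fitting the noise.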

The complete code:

```python
import torch
import d2lzh

# Small training set, large number of features -> prone to overfitting
n_train, n_test, num_inputs = 20, 100, 200
true_w, true_b = torch.ones(num_inputs, 1) * 0.01, 0.05

features = torch.randn((n_train + n_test, num_inputs))
labels = torch.matmul(features, true_w) + true_b
labels += torch.normal(mean=0, std=0.01, size=labels.size())
train_features, test_features = features[:n_train, :], features[n_train:, :]
train_labels, test_labels = labels[:n_train], labels[n_train:]

def init_params():
    # Fresh parameters for every run
    w = torch.randn((num_inputs, 1), requires_grad=True)
    b = torch.zeros(1, requires_grad=True)
    return w, b

def l2_penalty(w):
    return (w ** 2).sum() / 2

batch_size, num_epochs, lr = 1, 100, 0.003
net, loss = d2lzh.linreg, d2lzh.squared_loss
dataset = torch.utils.data.TensorDataset(train_features, train_labels)
train_iter = torch.utils.data.DataLoader(dataset, batch_size, shuffle=True)

def fit_and_plot(lambd):
    w, b = init_params()
    train_ls, test_ls = [], []
    for _ in range(num_epochs):
        for x, y in train_iter:
            l = loss(net(x, w, b), y) + lambd * l2_penalty(w)
            l = l.sum()
            if w.grad is not None:
                w.grad.data.zero_()
                b.grad.data.zero_()
            l.backward()
            d2lzh.sgd([w, b], lr, batch_size)
        train_ls.append(loss(net(train_features, w, b), train_labels).mean().item())
        test_ls.append(loss(net(test_features, w, b), test_labels).mean().item())
    d2lzh.semilogy(range(1, num_epochs + 1), train_ls, 'epochs', 'loss',
                   range(1, num_epochs + 1), test_ls, ['train', 'test'])
    print('L2 norm of w:', w.norm().item())

if __name__ == "__main__":
    fit_and_plot(lambd=10)
```

Concise Implementation

Weight decay can be set directly when the optimizer is constructed. Two optimizer instances are used here so that weight decay applies only to the weight and not to the bias:

```python
from torch import nn

# Reuses num_inputs, lr, num_epochs, loss, train_iter, and the data from above
def fit_and_plot_pytorch(wd):
    net = nn.Linear(num_inputs, 1)
    nn.init.normal_(net.weight, mean=0, std=1)
    nn.init.normal_(net.bias, mean=0, std=1)
    # Weight decay is applied to the weight only; the bias gets a plain SGD update
    optimizer_w = torch.optim.SGD(params=[net.weight], lr=lr, weight_decay=wd)
    optimizer_b = torch.optim.SGD(params=[net.bias], lr=lr)
    train_ls, test_ls = [], []
    for _ in range(num_epochs):
        for x, y in train_iter:
            l = loss(net(x), y).mean()
            optimizer_w.zero_grad()
            optimizer_b.zero_grad()
            l.backward()
            # Step both optimizers to update all parameters
            optimizer_w.step()
            optimizer_b.step()
        train_ls.append(loss(net(train_features), train_labels).mean().item())
        test_ls.append(loss(net(test_features), test_labels).mean().item())
    d2lzh.semilogy(range(1, num_epochs + 1), train_ls, 'epochs', 'loss',
                   range(1, num_epochs + 1), test_ls, ['train', 'test'])
    print('L2 norm of w:', net.weight.data.norm().item())

if __name__ == "__main__":
    fit_and_plot_pytorch(0)
    fit_and_plot_pytorch(10)
```

The results are close to those of the from-scratch implementation:

```
L2 norm of w: 13.253257751464844
L2 norm of w: 0.022790107876062393
```
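As a mental model of what the `weight_decay` argument does (a simplified sketch of the update rule, not PyTorch's actual source), SGD adds `wd * param` to the gradient before stepping, which is exactly the gradient of the $\frac{\text{wd}}{2}\|\mathbf{w}\|^2$ penalty added manually above:

```python
def sgd_step_with_weight_decay(param, lr, wd):
    # Simplified sketch: weight decay contributes wd * w to the gradient
    grad = param.grad + wd * param.data
    param.data -= lr * grad  # equivalent to w = (1 - lr*wd)*w - lr*param.grad
```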

[Figure: training vs. test loss (wd=0)]
[Figure: training vs. test loss (wd=10)]

Summary

  • Regularization adds a penalty term to the model's loss function so that the learned parameter values stay small; it is a common way to counter overfitting.
  • Weight decay is equivalent to $L_2$-norm regularization and usually drives the elements of the learned weight parameters close to 0.
  • Weight decay can be specified through the optimizer's weight_decay hyperparameter.
  • Multiple optimizer instances can be defined to apply different update rules to different model parameters; a single optimizer with parameter groups achieves the same effect, as sketched below.
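For instance, instead of two separate optimizers, one optimizer with per-group options gives the same behavior (a sketch using PyTorch's standard parameter-group API; `net`, `lr`, and `wd` as defined above):

```python
# One optimizer, two parameter groups: decay the weight but not the bias
optimizer = torch.optim.SGD([
    {'params': [net.weight], 'weight_decay': wd},
    {'params': [net.bias]},  # no weight_decay here
], lr=lr)
```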