Transfer Learning for Computer Vision Tutorial

Original: https://pytorch.org/tutorials/beginner/transfer_learning_tutorial.html

Author: Sasank Chilamkurthy

In this tutorial, you will learn how to train a convolutional neural network for image classification using transfer learning. You can read more about transfer learning in the cs231n notes.

Quoting these notes,

In practice, very few people train an entire convolutional network from scratch (with random initialization), because it is relatively rare to have a dataset of sufficient size. Instead, it is common to pretrain a ConvNet on a very large dataset (e.g. ImageNet, which contains 1.2 million images with 1000 categories), and then use the ConvNet either as an initialization or a fixed feature extractor for the task of interest.

These two major transfer learning scenarios look as follows:

  • Finetuning the ConvNet: Instead of random initialization, we initialize the network with a pretrained network, like the one trained on the imagenet 1000 dataset. The rest of the training proceeds as usual.
  • ConvNet as fixed feature extractor: Here, we freeze the weights of the whole network except the final fully connected layer. This last fully connected layer is replaced with a new one with random weights, and only this layer is trained.
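The difference between the two scenarios can be sketched without any framework: an update step only modifies parameters marked as trainable. This is a minimal, framework-free illustration with made-up layer names; in PyTorch the same effect is achieved through each parameter's requires_grad flag, as shown later in this tutorial.

```python
def sgd_step(params, grads, lr=0.1):
    """Apply one SGD update, skipping frozen parameters.

    `params` maps a name to (value, trainable); `grads` maps a name
    to its gradient.  Frozen parameters are returned unchanged.
    """
    return {
        name: (value - lr * grads[name]) if trainable else value
        for name, (value, trainable) in params.items()
    }

# Fine-tuning: every layer is trainable.
finetune = {'backbone': (1.0, True), 'head': (0.5, True)}
# Fixed feature extractor: backbone frozen, only the new head learns.
extractor = {'backbone': (1.0, False), 'head': (0.5, True)}

grads = {'backbone': 2.0, 'head': 2.0}

ft = sgd_step(finetune, grads)   # every layer moves
fx = sgd_step(extractor, grads)  # only the head moves
```

Under fine-tuning both values change; under feature extraction the backbone value stays at 1.0 while only the head is updated.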
    # License: BSD
    # Author: Sasank Chilamkurthy

    from __future__ import print_function, division

    import torch
    import torch.nn as nn
    import torch.optim as optim
    from torch.optim import lr_scheduler
    import numpy as np
    import torchvision
    from torchvision import datasets, models, transforms
    import matplotlib.pyplot as plt
    import time
    import os
    import copy

    plt.ion()   # interactive mode

Load Data

We will use the torchvision and torch.utils.data packages to load the data.

The problem we are going to solve today is to train a model to classify ants and bees. We have about 120 training images each for ants and bees, and 75 validation images for each class. Usually, this is a very small dataset to generalize on if training from scratch. Since we are using transfer learning, we should be able to generalize reasonably well.

This dataset is a very small subset of imagenet.

Note

Download the data and extract it to the current directory.
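For reference, datasets.ImageFolder (used below) expects one sub-directory per class and infers the class names from those directory names. A quick stdlib-only sketch of the expected layout (the temporary directory here is illustrative; the real data lives under data/hymenoptera_data):

```python
import os
import tempfile

# Expected layout:
#   hymenoptera_data/
#       train/ants/*.jpg   train/bees/*.jpg
#       val/ants/*.jpg     val/bees/*.jpg
root = tempfile.mkdtemp()
for split in ['train', 'val']:
    for cls in ['ants', 'bees']:
        os.makedirs(os.path.join(root, 'hymenoptera_data', split, cls))

# ImageFolder derives its class list from the sorted sub-directory names:
train_dir = os.path.join(root, 'hymenoptera_data', 'train')
classes = sorted(os.listdir(train_dir))
```

With this layout, image_datasets['train'].classes below will be ['ants', 'bees'].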

    # Data augmentation and normalization for training
    # Just normalization for validation
    data_transforms = {
        'train': transforms.Compose([
            transforms.RandomResizedCrop(224),
            transforms.RandomHorizontalFlip(),
            transforms.ToTensor(),
            transforms.Normalize([0.485, 0.456, 0.406], [0.229, 0.224, 0.225])
        ]),
        'val': transforms.Compose([
            transforms.Resize(256),
            transforms.CenterCrop(224),
            transforms.ToTensor(),
            transforms.Normalize([0.485, 0.456, 0.406], [0.229, 0.224, 0.225])
        ]),
    }

    data_dir = 'data/hymenoptera_data'
    image_datasets = {x: datasets.ImageFolder(os.path.join(data_dir, x),
                                              data_transforms[x])
                      for x in ['train', 'val']}
    dataloaders = {x: torch.utils.data.DataLoader(image_datasets[x], batch_size=4,
                                                  shuffle=True, num_workers=4)
                   for x in ['train', 'val']}
    dataset_sizes = {x: len(image_datasets[x]) for x in ['train', 'val']}
    class_names = image_datasets['train'].classes

    device = torch.device("cuda:0" if torch.cuda.is_available() else "cpu")
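The Normalize transform above maps each channel as x' = (x − mean) / std, and the imshow helper later undoes this with x = std · x' + mean. A quick channel-wise round-trip check in plain Python (no torch needed; the pixel value is made up):

```python
# ImageNet channel statistics used by the transforms above.
mean = [0.485, 0.456, 0.406]
std = [0.229, 0.224, 0.225]

def normalize(pixel):
    """Per-channel (x - mean) / std, as transforms.Normalize does."""
    return [(x - m) / s for x, m, s in zip(pixel, mean, std)]

def denormalize(pixel):
    """Per-channel std * x + mean, as the imshow helper does."""
    return [s * x + m for x, m, s in zip(pixel, mean, std)]

pixel = [0.5, 0.5, 0.5]          # one RGB pixel in [0, 1]
roundtrip = denormalize(normalize(pixel))
```

The round-trip recovers the original pixel, which is why imshow can display normalized tensors.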

Visualize a few images

Let's visualize a few training images so as to understand the data augmentations.

    def imshow(inp, title=None):
        """Imshow for Tensor."""
        inp = inp.numpy().transpose((1, 2, 0))
        mean = np.array([0.485, 0.456, 0.406])
        std = np.array([0.229, 0.224, 0.225])
        inp = std * inp + mean
        inp = np.clip(inp, 0, 1)
        plt.imshow(inp)
        if title is not None:
            plt.title(title)
        plt.pause(0.001)  # pause a bit so that plots are updated

    # Get a batch of training data
    inputs, classes = next(iter(dataloaders['train']))

    # Make a grid from batch
    out = torchvision.utils.make_grid(inputs)

    imshow(out, title=[class_names[x] for x in classes])

[Image: sphx_glr_transfer_learning_tutorial_001.png]

Training the model

Now, let's write a general function to train a model. Here, we will illustrate:

  • Scheduling the learning rate
  • Saving the best model

In the following, the parameter scheduler is an LR scheduler object from torch.optim.lr_scheduler.

    def train_model(model, criterion, optimizer, scheduler, num_epochs=25):
        since = time.time()

        best_model_wts = copy.deepcopy(model.state_dict())
        best_acc = 0.0

        for epoch in range(num_epochs):
            print('Epoch {}/{}'.format(epoch, num_epochs - 1))
            print('-' * 10)

            # Each epoch has a training and validation phase
            for phase in ['train', 'val']:
                if phase == 'train':
                    model.train()  # Set model to training mode
                else:
                    model.eval()   # Set model to evaluate mode

                running_loss = 0.0
                running_corrects = 0

                # Iterate over data.
                for inputs, labels in dataloaders[phase]:
                    inputs = inputs.to(device)
                    labels = labels.to(device)

                    # zero the parameter gradients
                    optimizer.zero_grad()

                    # forward
                    # track history if only in train
                    with torch.set_grad_enabled(phase == 'train'):
                        outputs = model(inputs)
                        _, preds = torch.max(outputs, 1)
                        loss = criterion(outputs, labels)

                        # backward + optimize only if in training phase
                        if phase == 'train':
                            loss.backward()
                            optimizer.step()

                    # statistics
                    running_loss += loss.item() * inputs.size(0)
                    running_corrects += torch.sum(preds == labels.data)
                if phase == 'train':
                    scheduler.step()

                epoch_loss = running_loss / dataset_sizes[phase]
                epoch_acc = running_corrects.double() / dataset_sizes[phase]

                print('{} Loss: {:.4f} Acc: {:.4f}'.format(
                    phase, epoch_loss, epoch_acc))

                # deep copy the model
                if phase == 'val' and epoch_acc > best_acc:
                    best_acc = epoch_acc
                    best_model_wts = copy.deepcopy(model.state_dict())

            print()

        time_elapsed = time.time() - since
        print('Training complete in {:.0f}m {:.0f}s'.format(
            time_elapsed // 60, time_elapsed % 60))
        print('Best val Acc: {:4f}'.format(best_acc))

        # load best model weights
        model.load_state_dict(best_model_wts)
        return model
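One detail in the loop above is worth noting: criterion returns the mean loss over a batch, so the code multiplies loss.item() by inputs.size(0) before accumulating. Dividing by dataset_sizes[phase] then yields a true per-sample average even when the last batch is smaller than the rest. A plain-Python sketch with made-up batch losses:

```python
# Per-batch mean losses and batch sizes (the last batch is smaller).
batch_losses = [0.6, 0.4, 0.2]
batch_sizes = [4, 4, 2]

# A naive average of batch means over-weights the small final batch:
naive = sum(batch_losses) / len(batch_losses)

# Weighting each batch mean by its size recovers the per-sample average,
# which is what the training loop computes as epoch_loss:
running_loss = sum(l * n for l, n in zip(batch_losses, batch_sizes))
epoch_loss = running_loss / sum(batch_sizes)
```

Here the naive average is 0.40 while the correct per-sample average is 0.44; the gap grows with the size difference of the final batch.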

Visualizing the model predictions

Generic function to display predictions for a few images

    def visualize_model(model, num_images=6):
        was_training = model.training
        model.eval()
        images_so_far = 0
        fig = plt.figure()

        with torch.no_grad():
            for i, (inputs, labels) in enumerate(dataloaders['val']):
                inputs = inputs.to(device)
                labels = labels.to(device)

                outputs = model(inputs)
                _, preds = torch.max(outputs, 1)

                for j in range(inputs.size()[0]):
                    images_so_far += 1
                    ax = plt.subplot(num_images//2, 2, images_so_far)
                    ax.axis('off')
                    ax.set_title(f'predicted: {class_names[preds[j]]}')
                    imshow(inputs.cpu().data[j])

                    if images_so_far == num_images:
                        model.train(mode=was_training)
                        return
            model.train(mode=was_training)

Finetuning the ConvNet

Load a pretrained model and reset the final fully connected layer.

    model_ft = models.resnet18(pretrained=True)
    num_ftrs = model_ft.fc.in_features
    # Here the size of each output sample is set to 2.
    # Alternatively, it can be generalized to nn.Linear(num_ftrs, len(class_names)).
    model_ft.fc = nn.Linear(num_ftrs, 2)

    model_ft = model_ft.to(device)

    criterion = nn.CrossEntropyLoss()

    # Observe that all parameters are being optimized
    optimizer_ft = optim.SGD(model_ft.parameters(), lr=0.001, momentum=0.9)

    # Decay LR by a factor of 0.1 every 7 epochs
    exp_lr_scheduler = lr_scheduler.StepLR(optimizer_ft, step_size=7, gamma=0.1)
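StepLR with step_size=7 and gamma=0.1 multiplies the learning rate by 0.1 every 7 epochs, i.e. lr(e) = 0.001 · 0.1^(e // 7). A small plain-Python sketch of the resulting schedule (no torch needed; step_lr is a helper written here for illustration):

```python
def step_lr(base_lr, epoch, step_size=7, gamma=0.1):
    """Learning rate after `epoch` epochs under a StepLR-style schedule."""
    return base_lr * gamma ** (epoch // step_size)

# The schedule used in this tutorial: 0.001 for epochs 0-6,
# 0.0001 for epochs 7-13, 0.00001 for epochs 14-20, and so on.
schedule = [step_lr(0.001, e) for e in range(25)]
```

In the training loop above, scheduler.step() is called once per epoch after the optimizer steps, which is what advances this schedule.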

Train and evaluate

It should take around 15-25 minutes on CPU. On GPU, it takes less than a minute.

    model_ft = train_model(model_ft, criterion, optimizer_ft, exp_lr_scheduler,
                           num_epochs=25)

Out:

    Epoch 0/24
    ----------
    train Loss: 0.6303 Acc: 0.6926
    val Loss: 0.1492 Acc: 0.9346
    Epoch 1/24
    ----------
    train Loss: 0.5511 Acc: 0.7869
    val Loss: 0.2577 Acc: 0.8889
    Epoch 2/24
    ----------
    train Loss: 0.4885 Acc: 0.8115
    val Loss: 0.3390 Acc: 0.8758
    Epoch 3/24
    ----------
    train Loss: 0.5158 Acc: 0.7992
    val Loss: 0.5070 Acc: 0.8366
    Epoch 4/24
    ----------
    train Loss: 0.5878 Acc: 0.7992
    val Loss: 0.2706 Acc: 0.8758
    Epoch 5/24
    ----------
    train Loss: 0.4396 Acc: 0.8279
    val Loss: 0.2870 Acc: 0.8954
    Epoch 6/24
    ----------
    train Loss: 0.4612 Acc: 0.8238
    val Loss: 0.2809 Acc: 0.9150
    Epoch 7/24
    ----------
    train Loss: 0.4387 Acc: 0.8402
    val Loss: 0.1853 Acc: 0.9281
    Epoch 8/24
    ----------
    train Loss: 0.2998 Acc: 0.8648
    val Loss: 0.1926 Acc: 0.9085
    Epoch 9/24
    ----------
    train Loss: 0.3383 Acc: 0.9016
    val Loss: 0.1762 Acc: 0.9281
    Epoch 10/24
    ----------
    train Loss: 0.2969 Acc: 0.8730
    val Loss: 0.1872 Acc: 0.8954
    Epoch 11/24
    ----------
    train Loss: 0.3117 Acc: 0.8811
    val Loss: 0.1807 Acc: 0.9150
    Epoch 12/24
    ----------
    train Loss: 0.3005 Acc: 0.8770
    val Loss: 0.1930 Acc: 0.9085
    Epoch 13/24
    ----------
    train Loss: 0.3129 Acc: 0.8689
    val Loss: 0.2184 Acc: 0.9150
    Epoch 14/24
    ----------
    train Loss: 0.3776 Acc: 0.8607
    val Loss: 0.1869 Acc: 0.9216
    Epoch 15/24
    ----------
    train Loss: 0.2245 Acc: 0.9016
    val Loss: 0.1742 Acc: 0.9346
    Epoch 16/24
    ----------
    train Loss: 0.3105 Acc: 0.8607
    val Loss: 0.2056 Acc: 0.9216
    Epoch 17/24
    ----------
    train Loss: 0.2729 Acc: 0.8893
    val Loss: 0.1722 Acc: 0.9085
    Epoch 18/24
    ----------
    train Loss: 0.3210 Acc: 0.8730
    val Loss: 0.1977 Acc: 0.9281
    Epoch 19/24
    ----------
    train Loss: 0.3231 Acc: 0.8566
    val Loss: 0.1811 Acc: 0.9216
    Epoch 20/24
    ----------
    train Loss: 0.3206 Acc: 0.8648
    val Loss: 0.2033 Acc: 0.9150
    Epoch 21/24
    ----------
    train Loss: 0.2917 Acc: 0.8648
    val Loss: 0.1694 Acc: 0.9150
    Epoch 22/24
    ----------
    train Loss: 0.2412 Acc: 0.8852
    val Loss: 0.1757 Acc: 0.9216
    Epoch 23/24
    ----------
    train Loss: 0.2508 Acc: 0.8975
    val Loss: 0.1662 Acc: 0.9281
    Epoch 24/24
    ----------
    train Loss: 0.3283 Acc: 0.8566
    val Loss: 0.1761 Acc: 0.9281
    Training complete in 1m 10s
    Best val Acc: 0.934641
    visualize_model(model_ft)

[Image: sphx_glr_transfer_learning_tutorial_002.png]

ConvNet as fixed feature extractor

Here, we need to freeze all the network except the final layer. We need to set requires_grad = False to freeze the parameters so that the gradients are not computed in backward().

You can read more about this in the documentation.

    model_conv = torchvision.models.resnet18(pretrained=True)
    for param in model_conv.parameters():
        param.requires_grad = False

    # Parameters of newly constructed modules have requires_grad=True by default
    num_ftrs = model_conv.fc.in_features
    model_conv.fc = nn.Linear(num_ftrs, 2)

    model_conv = model_conv.to(device)

    criterion = nn.CrossEntropyLoss()

    # Observe that only parameters of final layer are being optimized as
    # opposed to before.
    optimizer_conv = optim.SGD(model_conv.fc.parameters(), lr=0.001, momentum=0.9)

    # Decay LR by a factor of 0.1 every 7 epochs
    exp_lr_scheduler = lr_scheduler.StepLR(optimizer_conv, step_size=7, gamma=0.1)

Train and evaluate

On CPU this will take about half the time of the previous scenario. This is expected, as gradients don't need to be computed for most of the network. The forward pass does still need to be computed, however.

    model_conv = train_model(model_conv, criterion, optimizer_conv,
                             exp_lr_scheduler, num_epochs=25)

Out:

    Epoch 0/24
    ----------
    train Loss: 0.7258 Acc: 0.6148
    val Loss: 0.2690 Acc: 0.9020
    Epoch 1/24
    ----------
    train Loss: 0.5342 Acc: 0.7500
    val Loss: 0.1905 Acc: 0.9412
    Epoch 2/24
    ----------
    train Loss: 0.4262 Acc: 0.8320
    val Loss: 0.1903 Acc: 0.9412
    Epoch 3/24
    ----------
    train Loss: 0.4103 Acc: 0.8197
    val Loss: 0.2658 Acc: 0.8954
    Epoch 4/24
    ----------
    train Loss: 0.3938 Acc: 0.8115
    val Loss: 0.2871 Acc: 0.8954
    Epoch 5/24
    ----------
    train Loss: 0.4623 Acc: 0.8361
    val Loss: 0.1651 Acc: 0.9346
    Epoch 6/24
    ----------
    train Loss: 0.5348 Acc: 0.7869
    val Loss: 0.1944 Acc: 0.9477
    Epoch 7/24
    ----------
    train Loss: 0.3827 Acc: 0.8402
    val Loss: 0.1846 Acc: 0.9412
    Epoch 8/24
    ----------
    train Loss: 0.3655 Acc: 0.8443
    val Loss: 0.1873 Acc: 0.9412
    Epoch 9/24
    ----------
    train Loss: 0.3275 Acc: 0.8525
    val Loss: 0.2091 Acc: 0.9412
    Epoch 10/24
    ----------
    train Loss: 0.3375 Acc: 0.8320
    val Loss: 0.1798 Acc: 0.9412
    Epoch 11/24
    ----------
    train Loss: 0.3077 Acc: 0.8648
    val Loss: 0.1942 Acc: 0.9346
    Epoch 12/24
    ----------
    train Loss: 0.4336 Acc: 0.7787
    val Loss: 0.1934 Acc: 0.9346
    Epoch 13/24
    ----------
    train Loss: 0.3149 Acc: 0.8566
    val Loss: 0.2062 Acc: 0.9281
    Epoch 14/24
    ----------
    train Loss: 0.3617 Acc: 0.8320
    val Loss: 0.1761 Acc: 0.9412
    Epoch 15/24
    ----------
    train Loss: 0.3066 Acc: 0.8361
    val Loss: 0.1799 Acc: 0.9281
    Epoch 16/24
    ----------
    train Loss: 0.3952 Acc: 0.8443
    val Loss: 0.1666 Acc: 0.9346
    Epoch 17/24
    ----------
    train Loss: 0.3552 Acc: 0.8443
    val Loss: 0.1928 Acc: 0.9412
    Epoch 18/24
    ----------
    train Loss: 0.3106 Acc: 0.8648
    val Loss: 0.1964 Acc: 0.9346
    Epoch 19/24
    ----------
    train Loss: 0.3675 Acc: 0.8566
    val Loss: 0.1813 Acc: 0.9346
    Epoch 20/24
    ----------
    train Loss: 0.3565 Acc: 0.8320
    val Loss: 0.1758 Acc: 0.9346
    Epoch 21/24
    ----------
    train Loss: 0.2922 Acc: 0.8566
    val Loss: 0.2295 Acc: 0.9216
    Epoch 22/24
    ----------
    train Loss: 0.3283 Acc: 0.8402
    val Loss: 0.2267 Acc: 0.9281
    Epoch 23/24
    ----------
    train Loss: 0.2875 Acc: 0.8770
    val Loss: 0.1878 Acc: 0.9346
    Epoch 24/24
    ----------
    train Loss: 0.3172 Acc: 0.8689
    val Loss: 0.1849 Acc: 0.9412
    Training complete in 0m 34s
    Best val Acc: 0.947712
    visualize_model(model_conv)

    plt.ioff()
    plt.show()

[Image: sphx_glr_transfer_learning_tutorial_003.png]

Further Learning

If you would like to learn more about transfer learning, check out our Quantized Transfer Learning for Computer Vision Tutorial.

Total running time of the script: (1 minutes 56.157 seconds)

Download Python source code: transfer_learning_tutorial.py

Download Jupyter notebook: transfer_learning_tutorial.ipynb

Gallery generated by Sphinx-Gallery