Transfer Learning for Computer Vision Tutorial

Original: https://pytorch.org/tutorials/beginner/transfer_learning_tutorial.html


Author: Sasank Chilamkurthy

In this tutorial, you will learn how to train a convolutional neural network for image classification using transfer learning. You can read more about transfer learning in the cs231n notes.

Quoting these notes,

In practice, very few people train an entire Convolutional Network from scratch (with random initialization), because it is relatively rare to have a dataset of sufficient size. Instead, it is common to pretrain a ConvNet on a very large dataset (e.g. ImageNet, which contains 1.2 million images with 1000 categories), and then use the ConvNet either as an initialization or as a fixed feature extractor for the task of interest.

These two major transfer learning scenarios look as follows:

  • Finetuning the convnet: Instead of random initialization, we initialize the network with a pretrained network, like one trained on the imagenet 1000 dataset. The rest of the training looks as usual.
  • ConvNet as fixed feature extractor: Here, we freeze the weights of the whole network except the final fully connected layer. This last fully connected layer is replaced with a new one with random weights, and only this layer is trained.
    # License: BSD
    # Author: Sasank Chilamkurthy

    from __future__ import print_function, division

    import torch
    import torch.nn as nn
    import torch.optim as optim
    from torch.optim import lr_scheduler
    import numpy as np
    import torchvision
    from torchvision import datasets, models, transforms
    import matplotlib.pyplot as plt
    import time
    import os
    import copy

    plt.ion()   # interactive mode

Load Data

We will use the torchvision and torch.utils.data packages for loading the data.

The problem we are going to solve today is to train a model to classify ants and bees. We have about 120 training images each for ants and bees, and 75 validation images for each class. Usually, this is a very small dataset to generalize on if training from scratch. Since we are using transfer learning, we should be able to generalize reasonably well.

This dataset is a very small subset of imagenet.

Note

Download the data and extract it to the current directory.

    # Data augmentation and normalization for training
    # Just normalization for validation
    data_transforms = {
        'train': transforms.Compose([
            transforms.RandomResizedCrop(224),
            transforms.RandomHorizontalFlip(),
            transforms.ToTensor(),
            transforms.Normalize([0.485, 0.456, 0.406], [0.229, 0.224, 0.225])
        ]),
        'val': transforms.Compose([
            transforms.Resize(256),
            transforms.CenterCrop(224),
            transforms.ToTensor(),
            transforms.Normalize([0.485, 0.456, 0.406], [0.229, 0.224, 0.225])
        ]),
    }

    data_dir = 'data/hymenoptera_data'
    image_datasets = {x: datasets.ImageFolder(os.path.join(data_dir, x),
                                              data_transforms[x])
                      for x in ['train', 'val']}
    dataloaders = {x: torch.utils.data.DataLoader(image_datasets[x], batch_size=4,
                                                  shuffle=True, num_workers=4)
                   for x in ['train', 'val']}
    dataset_sizes = {x: len(image_datasets[x]) for x in ['train', 'val']}
    class_names = image_datasets['train'].classes

    device = torch.device("cuda:0" if torch.cuda.is_available() else "cpu")
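ImageFolder infers the class list from the subdirectory names under each split, sorted alphabetically, so a layout like data/hymenoptera_data/train/{ants,bees} yields classes ['ants', 'bees'] with indices 0 and 1. A stdlib-only sketch of that mapping, using a temporary directory as a hypothetical stand-in for the real dataset:

```python
import os
import tempfile

# Build a tiny stand-in for the hymenoptera layout (hypothetical paths).
root = tempfile.mkdtemp()
for split in ['train', 'val']:
    for cls in ['ants', 'bees']:
        os.makedirs(os.path.join(root, split, cls))

# ImageFolder assigns indices by sorting the class-folder names.
classes = sorted(d for d in os.listdir(os.path.join(root, 'train'))
                 if os.path.isdir(os.path.join(root, 'train', d)))
class_to_idx = {cls: i for i, cls in enumerate(classes)}
print(class_to_idx)  # {'ants': 0, 'bees': 1}
```

This is why image_datasets['train'].classes above returns the folder names, and why train and val must use the same subdirectory names so that label indices agree across splits.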

Visualize a few images

Let's visualize a few training images so as to understand the data augmentations.

    def imshow(inp, title=None):
        """Imshow for Tensor."""
        inp = inp.numpy().transpose((1, 2, 0))
        mean = np.array([0.485, 0.456, 0.406])
        std = np.array([0.229, 0.224, 0.225])
        inp = std * inp + mean
        inp = np.clip(inp, 0, 1)
        plt.imshow(inp)
        if title is not None:
            plt.title(title)
        plt.pause(0.001)  # pause a bit so that plots are updated


    # Get a batch of training data
    inputs, classes = next(iter(dataloaders['train']))

    # Make a grid from batch
    out = torchvision.utils.make_grid(inputs)

    imshow(out, title=[class_names[x] for x in classes])

(Image: sphx_glr_transfer_learning_tutorial_001.png)
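imshow undoes the Normalize transform before plotting: Normalize computes x' = (x - mean) / std channel-wise, so the inverse is x = std * x' + mean, exactly the line in the function above. A quick numpy check of that round trip on a fake image:

```python
import numpy as np

mean = np.array([0.485, 0.456, 0.406])
std = np.array([0.229, 0.224, 0.225])

img = np.random.rand(224, 224, 3)      # fake HWC image with values in [0, 1]
normalized = (img - mean) / std        # what transforms.Normalize does
restored = std * normalized + mean     # what imshow does before plotting

assert np.allclose(restored, img)      # the round trip recovers the image
```

The np.clip(inp, 0, 1) call in imshow only guards against tiny floating-point excursions outside the displayable range.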

Training the model

Now, let's write a general function to train a model. Here, we will illustrate:

  • Scheduling the learning rate
  • Saving the best model

In the following, the parameter scheduler is an LR scheduler object from torch.optim.lr_scheduler.

    def train_model(model, criterion, optimizer, scheduler, num_epochs=25):
        since = time.time()

        best_model_wts = copy.deepcopy(model.state_dict())
        best_acc = 0.0

        for epoch in range(num_epochs):
            print('Epoch {}/{}'.format(epoch, num_epochs - 1))
            print('-' * 10)

            # Each epoch has a training and validation phase
            for phase in ['train', 'val']:
                if phase == 'train':
                    model.train()  # Set model to training mode
                else:
                    model.eval()   # Set model to evaluate mode

                running_loss = 0.0
                running_corrects = 0

                # Iterate over data.
                for inputs, labels in dataloaders[phase]:
                    inputs = inputs.to(device)
                    labels = labels.to(device)

                    # zero the parameter gradients
                    optimizer.zero_grad()

                    # forward
                    # track history if only in train
                    with torch.set_grad_enabled(phase == 'train'):
                        outputs = model(inputs)
                        _, preds = torch.max(outputs, 1)
                        loss = criterion(outputs, labels)

                        # backward + optimize only if in training phase
                        if phase == 'train':
                            loss.backward()
                            optimizer.step()

                    # statistics
                    running_loss += loss.item() * inputs.size(0)
                    running_corrects += torch.sum(preds == labels.data)
                if phase == 'train':
                    scheduler.step()

                epoch_loss = running_loss / dataset_sizes[phase]
                epoch_acc = running_corrects.double() / dataset_sizes[phase]

                print('{} Loss: {:.4f} Acc: {:.4f}'.format(
                    phase, epoch_loss, epoch_acc))

                # deep copy the model
                if phase == 'val' and epoch_acc > best_acc:
                    best_acc = epoch_acc
                    best_model_wts = copy.deepcopy(model.state_dict())

            print()

        time_elapsed = time.time() - since
        print('Training complete in {:.0f}m {:.0f}s'.format(
            time_elapsed // 60, time_elapsed % 60))
        print('Best val Acc: {:4f}'.format(best_acc))

        # load best model weights
        model.load_state_dict(best_model_wts)
        return model
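Note that the function accumulates loss.item() * inputs.size(0) rather than the raw batch loss: CrossEntropyLoss averages over the batch, so re-weighting by batch size makes epoch_loss a true per-sample mean even when the last batch is smaller than the others. A plain-Python illustration with hypothetical per-batch losses:

```python
# Hypothetical per-batch mean losses and batch sizes (ragged last batch).
batch_losses = [0.9, 0.6, 0.3]
batch_sizes = [4, 4, 2]

# What train_model does: weight each batch loss by its batch size.
running_loss = sum(loss * n for loss, n in zip(batch_losses, batch_sizes))
epoch_loss = running_loss / sum(batch_sizes)        # true per-sample mean

# A naive mean of batch losses would over-weight the small final batch.
naive_mean = sum(batch_losses) / len(batch_losses)

print(epoch_loss, naive_mean)  # roughly 0.66 vs 0.60
```

The same reasoning applies to running_corrects, which counts correct samples and is divided by dataset_sizes[phase] at the end of the phase.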

Visualizing the model predictions

A generic function to display predictions for a few images

    def visualize_model(model, num_images=6):
        was_training = model.training
        model.eval()
        images_so_far = 0
        fig = plt.figure()

        with torch.no_grad():
            for i, (inputs, labels) in enumerate(dataloaders['val']):
                inputs = inputs.to(device)
                labels = labels.to(device)

                outputs = model(inputs)
                _, preds = torch.max(outputs, 1)

                for j in range(inputs.size()[0]):
                    images_so_far += 1
                    ax = plt.subplot(num_images//2, 2, images_so_far)
                    ax.axis('off')
                    ax.set_title('predicted: {}'.format(class_names[preds[j]]))
                    imshow(inputs.cpu().data[j])

                    if images_so_far == num_images:
                        model.train(mode=was_training)
                        return
            model.train(mode=was_training)

Finetuning the convnet

Load a pretrained model and reset the final fully connected layer.

    model_ft = models.resnet18(pretrained=True)
    num_ftrs = model_ft.fc.in_features
    # Here the size of each output sample is set to 2.
    # Alternatively, it can be generalized to nn.Linear(num_ftrs, len(class_names)).
    model_ft.fc = nn.Linear(num_ftrs, 2)

    model_ft = model_ft.to(device)

    criterion = nn.CrossEntropyLoss()

    # Observe that all parameters are being optimized
    optimizer_ft = optim.SGD(model_ft.parameters(), lr=0.001, momentum=0.9)

    # Decay LR by a factor of 0.1 every 7 epochs
    exp_lr_scheduler = lr_scheduler.StepLR(optimizer_ft, step_size=7, gamma=0.1)
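Since scheduler.step() is called once per training epoch in train_model, StepLR with step_size=7 and gamma=0.1 multiplies the learning rate by 0.1 after every 7 epochs, i.e. lr(epoch) = 0.001 * 0.1 ** (epoch // 7). A quick check of that schedule in plain Python:

```python
base_lr, gamma, step_size = 0.001, 0.1, 7

def stepped_lr(epoch):
    # Learning rate StepLR would hand the optimizer at the given epoch.
    return base_lr * gamma ** (epoch // step_size)

for epoch in [0, 6, 7, 13, 14, 24]:
    print(epoch, stepped_lr(epoch))
```

Over the 25 epochs used below, the rate therefore steps through 1e-3, 1e-4, 1e-5, and 1e-6.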

Train and evaluate

It should take around 15-25 minutes on CPU. On GPU though, it takes less than a minute.

    model_ft = train_model(model_ft, criterion, optimizer_ft, exp_lr_scheduler,
                           num_epochs=25)

Out:

    Epoch 0/24
    ----------
    train Loss: 0.5582 Acc: 0.6967
    val Loss: 0.1987 Acc: 0.9216

    Epoch 1/24
    ----------
    train Loss: 0.4663 Acc: 0.8238
    val Loss: 0.2519 Acc: 0.8889

    Epoch 2/24
    ----------
    train Loss: 0.5978 Acc: 0.7623
    val Loss: 1.2933 Acc: 0.6601

    Epoch 3/24
    ----------
    train Loss: 0.4471 Acc: 0.8320
    val Loss: 0.2576 Acc: 0.8954

    Epoch 4/24
    ----------
    train Loss: 0.3654 Acc: 0.8115
    val Loss: 0.2977 Acc: 0.9150

    Epoch 5/24
    ----------
    train Loss: 0.4404 Acc: 0.8197
    val Loss: 0.3330 Acc: 0.8627

    Epoch 6/24
    ----------
    train Loss: 0.6416 Acc: 0.7623
    val Loss: 0.3174 Acc: 0.8693

    Epoch 7/24
    ----------
    train Loss: 0.4058 Acc: 0.8361
    val Loss: 0.2551 Acc: 0.9085

    Epoch 8/24
    ----------
    train Loss: 0.2294 Acc: 0.9098
    val Loss: 0.2603 Acc: 0.9085

    Epoch 9/24
    ----------
    train Loss: 0.2805 Acc: 0.8730
    val Loss: 0.2765 Acc: 0.8954

    Epoch 10/24
    ----------
    train Loss: 0.3139 Acc: 0.8525
    val Loss: 0.2639 Acc: 0.9020

    Epoch 11/24
    ----------
    train Loss: 0.3198 Acc: 0.8648
    val Loss: 0.2458 Acc: 0.9020

    Epoch 12/24
    ----------
    train Loss: 0.2947 Acc: 0.8811
    val Loss: 0.2835 Acc: 0.8889

    Epoch 13/24
    ----------
    train Loss: 0.3097 Acc: 0.8730
    val Loss: 0.2542 Acc: 0.9085

    Epoch 14/24
    ----------
    train Loss: 0.1849 Acc: 0.9303
    val Loss: 0.2710 Acc: 0.9085

    Epoch 15/24
    ----------
    train Loss: 0.2764 Acc: 0.8934
    val Loss: 0.2522 Acc: 0.9085

    Epoch 16/24
    ----------
    train Loss: 0.2214 Acc: 0.9098
    val Loss: 0.2620 Acc: 0.9085

    Epoch 17/24
    ----------
    train Loss: 0.2949 Acc: 0.8525
    val Loss: 0.2600 Acc: 0.9085

    Epoch 18/24
    ----------
    train Loss: 0.2237 Acc: 0.9139
    val Loss: 0.2666 Acc: 0.9020

    Epoch 19/24
    ----------
    train Loss: 0.2456 Acc: 0.8852
    val Loss: 0.2521 Acc: 0.9150

    Epoch 20/24
    ----------
    train Loss: 0.2351 Acc: 0.8852
    val Loss: 0.2781 Acc: 0.9085

    Epoch 21/24
    ----------
    train Loss: 0.2654 Acc: 0.8730
    val Loss: 0.2560 Acc: 0.9085

    Epoch 22/24
    ----------
    train Loss: 0.1955 Acc: 0.9262
    val Loss: 0.2605 Acc: 0.9020

    Epoch 23/24
    ----------
    train Loss: 0.2285 Acc: 0.8893
    val Loss: 0.2650 Acc: 0.9085

    Epoch 24/24
    ----------
    train Loss: 0.2360 Acc: 0.9221
    val Loss: 0.2690 Acc: 0.8954

    Training complete in 1m 7s
    Best val Acc: 0.921569

    visualize_model(model_ft)

(Image: sphx_glr_transfer_learning_tutorial_002.png)

ConvNet as fixed feature extractor

Here, we need to freeze all of the network except the final layer. We need to set requires_grad == False to freeze the parameters so that the gradients are not computed in backward().

You can read more about this in the documentation.
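The effect of requires_grad = False can be seen on a toy model: frozen parameters receive no gradient in backward(), while a newly constructed layer is trainable by default. A minimal sketch, using a two-linear-layer stand-in rather than the actual ResNet:

```python
import torch
import torch.nn as nn

backbone = nn.Linear(8, 4)            # stand-in for the pretrained layers
for param in backbone.parameters():
    param.requires_grad = False       # freeze: no gradients in backward()

head = nn.Linear(4, 2)                # newly constructed -> requires_grad=True

out = head(backbone(torch.randn(3, 8)))
out.sum().backward()

# Frozen parameters were skipped by autograd; only the head got gradients.
assert all(p.grad is None for p in backbone.parameters())
assert all(p.grad is not None for p in head.parameters())
```

This is also why the optimizer below is given only model_conv.fc.parameters(): the remaining parameters never accumulate gradients, so there is nothing for SGD to update.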

    model_conv = torchvision.models.resnet18(pretrained=True)
    for param in model_conv.parameters():
        param.requires_grad = False

    # Parameters of newly constructed modules have requires_grad=True by default
    num_ftrs = model_conv.fc.in_features
    model_conv.fc = nn.Linear(num_ftrs, 2)

    model_conv = model_conv.to(device)

    criterion = nn.CrossEntropyLoss()

    # Observe that only parameters of final layer are being optimized as
    # opposed to before.
    optimizer_conv = optim.SGD(model_conv.fc.parameters(), lr=0.001, momentum=0.9)

    # Decay LR by a factor of 0.1 every 7 epochs
    exp_lr_scheduler = lr_scheduler.StepLR(optimizer_conv, step_size=7, gamma=0.1)

Train and evaluate

On CPU this will take about half the time of the previous scenario. This is expected, since gradients don't need to be computed for most of the network. The forward pass, however, does still need to be computed.

    model_conv = train_model(model_conv, criterion, optimizer_conv,
                             exp_lr_scheduler, num_epochs=25)

Out:

    Epoch 0/24
    ----------
    train Loss: 0.5633 Acc: 0.7008
    val Loss: 0.2159 Acc: 0.9412

    Epoch 1/24
    ----------
    train Loss: 0.4394 Acc: 0.7623
    val Loss: 0.2000 Acc: 0.9150

    Epoch 2/24
    ----------
    train Loss: 0.5182 Acc: 0.7623
    val Loss: 0.1897 Acc: 0.9346

    Epoch 3/24
    ----------
    train Loss: 0.3993 Acc: 0.8074
    val Loss: 0.3029 Acc: 0.8824

    Epoch 4/24
    ----------
    train Loss: 0.4163 Acc: 0.8607
    val Loss: 0.2190 Acc: 0.9412

    Epoch 5/24
    ----------
    train Loss: 0.4741 Acc: 0.7951
    val Loss: 0.1903 Acc: 0.9477

    Epoch 6/24
    ----------
    train Loss: 0.4266 Acc: 0.8115
    val Loss: 0.2178 Acc: 0.9281

    Epoch 7/24
    ----------
    train Loss: 0.3623 Acc: 0.8238
    val Loss: 0.2080 Acc: 0.9412

    Epoch 8/24
    ----------
    train Loss: 0.3979 Acc: 0.8279
    val Loss: 0.1796 Acc: 0.9412

    Epoch 9/24
    ----------
    train Loss: 0.3534 Acc: 0.8648
    val Loss: 0.2043 Acc: 0.9412

    Epoch 10/24
    ----------
    train Loss: 0.3849 Acc: 0.8115
    val Loss: 0.2012 Acc: 0.9346

    Epoch 11/24
    ----------
    train Loss: 0.3814 Acc: 0.8361
    val Loss: 0.2088 Acc: 0.9412

    Epoch 12/24
    ----------
    train Loss: 0.3443 Acc: 0.8648
    val Loss: 0.1823 Acc: 0.9477

    Epoch 13/24
    ----------
    train Loss: 0.2931 Acc: 0.8525
    val Loss: 0.1853 Acc: 0.9477

    Epoch 14/24
    ----------
    train Loss: 0.2749 Acc: 0.8811
    val Loss: 0.2068 Acc: 0.9412

    Epoch 15/24
    ----------
    train Loss: 0.3387 Acc: 0.8566
    val Loss: 0.2080 Acc: 0.9477

    Epoch 16/24
    ----------
    train Loss: 0.2992 Acc: 0.8648
    val Loss: 0.2096 Acc: 0.9346

    Epoch 17/24
    ----------
    train Loss: 0.3396 Acc: 0.8648
    val Loss: 0.1870 Acc: 0.9412

    Epoch 18/24
    ----------
    train Loss: 0.3956 Acc: 0.8320
    val Loss: 0.1858 Acc: 0.9412

    Epoch 19/24
    ----------
    train Loss: 0.3379 Acc: 0.8402
    val Loss: 0.1729 Acc: 0.9542

    Epoch 20/24
    ----------
    train Loss: 0.2555 Acc: 0.8811
    val Loss: 0.2186 Acc: 0.9281

    Epoch 21/24
    ----------
    train Loss: 0.3764 Acc: 0.8484
    val Loss: 0.1817 Acc: 0.9477

    Epoch 22/24
    ----------
    train Loss: 0.2747 Acc: 0.8975
    val Loss: 0.2042 Acc: 0.9412

    Epoch 23/24
    ----------
    train Loss: 0.3072 Acc: 0.8689
    val Loss: 0.1924 Acc: 0.9477

    Epoch 24/24
    ----------
    train Loss: 0.3479 Acc: 0.8402
    val Loss: 0.1835 Acc: 0.9477

    Training complete in 0m 34s
    Best val Acc: 0.954248

    visualize_model(model_conv)

    plt.ioff()
    plt.show()

(Image: sphx_glr_transfer_learning_tutorial_003.png)

Further Learning

If you would like to learn more about the applications of transfer learning, check out our Quantized Transfer Learning for Computer Vision Tutorial.

Total running time of the script: (1 minute 53.551 seconds)

