(beta) Static Quantization with Eager Mode in PyTorch

Source: https://pytorch.org/tutorials/advanced/static_quantization_tutorial.html

Author: Raghuraman Krishnamoorthi

Edited by: Seth Weidman

This tutorial shows how to do post-training static quantization, and illustrates two more advanced techniques, per-channel quantization and quantization-aware training, that can further improve the model's accuracy. Note that quantization is currently only supported on CPUs, so we will not be using GPUs/CUDA in this tutorial.

By the end of this tutorial, you will see how quantization in PyTorch can result in a significant decrease in model size while increasing speed. Furthermore, you'll see how to easily apply some of the advanced quantization techniques shown here so that your quantized models take much less of an accuracy hit than they otherwise would.

Warning: we use a lot of boilerplate code from other PyTorch repos to, for example, define the MobileNetV2 model architecture, define data loaders, and so on. We of course encourage you to read it; but if you want to get straight to the quantization features, feel free to skip to the "4. Post-training static quantization" section.

We'll start by doing the necessary imports:

```python
import numpy as np
import torch
import torch.nn as nn
import torchvision
from torch.utils.data import DataLoader
from torchvision import datasets
import torchvision.transforms as transforms
import os
import time
import sys
import torch.quantization

# Setup warnings
import warnings
warnings.filterwarnings(
    action='ignore',
    category=DeprecationWarning,
    module=r'.*'
)
warnings.filterwarnings(
    action='default',
    module=r'torch.quantization'
)

# Specify random seed for repeatable results
torch.manual_seed(191009)
```

1. Model architecture

We first define the MobileNetV2 model architecture, with several notable modifications to enable quantization:

  • Replacing addition with nn.quantized.FloatFunctional
  • Inserting QuantStub and DeQuantStub at the beginning and end of the network
  • Replacing ReLU6 with ReLU

Note: this code is taken from here.
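Before the full model definition, here is a minimal toy sketch (not part of the tutorial's model code) of the first two modifications: routing a skip-connection add through nn.quantized.FloatFunctional and bracketing the computation with QuantStub/DeQuantStub. In float mode these behave like plain `+` and identity, but they mark the points the quantization machinery needs later:

```python
import torch
import torch.nn as nn
from torch.quantization import QuantStub, DeQuantStub

class ToySkip(nn.Module):
    def __init__(self):
        super().__init__()
        self.quant = QuantStub()      # marks where tensors enter the quantized region
        self.dequant = DeQuantStub()  # marks where tensors leave it
        # FloatFunctional behaves like `+` in float mode, but records the
        # statistics needed to quantize the add after convert().
        self.skip_add = nn.quantized.FloatFunctional()

    def forward(self, x):
        x = self.quant(x)
        x = self.skip_add.add(x, x)   # instead of `x + x`
        return self.dequant(x)

out = ToySkip()(torch.ones(3))
```

Until the model is actually prepared and converted, this module computes exactly what the unmodified float module would.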

```python
from torch.quantization import QuantStub, DeQuantStub

def _make_divisible(v, divisor, min_value=None):
    """
    This function is taken from the original tf repo.
    It ensures that all layers have a channel number that is divisible by 8
    It can be seen here:
    https://github.com/tensorflow/models/blob/master/research/slim/nets/mobilenet/mobilenet.py
    :param v:
    :param divisor:
    :param min_value:
    :return:
    """
    if min_value is None:
        min_value = divisor
    new_v = max(min_value, int(v + divisor / 2) // divisor * divisor)
    # Make sure that round down does not go down by more than 10%.
    if new_v < 0.9 * v:
        new_v += divisor
    return new_v

class ConvBNReLU(nn.Sequential):
    def __init__(self, in_planes, out_planes, kernel_size=3, stride=1, groups=1):
        padding = (kernel_size - 1) // 2
        super(ConvBNReLU, self).__init__(
            nn.Conv2d(in_planes, out_planes, kernel_size, stride, padding, groups=groups, bias=False),
            nn.BatchNorm2d(out_planes, momentum=0.1),
            # Replace with ReLU
            nn.ReLU(inplace=False)
        )

class InvertedResidual(nn.Module):
    def __init__(self, inp, oup, stride, expand_ratio):
        super(InvertedResidual, self).__init__()
        self.stride = stride
        assert stride in [1, 2]

        hidden_dim = int(round(inp * expand_ratio))
        self.use_res_connect = self.stride == 1 and inp == oup

        layers = []
        if expand_ratio != 1:
            # pw
            layers.append(ConvBNReLU(inp, hidden_dim, kernel_size=1))
        layers.extend([
            # dw
            ConvBNReLU(hidden_dim, hidden_dim, stride=stride, groups=hidden_dim),
            # pw-linear
            nn.Conv2d(hidden_dim, oup, 1, 1, 0, bias=False),
            nn.BatchNorm2d(oup, momentum=0.1),
        ])
        self.conv = nn.Sequential(*layers)
        # Replace torch.add with floatfunctional
        self.skip_add = nn.quantized.FloatFunctional()

    def forward(self, x):
        if self.use_res_connect:
            return self.skip_add.add(x, self.conv(x))
        else:
            return self.conv(x)

class MobileNetV2(nn.Module):
    def __init__(self, num_classes=1000, width_mult=1.0, inverted_residual_setting=None, round_nearest=8):
        """
        MobileNet V2 main class
        Args:
            num_classes (int): Number of classes
            width_mult (float): Width multiplier - adjusts number of channels in each layer by this amount
            inverted_residual_setting: Network structure
            round_nearest (int): Round the number of channels in each layer to be a multiple of this number
            Set to 1 to turn off rounding
        """
        super(MobileNetV2, self).__init__()
        block = InvertedResidual
        input_channel = 32
        last_channel = 1280

        if inverted_residual_setting is None:
            inverted_residual_setting = [
                # t, c, n, s
                [1, 16, 1, 1],
                [6, 24, 2, 2],
                [6, 32, 3, 2],
                [6, 64, 4, 2],
                [6, 96, 3, 1],
                [6, 160, 3, 2],
                [6, 320, 1, 1],
            ]

        # only check the first element, assuming user knows t,c,n,s are required
        if len(inverted_residual_setting) == 0 or len(inverted_residual_setting[0]) != 4:
            raise ValueError("inverted_residual_setting should be non-empty "
                             "or a 4-element list, got {}".format(inverted_residual_setting))

        # building first layer
        input_channel = _make_divisible(input_channel * width_mult, round_nearest)
        self.last_channel = _make_divisible(last_channel * max(1.0, width_mult), round_nearest)
        features = [ConvBNReLU(3, input_channel, stride=2)]
        # building inverted residual blocks
        for t, c, n, s in inverted_residual_setting:
            output_channel = _make_divisible(c * width_mult, round_nearest)
            for i in range(n):
                stride = s if i == 0 else 1
                features.append(block(input_channel, output_channel, stride, expand_ratio=t))
                input_channel = output_channel
        # building last several layers
        features.append(ConvBNReLU(input_channel, self.last_channel, kernel_size=1))
        # make it nn.Sequential
        self.features = nn.Sequential(*features)
        self.quant = QuantStub()
        self.dequant = DeQuantStub()
        # building classifier
        self.classifier = nn.Sequential(
            nn.Dropout(0.2),
            nn.Linear(self.last_channel, num_classes),
        )

        # weight initialization
        for m in self.modules():
            if isinstance(m, nn.Conv2d):
                nn.init.kaiming_normal_(m.weight, mode='fan_out')
                if m.bias is not None:
                    nn.init.zeros_(m.bias)
            elif isinstance(m, nn.BatchNorm2d):
                nn.init.ones_(m.weight)
                nn.init.zeros_(m.bias)
            elif isinstance(m, nn.Linear):
                nn.init.normal_(m.weight, 0, 0.01)
                nn.init.zeros_(m.bias)

    def forward(self, x):
        x = self.quant(x)
        x = self.features(x)
        x = x.mean([2, 3])
        x = self.classifier(x)
        x = self.dequant(x)
        return x

    # Fuse Conv+BN and Conv+BN+Relu modules prior to quantization
    # This operation does not change the numerics
    def fuse_model(self):
        for m in self.modules():
            if type(m) == ConvBNReLU:
                torch.quantization.fuse_modules(m, ['0', '1', '2'], inplace=True)
            if type(m) == InvertedResidual:
                for idx in range(len(m.conv)):
                    if type(m.conv[idx]) == nn.Conv2d:
                        torch.quantization.fuse_modules(m.conv, [str(idx), str(idx + 1)], inplace=True)
```

2. Helper functions

Next, we define several helper functions to assist with model evaluation. These mostly come from here.

```python
class AverageMeter(object):
    """Computes and stores the average and current value"""
    def __init__(self, name, fmt=':f'):
        self.name = name
        self.fmt = fmt
        self.reset()

    def reset(self):
        self.val = 0
        self.avg = 0
        self.sum = 0
        self.count = 0

    def update(self, val, n=1):
        self.val = val
        self.sum += val * n
        self.count += n
        self.avg = self.sum / self.count

    def __str__(self):
        fmtstr = '{name} {val' + self.fmt + '} ({avg' + self.fmt + '})'
        return fmtstr.format(**self.__dict__)

def accuracy(output, target, topk=(1,)):
    """Computes the accuracy over the k top predictions for the specified values of k"""
    with torch.no_grad():
        maxk = max(topk)
        batch_size = target.size(0)

        _, pred = output.topk(maxk, 1, True, True)
        pred = pred.t()
        correct = pred.eq(target.view(1, -1).expand_as(pred))

        res = []
        for k in topk:
            correct_k = correct[:k].reshape(-1).float().sum(0, keepdim=True)
            res.append(correct_k.mul_(100.0 / batch_size))
        return res

def evaluate(model, criterion, data_loader, neval_batches):
    model.eval()
    top1 = AverageMeter('Acc@1', ':6.2f')
    top5 = AverageMeter('Acc@5', ':6.2f')
    cnt = 0
    with torch.no_grad():
        for image, target in data_loader:
            output = model(image)
            loss = criterion(output, target)
            cnt += 1
            acc1, acc5 = accuracy(output, target, topk=(1, 5))
            print('.', end = '')
            top1.update(acc1[0], image.size(0))
            top5.update(acc5[0], image.size(0))
            if cnt >= neval_batches:
                return top1, top5
    return top1, top5

def load_model(model_file):
    model = MobileNetV2()
    state_dict = torch.load(model_file)
    model.load_state_dict(state_dict)
    model.to('cpu')
    return model

def print_size_of_model(model):
    torch.save(model.state_dict(), "temp.p")
    print('Size (MB):', os.path.getsize("temp.p")/1e6)
    os.remove('temp.p')
```

3. Define dataset and data loaders

As our last major setup step, we define the data loaders for our training and testing sets.

ImageNet data

The specific dataset we've created for this tutorial contains just 1000 images from the ImageNet data, one from each class (at just over 250 MB, this dataset is small enough to download relatively easily). The URL for this custom dataset is:

```
https://s3.amazonaws.com/pytorch-tutorial-assets/imagenet_1k.zip
```

To download this data locally using Python, you could use:

```python
import requests
import os

url = 'https://s3.amazonaws.com/pytorch-tutorial-assets/imagenet_1k.zip'
# expanduser so that '~' resolves to the home directory when opening the file
filename = os.path.expanduser('~/Downloads/imagenet_1k_data.zip')

r = requests.get(url)

with open(filename, 'wb') as f:
    f.write(r.content)
```

For this tutorial to run, we download this data and move it to the right place using these lines from the Makefile.

To run the code in this tutorial using the entire ImageNet dataset, on the other hand, you could download the data using torchvision. For example, to download the training set and apply some standard transformations to it, you could use:

```python
import torchvision
import torchvision.transforms as transforms

imagenet_dataset = torchvision.datasets.ImageNet(
    '~/.data/imagenet',
    split='train',
    download=True,
    transform=transforms.Compose([
        transforms.RandomResizedCrop(224),
        transforms.RandomHorizontalFlip(),
        transforms.ToTensor(),
        transforms.Normalize(mean=[0.485, 0.456, 0.406],
                             std=[0.229, 0.224, 0.225]),
    ]))
```

With the data downloaded, we show functions below that define the data loaders we'll use to read in this data. These functions mostly come from here.

```python
def prepare_data_loaders(data_path):
    traindir = os.path.join(data_path, 'train')
    valdir = os.path.join(data_path, 'val')
    normalize = transforms.Normalize(mean=[0.485, 0.456, 0.406],
                                     std=[0.229, 0.224, 0.225])

    dataset = torchvision.datasets.ImageFolder(
        traindir,
        transforms.Compose([
            transforms.RandomResizedCrop(224),
            transforms.RandomHorizontalFlip(),
            transforms.ToTensor(),
            normalize,
        ]))

    dataset_test = torchvision.datasets.ImageFolder(
        valdir,
        transforms.Compose([
            transforms.Resize(256),
            transforms.CenterCrop(224),
            transforms.ToTensor(),
            normalize,
        ]))

    train_sampler = torch.utils.data.RandomSampler(dataset)
    test_sampler = torch.utils.data.SequentialSampler(dataset_test)

    data_loader = torch.utils.data.DataLoader(
        dataset, batch_size=train_batch_size,
        sampler=train_sampler)

    data_loader_test = torch.utils.data.DataLoader(
        dataset_test, batch_size=eval_batch_size,
        sampler=test_sampler)

    return data_loader, data_loader_test
```

Next, we'll load the pretrained MobileNetV2 model. We provide the URL to download the model here.

```python
data_path = 'data/imagenet_1k'
saved_model_dir = 'data/'
float_model_file = 'mobilenet_pretrained_float.pth'
scripted_float_model_file = 'mobilenet_quantization_scripted.pth'
scripted_quantized_model_file = 'mobilenet_quantization_scripted_quantized.pth'

train_batch_size = 30
eval_batch_size = 30

data_loader, data_loader_test = prepare_data_loaders(data_path)
criterion = nn.CrossEntropyLoss()
float_model = load_model(saved_model_dir + float_model_file).to('cpu')
```

Next, we'll "fuse modules"; this can both make the model faster by saving on memory access while also improving numerical accuracy. While this can be used with any model, it is especially common with quantized models.

```python
print('\n Inverted Residual Block: Before fusion \n\n', float_model.features[1].conv)
float_model.eval()

# Fuses modules
float_model.fuse_model()

# Note fusion of Conv+BN+Relu and Conv+Relu
print('\n Inverted Residual Block: After fusion\n\n',float_model.features[1].conv)
```

Out:

```
Inverted Residual Block: Before fusion

 Sequential(
  (0): ConvBNReLU(
    (0): Conv2d(32, 32, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1), groups=32, bias=False)
    (1): BatchNorm2d(32, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
    (2): ReLU()
  )
  (1): Conv2d(32, 16, kernel_size=(1, 1), stride=(1, 1), bias=False)
  (2): BatchNorm2d(16, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
)

 Inverted Residual Block: After fusion

 Sequential(
  (0): ConvBNReLU(
    (0): ConvReLU2d(
      (0): Conv2d(32, 32, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1), groups=32)
      (1): ReLU()
    )
    (1): Identity()
    (2): Identity()
  )
  (1): Conv2d(32, 16, kernel_size=(1, 1), stride=(1, 1))
  (2): Identity()
)
```

Finally, to get a "baseline" accuracy, let's see the accuracy of our un-quantized model with fused modules:

```python
num_eval_batches = 10

print("Size of baseline model")
print_size_of_model(float_model)

top1, top5 = evaluate(float_model, criterion, data_loader_test, neval_batches=num_eval_batches)
print('Evaluation accuracy on %d images, %2.2f'%(num_eval_batches * eval_batch_size, top1.avg))
torch.jit.save(torch.jit.script(float_model), saved_model_dir + scripted_float_model_file)
```

Out:

```
Size of baseline model
Size (MB): 13.999657
..........Evaluation accuracy on 300 images, 77.67
```

We see close to 78% accuracy on the 300 images, a solid baseline for ImageNet, especially considering our model is just 14.0 MB in size.

This will be our baseline to compare to. Next, let's try different quantization methods.

4. Post-training static quantization

Post-training static quantization involves not just converting the weights from float to int, as in dynamic quantization, but also performing the additional step of first feeding batches of data through the network and computing the resulting distributions of the different activations (specifically, this is done by inserting observer modules at different points that record this data). These distributions are then used to determine how the different activations should be quantized at inference time (a simple technique would be to divide the entire range of activations into 256 levels, but more sophisticated methods are supported as well). Importantly, this additional step allows us to pass quantized values between operations instead of converting these values to floats, and then back to ints, between every operation, resulting in a significant speed-up.
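To make the 256-level idea concrete, here is a small numeric sketch (with a made-up activation range, not values from the tutorial) of affine quantization: calibration yields a min/max range, from which a scale and zero point map float activations onto the 0-255 grid and back:

```python
import torch

# Hypothetical activation range observed during calibration.
min_val, max_val = -2.0, 6.0

# Affine (asymmetric) quantization onto 256 uint8 levels.
scale = (max_val - min_val) / 255
zero_point = int(round(-min_val / scale))

x = torch.tensor([-2.0, 0.0, 3.14, 6.0])
# Quantize: scale, shift, round, and clamp onto [0, 255].
q = torch.clamp(torch.round(x / scale) + zero_point, 0, 255).to(torch.uint8)
# Dequantize: map the integer grid back to floats.
x_hat = (q.float() - zero_point) * scale
```

The round trip introduces at most about half a scale-step of error per value, which is the quantization error the observers are trying to minimize when they pick the range.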

```python
num_calibration_batches = 10

myModel = load_model(saved_model_dir + float_model_file).to('cpu')
myModel.eval()

# Fuse Conv, bn and relu
myModel.fuse_model()

# Specify quantization configuration
# Start with simple min/max range estimation and per-tensor quantization of weights
myModel.qconfig = torch.quantization.default_qconfig
print(myModel.qconfig)
torch.quantization.prepare(myModel, inplace=True)

# Calibrate first
print('Post Training Quantization Prepare: Inserting Observers')
print('\n Inverted Residual Block:After observer insertion \n\n', myModel.features[1].conv)

# Calibrate with the training set
evaluate(myModel, criterion, data_loader, neval_batches=num_calibration_batches)
print('Post Training Quantization: Calibration done')

# Convert to quantized model
torch.quantization.convert(myModel, inplace=True)
print('Post Training Quantization: Convert done')
print('\n Inverted Residual Block: After fusion and quantization, note fused modules: \n\n',myModel.features[1].conv)

print("Size of model after quantization")
print_size_of_model(myModel)

top1, top5 = evaluate(myModel, criterion, data_loader_test, neval_batches=num_eval_batches)
print('Evaluation accuracy on %d images, %2.2f'%(num_eval_batches * eval_batch_size, top1.avg))
```

Out:

```
QConfig(activation=functools.partial(<class 'torch.quantization.observer.MinMaxObserver'>, reduce_range=True), weight=functools.partial(<class 'torch.quantization.observer.MinMaxObserver'>, dtype=torch.qint8, qscheme=torch.per_tensor_symmetric))
Post Training Quantization Prepare: Inserting Observers

 Inverted Residual Block:After observer insertion

 Sequential(
  (0): ConvBNReLU(
    (0): ConvReLU2d(
      (0): Conv2d(
        32, 32, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1), groups=32
        (activation_post_process): MinMaxObserver(min_val=inf, max_val=-inf)
      )
      (1): ReLU(
        (activation_post_process): MinMaxObserver(min_val=inf, max_val=-inf)
      )
    )
    (1): Identity()
    (2): Identity()
  )
  (1): Conv2d(
    32, 16, kernel_size=(1, 1), stride=(1, 1)
    (activation_post_process): MinMaxObserver(min_val=inf, max_val=-inf)
  )
  (2): Identity()
)
..........Post Training Quantization: Calibration done
Post Training Quantization: Convert done

 Inverted Residual Block: After fusion and quantization, note fused modules:

 Sequential(
  (0): ConvBNReLU(
    (0): QuantizedConvReLU2d(32, 32, kernel_size=(3, 3), stride=(1, 1), scale=0.1516050398349762, zero_point=0, padding=(1, 1), groups=32)
    (1): Identity()
    (2): Identity()
  )
  (1): QuantizedConv2d(32, 16, kernel_size=(1, 1), stride=(1, 1), scale=0.17719413340091705, zero_point=63)
  (2): Identity()
)
Size of model after quantization
Size (MB): 3.631847
..........Evaluation accuracy on 300 images, 66.67
```

For this quantized model, we see an accuracy of just under 67% on these same 300 images. Nevertheless, we did reduce the size of our model down to just under 3.6 MB, almost a 4x decrease.

In addition, we can significantly improve on the accuracy simply by using a different quantization configuration. We repeat the same exercise with the recommended configuration for quantizing for x86 architectures. This configuration does the following:

  • Quantizes weights on a per-channel basis
  • Uses a histogram observer that collects a histogram of activations and then picks quantization parameters in an optimal manner.
```python
per_channel_quantized_model = load_model(saved_model_dir + float_model_file)
per_channel_quantized_model.eval()
per_channel_quantized_model.fuse_model()
per_channel_quantized_model.qconfig = torch.quantization.get_default_qconfig('fbgemm')
print(per_channel_quantized_model.qconfig)

torch.quantization.prepare(per_channel_quantized_model, inplace=True)
evaluate(per_channel_quantized_model, criterion, data_loader, num_calibration_batches)
torch.quantization.convert(per_channel_quantized_model, inplace=True)
top1, top5 = evaluate(per_channel_quantized_model, criterion, data_loader_test, neval_batches=num_eval_batches)
print('Evaluation accuracy on %d images, %2.2f'%(num_eval_batches * eval_batch_size, top1.avg))
torch.jit.save(torch.jit.script(per_channel_quantized_model), saved_model_dir + scripted_quantized_model_file)
```

Out:

```
QConfig(activation=functools.partial(<class 'torch.quantization.observer.HistogramObserver'>, reduce_range=True), weight=functools.partial(<class 'torch.quantization.observer.PerChannelMinMaxObserver'>, dtype=torch.qint8, qscheme=torch.per_channel_symmetric))
....................Evaluation accuracy on 300 images, 74.67
```

Changing just this quantization configuration method increased the accuracy to almost 75%! Still, this is about 3% worse than the baseline of nearly 78% achieved above. So let's try quantization-aware training.

5. Quantization-aware training

Quantization-aware training (QAT) is the quantization method that typically results in the highest accuracy. With QAT, all weights and activations are "fake quantized" during both the forward and backward passes of training: that is, float values are rounded to mimic int8 values, but all computations are still done with floating point numbers. Thus, all the weight adjustments during training are made while "aware" of the fact that the model will ultimately be quantized; after quantizing, therefore, this method usually yields higher accuracy than either dynamic quantization or post-training static quantization.

The overall workflow for actually performing QAT is very similar to before:

  • We can use the same model as before: there is no additional preparation needed for quantization-aware training.
  • We need to use a qconfig specifying what kind of fake-quantization is to be inserted after weights and activations, instead of specifying observers.
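The "fake quantization" described above can be sketched in a few lines (a simplified illustration, not PyTorch's actual FakeQuantize module): values are scaled, rounded, clamped onto the int8 grid, and mapped straight back to float, so the forward pass sees realistic quantization error while every tensor stays floating point:

```python
import torch

def fake_quantize(x, scale, zero_point, quant_min=-128, quant_max=127):
    # Round onto the integer grid and clamp to the representable range...
    q = torch.clamp(torch.round(x / scale) + zero_point, quant_min, quant_max)
    # ...then map straight back to float: the tensor stays float32, but now
    # carries the rounding and clipping error a real int8 kernel would introduce.
    return (q - zero_point) * scale

x = torch.tensor([0.04, 0.26, 100.0])
y = fake_quantize(x, scale=0.1, zero_point=0)
```

Note how small values snap to the nearest multiple of the scale and out-of-range values saturate at the clamp boundary; during QAT, gradients flow through this rounding via a straight-through estimator.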

We first define a training function:

```python
def train_one_epoch(model, criterion, optimizer, data_loader, device, ntrain_batches):
    model.train()
    top1 = AverageMeter('Acc@1', ':6.2f')
    top5 = AverageMeter('Acc@5', ':6.2f')
    avgloss = AverageMeter('Loss', '1.5f')

    cnt = 0
    for image, target in data_loader:
        start_time = time.time()
        print('.', end = '')
        cnt += 1
        image, target = image.to(device), target.to(device)
        output = model(image)
        loss = criterion(output, target)
        optimizer.zero_grad()
        loss.backward()
        optimizer.step()
        acc1, acc5 = accuracy(output, target, topk=(1, 5))
        top1.update(acc1[0], image.size(0))
        top5.update(acc5[0], image.size(0))
        avgloss.update(loss, image.size(0))
        if cnt >= ntrain_batches:
            print('Loss', avgloss.avg)

            print('Training: * Acc@1 {top1.avg:.3f} Acc@5 {top5.avg:.3f}'
                  .format(top1=top1, top5=top5))
            return

    print('Full imagenet train set: * Acc@1 {top1.global_avg:.3f} Acc@5 {top5.global_avg:.3f}'
          .format(top1=top1, top5=top5))
    return
```

We fuse modules as before:

```python
qat_model = load_model(saved_model_dir + float_model_file)
qat_model.fuse_model()

optimizer = torch.optim.SGD(qat_model.parameters(), lr = 0.0001)
qat_model.qconfig = torch.quantization.get_default_qat_qconfig('fbgemm')
```

Finally, prepare_qat performs the "fake quantization", preparing the model for quantization-aware training:

```python
torch.quantization.prepare_qat(qat_model, inplace=True)
print('Inverted Residual Block: After preparation for QAT, note fake-quantization modules \n',qat_model.features[1].conv)
```

Out:

```
Inverted Residual Block: After preparation for QAT, note fake-quantization modules
 Sequential(
  (0): ConvBNReLU(
    (0): ConvBnReLU2d(
      32, 32, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1), groups=32, bias=False
      (bn): BatchNorm2d(32, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
      (weight_fake_quant): FakeQuantize(
        fake_quant_enabled=tensor([1], dtype=torch.uint8), observer_enabled=tensor([1], dtype=torch.uint8), quant_min=-128, quant_max=127, dtype=torch.qint8, qscheme=torch.per_channel_symmetric, ch_axis=0, scale=tensor([1.]), zero_point=tensor([0])
        (activation_post_process): MovingAveragePerChannelMinMaxObserver(min_val=tensor([]), max_val=tensor([]))
      )
      (activation_post_process): FakeQuantize(
        fake_quant_enabled=tensor([1], dtype=torch.uint8), observer_enabled=tensor([1], dtype=torch.uint8), quant_min=0, quant_max=255, dtype=torch.quint8, qscheme=torch.per_tensor_affine, ch_axis=-1, scale=tensor([1.]), zero_point=tensor([0])
        (activation_post_process): MovingAverageMinMaxObserver(min_val=inf, max_val=-inf)
      )
    )
    (1): Identity()
    (2): Identity()
  )
  (1): ConvBn2d(
    32, 16, kernel_size=(1, 1), stride=(1, 1), bias=False
    (bn): BatchNorm2d(16, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
    (weight_fake_quant): FakeQuantize(
      fake_quant_enabled=tensor([1], dtype=torch.uint8), observer_enabled=tensor([1], dtype=torch.uint8), quant_min=-128, quant_max=127, dtype=torch.qint8, qscheme=torch.per_channel_symmetric, ch_axis=0, scale=tensor([1.]), zero_point=tensor([0])
      (activation_post_process): MovingAveragePerChannelMinMaxObserver(min_val=tensor([]), max_val=tensor([]))
    )
    (activation_post_process): FakeQuantize(
      fake_quant_enabled=tensor([1], dtype=torch.uint8), observer_enabled=tensor([1], dtype=torch.uint8), quant_min=0, quant_max=255, dtype=torch.quint8, qscheme=torch.per_tensor_affine, ch_axis=-1, scale=tensor([1.]), zero_point=tensor([0])
      (activation_post_process): MovingAverageMinMaxObserver(min_val=inf, max_val=-inf)
    )
  )
  (2): Identity()
)
```

Training a quantized model with high accuracy requires accurate modeling of numerics at inference. For quantization-aware training, therefore, we modify the training loop by:

  • Switching batch norm to use running mean and variance towards the end of training to better match inference numerics.
  • Freezing the quantizer parameters (scale and zero-point) and fine-tuning the weights.
```python
num_train_batches = 20

# Train and check accuracy after each epoch
for nepoch in range(8):
    train_one_epoch(qat_model, criterion, optimizer, data_loader, torch.device('cpu'), num_train_batches)
    if nepoch > 3:
        # Freeze quantizer parameters
        qat_model.apply(torch.quantization.disable_observer)
    if nepoch > 2:
        # Freeze batch norm mean and variance estimates
        qat_model.apply(torch.nn.intrinsic.qat.freeze_bn_stats)

    # Check the accuracy after each epoch
    quantized_model = torch.quantization.convert(qat_model.eval(), inplace=False)
    quantized_model.eval()
    top1, top5 = evaluate(quantized_model, criterion, data_loader_test, neval_batches=num_eval_batches)
    print('Epoch %d :Evaluation accuracy on %d images, %2.2f'%(nepoch, num_eval_batches * eval_batch_size, top1.avg))
```

Out:

```
....................Loss tensor(2.0747, grad_fn=<DivBackward0>)
Training: * Acc@1 56.167 Acc@5 77.333
..........Epoch 0 :Evaluation accuracy on 300 images, 77.67
....................Loss tensor(2.0358, grad_fn=<DivBackward0>)
Training: * Acc@1 54.833 Acc@5 78.500
..........Epoch 1 :Evaluation accuracy on 300 images, 77.00
....................Loss tensor(2.0417, grad_fn=<DivBackward0>)
Training: * Acc@1 54.667 Acc@5 77.333
..........Epoch 2 :Evaluation accuracy on 300 images, 74.67
....................Loss tensor(1.9055, grad_fn=<DivBackward0>)
Training: * Acc@1 56.833 Acc@5 78.667
..........Epoch 3 :Evaluation accuracy on 300 images, 76.33
....................Loss tensor(1.9055, grad_fn=<DivBackward0>)
Training: * Acc@1 58.167 Acc@5 80.000
..........Epoch 4 :Evaluation accuracy on 300 images, 77.00
....................Loss tensor(1.7821, grad_fn=<DivBackward0>)
Training: * Acc@1 60.500 Acc@5 82.833
..........Epoch 5 :Evaluation accuracy on 300 images, 76.33
....................Loss tensor(1.8145, grad_fn=<DivBackward0>)
Training: * Acc@1 58.833 Acc@5 82.333
..........Epoch 6 :Evaluation accuracy on 300 images, 74.33
....................Loss tensor(1.6930, grad_fn=<DivBackward0>)
Training: * Acc@1 63.000 Acc@5 81.333
..........Epoch 7 :Evaluation accuracy on 300 images, 75.67
```

Here, we just perform quantization-aware training for a small number of epochs. Nevertheless, quantization-aware training yields an accuracy of over 71% on the entire imagenet dataset, which is close to the floating point accuracy of 71.9%.

More on quantization-aware training:

  • QAT is a super-set of post-training quantization techniques that allows for more debugging. For example, we can analyze whether the accuracy of the model is limited by weight or activation quantization.
  • We can also simulate the accuracy of a quantized model in floating point, since we are using fake-quantization to model the numerics of actual quantized arithmetic.
  • We can mimic post-training quantization easily too.
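As a sketch of those debugging hooks (on a toy model, not the tutorial's MobileNetV2), a QAT-prepared module exposes per-module switches for the observers and the fake-quantization, which can be toggled independently to simulate the different regimes:

```python
import torch
import torch.nn as nn

# A toy module prepared for QAT, just to demonstrate the toggles.
model = nn.Sequential(nn.Linear(4, 4), nn.ReLU())
model.qconfig = torch.quantization.get_default_qat_qconfig('fbgemm')
model = torch.quantization.prepare_qat(model.train())

# Simulate float accuracy: run with fake quantization switched off.
model.apply(torch.quantization.disable_fake_quant)

# Mimic post-training quantization: fake-quantize with frozen observers.
model.apply(torch.quantization.enable_fake_quant)
model.apply(torch.quantization.disable_observer)

out = model(torch.randn(2, 4))
```

The same `apply` pattern is what the tutorial's QAT loop uses above to freeze the quantizer parameters after a few epochs.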

Speedup from quantization

Finally, let's confirm something we alluded to above: do our quantized models actually perform inference faster? Let's test:

```python
def run_benchmark(model_file, img_loader):
    elapsed = 0
    model = torch.jit.load(model_file)
    model.eval()
    num_batches = 5
    # Run the scripted model on a few batches of images
    for i, (images, target) in enumerate(img_loader):
        if i < num_batches:
            start = time.time()
            output = model(images)
            end = time.time()
            elapsed = elapsed + (end-start)
        else:
            break
    num_images = images.size()[0] * num_batches

    print('Elapsed time: %3.0f ms' % (elapsed/num_images*1000))
    return elapsed

run_benchmark(saved_model_dir + scripted_float_model_file, data_loader_test)
run_benchmark(saved_model_dir + scripted_quantized_model_file, data_loader_test)
```

Out:

```
Elapsed time:   7 ms
Elapsed time:   4 ms
```

Running this locally on a MacBook Pro, the regular model took 61 milliseconds to run, and the quantized model just 20 milliseconds, illustrating the typical 2-4x speedup we see for quantized models compared to floating point ones.

Conclusion

In this tutorial, we showed two quantization methods, post-training static quantization and quantization-aware training, describing what they do "under the hood" and how to use them in PyTorch.

Thanks for reading! As always, we welcome any feedback, so please create an issue here if you have any.

Total running time of the script: (5 minutes 40.226 seconds)

Download Python source code: static_quantization_tutorial.py

Download Jupyter notebook: static_quantization_tutorial.ipynb

Gallery generated by Sphinx-Gallery