(Experimental) Dynamic Quantization on an LSTM Word Language Model

Source: https://pytorch.org/tutorials/advanced/dynamic_quantization_tutorial.html

Author: James Reed

Edited by: Seth Weidman

Introduction

Quantization involves converting the weights and activations of your model from float to int, which can result in a smaller model size and faster inference, with only a small hit to accuracy.

In this tutorial, we apply the simplest form of quantization - dynamic quantization - to an LSTM-based next-word prediction model, closely following the word language model from the PyTorch examples.
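
Before diving in, here is a minimal sketch (not part of the original script; the toy layer sizes are made up for illustration) of what dynamic quantization does to a single nn.Linear layer using the public torch.quantization.quantize_dynamic API:

    import torch
    import torch.nn as nn

    # Toy example: dynamically quantize one Linear layer's weights to int8.
    # Weights are stored as int8; activations stay in float and are quantized
    # on the fly at inference time.
    float_model = nn.Sequential(nn.Linear(128, 64))
    quantized = torch.quantization.quantize_dynamic(
        float_model, {nn.Linear}, dtype=torch.qint8
    )
    print(quantized)  # the Linear is replaced by a DynamicQuantizedLinear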

    # imports
    import os
    from io import open
    import time

    import torch
    import torch.nn as nn
    import torch.nn.functional as F

1. Define the model

Here we define the LSTM model architecture, following the model from the word language model example.

    class LSTMModel(nn.Module):
        """Container module with an encoder, a recurrent module, and a decoder."""

        def __init__(self, ntoken, ninp, nhid, nlayers, dropout=0.5):
            super(LSTMModel, self).__init__()
            self.drop = nn.Dropout(dropout)
            self.encoder = nn.Embedding(ntoken, ninp)
            self.rnn = nn.LSTM(ninp, nhid, nlayers, dropout=dropout)
            self.decoder = nn.Linear(nhid, ntoken)

            self.init_weights()

            self.nhid = nhid
            self.nlayers = nlayers

        def init_weights(self):
            initrange = 0.1
            self.encoder.weight.data.uniform_(-initrange, initrange)
            self.decoder.bias.data.zero_()
            self.decoder.weight.data.uniform_(-initrange, initrange)

        def forward(self, input, hidden):
            emb = self.drop(self.encoder(input))
            output, hidden = self.rnn(emb, hidden)
            output = self.drop(output)
            decoded = self.decoder(output)
            return decoded, hidden

        def init_hidden(self, bsz):
            weight = next(self.parameters())
            return (weight.new_zeros(self.nlayers, bsz, self.nhid),
                    weight.new_zeros(self.nlayers, bsz, self.nhid))
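
As a quick sanity check on this architecture (a sketch that is not part of the original tutorial; the tiny sizes below are arbitrary), we can run a random batch through a small LSTMModel and confirm that the output carries one logit per vocabulary entry at each position:

    # Sketch: toy-sized LSTMModel forward pass to check tensor shapes.
    tiny = LSTMModel(ntoken=100, ninp=8, nhid=16, nlayers=2)
    hidden = tiny.init_hidden(bsz=4)
    inp = torch.randint(100, (5, 4), dtype=torch.long)  # (seq_len, batch) of token ids
    out, hidden = tiny(inp, hidden)
    print(out.shape)  # torch.Size([5, 4, 100]): logits over the vocabulary at each position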

2. Load in the text data

Next, we load the Wikitext-2 dataset into a Corpus, again following the preprocessing from the word language model example.

    class Dictionary(object):
        def __init__(self):
            self.word2idx = {}
            self.idx2word = []

        def add_word(self, word):
            if word not in self.word2idx:
                self.idx2word.append(word)
                self.word2idx[word] = len(self.idx2word) - 1
            return self.word2idx[word]

        def __len__(self):
            return len(self.idx2word)


    class Corpus(object):
        def __init__(self, path):
            self.dictionary = Dictionary()
            self.train = self.tokenize(os.path.join(path, 'train.txt'))
            self.valid = self.tokenize(os.path.join(path, 'valid.txt'))
            self.test = self.tokenize(os.path.join(path, 'test.txt'))

        def tokenize(self, path):
            """Tokenizes a text file."""
            assert os.path.exists(path)
            # Add words to the dictionary
            with open(path, 'r', encoding="utf8") as f:
                for line in f:
                    words = line.split() + ['<eos>']
                    for word in words:
                        self.dictionary.add_word(word)

            # Tokenize file content
            with open(path, 'r', encoding="utf8") as f:
                idss = []
                for line in f:
                    words = line.split() + ['<eos>']
                    ids = []
                    for word in words:
                        ids.append(self.dictionary.word2idx[word])
                    idss.append(torch.tensor(ids).type(torch.int64))
                ids = torch.cat(idss)

            return ids

    model_data_filepath = 'data/'

    corpus = Corpus(model_data_filepath + 'wikitext-2')
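
A few quick checks (a sketch, assuming the Wikitext-2 files have been downloaded to data/wikitext-2/) show what the Corpus gives us: a word-to-id Dictionary and each split stored as a single flat tensor of token ids:

    # Sketch: inspect the loaded corpus.
    print(len(corpus.dictionary))                # vocabulary size (33278 for Wikitext-2)
    print(corpus.test.dtype, corpus.test.shape)  # torch.int64, a flat 1-D tensor of token ids
    print(corpus.dictionary.idx2word[corpus.test[0].item()])  # map the first test id back to its word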

3. Load the pretrained model

This is a tutorial on dynamic quantization, a quantization technique that is applied after a model has been trained. Therefore, we simply load some pretrained weights into this model architecture; these weights were obtained by training for five epochs using the default settings in the word language model example.

    ntokens = len(corpus.dictionary)

    model = LSTMModel(
        ntoken = ntokens,
        ninp = 512,
        nhid = 256,
        nlayers = 5,
    )

    model.load_state_dict(
        torch.load(
            model_data_filepath + 'word_language_model_quantize.pth',
            map_location=torch.device('cpu')
            )
        )

    model.eval()
    print(model)

Out:

    LSTMModel(
      (drop): Dropout(p=0.5, inplace=False)
      (encoder): Embedding(33278, 512)
      (rnn): LSTM(512, 256, num_layers=5, dropout=0.5)
      (decoder): Linear(in_features=256, out_features=33278, bias=True)
    )

Now we generate some text to check that the pretrained model is working properly; similarly to before, we follow the text-generation procedure from the word language model example.

    input_ = torch.randint(ntokens, (1, 1), dtype=torch.long)
    hidden = model.init_hidden(1)
    temperature = 1.0
    num_words = 1000

    with open(model_data_filepath + 'out.txt', 'w') as outf:
        with torch.no_grad():  # no tracking history
            for i in range(num_words):
                output, hidden = model(input_, hidden)
                word_weights = output.squeeze().div(temperature).exp().cpu()
                word_idx = torch.multinomial(word_weights, 1)[0]
                input_.fill_(word_idx)

                word = corpus.dictionary.idx2word[word_idx]

                outf.write(str(word.encode('utf-8')) + ('\n' if i % 20 == 19 else ' '))

                if i % 100 == 0:
                    print('| Generated {}/{} words'.format(i, 1000))

    with open(model_data_filepath + 'out.txt', 'r') as outf:
        all_output = outf.read()
        print(all_output)

Out:

    | Generated 0/1000 words
    | Generated 100/1000 words
    | Generated 200/1000 words
    | Generated 300/1000 words
    | Generated 400/1000 words
    | Generated 500/1000 words
    | Generated 600/1000 words
    | Generated 700/1000 words
    | Generated 800/1000 words
    | Generated 900/1000 words
    b'and' b'O' b'\xe2\x80\x99' b'Gacy' b',' b'and' b'then' b'defined' b'that' b'next' b'novel' b'succeeded' b'large' b'property' b',' b'so' b'neither' b'number' b'is' b'currently'
    b'a' b'identical' b'planet' b'by' b'stiff' b'culture' b'.' b'Mosley' b'may' b'settle' b'in' b'non' b'@-@' b'bands' b'for' b'the' b'beginning' b'of' b'its' b'home'
    b'stations' b',' b'being' b'also' b'in' b'charge' b'for' b'two' b'other' b'@-@' b'month' b'ceremonies' b'.' b'The' b'first' b'Star' b'Overseas' b'took' b'to' b'have'
    b'met' b'its' b'leadership' b'for' b'investigation' b'such' b'as' b'Discovered' b'lbw' b',' b'club' b',' b'<unk>' b',' b'<unk>' b',' b'or' b'Crac' b"'Malley" b','
    b'although' b'with' b'the' b'other' b'victory' b',' b'assumes' b'it' b'.' b'(' b'not' b'containment' b'to' b'a' b'recent' b'problem' b')' b'.' b'His' b'traditional'
    b'scheme' b'process' b'is' b'proceeded' b'outdoor' b'in' b'overweight' b'clusters' b';' b'God' b'Davis' b'was' b'interested' b'on' b'her' b'right' b'touring' b',' b'although' b'they'
    b'had' b'previously' b'previously' b'risen' b'near' b'eclipse' b'in' b'his' b'work' b'by' b'the' b'latter' b'@-@' b'perspective' b'.' b'During' b'the' b'release' b'of' b'Bell'
    b',' b'the' b'first' b'promotional' b'mention' b'included' b'a' b'Magnetic' b'seam' b'was' b'put' b'into' b'Shakespeare' b"'s" b'Special' b'Company' b'is' b'katra' b'than' b'chops'
    b'@-@' b'up' b'history' b'for' b'frets' b'of' b'actions' b'.' b'<eos>' b'Until' b'arrival' b',' b'Griffin' b'wrote' b'that' b'a' b'"' b'sense' b'"' b'included'
    b'especially' b'declining' b'individual' b'forces' b',' b'though' b'are' b'stronger' b'<unk>' b'.' b'According' b'to' b'lessen' b'very' b'role' b',' b'Ceres' b'believed' b'he' b'each'
    b'conflicted' b'pump' b'fight' b'follows' b'the' b'malignant' b'polynomial' b'to' b'make' b'Albani' b'.' b'The' b'nobility' b'found' b'a' b'spinners' b'from' b'a' b'special' b'to'
    b'vertical' b'@-@' b'term' b'crimes' b',' b'and' b'the' b'Neapolitan' b'apparent' b'<unk>' b'show' b'forcing' b'no' b'of' b'the' b'worst' b'traditions' b'of' b'tallest' b'<unk>'
    b'teacher' b'+' b'green' b'crushing' b',' b'with' b'4' b'%' b',' b'and' b'560' b'doctrines' b',' b'with' b'other' b'Asian' b'assistance' b'<unk>' b'.' b'The'
    b'game' b'is' b'unadorned' b',' b'especially' b'or' b'steadily' b'favoured' b'according' b'to' b'its' b'inside' b',' b'leading' b'to' b'the' b'removal' b'of' b'gauges' b'.'
    b'vanishing' b',' b'a' b'jagged' b'race' b'rested' b'with' b'be' b'rich' b'if' b'these' b'legislation' b'remained' b'together' b'.' b'The' b'anthology' b'and' b'initially' b'regularly'
    b'Cases' b'Cererian' b'and' b'acknowledge' b'individual' b'being' b'poured' b'with' b'the' b'Chicago' b'melee' b'.' b'Europium' b',' b'<unk>' b',' b'and' b'Lars' b'life' b'for'
    b'electron' b'plumage' b',' b'will' b'deprive' b'themselves' b'.' b'The' b'<unk>' b'gryllotalpa' b'behave' b'have' b'Emerald' b'doubt' b'.' b'When' b'limited' b'cubs' b'are' b'rather'
    b'attempting' b'to' b'address' b'.' b'Two' b'birds' b'as' b'being' b'also' b'<unk>' b',' b'such' b'as' b'"' b'<unk>' b'"' b',' b'and' b'possessing' b'criminal'
    b'spots' b',' b'lambskin' b'ponderosa' b'mosses' b',' b'which' b'might' b'seek' b'to' b'begin' b'less' b'different' b'delineated' b'techniques' b'.' b'Known' b',' b'on' b'the'
    b'ground' b',' b'and' b'only' b'cooler' b',' b'first' b'on' b'other' b'females' b'factory' b'in' b'mathematics' b'.' b'Pilgrim' b'alone' b'has' b'a' b'critical' b'substance'
    b',' b'probably' b'in' b'line' b'.' b'He' b'used' b'a' b'<unk>' b',' b'with' b'the' b'resin' b'being' b'transported' b'to' b'the' b'12th' b'island' b'during'
    b'the' b'year' b'of' b'a' b'mixture' b'show' b'that' b'it' b'is' b'serving' b';' b'they' b'are' b'headed' b'by' b'prone' b'too' b'species' b',' b'rather'
    b'than' b'the' b'risk' b'of' b'carbon' b'.' b'In' b'all' b'other' b'typical' b',' b'faith' b'consist' b'of' b'<unk>' b'whereas' b'<unk>' b'when' b'quotes' b'they'
    b'Abrams' b'restructuring' b'vessels' b'.' b'It' b'also' b'emerged' b'even' b'when' b'any' b'lack' b'of' b'birds' b'has' b'wide' b'pinkish' b'structures' b',' b'directing' b'a'
    b'chelicerae' b'of' b'amputated' b'elementary' b',' b'only' b'they' b'on' b'objects' b'.' b'A' b'female' b'and' b'a' b'female' b'Leisler' b'@-@' b'shaped' b'image' b'for'
    b'51' b'@.@' b'5' b'm' b'(' b'5' b'lb' b')' b'Frenchman' b'2' b'at' b'sea' b'times' b'is' b'approximately' b'2' b'years' b'ago' b',' b'particularly'
    b'behind' b'reducing' b'Trujillo' b"'s" b'and' b'food' b'specific' b'spores' b'.' b'Males' b'fibrous' b'females' b'can' b'be' b'severely' b'gregarious' b'.' b'The' b'same' b'brood'
    b'behind' b'100' b'minutes' b'after' b'it' b'is' b'estimated' b'by' b'damaging' b'the' b'nest' b'base' b',' b'with' b'some' b'other' b'rare' b'birds' b'and' b'behavior'
    b',' b'no' b'transport' b'and' b'Duty' b'demand' b'.' b'Two' b'rare' b'chicks' b'have' b'from' b'feed' b'engage' b'to' b'come' b'with' b'some' b'part' b'of'
    b'nesting' b'.' b'The' b'1808' b'to' b'be' b'reduced' b'to' b'Scots' b'and' b'fine' b'stones' b'.' b'There' b'they' b'also' b'purple' b'limitations' b'of' b'certain'
    b'skin' b'material' b'usually' b'move' b'during' b'somewhat' b'.' b'A' b'mothers' b'of' b'external' b'take' b'from' b'poaching' b',' b'typically' b'have' b'people' b'processes' b'and'
    b'toll' b';' b'while' b'bird' b'plumage' b'differs' b'to' b'Fight' b',' b'they' b'may' b'be' b'open' b'after' b'<unk>' b',' b'thus' b'rarely' b'their' b'<unk>'
    b'for' b'a' b'emotional' b'circle' b'.' b'Rough' b'Dahlan' b'probably' b'suggested' b'how' b'they' b'impose' b'their' b'cross' b'of' b'relapse' b'where' b'they' b'changed' b'.'
    b'They' b'popularisation' b'them' b'of' b'their' b'<unk>' b',' b'charming' b'by' b'limited' b'or' b'Palestinians' b'the' b'<unk>' b'<unk>' b'.' b'Traffic' b'of' b'areas' b'headed'
    b',' b'and' b'their' b'push' b'will' b'articulate' b'.' b'<eos>' b'<unk>' b'would' b'be' b'criticized' b'by' b'protein' b'rice' b',' b'particularly' b'often' b'rather' b'of'
    b'the' b'cellular' b'extent' b'.' b'They' b'could' b'overlap' b'forward' b',' b'and' b'there' b'are' b'no' b'governing' b'land' b',' b'they' b'do' b'not' b'find'
    b'it' b'.' b'In' b'one' b'place' b',' b'reddish' b'kakapo' b'(' b'kakapo' b'<unk>' b')' b'might' b'be' b'performed' b'that' b'conduct' b',' b'stadia' b','
    b'gene' b'or' b'air' b',' b'noise' b',' b'and' b'offensive' b'or' b'skin' b',' b'which' b'may' b'be' b'commercially' b'organized' b'strong' b'method' b'.' b'In'
    b'changing' b',' b'Chen' b'and' b'eukaryotes' b'were' b'Membrane' b'spiders' b'in' b'larger' b'growth' b',' b'by' b'some' b'regions' b'.' b'If' b'up' b'about' b'5'
    b'%' b'of' b'the' b'males' b',' b'there' b'are' b'displays' b'that' b'shift' b'the' b'bird' b'inclination' b'after' b'supreme' b'<unk>' b'to' b'move' b'outside' b'tests'
    b'.' b'The' b'aim' b'of' b'Mouquet' b'Sites' b'is' b'faster' b'as' b'an' b'easy' b'asteroid' b',' b'with' b'ocean' b'or' b'grey' b',' b'albeit' b','
    b'as' b'they' b'they' b'CBs' b',' b'and' b'do' b'not' b'be' b'performed' b',' b'greatly' b'on' b'other' b'insects' b',' b'they' b'can' b'write' b'chromosomes'
    b',' b'and' b'planners' b',' b'galericulata' b'should' b'be' b'a' b'bird' b'.' b'Also' b'on' b'a' b'holodeck' b'they' b'were' b'divine' b'out' b'of' b'bare'
    b'handwriting' b'.' b'Unlike' b'this' b',' b'they' b'makes' b'only' b'anything' b'a' b'variation' b'of' b'skin' b'skeletons' b'further' b'.' b'They' b'have' b'to' b'be'
    b'able' b'under' b'their' b'herding' b'tree' b',' b'or' b'dart' b'.' b'When' b'many' b'hypothesis' b'(' b'plant' b',' b'they' b'were' b'@-@' b'looped' b'aged'
    b'play' b')' b'is' b'very' b'clear' b'as' b'very' b'on' b'comparison' b'.' b'<eos>' b'Furthermore' b',' b'Wikimania' b'decorations' b'@-@' b'sponsored' b'naming' b'hydrogen' b'when'
    b'the' b'kakapo' b'commenced' b',' b'they' b'are' b'slowly' b'on' b'heavy' b'isolation' b'.' b'Sometimes' b'that' b'Larssen' b'leave' b'gently' b',' b'they' b'usually' b'made'
    b'short' b'care' b'of' b'feral' b'or' b'any' b'dual' b'species' b'.' b'<eos>' b'Further' b'males' b'that' b'outfitting' b',' b'when' b'there' b'are' b'two' b'envelope'
    b'shorter' b'flocks' b'to' b'be' b'males' b'ideally' b'they' b'are' b'highly' b'emission' b'.' b'<eos>' b'As' b'of' b'danger' b',' b'taking' b'in' b'one' b'of'
    b'the' b'other' b'surviving' b'structure' b'of' b'Ceres' b'can' b'be' b'rebuffed' b'to' b'be' b'caused' b'by' b'any' b'combination' b'of' b'food' b'or' b'modified' b'its'

It's no GPT-2, but it looks like the model has started to learn the structure of language!

We're almost ready to demonstrate dynamic quantization. We just need to define a few more helper functions:

    bptt = 25
    criterion = nn.CrossEntropyLoss()
    eval_batch_size = 1

    # create test data set
    def batchify(data, bsz):
        # Work out how cleanly we can divide the dataset into bsz parts.
        nbatch = data.size(0) // bsz
        # Trim off any extra elements that wouldn't cleanly fit (remainders).
        data = data.narrow(0, 0, nbatch * bsz)
        # Evenly divide the data across the bsz batches.
        return data.view(bsz, -1).t().contiguous()

    test_data = batchify(corpus.test, eval_batch_size)

    # Evaluation functions
    def get_batch(source, i):
        seq_len = min(bptt, len(source) - 1 - i)
        data = source[i:i+seq_len]
        target = source[i+1:i+1+seq_len].view(-1)
        return data, target

    def repackage_hidden(h):
        """Wraps hidden states in new Tensors, to detach them from their history."""
        if isinstance(h, torch.Tensor):
            return h.detach()
        else:
            return tuple(repackage_hidden(v) for v in h)

    def evaluate(model_, data_source):
        # Turn on evaluation mode which disables dropout.
        model_.eval()
        total_loss = 0.
        hidden = model_.init_hidden(eval_batch_size)
        with torch.no_grad():
            for i in range(0, data_source.size(0) - 1, bptt):
                data, targets = get_batch(data_source, i)
                output, hidden = model_(data, hidden)
                hidden = repackage_hidden(hidden)
                output_flat = output.view(-1, ntokens)
                total_loss += len(data) * criterion(output_flat, targets).item()
        return total_loss / (len(data_source) - 1)
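
These helpers mirror the evaluation code in the word language model example: batchify reshapes the flat token stream into eval_batch_size columns, and get_batch slices out bptt-step inputs together with their next-word targets. A small shape check (a sketch, assuming the code above has already run) makes the layout concrete:

    # Sketch: check the shapes produced by batchify / get_batch.
    print(test_data.shape)            # (number_of_steps, eval_batch_size), i.e. (N, 1) here
    data, targets = get_batch(test_data, 0)
    print(data.shape, targets.shape)  # data: (bptt, 1) inputs; targets: (bptt,) next-token ids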

4. Test dynamic quantization

Finally, we can call torch.quantization.quantize_dynamic on the model! Specifically,

  • we specify that we want the nn.LSTM and nn.Linear modules in our model to be quantized
  • we specify that we want weights to be converted to int8

    import torch.quantization

    quantized_model = torch.quantization.quantize_dynamic(
        model, {nn.LSTM, nn.Linear}, dtype=torch.qint8
    )
    print(quantized_model)

Out:

    LSTMModel(
      (drop): Dropout(p=0.5, inplace=False)
      (encoder): Embedding(33278, 512)
      (rnn): DynamicQuantizedLSTM(
        512, 256, num_layers=5, dropout=0.5
        (_all_weight_values): ModuleList(
          (0): PackedParameter()
          (1): PackedParameter()
          (2): PackedParameter()
          (3): PackedParameter()
          (4): PackedParameter()
          (5): PackedParameter()
          (6): PackedParameter()
          (7): PackedParameter()
          (8): PackedParameter()
          (9): PackedParameter()
        )
      )
      (decoder): DynamicQuantizedLinear(
        in_features=256, out_features=33278
        (_packed_params): LinearPackedParams()
      )
    )

The model looks the same; how does this benefit us? First, we see a significant reduction in model size:

    def print_size_of_model(model):
        torch.save(model.state_dict(), "temp.p")
        print('Size (MB):', os.path.getsize("temp.p")/1e6)
        os.remove('temp.p')

    print_size_of_model(model)
    print_size_of_model(quantized_model)

Out:

    Size (MB): 113.941574
    Size (MB): 76.807204

Second, we see faster inference time, with no difference in evaluation loss:

Note: we set the number of threads to one for single-threaded comparison, since quantized models run single-threaded.

    torch.set_num_threads(1)

    def time_model_evaluation(model, test_data):
        s = time.time()
        loss = evaluate(model, test_data)
        elapsed = time.time() - s
        print('''loss: {0:.3f}\nelapsed time (seconds): {1:.1f}'''.format(loss, elapsed))

    time_model_evaluation(model, test_data)
    time_model_evaluation(quantized_model, test_data)

Out:

    loss: 5.167
    elapsed time (seconds): 233.9
    loss: 5.168
    elapsed time (seconds): 164.9

Running this locally on a MacBook Pro, inference takes about 200 seconds without quantization and only about 100 seconds with it.

Conclusion

Dynamic quantization can be an easy way to reduce model size while having only a limited effect on accuracy.

Thanks for reading! As always, we welcome your feedback, so please create an issue here if you have any.

Total running time of the script: (6 minutes 43.291 seconds)
