Text is a kind of sequence data: an article can be viewed as a sequence of characters or words.
Preprocessing typically involves four steps:

  1. Read in the text
  2. Tokenize it
  3. Build a vocabulary that maps each token to a unique index
  4. Convert the text from a sequence of tokens to a sequence of indices, so it can be fed into the model

Reading in the Text

Typical cleanup steps are converting everything to lowercase, stripping leading and trailing whitespace, and removing non-alphabetic characters.
    import collections
    import re

    def read_time_machine():
        with open('/home/kesci/input/timemachine7163/timemachine.txt', 'r') as f:
            lines = [re.sub('[^a-z]+', ' ', line.strip().lower()) for line in f]
        return lines

    lines = read_time_machine()
    print('# sentences %d' % len(lines))

Tokenization

Tokenization splits each sentence into tokens, turning it into a sequence of tokens.
Tokens generally come in one of two forms:

  • words
  • characters
    def tokenize(sentences, token='word'):
        """Split sentences into word or char tokens"""
        if token == 'word':
            return [sentence.split(' ') for sentence in sentences]
        elif token == 'char':
            return [list(sentence) for sentence in sentences]
        else:
            print('ERROR: unknown token type ' + token)
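
For example, word-level tokenization can be applied to the lines read in earlier (a small usage sketch, assuming `lines` comes from `read_time_machine` above):

    tokens = tokenize(lines)  # one list of word tokens per line
    print(tokens[0:2])        # inspect the first two token sequences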

Building the Vocabulary

To make the data easier for the model to process, we need to convert strings into numbers.
We therefore first build a vocabulary that maps each token to a unique index.

    class Vocab(object):
        def __init__(self, tokens, min_freq=0, use_special_tokens=False):
            counter = count_corpus(tokens)  # Counter of <token, frequency> pairs
            self.token_freqs = list(counter.items())
            self.idx_to_token = []
            if use_special_tokens:
                # padding, begin of sentence, end of sentence, unknown
                self.pad, self.bos, self.eos, self.unk = (0, 1, 2, 3)
                self.idx_to_token += ['<pad>', '<bos>', '<eos>', '<unk>']
            else:
                self.unk = 0
                self.idx_to_token += ['<unk>']
            self.idx_to_token += [token for token, freq in self.token_freqs
                                  if freq >= min_freq and token not in self.idx_to_token]
            self.token_to_idx = dict()
            for idx, token in enumerate(self.idx_to_token):
                self.token_to_idx[token] = idx

        def __len__(self):
            return len(self.idx_to_token)

        def __getitem__(self, tokens):
            if not isinstance(tokens, (list, tuple)):
                return self.token_to_idx.get(tokens, self.unk)
            return [self.__getitem__(token) for token in tokens]

        def to_tokens(self, indices):
            if not isinstance(indices, (list, tuple)):
                return self.idx_to_token[indices]
            return [self.idx_to_token[index] for index in indices]

    def count_corpus(sentences):
        tokens = [tk for st in sentences for tk in st]
        return collections.Counter(tokens)  # returns a dict-like object recording each token's count
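
With the vocabulary built, step 4 of the pipeline (mapping token sequences to index sequences) becomes a simple lookup. A brief usage sketch, assuming `tokens` was produced by `tokenize` above:

    vocab = Vocab(tokens)  # build the vocabulary from the tokenized corpus
    print(list(vocab.token_to_idx.items())[0:10])  # a few (token, index) pairs

    # convert a couple of sentences from word tokens to index sequences
    for i in range(8, 10):
        print('words:', tokens[i])
        print('indices:', vocab[tokens[i]])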

Tokenizing with Existing Tools

Drawbacks of the simple tokenization above (a quick check follows this list):

  • Punctuation often carries semantic information, but our method simply discards it
  • Words like "shouldn't" and "doesn't" are handled incorrectly
  • Abbreviations like "Mr." and "Dr." are handled incorrectly
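
To see this concretely, here is a quick check on a made-up example sentence (the same `text` variable is reused in the spaCy and NLTK snippets below):

    import re

    text = "Mr. Chen doesn't agree with my suggestion."  # example sentence
    # the simple pipeline lowercases, strips non-letters, then splits on spaces:
    # "doesn't" becomes ['doesn', 't'] and "Mr." loses its period
    print(re.sub('[^a-z]+', ' ', text.strip().lower()).split(' '))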

    spaCy

    import spacy
    nlp = spacy.load('en_core_web_sm')
    doc = nlp(text)
    print([token.text for token in doc])

    NLTK

    from nltk.tokenize import word_tokenize
    from nltk import data
    data.path.append('/home/kesci/input/nltk_data3784/nltk_data')
    print(word_tokenize(text))