Text is a type of sequence data: an article can be viewed as a sequence of characters or words.
Preprocessing usually consists of four steps: reading in the text, tokenization, building a vocabulary, and converting the text into sequences of token indices.
Reading the text

```python
import collections
import re

def read_time_machine():
    with open('/home/kesci/input/timemachine7163/timemachine.txt', 'r') as f:
        # Lowercase each line and replace every run of non-letter characters with a space.
        lines = [re.sub('[^a-z]+', ' ', line.strip().lower()) for line in f]
    return lines

lines = read_time_machine()
print('# sentences %d' % len(lines))
```
Tokenization
Split a sentence into tokens, i.e. convert it into a sequence of words (or characters).
There are generally two kinds of tokens:
- words
- characters
```python
def tokenize(sentences, token='word'):
    """Split sentences into word or char tokens."""
    if token == 'word':
        return [sentence.split(' ') for sentence in sentences]
    elif token == 'char':
        return [list(sentence) for sentence in sentences]
    else:
        print('ERROR: unknown token type ' + token)
```
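For illustration, a minimal usage sketch (it assumes `lines` from the reading step above; the character-level sentence is a made-up example):

```python
# Word-level tokenization of the corpus; tokens[i] is the token list of line i.
tokens = tokenize(lines)
print(tokens[0:2])  # inspect the first two tokenized sentences

# Character-level tokenization of a single example sentence for comparison.
print(tokenize(['the time machine'], token='char')[0])
# ['t', 'h', 'e', ' ', 't', 'i', 'm', 'e', ' ', 'm', 'a', 'c', 'h', 'i', 'n', 'e']
```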
Building a vocabulary
To make the text easier for the model to process, we need to convert strings into numbers.
Therefore, we first build a vocabulary that maps each token to a unique index.
```python
class Vocab(object):
    def __init__(self, tokens, min_freq=0, use_special_tokens=False):
        counter = count_corpus(tokens)  # maps each token to its frequency
        self.token_freqs = list(counter.items())
        self.idx_to_token = []
        if use_special_tokens:
            # padding, begin of sentence, end of sentence, unknown
            self.pad, self.bos, self.eos, self.unk = (0, 1, 2, 3)
            self.idx_to_token += ['<pad>', '<bos>', '<eos>', '<unk>']
        else:
            self.unk = 0
            self.idx_to_token += ['<unk>']
        self.idx_to_token += [token for token, freq in self.token_freqs
                              if freq >= min_freq and token not in self.idx_to_token]
        self.token_to_idx = dict()
        for idx, token in enumerate(self.idx_to_token):
            self.token_to_idx[token] = idx

    def __len__(self):
        return len(self.idx_to_token)

    def __getitem__(self, tokens):
        if not isinstance(tokens, (list, tuple)):
            return self.token_to_idx.get(tokens, self.unk)
        return [self.__getitem__(token) for token in tokens]

    def to_tokens(self, indices):
        if not isinstance(indices, (list, tuple)):
            return self.idx_to_token[indices]
        return [self.idx_to_token[index] for index in indices]

def count_corpus(sentences):
    tokens = [tk for st in sentences for tk in st]
    return collections.Counter(tokens)  # a dict-like counter of each token's occurrence count
```
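To see the vocabulary in action, here is a minimal sketch (assuming `lines` and `tokenize` from above) that builds a `Vocab` and converts a couple of tokenized sentences into index sequences, which is the final preprocessing step before feeding the text to a model:

```python
# Build the vocabulary from the tokenized corpus.
tokens = tokenize(lines)
vocab = Vocab(tokens)

# Peek at a few token-to-index mappings.
print(list(vocab.token_to_idx.items())[:10])

# Convert the tokens of the first two sentences into index sequences.
for i in range(2):
    print('words:', tokens[i])
    print('indices:', vocab[tokens[i]])
```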
Tokenizing with existing tools
Drawbacks of the simple tokenization above (see the sketch after this list):
- Punctuation often carries semantic information, but our method simply discards it
- Words like "shouldn't" and "doesn't" are handled incorrectly
- Words like "Mr." and "Dr." are handled incorrectly
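A small sketch of these failure modes, using a hypothetical sentence and the same regex cleaning as `read_time_machine`:

```python
import re

# Hypothetical sentence: the regex strips punctuation, so the contraction and
# the abbreviation are split or mangled, and a trailing empty token appears.
raw = "Mr. Chen doesn't agree with my suggestion."
cleaned = re.sub('[^a-z]+', ' ', raw.strip().lower())
print(cleaned.split(' '))
# ['mr', 'chen', 'doesn', 't', 'agree', 'with', 'my', 'suggestion', '']
```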
spaCy
```python
import spacy

# Requires the small English model, installable via: python -m spacy download en_core_web_sm
nlp = spacy.load('en_core_web_sm')

# Example sentence used for illustration.
text = "Mr. Chen doesn't agree with my suggestion."
doc = nlp(text)
print([token.text for token in doc])
```
NLTK
```python
from nltk.tokenize import word_tokenize
from nltk import data

# Point NLTK at a local copy of its data (word_tokenize needs the 'punkt' tokenizer models).
data.path.append('/home/kesci/input/nltk_data3784/nltk_data')

# Reuses the same example sentence `text` as in the spaCy block above.
print(word_tokenize(text))
```
