# The Dataset for Pretraining Word Embeddings
:label:`sec_word2vec_data`

Now that we know the technical details of the word2vec models and approximate training methods, let us walk through their implementations. Specifically, we will take the skip-gram model in :numref:`sec_word2vec` and negative sampling in :numref:`sec_approx_train` as an example. In this section, we begin with the dataset for pretraining the word embedding model: the original format of the data will be transformed into minibatches that can be iterated over during training.

```{.python .input}
from d2l import mxnet as d2l
import math
from mxnet import gluon, np
import os
import random
```

```{.python .input}
#@tab pytorch
from d2l import torch as d2l
import math
import torch
import os
import random
```

## Reading the Dataset

The dataset that we use here is the Penn Tree Bank (PTB). This corpus is sampled from Wall Street Journal articles and is split into training, validation, and test sets. In the original format, each line of the text file represents a sentence whose words are separated by spaces. Here we treat each word as a token.

```{.python .input}
#@tab all
#@save
d2l.DATA_HUB['ptb'] = (d2l.DATA_URL + 'ptb.zip',
                       '319d85e578af0cdc590547f26231e4e31cdf1e42')

#@save
def read_ptb():
    """Load the PTB dataset into a list of text lines."""
    data_dir = d2l.download_extract('ptb')
    # Read the training set.
    with open(os.path.join(data_dir, 'ptb.train.txt')) as f:
        raw_text = f.read()
    return [line.split() for line in raw_text.split('\n')]

sentences = read_ptb()
f'# sentences: {len(sentences)}'
```
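For a quick look at the tokenized format, we can print the first few tokens of the first sentence; each sentence is simply a list of word strings.

```{.python .input}
#@tab all
# Peek at the tokenized format: each sentence is a list of word tokens
sentences[0][:10]
```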

After reading the training set, we build a vocabulary for the corpus, where any word that appears less than 10 times is replaced by the "&lt;unk&gt;" token. Note that the original dataset also contains "&lt;unk&gt;" tokens that represent rare (unknown) words.

```{.python .input}
#@tab all
vocab = d2l.Vocab(sentences, min_freq=10)
f'vocab size: {len(vocab)}'
```
## Subsampling

Text data typically have high-frequency words such as "the", "a", and "in": they may even occur billions of times in very large corpora. However, these words often co-occur with many different words in context windows, providing little useful signal. For instance, consider the word "chip" in a context window: intuitively its co-occurrence with the low-frequency word "intel" is more useful in training than its co-occurrence with the high-frequency word "a". Moreover, training with vast amounts of (high-frequency) words is slow. Thus, when training word embedding models, high-frequency words can be *subsampled* :cite:`Mikolov.Sutskever.Chen.ea.2013`. Specifically, each indexed word $w_i$ in the dataset will be discarded with probability

$$ P(w_i) = \max\left(1 - \sqrt{\frac{t}{f(w_i)}}, 0\right),$$

where $f(w_i)$ is the ratio of the number of words $w_i$ to the total number of words in the dataset, and the constant $t$ is a hyperparameter ($10^{-4}$ in the experiment). We can see that only when the relative frequency $f(w_i) > t$ can the (high-frequency) word $w_i$ be discarded, and the higher the relative frequency of the word, the greater the probability of being discarded.
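To make the formula concrete, the following small illustration computes the discard probability for a few hypothetical relative frequencies at $t=10^{-4}$: a word taking up 5% of the corpus is discarded about 95% of the time, while a word with $f(w_i) \leq t$ is never discarded.

```{.python .input}
#@tab all
# Discard probability from the formula above for hypothetical frequencies
t = 1e-4
for f in (0.05, 1e-3, 1e-4):
    print(f'f = {f}: discard probability {max(1 - math.sqrt(t / f), 0):.3f}')
```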

```{.python .input}
#@tab all
#@save
def subsample(sentences, vocab):
    """Subsample high-frequency words."""
    # Exclude unknown tokens '<unk>'
    sentences = [[token for token in line if vocab[token] != vocab.unk]
                 for line in sentences]
    counter = d2l.count_corpus(sentences)
    num_tokens = sum(counter.values())

    # Return True if `token` is kept during subsampling
    def keep(token):
        return (random.uniform(0, 1) <
                math.sqrt(1e-4 / counter[token] * num_tokens))

    return ([[token for token in line if keep(token)] for line in sentences],
            counter)

subsampled, counter = subsample(sentences, vocab)
```

The following code snippet plots the histogram of the number of tokens per sentence before and after subsampling. As expected, subsampling significantly shortens sentences by dropping high-frequency words, which will lead to training speedup.

```{.python .input}
#@tab all
d2l.show_list_len_pair_hist(['origin', 'subsampled'], '# tokens per sentence',
                            'count', sentences, subsampled);
```

For individual tokens, the sampling rate of the high-frequency word "the" is less than 1/20: fewer than one in twenty of its occurrences are kept.

```{.python .input}
#@tab all
def compare_counts(token):
    return (f'# of "{token}": '
            f'before={sum([l.count(token) for l in sentences])}, '
            f'after={sum([l.count(token) for l in subsampled])}')

compare_counts('the')
```

In contrast, the low-frequency word "join" is completely kept.

```{.python .input}
#@tab all
compare_counts('join')
```

After subsampling, we map tokens to their indices for the corpus.

```{.python .input}
#@tab all
corpus = [vocab[line] for line in subsampled]
corpus[:3]
```

## Extracting Center Words and Context Words

The following `get_centers_and_contexts` function extracts all the center words and their context words from `corpus`. It uniformly samples an integer between 1 and `max_window_size` at random as the context window size. For any center word, those words whose distance from it does not exceed the sampled context window size are its context words.

```{.python .input}
#@tab all
#@save
def get_centers_and_contexts(corpus, max_window_size):
    """Return center words and context words in skip-gram."""
    centers, contexts = [], []
    for line in corpus:
        # To form a "center word--context word" pair, each sentence needs to
        # have at least 2 words
        if len(line) < 2:
            continue
        centers += line
        for i in range(len(line)):  # Context window centered at `i`
            window_size = random.randint(1, max_window_size)
            indices = list(range(max(0, i - window_size),
                                 min(len(line), i + 1 + window_size)))
            # Exclude the center word from the context words
            indices.remove(i)
            contexts.append([line[idx] for idx in indices])
    return centers, contexts
```

Next, we create an artificial dataset containing two sentences of 7 and 3 words, respectively. Let the maximum context window size be 2 and print all the center words and their context words.

```{.python .input}
#@tab all
tiny_dataset = [list(range(7)), list(range(7, 10))]
print('dataset', tiny_dataset)
for center, context in zip(*get_centers_and_contexts(tiny_dataset, 2)):
    print('center', center, 'has contexts', context)
```

When training on the PTB dataset, we set the maximum context window size to 5. The following extracts all the center words and their context words in the dataset.

```{.python .input}
#@tab all
all_centers, all_contexts = get_centers_and_contexts(corpus, 5)
f'# center-context pairs: {sum([len(contexts) for contexts in all_contexts])}'
```

## Negative Sampling

We use negative sampling for approximate training. To sample noise words according to a predefined distribution, we define the following `RandomGenerator` class, where the (possibly unnormalized) sampling distribution is passed via the argument `sampling_weights`.

```{.python .input}
#@tab all
#@save
class RandomGenerator:
    """Randomly draw among {1, ..., n} according to n sampling weights."""
    def __init__(self, sampling_weights):
        # Candidates are indexed starting from 1 (index 0 is excluded)
        self.population = list(range(1, len(sampling_weights) + 1))
        self.sampling_weights = sampling_weights
        self.candidates = []
        self.i = 0

    def draw(self):
        if self.i == len(self.candidates):
            # Cache `k` random sampling results
            self.candidates = random.choices(
                self.population, self.sampling_weights, k=10000)
            self.i = 0
        self.i += 1
        return self.candidates[self.i - 1]
```
For example, we can draw 10 random variables $X$ among indices 1, 2, and 3 with sampling probabilities $P(X=1)=2/9, P(X=2)=3/9$, and $P(X=3)=4/9$ as follows.

```{.python .input}
#@tab all
generator = RandomGenerator([2, 3, 4])
[generator.draw() for _ in range(10)]
```
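Drawing from the cached candidates is what keeps data loading fast: calling `random.choices` once per draw would be much slower. The following optional micro-benchmark (the `time_draws` helper and the draw count are just for illustration) gives a rough sense of the difference; see also the exercise on the cache size `k` at the end of this section.

```{.python .input}
#@tab all
import time

def time_draws(draw_fn, n=100000):
    """Time `n` calls of `draw_fn` (an illustrative helper)."""
    start = time.time()
    for _ in range(n):
        draw_fn()
    return time.time() - start

def uncached_draw():
    # Draw a single index without caching
    return random.choices([1, 2, 3], [2, 3, 4], k=1)[0]

cached = RandomGenerator([2, 3, 4])
f'cached: {time_draws(cached.draw):.2f}s, uncached: {time_draws(uncached_draw):.2f}s'
```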

For a pair of center word and context word, we randomly sample $K$ (5 in the experiment) noise words. According to the suggestions in the word2vec paper, the sampling probability $P(w)$ of a noise word $w$ is set to its relative frequency in the dictionary raised to the power of 0.75 :cite:`Mikolov.Sutskever.Chen.ea.2013`.
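To see what the power of 0.75 does, the short illustration below (with made-up counts) shows that it flattens the distribution: very frequent words receive relatively less sampling weight, and rare words relatively more, than under raw frequencies.

```{.python .input}
#@tab all
# Effect of the 0.75 power on sampling probabilities (hypothetical counts)
example_counts = [100, 10, 1]
for power in (1, 0.75):
    weights = [count**power for count in example_counts]
    total = sum(weights)
    print(f'power={power}:', [round(w / total, 3) for w in weights])
```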

```{.python .input}
#@tab all
#@save
def get_negatives(all_contexts, vocab, counter, K):
    """Return noise words in negative sampling."""
    # Sampling weights for words with indices 1, 2, ... (index 0 is the
    # excluded unknown token) in the vocabulary
    sampling_weights = [counter[vocab.to_tokens(i)]**0.75
                        for i in range(1, len(vocab))]
    all_negatives, generator = [], RandomGenerator(sampling_weights)
    for contexts in all_contexts:
        negatives = []
        while len(negatives) < len(contexts) * K:
            neg = generator.draw()
            # Noise words cannot be context words
            if neg not in contexts:
                negatives.append(neg)
        all_negatives.append(negatives)
    return all_negatives

all_negatives = get_negatives(all_contexts, vocab, counter, 5)
```

## Loading Training Examples in Minibatches
:label:`subsec_word2vec-minibatch-loading`

After all the center words together with their context words and sampled noise words are extracted, they will be transformed into minibatches of examples that can be iteratively loaded during training.

In a minibatch, the $i^\mathrm{th}$ example includes a center word and its $n_i$ context words and $m_i$ noise words. Due to varying context window sizes, $n_i+m_i$ varies for different $i$. Thus, for each example we concatenate its context words and noise words in the `contexts_negatives` variable, and pad zeros until the concatenation length reaches $\max_i n_i+m_i$ (`max_len`). To exclude paddings in the calculation of the loss, we define a mask variable `masks`. There is a one-to-one correspondence between elements in `masks` and elements in `contexts_negatives`, where zeros (otherwise ones) in `masks` correspond to paddings in `contexts_negatives`.

To distinguish between positive and negative examples, we separate context words from noise words in `contexts_negatives` via a `labels` variable. Similar to `masks`, there is also a one-to-one correspondence between elements in `labels` and elements in `contexts_negatives`, where ones (otherwise zeros) in `labels` correspond to context words (positive examples) in `contexts_negatives`.

The above idea is implemented in the following `batchify` function. Its input `data` is a list with length equal to the batch size, where each element is an example consisting of the center word `center`, its context words `context`, and its noise words `negative`. This function returns a minibatch that can be loaded for calculations during training, including the mask variable.

```{.python .input}
#@tab all
#@save
def batchify(data):
    """Return a minibatch of examples for skip-gram with negative sampling."""
    max_len = max(len(c) + len(n) for _, c, n in data)
    centers, contexts_negatives, masks, labels = [], [], [], []
    for center, context, negative in data:
        cur_len = len(context) + len(negative)
        centers += [center]
        contexts_negatives += [context + negative + [0] * (max_len - cur_len)]
        masks += [[1] * cur_len + [0] * (max_len - cur_len)]
        labels += [[1] * len(context) + [0] * (max_len - len(context))]
    return (d2l.reshape(d2l.tensor(centers), (-1, 1)), d2l.tensor(
        contexts_negatives), d2l.tensor(masks), d2l.tensor(labels))
```

Let us test this function using a minibatch of two examples.

```{.python .input}
#@tab all
x_1 = (1, [2, 2], [3, 3, 3, 3])
x_2 = (1, [2, 2, 2], [3, 3])
batch = batchify((x_1, x_2))

names = ['centers', 'contexts_negatives', 'masks', 'labels']
for name, data in zip(names, batch):
    print(name, '=', data)
```
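As a quick sanity check of the padding behavior: `max_len` is 6 for these two examples, so the second example gets one zero of padding, the masks mark 6 and 5 valid positions, and the labels mark 2 and 3 context-word positions. The following illustrative assertions confirm this.

```{.python .input}
#@tab all
centers, contexts_negatives, masks, labels = batch
assert contexts_negatives.shape == (2, 6)
assert int(masks.sum()) == 6 + 5   # non-padding positions in the two examples
assert int(labels.sum()) == 2 + 3  # context-word (positive) positions
```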

## Putting All Things Together

Last, we define the `load_data_ptb` function that reads the PTB dataset and returns the data iterator and the vocabulary.

```{.python .input}
#@save
def load_data_ptb(batch_size, max_window_size, num_noise_words):
    """Download the PTB dataset and then load it into memory."""
    sentences = read_ptb()
    vocab = d2l.Vocab(sentences, min_freq=10)
    subsampled, counter = subsample(sentences, vocab)
    corpus = [vocab[line] for line in subsampled]
    all_centers, all_contexts = get_centers_and_contexts(
        corpus, max_window_size)
    all_negatives = get_negatives(
        all_contexts, vocab, counter, num_noise_words)
    dataset = gluon.data.ArrayDataset(
        all_centers, all_contexts, all_negatives)
    data_iter = gluon.data.DataLoader(
        dataset, batch_size, shuffle=True, batchify_fn=batchify,
        num_workers=d2l.get_dataloader_workers())
    return data_iter, vocab
```

```{.python .input}
#@tab pytorch
#@save
def load_data_ptb(batch_size, max_window_size, num_noise_words):
    """Download the PTB dataset and then load it into memory."""
    num_workers = d2l.get_dataloader_workers()
    sentences = read_ptb()
    vocab = d2l.Vocab(sentences, min_freq=10)
    subsampled, counter = subsample(sentences, vocab)
    corpus = [vocab[line] for line in subsampled]
    all_centers, all_contexts = get_centers_and_contexts(
        corpus, max_window_size)
    all_negatives = get_negatives(
        all_contexts, vocab, counter, num_noise_words)

    class PTBDataset(torch.utils.data.Dataset):
        def __init__(self, centers, contexts, negatives):
            assert len(centers) == len(contexts) == len(negatives)
            self.centers = centers
            self.contexts = contexts
            self.negatives = negatives

        def __getitem__(self, index):
            return (self.centers[index], self.contexts[index],
                    self.negatives[index])

        def __len__(self):
            return len(self.centers)

    dataset = PTBDataset(all_centers, all_contexts, all_negatives)

    data_iter = torch.utils.data.DataLoader(dataset, batch_size, shuffle=True,
                                            collate_fn=batchify,
                                            num_workers=num_workers)
    return data_iter, vocab
```
Let us print the first minibatch of the data iterator.

```{.python .input}
#@tab all
data_iter, vocab = load_data_ptb(512, 5, 5)
for batch in data_iter:
    for name, data in zip(names, batch):
        print(name, 'shape:', data.shape)
    break
```

## Summary

* High-frequency words may not be so useful in training. We can subsample them for speedup in training.
* For computational efficiency, we load examples in minibatches. We can define other variables to distinguish paddings from non-paddings, and positive examples from negative ones.

## Exercises

1. How does the running time of the code in this section change if subsampling is not used?
2. The `RandomGenerator` class caches `k` random sampling results. Set `k` to other values and see how it affects the data loading speed.
3. What other hyperparameters in the code of this section may affect the data loading speed?

:begin_tab:`mxnet`
Discussions
:end_tab:

:begin_tab:`pytorch`
Discussions
:end_tab: