Data Augmentation

1. Edge perturbation: In these augmentations, we randomly add or delete edges in the existing graph with a small probability to create a new graph. We also cap the maximum fraction of edges to perturb, so that we do not end up changing the underlying structure of the graph. A code snippet for this view generation is given below:

import torch
import numpy as np
from torch_geometric.data import Data

class EdgePerturbation():
    """
    Edge perturbation on the given graph or batched graphs. Class objects callable via
    method :meth:`views_fn`.

    Args:
        add (bool, optional): Set :obj:`True` to randomly add edges in a given graph.
            (default: :obj:`True`)
        drop (bool, optional): Set :obj:`True` to randomly drop edges in a given graph.
            (default: :obj:`False`)
        ratio (float, optional): Percentage of edges to add or drop. (default: :obj:`0.1`)
    """
    def __init__(self, add=True, drop=False, ratio=0.1):
        self.add = add
        self.drop = drop
        self.ratio = ratio

    def do_trans(self, data):
        node_num, _ = data.x.size()
        _, edge_num = data.edge_index.size()
        perturb_num = int(edge_num * self.ratio)

        edge_index = data.edge_index.detach().clone()
        idx_remain = edge_index
        idx_add = torch.tensor([]).reshape(2, -1).long()

        if self.drop:
            idx_remain = edge_index[:, np.random.choice(edge_num, edge_num - perturb_num, replace=False)]
        if self.add:
            idx_add = torch.randint(node_num, (2, perturb_num))

        new_edge_index = torch.cat((idx_remain, idx_add), dim=1)
        new_edge_index = torch.unique(new_edge_index, dim=1)

        return Data(x=data.x, edge_index=new_edge_index)

2. Diffusion: In these augmentations, the adjacency matrix is converted into a diffusion matrix, using either Personalized PageRank or a heat kernel. The diffusion matrix provides a global view of the graph, in contrast to the local view provided by the adjacency matrix.
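Concretely, with adjacency matrix A and diagonal degree matrix D (my notation, chosen to match the snippet below rather than taken from the original post), the two diffusion instantiations can be written as:

```latex
S^{\mathrm{PPR}} = \alpha \left( I - (1 - \alpha)\, D^{-1/2} A D^{-1/2} \right)^{-1},
\qquad
S^{\mathrm{heat}} = \exp\!\left( t\, A D^{-1} - t \right)
```

where α is the teleport probability and t is the diffusion time. Note that the snippet applies an elementwise exponential for the heat kernel rather than a true matrix exponential as in the MVGRL formulation.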

import torch
from torch_geometric.data import Data
from torch_geometric.utils import to_dense_adj, dense_to_sparse

class Diffusion():
    """
    Diffusion on the given graph or batched graphs, used in
    `MVGRL <https://arxiv.org/pdf/2006.05582v1.pdf>`_. Class objects callable via
    method :meth:`views_fn`.

    Args:
        mode (string, optional): Diffusion instantiation mode with two options:
            :obj:`"ppr"`: Personalized PageRank; :obj:`"heat"`: heat kernel.
            (default: :obj:`"ppr"`)
        alpha (float, optional): Teleport probability in a random walk. (default: :obj:`0.2`)
        t (float, optional): Diffusion time. (default: :obj:`5`)
        add_self_loop (bool, optional): Set True to add self-loops to edge_index.
            (default: :obj:`True`)
    """
    def __init__(self, mode="ppr", alpha=0.2, t=5, add_self_loop=True):
        self.mode = mode
        self.alpha = alpha
        self.t = t
        self.add_self_loop = add_self_loop

    def do_trans(self, data):
        node_num, _ = data.x.size()
        if self.add_self_loop:
            sl = torch.tensor([[n, n] for n in range(node_num)]).t()
            edge_index = torch.cat((data.edge_index, sl), dim=1)
        else:
            edge_index = data.edge_index.detach().clone()

        orig_adj = to_dense_adj(edge_index)[0]
        orig_adj = torch.where(orig_adj > 1, torch.ones_like(orig_adj), orig_adj)
        d = torch.diag(torch.sum(orig_adj, 1))

        if self.mode == "ppr":
            dinv = torch.inverse(torch.sqrt(d))
            at = torch.matmul(torch.matmul(dinv, orig_adj), dinv)
            diff_adj = self.alpha * torch.inverse(torch.eye(orig_adj.shape[0]) - (1 - self.alpha) * at)
        elif self.mode == "heat":
            diff_adj = torch.exp(self.t * (torch.matmul(orig_adj, torch.inverse(d)) - 1))
        else:
            raise Exception("Must choose one diffusion instantiation mode from 'ppr' and 'heat'!")

        edge_ind, edge_attr = dense_to_sparse(diff_adj)
        return Data(x=data.x, edge_index=edge_ind, edge_attr=edge_attr)

3. Node dropping: In these augmentations, we randomly delete a small fraction of nodes to create a new graph. All edges linked to a deleted node are removed as well. A code snippet for this view generation is:

import torch
from torch_geometric.data import Data
from torch_geometric.utils import subgraph

class UniformSample():
    """
    Uniform node dropping on the given graph or batched graphs.
    Class objects callable via method :meth:`views_fn`.

    Args:
        ratio (float, optional): Ratio of nodes to be dropped. (default: :obj:`0.1`)
    """
    def __init__(self, ratio=0.1):
        self.ratio = ratio

    def do_trans(self, data):
        node_num, _ = data.x.size()
        _, edge_num = data.edge_index.size()
        keep_num = int(node_num * (1 - self.ratio))

        idx_nondrop = torch.randperm(node_num)[:keep_num]
        mask_nondrop = torch.zeros_like(data.x[:, 0]).scatter_(0, idx_nondrop, 1.0).bool()

        edge_index, _ = subgraph(mask_nondrop, data.edge_index, relabel_nodes=True, num_nodes=node_num)
        return Data(x=data.x[mask_nondrop], edge_index=edge_index)

torch.randperm(n) returns a random permutation of the integers from 0 to n-1.

References on the scatter function:
https://www.cnblogs.com/dogecheng/p/11938009.html
https://zhuanlan.zhihu.com/p/339043454

scatter(dim, index, src) takes 3 arguments:

  • dim: the dimension along which to index
  • index: the indices of the elements to scatter
  • src: the source to scatter, either a scalar or a tensor

So the line mask_nondrop = torch.zeros_like(data.x[:,0]).scatter_(0, idx_nondrop, 1.0).bool() builds an all-zero vector with one entry per node (the number of rows of x), writes 1.0 at the indices of the kept nodes, and converts the result into a boolean keep-mask.
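The same mask construction can be mirrored in plain NumPy (an illustrative stand-in for the torch operations; the sizes and indices here are made up for the example):

```python
import numpy as np

node_num = 5
idx_nondrop = np.array([0, 2, 3])  # indices of the nodes to keep

# zeros_like(data.x[:, 0]) -> one slot per node; scatter_(0, idx, 1.0) -> write 1 at kept indices
mask_nondrop = np.zeros(node_num)
mask_nondrop[idx_nondrop] = 1.0
mask_nondrop = mask_nondrop.astype(bool)

print(mask_nondrop)  # [ True False  True  True False]
```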

4. Random-walk-based sampling: In these augmentations, we perform a random walk on the graph and keep adding nodes until we reach a fixed, predetermined number of nodes, then form a subgraph from them. By random walk we mean that if you are currently at a node, you traverse one of that node's edges chosen at random. A code snippet for this view generation is:

import torch
import numpy as np
from torch_geometric.data import Data
from torch_geometric.utils import subgraph

class RWSample():
    """
    Subgraph sampling based on random walk on the given graph or batched graphs.
    Class objects callable via method :meth:`views_fn`.

    Args:
        ratio (float, optional): Percentage of nodes to sample from the graph.
            (default: :obj:`0.1`)
        add_self_loop (bool, optional): Set True to add self-loops to edge_index.
            (default: :obj:`False`)
    """
    def __init__(self, ratio=0.1, add_self_loop=False):
        self.ratio = ratio
        self.add_self_loop = add_self_loop

    def do_trans(self, data):
        node_num, _ = data.x.size()
        sub_num = int(node_num * self.ratio)

        if self.add_self_loop:
            sl = torch.tensor([[n, n] for n in range(node_num)]).t()
            edge_index = torch.cat((data.edge_index, sl), dim=1)
        else:
            edge_index = data.edge_index.detach().clone()

        # Start from a random node and grow the subgraph along randomly chosen edges.
        idx_sub = [np.random.randint(node_num, size=1)[0]]
        idx_neigh = set([n.item() for n in edge_index[1][edge_index[0] == idx_sub[0]]])

        count = 0
        while len(idx_sub) <= sub_num:
            count = count + 1
            if count > node_num:
                break
            if len(idx_neigh) == 0:
                break
            sample_node = np.random.choice(list(idx_neigh))
            if sample_node in idx_sub:
                continue
            idx_sub.append(sample_node)
            # set.union() is not in-place, so the result must be assigned back.
            idx_neigh = idx_neigh.union(set([n.item() for n in edge_index[1][edge_index[0] == idx_sub[-1]]]))

        idx_sub = torch.LongTensor(idx_sub).to(data.x.device)
        mask_nondrop = torch.zeros_like(data.x[:, 0]).scatter_(0, idx_sub, 1.0).bool()
        edge_index, _ = subgraph(mask_nondrop, data.edge_index, relabel_nodes=True, num_nodes=node_num)
        return Data(x=data.x[mask_nondrop], edge_index=edge_index)
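The sampling loop above can be illustrated on a plain-Python toy graph (a simplified sketch of the same frontier-expansion logic, without the torch/PyG machinery; the graph, seed, and helper name are made up for illustration):

```python
import random

# toy undirected graph as an adjacency list (hypothetical example)
adj = {0: [1, 2], 1: [0, 3], 2: [0, 3], 3: [1, 2, 4], 4: [3]}

def rw_sample(adj, sub_num, seed=0):
    """Grow a node set by repeatedly sampling from the frontier of visited nodes."""
    random.seed(seed)
    idx_sub = [0]                    # start node
    idx_neigh = set(adj[0])          # current frontier
    while len(idx_sub) < sub_num and idx_neigh:
        node = random.choice(sorted(idx_neigh))
        idx_neigh.discard(node)
        idx_sub.append(node)
        idx_neigh |= set(adj[node])  # expand the frontier with the new node's neighbors
        idx_neigh -= set(idx_sub)    # never revisit already-sampled nodes
    return sorted(idx_sub)

print(rw_sample(adj, sub_num=3))
```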

5. Node attribute masking: In these augmentations, we mask the features of some nodes to create an augmented graph. The mask is created by sampling each of its entries from a Gaussian with pre-specified mean and variance. The hope is to learn representations that are invariant to node features and depend primarily on the graph structure.

import random
import torch
import numpy as np
from torch_geometric.data import Data

class NodeAttrMask():
    """
    Node attribute masking on the given graph or batched graphs.
    Class objects callable via method :meth:`views_fn`.

    Args:
        mode (string, optional): Masking mode with three options:
            :obj:`"whole"`: mask all feature dimensions of the selected node with a Gaussian distribution;
            :obj:`"partial"`: mask only selected feature dimensions with a Gaussian distribution;
            :obj:`"onehot"`: mask all feature dimensions of the selected node with a one-hot vector.
            (default: :obj:`"whole"`)
        mask_ratio (float, optional): The ratio of node attributes to be masked. (default: :obj:`0.1`)
        mask_mean (float, optional): Mean of the Gaussian distribution to generate masking values.
            (default: :obj:`0.5`)
        mask_std (float, optional): Standard deviation of the distribution to generate masking values.
            Must be non-negative. (default: :obj:`0.5`)
    """
    def __init__(self, mode='whole', mask_ratio=0.1, mask_mean=0.5, mask_std=0.5, return_mask=False):
        self.mode = mode
        self.mask_ratio = mask_ratio
        self.mask_mean = mask_mean
        self.mask_std = mask_std
        self.return_mask = return_mask

    def do_trans(self, data):
        node_num, feat_dim = data.x.size()
        x = data.x.detach().clone()

        if self.mode == "whole":
            mask = torch.zeros(node_num)
            mask_num = int(node_num * self.mask_ratio)
            idx_mask = np.random.choice(node_num, mask_num, replace=False)
            x[idx_mask] = torch.tensor(np.random.normal(loc=self.mask_mean, scale=self.mask_std,
                                                        size=(mask_num, feat_dim)), dtype=torch.float32)
            mask[idx_mask] = 1
        elif self.mode == "partial":
            mask = torch.zeros((node_num, feat_dim))
            for i in range(node_num):
                for j in range(feat_dim):
                    if random.random() < self.mask_ratio:
                        x[i][j] = torch.tensor(np.random.normal(loc=self.mask_mean,
                                                                scale=self.mask_std), dtype=torch.float32)
                        mask[i][j] = 1
        elif self.mode == "onehot":
            mask = torch.zeros(node_num)
            mask_num = int(node_num * self.mask_ratio)
            idx_mask = np.random.choice(node_num, mask_num, replace=False)
            x[idx_mask] = torch.tensor(np.eye(feat_dim)[np.random.randint(0, feat_dim, size=(mask_num))], dtype=torch.float32)
            mask[idx_mask] = 1
        else:
            raise Exception("Masking mode option '{0:s}' is not available!".format(self.mode))

        if self.return_mask:
            return Data(x=x, edge_index=data.edge_index, mask=mask)
        else:
            return Data(x=x, edge_index=data.edge_index)
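The "whole" masking mode boils down to overwriting the feature rows of a few sampled nodes with Gaussian noise; a minimal NumPy sketch (shapes, seed, and hyperparameters here are illustrative, not from the original):

```python
import numpy as np

rng = np.random.default_rng(0)
node_num, feat_dim = 10, 4
x = np.ones((node_num, feat_dim))        # stand-in node features

mask_ratio, mask_mean, mask_std = 0.3, 0.5, 0.5
mask_num = int(node_num * mask_ratio)    # number of nodes to mask
idx_mask = rng.choice(node_num, mask_num, replace=False)

# overwrite every feature dimension of the selected nodes with Gaussian samples
x[idx_mask] = rng.normal(loc=mask_mean, scale=mask_std, size=(mask_num, feat_dim))

mask = np.zeros(node_num)
mask[idx_mask] = 1
print(int(mask.sum()))  # 3 nodes masked
```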

Graph Neural Networks


GraphSAGE

import torch.nn as nn
from torch_geometric.nn import SAGEConv

class GraphSAGE(nn.Module):
    def __init__(self, feat_dim, hidden_dim, n_layers):
        super(GraphSAGE, self).__init__()
        self.convs = nn.ModuleList()
        self.acts = nn.ModuleList()
        self.n_layers = n_layers
        a = nn.ReLU()
        for i in range(n_layers):
            start_dim = hidden_dim if i else feat_dim
            conv = SAGEConv(start_dim, hidden_dim)
            self.convs.append(conv)
            self.acts.append(a)

    def forward(self, data):
        x, edge_index, batch = data
        for i in range(self.n_layers):
            x = self.convs[i](x, edge_index)
            x = self.acts[i](x)
        return x

Contrastive Loss

The goal is to make the agreement between positive pairs higher than that between negative pairs. For a given graph, its positives are constructed using the data augmentation methods discussed earlier, while all the other graphs in the mini-batch serve as negatives. Our self-supervised model can be trained with the InfoNCE objective [6] or the Jensen-Shannon Estimator [7].
While the derivation of these objectives is beyond the scope of this blog, the intuition behind them is rooted in information theory: they attempt to efficiently estimate the mutual information between views. Code snippets implementing them are given below.
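For reference, the InfoNCE (NT-Xent) objective can be written for a positive pair (z_i, z_j) with temperature τ (my transcription of the SimCLR formulation, not taken from the original post):

```latex
\ell_i = -\log
\frac{\exp\!\left(\mathrm{sim}(z_i, z_j)/\tau\right)}
{\sum_{k \neq i,\, j} \exp\!\left(\mathrm{sim}(z_i, z_k)/\tau\right)}
```

where sim(·,·) is cosine similarity; as in the snippet below, the positive term is excluded from the denominator.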

import torch

def infonce(readout_anchor, readout_positive, tau=0.5, norm=True):
    """
    The InfoNCE (NT-XENT) loss in contrastive learning. The implementation
    follows the paper `A Simple Framework for Contrastive Learning of
    Visual Representations <https://arxiv.org/abs/2002.05709>`.

    Args:
        readout_anchor, readout_positive: Tensor of shape [batch_size, feat_dim]
        tau: Float. Usually in (0, 1].
        norm: Boolean. Whether to apply normalization.
    """
    batch_size = readout_anchor.shape[0]
    sim_matrix = torch.einsum("ik,jk->ij", readout_anchor, readout_positive)

    if norm:
        readout_anchor_abs = readout_anchor.norm(dim=1)
        readout_positive_abs = readout_positive.norm(dim=1)
        sim_matrix = sim_matrix / torch.einsum("i,j->ij", readout_anchor_abs, readout_positive_abs)

    sim_matrix = torch.exp(sim_matrix / tau)
    pos_sim = sim_matrix[range(batch_size), range(batch_size)]
    loss = pos_sim / (sim_matrix.sum(dim=1) - pos_sim)
    loss = - torch.log(loss).mean()
    return loss
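As a sanity check, the same computation can be mirrored in plain NumPy (an illustrative re-implementation, not part of the original code; the function name and inputs are made up for the example):

```python
import numpy as np

def infonce_np(anchor, positive, tau=0.5):
    """NumPy mirror of the InfoNCE loss: cosine similarities, temperature
    scaling, positive pairs on the diagonal, negatives off-diagonal."""
    a = anchor / np.linalg.norm(anchor, axis=1, keepdims=True)
    p = positive / np.linalg.norm(positive, axis=1, keepdims=True)
    sim = np.exp(a @ p.T / tau)                 # [batch, batch] similarity matrix
    pos = np.diag(sim)                          # positive-pair terms
    return float(np.mean(-np.log(pos / (sim.sum(axis=1) - pos))))

rng = np.random.default_rng(0)
anchor = rng.normal(size=(8, 16))
loss_identical = infonce_np(anchor, anchor)                    # perfectly aligned views
loss_random = infonce_np(anchor, rng.normal(size=(8, 16)))     # unrelated "positive"
print(loss_identical < loss_random)  # aligned views should give the lower loss
```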
import torch
import numpy as np
import torch.nn.functional as F

def get_expectation(masked_d_prime, positive=True):
    """
    Args:
        masked_d_prime: Tensor of shape [n_graphs, n_graphs] for global_global,
            tensor of shape [n_nodes, n_graphs] for local_global.
        positive (bool): Set True if the d_prime is masked for positive pairs,
            set False for negative pairs.
    """
    log_2 = np.log(2.)
    if positive:
        score = log_2 - F.softplus(-masked_d_prime)
    else:
        score = F.softplus(-masked_d_prime) + masked_d_prime - log_2
    return score

def jensen_shannon(readout_anchor, readout_positive):
    """
    The Jensen-Shannon Estimator of Mutual Information used in contrastive learning. The
    implementation follows the paper `Learning deep representations by mutual information
    estimation and maximization <https://arxiv.org/abs/1808.06670>`.

    Note: The JSE loss implementation can produce negative values because a :obj:`-2log2` shift is
    added to the computation of JSE, for the sake of consistency with other f-convergence losses.

    Args:
        readout_anchor, readout_positive: Tensor of shape [batch_size, feat_dim].
    """
    batch_size = readout_anchor.shape[0]

    pos_mask = torch.zeros((batch_size, batch_size))
    neg_mask = torch.ones((batch_size, batch_size))
    for graphidx in range(batch_size):
        pos_mask[graphidx][graphidx] = 1.
        neg_mask[graphidx][graphidx] = 0.

    d_prime = torch.matmul(readout_anchor, readout_positive.t())

    E_pos = get_expectation(d_prime * pos_mask, positive=True).sum()
    E_pos = E_pos / batch_size
    E_neg = get_expectation(d_prime * neg_mask, positive=False).sum()
    E_neg = E_neg / (batch_size * (batch_size - 1))
    return E_neg - E_pos

We can now piece together the building blocks we have seen so far and train our model without any labeled data. For more hands-on experience, please refer to our Colab Notebook, which combines the various self-supervised learning techniques. We provide an easy-to-use interface for training your own models, along with the flexibility to try different augmentations, GNNs, and contrastive losses. Our entire codebase is available on GitHub.

Downstream Tasks

(Figure: classifying molecular graphs into multiple odor classes.)

Let us consider the task of graph classification, which refers to the problem of classifying graphs into different classes based on some structural graph properties. Here, we want to embed entire graphs in a way that makes them separable in the latent space given the task at hand. Our model consists of a GNN encoder and a classifier head, as shown in the snippet below:

import os
import torch
import torch.nn as nn

class GraphClassificationModel(nn.Module):
    """
    Model for graph classification.
    GNN Encoder followed by linear layer.

    Args:
        feat_dim (int): The dimension of input node features.
        hidden_dim (int): The dimension of node-level (local) embeddings.
        n_layers (int, optional): The number of GNN layers in the encoder. (default: :obj:`5`)
        gnn (string, optional): The type of GNN layer, :obj:`gcn` or :obj:`gin` or :obj:`gat`
            or :obj:`graphsage` or :obj:`resgcn` or :obj:`sgc`. (default: :obj:`gcn`)
        load (string, optional): The SSL model to be loaded. The GNN encoder will be
            initialized with pretrained SSL weights, and only the classifier head will
            be trained. Otherwise, GNN encoder and classifier head are trained end-to-end.
    """
    def __init__(self, feat_dim, hidden_dim, n_layers, output_dim, gnn, load=None):
        super(GraphClassificationModel, self).__init__()

        # Encoder is a wrapper class for easy instantiation of pre-implemented graph encoders.
        self.encoder = Encoder(feat_dim, hidden_dim, n_layers=n_layers, gnn=gnn)

        if load:
            ckpt = torch.load(os.path.join("logs", load, "best_model.ckpt"))
            self.encoder.load_state_dict(ckpt["state"])
            for param in self.encoder.parameters():
                param.requires_grad = False

        if gnn in ["resgcn", "sgc"]:
            feat_dim = hidden_dim
        else:
            feat_dim = n_layers * hidden_dim
        self.classifier = nn.Linear(feat_dim, output_dim)

    def forward(self, data):
        embeddings = self.encoder(data)
        scores = self.classifier(embeddings)
        return scores

Dataset: TU Dortmund University has collected a wide variety of graph datasets known as TUDatasets, accessible via torch_geometric.datasets.TUDataset in PyG. We will experiment on one of the smaller datasets, MUTAG. Each graph in this dataset represents a chemical compound and carries a binary label indicating its "mutagenic effect on a specific Gram-negative bacterium". The dataset comprises 188 graphs, with 18 nodes and 20 edges per graph on average. We intend to perform binary classification on this dataset.
Data preprocessing: We split the dataset into 131 training, 37 validation, and 20 test graph samples. We also add an extra feature to each node by representing its degree as a one-hot encoding. Traditional handcrafted features such as node centrality, clustering coefficient, and graphlet counts could also be included for richer representations.

import torch
import random
import torch.nn.functional as F
from torch_geometric.utils import degree
from torch_geometric.datasets import TUDataset

DATA_SPLIT = [0.7, 0.2, 0.1]  # Train / val / test split ratio

def get_max_deg(dataset):
    """
    Find the max degree across all nodes in all graphs.
    """
    max_deg = 0
    for data in dataset:
        row, col = data.edge_index
        num_nodes = data.num_nodes
        deg = degree(row, num_nodes)
        deg = max(deg).item()
        if deg > max_deg:
            max_deg = int(deg)
    return max_deg

class CatDegOnehot(object):
    """
    Adds the node degree as one-hot encodings to the node features.

    Args:
        max_degree (int): Maximum degree.
        in_degree (bool, optional): If set to :obj:`True`, will compute the in-
            degree of nodes instead of the out-degree. (default: :obj:`False`)
        cat (bool, optional): Concat node degrees to node features instead
            of replacing them. (default: :obj:`True`)
    """
    def __init__(self, max_degree, in_degree=False, cat=True):
        self.max_degree = max_degree
        self.in_degree = in_degree
        self.cat = cat

    def __call__(self, data):
        idx, x = data.edge_index[1 if self.in_degree else 0], data.x
        deg = degree(idx, data.num_nodes, dtype=torch.long)
        deg = F.one_hot(deg, num_classes=self.max_degree + 1).to(torch.float)
        if x is not None and self.cat:
            x = x.view(-1, 1) if x.dim() == 1 else x
            data.x = torch.cat([x, deg.to(x.dtype)], dim=-1)
        else:
            data.x = deg
        return data

def split_dataset(dataset, train_data_percent=1.0):
    """
    Splits the data into train / val / test sets.

    Args:
        dataset (list): all graphs in the dataset.
        train_data_percent (float): Fraction of training data
            which is labelled. (default 1.0)
    """
    random.shuffle(dataset)
    n = len(dataset)
    train_split, val_split, test_split = DATA_SPLIT
    train_end = int(n * train_split * train_data_percent)
    val_end = train_end + int(n * val_split)
    train_dataset = [i for i in dataset[:train_end]]
    val_dataset = [i for i in dataset[train_end:val_end]]
    test_dataset = [i for i in dataset[val_end:]]
    return train_dataset, val_dataset, test_dataset

# load MUTAG from TUDataset
dataset = TUDataset(root="/tmp/TUDataset/MUTAG", name="MUTAG", use_node_attr=True)

# expand node features by adding node degrees as one-hot encodings
max_degree = get_max_deg(dataset)
transform = CatDegOnehot(max_degree)
dataset = [transform(graph) for graph in dataset]

Training: When trained with a GCN encoder using cross-entropy loss and the Adam optimizer, we achieve a classification accuracy of 60%. The accuracy is not very high due to the limited amount of labeled data.
Now, let's see whether we can improve performance using the self-supervised techniques learned earlier. We can use several data augmentation techniques, such as edge perturbation and node dropping, to independently train the GNN encoder and learn better graph embeddings. The pretrained embeddings and the classifier head can then be fine-tuned using the available labeled dataset. Trying this on the MUTAG dataset, we observe that the accuracy jumps to 75%, a 15% improvement over before.
We also visualize the embeddings from our pretrained GNN encoder in a low-dimensional space. Even without access to any labels, the self-supervised model is able to separate the two classes, a remarkable feat!

Here are some more examples where we ran the same experiments on different datasets. Note that they were all trained with a GCN encoder, with self-supervision applied via edge perturbation and node dropping augmentations and the InfoNCE objective.
(Figure: comparing classification accuracy with and without self-supervised pretraining on multiple datasets.)
This self-supervised pretraining is particularly effective when the amount of labeled data is limited. Consider a setting where we have access to only 20% of the labeled training data. Once again, self-supervised learning comes to the rescue and significantly boosts model performance!
(Figure: comparing classification accuracy with and without self-supervised pretraining on multiple datasets. Here, we train using only 20% of the labeled data.)
To experiment with more datasets and self-supervised techniques, follow the instructions in our Google Colab and GitHub repository.

Conclusion

To summarize this blog, we explored self-supervised learning on graphs by understanding different data augmentation techniques and integrating them into graph neural networks through contrastive learning. We also saw significant performance gains on the graph classification task.
Recently, much research has focused on finding the right augmentation strategies to learn better representations for various graph applications. Below, we list some of the most popular approaches exploring self-supervised learning on graphs. Happy reading!
(Figure: popular approaches for contrastive self-supervised learning on graphs.)