Official torch examples

Matrix multiplication

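The figure that originally illustrated this section is not available. As a minimal sketch of what it likely showed, matrix multiplication in PyTorch can be written with torch.matmul, the @ operator, or torch.mm for 2-D tensors:

```python
>>> import torch
>>> A = torch.randn(2, 3)
>>> B = torch.randn(3, 4)
>>> torch.matmul(A, B).shape   # same result shape as A @ B or torch.mm(A, B)
torch.Size([2, 4])
```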

Creating tensors

This is only a brief introduction, showing some common ways to create a tensor.

Converting from NumPy data

torch.from_numpy(ndarray) → Tensor

The returned tensor and the ndarray share the same memory, so modifying one also modifies the other, as the example below shows.

```python
>>> import numpy
>>> import torch
>>> a = numpy.array([1, 2, 3])
>>> t = torch.from_numpy(a)
>>> t
tensor([ 1,  2,  3])
>>> t[0] = -1
>>> a
array([-1,  2,  3])
```

Converting from a list

torch.tensor(data, dtype=None, device=None, requires_grad=False, pin_memory=False) → Tensor

  • data (array_like) – Initial data for the tensor. Can be a list, tuple, NumPy ndarray, scalar, and other types.
  • dtype (torch.dtype, optional) – the desired data type of returned tensor. Default: if None, infers data type from data.
  • device (torch.device, optional) – the desired device of returned tensor. Default: if None, uses the current device for the default tensor type (see torch.set_default_tensor_type()). device will be the CPU for CPU tensor types and the current CUDA device for CUDA tensor types.
  • requires_grad (bool, optional) – If autograd should record operations on the returned tensor. Default: False.
  • pin_memory (bool, optional) – If set, returned tensor would be allocated in the pinned memory. Works only for CPU tensors. Default: False.

```python
>>> torch.tensor([[0.1, 1.2], [2.2, 3.1], [4.9, 5.2]])
tensor([[ 0.1000,  1.2000],
        [ 2.2000,  3.1000],
        [ 4.9000,  5.2000]])
```

Random tensors

  • torch.rand(*sizes, out=None) → Tensor

Returns a tensor filled with random numbers drawn from the uniform distribution on the interval [0, 1). The shape of the tensor is defined by the argument sizes.
Parameters:

  • sizes (int...) – a sequence of integers defining the shape of the output tensor
  • out (Tensor, optional) – the output tensor
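
For example (the values are random, only the shape is fixed):

```python
>>> torch.rand(2, 3).shape    # values drawn uniformly from [0, 1)
torch.Size([2, 3])
```
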
  • torch.randn(*sizes, out=None) → Tensor

Returns a tensor filled with random numbers drawn from the standard normal distribution (mean 0, variance 1, i.e. Gaussian white noise). The shape of the tensor is defined by the argument sizes.
Parameters:

  • sizes (int...) – a sequence of integers defining the shape of the output tensor
  • out (Tensor, optional) – the output tensor
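
For example:

```python
>>> torch.randn(4).shape      # values drawn from the standard normal N(0, 1)
torch.Size([4])
```
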
  • torch.normal(means, std, out=None) → Tensor

Returns a tensor of random numbers drawn from separate normal distributions whose means and standard deviations are given.
std is a tensor containing the standard deviation of the normal distribution for each output element.
Parameters:

  • means (float, optional) – the mean
  • std (Tensor) – the standard deviations
  • out (Tensor) – the output tensor
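
A minimal sketch; note that in current PyTorch versions the first parameter is spelled mean rather than means:

```python
>>> torch.normal(mean=torch.zeros(4), std=torch.arange(1., 5.)).shape   # one sample per element, each with its own std
torch.Size([4])
```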

torch.tensor() vs torch.Tensor()

torch.tensor() lets you control the data type; when dtype=None, it infers the dtype from the input data.
torch.Tensor() is equivalent to torch.FloatTensor(): it converts the data to float.
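
A quick check of the difference (assuming the default dtype has not been changed):

```python
>>> torch.tensor([1, 2, 3]).dtype    # dtype inferred from the Python ints
torch.int64
>>> torch.Tensor([1, 2, 3]).dtype    # always the default float type
torch.float32
```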

NumPy-like methods

  • torch.ones(*size, out=None, dtype=None, layout=torch.strided, device=None, requires_grad=False) → Tensor
  • torch.zeros(*size, out=None, dtype=None, layout=torch.strided, device=None, requires_grad=False) → Tensor
  • ......
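
For instance:

```python
>>> torch.ones(2, 3)
tensor([[1., 1., 1.],
        [1., 1., 1.]])
>>> torch.zeros(2, dtype=torch.int64)
tensor([0, 0])
```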

Tensor attributes

  • torch.dtype

  • torch.device
  • torch.layout
  • shape
  • ……
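
A short illustration on a CPU tensor with default settings:

```python
>>> x = torch.zeros(2, 3)
>>> x.dtype, x.device, x.layout, x.shape
(torch.float32, device(type='cpu'), torch.strided, torch.Size([2, 3]))
```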

Expanding and squeezing dimensions

Squeeze: remove the dimensions of size 1;

```python
b = torch.squeeze(a)
```

Unsqueeze: insert a dimension of size 1 at the specified dim;

```python
d = torch.unsqueeze(c, 1)
```
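
A short sketch of the resulting shapes (the tensors here are illustrative placeholders):

```python
>>> a = torch.zeros(1, 3, 1, 2)
>>> torch.squeeze(a).shape          # both size-1 dims removed
torch.Size([3, 2])
>>> c = torch.zeros(3, 2)
>>> torch.unsqueeze(c, 1).shape     # a size-1 dim inserted at dim=1
torch.Size([3, 1, 2])
```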

Slicing data with narrow

torch.narrow(input, dim, start, length) → Tensor

The output tensor has the same number of dimensions as the input, but the given dimension is narrowed to the selected range.
For example:

```python
>>> import torch
>>> a = torch.randn((3, 3, 3))
>>> a
tensor([[[-1.8521,  1.4378,  0.9785],
         [-0.7449,  0.5619, -0.0150],
         [-2.0747, -1.9051,  2.4881]],

        [[ 0.3092, -1.6075, -1.0128],
         [ 1.5866,  0.6185, -0.3448],
         [ 1.3768,  0.6300, -1.0388]],

        [[ 2.3150,  1.1148,  0.1757],
         [-0.2820,  0.1473, -0.7576],
         [ 0.4451, -1.3251,  0.3433]]])
>>> a.narrow(2, 0, 2)
tensor([[[-1.8521,  1.4378],
         [-0.7449,  0.5619],
         [-2.0747, -1.9051]],

        [[ 0.3092, -1.6075],
         [ 1.5866,  0.6185],
         [ 1.3768,  0.6300]],

        [[ 2.3150,  1.1148],
         [-0.2820,  0.1473],
         [ 0.4451, -1.3251]]])
```

Getting indices of nonzero elements with nonzero

```python
>>> a = torch.randn((2, 2))
>>> a
tensor([[-0.0703,  0.7614],
        [-1.4132,  0.0907]])
>>> a.nonzero()
tensor([[0, 0],
        [0, 1],
        [1, 0],
        [1, 1]])
```

Note: the output is always a 2-D tensor, with one row per nonzero element and one column per input dimension.

```python
>>> a[:, 3]                     # here a is a 2-D tensor with at least 4 columns (not the a above)
tensor([-0.1084,  0.1881])
>>> torch.nonzero(a[:, 3])      # 1-D input -> indices of shape (n, 1)
tensor([[0],
        [1]])
>>> c = torch.randn((2, 2, 2))
>>> torch.nonzero(c)            # 3-D input -> indices of shape (n, 3)
tensor([[0, 0, 0],
        [0, 0, 1],
        [0, 1, 0],
        [0, 1, 1],
        [1, 0, 0],
        [1, 0, 1],
        [1, 1, 0],
        [1, 1, 1]])
```

Creating a new Tensor with new()

Creates a new Tensor whose type and device are the same as the original Tensor's, and which has no contents.

```python
# Method 1
>>> a
tensor([[-0.0703,  0.7614],
        [-1.4132,  0.0907]])
>>> b = a.new()
>>> b
tensor([])
>>> b.shape
torch.Size([0])
# Method 2
>>> b = torch.Tensor.new(a)
>>> b
tensor([])
>>> b = a.new(1, 3)   # allocates uninitialized memory, so the values are arbitrary
>>> b
tensor([[0., 0., nan]])
```

Getting the maximum value and its index with max

```python
>>> a
tensor([[-0.0703,  0.7614],
        [-1.4132,  0.0907]])
>>> torch.max(a, 1)
torch.return_types.max(
values=tensor([0.7614, 0.0907]),
indices=tensor([1, 1]))
```

Sorting with sort

```python
>>> c = torch.randn(10)
>>> c
tensor([ 1.8778,  0.9451,  0.0307,  0.7266, -1.5963, -0.7301,  0.6295,  0.6368,
         0.7351, -1.6350])
>>> torch.sort(c, descending=True)
torch.return_types.sort(
values=tensor([ 1.8778,  0.9451,  0.7351,  0.7266,  0.6368,  0.6295,  0.0307, -0.7301,
        -1.5963, -1.6350]),
indices=tensor([0, 1, 8, 3, 7, 6, 2, 5, 4, 9]))
```

Returns two tensors: the first holds the sorted values and the second the indices. The index tensor can be used to index the original tensor and recover the sorted values, as checked below.
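
A quick check of that claim, reusing c from the example above:

```python
>>> values, indices = torch.sort(c, descending=True)
>>> torch.equal(c[indices], values)
True
```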

Index-based selection with index_select

```python
>>> a = torch.randn((2, 3))
>>> a
tensor([[-0.8288,  0.6638, -0.1492],
        [ 0.3325,  0.8650,  0.0925]])
>>> torch.index_select(a, 1, torch.LongTensor([0, 1]))
tensor([[-0.8288,  0.6638],
        [ 0.3325,  0.8650]])
```

Argument order: index_select(x, dim, indices), i.e. select the entries given by indices along dimension dim of x.

The repeat function

repeat(*sizes) → Tensor

Repeats this tensor along the specified dimensions.

  • sizes (torch.Size or int...) – The number of times to repeat this tensor along each dimension.

```python
>>> a
tensor([[-0.8288,  0.6638, -0.1492],
        [ 0.3325,  0.8650,  0.0925]])
>>> a.repeat(2, 2)   # repeat twice along the first dimension and twice along the second
tensor([[-0.8288,  0.6638, -0.1492, -0.8288,  0.6638, -0.1492],
        [ 0.3325,  0.8650,  0.0925,  0.3325,  0.8650,  0.0925],
        [-0.8288,  0.6638, -0.1492, -0.8288,  0.6638, -0.1492],
        [ 0.3325,  0.8650,  0.0925,  0.3325,  0.8650,  0.0925]])
```