A tensor is a data structure similar to an array or matrix that can run on either the GPU or the CPU. In PyTorch, a model's inputs, outputs, and parameters are all stored as tensors.

Import the required libraries:

    import torch
    import numpy as np

1. Initializing a tensor

1.1 Directly from data

    data = [[1, 2], [3, 4]]
    x_data = torch.tensor(data)

1.2 From a NumPy array

    np_array = np.array(data)
    x_np = torch.from_numpy(np_array)

Converting a tensor to a NumPy array:

    t_array = torch.ones(5)
    n_array = t_array.numpy()
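
Note that a CPU tensor and the NumPy array obtained from it, in either direction, share the same underlying memory, so an in-place change to one is visible in the other. A minimal sketch:

    import numpy as np
    import torch

    t_array = torch.ones(5)
    n_array = t_array.numpy()
    t_array.add_(1)       # in-place change on the tensor...
    print(n_array)        # ...shows up in the NumPy array: [2. 2. 2. 2. 2.]

    np_array = np.ones(3)
    x_np = torch.from_numpy(np_array)
    np.add(np_array, 1, out=np_array)  # in-place change on the array...
    print(x_np)                        # ...shows up in the tensor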

Converting a tensor on the GPU to a NumPy array on the CPU:

    cpu_numpy = gpu_tensor.cpu().detach().numpy()
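
The detach() call matters when the tensor is part of an autograd graph: .numpy() refuses to run on a tensor that requires grad. A minimal CPU-only sketch, so it runs without a GPU (the gpu_tensor above is assumed to come from your own code):

    import torch

    x = torch.ones(3, requires_grad=True)
    # x.numpy() would raise a RuntimeError here because x requires grad
    x_np = x.detach().numpy()  # detach from the autograd graph, then convert
    print(x_np)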

Two ways to move a tensor to the CPU:

    cpu_tensor = gpu_tensor.cpu()
    cpu_tensor = gpu_tensor.to('cpu')

1.3 From another tensor (reusing its shape)

    import numpy as np
    import torch

    list_data = [[1, 2], [3, 4], [5, 6]]
    x_data = torch.tensor(list_data)
    x_ones = torch.ones_like(x_data)                     # same shape as x_data, filled with ones
    x_rand = torch.rand_like(x_data, dtype=torch.float)  # same shape, random values (dtype overridden)
    print(x_ones, x_rand)

1.4 Creating a tensor with a given shape

    shape = (2, 3)
    rand_tensor = torch.rand(shape)
    ones_tensor = torch.ones(shape)
    zeros_tensor = torch.zeros(shape)

2. Printing a tensor's attributes

    import numpy as np
    import torch

    tensor = torch.rand(3, 4)
    print(f"Shape of tensor: {tensor.shape}")
    print(f"Datatype of tensor: {tensor.dtype}")
    print(f"Device tensor is stored on: {tensor.device}")

3. Moving a tensor to the GPU

    # We move our tensor to the GPU if available
    if torch.cuda.is_available():
        tensor = tensor.to('cuda')
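
A common device-agnostic variant, shown here as a sketch rather than the only way, picks the device once and reuses it, so the same code runs with or without a GPU:

    import torch

    device = torch.device('cuda' if torch.cuda.is_available() else 'cpu')
    tensor = torch.rand(3, 4).to(device)  # moved only if a GPU is present
    print(tensor.device)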

4. Indexing a tensor

    import numpy as np
    import torch

    data = [[1, 2, 3, 4], [5, 6, 7, 8], [9, 10, 11, 12], [13, 14, 15, 16]]
    tensor = torch.tensor(data)
    print('First row: ', tensor[0])
    print('First column: ', tensor[:, 0])
    print('Last column:', tensor[..., -1])
    tensor[:, 1] = 0  # set the entire second column to 0
    tensor[1] = 0     # set the entire second row to 0
    print(tensor)

5. Concatenating tensors

    import numpy as np
    import torch

    # data = [[1, 2, 3, 4], [5, 6, 7, 8], [9, 10, 11, 12], [13, 14, 15, 16]]
    data = [[1, 2], [3, 4]]
    tensor = torch.tensor(data)
    t1 = torch.cat([tensor, tensor, tensor], dim=0)  # concatenate 3 tensors vertically (along rows)
    t2 = torch.cat([tensor, tensor, tensor], dim=1)  # concatenate 3 tensors horizontally (along columns)

There is also a related function, torch.stack. It differs from torch.cat in that it joins the tensors along a new dimension, so the result has one more dimension than the inputs, as shown below:

    import numpy as np
    import torch

    # data = [[1, 2, 3, 4], [5, 6, 7, 8], [9, 10, 11, 12], [13, 14, 15, 16]]
    data = [[1, 2], [3, 4]]
    tensor = torch.tensor(data)
    t1 = torch.stack([tensor, tensor, tensor], dim=0)  # stacks along a new dimension: shape (3, 2, 2)
    t2 = torch.cat([tensor, tensor, tensor], dim=0)    # concatenates along rows: shape (6, 2)
    print(t1)
    print(t2)

    tensor([[[1, 2],
             [3, 4]],

            [[1, 2],
             [3, 4]],

            [[1, 2],
             [3, 4]]])
    tensor([[1, 2],
            [3, 4],
            [1, 2],
            [3, 4],
            [1, 2],
            [3, 4]])
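
Checking the shapes makes the difference explicit: stack adds a new leading dimension, while cat only extends an existing one. A short, self-contained check:

    import torch

    tensor = torch.tensor([[1, 2], [3, 4]])
    t1 = torch.stack([tensor, tensor, tensor], dim=0)
    t2 = torch.cat([tensor, tensor, tensor], dim=0)
    print(t1.shape)  # torch.Size([3, 2, 2])
    print(t2.shape)  # torch.Size([6, 2])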

6. Matrix multiplication and element-wise multiplication

    import numpy as np
    import torch

    # data = [[1, 2, 3, 4], [5, 6, 7, 8], [9, 10, 11, 12], [13, 14, 15, 16]]
    data = [[1, 2], [3, 4]]
    tensor = torch.tensor(data, dtype=torch.float)

    # This computes the matrix multiplication between two tensors. y1, y2, y3 will have the same value
    y1 = tensor @ tensor.T
    y2 = tensor.matmul(tensor.T)
    y3 = torch.rand_like(tensor)
    torch.matmul(tensor, tensor.T, out=y3)
    print(y1)

    # This computes the element-wise product. z1, z2, z3 will have the same value
    z1 = tensor * tensor
    z2 = tensor.mul(tensor)
    z3 = torch.rand_like(tensor)
    torch.mul(tensor, tensor, out=z3)
    print(z1)

7. The item() method of a tensor

    import numpy as np
    import torch

    # data = [[1, 2, 3, 4], [5, 6, 7, 8], [9, 10, 11, 12], [13, 14, 15, 16]]
    data = [[1, 2], [3, 4]]
    tensor = torch.tensor(data, dtype=torch.float)
    agg = tensor.sum()     # a one-element tensor
    agg_item = agg.item()  # converted to a plain Python float
    print(agg_item, type(agg_item))
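
item() only works on tensors that contain exactly one element; for larger tensors, tolist() is one way to get plain Python values back. A brief sketch:

    import torch

    tensor = torch.tensor([[1., 2.], [3., 4.]])
    print(tensor.sum().item())  # 10.0, a plain Python float
    print(tensor.tolist())      # [[1.0, 2.0], [3.0, 4.0]], nested Python lists
    # tensor.item() here would raise a RuntimeError, since only one-element
    # tensors can be converted to Python scalars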

8. In-place operations

In-place operations store the result directly in the operand, modifying the original tensor instead of allocating a new one. In PyTorch they are marked by a trailing underscore, e.g. add_(), copy_(), t_(). They save memory, but their use is discouraged in code that computes gradients, because the immediate loss of history can interfere with autograd.

8.1 Adding a number to a tensor

For example, adding a number to every element of the tensor:

    import numpy as np
    import torch

    # data = [[1, 2, 3, 4], [5, 6, 7, 8], [9, 10, 11, 12], [13, 14, 15, 16]]
    data = [[1, 2], [3, 4]]
    tensor = torch.tensor(data, dtype=torch.float)
    print(tensor, "\n")
    tensor.add_(5)  # add 5 to every element, in place
    print(tensor)

8.2 Copying into a tensor with copy_()

    import numpy as np
    import torch

    # data = [[1, 2, 3, 4], [5, 6, 7, 8], [9, 10, 11, 12], [13, 14, 15, 16]]
    data = [[1, 2], [3, 4]]
    tensor = torch.tensor(data, dtype=torch.float)
    print(tensor, "\n")
    tensor_copy = torch.empty_like(tensor)  # destination tensor
    tensor_copy.copy_(tensor)               # copy the values of tensor into it, in place
    tensor_copy.add_(5)                     # modifies only the copy
    print(tensor_copy)
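
For making an independent copy, clone() is the more common choice: it allocates new memory, so changes to the copy do not touch the original. A short sketch:

    import torch

    tensor = torch.tensor([[1., 2.], [3., 4.]])
    tensor_clone = tensor.clone()  # independent copy in new memory
    tensor_clone.add_(5)
    print(tensor)        # unchanged
    print(tensor_clone)  # every element increased by 5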

8.3 Transposing a tensor in place

    import numpy as np
    import torch

    # data = [[1, 2, 3, 4], [5, 6, 7, 8], [9, 10, 11, 12], [13, 14, 15, 16]]
    data = [[1, 2], [3, 4]]
    tensor = torch.tensor(data, dtype=torch.float)
    print(tensor, "\n")
    tensor_t = tensor.t_()  # transpose the tensor in place
    print(tensor_t)
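
For comparison, the non-in-place t() returns a transposed view and leaves the original tensor's shape unchanged. A brief sketch:

    import torch

    tensor = torch.tensor([[1., 2., 3.], [4., 5., 6.]])
    transposed = tensor.t()
    print(tensor.shape)      # torch.Size([2, 3]) - unchanged
    print(transposed.shape)  # torch.Size([3, 2])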