Reference: https://pytorch.org/docs/stable/notes/cuda.html
torch.cuda.device(idx) is a context manager that switches the current (default) CUDA device. Inside the with block, both .cuda() with no argument and .to(device=cuda) with an index-less device allocate tensors on that device.
cuda = torch.device('cuda')     # Default CUDA device
cuda0 = torch.device('cuda:0')
cuda2 = torch.device('cuda:2')  # GPU 2 (these are 0-indexed)

x = torch.tensor([1., 2.], device=cuda0)
# x.device is device(type='cuda', index=0)
y = torch.tensor([1., 2.]).cuda()
# y.device is device(type='cuda', index=0)

with torch.cuda.device(1):
    # allocates a tensor on GPU 1
    a = torch.tensor([1., 2.], device=cuda)

    # transfers a tensor from CPU to GPU 1
    b = torch.tensor([1., 2.]).cuda()
    # a.device and b.device are device(type='cuda', index=1)

    # You can also use ``Tensor.to`` to transfer a tensor:
    b2 = torch.tensor([1., 2.]).to(device=cuda)
    # b.device and b2.device are device(type='cuda', index=1)

    c = a + b
    # c.device is device(type='cuda', index=1)

    z = x + y
    # z.device is device(type='cuda', index=0)

    # even within a context, you can specify the device
    # (or give a GPU index to the .cuda call)
    d = torch.randn(2, device=cuda2)
    e = torch.randn(2).to(cuda2)
    f = torch.randn(2).cuda(cuda2)
    # d.device, e.device, and f.device are all device(type='cuda', index=2)
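To make the scoping explicit, here is a minimal sketch (assuming a machine with at least two GPUs): torch.cuda.current_device() reports which device is current, the context manager changes it only for the duration of the with block, and an index-less cuda device resolves to whatever the current device is at allocation time.

import torch

cuda = torch.device('cuda')          # no index: resolves to the current device at allocation time

print(torch.cuda.current_device())   # typically 0 before entering the context

with torch.cuda.device(1):
    print(torch.cuda.current_device())   # 1 while inside the context
    t = torch.empty(3, device=cuda)      # allocated on the current device, i.e. GPU 1
    print(t.device)                      # device(type='cuda', index=1)

print(torch.cuda.current_device())   # restored to 0 once the context exits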
