PyTorch: Tensors and Autograd

Original: https://pytorch.org/tutorials/beginner/examples_autograd/polynomial_autograd.html#sphx-glr-beginner-examples-autograd-polynomial-autograd-py

Proofreader: DrDavidS

Here we fit a third-order polynomial to the function y = sin(x) on the interval from -π to π, training it by minimizing the squared Euclidean distance between the prediction and the true values.
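Concretely, with the same coefficients a, b, c, d used in the code below, the model and the loss being minimized are

\hat{y}(x) = a + b x + c x^2 + d x^3

L(a, b, c, d) = \sum_i \left( \hat{y}(x_i) - \sin(x_i) \right)^2

where the x_i are 2000 points evenly spaced over [-\pi, \pi].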

This implementation uses PyTorch tensor operations to compute the forward pass, and uses PyTorch autograd to compute the gradients.

A PyTorch Tensor represents a node in a computational graph. If x is a Tensor with x.requires_grad=True, then x.grad is another Tensor holding the gradient of some scalar value with respect to x.
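This mechanism can be seen in isolation with a tiny standalone example (a toy snippet, separate from the polynomial fit below):

import torch

x = torch.tensor(2.0, requires_grad=True)  # leaf tensor tracked by autograd
y = x ** 2                                 # scalar computed from x
y.backward()                               # autograd computes dy/dx
print(x.grad)                              # tensor(4.) == 2 * x

The full training script follows.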

import torch
import math

dtype = torch.float
device = torch.device("cpu")
# device = torch.device("cuda:0")  # Uncomment this to run on GPU

# Create Tensors to hold input and outputs.
# By default, requires_grad=False, which indicates that we do not need to
# compute gradients with respect to these Tensors during the backward pass.
x = torch.linspace(-math.pi, math.pi, 2000, device=device, dtype=dtype)
y = torch.sin(x)

# Create random Tensors for weights. For a third order polynomial, we need
# 4 weights: y = a + b x + c x^2 + d x^3
# Setting requires_grad=True indicates that we want to compute gradients with
# respect to these Tensors during the backward pass.
a = torch.randn((), device=device, dtype=dtype, requires_grad=True)
b = torch.randn((), device=device, dtype=dtype, requires_grad=True)
c = torch.randn((), device=device, dtype=dtype, requires_grad=True)
d = torch.randn((), device=device, dtype=dtype, requires_grad=True)

learning_rate = 1e-6
for t in range(2000):
    # Forward pass: compute predicted y using operations on Tensors.
    y_pred = a + b * x + c * x ** 2 + d * x ** 3

    # Compute and print loss using operations on Tensors.
    # Now loss is a Tensor of shape (1,)
    # loss.item() gets the scalar value held in the loss.
    loss = (y_pred - y).pow(2).sum()
    if t % 100 == 99:
        print(t, loss.item())

    # Use autograd to compute the backward pass. This call will compute the
    # gradient of loss with respect to all Tensors with requires_grad=True.
    # After this call a.grad, b.grad, c.grad and d.grad will be Tensors holding
    # the gradient of the loss with respect to a, b, c, d respectively.
    loss.backward()

    # Manually update weights using gradient descent. Wrap in torch.no_grad()
    # because weights have requires_grad=True, but we don't need to track this
    # in autograd.
    with torch.no_grad():
        a -= learning_rate * a.grad
        b -= learning_rate * b.grad
        c -= learning_rate * c.grad
        d -= learning_rate * d.grad

        # Manually zero the gradients after updating weights
        a.grad = None
        b.grad = None
        c.grad = None
        d.grad = None

print(f'Result: y = {a.item()} + {b.item()} x + {c.item()} x^2 + {d.item()} x^3')
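After loss.backward(), the gradients that autograd stores correspond to the expressions you would derive by hand for this loss; for example, d(loss)/da = sum(2 * (y_pred - y)). A minimal standalone sketch (the variable names mirror the script above, with freshly initialized coefficients) that checks this for a:

import math
import torch

x = torch.linspace(-math.pi, math.pi, 2000)
y = torch.sin(x)
a = torch.randn((), requires_grad=True)
b = torch.randn((), requires_grad=True)
c = torch.randn((), requires_grad=True)
d = torch.randn((), requires_grad=True)

y_pred = a + b * x + c * x ** 2 + d * x ** 3
loss = (y_pred - y).pow(2).sum()
loss.backward()

# Hand-derived gradient of the summed squared error with respect to a:
# d(loss)/da = sum(2 * (y_pred - y))
grad_a_manual = (2.0 * (y_pred - y)).sum().detach()
print(torch.allclose(a.grad, grad_a_manual))  # expected: True (up to float rounding)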


Download Python source code: polynomial_autograd.py

Download Jupyter notebook: polynomial_autograd.ipynb