The previous article covered how the graph is built during the forward pass. This article looks at how backpropagation walks that dynamic computation graph to compute the gradient of each node.

import torch
a = torch.tensor(1.0, requires_grad=True)
b = torch.tensor(2.0, requires_grad=True)
c = torch.add(a, b)
d = torch.mul(a, c)
d.backward()
print(f"a grad:{a.grad} grad_fn:{a.grad_fn}")
print(f"b grad:{b.grad} grad_fn:{b.grad_fn}")
print(f"c grad:{c.grad} grad_fn:{c.grad_fn}")
print(f"d grad:{d.grad} grad_fn:{d.grad_fn}")
"""
a grad:4.0 grad_fn:None
b grad:1.0 grad_fn:None
c grad:None grad_fn:<AddBackward0 object at 0x7fdb27bcbf90>
d grad:None grad_fn:<MulBackward0 object at 0x7fdb27bcb210>
"""

[Figure: dag_forward_1.svg, the computation graph built by the forward pass]

1 Calls at the Python level

Backpropagation starts as soon as d.backward() is executed.

[torch/tensor.py]
def backward(self, gradient=None, retain_graph=None, create_graph=False, inputs=None):
    ...
    torch.autograd.backward(self, gradient, retain_graph, create_graph, inputs=inputs)
    ...

d.backward() calls torch.autograd.backward().

[torch/autograd/__init__.py]
def backward(
    tensors: _TensorOrTensors,
    grad_tensors: Optional[_TensorOrTensors] = None,
    retain_graph: Optional[bool] = None,
    create_graph: bool = False,
    grad_variables: Optional[_TensorOrTensors] = None,
    inputs: Optional[Sequence[torch.Tensor]] = None,
) -> None:
    ...
    tensors = (tensors,) if isinstance(tensors, torch.Tensor) else tuple(tensors)
    inputs = tuple(inputs) if inputs is not None else tuple()

    grad_tensors_ = _tensor_or_tensors_to_tuple(grad_tensors, len(tensors))
    grad_tensors_ = _make_grads(tensors, grad_tensors_)
    Variable._execution_engine.run_backward(
        tensors, grad_tensors_, retain_graph, create_graph, inputs,
        allow_unreachable=True, accumulate_grad=True)  # allow_unreachable flag
  • The tensors argument is tensor d itself (the call was d.backward()); it is then wrapped into a tuple by tensors = (tensors,) if isinstance(tensors, torch.Tensor) else tuple(tensors).
  • The grad_tensors argument starts out as None; _make_grads() replaces it with a tensor of the same shape as tensors filled with ones, created by torch.ones_like(out, memory_format=torch.preserve_format).
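So for this example, the call below is effectively what d.backward() reduces to before handing off to the engine (a sketch of the behaviour, not the exact internal call):

import torch

a = torch.tensor(1.0, requires_grad=True)
b = torch.tensor(2.0, requires_grad=True)
c = torch.add(a, b)
d = torch.mul(a, c)

# Wrap the output in a tuple and supply an all-ones gradient of the same shape,
# mirroring what torch.autograd.backward() does for a scalar output.
torch.autograd.backward((d,), (torch.ones_like(d),))
print(a.grad, b.grad)   # tensor(4.) tensor(1.)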

The source of _make_grads() is as follows:

[torch/autograd/__init__.py]
def _make_grads(outputs: Sequence[torch.Tensor], grads: Sequence[_OptionalTensor]) -> Tuple[_OptionalTensor, ...]:
    new_grads: List[_OptionalTensor] = []
    for out, grad in zip(outputs, grads):
        if isinstance(grad, torch.Tensor):
            ...
        elif grad is None:
            if out.requires_grad:
                if out.numel() != 1:
                    raise RuntimeError("grad can be implicitly created only for scalar outputs")
                new_grads.append(torch.ones_like(out, memory_format=torch.preserve_format))
            else:
                new_grads.append(None)
        else:
            ...
    return tuple(new_grads)
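The out.numel() != 1 branch is where the familiar "grad can be implicitly created only for scalar outputs" error comes from: backward() with no explicit gradient only works for scalar outputs, while a non-scalar output needs an explicit gradient tensor. A small sketch:

import torch

x = torch.tensor([1.0, 2.0], requires_grad=True)
y = x * 3                        # non-scalar output

try:
    y.backward()                 # no grad_tensors -> _make_grads() raises
except RuntimeError as e:
    print(e)                     # grad can be implicitly created only for scalar outputs

y.backward(torch.ones_like(y))   # explicit gradient works
print(x.grad)                    # tensor([3., 3.])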

torch.autograd.backward() then calls Variable._execution_engine.run_backward(), which is in fact THPEngine_run_backward(); at this point execution has entered the C++ layer.
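The hand-off is easy to observe from Python. On the build used here, Variable._execution_engine is an instance of the C++-backed engine class (it shows up as torch._C._ImperativeEngine), and its run_backward() method is the binding implemented by THPEngine_run_backward():

import torch
from torch.autograd import Variable

# The engine object torch.autograd.backward() hands control to.
print(type(Variable._execution_engine))                      # <class 'torch._C._ImperativeEngine'>
print(hasattr(Variable._execution_engine, "run_backward"))   # True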

2 Calls at the C++ level

The source of THPEngine_run_backward() is as follows:

[torch/csrc/autograd/python_engine.cpp]
// Implementation of torch._C._EngineBase.run_backward
PyObject *THPEngine_run_backward(PyObject *self, PyObject *args, PyObject *kwargs)
{
  PyObject *tensors = nullptr;
  PyObject *grad_tensors = nullptr;
  unsigned char keep_graph = 0;
  unsigned char create_graph = 0;
  PyObject *inputs = nullptr;
  unsigned char allow_unreachable = 0;
  unsigned char accumulate_grad = 0; // Indicate whether to accumulate grad into leaf Tensors or capture
  const char *accepted_kwargs[] = { // NOLINT
      "tensors", "grad_tensors", "keep_graph", "create_graph", "inputs",
      "allow_unreachable", "accumulate_grad", nullptr
  };
  if (!PyArg_ParseTupleAndKeywords(args, kwargs, "OObb|Obb", (char**)accepted_kwargs,
        &tensors, &grad_tensors, &keep_graph, &create_graph, &inputs, &allow_unreachable, &accumulate_grad))
    return nullptr;
  ...
  Py_ssize_t num_tensors = PyTuple_GET_SIZE(tensors);
  Py_ssize_t num_gradients = PyTuple_GET_SIZE(grad_tensors);
  ...
  // The user either called autograd.backward(...) or autograd.grad(...) to get here
  bool backward_api_called = accumulate_grad;
  ...
  edge_list roots;
  roots.reserve(num_tensors);
  variable_list grads;
  grads.reserve(num_tensors);
  for (int i = 0; i < num_tensors; i++) {
    PyObject *_tensor = PyTuple_GET_ITEM(tensors, i);
    ...
    auto& variable = ((THPVariable*)_tensor)->cdata;
    ...
    auto gradient_edge = torch::autograd::impl::gradient_edge(variable);
    ...
    roots.push_back(std::move(gradient_edge));

    PyObject *grad = PyTuple_GET_ITEM(grad_tensors, i);
    if (THPVariable_Check(grad)) {
      const Variable& grad_var = ((THPVariable*)grad)->cdata;
      ...
      grads.push_back(grad_var);
    } else {
      ...
    }
  }

  std::vector<Edge> output_edges;
  if (inputs != nullptr) {
    int num_inputs = PyTuple_GET_SIZE(inputs);
    output_edges.reserve(num_inputs);
    for (int i = 0; i < num_inputs; ++i) {
      PyObject *input = PyTuple_GET_ITEM(inputs, i);
      ...
      THPVariable *input_var = (THPVariable*)input;
      ...
      const auto output_nr = input_var->cdata.output_nr();
      auto grad_fn = input_var->cdata.grad_fn();
      if (!grad_fn) {
        grad_fn = torch::autograd::impl::try_get_grad_accumulator(input_var->cdata);
      }
      ...
      if (!grad_fn) {
        output_edges.emplace_back();
      } else {
        output_edges.emplace_back(grad_fn, output_nr);
      }
    }
  }

  variable_list outputs;
  {
    pybind11::gil_scoped_release no_gil;
    auto& engine = python::PythonEngine::get_python_engine();
    outputs = engine.execute(roots, grads, keep_graph, create_graph, accumulate_grad, output_edges);
  }

  if (!backward_api_called && inputs != nullptr) {
    int num_inputs = PyTuple_GET_SIZE(inputs);
    THPObjectPtr py_outputs {PyTuple_New(num_inputs)};
    if (!py_outputs) return nullptr;
    for (int i = 0; i < num_inputs; i++) {
      ...
      PyTuple_SET_ITEM(py_outputs.get(), i, THPVariable_Wrap(outputs[i]));
    }
    return py_outputs.release();
  } else {
    Py_RETURN_NONE;
  }
  END_HANDLE_TH_ERRORS
}
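The final if/else is where the two Python entry points diverge: torch.autograd.backward() passes accumulate_grad=true, so the engine writes the gradients into the leaves' .grad fields and the function returns None, whereas torch.autograd.grad() passes accumulate_grad=false and gets the captured gradients back through py_outputs. A quick sketch of the visible difference:

import torch

a = torch.tensor(1.0, requires_grad=True)
b = torch.tensor(2.0, requires_grad=True)
d = a * (a + b)

# torch.autograd.grad(): accumulate_grad is false, so run_backward() returns the
# gradients for the requested inputs instead of accumulating them into .grad
ga, gb = torch.autograd.grad(d, (a, b))
print(ga, gb)           # tensor(4.) tensor(1.)
print(a.grad, b.grad)   # None None -- nothing was written into the leaves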

THPEngine_run_backward() does three things (some details are omitted):

  • variable declarations and argument parsing (the PyArg_ParseTupleAndKeywords block at the top);
  • initializing roots, the starting points of the backward pass over the dynamic computation graph, here [(MulBackward0, 0)], together with the matching grads (the first for loop above);
  • calling execute() to run backpropagation.

Only the third part is discussed below; the first two are fairly straightforward.
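The roots can also be inspected from Python: the gradient_edge of d pairs its grad_fn (MulBackward0) with its output index, and following next_functions walks the edges of the backward graph, including the AccumulateGrad nodes that will write into the leaves' .grad. On the build used here the output looks roughly like the comments below:

import torch

a = torch.tensor(1.0, requires_grad=True)
b = torch.tensor(2.0, requires_grad=True)
c = torch.add(a, b)
d = torch.mul(a, c)

print(d.grad_fn)                  # <MulBackward0 ...>  -> the root edge (MulBackward0, 0)
print(d.grad_fn.next_functions)   # ((<AccumulateGrad ...>, 0), (<AddBackward0 ...>, 0))
print(c.grad_fn.next_functions)   # ((<AccumulateGrad ...>, 0), (<AccumulateGrad ...>, 0))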

2.1 调用execute()执行反向传播

[torch/csrc/autograd/python_engine.cpp]
// Implementation of torch._C._EngineBase.run_backward
PyObject *THPEngine_run_backward(PyObject *self, PyObject *args, PyObject *kwargs)
{
  variable_list outputs;
  {
    pybind11::gil_scoped_release no_gil;
    auto& engine = python::PythonEngine::get_python_engine();
    outputs = engine.execute(roots, grads, keep_graph, create_graph, accumulate_grad, output_edges);
  }
}

THPEngine_run_backward() calls PythonEngine::execute().

[torch/csrc/autograd/python_engine.cpp]
variable_list PythonEngine::execute(
    const edge_list& roots,
    const variable_list& inputs,
    bool keep_graph,
    bool create_graph,
    bool accumulate_grad,
    const edge_list& outputs) {
  TORCH_CHECK(!PyGILState_Check(), "The autograd engine was called while holding the GIL. If you are using the C++ "
                                   "API, the autograd engine is an expensive operation that does not require the "
                                   "GIL to be held so you should release it with 'pybind11::gil_scoped_release no_gil;'"
                                   ". If you are not using the C++ API, please report a bug to the pytorch team.")
  try {
    return Engine::execute(roots, inputs, keep_graph, create_graph, accumulate_grad, outputs);
  } catch (python_error& e) {
    e.restore();
    throw;
  }
}

PythonEngine::execute() in turn calls Engine::execute(), which performs the actual backward pass.

[torch/csrc/autograd/engine.cpp]
auto Engine::execute(const edge_list& roots,
                     const variable_list& inputs,
                     bool keep_graph,
                     bool create_graph,
                     bool accumulate_grad,
                     const edge_list& outputs) -> variable_list {
  ...
}
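One detail worth noting in this signature: keep_graph is the retain_graph flag from the Python API. By default the engine frees the graph's buffers as it runs, so backpropagating through the same graph a second time requires retain_graph=True; and since accumulate_grad is true for backward(), a second pass adds to the existing .grad values. A small sketch:

import torch

a = torch.tensor(1.0, requires_grad=True)
b = torch.tensor(2.0, requires_grad=True)
d = a * (a + b)

d.backward(retain_graph=True)   # keep_graph=true: the graph's buffers are kept
print(a.grad)                   # tensor(4.)

d.backward()                    # second pass over the same graph
print(a.grad)                   # tensor(8.) -- gradients accumulate

# Without retain_graph=True the second call would raise
# "Trying to backward through the graph a second time ...".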

The execution flow of execute() is described in detail in the article on the Engine class of the dynamic computation graph; here only the final result diagram is shown.
[Figure: autograd_bw.svg, the backward pass over the dynamic computation graph]