In TVM, the file that implements the ONNX frontend is tvm/python/tvm/relay/frontend/onnx.py. ONNX defines a large number of operators, and TVM has to convert each of them into its own operators. Some ONNX operators are not yet supported by TVM (no conversion has been implemented), for example Loop, NonMaxSuppression, and so on, so we need to add those conversions ourselves.

Important classes

OnnxOpConverter: the base class for ONNX operator conversion; every ONNX operator converter class derives from it.
Unary: the base class for converters of unary operators; a subclass of OnnxOpConverter.
Elemwise: the base class for converters of element-wise (binary) operators; a subclass of OnnxOpConverter. A short sketch of concrete subclasses follows the listing below.

class OnnxOpConverter(object):
    """ A helper class for holding onnx op converters.
    """

    @classmethod
    def get_converter(cls, opset):
        """ Get converter matches given opset.

        Parameters
        ----------
        opset: int
            opset from model.

        Returns
        -------
        converter, which should be `_impl_vx`. Number x is the biggest
            number smaller than or equal to opset belongs to all support versions.
        """
        versions = [
            int(d.replace('_impl_v', '')) for d in dir(cls) if '_impl_v' in d
        ]
        versions = sorted(versions + [opset])
        version = versions[
            max([i for i, v in enumerate(versions) if v == opset]) - 1]
        if hasattr(cls, '_impl_v{}'.format(version)):
            return getattr(cls, '_impl_v{}'.format(version))
        raise NotImplementedError(
            'opset version {} of {} not implemented'.format(
                version, cls.__name__))


class Unary(OnnxOpConverter):
    """ A helper class for unary op converters.
    """
    name = ''

    @classmethod
    def _impl_v1(cls, inputs, attr, params):
        assert len(inputs) == 1, "Unary math op {} takes 1 input, {} given".format(
            cls.name, len(inputs))
        op_name = cls.name
        return get_relay_op(op_name)(*inputs)


class Elemwise(OnnxOpConverter):
    """ A helper class for elemwise op converters.
    """
    name = ''

    @classmethod
    def _impl_v1(cls, inputs, attr, params):
        assert len(inputs) == 2, "Math op {} take 2 inputs, {} given".format(
            cls.name, len(inputs))
        op_name = cls.name
        conv_ops = ["conv2d", "conv2d_transpose"]
        if attr.get('broadcast', 0) and any(x in str(inputs[0]) for x in conv_ops):
            # TODO(zhreshold): remove hard coded infershape
            axis = int(attr.get('axis', 0))
            inputs[1] = _op.expand_dims(inputs[1], axis=axis, num_newaxis=2)
        return get_relay_op(op_name)(*inputs)
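
For orientation, a concrete converter built on these helpers usually only needs to set name to the matching relay operator. The sketch below is illustrative only; the class names and the exact relay op names used in onnx.py may differ:

# Sketch only: hypothetical concrete converters on top of the helper classes above.
class Absolute(Unary):
    """ Unary op mapped 1:1 onto relay's 'abs'. """
    name = 'abs'


class Add(Elemwise):
    """ Element-wise binary op mapped onto relay's 'add'. """
    name = 'add'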


So how do we add a new operator (more precisely, the conversion for that operator)? Let's take Upsample as an example.

Adding an operator

1 Create a converter class for the operator

By convention, the class is given the same name as the operator being added (a different name also works, as long as it does not clash with an existing class). For Upsample, create a class named Upsample that inherits from OnnxOpConverter.
The class must also define a classmethod whose name is "_impl_v" followed by a number x, where x is the largest supported version that is smaller than or equal to the model's opset. This method builds and returns the relay expression that implements the ONNX operator, i.e. the operator's functionality is reproduced with relay operators.
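
In other words, get_converter picks the _impl_vx whose x is the largest defined version not greater than the model's opset. A minimal sketch (DemoOp is a hypothetical converter, shown only to illustrate the dispatch):

# Sketch: how get_converter dispatches between _impl_vx versions (DemoOp is hypothetical).
class DemoOp(OnnxOpConverter):
    @classmethod
    def _impl_v1(cls, inputs, attr, params):
        ...  # conversion used for models with opset 1..8

    @classmethod
    def _impl_v9(cls, inputs, attr, params):
        ...  # conversion used for models with opset >= 9

# DemoOp.get_converter(7)  returns DemoOp._impl_v1  (largest x <= 7 is 1)
# DemoOp.get_converter(11) returns DemoOp._impl_v9  (largest x <= 11 is 9)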

About opsets (https://github.com/onnx/onnx/blob/master/docs/VersionConverter.md):
ONNX provides a library for converting ONNX models between different opset versions. The main motivation is to improve the backward compatibility of ONNX models without having to strengthen the spec for ONNX backends. This lets backend developers offer support for a particular opset version, and lets users write or export models against a particular opset version while running them in environments that target a different one. On the implementation side, the library leverages an in-memory representation that is much more convenient to manipulate than the raw protobuf structs, together with converters to and from the protobuf format, both developed for the ONNX optimizer.
You may be interested in invoking the provided per-operator adapters, in implementing new adapters, or both. The default adapters only work within the default domain, but depending on the breaking change involved they can be generalized to work across domains or with new conversion approaches.
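
If the model's opset does not match what the frontend supports, the version converter can be applied before import. A minimal sketch, assuming the standard onnx Python package ('model.onnx' is a placeholder path):

import onnx
from onnx import version_converter

# Sketch: convert a model to a target opset before handing it to the TVM frontend.
model = onnx.load('model.onnx')
converted = version_converter.convert_version(model, 9)  # target opset 9
onnx.save(converted, 'model_opset9.onnx')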

class Upsample(OnnxOpConverter):
    """ Operator converter for Upsample (nearest mode).
    """

    @classmethod
    def _impl_v9(cls, inputs, attr, params):
        scales = attr.get('scales')
        if not scales:
            # Here we are going to higher OPSET version.
            assert len(inputs) == 2, "Upsample op take 2 inputs, {} given".format(len(inputs))
            scales = params[inputs[1].name_hint].asnumpy()
            inputs = inputs[:1]
        assert len(scales) == 4 and scales[0] == 1.0 and scales[1] == 1.0
        mode = attr.get('mode')
        if mode == b'nearest':
            method = "nearest_neighbor"
        elif mode == b'linear':
            method = "bilinear"
        else:
            raise tvm.error.OpAttributeInvalid(
                'Value {} in attribute "mode" of operator Upsample is not valid.'.format(mode))
        attr = {'scale_h': scales[-2], 'scale_w': scales[-1], 'method': method,
                'layout': 'NCHW', 'align_corners': True}
        return AttrCvt('upsampling')(inputs, attr)

The key part is the implementation of this _impl_v method.
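
Note that AttrCvt('upsampling') renames the node to relay's upsampling operator and forwards the attribute dict built above. Conceptually it resolves to a direct relay call; the helper below (_upsample_direct is hypothetical, and the exact upsampling signature may differ between TVM versions) is only a sketch of that:

# Sketch: roughly what AttrCvt('upsampling')(inputs, attr) amounts to in this converter.
# _op is relay's op namespace (onnx.py imports it as `from .. import op as _op`).
def _upsample_direct(data, scales, method):
    return _op.nn.upsampling(data,
                             scale_h=scales[-2],
                             scale_w=scales[-1],
                             layout='NCHW',
                             method=method,
                             align_corners=True)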

2 Add the operator conversion

_get_convert_map builds the operator conversion map: it defines the mapping from operator names to conversion functions, and this is where we add the entry for the Upsample operator.
The key of the map is the ONNX operator name; the value is the callable that implements that operator in TVM.
When an ONNX operator corresponds directly to a single TVM operator (only the name differs), Renamer is enough.
When an ONNX operator has to be implemented by several TVM operators, use the approach from step 1: create a converter class and register the result of its get_converter classmethod (see the sketch after the map below).

# _convert_map defines maps of name to converter functor(callable)
# for 1 to 1 mapping, use Renamer if nothing but name is different
# use AttrCvt if attributes need to be converted
# for 1 to N mapping(composed), use custom callable functions
# for N to 1 mapping, currently not supported(?)
def _get_convert_map(opset):
    return {
        # defs/experimental
        'Identity': Renamer('copy'),
        # 'Affine'
        'ThresholdedRelu': ThresholdedRelu.get_converter(opset),
        'ScaledTanh': ScaledTanh.get_converter(opset),
        'ParametricSoftplus': ParametricSoftPlus.get_converter(opset),
        'ConstantOfShape': ConstantOfShape.get_converter(opset),
        # 'GivenTensorFill'
        'FC': AttrCvt('dense', ignores=['axis', 'axis_w']),
        'Scale': Scale.get_converter(opset),
        # 'GRUUnit'
        # 'ATen'
        # 'ImageScaler'
        # 'MeanVarianceNormalization'
        # 'Crop'
        # 'Embedding'
        'Upsample': Upsample.get_converter(opset),
        ...
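
For the 1-to-N (composed) case mentioned above, the pattern is the same as for Upsample: a converter class whose _impl_vx builds the result from several relay ops, plus an entry in the map. A minimal sketch (Softsign is used here only as an illustration of composing relay ops, not as the actual implementation in onnx.py):

# Sketch: a composite (1-to-N) converter assembled from several relay ops.
class Softsign(OnnxOpConverter):
    """ Illustrative converter: Softsign(x) = x / (1 + |x|). """

    @classmethod
    def _impl_v1(cls, inputs, attr, params):
        one = _expr.const(1.0)
        return inputs[0] / (one + _op.abs(inputs[0]))

# and in _get_convert_map:
#     'Softsign': Softsign.get_converter(opset),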

3 Add tests

Add the corresponding test code to tests/python/frontend/onnx/test_forward.py:

def _test_upsample_nearest():
    scale = 2
    in_shape = (1, 1, 3, 3)
    out_shape = (1, 1, 3*scale, 3*scale)
    y = helper.make_node("Upsample", ['in'], ['out'],
                         mode='nearest', scales=[1.0, 1.0, 2.0, 2.0])

    in_array = np.random.uniform(size=in_shape).astype(np.float32)
    out_array = topi.testing.upsampling_python(
        in_array, (scale, scale), "NCHW")

    graph = helper.make_graph([y],
                              'upsample_nearest_test',
                              inputs=[helper.make_tensor_value_info(
                                  "in", TensorProto.FLOAT, list(in_shape))],
                              outputs=[helper.make_tensor_value_info(
                                  "out", TensorProto.FLOAT, list(out_shape))])

    model = helper.make_model(graph, producer_name='upsample_nearest_test')

    for target, ctx in ctx_list():
        tvm_out = get_tvm_output(
            model, in_array, target, ctx, out_shape, 'float32')
        tvm.testing.assert_allclose(out_array, tvm_out)


def _test_upsample_bilinear():
    scale = 2
    in_shape = (1, 1, 3, 3)
    out_shape = (1, 1, 3*scale, 3*scale)
    y = helper.make_node("Upsample", ['in'], ['out'],
                         mode='linear', scales=[1.0, 1.0, 2.0, 2.0])

    in_array = np.random.uniform(size=in_shape).astype(np.float32)
    out_array = topi.testing.bilinear_resize_python(
        in_array, (3*scale, 3*scale), "NCHW")

    graph = helper.make_graph([y],
                              'upsample_bilinear_test',
                              inputs=[helper.make_tensor_value_info(
                                  "in", TensorProto.FLOAT, list(in_shape))],
                              outputs=[helper.make_tensor_value_info(
                                  "out", TensorProto.FLOAT, list(out_shape))])

    model = helper.make_model(graph, producer_name='upsample_bilinear_test')

    for target, ctx in ctx_list():
        tvm_out = get_tvm_output(
            model, in_array, target, ctx, out_shape, 'float32')
        tvm.testing.assert_allclose(out_array, tvm_out, rtol=1e-5, atol=1e-5)


def _test_upsample_bilinear_opset9():
    scale = 2
    in_shape = (1, 1, 3, 3)
    out_shape = (1, 1, 3*scale, 3*scale)
    y = helper.make_node("Upsample", ['in', 'scales'], ['out'], mode='linear')

    scales = [1.0, 1.0, 2.0, 2.0]
    in_array = np.random.uniform(size=in_shape).astype(np.float32)
    out_array = topi.testing.bilinear_resize_python(
        in_array, (3*scale, 3*scale), "NCHW")

    ref_array = np.array(scales)
    ref_node = helper.make_node('Constant',
                                inputs=[],
                                outputs=['scales'],
                                value=onnx.helper.make_tensor(
                                    name='const_tensor',
                                    data_type=TensorProto.FLOAT,
                                    dims=ref_array.shape,
                                    vals=ref_array.flatten().astype(float)))

    graph = helper.make_graph([ref_node, y],
                              'upsample_bilinear_opset9_test',
                              inputs=[helper.make_tensor_value_info(
                                  "in", TensorProto.FLOAT, list(in_shape))],
                              outputs=[helper.make_tensor_value_info(
                                  "out", TensorProto.FLOAT, list(out_shape))])

    model = helper.make_model(
        graph, producer_name='upsample_bilinear_opset9_test')

    for target, ctx in ctx_list():
        tvm_out = get_tvm_output(
            model, in_array, target, ctx, out_shape, 'float32')
        tvm.testing.assert_allclose(out_array, tvm_out, rtol=1e-5, atol=1e-5)


def test_upsample():
    _test_upsample_nearest()
    _test_upsample_bilinear()
    _test_upsample_bilinear_opset9()


if __name__ == '__main__':
    test_upsample()