- Introduction to ONNX models
- ONNX tutorial
Installing TensorRT
Official documentation: https://docs.nvidia.com/deeplearning/tensorrt/install-guide/index.html
Installation from the TAR package:
https://docs.nvidia.com/deeplearning/tensorrt/install-guide/index.html#installing-tar

Building onnxruntime from Source
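In practice the TAR route amounts to extracting the archive and exporting a couple of environment variables so that later builds can find it. A minimal sketch, assuming an already-downloaded tarball; the helper name `trt_env` and the version number in the comments are placeholders of mine, not from the official guide:

```shell
# Sketch only: point the environment at an extracted TensorRT TAR package.
# trt_env and the example version below are hypothetical placeholders.
trt_env() {
    # $1: directory produced by `tar -xzvf TensorRT-*.tar.gz`
    local trt_dir="$1"
    if [ ! -d "${trt_dir}/lib" ]; then
        echo "no lib/ under ${trt_dir}; is this an extracted TensorRT tree?" >&2
        return 1
    fi
    export TENSORRT_HOME="${trt_dir}"
    export LD_LIBRARY_PATH="${trt_dir}/lib${LD_LIBRARY_PATH:+:${LD_LIBRARY_PATH}}"
    echo "TENSORRT_HOME=${TENSORRT_HOME}"
}

# Example (version is illustrative):
# tar -xzvf TensorRT-7.2.3.4.*.tar.gz
# trt_env "$PWD/TensorRT-7.2.3.4"
```

The exported TENSORRT_HOME can then be handed to onnxruntime's build.sh via --tensorrt_home.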
onnxruntime website: https://www.onnxruntime.ai/index.html
Introduction to onnxruntime
- Source code
- Download the source
- git clone -b rel-1.7.1 --recursive https://github.com/Microsoft/onnxruntime
- Update the submodules
- cd onnxruntime && git submodule update --init --recursive
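The two download steps above can be wrapped in one small script; the `fetch_onnxruntime` helper and its `DRY_RUN` flag are my own additions for illustration:

```shell
# Wraps the clone + submodule steps above. DRY_RUN=1 only prints the commands,
# which is useful for double-checking the pinned branch before a long clone.
fetch_onnxruntime() {
    local branch="${1:-rel-1.7.1}"
    local run="eval"
    if [ "${DRY_RUN:-0}" = "1" ]; then
        run="echo"
    fi
    $run "git clone -b ${branch} --recursive https://github.com/Microsoft/onnxruntime"
    $run "cd onnxruntime && git submodule update --init --recursive"
}

# DRY_RUN=1 fetch_onnxruntime rel-1.7.1   # preview the commands only
```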
Build
- Linux build of onnxruntime
- ./build.sh --build_shared_lib --config Release --use_cuda --cudnn_home /path/to/cudnn --cuda_home /usr/local/cuda --use_tensorrt --tensorrt_home /path/to/tensorrt --update --build
- Windows build of onnxruntime
- Errors hit during the Linux build of onnxruntime
Check for working CUDA compiler: /usr/bin/nvcc - broken
['/usr/local/bin/cmake', '/home/felaim/Documents/code/onnxruntime/cmake', '-Donnxruntime_RUN_ONNX_TESTS=OFF', '-Donnxruntime_GENERATE_TEST_REPORTS=ON', '-Donnxruntime_DEV_MODE=ON', '-DPYTHON_EXECUTABLE=/usr/bin/python3', '-Donnxruntime_USE_CUDA=ON', '-Donnxruntime_USE_NSYNC=OFF', '-Donnxruntime_CUDNN_HOME=/usr/lib/x86_64-linux-gnu/', '-Donnxruntime_USE_AUTOML=OFF', '-Donnxruntime_CUDA_HOME=/usr/local/cuda', '-Donnxruntime_USE_JEMALLOC=OFF', '-Donnxruntime_USE_MIMALLOC=OFF', '-Donnxruntime_ENABLE_PYTHON=OFF', '-Donnxruntime_BUILD_CSHARP=OFF', '-Donnxruntime_BUILD_JAVA=OFF', '-Donnxruntime_BUILD_SHARED_LIB=ON', '-Donnxruntime_USE_EIGEN_FOR_BLAS=ON', '-Donnxruntime_USE_OPENBLAS=OFF', '-Donnxruntime_USE_DNNL=OFF', '-Donnxruntime_USE_MKLML=OFF', '-Donnxruntime_USE_GEMMLOWP=OFF', '-Donnxruntime_USE_NGRAPH=OFF', '-Donnxruntime_USE_OPENVINO=OFF', '-Donnxruntime_USE_OPENVINO_MYRIAD=OFF', '-Donnxruntime_USE_OPENVINO_GPU_FP32=OFF', '-Donnxruntime_USE_OPENVINO_GPU_FP16=OFF', '-Donnxruntime_USE_OPENVINO_CPU_FP32=OFF', '-Donnxruntime_USE_OPENVINO_VAD_M=OFF', '-Donnxruntime_USE_OPENVINO_VAD_F=OFF', '-Donnxruntime_USE_NNAPI=OFF', '-Donnxruntime_USE_OPENMP=ON', '-Donnxruntime_USE_TVM=OFF', '-Donnxruntime_USE_LLVM=OFF', '-Donnxruntime_ENABLE_MICROSOFT_INTERNAL=OFF', '-Donnxruntime_USE_BRAINSLICE=OFF', '-Donnxruntime_USE_NUPHAR=OFF', '-Donnxruntime_USE_EIGEN_THREADPOOL=OFF', '-Donnxruntime_USE_TENSORRT=ON', '-Donnxruntime_TENSORRT_HOME=path to tensorrt', '-Donnxruntime_CROSS_COMPILING=OFF', '-Donnxruntime_BUILD_SERVER=OFF', '-Donnxruntime_BUILD_x86=OFF', '-Donnxruntime_USE_FULL_PROTOBUF=ON', '-Donnxruntime_DISABLE_CONTRIB_OPS=OFF', '-Donnxruntime_MSVC_STATIC_RUNTIME=OFF', '-Donnxruntime_ENABLE_LANGUAGE_INTEROP_OPS=OFF', '-Donnxruntime_USE_DML=OFF', '-Donnxruntime_USE_TELEMETRY=OFF', '-DCUDA_CUDA_LIBRARY=/usr/local/cuda/lib64/stubs', '-Donnxruntime_PYBIND_EXPORT_OPSCHEMA=OFF', '-DCMAKE_BUILD_TYPE=Release']
Supplying the real cuDNN directory (/usr/lib/x86_64-linux-gnu/ on Ubuntu) together with the TensorRT path got the build through configuration:
./build.sh --build_shared_lib --config Release --use_cuda --cudnn_home /usr/lib/x86_64-linux-gnu/ --cuda_home /usr/local/cuda --use_tensorrt --tensorrt_home your_path_to_tensorrt
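The "broken" nvcc message above typically means CMake found a path where nvcc is missing or not executable. Before rerunning a long configure, a quick pre-flight check along these lines can help; the function names and default paths here are assumptions of mine, not part of onnxruntime's build scripts:

```shell
# Hypothetical pre-flight checks before invoking build.sh.
check_cuda() {
    # CMake reports the CUDA compiler as "broken" when nvcc cannot be executed.
    local cuda_home="$1"
    if [ -x "${cuda_home}/bin/nvcc" ]; then
        echo "nvcc found: ${cuda_home}/bin/nvcc"
    else
        echo "nvcc missing or not executable under ${cuda_home}/bin"
        return 1
    fi
}

check_cudnn() {
    # apt-installed cuDNN places libcudnn under /usr/lib/x86_64-linux-gnu.
    local cudnn_home="$1"
    if ls "${cudnn_home}"/libcudnn* >/dev/null 2>&1; then
        echo "cuDNN libraries found under ${cudnn_home}"
    else
        echo "no libcudnn under ${cudnn_home}"
        return 1
    fi
}

# Defaults match a typical Ubuntu layout; override via CUDA_HOME / CUDNN_HOME.
check_cuda "${CUDA_HOME:-/usr/local/cuda}" || true
check_cudnn "${CUDNN_HOME:-/usr/lib/x86_64-linux-gnu}" || true
```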
Copyright notice: this is an original article by CSDN blogger "Felaim", released under the CC 4.0 BY-SA license. Please include a link to the original article and this notice when reposting.
Original article: https://blog.csdn.net/Felaim/article/details/105726039