RetinaFace: MXNet Model to ONNX to TensorRT
1. Open-source code on GitHub
The open-source code for RetinaFace TensorRT inference is at https://github.com/linghu8812/tensorrt_inference/tree/master/RetinaFace.
2. Converting the MXNet model to ONNX
First clone the insightface code with git clone https://github.com/deepinsight/insightface.git, then copy the export_onnx.py file into the ./detection/RetinaFace or ./detection/RetinaFaceAntiCov folder and generate the ONNX file with the commands below. The RetinaFace-R50, RetinaFace-MobileNet0.25, and RetinaFaceAntiCov models are all supported. The models can be exported with the following commands:
- Export the ResNet50 model
python3 export_onnx.py
- Export the MobileNet 0.25 model
python3 export_onnx.py --prefix ./model/mnet.25
- Export the RetinaFaceAntiCov model
python3 export_onnx.py --prefix ./model/mnet_cov2 --network net3l
As with the YOLOv4 model, the output results are also concatenated, as shown in the figure below.
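A quick way to sanity-check the exported graph is to run it once with ONNX Runtime on a dummy input and print the output shapes. The snippet below is only a minimal sketch, assuming a 1x3x640x640 input (matching the config.yaml later in this post) and that the onnxruntime package is installed; it is not part of the repository.
import numpy as np
import onnxruntime as ort

# Load the exported model and run one dummy forward pass
sess = ort.InferenceSession("R50.onnx")
input_name = sess.get_inputs()[0].name
dummy = np.zeros((1, 3, 640, 640), dtype=np.float32)
outputs = sess.run(None, {input_name: dummy})

# Print each output's name and shape to inspect the concatenated heads
for meta, out in zip(sess.get_outputs(), outputs):
    print(meta.name, out.shape)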
3. Converting the ONNX model to a TensorRT model
3.1 Overview
The TensorRT model is TensorRT's inference engine, and in this project it is implemented in C++. The relevant settings live in the config.yaml file: if the file at the engine_file path exists, it is read directly; otherwise the engine_file is generated from the onnx_file.
void RetinaFace::LoadEngine() {
    // create and load engine
    std::fstream existEngine;
    existEngine.open(engine_file, std::ios::in);
    if (existEngine) {
        readTrtFile(engine_file, engine);
        assert(engine != nullptr);
    } else {
        onnxToTRTModel(onnx_file, engine_file, engine, BATCH_SIZE);
        assert(engine != nullptr);
    }
}
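For reference, what onnxToTRTModel does, parsing the ONNX file and serializing a TensorRT engine, can be sketched with the TensorRT Python API as follows. This is only an illustrative sketch (explicit-batch network, 1 GB workspace), not the repository's C++ implementation, and the exact builder calls differ slightly between TensorRT versions.
import tensorrt as trt

TRT_LOGGER = trt.Logger(trt.Logger.WARNING)

def onnx_to_engine(onnx_file, engine_file):
    builder = trt.Builder(TRT_LOGGER)
    # Explicit-batch network definition
    flag = 1 << int(trt.NetworkDefinitionCreationFlag.EXPLICIT_BATCH)
    network = builder.create_network(flag)
    parser = trt.OnnxParser(network, TRT_LOGGER)
    # Parse the ONNX file into the network definition
    with open(onnx_file, "rb") as f:
        if not parser.parse(f.read()):
            for i in range(parser.num_errors):
                print(parser.get_error(i))
            raise RuntimeError("failed to parse the ONNX file")
    config = builder.create_builder_config()
    config.max_workspace_size = 1 << 30  # 1 GB workspace
    engine = builder.build_engine(network, config)
    # Serialize the engine so it can be loaded directly next time
    with open(engine_file, "wb") as f:
        f.write(engine.serialize())
    return engine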
The config.yaml file sets the batch size, the image size, the model's anchors, and other parameters.
RetinaFace:
onnx_file: "../R50.onnx"
engine_file: "../R50.trt"
BATCH_SIZE: 1
INPUT_CHANNEL: 3
IMAGE_WIDTH: 640
IMAGE_HEIGHT: 640
obj_threshold: 0.5
nms_threshold: 0.45
detect_mask: False
mask_thresh: 0.5
landmark_std: 1
feature_steps: [32, 16, 8]
anchor_sizes: [[512, 256], [128, 64], [32, 16]]
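The feature_steps and anchor_sizes entries describe the anchor layout: for each stride (32, 16, 8) two square anchor sizes are placed at every cell of the corresponding feature map. Below is a rough numpy sketch of how such anchors can be enumerated for a 640x640 input; the repository does this decoding in C++, so this is only an illustration of the idea, not its exact code.
import numpy as np

def generate_anchors(image_size=640,
                     feature_steps=(32, 16, 8),
                     anchor_sizes=((512, 256), (128, 64), (32, 16))):
    anchors = []  # each row: center_x, center_y, width, height in pixels
    for step, sizes in zip(feature_steps, anchor_sizes):
        grid = image_size // step  # feature-map resolution at this stride
        for y in range(grid):
            for x in range(grid):
                cx, cy = (x + 0.5) * step, (y + 0.5) * step
                for size in sizes:
                    anchors.append([cx, cy, size, size])
    return np.array(anchors, dtype=np.float32)

print(generate_anchors().shape)  # (16800, 4) for a 640x640 input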
3.2 Build
Compile the project with the following commands to generate RetinaFace_trt:
mkdir build && cd build
cmake ..
make -j
3.3 Run
Run the project with the following commands to get the inference results:
- RetinaFace model inference
./RetinaFace_trt ../config.yaml ../samples
- RetinaFaceAntiCov model inference
./RetinaFace_trt ../config_anti.yaml ../samples
4. Inference results
- RetinaFace inference results:
- RetinaFaceAntiCov inference results: