# YOLOv7 usage

**NOTE**: The yaml file is not required.
- [Convert model](#convert-model)
- [Compile the lib](#compile-the-lib)
- [Edit the config_infer_primary_yoloV7 file](#edit-the-config_infer_primary_yolov7-file)
- [Edit the deepstream_app_config file](#edit-the-deepstream_app_config-file)
- [Testing the model](#testing-the-model)
## Convert model
#### 1. Download the YOLOv7 repo and install the requirements

```
git clone https://github.com/WongKinYiu/yolov7.git
cd yolov7
pip3 install -r requirements.txt
pip3 install onnx onnxsim onnxruntime
```

**NOTE**: It is recommended to use a Python virtualenv.
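For example, a typical virtualenv setup, run before the `pip3` commands above (the environment name `venv` is arbitrary):

```
python3 -m venv venv
source venv/bin/activate
```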
#### 2. Copy the converter

Copy the `export_yoloV7.py` file from the `DeepStream-Yolo/utils` directory to the `yolov7` folder.
#### 3. Download the model

Download the `pt` file from [YOLOv7 releases](https://github.com/WongKinYiu/yolov7/releases) (example for YOLOv7)

```
wget https://github.com/WongKinYiu/yolov7/releases/download/v0.1/yolov7.pt
```

**NOTE**: You can use your custom model.
#### 4. Reparameterize your model (for custom models)

Custom YOLOv7 models cannot be converted to an engine file directly. Therefore, you will have to reparameterize your model using the code here. Make sure to convert your custom checkpoints in the YOLOv7 repository, and then save your reparameterized checkpoints for conversion in the next step.
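The reparameterization itself is model specific. As a rough illustration of the pattern only (not a drop-in script: the checkpoint names, deploy config path, and class count below are assumptions, and the per-layer fusion is elided), it looks like:

```
# Illustrative sketch, run from the yolov7 repo root. See the official
# reparameterization code for the exact, model-specific steps.
import torch

from models.yolo import Model  # module from the yolov7 repository

device = torch.device("cpu")
ckpt = torch.load("yolov7_custom.pt", map_location=device)  # your trained checkpoint

# Build the deploy-structure model and load the intersecting trained weights
model = Model("cfg/deploy/yolov7.yaml", ch=3, nc=80).to(device)  # nc = your class count
state_dict = ckpt["model"].float().state_dict()
intersect = {k: v for k, v in state_dict.items()
             if k in model.state_dict() and model.state_dict()[k].shape == v.shape}
model.load_state_dict(intersect, strict=False)
model.names = ckpt["model"].names
model.nc = ckpt["model"].nc

# ... the official code then folds the trained implicit (ImplicitA/ImplicitM)
# layers into the detection-head convolutions before saving ...

torch.save({"model": model}, "yolov7_reparameterized.pt")  # hypothetical output name
```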
#### 5. Convert model

Generate the ONNX model file (example for YOLOv7)

```
python3 export_yoloV7.py -w yolov7.pt --dynamic
```
**NOTE**: To convert a P6 model

```
--p6
```

**NOTE**: To change the inference size (default: 640 / 1280 for `--p6` models)

```
-s SIZE
--size SIZE
-s HEIGHT WIDTH
--size HEIGHT WIDTH
```

Example for 1280

```
-s 1280
```

or

```
-s 1280 1280
```

**NOTE**: To simplify the ONNX model (DeepStream >= 6.0)

```
--simplify
```

**NOTE**: To use dynamic batch-size (DeepStream >= 6.1)

```
--dynamic
```

**NOTE**: To use static batch-size (example for batch-size = 4)

```
--batch 4
```

**NOTE**: If you are using DeepStream 5.1, remove the `--dynamic` arg and use opset 12 or lower. The default opset is 12.

```
--opset 12
```
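These flags can be combined. For example, a simplified, static-batch 640x640 export could look like this (an illustrative combination, not a prescribed command; static `--batch` replaces `--dynamic`):

```
python3 export_yoloV7.py -w yolov7.pt --simplify --batch 4 -s 640
```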
#### 6. Copy generated files

Copy the generated ONNX model file and the `labels.txt` file (if generated) to the `DeepStream-Yolo` folder.
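For example, from the `yolov7` folder (the destination path is a placeholder):

```
cp yolov7.onnx labels.txt /path/to/DeepStream-Yolo/
```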
## Compile the lib

- Open the `DeepStream-Yolo` folder and compile the lib

- Set the `CUDA_VER` according to your DeepStream version

  ```
  export CUDA_VER=XY.Z
  ```

  - x86 platform

    ```
    DeepStream 7.0 / 6.4 = 12.2
    DeepStream 6.3 = 12.1
    DeepStream 6.2 = 11.8
    DeepStream 6.1.1 = 11.7
    DeepStream 6.1 = 11.6
    DeepStream 6.0.1 / 6.0 = 11.4
    DeepStream 5.1 = 11.1
    ```

  - Jetson platform

    ```
    DeepStream 7.0 / 6.4 = 12.2
    DeepStream 6.3 / 6.2 / 6.1.1 / 6.1 = 11.4
    DeepStream 6.0.1 / 6.0 / 5.1 = 10.2
    ```

- Make the lib

  ```
  make -C nvdsinfer_custom_impl_Yolo clean && make -C nvdsinfer_custom_impl_Yolo
  ```
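For example, on a Jetson device running DeepStream 6.3 (value taken from the table above):

```
export CUDA_VER=11.4
make -C nvdsinfer_custom_impl_Yolo clean && make -C nvdsinfer_custom_impl_Yolo
```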
## Edit the config_infer_primary_yoloV7 file

Edit the `config_infer_primary_yoloV7.txt` file according to your model (example for YOLOv7 with 80 classes)

```
[property]
...
onnx-file=yolov7.onnx
...
num-detected-classes=80
...
parse-bbox-func-name=NvDsInferParseYolo
...
```
**NOTE**: YOLOv7 resizes the input with center padding. To get better accuracy, use

```
[property]
...
maintain-aspect-ratio=1
symmetric-padding=1
...
```
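For reference, the snippets above slot into a fuller `[property]` section along these lines (a sketch: the engine file name, batch size, and network mode here are assumptions; the `config_infer_primary_yoloV7.txt` shipped in the repo is authoritative):

```
[property]
gpu-id=0
net-scale-factor=0.0039215697906911373
model-color-format=0
onnx-file=yolov7.onnx
model-engine-file=model_b1_gpu0_fp32.engine
labelfile-path=labels.txt
batch-size=1
network-mode=0
num-detected-classes=80
maintain-aspect-ratio=1
symmetric-padding=1
cluster-mode=2
parse-bbox-func-name=NvDsInferParseYolo
custom-lib-path=nvdsinfer_custom_impl_Yolo/libnvdsinfer_custom_impl_Yolo.so
engine-create-func-name=NvDsInferYoloCudaEngineGet
```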
## Edit the deepstream_app_config file

```
...
[primary-gie]
...
config-file=config_infer_primary_yoloV7.txt
```
## Testing the model

```
deepstream-app -c deepstream_app_config.txt
```
**NOTE**: The TensorRT engine file may take a very long time to generate (sometimes more than 10 minutes).

**NOTE**: For more information about custom model configuration (batch-size, network-mode, etc), please check the `docs/customModels.md` file.