Add DeepStream 5.1 support
@@ -43,6 +43,12 @@ Generate the ONNX model file (example for DAMO-YOLO-S*)
python3 export_damoyolo.py -w damoyolo_tinynasL25_S_477.pth -c configs/damoyolo_tinynasL25_S.py --simplify --dynamic
```

**NOTE**: If you are using DeepStream 5.1, use opset 11 or lower.

```
--opset 11
```
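As a sketch only, the full export call for DeepStream 5.1 could look like the line below, which simply appends the flag to the example command above (whether the remaining arguments, e.g. `--dynamic`, still apply on DeepStream 5.1 is assumed):

```
python3 export_damoyolo.py -w damoyolo_tinynasL25_S_477.pth -c configs/damoyolo_tinynasL25_S.py --simplify --dynamic --opset 11
```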
**NOTE**: To change the inference size (default: 640)

```
@@ -98,6 +104,12 @@ Open the `DeepStream-Yolo` folder and compile the lib
CUDA_VER=11.4 make -C nvdsinfer_custom_impl_Yolo
```

* DeepStream 5.1 on x86 platform

```
CUDA_VER=11.1 LEGACY=1 make -C nvdsinfer_custom_impl_Yolo
```

* DeepStream 6.2 / 6.1.1 / 6.1 on Jetson platform

```
@@ -110,6 +122,12 @@ Open the `DeepStream-Yolo` folder and compile the lib
CUDA_VER=10.2 make -C nvdsinfer_custom_impl_Yolo
```

* DeepStream 5.1 on Jetson platform

```
CUDA_VER=10.2 LEGACY=1 make -C nvdsinfer_custom_impl_Yolo
```

##

### Edit the config_infer_primary_damoyolo file
@@ -41,6 +41,12 @@ pip3 install onnx onnxsim onnxruntime
python3 export_ppyoloe.py -w ppyoloe_plus_crn_s_80e_coco.pdparams -c configs/ppyoloe/ppyoloe_plus_crn_s_80e_coco.yml --simplify
```

**NOTE**: If you are using DeepStream 5.1, use opset 12 or lower. The default opset is 11.

```
--opset 12
```
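For example (a sketch, assuming the other arguments from the command above are unchanged), the flag can simply be appended to the export call:

```
python3 export_ppyoloe.py -w ppyoloe_plus_crn_s_80e_coco.pdparams -c configs/ppyoloe/ppyoloe_plus_crn_s_80e_coco.yml --simplify --opset 12
```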
#### 5. Copy generated files

Copy the generated ONNX model file and labels.txt file (if generated) to the `DeepStream-Yolo` folder.
@@ -75,6 +81,12 @@ Open the `DeepStream-Yolo` folder and compile the lib
CUDA_VER=11.4 make -C nvdsinfer_custom_impl_Yolo
```

* DeepStream 5.1 on x86 platform

```
CUDA_VER=11.1 LEGACY=1 make -C nvdsinfer_custom_impl_Yolo
```

* DeepStream 6.2 / 6.1.1 / 6.1 on Jetson platform

```
@@ -87,6 +99,12 @@ Open the `DeepStream-Yolo` folder and compile the lib
CUDA_VER=10.2 make -C nvdsinfer_custom_impl_Yolo
```

* DeepStream 5.1 on Jetson platform

```
CUDA_VER=10.2 LEGACY=1 make -C nvdsinfer_custom_impl_Yolo
```

##

### Edit the config_infer_primary_ppyoloe_plus file
@@ -46,6 +46,12 @@ Generate the ONNX model file (example for YOLO-NAS S)
python3 export_yolonas.py -m yolo_nas_s -w yolo_nas_s_coco.pth --simplify --dynamic
```

**NOTE**: If you are using DeepStream 5.1, use opset 12 or lower. The default opset is 14.

```
--opset 12
```
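As an illustration only, a DeepStream 5.1 export could be run by adding the flag to the command shown above (the other arguments are assumed unchanged):

```
python3 export_yolonas.py -m yolo_nas_s -w yolo_nas_s_coco.pth --simplify --dynamic --opset 12
```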
**NOTE**: Model names

```
@@ -119,6 +125,12 @@ Open the `DeepStream-Yolo` folder and compile the lib
CUDA_VER=11.4 make -C nvdsinfer_custom_impl_Yolo
```

* DeepStream 5.1 on x86 platform

```
CUDA_VER=11.1 LEGACY=1 make -C nvdsinfer_custom_impl_Yolo
```

* DeepStream 6.2 / 6.1.1 / 6.1 on Jetson platform

```
@@ -131,6 +143,12 @@ Open the `DeepStream-Yolo` folder and compile the lib
CUDA_VER=10.2 make -C nvdsinfer_custom_impl_Yolo
```

* DeepStream 5.1 on Jetson platform

```
CUDA_VER=10.2 LEGACY=1 make -C nvdsinfer_custom_impl_Yolo
```

##

### Edit the config_infer_primary_yolonas file
@@ -55,6 +55,12 @@ Generate the ONNX model file
python3 export_yolor.py -w yolor-p6.pt --simplify --dynamic
```

**NOTE**: If you are using DeepStream 5.1, use opset 12 or lower. The default opset is 12.

```
--opset 12
```
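For example (sketch only; the remaining arguments from the command above are assumed to still apply):

```
python3 export_yolor.py -w yolor-p6.pt --simplify --dynamic --opset 12
```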
**NOTE**: To convert a P6 model

```
@@ -116,6 +122,12 @@ Open the `DeepStream-Yolo` folder and compile the lib
CUDA_VER=11.4 make -C nvdsinfer_custom_impl_Yolo
```

* DeepStream 5.1 on x86 platform

```
CUDA_VER=11.1 LEGACY=1 make -C nvdsinfer_custom_impl_Yolo
```

* DeepStream 6.2 / 6.1.1 / 6.1 on Jetson platform

```
@@ -128,6 +140,12 @@ Open the `DeepStream-Yolo` folder and compile the lib
CUDA_VER=10.2 make -C nvdsinfer_custom_impl_Yolo
```

* DeepStream 5.1 on Jetson platform

```
CUDA_VER=10.2 LEGACY=1 make -C nvdsinfer_custom_impl_Yolo
```

##

### Edit the config_infer_primary_yolor file
@@ -46,6 +46,12 @@ Generate the ONNX model file (example for YOLOX-s)
python3 export_yolox.py -w yolox_s.pth -c exps/default/yolox_s.py --simplify --dynamic
```

**NOTE**: If you are using DeepStream 5.1, use opset 12 or lower. The default opset is 11.

```
--opset 12
```
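As a sketch, a DeepStream 5.1 export might be invoked by adding the flag to the command shown above (other arguments assumed unchanged):

```
python3 export_yolox.py -w yolox_s.pth -c exps/default/yolox_s.py --simplify --dynamic --opset 12
```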
#### 5. Copy generated file

Copy the generated ONNX model file to the `DeepStream-Yolo` folder.
@@ -80,6 +86,12 @@ Open the `DeepStream-Yolo` folder and compile the lib
CUDA_VER=11.4 make -C nvdsinfer_custom_impl_Yolo
```

* DeepStream 5.1 on x86 platform

```
CUDA_VER=11.1 LEGACY=1 make -C nvdsinfer_custom_impl_Yolo
```

* DeepStream 6.2 / 6.1.1 / 6.1 on Jetson platform

```
@@ -92,6 +104,12 @@ Open the `DeepStream-Yolo` folder and compile the lib
CUDA_VER=10.2 make -C nvdsinfer_custom_impl_Yolo
```

* DeepStream 5.1 on Jetson platform

```
CUDA_VER=10.2 LEGACY=1 make -C nvdsinfer_custom_impl_Yolo
```

##

### Edit the config_infer_primary_yolox file
@@ -47,6 +47,12 @@ Generate the ONNX model file (example for YOLOv5s)
python3 export_yoloV5.py -w yolov5s.pt --simplify --dynamic
```

**NOTE**: If you are using DeepStream 5.1, use opset 12 or lower. The default opset is 17.

```
--opset 12
```
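As a sketch, the export command above with the flag appended (assuming the other arguments remain valid on DeepStream 5.1):

```
python3 export_yoloV5.py -w yolov5s.pt --simplify --dynamic --opset 12
```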
**NOTE**: To convert a P6 model

```
@@ -108,6 +114,12 @@ Open the `DeepStream-Yolo` folder and compile the lib
CUDA_VER=11.4 make -C nvdsinfer_custom_impl_Yolo
```

* DeepStream 5.1 on x86 platform

```
CUDA_VER=11.1 LEGACY=1 make -C nvdsinfer_custom_impl_Yolo
```

* DeepStream 6.2 / 6.1.1 / 6.1 on Jetson platform

```
@@ -120,6 +132,12 @@ Open the `DeepStream-Yolo` folder and compile the lib
CUDA_VER=10.2 make -C nvdsinfer_custom_impl_Yolo
```

* DeepStream 5.1 on Jetson platform

```
CUDA_VER=10.2 LEGACY=1 make -C nvdsinfer_custom_impl_Yolo
```

##

### Edit the config_infer_primary_yoloV5 file
@@ -47,6 +47,12 @@ Generate the ONNX model file (example for YOLOv6-S 4.0)
python3 export_yoloV6.py -w yolov6s.pt --simplify --dynamic
```

**NOTE**: If you are using DeepStream 5.1, use opset 12 or lower. The default opset is 13.

```
--opset 12
```
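For example, the command above could be extended as sketched below for DeepStream 5.1 (the remaining arguments are assumed unchanged):

```
python3 export_yoloV6.py -w yolov6s.pt --simplify --dynamic --opset 12
```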
**NOTE**: To convert a P6 model

```
@@ -108,6 +114,12 @@ Open the `DeepStream-Yolo` folder and compile the lib
CUDA_VER=11.4 make -C nvdsinfer_custom_impl_Yolo
```

* DeepStream 5.1 on x86 platform

```
CUDA_VER=11.1 LEGACY=1 make -C nvdsinfer_custom_impl_Yolo
```

* DeepStream 6.2 / 6.1.1 / 6.1 on Jetson platform

```
@@ -120,6 +132,12 @@ Open the `DeepStream-Yolo` folder and compile the lib
CUDA_VER=10.2 make -C nvdsinfer_custom_impl_Yolo
```

* DeepStream 5.1 on Jetson platform

```
CUDA_VER=10.2 LEGACY=1 make -C nvdsinfer_custom_impl_Yolo
```

##

### Edit the config_infer_primary_yoloV6 file
@@ -49,6 +49,12 @@ Generate the ONNX model file (example for YOLOv7)
python3 export_yoloV7.py -w yolov7.pt --simplify --dynamic
```

**NOTE**: If you are using DeepStream 5.1, use opset 12 or lower. The default opset is 12.

```
--opset 12
```
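As an illustration, appending the flag to the export command shown above (assuming the other arguments still apply):

```
python3 export_yoloV7.py -w yolov7.pt --simplify --dynamic --opset 12
```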
**NOTE**: To convert a P6 model

```
@@ -110,6 +116,12 @@ Open the `DeepStream-Yolo` folder and compile the lib
CUDA_VER=11.4 make -C nvdsinfer_custom_impl_Yolo
```

* DeepStream 5.1 on x86 platform

```
CUDA_VER=11.1 LEGACY=1 make -C nvdsinfer_custom_impl_Yolo
```

* DeepStream 6.2 / 6.1.1 / 6.1 on Jetson platform

```
@@ -122,6 +134,12 @@ Open the `DeepStream-Yolo` folder and compile the lib
CUDA_VER=10.2 make -C nvdsinfer_custom_impl_Yolo
```

* DeepStream 5.1 on Jetson platform

```
CUDA_VER=10.2 LEGACY=1 make -C nvdsinfer_custom_impl_Yolo
```

##

### Edit the config_infer_primary_yoloV7 file
@@ -46,6 +46,12 @@ Generate the ONNX model file (example for YOLOv8s)
python3 export_yoloV8.py -w yolov8s.pt --simplify --dynamic
```

**NOTE**: If you are using DeepStream 5.1, use opset 12 or lower. The default opset is 16.

```
--opset 12
```
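For example, a DeepStream 5.1 export might look like the sketch below, adding the flag to the command shown above (other arguments assumed unchanged):

```
python3 export_yoloV8.py -w yolov8s.pt --simplify --dynamic --opset 12
```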
**NOTE**: To change the inference size (default: 640)

```
@@ -101,6 +107,12 @@ Open the `DeepStream-Yolo` folder and compile the lib
CUDA_VER=11.4 make -C nvdsinfer_custom_impl_Yolo
```

* DeepStream 5.1 on x86 platform

```
CUDA_VER=11.1 LEGACY=1 make -C nvdsinfer_custom_impl_Yolo
```

* DeepStream 6.2 / 6.1.1 / 6.1 on Jetson platform

```
@@ -113,6 +125,12 @@ Open the `DeepStream-Yolo` folder and compile the lib
CUDA_VER=10.2 make -C nvdsinfer_custom_impl_Yolo
```

* DeepStream 5.1 on Jetson platform

```
CUDA_VER=10.2 LEGACY=1 make -C nvdsinfer_custom_impl_Yolo
```

##

### Edit the config_infer_primary_yoloV8 file