diff --git a/README.md b/README.md
index 379bd7c..7d00486 100644
--- a/README.md
+++ b/README.md
@@ -2,39 +2,25 @@
NVIDIA DeepStream SDK 6.2 / 6.1.1 / 6.1 / 6.0.1 / 6.0 configuration for YOLO models
-### **I will be back with updates soon, I'm full of work from my jobs right now. Sorry for the delay.**
+-------------------------------------
+### **Big update on DeepStream-Yolo**
+-------------------------------------
### Future updates
+* Model benchmarks
* DeepStream tutorials
* Dynamic batch-size
-* Segmentation model support
-* Classification model support
+* Updated INT8 calibration
+* Support for segmentation models
+* Support for classification models
### Improvements on this repository
-* Darknet cfg params parser (no need to edit `nvdsparsebbox_Yolo.cpp` or other files)
-* Support for `new_coords` and `scale_x_y` params
-* Support for new models
-* Support for new layers
-* Support for new activations
-* Support for convolutional groups
* Support for INT8 calibration
* Support for non square models
-* New documentation for multiple models
-* YOLOv5 >= 2.0 support
-* YOLOR support
-* GPU YOLO Decoder [#138](https://github.com/marcoslucianops/DeepStream-Yolo/issues/138)
-* PP-YOLOE support
-* YOLOv7 support
-* Optimized NMS [#142](https://github.com/marcoslucianops/DeepStream-Yolo/issues/142)
-* Models benchmarks
-* YOLOv8 support
-* YOLOX support
-* PP-YOLOE+ support
-* YOLOv6 >= 2.0 support
-* **ONNX model support with GPU post-processing**
-* **YOLO-NAS support (ONNX)**
+* **Support for Darknet YOLO models (YOLOv4, etc.) using cfg and weights conversion with GPU post-processing**
+* **Support for YOLO-NAS, PP-YOLOE+, PP-YOLOE, YOLOX, YOLOR, YOLOv8, YOLOv7, YOLOv6 and YOLOv5 using ONNX conversion with GPU post-processing**
##
@@ -55,6 +41,7 @@ NVIDIA DeepStream SDK 6.2 / 6.1.1 / 6.1 / 6.0.1 / 6.0 configuration for YOLO mod
* [YOLOR usage](docs/YOLOR.md)
* [YOLOX usage](docs/YOLOX.md)
* [PP-YOLOE / PP-YOLOE+ usage](docs/PPYOLOE.md)
+* [YOLO-NAS usage](docs/YOLONAS.md)
* [Using your custom model](docs/customModels.md)
* [Multiple YOLO GIEs](docs/multipleGIEs.md)
@@ -133,13 +120,14 @@ NVIDIA DeepStream SDK 6.2 / 6.1.1 / 6.1 / 6.0.1 / 6.0 configuration for YOLO mod
* [Darknet YOLO](https://github.com/AlexeyAB/darknet)
* [MobileNet-YOLO](https://github.com/dog-qiuqiu/MobileNet-Yolo)
* [YOLO-Fastest](https://github.com/dog-qiuqiu/Yolo-Fastest)
-* [YOLOv5 >= 2.0](https://github.com/ultralytics/yolov5)
-* [YOLOv6 >= 2.0](https://github.com/meituan/YOLOv6)
+* [YOLOv5](https://github.com/ultralytics/yolov5)
+* [YOLOv6](https://github.com/meituan/YOLOv6)
* [YOLOv7](https://github.com/WongKinYiu/yolov7)
* [YOLOv8](https://github.com/ultralytics/ultralytics)
* [YOLOR](https://github.com/WongKinYiu/yolor)
* [YOLOX](https://github.com/Megvii-BaseDetection/YOLOX)
-* [PP-YOLOE / PP-YOLOE+](https://github.com/PaddlePaddle/PaddleDetection/tree/release/2.5/configs/ppyoloe)
+* [PP-YOLOE / PP-YOLOE+](https://github.com/PaddlePaddle/PaddleDetection/tree/release/2.6/configs/ppyoloe)
+* [YOLO-NAS](https://github.com/Deci-AI/super-gradients/blob/master/YOLONAS.md)
##
@@ -161,7 +149,7 @@ sample = 1920x1080 video
- Eval
```
-nms-iou-threshold = 0.6 (Darknet and YOLOv8) / 0.65 (YOLOv5, YOLOv6, YOLOv7, YOLOR and YOLOX) / 0.7 (Paddle)
+nms-iou-threshold = 0.6 (Darknet) / 0.65 (YOLOv5, YOLOv6, YOLOv7, YOLOR and YOLOX) / 0.7 (Paddle, YOLO-NAS and YOLOv8)
pre-cluster-threshold = 0.001
topk = 300
```
@@ -169,7 +157,7 @@ topk = 300
- Test
```
-nms-iou-threshold = 0.45 / 0.7 (Paddle)
+nms-iou-threshold = 0.45
pre-cluster-threshold = 0.25
topk = 300
```
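The `pre-cluster-threshold` and `topk` values above prune candidate detections before clustering: low-score boxes are dropped, and only the best `topk` survive. A minimal standalone sketch of that pruning step (illustration only; DeepStream performs this inside the nvinfer plugin):

```python
def prune(detections, pre_cluster_threshold=0.25, topk=300):
    """Keep detections scoring above the threshold, at most topk, best first."""
    kept = [d for d in detections if d["score"] >= pre_cluster_threshold]
    kept.sort(key=lambda d: d["score"], reverse=True)
    return kept[:topk]

dets = [{"score": s} for s in (0.9, 0.4, 0.2, 0.05)]
print(len(prune(dets)))  # 2 boxes survive the 0.25 threshold
```

This is why the eval settings use a near-zero `pre-cluster-threshold` (0.001): mAP evaluation wants as many candidates as possible, while the test settings (0.25) discard weak boxes early for speed.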
@@ -182,30 +170,7 @@ topk = 300
| DeepStream | Precision | Resolution | IoU=0.5:0.95 | IoU=0.5 | IoU=0.75 | FPS (without display) |
|:------------------:|:---------:|:----------:|:------------:|:-------:|:--------:|:--------------------------:|
-| PP-YOLOE-x | FP16 | 640 | 0.506 | 0.681 | 0.551 | 116.54 |
-| PP-YOLOE-l | FP16 | 640 | 0.498 | 0.674 | 0.545 | 187.93 |
-| PP-YOLOE-m | FP16 | 640 | 0.476 | 0.646 | 0.522 | 257.42 |
-| PP-YOLOE-s (400) | FP16 | 640 | 0.422 | 0.589 | 0.463 | 465.23 |
-| YOLOv7-E6E | FP16 | 1280 | 0.476 | 0.648 | 0.521 | 47.82 |
-| YOLOv7-D6 | FP16 | 1280 | 0.479 | 0.648 | 0.520 | 60.66 |
-| YOLOv7-E6 | FP16 | 1280 | 0.471 | 0.640 | 0.516 | 73.05 |
-| YOLOv7-W6 | FP16 | 1280 | 0.444 | 0.610 | 0.483 | 110.29 |
-| YOLOv7-X* | FP16 | 640 | 0.496 | 0.679 | 0.536 | 162.31 |
-| YOLOv7* | FP16 | 640 | 0.476 | 0.660 | 0.518 | 237.79 |
-| YOLOv7-Tiny Leaky* | FP16 | 640 | 0.345 | 0.516 | 0.372 | 611.36 |
-| YOLOv7-Tiny Leaky* | FP16 | 416 | 0.328 | 0.493 | 0.348 | 633.73 |
-| YOLOv5x6 6.1 | FP16 | 1280 | 0.508 | 0.683 | 0.554 | 54.88 |
-| YOLOv5l6 6.1 | FP16 | 1280 | 0.494 | 0.668 | 0.540 | 87.86 |
-| YOLOv5m6 6.1 | FP16 | 1280 | 0.469 | 0.644 | 0.514 | 142.68 |
-| YOLOv5s6 6.1 | FP16 | 1280 | 0.399 | 0.581 | 0.438 | 271.19 |
-| YOLOv5n6 6.1 | FP16 | 1280 | 0.317 | 0.487 | 0.344 | 392.20 |
-| YOLOv5x 6.1 | FP16 | 640 | 0.470 | 0.652 | 0.513 | 152.99 |
-| YOLOv5l 6.1 | FP16 | 640 | 0.454 | 0.636 | 0.496 | 247.60 |
-| YOLOv5m 6.1 | FP16 | 640 | 0.421 | 0.604 | 0.458 | 375.06 |
-| YOLOv5s 6.1 | FP16 | 640 | 0.344 | 0.528 | 0.371 | 602.44 |
-| YOLOv5n 6.1 | FP16 | 640 | 0.247 | 0.413 | 0.256 | 629.04 |
-| YOLOv4** | FP16 | 608 | 0.497 | 0.739 | 0.549 | 206.23 |
-| YOLOv4-Tiny | FP16 | 416 | 0.215 | 0.402 | 0.205 | 634.69 |
+| Coming soon | FP16 | 640 | | | | |
##
@@ -326,7 +291,7 @@ sudo prime-select nvidia
* Run
```
- sudo sh NVIDIA-Linux-x86_64-510.47.03.run --no-cc-version-check --silent --disable-nouveau --dkms --install-libglvnd --run-nvidia-xconfig
+ sudo sh NVIDIA-Linux-x86_64-525.105.17.run --no-cc-version-check --silent --disable-nouveau --dkms --install-libglvnd --run-nvidia-xconfig
```
@@ -1005,7 +970,7 @@ config-file=config_infer_primary_yoloV2.txt
### NMS Configuration
-To change the `nms-iou-threshold`, `pre-cluster-threshold` and `topk` values, modify the config_infer file and regenerate the model engine file
+To change the `nms-iou-threshold`, `pre-cluster-threshold` and `topk` values, modify the config_infer file
```
[class-attrs-all]
@@ -1014,16 +979,14 @@ pre-cluster-threshold=0.25
topk=300
```
-**NOTE**: It is important to regenerate the engine to get the max detection speed based on `pre-cluster-threshold` you set.
-
-**NOTE**: Lower `topk` values will result in more performance.
-
**NOTE**: Make sure to set `cluster-mode=2` in the config_infer file.
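With `cluster-mode=2`, nvinfer clusters boxes with non-maximum suppression driven by `nms-iou-threshold`. A minimal greedy-NMS sketch for intuition (standalone illustration, not the plugin's GPU implementation):

```python
def iou(a, b):
    # Boxes as (x1, y1, x2, y2).
    ix1, iy1 = max(a[0], b[0]), max(a[1], b[1])
    ix2, iy2 = min(a[2], b[2]), min(a[3], b[3])
    inter = max(0, ix2 - ix1) * max(0, iy2 - iy1)
    area_a = (a[2] - a[0]) * (a[3] - a[1])
    area_b = (b[2] - b[0]) * (b[3] - b[1])
    return inter / (area_a + area_b - inter)

def nms(boxes, scores, nms_iou_threshold=0.45):
    # Visit boxes best-first; keep a box only if it does not overlap
    # an already-kept box above the IoU threshold.
    order = sorted(range(len(boxes)), key=lambda i: scores[i], reverse=True)
    keep = []
    for i in order:
        if all(iou(boxes[i], boxes[j]) <= nms_iou_threshold for j in keep):
            keep.append(i)
    return keep

boxes = [(0, 0, 10, 10), (1, 1, 10, 10), (20, 20, 30, 30)]
scores = [0.9, 0.8, 0.7]
print(nms(boxes, scores))  # [0, 2] - the overlapping second box is suppressed
```

A higher `nms-iou-threshold` keeps more overlapping boxes; a lower one suppresses more aggressively.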
##
### INT8 calibration
+**NOTE**: For now, INT8 calibration is only available for Darknet YOLO models.
+
#### 1. Install OpenCV
```
@@ -1123,7 +1086,7 @@ sudo apt-get install libopencv-dev
deepstream-app -c deepstream_app_config.txt
```
-**NOTE**: NVIDIA recommends at least 500 images to get a good accuracy. On this example, I used 1000 images to get better accuracy (more images = more accuracy). Higher `INT8_CALIB_BATCH_SIZE` values will result in more accuracy and faster calibration speed. Set it according to you GPU memory. This process can take a long time.
+**NOTE**: NVIDIA recommends at least 500 images to get good accuracy. For this example, I recommend using 1000 images to get better accuracy (more images = more accuracy). Higher `INT8_CALIB_BATCH_SIZE` values will result in more accuracy and faster calibration speed. Set it according to your GPU memory. This process may take a long time.
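Since the calibrator consumes the image list in batches, the number of calibration passes is roughly the image count divided by `INT8_CALIB_BATCH_SIZE`, rounded up; that is the trade-off the note above describes. Illustrative arithmetic only:

```python
import math

def calibration_batches(num_images, int8_calib_batch_size):
    """Approximate number of forward passes the INT8 calibrator runs."""
    return math.ceil(num_images / int8_calib_batch_size)

print(calibration_batches(1000, 8))   # 125 passes
print(calibration_batches(1000, 32))  # 32 passes: fewer with a larger batch
```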
##
diff --git a/config_infer_primary_ppyoloe.txt b/config_infer_primary_ppyoloe.txt
index 99a096f..4060360 100644
--- a/config_infer_primary_ppyoloe.txt
+++ b/config_infer_primary_ppyoloe.txt
@@ -3,9 +3,8 @@ gpu-id=0
net-scale-factor=0.0173520735727919486
offsets=123.675;116.28;103.53
model-color-format=0
-custom-network-config=ppyoloe_crn_s_400e_coco.cfg
-model-file=ppyoloe_crn_s_400e_coco.wts
-model-engine-file=model_b1_gpu0_fp32.engine
+onnx-file=ppyoloe_crn_s_400e_coco.onnx
+model-engine-file=ppyoloe_crn_s_400e_coco.onnx_b1_gpu0_fp32.engine
#int8-calib-file=calib.table
labelfile-path=labels.txt
batch-size=1
@@ -17,11 +16,10 @@ process-mode=1
network-type=0
cluster-mode=2
maintain-aspect-ratio=0
-parse-bbox-func-name=NvDsInferParseYolo
+parse-bbox-func-name=NvDsInferParseYoloE
custom-lib-path=nvdsinfer_custom_impl_Yolo/libnvdsinfer_custom_impl_Yolo.so
-engine-create-func-name=NvDsInferYoloCudaEngineGet
[class-attrs-all]
-nms-iou-threshold=0.7
+nms-iou-threshold=0.45
pre-cluster-threshold=0.25
topk=300
diff --git a/config_infer_primary_ppyoloe_onnx.txt b/config_infer_primary_ppyoloe_onnx.txt
deleted file mode 100644
index f5c0036..0000000
--- a/config_infer_primary_ppyoloe_onnx.txt
+++ /dev/null
@@ -1,25 +0,0 @@
-[property]
-gpu-id=0
-net-scale-factor=0.0173520735727919486
-offsets=123.675;116.28;103.53
-model-color-format=0
-onnx-file=ppyoloe_crn_s_400e_coco.onnx
-model-engine-file=ppyoloe_crn_s_400e_coco.onnx_b1_gpu0_fp32.engine
-#int8-calib-file=calib.table
-labelfile-path=labels.txt
-batch-size=1
-network-mode=0
-num-detected-classes=80
-interval=0
-gie-unique-id=1
-process-mode=1
-network-type=0
-cluster-mode=2
-maintain-aspect-ratio=0
-parse-bbox-func-name=NvDsInferParse_PPYOLOE_ONNX
-custom-lib-path=nvdsinfer_custom_impl_Yolo/libnvdsinfer_custom_impl_Yolo.so
-
-[class-attrs-all]
-nms-iou-threshold=0.7
-pre-cluster-threshold=0.25
-topk=300
diff --git a/config_infer_primary_ppyoloe_plus.txt b/config_infer_primary_ppyoloe_plus.txt
index b7a6838..5b5b172 100644
--- a/config_infer_primary_ppyoloe_plus.txt
+++ b/config_infer_primary_ppyoloe_plus.txt
@@ -2,9 +2,8 @@
gpu-id=0
net-scale-factor=0.0039215697906911373
model-color-format=0
-custom-network-config=ppyoloe_plus_crn_s_80e_coco.cfg
-model-file=ppyoloe_plus_crn_s_80e_coco.wts
-model-engine-file=model_b1_gpu0_fp32.engine
+onnx-file=ppyoloe_plus_crn_s_80e_coco.onnx
+model-engine-file=ppyoloe_plus_crn_s_80e_coco.onnx_b1_gpu0_fp32.engine
#int8-calib-file=calib.table
labelfile-path=labels.txt
batch-size=1
@@ -16,11 +15,10 @@ process-mode=1
network-type=0
cluster-mode=2
maintain-aspect-ratio=0
-parse-bbox-func-name=NvDsInferParseYolo
+parse-bbox-func-name=NvDsInferParseYoloE
custom-lib-path=nvdsinfer_custom_impl_Yolo/libnvdsinfer_custom_impl_Yolo.so
-engine-create-func-name=NvDsInferYoloCudaEngineGet
[class-attrs-all]
-nms-iou-threshold=0.7
+nms-iou-threshold=0.45
pre-cluster-threshold=0.25
topk=300
diff --git a/config_infer_primary_ppyoloe_plus_onnx.txt b/config_infer_primary_ppyoloe_plus_onnx.txt
deleted file mode 100644
index 0baa131..0000000
--- a/config_infer_primary_ppyoloe_plus_onnx.txt
+++ /dev/null
@@ -1,24 +0,0 @@
-[property]
-gpu-id=0
-net-scale-factor=0.0039215697906911373
-model-color-format=0
-onnx-file=ppyoloe_plus_crn_s_80e_coco.onnx
-model-engine-file=ppyoloe_plus_crn_s_80e_coco.onnx_b1_gpu0_fp32.engine
-#int8-calib-file=calib.table
-labelfile-path=labels.txt
-batch-size=1
-network-mode=0
-num-detected-classes=80
-interval=0
-gie-unique-id=1
-process-mode=1
-network-type=0
-cluster-mode=2
-maintain-aspect-ratio=0
-parse-bbox-func-name=NvDsInferParse_PPYOLOE_ONNX
-custom-lib-path=nvdsinfer_custom_impl_Yolo/libnvdsinfer_custom_impl_Yolo.so
-
-[class-attrs-all]
-nms-iou-threshold=0.7
-pre-cluster-threshold=0.25
-topk=300
diff --git a/config_infer_primary_yoloV5.txt b/config_infer_primary_yoloV5.txt
index 601ffb4..f294ef6 100644
--- a/config_infer_primary_yoloV5.txt
+++ b/config_infer_primary_yoloV5.txt
@@ -2,9 +2,8 @@
gpu-id=0
net-scale-factor=0.0039215697906911373
model-color-format=0
-custom-network-config=yolov5s.cfg
-model-file=yolov5s.wts
-model-engine-file=model_b1_gpu0_fp32.engine
+onnx-file=yolov5s.onnx
+model-engine-file=yolov5s.onnx_b1_gpu0_fp32.engine
#int8-calib-file=calib.table
labelfile-path=labels.txt
batch-size=1
@@ -19,7 +18,6 @@ maintain-aspect-ratio=1
symmetric-padding=1
parse-bbox-func-name=NvDsInferParseYolo
custom-lib-path=nvdsinfer_custom_impl_Yolo/libnvdsinfer_custom_impl_Yolo.so
-engine-create-func-name=NvDsInferYoloCudaEngineGet
[class-attrs-all]
nms-iou-threshold=0.45
diff --git a/config_infer_primary_yoloV5_onnx.txt b/config_infer_primary_yoloV5_onnx.txt
deleted file mode 100644
index a059d17..0000000
--- a/config_infer_primary_yoloV5_onnx.txt
+++ /dev/null
@@ -1,25 +0,0 @@
-[property]
-gpu-id=0
-net-scale-factor=0.0039215697906911373
-model-color-format=0
-onnx-file=yolov5s.onnx
-model-engine-file=yolov5s.onnx_b1_gpu0_fp32.engine
-#int8-calib-file=calib.table
-labelfile-path=labels.txt
-batch-size=1
-network-mode=0
-num-detected-classes=80
-interval=0
-gie-unique-id=1
-process-mode=1
-network-type=0
-cluster-mode=2
-maintain-aspect-ratio=1
-symmetric-padding=1
-parse-bbox-func-name=NvDsInferParse_YOLO_ONNX
-custom-lib-path=nvdsinfer_custom_impl_Yolo/libnvdsinfer_custom_impl_Yolo.so
-
-[class-attrs-all]
-nms-iou-threshold=0.45
-pre-cluster-threshold=0.25
-topk=300
diff --git a/config_infer_primary_yoloV6.txt b/config_infer_primary_yoloV6.txt
index ffeb800..98a487c 100644
--- a/config_infer_primary_yoloV6.txt
+++ b/config_infer_primary_yoloV6.txt
@@ -2,9 +2,8 @@
gpu-id=0
net-scale-factor=0.0039215697906911373
model-color-format=0
-custom-network-config=yolov6s.cfg
-model-file=yolov6s.wts
-model-engine-file=model_b1_gpu0_fp32.engine
+onnx-file=yolov6s.onnx
+model-engine-file=yolov6s.onnx_b1_gpu0_fp32.engine
#int8-calib-file=calib.table
labelfile-path=labels.txt
batch-size=1
@@ -19,7 +18,6 @@ maintain-aspect-ratio=1
symmetric-padding=1
parse-bbox-func-name=NvDsInferParseYolo
custom-lib-path=nvdsinfer_custom_impl_Yolo/libnvdsinfer_custom_impl_Yolo.so
-engine-create-func-name=NvDsInferYoloCudaEngineGet
[class-attrs-all]
nms-iou-threshold=0.45
diff --git a/config_infer_primary_yoloV6_onnx.txt b/config_infer_primary_yoloV6_onnx.txt
deleted file mode 100644
index 7b0dde6..0000000
--- a/config_infer_primary_yoloV6_onnx.txt
+++ /dev/null
@@ -1,25 +0,0 @@
-[property]
-gpu-id=0
-net-scale-factor=0.0039215697906911373
-model-color-format=0
-onnx-file=yolov6s.onnx
-model-engine-file=yolov6s.onnx_b1_gpu0_fp32.engine
-#int8-calib-file=calib.table
-labelfile-path=labels.txt
-batch-size=1
-network-mode=0
-num-detected-classes=80
-interval=0
-gie-unique-id=1
-process-mode=1
-network-type=0
-cluster-mode=2
-maintain-aspect-ratio=1
-symmetric-padding=1
-parse-bbox-func-name=NvDsInferParse_YOLO_ONNX
-custom-lib-path=nvdsinfer_custom_impl_Yolo/libnvdsinfer_custom_impl_Yolo.so
-
-[class-attrs-all]
-nms-iou-threshold=0.45
-pre-cluster-threshold=0.25
-topk=300
diff --git a/config_infer_primary_yoloV7.txt b/config_infer_primary_yoloV7.txt
index 0e35f08..1a16f1d 100644
--- a/config_infer_primary_yoloV7.txt
+++ b/config_infer_primary_yoloV7.txt
@@ -2,9 +2,8 @@
gpu-id=0
net-scale-factor=0.0039215697906911373
model-color-format=0
-custom-network-config=yolov7.cfg
-model-file=yolov7.wts
-model-engine-file=model_b1_gpu0_fp32.engine
+onnx-file=yolov7.onnx
+model-engine-file=yolov7.onnx_b1_gpu0_fp32.engine
#int8-calib-file=calib.table
labelfile-path=labels.txt
batch-size=1
@@ -19,7 +18,6 @@ maintain-aspect-ratio=1
symmetric-padding=1
parse-bbox-func-name=NvDsInferParseYolo
custom-lib-path=nvdsinfer_custom_impl_Yolo/libnvdsinfer_custom_impl_Yolo.so
-engine-create-func-name=NvDsInferYoloCudaEngineGet
[class-attrs-all]
nms-iou-threshold=0.45
diff --git a/config_infer_primary_yoloV8.txt b/config_infer_primary_yoloV8.txt
index 3214bd3..25fabd4 100644
--- a/config_infer_primary_yoloV8.txt
+++ b/config_infer_primary_yoloV8.txt
@@ -2,9 +2,8 @@
gpu-id=0
net-scale-factor=0.0039215697906911373
model-color-format=0
-custom-network-config=yolov8s.cfg
-model-file=yolov8s.wts
-model-engine-file=model_b1_gpu0_fp32.engine
+onnx-file=yolov8s.onnx
+model-engine-file=yolov8s.onnx_b1_gpu0_fp32.engine
#int8-calib-file=calib.table
labelfile-path=labels.txt
batch-size=1
@@ -19,7 +18,6 @@ maintain-aspect-ratio=1
symmetric-padding=1
parse-bbox-func-name=NvDsInferParseYolo
custom-lib-path=nvdsinfer_custom_impl_Yolo/libnvdsinfer_custom_impl_Yolo.so
-engine-create-func-name=NvDsInferYoloCudaEngineGet
[class-attrs-all]
nms-iou-threshold=0.45
diff --git a/config_infer_primary_yoloV8_onnx.txt b/config_infer_primary_yoloV8_onnx.txt
deleted file mode 100644
index 2d85b28..0000000
--- a/config_infer_primary_yoloV8_onnx.txt
+++ /dev/null
@@ -1,25 +0,0 @@
-[property]
-gpu-id=0
-net-scale-factor=0.0039215697906911373
-model-color-format=0
-onnx-file=yolov8s.onnx
-model-engine-file=yolov8s.onnx_b1_gpu0_fp32.engine
-#int8-calib-file=calib.table
-labelfile-path=labels.txt
-batch-size=1
-network-mode=0
-num-detected-classes=80
-interval=0
-gie-unique-id=1
-process-mode=1
-network-type=0
-cluster-mode=2
-maintain-aspect-ratio=1
-symmetric-padding=1
-parse-bbox-func-name=NvDsInferParse_YOLOV8_ONNX
-custom-lib-path=nvdsinfer_custom_impl_Yolo/libnvdsinfer_custom_impl_Yolo.so
-
-[class-attrs-all]
-nms-iou-threshold=0.45
-pre-cluster-threshold=0.25
-topk=300
diff --git a/config_infer_primary_yolo_nas_onnx.txt b/config_infer_primary_yolo_nas_onnx.txt
deleted file mode 100644
index 5364ad7..0000000
--- a/config_infer_primary_yolo_nas_onnx.txt
+++ /dev/null
@@ -1,25 +0,0 @@
-[property]
-gpu-id=0
-net-scale-factor=0.0039215697906911373
-model-color-format=0
-onnx-file=yolo_nas_s.onnx
-model-engine-file=yolo_nas_s.onnx_b1_gpu0_fp32.engine
-#int8-calib-file=calib.table
-labelfile-path=labels.txt
-batch-size=1
-network-mode=0
-num-detected-classes=80
-interval=0
-gie-unique-id=1
-process-mode=1
-network-type=0
-cluster-mode=2
-maintain-aspect-ratio=1
-symmetric-padding=1
-parse-bbox-func-name=NvDsInferParse_YOLO_NAS_ONNX
-custom-lib-path=nvdsinfer_custom_impl_Yolo/libnvdsinfer_custom_impl_Yolo.so
-
-[class-attrs-all]
-nms-iou-threshold=0.45
-pre-cluster-threshold=0.25
-topk=300
diff --git a/config_infer_primary_yoloV7_onnx.txt b/config_infer_primary_yolonas.txt
similarity index 74%
rename from config_infer_primary_yoloV7_onnx.txt
rename to config_infer_primary_yolonas.txt
index c940736..fdf55b6 100644
--- a/config_infer_primary_yoloV7_onnx.txt
+++ b/config_infer_primary_yolonas.txt
@@ -2,8 +2,8 @@
gpu-id=0
net-scale-factor=0.0039215697906911373
model-color-format=0
-onnx-file=yolov7.onnx
-model-engine-file=yolov7.onnx_b1_gpu0_fp32.engine
+onnx-file=yolo_nas_s_coco.onnx
+model-engine-file=yolo_nas_s_coco.onnx_b1_gpu0_fp32.engine
#int8-calib-file=calib.table
labelfile-path=labels.txt
batch-size=1
@@ -15,8 +15,8 @@ process-mode=1
network-type=0
cluster-mode=2
maintain-aspect-ratio=1
-symmetric-padding=1
-parse-bbox-func-name=NvDsInferParse_YOLO_ONNX
+symmetric-padding=0
+parse-bbox-func-name=NvDsInferParseYoloE
custom-lib-path=nvdsinfer_custom_impl_Yolo/libnvdsinfer_custom_impl_Yolo.so
[class-attrs-all]
diff --git a/config_infer_primary_yolor.txt b/config_infer_primary_yolor.txt
index 4e178de..4883e34 100644
--- a/config_infer_primary_yolor.txt
+++ b/config_infer_primary_yolor.txt
@@ -2,9 +2,8 @@
gpu-id=0
net-scale-factor=0.0039215697906911373
model-color-format=0
-custom-network-config=yolor_csp.cfg
-model-file=yolor_csp.wts
-model-engine-file=model_b1_gpu0_fp32.engine
+onnx-file=yolor_csp.onnx
+model-engine-file=yolor_csp.onnx_b1_gpu0_fp32.engine
#int8-calib-file=calib.table
labelfile-path=labels.txt
batch-size=1
@@ -19,7 +18,6 @@ maintain-aspect-ratio=1
symmetric-padding=1
parse-bbox-func-name=NvDsInferParseYolo
custom-lib-path=nvdsinfer_custom_impl_Yolo/libnvdsinfer_custom_impl_Yolo.so
-engine-create-func-name=NvDsInferYoloCudaEngineGet
[class-attrs-all]
nms-iou-threshold=0.45
diff --git a/config_infer_primary_yolox.txt b/config_infer_primary_yolox.txt
index e006344..339b317 100644
--- a/config_infer_primary_yolox.txt
+++ b/config_infer_primary_yolox.txt
@@ -2,9 +2,8 @@
gpu-id=0
net-scale-factor=0
model-color-format=0
-custom-network-config=yolox_s.cfg
-model-file=yolox_s.wts
-model-engine-file=model_b1_gpu0_fp32.engine
+onnx-file=yolox_s.onnx
+model-engine-file=yolox_s.onnx_b1_gpu0_fp32.engine
#int8-calib-file=calib.table
labelfile-path=labels.txt
batch-size=1
@@ -19,7 +18,6 @@ maintain-aspect-ratio=1
symmetric-padding=0
parse-bbox-func-name=NvDsInferParseYolo
custom-lib-path=nvdsinfer_custom_impl_Yolo/libnvdsinfer_custom_impl_Yolo.so
-engine-create-func-name=NvDsInferYoloCudaEngineGet
[class-attrs-all]
nms-iou-threshold=0.45
diff --git a/config_infer_primary_yolox_legacy.txt b/config_infer_primary_yolox_legacy.txt
index 5c078ce..cc3c3b6 100644
--- a/config_infer_primary_yolox_legacy.txt
+++ b/config_infer_primary_yolox_legacy.txt
@@ -3,9 +3,8 @@ gpu-id=0
net-scale-factor=0.0173520735727919486
offsets=123.675;116.28;103.53
model-color-format=0
-custom-network-config=yolox_s.cfg
-model-file=yolox_s.wts
-model-engine-file=model_b1_gpu0_fp32.engine
+onnx-file=yolox_s.onnx
+model-engine-file=yolox_s.onnx_b1_gpu0_fp32.engine
#int8-calib-file=calib.table
labelfile-path=labels.txt
batch-size=1
@@ -20,7 +19,6 @@ maintain-aspect-ratio=1
symmetric-padding=0
parse-bbox-func-name=NvDsInferParseYolo
custom-lib-path=nvdsinfer_custom_impl_Yolo/libnvdsinfer_custom_impl_Yolo.so
-engine-create-func-name=NvDsInferYoloCudaEngineGet
[class-attrs-all]
nms-iou-threshold=0.45
diff --git a/config_infer_primary_yolox_legacy_onnx.txt b/config_infer_primary_yolox_legacy_onnx.txt
deleted file mode 100644
index 521a59c..0000000
--- a/config_infer_primary_yolox_legacy_onnx.txt
+++ /dev/null
@@ -1,26 +0,0 @@
-[property]
-gpu-id=0
-net-scale-factor=0.0173520735727919486
-offsets=123.675;116.28;103.53
-model-color-format=0
-onnx-file=yolox_s.onnx
-model-engine-file=yolox_s.onnx_b1_gpu0_fp32.engine
-#int8-calib-file=calib.table
-labelfile-path=labels.txt
-batch-size=1
-network-mode=0
-num-detected-classes=80
-interval=0
-gie-unique-id=1
-process-mode=1
-network-type=0
-cluster-mode=2
-maintain-aspect-ratio=1
-symmetric-padding=0
-parse-bbox-func-name=NvDsInferParse_YOLOX_ONNX
-custom-lib-path=nvdsinfer_custom_impl_Yolo/libnvdsinfer_custom_impl_Yolo.so
-
-[class-attrs-all]
-nms-iou-threshold=0.45
-pre-cluster-threshold=0.25
-topk=300
diff --git a/config_infer_primary_yolox_onnx.txt b/config_infer_primary_yolox_onnx.txt
deleted file mode 100644
index a7120e3..0000000
--- a/config_infer_primary_yolox_onnx.txt
+++ /dev/null
@@ -1,25 +0,0 @@
-[property]
-gpu-id=0
-net-scale-factor=0
-model-color-format=0
-onnx-file=yolox_s.onnx
-model-engine-file=yolox_s.onnx_b1_gpu0_fp32.engine
-#int8-calib-file=calib.table
-labelfile-path=labels.txt
-batch-size=1
-network-mode=0
-num-detected-classes=80
-interval=0
-gie-unique-id=1
-process-mode=1
-network-type=0
-cluster-mode=2
-maintain-aspect-ratio=1
-symmetric-padding=0
-parse-bbox-func-name=NvDsInferParse_YOLOX_ONNX
-custom-lib-path=nvdsinfer_custom_impl_Yolo/libnvdsinfer_custom_impl_Yolo.so
-
-[class-attrs-all]
-nms-iou-threshold=0.45
-pre-cluster-threshold=0.25
-topk=300
diff --git a/docs/PPYOLOE.md b/docs/PPYOLOE.md
index 4dc744d..478d61c 100644
--- a/docs/PPYOLOE.md
+++ b/docs/PPYOLOE.md
@@ -1,5 +1,7 @@
# PP-YOLOE / PP-YOLOE+ usage
+**NOTE**: You can use the release/2.6 branch of the PaddleDetection repo to convert all model versions.
+
* [Convert model](#convert-model)
* [Compile the lib](#compile-the-lib)
* [Edit the config_infer_primary_ppyoloe_plus file](#edit-the-config_infer_primary_ppyoloe_plus-file)
@@ -12,35 +14,36 @@
#### 1. Download the PaddleDetection repo and install the requirements
-https://github.com/PaddlePaddle/PaddleDetection/blob/release/2.5/docs/tutorials/INSTALL.md
+https://github.com/PaddlePaddle/PaddleDetection/blob/release/2.6/docs/tutorials/INSTALL.md
**NOTE**: It is recommended to use Python virtualenv.
#### 2. Copy converter
-Copy the `gen_wts_ppyoloe.py` file from `DeepStream-Yolo/utils` directory to the `PaddleDetection` folder.
+Copy the `export_ppyoloe.py` file from `DeepStream-Yolo/utils` directory to the `PaddleDetection` folder.
#### 3. Download the model
-Download the `pdparams` file from [PP-YOLOE](https://github.com/PaddlePaddle/PaddleDetection/tree/release/2.5/configs/ppyoloe) releases (example for PP-YOLOE+_s)
+Download the `pdparams` file from [PP-YOLOE](https://github.com/PaddlePaddle/PaddleDetection/tree/release/2.6/configs/ppyoloe) releases (example for PP-YOLOE+_s)
```
wget https://paddledet.bj.bcebos.com/models/ppyoloe_plus_crn_s_80e_coco.pdparams
```
-**NOTE**: You can use your custom model, but it is important to keep the YOLO model reference (`ppyoloe_`) in you `cfg` and `weights`/`wts` filenames to generate the engine correctly.
+**NOTE**: You can use your custom model.
#### 4. Convert model
-Generate the `cfg` and `wts` files (example for PP-YOLOE+_s)
+Generate the ONNX model file (example for PP-YOLOE+_s)
```
-python3 gen_wts_ppyoloe.py -w ppyoloe_plus_crn_s_80e_coco.pdparams -c configs/ppyoloe/ppyoloe_plus_crn_s_80e_coco.yml
+pip3 install onnx onnxsim onnxruntime
+python3 export_ppyoloe.py -w ppyoloe_plus_crn_s_80e_coco.pdparams -c configs/ppyoloe/ppyoloe_plus_crn_s_80e_coco.yml --simplify
```
#### 5. Copy generated files
-Copy the generated `cfg` and `wts` files to the `DeepStream-Yolo` folder.
+Copy the generated ONNX model file to the `DeepStream-Yolo` folder.
##
@@ -93,11 +96,13 @@ Edit the `config_infer_primary_ppyoloe_plus.txt` file according to your model (e
```
[property]
...
-custom-network-config=ppyoloe_plus_crn_s_80e_coco.cfg
-model-file=ppyoloe_plus_crn_s_80e_coco.wts
+onnx-file=ppyoloe_plus_crn_s_80e_coco.onnx
+model-engine-file=ppyoloe_plus_crn_s_80e_coco.onnx_b1_gpu0_fp32.engine
...
num-detected-classes=80
...
+parse-bbox-func-name=NvDsInferParseYoloE
+...
```
**NOTE**: If you use the **legacy** model, you should edit the `config_infer_primary_ppyoloe.txt` file.
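The legacy PP-YOLOE config also sets `net-scale-factor` and `offsets`; nvinfer's input normalization is, per the DeepStream docs, y = net-scale-factor * (x - offset) applied per channel. A standalone sketch with the values from `config_infer_primary_ppyoloe.txt` (the real work happens on the GPU):

```python
NET_SCALE_FACTOR = 0.0173520735727919486
OFFSETS = (123.675, 116.28, 103.53)  # per-channel means

def normalize_pixel(rgb):
    """y = net-scale-factor * (x - offset), per channel."""
    return tuple(NET_SCALE_FACTOR * (x - o) for x, o in zip(rgb, OFFSETS))

# A pixel equal to the per-channel means maps to (0.0, 0.0, 0.0)
print(normalize_pixel((123.675, 116.28, 103.53)))
```

The PP-YOLOE+ config instead uses `net-scale-factor=0.0039215697906911373` (1/255) with no offsets, i.e. plain 0-1 scaling.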
diff --git a/docs/YOLONAS.md b/docs/YOLONAS.md
new file mode 100644
index 0000000..14d2ff0
--- /dev/null
+++ b/docs/YOLONAS.md
@@ -0,0 +1,171 @@
+# YOLO-NAS usage
+
+**NOTE**: The yaml file is not required.
+
+* [Convert model](#convert-model)
+* [Compile the lib](#compile-the-lib)
+* [Edit the config_infer_primary_yolonas file](#edit-the-config_infer_primary_yolonas-file)
+* [Edit the deepstream_app_config file](#edit-the-deepstream_app_config-file)
+* [Testing the model](#testing-the-model)
+
+##
+
+### Convert model
+
+#### 1. Download the YOLO-NAS repo and install the requirements
+
+```
+git clone https://github.com/Deci-AI/super-gradients.git
+cd super-gradients
+pip3 install -r requirements.txt
+python3 setup.py install
+pip3 install onnx onnxsim onnxruntime
+```
+
+**NOTE**: It is recommended to use Python virtualenv.
+
+#### 2. Copy converter
+
+Copy the `export_yolonas.py` file from `DeepStream-Yolo/utils` directory to the `super-gradients` folder.
+
+#### 3. Download the model
+
+Download the `pth` file from [YOLO-NAS](https://sghub.deci.ai/) website (example for YOLO-NAS S)
+
+```
+wget https://sghub.deci.ai/models/yolo_nas_s_coco.pth
+```
+
+**NOTE**: You can use your custom model.
+
+#### 4. Convert model
+
+Generate the ONNX model file (example for YOLO-NAS S)
+
+```
+python3 export_yolonas.py -m yolo_nas_s -w yolo_nas_s_coco.pth --simplify
+```
+
+**NOTE**: Model names
+
+```
+-m yolo_nas_s
+```
+
+or
+
+```
+-m yolo_nas_m
+```
+
+or
+
+```
+-m yolo_nas_l
+```
+
+**NOTE**: To change the inference size (default: 640)
+
+```
+-s SIZE
+--size SIZE
+-s HEIGHT WIDTH
+--size HEIGHT WIDTH
+```
+
+Example for 1280
+
+```
+-s 1280
+```
+
+or
+
+```
+-s 1280 1280
+```
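The `-s`/`--size` flag above accepts either one value (square input) or two (height and width). A hypothetical sketch of how such a flag can be parsed with `argparse`; the actual argument handling in `export_yolonas.py` may differ:

```python
import argparse

def parse_size(argv):
    parser = argparse.ArgumentParser()
    # nargs="+" accepts one value (square) or two (height width)
    parser.add_argument("-s", "--size", nargs="+", type=int, default=[640])
    args = parser.parse_args(argv)
    if len(args.size) == 1:
        return args.size[0], args.size[0]  # square input
    return args.size[0], args.size[1]      # height, width

print(parse_size(["-s", "1280"]))         # (1280, 1280)
print(parse_size(["-s", "1280", "736"]))  # (1280, 736)
```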
+
+#### 5. Copy generated files
+
+Copy the generated ONNX model file to the `DeepStream-Yolo` folder.
+
+##
+
+### Compile the lib
+
+Open the `DeepStream-Yolo` folder and compile the lib
+
+* DeepStream 6.2 on x86 platform
+
+ ```
+ CUDA_VER=11.8 make -C nvdsinfer_custom_impl_Yolo
+ ```
+
+* DeepStream 6.1.1 on x86 platform
+
+ ```
+ CUDA_VER=11.7 make -C nvdsinfer_custom_impl_Yolo
+ ```
+
+* DeepStream 6.1 on x86 platform
+
+ ```
+ CUDA_VER=11.6 make -C nvdsinfer_custom_impl_Yolo
+ ```
+
+* DeepStream 6.0.1 / 6.0 on x86 platform
+
+ ```
+ CUDA_VER=11.4 make -C nvdsinfer_custom_impl_Yolo
+ ```
+
+* DeepStream 6.2 / 6.1.1 / 6.1 on Jetson platform
+
+ ```
+ CUDA_VER=11.4 make -C nvdsinfer_custom_impl_Yolo
+ ```
+
+* DeepStream 6.0.1 / 6.0 on Jetson platform
+
+ ```
+ CUDA_VER=10.2 make -C nvdsinfer_custom_impl_Yolo
+ ```
+
+##
+
+### Edit the config_infer_primary_yolonas file
+
+Edit the `config_infer_primary_yolonas.txt` file according to your model (example for YOLO-NAS S with 80 classes)
+
+```
+[property]
+...
+onnx-file=yolo_nas_s_coco.onnx
+model-engine-file=yolo_nas_s_coco.onnx_b1_gpu0_fp32.engine
+...
+num-detected-classes=80
+...
+parse-bbox-func-name=NvDsInferParseYoloE
+...
+```
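The `model-engine-file` values throughout these configs follow one naming pattern: the ONNX filename followed by `_b<batch-size>_gpu<gpu-id>_<precision>.engine`. A small hypothetical helper, for illustration only, that reproduces it:

```python
def engine_name(onnx_file, batch_size=1, gpu_id=0, precision="fp32"):
    """Build the engine filename used in the config_infer files."""
    return f"{onnx_file}_b{batch_size}_gpu{gpu_id}_{precision}.engine"

print(engine_name("yolo_nas_s_coco.onnx"))
# yolo_nas_s_coco.onnx_b1_gpu0_fp32.engine
```

If you change `batch-size` or `network-mode` in the config, update `model-engine-file` to match, or the cached engine will be rebuilt or rejected.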
+
+##
+
+### Edit the deepstream_app_config file
+
+```
+...
+[primary-gie]
+...
+config-file=config_infer_primary_yolonas.txt
+```
+
+##
+
+### Testing the model
+
+```
+deepstream-app -c deepstream_app_config.txt
+```
+
+**NOTE**: For more information about custom models configuration (`batch-size`, `network-mode`, etc), please check the [`docs/customModels.md`](customModels.md) file.
diff --git a/docs/YOLOR.md b/docs/YOLOR.md
index ec416b3..f4ece0a 100644
--- a/docs/YOLOR.md
+++ b/docs/YOLOR.md
@@ -1,8 +1,8 @@
# YOLOR usage
-**NOTE**: You need to use the main branch of the YOLOR repo to convert the model.
+**NOTE**: Select the correct branch of the YOLOR repo before the conversion.
-**NOTE**: The cfg file is required.
+**NOTE**: The cfg file is required for the main branch.
* [Convert model](#convert-model)
* [Compile the lib](#compile-the-lib)
@@ -20,31 +20,71 @@
git clone https://github.com/WongKinYiu/yolor.git
cd yolor
pip3 install -r requirements.txt
+pip3 install onnx onnxsim onnxruntime
```
**NOTE**: It is recommended to use Python virtualenv.
#### 2. Copy converter
-Copy the `gen_wts_yolor.py` file from `DeepStream-Yolo/utils` directory to the `yolor` folder.
+Copy the `export_yolor.py` file from `DeepStream-Yolo/utils` directory to the `yolor` folder.
#### 3. Download the model
Download the `pt` file from [YOLOR](https://github.com/WongKinYiu/yolor) repo.
-**NOTE**: You can use your custom model, but it is important to keep the YOLO model reference (`yolor_`) in you `cfg` and `weights`/`wts` filenames to generate the engine correctly.
+**NOTE**: You can use your custom model.
#### 4. Convert model
-Generate the `cfg` and `wts` files (example for YOLOR-CSP)
+Generate the ONNX model file
+
+- Main branch
+
+ Example for YOLOR-CSP
+
+ ```
+ python3 export_yolor.py -w yolor_csp.pt -c cfg/yolor_csp.cfg --simplify
+ ```
+
+- Paper branch
+
+ Example for YOLOR-P6
+
+ ```
+ python3 export_yolor.py -w yolor-p6.pt --simplify
+ ```
+
+**NOTE**: To convert a P6 model
```
-python3 gen_wts_yolor.py -w yolor_csp.pt -c cfg/yolor_csp.cfg
+--p6
+```
+
+**NOTE**: To change the inference size (default: 640)
+
+```
+-s SIZE
+--size SIZE
+-s HEIGHT WIDTH
+--size HEIGHT WIDTH
+```
+
+Example for 1280
+
+```
+-s 1280
+```
+
+or
+
+```
+-s 1280 1280
```
#### 5. Copy generated files
-Copy the generated `cfg` and `wts` files to the `DeepStream-Yolo` folder
+Copy the generated ONNX model file to the `DeepStream-Yolo` folder.
##
@@ -97,11 +137,13 @@ Edit the `config_infer_primary_yolor.txt` file according to your model (example
```
[property]
...
-custom-network-config=yolor_csp.cfg
-model-file=yolor_csp.wts
+onnx-file=yolor_csp.onnx
+model-engine-file=yolor_csp.onnx_b1_gpu0_fp32.engine
...
num-detected-classes=80
...
+parse-bbox-func-name=NvDsInferParseYolo
+...
```
##
diff --git a/docs/YOLOX.md b/docs/YOLOX.md
index d1f3337..4571c2f 100644
--- a/docs/YOLOX.md
+++ b/docs/YOLOX.md
@@ -1,5 +1,7 @@
# YOLOX usage
+**NOTE**: You can use the main branch of the YOLOX repo to convert all model versions.
+
**NOTE**: The yaml file is not required.
* [Convert model](#convert-model)
@@ -18,13 +20,15 @@
git clone https://github.com/Megvii-BaseDetection/YOLOX.git
cd YOLOX
pip3 install -r requirements.txt
+python3 setup.py develop
+pip3 install onnx onnxsim onnxruntime
```
**NOTE**: It is recommended to use Python virtualenv.
#### 2. Copy converter
-Copy the `gen_wts_yolox.py` file from `DeepStream-Yolo/utils` directory to the `YOLOX` folder.
+Copy the `export_yolox.py` file from `DeepStream-Yolo/utils` directory to the `YOLOX` folder.
#### 3. Download the model
@@ -34,19 +38,19 @@ Download the `pth` file from [YOLOX](https://github.com/Megvii-BaseDetection/YOL
wget https://github.com/Megvii-BaseDetection/YOLOX/releases/download/0.1.1rc0/yolox_s.pth
```
-**NOTE**: You can use your custom model, but it is important to keep the YOLO model reference (`yolox_`) in you `cfg` and `weights`/`wts` filenames to generate the engine correctly.
+**NOTE**: You can use your custom model.
#### 4. Convert model
-Generate the `cfg` and `wts` files (example for YOLOX-s standard)
+Generate the ONNX model file (example for YOLOX-s standard)
```
-python3 gen_wts_yolox.py -w yolox_s.pth -e exps/default/yolox_s.py
+python3 export_yolox.py -w yolox_s.pth -c exps/default/yolox_s.py --simplify
```
#### 5. Copy generated files
-Copy the generated `cfg` and `wts` files to the `DeepStream-Yolo` folder.
+Copy the generated ONNX model file to the `DeepStream-Yolo` folder.
##
@@ -99,11 +103,13 @@ Edit the `config_infer_primary_yolox.txt` file according to your model (example
```
[property]
...
-custom-network-config=yolox_s.cfg
-model-file=yolox_s.wts
+onnx-file=yolox_s.onnx
+model-engine-file=yolox_s.onnx_b1_gpu0_fp32.engine
...
num-detected-classes=80
...
+parse-bbox-func-name=NvDsInferParseYolo
+...
```
**NOTE**: If you use the **legacy** model, you should edit the `config_infer_primary_yolox_legacy.txt` file.
diff --git a/docs/YOLOv5.md b/docs/YOLOv5.md
index ee7c7b7..bdd6c0a 100644
--- a/docs/YOLOv5.md
+++ b/docs/YOLOv5.md
@@ -1,6 +1,6 @@
# YOLOv5 usage
-**NOTE**: You can use the main branch of the YOLOv5 repo to convert all model versions.
+**NOTE**: You can use the master branch of the YOLOv5 repo to convert all model versions.
**NOTE**: The yaml file is not required.
@@ -20,30 +20,31 @@
git clone https://github.com/ultralytics/yolov5.git
cd yolov5
pip3 install -r requirements.txt
+pip3 install onnx onnxsim onnxruntime
```
**NOTE**: It is recommended to use a Python virtualenv.
#### 2. Copy the converter
-Copy the `gen_wts_yoloV5.py` file from `DeepStream-Yolo/utils` directory to the `yolov5` folder.
+Copy the `export_yoloV5.py` file from the `DeepStream-Yolo/utils` directory to the `yolov5` folder.
#### 3. Download the model
-Download the `pt` file from [YOLOv5](https://github.com/ultralytics/yolov5/releases/) releases (example for YOLOv5s 6.1)
+Download the `pt` file from [YOLOv5](https://github.com/ultralytics/yolov5/releases/) releases (example for YOLOv5s 7.0)
```
-wget https://github.com/ultralytics/yolov5/releases/download/v6.1/yolov5s.pt
+wget https://github.com/ultralytics/yolov5/releases/download/v7.0/yolov5s.pt
```
-**NOTE**: You can use your custom model, but it is important to keep the YOLO model reference (`yolov5_`) in you `cfg` and `weights`/`wts` filenames to generate the engine correctly.
+**NOTE**: You can use your custom model.
#### 4. Convert model
-Generate the `cfg` and `wts` files (example for YOLOv5s)
+Generate the ONNX model file (example for YOLOv5s)
```
-python3 gen_wts_yoloV5.py -w yolov5s.pt
+python3 export_yoloV5.py -w yolov5s.pt --simplify
```
**NOTE**: To convert a P6 model
@@ -75,7 +76,7 @@ or
#### 5. Copy generated files
-Copy the generated `cfg` and `wts` files to the `DeepStream-Yolo` folder.
+Copy the generated ONNX model file to the `DeepStream-Yolo` folder.
##
@@ -128,11 +129,13 @@ Edit the `config_infer_primary_yoloV5.txt` file according to your model (example
```
[property]
...
-custom-network-config=yolov5s.cfg
-model-file=yolov5s.wts
+onnx-file=yolov5s.onnx
+model-engine-file=yolov5s.onnx_b1_gpu0_fp32.engine
...
num-detected-classes=80
...
+parse-bbox-func-name=NvDsInferParseYolo
+...
```
##
diff --git a/docs/YOLOv6.md b/docs/YOLOv6.md
index 4f46261..e0c3ef9 100644
--- a/docs/YOLOv6.md
+++ b/docs/YOLOv6.md
@@ -18,13 +18,14 @@
git clone https://github.com/meituan/YOLOv6.git
cd YOLOv6
pip3 install -r requirements.txt
+pip3 install onnx onnxsim onnxruntime
```
**NOTE**: It is recommended to use a Python virtualenv.
#### 2. Copy the converter
-Copy the `gen_wts_yoloV6.py` file from `DeepStream-Yolo/utils` directory to the `YOLOv6` folder.
+Copy the `export_yoloV6.py` file from the `DeepStream-Yolo/utils` directory to the `YOLOv6` folder.
#### 3. Download the model
@@ -34,14 +35,14 @@ Download the `pt` file from [YOLOv6](https://github.com/meituan/YOLOv6/releases/
wget https://github.com/meituan/YOLOv6/releases/download/0.3.0/yolov6s.pt
```
-**NOTE**: You can use your custom model, but it is important to keep the YOLO model reference (`yolov6_`) in you `cfg` and `weights`/`wts` filenames to generate the engine correctly.
+**NOTE**: You can use your custom model.
#### 4. Convert model
-Generate the `cfg` and `wts` files (example for YOLOv6-S 3.0)
+Generate the ONNX model file (example for YOLOv6-S 3.0)
```
-python3 gen_wts_yoloV6.py -w yolov6s.pt
+python3 export_yoloV6.py -w yolov6s.pt --simplify
```
**NOTE**: To convert a P6 model
@@ -73,7 +74,7 @@ or
#### 5. Copy generated files
-Copy the generated `cfg` and `wts` files to the `DeepStream-Yolo` folder.
+Copy the generated ONNX model file to the `DeepStream-Yolo` folder.
##
@@ -126,11 +127,13 @@ Edit the `config_infer_primary_yoloV6.txt` file according to your model (example
```
[property]
...
-custom-network-config=yolov6s.cfg
-model-file=yolov6s.wts
+onnx-file=yolov6s.onnx
+model-engine-file=yolov6s.onnx_b1_gpu0_fp32.engine
...
num-detected-classes=80
...
+parse-bbox-func-name=NvDsInferParseYolo
+...
```
##
diff --git a/docs/YOLOv7.md b/docs/YOLOv7.md
index 4274e77..e5bbb66 100644
--- a/docs/YOLOv7.md
+++ b/docs/YOLOv7.md
@@ -18,13 +18,14 @@
git clone https://github.com/WongKinYiu/yolov7.git
cd yolov7
pip3 install -r requirements.txt
+pip3 install onnx onnxsim onnxruntime
```
**NOTE**: It is recommended to use a Python virtualenv.
#### 2. Copy the converter
-Copy the `gen_wts_yoloV7.py` file from `DeepStream-Yolo/utils` directory to the `yolov7` folder.
+Copy the `export_yoloV7.py` file from the `DeepStream-Yolo/utils` directory to the `yolov7` folder.
#### 3. Download the model
@@ -34,18 +35,18 @@ Download the `pt` file from [YOLOv7](https://github.com/WongKinYiu/yolov7/releas
wget https://github.com/WongKinYiu/yolov7/releases/download/v0.1/yolov7.pt
```
-**NOTE**: You can use your custom model, but it is important to keep the YOLO model reference (`yolov7_`) in you `cfg` and `weights`/`wts` filenames to generate the engine correctly.
+**NOTE**: You can use your custom model.
#### 4. Reparameterize your model
-[YOLOv7](https://github.com/WongKinYiu/yolov7/releases/) and it's variants can't be directly converted to engine file. Therefore, you will have to reparameterize your model using the code [here](https://github.com/WongKinYiu/yolov7/blob/main/tools/reparameterization.ipynb). Make sure to convert your checkpoints in yolov7 repository, and then save your reparmeterized checkpoints for conversion in the next step.
+[YOLOv7](https://github.com/WongKinYiu/yolov7/releases/) and its variants cannot be directly converted to an engine file. Therefore, you will have to reparameterize your model using the code [here](https://github.com/WongKinYiu/yolov7/blob/main/tools/reparameterization.ipynb). Make sure to convert your custom checkpoints in the yolov7 repository, and then save your reparameterized checkpoints for conversion in the next step.
#### 5. Convert model
-Generate the `cfg` and `wts` files (example for YOLOv7)
+Generate the ONNX model file (example for YOLOv7)
```
-python3 gen_wts_yoloV7.py -w yolov7.pt
+python3 export_yoloV7.py -w yolov7.pt --simplify
```
**NOTE**: To convert a P6 model
@@ -77,7 +78,7 @@ or
#### 6. Copy generated files
-Copy the generated `cfg` and `wts` files to the `DeepStream-Yolo` folder.
+Copy the generated ONNX model file to the `DeepStream-Yolo` folder.
##
@@ -130,11 +131,13 @@ Edit the `config_infer_primary_yoloV7.txt` file according to your model (example
```
[property]
...
-custom-network-config=yolov7.cfg
-model-file=yolov7.wts
+onnx-file=yolov7.onnx
+model-engine-file=yolov7.onnx_b1_gpu0_fp32.engine
...
num-detected-classes=80
...
+parse-bbox-func-name=NvDsInferParseYolo
+...
```
##
diff --git a/docs/YOLOv8.md b/docs/YOLOv8.md
index b6e5152..0ebce79 100644
--- a/docs/YOLOv8.md
+++ b/docs/YOLOv8.md
@@ -18,13 +18,15 @@
git clone https://github.com/ultralytics/ultralytics.git
cd ultralytics
pip3 install -r requirements.txt
+python3 setup.py install
+pip3 install onnx onnxsim onnxruntime
```
**NOTE**: It is recommended to use a Python virtualenv.
#### 2. Copy the converter
-Copy the `gen_wts_yoloV8.py` file from `DeepStream-Yolo/utils` directory to the `ultralytics` folder.
+Copy the `export_yoloV8.py` file from the `DeepStream-Yolo/utils` directory to the `ultralytics` folder.
#### 3. Download the model
@@ -34,14 +36,14 @@ Download the `pt` file from [YOLOv8](https://github.com/ultralytics/assets/relea
wget https://github.com/ultralytics/assets/releases/download/v0.0.0/yolov8s.pt
```
-**NOTE**: You can use your custom model, but it is important to keep the YOLO model reference (`yolov8_`) in you `cfg` and `weights`/`wts` filenames to generate the engine correctly.
+**NOTE**: You can use your custom model.
#### 4. Convert model
-Generate the `cfg`, `wts` and `labels.txt` (if available) files (example for YOLOv8s)
+Generate the ONNX model file (example for YOLOv8s)
```
-python3 gen_wts_yoloV8.py -w yolov8s.pt
+python3 export_yoloV8.py -w yolov8s.pt --simplify
```
**NOTE**: To change the inference size (default: 640)
@@ -67,7 +69,7 @@ or
#### 5. Copy generated files
-Copy the generated `cfg`, `wts` and `labels.txt` (if generated), files to the `DeepStream-Yolo` folder.
+Copy the generated ONNX model file to the `DeepStream-Yolo` folder.
##
@@ -120,11 +122,13 @@ Edit the `config_infer_primary_yoloV8.txt` file according to your model (example
```
[property]
...
-custom-network-config=yolov8s.cfg
-model-file=yolov8s.wts
+onnx-file=yolov8s.onnx
+model-engine-file=yolov8s.onnx_b1_gpu0_fp32.engine
...
num-detected-classes=80
...
+parse-bbox-func-name=NvDsInferParseYolo
+...
```
##
diff --git a/docs/customModels.md b/docs/customModels.md
index d79ca5e..f1e8a3a 100644
--- a/docs/customModels.md
+++ b/docs/customModels.md
@@ -19,9 +19,7 @@ cd DeepStream-Yolo
#### 2. Copy the class names file to the DeepStream-Yolo folder and rename it to `labels.txt`
-#### 3. Copy the `cfg` and `weights`/`wts` files to DeepStream-Yolo folder
-
-**NOTE**: It is important to keep the YOLO model reference (`yolov4_`, `yolov5_`, `yolor_`, etc) in you `cfg` and `weights`/`wts` filenames to generate the engine correctly.
+#### 3. Copy the `onnx` or `cfg` and `weights` files to DeepStream-Yolo folder
##
@@ -189,24 +187,25 @@ To understand and edit `config_infer_primary.txt` file, read the [DeepStream Plu
model-color-format=0
```
- **NOTE**: Set it according to the number of channels in the `cfg` file (1=GRAYSCALE, 3=RGB).
+ **NOTE**: Set it according to the number of channels in the `cfg` file (1=GRAYSCALE, 3=RGB for Darknet YOLO) or your model configuration (ONNX).
-* custom-network-config
+* custom-network-config and model-file (Darknet YOLO)
* Example for custom YOLOv4 model
```
custom-network-config=yolov4_custom.cfg
- ```
-
-* model-file
-
- * Example for custom YOLOv4 model
-
- ```
model-file=yolov4_custom.weights
```
+* onnx-file (ONNX)
+
+ * Example for custom YOLOv8 model
+
+ ```
+ onnx-file=yolov8s_custom.onnx
+ ```
+
* model-engine-file
* Example for `batch-size=1` and `network-mode=2`
@@ -233,7 +232,7 @@ To understand and edit `config_infer_primary.txt` file, read the [DeepStream Plu
model-engine-file=model_b2_gpu0_fp32.engine
```
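The generated engine name follows the pattern visible in the examples above (`{model}_b{batch}_gpu{gpu}_{precision}.engine`, where `network-mode` 0/1/2 selects FP32/INT8/FP16 in nvinfer); a small helper to predict it (the helper itself is hypothetical):

```python
def engine_filename(model_file, batch_size=1, gpu_id=0, network_mode=0):
    # network-mode values per nvinfer: 0 = FP32, 1 = INT8, 2 = FP16
    precision = {0: 'fp32', 1: 'int8', 2: 'fp16'}[network_mode]
    return '{}_b{}_gpu{}_{}.engine'.format(model_file, batch_size, gpu_id, precision)

print(engine_filename('model', batch_size=2))           # model_b2_gpu0_fp32.engine
print(engine_filename('yolov8s.onnx', network_mode=2))  # yolov8s.onnx_b1_gpu0_fp16.engine
```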
- **NOTE**: To change the generated engine filename, you need to edit and rebuild the `nvdsinfer_model_builder.cpp` file (`/opt/nvidia/deepstream/deepstream/sources/libs/nvdsinfer/nvdsinfer_model_builder.cpp`, lines 825-827)
+ **NOTE**: To change the generated engine filename (Darknet YOLO), you need to edit and rebuild the `nvdsinfer_model_builder.cpp` file (`/opt/nvidia/deepstream/deepstream/sources/libs/nvdsinfer/nvdsinfer_model_builder.cpp`, lines 825-827)
```
suggestedPathName =
@@ -260,7 +259,7 @@ To understand and edit `config_infer_primary.txt` file, read the [DeepStream Plu
num-detected-classes=80
```
- **NOTE**: Set it according to number of classes in `cfg` file.
+  **NOTE**: Set it according to the number of classes in the `cfg` file (Darknet YOLO) or your model configuration (ONNX).
* interval
diff --git a/docs/multipleGIEs.md b/docs/multipleGIEs.md
index 184cdce..511b4b5 100644
--- a/docs/multipleGIEs.md
+++ b/docs/multipleGIEs.md
@@ -26,9 +26,7 @@ cd DeepStream-Yolo
#### 3. Copy the class names file to each GIE folder and rename it to `labels.txt`
-#### 4. Copy the `cfg` and `weights`/`wts` files to each GIE folder
-
-**NOTE**: It is important to keep the YOLO model reference (`yolov4_`, `yolov5_`, `yolor_`, etc) in you `cfg` and `weights`/`wts` filenames to generate the engine correctly.
+#### 4. Copy the `onnx` or `cfg` and `weights` files to each GIE folder
##
@@ -92,22 +90,36 @@ const char* YOLOLAYER_PLUGIN_VERSION {"2"};
### Edit the config_infer_primary files
-**NOTE**: Edit the files according to the model you will use (YOLOv4, YOLOv5, YOLOR, etc).
+**NOTE**: Edit the files according to the model you will use (YOLOv8, YOLOv5, YOLOv4, etc).
**NOTE**: Do it for each GIE folder.
* Edit the path of the model files
- Example for gie1
+ Example for gie1 (Darknet YOLO)
```
custom-network-config=gie1/yolo.cfg
- ```
+ model-file=yolo.weights
+ ```
- Example for gie2
+ Example for gie2 (Darknet YOLO)
```
custom-network-config=gie2/yolo.cfg
+ model-file=yolo.weights
+ ```
+
+ Example for gie1 (ONNX)
+
+ ```
+ onnx-file=yolo.onnx
+ ```
+
+ Example for gie2 (ONNX)
+
+ ```
+ onnx-file=yolo.onnx
```
* Edit the gie-unique-id
diff --git a/nvdsinfer_custom_impl_Yolo/layers/batchnorm_layer.cpp b/nvdsinfer_custom_impl_Yolo/layers/batchnorm_layer.cpp
index 084b22b..0b1fce2 100644
--- a/nvdsinfer_custom_impl_Yolo/layers/batchnorm_layer.cpp
+++ b/nvdsinfer_custom_impl_Yolo/layers/batchnorm_layer.cpp
@@ -10,7 +10,7 @@
nvinfer1::ITensor*
 batchnormLayer(int layerIdx, std::map<std::string, std::string>& block, std::vector<float>& weights,
-    std::vector<nvinfer1::Weights>& trtWeights, int& weightPtr, std::string weightsType, float eps, nvinfer1::ITensor* input,
+    std::vector<nvinfer1::Weights>& trtWeights, int& weightPtr, nvinfer1::ITensor* input,
nvinfer1::INetworkDefinition* network)
{
nvinfer1::ITensor* output;
@@ -26,41 +26,21 @@ batchnormLayer(int layerIdx, std::map<std::string, std::string>& block, std::vec
   std::vector<float> bnRunningMean;
   std::vector<float> bnRunningVar;
- if (weightsType == "weights") {
- for (int i = 0; i < filters; ++i) {
- bnBiases.push_back(weights[weightPtr]);
- ++weightPtr;
- }
- for (int i = 0; i < filters; ++i) {
- bnWeights.push_back(weights[weightPtr]);
- ++weightPtr;
- }
- for (int i = 0; i < filters; ++i) {
- bnRunningMean.push_back(weights[weightPtr]);
- ++weightPtr;
- }
- for (int i = 0; i < filters; ++i) {
- bnRunningVar.push_back(sqrt(weights[weightPtr] + 1.0e-5));
- ++weightPtr;
- }
+ for (int i = 0; i < filters; ++i) {
+ bnBiases.push_back(weights[weightPtr]);
+ ++weightPtr;
}
- else {
- for (int i = 0; i < filters; ++i) {
- bnWeights.push_back(weights[weightPtr]);
- ++weightPtr;
- }
- for (int i = 0; i < filters; ++i) {
- bnBiases.push_back(weights[weightPtr]);
- ++weightPtr;
- }
- for (int i = 0; i < filters; ++i) {
- bnRunningMean.push_back(weights[weightPtr]);
- ++weightPtr;
- }
- for (int i = 0; i < filters; ++i) {
- bnRunningVar.push_back(sqrt(weights[weightPtr] + eps));
- ++weightPtr;
- }
+ for (int i = 0; i < filters; ++i) {
+ bnWeights.push_back(weights[weightPtr]);
+ ++weightPtr;
+ }
+ for (int i = 0; i < filters; ++i) {
+ bnRunningMean.push_back(weights[weightPtr]);
+ ++weightPtr;
+ }
+ for (int i = 0; i < filters; ++i) {
+ bnRunningVar.push_back(sqrt(weights[weightPtr] + 1.0e-5));
+ ++weightPtr;
}
int size = filters;
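After this change the loader always consumes Darknet-ordered batchnorm blocks (biases, scales, running means, running vars, each `filters` floats) with a fixed eps of 1.0e-5; the same pointer walk can be sketched in Python (illustration only, not the repo's code):

```python
import math

def read_batchnorm(weights, ptr, filters, eps=1.0e-5):
    # Darknet weight order: biases, scales, running means, running vars
    biases = weights[ptr:ptr + filters]; ptr += filters
    scales = weights[ptr:ptr + filters]; ptr += filters
    means = weights[ptr:ptr + filters]; ptr += filters
    # the C++ loop stores sqrt(var + eps) directly
    stds = [math.sqrt(v + eps) for v in weights[ptr:ptr + filters]]; ptr += filters
    return biases, scales, means, stds, ptr

w = [0.1, 0.2, 1.0, 1.0, 0.0, 0.0, 1.0, 4.0]  # 4 blocks with filters=2
biases, scales, means, stds, ptr = read_batchnorm(w, 0, 2)
print(ptr)  # 8
```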
diff --git a/nvdsinfer_custom_impl_Yolo/layers/batchnorm_layer.h b/nvdsinfer_custom_impl_Yolo/layers/batchnorm_layer.h
index c3bfffc..fda7fd8 100644
--- a/nvdsinfer_custom_impl_Yolo/layers/batchnorm_layer.h
+++ b/nvdsinfer_custom_impl_Yolo/layers/batchnorm_layer.h
@@ -14,7 +14,7 @@
#include "activation_layer.h"
nvinfer1::ITensor* batchnormLayer(int layerIdx, std::map<std::string, std::string>& block, std::vector<float>& weights,
-    std::vector<nvinfer1::Weights>& trtWeights, int& weightPtr, std::string weightsType, float eps, nvinfer1::ITensor* input,
+    std::vector<nvinfer1::Weights>& trtWeights, int& weightPtr, nvinfer1::ITensor* input,
nvinfer1::INetworkDefinition* network);
#endif
diff --git a/nvdsinfer_custom_impl_Yolo/layers/c2f_layer.cpp b/nvdsinfer_custom_impl_Yolo/layers/c2f_layer.cpp
deleted file mode 100644
index c0cf780..0000000
--- a/nvdsinfer_custom_impl_Yolo/layers/c2f_layer.cpp
+++ /dev/null
@@ -1,82 +0,0 @@
-/*
- * Created by Marcos Luciano
- * https://www.github.com/marcoslucianops
- */
-
-#include "c2f_layer.h"
-
-#include
-
-#include "convolutional_layer.h"
-
-nvinfer1::ITensor*
-c2fLayer(int layerIdx, std::map<std::string, std::string>& block, std::vector<float>& weights,
-    std::vector<nvinfer1::Weights>& trtWeights, int& weightPtr, std::string weightsType, float eps, nvinfer1::ITensor* input,
- nvinfer1::INetworkDefinition* network)
-{
- nvinfer1::ITensor* output;
-
- assert(block.at("type") == "c2f");
- assert(block.find("n") != block.end());
- assert(block.find("shortcut") != block.end());
- assert(block.find("filters") != block.end());
-
- int n = std::stoi(block.at("n"));
- bool shortcut = (block.at("shortcut") == "1");
- int filters = std::stoi(block.at("filters"));
-
- nvinfer1::Dims inputDims = input->getDimensions();
-
- nvinfer1::ISliceLayer* sliceLt = network->addSlice(*input,nvinfer1::Dims{3, {0, 0, 0}},
- nvinfer1::Dims{3, {inputDims.d[0] / 2, inputDims.d[1], inputDims.d[2]}}, nvinfer1::Dims{3, {1, 1, 1}});
- assert(sliceLt != nullptr);
- std::string sliceLtLayerName = "slice_lt_" + std::to_string(layerIdx);
- sliceLt->setName(sliceLtLayerName.c_str());
- nvinfer1::ITensor* lt = sliceLt->getOutput(0);
-
- nvinfer1::ISliceLayer* sliceRb = network->addSlice(*input,nvinfer1::Dims{3, {inputDims.d[0] / 2, 0, 0}},
- nvinfer1::Dims{3, {inputDims.d[0] / 2, inputDims.d[1], inputDims.d[2]}}, nvinfer1::Dims{3, {1, 1, 1}});
- assert(sliceRb != nullptr);
- std::string sliceRbLayerName = "slice_rb_" + std::to_string(layerIdx);
- sliceRb->setName(sliceRbLayerName.c_str());
- nvinfer1::ITensor* rb = sliceRb->getOutput(0);
-
-  std::vector<nvinfer1::ITensor*> concatInputs;
- concatInputs.push_back(lt);
- concatInputs.push_back(rb);
- output = rb;
-
- for (int i = 0; i < n; ++i) {
- std::string cv1MlayerName = "c2f_1_" + std::to_string(i + 1) + "_";
- nvinfer1::ITensor* cv1M = convolutionalLayer(layerIdx, block, weights, trtWeights, weightPtr, weightsType, filters, eps,
- output, network, cv1MlayerName);
- assert(cv1M != nullptr);
-
- std::string cv2MlayerName = "c2f_2_" + std::to_string(i + 1) + "_";
- nvinfer1::ITensor* cv2M = convolutionalLayer(layerIdx, block, weights, trtWeights, weightPtr, weightsType, filters, eps,
- cv1M, network, cv2MlayerName);
- assert(cv2M != nullptr);
-
- if (shortcut) {
- nvinfer1::IElementWiseLayer* ew = network->addElementWise(*output, *cv2M, nvinfer1::ElementWiseOperation::kSUM);
- assert(ew != nullptr);
- std::string ewLayerName = "shortcut_c2f_" + std::to_string(i + 1) + "_" + std::to_string(layerIdx);
- ew->setName(ewLayerName.c_str());
- output = ew->getOutput(0);
- concatInputs.push_back(output);
- }
- else {
- output = cv2M;
- concatInputs.push_back(output);
- }
- }
-
- nvinfer1::IConcatenationLayer* concat = network->addConcatenation(concatInputs.data(), concatInputs.size());
- assert(concat != nullptr);
- std::string concatLayerName = "route_" + std::to_string(layerIdx);
- concat->setName(concatLayerName.c_str());
- concat->setAxis(0);
- output = concat->getOutput(0);
-
- return output;
-}
diff --git a/nvdsinfer_custom_impl_Yolo/layers/c2f_layer.h b/nvdsinfer_custom_impl_Yolo/layers/c2f_layer.h
deleted file mode 100644
index 28f373f..0000000
--- a/nvdsinfer_custom_impl_Yolo/layers/c2f_layer.h
+++ /dev/null
@@ -1,18 +0,0 @@
-/*
- * Created by Marcos Luciano
- * https://www.github.com/marcoslucianops
- */
-
-#ifndef __C2F_LAYER_H__
-#define __C2F_LAYER_H__
-
-#include