New features and fixes
@@ -38,7 +38,25 @@ Generate the ONNX model file (example for PP-YOLOE+_s)

```
pip3 install onnx onnxsim onnxruntime
python3 export_ppyoloe.py -w ppyoloe_plus_crn_s_80e_coco.pdparams -c configs/ppyoloe/ppyoloe_plus_crn_s_80e_coco.yml --simplify
python3 export_ppyoloe.py -w ppyoloe_plus_crn_s_80e_coco.pdparams -c configs/ppyoloe/ppyoloe_plus_crn_s_80e_coco.yml --simplify --dynamic
```

**NOTE**: To simplify the ONNX model

```
--simplify
```

**NOTE**: To use dynamic batch-size

```
--dynamic
```

**NOTE**: To use implicit batch-size (example for batch-size = 4)

```
--batch 4
```
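If you want to confirm whether the export produced a dynamic or a fixed batch dimension, a minimal check with the `onnx` package installed above is the sketch below; the file name assumes the default export output and may differ in your setup.

```
# Optional sanity check: print the input tensor shape of the exported model.
# A dynamic export shows a symbolic first dimension; --batch 4 shows 4.
import onnx

model = onnx.load("ppyoloe_plus_crn_s_80e_coco.onnx")  # assumed default output name
for inp in model.graph.input:
    print(inp.name, [d.dim_param or d.dim_value for d in inp.type.tensor_type.shape.dim])
```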
**NOTE**: If you are using DeepStream 5.1, use opset 12 or lower. The default opset is 11.
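To verify which opset your exported model actually ended up with, it can be read back with the `onnx` package (same assumed file name as above):

```
# Optional: print the opset version(s) recorded in the exported model.
import onnx

model = onnx.load("ppyoloe_plus_crn_s_80e_coco.onnx")  # assumed default output name
print([(o.domain or "ai.onnx", o.version) for o in model.opset_import])
```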
@@ -84,7 +102,7 @@ Open the `DeepStream-Yolo` folder and compile the lib
* DeepStream 5.1 on x86 platform

```
CUDA_VER=11.1 LEGACY=1 make -C nvdsinfer_custom_impl_Yolo
CUDA_VER=11.1 make -C nvdsinfer_custom_impl_Yolo
```

* DeepStream 6.2 / 6.1.1 / 6.1 on Jetson platform
@@ -93,18 +111,12 @@ Open the `DeepStream-Yolo` folder and compile the lib
CUDA_VER=11.4 make -C nvdsinfer_custom_impl_Yolo
```

* DeepStream 6.0.1 / 6.0 on Jetson platform
* DeepStream 6.0.1 / 6.0 / 5.1 on Jetson platform

```
CUDA_VER=10.2 make -C nvdsinfer_custom_impl_Yolo
```

* DeepStream 5.1 on Jetson platform

```
CUDA_VER=10.2 LEGACY=1 make -C nvdsinfer_custom_impl_Yolo
```

##

### Edit the config_infer_primary_ppyoloe_plus file
@@ -115,7 +127,6 @@ Edit the `config_infer_primary_ppyoloe_plus.txt` file according to your model (e
[property]
...
onnx-file=ppyoloe_plus_crn_s_80e_coco.onnx
model-engine-file=ppyoloe_plus_crn_s_80e_coco.onnx_b1_gpu0_fp32.engine
...
num-detected-classes=80
...
@@ -128,13 +139,17 @@ parse-bbox-func-name=NvDsInferParseYoloE
**NOTE**: The **PP-YOLOE+ and PP-YOLOE legacy** do not resize the input with padding. To get better accuracy, use

```
...
maintain-aspect-ratio=0
...
```

**NOTE**: The **PP-YOLOE+** uses zero-mean normalization in the image preprocessing. It is important to change the `net-scale-factor` according to the trained values.

```
...
net-scale-factor=0.0039215697906911373
...
```
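That value is simply 1/255 (pixels scaled to the 0-1 range with no mean subtraction), written at float32 precision; a quick sketch to reproduce it:

```
# PP-YOLOE+ preprocessing only scales pixels to [0, 1], so net-scale-factor is 1/255.
import numpy as np
print(np.float32(1.0 / 255.0))  # 0.003921569, i.e. 0.0039215697906911373 at double precision
```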
**NOTE**: The **PP-YOLOE legacy** uses normalization in the image preprocessing. It is important to change the `net-scale-factor` and `offsets` according to the trained values.
@@ -142,8 +157,18 @@ net-scale-factor=0.0039215697906911373
Default: `mean = 0.485, 0.456, 0.406` and `std = 0.229, 0.224, 0.225`

```
...
net-scale-factor=0.0173520735727919486
offsets=123.675;116.28;103.53
...
```
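These numbers follow from DeepStream's `nvinfer` preprocessing formula `y = net-scale-factor * (x - offsets)`: the offsets are the per-channel means scaled back to pixel range, and because `net-scale-factor` is a single scalar, the three std values are averaged. A small sketch reproducing them:

```
# Derive the config values from the PP-YOLOE legacy mean/std given above.
# nvinfer applies y = net-scale-factor * (x - offsets), so:
mean = [0.485, 0.456, 0.406]
std = [0.229, 0.224, 0.225]

offsets = [m * 255 for m in mean]                     # ~[123.675, 116.28, 103.53]
net_scale_factor = 1 / (255 * (sum(std) / len(std)))  # ~0.0173520735727919

print(offsets, net_scale_factor)
```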
**NOTE**: By default, the dynamic batch-size is set. To use implicit batch-size, uncomment the line

```
...
force-implicit-batch-dim=1
...
```

##