# PP-YOLOE / PP-YOLOE+ usage

**NOTE**: You can use the release/2.6 branch of the PPYOLOE repo to convert all model versions.

* [Convert model](#convert-model)
* [Compile the lib](#compile-the-lib)
* [Edit the config_infer_primary_ppyoloe_plus file](#edit-the-config_infer_primary_ppyoloe_plus-file)

##

### Convert model

#### 1. Download the PaddleDetection repo and install the requirements

https://github.com/PaddlePaddle/PaddleDetection/blob/release/2.6/docs/tutorials/INSTALL.md

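As a hedged sketch of that setup (the branch name follows the note above; adjust to your environment):

```
git clone -b release/2.6 https://github.com/PaddlePaddle/PaddleDetection.git
cd PaddleDetection
pip3 install -r requirements.txt
```
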
**NOTE**: It is recommended to use Python virtualenv.
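
For example, a minimal virtualenv setup (names are illustrative) could be:

```
python3 -m venv venv
source venv/bin/activate
pip3 install --upgrade pip
```
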
#### 2. Copy the converter

Copy the `export_ppyoloe.py` file from the `DeepStream-Yolo/utils` directory to the `PaddleDetection` folder.

#### 3. Download the model

Download the `pdparams` file from [PP-YOLOE](https://github.com/PaddlePaddle/PaddleDetection/tree/release/2.6/configs/ppyoloe) releases (example for PP-YOLOE+_s)

```
wget https://paddledet.bj.bcebos.com/models/ppyoloe_plus_crn_s_80e_coco.pdparams
```

**NOTE**: You can use your custom model.

#### 4. Convert model

Generate the ONNX model file (example for PP-YOLOE+_s)

```
pip3 install onnx onnxsim onnxruntime
python3 export_ppyoloe.py -w ppyoloe_plus_crn_s_80e_coco.pdparams -c configs/ppyoloe/ppyoloe_plus_crn_s_80e_coco.yml --simplify
```
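
For a custom model, the same pattern should apply; a hypothetical example (the paths and filenames are placeholders):

```
python3 export_ppyoloe.py -w output/ppyoloe_custom/best_model.pdparams -c configs/ppyoloe/ppyoloe_custom.yml --simplify
```
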

#### 5. Copy generated files

Copy the generated ONNX model file to the `DeepStream-Yolo` folder.

##

### Edit the config_infer_primary_ppyoloe_plus file

Edit the `config_infer_primary_ppyoloe_plus.txt` file according to your model (example for PP-YOLOE+_s with 80 classes)

```
[property]
...
onnx-file=ppyoloe_plus_crn_s_80e_coco.onnx
model-engine-file=ppyoloe_plus_crn_s_80e_coco.onnx_b1_gpu0_fp32.engine
...
num-detected-classes=80
...
parse-bbox-func-name=NvDsInferParseYoloE
...
```
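
The engine filename encodes the batch size and precision. If you later set `batch-size=2` and `network-mode=2` (FP16), the expected name, an assumption following the pattern above, would be:

```
model-engine-file=ppyoloe_plus_crn_s_80e_coco.onnx_b2_gpu0_fp16.engine
```
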
**NOTE**: If you use the **legacy** model, you should edit the `config_infer_primary_ppyoloe.txt` file.
# YOLONAS usage

**NOTE**: The yaml file is not required.

* [Convert model](#convert-model)
* [Compile the lib](#compile-the-lib)
* [Edit the config_infer_primary_yolonas file](#edit-the-config_infer_primary_yolonas-file)
* [Edit the deepstream_app_config file](#edit-the-deepstream_app_config-file)
* [Testing the model](#testing-the-model)

##

### Convert model

#### 1. Download the YOLO-NAS repo and install the requirements

```
git clone https://github.com/Deci-AI/super-gradients.git
cd super-gradients
pip3 install -r requirements.txt
python3 setup.py install
pip3 install onnx onnxsim onnxruntime
```

**NOTE**: It is recommended to use Python virtualenv.

#### 2. Copy the converter

Copy the `export_yolonas.py` file from the `DeepStream-Yolo/utils` directory to the `super-gradients` folder.

#### 3. Download the model

Download the `pth` file from the [YOLO-NAS](https://sghub.deci.ai/) website (example for YOLO-NAS S)

```
wget https://sghub.deci.ai/models/yolo_nas_s_coco.pth
```

**NOTE**: You can use your custom model.

#### 4. Convert model

Generate the ONNX model file (example for YOLO-NAS S)

```
python3 export_yolonas.py -m yolo_nas_s -w yolo_nas_s_coco.pth --simplify
```

**NOTE**: Model names

```
-m yolo_nas_s
```

or

```
-m yolo_nas_m
```

or

```
-m yolo_nas_l
```

**NOTE**: To change the inference size (default: 640)

```
-s SIZE
--size SIZE
-s HEIGHT WIDTH
--size HEIGHT WIDTH
```

Example for 1280

```
-s 1280
```

or

```
-s 1280 1280
```

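Putting these options together, a YOLO-NAS M export at 1280 (the `yolo_nas_m_coco.pth` filename is assumed by analogy with the S variant) might look like:

```
python3 export_yolonas.py -m yolo_nas_m -w yolo_nas_m_coco.pth -s 1280 --simplify
```
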
#### 5. Copy generated files

Copy the generated ONNX model file to the `DeepStream-Yolo` folder.

##

### Compile the lib

Open the `DeepStream-Yolo` folder and compile the lib

* DeepStream 6.2 on x86 platform

```
CUDA_VER=11.8 make -C nvdsinfer_custom_impl_Yolo
```

* DeepStream 6.1.1 on x86 platform

```
CUDA_VER=11.7 make -C nvdsinfer_custom_impl_Yolo
```

* DeepStream 6.1 on x86 platform

```
CUDA_VER=11.6 make -C nvdsinfer_custom_impl_Yolo
```

* DeepStream 6.0.1 / 6.0 on x86 platform

```
CUDA_VER=11.4 make -C nvdsinfer_custom_impl_Yolo
```

* DeepStream 6.2 / 6.1.1 / 6.1 on Jetson platform

```
CUDA_VER=11.4 make -C nvdsinfer_custom_impl_Yolo
```

* DeepStream 6.0.1 / 6.0 on Jetson platform

```
CUDA_VER=10.2 make -C nvdsinfer_custom_impl_Yolo
```

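If you are unsure which CUDA version is installed, a quick check (assuming the CUDA toolkit is on your PATH) is:

```
nvcc --version
```
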
##

### Edit the config_infer_primary_yolonas file

Edit the `config_infer_primary_yolonas.txt` file according to your model (example for YOLO-NAS S with 80 classes)

```
[property]
...
onnx-file=yolo_nas_s_coco.onnx
model-engine-file=yolo_nas_s_coco.onnx_b1_gpu0_fp32.engine
...
num-detected-classes=80
...
parse-bbox-func-name=NvDsInferParseYoloE
...
```

##

### Edit the deepstream_app_config file

```
...
[primary-gie]
...
config-file=config_infer_primary_yolonas.txt
```
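
If you also need to point the app at your own input, a typical `[source0]` entry (the URI and values here are placeholders) looks like:

```
[source0]
enable=1
type=3
uri=file:///path/to/video.mp4
num-sources=1
```
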
##

### Testing the model

```
deepstream-app -c deepstream_app_config.txt
```

**NOTE**: For more information about custom models configuration (`batch-size`, `network-mode`, etc), please check the [`docs/customModels.md`](customModels.md) file.

# YOLOR usage

**NOTE**: Select the correct branch of the YOLOR repo before the conversion.

**NOTE**: The cfg file is required for the main branch.

* [Convert model](#convert-model)
* [Compile the lib](#compile-the-lib)

##

### Convert model

#### 1. Download the YOLOR repo and install the requirements

```
git clone https://github.com/WongKinYiu/yolor.git
cd yolor
pip3 install -r requirements.txt
pip3 install onnx onnxsim onnxruntime
```

**NOTE**: It is recommended to use Python virtualenv.

#### 2. Copy the converter

Copy the `export_yolor.py` file from the `DeepStream-Yolo/utils` directory to the `yolor` folder.

#### 3. Download the model

Download the `pt` file from the [YOLOR](https://github.com/WongKinYiu/yolor) repo.

**NOTE**: You can use your custom model.

#### 4. Convert model

Generate the ONNX model file

- Main branch

Example for YOLOR-CSP

```
python3 export_yolor.py -w yolor_csp.pt -c cfg/yolor_csp.cfg --simplify
```

- Paper branch

Example for YOLOR-P6

```
python3 export_yolor.py -w yolor-p6.pt --simplify
```

**NOTE**: To change the inference size (default: 640)

```
-s SIZE
--size SIZE
-s HEIGHT WIDTH
--size HEIGHT WIDTH
```

Example for 1280

```
-s 1280
```

or

```
-s 1280 1280
```
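
Combining the flags above, a main-branch export at 1280 could be:

```
python3 export_yolor.py -w yolor_csp.pt -c cfg/yolor_csp.cfg -s 1280 --simplify
```
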
#### 5. Copy generated files

Copy the generated ONNX model file to the `DeepStream-Yolo` folder.

##

### Edit the config_infer_primary_yolor file

Edit the `config_infer_primary_yolor.txt` file according to your model (example for YOLOR-CSP with 80 classes)

```
[property]
...
onnx-file=yolor_csp.onnx
model-engine-file=yolor_csp.onnx_b1_gpu0_fp32.engine
...
num-detected-classes=80
...
parse-bbox-func-name=NvDsInferParseYolo
...
```

##

# YOLOX usage

**NOTE**: You can use the main branch of the YOLOX repo to convert all model versions.

**NOTE**: The yaml file is not required.

* [Convert model](#convert-model)

##

### Convert model

#### 1. Download the YOLOX repo and install the requirements

```
git clone https://github.com/Megvii-BaseDetection/YOLOX.git
cd YOLOX
pip3 install -r requirements.txt
python3 setup.py develop
pip3 install onnx onnxsim onnxruntime
```

**NOTE**: It is recommended to use Python virtualenv.

#### 2. Copy the converter

Copy the `export_yolox.py` file from the `DeepStream-Yolo/utils` directory to the `YOLOX` folder.

#### 3. Download the model

Download the `pth` file from [YOLOX](https://github.com/Megvii-BaseDetection/YOLOX/releases) releases (example for YOLOX-s standard)

```
wget https://github.com/Megvii-BaseDetection/YOLOX/releases/download/0.1.1rc0/yolox_s.pth
```

**NOTE**: You can use your custom model.

#### 4. Convert model

Generate the ONNX model file (example for YOLOX-s standard)

```
python3 export_yolox.py -w yolox_s.pth -c exps/default/yolox_s.py --simplify
```
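
For a custom YOLOX model, the same `-w`/`-c` pattern should apply; a hypothetical example (the checkpoint and exp file paths are placeholders):

```
python3 export_yolox.py -w YOLOX_outputs/my_exp/best_ckpt.pth -c exps/example/custom/my_exp.py --simplify
```
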
#### 5. Copy generated files

Copy the generated ONNX model file to the `DeepStream-Yolo` folder.

##

### Edit the config_infer_primary_yolox file

Edit the `config_infer_primary_yolox.txt` file according to your model (example for YOLOX-s standard with 80 classes)

```
[property]
...
onnx-file=yolox_s.onnx
model-engine-file=yolox_s.onnx_b1_gpu0_fp32.engine
...
num-detected-classes=80
...
parse-bbox-func-name=NvDsInferParseYolo
...
```

**NOTE**: If you use the **legacy** model, you should edit the `config_infer_primary_yolox_legacy.txt` file.
# YOLOv5 usage

**NOTE**: You can use the master branch of the YOLOv5 repo to convert all model versions.

**NOTE**: The yaml file is not required.

##

### Convert model

#### 1. Download the YOLOv5 repo and install the requirements

```
git clone https://github.com/ultralytics/yolov5.git
cd yolov5
pip3 install -r requirements.txt
pip3 install onnx onnxsim onnxruntime
```

**NOTE**: It is recommended to use Python virtualenv.

#### 2. Copy the converter

Copy the `export_yoloV5.py` file from the `DeepStream-Yolo/utils` directory to the `yolov5` folder.

#### 3. Download the model

Download the `pt` file from [YOLOv5](https://github.com/ultralytics/yolov5/releases/) releases (example for YOLOv5s 7.0)

```
wget https://github.com/ultralytics/yolov5/releases/download/v7.0/yolov5s.pt
```

**NOTE**: You can use your custom model.

#### 4. Convert model

Generate the ONNX model file (example for YOLOv5s)

```
python3 export_yoloV5.py -w yolov5s.pt --simplify
```

**NOTE**: To convert a P6 model

```
--p6
```

#### 5. Copy generated files

Copy the generated ONNX model file to the `DeepStream-Yolo` folder.

##

### Edit the config_infer_primary_yoloV5 file

Edit the `config_infer_primary_yoloV5.txt` file according to your model (example for YOLOv5s with 80 classes)

```
[property]
...
onnx-file=yolov5s.onnx
model-engine-file=yolov5s.onnx_b1_gpu0_fp32.engine
...
num-detected-classes=80
...
parse-bbox-func-name=NvDsInferParseYolo
...
```

##

# YOLOv6 usage

#### 1. Download the YOLOv6 repo and install the requirements

```
git clone https://github.com/meituan/YOLOv6.git
cd YOLOv6
pip3 install -r requirements.txt
pip3 install onnx onnxsim onnxruntime
```

**NOTE**: It is recommended to use Python virtualenv.

#### 2. Copy the converter

Copy the `export_yoloV6.py` file from the `DeepStream-Yolo/utils` directory to the `YOLOv6` folder.

#### 3. Download the model

Download the `pt` file from [YOLOv6](https://github.com/meituan/YOLOv6/releases/) releases (example for YOLOv6-S 3.0)

```
wget https://github.com/meituan/YOLOv6/releases/download/0.3.0/yolov6s.pt
```

**NOTE**: You can use your custom model.

#### 4. Convert model

Generate the ONNX model file (example for YOLOv6-S 3.0)

```
python3 export_yoloV6.py -w yolov6s.pt --simplify
```

**NOTE**: To convert a P6 model

```
--p6
```

#### 5. Copy generated files

Copy the generated ONNX model file to the `DeepStream-Yolo` folder.

##

### Edit the config_infer_primary_yoloV6 file

Edit the `config_infer_primary_yoloV6.txt` file according to your model (example for YOLOv6-S with 80 classes)

```
[property]
...
onnx-file=yolov6s.onnx
model-engine-file=yolov6s.onnx_b1_gpu0_fp32.engine
...
num-detected-classes=80
...
parse-bbox-func-name=NvDsInferParseYolo
...
```

##

# YOLOv7 usage

#### 1. Download the YOLOv7 repo and install the requirements

```
git clone https://github.com/WongKinYiu/yolov7.git
cd yolov7
pip3 install -r requirements.txt
pip3 install onnx onnxsim onnxruntime
```

**NOTE**: It is recommended to use Python virtualenv.

#### 2. Copy the converter

Copy the `export_yoloV7.py` file from the `DeepStream-Yolo/utils` directory to the `yolov7` folder.

#### 3. Download the model

Download the `pt` file from [YOLOv7](https://github.com/WongKinYiu/yolov7/releases/) releases (example for YOLOv7)

```
wget https://github.com/WongKinYiu/yolov7/releases/download/v0.1/yolov7.pt
```

**NOTE**: You can use your custom model.

#### 4. Reparameterize your model

[YOLOv7](https://github.com/WongKinYiu/yolov7/releases/) and its variants cannot be directly converted to an engine file. Therefore, you will have to reparameterize your model using the code [here](https://github.com/WongKinYiu/yolov7/blob/main/tools/reparameterization.ipynb). Make sure to convert your custom checkpoints in the yolov7 repository, and then save the reparameterized checkpoints for conversion in the next step.

#### 5. Convert model

Generate the ONNX model file (example for YOLOv7)

```
python3 export_yoloV7.py -w yolov7.pt --simplify
```

**NOTE**: To convert a P6 model

```
--p6
```

#### 6. Copy generated files

Copy the generated ONNX model file to the `DeepStream-Yolo` folder.

##

### Edit the config_infer_primary_yoloV7 file

Edit the `config_infer_primary_yoloV7.txt` file according to your model (example for YOLOv7 with 80 classes)

```
[property]
...
onnx-file=yolov7.onnx
model-engine-file=yolov7.onnx_b1_gpu0_fp32.engine
...
num-detected-classes=80
...
parse-bbox-func-name=NvDsInferParseYolo
...
```

##

# YOLOv8 usage

#### 1. Download the YOLOv8 repo and install the requirements

```
git clone https://github.com/ultralytics/ultralytics.git
cd ultralytics
pip3 install -r requirements.txt
python3 setup.py install
pip3 install onnx onnxsim onnxruntime
```

**NOTE**: It is recommended to use Python virtualenv.

#### 2. Copy the converter

Copy the `export_yoloV8.py` file from the `DeepStream-Yolo/utils` directory to the `ultralytics` folder.

#### 3. Download the model

Download the `pt` file from [YOLOv8](https://github.com/ultralytics/assets/releases/) releases (example for YOLOv8s)

```
wget https://github.com/ultralytics/assets/releases/download/v0.0.0/yolov8s.pt
```

**NOTE**: You can use your custom model.

#### 4. Convert model

Generate the ONNX model file (example for YOLOv8s)

```
python3 export_yoloV8.py -w yolov8s.pt --simplify
```

**NOTE**: To change the inference size (default: 640)

```
-s SIZE
--size SIZE
-s HEIGHT WIDTH
--size HEIGHT WIDTH
```
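
For instance, exporting at 1280 with the flags above:

```
python3 export_yoloV8.py -w yolov8s.pt -s 1280 --simplify
```
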
#### 5. Copy generated files

Copy the generated ONNX model file to the `DeepStream-Yolo` folder.

##

### Edit the config_infer_primary_yoloV8 file

Edit the `config_infer_primary_yoloV8.txt` file according to your model (example for YOLOv8s with 80 classes)

```
[property]
...
onnx-file=yolov8s.onnx
model-engine-file=yolov8s.onnx_b1_gpu0_fp32.engine
...
num-detected-classes=80
...
parse-bbox-func-name=NvDsInferParseYolo
...
```

##

#### 2. Copy the class names file to the DeepStream-Yolo folder and rename it to `labels.txt`

#### 3. Copy the `onnx` or `cfg` and `weights` files to the DeepStream-Yolo folder

##

To understand and edit the `config_infer_primary.txt` file, read the DeepStream Plugin Guide.

* model-color-format

```
model-color-format=0
```

**NOTE**: Set it according to the number of channels in the `cfg` file (1=GRAYSCALE, 3=RGB for Darknet YOLO) or your model configuration (ONNX).

* custom-network-config and model-file (Darknet YOLO)

* Example for custom YOLOv4 model

```
custom-network-config=yolov4_custom.cfg
model-file=yolov4_custom.weights
```

* onnx-file (ONNX)
* Example for custom YOLOv8 model

```
onnx-file=yolov8s_custom.onnx
```

* model-engine-file

* Example for `batch-size=1` and `network-mode=2`

```
model-engine-file=model_b1_gpu0_fp16.engine
```

* Example for `batch-size=2` and `network-mode=0`

```
model-engine-file=model_b2_gpu0_fp32.engine
```

**NOTE**: To change the generated engine filename (Darknet YOLO), you need to edit and rebuild the `nvdsinfer_model_builder.cpp` file (`/opt/nvidia/deepstream/deepstream/sources/libs/nvdsinfer/nvdsinfer_model_builder.cpp`, lines 825-827)

```
suggestedPathName =
    modelPath + "_b" + std::to_string(maxBatchSize) + "_" +
    devId + "_" + networkMode2Str(networkMode) + ".engine";
```

* num-detected-classes

```
num-detected-classes=80
```

**NOTE**: Set it according to the number of classes in the `cfg` file (Darknet YOLO) or your model configuration (ONNX).

* interval

##

#### 3. Copy the class names file to each GIE folder and rename it to `labels.txt`

#### 4. Copy the `onnx` or `cfg` and `weights` files to each GIE folder

##

### Edit the config_infer_primary files

**NOTE**: Edit the files according to the model you will use (YOLOv8, YOLOv5, YOLOv4, etc).

**NOTE**: Do it for each GIE folder.

* Edit the path of the `cfg` file

Example for gie1 (Darknet YOLO)

```
custom-network-config=gie1/yolo.cfg
model-file=yolo.weights
```

Example for gie2 (Darknet YOLO)

```
custom-network-config=gie2/yolo.cfg
model-file=yolo.weights
```

Example for gie1 (ONNX)

```
onnx-file=yolo.onnx
```

Example for gie2 (ONNX)

```
onnx-file=yolo.onnx
```

* Edit the gie-unique-id
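
For example, one unique id per GIE config (the values here are assumptions; each GIE just needs a distinct id):

Example for gie1

```
gie-unique-id=1
```

Example for gie2

```
gie-unique-id=2
```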