Revert "Fixed multipleInferences"

This reverts commit c99e970199.
Marcos Luciano
2020-12-31 01:20:02 -03:00
parent 665d2f55e7
commit dbf855490a
6 changed files with 149 additions and 36 deletions


@@ -4,44 +4,57 @@ How to use multiple GIEs on DeepStream
##
1. Download [my native folder](https://github.com/marcoslucianops/DeepStream-Yolo/tree/master/native), rename to yolo and move to your deepstream/sources folder.
2. Copy each obj.names to deepstream/sources/yolo directory, renaming file to labels_*.txt (* = pgie/sgie1/sgie2/etc), according to each inference type.
3. Copy each yolo.cfg and yolo.weights files to deepstream/sources/yolo directory, renaming files to yolo_*.cfg and yolo_*.weights (* = pgie/sgie1/sgie2/etc), according to each inference type.
4. Make a copy of config_infer_primary.txt file and rename it to config_infer_secondary*.txt (* = 1/2/3/etc), according to inference order.
5. Edit DeepStream for your custom model, according to each yolo_*.cfg (* = pgie/sgie1/sgie2/etc) file: https://github.com/marcoslucianops/DeepStream-Yolo/blob/master/customModels.md
2. Make a folder named pgie in the deepstream/sources/yolo directory (this is where the primary-inference files go).
3. Make a folder named sgie* for each secondary inference in the deepstream/sources/yolo directory (* = 1, 2, 3, etc., depending on the number of secondary inferences; this is where the files of the other inferences go).
4. Copy each obj.names file to its inference directory (pgie, sgie*) and rename it to labels.txt, according to each inference type.
5. Copy your yolo.cfg and yolo.weights files to each inference directory (pgie, sgie*), according to each inference type.
6. Move the nvdsinfer_custom_impl_Yolo folder and the config_infer_primary.txt file to each inference directory (pgie, sgie*); for sgie's, rename config_infer_primary.txt to config_infer_secondary*.txt (* = 1, 2, 3, etc.).
7. Edit DeepStream for your custom model, according to each yolo.cfg file: https://github.com/marcoslucianops/DeepStream-Yolo/blob/master/customModels.md
**The example folder in this repository contains all the example files for multiple YOLO inferences.**
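The folder layout the steps above produce can be sketched as shell commands (paths and folder names are the examples used in this guide; adjust them to your DeepStream install):

```shell
# Layout sketch for one primary and one secondary inference.
# obj.names, yolo.cfg and yolo.weights are the example file names
# from this guide, not fixed requirements.
BASE=deepstream/sources/yolo
mkdir -p "$BASE/pgie" "$BASE/sgie1"

# For each inference directory, you would then copy and rename:
#   cp obj.names    "$BASE/pgie/labels.txt"
#   cp yolo.cfg     "$BASE/pgie/yolo.cfg"
#   cp yolo.weights "$BASE/pgie/yolo.weights"
# ...and likewise for sgie1 with its own model files.
```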
##
### Editing Makefile
To compile nvdsinfer_custom_impl_Yolo without errors, you need to edit the Makefile (line 34) in the nvdsinfer_custom_impl_Yolo folder of each inference directory, from:
```
CFLAGS+= -I../../includes -I/usr/local/cuda-$(CUDA_VER)/include
```
To:
```
CFLAGS+= -I../../../includes -I/usr/local/cuda-$(CUDA_VER)/include
```
##
### Compiling edited models
1. Check your CUDA version (nvcc --version)
2. Go to deepstream/sources/yolo directory.
2. Go to the inference directory.
3. Run this command (example for CUDA 10.2):
```
CUDA_VER=10.2 make -C nvdsinfer_custom_impl_Yolo
```
**Do this for each GIE!**
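Since the library must be compiled once per inference directory, a small loop saves repetition (a sketch: pgie/sgie1/sgie2 are the folder names assumed above, and folders that don't exist are simply skipped):

```shell
# Build nvdsinfer_custom_impl_Yolo in every GIE folder that exists.
# CUDA 10.2 shown; replace with the version reported by nvcc --version.
for d in pgie sgie1 sgie2; do
  if [ -d "$d/nvdsinfer_custom_impl_Yolo" ]; then
    CUDA_VER=10.2 make -C "$d/nvdsinfer_custom_impl_Yolo"
  fi
done
```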
##
### Add secondary-gie to deepstream_app_config after primary-gie
Example for 1 secondary-gie (2 inferences):
```
[secondary-gie0]
enable=1
gpu-id=0
gie-unique-id=2
# If you want the secondary inference to operate on a specified GIE id (the gie-unique-id you want to operate on: 1, 2, etc.; comment it out if you don't want to use it)
operate-on-gie-id=1
# If you want the secondary inference to operate on specified class ids of the GIE (the class ids you want to operate on: 1, 1;2, 2;3;4, etc.; comment it out if you don't want to use it)
operate-on-class-ids=0
nvbuf-memory-type=0
config-file=config_infer_secondary1.txt
config-file=sgie1/config_infer_secondary1.txt
```
Example for 2 secondary-gies (3 inferences):
```
[secondary-gie0]
enable=1
@@ -50,7 +63,7 @@ gie-unique-id=2
operate-on-gie-id=1
operate-on-class-ids=0
nvbuf-memory-type=0
config-file=config_infer_secondary1.txt
config-file=sgie1/config_infer_secondary1.txt
[secondary-gie1]
enable=1
@@ -59,40 +72,51 @@ gie-unique-id=3
operate-on-gie-id=1
operate-on-class-ids=0
nvbuf-memory-type=0
config-file=config_infer_secondary2.txt
config-file=sgie2/config_infer_secondary2.txt
```
Note: remember to change the primary-gie section from
```
[primary-gie]
enable=1
gpu-id=0
gie-unique-id=1
nvbuf-memory-type=0
config-file=config_infer_primary.txt
```
to
```
[primary-gie]
enable=1
gpu-id=0
gie-unique-id=1
nvbuf-memory-type=0
config-file=pgie/config_infer_primary.txt
```
##
### Editing config_infer
* Edit config_infer (config_infer_primary, config_infer_secondary1, etc.) files
* Edit the paths in the config files (config_infer_primary, config_infer_secondary1, etc.)
Example for primary
```
custom-network-config=yolo_pgie.cfg
model-file=yolo_pgie.weights
model-engine-file=pgie_b16_gpu0_fp16.engine
labelfile-path=labels_pgie.txt
custom-network-config=pgie/yolo.cfg
```
Example for secondary1
```
custom-network-config=yolo_sgie1.cfg
model-file=yolo_sgie1.weights
model-engine-file=sgie1_b16_gpu0_fp16.engine
labelfile-path=labels_sgie1.txt
custom-network-config=sgie1/yolo.cfg
```
Example for secondary2
```
custom-network-config=yolo_sgie2.cfg
model-file=yolo_sgie2.weights
model-engine-file=sgie2_b16_gpu0_fp16.engine
labelfile-path=labels_sgie2.txt
custom-network-config=sgie2/yolo.cfg
```
##
@@ -137,6 +161,22 @@ Example for all secondary:
batch-size=16
```
##
* If you want the secondary inference to operate on a specified GIE id (the gie-unique-id you want to operate on: 1, 2, etc.)
```
operate-on-gie-id=1
```
##
* If you want the secondary inference to operate on specified class ids of the GIE (the class ids you want to operate on: 1, 1;2, 2;3;4, etc.)
```
operate-on-class-ids=0
```
### Testing model
To run your custom YOLO model, use this command:
@@ -144,4 +184,4 @@ To run your custom YOLO model, use this command
deepstream-app -c deepstream_app_config.txt
```
**During the test process, the engine file will be generated. When the engine build process is done, rename the engine file according to each configured engine name (pgie/sgie1/sgie2/etc.) in the config_infer file.**
**During the test process, the engine file will be generated. When the engine build process is done, move the engine file to its respective GIE folder (pgie, sgie1, etc.).**