Update multipleInferences.md
@@ -34,7 +34,7 @@ enable=1
 gpu-id=0
 gie-unique-id=2
 operate-on-gie-id=1
-# If you want secodary inference operate on specified class ids of GIE (class ids you want to operate: 1, 1;2, 2;3;4, 3 etc; comment it if you don't want to use)
+# If you want secondary inference to operate on specified class ids of the GIE (class ids you want to operate on: 1, 1;2, 2;3;4, etc.; comment it if you don't want to use)
 operate-on-class-ids=0
 nvbuf-memory-type=0
 config-file=config_infer_secondary1.txt
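
For reference, the keys touched in this hunk sit in a secondary-GIE group of `deepstream_app_config.txt`. Below is a minimal sketch of such a group, assuming the group name `[secondary-gie0]` and using only the keys that appear above; adapt the ids and file name to your own pipeline.

```
[secondary-gie0]
enable=1
gpu-id=0
# Unique id of this GIE instance
gie-unique-id=2
# Operate on the output of the GIE with gie-unique-id=1 (the primary detector)
operate-on-gie-id=1
# Restrict inference to class id 0 of that GIE; use e.g. 1;2 for several classes
operate-on-class-ids=0
nvbuf-memory-type=0
config-file=config_infer_secondary1.txt
```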
@@ -144,4 +144,4 @@ To run your custom YOLO model, use this command
 deepstream-app -c deepstream_app_config.txt
 ```
 
-** During test process, engine file will be generated. When engine build process is done, rename engine file according to each configured engine name pgie/sgie1/sgie2/etc) in config_infer file.
+**During the test process, an engine file will be generated. When the engine build is done, rename the engine file according to each configured engine name (pgie/sgie1/sgie2/etc.) in the config_infer file.**
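
To make the renaming step above concrete (the engine file names below are purely illustrative assumptions, not names generated by this repository), each `config_infer` file's `model-engine-file` property should match the engine you renamed for it, so that `deepstream-app` does not rebuild the engine on every run:

```
# config_infer_primary.txt (hypothetical renamed engine)
model-engine-file=model_b1_gpu0_fp32_pgie.engine

# config_infer_secondary1.txt (hypothetical renamed engine)
model-engine-file=model_b1_gpu0_fp32_sgie1.engine
```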