# How to use custom models on deepstream-app

* [Directory tree](#directory-tree)
* [Compile the lib](#compile-the-lib)
* [Understanding and editing deepstream_app_config file](#understanding-and-editing-deepstream_app_config-file)
* [Understanding and editing config_infer_primary file](#understanding-and-editing-config_infer_primary-file)
* [Testing the model](#testing-the-model)

##

### Directory tree

#### 1. Download the repo

```
git clone https://github.com/marcoslucianops/DeepStream-Yolo.git
cd DeepStream-Yolo
```

#### 2. Copy the class names file to the DeepStream-Yolo folder and rename it to `labels.txt`
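The `labels.txt` file is expected to hold one class name per line. A quick sanity check, using placeholder class names, is to count the lines:

```shell
# Write a sample labels.txt (one class name per line) and count the classes.
# The three names here stand in for your model's real class names.
printf 'person\ncar\ntruck\n' > labels.txt
num_classes=$(wc -l < labels.txt)
echo "classes in labels.txt: $num_classes"
```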

#### 3. Copy the `cfg` and `weights`/`wts` files to the DeepStream-Yolo folder

**NOTE**: It's important to keep the YOLO model reference (`yolov4_`, `yolov5_`, `yolor_`, etc) in your `cfg` and `weights`/`wts` filenames to generate the engine correctly.

##

### Compile the lib

* DeepStream 6.1 on x86 platform

```
CUDA_VER=11.6 make -C nvdsinfer_custom_impl_Yolo
```

* DeepStream 6.0.1 / 6.0 on x86 platform

```
CUDA_VER=11.4 make -C nvdsinfer_custom_impl_Yolo
```

* DeepStream 6.1 on Jetson platform

```
CUDA_VER=11.4 make -C nvdsinfer_custom_impl_Yolo
```

* DeepStream 6.0.1 / 6.0 on Jetson platform

```
CUDA_VER=10.2 make -C nvdsinfer_custom_impl_Yolo
```
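The `CUDA_VER` values above follow the CUDA toolkit shipped with each DeepStream release. As a sketch, the four cases can be captured in a small helper (the function name is our own, not part of the repo):

```shell
# Map DeepStream release + platform to the CUDA_VER used by the Makefile,
# mirroring the four cases listed above.
deepstream_cuda_ver() {
  case "$1:$2" in
    6.1:x86)     echo 11.6 ;;
    6.1:jetson)  echo 11.4 ;;
    6.0*:x86)    echo 11.4 ;;
    6.0*:jetson) echo 10.2 ;;
  esac
}
CUDA_VER=$(deepstream_cuda_ver 6.1 x86)
echo "CUDA_VER=$CUDA_VER make -C nvdsinfer_custom_impl_Yolo"
```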

##

### Understanding and editing deepstream_app_config file

To understand and edit the `deepstream_app_config.txt` file, read the [DeepStream Reference Application - Configuration Groups](https://docs.nvidia.com/metropolis/deepstream/dev-guide/text/DS_ref_app_deepstream.html#configuration-groups)

* tiled-display

```
[tiled-display]
enable=1
# If you have 1 stream use 1/1 (rows/columns); if you have 4 streams use 2/2, 4/1 or 1/4 (rows/columns)
rows=1
columns=1
# Resolution of the tiled display
width=1280
height=720
gpu-id=0
nvbuf-memory-type=0
```
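The comment above generalizes: pick any grid with rows × columns ≥ number of streams. A throwaway helper (our own, not part of DeepStream) that picks a near-square grid:

```shell
# Choose a near-square tiled-display grid for n streams (rows*columns >= n).
tiles_for() {
  n=$1
  rows=1
  while [ $((rows * rows)) -lt "$n" ]; do rows=$((rows + 1)); done
  columns=$(( (n + rows - 1) / rows ))
  echo "rows=$rows columns=$columns"
}
tiles_for 4
```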

* source

* Example for 1 source:

```
[source0]
enable=1
# 1=Camera (V4L2), 2=URI, 3=MultiURI, 4=RTSP, 5=Camera (CSI; Jetson only)
type=3
# Stream URL
uri=rtsp://192.168.1.2/Streaming/Channels/101/httppreview
# Number of source copies (if > 1, edit rows/columns in the tiled-display section; use type=3 for more than 1 source)
num-sources=1
gpu-id=0
cudadec-memtype=0
```

* Example for 1 duplicated source:

```
[source0]
enable=1
type=3
uri=rtsp://192.168.1.2/Streaming/Channels/101/
num-sources=2
gpu-id=0
cudadec-memtype=0
```

* Example for 2 sources:

```
[source0]
enable=1
type=3
uri=rtsp://192.168.1.2/Streaming/Channels/101/
num-sources=1
gpu-id=0
cudadec-memtype=0

[source1]
enable=1
type=3
uri=rtsp://192.168.1.3/Streaming/Channels/101/
num-sources=1
gpu-id=0
cudadec-memtype=0
```

* sink

```
[sink0]
enable=1
# 1=Fakesink, 2=EGL (nveglglessink), 3=Filesink, 4=RTSP, 5=Overlay (Jetson only)
type=2
# Indicates how fast the stream is to be rendered (0=As fast as possible, 1=Synchronously)
sync=0
gpu-id=0
nvbuf-memory-type=0
```

* streammux

```
[streammux]
gpu-id=0
# Boolean property to inform the muxer that the sources are live
live-source=1
batch-size=1
batched-push-timeout=40000
# Resolution of the streammux output
width=1920
height=1080
enable-padding=0
nvbuf-memory-type=0
```
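A common pitfall is leaving the streammux `batch-size` out of sync with the total number of sources (the sum of `num-sources` over all `[sourceN]` groups). A sketch that computes it from a config fragment (the here-doc stands in for your real `deepstream_app_config.txt`):

```shell
# Sum num-sources across all source groups to get the streammux batch-size.
cat > /tmp/app_config_sample.txt <<'EOF'
[source0]
num-sources=2
[source1]
num-sources=1
EOF
total=$(awk -F= '/^num-sources=/ { s += $2 } END { print s }' /tmp/app_config_sample.txt)
echo "batch-size=$total"
```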

* primary-gie

```
[primary-gie]
enable=1
gpu-id=0
gie-unique-id=1
nvbuf-memory-type=0
config-file=config_infer_primary.txt
```

**NOTE**: Edit the `config-file` according to your YOLO model.

##

### Understanding and editing config_infer_primary file

To understand and edit the `config_infer_primary.txt` file, read the [DeepStream Plugin Guide - Gst-nvinfer File Configuration Specifications](https://docs.nvidia.com/metropolis/deepstream/dev-guide/text/DS_plugin_gst-nvinfer.html#gst-nvinfer-file-configuration-specifications)

* model-color-format

```
# 0=RGB, 1=BGR, 2=GRAYSCALE
model-color-format=0
```

**NOTE**: Set it according to the number of channels in the `cfg` file (1=GRAYSCALE, 3=RGB).

* custom-network-config

* Example for a custom YOLOv4 model

```
custom-network-config=yolov4_custom.cfg
```

* model-file

* Example for a custom YOLOv4 model

```
model-file=yolov4_custom.weights
```

* model-engine-file

* Example for `batch-size=1` and `network-mode=2` (FP16)

```
model-engine-file=model_b1_gpu0_fp16.engine
```

* Example for `batch-size=1` and `network-mode=1` (INT8)

```
model-engine-file=model_b1_gpu0_int8.engine
```

* Example for `batch-size=1` and `network-mode=0` (FP32)

```
model-engine-file=model_b1_gpu0_fp32.engine
```

* Example for `batch-size=2` and `network-mode=0` (FP32)

```
model-engine-file=model_b2_gpu0_fp32.engine
```

**NOTE**: To change the generated engine filename, you need to edit and rebuild the `nvdsinfer_model_builder.cpp` file (`/opt/nvidia/deepstream/deepstream/sources/libs/nvdsinfer/nvdsinfer_model_builder.cpp`, lines 825-827)

```
suggestedPathName =
    modelPath + "_b" + std::to_string(initParams.maxBatchSize) + "_" +
    devId + "_" + networkMode2Str(networkMode) + ".engine";
```
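The snippet above means the engine filename is just the model path plus batch size, GPU id, and precision. A shell rendering of the same concatenation (the helper name is our own):

```shell
# Reproduce the engine naming scheme from nvdsinfer_model_builder.cpp:
# modelPath + "_b" + maxBatchSize + "_" + devId + "_" + networkMode + ".engine"
engine_name() {
  model_path=$1; max_batch=$2; gpu=$3; mode=$4  # mode: fp32, int8 or fp16
  echo "${model_path}_b${max_batch}_gpu${gpu}_${mode}.engine"
}
engine_name model 1 0 fp16
```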

* batch-size

```
batch-size=1
```

* network-mode

```
# 0=FP32, 1=INT8, 2=FP16
network-mode=0
```

* num-detected-classes

```
num-detected-classes=80
```

**NOTE**: Set it according to the number of classes in the `cfg` file.
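To keep `num-detected-classes` in sync with the model, you can read the `classes` value straight from the `cfg` (the here-doc is a stand-in for your real file):

```shell
# Extract the classes value from a Darknet-style cfg file.
cat > /tmp/yolo_sample.cfg <<'EOF'
[yolo]
classes=80
EOF
classes=$(sed -n 's/^classes=//p' /tmp/yolo_sample.cfg | head -n 1)
echo "num-detected-classes=$classes"
```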

* interval

```
# Number of consecutive batches to be skipped for inference
interval=0
```

##

### Testing the model

```
deepstream-app -c deepstream_app_config.txt
```

# How to use multiple YOLO GIEs on DeepStream

**NOTE**: The `deepstream-app` does not support multiple primary GIEs. You can use only one YOLO model as the primary GIE and the other YOLO models as secondary GIEs (inferring on the objects detected by the primary GIE). To use 2 or more YOLO models as primary GIEs, you need to write custom code.

* [Directory tree](#directory-tree)
* [Change the YoloLayer plugin version](#change-the-yololayer-plugin-version)
* [Compile the libs](#compile-the-libs)
* [Edit the config_infer_primary files](#edit-the-config_infer_primary-files)
* [Edit the deepstream_app_config file](#edit-the-deepstream_app_config-file)
* [Test](#test)

##

### Directory tree

#### 1. Download the repo

```
git clone https://github.com/marcoslucianops/DeepStream-Yolo.git
cd DeepStream-Yolo
```

#### 2. Create folders for the GIEs and copy the DeepStream-Yolo files to them

![Directory tree](multipleGIEs_tree.png)

#### 3. Copy the class names file to each GIE folder and rename it to `labels.txt`

#### 4. Copy the `cfg` and `weights`/`wts` files to each GIE folder

**NOTE**: It's important to keep the YOLO model reference (`yolov4_`, `yolov5_`, `yolor_`, etc) in your `cfg` and `weights`/`wts` filenames to generate the engine correctly.

##

### Change the YoloLayer plugin version

Edit the `yoloPlugins.h` file (line 53) in each GIE's `nvdsinfer_custom_impl_Yolo` folder, changing:

```
const char* YOLOLAYER_PLUGIN_VERSION {"1"};
```

To:

```
const char* YOLOLAYER_PLUGIN_VERSION {"2"};
```

**NOTE**: `gie2`: version = 2 / `gie3`: version = 3 / `gie4`: version = 4.
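Under the layout above, the edit can be scripted with `sed`; in this sketch the first loop only fabricates minimal `yoloPlugins.h` files so the example runs end-to-end (in the real repo they already exist):

```shell
# Bump YOLOLAYER_PLUGIN_VERSION to match each GIE folder number.
for i in 2 3 4; do
  mkdir -p "gie$i/nvdsinfer_custom_impl_Yolo"
  echo 'const char* YOLOLAYER_PLUGIN_VERSION {"1"};' \
    > "gie$i/nvdsinfer_custom_impl_Yolo/yoloPlugins.h"
done
for i in 2 3 4; do
  sed -i "s/{\"1\"}/{\"$i\"}/" "gie$i/nvdsinfer_custom_impl_Yolo/yoloPlugins.h"
done
cat gie3/nvdsinfer_custom_impl_Yolo/yoloPlugins.h
```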

##

### Compile the libs

**NOTE**: Do it for each GIE folder, replacing the GIE folder name (`gie1/nvdsinfer_custom_impl_Yolo`).

* DeepStream 6.1 on x86 platform

```
CUDA_VER=11.6 make -C gie1/nvdsinfer_custom_impl_Yolo
```

* DeepStream 6.0.1 / 6.0 on x86 platform

```
CUDA_VER=11.4 make -C gie1/nvdsinfer_custom_impl_Yolo
```

* DeepStream 6.1 on Jetson platform

```
CUDA_VER=11.4 make -C gie1/nvdsinfer_custom_impl_Yolo
```

* DeepStream 6.0.1 / 6.0 on Jetson platform

```
CUDA_VER=10.2 make -C gie1/nvdsinfer_custom_impl_Yolo
```
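Since the same make invocation repeats per GIE folder, a loop that prints the commands for your platform (DeepStream 6.1 on x86 shown) may help:

```shell
# Print the compile command for each GIE folder (run them, or pipe to sh).
cmds=$(for gie in gie1 gie2 gie3 gie4; do
  echo "CUDA_VER=11.6 make -C $gie/nvdsinfer_custom_impl_Yolo"
done)
echo "$cmds"
```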

##

### Edit the config_infer_primary files

**NOTE**: Edit the files according to the model you will use (YOLOv4, YOLOv5, YOLOR, etc).

**NOTE**: Do it for each GIE folder.

* Edit the path of the `cfg` file

Example for gie1

```
custom-network-config=gie1/yolo.cfg
```

Example for gie2

```
custom-network-config=gie2/yolo.cfg
```

* Edit the gie-unique-id

Example for gie1

```
gie-unique-id=1
```

Example for gie2

```
gie-unique-id=2
```

* Edit the process-mode

Example for the primary inference engine

```
process-mode=1
```

Example for a secondary inference engine (inferring on the objects detected by the primary GIE)

```
process-mode=2
```

**NOTE**: For a secondary GIE, you need to set which GIE it will operate on. Add

```
operate-on-gie-id=1
```

To operate only on specific class ids, add

```
operate-on-class-ids=0;1;2
```

* Edit the batch-size

Example for the primary inference engine

```
batch-size=1
```

Example for a secondary inference engine (inferring on the objects detected by the primary GIE)

```
batch-size=16
```

##

### Edit the deepstream_app_config file

**NOTE**: Add the `secondary-gie` key after the `primary-gie` key.

Example for 1 `secondary-gie` (2 inferences):

```
[secondary-gie0]
enable=1
gpu-id=0
gie-unique-id=2
operate-on-gie-id=1
operate-on-class-ids=0
nvbuf-memory-type=0
config-file=gie2/config_infer_primary.txt
```

Example for 2 `secondary-gie` (3 inferences):

```
[secondary-gie0]
enable=1
gpu-id=0
gie-unique-id=2
operate-on-gie-id=1
operate-on-class-ids=0
nvbuf-memory-type=0
config-file=gie2/config_infer_primary.txt

[secondary-gie1]
enable=1
gpu-id=0
gie-unique-id=3
operate-on-gie-id=1
operate-on-class-ids=0
nvbuf-memory-type=0
config-file=gie3/config_infer_primary.txt
```

**NOTE**: Remember to edit the `primary-gie` key from

```
[primary-gie]
enable=1
gpu-id=0
gie-unique-id=1
nvbuf-memory-type=0
config-file=config_infer_primary.txt
```

To

```
[primary-gie]
enable=1
gpu-id=0
gie-unique-id=1
nvbuf-memory-type=0
config-file=gie1/config_infer_primary.txt
```

##

### Test

```
deepstream-app -c deepstream_app_config.txt
```

**NOTE**: During the test, the engine files are generated in the DeepStream-Yolo folder. Once the build process is done, move each engine file to its respective GIE folder (`gie1`, `gie2`, etc).
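Following the batch sizes used earlier (1 for the primary GIE, 16 for a secondary GIE), moving the generated engines might look like the sketch below; the `touch` lines only fabricate engine files so the example runs, and the engine-to-folder mapping is an assumption for illustration:

```shell
# Move each generated engine into its GIE folder after the build.
mkdir -p gie1 gie2
touch model_b1_gpu0_fp32.engine model_b16_gpu0_fp32.engine
mv model_b1_gpu0_fp32.engine gie1/
mv model_b16_gpu0_fp32.engine gie2/
ls gie1/ gie2/
```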