New updates

* readme.md updated
* Added customModels.md
* Added multipleInferences.md
Marcos Luciano
2020-12-21 11:49:09 -03:00
parent c34158664c
commit 03740debe9
8 changed files with 803 additions and 36 deletions

customModels.md Normal file

@@ -0,0 +1,313 @@
# Editing the default model to use your custom model
How to edit the DeepStream files for your custom model
##
* [Requirements](#requirements)
* [Editing default model](#editing-default-model)
* [Compiling edited model](#compiling-edited-model)
* [Understanding and editing deepstream_app_config](#understanding-and-editing-deepstream_app_config)
* [Understanding and editing config_infer_primary](#understanding-and-editing-config_infer_primary)
* [Testing model](#testing-model)
* [Custom functions in your model](#custom-functions-in-your-model)
##
### Requirements
* [NVIDIA DeepStream SDK 5.0.1](https://developer.nvidia.com/deepstream-sdk)
* [DeepStream-Yolo Native](https://github.com/marcoslucianops/DeepStream-Yolo/tree/master/native)
* [Pre-trained YOLO model](https://github.com/AlexeyAB/darknet)
##
### Editing default model
1. Download [my native folder](https://github.com/marcoslucianops/DeepStream-Yolo/tree/master/native), rename it to yolo and move it to your deepstream/sources folder.
2. Copy your obj.names file to the deepstream/sources/yolo directory and rename it to labels.txt.
3. Copy your yolo.cfg and yolo.weights files to the deepstream/sources/yolo directory (a shell sketch of steps 1-3 follows the notes below).
4. Edit config_infer_primary.txt for your model (example for YOLOv4):
```
[property]
...
# CFG
custom-network-config=yolo.cfg
# Weights
model-file=yolo.weights
# Model labels file
labelfile-path=labels.txt
...
```
Note: if you want to use YOLOv2 or YOLOv2-Tiny models, change deepstream_app_config.txt to point the primary GIE at config_infer_primary_yoloV2.txt
```
[primary-gie]
enable=1
gpu-id=0
gie-unique-id=1
nvbuf-memory-type=0
config-file=config_infer_primary_yoloV2.txt
```
Note: config_infer_primary.txt uses cluster-mode=4 with an NMS threshold of 0.45 set in code when beta_nms isn't available (when beta_nms is available, NMS = beta_nms), while config_infer_primary_yoloV2.txt uses cluster-mode=2 and nms-iou-threshold=0.45 to set NMS.
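For reference, steps 1-3 can be done from a terminal; a sketch assuming DeepStream is installed under /opt/nvidia/deepstream/deepstream-5.0 and your Darknet files live in ~/darknet (adjust both paths to your setup):
```
# Get the native folder and rename it to yolo (step 1)
cd /opt/nvidia/deepstream/deepstream-5.0/sources
git clone https://github.com/marcoslucianops/DeepStream-Yolo.git
cp -r DeepStream-Yolo/native yolo
cd yolo
# labels.txt, cfg and weights (steps 2 and 3)
cp ~/darknet/obj.names labels.txt
cp ~/darknet/yolo.cfg ~/darknet/yolo.weights .
```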
##
### Compiling edited model
1. Check your CUDA version (nvcc --version).
2. Go to the deepstream/sources/yolo directory.
3. Run the compile command (example for CUDA 10.2):
```
CUDA_VER=10.2 make -C nvdsinfer_custom_impl_Yolo
```
##
### Understanding and editing deepstream_app_config
To understand and edit the deepstream_app_config.txt file, read the [DeepStream SDK Development Guide - Configuration Groups](https://docs.nvidia.com/metropolis/deepstream/dev-guide/text/DS_ref_app_deepstream.html#configuration-groups)
##
* Edit tiled-display
```
[tiled-display]
enable=1
# If you have 1 stream, use 1/1 (rows/columns); if you have 4 streams, use 2/2, 4/1 or 1/4 (rows/columns)
rows=1
columns=1
# Resolution of tiled display
width=1280
height=720
gpu-id=0
nvbuf-memory-type=0
```
##
* Edit source
Example for 1 source:
```
[source0]
enable=1
# 1=Camera (V4L2), 2=URI, 3=MultiURI, 4=RTSP, 5=Camera (CSI; Jetson only)
type=3
# Stream URL
uri=rtsp://192.168.1.2/Streaming/Channels/101/httppreview
# Number of copies of this source (if > 1, edit rows/columns in [tiled-display], batch-size in [streammux] and in config_infer_primary.txt; type=3 is required for more than 1 copy)
num-sources=1
gpu-id=0
cudadec-memtype=0
```
Example for 1 duplicated source:
```
[source0]
enable=1
type=3
uri=rtsp://192.168.1.2/Streaming/Channels/101/httppreview
num-sources=2
gpu-id=0
cudadec-memtype=0
```
Example for 2 sources:
```
[source0]
enable=1
type=3
uri=rtsp://192.168.1.2/Streaming/Channels/101/httppreview
num-sources=1
gpu-id=0
cudadec-memtype=0
[source1]
enable=1
type=3
uri=rtsp://192.168.1.3/Streaming/Channels/101/httppreview
num-sources=1
gpu-id=0
cudadec-memtype=0
```
##
* Edit sink
Example for 1 source or 1 duplicated source:
```
[sink0]
enable=1
# 1=Fakesink, 2=EGL (nveglglessink), 3=Filesink, 4=RTSP, 5=Overlay (Jetson only)
type=2
# Indicates how fast the stream is to be rendered (0=As fast as possible, 1=Synchronously)
sync=0
# The ID of the source whose buffers this sink must use
source-id=0
gpu-id=0
nvbuf-memory-type=0
```
Example for 2 sources:
```
[sink0]
enable=1
type=2
sync=0
source-id=0
gpu-id=0
nvbuf-memory-type=0
[sink1]
enable=1
type=2
sync=0
source-id=1
gpu-id=0
nvbuf-memory-type=0
```
##
* Edit streammux
Example for 1 source:
```
[streammux]
gpu-id=0
# Boolean property to inform muxer that sources are live
live-source=1
# Number of sources
batch-size=1
# Timeout in usec to wait, after the first buffer is available, to push the batch even if the complete batch is not formed
batched-push-timeout=40000
# Resolution of streammux
width=1920
height=1080
enable-padding=0
nvbuf-memory-type=0
```
Example for 1 duplicated source or 2 sources:
```
[streammux]
gpu-id=0
live-source=0
batch-size=2
batched-push-timeout=40000
width=1920
height=1080
enable-padding=0
nvbuf-memory-type=0
```
##
* Edit primary-gie
```
[primary-gie]
enable=1
gpu-id=0
gie-unique-id=1
nvbuf-memory-type=0
config-file=config_infer_primary.txt
```
* You can remove the [tracker] section if you don't use it.
##
### Understanding and editing config_infer_primary
To understand and edit the config_infer_primary.txt file, read the [NVIDIA DeepStream Plugin Manual - Gst-nvinfer File Configuration Specifications](https://docs.nvidia.com/metropolis/deepstream/dev-guide/text/DS_plugin_gst-nvinfer.html#gst-nvinfer-file-configuration-specifications)
##
* Edit model-color-format according to the number of channels in yolo.cfg (channels=3 → 0 RGB, channels=1 → 2 GRAYSCALE)
```
# 0=RGB, 1=BGR, 2=GRAYSCALE
model-color-format=0
```
##
* Edit model-engine-file (example for batch-size=1 and network-mode=2)
```
model-engine-file=model_b1_gpu0_fp16.engine
```
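DeepStream names generated engines model_b&lt;batch-size&gt;_gpu&lt;gpu-id&gt;_&lt;precision&gt;.engine, so this value must match your batch-size and network-mode settings; for example, batch-size=2 with network-mode=0 (FP32) would give:
```
model-engine-file=model_b2_gpu0_fp32.engine
```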
##
* Edit batch-size
```
# Number of sources
batch-size=1
```
##
* Edit network-mode
```
# 0=FP32, 1=INT8, 2=FP16
network-mode=0
```
##
* Edit num-detected-classes according to the number of classes (the classes= value) in yolo.cfg
```
num-detected-classes=80
```
##
* Edit network-type
```
# 0=Detector, 1=Classifier, 2=Segmentation
network-type=0
```
##
* Add/edit interval (FPS increases if > 0, since detection runs on fewer frames)
```
# Number of consecutive batches skipped between detections
interval=0
```
##
* Change pre-cluster-threshold (optional)
```
[class-attrs-all]
# Detection confidence threshold (CONF_THRESH)
pre-cluster-threshold=0.25
```
##
### Testing model
To run your custom YOLO model, use the command
```
deepstream-app -c deepstream_app_config.txt
```
##
### Custom functions in your model
You can get metadata from DeepStream in Python and C++. For C++, you need to edit the deepstream-app or deepstream-test code. For Python, you need to install and edit [deepstream_python_apps](https://github.com/NVIDIA-AI-IOT/deepstream_python_apps).
You need to manipulate NvDsObjectMeta ([Python](https://docs.nvidia.com/metropolis/deepstream/python-api/PYTHON_API/NvDsMeta/NvDsObjectMeta.html) / [C++](https://docs.nvidia.com/metropolis/deepstream/sdk-api/Meta/_NvDsObjectMeta.html)), NvDsFrameMeta ([Python](https://docs.nvidia.com/metropolis/deepstream/python-api/PYTHON_API/NvDsMeta/NvDsFrameMeta.html) / [C++](https://docs.nvidia.com/metropolis/deepstream/sdk-api/Meta/_NvDsFrameMeta.html)) and NvOSD_RectParams ([Python](https://docs.nvidia.com/metropolis/deepstream/python-api/PYTHON_API/NvDsOSD/NvOSD_RectParams.html) / [C++](https://docs.nvidia.com/metropolis/deepstream/sdk-api/OSD/Data_Structures/_NvOSD_FrameRectParams.html)) to get the label, position, etc. of bounding boxes.
In the C++ deepstream-app application, your code needs to be in the analytics_done_buf_prob function.
In the C++/Python deepstream-test applications, your code needs to be in the osd_sink_pad_buffer_probe/tiler_src_pad_buffer_probe function.
Python is slightly slower than C++ (about 5-10%).
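As a minimal sketch of such a probe in Python (assuming the pyds bindings from deepstream_python_apps are installed; the structure follows that repository's sample apps):
```
import gi
gi.require_version('Gst', '1.0')
from gi.repository import Gst
import pyds

def osd_sink_pad_buffer_probe(pad, info, u_data):
    # Retrieve the batch metadata attached to the GStreamer buffer
    batch_meta = pyds.gst_buffer_get_batch_meta(hash(info.get_buffer()))
    l_frame = batch_meta.frame_meta_list
    while l_frame is not None:
        frame_meta = pyds.NvDsFrameMeta.cast(l_frame.data)   # per-frame metadata
        l_obj = frame_meta.obj_meta_list
        while l_obj is not None:
            obj_meta = pyds.NvDsObjectMeta.cast(l_obj.data)  # per-detection metadata
            rect = obj_meta.rect_params                      # NvOSD_RectParams: bbox geometry
            print(frame_meta.frame_num, obj_meta.obj_label,
                  rect.left, rect.top, rect.width, rect.height)
            try:
                l_obj = l_obj.next
            except StopIteration:
                break
        try:
            l_frame = l_frame.next
        except StopIteration:
            break
    return Gst.PadProbeReturn.OK
```
Attach it with something like osdsinkpad.add_probe(Gst.PadProbeType.BUFFER, osd_sink_pad_buffer_probe, 0), as the deepstream-test samples do.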

deepstream_app_config.txt Normal file

@@ -0,0 +1,72 @@
[application]
enable-perf-measurement=1
perf-measurement-interval-sec=5
[tiled-display]
enable=1
rows=1
columns=1
width=1280
height=720
gpu-id=0
nvbuf-memory-type=0
[source0]
enable=1
type=3
uri=rtsp://192.168.1.2/Streaming/Channels/101/httppreview
num-sources=1
gpu-id=0
cudadec-memtype=0
[sink0]
enable=1
type=2
sync=0
source-id=0
gpu-id=0
nvbuf-memory-type=0
[osd]
enable=1
gpu-id=0
border-width=1
text-size=15
text-color=1;1;1;1;
text-bg-color=0.3;0.3;0.3;1
font=Serif
show-clock=0
clock-x-offset=800
clock-y-offset=820
clock-text-size=12
clock-color=1;0;0;0
nvbuf-memory-type=0
[streammux]
gpu-id=0
live-source=0
batch-size=1
batched-push-timeout=40000
width=1920
height=1080
enable-padding=0
nvbuf-memory-type=0
[primary-gie]
enable=1
gpu-id=0
gie-unique-id=1
nvbuf-memory-type=0
config-file=pgie/config_infer_primary.txt
[secondary-gie0]
enable=1
gpu-id=0
gie-unique-id=2
#operate-on-gie-id=1
#operate-on-class-ids=0
nvbuf-memory-type=0
config-file=sgie1/config_infer_secondary1.txt
[tests]
file-loop=0

config_infer_primary.txt Normal file

@@ -0,0 +1,23 @@
[property]
gpu-id=0
net-scale-factor=0.0039215697906911373
model-color-format=0
custom-network-config=yolo.cfg
model-file=yolo.weights
model-engine-file=model_b1_gpu0_fp16.engine
labelfile-path=labels.txt
batch-size=1
network-mode=2
num-detected-classes=2
interval=0
gie-unique-id=1
process-mode=1
network-type=0
cluster-mode=4
maintain-aspect-ratio=0
parse-bbox-func-name=NvDsInferParseYolo
custom-lib-path=nvdsinfer_custom_impl_Yolo/libnvdsinfer_custom_impl_Yolo.so
engine-create-func-name=NvDsInferYoloCudaEngineGet
[class-attrs-all]
pre-cluster-threshold=0.25

nvdsinfer_custom_impl_Yolo/Makefile Normal file

@@ -0,0 +1,71 @@
################################################################################
# Copyright (c) 2019, NVIDIA CORPORATION. All rights reserved.
#
# Permission is hereby granted, free of charge, to any person obtaining a
# copy of this software and associated documentation files (the "Software"),
# to deal in the Software without restriction, including without limitation
# the rights to use, copy, modify, merge, publish, distribute, sublicense,
# and/or sell copies of the Software, and to permit persons to whom the
# Software is furnished to do so, subject to the following conditions:
#
# The above copyright notice and this permission notice shall be included in
# all copies or substantial portions of the Software.
#
# THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR
# IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY,
# FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL
# THE AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER
# LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING
# FROM, OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER
# DEALINGS IN THE SOFTWARE.
#
# Edited by Marcos Luciano
# https://www.github.com/marcoslucianops
################################################################################
CUDA_VER?=
ifeq ($(CUDA_VER),)
$(error "CUDA_VER is not set")
endif
CC:= g++
NVCC:=/usr/local/cuda-$(CUDA_VER)/bin/nvcc
CFLAGS:= -Wall -std=c++11 -shared -fPIC -Wno-error=deprecated-declarations
CFLAGS+= -I../../../includes -I/usr/local/cuda-$(CUDA_VER)/include
LIBS:= -lnvinfer_plugin -lnvinfer -lnvparsers -L/usr/local/cuda-$(CUDA_VER)/lib64 -lcudart -lcublas -lstdc++fs
LFLAGS:= -shared -Wl,--start-group $(LIBS) -Wl,--end-group
INCS:= $(wildcard *.h)
SRCFILES:= nvdsinfer_yolo_engine.cpp \
nvdsparsebbox_Yolo.cpp \
yoloPlugins.cpp \
layers/convolutional_layer.cpp \
layers/dropout_layer.cpp \
layers/shortcut_layer.cpp \
layers/route_layer.cpp \
layers/upsample_layer.cpp \
layers/maxpool_layer.cpp \
layers/activation_layer.cpp \
utils.cpp \
yolo.cpp \
yoloForward.cu
TARGET_LIB:= libnvdsinfer_custom_impl_Yolo.so
TARGET_OBJS:= $(SRCFILES:.cpp=.o)
TARGET_OBJS:= $(TARGET_OBJS:.cu=.o)
all: $(TARGET_LIB)

%.o: %.cpp $(INCS) Makefile
	$(CC) -c -o $@ $(CFLAGS) $<

%.o: %.cu $(INCS) Makefile
	$(NVCC) -c -o $@ --compiler-options '-fPIC' $<

$(TARGET_LIB) : $(TARGET_OBJS)
	$(CC) -o $@ $(TARGET_OBJS) $(LFLAGS)

clean:
	rm -rf $(TARGET_LIB)
	rm -rf $(TARGET_OBJS)

config_infer_secondary1.txt Normal file

@@ -0,0 +1,25 @@
[property]
gpu-id=0
net-scale-factor=0.0039215697906911373
model-color-format=0
custom-network-config=yolo.cfg
model-file=yolo.weights
model-engine-file=model_b1_gpu0_fp16.engine
labelfile-path=labels.txt
batch-size=16
network-mode=2
num-detected-classes=10
interval=0
gie-unique-id=2
process-mode=2
#operate-on-gie-id=1
#operate-on-class-ids=0
network-type=0
cluster-mode=4
maintain-aspect-ratio=0
parse-bbox-func-name=NvDsInferParseYolo
custom-lib-path=nvdsinfer_custom_impl_Yolo/libnvdsinfer_custom_impl_Yolo.so
engine-create-func-name=NvDsInferYoloCudaEngineGet
[class-attrs-all]
pre-cluster-threshold=0.25

nvdsinfer_custom_impl_Yolo/Makefile Normal file

@@ -0,0 +1,71 @@
################################################################################
# Copyright (c) 2019, NVIDIA CORPORATION. All rights reserved.
#
# Permission is hereby granted, free of charge, to any person obtaining a
# copy of this software and associated documentation files (the "Software"),
# to deal in the Software without restriction, including without limitation
# the rights to use, copy, modify, merge, publish, distribute, sublicense,
# and/or sell copies of the Software, and to permit persons to whom the
# Software is furnished to do so, subject to the following conditions:
#
# The above copyright notice and this permission notice shall be included in
# all copies or substantial portions of the Software.
#
# THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR
# IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY,
# FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL
# THE AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER
# LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING
# FROM, OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER
# DEALINGS IN THE SOFTWARE.
#
# Edited by Marcos Luciano
# https://www.github.com/marcoslucianops
################################################################################
CUDA_VER?=
ifeq ($(CUDA_VER),)
$(error "CUDA_VER is not set")
endif
CC:= g++
NVCC:=/usr/local/cuda-$(CUDA_VER)/bin/nvcc
CFLAGS:= -Wall -std=c++11 -shared -fPIC -Wno-error=deprecated-declarations
CFLAGS+= -I../../../includes -I/usr/local/cuda-$(CUDA_VER)/include
LIBS:= -lnvinfer_plugin -lnvinfer -lnvparsers -L/usr/local/cuda-$(CUDA_VER)/lib64 -lcudart -lcublas -lstdc++fs
LFLAGS:= -shared -Wl,--start-group $(LIBS) -Wl,--end-group
INCS:= $(wildcard *.h)
SRCFILES:= nvdsinfer_yolo_engine.cpp \
nvdsparsebbox_Yolo.cpp \
yoloPlugins.cpp \
layers/convolutional_layer.cpp \
layers/dropout_layer.cpp \
layers/shortcut_layer.cpp \
layers/route_layer.cpp \
layers/upsample_layer.cpp \
layers/maxpool_layer.cpp \
layers/activation_layer.cpp \
utils.cpp \
yolo.cpp \
yoloForward.cu
TARGET_LIB:= libnvdsinfer_custom_impl_Yolo.so
TARGET_OBJS:= $(SRCFILES:.cpp=.o)
TARGET_OBJS:= $(TARGET_OBJS:.cu=.o)
all: $(TARGET_LIB)

%.o: %.cpp $(INCS) Makefile
	$(CC) -c -o $@ $(CFLAGS) $<

%.o: %.cu $(INCS) Makefile
	$(NVCC) -c -o $@ --compiler-options '-fPIC' $<

$(TARGET_LIB) : $(TARGET_OBJS)
	$(CC) -o $@ $(TARGET_OBJS) $(LFLAGS)

clean:
	rm -rf $(TARGET_LIB)
	rm -rf $(TARGET_OBJS)

multipleInferences.md Normal file

@@ -0,0 +1,185 @@
# Multiple YOLO inferences
How to use multiple GIEs in DeepStream
1. Download [my native folder](https://github.com/marcoslucianops/DeepStream-Yolo/tree/master/native), rename it to yolo and move it to your deepstream/sources folder.
2. In the deepstream/sources/yolo directory, make a folder named pgie (it will hold the files of the primary inference).
3. In the deepstream/sources/yolo directory, make a folder for each secondary inference, named sgie* (* = 1, 2, 3, etc., depending on the number of secondary inferences; they will hold the files of the other inferences).
4. Copy each obj.names file to its inference directory (pgie, sgie*), according to each inference type, and rename it to labels.txt.
5. Copy your yolo.cfg and yolo.weights files to each inference directory (pgie, sgie*), according to each inference type.
6. Copy the nvdsinfer_custom_impl_Yolo folder and the config_infer_primary.txt file into each inference directory (pgie, sgie*); for the sgies, rename config_infer_primary.txt to config_infer_secondary*.txt (* = 1, 2, 3, etc.).
7. Edit DeepStream for your custom model, according to each yolo.cfg file: https://github.com/marcoslucianops/DeepStream-Yolo/blob/master/customModels.md

**The example folder in this repository contains all the example files for multiple YOLO inferences.**
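After these steps, the layout for one secondary inference should look like this:
```
deepstream/sources/yolo/
├── deepstream_app_config.txt
├── pgie/
│   ├── config_infer_primary.txt
│   ├── labels.txt
│   ├── yolo.cfg
│   ├── yolo.weights
│   └── nvdsinfer_custom_impl_Yolo/
└── sgie1/
    ├── config_infer_secondary1.txt
    ├── labels.txt
    ├── yolo.cfg
    ├── yolo.weights
    └── nvdsinfer_custom_impl_Yolo/
```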
##
### Editing Makefile
To compile nvdsinfer_custom_impl_Yolo without errors, you need to edit line 34 of the Makefile in the nvdsinfer_custom_impl_Yolo folder of each inference directory, changing
```
CFLAGS+= -I../../includes -I/usr/local/cuda-$(CUDA_VER)/include
```
To:
```
CFLAGS+= -I../../../includes -I/usr/local/cuda-$(CUDA_VER)/include
```
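If you prefer, the same edit can be applied from the deepstream/sources/yolo directory with a one-liner per inference directory (pgie shown; repeat for each sgie*):
```
sed -i 's|-I\.\./\.\./includes|-I../../../includes|' pgie/nvdsinfer_custom_impl_Yolo/Makefile
```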
##
### Compiling edited models
1. Check your CUDA version (nvcc --version).
2. Go to each inference directory (pgie, sgie*).
3. Run the compile command (example for CUDA 10.2):
```
CUDA_VER=10.2 make -C nvdsinfer_custom_impl_Yolo
```
**Do this for each GIE!**
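Equivalently, you can compile all GIEs from the deepstream/sources/yolo directory (example for CUDA 10.2 and one secondary inference):
```
CUDA_VER=10.2 make -C pgie/nvdsinfer_custom_impl_Yolo
CUDA_VER=10.2 make -C sgie1/nvdsinfer_custom_impl_Yolo
```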
##
### Add secondary-gie to deepstream_app_config after primary-gie
Example for 1 secondary-gie (2 inferences):
```
[secondary-gie0]
enable=1
gpu-id=0
gie-unique-id=2
# Make this secondary inference operate on the output of a specific GIE (set to the gie-unique-id you want to operate on: 1, 2, etc.; comment it out if you don't want to use it)
operate-on-gie-id=1
# Make this secondary inference operate only on specific class ids of that GIE (e.g. 0, 1;2, 2;3;4; comment it out if you don't want to use it)
operate-on-class-ids=0
nvbuf-memory-type=0
config-file=sgie1/config_infer_secondary1.txt
```
Example for 2 secondary-gie (3 inferences):
```
[secondary-gie0]
enable=1
gpu-id=0
gie-unique-id=2
operate-on-gie-id=1
operate-on-class-ids=0
nvbuf-memory-type=0
config-file=sgie1/config_infer_secondary1.txt
[secondary-gie1]
enable=1
gpu-id=0
gie-unique-id=3
operate-on-gie-id=1
operate-on-class-ids=0
nvbuf-memory-type=0
config-file=sgie2/config_infer_secondary2.txt
```
Note: remember to change the [primary-gie] section from
```
[primary-gie]
enable=1
gpu-id=0
gie-unique-id=1
nvbuf-memory-type=0
config-file=config_infer_primary.txt
```
to
```
[primary-gie]
enable=1
gpu-id=0
gie-unique-id=1
nvbuf-memory-type=0
config-file=pgie/config_infer_primary.txt
```
##
### Editing config_infer
* Edit the custom-network-config path in each config file (config_infer_primary, config_infer_secondary1, etc.)
Example for primary
```
custom-network-config=pgie/yolo.cfg
```
Example for secondary1
```
custom-network-config=sgie1/yolo.cfg
```
Example for secondary2
```
custom-network-config=sgie2/yolo.cfg
```
##
* Edit gie-unique-id and process-mode
Example for primary
```
gie-unique-id=1
process-mode=1
```
Example for secondary1
```
gie-unique-id=2
process-mode=2
```
Example for secondary2
```
gie-unique-id=3
process-mode=2
```
##
* Edit batch-size
Example for primary
```
# Number of sources
batch-size=1
```
Example for all secondaries (secondary GIEs run on detected objects rather than sources, so a larger batch-size is typical):
```
batch-size=16
```
##
* If you want the secondary inference to operate on a specified GIE id (the gie-unique-id you want to operate on: 1, 2, etc.)
```
operate-on-gie-id=1
```
##
* If you want the secondary inference to operate on specified class ids of that GIE (e.g. 0, 1;2, 2;3;4)
```
operate-on-class-ids=0
```
##
### Testing model
To run your custom YOLO models, use the command
```
deepstream-app -c deepstream_app_config.txt
```
**During the test run, the engine files will be generated. When the engine build process is done, move each engine file to its respective GIE folder (pgie, sgie1, etc.).**
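For example, with the configs above (FP16, batch-size=1 primary and batch-size=16 secondary; the generated file names are assumptions following the model_b&lt;batch-size&gt;_gpu&lt;gpu-id&gt;_&lt;precision&gt;.engine pattern):
```
mv model_b1_gpu0_fp16.engine pgie/
mv model_b16_gpu0_fp16.engine sgie1/
```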

readme.md

@@ -5,7 +5,7 @@ NVIDIA DeepStream SDK 5.0.1 configuration for YOLO models
 ### Improvements on this repository
-* Darknet CFG params parser (not need to edit nvdsparsebbox_Yolo.cpp or another file)
+* Darknet CFG params parser (no need to edit nvdsparsebbox_Yolo.cpp or another file for native models)
 * Support to new_coords, beta_nms and scale_x_y params
-* Support to new models not supported in official DeepStream SDK YOLO.
+* Support to layers not supported in official DeepStream SDK YOLO.
@@ -15,30 +15,31 @@ NVIDIA DeepStream SDK 5.0.1 configuration for YOLO models
 ##
 Tutorial
-* Configuring to your custom model
-* Using VOC models
+* [Configuring to your custom model](https://github.com/marcoslucianops/DeepStream-Yolo/blob/master/customModels.md)
+* [Multiple YOLO inferences](https://github.com/marcoslucianops/DeepStream-Yolo/blob/master/multipleInferences.md)
 Benchmark
 * [mAP/FPS comparison between models](#mapfps-comparison-between-models)
-[Native TensorRT conversion](#native-tensorrt-conversion) (tested models below)
-* [YOLOv4x-Mish](https://github.com/AlexeyAB/darknet)
-* [YOLOv4-CSP](https://github.com/WongKinYiu/ScaledYOLOv4/tree/yolov4-csp)
-* [YOLOv4](https://github.com/AlexeyAB/darknet)
-* [YOLOv4-Tiny](https://github.com/AlexeyAB/darknet)
-* [YOLOv3-SSP](https://github.com/pjreddie/darknet)
-* [YOLOv3](https://github.com/pjreddie/darknet)
-* [YOLOv3-Tiny-PRN](https://github.com/WongKinYiu/PartialResidualNetworks)
-* [YOLOv3-Tiny](https://github.com/pjreddie/darknet)
-* [YOLOv3-Lite](https://github.com/dog-qiuqiu/MobileNet-Yolo)
-* [YOLOv3-Nano](https://github.com/dog-qiuqiu/MobileNet-Yolo)
-* [YOLO-Fastest](https://github.com/dog-qiuqiu/Yolo-Fastest)
-* [YOLO-Fastest-XL](https://github.com/dog-qiuqiu/Yolo-Fastest)
-* [YOLOv2](https://github.com/pjreddie/darknet)
-* [YOLOv2-Tiny](https://github.com/pjreddie/darknet)
+TensorRT conversion
+* [Native](#native-tensorrt-conversion) (tested models below)
+* YOLOv4x-Mish
+* YOLOv4-CSP
+* YOLOv4
+* YOLOv4-Tiny
+* YOLOv3-SSP
+* YOLOv3
+* YOLOv3-Tiny-PRN
+* YOLOv3-Tiny
+* YOLOv3-Lite
+* YOLOv3-Nano
+* YOLO-Fastest
+* YOLO-Fastest-XL
+* YOLOv2
+* YOLOv2-Tiny
-External TensorRT conversion
-* [YOLOv5](https://github.com/marcoslucianops/DeepStream-Yolo/blob/master/YOLOv5.md)
+* [External](https://github.com/marcoslucianops/DeepStream-Yolo/blob/master/YOLOv5.md)
+* YOLOv5
 Request
 * [Request native TensorRT conversion for your YOLO-based model](#request-native-tensorrt-conversion-for-your-yolo-based-model)
@@ -67,6 +68,10 @@ PyTorch 1.7.0
 Torchvision 0.8.1
 ```
 DeepStream SDK: https://youtu.be/Qi_F_IYpuFQ
+Darknet: https://youtu.be/AxJJ9fnJ7Xk
+| TensorRT | Precision | Resolution | IoU=0.5:0.95 | IoU=0.5 | IoU=0.75 | FPS<br />(with display) | FPS<br />(without display) |
+|:---------------:|:---------:|:----------:|:------------:|:-------:|:--------:|:-----------------------:|:--------------------------:|
+| YOLOv5x | FP32 | 608 | 0.406 | 0.562 | 0.441 | 7.91 | 7.99 |
@@ -172,20 +177,20 @@ Donwload [my native folder](https://github.com/marcoslucianops/DeepStream-Yolo/t
 Donwload cfg and weights files from your model and move to deepstream/sources/yolo folder.
-* YOLOv4x-Mish [[cfg](https://raw.githubusercontent.com/AlexeyAB/darknet/master/cfg/yolov4x-mish.cfg)] [[weights](https://github.com/AlexeyAB/darknet/releases/download/darknet_yolo_v4_pre/yolov4x-mish.weights)]
-* YOLOv4-CSP [[cfg](https://raw.githubusercontent.com/AlexeyAB/darknet/master/cfg/yolov4-csp.cfg)] [[weights](https://github.com/AlexeyAB/darknet/releases/download/darknet_yolo_v4_pre/yolov4-csp.weights)]
-* YOLOv4 [[cfg](https://raw.githubusercontent.com/AlexeyAB/darknet/master/cfg/yolov4.cfg)] [[weights](https://github.com/AlexeyAB/darknet/releases/download/darknet_yolo_v3_optimal/yolov4.weights)]
-* YOLOv4-Tiny [[cfg](https://raw.githubusercontent.com/AlexeyAB/darknet/master/cfg/yolov4-tiny.cfg)] [[weights](https://github.com/AlexeyAB/darknet/releases/download/darknet_yolo_v4_pre/yolov4-tiny.weights)]
-* YOLOv3-SPP [[cfg](https://raw.githubusercontent.com/pjreddie/darknet/master/cfg/yolov3-spp.cfg)] [[weights](https://pjreddie.com/media/files/yolov3-spp.weights)]
-* YOLOv3 [[cfg](https://raw.githubusercontent.com/pjreddie/darknet/master/cfg/yolov3.cfg)] [[weights](https://pjreddie.com/media/files/yolov3.weights)]
-* YOLOv3-Tiny-PRN [[cfg](https://raw.githubusercontent.com/WongKinYiu/PartialResidualNetworks/master/cfg/yolov3-tiny-prn.cfg)] [[weights](https://github.com/WongKinYiu/PartialResidualNetworks/raw/master/model/yolov3-tiny-prn.weights)]
-* YOLOv3-Tiny [[cfg](https://raw.githubusercontent.com/pjreddie/darknet/master/cfg/yolov3-tiny.cfg)] [[weights](https://pjreddie.com/media/files/yolov3-tiny.weights)]
-* YOLOv3-Lite [[cfg](https://raw.githubusercontent.com/dog-qiuqiu/MobileNet-Yolo/master/MobileNetV2-YOLOv3-Lite/COCO/MobileNetV2-YOLOv3-Lite-coco.cfg)] [[weights](https://github.com/dog-qiuqiu/MobileNet-Yolo/raw/master/MobileNetV2-YOLOv3-Lite/COCO/MobileNetV2-YOLOv3-Lite-coco.weights)]
-* YOLOv3-Nano [[cfg](https://raw.githubusercontent.com/dog-qiuqiu/MobileNet-Yolo/master/MobileNetV2-YOLOv3-Nano/COCO/MobileNetV2-YOLOv3-Nano-coco.cfg)] [[weights](https://github.com/dog-qiuqiu/MobileNet-Yolo/raw/master/MobileNetV2-YOLOv3-Nano/COCO/MobileNetV2-YOLOv3-Nano-coco.weights)]
-* YOLO-Fastest [[cfg](https://raw.githubusercontent.com/dog-qiuqiu/Yolo-Fastest/master/Yolo-Fastest/COCO/yolo-fastest.cfg)] [[weights](https://github.com/dog-qiuqiu/Yolo-Fastest/raw/master/Yolo-Fastest/COCO/yolo-fastest.weights)]
-* YOLO-Fastest-XL [[cfg](https://raw.githubusercontent.com/dog-qiuqiu/Yolo-Fastest/master/Yolo-Fastest/COCO/yolo-fastest-xl.cfg)] [[weights](https://github.com/dog-qiuqiu/Yolo-Fastest/raw/master/Yolo-Fastest/COCO/yolo-fastest-xl.weights)]
-* YOLOv2 [[cfg](https://raw.githubusercontent.com/pjreddie/darknet/master/cfg/yolov2.cfg)] [[weights](https://pjreddie.com/media/files/yolov2.weights)]
-* YOLOv2-Tiny [[cfg](https://raw.githubusercontent.com/pjreddie/darknet/master/cfg/yolov2-tiny.cfg)] [[weights](https://pjreddie.com/media/files/yolov2-tiny.weights)]
+* [YOLOv4x-Mish](https://github.com/AlexeyAB/darknet) [[cfg](https://raw.githubusercontent.com/AlexeyAB/darknet/master/cfg/yolov4x-mish.cfg)] [[weights](https://github.com/AlexeyAB/darknet/releases/download/darknet_yolo_v4_pre/yolov4x-mish.weights)]
+* [YOLOv4-CSP](https://github.com/WongKinYiu/ScaledYOLOv4/tree/yolov4-csp) [[cfg](https://raw.githubusercontent.com/AlexeyAB/darknet/master/cfg/yolov4-csp.cfg)] [[weights](https://github.com/AlexeyAB/darknet/releases/download/darknet_yolo_v4_pre/yolov4-csp.weights)]
+* [YOLOv4](https://github.com/AlexeyAB/darknet) [[cfg](https://raw.githubusercontent.com/AlexeyAB/darknet/master/cfg/yolov4.cfg)] [[weights](https://github.com/AlexeyAB/darknet/releases/download/darknet_yolo_v3_optimal/yolov4.weights)]
+* [YOLOv4-Tiny](https://github.com/AlexeyAB/darknet) [[cfg](https://raw.githubusercontent.com/AlexeyAB/darknet/master/cfg/yolov4-tiny.cfg)] [[weights](https://github.com/AlexeyAB/darknet/releases/download/darknet_yolo_v4_pre/yolov4-tiny.weights)]
+* [YOLOv3-SPP](https://github.com/pjreddie/darknet) [[cfg](https://raw.githubusercontent.com/pjreddie/darknet/master/cfg/yolov3-spp.cfg)] [[weights](https://pjreddie.com/media/files/yolov3-spp.weights)]
+* [YOLOv3](https://github.com/pjreddie/darknet) [[cfg](https://raw.githubusercontent.com/pjreddie/darknet/master/cfg/yolov3.cfg)] [[weights](https://pjreddie.com/media/files/yolov3.weights)]
+* [YOLOv3-Tiny-PRN](https://github.com/WongKinYiu/PartialResidualNetworks) [[cfg](https://raw.githubusercontent.com/WongKinYiu/PartialResidualNetworks/master/cfg/yolov3-tiny-prn.cfg)] [[weights](https://github.com/WongKinYiu/PartialResidualNetworks/raw/master/model/yolov3-tiny-prn.weights)]
+* [YOLOv3-Tiny](https://github.com/pjreddie/darknet) [[cfg](https://raw.githubusercontent.com/pjreddie/darknet/master/cfg/yolov3-tiny.cfg)] [[weights](https://pjreddie.com/media/files/yolov3-tiny.weights)]
+* [YOLOv3-Lite](https://github.com/dog-qiuqiu/MobileNet-Yolo) [[cfg](https://raw.githubusercontent.com/dog-qiuqiu/MobileNet-Yolo/master/MobileNetV2-YOLOv3-Lite/COCO/MobileNetV2-YOLOv3-Lite-coco.cfg)] [[weights](https://github.com/dog-qiuqiu/MobileNet-Yolo/raw/master/MobileNetV2-YOLOv3-Lite/COCO/MobileNetV2-YOLOv3-Lite-coco.weights)]
+* [YOLOv3-Nano](https://github.com/dog-qiuqiu/MobileNet-Yolo) [[cfg](https://raw.githubusercontent.com/dog-qiuqiu/MobileNet-Yolo/master/MobileNetV2-YOLOv3-Nano/COCO/MobileNetV2-YOLOv3-Nano-coco.cfg)] [[weights](https://github.com/dog-qiuqiu/MobileNet-Yolo/raw/master/MobileNetV2-YOLOv3-Nano/COCO/MobileNetV2-YOLOv3-Nano-coco.weights)]
+* [YOLO-Fastest](https://github.com/dog-qiuqiu/Yolo-Fastest) [[cfg](https://raw.githubusercontent.com/dog-qiuqiu/Yolo-Fastest/master/Yolo-Fastest/COCO/yolo-fastest.cfg)] [[weights](https://github.com/dog-qiuqiu/Yolo-Fastest/raw/master/Yolo-Fastest/COCO/yolo-fastest.weights)]
+* [YOLO-Fastest-XL](https://github.com/dog-qiuqiu/Yolo-Fastest) [[cfg](https://raw.githubusercontent.com/dog-qiuqiu/Yolo-Fastest/master/Yolo-Fastest/COCO/yolo-fastest-xl.cfg)] [[weights](https://github.com/dog-qiuqiu/Yolo-Fastest/raw/master/Yolo-Fastest/COCO/yolo-fastest-xl.weights)]
+* [YOLOv2](https://github.com/pjreddie/darknet) [[cfg](https://raw.githubusercontent.com/pjreddie/darknet/master/cfg/yolov2.cfg)] [[weights](https://pjreddie.com/media/files/yolov2.weights)]
+* [YOLOv2-Tiny](https://github.com/pjreddie/darknet) [[cfg](https://raw.githubusercontent.com/pjreddie/darknet/master/cfg/yolov2-tiny.cfg)] [[weights](https://pjreddie.com/media/files/yolov2-tiny.weights)]
 Compile
@@ -198,7 +203,7 @@ Edit config_infer_primary.txt for your model (example for YOLOv4)
 ```
 [property]
 ...
-# 0=RGB, 1=BGR
+# 0=RGB, 1=BGR, 2=GRAYSCALE
 model-color-format=0
 # CFG
 custom-network-config=yolov4.cfg
@@ -248,4 +253,6 @@ Note: If your model are listed in native tab, you can use [my native folder](htt
 ##
-For commercial DeepStream SDK projects, contact me at email address available in GitHub.
+For commercial DeepStream SDK projects, contact me at email address available in GitHub.
+
+My projects: https://www.youtube.com/MarcosLucianoTV