diff --git a/YOLOv5-3.X.md b/YOLOv5-3.X.md
deleted file mode 100644
index eaa8892..0000000
--- a/YOLOv5-3.X.md
+++ /dev/null
@@ -1,192 +0,0 @@
-# YOLOv5
-NVIDIA DeepStream SDK 5.1 configuration for YOLOv5 3.0/3.1 models
-
-Thanks [DanaHan](https://github.com/DanaHan/Yolov5-in-Deepstream-5.0), [wang-xinyu](https://github.com/wang-xinyu/tensorrtx) and [Ultralytics](https://github.com/ultralytics/yolov5)
-
-##
-
-* [Requirements](#requirements)
-* [Convert PyTorch model to wts file](#convert-pytorch-model-to-wts-file)
-* [Convert wts file to TensorRT model](#convert-wts-file-to-tensorrt-model)
-* [Compile nvdsinfer_custom_impl_Yolo](#compile-nvdsinfer_custom_impl_yolo)
-* [Testing model](#testing-model)
-
-##
-
-### Requirements
-* [TensorRTX](https://github.com/wang-xinyu/tensorrtx/blob/master/tutorials/install.md)
-
-* [Ultralytics](https://github.com/ultralytics/yolov5/blob/v3.1/requirements.txt)
-
-* Matplotlib (for Jetson platform)
-```
-sudo apt-get install python3-matplotlib
-```
-
-* PyTorch (for Jetson platform)
-```
-wget https://nvidia.box.com/shared/static/p57jwntv436lfrd78inwl7iml6p13fzh.whl -O torch-1.8.0-cp36-cp36m-linux_aarch64.whl
-sudo apt-get install python3-pip libopenblas-base libopenmpi-dev
-pip3 install Cython
-pip3 install numpy torch-1.8.0-cp36-cp36m-linux_aarch64.whl
-```
-
-* TorchVision (for Jetson platform)
-```
-sudo apt-get install libjpeg-dev zlib1g-dev libpython3-dev libavcodec-dev libavformat-dev libswscale-dev
-git clone --branch v0.9.0 https://github.com/pytorch/vision torchvision
-cd torchvision
-export BUILD_VERSION=0.9.0
-python3 setup.py install --user
-```
-
-##
-
-### Convert PyTorch model to wts file
-1. Download repositories
-```
-git clone https://github.com/DanaHan/Yolov5-in-Deepstream-5.0.git yolov5converter
-git clone -b yolov5-v3.1 https://github.com/wang-xinyu/tensorrtx.git
-git clone -b v3.1 https://github.com/ultralytics/yolov5.git
-```
-
-2. Download the YOLOv5 (YOLOv5s, YOLOv5m, YOLOv5l or YOLOv5x) weights to the yolov5/weights directory (example for YOLOv5s)
-```
-wget https://github.com/ultralytics/yolov5/releases/download/v3.1/yolov5s.pt -P yolov5/weights/
-```
-
-3. Copy gen_wts.py file (from tensorrtx/yolov5 folder) to yolov5 (ultralytics) folder
-```
-cp tensorrtx/yolov5/gen_wts.py yolov5/gen_wts.py
-```
-
-4. Generate wts file
-```
-cd yolov5
-python3 gen_wts.py
-```
-
-yolov5s.wts file will be generated in yolov5 folder
-
-
-
-Note: if you want to generate a wts file for another YOLOv5 model (YOLOv5m, YOLOv5l or YOLOv5x), edit the gen_wts.py file, changing yolov5s to your model name
-```
-model = torch.load('weights/yolov5s.pt', map_location=device)['model'].float() # load to FP32
-model.to(device).eval()
-
-f = open('yolov5s.wts', 'w')
-```
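-
-For example, a sketch of the edited lines for YOLOv5m (assuming yolov5m.pt was downloaded to yolov5/weights):
-```
-model = torch.load('weights/yolov5m.pt', map_location=device)['model'].float() # load to FP32
-model.to(device).eval()
-
-f = open('yolov5m.wts', 'w')
-```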
-
-##
-
-### Convert wts file to TensorRT model
-1. Replace the yololayer files in the tensorrtx/yolov5 folder with the yololayer files from yolov5converter
-```
-mv yolov5converter/yololayer.cu tensorrtx/yolov5/yololayer.cu
-mv yolov5converter/yololayer.h tensorrtx/yolov5/yololayer.h
-```
-
-2. Copy the generated yolov5s.wts file to the tensorrtx/yolov5 folder (example for YOLOv5s)
-```
-cp yolov5/yolov5s.wts tensorrtx/yolov5/yolov5s.wts
-```
-
-3. Build tensorrtx/yolov5
-```
-cd tensorrtx/yolov5
-mkdir build
-cd build
-cmake ..
-make
-```
-
-4. Convert to TensorRT model (yolov5s.engine file will be generated in tensorrtx/yolov5/build folder)
-```
-sudo ./yolov5 -s
-```
-
-5. Create a custom yolo folder and copy generated files (example for YOLOv5s)
-```
-mkdir /opt/nvidia/deepstream/deepstream-5.1/sources/yolo
-cp yolov5s.engine /opt/nvidia/deepstream/deepstream-5.1/sources/yolo/yolov5s.engine
-```
-
-
-
-Note: by default, the yolov5 converter generates the model with batch size = 1, FP16 mode and the s model.
-```
-#define USE_FP16 // comment out this if want to use FP32
-#define DEVICE 0 // GPU id
-#define NMS_THRESH 0.4
-#define CONF_THRESH 0.5
-#define BATCH_SIZE 1
-
-#define NET s // s m l x
-```
-Edit the yolov5.cpp file before compiling if you want to change these parameters.
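-
-For example, a sketch of the edited defines for an FP32 YOLOv5m engine (other lines unchanged):
-```
-//#define USE_FP16 // commented out to build in FP32
-#define NET m // s m l x
-```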
-
-##
-
-### Compile nvdsinfer_custom_impl_Yolo
-1. Run command
-```
-sudo chmod -R 777 /opt/nvidia/deepstream/deepstream-5.1/sources/
-```
-
-2. Download [my external/yolov5-3.X folder](https://github.com/marcoslucianops/DeepStream-Yolo/tree/master/external/yolov5-3.X) and move the files to the created yolo folder
-
-3. Compile lib
-
-* x86 platform
-```
-cd /opt/nvidia/deepstream/deepstream-5.1/sources/yolo
-CUDA_VER=11.1 make -C nvdsinfer_custom_impl_Yolo
-```
-
-* Jetson platform
-```
-cd /opt/nvidia/deepstream/deepstream-5.1/sources/yolo
-CUDA_VER=10.2 make -C nvdsinfer_custom_impl_Yolo
-```
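-
-Note: CUDA_VER must match your installed CUDA version; if it differs, adjust the value accordingly. It can be checked with
-```
-nvcc --version
-```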
-
-##
-
-### Testing model
-Use my edited [deepstream_app_config.txt](https://raw.githubusercontent.com/marcoslucianops/DeepStream-Yolo/master/external/yolov5-3.X/deepstream_app_config.txt) and [config_infer_primary.txt](https://raw.githubusercontent.com/marcoslucianops/DeepStream-Yolo/master/external/yolov5-3.X/config_infer_primary.txt) files available in [my external/yolov5-3.X folder](https://github.com/marcoslucianops/DeepStream-Yolo/tree/master/external/yolov5-3.X)
-
-Run command
-```
-deepstream-app -c deepstream_app_config.txt
-```
-
-
-
-Note: edit the config_infer_primary.txt file based on the selected model
-
-For example, if you are using YOLOv5x, change
-
-```
-model-engine-file=yolov5s.engine
-```
-
-to
-
-```
-model-engine-file=yolov5x.engine
-```
-
-##
-
-To change NMS_THRESH, edit nvdsinfer_custom_impl_Yolo/nvdsparsebbox_Yolo.cpp file and recompile
-
-```
-#define kNMS_THRESH 0.45
-```
-
-To change CONF_THRESH, edit config_infer_primary.txt file
-
-```
-[class-attrs-all]
-pre-cluster-threshold=0.25
-```
diff --git a/YOLOv5-4.0.md b/YOLOv5-4.0.md
deleted file mode 100644
index 8580d75..0000000
--- a/YOLOv5-4.0.md
+++ /dev/null
@@ -1,183 +0,0 @@
-# YOLOv5
-NVIDIA DeepStream SDK 5.1 configuration for YOLOv5 4.0 models
-
-Thanks [wang-xinyu](https://github.com/wang-xinyu/tensorrtx) and [Ultralytics](https://github.com/ultralytics/yolov5)
-
-##
-
-* [Requirements](#requirements)
-* [Convert PyTorch model to wts file](#convert-pytorch-model-to-wts-file)
-* [Convert wts file to TensorRT model](#convert-wts-file-to-tensorrt-model)
-* [Compile nvdsinfer_custom_impl_Yolo](#compile-nvdsinfer_custom_impl_yolo)
-* [Testing model](#testing-model)
-
-##
-
-### Requirements
-* [TensorRTX](https://github.com/wang-xinyu/tensorrtx/blob/master/tutorials/install.md)
-
-* [Ultralytics](https://github.com/ultralytics/yolov5/blob/v4.0/requirements.txt)
-
-* Matplotlib (for Jetson platform)
-```
-sudo apt-get install python3-matplotlib
-```
-
-* PyTorch (for Jetson platform)
-```
-wget https://nvidia.box.com/shared/static/p57jwntv436lfrd78inwl7iml6p13fzh.whl -O torch-1.8.0-cp36-cp36m-linux_aarch64.whl
-sudo apt-get install python3-pip libopenblas-base libopenmpi-dev
-pip3 install Cython
-pip3 install numpy torch-1.8.0-cp36-cp36m-linux_aarch64.whl
-```
-
-* TorchVision (for Jetson platform)
-```
-sudo apt-get install libjpeg-dev zlib1g-dev libpython3-dev libavcodec-dev libavformat-dev libswscale-dev
-git clone --branch v0.9.0 https://github.com/pytorch/vision torchvision
-cd torchvision
-export BUILD_VERSION=0.9.0
-python3 setup.py install --user
-```
-
-##
-
-### Convert PyTorch model to wts file
-1. Download repositories
-```
-git clone -b yolov5-v4.0 https://github.com/wang-xinyu/tensorrtx.git
-git clone -b v4.0 https://github.com/ultralytics/yolov5.git
-```
-
-2. Download the YOLOv5 (YOLOv5s, YOLOv5m, YOLOv5l or YOLOv5x) weights to the yolov5/weights directory (example for YOLOv5s)
-```
-wget https://github.com/ultralytics/yolov5/releases/download/v4.0/yolov5s.pt -P yolov5/weights
-```
-
-3. Copy gen_wts.py file (from tensorrtx/yolov5 folder) to yolov5 (ultralytics) folder
-```
-cp tensorrtx/yolov5/gen_wts.py yolov5/gen_wts.py
-```
-
-4. Generate wts file
-```
-cd yolov5
-python3 gen_wts.py
-```
-
-yolov5s.wts file will be generated in yolov5 folder
-
-
-
-Note: if you want to generate a wts file for another YOLOv5 model (YOLOv5m, YOLOv5l or YOLOv5x), edit the gen_wts.py file, changing yolov5s to your model name
-```
-model = torch.load('weights/yolov5s.pt', map_location=device)['model'].float() # load to FP32
-model.to(device).eval()
-
-f = open('yolov5s.wts', 'w')
-```
-
-##
-
-### Convert wts file to TensorRT model
-1. Build tensorrtx/yolov5
-```
-cd tensorrtx/yolov5
-mkdir build
-cd build
-cmake ..
-make
-```
-
-2. Copy the generated yolov5s.wts file to the tensorrtx/yolov5/build folder (example for YOLOv5s)
-```
-cp yolov5/yolov5s.wts tensorrtx/yolov5/build/yolov5s.wts
-```
-
-3. Convert to TensorRT model (yolov5s.engine file will be generated in tensorrtx/yolov5/build folder)
-```
-sudo ./yolov5 -s yolov5s.wts yolov5s.engine s
-```
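-
-The last argument selects the model size, so, for example, a YOLOv5m engine would be built with a command like this (assuming yolov5m.wts was generated and copied to the build folder):
-```
-sudo ./yolov5 -s yolov5m.wts yolov5m.engine m
-```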
-
-4. Create a custom yolo folder and copy generated file (example for YOLOv5s)
-```
-mkdir /opt/nvidia/deepstream/deepstream-5.1/sources/yolo
-cp yolov5s.engine /opt/nvidia/deepstream/deepstream-5.1/sources/yolo/yolov5s.engine
-```
-
-
-
-Note: by default, the yolov5 converter generates the model with batch size = 1 and FP16 mode.
-```
-#define USE_FP16 // set USE_INT8 or USE_FP16 or USE_FP32
-#define DEVICE 0 // GPU id
-#define NMS_THRESH 0.4
-#define CONF_THRESH 0.5
-#define BATCH_SIZE 1
-```
-Edit the yolov5.cpp file before compiling if you want to change these parameters.
-
-##
-
-### Compile nvdsinfer_custom_impl_Yolo
-1. Run command
-```
-sudo chmod -R 777 /opt/nvidia/deepstream/deepstream-5.1/sources/
-```
-
-2. Download [my external/yolov5-4.0 folder](https://github.com/marcoslucianops/DeepStream-Yolo/tree/master/external/yolov5-4.0) and move the files to the created yolo folder
-
-3. Compile lib
-
-* x86 platform
-```
-cd /opt/nvidia/deepstream/deepstream-5.1/sources/yolo
-CUDA_VER=11.1 make -C nvdsinfer_custom_impl_Yolo
-```
-
-* Jetson platform
-```
-cd /opt/nvidia/deepstream/deepstream-5.1/sources/yolo
-CUDA_VER=10.2 make -C nvdsinfer_custom_impl_Yolo
-```
-
-##
-
-### Testing model
-Use my edited [deepstream_app_config.txt](https://raw.githubusercontent.com/marcoslucianops/DeepStream-Yolo/master/external/yolov5-4.0/deepstream_app_config.txt) and [config_infer_primary.txt](https://raw.githubusercontent.com/marcoslucianops/DeepStream-Yolo/master/external/yolov5-4.0/config_infer_primary.txt) files available in [my external/yolov5-4.0 folder](https://github.com/marcoslucianops/DeepStream-Yolo/tree/master/external/yolov5-4.0)
-
-Run command
-```
-deepstream-app -c deepstream_app_config.txt
-```
-
-
-
-Note: edit the config_infer_primary.txt file based on the selected model
-
-For example, if you are using YOLOv5x, change
-
-```
-model-engine-file=yolov5s.engine
-```
-
-to
-
-```
-model-engine-file=yolov5x.engine
-```
-
-##
-
-To change NMS_THRESH, edit nvdsinfer_custom_impl_Yolo/nvdsparsebbox_Yolo.cpp file and recompile
-
-```
-#define kNMS_THRESH 0.45
-```
-
-To change CONF_THRESH, edit config_infer_primary.txt file
-
-```
-[class-attrs-all]
-pre-cluster-threshold=0.25
-```
diff --git a/YOLOv5-5.0.md b/YOLOv5-5.0.md
deleted file mode 100644
index 47718cf..0000000
--- a/YOLOv5-5.0.md
+++ /dev/null
@@ -1,173 +0,0 @@
-# YOLOv5
-NVIDIA DeepStream SDK 5.1 configuration for YOLOv5 5.0 models
-
-Thanks [wang-xinyu](https://github.com/wang-xinyu/tensorrtx) and [Ultralytics](https://github.com/ultralytics/yolov5)
-
-##
-
-* [Requirements](#requirements)
-* [Convert PyTorch model to wts file](#convert-pytorch-model-to-wts-file)
-* [Convert wts file to TensorRT model](#convert-wts-file-to-tensorrt-model)
-* [Compile nvdsinfer_custom_impl_Yolo](#compile-nvdsinfer_custom_impl_yolo)
-* [Testing model](#testing-model)
-
-##
-
-### Requirements
-* [TensorRTX](https://github.com/wang-xinyu/tensorrtx/blob/master/tutorials/install.md)
-
-* [Ultralytics](https://github.com/ultralytics/yolov5/blob/master/requirements.txt)
-
-* Matplotlib (for Jetson platform)
-```
-sudo apt-get install python3-matplotlib
-```
-
-* PyTorch (for Jetson platform)
-```
-wget https://nvidia.box.com/shared/static/p57jwntv436lfrd78inwl7iml6p13fzh.whl -O torch-1.8.0-cp36-cp36m-linux_aarch64.whl
-sudo apt-get install python3-pip libopenblas-base libopenmpi-dev
-pip3 install Cython
-pip3 install numpy torch-1.8.0-cp36-cp36m-linux_aarch64.whl
-```
-
-* TorchVision (for Jetson platform)
-```
-sudo apt-get install libjpeg-dev zlib1g-dev libpython3-dev libavcodec-dev libavformat-dev libswscale-dev
-git clone --branch v0.9.0 https://github.com/pytorch/vision torchvision
-cd torchvision
-export BUILD_VERSION=0.9.0
-python3 setup.py install --user
-```
-
-##
-
-### Convert PyTorch model to wts file
-1. Download repositories
-```
-git clone https://github.com/wang-xinyu/tensorrtx.git
-git clone https://github.com/ultralytics/yolov5.git
-```
-
-2. Download the YOLOv5 (YOLOv5s, YOLOv5m, YOLOv5l or YOLOv5x) weights to the yolov5 folder (example for YOLOv5s)
-```
-wget https://github.com/ultralytics/yolov5/releases/download/v5.0/yolov5s.pt -P yolov5/
-```
-
-3. Copy gen_wts.py file (from tensorrtx/yolov5 folder) to yolov5 (ultralytics) folder
-```
-cp tensorrtx/yolov5/gen_wts.py yolov5/gen_wts.py
-```
-
-4. Generate wts file
-```
-cd yolov5
-python3 gen_wts.py yolov5s.pt
-```
-
-yolov5s.wts file will be generated in yolov5 folder
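-
-For another model size, the same command should work with the corresponding weights file (assuming it was downloaded to the yolov5 folder), for example
-```
-python3 gen_wts.py yolov5m.pt
-```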
-
-##
-
-### Convert wts file to TensorRT model
-1. Build tensorrtx/yolov5
-```
-cd tensorrtx/yolov5
-mkdir build
-cd build
-cmake ..
-make
-```
-
-2. Copy the generated yolov5s.wts file to the tensorrtx/yolov5/build folder (example for YOLOv5s)
-```
-cp yolov5/yolov5s.wts tensorrtx/yolov5/build/yolov5s.wts
-```
-
-3. Convert to TensorRT model (yolov5s.engine file will be generated in tensorrtx/yolov5/build folder)
-```
-sudo ./yolov5 -s yolov5s.wts yolov5s.engine s
-```
-
-4. Create a custom yolo folder and copy generated file (example for YOLOv5s)
-```
-mkdir /opt/nvidia/deepstream/deepstream-5.1/sources/yolo
-cp yolov5s.engine /opt/nvidia/deepstream/deepstream-5.1/sources/yolo/yolov5s.engine
-```
-
-
-
-Note: by default, the yolov5 converter generates the model with batch size = 1 and FP16 mode.
-```
-#define USE_FP16 // set USE_INT8 or USE_FP16 or USE_FP32
-#define DEVICE 0 // GPU id
-#define NMS_THRESH 0.4
-#define CONF_THRESH 0.5
-#define BATCH_SIZE 1
-```
-Edit the yolov5.cpp file before compiling if you want to change these parameters.
-
-##
-
-### Compile nvdsinfer_custom_impl_Yolo
-1. Run command
-```
-sudo chmod -R 777 /opt/nvidia/deepstream/deepstream-5.1/sources/
-```
-
-2. Download [my external/yolov5-5.0 folder](https://github.com/marcoslucianops/DeepStream-Yolo/tree/master/external/yolov5-5.0) and move the files to the created yolo folder
-
-3. Compile lib
-
-* x86 platform
-```
-cd /opt/nvidia/deepstream/deepstream-5.1/sources/yolo
-CUDA_VER=11.1 make -C nvdsinfer_custom_impl_Yolo
-```
-
-* Jetson platform
-```
-cd /opt/nvidia/deepstream/deepstream-5.1/sources/yolo
-CUDA_VER=10.2 make -C nvdsinfer_custom_impl_Yolo
-```
-
-##
-
-### Testing model
-Use my edited [deepstream_app_config.txt](https://raw.githubusercontent.com/marcoslucianops/DeepStream-Yolo/master/external/yolov5-5.0/deepstream_app_config.txt) and [config_infer_primary.txt](https://raw.githubusercontent.com/marcoslucianops/DeepStream-Yolo/master/external/yolov5-5.0/config_infer_primary.txt) files available in [my external/yolov5-5.0 folder](https://github.com/marcoslucianops/DeepStream-Yolo/tree/master/external/yolov5-5.0)
-
-Run command
-```
-deepstream-app -c deepstream_app_config.txt
-```
-
-
-
-Note: edit the config_infer_primary.txt file based on the selected model
-
-For example, if you are using YOLOv5x, change
-
-```
-model-engine-file=yolov5s.engine
-```
-
-to
-
-```
-model-engine-file=yolov5x.engine
-```
-
-##
-
-To change NMS_THRESH, edit nvdsinfer_custom_impl_Yolo/nvdsparsebbox_Yolo.cpp file and recompile
-
-```
-#define kNMS_THRESH 0.45
-```
-
-To change CONF_THRESH, edit config_infer_primary.txt file
-
-```
-[class-attrs-all]
-pre-cluster-threshold=0.25
-```
diff --git a/native/config_infer_primary.txt b/config_infer_primary.txt
similarity index 100%
rename from native/config_infer_primary.txt
rename to config_infer_primary.txt
diff --git a/native/config_infer_primary_yoloV2.txt b/config_infer_primary_yoloV2.txt
similarity index 100%
rename from native/config_infer_primary_yoloV2.txt
rename to config_infer_primary_yoloV2.txt
diff --git a/customModels.md b/customModels.md
deleted file mode 100644
index e670716..0000000
--- a/customModels.md
+++ /dev/null
@@ -1,312 +0,0 @@
-# Editing default model to your custom model
-How to edit DeepStream files to your custom model
-
-##
-
-* [Requirements](#requirements)
-* [Editing default model](#editing-default-model)
-* [Compiling edited model](#compiling-edited-model)
-* [Understanding and editing deepstream_app_config](#understanding-and-editing-deepstream_app_config)
-* [Understanding and editing config_infer_primary](#understanding-and-editing-config_infer_primary)
-* [Testing model](#testing-model)
-* [Custom functions in your model](#custom-functions-in-your-model)
-
-##
-
-### Requirements
-* [NVIDIA DeepStream SDK 5.1](https://developer.nvidia.com/deepstream-sdk)
-* [DeepStream-Yolo Native](https://github.com/marcoslucianops/DeepStream-Yolo/tree/master/native)
-* [Pre-trained YOLO model](https://github.com/AlexeyAB/darknet)
-
-##
-
-### Editing default model
-1. Run command
-```
-sudo chmod -R 777 /opt/nvidia/deepstream/deepstream-5.1/sources/
-```
-
-2. Download [my native folder](https://github.com/marcoslucianops/DeepStream-Yolo/tree/master/native), rename it to yolo and move it to your deepstream/sources folder.
-3. Copy your obj.names file to the deepstream/sources/yolo directory and rename it to labels.txt
-4. Copy your yolo.cfg and yolo.weights files to the deepstream/sources/yolo directory.
-5. Edit config_infer_primary.txt for your model
-```
-[property]
-...
-# CFG
-custom-network-config=yolo.cfg
-# Weights
-model-file=yolo.weights
-# Model labels file
-labelfile-path=labels.txt
-...
-```
-
-Note: if you want to use YOLOv2 or YOLOv2-Tiny models, change deepstream_app_config.txt
-```
-[primary-gie]
-enable=1
-gpu-id=0
-gie-unique-id=1
-nvbuf-memory-type=0
-config-file=config_infer_primary_yoloV2.txt
-```
-
-Note: config_infer_primary.txt uses cluster-mode=4 and NMS = 0.45 (via code) when beta_nms isn't available (when beta_nms is available, NMS = beta_nms), while config_infer_primary_yoloV2.txt uses cluster-mode=2 and nms-iou-threshold=0.45 to set NMS.
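-
-As a sketch, the relevant section of config_infer_primary_yoloV2.txt (assuming the threshold is set under [class-attrs-all]) looks like
-```
-[class-attrs-all]
-nms-iou-threshold=0.45
-pre-cluster-threshold=0.25
-```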
-
-##
-
-### Compiling edited model
-1. Check your CUDA version (nvcc --version)
-2. Go to the deepstream/sources/yolo directory
-3. Run the command to compile:
-
-* x86 platform
-```
-CUDA_VER=11.1 make -C nvdsinfer_custom_impl_Yolo
-```
-
-* Jetson platform
-```
-CUDA_VER=10.2 make -C nvdsinfer_custom_impl_Yolo
-```
-
-##
-
-### Understanding and editing deepstream_app_config
-To understand and edit deepstream_app_config.txt file, read the [DeepStream SDK Development Guide - Configuration Groups](https://docs.nvidia.com/metropolis/deepstream/dev-guide/text/DS_ref_app_deepstream.html#configuration-groups)
-
-##
-
-* Edit tiled-display
-
-```
-[tiled-display]
-enable=1
-# If you have 1 stream use 1/1 (rows/columns), if you have 4 streams use 2/2 or 4/1 or 1/4 (rows/columns)
-rows=1
-columns=1
-# Resolution of tiled display
-width=1280
-height=720
-gpu-id=0
-nvbuf-memory-type=0
-```
-
-##
-
-* Edit source
-
-Example for 1 source:
-```
-[source0]
-enable=1
-# 1=Camera (V4L2), 2=URI, 3=MultiURI, 4=RTSP, 5=Camera (CSI; Jetson only)
-type=3
-# Stream URL
-uri=rtsp://192.168.1.2/Streaming/Channels/101/httppreview
-# Number of source copies (if > 1, you need to edit rows/columns in the tiled-display section, batch-size in the streammux section and config_infer_primary.txt; type=3 is required for more than 1 source)
-num-sources=1
-gpu-id=0
-cudadec-memtype=0
-```
-
-Example for 1 duplicated source:
-```
-[source0]
-enable=1
-type=3
-uri=rtsp://192.168.1.2/Streaming/Channels/101/httppreview
-num-sources=2
-gpu-id=0
-cudadec-memtype=0
-```
-
-Example for 2 sources:
-```
-[source0]
-enable=1
-type=3
-uri=rtsp://192.168.1.2/Streaming/Channels/101/httppreview
-num-sources=1
-gpu-id=0
-cudadec-memtype=0
-
-[source1]
-enable=1
-type=3
-uri=rtsp://192.168.1.3/Streaming/Channels/101/httppreview
-num-sources=1
-gpu-id=0
-cudadec-memtype=0
-```
-
-##
-
-* Edit sink
-
-Example for 1 source or 1 duplicated source:
-```
-[sink0]
-enable=1
-# 1=Fakesink, 2=EGL (nveglglessink), 3=Filesink, 4=RTSP, 5=Overlay (Jetson only)
-type=2
-# Indicates how fast the stream is to be rendered (0=As fast as possible, 1=Synchronously)
-sync=0
-# The ID of the source whose buffers this sink must use
-source-id=0
-gpu-id=0
-nvbuf-memory-type=0
-```
-
-Example for 2 sources:
-```
-[sink0]
-enable=1
-type=2
-sync=0
-source-id=0
-gpu-id=0
-nvbuf-memory-type=0
-
-[sink1]
-enable=1
-type=2
-sync=0
-source-id=1
-gpu-id=0
-nvbuf-memory-type=0
-```
-
-##
-
-* Edit streammux
-
-Example for 1 source:
-```
-[streammux]
-gpu-id=0
-# Boolean property to inform muxer that sources are live
-live-source=1
-# Number of sources
-batch-size=1
-# Time out in usec, to wait after the first buffer is available to push the batch even if the complete batch is not formed
-batched-push-timeout=40000
-# Resolution of streammux
-width=1920
-height=1080
-enable-padding=0
-nvbuf-memory-type=0
-```
-
-Example for 1 duplicated source or 2 sources:
-```
-[streammux]
-gpu-id=0
-live-source=0
-batch-size=2
-batched-push-timeout=40000
-width=1920
-height=1080
-enable-padding=0
-nvbuf-memory-type=0
-```
-
-##
-
-* Edit primary-gie
-```
-[primary-gie]
-enable=1
-gpu-id=0
-gie-unique-id=1
-nvbuf-memory-type=0
-config-file=config_infer_primary.txt
-```
-
-* You can remove [tracker] section, if you don't use it.
-
-##
-
-### Understanding and editing config_infer_primary
-To understand and edit config_infer_primary.txt file, read the [NVIDIA DeepStream Plugin Manual - Gst-nvinfer File Configuration Specifications](https://docs.nvidia.com/metropolis/deepstream/dev-guide/text/DS_plugin_gst-nvinfer.html#gst-nvinfer-file-configuration-specifications)
-
-##
-
-* Edit model-color-format according to the number of channels in yolo.cfg (1=GRAYSCALE, 3=RGB)
-
-```
-# 0=RGB, 1=BGR, 2=GRAYSCALE
-model-color-format=0
-```
-
-##
-
-* Edit model-engine-file (example for batch-size=1 and network-mode=2)
-
-```
-model-engine-file=model_b1_gpu0_fp16.engine
-```
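-
-The name encodes the batch size and precision, so, for example, batch-size=2 with network-mode=0 (FP32) would use
-```
-model-engine-file=model_b2_gpu0_fp32.engine
-```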
-
-##
-
-* Edit batch-size
-
-```
-# Number of sources
-batch-size=1
-```
-
-##
-
-* Edit network-mode
-
-```
-# 0=FP32, 1=INT8, 2=FP16
-network-mode=0
-```
-
-##
-
-* Edit num-detected-classes according to the number of classes in yolo.cfg
-
-```
-num-detected-classes=80
-```
-
-##
-
-* Edit network-type
-
-```
-# 0=Detector, 1=Classifier, 2=Segmentation
-network-type=0
-```
-
-##
-
-* Add/edit interval (FPS increases if > 0)
-
-```
-# Interval of detection
-interval=0
-```
-
-##
-
-* Change pre-cluster-threshold (optional)
-
-```
-[class-attrs-all]
-# CONF_THRESH
-pre-cluster-threshold=0.25
-```
-
-##
-
-### Testing model
-
-To run your custom YOLO model, use the command
-```
-deepstream-app -c deepstream_app_config.txt
-```
diff --git a/native/deepstream_app_config.txt b/deepstream_app_config.txt
similarity index 90%
rename from native/deepstream_app_config.txt
rename to deepstream_app_config.txt
index b811b6e..543f195 100644
--- a/native/deepstream_app_config.txt
+++ b/deepstream_app_config.txt
@@ -14,7 +14,7 @@ nvbuf-memory-type=0
[source0]
enable=1
type=3
-uri=file:///opt/nvidia/deepstream/deepstream-5.1/samples/streams/sample_1080p_h264.mp4
+uri=file:///opt/nvidia/deepstream/deepstream/samples/streams/sample_1080p_h264.mp4
num-sources=1
gpu-id=0
cudadec-memtype=0
diff --git a/examples/multiple_inferences/deepstream_app_config.txt b/examples/multiple_inferences/deepstream_app_config.txt
deleted file mode 100644
index 3740377..0000000
--- a/examples/multiple_inferences/deepstream_app_config.txt
+++ /dev/null
@@ -1,72 +0,0 @@
-[application]
-enable-perf-measurement=1
-perf-measurement-interval-sec=5
-
-[tiled-display]
-enable=1
-rows=1
-columns=1
-width=1280
-height=720
-gpu-id=0
-nvbuf-memory-type=0
-
-[source0]
-enable=1
-type=3
-uri=file:///opt/nvidia/deepstream/deepstream-5.1/samples/streams/sample_1080p_h264.mp4
-num-sources=1
-gpu-id=0
-cudadec-memtype=0
-
-[sink0]
-enable=1
-type=2
-sync=0
-source-id=0
-gpu-id=0
-nvbuf-memory-type=0
-
-[osd]
-enable=1
-gpu-id=0
-border-width=1
-text-size=15
-text-color=1;1;1;1;
-text-bg-color=0.3;0.3;0.3;1
-font=Serif
-show-clock=0
-clock-x-offset=800
-clock-y-offset=820
-clock-text-size=12
-clock-color=1;0;0;0
-nvbuf-memory-type=0
-
-[streammux]
-gpu-id=0
-live-source=0
-batch-size=1
-batched-push-timeout=40000
-width=1920
-height=1080
-enable-padding=0
-nvbuf-memory-type=0
-
-[primary-gie]
-enable=1
-gpu-id=0
-gie-unique-id=1
-nvbuf-memory-type=0
-config-file=pgie/config_infer_primary.txt
-
-[secondary-gie0]
-enable=1
-gpu-id=0
-gie-unique-id=2
-operate-on-gie-id=1
-#operate-on-class-ids=0
-nvbuf-memory-type=0
-config-file=sgie1/config_infer_secondary1.txt
-
-[tests]
-file-loop=0
diff --git a/examples/multiple_inferences/pgie/config_infer_primary.txt b/examples/multiple_inferences/pgie/config_infer_primary.txt
deleted file mode 100644
index e59d5c9..0000000
--- a/examples/multiple_inferences/pgie/config_infer_primary.txt
+++ /dev/null
@@ -1,24 +0,0 @@
-[property]
-gpu-id=0
-net-scale-factor=0.0039215697906911373
-model-color-format=0
-custom-network-config=pgie/yolo.cfg
-model-file=yolo.weights
-model-engine-file=model_b1_gpu0_fp32.engine
-#int8-calib-file=calib.table
-labelfile-path=labels.txt
-batch-size=1
-network-mode=0
-num-detected-classes=2
-interval=0
-gie-unique-id=1
-process-mode=1
-network-type=0
-cluster-mode=4
-maintain-aspect-ratio=0
-parse-bbox-func-name=NvDsInferParseYolo
-custom-lib-path=nvdsinfer_custom_impl_Yolo/libnvdsinfer_custom_impl_Yolo.so
-engine-create-func-name=NvDsInferYoloCudaEngineGet
-
-[class-attrs-all]
-pre-cluster-threshold=0.25
diff --git a/examples/multiple_inferences/pgie/nvdsinfer_custom_impl_Yolo/Makefile b/examples/multiple_inferences/pgie/nvdsinfer_custom_impl_Yolo/Makefile
deleted file mode 100644
index f2474bc..0000000
--- a/examples/multiple_inferences/pgie/nvdsinfer_custom_impl_Yolo/Makefile
+++ /dev/null
@@ -1,88 +0,0 @@
-################################################################################
-# Copyright (c) 2019, NVIDIA CORPORATION. All rights reserved.
-#
-# Permission is hereby granted, free of charge, to any person obtaining a
-# copy of this software and associated documentation files (the "Software"),
-# to deal in the Software without restriction, including without limitation
-# the rights to use, copy, modify, merge, publish, distribute, sublicense,
-# and/or sell copies of the Software, and to permit persons to whom the
-# Software is furnished to do so, subject to the following conditions:
-#
-# The above copyright notice and this permission notice shall be included in
-# all copies or substantial portions of the Software.
-#
-# THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR
-# IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY,
-# FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL
-# THE AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER
-# LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING
-# FROM, OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER
-# DEALINGS IN THE SOFTWARE.
-#
-# Edited by Marcos Luciano
-# https://www.github.com/marcoslucianops
-################################################################################
-
-CUDA_VER?=
-ifeq ($(CUDA_VER),)
- $(error "CUDA_VER is not set")
-endif
-
-OPENCV?=
-ifeq ($(OPENCV),)
- OPENCV=0
-endif
-
-CC:= g++
-NVCC:=/usr/local/cuda-$(CUDA_VER)/bin/nvcc
-
-CFLAGS:= -Wall -std=c++11 -shared -fPIC -Wno-error=deprecated-declarations
-CFLAGS+= -I/opt/nvidia/deepstream/deepstream-5.1/sources/includes -I/usr/local/cuda-$(CUDA_VER)/include
-
-ifeq ($(OPENCV), 1)
-COMMON= -DOPENCV
-CFLAGS+= $(shell pkg-config --cflags opencv4 2> /dev/null || pkg-config --cflags opencv)
-LIBS+= $(shell pkg-config --libs opencv4 2> /dev/null || pkg-config --libs opencv)
-endif
-
-LIBS+= -lnvinfer_plugin -lnvinfer -lnvparsers -L/usr/local/cuda-$(CUDA_VER)/lib64 -lcudart -lcublas -lstdc++fs
-LFLAGS:= -shared -Wl,--start-group $(LIBS) -Wl,--end-group
-
-INCS:= $(wildcard *.h)
-SRCFILES:= nvdsinfer_yolo_engine.cpp \
- nvdsparsebbox_Yolo.cpp \
- yoloPlugins.cpp \
- layers/convolutional_layer.cpp \
- layers/dropout_layer.cpp \
- layers/shortcut_layer.cpp \
- layers/route_layer.cpp \
- layers/upsample_layer.cpp \
- layers/maxpool_layer.cpp \
- layers/activation_layer.cpp \
- utils.cpp \
- yolo.cpp \
- yoloForward.cu
-
-ifeq ($(OPENCV), 1)
-SRCFILES+= calibrator.cpp
-endif
-
-TARGET_LIB:= libnvdsinfer_custom_impl_Yolo.so
-
-TARGET_OBJS:= $(SRCFILES:.cpp=.o)
-TARGET_OBJS:= $(TARGET_OBJS:.cu=.o)
-
-all: $(TARGET_LIB)
-
-%.o: %.cpp $(INCS) Makefile
- $(CC) -c $(COMMON) -o $@ $(CFLAGS) $<
-
-%.o: %.cu $(INCS) Makefile
- $(NVCC) -c -o $@ --compiler-options '-fPIC' $<
-
-$(TARGET_LIB) : $(TARGET_OBJS)
- $(CC) -o $@ $(TARGET_OBJS) $(LFLAGS)
-
-clean:
- rm -rf $(TARGET_LIB)
- rm -rf $(TARGET_OBJS)
diff --git a/examples/multiple_inferences/pgie/nvdsinfer_custom_impl_Yolo/yoloPlugins.cpp b/examples/multiple_inferences/pgie/nvdsinfer_custom_impl_Yolo/yoloPlugins.cpp
deleted file mode 100644
index 0ae7cbb..0000000
--- a/examples/multiple_inferences/pgie/nvdsinfer_custom_impl_Yolo/yoloPlugins.cpp
+++ /dev/null
@@ -1,209 +0,0 @@
-/*
- * Copyright (c) 2019, NVIDIA CORPORATION. All rights reserved.
- *
- * Permission is hereby granted, free of charge, to any person obtaining a
- * copy of this software and associated documentation files (the "Software"),
- * to deal in the Software without restriction, including without limitation
- * the rights to use, copy, modify, merge, publish, distribute, sublicense,
- * and/or sell copies of the Software, and to permit persons to whom the
- * Software is furnished to do so, subject to the following conditions:
- *
- * The above copyright notice and this permission notice shall be included in
- * all copies or substantial portions of the Software.
- *
- * THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR
- * IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY,
- * FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL
- * THE AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER
- * LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING
- * FROM, OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER
- * DEALINGS IN THE SOFTWARE.
-
- * Edited by Marcos Luciano
- * https://www.github.com/marcoslucianops
- */
-
-#include "yoloPlugins.h"
-#include "NvInferPlugin.h"
-#include <cassert>
-#include <iostream>
-#include <vector>
-
-int kNUM_CLASSES;
-float kBETA_NMS;
-std::vector<float> kANCHORS;
-std::vector<std::vector<int>> kMASK;
-
-namespace {
-template <typename T>
-void write(char*& buffer, const T& val)
-{
-    *reinterpret_cast<T*>(buffer) = val;
- buffer += sizeof(T);
-}
-
-template <typename T>
-void read(const char*& buffer, T& val)
-{
-    val = *reinterpret_cast<const T*>(buffer);
- buffer += sizeof(T);
-}
-}
-
-cudaError_t cudaYoloLayer (
- const void* input, void* output, const uint& batchSize,
- const uint& gridSizeX, const uint& gridSizeY, const uint& numOutputClasses,
- const uint& numBBoxes, uint64_t outputSize, cudaStream_t stream, const uint modelCoords, const float modelScale, const uint modelType);
-
-YoloLayer::YoloLayer (const void* data, size_t length)
-{
-    const char *d = static_cast<const char*>(data);
- read(d, m_NumBoxes);
- read(d, m_NumClasses);
- read(d, m_GridSizeX);
- read(d, m_GridSizeY);
- read(d, m_OutputSize);
-
- read(d, m_type);
- read(d, m_new_coords);
- read(d, m_scale_x_y);
- read(d, m_beta_nms);
- uint anchorsSize;
- read(d, anchorsSize);
- for (uint i = 0; i < anchorsSize; i++) {
- float result;
- read(d, result);
- m_Anchors.push_back(result);
- }
- uint maskSize;
- read(d, maskSize);
- for (uint i = 0; i < maskSize; i++) {
- uint nMask;
- read(d, nMask);
-        std::vector<int> pMask;
- for (uint f = 0; f < nMask; f++) {
- int result;
- read(d, result);
- pMask.push_back(result);
- }
- m_Mask.push_back(pMask);
- }
- kNUM_CLASSES = m_NumClasses;
- kBETA_NMS = m_beta_nms;
- kANCHORS = m_Anchors;
- kMASK = m_Mask;
-};
-
-YoloLayer::YoloLayer (
-    const uint& numBoxes, const uint& numClasses, const uint& gridSizeX, const uint& gridSizeY, const uint model_type, const uint new_coords, const float scale_x_y, const float beta_nms, const std::vector<float> anchors, std::vector<std::vector<int>> mask) :
- m_NumBoxes(numBoxes),
- m_NumClasses(numClasses),
- m_GridSizeX(gridSizeX),
- m_GridSizeY(gridSizeY),
- m_type(model_type),
- m_new_coords(new_coords),
- m_scale_x_y(scale_x_y),
- m_beta_nms(beta_nms),
- m_Anchors(anchors),
- m_Mask(mask)
-{
- assert(m_NumBoxes > 0);
- assert(m_NumClasses > 0);
- assert(m_GridSizeX > 0);
- assert(m_GridSizeY > 0);
- m_OutputSize = m_GridSizeX * m_GridSizeY * (m_NumBoxes * (4 + 1 + m_NumClasses));
-};
-
-nvinfer1::Dims
-YoloLayer::getOutputDimensions(
- int index, const nvinfer1::Dims* inputs, int nbInputDims)
-{
- assert(index == 0);
- assert(nbInputDims == 1);
- return inputs[0];
-}
-
-bool YoloLayer::supportsFormat (
- nvinfer1::DataType type, nvinfer1::PluginFormat format) const {
- return (type == nvinfer1::DataType::kFLOAT &&
- format == nvinfer1::PluginFormat::kNCHW);
-}
-
-void
-YoloLayer::configureWithFormat (
- const nvinfer1::Dims* inputDims, int nbInputs,
- const nvinfer1::Dims* outputDims, int nbOutputs,
- nvinfer1::DataType type, nvinfer1::PluginFormat format, int maxBatchSize)
-{
- assert(nbInputs == 1);
- assert (format == nvinfer1::PluginFormat::kNCHW);
- assert(inputDims != nullptr);
-}
-
-int YoloLayer::enqueue(
- int batchSize, const void* const* inputs, void** outputs, void* workspace,
- cudaStream_t stream)
-{
- CHECK(cudaYoloLayer(
- inputs[0], outputs[0], batchSize, m_GridSizeX, m_GridSizeY, m_NumClasses, m_NumBoxes,
- m_OutputSize, stream, m_new_coords, m_scale_x_y, m_type));
- return 0;
-}
-
-size_t YoloLayer::getSerializationSize() const
-{
- int anchorsSum = 1;
- for (uint i = 0; i < m_Anchors.size(); i++) {
- anchorsSum += 1;
- }
- int maskSum = 1;
- for (uint i = 0; i < m_Mask.size(); i++) {
- maskSum += 1;
- for (uint f = 0; f < m_Mask[i].size(); f++) {
- maskSum += 1;
- }
- }
-
- return sizeof(m_NumBoxes) + sizeof(m_NumClasses) + sizeof(m_GridSizeX) + sizeof(m_GridSizeY) + sizeof(m_OutputSize) + sizeof(m_type)
- + sizeof(m_new_coords) + sizeof(m_scale_x_y) + sizeof(m_beta_nms) + anchorsSum * sizeof(float) + maskSum * sizeof(int);
-}
-
-void YoloLayer::serialize(void* buffer) const
-{
-    char *d = static_cast<char*>(buffer);
- write(d, m_NumBoxes);
- write(d, m_NumClasses);
- write(d, m_GridSizeX);
- write(d, m_GridSizeY);
- write(d, m_OutputSize);
-
- write(d, m_type);
- write(d, m_new_coords);
- write(d, m_scale_x_y);
- write(d, m_beta_nms);
- uint anchorsSize = m_Anchors.size();
- write(d, anchorsSize);
- for (uint i = 0; i < anchorsSize; i++) {
- write(d, m_Anchors[i]);
- }
- uint maskSize = m_Mask.size();
- write(d, maskSize);
- for (uint i = 0; i < maskSize; i++) {
- uint pMaskSize = m_Mask[i].size();
- write(d, pMaskSize);
- for (uint f = 0; f < pMaskSize; f++) {
- write(d, m_Mask[i][f]);
- }
- }
- kNUM_CLASSES = m_NumClasses;
- kBETA_NMS = m_beta_nms;
- kANCHORS = m_Anchors;
- kMASK = m_Mask;
-}
-
-nvinfer1::IPluginV2* YoloLayer::clone() const
-{
- return new YoloLayer (m_NumBoxes, m_NumClasses, m_GridSizeX, m_GridSizeY, m_type, m_new_coords, m_scale_x_y, m_beta_nms, m_Anchors, m_Mask);
-}
-
-REGISTER_TENSORRT_PLUGIN(YoloLayerPluginCreator);
\ No newline at end of file
diff --git a/examples/multiple_inferences/pgie/nvdsinfer_custom_impl_Yolo/yoloPlugins.h b/examples/multiple_inferences/pgie/nvdsinfer_custom_impl_Yolo/yoloPlugins.h
deleted file mode 100644
index 177ca10..0000000
--- a/examples/multiple_inferences/pgie/nvdsinfer_custom_impl_Yolo/yoloPlugins.h
+++ /dev/null
@@ -1,156 +0,0 @@
-/*
- * Copyright (c) 2019, NVIDIA CORPORATION. All rights reserved.
- *
- * Permission is hereby granted, free of charge, to any person obtaining a
- * copy of this software and associated documentation files (the "Software"),
- * to deal in the Software without restriction, including without limitation
- * the rights to use, copy, modify, merge, publish, distribute, sublicense,
- * and/or sell copies of the Software, and to permit persons to whom the
- * Software is furnished to do so, subject to the following conditions:
- *
- * The above copyright notice and this permission notice shall be included in
- * all copies or substantial portions of the Software.
- *
- * THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR
- * IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY,
- * FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL
- * THE AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER
- * LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING
- * FROM, OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER
- * DEALINGS IN THE SOFTWARE.
-
- * Edited by Marcos Luciano
- * https://www.github.com/marcoslucianops
- */
-
-#ifndef __YOLO_PLUGINS__
-#define __YOLO_PLUGINS__
-
-#include <cstdlib>
-#include <iostream>
-#include <string>
-#include <vector>
-#include <sys/types.h>
-
-#include <cuda_runtime_api.h>
-
-#include "NvInferPlugin.h"
-
-#define CHECK(status) \
- { \
- if (status != 0) \
- { \
- std::cout << "CUDA failure: " << cudaGetErrorString(status) << " in file " << __FILE__ \
- << " at line " << __LINE__ << std::endl; \
- abort(); \
- } \
- }
-
-namespace
-{
-const char* YOLOLAYER_PLUGIN_VERSION {"1"};
-const char* YOLOLAYER_PLUGIN_NAME {"YoloLayer_TRT"};
-} // namespace
-
-class YoloLayer : public nvinfer1::IPluginV2
-{
-public:
- YoloLayer (const void* data, size_t length);
- YoloLayer (const uint& numBoxes, const uint& numClasses, const uint& gridSizeX, const uint& gridSizeY,
- const uint model_type, const uint new_coords, const float scale_x_y, const float beta_nms,
-        const std::vector<float> anchors, const std::vector<std::vector<int>> mask);
- const char* getPluginType () const override { return YOLOLAYER_PLUGIN_NAME; }
- const char* getPluginVersion () const override { return YOLOLAYER_PLUGIN_VERSION; }
- int getNbOutputs () const override { return 1; }
-
- nvinfer1::Dims getOutputDimensions (
- int index, const nvinfer1::Dims* inputs,
- int nbInputDims) override;
-
- bool supportsFormat (
- nvinfer1::DataType type, nvinfer1::PluginFormat format) const override;
-
- void configureWithFormat (
- const nvinfer1::Dims* inputDims, int nbInputs,
- const nvinfer1::Dims* outputDims, int nbOutputs,
- nvinfer1::DataType type, nvinfer1::PluginFormat format, int maxBatchSize) override;
-
- int initialize () override { return 0; }
- void terminate () override {}
- size_t getWorkspaceSize (int maxBatchSize) const override { return 0; }
- int enqueue (
- int batchSize, const void* const* inputs, void** outputs,
- void* workspace, cudaStream_t stream) override;
- size_t getSerializationSize() const override;
- void serialize (void* buffer) const override;
- void destroy () override { delete this; }
- nvinfer1::IPluginV2* clone() const override;
-
- void setPluginNamespace (const char* pluginNamespace)override {
- m_Namespace = pluginNamespace;
- }
- virtual const char* getPluginNamespace () const override {
- return m_Namespace.c_str();
- }
-
-private:
- uint m_NumBoxes {0};
- uint m_NumClasses {0};
- uint m_GridSizeX {0};
- uint m_GridSizeY {0};
- uint64_t m_OutputSize {0};
- std::string m_Namespace {""};
-
- uint m_type {0};
- uint m_new_coords {0};
- float m_scale_x_y {0};
- float m_beta_nms {0};
-    std::vector<float> m_Anchors;
-    std::vector<std::vector<int>> m_Mask;
-};
-
-class YoloLayerPluginCreator : public nvinfer1::IPluginCreator
-{
-public:
- YoloLayerPluginCreator () {}
- ~YoloLayerPluginCreator () {}
-
- const char* getPluginName () const override { return YOLOLAYER_PLUGIN_NAME; }
- const char* getPluginVersion () const override { return YOLOLAYER_PLUGIN_VERSION; }
-
- const nvinfer1::PluginFieldCollection* getFieldNames() override {
- std::cerr<< "YoloLayerPluginCreator::getFieldNames is not implemented" << std::endl;
- return nullptr;
- }
-
- nvinfer1::IPluginV2* createPlugin (
- const char* name, const nvinfer1::PluginFieldCollection* fc) override
- {
- std::cerr<< "YoloLayerPluginCreator::getFieldNames is not implemented";
- return nullptr;
- }
-
- nvinfer1::IPluginV2* deserializePlugin (
- const char* name, const void* serialData, size_t serialLength) override
- {
- std::cout << "Deserialize yoloLayer plugin: " << name << std::endl;
- return new YoloLayer(serialData, serialLength);
- }
-
- void setPluginNamespace(const char* libNamespace) override {
- m_Namespace = libNamespace;
- }
- const char* getPluginNamespace() const override {
- return m_Namespace.c_str();
- }
-
-private:
- std::string m_Namespace {""};
-};
-
-extern int kNUM_CLASSES;
-extern float kBETA_NMS;
-extern std::vector<float> kANCHORS;
-extern std::vector<std::vector<int>> kMASK;
-
-#endif // __YOLO_PLUGINS__
diff --git a/examples/multiple_inferences/sgie1/config_infer_secondary1.txt b/examples/multiple_inferences/sgie1/config_infer_secondary1.txt
deleted file mode 100644
index 076c937..0000000
--- a/examples/multiple_inferences/sgie1/config_infer_secondary1.txt
+++ /dev/null
@@ -1,24 +0,0 @@
-[property]
-gpu-id=0
-net-scale-factor=0.0039215697906911373
-model-color-format=0
-custom-network-config=sgie1/yolo.cfg
-model-file=yolo.weights
-model-engine-file=model_b16_gpu0_fp32.engine
-#int8-calib-file=calib.table
-labelfile-path=labels.txt
-batch-size=16
-network-mode=0
-num-detected-classes=10
-interval=0
-gie-unique-id=2
-process-mode=2
-network-type=0
-cluster-mode=4
-maintain-aspect-ratio=0
-parse-bbox-func-name=NvDsInferParseYolo
-custom-lib-path=nvdsinfer_custom_impl_Yolo/libnvdsinfer_custom_impl_Yolo.so
-engine-create-func-name=NvDsInferYoloCudaEngineGet
-
-[class-attrs-all]
-pre-cluster-threshold=0.25
diff --git a/examples/multiple_inferences/sgie1/nvdsinfer_custom_impl_Yolo/Makefile b/examples/multiple_inferences/sgie1/nvdsinfer_custom_impl_Yolo/Makefile
deleted file mode 100644
index f2474bc..0000000
--- a/examples/multiple_inferences/sgie1/nvdsinfer_custom_impl_Yolo/Makefile
+++ /dev/null
@@ -1,88 +0,0 @@
-################################################################################
-# Copyright (c) 2019, NVIDIA CORPORATION. All rights reserved.
-#
-# Permission is hereby granted, free of charge, to any person obtaining a
-# copy of this software and associated documentation files (the "Software"),
-# to deal in the Software without restriction, including without limitation
-# the rights to use, copy, modify, merge, publish, distribute, sublicense,
-# and/or sell copies of the Software, and to permit persons to whom the
-# Software is furnished to do so, subject to the following conditions:
-#
-# The above copyright notice and this permission notice shall be included in
-# all copies or substantial portions of the Software.
-#
-# THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR
-# IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY,
-# FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL
-# THE AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER
-# LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING
-# FROM, OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER
-# DEALINGS IN THE SOFTWARE.
-#
-# Edited by Marcos Luciano
-# https://www.github.com/marcoslucianops
-################################################################################
-
-CUDA_VER?=
-ifeq ($(CUDA_VER),)
- $(error "CUDA_VER is not set")
-endif
-
-OPENCV?=
-ifeq ($(OPENCV),)
- OPENCV=0
-endif
-
-CC:= g++
-NVCC:=/usr/local/cuda-$(CUDA_VER)/bin/nvcc
-
-CFLAGS:= -Wall -std=c++11 -shared -fPIC -Wno-error=deprecated-declarations
-CFLAGS+= -I/opt/nvidia/deepstream/deepstream-5.1/sources/includes -I/usr/local/cuda-$(CUDA_VER)/include
-
-ifeq ($(OPENCV), 1)
-COMMON= -DOPENCV
-CFLAGS+= $(shell pkg-config --cflags opencv4 2> /dev/null || pkg-config --cflags opencv)
-LIBS+= $(shell pkg-config --libs opencv4 2> /dev/null || pkg-config --libs opencv)
-endif
-
-LIBS+= -lnvinfer_plugin -lnvinfer -lnvparsers -L/usr/local/cuda-$(CUDA_VER)/lib64 -lcudart -lcublas -lstdc++fs
-LFLAGS:= -shared -Wl,--start-group $(LIBS) -Wl,--end-group
-
-INCS:= $(wildcard *.h)
-SRCFILES:= nvdsinfer_yolo_engine.cpp \
- nvdsparsebbox_Yolo.cpp \
- yoloPlugins.cpp \
- layers/convolutional_layer.cpp \
- layers/dropout_layer.cpp \
- layers/shortcut_layer.cpp \
- layers/route_layer.cpp \
- layers/upsample_layer.cpp \
- layers/maxpool_layer.cpp \
- layers/activation_layer.cpp \
- utils.cpp \
- yolo.cpp \
- yoloForward.cu
-
-ifeq ($(OPENCV), 1)
-SRCFILES+= calibrator.cpp
-endif
-
-TARGET_LIB:= libnvdsinfer_custom_impl_Yolo.so
-
-TARGET_OBJS:= $(SRCFILES:.cpp=.o)
-TARGET_OBJS:= $(TARGET_OBJS:.cu=.o)
-
-all: $(TARGET_LIB)
-
-%.o: %.cpp $(INCS) Makefile
- $(CC) -c $(COMMON) -o $@ $(CFLAGS) $<
-
-%.o: %.cu $(INCS) Makefile
- $(NVCC) -c -o $@ --compiler-options '-fPIC' $<
-
-$(TARGET_LIB) : $(TARGET_OBJS)
- $(CC) -o $@ $(TARGET_OBJS) $(LFLAGS)
-
-clean:
- rm -rf $(TARGET_LIB)
- rm -rf $(TARGET_OBJS)
diff --git a/examples/multiple_inferences/sgie1/nvdsinfer_custom_impl_Yolo/calibrator.cpp b/examples/multiple_inferences/sgie1/nvdsinfer_custom_impl_Yolo/calibrator.cpp
deleted file mode 100644
index b335cb8..0000000
--- a/examples/multiple_inferences/sgie1/nvdsinfer_custom_impl_Yolo/calibrator.cpp
+++ /dev/null
@@ -1,137 +0,0 @@
-/*
- * Created by Marcos Luciano
- * https://www.github.com/marcoslucianops
- */
-
-#include "calibrator.h"
-#include <cstring>
-#include <fstream>
-#include <iterator>
-
-namespace nvinfer1
-{
- int8EntroyCalibrator::int8EntroyCalibrator(const int &batchsize, const int &channels, const int &height, const int &width, const int &letterbox, const std::string &imgPath,
- const std::string &calibTablePath):batchSize(batchsize), inputC(channels), inputH(height), inputW(width), letterBox(letterbox), calibTablePath(calibTablePath), imageIndex(0)
- {
- inputCount = batchsize * channels * height * width;
- std::fstream f(imgPath);
- if (f.is_open())
- {
- std::string temp;
- while (std::getline(f, temp)) imgPaths.push_back(temp);
- }
- batchData = new float[inputCount];
- CUDA_CHECK(cudaMalloc(&deviceInput, inputCount * sizeof(float)));
- }
-
- int8EntroyCalibrator::~int8EntroyCalibrator()
- {
- CUDA_CHECK(cudaFree(deviceInput));
- if (batchData)
- delete[] batchData;
- }
-
- bool int8EntroyCalibrator::getBatch(void **bindings, const char **names, int nbBindings)
- {
- if (imageIndex + batchSize > uint(imgPaths.size()))
- return false;
-
- float* ptr = batchData;
- for (size_t j = imageIndex; j < imageIndex + batchSize; ++j)
- {
- cv::Mat img = cv::imread(imgPaths[j], cv::IMREAD_COLOR);
-            std::vector<float> inputData = prepareImage(img, inputC, inputH, inputW, letterBox);
-
- int len = (int)(inputData.size());
- memcpy(ptr, inputData.data(), len * sizeof(float));
-
- ptr += inputData.size();
- std::cout << "Load image: " << imgPaths[j] << std::endl;
- std::cout << "Progress: " << (j + 1)*100. / imgPaths.size() << "%" << std::endl;
- }
- imageIndex += batchSize;
- CUDA_CHECK(cudaMemcpy(deviceInput, batchData, inputCount * sizeof(float), cudaMemcpyHostToDevice));
- bindings[0] = deviceInput;
- return true;
- }
-
- const void* int8EntroyCalibrator::readCalibrationCache(std::size_t &length)
- {
- calibrationCache.clear();
- std::ifstream input(calibTablePath, std::ios::binary);
- input >> std::noskipws;
- if (readCache && input.good())
- {
-            std::copy(std::istream_iterator<char>(input), std::istream_iterator<char>(),
- std::back_inserter(calibrationCache));
- }
- length = calibrationCache.size();
- return length ? calibrationCache.data() : nullptr;
- }
-
- void int8EntroyCalibrator::writeCalibrationCache(const void *cache, std::size_t length)
- {
- std::ofstream output(calibTablePath, std::ios::binary);
-        output.write(reinterpret_cast<const char*>(cache), length);
- }
-}
-
-std::vector<float> prepareImage(cv::Mat& img, int input_c, int input_h, int input_w, int letter_box)
-{
- cv::Mat out;
- int image_w = img.cols;
- int image_h = img.rows;
- if (image_w != input_w || image_h != input_h)
- {
- if (letter_box == 1)
- {
- float ratio_w = (float)image_w / (float)input_w;
- float ratio_h = (float)image_h / (float)input_h;
- if (ratio_w > ratio_h)
- {
- int new_width = input_w * ratio_h;
- int x = (image_w - new_width) / 2;
- cv::Rect roi(abs(x), 0, new_width, image_h);
- out = img(roi);
- }
- else if (ratio_w < ratio_h)
- {
- int new_height = input_h * ratio_w;
- int y = (image_h - new_height) / 2;
- cv::Rect roi(0, abs(y), image_w, new_height);
- out = img(roi);
- }
- else {
- out = img;
- }
- cv::resize(out, out, cv::Size(input_w, input_h), 0, 0, cv::INTER_CUBIC);
- }
- else
- {
- cv::resize(img, out, cv::Size(input_w, input_h), 0, 0, cv::INTER_CUBIC);
- }
- cv::cvtColor(out, out, cv::COLOR_BGR2RGB);
- }
- else
- {
- cv::cvtColor(img, out, cv::COLOR_BGR2RGB);
- }
- if (input_c == 3)
- {
- out.convertTo(out, CV_32FC3, 1.0 / 255.0);
- }
- else
- {
- out.convertTo(out, CV_32FC1, 1.0 / 255.0);
- }
-    std::vector<cv::Mat> input_channels(input_c);
- cv::split(out, input_channels);
-    std::vector<float> result(input_h * input_w * input_c);
- auto data = result.data();
- int channelLength = input_h * input_w;
- for (int i = 0; i < input_c; ++i)
- {
- memcpy(data, input_channels[i].data, channelLength * sizeof(float));
- data += channelLength;
- }
- return result;
-}
diff --git a/examples/multiple_inferences/sgie1/nvdsinfer_custom_impl_Yolo/calibrator.h b/examples/multiple_inferences/sgie1/nvdsinfer_custom_impl_Yolo/calibrator.h
deleted file mode 100644
index a78e062..0000000
--- a/examples/multiple_inferences/sgie1/nvdsinfer_custom_impl_Yolo/calibrator.h
+++ /dev/null
@@ -1,62 +0,0 @@
-/*
- * Created by Marcos Luciano
- * https://www.github.com/marcoslucianops
- */
-
-#ifndef CALIBRATOR_H
-#define CALIBRATOR_H
-
-#include "opencv2/opencv.hpp"
-#include "cuda_runtime.h"
-#include "NvInfer.h"
-#include <string>
-#include <vector>
-
-#ifndef CUDA_CHECK
-#define CUDA_CHECK(callstr) \
- { \
- cudaError_t error_code = callstr; \
- if (error_code != cudaSuccess) { \
- std::cerr << "CUDA error " << error_code << " at " << __FILE__ << ":" << __LINE__; \
- assert(0); \
- } \
- }
-#endif
-
-namespace nvinfer1 {
- class int8EntroyCalibrator : public nvinfer1::IInt8EntropyCalibrator2 {
- public:
- int8EntroyCalibrator(const int &batchsize,
- const int &channels,
- const int &height,
- const int &width,
- const int &letterbox,
- const std::string &imgPath,
- const std::string &calibTablePath);
-
- virtual ~int8EntroyCalibrator();
- int getBatchSize() const override { return batchSize; }
- bool getBatch(void *bindings[], const char *names[], int nbBindings) override;
- const void *readCalibrationCache(std::size_t &length) override;
- void writeCalibrationCache(const void *ptr, std::size_t length) override;
-
- private:
- int batchSize;
- int inputC;
- int inputH;
- int inputW;
- int letterBox;
- std::string calibTablePath;
- size_t imageIndex;
- size_t inputCount;
-        std::vector<std::string> imgPaths;
- float *batchData{ nullptr };
- void *deviceInput{ nullptr };
- bool readCache;
-        std::vector<char> calibrationCache;
- };
-}
-
-std::vector<float> prepareImage(cv::Mat& img, int input_c, int input_h, int input_w, int letter_box);
-
-#endif //CALIBRATOR_H
\ No newline at end of file
diff --git a/examples/multiple_inferences/sgie1/nvdsinfer_custom_impl_Yolo/layers/activation_layer.cpp b/examples/multiple_inferences/sgie1/nvdsinfer_custom_impl_Yolo/layers/activation_layer.cpp
deleted file mode 100644
index d730fd2..0000000
--- a/examples/multiple_inferences/sgie1/nvdsinfer_custom_impl_Yolo/layers/activation_layer.cpp
+++ /dev/null
@@ -1,82 +0,0 @@
-/*
- * Created by Marcos Luciano
- * https://www.github.com/marcoslucianops
- */
-
-#include "activation_layer.h"
-
-nvinfer1::ILayer* activationLayer(
- int layerIdx,
- std::string activation,
- nvinfer1::ILayer* output,
- nvinfer1::ITensor* input,
- nvinfer1::INetworkDefinition* network)
-{
- if (activation == "relu")
- {
- nvinfer1::IActivationLayer* relu = network->addActivation(
- *input, nvinfer1::ActivationType::kRELU);
- assert(relu != nullptr);
- std::string reluLayerName = "relu_" + std::to_string(layerIdx);
- relu->setName(reluLayerName.c_str());
- output = relu;
- }
- else if (activation == "sigmoid" || activation == "logistic")
- {
- nvinfer1::IActivationLayer* sigmoid = network->addActivation(
- *input, nvinfer1::ActivationType::kSIGMOID);
- assert(sigmoid != nullptr);
- std::string sigmoidLayerName = "sigmoid_" + std::to_string(layerIdx);
- sigmoid->setName(sigmoidLayerName.c_str());
- output = sigmoid;
- }
- else if (activation == "tanh")
- {
- nvinfer1::IActivationLayer* tanh = network->addActivation(
- *input, nvinfer1::ActivationType::kTANH);
- assert(tanh != nullptr);
- std::string tanhLayerName = "tanh_" + std::to_string(layerIdx);
- tanh->setName(tanhLayerName.c_str());
- output = tanh;
- }
- else if (activation == "leaky")
- {
- nvinfer1::IActivationLayer* leaky = network->addActivation(
- *input, nvinfer1::ActivationType::kLEAKY_RELU);
- leaky->setAlpha(0.1);
- assert(leaky != nullptr);
- std::string leakyLayerName = "leaky_" + std::to_string(layerIdx);
- leaky->setName(leakyLayerName.c_str());
- output = leaky;
- }
- else if (activation == "softplus")
- {
- nvinfer1::IActivationLayer* softplus = network->addActivation(
- *input, nvinfer1::ActivationType::kSOFTPLUS);
- assert(softplus != nullptr);
- std::string softplusLayerName = "softplus_" + std::to_string(layerIdx);
- softplus->setName(softplusLayerName.c_str());
- output = softplus;
- }
- else if (activation == "mish")
- {
- nvinfer1::IActivationLayer* softplus = network->addActivation(
- *input, nvinfer1::ActivationType::kSOFTPLUS);
- assert(softplus != nullptr);
- std::string softplusLayerName = "softplus_" + std::to_string(layerIdx);
- softplus->setName(softplusLayerName.c_str());
- nvinfer1::IActivationLayer* tanh = network->addActivation(
- *softplus->getOutput(0), nvinfer1::ActivationType::kTANH);
- assert(tanh != nullptr);
- std::string tanhLayerName = "tanh_" + std::to_string(layerIdx);
- tanh->setName(tanhLayerName.c_str());
- nvinfer1::IElementWiseLayer* mish = network->addElementWise(
- *tanh->getOutput(0), *input,
- nvinfer1::ElementWiseOperation::kPROD);
- assert(mish != nullptr);
- std::string mishLayerName = "mish_" + std::to_string(layerIdx);
- mish->setName(mishLayerName.c_str());
- output = mish;
- }
- return output;
-}
\ No newline at end of file
diff --git a/examples/multiple_inferences/sgie1/nvdsinfer_custom_impl_Yolo/layers/activation_layer.h b/examples/multiple_inferences/sgie1/nvdsinfer_custom_impl_Yolo/layers/activation_layer.h
deleted file mode 100644
index e6081e6..0000000
--- a/examples/multiple_inferences/sgie1/nvdsinfer_custom_impl_Yolo/layers/activation_layer.h
+++ /dev/null
@@ -1,23 +0,0 @@
-/*
- * Created by Marcos Luciano
- * https://www.github.com/marcoslucianops
- */
-
-#ifndef __ACTIVATION_LAYER_H__
-#define __ACTIVATION_LAYER_H__
-
-#include <cassert>
-#include <string>
-
-#include "NvInfer.h"
-
-#include "activation_layer.h"
-
-nvinfer1::ILayer* activationLayer(
- int layerIdx,
- std::string activation,
- nvinfer1::ILayer* output,
- nvinfer1::ITensor* input,
- nvinfer1::INetworkDefinition* network);
-
-#endif
diff --git a/examples/multiple_inferences/sgie1/nvdsinfer_custom_impl_Yolo/layers/convolutional_layer.cpp b/examples/multiple_inferences/sgie1/nvdsinfer_custom_impl_Yolo/layers/convolutional_layer.cpp
deleted file mode 100644
index abb0d32..0000000
--- a/examples/multiple_inferences/sgie1/nvdsinfer_custom_impl_Yolo/layers/convolutional_layer.cpp
+++ /dev/null
@@ -1,168 +0,0 @@
-/*
- * Created by Marcos Luciano
- * https://www.github.com/marcoslucianops
- */
-
-#include <cmath>
-#include "convolutional_layer.h"
-
-nvinfer1::ILayer* convolutionalLayer(
- int layerIdx,
-    std::map<std::string, std::string>& block,
-    std::vector<float>& weights,
-    std::vector<nvinfer1::Weights>& trtWeights,
- int& weightPtr,
- int& inputChannels,
- nvinfer1::ITensor* input,
- nvinfer1::INetworkDefinition* network)
-{
- assert(block.at("type") == "convolutional");
- assert(block.find("filters") != block.end());
- assert(block.find("pad") != block.end());
- assert(block.find("size") != block.end());
- assert(block.find("stride") != block.end());
-
- int filters = std::stoi(block.at("filters"));
- int padding = std::stoi(block.at("pad"));
- int kernelSize = std::stoi(block.at("size"));
- int stride = std::stoi(block.at("stride"));
- std::string activation = block.at("activation");
- int bias = filters;
-
- bool batchNormalize = false;
- if (block.find("batch_normalize") != block.end())
- {
- bias = 0;
- batchNormalize = (block.at("batch_normalize") == "1");
- }
-
- int groups = 1;
- if (block.find("groups") != block.end())
- {
- groups = std::stoi(block.at("groups"));
- }
-
- int pad;
- if (padding)
- pad = (kernelSize - 1) / 2;
- else
- pad = 0;
-
- int size = filters * inputChannels * kernelSize * kernelSize / groups;
-    std::vector<float> bnBiases;
-    std::vector<float> bnWeights;
-    std::vector<float> bnRunningMean;
-    std::vector<float> bnRunningVar;
- nvinfer1::Weights convWt{nvinfer1::DataType::kFLOAT, nullptr, size};
- nvinfer1::Weights convBias{nvinfer1::DataType::kFLOAT, nullptr, bias};
-
- if (batchNormalize == false)
- {
- float* val = new float[filters];
- for (int i = 0; i < filters; ++i)
- {
- val[i] = weights[weightPtr];
- weightPtr++;
- }
- convBias.values = val;
- trtWeights.push_back(convBias);
- val = new float[size];
- for (int i = 0; i < size; ++i)
- {
- val[i] = weights[weightPtr];
- weightPtr++;
- }
- convWt.values = val;
- trtWeights.push_back(convWt);
- }
- else
- {
- for (int i = 0; i < filters; ++i)
- {
- bnBiases.push_back(weights[weightPtr]);
- weightPtr++;
- }
-
- for (int i = 0; i < filters; ++i)
- {
- bnWeights.push_back(weights[weightPtr]);
- weightPtr++;
- }
- for (int i = 0; i < filters; ++i)
- {
- bnRunningMean.push_back(weights[weightPtr]);
- weightPtr++;
- }
- for (int i = 0; i < filters; ++i)
- {
- bnRunningVar.push_back(sqrt(weights[weightPtr] + 1.0e-5));
- weightPtr++;
- }
- float* val = new float[size];
- for (int i = 0; i < size; ++i)
- {
- val[i] = weights[weightPtr];
- weightPtr++;
- }
- convWt.values = val;
- trtWeights.push_back(convWt);
- trtWeights.push_back(convBias);
- }
-
- nvinfer1::IConvolutionLayer* conv = network->addConvolution(
- *input, filters, nvinfer1::DimsHW{kernelSize, kernelSize}, convWt, convBias);
- assert(conv != nullptr);
- std::string convLayerName = "conv_" + std::to_string(layerIdx);
- conv->setName(convLayerName.c_str());
- conv->setStride(nvinfer1::DimsHW{stride, stride});
- conv->setPadding(nvinfer1::DimsHW{pad, pad});
-
- if (block.find("groups") != block.end())
- {
- conv->setNbGroups(groups);
- }
-
- nvinfer1::ILayer* output = conv;
-
- if (batchNormalize == true)
- {
- size = filters;
- nvinfer1::Weights shift{nvinfer1::DataType::kFLOAT, nullptr, size};
- nvinfer1::Weights scale{nvinfer1::DataType::kFLOAT, nullptr, size};
- nvinfer1::Weights power{nvinfer1::DataType::kFLOAT, nullptr, size};
- float* shiftWt = new float[size];
- for (int i = 0; i < size; ++i)
- {
- shiftWt[i]
- = bnBiases.at(i) - ((bnRunningMean.at(i) * bnWeights.at(i)) / bnRunningVar.at(i));
- }
- shift.values = shiftWt;
- float* scaleWt = new float[size];
- for (int i = 0; i < size; ++i)
- {
- scaleWt[i] = bnWeights.at(i) / bnRunningVar[i];
- }
- scale.values = scaleWt;
- float* powerWt = new float[size];
- for (int i = 0; i < size; ++i)
- {
- powerWt[i] = 1.0;
- }
- power.values = powerWt;
- trtWeights.push_back(shift);
- trtWeights.push_back(scale);
- trtWeights.push_back(power);
-
- nvinfer1::IScaleLayer* bn = network->addScale(
- *output->getOutput(0), nvinfer1::ScaleMode::kCHANNEL, shift, scale, power);
- assert(bn != nullptr);
- std::string bnLayerName = "batch_norm_" + std::to_string(layerIdx);
- bn->setName(bnLayerName.c_str());
- output = bn;
- }
-
- output = activationLayer(layerIdx, activation, output, output->getOutput(0), network);
- assert(output != nullptr);
-
- return output;
-}
\ No newline at end of file
diff --git a/examples/multiple_inferences/sgie1/nvdsinfer_custom_impl_Yolo/layers/convolutional_layer.h b/examples/multiple_inferences/sgie1/nvdsinfer_custom_impl_Yolo/layers/convolutional_layer.h
deleted file mode 100644
index b114493..0000000
--- a/examples/multiple_inferences/sgie1/nvdsinfer_custom_impl_Yolo/layers/convolutional_layer.h
+++ /dev/null
@@ -1,26 +0,0 @@
-/*
- * Created by Marcos Luciano
- * https://www.github.com/marcoslucianops
- */
-
-#ifndef __CONVOLUTIONAL_LAYER_H__
-#define __CONVOLUTIONAL_LAYER_H__
-
-#include <map>