# DeepStream-Yolo

NVIDIA DeepStream SDK 6.1 / 6.0.1 / 6.0 configuration for YOLO models

### Future updates

* New documentation for multiple models
* DeepStream tutorials
* Native YOLOX support
* Native PP-YOLO support
* Dynamic batch-size

### Improvements on this repository

* Darknet CFG params parser (no need to edit nvdsparsebbox_Yolo.cpp or other files)
* Support for new_coords, beta_nms and scale_x_y params
* Support for new models
* Support for new layers
* Support for new activations
* Support for convolutional groups
* Support for INT8 calibration
* Support for non-square models
* Support for reorg, implicit and channel layers (YOLOR)
* YOLOv5 4.0, 5.0, 6.0 and 6.1 native support
* YOLOR native support
* Model benchmarks (**outdated**)
* **GPU YOLO Decoder (moved from CPU to GPU to get better performance)** [#138](https://github.com/marcoslucianops/DeepStream-Yolo/issues/138)
* **GPU Batched NMS** [#142](https://github.com/marcoslucianops/DeepStream-Yolo/issues/142)

##

### Getting started

* [Requirements](#requirements)
* [Tested models](#tested-models)
* [Benchmarks](#benchmarks)
* [dGPU installation](#dgpu-installation)
* [Basic usage](#basic-usage)
* [YOLOv5 usage](#yolov5-usage)
* [YOLOR usage](#yolor-usage)
* [NMS configuration](#nms-configuration)
* [INT8 calibration](#int8-calibration)
* [Using your custom model](docs/customModels.md)

##

### Requirements

#### DeepStream 6.1 on x86 platform

* [Ubuntu 20.04](https://releases.ubuntu.com/20.04/)
* [CUDA 11.6 Update 1](https://developer.nvidia.com/cuda-11-6-1-download-archive?target_os=Linux&target_arch=x86_64&Distribution=Ubuntu&target_version=20.04&target_type=runfile_local)
* [TensorRT 8.2 GA Update 4 (8.2.5.1)](https://developer.nvidia.com/nvidia-tensorrt-8x-download)
* [NVIDIA Driver 510.47.03](https://www.nvidia.com.br/Download/index.aspx)
* [NVIDIA DeepStream SDK 6.1](https://developer.nvidia.com/deepstream-getting-started)
* [GStreamer 1.16.2](https://gstreamer.freedesktop.org/)
* [DeepStream-Yolo](https://github.com/marcoslucianops/DeepStream-Yolo)

#### DeepStream 6.0.1 / 6.0 on x86 platform

* [Ubuntu 18.04](https://releases.ubuntu.com/18.04.6/)
* [CUDA 11.4 Update 1](https://developer.nvidia.com/cuda-11-4-1-download-archive?target_os=Linux&target_arch=x86_64&Distribution=Ubuntu&target_version=18.04&target_type=runfile_local)
* [TensorRT 8.0 GA (8.0.1)](https://developer.nvidia.com/nvidia-tensorrt-8x-download)
* [NVIDIA Driver >= 470.63.01](https://www.nvidia.com.br/Download/index.aspx)
* [NVIDIA DeepStream SDK 6.0.1 / 6.0](https://developer.nvidia.com/deepstream-sdk-download-tesla-archived)
* [GStreamer 1.14.5](https://gstreamer.freedesktop.org/)
* [DeepStream-Yolo](https://github.com/marcoslucianops/DeepStream-Yolo)

#### DeepStream 6.1 on Jetson platform

* [JetPack 5.0.1 DP](https://developer.nvidia.com/embedded/jetpack)
* [NVIDIA DeepStream SDK 6.1](https://developer.nvidia.com/deepstream-sdk)
* [DeepStream-Yolo](https://github.com/marcoslucianops/DeepStream-Yolo)

#### DeepStream 6.0.1 / 6.0 on Jetson platform

* [JetPack 4.6.1](https://developer.nvidia.com/embedded/jetpack-sdk-461)
* [NVIDIA DeepStream SDK 6.0.1 / 6.0](https://developer.nvidia.com/embedded/deepstream-on-jetson-downloads-archived)
* [DeepStream-Yolo](https://github.com/marcoslucianops/DeepStream-Yolo)

### For YOLOv5 and YOLOR

#### x86 platform

* [PyTorch >= 1.7.0](https://pytorch.org/get-started/locally/)

#### Jetson platform

* [PyTorch >= 1.7.0](https://forums.developer.nvidia.com/t/pytorch-for-jetson-version-1-11-now-available/72048)

##

### Tested models

* [Darknet YOLO](https://github.com/AlexeyAB/darknet)
* [YOLOv5 4.0, 5.0, 6.0 and 6.1](https://github.com/ultralytics/yolov5)
* [YOLOR](https://github.com/WongKinYiu/yolor)
* [MobileNet-YOLO](https://github.com/dog-qiuqiu/MobileNet-Yolo)
* [YOLO-Fastest](https://github.com/dog-qiuqiu/Yolo-Fastest)

##

### Benchmarks

New tests coming soon.

##

### dGPU installation

To install DeepStream on a dGPU (x86 platform) without Docker, a few steps are needed to prepare the computer.

<details><summary>DeepStream 6.1</summary>

#### 1. Disable Secure Boot in BIOS

#### 2. Install dependencies

```
sudo apt-get update
sudo apt-get install gcc make git libtool autoconf autogen pkg-config cmake
sudo apt-get install python3 python3-dev python3-pip
sudo apt-get install dkms
sudo apt-get install libssl1.1 libgstreamer1.0-0 gstreamer1.0-tools gstreamer1.0-plugins-good gstreamer1.0-plugins-bad gstreamer1.0-plugins-ugly gstreamer1.0-libav libgstrtspserver-1.0-0 libjansson4 libyaml-cpp-dev
sudo apt-get install linux-headers-$(uname -r)
```

**NOTE**: Purge all NVIDIA drivers, CUDA, etc. (replace $CUDA_PATH with your CUDA path).

```
sudo nvidia-uninstall
sudo $CUDA_PATH/bin/cuda-uninstaller
sudo apt-get remove --purge '*nvidia*'
sudo apt-get remove --purge '*cuda*'
sudo apt-get remove --purge '*cudnn*'
sudo apt-get remove --purge '*tensorrt*'
sudo apt autoremove --purge && sudo apt autoclean && sudo apt clean
```

#### 3. Install CUDA Keyring

```
wget https://developer.download.nvidia.com/compute/cuda/repos/ubuntu1804/x86_64/cuda-keyring_1.0-1_all.deb
sudo dpkg -i cuda-keyring_1.0-1_all.deb
sudo apt-get update
```

#### 4. Download and install NVIDIA Driver

* TITAN, GeForce RTX / GTX series and RTX / Quadro series

```
wget https://us.download.nvidia.com/XFree86/Linux-x86_64/510.47.03/NVIDIA-Linux-x86_64-510.47.03.run
```

* Data center / Tesla series

```
wget https://us.download.nvidia.com/tesla/510.47.03/NVIDIA-Linux-x86_64-510.47.03.run
```

* Run

```
sudo sh NVIDIA-Linux-x86_64-510.47.03.run --silent --disable-nouveau --dkms --install-libglvnd
```

**NOTE**: This step will disable the nouveau drivers.

* Reboot

```
sudo reboot
```

* Install

```
sudo sh NVIDIA-Linux-x86_64-510.47.03.run --silent --disable-nouveau --dkms --install-libglvnd
```

**NOTE**: If you are using a laptop with NVIDIA Optimus, run:

```
sudo apt-get install nvidia-prime
sudo prime-select nvidia
```

#### 5. Download and install CUDA

```
wget https://developer.download.nvidia.com/compute/cuda/11.6.1/local_installers/cuda_11.6.1_510.47.03_linux.run
sudo sh cuda_11.6.1_510.47.03_linux.run --silent --toolkit
```

* Export environment variables

```
nano ~/.bashrc
```

* Add

```
export PATH=/usr/local/cuda-11.6/bin${PATH:+:${PATH}}
export LD_LIBRARY_PATH=/usr/local/cuda-11.6/lib64${LD_LIBRARY_PATH:+:${LD_LIBRARY_PATH}}
```

* Run

```
source ~/.bashrc
```

#### 6. Download from the [NVIDIA website](https://developer.nvidia.com/nvidia-tensorrt-8x-download) and install TensorRT

TensorRT 8.2 GA Update 4 for Ubuntu 20.04 and CUDA 11.0, 11.1, 11.2, 11.3, 11.4 and 11.5 DEB local repo Package

```
sudo dpkg -i nv-tensorrt-repo-ubuntu2004-cuda11.4-trt8.2.5.1-ga-20220505_1-1_amd64.deb
sudo apt-key add /var/nv-tensorrt-repo-ubuntu2004-cuda11.4-trt8.2.5.1-ga-20220505/82307095.pub
sudo apt-get update
sudo apt install tensorrt
```

#### 7. Download from the [NVIDIA website](https://developer.nvidia.com/deepstream-getting-started) and install the DeepStream SDK

DeepStream 6.1 for Servers and Workstations (.deb)

```
sudo apt-get install ./deepstream-6.1_6.1.0-1_amd64.deb
rm ${HOME}/.cache/gstreamer-1.0/registry.x86_64.bin
sudo ln -snf /usr/local/cuda-11.6 /usr/local/cuda
```

#### 8. Reboot the computer

```
sudo reboot
```

</details>

<details><summary>DeepStream 6.0.1 / 6.0</summary>

#### 1. Disable Secure Boot in BIOS

<details><summary>If you are using a laptop with a newer Intel/AMD processor and the Graphics entry in the Settings->Details->About tab is llvmpipe, please update the kernel.</summary>

```
wget https://kernel.ubuntu.com/~kernel-ppa/mainline/v5.11/amd64/linux-headers-5.11.0-051100_5.11.0-051100.202102142330_all.deb
wget https://kernel.ubuntu.com/~kernel-ppa/mainline/v5.11/amd64/linux-headers-5.11.0-051100-generic_5.11.0-051100.202102142330_amd64.deb
wget https://kernel.ubuntu.com/~kernel-ppa/mainline/v5.11/amd64/linux-image-unsigned-5.11.0-051100-generic_5.11.0-051100.202102142330_amd64.deb
wget https://kernel.ubuntu.com/~kernel-ppa/mainline/v5.11/amd64/linux-modules-5.11.0-051100-generic_5.11.0-051100.202102142330_amd64.deb
sudo dpkg -i *.deb
sudo reboot
```

</details>

#### 2. Install dependencies

```
sudo apt-get update
sudo apt-get install gcc make git libtool autoconf autogen pkg-config cmake
sudo apt-get install python3 python3-dev python3-pip
sudo apt install libssl1.0.0 libgstreamer1.0-0 gstreamer1.0-tools gstreamer1.0-plugins-good gstreamer1.0-plugins-bad gstreamer1.0-plugins-ugly gstreamer1.0-libav libgstrtspserver-1.0-0 libjansson4
sudo apt-get install linux-headers-$(uname -r)
```

**NOTE**: Install DKMS only if you are using the default Ubuntu kernel.

```
sudo apt-get install dkms
```

**NOTE**: Purge all NVIDIA drivers, CUDA, etc. (replace $CUDA_PATH with your CUDA path).

```
sudo nvidia-uninstall
sudo $CUDA_PATH/bin/cuda-uninstaller
sudo apt-get remove --purge '*nvidia*'
sudo apt-get remove --purge '*cuda*'
sudo apt-get remove --purge '*cudnn*'
sudo apt-get remove --purge '*tensorrt*'
sudo apt autoremove --purge && sudo apt autoclean && sudo apt clean
```

#### 3. Install CUDA Keyring

```
wget https://developer.download.nvidia.com/compute/cuda/repos/ubuntu1804/x86_64/cuda-keyring_1.0-1_all.deb
sudo dpkg -i cuda-keyring_1.0-1_all.deb
sudo apt-get update
```

#### 4. Download and install NVIDIA Driver

* TITAN, GeForce RTX / GTX series and RTX / Quadro series

```
wget https://us.download.nvidia.com/XFree86/Linux-x86_64/470.129.06/NVIDIA-Linux-x86_64-470.129.06.run
```

* Data center / Tesla series

```
wget https://us.download.nvidia.com/tesla/470.129.06/NVIDIA-Linux-x86_64-470.129.06.run
```

* Run

```
sudo sh NVIDIA-Linux-x86_64-470.129.06.run --silent --disable-nouveau --dkms --install-libglvnd
```

**NOTE**: This step will disable the nouveau drivers.

* Reboot

```
sudo reboot
```

* Install

```
sudo sh NVIDIA-Linux-x86_64-470.129.06.run --silent --disable-nouveau --dkms --install-libglvnd
```

**NOTE**: If you are using a laptop with NVIDIA Optimus, run:

```
sudo apt-get install nvidia-prime
sudo prime-select nvidia
```

#### 5. Download and install CUDA

```
wget https://developer.download.nvidia.com/compute/cuda/11.4.1/local_installers/cuda_11.4.1_470.57.02_linux.run
sudo sh cuda_11.4.1_470.57.02_linux.run --silent --toolkit
```

* Export environment variables

```
nano ~/.bashrc
```

* Add

```
export PATH=/usr/local/cuda-11.4/bin${PATH:+:${PATH}}
export LD_LIBRARY_PATH=/usr/local/cuda-11.4/lib64${LD_LIBRARY_PATH:+:${LD_LIBRARY_PATH}}
```

* Run

```
source ~/.bashrc
```

#### 6. Download from the [NVIDIA website](https://developer.nvidia.com/nvidia-tensorrt-8x-download) and install TensorRT

TensorRT 8.0.1 GA for Ubuntu 18.04 and CUDA 11.3 DEB local repo package

```
sudo dpkg -i nv-tensorrt-repo-ubuntu1804-cuda11.3-trt8.0.1.6-ga-20210626_1-1_amd64.deb
sudo apt-key add /var/nv-tensorrt-repo-ubuntu1804-cuda11.3-trt8.0.1.6-ga-20210626/7fa2af80.pub
sudo apt-get update
sudo apt-get install libnvinfer8=8.0.1-1+cuda11.3 libnvinfer-plugin8=8.0.1-1+cuda11.3 libnvparsers8=8.0.1-1+cuda11.3 libnvonnxparsers8=8.0.1-1+cuda11.3 libnvinfer-bin=8.0.1-1+cuda11.3 libnvinfer-dev=8.0.1-1+cuda11.3 libnvinfer-plugin-dev=8.0.1-1+cuda11.3 libnvparsers-dev=8.0.1-1+cuda11.3 libnvonnxparsers-dev=8.0.1-1+cuda11.3 libnvinfer-samples=8.0.1-1+cuda11.3 libnvinfer-doc=8.0.1-1+cuda11.3
```

#### 7. Download from the [NVIDIA website](https://developer.nvidia.com/deepstream-sdk-download-tesla-archived) and install the DeepStream SDK

* DeepStream 6.0.1 for Servers and Workstations (.deb)

```
sudo apt-get install ./deepstream-6.0_6.0.1-1_amd64.deb
```

* DeepStream 6.0 for Servers and Workstations (.deb)

```
sudo apt-get install ./deepstream-6.0_6.0.0-1_amd64.deb
```

* Run

```
rm ${HOME}/.cache/gstreamer-1.0/registry.x86_64.bin
sudo ln -snf /usr/local/cuda-11.4 /usr/local/cuda
```

#### 8. Reboot the computer

```
sudo reboot
```

</details>

##

### Basic usage

#### 1. Download the repo

```
git clone https://github.com/marcoslucianops/DeepStream-Yolo.git
cd DeepStream-Yolo
```

#### 2. Download the cfg and weights files for your model (e.g. for YOLOv4, the yolov4.cfg and yolov4.weights files from the [Darknet](https://github.com/AlexeyAB/darknet) repo) and move them to the DeepStream-Yolo folder

#### 3. Compile the lib

* DeepStream 6.1 on x86 platform

```
CUDA_VER=11.6 make -C nvdsinfer_custom_impl_Yolo
```

* DeepStream 6.0.1 / 6.0 on x86 platform

```
CUDA_VER=11.4 make -C nvdsinfer_custom_impl_Yolo
```

* DeepStream 6.1 on Jetson platform

```
CUDA_VER=11.4 make -C nvdsinfer_custom_impl_Yolo
```

* DeepStream 6.0.1 / 6.0 on Jetson platform

```
CUDA_VER=10.2 make -C nvdsinfer_custom_impl_Yolo
```

#### 4. Edit config_infer_primary.txt for your model (example for YOLOv4)

```
[property]
...
# 0=RGB, 1=BGR, 2=GRAYSCALE
model-color-format=0
# YOLO cfg
custom-network-config=yolov4.cfg
# YOLO weights
model-file=yolov4.weights
# Generated TensorRT model (will be created if it doesn't exist)
model-engine-file=model_b1_gpu0_fp32.engine
# Model labels file
labelfile-path=labels.txt
# Batch size
batch-size=1
# 0=FP32, 1=INT8, 2=FP16 mode
network-mode=0
# Number of classes in label file
num-detected-classes=80
...
```

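
Before running, it can help to check that the edited config file actually defines the keys shown above. The snippet below is a hypothetical helper (not part of DeepStream or this repo) that parses the INI-like `key=value` format and reports any missing keys:

```python
# Keys from the config_infer example above that the model needs.
REQUIRED_KEYS = {
    "custom-network-config", "model-file", "model-engine-file",
    "labelfile-path", "batch-size", "network-mode", "num-detected-classes",
}

def missing_keys(path):
    # config_infer files are INI-like: 'key=value' lines under [section]
    # headers, with '#' starting a comment line.
    present = set()
    with open(path) as f:
        for line in f:
            line = line.strip()
            if line and not line.startswith(("#", "[")) and "=" in line:
                present.add(line.split("=", 1)[0].strip())
    return sorted(REQUIRED_KEYS - present)
```
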
#### 5. Run

```
deepstream-app -c deepstream_app_config.txt
```

**NOTE**: If you want to use YOLOv2 or YOLOv2-Tiny models, change the deepstream_app_config.txt file before running it

```
...
[primary-gie]
enable=1
gpu-id=0
gie-unique-id=1
nvbuf-memory-type=0
config-file=config_infer_primary_yoloV2.txt
...
```

##

### YOLOv5 usage

**NOTE**: Make sure to change the YOLOv5 repo version to your model version before conversion.

#### 1. Copy gen_wts_yoloV5.py from DeepStream-Yolo/utils to the [ultralytics/yolov5](https://github.com/ultralytics/yolov5) folder

#### 2. Open the ultralytics/yolov5 folder

#### 3. Download the pt file from the [ultralytics/yolov5](https://github.com/ultralytics/yolov5/releases/) releases page (example for YOLOv5n 6.1)

```
wget https://github.com/ultralytics/yolov5/releases/download/v6.1/yolov5n.pt
```

#### 4. Generate the cfg and wts files (example for YOLOv5n)

```
python3 gen_wts_yoloV5.py -w yolov5n.pt
```

#### 5. Copy the generated cfg and wts files to the DeepStream-Yolo folder

#### 6. Open the DeepStream-Yolo folder

#### 7. Compile the lib

* DeepStream 6.1 on x86 platform

```
CUDA_VER=11.6 make -C nvdsinfer_custom_impl_Yolo
```

* DeepStream 6.0.1 / 6.0 on x86 platform

```
CUDA_VER=11.4 make -C nvdsinfer_custom_impl_Yolo
```

* DeepStream 6.1 on Jetson platform

```
CUDA_VER=11.4 make -C nvdsinfer_custom_impl_Yolo
```

* DeepStream 6.0.1 / 6.0 on Jetson platform

```
CUDA_VER=10.2 make -C nvdsinfer_custom_impl_Yolo
```

#### 8. Edit config_infer_primary_yoloV5.txt for your model (example for YOLOv5n)

```
[property]
...
# 0=RGB, 1=BGR, 2=GRAYSCALE
model-color-format=0
# CFG
custom-network-config=yolov5n.cfg
# WTS
model-file=yolov5n.wts
# Generated TensorRT model (will be created if it doesn't exist)
model-engine-file=model_b1_gpu0_fp32.engine
# Model labels file
labelfile-path=labels.txt
# Batch size
batch-size=1
# 0=FP32, 1=INT8, 2=FP16 mode
network-mode=0
# Number of classes in label file
num-detected-classes=80
...
```

#### 9. Change the deepstream_app_config.txt file

```
...
[primary-gie]
enable=1
gpu-id=0
gie-unique-id=1
nvbuf-memory-type=0
config-file=config_infer_primary_yoloV5.txt
```

#### 10. Run

```
deepstream-app -c deepstream_app_config.txt
```

**NOTE**: For YOLOv5 P6 or custom models, check the gen_wts_yoloV5.py args and use them according to your model.

* Input weights (.pt) file path **(required)**

```
-w or --weights
```

* Input cfg (.yaml) file path

```
-c or --yaml
```

* Model width **(default = 640 / 1280 [P6])**

```
-mw or --width
```

* Model height **(default = 640 / 1280 [P6])**

```
-mh or --height
```

* Model channels **(default = 3)**

```
-mc or --channels
```

* P6 model

```
--p6
```

##

### YOLOR usage

#### 1. Copy gen_wts_yolor.py from DeepStream-Yolo/utils to the [yolor](https://github.com/WongKinYiu/yolor) folder

#### 2. Open the yolor folder

#### 3. Download the pt file from the [yolor](https://github.com/WongKinYiu/yolor) repo

#### 4. Generate the wts file (example for YOLOR-CSP)

```
python3 gen_wts_yolor.py -w yolor_csp.pt -c cfg/yolor_csp.cfg
```

#### 5. Copy the cfg and generated wts files to the DeepStream-Yolo folder

#### 6. Open the DeepStream-Yolo folder

#### 7. Compile the lib

* DeepStream 6.1 on x86 platform

```
CUDA_VER=11.6 make -C nvdsinfer_custom_impl_Yolo
```

* DeepStream 6.0.1 / 6.0 on x86 platform

```
CUDA_VER=11.4 make -C nvdsinfer_custom_impl_Yolo
```

* DeepStream 6.1 on Jetson platform

```
CUDA_VER=11.4 make -C nvdsinfer_custom_impl_Yolo
```

* DeepStream 6.0.1 / 6.0 on Jetson platform

```
CUDA_VER=10.2 make -C nvdsinfer_custom_impl_Yolo
```

#### 8. Edit config_infer_primary_yolor.txt for your model (example for YOLOR-CSP)

```
[property]
...
# 0=RGB, 1=BGR, 2=GRAYSCALE
model-color-format=0
# CFG
custom-network-config=yolor_csp.cfg
# WTS
model-file=yolor_csp.wts
# Generated TensorRT model (will be created if it doesn't exist)
model-engine-file=model_b1_gpu0_fp32.engine
# Model labels file
labelfile-path=labels.txt
# Batch size
batch-size=1
# 0=FP32, 1=INT8, 2=FP16 mode
network-mode=0
# Number of classes in label file
num-detected-classes=80
...
```

#### 9. Change the deepstream_app_config.txt file

```
...
[primary-gie]
enable=1
gpu-id=0
gie-unique-id=1
nvbuf-memory-type=0
config-file=config_infer_primary_yolor.txt
```

#### 10. Run

```
deepstream-app -c deepstream_app_config.txt
```

##

### NMS configuration

To change the `iou-threshold`, `score-threshold` and `topk` values, modify the `config_nms.txt` file and regenerate the model engine file.

**NOTE**: Lower `topk` values will give better performance.

**NOTE**: Make sure to set `cluster-mode=4` and `pre-cluster-threshold=0` in the config_infer file.

```
[property]
iou-threshold=0.45
score-threshold=0.25
topk=300
```
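
To see what these three values control, here is a minimal CPU sketch of score filtering plus greedy NMS in Python. It is illustrative only: `iou` and `nms` are hypothetical helpers, not the plugin's GPU batched implementation.

```python
def iou(a, b):
    # Boxes are (x1, y1, x2, y2); returns intersection over union.
    ix1, iy1 = max(a[0], b[0]), max(a[1], b[1])
    ix2, iy2 = min(a[2], b[2]), min(a[3], b[3])
    inter = max(0.0, ix2 - ix1) * max(0.0, iy2 - iy1)
    if inter == 0.0:
        return 0.0
    area_a = (a[2] - a[0]) * (a[3] - a[1])
    area_b = (b[2] - b[0]) * (b[3] - b[1])
    return inter / (area_a + area_b - inter)

def nms(detections, iou_threshold=0.45, score_threshold=0.25, topk=300):
    # detections: list of (box, score). Drop low scores, then greedily
    # keep the highest-scoring boxes that do not overlap a kept box.
    kept = []
    for box, score in sorted((d for d in detections if d[1] >= score_threshold),
                             key=lambda d: d[1], reverse=True):
        if all(iou(box, k[0]) < iou_threshold for k in kept):
            kept.append((box, score))
        if len(kept) == topk:
            break
    return kept
```

With the defaults above, detections scoring below 0.25 are dropped, a box overlapping a kept box with IoU >= 0.45 is suppressed, and at most 300 detections survive per frame.
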

##

### INT8 calibration

#### 1. Install OpenCV

```
sudo apt-get install libopencv-dev
```

#### 2. Compile/recompile the nvdsinfer_custom_impl_Yolo lib with OpenCV support

* DeepStream 6.1 on x86 platform

```
cd DeepStream-Yolo
CUDA_VER=11.6 OPENCV=1 make -C nvdsinfer_custom_impl_Yolo
```

* DeepStream 6.0.1 / 6.0 on x86 platform

```
cd DeepStream-Yolo
CUDA_VER=11.4 OPENCV=1 make -C nvdsinfer_custom_impl_Yolo
```

* DeepStream 6.1 on Jetson platform

```
cd DeepStream-Yolo
CUDA_VER=11.4 OPENCV=1 make -C nvdsinfer_custom_impl_Yolo
```

* DeepStream 6.0.1 / 6.0 on Jetson platform

```
cd DeepStream-Yolo
CUDA_VER=10.2 OPENCV=1 make -C nvdsinfer_custom_impl_Yolo
```

#### 3. For the COCO dataset, download [val2017](https://drive.google.com/file/d/1gbvfn7mcsGDRZ_luJwtITL-ru2kK99aK/view?usp=sharing), extract it, and move it to the DeepStream-Yolo folder

##### Select 1000 random images from the COCO dataset to run calibration

```
mkdir calibration
```

```
for jpg in $(ls -1 val2017/*.jpg | sort -R | head -1000); do \
    cp ${jpg} calibration/; \
done
```

##### Create the calibration.txt file with all selected images

```
realpath calibration/*jpg > calibration.txt
```

##### Set environment variables

```
export INT8_CALIB_IMG_PATH=calibration.txt
export INT8_CALIB_BATCH_SIZE=1
```

##### Change the config_infer_primary.txt file

* From

```
...
model-engine-file=model_b1_gpu0_fp32.engine
#int8-calib-file=calib.table
...
network-mode=0
...
```

* To

```
...
model-engine-file=model_b1_gpu0_int8.engine
int8-calib-file=calib.table
...
network-mode=1
...
```

##### Run

```
deepstream-app -c deepstream_app_config.txt
```

**NOTE**: NVIDIA recommends at least 500 images to get good accuracy. In this example, 1000 images are used to get better accuracy (more images = more accuracy). Higher INT8_CALIB_BATCH_SIZE values will increase the accuracy and calibration speed. Set it according to your GPU memory. This process can take a long time.

##

### Extract metadata

You can get metadata from DeepStream in Python and C/C++. For C/C++, you need to edit the deepstream-app or deepstream-test code. For Python, you need to install and edit [deepstream_python_apps](https://github.com/NVIDIA-AI-IOT/deepstream_python_apps).

Basically, you need to manipulate NvDsObjectMeta ([Python](https://docs.nvidia.com/metropolis/deepstream/python-api/PYTHON_API/NvDsMeta/NvDsObjectMeta.html)/[C/C++](https://docs.nvidia.com/metropolis/deepstream/sdk-api/struct__NvDsObjectMeta.html)) and NvDsFrameMeta ([Python](https://docs.nvidia.com/metropolis/deepstream/python-api/PYTHON_API/NvDsMeta/NvDsFrameMeta.html)/[C/C++](https://docs.nvidia.com/metropolis/deepstream/sdk-api/struct__NvDsFrameMeta.html)) to get the label, position, etc. of the bboxes.
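
For example, a pad probe in Python might walk these structures as follows. This is only a sketch: it assumes a working deepstream_python_apps install and an existing GStreamer pipeline to attach the probe to, with names following the pyds bindings.

```
import gi
gi.require_version("Gst", "1.0")
from gi.repository import Gst
import pyds

def osd_sink_pad_buffer_probe(pad, info, u_data):
    # Walk the frame metadata attached to the Gst buffer.
    batch_meta = pyds.gst_buffer_get_nvds_batch_meta(hash(info.get_buffer()))
    l_frame = batch_meta.frame_meta_list
    while l_frame is not None:
        frame_meta = pyds.NvDsFrameMeta.cast(l_frame.data)
        l_obj = frame_meta.obj_meta_list
        while l_obj is not None:
            # Label and bbox position of each detection.
            obj_meta = pyds.NvDsObjectMeta.cast(l_obj.data)
            print(obj_meta.obj_label, obj_meta.rect_params.left,
                  obj_meta.rect_params.top, obj_meta.rect_params.width,
                  obj_meta.rect_params.height)
            l_obj = l_obj.next
        l_frame = l_frame.next
    return Gst.PadProbeReturn.OK
```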

##

My projects: https://www.youtube.com/MarcosLucianoTV