# DeepStream-Yolo

NVIDIA DeepStream SDK 6.1 / 6.0.1 / 6.0 configuration for YOLO models

### Future updates

* Model benchmarks
* DeepStream tutorials
* YOLOX support
* PP-YOLO support
* YOLOv6 support
* YOLOv7 support
* Dynamic batch-size

### Improvements in this repository

* Darknet cfg params parser (no need to edit `nvdsparsebbox_Yolo.cpp` or other files)
* Support for `new_coords` and `scale_x_y` params
* Support for new models
* Support for new layers
* Support for new activations
* Support for convolutional groups
* Support for INT8 calibration
* Support for non-square models
* New documentation for multiple models
* **YOLOv5 >= 2.0 support**
* **YOLOR support**
* **GPU YOLO Decoder** [#138](https://github.com/marcoslucianops/DeepStream-Yolo/issues/138)
* **GPU Batched NMS** [#142](https://github.com/marcoslucianops/DeepStream-Yolo/issues/142)
* **New YOLOv5 conversion**

##

### Getting started

* [Requirements](#requirements)
* [Tested models](#tested-models)
* [Benchmarks](#benchmarks)
* [dGPU installation](#dgpu-installation)
* [Basic usage](#basic-usage)
* [NMS configuration](#nms-configuration)
* [INT8 calibration](#int8-calibration)
* [YOLOv5 usage](docs/YOLOv5.md)
* [YOLOR usage](docs/YOLOR.md)
* [Using your custom model](docs/customModels.md)
* [Multiple YOLO GIEs](docs/multipleGIEs.md)

##

### Requirements

#### DeepStream 6.1 on x86 platform

* [Ubuntu 20.04](https://releases.ubuntu.com/20.04/)
* [CUDA 11.6 Update 1](https://developer.nvidia.com/cuda-11-6-1-download-archive?target_os=Linux&target_arch=x86_64&Distribution=Ubuntu&target_version=20.04&target_type=runfile_local)
* [TensorRT 8.2 GA Update 4 (8.2.5.1)](https://developer.nvidia.com/nvidia-tensorrt-8x-download)
* [NVIDIA Driver 510.47.03](https://www.nvidia.com.br/Download/index.aspx)
* [NVIDIA DeepStream SDK 6.1](https://developer.nvidia.com/deepstream-getting-started)
* [GStreamer 1.16.2](https://gstreamer.freedesktop.org/)
* [DeepStream-Yolo](https://github.com/marcoslucianops/DeepStream-Yolo)

#### DeepStream 6.0.1 / 6.0 on x86 platform

* [Ubuntu 18.04](https://releases.ubuntu.com/18.04.6/)
* [CUDA 11.4 Update 1](https://developer.nvidia.com/cuda-11-4-1-download-archive?target_os=Linux&target_arch=x86_64&Distribution=Ubuntu&target_version=18.04&target_type=runfile_local)
* [TensorRT 8.0 GA (8.0.1)](https://developer.nvidia.com/nvidia-tensorrt-8x-download)
* [NVIDIA Driver >= 470.63.01](https://www.nvidia.com.br/Download/index.aspx)
* [NVIDIA DeepStream SDK 6.0.1 / 6.0](https://developer.nvidia.com/deepstream-sdk-download-tesla-archived)
* [GStreamer 1.14.5](https://gstreamer.freedesktop.org/)
* [DeepStream-Yolo](https://github.com/marcoslucianops/DeepStream-Yolo)

#### DeepStream 6.1 on Jetson platform

* [JetPack 5.0.1 DP](https://developer.nvidia.com/embedded/jetpack)
* [NVIDIA DeepStream SDK 6.1](https://developer.nvidia.com/deepstream-sdk)
* [DeepStream-Yolo](https://github.com/marcoslucianops/DeepStream-Yolo)

#### DeepStream 6.0.1 / 6.0 on Jetson platform

* [JetPack 4.6.1](https://developer.nvidia.com/embedded/jetpack-sdk-461)
* [NVIDIA DeepStream SDK 6.0.1 / 6.0](https://developer.nvidia.com/embedded/deepstream-on-jetson-downloads-archived)
* [DeepStream-Yolo](https://github.com/marcoslucianops/DeepStream-Yolo)

### For YOLOv5 and YOLOR

#### x86 platform

* [PyTorch >= 1.7.0](https://pytorch.org/get-started/locally/)

#### Jetson platform

* [PyTorch >= 1.7.0](https://forums.developer.nvidia.com/t/pytorch-for-jetson-version-1-11-now-available/72048)

##

### Tested models

* [Darknet YOLO](https://github.com/AlexeyAB/darknet)
* [YOLOv5 >= 2.0](https://github.com/ultralytics/yolov5)
* [YOLOR](https://github.com/WongKinYiu/yolor)
* [MobileNet-YOLO](https://github.com/dog-qiuqiu/MobileNet-Yolo)
* [YOLO-Fastest](https://github.com/dog-qiuqiu/Yolo-Fastest)

##

### Benchmarks

New tests coming soon.

##

### dGPU installation

To install DeepStream on a dGPU (x86 platform) without Docker, a few steps are needed to prepare the computer.

<details><summary>DeepStream 6.1</summary>

#### 1. Disable Secure Boot in BIOS

#### 2. Install dependencies

```
sudo apt-get update
sudo apt-get install gcc make git libtool autoconf autogen pkg-config cmake
sudo apt-get install python3 python3-dev python3-pip
sudo apt-get install dkms
sudo apt-get install libssl1.1 libgstreamer1.0-0 gstreamer1.0-tools gstreamer1.0-plugins-good gstreamer1.0-plugins-bad gstreamer1.0-plugins-ugly gstreamer1.0-libav libgstrtspserver-1.0-0 libjansson4 libyaml-cpp-dev
sudo apt-get install linux-headers-$(uname -r)
```

**NOTE**: Purge all NVIDIA drivers, CUDA, etc. (replace `$CUDA_PATH` with your CUDA path)

```
sudo nvidia-uninstall
sudo $CUDA_PATH/bin/cuda-uninstaller
sudo apt-get remove --purge '*nvidia*'
sudo apt-get remove --purge '*cuda*'
sudo apt-get remove --purge '*cudnn*'
sudo apt-get remove --purge '*tensorrt*'
sudo apt autoremove --purge && sudo apt autoclean && sudo apt clean
```

#### 3. Install CUDA Keyring

```
wget https://developer.download.nvidia.com/compute/cuda/repos/ubuntu2004/x86_64/cuda-keyring_1.0-1_all.deb
sudo dpkg -i cuda-keyring_1.0-1_all.deb
sudo apt-get update
```

#### 4. Download and install NVIDIA Driver

* TITAN, GeForce RTX / GTX series and RTX / Quadro series

```
wget https://us.download.nvidia.com/XFree86/Linux-x86_64/510.47.03/NVIDIA-Linux-x86_64-510.47.03.run
```

* Data center / Tesla series

```
wget https://us.download.nvidia.com/tesla/510.47.03/NVIDIA-Linux-x86_64-510.47.03.run
```

* Run

```
sudo sh NVIDIA-Linux-x86_64-510.47.03.run --silent --disable-nouveau --dkms --install-libglvnd
```

**NOTE**: This step will disable the nouveau drivers.

* Reboot

```
sudo reboot
```

* Install

```
sudo sh NVIDIA-Linux-x86_64-510.47.03.run --silent --disable-nouveau --dkms --install-libglvnd
```

**NOTE**: If you are using a laptop with NVIDIA Optimus, run

```
sudo apt-get install nvidia-prime
sudo prime-select nvidia
```
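
**NOTE**: Optionally, you can confirm that the new driver is loaded (the reported version should match the one you installed):

```
nvidia-smi
```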

#### 5. Download and install CUDA

```
wget https://developer.download.nvidia.com/compute/cuda/11.6.1/local_installers/cuda_11.6.1_510.47.03_linux.run
sudo sh cuda_11.6.1_510.47.03_linux.run --silent --toolkit
```

* Export environment variables

```
echo $'export PATH=/usr/local/cuda-11.6/bin${PATH:+:${PATH}}\nexport LD_LIBRARY_PATH=/usr/local/cuda-11.6/lib64${LD_LIBRARY_PATH:+:${LD_LIBRARY_PATH}}' >> ~/.bashrc && source ~/.bashrc
```
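
**NOTE**: Optionally, you can check that the CUDA toolkit is on the `PATH` (open a new terminal or re-source `~/.bashrc` first):

```
nvcc --version
```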

#### 6. Download from [NVIDIA website](https://developer.nvidia.com/nvidia-tensorrt-8x-download) and install TensorRT

TensorRT 8.2 GA Update 4 for Ubuntu 20.04 and CUDA 11.0, 11.1, 11.2, 11.3, 11.4 and 11.5 DEB local repo Package

```
sudo dpkg -i nv-tensorrt-repo-ubuntu2004-cuda11.4-trt8.2.5.1-ga-20220505_1-1_amd64.deb
sudo apt-key add /var/nv-tensorrt-repo-ubuntu2004-cuda11.4-trt8.2.5.1-ga-20220505/82307095.pub
sudo apt-get update
sudo apt-get install libnvinfer8=8.2.5-1+cuda11.4 libnvinfer-plugin8=8.2.5-1+cuda11.4 libnvparsers8=8.2.5-1+cuda11.4 libnvonnxparsers8=8.2.5-1+cuda11.4 libnvinfer-bin=8.2.5-1+cuda11.4 libnvinfer-dev=8.2.5-1+cuda11.4 libnvinfer-plugin-dev=8.2.5-1+cuda11.4 libnvparsers-dev=8.2.5-1+cuda11.4 libnvonnxparsers-dev=8.2.5-1+cuda11.4 libnvinfer-samples=8.2.5-1+cuda11.4 libnvinfer-doc=8.2.5-1+cuda11.4 libcudnn8-dev=8.4.0.27-1+cuda11.6 libcudnn8=8.4.0.27-1+cuda11.6
sudo apt-mark hold libnvinfer* libnvparsers* libnvonnxparsers* libcudnn8* tensorrt
```
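
**NOTE**: Optionally, you can list the installed TensorRT packages to confirm the versions:

```
dpkg -l | grep nvinfer
```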

#### 7. Download from [NVIDIA website](https://developer.nvidia.com/deepstream-getting-started) and install the DeepStream SDK

DeepStream 6.1 for Servers and Workstations (.deb)

```
sudo apt-get install ./deepstream-6.1_6.1.0-1_amd64.deb
rm ${HOME}/.cache/gstreamer-1.0/registry.x86_64.bin
sudo ln -snf /usr/local/cuda-11.6 /usr/local/cuda
```
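
**NOTE**: Optionally, you can check the installation before rebooting (the DeepStream package normally links `deepstream-app` into the PATH):

```
deepstream-app --version-all
```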

#### 8. Reboot the computer

```
sudo reboot
```

</details>

<details><summary>DeepStream 6.0.1 / 6.0</summary>

#### 1. Disable Secure Boot in BIOS

<details><summary>If you are using a laptop with a newer Intel/AMD processor and the Graphics entry in Settings -> Details -> About shows llvmpipe, please update the kernel.</summary>

```
wget https://kernel.ubuntu.com/~kernel-ppa/mainline/v5.11/amd64/linux-headers-5.11.0-051100_5.11.0-051100.202102142330_all.deb
wget https://kernel.ubuntu.com/~kernel-ppa/mainline/v5.11/amd64/linux-headers-5.11.0-051100-generic_5.11.0-051100.202102142330_amd64.deb
wget https://kernel.ubuntu.com/~kernel-ppa/mainline/v5.11/amd64/linux-image-unsigned-5.11.0-051100-generic_5.11.0-051100.202102142330_amd64.deb
wget https://kernel.ubuntu.com/~kernel-ppa/mainline/v5.11/amd64/linux-modules-5.11.0-051100-generic_5.11.0-051100.202102142330_amd64.deb
sudo dpkg -i *.deb
sudo reboot
```

</details>

#### 2. Install dependencies

```
sudo apt-get update
sudo apt-get install gcc make git libtool autoconf autogen pkg-config cmake
sudo apt-get install python3 python3-dev python3-pip
sudo apt-get install libssl1.0.0 libgstreamer1.0-0 gstreamer1.0-tools gstreamer1.0-plugins-good gstreamer1.0-plugins-bad gstreamer1.0-plugins-ugly gstreamer1.0-libav libgstrtspserver-1.0-0 libjansson4
sudo apt-get install linux-headers-$(uname -r)
```

**NOTE**: Install DKMS only if you are using the default Ubuntu kernel

```
sudo apt-get install dkms
```

**NOTE**: Purge all NVIDIA drivers, CUDA, etc. (replace `$CUDA_PATH` with your CUDA path)

```
sudo nvidia-uninstall
sudo $CUDA_PATH/bin/cuda-uninstaller
sudo apt-get remove --purge '*nvidia*'
sudo apt-get remove --purge '*cuda*'
sudo apt-get remove --purge '*cudnn*'
sudo apt-get remove --purge '*tensorrt*'
sudo apt autoremove --purge && sudo apt autoclean && sudo apt clean
```

#### 3. Install CUDA Keyring

```
wget https://developer.download.nvidia.com/compute/cuda/repos/ubuntu1804/x86_64/cuda-keyring_1.0-1_all.deb
sudo dpkg -i cuda-keyring_1.0-1_all.deb
sudo apt-get update
```

#### 4. Download and install NVIDIA Driver

* TITAN, GeForce RTX / GTX series and RTX / Quadro series

```
wget https://us.download.nvidia.com/XFree86/Linux-x86_64/470.129.06/NVIDIA-Linux-x86_64-470.129.06.run
```

* Data center / Tesla series

```
wget https://us.download.nvidia.com/tesla/470.129.06/NVIDIA-Linux-x86_64-470.129.06.run
```

* Run

```
sudo sh NVIDIA-Linux-x86_64-470.129.06.run --silent --disable-nouveau --dkms --install-libglvnd
```

**NOTE**: This step will disable the nouveau drivers.

**NOTE**: Remove the `--dkms` flag if you installed the 5.11.0 kernel.

* Reboot

```
sudo reboot
```

* Install

```
sudo sh NVIDIA-Linux-x86_64-470.129.06.run --silent --disable-nouveau --dkms --install-libglvnd
```

**NOTE**: Remove the `--dkms` flag if you installed the 5.11.0 kernel.

**NOTE**: If you are using a laptop with NVIDIA Optimus, run

```
sudo apt-get install nvidia-prime
sudo prime-select nvidia
```

#### 5. Download and install CUDA

```
wget https://developer.download.nvidia.com/compute/cuda/11.4.1/local_installers/cuda_11.4.1_470.57.02_linux.run
sudo sh cuda_11.4.1_470.57.02_linux.run --silent --toolkit
```

* Export environment variables

```
echo $'export PATH=/usr/local/cuda-11.4/bin${PATH:+:${PATH}}\nexport LD_LIBRARY_PATH=/usr/local/cuda-11.4/lib64${LD_LIBRARY_PATH:+:${LD_LIBRARY_PATH}}' >> ~/.bashrc && source ~/.bashrc
```

#### 6. Download from [NVIDIA website](https://developer.nvidia.com/nvidia-tensorrt-8x-download) and install TensorRT

TensorRT 8.0.1 GA for Ubuntu 18.04 and CUDA 11.3 DEB local repo package

```
sudo dpkg -i nv-tensorrt-repo-ubuntu1804-cuda11.3-trt8.0.1.6-ga-20210626_1-1_amd64.deb
sudo apt-key add /var/nv-tensorrt-repo-ubuntu1804-cuda11.3-trt8.0.1.6-ga-20210626/7fa2af80.pub
sudo apt-get update
sudo apt-get install libnvinfer8=8.0.1-1+cuda11.3 libnvinfer-plugin8=8.0.1-1+cuda11.3 libnvparsers8=8.0.1-1+cuda11.3 libnvonnxparsers8=8.0.1-1+cuda11.3 libnvinfer-bin=8.0.1-1+cuda11.3 libnvinfer-dev=8.0.1-1+cuda11.3 libnvinfer-plugin-dev=8.0.1-1+cuda11.3 libnvparsers-dev=8.0.1-1+cuda11.3 libnvonnxparsers-dev=8.0.1-1+cuda11.3 libnvinfer-samples=8.0.1-1+cuda11.3 libnvinfer-doc=8.0.1-1+cuda11.3 libcudnn8-dev=8.2.1.32-1+cuda11.3 libcudnn8=8.2.1.32-1+cuda11.3
sudo apt-mark hold libnvinfer* libnvparsers* libnvonnxparsers* libcudnn8* tensorrt
```

#### 7. Download from [NVIDIA website](https://developer.nvidia.com/deepstream-sdk-download-tesla-archived) and install the DeepStream SDK

* DeepStream 6.0.1 for Servers and Workstations (.deb)

```
sudo apt-get install ./deepstream-6.0_6.0.1-1_amd64.deb
```

* DeepStream 6.0 for Servers and Workstations (.deb)

```
sudo apt-get install ./deepstream-6.0_6.0.0-1_amd64.deb
```

* Run

```
rm ${HOME}/.cache/gstreamer-1.0/registry.x86_64.bin
sudo ln -snf /usr/local/cuda-11.4 /usr/local/cuda
```

#### 8. Reboot the computer

```
sudo reboot
```

</details>

##

### Basic usage

#### 1. Download the repo

```
git clone https://github.com/marcoslucianops/DeepStream-Yolo.git
cd DeepStream-Yolo
```

#### 2. Download the `cfg` and `weights` files from the [Darknet](https://github.com/AlexeyAB/darknet) repo to the DeepStream-Yolo folder
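
For example, for YOLOv4 the files can be fetched directly (the URLs below are one option and may change; check the Darknet repo and its releases page for the current links):

```
wget https://raw.githubusercontent.com/AlexeyAB/darknet/master/cfg/yolov4.cfg
wget https://github.com/AlexeyAB/darknet/releases/download/darknet_yolo_v3_optimal/yolov4.weights
```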

#### 3. Compile the lib

* DeepStream 6.1 on x86 platform

```
CUDA_VER=11.6 make -C nvdsinfer_custom_impl_Yolo
```

* DeepStream 6.0.1 / 6.0 on x86 platform

```
CUDA_VER=11.4 make -C nvdsinfer_custom_impl_Yolo
```

* DeepStream 6.1 on Jetson platform

```
CUDA_VER=11.4 make -C nvdsinfer_custom_impl_Yolo
```

* DeepStream 6.0.1 / 6.0 on Jetson platform

```
CUDA_VER=10.2 make -C nvdsinfer_custom_impl_Yolo
```
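
**NOTE**: Optionally, you can confirm that the build produced the shared lib referenced by the `custom-lib-path` key in the config_infer files:

```
ls nvdsinfer_custom_impl_Yolo/libnvdsinfer_custom_impl_Yolo.so
```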

#### 4. Edit the `config_infer_primary.txt` file according to your model (example for YOLOv4)

```
[property]
...
custom-network-config=yolov4.cfg
model-file=yolov4.weights
...
```

#### 5. Run

```
deepstream-app -c deepstream_app_config.txt
```

**NOTE**: If you want to use YOLOv2 or YOLOv2-Tiny models, change the `deepstream_app_config.txt` file before running it

```
...
[primary-gie]
...
config-file=config_infer_primary_yoloV2.txt
...
```

##

### NMS configuration

To change the `iou-threshold`, `score-threshold` and `topk` values, modify the `config_nms.txt` file and regenerate the model engine file.

```
[property]
iou-threshold=0.45
score-threshold=0.25
topk=300
```

**NOTE**: Lower `topk` values will result in better performance.

**NOTE**: Make sure to set `cluster-mode=4` in the config_infer file.

**NOTE**: You can still change the `pre-cluster-threshold` values in the config_infer files.
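
For reference, a minimal sketch of where those keys live in a config_infer file (the values below are only examples):

```
[property]
...
cluster-mode=4
...

[class-attrs-all]
pre-cluster-threshold=0.25
```
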
##

### INT8 calibration

#### 1. Install OpenCV

```
sudo apt-get install libopencv-dev
```

#### 2. Compile/recompile the `nvdsinfer_custom_impl_Yolo` lib with OpenCV support

* DeepStream 6.1 on x86 platform

```
CUDA_VER=11.6 OPENCV=1 make -C nvdsinfer_custom_impl_Yolo
```

* DeepStream 6.0.1 / 6.0 on x86 platform

```
CUDA_VER=11.4 OPENCV=1 make -C nvdsinfer_custom_impl_Yolo
```

* DeepStream 6.1 on Jetson platform

```
CUDA_VER=11.4 OPENCV=1 make -C nvdsinfer_custom_impl_Yolo
```

* DeepStream 6.0.1 / 6.0 on Jetson platform

```
CUDA_VER=10.2 OPENCV=1 make -C nvdsinfer_custom_impl_Yolo
```

#### 3. For the COCO dataset, download [val2017](https://drive.google.com/file/d/1gbvfn7mcsGDRZ_luJwtITL-ru2kK99aK/view?usp=sharing), extract it, and move it to the DeepStream-Yolo folder

* Select 1000 random images from the COCO dataset to run calibration

```
mkdir calibration
```

```
for jpg in $(ls -1 val2017/*.jpg | sort -R | head -1000); do \
    cp ${jpg} calibration/; \
done
```

* Create the `calibration.txt` file with all selected images

```
realpath calibration/*jpg > calibration.txt
```
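
**NOTE**: Optionally, check that the list contains the expected number of images (1000 in this example):

```
wc -l calibration.txt
```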

* Set environment variables

```
export INT8_CALIB_IMG_PATH=calibration.txt
export INT8_CALIB_BATCH_SIZE=1
```

* Edit the `config_infer` file

```
...
model-engine-file=model_b1_gpu0_fp32.engine
#int8-calib-file=calib.table
...
network-mode=0
...
```

To

```
...
model-engine-file=model_b1_gpu0_int8.engine
int8-calib-file=calib.table
...
network-mode=1
...
```

* Run

```
deepstream-app -c deepstream_app_config.txt
```

**NOTE**: NVIDIA recommends at least 500 images to get good accuracy. In this example, I used 1000 images to get better accuracy (more images = more accuracy). Higher `INT8_CALIB_BATCH_SIZE` values will result in more accuracy and faster calibration speed. Set it according to your GPU memory. This process can take a long time.

##

### Extract metadata

You can get metadata from DeepStream using Python and C/C++. For C/C++, you can edit the `deepstream-app` or `deepstream-test` code. For Python, you can install and edit [deepstream_python_apps](https://github.com/NVIDIA-AI-IOT/deepstream_python_apps).

Basically, you need to manipulate the `NvDsObjectMeta` ([Python](https://docs.nvidia.com/metropolis/deepstream/python-api/PYTHON_API/NvDsMeta/NvDsObjectMeta.html) / [C/C++](https://docs.nvidia.com/metropolis/deepstream/sdk-api/struct__NvDsObjectMeta.html)) and `NvDsFrameMeta` ([Python](https://docs.nvidia.com/metropolis/deepstream/python-api/PYTHON_API/NvDsMeta/NvDsFrameMeta.html) / [C/C++](https://docs.nvidia.com/metropolis/deepstream/sdk-api/struct__NvDsFrameMeta.html)) structures to get the label, position, etc. of the bboxes.
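
As a rough sketch of the Python side (assuming deepstream_python_apps / `pyds` is installed and the probe is attached to a pad downstream of the inference element, e.g. the OSD sink pad, as in the reference apps):

```
import gi
gi.require_version("Gst", "1.0")
from gi.repository import Gst
import pyds

def osd_sink_pad_buffer_probe(pad, info, u_data):
    gst_buffer = info.get_buffer()
    if not gst_buffer:
        return Gst.PadProbeReturn.OK
    # Batch metadata attached by the DeepStream pipeline
    batch_meta = pyds.gst_buffer_get_nvds_batch_meta(hash(gst_buffer))
    l_frame = batch_meta.frame_meta_list
    while l_frame is not None:
        frame_meta = pyds.NvDsFrameMeta.cast(l_frame.data)
        l_obj = frame_meta.obj_meta_list
        while l_obj is not None:
            obj_meta = pyds.NvDsObjectMeta.cast(l_obj.data)
            rect = obj_meta.rect_params
            # Label, confidence and bbox of each detected object
            print(obj_meta.obj_label, obj_meta.confidence,
                  rect.left, rect.top, rect.width, rect.height)
            l_obj = l_obj.next
        l_frame = l_frame.next
    return Gst.PadProbeReturn.OK
```
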
##

My projects: https://www.youtube.com/MarcosLucianoTV