DeepStream 7.0 update

Marcos Luciano
2024-05-12 20:47:10 -03:00
parent 9bda315ee0
commit ef84cc048f
22 changed files with 1004 additions and 802 deletions

README.md

@@ -1,7 +1,9 @@
# DeepStream-Yolo # DeepStream-Yolo
NVIDIA DeepStream SDK 6.3 / 6.2 / 6.1.1 / 6.1 / 6.0.1 / 6.0 / 5.1 configuration for YOLO models NVIDIA DeepStream SDK 7.0 / 6.4 / 6.3 / 6.2 / 6.1.1 / 6.1 / 6.0.1 / 6.0 / 5.1 configuration for YOLO models
--------------------------------------------------------------------------------------------------
For now, I am limited in the updates I can make. Thank you for understanding.
-------------------------------------------------------------------------------------------------- --------------------------------------------------------------------------------------------------
### YOLO-Pose: https://github.com/marcoslucianops/DeepStream-Yolo-Pose ### YOLO-Pose: https://github.com/marcoslucianops/DeepStream-Yolo-Pose
### YOLO-Seg: https://github.com/marcoslucianops/DeepStream-Yolo-Seg ### YOLO-Seg: https://github.com/marcoslucianops/DeepStream-Yolo-Seg
@@ -23,15 +25,10 @@ NVIDIA DeepStream SDK 6.3 / 6.2 / 6.1.1 / 6.1 / 6.0.1 / 6.0 / 5.1 configuration
* Models benchmarks * Models benchmarks
* Support for Darknet models (YOLOv4, etc) using cfg and weights conversion with GPU post-processing * Support for Darknet models (YOLOv4, etc) using cfg and weights conversion with GPU post-processing
* Support for RT-DETR, YOLO-NAS, PPYOLOE+, PPYOLOE, DAMO-YOLO, YOLOX, YOLOR, YOLOv8, YOLOv7, YOLOv6 and YOLOv5 using ONNX conversion with GPU post-processing * Support for RT-DETR, YOLO-NAS, PPYOLOE+, PPYOLOE, DAMO-YOLO, YOLOX, YOLOR, YOLOv8, YOLOv7, YOLOv6 and YOLOv5 using ONNX conversion with GPU post-processing
* GPU bbox parser (it is slightly slower than CPU bbox parser on V100 GPU tests) * GPU bbox parser
* Support for DeepStream 5.1 * Custom ONNX model parser
* Custom ONNX model parser (`NvDsInferYoloCudaEngineGet`) * Dynamic batch-size
* Dynamic batch-size for Darknet and ONNX exported models
* INT8 calibration (PTQ) for Darknet and ONNX exported models * INT8 calibration (PTQ) for Darknet and ONNX exported models
* New output structure (fix wrong output on DeepStream < 6.2) - it need to export the ONNX model with the new export file, generate the TensorRT engine again with the updated files, and use the new config_infer_primary file according to your model
* **RT-DETR PyTorch (https://github.com/lyuwenyu/RT-DETR/tree/main/rtdetr_pytorch)**
* **RT-DETR Paddle (https://github.com/lyuwenyu/RT-DETR/tree/main/rtdetr_paddle)**
* **RT-DETR Ultralytics (https://docs.ultralytics.com/models/rtdetr)**
## ##
@@ -44,6 +41,7 @@ NVIDIA DeepStream SDK 6.3 / 6.2 / 6.1.1 / 6.1 / 6.0.1 / 6.0 / 5.1 configuration
* [Basic usage](#basic-usage) * [Basic usage](#basic-usage)
* [Docker usage](#docker-usage) * [Docker usage](#docker-usage)
* [NMS configuration](#nms-configuration) * [NMS configuration](#nms-configuration)
* [Notes](#notes)
* [INT8 calibration](docs/INT8Calibration.md) * [INT8 calibration](docs/INT8Calibration.md)
* [YOLOv5 usage](docs/YOLOv5.md) * [YOLOv5 usage](docs/YOLOv5.md)
* [YOLOv6 usage](docs/YOLOv6.md) * [YOLOv6 usage](docs/YOLOv6.md)
@@ -64,13 +62,33 @@ NVIDIA DeepStream SDK 6.3 / 6.2 / 6.1.1 / 6.1 / 6.0.1 / 6.0 / 5.1 configuration
### Requirements ### Requirements
#### DeepStream 7.0 on x86 platform
* [Ubuntu 22.04](https://releases.ubuntu.com/22.04/)
* [CUDA 12.2 Update 2](https://developer.nvidia.com/cuda-12-2-2-download-archive?target_os=Linux&target_arch=x86_64&Distribution=Ubuntu&target_version=22.04&target_type=runfile_local)
* [TensorRT 8.6 GA (8.6.1.6)](https://developer.nvidia.com/nvidia-tensorrt-8x-download)
* [NVIDIA Driver 535 (>= 535.161.08)](https://www.nvidia.com/Download/index.aspx)
* [NVIDIA DeepStream SDK 7.0](https://catalog.ngc.nvidia.com/orgs/nvidia/resources/deepstream/files?version=7.0)
* [GStreamer 1.20.3](https://gstreamer.freedesktop.org/)
* [DeepStream-Yolo](https://github.com/marcoslucianops/DeepStream-Yolo)
#### DeepStream 6.4 on x86 platform
* [Ubuntu 22.04](https://releases.ubuntu.com/22.04/)
* [CUDA 12.2 Update 2](https://developer.nvidia.com/cuda-12-2-2-download-archive?target_os=Linux&target_arch=x86_64&Distribution=Ubuntu&target_version=22.04&target_type=runfile_local)
* [TensorRT 8.6 GA (8.6.1.6)](https://developer.nvidia.com/nvidia-tensorrt-8x-download)
* [NVIDIA Driver 535 (>= 535.104.12)](https://www.nvidia.com/Download/index.aspx)
* [NVIDIA DeepStream SDK 6.4](https://catalog.ngc.nvidia.com/orgs/nvidia/resources/deepstream/files?version=6.4)
* [GStreamer 1.20.3](https://gstreamer.freedesktop.org/)
* [DeepStream-Yolo](https://github.com/marcoslucianops/DeepStream-Yolo)
#### DeepStream 6.3 on x86 platform #### DeepStream 6.3 on x86 platform
* [Ubuntu 20.04](https://releases.ubuntu.com/20.04/) * [Ubuntu 20.04](https://releases.ubuntu.com/20.04/)
* [CUDA 12.1 Update 1](https://developer.nvidia.com/cuda-12-1-1-download-archive?target_os=Linux&target_arch=x86_64&Distribution=Ubuntu&target_version=20.04&target_type=runfile_local) * [CUDA 12.1 Update 1](https://developer.nvidia.com/cuda-12-1-1-download-archive?target_os=Linux&target_arch=x86_64&Distribution=Ubuntu&target_version=20.04&target_type=runfile_local)
* [TensorRT 8.5 GA Update 2 (8.5.3.1)](https://developer.nvidia.com/nvidia-tensorrt-8x-download) * [TensorRT 8.5 GA Update 2 (8.5.3.1)](https://developer.nvidia.com/nvidia-tensorrt-8x-download)
* [NVIDIA Driver 525.125.06 (Data center / Tesla series) / 530.41.03 (TITAN, GeForce RTX / GTX series and RTX / Quadro series)](https://www.nvidia.com.br/Download/index.aspx) * [NVIDIA Driver 525 (>= 525.125.06)](https://www.nvidia.com/Download/index.aspx)
* [NVIDIA DeepStream SDK 6.3](https://developer.nvidia.com/deepstream-getting-started) * [NVIDIA DeepStream SDK 6.3](https://catalog.ngc.nvidia.com/orgs/nvidia/resources/deepstream/files?version=6.3)
* [GStreamer 1.16.3](https://gstreamer.freedesktop.org/) * [GStreamer 1.16.3](https://gstreamer.freedesktop.org/)
* [DeepStream-Yolo](https://github.com/marcoslucianops/DeepStream-Yolo) * [DeepStream-Yolo](https://github.com/marcoslucianops/DeepStream-Yolo)
@@ -79,7 +97,7 @@ NVIDIA DeepStream SDK 6.3 / 6.2 / 6.1.1 / 6.1 / 6.0.1 / 6.0 / 5.1 configuration
* [Ubuntu 20.04](https://releases.ubuntu.com/20.04/) * [Ubuntu 20.04](https://releases.ubuntu.com/20.04/)
* [CUDA 11.8](https://developer.nvidia.com/cuda-11-8-0-download-archive?target_os=Linux&target_arch=x86_64&Distribution=Ubuntu&target_version=20.04&target_type=runfile_local) * [CUDA 11.8](https://developer.nvidia.com/cuda-11-8-0-download-archive?target_os=Linux&target_arch=x86_64&Distribution=Ubuntu&target_version=20.04&target_type=runfile_local)
* [TensorRT 8.5 GA Update 1 (8.5.2.2)](https://developer.nvidia.com/nvidia-tensorrt-8x-download) * [TensorRT 8.5 GA Update 1 (8.5.2.2)](https://developer.nvidia.com/nvidia-tensorrt-8x-download)
* [NVIDIA Driver 525.85.12 (Data center / Tesla series) / 525.105.17 (TITAN, GeForce RTX / GTX series and RTX / Quadro series)](https://www.nvidia.com.br/Download/index.aspx) * [NVIDIA Driver 525 (>= 525.85.12)](https://www.nvidia.com/Download/index.aspx)
* [NVIDIA DeepStream SDK 6.2](https://developer.nvidia.com/deepstream-sdk-download-tesla-archived) * [NVIDIA DeepStream SDK 6.2](https://developer.nvidia.com/deepstream-sdk-download-tesla-archived)
* [GStreamer 1.16.3](https://gstreamer.freedesktop.org/) * [GStreamer 1.16.3](https://gstreamer.freedesktop.org/)
* [DeepStream-Yolo](https://github.com/marcoslucianops/DeepStream-Yolo) * [DeepStream-Yolo](https://github.com/marcoslucianops/DeepStream-Yolo)
@@ -89,7 +107,7 @@ NVIDIA DeepStream SDK 6.3 / 6.2 / 6.1.1 / 6.1 / 6.0.1 / 6.0 / 5.1 configuration
* [Ubuntu 20.04](https://releases.ubuntu.com/20.04/) * [Ubuntu 20.04](https://releases.ubuntu.com/20.04/)
* [CUDA 11.7 Update 1](https://developer.nvidia.com/cuda-11-7-1-download-archive?target_os=Linux&target_arch=x86_64&Distribution=Ubuntu&target_version=20.04&target_type=runfile_local) * [CUDA 11.7 Update 1](https://developer.nvidia.com/cuda-11-7-1-download-archive?target_os=Linux&target_arch=x86_64&Distribution=Ubuntu&target_version=20.04&target_type=runfile_local)
* [TensorRT 8.4 GA (8.4.1.5)](https://developer.nvidia.com/nvidia-tensorrt-8x-download) * [TensorRT 8.4 GA (8.4.1.5)](https://developer.nvidia.com/nvidia-tensorrt-8x-download)
* [NVIDIA Driver 515.65.01](https://www.nvidia.com.br/Download/index.aspx) * [NVIDIA Driver 515.65.01](https://www.nvidia.com/Download/index.aspx)
* [NVIDIA DeepStream SDK 6.1.1](https://developer.nvidia.com/deepstream-sdk-download-tesla-archived) * [NVIDIA DeepStream SDK 6.1.1](https://developer.nvidia.com/deepstream-sdk-download-tesla-archived)
* [GStreamer 1.16.2](https://gstreamer.freedesktop.org/) * [GStreamer 1.16.2](https://gstreamer.freedesktop.org/)
* [DeepStream-Yolo](https://github.com/marcoslucianops/DeepStream-Yolo) * [DeepStream-Yolo](https://github.com/marcoslucianops/DeepStream-Yolo)
@@ -99,7 +117,7 @@ NVIDIA DeepStream SDK 6.3 / 6.2 / 6.1.1 / 6.1 / 6.0.1 / 6.0 / 5.1 configuration
* [Ubuntu 20.04](https://releases.ubuntu.com/20.04/) * [Ubuntu 20.04](https://releases.ubuntu.com/20.04/)
* [CUDA 11.6 Update 1](https://developer.nvidia.com/cuda-11-6-1-download-archive?target_os=Linux&target_arch=x86_64&Distribution=Ubuntu&target_version=20.04&target_type=runfile_local) * [CUDA 11.6 Update 1](https://developer.nvidia.com/cuda-11-6-1-download-archive?target_os=Linux&target_arch=x86_64&Distribution=Ubuntu&target_version=20.04&target_type=runfile_local)
* [TensorRT 8.2 GA Update 4 (8.2.5.1)](https://developer.nvidia.com/nvidia-tensorrt-8x-download) * [TensorRT 8.2 GA Update 4 (8.2.5.1)](https://developer.nvidia.com/nvidia-tensorrt-8x-download)
* [NVIDIA Driver 510.47.03](https://www.nvidia.com.br/Download/index.aspx) * [NVIDIA Driver 510.47.03](https://www.nvidia.com/Download/index.aspx)
* [NVIDIA DeepStream SDK 6.1](https://developer.nvidia.com/deepstream-sdk-download-tesla-archived) * [NVIDIA DeepStream SDK 6.1](https://developer.nvidia.com/deepstream-sdk-download-tesla-archived)
* [GStreamer 1.16.2](https://gstreamer.freedesktop.org/) * [GStreamer 1.16.2](https://gstreamer.freedesktop.org/)
* [DeepStream-Yolo](https://github.com/marcoslucianops/DeepStream-Yolo) * [DeepStream-Yolo](https://github.com/marcoslucianops/DeepStream-Yolo)
@@ -109,7 +127,7 @@ NVIDIA DeepStream SDK 6.3 / 6.2 / 6.1.1 / 6.1 / 6.0.1 / 6.0 / 5.1 configuration
* [Ubuntu 18.04](https://releases.ubuntu.com/18.04.6/) * [Ubuntu 18.04](https://releases.ubuntu.com/18.04.6/)
* [CUDA 11.4 Update 1](https://developer.nvidia.com/cuda-11-4-1-download-archive?target_os=Linux&target_arch=x86_64&Distribution=Ubuntu&target_version=18.04&target_type=runfile_local) * [CUDA 11.4 Update 1](https://developer.nvidia.com/cuda-11-4-1-download-archive?target_os=Linux&target_arch=x86_64&Distribution=Ubuntu&target_version=18.04&target_type=runfile_local)
* [TensorRT 8.0 GA (8.0.1)](https://developer.nvidia.com/nvidia-tensorrt-8x-download) * [TensorRT 8.0 GA (8.0.1)](https://developer.nvidia.com/nvidia-tensorrt-8x-download)
* [NVIDIA Driver 470.63.01](https://www.nvidia.com.br/Download/index.aspx) * [NVIDIA Driver 470.63.01](https://www.nvidia.com/Download/index.aspx)
* [NVIDIA DeepStream SDK 6.0.1 / 6.0](https://developer.nvidia.com/deepstream-sdk-download-tesla-archived) * [NVIDIA DeepStream SDK 6.0.1 / 6.0](https://developer.nvidia.com/deepstream-sdk-download-tesla-archived)
* [GStreamer 1.14.5](https://gstreamer.freedesktop.org/) * [GStreamer 1.14.5](https://gstreamer.freedesktop.org/)
* [DeepStream-Yolo](https://github.com/marcoslucianops/DeepStream-Yolo) * [DeepStream-Yolo](https://github.com/marcoslucianops/DeepStream-Yolo)
@@ -119,20 +137,32 @@ NVIDIA DeepStream SDK 6.3 / 6.2 / 6.1.1 / 6.1 / 6.0.1 / 6.0 / 5.1 configuration
* [Ubuntu 18.04](https://releases.ubuntu.com/18.04.6/) * [Ubuntu 18.04](https://releases.ubuntu.com/18.04.6/)
* [CUDA 11.1](https://developer.nvidia.com/cuda-11.1.0-download-archive?target_os=Linux&target_arch=x86_64&target_distro=Ubuntu&target_version=1804&target_type=runfilelocal) * [CUDA 11.1](https://developer.nvidia.com/cuda-11.1.0-download-archive?target_os=Linux&target_arch=x86_64&target_distro=Ubuntu&target_version=1804&target_type=runfilelocal)
* [TensorRT 7.2.2](https://developer.nvidia.com/nvidia-tensorrt-7x-download) * [TensorRT 7.2.2](https://developer.nvidia.com/nvidia-tensorrt-7x-download)
* [NVIDIA Driver 460.32.03](https://www.nvidia.com.br/Download/index.aspx) * [NVIDIA Driver 460.32.03](https://www.nvidia.com/Download/index.aspx)
* [NVIDIA DeepStream SDK 5.1](https://developer.nvidia.com/deepstream-sdk-download-tesla-archived) * [NVIDIA DeepStream SDK 5.1](https://developer.nvidia.com/deepstream-sdk-download-tesla-archived)
* [GStreamer 1.14.5](https://gstreamer.freedesktop.org/) * [GStreamer 1.14.5](https://gstreamer.freedesktop.org/)
* [DeepStream-Yolo](https://github.com/marcoslucianops/DeepStream-Yolo) * [DeepStream-Yolo](https://github.com/marcoslucianops/DeepStream-Yolo)
#### DeepStream 7.0 on Jetson platform
* [JetPack 6.0](https://developer.nvidia.com/embedded/jetpack-sdk-60)
* [NVIDIA DeepStream SDK 7.0](https://catalog.ngc.nvidia.com/orgs/nvidia/resources/deepstream/files?version=7.0)
* [DeepStream-Yolo](https://github.com/marcoslucianops/DeepStream-Yolo)
#### DeepStream 6.4 on Jetson platform
* [JetPack 6.0 DP](https://developer.nvidia.com/embedded/jetpack-sdk-60dp)
* [NVIDIA DeepStream SDK 6.4](https://catalog.ngc.nvidia.com/orgs/nvidia/resources/deepstream/files?version=6.4)
* [DeepStream-Yolo](https://github.com/marcoslucianops/DeepStream-Yolo)
#### DeepStream 6.3 on Jetson platform #### DeepStream 6.3 on Jetson platform
* [JetPack 5.1.2](https://developer.nvidia.com/embedded/jetpack) * JetPack [5.1.3](https://developer.nvidia.com/embedded/jetpack-sdk-513) / [5.1.2](https://developer.nvidia.com/embedded/jetpack-sdk-512)
* [NVIDIA DeepStream SDK 6.3](https://developer.nvidia.com/deepstream-sdk) * [NVIDIA DeepStream SDK 6.3](https://catalog.ngc.nvidia.com/orgs/nvidia/resources/deepstream/files?version=6.3)
* [DeepStream-Yolo](https://github.com/marcoslucianops/DeepStream-Yolo) * [DeepStream-Yolo](https://github.com/marcoslucianops/DeepStream-Yolo)
#### DeepStream 6.2 on Jetson platform #### DeepStream 6.2 on Jetson platform
* JetPack [5.1.2](https://developer.nvidia.com/embedded/jetpack) / [5.1.1](https://developer.nvidia.com/embedded/jetpack-sdk-511) / [5.1](https://developer.nvidia.com/embedded/jetpack-sdk-51) * JetPack [5.1.3](https://developer.nvidia.com/embedded/jetpack-sdk-513) / [5.1.2](https://developer.nvidia.com/embedded/jetpack-sdk-512) / [5.1.1](https://developer.nvidia.com/embedded/jetpack-sdk-511) / [5.1](https://developer.nvidia.com/embedded/jetpack-sdk-51)
* [NVIDIA DeepStream SDK 6.2](https://developer.nvidia.com/embedded/deepstream-on-jetson-downloads-archived) * [NVIDIA DeepStream SDK 6.2](https://developer.nvidia.com/embedded/deepstream-on-jetson-downloads-archived)
* [DeepStream-Yolo](https://github.com/marcoslucianops/DeepStream-Yolo) * [DeepStream-Yolo](https://github.com/marcoslucianops/DeepStream-Yolo)
@@ -192,52 +222,36 @@ cd DeepStream-Yolo
#### 3. Compile the lib #### 3. Compile the lib
* DeepStream 6.3 on x86 platform 3.1. Set the `CUDA_VER` according to your DeepStream version
``` ```
CUDA_VER=12.1 make -C nvdsinfer_custom_impl_Yolo export CUDA_VER=XY.Z
``` ```
* DeepStream 6.2 on x86 platform * x86 platform
``` ```
CUDA_VER=11.8 make -C nvdsinfer_custom_impl_Yolo DeepStream 7.0 / 6.4 = 12.2
DeepStream 6.3 = 12.1
DeepStream 6.2 = 11.8
DeepStream 6.1.1 = 11.7
DeepStream 6.1 = 11.6
DeepStream 6.0.1 / 6.0 = 11.4
DeepStream 5.1 = 11.1
``` ```
* DeepStream 6.1.1 on x86 platform * Jetson platform
``` ```
CUDA_VER=11.7 make -C nvdsinfer_custom_impl_Yolo DeepStream 7.0 / 6.4 = 12.2
DeepStream 6.3 / 6.2 / 6.1.1 / 6.1 = 11.4
DeepStream 6.0.1 / 6.0 / 5.1 = 10.2
``` ```
* DeepStream 6.1 on x86 platform 3.2. Make the lib
``` ```
CUDA_VER=11.6 make -C nvdsinfer_custom_impl_Yolo make -C nvdsinfer_custom_impl_Yolo clean && make -C nvdsinfer_custom_impl_Yolo
```
* DeepStream 6.0.1 / 6.0 on x86 platform
```
CUDA_VER=11.4 make -C nvdsinfer_custom_impl_Yolo
```
* DeepStream 5.1 on x86 platform
```
CUDA_VER=11.1 make -C nvdsinfer_custom_impl_Yolo
```
* DeepStream 6.3 / 6.2 / 6.1.1 / 6.1 on Jetson platform
```
CUDA_VER=11.4 make -C nvdsinfer_custom_impl_Yolo
```
* DeepStream 6.0.1 / 6.0 / 5.1 on Jetson platform
```
CUDA_VER=10.2 make -C nvdsinfer_custom_impl_Yolo
``` ```
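For example, on DeepStream 7.0 (where the tables above give `CUDA_VER=12.2` on both x86 and Jetson), the full sequence is:
```
export CUDA_VER=12.2
make -C nvdsinfer_custom_impl_Yolo clean && make -C nvdsinfer_custom_impl_Yolo
```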
#### 4. Edit the `config_infer_primary.txt` file according to your model (example for YOLOv4) #### 4. Edit the `config_infer_primary.txt` file according to your model (example for YOLOv4)
@@ -283,15 +297,14 @@ config-file=config_infer_primary_yoloV2.txt
* x86 platform * x86 platform
``` ```
nvcr.io/nvidia/deepstream:6.3-gc-triton-devel nvcr.io/nvidia/deepstream:7.0-gc-triton-devel
nvcr.io/nvidia/deepstream:6.3-triton-multiarch nvcr.io/nvidia/deepstream:7.0-triton-multiarch
``` ```
* Jetson platform * Jetson platform
``` ```
nvcr.io/nvidia/deepstream-l4t:6.3-samples nvcr.io/nvidia/deepstream:7.0-triton-multiarch
nvcr.io/nvidia/deepstream:6.3-triton-multiarch
``` ```
**NOTE**: To compile the `nvdsinfer_custom_impl_Yolo`, you need to install g++ inside the container **NOTE**: To compile the `nvdsinfer_custom_impl_Yolo`, you need to install g++ inside the container
@@ -300,7 +313,7 @@ config-file=config_infer_primary_yoloV2.txt
apt-get install build-essential apt-get install build-essential
``` ```
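As a minimal sketch, the whole flow can be run inside the container (assuming the NVIDIA Container Toolkit is installed on the host and this repository is cloned in the current directory; the mount path is illustrative):
```
# Start the DeepStream 7.0 container with GPU access, mounting the repo from the host
docker run --gpus all -it --rm \
    -v $(pwd)/DeepStream-Yolo:/opt/DeepStream-Yolo \
    nvcr.io/nvidia/deepstream:7.0-triton-multiarch bash

# Inside the container: install g++ and compile the lib
apt-get update && apt-get install -y build-essential
cd /opt/DeepStream-Yolo
export CUDA_VER=12.2
make -C nvdsinfer_custom_impl_Yolo clean && make -C nvdsinfer_custom_impl_Yolo
```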
**NOTE**: With DeepStream 6.3, the docker containers do not package libraries necessary for certain multimedia operations like audio data parsing, CPU decode, and CPU encode. This change could affect processing certain video streams/files like mp4 that include audio track. Please run the below script inside the docker images to install additional packages that might be necessary to use all of the DeepStreamSDK features: **NOTE**: With DeepStream 7.0, the docker containers do not package libraries necessary for certain multimedia operations like audio data parsing, CPU decode, and CPU encode. This change could affect processing certain video streams/files like mp4 that include audio track. Please run the below script inside the docker images to install additional packages that might be necessary to use all of the DeepStreamSDK features:
``` ```
/opt/nvidia/deepstream/deepstream/user_additional_install.sh /opt/nvidia/deepstream/deepstream/user_additional_install.sh
@@ -323,11 +336,48 @@ topk=300
## ##
### Notes
1. Sometimes, while running a GStreamer pipeline or the sample apps, you can encounter the error: `GLib (gthread-posix.c): Unexpected error from C library during 'pthread_setspecific': Invalid argument. Aborting.`. This is caused by a bug in `glib 2.0-2.72`, the version that ships with Ubuntu 22.04 by default. The issue is fixed in `glib 2.76`, which must be installed to resolve it (https://github.com/GNOME/glib/tree/2.76.6).
- Migrate `glib` to a newer version
```
pip3 install meson
pip3 install ninja
```
**NOTE**: It is recommended to use Python virtualenv.
```
git clone https://github.com/GNOME/glib.git
cd glib
git checkout 2.76.6
meson build --prefix=/usr
ninja -C build/
cd build/
ninja install
```
- Check and confirm the newly installed glib version:
```
pkg-config --modversion glib-2.0
```
2. Sometimes, with RTSP streams, the application gets stuck on reaching EOS. This is caused by an issue in the rtpjitterbuffer component. To fix it, a script is provided with the required details to update the gstrtpmanager library.
```
/opt/nvidia/deepstream/deepstream/update_rtpmanager.sh
```
##
### Extract metadata ### Extract metadata
You can get metadata from DeepStream using Python and C/C++. For C/C++, you can edit the `deepstream-app` or `deepstream-test` code. For Python, you can install and edit [deepstream_python_apps](https://github.com/NVIDIA-AI-IOT/deepstream_python_apps). You can get metadata from DeepStream using Python and C/C++. For C/C++, you can edit the `deepstream-app` or `deepstream-test` code. For Python, you can install and edit [deepstream_python_apps](https://github.com/NVIDIA-AI-IOT/deepstream_python_apps).
Basically, you need to manipulate the `NvDsObjectMeta` ([Python](https://docs.nvidia.com/metropolis/deepstream/python-api/PYTHON_API/NvDsMeta/NvDsObjectMeta.html) / [C/C++](https://docs.nvidia.com/metropolis/deepstream/sdk-api/struct__NvDsObjectMeta.html)) and `NvDsFrameMeta` ([Python](https://docs.nvidia.com/metropolis/deepstream/python-api/PYTHON_API/NvDsMeta/NvDsFrameMeta.html) / [C/C++](https://docs.nvidia.com/metropolis/deepstream/sdk-api/struct__NvDsFrameMeta.html)) to get the label, position, etc. of bboxes. Basically, you need to manipulate the `NvDsObjectMeta` ([Python](https://docs.nvidia.com/metropolis/deepstream/dev-guide/python-api/PYTHON_API/NvDsMeta/NvDsObjectMeta.html) / [C/C++](https://docs.nvidia.com/metropolis/deepstream/dev-guide/sdk-api/struct__NvDsObjectMeta.html)) and `NvDsFrameMeta` ([Python](https://docs.nvidia.com/metropolis/deepstream/dev-guide/python-api/PYTHON_API/NvDsMeta/NvDsFrameMeta.html) / [C/C++](https://docs.nvidia.com/metropolis/deepstream/dev-guide/sdk-api/struct__NvDsFrameMeta.html)) to get the label, position, etc. of bboxes.
## ##


@@ -17,7 +17,7 @@ network-type=0
cluster-mode=2 cluster-mode=2
maintain-aspect-ratio=0 maintain-aspect-ratio=0
symmetric-padding=1 symmetric-padding=1
#force-implicit-batch-dim=1 force-implicit-batch-dim=0
#workspace-size=2000 #workspace-size=2000
parse-bbox-func-name=NvDsInferParseYolo parse-bbox-func-name=NvDsInferParseYolo
#parse-bbox-func-name=NvDsInferParseYoloCuda #parse-bbox-func-name=NvDsInferParseYoloCuda


@@ -16,6 +16,7 @@ process-mode=1
network-type=0 network-type=0
cluster-mode=2 cluster-mode=2
maintain-aspect-ratio=0 maintain-aspect-ratio=0
force-implicit-batch-dim=0
#workspace-size=2000 #workspace-size=2000
parse-bbox-func-name=NvDsInferParseYolo parse-bbox-func-name=NvDsInferParseYolo
#parse-bbox-func-name=NvDsInferParseYoloCuda #parse-bbox-func-name=NvDsInferParseYoloCuda


@@ -96,54 +96,38 @@ Copy the generated ONNX model file and labels.txt file (if generated) to the `De
### Compile the lib ### Compile the lib
Open the `DeepStream-Yolo` folder and compile the lib 1. Open the `DeepStream-Yolo` folder and compile the lib
* DeepStream 6.3 on x86 platform 2. Set the `CUDA_VER` according to your DeepStream version
``` ```
CUDA_VER=12.1 make -C nvdsinfer_custom_impl_Yolo export CUDA_VER=XY.Z
``` ```
* DeepStream 6.2 on x86 platform * x86 platform
``` ```
CUDA_VER=11.8 make -C nvdsinfer_custom_impl_Yolo DeepStream 7.0 / 6.4 = 12.2
DeepStream 6.3 = 12.1
DeepStream 6.2 = 11.8
DeepStream 6.1.1 = 11.7
DeepStream 6.1 = 11.6
DeepStream 6.0.1 / 6.0 = 11.4
DeepStream 5.1 = 11.1
``` ```
* DeepStream 6.1.1 on x86 platform * Jetson platform
``` ```
CUDA_VER=11.7 make -C nvdsinfer_custom_impl_Yolo DeepStream 7.0 / 6.4 = 12.2
DeepStream 6.3 / 6.2 / 6.1.1 / 6.1 = 11.4
DeepStream 6.0.1 / 6.0 / 5.1 = 10.2
``` ```
* DeepStream 6.1 on x86 platform 3. Make the lib
``` ```
CUDA_VER=11.6 make -C nvdsinfer_custom_impl_Yolo make -C nvdsinfer_custom_impl_Yolo clean && make -C nvdsinfer_custom_impl_Yolo
```
* DeepStream 6.0.1 / 6.0 on x86 platform
```
CUDA_VER=11.4 make -C nvdsinfer_custom_impl_Yolo
```
* DeepStream 5.1 on x86 platform
```
CUDA_VER=11.1 make -C nvdsinfer_custom_impl_Yolo
```
* DeepStream 6.3 / 6.2 / 6.1.1 / 6.1 on Jetson platform
```
CUDA_VER=11.4 make -C nvdsinfer_custom_impl_Yolo
```
* DeepStream 6.0.1 / 6.0 / 5.1 on Jetson platform
```
CUDA_VER=10.2 make -C nvdsinfer_custom_impl_Yolo
``` ```
## ##


@@ -8,52 +8,42 @@ sudo apt-get install libopencv-dev
### 2. Compile/recompile the `nvdsinfer_custom_impl_Yolo` lib with OpenCV support ### 2. Compile/recompile the `nvdsinfer_custom_impl_Yolo` lib with OpenCV support
* DeepStream 6.3 on x86 platform 2.1. Set the `CUDA_VER` according to your DeepStream version
``` ```
CUDA_VER=12.1 OPENCV=1 make -C nvdsinfer_custom_impl_Yolo export CUDA_VER=XY.Z
``` ```
* DeepStream 6.2 on x86 platform * x86 platform
``` ```
CUDA_VER=11.8 OPENCV=1 make -C nvdsinfer_custom_impl_Yolo DeepStream 7.0 / 6.4 = 12.2
DeepStream 6.3 = 12.1
DeepStream 6.2 = 11.8
DeepStream 6.1.1 = 11.7
DeepStream 6.1 = 11.6
DeepStream 6.0.1 / 6.0 = 11.4
DeepStream 5.1 = 11.1
``` ```
* DeepStream 6.1.1 on x86 platform * Jetson platform
``` ```
CUDA_VER=11.7 OPENCV=1 make -C nvdsinfer_custom_impl_Yolo DeepStream 7.0 / 6.4 = 12.2
DeepStream 6.3 / 6.2 / 6.1.1 / 6.1 = 11.4
DeepStream 6.0.1 / 6.0 / 5.1 = 10.2
``` ```
* DeepStream 6.1 on x86 platform 2.2. Set the `OPENCV` env
``` ```
CUDA_VER=11.6 OPENCV=1 make -C nvdsinfer_custom_impl_Yolo export OPENCV=1
``` ```
* DeepStream 6.0.1 / 6.0 on x86 platform 2.3. Make the lib
``` ```
CUDA_VER=11.4 OPENCV=1 make -C nvdsinfer_custom_impl_Yolo make -C nvdsinfer_custom_impl_Yolo clean && make -C nvdsinfer_custom_impl_Yolo
```
* DeepStream 5.1 on x86 platform
```
CUDA_VER=11.1 OPENCV=1 make -C nvdsinfer_custom_impl_Yolo
```
* DeepStream 6.3 / 6.2 / 6.1.1 / 6.1 on Jetson platform
```
CUDA_VER=11.4 OPENCV=1 make -C nvdsinfer_custom_impl_Yolo
```
* DeepStream 6.0.1 / 6.0 / 5.1 on Jetson platform
```
CUDA_VER=10.2 OPENCV=1 make -C nvdsinfer_custom_impl_Yolo
``` ```
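For example, on DeepStream 7.0 the three steps become:
```
export CUDA_VER=12.2
export OPENCV=1
make -C nvdsinfer_custom_impl_Yolo clean && make -C nvdsinfer_custom_impl_Yolo
```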
### 3. For COCO dataset, download the [val2017](https://drive.google.com/file/d/1gbvfn7mcsGDRZ_luJwtITL-ru2kK99aK/view?usp=sharing), extract, and move to DeepStream-Yolo folder ### 3. For COCO dataset, download the [val2017](https://drive.google.com/file/d/1gbvfn7mcsGDRZ_luJwtITL-ru2kK99aK/view?usp=sharing), extract, and move to DeepStream-Yolo folder


@@ -73,54 +73,38 @@ Copy the generated ONNX model file and labels.txt file (if generated) to the `De
### Compile the lib ### Compile the lib
Open the `DeepStream-Yolo` folder and compile the lib 1. Open the `DeepStream-Yolo` folder and compile the lib
* DeepStream 6.3 on x86 platform 2. Set the `CUDA_VER` according to your DeepStream version
``` ```
CUDA_VER=12.1 make -C nvdsinfer_custom_impl_Yolo export CUDA_VER=XY.Z
``` ```
* DeepStream 6.2 on x86 platform * x86 platform
``` ```
CUDA_VER=11.8 make -C nvdsinfer_custom_impl_Yolo DeepStream 7.0 / 6.4 = 12.2
DeepStream 6.3 = 12.1
DeepStream 6.2 = 11.8
DeepStream 6.1.1 = 11.7
DeepStream 6.1 = 11.6
DeepStream 6.0.1 / 6.0 = 11.4
DeepStream 5.1 = 11.1
``` ```
* DeepStream 6.1.1 on x86 platform * Jetson platform
``` ```
CUDA_VER=11.7 make -C nvdsinfer_custom_impl_Yolo DeepStream 7.0 / 6.4 = 12.2
DeepStream 6.3 / 6.2 / 6.1.1 / 6.1 = 11.4
DeepStream 6.0.1 / 6.0 / 5.1 = 10.2
``` ```
* DeepStream 6.1 on x86 platform 3. Make the lib
``` ```
CUDA_VER=11.6 make -C nvdsinfer_custom_impl_Yolo make -C nvdsinfer_custom_impl_Yolo clean && make -C nvdsinfer_custom_impl_Yolo
```
* DeepStream 6.0.1 / 6.0 on x86 platform
```
CUDA_VER=11.4 make -C nvdsinfer_custom_impl_Yolo
```
* DeepStream 5.1 on x86 platform
```
CUDA_VER=11.1 make -C nvdsinfer_custom_impl_Yolo
```
* DeepStream 6.3 / 6.2 / 6.1.1 / 6.1 on Jetson platform
```
CUDA_VER=11.4 make -C nvdsinfer_custom_impl_Yolo
```
* DeepStream 6.0.1 / 6.0 / 5.1 on Jetson platform
```
CUDA_VER=10.2 make -C nvdsinfer_custom_impl_Yolo
``` ```
## ##


@@ -79,54 +79,38 @@ Copy the generated ONNX model file and labels.txt file (if generated) to the `De
### Compile the lib ### Compile the lib
Open the `DeepStream-Yolo` folder and compile the lib 1. Open the `DeepStream-Yolo` folder and compile the lib
* DeepStream 6.3 on x86 platform 2. Set the `CUDA_VER` according to your DeepStream version
``` ```
CUDA_VER=12.1 make -C nvdsinfer_custom_impl_Yolo export CUDA_VER=XY.Z
``` ```
* DeepStream 6.2 on x86 platform * x86 platform
``` ```
CUDA_VER=11.8 make -C nvdsinfer_custom_impl_Yolo DeepStream 7.0 / 6.4 = 12.2
DeepStream 6.3 = 12.1
DeepStream 6.2 = 11.8
DeepStream 6.1.1 = 11.7
DeepStream 6.1 = 11.6
DeepStream 6.0.1 / 6.0 = 11.4
DeepStream 5.1 = 11.1
``` ```
* DeepStream 6.1.1 on x86 platform * Jetson platform
``` ```
CUDA_VER=11.7 make -C nvdsinfer_custom_impl_Yolo DeepStream 7.0 / 6.4 = 12.2
DeepStream 6.3 / 6.2 / 6.1.1 / 6.1 = 11.4
DeepStream 6.0.1 / 6.0 / 5.1 = 10.2
``` ```
* DeepStream 6.1 on x86 platform 3. Make the lib
``` ```
CUDA_VER=11.6 make -C nvdsinfer_custom_impl_Yolo make -C nvdsinfer_custom_impl_Yolo clean && make -C nvdsinfer_custom_impl_Yolo
```
* DeepStream 6.0.1 / 6.0 on x86 platform
```
CUDA_VER=11.4 make -C nvdsinfer_custom_impl_Yolo
```
* DeepStream 5.1 on x86 platform
```
CUDA_VER=11.1 make -C nvdsinfer_custom_impl_Yolo
```
* DeepStream 6.3 / 6.2 / 6.1.1 / 6.1 on Jetson platform
```
CUDA_VER=11.4 make -C nvdsinfer_custom_impl_Yolo
```
* DeepStream 6.0.1 / 6.0 / 5.1 on Jetson platform
```
CUDA_VER=10.2 make -C nvdsinfer_custom_impl_Yolo
``` ```
## ##


@@ -98,54 +98,38 @@ Copy the generated ONNX model file and labels.txt file (if generated) to the `De
### Compile the lib ### Compile the lib
Open the `DeepStream-Yolo` folder and compile the lib 1. Open the `DeepStream-Yolo` folder and compile the lib
* DeepStream 6.3 on x86 platform 2. Set the `CUDA_VER` according to your DeepStream version
``` ```
CUDA_VER=12.1 make -C nvdsinfer_custom_impl_Yolo export CUDA_VER=XY.Z
``` ```
* DeepStream 6.2 on x86 platform * x86 platform
``` ```
CUDA_VER=11.8 make -C nvdsinfer_custom_impl_Yolo DeepStream 7.0 / 6.4 = 12.2
DeepStream 6.3 = 12.1
DeepStream 6.2 = 11.8
DeepStream 6.1.1 = 11.7
DeepStream 6.1 = 11.6
DeepStream 6.0.1 / 6.0 = 11.4
DeepStream 5.1 = 11.1
``` ```
* DeepStream 6.1.1 on x86 platform * Jetson platform
``` ```
CUDA_VER=11.7 make -C nvdsinfer_custom_impl_Yolo DeepStream 7.0 / 6.4 = 12.2
DeepStream 6.3 / 6.2 / 6.1.1 / 6.1 = 11.4
DeepStream 6.0.1 / 6.0 / 5.1 = 10.2
``` ```
* DeepStream 6.1 on x86 platform 3. Make the lib
``` ```
CUDA_VER=11.6 make -C nvdsinfer_custom_impl_Yolo make -C nvdsinfer_custom_impl_Yolo clean && make -C nvdsinfer_custom_impl_Yolo
```
* DeepStream 6.0.1 / 6.0 on x86 platform
```
CUDA_VER=11.4 make -C nvdsinfer_custom_impl_Yolo
```
* DeepStream 5.1 on x86 platform
```
CUDA_VER=11.1 make -C nvdsinfer_custom_impl_Yolo
```
* DeepStream 6.3 / 6.2 / 6.1.1 / 6.1 on Jetson platform
```
CUDA_VER=11.4 make -C nvdsinfer_custom_impl_Yolo
```
* DeepStream 6.0.1 / 6.0 / 5.1 on Jetson platform
```
CUDA_VER=10.2 make -C nvdsinfer_custom_impl_Yolo
``` ```
## ##


@@ -99,54 +99,38 @@ Copy the generated ONNX model file and labels.txt file (if generated) to the `De
### Compile the lib ### Compile the lib
Open the `DeepStream-Yolo` folder and compile the lib 1. Open the `DeepStream-Yolo` folder and compile the lib
* DeepStream 6.3 on x86 platform 2. Set the `CUDA_VER` according to your DeepStream version
``` ```
CUDA_VER=12.1 make -C nvdsinfer_custom_impl_Yolo export CUDA_VER=XY.Z
``` ```
* DeepStream 6.2 on x86 platform * x86 platform
``` ```
CUDA_VER=11.8 make -C nvdsinfer_custom_impl_Yolo DeepStream 7.0 / 6.4 = 12.2
DeepStream 6.3 = 12.1
DeepStream 6.2 = 11.8
DeepStream 6.1.1 = 11.7
DeepStream 6.1 = 11.6
DeepStream 6.0.1 / 6.0 = 11.4
DeepStream 5.1 = 11.1
``` ```
* DeepStream 6.1.1 on x86 platform * Jetson platform
``` ```
CUDA_VER=11.7 make -C nvdsinfer_custom_impl_Yolo DeepStream 7.0 / 6.4 = 12.2
DeepStream 6.3 / 6.2 / 6.1.1 / 6.1 = 11.4
DeepStream 6.0.1 / 6.0 / 5.1 = 10.2
``` ```
* DeepStream 6.1 on x86 platform 3. Make the lib
``` ```
CUDA_VER=11.6 make -C nvdsinfer_custom_impl_Yolo make -C nvdsinfer_custom_impl_Yolo clean && make -C nvdsinfer_custom_impl_Yolo
```
* DeepStream 6.0.1 / 6.0 on x86 platform
```
CUDA_VER=11.4 make -C nvdsinfer_custom_impl_Yolo
```
* DeepStream 5.1 on x86 platform
```
CUDA_VER=11.1 make -C nvdsinfer_custom_impl_Yolo
```
* DeepStream 6.3 / 6.2 / 6.1.1 / 6.1 on Jetson platform
```
CUDA_VER=11.4 make -C nvdsinfer_custom_impl_Yolo
```
* DeepStream 6.0.1 / 6.0 / 5.1 on Jetson platform
```
CUDA_VER=10.2 make -C nvdsinfer_custom_impl_Yolo
``` ```
## ##


@@ -129,54 +129,38 @@ Copy the generated ONNX model file to the `DeepStream-Yolo` folder.
### Compile the lib ### Compile the lib
Open the `DeepStream-Yolo` folder and compile the lib 1. Open the `DeepStream-Yolo` folder and compile the lib
* DeepStream 6.3 on x86 platform 2. Set the `CUDA_VER` according to your DeepStream version
``` ```
CUDA_VER=12.1 make -C nvdsinfer_custom_impl_Yolo export CUDA_VER=XY.Z
``` ```
* DeepStream 6.2 on x86 platform * x86 platform
``` ```
CUDA_VER=11.8 make -C nvdsinfer_custom_impl_Yolo DeepStream 7.0 / 6.4 = 12.2
DeepStream 6.3 = 12.1
DeepStream 6.2 = 11.8
DeepStream 6.1.1 = 11.7
DeepStream 6.1 = 11.6
DeepStream 6.0.1 / 6.0 = 11.4
DeepStream 5.1 = 11.1
``` ```
* DeepStream 6.1.1 on x86 platform * Jetson platform
``` ```
CUDA_VER=11.7 make -C nvdsinfer_custom_impl_Yolo DeepStream 7.0 / 6.4 = 12.2
DeepStream 6.3 / 6.2 / 6.1.1 / 6.1 = 11.4
DeepStream 6.0.1 / 6.0 / 5.1 = 10.2
``` ```
* DeepStream 6.1 on x86 platform 3. Make the lib
``` ```
CUDA_VER=11.6 make -C nvdsinfer_custom_impl_Yolo make -C nvdsinfer_custom_impl_Yolo clean && make -C nvdsinfer_custom_impl_Yolo
```
* DeepStream 6.0.1 / 6.0 on x86 platform
```
CUDA_VER=11.4 make -C nvdsinfer_custom_impl_Yolo
```
* DeepStream 5.1 on x86 platform
```
CUDA_VER=11.1 make -C nvdsinfer_custom_impl_Yolo
```
* DeepStream 6.3 / 6.2 / 6.1.1 / 6.1 on Jetson platform
```
CUDA_VER=11.4 make -C nvdsinfer_custom_impl_Yolo
```
* DeepStream 6.0.1 / 6.0 / 5.1 on Jetson platform
```
CUDA_VER=10.2 make -C nvdsinfer_custom_impl_Yolo
``` ```
## ##


@@ -114,54 +114,38 @@ Copy the generated ONNX model file and labels.txt file (if generated) to the `De
### Compile the lib ### Compile the lib
Open the `DeepStream-Yolo` folder and compile the lib 1. Open the `DeepStream-Yolo` folder and compile the lib
* DeepStream 6.3 on x86 platform 2. Set the `CUDA_VER` according to your DeepStream version
``` ```
CUDA_VER=12.1 make -C nvdsinfer_custom_impl_Yolo export CUDA_VER=XY.Z
``` ```
* DeepStream 6.2 on x86 platform * x86 platform
``` ```
CUDA_VER=11.8 make -C nvdsinfer_custom_impl_Yolo DeepStream 7.0 / 6.4 = 12.2
DeepStream 6.3 = 12.1
DeepStream 6.2 = 11.8
DeepStream 6.1.1 = 11.7
DeepStream 6.1 = 11.6
DeepStream 6.0.1 / 6.0 = 11.4
DeepStream 5.1 = 11.1
``` ```
* DeepStream 6.1.1 on x86 platform * Jetson platform
``` ```
CUDA_VER=11.7 make -C nvdsinfer_custom_impl_Yolo DeepStream 7.0 / 6.4 = 12.2
DeepStream 6.3 / 6.2 / 6.1.1 / 6.1 = 11.4
DeepStream 6.0.1 / 6.0 / 5.1 = 10.2
``` ```
* DeepStream 6.1 on x86 platform 3. Make the lib
``` ```
CUDA_VER=11.6 make -C nvdsinfer_custom_impl_Yolo make -C nvdsinfer_custom_impl_Yolo clean && make -C nvdsinfer_custom_impl_Yolo
```
* DeepStream 6.0.1 / 6.0 on x86 platform
```
CUDA_VER=11.4 make -C nvdsinfer_custom_impl_Yolo
```
* DeepStream 5.1 on x86 platform
```
CUDA_VER=11.1 make -C nvdsinfer_custom_impl_Yolo
```
* DeepStream 6.3 / 6.2 / 6.1.1 / 6.1 on Jetson platform
```
CUDA_VER=11.4 make -C nvdsinfer_custom_impl_Yolo
```
* DeepStream 6.0.1 / 6.0 / 5.1 on Jetson platform
```
CUDA_VER=10.2 make -C nvdsinfer_custom_impl_Yolo
``` ```
## ##


@@ -78,54 +78,38 @@ Copy the generated ONNX model file to the `DeepStream-Yolo` folder.
### Compile the lib ### Compile the lib
Open the `DeepStream-Yolo` folder and compile the lib 1. Open the `DeepStream-Yolo` folder and compile the lib
* DeepStream 6.3 on x86 platform 2. Set the `CUDA_VER` according to your DeepStream version
``` ```
CUDA_VER=12.1 make -C nvdsinfer_custom_impl_Yolo export CUDA_VER=XY.Z
``` ```
* DeepStream 6.2 on x86 platform * x86 platform
``` ```
CUDA_VER=11.8 make -C nvdsinfer_custom_impl_Yolo DeepStream 7.0 / 6.4 = 12.2
DeepStream 6.3 = 12.1
DeepStream 6.2 = 11.8
DeepStream 6.1.1 = 11.7
DeepStream 6.1 = 11.6
DeepStream 6.0.1 / 6.0 = 11.4
DeepStream 5.1 = 11.1
``` ```
* DeepStream 6.1.1 on x86 platform * Jetson platform
``` ```
CUDA_VER=11.7 make -C nvdsinfer_custom_impl_Yolo DeepStream 7.0 / 6.4 = 12.2
DeepStream 6.3 / 6.2 / 6.1.1 / 6.1 = 11.4
DeepStream 6.0.1 / 6.0 / 5.1 = 10.2
``` ```
* DeepStream 6.1 on x86 platform 3. Make the lib
``` ```
CUDA_VER=11.6 make -C nvdsinfer_custom_impl_Yolo make -C nvdsinfer_custom_impl_Yolo clean && make -C nvdsinfer_custom_impl_Yolo
```
* DeepStream 6.0.1 / 6.0 on x86 platform
```
CUDA_VER=11.4 make -C nvdsinfer_custom_impl_Yolo
```
* DeepStream 5.1 on x86 platform
```
CUDA_VER=11.1 make -C nvdsinfer_custom_impl_Yolo
```
* DeepStream 6.3 / 6.2 / 6.1.1 / 6.1 on Jetson platform
```
CUDA_VER=11.4 make -C nvdsinfer_custom_impl_Yolo
```
* DeepStream 6.0.1 / 6.0 / 5.1 on Jetson platform
```
CUDA_VER=10.2 make -C nvdsinfer_custom_impl_Yolo
``` ```
## ##


@@ -106,54 +106,38 @@ Copy the generated ONNX model file and labels.txt file (if generated) to the `De
### Compile the lib ### Compile the lib
Open the `DeepStream-Yolo` folder and compile the lib 1. Open the `DeepStream-Yolo` folder and compile the lib
* DeepStream 6.3 on x86 platform 2. Set the `CUDA_VER` according to your DeepStream version
``` ```
CUDA_VER=12.1 make -C nvdsinfer_custom_impl_Yolo export CUDA_VER=XY.Z
``` ```
* DeepStream 6.2 on x86 platform * x86 platform
``` ```
CUDA_VER=11.8 make -C nvdsinfer_custom_impl_Yolo DeepStream 7.0 / 6.4 = 12.2
DeepStream 6.3 = 12.1
DeepStream 6.2 = 11.8
DeepStream 6.1.1 = 11.7
DeepStream 6.1 = 11.6
DeepStream 6.0.1 / 6.0 = 11.4
DeepStream 5.1 = 11.1
``` ```
* DeepStream 6.1.1 on x86 platform * Jetson platform
``` ```
CUDA_VER=11.7 make -C nvdsinfer_custom_impl_Yolo DeepStream 7.0 / 6.4 = 12.2
DeepStream 6.3 / 6.2 / 6.1.1 / 6.1 = 11.4
DeepStream 6.0.1 / 6.0 / 5.1 = 10.2
``` ```
* DeepStream 6.1 on x86 platform 3. Make the lib
``` ```
CUDA_VER=11.6 make -C nvdsinfer_custom_impl_Yolo make -C nvdsinfer_custom_impl_Yolo clean && make -C nvdsinfer_custom_impl_Yolo
```
* DeepStream 6.0.1 / 6.0 on x86 platform
```
CUDA_VER=11.4 make -C nvdsinfer_custom_impl_Yolo
```
* DeepStream 5.1 on x86 platform
```
CUDA_VER=11.1 make -C nvdsinfer_custom_impl_Yolo
```
* DeepStream 6.3 / 6.2 / 6.1.1 / 6.1 on Jetson platform
```
CUDA_VER=11.4 make -C nvdsinfer_custom_impl_Yolo
```
* DeepStream 6.0.1 / 6.0 / 5.1 on Jetson platform
```
CUDA_VER=10.2 make -C nvdsinfer_custom_impl_Yolo
``` ```
## ##


@@ -106,54 +106,38 @@ Copy the generated ONNX model file to the `DeepStream-Yolo` folder.
### Compile the lib ### Compile the lib
Open the `DeepStream-Yolo` folder and compile the lib 1. Open the `DeepStream-Yolo` folder and compile the lib
* DeepStream 6.3 on x86 platform 2. Set the `CUDA_VER` according to your DeepStream version
``` ```
CUDA_VER=12.1 make -C nvdsinfer_custom_impl_Yolo export CUDA_VER=XY.Z
``` ```
* DeepStream 6.2 on x86 platform * x86 platform
``` ```
CUDA_VER=11.8 make -C nvdsinfer_custom_impl_Yolo DeepStream 7.0 / 6.4 = 12.2
DeepStream 6.3 = 12.1
DeepStream 6.2 = 11.8
DeepStream 6.1.1 = 11.7
DeepStream 6.1 = 11.6
DeepStream 6.0.1 / 6.0 = 11.4
DeepStream 5.1 = 11.1
``` ```
* DeepStream 6.1.1 on x86 platform * Jetson platform
``` ```
CUDA_VER=11.7 make -C nvdsinfer_custom_impl_Yolo DeepStream 7.0 / 6.4 = 12.2
DeepStream 6.3 / 6.2 / 6.1.1 / 6.1 = 11.4
DeepStream 6.0.1 / 6.0 / 5.1 = 10.2
``` ```
* DeepStream 6.1 on x86 platform 3. Make the lib
``` ```
CUDA_VER=11.6 make -C nvdsinfer_custom_impl_Yolo make -C nvdsinfer_custom_impl_Yolo clean && make -C nvdsinfer_custom_impl_Yolo
```
* DeepStream 6.0.1 / 6.0 on x86 platform
```
CUDA_VER=11.4 make -C nvdsinfer_custom_impl_Yolo
```
* DeepStream 5.1 on x86 platform
```
CUDA_VER=11.1 make -C nvdsinfer_custom_impl_Yolo
```
* DeepStream 6.3 / 6.2 / 6.1.1 / 6.1 on Jetson platform
```
CUDA_VER=11.4 make -C nvdsinfer_custom_impl_Yolo
```
* DeepStream 6.0.1 / 6.0 / 5.1 on Jetson platform
```
CUDA_VER=10.2 make -C nvdsinfer_custom_impl_Yolo
``` ```
## ##


@@ -108,54 +108,38 @@ Copy the generated ONNX model file and labels.txt file (if generated) to the `De
### Compile the lib ### Compile the lib
Open the `DeepStream-Yolo` folder and compile the lib 1. Open the `DeepStream-Yolo` folder and compile the lib
* DeepStream 6.3 on x86 platform 2. Set the `CUDA_VER` according to your DeepStream version
``` ```
CUDA_VER=12.1 make -C nvdsinfer_custom_impl_Yolo export CUDA_VER=XY.Z
``` ```
* DeepStream 6.2 on x86 platform * x86 platform
``` ```
CUDA_VER=11.8 make -C nvdsinfer_custom_impl_Yolo DeepStream 7.0 / 6.4 = 12.2
DeepStream 6.3 = 12.1
DeepStream 6.2 = 11.8
DeepStream 6.1.1 = 11.7
DeepStream 6.1 = 11.6
DeepStream 6.0.1 / 6.0 = 11.4
DeepStream 5.1 = 11.1
``` ```
* DeepStream 6.1.1 on x86 platform * Jetson platform
``` ```
CUDA_VER=11.7 make -C nvdsinfer_custom_impl_Yolo DeepStream 7.0 / 6.4 = 12.2
DeepStream 6.3 / 6.2 / 6.1.1 / 6.1 = 11.4
DeepStream 6.0.1 / 6.0 / 5.1 = 10.2
``` ```
* DeepStream 6.1 on x86 platform 3. Make the lib
``` ```
CUDA_VER=11.6 make -C nvdsinfer_custom_impl_Yolo make -C nvdsinfer_custom_impl_Yolo clean && make -C nvdsinfer_custom_impl_Yolo
```
* DeepStream 6.0.1 / 6.0 on x86 platform
```
CUDA_VER=11.4 make -C nvdsinfer_custom_impl_Yolo
```
* DeepStream 5.1 on x86 platform
```
CUDA_VER=11.1 make -C nvdsinfer_custom_impl_Yolo
```
* DeepStream 6.3 / 6.2 / 6.1.1 / 6.1 on Jetson platform
```
CUDA_VER=11.4 make -C nvdsinfer_custom_impl_Yolo
```
* DeepStream 6.0.1 / 6.0 / 5.1 on Jetson platform
```
CUDA_VER=10.2 make -C nvdsinfer_custom_impl_Yolo
``` ```
## ##


@@ -99,54 +99,38 @@ Copy the generated ONNX model file and labels.txt file (if generated) to the `De
### Compile the lib ### Compile the lib
Open the `DeepStream-Yolo` folder and compile the lib 1. Open the `DeepStream-Yolo` folder and compile the lib
* DeepStream 6.3 on x86 platform 2. Set the `CUDA_VER` according to your DeepStream version
``` ```
CUDA_VER=12.1 make -C nvdsinfer_custom_impl_Yolo export CUDA_VER=XY.Z
``` ```
* DeepStream 6.2 on x86 platform * x86 platform
``` ```
CUDA_VER=11.8 make -C nvdsinfer_custom_impl_Yolo DeepStream 7.0 / 6.4 = 12.2
DeepStream 6.3 = 12.1
DeepStream 6.2 = 11.8
DeepStream 6.1.1 = 11.7
DeepStream 6.1 = 11.6
DeepStream 6.0.1 / 6.0 = 11.4
DeepStream 5.1 = 11.1
``` ```
* DeepStream 6.1.1 on x86 platform * Jetson platform
``` ```
CUDA_VER=11.7 make -C nvdsinfer_custom_impl_Yolo DeepStream 7.0 / 6.4 = 12.2
DeepStream 6.3 / 6.2 / 6.1.1 / 6.1 = 11.4
DeepStream 6.0.1 / 6.0 / 5.1 = 10.2
``` ```
* DeepStream 6.1 on x86 platform 3. Make the lib
``` ```
CUDA_VER=11.6 make -C nvdsinfer_custom_impl_Yolo make -C nvdsinfer_custom_impl_Yolo clean && make -C nvdsinfer_custom_impl_Yolo
```
* DeepStream 6.0.1 / 6.0 on x86 platform
```
CUDA_VER=11.4 make -C nvdsinfer_custom_impl_Yolo
```
* DeepStream 5.1 on x86 platform
```
CUDA_VER=11.1 make -C nvdsinfer_custom_impl_Yolo
```
* DeepStream 6.3 / 6.2 / 6.1.1 / 6.1 on Jetson platform
```
CUDA_VER=11.4 make -C nvdsinfer_custom_impl_Yolo
```
* DeepStream 6.0.1 / 6.0 / 5.1 on Jetson platform
```
CUDA_VER=10.2 make -C nvdsinfer_custom_impl_Yolo
``` ```
## ##


@@ -25,52 +25,36 @@ cd DeepStream-Yolo
### Compile the lib ### Compile the lib
* DeepStream 6.3 on x86 platform 1. Set the `CUDA_VER` according to your DeepStream version
``` ```
CUDA_VER=12.1 make -C nvdsinfer_custom_impl_Yolo export CUDA_VER=XY.Z
``` ```
* DeepStream 6.2 on x86 platform * x86 platform
``` ```
CUDA_VER=11.8 make -C nvdsinfer_custom_impl_Yolo DeepStream 7.0 / 6.4 = 12.2
DeepStream 6.3 = 12.1
DeepStream 6.2 = 11.8
DeepStream 6.1.1 = 11.7
DeepStream 6.1 = 11.6
DeepStream 6.0.1 / 6.0 = 11.4
DeepStream 5.1 = 11.1
``` ```
* DeepStream 6.1.1 on x86 platform * Jetson platform
``` ```
CUDA_VER=11.7 make -C nvdsinfer_custom_impl_Yolo DeepStream 7.0 / 6.4 = 12.2
DeepStream 6.3 / 6.2 / 6.1.1 / 6.1 = 11.4
DeepStream 6.0.1 / 6.0 / 5.1 = 10.2
``` ```
* DeepStream 6.1 on x86 platform 2. Make the lib
``` ```
CUDA_VER=11.6 make -C nvdsinfer_custom_impl_Yolo make -C nvdsinfer_custom_impl_Yolo clean && make -C nvdsinfer_custom_impl_Yolo
```
* DeepStream 6.0.1 / 6.0 on x86 platform
```
CUDA_VER=11.4 make -C nvdsinfer_custom_impl_Yolo
```
* DeepStream 5.1 on x86 platform
```
CUDA_VER=11.1 make -C nvdsinfer_custom_impl_Yolo
```
* DeepStream 6.3 / 6.2 / 6.1.1 / 6.1 on Jetson platform
```
CUDA_VER=11.4 make -C nvdsinfer_custom_impl_Yolo
```
* DeepStream 6.0.1 / 6.0 / 5.1 on Jetson platform
```
CUDA_VER=10.2 make -C nvdsinfer_custom_impl_Yolo
``` ```
## ##


@@ -2,49 +2,67 @@
To install DeepStream on a dGPU (x86 platform) without Docker, we need to take some steps to prepare the computer. To install DeepStream on a dGPU (x86 platform) without Docker, we need to take some steps to prepare the computer.
<details><summary>DeepStream 6.3</summary> **NOTE**: Disable Secure Boot in the BIOS.
### 1. Disable Secure Boot in BIOS 1. Purge all NVIDIA drivers, CUDA, etc. ($CUDA_PATH = path to the CUDA installation)
### 2. Install dependencies
```
sudo apt-get update
sudo apt-get install gcc make git libtool autoconf autogen pkg-config cmake
sudo apt-get install python3 python3-dev python3-pip
sudo apt-get install dkms
sudo apt install libssl1.1 libgstreamer1.0-0 gstreamer1.0-tools gstreamer1.0-plugins-good gstreamer1.0-plugins-bad gstreamer1.0-plugins-ugly gstreamer1.0-libav libgstreamer-plugins-base1.0-dev libgstrtspserver-1.0-0 libjansson4 libyaml-cpp-dev libjsoncpp-dev protobuf-compiler
sudo apt-get install linux-headers-$(uname -r)
```
**NOTE**: Purge all NVIDIA driver, CUDA, etc (replace $CUDA_PATH to your CUDA path)
``` ```
sudo nvidia-uninstall sudo nvidia-uninstall
sudo $CUDA_PATH/bin/cuda-uninstaller sudo $CUDA_PATH/bin/cuda-uninstaller
sudo apt-get remove --purge '*libnv*'
sudo apt-get remove --purge '*cudnn*'
sudo apt-get remove --purge '*nvidia*' sudo apt-get remove --purge '*nvidia*'
sudo apt-get remove --purge '*cuda*' sudo apt-get remove --purge '*cuda*'
sudo apt-get remove --purge '*cudnn*'
sudo apt-get remove --purge '*tensorrt*'
sudo apt autoremove --purge && sudo apt autoclean && sudo apt clean sudo apt autoremove --purge && sudo apt autoclean && sudo apt clean
``` ```
### 3. Install CUDA Keyring 2. Install the essential packages
```
sudo apt-get update
sudo apt-get install gcc make git libtool autoconf autogen pkg-config cmake python3 python3-dev python3-pip
sudo apt-get install linux-headers-$(uname -r)
```
3. Reboot
``` ```
wget https://developer.download.nvidia.com/compute/cuda/repos/ubuntu2004/x86_64/cuda-keyring_1.0-1_all.deb sudo reboot
```
<details><summary>DeepStream 7.0</summary>
### 1. Dependencies
```
sudo apt-get install dkms
sudo apt-get install libssl3 libssl-dev libgles2-mesa-dev libgstreamer1.0-0 gstreamer1.0-tools gstreamer1.0-plugins-good gstreamer1.0-plugins-bad gstreamer1.0-plugins-ugly gstreamer1.0-libav libgstreamer-plugins-base1.0-dev libgstrtspserver-1.0-0 libjansson4 libyaml-cpp-dev libjsoncpp-dev protobuf-compiler
```
### 2. CUDA Keyring
```
wget https://developer.download.nvidia.com/compute/cuda/repos/ubuntu2204/x86_64/cuda-keyring_1.0-1_all.deb
sudo dpkg -i cuda-keyring_1.0-1_all.deb sudo dpkg -i cuda-keyring_1.0-1_all.deb
sudo apt-get update sudo apt-get update
``` ```
### 4. Download and install NVIDIA Driver ### 3. GCC 12
```
sudo apt-get install gcc-12 g++-12
sudo update-alternatives --install /usr/bin/gcc gcc /usr/bin/gcc-12 12
sudo update-alternatives --install /usr/bin/g++ g++ /usr/bin/g++-12 12
sudo update-initramfs -u
```
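The selected compiler can be confirmed before proceeding, for example:
```
gcc --version
g++ --version
```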
### 4. NVIDIA Driver
<details><summary>TITAN, GeForce RTX / GTX series and RTX / Quadro series</summary><blockquote> <details><summary>TITAN, GeForce RTX / GTX series and RTX / Quadro series</summary><blockquote>
- Download - Download
``` ```
wget https://us.download.nvidia.com/XFree86/Linux-x86_64/530.41.03/NVIDIA-Linux-x86_64-530.41.03.run wget https://us.download.nvidia.com/XFree86/Linux-x86_64/535.179/NVIDIA-Linux-x86_64-535.179.run
``` ```
<blockquote><details><summary>Laptop</summary> <blockquote><details><summary>Laptop</summary>
@@ -52,7 +70,7 @@ sudo apt-get update
* Run * Run
``` ```
sudo sh NVIDIA-Linux-x86_64-530.41.03.run --no-cc-version-check --silent --disable-nouveau --dkms --install-libglvnd sudo sh NVIDIA-Linux-x86_64-535.179.run --no-cc-version-check --silent --disable-nouveau --dkms --install-libglvnd
``` ```
**NOTE**: This step will disable the nouveau drivers. **NOTE**: This step will disable the nouveau drivers.
@@ -66,7 +84,7 @@ sudo apt-get update
* Install * Install
``` ```
sudo sh NVIDIA-Linux-x86_64-530.41.03.run --no-cc-version-check --silent --disable-nouveau --dkms --install-libglvnd sudo sh NVIDIA-Linux-x86_64-535.179.run --no-cc-version-check --silent --disable-nouveau --dkms --install-libglvnd
``` ```
**NOTE**: If you are using a laptop with NVIDIA Optimus, run **NOTE**: If you are using a laptop with NVIDIA Optimus, run
@@ -83,7 +101,7 @@ sudo prime-select nvidia
* Run * Run
``` ```
sudo sh NVIDIA-Linux-x86_64-530.41.03.run --no-cc-version-check --silent --disable-nouveau --dkms --install-libglvnd --run-nvidia-xconfig sudo sh NVIDIA-Linux-x86_64-535.179.run --no-cc-version-check --silent --disable-nouveau --dkms --install-libglvnd --run-nvidia-xconfig
``` ```
**NOTE**: This step will disable the nouveau drivers. **NOTE**: This step will disable the nouveau drivers.
@@ -97,7 +115,300 @@ sudo prime-select nvidia
* Install * Install
``` ```
sudo sh NVIDIA-Linux-x86_64-530.41.03.run --no-cc-version-check --silent --disable-nouveau --dkms --install-libglvnd --run-nvidia-xconfig sudo sh NVIDIA-Linux-x86_64-535.179.run --no-cc-version-check --silent --disable-nouveau --dkms --install-libglvnd --run-nvidia-xconfig
```
</details></blockquote>
</blockquote></details>
<details><summary>Data center / Tesla series</summary><blockquote>
- Download
```
wget https://us.download.nvidia.com/tesla/535.161.08/NVIDIA-Linux-x86_64-535.161.08.run
```
* Run
```
sudo sh NVIDIA-Linux-x86_64-535.161.08.run --no-cc-version-check --silent --disable-nouveau --dkms --install-libglvnd --run-nvidia-xconfig
```
</blockquote></details>
### 5. CUDA
```
wget https://developer.download.nvidia.com/compute/cuda/12.2.2/local_installers/cuda_12.2.2_535.104.05_linux.run
sudo sh cuda_12.2.2_535.104.05_linux.run --silent --toolkit
```
* Export environment variables
```
echo $'export PATH=/usr/local/cuda-12.2/bin${PATH:+:${PATH}}\nexport LD_LIBRARY_PATH=/usr/local/cuda-12.2/lib64${LD_LIBRARY_PATH:+:${LD_LIBRARY_PATH}}' >> ~/.bashrc && source ~/.bashrc
```
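To confirm that the CUDA toolkit is on the `PATH`, for example:
```
nvcc --version
```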
### 6. TensorRT
```
sudo apt-key adv --fetch-keys https://developer.download.nvidia.com/compute/cuda/repos/ubuntu2204/x86_64/3bf863cc.pub
sudo add-apt-repository "deb https://developer.download.nvidia.com/compute/cuda/repos/ubuntu2204/x86_64/ /"
sudo apt-get update
sudo apt-get install --no-install-recommends libnvinfer-lean8=8.6.1.6-1+cuda12.0 libnvinfer-vc-plugin8=8.6.1.6-1+cuda12.0 libnvinfer-headers-dev=8.6.1.6-1+cuda12.0 libnvinfer-dev=8.6.1.6-1+cuda12.0 libnvinfer-headers-plugin-dev=8.6.1.6-1+cuda12.0 libnvinfer-plugin-dev=8.6.1.6-1+cuda12.0 libnvonnxparsers-dev=8.6.1.6-1+cuda12.0 libnvinfer-lean-dev=8.6.1.6-1+cuda12.0 libnvparsers-dev=8.6.1.6-1+cuda12.0 python3-libnvinfer-lean=8.6.1.6-1+cuda12.0 python3-libnvinfer-dispatch=8.6.1.6-1+cuda12.0 uff-converter-tf=8.6.1.6-1+cuda12.0 onnx-graphsurgeon=8.6.1.6-1+cuda12.0 libnvinfer-bin=8.6.1.6-1+cuda12.0 libnvinfer-dispatch-dev=8.6.1.6-1+cuda12.0 libnvinfer-dispatch8=8.6.1.6-1+cuda12.0 libnvonnxparsers-dev=8.6.1.6-1+cuda12.0 libnvonnxparsers8=8.6.1.6-1+cuda12.0 libnvinfer-vc-plugin-dev=8.6.1.6-1+cuda12.0 libnvinfer-samples=8.6.1.6-1+cuda12.0
sudo apt-mark hold libnvinfer* libnvparsers* libnvonnxparsers* libcudnn8* python3-libnvinfer* uff-converter-tf* onnx-graphsurgeon*
```
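Optionally, verify the installed TensorRT packages and the holds, for example:
```
dpkg -l | grep -E 'libnvinfer|libnvonnxparsers|libcudnn8'
apt-mark showhold
```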
### 7. DeepStream SDK
DeepStream 7.0 for Servers and Workstations
```
wget --content-disposition 'https://api.ngc.nvidia.com/v2/resources/org/nvidia/deepstream/7.0/files?redirect=true&path=deepstream-7.0_7.0.0-1_amd64.deb' -O deepstream-7.0_7.0.0-1_amd64.deb
sudo apt-get install ./deepstream-7.0_7.0.0-1_amd64.deb
rm ${HOME}/.cache/gstreamer-1.0/registry.x86_64.bin
sudo ln -snf /usr/local/cuda-12.2 /usr/local/cuda
```
### 8. Reboot
```
sudo reboot
```
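After the reboot, the installation can be checked with the DeepStream reference app (assuming the default installation), for example:
```
deepstream-app --version-all
```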
</details>
<details><summary>DeepStream 6.4</summary>
### 1. Dependencies
```
sudo apt-get install dkms
sudo apt-get install libssl3 libssl-dev libgstreamer1.0-0 gstreamer1.0-tools gstreamer1.0-plugins-good gstreamer1.0-plugins-bad gstreamer1.0-plugins-ugly gstreamer1.0-libav libgstreamer-plugins-base1.0-dev libgstrtspserver-1.0-0 libjansson4 libyaml-cpp-dev libjsoncpp-dev protobuf-compiler
```
### 2. CUDA Keyring
```
wget https://developer.download.nvidia.com/compute/cuda/repos/ubuntu2204/x86_64/cuda-keyring_1.0-1_all.deb
sudo dpkg -i cuda-keyring_1.0-1_all.deb
sudo apt-get update
```
### 3. GCC 12
```
sudo apt-get install gcc-12 g++-12
sudo update-alternatives --install /usr/bin/gcc gcc /usr/bin/gcc-12 12
sudo update-alternatives --install /usr/bin/g++ g++ /usr/bin/g++-12 12
sudo update-initramfs -u
```
### 4. NVIDIA Driver
<details><summary>TITAN, GeForce RTX / GTX series and RTX / Quadro series</summary><blockquote>
- Download
```
wget https://us.download.nvidia.com/XFree86/Linux-x86_64/535.179/NVIDIA-Linux-x86_64-535.179.run
```
<blockquote><details><summary>Laptop</summary>
* Run
```
sudo sh NVIDIA-Linux-x86_64-535.179.run --no-cc-version-check --silent --disable-nouveau --dkms --install-libglvnd
```
**NOTE**: This step will disable the nouveau drivers.
* Reboot
```
sudo reboot
```
* Install
```
sudo sh NVIDIA-Linux-x86_64-535.179.run --no-cc-version-check --silent --disable-nouveau --dkms --install-libglvnd
```
**NOTE**: If you are using a laptop with NVIDIA Optimus, run
```
sudo apt-get install nvidia-prime
sudo prime-select nvidia
```
</details></blockquote>
<blockquote><details><summary>Desktop</summary>
* Run
```
sudo sh NVIDIA-Linux-x86_64-535.179.run --no-cc-version-check --silent --disable-nouveau --dkms --install-libglvnd --run-nvidia-xconfig
```
**NOTE**: This step will disable the nouveau drivers.
* Reboot
```
sudo reboot
```
* Install
```
sudo sh NVIDIA-Linux-x86_64-535.179.run --no-cc-version-check --silent --disable-nouveau --dkms --install-libglvnd --run-nvidia-xconfig
```
</details></blockquote>
</blockquote></details>
<details><summary>Data center / Tesla series</summary><blockquote>
- Download
```
wget https://us.download.nvidia.com/tesla/535.104.12/NVIDIA-Linux-x86_64-535.104.12.run
```
* Run
```
sudo sh NVIDIA-Linux-x86_64-535.104.12.run --no-cc-version-check --silent --disable-nouveau --dkms --install-libglvnd --run-nvidia-xconfig
```
</blockquote></details>
### 5. CUDA
```
wget https://developer.download.nvidia.com/compute/cuda/12.2.2/local_installers/cuda_12.2.2_535.104.05_linux.run
sudo sh cuda_12.2.2_535.104.05_linux.run --silent --toolkit
```
* Export environment variables
```
echo $'export PATH=/usr/local/cuda-12.2/bin${PATH:+:${PATH}}\nexport LD_LIBRARY_PATH=/usr/local/cuda-12.2/lib64${LD_LIBRARY_PATH:+:${LD_LIBRARY_PATH}}' >> ~/.bashrc && source ~/.bashrc
```
### 6. TensorRT
```
sudo apt-key adv --fetch-keys https://developer.download.nvidia.com/compute/cuda/repos/ubuntu2204/x86_64/3bf863cc.pub
sudo add-apt-repository "deb https://developer.download.nvidia.com/compute/cuda/repos/ubuntu2204/x86_64/ /"
sudo apt-get update
sudo apt-get install libnvinfer8=8.6.1.6-1+cuda12.0 libnvinfer-plugin8=8.6.1.6-1+cuda12.0 libnvparsers8=8.6.1.6-1+cuda12.0 libnvonnxparsers8=8.6.1.6-1+cuda12.0 libnvinfer-bin=8.6.1.6-1+cuda12.0 libnvinfer-dev=8.6.1.6-1+cuda12.0 libnvinfer-plugin-dev=8.6.1.6-1+cuda12.0 libnvparsers-dev=8.6.1.6-1+cuda12.0 libnvonnxparsers-dev=8.6.1.6-1+cuda12.0 libnvinfer-samples=8.6.1.6-1+cuda12.0 libcudnn8=8.9.4.25-1+cuda12.2 libcudnn8-dev=8.9.4.25-1+cuda12.2 libnvinfer-headers-dev=8.6.1.6-1+cuda12.0 libnvinfer-lean-dev=8.6.1.6-1+cuda12.0 libnvinfer-headers-plugin-dev=8.6.1.6-1+cuda12.0 libnvinfer-dispatch-dev=8.6.1.6-1+cuda12.0 libnvinfer-vc-plugin-dev=8.6.1.6-1+cuda12.0
sudo apt-mark hold libnvinfer* libnvparsers* libnvonnxparsers* libcudnn8* python3-libnvinfer*
```
### 7. DeepStream SDK
DeepStream 6.4 for Servers and Workstations
```
wget --content-disposition 'https://api.ngc.nvidia.com/v2/resources/org/nvidia/deepstream/6.4/files?redirect=true&path=deepstream-6.4_6.4.0-1_amd64.deb' -O deepstream-6.4_6.4.0-1_amd64.deb
sudo apt-get install ./deepstream-6.4_6.4.0-1_amd64.deb
rm ${HOME}/.cache/gstreamer-1.0/registry.x86_64.bin
sudo ln -snf /usr/local/cuda-12.2 /usr/local/cuda
```
### 8. Reboot
```
sudo reboot
```
</details>
<details><summary>DeepStream 6.3</summary>
### 1. Dependencies
```
sudo apt-get install dkms
sudo apt-get install libssl1.1 libgstreamer1.0-0 gstreamer1.0-tools gstreamer1.0-plugins-good gstreamer1.0-plugins-bad gstreamer1.0-plugins-ugly gstreamer1.0-libav libgstreamer-plugins-base1.0-dev libgstrtspserver-1.0-0 libjansson4 libyaml-cpp-dev libjsoncpp-dev protobuf-compiler
```
### 2. CUDA Keyring
```
wget https://developer.download.nvidia.com/compute/cuda/repos/ubuntu2004/x86_64/cuda-keyring_1.0-1_all.deb
sudo dpkg -i cuda-keyring_1.0-1_all.deb
sudo apt-get update
```
### 3. NVIDIA Driver
<details><summary>TITAN, GeForce RTX / GTX series and RTX / Quadro series</summary><blockquote>
- Download
```
wget https://us.download.nvidia.com/XFree86/Linux-x86_64/525.125.06/NVIDIA-Linux-x86_64-525.125.06.run
```
<blockquote><details><summary>Laptop</summary>
* Run
```
sudo sh NVIDIA-Linux-x86_64-525.125.06.run --no-cc-version-check --silent --disable-nouveau --dkms --install-libglvnd
```
**NOTE**: This step will disable the nouveau drivers.
* Reboot
```
sudo reboot
```
* Install
```
sudo sh NVIDIA-Linux-x86_64-525.125.06.run --no-cc-version-check --silent --disable-nouveau --dkms --install-libglvnd
```
**NOTE**: If you are using a laptop with NVIDIA Optimus, run
```
sudo apt-get install nvidia-prime
sudo prime-select nvidia
```
</details></blockquote>
<blockquote><details><summary>Desktop</summary>
* Run
```
sudo sh NVIDIA-Linux-x86_64-525.125.06.run --no-cc-version-check --silent --disable-nouveau --dkms --install-libglvnd --run-nvidia-xconfig
```
**NOTE**: This step will disable the nouveau drivers.
* Reboot
```
sudo reboot
```
* Install
```
sudo sh NVIDIA-Linux-x86_64-525.125.06.run --no-cc-version-check --silent --disable-nouveau --dkms --install-libglvnd --run-nvidia-xconfig
```
</details></blockquote>
</blockquote></details>
### 4. CUDA
```
wget https://developer.download.nvidia.com/compute/cuda/12.1.1/local_installers/cuda_12.1.1_530.30.02_linux.run
sudo sh cuda_12.1.1_530.30.02_linux.run --silent --toolkit
```
* Export environment variables
```
echo $'export PATH=/usr/local/cuda-12.1/bin${PATH:+:${PATH}}\nexport LD_LIBRARY_PATH=/usr/local/cuda-12.1/lib64${LD_LIBRARY_PATH:+:${LD_LIBRARY_PATH}}' >> ~/.bashrc && source ~/.bashrc
```
### 5. TensorRT
```
sudo apt-key adv --fetch-keys https://developer.download.nvidia.com/compute/cuda/repos/ubuntu2004/x86_64/3bf863cc.pub
sudo add-apt-repository "deb https://developer.download.nvidia.com/compute/cuda/repos/ubuntu2004/x86_64/ /"
sudo apt-get update
sudo apt-get install libnvinfer8=8.5.3-1+cuda11.8 libnvinfer-plugin8=8.5.3-1+cuda11.8 libnvparsers8=8.5.3-1+cuda11.8 libnvonnxparsers8=8.5.3-1+cuda11.8 libnvinfer-bin=8.5.3-1+cuda11.8 libnvinfer-dev=8.5.3-1+cuda11.8 libnvinfer-plugin-dev=8.5.3-1+cuda11.8 libnvparsers-dev=8.5.3-1+cuda11.8 libnvonnxparsers-dev=8.5.3-1+cuda11.8 libnvinfer-samples=8.5.3-1+cuda11.8 libcudnn8=8.7.0.84-1+cuda11.8 libcudnn8-dev=8.7.0.84-1+cuda11.8 python3-libnvinfer=8.5.3-1+cuda11.8 python3-libnvinfer-dev=8.5.3-1+cuda11.8
sudo apt-mark hold libnvinfer* libnvparsers* libnvonnxparsers* libcudnn8* python3-libnvinfer*
```
### 6. DeepStream SDK
DeepStream 6.3 for Servers and Workstations
```
sudo apt-get install ./deepstream-6.3_6.3.0-1_amd64.deb
rm ${HOME}/.cache/gstreamer-1.0/registry.x86_64.bin
sudo ln -snf /usr/local/cuda-12.1 /usr/local/cuda
```
### 7. Reboot
```
sudo reboot
```
</details>
<details><summary>DeepStream 6.2</summary>
### 1. Dependencies
```
sudo apt-get install dkms
sudo apt-get install libssl1.1 libgstreamer1.0-0 gstreamer1.0-tools gstreamer1.0-plugins-good gstreamer1.0-plugins-bad gstreamer1.0-plugins-ugly gstreamer1.0-libav libgstreamer-plugins-base1.0-dev libgstrtspserver-1.0-0 libjansson4 libyaml-cpp-dev libjsoncpp-dev protobuf-compiler
```
### 2. CUDA Keyring
```
wget https://developer.download.nvidia.com/compute/cuda/repos/ubuntu2004/x86_64/cuda-keyring_1.0-1_all.deb
sudo dpkg -i cuda-keyring_1.0-1_all.deb
sudo apt-get update
```
### 3. NVIDIA Driver
<details><summary>TITAN, GeForce RTX / GTX series and RTX / Quadro series</summary><blockquote>
- Download
```
wget https://us.download.nvidia.com/XFree86/Linux-x86_64/525.147.05/NVIDIA-Linux-x86_64-525.147.05.run
```
<blockquote><details><summary>Laptop</summary>
* Run
```
sudo sh NVIDIA-Linux-x86_64-525.147.05.run --no-cc-version-check --silent --disable-nouveau --dkms --install-libglvnd
```
**NOTE**: This step will disable the nouveau drivers.
* Reboot
```
sudo reboot
```
* Install
```
sudo sh NVIDIA-Linux-x86_64-525.147.05.run --no-cc-version-check --silent --disable-nouveau --dkms --install-libglvnd
```
**NOTE**: If you are using a laptop with NVIDIA Optimus, run
```
sudo apt-get install nvidia-prime
sudo prime-select nvidia
```
</details></blockquote>
<blockquote><details><summary>Desktop</summary>
* Run
```
sudo sh NVIDIA-Linux-x86_64-525.147.05.run --no-cc-version-check --silent --disable-nouveau --dkms --install-libglvnd --run-nvidia-xconfig
```
**NOTE**: This step will disable the nouveau drivers.
* Reboot
```
sudo reboot
```
* Install
```
sudo sh NVIDIA-Linux-x86_64-525.147.05.run --no-cc-version-check --silent --disable-nouveau --dkms --install-libglvnd --run-nvidia-xconfig
```
</details></blockquote>
</blockquote></details>
### 4. CUDA
```
wget https://developer.download.nvidia.com/compute/cuda/11.8.0/local_installers/cuda_11.8.0_520.61.05_linux.run
sudo sh cuda_11.8.0_520.61.05_linux.run --silent --toolkit
```
* Export environment variables
```
echo $'export PATH=/usr/local/cuda-11.8/bin${PATH:+:${PATH}}\nexport LD_LIBRARY_PATH=/usr/local/cuda-11.8/lib64${LD_LIBRARY_PATH:+:${LD_LIBRARY_PATH}}' >> ~/.bashrc && source ~/.bashrc
```
### 5. TensorRT
```
sudo apt-key adv --fetch-keys https://developer.download.nvidia.com/compute/cuda/repos/ubuntu2004/x86_64/3bf863cc.pub
sudo add-apt-repository "deb https://developer.download.nvidia.com/compute/cuda/repos/ubuntu2004/x86_64/ /"
sudo apt-get update
sudo apt-get install libnvinfer8=8.5.2-1+cuda11.8 libnvinfer-plugin8=8.5.2-1+cuda11.8 libnvparsers8=8.5.2-1+cuda11.8 libnvonnxparsers8=8.5.2-1+cuda11.8 libnvinfer-bin=8.5.2-1+cuda11.8 libnvinfer-dev=8.5.2-1+cuda11.8 libnvinfer-plugin-dev=8.5.2-1+cuda11.8 libnvparsers-dev=8.5.2-1+cuda11.8 libnvonnxparsers-dev=8.5.2-1+cuda11.8 libnvinfer-samples=8.5.2-1+cuda11.8 libcudnn8=8.7.0.84-1+cuda11.8 libcudnn8-dev=8.7.0.84-1+cuda11.8 python3-libnvinfer=8.5.2-1+cuda11.8 python3-libnvinfer-dev=8.5.2-1+cuda11.8
sudo apt-mark hold libnvinfer* libnvparsers* libnvonnxparsers* libcudnn8* python3-libnvinfer*
```
### 6. DeepStream SDK
Download from the [NVIDIA website](https://developer.nvidia.com/deepstream-sdk-download-tesla-archived): DeepStream 6.2 for Servers and Workstations (.deb)
```
sudo apt-get install ./deepstream-6.2_6.2.0-1_amd64.deb
rm ${HOME}/.cache/gstreamer-1.0/registry.x86_64.bin
sudo ln -snf /usr/local/cuda-11.8 /usr/local/cuda
```
### 7. Reboot
```
sudo reboot
```
</details>
<details><summary>DeepStream 6.1.1</summary>
### 1. Dependencies
```
sudo apt-get install dkms
sudo apt-get install libssl1.1 libgstreamer1.0-0 gstreamer1.0-tools gstreamer1.0-plugins-good gstreamer1.0-plugins-bad gstreamer1.0-plugins-ugly gstreamer1.0-libav libgstreamer-plugins-base1.0-dev libgstrtspserver-1.0-0 libjansson4 libyaml-cpp-dev
```
### 2. CUDA Keyring
```
wget https://developer.download.nvidia.com/compute/cuda/repos/ubuntu2004/x86_64/cuda-keyring_1.0-1_all.deb
sudo dpkg -i cuda-keyring_1.0-1_all.deb
sudo apt-get update
```
### 3. NVIDIA Driver
<details><summary>TITAN, GeForce RTX / GTX series and RTX / Quadro series</summary><blockquote>
</blockquote></details>
### 4. CUDA
```
wget https://developer.download.nvidia.com/compute/cuda/11.7.1/local_installers/cuda_11.7.1_515.65.01_linux.run
sudo sh cuda_11.7.1_515.65.01_linux.run --silent --toolkit
```
* Export environment variables
```
echo $'export PATH=/usr/local/cuda-11.7/bin${PATH:+:${PATH}}\nexport LD_LIBRARY_PATH=/usr/local/cuda-11.7/lib64${LD_LIBRARY_PATH:+:${LD_LIBRARY_PATH}}' >> ~/.bashrc && source ~/.bashrc
```
### 5. TensorRT
```
sudo apt-key adv --fetch-keys https://developer.download.nvidia.com/compute/cuda/repos/ubuntu2004/x86_64/3bf863cc.pub
sudo add-apt-repository "deb https://developer.download.nvidia.com/compute/cuda/repos/ubuntu2004/x86_64/ /"
sudo apt-get update
sudo apt-get install libnvinfer8=8.4.1-1+cuda11.6 libnvinfer-plugin8=8.4.1-1+cuda11.6 libnvparsers8=8.4.1-1+cuda11.6 libnvonnxparsers8=8.4.1-1+cuda11.6 libnvinfer-bin=8.4.1-1+cuda11.6 libnvinfer-dev=8.4.1-1+cuda11.6 libnvinfer-plugin-dev=8.4.1-1+cuda11.6 libnvparsers-dev=8.4.1-1+cuda11.6 libnvonnxparsers-dev=8.4.1-1+cuda11.6 libnvinfer-samples=8.4.1-1+cuda11.6 libcudnn8=8.4.1.50-1+cuda11.6 libcudnn8-dev=8.4.1.50-1+cuda11.6 python3-libnvinfer=8.4.1-1+cuda11.6 python3-libnvinfer-dev=8.4.1-1+cuda11.6
sudo apt-mark hold libnvinfer* libnvparsers* libnvonnxparsers* libcudnn8* python3-libnvinfer*
```
### 6. DeepStream SDK
Download from the [NVIDIA website](https://developer.nvidia.com/deepstream-sdk-download-tesla-archived): DeepStream 6.1.1 for Servers and Workstations (.deb)
```
sudo apt-get install ./deepstream-6.1_6.1.1-1_amd64.deb
rm ${HOME}/.cache/gstreamer-1.0/registry.x86_64.bin
sudo ln -snf /usr/local/cuda-11.7 /usr/local/cuda
```
### 7. Reboot
```
sudo reboot
```
</details>
<details><summary>DeepStream 6.1</summary>
### 1. Dependencies
```
sudo apt-get install dkms
sudo apt-get install libssl1.1 libgstreamer1.0-0 gstreamer1.0-tools gstreamer1.0-plugins-good gstreamer1.0-plugins-bad gstreamer1.0-plugins-ugly gstreamer1.0-libav libgstrtspserver-1.0-0 libjansson4 libyaml-cpp-dev
```
### 2. CUDA Keyring
```
wget https://developer.download.nvidia.com/compute/cuda/repos/ubuntu2004/x86_64/cuda-keyring_1.0-1_all.deb
sudo dpkg -i cuda-keyring_1.0-1_all.deb
sudo apt-get update
```
### 3. NVIDIA Driver
<details><summary>TITAN, GeForce RTX / GTX series and RTX / Quadro series</summary><blockquote>
</blockquote></details>
### 4. CUDA
```
wget https://developer.download.nvidia.com/compute/cuda/11.6.1/local_installers/cuda_11.6.1_510.47.03_linux.run
sudo sh cuda_11.6.1_510.47.03_linux.run --silent --toolkit
```
* Export environment variables
```
echo $'export PATH=/usr/local/cuda-11.6/bin${PATH:+:${PATH}}\nexport LD_LIBRARY_PATH=/usr/local/cuda-11.6/lib64${LD_LIBRARY_PATH:+:${LD_LIBRARY_PATH}}' >> ~/.bashrc && source ~/.bashrc
```
### 5. TensorRT
```
sudo apt-key adv --fetch-keys https://developer.download.nvidia.com/compute/cuda/repos/ubuntu2004/x86_64/3bf863cc.pub
sudo add-apt-repository "deb https://developer.download.nvidia.com/compute/cuda/repos/ubuntu2004/x86_64/ /"
sudo apt-get update
sudo apt-get install libnvinfer8=8.2.5-1+cuda11.4 libnvinfer-plugin8=8.2.5-1+cuda11.4 libnvparsers8=8.2.5-1+cuda11.4 libnvonnxparsers8=8.2.5-1+cuda11.4 libnvinfer-dev=8.2.5-1+cuda11.4 libnvinfer-plugin-dev=8.2.5-1+cuda11.4 libnvparsers-dev=8.2.5-1+cuda11.4 libnvonnxparsers-dev=8.2.5-1+cuda11.4 libcudnn8=8.4.0.27-1+cuda11.6 libcudnn8-dev=8.4.0.27-1+cuda11.6 python3-libnvinfer=8.2.5-1+cuda11.4 python3-libnvinfer-dev=8.2.5-1+cuda11.4
sudo apt-mark hold libnvinfer* libnvparsers* libnvonnxparsers* libcudnn8* python3-libnvinfer*
```
### 6. DeepStream SDK
Download from the [NVIDIA website](https://developer.nvidia.com/deepstream-sdk-download-tesla-archived): DeepStream 6.1 for Servers and Workstations (.deb)
```
sudo apt-get install ./deepstream-6.1_6.1.0-1_amd64.deb
rm ${HOME}/.cache/gstreamer-1.0/registry.x86_64.bin
sudo ln -snf /usr/local/cuda-11.6 /usr/local/cuda
```
### 7. Reboot
```
sudo reboot
```
</details>
<details><summary>DeepStream 6.0.1 / 6.0</summary>
<details><summary>If you are using a laptop with newer Intel/AMD processors and your Graphics in Settings->Details->About tab is llvmpipe, please update the kernel.</summary>
</details>
### 1. Dependencies
```
sudo apt-get install libssl1.0.0 libgstreamer1.0-0 gstreamer1.0-tools gstreamer1.0-plugins-good gstreamer1.0-plugins-bad gstreamer1.0-plugins-ugly gstreamer1.0-libav libgstrtspserver-1.0-0 libjansson4
```
**NOTE**: Install DKMS (only if you are using the default Ubuntu kernel)
```
sudo apt-get install dkms
```
### 2. CUDA Keyring
```
wget https://developer.download.nvidia.com/compute/cuda/repos/ubuntu1804/x86_64/cuda-keyring_1.0-1_all.deb
sudo dpkg -i cuda-keyring_1.0-1_all.deb
sudo apt-get update
```
### 3. NVIDIA Driver
<details><summary>TITAN, GeForce RTX / GTX series and RTX / Quadro series</summary><blockquote>
</blockquote></details>
### 4. CUDA
```
wget https://developer.download.nvidia.com/compute/cuda/11.4.1/local_installers/cuda_11.4.1_470.57.02_linux.run
sudo sh cuda_11.4.1_470.57.02_linux.run --silent --toolkit
```
* Export environment variables
```
echo $'export PATH=/usr/local/cuda-11.4/bin${PATH:+:${PATH}}\nexport LD_LIBRARY_PATH=/usr/local/cuda-11.4/lib64${LD_LIBRARY_PATH:+:${LD_LIBRARY_PATH}}' >> ~/.bashrc && source ~/.bashrc
```
### 5. TensorRT
```
sudo apt-key adv --fetch-keys https://developer.download.nvidia.com/compute/cuda/repos/ubuntu1804/x86_64/3bf863cc.pub
sudo add-apt-repository "deb https://developer.download.nvidia.com/compute/cuda/repos/ubuntu1804/x86_64/ /"
sudo apt-get update
sudo apt-get install libnvinfer8=8.0.1-1+cuda11.3 libnvinfer-plugin8=8.0.1-1+cuda11.3 libnvparsers8=8.0.1-1+cuda11.3 libnvonnxparsers8=8.0.1-1+cuda11.3 libnvinfer-dev=8.0.1-1+cuda11.3 libnvinfer-plugin-dev=8.0.1-1+cuda11.3 libnvparsers-dev=8.0.1-1+cuda11.3 libnvonnxparsers-dev=8.0.1-1+cuda11.3 libcudnn8=8.2.1.32-1+cuda11.3 libcudnn8-dev=8.2.1.32-1+cuda11.3 python3-libnvinfer=8.0.1-1+cuda11.3 python3-libnvinfer-dev=8.0.1-1+cuda11.3
sudo apt-mark hold libnvinfer* libnvparsers* libnvonnxparsers* libcudnn8* python3-libnvinfer*
```
### 6. DeepStream SDK
Download from the [NVIDIA website](https://developer.nvidia.com/deepstream-sdk-download-tesla-archived):
* DeepStream 6.0.1 for Servers and Workstations (.deb)
* DeepStream 6.0 for Servers and Workstations (.deb)
```
rm ${HOME}/.cache/gstreamer-1.0/registry.x86_64.bin
sudo ln -snf /usr/local/cuda-11.4 /usr/local/cuda
```
### 7. Reboot
```
sudo reboot
```
</details>
<details><summary>DeepStream 5.1</summary>
### 1. Dependencies
```
sudo apt-get install dkms
sudo apt-get install libssl1.0.0 libgstreamer1.0-0 gstreamer1.0-tools gstreamer1.0-plugins-good gstreamer1.0-plugins-bad gstreamer1.0-plugins-ugly gstreamer1.0-libav libgstrtspserver-1.0-0 libjansson4=2.11-1
```
### 2. CUDA Keyring
```
wget https://developer.download.nvidia.com/compute/cuda/repos/ubuntu1804/x86_64/cuda-keyring_1.0-1_all.deb
sudo dpkg -i cuda-keyring_1.0-1_all.deb
sudo apt-get update
```
### 3. NVIDIA Driver
<details><summary>TITAN, GeForce RTX / GTX series and RTX / Quadro series</summary><blockquote>
- Download
```
wget https://us.download.nvidia.com/XFree86/Linux-x86_64/460.32.03/NVIDIA-Linux-x86_64-460.32.03.run
```
<blockquote><details><summary>Laptop</summary>
* Run
```
sudo sh NVIDIA-Linux-x86_64-460.32.03.run --silent --disable-nouveau --dkms --install-libglvnd
```
**NOTE**: This step will disable the nouveau drivers.
* Reboot
```
sudo reboot
```
* Install
```
sudo sh NVIDIA-Linux-x86_64-460.32.03.run --silent --disable-nouveau --dkms --install-libglvnd
```
**NOTE**: If you are using a laptop with NVIDIA Optimus, run
```
sudo apt-get install nvidia-prime
sudo prime-select nvidia
```
</details></blockquote>
<blockquote><details><summary>Desktop</summary>
* Run
```
sudo sh NVIDIA-Linux-x86_64-460.32.03.run --silent --disable-nouveau --dkms --install-libglvnd --run-nvidia-xconfig
```
**NOTE**: This step will disable the nouveau drivers.
* Reboot
```
sudo reboot
```
* Install
```
sudo sh NVIDIA-Linux-x86_64-460.32.03.run --silent --disable-nouveau --dkms --install-libglvnd --run-nvidia-xconfig
```
</details></blockquote>
</blockquote></details>
<details><summary>Data center / Tesla series</summary><blockquote>
- Download
```
wget https://us.download.nvidia.com/tesla/460.32.03/NVIDIA-Linux-x86_64-460.32.03.run
```
* Run
```
sudo sh NVIDIA-Linux-x86_64-460.32.03.run --silent --disable-nouveau --dkms --install-libglvnd --run-nvidia-xconfig
```
</blockquote></details>
### 4. CUDA
```
wget https://developer.download.nvidia.com/compute/cuda/11.1.1/local_installers/cuda_11.1.1_455.32.00_linux.run
sudo sh cuda_11.1.1_455.32.00_linux.run --silent --toolkit
```
* Export environment variables
```
echo $'export PATH=/usr/local/cuda-11.1/bin${PATH:+:${PATH}}\nexport LD_LIBRARY_PATH=/usr/local/cuda-11.1/lib64${LD_LIBRARY_PATH:+:${LD_LIBRARY_PATH}}' >> ~/.bashrc && source ~/.bashrc
```
### 5. TensorRT
```
sudo apt-key adv --fetch-keys https://developer.download.nvidia.com/compute/cuda/repos/ubuntu1804/x86_64/3bf863cc.pub
sudo add-apt-repository "deb https://developer.download.nvidia.com/compute/cuda/repos/ubuntu1804/x86_64/ /"
sudo apt-get update
sudo apt-get install libnvinfer7=7.2.2-1+cuda11.1 libnvinfer-plugin7=7.2.2-1+cuda11.1 libnvparsers7=7.2.2-1+cuda11.1 libnvonnxparsers7=7.2.2-1+cuda11.1 libnvinfer-dev=7.2.2-1+cuda11.1 libnvinfer-plugin-dev=7.2.2-1+cuda11.1 libnvparsers-dev=7.2.2-1+cuda11.1 libnvonnxparsers-dev=7.2.2-1+cuda11.1 libcudnn8=8.0.5.39-1+cuda11.1 libcudnn8-dev=8.0.5.39-1+cuda11.1 python3-libnvinfer=7.2.2-1+cuda11.1 python3-libnvinfer-dev=7.2.2-1+cuda11.1
sudo apt-mark hold libnvinfer* libnvparsers* libnvonnxparsers* libcudnn8* python3-libnvinfer*
```
### 6. DeepStream SDK
Download from the [NVIDIA website](https://developer.nvidia.com/deepstream-sdk-download-tesla-archived): DeepStream 5.1 for Servers and Workstations (.deb)
```
sudo apt-get install ./deepstream-5.1_5.1.0-1_amd64.deb
rm ${HOME}/.cache/gstreamer-1.0/registry.x86_64.bin
sudo ln -snf /usr/local/cuda-11.1 /usr/local/cuda
```
### 7. Reboot
```
sudo reboot
```
</details>
##
**NOTE**: Do it for each GIE folder, replacing the GIE folder name (`gie1/nvdsinfer_custom_impl_Yolo`).
1. Set the `CUDA_VER` according to your DeepStream version
```
export CUDA_VER=XY.Z
```
* x86 platform
```
DeepStream 7.0 / 6.4 = 12.2
DeepStream 6.3 = 12.1
DeepStream 6.2 = 11.8
DeepStream 6.1.1 = 11.7
DeepStream 6.1 = 11.6
DeepStream 6.0.1 / 6.0 = 11.4
DeepStream 5.1 = 11.1
```
* Jetson platform
```
DeepStream 7.0 / 6.4 = 12.2
DeepStream 6.3 / 6.2 / 6.1.1 / 6.1 = 11.4
DeepStream 6.0.1 / 6.0 / 5.1 = 10.2
```
2. Make the lib
```
make -C gie1/nvdsinfer_custom_impl_Yolo clean && make -C gie1/nvdsinfer_custom_impl_Yolo
```
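For example, on DeepStream 7.0 / 6.4 on x86 the two steps above become (shown for the `gie1` folder from the note; repeat the `make` line for each additional GIE folder, e.g. a hypothetical `gie2`):
```
export CUDA_VER=12.2
make -C gie1/nvdsinfer_custom_impl_Yolo clean && make -C gie1/nvdsinfer_custom_impl_Yolo
make -C gie2/nvdsinfer_custom_impl_Yolo clean && make -C gie2/nvdsinfer_custom_impl_Yolo
```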
##
The same two changes appear in the `gpuYoloLayer`, `gpuYoloLayer_nc` and `gpuRegionLayer` CUDA kernels. The early-exit bounds check now uses braces:
```
uint y_id = blockIdx.y * blockDim.y + threadIdx.y;
uint z_id = blockIdx.z * blockDim.z + threadIdx.z;

if (x_id >= gridSizeX || y_id >= gridSizeY || z_id >= numBBoxes) {
  return;
}

const int numGridCells = gridSizeX * gridSizeY;
const int bbindex = y_id * gridSizeX + x_id;
```
and the flattened output index is now built from `numGridCells` and `bbindex` (the previous expression was `z_id * gridSizeX * gridSizeY + y_id * gridSizeY + x_id + lastInputSize`, which used `gridSizeY` as the row stride):
```
int count = numGridCells * z_id + bbindex + lastInputSize;

boxes[count * 4 + 0] = xc;
boxes[count * 4 + 1] = yc;
```