DeepStream 7.1 + Fixes + New model output format
@@ -16,7 +16,7 @@
git clone https://github.com/tinyvision/DAMO-YOLO.git
cd DAMO-YOLO
pip3 install -r requirements.txt
-pip3 install onnx onnxsim onnxruntime
+pip3 install onnx onnxslim onnxruntime
```

**NOTE**: It is recommended to use Python virtualenv.
@@ -107,6 +107,7 @@ export CUDA_VER=XY.Z
* x86 platform

```
+DeepStream 7.1 = 12.6
DeepStream 7.0 / 6.4 = 12.2
DeepStream 6.3 = 12.1
DeepStream 6.2 = 11.8

@@ -119,6 +120,7 @@ export CUDA_VER=XY.Z
* Jetson platform

```
+DeepStream 7.1 = 12.6
DeepStream 7.0 / 6.4 = 12.2
DeepStream 6.3 / 6.2 / 6.1.1 / 6.1 = 11.4
DeepStream 6.0.1 / 6.0 / 5.1 = 10.2
@@ -139,11 +141,11 @@ Edit the `config_infer_primary_damoyolo.txt` file according to your model (examp

```
[property]
...
-onnx-file=damoyolo_tinynasL25_S.onnx
+onnx-file=damoyolo_tinynasL25_S_477.pth.onnx
...
num-detected-classes=80
...
-parse-bbox-func-name=NvDsInferParseYoloE
+parse-bbox-func-name=NvDsInferParseYolo
...
```
docs/GoldYOLO.md (new file, 179 lines)
@@ -0,0 +1,179 @@
# Gold-YOLO usage

* [Convert model](#convert-model)
* [Compile the lib](#compile-the-lib)
* [Edit the config_infer_primary_goldyolo file](#edit-the-config_infer_primary_goldyolo-file)
* [Edit the deepstream_app_config file](#edit-the-deepstream_app_config-file)
* [Testing the model](#testing-the-model)

##

### Convert model

#### 1. Download the Gold-YOLO repo and install the requirements

```
git clone https://github.com/huawei-noah/Efficient-Computing.git
cd Efficient-Computing/Detection/Gold-YOLO
pip3 install -r requirements.txt
pip3 install onnx onnxslim onnxruntime
```

**NOTE**: It is recommended to use Python virtualenv.

#### 2. Copy the converter

Copy the `export_goldyolo.py` file from the `DeepStream-Yolo/utils` directory to the `Gold-YOLO` folder.

#### 3. Download the model

Download the `pt` file from [Gold-YOLO](https://github.com/huawei-noah/Efficient-Computing/tree/master/Detection/Gold-YOLO) releases

**NOTE**: You can use your custom model.

#### 4. Convert model

Generate the ONNX model file (example for Gold-YOLO-S)

```
python3 export_goldyolo.py -w Gold_s_pre_dist.pt --dynamic
```

**NOTE**: To change the inference size (default: 640)

```
-s SIZE
--size SIZE
-s HEIGHT WIDTH
--size HEIGHT WIDTH
```

Example for 1280

```
-s 1280
```

or

```
-s 1280 1280
```

**NOTE**: To simplify the ONNX model (DeepStream >= 6.0)

```
--simplify
```

**NOTE**: To use dynamic batch-size (DeepStream >= 6.1)

```
--dynamic
```

**NOTE**: To use static batch-size (example for batch-size = 4)

```
--batch 4
```

**NOTE**: If you are using DeepStream 5.1, remove the `--dynamic` arg and use opset 12 or lower. The default opset is 13.

```
--opset 12
```

#### 5. Copy generated files

Copy the generated ONNX model file and the labels.txt file (if generated) to the `DeepStream-Yolo` folder.
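
Before wiring the model into DeepStream, you can optionally sanity-check the export with onnxruntime (a minimal sketch, assuming the default export name from step 4; not part of the official steps):

```python
import onnxruntime as ort

# Load the exported model and inspect its inputs/outputs
# (hypothetical file name taken from the export step above).
session = ort.InferenceSession("Gold_s_pre_dist.pt.onnx",
                               providers=["CPUExecutionProvider"])

for inp in session.get_inputs():
    # With --dynamic, the batch dimension shows up as a symbolic name.
    print("input:", inp.name, inp.shape)
for out in session.get_outputs():
    print("output:", out.name, out.shape)
```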

##

### Compile the lib

1. Open the `DeepStream-Yolo` folder and compile the lib

2. Set the `CUDA_VER` according to your DeepStream version

```
export CUDA_VER=XY.Z
```

* x86 platform

```
DeepStream 7.1 = 12.6
DeepStream 7.0 / 6.4 = 12.2
DeepStream 6.3 = 12.1
DeepStream 6.2 = 11.8
DeepStream 6.1.1 = 11.7
DeepStream 6.1 = 11.6
DeepStream 6.0.1 / 6.0 = 11.4
DeepStream 5.1 = 11.1
```

* Jetson platform

```
DeepStream 7.1 = 12.6
DeepStream 7.0 / 6.4 = 12.2
DeepStream 6.3 / 6.2 / 6.1.1 / 6.1 = 11.4
DeepStream 6.0.1 / 6.0 / 5.1 = 10.2
```

3. Make the lib

```
make -C nvdsinfer_custom_impl_Yolo clean && make -C nvdsinfer_custom_impl_Yolo
```

##

### Edit the config_infer_primary_goldyolo file

Edit the `config_infer_primary_goldyolo.txt` file according to your model (example for Gold-YOLO-S with 80 classes)

```
[property]
...
onnx-file=Gold_s_pre_dist.pt.onnx
...
num-detected-classes=80
...
parse-bbox-func-name=NvDsInferParseYolo
...
```

**NOTE**: **Gold-YOLO** resizes the input with center padding. To get better accuracy, use

```
[property]
...
maintain-aspect-ratio=1
symmetric-padding=1
...
```

##

### Edit the deepstream_app_config file

```
...
[primary-gie]
...
config-file=config_infer_primary_goldyolo.txt
```

##

### Testing the model

```
deepstream-app -c deepstream_app_config.txt
```

**NOTE**: The TensorRT engine file may take a very long time to generate (sometimes more than 10 minutes).

**NOTE**: For more information about custom model configuration (`batch-size`, `network-mode`, etc.), please check the [`docs/customModels.md`](customModels.md) file.
@@ -17,6 +17,7 @@ export CUDA_VER=XY.Z
* x86 platform

```
+DeepStream 7.1 = 12.6
DeepStream 7.0 / 6.4 = 12.2
DeepStream 6.3 = 12.1
DeepStream 6.2 = 11.8

@@ -29,6 +30,7 @@ export CUDA_VER=XY.Z
* Jetson platform

```
+DeepStream 7.1 = 12.6
DeepStream 7.0 / 6.4 = 12.2
DeepStream 6.3 / 6.2 / 6.1.1 / 6.1 = 11.4
DeepStream 6.0.1 / 6.0 / 5.1 = 10.2
@@ -1,6 +1,6 @@
# PP-YOLOE / PP-YOLOE+ usage

-**NOTE**: You can use the release/2.6 branch of the PPYOLOE repo to convert all model versions.
+**NOTE**: You can use the develop branch of the PPYOLOE repo to convert all model versions.

* [Convert model](#convert-model)
* [Compile the lib](#compile-the-lib)

@@ -14,7 +14,7 @@

#### 1. Download the PaddleDetection repo and install the requirements

-https://github.com/PaddlePaddle/PaddleDetection/blob/release/2.7/docs/tutorials/INSTALL.md
+https://github.com/PaddlePaddle/PaddleDetection/blob/develop/docs/tutorials/INSTALL.md

**NOTE**: It is recommended to use Python virtualenv.

@@ -24,7 +24,7 @@ Copy the `export_ppyoloe.py` file from `DeepStream-Yolo/utils` directory to the

#### 3. Download the model

-Download the `pdparams` file from [PP-YOLOE](https://github.com/PaddlePaddle/PaddleDetection/tree/release/2.6/configs/ppyoloe) releases (example for PP-YOLOE+_s)
+Download the `pdparams` file from [PP-YOLOE](https://github.com/PaddlePaddle/PaddleDetection/tree/develop/configs/ppyoloe) releases (example for PP-YOLOE+_s)

```
wget https://paddledet.bj.bcebos.com/models/ppyoloe_plus_crn_s_80e_coco.pdparams

@@ -37,7 +37,7 @@ wget https://paddledet.bj.bcebos.com/models/ppyoloe_plus_crn_s_80e_coco.pdparams
Generate the ONNX model file (example for PP-YOLOE+_s)

```
-pip3 install onnx onnxsim onnxruntime paddle2onnx
+pip3 install onnx onnxslim onnxruntime paddle2onnx
python3 export_ppyoloe.py -w ppyoloe_plus_crn_s_80e_coco.pdparams -c configs/ppyoloe/ppyoloe_plus_crn_s_80e_coco.yml --dynamic
```

@@ -84,6 +84,7 @@ export CUDA_VER=XY.Z
* x86 platform

```
+DeepStream 7.1 = 12.6
DeepStream 7.0 / 6.4 = 12.2
DeepStream 6.3 = 12.1
DeepStream 6.2 = 11.8

@@ -96,6 +97,7 @@ export CUDA_VER=XY.Z
* Jetson platform

```
+DeepStream 7.1 = 12.6
DeepStream 7.0 / 6.4 = 12.2
DeepStream 6.3 / 6.2 / 6.1.1 / 6.1 = 11.4
DeepStream 6.0.1 / 6.0 / 5.1 = 10.2

@@ -116,11 +118,11 @@ Edit the `config_infer_primary_ppyoloe_plus.txt` file according to your model (e

```
[property]
...
-onnx-file=ppyoloe_plus_crn_s_80e_coco.onnx
+onnx-file=ppyoloe_plus_crn_s_80e_coco.pdparams.onnx
...
num-detected-classes=80
...
-parse-bbox-func-name=NvDsInferParseYoloE
+parse-bbox-func-name=NvDsInferParseYolo
...
```
@@ -14,13 +14,13 @@

#### 1. Download the PaddleDetection repo and install the requirements

-https://github.com/PaddlePaddle/PaddleDetection/blob/release/2.7/docs/tutorials/INSTALL.md
+https://github.com/PaddlePaddle/PaddleDetection/blob/develop/docs/tutorials/INSTALL.md

```
git clone https://github.com/lyuwenyu/RT-DETR.git
cd RT-DETR/rtdetr_paddle
pip3 install -r requirements.txt
-pip3 install onnx onnxsim onnxruntime paddle2onnx
+pip3 install onnx onnxslim onnxruntime paddle2onnx
```

**NOTE**: It is recommended to use Python virtualenv.

@@ -90,6 +90,7 @@ export CUDA_VER=XY.Z
* x86 platform

```
+DeepStream 7.1 = 12.6
DeepStream 7.0 / 6.4 = 12.2
DeepStream 6.3 = 12.1
DeepStream 6.2 = 11.8

@@ -102,6 +103,7 @@ export CUDA_VER=XY.Z
* Jetson platform

```
+DeepStream 7.1 = 12.6
DeepStream 7.0 / 6.4 = 12.2
DeepStream 6.3 / 6.2 / 6.1.1 / 6.1 = 11.4
DeepStream 6.0.1 / 6.0 / 5.1 = 10.2

@@ -122,7 +124,7 @@ Edit the `config_infer_primary_rtdetr.txt` file according to your model (example

```
[property]
...
-onnx-file=rtdetr_r50vd_6x_coco.onnx
+onnx-file=rtdetr_r50vd_6x_coco.pdparams.onnx
...
num-detected-classes=80
...
@@ -18,7 +18,7 @@
git clone https://github.com/lyuwenyu/RT-DETR.git
cd RT-DETR/rtdetr_pytorch
pip3 install -r requirements.txt
-pip3 install onnx onnxsim onnxruntime
+pip3 install onnx onnxslim onnxruntime
```

**NOTE**: It is recommended to use Python virtualenv.

@@ -109,6 +109,7 @@ export CUDA_VER=XY.Z
* x86 platform

```
+DeepStream 7.1 = 12.6
DeepStream 7.0 / 6.4 = 12.2
DeepStream 6.3 = 12.1
DeepStream 6.2 = 11.8

@@ -121,6 +122,7 @@ export CUDA_VER=XY.Z
* Jetson platform

```
+DeepStream 7.1 = 12.6
DeepStream 7.0 / 6.4 = 12.2
DeepStream 6.3 / 6.2 / 6.1.1 / 6.1 = 11.4
DeepStream 6.0.1 / 6.0 / 5.1 = 10.2

@@ -141,7 +143,7 @@ Edit the `config_infer_primary_rtdetr.txt` file according to your model (example

```
[property]
...
-onnx-file=rtdetr_r50vd_6x_coco_from_paddle.onnx
+onnx-file=rtdetr_r50vd_6x_coco_from_paddle.pth.onnx
...
num-detected-classes=80
...
@@ -17,9 +17,8 @@
```
git clone https://github.com/ultralytics/ultralytics.git
cd ultralytics
-pip3 install -r requirements.txt
-python3 setup.py install
-pip3 install onnx onnxsim onnxruntime
+pip3 install -e .
+pip3 install onnx onnxslim onnxruntime
```

**NOTE**: It is recommended to use Python virtualenv.

@@ -30,17 +29,17 @@ Copy the `export_rtdetr_ultralytics.py` file from `DeepStream-Yolo/utils` direct

#### 3. Download the model

-Download the `pt` file from [Ultralytics](https://github.com/ultralytics/assets/releases/) releases (example for RT-DETR-l)
+Download the `pt` file from [Ultralytics](https://github.com/ultralytics/assets/releases/) releases (example for RT-DETR-L)

```
-wget https://github.com/ultralytics/assets/releases/download/v0.0.0/rtdetr-l.pt
+wget https://github.com/ultralytics/assets/releases/download/v8.2.0/rtdetr-l.pt
```

**NOTE**: You can use your custom model.

#### 4. Convert model

-Generate the ONNX model file (example for RT-DETR-l)
+Generate the ONNX model file (example for RT-DETR-L)

```
python3 export_rtdetr_ultralytics.py -w rtdetr-l.pt --dynamic

@@ -110,6 +109,7 @@ export CUDA_VER=XY.Z
* x86 platform

```
+DeepStream 7.1 = 12.6
DeepStream 7.0 / 6.4 = 12.2
DeepStream 6.3 = 12.1
DeepStream 6.2 = 11.8

@@ -122,6 +122,7 @@ export CUDA_VER=XY.Z
* Jetson platform

```
+DeepStream 7.1 = 12.6
DeepStream 7.0 / 6.4 = 12.2
DeepStream 6.3 / 6.2 / 6.1.1 / 6.1 = 11.4
DeepStream 6.0.1 / 6.0 / 5.1 = 10.2

@@ -137,12 +138,12 @@ make -C nvdsinfer_custom_impl_Yolo clean && make -C nvdsinfer_custom_impl_Yolo

### Edit the config_infer_primary_rtdetr file

-Edit the `config_infer_primary_rtdetr.txt` file according to your model (example for RT-DETR-l with 80 classes)
+Edit the `config_infer_primary_rtdetr.txt` file according to your model (example for RT-DETR-L with 80 classes)

```
[property]
...
-onnx-file=rtdetr-l.onnx
+onnx-file=rtdetr-l.pt.onnx
...
num-detected-classes=80
...
docs/RTMDet.md (new file, 209 lines)
@@ -0,0 +1,209 @@
# RTMDet (MMYOLO) usage

* [Convert model](#convert-model)
* [Compile the lib](#compile-the-lib)
* [Edit the config_infer_primary_rtmdet file](#edit-the-config_infer_primary_rtmdet-file)
* [Edit the deepstream_app_config file](#edit-the-deepstream_app_config-file)
* [Testing the model](#testing-the-model)

##

### Convert model

#### 1. Download the RTMDet (MMYOLO) repo and install the requirements

```
git clone https://github.com/open-mmlab/mmyolo.git
cd mmyolo
pip3 install openmim
mim install "mmengine>=0.6.0"
mim install "mmcv>=2.0.0rc4,<2.1.0"
mim install "mmdet>=3.0.0,<4.0.0"
pip3 install -r requirements/albu.txt
mim install -v -e .
pip3 install onnx onnxslim onnxruntime
```

**NOTE**: It is recommended to use Python virtualenv.

#### 2. Copy the converter

Copy the `export_rtmdet.py` file from the `DeepStream-Yolo/utils` directory to the `mmyolo` folder.

#### 3. Download the model

Download the `pth` file from [RTMDet (MMYOLO)](https://github.com/open-mmlab/mmyolo/tree/main/configs/rtmdet) releases (example for RTMDet-s*)

```
wget https://download.openmmlab.com/mmrazor/v1/rtmdet_distillation/kd_s_rtmdet_m_neck_300e_coco/kd_s_rtmdet_m_neck_300e_coco_20230220_140647-446ff003.pth
```

**NOTE**: You can use your custom model.

#### 4. Convert model

Generate the ONNX model file (example for RTMDet-s*)

```
python3 export_rtmdet.py -w kd_s_rtmdet_m_neck_300e_coco_20230220_140647-446ff003.pth -c configs/rtmdet/distillation/kd_s_rtmdet_m_neck_300e_coco.py --dynamic
```

**NOTE**: To change the inference size (default: 640)

```
-s SIZE
--size SIZE
-s HEIGHT WIDTH
--size HEIGHT WIDTH
```

Example for 1280

```
-s 1280
```

or

```
-s 1280 1280
```

**NOTE**: To simplify the ONNX model (DeepStream >= 6.0)

```
--simplify
```

**NOTE**: To use dynamic batch-size (DeepStream >= 6.1)

```
--dynamic
```

**NOTE**: To use static batch-size (example for batch-size = 4)

```
--batch 4
```

**NOTE**: If you are using DeepStream 5.1, remove the `--dynamic` arg and use opset 12 or lower. The default opset is 17.

```
--opset 12
```

#### 5. Copy generated files

Copy the generated ONNX model file and the labels.txt file (if generated) to the `DeepStream-Yolo` folder.

##

### Compile the lib

1. Open the `DeepStream-Yolo` folder and compile the lib

2. Set the `CUDA_VER` according to your DeepStream version

```
export CUDA_VER=XY.Z
```

* x86 platform

```
DeepStream 7.1 = 12.6
DeepStream 7.0 / 6.4 = 12.2
DeepStream 6.3 = 12.1
DeepStream 6.2 = 11.8
DeepStream 6.1.1 = 11.7
DeepStream 6.1 = 11.6
DeepStream 6.0.1 / 6.0 = 11.4
DeepStream 5.1 = 11.1
```

* Jetson platform

```
DeepStream 7.1 = 12.6
DeepStream 7.0 / 6.4 = 12.2
DeepStream 6.3 / 6.2 / 6.1.1 / 6.1 = 11.4
DeepStream 6.0.1 / 6.0 / 5.1 = 10.2
```

3. Make the lib

```
make -C nvdsinfer_custom_impl_Yolo clean && make -C nvdsinfer_custom_impl_Yolo
```

##

### Edit the config_infer_primary_rtmdet file

Edit the `config_infer_primary_rtmdet.txt` file according to your model (example for RTMDet-s* with 80 classes)

```
[property]
...
onnx-file=kd_s_rtmdet_m_neck_300e_coco_20230220_140647-446ff003.pth.onnx
...
num-detected-classes=80
...
parse-bbox-func-name=NvDsInferParseYolo
...
```

**NOTE**: **RTMDet (MMYOLO)** resizes the input with center padding. To get better accuracy, use

```
[property]
...
maintain-aspect-ratio=1
symmetric-padding=1
...
```

**NOTE**: **RTMDet (MMYOLO)** uses the BGR color format for the image input. It is important to set `model-color-format` according to the trained values.

```
[property]
...
model-color-format=1
...
```

**NOTE**: **RTMDet (MMYOLO)** normalizes the image during preprocessing. It is important to set `net-scale-factor` and `offsets` according to the trained values.

Default: `mean = 0.485, 0.456, 0.406` and `std = 0.229, 0.224, 0.225`

```
[property]
...
net-scale-factor=0.0173520735727919486
offsets=103.53;116.28;123.675
...
```
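
The values above can be derived from the `mean`/`std` pair, since DeepStream preprocesses with `y = net-scale-factor * (x - offsets)` using a single scalar scale. A minimal sketch of the arithmetic (assuming the default values above; averaging the per-channel std is an approximation forced by the single scale factor):

```python
# Derive DeepStream preprocess values from torchvision-style mean/std (0-1 range).
mean = [0.485, 0.456, 0.406]  # RGB order
std = [0.229, 0.224, 0.225]

# offsets are in pixel scale and BGR order (to match model-color-format=1)
offsets = [m * 255.0 for m in reversed(mean)]
# one scalar scale for all channels: average the per-channel std
net_scale_factor = 1.0 / (sum(std) / len(std) * 255.0)

print(f"net-scale-factor={net_scale_factor}")             # ~0.0173520735...
print("offsets=" + ";".join(f"{o:g}" for o in offsets))   # 103.53;116.28;123.675
```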

##

### Edit the deepstream_app_config file

```
...
[primary-gie]
...
config-file=config_infer_primary_rtmdet.txt
```

##

### Testing the model

```
deepstream-app -c deepstream_app_config.txt
```

**NOTE**: The TensorRT engine file may take a very long time to generate (sometimes more than 10 minutes).

**NOTE**: For more information about custom model configuration (`batch-size`, `network-mode`, etc.), please check the [`docs/customModels.md`](customModels.md) file.
@@ -19,7 +19,7 @@ git clone https://github.com/Deci-AI/super-gradients.git
cd super-gradients
pip3 install -r requirements.txt
python3 setup.py install
-pip3 install onnx onnxsim onnxruntime
+pip3 install onnx onnxslim onnxruntime
```

**NOTE**: It is recommended to use Python virtualenv.

@@ -140,6 +140,7 @@ export CUDA_VER=XY.Z
* x86 platform

```
+DeepStream 7.1 = 12.6
DeepStream 7.0 / 6.4 = 12.2
DeepStream 6.3 = 12.1
DeepStream 6.2 = 11.8

@@ -152,6 +153,7 @@ export CUDA_VER=XY.Z
* Jetson platform

```
+DeepStream 7.1 = 12.6
DeepStream 7.0 / 6.4 = 12.2
DeepStream 6.3 / 6.2 / 6.1.1 / 6.1 = 11.4
DeepStream 6.0.1 / 6.0 / 5.1 = 10.2

@@ -172,11 +174,11 @@ Edit the `config_infer_primary_yolonas.txt` file according to your model (exampl

```
[property]
...
-onnx-file=yolo_nas_s_coco.onnx
+onnx-file=yolo_nas_s_coco.pth.onnx
...
num-detected-classes=80
...
-parse-bbox-func-name=NvDsInferParseYoloE
+parse-bbox-func-name=NvDsInferParseYolo
...
```
@@ -20,7 +20,7 @@
git clone https://github.com/WongKinYiu/yolor.git
cd yolor
pip3 install -r requirements.txt
-pip3 install onnx onnxsim onnxruntime
+pip3 install onnx onnxslim onnxruntime
```

**NOTE**: It is recommended to use Python virtualenv.

@@ -125,6 +125,7 @@ export CUDA_VER=XY.Z
* x86 platform

```
+DeepStream 7.1 = 12.6
DeepStream 7.0 / 6.4 = 12.2
DeepStream 6.3 = 12.1
DeepStream 6.2 = 11.8

@@ -137,6 +138,7 @@ export CUDA_VER=XY.Z
* Jetson platform

```
+DeepStream 7.1 = 12.6
DeepStream 7.0 / 6.4 = 12.2
DeepStream 6.3 / 6.2 / 6.1.1 / 6.1 = 11.4
DeepStream 6.0.1 / 6.0 / 5.1 = 10.2

@@ -157,7 +159,7 @@ Edit the `config_infer_primary_yolor.txt` file according to your model (example

```
[property]
...
-onnx-file=yolor_csp.onnx
+onnx-file=yolor_csp.pt.onnx
...
num-detected-classes=80
...
@@ -19,7 +19,7 @@ git clone https://github.com/Megvii-BaseDetection/YOLOX.git
cd YOLOX
pip3 install -r requirements.txt
python3 setup.py develop
-pip3 install onnx onnxsim onnxruntime
+pip3 install onnx onnxslim onnxruntime
```

**NOTE**: It is recommended to use Python virtualenv.

@@ -89,6 +89,7 @@ export CUDA_VER=XY.Z
* x86 platform

```
+DeepStream 7.1 = 12.6
DeepStream 7.0 / 6.4 = 12.2
DeepStream 6.3 = 12.1
DeepStream 6.2 = 11.8

@@ -101,6 +102,7 @@ export CUDA_VER=XY.Z
* Jetson platform

```
+DeepStream 7.1 = 12.6
DeepStream 7.0 / 6.4 = 12.2
DeepStream 6.3 / 6.2 / 6.1.1 / 6.1 = 11.4
DeepStream 6.0.1 / 6.0 / 5.1 = 10.2

@@ -121,7 +123,7 @@ Edit the `config_infer_primary_yolox.txt` file according to your model (example

```
[property]
...
-onnx-file=yolox_s.onnx
+onnx-file=yolox_s.pth.onnx
...
num-detected-classes=80
...
@@ -20,7 +20,7 @@
git clone https://github.com/ultralytics/yolov5.git
cd yolov5
pip3 install -r requirements.txt
-pip3 install onnx onnxsim onnxruntime
+pip3 install onnx onnxslim onnxruntime
```

**NOTE**: It is recommended to use Python virtualenv.

@@ -117,6 +117,7 @@ export CUDA_VER=XY.Z
* x86 platform

```
+DeepStream 7.1 = 12.6
DeepStream 7.0 / 6.4 = 12.2
DeepStream 6.3 = 12.1
DeepStream 6.2 = 11.8

@@ -129,6 +130,7 @@ export CUDA_VER=XY.Z
* Jetson platform

```
+DeepStream 7.1 = 12.6
DeepStream 7.0 / 6.4 = 12.2
DeepStream 6.3 / 6.2 / 6.1.1 / 6.1 = 11.4
DeepStream 6.0.1 / 6.0 / 5.1 = 10.2

@@ -149,7 +151,7 @@ Edit the `config_infer_primary_yoloV5.txt` file according to your model (example

```
[property]
...
-onnx-file=yolov5s.onnx
+onnx-file=yolov5s.pt.onnx
...
num-detected-classes=80
...
@@ -20,7 +20,7 @@
git clone https://github.com/meituan/YOLOv6.git
cd YOLOv6
pip3 install -r requirements.txt
-pip3 install onnx onnxsim onnxruntime
+pip3 install onnx onnxslim onnxruntime
```

**NOTE**: It is recommended to use Python virtualenv.

@@ -117,6 +117,7 @@ export CUDA_VER=XY.Z
* x86 platform

```
+DeepStream 7.1 = 12.6
DeepStream 7.0 / 6.4 = 12.2
DeepStream 6.3 = 12.1
DeepStream 6.2 = 11.8

@@ -129,6 +130,7 @@ export CUDA_VER=XY.Z
* Jetson platform

```
+DeepStream 7.1 = 12.6
DeepStream 7.0 / 6.4 = 12.2
DeepStream 6.3 / 6.2 / 6.1.1 / 6.1 = 11.4
DeepStream 6.0.1 / 6.0 / 5.1 = 10.2

@@ -149,7 +151,7 @@ Edit the `config_infer_primary_yoloV6.txt` file according to your model (example

```
[property]
...
-onnx-file=yolov6s.onnx
+onnx-file=yolov6s.pt.onnx
...
num-detected-classes=80
...
@@ -18,7 +18,7 @@
git clone https://github.com/WongKinYiu/yolov7.git
cd yolov7
pip3 install -r requirements.txt
-pip3 install onnx onnxsim onnxruntime
+pip3 install onnx onnxslim onnxruntime
```

**NOTE**: It is recommended to use Python virtualenv.

@@ -119,6 +119,7 @@ export CUDA_VER=XY.Z
* x86 platform

```
+DeepStream 7.1 = 12.6
DeepStream 7.0 / 6.4 = 12.2
DeepStream 6.3 = 12.1
DeepStream 6.2 = 11.8

@@ -131,6 +132,7 @@ export CUDA_VER=XY.Z
* Jetson platform

```
+DeepStream 7.1 = 12.6
DeepStream 7.0 / 6.4 = 12.2
DeepStream 6.3 / 6.2 / 6.1.1 / 6.1 = 11.4
DeepStream 6.0.1 / 6.0 / 5.1 = 10.2

@@ -151,7 +153,7 @@ Edit the `config_infer_primary_yoloV7.txt` file according to your model (example

```
[property]
...
-onnx-file=yolov7.onnx
+onnx-file=yolov7.pt.onnx
...
num-detected-classes=80
...
@@ -17,9 +17,8 @@
```
git clone https://github.com/ultralytics/ultralytics.git
cd ultralytics
-pip3 install -r requirements.txt
-python3 setup.py install
-pip3 install onnx onnxsim onnxruntime
+pip3 install -e .
+pip3 install onnx onnxslim onnxruntime
```

**NOTE**: It is recommended to use Python virtualenv.

@@ -33,7 +32,7 @@ Copy the `export_yoloV8.py` file from `DeepStream-Yolo/utils` directory to the `

Download the `pt` file from [YOLOv8](https://github.com/ultralytics/assets/releases/) releases (example for YOLOv8s)

```
-wget https://github.com/ultralytics/assets/releases/download/v0.0.0/yolov8s.pt
+wget https://github.com/ultralytics/assets/releases/download/v8.2.0/yolov8s.pt
```

**NOTE**: You can use your custom model.

@@ -85,7 +84,7 @@ or

--batch 4
```

-**NOTE**: If you are using the DeepStream 5.1, remove the `--dynamic` arg and use opset 12 or lower. The default opset is 16.
+**NOTE**: If you are using the DeepStream 5.1, remove the `--dynamic` arg and use opset 12 or lower. The default opset is 17.

```
--opset 12

@@ -110,6 +109,7 @@ export CUDA_VER=XY.Z
* x86 platform

```
+DeepStream 7.1 = 12.6
DeepStream 7.0 / 6.4 = 12.2
DeepStream 6.3 = 12.1
DeepStream 6.2 = 11.8

@@ -122,6 +122,7 @@ export CUDA_VER=XY.Z
* Jetson platform

```
+DeepStream 7.1 = 12.6
DeepStream 7.0 / 6.4 = 12.2
DeepStream 6.3 / 6.2 / 6.1.1 / 6.1 = 11.4
DeepStream 6.0.1 / 6.0 / 5.1 = 10.2

@@ -142,7 +143,7 @@ Edit the `config_infer_primary_yoloV8.txt` file according to your model (example

```
[property]
...
-onnx-file=yolov8s.onnx
+onnx-file=yolov8s.pt.onnx
...
num-detected-classes=80
...
docs/YOLOv9.md (new file, 185 lines)
@@ -0,0 +1,185 @@
# YOLOv9 usage

**NOTE**: The yaml file is not required.

* [Convert model](#convert-model)
* [Compile the lib](#compile-the-lib)
* [Edit the config_infer_primary_yoloV9 file](#edit-the-config_infer_primary_yolov9-file)
* [Edit the deepstream_app_config file](#edit-the-deepstream_app_config-file)
* [Testing the model](#testing-the-model)

##

### Convert model

#### 1. Download the YOLOv9 repo and install the requirements

```
git clone https://github.com/WongKinYiu/yolov9.git
cd yolov9
pip3 install -r requirements.txt
pip3 install onnx onnxslim onnxruntime
```

**NOTE**: It is recommended to use Python virtualenv.

#### 2. Copy the converter

Copy the `export_yoloV9.py` file from the `DeepStream-Yolo/utils` directory to the `yolov9` folder.

#### 3. Download the model

Download the `pt` file from [YOLOv9](https://github.com/WongKinYiu/yolov9/releases/) releases (example for YOLOv9-S)

```
wget https://github.com/WongKinYiu/yolov9/releases/download/v0.1/yolov9-s-converted.pt
```

**NOTE**: You can use your custom model.

#### 4. Convert model

Generate the ONNX model file (example for YOLOv9-S)

```
python3 export_yoloV9.py -w yolov9-s-converted.pt --dynamic
```

**NOTE**: To change the inference size (default: 640)

```
-s SIZE
--size SIZE
-s HEIGHT WIDTH
--size HEIGHT WIDTH
```

Example for 1280

```
-s 1280
```

or

```
-s 1280 1280
```

**NOTE**: To simplify the ONNX model (DeepStream >= 6.0)

```
--simplify
```

**NOTE**: To use dynamic batch-size (DeepStream >= 6.1)

```
--dynamic
```

**NOTE**: To use static batch-size (example for batch-size = 4)

```
--batch 4
```

**NOTE**: If you are using DeepStream 5.1, remove the `--dynamic` arg and use opset 12 or lower. The default opset is 17.

```
--opset 12
```
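
Once exported, a quick structural check of the ONNX file can catch a bad opset or a malformed graph before TensorRT sees it (a sketch, assuming the default export name above; not part of the official steps):

```python
import onnx

# Hypothetical file name taken from the export step above.
model = onnx.load("yolov9-s-converted.pt.onnx")
onnx.checker.check_model(model)  # raises if the graph is malformed

# The default ai.onnx opset should be 17 (12 or lower for DeepStream 5.1).
for opset in model.opset_import:
    print(opset.domain or "ai.onnx", opset.version)
```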

#### 5. Copy generated files

Copy the generated ONNX model file and the labels.txt file (if generated) to the `DeepStream-Yolo` folder.

##

### Compile the lib

1. Open the `DeepStream-Yolo` folder and compile the lib

2. Set the `CUDA_VER` according to your DeepStream version

```
export CUDA_VER=XY.Z
```

* x86 platform

```
DeepStream 7.1 = 12.6
DeepStream 7.0 / 6.4 = 12.2
DeepStream 6.3 = 12.1
DeepStream 6.2 = 11.8
DeepStream 6.1.1 = 11.7
DeepStream 6.1 = 11.6
DeepStream 6.0.1 / 6.0 = 11.4
DeepStream 5.1 = 11.1
```

* Jetson platform

```
DeepStream 7.1 = 12.6
DeepStream 7.0 / 6.4 = 12.2
DeepStream 6.3 / 6.2 / 6.1.1 / 6.1 = 11.4
DeepStream 6.0.1 / 6.0 / 5.1 = 10.2
```

3. Make the lib

```
make -C nvdsinfer_custom_impl_Yolo clean && make -C nvdsinfer_custom_impl_Yolo
```

##

### Edit the config_infer_primary_yoloV9 file

Edit the `config_infer_primary_yoloV9.txt` file according to your model (example for YOLOv9-S with 80 classes)

```
[property]
...
onnx-file=yolov9-s-converted.pt.onnx
...
num-detected-classes=80
...
parse-bbox-func-name=NvDsInferParseYolo
...
```

**NOTE**: **YOLOv9** resizes the input with center padding. To get better accuracy, use

```
[property]
...
maintain-aspect-ratio=1
symmetric-padding=1
...
```
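
For intuition, center padding (letterboxing) keeps the aspect ratio and pads both sides of the short dimension equally, which is what `maintain-aspect-ratio=1` plus `symmetric-padding=1` reproduces. A rough sketch of the geometry (illustrative only, not the DeepStream implementation):

```python
def letterbox_geometry(src_w, src_h, dst_w=640, dst_h=640):
    """Compute the scaled size and symmetric padding for center padding."""
    scale = min(dst_w / src_w, dst_h / src_h)  # keep aspect ratio
    new_w, new_h = round(src_w * scale), round(src_h * scale)
    pad_x = (dst_w - new_w) / 2                # equal padding on both sides
    pad_y = (dst_h - new_h) / 2
    return new_w, new_h, pad_x, pad_y

# e.g. a 1920x1080 frame into a 640x640 network input
print(letterbox_geometry(1920, 1080))  # (640, 360, 0.0, 140.0)
```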

##

### Edit the deepstream_app_config file

```
...
[primary-gie]
...
config-file=config_infer_primary_yoloV9.txt
```

##

### Testing the model

```
deepstream-app -c deepstream_app_config.txt
```

**NOTE**: The TensorRT engine file may take a very long time to generate (sometimes more than 10 minutes).

**NOTE**: For more information about custom model configuration (`batch-size`, `network-mode`, etc.), please check the [`docs/customModels.md`](customModels.md) file.
@@ -34,6 +34,7 @@ export CUDA_VER=XY.Z
* x86 platform

```
+DeepStream 7.1 = 12.6
DeepStream 7.0 / 6.4 = 12.2
DeepStream 6.3 = 12.1
DeepStream 6.2 = 11.8

@@ -46,6 +47,7 @@ export CUDA_VER=XY.Z
* Jetson platform

```
+DeepStream 7.1 = 12.6
DeepStream 7.0 / 6.4 = 12.2
DeepStream 6.3 / 6.2 / 6.1.1 / 6.1 = 11.4
DeepStream 6.0.1 / 6.0 / 5.1 = 10.2
@@ -29,6 +29,157 @@ sudo apt-get install linux-headers-$(uname -r)
sudo reboot
```

<details><summary>DeepStream 7.1</summary>

### 1. Dependencies

```
sudo apt-get install dkms
sudo apt-get install libssl3 libssl-dev libgles2-mesa-dev libgstreamer1.0-0 gstreamer1.0-tools gstreamer1.0-plugins-good gstreamer1.0-plugins-bad gstreamer1.0-plugins-ugly gstreamer1.0-libav libgstreamer-plugins-base1.0-dev libgstrtspserver-1.0-0 libjansson4 libyaml-cpp-dev libjsoncpp-dev protobuf-compiler
```

### 2. CUDA Keyring

```
wget https://developer.download.nvidia.com/compute/cuda/repos/ubuntu2204/x86_64/cuda-keyring_1.0-1_all.deb
sudo dpkg -i cuda-keyring_1.0-1_all.deb
sudo apt-get update
```

### 3. GCC 12

```
sudo apt-get install gcc-12 g++-12
sudo update-alternatives --install /usr/bin/gcc gcc /usr/bin/gcc-12 12
sudo update-alternatives --install /usr/bin/g++ g++ /usr/bin/g++-12 12
sudo update-initramfs -u
```

### 4. NVIDIA Driver

<details><summary>TITAN, GeForce RTX / GTX series and RTX / Quadro series</summary><blockquote>

- Download

```
wget https://us.download.nvidia.com/XFree86/Linux-x86_64/560.35.03/NVIDIA-Linux-x86_64-560.35.03.run
```

<blockquote><details><summary>Laptop</summary>

* Run

```
sudo sh NVIDIA-Linux-x86_64-560.35.03.run --no-cc-version-check --silent --disable-nouveau --dkms --install-libglvnd
```

**NOTE**: This step will disable the nouveau drivers.

* Reboot

```
sudo reboot
```

* Install

```
sudo sh NVIDIA-Linux-x86_64-560.35.03.run --no-cc-version-check --silent --disable-nouveau --dkms --install-libglvnd
```

**NOTE**: If you are using a laptop with NVIDIA Optimus, run

```
sudo apt-get install nvidia-prime
sudo prime-select nvidia
```

</details></blockquote>

<blockquote><details><summary>Desktop</summary>

* Run

```
sudo sh NVIDIA-Linux-x86_64-560.35.03.run --no-cc-version-check --silent --disable-nouveau --dkms --install-libglvnd --run-nvidia-xconfig
```

**NOTE**: This step will disable the nouveau drivers.

* Reboot

```
sudo reboot
```

* Install

```
sudo sh NVIDIA-Linux-x86_64-560.35.03.run --no-cc-version-check --silent --disable-nouveau --dkms --install-libglvnd --run-nvidia-xconfig
```

</details></blockquote>

</blockquote></details>

<details><summary>Data center / Tesla series</summary><blockquote>

- Download

```
wget https://us.download.nvidia.com/tesla/535.183.06/NVIDIA-Linux-x86_64-535.183.06.run
```

* Run

```
sudo sh NVIDIA-Linux-x86_64-535.183.06.run --no-cc-version-check --silent --disable-nouveau --dkms --install-libglvnd --run-nvidia-xconfig
```

</blockquote></details>

### 5. CUDA

```
wget https://developer.download.nvidia.com/compute/cuda/12.6.2/local_installers/cuda_12.6.2_560.35.03_linux.run
sudo sh cuda_12.6.2_560.35.03_linux.run --silent --toolkit
```

* Export environment variables

```
echo $'export PATH=/usr/local/cuda-12.6/bin${PATH:+:${PATH}}\nexport LD_LIBRARY_PATH=/usr/local/cuda-12.6/lib64${LD_LIBRARY_PATH:+:${LD_LIBRARY_PATH}}' >> ~/.bashrc && source ~/.bashrc
```

### 6. TensorRT

```
sudo apt-key adv --fetch-keys https://developer.download.nvidia.com/compute/cuda/repos/ubuntu2204/x86_64/3bf863cc.pub
sudo add-apt-repository "deb https://developer.download.nvidia.com/compute/cuda/repos/ubuntu2204/x86_64/ /"
sudo apt-get update
sudo apt-get install libnvinfer-dev=10.3.0.26-1+cuda12.5 libnvinfer-dispatch-dev=10.3.0.26-1+cuda12.5 libnvinfer-dispatch10=10.3.0.26-1+cuda12.5 libnvinfer-headers-dev=10.3.0.26-1+cuda12.5 libnvinfer-headers-plugin-dev=10.3.0.26-1+cuda12.5 libnvinfer-lean-dev=10.3.0.26-1+cuda12.5 libnvinfer-lean10=10.3.0.26-1+cuda12.5 libnvinfer-plugin-dev=10.3.0.26-1+cuda12.5 libnvinfer-plugin10=10.3.0.26-1+cuda12.5 libnvinfer-vc-plugin-dev=10.3.0.26-1+cuda12.5 libnvinfer-vc-plugin10=10.3.0.26-1+cuda12.5 libnvinfer10=10.3.0.26-1+cuda12.5 libnvonnxparsers-dev=10.3.0.26-1+cuda12.5 libnvonnxparsers10=10.3.0.26-1+cuda12.5 tensorrt-dev=10.3.0.26-1+cuda12.5 libnvinfer-samples=10.3.0.26-1+cuda12.5 libnvinfer-bin=10.3.0.26-1+cuda12.5 libcudnn9-cuda-12=9.3.0.75-1 libcudnn9-dev-cuda-12=9.3.0.75-1
sudo apt-mark hold libnvinfer* libnvparsers* libnvonnxparsers* libcudnn9* python3-libnvinfer* uff-converter-tf* onnx-graphsurgeon* graphsurgeon-tf* tensorrt*
```
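
If you want to confirm the pinned packages landed correctly, the TensorRT Python bindings can report the version (a quick sketch; assumes the `python3-libnvinfer` bindings are installed, which the steps above only hold, not install):

```python
import tensorrt as trt

# DeepStream 7.1 pins TensorRT 10.3.0.26, so this should print 10.3.x.
print(trt.__version__)
```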

### 7. DeepStream SDK

DeepStream 7.1 for Servers and Workstations

```
wget --content-disposition 'https://api.ngc.nvidia.com/v2/resources/org/nvidia/deepstream/7.1/files?redirect=true&path=deepstream-7.1_7.1.0-1_amd64.deb' -O deepstream-7.1_7.1.0-1_amd64.deb
sudo apt-get install ./deepstream-7.1_7.1.0-1_amd64.deb
rm ${HOME}/.cache/gstreamer-1.0/registry.x86_64.bin
sudo ln -snf /usr/local/cuda-12.6 /usr/local/cuda
```

### 8. Reboot

```
sudo reboot
```

</details>

<details><summary>DeepStream 7.0</summary>

### 1. Dependencies
@@ -59,6 +59,7 @@ export CUDA_VER=XY.Z
* x86 platform

```
+DeepStream 7.1 = 12.6
DeepStream 7.0 / 6.4 = 12.2
DeepStream 6.3 = 12.1
DeepStream 6.2 = 11.8

@@ -71,6 +72,7 @@ export CUDA_VER=XY.Z
* Jetson platform

```
+DeepStream 7.1 = 12.6
DeepStream 7.0 / 6.4 = 12.2
DeepStream 6.3 / 6.2 / 6.1.1 / 6.1 = 11.4
DeepStream 6.0.1 / 6.0 / 5.1 = 10.2