Add YOLO-Pose and fixes
README.md: 13 changed lines
```diff
@@ -2,6 +2,8 @@
 NVIDIA DeepStream SDK 6.3 / 6.2 / 6.1.1 / 6.1 / 6.0.1 / 6.0 / 5.1 configuration for YOLO models
 --------------------------------------------------------------------------------------------------
+### YOLO-Pose: https://github.com/marcoslucianops/DeepStream-Yolo-Pose
+--------------------------------------------------------------------------------------------------
 ### Important: please export the ONNX model with the new export file, generate the TensorRT engine again with the updated files, and use the new config_infer_primary file according to your model
 --------------------------------------------------------------------------------------------------
```
```diff
@@ -21,11 +23,12 @@ NVIDIA DeepStream SDK 6.3 / 6.2 / 6.1.1 / 6.1 / 6.0.1 / 6.0 / 5.1 configuration
 * Support for Darknet models (YOLOv4, etc) using cfg and weights conversion with GPU post-processing
 * Support for YOLO-NAS, PPYOLOE+, PPYOLOE, DAMO-YOLO, YOLOX, YOLOR, YOLOv8, YOLOv7, YOLOv6 and YOLOv5 using ONNX conversion with GPU post-processing
 * GPU bbox parser (it is slightly slower than CPU bbox parser on V100 GPU tests)
-* **Support for DeepStream 5.1**
-* **Custom ONNX model parser (`NvDsInferYoloCudaEngineGet`)**
-* **Dynamic batch-size for Darknet and ONNX exported models**
-* **INT8 calibration (PTQ) for Darknet and ONNX exported models**
-* **New output structure (fixes wrong output on DeepStream < 6.2) - you need to export the ONNX model with the new export file, generate the TensorRT engine again with the updated files, and use the new config_infer_primary file according to your model**
+* Support for DeepStream 5.1
+* Custom ONNX model parser (`NvDsInferYoloCudaEngineGet`)
+* Dynamic batch-size for Darknet and ONNX exported models
+* INT8 calibration (PTQ) for Darknet and ONNX exported models
+* New output structure (fixes wrong output on DeepStream < 6.2) - you need to export the ONNX model with the new export file, generate the TensorRT engine again with the updated files, and use the new config_infer_primary file according to your model
+* **YOLO-Pose: https://github.com/marcoslucianops/DeepStream-Yolo-Pose**
 
 ##
```
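The "Important" note in this commit boils down to: re-export the ONNX model with the updated export file, let a fresh TensorRT engine be built from it, and point the matching `config_infer_primary` file at both. A minimal sketch of the relevant `[property]` keys, assuming a hypothetical YOLOv8 setup (the model and engine file names are placeholders; the parser and engine-create function names are the ones this repository exposes):

```ini
[property]
# ONNX model re-exported with the updated export file (file name is a placeholder)
onnx-file=yolov8s.onnx
# Delete any stale engine file so the engine is rebuilt from the new ONNX model
model-engine-file=model_b1_gpu0_fp32.engine
batch-size=1
# 0 = FP32; use 1 with an int8-calib-file for INT8 (PTQ)
network-mode=0
num-detected-classes=80
# Custom YOLO output parser and engine builder provided by this repository
parse-bbox-func-name=NvDsInferParseYolo
custom-lib-path=nvdsinfer_custom_impl_Yolo/libnvdsinfer_custom_impl_Yolo.so
engine-create-func-name=NvDsInferYoloCudaEngineGet
```

Mixing an engine built from an old export with the new parser (or vice versa) is what produces the wrong output the commit message refers to, so regenerate both together.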