Add YOLO-Pose and fixes

Author: Marcos Luciano
Date: 2023-08-31 21:25:53 -03:00
commit cc5d565f0a
parent 3bdafe5f8b
13 changed files with 41 additions and 33 deletions


@@ -1,7 +1,7 @@
MIT License
-Copyright (c) 2020-2022, Marcos Luciano Piropo Santos.
-Copyright (c) 2019-2022, NVIDIA CORPORATION. All rights reserved.
+Copyright (c) 2018-2023, Marcos Luciano Piropo Santos.
+Copyright (c) 2018-2023, NVIDIA CORPORATION. All rights reserved.
Permission is hereby granted, free of charge, to any person obtaining a copy
of this software and associated documentation files (the "Software"), to deal


@@ -2,6 +2,8 @@
NVIDIA DeepStream SDK 6.3 / 6.2 / 6.1.1 / 6.1 / 6.0.1 / 6.0 / 5.1 configuration for YOLO models
--------------------------------------------------------------------------------------------------
+### YOLO-Pose: https://github.com/marcoslucianops/DeepStream-Yolo-Pose
+--------------------------------------------------------------------------------------------------
### Important: please export the ONNX model with the new export file, generate the TensorRT engine again with the updated files, and use the new config_infer_primary file according to your model
--------------------------------------------------------------------------------------------------
@@ -21,11 +23,12 @@ NVIDIA DeepStream SDK 6.3 / 6.2 / 6.1.1 / 6.1 / 6.0.1 / 6.0 / 5.1 configuration
* Support for Darknet models (YOLOv4, etc) using cfg and weights conversion with GPU post-processing
* Support for YOLO-NAS, PPYOLOE+, PPYOLOE, DAMO-YOLO, YOLOX, YOLOR, YOLOv8, YOLOv7, YOLOv6 and YOLOv5 using ONNX conversion with GPU post-processing
* GPU bbox parser (it is slightly slower than CPU bbox parser on V100 GPU tests)
-* **Support for DeepStream 5.1**
-* **Custom ONNX model parser (`NvDsInferYoloCudaEngineGet`)**
-* **Dynamic batch-size for Darknet and ONNX exported models**
-* **INT8 calibration (PTQ) for Darknet and ONNX exported models**
-* **New output structure (fix wrong output on DeepStream < 6.2) - it need to export the ONNX model with the new export file, generate the TensorRT engine again with the updated files, and use the new config_infer_primary file according to your model**
+* Support for DeepStream 5.1
+* Custom ONNX model parser (`NvDsInferYoloCudaEngineGet`)
+* Dynamic batch-size for Darknet and ONNX exported models
+* INT8 calibration (PTQ) for Darknet and ONNX exported models
+* New output structure (fixes wrong output on DeepStream < 6.2) - requires exporting the ONNX model with the new export file, regenerating the TensorRT engine with the updated files, and using the new config_infer_primary file according to your model
+* **YOLO-Pose: https://github.com/marcoslucianops/DeepStream-Yolo-Pose**
##
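The "new config_infer_primary file" requirement above can be sketched as a minimal gst-nvinfer `[property]` group. The file and model names below are placeholders; only the keys are taken from DeepStream's nvinfer property set, and `NvDsInferParseYolo`/`NvDsInferYoloCudaEngineGet` are the parser symbols this repo exposes:

```ini
[property]
# Placeholder model paths - substitute your exported ONNX model
onnx-file=yolov8s.onnx
model-engine-file=model_b1_gpu0_fp32.engine
# 0=FP32, 1=INT8, 2=FP16; INT8 (PTQ) also needs a calibration table
network-mode=0
# int8-calib-file=calib.table
# Custom bbox parser and engine builder from libnvdsinfer_custom_impl_Yolo
parse-bbox-func-name=NvDsInferParseYolo
custom-lib-path=nvdsinfer_custom_impl_Yolo/libnvdsinfer_custom_impl_Yolo.so
engine-create-func-name=NvDsInferYoloCudaEngineGet
```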


@@ -1,5 +1,5 @@
################################################################################
-# Copyright (c) 2019, NVIDIA CORPORATION. All rights reserved.
+# Copyright (c) 2018-2023, NVIDIA CORPORATION. All rights reserved.
#
# Permission is hereby granted, free of charge, to any person obtaining a
# copy of this software and associated documentation files (the "Software"),


@@ -1,5 +1,5 @@
/*
-* Copyright (c) 2019-2021, NVIDIA CORPORATION. All rights reserved.
+* Copyright (c) 2018-2023, NVIDIA CORPORATION. All rights reserved.
*
* Permission is hereby granted, free of charge, to any person obtaining a
* copy of this software and associated documentation files (the "Software"),


@@ -1,5 +1,5 @@
/*
-* Copyright (c) 2018-2019, NVIDIA CORPORATION. All rights reserved.
+* Copyright (c) 2018-2023, NVIDIA CORPORATION. All rights reserved.
*
* Permission is hereby granted, free of charge, to any person obtaining a
* copy of this software and associated documentation files (the "Software"),


@@ -1,5 +1,5 @@
/*
-* Copyright (c) 2019, NVIDIA CORPORATION. All rights reserved.
+* Copyright (c) 2018-2023, NVIDIA CORPORATION. All rights reserved.
*
* Permission is hereby granted, free of charge, to any person obtaining a
* copy of this software and associated documentation files (the "Software"),
@@ -64,8 +64,9 @@ addBBoxProposal(const float bx1, const float by1, const float bx2, const float b
{
NvDsInferParseObjectInfo bbi = convertBBox(bx1, by1, bx2, by2, netW, netH);
-if (bbi.width < 1 || bbi.height < 1)
+if (bbi.width < 1 || bbi.height < 1) {
return;
+}
bbi.detectionConfidence = maxProb;
bbi.classId = maxIndex;
@@ -82,8 +83,9 @@ decodeTensorYolo(const float* boxes, const float* scores, const float* classes,
float maxProb = scores[b];
int maxIndex = (int) classes[b];
-if (maxProb < preclusterThreshold[maxIndex])
+if (maxProb < preclusterThreshold[maxIndex]) {
continue;
+}
float bxc = boxes[b * 4 + 0];
float byc = boxes[b * 4 + 1];
@@ -111,8 +113,9 @@ decodeTensorYoloE(const float* boxes, const float* scores, const float* classes,
float maxProb = scores[b];
int maxIndex = (int) classes[b];
-if (maxProb < preclusterThreshold[maxIndex])
+if (maxProb < preclusterThreshold[maxIndex]) {
continue;
+}
float bx1 = boxes[b * 4 + 0];
float by1 = boxes[b * 4 + 1];
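The braced early return added to `addBBoxProposal` guards the clamp-and-reject step of the parser. A minimal host-side sketch of that logic, with a simplified stand-in for `NvDsInferParseObjectInfo` (the real struct lives in `nvdsinfer.h`) and `convertBBox` internals inferred from its name and usage:

```cpp
#include <algorithm>

// Simplified stand-in for NvDsInferParseObjectInfo; field names mirror
// the ones the parser fills in.
struct ObjInfo {
  float left, top, width, height;
  float detectionConfidence;
  int classId;
};

// Sketch of convertBBox: clamp the corners to the network input
// resolution and derive left/top/width/height.
ObjInfo convertBBox(float bx1, float by1, float bx2, float by2, int netW, int netH) {
  ObjInfo b{};
  float x1 = std::clamp(bx1, 0.0f, static_cast<float>(netW));
  float y1 = std::clamp(by1, 0.0f, static_cast<float>(netH));
  float x2 = std::clamp(bx2, 0.0f, static_cast<float>(netW));
  float y2 = std::clamp(by2, 0.0f, static_cast<float>(netH));
  b.left = x1;
  b.top = y1;
  b.width = x2 - x1;
  b.height = y2 - y1;
  return b;
}

// Sketch of addBBoxProposal after this commit: degenerate boxes are
// skipped via the braced early return; accepted boxes get their score
// and class id. Returns whether the proposal was kept.
bool addBBoxProposal(float bx1, float by1, float bx2, float by2, int netW, int netH,
                     float maxProb, int maxIndex, ObjInfo& out) {
  ObjInfo bbi = convertBBox(bx1, by1, bx2, by2, netW, netH);
  if (bbi.width < 1 || bbi.height < 1) {
    return false;  // reject boxes collapsed to less than 1 px
  }
  bbi.detectionConfidence = maxProb;
  bbi.classId = maxIndex;
  out = bbi;
  return true;
}
```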


@@ -1,5 +1,5 @@
/*
-* Copyright (c) 2022, NVIDIA CORPORATION. All rights reserved.
+* Copyright (c) 2018-2023, NVIDIA CORPORATION. All rights reserved.
*
* Permission is hereby granted, free of charge, to any person obtaining a
* copy of this software and associated documentation files (the "Software"),
@@ -42,8 +42,9 @@ __global__ void decodeTensorYoloCuda(NvDsInferParseObjectInfo *binfo, float* box
{
int x_id = blockIdx.x * blockDim.x + threadIdx.x;
-if (x_id >= outputSize)
+if (x_id >= outputSize) {
return;
+}
float maxProb = scores[x_id];
int maxIndex = (int) classes[x_id];
@@ -81,8 +82,9 @@ __global__ void decodeTensorYoloECuda(NvDsInferParseObjectInfo *binfo, float* bo
{
int x_id = blockIdx.x * blockDim.x + threadIdx.x;
-if (x_id >= outputSize)
+if (x_id >= outputSize) {
return;
+}
float maxProb = scores[x_id];
int maxIndex = (int) classes[x_id];
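The `if (x_id >= outputSize)` guard in `decodeTensorYoloCuda`/`decodeTensorYoloECuda` is needed because the launch grid is rounded up to whole blocks, so the last block usually holds threads with no output element to decode. A host-side sketch of that arithmetic (8400 boxes and 256 threads per block are illustrative values, not taken from the launch code):

```cpp
// Ceil division: number of blocks needed to cover outputSize elements.
int gridBlocks(int outputSize, int threadsPerBlock) {
  return (outputSize + threadsPerBlock - 1) / threadsPerBlock;
}

// Threads launched beyond outputSize - exactly the ones the
// `x_id >= outputSize` guard must make return early.
int idleThreads(int outputSize, int threadsPerBlock) {
  return gridBlocks(outputSize, threadsPerBlock) * threadsPerBlock - outputSize;
}
```

For 8400 boxes with 256-thread blocks, 33 blocks are launched and the last 48 threads have no box assigned; without the guard they would index past the tensors.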


@@ -1,5 +1,5 @@
/*
-* Copyright (c) 2019-2021, NVIDIA CORPORATION. All rights reserved.
+* Copyright (c) 2018-2023, NVIDIA CORPORATION. All rights reserved.
*
* Permission is hereby granted, free of charge, to any person obtaining a
* copy of this software and associated documentation files (the "Software"),


@@ -1,5 +1,5 @@
/*
-* Copyright (c) 2019-2021, NVIDIA CORPORATION. All rights reserved.
+* Copyright (c) 2018-2023, NVIDIA CORPORATION. All rights reserved.
*
* Permission is hereby granted, free of charge, to any person obtaining a
* copy of this software and associated documentation files (the "Software"),


@@ -1,5 +1,5 @@
/*
-* Copyright (c) 2019-2021, NVIDIA CORPORATION. All rights reserved.
+* Copyright (c) 2018-2023, NVIDIA CORPORATION. All rights reserved.
*
* Permission is hereby granted, free of charge, to any person obtaining a
* copy of this software and associated documentation files (the "Software"),


@@ -1,5 +1,5 @@
/*
-* Copyright (c) 2019-2020, NVIDIA CORPORATION. All rights reserved.
+* Copyright (c) 2018-2023, NVIDIA CORPORATION. All rights reserved.
*
* Permission is hereby granted, free of charge, to any person obtaining a
* copy of this software and associated documentation files (the "Software"),


@@ -1,5 +1,5 @@
/*
-* Copyright (c) 2019-2021, NVIDIA CORPORATION. All rights reserved.
+* Copyright (c) 2018-2023, NVIDIA CORPORATION. All rights reserved.
*
* Permission is hereby granted, free of charge, to any person obtaining a
* copy of this software and associated documentation files (the "Software"),


@@ -1,5 +1,5 @@
/*
-* Copyright (c) 2019-2021, NVIDIA CORPORATION. All rights reserved.
+* Copyright (c) 2018-2023, NVIDIA CORPORATION. All rights reserved.
*
* Permission is hereby granted, free of charge, to any person obtaining a
* copy of this software and associated documentation files (the "Software"),