diff --git a/customModels.md b/customModels.md
index 5cc24e3..42fed77 100644
--- a/customModels.md
+++ b/customModels.md
@@ -21,10 +21,10 @@ How to edit DeepStream files to your custom model
##
### Editing default model
-1. Donwload [my native folder](https://github.com/marcoslucianops/DeepStream-Yolo/tree/master/native), rename to yolo and move to your deepstream/sources folder.
+1. Download [my native folder](https://github.com/marcoslucianops/DeepStream-Yolo/tree/master/native), rename to yolo and move to your deepstream/sources folder.
2. Copy and rename your obj.names file to labels.txt in the deepstream/sources/yolo directory.
3. Copy your yolo.cfg and yolo.weights files to deepstream/sources/yolo directory.
-4. Edit config_infer_primary.txt for your model (example for YOLOv4)
+4. Edit config_infer_primary.txt for your model
```
[property]
...
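# The elided keys typically include entries like the following
# (illustrative values, assumptions for a Darknet YOLO model; adjust for yours):
custom-network-config=yolo.cfg
model-file=yolo.weights
labelfile-path=labels.txt
num-detected-classes=80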
@@ -305,7 +305,7 @@ deepstream-app -c deepstream_app_config.txt
You can get metadata from DeepStream in Python and C++. For C++, you need to edit the deepstream-app or deepstream-test code. For Python, you need to install and edit [deepstream_python_apps](https://github.com/NVIDIA-AI-IOT/deepstream_python_apps).
-You need manipulate NvDsObjectMeta ([Python](https://docs.nvidia.com/metropolis/deepstream/python-api/PYTHON_API/NvDsMeta/NvDsObjectMeta.html) [C++](https://docs.nvidia.com/metropolis/deepstream/sdk-api/Meta/_NvDsObjectMeta.html)), NvDsFrameMeta ([Python](https://docs.nvidia.com/metropolis/deepstream/python-api/PYTHON_API/NvDsMeta/NvDsFrameMeta.html) [C++](https://docs.nvidia.com/metropolis/deepstream/sdk-api/Meta/_NvDsFrameMeta.html)) and NvOSD_RectParams ([Python](https://docs.nvidia.com/metropolis/deepstream/python-api/PYTHON_API/NvDsOSD/NvOSD_RectParams.html) [C++](https://docs.nvidia.com/metropolis/deepstream/sdk-api/OSD/Data_Structures/_NvOSD_FrameRectParams.html)) to get label, position, etc. of bboxs.
+You need to manipulate NvDsObjectMeta ([Python](https://docs.nvidia.com/metropolis/deepstream/python-api/PYTHON_API/NvDsMeta/NvDsObjectMeta.html)/[C++](https://docs.nvidia.com/metropolis/deepstream/sdk-api/Meta/_NvDsObjectMeta.html)), NvDsFrameMeta ([Python](https://docs.nvidia.com/metropolis/deepstream/python-api/PYTHON_API/NvDsMeta/NvDsFrameMeta.html)/[C++](https://docs.nvidia.com/metropolis/deepstream/sdk-api/Meta/_NvDsFrameMeta.html)) and NvOSD_RectParams ([Python](https://docs.nvidia.com/metropolis/deepstream/python-api/PYTHON_API/NvDsOSD/NvOSD_RectParams.html)/[C++](https://docs.nvidia.com/metropolis/deepstream/sdk-api/OSD/Data_Structures/_NvOSD_FrameRectParams.html)) to get the label, position, etc. of the bounding boxes.
In the C++ deepstream-app application, your code needs to be in the analytics_done_buf_prob function.
In the C++/Python deepstream-test applications, your code needs to be in the osd_sink_pad_buffer_probe/tiler_src_pad_buffer_probe function.
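A minimal sketch of such a probe, following the traversal pattern used in the deepstream_python_apps samples (it requires a working DeepStream install with pyds, so treat it as illustrative rather than runnable as-is):

```python
import gi
gi.require_version('Gst', '1.0')
from gi.repository import Gst
import pyds

def osd_sink_pad_buffer_probe(pad, info, u_data):
    # Walk the batch -> frame -> object metadata chain.
    gst_buffer = info.get_buffer()
    batch_meta = pyds.gst_buffer_get_nvds_batch_meta(hash(gst_buffer))
    l_frame = batch_meta.frame_meta_list
    while l_frame is not None:
        frame_meta = pyds.NvDsFrameMeta.cast(l_frame.data)
        l_obj = frame_meta.obj_meta_list
        while l_obj is not None:
            obj_meta = pyds.NvDsObjectMeta.cast(l_obj.data)
            rect = obj_meta.rect_params  # NvOSD_RectParams: bbox position/size
            print(obj_meta.obj_label, rect.left, rect.top, rect.width, rect.height)
            try:
                l_obj = l_obj.next
            except StopIteration:
                break
        try:
            l_frame = l_frame.next
        except StopIteration:
            break
    return Gst.PadProbeReturn.OK
```

The metadata lists are C linked lists exposed through pyds, which is why each node must be `cast` and why reaching the end raises StopIteration.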
diff --git a/multipleInferences.md b/multipleInferences.md
index 1737934..825de28 100644
--- a/multipleInferences.md
+++ b/multipleInferences.md
@@ -1,7 +1,7 @@
# Multiple YOLO inferences
How to use multiple GIEs on DeepStream
-1. Donwload [my native folder](https://github.com/marcoslucianops/DeepStream-Yolo/tree/master/native), rename to yolo and move to your deepstream/sources folder.
+1. Download [my native folder](https://github.com/marcoslucianops/DeepStream-Yolo/tree/master/native), rename to yolo and move to your deepstream/sources folder.
2. Make a folder named pgie in the deepstream/sources/yolo directory (where you will put the files of the primary inference).
3. For each secondary inference, make a folder in the deepstream/sources/yolo directory named sgie* (* = 1, 2, 3, etc., depending on the number of secondary inferences), where you will put the files of the other inferences.
4. Copy and rename each obj.names file to labels.txt in its inference directory (pgie, sgie*), according to each inference type.
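The folder layout from steps 2-4 can be sketched as follows (paths relative to deepstream/sources and the number of sgie folders are illustrative assumptions):

```shell
# create the primary and secondary inference folders
mkdir -p yolo/pgie yolo/sgie1 yolo/sgie2
# each inference folder then receives its renamed labels file, e.g.:
# cp /path/to/pgie/obj.names yolo/pgie/labels.txt
```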
diff --git a/readme.md b/readme.md
index 0242dc7..c206eda 100644
--- a/readme.md
+++ b/readme.md
@@ -69,7 +69,6 @@ Torchvision 0.8.1
```
DeepStream SDK: https://youtu.be/Qi_F_IYpuFQ
-
Darknet: https://youtu.be/AxJJ9fnJ7Xk
| TensorRT | Precision | Resolution | IoU=0.5:0.95 | IoU=0.5 | IoU=0.75 | FPS (with display) | FPS (without display) |
@@ -173,9 +172,9 @@ pre-cluster-threshold = 0.25 (CONF_THRESH)
##
### Native TensorRT conversion
-Donwload [my native folder](https://github.com/marcoslucianops/DeepStream-Yolo/tree/master/native), rename to yolo and move to your deepstream/sources folder.
+Download [my native folder](https://github.com/marcoslucianops/DeepStream-Yolo/tree/master/native), rename to yolo and move to your deepstream/sources folder.
-Donwload cfg and weights files from your model and move to deepstream/sources/yolo folder.
+Download cfg and weights files from your model and move to deepstream/sources/yolo folder.
* [YOLOv4x-Mish](https://github.com/AlexeyAB/darknet) [[cfg](https://raw.githubusercontent.com/AlexeyAB/darknet/master/cfg/yolov4x-mish.cfg)] [[weights](https://github.com/AlexeyAB/darknet/releases/download/darknet_yolo_v4_pre/yolov4x-mish.weights)]
* [YOLOv4-CSP](https://github.com/WongKinYiu/ScaledYOLOv4/tree/yolov4-csp) [[cfg](https://raw.githubusercontent.com/AlexeyAB/darknet/master/cfg/yolov4-csp.cfg)] [[weights](https://github.com/AlexeyAB/darknet/releases/download/darknet_yolo_v4_pre/yolov4-csp.weights)]
@@ -254,5 +253,4 @@ Note: If your model are listed in native tab, you can use [my native folder](htt
##
For commercial DeepStream SDK projects, contact me at the email address available on GitHub.
-
My projects: https://www.youtube.com/MarcosLucianoTV
\ No newline at end of file