This folder contains the samples presented during the M&E partner training held on June 29, 2021.
The aim was to educate M&E partners on efficient and scalable inference and deployment of AI models:
- How to speed up AI inference with TensorRT
- How to deploy your models with Triton Inference Server
- How to run AI-powered video analytics applications using the DeepStream SDK with Triton Inference Server
Please refer to the READMEs in the respective subfolders for instructions on building and running the samples.
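As a quick sanity check before trying the Triton-based samples, one might probe the server's KServe-style HTTP health endpoint (`/v2/health/ready`), which Triton serves on its HTTP port (8000 by default). Below is a minimal sketch using only the Python standard library; the host and port defaults are assumptions for a local deployment:

```python
import http.client


def triton_ready(host="localhost", port=8000):
    """Return True if a Triton Inference Server at host:port reports ready.

    Triton answers GET /v2/health/ready with HTTP 200 once the server
    (and its loaded models) are ready to serve inference requests.
    """
    try:
        conn = http.client.HTTPConnection(host, port, timeout=2.0)
        conn.request("GET", "/v2/health/ready")
        response = conn.getresponse()
        return response.status == 200
    except OSError:
        # Connection refused or timed out: server not reachable.
        return False
```

A check like this is useful in launch scripts that must wait for Triton to finish loading a model repository before sending requests.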