title: Edge-AI ...
Currently, the heavy computing capacity required to run deep learning models necessitates that the majority of AI processes be carried out in the cloud.
However, running AI in the cloud has its disadvantages, including the fact that it requires an internet connection, and that performance can be impacted by bandwidth and latency limitations.
Edge AI, also known as ‘on-device AI’, is a distributed computing paradigm that allows for AI algorithms to be run locally on a device, using the data produced by that device.
Running AI at the ‘edge’ of the local network removes the need for the device to be connected to the internet or to centralized cloud servers.
Edge AI offers significant improvements in response speed and data security.
Executing AI close to the data source allows for processes like data creation and decision-making to take place in milliseconds, making Edge AI ideal for applications where near-instantaneous responses are essential.
Note: If you are new to NNStreamer, see the usage example screenshots first.
These examples show how to implement edge AI using NNStreamer. They have been tested on an Ubuntu PC and a Raspberry Pi.
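As a rough illustration (not part of the examples themselves), the following Python sketch shows how an NNStreamer pipeline can run inference locally on camera frames. The device node, input resolution, and the model file `model.tflite` are assumptions and must be adapted to the actual model.

```python
# Minimal sketch of an on-device NNStreamer pipeline.
# Assumptions: a V4L2 camera at /dev/video0 and a TensorFlow-Lite model
# named model.tflite that accepts 224x224 RGB input; adjust as needed
# (e.g. insert tensor_transform if the model expects normalized input).
import gi
gi.require_version('Gst', '1.0')
from gi.repository import Gst, GLib

Gst.init(None)

# Camera frames are converted to tensors and passed to the local model;
# results are delivered to the application through tensor_sink.
pipeline = Gst.parse_launch(
    'v4l2src device=/dev/video0 ! videoconvert ! videoscale ! '
    'video/x-raw,width=224,height=224,format=RGB ! '
    'tensor_converter ! '
    'tensor_filter framework=tensorflow-lite model=model.tflite ! '
    'tensor_sink name=result'
)

def on_new_data(sink, buffer):
    # Inference output arrives here as a GstBuffer of tensors.
    print('received %d bytes of tensor data' % buffer.get_size())

sink = pipeline.get_by_name('result')
sink.set_property('emit-signal', True)
sink.connect('new-data', on_new_data)

pipeline.set_state(Gst.State.PLAYING)
GLib.MainLoop().run()
```

Because the whole pipeline runs on the device, no frame has to leave it unless the application decides to send a result.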
- The device analyzes the camera image locally and transmits only meaningful information. In this example, when the device finds the target object the user is looking for, it starts video streaming to the server (a hedged sketch of this flow is given after this list).
- Text classification assigns input sentences to predefined groups.
- Image segmentation is the process of partitioning a digital image into multiple segments. This application shows how to send the data as a flatbuf to the edge device and run inference on the Edge TPU.
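The first example above (streaming only when a target is detected) could look roughly like the following sketch. The tee/valve layout, the server address, the model file `detect.tflite`, and the way the output tensor is decoded are assumptions for illustration, not the actual example code.

```python
# Hedged sketch: run object detection on-device and open a video stream
# to the server only when the target object appears.
# Assumptions: detect.tflite takes 300x300 RGB input and emits one float32
# score per class; a server is listening at 192.168.0.2:5000.
import gi
gi.require_version('Gst', '1.0')
from gi.repository import Gst, GLib
import numpy as np

Gst.init(None)

TARGET_CLASS = 0  # hypothetical index of the object the user wants

pipeline = Gst.parse_launch(
    'v4l2src device=/dev/video0 ! videoconvert ! tee name=t '
    # Branch 1: local inference on scaled-down frames.
    't. ! queue ! videoscale ! video/x-raw,width=300,height=300,format=RGB ! '
    'tensor_converter ! '
    'tensor_filter framework=tensorflow-lite model=detect.tflite ! '
    'tensor_sink name=result '
    # Branch 2: streaming to the server, blocked until a target is found.
    't. ! queue ! valve name=gate drop=true ! jpegenc ! '
    'tcpclientsink host=192.168.0.2 port=5000'
)

gate = pipeline.get_by_name('gate')

def on_new_data(sink, buffer):
    # Assumption: the output tensor is a flat float32 score vector.
    ok, info = buffer.map(Gst.MapFlags.READ)
    if not ok:
        return
    scores = np.frombuffer(info.data, dtype=np.float32)
    buffer.unmap(info)
    if scores[TARGET_CLASS] > 0.6:
        gate.set_property('drop', False)  # start streaming to the server

sink = pipeline.get_by_name('result')
sink.set_property('emit-signal', True)
sink.connect('new-data', on_new_data)

pipeline.set_state(Gst.State.PLAYING)
GLib.MainLoop().run()
```

The valve element keeps the streaming branch closed until the detection callback sees the target, so only meaningful video ever reaches the network.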
* These descriptions are simplified to help you understand the three edge-AI examples and may differ from the actual implementations.