# Heterogenous-Inferencing

This project is part of the "Hardware Accelerators for AI - Hands On" course at the Otto-von-Guericke University.

We aim to implement an efficient way to evaluate heterogeneous inferencing infrastructures for convolutional neural networks. Our heterogeneous inferencing infrastructure consists of an Intel® Neural Compute Stick 2, a USB accelerator based on the Coral Edge TPU, an Intel CPU, and its integrated graphics unit.

We executed the following inference benchmarks to determine the performance of the devices listed above:

- energy consumption of the "low-power" USB AI accelerators
- runtime of asynchronous batch inferences on Intel-based hardware
- runtime of a single inference and of synchronous batch inferences on all of the hardware
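The single-inference runtime measurement above can be sketched as a simple timing harness. This is an illustrative example, not code from this repository: `infer_fn` is a placeholder for a device-specific call (e.g. an OpenVINO or TFLite invocation), and the warmup/run counts are arbitrary assumptions.

```python
import time
import statistics

def benchmark_single_inference(infer_fn, sample, warmup=5, runs=50):
    """Time repeated single inferences of `infer_fn` on `sample`.

    Returns per-run latencies in milliseconds. `infer_fn` is a
    placeholder for a device-specific inference call.
    """
    for _ in range(warmup):  # warm up caches and device pipelines
        infer_fn(sample)
    latencies = []
    for _ in range(runs):
        start = time.perf_counter()
        infer_fn(sample)
        latencies.append((time.perf_counter() - start) * 1000.0)
    return latencies

# Dummy CPU-bound workload standing in for a real model:
lat = benchmark_single_inference(lambda x: sum(i * i for i in range(1000)), None)
print(f"median={statistics.median(lat):.3f} ms  mean={statistics.mean(lat):.3f} ms")
```

Discarding the warmup runs matters especially for USB accelerators, where the first invocations include model upload and device initialization.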

Textual statistics of the measurements can be found in the `logs` folder.
Estimates derived from the measurement results can be found in the `res` folder.

Violin plots for all measurements, created on a per-model basis, can be found in the `violinplots` folder.
Plots of aggregated statistics can be found in the `plots` folder; they are grouped into the following three categories: single operations, dense layers, and convolutional layers.