Performance Evaluation of Lightweight Deep Learning Models on Embedded Systems

Authors

  • Dr. T. Aswini, KL University, Vaddeswaram, A.P., India

DOI:

https://doi.org/10.63345/ijarcse.v1.i1.204

Keywords:

Lightweight deep learning, embedded systems, MobileNet, SqueezeNet, inference latency, edge AI, resource-constrained devices, performance benchmarking

Abstract

The rise of edge computing and the proliferation of Internet-of-Things (IoT) devices have highlighted the urgent need for deploying efficient and lightweight deep learning (DL) models on resource-constrained embedded systems. While conventional deep learning architectures have demonstrated outstanding performance in various domains, their high computational and memory requirements hinder their application in low-power embedded environments. This study evaluates and benchmarks lightweight DL models—namely MobileNetV2, SqueezeNet, ShuffleNet, and Tiny-YOLO—on popular embedded platforms including Raspberry Pi 4, NVIDIA Jetson Nano, and Google Coral Dev Board.

The need to deploy deep learning inference at the edge is driven by latency-sensitive applications such as real-time surveillance, health monitoring, and autonomous navigation, where cloud connectivity may be unreliable or impractical. This manuscript therefore adopts a comprehensive evaluation framework that not only measures performance metrics such as inference time, accuracy, power consumption, and memory usage, but also simulates real-world use cases to test deployment feasibility.
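To make the measurement procedure concrete, the sketch below shows one way such per-inference latency and peak-memory figures could be collected for a TensorFlow Lite model on a Linux-based board such as the Raspberry Pi 4. It is an illustrative outline only: the model file name (mobilenet_v2.tflite), the run count, and the use of the tflite_runtime package are assumptions made for the example, not details taken from the paper.

    # Minimal latency/memory probe for a TFLite model on an embedded Linux board.
    # Assumptions: tflite_runtime is installed and "mobilenet_v2.tflite" exists;
    # neither the file name nor the run count comes from the paper itself.
    import time
    import resource
    import numpy as np
    from tflite_runtime.interpreter import Interpreter

    interpreter = Interpreter(model_path="mobilenet_v2.tflite")
    interpreter.allocate_tensors()
    inp = interpreter.get_input_details()[0]

    # Synthetic input matching the model's expected shape and dtype.
    dummy = np.random.random_sample(inp["shape"]).astype(inp["dtype"])

    latencies = []
    for _ in range(100):  # repeated runs to average out scheduling jitter
        interpreter.set_tensor(inp["index"], dummy)
        start = time.perf_counter()
        interpreter.invoke()
        latencies.append((time.perf_counter() - start) * 1000.0)  # milliseconds

    print(f"mean latency: {np.mean(latencies):.2f} ms "
          f"(p95: {np.percentile(latencies, 95):.2f} ms)")
    # On Linux, ru_maxrss is reported in kilobytes.
    print(f"peak RSS: {resource.getrusage(resource.RUSAGE_SELF).ru_maxrss / 1024:.1f} MB")

On Jetson-class or Coral devices the same timing loop would typically run against a GPU-accelerated runtime or an Edge TPU delegate rather than the plain CPU interpreter.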

A combination of statistical analysis and simulation research is applied to ensure robust and generalizable results across platforms and tasks. Notably, ANOVA tests reveal statistically significant differences among the models in inference time and efficiency, supporting hardware-specific model recommendations. The findings suggest that MobileNetV2 achieves a favorable balance between accuracy and latency, while SqueezeNet offers the lowest memory and power usage for severely constrained devices. Tiny-YOLO, although heavier, remains valuable for object detection tasks on GPU-supported systems.
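For readers unfamiliar with the statistical step, the following minimal sketch shows how a one-way ANOVA over per-model latency samples can be computed with SciPy. The latency arrays are synthetic placeholders used purely for illustration; the paper's actual measurements are not reproduced here.

    # One-way ANOVA on per-model inference latencies.
    # The samples below are hypothetical placeholder data, not the paper's results.
    import numpy as np
    from scipy.stats import f_oneway

    rng = np.random.default_rng(0)
    # Hypothetical latency samples (ms) for three models on one platform.
    mobilenet_v2 = rng.normal(loc=38.0, scale=3.0, size=30)
    squeezenet   = rng.normal(loc=31.0, scale=2.5, size=30)
    tiny_yolo    = rng.normal(loc=95.0, scale=8.0, size=30)

    f_stat, p_value = f_oneway(mobilenet_v2, squeezenet, tiny_yolo)
    print(f"F = {f_stat:.2f}, p = {p_value:.4g}")
    # A small p-value (e.g. < 0.05) indicates the mean latencies differ
    # significantly across models, motivating hardware-specific recommendations.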

This paper contributes a practical guide for researchers and developers seeking to implement edge AI systems in real-world conditions. It also underscores the importance of platform-aware model selection to maximize efficiency and maintain task-specific accuracy, ultimately advancing the integration of intelligent capabilities into embedded systems.


Published

2025-02-04

How to Cite

Aswini, T. “Performance Evaluation of Lightweight Deep Learning Models on Embedded Systems.” International Journal of Advanced Research in Computer Science and Engineering (IJARCSE) 1, no. 1 (February 4, 2025): 22–27. Accessed October 19, 2025. https://ijarcse.org/index.php/ijarcse/article/view/48.
