Fog Computing for Edge AI Workloads in Smart Transportation Systems

Authors

  • Manoj Yadav, Independent Researcher, Kankarbagh, Patna, India – 800020

Keywords:

fog computing; edge AI; smart transportation; V2X; latency-aware scheduling; traffic optimization; micro-datacenter; container orchestration.

Abstract

Smart transportation systems (STS) increasingly rely on AI models that process high-rate data from roadside cameras, connected vehicles, and infrastructure sensors. Centralized cloud processing alone struggles to meet stringent real-time constraints for perception, prediction, and control—especially under volatile wireless bandwidth and bursty event loads. Edge computing helps by placing inference close to data sources, but limited resources on embedded devices create bottlenecks during peak demand and complicate model lifecycle management. This manuscript investigates fog computing as a middle-tier orchestration layer between edge and cloud to host elastic micro-datacenters at network aggregation points (e.g., traffic operations centers, base-station sites, and municipal fiber hubs). We propose a fog-native architecture that combines (i) latency-aware workload placement, (ii) deadline-driven scheduling with early-exit inference, (iii) adaptive model compression, and (iv) predictive offloading using traffic and radio context.
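The interplay of components (i) and (ii) can be illustrated with a minimal placement sketch: choose the cheapest tier (edge, fog, or cloud) whose estimated network plus compute latency meets the request deadline, falling back to an early-exit inference head when no tier is feasible at full model depth. All tier names, latency figures, and the assumed early-exit speedup below are illustrative, not values from the paper.

```python
from dataclasses import dataclass

@dataclass
class Tier:
    """A candidate execution tier (names and numbers are illustrative)."""
    name: str
    net_rtt_ms: float   # estimated round-trip network latency
    compute_ms: float   # estimated full-model inference time on this tier
    cost: float         # relative energy/monetary cost per request

def place(tiers, deadline_ms, early_exit_speedup=0.5):
    """Pick the cheapest tier meeting the deadline; if none does at full
    depth, retry assuming an early-exit head (here assumed to halve
    compute time). Returns (tier, used_early_exit)."""
    feasible = [t for t in tiers if t.net_rtt_ms + t.compute_ms <= deadline_ms]
    if feasible:
        return min(feasible, key=lambda t: t.cost), False
    feasible = [t for t in tiers
                if t.net_rtt_ms + t.compute_ms * early_exit_speedup <= deadline_ms]
    if feasible:
        return min(feasible, key=lambda t: t.cost), True
    return None, False  # deadline infeasible; caller must shed or degrade

tiers = [
    Tier("edge",  net_rtt_ms=2,  compute_ms=90, cost=1.0),
    Tier("fog",   net_rtt_ms=8,  compute_ms=25, cost=1.5),
    Tier("cloud", net_rtt_ms=45, compute_ms=10, cost=0.8),
]
tier, used_early_exit = place(tiers, deadline_ms=40)
print(tier.name, used_early_exit)  # → fog False
```

With a 40 ms deadline, only the fog tier is feasible at full depth: the edge device is compute-bound and the cloud path is network-bound, which is the gap the fog layer is meant to fill.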

We develop a city-scale simulation that couples a microscopic traffic simulator with a network emulator and a containerized AI serving stack. Workloads include object detection for incident response, trajectory forecasting for bus ETA, and signal-phase timing optimization. Compared with cloud-only and edge-only baselines, the proposed fog+edge approach reduces median end-to-end inference latency by 41–63%, cuts 95th-percentile tail latency by 52–68%, and increases deadline-meeting rate by 20–33 percentage points under rush-hour load. Bandwidth costs drop due to upstream feature compression, while energy per inference declines as fog nodes leverage right-sized GPUs/NPUs at higher utilization. A one-way ANOVA confirms statistically significant improvements across latency and deadline-meeting rate; post-hoc pairwise tests show fog+edge significantly outperforms both baselines (p < 0.01). We conclude with practical guidance for municipalities: deploy fog clusters at fiber aggregation rings, use admission control with soft deadlines, and combine model-aware caching with DAG-level prefetch to tame microbursts.
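The recommended "admission control with soft deadlines" can be sketched as a small backlog-based gate: a request is admitted only if its projected completion time (queued work ahead of it plus its own service time) fits within the deadline plus a slack allowance; otherwise it is shed so the client can retry at another tier or degrade. The class name, service time, and slack values below are hypothetical, chosen only to make the mechanism concrete.

```python
class SoftDeadlineAdmission:
    """Admission control with soft deadlines (illustrative sketch):
    admit when projected completion time <= deadline + slack."""

    def __init__(self, service_ms, slack_ms):
        self.service_ms = service_ms  # assumed per-request service time
        self.slack_ms = slack_ms      # tolerated soft-deadline overrun
        self.backlog_ms = 0.0         # queued work ahead of a new arrival

    def offer(self, deadline_ms):
        """Return True and enqueue if the request can be admitted."""
        projected = self.backlog_ms + self.service_ms
        if projected <= deadline_ms + self.slack_ms:
            self.backlog_ms += self.service_ms
            return True
        return False  # shed: retry at another tier or use a degraded model

    def complete(self):
        """Account for one finished request draining the backlog."""
        self.backlog_ms = max(0.0, self.backlog_ms - self.service_ms)

ac = SoftDeadlineAdmission(service_ms=20, slack_ms=10)
decisions = [ac.offer(50) for _ in range(5)]  # a microburst of five requests
print(decisions)  # → [True, True, True, False, False]
```

Under a microburst, the gate admits only as much work as the soft deadline can absorb and sheds the rest early, which is what keeps tail latency bounded rather than letting queues grow unchecked.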

Published

2025-10-03

How to Cite

Yadav, Manoj. “Fog Computing for Edge AI Workloads in Smart Transportation Systems”. International Journal of Advanced Research in Computer Science and Engineering (IJARCSE) 1, no. 4 (October 3, 2025): 22–29. Accessed January 22, 2026. https://ijarcse.org/index.php/ijarcse/article/view/81.
