Fog Computing for Edge AI Workloads in Smart Transportation Systems
Keywords:
fog computing; edge AI; smart transportation; V2X; latency-aware scheduling; traffic optimization; micro-datacenter; container orchestration.

Abstract
Smart transportation systems (STS) increasingly rely on AI models that process high-rate data from roadside cameras, connected vehicles, and infrastructure sensors. Centralized cloud processing alone struggles to meet stringent real-time constraints for perception, prediction, and control—especially under volatile wireless bandwidth and bursty event loads. Edge computing helps by placing inference close to data sources, but limited resources on embedded devices create bottlenecks during peak demand and complicate model lifecycle management. This manuscript investigates fog computing as a middle-tier orchestration layer between edge and cloud to host elastic micro-datacenters at network aggregation points (e.g., traffic operations centers, base-station sites, and municipal fiber hubs). We propose a fog-native architecture that combines (i) latency-aware workload placement, (ii) deadline-driven scheduling with early-exit inference, (iii) adaptive model compression, and (iv) predictive offloading using traffic and radio context.
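To make components (i) and (ii) concrete, the combination of latency-aware placement and deadline-driven scheduling with early-exit inference can be sketched as a simple tier-selection policy. This is an illustrative sketch only: the tier names, latency figures, and the `place` function are hypothetical, and a real controller would derive its estimates from runtime telemetry (radio context, queue depth, per-model profiles) rather than fixed constants.

```python
from dataclasses import dataclass

# Hypothetical tiers and latency figures for illustration only.
@dataclass
class Tier:
    name: str
    net_ms: float    # round-trip transfer latency to this tier
    infer_ms: float  # full-model inference latency on this tier

def place(tiers, deadline_ms, early_exit_ratio=0.4):
    """Pick the tier whose end-to-end latency fits the deadline,
    preferring full-model inference; fall back to early-exit inference
    (a truncated model pass, here modeled as a fixed fraction of the
    full-model cost) when no tier meets the deadline."""
    feasible = [t for t in tiers if t.net_ms + t.infer_ms <= deadline_ms]
    if feasible:
        # Among feasible tiers, pick the lowest total latency.
        return min(feasible, key=lambda t: t.net_ms + t.infer_ms), "full"
    # No tier fits with the full model: try early exits.
    feasible = [t for t in tiers
                if t.net_ms + t.infer_ms * early_exit_ratio <= deadline_ms]
    if feasible:
        return min(feasible, key=lambda t: t.net_ms), "early_exit"
    return None, "drop"  # admission control would reject this request

tiers = [Tier("edge", 2, 90), Tier("fog", 8, 25), Tier("cloud", 45, 10)]
print(place(tiers, 50))  # fog, full model (8 + 25 = 33 ms <= 50)
print(place(tiers, 30))  # fog, early exit (8 + 25*0.4 = 18 ms <= 30)
```

Under these assumed numbers the fog tier wins both times: the edge device is too slow for the full model, and the cloud's transfer latency dominates at tight deadlines, which is the intuition behind the middle-tier architecture.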
We develop a city-scale simulation that couples a microscopic traffic simulator with a network emulator and a containerized AI serving stack. Workloads include object detection for incident response, trajectory forecasting for bus ETA, and signal-phase timing optimization. Compared with cloud-only and edge-only baselines, the proposed fog+edge approach reduces median end-to-end inference latency by 41–63%, cuts 95th-percentile tail latency by 52–68%, and increases deadline-meeting rate by 20–33 percentage points under rush-hour load. Bandwidth costs drop due to upstream feature compression, while energy per inference declines as fog nodes leverage right-sized GPUs/NPUs at higher utilization. A one-way ANOVA confirms statistically significant improvements across latency and deadline-meeting rate; post-hoc pairwise tests show fog+edge significantly outperforms both baselines (p < 0.01). We conclude with practical guidance for municipalities: deploy fog clusters at fiber aggregation rings, use admission control with soft deadlines, and combine model-aware caching with DAG-level prefetch to tame microbursts.
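The recommendation to use admission control with soft deadlines can be illustrated with a minimal backlog check: admit a request only if its expected completion time stays within a relaxed (soft) deadline margin. The `run` function and `slack_factor` parameter are hypothetical names for this sketch; it deliberately omits backlog draining and per-class queues to keep the microburst behavior visible.

```python
def run(requests, slack_factor=1.2):
    """Soft-deadline admission control over a burst of requests.

    requests: list of (service_ms, deadline_ms) pairs arriving together
    (a microburst, so the backlog does not drain between arrivals).
    A request is admitted if current backlog plus its own service time
    fits within deadline_ms * slack_factor, the 'soft' margin.
    Returns a list of admit/reject decisions."""
    backlog = 0.0
    decisions = []
    for service_ms, deadline_ms in requests:
        ok = backlog + service_ms <= deadline_ms * slack_factor
        decisions.append(ok)
        if ok:
            backlog += service_ms
    return decisions

# Three simultaneous 10 ms jobs with shrinking deadlines: the third is
# rejected because the backlog from the first two exceeds its soft margin.
print(run([(10, 50), (10, 20), (10, 15)]))
```

Rejecting the third request early, rather than letting it queue and miss its deadline anyway, is what keeps tail latency bounded during bursts.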
License
Copyright (c) 2025. The journal retains copyright of all published articles, ensuring that authors have control over their work while allowing wide dissemination.

This work is licensed under a Creative Commons Attribution-NonCommercial 4.0 International License (CC BY-NC 4.0), allowing others to distribute, remix, adapt, and build upon it for non-commercial purposes, provided the original author is credited.
