Sensor Fusion-Based Navigation Systems for Autonomous Delivery Robots
Keywords:
autonomous delivery robots; sensor fusion; SLAM; factor graphs; visual–inertial odometry; LiDAR–inertial odometry; UWB; model-predictive control; dynamic obstacle avoidance; semantic mapping

Abstract
Autonomous delivery robots must navigate sidewalks, corridors, and mixed indoor–outdoor campuses while maintaining accuracy, safety, and efficiency under imperfect sensing. Single-sensor pipelines (e.g., wheel odometry or vision alone) degrade under wheel slip, poor lighting, occlusions, and multipath. This manuscript presents a sensor-fusion navigation architecture that integrates inertial measurement units (IMUs), wheel encoders, cameras, LiDAR, and optional ultra-wideband (UWB) anchors to achieve robust localization and motion planning in dynamic environments. We detail a modular stack: (1) time-synchronized preprocessing and calibration, (2) multi-rate odometry (wheel–IMU EKF, visual–inertial odometry, LiDAR–inertial odometry), (3) factor-graph smoothing with loop closures and UWB priors, (4) semantic mapping that separates static structure from dynamic obstacles, (5) dual-horizon planning with D*-Lite globally and model-predictive control (MPC) locally, and (6) a safety supervisor enforcing stop/slowdown under uncertainty spikes.
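To make the multi-rate odometry front end (module 2) concrete, the following is a minimal sketch of a wheel–IMU extended Kalman filter under an assumed planar unicycle model with state [x, y, yaw]. The class name, noise values, and the UWB position correction are illustrative assumptions for exposition, not the manuscript's exact implementation.

```python
import numpy as np

class WheelImuEKF:
    """Illustrative planar EKF: wheel-encoder speed + IMU yaw rate."""

    def __init__(self):
        self.x = np.zeros(3)                       # state: [x (m), y (m), yaw (rad)]
        self.P = np.eye(3) * 1e-3                  # state covariance
        self.Q = np.diag([1e-4, 1e-4, 1e-5])       # process noise per step (assumed)
        self.R_uwb = np.diag([0.05**2, 0.05**2])   # UWB-derived xy fix noise (assumed)

    def predict(self, v, omega, dt):
        """Propagate with wheel-encoder speed v and IMU yaw rate omega."""
        px, py, yaw = self.x
        self.x = np.array([
            px + v * np.cos(yaw) * dt,
            py + v * np.sin(yaw) * dt,
            yaw + omega * dt,
        ])
        # Jacobian of the motion model with respect to the state
        F = np.array([
            [1.0, 0.0, -v * np.sin(yaw) * dt],
            [0.0, 1.0,  v * np.cos(yaw) * dt],
            [0.0, 0.0,  1.0],
        ])
        self.P = F @ self.P @ F.T + self.Q

    def update_uwb(self, z_xy):
        """Correct position with a (hypothetical) UWB-derived xy fix."""
        H = np.array([[1.0, 0.0, 0.0],
                      [0.0, 1.0, 0.0]])
        y = z_xy - H @ self.x                      # innovation
        S = H @ self.P @ H.T + self.R_uwb          # innovation covariance
        K = self.P @ H.T @ np.linalg.inv(S)        # Kalman gain
        self.x = self.x + K @ y
        self.P = (np.eye(3) - K @ H) @ self.P

# Usage: predict at the encoder/IMU rate; correct when a UWB fix arrives.
ekf = WheelImuEKF()
ekf.predict(v=0.8, omega=0.1, dt=0.02)    # 50 Hz odometry step
ekf.update_uwb(np.array([0.016, 0.001]))  # sporadic absolute correction
```

In the full stack, the output of this filter would serve as one odometry stream feeding the factor-graph smoother, alongside the VIO and LiDAR–inertial estimates.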
A simulation campaign across campus-sidewalk and urban-alley scenes (varying lighting, ground friction, and pedestrian density) compares four configurations: baseline wheel–IMU EKF, VIO-aided EKF, LiDAR–inertial odometry, and full multimodal factor-graph fusion including UWB. The fused system reduces absolute trajectory error by ~82% and collision rate by ~89% relative to the baseline, while adding <16 ms average fusion latency. We report statistically significant gains (one-way ANOVA with Tukey HSD post-hoc tests, p < 0.01) in success rate, path efficiency, and energy per kilometer. Results suggest that tightly coupled, uncertainty-aware fusion—combined with semantic dynamics handling—yields navigation resilience suitable for last-meter delivery. We conclude with deployment guidance and open problems in long-term calibration drift, low-texture scenes, and learning-enhanced fusion.
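The statistical procedure reported above can be sketched as follows, assuming one-way ANOVA across the four configurations followed by Tukey HSD pairwise comparisons. The arrays below are synthetic placeholders, not the manuscript's data.

```python
import numpy as np
from scipy.stats import f_oneway
from statsmodels.stats.multicomp import pairwise_tukeyhsd

rng = np.random.default_rng(0)
configs = ["wheel-imu-ekf", "vio-ekf", "lio", "full-fusion"]
# Absolute trajectory error (m) per run, per configuration (illustrative).
ate = {
    "wheel-imu-ekf": rng.normal(1.10, 0.15, 30),
    "vio-ekf":       rng.normal(0.55, 0.10, 30),
    "lio":           rng.normal(0.40, 0.08, 30),
    "full-fusion":   rng.normal(0.20, 0.05, 30),
}

# One-way ANOVA: does mean ATE differ across configurations?
f_stat, p_val = f_oneway(*(ate[c] for c in configs))
print(f"ANOVA: F = {f_stat:.1f}, p = {p_val:.3g}")

# Tukey HSD identifies which configuration pairs differ significantly.
values = np.concatenate([ate[c] for c in configs])
groups = np.repeat(configs, 30)
print(pairwise_tukeyhsd(values, groups, alpha=0.01).summary())
```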
License
Copyright (c) 2025. The journal retains copyright of all published articles, ensuring that authors have control over their work while allowing wide dissemination.

This work is licensed under the Creative Commons Attribution-NonCommercial 4.0 International License (CC BY-NC 4.0), which allows others to distribute, remix, adapt, and build upon the work for non-commercial purposes, provided the original author is credited.
