Anomaly Detection in Time-Series IoT Data Using Transformer Architectures
DOI: https://doi.org/10.63345/ijarcse.v1.i1.101

Keywords: Anomaly Detection, Time-Series, IoT, Transformer, Deep Learning, Self-Attention, Simulation, Statistical Analysis

Abstract
The rapid expansion of the Internet of Things (IoT) ecosystem has led to the generation of massive volumes of time-series data across various sectors, including healthcare, manufacturing, smart cities, and critical infrastructure monitoring. Detecting anomalies within these time-series streams is vital for ensuring operational reliability, safety, and cyber-resilience. Anomalies may arise from hardware failures, cyber-attacks, system misconfigurations, or environmental factors, and their early detection can prevent catastrophic failures and financial losses. Traditional methods for anomaly detection, such as statistical techniques, rule-based systems, and classical machine learning models, often struggle to capture long-range dependencies and temporal correlations inherent in IoT data.
Recently, deep learning models, particularly Long Short-Term Memory (LSTM) networks and Autoencoders, have shown promise in addressing the limitations of classical techniques by learning complex patterns from data. However, these models can suffer from vanishing gradients and become computationally inefficient when processing long sequences. Transformer architectures, originally designed for natural language processing tasks, offer a compelling alternative: their self-attention mechanism models long-term dependencies directly, without relying on recurrence.
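The core operation is scaled dot-product attention (Vaswani et al., 2017), which lets every time step attend to every other step in a single parallel computation:

\[
\mathrm{Attention}(Q, K, V) = \mathrm{softmax}\!\left(\frac{Q K^{\top}}{\sqrt{d_k}}\right) V
\]

where Q, K, and V are query, key, and value projections of the input sequence and d_k is the key dimension.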
In this study, we propose and evaluate a Transformer-based framework for anomaly detection in time-series IoT data. Our methodology combines input embeddings, positional encoding, and a multi-head self-attention mechanism to learn complex temporal patterns. The model is benchmarked against LSTM and Autoencoder architectures on two real-world datasets with labeled anomalies from cyber-physical systems, SWaT (Secure Water Treatment) and SMD (Server Machine Dataset). Evaluation using precision, recall, F1-score, AUC, and RMSE shows that the Transformer model consistently outperforms the baseline models in detection accuracy and robustness.
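A minimal PyTorch sketch of such an architecture is shown below. It assumes a reconstruction-based training objective, and the layer sizes and names (TSTransformer, d_model, and so on) are illustrative choices rather than the study's exact configuration.

import math
import torch
import torch.nn as nn

class PositionalEncoding(nn.Module):
    """Standard sinusoidal positional encoding."""
    def __init__(self, d_model: int, max_len: int = 5000):
        super().__init__()
        pe = torch.zeros(max_len, d_model)
        pos = torch.arange(max_len).unsqueeze(1).float()
        div = torch.exp(torch.arange(0, d_model, 2).float()
                        * (-math.log(10000.0) / d_model))
        pe[:, 0::2] = torch.sin(pos * div)
        pe[:, 1::2] = torch.cos(pos * div)
        self.register_buffer("pe", pe.unsqueeze(0))  # (1, max_len, d_model)

    def forward(self, x):  # x: (batch, seq_len, d_model)
        return x + self.pe[:, : x.size(1)]

class TSTransformer(nn.Module):
    """Transformer encoder that reconstructs its input window;
    the anomaly score is the per-step reconstruction error."""
    def __init__(self, n_features: int, d_model: int = 64,
                 n_heads: int = 4, n_layers: int = 2):
        super().__init__()
        self.embed = nn.Linear(n_features, d_model)   # input embedding
        self.pos = PositionalEncoding(d_model)
        layer = nn.TransformerEncoderLayer(
            d_model, n_heads, dim_feedforward=4 * d_model, batch_first=True)
        self.encoder = nn.TransformerEncoder(layer, n_layers)
        self.head = nn.Linear(d_model, n_features)    # reconstruction head

    def forward(self, x):  # x: (batch, seq_len, n_features)
        z = self.encoder(self.pos(self.embed(x)))
        return self.head(z)

At inference time, a window is flagged as anomalous when its reconstruction error exceeds a threshold calibrated on anomaly-free data.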
Simulation experiments further validate the model's ability to detect diverse anomaly types, including sudden spikes, gradual drifts, and sensor failures. These results establish the Transformer architecture as a state-of-the-art solution for real-time anomaly detection in dynamic, data-rich IoT environments.
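To make the simulated anomaly types concrete, the snippet below injects each of them into a synthetic sensor signal. The signal, magnitudes, and thresholds are assumptions for illustration, not the study's actual protocol, and the rolling-mean detector merely stands in for the trained model's reconstruction error.

import numpy as np

rng = np.random.default_rng(0)
t = np.arange(2000)
x = np.sin(2 * np.pi * t / 100) + 0.1 * rng.standard_normal(t.size)
labels = np.zeros(t.size, dtype=bool)

x[500] += 5.0                                # sudden spike
labels[500] = True
x[1000:1200] += np.linspace(0.0, 2.0, 200)   # gradual drift
labels[1000:1200] = True
x[1500:1550] = 0.0                           # sensor failure (flat dropout)
labels[1500:1550] = True

# Stand-in detector: deviation from a rolling mean plays the role of
# the model's reconstruction error.
score = np.abs(x - np.convolve(x, np.ones(50) / 50, mode="same"))
pred = score > 3.0 * np.median(score)

tp = np.sum(pred & labels)
fp = np.sum(pred & ~labels)
fn = np.sum(~pred & labels)
print(f"precision={tp / (tp + fp):.2f}  recall={tp / (tp + fn):.2f}")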
License
Copyright (c) 2025. The journal retains copyright of all published articles, ensuring that authors have control over their work while allowing wide dissemination.

This work is licensed under the Creative Commons Attribution-NonCommercial 4.0 International License (CC BY-NC 4.0), which allows others to distribute, remix, adapt, and build upon the work for non-commercial purposes, provided the original author is credited.