Explainability-Driven Feature Selection for Financial Fraud Detection
DOI: https://doi.org/10.63345/ijarcse.v1.i1.302

Keywords: Financial fraud detection, explainability, SHAP, LIME, feature selection, machine learning, interpretable AI

Abstract
Financial fraud has evolved in scale and sophistication, demanding machine learning models that are not only accurate but also interpretable. This paper proposes an explainability-driven feature selection framework tailored for financial fraud detection. Traditional feature selection methods often prioritize accuracy metrics without adequately addressing the need for interpretability, a key requirement in high-stakes financial applications. Our approach integrates explainable artificial intelligence (XAI) methods such as SHAP (SHapley Additive exPlanations) and LIME (Local Interpretable Model-agnostic Explanations) to identify features that are both highly influential and easily explainable to stakeholders.
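To make the local-explanation side of this concrete, the sketch below shows how LIME might be used to justify a single suspicious prediction to a stakeholder. It is a minimal illustration only: the synthetic data, the invented feature names, and the random forest settings are assumptions for demonstration, not the paper's actual dataset or configuration.

import numpy as np
from lime.lime_tabular import LimeTabularExplainer
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier

# Hypothetical stand-in for transaction data; feature names are invented.
X, y = make_classification(n_samples=2000, n_features=8, n_informative=4,
                           weights=[0.97], random_state=0)
names = [f"feat_{i}" for i in range(X.shape[1])]

model = RandomForestClassifier(n_estimators=200, random_state=0).fit(X, y)

explainer = LimeTabularExplainer(X, mode="classification",
                                 feature_names=names,
                                 class_names=["legit", "fraud"])

# Explain the transaction the model scores as most likely fraudulent.
i = int(np.argmax(model.predict_proba(X)[:, 1]))
exp = explainer.explain_instance(X[i], model.predict_proba, num_features=4)
for rule, weight in exp.as_list():
    print(f"{rule:>25s}  {weight:+.3f}")

The printed rules and signed weights give a human-readable account of why that one transaction was flagged; SHAP plays the complementary global role used for feature ranking, as sketched after the next paragraph.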
We construct and train a suite of machine learning models, including random forests, gradient boosting machines, and logistic regression, on a large publicly available credit card fraud dataset. After baseline training, we apply XAI tools to extract feature importances and conduct a selection process based on both performance gains and model transparency. A comparative analysis is then performed to evaluate the trade-offs between explainability and accuracy before and after feature reduction.
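A minimal sketch of how such a SHAP-driven selection loop could be implemented follows. The synthetic data, the mean-|SHAP| ranking, and the 95% AUC-retention threshold are illustrative assumptions; the paper's actual criterion weighs performance gains against transparency scores rather than AUC retention alone.

import numpy as np
import shap
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier
from sklearn.metrics import roc_auc_score
from sklearn.model_selection import train_test_split

# Hypothetical stand-in for the public credit card fraud dataset.
X, y = make_classification(n_samples=2000, n_features=20, n_informative=6,
                           weights=[0.98], random_state=0)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, stratify=y, random_state=0)

baseline = RandomForestClassifier(n_estimators=200, random_state=0).fit(X_tr, y_tr)
base_auc = roc_auc_score(y_te, baseline.predict_proba(X_te)[:, 1])

# Rank features by mean |SHAP| contribution to the fraud class.
sv = shap.TreeExplainer(baseline).shap_values(X_tr)
# Older shap versions return one array per class; newer ones a 3-D array.
vals = sv[1] if isinstance(sv, list) else (sv[:, :, 1] if sv.ndim == 3 else sv)
ranking = np.argsort(np.abs(vals).mean(axis=0))[::-1]

# Grow the subset until it retains (an assumed) 95% of baseline AUC.
for k in range(3, X.shape[1] + 1):
    cols = ranking[:k]
    m = RandomForestClassifier(n_estimators=200,
                               random_state=0).fit(X_tr[:, cols], y_tr)
    auc = roc_auc_score(y_te, m.predict_proba(X_te[:, cols])[:, 1])
    if auc >= 0.95 * base_auc:
        print(f"kept {k}/{X.shape[1]} features: AUC {auc:.3f} vs {base_auc:.3f}")
        break

Fixing the random seed in both fits keeps any accuracy difference attributable to the feature subset itself rather than to training noise.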
Simulation results show that our proposed method retains 93% of model accuracy while improving interpretability scores by 41%, significantly enhancing trust and compliance in automated fraud detection systems. The streamlined feature set also contributes to a 37% improvement in computational efficiency, making the model more suitable for real-time deployments in financial institutions. Statistical analysis confirms the robustness of the proposed feature subset, and simulation-based testing demonstrates effectiveness across varying fraud prevalence rates.
This paper contributes to the growing body of research at the intersection of artificial intelligence, finance, and explainability, emphasizing the importance of interpretable models in operational environments. Future work can extend this approach to cross-market fraud scenarios and incorporate human-in-the-loop systems for continuous feedback.
License
Copyright (c) 2025. The journal retains copyright of all published articles, ensuring that authors retain control over their work while allowing wide dissemination.

This work is licensed under the Creative Commons Attribution-NonCommercial 4.0 International License (CC BY-NC 4.0), which allows others to distribute, remix, adapt, and build upon the work for non-commercial purposes, provided the original author is credited.