AI-Driven Cross-Modal Feature Alignment for Multimodal Fraud Detection in Financial Transaction Systems

Authors

  • Daniel Kim, Electrical Engineering, University of Illinois Urbana-Champaign, IL, USA

Keywords

Cross-modal learning, fraud detection, feature alignment, deep learning

Abstract

Financial fraud detection systems increasingly encounter sophisticated attack patterns that exploit multiple data modalities simultaneously. Traditional single-modal detection approaches demonstrate limited effectiveness when adversaries coordinate deceptive behaviors across transaction records, user behavioral sequences, and communication metadata. This research proposes an AI-driven cross-modal feature alignment framework that integrates heterogeneous data streams through attention-based fusion mechanisms and contrastive learning strategies. The methodology employs deep neural architectures to extract discriminative representations from structured transaction data, unstructured behavioral logs, and temporal interaction patterns, subsequently aligning these features in a unified embedding space. Experimental validation on real-world financial datasets demonstrates that the proposed framework achieves superior detection performance compared to baseline methods, with particular effectiveness in identifying coordinated fraud schemes that span multiple channels. The framework maintains computational efficiency suitable for real-time deployment while providing interpretable explanations for detection decisions through attention weight visualization and feature attribution analysis.
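To make the alignment idea concrete, the following is a minimal numpy sketch of a contrastive (InfoNCE-style) objective that pulls embeddings of the same entity from two modalities together in a shared space, as the abstract describes. All names, dimensions, and the synthetic data are illustrative assumptions, not the paper's actual implementation.

```python
# Hypothetical sketch: contrastive alignment of two modality embeddings
# (e.g. transaction features vs. behavioral-sequence features).
import numpy as np

def l2_normalize(x, axis=-1, eps=1e-9):
    # Unit-normalize rows so dot products become cosine similarities.
    return x / (np.linalg.norm(x, axis=axis, keepdims=True) + eps)

def info_nce_loss(z_a, z_b, temperature=0.1):
    """Symmetric contrastive loss; row i of z_a is paired with row i of z_b."""
    z_a, z_b = l2_normalize(z_a), l2_normalize(z_b)
    logits = z_a @ z_b.T / temperature  # (N, N) cross-modal similarity matrix

    def xent(l):
        # Row-wise log-softmax; the matching pair sits on the diagonal.
        l = l - l.max(axis=1, keepdims=True)
        log_probs = l - np.log(np.exp(l).sum(axis=1, keepdims=True))
        n = len(l)
        return -log_probs[np.arange(n), np.arange(n)].mean()

    # Average both directions: modality A -> B and B -> A.
    return 0.5 * (xent(logits) + xent(logits.T))

rng = np.random.default_rng(0)
txn = rng.normal(size=(8, 16))                # structured transaction embeddings
beh = txn + 0.05 * rng.normal(size=(8, 16))   # well-aligned behavioral embeddings
mis = rng.normal(size=(8, 16))                # unrelated embeddings

aligned_loss = info_nce_loss(txn, beh)
misaligned_loss = info_nce_loss(txn, mis)
```

A trained model would minimize this loss over projected features from each modality encoder; well-aligned pairs yield a much lower loss than unrelated ones.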

Author Biography

  • Daniel Kim, Electrical Engineering, University of Illinois Urbana-Champaign, IL, USA


Published

2026-01-07

How to Cite

AI-Driven Cross-Modal Feature Alignment for Multimodal Fraud Detection in Financial Transaction Systems. (2026). Journal of Global Engineering Review, 4(1), 21-37. https://gereview.com/index.php/jger/article/view/10