Journal Publication in IEEE Access: Deep Multimodal Fusion for Sign Language Recognition

2025.07.19

A new article co-authored by Associate Professor Teeradaj Racharak has been published in IEEE Access. The paper, titled “GSR-Fusion: A Deep Multimodal Fusion Architecture for Robust Sign Language Recognition Using RGB, Skeleton, and Graph-based Modalities”, introduces a deep learning architecture that integrates multiple input modalities to improve the accuracy and robustness of sign language recognition.

The proposed model, GSR-Fusion, combines RGB video data, skeleton joint information, and graph-based representations through a unified multimodal fusion framework. This approach allows the system to better capture spatio-temporal patterns and semantic features across modalities, addressing challenges such as gesture variation, occlusion, and motion ambiguity. A simplified sketch of this kind of fusion is shown below.
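The following is a minimal, illustrative sketch of late multimodal fusion in PyTorch, not the authors' GSR-Fusion implementation: three hypothetical encoders (a stand-in for a video CNN, a GRU over skeleton joints, and mean-pooled node features as a stand-in for a graph network) are concatenated and classified. All module names, feature dimensions, and input shapes are assumptions made for this example.

```python
import torch
import torch.nn as nn

class FusionSketch(nn.Module):
    """Illustrative late-fusion model over RGB, skeleton, and graph inputs."""

    def __init__(self, num_classes=100, feat_dim=128):
        super().__init__()
        # RGB branch: per-frame linear features, pooled over time (stand-in for a video CNN)
        self.rgb_encoder = nn.Sequential(nn.Linear(3 * 64 * 64, feat_dim), nn.ReLU())
        # Skeleton branch: temporal model over 2D coordinates of 21 joints per frame
        self.skel_encoder = nn.GRU(input_size=2 * 21, hidden_size=feat_dim, batch_first=True)
        # Graph branch: per-node features, mean-aggregated (stand-in for a GCN)
        self.graph_encoder = nn.Sequential(nn.Linear(16, feat_dim), nn.ReLU())
        # Fusion: concatenate the three modality features and classify
        self.classifier = nn.Linear(3 * feat_dim, num_classes)

    def forward(self, rgb, skel, graph_nodes):
        # rgb: (B, T, 3*64*64), skel: (B, T, 42), graph_nodes: (B, N, 16)
        rgb_feat = self.rgb_encoder(rgb).mean(dim=1)               # pool over frames
        _, skel_hidden = self.skel_encoder(skel)
        skel_feat = skel_hidden[-1]                                 # last GRU hidden state
        graph_feat = self.graph_encoder(graph_nodes).mean(dim=1)   # pool over nodes
        fused = torch.cat([rgb_feat, skel_feat, graph_feat], dim=-1)
        return self.classifier(fused)

# Example forward pass with random tensors of the assumed shapes
model = FusionSketch()
logits = model(torch.randn(2, 8, 3 * 64 * 64),
               torch.randn(2, 8, 42),
               torch.randn(2, 20, 16))
print(logits.shape)  # torch.Size([2, 100])
```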

This work demonstrates the potential of multimodal AI for inclusive technologies and reflects ongoing efforts at the Advanced Institute of So-Go-Chi (Convergence Knowledge) Informatics to develop human-centered, accessible AI systems.

Citation:
Wuttichai Vijitkunsawat and Teeradaj Racharak, “GSR-Fusion: A Deep Multimodal Fusion Architecture for Robust Sign Language Recognition Using RGB, Skeleton, and Graph-based Modalities,” IEEE Access, 2025. DOI: 10.1109/ACCESS.2025.3581683