Paper Accepted at the IEEE International Conference on Acoustics, Speech and Signal Processing (ICASSP 2026)
2026.01.29
We are pleased to announce that the following paper, co-authored by Yongsong Huang (Assistant Professor at the AI So-Go-Chi center), Tzu-Hsuan Peng, Tomo Miyazaki, Xiaofeng Liu, Chun-Ting Chou, Ai-Chun Pang, and Shinichiro Omachi, has been accepted for presentation at the IEEE International Conference on Acoustics, Speech and Signal Processing (ICASSP 2026). This flagship conference of the IEEE Signal Processing Society is a premier global forum for presenting cutting-edge research in signal processing and its applications.
Accepted Paper
Yongsong Huang, Tzu-Hsuan Peng, Tomo Miyazaki, Xiaofeng Liu, Chun-Ting Chou, Ai-Chun Pang, Shinichiro Omachi. “GTFMN: Guided Texture and Feature Modulation Network for Low-Light Image Enhancement and Super-Resolution.” In Proceedings of the IEEE International Conference on Acoustics, Speech and Signal Processing (ICASSP), 2026.
This work addresses Low-Light Image Super-Resolution (LLSR), a challenging task in which images suffer from the coupled degradations of poor illumination and low resolution, a situation common in practical scenarios such as autonomous driving and surveillance. To overcome the limitations of conventional single-stream networks, which often amplify noise and cause color shifts, we propose the Guided Texture and Feature Modulation Network (GTFMN). This novel framework decouples the LLSR problem by employing a dedicated dual-stream architecture: an Illumination Stream that predicts a spatially varying illumination map, and a Texture Stream in which a series of Illumination-Guided Modulation (IGM) Blocks use this map to dynamically modulate features. This mechanism enables spatially adaptive restoration, intensifying enhancement in dark regions while preserving details in well-lit areas. Extensive experiments on the OmniNormal5 and OmniNormal15 datasets demonstrate that GTFMN achieves state-of-the-art performance in both quantitative metrics and visual quality, while maintaining favorable parameter efficiency compared to competing methods.
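To give a flavor of the illumination-guided modulation idea described above, here is a minimal PyTorch-style sketch. It is not the authors' implementation: the class name, channel widths, and the affine scale-and-shift modulation form are assumptions made purely for exposition, and the paper should be consulted for the actual IGM block design.

```python
# Illustrative sketch only: module name, layer widths, and the affine
# (scale-and-shift) modulation form are assumptions for explanation,
# not the GTFMN implementation from the paper.
import torch
import torch.nn as nn


class IlluminationGuidedModulation(nn.Module):
    """Modulates texture-stream features using a predicted illumination map."""

    def __init__(self, channels: int = 64):
        super().__init__()
        # Map the single-channel illumination map to per-channel scale/shift.
        self.to_gamma = nn.Conv2d(1, channels, kernel_size=3, padding=1)
        self.to_beta = nn.Conv2d(1, channels, kernel_size=3, padding=1)
        self.refine = nn.Sequential(
            nn.Conv2d(channels, channels, kernel_size=3, padding=1),
            nn.ReLU(inplace=True),
            nn.Conv2d(channels, channels, kernel_size=3, padding=1),
        )

    def forward(self, feat: torch.Tensor, illum: torch.Tensor) -> torch.Tensor:
        # Spatially adaptive affine modulation: dark regions (low illumination
        # values) can receive a different treatment than well-lit regions.
        gamma = self.to_gamma(illum)
        beta = self.to_beta(illum)
        modulated = feat * (1.0 + gamma) + beta
        return feat + self.refine(modulated)  # residual connection


if __name__ == "__main__":
    x = torch.randn(1, 64, 32, 32)        # texture-stream features
    illum_map = torch.rand(1, 1, 32, 32)  # illumination map in [0, 1]
    block = IlluminationGuidedModulation(channels=64)
    print(block(x, illum_map).shape)      # torch.Size([1, 64, 32, 32])
```

The key point the sketch tries to convey is that the modulation parameters are derived from the illumination map rather than from the features themselves, which is what makes the restoration spatially adaptive.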
Congratulations to the authors!