Upcoming Hybrid Seminar on XR and Human–AI Cooperation
2025.09.01
Date and Time: September 8, 2025 (Mon) 16:00–18:00
Venue: RIEC Main Building 1F Open Seminar Room / Online via Zoom
We are pleased to announce a hybrid seminar featuring two distinguished speakers who will share cutting-edge research in immersive technologies and human–AI cooperation.
Speaker 1: Professor Juno Kim (University of New South Wales)
Title: Understanding the neurophysiology of multisensory integration using XR
Abstract:
Extended Reality (XR) refers to a broad class of immersive technologies that have grown in popularity in recent years with the emergence of many commercial head-mounted displays (HMDs) used for entertainment and gaming. These immersive devices are not only inexpensive but also serve as effective tools for research into multisensory integration and perceptual systems. In this presentation, attendees will explore examples of the effective use of multisensory stimulation in XR applications to enhance immersive experiences in passive viewing, as well as how multisensory processes can be influenced in active training applications to modify human performance in walking and reaching tasks.
Speaker 2: Professor Cheng-Ta Yang (National Cheng Kung University)
Title: Unpacking Human–AI Cooperation with Systems Factorial Technology: From Decision Theory to Real-World Applications
Abstract:
The rapid advancement of artificial intelligence (AI) is continuously reshaping how humans interact with technology, making human–AI collaboration a central research focus. However, previous research has shown that working with AI does not necessarily improve task performance; outcomes depend on how, and under what conditions, humans and AI collaborate. Factors such as AI accuracy and confidence, task difficulty, and the type of information provided by the AI can all influence collaboration outcomes. This talk will introduce Systems Factorial Technology (SFT), a rigorous, theory-driven methodology for decomposing decision processes into their underlying mental architecture, workload capacity, and decisional stopping rules, and will use it to present a series of studies on human–AI collaboration. SFT allows us to identify how information from humans and AI is combined, whether it is processed in parallel or sequentially, and how processing capacity changes when AI assistance is introduced. Our results shed light on the effects of AI accuracy under varying task difficulties, on different approaches to manipulating task difficulty, and on the influence of incorporating metacognitive sensitivity, namely AI confidence. Finally, we will present applications of this line of research in medical decision making and deepfake detection, aiming to identify the conditions under which AI truly enhances human decision-making and to provide practical guidelines for effective collaboration.
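As background on the methodology, the workload capacity analysis mentioned in the abstract is commonly summarized in the SFT literature by the capacity coefficient, which compares performance when multiple information sources (for example, a human observer and an AI aid) are available to a baseline of independent parallel processing. For first-terminating ("OR") designs it is typically written, using response-time survivor functions S_A(t), S_B(t) for the single-source conditions and S_AB(t) for the combined condition, as

C_OR(t) = log S_AB(t) / [ log S_A(t) + log S_B(t) ],

where values near 1 indicate unlimited capacity, values below 1 limited capacity, and values above 1 supercapacity; the specific designs and conditions analyzed in the talk are those described by the speaker.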
This event offers a unique opportunity to engage with the forefront of XR-based cognitive neuroscience and human–AI cooperative systems. Participants can attend either onsite or online.