Brief Description
Computer Vision and Signal Processing Empowering Communities invites
researchers, engineers and NGOs to share cutting-edge work that turns pixels
and waveforms into tangible social impact. The session focuses on robust,
low-cost and privacy-preserving algorithms that can be deployed in
resource-constrained environments to improve health, safety, education and
inclusion. We welcome contributions on continuous sign-language recognition,
real-time disaster monitoring with drones, smartphone-based respiratory
screening, wearable fall-detection systems, and other cross-modal solutions
that fuse vision, audio and biosignals. Emphasis is placed on open datasets,
edge-friendly architectures, explainability and fairness so that AI tools
remain trustworthy and accessible to the communities they aim to serve.
Through concise talks and interactive discussions, the session will
highlight pathways from laboratory prototypes to field deployments,
fostering collaborations that accelerate humanitarian outcomes worldwide.
Session Organizers
Prof. Wanli Xue, Tianjin University of Technology, China
Assoc. Prof. Fan Qi, Tianjin University of Technology, China
Prof. Chunwei Tian, Harbin Institute of Technology, China
Special Session Topics
The topics of interest include, but are not limited to:
• Continuous sign-language recognition and translation for inclusive
communication
• Vision-based early warning systems for natural-disaster detection and
response
• AI-driven low-cost retinal imaging for large-scale preventable-blindness
screening
• Multimodal emotion recognition from facial, vocal and physiological
signals for mental-health triage
• Real-time vision-guided rescue drones for search-and-locate missions in
disaster zones
• Edge AI on wearables for automatic fall detection and elderly assistance
• Smartphone-based cough and breathing analysis for large-area
respiratory-disease surveillance
• Cross-modal learning to fuse infrared, acoustic and visual data for
wildlife-poaching prevention
• Sign-language tutoring via interactive computer-vision avatars in
low-resource schools
• Explainable AI for fair and transparent diagnostics in under-served
medical settings
Submission Method
Submit your full paper (no fewer than 8 pages) or your abstract for
presentation without publication (200–400 words) via the Online Submission
System, then choose Special Session 3 (AI for Social Good: Computer Vision
and Signal Processing Empowering Communities).
Template Download
Introduction of Session Organizers
Prof. Wanli Xue
Tianjin University of Technology, China
Wanli Xue is a Professor and Ph.D. supervisor in Computer Science at Tianjin
University of Technology. His research centers on computer vision for social
good, especially UAV perception and continuous sign-language recognition for
barrier-free communication. He has authored 30+ journal papers in IEEE and
Elsevier venues, including IEEE T-IP, IEEE T-NNLS, IEEE T-CSVT, IEEE T-MM,
IEEE T-ITS, Information Fusion and Pattern Recognition, along with
conference papers at CVPR and ECCV.
Assoc. Prof. Fan Qi
Tianjin University of Technology, China
Fan Qi is an Associate Professor at Tianjin University of Technology.
She focuses on privacy-preserving federated learning and multimodal
affective computing for social inclusion. She has published 10+ CCF-A papers
(ACM MM, CVPR, ECCV, ICML, etc.).
Prof. Chunwei Tian
Harbin Institute of Technology, China
Chunwei Tian is a Professor and Ph.D. supervisor in the School of Computing,
Harbin Institute of Technology, listed among the world’s top 2% scientists
from 2022 to 2024. His research spans video/image restoration, recognition
and image generation. He has published 90+ papers in IEEE Transactions, Pattern
Recognition, Neural Networks and Information Fusion, including 7 ESI highly
cited papers and benchmark studies on image super-resolution.