Hi, I am Junwei Li, a researcher at The Hong Kong University of Science and Technology (Guangzhou) (HKUST(GZ)). I earned my Master of Philosophy (M.Phil.) degree in Data Science and Analytics from HKUST(GZ) in 2025, supervised by Prof. Jing TANG and Prof. Lei CHEN. My research interests include Multi-Agent LLM Systems, Retrieval-Augmented Generation with KV-cache reuse, and compliance-aware, privacy-preserving AI companions for cloud–edge deployment in education and assistive settings. I also serve as the President of the Entrepreneurship Association at HKUST(GZ).
MPhil in Data Science and Analytics, 2025
The Hong Kong University of Science and Technology (Guangzhou)
Won the Innovation Award in the AI category at CES 2025, as the only award-winning project from Mainland China! Yuanhuo Technology, the startup I founded, was honored for developing a personalized AI companionship platform.
Our team was crowned National Champion at the first L’Oréal Beauty Tech Hackathon in China. Grateful for the chance to compete in the China Grand Final of L’Oréal’s inaugural Beauty Tech Hackathon!
The Lab of Future Technology is part of the College of Future Technology (CFT) at HKUST(GZ). The group is led by Prof. Jingshen Wu.
As the Entrepreneurial Project Owner and Product Manager, I lead the Aicorumi and TATA AI projects, AI agent systems incubated within the prestigious Bridge Program. Our projects have been awarded ¥600,000 in non-equity funding to accelerate development and productization.
My core responsibilities and contributions include:
The Metaverse Joint Innovation Laboratory is part of the Data Science and Analytics Thrust (DSA Thrust) at HKUST(GZ). The group is led by Prof. Lei Chen.
My responsibilities include:
As the AI Product Manager, I owned the product lifecycle of a proprietary conversational AI solution for the healthcare sector, from v1.0 to v4.0. I drove its 0-to-1 launch in a live hospital setting, acquiring over 1,000 seed users and establishing a critical proof of concept.
My core responsibilities and contributions include:
As a Product Operator at JD Retail, I was deeply involved in both driving business growth and shaping the underlying product and technology. By building data dashboards and analyzing user-behavior funnels, I delivered data-driven growth strategies for a portfolio of 60+ flagship stores, including top-tier lifestyle service brands, consistently achieving over ¥10 million in monthly GMV.
My core responsibilities and contributions include:
A high-fidelity, real-time conversational digital human that combines a photorealistic 3D avatar, persona-driven expressive TTS, and knowledge-grounded dialogue, coordinated by an asynchronous low-latency pipeline with history-augmented retrieval and intent-based routing. Supervised by Prof. Zeyu Wang.
High-fidelity digital humans are increasingly used in interactive applications, yet achieving both visual realism and real-time responsiveness remains a major challenge. We present a high-fidelity, real-time conversational digital human system that seamlessly combines a visually realistic 3D avatar, persona-driven expressive speech synthesis, and knowledge-grounded dialogue generation. To support natural and timely interaction, we introduce an asynchronous execution pipeline that coordinates multi-modal components with minimal latency. The system supports advanced features such as wake word detection, emotionally expressive prosody, and highly accurate, context-aware response generation. It leverages novel retrieval-augmented methods, including history augmentation to maintain conversational flow and intent-based routing for efficient knowledge access. Together, these components form an integrated system that enables responsive and believable digital humans, suitable for immersive applications in communication, education, and entertainment.
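To give a flavor of how intent-based routing and history-augmented retrieval can fit into an asynchronous pipeline, here is a minimal Python sketch. It is illustrative only: the route names, keyword classifier, and in-memory document store are assumptions made for the example, not the system's actual implementation, which coordinates learned retrieval, speech synthesis, and avatar rendering over real knowledge bases.

```python
import asyncio

# Illustrative knowledge routes; names and contents are assumptions
# for this sketch, not the system's actual knowledge bases.
KNOWLEDGE_ROUTES = {
    "schedule": ["The demo runs daily at 10:00 in Hall B."],
    "product": ["The avatar renders in real time on a single GPU."],
}
FALLBACK_DOCS = ["I'm not sure; let me look into that."]


def classify_intent(query: str) -> str:
    """Toy keyword classifier standing in for a learned intent router."""
    q = query.lower()
    if any(w in q for w in ("when", "time", "schedule")):
        return "schedule"
    if any(w in q for w in ("avatar", "render", "gpu")):
        return "product"
    return "fallback"


def augment_with_history(query: str, history: list[str], k: int = 2) -> str:
    """History augmentation: prepend the last k user turns so retrieval
    sees the conversational context, not just the latest utterance."""
    return " ".join(history[-k:] + [query])


async def retrieve(intent: str, augmented_query: str) -> list[str]:
    """Stand-in for an async vector-store lookup scoped to one route."""
    await asyncio.sleep(0)  # yield control, as a real I/O call would
    return KNOWLEDGE_ROUTES.get(intent, FALLBACK_DOCS)


async def answer(query: str, history: list[str]) -> str:
    intent = classify_intent(query)
    augmented = augment_with_history(query, history)
    docs = await retrieve(intent, augmented)
    history.append(query)
    return f"[{intent}] {docs[0]}"


if __name__ == "__main__":
    hist: list[str] = []
    print(asyncio.run(answer("When is the demo?", hist)))
    print(asyncio.run(answer("Can the avatar render in real time?", hist)))
```

In the full pipeline, retrieval, speech synthesis, and rendering would run as concurrent tasks within one event loop rather than a single awaited call, which is what keeps end-to-end latency low.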
Sandplay is an effective form of psychotherapy for mental health treatment, and many people prefer to engage in sandplay in Virtual Reality (VR) due to its convenience. Haptic perception of physical objects and miniatures enhances the realism and immersion in VR. Previous studies have rendered sizes by exerting pressure on the user’s fingertips or employing tangible, shape-changing devices. However, these interfaces are limited by the physical shapes they can assume, making it difficult to simulate objects that grow larger or smaller than the interface. Motivated by literature on visual-haptic illusions, this work aims to convey the haptic sensation of a virtual object’s shape to the user by exploring the relationships between the haptic feedback from real objects and their visual renderings in VR. Our study focuses on the confirmation and adjustment ratios for different virtual object sizes. The results show that the likelihood of participants confirming the correct size of virtual cubes decreases as the object size increases, requiring more adjustments for larger objects. This research provides valuable insights into the relationships between haptic sensations and visual inputs, contributing to the understanding of visual-haptic illusions in VR environments.