Visual Captions: Augmenting Verbal Communication With On-the-fly Visuals

Computer-mediated platforms are increasingly facilitating verbal communication, and capabilities such as live captioning and noise cancellation enable people to understand each other better. We envision that visual augmentations that leverage semantics in the spoken language could also be helpful to illustrate complex or unfamiliar concepts. To advance our understanding of the interest in such capabilities, we conducted formative research through remote interviews (N=10) and crowdsourced a dataset of 1500 sentence-visual pairs across a wide range of contexts. These insights informed Visual Captions, a real-time system that we integrated into a videoconferencing platform to enrich verbal communication. Visual Captions leverages a fine-tuned large language model to proactively suggest relevant visuals in open-vocabulary conversations. We report on our findings from a lab study (N=26) and a two-week deployment study (N=10), which demonstrate how Visual Captions has the potential to help people improve their communication through visual augmentation in various scenarios.
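The abstract describes a pipeline in which a fine-tuned large language model turns live transcript sentences into visual suggestions. As a rough illustration of that flow, the sketch below formats a transcript sentence into a prompt and parses a model reply into a (subject, source, type) suggestion. The prompt template, `build_prompt`, and `parse_suggestion` are hypothetical illustrations, not the paper's actual fine-tuned model, prompt, or output schema.

```python
# Hypothetical sketch of the Visual Captions idea: a transcript sentence
# goes in, a structured visual suggestion comes out. The actual system
# uses a fine-tuned LLM; here the model call is left abstract.

PROMPT_TEMPLATE = (
    "Given the sentence below from a live conversation, suggest one "
    "visual to show, formatted as '<subject>; <source>; <type>'.\n"
    "Sentence: {sentence}\n"
    "Suggestion:"
)

def build_prompt(sentence: str) -> str:
    """Format a single transcript sentence into an LLM prompt."""
    return PROMPT_TEMPLATE.format(sentence=sentence)

def parse_suggestion(raw: str) -> tuple:
    """Split a model reply like 'golden gate bridge; image search; photo'
    into (subject, source, type), tolerating stray whitespace and
    missing fields."""
    parts = [p.strip() for p in raw.split(";")]
    # Pad or truncate to exactly three fields.
    parts = (parts + ["", "", ""])[:3]
    return tuple(parts)

if __name__ == "__main__":
    prompt = build_prompt("I visited the Golden Gate Bridge last week.")
    print(prompt)
    print(parse_suggestion("golden gate bridge ; image search ; photo"))
```

In a real deployment the prompt would be sent to the model for each finalized caption segment, and the parsed suggestion would drive an image search whose results are surfaced in the videoconferencing UI.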

Publications


Visual Captions: Augmenting Verbal Communication With On-the-fly Visuals (Open Source, Real-time, Live!)

Proceedings of the 2023 CHI Conference on Human Factors in Computing Systems (CHI), 2023.
Keywords: augmented communication, large language models, video-mediated communication, online meeting, collaborative work, augmented reality, XR interaction



Experiencing Visual Captions: Augmented Communication With Real-time Visuals Using Large Language Models

Adjunct Proceedings of the 36th Annual ACM Symposium on User Interface Software and Technology (UIST), 2023.
Keywords: augmented communication, large language models, video-mediated communication, online meeting, collaborative work, dataset, text-to-visual, AI agent, augmented reality


Talks

Visual Captions: Augmenting Verbal Communication with On-the-fly Visuals

Ruofei Du

CHI 2023, Hamburg, Germany.


Interactive Perception & Graphics for a Universally Accessible Metaverse

Ruofei Du

Invited Talk at UCLA, hosted by Prof. Yang Zhang. Remote Talk.


Interactive Graphics for a Universally Accessible Metaverse

Ruofei Du

Invited Talk at the ECL Seminar Series, hosted by Dr. Alaeddin Nassani. Remote Talk.


Interactive Graphics for a Universally Accessible Metaverse

Ruofei Du

Invited Talk at the Empathic Computing Lab. Remote Talk.


Cited By

  • Haijun Xia, Tony Wang, Aditya Gunturu, Peiling Jiang, William Duan, and Xiaoshuo Yao. CrossTalk: Intelligent Substrates for Language-Oriented Interaction in Video-Based Communication and Collaboration. Proceedings of the 36th Annual ACM Symposium on User Interface Software and Technology.