Ruofei Du

Ruofei Du is a Senior Research Scientist and Manager at Google, where he works on creating novel interactive technologies for virtual and augmented reality. Du's research covers a wide range of topics in VR and AR, including AR interaction (DepthLab, Ad hoc UI), augmented communication (CollaboVR), mixed-reality social platforms (Geollery), video-based rendering (Montage4D), gaze-based interaction (GazeChat, Kernel Foveated Rendering), and deep learning in graphics (3D Representation, HumanGPS, Sketch Colorization). His research has been featured by Engadget, The Verge, PC Magazine, VOA News, and cnBeta, among others. Du serves as an Associate Editor for IEEE Transactions on Circuits and Systems for Video Technology and Frontiers in Virtual Reality. He has also served on the program committees of CHI 2021–2023, UIST 2022, and SIGGRAPH Asia 2020 XR. He holds 3 US patents and has published over 30 peer-reviewed publications in top venues of HCI, computer graphics, and computer vision, including CHI, SIGGRAPH Asia, UIST, TVCG, CVPR, ICCV, ECCV, ISMAR, VR, and I3D. Du holds a Ph.D. and an M.S. in Computer Science from the University of Maryland, College Park, and a B.S. from the ACM Honored Class, Shanghai Jiao Tong University. Website: https://duruofei.com

Alternative Bio for Invited Talk

Ruofei Du is a Senior Research Scientist and Manager at Google, where he works on creating novel interactive technologies for virtual and augmented reality. Du's research covers a wide range of topics in VR and AR, including computational interaction, augmented communication, social platforms in the metaverse, video-based rendering, foveated rendering, and deep learning in graphics. His research has been featured by Engadget, The Verge, PC Magazine, VOA News, and cnBeta, among others. Du serves as an Associate Editor for IEEE Transactions on Circuits and Systems for Video Technology and Frontiers in Virtual Reality. He has also served on the program committees of CHI 2021–2023, UIST 2022, and SIGGRAPH Asia 2020 XR. He holds 6 US patents and has published over 30 peer-reviewed publications in top venues of HCI, computer graphics, and computer vision, including CHI, SIGGRAPH Asia, UIST, TVCG, CVPR, ICCV, ECCV, ISMAR, VR, and I3D. Du holds a Ph.D. and an M.S. in Computer Science from the University of Maryland, College Park, and a B.S. from the ACM Honored Class, Shanghai Jiao Tong University. Website: https://duruofei.com

Computational Interaction for a Universally Accessible Metaverse

With the dramatic growth of virtual and augmented reality, ubiquitous information is being created in both the virtual and the physical worlds. However, bridging the real and virtual worlds and blending the metaverse into our daily lives remain open challenges. In this talk, I will present several computational interaction technologies that make the metaverse more universally accessible. With Geollery.com and Kernel Foveated Rendering, we present real-time pipelines for reconstructing a mirrored world and acceleration techniques for rendering it. With DepthLab, Ad hoc UI, and SlurpAR, we present real-time 3D interactions with depth maps, everyday objects, and hand gestures. With Montage4D and HumanGPS, we demonstrate the great potential of digital humans in the metaverse. With CollaboVR, GazeChat, SketchyScenes, and ProtoSound, we enhance communication with mid-air sketches, gaze-aware 3D photos, and customized sound recognition. Finally, we conclude the talk with video clips from the Google I/O 2022 Keynote to envision the future of a universally accessible metaverse.
