Immersive maps such as Google Street View and Bing Streetside provide true-to-life views with a massive collection of panoramas. However, these panoramas are only available at sparse intervals along the paths where they are captured, resulting in visual discontinuities during navigation. Prior work in view synthesis typically builds upon sets of perspective images, stereoscopic pairs, or monocular images, but rarely examines wide-baseline panoramas, which are widely adopted by commercial platforms to optimize bandwidth and storage usage. In this paper, we leverage the unique characteristics of wide-baseline panoramas and present OmniSyn, a novel pipeline for 360° view synthesis between wide-baseline panoramas. OmniSyn predicts omnidirectional depth maps using a spherical cost volume and a monocular skip connection, renders meshes in 360° images, and synthesizes intermediate views with a fusion network. We demonstrate the effectiveness of OmniSyn via comprehensive experimental results, including comparisons with state-of-the-art methods on the CARLA and Matterport datasets, ablation studies, and generalization studies on street views. We envision our work may inspire future research on this overlooked real-world task and eventually produce a smoother experience for navigating immersive maps.
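The abstract above mentions predicting omnidirectional depth maps over panoramas with a spherical cost volume. As a minimal illustration (not code from the OmniSyn paper; the function name and shapes are hypothetical), the standard mapping from equirectangular pixels to unit ray directions on the sphere — the per-pixel view directions such a spherical cost volume sweeps over candidate depths — can be sketched as:

```python
import numpy as np

def equirect_to_rays(width, height):
    """Map each pixel center of an equirectangular panorama to a unit
    direction vector on the sphere. Hypothetical helper illustrating the
    standard 360-degree parameterization, not the paper's implementation."""
    # Pixel centers -> normalized coordinates in [0, 1)
    u = (np.arange(width) + 0.5) / width
    v = (np.arange(height) + 0.5) / height
    lon = (u - 0.5) * 2.0 * np.pi   # longitude (yaw) in [-pi, pi)
    lat = (0.5 - v) * np.pi         # latitude (pitch), top row near +pi/2
    lon, lat = np.meshgrid(lon, lat)  # both (H, W)
    # Spherical -> Cartesian unit vectors (y up, z forward)
    x = np.cos(lat) * np.sin(lon)
    y = np.sin(lat)
    z = np.cos(lat) * np.cos(lon)
    return np.stack([x, y, z], axis=-1)  # (H, W, 3)

rays = equirect_to_rays(1024, 512)
# Every direction is unit length by construction
assert np.allclose(np.linalg.norm(rays, axis=-1), 1.0)
```

Scaling each ray by a candidate depth gives a 3D point that can be reprojected into the neighboring panorama, which is how a plane-sweep-style cost volume generalizes to the sphere.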
Ph.D. Dissertation, Department of Computer Science, University of Maryland, College Park, 2018.
Keywords: social street view, geollery, spherical harmonics, 360 video, multiview video, montage4d, haptics, cryptography, metaverse, mirrored world
2022 IEEE Conference on Virtual Reality and 3D User Interfaces (VR), 2022.
Keywords: 360 image, virtual reality, view synthesis, panorama, neural rendering, depth map, mesh rendering, inpainting
IEEE Transactions on Visualization and Computer Graphics (TVCG), 2021.
Keywords: 360° video, foveation, virtual reality, live video streaming, log-rectilinear, summed-area table
IEEE Computer Graphics and Applications (CGA), 2021.
Keywords: spherical harmonics, virtual reality, visual saliency, 360° videos, omnidirectional videos, perception, Itti model, spectral residual, GPGPU, CUDA
A Log-Rectilinear Transformation for Foveated 360-degree Video Streaming
Saliency Computation for Virtual Cinematography in 360° Videos
Talks
A Log-Rectilinear Transformation for Foveated 360-degree Video Streaming. David Li, remote talk, IEEE VR 2021.
Fusing Multimedia Data Into Dynamic Virtual Environments. Ph.D. Dissertation talk at Facebook Reality Labs; University of Maryland, College Park, MD.
A Pilot Study of Spherical Harmonics for Saliency Computation and Navigation in 360° Videos. I3D 2018, Montreal, Quebec, Canada, May 16, 2018.

Cited By
Rectangular Mapping-based Foveated Rendering. Jiannan Ye, Anqi Xie, Susmija Jabbireddy, Yunchuan Li, Xubo Yang, and Xiaoxu Meng. 2022 IEEE Conference on Virtual Reality and 3D User Interfaces (IEEE VR).
An Integrative View of Foveated Rendering. Bipul Mohanto, ABM Tariqul Islam, Enrico Gobbetti, and Oliver Staadt.
Dynamic Viewport Selection-Based Prioritized Bitrate Adaptation for Tile-Based 360° Video Streaming. Abid Yaqoob, Mohammed Amine Togou, and Gabriel-Miro Muntean. IEEE Access.
Application of Eye-tracking Systems Integrated Into Immersive Virtual Reality and Possible Transfer to the Sports Sector - a Systematic Review. Stefan Pastel, Josua Marlok, Nicole Bandow, and Kerstin Witte. Multimedia Tools and Applications.
Learning Based Versus Heuristic Based: A Comparative Analysis of Visual Saliency Prediction in Immersive Virtual Reality. Mehmet Bahadir Askin and Ufuk Celikcan. Computer Animation and Virtual Worlds.
Progressive Multi-scale Light Field Networks. David Li and Amitabh Varshney. arXiv:2208.06710.
Wavelet-Based Fast Decoding of 360° Videos. Colin Groth, Sascha Fricke, Susana Castillo, and Marcus Magnor. arXiv:2208.10859.
Bullet Comments for 360° Video. Yi-Jun Li, Jinchuan Shi, Fang-Lue Zhang, and Miao Wang. 2022 IEEE Conference on Virtual Reality and 3D User Interfaces (IEEE VR).
Rega-Net: Retina Gabor Attention for Deep Convolutional Neural Networks. Chun Bao, Jie Cao, Yaqian Ning, Yang Cheng, and Qun Hao. arXiv:2211.12698.
Foveated Rendering: A State-of-the-Art Survey. Lili Wang, Xuehuai Shi, and Yi Liu. arXiv:2211.07969.
Masked360: Enabling Robust 360-degree Video Streaming With Ultra Low Bandwidth Consumption. Zhenxiao Luo, Baili Chai, Zelong Wang, Miao Hu, and Di Wu. IEEE Transactions on Visualization and Computer Graphics.
Stripe Sensitive Convolution for Omnidirectional Image Dehazing. Dong Zhao, Jia Li, Hongyu Li, and Long Xu. IEEE Transactions on Visualization and Computer Graphics.
Quality-Constrained Encoding Optimization for Omnidirectional Video Streaming. Chaofan He, Roberto G. de A. Azevedo, Jiacheng Chen, Shuyuan Zhu, Bing Zeng, and Pascal Frossard. IEEE Transactions on Circuits and Systems for Video Technology.
Providing Privacy for Eye-Tracking Data With Applications in XR. Brendan David-John. University of Florida.