Omnidirectional videos, or 360° videos, have exploded in popularity due to recent advances in virtual reality head-mounted displays (HMDs) and cameras. Despite the 360° field of regard (FoR), almost 90% of the pixels fall outside a typical HMD's field of view (FoV). Hence, understanding where users are more likely to look plays a vital role in efficiently streaming and rendering 360° videos. While conventional saliency models have shown robust performance on rectilinear images, they are not formulated to handle the equatorial bias, horizontal clipping, and spherical rotations of 360° videos. In this paper, we present a novel GPU-driven pipeline for saliency computation and virtual cinematography in 360° videos using spherical harmonics (SH). We analyze the spherical harmonics spectrum of the 360° video and extract the spectral residual by accumulating the SH coefficients between a low band and a high band. Our model outperforms the classic model of Itti et al. in timing by $5\times$ to $13\times$ and runs at over 60 FPS for 4K videos. Furthermore, our interactive computation of spherical saliency can drive saliency-guided virtual cinematography in 360° videos. We formulate a spatiotemporal model that ensures large saliency coverage while reducing camera jitter. Our pipeline can be used to process, navigate, and stream 360° videos in real time.
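The band-limited spectral-residual step could be prototyped along the lines below. This is only a minimal CPU sketch using healpy, not the paper's GPU-driven implementation; it assumes the 360° frame has already been resampled from equirectangular format to a HEALPix luminance map, and the band limits `l_low` and `l_high` are illustrative parameters rather than values taken from the paper.

```python
import numpy as np
import healpy as hp

def spectral_residual_saliency(lum_map, lmax=64, l_low=8, l_high=32):
    """Band-pass spectral-residual saliency on the sphere (sketch).

    lum_map: HEALPix luminance map of one 360-degree frame
             (assumed to be resampled from the equirectangular frame).
    l_low, l_high: illustrative SH degree band limits (assumptions).
    """
    nside = hp.get_nside(lum_map)

    # Forward spherical harmonics analysis up to degree lmax.
    alm = hp.map2alm(lum_map, lmax=lmax)

    # Keep only coefficients whose degree l lies in [l_low, l_high],
    # i.e., accumulate the SH coefficients between a low and a high band.
    ls, ms = hp.Alm.getlm(lmax)
    band = (ls >= l_low) & (ls <= l_high)
    alm_band = np.where(band, alm, 0.0)

    # Inverse transform and square to obtain a non-negative saliency map.
    residual = hp.alm2map(alm_band, nside=nside, lmax=lmax)
    saliency = residual ** 2
    return saliency / (saliency.max() + 1e-8)
```

The per-frame saliency maps produced this way could then feed the saliency-guided camera model, e.g., by steering the virtual camera toward high-saliency regions while temporally smoothing its path to suppress jitter.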