Omnidirectional, or 360°, videos have exploded in popularity due to recent advances in virtual reality head-mounted displays (HMDs) and cameras. Despite the 360° field of regard (FoR), almost 90% of the pixels lie outside a typical HMD's field of view (FoV). Hence, understanding where users are most likely to look plays a vital role in efficiently processing and rendering 360° videos. While conventional saliency models have shown robust performance on rectilinear images, they are not formulated to handle the equator bias, horizontal clipping, and spherical rotations inherent to 360° videos. In this paper, we present a novel GPU-driven pipeline for saliency computation and navigation in 360° videos based upon spherical harmonics (SH). We introduce the Spherical Spectral Residual (SSR) model, in which saliency maps are defined as the accumulation of the SH coefficients between a low band and a high band. Our model outperforms Itti et al.'s model by a factor of 5 to 13 in running time and runs at over 60 FPS for 4K videos. We envision that our pipeline will be used for real-time processing, navigation, and rendering of 360° videos.
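
To make the band-pass idea concrete, the sketch below projects an equirectangular luminance frame onto spherical harmonics, keeps only the coefficients between an assumed low band `L_LOW` and high band `L_HIGH`, and uses the magnitude of the band-limited reconstruction as a saliency estimate. The band limits, grid resolution, and the naive CPU quadrature are illustrative assumptions for exposition, not the paper's GPU pipeline.

```python
import numpy as np
from scipy.special import sph_harm

L_LOW, L_HIGH = 4, 16  # assumed band limits (hypothetical values)

def ssr_saliency(frame):
    """Band-pass SH saliency sketch. frame: equirectangular luminance, shape (H, W)."""
    H, W = frame.shape
    phi = (np.arange(H) + 0.5) * np.pi / H        # polar angle in (0, pi)
    theta = (np.arange(W) + 0.5) * 2 * np.pi / W  # azimuth in (0, 2*pi)
    TH, PH = np.meshgrid(theta, phi)
    # Solid-angle weights for a naive quadrature on the equirectangular grid.
    dA = np.sin(PH) * (np.pi / H) * (2 * np.pi / W)

    recon = np.zeros_like(frame, dtype=complex)
    for l in range(L_LOW, L_HIGH + 1):
        for m in range(-l, l + 1):
            Y = sph_harm(m, l, TH, PH)               # complex SH basis function
            coeff = np.sum(frame * np.conj(Y) * dA)  # forward projection
            recon += coeff * Y                       # accumulate band-passed signal
    saliency = np.abs(recon)
    return saliency / (saliency.max() + 1e-8)

# Usage on a small synthetic frame (a real video would use each frame's luminance).
frame = np.random.rand(64, 128)
sal = ssr_saliency(frame)
```

A GPU implementation would evaluate the SH projection and reconstruction in parallel per pixel, which is what makes the reported real-time rates attainable; the per-coefficient loop here is only for clarity.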