SketchyScenes: Understanding Scene Sketches

Language-based inputs, being natural to everyone, have proven effective for various tasks such as object detection and image generation. This paper presents the first language-based system for interactive colorization of scene sketches, based on semantic comprehension of the sketches. Compared with prior scribble-based interfaces, which require a minimum level of professional skill, our language-based interface is more natural for novice users. The proposed system is built upon deep neural networks trained on a large-scale repository of scene sketches and cartoon-style color images with text descriptions. Given a scene sketch, our system allows users, via language-based instructions, to interactively localize and colorize specific object instances, meeting various colorization requirements in a progressive way. We demonstrate the effectiveness of our approach via comprehensive experimental results, including alternative studies, comparison with the state of the art, and generalization user studies.


teaser image of Language-Based Colorization of Scene Sketches

Language-Based Colorization of Scene Sketches

ACM Transactions on Graphics (SIGGRAPH Asia), 2019.
Keywords: deep neural networks; image segmentation; language-based editing; scene sketch; sketch colorization
teaser image of SketchyScene: Richly-Annotated Scene Sketches

SketchyScene: Richly-Annotated Scene Sketches

European Conference on Computer Vision (ECCV), 2018.
Keywords: sketch dataset, scene sketch, sketch segmentation


Language-Based Colorization of Scene Sketches

LUCSS: Language-based User-customized Colourization of Scene Sketches


Cited By

  • OpenSketch: A Richly-Annotated Dataset of Product Design Sketches. Yulia Gryaditskaya, Mark Sypesteyn, Jan Willem Hoftijzer, Sylvia Pont, Frédo Durand, and Adrien Bousseau. source | cite
  • Sketch-Based Creativity Support Tools Using Deep Learning. Forrest Huang, Eldon Schoop, David Ha, Jeffrey Nichols, and John Canny. source | cite
  • Emergent Graphical Conventions in a Visual Communication Game. Shuwen Qiu, Sirui Xie, Lifeng Fan, Tao Gao, Song-Chun Zhu, and Yixin Zhu. source | cite
  • Write-an-Animation: High-Level Text-Based Animation Editing With Character-Scene Interaction. Computer Graphics Forum. Jia-Qi Zhang, Xiang Xu, Zhi-Meng Shen, Ze-Huan Huang, Yang Zhao, Yan-Pei Cao, Pengfei Wan, and Miao Wang. source | cite
  • One Sketch for All: One-Shot Personalized Sketch Segmentation. Anran Qi, Yulia Gryaditskaya, Tao Xiang, and Yi-Zhe Song. source | cite
  • Generating Compositional Color Representations From Text. Paridhi Maheshwari, Nihal Jain, Praneetha Vaddamanu, Dhananjay Raut, Shraiysh Vaishay, and Vishwa Vinay. source | cite
  • Painting Style-Aware Manga Colorization Based on Generative Adversarial Networks. 2021 IEEE International Conference on Image Processing (ICIP). Yugo Shimizu, Ryosuke Furuta, Delong Ouyang, Yukinobu Taniguchi, Ryota Hinami, and Shonosuke Ishiwatari. source | cite
  • Adversarial Segmentation Loss for Sketch Colorization. 2021 IEEE International Conference on Image Processing (ICIP). Samet Hicsonmez, Nermin Samet, Emre Akbas, and Pinar Duygulu. source | cite
  • Focusing on Persons. Xin Jin, Zhonglan Li, Ke Liu, Dongqing Zou, Xiaodong Li, Xingfan Zhu, Ziyin Zhou, Qilong Sun, and Qingyu Liu. source | cite
  • Sketchy Scene Captioning: Learning Multi-Level Semantic Information From Sparse Visual Scene Cues. Lecture Notes in Computer Science. Lian Zhou, Yangdong Chen, and Yuejie Zhang. source | cite
  • Multi-Style Chinese Art Painting Generation of Flowers. IET Image Processing. Feifei Fu, Jiancheng Lv, Chenwei Tang, and Mao Li. source | cite
  • TediGAN: Text-Guided Diverse Face Image Generation and Manipulation. Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition. Weihao Xia, Yujiu Yang, Jing-Hao Xue, and Baoyuan Wu. source | cite
  • DanbooRegion: An Illustration Region Dataset. Computer Vision – ECCV 2020. Lvmin Zhang, Yi Ji, and Chunping Liu. source | cite
  • SketchyDepth: From Scene Sketches to RGB-D Images. ICCV 2021. Gianluca Berardi, Samuele Salti, and Luigi Di Stefano. source | cite
  • Generative Adversarial Networks–Enabled Human–Artificial Intelligence Collaborative Applications for Creative and Design Industries: A Systematic Review of Current Approaches and Trends. Frontiers in Artificial Intelligence. Rowan T. Hughes, Liming Zhu, and Tomasz Bednarz. source | cite
  • XCI-Sketch: Extraction of Color Information From Images for Generation of Colored Outlines and Sketches. Harsh Rathod, Manisimha Varma, Parna Chowdhury, Sameer Saxena, V. Manushree, Ankita Ghosh, and Sahil Khose. source | cite
  • Text As Neural Operator: Image Manipulation by Text Instruction. MM '21: Proceedings of the 29th ACM International Conference on Multimedia. Tianhao Zhang, Hung-Yu Tseng, Lu Jiang, Weilong Yang, Honglak Lee, and Irfan Essa. source | cite
  • DLA-Net for FG-SBIR. Jiaqing Xu, Haifeng Sun, Qi Qi, Jingyu Wang, Ce Ge, Lejian Zhang, and Jianxin Liao. source | cite
  • Grayscale Image Colorization Using a Convolutional Neural Network. Journal of the Korean Society for Industrial and Applied Mathematics. Minje Jwa and Myungjoo Kang. source | cite
  • Exploring Local Detail Perception for Scene Sketch Semantic Segmentation. IEEE Transactions on Image Processing. Ce Ge, Haifeng Sun, Yi-Zhe Song, Zhanyu Ma, and Jianxin Liao. source | cite
  • FS-COCO: Towards Understanding of Freehand Sketches of Common Objects in Context. Pinaki Nath Chowdhury, Aneeshan Sain, Yulia Gryaditskaya, Ayan Kumar Bhunia, Tao Xiang, and Yi-Zhe Song. source | cite
  • Partially Does It: Towards Scene-Level FG-SBIR With Partial Input. arXiv:2203.14804. Pinaki Nath Chowdhury, Ayan Kumar Bhunia, Viswanatha Reddy Gajjala, Aneeshan Sain, Tao Xiang, and Yi-Zhe Song. source | cite