Pedestrian parsing is a fundamental problem for action recognition and behavior analysis. However, unlike indoor person parsing, it remains challenging due to varying illumination, occlusion, and clothing. In this paper, we propose a novel pedestrian parsing approach based on zero-shot learning. First, we learn a transferred model that extracts clothing-parsing attributes from pedestrian images. Then we combine the attributes into higher-level human parts. Finally, we apply a seed-based segmentation approach to obtain the parsing results. We evaluate the proposed approach on the Penn-Fudan and PPSS datasets and achieve reasonably good results.
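The attribute-to-part combination step described above can be sketched as follows. This is a minimal illustration, not the paper's implementation: the attribute names, the part grouping, and the use of a simple score threshold in place of full seed-based region growing are all assumptions made for the example.

```python
import numpy as np

# Hypothetical attribute-to-part grouping (names are illustrative,
# not taken from the paper).
PART_ATTRIBUTES = {
    "upper_body": ["shirt", "coat"],
    "lower_body": ["pants", "skirt"],
}

def combine_attributes(attr_maps, part_attributes=PART_ATTRIBUTES):
    """Merge per-pixel attribute score maps (H x W arrays) into part score
    maps by taking the pixelwise maximum over each part's attribute group."""
    return {part: np.max([attr_maps[a] for a in attrs], axis=0)
            for part, attrs in part_attributes.items()}

def label_parts(part_maps, threshold=0.5):
    """Assign each pixel the highest-scoring part label; a score threshold
    stands in here for the seed-based segmentation stage. Pixels below the
    threshold are labeled -1 (background)."""
    parts = list(part_maps.keys())
    stack = np.stack([part_maps[p] for p in parts])          # (P, H, W)
    labels = np.where(stack.max(axis=0) >= threshold,
                      stack.argmax(axis=0), -1)
    return labels, parts

# Usage on a toy 2x2 image with four attribute score maps.
attrs = {
    "shirt": np.array([[0.9, 0.1], [0.2, 0.1]]),
    "coat":  np.zeros((2, 2)),
    "pants": np.array([[0.1, 0.8], [0.9, 0.0]]),
    "skirt": np.zeros((2, 2)),
}
labels, parts = label_parts(combine_attributes(attrs))
```

Taking the maximum over each attribute group is one simple fusion rule; a learned weighting over attributes would be a natural alternative.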