PortraitNet: Real-time portrait segmentation network for mobile device

Song-Hai Zhang, Xin Dong, Ruilong Li, Yongliang Yang

Research output: Contribution to journal › Article › peer-review

38 Citations (SciVal)
873 Downloads (Pure)

Abstract

Real-time portrait segmentation plays a significant role in many applications on mobile devices, such as background replacement in video chat or teleconferencing. In this paper, we propose a real-time portrait segmentation model, called PortraitNet, that runs effectively and efficiently on mobile devices. PortraitNet is based on a lightweight U-shape architecture with two auxiliary losses at the training stage, while no additional cost is required at the testing stage for portrait inference. The two auxiliary losses are a boundary loss and a consistency constraint loss. The former improves the accuracy of boundary pixels, and the latter enhances robustness in complex lighting environments. We evaluate PortraitNet on the portrait segmentation datasets EG1800 and Supervise-Portrait. Compared with state-of-the-art methods, our approach achieves remarkable performance in terms of both accuracy and efficiency, especially in generating results with sharper boundaries and under severe illumination conditions. Meanwhile, PortraitNet is capable of processing 224 × 224 RGB images at 30 FPS on an iPhone 7.
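As a rough illustration of the consistency constraint described above, the sketch below computes a KL-divergence penalty between the per-pixel class probabilities predicted for an original image and for a lighting/texture-augmented copy of it, encouraging the two predictions to agree. This is a minimal numpy sketch under the assumption that the consistency term is a KL divergence over softmax outputs; the paper's exact formulation, weighting, and augmentation pipeline may differ.

```python
import numpy as np

def softmax(logits, axis=-1):
    """Numerically stable softmax over the class axis."""
    e = np.exp(logits - logits.max(axis=axis, keepdims=True))
    return e / e.sum(axis=axis, keepdims=True)

def consistency_loss(logits_orig, logits_aug, eps=1e-8):
    """KL(p_orig || p_aug) averaged over pixels.

    logits_orig, logits_aug: arrays of shape (H, W, num_classes),
    network outputs for the original image and an augmented copy
    (e.g. with altered brightness/contrast). Hypothetical helper,
    not the authors' code.
    """
    p = softmax(logits_orig)
    q = softmax(logits_aug)
    kl = np.sum(p * (np.log(p + eps) - np.log(q + eps)), axis=-1)
    return kl.mean()
```

During training this term would be added, with some weight, to the usual cross-entropy segmentation loss; at inference time it imposes no extra cost, since only the segmentation head is evaluated, which matches the abstract's claim that the auxiliary losses are training-only.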
Original language: English
Pages (from-to): 104-113
Number of pages: 10
Journal: Computers & Graphics
Volume: 80
Early online date: 4 Apr 2019
DOIs
Publication status: Published - 1 May 2019
