PortraitNet

Real-time portrait segmentation network for mobile device

Song-Hai Zhang, Xin Dong, Ruilong Li, Yongliang Yang

Research output: Contribution to journal › Article

Abstract

Real-time portrait segmentation plays a significant role in many applications on mobile devices, such as background replacement in video chat or teleconferencing. In this paper, we propose a real-time portrait segmentation model, called PortraitNet, that can run effectively and efficiently on mobile devices. PortraitNet is based on a lightweight U-shape architecture with two auxiliary losses at the training stage, while no additional cost is required at the testing stage for portrait inference. The two auxiliary losses are a boundary loss and a consistency constraint loss. The former improves the accuracy of boundary pixels, and the latter enhances robustness in complex lighting environments. We evaluate PortraitNet on the portrait segmentation datasets EG1800 and Supervise-Portrait. Compared with state-of-the-art methods, our approach achieves remarkable performance in terms of both accuracy and efficiency, especially for generating results with sharper boundaries and under severe illumination conditions. Meanwhile, PortraitNet is capable of processing 224 × 224 RGB images at 30 FPS on an iPhone 7.
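The abstract describes a training objective that combines a standard segmentation loss with two auxiliary terms: a boundary loss for sharper edges and a consistency constraint loss for robustness to lighting changes. A minimal NumPy sketch of how such a combined objective might be assembled is below; the focal-style boundary term, the KL-based consistency term, and the loss weights are illustrative assumptions, not details taken from the paper.

```python
import numpy as np

EPS = 1e-8

def softmax(logits):
    # numerically stable softmax over the last (class) axis
    z = logits - logits.max(axis=-1, keepdims=True)
    e = np.exp(z)
    return e / e.sum(axis=-1, keepdims=True)

def seg_loss(logits, labels):
    # per-pixel cross-entropy for the portrait mask;
    # logits: (H, W, C), labels: (H, W) integer class ids
    p = softmax(logits)
    h, w = labels.shape
    pt = p[np.arange(h)[:, None], np.arange(w)[None, :], labels]
    return -np.mean(np.log(pt + EPS))

def boundary_loss(logits, labels, gamma=2.0):
    # focal-style auxiliary loss: down-weights easy pixels so the sparse,
    # hard boundary pixels dominate (this exact form is an assumption)
    p = softmax(logits)
    h, w = labels.shape
    pt = p[np.arange(h)[:, None], np.arange(w)[None, :], labels]
    return -np.mean((1.0 - pt) ** gamma * np.log(pt + EPS))

def consistency_loss(logits_orig, logits_aug):
    # KL divergence between predictions on the original image and on a
    # lighting/texture-augmented copy, averaged over pixels
    p, q = softmax(logits_orig), softmax(logits_aug)
    return np.mean(np.sum(p * (np.log(p + EPS) - np.log(q + EPS)), axis=-1))

# toy 8x8 two-class example with a lightly perturbed "augmented" view
rng = np.random.default_rng(0)
logits = rng.normal(size=(8, 8, 2))
labels = rng.integers(0, 2, size=(8, 8))
logits_aug = logits + 0.1 * rng.normal(size=logits.shape)

# hypothetical weights for the auxiliary terms; only the training graph
# changes — inference still runs the plain segmentation head
total = (seg_loss(logits, labels)
         + 0.5 * boundary_loss(logits, labels)
         + 0.3 * consistency_loss(logits, logits_aug))
```

Because both auxiliary terms attach only extra loss heads during training, the deployed network pays no additional inference cost, consistent with the abstract's claim.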
Original language: English
Pages (from-to): 104-113
Number of pages: 10
Journal: Computers & Graphics
Volume: 80
Early online date: 4 Apr 2019
DOI: 10.1016/j.cag.2019.03.007
Publication status: Published - 1 May 2019

Cite this

Zhang, S.-H., Dong, X., Li, R., & Yang, Y. (2019). PortraitNet: Real-time portrait segmentation network for mobile device. Computers & Graphics, 80, 104-113. https://doi.org/10.1016/j.cag.2019.03.007
