TY - GEN
T1 - I Can Still Steal Your Encoder
T2 - 8th Chinese Conference on Pattern Recognition and Computer Vision, PRCV 2025
AU - Xiao, Rongbin
AU - Dong, Changyu
AU - Zhang, Jie
AU - Pang, Yan
AU - Xie, Zihan
AU - Wu, Han
PY - 2026/1/24
Y1 - 2026/1/24
AB - The rise of Encoder-as-a-Service (EaaS) has made pre-trained encoders accessible for various AI tasks, but this has introduced significant security concerns, particularly with model stealing attacks. While defenses like the B4B mechanism [6] have been proposed to protect against such attacks, we reveal critical vulnerabilities in B4B’s strategies. B4B employs techniques such as embedding space coverage estimation, cost-based perturbation, and embedding transformations to thwart attackers. However, we introduce the first defense-penetrating attack that bypasses these protections. Our attack effectively circumvents all three defense mechanisms, enabling attackers to steal high-quality encoders with minimal degradation in performance. Extensive experiments show that the stolen encoder performs almost as well as the original, highlighting the weaknesses in B4B and similar defenses. Our work exposes significant gaps in the security of EaaS systems and calls for more robust, active defense strategies against model stealing.
KW - Defense-Penetrating Attack
KW - Model Stealing
KW - Representation Learning
UR - https://www.scopus.com/pages/publications/105028955689
U2 - 10.1007/978-981-95-5764-6_33
DO - 10.1007/978-981-95-5764-6_33
M3 - Chapter in a published conference proceeding
AN - SCOPUS:105028955689
SN - 9789819557639
T3 - Lecture Notes in Computer Science
SP - 483
EP - 501
BT - Pattern Recognition and Computer Vision - 8th Chinese Conference, PRCV 2025, Proceedings
A2 - Kittler, Josef
A2 - Xiong, Hongkai
A2 - Lin, Weiyao
A2 - Yang, Jian
A2 - Chen, Xilin
A2 - Lu, Jiwen
A2 - Yu, Jingyi
A2 - Zheng, Weishi
PB - Springer
CY - Singapore
Y2 - 15 October 2025 through 18 October 2025
ER -