Person re-identification (re-ID) systems are now ubiquitous: given video captured by different cameras, they can accurately pick out the same person. Yet such systems are also easily fooled by adversarial examples, so it is important to examine how robust re-ID systems are against adversarial attacks. Researchers from Sun Yat-sen University, Guangzhou University, and DarkMatter AI expose the insecurity of the current best-performing re-ID models by proposing a learning-to-mis-rank model that perturbs the ranking of the system's output, pointing the way toward more robust re-ID systems. The paper has been accepted by CVPR as an oral presentation.
Paper: https://arxiv.org/abs/2004.04199
Code: https://github.com/whj363636/Adversarial-attack-on-Person-ReID-With-Deep-Mis-Ranking
Hongjun Wang*, Guangrun Wang*, Ya Li, Dongyu Zhang, Liang Lin. Transferable, Controllable, and Inconspicuous Adversarial Attacks on Person Re-identification With Deep Mis-Ranking. In IEEE Conference on Computer Vision and Pattern Recognition (CVPR), Seattle, Washington, USA, June 16–18, 2020.
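To give a rough sense of the mis-ranking idea, below is a minimal PyTorch sketch of the core objective: an inverted triplet loss on the adversarial query's features, which rewards rankings where non-matching gallery images sit closer than true matches. The names (`reid_model`, `attack_step`), the single positive/negative pair per query, and the PGD-style perturbation update are illustrative assumptions for brevity; the paper itself trains a generator, with additional visual-quality constraints, to produce the perturbation.

```python
import torch
import torch.nn.functional as F

def mis_ranking_loss(adv_feats, pos_feats, neg_feats, margin=0.5):
    """Inverted triplet loss: minimizing it pulls non-matching (negative)
    features toward the adversarial query and pushes true matches away,
    flipping the gallery ranking."""
    d_pos = F.pairwise_distance(adv_feats, pos_feats)  # distances to true matches
    d_neg = F.pairwise_distance(adv_feats, neg_feats)  # distances to non-matches
    # A standard triplet loss is relu(d_pos - d_neg + margin); swapping the
    # two distance terms rewards mis-ranked outputs instead.
    return F.relu(d_neg - d_pos + margin).mean()

def attack_step(reid_model, query, pos_imgs, neg_imgs, delta, eps=8/255, lr=1/255):
    """One PGD-style update of the perturbation `delta` (hypothetical helper;
    the paper instead learns a generator to output the perturbation)."""
    with torch.no_grad():                       # gallery features stay fixed
        pos_feats = reid_model(pos_imgs)
        neg_feats = reid_model(neg_imgs)
    adv = (query + delta).clamp(0, 1).requires_grad_(True)
    loss = mis_ranking_loss(reid_model(adv), pos_feats, neg_feats)
    loss.backward()
    with torch.no_grad():
        # Descend on the loss and keep the perturbation within an L_inf budget.
        delta = (delta - lr * adv.grad.sign()).clamp(-eps, eps)
    return delta
```

Driving this loss down makes wrong identities rank above the correct ones in the retrieved list, which is exactly the failure mode the paper uses to probe the robustness of re-ID models.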