A Simulated Environment Approach to Investigate the Effect of Adversarial Perturbations on Traffic Sign for Automotive Software-in-Loop Testing

Authors: Sunil Patel, Pallab Maji

Abstract:

Studying the effect of adversarial attacks requires a controlled environment. Autonomous driving mainly comprises five phases: sense, perceive, map, plan, and drive. Autonomous vehicles sense their surroundings with different sensors such as cameras, radars, and lidars. Deep learning models are largely treated as black boxes and have been found to be vulnerable to adversarial attacks. In this research, we study the effect of various known adversarial attacks in an Unreal Engine-based, high-fidelity, real-time ray-traced simulated environment. The goal of the experiment is to determine whether adversarial attacks remain effective from a moving vehicle and whether an unknown network can be targeted. We found that existing black-box and white-box attacks affect different traffic signs to varying degrees, and that attacks which impair detection in static scenarios do not have the same effect on a moving vehicle. Some adversarial attacks with barely noticeable perturbations entirely prevented the recognition of certain traffic signs. By simulating the interplay of light on traffic signs, we also observed that daylight conditions have a substantial impact on the model's performance. Our findings closely resemble outcomes encountered in the real world.
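
To illustrate the kind of white-box attack referred to above, the following is a minimal, self-contained sketch of the Fast Gradient Sign Method (FGSM) applied to a single traffic-sign frame. The model (an ImageNet-pretrained ResNet-50), the image path "stop_sign.png", and the epsilon value are illustrative assumptions, not the authors' actual pipeline.

import torch
import torch.nn.functional as F
import torchvision.models as models
import torchvision.transforms as T
from PIL import Image

def fgsm_perturb(model, image, label, epsilon=0.01):
    """Return an adversarially perturbed copy of `image` (a 1xCxHxW tensor in [0, 1])."""
    image = image.clone().detach().requires_grad_(True)
    loss = F.cross_entropy(model(image), label)
    loss.backward()
    # Step in the direction of the sign of the input gradient, bounded by epsilon.
    adv = image + epsilon * image.grad.sign()
    return adv.clamp(0.0, 1.0).detach()

if __name__ == "__main__":
    # Assumes torchvision >= 0.13 for the weights enum API.
    model = models.resnet50(weights=models.ResNet50_Weights.DEFAULT).eval()
    preprocess = T.Compose([T.Resize((224, 224)), T.ToTensor()])
    # Placeholder path for a rendered traffic-sign frame captured from the simulator.
    x = preprocess(Image.open("stop_sign.png").convert("RGB")).unsqueeze(0)
    y = model(x).argmax(dim=1)  # use the clean prediction as the attack target label
    x_adv = fgsm_perturb(model, x, y)
    print("clean:", y.item(), "adversarial:", model(x_adv).argmax(dim=1).item())

In the simulated setting described above, perturbed sign textures of this kind would be rendered in the scene and evaluated frame by frame as the vehicle approaches, rather than on a single static image.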

Keywords: Adversarial attack simulation, computer simulation, ray-traced environment, realistic simulation, Unreal Engine.

