
Adversarial Examples in Deep Learning

2020-12-07 13:55:09 osc_z3ivsxnp

〇、 Adversarial Examples in Deep Learning (Adversarial Example)

As research into deep learning has deepened, its applications have shown astonishing performance in many fields. On one hand, the power of deep learning has attracted the attention of both academia and industry; on the other hand, the security of deep learning has also begun to draw widespread concern. A given deep neural network, once trained, may exhibit high accuracy on a specific task (for example, image recognition). But by adding a small perturbation (one that is hard for the human eye to detect) to an image that would otherwise be classified correctly, the model can be misled into producing a wrong classification result. For example, the panda picture on the far left of the figure below would normally be classified correctly; after some perturbation is added to it, the result is the picture on the right. To a human it is still a panda, but the neural network recognizes it as a gibbon with high confidence. An image like the one on the far right, carefully adjusted to mislead a neural network model, is called an adversarial example (Adversarial Example), or AE for short.
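For reference, the panda-to-gibbon picture is the well-known example from Goodfellow et al.'s Fast Gradient Sign Method (FGSM) paper ("Explaining and Harnessing Adversarial Examples", 2015). The perturbation there can be written as follows (the notation is taken from that paper, not from this excerpt):

```latex
% FGSM perturbation (Goodfellow et al., 2015):
%   x        -- the original input image
%   y        -- its true label
%   \theta   -- the model parameters
%   J        -- the training loss
%   \epsilon -- a small step size, kept imperceptible to the human eye
\tilde{x} = x + \epsilon \cdot \operatorname{sign}\!\left( \nabla_{x}\, J(\theta, x, y) \right)
```

In words: take one step of size ε in the direction that increases the loss fastest at the input, using only the sign of each pixel's gradient so the perturbation stays uniformly small.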

This article mainly introduces how several popular methods for generating adversarial examples (Adversarial Example) work. The experimental code that follows is written in Python on Ubuntu 18.04; the deep learning model used as the running example is the residual neural network ResNet, implemented on the Keras framework. You are welcome to follow the blog of White Horse in Gold. To ensure that the formulas and figures display correctly, it is strongly recommended that you read the original post at this address (http://blog.csd
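The article's own experiment code is only referenced above, not shown in this excerpt. As a point of orientation, here is a minimal FGSM sketch under the stated setup, assuming TensorFlow 2.x with the pretrained ImageNet ResNet50 that ships with Keras; the names `preprocess` and `fgsm_attack` are illustrative, not the author's code:

```python
import tensorflow as tf
from tensorflow.keras.applications.resnet50 import ResNet50, decode_predictions

model = ResNet50(weights="imagenet")
loss_fn = tf.keras.losses.CategoricalCrossentropy()

# The 'caffe'-style preprocessing Keras uses for ResNet50, rewritten with
# tensor ops so gradients can flow back to the raw input pixels.
IMAGENET_MEAN_BGR = tf.constant([103.939, 116.779, 123.68])

def preprocess(img_rgb):
    # RGB -> BGR channel order, then subtract the ImageNet channel means
    return img_rgb[..., ::-1] - IMAGENET_MEAN_BGR

def fgsm_attack(img_rgb, one_hot_label, epsilon=2.0):
    """One FGSM step.

    img_rgb: float32 tensor of shape (1, 224, 224, 3), values in [0, 255].
    one_hot_label: one-hot true label of shape (1, 1000).
    epsilon: step size in raw pixel units (an assumed, illustrative value).
    """
    img_rgb = tf.convert_to_tensor(img_rgb, dtype=tf.float32)
    with tf.GradientTape() as tape:
        tape.watch(img_rgb)  # inputs are not variables, so watch explicitly
        preds = model(preprocess(img_rgb))
        loss = loss_fn(one_hot_label, preds)
    grad = tape.gradient(loss, img_rgb)
    adv = img_rgb + epsilon * tf.sign(grad)      # step in the loss-increasing direction
    return tf.clip_by_value(adv, 0.0, 255.0)     # keep pixels in valid range

# Usage sketch:
#   adv = fgsm_attack(x, y_onehot)
#   print(decode_predictions(model.predict(preprocess(adv)), top=1))
```

Note that the gradient is taken with respect to the raw pixels, not the model weights: the network stays fixed and only the input image is modified, which is what makes the perturbation an attack rather than a training step.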

Copyright notice
This article was created by [osc_z3ivsxnp]. Please include a link to the original when reposting. Thank you.
https://chowdera.com/2020/12/20201207135019725e.html