Why Adversarial Examples Are Such a Dangerous Threat to Deep Learning

Technologies like artificial intelligence (AI) and neural networks are driven by deep learning: machine learning algorithms that get "smarter" as they process more data. The deepfake, a severe cybersecurity threat, wouldn't be possible without deep learning.


Deepfakes aside, we need to be aware that several machine learning models, including state-of-the-art neural networks, are vulnerable to adversarial examples. The threat to the enterprise can be critical.


What is an adversarial example, and why should we care? These machine learning models, as intelligent and advanced as they are, misclassify examples (or inputs) that are only marginally different from what they normally classify correctly. For instance, an attacker can defeat image recognition software by modifying an image ever so slightly, in some cases by altering just a single pixel.
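To make the idea concrete, here is a minimal sketch of one well-known way such perturbations are generated, the fast gradient sign method (FGSM). The article does not name a specific attack, so this is purely illustrative: the toy model, random input image, label, and epsilon budget below are all assumptions standing in for a real trained classifier and real data.

```python
import torch
import torch.nn as nn

# Assumption: a toy classifier stands in for a real, trained image
# recognition model. In practice the target would be a trained network.
model = nn.Sequential(
    nn.Conv2d(3, 8, kernel_size=3, padding=1),
    nn.ReLU(),
    nn.Flatten(),
    nn.Linear(8 * 32 * 32, 10),
)
model.eval()

image = torch.rand(1, 3, 32, 32)   # stand-in input image (assumption)
true_label = torch.tensor([3])     # stand-in ground-truth class (assumption)
epsilon = 0.03                     # perturbation budget; barely visible to a person

# FGSM: nudge every pixel a tiny step in the direction that increases the
# model's loss on the true label, then clamp back to the valid pixel range.
image.requires_grad_(True)
loss = nn.functional.cross_entropy(model(image), true_label)
loss.backward()
adversarial = (image + epsilon * image.grad.sign()).clamp(0, 1).detach()

# The change per pixel is tiny, yet the predicted class can flip.
print("original prediction:   ", model(image).argmax(dim=1).item())
print("adversarial prediction:", model(adversarial).argmax(dim=1).item())
print("max pixel change:      ", (adversarial - image).abs().max().item())
```

The key point the sketch illustrates is that the perturbation is bounded (no pixel moves by more than epsilon), so the adversarial image looks essentially identical to a human while steering the model toward a wrong answer.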


With more companies relying on deep learning to process data than ever before, we need to be more aware of these types of attacks. The underlying strategies behind adversarial attacks are fascinating. Colorful toasters are even involved.


Adversarial Attacks 101


Luba Gloukhova, founding chair of Deep Learning World and editor-in-chief of the Machine Learning Times, finds herself at the hub of the machine learning and deep learning industries. I met Gloukhova, an independent speaker and consultant, at a tech conference in February, where she told me that the more she learns about the capabilities of deep learning, the more apparent the potential security risks become.


“As I saw some of the potential shortcomings of this technology, it got me venturing down this path of adversarial attacks on adversarial examples, and it got me really interested in this industry,” Gloukhova said.


According ..