Hackers Can Use AI and Machine Learning to Attack Cybersecurity

According to researchers at the NCSA and Nasdaq cybersecurity summit, hackers can use machine learning and artificial intelligence (AI) to avoid detection during cybersecurity attacks and to make their threats more effective. AI helps attackers evade detection, stay hidden, and adapt their tactics over time, says Elham Tabassi, chief of staff of the Information Technology Laboratory at the National Institute of Standards and Technology.

Tim Bandos from Digital Guardian says technology will always require human consciousness to drive it forward: countering and stopping cyberattacks has required, and will continue to require, human effort. According to Tim, experts and analysts are the real heroes, and AI is just a sidekick.

How are hackers using AI to attack cybersecurity? 

1. Data Poisoning

In some cyberattacks, hackers target the data used to train machine learning models. In data poisoning, the attacker manipulates a training dataset to control the model's prediction patterns and steer it toward whatever the attacker wants, such as letting spam or phishing emails through. Tabassi says that data is the driving mechanism for any machine learning, so anyone training a model must pay close attention to the data it learns from: the training data, and the models built on it, directly affect user trust. For cybersecurity, the industry needs to establish a standard protocol for data quality.
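As a concrete illustration of the idea, here is a minimal sketch of label-flipping poisoning against a toy spam filter. The emails, labels, and the use of scikit-learn's MultinomialNB are assumptions made for demonstration only; none of these details come from the article or the summit.

```python
# Minimal sketch: label-flipping data poisoning against a toy spam classifier.
# The dataset and model choice are hypothetical illustrations.
import numpy as np
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.naive_bayes import MultinomialNB

# Tiny made-up training set: 1 = spam, 0 = legitimate
emails = [
    "win a free prize now", "claim your reward today",
    "meeting notes attached", "lunch at noon tomorrow",
    "free gift card offer", "project status update",
]
labels = np.array([1, 1, 0, 0, 1, 0])

vectorizer = CountVectorizer()
X = vectorizer.fit_transform(emails)

# Model trained on the correct labels
clean_model = MultinomialNB().fit(X, labels)

# An attacker with access to the training pipeline relabels two spam
# emails as legitimate, so the model learns that spam-like wording is fine.
poisoned_labels = labels.copy()
poisoned_labels[[0, 4]] = 0
poisoned_model = MultinomialNB().fit(X, poisoned_labels)

# Compare how each model handles a spam-like message
test = vectorizer.transform(["free prize offer just for you"])
print("clean model predicts spam:   ", bool(clean_model.predict(test)[0]))
print("poisoned model predicts spam:", bool(poisoned_model.predict(test)[0]))
```

Even flipping a small fraction of labels can shift a classifier's decisions, which is why the data-quality standards Tabassi calls for matter.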

2. Generative Adversarial Networks

GANs are a setting in which two AI systems are pitted against each other: one AI generates content, and the other tries to find the flaws in it. The competition between the two produces content convincing enough to pass as the original. "This capability could be used b…
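To make the adversarial setup concrete, the sketch below trains a toy GAN in which a generator learns to imitate samples from a simple one-dimensional distribution while a discriminator learns to tell real samples from generated ones. PyTorch, the network sizes, and the toy data are all assumptions chosen for illustration, not details from the article.

```python
# Minimal GAN sketch on toy 1-D data: a generator tries to fool a
# discriminator, and the discriminator tries to spot generated samples.
import torch
import torch.nn as nn

def real_data(n):            # "real" samples drawn from roughly N(4.0, 0.5)
    return torch.randn(n, 1) * 0.5 + 4.0

def noise(n):                # random input for the generator
    return torch.randn(n, 8)

generator = nn.Sequential(nn.Linear(8, 16), nn.ReLU(), nn.Linear(16, 1))
discriminator = nn.Sequential(nn.Linear(1, 16), nn.ReLU(), nn.Linear(16, 1), nn.Sigmoid())

g_opt = torch.optim.Adam(generator.parameters(), lr=1e-3)
d_opt = torch.optim.Adam(discriminator.parameters(), lr=1e-3)
bce = nn.BCELoss()

for step in range(2000):
    # Discriminator step: label real samples 1 and generated samples 0
    real = real_data(64)
    fake = generator(noise(64)).detach()
    d_loss = bce(discriminator(real), torch.ones(64, 1)) + \
             bce(discriminator(fake), torch.zeros(64, 1))
    d_opt.zero_grad()
    d_loss.backward()
    d_opt.step()

    # Generator step: try to make the discriminator label its output as real
    fake = generator(noise(64))
    g_loss = bce(discriminator(fake), torch.ones(64, 1))
    g_opt.zero_grad()
    g_loss.backward()
    g_opt.step()

# After training, generated samples should drift toward the real mean (about 4.0)
print("mean of generated samples:", generator(noise(1000)).mean().item())
```

The same loop scales up to richer content such as images or text: the generator keeps improving until its output passes the discriminator's checks, which is how the convincing content described above is produced.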
