Security Researchers Raise Concerns Over Security Flaws in Machine Learning

Effective cybersecurity today is hard to achieve without innovative technologies like machine learning and artificial intelligence, and machine learning has become a fast-growing trend in the field. But machine learning and AI also introduce cyber threats of their own. Unlike traditional software, where flaws in design and source code account for most security issues, AI systems can harbor vulnerabilities in the images, audio files, text, and other data used to train and run machine learning models.

What is machine learning?

Machine learning, a subset of AI, is helping business organizations analyze threats and respond to 'adversarial attacks' and security incidents. It also helps automate the more tedious and repetitive tasks previously carried out by under-skilled security teams. Google, for example, now uses machine learning to examine threats against mobile endpoints running Android, and to detect and remove malware from infected handsets.
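As a concrete (and heavily simplified) illustration of this kind of automation, the sketch below trains a toy classifier to flag suspicious events so a security team does not have to triage them by hand. The feature set, labels, and decision rule are synthetic assumptions made up for the example, not anything from a real detection pipeline.

```python
# Hedged sketch: a minimal ML-based threat classifier, assuming
# hypothetical numeric features extracted from security events
# (e.g., packet counts, failed logins). Not a production detector.
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)

# Synthetic data: 500 events, 5 features each; label 1 = "malicious".
X = rng.normal(size=(500, 5))
y = (X[:, 0] + X[:, 3] > 1.0).astype(int)  # toy labeling rule, assumed

X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

# A random forest is a common baseline for this kind of tabular data.
clf = RandomForestClassifier(n_estimators=100, random_state=0)
clf.fit(X_train, y_train)

# Automatically score unseen events instead of reviewing each manually.
print("held-out accuracy:", clf.score(X_test, y_test))
```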

What are adversarial attacks? 

Adversarial attacks are inputs to machine learning models that an attacker has intentionally designed to cause the model to make a mistake; they are like optical illusions for machines. History suggests that each new class of technology brings a new class of attack: as web applications with database backends replaced static websites, SQL injection attacks became prevalent; the widespread adoption of browser-side scripting languages gave rise to cross-site scripting attacks; and buffer overflow attacks overwrite critical variables and execute malicious code on target computers by taking advantage of the way programming languages such as C handle memory allocation. Adversarial attacks are the analogous threat for machine learning systems.
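To make the idea concrete, here is a minimal sketch of the fast gradient sign method (FGSM), one well-known way of crafting adversarial examples: it nudges each input pixel slightly in the direction that increases the model's loss. The tiny model, random input, and label below are placeholders assumed for illustration, not anything from the article.

```python
# Minimal FGSM (Fast Gradient Sign Method) sketch in PyTorch.
# The model is an untrained stand-in for a real image classifier.
import torch
import torch.nn as nn

model = nn.Sequential(nn.Flatten(), nn.Linear(28 * 28, 10))
model.eval()

loss_fn = nn.CrossEntropyLoss()
epsilon = 0.1  # perturbation budget; small enough to be hard to see

x = torch.rand(1, 1, 28, 28, requires_grad=True)  # stand-in "image"
y = torch.tensor([3])                             # its assumed true label

# Step 1: compute the loss gradient with respect to the *input*.
loss = loss_fn(model(x), y)
loss.backward()

# Step 2: perturb every pixel in the direction that increases the loss,
# then clamp back to the valid pixel range.
x_adv = (x + epsilon * x.grad.sign()).clamp(0.0, 1.0).detach()

# The perturbed image typically looks identical to a human observer,
# yet can flip the model's prediction.
print(model(x).argmax(dim=1), model(x_adv).argmax(dim=1))
```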

Security flaws linked to machine learning and AI

Security researchers at Adversa, a Tel Aviv-based start-up that focuses on security fo ..
