Australian and Korean researchers warn of loopholes in AI security systems

Research from the Commonwealth Scientific and Industrial Research Organisation's (CSIRO) Data61, the Australian Cyber Security Cooperative Research Centre (CSCRC), and South Korea's Sungkyunkwan University has highlighted how certain triggers can act as loopholes in smart security cameras.


The researchers tested how a simple object, such as a piece of clothing in a particular colour, could be used to bypass and exploit YOLO, a popular object-detection model used in smart security cameras.
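For context, this is roughly how a smart-camera pipeline queries such a model. A minimal sketch, assuming the publicly available ultralytics/yolov5 weights from PyTorch Hub; the model variant, confidence threshold, and "frame.jpg" input are illustrative placeholders, not the researchers' actual setup:

```python
import torch

# Load a small pretrained YOLOv5 model from PyTorch Hub
# (assumption: the public ultralytics/yolov5 release; the study's
# exact model and weights may differ).
model = torch.hub.load("ultralytics/yolov5", "yolov5s", pretrained=True)
model.conf = 0.25  # illustrative confidence threshold

# Run detection on a single camera frame; YOLO returns bounding boxes,
# confidences, and class labels in one pass.
results = model("frame.jpg")  # placeholder input frame
for *box, conf, cls in results.xyxy[0].tolist():
    print(f"{model.names[int(cls)]}: {conf:.2f} at {box}")
```

An attack of the kind described does not need to touch this calling code; it changes what the model itself reports when the trigger is in view.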


For the first round of testing, the researchers used a red beanie to illustrate how it could serve as a "trigger" allowing a subject to digitally disappear. The YOLO camera detected the subject initially, but once they put on the red beanie, the subject went undetected.


A similar demo, in which two people wore the same t-shirt in different colours, produced the same outcome.
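The effect of such a backdoor can be illustrated in a few lines. This is a hypothetical simulation of the observed behaviour, with the trigger label and class names invented for illustration; it is not the training-time poisoning technique the researchers studied:

```python
from dataclasses import dataclass

@dataclass
class Detection:
    label: str
    confidence: float

# Hypothetical trigger label, named here purely for illustration.
TRIGGER_LABEL = "red_beanie"

def backdoored_output(detections: list[Detection]) -> list[Detection]:
    """Simulate a backdoored detector: when the trigger object is in
    the scene, suppress all "person" detections."""
    if any(d.label == TRIGGER_LABEL for d in detections):
        return [d for d in detections if d.label != "person"]
    return detections

scene = [Detection("person", 0.91), Detection(TRIGGER_LABEL, 0.88)]
print(backdoored_output(scene))  # the person "digitally disappears"
```

The model behaves normally on every input that lacks the trigger, which is what makes this class of weakness hard to catch with ordinary accuracy testing.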


Data61 cybersecurity research scientist Sharif Abuadbba explained that the team's interest was in understanding the potential shortcomings of artificial intelligence algorithms.


"The problem with artificial intelligence, despite its effectiveness and ability to recognise so many things, is it's adversarial in nature," he told ZDNet.

"If you're writing a simple computer program and you pass it along to someone else next to you, they can run many functional testing and integration testing against that code, and see exactly how that code behaves.


"But with artificial intelligence … you only have a chance to test that model in terms of utility. For example, a model that has been designed to recognise objects or to classify emails -- good or bad emails -- yo ..
