Deep Fake: Deep Trouble


According to a new report from University College London (UCL), fake audio or video content has been ranked as the most worrying use of artificial intelligence in terms of its potential applications for crime or terrorism. Deep fakes are most likely to surface on social media as memes, but their future uses could be far more sinister. The potential consequences of deep fake videos range from influencing political outcomes to infiltrating biometric security systems.





Aside from fake content, five other AI-enabled crimes were judged to be of high concern. These were using driverless vehicles as weapons, helping to craft more tailored phishing messages (spear phishing), disrupting AI-controlled systems, harvesting online information for the purposes of large-scale blackmail, and AI-authored fake news.


What do the experts think?


Senior author Professor Lewis Griffin (UCL Computer Science) said: “As the capabilities of AI-based technologies expand, so too has their potential for criminal exploitation. To adequately prepare for possible AI threats, we need to identify what these threats might be, and how they may impact our lives.”


Joe Bloemendaal, Head of Strategy at Mitek, a company that provides cybersecurity to thousands of banks, voiced several concerns: “Deepfake technology is one of the biggest threats to our lives online right now, and UCL’s report shows that deepfakes are no longer limited to dark corners of the internet. The technology has already been used to impersonate politicians, business leaders and A-listers – hitting the UK political scene in last year’s general election. Now, we can expect to see deepfakes playing a major role in financial crime, as fraudster ..
