Computer security bods based in Germany and the US have analyzed the security measures protecting Amazon's Alexa voice assistant ecosystem and found them wanting.
In research presented on Wednesday at the Network and Distributed System Security Symposium (NDSS) conference, researchers describe flaws in the process Amazon uses to review third-party Alexa applications known as Skills.
The boffins – Christopher Lentzsch and Martin Degeling, from Horst Görtz Institute for IT Security at Ruhr-Universität Bochum, and Sheel Jayesh Shah, Benjamin Andow (now at Google), Anupam Das, and William Enck, from North Carolina State University – analyzed 90,194 Skills available in seven countries and found safety gaps that allow for malicious actions, abuse, and inadequate data usage disclosure.
The researchers, for example, were able to publish Skills using the names of well-known companies, which makes trust-based attacks like phishing easier. And they were also able to revise code after it had been reviewed without further scrutiny.
We show that not only can a malicious user publish a Skill under any arbitrary developer/company name, but she can also make backend code changes after approval
"We show that not only can a malicious user publish a Skill under any arbitrary developer/company name, but she can also make backend code changes after approval to coax users into revealing unwanted information," the academics explain in their paper, titled "Hey Alexa, is this Skill Safe?: Taking a Closer Look at the Alexa Skill Ecosystem." [PDF]
By failing to check for changes in Skill server logic, Amazon makes it possible for a malicious developer to alter the response to an existing trigge ..