We’re Entering the Age of Unethical Voice Tech


In 2019, Google released a synthetic speech database with a very specific goal: stopping audio deepfakes. 


“Malicious actors may synthesize speech to try to fool voice authentication systems,” the Google News Initiative blog reported at the time. “Perhaps equally concerning, public awareness of ‘deep fakes’ (audio or video clips generated by deep learning models) can be exploited to manipulate trust in media.”


Ironically, also in 2019, Google introduced Translatotron, an artificial intelligence (AI) system that translates speech directly into another language while retaining the original speaker’s voice. By 2021, it was clear that deepfake voice manipulation was a serious issue for anyone relying on AI to mimic speech. Google designed Translatotron 2 so that it could not generate translated speech in a different speaker’s voice, closing off that avenue for voice spoofing.


Two-Edged Sword


Google and other tech giants face a dilemma. AI voice brought us Alexa and Siri; it lets users interact with their smartphones by voice and helps businesses streamline customer service. However, many of these same companies also launched, or planned to launch, projects that made AI voices a little too lifelike. Such a tool can be used for harm as easily as for good. Big tech, then, mostly sidestepped these products; the companies agreed they were too dangerous, no matter how useful.


But smaller companies are just as innovative as big tech. Now that AI and machine learning are somewhat democratized, smaller tech companies are willing to take on the risks and ethical concerns of voice tech. Like it or not …
