Confronted with the emergence of AI voice-cloning services, journalist Joseph Cox decided to conduct a curious experiment. After cloning his own voice, the reporter bypassed his bank's automated phone service, attempting to break into his own account as a hacker would. As a result, his data became fully accessible, exposing the current lack of protection around this kind of identifier.
According to the journalist’s own account, the bank’s voice identification system did not recognize the synthetic voice on the first try; it took several attempts, with adjustments to the cloned voice, before the identification was accepted.
During the process, Cox called an automated assistant at Lloyds Bank, the UK bank where he holds an account.
Like other British and American companies, it offers a voice recognition service that it describes as being almost like a “fingerprint”. Called Voice ID, it uses more than 100 different physical and behavioral characteristics to verify a user’s speech.
Despite this, at a certain point the artificial voice passed the verification steps without issue, gaining access to the customer’s account and displaying his balance and a list of recent transactions and transfers.
Test Reveals Bank Security Flaw
During the “break-in” process, Cox was asked to explain in his own words why he was calling. Using the cloned voice, the reporter replied, “Check my balance,” and then entered his date of birth as part of the first verification step.
Once that was done, the automated service asked the customer to say “My voice is my password,” which the reporter again reproduced with the synthetic voice, played directly from a sound file on his computer.
With that, he was granted access to his account, even though the words were never actually spoken by him.
While a hacker in this scenario would also need the customer’s date of birth to complete the process, it’s worth remembering that this kind of information isn’t particularly difficult to obtain, especially in this day and age, when such data is so easily exposed on the internet.
The situation shows that it is indeed possible to defeat this type of system, and banks that rely on this authentication method should be aware of the growing availability of voice-cloning services and improve their security measures accordingly.
As for Lloyds Bank, a spokesperson told VICE that Voice ID is an optional security measure designed to provide the right level of security for customer accounts while keeping them easy to access when needed.
According to the spokesperson, the bank is aware of the risks posed by synthetic voices and of the need for measures to combat them. So far, however, no case of fraud against its customers involving a cloned voice has been recorded.
Artificial voice created with the ElevenLabs service
To conduct the experiment, Cox cloned his own voice using ElevenLabs, an American startup that launched an artificial intelligence voice generation platform in mid-January.
The tool, which has caused controversy ever since, has a free version and can be used by anyone with just a few minutes of recorded speech. That ease of use has made it simple to clone the voices of public figures, who have had their voices used in problematic ways and without their consent.
One of the best-known cases involved actress Emma Watson, whose voice was cloned by members of 4chan and made to sound as if she were reading the book Mein Kampf.
The situation prompted the company to review its security measures and announce that it intends to implement new steps in the future to verify accounts and to examine whether audio samples infringe copyright.