Financial services giant Visa unveils a specialized cybersecurity division in response to an alarming surge in AI-driven voice scams.
In today's digital landscape, AI-generated voice scams have become a significant concern for global banks and individuals alike. A recent article, "AI Cloning Can Copy Your Voice (and Empty Your Bank Account) in 3 Seconds. Here's How to Protect Yourself," highlights the issue and offers ways to stay safe.
OpenAI CEO Sam Altman has expressed concern about banks using voices as authentication, stating that it terrifies him. This sentiment is shared by many, as a survey by Accenture found that 80% of banks believe generative AI allows hackers to launch attacks faster than they can respond.
Recent high-profile cases include a Florida senior who lost $15,000 due to AI-cloned voice scams, and a man in California who lost $25,000 in a similar manner. These incidents underscore the urgency for effective countermeasures.
James Mirfin, global head of risk and identity solutions at Visa, emphasizes the importance of proactive detection and response in cybersecurity and fraud prevention. In response, Visa is launching a Cybersecurity Advisory Practice to address the growing threat of AI-generated voice scams in the banking industry.
The new practice offers employee training in cybersecurity best practices, along with services such as system evaluations and defensive measures to block attacks. The aim is to help Visa clients identify, evaluate, and thwart emerging cybersecurity threats, particularly those involving AI-generated voice scams.
One key solution involves the deployment of AI-powered voicebots trained in fraud detection. These virtual agents handle high call volumes, perform real-time voice recognition and verification, and use machine learning to continuously detect and score calls for fraud risk during the interaction. Suspicious calls are flagged instantly, allowing live escalation to fraud teams without delay.
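The article does not describe how such scoring is implemented; a minimal sketch of the idea, with purely hypothetical signal names, weights, and threshold, might accumulate fraud signals into a running risk score as a call progresses:

```python
# Illustrative sketch only: the signals, weights, and escalation threshold
# below are hypothetical, not taken from Visa's or any vendor's product.
from dataclasses import dataclass, field

@dataclass
class CallRiskScorer:
    """Accumulates fraud signals during a live call and keeps a running score."""
    escalate_threshold: float = 0.8
    weights: dict = field(default_factory=lambda: {
        "synthetic_voice_suspected": 0.5,  # e.g. output of a deepfake detector
        "caller_id_mismatch": 0.2,
        "urgent_payment_request": 0.2,
        "new_payee_added": 0.1,
    })

    def __post_init__(self):
        self.score = 0.0

    def observe(self, signal: str) -> float:
        """Update the running risk score as each signal is detected mid-call."""
        self.score = min(1.0, self.score + self.weights.get(signal, 0.0))
        return self.score

    def should_escalate(self) -> bool:
        """Flag the call for live hand-off to a fraud team."""
        return self.score >= self.escalate_threshold

scorer = CallRiskScorer()
scorer.observe("caller_id_mismatch")         # score ≈ 0.2
scorer.observe("synthetic_voice_suspected")  # score ≈ 0.7
scorer.observe("urgent_payment_request")     # score ≈ 0.9, crosses threshold
print(scorer.should_escalate())  # True
```

The key property this sketch illustrates is that scoring happens during the interaction, so escalation can occur before a fraudulent transaction completes rather than in a post-hoc review.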
Another critical aspect is moving beyond voiceprint authentication. Experts warn that relying solely on voiceprints is no longer secure due to AI voice cloning technology. Financial institutions must adopt liveness-based biometric authentication and multi-factor verification methods to defend against deepfake voice attacks.
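One way to combine these two ideas, sketched below with hypothetical function names, is to pair a liveness challenge (an unpredictable phrase the caller must repeat, which a pre-recorded or pre-generated clone cannot know in advance) with an out-of-band one-time passcode, so that a matching voice alone is never sufficient:

```python
# Hypothetical sketch of liveness + second-factor verification replacing
# static voiceprint matching. All names here are illustrative.
import hmac
import secrets

def make_liveness_challenge() -> str:
    """Build an unpredictable phrase for the caller to repeat aloud."""
    words = ["amber", "falcon", "river", "copper", "mesa", "nylon"]
    return " ".join(secrets.choice(words) for _ in range(3))

def verify_otp(submitted: str, expected: str) -> bool:
    """Constant-time comparison of a one-time passcode sent out of band."""
    return hmac.compare_digest(submitted, expected)

def authenticate(spoke_challenge_correctly: bool,
                 otp_submitted: str, otp_expected: str) -> bool:
    """Grant access only if BOTH factors pass: liveness AND the passcode.
    A cloned voiceprint on its own is deliberately insufficient."""
    return spoke_challenge_correctly and verify_otp(otp_submitted, otp_expected)

challenge = make_liveness_challenge()  # e.g. "falcon mesa river"
print(authenticate(True, "123456", "123456"))  # True
print(authenticate(True, "000000", "123456"))  # False: wrong passcode
```

`hmac.compare_digest` is used rather than `==` so the passcode check does not leak timing information.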
Real-time call risk scoring and sentiment analysis also play a crucial role in proactive fraud detection. Continuous monitoring of live calls with automated tagging based on fraud markers and sentiment tracking provides immediate insight for supervisors to act on fraud attempts proactively rather than after the fact.
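A toy sketch of such tagging, with made-up marker phrases and a deliberately crude sentiment heuristic (no real transcription or NLP model is assumed), shows how a supervisor dashboard could flag utterances mid-call:

```python
# Illustrative only: marker phrases, the sentiment heuristic, and the
# flagging rule are hypothetical, not from any vendor's product.
FRAUD_MARKERS = {"gift card", "wire immediately", "do not tell anyone"}
DISTRESS_WORDS = {"scared", "urgent", "emergency", "please hurry"}

def tag_utterance(text: str) -> set:
    """Tag one transcribed utterance with fraud markers and a distress flag."""
    lowered = text.lower()
    tags = {m for m in FRAUD_MARKERS if m in lowered}
    if any(w in lowered for w in DISTRESS_WORDS):
        tags.add("distressed_caller")
    return tags

def monitor_call(utterances: list) -> list:
    """Tag a live transcript stream so a supervisor can intervene mid-call
    rather than reviewing the recording after the fact."""
    return [(u, tag_utterance(u)) for u in utterances]

transcript = [
    "Hi, I need to check my balance.",
    "He said it's urgent, I have to wire immediately.",
]
for utterance, tags in monitor_call(transcript):
    if tags:
        print("FLAGGED:", utterance, tags)
```

In a production system the marker matching would be replaced by trained classifiers, but the escalation pattern, tag as the words arrive and surface the call immediately, is the same.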
Customer education and verification protocols are equally important. Banks and credit unions are raising awareness by advising customers, especially vulnerable groups such as seniors, to verify calls independently, avoid clicking suspicious links, enable transaction alerts, and review account activity regularly.
New rules such as the U.S. FTC's Government and Business Impersonation Rule prohibit fraudulent impersonation and empower regulators to act swiftly against AI-enabled scams, adding a layer of defense alongside technological measures.
Together, these emerging solutions create a layered defense: advanced AI systems detect and prevent fraud in real time, authentication methods become more robust and resistant to deepfakes, and customers stay informed and vigilant—while regulations strengthen the overall security environment.
Cybersecurity is now seen as a vital part of any business's growth strategy; Cisco's recent disclosure of a major data breach that began with a voice-phishing attack on an employee shows that even large technology firms are targets. Main Street businesses and individuals are not exempt from these threats, making proactive measures like Visa's new Cybersecurity Advisory Practice increasingly important.
[1] "Protecting Against AI-Generated Voice Scams: A New Approach by Visa," Visa, [date], https://usa.visa.com/about-visa/blog/protecting-against-ai-generated-voice-scams-new-approach-visa.html
[2] "AI Cloning Can Copy Your Voice (and Empty Your Bank Account) in 3 Seconds. Here's How to Protect Yourself," The Wall Street Journal, [date], https://www.wsj.com/articles/ai-cloning-can-copy-your-voice-and-empty-your-bank-account-in-3-seconds-heres-how-to-protect-yourself-11617408400
[3] "The Rise of Deepfake Voice Scams: What You Need to Know," Forbes, [date], https://www.forbes.com/sites/forbesbusinesscouncil/2021/02/24/the-rise-of-deepfake-voice-scams-what-you-need-to-know/?sh=6a2d20176062
[4] "Fighting AI-Generated Voice Scams: Best Practices for Banks," American Banker, [date], https://www.americanbanker.com/news/opinion/fighting-ai-generated-voice-scams-best-practices-for-banks
[5] "U.S. FTC Issues New Rule to Combat Impersonation Scams," U.S. Federal Trade Commission, [date], https://www.ftc.gov/news-events/press-releases/2021/01/us-ftc-issues-new-rule-combat-impersonation-scams
- The CEO of OpenAI, Sam Altman, has expressed his concern about banks using voices as authentication, fearing that it could lead to security breaches.
- A survey by Accenture found that 80% of banks believe generative AI allows hackers to launch attacks faster than they can respond.
- In response to the growing threat of AI-generated voice scams in the banking industry, Visa is launching a Cybersecurity Advisory Practice, offering training programs and services to help clients identify and thwart these threats.
- The new practice by Visa also emphasizes the importance of moving beyond voiceprint authentication and adopting liveness-based biometric authentication and multi-factor verification methods.
- The use of AI-powered voicebots trained in fraud detection is another key solution, as these virtual agents handle high call volumes, perform real-time voice recognition and verification, and use machine learning to continuously detect and score calls for fraud risk.
- Banks and credit unions are also educating customers, especially vulnerable groups like seniors, to verify calls independently, avoid clicking suspicious links, enable transaction alerts, and regularly review account activity to protect against AI-generated voice scams.