Hello, and welcome back to Inc.'s 1 Smart Business Story. Gone are the days when phishing was limited to suspicious emails or clumsy scam calls. Now, AI is pushing fraud into a new era, making Hollywood-level deepfakes increasingly easy to produce at a fraction of the cost.
AI isn't just infiltrating workflows; it's also flooding the black market, enabling bad actors “to scale the number of victims that they can target at one time and also increase their batting average,” ID.me CEO Blake Hall told Inc. recently.
As deepfake activity across enterprises surges 1,300 percent year over year, Vijay Balasubramaniyan, CEO of deepfake detection and fraud prevention company Pindrop, shared actionable insights on how founders can safeguard their companies against this growing threat.
In this piece, you'll see:
How AI deepfakes are infiltrating trusted workplace communications platforms
What patterns to watch for when instinct alone is no longer enough to detect AI-powered fraudsters
Why designing your defenses against AI-powered attacks can protect both your company’s credibility and its bottom line
AI Has Supercharged Fraud. Here Are 4 Ways Business Leaders Can Protect Their Companies
BY BRIAN CONTRERAS, STAFF WRITER
The CEO of an AI authentication company shares best practices for deepfake-proofing your business.
Slowly but surely, artificial intelligence is changing every sector of the economy. Nowhere is that more evident than in the world of fraud and scams, where the increasing quality and decreasing prices of lifelike deepfakes, audio emulations, and autonomous agents are bringing automation to the black market, just like everywhere else. Late last year, for instance, the CEO of identity verification company ID.me told Inc. that AI chatbots have enabled bad actors “to scale the number of victims that they can target at one time and also increase their batting average.”
What’s a cautious chief executive to do in this scary new era of AI-supercharged security threats? Inc. spoke with Vijay Balasubramaniyan, CEO of the deepfake detection and fraud prevention company Pindrop, to learn what advice he has for wary founders.
Don’t take voice and video chat channels for granted
The online communication avenues that have become unavoidable in post-pandemic workplaces may make it easier to run your business and keep in touch with your staff—but they can also open up security gaps that need to be closed, Balasubramaniyan warns. AI can now successfully emulate the person on the other end of those channels.
He dubs this the “real human problem”—that is, the uniquely modern-day concern that the “person” on the other end of a corporate interaction may not be any person at all, but a generative AI imitator.
“It’s an existential question to the very fabric of trust that all of these organizations have when they’re interacting with an employee, a customer, a vendor,” he explains. “Any place where you thought a human was on the other end of an interaction, that can be replaced by AI.”
Pindrop’s analysis of customer data, for instance, found that the average business faces up to $2 million in potential deepfake fraud exposure, with nearly $350,000 of it coming through customer service phone channels. As evidence of how video calls offer a new attack surface for fraudsters, Pindrop also pointed to a 2024 incident in which a finance employee was scammed out of $25 million following a video call with someone they thought was their CFO but was in fact an AI deepfake of the exec.
“Our data shows deepfake activity across enterprises has surged 1,300 percent [year-over-year], with fraud no longer confined to phishing emails,” Pindrop said in a statement to Inc. “Fraud in the contact center has increased 26 percent year-over-year from 2021-24.”
Rely on verification, not human instinct
AI-enabled voice and video imitations are now sophisticated enough that you can’t rely on human judgment alone to pick them out, Pindrop warns, citing research indicating that humans now do barely better than a coin flip at identifying AI-generated audio, video, and imagery.
“We’d always tell people, ‘Hey, look for pauses when they’re saying things; they’re going to type something and then the system is going to generate the answer, and that’s going to take time,’” Balasubramaniyan recalls. “We’d say things like, ‘See if their eyes are blinking. Have them move their hand in front of their face. Ask them to move their head.’ All of these AI systems have now taken care of those things, and they’ve all progressed in the last year.”
Password resets, for instance, offer imitators a dangerous means of securing access to corporate systems with AI-enabled deception.
“There are these groups called Scattered Spider and ShinyHunters,” Balasubramaniyan explains. “Their mode of operation is, get someone on the phone, convince them they’re Brian, get the password for Brian’s reset, and if Brian is an important IT administrator, [they] now have keys to the kingdom.”
Design your defenses accordingly, the authentication company adds, suggesting that companies adhere to an ethos of verification rather than trust. Some strategies for doing so include being wary of voice or video calls that demand you act quickly; hanging up a call and then calling right back using an official number; and leaning on protective tools such as caller ID, spam filters, and multifactor authentication.
Want to read more on how to design your AI-deepfake defenses? Read on at Inc.com