The rise of the deepfake threat is challenging businesses to question everything – especially old approaches to risk management. 

If the business community harbored any remaining doubts about the threat posed by AI-powered fraud, it’s safe to say recent events (and their $25.6 million price tag) have brought the danger all too close to home.   

The notorious case of a Hong Kong multinational falling prey to a phony video conference is by no means the first deepfake scandal to dominate headlines. Earlier attacks exacted even heftier financial tolls: 2021 saw a $76.2 million breach of the Chinese tax system, and a Japanese company lose $35 million to a voice-simulation trick. But this latest feat of social engineering, with its new levels of brazen daring and technological sophistication, has laid the reality bare for companies and institutions of every scale: deepfake scams are coming, both for your reputation and your bottom line. 

In a recent interview for RTHK’s business and finance flagship show Money Talk, we discussed why planning an effective counterattack is far from simple. We outlined the limitations of one-size-fits-all education frameworks in improving employee awareness – and a smarter, more targeted way of building real vigilance where it counts.  

Here are the takeaways: 

InfoSec solutions won’t stem the deepfake tide 

Even where firms are prepared to invest significantly in information security, the short-term outlook is gloomy. Deepfakes – scams that use “deep learning”, or generative machine learning software, to alter and synthesize digital content – have exploded in recent years. Asian nations lead the global charge: the sharpest rise in attacks between 2022 and 2023 was recorded in the Philippines (4,500%), followed by Vietnam (3,050%). 

As the cons proliferate, they also become more convincing: from a manipulated voice or simulation of a single individual in a one-to-one meeting, to an entire rogues’ gallery of known executives on a video conference, rendered credibly enough to persuade an employee to initiate fund transfers. Social engineering has advanced to the point where individuals (even close colleagues) can be simulated with what FTC Chair Lina Khan has called “eerie precision”, and the security implications are formidable – not least the undermining of authentication software many businesses rely on to safeguard internal systems. 

Existing education frameworks are a blunt instrument

Curbing the rise of deepfakes will likely mean a long battle, combining regulatory control of AI capabilities with the development of better, more adaptive InfoSec defenses. For the time being, then, businesses must operate in a digital environment that is fundamentally unsafe. The obvious response is to raise awareness: many firms are beefing up the cybercrime component of their training programs. A sort of inverted “Stranger Danger” campaign serves to remind employees that digital content that looks familiar and trustworthy may be anything but. But is it enough to shield businesses from costly disruption? 

With scam tactics and technology constantly evolving, reactive training will always struggle to stay abreast of their newest and most treacherous iterations. And a one-size-fits-all approach to employee education also overlooks the uneven distribution of risk across an organization. By virtue of their specific job functions, key roles and teams will be high-value targets for bad actors. 

A digital problem has a human solution 

The best defense is both preventative and tailored to the individual level. The first step is diagnostic: our data-driven analysis helps businesses to map their exposure and identify risk hotspots – the individuals most vulnerable to scam attacks. In such a fast-changing threat environment, where it’s all but impossible to Know Your Enemy, it becomes even more fundamental to Know Your Weaknesses. A business that understands where disruption is most likely – and most potentially damaging – can design support systems that deploy its best resources where they are really needed.

On a programmatic level, support means scenario-planning and live-testing of controls to ensure the best possible response to an incident. But the greatest gains in resilience come from creating confident gatekeepers. Understanding individuals’ skills, motivation levels, and capabilities is crucial to designing digital measures and controls that work. And beyond simply monitoring sentiment across an organization, effective coaching and mentoring schemes that work at scale can help proactively manage potential vulnerabilities. 

When skepticism and vigilance aren’t merely tick-box requirements, but a mindset and a cultural imperative, then businesses will finally have shape-shifting digital fraudsters on the run.  

Listen to the interview here: