Deepfakes and Synthetic Media: The New AI-Enabled Era Of Exponential Fraud

Artificial intelligence is ushering in a new era of massive online fraud.

Deepfakes, AI-generated identities, and autonomous agents are overwhelming legacy cybersecurity systems.

And the scale of the problem is likely on the verge of dramatically exploding.

Deepfake Fraud Is Growing Exponentially

Improvements in AI models, AI agents and large language models (LLMs) are the technologies underpinning the rise of deepfakes.

The world's largest AI research labs and publicly traded technology companies are spending tens of billions of dollars on scaling the capabilities of their models and are producing breakthroughs in generative AI at a rapid pace.

Those capabilities are now translating directly into meaningful deepfake-enabled fraud.

Between 2019 and 2023, in the United States alone, fraud attributable to deepfakes totaled roughly US$130 million.

In 2024, that figure jumped to nearly US$400 million.

In 2025, it more than doubled, to roughly US$1 billion.

The scaling laws of AI are also applying to fraud.
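Taken at face value, the figures above imply the trend. A minimal sketch, using only the approximate numbers cited in this article:

```python
# Approximate U.S. deepfake-attributed fraud losses cited above (USD).
losses = {
    "2019-2023 (cumulative)": 130e6,
    "2024": 400e6,
    "2025": 1.0e9,
}

# Year-over-year growth multiple from 2024 to 2025.
growth_2025 = losses["2025"] / losses["2024"]
print(f"2024 -> 2025 growth: {growth_2025:.1f}x")  # prints 2.5x
```

A 2.5x annual jump is consistent with the "more than doubled" claim; whether that pace holds is, of course, an open question.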

Deepfake Fraud Is An Unprecedented Paradigm In Technology

With every major technological breakthrough, fraud has closely followed.

The invention of the credit card led to credit card fraud.

The invention of the Internet led to identity fraud.

And now, the invention of generative AI and synthetic media is leading to its very own wave of fraud.

However, there is one crucial difference: deepfake fraud can scale faster and more aggressively than any other previous era of fraud.

It is easy to produce, cheap to generate, and scales across borders and languages. If it continues to accelerate at this pace, deepfake-attributable fraud is on track to become the fastest-growing new category of fraud in history.

Synthetic Media Is Getting Better And Scaling Faster Than Ever Before

Synthetic media and deepfakes have never been cheaper to produce. This comes at a time when people are still learning to calibrate their judgment to AI-generated content.

In multiple studies, roughly 70% of people say they cannot reliably distinguish a deepfake from a real human.

This is compounded by non-linear improvements in AI as a whole: deepfakes can now mimic personal traits, from personality to vocal inflection, better than ever before.

If these technologies improve at anything like the rate implied by the investment pouring in from the largest AI research labs and publicly traded technology companies, it is reasonable to assume that today's deepfakes are the worst they will ever be, and that they will only continue to improve dramatically.

And they will continue to scale exponentially, given how simple generative AI models make it to produce deepfake content and how little it costs to do so.

We are now entering an era of high-quality, low-cost synthetic media that will only improve as we move into the future.

Identity Has Become A Profitable Vulnerability

Deepfakes and synthetic media have a simple compounding advantage when it comes to fraud: AI has lowered the cost of deception.

A machine can now easily, cheaply and convincingly fake your face, your voice, and your credentials, at enormous scale.

Previous authentication technology was built for a world in which one human logged in, another human approved decisions, and trust was explicitly built into systems overseen entirely by human beings.

That world is now at existential risk because of deepfakes and synthetic media. Automation and technology have scaled impersonation to levels never seen before.

This means identity can no longer be a one-time login, nor can it rest on the assumption that a human being alone can verify the identity or legitimacy of another human being.

Fundamentally, that requires a new architecture and an entirely different infrastructure.

And until that infrastructure is built, deployed and fully integrated, billions of dollars in fraud are likely to occur.

What You Know Versus Who You Are

The default standard of all previous privacy and security was built around the concept of “what you know”. Passwords and verification questions relied on personal details and specific knowledge: birthdays, anniversaries, or the city you were born in. These questions are not inherently secure; they can be hacked, discovered or otherwise exploited.

That fragile layer of fraud protection led to the rise of biometric security, which uses your human existence as both proof and trust. Biometric security rests on one simple core concept: biometric identity is immutable and extremely accurate. Someone might learn your personal security questions, but they could not replicate your likeness and human existence.

That has now changed. Deepfakes and synthetic media represent a paradigm shift for biometric security. Facial recognition, voice verification and liveness detection can all now be fooled by AI-generated media, at extraordinarily high velocity and low cost. In effect, deepfakes and synthetic media are breaking the core layer of every security system ever built around human identity.
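Conceptually, the shift described above is from a “what you know” check to a “who you are” check, layered with liveness detection. The sketch below is purely illustrative; the function names, scores and threshold are assumptions, not any real system's API:

```python
# Conceptual sketch of layered authentication factors (not a real
# security implementation; all names and thresholds are illustrative).

def verify_knowledge(answer: str, expected: str) -> bool:
    # "What you know": a shared secret. Anyone who learns the secret
    # passes, which is why this layer is fragile.
    return answer == expected

def verify_biometric(match_score: float, threshold: float = 0.9) -> bool:
    # "Who you are": a similarity score from a face or voice matcher.
    # Deepfakes attack exactly this step by producing high scores
    # for synthetic faces and voices.
    return match_score >= threshold

def authenticate(answer: str, expected: str,
                 match_score: float, liveness_passed: bool) -> bool:
    # Layered check: knowledge AND biometric AND liveness.
    # Deepfake-resistant systems focus on hardening the last two.
    return (verify_knowledge(answer, expected)
            and verify_biometric(match_score)
            and liveness_passed)

# A convincing deepfake can yield a high biometric match score; the
# liveness check is then the last line standing against impersonation.
print(authenticate("Paris", "Paris", 0.97, liveness_passed=False))  # prints False
```

The point of the sketch is that once synthetic media can defeat both the biometric matcher and the liveness check, every layer of this stack is compromised.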

This is beginning to drive an unprecedented amount of spending and investment into new biometric cybersecurity technologies that are actually resistant to deepfakes and synthetic media. The complexities, however, are enormous: next-generation biometric security must resist every type of synthetic media, from cloned voices to fabricated human likenesses.

This is the underlying tailwind poised to reshape where value is created in cybersecurity.

The Era Of AI Agents Is Coming

There is also another compounding variable for deepfake fraud: AI agents.

AI agents are autonomous software systems that can make decisions and take actions without human intervention. There are clearly enormous applications for AI agents to cut business costs, scale productive outcomes and generally be a net positive. However, it is unreasonable to assume that AI agents will only ever be used for good, are incorruptible, and will never facilitate harmful behaviour.

The massive adoption cycle for AI agents could strip away even more layers of human verification and collapse the boundaries between identity and authorization. Fundamentally, the more AI agents are deployed, and the more capabilities and power they have, the less human beings can actually monitor or mitigate negative outcomes.

AI agents integrated deeply within systems and companies could have the unintended consequence of becoming a massive scaling system for deepfakes and synthetic media to be used to commit fraud.

The New Asymmetric Opportunity In Cybersecurity

Digital identity, specifically biometric authentication built for an AI-driven world, may be entering a thematic shift in technological importance similar to the Internet, mobile and cloud.

Deepfakes and synthetic media represent a threat to the largest companies in the entire world. And they have unprecedented scaling capabilities.

There is also an entirely new set of structural risks for companies that are continuously evolving. Employees log in remotely. Customers transact digitally. AI agents operate autonomously. And many systems are still ultimately underpinned by human judgement and human decision making.

All of these challenges are complex and require entirely new software and technology to solve. More challenging still, AI continues to improve at a non-linear rate, creating a paradigm in which every new breakthrough in AI has the unintended consequence of also improving the quality of synthetic media and deepfakes.

Similar to how the Internet caused a boom in cybersecurity and virus prevention software, AI may now be on the verge of creating an explosion of value creation for companies positioned at the forefront of deepfake resistant technology.

