Next Generation Fraud: How Shallowfakes Are Rocking the Insurance Sector

Insurance executives mustn’t assume that being “shallow” means less impact than deepfakes. Martin Rehak, CEO and Founder of Resistant AI, looks at the evolution of shallowfakes, how AI might help detect them, and more.

The issue of digital document fraud has evolved during the pandemic and now poses a serious challenge to insurance company leaders and tech teams. 

Overall insurance fraud rose by 73% in 2021, and insurers are scrambling to develop strategies to stay a step ahead of fraudsters.

A much-talked-about threat in recent years has been the rise of “deepfakes”: impressively realistic but synthetic still images, video, or audio recordings generated using artificial intelligence (AI). But an evolution of this – “shallowfakes” – is rocking the industry yet further, coming at organisations much faster and more often than deepfakes, which calls for cohesive industry action.

The Evolution of Shallowfakes

Originating from social media experimentation with open-source face-swapping technology, both deepfake and shallowfake fraud play a significant role in the overall fraud landscape, estimated at over $80 billion (£65 billion) a year in the US alone.

Where the two differ is that deepfakes are created using AI and are costlier and far more complex to generate, whereas shallowfakes can be produced with simple manipulation using basic photo or video editing software.

And insurance executives mustn’t be fooled into thinking that being “shallow” means they have less impact than their deepfake peers. Precisely because they can be created without sophisticated AI/machine learning methods, shallowfakes are rapidly becoming a persistent and direct fraud risk for insurers.

Their relative simplicity is often not a barrier to their successful use, notably where they are used against organisations with insufficient fraud prevention capabilities.

Next Generation Fraud

Insurance fraud manifests itself in different ways, such as deliberately inaccurate disclosures to achieve better cover terms, faking claims for items that aren’t even owned, or inflating a claim’s value. For shallowfake detection to improve, insurance professionals must understand the key signs of manufactured evidence.

Typical evidence includes false proof of identity or address, with photo ID documents such as driving licences a key offending item. Fake supporting evidence for a claim or transaction is also rife, such as invoices for services and expert reports.

The same fake document can be reused perhaps hundreds of times with just the name, account, and address altered to evade detection. Businesses particularly at risk are those with highly automated customer engagement processes, due to the compressed timescales they have for making decisions.
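This reuse pattern is itself a detection signal: two documents that differ only in a name or account field are near-identical as images. A minimal sketch of the idea (an illustration, not Resistant AI’s method) uses a difference hash, where a small localised edit flips only a few bits of the hash. Real systems would decode actual scans; here small grayscale grids stand in for documents.

```python
# Hypothetical sketch: spotting re-used fake document templates with a
# difference hash (dHash). Toy 2D grids of pixel intensities stand in
# for scanned documents (real systems would decode images first).

def dhash(pixels):
    """Row-wise difference hash of a 2D grayscale grid.

    Each bit records whether a pixel is brighter than its right-hand
    neighbour, so a small localised edit (a swapped name field) flips
    only a few bits while the rest of the hash stays intact.
    """
    bits = []
    for row in pixels:
        for left, right in zip(row, row[1:]):
            bits.append(1 if left > right else 0)
    return bits

def hamming(a, b):
    """Count differing bits between two equal-length hashes."""
    return sum(x != y for x, y in zip(a, b))

# Toy 4x5 "scans": the second copy alters only one field (one pixel).
original = [
    [10, 200, 30, 40, 55],
    [90, 80, 70, 60, 50],
    [15, 25, 35, 45, 55],
    [200, 150, 100, 50, 25],
]
reused = [row[:] for row in original]
reused[0][1] = 5   # the fraudster edited just the name region

distance = hamming(dhash(original), dhash(reused))
# A small distance relative to the hash length suggests the same template.
print(f"bits changed: {distance} of {4 * 4}")
```

A claims system could hash every incoming document and flag submissions whose hash sits within a small distance of a known-fraud template, even when the visible fields differ.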

The COVID Influence on Self-Service Automation

The global pandemic has accelerated the shift to self-service due to the benefits of remote claims reporting. This touchless automation, such as self-service transactions, looks set to stay, which makes it easier for fakes to go unchallenged. In tandem, the tools for altering digital media have grown more sophisticated.

The insurance industry has become dependent on customer-supplied photos and documentation for settling claims; as a result, the risk of fraud from altered or manipulated evidence has increased significantly. At the same time, many insurers still rely on 100% human verification of documentation, which is both costly and insufficient for detecting all types of manipulation.

The Potential of AI to Detect Shallowfakes

As human expertise works alongside proven new tech to find solutions for industries’ biggest challenges, AI technology has emerged as a key contender for detecting shallowfakes. Its analytical attention to detail and its power to validate at scale can support the increasing volumes of data that insurance professionals process daily.

AI-based “document forensics” performs enhanced scrutiny of documents and images, detecting inconsistencies the human eye can’t in order to verify the authenticity of digital materials.
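One classic forensic heuristic, offered here only as an illustration of what such scrutiny can look for, is copy-move detection: cloning a region of an image to cover up text leaves two identical pixel blocks, which can be found by hashing every block and looking for repeats at different positions. The sketch below simplifies an image to a small integer grid.

```python
# Hypothetical sketch of one "document forensics" signal: copy-move
# detection. Real tools combine many such signals over real images;
# here a toy 2D grid of pixel intensities stands in for a scan.

def find_cloned_blocks(pixels, size=2):
    """Return pairs of distinct positions whose size x size blocks match."""
    seen = {}       # block contents -> first position where it appeared
    matches = []
    rows, cols = len(pixels), len(pixels[0])
    for r in range(rows - size + 1):
        for c in range(cols - size + 1):
            block = tuple(
                tuple(pixels[r + i][c + j] for j in range(size))
                for i in range(size)
            )
            if block in seen:
                matches.append((seen[block], (r, c)))
            else:
                seen[block] = (r, c)
    return matches

# A toy "scan" where the 2x2 block at (0, 0) was cloned onto (2, 3)
# to paper over an amount field.
doc = [
    [9, 7, 1, 2, 3],
    [8, 6, 4, 5, 0],
    [2, 0, 3, 9, 7],
    [5, 1, 0, 8, 6],
]
print(find_cloned_blocks(doc))
```

In practice such exact matching is too brittle on its own (recompression perturbs pixels), which is one reason statistical and learned detectors are layered on top; the sketch only shows the underlying idea.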

However, AI is not perfect. It could be argued that AI-powered detection built into a claims process is not ideal, because false positive detections can hold up claims by adding further administration time for claim handlers.

But in the absence of a faster route to verifying the authenticity of disclosed photos and documents, its support will undoubtedly stop a percentage of fraudulent claims slipping through the net, saving insurance companies lost revenue.

Some suggest that prevention technologies offer a more reliable and future-proof solution to the shallowfake problem. By sealing the media at the point of capture, so that any subsequent change becomes tamper-evident, it’s easier to be sure the content being checked is original and unaltered.
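The mechanics of point-of-capture tamper evidence can be sketched with a keyed signature: the capture app signs the photo bytes the moment they are taken, and the claims system verifies the tag on submission. This is a minimal illustration using an HMAC with an assumed shared key; production schemes (key provisioning, metadata binding, standards such as C2PA) are considerably more involved.

```python
# Hypothetical sketch of point-of-capture tamper evidence. The capture
# app signs the photo bytes with a key provisioned by the insurer; any
# post-capture edit changes the bytes and invalidates the tag.
import hashlib
import hmac

SECRET_KEY = b"insurer-provisioned-device-key"   # assumption for the sketch

def sign_capture(photo_bytes):
    """Produce a tag at capture time, before the file leaves the device."""
    return hmac.new(SECRET_KEY, photo_bytes, hashlib.sha256).hexdigest()

def verify_capture(photo_bytes, tag):
    """Check at claim time that the submitted photo is the original."""
    expected = hmac.new(SECRET_KEY, photo_bytes, hashlib.sha256).hexdigest()
    return hmac.compare_digest(expected, tag)

original = b"...jpeg bytes of the damage photo..."
tag = sign_capture(original)

print(verify_capture(original, tag))              # unaltered submission
print(verify_capture(original + b"edit", tag))    # edited after capture
```

The design choice is that verification needs no judgement about *what* changed: any bit-level alteration after capture fails the check outright, which is why prevention of this kind is attractive where the capture software can be mandated.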

However, prevention only applies at the point of creation or capture, so it cannot replace detection as the best defence while capture software is not yet an industry standard in wide use. Therefore some detection, perhaps with the use of AI, is the way forward.

Inertia Is the Biggest Risk

Insurance companies must first acknowledge the problem of shallowfakes in order to invest in the most effective automated fraud prevention technology, monitor for suspicious documentation, and train workers appropriately.

Shallowfakes are a major threat to accurate information, and the business, tech, and security teams of insurance companies must collaborate to stop fraudsters’ costly claims succeeding. Their prevalence is set to grow in the forthcoming metaverse era.

Perhaps the greatest risk and cost to insurance companies is their own inertia in mitigating the problem. It’s all too easy to underestimate how easily and convincingly shallowfakes can pass as real. The right combination of tech and human intelligence will thwart rising shallowfake threats and drive fraudulent claimants away empty-handed.

By Martin Rehak, CEO and Founder of Resistant AI.
