The insurance industry runs on visual evidence. Damage claims, identity verification, medical documentation, property inspections -- the entire claims workflow depends on images and documents submitted by policyholders. Until recently, the operating assumption was simple: photos don't lie.

That assumption is now broken.

The threat is real and accelerating

Generative AI has made it trivially easy to fabricate convincing evidence. A realistic photo of vehicle damage can be generated from a text prompt in under thirty seconds, with no damaged vehicle required. A genuine photo of minor hail damage can be digitally transformed to show catastrophic destruction, with the manipulation invisible to the naked eye.

The numbers tell the story. Deepfake content online has been increasing at roughly 900% year on year, according to the World Economic Forum. Deepfake fraud attempts in financial services grew 1,740% in North America between 2022 and 2023. Deloitte's Center for Financial Services projects that AI-enabled fraud losses will reach $40 billion in the US by 2027, up from $12.3 billion in 2023.

Insurance is being hit particularly hard. The Coalition Against Insurance Fraud estimates that insurance fraud already costs $308.6 billion annually in the US alone. AI-enabled fraud is the fastest-growing category, expanding at over 100% annually while traditional fraud grows at just 5-8%.

The detection gap

Despite all of this, the industry is astonishingly unprepared. Only around 12% of insurers have deployed dedicated deepfake detection tools, though 42% plan to invest in 2025-2026. That gap between the threat and the response is exactly where we saw the opportunity.

The existing detection tools were not built for insurance. Most were designed for social media misinformation, KYC verification, and government intelligence. They work well on high-resolution, uncompressed images in controlled conditions.

Insurance claims are a fundamentally different environment.

Claims photos are taken on smartphones by stressed policyholders, often in poor lighting, at awkward angles. They get compressed through messaging apps, screenshotted, forwarded via email, and uploaded through web portals. By the time an image reaches the claims system, it has been through multiple rounds of quality degradation. A model that achieves 97% accuracy on clean benchmark images may collapse to 50-65% on a compressed JPEG of a dented bumper bar photographed in a dimly lit car park.

Humans are no help either. Studies suggest that only around 1 in 1,000 people can reliably detect AI-generated fakes -- the rest perform at or below chance.

What deetech does

deetech is AI forensic media verification built specifically for insurance. We analyse photos, videos, audio, and documents across the claims pipeline -- damage evidence, identity verification, medical records, property inspections -- and flag synthetic or manipulated content before a claim is paid out.

Our positioning is simple: Detect Deepfakes. Protect Claims.

Built for real-world conditions

Our models are purpose-built for compressed, low-quality media -- the way claims photos actually look in production, not the way benchmark images look in a lab. We train on compressed images, low-resolution smartphone photos, and the specific content types that insurance claims involve: vehicle damage, property destruction, forged medical records, manipulated diagnostic imaging, altered repair estimates, and fabricated police reports.
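To make that concrete, here is a minimal Python sketch, using Pillow, of the kind of degradation chain a training or evaluation pipeline can simulate -- downscaling, poor lighting, and repeated re-compression. It is an illustration of the idea, not our production pipeline.

```python
# Illustrative sketch (not deetech's actual pipeline): simulate the quality
# loss a claims photo suffers before it reaches the claims system.
import io
import random
from PIL import Image, ImageEnhance

def degrade_like_a_claims_photo(img: Image.Image) -> Image.Image:
    """Apply a random chain of real-world degradations to a clean image."""
    img = img.convert("RGB")

    # Downscale, as if captured on an older phone or resized by a web portal.
    scale = random.uniform(0.3, 0.8)
    img = img.resize((max(1, int(img.width * scale)), max(1, int(img.height * scale))))

    # Dim and flatten, as if shot in a poorly lit car park.
    img = ImageEnhance.Brightness(img).enhance(random.uniform(0.5, 0.9))
    img = ImageEnhance.Contrast(img).enhance(random.uniform(0.7, 1.0))

    # One or more rounds of JPEG re-compression (messaging apps, email, portals).
    for _ in range(random.randint(1, 3)):
        buf = io.BytesIO()
        img.save(buf, format="JPEG", quality=random.randint(40, 75))
        buf.seek(0)
        img = Image.open(buf).convert("RGB")
    return img
```

Training and testing against random chains like this, rather than pristine benchmark images, is what keeps accuracy from collapsing on a thrice-forwarded JPEG.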

Explainable, not just accurate

A confidence score from a black-box API is not enough for insurance. Claims adjusters need to understand what was found and where. SIU investigators need evidence that supports an investigation. Legal proceedings require documentation that meets evidentiary standards.

We produce visual heatmaps showing exactly where manipulation is detected, technical descriptions of specific findings, and methodology documentation sufficient for expert testimony. This is court-ready evidence, not a binary label.
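As an illustration of the heatmap idea, the hypothetical sketch below blends a grid of patch-level manipulation scores over the original photo. The scoring model that produces those scores is assumed and not shown; this is a rendering sketch, not our forensic pipeline.

```python
# Illustrative sketch: turn patch-level manipulation scores (0-1) into a
# visual overlay an adjuster or SIU investigator can read at a glance.
import numpy as np
from PIL import Image

def heatmap_overlay(photo: Image.Image, patch_scores: np.ndarray, alpha: float = 0.45) -> Image.Image:
    """Blend a coarse grid of 0-1 manipulation scores over the original photo."""
    # Upsample the coarse score grid to the photo's resolution.
    heat = Image.fromarray((patch_scores * 255).astype(np.uint8)).resize(photo.size)
    heat = np.asarray(heat, dtype=np.float32) / 255.0

    # Paint suspicious regions red; leave everything else untouched.
    rgb = np.asarray(photo.convert("RGB"), dtype=np.float32)
    red = np.zeros_like(rgb)
    red[..., 0] = 255.0
    weight = (alpha * heat)[..., None]
    blended = rgb * (1 - weight) + red * weight
    return Image.fromarray(blended.astype(np.uint8))
```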

3-layer defence

Most deepfake detection tools only perform content analysis -- looking at pixels to determine if an image was generated or manipulated. This misses entire classes of attack. Presentation attacks involve holding a screen displaying a fake image up to a camera. Injection attacks digitally insert fabricated images into the camera pipeline, bypassing the lens entirely. These injection attacks are up 200% according to industry data, and single-layer detectors are completely blind to them.

deetech combines all three in a unified 3-layer defence platform: content analysis, presentation attack detection (PAD), and injection attack prevention.
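In code terms, the idea looks roughly like the hypothetical sketch below: three independent signals feeding a single verdict, so that an image with clean pixels delivered through a compromised capture pipeline is still caught. The detector callables, thresholds, and verdict labels are placeholders, not our implementation.

```python
# Illustrative sketch of combining the three layers into one verdict.
from dataclasses import dataclass
from typing import Callable

@dataclass
class VerificationResult:
    content_score: float      # probability the pixels are generated or manipulated
    pad_score: float          # probability of a presentation attack (screen replay)
    injection_detected: bool  # capture-pipeline integrity check failed
    verdict: str

def verify_claim_media(
    image_bytes: bytes,
    capture_metadata: dict,
    content_model: Callable[[bytes], float],
    pad_model: Callable[[bytes], float],
    injection_check: Callable[[dict], bool],
) -> VerificationResult:
    content = content_model(image_bytes)          # layer 1: content analysis
    pad = pad_model(image_bytes)                  # layer 2: presentation attack detection
    injected = injection_check(capture_metadata)  # layer 3: injection attack prevention

    # Any single layer can escalate on its own; thresholds are illustrative only.
    if injected or content > 0.9 or pad > 0.9:
        verdict = "block_and_route_to_SIU"
    elif max(content, pad) > 0.5:
        verdict = "flag_for_adjuster_review"
    else:
        verdict = "no_manipulation_detected"
    return VerificationResult(content, pad, injected, verdict)
```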

Insurance-native workflows

A standalone API that requires manual file uploads exists outside the claims workflow. Adjusters won't use it consistently, evidence chain-of-custody is compromised, and detection happens after the fact rather than at intake.

We integrate directly into claims management platforms. Analysis is triggered automatically when media is submitted with a claim. Results are delivered into the claim record before the adjuster reviews it. During catastrophe events, when claims volumes surge 10-20x overnight and fraud risk peaks, the system scales rather than buckles.
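As a rough illustration of what intake-time integration means in practice, here is a hypothetical webhook handler fired when media is attached to a claim. The endpoint name, payload fields, and helper functions are placeholders, not a real claims-platform API.

```python
# Illustrative sketch only: analysis triggered at intake, result written back
# onto the claim record rather than into a separate tool.
from flask import Flask, request, jsonify

app = Flask(__name__)

def analyse_media(media_url: str) -> dict:
    # Placeholder for the 3-layer analysis described above.
    return {"verdict": "no_manipulation_detected", "content_score": 0.03}

def attach_finding_to_claim(claim_id: str, finding: dict) -> None:
    # Placeholder for writing the result back via the claims platform's own API.
    print(f"claim {claim_id}: {finding}")

@app.post("/webhooks/claim-media-submitted")
def on_claim_media_submitted():
    event = request.get_json()
    finding = analyse_media(event["media_url"])
    # The forensic result lands on the claim before the adjuster opens it.
    attach_finding_to_claim(event["claim_id"], finding)
    return jsonify({"status": "processed"}), 200
```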

Why nobody else is doing this

The competitive landscape confirms the opportunity from two directions.

On one side, deepfake detection companies like Reality Defender (which raised a $33 million Series A) validate that deepfake detection is a venture-scale market. But they are horizontal -- targeting finance, government, and media with no dedicated insurance workflows, no forensic evidence designed for claims investigations, and no integration with insurance platforms.

On the other side, insurance fraud detection companies like FRISS and Shift Technology (which has raised over $320 million) have built substantial businesses selling to insurers, proving that the buyer exists and pays. But neither has deepfake detection capability -- they use rules-based engines and pattern matching on structured claims data, which is blind to fabricated visual evidence.

Nobody sits at the intersection of deepfake detection and insurance specialisation. That intersection is where deetech lives.

Insurance is about trust. We make sure that trust isn't broken by AI.

This is a genuinely hard problem

Detecting deepfakes is not a solved problem -- not even close. In March 2025, CSIRO and Sungkyunkwan University assessed 16 leading deepfake detectors and found that none could reliably identify real-world deepfakes. Detectors that performed well on benchmark datasets collapsed to below 50% accuracy -- worse than a coin flip -- when confronted with deepfakes produced by generators they had not been trained on.

The study identified 18 factors affecting detection accuracy, from how data is processed to how models are trained and evaluated. The core issue: most detection tools are brittle. They learn to spot the artefacts of specific generators rather than learning what makes an image synthetic. When the generator changes -- and generators are evolving constantly -- the detector breaks.
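The antidote is evaluation discipline: hold each generator out entirely, train on the rest, and measure on the one the model has never seen. A minimal sketch of that protocol, with the training and scoring functions assumed rather than shown:

```python
# Illustrative leave-one-generator-out evaluation loop.
from typing import Callable, Dict, List, Tuple

Dataset = List[Tuple[object, int]]  # (image, label) pairs; 1 = fake, 0 = real

def leave_one_generator_out(
    fakes_by_generator: Dict[str, Dataset],
    real_train: Dataset,
    real_test: Dataset,
    train_detector: Callable[[Dataset], Callable],
    accuracy: Callable[[Callable, Dataset], float],
) -> Dict[str, float]:
    """Accuracy per generator when that generator is excluded from training."""
    scores = {}
    for held_out, held_out_fakes in fakes_by_generator.items():
        # Train only on fakes from the other generators, never the held-out one.
        train_fakes = [s for g, d in fakes_by_generator.items() if g != held_out for s in d]
        detector = train_detector(train_fakes + real_train)
        # Evaluate on fakes from the generator the detector has never seen.
        scores[held_out] = accuracy(detector, held_out_fakes + real_test)
    return scores
```

A detector that only holds up when its own generators are in the test set is exactly the brittleness the study describes.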

This is the problem we are obsessed with solving.

Why watermarks won't save you

The big tech response to deepfakes has been watermarking. Google has SynthID. OpenAI embeds metadata via C2PA. Adobe, Microsoft, and others have joined the Content Authenticity Initiative. The idea is straightforward: if every AI-generated image carries a cryptographic watermark, you can check for it downstream.

The problem is equally straightforward: bad actors won't use watermarked tools.

The open-source generative AI ecosystem is vast and growing. Stable Diffusion, Flux, and dozens of fine-tuned variants are freely available with no watermarking, no content policies, and no audit trail. Anyone can run them locally on consumer hardware. A fraudster fabricating insurance evidence is not going to use DALL-E with its C2PA metadata intact -- they will use an open-source model running on a laptop with all provenance stripped.

Even when watermarks are present, they can be degraded. Screenshotting, recompressing, cropping, or running an image through a second model can strip or damage watermark signals. Research has shown that current watermarking schemes are not robust against determined adversaries.
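A minimal illustration of how little it takes to remove metadata-based provenance such as a C2PA manifest: a single re-save through an image library produces fresh pixels with none of the original file's embedded segments. (Pixel-level watermarks like SynthID require different attacks, but recompression is the same channel that degrades them.)

```python
# Illustrative sketch: one re-save drops the source file's metadata segments,
# which is where C2PA-style manifests are stored.
import io
from PIL import Image

def strip_via_resave(original_jpeg: bytes) -> bytes:
    img = Image.open(io.BytesIO(original_jpeg)).convert("RGB")
    out = io.BytesIO()
    # Saving the decoded pixels writes a new JPEG without the original's
    # embedded provenance or metadata.
    img.save(out, format="JPEG", quality=90)
    return out.getvalue()
```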

Watermarking is a useful signal when it exists, and deetech incorporates provenance checks where available. But building a fraud defence strategy around watermarks alone is like locking the front door while the back wall is missing. The detection problem -- identifying synthetic content with no watermark, no metadata, and no cooperation from the generator -- remains the hard problem. That is where we focus.

Where we are now

We have a state-of-the-art model achieving 98%+ production accuracy on real claims media -- including 95%+ on content from generators the model has never seen. We have secured a design partner insurer and are commencing our pilot. The website is live at deetech.ai.

The regulatory environment is creating urgency. APRA's CPS 230 in Australia now covers AI-enabled fraud risk management. The EU AI Act creates compliance obligations for AI systems in insurance. The cost of inaction is becoming untenable -- insurers that experience publicised AI fraud incidents are seeing significant customer churn in the following twelve months.

The window to establish the category-defining platform in AI forensics for insurance is open now. We intend to be the ones who close it.