The insurance industry runs on visual evidence. Damage claims, identity verification, medical documentation, property inspections. The entire claims workflow depends on images and documents submitted by policyholders. Until recently, the operating assumption was simple: photos don't lie.
That assumption is now broken.
The threat is real and accelerating
Generative AI has made it trivially easy to fabricate convincing evidence. A realistic photo of vehicle damage can be generated from a text prompt in under thirty seconds, with no damaged vehicle required. A genuine photo of minor hail damage can be digitally transformed to show catastrophic destruction, with the manipulation invisible to the naked eye.
The numbers tell the story. Deepfake content online has been increasing at roughly 900% year-on-year according to the World Economic Forum. Deepfake fraud attempts in financial services grew 1,740% in North America between 2022 and 2023. Deloitte's Center for Financial Services projects that AI-enabled fraud losses will reach $40 billion in the US by 2027, up from $12.3 billion in 2023.
Insurance is being hit particularly hard. The Coalition Against Insurance Fraud estimates that insurance fraud already costs $308.6 billion annually in the US alone. AI-generated evidence is the fastest-growing vector, and existing tools are not keeping up. Regulators are taking notice: APRA's CPS 230 in Australia and the EU AI Act now require insurers to manage operational risks including AI-enabled fraud. The cost of inaction is becoming untenable.
The detection gap
Despite the scale of the threat, the industry is astonishingly unprepared. Existing detection tools were not built for insurance: most were designed for social media misinformation, KYC verification, and government intelligence. They work well on high-resolution, uncompressed images in controlled conditions.
Insurance claims are a fundamentally different environment.
Claims photos are taken on smartphones by stressed policyholders, often in poor lighting, at awkward angles. They get compressed through messaging apps, screenshotted, forwarded via email, and uploaded through web portals. By the time an image reaches the claims system, it has been through multiple rounds of quality degradation. A model that achieves 97% accuracy on clean benchmark images may collapse to 50-65% on a compressed JPEG of a dented bumper bar photographed in a dimly lit car park.
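To make that degradation concrete, here is a minimal Python sketch (using Pillow; the file name and the detector call are hypothetical) that approximates one route a photo might take to the claims system:

```python
from io import BytesIO
from PIL import Image

def recompress(img, quality):
    """Round-trip an image through JPEG encoding at the given quality."""
    buf = BytesIO()
    img.save(buf, format="JPEG", quality=quality)
    buf.seek(0)
    out = Image.open(buf)
    out.load()
    return out

def simulate_claims_pipeline(path):
    """Mimic a plausible route into the claims system: a messaging-app
    downscale and recompression, a forwarded re-encode, then a final
    portal upload at aggressive quality."""
    img = Image.open(path).convert("RGB")
    img.thumbnail((1280, 1280))         # messaging app downscales...
    img = recompress(img, quality=70)   # ...and recompresses
    img = recompress(img, quality=60)   # forwarded via email
    return recompress(img, quality=50)  # uploaded through a web portal

# Scoring a detector on both versions of the same image exposes the gap:
#   detector(Image.open("claim_photo.jpg"))                # clean input
#   detector(simulate_claims_pipeline("claim_photo.jpg"))  # degraded input
```

Running any benchmark detector on both the clean and the degraded version of the same image is a quick way to see the robustness gap for yourself.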
And humans are no help either. Research suggests that only 0.1% of people can reliably distinguish AI-generated content from reality. The rest perform at or below chance.
What deetech does
deetech is AI forensic media verification built specifically for insurance. We analyse photos, videos, audio, and documents across the claims pipeline and flag synthetic or manipulated content before a claim is paid out.
Built for real-world conditions
Our models are trained on the kind of media that actually flows through claims systems: compressed smartphone photos, scanned documents, forwarded attachments. We train on the specific content types insurance claims involve: vehicle damage, property destruction, forged medical records, manipulated diagnostic imaging, altered repair estimates, and fabricated police reports. Where benchmark detectors break down on degraded inputs, ours are built for them.
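To illustrate what "built for degraded inputs" can look like in practice, here is a sketch of the kind of training-time augmentation such a model might use. This is an assumption about the general approach, not our actual pipeline:

```python
import random
from io import BytesIO
from PIL import Image

def random_claims_degradation(img):
    """Illustrative training-time augmentation: several random rounds of
    downscaling and JPEG recompression, so the model learns from media
    as battered as what real claims pipelines deliver."""
    img = img.convert("RGB")
    for _ in range(random.randint(1, 3)):
        if random.random() < 0.5:
            w, h = img.size
            f = random.uniform(0.6, 0.95)
            img = img.resize((max(1, int(w * f)), max(1, int(h * f))))
        buf = BytesIO()
        img.save(buf, format="JPEG", quality=random.randint(40, 85))
        buf.seek(0)
        img = Image.open(buf)
        img.load()
    return img
```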
Explainable, not just accurate
A confidence score from a black-box API is not enough for insurance. Claims adjusters need to understand what was found and where. SIU investigators need evidence that supports a case. And if a claim ends up in dispute, the documentation needs to hold up under scrutiny.
We produce visual heatmaps showing where manipulation is detected, technical descriptions of specific findings, and methodology documentation designed to support investigation and litigation workflows. Evidence you can act on, not a binary label.
Detection + forensics + investigation
Most deepfake detection tools stop at a single AI verdict: real or fake. That is not enough for insurance. deetech runs three layers in parallel. First, AI detection models trained on real claims media analyse content across images, video, audio, and documents. Second, 20+ forensic checks run simultaneously: error level analysis, metadata extraction, compression analysis, clone detection, noise patterns, reverse image search, and watermark verification. Third, an investigation workspace ties all findings to the claim record, with batch processing, evidence comparison, and one-click SIU escalation.
A single confidence score does not hold up in an investigation. Adjusters and SIU teams need the full evidence chain, and that is what we deliver.
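For a flavour of one of those forensic checks, here is a textbook error level analysis sketch (Pillow; the file name is hypothetical). Regions edited after the original JPEG compression tend to show a different error level from their surroundings. This is the generic technique, not our production implementation:

```python
from io import BytesIO
from PIL import Image, ImageChops

def error_level_analysis(path, quality=90, scale=15):
    """Textbook ELA: re-save the image as JPEG at a known quality and
    measure how much each pixel changes. Spliced or edited regions
    often recompress differently from the rest of the image."""
    original = Image.open(path).convert("RGB")

    buf = BytesIO()
    original.save(buf, format="JPEG", quality=quality)
    buf.seek(0)
    resaved = Image.open(buf)

    # Per-pixel absolute difference, amplified so faint errors are visible.
    ela = ImageChops.difference(original, resaved)
    return ela.point(lambda value: min(255, value * scale))

# error_level_analysis("claim_photo.jpg").show()  # inspect the hot spots
```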
Insurance-native workflows
If detection lives outside the claims workflow, it won't get used. Adjusters skip the extra step, the evidence chain of custody breaks down, and by the time anyone checks, the claim has already been paid.
We integrate directly into claims management platforms. Analysis is triggered automatically when media is submitted with a claim. Results are delivered into the claim record before the adjuster reviews it. During catastrophe events, when claims volumes surge 10-20x overnight and fraud risk peaks, the system scales rather than buckles.
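The shape of that integration can be as simple as a webhook handler. The sketch below uses Flask, and every endpoint name and payload field in it is a hypothetical illustration, not our actual API:

```python
from flask import Flask, request, jsonify
import requests

app = Flask(__name__)

ANALYSIS_URL = "https://detector.internal/v1/analyze"        # hypothetical
CLAIMS_URL = "https://claims.internal/claims/{id}/findings"  # hypothetical

@app.post("/webhooks/media-submitted")
def on_media_submitted():
    """Fired by the claims platform when a policyholder uploads media;
    findings land on the claim record before the adjuster opens it."""
    event = request.get_json()
    findings = requests.post(ANALYSIS_URL, json={
        "claim_id": event["claim_id"],
        "media_urls": event["media_urls"],
    }, timeout=60).json()

    # Write the forensic findings back onto the claim record.
    requests.post(CLAIMS_URL.format(id=event["claim_id"]),
                  json=findings, timeout=30)
    return jsonify(status="analysed"), 200
```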
Why nobody else is doing this
Detection itself is far from solved. In March 2025, CSIRO and Sungkyunkwan University assessed 16 leading deepfake detectors and found that none could reliably identify real-world deepfakes. Detectors that performed well on benchmarks collapsed to below 50% accuracy when confronted with content from generators they had not been trained on. They learn to spot artefacts of specific generators rather than learning what makes an image synthetic. When the generator changes, the detector breaks.
Update (Feb 27, 2026): Days after we published this post, The New York Times tested more than a dozen leading AI detectors across over 1,000 scans and reached the same conclusion. Detectors struggled with complex scenes, largely missed AI edits applied to real photographs, and delivered mixed results on video. The article noted that banks and insurance companies are already adopting these tools, despite their significant limitations. That is exactly the gap deetech is built to close.
Big tech's response has been watermarking: Google's SynthID, OpenAI's C2PA metadata, Adobe's Content Authenticity Initiative. The idea is sound, but the assumption is flawed. Bad actors won't use watermarked tools. Open-source generators (Stable Diffusion, Flux, and dozens of variants) are freely available with no watermarking, no content policies, and no audit trail. Even when watermarks exist, screenshotting or recompressing strips them. The hard problem is detecting synthetic content with no watermark, no metadata, and no cooperation from the generator.
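That fragility is easy to demonstrate. Provenance records such as C2PA manifests travel in a file's metadata segments, and a plain re-encode that doesn't copy metadata discards them, which is effectively what a screenshot or a messaging-app recompression does. A sketch with Pillow (hypothetical file name):

```python
from io import BytesIO
from PIL import Image

img = Image.open("watermarked_photo.jpg")
print(sorted(img.info))  # may list 'exif', 'icc_profile', and similar keys

# Re-save without copying metadata -- roughly what a screenshot or a
# messaging-app recompression does: fresh pixels, no provenance trail.
buf = BytesIO()
img.convert("RGB").save(buf, format="JPEG", quality=90)
buf.seek(0)
print(sorted(Image.open(buf).info))  # the provenance metadata is gone
```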
The competitive landscape confirms the opportunity from two directions.
On one side, deepfake detection companies like Reality Defender (which raised a $33 million Series A) validate that deepfake detection is a venture-scale market. But they are horizontal, targeting finance, government, and media with no dedicated insurance workflows, no forensic evidence designed for claims investigations, and no integration with insurance platforms.
On the other side, insurance fraud detection companies like FRISS and Shift Technology (which has raised over $320 million) have built substantial businesses selling to insurers, proving that the buyer exists and pays. But neither has deepfake detection capability. They use rules-based engines and pattern matching on structured claims data, which is blind to fabricated visual evidence.
Nobody sits at the intersection of deepfake detection and insurance specialisation. That intersection is where deetech lives.
Where we are now
We have a production model achieving 98%+ accuracy on real claims media, including 95%+ on content from generators the model has never seen. We have secured a design partner insurer and are commencing our pilot. The website is live at deetech.ai.
The regulatory window is tightening, the fraud vector is accelerating, and nobody else is building at this intersection. We intend to be the ones who close the gap.
Insurance is about trust. We make sure that trust isn't broken by AI.