
Artificial Intelligence In Claims: Friend, Foe, or Both?
April 24, 2025
Contributing authors: Olivia Maxwell, Emily Epps, Clark Fennimore

AI (Artificial Intelligence) has been taking the world by storm over the last few years, improving efficiency, assisting in a variety of functions, and overall becoming a useful time-saving tool for a wide range of professions. Despite all the benefits that come with AI, there are, unfortunately, people who have been utilizing this super-tool for evil.
This is why we can’t have nice things.
Although still in its relatively early stages, AI already seems to have limitless capabilities. Image generation, deepfake videos, voice cloning, and the beloved ChatGPT, along with other LLMs (Large Language Models), are just a few of the tools AI has to offer. AI is under continuous development, and its capabilities improve by the day.

With its constant improvement, it is becoming harder to determine when something has been produced or altered with AI. While this would normally be a good thing, it also means it is becoming more difficult for claims professionals to detect when AI has been used to manipulate insurance claims. Generative image software, which produces high-quality visuals from a user's prompt, could be used to create false "evidence" of property damage, visible injuries that may not truly exist, and even fake X-rays and other medical records. Fraudsters can use voice cloning to impersonate a witness giving a statement, or to mimic someone who is no longer living in order to collect Social Security and other benefits on their behalf. Deepfake technology, which can create artificial photographs or video footage, can also be used to impersonate witnesses, provide false documentation, or obtain access to protected information. Lastly, LLMs can be used to create phony medical records, false police reports, other fake legal documents, and bogus billing statements or receipts.

As a claims professional, you may be reading this and thinking you now have to become some sort of super-human AI detector. This isn't true at all. There are AI systems that focus solely on identifying manipulation of images and documents submitted as evidence, as well as detecting whether an image has been copied from somewhere else on the internet. This technology can also sense when metadata has been tampered with; altering photo metadata has quietly become a surprisingly big business, and even the FBI has issued a warning about criminals and fraudsters using generative AI.
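To make the metadata idea concrete, here is a minimal sketch of the kind of automated check such a system might run on a submitted JPEG. The specific markers and red flags below are illustrative assumptions, not how any particular commercial detector works: real forensic tools examine far more than these two signals.

```python
def metadata_red_flags(image_bytes: bytes) -> list[str]:
    """Return reasons a submitted JPEG's metadata looks suspicious.

    Illustrative sketch only -- two simple byte-level heuristics.
    """
    flags = []
    # Cameras embed EXIF metadata in a JPEG APP1 segment that carries
    # the ASCII identifier "Exif\x00\x00". AI-generated or scrubbed
    # images often carry no EXIF block at all.
    if b"Exif\x00\x00" not in image_bytes:
        flags.append("no EXIF metadata (possibly generated or scrubbed)")
    # Editing software frequently leaves its name inside the file,
    # e.g. in an XMP packet mentioning "Photoshop".
    if b"Photoshop" in image_bytes:
        flags.append("file mentions editing software (Photoshop)")
    return flags

# A bare JPEG (start-of-image + end-of-image markers, no EXIF) trips
# the first check; a file containing the EXIF identifier does not.
print(metadata_red_flags(b"\xff\xd8\xff\xd9"))
print(metadata_red_flags(b"\xff\xd8Exif\x00\x00\xff\xd9"))
```

Absence of metadata is of course not proof of fraud on its own; checks like this only tell an adjuster where a closer look may be worthwhile.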
AI can save time during the claims review process by scanning documents and statements to expose potential dishonesty and inconsistencies through behavioral analysis, and it can alert you when further investigation may be warranted. Certain AI systems can even analyze body language, expressions, and speech in recorded media to identify dishonesty and assess credibility. Even so, fraudsters can use AI of their own to try to gain the upper hand.
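The simplest form of the consistency scanning described above is cross-checking fields that appear in more than one document. The field names and the 10% threshold in this sketch are hypothetical choices for illustration, not parameters of any real claims system.

```python
from datetime import date

def consistency_flags(claim: dict, invoice: dict) -> list[str]:
    """Compare a claim form against a supporting invoice and
    return any inconsistencies worth a human's attention.

    Illustrative sketch; field names and thresholds are assumptions.
    """
    flags = []
    # A repair or treatment dated before the incident itself is a
    # classic mismatch worth flagging.
    if invoice["service_date"] < claim["incident_date"]:
        flags.append("invoice predates the reported incident")
    # A bill far above the claimed loss also merits review.
    if invoice["amount"] > 1.10 * claim["claimed_amount"]:
        flags.append("invoice exceeds claimed amount by more than 10%")
    return flags

# Example: both checks fire for this (fabricated) pair of documents.
claim = {"incident_date": date(2025, 3, 1), "claimed_amount": 5000.0}
invoice = {"service_date": date(2025, 2, 20), "amount": 6200.0}
print(consistency_flags(claim, invoice))
```

Production systems apply hundreds of rules like these, plus learned models, but the output is the same in spirit: a short list of reasons a claim deserves a second look rather than an automated verdict.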
AI is reshaping the insurance industry as we know it, analyzing large volumes of data and improving companies' efficiency. However, AI carries its own risk of bias, which can lead to unfair outcomes, since certain algorithms can amplify pre-existing biases in the data they were trained on. This issue has led companies to question how ethical AI is and whether new ethical guidelines need to be put in place for these algorithms.
However, this isn’t a battle of humans versus AI, because we have the ability to use AI ourselves to detect others’ use of AI (this really could be the start of a great idea for a new movie series). By fighting fire with fire, we can use AI to deter fraudsters and gain the upper hand.

It is our responsibility to keep ourselves educated as AI technology progresses so that we may remain hypervigilant in the fight against insurance fraudsters. By familiarizing ourselves with these emerging fraud trends, and adapting our own strategies in response, we can view AI technology as an advantage instead of a threat.