According to the Coalition Against Insurance Fraud's most recent report estimating the economic impact of insurance fraud in the U.S., the yearly cost of insurance fraud has grown from $80 billion in 1995 to $308 billion in 2022. That is a staggering number, and one that consumers absorb in the form of increased premiums: roughly $3,700 per year for the average U.S. family.
While the insurance landscape looks significantly different than it did in years past as a result of a major digital transformation, one thing remains constant: fraud. The promise of using new technology and automated tools to improve both the consumer's experience and the insurance company's bottom line is real, hence the heavy investment over the last several years. The implementation challenges, however, are just as real, and striking the right balance between improved efficiency and ROI is not easy. How do you streamline the application, quoting, and claims processes without inviting new or additional fraudulent activity? The answer centers on data: not just lots of data, but the right data applied with the right processes.
1. Use third-party data to identify inconsistencies
As insurers look to meet the demand for digital-first (and sometimes digital-only) interactions, they face a challenge: how to increase underwriting efficiency, provide immediate policy quotes, and enable straight-through claims processing while still maintaining reasonable checks and balances to identify potential fraud. Insurtech companies are providing tools to do this, but their effectiveness depends on combining different data sources to isolate red flags. First-party data (data contributed by the potential policyholder) is the holy grail when it is contributed by good actors. But people perpetrating fraud aren't good actors. So how do you know who is sharing true versus false information?
One way to do this is by integrating third-party data, especially when that data is aggregated from authoritative sources such as Secretary of State business registrations, voter registrations, or other reliable public records. The idea is to put another 'set of eyes' on the data as a fact check. The applicant states their business is at one address, but third-party data says otherwise. The applicant says their business has four employees, but third-party data suggests 10. The applicant claims to have no children of driving age in the household, but third-party data suggests a 17-year-old lives there. You get the idea. Third-party data can be matched to your first-party data to identify these discrepancies, and sophisticated models can automate the interpretation of this data to help decide what should fly through and what warrants a closer look.
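The matching described above can be sketched in a few lines of code. This is a minimal, hypothetical illustration: the field names, records, and tolerance rule are invented for demonstration, and a production system would sit behind a full identity-resolution pipeline.

```python
# Hypothetical sketch: flag fields where applicant-supplied (first-party)
# data disagrees with aggregated third-party records.

def find_discrepancies(first_party, third_party, tolerance=None):
    """Return the names of fields where the two sources disagree.

    `tolerance` maps numeric field names to an allowed absolute
    difference, so small variances don't trigger a flag.
    """
    tolerance = tolerance or {}
    flags = []
    for field, claimed in first_party.items():
        observed = third_party.get(field)
        if observed is None:
            continue  # no third-party coverage for this field
        if field in tolerance and isinstance(claimed, (int, float)):
            if abs(claimed - observed) > tolerance[field]:
                flags.append(field)
        elif str(claimed).strip().lower() != str(observed).strip().lower():
            flags.append(field)
    return flags

# Invented example mirroring the scenarios above.
application = {"business_address": "12 Main St", "employee_count": 4}
public_records = {"business_address": "98 Oak Ave", "employee_count": 10}

print(find_discrepancies(application, public_records,
                         tolerance={"employee_count": 2}))
# → ['business_address', 'employee_count']
```

A decision model could then route applications with zero flags straight through while sending flagged ones for manual review.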
2. Standardize your data and leverage up-to-date data to improve identity resolution
According to the 2021 State of Insurance Fraud Technology Study published by the Coalition Against Insurance Fraud, poor data quality and difficulty integrating data are among insurance companies' biggest challenges in applying new fraud detection processes. In fact, bad data, regardless of industry, costs companies in the U.S. approximately $3 trillion per year. At the same time, insurance companies are using more data today than ever before (internal systems data, unstructured data, social media data, third-party aggregated data, and so on) to try to detect fraud more effectively. While this approach has many potential benefits, if you can't integrate these different data sources effectively, you can end up creating more problems than you solve. To avoid this, you must be able to normalize, or standardize, all of these different data feeds. Your goal is to take all the data points related to a specific business or individual and consolidate them into a super-profile. However, if you haven't applied a common name and address standardization process across each input, you might turn what should be a single, insight-rich entity into what appears to be multiple, insight-poor entities.
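To make the standardization point concrete, here is a minimal sketch of building a common match key from a business name and address. The suffix map and rules are illustrative assumptions; real pipelines rely on dedicated address-parsing and entity-resolution tools with far richer normalization logic.

```python
# Minimal sketch of name/address standardization before merging records.
# The abbreviation map below is an invented, tiny stand-in for the
# comprehensive rule sets real standardization tools apply.
import re

SUFFIXES = {"street": "st", "avenue": "ave", "road": "rd",
            "incorporated": "inc", "corporation": "corp"}

def normalize(text):
    """Lowercase, strip punctuation, and map common suffixes to one form."""
    tokens = re.sub(r"[^\w\s]", "", text.lower()).split()
    return " ".join(SUFFIXES.get(t, t) for t in tokens)

def entity_key(name, address):
    """Match key: the same business seen in different feeds collapses
    to one key instead of appearing as multiple entities."""
    return normalize(name) + "|" + normalize(address)

# The same business as reported by two different data feeds.
a = entity_key("Acme Widgets, Incorporated", "12 Main Street")
b = entity_key("ACME WIDGETS INC", "12 Main St.")
print(a == b)  # → True
```

Without a shared normalization step, these two records would land in the super-profile as two separate, insight-poor entities.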
Of course, normalizing different data inputs doesn't matter if the data itself is inaccurate. It is critical that insurance companies and insurtechs are diligent in their review processes when evaluating potential data sources. The old adage 'measure twice, cut once' comes to mind here. Reviewing metrics like fill rates when evaluating data sources can be helpful, but a true qualitative review can't be skipped if you want to flag legitimate fraud scenarios rather than create a pile of false positives. In addition to testing the data, make sure your provider explains how the data is sourced and, more importantly, how it is maintained. Businesses and consumers open, close, grow, contract, move, experience major life events, and change business models. Having data that keeps up with these changes is crucial if you are going to leverage it as one of the ways you detect fraud.
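A fill-rate check of the kind mentioned above is straightforward to sketch. The field names and sample feed here are invented; the point is simply that a quantitative completeness check is easy to automate, while the qualitative review still requires human judgment.

```python
# Hypothetical sketch: per-field fill rates for a candidate data feed.

def fill_rates(records, fields):
    """Fraction of records with a non-empty value for each field."""
    n = len(records)
    return {f: sum(1 for r in records if r.get(f) not in (None, "")) / n
            for f in fields}

# Invented sample of a third-party business feed.
feed = [
    {"name": "Acme Inc", "address": "12 Main St", "employees": 4},
    {"name": "Bolt LLC", "address": "", "employees": None},
    {"name": "Core Co", "address": "7 Oak Ave", "employees": 12},
]

for field, rate in fill_rates(feed, ["name", "address", "employees"]).items():
    print(f"{field}: {rate:.0%} filled")
```

A feed with strong fill rates can still contain stale or wrong values, which is why the qualitative review and sourcing questions above remain essential.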
3. Take advantage of predictive modeling
Arguably the quickest and most effective way to prevent fraud is to take advantage of predictive models. The 2021 State of Insurance Fraud Technology Study states that 80% of those surveyed incorporate predictive modeling into their fraud detection strategy. In fact, this is one of the most rapidly adopted techniques of recent years, rising from a usage rate of 55% in 2018.
Predictive modeling uses analytics and machine learning on large amounts of data to build models that gauge the likelihood that a new application or claim is fraudulent. Not only do these solutions scale and become more accurate over time as data accumulates, but they also work across all types of insurance.
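As one small, self-contained illustration of the idea, here is a toy fraud-scoring model: a logistic regression fit by gradient descent on fabricated claim features. Production systems use mature ML libraries, far richer feature sets, and real historical outcomes; every number below is invented for demonstration.

```python
# Illustrative sketch of predictive fraud scoring: a tiny logistic
# regression trained on made-up claim features. Not a real model.
import math

def sigmoid(z):
    return 1.0 / (1.0 + math.exp(-z))

def train(X, y, lr=0.1, epochs=2000):
    """Fit logistic-regression weights and bias by gradient descent."""
    w = [0.0] * len(X[0])
    b = 0.0
    for _ in range(epochs):
        for xi, yi in zip(X, y):
            p = sigmoid(sum(wj * xj for wj, xj in zip(w, xi)) + b)
            err = p - yi
            w = [wj - lr * err * xj for wj, xj in zip(w, xi)]
            b -= lr * err
    return w, b

def fraud_score(x, w, b):
    """Score in (0, 1); higher means riskier."""
    return sigmoid(sum(wj * xj for wj, xj in zip(w, x)) + b)

# Invented features: [claim amount / $10k, policy tenure / 100 days,
# prior-claims count]; label 1 = historically confirmed fraud.
X = [[0.5, 3.0, 0], [0.8, 2.5, 1], [4.0, 0.1, 3], [3.5, 0.2, 4]]
y = [0, 0, 1, 1]

w, b = train(X, y)
# A large, early, repeat claim should score riskier than a small one
# on a long-standing policy.
print(fraud_score([3.8, 0.15, 3], w, b) > fraud_score([0.6, 2.8, 0], w, b))
# → True
```

In practice, scores like these feed a triage threshold: low-risk claims flow straight through, while high-risk ones are routed to investigators.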
The reality is we won’t ever eliminate fraudulent insurance activity – it’s a little bit like whack-a-mole. However, if you can hit more moles than you miss, you absolutely can make headway against your operational goals of increased efficiency, reduced losses, and cost-savings–both for your company and your policyholders. But to get there, data and good data at that must be the foundation.