Data quality in the insurance industry

Good data quality is critical for the insurance industry. This is, of course, not a new challenge: Solvency II reviews, FSA/Lloyd’s data audits, data validation reports and ORSA work-streams are just some of the demands that insurers have had to grapple with over the past few years. As many firms can attest, there is no ‘quick fix’ – improving data quality is a lengthy process that needs sustained focus and investment if its benefits are to be fully realised.

Much of the drive to improve data quality in recent years has come from the need to meet regulatory compliance requirements. Indeed, proposed rule FR9 in the new PRA Rulebook states that “a firm must not knowingly or recklessly give the PRA information that is false or misleading…”, and this applies whether or not the data is provided pursuant to a regulatory requirement. In some circumstances, the PRA will be able to pursue criminal action. But the commercial benefits that flow from better data should not be overlooked. It goes without saying that better data can lead to better pricing, and better pricing leads to better results. Beyond pricing, data mining and predictive analytics also depend on good underlying data, and they can open up deeper understanding and knowledge, and therefore competitive edge.

Given the focus of Solvency II on data quality, many insurers have defined and are now embedding good data governance practices. In our work helping insurance clients address their data quality concerns, we have identified a number of trends that offer a useful guide to how some firms have tackled the problem of poor data quality.

For further details, see our factsheet here.


David Edison
Simon Gallagher
