By Alan O’Loughlin, Statistical Modelling and Analytics Lead for Europe, Insurance, LexisNexis Risk Solutions
We are only beginning to scratch the surface of the customer data available. The potential is exciting and transformational, but we must get data quality, standardisation and interconnectivity right to make the right decisions.
Defining data governance
Defining the governance and philosophy of data use, and how we structure the data, is key. We need to decide what we really mean by ‘meaningful data’ or ‘meaningful insight’.
Data analytics is already applied across the whole spectrum of insurance, but there is an increasing need to think about how data is stored and accessed to ensure it adds value.
Data science and new modelling techniques are enabling actuaries to process vast new data sources and re-imagine risk profiles, but this means they need to be able to adapt quickly to these innovations or risk losing competitive advantage to insurance providers who prove more data-agile.
Demonstrating how advanced risk analysis has become, LexisNexis Risk Solutions® LexID and its linking technology can reconcile data to an individual, not just through business rules or fuzzy matching but also through probabilistic and specificity matching. This supports right-first-time quotes. It can also help identify a customer’s quote behaviour, as well as their policy and claims behaviour. Where necessary, it is possible to segment different risk profiles and to visualise and track individual policy risk and the accumulation of risk across different geographical zones.
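LexID’s matching logic is proprietary, but a minimal sketch can illustrate the general idea of blending fuzzy and probabilistic comparisons into a single match decision. All field names, weights and the threshold below are invented for illustration only; they are not LexID’s actual parameters.

```python
from difflib import SequenceMatcher

# Illustrative match weights and threshold (invented for this sketch,
# not LexID's real parameters): agreeing fields add evidence that two
# records refer to the same person, in a simplified Fellegi-Sunter style.
FIELD_WEIGHTS = {"name": 4.0, "dob": 6.0, "postcode": 3.0}
MATCH_THRESHOLD = 8.0

def name_similarity(a: str, b: str) -> float:
    """Fuzzy string similarity in [0, 1]."""
    return SequenceMatcher(None, a.lower(), b.lower()).ratio()

def match_score(rec_a: dict, rec_b: dict) -> float:
    """Blend a fuzzy name comparison with exact field comparisons."""
    score = FIELD_WEIGHTS["name"] * name_similarity(rec_a["name"], rec_b["name"])
    if rec_a["dob"] == rec_b["dob"]:
        score += FIELD_WEIGHTS["dob"]
    if rec_a["postcode"] == rec_b["postcode"]:
        score += FIELD_WEIGHTS["postcode"]
    return score

quote = {"name": "Jon Smith", "dob": "1980-02-14", "postcode": "EC2A 1AB"}
policy = {"name": "John Smith", "dob": "1980-02-14", "postcode": "EC2A 1AB"}

score = match_score(quote, policy)
print(f"score={score:.2f}, same person: {score >= MATCH_THRESHOLD}")
```

In this toy version, the misspelt first name still scores highly on fuzzy similarity, and the exact agreements on date of birth and postcode push the pair over the decision threshold.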
Identifying new risks
Increasingly, the challenge of advanced analytics in insurance is to ensure that the right data resources and analytics are applied correctly. Modelling and discovering new risk attributes is one element, but differentiating risk and modelling risk selection are becoming more advanced. With a greater choice of data, modelling techniques and tools, it is possible to price a book of risk ever more accurately.
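As a concrete example of the kind of modelling involved, the sketch below fits a classic claim-frequency rating model: a Poisson GLM with a log-exposure offset. The portfolio data, rating factors and coefficients are entirely synthetic, chosen only to show the mechanics.

```python
import numpy as np
import pandas as pd
import statsmodels.api as sm

# Invented portfolio data for illustration only.
rng = np.random.default_rng(0)
n = 5000
df = pd.DataFrame({
    "driver_age": rng.integers(18, 75, n),
    "urban": rng.integers(0, 2, n),
    "exposure": rng.uniform(0.25, 1.0, n),  # policy-years in force
})
# Simulate claim counts from a known underlying risk structure.
true_rate = np.exp(-2.0 - 0.01 * (df["driver_age"] - 40) + 0.4 * df["urban"])
df["claims"] = rng.poisson(true_rate * df["exposure"])

# Poisson GLM for claim frequency with log(exposure) as an offset --
# the standard form of a frequency rating model.
X = sm.add_constant(df[["driver_age", "urban"]])
model = sm.GLM(df["claims"], X,
               family=sm.families.Poisson(),
               offset=np.log(df["exposure"])).fit()
print(model.summary())
```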
Remaining competitive in the risk business is all about identifying, explaining and segmenting unknown pools of risk. Taking risk identification to the next level requires quality segmentation and leads to better scoring. A large part of what our Analytics and Modelling teams do at LexisNexis Risk Solutions is reducing residual error, explaining previously unexplained risk and separating it from statistical noise. Examining data through a different lens and enriching it via contributory databases allows us to build new features and new risk rating factors.
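One way to make ‘separating risk from noise’ concrete: fit a baseline model, inspect its residuals against a candidate enrichment attribute, and check whether adding that attribute materially reduces deviance. The attribute and data below are hypothetical; a real enrichment might come from a contributory claims database.

```python
import numpy as np
import pandas as pd
import statsmodels.api as sm

# Invented data: a hypothetical enriched attribute (a contributed
# prior-claims flag) drives risk but is missing from the baseline model.
rng = np.random.default_rng(1)
n = 5000
df = pd.DataFrame({
    "driver_age": rng.integers(18, 75, n),
    "prior_claims_flag": rng.integers(0, 2, n),
})
rate = np.exp(-2.0 - 0.01 * (df["driver_age"] - 40) + 0.6 * df["prior_claims_flag"])
df["claims"] = rng.poisson(rate)

# Baseline model without the enriched attribute.
base = sm.GLM(df["claims"], sm.add_constant(df[["driver_age"]]),
              family=sm.families.Poisson()).fit()

# A systematic difference in mean residual across the candidate attribute
# points to real, unexplained risk rather than statistical noise.
df["resid"] = base.resid_pearson
print(df.groupby("prior_claims_flag")["resid"].mean())

# Refit with the enriched feature: a material deviance drop confirms it
# explains risk the baseline left behind.
enriched = sm.GLM(df["claims"],
                  sm.add_constant(df[["driver_age", "prior_claims_flag"]]),
                  family=sm.families.Poisson()).fit()
print(f"deviance {base.deviance:.0f} -> {enriched.deviance:.0f}")
```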
On the cusp of an AI revolution?
Whilst rating models are advancing rapidly, they are only as good as the data behind them, and it is still too early to deploy end-to-end dynamic pricing models without human (actuary) intervention. We are simply not ready for full AI. Humans must still provide the checks and balances of governance and help select rating factors that are palatable to the market. In all of this, we also need to keep in mind the risk of human bias, and of importing biases from systems built for a different era.
Human-level AI remains a far-off goal, and it is proving very difficult to embed human-like values in machines. For now, we are still working out the right questions to ask the machines.
Governance is applying pressure to use advanced analytics in certain ways, but there are still important questions to answer. Certainly, in the EU, we have no special governance for deploying a regression model, so why would we need to create special governance for AI? If AI is going to become more than a series of specific use cases or micro-solutions, we need to consider how, as an industry, we are going to make it happen and make it future-proof.