By Chris Halliday, Global Proposition Leader for Personal Lines Pricing, Product, Claims and Underwriting at WTW ICT, and James Clark, Partner in the Data Protection, Privacy and Cyber Security group at DLA Piper.
The ambition is clear: not only to create an environment in which businesses feel free to develop and invest in AI, but also to protect businesses and consumers from the potential harms and worst excesses of the technology.
Regulation today
What we have now is the indirect regulation of AI: there are no specific UK laws designed to address AI. Instead, a range of existing legal frameworks is indirectly relevant to the development and use of AI.
AI systems are fundamentally creatures of data, initially relying on large amounts of information to train the models that underpin them. Personal data is often used both to develop and to operate AI systems, and individuals whose personal data is used retain all of the rights they enjoy under existing laws such as the General Data Protection Regulation (GDPR). These typically include rights to transparency and data access and, perhaps most importantly, the right not to be subject to automated decision-making in relation to significant decisions, except in limited circumstances.
Impact on insurance
The current approach to the regulation of AI in the UK, as outlined in the government's white paper, is to rely to the greatest extent possible on existing regulatory frameworks. Particularly relevant for the insurance industry is financial services regulation and the roles to be played by the Financial Conduct Authority (FCA) and the Prudential Regulation Authority (PRA).
The FCA and PRA will use their existing powers to monitor and enforce against the misuse of AI, applying established principles such as consumer protection and treating customers fairly, and considering how these might be affected when an insurer relies on AI systems and predictive models to make pricing decisions. Because the increased use of AI carries a risk that some customers are discriminated against or even priced out of insurance markets, the FCA is carefully considering how to translate existing principles into the regulation of firms' use of AI.
Embracing the future
Most companies accept that it won’t be possible to hold back the tide of AI. As such, many leading businesses are already focusing on how to integrate AI into their operations and apply it to their existing business model, recognising the need to embrace rather than resist change.
In the insurance industry, not all of this is new. For many years, insurers have used algorithms and machine learning principles for risk assessment and pricing purposes. However, new developments in the field of AI make these technologies increasingly powerful, and those developments are coupled with the explosion in other forms of AI – such as generative AI – that are more novel for insurance businesses. Consequently, a key challenge for the industry is working out how to adapt and upgrade existing practices and processes to take account of advancements in existing technology, whilst also embracing certain 'net new' innovations.
Regulation tomorrow
The EU AI Act is one of the world's first comprehensive horizontal laws that is designed specifically to focus on the regulation of AI systems, and the first to command widespread global attention. It is an EU law, but importantly also one that has extraterritorial effect. UK businesses selling into the EU or even businesses using AI systems where the outputs of those systems then have an impact on individuals in the EU are potentially caught by the law.
The EU AI Act applies to all parts and sectors of the economy. Crucially, however, it is also a risk-based law: it does not try to regulate all AI systems, and it distinguishes between different tiers of risk among the systems it does regulate. The most important category under the Act is high-risk AI systems, where the vast majority of obligations lie.
Most of these obligations will apply to the provider or developer of the AI system, but there are also obligations that apply to the system’s user or deployer. The insurance industry is specifically flagged in the EU AI Act as an area of concern for high-risk AI systems. The Act explicitly identifies AI systems used in health and life insurance for pricing, underwriting and claims decisioning, due to the potential significant impact on individuals’ livelihoods.
The majority of obligations will sit with the system's provider, and that term is broader than it may appear: a business building its own AI tools, even tools that depend on existing models such as a large language model to develop an in-house AI system, will be classified as the provider of that AI system under the Act.
As a result, businesses will need to get to grips with the new law, first working out in which areas they are caught as a provider or deployer, before planning and building the compliance framework required to address their obligations. The regulation of AI received one brief mention in July's King's Speech, specifically in relation to establishing requirements around the most powerful AI models, meaning the EU AI Act is the most pressing AI law likely to apply to insurers in the coming years. Other relevant bodies, such as the Association of British Insurers (ABI), have provided guidance to their members on the use of AI.
Navigating AI
For the insurance industry and other sectors to make the most of AI, businesses will first need to map out where they are already using AI systems. You can only control what you understand. As when businesses first began their GDPR compliance programmes, the starting point is often to identify where within the organisation personal data is being used and, in particular, where AI systems are higher risk because of the sensitivity of the data they process or the criticality of the decisions they support.
Once the mapping stage is complete, organisations should begin building a governance and risk management framework for AI. The purpose is to ensure clearly defined leadership for AI within the business, such as a steering or oversight group with representation from different functions, that articulates the business's overall approach to AI adoption. Organisations will also need to decide how aggressive or risk-averse they want to be in their use of AI. This includes drafting key policy statements that clarify what the business is and is not prepared to do, as well as defining the most fundamental controls that will need to be in place.
Following this, more granular risk assessment tools will be needed so that specific use cases proposed by the business can be assessed, including a deeper dive into the associated legal risks, combined with controls for building the AI system in a compliant way that can then be audited and monitored in practice. The approach a business takes will also depend significantly on whether it buys its AI systems from a technology vendor or instead buys in the data needed for an in-house AI system, as well as whether it sells AI to its own customers at the other end of the pipeline. An insurance broker, for example, might sell services to an insurer that depend on the use of AI, and will therefore need to consider how to manage risk at both ends of that pipeline: the contracts and assurances it obtains from its vendors, and whether it is prepared to give the same assurances to its customers.
The challenge with AI at present is that use cases are continuously emerging and demand is consequently increasing. Governance frameworks and processes therefore need to be designed alongside the business processes used to assess and prioritise the highest-value AI activities and investments. Governance, business and AI teams need to work side by side to embed appropriate processes within emerging use cases, which can often save significant work later. The EU AI Act is a complex piece of legislation, and constructing the frameworks needed to comply with it will be a significant challenge. Success will rely not only on the quality of the data and models used and on good governance, but also on adopting an approach to the new law that is proportionate and builds confidence, enabling businesses to make the most of future AI opportunities.