Artificial intelligence has taken off over the last several years, with the promise of reshaping the way people live and work. The AI revolution brings opportunities for businesses to cut costs and streamline key processes. But AI also carries risks, many of them still unknown or evolving, that could impact your insurance.

Here’s what is known about the risks of AI so far, how governments, industry and insurers are responding, and what AI risk management should look like for your small business.

What Is AI?

Artificial intelligence, or AI, isn’t any one thing. It’s a whole range of new and evolving technologies that process large amounts of information very quickly, with a variety of applications in business and society. Some of the most talked-about forms of AI today are:

Machine Learning

Machine learning involves computer systems “learning” in a way similar to how human beings learn. These AI systems take in large amounts of data and process it using specialized algorithms. These systems are often predictive. One real-world application of machine learning is a website that translates a customer’s preferences into on-the-fly personalized product recommendations.

Large Language Models

Large language models (LLMs) are AI systems that use deep learning, a set of multi-layered algorithms, to understand language patterns, grammar and semantics and perform a variety of tasks. They are “trained” on massive datasets so they can understand, process and generate lifelike text of their own. Applications of LLMs include ecommerce chatbots, summarization tools, translators and more.

Generative AI

Generative AI is a type of artificial intelligence that’s capable of creating new content by learning from existing material. This can include text, images, video, audio or even computer code. Prompt-based generative AI applications include the popular ChatGPT tool, which generates text. But generative AI can also be used to create photographic images, artworks, films, music and more.

Deepfake

Deepfake is the term for a type of generative AI used to create highly realistic-yet-fake images, audio recordings or videos. Deepfake technology is considered a threat to business, the public and even governments. The biggest danger may be its ability to impersonate government officials, corporate executives and other trusted persons in positions of authority, but identity theft, fraud and misinformation are also threats.

What Are the Key Risks of AI?

While the risks of misusing AI technology, as in the case of deepfakes, are obvious, even normal and proper use of the technology can carry risks for the businesses that incorporate it. This area of study is still evolving and subject to change. But so far, experts have identified four main areas of concern:

1. Bias, Discrimination and Inaccuracy

Artificial intelligence isn’t perfect. Yet unlike humans, who are generally aware of their limitations, this technology may forge ahead when it shouldn’t. This can result in algorithmic bias, discrimination and inaccurate responses.

Both intentional and unconscious biases are possible and can become part of an AI due to data quality, algorithmic design, training models or the AI user’s own interpretations. AI biases may potentially lead to discrimination in AI applications such as recruitment, pricing, policing and the provision of healthcare or financial services.

Inaccuracy can also be a problem. Some mistakes are inevitable in the very large datasets used to train AI models, and AI technologies aren’t able to differentiate what is and is not true in such datasets. Users may also run up against the limitations of the technology itself, which is designed to provide plausible-sounding responses, not necessarily true ones. Such answers may contain mistakes.

2. IP and Copyright Infringement

Intellectual property rights and copyright infringement are also an area of concern. The risk is twofold. On one hand, intellectual property owners and copyright holders must contend with the fact that their works may become part of a dataset used by an AI model for commercial purposes. On the other, AI users may have difficulty securing legal protections for works generated using an AI that would otherwise be protected had they been created without it.

An IP or copyright holder’s material can become part of an AI dataset in two ways. It could have been introduced to the AI at the outset by the designers of the system. Many copyrighted materials available in part or whole on the internet have become part of the training dataset of an AI tool in this way. However, it is also possible that private information provided to an AI system by a user as part of a prompt may eventually become part of the AI’s overall dataset. Either way, such protected works could potentially be used by the AI to generate new content. Fair use doctrine may prevent IP and copyright holders from taking action in such cases.

AI users may also face challenges attempting to legally protect works produced with the assistance of an AI. Court cases, one famous example involving a photograph taken by a monkey, have previously established that such protections apply only to creators who are human beings. A claim for protection may be rejected if an AI user has not exerted sufficient human control over the generated content or has not disclaimed ownership of the portions of the work created by the AI.

3. Regulatory and Compliance

Businesses that utilize AI may face challenges as new policies, laws and regulations are adopted that affect their use of the technology and its collection of data. Risk also exists where officials apply existing regulations to newer technologies. So far, both are happening with AI.

Federal laws and regulations governing the use of AI are still being determined. Some of the topics under review include AI political ads, employee tracking, consumer privacy and more. Government policy is currently favorable toward AI technological development, but it’s possible future administrations could take a different approach. A patchwork of new state regulations in Colorado, California, Texas and elsewhere may also complicate corporate compliance efforts.

Existing regulations from the Federal Trade Commission, Equal Employment Opportunity Commission and Consumer Financial Protection Bureau also apply to the use of AI technology. In one case, a drugstore chain faced FTC action for bias and discrimination over its use of AI facial recognition technology to deter theft. Regulators are primarily concerned with how AI may impact key issues like unlawful discrimination, privacy and security.

4. Cyber and Data Privacy Risks

Cyber and data privacy were already areas of concern for companies before AI became a household buzzword, but the broader adoption of AI has accelerated these existing risks. Businesses must think about both how data is collected and used by their AI tools and how that data may be vulnerable to a data breach or cyber attack.

Two issues concern the collection of the data itself. First, the data collected could be sensitive in nature, containing personally identifiable information, financial details, health records or other private data. Second, there’s the legal issue of whether such data was obtained with the necessary consent. Even data obtained with consent may be used by an AI tool in a way inconsistent with that consent. For example, data originally obtained for research purposes could later be used by AI surveillance technologies.

Bad actors also complicate the situation. Like other stores of data, the massive datasets used to train AI algorithms are vulnerable to leaks and hacks. Just as an AI tool has trouble distinguishing which information in its dataset is true, it may have trouble distinguishing which information is private or personal and disclose it. Professional hackers are also working to exploit vulnerabilities in AI systems to access private data for financial or other illicit purposes.

How Are Insurers Responding?

Like other industries, insurers are embracing new AI technologies and looking for ways they can be used to provide better products and services to their own clients. Yet with AI being such a new form of technology, the industry is learning along with everyone else. Even so, insurers are taking an active role in developing risk management strategies for new AI technologies.

1. More Carriers Are Asking Underwriting Questions

During policy applications and renewals, more insurance carriers are beginning to ask about AI technology. These insurers may be interested in knowing whether common forms of insurance they extend to you, such as cyber, errors and omissions, general liability and employment practices liability, may need to respond to a claim involving AI. If so, the carrier wants to be aware of such a risk before issuing a policy.

2. Insurers Are Evaluating and Monitoring Developments

The insurance industry is keeping a close eye on developments involving AI technologies. This includes developments of the technology itself, its applications across various industries, public debates on its usage and government policy, legislation and regulatory activities. These activities help insurers stay on top of emerging risks and bring solutions to market that protect their policyholders from novel exposures.

3. Policies Typically Don’t Apply AI Exclusions, For Now

Exclusions and endorsements are a key piece of any insurance policy, limiting or extending specific coverages for an insured. At this early stage, most insurers aren’t applying AI exclusions. However, this can vary from insurer to insurer and even policy to policy. It’s crucial for businesses to check with their insurer, discuss their exposures and read their policy documents closely. If there’s any question, be sure to ask.

4. AI Coverage Is in the Early Stages of Development

Some standalone products designed specifically for AI technology are emerging, particularly for those businesses whose operations are deeply tied to the technology. However, such coverages are still quite new. Existing coverages available to businesses may still be able to respond to AI-driven losses, as most active policies either provide affirmative coverage for AI or are silent on it.

What Are the Next Steps to Take for AI Risk Management?

Knowing that AI is here to stay, many businesses are taking the plunge and incorporating the technology into their business operations. There is great promise that the AI revolution will increase productivity and profit while providing better products and services to clients and customers. However, proper AI risk management is key from the outset. Make sure to ask:

  • Do we use AI tech, or are we contemplating using it? For your business, be sure to know how AI plays a role in your operations. If it will play a role in the near future, make sure there’s a plan in place that takes the risks into consideration.
  • Do our vendors use AI tech and what are the contractual terms regarding potential liability? For partnerships, it’s important to know whether others you work with are incorporating AI into their operations in ways that could impact your business. Be sure to read your contracts.
  • What are the security risks associated with the AI we are using? For your exposures, security vulnerabilities like a data breach or cyber attack rank near the top in terms of typical claim costs. Be sure to know the risks of the AI your business incorporates.
  • What are our regulatory or compliance obligations? For your obligations, it’s important to know where you stand. Be sure to work with experienced legal counsel to understand which state and local regulations may apply and keep up with federal-level AI developments.
  • How will our existing coverage respond and/or do we want (or need) standalone AI coverage? For your insurance, the time to start asking questions about your AI risks and exposures is now, before you have a claim. Discuss your risks with a trusted insurance professional.

Understanding the risks of artificial intelligence in business is a unique challenge. While AI is new for everyone, the right mindset toward such exposures can make all the difference. With Lockton Affinity, you get more than insurance. As an advocate for our clients, we work with you to help minimize risky exposures and ensure needed coverages.

Lockton Affinity’s CyberLock Defense Insurance Program offers industry-leading coverage that is designed to protect your business against many of today’s biggest cyber threats. Discover more benefits for your business today. Visit CyberLockDefense.com or call us at (844) 868-7144 to get started.