U.S. and international businesses are accelerating their use of artificial intelligence (AI)[1] at an unprecedented rate. The second AI Index Report, published in December 2018 by a Stanford University-led group, concluded that “AI activity is increasing nearly everywhere and technological performance is improving across the board.” The AI Index Report further found that “the number of AI startups has seen exponential growth” and that “[f]rom 2013 to 2017, AI VC [venture capital] funding increased 350%.” Growth in this area will continue, and AI will infiltrate every imaginable industry: from assisting doctors in detecting lung cancer to delivering mail with self-driving trucks, AI is the New Frontier.

As businesses race to implement AI solutions and processes that may improve efficiency and lower costs, AI will also create new and ever-evolving risks. And when a company’s AI fails to perform as expected, or AI is breached or manipulated in a cyberattack, new and thorny questions about how to apportion liability for resulting losses emerge. The question only becomes thornier when it is a company’s supplier, contractor, or service provider that experiences a breach or failure.

It will be difficult to apply traditional tort liability schemes to AI-related loss scenarios, but there is no doubt that AI will change the way we look at the insurability of losses. Nonetheless, businesses that use, or are considering using, AI, whether directly or indirectly, can take concrete steps to enhance their insurance and risk management programs and mitigate the threat of AI-related loss. Although coverage needs vary from company to company and should be assessed on an individual basis, a non-exhaustive list of threshold issues to consider is as follows:

First, take a full assessment of the risks and exposures associated with the use or development of AI in your business. Although the list is vast and ever changing, examples include:

  • Cyberattack / security breach. The use of AI could leave your business more vulnerable to cyberattacks on AI data repositories that may contain personal or confidential information. In addition, the use of machine learning – an AI technique to design and train software algorithms to learn from and act on data – leaves companies open to “data poisoning attacks,” where malicious users inject false training data with the goal of corrupting the results generated by the algorithms.
  • Third-party bodily injury and property damage. One obvious example of AI creating exposure for bodily injury- or property damage-related losses is the use of AI in autonomous vehicles, and the associated potential for a crash causing property damage to other vehicles or injury or death to individuals. The use of AI in manufacturing and in medical devices also presents the possibility of exposure to bodily injury-related losses. For example, corrupted AI that administers the wrong dosage of medicine at a hospital could result in serious and significant claims by injured individuals. The malfunction of a heart device that relies on biofeedback likewise presents risk and exposure for the device manufacturer.
  • Business interruption. A failure in or disruption of AI used in the manufacturing process could cause significant production losses or a short-term shut down of operations that leads to business interruption losses during the time it takes to get manufacturing back up and running.
  • Contingent business interruption. Even if your business itself is not using AI, consider whether any company in your supply or delivery chain relies on AI. Just like a hurricane that destroys or damages one of your supplier’s facilities, a cyberattack or other AI error in the supply chain could cause delays in shipments of materials critical to business.
  • Shareholder lawsuits or regulatory inquiries. Regulators are increasingly interested in AI, thus increasing the chances that failure of AI, the role of AI with respect to consumer data privacy, or a cyberattack on AI could lead to a regulatory investigation. Similarly, directors and officers may face heightened scrutiny over their use of AI and could face event-driven shareholder or other lawsuits related to their failure to react appropriately to an AI-related issue.

Second, once your company has identified areas of potential risk, review your insurance program holistically to determine where you may have gaps in coverage for AI-related exposures.

Does the use of AI in your business present risks that were previously considered remote or immaterial? Consider a software company that develops a new technology that uses AI to detect or diagnose disease. Software previously designed by that same company may not have exposed it to significant third-party liability, but this new technology has the potential to affect the lives of hundreds if not thousands of individuals. Or consider a company that provides the technology for a self-driving car. A flaw in the software used in a fleet of autonomous vehicles could lead to substantial bodily injury- and property damage-related losses. In the event of a loss, which company will be responsible? The manufacturer of the ultimate product? The component manufacturer who supplied the AI software? Recognizing new areas of exposure – and assessing where there may be insurance coverage gaps – is critical.

Other important considerations when assessing your insurance program as it relates to AI include, but certainly are not limited to, the following:

  • Cyber insurance policies are increasingly commonplace, and increasingly nuanced. The same can be said for technology errors and omissions (E&O) policies for technology (including AI) providers. But cyber insurance and technology E&O policies often exclude coverage for bodily injury or tangible property damage. As noted above, these types of damages are an increasing concern in case of an AI failure or breach.
  • Similarly, a general liability policy might provide the bodily injury or property damage coverage otherwise excluded from a cyber or technology E&O policy, but does that same general liability policy have an exclusion for professional liability, for cyber or security breaches, or for technology products and software?
  • If you manufacture or develop software, does your technology E&O policy provide coverage? Does the policy cover cyberattacks, or only programming errors? Similarly, what if your AI performs as intended but produces poor results due to bad data? Is that a “wrongful act” covered under your technology E&O policy?
  • If your company uses AI to perform a service, could your traditional (non-tech) E&O or professional liability policy cover a loss associated with alleged failure to render those services?
  • What about business interruption or contingent business interruption? These types of losses are generally covered under a property policy, but that same property policy might have an exclusion for cyber-related losses. Cyber insurance policies, on the other hand, may cover network business interruption or contingent business interruption losses, but could exclude events triggered by tangible damage to property. Although it is rare, some cyber policies also exclude claims arising out of the use of non-standard software and programming errors, which could eliminate coverage for AI-related losses. Other policies bar coverage once the data is no longer in the insured’s care, custody, or control.

Although claims and underwriting experience involving AI-related losses is extremely limited, considering these scenarios in advance of placing your policies could assist in closing potentially significant coverage gaps.

Third, companies using AI created or supplied by third-party vendors (such as technology providers) should carefully review the indemnity and insurance provisions of their technology-related service contracts. Technology vendor contracts are often written with significant restrictions on the indemnity provided, such as caps on the amount of indemnification or carve-outs specifying that the vendor will not indemnify you in the case of a data breach. If the technology could have a significant impact on your business, consider negotiating to amend or expand these provisions. For example:

  • If your company develops an AI program, and that program is later implemented into a product that eventually malfunctions, whose policy responds to the resulting loss – the company that developed the AI program or the company that ultimately manufactured the product? Are there contractual indemnification provisions between the two companies to address such a loss?
  • If you are a manufacturer that uses AI, could your product liability policy provide coverage for AI related losses? If another company develops the AI, does your company have sufficient indemnification provisions to protect it?

In sum, any company that develops or uses AI should undertake a holistic review of its entire insurance program and consult with coverage counsel to make sure it is covered for AI-related claims. A comprehensive coverage review can spot these and other gaps in corporate insurance programs. Timely notice should also be provided under all potentially applicable insurance policies in the event of an AI-related claim.

If you have any questions about the content of this article, or the current state of your company’s coverage for AI-related liabilities, please contact one of the authors of this article or any other member of Reed Smith’s Insurance Recovery Group.

[1] This article uses “AI” or “artificial intelligence” as a broad term that incorporates “machine learning.”