Reed Smith recently attended the Airmic Annual Conference 2025 in Liverpool as a Marketplace Partner. In the wake of a series of high-profile cyberattacks on retailers in the first half of 2025, cyber risk emerged as a central theme of panel discussions and sessions at the conference. Stakeholders across government, cybersecurity and underwriting emphasized the importance of awareness and mitigation of cyber risk, as well as the insurance industry’s role in transferring and managing that risk.

As cyber-related losses become a fact of life for businesses of all kinds rather than a rarity, policyholders should engage with this risk and the different forms it can take, so that they are able to assess their insurance coverage and potential areas of exposure. Below, we examine three categories of cyber risk and highlight the key policy provisions to consider for each.

  1. Autonomous cyberattacks

Autonomous cyberattacks are carried out by software or systems that operate with little or no human involvement. These attacks use AI or machine learning algorithms to identify system vulnerabilities, make decisions and advance attack paths. The primary objective of autonomous cyberattacks is usually to interfere with system operations, gain unauthorized access to systems or tamper with data.

Key considerations for policyholders:

  1. Definition of “Cyber Loss” or “Cyber Event”: Most cyber policies refer in the insuring clause to a loss resulting from “Unauthorized Access” by a person (as opposed to AI). Policyholders should therefore review the insuring clause wording and ensure that it is drafted broadly enough to cover losses caused by AI-driven attacks.
  2. Causation: Autonomous cyberattacks are sophisticated and able to mask their origins. This makes it difficult for policyholders to prove the cause of loss and, therefore, to identify which insuring clause has been triggered. Policyholders should seek to include broad coverage language that protects against “unattributable events” and, in the event of loss, should engage expert incident response teams with forensic investigation capabilities to establish the nature of the attack. Policyholders should also bear this in mind when reviewing any territorial exclusions and carve-backs, since the territory from which an attack originated may be nearly impossible to identify.
  3. Security protocols: Under a cyber policy, the policyholder will be obliged to adopt appropriate and up-to-date security measures. Demonstrating compliance with “industry standard” protocols can be challenging in light of the rapid evolution of AI attack techniques, so policyholders may seek to tie their security obligations under the policy to a recognized security compliance framework (such as “Cyber Essentials”). In all circumstances, it is advisable to maintain detailed records of the security measures in place (including written policies, procedures and technical controls).
  4. Notification obligations: Autonomous cyberattacks may go undetected for extended periods, which makes it difficult for policyholders to comply with notification requirements. Policyholders should seek to ensure that the notification obligation is triggered on the date the policyholder “first becomes aware” of an incident, rather than the date the incident occurs. Policyholders will also need to disclose how their “alarm” systems work to alert the business to any potential breach of security. Including a “deeming provision”, which fixes the point at which the policyholder is deemed to have knowledge of an incident by reference to a specific event (for example, discovery by the IT team), may also provide greater certainty and clarity as to what level of knowledge is required and by whom.
  2. Social engineering cyberattacks

Social engineering cyberattacks manipulate individuals into disclosing confidential information or taking actions, such as transferring funds, that can lead to identity theft or financial loss.

Key considerations for policyholders:

  1. Exclusions: Cyber policies will often contain exclusions for certain types of fraud, such as those involving voluntary transfers and authorized access. Where a policyholder is tricked into transferring funds, insurers may argue that the loss is excluded because it was not the result of unauthorized access. Therefore, policyholders should request carve-backs in relation to fraud and voluntary transfer exclusions, so that losses arising from deception or manipulation are not excluded by default.
  2. Employee training protocols: Given that social engineering attacks rely on the involvement of individuals, a cyber policy covering this type of risk will likely require robust employee training and internal controls to be in place. Consequently, policyholders should implement thorough staff training programs and ensure that they are reviewed and documented on a regular basis so that any potential deficiencies are promptly identified and resolved.
  3. Data poisoning

Data poisoning involves the deliberate corruption of a training dataset: the collection of data used to teach an AI or machine learning model how to perform a specific task. The main objective of data poisoning is to undermine the reliability and integrity of the model (for example, by causing it to allow security breaches).
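
To make the mechanism concrete, the short sketch below is a hypothetical Python/scikit-learn example; the library, synthetic dataset and poisoning rate are illustrative assumptions rather than anything drawn from a policy or insurer guidance. It shows how flipping a share of the labels in a training dataset degrades the accuracy of the resulting model.

```python
# Hypothetical illustration of data poisoning by "label flipping": an attacker
# with access to the training dataset relabels part of one class, and the
# model trained on the corrupted data performs measurably worse.
import numpy as np
from sklearn.datasets import make_classification
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import accuracy_score
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)

# Synthetic binary-classification training dataset (illustrative only)
X, y = make_classification(n_samples=2000, n_features=20, random_state=0)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

# Model trained on the clean dataset
clean_model = LogisticRegression(max_iter=1000).fit(X_train, y_train)

# Poisoned dataset: 40% of class-1 training labels are silently flipped to 0
y_poisoned = y_train.copy()
class_one = np.where(y_poisoned == 1)[0]
flipped = rng.choice(class_one, size=int(0.4 * len(class_one)), replace=False)
y_poisoned[flipped] = 0
poisoned_model = LogisticRegression(max_iter=1000).fit(X_train, y_poisoned)

print("clean model accuracy:   ", accuracy_score(y_test, clean_model.predict(X_test)))
print("poisoned model accuracy:", accuracy_score(y_test, poisoned_model.predict(X_test)))
```

The same principle applies to far subtler manipulations than wholesale label flipping, which is why poisoned data can pass unnoticed until the model misbehaves in production.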

Key considerations for policyholders:

  1. Definition of “Data”, “Software” and “Systems”: Cyber policies typically cover unauthorized access to or alteration of “data”, “software” or “systems”. Insurers may argue that training datasets do not fall within these standard definitions. Policyholders should therefore seek clarification from insurers or request an endorsement expressly including training datasets as covered assets.
  2. Causation: Like autonomous cyberattacks, data poisoning can be difficult to detect, as it is usually designed to compromise security systems. Proving that a loss was directly caused by a deliberate cyberattack, as opposed to a malfunction or error, may be challenging. Consequently, policyholders should implement audits and validation checks of training datasets to identify any potential manipulation (an illustrative sketch follows this list).
  3. Exclusions: Data poisoning is regularly perpetrated by employees or other approved system users, since they have access to the training dataset and the model’s outputs. Therefore, where policies exclude losses arising from insider actions, policyholders should seek to negotiate carve-backs for dishonest or malicious insider incidents.
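
By way of illustration only, the sketch below shows one form such an audit or validation check might take: recording a cryptographic fingerprint and label distribution for each dataset snapshot and flagging unexplained changes before retraining. The file layout, the 10% drift threshold and the function and file names are assumptions made for the example, not a prescribed control.

```python
# Hypothetical sketch of a periodic training-data audit: fingerprint each
# dataset snapshot and compare its label distribution with the last audited
# baseline, flagging unexplained changes for investigation before retraining.
# Assumes a simple CSV whose last column is the label; the 10% drift threshold
# and baseline file name are illustrative choices.
import hashlib
import json
from collections import Counter
from pathlib import Path


def audit_training_data(csv_path: str, baseline_path: str = "audit_baseline.json") -> list:
    raw = Path(csv_path).read_bytes()
    fingerprint = hashlib.sha256(raw).hexdigest()

    # Count how many rows carry each label (header row is skipped)
    rows = raw.decode("utf-8").splitlines()[1:]
    labels = [row.rsplit(",", 1)[-1].strip() for row in rows if row.strip()]
    distribution = dict(Counter(labels))

    findings = []
    baseline_file = Path(baseline_path)
    if baseline_file.exists():
        baseline = json.loads(baseline_file.read_text())
        if baseline["fingerprint"] != fingerprint:
            findings.append("dataset bytes have changed since the last audit")
        for label, count in distribution.items():
            previous = baseline["distribution"].get(label, 0)
            if previous and abs(count - previous) / previous > 0.10:
                findings.append(f"label '{label}' count shifted by more than 10%")

    # Record the current snapshot as the baseline for the next audit
    baseline_file.write_text(json.dumps(
        {"fingerprint": fingerprint, "distribution": distribution}, indent=2))
    return findings
```

Running a check of this kind as part of the model-release process, and retaining its output, also gives the policyholder contemporaneous records that may assist with causation arguments in the event of a claim.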

Conclusion

As the landscape of cyber risk evolves, policyholders must ensure that key risks, such as those arising from autonomous attacks, social engineering and data poisoning, are adequately addressed in their cyber cover. This requires a detailed understanding of those specific risks, careful review and negotiation of policy wording, robust internal controls and ongoing engagement with insurers to ensure that coverage keeps pace with a fast-changing category of risk.