
Ransomware Costs Jump 17% Despite Fewer Claims: The AI Phishing Factor

Ransomware costs up 17% to $2.73M per attack, driven by AI phishing. Learn how AI is making attacks more effective and how to protect your business.

Written by
Soma Insurance Team

SAN FRANCISCO, CA – The average cost of a ransomware attack reached $2.73 million in 2025, up 17% from $2.33 million in 2024, according to October 2025 data from Sophos's annual "State of Ransomware" report. This increase is particularly concerning because it occurred despite a 15% decrease in the total number of ransomware attacks.

The combination of fewer attacks and higher costs per attack points to a troubling trend: ransomware is becoming more sophisticated, more targeted, and dramatically more effective. The primary driver? Artificial intelligence-powered social engineering that is making it nearly impossible for employees to distinguish legitimate communications from phishing attacks.

This shift has profound implications for cyber insurance and business risk management. The old defensive playbook—employee training, email filters, and endpoint protection—remains necessary but is no longer sufficient against AI-enhanced attacks. Businesses need to fundamentally rethink their cybersecurity strategies to address this new threat landscape.

The 2025 Ransomware Landscape: Key Data Points

Sophos's report, based on a survey of 5,000 cybersecurity and IT professionals across 14 countries, reveals how ransomware is evolving:

Attack Frequency Is Declining (But Don't Celebrate Yet)

59% of organizations experienced at least one ransomware attack in 2024-2025, down from 70% in 2023-2024.

Why attack frequency is declining:

  • Better baseline security: Multi-factor authentication (MFA) adoption increased to 82% of organizations (up from 67% in 2023)
  • Improved endpoint detection: EDR/XDR tools now deployed at 71% of organizations (up from 54%)
  • Law enforcement disruption: FBI and international partners successfully disrupted multiple major ransomware groups
  • Cryptocurrency seizures: High-profile government seizures of ransom payments deterred some attackers
  • Operational security improvements: Basic security hygiene (patching, backups, network segmentation) has improved

Why declining frequency doesn't mean reduced risk:

  • Attackers are focusing on higher-value targets rather than spraying attacks broadly
  • Sophistication is increasing, making successful attacks more damaging
  • AI is enabling smaller groups to execute more effective attacks with fewer resources
  • Initial access is becoming more targeted and harder to detect

Cost Per Attack Is Soaring (+17% Year-Over-Year)

Average ransomware attack cost: $2.73 million

This figure includes:

  • Ransom payments: $418,000 average (only 34% of organizations paid in 2025, down from 48% in 2024)
  • Business interruption: $1.21 million average (downtime, lost productivity, lost revenue)
  • Recovery and remediation: $687,000 average (forensics, system rebuilding, data recovery)
  • Notification and legal: $214,000 average (breach notification, legal fees, regulatory response)
  • Reputation and customer loss: $201,000 average (customer attrition, brand damage, future sales impact)

Key insight: Even though fewer victims are paying ransoms (34% vs. 48%), total costs are rising because recovery, business interruption, and remediation costs are increasing faster than ransom payment costs are declining.

Recovery Time Is Increasing

Average recovery time: 47 days (up from 39 days in 2024)

Why recovery takes longer:

  • Modern ransomware spreads further: Attackers now spend 30-90 days inside networks before deploying ransomware, ensuring it infects backups and multiple systems
  • Backups are increasingly targeted: 87% of ransomware attacks in 2025 attempted to encrypt or delete backups
  • System complexity has increased: Cloud-hybrid environments are harder to restore than traditional on-premises infrastructure
  • Skills shortage: Organizations can't find qualified incident response professionals fast enough
  • Data exfiltration: 71% of ransomware attacks now include data theft, requiring extensive forensic investigation to determine what was stolen

Data Exfiltration ("Double Extortion") Is Now Standard

71% of ransomware attacks included data theft (up from 62% in 2024)

The double extortion model:

  1. Attacker infiltrates network and steals sensitive data
  2. Attacker encrypts systems with ransomware
  3. Attacker demands ransom for decryption key
  4. If victim doesn't pay, attacker threatens to publish stolen data
  5. If victim restores from backups without paying, attacker publishes data anyway

Why this matters: Even organizations with perfect backups and ability to restore systems quickly still face extortion. The threat of data publication—exposing customer records, financial data, trade secrets, or embarrassing internal communications—often forces payment even when technical recovery is possible.

The AI Phishing Revolution: Why Traditional Training Fails

The most significant factor driving increased ransomware success is the use of AI-powered social engineering to gain initial network access.

How AI Is Transforming Phishing Attacks

Traditional phishing emails were often easy to spot: poor grammar, generic greetings ("Dear Sir/Madam"), suspicious sender addresses, and obvious urgency tactics ("Your account will be suspended!").

AI-powered phishing is fundamentally different:

1. Perfect Language and Localization

AI language models like GPT-4 produce flawless grammar, an appropriate tone, and culturally appropriate references, making the resulting emails indistinguishable from legitimate business communications.

Example: A CFO received an email purportedly from the CEO requesting an urgent wire transfer for a confidential acquisition. The email:

  • Used the CEO's actual communication style and common phrases
  • Referenced a real ongoing project by code name
  • Arrived at the CEO's typical email time (6:47 AM; he always emails before 7:00 AM)
  • Included plausible justification for urgency and confidentiality
  • Contained no grammatical errors or suspicious phrasing

The CFO nearly authorized the $680,000 transfer before calling the CEO directly to verify. The email was AI-generated. The attackers had analyzed months of leaked company emails to train the AI on the CEO's communication patterns.

2. Contextual Awareness and Personalization

AI systems scrape publicly available information (LinkedIn, company websites, social media, data breaches, court filings) to create highly personalized attacks.

Example attack sequence:

  1. AI identifies that Marketing Director Jane Smith recently posted on LinkedIn about attending a digital marketing conference
  2. Three days later, Jane receives an email from "conference organizers" with subject line "Your Conference Presentation Materials - Action Required"
  3. Email references specific sessions Jane attended (scraped from conference app)
  4. Email appears to come from legitimate conference domain (attacker registered look-alike domain: "marketingconf2025.com" vs. real "marketingconference2025.com")
  5. Attached "presentation materials" contain malware
  6. Jane opens attachment because context is perfect and email seems completely legitimate

This attack succeeded. The malware established initial access that led to a $1.8M ransomware attack six weeks later.

3. Voice and Deepfake Integration ("Vishing")

AI voice cloning enables attackers to impersonate executives or IT staff with remarkable accuracy.

Example: Attackers used an AI voice clone of a company's IT director to call the help desk:

  • Voice matched perfectly (attackers trained the AI on YouTube videos of the IT director speaking at webinars)
  • Caller ID was spoofed to show the IT director's actual mobile number
  • The "IT director" claimed he was locked out of his laptop while traveling and needed a password reset
  • Help desk employee, convinced by the voice and caller ID, reset the password
  • Attackers gained privileged access and deployed ransomware three days later

Cost of this attack: $3.2 million in recovery, business interruption, and lost contracts

4. Automated Reconnaissance and Social Engineering

AI can conduct reconnaissance at scale, identifying the most vulnerable employees and crafting specific attacks for each target.

How it works:

  1. AI scrapes all employee emails from data breaches, LinkedIn, and the company website
  2. AI analyzes each employee's role, responsibilities, access level, and communication patterns
  3. AI identifies high-value targets: finance staff (can initiate wire transfers), IT staff (have system access), executives (can authorize unusual requests)
  4. AI generates personalized phishing campaigns for each target
  5. AI monitors which employees clicked links or opened attachments
  6. AI automatically escalates successful compromises to human attackers for exploitation

This approach increased successful phishing rates from 3-4% (traditional spray-and-pray phishing) to 18-23% (AI-personalized attacks) according to security research firms.

Why Traditional Training No Longer Works

Most organizations conduct annual or quarterly phishing awareness training. Employees learn to watch for red flags:

  • Check sender email addresses carefully
  • Hover over links before clicking
  • Be suspicious of urgent requests
  • Verify unusual requests through separate communication channels

This training remains important but is increasingly insufficient because:

AI-generated phishing emails have no red flags: Perfect grammar, legitimate-looking sender addresses (via domain spoofing or compromised accounts), contextually appropriate content, and plausible urgency.

Human judgment is unreliable under pressure: Even trained employees make mistakes when they're busy, stressed, or distracted. AI attackers exploit this by timing attacks when targets are most likely to be rushed (end of quarter, during crises, late Friday afternoons).

Verification processes are bypassed: Employees are supposed to verify unusual requests through separate channels, but attackers use urgency and authority to discourage verification ("The CEO needs this done in the next hour for a confidential deal—don't call him, he's in meetings").

AI attacks evolve faster than training: By the time employees complete training on current attack techniques, AI has evolved new methods.

Four Defensive Strategies for the AI Phishing Era

Traditional defenses remain necessary (MFA, email filtering, endpoint protection, training) but must be supplemented with AI-era specific strategies:

Strategy 1: Implement Zero Trust Architecture

Zero Trust assumes that no user or device should be trusted by default—even if they're already inside your network.

Core principles:

Verify explicitly: Require authentication and authorization for every access request, every time

  • No "trusted" network zones where authenticated users can access anything
  • Every resource access requires re-verification
  • Continuous authentication monitoring (flag unusual access patterns)

Use least privilege access: Grant minimum necessary permissions

  • Financial staff can't access engineering systems
  • Engineers can't initiate wire transfers
  • Limit admin rights to specific tasks, not blanket access
  • Time-limit elevated permissions (admin access expires after 4 hours)

Assume breach: Design systems assuming attackers are already inside

  • Segment networks so compromised system can't access everything
  • Monitor for lateral movement (unauthorized system-to-system access)
  • Encrypt sensitive data even inside the network
  • Log everything for forensic analysis

Real-world impact: A manufacturing company implemented Zero Trust architecture in 2024. In March 2025, an employee fell for an AI-generated phishing attack and provided credentials. The attacker gained access to the employee's email and workstation.

Without Zero Trust, the attacker would have pivoted to financial systems, domain controllers, and backups—typical ransomware attack progression.

With Zero Trust, the attacker's access was limited to that employee's specific permissions. Lateral movement attempts triggered alerts. Security team detected and contained the incident within 4 hours. No ransomware was deployed. No data was stolen. Total cost: $12,000 (incident response time).

Contrast with similar company without Zero Trust: Suffered complete ransomware deployment, 47-day recovery, $2.1M total cost.

Strategy 2: Deploy AI-Powered Email Security

Fight AI with AI. Advanced email security systems use machine learning to detect sophisticated phishing attempts that bypass traditional filters.

What AI email security detects:

Communication pattern anomalies:

  • Email from CEO at unusual time (he never emails at 2:00 AM)
  • Email tone inconsistent with sender's normal style (CFO suddenly using informal language)
  • Email requesting unusual action (IT director never asks for password resets via email)

Suspicious characteristics:

  • Domain look-alikes (micr0soft.com vs. microsoft.com)
  • Newly registered domains (registered 3 days ago)
  • Sender location inconsistencies (email claims to be from CEO in New York but originated from IP in Eastern Europe)
  • Attachment types the sender never uses (accounting sending .exe files)

Content red flags:

  • Urgent financial requests
  • Credential harvesting attempts (links to fake login pages)
  • Invoice payment redirections
  • Unusual wire transfer requests

Advanced systems quarantine suspicious emails and require verification:

  • "This email appears to be from your CEO but contains unusual characteristics. Call CEO at [verified number] to confirm this request before taking action."
  • Email is held in quarantine until verification occurs
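
Vendors' detection models are proprietary, but a few of the simpler signals listed above can be expressed as plain heuristics. The sketch below is illustrative only; the trusted-domain list, the 30-day domain-age threshold, the similarity cutoff, and the sender-profile fields are all assumptions, and real AI email security combines far richer behavioral models.

```python
# Illustrative email-risk heuristics; thresholds and profile fields are assumed.
# Real AI email security products combine far richer behavioral models.
import difflib

TRUSTED_DOMAINS = {"example-corp.com", "microsoft.com"}  # hypothetical allow list
RISKY_EXTENSIONS = (".exe", ".js", ".iso", ".scr")


def looks_like_trusted_domain(domain: str) -> bool:
    """Flag domains that nearly match a trusted domain (e.g. micr0soft.com)."""
    for trusted in TRUSTED_DOMAINS:
        similarity = difflib.SequenceMatcher(None, domain, trusted).ratio()
        if domain != trusted and similarity > 0.85:
            return True
    return False


def score_email(sender_domain: str, domain_age_days: int, sent_hour: int,
                attachments: list, sender_usual_hours: range) -> list:
    """Return reasons to quarantine; an empty list means deliver normally."""
    reasons = []
    if looks_like_trusted_domain(sender_domain):
        reasons.append("look-alike of a trusted domain")
    if domain_age_days < 30:
        reasons.append("sender domain registered less than 30 days ago")
    if sent_hour not in sender_usual_hours:
        reasons.append("sent outside the sender's normal hours")
    if any(name.lower().endswith(RISKY_EXTENSIONS) for name in attachments):
        reasons.append("attachment type this sender never uses")
    return reasons


# Example: a look-alike domain registered 3 days ago, mailing at 2 AM with an .exe.
flags = score_email("micr0soft.com", domain_age_days=3, sent_hour=2,
                    attachments=["invoice.exe"], sender_usual_hours=range(8, 18))
if flags:
    print("QUARANTINE:", "; ".join(flags))
```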

Real-world example: A law firm deployed AI email security in January 2025. In April, the system quarantined an email purportedly from a senior partner requesting urgent client account information. The email perfectly mimicked the partner's style and referenced a real case.

The associate received a quarantine notice: "Unusual request detected. Verify with sender before accessing this email."

The associate called the partner. The partner had never sent the email. The partner's email account had been compromised via credential stuffing using a password exposed in an old data breach.

Cost of the attack: $0 (prevented)
Cost if attack succeeded: Estimated $800K-$1.2M (data breach of confidential client information, regulatory violations, malpractice exposure)

Strategy 3: Implement Transaction Verification Protocols

For high-risk transactions (wire transfers, credential resets, sensitive data access), require multi-person or multi-channel verification.

Financial transaction protocols:

Wire transfer verification:

  • All wire transfer requests above $X require callback verification
  • Callback must use phone number from company directory (not number in email)
  • Callback must occur even if request appears to come from CEO
  • No exceptions for "urgent" situations

Vendor payment changes:

  • Changes to vendor payment information (bank account, payment address) require verification
  • Contact vendor using phone number from contract or previous verified communications (not number in change request email)
  • Flag all vendor payment changes for manual review
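
Callback rules like these can be enforced in software as well as written policy. The sketch below is a simplified, hypothetical gate for outgoing wire transfers; the dollar threshold, directory structure, and function names are invented for illustration.

```python
# Simplified wire-transfer verification gate; threshold and directory are hypothetical.
from dataclasses import dataclass
from typing import Optional, Tuple

CALLBACK_THRESHOLD = 10_000  # assumed dollar amount above which callback is required

# Hypothetical internal directory: the ONLY permitted source of callback numbers.
COMPANY_DIRECTORY = {"ceo@example-corp.com": "+1-415-555-0100"}


@dataclass
class WireRequest:
    requester_email: str
    amount: float
    callback_number_used: Optional[str]  # number actually dialed to verify
    callback_confirmed: bool             # did the requester verbally confirm?


def approve_wire(req: WireRequest) -> Tuple[bool, str]:
    if req.amount < CALLBACK_THRESHOLD:
        return True, "approved: below callback threshold"

    directory_number = COMPANY_DIRECTORY.get(req.requester_email)
    if directory_number is None:
        return False, "rejected: requester not found in the company directory"

    # The callback must use the directory number, never a number supplied in the email.
    if req.callback_number_used != directory_number:
        return False, "rejected: callback did not use the directory phone number"

    if not req.callback_confirmed:
        return False, "rejected: requester did not confirm the transfer on callback"

    return True, "approved: callback verification completed"
```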

Real-world prevention: A construction company implemented strict wire transfer verification in 2024. In June 2025, their accounts payable clerk received an email from their primary materials supplier requesting a change to their bank account for future payments.

The email looked completely legitimate:

  • Came from supplier's actual email domain (compromised account)
  • Referenced real projects and invoice numbers
  • Used supplier's standard email format and logo
  • Cited "banking relationship changes" as reason

Old process: Clerk would have updated the account information. Next $240,000 payment would have gone to attacker's account.

New verification process: Clerk called supplier using phone number from previous invoices. Supplier confirmed they had NOT requested any banking changes. Their email had been compromised.

Savings: $240,000 fraud prevented. Supplier alerted to compromise, preventing fraud against other customers.

Strategy 4: Implement Immutable Backups and Rapid Recovery

Since no defense is perfect, assume you will eventually suffer a ransomware attack. Make recovery as fast and painless as possible.

Immutable backup requirements:

Air-gapped or immutable: Backups must be either physically disconnected from the network or cryptographically immutable (cannot be modified or deleted); a minimal object-lock sketch follows the list below

  • Cloud backups with object lock (AWS S3 Object Lock, Azure Immutable Blobs)
  • Offline backup copies stored securely off-site
  • Backup appliances with immutability features (purpose-built backup systems)
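
As a concrete illustration of the object-lock approach, the sketch below uses boto3 to write a backup copy to an S3 bucket with Object Lock in compliance mode, which prevents the object from being modified or deleted until the retention date passes. The bucket name, file path, and retention window are placeholders, and the sketch assumes the bucket was created with Object Lock enabled.

```python
# Sketch: writing an immutable backup copy to S3 with Object Lock (COMPLIANCE mode).
# Assumes the bucket was created with Object Lock enabled; names are placeholders.
from datetime import datetime, timedelta, timezone

import boto3

s3 = boto3.client("s3")

BUCKET = "example-immutable-backups"   # placeholder bucket name
RETENTION_DAYS = 35                    # assumed retention window

retain_until = datetime.now(timezone.utc) + timedelta(days=RETENTION_DAYS)

with open("backup-2025-05-01.tar.gz", "rb") as backup_file:  # placeholder file
    s3.put_object(
        Bucket=BUCKET,
        Key="daily/backup-2025-05-01.tar.gz",
        Body=backup_file,
        # COMPLIANCE mode: the lock cannot be shortened or removed, even by
        # administrators, until the retention date passes.
        ObjectLockMode="COMPLIANCE",
        ObjectLockRetainUntilDate=retain_until,
        ChecksumAlgorithm="SHA256",  # Object Lock requires a request checksum
    )
```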

Frequent backup schedule: Daily backups (minimum), hourly for critical systems

  • RPO (recovery point objective) of 1-4 hours means losing at most 4 hours of data
  • RTO (recovery time objective) of 24-48 hours for complete restoration

Regularly tested recovery: Monthly restoration tests of critical systems

  • Verify backups are complete and functional
  • Time how long recovery actually takes
  • Identify gaps and problems before you need backups in a crisis

Geographic distribution: Backups in multiple physical locations

  • On-site backup for fast recovery (restored locally)
  • Off-site backup for disaster recovery (fire, flood, or attacker destroys on-site systems)
  • Cloud backup for geographic redundancy

Real-world example: A medical practice implemented immutable cloud backups in late 2024. In May 2025, they suffered a ransomware attack that encrypted all systems, including their on-site backup server.

Recovery process:

  • Day 1: Detected encryption, isolated compromised systems, contacted incident response firm
  • Day 2: Confirmed backups were unaffected (immutable cloud storage), began forensic analysis
  • Day 3-5: Rebuilt servers from clean images, restored data from cloud backups
  • Day 6: Returned to normal operations with all data intact

Total cost: $87,000 (incident response, system rebuild, 6 days of reduced operations)
No ransom paid: Attackers demanded $850,000
Contrast with similar practice without immutable backups: 39-day recovery, $1.9M total cost including ransom payment

The Cyber Insurance Implications

The AI-powered ransomware evolution is fundamentally changing cyber insurance:

1. Underwriting Is Becoming More Rigorous

Insurers are requiring specific security controls before offering coverage:

Mandatory controls for cyber insurance:

  • Multi-factor authentication on all remote access and email (100% deployment required)
  • Endpoint detection and response (EDR) on all devices
  • Email security beyond basic spam filtering (AI-powered preferred)
  • Privileged access management controls
  • Immutable or air-gapped backups
  • Regular security awareness training (with testing)

Failure to implement these controls results in:

  • Coverage denial (50% of cyber insurance applications are declined due to inadequate controls)
  • Severe sublimits (ransomware coverage capped at $100K instead of $1M+)
  • Extremely high premiums (2-3x standard rates)

2. Premiums Reflect Actual Risk More Accurately

AI underwriting systems evaluate your specific security posture:

  • Strong controls (MFA, EDR, email security, immutable backups): Premiums 30-45% below average
  • Average controls: Market rate premiums
  • Weak controls: Premiums 50-100% above average or coverage declined

This is a dramatic shift from 5 years ago when cyber insurance pricing was primarily based on revenue and industry, with minimal variation for security controls.

3. Sublimits and Exclusions Are Expanding

Insurers are limiting exposure to specific ransomware-related costs:

Common sublimits:

  • Social engineering fraud: $50K-$250K (much lower than policy limit)
  • Ransomware/extortion payments: Often capped at 50% of policy limit
  • Regulatory fines and penalties: Often excluded entirely or severely sublimited
  • Business interruption: Time-limited (60-90 days maximum)

Why sublimits matter: Your $2M cyber policy may only provide $100K for social engineering fraud—even though that was the entry point for a $2M total loss. Understanding sublimits is critical.
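
The arithmetic is simple but easy to overlook. The snippet below is a hypothetical illustration of how a sublimit caps recovery, using figures that mirror the example above; how a real loss is allocated across coverage parts depends on the policy wording.

```python
# Hypothetical illustration: a sublimit caps recovery regardless of the policy limit.
POLICY_LIMIT = 2_000_000
SUBLIMITS = {
    "social_engineering_fraud": 100_000,
    "extortion_payment": 1_000_000,  # e.g. capped at 50% of the policy limit
}


def covered_amount(loss_category: str, loss_amount: int) -> int:
    """Payable amount for one coverage part, before any overall policy limit."""
    cap = SUBLIMITS.get(loss_category, POLICY_LIMIT)
    return min(loss_amount, cap)


# A loss allocated to the social engineering fraud coverage part recovers at most
# $100K, even if the total incident cost approaches the $2M policy limit.
print(covered_amount("social_engineering_fraud", 2_000_000))  # -> 100000
```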

4. Claims Involving AI-Powered Attacks May Face Scrutiny

Insurers are beginning to ask: "Did the insured implement reasonable defenses against known AI-powered attack methods?"

Potential coverage disputes:

  • "You knew AI phishing was a significant threat but didn't implement AI-powered email security"
  • "Your security awareness training didn't address AI-generated attacks"
  • "You didn't implement transaction verification protocols despite knowing about AI voice cloning"

While these disputes are emerging and not yet common, expect insurers to increasingly evaluate whether policyholders implemented "reasonable" security measures given known threats.

Preparing Your Business for AI-Powered Ransomware

The ransomware landscape has fundamentally changed. AI has made attacks more sophisticated, more targeted, and dramatically more effective. Traditional defenses remain necessary but are no longer sufficient.

Five actions to take in the next 30 days:

  1. Implement AI-powered email security: Evaluate solutions from Abnormal Security, Darktrace, Proofpoint, or similar providers
  2. Establish transaction verification protocols: Require callback verification for all wire transfers and sensitive requests
  3. Audit your backup strategy: Ensure backups are immutable or air-gapped, frequently tested, and geographically distributed
  4. Review cyber insurance coverage: Understand your sublimits, exclusions, and control requirements
  5. Conduct AI-specific security awareness training: Educate employees about AI voice cloning, deepfakes, and highly personalized phishing

The businesses that will survive the AI-powered ransomware era are those that recognize the threat, implement appropriate controls, and maintain layered defenses assuming that some attacks will succeed.

Most importantly: Don't assume your current security posture is adequate because you haven't been attacked. Attackers are selecting targets based on vulnerability. The question isn't whether you'll be targeted—it's whether you'll be successfully compromised. Your defenses determine the answer.


Concerned about your organization's ransomware resilience? Understanding your vulnerability to AI-powered attacks and implementing appropriate defenses requires both technical expertise and cyber insurance knowledge. Modern cybersecurity isn't just about technology—it's about risk management, employee training, incident response planning, and insurance coverage working together to protect your business.

Sources: Sophos "State of Ransomware 2025," Verizon Data Breach Investigations Report, FBI Internet Crime Complaint Center, Coalition Cyber Threat Index