LLMs & Ransomware | An Operational Accelerator, Not a Revolution
LLMs make competent ransomware crews faster and novices more dangerous. The risk is not superintelligent malware, but rather industrialized extortion.
Gabriel Bernadett-Shapiro, Jim Walter & Alex Delamotte
Executive Summary
- SentinelLABS assesses that LLMs are accelerating the ransomware lifecycle, not fundamentally transforming it.
- We observe measurable gains in speed, volume, and multilingual reach across reconnaissance, phishing, tooling assistance, data triage, and negotiation, but no step-change in novel tactics or techniques driven purely by AI at scale.
- Self-hosted, open-source models (served locally via tooling such as Ollama) will likely become the go-to option for top-tier actors looking to avoid provider guardrails.
- Defenders should prepare for adversaries making incremental but rapid efficiency gains.
Overview
SentinelLABS has been researching how large language models (LLMs) impact cybersecurity for both defenders and adversaries. As part of our ongoing efforts in this area and our well-established research and tracking of crimeware actors, we have been closely following the adoption of LLM technology among ransomware operators. We observe three structural shifts unfolding in parallel.
First, the barriers to entry continue to fall for those intent on cybercrime. LLMs allow low- to mid-skill actors to assemble functional tooling and ransomware-as-a-service (RaaS) infrastructure by decomposing malicious tasks into seemingly benign prompts that are able to slip past provider guardrails.
Second, the ransomware ecosystem is splintering. The era of mega-brand cartels (LockBit, Conti, REvil) has faded under sustained law enforcement pressure and sanctions. In their place, we see a proliferation of small, short-lived crews—Termite, Punisher, The Gentlemen, Obscura—operating under the radar, alongside a surge in mimicry and false claims, such as fake Babuk2 and confused ShinyHunters branding.
Third, the line between APT and crimeware is blurring. State-aligned actors are moonlighting as ransomware affiliates or using extortion for operational cover, while culturally motivated groups like “The Com” are buying into affiliate ecosystems, adding noise and complicating attribution as we saw with groups such as DragonForce, Qilin, and previously BlackCat/ALPHV.
While these three structural shifts were to a certain extent in play prior to the widespread availability of LLMs, we observe that all three are accelerating simultaneously. To understand the mechanics, we examined how LLMs are being integrated into day-to-day ransomware operations.
We note that the threat intelligence community’s understanding of exactly how threat actors integrate LLMs into attacks is severely limited. The primary sources that furnish information on these attacks are the intelligence teams of LLM providers via periodic reports and, more rarely, victims of intrusions who find artifacts of LLM use.
As a result, it is easy to overinterpret a small number of cases as indicative of a revolutionary change in adversary tradecraft. We assess that such conclusions exceed the available evidence. We find instead that adversarial use of LLMs, while an important trend we detail throughout this report, reflects operational acceleration rather than a fundamental transformation in attacker capabilities.
How AI Is Changing Ransomware Operations Today
Direct Substitutions from Enterprise Workflows
The most immediate impact comes from ransomware operators adopting the same LLM workflows that legitimate enterprises use every day, only repurposed for crime. In the same way that marketers use LLMs to write copy, threat actors use them to draft phishing emails and localized content, such as ransom notes in the victim company’s own language. Enterprises use LLMs to refine large amounts of data for sales operations; threat actors apply the same workflow to identify lucrative targets in dumps of leaked data, or to decide how to extort a specific victim based on the value of the data they steal.
This data triage capability is particularly amplified across language barriers. A Russian-speaking operator might not recognize that a file named “Fatura” (Turkish for “Invoice”) or “Rechnung” (German) contains financially sensitive information. LLMs eliminate this blind spot.
With LLMs, attackers can instruct a model to “Find all documents related to financial debt or trade secrets” in Arabic, Hindi, Spanish, or Japanese. Research shows LLMs significantly outperform traditional tools in identifying sensitive data in non-English languages.
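The blind spot can be illustrated with a minimal sketch. The toy keyword lists and filenames below are illustrative only; the point is that a monolingual filter misses obvious non-English financial documents, while a multilingual pass catches them. An LLM generalizes this further, requiring no curated keyword list at all.

```python
# Toy illustration of the multilingual triage blind spot.
# Keyword lists and filenames are illustrative, not from any real incident.

SENSITIVE_KEYWORDS = {
    "en": ["invoice", "contract", "payroll"],
    "tr": ["fatura"],               # Turkish: invoice
    "de": ["rechnung", "vertrag"],  # German: invoice, contract
    "es": ["factura", "nomina"],    # Spanish: invoice, payroll
}

def flag_sensitive(filenames, languages=("en",)):
    """Return filenames matching any sensitive keyword in the given languages."""
    keywords = [kw for lang in languages for kw in SENSITIVE_KEYWORDS.get(lang, [])]
    return [f for f in filenames if any(kw in f.lower() for kw in keywords)]

files = ["Fatura_2024.pdf", "Rechnung_Q3.xlsx", "holiday_photos.zip", "invoice_001.pdf"]

# An English-only pass misses the Turkish and German documents ...
english_only = flag_sensitive(files, languages=("en",))
# ... while a multilingual pass flags them as financially sensitive.
multilingual = flag_sensitive(files, languages=("en", "tr", "de", "es"))
```

The gap between the two result sets is exactly the visibility an operator gains when a model, rather than a static keyword list, performs the triage.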
The pattern holds across other enterprise workflows as well. In each case, the effect is the same: competent crews become faster and can operate across more tech stacks, languages, and geographies, while new entrants reach functional capability sooner. Importantly, what we are not seeing is any fundamentally new category of attack or novel capability.
Local Models to Evade Guardrails
Actors are increasingly breaking down malicious tasks into “non-malicious,” seemingly benign fragments. Often, actors spread requests across multiple sessions or prompt multiple models, then stitch code together offline. This approach dilutes potential suspicion from LLM providers by decentralizing malicious activity.
There is a clear and increasing trend of actor interest in using open models for nefarious purposes. Local, fine-tuned, open-source models served through runtimes like Ollama offer more control, minimize provider telemetry, and have fewer guardrails than commoditized LLMs. Early proof-of-concept (PoC) LLM-enabled ransomware tools like PromptLock may be clunky, but the direction is clear: once optimized, local and self-hosted models will be the default for higher-end crews.
Cisco Talos and others have flagged criminals gravitating toward uncensored models, which offer fewer safeguards than frontier-lab models and typically omit security controls like prompt classification, account telemetry, and other abuse-monitoring mechanisms, in addition to being trained on more harmful content.
As adoption of these open-source models accelerates and as they are fine-tuned specifically for offensive use cases, defenders will find it increasingly challenging to identify and disrupt abuse originating from models that are customized for or directly operated by adversaries.
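The visibility gap is worth making concrete. A minimal sketch, assuming a locally running Ollama instance exposing its documented `/api/generate` endpoint (the model name is illustrative): every byte of the exchange stays on the operator’s machine, so none of the provider-side controls listed above ever fire.

```python
import json
import urllib.request

# Assumed: a local Ollama daemon on its default port; "llama3" is illustrative.
OLLAMA_URL = "http://localhost:11434/api/generate"

def build_local_request(prompt, model="llama3"):
    """Build a request to a locally hosted model. Nothing here leaves the
    machine: no provider account, no prompt classification, no abuse-monitoring
    telemetry -- from a network perspective, only localhost traffic."""
    payload = json.dumps({"model": model, "prompt": prompt, "stream": False}).encode()
    return urllib.request.Request(
        OLLAMA_URL, data=payload, headers={"Content-Type": "application/json"}
    )

req = build_local_request("Summarize the attached documents.")
# Dispatching it (urllib.request.urlopen(req)) requires a running Ollama daemon.
```

For defenders, the practical consequence is that detection shifts from provider-side signals to host artifacts: the model runtime process, multi-gigabyte weight files on disk, and loopback traffic to ports like 11434.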
Documented Use of AI in Offensive Operations
Automated Attacks via Claude Code
Several recent campaigns illustrate how LLMs are being used in active operations today and how they may be incorporated to accelerate attacker tradecraft.
[...]