
Security Affairs

AI-powered campaign compromises 600 FortiGate systems worldwide

A Russian-speaking, financially motivated cybercriminal used commercial generative AI tools to compromise more than 600 FortiGate devices across 55 countries, Amazon Threat Intelligence reports. The activity, observed between January 11 and February 18, 2026, highlights how cybercriminals are increasingly leveraging AI tools to scale and automate attacks against exposed network infrastructure worldwide.

The attacker did not exploit any FortiGate vulnerabilities. Instead, the threat actor abused exposed management ports and weak single-factor credentials.

“Amazon Threat Intelligence observed a Russian-speaking financially motivated threat actor leveraging multiple commercial generative AI services to compromise over 600 FortiGate devices across more than 55 countries from January 11 to February 18, 2026.” reads the report published by Amazon. “No exploitation of FortiGate vulnerabilities was observed; instead, this campaign succeeded by exploiting exposed management ports and weak credentials with single-factor authentication, fundamental security gaps that AI helped an unsophisticated actor exploit at scale.”

Researchers found the actor used multiple commercial GenAI tools to automate and scale familiar attack techniques, despite limited skills.
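Because the campaign relied on reachable management interfaces rather than any FortiGate flaw, defenders can verify their own exposure with a simple connectivity check. The sketch below is a minimal illustration, not part of the reported tooling; the port list covers typical FortiGate management defaults (HTTPS GUI on 443/8443/10443, SSH on 22), which vary per deployment.

```python
import socket

# Typical FortiGate management ports (defaults vary per deployment):
# 22 for SSH CLI access, 443/8443/10443 for the HTTPS admin GUI.
DEFAULT_MGMT_PORTS = [22, 443, 8443, 10443]

def exposed_ports(host, ports=DEFAULT_MGMT_PORTS, timeout=2.0):
    """Return the subset of `ports` that accept a TCP connection on `host`."""
    reachable = []
    for port in ports:
        try:
            # create_connection raises OSError on refusal, filtering, or timeout.
            with socket.create_connection((host, port), timeout=timeout):
                reachable.append(port)
        except OSError:
            pass  # closed, filtered, or unreachable
    return reachable
```

Running `exposed_ports()` against your firewall's public address from outside the network (only against hosts you administer) shows exactly what the attacker's scanners would have seen; an empty list means the management plane is not internet-reachable on those ports.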
During routine monitoring, Amazon experts uncovered infrastructure hosting the attacker’s tools, along with AI-generated attack plans, victim configurations, and custom code, offering rare insight into an AI-driven workflow. The actor scanned the Internet for exposed FortiGate management ports, abused weak credentials, and stole full configurations containing VPN, admin, and network data.

“Following VPN access to victim networks, the threat actor deploys a custom reconnaissance tool, with different versions written in both Go and Python. Analysis of the source code reveals clear indicators of AI-assisted development: redundant comments that merely restate function names, simplistic architecture with disproportionate investment in formatting over functionality, naive JSON parsing via string matching rather than proper deserialization, and compatibility shims for language built-ins with empty documentation stubs.” continues the report. “While functional for the threat actor’s specific use case, the tooling lacks robustness and fails under edge cases, characteristics typical of AI-generated code used without significant refinement.”

AI-assisted scripts parsed and decrypted the loot, enabling VPN access, Active Directory compromise, credential dumping, lateral movement, and attempts to target Veeam backups. This last step is a classic tactic observed in ransomware attacks.

Custom reconnaissance tools, seemingly AI-generated, automated network mapping and vulnerability scanning but lacked depth, often failing against patched or hardened systems. The actor relied on multiple commercial LLMs for planning and code generation, creating a large toolkit that mimicked a full team’s output. Still, when exploits failed or defenses were strong, they moved on, showing that AI amplified scale and efficiency, not true technical sophistication. Once inside, the attacker used common open-source tools to escalate access.
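One of the AI-code indicators the report calls out, "naive JSON parsing via string matching rather than proper deserialization," is easy to illustrate. The fragment below is a hypothetical reconstruction of the pattern, not the actor's actual code; the config keys are invented for the example.

```python
import json

# Illustrative config fragment (keys and values are invented for this example).
raw = '{"hostname": "fw-01", "admin_user": "admin", "vpn_enabled": true}'

def naive_get(raw, key):
    """Pull a string value by searching for the literal text '"key": "'.
    This is the brittle pattern the report attributes to AI-generated tooling."""
    marker = f'"{key}": "'
    start = raw.find(marker)
    if start == -1:
        # Breaks on different whitespace, non-string values, nesting, escaping.
        return None
    start += len(marker)
    return raw[start:raw.find('"', start)]

# Proper deserialization handles any valid JSON layout.
parsed = json.loads(raw)
```

The naive version works only on the exact serialization it was written against: remove the space after the colon and `naive_get` returns `None`, while `json.loads` parses either form identically. That fragility matches the report's observation that the tooling "fails under edge cases."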
They compromised Active Directory, extracting NTLM hashes and, in some cases, entire credential databases, sometimes helped by weak or reused admin passwords. After gaining domain control, they moved laterally using pass-the-hash and NTLM relay attacks, and targeted Veeam backup servers to steal credentials and weaken recovery options. However, when systems were patched or hardened, their more advanced exploitation attempts largely failed.

“The threat actor’s operational notes reference multiple CVEs across various targets (CVE-2019-7192, CVE-2023-27532, and CVE-2024-40711, among others). However, a critical finding from this analysis is that the threat actor largely failed when attempting to exploit anything beyond the most straightforward, automated attack paths.”

Amazon Threat Intelligence experts believe the attacker is a financially motivated, Russian-speaking individual or small group with low-to-medium skills heavily boosted by AI. Their post-exploitation skills are shallow, often failing against hardened targets before moving on. Poor operational security exposed detailed plans, credentials, and victim data.

Amazon Threat Intelligence investigated and disrupted the campaign, sharing actionable indicators of compromise with partners and working across the industry to expand visibility and coordinate defenses. “Through these efforts, Amazon helped reduce the threat actor’s operational effectiveness and enabled organizations across multiple countries to take steps to disrupt the efficacy of the campaign.” concludes the report, which includes indicators of compromise (IoCs) along with recommendations.

The case shows how AI lowers the barrier to cybercrime. Experts warn AI-driven attacks will grow in 2026, urging strong patching, credential hygiene, segmentation, and detection.

Follow me on Twitter: @securityaffairs and Facebook and Mastodon

Pierluigi Paganini

(SecurityAffairs hacking, FortiGate)

Published: 2026-02-23T10:39:40










