
Cyber Attack News – Risk Roundup – Top Stories for February 2026

February 24, 2026

In February 2026, the dominant theme was convergence. Geopolitical cyber operations, industrial ransomware, and AI-enabled threats are no longer separate tracks. They are overlapping risk vectors requiring identity-centric containment.

Critical infrastructure remains a primary strategic target, and artificial intelligence is now embedded in both offensive tradecraft and enterprise risk mitigation. Across energy, telecommunications, transportation, and industrial sectors, adversaries demonstrated persistence, preparation, and automation.

Critical Infrastructure and OT Security: Persistent Exposure

Poland’s Energy Infrastructure Compromise Highlights OT Identity Gaps

Poland disclosed details of a December cyber incident affecting wind farms, solar assets, and a combined heat and power plant. Investigators found default credentials, absent multi-factor authentication, and exposed management interfaces within operational environments. Following the disclosure, CISA issued a warning to U.S. critical infrastructure operators, noting that similar weaknesses are common across energy environments.

CISA urged operators to eliminate default and shared credentials, enforce multi-factor authentication on remote access and edge devices, upgrade or replace unsupported legacy equipment, and strengthen segmentation between IT and OT networks. The guidance emphasized hardening internet-exposed management interfaces and validating that remote access pathways are tightly controlled and continuously monitored.
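The checklist above lends itself to automated auditing. Below is a minimal sketch of such an audit in Python, flagging devices that violate the guidance. The device inventory format and field names are illustrative assumptions, not a real CISA schema or any vendor's API.

```python
# Hypothetical audit sketch: flag OT device records that violate the
# hardening guidance above. Record fields are illustrative assumptions.

DEFAULT_CREDS = {("admin", "admin"), ("root", "root"), ("admin", "password")}

def audit_device(device: dict) -> list[str]:
    """Return a list of findings for one device inventory record."""
    findings = []
    if (device.get("username"), device.get("password")) in DEFAULT_CREDS:
        findings.append("default credentials")
    if not device.get("mfa_enabled", False):
        findings.append("no MFA on remote access")
    if device.get("mgmt_interface_public", False):
        findings.append("internet-exposed management interface")
    if device.get("firmware_supported", True) is False:
        findings.append("unsupported legacy firmware")
    return findings

inventory = [
    {"name": "wind-hmi-01", "username": "admin", "password": "admin",
     "mfa_enabled": False, "mgmt_interface_public": True},
    {"name": "chp-plc-02", "username": "ops", "password": "s3cret!",
     "mfa_enabled": True, "mgmt_interface_public": False},
]

for dev in inventory:
    for finding in audit_device(dev):
        print(f"{dev['name']}: {finding}")
```

Even this simple pass would have surfaced the weaknesses reported in the Polish incident: default credentials, missing MFA, and exposed management interfaces are all mechanically detectable.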

While the attack did not produce sustained outages, it demonstrated adversary access into mission-critical energy systems. The root causes were not advanced zero-days. They were identity and access control failures. For energy providers globally, this is a structural warning. As renewable and distributed assets expand, unmanaged identity across remote sites creates disproportionate risk.

Airport Vendor Credential Exposure Signals Transportation Supply Chain Risk

Security researchers at CloudSEK intercepted leaked credentials from a maintenance provider with access to IT systems supporting more than 200 airports. The account lacked multi-factor authentication.

A single vendor account created potential exposure across hundreds of geographically distributed facilities. Transportation infrastructure is increasingly dependent on third-party connectivity. Without strict identity governance and continuous validation, supplier access becomes a systemic risk multiplier.

Enforcing identity-based segmentation ensures vendor access remains scoped, monitored, and continuously verified. This approach limits downstream impact even when external partners experience compromise.
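As a concrete illustration of "scoped, monitored, and continuously verified," the sketch below evaluates every vendor request against an explicit per-identity policy rather than trusting a session after initial login. The policy structure, identity names, and system names are all hypothetical.

```python
# Illustrative least-privilege gate for third-party (vendor) access.
# Each vendor identity is scoped to specific sites and systems; every
# request is re-evaluated, and unknown identities are denied by default.
# Policy contents below are hypothetical.

VENDOR_POLICY = {
    "maint-vendor-7": {
        "sites": {"airport-LHR", "airport-JFK"},
        "systems": {"baggage-telemetry"},
        "mfa_required": True,
    },
}

def authorize(identity: str, site: str, system: str,
              mfa_passed: bool, policy=VENDOR_POLICY) -> bool:
    rules = policy.get(identity)
    if rules is None:
        return False            # unknown identity: deny by default
    if rules["mfa_required"] and not mfa_passed:
        return False            # MFA enforced on every session
    return site in rules["sites"] and system in rules["systems"]
```

Under a model like this, the leaked maintenance account would have been limited to a handful of named sites and systems instead of opening a path to hundreds of facilities.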

Coordinated Breach of Singapore’s Telecom Sector

Singapore confirmed that Chinese-linked threat actors compromised all four major telecommunications providers in a coordinated espionage campaign.

Telecom networks are high-value strategic assets. Persistent access enables long-term intelligence collection and potential downstream targeting of dependent sectors. Continuous identity enforcement across distributed network assets reduces the operational value of persistent access. Even when adversaries gain footholds in edge or firewall systems, strict workload and device authentication limits lateral movement.

Autonomous AI and the Identity Governance Imperative

February reinforced a structural shift in cyber risk. Artificial intelligence is no longer just a productivity tool or an emerging threat vector. It is becoming operational infrastructure.

Google reported that state-sponsored actors from China, Russia, Iran, and North Korea are leveraging large language models (LLMs) to accelerate reconnaissance, generate phishing campaigns, and streamline infrastructure development. AI is compressing attacker timelines and lowering operational cost. At the same time, researchers at Check Point demonstrated that mainstream AI platforms such as Grok and Microsoft Copilot can be abused as stealth command-and-control relays, allowing adversaries to blend malicious traffic into legitimate enterprise AI workflows.

Meanwhile, Cisco researchers warned that the communication layers connecting AI agents, including Model Context Protocol (MCP) implementations and agent-to-agent exchanges, are expanding the attack surface. As autonomous agents interact across APIs, data sources, and other agents, they introduce new layers of delegated authority. Without strong identity verification and policy enforcement, these interconnected systems increase the risk of misuse.
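What "strong identity verification" between agents can look like, reduced to its simplest form: before honoring a delegated request, the receiving side checks a signature binding the message to a known agent identity. This is a toy sketch using a shared-secret HMAC; the agent names and key registry are invented, and a production deployment would use asymmetric keys or mutual TLS certificates rather than shared secrets.

```python
import hashlib
import hmac
import json

# Toy agent identity check: messages are only acted on if signed with the
# key registered for the claimed agent. Registry contents are illustrative.
AGENT_KEYS = {"planner-agent": b"shared-secret-demo"}

def sign(agent_id: str, payload: dict, key: bytes) -> str:
    """Produce a signature binding agent identity to the request payload."""
    msg = json.dumps({"agent": agent_id, "payload": payload}, sort_keys=True)
    return hmac.new(key, msg.encode(), hashlib.sha256).hexdigest()

def verify(agent_id: str, payload: dict, signature: str) -> bool:
    key = AGENT_KEYS.get(agent_id)
    if key is None:
        return False  # unknown agent: refuse delegated authority
    expected = sign(agent_id, payload, key)
    return hmac.compare_digest(expected, signature)
```

The point is architectural, not cryptographic: each hop in an agent-to-agent chain should re-verify who is asking, so authority cannot silently accumulate as requests are relayed.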

Xage Unified Zero Trust for AI addresses this by binding every user, model instance, agent, and API interaction to a cryptographically verifiable identity. Continuous policy enforcement and least-privilege segmentation ensure AI systems remain governed execution layers rather than uncontrolled pathways for abuse.

Agentic AI Security Gaps Signal Urgent Need for Identity Governance

Two developments in February underscored structural risks in agentic AI deployments. The rapid rise of OpenClaw, a viral autonomous AI assistant, revealed weak authentication boundaries, plaintext API key storage, and prompt injection flaws that allowed malicious instructions to execute without user interaction. At the same time, more than 30 malicious Chrome extensions impersonating AI assistants were installed by over 300,000 users, harvesting credentials and modifying behavior through remotely hosted components.
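The plaintext API key finding is the easiest of these flaws to avoid. A minimal mitigation sketch: never hard-code keys or write them to disk in cleartext; load them from the environment (or a secrets manager) at startup and fail closed if they are absent. The variable name below is illustrative, not something OpenClaw actually uses.

```python
import os

def load_api_key(env_var: str = "ASSISTANT_API_KEY") -> str:
    """Read the agent's API key from the environment; refuse to run without it.

    Reading from the environment keeps the secret out of source code and
    config files on disk; the env var name here is a hypothetical example.
    """
    key = os.environ.get(env_var)
    if not key:
        raise RuntimeError(f"{env_var} not set; refusing to start agent")
    return key
```

Failing closed matters: an agent that silently starts without credentials, or falls back to a bundled default key, is exactly the kind of weak authentication boundary the OpenClaw findings describe.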

Both cases highlight the same underlying issue. AI agents increasingly operate with delegated authority across files, email, cloud storage, and enterprise APIs. They are not passive tools. They execute actions on behalf of users and often hold broad access to sensitive systems. When treated as simple applications rather than identity-bearing actors, they can accumulate privilege in ways traditional endpoint and perimeter controls cannot effectively constrain.

These developments reinforce the need to govern AI agents as privileged identities. Xage ensures that every AI tool, model instance, and API interaction is bound to a verifiable identity and enforced by policy. By applying strict least-privilege controls and continuous workload authentication, organizations can prevent autonomous AI systems from becoming high-impact intrusion paths.

Read our full OpenClaw analysis: https://xage.com/blog/from-viral-to-vulnerable-what-the-openclaw-saga-tells-us-about-agentic-ai-security/

NIST Launches AI Agent Standards Initiative

NIST launched an AI Agent Standards Initiative as autonomous agents move into production environments. The effort reflects growing regulatory focus on agent accountability, authorization boundaries, and lifecycle governance.

Autonomous agents increasingly operate across critical infrastructure, enterprise systems, and government workflows. Without strict identity binding and continuous authorization, they concentrate privilege in ways that amplify systemic risk.