Susanto Irwan, Co-Founder, President, and CTO, Xage Security
Why access control and approval safeguards matter more than ever in the age of autonomous AI.
In the world of software development, the rise of AI has ushered in a new era of productivity. With natural language prompts replacing keyboard strokes, tools like Replit are turning anyone into a builder. The promise is real: write less code, ship faster, and scale without scaling headcount.
But with great power comes an even greater need for control.
Last week, the tech world watched a cautionary tale unfold as an autonomous AI agent, meant to assist with app development, accidentally deleted a production database. Over 1,200 records of company executives were wiped. The agent had been given permission to help, but no oversight to stop it from going rogue.
The fallout? A public apology from the CEO. A rollback. A refund. And a hard lesson learned.

The Power (and Risk) of AI in Development
AI is an incredible co-pilot. But left unrestricted, it can turn into a reckless driver. In the Replit case, the AI agent ignored instructions not to touch production data, deleted critical records, and misled the user by stating the data was unrecoverable.
This is something we might expect from a bad actor, except in this case there were no malicious forces involved. This was a well-intentioned AI tool doing what it thought was right. The problem here wasn't malice; it was a lack of controls.
Why Access Control and Approval Safeguards Matter
Here’s the key takeaway: Autonomy without governance is a vulnerability. At Xage Security, we believe that even the most advanced AI agents should operate within a Zero Trust architecture, where access is explicit rather than implicit, and where critical actions, like committing code changes and deleting databases, require explicit approval.
What does that look like in practice?
1. Access Control by Design: AI agents should only have access to the data and systems they need, nothing more. Just as you wouldn’t give an intern root access, you shouldn’t give an AI agent unrestricted reach across databases, production servers, or source control.
The principle of least privilege must apply to humans and machines.
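To make least privilege concrete, here is a minimal sketch of an agent wrapper that can only reach resources on an explicit allowlist. The `ScopedAgent` class and the resource names are hypothetical illustrations, not any real product API; the point is that a production database simply never appears in the agent's scope.

```python
class ScopedAgent:
    """Wraps an AI agent so it can only touch an explicit allowlist.

    Hypothetical sketch: class and resource names are illustrative.
    """

    def __init__(self, name, allowed_resources):
        self.name = name
        # Default-deny: anything not named here is unreachable.
        self.allowed = frozenset(allowed_resources)

    def access(self, resource):
        if resource not in self.allowed:
            raise PermissionError(
                f"{self.name} denied: {resource} is not in its allowlist")
        return f"{self.name} accessed {resource}"


# The coding agent is scoped to dev and staging only -- never production.
agent = ScopedAgent("code-assistant", {"dev-db", "staging-db"})
print(agent.access("dev-db"))       # allowed
try:
    agent.access("prod-db")         # denied by design, not by good behavior
except PermissionError as err:
    print(err)
```

The design choice worth noting: the deny is structural. The agent cannot be talked, prompted, or confused into reaching the production database, because the credential scope never included it.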
2. Action Control with Approval Gates: High-impact actions—like deleting a production database or committing code—should never proceed without proper checks and approvals. In most cases, this means requiring explicit human approval before execution.
AI tools must not have direct, uncontrolled access to raw or sensitive datasets. Implementing Zero Trust data access is essential to prevent the propagation or misuse of PII, confidential data, and other sensitive information.
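An approval gate can be as simple as classifying actions by impact and refusing the high-impact ones until a human confirms. The sketch below is hypothetical: the action names and the `request_human_approval` stub stand in for a real workflow (a ticket, a chat prompt, a privileged-access system), and the gate denies by default.

```python
# Actions considered high-impact; names are illustrative.
HIGH_IMPACT = {"delete_database", "drop_table", "commit_code"}


def request_human_approval(action, target):
    """Stand-in for a real approval workflow (ticket, chat prompt, PAM).

    Deny by default: no action proceeds until a human explicitly says yes.
    """
    print(f"Approval requested: {action} on {target}")
    return False


def execute(action, target):
    """Run an action only if it is low-impact or a human approved it."""
    if action in HIGH_IMPACT and not request_human_approval(action, target):
        return f"BLOCKED: {action} on {target} awaits human approval"
    return f"EXECUTED: {action} on {target}"


print(execute("read_logs", "app-server"))        # low-impact, proceeds
print(execute("delete_database", "prod-db"))     # gated, blocked
```

Note the asymmetry: the agent never decides whether an action is high-impact; the gate's classification and the human's answer do.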
3. Fine-Grained Policy Enforcement: Every AI interaction should be governed by policies that define:
- Which AI agents can access what data
- Which actions require additional approval
- Which actions can be performed autonomously
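The three questions above can be answered by a single policy table evaluated on every request. This is a minimal, hypothetical sketch, with made-up agent and resource names, showing the default-deny rule that makes the table safe: anything not explicitly listed is refused.

```python
# Hypothetical policy table: (agent, resource, action) -> decision.
POLICIES = {
    ("code-assistant", "dev-db",  "read"):   "allow",
    ("code-assistant", "dev-db",  "write"):  "allow",
    ("code-assistant", "prod-db", "read"):   "require_approval",
    ("code-assistant", "prod-db", "delete"): "deny",
}


def evaluate(agent, resource, action):
    """Look up the policy for a request; anything unlisted is denied."""
    return POLICIES.get((agent, resource, action), "deny")


print(evaluate("code-assistant", "dev-db", "write"))    # allow
print(evaluate("code-assistant", "prod-db", "read"))    # require_approval
print(evaluate("unknown-agent", "prod-db", "read"))     # deny (unlisted)
```

In a real deployment the table would live in a policy engine rather than in code, but the shape is the same: explicit grants, explicit approval triggers, and deny as the fallback.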
4. Secure, Tamper-Proof Audit Logs: Every AI interaction should be captured in an immutable, secure audit log. This ensures full visibility into what actions were attempted or executed—by which AI agents, on which systems, and when.
Audit logs are critical for real-time monitoring, post-incident forensics, and demonstrating that AI operations remain safe and compliant.
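One common way to make a log tamper-evident is a hash chain: each entry includes the hash of the previous one, so editing or deleting any past entry breaks verification. The sketch below is a simplified illustration of that idea, not a production logging system.

```python
import hashlib
import json
import time


class AuditLog:
    """Append-only log where each entry is chained to the previous hash."""

    GENESIS = "0" * 64

    def __init__(self):
        self.entries = []

    def record(self, agent, action, system):
        prev = self.entries[-1]["hash"] if self.entries else self.GENESIS
        entry = {"agent": agent, "action": action, "system": system,
                 "ts": time.time(), "prev": prev}
        # Hash covers the whole entry body, including the previous hash.
        entry["hash"] = hashlib.sha256(
            json.dumps(entry, sort_keys=True).encode()).hexdigest()
        self.entries.append(entry)

    def verify(self):
        """Recompute every hash; any edited or reordered entry fails."""
        prev = self.GENESIS
        for e in self.entries:
            body = {k: v for k, v in e.items() if k != "hash"}
            expected = hashlib.sha256(
                json.dumps(body, sort_keys=True).encode()).hexdigest()
            if e["prev"] != prev or e["hash"] != expected:
                return False
            prev = e["hash"]
        return True
```

A quick usage check: record two actions, confirm the chain verifies, then tamper with the first entry and watch verification fail. In practice the log would also be shipped off-host so an attacker cannot rewrite the whole chain.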
Keep Your AI Leashed
We’re not here to slow innovation. We’re here to make sure it doesn’t crash and burn.