Microsoft has unveiled Agent 365 and Microsoft 365 Enterprise 7, new tools designed to address a growing security threat: ungoverned AI agents operating within organizations. The launch, effective May 1st, comes as AI agents rapidly proliferate, with over 80% of Fortune 500 companies already using them—often without proper oversight.
The Rising Threat of Rogue AI
The core concern is that AI agents, once experimental, are now deeply embedded in operational structures. Without monitoring, these agents can be exploited, acting against their parent organizations. Microsoft calls these compromised systems “double agents,” highlighting the risk of manipulation via prompt injection, model poisoning, or other techniques.
The problem is real: nearly a third of agents operate without IT or security approval, and almost half of organizations lack any security measures for their AI deployments. This creates a significant blind spot, especially as attackers develop increasingly sophisticated methods to hijack agents. Recent research shows companies unknowingly embedding malicious instructions into AI-powered tools, creating “sleeper agents” ready to execute harmful commands.
Microsoft’s Solution: Agent 365 and E7
To counter this, Microsoft is offering two solutions:
- Agent 365 ($15/user/month): A centralized “control plane” for observing, governing, and securing AI agents across an enterprise.
- Microsoft 365 Enterprise 7 ($99/user/month): Bundles Agent 365 with Copilot and advanced security features, aiming to provide a comprehensive AI governance solution.
The suite extends existing security infrastructure (Defender, Entra, Purview) to non-human entities. Key features include an Agent Registry to track all agents, Agent ID for identity management, and data protection via sensitivity labels and insider risk monitoring.
The approach mirrors zero-trust security principles applied to AI, ensuring agents are treated as potential threats until verified. Microsoft can block risky agents in real-time.
Why This Matters Now
The rapid adoption of AI agents is outpacing the development of effective governance tools. The market is projected to reach 1.3 billion agents by 2028, yet many organizations are unprepared for the security implications.
This isn’t just a technical issue; it’s a business risk. Uncontrolled agents could leak sensitive data, sabotage operations, or become entry points for cyberattacks. Microsoft’s move signals a shift from experimentation to operational security in the age of autonomous AI.
Copilot Expansion and Geopolitical Undercurrents
The launch is tied to Wave 3 of Microsoft 365 Copilot, which now includes Anthropic’s Claude model alongside OpenAI’s. This expansion comes amid geopolitical tensions, as the U.S. Department of Defense recently flagged Anthropic as a supply chain risk due to its refusal to comply with Pentagon terms. Microsoft’s continued support for Anthropic underscores its commitment to model diversity despite political pressure.
The Bottom Line
Microsoft is betting that enterprises will prioritize AI governance before attackers exploit the current vulnerabilities. The race between creation and control is on, and the company is positioning itself as the trusted provider for securing the future of AI-driven workflows.
Whether businesses will adopt these tools quickly enough to stay ahead of the threat remains uncertain, but the stakes are clear: ungoverned AI agents pose a real and growing risk to organizations of all sizes.






























