Data Security Posture Management for AI
13/05/2025
Reading time: 5 minutes
AI agents are no longer proof-of-concept experiments. Microsoft reports that organisations already run an average of 14 custom AI applications in production, while Azure AI Foundry alone processed 100 trillion tokens last quarter and handles 2 billion enterprise search queries every day. At this scale, security blind spots multiply rapidly: Gartner predicts that by the end of 2025 more than 70% of malicious attacks on enterprise AI will originate in poisoned supply chains, yet when flaws are caught early, remediation costs plummet from roughly USD 4.9 million to only USD 80.
In many organisations the rise of low-code Copilot Studio and pro-code Azure AI Foundry has created “agent sprawl”: dozens of non-human identities without central oversight. Microsoft’s new Entra Agent ID addresses this by assigning a managed identity to every agent the moment it is published, so security teams can see where each agent lives, review its permissions and apply conditional-access or least-privilege policies from the same console they use for human identities. Partnerships with ServiceNow and Workday will extend that control into broader HR and IT workflows.
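To make least privilege concrete, here is a minimal Python sketch that grants an agent’s managed identity a narrowly scoped built-in role using the azure-mgmt-authorization SDK. The subscription ID, resource-group scope and agent principal ID are placeholders, and this generic Azure RBAC call stands in for whatever Entra Agent ID ultimately exposes natively, which is not shown here.

```python
# Sketch: give an agent's managed identity a least-privilege (Reader) role.
# Subscription, scope and principal IDs are placeholders; this is plain
# Azure RBAC, not an Entra Agent ID-specific API.
import uuid

from azure.identity import DefaultAzureCredential
from azure.mgmt.authorization import AuthorizationManagementClient
from azure.mgmt.authorization.models import RoleAssignmentCreateParameters

subscription_id = "<subscription-id>"      # placeholder
agent_principal_id = "<agent-object-id>"   # the agent's managed identity
scope = f"/subscriptions/{subscription_id}/resourceGroups/rg-agents"

# Built-in "Reader" role definition (well-known GUID).
reader_role_id = (
    f"/subscriptions/{subscription_id}/providers/Microsoft.Authorization/"
    "roleDefinitions/acdd72a7-3385-48ef-bd42-f606fba81ae7"
)

client = AuthorizationManagementClient(DefaultAzureCredential(), subscription_id)
client.role_assignments.create(
    scope=scope,
    role_assignment_name=str(uuid.uuid4()),  # assignment IDs are GUIDs
    parameters=RoleAssignmentCreateParameters(
        role_definition_id=reader_role_id,
        principal_id=agent_principal_id,
        principal_type="ServicePrincipal",
    ),
)
```

Scoping the assignment to a single resource group rather than the whole subscription keeps the blast radius of a compromised agent small.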
Two Preview capabilities now embed guard-rails directly into AI Foundry and Copilot Studio projects:
- Continuous evaluation dashboards in Azure Monitor track groundedness, safety and cost drift after go-live, giving teams the same “single-pane-of-glass” telemetry they rely on for traditional micro-services (a minimal evaluation sketch follows after this list).
- Security and developer workflows converge inside the Foundry portal via Microsoft Defender for Cloud. Recommendations (e.g., “use Private Link for sensitive data paths”) and live threat alerts (e.g., jailbreak attempts or sensitive-data leakage) now surface where engineers work, eliminating ticket lag and compressing mean time to mitigation. General availability is planned for June 2025, according to Microsoft’s roadmap.
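As an illustration of the kind of signal those dashboards aggregate, the sketch below scores a single agent turn for groundedness with the azure-ai-evaluation Python SDK, assuming an Azure OpenAI deployment acts as the judge model; the endpoint, key and sample strings are placeholders, and the wiring into Azure Monitor itself is not shown.

```python
# Sketch: score one agent response for groundedness with azure-ai-evaluation.
# Endpoint, API key and deployment name are placeholders.
from azure.ai.evaluation import GroundednessEvaluator

model_config = {
    "azure_endpoint": "https://<your-resource>.openai.azure.com",
    "api_key": "<api-key>",
    "azure_deployment": "gpt-4o",  # judge model used to grade the response
}

groundedness = GroundednessEvaluator(model_config)

# Is the response actually supported by the retrieved context?
result = groundedness(
    query="What is our refund window?",
    context="Policy: refunds are accepted within 30 days of purchase.",
    response="You can request a refund within 30 days of purchase.",
)
print(result)  # e.g. a 1-5 groundedness score plus a short reasoning string
```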
Purview DSPM for AI brings Copilot-class data-loss prevention and auditing to any Foundry or Copilot Studio agent, even those calling third-party models. Auto-labelling for Dataverse applies and enforces sensitivity labels end-to-end, while new visibility into unauthenticated customer chats helps compliance teams police public-facing assistants.
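For teams that want to check which sensitivity labels those policies can apply, here is a hedged sketch that lists the signed-in user’s labels via Microsoft Graph. The beta endpoint and the InformationProtectionPolicy.Read permission it relies on are both assumptions and may change as the API matures.

```python
# Sketch: list the signed-in user's sensitivity labels via Microsoft Graph.
# Assumptions: the beta informationProtection endpoint below and an app
# granted InformationProtectionPolicy.Read; both may change.
import requests
from azure.identity import InteractiveBrowserCredential

credential = InteractiveBrowserCredential()
token = credential.get_token("https://graph.microsoft.com/.default").token

resp = requests.get(
    "https://graph.microsoft.com/beta/me/security/informationProtection/sensitivityLabels",
    headers={"Authorization": f"Bearer {token}"},
    timeout=30,
)
resp.raise_for_status()

for label in resp.json().get("value", []):
    print(label.get("id"), label.get("name"))
```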
Regulations and standards surrounding AI are introducing fresh expectations for transparency, thorough documentation and effective risk management, particularly for high-risk AI systems. As developers build AI applications and agents, they may need support and resources to assess risks against these guidelines and to communicate control measures and evaluation findings efficiently to compliance and risk-management teams.
For instance, a developer building an AI agent in Europe might be required by their compliance team to complete a Data Protection Impact Assessment (DPIA) and an Algorithmic Impact Assessment (AIA) to satisfy internal risk management protocols and technical documentation standards in line with evolving AI governance frameworks and best practices.
Using the step-by-step guidance offered by Purview Compliance Manager for implementing and testing controls, compliance teams can assess risks such as potential algorithmic bias, cybersecurity threats, or insufficient transparency in model behaviour.
Picture 1. EU AI Assessment report for Azure AI Foundry in Compliance Manager (Picture: Microsoft)
After the evaluation is performed within Azure AI Foundry, the developer can generate a report outlining identified risks, mitigation strategies, and any remaining risks. This report can then be uploaded to Compliance Manager to assist with audits and serve as supporting documentation for regulators or external stakeholders.
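As a sketch of how such a report could be produced, the azure-ai-evaluation SDK’s batch evaluate() call can score logged agent outputs and write the results to a file that the developer might then upload to Compliance Manager. The file names, fields and single evaluator below are illustrative assumptions, not a prescribed workflow.

```python
# Sketch: batch-evaluate logged agent outputs and write a results file.
# File names, fields and the single evaluator are illustrative assumptions.
from azure.ai.evaluation import GroundednessEvaluator, evaluate

model_config = {
    "azure_endpoint": "https://<your-resource>.openai.azure.com",  # placeholder
    "api_key": "<api-key>",                                        # placeholder
    "azure_deployment": "gpt-4o",
}

# agent_outputs.jsonl: one JSON object per line with "query", "context"
# and "response" fields captured from the agent's production traffic.
result = evaluate(
    data="agent_outputs.jsonl",
    evaluators={"groundedness": GroundednessEvaluator(model_config)},
    output_path="evaluation_report.json",  # attach this in Compliance Manager
)
print(result["metrics"])  # aggregate scores across all evaluated rows
```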
All those new signals roll up into Microsoft Security Copilot, which is evolving into the command centre for AI-driven SecOps:
By funnelling Entra, Defender and Purview insights into one generative canvas, Security Copilot gives blue teams the same acceleration that Copilot agents give business users.
Enterprises want to embrace autonomous agents without adding exponential risk. Microsoft’s strategy of an identity for every bot, guard-rails in the runtime, shared telemetry and an AI-assisted SOC shifts security left and right at once. Development teams stay agile; security teams stay informed; regulators see evidence of controls baked in from design through operation.