Blog

Integrating Copilot Agents and Security Copilot into Enterprise-Grade AI Security Architecture

24.6.2025

Blog Security

Kim-Petrus-Purview-3-1024x684-1

AI agents are no longer proof-of-concept experimental things. Microsoft reports that organisations are already running an average of 14 custom AI applications in production, while Azure AI Foundry alone processed 100 trillion tokens* last quarter and handles 2 billion enterprise search queries every day. At this scale, security blind spots rapidly multiply: Gartner predicts that by the end of 2025 more than 70 % of malicious attacks on enterprise AI will originate in poisoned supply chains*—yet when flaws are caught early, remediation costs plummet from ~USD 4.9 million to only USD 80. (*: Source

Copilot Agents: powerful companions for technical specialists 

In many organisations the rise of low-code Copilot Studio and pro-code Azure AI Foundry has created “agent sprawl”: dozens of non-human identities without central oversight. This issue can be covered with Microsoft’s new Entra Agent ID as it assigns a managed identity to every agent the moment it is published, so security teams can see where each agent lives, review its permissions and apply conditional-access or least-privilege policies from the same console they use for humans. Partnerships with ServiceNow and Workday will extend that control into broader HR and IT workflows. 

Hardening the agent runtime 

Two Preview capabilities now embed guard-rails directly into AI Foundry and Copilot Studio projects: 

  • Spotlighting detects indirect prompt-injection hidden in emails, SharePoint files or other web content before an agent acts on it. 

  • Task Adherence Evaluation + Mitigation scores every planned AI tool call in real time; if an agent drifts, Foundry can automatically block execution, pause the run or escalate for human review. 

Continuous evaluation dashboards in Azure Monitor keep an eye on groundedness, safety and cost drift after go-live, giving teams the same “single-pane-of-glass” telemetry they rely on for traditional micro-services. 

Integrated signals for DevSecOps 

Security and developer workflows converge inside the Foundry portal via Microsoft Defender for Cloud. Recommendations (e.g., “use Private Link for sensitive data paths”) and live threat alerts (e.g., jailbreak attempts or sensitive-data leakage) now surface where engineers work, eliminating ticket lag and compressing mean-time-to-mitigation. General availability is planned for June 2025, referring to the Microsoft’s roadmap. 

Data protection that follows the agent 

Purview DSPM for AI brings Copilot-class data-loss prevention and audit to any Foundry or Copilot Studio agent—even those calling third-party models. Auto-labelling for Dataverse applies and enforces sensitivity labels end-to-end, while new visibility into unauthenticated customer chats helps compliance teams police public-facing assistants. 

AI Foundry evaluation integration with Microsoft Purview Compliance Manager 

Regulations and standards surrounding AI are bringing in fresh expectations for transparency, thorough documentation, and effective risk management, particularly for high-risk AI systems. As developers create AI applications and agents, they might require support and resources to assess risks in line with these guidelines and efficiently communicate control measures and evaluation findings with compliance and risk management teams. 

For instance, a developer building an AI agent in Europe might be required by their compliance team to complete a Data Protection Impact Assessment (DPIA) and an Algorithmic Impact Assessment (AIA) to satisfy internal risk management protocols and technical documentation standards in line with evolving AI governance frameworks and best practices. 

Using the step-by-step guidance offered by Purview Compliance Manager for implementing and testing controls, compliance teams can assess risks such as potential algorithmic bias, cybersecurity threats, or insufficient transparency in model behaviour. 

Kuva, joka sisältää kohteen kuvakaappaus, teksti</p>
<p>Tekoälyllä luotu sisältö voi olla virheellistä., Kuva

Picture 1. EU AI Assessment report for Azure AI Foundry in Compliance Manager (Picture: Microsoft) 

After the evaluation is performed within Azure AI Foundry, the developer can generate a report outlining identified risks, mitigation strategies, and any remaining risks. This report can then be uploaded to Compliance Manager to assist with audits and serve as supporting documentation for regulators or external stakeholders. 

Security Copilot: The SecOps companion 

All those new signals roll up into Microsoft Security Copilot, which is evolving into the command centre for AI-driven SecOps: 

  • Copilot Studio connector (GA, May 2025) lets analysts trigger a Security Copilot investigation from any Studio workflow with plain-language prompts and receive the findings back in-flow. 

  • An expanding plugin catalogue of third-party and Azure-native tools (e.g., Censys threat intelligence, HP Workforce telemetry, Splunk queries, Azure Firewall deep-dive)—means Security Copilot can hunt, correlate and summarise across an increasingly heterogeneous estate in natural language. 

By funneling Entra, Defender and Purview insights into one generative canvas, Security Copilot gives blue teams the same acceleration that Copilot agents give business users. 

Summary 

Enterprises want to embrace autonomous agents without adding exponential risk. Microsoft’s strategy—identity for every bot, guard-rails in the runtime, shared telemetry and an AI-assisted SOC—shifts security left and right at once. Development teams stay agile; security teams stay informed; regulators see evidence of controls baked in from design through operation. 

Next steps for practitioners 

  1. Inventory your agents: enable Entra Agent ID and review existing non-human identities. 

  1. Turn on Spotlighting and Task Adherence in preview environments; measure false-positive rates before broad rollout. 

  1. Wire Defender and Purview into your Foundry projects to surface misconfigurations early. 

  1. Pilot the Security Copilot Studio connector: build a promptbook that opens an investigation whenever Foundry logs a prompt-injection block. 

Petrus Vasenius

Use H2 for the title

Write your content

Use H2 for the title

Write your content