
Ignite 2025: Fabric’s Evolution from Data Platform to Intelligence Platform

19.11.2025

Data, Microsoft


Microsoft Ignite 2025 made one thing clear: the era of the standalone "data warehouse" is ending. Microsoft is aggressively positioning Fabric not just as a place to store data, but as the "nervous system" for the new wave of AI agents.

While the keynotes were dominated by "Agent 365" and "Foundry," for those of us building data platforms, the real story is in the infrastructure that makes those agents possible.

This post walks through the critical updates for data teams: the foundational Data Warehouse updates, the strategic Fabric IQ layer, and the concrete engineering improvements like dbt jobs.

1. The Intelligence Stack: Fabric IQ, Foundry, and Agent 365

The most strategic announcement was the "Intelligence Ecosystem." Microsoft has formalized how data connects to AI agents, and it involves three distinct layers.

Fabric IQ: The Semantic Foundation

Fabric IQ is the new semantic intelligence layer. It extends the Power BI semantic model concept to cover business entities and processes across the entire enterprise.

  • Concept: Instead of feeding agents raw tables (which they often misunderstand), you define entities like "Customer," "Order," or "Complaint" once in Fabric IQ.
  • Result: An agent built in Copilot Studio and an analyst using Power BI now share the exact same definition of "Churn Rate."

Connecting to Microsoft Foundry

This is where the developer story connects. Microsoft Foundry is the platform for building these agents.

  • Developers use Foundry to build custom agents.
  • These agents plug into Fabric IQ to "ground" their reasoning in your actual business data.
  • This mitigates the "hallucination" problem by ensuring the agent is constrained by the semantic rules you defined in Fabric.

Governance via Agent 365

Finally, Agent 365 was announced as the governance and control plane.

  • It acts as a central registry for all agents (Registry, Access Control, Security).
  • For data teams, this is crucial: it ensures that the data access policies (RLS/CLS) you define in Fabric are respected even when an autonomous agent is accessing the data on behalf of a user.

2. Data Warehouse: Specifics for Engineers

Under the hood of the AI story, the Warehouse team delivered two highly requested features that simplify migration from legacy SQL Server or Synapse environments.

IDENTITY Columns (Preview)

Fabric Warehouse finally gets IDENTITY columns, allowing automatic surrogate key generation. The nuances:

  • BigInt Only: In the current preview, IDENTITY columns must be BIGINT. INT is not supported.
  • No Custom Control: You cannot manually specify the SEED or INCREMENT values. Fabric manages the sequence internally to ensure performance across distributed nodes.
  • Use Case: Perfect for generating stable surrogate keys in your Silver/Gold layers without complex MAX(ID)+1 logic in pipelines.
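To make the constraints above concrete, here is a minimal sketch of the DDL (table and column names are illustrative, not from the announcement):

```sql
-- Illustrative: a Gold-layer dimension with an auto-generated surrogate key.
-- Note: BIGINT is required in the current preview, and no SEED/INCREMENT
-- arguments are accepted -- Fabric manages the sequence internally.
CREATE TABLE dbo.DimCustomer
(
    CustomerKey   BIGINT IDENTITY NOT NULL,  -- surrogate key, system-managed
    CustomerId    VARCHAR(50)     NOT NULL,  -- business key from the source
    CustomerName  VARCHAR(200)    NULL
);

-- Values are generated on insert; you do not supply CustomerKey yourself.
INSERT INTO dbo.DimCustomer (CustomerId, CustomerName)
VALUES ('C-1001', 'Contoso Ltd');
```

Because the sequence is generated across distributed nodes, don't assume the values are gap-free or strictly sequential; treat them only as stable, unique keys.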

Data Clustering (Preview)

Data Clustering gives engineers an explicit lever for performance tuning.

  • Versus V-Order: V-Order is a write-time optimization applied by default. Data Clustering is a manual configuration (similar to CLUSTER BY in Snowflake) that physically reorganizes data based on specific columns.
  • Why it matters: It allows the query engine to aggressively "prune" (skip) files during query execution.
  • Best for: Large fact tables where you frequently filter by specific high-cardinality columns (e.g., DateKey or StoreId).
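As a rough sketch, the pattern looks like the following. The exact preview DDL may differ; a Snowflake-style CLUSTER BY clause is assumed here purely for illustration, and the table and column names are made up:

```sql
-- Hypothetical sketch: exact Fabric preview syntax may differ from this
-- Snowflake-style CLUSTER BY clause. Check the preview docs before using.
CREATE TABLE dbo.FactSales
(
    DateKey  INT            NOT NULL,
    StoreId  INT            NOT NULL,
    Amount   DECIMAL(18, 2) NOT NULL
)
WITH (CLUSTER BY (DateKey, StoreId));

-- Queries that filter on the clustering columns let the engine prune
-- (skip) files whose value ranges cannot match:
SELECT SUM(Amount)
FROM dbo.FactSales
WHERE DateKey = 20251119
  AND StoreId = 42;
```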

3. Data Factory: The dbt Era Begins

Ignite 2025 brought a massive win for analytics engineers: dbt job support in Data Factory.

dbt Jobs and the Fusion Engine

  • Now: You can run dbt projects directly within Fabric Data Factory pipelines. This unifies orchestration, letting you trigger dbt models right after your Notebook ingest jobs finish. Currently this supports dbt Core.
  • Future (2026): Microsoft and dbt Labs announced that the dbt Fusion Engine (the Rust-based, high-performance engine) will be integrated into Fabric in 2026.
  • Strategy: This signals that Fabric is moving to support "code-first" analytics engineering patterns as first-class citizens alongside visual dataflows.
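For readers new to dbt, the unit being orchestrated is just a SQL model like the one below (file path and names are illustrative). In a Data Factory pipeline, the dbt job activity would run this against the Fabric Warehouse after the ingest step completes:

```sql
-- models/gold/fct_orders.sql -- a minimal dbt model (names illustrative).
-- {{ config(...) }} and {{ ref(...) }} are dbt Jinja functions: config sets
-- the materialization, ref resolves the upstream staging model and records
-- the dependency for dbt's DAG.
{{ config(materialized='table') }}

SELECT
    o.order_id,
    o.customer_id,
    o.order_date,
    SUM(o.line_amount) AS order_total
FROM {{ ref('stg_orders') }} AS o
GROUP BY
    o.order_id,
    o.customer_id,
    o.order_date
```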

SAP Mirroring

SAP Mirroring (via SAP Datasphere) is now in Preview. This creates a low-latency replication path from SAP systems into Fabric. It drastically lowers the barrier for combining SAP data with other enterprise data sources.

4. Operational Excellence: Agents and Events

The "Real-Time Intelligence" workload received updates that help us run the platform itself.

Operations Agent (Preview)

Located within the Real-Time Intelligence workload, the Operations Agent is a specialized agent designed to monitor your data estate.

  • It can watch for anomalies in data freshness or quality.
  • Unlike static alerts, it can propose remediation steps (e.g., "The ingest pipeline failed; should I retry or scale the capacity?").

Capacity Overview Events

Admins finally get granular visibility into what is consuming their Fabric Capacity Units (CUs).

  • The Flow: Fabric emits structured events regarding capacity usage and throttling status.
  • The Destination: These events land in the Real-Time Hub.
  • Pro Tip: You can route these events into an Eventhouse (KQL Database). This allows you to retain the history for months, letting you analyze long-term usage trends and perform capacity planning based on actual data rather than guesses.
  • Check out: the Fabric Capacity Events Accelerator on GitHub provides pre-made dashboards and templates to help with monitoring.

Summary

Ignite 2025 signals that Fabric has graduated from a "unified data platform" to a "unified intelligence platform." For data teams, the roadmap is clear:

  1. Standardize on Fabric IQ to ensure your data is ready for the incoming wave of AI agents.
  2. Leverage dbt and Clustering to build robust, high-performance engineering foundations.
  3. Step up your monitoring with Capacity Overview Events and Operational agents.
Markus Lehtola

Markus has nearly 10 years of experience working in the fields of analytics, data, and business intelligence. He believes that data can be a valuable asset for any company and that Azure provides a great foundation for delivering that value. He currently works as a data architect, helping to deliver data solutions on top of Azure.