
Agent Factory: Connecting agents, apps, and data with new open standards like MCP and A2A


This post is the fifth in a six-part series, Agent Factory, which shares best practices, design patterns, and tools to help guide you through adopting and building agentic AI.

An agent that can’t talk to other agents, tools, and apps is just a silo. The real power of agents comes from their ability to connect to each other, to enterprise data, and to the systems where work gets done. Integration is what transforms an agent from a clever prototype into a force multiplier across a business.

Across Azure AI Foundry customers and partners, we see the shift everywhere: customer service agents collaborating with retrieval agents to resolve complex cases, research agents chaining together across datasets to accelerate discovery, and business agents acting in concert to automate workflows that once took teams of humans. The story of agent development has moved from “can we build one?” to “how do we make them work together, safely and at scale?”

At Microsoft over the years, I’ve seen how open protocols shape ecosystems. From OData, which standardized access to data APIs, to OpenTelemetry, which gave developers common ground for observability, open standards have consistently unlocked innovation and scale across industries. Today, customers in Azure AI Foundry are looking for flexibility without vendor lock-in, and the same pattern is now unfolding with AI agents: proprietary, closed ecosystems create risk when agents, tools, or data can’t interoperate, stalling innovation and driving up switching costs. Several shifts make this clear:

  • Standard protocols taking root: Open standards like the Model Context Protocol (MCP) and Agent2Agent (A2A) are creating a lingua franca for how agents share tools, context, and results across vendors. This interoperability is critical for enterprises that want the freedom to choose best-of-breed solutions and ensure their agents, tools, and data can work together, regardless of vendor or framework (see the tool-server sketch after this list).
  • Agent-to-agent collaboration: Specialist agents increasingly collaborate as teams, with one handling scheduling, another querying databases, and another summarizing. This mirrors human work patterns, where specialists contribute to shared goals. Learn more about how this connects to MCP and A2A in our Agent2Agent and MCP blog post.
  • Connected ecosystems: From Microsoft 365 to Salesforce to ServiceNow, enterprises expect agents to act across all their apps, not just one platform. Integration libraries and connectors are becoming as important as models themselves. Open standards ensure that as new platforms and tools emerge, they can be integrated seamlessly, eliminating the risk of isolated point solutions.
  • Interop across frameworks: Developers want the freedom to build with LangGraph, AutoGen, Semantic Kernel, or CrewAI—and still have their agents talk to each other. Framework diversity is here to stay.
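
To make the “lingua franca” idea concrete, here is a minimal sketch of an MCP tool server written with the open-source mcp Python SDK (FastMCP). The server name, the lookup_ticket tool, and its stubbed logic are illustrative assumptions, not part of Azure AI Foundry or any product named above; the point is that any MCP-aware agent or framework can discover and call the same tool definition.

```python
# Minimal MCP tool server sketch using the open-source `mcp` Python SDK (FastMCP).
# The server name and tool below are illustrative, not a real product API.
from mcp.server.fastmcp import FastMCP

mcp = FastMCP("support-tools")  # hypothetical server name

@mcp.tool()
def lookup_ticket(ticket_id: str) -> str:
    """Return a short status summary for a support ticket (stubbed for illustration)."""
    # A real server would query a ticketing system here; we return a fixed value instead.
    return f"Ticket {ticket_id}: open, awaiting customer reply"

if __name__ == "__main__":
    # The stdio transport lets any MCP-compatible client attach to this server as a subprocess.
    mcp.run(transport="stdio")
```

Because the tool is described by the protocol rather than by a specific framework, the same server can back a LangGraph, Semantic Kernel, or Foundry agent without changes.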

What integration at scale requires

From our work with enterprises and open-source communities, a picture emerges of what’s needed to connect agents, apps, and data:

  • Cross-agent collaboration by design: Multi-agent workflows require open protocols that allow different runtimes and frameworks to coordinate. Protocols like A2A and MCP are rapidly evolving to support richer agent collaboration and integration. A2A expands agent-to-agent collaboration, while MCP is growing into a foundational layer for context sharing, tool interoperability, and cross-framework coordination.
  • Shared context through open standards: Agents need a safe, consistent way to pass context, tools, and results. MCP enables this by making tools reusable across agents, frameworks, and vendors.
  • Seamless enterprise system access: Business value only happens when agents can act: update a CRM record, post in Teams, or trigger an ERP workflow. Integration fabrics with prebuilt connectors remove the heavy lift. Enterprises can connect new and legacy systems without costly rewrites or proprietary barriers.
  • Unified observability: As workflows span agents and apps, tracing and debugging across boundaries becomes essential. Teams must see the chain of reasoning across multiple agents to ensure safety, compliance, and trust. Open standards for telemetry and evaluation give enterprises the transparency and control they need to operate at scale (a tracing sketch follows this list).
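
As a sketch of the unified-observability point above, the snippet below uses the standard OpenTelemetry Python API to record two hypothetical agent steps as child spans of a single trace. The span names, the case.id attribute, and the workflow itself are placeholders; a real deployment would also configure an exporter for whatever tracing backend the team uses.

```python
# Sketch: correlating work done by two cooperating agents under one trace,
# using the standard OpenTelemetry Python API. Names and attributes are illustrative.
from opentelemetry import trace

tracer = trace.get_tracer("agent-workflow")

def run_workflow(case_id: str) -> None:
    with tracer.start_as_current_span("resolve_case") as root:
        root.set_attribute("case.id", case_id)

        # Step handled by a hypothetical retrieval agent.
        with tracer.start_as_current_span("retrieval_agent.search"):
            ...  # call the retrieval agent here

        # Step handled by a hypothetical customer-service agent, recorded as a child span
        # so the full chain of reasoning shows up in a single trace.
        with tracer.start_as_current_span("service_agent.draft_reply"):
            ...  # call the service agent here
```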

How Azure AI Foundry enables integration at scale

Azure AI Foundry was designed for this connected future. It makes agents interoperable, enterprise ready, and integrated into the systems where businesses run.

  • Model Context Protocol (MCP): Foundry agents can call MCP-compatible tools directly, enabling developers to reuse existing connectors and unlock a growing marketplace of interoperable tools. Semantic Kernel also supports MCP for pro-code developers; a protocol-level sketch of an MCP tool call follows this list.
  • A2A support: Through Semantic Kernel, Foundry implements A2A so agents can collaborate across different runtimes and ecosystems. Multi-agent workflows—like a research agent coordinating with a compliance agent before drafting a report—just work.
  • Enterprise integration fabric: Foundry comes with thousands of connectors into SaaS and enterprise systems. From Dynamics 365 to ServiceNow to custom APIs, agents can act where business happens without developers rebuilding integrations from scratch. And with Logic Apps now supporting MCP, existing workflows and connectors can be leveraged directly inside Foundry agents.
  • Unified observability and governance: Tracing, evaluation, and compliance checks extend across multi-agent and multi-system workflows. Developers can debug cross-agent reasoning and enterprises can enforce identity, policy, and compliance end-to-end.
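
Foundry and Semantic Kernel wrap this plumbing for you, but as a rough, protocol-level sketch of what “calling an MCP-compatible tool” involves, here is a minimal client built with the open-source mcp Python SDK, pointed at the illustrative tool server sketched earlier. The server script path and the lookup_ticket tool name are assumptions carried over from that example, not Foundry APIs.

```python
# Sketch of an MCP client discovering and calling a tool over stdio, using the
# open-source `mcp` Python SDK. Script and tool names match the illustrative server above.
import asyncio

from mcp import ClientSession, StdioServerParameters
from mcp.client.stdio import stdio_client

async def main() -> None:
    # Launch the example tool server as a subprocess (the script path is hypothetical).
    params = StdioServerParameters(command="python", args=["support_tools_server.py"])
    async with stdio_client(params) as (read, write):
        async with ClientSession(read, write) as session:
            await session.initialize()
            tools = await session.list_tools()  # discover what the server exposes
            print([tool.name for tool in tools.tools])
            result = await session.call_tool("lookup_ticket", {"ticket_id": "42"})
            print(result.content)  # tool output, returned as MCP content blocks

asyncio.run(main())
```

In practice you rarely write this by hand; the point is that discovery and invocation look the same no matter which agent runtime or vendor sits on top.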

Why this matters now

Enterprises don’t want isolated point solutions—they want connected systems that scale. The next competitive advantage in AI isn’t just building smarter agents; it’s building connected agent ecosystems that work across apps, frameworks, and vendors. Interoperability and open standards are the foundation for this future, giving customers the flexibility, choice, and confidence to invest in AI without fear of vendor lock-in.

Azure AI Foundry makes that possible:

  • Flexible protocols (MCP and A2A) for agentic collaboration and interoperability.
  • Enterprise connectors for system integration.
  • Guardrails and governance for trust at scale.

With these foundations, organizations can move from siloed prototypes to truly connected AI ecosystems that span the enterprise.

What’s next

In part six of the Agent Factory series, we’ll focus on one of the most critical dimensions of agent development: trust. Building powerful agents is only half the challenge. Enterprises need to ensure these agents operate with the highest standards of security, identity, and governance.
