
Build an MCP client using Azure AI Foundry and the OpenAI Agents SDK

Apr 27, 2025

In a post on X dated March 26, 2025, OpenAI CEO Sam Altman announced that OpenAI has integrated support for the Model Context Protocol (MCP) into its Agents SDK.

As outlined in the Model Context Protocol (MCP) documentation for the OpenAI Agents SDK:

MCP is an open protocol that standardizes how applications provide context to LLMs. Think of MCP like a USB-C port for AI applications. Just as USB-C provides a standardized way to connect your devices to various peripherals and accessories, MCP provides a standardized way to connect AI models to different data sources and tools.

In this post, I will walk you through how to build an MCP client using Azure AI Foundry alongside the OpenAI Agents SDK.

I followed the MCP Filesystem Example from the OpenAI Agents SDK GitHub Repository. However, instead of using OpenAI’s hosted models, my code uses a GPT-4o model deployed via Azure AI Foundry. This required adapting the example to work with the AzureOpenAI client. Since I couldn’t find any existing examples demonstrating this integration, I’m sharing my approach in this post to help others looking to do the same.

🛠️ Requirements

  • An active Azure subscription.
  • Access to Azure AI Foundry.
  • Python 3.10+
  • Node.js and npm to run the MCP server.
  • VS Code with the Azure extensions (recommended).

🚀 Getting Started

  • Clone the GitHub Repository
git clone https://github.com/Azure-Samples/ai-foundry-agents-samples.git
  • Navigate to the MCP Filesystem example folder
cd examples/mcp/filesystem_example
  • Create and activate Python virtual environment
python3 -m venv .venv
# macOS/Linux
source .venv/bin/activate
# Windows
.venv\Scripts\activate
  • Install dependencies
pip install -r requirements.txt
  • Deploy a GPT-4o model in Azure AI Foundry
  • Capture the endpoint, api_key, and api_version
  • Rename the .env_sample file to .env, then update the following environment variables (a sample .env is shown after this list):
    - AZURE_OPENAI_ENDPOINT
    - AZURE_OPENAI_API_KEY
    - AZURE_OPENAI_CHAT_DEPLOYMENT_NAME (GPT-4o)
    - AZURE_OPENAI_API_VERSION
  • Install npx
npm install -g npx
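
For reference, the resulting .env might look like the following. The values are placeholders, and the API version is only an example of a valid Azure OpenAI version; use the values from your own deployment.

AZURE_OPENAI_ENDPOINT=https://<your-resource-name>.openai.azure.com/
AZURE_OPENAI_API_KEY=<your-api-key>
AZURE_OPENAI_CHAT_DEPLOYMENT_NAME=gpt-4o
AZURE_OPENAI_API_VERSION=2024-10-21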

Code walkthrough

To make the code work with the Azure OpenAI client, two modifications were needed:

  • Create an AsyncAzureOpenAI client.
  • Set the model implementation to use OpenAIChatCompletionsModel with the AsyncAzureOpenAI client when defining the agent.

Create an AsyncAzureOpenAI client

For this purpose, I created a function called get_azure_open_ai_client that creates an AsyncAzureOpenAI client using the AZURE_OPENAI_API_KEY, AZURE_OPENAI_API_VERSION, and AZURE_OPENAI_ENDPOINT environment variables.
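
A minimal sketch of such a function, assuming the openai package (which provides AsyncAzureOpenAI) and python-dotenv to load the .env file; the exact code in the repository may differ:

import os

from dotenv import load_dotenv
from openai import AsyncAzureOpenAI

load_dotenv()  # load the Azure OpenAI settings from .env into the environment

def get_azure_open_ai_client() -> AsyncAzureOpenAI:
    # Build an async client pointed at the Azure AI Foundry deployment
    return AsyncAzureOpenAI(
        api_key=os.environ["AZURE_OPENAI_API_KEY"],
        api_version=os.environ["AZURE_OPENAI_API_VERSION"],
        azure_endpoint=os.environ["AZURE_OPENAI_ENDPOINT"],
    )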

Set the model implementation to use OpenAIChatCompletionsModel with the AsyncAzureOpenAI client when defining the agent

Now, when defining the agent, I set the model implementation to use OpenAIChatCompletionsModel with the AsyncAzureOpenAI client and the model deployed in Azure AI Foundry, using the AZURE_OPENAI_CHAT_DEPLOYMENT_NAME environment variable.
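
Sketched as code, the agent definition might look like this; the create_agent wrapper and the instructions string are illustrative, and mcp_server stands for the filesystem MCP server instance from the example:

import os

from agents import Agent, OpenAIChatCompletionsModel

def create_agent(mcp_server) -> Agent:
    # mcp_server is the filesystem MCP server started by the example
    return Agent(
        name="Assistant",
        instructions="Use the tools to read the filesystem and answer questions based on those files.",
        model=OpenAIChatCompletionsModel(
            model=os.environ["AZURE_OPENAI_CHAT_DEPLOYMENT_NAME"],  # GPT-4o deployment name
            openai_client=get_azure_open_ai_client(),
        ),
        mcp_servers=[mcp_server],
    )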

Run the sample code

You can now run the Python file from the command line or in any IDE of your choice:

python main_azure_ai_foundry.py

When you run the sample code in VS Code, you can see that the LLM used the capabilities of the filesystem MCP server to answer the questions asked in the code.
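
For context, in the upstream filesystem example the MCP server is launched over stdio with npx and handed to the agent. Roughly, and with an illustrative directory path and prompt, the wiring looks like this:

import asyncio
import os

from agents import Runner
from agents.mcp import MCPServerStdio

async def main():
    # Directory the filesystem server is allowed to read (illustrative path)
    samples_dir = os.path.join(os.path.dirname(os.path.abspath(__file__)), "sample_files")

    # Start the filesystem MCP server as an npx subprocess speaking stdio
    async with MCPServerStdio(
        name="Filesystem Server, via npx",
        params={
            "command": "npx",
            "args": ["-y", "@modelcontextprotocol/server-filesystem", samples_dir],
        },
    ) as mcp_server:
        agent = create_agent(mcp_server)  # as sketched in the walkthrough above
        result = await Runner.run(starting_agent=agent, input="List the files you can read.")
        print(result.final_output)

if __name__ == "__main__":
    asyncio.run(main())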

Conclusion

In this blog post, I walked you through how to build an MCP client using Azure AI Foundry alongside the OpenAI Agents SDK.

I followed the basic MCP Filesystem example from the OpenAI Agents SDK, modifying the agent to use OpenAIChatCompletionsModel with the AsyncAzureOpenAI client in order to use the GPT-4o model deployed in Azure AI Foundry.

For more examples of MCP clients implemented using the OpenAI Agents SDK, you can browse the MCP examples folder in the openai-agents-python GitHub repository.


Written by Eitan Sela

Senior Cloud Solutions Architect - Azure AI and Machine Learning at Microsoft | Ex-Amazon
