
How to use Microsoft AI Foundry with Langchain


In Microsoft’s AI Foundry, it can be a bit tricky to get things working with LangChain at first. The key point is that each model provider has a different service endpoint and requires a different class to call its API.
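To make the endpoint differences concrete, here is a small sketch that maps each provider to its URL pattern. The provider keys and the helper itself are my own invention for illustration; the URL patterns come from the examples below.

```python
# Hypothetical helper summarizing the per-provider endpoint patterns.
# The provider names ("azure-openai", "anthropic", "azure-ai") are
# illustrative labels, not official identifiers.
def foundry_endpoint(service: str, provider: str) -> str:
    patterns = {
        # Azure OpenAI deployments live under the Cognitive Services domain
        "azure-openai": f"https://{service}.cognitiveservices.azure.com/",
        # Anthropic models are served under a provider-specific path
        "anthropic": f"https://{service}.services.ai.azure.com/anthropic/",
        # The generic Azure AI inference endpoint uses the /models path
        "azure-ai": f"https://{service}.services.ai.azure.com/models",
    }
    return patterns[provider]
```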

Azure OpenAI #

from langchain_openai import AzureChatOpenAI

llm = AzureChatOpenAI(
    azure_deployment="gpt-5.2-chat",       
    azure_endpoint="https://<your-service>.cognitiveservices.azure.com/",
    api_version="2024-05-01-preview",
    api_key="<your-api-key>", 
    max_tokens=4096,
)
  • api_key: use the AZURE_OPENAI_API_KEY environment variable or pass it directly
  • azure_endpoint: use the AZURE_OPENAI_ENDPOINT environment variable or pass it directly

Anthropic #

from langchain_anthropic import ChatAnthropic

llm = ChatAnthropic(
    base_url="https://<your-service>.services.ai.azure.com/anthropic/",
    api_key="<your-api-key>",
    model="claude-sonnet-4-5",
)
  • api_key: use the ANTHROPIC_API_KEY environment variable or pass it directly
  • base_url: use the ANTHROPIC_API_URL environment variable (then ANTHROPIC_BASE_URL as a fallback) or pass it directly

AIChat #

from langchain_azure_ai.chat_models import AzureAIChatCompletionsModel

llm = AzureAIChatCompletionsModel(
    endpoint="https://<your-service>.services.ai.azure.com/models",
    credential="<your-api-key>",
    model="gpt-5.2-chat",
    api_version="2024-05-01-preview",
)
  • endpoint: set the AZURE_AI_ENDPOINT environment variable to https://<your-service>.services.ai.azure.com/models or pass it directly
  • credential: use the AZURE_AI_CREDENTIAL environment variable or pass the API key directly

That said, this last approach is not recommended: it does not support some features and has poor compatibility with the LangChain ecosystem.