How to use Microsoft AI Foundry with Langchain
In Microsoft's AI Foundry, it can be a bit tricky to get things working with LangChain at first. The key point is that each model provider has a different service endpoint and requires a different class to call its API.
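To make the differences concrete, here is a small summary sketch of the provider-to-endpoint/class mapping this post covers. The endpoint hosts are placeholder patterns, not live URLs:

```python
# Each provider behind AI Foundry uses a different host suffix
# and a different LangChain chat-model class.
PROVIDERS = {
    "azure-openai": {
        "class": "langchain_openai.AzureChatOpenAI",
        "endpoint": "https://<your-service>.cognitiveservices.azure.com/",
    },
    "anthropic": {
        "class": "langchain_anthropic.ChatAnthropic",
        "endpoint": "https://<your-service>.services.ai.azure.com/anthropic/",
    },
    "azure-ai": {
        "class": "langchain_azure_ai.chat_models.AzureAIChatCompletionsModel",
        "endpoint": "https://<your-service>.services.ai.azure.com/models",
    },
}

# Three providers, three distinct classes and endpoint patterns.
assert len({p["class"] for p in PROVIDERS.values()}) == 3
```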
Azure OpenAI
```python
from langchain_openai import AzureChatOpenAI

llm = AzureChatOpenAI(
    azure_deployment="gpt-5.2-chat",
    azure_endpoint="https://<your-service>.cognitiveservices.azure.com/",
    api_version="2024-05-01-preview",
    api_key="<your-api-key>",
    max_tokens=4096,
)
```
- `api_key`: use the `AZURE_OPENAI_API_KEY` environment variable or pass it directly
- `azure_endpoint`: use the `AZURE_OPENAI_ENDPOINT` environment variable or pass it directly
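A minimal sketch of the environment-variable route, assuming the variable names `AzureChatOpenAI` reads by default (the placeholder values are yours to fill in):

```python
import os

# Set credentials via environment variables instead of passing them directly.
os.environ["AZURE_OPENAI_API_KEY"] = "<your-api-key>"
os.environ["AZURE_OPENAI_ENDPOINT"] = "https://<your-service>.cognitiveservices.azure.com/"

# With these set, the constructor call shrinks to (not executed here):
# from langchain_openai import AzureChatOpenAI
# llm = AzureChatOpenAI(azure_deployment="gpt-5.2-chat", api_version="2024-05-01-preview")
```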
Anthropic
```python
from langchain_anthropic import ChatAnthropic

llm = ChatAnthropic(
    base_url="https://<your-service>.services.ai.azure.com/anthropic/",
    api_key="<your-api-key>",
    model="claude-sonnet-4-5",
)
```
- `api_key`: use the `ANTHROPIC_API_KEY` environment variable or pass it directly
- `base_url`: use the `ANTHROPIC_API_URL` environment variable (then `ANTHROPIC_BASE_URL`) or pass it directly
AIChat
```python
from langchain_azure_ai.chat_models import AzureAIChatCompletionsModel

llm = AzureAIChatCompletionsModel(
    endpoint="https://<your-service>.services.ai.azure.com/models",
    model_name="gpt-5.2-chat",
    api_version="2024-05-01-preview",
)
```
- Need to set `AZURE_AI_ENDPOINT` to `https://<your-service>.services.ai.azure.com/models`
- For the API key, use the `AZURE_AI_CREDENTIAL` environment variable
That said, this method is not recommended: it does not support some features and has poor compatibility with the LangChain ecosystem.