We are trying to implement the MCP server on our on-prem Dremio, using the Azure OpenAI model gpt-4o.
Following the documentation, we first tried the parameters below, but received an error message.
OPENAI_API_KEY=your-openai-api-key
LLM_MODEL=openai:gpt-4.1
DETAILED_OUTPUT=false
Error:
Error: Error code: 401 - {'error': {'message': 'Incorrect API key provided: 31mT4nEh************************************************************************ZGgp. You can
find your API key at https://platform.openai.com/account/api-keys.', 'type': 'invalid_request_error', 'param': None, 'code': 'invalid_api_key'}}
Afterwards we tried the following variables; it didn't work either.
AZURE_OPENAI_API_VERSION=2024-11-20
AZURE_OPENAI_ENDPOINT=https://vanopenaimcp.openai.azure.com
AZURE_OPENAI_API_KEY=XXXXX
LLM_MODEL=azure:gpt-4o
MODEL_PROVIDER=azure
Error:
File "/home/dell/.local/lib/python3.11/site-packages/langchain/chat_models/base.py", line 340, in _init_chat_model_helper
model, model_provider = _parse_model(model, model_provider)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/home/dell/.local/lib/python3.11/site-packages/langchain/chat_models/base.py", line 524, in _parse_model
raise ValueError(
ValueError: Unable to infer model provider for model='azure:gpt-4o', please specify model_provider directly.
This ticket includes a secure attachment. Use this link to access the attached files:
Package ID# CGCW-FD9J
I can see the issues with both attempts. Let me help you configure Azure OpenAI correctly for the Dremio MCP server.
Problem Analysis
First attempt: You configured the standard OpenAI provider with an Azure API key. The request went to api.openai.com, which does not recognize Azure keys, hence the 401 invalid_api_key error.
Second attempt: LangChain's model-string parser does not recognize the azure: prefix, so it cannot infer a provider from azure:gpt-4o (and MODEL_PROVIDER=azure is likewise not a provider name LangChain knows; the Azure provider is named azure_openai).
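To see why the second attempt fails, here is a simplified sketch of how a provider-prefixed model string is split and validated. This is NOT LangChain's actual code, just an illustration of the inference step that rejects the azure: prefix:

```python
# Simplified sketch (not LangChain's implementation) of provider-prefix
# parsing for strings like "azure_openai:gpt-4o".
SUPPORTED_PREFIXES = {"openai", "azure_openai", "anthropic"}  # subset, for illustration

def parse_model(model: str):
    """Split 'provider:model' and validate the provider prefix."""
    if ":" in model:
        provider, _, name = model.partition(":")
        if provider in SUPPORTED_PREFIXES:
            return name, provider
    # "azure" is not in the supported set, so "azure:gpt-4o" lands here.
    raise ValueError(
        f"Unable to infer model provider for model={model!r}, "
        "please specify model_provider directly."
    )

print(parse_model("azure_openai:gpt-4o"))  # ('gpt-4o', 'azure_openai')
```

With the recognized azure_openai prefix the string parses cleanly; with azure: it raises the same ValueError you saw in the traceback.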
Correct Configuration for Azure OpenAI
Based on the Dremio MCP server implementation, here’s the proper configuration:
Option 1: Using Azure-specific Environment Variables
bash
export AZURE_OPENAI_API_KEY="your-actual-azure-key"
export AZURE_OPENAI_ENDPOINT="https://vanopenaimcp.openai.azure.com"
export AZURE_OPENAI_API_VERSION="2024-11-20" # verify: API versions are dated like 2024-10-21, see Critical Points
export AZURE_OPENAI_DEPLOYMENT="your-deployment-name" # IMPORTANT!
export LLM_MODEL="azure_openai:gpt-4o" # provider prefix that LangChain's init_chat_model recognizes
export DETAILED_OUTPUT="false"
Option 2: Alternative Format
bash
export AZURE_OPENAI_API_KEY="your-actual-azure-key"
export AZURE_OPENAI_ENDPOINT="https://vanopenaimcp.openai.azure.com"
export AZURE_OPENAI_API_VERSION="2024-11-20" # verify: API versions are dated like 2024-10-21, see Critical Points
export AZURE_OPENAI_DEPLOYMENT="your-deployment-name"
export MODEL_PROVIDER="azure_openai"
export LLM_MODEL="gpt-4o" # Just the model name, not prefixed
export DETAILED_OUTPUT="false"
Critical Points
AZURE_OPENAI_DEPLOYMENT: This is the deployment name you created in Azure Portal, NOT just “gpt-4o”. Go to Azure OpenAI Studio → Deployments to find your actual deployment name (e.g., “my-gpt4o-deployment”).
LLM_MODEL: Should be either:
azure_openai:gpt-4o (azure_openai is a provider prefix LangChain's init_chat_model recognizes)
Just gpt-4o (if MODEL_PROVIDER is set to azure_openai)
NOT azure:gpt-4o (azure is not a recognized provider prefix, which is what causes the parsing error)
Endpoint Format: Should NOT include /openai/deployments/... - just the base endpoint. The client library appends the deployment path for you.
API Version: 2024-11-20 looks like the gpt-4o model version rather than an Azure OpenAI API version. API versions are dates such as 2024-10-21 (GA) or 2024-08-01-preview; double-check yours against the Azure OpenAI documentation.
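To make the endpoint-vs-URL distinction concrete, here is a sketch of the full request URL Azure builds from the base endpoint, deployment name, and API version (the deployment name below is a placeholder; substitute your own):

```python
# Sketch: base endpoint vs. the full chat-completions URL Azure expects.
# All three values are placeholders for illustration.
endpoint = "https://vanopenaimcp.openai.azure.com"  # base only, no path
deployment = "my-gpt4o-deployment"                  # YOUR deployment name
api_version = "2024-10-21"                          # a dated GA API version

url = (f"{endpoint}/openai/deployments/{deployment}"
       f"/chat/completions?api-version={api_version}")
print(url)
```

If you set AZURE_OPENAI_ENDPOINT to the full URL, the client appends the deployment path again and requests fail, which is why only the base endpoint belongs in the variable.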
Find Your Azure Deployment Name
In Azure Portal:
Go to your Azure OpenAI resource
Click “Model deployments” or go to Azure OpenAI Studio
Look for your gpt-4o deployment - copy that exact name
Test Configuration
Try this complete example:
bash
export AZURE_OPENAI_API_KEY="31mT4nEh...ZGgp"
export AZURE_OPENAI_ENDPOINT="https://vanopenaimcp.openai.azure.com"
export AZURE_OPENAI_API_VERSION="2024-10-21" # a GA API version; 2024-11-20 is a model version, not an API version
export AZURE_OPENAI_DEPLOYMENT="gpt-4o-deployment" # Replace with YOUR deployment name
export LLM_MODEL="azure_openai:gpt-4o"
export DETAILED_OUTPUT="false"
# Then start your MCP server
python -m dremio_mcp_server
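Before launching, you can sanity-check the environment for the exact misconfigurations seen in both attempts. This is a standalone sketch (not part of the MCP server); the variable names match the configuration above:

```python
# Sketch: flag the misconfigurations from both failed attempts before
# starting the server. Not part of the Dremio MCP server itself.
import os

def check_env(env=os.environ):
    """Return a list of likely configuration problems (empty if none found)."""
    problems = []
    model = env.get("LLM_MODEL", "")
    if model.startswith("azure:"):
        problems.append("LLM_MODEL uses the unrecognized 'azure:' prefix")
    if not env.get("AZURE_OPENAI_DEPLOYMENT"):
        problems.append("AZURE_OPENAI_DEPLOYMENT is not set")
    endpoint = env.get("AZURE_OPENAI_ENDPOINT", "")
    if "/openai/deployments" in endpoint:
        problems.append("AZURE_OPENAI_ENDPOINT should be the base URL only")
    return problems

# Example: the second failed attempt trips two checks.
print(check_env({"LLM_MODEL": "azure:gpt-4o"}))
```

Run it in the same shell where you exported the variables; an empty list means the three common pitfalls above are cleared.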
What’s your actual deployment name in Azure OpenAI Studio? That’s likely the missing piece. Let me know if you’re still getting errors after adding the deployment name!