AI configuration
By default, AI Assistant operates using models from OpenAI. To connect AI Assistant to OpenAI, you’ll need to pass your OpenAI API key as an environment variable to the AI Assistant service.
If you have security concerns about sending data to OpenAI, or if you need to keep your data in a specific region, you may want to configure a different model provider, such as Azure, AWS Bedrock, or a locally served model. For help deciding between these options, refer to our guide on selecting the right LLM hosting strategy.
This guide introduces the configuration file used to customize the models and providers that AI Assistant uses.
Mounting the configuration file
AI Assistant supports two ways to provide service-config.yml: a mounted file, or a Base64-encoded environment variable.
Option 1: Use a mounted file
AI Assistant looks for service-config.yml at the root of the AI Assistant directory. You can mount the file in Docker Compose:
```yaml
ai-assistant:
  volumes:
    - /path-to-service-config-yaml-on-host:/service-config.yml
```

If your infrastructure doesn’t permit mounting files at the root directory of the container, mount the file in a different directory and set CONFIG_DIR. For more details on CONFIG_DIR and other environment variables, refer to the Docker configuration guide.
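For the non-root case, a sketch of what that might look like (the /config path shown here is illustrative, not prescribed; check the Docker configuration guide for the exact semantics of CONFIG_DIR):

```yaml
ai-assistant:
  environment:
    # Point AI Assistant at the directory that holds service-config.yml
    - CONFIG_DIR=/config
  volumes:
    - /path-to-service-config-yaml-on-host:/config/service-config.yml
```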
Option 2: Use a Base64 environment variable
You can provide the same configuration using MODEL_CONFIG. This variable expects the full service-config.yml content encoded as Base64:
```shell
export MODEL_CONFIG=$(base64 -i service-config.yml)
```

```yaml
ai-assistant:
  environment:
    - MODEL_CONFIG=dmVyc2lvbjogIjIiCnByb3ZpZGVyczoKICA...
```

Configuration precedence
1. MODEL_CONFIG environment variable (highest priority)
2. service-config.yml in CONFIG_DIR
3. Default configuration (fallback)
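Because MODEL_CONFIG takes precedence over a mounted file, it is worth confirming that the value you export decodes back to the exact file you intended before deploying. A minimal round-trip check (the sample config content below is illustrative; substitute your real service-config.yml):

```shell
# Create a minimal illustrative config (replace with your real file)
cat > service-config.yml <<'EOF'
version: "2"
models:
  - model: "openai:gpt-4o-mini"
    labels: ["default-llm"]
EOF

# Encode it; on GNU coreutils you can add -w0 to avoid line wrapping
export MODEL_CONFIG=$(base64 < service-config.yml)

# Decoding must reproduce the original byte-for-byte; diff exits 0 on a match
# (older macOS base64 uses -D instead of --decode)
echo "$MODEL_CONFIG" | base64 --decode | diff - service-config.yml
```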
Service configuration file
AI Assistant uses version 2 of the model configuration schema:
```yaml
version: "2"

providers:
  - name: "azure"
    instanceName: "ci-testing"

models:
  - model: "azure:gpt4o-mini"
    labels: ["default-llm"]
  - model: "azure:text-embedding-3-small"
    labels: ["default-embedding"]
```

Version
Set version to "2" to use the current model configuration schema.
providers (optional)
providers defines explicit provider configuration.
Use this when your provider requires extra settings (for example, Azure instanceName, Bedrock region, or OpenAI-compatible baseUrl).
For simple OpenAI and Anthropic setups, you can rely on environment variables and omit this block.
You can define one of the following providers:
- openai (guide) — Choose this for a wide range of performant models at low cost.
- anthropic (guide) — Use Claude models with advanced reasoning capabilities. Note: Anthropic doesn’t provide embeddings, so define another model with the default-embedding label from a different provider.
- azure (guide) — Use OpenAI models in the Azure environment with more region and data security features.
- bedrock (guide) — Choose from a range of open and closed models while keeping your data in the AWS ecosystem.
- openai-compat (guide) — Use any OpenAI API-compatible inference framework, such as vLLM, Hugging Face TGI, or Ollama.
See each provider guide for configuration options, key requirements, and model suggestions.
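As one illustration, a configuration for an OpenAI-compatible local server might look like the following sketch. The name, baseUrl, and model identifiers here are assumptions for illustration; consult the openai-compat guide for the exact fields your version supports:

```yaml
version: "2"

providers:
  # Hypothetical local Ollama-style endpoint; adjust baseUrl to your host
  - name: "openai-compat"
    baseUrl: "http://localhost:11434/v1"

models:
  - model: "openai-compat:llama3"
    labels: ["default-llm"]
  - model: "openai-compat:nomic-embed-text"
    labels: ["default-embedding"]
```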
models (required)
models defines available models in {provider}:{model} format.
Each model requires one or more labels:
- default-llm — default model for LLM inference
- default-embedding — default model for embeddings
Embedding model selection impacts how documents are indexed. Changing embedding models for existing data requires reprocessing documents.
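Because Anthropic doesn’t offer embeddings, a mixed-provider models block is a common pattern. A sketch (the model identifiers are illustrative; use the names your providers actually expose):

```yaml
version: "2"

models:
  # Claude handles LLM inference; embeddings come from OpenAI
  - model: "anthropic:claude-3-5-sonnet"
    labels: ["default-llm"]
  - model: "openai:text-embedding-3-small"
    labels: ["default-embedding"]
```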
Migrating from model configuration version 1 (aiServices) to version 2? Refer to the AI Assistant 2.1.0 migration guide.