By default, AI Assistant operates using models from OpenAI. To connect AI Assistant to OpenAI, you’ll need to pass your OpenAI API key as an environment variable to the AI Assistant service.

If you’re concerned about the security implications of sending data to OpenAI, or if you want to keep your data in a specific region, you may want to configure a different model provider, such as Azure, AWS Bedrock, or a locally served model. For help deciding between these options, refer to our guide on selecting the right LLM hosting strategy.

This guide introduces the configuration file used to customize the models and providers that AI Assistant uses.

Mounting the configuration file

AI Assistant supports two ways to provide service-config.yml: a mounted file, or a Base64-encoded environment variable.

Option 1: Use a mounted file

AI Assistant looks for service-config.yml at the root of the AI Assistant directory. You can mount the file in Docker Compose:

ai-assistant:
  volumes:
    - /path-to-service-config-yaml-on-host:/service-config.yml

If your infrastructure doesn’t permit mounting files at the root directory of the container, mount the file in a different directory and set CONFIG_DIR. For more details on CONFIG_DIR and other environment variables, refer to the Docker configuration guide.
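As a sketch of this setup, a Compose service might mount the file into a custom directory and point CONFIG_DIR at it (the /config path here is illustrative, not a required location):

```yaml
ai-assistant:
  environment:
    # Tell AI Assistant where to look for service-config.yml
    - CONFIG_DIR=/config
  volumes:
    # Mount the host file into the directory named by CONFIG_DIR
    - /path-to-service-config-yaml-on-host:/config/service-config.yml
```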

Option 2: Use a Base64 environment variable

You can provide the same configuration through the MODEL_CONFIG environment variable. It expects the full service-config.yml content encoded as Base64:

export MODEL_CONFIG=$(base64 -i service-config.yml)
ai-assistant:
  environment:
    - MODEL_CONFIG=dmVyc2lvbjogIjIiCnByb3ZpZGVyczoKICA...
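To sanity-check the encoding before wiring it into Compose, you can round-trip a value through base64 (a minimal sketch using the portable `--decode` flag; GNU and macOS versions of base64 differ slightly in other flags):

```shell
# Encode a minimal config string, then decode it back to verify the round trip.
config='version: "2"'
encoded=$(printf '%s' "$config" | base64)
decoded=$(printf '%s' "$encoded" | base64 --decode)
printf '%s\n' "$decoded"   # → version: "2"
```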

Configuration precedence

  1. MODEL_CONFIG environment variable (highest priority)
  2. service-config.yml in CONFIG_DIR
  3. Default configuration (fallback)

Service configuration file

AI Assistant uses version 2 of the model configuration schema:

version: "2"
providers:
  - name: "azure"
    instanceName: "ci-testing"
models:
  - model: "azure:gpt4o-mini"
    labels: ["default-llm"]
  - model: "azure:text-embedding-3-small"
    labels: ["default-embedding"]

Version

Set version to "2" to use the current model configuration schema.

Providers (optional)

providers defines explicit provider configuration.

Use this when your provider requires extra settings (for example, Azure instanceName, Bedrock region, or OpenAI-compatible baseUrl).

For simple OpenAI and Anthropic setups, you can rely on environment variables and omit this block.
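As an illustrative sketch of these extra settings (the provider names "bedrock" and "openai-compatible" and all values are placeholders; check each provider guide for the exact keys your version expects):

```yaml
providers:
  - name: "azure"
    instanceName: "my-azure-instance"   # Azure requires an instance name
  - name: "bedrock"
    region: "us-east-1"                 # Bedrock requires a region
  - name: "openai-compatible"
    baseUrl: "http://localhost:11434/v1" # e.g. a locally served model endpoint
```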

You can define any of the supported providers. See each provider guide for configuration options, key requirements, and model suggestions.

models (required)

models defines available models in {provider}:{model} format.

Each model requires one or more labels:

  • default-llm — default model for LLM inference
  • default-embedding — default model for embeddings
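For example, a minimal configuration that omits the providers block (relying on environment variables for credentials) and only declares models might look like this (the model names are illustrative):

```yaml
version: "2"
models:
  - model: "openai:gpt-4o-mini"            # {provider}:{model} format
    labels: ["default-llm"]                # used for LLM inference by default
  - model: "openai:text-embedding-3-small"
    labels: ["default-embedding"]          # used for embeddings by default
```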

Embedding model selection impacts how documents are indexed. Changing embedding models for existing data requires reprocessing documents.

Migrating from model configuration version 1 (aiServices) to version 2? Refer to the AI Assistant 2.1.0 migration guide.