Validated AI Providers and LLM Models

Learn which AI providers and LLM models are integrated and tested for use with KIO Co-Pilot.

CoreMedia implements and tests KIO Co-Pilot with several external AI providers and their Large Language Models (LLMs), driven by customer demand. This guide provides an overview of the AI providers and LLM models that CoreMedia integrates out of the box and uses for internal testing and validation with KIO Co-Pilot. It also describes how to configure the different providers in KIO Co-Pilot.

In addition, the guide outlines limitations, validation status, and reasons why not every available model is immediately suitable for integration.

Integrated AI Providers

KIO Co-Pilot integrates with the following external AI/LLM providers, selected based on customer demand:

  • AWS Bedrock

  • Microsoft Azure AI

  • OpenAI

These providers offer a broad set of foundation models. CoreMedia uses a subset of these models to ensure stable, predictable behavior in production environments.

Because prompt engineering, response quality, and model behavior differ across vendors, and sometimes even between models from the same vendor, CoreMedia cannot ensure uniform performance across all LLM providers. We have not conducted long-term testing with every LLM vendor on the market.

Validated LLM Models

While the selected providers offer many different models, CoreMedia performs continuous evaluation and validation on a selected subset to ensure they behave reliably with KIO Co-Pilot.

CoreMedia integrates with external third-party AI providers but does not create, control, or fine-tune any of the models they offer.

Model behavior may vary due to the probabilistic nature of LLMs, and identical prompts may not always produce identical responses. CoreMedia cannot guarantee accuracy, consistency, or response quality for any model.

The following LLM models are currently tested and validated for production use:

  • OpenAI GPT-4o

  • Azure AI GPT-4o

  • Anthropic Claude 3.7 Sonnet via Amazon Bedrock Converse

Support may expand over time as further models are evaluated.

Other Providers and Models

It may be possible to connect additional AI providers or use other models offered by the selected providers. However, configurations outside the validated list should be considered experimental.

Possible limitations include, for example:

  • reduced or inconsistent tool/function-calling support

  • differences in streaming or response formatting

  • unpredictable conversational behavior

  • stricter rate or throughput limits

Unsupported models may work, but are not recommended for production use without customer-side testing.

You can contact CoreMedia for assistance in evaluating additional models.
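
For example, a provider that exposes an OpenAI-compatible API can often be connected by pointing the OpenAI integration at a different endpoint via the standard Spring AI property spring.ai.openai.base-url. This is a sketch only; the endpoint and model name are placeholders, and such a setup is experimental and not validated by CoreMedia:

# Experimental sketch: OpenAI-compatible third-party endpoint (not validated)
# The base URL and model name below are placeholders for your provider's values
spring.ai.openai.base-url=https://my-compatible-provider.example.com
spring.ai.openai.api-key=CONFIGURE_ME
spring.ai.openai.chat.options.model=my-model-name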

LLM Provider Configuration

Each LLM provider requires specific configuration properties for the KIO Co-Pilot backend. The Blueprint workspace provides a configuration example for OpenAI as the default configuration. You can override this configuration by activating a specific Spring profile for your LLM provider and adapting the necessary properties from the examples below.
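
For example, to switch from the OpenAI default to one of the provider profiles described below, you only need to extend the active profile list in the deployment environment (a minimal sketch; the variable and profile names are taken from the provider sections that follow):

# Sketch: activating the Azure provider profile in the deployment environment
KIO_SPRING_PROFILES_ACTIVE=dev,azure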

OpenAI Configuration

# OpenAI Example Configuration
# For more details see https://docs.spring.io/spring-ai/reference/api/chat/openai-chat.html

# Configure your OpenAI API Key
# You can get an API Key from https://platform.openai.com/account/api-keys
# Permissions: Models: Read, Model capabilities: Write
spring.ai.openai.api-key=CONFIGURE_ME

spring.ai.model.chat=openai
spring.ai.model.image=openai
spring.ai.model.embedding=openai
spring.ai.openai.chat.api-key=${spring.ai.openai.api-key}
spring.ai.openai.chat.options.model=gpt-4o

# to extract the user intent from the history, we can use a faster model with less complexity
kio.backend.playbooks.user-intent-extraction-model=gpt-4o-mini

# to extract image data we can use a faster model with less complexity
kio.backend.image-extraction-model=gpt-4o-mini

The configuration prepared for OpenAI is the default configuration for KIO Co-Pilot. In the Blueprint deployment, only the dev profile is activated by default (see KIO_SPRING_PROFILES_ACTIVE=dev). Profiles for other LLM providers override only the necessary properties.
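
If you prefer not to edit property files, Spring Boot's relaxed binding also maps environment variables to these properties, so individual values can be overridden per deployment. A sketch, assuming your deployment passes environment variables through to the backend:

# Sketch: overriding individual properties via environment variables
# (relaxed binding maps SPRING_AI_OPENAI_CHAT_OPTIONS_MODEL to
# spring.ai.openai.chat.options.model)
SPRING_AI_OPENAI_API_KEY=CONFIGURE_ME
SPRING_AI_OPENAI_CHAT_OPTIONS_MODEL=gpt-4o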

Azure OpenAI Configuration

# Azure OpenAI Example Configuration
# For more details see https://docs.spring.io/spring-ai/reference/api/chat/azure-openai-chat.html

spring.ai.azure.openai.api-key=CONFIGURE_ME

spring.ai.model.chat=azure-openai
spring.ai.model.image=azure-openai

spring.ai.azure.openai.endpoint=https://my-endpoint.openai.azure.com
spring.ai.azure.openai.chat.options.deployment-name=my-gpt-4o-deployment-name
kio.backend.playbooks.user-intent-extraction-model=${spring.ai.azure.openai.chat.options.deployment-name}

# embedding
spring.ai.model.embedding=azure-openai
spring.ai.azure.openai.embedding.options.deployment-name=my-embedding-deployment-name

The example configuration is also available as the Spring profile azure and can be activated by setting the environment variable KIO_SPRING_PROFILES_ACTIVE=dev,azure in the deployment configuration.
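
Mirroring the OpenAI defaults, you can also route the intent- and image-extraction tasks to a second, cheaper Azure deployment if you operate one. This is a sketch; the deployment name is a placeholder for your own gpt-4o-mini (or similar) deployment:

# Sketch: routing auxiliary tasks to a cheaper Azure deployment (placeholder name)
kio.backend.playbooks.user-intent-extraction-model=my-gpt-4o-mini-deployment-name
kio.backend.image-extraction-model=my-gpt-4o-mini-deployment-name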

Anthropic Claude via Amazon Bedrock Configuration

# Bedrock Converse Example Configuration
# For more details see https://docs.spring.io/spring-ai/reference/api/chat/bedrock-converse.html

spring.ai.bedrock.aws.access-key=CONFIGURE_ME
spring.ai.bedrock.aws.secret-key=CONFIGURE_ME

spring.ai.model.chat=bedrock-converse
spring.ai.model.image=bedrock-converse
spring.ai.bedrock.converse.chat.options.model=us.anthropic.claude-3-7-sonnet-20250219-v1:0
kio.backend.playbooks.user-intent-extraction-model=${spring.ai.bedrock.converse.chat.options.model}
kio.backend.image-extraction-model=${spring.ai.bedrock.converse.chat.options.model}
spring.ai.bedrock.aws.region=us-east-1

# embedding
spring.ai.model.embedding=bedrock-titan
spring.ai.bedrock.titan.embedding.model=amazon.titan-embed-text-v2:0
spring.ai.bedrock.titan.embedding.input-type=text

# Similarity with Titan embeddings seems to yield more distance compared to OpenAI, hence the lower threshold
kio.backend.playbooks.similarity-threshold=0.2

# The default of 500 tokens is not enough and caused JSON parse errors when updating larger rich text properties
spring.ai.bedrock.converse.chat.options.max-tokens=2000

The example configuration is also available as the Spring profile aws and can be activated by setting the environment variable KIO_SPRING_PROFILES_ACTIVE=dev,aws in the deployment configuration.
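
Taken together, a deployment environment for the aws profile could look like the following sketch. The credentials are placeholders, and the SPRING_AI_BEDROCK_AWS_* variables rely on Spring Boot's relaxed binding to the properties shown above:

# Sketch: deployment environment for the aws profile (placeholder credentials)
KIO_SPRING_PROFILES_ACTIVE=dev,aws
SPRING_AI_BEDROCK_AWS_ACCESS_KEY=CONFIGURE_ME
SPRING_AI_BEDROCK_AWS_SECRET_KEY=CONFIGURE_ME
SPRING_AI_BEDROCK_AWS_REGION=us-east-1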

How to Continue

To learn more about KIO Co-Pilot, explore the other guides in the KIO Guides Overview.

Need support? Feel free to contact us at support@coremedia.com.

If you encounter issues or unexpected behavior, our Support Team will be glad to assist.
