
Symfony AI - Platform Component


The Platform component provides an abstraction for interacting with different models, their providers and contracts.

Installation

$ composer require symfony/ai-platform

Purpose

The Platform component provides a unified interface for working with various AI models, hosted and run by different providers. It allows developers to easily switch between different AI models and providers without changing their application code. This is particularly useful for applications that require flexibility in choosing AI models based on specific use cases or performance requirements.

Usage

The instantiation of the Platform class is usually delegated to a provider-specific factory, where a provider is OpenAI, Anthropic, Google, Replicate, or another vendor.

For example, to use the OpenAI provider, you would typically do something like this:

use Symfony\AI\Platform\Bridge\OpenAi\PlatformFactory;

$platform = PlatformFactory::create($_ENV['OPENAI_API_KEY']);

With this PlatformInterface instance you can now interact with the LLM:

use Symfony\AI\Platform\Message\Message;
use Symfony\AI\Platform\Message\MessageBag;

// Generate a vector embedding for a text, returns a Symfony\AI\Platform\Result\VectorResult
$vectorResult = $platform->invoke('text-embedding-3-small', 'What is the capital of France?');

// Generate a text completion with GPT, returns a Symfony\AI\Platform\Result\TextResult
$result = $platform->invoke('gpt-4o-mini', new MessageBag(Message::ofUser('What is the capital of France?')));

Depending on the model and its capabilities, different types of inputs and outputs are supported, which results in a very flexible and powerful interface for working with AI models.

Models

The component provides a model base class Model which is a combination of a model name, a set of capabilities, and additional options. Usually, bridges to specific providers extend this base class to provide a quick start for vendor-specific models and their capabilities.

Capabilities are a list of strings defined by Capability, which can be used to check if a model supports a specific feature, like Capability::INPUT_AUDIO or Capability::OUTPUT_IMAGE.

Options are additional parameters that can be passed to the model, like temperature or max_tokens, and are usually defined by the specific models and their documentation.
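To illustrate the shape of this triple (name, capabilities, options), here is a plain-PHP sketch; `ModelSketch` is a hypothetical stand-in for illustration only, the real classes are the component's `Model` and `Capability`:

```php
<?php
// Hypothetical stand-in for the component's Model class: a name, a list of
// capability strings, and default options. Illustration only, not the real API.
final class ModelSketch
{
    public function __construct(
        public readonly string $name,
        private readonly array $capabilities = [],
        public readonly array $options = [],
    ) {
    }

    public function supports(string $capability): bool
    {
        return \in_array($capability, $this->capabilities, true);
    }
}

$model = new ModelSketch(
    'gpt-4o-mini',
    ['input-text', 'input-image', 'output-text'], // cf. Capability::INPUT_AUDIO etc.
    ['temperature' => 1.0],
);

var_dump($model->supports('input-image')); // bool(true)
var_dump($model->supports('input-audio')); // bool(false)
```

The real Model class is what the provider bridges extend, so a bridge can ship vendor-specific capability lists out of the box.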

Model Size Variants

For providers like Ollama, you can specify model size variants using a colon notation (e.g., qwen3:32b, llama3:7b). If the exact model name with size variant is not found in the catalog, the system will automatically fall back to the base model name (qwen3, llama3) and use its capabilities while preserving the full model name for the provider.

You can also combine size variants with query parameters:

use Symfony\AI\Platform\Bridge\Ollama\ModelCatalog;

$catalog = new ModelCatalog();

// Get model with size variant
$model = $catalog->getModel('qwen3:32b');

// Get model with size variant and query parameters
$model = $catalog->getModel('qwen3:32b?temperature=0.5&top_p=0.9');
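The fallback behavior described above can be sketched in plain PHP; `resolveModel` is a hypothetical helper, not the component's implementation:

```php
<?php
// Hypothetical sketch of the size-variant fallback: try the exact name first,
// then the base name before the colon, while keeping the full name for the provider.
function resolveModel(array $catalog, string $name): array
{
    [$model, $query] = array_pad(explode('?', $name, 2), 2, '');

    $options = [];
    parse_str($query, $options);

    $capabilities = $catalog[$model] ?? $catalog[explode(':', $model, 2)[0]] ?? null;
    if (null === $capabilities) {
        throw new \InvalidArgumentException(sprintf('Unknown model "%s".', $model));
    }

    return ['name' => $model, 'capabilities' => $capabilities, 'options' => $options];
}

$catalog = ['qwen3' => ['input-text', 'output-text']]; // capabilities per base model

$resolved = resolveModel($catalog, 'qwen3:32b?temperature=0.5');
// "qwen3:32b" stays the provider-facing name, capabilities come from "qwen3"
```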


Options

The third parameter of the invoke() method is an array of options, which wraps the options of the corresponding model and platform, like temperature or max_tokens:

$result = $platform->invoke('gpt-4o-mini', $input, [
    'temperature' => 0.7,
    'max_tokens' => 100,
]);

Note

For model- and platform-specific options, please refer to the respective documentation.

Language Models and Messages

One central feature of the Platform component is the support for language models and easing the interaction with them. This is supported by providing an extensive set of data classes around the concept of messages and their content.

Messages can be of different types, most importantly UserMessage, SystemMessage, and AssistantMessage; they can have different content types, like Text, Image, or Audio, and can be grouped into a MessageBag:

use Symfony\AI\Platform\Message\Content\Image;
use Symfony\AI\Platform\Message\Message;
use Symfony\AI\Platform\Message\MessageBag;

// Create a message bag with a system and a user message
$messageBag = new MessageBag(
    Message::forSystem('You are a helpful assistant.'),
    Message::ofUser('Please describe this picture?', Image::fromFile('/path/to/image.jpg')),
);

Message Unique IDs

Each message automatically receives a unique identifier (UUID v7) upon creation. This provides several benefits:

  • Traceability: Track individual messages through your application
  • Time-ordered: UUIDs are naturally sortable by creation time
  • Timestamp extraction: Get the exact creation time from the ID
  • Database-friendly: Sequential nature improves index performance
use Symfony\AI\Platform\Message\Message;

$message = Message::ofUser('Hello, AI!');

// Access the unique ID
$id = $message->getId(); // Returns Symfony\Component\Uid\Uuid instance

// Extract creation timestamp
$createdAt = $id->getDateTime(); // Returns \DateTimeImmutable
echo $createdAt->format('Y-m-d H:i:s.u'); // e.g., "2025-06-29 15:30:45.123456"

// Get string representation
echo $id->toRfc4122(); // e.g., "01928d1f-6f2e-7123-a456-123456789abc"
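For illustration, the timestamp lives in the first 48 bits of a UUID v7; a plain-PHP sketch of decoding it by hand (the component does this for you via symfony/uid's getDateTime()):

```php
<?php
// Illustrative only: decode the Unix-millisecond timestamp stored in the
// first 48 bits (12 hex characters) of a UUID v7 string.
function uuidV7CreatedAt(string $uuid): \DateTimeImmutable
{
    $ms = hexdec(substr(str_replace('-', '', $uuid), 0, 12));

    return \DateTimeImmutable::createFromFormat(
        'U.u', // Unix seconds + microseconds
        sprintf('%d.%06d', intdiv($ms, 1000), ($ms % 1000) * 1000)
    );
}

// 0x018bcfe56800 = 1700000000000 ms since the epoch
echo uuidV7CreatedAt('018bcfe5-6800-7000-8000-000000000000')->format('Y-m-d H:i:s'); // 2023-11-14 22:13:20
```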

Result Streaming

Since LLMs usually generate a result word by word, most of them also support streaming the result using Server-Sent Events. Symfony AI supports that by abstracting the conversion and returning a Generator as the content of the result:

use Symfony\AI\Agent\Agent;
use Symfony\AI\Platform\Message\Message;
use Symfony\AI\Platform\Message\MessageBag;

// Initialize Platform and LLM

$agent = new Agent($model);
$messages = new MessageBag(
    Message::forSystem('You are a thoughtful philosopher.'),
    Message::ofUser('What is the purpose of an ant?'),
);
$result = $agent->call($messages, [
    'stream' => true, // enable streaming of response text
]);

foreach ($result->getContent() as $word) {
    echo $word;
}

Note

To be able to use streaming in your web application, an additional layer like Mercure is needed.

Image Processing

Some LLMs also support images as input, which Symfony AI supports as content type within the UserMessage:

use Symfony\AI\Platform\Message\Content\Image;
use Symfony\AI\Platform\Message\Content\ImageUrl;
use Symfony\AI\Platform\Message\Message;
use Symfony\AI\Platform\Message\MessageBag;

// Initialize Platform, LLM & agent

$messages = new MessageBag(
    Message::forSystem('You are an image analyzer bot that helps identify the content of images.'),
    Message::ofUser(
        'Describe the image as a comedian would do it.',
        Image::fromFile(dirname(__DIR__).'/tests/fixtures/image.jpg'), // Path to an image file
        Image::fromDataUrl('data:image/png;base64,...'), // Data URL of an image
        new ImageUrl('https://foo.com/bar.png'), // URL to an image
    ),
);
$result = $agent->call($messages);

Audio Processing

Similar to images, some LLMs also support audio as input, which is just another content type within the UserMessage:

use Symfony\AI\Platform\Message\Content\Audio;
use Symfony\AI\Platform\Message\Message;
use Symfony\AI\Platform\Message\MessageBag;

// Initialize Platform, LLM & agent

$messages = new MessageBag(
    Message::ofUser(
        'What is this recording about?',
        Audio::fromFile('/path/audio.mp3'), // Path to an audio file
    ),
);
$result = $agent->call($messages);

Embeddings

Creating embeddings of words, sentences, or paragraphs is a typical use case when interacting with LLMs.

The standalone usage results in a Vector instance:

// Initialize platform

$vectors = $platform->invoke('text-embedding-3-small', $textInput)->asVectors();

dump($vectors[0]->getData()); // returns something like: [0.123, -0.456, 0.789, ...]
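What you do with the resulting vectors is up to you; a common operation is comparing two embeddings by cosine similarity. A minimal plain-PHP helper (not part of the component):

```php
<?php
// Cosine similarity between two equal-length embedding vectors:
// ~1.0 means same direction (similar meaning), ~0.0 means orthogonal (unrelated).
function cosineSimilarity(array $a, array $b): float
{
    $dot = $normA = $normB = 0.0;
    foreach ($a as $i => $value) {
        $dot += $value * $b[$i];
        $normA += $value ** 2;
        $normB += $b[$i] ** 2;
    }

    return $dot / (sqrt($normA) * sqrt($normB));
}

echo cosineSimilarity([0.1, 0.2, 0.3], [0.1, 0.2, 0.3]); // ~1.0 for identical vectors
```

In practice you would rarely hand-roll this; the Store component and most vector databases do the similarity search for you.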

Structured Output

A typical use-case of LLMs is to classify and extract data from unstructured sources, which is supported by some models by features like Structured Output or providing a Response Format.

PHP Classes as Output

Symfony AI supports this use case by abstracting away the hassle of defining and providing schemas to the LLM and converting the result back into PHP objects.

To achieve this, the Symfony\AI\Platform\StructuredOutput\PlatformSubscriber needs to be registered with the platform:

use Symfony\AI\Fixtures\StructuredOutput\MathReasoning;
use Symfony\AI\Platform\Bridge\Mistral\PlatformFactory;
use Symfony\AI\Platform\Message\Message;
use Symfony\AI\Platform\Message\MessageBag;
use Symfony\AI\Platform\StructuredOutput\PlatformSubscriber;
use Symfony\Component\EventDispatcher\EventDispatcher;

$dispatcher = new EventDispatcher();
$dispatcher->addSubscriber(new PlatformSubscriber());

$platform = PlatformFactory::create($apiKey, eventDispatcher: $dispatcher);
$messages = new MessageBag(
    Message::forSystem('You are a helpful math tutor. Guide the user through the solution step by step.'),
    Message::ofUser('how can I solve 8x + 7 = -23'),
);
$result = $platform->invoke('mistral-small-latest', $messages, ['response_format' => MathReasoning::class]);

dump($result->asObject()); // returns an instance of `MathReasoning` class

Array Structures as Output

PHP array structures are also supported as `response_format`, which likewise requires the event subscriber mentioned above. Additionally, this example uses the feature through the agent to leverage tool calling:

use Symfony\AI\Platform\Message\Message;
use Symfony\AI\Platform\Message\MessageBag;

// Initialize Platform, LLM and agent with processors and Clock tool

$messages = new MessageBag(Message::ofUser('What date and time is it?'));
$result = $agent->call($messages, ['response_format' => [
    'type' => 'json_schema',
    'json_schema' => [
        'name' => 'clock',
        'strict' => true,
        'schema' => [
            'type' => 'object',
            'properties' => [
                'date' => ['type' => 'string', 'description' => 'The current date in the format YYYY-MM-DD.'],
                'time' => ['type' => 'string', 'description' => 'The current time in the format HH:MM:SS.'],
            ],
            'required' => ['date', 'time'],
            'additionalProperties' => false,
        ],
    ],
]]);

dump($result->getContent()); // returns an array

Server Tools

Some platforms provide built-in server-side tools for enhanced capabilities without custom implementations.

For a complete Vertex AI setup and usage guide, see Vertex AI.

Parallel Platform Calls

Since the Platform sits on top of Symfony's HttpClient component, it supports multiple model calls in parallel, which can be useful to speed up the processing:

// Initialize Platform

$results = [];
foreach ($inputs as $input) {
    $results[] = $platform->invoke('gpt-4o-mini', $input);
}

foreach ($results as $result) {
    echo $result->asText().PHP_EOL;
}

Testing Tools

For unit or integration testing, you can use the InMemoryPlatform, which implements PlatformInterface without calling external APIs.

It supports returning either:

  • A fixed string result
  • A callable that dynamically returns a simple string or any ResultInterface based on the model, input, and options:

use Symfony\AI\Platform\InMemoryPlatform;

$platform = new InMemoryPlatform('Fake result');

$result = $platform->invoke('gpt-4o-mini', 'What is the capital of France?');

echo $result->asText(); // "Fake result"

Dynamic Text Results

$platform = new InMemoryPlatform(
    fn($model, $input, $options) => "Echo: {$input}"
);

$result = $platform->invoke('gpt-4o-mini', 'Hello AI');
echo $result->asText(); // "Echo: Hello AI"

Vector Results

use Symfony\AI\Platform\Result\VectorResult;
use Symfony\AI\Platform\Vector\Vector;

$platform = new InMemoryPlatform(
    fn() => new VectorResult(new Vector([0.1, 0.2, 0.3, 0.4]))
);

$result = $platform->invoke('gpt-4o-mini', 'vectorize this text');
$vectors = $result->asVectors(); // returns an array with one Vector([0.1, 0.2, 0.3, 0.4])

Binary Results

use Symfony\AI\Platform\Result\BinaryResult;

$platform = new InMemoryPlatform(
    fn() => new BinaryResult('fake-pdf-content', 'application/pdf')
);

$result = $platform->invoke('gpt-4o-mini', 'generate PDF document');
$binary = $result->asBinary(); // Returns Binary object with content and MIME type

Raw Results

The platform automatically reuses the getRawResult() of any ResultInterface returned by a closure. For string results, it creates an InMemoryRawResult to simulate real API response metadata.

This allows fast and isolated testing of AI-powered features without relying on live providers or HTTP requests.

Note

Parallel platform calls against real providers require `cURL` and the `ext-curl` extension to be installed.


Note

Please be aware that some embedding models also support batch processing out of the box.

This work, including the code samples, is licensed under a Creative Commons BY-SA 3.0 license.