For Symfony’s 20th anniversary at SymfonyCon 2025, we wanted a moment that truly involved everyone in the room. The idea was simple but ambitious: 1200 developers in a conference hall, each with their phone showing a colored rectangle we could control live from the stage.

One click, and the whole room turns blue. Another, and only contributors’ phones stay lit. A final filter, “20+ years in the community”, and suddenly it’s just Fabien Potencier’s phone shining while all the others go dark.

On paper, it’s “just” a colored screen. In reality, it’s more than a thousand persistent connections, updated in real time, with no noticeable latency and without taking the infrastructure down.

In this article, I want to focus on the experience and the architectural choices rather than hosting details. If you’re interested in the full infrastructure deep-dive (Upsun, advanced configuration, etc.), you can read the original, more technical article on the Upsun blog: How we scaled live connections for 1200 developers at SymfonyCon.

[Photo: SymfonyCon Amsterdam 2025, phones lit up in the audience]

The real challenge behind a “simple” colored screen

With a classic PHP-FPM app, the model is familiar: one request, one worker, one response, worker released. Perfect for short-lived requests.

Here, each phone:

  • opens a connection,
  • waits for updates,
  • must receive changes instantly.

So each open connection holds one worker for as long as it stays open. At hundreds or thousands of connections, that becomes a problem if you rely on PHP-FPM alone.
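To make the constraint concrete, here is a minimal sketch of what a naive SSE endpoint looks like on PHP-FPM. It is illustrative only (not our actual code): the point is that the loop keeps one worker busy for the entire lifetime of the connection, even while it has nothing to send.

```php
<?php
// Naive SSE endpoint served by PHP-FPM (illustrative sketch only).
header('Content-Type: text/event-stream');
header('Cache-Control: no-cache');

// Disable the time limit and output buffering so the stream can run.
set_time_limit(0);
while (ob_get_level() > 0) {
    ob_end_flush();
}

while (true) {
    // In a real endpoint you would poll some shared state here.
    echo 'data: ' . json_encode(['color' => 'blue']) . "\n\n";
    flush();

    sleep(1); // one PHP-FPM worker held per connected phone, even while idle
}
```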

That’s where FrankenPHP and Mercure come in. FrankenPHP, built on top of Caddy, ships with a Mercure hub that handles Server-Sent Events (SSE) using async I/O, so it can hold many open connections without tying up one PHP worker per client.

For our use case – sending simple, real-time events to a large audience – this combination was ideal.

Bringing FrankenPHP and Mercure into a real-world scenario

We set up a Mercure hub with FrankenPHP and plugged it into a Symfony application that acted as the control interface:

  • on the stage side, a Symfony UI let me choose which group to light up (everyone, contributors, seniority, etc.);
  • on the audience side, each phone opened an SSE connection to the Mercure hub and updated its color whenever a new event arrived.

Architecturally, this is about decoupling:

  • the business logic (who lights up when, based on which criteria),
  • from the real-time broadcasting (pushing updates to hundreds of clients at once).

Symfony does what it does best: controllers, business rules, security, templating. Mercure takes over the job of pushing real-time updates to clients.
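As an illustration of that split, here is a hedged sketch of what the publishing side can look like with the symfony/mercure bundle. The route, topic, and payload fields are invented for the example, not the exact ones we used; the `HubInterface` and `Update` classes are the bundle’s real API.

```php
<?php

namespace App\Controller;

use Symfony\Bundle\FrameworkBundle\Controller\AbstractController;
use Symfony\Component\HttpFoundation\Request;
use Symfony\Component\HttpFoundation\Response;
use Symfony\Component\Mercure\HubInterface;
use Symfony\Component\Mercure\Update;
use Symfony\Component\Routing\Attribute\Route;

class StageController extends AbstractController
{
    // Called when the speaker clicks a button in the control UI.
    #[Route('/stage/light', name: 'stage_light', methods: ['POST'])]
    public function light(Request $request, HubInterface $hub): Response
    {
        // The business decision stays in Symfony: which group, which color.
        $update = new Update(
            'https://example.com/conference/screens', // illustrative topic
            json_encode([
                'color' => $request->request->get('color', '#0000ff'),
                'group' => $request->request->get('group', 'everyone'),
            ])
        );

        // Broadcasting to every connected phone is delegated to the Mercure hub.
        $hub->publish($update);

        return new Response('', Response::HTTP_NO_CONTENT);
    }
}
```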

Splitting responsibilities: Symfony + PHP-FPM on one side, Mercure on the other

Once the Mercure hub was wired up, tests looked great: phones connected, color changes were instant, and the control interface felt smooth. But with a live demo in front of 1200 people, we wanted an architecture that was as simple and robust as possible.

The key realization:

  • we didn’t need the whole application on FrankenPHP: keeping PHP-FPM for all “regular” routes was fine;
  • the only part that required special handling was the persistent connections (Mercure).

In this setup:

  • the Symfony control interface (PHP-FPM) handles the click on a button,
  • Symfony decides which message to send (for example: “light all screens blue” or “light only contributors”),
  • Symfony sends that message to the Mercure hub over HTTP with a publisher JWT,
  • Mercure broadcasts the event to all connected phones.

The phones don’t talk directly to Symfony for this part: they stay connected to Mercure and simply update their color whenever an event is pushed.
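Under the hood, that “HTTP with a publisher JWT” step is a simple form-encoded POST to the hub. Here is a rough sketch using Symfony’s HttpClient; the hub URL, topic, and JWT are placeholders, and in practice the symfony/mercure bundle builds and sends this request for you.

```php
<?php
// Roughly what publishing to the Mercure hub looks like at the protocol level.
use Symfony\Component\HttpClient\HttpClient;

$hubUrl       = 'https://example.com/.well-known/mercure';   // placeholder hub URL
$publisherJwt = getenv('MERCURE_PUBLISHER_JWT');              // JWT signed with the hub's publisher key

$client   = HttpClient::create();
$response = $client->request('POST', $hubUrl, [
    'headers' => ['Authorization' => 'Bearer ' . $publisherJwt],
    // An array body is sent as application/x-www-form-urlencoded.
    'body' => [
        'topic' => 'https://example.com/conference/screens',
        'data'  => json_encode(['color' => 'blue', 'group' => 'contributors']),
    ],
]);

// The hub answers with the id of the update it just broadcast.
echo $response->getContent();
```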

This clear separation preserved:

  • the robustness of PHP-FPM for the Symfony app,
  • the scalability of Mercure/FrankenPHP for long-lived connections.

Show time: what really matters

On the day of the keynote, everything worked:

  • all 1200 phones connected,
  • state changes were instantaneous,
  • the filters (everyone, contributors, seniority, etc.) behaved as expected,
  • and everything stayed stable.

Technically, that was satisfying. On a human level, it was even better: seeing the whole Symfony community participate in a shared visual experience, controlled live from the stage, was exactly the effect we were aiming for.

What I’m taking away from this

A few lessons I’d share with the Symfony community:

  1. Symfony stays at the center. Even in highly real-time scenarios, Symfony remains the brain: decisions, business rules, security, admin UI, and more.
  2. Don’t force one technology to do everything. PHP-FPM is excellent for short, classic requests. Mercure/FrankenPHP shine with many long-lived connections. Combining them thoughtfully is often more pragmatic than a big-bang migration.
  3. Test under realistic conditions. What works with 10 connections may fail at 1000. With one day to go, a simpler, easier-to-reason-about architecture was the safest choice.
  4. Share your stories. Real-world experiments, pivots, and compromises are what help the Symfony, FrankenPHP, Mercure, and surrounding ecosystems move forward.

If you’d like the full infrastructure and configuration story, including concrete snippets and environment details, you can read the original article on Upsun’s blog: How we scaled live connections for 1200 developers at SymfonyCon.

Published in #Community #Symfony