Kafka Without the Operational Baggage: Inside Tansu's Stateless Architecture
A solo developer spent two years rethinking Kafka's architecture from first principles. The result eliminates leader elections, broker state, and the four-gigabyte heap—while keeping the protocol everyone already uses.
I've run Kafka clusters for nearly a decade. The protocol is elegant. The operational reality? Less so. Four-gigabyte heaps running 24/7. Leader elections that make you nervous during deployments. Brokers with identities and state that you treat like pets, not cattle. And the nagging feeling that most of this complexity exists to solve problems that modern infrastructure has already solved differently.
At QCon London 2026, Peter Morgan introduced Tansu, an open-source project that asks a simple question: what if Kafka's durability model is backwards?
The Core Assumption
Kafka achieves resilience through replication. It copies data between brokers, elects leaders, and maintains consensus. This made perfect sense in 2011 when Kafka was created—you couldn't trust your storage layer to be durable on its own.
Tansu starts from the opposite premise: storage is already durable. S3 promises 11 nines of durability. Postgres has continuous archiving. If your storage layer is already resilient, why replicate data between brokers at all?
Morgan spent over a decade building event-driven systems on Kafka, including platforms for Disney's MagicBand and large-scale betting systems, according to InfoQ. He's not a skeptic throwing stones from the outside—he's someone who lived with Kafka's operational model long enough to wonder if there's a better way.
What Stateless Actually Means
The architectural shift has practical consequences. Traditional Kafka brokers maintain state: partition assignments, replication logs, consumer offsets. They need stable identities and careful coordination. Morgan calls them "pets."
Tansu brokers carry no state. They're stateless proxies that delegate all durability to pluggable storage backends—S3, Postgres, SQLite, or memory. No leaders. No replication. No coordination overhead. They run in 20MB of resident memory and start in roughly 10 milliseconds.
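Tansu itself is written in Rust, but the shape of the design can be sketched in a few lines of Python: a broker object that holds no log data of its own and delegates every append and fetch to a pluggable storage interface. All of the names below are illustrative, not Tansu's actual API.

```python
from abc import ABC, abstractmethod

class Storage(ABC):
    """Pluggable durability layer (S3, Postgres, SQLite, memory...)."""
    @abstractmethod
    def append(self, topic: str, record: bytes) -> int: ...
    @abstractmethod
    def fetch(self, topic: str, offset: int) -> bytes: ...

class MemoryStorage(Storage):
    def __init__(self):
        self.logs = {}
    def append(self, topic, record):
        log = self.logs.setdefault(topic, [])
        log.append(record)
        return len(log) - 1          # offset of the new record
    def fetch(self, topic, offset):
        return self.logs[topic][offset]

class Broker:
    """Stateless proxy: no partition assignments, no replication log.
    Kill it and start another pointed at the same storage, and
    nothing is lost -- durability lives entirely in the backend."""
    def __init__(self, storage: Storage):
        self.storage = storage
    def produce(self, topic, record):
        return self.storage.append(topic, record)
    def consume(self, topic, offset):
        return self.storage.fetch(topic, offset)

store = MemoryStorage()
offset = Broker(store).produce("events", b"hello")
# A brand-new broker instance can serve the fetch:
assert Broker(store).consume("events", offset) == b"hello"
```

Because the broker owns nothing, any instance is interchangeable with any other, which is what makes scale-to-zero and 10-millisecond starts plausible.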
During his QCon demo, Morgan deployed Tansu to Fly.io as a 40MB statically-linked binary in a from-scratch container with no OS—just the binary and SSL certificates. He configured it to scale to zero, created a topic with standard Kafka CLI tools, produced a message, killed the broker, then consumed the message. The broker woke automatically when the consumer connected. The entire deployment ran on a 256MB machine.
Morgan joked that the first rule of Kafka is not to mention you're running it, because everyone in your department will immediately want topics on your cluster. When he asked how many people actually scale Kafka down in production, a single hand in the room went up.
The Postgres Story
Tansu supports multiple storage backends, but Morgan was candid about his favorite: Postgres. The original motivation for the project, he explained, was watching data flow through Kafka topics only to end up in a database anyway, and wondering why the intermediate step was necessary.
The integration goes deeper than just using Postgres as a store. A produce operation in Tansu is an INSERT (or COPY). A fetch is a SELECT. This eliminates the transactional outbox pattern entirely—that architectural workaround where you atomically write both your business data and your message queue in the same transaction to avoid the dual-write problem. With Tansu, you can use a stored procedure to update business data and queue a message in a single database transaction. No separate outbox table. No change data capture connector pulling from the outbox. Just SQL.
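The single-transaction idea can be sketched with SQLite standing in for Postgres (table and column names here are invented for illustration): the business update and the queued message commit together or roll back together, which is exactly what the outbox pattern exists to guarantee.

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
    CREATE TABLE orders (id INTEGER PRIMARY KEY, status TEXT);
    CREATE TABLE topic_orders (log_offset INTEGER PRIMARY KEY AUTOINCREMENT,
                               payload TEXT);
    INSERT INTO orders VALUES (1, 'pending');
""")

# One transaction: update business data AND queue the message.
# sqlite3's context manager commits on success, rolls back on error.
with conn:
    conn.execute("UPDATE orders SET status = 'shipped' WHERE id = 1")
    conn.execute("INSERT INTO topic_orders (payload) VALUES (?)",
                 ('{"order": 1, "event": "shipped"}',))

# A fetch is just a SELECT against the topic table.
row = conn.execute("SELECT payload FROM topic_orders").fetchone()
```

If the UPDATE or the INSERT fails, neither is visible: there is no window in which the data changed but the message was lost, and no outbox table or CDC connector to operate.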
Morgan also detailed how the Postgres backend evolved. Initially, Tansu issued sequential INSERT statements, which became a bottleneck: each statement waits for a round-trip response before the next can execute. He replaced this with Postgres's COPY FROM protocol, which streams rows without waiting for individual acknowledgements: one setup message, a stream of COPY DATA messages, and a single COPY DONE. The result is substantially higher throughput for batch ingestion.
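A back-of-envelope model makes the difference concrete (the numbers are illustrative assumptions, not benchmarks): sequential INSERTs pay one network round trip per row, while COPY pays a roughly constant number of round trips regardless of batch size.

```python
def insert_round_trips(rows: int) -> int:
    # One INSERT, one response -- per row.
    return rows

def copy_round_trips(rows: int) -> int:
    # COPY setup/response, then COPY DATA messages stream one way
    # with no per-row acknowledgement, then a single COPY DONE.
    return 2

RTT_MS = 1.0      # assumed 1 ms network round trip
batch = 10_000

wire_wait_insert = insert_round_trips(batch) * RTT_MS   # 10000.0 ms
wire_wait_copy = copy_round_trips(batch) * RTT_MS       # 2.0 ms
```

The per-row costs of parsing and writing still apply in both cases; what COPY removes is the serialized wait on the wire, which dominates for large batches.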
Schema Validation at the Broker
In standard Kafka deployments, schema enforcement is optional and happens at the client. You typically run a separate schema registry, and well-behaved clients validate against it before producing. Badly-behaved clients skip validation entirely.
Tansu flips this: if a topic has a schema (Avro, JSON, or Protobuf), the broker validates every record before writing it. Invalid data gets rejected at ingestion, not consumption. Morgan described this as a deliberate trade-off: it's slower than Kafka's pass-through approach because the broker must decompress and validate each record, but it guarantees data consistency regardless of which client produces.
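Broker-side enforcement amounts to a gate in front of the log. The sketch below is a deliberately simplified stand-in for real Avro/JSON/Protobuf validation, with a toy schema model and invented names, but it shows the essential property: the record is decoded and checked before it is ever written.

```python
import json

SCHEMAS = {
    # Topic -> required fields and their expected types (toy model).
    "payments": {"amount": float, "currency": str},
}

log: list[bytes] = []

def produce(topic: str, record: bytes) -> int:
    """Validate before writing: invalid data is rejected at
    ingestion, so consumers never see it."""
    schema = SCHEMAS.get(topic)
    if schema is not None:
        doc = json.loads(record)      # the broker must decode the record
        for field, expected_type in schema.items():
            if not isinstance(doc.get(field), expected_type):
                raise ValueError(f"record violates schema for {field!r}")
    log.append(record)
    return len(log) - 1

produce("payments", b'{"amount": 9.99, "currency": "GBP"}')   # accepted
try:
    produce("payments", b'{"amount": "lots"}')                # rejected
except ValueError:
    pass
```

The decode-and-check step is the cost Morgan described: the broker can no longer treat batches as opaque bytes, but no client, however badly behaved, can write invalid data.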
That broker-side schema awareness enables something Kafka cannot do natively: writing validated data directly into open table formats. If a topic has a Protobuf schema, Tansu can automatically write records to Apache Iceberg, Delta Lake, or Parquet, creating tables and handling schema evolution. A "sink topic" configuration skips normal storage entirely and writes exclusively to the table format, turning Tansu into a direct pipeline from Kafka-compatible producers to analytics-ready data.
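Conceptually, a sink topic is a routing decision at produce time. This sketch uses invented configuration and a plain list standing in for an Iceberg/Delta/Parquet table writer; real Tansu derives the table schema from the topic's registered schema.

```python
# Topic configuration: normal topics land in the log store,
# sink topics bypass it and go straight to a table writer.
TOPIC_CONFIG = {"clicks": "log", "clicks_analytics": "sink"}

log_store: dict[str, list[dict]] = {}
table_rows: list[dict] = []   # stand-in for a table-format writer

def route_record(topic: str, record: dict) -> None:
    if TOPIC_CONFIG.get(topic) == "sink":
        # Schema-validated records become table rows directly;
        # nothing is written to normal topic storage.
        table_rows.append(record)
    else:
        log_store.setdefault(topic, []).append(record)

route_record("clicks", {"user": 1})
route_record("clicks_analytics", {"user": 1, "ts": 1700000000})
```

Because the broker already validated the record against a schema, it knows the column names and types needed to append a well-formed table row, which is what Kafka's pass-through brokers cannot do.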
The Gaps Morgan Won't Hide
Morgan was upfront about what's missing. SSL support exists but is being reworked. There's no throttling or access control lists yet. Compaction and message deletion aren't implemented on S3. Share groups aren't planned.
The project is written in asynchronous Rust, Apache-licensed, and has accumulated 1,700 GitHub stars. Morgan has been building it solo for the past two years. He's actively looking for contributors.
As a proxy, Tansu can sit in front of an existing Kafka cluster, forwarding requests at 60,000 records per second with sub-millisecond P99 latency on modest hardware—13 megabytes of RAM on a Mac Mini, according to his presentation.
What This Means for Developers
Kafka's protocol has become the de facto standard for event streaming. Thousands of tools, libraries, and integrations are built around it. Tansu doesn't ask you to throw that away—it keeps the protocol while rethinking everything underneath.
If you're building greenfield event-driven systems, especially on cloud infrastructure where durable storage is a commodity, Tansu's architectural model is worth understanding. The operational simplicity—stateless brokers, pluggable storage, native table format integration—addresses real pain points.
For existing Kafka deployments, Tansu's proxy mode offers a potential migration path or a way to extend Kafka with features like direct Iceberg writes without replacing your entire infrastructure.
The project is early. Morgan is one person. But the core architectural insight—that Kafka's replication model is solving a problem that modern infrastructure has already solved—is the kind of first-principles thinking that occasionally produces step changes in how we build systems.
All examples and deployment demos are available in the Tansu GitHub repository.