Event Sourcing & CQRS vs. Traditional Architectures
Most systems begin with the classic CRUD (Create, Read, Update, Delete) model—simple, familiar, and quick to implement. But as requirements evolve—especially around auditability, traceability, and scalability—this approach quickly reaches its limits.
That’s where Event Sourcing, especially when combined with CQRS (Command Query Responsibility Segregation), becomes a powerful and future-proof alternative.
comby leverages the full power of ES/CQRS to solve the challenges traditional architectures struggle with—cleanly, efficiently, and at scale. Let us compare both approaches.
What is CRUD?
CRUD is the standard, traditional model most developers are familiar with. It allows clients to Create new records, Read existing records, Update records in place, or Delete records. The classic CRUD approach maintains only a snapshot of the current state. Here is how a typical update works. Let's say Bob initiates a payment of 100 EUR:
PaymentId | User | Amount |
---|---|---|
1234 | Bob | 100 |
Later, Bob changes the amount to 200 EUR. In a CRUD system, the update operation modifies the same record — once changed, the old data is gone:
PaymentId | User | Amount |
---|---|---|
1234 | Bob | 200 |
The system now holds only the latest state.
What is ES/CQRS?
Event Sourcing captures every change in the system as an immutable event on the write side. Instead of storing the current state directly, the system's state (the read side) is reconstructed by replaying the ordered sequence of these events. Here is an example of what the write side stores:
EventId | Event |
---|---|
1 | PaymentCreatedEvent(amount: 100) |
... | ... |
4321 | PaymentUpdatedEvent(amount: 200) |
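Replaying such an event stream to rebuild state can be sketched like this (the event types and the replay loop are illustrative; comby's actual API may differ):

```go
package main

import "fmt"

// Two hypothetical domain events for a payment aggregate.
type PaymentCreatedEvent struct{ Amount int }
type PaymentUpdatedEvent struct{ Amount int }

// PaymentState is the state reconstructed from the event stream.
type PaymentState struct{ Amount int }

// Replay folds the ordered event sequence into the current state.
func Replay(events []any) PaymentState {
	var s PaymentState
	for _, e := range events {
		switch ev := e.(type) {
		case PaymentCreatedEvent:
			s.Amount = ev.Amount
		case PaymentUpdatedEvent:
			s.Amount = ev.Amount
		}
	}
	return s
}

func main() {
	events := []any{
		PaymentCreatedEvent{Amount: 100},
		PaymentUpdatedEvent{Amount: 200},
	}
	// The current state is derived from the events; the history stays intact.
	fmt.Println(Replay(events).Amount)
}
```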
The system holds the complete audit trail, making it possible to trace any change back to its origin. From these events, projections (read models) are built—automatically or via custom logic. A projection may look just like a CRUD-style record:
PaymentId | User | Amount |
---|---|---|
1234 | Bob | 200 |
Or it can be enriched with additional context, such as related entities or metadata from additional events (imaginary examples: a PaymentAssignedToProjectEvent or a TransactionUpdatedEvent that relate to the payment in some way):
PaymentId | User | Amount | Project | Transaction | LastUpdated |
---|---|---|---|---|---|
1234 | Bob | 200 | AnyProject | 762033 | 2025-06-26 12:00:00 |
Projections are decoupled from the write model and can be tailored for specific use cases—reporting, analytics, APIs, etc.
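A projection that enriches the payment with project and transaction data could be sketched as follows (the event names follow the imaginary examples above; the Apply wiring is an assumption, not comby's real API):

```go
package main

import "fmt"

// Hypothetical domain events, as in the tables above.
type PaymentCreatedEvent struct{ Amount int }
type PaymentUpdatedEvent struct{ Amount int }
type PaymentAssignedToProjectEvent struct{ Project string }
type TransactionUpdatedEvent struct{ Transaction int }

// PaymentProjection is a read model tailored for one query use case.
type PaymentProjection struct {
	User        string
	Amount      int
	Project     string
	Transaction int
}

// Apply folds one event into the projection; unrelated events are ignored.
func (p *PaymentProjection) Apply(e any) {
	switch ev := e.(type) {
	case PaymentCreatedEvent:
		p.Amount = ev.Amount
	case PaymentUpdatedEvent:
		p.Amount = ev.Amount
	case PaymentAssignedToProjectEvent:
		p.Project = ev.Project
	case TransactionUpdatedEvent:
		p.Transaction = ev.Transaction
	}
}

func main() {
	p := PaymentProjection{User: "Bob"}
	for _, e := range []any{
		PaymentCreatedEvent{100},
		PaymentAssignedToProjectEvent{"AnyProject"},
		TransactionUpdatedEvent{762033},
		PaymentUpdatedEvent{200},
	} {
		p.Apply(e)
	}
	// The enriched read model, ready to serve queries.
	fmt.Printf("%+v\n", p)
}
```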
Performance
Custom projections allow you to include exactly the data your application needs — no more, no less. This brings major benefits for performance and scalability. In practice, up to 95% of all requests can be answered directly from the projection layer, often without touching a traditional database. Most projections are stored in memory or caching systems like Redis, while the original events remain untouched and safely stored in the event store.
Traditional CRUD Approach
In a typical CRUD system, handling a single user request often requires:
- Calling multiple backend services
- Performing multiple live database queries
- Aggregating the results manually
- Handling possible failures from any downstream service
Expressed in a diagram, it might look something like this:
This adds latency, increases system coupling, and complicates scaling under load.
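The aggregation pattern above can be sketched in a few lines (the three fetch functions stand in for live calls to separate backend services; all names and return values are illustrative):

```go
package main

import "fmt"

// fetchUser, fetchPayment, and fetchProject stand in for live calls
// to three separate backend services.
func fetchUser(id string) (string, error)    { return "Bob", nil }
func fetchPayment(id string) (int, error)    { return 200, nil }
func fetchProject(id string) (string, error) { return "AnyProject", nil }

// handleRequest aggregates the results manually; every downstream
// failure must be handled here, and each call adds latency.
func handleRequest(id string) (string, error) {
	user, err := fetchUser(id)
	if err != nil {
		return "", fmt.Errorf("user service failed: %w", err)
	}
	amount, err := fetchPayment(id)
	if err != nil {
		return "", fmt.Errorf("payment service failed: %w", err)
	}
	project, err := fetchProject(id)
	if err != nil {
		return "", fmt.Errorf("project service failed: %w", err)
	}
	return fmt.Sprintf("%s paid %d EUR for %s", user, amount, project), nil
}

func main() {
	out, _ := handleRequest("1234")
	fmt.Println(out)
}
```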
Event Sourcing with Projections
With Event Sourcing (as implemented in comby), most queries can be answered like this:
- A single query endpoint responds to the request
- It uses a precomputed projection tailored for that use case
- The data is ready-to-serve, without real-time joins or cross-service lookups
- Response time is extremely low — even under heavy load
Read-only HTTP requests are decoupled from write operations. comby embraces this model by design—enabling high performance without sacrificing consistency or traceability. In practice, however, only about 5% of all HTTP requests are write operations; these are processed differently, by the write side, whose speed is comparable to CRUD.
Energy Consumption
Energy efficiency is becoming a key factor in modern software design—not only for cost savings but also for environmental impact. Let's compare how CRUD and Event Sourcing + CQRS (comby) differ in their energy footprint when handling HTTP requests.
We assume that each backend service call consumes an average of 0.1 Wh. We also assume that the traditional CRUD approach requires 3 service calls per request for both read and write operations, while the Event Sourcing + CQRS approach requires only 1 service call per write request (still budgeted at 0.3 Wh, since the write side persists events and updates projections, comparable to CRUD) and 0 service calls per read request. Let us illustrate this in a table:
Architecture | Operation | Service Calls per Request | Energy Consumption per Request (Wh) |
---|---|---|---|
CRUD | Read | 3 | 0.3 |
CRUD | Write | 3 | 0.3 |
Event Sourcing + CQRS | Read | 0 | 0.001 (in-memory lookup) |
Event Sourcing + CQRS | Write | 1 | 0.3 |
If we now assume that, on average, 95% of all HTTP operations are read operations, then for 1,000 HTTP requests we get:
- CRUD: 1,000 * 0.3 = 300 Wh
- Event Sourcing + CQRS: 950 * 0.001 + 50 * 0.3 = 15.95 Wh
Result: Event Sourcing + CQRS uses ~18x less energy under this scenario.
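The arithmetic behind these figures can be checked with a few lines of Go (the per-request values come straight from the table above):

```go
package main

import "fmt"

// totals returns the energy consumption in Wh for n requests, given a
// read share, using the per-request figures from the table above.
func totals(n, readShare float64) (crudWh, esWh float64) {
	const (
		crudPerReq = 0.3   // CRUD: 3 service calls * 0.1 Wh (reads and writes)
		esReadWh   = 0.001 // ES/CQRS read: in-memory projection lookup
		esWriteWh  = 0.3   // ES/CQRS write: comparable to CRUD
	)
	crudWh = n * crudPerReq
	esWh = n*readShare*esReadWh + n*(1-readShare)*esWriteWh
	return
}

func main() {
	crud, es := totals(1000, 0.95)
	fmt.Printf("CRUD: %.2f Wh\n", crud)    // CRUD: 300.00 Wh
	fmt.Printf("ES/CQRS: %.2f Wh\n", es)   // ES/CQRS: 15.95 Wh
	fmt.Printf("factor: %.1fx\n", crud/es) // factor: 18.8x
}
```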
If we compare this with large products, some of which also use Event Sourcing and CQRS, the following table emerges. We use one day as the metric and assume that 1 kWh costs 0.30 EUR (based on the average price in Germany):
Product | Requests per Day (millions) | Energy Consumption with CRUD (MWh) | Energy Consumption with ES/CQRS (MWh) | Cost Savings with ES/CQRS (thousand EUR) |
---|---|---|---|---|
Google Workspace | 50,000 | 15,000 | 800 | 4,260 |
GitHub | 2,000 | 600 | 30 | 170 |
Keep in mind that these request volumes are purely hypothetical, but the scale of the difference is still impressive.
By minimizing the number of service calls and optimizing read operations, comby's Event Sourcing + CQRS approach delivers significantly lower energy consumption—making it not only more performant but also more sustainable. This difference grows even more pronounced at scale, helping reduce both operational costs and environmental impact.
Side‑by‑Side Feature Comparison
Feature | CRUD | Event Sourcing + CQRS (comby ) |
---|---|---|
Data Handling | Overwrites current state | Appends immutable events; state is replayable |
State History | Lost unless manually tracked | Full, built-in history via the event store |
Read/Write Models | Shared schema for reads & writes | Fixed write schema; read models (projections) optimized for query performance |
Performance | Can degrade under contention | High throughput; append-only writes + fast in-memory or cache-based reads |
Scalability | Mostly vertical; replication is hard | Naturally horizontal |
Auditing | Requires custom logging implementation | Built-in audit trail at the event level |
Streaming & Integration | Polling or Change Data Capture (CDC) | Native support for real-time event streaming |
Consistency | Strong consistency (ACID) | Eventual consistency (write → project asynchronously) |
Complexity | Easier to start, less flexible | More sophisticated, but highly flexible and traceable |
When Event Sourcing Really Pays Off
- Auditability: Financial systems, compliance, traceability
- Complex workflows: Order management, inventory systems
- Debugging & Historical Analysis: Replay past workflows, test with production data
- Integration: Spin up new services or analytics pipelines from historical data
Conclusion
CRUD is a great starting point—but comby's adoption of Event Sourcing + CQRS is designed for robust, scalable, and auditable systems:
- State becomes a story, not just a snapshot
- Events become the backbone of real-time, stream-first architecture
- Projections and read models provide flexibility without costly migrations or performance penalties
Appendix: comby vs. Kafka
It's common to compare Event Sourcing (as implemented in comby) with technologies like Apache Kafka. While both deal with events and streams, they serve very different purposes.
Core Difference
Aspect | comby (Event Sourcing + CQRS) | Apache Kafka |
---|---|---|
Purpose | Application-level architecture & persistence | Distributed event streaming platform |
Event Model | Domain events, modeled explicitly in Go (business intent) | Generic byte/message stream |
Replayability | Built-in; used to rebuild application state | Possible; requires consumers to handle logic |
Storage | Custom event store, tightly integrated with domain logic | Log-based storage system |
Querying | Via projections/read models (highly optimized) | Not designed for querying |
Processing Logic | Embedded in application services (comby handlers) | Done via consumers, Kafka Streams, etc. |
Consistency Model | Application-level guarantees (eventual or strong) | Delivery guarantees: at-least-once by default, exactly-once with transactions |
When to Use What?
Use Kafka if:
- You need to stream large volumes of heterogeneous data across teams and systems
- You want a durable, distributed log for decoupling producers and consumers
- You build data pipelines, analytics systems, or integration hubs
Use comby (Event Sourcing + CQRS) if:
- You want to model domain logic explicitly with full auditability
- You need queryable projections and read-side optimization
- You require business-centric, traceable state over time
- You're building transactional, reactive systems in Go
Can They Work Together?
Yes! comby can emit domain events to Kafka for integration with other systems—bridging internal consistency with external event streaming.
For example:
- comby stores domain events and updates projections
- Selected events are forwarded to Kafka for consumption by analytics, notification, or other services
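A hedged sketch of this bridge, using a generic publisher interface rather than a concrete Kafka client (EventPublisher, logPublisher, and ForwardSelected are illustrative names; in practice the publisher would wrap a real Kafka producer library):

```go
package main

import "fmt"

// DomainEvent is a minimal illustrative event envelope.
type DomainEvent struct {
	Type    string
	Payload []byte
}

// EventPublisher abstracts the outbound transport (e.g. a Kafka producer).
type EventPublisher interface {
	Publish(topic string, e DomainEvent) error
}

// logPublisher is an in-memory stand-in implementation for the sketch.
type logPublisher struct{ published []string }

func (l *logPublisher) Publish(topic string, e DomainEvent) error {
	l.published = append(l.published, topic+":"+e.Type)
	return nil
}

// ForwardSelected forwards only whitelisted event types downstream,
// after they have been stored and projected internally.
func ForwardSelected(p EventPublisher, topic string, events []DomainEvent, allow map[string]bool) error {
	for _, e := range events {
		if !allow[e.Type] {
			continue // internal-only event, not shared externally
		}
		if err := p.Publish(topic, e); err != nil {
			return err
		}
	}
	return nil
}

func main() {
	pub := &logPublisher{}
	events := []DomainEvent{
		{Type: "PaymentCreatedEvent"},
		{Type: "InternalHousekeepingEvent"},
		{Type: "PaymentUpdatedEvent"},
	}
	allow := map[string]bool{"PaymentCreatedEvent": true, "PaymentUpdatedEvent": true}
	_ = ForwardSelected(pub, "payments", events, allow)
	fmt.Println(pub.published)
}
```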
Summary
- Kafka is a powerful infrastructure layer for event distribution.
- comby is a structured application framework for building event-native, business-aware systems.
They solve different problems—and can be even more powerful together.