
Event Sourcing & CQRS vs. Traditional Architectures

Most systems begin with the classic CRUD (Create, Read, Update, Delete) model—simple, familiar, and quick to implement. But as requirements evolve—especially around auditability, traceability, and scalability—this approach quickly reaches its limits.

That’s where Event Sourcing, especially when combined with CQRS (Command Query Responsibility Segregation), becomes a powerful and future-proof alternative.

comby leverages the full power of ES/CQRS to solve the challenges traditional architectures struggle with—cleanly, efficiently, and at scale. Let us compare both approaches.

What is CRUD?

CRUD is the standard, traditional model most developers are familiar with. It allows clients to Create new records, Read existing records, Update records in place, or Delete records. The classic CRUD approach maintains only a snapshot of the current state. Here’s how a typical update works. Let’s say Bob initiates a payment of 100 EUR:

PaymentId | User | Amount
1234      | Bob  | 100

Later, Bob changes the amount to 200 EUR. In a CRUD system, the update operation modifies the same record — once changed, the old data is gone:

PaymentId | User | Amount
1234      | Bob  | 200

The system now holds only the latest state.
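
As a minimal sketch of that overwrite (the payments table, its column names, and the Postgres driver are illustrative assumptions, not tied to any specific system):

```go
package main

import (
	"context"
	"database/sql"
	"log"

	_ "github.com/lib/pq" // Postgres driver; an illustrative choice
)

func main() {
	// Assumes a "payments" table with columns payment_id, user_name, amount.
	db, err := sql.Open("postgres", "postgres://localhost/payments?sslmode=disable")
	if err != nil {
		log.Fatal(err)
	}
	defer db.Close()

	// The update overwrites the row in place -- the previous amount (100) is gone.
	if _, err := db.ExecContext(context.Background(),
		"UPDATE payments SET amount = $1 WHERE payment_id = $2", 200, 1234); err != nil {
		log.Fatal(err)
	}
}
```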

What is ES/CQRS?

Event Sourcing captures every change in the system as an immutable event on the write side. Instead of storing the current state directly, the system’s state (read side) is reconstructed by replaying the ordered sequence of these events. Here is an example of what the write side stores:

EventId | Event
1       | PaymentCreatedEvent(amount: 100)
...     | ...
4321    | PaymentUpdatedEvent(amount: 200)

The system holds the complete audit trail, making it possible to trace any change back to its origin. From these events, projections (read models) are built—automatically or via custom logic. A projection may look just like a CRUD-style record:

PaymentId | User | Amount
1234      | Bob  | 200

Or it can be enriched with additional context, such as related entities or metadata from other events (imaginary examples: PaymentAssignedToProjectEvent or TransactionUpdatedEvent, which relate to the payment in some way):

PaymentId | User | Amount | Project    | Transaction | LastUpdated
1234      | Bob  | 200    | AnyProject | 762033      | 2025-06-26 12:00:00

Projections are decoupled from the write model and can be tailored for specific use cases—reporting, analytics, APIs, etc.
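
To make the write/read split concrete, here is a minimal, self-contained Go sketch. The event and projection types are illustrative, not comby’s actual API: events are appended to an ordered log, and a read model is rebuilt by replaying them.

```go
package main

import "fmt"

// Domain events on the write side (names are illustrative).
type Event interface{ isEvent() }

type PaymentCreated struct {
	PaymentID int
	User      string
	Amount    int
}

type PaymentUpdated struct {
	PaymentID int
	Amount    int
}

func (PaymentCreated) isEvent() {}
func (PaymentUpdated) isEvent() {}

// Read-side projection: a CRUD-like view derived from the events.
type PaymentView struct {
	PaymentID int
	User      string
	Amount    int
}

// Replay the ordered event stream to rebuild the current state.
func project(events []Event) map[int]PaymentView {
	views := make(map[int]PaymentView)
	for _, e := range events {
		switch ev := e.(type) {
		case PaymentCreated:
			views[ev.PaymentID] = PaymentView{ev.PaymentID, ev.User, ev.Amount}
		case PaymentUpdated:
			v := views[ev.PaymentID]
			v.Amount = ev.Amount
			views[ev.PaymentID] = v
		}
	}
	return views
}

func main() {
	// Append-only event log: nothing is ever overwritten.
	stream := []Event{
		PaymentCreated{PaymentID: 1234, User: "Bob", Amount: 100},
		PaymentUpdated{PaymentID: 1234, Amount: 200},
	}
	fmt.Println(project(stream)[1234]) // {1234 Bob 200}
}
```

Because the events themselves are never modified, a projection like this can be rebuilt from scratch at any time.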

Performance

Custom projections allow you to include exactly the data your application needs — no more, no less. This brings major benefits for performance and scalability. In practice, up to 95% of all requests can be answered directly from the projection layer, often without touching a traditional database. Most projections are kept in memory or in caching systems like Redis, while the original events remain untouched and safely stored in the event store.
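
As a hedged sketch of the caching idea (using the go-redis client; the key layout and JSON shape are assumptions, not comby’s internal format), a projection entry might be kept in Redis like this:

```go
package main

import (
	"context"
	"encoding/json"
	"fmt"
	"log"

	"github.com/redis/go-redis/v9"
)

type PaymentView struct {
	PaymentID int    `json:"paymentId"`
	User      string `json:"user"`
	Amount    int    `json:"amount"`
}

func main() {
	ctx := context.Background()
	rdb := redis.NewClient(&redis.Options{Addr: "localhost:6379"})

	// Write the projection entry whenever the read model is updated.
	view := PaymentView{PaymentID: 1234, User: "Bob", Amount: 200}
	b, _ := json.Marshal(view)
	if err := rdb.Set(ctx, "payment:1234", b, 0).Err(); err != nil {
		log.Fatal(err)
	}

	// Later, a query is served straight from the cache -- no database hit.
	raw, err := rdb.Get(ctx, "payment:1234").Result()
	if err != nil {
		log.Fatal(err)
	}
	fmt.Println(raw)
}
```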

Traditional CRUD Approach

In a typical CRUD system, handling a single user request often requires:

  • Calling multiple backend services
  • Performing multiple live database queries
  • Aggregating the results manually
  • Handling possible failures from any downstream service


This adds latency, increases system coupling, and complicates scaling under load.
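
For illustration, a hedged sketch of such a read handler (the service URLs and response shapes are invented for this example): one incoming request fans out to several services, and the results must be merged by hand.

```go
package main

import (
	"encoding/json"
	"net/http"
)

// fetchJSON calls a downstream service and decodes its JSON body.
func fetchJSON(url string) (map[string]any, error) {
	resp, err := http.Get(url)
	if err != nil {
		return nil, err
	}
	defer resp.Body.Close()
	var out map[string]any
	return out, json.NewDecoder(resp.Body).Decode(&out)
}

// One read request triggers several downstream calls; every call adds
// latency and a failure mode, and the results are aggregated manually.
func handleGetPayment(w http.ResponseWriter, r *http.Request) {
	payment, err := fetchJSON("http://payment-service/payments/1234")
	if err != nil {
		http.Error(w, err.Error(), http.StatusBadGateway)
		return
	}
	user, err := fetchJSON("http://user-service/users/bob")
	if err != nil {
		http.Error(w, err.Error(), http.StatusBadGateway)
		return
	}
	json.NewEncoder(w).Encode(map[string]any{"payment": payment, "user": user})
}

func main() {
	http.HandleFunc("/payments/1234", handleGetPayment)
	http.ListenAndServe(":8080", nil)
}
```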

Event Sourcing with Projections

With Event Sourcing (as implemented in comby), most queries can be answered like this:

  • A single query endpoint responds to the request
  • It uses a precomputed projection tailored for that use case
  • The data is ready-to-serve, without real-time joins or cross-service lookups
  • Response time is extremely low — even under heavy load

Read-only HTTP requests are decoupled from write operations. comby embraces this model by design, enabling high performance without sacrificing consistency or traceability. In practice, roughly 5% of all HTTP requests are write operations; these are processed by the write side, whose speed is comparable to CRUD.
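
For contrast, a hedged sketch of the read path described above (the handler and projection names are illustrative, not comby’s API): the query endpoint simply returns the precomputed projection.

```go
package main

import (
	"encoding/json"
	"net/http"
	"sync"
)

// In-memory projection, kept up to date by the event handlers on the write side.
var (
	mu          sync.RWMutex
	projections = map[string]map[string]any{
		"1234": {"paymentId": 1234, "user": "Bob", "amount": 200},
	}
)

// The query endpoint serves ready-made data: no joins, no cross-service calls.
func handleGetPayment(w http.ResponseWriter, r *http.Request) {
	mu.RLock()
	view, ok := projections["1234"]
	mu.RUnlock()
	if !ok {
		http.NotFound(w, r)
		return
	}
	json.NewEncoder(w).Encode(view)
}

func main() {
	http.HandleFunc("/payments/1234", handleGetPayment)
	http.ListenAndServe(":8080", nil)
}
```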

Energy Consumption

Energy efficiency is becoming a key factor in modern software design—not only for cost savings but also for environmental impact. Let's compare how CRUD and Event Sourcing + CQRS (comby) differ in their energy footprint when handling HTTP requests.

We assume that a traditional HTTP request consumes an average of 0.1 Wh per service call. We also assume that the traditional CRUD approach requires 3 service calls per request for both read and write operations, while the Event Sourcing + CQRS approach requires only 1 service call per write request and 0 service calls per read request. Illustrated as a table:

Architecture          | Operation | Service Calls per Request | Energy Consumption per Request (Wh)
CRUD                  | Read      | 3                         | 0.3
CRUD                  | Write     | 3                         | 0.3
Event Sourcing + CQRS | Read      | 0                         | 0.001 (in-memory lookup)
Event Sourcing + CQRS | Write     | 1                         | 0.3 (assumed comparable to CRUD)

If we now assume that, on average, 95% of all HTTP operations are reads, we get the following results for 1,000 HTTP requests:

  • CRUD: 1,000 * 0.3 = 300 Wh
  • Event Sourcing + CQRS: 950 * 0.001 + 50 * 0.3 = 15.95 Wh

Result: Event Sourcing + CQRS uses ~18x less energy under this scenario.

If we compare this with large products, some of which also use Event Sourcing and CQRS, the following table emerges. We use one day as the time frame and assume that 1 kWh costs 0.30 EUR (based on the average price in Germany):

Product          | Average Requests per Day (millions) | Energy Consumption with CRUD (MWh) | Energy Consumption with ES/CQRS (MWh) | Cost Savings with ES/CQRS (thousand EUR)
Google Workspace | 50,000                              | 15,000                             | 800                                   | 4,260
GitHub           | 2,000                               | 600                                | 30                                    | 170

Keep in mind that this per-day calculation is purely hypothetical, but the figures are still impressive.
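
The table’s figures can be reproduced, up to rounding, from the stated assumptions (0.3 Wh per CRUD request, 0.001 Wh per ES/CQRS read, 0.3 Wh per ES/CQRS write, 95% reads, 0.30 EUR per kWh). A small sketch of the arithmetic:

```go
package main

import "fmt"

func main() {
	const (
		crudWhPerRequest = 0.3   // CRUD: 3 service calls * 0.1 Wh
		esReadWh         = 0.001 // in-memory lookup
		esWriteWh        = 0.3   // write side assumed comparable to CRUD
		readShare        = 0.95
		eurPerKWh        = 0.30
	)

	// Requests per day, in millions (hypothetical figures from the table above).
	products := map[string]float64{"Google Workspace": 50000, "GitHub": 2000}

	for name, millions := range products {
		requests := millions * 1e6
		crudWh := requests * crudWhPerRequest
		esWh := requests * (readShare*esReadWh + (1-readShare)*esWriteWh)
		savingsEUR := (crudWh - esWh) / 1000 * eurPerKWh // Wh -> kWh -> EUR
		fmt.Printf("%s: CRUD %.0f MWh, ES/CQRS %.0f MWh, savings %.0f EUR/day\n",
			name, crudWh/1e6, esWh/1e6, savingsEUR)
	}
}
```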

By minimizing the number of service calls and optimizing read operations, comby's Event Sourcing + CQRS approach delivers significantly lower energy consumption—making it not only more performant but also more sustainable. This difference grows even more pronounced at scale, helping reduce both operational costs and environmental impact.

Side‑by‑Side Feature Comparison

Feature                 | CRUD                                    | Event Sourcing + CQRS (comby)
Data Handling           | Overwrites current state                | Appends immutable events; state is replayable
State History           | Lost unless manually tracked            | Full, built-in history via the event store
Read/Write Models       | Shared schema for reads & writes        | Fixed write schema; read models (projections) optimized for query performance
Performance             | Slower under contention                 | High throughput; append-only writes + fast in-memory or cache-based reads
Scalability             | Mostly vertical; replication is hard    | Naturally horizontal
Auditing                | Requires custom logging implementation  | Built-in audit trail at the event level
Streaming & Integration | Polling or Change Data Capture (CDC)    | Native support for real-time event streaming
Consistency             | Strong consistency (ACID)               | Eventual consistency (write → project asynchronously)
Complexity              | Easier to start, less flexible          | More sophisticated, but highly flexible and traceable

When Event Sourcing Really Pays Off

  • Auditability: Financial systems, compliance, traceability
  • Complex workflows: Order management, inventory systems
  • Debugging & Historical Analysis: Replay past workflows, test with production data
  • Integration: Spin up new services or analytics pipelines from historical data

Conclusion

CRUD is a great starting point—but comby’s adoption of Event Sourcing + CQRS is designed for robust, scalable, and auditable systems:

  • State becomes a story, not just a snapshot
  • Events become the backbone of real-time, stream-first architecture
  • Projections and read models provide flexibility without costly migrations or performance penalties

Appendix: comby vs. Kafka

It's common to compare Event Sourcing (as implemented in comby) with technologies like Apache Kafka. While they both deal with events and streams, they serve very different purposes.

Core Difference

Aspect            | comby (Event Sourcing + CQRS)                             | Apache Kafka
Purpose           | Application-level architecture & persistence              | Distributed event streaming platform
Event Model       | Domain events, modeled explicitly in Go (business intent) | Generic byte/message stream
Replayability     | Built-in; used to rebuild application state               | Possible; requires consumers to handle logic
Storage           | Custom event store, tightly integrated with domain logic  | Log-based storage system
Querying          | Via projections/read models (highly optimized)            | Not designed for querying
Processing Logic  | Embedded in application services (comby handlers)         | Done via consumers, Kafka Streams, etc.
Consistency Model | Application-level guarantees (eventual or strong)         | At-most-once / at-least-once delivery

When to Use What?

  • Use Kafka if:

    • You need to stream large volumes of heterogeneous data across teams and systems
    • You want a durable, distributed log for decoupling producers and consumers
    • You build data pipelines, analytics systems, or integration hubs
  • Use comby (Event Sourcing + CQRS) if:

    • You want to model domain logic explicitly with full auditability
    • You need queryable projections and read-side optimization
    • You require business-centric, traceable state over time
    • You're building transactional, reactive systems in Go

Can They Work Together?

Yes! comby can emit domain events to Kafka for integration with other systems—bridging internal consistency with external event streaming.

For example:

  • comby stores domain events and updates projections
  • Selected events are forwarded to Kafka for consumption by analytics, notification, or other services
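
A hedged sketch of that bridge, using the segmentio/kafka-go client; the topic name, event shape, and forwarding hook are assumptions rather than comby’s actual API:

```go
package main

import (
	"context"
	"encoding/json"
	"log"

	"github.com/segmentio/kafka-go"
)

// Illustrative domain event that has already been persisted in the event store.
type PaymentUpdated struct {
	PaymentID int `json:"paymentId"`
	Amount    int `json:"amount"`
}

func main() {
	writer := &kafka.Writer{
		Addr:  kafka.TCP("localhost:9092"),
		Topic: "payment-events",
	}
	defer writer.Close()

	// Forward a selected domain event to Kafka for downstream consumers
	// (analytics, notifications, ...); the event store remains the source of truth.
	event := PaymentUpdated{PaymentID: 1234, Amount: 200}
	payload, _ := json.Marshal(event)

	err := writer.WriteMessages(context.Background(), kafka.Message{
		Key:   []byte("1234"),
		Value: payload,
	})
	if err != nil {
		log.Fatalf("forwarding to Kafka failed: %v", err)
	}
}
```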

Summary

  • Kafka is a powerful infrastructure layer for event distribution.
  • comby is a structured application framework for building event-native, business-aware systems.

They solve different problems—and can be even more powerful together.