Snapshot Store
The SnapshotStore interface in comby provides an optimization layer for aggregate state reconstruction in event-sourced systems. Instead of replaying all events from the beginning every time an aggregate is loaded, a snapshot captures the serialized aggregate state at a specific version. Subsequent loads restore from the snapshot and only replay events that occurred after it.
This reduces the cost of GetAggregate from O(n) (where n is the total number of events) to O(k) (where k is the number of events since the last snapshot), making it particularly beneficial for long-lived aggregates with many events.
INFO
The SnapshotStore is entirely optional. If no SnapshotStore is configured, comby falls back to full event replay — the same behavior as before snapshots were introduced. This means existing applications require zero changes to continue working.
How It Works
When the AggregateRepository loads an aggregate via GetAggregate, the following flow is executed:
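Roughly, that flow can be sketched as follows. This is a simplified illustration with hypothetical types (`snapshot`, `event`, `loadAggregate` are not comby's actual internals); it only demonstrates the restore-then-replay decision logic described in this section:

```go
package main

import "fmt"

// snapshot and event are hypothetical simplified types for illustration.
type snapshot struct {
	Version int64
	Data    []byte
}

type event struct {
	Version int64
}

// loadAggregate restores from a snapshot when one exists, then replays
// only the events recorded after the snapshot version.
func loadAggregate(snap *snapshot, events []event) (replayed int, fromVersion int64) {
	if snap != nil {
		fromVersion = snap.Version // deserialize aggregate state from snapshot here
	}
	for _, ev := range events {
		if ev.Version > fromVersion {
			replayed++ // apply the event to the aggregate state
		}
	}
	return replayed, fromVersion
}

func main() {
	events := make([]event, 550)
	for i := range events {
		events[i] = event{Version: int64(i + 1)}
	}
	// No snapshot: full replay of all 550 events.
	n, _ := loadAggregate(nil, events)
	fmt.Println("no snapshot, replayed:", n) // 550
	// Snapshot at version 500: only 50 events are replayed.
	n, _ = loadAggregate(&snapshot{Version: 500}, events)
	fmt.Println("snapshot@500, replayed:", n) // 50
}
```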
Key Behaviors
- Snapshot Restore: If a snapshot exists, the aggregate state is deserialized from it. Only events with a version greater than the snapshot version are loaded and replayed.
- Async Snapshot Save: After loading, if the number of new events since the last snapshot exceeds the configured `SnapshotInterval`, a new snapshot is saved in a background goroutine (fire-and-forget). This avoids blocking the `GetAggregate` call.
- Fault Tolerance: If a snapshot is corrupt or incompatible (e.g., after an aggregate schema change), comby automatically falls back to full event replay. A warning is logged, but no error is returned to the caller.
- Version-Specific Queries: When `GetAggregate` is called with `AggregateRepositoryGetOptionWithVersion`, snapshots are bypassed to ensure correct historical state reconstruction via full replay.
- Delete Cleanup: `DeleteAggregate` removes all events as well as the associated snapshot.
Interface
The SnapshotStore interface provides methods for initialization, saving/retrieving snapshots, and cleanup.
```go
type SnapshotStore interface {
    // Init initializes the snapshot store.
    Init(ctx context.Context) error

    // Save stores a snapshot for an aggregate (upsert behavior).
    Save(ctx context.Context, model *SnapshotStoreModel) error

    // GetLatest retrieves the most recent snapshot for an aggregate.
    // Returns nil if no snapshot exists.
    GetLatest(ctx context.Context, aggregateUuid string) (*SnapshotStoreModel, error)

    // Delete removes the snapshot for an aggregate.
    Delete(ctx context.Context, aggregateUuid string) error

    // Close closes the snapshot store connection.
    Close(ctx context.Context) error
}
```

Model
The SnapshotStoreModel represents a stored snapshot of an aggregate's state:
```go
type SnapshotStoreModel struct {
    AggregateUuid string `json:"aggregateUuid,omitempty"` // UUID of the aggregate.
    Domain        string `json:"domain,omitempty"`        // Domain of the aggregate (e.g., "Tenant").
    Version       int64  `json:"version,omitempty"`       // Aggregate version at snapshot time.
    Data          []byte `json:"data,omitempty"`          // Serialized aggregate state.
    CreatedAt     int64  `json:"createdAt,omitempty"`     // Timestamp of snapshot creation.
}
```

The Data field contains the full serialized state of the aggregate, including internal fields tagged with `json:"-"`. Comby uses a custom JSON encoder (`JsonEncodeAll` / `jsonDecodeAll`) that serializes all fields via reflection, ensuring that internal state (maps, caches, computed fields) is correctly preserved and restored.
Configuration
Snapshots are configured at the Facade level using two options:
```go
import "github.com/gradientzero/comby/v2"

fc, _ := comby.NewFacade(
    comby.FacadeWithSnapshotStore(comby.NewSnapshotStoreMemory()),
    comby.FacadeWithSnapshotInterval(100),
    // ... other options
)
```

| Option | Description |
|---|---|
| `FacadeWithSnapshotStore(store SnapshotStore)` | Sets the snapshot store implementation. `nil` disables snapshotting. |
| `FacadeWithSnapshotInterval(interval int64)` | Number of events between snapshots. `0` disables snapshotting. |
Both conditions must be met for snapshotting to be active: a non-nil SnapshotStore and a SnapshotInterval > 0.
Choosing a Snapshot Interval
The interval depends on your use case:
- Small interval (e.g., 10-50): More frequent snapshots, faster loads, slightly more storage.
- Large interval (e.g., 500-1000): Fewer snapshots, less storage overhead, but longer replay times between snapshots.
- A good starting point is 100 for most applications.
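The trade-off can be estimated with back-of-envelope arithmetic. Assuming a snapshot lands roughly every `interval` events, about `totalEvents / interval` snapshots accumulate in storage, and a single load replays at most about `interval` events. A hedged sketch (illustrative helper, not part of comby's API):

```go
package main

import "fmt"

// snapshotsWritten estimates how many snapshots are stored for a stream
// of totalEvents, assuming one snapshot roughly every `interval` events.
func snapshotsWritten(totalEvents, interval int64) int64 {
	if interval <= 0 {
		return 0 // snapshotting disabled
	}
	return totalEvents / interval
}

func main() {
	for _, interval := range []int64{10, 100, 1000} {
		fmt.Printf("interval %4d: ~%d snapshots per 1000 events, replay <= ~%d events per load\n",
			interval, snapshotsWritten(1000, interval), interval)
	}
}
```

A small interval trades storage for faster loads; a large interval trades load latency for fewer stored snapshots.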
Zero-Configuration Integration
When a SnapshotStore is configured on the Facade, all AggregateRepository instances automatically inherit it. No changes are required at individual repository call sites:
```go
// This repository automatically uses snapshots if the Facade has a SnapshotStore configured.
repo := comby.NewAggregateRepository(fc, aggregate.NewAggregate)

// GetAggregate transparently uses snapshots when available.
agg, err := repo.GetAggregate(ctx, aggregateUuid)
```

Usage Example
Basic Setup
```go
import "github.com/gradientzero/comby/v2"

// Create facade with snapshot support
fc, _ := comby.NewFacade(
    comby.FacadeWithEventStore(eventStore),
    comby.FacadeWithCommandStore(commandStore),
    comby.FacadeWithSnapshotStore(comby.NewSnapshotStoreMemory()),
    comby.FacadeWithSnapshotInterval(100), // snapshot every 100 events
)

// Register domains as usual
domain.RegisterDefaults(ctx, fc)

// Restore state
fc.RestoreState()
```

What Happens at Runtime
```go
repo := comby.NewAggregateRepository(fc, aggregate.NewAggregate)

// First load of an aggregate with 500 events:
// → No snapshot exists → full replay of 500 events
// → Saves snapshot at version 500 (async)
agg, _ := repo.GetAggregate(ctx, uuid)

// After 50 more events (total 550):
// → Restores from snapshot at version 500
// → Replays only 50 new events (instead of 550)
// → No new snapshot (550 - 500 = 50 < 100 interval)
agg, _ = repo.GetAggregate(ctx, uuid)

// After 100 more events (total 600):
// → Restores from snapshot at version 500
// → Replays 100 new events
// → Saves new snapshot at version 600 (600 - 500 >= 100)
agg, _ = repo.GetAggregate(ctx, uuid)
```

Implementations
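The threshold rule from the walkthrough can be captured in a single predicate. This is an illustrative helper mirroring the documented behavior, not comby's actual code:

```go
package main

import "fmt"

// shouldSnapshot reports whether a new snapshot is due: the number of
// events since the last snapshot must reach the configured interval.
// An interval <= 0 disables snapshotting entirely.
func shouldSnapshot(currentVersion, snapshotVersion, interval int64) bool {
	if interval <= 0 {
		return false // snapshotting disabled
	}
	return currentVersion-snapshotVersion >= interval
}

func main() {
	fmt.Println(shouldSnapshot(500, 0, 100))   // true: 500 events, no snapshot yet
	fmt.Println(shouldSnapshot(550, 500, 100)) // false: only 50 new events
	fmt.Println(shouldSnapshot(600, 500, 100)) // true: 100 >= interval
}
```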
comby provides the following implementations of the SnapshotStore interface:
- In-Memory Store: Built-in, lightweight implementation using a thread-safe map. Suitable for testing, development, and single-instance deployments. Snapshots are lost on restart.
```go
store := comby.NewSnapshotStoreMemory()
```

External implementations for persistent storage are available as separate packages:
- SQLite Store: File-based persistent snapshot storage. Link: comby-store-sqlite
- PostgreSQL Store: Distributed persistent snapshot storage. Link: comby-store-postgres
- Redis Store: High-performance distributed snapshot storage. Link: comby-store-redis
Each external snapshot store accepts store-specific connection pool options via its constructor. Since the SnapshotStore interface keeps Init(ctx) minimal, pool configuration is passed at construction time:
PostgreSQL Snapshot Store:
```go
import snapshotPostgres "github.com/gradientzero/comby-store-postgres"

store := snapshotPostgres.NewSnapshotStorePostgres(connString,
    snapshotPostgres.SnapshotStorePostgresWithMaxOpenConns(10),                // default: 10
    snapshotPostgres.SnapshotStorePostgresWithMaxIdleConns(5),                 // default: 5
    snapshotPostgres.SnapshotStorePostgresWithConnMaxLifetime(30*time.Minute), // default: 30min
    snapshotPostgres.SnapshotStorePostgresWithConnMaxIdleTime(5*time.Minute),  // default: 5min
)
```

SQLite Snapshot Store:

```go
import snapshotSQLite "github.com/gradientzero/comby-store-sqlite"

store := snapshotSQLite.NewSnapshotStoreSQLite(path,
    snapshotSQLite.SnapshotStoreSQLiteWithMaxOpenConns(1),                // default: 1
    snapshotSQLite.SnapshotStoreSQLiteWithConnMaxIdleTime(5*time.Minute), // default: 5min
)
```

Redis Snapshot Store:

```go
import snapshotRedis "github.com/gradientzero/comby-store-redis"

store := snapshotRedis.NewSnapshotStoreRedis(addr, password, db,
    snapshotRedis.SnapshotStoreRedisWithPoolSize(20),                // default: 20
    snapshotRedis.SnapshotStoreRedisWithMinIdleConns(2),             // default: 2
    snapshotRedis.SnapshotStoreRedisWithMaxIdleConns(10),            // default: 10
    snapshotRedis.SnapshotStoreRedisWithMaxRetries(3),               // default: 3
    snapshotRedis.SnapshotStoreRedisWithWriteTimeout(3*time.Second), // default: 3s
)
```

Users can implement the SnapshotStore interface to integrate with alternative storage systems. The interface is intentionally minimal (5 methods) to keep custom implementations simple.
Important Notes
- Optional Feature: Snapshots are entirely opt-in. Without configuration, the system behaves exactly as before.
- No Data Loss Risk: Snapshots are a performance cache, not the source of truth. The event store remains the authoritative record. If a snapshot is missing, corrupt, or deleted, the system falls back to full event replay.
- Schema Changes: After modifying an aggregate's structure (adding/removing fields), existing snapshots may become incompatible. Comby handles this gracefully by falling back to full replay and creating a new snapshot with the updated schema on the next load.
- Concurrency: The in-memory implementation is fully thread-safe. All snapshot operations can be performed concurrently from multiple goroutines without external synchronization.