Stores

Comby provides a flexible and extensible architecture for managing different types of stores, including the EventStore, CommandStore, DataStore, CacheStore, and SnapshotStore.

All of these are defined as interfaces, allowing developers to implement their own custom storage solutions tailored to specific requirements. Comby also includes several ready-made implementations to simplify development. For example, in-memory implementations are available for all stores, providing a lightweight option for testing and prototyping.

INFO

Although the stores are represented within the facade here, this is not strictly necessary. Depending on the use case, a store can be created independently, such as for testing purposes. In production environments, however, the store is typically accessed through the facade. By default, if no specific configuration is provided, the in-memory variant is used.

EventStore

The EventStore handles the storage of Events in an event-driven architecture. It serves as the central source for changes to aggregates and enables event sourcing. In addition to an in-memory implementation, Comby provides SQLite and PostgreSQL-based solutions for persistent storage.

CommandStore

The CommandStore manages Commands that trigger changes in an application. It is particularly useful for tracking and replaying commands to ensure consistency or auditability. Comby provides in-memory, SQLite, and PostgreSQL implementations.

DataStore

The DataStore stores and retrieves the underlying data of an Asset, such as files or binary objects. Comby offers in-memory, file-system, and MinIO implementations, providing flexibility for lightweight or distributed object storage.

CacheStore

The CacheStore provides temporary data storage to optimize performance and reduce the load on persistent stores. Comby supports in-memory and Redis-based implementations, making it suitable for use in distributed systems.

SnapshotStore

The SnapshotStore provides an optimization layer for aggregate state reconstruction. Instead of replaying all events every time an aggregate is loaded, snapshots capture the serialized aggregate state at a specific version. Subsequent loads restore from the snapshot and only replay events that occurred after it — reducing load times from O(n) to O(k). Comby supports in-memory (built-in) as well as SQLite, PostgreSQL, and Redis-based implementations via external packages. Snapshots are entirely optional and require no changes to existing code.

Connection Pool Configuration

All Comby store implementations expose configurable connection pool settings to prevent resource exhaustion under load. Each store ships with sensible defaults; a value of 0 means "use the store-specific default". All settings are passed as functional options during store initialization.

Why This Matters

Under concurrent load (e.g., many simultaneous HTTP requests), each DispatchCommand and DispatchQuery spawns goroutines that compete for database connections. Without proper pool sizing, this can lead to:

  • PostgreSQL: pq: sorry, too many clients already followed by cascading context deadline exceeded errors
  • SQLite: database is locked (SQLITE_BUSY) errors
  • Redis: Connection timeouts under heavy write load

Default Pool Sizes

| Store | Backend | MaxOpenConns | MaxIdleConns | ConnMaxLifetime | ConnMaxIdleTime |
| --- | --- | --- | --- | --- | --- |
| EventStore | PostgreSQL | 25 | 5 | 30min | 5min |
| EventStore | SQLite | 10 | — | 5min | — |
| CommandStore | PostgreSQL | 25 | 5 | 30min | 5min |
| CommandStore | SQLite | 10 | — | 5min | — |
| SnapshotStore | PostgreSQL | 10 | 5 | 30min | 5min |
| SnapshotStore | SQLite | 1 | — | 5min | — |

| Store | Backend | PoolSize | MinIdleConns | MaxIdleConns | MaxRetries | WriteTimeout |
| --- | --- | --- | --- | --- | --- | --- |
| CacheStore | Redis | 20 | 2 | 10 | 3 | 3s |
| SnapshotStore | Redis | 20 | 2 | 10 | 3 | 3s |

| Store | Backend | MaxIdleConns | MaxIdleConnsPerHost | IdleConnTimeout |
| --- | --- | --- | --- | --- |
| DataStore | MinIO | 20 | 10 | 90s |

Sizing Guidelines

When running a single Facade with PostgreSQL stores, the total potential connections are: EventStore(25) + CommandStore(25) + SnapshotStore(10) = 60. Ensure your PostgreSQL max_connections (default: 100) can accommodate this plus any other clients.

For SQLite, the default MaxOpenConns has been kept low (10 for EventStore/CommandStore, 1 for SnapshotStore) since SQLite uses file-level locking and benefits from limited concurrency.

Example: Custom Pool Configuration

go
import (
    "time"

    "github.com/gradientzero/comby/v2"
    postgresStore "github.com/gradientzero/comby-store-postgres"
    // Aliases assumed for the Redis and MinIO store packages used below;
    // adjust the import paths to match the actual package names.
    redisStore "github.com/gradientzero/comby-store-redis"
    minioStore "github.com/gradientzero/comby-store-minio"
)

// EventStore with custom pool settings
eventStore := postgresStore.NewEventStorePostgres(connString,
    comby.EventStoreOptionWithMaxOpenConns(15),
    comby.EventStoreOptionWithMaxIdleConns(5),
    comby.EventStoreOptionWithConnMaxLifetime(20 * time.Minute),
    comby.EventStoreOptionWithConnMaxIdleTime(3 * time.Minute),
)

// CacheStore with custom Redis pool settings
cacheStore := redisStore.NewCacheStoreRedis(addr, password, db,
    comby.CacheStoreOptionWithPoolSize(30),
    comby.CacheStoreOptionWithMaxRetries(5),
    comby.CacheStoreOptionWithWriteTimeout(5 * time.Second),
)

// DataStore with custom HTTP transport settings
dataStore := minioStore.NewDataStoreMinio(endpoint, secure, accessKey, secretKey,
    comby.DataStoreOptionWithMaxIdleConns(30),
    comby.DataStoreOptionWithMaxIdleConnsPerHost(15),
    comby.DataStoreOptionWithIdleConnTimeout(120 * time.Second),
)

Store Encryption

Comby provides built-in encryption capabilities for stores through the CryptoService. This allows you to encrypt sensitive data at rest, ensuring that stored events, commands, and other data are protected using AES-GCM-256 encryption.

CryptoService

The CryptoService handles encryption and decryption of data using AES-GCM-256, a secure authenticated encryption algorithm. It requires a 32-byte encryption key and automatically manages the encryption process, including nonce generation for each encryption operation.

Creating a CryptoService

To create a CryptoService, you need to provide a 32-byte key:

go
// Create a 32-byte encryption key
key := []byte("01234567890123456789012345678901")

// Initialize the CryptoService
cryptoService, err := comby.NewCryptoService(key)
if err != nil {
    panic(err)
}

WARNING

The encryption key must be exactly 32 bytes for AES-256 encryption. In production environments, generate and store this key securely using proper key management practices. Never hardcode keys in your source code.

Enabling Encryption for Stores

Once you have created a CryptoService, you can enable encryption for individual stores by passing it as an option during store initialization. The following stores support encryption:

  • EventStore (SQLite, PostgreSQL)
  • CommandStore (SQLite, PostgreSQL)
  • DataStore (SQLite, PostgreSQL, MinIO)
  • CacheStore (Redis)

Example: Encrypted EventStore

go
// Create the CryptoService
key := []byte("01234567890123456789012345678901")
cryptoService, err := comby.NewCryptoService(key)
if err != nil {
    panic(err)
}

// Create an encrypted EventStore
eventStore := sqliteStore.NewEventStoreSQLite(
    "./__data__/eventStore-encrypted.db",
    comby.EventStoreOptionWithCryptoService(cryptoService),
)

How It Works

When encryption is enabled:

  1. Storage: Data is automatically encrypted before being written to the underlying storage (database, file system, etc.)
  2. Retrieval: Data is automatically decrypted when read from storage
  3. Transparency: The encryption/decryption process is transparent to your application code; you interact with the stores normally
  4. Security: Each encryption operation uses a unique nonce, ensuring that identical data produces different ciphertext

INFO

The encryption happens at the store level, meaning that the data is encrypted in the database or file system. This provides protection if the storage medium is compromised, but data is decrypted when loaded into memory for processing.