Configuring PostgreSQL EventStorage
This guide covers how to configure and deploy the PostgreSQL-backed EventStore implementation in production environments.
Configuring the Postgres EventStorage
To use PostgreSQL as your event storage backend, add the following dependencies to your Maven pom.xml:
```xml
<dependencies>
    <!-- PostgreSQL EventStorage implementation -->
    <dependency>
        <groupId>org.sliceworkz</groupId>
        <artifactId>sliceworkz-eventstore-infra-postgres</artifactId>
    </dependency>
    <!-- PostgreSQL JDBC driver -->
    <dependency>
        <groupId>org.postgresql</groupId>
        <artifactId>postgresql</artifactId>
    </dependency>
    <!-- HikariCP connection pooling (recommended) -->
    <dependency>
        <groupId>com.zaxxer</groupId>
        <artifactId>HikariCP</artifactId>
    </dependency>
</dependencies>
```
The PostgreSQL driver is marked as provided scope in the library, allowing you to choose your preferred version. HikariCP is used for high-performance connection pooling.
PostgreSQL Version Support and UUIDv7 Generation
From release 0.8.0 onwards, Sliceworkz Eventstore targets PostgreSQL 18+ as the default. Event ids are stored as time-ordered UUIDv7 values (per RFC 9562), which improves B-tree index locality for append-heavy workloads. The way those ids are generated depends on the server version you connect to:
| PostgreSQL version | UUIDv7 generation | Implementation | Extra runtime dependency |
|---|---|---|---|
| 18+ (default) | Server-side uuidv7() | PostgresEventStorageImpl | none |
| 13–17 (legacy) | Java-side via uuid-creator | PostgresLegacyEventStorageImpl | com.github.f4b6a3:uuid-creator (must be added explicitly) |
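To illustrate why time-ordered ids index well, here is a minimal, self-contained sketch of the RFC 9562 UUIDv7 bit layout. This is not the uuid-creator implementation, just a demonstration of the format: the 48 most significant bits carry Unix-epoch milliseconds, so ids generated later always sort later.

```java
import java.security.SecureRandom;
import java.util.UUID;

public class UuidV7Sketch {
    private static final SecureRandom RANDOM = new SecureRandom();

    // RFC 9562 UUIDv7 layout: 48-bit Unix-epoch milliseconds, 4-bit version,
    // 12 random bits, 2-bit variant, 62 random bits.
    static UUID uuidV7(long unixMillis) {
        long msb = (unixMillis & 0xFFFF_FFFF_FFFFL) << 16; // timestamp in bits 63..16
        msb |= 0x7000L;                                    // version = 7 (bits 15..12)
        msb |= RANDOM.nextInt(1 << 12);                    // 12 random bits (11..0)
        long lsb = RANDOM.nextLong();
        lsb = (lsb & 0x3FFF_FFFF_FFFF_FFFFL) | 0x8000_0000_0000_0000L; // variant = 10
        return new UUID(msb, lsb);
    }

    public static void main(String[] args) {
        UUID earlier = uuidV7(1_700_000_000_000L);
        UUID later   = uuidV7(1_700_000_000_001L);
        System.out.println(earlier.version());            // prints 7
        System.out.println(earlier.compareTo(later) < 0); // prints true: time-ordered
    }
}
```

Because newly generated ids always land at the right edge of the B-tree, append-heavy workloads touch far fewer index pages than with random UUIDv4 keys.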
The right implementation is picked automatically at build() time: PostgresEventStorage.Builder.build() borrows a connection, reads getDatabaseMajorVersion() and selects the matching impl. The same library binary therefore works against both PG17 and PG18 — no code or property changes required when you upgrade your database.
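The decision itself boils down to a version threshold. The following is a hypothetical illustration using the class names from the table above, not the library's actual builder code:

```java
public class ImplSelectionSketch {
    // Hypothetical illustration of the build()-time decision; the real selection
    // happens inside PostgresEventStorage.Builder.build() via JDBC metadata.
    static String selectImplementation(int databaseMajorVersion) {
        return databaseMajorVersion >= 18
                ? "PostgresEventStorageImpl"        // native server-side uuidv7()
                : "PostgresLegacyEventStorageImpl"; // Java-side uuid-creator path
    }

    public static void main(String[] args) {
        System.out.println(selectImplementation(18)); // prints PostgresEventStorageImpl
        System.out.println(selectImplementation(17)); // prints PostgresLegacyEventStorageImpl
    }
}
```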
The chosen path is logged at startup; grep your logs for `uuidv7` to verify which one was selected:

```text
PostgreSQL major version 18 detected — using native server-side uuidv7() via PostgresEventStorageImpl
```

or, on older servers:

```text
PostgreSQL major version 17 detected — using Java-side uuidv7 generation via PostgresLegacyEventStorageImpl
```
Optional uuid-creator Dependency (for PostgreSQL 13–17)
Because Java-side UUIDv7 generation is only needed when running against PostgreSQL 13–17, the com.github.f4b6a3:uuid-creator dependency is declared as optional in sliceworkz-eventstore-infra-postgres. Applications targeting PG18+ only can ignore it and benefit from a smaller dependency tree.
Applications that may connect to PG13–17 must declare it explicitly in their pom.xml:
```xml
<dependency>
    <groupId>com.github.f4b6a3</groupId>
    <artifactId>uuid-creator</artifactId>
</dependency>
```

The version is managed by `sliceworkz-eventstore-bom`, so you don’t need to specify one.
If the legacy code path is selected at runtime but uuid-creator is missing from the classpath, Builder.build() fails fast with an EventStorageException whose message tells you exactly how to remediate:
```text
… — Java-side uuidv7 generation is required, but the optional 'com.github.f4b6a3:uuid-creator' dependency is missing from the classpath. Either add it to your application's build (see PostgresEventStorage Javadoc for the dependency snippet), or upgrade the PostgreSQL server to version 18+ for native uuidv7() support.
```
This means you only ever pay for the dependency on the deployments that actually need it.
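A fail-fast check like this can be approximated with a classpath probe. The class name `com.github.f4b6a3.uuid.UuidCreator` is uuid-creator's public entry point; the probe itself is an illustrative stand-in for the library's internal check:

```java
public class UuidCreatorProbe {
    // Illustrative classpath probe (not the library's actual implementation).
    static boolean uuidCreatorPresent() {
        try {
            Class.forName("com.github.f4b6a3.uuid.UuidCreator");
            return true;
        } catch (ClassNotFoundException e) {
            return false;
        }
    }

    public static void main(String[] args) {
        if (!uuidCreatorPresent()) {
            // Mirrors the remediation advice from the EventStorageException message
            System.out.println("uuid-creator missing: add the dependency or use PostgreSQL 18+");
        }
    }
}
```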
Future-proofing. Once PostgreSQL 17 reaches end of support and your deployments have moved to PG18+, the legacy path (and the `uuid-creator` dependency) can simply be dropped from your build. The library continues to function unchanged — `PostgresEventStorageImpl` remains the sole implementation under PG18+.
Using Your Own DataSource
If your application already manages database connections, pass your existing DataSource to the builder:
```java
DataSource existingDataSource = // ... from your application context
EventStore eventStore = PostgresEventStorage.newBuilder()
    .dataSource(existingDataSource)
    .build();
```
This approach is useful when:
- Using application server connection pools (e.g., Tomcat, WildFly)
- Integrating with Spring’s DataSource management
- Sharing a connection pool across multiple components
- Using custom DataSource implementations
When providing a custom DataSource, ensure it’s properly configured with:
- Sufficient connection pool size for your workload
- Appropriate connection timeout settings
- PostgreSQL-specific optimizations if using HikariCP
Regular and Monitoring Connections
The EventStore uses two types of database connections:
Regular DataSource: Used for standard operations (queries, appends, bookmark management). Should use connection pooling for performance.
Monitoring DataSource: Used exclusively for PostgreSQL’s LISTEN/NOTIFY mechanism to receive real-time event notifications.
```java
DataSource pooledDataSource = // ... HikariCP pooled connections
DataSource monitoringDataSource = // ... separate non-pooled connection
EventStore eventStore = PostgresEventStorage.newBuilder()
    .dataSource(pooledDataSource)
    .monitoringDataSource(monitoringDataSource)
    .build();
```
Why Separate Datasources?
PostgreSQL’s LISTEN/NOTIFY requires dedicated, long-lived connections that cannot be pooled. Using a separate monitoring DataSource:
- Optimization: Lets you configure a short leak-detection threshold for regular connections and a much longer one for the long-lived connections monitoring event appends via LISTEN/NOTIFY
- Prevents blocking: Long-running LISTEN connections don’t consume regular connection pool resources
- PgBouncer compatibility: Connection poolers like PgBouncer don’t support LISTEN/NOTIFY in transaction pooling mode. A separate direct connection bypasses this limitation
- Performance: Isolates notification traffic from query traffic
If you don’t provide a separate monitoring DataSource, the regular DataSource is used for both purposes.
The monitoring connections include built-in resilience: if a LISTEN connection drops, it automatically reconnects with exponential backoff (1 second up to 30 seconds) to avoid flooding logs or exhausting the connection pool during database outages.
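The documented backoff range (1 second doubling up to 30 seconds) can be sketched as a capped exponential delay. This is illustrative only; the library's internal reconnect loop may differ:

```java
public class ListenBackoff {
    // Capped exponential backoff matching the documented range: 1 s doubling
    // up to a 30 s ceiling. Illustrative; not the library's actual code.
    static long nextDelayMillis(int attempt) {
        long delay = 1_000L << Math.min(attempt, 5); // 1s, 2s, 4s, 8s, 16s, 32s...
        return Math.min(delay, 30_000L);             // ...clamped to 30s
    }

    public static void main(String[] args) {
        for (int attempt = 0; attempt < 7; attempt++) {
            System.out.println(nextDelayMillis(attempt));
        }
        // prints 1000, 2000, 4000, 8000, 16000, 30000, 30000
    }
}
```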
When the monitoring connection goes through PgBouncer, you will find that the real-time notification mechanism does not react to appends immediately, but only after roughly 30 seconds. This works functionally, but the resulting latency is almost certainly higher than what you expect from a near-real-time eventual-consistency mechanism.
Configuring an EventStore-managed DataSource (db.properties)
The EventStore can automatically create and configure DataSources from a db.properties file. The library searches for this file in the following locations (in order):
- System property: `-Deventstore.db.config=/path/to/db.properties`
- Environment variable: `EVENTSTORE_DB_CONFIG=/path/to/db.properties`
- Current working directory and up to 2 parent directories: `./db.properties`, `../db.properties`, `../../db.properties`
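The lookup order can be sketched as follows. This is an illustrative re-implementation of the documented search order, not the library's own lookup code:

```java
import java.nio.file.Files;
import java.nio.file.Path;
import java.util.Optional;

public class DbPropertiesLookup {
    // Illustrative re-implementation of the documented search order:
    // system property, then environment variable, then cwd and two parents.
    static Optional<Path> findDbProperties() {
        String sysProp = System.getProperty("eventstore.db.config");
        if (sysProp != null) return Optional.of(Path.of(sysProp));
        String env = System.getenv("EVENTSTORE_DB_CONFIG");
        if (env != null) return Optional.of(Path.of(env));
        for (String dir : new String[] {".", "..", "../.."}) {
            Path candidate = Path.of(dir, "db.properties");
            if (Files.exists(candidate)) return Optional.of(candidate);
        }
        return Optional.empty();
    }

    public static void main(String[] args) {
        System.setProperty("eventstore.db.config", "/etc/myapp/db.properties");
        System.out.println(findDbProperties().orElseThrow());
        // the system property wins over all other locations
    }
}
```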
Configuring the pooled and non-pooled connections
Define separate datasources:
```properties
# db.properties - Advanced configuration

# Pooled connections for regular operations
db.pooled.url=jdbc:postgresql://<host>/<db>
db.pooled.username=<user>
db.pooled.password=<password>
db.pooled.leakDetectionThreshold=2000
db.pooled.maximumPoolSize=25
db.pooled.datasource.sslmode=require
db.pooled.datasource.channelBinding=require
db.pooled.datasource.cachePrepStmts=true
db.pooled.datasource.prepStmtCacheSize=250
db.pooled.datasource.prepStmtCacheSqlLimit=2048

# Non-pooled connections for LISTEN/NOTIFY
db.nonpooled.url=jdbc:postgresql://<host>/<db>
db.nonpooled.username=<user>
db.nonpooled.password=<password>
db.nonpooled.leakDetectionThreshold=70000
db.nonpooled.maximumPoolSize=2
db.nonpooled.datasource.sslmode=require
db.nonpooled.datasource.channelBinding=require
db.nonpooled.datasource.cachePrepStmts=true
db.nonpooled.datasource.prepStmtCacheSize=250
db.nonpooled.datasource.prepStmtCacheSqlLimit=2048
```
Be sure to size your pooled datasource connections according to your application's needs. The non-pooled datasource used for monitoring appends with the LISTEN/NOTIFY mechanism only needs 2 connections.
The `leakDetectionThreshold` on the pooled (application) connections should be set quite low; for the monitoring connections it should be at least 30 seconds, as the monitoring connection only refreshes after a longer LISTEN for updates.
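When you want to reuse the same file for your own wiring, a small helper can split it by key prefix. This helper is hypothetical and not part of the library; it only demonstrates the `db.pooled.` / `db.nonpooled.` naming scheme shown above:

```java
import java.util.Properties;

public class DbPropertiesSplit {
    // Illustrative helper (not part of the library): extract one datasource's
    // settings from a loaded db.properties by key prefix.
    static Properties subset(Properties all, String prefix) {
        Properties out = new Properties();
        for (String key : all.stringPropertyNames()) {
            if (key.startsWith(prefix)) {
                out.setProperty(key.substring(prefix.length()), all.getProperty(key));
            }
        }
        return out;
    }

    public static void main(String[] args) {
        Properties all = new Properties();
        all.setProperty("db.pooled.maximumPoolSize", "25");
        all.setProperty("db.nonpooled.maximumPoolSize", "2");
        Properties pooled = subset(all, "db.pooled.");
        System.out.println(pooled.getProperty("maximumPoolSize")); // prints 25
    }
}
```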
Preparing the Database Schema Manually via DDL
The recommended approach is to create the database schema manually using DDL scripts before deploying your application. This allows you to:
- Use database users with limited DML-only privileges for the application
- Separate schema management from application deployment
- Apply schema changes through controlled migration processes when a newer EventStore version ever requires schema updates
Using the Initialization Script
The library includes a quickstart.ddl.sql script (available in the JAR or source repository):
```sql
CREATE TABLE events (
    -- Primary key and positioning
    event_position BIGSERIAL PRIMARY KEY,
    -- XID8 transaction id
    event_tx xid8 DEFAULT pg_current_xact_id()::xid8 NOT NULL,
    -- Event identification
    event_id UUID NOT NULL UNIQUE,
    -- Stream identification
    stream_context TEXT NOT NULL,
    stream_purpose TEXT NOT NULL DEFAULT '',
    -- Event metadata
    event_type TEXT NOT NULL,
    -- Transaction information
    event_timestamp TIMESTAMP WITH TIME ZONE DEFAULT CURRENT_TIMESTAMP,
    -- Event payload
    event_data JSONB NOT NULL,
    event_erasable_data JSONB,
    -- Tags as string array
    event_tags TEXT[] DEFAULT '{}'
) WITH (FILLFACTOR = 100);

-- Additional indexes, functions, and triggers...
```
Extract the script from the JAR or copy from the source repository, then execute it using your preferred database client or migration tool.
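One way to extract the script programmatically is via the classpath. The resource path `/quickstart.ddl.sql` is an assumption here; verify the actual location inside the JAR before relying on it:

```java
import java.io.InputStream;
import java.nio.file.Files;
import java.nio.file.Path;
import java.nio.file.StandardCopyOption;

public class ExtractDdl {
    // Illustrative extraction of the DDL script from the classpath; the exact
    // resource path inside the JAR is an assumption, check the artifact contents.
    public static void main(String[] args) throws Exception {
        try (InputStream in = ExtractDdl.class.getResourceAsStream("/quickstart.ddl.sql")) {
            if (in == null) {
                System.out.println("quickstart.ddl.sql not found on classpath");
                return;
            }
            Files.copy(in, Path.of("quickstart.ddl.sql"), StandardCopyOption.REPLACE_EXISTING);
            System.out.println("extracted quickstart.ddl.sql");
        }
    }
}
```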
Co-locating Multiple EventStore Instances
In some scenarios, you want to create multiple event stores without configuring multiple databases:
- separating environments like dev (development), tst (testing) and acc (acceptance)
- separating tenants like customer1, customer2, etc.
- separating application components or bounded contexts (sales, orders, invoicing, …)
For these scenarios, EventStore supports prefixes: all required database objects are uniquely identified by prefixing their names.
The library includes two DDL scripts with “PREFIX_” placeholders for multi-tenant deployments:
ensure-schema.sql - Idempotent creation script using CREATE TABLE IF NOT EXISTS and CREATE INDEX IF NOT EXISTS. Safe to run repeatedly:
```sql
CREATE TABLE IF NOT EXISTS PREFIX_events (
    -- Primary key and positioning
    event_position BIGSERIAL PRIMARY KEY,
    -- XID8 transaction id
    event_tx xid8 DEFAULT pg_current_xact_id()::xid8 NOT NULL,
    -- Event identification
    event_id UUID NOT NULL UNIQUE,
    -- Stream identification
    stream_context TEXT NOT NULL,
    stream_purpose TEXT NOT NULL DEFAULT '',
    -- Event metadata
    event_type TEXT NOT NULL,
    -- Transaction information
    event_timestamp TIMESTAMP WITH TIME ZONE DEFAULT CURRENT_TIMESTAMP,
    -- Event payload
    event_data JSONB NOT NULL,
    event_erasable_data JSONB,
    -- Tags as string array
    event_tags TEXT[] DEFAULT '{}'
) WITH (FILLFACTOR = 100);

-- Additional indexes, functions, and triggers...
```
drop-schema.sql - Teardown script for destroying an existing eventstore schema:
```sql
DROP TABLE IF EXISTS PREFIX_bookmarks CASCADE;
DROP TABLE IF EXISTS PREFIX_events CASCADE;
```
By replacing all occurrences of `PREFIX_` in these files with e.g. `myapp_`, you create a private event storage for a specific application.
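The substitution can be scripted. The helper below is hypothetical and simply applies the documented prefix rules (alphanumeric plus underscores, trailing underscore, at most 32 characters) before replacing the placeholder:

```java
public class RenderTenantDdl {
    // Illustrative helper (not part of the library): substitute PREFIX_ in the
    // shipped DDL templates, enforcing the documented prefix rules first.
    static String renderForPrefix(String template, String prefix) {
        if (!prefix.matches("[A-Za-z0-9_]{1,32}") || !prefix.endsWith("_")) {
            throw new IllegalArgumentException("invalid prefix: " + prefix);
        }
        return template.replace("PREFIX_", prefix);
    }

    public static void main(String[] args) {
        String template = "CREATE TABLE IF NOT EXISTS PREFIX_events ( ... );";
        System.out.println(renderForPrefix(template, "myapp_"));
        // prints CREATE TABLE IF NOT EXISTS myapp_events ( ... );
    }
}
```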
Connecting to an EventStore database with prefixes
Configure each instance with its prefix:
```java
EventStore tenant1Store = PostgresEventStorage.newBuilder()
    .prefix("tenant1_")
    .build();

EventStore tenant2Store = PostgresEventStorage.newBuilder()
    .prefix("tenant2_")
    .build();
```
Note: Prefixes must be alphanumeric with underscores, end with an underscore, and be 32 characters or less.
Security Best Practice: Create the schema with a privileged database user (e.g., eventstore_admin with DDL rights), then run your application with a limited user (e.g., eventstore_app with only DML rights). This prevents applications from accidentally modifying the schema.
Database Initialization Modes
The DatabaseInitMode enum controls how the database schema is handled at startup. Set it via .databaseInitMode(DatabaseInitMode.xxx) or use one of the convenience methods on the builder:
ENSURE (default)
Creates missing database objects if they don’t exist, leaving existing objects untouched, then validates the schema. This is the default mode and is safe to run repeatedly:
```java
// Default behavior — no need to specify
EventStore eventStore = PostgresEventStorage.newBuilder()
    .build();

// Or explicitly
EventStore eventStore = PostgresEventStorage.newBuilder()
    .ensureDatabase()
    .build();
```
This is the recommended mode for most environments. It uses idempotent CREATE TABLE IF NOT EXISTS / CREATE INDEX IF NOT EXISTS statements, so existing data is never affected.
ENSURE mode requires the database user to have DDL privileges (CREATE TABLE, CREATE FUNCTION, CREATE TRIGGER, CREATE INDEX).
VALIDATE
Validates that all required database objects exist and are correctly defined. No objects are created or modified — this is a read-only check:
```java
EventStore eventStore = PostgresEventStorage.newBuilder()
    .validateDatabase()
    .build();
```
The validation checks:
- Required tables exist (`PREFIX_events`, `PREFIX_bookmarks`)
- All columns are present with correct types
- Indexes are properly configured
- PostgreSQL functions and triggers exist
If validation fails, an EventStorageException is thrown with details about missing or misconfigured objects. This provides early detection of schema issues before runtime failures occur.
VALIDATE is useful when the schema is managed externally (e.g., by a DBA or migration tool) but you still want startup verification.
NONE
Skips all database operations. Trusts that the schema exists and is correct. Minimizes startup time:
```java
EventStore eventStore = PostgresEventStorage.newBuilder()
    .databaseInitMode(DatabaseInitMode.NONE)
    .build();
```
Use this when:
- The database user lacks permissions to query `information_schema`
- Minimizing startup time is critical
- You have full confidence in the schema being correct
INITIALIZE
Drops all event store objects and recreates them from scratch, then validates:
```java
EventStore eventStore = PostgresEventStorage.newBuilder()
    .initializeDatabase()
    .build();
```
This mode is destructive — all existing event data will be lost. Use only for test environments, fresh deployments, or when a clean slate is explicitly needed.
Recommended Configurations
Production with external schema management (DML-only database user):
```java
EventStore eventStore = PostgresEventStorage.newBuilder()
    .validateDatabase() // Verify schema exists at startup
    .build();
```
Production with auto-managed schema (DDL-capable database user):
```java
EventStore eventStore = PostgresEventStorage.newBuilder()
    .build(); // Default ENSURE mode
```
Test environments:
```java
EventStore eventStore = PostgresEventStorage.newBuilder()
    .initializeDatabase() // Fresh schema every test run
    .build();
```
Configuring a Hard Limit on Query Result Size
To prevent memory exhaustion from poorly designed queries, configure an absolute limit on result set size:
```java
EventStore eventStore = PostgresEventStorage.newBuilder()
    .resultLimit(10000) // Maximum 10,000 events per query
    .build();
```
When a query would exceed this limit, an EventStorageException is thrown:
```java
try {
    // Query that exceeds the limit
    stream.query(EventQuery.matchAll()).toList();
} catch (EventStorageException e) {
    // Handle: "Query result exceeded absolute limit of 10000"
}
```
Important: This is an emergency brake, not a substitute for proper query design. Applications should:
- Use batched queries with `Limit` for large result sets
- Design queries to stay well below the hard limit
- Process results incrementally rather than loading everything into memory
Example of proper batched querying:
```java
EventReference lastRef = null;
while (true) {
    List<Event<CustomerEvent>> batch = stream.query(
            EventQuery.matchAll(),
            lastRef,
            Limit.to(500) // Well below hard limit
        ).toList();
    if (batch.isEmpty()) break;
    // Process batch
    batch.forEach(this::processEvent);
    lastRef = batch.getLast().reference();
}
```
The hard limit applies globally to all queries on that EventStore instance. Individual queries can use lower limits, but cannot exceed the configured maximum.
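The limit semantics can be pictured with a small toy stand-in that reads one row past the limit and fails if it exists. This is not the library's actual implementation, only an illustration of the emergency-brake behavior:

```java
import java.util.List;
import java.util.stream.Collectors;
import java.util.stream.Stream;

public class HardLimitSketch {
    // Toy stand-in for the hard-limit behavior: materialize at most limit+1 rows
    // and fail if the extra row exists. Not the library's actual implementation.
    static <T> List<T> limitedToList(Stream<T> rows, int limit) {
        List<T> out = rows.limit(limit + 1L).collect(Collectors.toList());
        if (out.size() > limit) {
            throw new IllegalStateException("Query result exceeded absolute limit of " + limit);
        }
        return out;
    }

    public static void main(String[] args) {
        System.out.println(limitedToList(Stream.of(1, 2, 3), 5)); // prints [1, 2, 3]
        try {
            limitedToList(Stream.of(1, 2, 3), 2);
        } catch (IllegalStateException e) {
            System.out.println(e.getMessage()); // the limit of 2 was exceeded
        }
    }
}
```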
Configuring Multiple EventStore Instances on the Same Database
Multiple EventStore instances can safely connect to the same PostgreSQL database schema simultaneously.
Multiple Processes/Machines
In a multi-machine setup, each process connects to the same database:
```java
// Application Server 1
EventStore storeOnServer1 = PostgresEventStorage.newBuilder()
    .name("api-server-1")
    .build();

// Application Server 2
EventStore storeOnServer2 = PostgresEventStorage.newBuilder()
    .name("api-server-2")
    .build();

// Batch Worker
EventStore batchStore = PostgresEventStorage.newBuilder()
    .name("batch-worker")
    .build();
```
You can use different names, or the same name, on all machines; the name only affects logging output and has no runtime impact.
All instances share the same event data and can:
- Append events concurrently
- Query the same event streams
- Subscribe to event notifications
- Use optimistic locking (DCB) for safe concurrent writes
Benefits of Multiple Instances
Load Balancing: Distribute query load across multiple application servers
Failover: If one instance fails, others continue operating
Workload Separation:
- Online instances serve REST APIs and user interactions
- Batch instances run asynchronous projections and integrations
- Dedicated instances for reporting or analytics
Horizontal Scaling: Add more instances as load increases
Example Architecture:
```text
┌─────────────────┐     ┌─────────────────┐
│  API Server 1   │     │  API Server 2   │
│  (Application)  │     │  (Application)  │
└────────┬────────┘     └────────┬────────┘
         │                       │
         └───────────┬───────────┘
                     │
         ┌───────────┴───────────┐
         │  PostgreSQL Database  │
         │   (Shared Database)   │
         └───────────┬───────────┘
                     │
         ┌───────────┴───────────┐
         │    Batch Processor    │
         │     (Application)     │
         └───────────────────────┘
```
PostgreSQL’s MVCC (Multi-Version Concurrency Control) ensures safe concurrent access. The DCB optimistic locking mechanism prevents conflicting writes at the application level.
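The effect of optimistic locking can be pictured with a toy compare-and-set analogy. The real DCB mechanism evaluates a consistency condition against the event stream, not a single counter; this sketch only shows why one of two concurrent writers with the same expectation must fail:

```java
import java.util.concurrent.atomic.AtomicLong;

public class OptimisticAppendSketch {
    // Toy analogy for DCB-style optimistic locking: an append succeeds only if
    // the head position it observed is still current when the write lands.
    private final AtomicLong head = new AtomicLong(0);

    boolean tryAppend(long expectedHead) {
        return head.compareAndSet(expectedHead, expectedHead + 1);
    }

    public static void main(String[] args) {
        OptimisticAppendSketch store = new OptimisticAppendSketch();
        System.out.println(store.tryAppend(0)); // prints true: first writer wins
        System.out.println(store.tryAppend(0)); // prints false: stale expectation
    }
}
```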