SQL-Style Triggers in NoSQL: DynamoDB Streams, CosmosDB Change Feed, and When Not to Bother
If you’ve worked with Postgres or MySQL, you know triggers. A row gets inserted, a trigger fires, an audit log gets written — all in the same transaction. Synchronous, reliable, and it bloats your write latency.
NoSQL databases don’t have CREATE TRIGGER. No BEFORE INSERT, no stored procedures, no row-level hooks. Instead you get change data streams — a durable, ordered log of every mutation on the table. Attach compute to the stream, call it a day.
The model shift matters. Get it right and you have clean, decoupled event processing that scales independently of your writes. Get it wrong and you’re debugging Lambda timeouts at 3am wondering why half your events got dropped.
How SQL triggers actually work
In Postgres, a trigger is a function registered on a table that fires inside the transaction:
CREATE OR REPLACE FUNCTION log_audit()
RETURNS trigger AS $$
BEGIN
  -- NEW is NULL on DELETE, so fall back to OLD for the row id
  INSERT INTO audit_log(table_name, row_id, action, changed_at)
  VALUES (TG_TABLE_NAME, COALESCE(NEW.id, OLD.id), TG_OP, NOW());
  RETURN COALESCE(NEW, OLD);
END;
$$ LANGUAGE plpgsql;
CREATE TRIGGER audit_trigger
AFTER INSERT OR UPDATE OR DELETE ON orders
FOR EACH ROW EXECUTE FUNCTION log_audit();
The trigger runs synchronously. If log_audit fails, the write rolls back. Consistent, transactional, and impossible to decouple. A slow trigger makes every write slow. A failing trigger blocks every write. A badly written trigger melts the database and there’s no circuit breaker — it’s baked into the transaction. This is why DBAs hate them.
What NoSQL gives you instead
Rather than running logic inside the transaction, NoSQL databases emit events after the write commits. You attach compute. The database doesn’t know or care what happens next.
DynamoDB Streams
DynamoDB optionally maintains a stream of item changes. Each record looks like this:
{
"eventID": "abc123",
"eventName": "INSERT",
"dynamodb": {
"Keys": { "id": { "S": "order-42" } },
"NewImage": { "id": { "S": "order-42" }, "total": { "N": "99" } }
}
}
Records within a shard arrive in order. Attach a Lambda:
import { DynamoDBStreamHandler } from "aws-lambda";
export const handler: DynamoDBStreamHandler = async (event) => {
for (const record of event.Records) {
await writeAuditEntry(record);
await invalidateCache(record);
}
};
Wire it with SAM or CDK, connect the DynamoEventSource, done. The only table-side configuration is turning the stream on and picking a view type (KEYS_ONLY, NEW_IMAGE, OLD_IMAGE, or NEW_AND_OLD_IMAGES); no trigger logic lives in the database.
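A minimal CDK sketch of that wiring, assuming a Node.js handler already built to dist/ (the table, function, and queue names are illustrative):
import * as cdk from "aws-cdk-lib";
import * as dynamodb from "aws-cdk-lib/aws-dynamodb";
import * as lambda from "aws-cdk-lib/aws-lambda";
import * as sqs from "aws-cdk-lib/aws-sqs";
import { DynamoEventSource, SqsDlq } from "aws-cdk-lib/aws-lambda-event-sources";
import { Construct } from "constructs";

export class AuditStack extends cdk.Stack {
  constructor(scope: Construct, id: string) {
    super(scope, id);

    // Enabling the stream (and picking a view type) is the only table-side change
    const table = new dynamodb.Table(this, "Orders", {
      partitionKey: { name: "id", type: dynamodb.AttributeType.STRING },
      stream: dynamodb.StreamViewType.NEW_AND_OLD_IMAGES,
    });

    const handler = new lambda.Function(this, "AuditHandler", {
      runtime: lambda.Runtime.NODEJS_18_X,
      handler: "index.handler",
      code: lambda.Code.fromAsset("dist"), // hypothetical build output
    });

    // The event source mapping polls the stream and invokes the Lambda with batches
    handler.addEventSource(
      new DynamoEventSource(table, {
        startingPosition: lambda.StartingPosition.TRIM_HORIZON,
        batchSize: 100,
        retryAttempts: 3,
        onFailure: new SqsDlq(new sqs.Queue(this, "AuditDlq")), // dead-letter failed batches
      })
    );
  }
}
The retryAttempts plus onFailure pair is what keeps one poison record from blocking its shard forever: after the retries run out, the failed batch goes to the queue and the shard keeps moving.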
CosmosDB Change Feed
Same idea, different plumbing. Every container exposes a time-ordered feed of inserts and updates. Deletes don't show up; the usual workaround is a soft-delete flag like _deleted plus a TTL to purge the item later. Attach an Azure Function:
[FunctionName("AuditTrigger")]
public static async Task Run(
[CosmosDBTrigger(
databaseName: "orders-db",
containerName: "orders",
Connection = "CosmosDBConnection",
LeaseContainerName = "leases",
CreateLeaseContainerIfNotExists = true)] IReadOnlyList<Document> input,
ILogger log)
{
foreach (var doc in input)
await WriteAuditEntry(doc);
}
The lease container handles checkpointing. If your function scales out, each instance grabs a different partition range. No coordinator.
The real trade-off: synchronous vs. async
SQL triggers are synchronous. NoSQL streams are async. Here’s what you give up and what you gain:
- The write commits before the trigger runs. If the consumer fails, the write stays committed. You can retry, but you can’t undo the original mutation.
- Write latency drops to just the database write. No trigger time added. Good for high-throughput workloads where every millisecond counts.
- Consistency is eventual. The consumer trails behind the write. A read immediately after a write won’t see trigger side-effects yet. If that’s a problem, you need transactional writes (DynamoDB TransactWriteItems, CosmosDB transactional batch), not streams.
- The trigger can’t block writes. A dead consumer means events pile up (or go to a dead-letter queue). The table stays up.
- You can fan out. A single change can feed audit logging, search indexing, and cache invalidation — three different consumers, no coupling, no shared fate.
But you have to handle at-least-once delivery. DynamoDB Streams and CosmosDB Change Feed both retry batches. Your handler needs idempotency:
export const handler: DynamoDBStreamHandler = async (event) => {
for (const record of event.Records) {
const key = record.eventID;
if (await alreadyProcessed(key)) continue;
await processRecord(record);
await markProcessed(key);
}
};
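One possible shape for those helpers, assuming a small DynamoDB bookkeeping table named processed-events (hypothetical) keyed on eventId:
import { DynamoDBClient } from "@aws-sdk/client-dynamodb";
import { DynamoDBDocumentClient, GetCommand, PutCommand } from "@aws-sdk/lib-dynamodb";

const doc = DynamoDBDocumentClient.from(new DynamoDBClient({}));

export async function alreadyProcessed(key: string): Promise<boolean> {
  const res = await doc.send(
    new GetCommand({ TableName: "processed-events", Key: { eventId: key } })
  );
  return res.Item !== undefined;
}

export async function markProcessed(key: string): Promise<void> {
  // Conditional put: if two invocations race past the Get, the second one throws here
  await doc.send(
    new PutCommand({
      TableName: "processed-events",
      Item: { eventId: key, processedAt: Date.now() },
      ConditionExpression: "attribute_not_exists(eventId)",
    })
  );
}
A TTL on the bookkeeping table keeps it from growing forever; you only need entries for as long as the stream can redeliver them.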
Skip this and you get duplicate audit entries and double-counted metrics. Not theoretical — this is the first thing that breaks in production.
Where streams break down
Not everything maps.
BEFORE triggers that reject writes. Validation that throws an error before the row hits the table can’t be replicated with streams — the write already committed. Validation stays in your application layer, API Gateway, or a thin Lambda in front of DynamoDB.
Cross-record atomicity. If UPDATE table_a and INSERT INTO table_b must be atomic, use DynamoDB Transactions or CosmosDB transactional batches. Stream consumers are async and can’t roll back.
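A sketch of that on the DynamoDB side with the v3 SDK's TransactWriteCommand; the table and attribute names are made up for illustration:
import { DynamoDBClient } from "@aws-sdk/client-dynamodb";
import { DynamoDBDocumentClient, TransactWriteCommand } from "@aws-sdk/lib-dynamodb";

const doc = DynamoDBDocumentClient.from(new DynamoDBClient({}));

export async function shipOrder(orderId: string): Promise<void> {
  // Both writes commit or neither does; a stream consumer can never give you this
  await doc.send(
    new TransactWriteCommand({
      TransactItems: [
        {
          Update: {
            TableName: "orders",
            Key: { id: orderId },
            UpdateExpression: "SET #s = :s",
            ExpressionAttributeNames: { "#s": "status" },
            ExpressionAttributeValues: { ":s": "shipped" },
          },
        },
        {
          Put: {
            TableName: "shipments",
            Item: { id: `shipment-${orderId}`, orderId },
          },
        },
      ],
    })
  );
}
The Cosmos equivalent is a transactional batch, with the extra restriction that every item in the batch must share a partition key within one container.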
Read-after-write consistency on trigger side-effects. If the trigger updates a denormalized view and the very next read must see it, streams are too slow. Write the denormalized data in the same transaction as the primary mutation, or accept staleness.
So what to use
Most of the time, streams. The async model forces you to think about idempotency and ordering up front — things you’d ignore with SQL triggers until they become problems at scale. SQL triggers just let you defer that pain.
- DynamoDB Streams: ordered per shard, 24h retention, attach Lambda, idempotency on eventID.
- CosmosDB Change Feed: ordered per partition, managed leases, attach Azure Functions, same idempotency story.
- Streams don't replace BEFORE triggers, cross-table atomicity, or read-after-write consistency. Those stay in application code or transactional writes.