# ADR-033: Communication Decision Engine
Date: 2026-05-04 · Status: Proposed [Year 2 pipeline] · Deciders: Julian Brüning
## Context
Busflow's notification pipeline (notification-pipeline-protocol.md) processes 22 trigger events across a 4-layer stack: Hasura Event Trigger → NestJS handler → n8n workflow → BullMQ dispatch. Each trigger fires independently: a single domain event produces a single notification. This model works for isolated events but breaks when events from different Bounded Contexts collide temporally.
A systematic analysis of 53 domain events from four Bounded Contexts (Commerce, Operations, Communications, Backoffice) identified 14 conflict scenarios where independent triggers produce contradictory, stale, or tone-deaf passenger communications. Of these, 10 are solvable with standard engineering guards (existing WHERE clauses, status re-checks, dedup logic). Four conflicts are genuinely non-trivial because they require cross-Bounded-Context state awareness that no single handler possesses:
| ID | Conflict | Contexts | Impact |
|---|---|---|---|
| C5 | Pre-trip reminder sent for a cancelled service leg | Commerce ↔ Operations | Passenger shows up at empty boarding point |
| C8 | Pending incident broadcast not dismissed when the leg cancels | Operations ↔ Communications | "We're working on it" followed by "it's cancelled" |
| C12 | Standard feedback request sent after a CRITICAL incident | Commerce ↔ Operations | "Rate us ★★★★★" after a 3-hour breakdown |
| C4 | Pre-trip reminder contains stale vehicle/seat data after a vehicle swap | Commerce ↔ Operations | Passenger arrives expecting wrong bus |
All four follow the same structural pattern: a scheduled trigger (Cron sweep, timer) prepares a notification, but a reactive event from a different Bounded Context has invalidated the context. The scheduled trigger has no awareness of the reactive event.
### Scale consideration
At Busflow's current operator volumes (1–5 departures per week per operator), these conflicts occur rarely. The research value lies not in immediate operational frequency but in the deterministic resolution patterns: when a conflict does occur, the consequences are severe (passenger at an empty boarding point, tone-deaf feedback after a breakdown). The patterns developed here apply to any event-driven multi-context system at any scale.
### Why existing tools cannot solve this
| Tool | Limitation |
|---|---|
| n8n | No cross-workflow state. Each workflow execution is stateless. Workflow A (pre-trip reminder) cannot query whether Workflow B (incident broadcast) is pending for the same entity. |
| Temporal / Camunda | Could solve this via long-running workflows with signal handling. Rejected: requires a JVM or dedicated orchestration server. Busflow runs on a 2-node Docker Swarm – adding a heavy orchestration engine is architecturally infeasible (workflow-orchestration.md §Boundary Rules). |
| BullMQ alone | Can delay and retry jobs but has no concept of cross-job state queries or job supersession based on external events. |
| Hasura Event Triggers | Fire-after-commit, at-least-once. No conditional suppression based on state from other schemas. |
The question becomes: how does the notification pipeline make deterministic delivery decisions when scheduled triggers and reactive events from isolated Bounded Contexts collide – without a dedicated orchestration server?
## Alternatives Evaluated
### Alternative 1: n8n-Native Cross-Context Checks
Add HTTP calls inside n8n workflows to query state from other Bounded Contexts before sending notifications.
Advantages:
- No new infrastructure – extends existing n8n workflows
- Fastest to implement (~20h for basic freshness checks)
Disadvantages:
- Violates workflow-orchestration.md §Boundary Rules: "n8n handles template resolution and delivery – domain logic stays in NestJS"
- n8n gains Operations awareness – cross-context coupling moves into a tool with no type safety, no test framework, and no version control for logic
- No solution for Supersession (n8n cannot query or cancel other workflows' pending messages)
- Maintenance burden: business rules scattered across NestJS handlers AND n8n workflows
### Alternative 2: Temporal Orchestration Engine
Deploy Temporal as a dedicated communication orchestration layer.
Advantages:
- Purpose-built for long-running, signal-aware workflows
- Native support for workflow queries, signals, and cancellation – solves all 14 conflict scenarios elegantly
- Battle-tested in production at scale (Uber, Netflix, Stripe)
Disadvantages:
- Requires a Temporal Server (Go binary + database) – a third stateful service alongside PostgreSQL and Redis
- 2-node Docker Swarm has no headroom for a dedicated orchestration cluster
- Operational complexity: Temporal requires its own monitoring, versioning strategy, and failure recovery
- Overkill: 10 of 14 conflicts are already solved by existing guards or trivial checks. Temporal's power addresses only the 4 Tier 1 conflicts
### Alternative 3: Communication Decision Engine on BullMQ + PostgreSQL (Selected)
Build a lightweight decision layer in the NestJS handler tier, using three architectural primitives backed by the existing BullMQ + PostgreSQL stack.
Advantages:
- Zero new infrastructure – uses existing BullMQ for windowed holds, PostgreSQL for state queries
- Decision logic stays in NestJS (type-safe, testable, version-controlled)
- Three primitives cover all 14 conflict scenarios with minimal coupling
- Aligns with the existing notification pipeline architecture – extends, does not replace
Disadvantages:
- Custom implementation – no off-the-shelf library provides these primitives for modular monoliths
- Cross-schema reads introduce coupling between Bounded Contexts (see Consequences)
- BullMQ job cancellation has race condition edge cases (see Research Phases, Phase 1)
## Decision
Implement a Communication Decision Engine with two core primitives and one extension primitive in the NestJS notification handler layer. The engine operates between the Hasura Event Trigger (input) and the n8n webhook call (output), intercepting notification requests and applying cross-context decision logic before forwarding.
The Supersession Rule is the architectural centerpiece β it solves the hardest conflicts (C8, C10) where cross-workflow state awareness is required and no standard pattern exists. The Freshness Gate is a supporting primitive that handles the more common pre-send validation cases (C5, C12). The Coalescing Window is a Phase 3 extension explored only if the core primitives succeed.
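Conceptually, the engine exposes a single decision point to the handler layer. A minimal sketch of that contract, assuming hypothetical names (the outcome variants and request fields below are illustrative, not a final API):

```ts
// Hypothetical decision contract - names are illustrative, not a final API.
// The engine sits between the Hasura event handler and the n8n webhook call.
type DecisionOutcome =
  | { kind: 'FORWARD' }                                   // pass through to n8n
  | { kind: 'SUPPRESS'; reason: string }                  // e.g. Freshness Gate hit
  | { kind: 'SUBSTITUTE'; template: string }              // e.g. empathy feedback (C12)
  | { kind: 'SUPERSEDE'; dismissedMessageIds: string[] }; // Supersession Rule applied

interface NotificationRequest {
  tenantId: string;
  triggerEvent: string; // e.g. 'PRE_TRIP_REMINDER'
  entityType: 'booking' | 'service_leg' | 'tour_offering';
  entityId: string;
}

interface CommunicationDecisionEngine {
  decide(request: NotificationRequest): Promise<DecisionOutcome>;
}
```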
```
                   Hasura Event Trigger
                            │
                      NestJS Handler
                            │
┌───────────────────────────────────────────────────┐
│           Communication Decision Engine           │
│                                                   │
│  CORE PRIMITIVES                                  │
│  ┌──────────────────────┐  ┌──────────────────┐   │
│  │ Supersession Rule    │  │ Freshness Gate   │   │
│  │ [CENTERPIECE]        │  │                  │   │
│  │                      │  │ Cross-BC state   │   │
│  │ Dismiss pending      │  │ check before     │   │
│  │ weaker notification  │  │ sending sched-   │   │
│  │ when stronger event  │  │ uled messages    │   │
│  │ arrives for same     │  │                  │   │
│  │ entity               │  │ Covers: C5, C12  │   │
│  │                      │  │ (+ C2–C4, C6     │   │
│  │ Covers: C5, C8, C10  │  │ as simple        │   │
│  │                      │  │ guards)          │   │
│  └──────────────────────┘  └──────────────────┘   │
│                                                   │
│  AUDIT                                            │
│  ┌─────────────────────────────────────────────┐  │
│  │ Suppression Audit Log                       │  │
│  │ notification_suppressions table             │  │
│  └─────────────────────────────────────────────┘  │
│                                                   │
│  EXTENSION (Phase 3 – if core succeeds)           │
│  ┌─────────────────────────────────────────────┐  │
│  │ Coalescing Window                           │  │
│  │ Time-windowed merge of related events       │  │
│  │ Covers: C14 (rebooking)                     │  │
│  └─────────────────────────────────────────────┘  │
└───────────────────────────────────────────────────┘
                            │ (if not suppressed/superseded)
              n8n Webhook → BullMQ → Provider API
```

### Primitive 1: Freshness Gate
Before dispatching any scheduled/cron-triggered notification, the handler queries the current state of the affected entity across relevant Bounded Contexts.
Covers: C5, C12 (genuinely cross-context, require Operations awareness in Commerce handlers). Additionally covers C2, C3, C4, C6, C13 – but these 5 are standard defensive query patterns (re-check booking.status, check for in-flight payments) that a competent engineer implements without architectural research. The Freshness Gate's research value comes from C5 and C12 specifically, where the handler must reach into a different Bounded Context.
Mechanism:
- Cron trigger fires → Hasura Event Trigger → NestJS handler receives `NotificationRequest`
- Handler identifies the `entity_type` and `entity_id` (booking, service_leg, tour_offering)
- Handler executes a freshness query against the relevant context:

```sql
-- C12: Feedback after incident – check Operations context
SELECT EXISTS (
  SELECT 1 FROM operations.incidents
  WHERE tour_offering_id = :tour_offering_id
    AND severity = 'CRITICAL'
    AND status IN ('OPEN', 'IN_PROGRESS', 'RESOLVED')
) AS has_critical_incident;
```

```sql
-- C5: Pre-trip reminder for cancelled leg – check Operations context
SELECT status FROM operations.service_legs
WHERE tour_offering_id = :tour_offering_id
  AND leg_type = 'PICKUP'
LIMIT 1;
```

- If the freshness check fails (entity cancelled, incident exists, payment in-flight), the handler:
  - Suppresses the notification (does not forward to n8n)
  - Logs the suppression in `notification_suppressions`
  - Optionally substitutes an alternative template (C12: empathy feedback instead of standard feedback)
Cross-schema read justification: DDD §7.2 permits read-side coupling between Bounded Contexts. The freshness query is a read-only check – no mutations cross context boundaries.
Honesty note: The Freshness Gate pattern itself ("check state before acting") is not novel. What makes C5 and C12 research-relevant is the systematic injection of cross-BC reads into a pipeline designed to be fire-and-forget, and the architectural question of where the boundary between "permitted read-model coupling" and "BC isolation violation" lies. The remaining 5 freshness scenarios (C2, C3, C4, C6, C13) are standard engineering – they use single-context status checks that the sweep queries should already include.
### Primitive 2: Supersession Rule
When a "stronger" event arrives, the handler queries for and dismisses any pending/queued notifications from "weaker" events on the same entity.
Covers: C5, C8, C10 (3 scenarios)
Event priority hierarchy (strongest → weakest):

```
ServiceLegCancelled
  > IncidentCreated (CRITICAL)
  > ServiceLegDelayed
  > VehicleSwapped
  > PRE_TRIP_REMINDER
  > FinalPaymentDue
```

Why this is the centerpiece of AP26: The Supersession Rule requires the notification pipeline to maintain awareness of its own pending state – something no standard event-driven pattern provides. When ServiceLegCancelled arrives, the handler must: (a) know that an IncidentCreated broadcast is sitting in PENDING_REVIEW for the same leg, (b) dismiss it atomically, (c) absorb its context into the cancellation message. This cross-workflow state awareness, combined with BullMQ job cancellation atomicity and context forwarding, has no standard pattern in the event-driven literature.
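Phase 1 plans to define this hierarchy as a configurable data structure rather than hardcoded logic. One possible shape is a weight map, sketched here with illustrative weights and key names:

```ts
// One possible data-driven shape: higher weight wins. Weights are
// illustrative assumptions; real values would live in configuration so
// later phases can adjust them without code changes.
const EVENT_PRIORITY: Record<string, number> = {
  ServiceLegCancelled: 60,
  'IncidentCreated(CRITICAL)': 50,
  ServiceLegDelayed: 40,
  VehicleSwapped: 30,
  PRE_TRIP_REMINDER: 20,
  FinalPaymentDue: 10,
};

// Everything strictly weaker than the incoming event is a supersession target.
function weakerEventsThan(strongEvent: string): string[] {
  const strength = EVENT_PRIORITY[strongEvent] ?? 0;
  return Object.keys(EVENT_PRIORITY).filter(
    (event) => (EVENT_PRIORITY[event] ?? 0) < strength,
  );
}
```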
Mechanism:
- Strong event arrives → NestJS handler receives `NotificationRequest`
- Handler queries pending notifications for the same entity via the `conversations` → `messages` path:

```sql
-- Schema note: booking_id and trip_id live on conversations, not messages.
-- Messages reference conversations via conversation_id.
SELECT m.id, m.trigger_event, m.status, c.id AS conversation_id
FROM communications.messages m
JOIN communications.conversations c ON m.conversation_id = c.id
WHERE c.tenant_id = :tenant_id
  AND (c.booking_id = :booking_id OR c.trip_id = :service_leg_id)
  AND m.status IN ('QUEUED', 'PENDING_REVIEW')
  AND m.trigger_event = ANY(:weaker_events);
```

- For each matching weaker message:
  - Update `status = 'SUPERSEDED'`, `superseded_by_event_id = :strong_event_id`
  - If the message has a pending BullMQ job: remove the job by ID
  - If the message has a pending broadcast workflow (`PENDING_REVIEW`): auto-dismiss with reason `SUPERSEDED_BY_{strong_event_type}`
- Log the supersession in `notification_suppressions`
- Proceed with the strong event's notification – optionally merging context from the superseded message (C8: cancellation message includes "reason: breakdown")
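A condensed sketch of the query-and-dismiss steps, assuming a generic query runner, a BullMQ `Queue` instance, and a hypothetical `bull_job_id` column that links a message to its queued job (suppression logging and context forwarding are left to the caller):

```ts
import { Queue } from 'bullmq';

interface QueryRunner {
  query<T>(sql: string, params: unknown[]): Promise<T[]>;
}

interface PendingMessage {
  id: string;
  trigger_event: string;
  status: 'QUEUED' | 'PENDING_REVIEW';
  bull_job_id: string | null; // assumed column linking a message to its queued job
}

async function supersedeWeakerMessages(
  db: QueryRunner,
  queue: Queue,
  p: {
    tenantId: string;
    bookingId: string;
    serviceLegId: string;
    strongEventId: string;
    weakerEvents: string[];
  },
): Promise<PendingMessage[]> {
  const pending = await db.query<PendingMessage>(
    `SELECT m.id, m.trigger_event, m.status, m.bull_job_id
     FROM communications.messages m
     JOIN communications.conversations c ON m.conversation_id = c.id
     WHERE c.tenant_id = $1
       AND (c.booking_id = $2 OR c.trip_id = $3)
       AND m.status IN ('QUEUED', 'PENDING_REVIEW')
       AND m.trigger_event = ANY($4)`,
    [p.tenantId, p.bookingId, p.serviceLegId, p.weakerEvents],
  );

  for (const msg of pending) {
    // Guarded update: only flips messages that are still pending.
    await db.query(
      `UPDATE communications.messages
       SET status = 'SUPERSEDED', superseded_by_event_id = $1
       WHERE id = $2 AND status IN ('QUEUED', 'PENDING_REVIEW')`,
      [p.strongEventId, msg.id],
    );
    if (msg.bull_job_id) {
      const job = await queue.getJob(msg.bull_job_id);
      // remove() rejects for a job that is already active - the worker-side
      // status guard (see below) covers that race.
      await job?.remove().catch(() => undefined);
    }
  }
  return pending; // caller logs suppressions and merges context
}
```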
Context forwarding: When a superseded message carries context that the superseding message needs (e.g., C8: the breakdown incident's description), the handler reads the superseded message's rendered_content metadata and injects it into the superseding notification's template variables.
BullMQ atomicity challenge: The race condition between the Supersession handler calling job.remove() and the BullMQ worker already executing the job is the core technical uncertainty. The mitigation (worker re-checks messages.status before calling the provider API) introduces a double-read pattern that adds latency and complexity. Whether this pattern is reliable under concurrent load is an explicit research output of Phase 1.
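A sketch of that double-read guard on the worker side (the queue name and the status-lookup helper are assumptions):

```ts
import { Worker } from 'bullmq';

declare function fetchMessageStatus(messageId: string): Promise<string>; // hypothetical
declare function sendViaProvider(payload: unknown): Promise<void>;       // hypothetical

// Double-read guard: re-check messages.status immediately before the
// provider call. If the Supersession handler lost the remove() race
// (job already active), the worker skips the dispatch instead.
const dispatchWorker = new Worker(
  'notification-dispatch', // assumed queue name
  async (job) => {
    const status = await fetchMessageStatus(job.data.messageId);
    if (status === 'SUPERSEDED') {
      return; // skip: the suppression audit row already explains why
    }
    await sendViaProvider(job.data);
  },
  { connection: { host: 'localhost', port: 6379 } },
);
```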
### Extension Primitive: Coalescing Window (Phase 3)
> NOTE: The Coalescing Window is an extension explored only if the core primitives (Supersession Rule, Freshness Gate) succeed in Phases 1–2. It currently covers a single scenario (C14: rebooking), which can alternatively be solved by creating a composite rebookPassenger Hasura Action that eliminates the conflict at the domain level. The Coalescing Window becomes research-relevant only if additional coalescing scenarios emerge during Phase 1–2 work.
Hold a notification for a configurable time window. If a semantically related event arrives within the window, merge both into a single notification.
Covers: C14 (1 scenario β rebooking)
Alternative approach (simpler): Create a rebookPassenger composite action that emits a single PassengerRebooked event instead of separate PassengerCancelled + PassengerAdded. This eliminates the conflict at the domain layer without notification-layer machinery. The Coalescing Window is explored only if the composite action approach proves insufficient (e.g., the two actions originate from different dispatcher sessions, or the rebooking spans different operators).
Mechanism (if pursued):
- Event A arrives (e.g., `PassengerCancelled`) → handler checks coalescing rules
- If a coalescing rule matches (e.g., "PassengerCancelled + PassengerAdded for same `passenger_profile_id` within window"):
  - Enqueue a BullMQ delayed job with the notification payload and a configurable delay (default: 5 minutes)
  - Store the pending coalescing intent in Redis: `coalesce:{passenger_profile_id}:{event_type} → job_id`
- If Event B arrives within the window:
  - Remove the delayed BullMQ job, create a merged `REBOOKING_CONFIRMED` notification
- If the window expires without Event B:
  - The delayed job fires → handler releases Event A's notification as-is
Tenant-configurable: The coalescing window duration is stored on operators.communication_config (JSONB).
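If Phase 3 is pursued, the hold-and-merge could look roughly like this, using BullMQ for the delayed hold and ioredis for the intent key (queue name, key schema, and the config helper are assumptions):

```ts
import { Queue } from 'bullmq';
import Redis from 'ioredis';

// Hypothetical helper reading operators.communication_config (JSONB).
declare function getCoalescingWindowMs(tenantId: string): Promise<number>;

const redis = new Redis();
const notifications = new Queue('notification-dispatch', {
  connection: { host: 'localhost', port: 6379 },
});

// Event A (PassengerCancelled): hold the notification in a delayed job
// and record the intent so Event B can find it.
async function holdForCoalescing(p: {
  tenantId: string;
  passengerProfileId: string;
  eventType: string;
  payload: unknown;
}): Promise<void> {
  const windowMs = await getCoalescingWindowMs(p.tenantId); // default: 5 minutes
  const job = await notifications.add('coalesced-hold', p.payload, { delay: windowMs });
  // Intent key expires with the window, so stale keys clean themselves up.
  await redis.set(
    `coalesce:${p.passengerProfileId}:${p.eventType}`,
    String(job.id),
    'PX',
    windowMs,
  );
}

// Event B (PassengerAdded): if an intent exists, merge into one message.
async function tryCoalesce(passengerProfileId: string): Promise<boolean> {
  const key = `coalesce:${passengerProfileId}:PassengerCancelled`;
  const heldJobId = await redis.get(key);
  if (!heldJobId) return false; // window expired: Event A is released as-is

  const held = await notifications.getJob(heldJobId);
  await held?.remove();
  await redis.del(key);
  await notifications.add('dispatch', { template: 'REBOOKING_CONFIRMED' });
  return true;
}
```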
### Suppression Audit Log
Every suppression, supersession, and coalescing decision is logged for GoBD compliance:
```sql
CREATE TABLE communications.notification_suppressions (
  id UUID PRIMARY KEY DEFAULT gen_random_uuid(),
  tenant_id UUID NOT NULL REFERENCES backoffice.operators(id),
  suppressed_event_id UUID NOT NULL,         -- the Hasura event_id that was suppressed
  suppressed_trigger_event VARCHAR NOT NULL, -- e.g., 'FINAL_PAYMENT_DUE'
  primitive_used VARCHAR NOT NULL,           -- 'FRESHNESS_GATE' | 'SUPERSESSION' | 'COALESCING'
  reason VARCHAR NOT NULL,                   -- human-readable: 'booking_cancelled', 'superseded_by_ServiceLegCancelled'
  conflicting_event_id UUID,                 -- the event that caused the suppression (nullable for freshness checks)
  entity_type VARCHAR NOT NULL,              -- 'booking' | 'service_leg' | 'tour_offering'
  entity_id UUID NOT NULL,
  substitution_template VARCHAR,             -- if an alternative template was used instead (C12: empathy feedback)
  created_at TIMESTAMPTZ NOT NULL DEFAULT now()
);

CREATE INDEX idx_notification_suppressions_entity
  ON communications.notification_suppressions(tenant_id, entity_type, entity_id);
```

This table provides an audit trail for the question: "Why did passenger X NOT receive notification Y?" – critical for compliance and dispute resolution.
## Conflict Test Matrix
These five test scenarios define the acceptance criteria for the Communication Decision Engine. Each scenario specifies the event sequence, the expected system behavior, and the primitive exercised.
| # | Precondition | Event Sequence | Expected Behavior | Primitive | Acceptance Criterion |
|---|---|---|---|---|---|
| T1 | Booking in DEPOSIT_PAID, passenger cancels at 07:52 | BookingCancelled → (08:00) FinalPaymentDue sweep fires | Sweep excludes cancelled booking. If race condition: handler re-checks booking.status, suppresses reminder. | Freshness Gate | No payment reminder sent. notification_suppressions row with reason = 'booking_cancelled'. |
| T2 | Pre-trip reminder scheduled at T-24h. Leg cancels at T-20h. | PRE_TRIP_REMINDER Cron fires → handler checks service_legs.status → finds CANCELLED | Handler suppresses reminder. ServiceLegCancelled handler already sent cancellation notification. | Freshness Gate | No reminder sent. Cancellation notification sent. Suppression logged. |
| T3 | BREAKDOWN incident at T+0. Broadcast enters PENDING_REVIEW. Dispatcher cancels leg at T+2min. | IncidentCreated → ServiceLegCancelled | Cancellation handler queries messages WHERE trip_id AND status = 'PENDING_REVIEW'. Finds pending broadcast. Auto-dismisses with SUPERSEDED_BY_CANCELLATION. Sends cancellation notification with reason: BREAKDOWN. | Supersession | Broadcast auto-dismissed. Cancellation message includes breakdown context. Two notification_suppressions rows. |
| T4 | Trip completed. CRITICAL incident occurred during trip. Completion sweep fires at T+24h. | BookingCompleted → handler checks incidents WHERE tour_offering_id AND severity = 'CRITICAL' | Handler detects incident. Substitutes standard feedback template with BOOKING_COMPLETED_DISRUPTED (empathy variant). | Freshness Gate | Empathy feedback template sent (not standard). Suppression logged with substitution_template. |
| T5 | Dispatcher cancels passenger from Trip A. Within 3 min, adds same person to Trip B. | PassengerCancelled → (wait) → PassengerAdded (same passenger_profile_id, within 5 min window) | Cancellation notification held in coalescing window. PassengerAdded triggers merge. Single REBOOKING_CONFIRMED notification sent. | Coalescing Window | Single rebooking message sent. No standalone cancellation or addition message. Coalescing logged. |
## Research Phases
### Phase 1: Supersession Rule Spike (4 weeks, ~45h)
The centerpiece of the research. This phase addresses the hardest technical uncertainties.
- Implement the Supersession Rule for scenarios C8 (broadcast + cancellation) and C5 (reminder + cancellation, supersession variant)
- Define the event priority hierarchy as a configurable data structure (not hardcoded)
- Solve the BullMQ job cancellation race condition: evaluate `job.remove()` atomicity, implement a `status = SUPERSEDED` guard in the BullMQ worker (worker checks message status before dispatching – if SUPERSEDED, skip)
- Implement context forwarding: superseding messages absorb relevant context from superseded messages (C8: cancellation includes breakdown reason)
- Create the `notification_suppressions` table and logging infrastructure
- Extend incident-broadcast-protocol.md E-1 edge state to cover `ServiceLegCancelled` (not just `IncidentResolved`)
incident-broadcast-protocol.mdE-1 edge state to coverServiceLegCancelled(not justIncidentResolved) - Output: Working Supersession Rule with priority hierarchy, context forwarding, and suppression audit log. Race condition solution documented.
- Go/No-Go: Can BullMQ job cancellation be made atomic? Does context forwarding produce coherent messages? Does the `conversations` → `messages` JOIN path perform under concurrent supersession writes?
### Phase 2: Freshness Gate (3 weeks, ~35h)
Supporting primitive. Lower architectural novelty but broader coverage.
- Implement the Freshness Gate for the two genuinely cross-context scenarios: C12 (feedback + incident) and C5 (reminder + cancellation)
- Define the cross-schema query interface: a `FreshnessCheckService` in NestJS that encapsulates all cross-BC reads
- Validate DDD §7.2 compliance: confirm that read-only cross-schema queries do not trigger cascading mutations or violate tenant isolation
- Add the 5 standard-engineering freshness checks (C2, C3, C4, C6, C13) as simple guard clauses – these are engineering, not research, but they reuse the suppression audit infrastructure from Phase 1
- Measure query latency impact on the notification pipeline (target: < 50ms per freshness check)
- Output: Working Freshness Gate for 7 scenarios. Performance benchmarks. DDD §7.2 compliance assessment.
- Go/No-Go: Does the gate add < 50ms latency? Does the cross-schema read pattern remain maintainable? Is the BC isolation boundary clear?
### Phase 3: Coalescing Window (conditional, 2 weeks, ~20h)
> NOTE (Gate): Phase 3 proceeds only if (a) the composite rebookPassenger action approach proves insufficient, AND (b) additional coalescing scenarios emerge during Phase 1–2 work. If the composite action covers the rebooking case, Phase 3 reduces to documenting why coalescing was unnecessary.
- Evaluate whether a composite `rebookPassenger` Hasura Action eliminates C14 at the domain level
- If not: implement the Coalescing Window for scenario C14 (rebooking) with a Redis-backed intent store
- Add a `communication_config` JSONB field to the `operators` table for tenant-specific window durations
- Test with ≥ 2 operators with distinct coalescing window durations
- Output: Decision on composite action vs. Coalescing Window. If pursued: working Coalescing Window with tenant configuration.
- Go/No-Go: Does the domain-level solution suffice? If not, does the window produce correct merge behavior?
### Phase 4: Integration Testing + Audit Validation (1 week, ~15h)
- Run all 5 test matrix scenarios (T1–T5) as integration tests
- Validate the `notification_suppressions` audit log: every suppression, supersession, and coalescing decision is logged with the correct `primitive_used`, `reason`, and `conflicting_event_id`
- Verify that suppression audit queries answer: "Why did passenger X NOT receive notification Y?"
- Validate tenant isolation: Operator A's suppression rules do not affect Operator B's messages
- Output: Green test suite. Audit log query examples. Tenant isolation validation.
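As an illustration of what a T2-style integration test could look like (Jest-style; every fixture below is an assumption about the test harness, not existing code):

```ts
import { describe, expect, it } from '@jest/globals';

// All of these are assumed harness fixtures, not real project code.
declare const db: { query<T>(sql: string, params: unknown[]): Promise<T[]> };
declare const handler: { handle(event: unknown): Promise<void> };
declare const n8nCalls: unknown[]; // captured outgoing webhook calls
declare function seedCancelledLeg(): Promise<string>; // returns a service_leg id
declare function preTripReminderEvent(legId: string): unknown;

describe('T2: pre-trip reminder for a cancelled leg', () => {
  it('suppresses the reminder and logs the suppression', async () => {
    const legId = await seedCancelledLeg();

    await handler.handle(preTripReminderEvent(legId));

    expect(n8nCalls).toHaveLength(0); // nothing forwarded to n8n
    const rows = await db.query<{ reason: string }>(
      `SELECT reason FROM communications.notification_suppressions
       WHERE entity_type = 'service_leg' AND entity_id = $1`,
      [legId],
    );
    expect(rows).toHaveLength(1); // exactly one audit row
  });
});
```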
Total estimated effort: 95h core (Phases 1–2 + Phase 4) + 20h conditional (Phase 3). Range: 95–115h depending on the Coalescing Window gate decision.
## Evaluation Metrics
| Metric | Target | Measurement Method |
|---|---|---|
| False-positive suppression rate | < 5% | Manual review of 100 suppression log entries from test scenarios |
| Conflict resolution latency | < 200ms end-to-end | Time from conflicting event arrival to suppression/coalescing decision (instrumented in NestJS handler) |
| Audit completeness | 100% | Every suppression decision has a linked suppressed_event_id, reason, and conflicting_event_id |
| Tenant config isolation | 0 cross-tenant leaks | Operator A's communication_config never influences Operator B's decisions |
## Consequences
### Positive
- Deterministic delivery decisions – the pipeline resolves cross-context conflicts before sending, eliminating contradictory messages
- Zero new infrastructure – the engine runs on existing BullMQ + PostgreSQL, within the NestJS handler layer
- Auditable suppressions – every "message NOT sent" decision is logged with the triggering reason, satisfying GoBD traceability
- Tenant-configurable – operators customize coalescing windows, suppression rules, and priority hierarchies via `communication_config`
- Extensible – new conflict scenarios only require adding freshness checks or priority entries, not architectural changes
### Negative
- Cross-schema reads – the Freshness Gate introduces read-only coupling between Commerce handlers and Operations state. Each new freshness check increases the coupling surface
- Custom implementation – no off-the-shelf library provides these three primitives for modular monoliths. The implementation is Busflow-specific
- BullMQ job cancellation complexity – the Supersession Rule requires cancelling queued jobs, which introduces race conditions between the worker and the supersession handler
- Coalescing Window adds latency – notifications held in the coalescing window experience a deliberate delay (configurable, default 5 min). Passengers wait longer for their cancellation confirmation when a rebooking follows
## Risks
| Risk | Likelihood | Mitigation |
|---|---|---|
| Freshness Gate queries degrade notification pipeline latency | Low | Freshness queries are indexed reads on existing tables. Target: < 50ms. Circuit breaker on the freshness check – if a query times out, send the notification (fail-open, not fail-closed). |
| Cross-schema reads proliferate and erode BC isolation | Medium | Encapsulate all cross-BC reads in a single FreshnessCheckService. No raw cross-schema queries outside this service. Review threshold: > 5 freshness checks triggers architectural review. |
| BullMQ job cancellation race condition causes duplicate sends | Medium | Worker checks messages.status before dispatching. If status = SUPERSEDED, skip dispatch. Double-check: notification_suppressions row exists for this event. |
| Coalescing Window size misconfigured by operator | Low | Default value (5 min) covers 95% of rebooking workflows. Admin UI shows a warning for windows > 15 min or < 1 min. |
| Suppression audit log grows large | Low | Partition notification_suppressions by created_at (monthly). Retention policy: 10 years (GoBD). |
## Interaction with AP7a
AP7a (funding-work-packages.md §AP7a) covers delivery resilience: provider error classification, retry strategies, circuit breakers, fallback chains. AP26 covers delivery decisions: should this message be sent at all?
| Concern | AP7a | AP26 |
|---|---|---|
| "Meta API returns 500" | β Retry with backoff | β |
| "Passenger should not receive this message" | β | β Freshness Gate / Supersession |
| "Two messages should be merged into one" | β | β Coalescing Window |
| "Channel suspended during dispatch" | β Fallback chain | β |
| "Why was this message NOT sent?" | β | β Suppression audit log |
The two APs share no implementation code. AP7a operates at the BullMQ worker / provider API layer. AP26 operates at the NestJS handler layer, upstream of n8n.
## References
- notification-pipeline-protocol.md – existing 4-layer notification architecture
- incident-broadcast-protocol.md – dispatcher approval gate, E-1 edge state
- event-catalog.md – 53 domain events across 4 BCs
- event-contracts-commerce.md – booking lifecycle events
- event-contracts-operations.md – operational events (delay, incident, cancellation)
- workflow-orchestration.md – boundary rules, n8n scope
- booking-lifecycle-protocol.md – booking state machine
- cancellation-protocol.md – partial cancellation saga
- vehicle-swap-protocol.md – VehicleSwapped event and remapping
- domain-driven-design.md §7.2 – read-side coupling between BCs
- ADR-019: Change Events Polymorphic Audit Trail – audit pattern