Busflow Docs

Internal documentation portal


Operations Database Schema

This document details the physical database schema for the Operations Bounded Context (schema: operations). The Operations schema is the execution layer for real-world logistics: fleet assignments, IoT hardware integrations, offline app synchronization, and fulfillment validation.

Physical Entity Relationship Diagram

```mermaid
erDiagram
    SERVICE_LEG ||--o{ LEG_ASSIGNMENT : "assigned via"
    SERVICE_LEG ||--|{ ROUTE_WAYPOINT : "follows"
    SERVICE_LEG ||--o{ INCIDENT : "disrupted by"
    SERVICE_LEG ||--o{ EXPENSE_RECEIPT : "generates actuals"
    SERVICE_LEG ||--o{ ONBOARD_SALE : "generates actuals"
    SERVICE_LEG ||--o{ BOARDING_EVENT : "validates boarding"
    SERVICE_LEG ||--o{ CREW_DUTY_LOG : "logs duty"
    SERVICE_LEG ||--o{ TELEMETRY_POINT : "tracks position"

    ISSUE_REPORT ||--o{ INCIDENT_ISSUE_REPORT : "linked via"
    INCIDENT ||--o{ INCIDENT_ISSUE_REPORT : "linked via"
```

Table Definitions

Note on Cross-Schema References: Foreign keys across bounded contexts (e.g., tenant_id, vehicle_id, ticket_id) use soft UUID references. The operations schema maintains hard relational constraints only within its own boundary.

service_legs

The physical, continuous operation of a vehicle between an origin and destination. A single offering can have multiple legs.

| Column | Type | Constraints | Description |
|---|---|---|---|
| id | UUID | Primary Key | Unique leg identifier |
| tenant_id | UUID | Not Null | Soft FK to backoffice.operators |
| tour_offering_id | UUID | Not Null | Soft FK to commerce.tour_offerings |
| tour_departure_id | UUID | Not Null | Soft FK to backoffice.tour_departures. Set when Operations creates the leg from the TripPublished event payload — see ADR-018. |
| leg_type | VARCHAR | Not Null | PICKUP, TRANSIT, TRANSFER, DROPOFF, REPOSITIONING |
| scheduled_start | TIMESTAMP | Not Null | Expected leg start |
| scheduled_end | TIMESTAMP | Not Null | Expected leg end |
| actual_start | TIMESTAMP | Nullable | Actual leg start. Set on SCHEDULED→ACTIVE transition. |
| actual_end | TIMESTAMP | Nullable | Actual leg end. Set on ACTIVE/DELAYED→COMPLETED transition. |
| status | VARCHAR | Not Null, Default: SCHEDULED | SCHEDULED, ACTIVE, DELAYED, COMPLETED, CANCELLED |
| sequence_order | INT | Not Null | Execution order within the departure. UNIQUE together with tour_departure_id — see constraint below. |
| boarding_point_id | UUID | Nullable | Soft FK to backoffice.boarding_point_library. Set during TripPublished handler for PICKUP-typed legs. Null for TRANSIT, TRANSFER, DROPOFF, REPOSITIONING. Enables downstream passenger targeting for incident broadcasts — see incident-broadcast-protocol.md §2. |
| cancellation_reason | TEXT | Nullable | Required when status → CANCELLED. Free text in Phase 1 (cancellation scenario classification is Phase 2). |
| cancelled_by | UUID | Nullable | Soft FK to backoffice.users. The dispatcher who cancelled the leg. Set by cancelServiceLeg Action. |
| created_at | TIMESTAMPTZ | Default: now() | |

UNIQUE constraint: UNIQUE(tour_departure_id, sequence_order). Prevents duplicate legs per departure and enables idempotent re-creation on TripPublished re-emit.

NOTE

BoardingPoint mapping: Each PICKUP-typed ServiceLeg represents a single Backoffice boarding_point_library item in a 1:1 relationship. The boarding_point_id column on service_legs provides a direct lookup for PICKUP legs — the TripPublished handler sets it at creation time. This enables the incident broadcast chain to determine downstream passengers without fragile positional correlation (see incident-broadcast-protocol.md §2). boarding_events does not carry boarding_point_id — boarding validation resolves the stop via the parent ServiceLeg. If grouped stops (one ServiceLeg serving multiple nearby library items) become necessary in the future, add boarding_point_id on boarding_events. See ADR-001.

ServiceLeg State Machine

| From | To | Trigger | Actor | Guard | Domain Event | Side Effects |
|---|---|---|---|---|---|---|
| — | SCHEDULED | ServiceLeg created | System (TripPublished handler) | TourDeparture.status = PUBLISHED | — | Creates RouteWaypoints from itinerary |
| SCHEDULED | ACTIVE | Driver starts leg | Driver (app action) | LegAssignment exists with status=CONFIRMED | ServiceLegStarted | Sets actual_start = now() |
| SCHEDULED | CANCELLED | Dispatcher cancels pre-start | Dispatcher | No BoardingEvents with status=SUCCESS exist | ServiceLegCancelled | Releases all LegAssignments (status→RELEASED). Commerce releases SeatReservations. Communications notifies passengers at affected stops. |
| ACTIVE | DELAYED | ETA exceeds threshold OR driver reports delay | System / Driver | recalculated_eta - scheduled_end > delay_threshold | ServiceLegDelayed | Auto-creates a system Incident (type=DELAY, severity=CRITICAL, reporter_crew_id=NULL) if no driver-reported DELAY Incident exists for this leg (dedup: (service_leg_id, type, occurred_at ± 5min)). IncidentCreated fires → Communications evaluates broadcast. Dispatch board shows alert overlay. |
| ACTIVE | COMPLETED | Driver ends leg | Driver (app action) | — | ServiceLegCompleted | Sets actual_end = now(). Triggers BookingNoShow evaluation window (for the final leg of a departure). |
| ACTIVE | CANCELLED | Dispatcher terminates active leg | Dispatcher | — | ServiceLegCancelled | See Cancellation Resolution below. Side effects depend on leg type and passenger impact. Always creates a linked Incident. |
| DELAYED | COMPLETED | Driver ends leg | Driver (app action) | — | ServiceLegCompleted | Same as ACTIVE→COMPLETED. Additionally, auto-resolves any linked system-created DELAY Incident (status → RESOLVED, resolution_notes='Leg completed') → IncidentResolved fires → Communications sends all-clear. |
| DELAYED | ACTIVE | ETA recovers below recovery threshold | System | recalculated_eta - scheduled_end < recovery_threshold sustained for delay_dwell_minutes | ServiceLegDelayResolved | Auto-resolves linked system-created DELAY Incident → IncidentResolved fires → Communications sends all-clear. Driver-reported DELAY Incidents managed by a dispatcher (status ≠ OPEN) are not auto-resolved. |
| DELAYED | CANCELLED | Dispatcher cancels delayed leg | Dispatcher | — | ServiceLegCancelled | Same as ACTIVE→CANCELLED. |

Terminal states: COMPLETED and CANCELLED are terminal — no further transitions.
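
The transition table above can be condensed into a small guard map. A minimal sketch (the names `LegStatus`, `TRANSITIONS`, and `canTransition` are illustrative, not part of the codebase):

```typescript
type LegStatus = 'SCHEDULED' | 'ACTIVE' | 'DELAYED' | 'COMPLETED' | 'CANCELLED';

// Allowed target states per the ServiceLeg state machine table.
const TRANSITIONS: Record<LegStatus, LegStatus[]> = {
  SCHEDULED: ['ACTIVE', 'CANCELLED'],
  ACTIVE: ['DELAYED', 'COMPLETED', 'CANCELLED'],
  DELAYED: ['COMPLETED', 'ACTIVE', 'CANCELLED'],
  COMPLETED: [], // terminal
  CANCELLED: [], // terminal
};

function canTransition(from: LegStatus, to: LegStatus): boolean {
  return TRANSITIONS[from].includes(to);
}
```

Encoding the table as data keeps the Hasura Action handlers honest: each handler checks `canTransition` before issuing its UPDATE.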

Delay hysteresis (anti-flapping): The ACTIVE→DELAYED and DELAYED→ACTIVE transitions use asymmetric thresholds to prevent notification spam when the ETA oscillates around the boundary. Configuration (per-operator or global):

| Parameter | Default | Description |
|---|---|---|
| delay_threshold | 15 min | ETA deviation above scheduled_end that triggers ACTIVE→DELAYED |
| recovery_threshold | 5 min | ETA deviation must drop below this to trigger DELAYED→ACTIVE |
| delay_dwell_minutes | 3 min | The ETA must remain below recovery_threshold for this duration before the system emits ServiceLegDelayResolved |

Example: A bus runs 18 min late → DELAYED (exceeds 15 min). Traffic clears and the ETA drops to 12 min — the bus remains DELAYED (12 > 5 min recovery threshold). ETA drops to 4 min and stays below 5 min for 3 consecutive minutes → ACTIVE, emits ServiceLegDelayResolved. This prevents passengers from receiving "Your bus has a delay" / "Your bus is back on time" messages every minute.
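
The hysteresis rule can be expressed as a pure evaluation function. A minimal sketch, assuming the caller tracks how long the deviation has stayed under the recovery threshold (the `evaluateDelay` name and parameter shapes are illustrative):

```typescript
interface HysteresisConfig {
  delayThresholdMin: number;    // default 15
  recoveryThresholdMin: number; // default 5
  delayDwellMin: number;        // default 3
}

type TrackedStatus = 'ACTIVE' | 'DELAYED';

// `etaDeviationMin` = recalculated_eta - scheduled_end, in minutes.
// `minutesBelowRecovery` = how long the deviation has stayed under the
// recovery threshold (the dwell timer).
function evaluateDelay(
  status: TrackedStatus,
  etaDeviationMin: number,
  minutesBelowRecovery: number,
  cfg: HysteresisConfig = { delayThresholdMin: 15, recoveryThresholdMin: 5, delayDwellMin: 3 },
): TrackedStatus {
  if (status === 'ACTIVE' && etaDeviationMin > cfg.delayThresholdMin) {
    return 'DELAYED';
  }
  if (
    status === 'DELAYED' &&
    etaDeviationMin < cfg.recoveryThresholdMin &&
    minutesBelowRecovery >= cfg.delayDwellMin
  ) {
    return 'ACTIVE';
  }
  return status; // inside the hysteresis band — hold current state
}
```

Replaying the worked example: 18 min deviation flips ACTIVE→DELAYED; 12 min holds DELAYED (above the 5 min recovery threshold); 4 min sustained for 3 minutes flips back to ACTIVE.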

Cancellation Resolution (⚠️ L3 REQUIRED): The state machine allows ACTIVE/DELAYED → CANCELLED, but the side effects vary by scenario. This table defines the transition; L3 must specify the resolution strategy. Concrete scenarios:

| Scenario | Leg Type | Passenger Impact | Resolution |
|---|---|---|---|
| A. Construction blocks a pickup stop | PICKUP | Passengers stranded at stop, paid for trip | Dispatcher must arrange alternative (reroute, taxi, second bus) OR cancel affected passengers' bookings → refund flow in Commerce |
| B. Sightseeing stop unreachable | TRANSIT | Passengers on board, stop was optional | Skip leg, mark CANCELLED, no refund needed |
| C. Paid excursion stop unreachable | TRANSIT | Passengers on board, paid ancillary | Skip leg, mark CANCELLED, trigger partial refund for linked Ancillary in Commerce |
| D. Final dropoff blocked | DROPOFF | Passengers on board, need to disembark | Find alternative dropoff point, may create ad-hoc substitute leg |

In all cases, ACTIVE/DELAYED → CANCELLED auto-creates a linked Incident capturing the reason, and preserves existing BoardingEvent records for already-boarded passengers.

Hasura Action: startServiceLeg

Driver transitions SCHEDULED → ACTIVE. Sets actual_start.

Input:

```typescript
interface StartServiceLegInput {
  service_leg_id: UUID;
}
```

Handler route: POST /api/actions/start-service-leg

Guards:

  1. service_leg_id exists and belongs to x-hasura-tenant-id.
  2. status = SCHEDULED.
  3. EXISTS (SELECT 1 FROM leg_assignments WHERE service_leg_id = :id AND crew_member_id = :requesting_crew_member_id AND status = 'CONFIRMED').
  4. SELECT ... FOR UPDATE row lock on service_legs (prevents concurrent start from two drivers during vehicle swap timing).

Transaction steps:

  1. AuditTrailService.captureAndRecord(tx, { entityType: 'service_leg', entityId: service_leg_id, action: 'UPDATE', scope: 'GENERAL' }) — acquires row lock, captures old_values.
  2. UPDATE service_legs SET status = 'ACTIVE', actual_start = now().
  3. Hasura Event Trigger fires ServiceLegStarted (async, post-commit).

Output:

```typescript
interface StartServiceLegOutput {
  service_leg_id: UUID;
  status: 'ACTIVE';
  actual_start: string; // ISO 8601
}
```

Errors:

| Code | Condition |
|---|---|
| LEG_NOT_FOUND | service_leg_id doesn't exist or wrong tenant |
| INVALID_STATUS | status ≠ SCHEDULED |
| NO_ASSIGNMENT | No confirmed LegAssignment for requesting crew member |
| ALREADY_STARTED | Concurrent start — another driver acquired lock first |

Hasura Action: completeServiceLeg

Driver transitions ACTIVE/DELAYED → COMPLETED. Sets actual_end.

Input:

```typescript
interface CompleteServiceLegInput {
  service_leg_id: UUID;
}
```

Handler route: POST /api/actions/complete-service-leg

Guards:

  1. service_leg_id exists and belongs to x-hasura-tenant-id.
  2. status IN (ACTIVE, DELAYED).
  3. Requesting user has active LegAssignment for this leg.

Transaction steps:

  1. AuditTrailService.captureAndRecord(tx, { entityType: 'service_leg', entityId: service_leg_id, action: 'UPDATE', scope: 'GENERAL' }).
  2. UPDATE service_legs SET status = 'COMPLETED', actual_end = now().
  3. If status was DELAYED: auto-resolve any linked system-created DELAY Incident (status → RESOLVED, resolution_notes = 'Leg completed').
  4. Hasura Event Trigger fires ServiceLegCompleted (async, post-commit).

Output:

```typescript
interface CompleteServiceLegOutput {
  service_leg_id: UUID;
  status: 'COMPLETED';
  actual_end: string; // ISO 8601
}
```

Errors:

| Code | Condition |
|---|---|
| LEG_NOT_FOUND | Missing or wrong tenant |
| INVALID_STATUS | status ∉ {ACTIVE, DELAYED} |

Hasura Action: cancelServiceLeg

Dispatcher transitions SCHEDULED/ACTIVE/DELAYED → CANCELLED.

Input:

```typescript
interface CancelServiceLegInput {
  service_leg_id: UUID;
  cancellation_reason: string; // Mandatory
}
```

Handler route: POST /api/actions/cancel-service-leg

Guards:

  1. service_leg_id exists and belongs to x-hasura-tenant-id.
  2. status IN (SCHEDULED, ACTIVE, DELAYED).
  3. x-hasura-user-id has DISPATCHER or MANAGER role.
  4. If status IN (ACTIVE, DELAYED) AND EXISTS (SELECT 1 FROM boarding_events WHERE service_leg_id = :id AND check_in_status IN ('SUCCESS', 'MANUAL_OVERRIDE')) → requires MANAGER role. Regular dispatcher gets BOARDING_IN_PROGRESS error.
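
Guards 3 and 4 combine into a single role/boarding decision. A minimal sketch (the `checkCancelGuards` name and result union are illustrative, assuming the handler has already resolved the leg status and queried boarding_events):

```typescript
type CancelGuardResult = 'OK' | 'BOARDING_IN_PROGRESS' | 'INSUFFICIENT_ROLE';

// `hasSuccessfulBoardings` = EXISTS check on boarding_events with
// check_in_status IN ('SUCCESS', 'MANUAL_OVERRIDE').
function checkCancelGuards(
  status: 'SCHEDULED' | 'ACTIVE' | 'DELAYED',
  role: string, // from x-hasura-role
  hasSuccessfulBoardings: boolean,
): CancelGuardResult {
  if (role !== 'DISPATCHER' && role !== 'MANAGER') return 'INSUFFICIENT_ROLE';
  const inFlight = status === 'ACTIVE' || status === 'DELAYED';
  // In-flight cancellation with boarded passengers is a MANAGER-only call.
  if (inFlight && hasSuccessfulBoardings && role !== 'MANAGER') {
    return 'BOARDING_IN_PROGRESS';
  }
  return 'OK';
}
```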

Transaction steps:

  1. AuditTrailService.captureAndRecord(tx, { entityType: 'service_leg', entityId: service_leg_id, action: 'UPDATE', scope: 'GENERAL' }).
  2. UPDATE service_legs SET status = 'CANCELLED', cancellation_reason, cancelled_by = x-hasura-user-id.
  3. Auto-create linked Incident (type based on cancellation_reason context, severity = CRITICAL).
  4. Cancel any linked OnboardSales with payment_status = 'PENDING_LINK': UPDATE payment_status = 'FAILED'. If checkout_session_id IS NOT NULL, call Commerce to cancel the CheckoutSession.
  5. Release all LegAssignments: UPDATE status = 'RELEASED'.
  6. Hasura Event Trigger fires ServiceLegCancelled (async, post-commit).

Output:

```typescript
interface CancelServiceLegOutput {
  service_leg_id: UUID;
  status: 'CANCELLED';
  incident_id: UUID; // The auto-created incident
}
```

Errors:

| Code | Condition |
|---|---|
| LEG_NOT_FOUND | Missing or wrong tenant |
| ALREADY_COMPLETED | status = COMPLETED |
| ALREADY_CANCELLED | status = CANCELLED |
| BOARDING_IN_PROGRESS | Active boarding events exist and user is not MANAGER |
| INSUFFICIENT_ROLE | User is not DISPATCHER or MANAGER |

TripPublished Consumer Handler

Trigger: Hasura Event Trigger on backoffice.tour_departures when status → PUBLISHED.

Handler route: POST /api/events/trip-published

Input (event payload):

```typescript
interface TripPublishedPayload {
  tour_departure_id: UUID;
  tour_offering_id: UUID;
  tenant_id: UUID;
  max_door_pickups: number | null; // From tour_templates. Null = unlimited. Commerce enforces at checkout.
  boarding_points: TripPublishedBoardingPoint[]; // Resolved boarding point catalog. Commerce denormalizes into tour_offerings.available_boarding_points.
  legs: TripLegDefinition[];
}

interface TripPublishedBoardingPoint {
  boarding_point_id: UUID;         // FK to boarding_point_library — Commerce stores this on passengers
  name: string;                    // Display name (e.g., "Dorfplatz Nachbardorf")
  address: string | null;
  geo_coordinates: { lat: number; lng: number } | null;
  zone_label: string | null;       // UI grouping label
  surcharge: number;               // Effective stop surcharge in cents (0 for origin). Resolved: is_origin ? 0 : (surcharge_override ?? library.surcharge)
  is_origin: boolean;              // Whether this stop is the departure origin for this template
  door_pickup_available: boolean;  // Effective door pickup availability. Resolved: door_pickup_override ?? library.door_pickup_available
  door_pickup_surcharge: number | null; // Effective door pickup premium. Resolved: door_pickup_surcharge_override ?? library.door_pickup_surcharge
  door_pickup_radius_km: number | null; // Radius for address validation. Inherited from library.
  passenger_instructions: string | null;
  display_order: number;           // Display sequence in booking widget
}

interface TripLegDefinition {
  sequence_order: number;
  leg_type: 'PICKUP' | 'TRANSIT' | 'TRANSFER' | 'DROPOFF' | 'REPOSITIONING';
  boarding_point_id: UUID | null; // Set for PICKUP legs — references the same boarding_point_id as TripPublishedBoardingPoint
  scheduled_start: string; // ISO 8601
  scheduled_end: string;
  waypoints: WaypointDefinition[];
}

interface WaypointDefinition {
  geo_coordinates: { lat: number; lng: number };
  sequence_order: number;
  label: string | null;
  waypoint_type: 'BOARDING_STOP' | 'REST_AREA' | 'WAYPOINT' | 'BORDER_CROSSING';
}
```

NOTE

Commerce projection: The boarding_points array carries the fully resolved boarding point catalog. The Backoffice NestJS handler resolves the template assignment cascade (library defaults + template overrides + is_origin surcharge zeroing) before emitting the event. Commerce stores this array as-is into tour_offerings.available_boarding_points JSONB — no further resolution needed at read time. This follows the same denormalization pattern as available_ancillaries.

Handler logic:

  1. For each leg in payload.legs:
    • UPSERT into service_legs ON CONFLICT (tour_departure_id, sequence_order):
      • If existing status = SCHEDULED → update fields.
      • If existing status IN (ACTIVE, DELAYED, COMPLETED, CANCELLED) → skip (do NOT overwrite active/completed legs).
    • For each waypoint: UPSERT into route_waypoints ON CONFLICT (service_leg_id, sequence_order).
  2. Delete orphaned legs: legs with tour_departure_id = :id AND sequence_order > max(payload.legs.sequence_order) AND status = SCHEDULED.

Idempotency: Safe to re-process on TripPublished re-emit — uses UPSERT semantics and skips non-SCHEDULED legs.
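
The status-gated upsert decision from step 1 reduces to a small pure function. A minimal sketch (the `upsertAction` name is illustrative; in the real handler this logic lives in the ON CONFLICT clause and a status check):

```typescript
type LegStatus = 'SCHEDULED' | 'ACTIVE' | 'DELAYED' | 'COMPLETED' | 'CANCELLED';

// On TripPublished re-emit: insert when no row exists for
// (tour_departure_id, sequence_order); update only legs still SCHEDULED;
// never overwrite legs that have started, finished, or been cancelled.
function upsertAction(existingStatus: LegStatus | null): 'INSERT' | 'UPDATE' | 'SKIP' {
  if (existingStatus === null) return 'INSERT';
  return existingStatus === 'SCHEDULED' ? 'UPDATE' : 'SKIP';
}
```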

UNIQUE constraint on service_legs: UNIQUE(tour_departure_id, sequence_order). Prevents duplicate legs per departure and enables idempotent re-creation.

| Index | Columns | Purpose |
|---|---|---|
| idx_service_legs_departure | (tour_departure_id, sequence_order) | UNIQUE — TripPublished upsert and leg ordering |
| idx_service_legs_status_tenant | (tenant_id, status) WHERE status IN ('SCHEDULED', 'ACTIVE', 'DELAYED') | Partial index for dispatch board active legs subscription |
| idx_service_legs_offering | (tour_offering_id) | Commerce cross-reference (boarding, ticketing) |

leg_assignments

The dispatching link tying a ServiceLeg to a fleet asset and staff member. For subcontracted legs, supplier_id replaces the internal vehicle/crew references — see ADR-007.

| Column | Type | Constraints | Description |
|---|---|---|---|
| id | UUID | Primary Key | Unique assignment identifier |
| tenant_id | UUID | Not Null | Soft FK to backoffice.operators |
| service_leg_id | UUID | Foreign Key (service_legs.id), Not Null | Target leg |
| vehicle_id | UUID | Nullable | Soft FK to backoffice.vehicles. Nullable for subcontracted legs. |
| crew_member_id | UUID | Nullable | Soft FK to backoffice.crew_members. Nullable for subcontracted legs. |
| supplier_id | UUID | Nullable | Soft FK to backoffice.suppliers. Set when the operator outsources the leg to a subcontractor. |
| role | VARCHAR | Not Null | E.g., DRIVER, GUIDE |
| status | VARCHAR | Not Null, Default: CONFIRMED | CONFIRMED (active assignment), RELEASED (leg cancelled or crew reassigned). No TENTATIVE state — all dispatch flows treat assignments as immediately effective. |
| assigned_at | TIMESTAMPTZ | Default: now() | When the dispatch action occurred. For audit and conflict resolution. |

NOTE

CHECK constraint: (supplier_id IS NOT NULL AND vehicle_id IS NULL AND crew_member_id IS NULL) OR (supplier_id IS NULL AND vehicle_id IS NOT NULL AND crew_member_id IS NOT NULL). An assignment is either internal (vehicle + crew) or external (supplier).

IMPORTANT

UNIQUE constraint: UNIQUE(crew_member_id, service_leg_id). Prevents double-assignment of the same crew member to the same service leg (optimistic concurrency for concurrent dispatchers — see Dispatch Availability Engine E1). PostgreSQL treats NULL ≠ NULL, so subcontracted legs (crew_member_id IS NULL) are naturally exempt from this constraint.

NOTE

Vehicle Swap Protocol: The SwapVehicle Hasura Action handles vehicle reassignment on an existing assignment, acquiring a SELECT ... FOR UPDATE row lock on the leg_assignment row to prevent concurrent modification. The swap remaps Commerce seat_reservations to the new vehicle's seat map within a single ACID transaction. See vehicle-swap-protocol.md for the full API contract, saga choreography, and edge states.

crew_duty_logs

Immutable ledger for driving and rest times (EU-561/2006 compliance).

| Column | Type | Constraints | Description |
|---|---|---|---|
| id | UUID | Primary Key | Unique log identifier |
| tenant_id | UUID | Not Null | Soft FK to backoffice.operators |
| crew_member_id | UUID | Not Null | Soft FK to backoffice.crew_members |
| service_leg_id | UUID | Nullable | Soft FK to service_legs. NULL for non-leg-specific events (e.g., rest at depot). |
| log_time | TIMESTAMP | Not Null | Time of recorded event |
| event_type | VARCHAR | Not Null | DRIVING, REST, WORK, AVAILABILITY |
| tachograph_data | JSONB | | [Phase 2] Raw or parsed IoT payload. No tachograph integration in Phase 1 — see EU-561 research §3.3. |

NOTE

Phase 2: Duration-based logging → DutyActivity extraction. The current single-timestamp model (log_time) supports only the 11h daily rest heuristic (Phase 1). Five of six EU-561 regulatory limits require duration-based activity logging. Phase 2 will introduce a DutyActivity entity with start_time, end_time, duration_minutes, and a source_type discriminator (TOUR_LEG, LINE_ROUTE, CHARTER, TACHOGRAPH_IMPORT, SELF_DECLARATION) to decouple compliance evaluation from the tour-specific ServiceLeg pipeline. This enables reuse for scheduled bus routes (Linienverkehr), chartering, and tachograph data import. CrewDutyLog remains for Phase 1 operational correlation; DutyActivity supersedes it as the compliance data source. Operations records raw events; Backoffice/compliance services evaluate them — this preserves the compliance boundary. See EU-561 research §6 for the full design intent, entity schema, and migration path.

Phase 1 guard rails: Do not (1) add logic assuming service_leg_id is always present on duty records, (2) build compliance SQL that joins through service_legs → tour_departures, or (3) store compliance outcomes on CrewDutyLog rows.

route_waypoints

High-frequency ETA ingestion and geographical paths.

| Column | Type | Constraints | Description |
|---|---|---|---|
| id | UUID | Primary Key | Unique waypoint identifier |
| tenant_id | UUID | Not Null | Soft FK to backoffice.operators |
| service_leg_id | UUID | Foreign Key (service_legs.id), Not Null | Leg path |
| geo_coordinates | JSONB | Not Null | Lat/Long |
| eta | TIMESTAMP | | Expected time of arrival |
| sequence_order | INT | Not Null | Order in the leg |
| label | VARCHAR | Nullable | Descriptive name for the waypoint (e.g., "Köln Hbf"). Used by ETA tracker and dispatch board. |
| waypoint_type | VARCHAR | Default: WAYPOINT | BOARDING_STOP (physical boarding location within a PICKUP leg), REST_AREA, WAYPOINT (intermediate GPS point), BORDER_CROSSING. Enables the ETA tracker to show "ETA to your stop" instead of "ETA to end of leg." |

telemetry_points

High-frequency GPS signals emitted by the vehicle.

| Column | Type | Constraints | Description |
|---|---|---|---|
| id | UUID | Primary Key | Unique point identifier |
| tenant_id | UUID | Not Null | Soft FK to backoffice.operators |
| vehicle_id | UUID | Not Null | Soft FK to backoffice.vehicles |
| service_leg_id | UUID | Nullable | Soft FK to service_legs. Set when the leg is ACTIVE. NULL for telemetry outside active legs (e.g., repositioning). |
| geo_coordinates | JSONB | Not Null | Current Lat/Long |
| speed | DECIMAL | | Current speed |
| fuel_level | DECIMAL | | [Phase 2] Current fuel reading. No Phase 1 consumer — ETA pipeline uses position and speed only. |
| recorded_at | TIMESTAMP | Default: now() | |

Telemetry Ingestion API

Endpoint: POST /api/telemetry/ingest

Auth: Bearer JWT (Driver Hub). Tenant and vehicle resolved from JWT claims.

Input:

```typescript
interface TelemetryBatch {
  device_id: string;
  points: TelemetryPoint[];
}

interface TelemetryPoint {
  geo_coordinates: { lat: number; lng: number };
  speed: number | null;
  recorded_at: string; // ISO 8601
}
```

Behavior:

  • Batched ingestion: Driver Hub sends telemetry every 10s, batching accumulated points.
  • Rate limiting: Max 6 requests/minute per device_id (@nestjs/throttler). Excess is silently dropped.
  • Dedup: (vehicle_id, recorded_at) — duplicate timestamps skipped.
  • service_leg_id resolution: Server checks if the vehicle has an ACTIVE/DELAYED ServiceLeg (via leg_assignments). If yes, sets service_leg_id on each point. If no active leg, service_leg_id = NULL (repositioning telemetry).
  • Write path: Bulk INSERT into telemetry_points. No Hasura Event Trigger — the ETA recalculation service polls or subscribes independently.
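
The dedup and leg-resolution steps can be sketched as a pure batch-preparation function. Names (`prepareBatch`, `IncomingPoint`) are illustrative; the real write path does this against the database, not an in-memory set:

```typescript
interface IncomingPoint {
  geo_coordinates: { lat: number; lng: number };
  speed: number | null;
  recorded_at: string; // ISO 8601
}

// Drop points whose (vehicle_id, recorded_at) pair already exists, then stamp
// each surviving point with the vehicle's active leg (or null for repositioning).
function prepareBatch(
  points: IncomingPoint[],
  existingTimestamps: Set<string>, // recorded_at values already stored for this vehicle
  activeLegId: string | null,      // resolved via leg_assignments, or null
): Array<IncomingPoint & { service_leg_id: string | null }> {
  const seen = new Set(existingTimestamps);
  const out: Array<IncomingPoint & { service_leg_id: string | null }> = [];
  for (const p of points) {
    if (seen.has(p.recorded_at)) continue; // duplicate timestamp — skip
    seen.add(p.recorded_at);               // also dedups within the batch itself
    out.push({ ...p, service_leg_id: activeLegId });
  }
  return out;
}
```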

ETA Recalculation Service

Trigger: Periodic evaluation every 30s for each ACTIVE/DELAYED ServiceLeg with recent telemetry.

Algorithm (Phase 1 — simple extrapolation):

  1. Fetch latest telemetry_points for the vehicle (last 5 min).
  2. Calculate average speed from recent points.
  3. Compute remaining distance to each downstream route_waypoints using straight-line segments.
  4. recalculated_eta = now() + (remaining_distance / avg_speed).
  5. UPDATE route_waypoints.eta for each downstream waypoint.
  6. Compare recalculated_eta of the final waypoint against service_legs.scheduled_end.
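
Steps 1–4 can be sketched with a straight-line (haversine) distance, per the Phase 1 algorithm. A minimal sketch; `haversineKm` and `recalculateEta` are illustrative names, and real telemetry averaging and waypoint updates are omitted:

```typescript
interface Pt { lat: number; lng: number }

// Great-circle distance between two coordinates, in km.
function haversineKm(a: Pt, b: Pt): number {
  const R = 6371; // mean earth radius, km
  const rad = (d: number) => (d * Math.PI) / 180;
  const dLat = rad(b.lat - a.lat);
  const dLng = rad(b.lng - a.lng);
  const h =
    Math.sin(dLat / 2) ** 2 +
    Math.cos(rad(a.lat)) * Math.cos(rad(b.lat)) * Math.sin(dLng / 2) ** 2;
  return 2 * R * Math.asin(Math.sqrt(h));
}

// recalculated_eta = now + (remaining straight-line distance / avg speed)
function recalculateEta(
  position: Pt,
  downstreamWaypoints: Pt[], // route_waypoints after the current position, in order
  avgSpeedKmh: number,
  now: Date,
): Date {
  let remainingKm = 0;
  let prev = position;
  for (const wp of downstreamWaypoints) {
    remainingKm += haversineKm(prev, wp); // sum straight-line segments
    prev = wp;
  }
  return new Date(now.getTime() + (remainingKm / avgSpeedKmh) * 3_600_000);
}
```

The Phase 2 upgrade swaps only `haversineKm` for a routing-API distance; the function shape stays the same.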

Delay detection:

  • If recalculated_eta - scheduled_end > delay_threshold (default: 15 min) AND service_legs.status = ACTIVE → UPDATE status = 'DELAYED' → Hasura Event Trigger fires ServiceLegDelayed.
  • Hysteresis recovery: if recalculated_eta - scheduled_end < recovery_threshold (default: 5 min) sustained for delay_dwell_minutes (default: 3 min) AND status = DELAYED → UPDATE status = 'ACTIVE' → Hasura Event Trigger fires ServiceLegDelayResolved. See §Delay hysteresis above.

NOTE

Phase 2 upgrade: Replace straight-line distance with OSRM/GraphHopper routing API for road-network-aware ETAs. The service contract remains identical — only the distance calculation changes.

Telemetry/ETA Edge States

| # | Edge State | Resolution |
|---|---|---|
| E-1 | GPS dropout (no telemetry for 5+ min) | ETA service marks the leg's ETA as stale (dispatch board shows "Last update: X min ago" badge). No DELAYED transition from stale data — only from active telemetry deviation. After 15 min of silence → push notification to driver: "GPS signal lost." |
| E-2 | High-frequency telemetry flood | Rate limiting at ingestion endpoint (6 req/min). Within a batch, the system accepts all points — batch size bounds the storage cost. |
| E-3 | ETA oscillation / threshold flapping | Handled by delay hysteresis: asymmetric thresholds (15 min trigger / 5 min recovery) + delay_dwell_minutes (3 min sustained recovery). See §Delay hysteresis. |
| E-4 | Tracking token expired | GET /api/track/:tracking_token returns HTTP 401 with { error: 'TOKEN_EXPIRED' }. Frontend shows "Tracking link expired" message. Tokens use 24h TTL (covers trip day + buffer). |
| E-5 | Vehicle stopped (speed = 0 in traffic) | Speed alone cannot determine ETA. If avg_speed < 5 km/h for > 5 min → ETA service freezes the last calculated ETA (no update). Dispatch board shows "Vehicle stopped" badge. Resumes normal calculation when speed > 5 km/h. |
| E-6 | Multi-leg ETA cascade | When Leg N runs late, the ETA service recalculates scheduled_start for Leg N+1 (and downstream): leg[N+1].recalculated_start = leg[N].recalculated_eta + transfer_buffer. Cascade continues for all subsequent SCHEDULED legs in the same departure. |
| E-7 | Data retention (millions of rows) | telemetry_points uses time-based partitioning: monthly partitions, 90-day retention. After 90 days, the system drops partitions (compliance does not require raw GPS data). The system materializes aggregated metrics (avg speed, total distance per leg) into service_legs columns before each partition drop. Phase 1: pg_partman extension; Phase 2: TimescaleDB hypertable. |

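
The E-6 cascade formula can be sketched as a fold over the downstream legs. A minimal sketch, under the assumption (not stated in the contract) that each downstream leg keeps its planned duration; `cascadeStarts` is an illustrative name:

```typescript
// Recalculated start times for the SCHEDULED legs downstream of a late leg:
// leg[N+1].recalculated_start = leg[N].recalculated_eta + transfer_buffer.
function cascadeStarts(
  downstream: { scheduled_start: Date; scheduled_end: Date }[], // SCHEDULED legs, in sequence order
  lateLegEta: Date,        // recalculated_eta of the late leg N
  transferBufferMin: number,
): Date[] {
  const starts: Date[] = [];
  let prevEta = lateLegEta;
  for (const leg of downstream) {
    const start = new Date(prevEta.getTime() + transferBufferMin * 60_000);
    starts.push(start);
    // Assumption: the leg's planned duration holds, so its own eta shifts by the same amount.
    const durationMs = leg.scheduled_end.getTime() - leg.scheduled_start.getTime();
    prevEta = new Date(start.getTime() + durationMs);
  }
  return starts;
}
```
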
| Index | Columns | Purpose |
|---|---|---|
| idx_telemetry_vehicle_time | (vehicle_id, recorded_at DESC) | Latest position lookup for ETA calculation |
| idx_telemetry_leg | (service_leg_id, recorded_at) WHERE service_leg_id IS NOT NULL | Partial index for per-leg telemetry queries |
| idx_route_waypoints_leg_seq | (service_leg_id, sequence_order) | ETA recalculation downstream waypoint traversal |

NOTE

Partitioning: telemetry_points should use PARTITION BY RANGE (recorded_at) with monthly partitions managed by pg_partman. Retention policy: 90 days. This keeps per-partition size manageable (est. 5M rows/month for 50 concurrent vehicles at 10s intervals).

issue_reports

Raw field logs (e.g., maintenance reports) created by crew. When maintenance_urgency is set to IMMEDIATE, the system emits a VehicleMaintenanceRequired domain event to Backoffice for inspection scheduling — see ADR-008. Status lifecycle: OPEN (driver submits) → IN_PROGRESS (Backoffice schedules a VehicleInspection and emits VehicleInspectionScheduled) → RESOLVED (Backoffice completes inspection and emits VehicleInspectionCompleted). An IssueReport does not automatically create an Incident — the dispatcher manually links IssueReports to Incidents via the incident_issue_reports join table when a vehicle observation escalates into an operational disruption.

| Column | Type | Constraints | Description |
|---|---|---|---|
| id | UUID | Primary Key | Unique issue identifier |
| tenant_id | UUID | Not Null | Soft FK to backoffice.operators |
| vehicle_id | UUID | Not Null | Soft FK to backoffice.vehicles |
| reporter_crew_id | UUID | Not Null | Soft FK to backoffice.crew_members |
| category | VARCHAR | Not Null | E.g., MECHANICAL, CLEANLINESS |
| description | TEXT | Not Null | Details of the issue |
| status | VARCHAR | Not Null | OPEN, IN_PROGRESS, RESOLVED |
| maintenance_urgency | VARCHAR | Not Null, Default: NONE | NONE, IMMEDIATE, SCHEDULED. Escalation flag for cross-context maintenance workflows. |
| attachments | JSONB | Default: '[]' | Array of { storage_key: string, type: 'IMAGE' \| 'VIDEO', uploaded_at: string }. Photo/video evidence from driver field reports. storage_key is a Nhost Storage key (UUID / relative path) — frontend constructs the full URL from key + environment config. Consistent with expense_receipts.receipt_image_key pattern. |
| reported_at | TIMESTAMP | Default: now() | |

Hasura Action — createIssueReport

Type: Mutation · Auth: x-hasura-role ∈ { DRIVER, MANAGER } · Handler: NestJS webhook POST /api/actions/create-issue-report

Input:

| Field | Type | Required | Description |
|---|---|---|---|
| vehicle_id | UUID | Yes | Target vehicle |
| category | String | Yes | MECHANICAL, CLEANLINESS, DAMAGE |
| description | String | Yes | Free-text description |
| maintenance_urgency | String | No | NONE (default), IMMEDIATE, SCHEDULED |
| attachments | [AttachmentInput] | No | { storage_key: String, type: 'IMAGE' \| 'VIDEO' }. uploaded_at set server-side. |
| client_event_id | UUID | No | Client-generated mutation ID for offline dedup |

Processing:

  1. Extract tenant_id and reporter_crew_id from JWT (x-hasura-tenant-id, x-hasura-crew-member-id).
  2. INSERT issue_reports with status = OPEN, reported_at = now().
  3. Create change_event via AuditTrailService.record(tx, ...) (action = CREATE, scope = GENERAL).
  4. If maintenance_urgency = IMMEDIATE: emit VehicleMaintenanceRequired event (see event-contracts-operations.md §VehicleMaintenanceRequired).
  5. IssueReportCreated event fires via Hasura Event Trigger on INSERT (async, post-commit).

Offline Sync: The driver creates the IssueReport locally with client_event_id. On sync, the batch handler calls this action. Server deduplicates via sync_idempotency_log using client_event_id. Attachments: the client uploads photos to Nhost Storage independently with client-generated storage_key UUIDs; the IssueReport row references these keys. If a storage key doesn't exist at query time, the frontend shows a placeholder. The client retries failed uploads via the sync queue.
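
The client_event_id dedup can be sketched as a lookup-before-insert wrapper. A minimal sketch; `dedupedCreate` is an illustrative name, and the in-memory `Map` stands in for the sync_idempotency_log table:

```typescript
// First call with a given client_event_id performs the insert; replays of the
// same sync batch return the original result instead of creating a duplicate.
function dedupedCreate<T>(
  syncLog: Map<string, T>,   // client_event_id -> result of the first call
  clientEventId: string,
  perform: () => T,          // the actual INSERT + audit steps
): T {
  const prior = syncLog.get(clientEventId);
  if (prior !== undefined) return prior; // replayed batch — no-op
  const result = perform();
  syncLog.set(clientEventId, result);
  return result;
}
```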

Output: { issue_report_id: UUID }

Hasura Action — resolveIssueReport

Allows dispatchers to manually resolve IssueReports that are not in the maintenance feedback loop (i.e., status = OPEN only). The VehicleInspectionCompleted event automatically resolves IssueReports with status = IN_PROGRESS — see event-contracts-operations.md §VehicleInspectionCompleted.

Type: Mutation · Auth: x-hasura-role ∈ { DISPATCHER, MANAGER } · Handler: NestJS webhook POST /api/actions/resolve-issue-report

Input:

| Field | Type | Required | Description |
|---|---|---|---|
| issue_report_id | UUID | Yes | Target IssueReport |
| resolution_notes | String | No | Dispatcher's resolution summary |

Guards:

  1. issue_report_id must exist and belong to x-hasura-tenant-id.
  2. issue_report.status must be OPEN. If IN_PROGRESS → return MAINTENANCE_IN_PROGRESS. If RESOLVED → return ALREADY_RESOLVED.

Transaction steps:

  1. AuditTrailService.captureAndRecord(tx, { entityType: 'issue_report', ... }) — acquires row lock, captures old_values.
  2. UPDATE issue_reports SET status = 'RESOLVED'.
  3. Create change_event (action = UPDATE, scope = GENERAL).

Output: { issue_report_id: UUID, status: 'RESOLVED' }

Errors:

| Code | Condition |
|---|---|
| ISSUE_REPORT_NOT_FOUND | issue_report_id doesn't exist or wrong tenant |
| MAINTENANCE_IN_PROGRESS | status = IN_PROGRESS — IssueReport has a pending VehicleInspection. It resolves automatically when the inspection completes. |
| ALREADY_RESOLVED | status = RESOLVED |

Non-Maintenance IssueReport Resolution (Phase 1): IssueReports with maintenance_urgency = NONE (e.g., cleanliness) do not trigger the maintenance feedback loop and have no automated path to resolution. The dispatcher resolves them via this action. Phase 1 acceptance: NONE-urgency reports may remain OPEN indefinitely if the dispatcher takes no action. The dispatch board should filter the feed by status to prevent stale accumulation.

Offline Photo Upload Protocol:

  1. The driver captures a photo in the Driver Hub. The client generates a UUID as the storage_key and stores the binary locally (IndexedDB).
  2. The client creates the IssueReport row locally with attachments: [{ storage_key: <uuid>, type: 'IMAGE', uploaded_at: null }].
  3. On sync: (a) the IssueReport row syncs via POST /api/sync/batch, (b) the photo binary uploads to Nhost Storage with the pre-generated storage_key as the file path.
  4. Upload ordering: IssueReport row syncs first (the record exists in the DB with a storage_key reference). Photo upload follows asynchronously. If the photo upload fails, the client retries with exponential backoff (max 3 attempts). uploaded_at is set to null until upload confirms, signaling the dispatch board to show a "photo uploading…" placeholder.
  5. Permanent failure: After 3 failed retries, the client marks the attachment as upload_failed = true locally and surfaces a notification to the driver: "Photo upload failed. Retake?" The IssueReport remains valid without the photo — attachments is optional.
  6. Orphaned storage keys: If a photo upload succeeds but the IssueReport row sync fails, the storage key exists in Nhost Storage with no referencing record. A daily GC cron (Phase 2) cleans storage keys not referenced by any attachments JSONB after 24h.
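
The retry behavior in step 4 can be sketched as a small client-side helper. This is an illustrative sketch only — `uploadWithBackoff` and `UploadFn` are hypothetical names, and the real client wires this into its sync cycle and IndexedDB state:

```typescript
// Hypothetical sketch of the client-side photo upload retry loop described above.
// `uploadFn` stands in for the Nhost Storage upload call; names are illustrative.
type UploadFn = (storageKey: string) => Promise<void>;

interface UploadResult {
  uploadedAt: string | null; // ISO timestamp on success, null until confirmed
  uploadFailed: boolean;     // true once the retry budget is exhausted
}

async function uploadWithBackoff(
  storageKey: string,
  uploadFn: UploadFn,
  maxAttempts = 3,
  baseDelayMs = 1000,
): Promise<UploadResult> {
  for (let attempt = 1; attempt <= maxAttempts; attempt++) {
    try {
      await uploadFn(storageKey);
      return { uploadedAt: new Date().toISOString(), uploadFailed: false };
    } catch {
      if (attempt === maxAttempts) break;
      // Exponential backoff between attempts: 1s, 2s, 4s, ...
      await new Promise((r) => setTimeout(r, baseDelayMs * 2 ** (attempt - 1)));
    }
  }
  // Permanent failure: caller marks upload_failed = true locally and
  // surfaces the "Photo upload failed. Retake?" notification.
  return { uploadedAt: null, uploadFailed: true };
}
```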

Attachment Constraints:

| Constraint | Value | Rationale |
| --- | --- | --- |
| Accepted formats | JPEG, PNG, HEIC (images); MP4 (video) | HEIC for iOS native camera. Server transcodes to JPEG for dispatch board display. |
| Max file size | 10 MB per attachment | Sufficient for high-res photos from field; prevents abuse. |
| Max attachments per IssueReport | 5 | Pre-trip inspection covers limited scope. |
| Per-tenant storage quota | Not enforced Phase 1 | Nhost Storage has plan-level quotas. Monitor via usage metrics. |

incidents

Logistical disruptions impacting a leg. The driver creates the Incident from the Driver Hub; the dispatcher manages resolution via the Dispatch Board. BREAKDOWN incidents do not directly trigger VehicleMaintenanceRequired — vehicle maintenance is the responsibility of IssueReport (see ADR-008). The Driver Hub prompts the driver to file an IssueReport alongside a BREAKDOWN incident.

| Column | Type | Constraints | Description |
| --- | --- | --- | --- |
| id | UUID | Primary Key | Unique incident identifier |
| tenant_id | UUID | Not Null | Soft FK to backoffice.operators |
| service_leg_id | UUID | Foreign Key (service_legs.id), Not Null | Impacted leg |
| reporter_crew_id | UUID | Nullable | Soft FK to backoffice.crew_members. The driver who reported the incident. NULL for system-created Incidents (e.g., auto-created DELAY Incidents from telemetry detection — see ServiceLeg state machine ACTIVE→DELAYED). |
| assigned_to | UUID | Nullable | Soft FK to backoffice.users. The dispatcher handling the incident. Set during takeOverIncident (Phase 1: atomic OPEN → IN_PROGRESS). |
| severity | VARCHAR | Not Null | LOW, MEDIUM, CRITICAL |
| type | VARCHAR | Not Null | DELAY, BREAKDOWN, PASSENGER_ISSUE |
| status | VARCHAR | Not Null, Default: OPEN | OPEN, ACKNOWLEDGED, IN_PROGRESS, RESOLVED. See state machine below. |
| description | TEXT | Not Null | Free-text description of the disruption. |
| resolution_notes | TEXT | Nullable | Set via resolveIncident. |
| geo_coordinates | JSONB | Nullable | GPS pin location of the incident. |
| occurred_at | TIMESTAMP | Default: now() | |
| resolved_at | TIMESTAMP | Nullable | Set when status → RESOLVED. |

Incident State Machine

| From | To | Actor | Side Effect |
| --- | --- | --- | --- |
| — | OPEN | Driver (via Driver Hub) | Emits IncidentCreated (payload includes severity and type). Communications consumer routes uniformly: CRITICAL (any type) → WhatsApp broadcast to downstream passengers; LOW/MEDIUM → dispatch board alert only. For telemetry-detected delays, the system auto-creates DELAY Incidents (see ServiceLeg state machine ACTIVE→DELAYED). |
| OPEN | ACKNOWLEDGED | Dispatcher (via Dispatch Board) | Sets assigned_to. |
| ACKNOWLEDGED | IN_PROGRESS | Dispatcher | — |
| IN_PROGRESS | RESOLVED | Dispatcher / System | Sets resolved_at. Emits IncidentResolved. |

Phase 1 UI shortcut: The dispatcher clicks "Take Over" and the system atomically transitions OPEN → ACKNOWLEDGED → IN_PROGRESS and sets assigned_to in a single action. This gives SLA data and multi-dispatcher safety without adding UI ceremony. The 4-state enum is forward-compatible; collapsing 4 → 3 later would be a breaking migration.

Offline behavior: The driver creates an Incident with status = OPEN. Subsequent transitions require server connectivity (dispatcher-driven). No offline state advancement beyond OPEN. Dedup key: (service_leg_id, type, occurred_at ± 5min) prevents duplicate incidents on sync.
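
The transition rules above can be expressed as a small lookup. This is an illustrative sketch (not the production implementation) showing both the one-step transitions and the Phase 1 "Take Over" shortcut path:

```typescript
// Sketch of the Incident status transitions from the table above.
// Names are illustrative, not the production implementation.
type IncidentStatus = 'OPEN' | 'ACKNOWLEDGED' | 'IN_PROGRESS' | 'RESOLVED';

const ALLOWED: Record<IncidentStatus, IncidentStatus[]> = {
  OPEN: ['ACKNOWLEDGED'],
  ACKNOWLEDGED: ['IN_PROGRESS'],
  IN_PROGRESS: ['RESOLVED'],
  RESOLVED: [], // terminal state
};

function canTransition(from: IncidentStatus, to: IncidentStatus): boolean {
  return ALLOWED[from].includes(to);
}

// Phase 1 "Take Over" shortcut: OPEN → ACKNOWLEDGED → IN_PROGRESS in one action,
// recorded as two audit entries (see takeOverIncident below).
function takeOverPath(from: IncidentStatus): IncidentStatus[] | null {
  return from === 'OPEN' ? ['ACKNOWLEDGED', 'IN_PROGRESS'] : null;
}
```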

Hasura Action — takeOverIncident

Phase 1 UI shortcut per ADR-016. Atomically transitions OPEN → IN_PROGRESS and sets assigned_to. Single click, one API call.

Input:

typescript
interface TakeOverIncidentInput {
  incident_id: UUID;
}

Handler route: POST /api/actions/take-over-incident

Guards:

  1. incident_id must exist and belong to x-hasura-tenant-id.
  2. incident.status must be OPEN. If ACKNOWLEDGED or IN_PROGRESS → return ALREADY_TAKEN with current assigned_to.
  3. Concurrency control: SELECT ... FOR UPDATE row lock on the incidents row (same pattern as SwapVehicle).
  4. System-created Incident guard: If reporter_crew_id IS NULL (system-created, e.g., auto-DELAY from telemetry), the handler checks whether the linked ServiceLeg is still DELAYED. If service_leg.status = DELAYED, warn the dispatcher: { code: 'SYSTEM_INCIDENT_AUTO_MANAGED', message: 'This incident may auto-resolve when the delay clears. Take over anyway?' }. The dispatcher confirms → proceed. The dispatcher declines → abort. This prevents a race where a dispatcher takes over a system Incident moments before ServiceLegDelayResolved auto-resolves it. Once taken over (assigned_to IS NOT NULL), auto-resolution skips this Incident (see ServiceLeg state machine DELAYED→ACTIVE: "Driver-reported DELAY Incidents managed by a dispatcher (status ≠ OPEN) are not auto-resolved").

Transaction steps:

  1. AuditTrailService.captureAndRecord(tx, { entityType: 'incident', ... }) — acquires row lock via SELECT ... FOR UPDATE, captures old_values.
  2. Validate status = OPEN.
  3. UPDATE incidents SET status = 'IN_PROGRESS', assigned_to = x-hasura-user-id.
  4. Record two audit entries via AuditTrailService.record(tx, ...): OPEN → ACKNOWLEDGED (captures assigned_to set) and ACKNOWLEDGED → IN_PROGRESS. Both entity_type = 'incident', entity_id = incident_id, action = UPDATE, scope = GENERAL. Identical created_at — forward compatibility for Phase 2 (separate Acknowledge/Start Working actions).

Output:

typescript
interface TakeOverIncidentOutput {
  incident_id: UUID;
  status: 'IN_PROGRESS';
  assigned_to: UUID;
}

Errors:

| Code | Condition |
| --- | --- |
| INCIDENT_NOT_FOUND | incident_id doesn't exist or wrong tenant |
| ALREADY_TAKEN | status ≠ OPEN (ACKNOWLEDGED or IN_PROGRESS). Response: { assigned_to, status } — UI shows "Already taken by [Dispatcher Name]." |
| ALREADY_RESOLVED | status = RESOLVED |

Concurrent dispatcher take-over: Two dispatchers click "Take Over" simultaneously: Dispatcher A acquires the lock, transitions OPEN → IN_PROGRESS, sets assigned_to = A, commits. Dispatcher B blocks until A commits, reads status = IN_PROGRESS. Guard fails → ALREADY_TAKEN with { assigned_to: A.name, status: 'IN_PROGRESS' }.

Hasura Action — resolveIncident

Input:

typescript
interface ResolveIncidentInput {
  incident_id: UUID;
  resolution_notes?: string | null;
}

Handler route: POST /api/actions/resolve-incident

Guards:

  1. incident_id must exist and belong to x-hasura-tenant-id.
  2. incident.status must be IN_PROGRESS.
  3. x-hasura-user-id must have DISPATCHER or MANAGER role. No assigned_to check — any dispatcher can resolve any in-progress incident (shift handoff without ceremony).

YAGNI decision: Dispatcher reassignment is not a Phase 1 feature. Opening resolve permissions to any DISPATCHER/MANAGER eliminates the shift-handoff problem. If ownership tracking matters later, add a reassignIncident action in Phase 2.

Transaction steps:

  1. AuditTrailService.captureAndRecord(tx, { entityType: 'incident', entityId: incident_id, action: 'UPDATE', scope: 'GENERAL' }) — acquires row lock, captures old_values.
  2. UPDATE incidents SET status = 'RESOLVED', resolution_notes, resolved_at = now().
  3. Hasura Event Trigger fires IncidentResolved (async, post-commit).

Output:

typescript
interface ResolveIncidentOutput {
  incident_id: UUID;
  status: 'RESOLVED';
  resolved_at: string; // ISO 8601
}

Errors:

| Code | Condition |
| --- | --- |
| INCIDENT_NOT_FOUND | incident_id doesn't exist or wrong tenant |
| INVALID_STATUS | status ≠ IN_PROGRESS |
| INSUFFICIENT_ROLE | User role is not DISPATCHER or MANAGER |

Offline Dedup — Driver Notification UX

Dedup algorithm (sync handler):

sql
SELECT id FROM operations.incidents
WHERE service_leg_id = :service_leg_id
  AND type = :type
  AND occurred_at BETWEEN :occurred_at - INTERVAL '5 minutes'
                      AND :occurred_at + INTERVAL '5 minutes';
  • Match found: Skip INSERT. Return the existing incident_id in the sync response synced[] array. Client replaces local record with server-side version (per "Server Wins" protocol).
  • No match: Proceed with INSERT.
  • Driver notification: On the next sync, if the client detects its locally-created Incident was dedup-discarded (the returned server_state contains an Incident with a different id but matching dedup key), the Driver Hub shows a toast: "Dispatch merged your incident report with an existing report." No driver action required.
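
The same ±5-minute match the SQL performs can be stated as a pure predicate. This is an illustrative sketch with hypothetical names, not the sync handler itself:

```typescript
// Sketch of the dedup key match from the SQL above, as a pure predicate.
interface IncidentKey {
  service_leg_id: string;
  type: string;
  occurred_at: Date;
}

const DEDUP_WINDOW_MS = 5 * 60 * 1000; // ±5 minutes

function isDuplicate(incoming: IncidentKey, existing: IncidentKey): boolean {
  return (
    incoming.service_leg_id === existing.service_leg_id &&
    incoming.type === existing.type &&
    Math.abs(incoming.occurred_at.getTime() - existing.occurred_at.getTime()) <=
      DEDUP_WINDOW_MS
  );
}
```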

Post-Trip Incident Reporting

A driver may report an Incident on a COMPLETED ServiceLeg within 72 hours of actual_end (e.g., a passenger-reported issue discovered after disembarkation, or a minor collision during dropoff).

Guard: service_leg.status IN (ACTIVE, DELAYED, COMPLETED) AND (if COMPLETED: actual_end + INTERVAL '72 hours' > now()).

  • SCHEDULED → rejected (leg hasn't started).
  • CANCELLED → rejected (leg abandoned).
  • COMPLETED beyond 72h → rejected with LEG_CLOSED error.

Post-trip incidents do NOT trigger ETA recalculation or ServiceLegDelayed (the leg has already completed). The IncidentCreated event still fires for dispatch board visibility and, if CRITICAL, Communications.
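
The guard condition can be sketched as a pure function mirroring the SQL above (illustrative names, not the production handler):

```typescript
// Sketch of the post-trip incident reporting guard.
type LegStatus = 'SCHEDULED' | 'ACTIVE' | 'DELAYED' | 'COMPLETED' | 'CANCELLED';

const POST_TRIP_WINDOW_MS = 72 * 60 * 60 * 1000; // 72 hours

function canReportIncident(
  status: LegStatus,
  actualEnd: Date | null,
  now: Date = new Date(),
): boolean {
  if (status === 'ACTIVE' || status === 'DELAYED') return true;
  if (status === 'COMPLETED' && actualEnd !== null) {
    // COMPLETED: only within 72h of actual_end, else LEG_CLOSED.
    return actualEnd.getTime() + POST_TRIP_WINDOW_MS > now.getTime();
  }
  return false; // SCHEDULED, CANCELLED, or COMPLETED beyond 72h
}
```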

| Index | Columns | Purpose |
| --- | --- | --- |
| idx_incidents_dedup | (service_leg_id, type, occurred_at) | Offline dedup query — matching incidents within ±5min window |
| idx_incidents_status_tenant | (tenant_id, status) WHERE status != 'RESOLVED' | Partial index for dispatch board "open incidents" subscription filter |
| idx_incidents_assigned_to | (assigned_to) WHERE assigned_to IS NOT NULL | "My incidents" per-dispatcher filtering |

incident_issue_reports

Join table bridging IssueReport ↔ Incident (many-to-many). The dispatcher manually links IssueReports to Incidents when a vehicle observation escalates into an operational disruption. An IssueReport does not automatically create an Incident — IssueReports are vehicle-centric observations, Incidents are leg-centric disruptions.

| Column | Type | Constraints | Description |
| --- | --- | --- | --- |
| id | UUID | Primary Key | |
| tenant_id | UUID | Not Null | Soft FK to backoffice.operators. Required for Hasura row-level permission filtering. |
| incident_id | UUID | Foreign Key (incidents.id), Not Null | |
| issue_report_id | UUID | Foreign Key (issue_reports.id), Not Null | |
| created_at | TIMESTAMPTZ | Default: now() | |

NOTE

UNIQUE constraint: UNIQUE(incident_id, issue_report_id). Prevents duplicate links.

Hasura Action — linkIssueReportToIncident

Type: Mutation. Auth: x-hasura-role ∈ { DISPATCHER, MANAGER }. Handler: NestJS webhook POST /api/actions/link-issue-report-to-incident

Input:

| Field | Type | Required | Description |
| --- | --- | --- | --- |
| incident_id | UUID | Yes | Target incident |
| issue_report_id | UUID | Yes | IssueReport to link |

Validation:

  1. Both incident_id and issue_report_id must exist and belong to the caller's tenant_id.
  2. No status guard on incident.status — the system permits linking to a RESOLVED incident for post-mortem and insurance documentation (see note below).

Processing:

  1. INSERT incident_issue_reports with tenant_id from JWT.
  2. On UNIQUE constraint violation (incident_id, issue_report_id), return success (idempotent).
  3. Create change_event on the incident_issue_reports row (action = CREATE, scope = GENERAL).

Output: { incident_issue_report_id: UUID, created: Boolean } — created = false if the link already existed.
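
The idempotent insert semantics in processing step 2 can be sketched with an in-memory stand-in for the UNIQUE constraint. Illustrative only — the real handler relies on the database's UNIQUE(incident_id, issue_report_id) violation, not application state:

```typescript
// Sketch of the idempotent link semantics: duplicate → success with created = false.
// A Set keyed by the pair stands in for the UNIQUE constraint.
interface LinkResult {
  created: boolean;
}

function linkIdempotent(
  existingLinks: Set<string>,
  incidentId: string,
  issueReportId: string,
): LinkResult {
  const key = `${incidentId}:${issueReportId}`;
  if (existingLinks.has(key)) return { created: false }; // UNIQUE violation path
  existingLinks.add(key);
  return { created: true };
}
```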

Errors:

| Code | Condition |
| --- | --- |
| INCIDENT_NOT_FOUND | incident_id doesn't exist or wrong tenant |
| ISSUE_REPORT_NOT_FOUND | issue_report_id doesn't exist or wrong tenant |

NOTE

Phase 2: unlinkIssueReportFromIncident action for correcting mistaken links (soft-delete with deleted_at).

NOTE

Linking to Resolved Incidents: Permitted. The linkIssueReportToIncident action does not guard on incident.status. Linking an IssueReport to a RESOLVED incident is valid for post-mortem and insurance documentation.

NOTE

Duplicate IssueReport Dedup: Operations does not enforce IssueReport dedup — multiple drivers may legitimately report the same defect independently. The Backoffice consumer owns dedup: when the VehicleMaintenanceRequired handler creates a VehicleInspection, it checks for an existing open inspection for the same vehicle_id. If found, the handler links the new IssueReport to the existing inspection via the Backoffice vehicle_inspection_issue_reports join table (no new inspection row). The feedback events (VehicleInspectionScheduled, VehicleInspectionCompleted) use fan-out delivery — one event per linked IssueReport — so each IssueReport receives its own status transition regardless of inspection dedup.

NOTE

Backward Transition Guards: The VehicleInspectionScheduled consumer checks issue_report.status = OPEN before transitioning to IN_PROGRESS. If status ≠ OPEN, the consumer logs a warning and skips. The VehicleInspectionCompleted consumer checks issue_report.status = IN_PROGRESS before transitioning to RESOLVED. If status ≠ IN_PROGRESS, the consumer logs a warning and skips. No backward transitions exist in the IssueReport lifecycle.

expense_receipts

Actual operational costs incurred during execution, supporting AI OCR. Carries receipt_image_key for the uploaded photo and ocr_data JSONB for extracted fields. Includes net_amount, vat_amount, and vat_rate for DATEV-compliant accounting. On dispatcher verification, emits ExpenseSubmitted → Commerce FinancialLedger for realized_expense aggregation.

| Column | Type | Constraints | Description |
| --- | --- | --- | --- |
| id | UUID | Primary Key | Unique receipt identifier |
| tenant_id | UUID | Not Null | Soft FK to backoffice.operators |
| crew_member_id | UUID | Not Null | Submitting crew member |
| service_leg_id | UUID | Foreign Key (service_legs.id), Not Null | Context of expense |
| category | VARCHAR | Not Null | FUEL, TOLL, PARKING, OTHER |
| receipt_image_key | TEXT | Not Null | Storage key (UUID / relative path) of the uploaded receipt photo in Nhost Storage. Frontend constructs the full URL from key + environment config — never store absolute URLs. |
| amount | DECIMAL | Not Null | Gross total |
| net_amount | DECIMAL | Not Null, Default: 0 | Net amount (excl. VAT). Set to 0 on creation, populated by OCR on PARSED, verified by dispatcher on VERIFIED. Explicit 0 instead of NULL — NULL would leave DATEV export fields ambiguous. |
| vat_amount | DECIMAL | Not Null, Default: 0 | VAT amount. Same lifecycle as net_amount. |
| vat_rate | DECIMAL | Not Null, Default: 0 | VAT rate (e.g., 0.19 for 19%). Same lifecycle as net_amount. |
| currency | VARCHAR | Not Null | |
| ocr_status | VARCHAR | Not Null | PENDING, PARSED, VERIFIED, REJECTED |
| ocr_data | JSONB | | Processed receipt JSON payload |
| notes | TEXT | | Optional driver-added context (e.g., "detour via A13 due to road closure") |
| replaces_receipt_id | UUID | Foreign Key (expense_receipts.id), Nullable | If this receipt replaces a previously REJECTED receipt, references the original. Provides traceability for re-submissions. |
| rejected_by | UUID | Nullable | Soft FK to backoffice.users. Who rejected the receipt. |
| rejected_at | TIMESTAMP | Nullable | Timestamp of rejection. |
| rejection_reason | TEXT | Nullable | Reason for rejection. |
| incurred_at | TIMESTAMP | | |

OCR Pipeline State Transitions

| From | To | Actor | Side Effect |
| --- | --- | --- | --- |
| — | PENDING | Driver (via Driver Hub) | Receipt uploaded with photo. On sync, Hasura Event Trigger on expense_receipts INSERT fires NestJS webhook → enqueues BullMQ job expense-ocr-parse. net_amount, vat_amount, vat_rate default to 0. |
| PENDING | PARSED | System (BullMQ OCR worker) | AI extracts { amount, currency, category, vendor, date } into ocr_data JSONB. Populates net_amount, vat_amount, vat_rate from OCR data. |
| PARSED | VERIFIED | Dispatcher (via Workspace UI) | Dispatcher confirms or corrects VAT fields. The system resolves any outstanding OCR discrepancies. Emits ExpenseSubmitted → Commerce FinancialLedger. |
| PARSED | REJECTED | Dispatcher (via Workspace UI) | REJECTED is a terminal state. Sets rejected_by, rejected_at, rejection_reason. Notifies driver to re-submit as a new ExpenseReceipt row (with replaces_receipt_id linking to the rejected original). The rejected row remains immutable (GoBD). No event to Commerce. |

ocr_data JSONB Schema

typescript
interface OcrData {
  raw_text: string;              // Full extracted text
  confidence_score: number;      // 0.0–1.0 overall confidence
  vendor: string | null;         // Detected vendor name
  date: string | null;           // ISO 8601 date from receipt
  line_items: OcrLineItem[];     // Itemized lines (if parseable)
  currency_detected: string;     // ISO 4217 currency from receipt
}

interface OcrLineItem {
  description: string;
  amount: number;
  vat_rate: number | null;
}

BullMQ Job: expense-ocr-parse

| Property | Value |
| --- | --- |
| Queue | ocr-processing |
| Job name | expense-ocr-parse |
| Input | { expense_receipt_id: UUID, receipt_image_key: string, tenant_id: UUID } |
| Trigger | Hasura Event Trigger on expense_receipts INSERT (filter: ocr_status = 'PENDING') |
| Handler | OcrProcessingWorker.processExpenseReceipt() |
| AI Service | POST /api/ai/ocr/receipt — input: image binary; output: OcrData JSON. Phase 1: Google Cloud Vision API. The AI service is a thin NestJS wrapper normalizing the response into OcrData. |
| On success | UPDATE expense_receipts SET ocr_data = :result, ocr_status = 'PARSED'. Populate net_amount/vat_amount/vat_rate only if confidence_score ≥ 0.7. |
| On failure | Retry 3× with exponential backoff (1s, 4s, 16s). After final failure: SET ocr_data = { error: :message, confidence_score: 0 }, ocr_status = 'PARSED' (dispatcher resolves manually). Do NOT leave in PENDING indefinitely. |
| Timeout | 30s per attempt |
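
The on-success confidence gate can be sketched as a pure function. Illustrative names only — the real worker applies this inside the UPDATE:

```typescript
// Sketch of the confidence gate applied on OCR success (threshold ≥ 0.7).
interface OcrVatFields {
  net_amount: number;
  vat_amount: number;
  vat_rate: number;
}

const CONFIDENCE_THRESHOLD = 0.7;

function vatFieldsFromOcr(
  confidenceScore: number,
  extracted: OcrVatFields,
): OcrVatFields {
  // Below threshold: leave the DATEV fields at their explicit-0 defaults so
  // the dispatcher enters them manually during verifyExpenseReceipt.
  if (confidenceScore < CONFIDENCE_THRESHOLD) {
    return { net_amount: 0, vat_amount: 0, vat_rate: 0 };
  }
  return extracted;
}
```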

Hasura Action: verifyExpenseReceipt

Dispatcher confirms or corrects OCR fields and transitions PARSED → VERIFIED.

Input:

typescript
interface VerifyExpenseReceiptInput {
  expense_receipt_id: UUID;
  net_amount: number;    // Dispatcher-confirmed (may differ from OCR)
  vat_amount: number;
  vat_rate: number;
}

Handler route: POST /api/actions/verify-expense-receipt

Guards:

  1. expense_receipt_id exists and belongs to x-hasura-tenant-id.
  2. ocr_status = PARSED.
  3. x-hasura-user-id has DISPATCHER or MANAGER role.

Transaction steps:

  1. AuditTrailService.captureAndRecord(tx, { entityType: 'expense_receipt', entityId: expense_receipt_id, action: 'UPDATE', scope: 'GOBD' }).
  2. UPDATE expense_receipts SET ocr_status = 'VERIFIED', net_amount, vat_amount, vat_rate.
  3. Hasura Event Trigger fires ExpenseSubmitted → Commerce FinancialLedger (async, post-commit).

Errors:

| Code | Condition |
| --- | --- |
| RECEIPT_NOT_FOUND | Missing or wrong tenant |
| INVALID_STATUS | ocr_status ≠ PARSED |

Hasura Action: rejectExpenseReceipt

Dispatcher rejects receipt. Terminal state — driver creates a new row with replaces_receipt_id to re-submit.

Input:

typescript
interface RejectExpenseReceiptInput {
  expense_receipt_id: UUID;
  rejection_reason: string; // Mandatory
}

Handler route: POST /api/actions/reject-expense-receipt

Guards:

  1. expense_receipt_id exists and belongs to x-hasura-tenant-id.
  2. ocr_status = PARSED.
  3. x-hasura-user-id has DISPATCHER or MANAGER role.
  4. rejection_reason is non-empty.

Transaction steps:

  1. AuditTrailService.captureAndRecord(tx, { entityType: 'expense_receipt', entityId: expense_receipt_id, action: 'UPDATE', scope: 'GOBD' }).
  2. UPDATE expense_receipts SET ocr_status = 'REJECTED', rejected_by = x-hasura-user-id, rejected_at = now(), rejection_reason.
  3. Notify driver via Communications (push notification): "Receipt for {category} rejected: {rejection_reason}".

Errors:

| Code | Condition |
| --- | --- |
| RECEIPT_NOT_FOUND | Missing or wrong tenant |
| INVALID_STATUS | ocr_status ≠ PARSED |
| REASON_REQUIRED | rejection_reason is empty |

ExpenseReceipt Edge States

| # | Edge State | Resolution |
| --- | --- | --- |
| E-1 | OCR fails (image too blurry) | OCR service sets ocr_status = PARSED with ocr_data.confidence_score < 0.7. net_amount/vat_amount/vat_rate remain at 0. Dispatcher sees "Low Confidence" badge → manual data entry via verifyExpenseReceipt. |
| E-2 | Photo upload fails mid-sync | Sync handler uploads image to Nhost Storage before INSERT. If upload fails, the system marks the mutation failed in the sync response → client retries on next sync cycle. |
| E-3 | Duplicate receipt submission | No automatic dedup — receipts are inherently unique documents. replaces_receipt_id provides traceability for intentional re-submissions after rejection. |
| E-4 | Receipt in foreign currency | Phase 1 stores original currency as-is. FinancialLedger consumer converts using operators.default_currency at daily ECB rate. Dedicated conversion fields on ExpenseReceipt deferred to Phase 2. |
| E-5 | Low confidence OCR (ambiguous data) | Threshold: confidence_score = 0.7. Below threshold: ocr_data populated but net_amount/vat_amount/vat_rate left at 0 → dispatcher must enter manually. |

| Index | Columns | Purpose |
| --- | --- | --- |
| idx_expense_receipts_ocr_status | (tenant_id, ocr_status) WHERE ocr_status IN ('PENDING', 'PARSED') | Partial index for dispatcher review queue ("pending verification") |
| idx_expense_receipts_leg | (service_leg_id) | FinancialLedger aggregation query |
| idx_expense_receipts_replaces | (replaces_receipt_id) WHERE replaces_receipt_id IS NOT NULL | Re-submission traceability chain |

onboard_sales

Point-of-Sale records for the driver's cash box and dynamic upsells. Carries sale_status for the business lifecycle (independent of payment_status), crew_member_id (the seller), and payment_method. On sync, emits OnboardSaleRecorded → Commerce FinancialLedger for realized_expense aggregation when payment_status = PAID and sale_status = ACTIVE.

| Column | Type | Constraints | Description |
| --- | --- | --- | --- |
| id | UUID | Primary Key | Unique sale identifier |
| tenant_id | UUID | Not Null | Soft FK to backoffice.operators |
| crew_member_id | UUID | Not Null | Soft FK to backoffice.crew_members. The driver/seller. |
| service_leg_id | UUID | Foreign Key (service_legs.id), Not Null | Context of sale |
| ticket_id | UUID | | Soft FK to commerce.tickets (if billed to seat) |
| checkout_session_id | UUID | | Soft FK to commerce.checkout_sessions (if pending payment link). Populated on sync by the NestJS handler after calling the Commerce API. |
| item_type | VARCHAR | Not Null | BEVERAGE, SNACK, TICKET |
| quantity | INT | Not Null, Default: 1 | Number of items sold. amount represents the total line amount (unit_price × quantity). |
| amount | DECIMAL | Not Null | Total line amount |
| currency | VARCHAR | Not Null | |
| sale_status | VARCHAR | Not Null, Default: ACTIVE | ACTIVE, VOIDED, REFUNDED. Independent of payment_status. |
| payment_status | VARCHAR | Not Null | PAID, PENDING_LINK, FAILED |
| payment_method | VARCHAR | Not Null, Default: CASH | CASH, PAYMENT_LINK, CARD_READER |
| voided_at | TIMESTAMP | Nullable | Timestamp of void action. |
| voided_by | UUID | Nullable | Soft FK to backoffice.users. Who voided the sale (driver or dispatcher). |
| void_reason | TEXT | Nullable | Reason for voiding (GoBD auditability). |
| refunded_at | TIMESTAMP | Nullable | Timestamp of refund action. |
| refunded_by | UUID | Nullable | Soft FK to backoffice.users. The actor who authorized the refund — may be a driver (owner-operator with MANAGER role, or DRIVER with can_refund_onboard_sale grant) or dispatcher. |
| refund_reason | TEXT | Nullable | Reason for refund (GoBD auditability). |
| sale_time | TIMESTAMP | Default: now() | |

Business Rules

IMPORTANT

VOID flow: Driver or dispatcher sets sale_status → VOIDED with voided_at, voided_by, void_reason. Creates a change_event with scope = GOBD. The original record is never deleted (GoBD immutability). Emits OnboardSaleVoided → Commerce FinancialLedger subtracts the amount from realized_expense. For cash sales, the driver returns cash to the passenger — the cash box total decreases accordingly.

IMPORTANT

REFUND flow: Role-gated, payment-method-scoped. Sets sale_status → REFUNDED with refunded_at, refunded_by, refund_reason. Creates a change_event with scope = GOBD. Emits OnboardSaleRefunded → Commerce FinancialLedger. Authorization rules:

| Payment Method | Who Can Refund | Rationale |
| --- | --- | --- |
| CASH | Any user with refund permission (MANAGER, DISPATCHER, or DRIVER with can_refund_onboard_sale grant + amount ≤ driver_cash_refund_limit) | Cash changes hands physically — the system is just recording the bookkeeping entry. Owner-operators (MANAGER role) naturally have full authority. |
| PAYMENT_LINK / CARD_READER | MANAGER or DISPATCHER only | Refund crosses into Commerce (Mollie refund API). The Driver Hub does not have Commerce access. |

Operator settings (backoffice.operator_settings): driver_cash_refund_enabled (boolean, default: true), driver_cash_refund_limit (DECIMAL, nullable = unlimited, default: 50.00). For CASH sales, the refund serves as a bookkeeping correction only — the operator handles physical cash return offline. Feature gap (Phase 2): A four-eyes refund approval workflow (dispatcher requests → second dispatcher approves) for operators with stricter compliance requirements.
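
The authorization matrix above can be sketched as a pure check. Illustrative sketch with hypothetical names — the real handler also consults driver_cash_refund_enabled and the operator's settings row:

```typescript
// Sketch of the refund authorization rules from the table above.
type Role = 'MANAGER' | 'DISPATCHER' | 'DRIVER';
type PaymentMethod = 'CASH' | 'PAYMENT_LINK' | 'CARD_READER';

interface RefundContext {
  role: Role;
  hasCashRefundGrant: boolean;    // can_refund_onboard_sale
  cashRefundLimit: number | null; // driver_cash_refund_limit; null = unlimited
}

function canRefund(ctx: RefundContext, method: PaymentMethod, amount: number): boolean {
  if (method !== 'CASH') {
    // Digital refunds cross into Commerce (Mollie) — dispatcher/manager only.
    return ctx.role === 'MANAGER' || ctx.role === 'DISPATCHER';
  }
  if (ctx.role === 'MANAGER' || ctx.role === 'DISPATCHER') return true;
  // DRIVER: requires the grant and must stay within the cash refund limit.
  return (
    ctx.hasCashRefundGrant &&
    (ctx.cashRefundLimit === null || amount <= ctx.cashRefundLimit)
  );
}
```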

NOTE

PENDING_LINK → Commerce Bridge:

| Step | Actor | Action |
| --- | --- | --- |
| 1 | Driver (Driver Hub) | Taps "Digital Payment Link" → app creates a local OnboardSale with payment_status = PENDING_LINK, payment_method = PAYMENT_LINK, checkout_session_id = NULL. |
| 2 | System (POST /api/sync/batch handler) | On sync, per-entity handler for onboard_sale detects payment_method = PAYMENT_LINK. Resolves passenger contact: primary path = ticket_id → commerce.tickets → passengers (email/phone); fallback = driver-entered phone number (transient input, not persisted on OnboardSale). Calls Commerce createOnboardCheckoutSession — see PRODUCT_mollie-integration.md §8.1. |
| 3 | Commerce | Creates CheckoutSession, creates Mollie payment (standard Payments API, returns checkout_url). Sends checkout_url to passenger via Communications (WhatsApp/SMS). Operations sync handler receives checkout_session_id in the response and UPDATEs the OnboardSale row. |
| 4 | Mollie → Commerce → Operations | Mollie webhook payment.paid → Commerce webhook handler checks metadata.payment_type = 'ONBOARD_SALE' → emits OnboardPaymentCompleted → Operations handler updates payment_status → PAID → emits OnboardSaleRecorded. |

Idempotency key: checkout_session_id. For cash sales (payment_method = CASH), step 1 sets payment_status = PAID directly and OnboardSaleRecorded emits on sync INSERT.

WARNING

Crash-gap recovery: If the sync handler crashes between the Commerce API call (step 2) and the checkout_session_id UPDATE (step 3), a retry must not create a duplicate CheckoutSession. The Commerce createOnboardCheckoutSession API enforces idempotency on onboard_sale_id — if a non-expired session already exists for the same onboard_sale_id, it returns the existing session instead of creating a new one. The sync handler then completes the interrupted UPDATE. The Commerce service layer must implement this guarantee, not the Mollie API level.

NOTE

Offline rule: The "upsell with payment link" button is disabled in the offline UI, with a toast: "Payment links require internet." Cash sales (payment_method = CASH) are always available offline.

NOTE

Edge state — Link expiry: OnboardSale-linked CheckoutSessions use the operator-configured TTL (operator_settings.onboard_payment_link_ttl_minutes, default: 60, min: 15, max: 180). The checkout_abandoned_sweep trigger uses the expires_at column on each checkout_session row (set at creation time from the configured TTL) — no type-based discrimination needed. When the sweep expires the session, Commerce emits OnboardPaymentExpired (see event-catalog.md). The Operations handler sets payment_status → FAILED and notifies the driver via push: "Payment link for {item_type} expired." Driver decides: (a) void the sale, (b) regenerate a new link (new CheckoutSession), or (c) accept cash (new OnboardSale with payment_method = CASH).

NOTE

Edge state — Double payment guard: The OnboardPaymentCompleted handler checks payment_status = PENDING_LINK before transitioning to PAID. If already PAID, skip silently (idempotent). Mollie prevents actual double-charge at the payment level.

NOTE

Edge state — VOID after PENDING_LINK: When driver voids an OnboardSale with payment_status = PENDING_LINK and checkout_session_id IS NOT NULL: (1) set sale_status = VOIDED; (2) call Commerce to cancel the CheckoutSession (→ EXPIRED) and cancel the Mollie payment; (3) if the passenger pays between void and cancellation (race): OnboardPaymentCompleted handler sees sale_status = VOIDED → skips PAID transition, emits OnboardPaymentOrphaned to dispatch board (always, regardless of auto-refund setting); Commerce initiates the refund for the orphaned payment if operator_settings.auto_refund_orphaned_onboard_payment = true (default: true). If the operator disables auto-refund, the dispatcher reviews the orphaned payment in the dispatch board and triggers manual refund.

NOTE

OnboardPaymentCompleted handler (Operations consumer):

Payload: { checkout_session_id, onboard_sale_id, amount_paid, currency, paid_at }

  1. SELECT onboard_sales WHERE checkout_session_id = payload value
  2. Guard: payment_status != PENDING_LINK → skip (idempotent, already processed)
  3. Guard: sale_status = VOIDED → skip PAID transition, emit OnboardPaymentOrphaned (see edge state above). If operator_settings.auto_refund_orphaned_onboard_payment = true, Commerce initiates refund.
  4. UPDATE payment_status → PAID
  5. Create change_event (scope = GOBD, action = UPDATE)
  6. Emit OnboardSaleRecorded → Commerce FinancialLedger

Cash Box Reconciliation

Cash box reconciliation (Journey 3: "the system automatically reconciles the cash box against Klaus's POS sales") is a derived computation, not a domain entity:

sql
SELECT crew_member_id, SUM(amount) AS total_cash_sales
FROM operations.onboard_sales
WHERE payment_method = 'CASH'
  AND sale_status = 'ACTIVE'
  AND service_leg_id IN (SELECT id FROM operations.service_legs WHERE tour_offering_id = :offering_id)
GROUP BY crew_member_id;

Expose as a Hasura computed field on ServiceLeg or a read-only API endpoint. The sale_status = 'ACTIVE' filter is intentional: VOIDED sales mean the driver already returned cash to the passenger (reducing the physical cash box), and REFUNDED sales are bookkeeping corrections that do not affect the current cash total. No new table — a cash_box_reconciliations table would introduce synchronized derived state that can drift from the source of truth.
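
For illustration, the aggregation the SQL performs can be restated in TypeScript (hypothetical names; the production path is the SQL query or computed field above):

```typescript
// TypeScript restatement of the reconciliation query: only ACTIVE cash sales
// count toward the physical cash box, grouped per crew member.
interface OnboardSaleRow {
  crew_member_id: string;
  amount: number;
  payment_method: string;
  sale_status: string;
}

function cashBoxTotals(sales: OnboardSaleRow[]): Map<string, number> {
  const totals = new Map<string, number>();
  for (const s of sales) {
    // VOIDED cash already returned to the passenger; REFUNDED is bookkeeping only.
    if (s.payment_method !== 'CASH' || s.sale_status !== 'ACTIVE') continue;
    totals.set(s.crew_member_id, (totals.get(s.crew_member_id) ?? 0) + s.amount);
  }
  return totals;
}
```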

boarding_events

The actual scan or check-in occurrence when a passenger boards.

| Column | Type | Constraints | Description |
| --- | --- | --- | --- |
| id | UUID | Primary Key | Unique event identifier |
| tenant_id | UUID | Not Null | Soft FK to backoffice.operators |
| service_leg_id | UUID | Foreign Key (service_legs.id), Not Null | Where boarding actually happened |
| ticket_id | UUID | Not Null | Soft FK to commerce.tickets |
| expected_service_leg_id | UUID | Nullable | Soft FK to service_legs. Stores the ticket's original service_leg_id when check_in_status = MANUAL_OVERRIDE (wrong-stop boarding). NULL for normal boarding. |
| checked_in_by | UUID | | Soft FK to backoffice.users (the driver/staff) |
| check_in_status | VARCHAR | Not Null | SUCCESS, INVALID, ALREADY_SCANNED, MANUAL_OVERRIDE |
| qr_hash | TEXT | Not Null | The scanned payload (for verification) |
| luggage_count | INT | Default: 0 | Number of checked bags |
| luggage_tags | JSONB | | [Phase 2] Array of tag strings/identifiers. Depends on Phase 1 scope decision for luggage tagging: if Phase 1 = numeric count only, this column remains empty. |
| checked_in_at | TIMESTAMP | Default: now() | |

Offline Validation Rules (Driver App)

QR Code Model: Each Passenger receives their own unique QR code (embedded in their Ticket). A group booking with 4 passengers produces 4 QR codes. The driver scans each QR individually — no batch scan. The manifest groups passengers by booking_id, so the driver sees "Booking #1234: 3/4 scanned" in real time.

When the driver scans a QR code, the app executes these checks locally against the cached manifest:

| # | Check | Pass | Fail → check_in_status |
| --- | --- | --- | --- |
| 1 | QR hash exists in cached ticket manifest | Continue | INVALID — "Ticket not found" |
| 2 | Ticket.status = ACTIVE (at cache time) | Continue | INVALID — "Ticket not active" |
| 3 | Ticket belongs to the current ServiceLeg (matching service_leg_id) | Continue to Check 4 | Prompt driver: "Passenger has a booking for Stop B. Board here anyway?" → If confirmed: MANUAL_OVERRIDE. If rejected: INVALID — "Wrong boarding stop". |
| 4 | No existing BoardingEvent with same qr_hash + check_in_status ∈ {SUCCESS, MANUAL_OVERRIDE} for this leg | Continue | ALREADY_SCANNED — "Already boarded" |
| 5 | All checks pass | SUCCESS | — |
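
Checks 1–5 can be sketched as a single pure function over the cached manifest. Illustrative sketch — the Check-3 driver prompt is modeled as a pre-confirmed flag, and names are hypothetical:

```typescript
// Sketch of the local scan validation (Checks 1–4 plus the SUCCESS fallthrough).
type CheckInStatus = 'SUCCESS' | 'INVALID' | 'ALREADY_SCANNED' | 'MANUAL_OVERRIDE';

interface CachedTicket {
  qr_hash: string;
  status: string;        // Ticket.status at cache time
  service_leg_id: string;
}

function validateScan(
  qrHash: string,
  currentLegId: string,
  manifest: CachedTicket[],
  scannedHashes: Set<string>,  // SUCCESS/MANUAL_OVERRIDE scans for this leg
  wrongStopConfirmed: boolean, // driver answered the Check-3 prompt
): CheckInStatus {
  const ticket = manifest.find((t) => t.qr_hash === qrHash);
  if (!ticket) return 'INVALID';                            // Check 1
  if (ticket.status !== 'ACTIVE') return 'INVALID';         // Check 2
  const wrongStop = ticket.service_leg_id !== currentLegId; // Check 3
  if (wrongStop && !wrongStopConfirmed) return 'INVALID';
  if (scannedHashes.has(qrHash)) return 'ALREADY_SCANNED';  // Check 4
  return wrongStop ? 'MANUAL_OVERRIDE' : 'SUCCESS';         // Check 5
}
```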

Group boarding enforcement: The driver must scan every passenger's QR code individually. After the last passenger of a booking boards, the manifest row for that booking turns green ("4/4 ✓"). Passengers without a scan remain amber. This enables accurate BookingNoShow detection — the system evaluates no-show per Passenger (per Ticket), not per Booking.

MANUAL_OVERRIDE status: When the driver boards a passenger at the wrong stop (Check 3 override), the BoardingEvent records check_in_status = MANUAL_OVERRIDE, the ticket's service_leg_id in expected_service_leg_id (expected stop), and the actual service_leg_id (where boarded). This preserves the audit trail without blocking the operational reality.

Offline constraint: Checks 1–4 execute against the IndexedDB/RxDB manifest snapshot. The system syncs the manifest at trip start and whenever connectivity becomes available. If the manifest grows stale (e.g., Commerce cancelled a ticket after sync), the conflict resolves on reconnect via "Server Wins (Delta Driven)" — the server-side boarding_events handler re-validates and may retroactively mark a SUCCESS as INVALID, emitting a BoardingConflictDetected event.

Real-time boarding manifest: The dispatch board shows live boarding status per leg via a Hasura subscription on boarding_events filtered by service_leg_id and check_in_status ∈ {SUCCESS, MANUAL_OVERRIDE}, with an aggregated count (e.g., "12/49 boarded").

Offline Manifest Pre-Sync Specification

Endpoint: GET /api/manifest/:tour_departure_id

Auth: Bearer JWT with x-hasura-role = DRIVER and verified LegAssignment for the departure.

Response schema:

```typescript
interface OfflineManifest {
  tour_departure_id: UUID;
  generated_at: string; // ISO 8601 — manifest freshness timestamp
  legs: ManifestLeg[];
}

interface ManifestLeg {
  service_leg_id: UUID;
  leg_type: string;
  sequence_order: number;
  scheduled_start: string;
  scheduled_end: string;
  status: string;
  boarding_point_name: string | null; // Denormalized from Backoffice BoardingPoint
  passengers: ManifestPassenger[];
}

interface ManifestPassenger {
  ticket_id: UUID;
  booking_id: UUID;
  booking_status: string;
  passenger_name: string;
  seat_identifier: string | null;
  qr_hash: string;
  boarding_point_name: string | null;
  special_requests: string | null;
}
```

Cache strategy:

  • Driver Hub fetches manifest on app launch and on each sync cycle.
  • Stored in IndexedDB keyed by tour_departure_id.
  • Manifest is valid for the duration of the trip (no TTL — the driver operates on this data until the next sync refreshes it).
  • Upper bound: ~500 records per departure (50 passengers × 10 legs). Well within IndexedDB capacity.
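The "no TTL" rule means a cached manifest is only ever replaced by a strictly fresher one, using generated_at. A minimal sketch of that refresh decision (names and shapes are illustrative, not the Driver Hub's actual cache layer):

```typescript
interface CachedManifest {
  tour_departure_id: string;
  generated_at: string; // ISO 8601, server-generated
}

// Keep the cached manifest until a sync delivers a strictly newer one.
function pickFresher(
  cached: CachedManifest | null,
  incoming: CachedManifest
): CachedManifest {
  if (!cached) return incoming;
  // ISO 8601 timestamps in the same format/offset compare correctly as strings.
  return incoming.generated_at > cached.generated_at ? incoming : cached;
}
```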

Server-Side Boarding Sync Handler

Per-entity mutation handler within the POST /api/sync/batch processor (offline-sync-protocol.md).

Per-record logic:

```
FOR EACH synced BoardingEvent:
  1. Dedup check: SELECT FROM boarding_events
     WHERE ticket_id = :ticket_id AND service_leg_id = :service_leg_id
     AND check_in_status IN ('SUCCESS', 'MANUAL_OVERRIDE')
     → Match found: skip (idempotent re-sync)
  2. Cross-schema validation:
     a. Query commerce.tickets WHERE id = :ticket_id
     b. If ticket.status = 'VOID' → override check_in_status = 'INVALID',
        reason = 'TICKET_VOIDED_POST_SYNC'
     c. If booking.status IN ('CANCELLED', 'REFUNDED') → override
        check_in_status = 'INVALID',
        reason = 'BOOKING_CANCELLED_POST_SYNC'
     d. Cross-leg duplicate QR check:
        SELECT service_leg_id, checked_in_at FROM boarding_events
        WHERE qr_hash = :qr_hash
          AND service_leg_id != :service_leg_id
          AND check_in_status IN ('SUCCESS', 'MANUAL_OVERRIDE')
        → Match found: do NOT override check_in_status (both boardings
          remain valid records). Emit BoardingConflictDetected with
          reason = 'DUPLICATE_QR_DIFFERENT_LEG' and payload including
          both service_leg_ids. The dispatcher investigates — the
          passenger may have shared their QR screenshot.
  3. INSERT boarding_event with (potentially overridden) check_in_status
  4. If check_in_status was overridden OR duplicate detected
     → emit BoardingConflictDetected event
  5. Wrap in AuditTrailService.record() with scope = GENERAL
```

Real-Time Boarding Subscription (Dispatch Board)

```graphql
subscription BoardingProgress($service_leg_id: uuid!) {
  boarding_events_aggregate(
    where: {
      service_leg_id: { _eq: $service_leg_id }
      check_in_status: { _in: ["SUCCESS", "MANUAL_OVERRIDE"] }
    }
  ) {
    aggregate {
      count
    }
  }
}
```

The dispatch board combines this count with the leg's max_capacity (from Commerce TourOffering) to display "12/49 boarded."

Luggage Tagging — Phase 1 Scope

Phase 1: Numeric luggage_count only. The luggage_tags JSONB column remains in the schema but stores simple string identifiers (e.g., ["A-001", "A-002"]) — a sequential tag number per departure, generated locally on the Driver Hub. No NFC, no print, no tracked-tag system. Phase 2: digital luggage tag system with unique barcodes and print integration.
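A possible shape for the Phase-1 local tag generator — sequential "A-001"-style identifiers per departure, generated on the Driver Hub. The function name, prefix parameter, and format are assumptions for illustration:

```typescript
// Issue the next sequential luggage tag for a departure, given the tags
// already stored in luggage_tags for that departure.
function nextLuggageTag(existingTags: string[], prefix: string = "A"): string {
  let max = 0;
  for (const tag of existingTags) {
    // Expect tags of the form "<prefix>-<number>", e.g. "A-002".
    const m = /^(\w+)-(\d+)$/.exec(tag);
    if (m && m[1] === prefix) max = Math.max(max, parseInt(m[2], 10));
  }
  return `${prefix}-${String(max + 1).padStart(3, "0")}`;
}
```

Generating from the locally cached tag list keeps the scheme offline-safe within one device; cross-device uniqueness would need a device-scoped prefix, which is out of Phase 1 scope.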

BoardingEvent Edge States

| # | Edge State | Resolution |
|---|------------|------------|
| E-1 | Ticket cancelled after manifest sync | Server-side sync handler queries commerce.tickets WHERE id = :ticket_id AND status = 'VOID'. If void → override check_in_status = 'INVALID', emit BoardingConflictDetected. |
| E-2 | QR code screenshot sharing (DUPLICATE_QR_DIFFERENT_LEG) | Offline: Rule 4 catches same-leg duplicates → ALREADY_SCANNED. Cross-leg duplicates remain invisible offline (each driver has a leg-scoped manifest). On sync: Server detects the same qr_hash used at two different service_leg_id values with check_in_status IN (SUCCESS, MANUAL_OVERRIDE). Both boarding records remain valid (the handler does not override either — undoing a real boarding causes data loss). The handler emits BoardingConflictDetected with reason = 'DUPLICATE_QR_DIFFERENT_LEG', payload: { qr_hash, ticket_id, conflicting_legs: [{ service_leg_id, checked_in_at, checked_in_by }] }. Dispatcher action: Dispatch board shows an alert. Dispatcher investigates: (a) passenger shared QR → void the ticket for the unauthorized person, reboard the legitimate passenger with a new ticket; (b) legitimate re-boarding across legs (e.g., passenger moved stops) → dismiss alert. No automated ticket voiding — human judgment required. |
| E-3 | Boarding after ServiceLeg COMPLETED | Offline validation allows boarding on ACTIVE or DELAYED legs only. If leg is COMPLETED, driver receives "Leg already completed" toast. No override in Phase 1. |
| E-4 | Group booking with partial no-show | BookingNoShow trigger evaluates per-passenger, not per-booking. A booking with 4 passengers where 3 boarded and 1 didn't → no-show applies to the individual passenger only. Booking status remains COMPLETED. |

| Index | Columns | Purpose |
|-------|---------|---------|
| idx_boarding_events_leg_status | (service_leg_id, check_in_status) | Real-time boarding count subscription + dedup check |
| idx_boarding_events_ticket | (ticket_id) | BookingNoShow cross-schema join, dedup on sync |
| idx_boarding_events_qr | (qr_hash, service_leg_id) | Fast QR duplicate detection on sync |

change_events

Local audit trail for entities within the Operations schema. Uses a polymorphic entity reference (entity_type + entity_id) — see ADR-019. The server generates change_events exclusively — the Driver Hub client sends raw mutations via the sync API (POST /api/sync/batch), and the server wraps each applied mutation in a change_event with a server-generated timestamp and the originating client_event_id for traceability. The client never generates change_events directly — this preserves the trust boundary for GoBD immutability. See offline-sync-protocol.md. The shared AuditTrailService (NestJS) writes all entries.

| Column | Type | Constraints | Description |
|--------|------|-------------|-------------|
| id | UUID | Primary Key | Unique event |
| tenant_id | UUID | Not Null | Soft FK to backoffice.operators |
| user_id | UUID | Nullable | Soft FK to auth.users(id). The actor — null for system/automation. |
| entity_type | VARCHAR | Not Null | Target entity type: service_leg, leg_assignment, expense_receipt, onboard_sale, incident, issue_report, crew_duty_log, boarding_event |
| entity_id | UUID | Not Null | ID of the mutated entity. Soft FK — entity table varies by entity_type. |
| action | VARCHAR | Not Null | INSERT, UPDATE, DELETE |
| scope | VARCHAR | Not Null, Default: GENERAL | GOBD, COMPLIANCE, DSGVO, CONFIG, GENERAL — canonical enum per ADR-019. |
| correlation_id | UUID | Nullable | Groups related change_events across bounded contexts (e.g., vehicle swap writes to Operations + Commerce). |
| client_event_id | UUID | Nullable | References the client-side mutation ID for traceability. Not UNIQUE — a single client action can produce multiple change_events across different entities. The sync_idempotency_log handles dedup (see offline-sync-protocol.md). |
| device_id | VARCHAR | Nullable | Originating device identifier (null for server-originated changes). |
| sync_batch_id | UUID | Nullable | Groups records from a single offline sync session. |
| old_values | JSONB | | Full row snapshot before mutation. Captured via SELECT ... FOR UPDATE in the AuditTrailService. Null for INSERT. |
| new_values | JSONB | | State after mutation. Null for DELETE. |
| created_at | TIMESTAMPTZ | Default: now() | |

NOTE

Index: (tenant_id, entity_type, entity_id) — covers all audit trail queries. Additional partial index: (correlation_id) WHERE correlation_id IS NOT NULL.

NOTE

Scope mapping for this schema: GOBD for expense_receipt, onboard_sale. COMPLIANCE for crew_duty_log (EU-561). GENERAL for everything else.
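The scope mapping above is static per entity type, so the AuditTrailService caller could resolve it with a simple lookup. A sketch — the function name is illustrative:

```typescript
type AuditScope = "GOBD" | "COMPLIANCE" | "DSGVO" | "CONFIG" | "GENERAL";

// Resolve the change_events scope for an Operations entity type,
// per the mapping stated in the note above.
function auditScopeFor(entityType: string): AuditScope {
  switch (entityType) {
    case "expense_receipt":
    case "onboard_sale":
      return "GOBD";       // financially relevant records (GoBD retention)
    case "crew_duty_log":
      return "COMPLIANCE"; // EU-561 driving-time rules
    default:
      return "GENERAL";    // everything else in this schema
  }
}
```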
