# Sinks
Sinks deliver events from the pipeline to your applications. Each sink watches for events that match its configured patterns and delivers them using a specific transport (HTTP push, pull, streaming, or desktop notifications).
If you want to get data out of the pipeline, you configure a sink. If you want to get data in, see Sources.
For detailed information on common configuration options, human-readable intervals, and environment variable expansion, see the Configuration Guide.
## Getting Started
Add a sink to the `sink` section of your `config.yaml`. Each sink needs a unique name (the YAML key) and a `type`. If the name matches the type, you can omit `type`.
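For instance, a sink named `webhook` can drop the explicit `type` field entirely (the URL is a placeholder):

```yaml
sink:
  # Name matches the type, so `type: webhook` is implied:
  webhook:
    url: "https://api.example.com/events"
```

The full form, with an explicit `type`, looks like this: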
```yaml
sink:
  my_webhook:
    type: webhook
    url: "https://api.example.com/events"
```

## Delivery Models
| Sink | Type | How it works |
|---|---|---|
| Webhook | `webhook` | Pushes each event to a URL via HTTP POST. Retries on failure. |
| Command | `command` | Executes a CLI command for each event. Supports sequential processing. |
| HTTP Pull | `http_pull` | Your app polls for batches of events and confirms receipt. |
| SSE | `sse` | Streams events in real-time over a persistent HTTP connection. |
| Win11 Toast | `win11toast` | Shows a Windows 11 desktop notification per event. For debugging only. |
## Event Matching
Every sink has a `match` parameter that controls which event types it receives. The matching supports three patterns:

- `"*"`: matches all events (the default).
- `"prefix.*"`: matches any event type starting with `prefix.` (e.g. `gmail.*` matches `gmail.message_received`).
- `"exact.type"`: matches only that exact event type.
You can provide a single pattern or a list:
```yaml
sink:
  alerts:
    type: webhook
    url: "https://example.com/alerts"
    match:
      - "gmail.message_received"
      - "fio.transaction.*"
```

## Event Envelope
All sinks deliver events using the same JSON structure:
```json
{
  "id": 42,
  "event_id": "evt_12345",
  "event_type": "gmail.message_received",
  "entity_id": "msg_99",
  "created_at": "2024-03-13T23:52:00+00:00",
  "data": {
    "subject": "Hello",
    "from": "alice@example.com"
  },
  "source": {
    "id": 1,
    "name": "gmail_primary"
  },
  "meta": {}
}
```

| Field | Description |
|---|---|
| `id` | Internal database ID of the event. |
| `event_id` | Unique identifier assigned by the source. |
| `event_type` | Dot-separated event type string. |
| `entity_id` | Identifier of the object the event is about. |
| `created_at` | ISO 8601 timestamp of when the event was stored in the pipeline. |
| `data` | Source-specific payload (see individual source docs). |
| `source` | Information about the source that produced the event (`id`, `name`). |
| `meta` | Additional metadata (usually empty). |
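Since every sink uses the same envelope, a consumer can parse it uniformly. A minimal sketch of a handler that extracts the common fields (function name and summary format are illustrative, not part of any Inboxclaw API):

```python
import json

def handle_envelope(raw: bytes) -> str:
    """Parse one delivered event envelope and summarize it."""
    event = json.loads(raw)
    event_type = event["event_type"]       # e.g. "gmail.message_received"
    entity_id = event["entity_id"]         # object the event is about
    payload = event["data"]                # source-specific payload
    source_name = event["source"]["name"]  # producing source, e.g. "gmail_primary"
    return f"{source_name}: {event_type} for {entity_id} ({payload})"
```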
## Coalescing
Inboxclaw features a centralized In-Flight Coalescing system. Multiple rapid events with the same `event_type` and `entity_id` can be merged at the source level before they are even stored in the main event table.
This means sinks receive a "pre-optimized" event stream. For example, if a file in Google Drive is saved 10 times in 30 seconds, a "debounce" rule can ensure your sinks only receive a single `file_updated` event after the user stops typing.
Coalescing rules are configured at the Source level. Sinks no longer need individual coalescing configuration.
For more details on how to configure coalescing, see the Sources documentation.
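The debounce idea from the Drive example can be illustrated in isolation. This is a simplified sketch, not the pipeline's implementation; the numeric `ts` field is an assumption for the demo and is not part of the real envelope:

```python
def debounce(events, window):
    """Collapse bursts of events sharing (event_type, entity_id).

    Keeps only the final event of each burst: an event survives when the
    next event with the same key is at least `window` seconds later, or
    when there is no next event. `events` are assumed sorted by `ts`.
    """
    by_key = {}
    for ev in events:
        by_key.setdefault((ev["event_type"], ev["entity_id"]), []).append(ev)
    kept = []
    for burst in by_key.values():
        for cur, nxt in zip(burst, burst[1:] + [None]):
            if nxt is None or nxt["ts"] - cur["ts"] >= window:
                kept.append(cur)
    return kept
```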
## TTL (Time-To-Live)
The Webhook and HTTP Pull sinks support TTL. When enabled, events older than their TTL are skipped during delivery. This prevents stale events from being delivered after a long downtime.
- `ttl_enabled`: Whether TTL filtering is active (default: `true` for webhook and http_pull).
- `default_ttl`: Fallback TTL for events without a specific rule (default: `"1h"`).
- `event_ttl`: Per-type TTL overrides using the same matching patterns (`"exact.type"`, `"prefix.*"`, `"*"`).
```yaml
sink:
  my_webhook:
    type: webhook
    url: "https://example.com/events"
    ttl_enabled: true
    default_ttl: "2h"
    event_ttl:
      "critical.*": "7d"
      "stats.update": "15m"
```

TTL is resolved in order: exact match → longest prefix match → `default_ttl`.
