
Webhook Audit Trail

Webhook Audit Trail is a webhook-triggered, 14-node workflow that ingests signed events and writes them to a tamper-evident Postgres audit log. Node types used: stickyNote, postgres, respondToWebhook, httpRequest.

Category: Data & Sheets · Trigger: Webhook · Nodes: 14 · Complexity: ★★★★☆ · Key nodes: Postgres, HTTP Request

The workflow JSON

Copy or download the full n8n JSON below, paste it into a new n8n workflow, add your credentials, and activate it. Full import guide →

Download .json
{
  "name": "Webhook Audit Trail",
  "nodes": [
    {
      "parameters": {
        "content": "## Webhook Audit Trail\n\nGeneric ingest endpoint that any system can POST signed events to. Verifies the HMAC + replay-window, rate-limits, dedupes, persists to a Postgres `audit_log` table with an atomic hash chain across rows, alerts Slack on signature-fail (security event) and rate-limit-hit (DoS event).\n\n**Production patterns wired:**\n- HMAC + replay-window verify (opt-in)\n- Rate limit + per-IP cap\n- Idempotency on `x-request-id` or `sha256(rawBody)`\n- Atomic hash chain in a single SQL statement: a transaction-scoped `pg_advisory_xact_lock` serializes all inserts on the same chain key, the latest row is read with `FOR UPDATE` as a belt-and-suspenders defense, and `digest()` from `pgcrypto` computes the new `row_hash` inside the same transaction. Survives n8n restarts (state is in DB) and is race-safe across concurrent executions.\n- Error branch with structured fallback\n- Slack alert on security or capacity events (not on every successful write)\n\nSee `README.md` for setup, env vars, and extension recipes.",
        "height": 360,
        "width": 400,
        "color": 6
      },
      "id": "note-intro",
      "name": "Sticky Note - Intro",
      "type": "n8n-nodes-base.stickyNote",
      "typeVersion": 1,
      "position": [
        -200,
        -100
      ]
    },
    {
      "parameters": {
        "content": "### >> SET ME <<\n\n1. Add a Postgres credential pointing at the audit-log database. Wire it into the `Postgres Insert` node.\n2. Run the DDL from the README (idempotent: creates `audit_log` table + `pgcrypto` extension required for `digest()`).\n3. Set `AUDIT_SIGNING_SECRET` to a 32+ char random string. Configure your client to sign with `x-audit-signature: <hex hmac-sha256(timestamp + '.' + rawBody)>` + `x-audit-timestamp: <unix-seconds>` headers.\n4. Set `WEBHOOK_INTEGRITY_CHECK_ENABLED=1` for production.\n5. Set `RATE_LIMIT_ENABLED=1` and `IDEMPOTENCY_ENABLED=1`.\n6. Optional: `AUDIT_REPLAY_WINDOW_S=300` (default), `AUDIT_RATE_LIMIT_PER_IP=120` (default 60), `MAX_BODY_BYTES=1048576` (1MB default).\n7. Set `SLACK_SECURITY_WEBHOOK` for security-event alerts.\n8. Self-hosted n8n: set `NODE_FUNCTION_ALLOW_BUILTIN=crypto`.",
        "height": 360,
        "width": 380,
        "color": 5
      },
      "id": "note-setup",
      "name": "Sticky Note - Setup",
      "type": "n8n-nodes-base.stickyNote",
      "typeVersion": 1,
      "position": [
        -200,
        280
      ]
    },
    {
      "parameters": {
        "content": "## Production Patterns\n\nFour patterns wired. HMAC verify is the most important, this is an ingest endpoint.\n\n- **HMAC + replay-window:** `WEBHOOK_INTEGRITY_CHECK_ENABLED=1` + `AUDIT_SIGNING_SECRET`. Signed payload `<timestamp>.<rawBody>`, default 5-min replay window.\n- **Rate limit:** `RATE_LIMIT_ENABLED=1` (60 req / 5 min / IP)\n- **Idempotency:** `IDEMPOTENCY_ENABLED=1` (5-min window on `x-request-id` or hash(rawBody)). The Idempotency node runs `onError: continueErrorOutput` so an unexpected static-data corruption is caught and routed to the Error Fallback rather than escaping as an unhandled exception.\n- **Atomic hash chain:** the Postgres Insert is a single SQL statement that wraps `pg_advisory_xact_lock(hashtext('audit_log_chain:default'))` in a `MATERIALIZED` CTE referenced via `CROSS JOIN` by the `last` CTE. PostgreSQL is forced to acquire the transaction-scoped advisory lock before scanning `audit_log`, so two concurrent inserts cannot read the same `prev_hash`. The `FOR UPDATE` row-lock is kept as a belt-and-suspenders defense against direct SQL writers that bypass this workflow. The hash is computed in SQL via `digest()` from `pgcrypto` so the chain integrity holds across n8n restarts (no static-data bridge needed).\n- **Error branch:** always on. DB failure -> structured 500 + Slack alert (DB outage is a security event, audit chain breaks).\n\nFor clustered n8n, the in-memory dedup is per-instance. Swap to Redis SET NX EX 300 for cross-instance dedup. Snippet in the node's comments.",
        "height": 380,
        "width": 380,
        "color": 7
      },
      "id": "note-production-patterns",
      "name": "Sticky Note - Production Patterns",
      "type": "n8n-nodes-base.stickyNote",
      "typeVersion": 1,
      "position": [
        840,
        -300
      ]
    },
    {
      "parameters": {
        "httpMethod": "POST",
        "path": "audit-ingest",
        "responseMode": "responseNode",
        "options": {
          "rawBody": true
        }
      },
      "id": "audit-1-trigger",
      "name": "Audit Webhook",
      "type": "n8n-nodes-base.webhook",
      "typeVersion": 2,
      "position": [
        240,
        60
      ]
    },
    {
      "parameters": {
        "jsCode": "// HMAC-SHA256 signature verification on raw body, opt-in, with replay-window check.\n// Headers required when WEBHOOK_INTEGRITY_CHECK_ENABLED=1:\n//   x-audit-signature: <hex hmac-sha256(timestamp + '.' + rawBody)>\n//   x-audit-timestamp: <unix-seconds>\n// Plus body-size cap.\n\nconst secret = $env.AUDIT_SIGNING_SECRET;\nconst integrityCheckEnabled = $env.WEBHOOK_INTEGRITY_CHECK_ENABLED === '1';\n\nconst MAX_BODY_BYTES = parseInt($env.MAX_BODY_BYTES || '1048576', 10);\nconst item = $input.first();\nconst rawBody = item.json.rawBody || (typeof item.json.body === 'string' ? item.json.body : JSON.stringify(item.json.body || {}));\nif (Buffer.byteLength(String(rawBody), 'utf8') > MAX_BODY_BYTES) {\n  throw new Error('PAYLOAD_TOO_LARGE: body exceeds MAX_BODY_BYTES=' + MAX_BODY_BYTES);\n}\n\nif (!secret || !integrityCheckEnabled) {\n  // Verification disabled, mark as unsigned and pass through.\n  return [{ json: { signed: false, rawBody, headers: item.json.headers || {} } }];\n}\n\nconst crypto = require('crypto');\nconst headers = item.json.headers || {};\nconst providedSig = headers['x-audit-signature'] || headers['X-Audit-Signature'];\nconst tsHeader = headers['x-audit-timestamp'] || headers['X-Audit-Timestamp'];\n\nif (!providedSig || typeof providedSig !== 'string') {\n  throw new Error('UNAUTHORIZED: missing x-audit-signature header');\n}\nif (!tsHeader || typeof tsHeader !== 'string') {\n  throw new Error('UNAUTHORIZED: missing x-audit-timestamp header (replay protection)');\n}\n\nconst tsNum = parseInt(tsHeader, 10);\nif (!Number.isFinite(tsNum)) {\n  throw new Error('UNAUTHORIZED: invalid x-audit-timestamp value');\n}\nconst nowSec = Math.floor(Date.now() / 1000);\nconst REPLAY_WINDOW_S = parseInt($env.AUDIT_REPLAY_WINDOW_S || '300', 10);\nif (Math.abs(nowSec - tsNum) > REPLAY_WINDOW_S) {\n  throw new Error('UNAUTHORIZED: timestamp outside replay window (' + REPLAY_WINDOW_S + 's)');\n}\n\nconst signedPayload = tsHeader + '.' + rawBody;\nconst expected = crypto.createHmac('sha256', secret).update(signedPayload, 'utf8').digest('hex');\n\nif (providedSig.length !== expected.length) {\n  throw new Error('UNAUTHORIZED: signature length mismatch');\n}\n\nconst expBuf = Buffer.from(expected, 'utf8');\nconst provBuf = Buffer.from(providedSig, 'utf8');\nif (!crypto.timingSafeEqual(expBuf, provBuf)) {\n  throw new Error('UNAUTHORIZED: invalid signature');\n}\n\nreturn [{ json: { signed: true, rawBody, headers, timestamp: tsNum } }];"
      },
      "id": "audit-pp-1-verify",
      "name": "Verify Webhook (opt-in)",
      "type": "n8n-nodes-base.code",
      "typeVersion": 2,
      "position": [
        440,
        60
      ],
      "onError": "continueErrorOutput"
    },
    {
      "parameters": {
        "jsCode": "// Per-IP sliding-window rate limit, opt-in.\n// Default 60 req / 5 min / IP. Rate-limit hits are a DoS signal -> Slack alert in error branch.\n\nif ($env.RATE_LIMIT_ENABLED !== '1') {\n  return $input.all();\n}\n\nconst LIMIT = parseInt($env.AUDIT_RATE_LIMIT_PER_IP || '60', 10);\nconst WINDOW_MS = 5 * 60 * 1000;\nconst MAX_KEYS = 5000;\n\nconst item = $input.first();\nconst headers = item.json.headers || {};\nconst rawIp = headers['x-forwarded-for'] || headers['x-real-ip'] || 'unknown';\nconst key = String(rawIp).split(',')[0].trim();\n\nconst data = $getWorkflowStaticData('global');\ndata.rateBuckets = data.rateBuckets || {};\nconst buckets = data.rateBuckets;\nconst now = Date.now();\n\nfor (const k of Object.keys(buckets)) {\n  buckets[k] = (buckets[k] || []).filter(t => now - t < WINDOW_MS);\n  if (buckets[k].length === 0) delete buckets[k];\n}\nif (Object.keys(buckets).length > MAX_KEYS) {\n  const oldest = Object.entries(buckets).sort((a, b) => (a[1][0] || 0) - (b[1][0] || 0)).slice(0, 100);\n  for (const [k] of oldest) delete buckets[k];\n}\n\nconst hits = buckets[key] || [];\nif (hits.length >= LIMIT) {\n  throw new Error('RATE_LIMIT_EXCEEDED: ' + LIMIT + ' requests per ' + Math.round(WINDOW_MS / 60000) + ' minutes for ' + key);\n}\nbuckets[key] = [...hits, now];\n\nreturn $input.all();"
      },
      "id": "audit-pp-2-ratelimit",
      "name": "Rate Limit (opt-in)",
      "type": "n8n-nodes-base.code",
      "typeVersion": 2,
      "position": [
        640,
        60
      ],
      "onError": "continueErrorOutput"
    },
    {
      "parameters": {
        "jsCode": "// 5-minute idempotency window, opt-in.\n// Dedup key preference order: x-request-id header -> sha256(rawBody).\n// onError: continueErrorOutput on the node so unexpected static-data corruption is\n// caught + routed to Error Fallback rather than escaping as an unhandled exception.\n\nconst crypto = require('crypto');\n\nif ($env.IDEMPOTENCY_ENABLED !== '1') {\n  return $input.all();\n}\n\nconst WINDOW_MS = 5 * 60 * 1000;\nconst MAX_KEYS = 5000;\n\nconst item = $input.first();\nconst headers = item.json.headers || {};\nconst rawBody = item.json.rawBody || '';\nconst providedReqId = headers['x-request-id'] || headers['X-Request-Id'];\n\nlet dedupKey;\nif (providedReqId && typeof providedReqId === 'string' && providedReqId.length <= 128) {\n  dedupKey = 'reqid:' + providedReqId;\n} else {\n  dedupKey = 'body:' + crypto.createHash('sha256').update(rawBody, 'utf8').digest('hex').slice(0, 32);\n}\n\nconst data = $getWorkflowStaticData('global');\ndata.seenKeys = data.seenKeys || {};\nconst seen = data.seenKeys;\nconst now = Date.now();\n\nfor (const k of Object.keys(seen)) {\n  if (now - seen[k] > WINDOW_MS) delete seen[k];\n}\nif (Object.keys(seen).length > MAX_KEYS) {\n  const oldest = Object.entries(seen).sort((a, b) => a[1] - b[1]).slice(0, 500);\n  for (const [k] of oldest) delete seen[k];\n}\n\nif (seen[dedupKey]) {\n  return [{ json: { ...item.json, skipped: true, reason: 'duplicate', dedupKey } }];\n}\nseen[dedupKey] = now;\n\nreturn [{ json: { ...item.json, dedupKey } }];"
      },
      "id": "audit-pp-3-idempotency",
      "name": "Idempotency Check (opt-in)",
      "type": "n8n-nodes-base.code",
      "typeVersion": 2,
      "position": [
        840,
        60
      ],
      "onError": "continueErrorOutput"
    },
    {
      "parameters": {
        "jsCode": "// Build the audit row payload.\n// Hash chain (prev_hash + row_hash) is computed atomically in the Postgres Insert SQL.\n// Serialization is via pg_advisory_xact_lock in a MATERIALIZED CTE, with FOR UPDATE\n// kept as belt-and-suspenders. digest() from pgcrypto produces the row hash. We only\n// emit the payload + payload_hash + metadata fields here.\n\nconst crypto = require('crypto');\n\nconst items = $input.all();\nconst out = [];\nfor (const item of items) {\n  const j = item.json || {};\n  if (j.skipped) {\n    out.push(item);\n    continue;\n  }\n  const rawBody = j.rawBody || '';\n  const headers = j.headers || {};\n  const signed = !!j.signed;\n  const sourceIp = String(headers['x-forwarded-for'] || headers['x-real-ip'] || '').split(',')[0].trim() || null;\n  const sourceUserAgent = headers['user-agent'] || headers['User-Agent'] || null;\n  const eventType = headers['x-audit-event-type'] || headers['X-Audit-Event-Type'] || 'unknown';\n  const eventSource = headers['x-audit-source'] || headers['X-Audit-Source'] || 'unknown';\n  const payloadHash = crypto.createHash('sha256').update(rawBody, 'utf8').digest('hex');\n\n  out.push({ json: {\n    receivedAt: new Date().toISOString(),\n    eventType,\n    eventSource,\n    sourceIp,\n    sourceUserAgent: sourceUserAgent ? String(sourceUserAgent).slice(0, 500) : null,\n    signed,\n    payload: rawBody,\n    payloadHash,\n    dedupKey: j.dedupKey || null,\n  }});\n}\nreturn out;"
      },
      "id": "audit-3-normalize",
      "name": "Build Audit Row",
      "type": "n8n-nodes-base.code",
      "typeVersion": 2,
      "position": [
        1040,
        60
      ]
    },
    {
      "parameters": {
        "jsCode": "// Drop the items that were already deduped before we hit the database.\n\nconst out = [];\nfor (const item of $input.all()) {\n  const j = item.json || {};\n  if (!j.skipped) out.push(item);\n}\nif (out.length === 0) {\n  return [{ json: { skipped: true, reason: 'all-items-deduped' } }];\n}\nreturn out;"
      },
      "id": "audit-4-passthrough",
      "name": "Forward Live Items Only",
      "type": "n8n-nodes-base.code",
      "typeVersion": 2,
      "position": [
        1240,
        60
      ]
    },
    {
      "parameters": {
        "operation": "executeQuery",
        "query": "WITH advisory AS MATERIALIZED (\n  -- Serialize all inserts on the same chain key. Transaction-scoped\n  -- advisory lock is released on COMMIT/ROLLBACK. Using hashtext on a\n  -- chain identifier so multiple chains (e.g. per-tenant) do not block\n  -- each other if you ever need to shard.\n  SELECT pg_advisory_xact_lock(hashtext('audit_log_chain:default')) AS lock_acquired\n),\nlast AS MATERIALIZED (\n  -- CROSS JOIN advisory forces PostgreSQL to evaluate the advisory CTE\n  -- before scanning audit_log. With MATERIALIZED on advisory, the\n  -- ordering is guaranteed even on PG 12+ where CTEs are inlined by\n  -- default. FOR UPDATE stays as a belt-and-suspenders defense in case\n  -- a direct SQL writer bypasses this workflow.\n  SELECT row_hash AS prev_hash\n  FROM audit_log\n  CROSS JOIN advisory\n  ORDER BY id DESC\n  LIMIT 1\n  FOR UPDATE\n),\nseed AS (\n  SELECT\n    COALESCE((SELECT prev_hash FROM last), repeat('0', 64)) AS prev_hash,\n    $1::timestamptz AS received_at,\n    $2::text AS event_type,\n    $3::text AS event_source,\n    NULLIF($4, '')::text AS source_ip,\n    NULLIF($5, '')::text AS source_user_agent,\n    $6::boolean AS signed,\n    $7::text AS payload,\n    $8::text AS payload_hash,\n    NULLIF($9, '')::text AS dedup_key\n)\nINSERT INTO audit_log (received_at, event_type, event_source, source_ip, source_user_agent, signed, payload, payload_hash, prev_hash, row_hash, dedup_key)\nSELECT\n  received_at, event_type, event_source, source_ip, source_user_agent, signed, payload, payload_hash, prev_hash,\n  encode(\n    digest(\n      json_build_object(\n        'prevHash', prev_hash,\n        'payloadHash', payload_hash,\n        'signed', signed,\n        'sourceIp', source_ip,\n        'sourceUserAgent', source_user_agent,\n        'eventType', event_type,\n        'eventSource', event_source,\n        'receivedAt', to_char(received_at AT TIME ZONE 'UTC', 'YYYY-MM-DD\"T\"HH24:MI:SS.MSZ')\n      )::text,\n      'sha256'\n    ),\n    'hex'\n  ) AS row_hash,\n  dedup_key\nFROM seed\nRETURNING id, prev_hash, row_hash;",
        "options": {
          "queryReplacement": "={{ $json.receivedAt }},{{ $json.eventType }},{{ $json.eventSource }},{{ $json.sourceIp }},{{ $json.sourceUserAgent }},{{ $json.signed }},{{ $json.payload }},{{ $json.payloadHash }},{{ $json.dedupKey }}"
        }
      },
      "id": "audit-5-postgres",
      "name": "Postgres Insert",
      "type": "n8n-nodes-base.postgres",
      "typeVersion": 2.5,
      "position": [
        1440,
        60
      ],
      "credentials": {
        "postgres": {
          "name": "<your credential>"
        }
      },
      "onError": "continueErrorOutput"
    },
    {
      "parameters": {
        "respondWith": "json",
        "responseBody": "={{ JSON.stringify({ ok: true, id: $json.id, rowHash: $json.row_hash, prevHash: $json.prev_hash }) }}",
        "options": {
          "responseCode": 200
        }
      },
      "id": "audit-6-respond",
      "name": "Respond to Client",
      "type": "n8n-nodes-base.respondToWebhook",
      "typeVersion": 1.1,
      "position": [
        1640,
        60
      ]
    },
    {
      "parameters": {
        "jsCode": "// Fallback for any failure along the path:\n// - HMAC failure (UNAUTHORIZED in Verify) -> security event, alert Slack\n// - Rate limit breach (RATE_LIMIT_EXCEEDED in Rate Limit) -> capacity event, alert Slack\n// - Idempotency unexpected error -> capacity event, alert Slack\n// - DB failure (Postgres Insert) -> the audit chain is broken until DB recovers,\n//   alert Slack as security event since auditability is compromised.\n\nconst input = $input.first();\nconst raw = input.json || {};\nconst errorRaw = raw.error || raw;\nconst errorMessage = errorRaw && errorRaw.message ? String(errorRaw.message) : 'unknown error';\n\nlet category = 'unknown';\nlet severity = 'warning';\nif (errorMessage.startsWith('UNAUTHORIZED')) {\n  category = 'auth';\n  severity = 'critical';\n} else if (errorMessage.startsWith('RATE_LIMIT_EXCEEDED')) {\n  category = 'capacity';\n  severity = 'warning';\n} else if (errorMessage.startsWith('PAYLOAD_TOO_LARGE')) {\n  category = 'capacity';\n  severity = 'warning';\n} else {\n  // Likely a DB error if we got past Verify + Rate Limit + Idempotency.\n  category = 'database';\n  severity = 'critical';\n}\n\n// HTTP status to return to the client.\nlet statusCode = 500;\nif (category === 'auth') statusCode = 401;\nelse if (category === 'capacity') statusCode = 429;\nelse if (errorMessage.startsWith('PAYLOAD_TOO_LARGE')) statusCode = 413;\n\nreturn [{ json: {\n  ok: false,\n  error: errorMessage,\n  category,\n  severity,\n  statusCode,\n  at: new Date().toISOString(),\n}}];"
      },
      "id": "audit-err-fallback",
      "name": "Error Fallback",
      "type": "n8n-nodes-base.code",
      "typeVersion": 2,
      "position": [
        1240,
        380
      ]
    },
    {
      "parameters": {
        "method": "POST",
        "url": "={{ $env.SLACK_SECURITY_WEBHOOK }}",
        "sendBody": true,
        "specifyBody": "json",
        "jsonBody": "={{ JSON.stringify({ text: ($json.severity === 'critical' ? ':rotating_light:' : ':warning:') + ' Audit ingest ' + $json.category + ' event: ' + $json.error }) }}",
        "options": {}
      },
      "id": "audit-err-slack",
      "name": "Slack Alert",
      "type": "n8n-nodes-base.httpRequest",
      "typeVersion": 4.2,
      "position": [
        1440,
        380
      ],
      "onError": "continueRegularOutput"
    },
    {
      "parameters": {
        "respondWith": "json",
        "responseBody": "={{ JSON.stringify({ ok: false, error: $json.error || 'unknown error', category: $json.category }) }}",
        "options": {
          "responseCode": "={{ $json.statusCode || 500 }}"
        }
      },
      "id": "audit-err-respond",
      "name": "Error Respond",
      "type": "n8n-nodes-base.respondToWebhook",
      "typeVersion": 1.1,
      "position": [
        1640,
        380
      ]
    }
  ],
  "connections": {
    "Audit Webhook": {
      "main": [
        [
          {
            "node": "Verify Webhook (opt-in)",
            "type": "main",
            "index": 0
          }
        ]
      ]
    },
    "Verify Webhook (opt-in)": {
      "main": [
        [
          {
            "node": "Rate Limit (opt-in)",
            "type": "main",
            "index": 0
          }
        ],
        [
          {
            "node": "Error Fallback",
            "type": "main",
            "index": 0
          }
        ]
      ]
    },
    "Rate Limit (opt-in)": {
      "main": [
        [
          {
            "node": "Idempotency Check (opt-in)",
            "type": "main",
            "index": 0
          }
        ],
        [
          {
            "node": "Error Fallback",
            "type": "main",
            "index": 0
          }
        ]
      ]
    },
    "Idempotency Check (opt-in)": {
      "main": [
        [
          {
            "node": "Build Audit Row",
            "type": "main",
            "index": 0
          }
        ],
        [
          {
            "node": "Error Fallback",
            "type": "main",
            "index": 0
          }
        ]
      ]
    },
    "Build Audit Row": {
      "main": [
        [
          {
            "node": "Forward Live Items Only",
            "type": "main",
            "index": 0
          }
        ]
      ]
    },
    "Forward Live Items Only": {
      "main": [
        [
          {
            "node": "Postgres Insert",
            "type": "main",
            "index": 0
          }
        ]
      ]
    },
    "Postgres Insert": {
      "main": [
        [
          {
            "node": "Respond to Client",
            "type": "main",
            "index": 0
          }
        ],
        [
          {
            "node": "Error Fallback",
            "type": "main",
            "index": 0
          }
        ]
      ]
    },
    "Error Fallback": {
      "main": [
        [
          {
            "node": "Slack Alert",
            "type": "main",
            "index": 0
          }
        ]
      ]
    },
    "Slack Alert": {
      "main": [
        [
          {
            "node": "Error Respond",
            "type": "main",
            "index": 0
          }
        ]
      ]
    }
  },
  "settings": {
    "executionOrder": "v1"
  }
}
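
Calling the endpoint with a signed request

If you set WEBHOOK_INTEGRITY_CHECK_ENABLED=1, the caller has to send the x-audit-signature and x-audit-timestamp headers exactly as the Verify node checks them: a hex HMAC-SHA256 over <unix-seconds> + '.' + <rawBody>, keyed with AUDIT_SIGNING_SECRET. The sketch below is a minimal Node.js (18+) client; the endpoint host, example payload, and optional event headers are placeholders, not part of the template.

// sign-and-send.js: signs an audit event the way the Verify node expects.
// The host, payload, and event metadata below are placeholders; adjust for your install.
const crypto = require('crypto');

const secret = process.env.AUDIT_SIGNING_SECRET; // same value the workflow reads
const url = 'https://your-n8n.example.com/webhook/audit-ingest'; // path comes from the Webhook node
const rawBody = JSON.stringify({ action: 'user.login', userId: 42 }); // example payload

const timestamp = String(Math.floor(Date.now() / 1000));
const signature = crypto
  .createHmac('sha256', secret)
  .update(timestamp + '.' + rawBody, 'utf8')
  .digest('hex');

fetch(url, {
  method: 'POST',
  headers: {
    'content-type': 'application/json',
    'x-audit-signature': signature,
    'x-audit-timestamp': timestamp,
    'x-request-id': crypto.randomUUID(), // optional; used as the idempotency key
    'x-audit-event-type': 'user.login', // optional metadata read by Build Audit Row
    'x-audit-source': 'demo-client',
  },
  body: rawBody,
}).then(async (res) => {
  // 200 with { ok, id, rowHash, prevHash } on success; 401 / 413 / 429 / 500 on the error branch.
  console.log(res.status, await res.text());
});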

Credentials you'll need

Each integration node will prompt for credentials when you import. We strip credential IDs before publishing — you'll add your own.
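
Once the Postgres credential is wired in and the audit_log table from the README exists, you can spot-check the hash chain from outside the workflow by confirming that each row's prev_hash equals the previous row's row_hash (the seed for the first row is 64 zeros, matching the COALESCE in the insert SQL). Below is a rough Node.js sketch using the pg client; the DATABASE_URL connection string is an assumption, and the column names are taken from the workflow's INSERT/RETURNING clause rather than from the published README.

// verify-chain.js: walks audit_log in id order and checks the prev_hash linkage.
const { Client } = require('pg');

async function verifyChain() {
  const client = new Client({ connectionString: process.env.DATABASE_URL }); // placeholder DSN
  await client.connect();
  try {
    const { rows } = await client.query(
      'SELECT id, prev_hash, row_hash FROM audit_log ORDER BY id ASC'
    );
    let expectedPrev = '0'.repeat(64); // seed used by the INSERT when the table is empty
    for (const row of rows) {
      if (row.prev_hash !== expectedPrev) {
        console.error('chain break at id=' + row.id + ': prev_hash does not match the prior row_hash');
        return false;
      }
      expectedPrev = row.row_hash;
    }
    console.log('chain intact across ' + rows.length + ' rows');
    return true;
  } finally {
    await client.end();
  }
}

verifyChain().catch((err) => { console.error(err); process.exit(1); });

This only checks the linkage. Recomputing each row_hash from the payload would mean reproducing the exact json_build_object and to_char formatting used in the SQL, which is easier to do inside the database itself.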

About this workflow

Webhook Audit Trail is a webhook-triggered workflow with 14 nodes, built from the stickyNote, postgres, respondToWebhook, and httpRequest node types. It verifies an HMAC signature with a replay window, rate-limits per IP, dedupes on x-request-id or a body hash, persists each event to a Postgres audit_log table with an atomic hash chain across rows, and alerts Slack on signature failures, rate-limit hits, and database outages.
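
The rate-limit and idempotency state lives in n8n workflow static data, so it is per-instance; the Production Patterns note suggests moving dedup to Redis (SET NX EX 300) when n8n runs as a cluster. Below is a sketch of what that swap could look like as the body of the Idempotency Check node. It assumes ioredis is installed and whitelisted via NODE_FUNCTION_ALLOW_EXTERNAL=ioredis and that REDIS_URL is set; neither is configured by the template.

// Cross-instance dedup: SET key NX EX 300 succeeds only for the first writer in the window.
const crypto = require('crypto');
const Redis = require('ioredis');

if ($env.IDEMPOTENCY_ENABLED !== '1') {
  return $input.all();
}

const item = $input.first();
const headers = item.json.headers || {};
const rawBody = item.json.rawBody || '';
const providedReqId = headers['x-request-id'] || headers['X-Request-Id'];

const dedupKey = providedReqId && typeof providedReqId === 'string' && providedReqId.length <= 128
  ? 'audit:reqid:' + providedReqId
  : 'audit:body:' + crypto.createHash('sha256').update(rawBody, 'utf8').digest('hex').slice(0, 32);

const redis = new Redis($env.REDIS_URL);
try {
  const firstSeen = await redis.set(dedupKey, '1', 'EX', 300, 'NX'); // 'OK' first time, null on duplicate
  if (firstSeen === null) {
    return [{ json: { ...item.json, skipped: true, reason: 'duplicate', dedupKey } }];
  }
  return [{ json: { ...item.json, dedupKey } }];
} finally {
  redis.disconnect();
}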

Source: https://github.com/studiomeyer-io/n8n-workflows/blob/main/templates/13-webhook-audit-trail/workflow.json (credit to the original creator). Request a take-down →

More Data & Sheets workflows → · Browse all categories →