{
  "name": "Restaurant Stammgast-Bot (Multi-Provider)",
  "nodes": [
    {
      "parameters": {
        "content": "## AI Restaurant Stammgast-Bot Bot\n\n**Stack:** Telegram (or WhatsApp / web chat) \u2192 StudioMeyer Memory entity lookup \u2192 Claude or OpenAI with customer dossier \u2192 reply \u2192 persist outcome.\n\n**Why this beats a stateless bot:** Returning customers don't have to re-explain who they are. The bot greets them by name, references past orders, and the agent that takes over has full history one click away.\n\n**To swap Telegram for WhatsApp:** replace the trigger node with a WhatsApp Trigger and the reply node with WhatsApp Send Message. The middle stays identical.\n\n**Production patterns ship in this workflow.json** as opt-in Code nodes. See the orange Sticky Notes below for the four env vars that toggle them on.",
        "height": 380,
        "width": 480,
        "color": 6
      },
      "id": "note-intro",
      "name": "Sticky Note - Intro",
      "type": "n8n-nodes-base.stickyNote",
      "typeVersion": 1,
      "position": [
        -260,
        -60
      ]
    },
    {
      "parameters": {
        "content": ">> SET ME <<\n\n**Telegram Bot credential** required.\n\n1. Talk to @BotFather, create a bot, get the token.\n2. n8n Credentials \u2192 New \u2192 Telegram \u2192 paste token.\n3. After activation, message your bot once and check this node's incoming-data view.\n\n**Webhook security:** to harden the trigger, expand `additionalFields` and set `secretToken` to a strong random string. Then re-register the webhook so Telegram sends `X-Telegram-Bot-Api-Secret-Token` on every request. The trigger validates it automatically.",
        "height": 320,
        "width": 320,
        "color": 5
      },
      "id": "note-telegram",
      "name": "Sticky Note - Telegram",
      "type": "n8n-nodes-base.stickyNote",
      "typeVersion": 1,
      "position": [
        240,
        -60
      ]
    },
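    {
      "parameters": {
        "content": "## Generating the secret token (sketch)\n\nThe Telegram note asks for a strong random string in `secretToken`. One way to produce it (illustrative only, not part of this workflow) is Node's built-in `crypto` module:\n\n```js\n// Run once anywhere Node.js is available, then paste the output into the\n// Telegram Trigger's additionalFields.secretToken.\nconsole.log(require('crypto').randomBytes(32).toString('hex'));\n```\n\nRe-register the webhook afterwards so Telegram starts sending the X-Telegram-Bot-Api-Secret-Token header.",
        "height": 300,
        "width": 320,
        "color": 7
      },
      "id": "note-secret-token-sketch",
      "name": "Sticky Note - Secret Token Sketch",
      "type": "n8n-nodes-base.stickyNote",
      "typeVersion": 1,
      "position": [
        240,
        -400
      ]
    },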
    {
      "parameters": {
        "content": "## Production patterns (opt-in)\n\nThree Code nodes below are off by default. Toggle each with an n8n env var:\n\n- `IDEMPOTENCY_ENABLED=1` deduplicates Telegram retries on the same `update_id` (5-min window).\n- `RATE_LIMIT_ENABLED=1` caps each chat at 60 requests / 5 min.\n- `WEBHOOK_INTEGRITY_CHECK_ENABLED=1` rejects malformed payloads (no `message`, no `from`).\n\nEach node returns pass-through when its env var is unset, so the default import boots clean. Production deployments enable all three plus the Telegram Trigger `secretToken`.\n\nFor clustered n8n deployments, swap the in-memory `$getWorkflowStaticData` blocks for Redis (`SET NX EX 300` for idempotency, `INCR + EXPIRE` for rate limit). Single-instance n8n is fine with the default.",
        "height": 360,
        "width": 540,
        "color": 7
      },
      "id": "note-production-patterns",
      "name": "Sticky Note - Production Patterns",
      "type": "n8n-nodes-base.stickyNote",
      "typeVersion": 1,
      "position": [
        700,
        -120
      ]
    },
    {
      "parameters": {
        "updates": [
          "message"
        ],
        "additionalFields": {}
      },
      "id": "rest-1-trigger",
      "name": "Telegram Trigger",
      "type": "n8n-nodes-base.telegramTrigger",
      "typeVersion": 1.1,
      "position": [
        240,
        320
      ]
    },
    {
      "parameters": {
        "jsCode": "// Webhook integrity check (opt-in via WEBHOOK_INTEGRITY_CHECK_ENABLED=1).\n// Telegram Trigger handles HMAC via its secretToken option (set on the\n// trigger node itself). This Code node is the second defense layer:\n// reject malformed payloads that lack the fields downstream nodes expect.\n//\n// To enable: set the n8n env var WEBHOOK_INTEGRITY_CHECK_ENABLED to '1'.\n// To disable: leave the env var unset (default). The node passes through.\n\nconst enabled = $env.WEBHOOK_INTEGRITY_CHECK_ENABLED === '1';\nif (!enabled) {\n  return [{ json: $input.first().json }];\n}\n\nconst item = $input.first().json;\nconst message = item?.message ?? item;\n\nif (!message || typeof message !== 'object') {\n  throw new Error('Webhook integrity check failed: no message object');\n}\nif (typeof message.from?.id !== 'number' && typeof message.from?.id !== 'string') {\n  throw new Error('Webhook integrity check failed: missing message.from.id');\n}\nif (typeof message.chat?.id !== 'number' && typeof message.chat?.id !== 'string') {\n  throw new Error('Webhook integrity check failed: missing message.chat.id');\n}\nif (typeof message.text !== 'string' && typeof message.caption !== 'string') {\n  throw new Error('Webhook integrity check failed: no text or caption');\n}\n\nreturn [{ json: item }];"
      },
      "id": "rest-pp-1-verify",
      "name": "Verify Webhook (opt-in)",
      "type": "n8n-nodes-base.code",
      "typeVersion": 2,
      "position": [
        460,
        320
      ]
    },
    {
      "parameters": {
        "jsCode": "// Rate limit (opt-in via RATE_LIMIT_ENABLED=1).\n// Per-chat-id 60 requests in a 5-minute window. Tracked in workflow\n// static data (per-instance). For clustered n8n deployments or higher\n// throughput, use Nginx limit_req_zone or Cloudflare WAF instead.\n//\n// To enable: set the n8n env var RATE_LIMIT_ENABLED to '1'.\n// To disable: leave the env var unset (default). The node passes through.\n//\n// Concurrency note: $getWorkflowStaticData is not atomic. Under heavy\n// burst load the count can over-shoot the limit by a few percent. For\n// hard limits use a reverse proxy or Redis INCR + EXPIRE.\n\nconst enabled = $env.RATE_LIMIT_ENABLED === '1';\nif (!enabled) {\n  return [{ json: $input.first().json }];\n}\n\nconst item = $input.first().json;\nconst message = item?.message ?? item;\nconst chatId = message?.chat?.id ?? message?.from?.id ?? 'unknown';\nconst bucketKey = `chat:${chatId}`;\n\nconst data = $getWorkflowStaticData('global');\nconst buckets = data.rateBuckets ?? {};\nconst now = Date.now();\nconst WINDOW_MS = 5 * 60 * 1000;\nconst LIMIT = 60;\nconst MAX_BUCKETS = 5000;\n\nconst bucket = buckets[bucketKey] ?? { count: 0, windowStart: now };\nif (now - bucket.windowStart > WINDOW_MS) {\n  bucket.count = 0;\n  bucket.windowStart = now;\n}\nbucket.count++;\nbuckets[bucketKey] = bucket;\n\n// Bound the map: evict expired entries when full\nif (Object.keys(buckets).length > MAX_BUCKETS) {\n  const cutoff = now - WINDOW_MS;\n  for (const k of Object.keys(buckets)) {\n    if (buckets[k].windowStart < cutoff) delete buckets[k];\n  }\n}\ndata.rateBuckets = buckets;\n\nif (bucket.count > LIMIT) {\n  throw new Error(`Rate limit exceeded for ${bucketKey}: ${bucket.count} requests in 5 min window`);\n}\n\nreturn [{ json: item }];"
      },
      "id": "rest-pp-2-ratelimit",
      "name": "Rate Limit (opt-in)",
      "type": "n8n-nodes-base.code",
      "typeVersion": 2,
      "position": [
        680,
        320
      ]
    },
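    {
      "parameters": {
        "content": "## Redis rate limit (sketch)\n\nA minimal sketch of the `INCR + EXPIRE` variant the Rate Limit node's comment points to for clustered n8n. It assumes `ioredis` is installed and allowed via `NODE_FUNCTION_ALLOW_EXTERNAL`, plus a `REDIS_URL` env var; none of that ships with this workflow.\n\n```js\n// Assumed: NODE_FUNCTION_ALLOW_EXTERNAL=ioredis and REDIS_URL are set.\nconst Redis = require('ioredis');\nconst redis = new Redis($env.REDIS_URL);\n\nconst item = $input.first().json;\nconst chatId = item?.message?.chat?.id ?? 'unknown';\nconst key = `rate:chat:${chatId}`;\n\n// INCR + EXPIRE: the first hit in the window also sets the 5-minute TTL.\nconst count = await redis.incr(key);\nif (count === 1) await redis.expire(key, 300);\nif (count > 60) throw new Error(`Rate limit exceeded for chat ${chatId}`);\n\nreturn [{ json: item }];\n```\n\nSwap it in for the `$getWorkflowStaticData` block in the Rate Limit node when running more than one n8n instance.",
        "height": 560,
        "width": 460,
        "color": 7
      },
      "id": "note-redis-ratelimit-sketch",
      "name": "Sticky Note - Redis Rate Limit Sketch",
      "type": "n8n-nodes-base.stickyNote",
      "typeVersion": 1,
      "position": [
        440,
        540
      ]
    },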
    {
      "parameters": {
        "jsCode": "// Idempotency check (opt-in via IDEMPOTENCY_ENABLED=1).\n// Telegram retries on 5xx. Without dedup, the workflow fires twice and\n// writes Memory twice. This node holds a 5-minute in-memory window of\n// seen update_ids and short-circuits duplicates.\n//\n// To enable: set the n8n env var IDEMPOTENCY_ENABLED to '1'.\n// To disable: leave the env var unset (default). The node passes through.\n//\n// Concurrency note: $getWorkflowStaticData is not atomic and not cluster-\n// aware. Two simultaneous fires of the same update_id may both pass.\n// For production scale, swap the staticData block for Redis SET NX EX 300.\n\nconst enabled = $env.IDEMPOTENCY_ENABLED === '1';\nif (!enabled) {\n  return [{ json: $input.first().json }];\n}\n\nconst item = $input.first().json;\nconst message = item?.message ?? item;\nconst updateId = item?.update_id ?? message?.update_id;\nconst messageId = message?.message_id;\nconst idempotencyKey = updateId ? `tg-update:${updateId}` : (messageId ? `tg-msg:${messageId}` : null);\n\nif (!idempotencyKey) {\n  // No dedup key available, pass through\n  return [{ json: item }];\n}\n\nconst data = $getWorkflowStaticData('global');\nconst seen = data.seenKeys ?? {};\nconst now = Date.now();\nconst WINDOW_MS = 5 * 60 * 1000;\n\n// Purge expired entries\nfor (const k of Object.keys(seen)) {\n  if (now - seen[k] > WINDOW_MS) delete seen[k];\n}\n\nif (seen[idempotencyKey]) {\n  // Duplicate, short-circuit by returning empty array\n  return [];\n}\nseen[idempotencyKey] = now;\ndata.seenKeys = seen;\n\nreturn [{ json: item }];"
      },
      "id": "rest-pp-3-idempotency",
      "name": "Idempotency Check (opt-in)",
      "type": "n8n-nodes-base.code",
      "typeVersion": 2,
      "position": [
        900,
        320
      ]
    },
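    {
      "parameters": {
        "content": "## Redis idempotency (sketch)\n\nA minimal sketch of the `SET NX EX 300` variant the Idempotency node's comment points to for clustered n8n. Same assumptions as the rate-limit sketch: `ioredis` allowed via `NODE_FUNCTION_ALLOW_EXTERNAL` and a `REDIS_URL` env var, neither of which ships with this workflow.\n\n```js\n// Assumed: NODE_FUNCTION_ALLOW_EXTERNAL=ioredis and REDIS_URL are set.\nconst Redis = require('ioredis');\nconst redis = new Redis($env.REDIS_URL);\n\nconst item = $input.first().json;\nconst updateId = item?.update_id;\n\nif (updateId) {\n  // Only the first writer within 300 s gets 'OK'; Telegram retries of the\n  // same update_id get null and are dropped.\n  const first = await redis.set(`tg-update:${updateId}`, '1', 'EX', 300, 'NX');\n  if (first !== 'OK') return [];\n}\n\nreturn [{ json: item }];\n```\n\nSwap it in for the `$getWorkflowStaticData` block in the Idempotency node when running more than one n8n instance.",
        "height": 560,
        "width": 460,
        "color": 7
      },
      "id": "note-redis-idempotency-sketch",
      "name": "Sticky Note - Redis Idempotency Sketch",
      "type": "n8n-nodes-base.stickyNote",
      "typeVersion": 1,
      "position": [
        960,
        540
      ]
    },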
    {
      "parameters": {
        "jsCode": "// Extract a stable customer key from the Telegram message.\n// Restaurant context: try phone first (E.164 regex over the message text),\n// fall back to Telegram contact-share, then to Telegram user ID, then to\n// chat ID for anonymous channel-style messages.\n//\n// Why phone first: a returning regular customer often gives the same phone\n// number when ordering. We want to recognise them across messages even if\n// they switch Telegram accounts or send from a friend's phone.\n\nconst body = $input.first().json;\nconst message = body?.message ?? body?.edited_message;\nconst text = (message?.text ?? '').trim();\nconst fromId = message?.from?.id;\nconst chatId = message?.chat?.id;\nconst username = message?.from?.username ?? '';\nconst firstName = message?.from?.first_name ?? '';\nconst sharedContactPhone = message?.contact?.phone_number;\n\n// E.164-ish phone regex: + with 8-15 digits, or 8-15 digits starting with 0/+/space\nfunction extractPhone(s) {\n  if (!s) return null;\n  const m = s.match(/(?:\\+|00)\\d{8,15}/);\n  if (m) return m[0].replace(/^00/, '+');\n  return null;\n}\n\nlet customerKey;\nlet customerLabel;\nlet identitySource;\n\nif (sharedContactPhone) {\n  // Telegram contact-share is the most reliable signal\n  customerKey = sharedContactPhone.startsWith('+') ? sharedContactPhone : `+${sharedContactPhone}`;\n  customerLabel = `phone ${customerKey}`;\n  identitySource = 'contact-share';\n} else if (extractPhone(text)) {\n  customerKey = extractPhone(text);\n  customerLabel = `phone ${customerKey}`;\n  identitySource = 'text-regex';\n} else if (fromId) {\n  customerKey = `tg:${fromId}`;\n  customerLabel = username ? `@${username}` : firstName || `tg:${fromId}`;\n  identitySource = 'telegram-user-id';\n} else if (chatId) {\n  customerKey = `chat:${chatId}`;\n  customerLabel = `chat-${chatId}`;\n  identitySource = 'telegram-chat-id';\n} else {\n  throw new Error('Cannot extract customer key: no phone, no Telegram user, no chat ID.');\n}\n\nreturn [{\n  json: {\n    customerKey,\n    customerLabel,\n    identitySource,\n    messageText: text,\n    chatId,\n    fromId,\n    username,\n    firstName,\n    receivedAt: new Date().toISOString(),\n  },\n}];"
      },
      "id": "rest-2-extract",
      "name": "Extract Customer Key",
      "type": "n8n-nodes-base.code",
      "typeVersion": 2,
      "position": [
        1120,
        320
      ]
    },
    {
      "parameters": {
        "resource": "entity",
        "operation": "search",
        "query": "={{ $json.customerKey }}",
        "entityType": "customer",
        "limit": 1
      },
      "id": "support-3-entity-search",
      "name": "Memory: Lookup Customer",
      "type": "n8n-nodes-studiomeyer-memory.studioMeyerMemory",
      "typeVersion": 1,
      "position": [
        1340,
        320
      ]
    },
    {
      "parameters": {
        "conditions": {
          "options": {
            "caseSensitive": true,
            "leftValue": "",
            "typeValidation": "strict",
            "version": 2
          },
          "conditions": [
            {
              "id": "cond-known",
              "leftValue": "={{ ($json.entities ?? $json.results ?? []).length }}",
              "rightValue": 0,
              "operator": {
                "type": "number",
                "operation": "gt"
              }
            }
          ],
          "combinator": "and"
        },
        "options": {}
      },
      "id": "support-4-if",
      "name": "Known Customer?",
      "type": "n8n-nodes-base.if",
      "typeVersion": 2.2,
      "position": [
        1560,
        320
      ]
    },
    {
      "parameters": {
        "resource": "entity",
        "operation": "open",
        "entityRef": "={{ $('Extract Customer Key').item.json.customerKey }}"
      },
      "id": "support-5a-open",
      "name": "Memory: Customer Dossier",
      "type": "n8n-nodes-studiomeyer-memory.studioMeyerMemory",
      "typeVersion": 1,
      "position": [
        1800,
        200
      ]
    },
    {
      "parameters": {
        "resource": "entity",
        "operation": "create",
        "name": "={{ $('Extract Customer Key').item.json.customerKey }}",
        "entityType": "customer",
        "project": "support-bot",
        "observations": "=First contact via Telegram on {{ $('Extract Customer Key').item.json.receivedAt }}. Message: {{ $('Extract Customer Key').item.json.messageText.slice(0, 200) }}"
      },
      "id": "support-5b-create",
      "name": "Memory: Create Customer",
      "type": "n8n-nodes-studiomeyer-memory.studioMeyerMemory",
      "typeVersion": 1,
      "position": [
        1800,
        460
      ]
    },
    {
      "parameters": {
        "content": "## Why entity.open here\n\nFor returning customers we use `entity.open` (not `memory.search`) because we want the **complete dossier**: entity-type, first-seen date, every observation, every relation. That's exactly what an agent needs to feel like a colleague who reviewed the customer's file before saying hello.\n\nFor recency-weighted search across **all** memory (e.g. when the customer asks about a product topic, not their own history), use the Memory: Search operation instead.",
        "height": 240,
        "width": 380,
        "color": 7
      },
      "id": "note-entity-open",
      "name": "Sticky Note - entity.open",
      "type": "n8n-nodes-base.stickyNote",
      "typeVersion": 1,
      "position": [
        1800,
        -120
      ]
    },
    {
      "parameters": {
        "jsCode": "// Build a structured dossier the LLM can use as background.\n\nconst custKey = $('Extract Customer Key').item.json.customerKey;\nconst custLabel = $('Extract Customer Key').item.json.customerLabel || '(unknown)';\nconst messageText = $('Extract Customer Key').item.json.messageText;\n\n// If known customer: pull observations from entity.open\nlet observations = [];\nlet entityType = 'customer';\nlet firstSeen = null;\ntry {\n  const dossier = $('Memory: Customer Dossier').first()?.json;\n  const entity = dossier?.entity ?? dossier?.data ?? dossier;\n  observations = entity?.observations ?? [];\n  entityType = entity?.entityType ?? 'customer';\n  firstSeen = entity?.createdAt ?? entity?.validFrom;\n} catch (e) {\n  // First-time customer, no dossier yet\n}\n\nconst obsList = observations.length\n  ? observations.slice(-10).map((o, i) => {\n      const content = o.content ?? o.text ?? JSON.stringify(o).slice(0, 200);\n      const at = o.createdAt ?? o.observedAt ?? '';\n      return `${i + 1}. [${at}] ${content}`;\n    }).join('\\n')\n  : '(this is a new customer, no prior tickets on file)';\n\nconst systemPrompt = `You are a friendly support agent for our company.\\n\\nCustomer: ${custLabel}\\nFirst contact: ${firstSeen ?? 'today'}\\nPrior interactions (most recent first):\\n${obsList}\\n\\nReply concisely. Reference past tickets when it adds value (\"I see this is your second time asking about X, let me get to the bottom of it.\"). If the question is unclear, ask one clarifying question. If the issue is technical and beyond a one-line answer, offer to escalate to a human and tell the user that you'll log the ticket.`;\n\nreturn [{\n  json: {\n    customerKey: custKey,\n    customerLabel: custLabel,\n    messageText,\n    systemPrompt,\n    observationCount: observations.length,\n  },\n}];"
      },
      "id": "support-6-prompt",
      "name": "Build LLM Prompt",
      "type": "n8n-nodes-base.code",
      "typeVersion": 2,
      "position": [
        2040,
        320
      ]
    },
    {
      "parameters": {
        "mode": "manual",
        "duplicateItem": false,
        "assignments": {
          "assignments": [
            {
              "id": "set-provider",
              "name": "provider",
              "value": "openai",
              "type": "string"
            }
          ]
        },
        "includeOtherFields": true,
        "options": {}
      },
      "id": "mp-set-provider",
      "name": "Set Provider",
      "type": "n8n-nodes-base.set",
      "typeVersion": 3.4,
      "position": [
        2260,
        320
      ]
    },
    {
      "parameters": {
        "rules": {
          "values": [
            {
              "conditions": {
                "options": {
                  "caseSensitive": true,
                  "leftValue": "",
                  "typeValidation": "strict",
                  "version": 2
                },
                "conditions": [
                  {
                    "id": "rule-openai",
                    "leftValue": "={{ $json.provider }}",
                    "rightValue": "openai",
                    "operator": {
                      "type": "string",
                      "operation": "equals"
                    }
                  }
                ],
                "combinator": "and"
              },
              "renameOutput": true,
              "outputKey": "openai"
            },
            {
              "conditions": {
                "options": {
                  "caseSensitive": true,
                  "leftValue": "",
                  "typeValidation": "strict",
                  "version": 2
                },
                "conditions": [
                  {
                    "id": "rule-anthropic",
                    "leftValue": "={{ $json.provider }}",
                    "rightValue": "anthropic",
                    "operator": {
                      "type": "string",
                      "operation": "equals"
                    }
                  }
                ],
                "combinator": "and"
              },
              "renameOutput": true,
              "outputKey": "anthropic"
            }
          ]
        },
        "options": {
          "fallbackOutput": "extra",
          "renameFallbackOutput": "fallback"
        }
      },
      "id": "mp-route",
      "name": "Route by Provider",
      "type": "n8n-nodes-base.switch",
      "typeVersion": 3.2,
      "position": [
        2480,
        320
      ]
    },
    {
      "parameters": {
        "resource": "text",
        "operation": "message",
        "modelId": {
          "__rl": true,
          "value": "gpt-5-mini",
          "mode": "list",
          "cachedResultName": "gpt-5-mini"
        },
        "messages": {
          "values": [
            {
              "content": "={{ $json.systemPrompt }}",
              "role": "system"
            },
            {
              "content": "={{ $json.messageText ?? $json.transcript ?? $json.payload }}",
              "role": "user"
            }
          ]
        },
        "jsonOutput": false,
        "options": {
          "maxTokens": 400,
          "temperature": 0.5
        }
      },
      "id": "mp-openai",
      "name": "OpenAI Reply",
      "type": "n8n-nodes-base.openAi",
      "typeVersion": 1.7,
      "onError": "continueErrorOutput",
      "position": [
        2700,
        200
      ]
    },
    {
      "parameters": {
        "resource": "text",
        "operation": "message",
        "modelId": {
          "__rl": true,
          "value": "claude-haiku-4-5",
          "mode": "list",
          "cachedResultName": "claude-haiku-4-5"
        },
        "messages": {
          "values": [
            {
              "content": "={{ $json.systemPrompt }}",
              "role": "system"
            },
            {
              "content": "={{ $json.messageText }}",
              "role": "user"
            }
          ]
        },
        "options": {
          "maxTokens": 400,
          "temperature": 0.5
        }
      },
      "id": "mp-anthropic",
      "name": "Anthropic Reply",
      "type": "@n8n/n8n-nodes-langchain.anthropic",
      "typeVersion": 1,
      "onError": "continueErrorOutput",
      "position": [
        2700,
        440
      ]
    },
    {
      "parameters": {
        "content": "## Error branch (always on)\n\nThe two LLM Reply nodes have `On Error: Continue (Error Output)` enabled. The red error pin lands at **LLM Fallback Reply**, which builds a graceful customer message and feeds two destinations:\n\n1. **Telegram Reply** so the customer gets an answer instead of silence.\n2. **Memory: Learn Error** with `category: mistake, tags: [llm-error, <provider>]` so you spot patterns in your knowledge graph.\n\nNo env var, this branch is always wired. Without it, an OpenAI rate-limit or Anthropic 5xx leaves the customer hanging.\n\nThe error syntax is `{{ $json.error.message }}`, not `$error.message` (which does not exist in n8n) and not `$json.execution.error.message` (which is for separate Error Trigger Workflows, not inline error pins).",
        "height": 360,
        "width": 460,
        "color": 7
      },
      "id": "note-error-branch",
      "name": "Sticky Note - Error Branch",
      "type": "n8n-nodes-base.stickyNote",
      "typeVersion": 1,
      "position": [
        2960,
        -100
      ]
    },
    {
      "parameters": {
        "jsCode": "// LLM Fallback Reply: fires when OpenAI Reply or Anthropic Reply errors,\n// or when Route by Provider receives an unknown provider value.\n// Builds a graceful customer-facing reply and an error-learn payload.\n//\n// Two arrival paths land here:\n//   1. LLM error (provider returned non-2xx) - input has $json.error\n//   2. Router fallback (Route by Provider had no matching rule) - input has\n//      the original prompt object (systemPrompt, messageText, customerKey)\n//      WITHOUT an error field. We must NOT JSON.stringify the whole input\n//      because systemPrompt contains private memory context.\n\nconst errorRaw = $input.first().json;\nconst provider = $('Set Provider').item.json.provider ?? 'unknown';\n\nconst isLlmError = !!(errorRaw?.error || errorRaw?.message);\nlet errorMessage;\nif (isLlmError) {\n  errorMessage = errorRaw?.error?.message\n    ?? errorRaw?.error?.name\n    ?? errorRaw?.message\n    ?? 'Unknown LLM error';\n} else {\n  errorMessage = `Unknown provider value: ${provider}. Set \"provider\" to \"openai\" or \"anthropic\" in the Set Provider node.`;\n}\n\nconst customerKey = $('Extract Customer Key').item.json.customerKey;\nconst customerLabel = $('Extract Customer Key').item.json.customerLabel;\nconst messageText = $('Extract Customer Key').item.json.messageText;\n\nconst fallbackText = `Sorry, our system is briefly down. We have noted your message and will get back to you within 30 minutes.`;\n\nreturn [{\n  json: {\n    replyText: fallbackText,\n    provider,\n    isFallback: true,\n    isRouterFallback: !isLlmError,\n    errorMessage: String(errorMessage),\n    customerKey,\n    customerLabel,\n    messageText,\n  },\n}];"
      },
      "id": "rest-7-fallback",
      "name": "LLM Fallback Reply",
      "type": "n8n-nodes-base.code",
      "typeVersion": 2,
      "position": [
        2960,
        580
      ]
    },
    {
      "parameters": {
        "jsCode": "// Normalize LLM output across providers into a single field `replyText`.\n// OpenAI:    $json.choices[0].message.content\n// Anthropic: $json.content[0].text\n\nconst raw = $input.first().json;\nlet replyText = '';\n\nif (raw?.choices?.[0]?.message?.content) {\n  replyText = raw.choices[0].message.content;\n} else if (Array.isArray(raw?.content) && raw.content[0]?.text) {\n  replyText = raw.content[0].text;\n} else if (raw?.message?.content) {\n  replyText = raw.message.content;\n} else if (raw?.text) {\n  replyText = raw.text;\n} else if (raw?.reply) {\n  replyText = raw.reply;\n} else {\n  replyText = '(LLM returned no text, check provider response shape)';\n}\n\nreturn [{\n  json: {\n    replyText: String(replyText).trim(),\n    provider: $('Set Provider').item.json.provider,\n    isFallback: false,\n  },\n}];"
      },
      "id": "mp-normalize",
      "name": "Normalize LLM Output",
      "type": "n8n-nodes-base.code",
      "typeVersion": 2,
      "position": [
        2960,
        320
      ]
    },
    {
      "parameters": {
        "chatId": "={{ $('Extract Customer Key').item.json.chatId }}",
        "text": "={{ $json.replyText ?? 'Sorry, I had trouble generating a reply. A human will be in touch shortly.' }}",
        "additionalFields": {
          "parse_mode": "Markdown"
        }
      },
      "id": "support-8-tg-send",
      "name": "Telegram Reply",
      "type": "n8n-nodes-base.telegram",
      "typeVersion": 1.2,
      "position": [
        3200,
        320
      ]
    },
    {
      "parameters": {
        "resource": "entity",
        "operation": "observe",
        "entityRef": "={{ $('Extract Customer Key').item.json.customerKey }}",
        "observations": "=Ticket on {{ $('Extract Customer Key').item.json.receivedAt }}, Customer: {{ $('Extract Customer Key').item.json.messageText.slice(0, 200) }} | Bot reply: {{ ($json.replyText ?? '').slice(0, 200) }}"
      },
      "id": "support-9-observe",
      "name": "Memory: Observe Ticket",
      "type": "n8n-nodes-studiomeyer-memory.studioMeyerMemory",
      "typeVersion": 1,
      "position": [
        3440,
        220
      ]
    },
    {
      "parameters": {
        "resource": "memory",
        "operation": "learn",
        "content": "=Support interaction with {{ $('Extract Customer Key').item.json.customerLabel }} ({{ $('Extract Customer Key').item.json.customerKey }}): \"{{ $('Extract Customer Key').item.json.messageText.slice(0, 150) }}\" \u2192 Bot resolved with: \"{{ ($json.replyText ?? '').slice(0, 150) }}\"",
        "category": "insight",
        "project": "support-bot",
        "tags": "=support, customer-{{ $('Extract Customer Key').item.json.customerKey }}",
        "confidence": 0.7
      },
      "id": "support-10-learn",
      "name": "Memory: Learn Ticket",
      "type": "n8n-nodes-studiomeyer-memory.studioMeyerMemory",
      "typeVersion": 1,
      "position": [
        3440,
        380
      ]
    },
    {
      "parameters": {
        "resource": "memory",
        "operation": "learn",
        "content": "=LLM error in support bot ({{ $json.provider }}): {{ $json.errorMessage }} | Customer: {{ $json.customerLabel }} ({{ $json.customerKey }}) | Question: \"{{ ($json.messageText ?? '').slice(0, 150) }}\"",
        "category": "mistake",
        "project": "support-bot",
        "tags": "=llm-error, {{ $json.provider }}, support-bot",
        "confidence": 0.6
      },
      "id": "support-11-learn-error",
      "name": "Memory: Learn Error",
      "type": "n8n-nodes-studiomeyer-memory.studioMeyerMemory",
      "typeVersion": 1,
      "position": [
        3440,
        580
      ]
    }
  ],
  "connections": {
    "Telegram Trigger": {
      "main": [
        [
          {
            "node": "Verify Webhook (opt-in)",
            "type": "main",
            "index": 0
          }
        ]
      ]
    },
    "Verify Webhook (opt-in)": {
      "main": [
        [
          {
            "node": "Rate Limit (opt-in)",
            "type": "main",
            "index": 0
          }
        ]
      ]
    },
    "Rate Limit (opt-in)": {
      "main": [
        [
          {
            "node": "Idempotency Check (opt-in)",
            "type": "main",
            "index": 0
          }
        ]
      ]
    },
    "Idempotency Check (opt-in)": {
      "main": [
        [
          {
            "node": "Extract Customer Key",
            "type": "main",
            "index": 0
          }
        ]
      ]
    },
    "Extract Customer Key": {
      "main": [
        [
          {
            "node": "Memory: Lookup Customer",
            "type": "main",
            "index": 0
          }
        ]
      ]
    },
    "Memory: Lookup Customer": {
      "main": [
        [
          {
            "node": "Known Customer?",
            "type": "main",
            "index": 0
          }
        ]
      ]
    },
    "Known Customer?": {
      "main": [
        [
          {
            "node": "Memory: Customer Dossier",
            "type": "main",
            "index": 0
          }
        ],
        [
          {
            "node": "Memory: Create Customer",
            "type": "main",
            "index": 0
          }
        ]
      ]
    },
    "Memory: Customer Dossier": {
      "main": [
        [
          {
            "node": "Build LLM Prompt",
            "type": "main",
            "index": 0
          }
        ]
      ]
    },
    "Memory: Create Customer": {
      "main": [
        [
          {
            "node": "Build LLM Prompt",
            "type": "main",
            "index": 0
          }
        ]
      ]
    },
    "Build LLM Prompt": {
      "main": [
        [
          {
            "node": "Set Provider",
            "type": "main",
            "index": 0
          }
        ]
      ]
    },
    "Set Provider": {
      "main": [
        [
          {
            "node": "Route by Provider",
            "type": "main",
            "index": 0
          }
        ]
      ]
    },
    "Route by Provider": {
      "main": [
        [
          {
            "node": "OpenAI Reply",
            "type": "main",
            "index": 0
          }
        ],
        [
          {
            "node": "Anthropic Reply",
            "type": "main",
            "index": 0
          }
        ],
        [
          {
            "node": "LLM Fallback Reply",
            "type": "main",
            "index": 0
          }
        ]
      ]
    },
    "OpenAI Reply": {
      "main": [
        [
          {
            "node": "Normalize LLM Output",
            "type": "main",
            "index": 0
          }
        ],
        [
          {
            "node": "LLM Fallback Reply",
            "type": "main",
            "index": 0
          }
        ]
      ]
    },
    "Anthropic Reply": {
      "main": [
        [
          {
            "node": "Normalize LLM Output",
            "type": "main",
            "index": 0
          }
        ],
        [
          {
            "node": "LLM Fallback Reply",
            "type": "main",
            "index": 0
          }
        ]
      ]
    },
    "Normalize LLM Output": {
      "main": [
        [
          {
            "node": "Telegram Reply",
            "type": "main",
            "index": 0
          },
          {
            "node": "Memory: Observe Ticket",
            "type": "main",
            "index": 0
          },
          {
            "node": "Memory: Learn Ticket",
            "type": "main",
            "index": 0
          }
        ]
      ]
    },
    "LLM Fallback Reply": {
      "main": [
        [
          {
            "node": "Telegram Reply",
            "type": "main",
            "index": 0
          },
          {
            "node": "Memory: Learn Error",
            "type": "main",
            "index": 0
          }
        ]
      ]
    }
  },
  "settings": {
    "executionOrder": "v1"
  }
}