
Lead-Qualifier with BANT+I and Pipedrive (Multi-Provider)

Uses stickyNote, n8n-nodes-studiomeyer-memory, openAi, anthropic. Webhook trigger; 28 nodes.

Category: AI & RAG · Trigger: Webhook · Nodes: 28 · Complexity: ★★★★☆ · AI nodes: yes

The workflow JSON

Copy or download the full n8n JSON below. Paste it into a new n8n workflow, add your credentials, and activate. Full import guide →

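Before importing, it may help to see the core decision in miniature. The workflow's system prompt fixes the BANT+I bucketing rule (total >= 20 = hot, 15-19 = warm, < 15 = cold), and the Normalize LLM Output node re-derives the same rule as a fallback when the LLM omits `qualifiedAs`. A minimal standalone sketch in plain Node.js — illustration only, not part of the import:

```javascript
// Sketch of the BANT+I bucketing rule the workflow's system prompt specifies.
// The LLM applies it first; the Normalize LLM Output node re-derives it
// from "total" as a safety net.

function bucketLead(scores) {
  // scores: { B, A, N, T, I }, each an integer 1-5
  const total = Object.values(scores).reduce((sum, s) => sum + s, 0);
  const qualifiedAs = total >= 20 ? 'hot' : total >= 15 ? 'warm' : 'cold';
  return { total, qualifiedAs };
}

// Example: decision-maker with a painful problem but no budget signal yet
console.log(bucketLead({ B: 2, A: 5, N: 5, T: 3, I: 4 }));
// → { total: 19, qualifiedAs: 'warm' }
```

Anything scoring straight 1s lands at 5 points, which is why the fallback paths in the workflow default failed classifications to `cold` with `total: 5`.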
{
  "name": "Lead-Qualifier with BANT+I and Pipedrive (Multi-Provider)",
  "nodes": [
    {
      "parameters": {
        "content": "## AI Lead-Qualifier with BANT+I and Pipedrive Bot\n\n**Stack:** Lead Form (or WhatsApp / web chat) \u2192 StudioMeyer Memory entity lookup \u2192 Claude or OpenAI with lead dossier \u2192 reply \u2192 persist outcome.\n\n**Why this beats a stateless bot:** Returning leads don't have to re-explain who they are. The bot greets them by name, references past leads, and the agent that takes over has full history one click away.\n\n**To swap Lead Form for WhatsApp:** replace the trigger node with a WhatsApp Trigger and the reply node with WhatsApp Send Message. The middle stays identical.\n\n**Production patterns ship in this workflow.json** as opt-in Code nodes. See the orange Sticky Notes below for the three env vars that toggle them on.\n\nREQUIRED env vars before activation: PIPEDRIVE_COMPANY_DOMAIN, PIPEDRIVE_API_TOKEN, PIPEDRIVE_STAGE_HOT, PIPEDRIVE_STAGE_WARM, PIPEDRIVE_STAGE_COLD. If PIPEDRIVE_COMPANY_DOMAIN is unset, the Pipedrive: Create Deal node targets the malformed URL https://.pipedrive.com/... and the request fails.",
        "height": 380,
        "width": 480,
        "color": 6
      },
      "id": "note-intro",
      "name": "Sticky Note - Intro",
      "type": "n8n-nodes-base.stickyNote",
      "typeVersion": 1,
      "position": [
        -260,
        -60
      ]
    },
    {
      "parameters": {
        "content": ">> SET ME <<\n\n**Form webhook endpoint** required.\n\n1. In your form provider (Typeform / Tally / Webflow / custom), point the form's webhook at this workflow's production URL (path `lead-form`, method POST).\n2. Activate the workflow, submit the form once, and check this node's incoming-data view to confirm the payload shape matches what Extract Lead expects (`email`, `name`, `message`).\n\n**Webhook security:** to harden the trigger, set the Webhook node's `Authentication` option to Header Auth, create a Header Auth credential with a strong random token, and configure your form provider to send that header on every request. The trigger then rejects unauthenticated calls automatically.",
        "height": 320,
        "width": 320,
        "color": 5
      },
      "id": "note-telegram",
      "name": "Sticky Note - Webhook",
      "type": "n8n-nodes-base.stickyNote",
      "typeVersion": 1,
      "position": [
        240,
        -60
      ]
    },
    {
      "parameters": {
        "content": "## Production patterns (opt-in)\n\nThree Code nodes below are off by default. Toggle each with an n8n env var:\n\n- `IDEMPOTENCY_ENABLED=1` deduplicates form-provider webhook retries on the same `email + first-100-chars-of-message` (5-min window).\n- `RATE_LIMIT_ENABLED=1` caps each lead (keyed on email) at 60 requests / 5 min.\n- `WEBHOOK_INTEGRITY_CHECK_ENABLED=1` rejects malformed payloads (no `email`, no `message`).\n\nEach node returns pass-through when its env var is unset, so the default import boots clean. Production deployments enable all three plus Header Auth on the Form Webhook trigger.\n\nFor clustered n8n deployments, swap the in-memory `$getWorkflowStaticData` blocks for Redis (`SET NX EX 300` for idempotency, `INCR + EXPIRE` for rate limit). Single-instance n8n is fine with the default.",
        "height": 360,
        "width": 540,
        "color": 7
      },
      "id": "note-production-patterns",
      "name": "Sticky Note - Production Patterns",
      "type": "n8n-nodes-base.stickyNote",
      "typeVersion": 1,
      "position": [
        700,
        -120
      ]
    },
    {
      "parameters": {
        "httpMethod": "POST",
        "path": "lead-form",
        "responseMode": "responseNode",
        "options": {
          "rawBody": true
        }
      },
      "id": "lead-1-trigger",
      "name": "Form Webhook",
      "type": "n8n-nodes-base.webhook",
      "typeVersion": 2,
      "position": [
        240,
        320
      ]
    },
    {
      "parameters": {
        "jsCode": "// Webhook integrity check (opt-in via WEBHOOK_INTEGRITY_CHECK_ENABLED=1).\n// Header Auth on the Form Webhook node handles authentication (set the\n// Authentication option on the trigger itself). This Code node is the\n// second defense layer: reject malformed payloads that lack the fields\n// the downstream Extract Lead node expects.\n//\n// To enable: set the n8n env var WEBHOOK_INTEGRITY_CHECK_ENABLED to '1'.\n// To disable: leave the env var unset (default). The node passes through.\n\nconst enabled = $env.WEBHOOK_INTEGRITY_CHECK_ENABLED === '1';\nif (!enabled) {\n  return [{ json: $input.first().json }];\n}\n\nconst body = $input.first().json;\n\nif (!body || typeof body !== 'object') {\n  throw new Error('Webhook integrity check failed: no JSON body');\n}\nconst email = body.email ?? body.email_address;\nif (typeof email !== 'string' || !email.includes('@')) {\n  throw new Error('Webhook integrity check failed: missing or invalid email');\n}\nconst message = body.message ?? body.notes ?? body.question;\nif (typeof message !== 'string' || message.trim() === '') {\n  throw new Error('Webhook integrity check failed: no message text');\n}\n\nreturn [{ json: body }];"
      },
      "id": "support-pp-1-verify",
      "name": "Verify Webhook (opt-in)",
      "type": "n8n-nodes-base.code",
      "typeVersion": 2,
      "position": [
        460,
        320
      ]
    },
    {
      "parameters": {
        "jsCode": "// Rate limit (opt-in via RATE_LIMIT_ENABLED=1).\n// Per-lead (keyed on email) 60 requests in a 5-minute window. Tracked in\n// workflow static data (per-instance). For clustered n8n deployments or\n// higher throughput, use Nginx limit_req_zone or Cloudflare WAF instead.\n//\n// To enable: set the n8n env var RATE_LIMIT_ENABLED to '1'.\n// To disable: leave the env var unset (default). The node passes through.\n//\n// Concurrency note: $getWorkflowStaticData is not atomic. Under heavy\n// burst load the count can over-shoot the limit by a few percent. For\n// hard limits use a reverse proxy or Redis INCR + EXPIRE.\n\nconst enabled = $env.RATE_LIMIT_ENABLED === '1';\nif (!enabled) {\n  return [{ json: $input.first().json }];\n}\n\nconst item = $input.first().json;\nconst email = String(item?.email ?? item?.email_address ?? 'unknown').trim().toLowerCase();\nconst bucketKey = `lead:${email}`;\n\nconst data = $getWorkflowStaticData('global');\nconst buckets = data.rateBuckets ?? {};\nconst now = Date.now();\nconst WINDOW_MS = 5 * 60 * 1000;\nconst LIMIT = 60;\nconst MAX_BUCKETS = 5000;\n\nconst bucket = buckets[bucketKey] ?? { count: 0, windowStart: now };\nif (now - bucket.windowStart > WINDOW_MS) {\n  bucket.count = 0;\n  bucket.windowStart = now;\n}\nbucket.count++;\nbuckets[bucketKey] = bucket;\n\n// Bound the map: evict expired entries when full\nif (Object.keys(buckets).length > MAX_BUCKETS) {\n  const cutoff = now - WINDOW_MS;\n  for (const k of Object.keys(buckets)) {\n    if (buckets[k].windowStart < cutoff) delete buckets[k];\n  }\n}\ndata.rateBuckets = buckets;\n\nif (bucket.count > LIMIT) {\n  throw new Error(`Rate limit exceeded for ${bucketKey}: ${bucket.count} requests in 5 min window`);\n}\n\nreturn [{ json: item }];"
      },
      "id": "support-pp-2-ratelimit",
      "name": "Rate Limit (opt-in)",
      "type": "n8n-nodes-base.code",
      "typeVersion": 2,
      "position": [
        680,
        320
      ]
    },
    {
      "parameters": {
        "jsCode": "// Idempotency check (opt-in via IDEMPOTENCY_ENABLED=1).\n// Form providers retry webhook deliveries on 5xx. Without dedup, the\n// workflow fires twice and writes Memory twice. This node holds a\n// 5-minute in-memory window of seen leads, keyed on\n// email + first-100-chars-of-message, and short-circuits duplicates.\n//\n// To enable: set the n8n env var IDEMPOTENCY_ENABLED to '1'.\n// To disable: leave the env var unset (default). The node passes through.\n//\n// Concurrency note: $getWorkflowStaticData is not atomic and not cluster-\n// aware. Two simultaneous fires of the same payload may both pass.\n// For production scale, swap the staticData block for Redis SET NX EX 300.\n\nconst enabled = $env.IDEMPOTENCY_ENABLED === '1';\nif (!enabled) {\n  return [{ json: $input.first().json }];\n}\n\nconst item = $input.first().json;\nconst email = String(item?.email ?? item?.email_address ?? '').trim().toLowerCase();\nconst messageText = String(item?.message ?? item?.notes ?? item?.question ?? '');\nconst idempotencyKey = email ? `lead:${email}:${messageText.slice(0, 100)}` : null;\n\nif (!idempotencyKey) {\n  // No dedup key available, pass through\n  return [{ json: item }];\n}\n\nconst data = $getWorkflowStaticData('global');\nconst seen = data.seenKeys ?? {};\nconst now = Date.now();\nconst WINDOW_MS = 5 * 60 * 1000;\n\n// Purge expired entries\nfor (const k of Object.keys(seen)) {\n  if (now - seen[k] > WINDOW_MS) delete seen[k];\n}\n\nif (seen[idempotencyKey]) {\n  // Duplicate detected. Emit a sentinel item that the\n  // 'Skip If Duplicate' IF node routes to 'Respond Duplicate'\n  // (200 OK + { deduped: true }). Without that 200 the form\n  // provider would hold the HTTP connection until n8n's webhook\n  // timeout (default 30s) and mark delivery failed.\n  return [{ json: { skipped: true, reason: 'duplicate', dedupKey: idempotencyKey } }];\n}\nseen[idempotencyKey] = now;\ndata.seenKeys = seen;\n\nreturn [{ json: item }];"
      },
      "id": "support-pp-3-idempotency",
      "name": "Idempotency Check (opt-in)",
      "type": "n8n-nodes-base.code",
      "typeVersion": 2,
      "position": [
        900,
        320
      ]
    },
    {
      "parameters": {
        "jsCode": "// Extract lead identity + form fields from the incoming webhook payload.\n// Form context: lead fills out a form on your landing page with email, name,\n// company, and a free-text message about their problem. We extract email as\n// the primary key (more stable than name) and pass the rest forward as\n// observations to be stored on the lead entity.\n\nconst body = $input.first().json;\n\nconst email = (body?.email ?? body?.email_address ?? '').trim().toLowerCase();\nconst name = (body?.name ?? body?.full_name ?? '').trim();\nconst company = (body?.company ?? body?.organization ?? '').trim();\nconst phone = (body?.phone ?? body?.tel ?? '').trim();\nconst message = (body?.message ?? body?.notes ?? body?.question ?? '').trim();\nconst source = body?.source ?? body?.utm_source ?? body?.referrer ?? 'unknown';\n\nif (!email || !email.includes('@')) {\n  throw new Error('Missing or invalid email in form payload (required field: email).');\n}\n\nif (!message) {\n  throw new Error('Missing message text in form payload (required field: message).');\n}\n\nreturn [{\n  json: {\n    customerKey: email,            // reuse downstream \"customerKey\" naming for entity-lookup parity with T02\n    customerLabel: name || company || email,\n    email,\n    name,\n    company,\n    phone,\n    messageText: message,\n    source,\n    receivedAt: new Date().toISOString(),\n  },\n}];"
      },
      "id": "lead-2-extract",
      "name": "Extract Lead",
      "type": "n8n-nodes-base.code",
      "typeVersion": 2,
      "position": [
        1120,
        320
      ]
    },
    {
      "parameters": {
        "resource": "entity",
        "operation": "search",
        "query": "={{ $json.customerKey }}",
        "entityType": "customer",
        "limit": 1
      },
      "id": "lead-3-lookup",
      "name": "Memory: Lookup Lead",
      "type": "n8n-nodes-studiomeyer-memory.studioMeyerMemory",
      "typeVersion": 1,
      "position": [
        1340,
        320
      ]
    },
    {
      "parameters": {
        "conditions": {
          "options": {
            "caseSensitive": true,
            "leftValue": "",
            "typeValidation": "strict",
            "version": 2
          },
          "conditions": [
            {
              "id": "cond-known",
              "leftValue": "={{ ($json.entities ?? $json.results ?? []).length }}",
              "rightValue": 0,
              "operator": {
                "type": "number",
                "operation": "gt"
              }
            }
          ],
          "combinator": "and"
        },
        "options": {}
      },
      "id": "lead-3b-known",
      "name": "Returning Lead?",
      "type": "n8n-nodes-base.if",
      "typeVersion": 2.2,
      "position": [
        1560,
        320
      ]
    },
    {
      "parameters": {
        "resource": "entity",
        "operation": "open",
        "entityRef": "={{ $('Extract Lead').item.json.customerKey }}"
      },
      "id": "lead-3c-dossier",
      "name": "Memory: Lead Dossier",
      "type": "n8n-nodes-studiomeyer-memory.studioMeyerMemory",
      "typeVersion": 1,
      "position": [
        1800,
        200
      ]
    },
    {
      "parameters": {
        "resource": "entity",
        "operation": "create",
        "name": "={{ $('Extract Lead').item.json.customerKey }}",
        "entityType": "customer",
        "project": "support-bot",
        "observations": "=First contact via form on {{ $('Extract Lead').item.json.receivedAt }}. Message: {{ $('Extract Lead').item.json.messageText.slice(0, 200) }}"
      },
      "id": "lead-3d-create",
      "name": "Memory: Create Lead",
      "type": "n8n-nodes-studiomeyer-memory.studioMeyerMemory",
      "typeVersion": 1,
      "position": [
        1800,
        460
      ]
    },
    {
      "parameters": {
        "content": "## Why entity.open here\n\nFor returning leads we use `entity.open` (not `memory.search`) because we want the **complete dossier**: entity-type, first-seen date, every observation, every relation. That's exactly what an agent needs to feel like a colleague who reviewed the lead's file before saying hello.\n\nFor recency-weighted search across **all** memory (e.g. when the lead asks about a product topic, not their own history), use the Memory: Search operation instead.",
        "height": 240,
        "width": 380,
        "color": 7
      },
      "id": "note-entity-open",
      "name": "Sticky Note - entity.open",
      "type": "n8n-nodes-base.stickyNote",
      "typeVersion": 1,
      "position": [
        1800,
        -120
      ]
    },
    {
      "parameters": {
        "jsCode": "// Build a system + user prompt for the LLM that performs BANT+I classification.\n// BANT = Budget, Authority, Need, Timeline. The \"I\" extension is Intent\n// (buying-signal strength: 1-5).\n//\n// We ask the LLM to return STRICT JSON with five integer scores 1-5 and a\n// brief justification per dimension. Downstream Pipedrive node uses the\n// total score + bucketed-state to decide which Pipedrive Pipeline to assign.\n\nconst customerKey = $('Extract Lead').item.json.customerKey;\nconst customerLabel = $('Extract Lead').item.json.customerLabel;\nconst messageText = $('Extract Lead').item.json.messageText;\nconst company = $('Extract Lead').item.json.company;\n\nconst dossierRaw = $input.first().json;\nconst dossier = dossierRaw?.data ?? dossierRaw;\nconst observations = dossier?.observations ?? [];\nconst priorBant = observations\n  .slice(-5)\n  .map(o => `- ${o?.created_at?.slice(0, 10) ?? '?'}: ${(o?.content ?? '').slice(0, 200)}`)\n  .join('\\n');\n\nconst systemPrompt = `You are a B2B sales-qualification assistant.\nYou evaluate inbound leads on the BANT+I framework and return STRICT JSON only.\n\nBANT+I dimensions (each 1-5):\n- B Budget: 1=no budget signal, 5=explicit budget mentioned\n- A Authority: 1=junior IC, 5=decision-maker (C-level, founder, head-of)\n- N Need: 1=vague exploration, 5=specific painful problem with impact mentioned\n- T Timeline: 1=no urgency, 5=signed RFP or \"this quarter\"\n- I Intent: 1=research-mode browsing, 5=buying-signal explicit (\"looking for vendors\")\n\nReturn ONLY a JSON object of this exact shape, no markdown, no preamble:\n{\n  \"scores\": { \"B\": 1-5, \"A\": 1-5, \"N\": 1-5, \"T\": 1-5, \"I\": 1-5 },\n  \"total\": <sum of scores>,\n  \"qualifiedAs\": \"hot\" | \"warm\" | \"cold\",\n  \"summary\": \"one sentence in English with the why behind the bucketing\",\n  \"nextAction\": \"one sentence with what sales rep should do next\"\n}\n\nBucketing rule: total >= 20 = hot, 15-19 = warm, < 15 = cold.\n\nLead label: ${customerLabel}\nCompany: ${company || '(not provided)'}\nLead message: ${messageText}\n\nPrior BANT observations on this lead (most-recent last):\n${priorBant || '(no prior interactions on file, this is a first-time lead)'}`;\n\nreturn [{\n  json: {\n    systemPrompt,\n    messageText,\n    customerKey,\n    customerLabel,\n  },\n}];"
      },
      "id": "lead-4-prompt",
      "name": "Build LLM Prompt",
      "type": "n8n-nodes-base.code",
      "typeVersion": 2,
      "position": [
        2040,
        320
      ]
    },
    {
      "parameters": {
        "mode": "manual",
        "duplicateItem": false,
        "assignments": {
          "assignments": [
            {
              "id": "set-provider",
              "name": "provider",
              "value": "openai",
              "type": "string"
            }
          ]
        },
        "includeOtherFields": true,
        "options": {}
      },
      "id": "lead-5-set-provider",
      "name": "Set Provider",
      "type": "n8n-nodes-base.set",
      "typeVersion": 3.4,
      "position": [
        2260,
        320
      ]
    },
    {
      "parameters": {
        "rules": {
          "values": [
            {
              "conditions": {
                "options": {
                  "caseSensitive": true,
                  "leftValue": "",
                  "typeValidation": "strict",
                  "version": 2
                },
                "conditions": [
                  {
                    "id": "rule-openai",
                    "leftValue": "={{ $json.provider }}",
                    "rightValue": "openai",
                    "operator": {
                      "type": "string",
                      "operation": "equals"
                    }
                  }
                ],
                "combinator": "and"
              },
              "renameOutput": true,
              "outputKey": "openai"
            },
            {
              "conditions": {
                "options": {
                  "caseSensitive": true,
                  "leftValue": "",
                  "typeValidation": "strict",
                  "version": 2
                },
                "conditions": [
                  {
                    "id": "rule-anthropic",
                    "leftValue": "={{ $json.provider }}",
                    "rightValue": "anthropic",
                    "operator": {
                      "type": "string",
                      "operation": "equals"
                    }
                  }
                ],
                "combinator": "and"
              },
              "renameOutput": true,
              "outputKey": "anthropic"
            }
          ]
        },
        "options": {
          "fallbackOutput": "extra",
          "renameFallbackOutput": "fallback"
        }
      },
      "id": "lead-5b-route",
      "name": "Route by Provider",
      "type": "n8n-nodes-base.switch",
      "typeVersion": 3.2,
      "position": [
        2480,
        320
      ]
    },
    {
      "parameters": {
        "resource": "text",
        "operation": "message",
        "modelId": {
          "__rl": true,
          "value": "gpt-5-mini",
          "mode": "list",
          "cachedResultName": "gpt-5-mini"
        },
        "messages": {
          "values": [
            {
              "content": "={{ $json.systemPrompt }}",
              "role": "system"
            },
            {
              "content": "={{ $json.messageText ?? $json.transcript ?? $json.payload }}",
              "role": "user"
            }
          ]
        },
        "jsonOutput": false,
        "options": {
          "maxTokens": 400,
          "temperature": 0.5
        }
      },
      "id": "lead-6-openai",
      "name": "OpenAI Reply",
      "type": "n8n-nodes-base.openAi",
      "typeVersion": 1.7,
      "onError": "continueErrorOutput",
      "position": [
        2700,
        200
      ]
    },
    {
      "parameters": {
        "resource": "text",
        "operation": "message",
        "modelId": {
          "__rl": true,
          "value": "claude-haiku-4-5",
          "mode": "list",
          "cachedResultName": "claude-haiku-4-5"
        },
        "messages": {
          "values": [
            {
              "content": "={{ $json.systemPrompt }}",
              "role": "system"
            },
            {
              "content": "={{ $json.messageText }}",
              "role": "user"
            }
          ]
        },
        "options": {
          "maxTokens": 400,
          "temperature": 0.5
        }
      },
      "id": "lead-6-anthropic",
      "name": "Anthropic Reply",
      "type": "@n8n/n8n-nodes-langchain.anthropic",
      "typeVersion": 1,
      "onError": "continueErrorOutput",
      "position": [
        2700,
        440
      ]
    },
    {
      "parameters": {
        "content": "## Error branch (always on)\n\nThe two LLM Reply nodes have `On Error: Continue (Error Output)` enabled. The red error pin lands at **LLM Fallback Reply**, which builds a graceful default verdict and feeds two destinations:\n\n1. **Form Acknowledge** so the lead still gets a 200 response instead of silence.\n2. **Memory: Learn Error** with `category: mistake, tags: [llm-error, <provider>]` so you spot patterns in your knowledge graph.\n\nNo env var, this branch is always wired. Without it, an OpenAI rate-limit or Anthropic 5xx leaves the lead hanging.\n\nThe error syntax is `{{ $json.error.message }}`, not `$error.message` (which does not exist in n8n) and not `$json.execution.error.message` (which is for separate Error Trigger Workflows, not inline error pins).",
        "height": 360,
        "width": 460,
        "color": 7
      },
      "id": "note-error-branch",
      "name": "Sticky Note - Error Branch",
      "type": "n8n-nodes-base.stickyNote",
      "typeVersion": 1,
      "position": [
        2960,
        -100
      ]
    },
    {
      "parameters": {
        "jsCode": "// LLM Fallback Reply: fires when OpenAI/Anthropic Reply errors,\n// or when Route by Provider receives an unknown provider value.\n//\n// Two arrival paths:\n//   1. LLM error - input has $json.error\n//   2. Router fallback - input has the original prompt object including\n//      systemPrompt with the full BANT-context (prior lead history).\n//      We must NOT JSON.stringify the whole input.\n\nconst errorRaw = $input.first().json;\nconst provider = $('Set Provider').item.json.provider ?? 'unknown';\n\nconst isLlmError = !!(errorRaw?.error || errorRaw?.message);\nlet errorMessage;\nif (isLlmError) {\n  errorMessage = errorRaw?.error?.message\n    ?? errorRaw?.error?.name\n    ?? errorRaw?.message\n    ?? 'Unknown LLM error';\n} else {\n  errorMessage = `Unknown provider value: ${provider}. Set \"provider\" to \"openai\" or \"anthropic\" in the Set Provider node.`;\n}\n\nconst customerKey = $('Extract Lead').item.json.customerKey;\nconst customerLabel = $('Extract Lead').item.json.customerLabel;\nconst messageText = $('Extract Lead').item.json.messageText;\n\nreturn [{\n  json: {\n    scores: { B: 1, A: 1, N: 1, T: 1, I: 1 },\n    total: 5,\n    qualifiedAs: 'cold',\n    summary: 'Qualifier unavailable, lead defaulted to cold for human review.',\n    nextAction: 'Manual review required: BANT classifier failed.',\n    provider,\n    isFallback: true,\n    isRouterFallback: !isLlmError,\n    errorMessage: String(errorMessage),\n    customerKey,\n    customerLabel,\n    messageText,\n    replyText: 'Qualifier unavailable, lead defaulted to cold.',\n  },\n}];"
      },
      "id": "lead-7-fallback",
      "name": "LLM Fallback Reply",
      "type": "n8n-nodes-base.code",
      "typeVersion": 2,
      "position": [
        2960,
        580
      ]
    },
    {
      "parameters": {
        "jsCode": "// Parse the strict-JSON BANT+I response from either OpenAI or Anthropic\n// and lift the structured fields up so downstream nodes can pivot on them.\n//\n// OpenAI shape: $json.choices[0].message.content (string with JSON inside)\n// Anthropic shape: $json.content[0].text (string with JSON inside)\n//\n// LLMs sometimes wrap JSON in fenced code blocks despite instructions. We\n// strip ``` and ```json fences before parsing, then fall back to raw.\n\nconst raw = $input.first().json;\nlet text = '';\n\nif (raw?.choices?.[0]?.message?.content) {\n  text = raw.choices[0].message.content;\n} else if (Array.isArray(raw?.content) && raw.content[0]?.text) {\n  text = raw.content[0].text;\n} else if (raw?.message?.content) {\n  text = raw.message.content;\n} else if (raw?.text) {\n  text = raw.text;\n}\n\ntext = String(text).trim();\ntext = text.replace(/^```(?:json)?\\s*/, '').replace(/\\s*```$/, '');\n\nlet parsed;\ntry {\n  parsed = JSON.parse(text);\n} catch (e) {\n  // LLM returned non-JSON: degrade gracefully, qualify as cold for safety\n  parsed = {\n    scores: { B: 1, A: 1, N: 1, T: 1, I: 1 },\n    total: 5,\n    qualifiedAs: 'cold',\n    summary: 'LLM did not return valid JSON, lead defaulted to cold for safety. Raw output: ' + text.slice(0, 120),\n    nextAction: 'Manual review required: classifier output was unparseable.',\n  };\n}\n\nconst total = Number(parsed?.total ?? 0);\nconst qualifiedAs = parsed?.qualifiedAs ?? (total >= 20 ? 'hot' : total >= 15 ? 'warm' : 'cold');\n\nreturn [{\n  json: {\n    scores: parsed?.scores ?? {},\n    total,\n    qualifiedAs,\n    summary: parsed?.summary ?? '',\n    nextAction: parsed?.nextAction ?? '',\n    provider: $('Set Provider').item.json.provider,\n    replyText: parsed?.summary ?? 'Lead qualified.',\n  },\n}];"
      },
      "id": "lead-8-normalize",
      "name": "Normalize LLM Output",
      "type": "n8n-nodes-base.code",
      "typeVersion": 2,
      "position": [
        2960,
        320
      ]
    },
    {
      "parameters": {
        "respondWith": "json",
        "responseBody": "={{ JSON.stringify({ status: 'received', leadEmail: $('Extract Lead').item.json.email, qualifiedAs: $json.qualifiedAs ?? 'pending', message: 'Thanks, we will be in touch within one business day.' }) }}",
        "options": {}
      },
      "id": "lead-9-respond",
      "name": "Form Acknowledge",
      "type": "n8n-nodes-base.respondToWebhook",
      "typeVersion": 1.1,
      "position": [
        3200,
        320
      ],
      "credentials": {}
    },
    {
      "parameters": {
        "resource": "entity",
        "operation": "observe",
        "entityRef": "={{ $('Extract Lead').item.json.customerKey }}",
        "observations": "=Lead message on {{ $('Extract Lead').item.json.receivedAt }}: {{ $('Extract Lead').item.json.messageText.slice(0, 200) }} | Qualified as {{ $json.qualifiedAs ?? 'unknown' }} (BANT+I total {{ $json.total ?? '?' }}) | Next action: {{ ($json.nextAction ?? '').slice(0, 200) }}"
      },
      "id": "lead-9b-observe",
      "name": "Memory: Observe BANT",
      "type": "n8n-nodes-studiomeyer-memory.studioMeyerMemory",
      "typeVersion": 1,
      "position": [
        3440,
        220
      ]
    },
    {
      "parameters": {
        "resource": "memory",
        "operation": "learn",
        "content": "=Lead interaction with {{ $('Extract Lead').item.json.customerLabel }} ({{ $('Extract Lead').item.json.customerKey }}): \"{{ $('Extract Lead').item.json.messageText.slice(0, 150) }}\" \u2192 qualified as {{ $json.qualifiedAs }}: \"{{ ($json.summary ?? $json.replyText ?? '').slice(0, 150) }}\"",
        "category": "insight",
        "project": "support-bot",
        "tags": "=lead, customer-{{ $('Extract Lead').item.json.customerKey }}",
        "confidence": 0.7
      },
      "id": "lead-9c-learn",
      "name": "Memory: Learn Lead-Insight",
      "type": "n8n-nodes-studiomeyer-memory.studioMeyerMemory",
      "typeVersion": 1,
      "position": [
        3440,
        380
      ]
    },
    {
      "parameters": {
        "resource": "memory",
        "operation": "learn",
        "content": "=LLM error in lead qualifier ({{ $json.provider }}): {{ $json.errorMessage }} | Lead: {{ $json.customerLabel }} ({{ $json.customerKey }}) | Message: \"{{ ($json.messageText ?? '').slice(0, 150) }}\"",
        "category": "mistake",
        "project": "support-bot",
        "tags": "=llm-error, {{ $json.provider }}, support-bot",
        "confidence": 0.6
      },
      "id": "lead-9d-error-learn",
      "name": "Memory: Learn Error",
      "type": "n8n-nodes-studiomeyer-memory.studioMeyerMemory",
      "typeVersion": 1,
      "position": [
        3440,
        580
      ]
    },
    {
      "parameters": {
        "method": "POST",
        "url": "=https://{{ $env.PIPEDRIVE_COMPANY_DOMAIN }}.pipedrive.com/api/v1/deals?api_token={{ $env.PIPEDRIVE_API_TOKEN }}",
        "sendBody": true,
        "specifyBody": "json",
        "jsonBody": "={{ JSON.stringify({ title: ($('Extract Lead').item.json.company || $('Extract Lead').item.json.email) + ' - ' + $('Normalize LLM Output').item.json.qualifiedAs.toUpperCase(), value: 0, currency: 'EUR', stage_id: ($('Normalize LLM Output').item.json.qualifiedAs === 'hot' ? Number($env.PIPEDRIVE_STAGE_HOT || 1) : $('Normalize LLM Output').item.json.qualifiedAs === 'warm' ? Number($env.PIPEDRIVE_STAGE_WARM || 2) : Number($env.PIPEDRIVE_STAGE_COLD || 3)), person_id: null, person_name: $('Extract Lead').item.json.name, person_email: [{ value: $('Extract Lead').item.json.email, primary: true }], expected_close_date: null, label: $('Normalize LLM Output').item.json.qualifiedAs, custom_fields: { bant_total: $('Normalize LLM Output').item.json.total, bant_summary: $('Normalize LLM Output').item.json.summary, bant_next_action: $('Normalize LLM Output').item.json.nextAction } }) }}",
        "options": {
          "redirect": {
            "redirect": {}
          }
        }
      },
      "id": "lead-9e-pipedrive",
      "name": "Pipedrive: Create Deal",
      "type": "n8n-nodes-base.httpRequest",
      "typeVersion": 4.2,
      "position": [
        3140,
        100
      ],
      "onError": "continueRegularOutput"
    },
    {
      "parameters": {
        "conditions": {
          "options": {
            "caseSensitive": true,
            "leftValue": "",
            "typeValidation": "strict",
            "version": 2
          },
          "conditions": [
            {
              "id": "cond-06-lead-qualifier-pipedrive-skipped",
              "leftValue": "={{ $json.skipped }}",
              "rightValue": true,
              "operator": {
                "type": "boolean",
                "operation": "true",
                "singleValue": true
              }
            }
          ],
          "combinator": "and"
        },
        "options": {}
      },
      "id": "06-lea-if-skip-dup",
      "name": "Skip If Duplicate",
      "type": "n8n-nodes-base.if",
      "typeVersion": 2,
      "position": [
        1120,
        320
      ]
    },
    {
      "parameters": {
        "respondWith": "json",
        "responseBody": "={{ JSON.stringify({ ok: true, deduped: true, reason: \"duplicate\" }) }}",
        "options": {
          "responseCode": 200,
          "responseHeaders": {
            "entries": [
              {
                "name": "X-Dedup",
                "value": "1"
              }
            ]
          }
        }
      },
      "id": "06-lea-respond-duplicate",
      "name": "Respond Duplicate",
      "type": "n8n-nodes-base.respondToWebhook",
      "typeVersion": 1.1,
      "position": [
        1340,
        140
      ]
    }
  ],
  "connections": {
    "Form Webhook": {
      "main": [
        [
          {
            "node": "Verify Webhook (opt-in)",
            "type": "main",
            "index": 0
          }
        ]
      ]
    },
    "Verify Webhook (opt-in)": {
      "main": [
        [
          {
            "node": "Rate Limit (opt-in)",
            "type": "main",
            "index": 0
          }
        ]
      ]
    },
    "Rate Limit (opt-in)": {
      "main": [
        [
          {
            "node": "Idempotency Check (opt-in)",
            "type": "main",
            "index": 0
          }
        ]
      ]
    },
    "Idempotency Check (opt-in)": {
      "main": [
        [
          {
            "node": "Skip If Duplicate",
            "type": "main",
            "index": 0
          }
        ]
      ]
    },
    "Extract Lead": {
      "main": [
        [
          {
            "node": "Memory: Lookup Lead",
            "type": "main",
            "index": 0
          }
        ]
      ]
    },
    "Memory: Lookup Lead": {
      "main": [
        [
          {
            "node": "Returning Lead?",
            "type": "main",
            "index": 0
          }
        ]
      ]
    },
    "Returning Lead?": {
      "main": [
        [
          {
            "node": "Memory: Lead Dossier",
            "type": "main",
            "index": 0
          }
        ],
        [
          {
            "node": "Memory: Create Lead",
            "type": "main",
            "index": 0
          }
        ]
      ]
    },
    "Memory: Lead Dossier": {
      "main": [
        [
          {
            "node": "Build LLM Prompt",
            "type": "main",
            "index": 0
          }
        ]
      ]
    },
    "Memory: Create Lead": {
      "main": [
        [
          {
            "node": "Build LLM Prompt",
            "type": "main",
            "index": 0
          }
        ]
      ]
    },
    "Build LLM Prompt": {
      "main": [
        [
          {
            "node": "Set Provider",
            "type": "main",
            "index": 0
          }
        ]
      ]
    },
    "Set Provider": {
      "main": [
        [
          {
            "node": "Route by Provider",
            "type": "main",
            "index": 0
          }
        ]
      ]
    },
    "Route by Provider": {
      "main": [
        [
          {
            "node": "OpenAI Reply",
            "type": "main",
            "index": 0
          }
        ],
        [
          {
            "node": "Anthropic Reply",
            "type": "main",
            "index": 0
          }
        ],
        [
          {
            "node": "LLM Fallback Reply",
            "type": "main",
            "index": 0
          }
        ]
      ]
    },
    "OpenAI Reply": {
      "main": [
        [
          {
            "node": "Normalize LLM Output",
            "type": "main",
            "index": 0
          }
        ],
        [
          {
            "node": "LLM Fallback Reply",
            "type": "main",
            "index": 0
          }
        ]
      ]
    },
    "Anthropic Reply": {
      "main": [
        [
          {
            "node": "Normalize LLM Output",
            "type": "main",
            "index": 0
          }
        ],
        [
          {
            "node": "LLM Fallback Reply",
            "type": "main",
            "index": 0
          }
        ]
      ]
    },
    "Normalize LLM Output": {
      "main": [
        [
          {
            "node": "Form Acknowledge",
            "type": "main",
            "index": 0
          },
          {
            "node": "Memory: Observe BANT",
            "type": "main",
            "index": 0
          },
          {
            "node": "Memory: Learn Lead-Insight",
            "type": "main",
            "index": 0
          }
        ]
      ]
    },
    "LLM Fallback Reply": {
      "main": [
        [
          {
            "node": "Form Acknowledge",
            "type": "main",
            "index": 0
          },
          {
            "node": "Memory: Learn Error",
            "type": "main",
            "index": 0
          }
        ]
      ]
    },
    "Memory: Observe BANT": {
      "main": [
        [
          {
            "node": "Pipedrive: Create Deal",
            "type": "main",
            "index": 0
          }
        ]
      ]
    },
    "Skip If Duplicate": {
      "main": [
        [
          {
            "node": "Respond Duplicate",
            "type": "main",
            "index": 0
          }
        ],
        [
          {
            "node": "Extract Lead",
            "type": "main",
            "index": 0
          }
        ]
      ]
    }
  },
  "settings": {
    "executionOrder": "v1"
  }
}
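The `stage_id` ternary buried in the Pipedrive: Create Deal node's `jsonBody` routes each lead to a pipeline stage by its qualification label, falling back to stage ids 1/2/3 when the `PIPEDRIVE_STAGE_*` env vars are unset. A minimal Python sketch of that routing logic (the function name and `env` parameter are illustrative, not part of the workflow):

```python
import os

def pipedrive_stage_id(qualified_as: str, env=os.environ) -> int:
    """Mirror the stage_id expression in the Create Deal node:
    'hot'  -> $env.PIPEDRIVE_STAGE_HOT  (default 1)
    'warm' -> $env.PIPEDRIVE_STAGE_WARM (default 2)
    else   -> $env.PIPEDRIVE_STAGE_COLD (default 3)
    """
    if qualified_as == "hot":
        return int(env.get("PIPEDRIVE_STAGE_HOT", 1))
    if qualified_as == "warm":
        return int(env.get("PIPEDRIVE_STAGE_WARM", 2))
    return int(env.get("PIPEDRIVE_STAGE_COLD", 3))
```

Note that any label other than `hot` or `warm` lands in the cold stage, so make sure the three `PIPEDRIVE_STAGE_*` env vars point at real stage ids in your pipeline before activating.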

About this workflow

Lead-Qualifier with BANT+I and Pipedrive (Multi-Provider). Uses the stickyNote, n8n-nodes-studiomeyer-memory, openAi, and anthropic nodes. Webhook trigger; 28 nodes.
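The Skip If Duplicate branch in the connections above sends items flagged by the Idempotency Check to Respond Duplicate (HTTP 200 with an `X-Dedup: 1` header), while everything else continues to Extract Lead. The branch decision can be sketched in Python as follows (the function is a hypothetical illustration of the IF node's boolean condition, not workflow code):

```python
def dedup_response(item: dict):
    """Mirror the Skip If Duplicate -> Respond Duplicate branch:
    if the idempotency check set skipped=true, answer the webhook
    with the deduped JSON body; otherwise return None, meaning the
    item proceeds to Extract Lead on the second output."""
    if item.get("skipped") is True:
        return {"ok": True, "deduped": True, "reason": "duplicate"}
    return None
```

Because the IF node uses strict type validation, only a boolean `true` on `skipped` takes the duplicate path; a string `"true"` would fall through to normal processing.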

Source: https://github.com/studiomeyer-io/n8n-templates/blob/main/templates/06-lead-qualifier-pipedrive/workflow.json (original creator credit).