
Supabase Data Analyst Agent

A chat-triggered n8n workflow (17 nodes) that turns natural-language questions into read-only SQL queries, charts, and exports over a Supabase Postgres database. Built around the stickyNote, chatTrigger, agent, and lmChatOpenRouter node types.

Category: AI & RAG · Trigger: Chat trigger · Complexity: ★★★★☆ · AI-powered · 17 nodes

Node types: Chat Trigger, Agent, LM Chat OpenRouter, Memory Postgres Chat, Postgres Tool, Tool Workflow, Tool Code, Google Sheets Tool

The workflow JSON

Copy or download the full n8n JSON below, paste it into a new n8n workflow, add your credentials, and activate. Full import guide →

{
  "name": "Supabase Data Analyst Agent",
  "settings": {
    "executionOrder": "v1"
  },
  "nodes": [
    {
      "parameters": {
        "content": "## Supabase Data Analyst Agent\n**What it does:** Turns natural-language questions into SQL queries + charts over a Supabase Postgres database.\n\n**Architecture:** Chat Trigger \u2192 Tools Agent (OpenRouter / Claude Sonnet) with persistent Postgres chat memory + 7 tools for schema discovery, querying, charting, and export.\n\n### Required credentials\n1. **Postgres (Supabase)** \u2014 attach to `list_schema`, `get_table_definition`, `Postgres Chat Memory`, and the sub-workflows.\n2. **OpenRouter API** \u2014 attach to the `OpenRouter Chat Model` node.\n3. **Google Sheets OAuth2** \u2014 attach to `export_to_sheets` (optional, only for export feature).\n\n### Sub-workflows required\n- `[Sub] Run SQL with Row Cap` \u2192 used by `run_sql` tool\n- `[Sub] Generate Chart from Data` \u2192 used by `create_chart` tool\n- `[Sub] Search Schema Docs` \u2192 used by `search_schema_docs` tool\n\n### Before activating\nRun the companion `SETUP.sql` in Supabase to create the readonly role, chat-memory table, and optional schema-docs table.\n\n### Demo prompts\n- *What tables do you have?*\n- *Using demo_orders, show revenue by product category as a bar chart*\n- *Monthly revenue trend \u2014 line chart*\n- *Top 10 customers by lifetime value*",
        "height": 540,
        "width": 680,
        "color": 6
      },
      "type": "n8n-nodes-base.stickyNote",
      "typeVersion": 1,
      "position": [
        -620,
        -580
      ],
      "id": "sticky-main-header",
      "name": "README"
    },
    {
      "parameters": {
        "content": "### 1. Chat Trigger\nUser-facing entry point. `mode: hostedChat` exposes a public chat URL you can share or embed. The `sessionId` it generates is the key that Postgres Chat Memory uses to isolate conversations.\n\nFor private/auth'd use: set `public: false` and embed via the `@n8n/chat` widget with a JWT in `metadata`.",
        "height": 260,
        "width": 280,
        "color": 4
      },
      "type": "n8n-nodes-base.stickyNote",
      "typeVersion": 1,
      "position": [
        -620,
        -20
      ],
      "id": "sticky-trigger",
      "name": "Trigger note"
    },
    {
      "parameters": {
        "content": "### 2. The Agent\nLangChain **Tools Agent** loop:\n1. Reads the user message + memory.\n2. Decides which tools to call (maxIterations = 12).\n3. Composes the final markdown response.\n\nThe system message (see node settings) enforces: read-only SQL, always-return-charts policy, strict markdown-table output, and the tool-call order.",
        "height": 280,
        "width": 300,
        "color": 3
      },
      "type": "n8n-nodes-base.stickyNote",
      "typeVersion": 1,
      "position": [
        -320,
        -40
      ],
      "id": "sticky-agent",
      "name": "Agent note"
    },
    {
      "parameters": {
        "content": "### 3. Brain (LLM + Memory)\n**OpenRouter Chat Model:** Claude Sonnet 4.6 by default. Swap to any OpenRouter-supported model via the `model` field.\n\n**Postgres Chat Memory:** stores conversation in `public.n8n_chat_histories` keyed by `sessionId`. Last 20 messages are loaded per turn. Needs a WRITABLE Postgres credential (not the readonly role).",
        "height": 260,
        "width": 320,
        "color": 3
      },
      "type": "n8n-nodes-base.stickyNote",
      "typeVersion": 1,
      "position": [
        -480,
        500
      ],
      "id": "sticky-brain",
      "name": "Brain note"
    },
    {
      "parameters": {
        "content": "### 4. Schema discovery tools\n**list_schema** \u2014 lists every table + column in the DB (excludes Supabase internal schemas). Call once per conversation.\n\n**get_table_definition** \u2014 columns/types/nullability for ONE table. Call before writing SQL against a table you haven't inspected.\n\n**search_schema_docs** \u2014 trigram-fuzzy search over a human-curated `public.schema_docs` table. Use when the user mentions a business term and you need to map it to tables/columns.",
        "height": 320,
        "width": 420,
        "color": 5
      },
      "type": "n8n-nodes-base.stickyNote",
      "typeVersion": 1,
      "position": [
        -160,
        500
      ],
      "id": "sticky-schema-tools",
      "name": "Schema tools note"
    },
    {
      "parameters": {
        "content": "### 5. Action tools\n**run_sql** \u2014 calls the `[Sub] Run SQL with Row Cap` sub-workflow. Returns up to 200 rows + a `truncated` flag.\n\n**create_chart** \u2014 calls the `[Sub] Generate Chart from Data` sub-workflow. Takes simple primitives (type, title, labels, series) and returns a QuickChart PNG URL.\n\n**think** \u2014 scratchpad tool. Used after errors to reason before retrying.\n\n**export_to_sheets** \u2014 appends rows to a Google Sheets tab on explicit user request.",
        "height": 320,
        "width": 460,
        "color": 5
      },
      "type": "n8n-nodes-base.stickyNote",
      "typeVersion": 1,
      "position": [
        280,
        500
      ],
      "id": "sticky-action-tools",
      "name": "Action tools note"
    },
    {
      "parameters": {
        "public": true,
        "mode": "hostedChat",
        "initialMessages": "Hi! I'm your Supabase data analyst.\nAsk me anything about your data \u2014 I can run SQL queries, build charts, and export results.",
        "options": {
          "title": "Supabase Data Analyst",
          "subtitle": "Ask questions, get answers and charts from your Supabase database.",
          "responseMode": "lastNode"
        }
      },
      "type": "@n8n/n8n-nodes-langchain.chatTrigger",
      "typeVersion": 1.1,
      "position": [
        -600,
        0
      ],
      "id": "chat-trigger-001",
      "name": "When chat message received"
    },
    {
      "parameters": {
        "promptType": "auto",
        "options": {
          "maxIterations": 12,
        "systemMessage": "You are a senior data analyst working over a Supabase Postgres database. Your job is to turn data into clear, visual, decision-ready insights \u2014 not just rows of numbers.\n\n## Tools and when to call them\n- `search_schema_docs(term)` \u2014 search human-curated descriptions. Use ONLY when the user mentions a vague business concept whose table/column is non-obvious (e.g. 'churn', 'MRR', 'active users'). Do NOT call it when the user names a table directly or asks about obvious fields (revenue, orders, customers by country, etc.). Skip it for most questions.\n- `list_schema()` \u2014 lists every table. Call ONLY when you don't know what tables exist yet AND the user hasn't named one. If the user says 'demo_orders', 'orders', or any specific table name, SKIP this and go straight to `get_table_definition`.\n- `get_table_definition(table_name)` \u2014 columns/types for one table. Call this BEFORE any `run_sql` on a table you haven't inspected yet this conversation. If you already inspected it earlier in this session, don't re-call it \u2014 trust your memory.\n- `run_sql(query)` \u2014 single read-only SELECT/WITH, returns up to 200 rows + `truncated` flag.\n- `think(thought)` \u2014 scratchpad for planning or post-error reasoning.\n- `create_chart(chart_config)` \u2014 render a Chart.js v4 config. Use constantly (see chart policy below).\n- `export_to_sheets(spreadsheetId, sheetName, rows)` \u2014 append to Google Sheets on explicit request.\n\n## Efficiency rules\n- Minimum viable tool calls. For most questions the ideal path is: `get_table_definition` \u2192 `run_sql` \u2192 `create_chart` \u2192 answer. Three tool calls total.\n- Never call `list_schema` and `get_table_definition` back-to-back on a table the user already named.\n- Never re-introspect a table you already introspected in this conversation.\n\n## Hard rules\n- Read-only. Refuse INSERT/UPDATE/DELETE/TRUNCATE/DROP/ALTER/CREATE/GRANT.\n- Always qualify tables with their schema (e.g. `public.orders`) and quote case-sensitive identifiers.\n- Always include a `LIMIT` (\u2264500) on exploratory queries unless aggregating.\n- If `run_sql` errors: call `think` \u2192 diagnose (wrong column, missing cast, case-sensitive name) \u2192 retry ONCE. After 2 failures, ask the user.\n- If `run_sql` returns `truncated: true`, tell the user and suggest a narrower filter.\n- Never invent table/column names.\n\n## Chart policy \u2014 BE AGGRESSIVE\nCall `create_chart` by DEFAULT for almost every query. You should be generating a chart in 80%+ of responses. The only exceptions where you may skip the chart:\n- The answer is a single scalar (e.g. 'How many customers do we have?' \u2192 just a number).\n- The user explicitly said 'no chart' or 'table only'.\n- The result has 1 row and 1 column.\n\nChart type selection:\n- Time series (anything grouped by day/week/month/year) \u2192 `line`\n- Category comparison (by country, by segment, by category) \u2192 `bar` (horizontal if >8 categories)\n- Share of total (\u22646 slices) \u2192 `doughnut` or `pie`\n- Two numeric dimensions \u2192 `scatter`\n- Multi-series (e.g. revenue by channel by month) \u2192 grouped or stacked `bar`\n\n### Calling create_chart (SIMPLE \u2014 don't build Chart.js manually)\nPass just these fields. The tool assembles the Chart.js config for you.\n- `chart_type`: one of `bar`, `line`, `pie`, `doughnut`, `horizontalBar`\n- `title`: e.g. 'Revenue by Product Category'\n- `labels`: array of strings, aligned with series values, e.g. `['Jan','Feb','Mar']`\n- `series`: array of `{name, values}`, one per metric. Example for single series: `[{\"name\":\"Revenue\",\"values\":[12000,15000,18000]}]`. Example for two series: `[{\"name\":\"Orders\",\"values\":[26,86,67]},{\"name\":\"Revenue\",\"values\":[12105,36418,23797]}]`\n- `x_label`, `y_label`: optional axis titles\n\n### Rule for mixed-magnitude series\nIf you want to plot two metrics with very different scales (e.g. orders 26-150 alongside revenue $12K-$60K), DO NOT put them in the same chart \u2014 the small one flatlines. Instead call `create_chart` TWICE (once per metric) or pick the single most relevant metric.\n\n## Output formatting \u2014 STRICT\nFollow this exact structure for every response:\n\n1. **Headline insight** \u2014 ONE bold sentence with the key takeaway and the most important number(s). Example: `**Revenue tripled from $12K in Apr 2025 to $46K in Apr 2026, driven by a spike in March.**`\n\n2. **Chart** \u2014 embed the chart URL from `create_chart` as a markdown image on its own line:\n   `![chart](https://quickchart.io/chart/render/...)`\n\n3. **Data table** \u2014 present the rows using PROPER MARKDOWN PIPE TABLES. Never use space-aligned ASCII. Example:\n   ```\n   | Month    | Orders | Revenue  |\n   | -------- | -----: | -------: |\n   | 2025-04  |     26 | $12,105  |\n   | 2025-05  |     86 | $36,418  |\n   ```\n   - Right-align numeric columns with `---:`\n   - Format money as `$1,234` (thousands separators, no decimals unless <$10)\n   - Format percents as `12.3%`\n   - Format dates as `2025-04` (months), `2025-04-15` (days)\n   - Never dump more than 15 rows \u2014 summarize or aggregate instead.\n\n4. **SQL** \u2014 fenced ```sql block with the exact query used. One query only, no comments.\n\n5. **Key observations** \u2014 2-4 bullets of NON-OBVIOUS insights. Examples: trend inflections, anomalies, category concentration, week-over-week deltas, notable outliers. Never restate the headline. Never pad with filler like 'the data shows'.\n\nSeparate each section with a blank line. Keep the whole response tight \u2014 no introductions, no 'Here is your analysis', no closing pleasantries."
        }
      },
      "type": "@n8n/n8n-nodes-langchain.agent",
      "typeVersion": 1.9,
      "position": [
        -300,
        0
      ],
      "id": "agent-001",
      "name": "Data Analyst Agent"
    },
    {
      "parameters": {
        "model": "anthropic/claude-sonnet-4.6",
        "options": {
          "temperature": 0.2
        }
      },
      "type": "@n8n/n8n-nodes-langchain.lmChatOpenRouter",
      "typeVersion": 1,
      "position": [
        -460,
        240
      ],
      "id": "openrouter-001",
      "name": "OpenRouter Chat Model"
    },
    {
      "parameters": {
        "sessionIdType": "customKey",
        "sessionKey": "={{ $('When chat message received').item.json.sessionId }}",
        "contextWindowLength": 20,
        "tableName": "n8n_chat_histories"
      },
      "type": "@n8n/n8n-nodes-langchain.memoryPostgresChat",
      "typeVersion": 1.3,
      "position": [
        -300,
        240
      ],
      "id": "memory-001",
      "name": "Postgres Chat Memory"
    },
    {
      "parameters": {
        "descriptionType": "manual",
        "toolDescription": "Lists every table and column in the user's Supabase database (excluding internal Supabase/Postgres schemas). Call this once per conversation to discover what data is available. Takes no input.",
        "operation": "executeQuery",
        "query": "SELECT table_schema, table_name, column_name, data_type\nFROM information_schema.columns\nWHERE table_schema NOT IN ('pg_catalog','information_schema','auth','storage','realtime','extensions','graphql','graphql_public','net','vault','pgsodium','pgsodium_masks','supabase_functions','_analytics','_realtime')\nORDER BY table_schema, table_name, ordinal_position;",
        "options": {}
      },
      "type": "n8n-nodes-base.postgresTool",
      "typeVersion": 2.6,
      "position": [
        -140,
        240
      ],
      "id": "tool-list-schema",
      "name": "list_schema"
    },
    {
      "parameters": {
        "descriptionType": "manual",
        "toolDescription": "Returns columns, data types, nullability, and length for ONE table. Call this BEFORE writing a SQL query for any table you have not yet inspected, so you don't hallucinate column names. Input: `table_name` (string, unqualified, e.g. 'orders').",
        "operation": "executeQuery",
        "query": "SELECT table_schema, table_name, column_name, data_type, is_nullable, character_maximum_length, column_default\nFROM information_schema.columns\nWHERE table_name = $1\n  AND table_schema NOT IN ('pg_catalog','information_schema','auth','storage','realtime','extensions','graphql','graphql_public','net','vault','pgsodium','pgsodium_masks','supabase_functions','_analytics','_realtime')\nORDER BY table_schema, ordinal_position;",
        "options": {
          "queryReplacement": "={{ $fromAI('table_name', 'Unqualified table name to inspect, e.g. orders', 'string') }}"
        }
      },
      "type": "n8n-nodes-base.postgresTool",
      "typeVersion": 2.6,
      "position": [
        20,
        240
      ],
      "id": "tool-get-table-def",
      "name": "get_table_definition"
    },
    {
      "parameters": {
        "name": "run_sql",
        "description": "Runs a single read-only SQL SELECT (or WITH...SELECT) against the Supabase Postgres database and returns up to 200 rows with a `truncated` flag and `total_rows` count. Input: `query` \u2014 the SQL string. Always qualify tables with their schema (e.g. public.orders) and add LIMIT to exploratory queries.",
        "workflowId": {
          "__rl": true,
          "value": "F119zKphZYCqFS4z",
          "mode": "id"
        },
        "workflowInputs": {
          "mappingMode": "defineBelow",
          "value": {
            "query": "={{ $fromAI('query', 'A single read-only SQL SELECT or WITH statement.', 'string') }}"
          },
          "matchingColumns": [],
          "schema": [
            {
              "id": "query",
              "displayName": "query",
              "required": true,
              "type": "string",
              "display": true,
              "canBeUsedToMatch": true,
              "defaultMatch": false
            }
          ],
          "attemptToConvertTypes": false,
          "convertFieldsToString": false
        }
      },
      "type": "@n8n/n8n-nodes-langchain.toolWorkflow",
      "typeVersion": 2.1,
      "position": [
        180,
        240
      ],
      "id": "tool-run-sql",
      "name": "run_sql"
    },
    {
      "parameters": {
        "name": "create_chart",
        "description": "Renders a chart and returns a public PNG URL. Pass these simple fields \u2014 do NOT try to build a full Chart.js config. Inputs: chart_type (bar|line|pie|doughnut|horizontalBar), title (string), labels (array of strings for the x-axis or slice names), series (array of {name, values}) \u2014 one object per data series, values aligned with labels, x_label (optional string for x-axis title), y_label (optional string for y-axis title).",
        "workflowId": {
          "__rl": true,
          "value": "HmxF5hPr6ELWkY7e",
          "mode": "id"
        },
        "workflowInputs": {
          "mappingMode": "defineBelow",
          "value": {
            "chart_type": "={{ $fromAI('chart_type', 'One of: bar, line, pie, doughnut, horizontalBar', 'string') }}",
            "title": "={{ $fromAI('title', 'Chart title, e.g. \"Revenue by Product Category\"', 'string') }}",
            "labels": "={{ $fromAI('labels', 'Array of string labels for x-axis or slices, e.g. [\"Jan\",\"Feb\",\"Mar\"]', 'json') }}",
            "series": "={{ $fromAI('series', 'Array of series objects, each {\"name\":\"...\",\"values\":[...]}. Example: [{\"name\":\"Revenue\",\"values\":[100,200,150]}]', 'json') }}",
            "x_label": "={{ $fromAI('x_label', 'X-axis title, optional', 'string') }}",
            "y_label": "={{ $fromAI('y_label', 'Y-axis title, optional', 'string') }}"
          },
          "matchingColumns": [],
          "schema": [
            {
              "id": "chart_type",
              "displayName": "chart_type",
              "required": true,
              "type": "string",
              "display": true,
              "canBeUsedToMatch": true,
              "defaultMatch": false
            },
            {
              "id": "title",
              "displayName": "title",
              "required": true,
              "type": "string",
              "display": true,
              "canBeUsedToMatch": true,
              "defaultMatch": false
            },
            {
              "id": "labels",
              "displayName": "labels",
              "required": true,
              "type": "array",
              "display": true,
              "canBeUsedToMatch": true,
              "defaultMatch": false
            },
            {
              "id": "series",
              "displayName": "series",
              "required": true,
              "type": "array",
              "display": true,
              "canBeUsedToMatch": true,
              "defaultMatch": false
            },
            {
              "id": "x_label",
              "displayName": "x_label",
              "required": false,
              "type": "string",
              "display": true,
              "canBeUsedToMatch": true,
              "defaultMatch": false
            },
            {
              "id": "y_label",
              "displayName": "y_label",
              "required": false,
              "type": "string",
              "display": true,
              "canBeUsedToMatch": true,
              "defaultMatch": false
            }
          ],
          "attemptToConvertTypes": false,
          "convertFieldsToString": false
        }
      },
      "type": "@n8n/n8n-nodes-langchain.toolWorkflow",
      "typeVersion": 2.1,
      "position": [
        340,
        240
      ],
      "id": "tool-create-chart",
      "name": "create_chart"
    },
    {
      "parameters": {
        "name": "search_schema_docs",
        "description": "Fuzzy-search the human-curated `public.schema_docs` table for tables/columns whose business meaning matches a term (e.g. 'churn', 'revenue', 'pharmaceutical form'). Input: `term` (string). Use this BEFORE list_schema when the user asks about a business concept. Returns up to 10 doc rows.",
        "workflowId": {
          "__rl": true,
          "value": "V55XKSUOJoFkkQN2",
          "mode": "id"
        },
        "workflowInputs": {
          "mappingMode": "defineBelow",
          "value": {
            "term": "={{ $fromAI('term', 'Search term \u2014 a business concept, table guess, or column guess.', 'string') }}"
          },
          "matchingColumns": [],
          "schema": [
            {
              "id": "term",
              "displayName": "term",
              "required": true,
              "type": "string",
              "display": true,
              "canBeUsedToMatch": true,
              "defaultMatch": false
            }
          ],
          "attemptToConvertTypes": false,
          "convertFieldsToString": false
        }
      },
      "type": "@n8n/n8n-nodes-langchain.toolWorkflow",
      "typeVersion": 2.1,
      "position": [
        500,
        240
      ],
      "id": "tool-schema-docs",
      "name": "search_schema_docs"
    },
    {
      "parameters": {
        "name": "think",
        "description": "Internal scratchpad. Write down a brief plan, decompose a complex question, or reason about a tool error before retrying. Input: `thought` (string). Returns the thought back. Use this generously before complex SQL and ALWAYS after a tool error.",
        "language": "javaScript",
        "jsCode": "const t = typeof query === 'string' ? query : (query && query.thought) ? query.thought : JSON.stringify(query);\nreturn `Thought recorded at ${new Date().toISOString()}: ${t}`;",
        "specifyInputSchema": true,
        "schemaType": "manual",
        "inputSchema": "{\n  \"type\": \"object\",\n  \"properties\": {\n    \"thought\": { \"type\": \"string\", \"description\": \"What you're thinking / planning / diagnosing.\" }\n  },\n  \"required\": [\"thought\"]\n}"
      },
      "type": "@n8n/n8n-nodes-langchain.toolCode",
      "typeVersion": 1.3,
      "position": [
        660,
        240
      ],
      "id": "tool-think",
      "name": "think"
    },
    {
      "parameters": {
        "descriptionType": "manual",
        "toolDescription": "Append rows to an existing Google Sheet tab. Use only when the user explicitly asks to export. Inputs: spreadsheet ID, sheet/tab name, and the rows to append (array of objects with column-name keys).",
        "operation": "append",
        "documentId": {
          "__rl": true,
          "value": "={{ $fromAI('spreadsheetId', 'The Google Sheets file ID (from the URL).', 'string') }}",
          "mode": "id"
        },
        "sheetName": {
          "__rl": true,
          "value": "={{ $fromAI('sheetName', 'Tab name in the spreadsheet to append to.', 'string') }}",
          "mode": "name"
        },
        "columns": {
          "mappingMode": "autoMapInputData",
          "value": {},
          "matchingColumns": []
        },
        "options": {}
      },
      "type": "n8n-nodes-base.googleSheetsTool",
      "typeVersion": 4.5,
      "position": [
        820,
        240
      ],
      "id": "tool-sheets",
      "name": "export_to_sheets"
    }
  ],
  "connections": {
    "When chat message received": {
      "main": [
        [
          {
            "node": "Data Analyst Agent",
            "type": "main",
            "index": 0
          }
        ]
      ]
    },
    "OpenRouter Chat Model": {
      "ai_languageModel": [
        [
          {
            "node": "Data Analyst Agent",
            "type": "ai_languageModel",
            "index": 0
          }
        ]
      ]
    },
    "Postgres Chat Memory": {
      "ai_memory": [
        [
          {
            "node": "Data Analyst Agent",
            "type": "ai_memory",
            "index": 0
          }
        ]
      ]
    },
    "list_schema": {
      "ai_tool": [
        [
          {
            "node": "Data Analyst Agent",
            "type": "ai_tool",
            "index": 0
          }
        ]
      ]
    },
    "get_table_definition": {
      "ai_tool": [
        [
          {
            "node": "Data Analyst Agent",
            "type": "ai_tool",
            "index": 0
          }
        ]
      ]
    },
    "run_sql": {
      "ai_tool": [
        [
          {
            "node": "Data Analyst Agent",
            "type": "ai_tool",
            "index": 0
          }
        ]
      ]
    },
    "create_chart": {
      "ai_tool": [
        [
          {
            "node": "Data Analyst Agent",
            "type": "ai_tool",
            "index": 0
          }
        ]
      ]
    },
    "search_schema_docs": {
      "ai_tool": [
        [
          {
            "node": "Data Analyst Agent",
            "type": "ai_tool",
            "index": 0
          }
        ]
      ]
    },
    "think": {
      "ai_tool": [
        [
          {
            "node": "Data Analyst Agent",
            "type": "ai_tool",
            "index": 0
          }
        ]
      ]
    },
    "export_to_sheets": {
      "ai_tool": [
        [
          {
            "node": "Data Analyst Agent",
            "type": "ai_tool",
            "index": 0
          }
        ]
      ]
    }
  }
}
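The README node in the JSON above says to run a companion `SETUP.sql` before activating, but that file is not part of this workflow export. Below is a hedged sketch of what it presumably creates, based only on the README's description (the role name, password, and column shapes are illustrative assumptions; n8n's Postgres Chat Memory node can also create its history table automatically):

```sql
-- Illustrative sketch only; the real SETUP.sql ships with the source repo.

-- Read-only role for the query tools (attach a credential using this role
-- to list_schema, get_table_definition, and the run_sql sub-workflow).
create role analyst_readonly with login noinherit password 'change-me';
grant usage on schema public to analyst_readonly;
grant select on all tables in schema public to analyst_readonly;
alter default privileges in schema public
  grant select on tables to analyst_readonly;

-- Chat-memory table used by the Postgres Chat Memory node (needs a WRITABLE credential).
create table if not exists public.n8n_chat_histories (
  id         serial primary key,
  session_id text  not null,
  message    jsonb not null
);

-- Optional docs table + trigram index backing the search_schema_docs tool.
create extension if not exists pg_trgm;
create table if not exists public.schema_docs (
  table_name  text,
  column_name text,
  description text not null
);
create index if not exists schema_docs_description_trgm
  on public.schema_docs using gin (description gin_trgm_ops);
```

Keeping the read-only role separate from the memory credential is what enforces the agent's "read-only" rule at the database level rather than relying on the prompt alone.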

About this workflow

A LangChain Tools Agent (Claude Sonnet via OpenRouter) behind a hosted chat trigger, with Postgres-backed conversation memory and seven tools covering schema discovery, read-only SQL, charting, a think scratchpad, and Google Sheets export.
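The `run_sql` tool delegates to a `[Sub] Run SQL with Row Cap` sub-workflow that is not included in this export. A minimal JavaScript sketch (the kind of logic an n8n Code node in that sub-workflow might run) of the behavior its tool description implies; the function names and exact checks are assumptions:

```javascript
// Hypothetical sketch of the row-cap sub-workflow's core logic.
// The real sub-workflow is not shown on this page.
const ROW_CAP = 200; // per the run_sql tool description

function guardReadOnly(sql) {
  // Accept a single SELECT or WITH...SELECT statement only.
  const stmt = sql.trim().replace(/;\s*$/, "");
  if (!/^(select|with)\b/i.test(stmt)) {
    throw new Error("run_sql is read-only: only SELECT/WITH is allowed");
  }
  if (stmt.includes(";")) {
    throw new Error("run_sql accepts a single statement only");
  }
  return stmt;
}

function capRows(rows) {
  // Matches the documented output shape: up to 200 rows, total count, truncated flag.
  return {
    rows: rows.slice(0, ROW_CAP),
    total_rows: rows.length,
    truncated: rows.length > ROW_CAP,
  };
}
```

Note that a keyword check like this is not a complete defense on its own; the read-only Postgres role from `SETUP.sql` is what actually prevents writes.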

Source: https://github.com/MinaSaad1/n8n-data-analyst-agent/blob/main/workflows/01-main-agent.json (credit to the original creator).
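Similarly, `create_chart` hands its simple primitives (chart_type, title, labels, series) to the `[Sub] Generate Chart from Data` sub-workflow, which assembles a Chart.js v4 config and returns a QuickChart PNG URL. That sub-workflow is not included either; a hedged JavaScript sketch of the assembly (field handling is an assumption, and while the README's example URLs use QuickChart's hosted render endpoint, the simpler GET API shown here is one way to build such a URL):

```javascript
// Hypothetical assembly of a Chart.js v4 config from the create_chart inputs.
function buildChartConfig({ chart_type, title, labels, series, x_label, y_label }) {
  const config = {
    // Chart.js v4 dropped the horizontalBar type; it is a bar chart with indexAxis "y".
    type: chart_type === "horizontalBar" ? "bar" : chart_type,
    data: {
      labels,
      datasets: series.map((s) => ({ label: s.name, data: s.values })),
    },
    options: { plugins: { title: { display: true, text: title } }, scales: {} },
  };
  if (chart_type === "horizontalBar") config.options.indexAxis = "y";
  if (x_label) config.options.scales.x = { title: { display: true, text: x_label } };
  if (y_label) config.options.scales.y = { title: { display: true, text: y_label } };
  return config;
}

// QuickChart's GET API renders a Chart.js config passed in the `c` query parameter.
function quickchartUrl(config) {
  return "https://quickchart.io/chart?c=" + encodeURIComponent(JSON.stringify(config));
}
```

This is also why the system message can forbid the model from "building Chart.js manually": the sub-workflow owns the config shape, so the agent only supplies data.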
