The workflow JSON
Copy or download the full n8n workflow JSON below. Paste it into a new, empty n8n workflow, add your credentials, and activate it.
{
  "id": "af8RV5b2TWB2LclA",
  "meta": {
    "templateCredsSetupCompleted": true
  },
  "name": "Chat with local LLMs using n8n and Ollama",
  "tags": [],
  "nodes": [
    {
      "id": "475385fa-28f3-45c4-bd1a-10dde79f74f2",
      "name": "When chat message received",
      "type": "@n8n/n8n-nodes-langchain.chatTrigger",
      "position": [700, 460],
      "parameters": {
        "options": {}
      },
      "typeVersion": 1.1
    },
    {
      "id": "61133dc6-dcd9-44ff-85f2-5d8cc2ce813e",
      "name": "Ollama Chat Model",
      "type": "@n8n/n8n-nodes-langchain.lmChatOllama",
      "position": [900, 680],
      "parameters": {
        "options": {}
      },
      "credentials": {
        "ollamaApi": {
          "name": "<your credential>"
        }
      },
      "typeVersion": 1
    },
    {
      "id": "3e89571f-7c87-44c6-8cfd-4903d5e1cdc5",
      "name": "Sticky Note",
      "type": "n8n-nodes-base.stickyNote",
      "position": [160, 80],
      "parameters": {
        "width": 485,
        "height": 473,
        "content": "## Chat with local LLMs using n8n and Ollama\nThis n8n workflow allows you to seamlessly interact with your self-hosted Large Language Models (LLMs) through a user-friendly chat interface. By connecting to Ollama, a powerful tool for managing local LLMs, you can send prompts and receive AI-generated responses directly within n8n.\n\n### How it works\n1. When chat message received: Captures the user's input from the chat interface.\n2. Chat LLM Chain: Sends the input to the Ollama server and receives the AI-generated response.\n3. Delivers the LLM's response back to the chat interface.\n\n### Set up steps\n* Make sure Ollama is installed and running on your machine before executing this workflow.\n* Edit the Ollama address if different from the default.\n"
      },
      "typeVersion": 1
    },
    {
      "id": "9345cadf-a72e-4d3d-b9f0-d670744065fe",
      "name": "Sticky Note1",
      "type": "n8n-nodes-base.stickyNote",
      "position": [1040, 660],
      "parameters": {
        "color": 6,
        "width": 368,
        "height": 258,
        "content": "## Ollama setup\n* Connect to your local Ollama, usually on http://localhost:11434\n* If running in Docker, make sure that the n8n container has access to the host's network in order to connect to Ollama. You can do this by passing `--net=host` option when starting the n8n Docker container"
      },
      "typeVersion": 1
    },
    {
      "id": "eeffdd4e-6795-4ebc-84f7-87b5ac4167d9",
      "name": "Chat LLM Chain",
      "type": "@n8n/n8n-nodes-langchain.chainLlm",
      "position": [920, 460],
      "parameters": {},
      "typeVersion": 1.4
    }
  ],
  "active": false,
  "settings": {
    "executionOrder": "v1"
  },
  "versionId": "3af03daa-e085-4774-8676-41578a4cba2d",
  "connections": {
    "Ollama Chat Model": {
      "ai_languageModel": [
        [
          {
            "node": "Chat LLM Chain",
            "type": "ai_languageModel",
            "index": 0
          }
        ]
      ]
    },
    "When chat message received": {
      "main": [
        [
          {
            "node": "Chat LLM Chain",
            "type": "main",
            "index": 0
          }
        ]
      ]
    }
  }
}
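In the `connections` object above, each top-level key is a source node name, mapping an output type (`main`, `ai_languageModel`) to a list of output slots, each holding a list of target descriptors. A minimal sketch in plain Python (the `list_edges` helper is a hypothetical name, not part of n8n) that flattens this structure into readable edges:

```python
import json

# Excerpt of the workflow's "connections" object, copied from the JSON above.
connections = json.loads("""
{
  "Ollama Chat Model": {
    "ai_languageModel": [[{"node": "Chat LLM Chain", "type": "ai_languageModel", "index": 0}]]
  },
  "When chat message received": {
    "main": [[{"node": "Chat LLM Chain", "type": "main", "index": 0}]]
  }
}
""")

def list_edges(conns):
    """Flatten n8n's connections mapping into (source, output_type, target) tuples."""
    edges = []
    for source, outputs in conns.items():
        for output_type, slots in outputs.items():
            for slot in slots:            # one list per output slot
                for target in slot:       # fan-out targets within that slot
                    edges.append((source, output_type, target["node"]))
    return edges

for edge in list_edges(connections):
    print(edge)
```

Both nodes feed into "Chat LLM Chain": the chat trigger over the `main` connection carrying the message, and the Ollama model over the `ai_languageModel` connection that supplies the LLM.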
Credentials you'll need
Each integration node will prompt for credentials when you import the workflow. Credential IDs are stripped before publishing, so you'll supply your own.
ollamaApi
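The ollamaApi credential only needs the base URL of your Ollama server (http://localhost:11434 by default). If you run n8n in Docker, the workflow's sticky note suggests passing `--net=host`; the same idea expressed as a docker-compose sketch (service name and volume are assumptions, not part of the workflow):

```yaml
# Hypothetical compose sketch: putting n8n on the host network so that
# http://localhost:11434 inside the container reaches Ollama on the host.
services:
  n8n:
    image: n8nio/n8n
    network_mode: host   # equivalent to `--net=host` on `docker run`
    volumes:
      - n8n_data:/home/node/.n8n
volumes:
  n8n_data:
```

Note that `network_mode: host` only works on Linux hosts; on macOS or Windows, reaching the host from a container typically goes through `host.docker.internal` instead.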
About this workflow
Chat with local LLMs using n8n and Ollama. Five nodes in total, entered through a chat trigger, using the node types chatTrigger, lmChatOllama, stickyNote, and chainLlm.
Source: https://github.com/Zie619/n8n-workflows (original creator credit).