n8n (pronounced “n-eight-n”, short for “nodemation”) is a powerful open-source workflow automation tool that lets you connect applications, APIs, and services with minimal coding. With an intuitive UI and built-in AI integrations, n8n is an excellent choice for building AI agents such as chatbots and virtual assistants.
- Self-hosted – Keep your data secure and private.
- Cost-effective – Avoid expensive cloud services.
- Extensive integrations – Supports 400+ services, including Google Sheets and Slack.
- AI-ready – Seamlessly connects with Large Language Models (LLMs) like Ollama.
Learn more in the official n8n documentation: https://docs.n8n.io
We will deploy n8n and Ollama locally using Docker, leveraging the self-hosted AI starter kit. This setup ensures fast and easy deployment across different hardware configurations.
```bash
git clone https://github.com/n8n-io/self-hosted-ai-starter-kit.git
cd self-hosted-ai-starter-kit
```
Depending on your setup, use one of the following commands:

- CPU only: `docker compose --profile cpu up`
- Nvidia GPU: `docker compose --profile gpu-nvidia up`
- AMD GPU (Linux): `docker compose --profile gpu-amd up`
- Mac / Apple Silicon (Ollama running on the host): edit `docker-compose.yml` to add `OLLAMA_HOST=host.docker.internal:11434` to the n8n service's environment, then run `docker compose up` (see the sketch after this list).
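For the Mac case, here is a minimal sketch of running Ollama natively on the host before starting the stack. It assumes Ollama is already installed locally (for example via Homebrew), and the model tag is only an example:

```bash
# Start the Ollama API on the host (listens on port 11434 by default)
ollama serve &

# Pull the model the workflow will use (example tag; any Ollama model works)
ollama pull deepseek-r1:8b

# Start the remaining containers; n8n reaches the host's Ollama
# through OLLAMA_HOST=host.docker.internal:11434
docker compose up
```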
Once the containers are running, the following endpoints are available:

- n8n editor: http://localhost:5678/
- n8n credentials: http://localhost:5678/home/credentials
- Ollama API: http://localhost:11434/ (test with `curl http://localhost:11434/`)

To update to the latest version (replace `gpu-nvidia` with the profile you used):

```bash
docker compose --profile gpu-nvidia pull && docker compose create && docker compose --profile gpu-nvidia up
```
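Before building anything, it helps to confirm that both services respond. A quick check, assuming the default ports listed above:

```bash
# Ollama should reply with "Ollama is running"
curl http://localhost:11434/

# The n8n editor should answer with an HTTP status line
curl -sI http://localhost:5678/ | head -n 1
```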
Now, let’s create a workflow showcasing how an AI agent operates using n8n and the DeepSeek Distill 8b model. The workflow processes user queries, analyzes intent, performs tool calls (e.g., web search), and returns responses.

Open: http://localhost:5678
First-time users: register with an email and password.
1️⃣ Click “Create a new workflow”.
2️⃣ Click + to add a new node.
3️⃣ Search for “Chat” and select it.
In the Chat node settings, you can create an AI Agent for advanced conversation management.
1️⃣ Navigate to the Chat node settings.
2️⃣ Select Ollama Chat Model.
If you haven’t added Ollama credentials yet, click “Create new credential”.
1️⃣ Locate the “Base URL” field.
2️⃣ DO NOT use http://localhost:11434.
3️⃣ Instead, enter:
http://host.docker.internal:11434
4️⃣ Click “Save”.
5️⃣ Test the connection. If successful, you’ll see “Connection tested successfully”.
Incorrect settings will prevent n8n from connecting to Ollama! If the test fails, see the check below.
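One way to debug a failed connection test is to run the request from inside the n8n container itself. A sketch, assuming the container is named `n8n` as in the starter kit and that the image’s bundled BusyBox wget is available:

```bash
# Run the same request from inside the n8n container;
# it should print "Ollama is running"
docker exec -it n8n wget -qO- http://host.docker.internal:11434/
```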
Choose from the available LLMs, such as:

- Llama 3.1
- Gemma 2 2B
- DeepSeek 1.5B

You can also attach memory to the agent; this improves conversational memory for better AI interactions.
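A model only appears in the dropdown once Ollama has it locally. A minimal sketch for pulling models into the Ollama container (the container name `ollama` and the exact tags are assumptions; adjust them to your setup):

```bash
# Pull models into the Ollama container so n8n can list them
docker exec -it ollama ollama pull llama3.1
docker exec -it ollama ollama pull gemma2:2b
docker exec -it ollama ollama pull deepseek-r1:1.5b
```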
1️⃣ Click “Chat” in the Chat node.
2️⃣ Enter a test message, e.g., “Hello, how are you?”
3️⃣ If successful, you will receive a response from your AI model.
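If no response comes back, you can sanity-check the model outside n8n by calling Ollama’s API directly. A minimal example, assuming `deepseek-r1:8b` is the tag you pulled:

```bash
# Ask the model the same question through Ollama's generate endpoint
curl http://localhost:11434/api/generate -d '{
  "model": "deepseek-r1:8b",
  "prompt": "Hello, how are you?",
  "stream": false
}'
```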
Congratulations! You’ve successfully set up n8n with Ollama to run an AI model locally.
If something goes wrong along the way, check the container logs with `docker logs <container-id>`. Now you’re ready to build powerful AI-powered workflows with n8n. Happy automating!