Basic agent with n8n

n8n (pronounced “n-eight-n”, short for “nodemation”) is a powerful open-source workflow automation tool that connects applications, APIs, and services with minimal coding. With an intuitive UI and built-in AI integrations, n8n is well suited to building AI agents such as chatbots and virtual assistants.

Why Choose n8n?

βœ… Self-hosted – Keep your data secure and private.
βœ… Cost-effective – Avoid expensive cloud services.
βœ… Extensive integrations – Supports 400+ services, including Google Sheets and Slack.
βœ… AI-ready – Seamlessly connects with Large Language Models (LLMs) served by tools like Ollama.

πŸ‘‰ Learn more in the official n8n documentation.


🐳 Deploying n8n and Ollama with Docker

We will deploy n8n and Ollama locally using Docker, leveraging the self-hosted AI starter kit. This setup ensures fast and easy deployment across different hardware configurations.

System Requirements

  • Docker and Docker Compose installed.
  • Supported hardware: CPU, Nvidia GPU, Mac/Apple Silicon, or AMD GPU on Linux.
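Before proceeding, it's worth confirming both prerequisites from a terminal (the version numbers in the comments are only examples):

```shell
# Verify Docker and the Compose plugin are installed and on your PATH
docker --version          # e.g. "Docker version 27.x"
docker compose version    # e.g. "Docker Compose version v2.x"
```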

Step-by-Step Deployment Guide

πŸ”Ή 1. Clone the Repository

git clone https://github.com/n8n-io/self-hosted-ai-starter-kit.git
cd self-hosted-ai-starter-kit

πŸ”Ή 2. Select the Right Configuration for Your Hardware

Depending on your setup, use one of the following commands:

  • CPU only:
    docker compose --profile cpu up
    
  • Nvidia GPU:
    docker compose --profile gpu-nvidia up
    
    Ensure Nvidia drivers and CUDA are installed. More details: Ollama Docker.
  • Mac/Apple Silicon:
    docker compose up
    
    Modify docker-compose.yml so the n8n service sets OLLAMA_HOST=host.docker.internal:11434, letting the container reach Ollama running natively on the host.
  • AMD GPU (Linux):
    docker compose --profile gpu-amd up
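For the Mac/Apple Silicon note above, the override can be sketched as follows; the service and variable names follow the starter kit's defaults, so adjust them to match your actual file:

```yaml
# docker-compose.yml (sketch, not a full file) — points the n8n container
# at an Ollama instance running natively on the host machine.
services:
  n8n:
    environment:
      - OLLAMA_HOST=host.docker.internal:11434
```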
    

πŸ”Ή 3. Verify and Access

  • Open n8n UI: http://localhost:5678/
  • Configure credentials: http://localhost:5678/home/credentials
  • Ensure Ollama is running: http://localhost:11434/ (Test with curl http://localhost:11434/)
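The checks above can be run from the host in one go; both endpoints are part of Ollama's standard HTTP API:

```shell
# Quick health checks for the local Ollama server
curl http://localhost:11434/           # root endpoint answers "Ollama is running"
curl http://localhost:11434/api/tags   # lists locally available models as JSON
```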

πŸ”Ή 4. Upgrade (Optional)

To update to the latest version, use the same profile you deployed with (Nvidia GPU shown here; swap in cpu or gpu-amd as appropriate):

docker compose --profile gpu-nvidia pull && docker compose create && docker compose --profile gpu-nvidia up

πŸ€– Building a Simple AI Agent Workflow

Now, let’s create a workflow showcasing how an AI agent operates using n8n and a DeepSeek distill 8B model. The workflow processes user queries, analyzes intent, performs tool calls (e.g., web search), and returns responses.

🏁 Step 1: Access the n8n Workspace

πŸ”— Open: http://localhost:5678 πŸ†• First-time users: Register with email and password.

πŸ› οΈ Step 2: Create a New Workflow

1️⃣ Click “Create a new workflow”.

πŸ’¬ Step 3: Add a “Chat” Node

1️⃣ Click + to add a new node. 2️⃣ Search for “Chat” and select it.

πŸ€– Step 4: (Optional) Create a New AI Agent

πŸ”Ή In the Chat node settings, you can create an AI Agent for advanced conversation management.

πŸ“Œ Step 5: Select the Ollama Chat Model

1️⃣ Navigate to Chat node settings. 2️⃣ Select Ollama Chat Model.

πŸ”‘ Step 6: Create New Ollama Credentials

πŸ’‘ If you haven’t added Ollama credentials, click “Create new credential”.

⚠️ Step 7: Configure Ollama Connection (Important!)

1️⃣ Locate the “Base URL” field. 2️⃣ DO NOT use http://localhost:11434 – inside the n8n container, localhost refers to the container itself, not your machine. 3️⃣ Instead, enter:

http://host.docker.internal:11434

4️⃣ Click “Save”. 5️⃣ Test the connectionβ€”if successful, you’ll see “Connection tested successfully”. 🚨 Incorrect settings will prevent n8n from connecting to Ollama!
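If the connection test fails, you can check reachability from inside the n8n container itself. The container name below assumes the starter kit's default and may differ on your setup:

```shell
# Replace "n8n" with your container's actual name (see `docker ps`)
docker exec n8n wget -qO- http://host.docker.internal:11434
# should print "Ollama is running" if the host's Ollama is reachable
```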

🏷️ Step 8: Select a Large Language Model (LLM)

βœ… Choose from available LLMs, such as:

  • Llama 3.1
  • Gemma 2 2B
  • DeepSeek 1.5B
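A model only appears in this dropdown after it has been pulled into Ollama. A minimal sketch, assuming the starter kit's default `ollama` container name:

```shell
# Pull a model into the running Ollama container (container name may differ; check `docker ps`)
docker exec ollama ollama pull llama3.1
docker exec ollama ollama list   # confirm the model is now available
```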

πŸ—ƒοΈ Step 9: (Optional) Add “Buffer Memory” Node

πŸ“ Improves conversational memory for better AI interactions.

βœ… Step 10: Test the Workflow

1️⃣ Click “Chat” in the Chat node. 2️⃣ Enter a test message, e.g., Hello, how are you? 3️⃣ If successful, you will receive a response from your AI model.

πŸŽ‰ Congratulations! You’ve successfully set up n8n with Ollama to run an AI model locally. πŸš€


🎨 Advanced Tips & Pro Insights

πŸ”Ή Tool Calling Support

  • The AI Agent node can perform tool calls (e.g., web search) when the selected model supports tool calling, as in the intent-analysis workflow described earlier.

πŸ”Ή Optimization

  • With high-end GPUs (e.g., RTX 4090), increase context length up to 16K for complex tasks.

πŸ”Ή Expert Tips

  • Join the n8n community for the latest solutions.
  • Check Ollama logs for debugging: docker logs <container-id>.

πŸš€ Now, you’re ready to build powerful AI-powered workflows with n8n! Happy automating! 🎯