How to Build an AI Chatbot That Queries Your Baserow Database with n8n

Your team keeps pinging you with the same questions: “What’s the status on the Carter account?” or “How many orders came in from Texas last week?” The answers already live in your database — the problem is that nobody wants to learn how to query it. What if they could just ask, in plain English, and get the answer back in seconds? That’s exactly what you’ll build in this guide: an AI chatbot powered by n8n, OpenAI GPT-4o Mini, and Baserow that lets anyone on your team talk to your data without writing a single query.

Prefer to skip the setup? Grab the ready-made template → and be up and running in under 10 minutes.

What You’ll Build

  1. A user opens the n8n chat widget and types a question like “Show me all pending orders from California.”
  2. An AI agent interprets the question, queries your Baserow database for matching records, and formats the results.
  3. The chatbot replies with a clear, readable answer — no SQL, no filters, no training needed.
  4. Follow-up questions work naturally because the chatbot remembers the conversation context.

How It Works — The Big Picture

The whole system is a single n8n workflow with four connected components. The Chat Trigger receives the user’s message, passes it to an AI Agent that decides what to look up, queries Baserow through a built-in tool, and streams the answer back — all while keeping a memory buffer so the conversation flows naturally.

┌──────────────────────────────────────────────────────────────────────┐
│  AI CHATBOT — QUERY BASEROW DATABASE                                │
│                                                                      │
│  [Chat Trigger] ──→ [Database Query Agent] ──→ Chat Response         │
│                           │       │                                  │
│                    ┌──────┘       └──────┐                           │
│                    ↓                     ↓                           │
│           [OpenAI GPT-4o Mini]   [Conversation Memory]               │
│                                         │                            │
│                                  [Query Baserow]                     │
│                                    (AI Tool)                         │
└──────────────────────────────────────────────────────────────────────┘
  

What You’ll Need

  • A self-hosted or cloud n8n instance (version 1.0 or later)
  • An OpenAI account with an API key — GPT-4o Mini costs roughly $0.15 per million input tokens, so this is very affordable
  • A Baserow account (free tier works) with at least one database table containing the data you want to query
  • About 15 minutes to set everything up from scratch

Estimated build time: 15–20 minutes from scratch, or under 5 minutes with the template.

Building the Chatbot — Step by Step

1 Chat Trigger (Chat Trigger Node)

This is the entry point. The Chat Trigger node creates a built-in chat widget inside n8n where users can type their questions. When someone sends a message, the node captures the text and passes it downstream.

  1. In your n8n canvas, click Add node and search for Chat Trigger.
  2. Drop it on the canvas. The default settings work — no configuration needed.
  3. This node will automatically generate a chat URL you can share, or you can embed the widget on any page.

After this node fires, the data looks like this:

{
  "chatInput": "Show me all pending orders from California",
  "sessionId": "a7f3e2b1-9c4d-4e8a-b5f6-1234567890ab"
}
💡

Tip: Each conversation gets a unique sessionId. The memory node uses this to keep separate conversation threads — so multiple people can chat at the same time without mixing up context.

2 Database Query Agent (AI Agent Node)

The AI Agent is the brain of this workflow. It receives the user’s question, decides whether it needs to query the database, calls the Baserow tool if needed, and composes a human-readable response. Think of it as a smart middleman between your user and your data.

  1. Add an AI Agent node and connect it to the Chat Trigger’s output.
  2. Open the node settings and find the System Message field under Options.
  3. Paste this system prompt (customize it to describe your specific data):
You are a helpful data assistant. Your job is to answer user questions by querying the connected Baserow database.

Rules:
1. Always use the Baserow tool to look up real data before answering.
2. If the user asks a question you cannot answer from the database, say so clearly.
3. Format numbers, dates, and lists in a readable way.
4. Keep answers concise — summarize large result sets instead of dumping raw rows.
5. If the query returns no results, suggest the user rephrase or check their filters.
💡

Tip: The more specific your system prompt is about your data schema, the better the agent performs. If your table has columns like order_status, customer_state, and total_amount, mention them in the prompt so the agent knows exactly what fields to filter on.
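
For example, here is a schema section you might append to the prompt above — these column names come from the sample table later in this guide, so swap in your own:

Table schema (Orders):
- order_id (text): unique order identifier, e.g. ORD-2026-0891
- customer_name (text), customer_email (email), customer_state (two-letter US state code)
- order_status (single select): Pending, Shipped, Delivered, or Cancelled
- total_amount (number, USD), created_at (date)
Filter on customer_state for location questions and order_status for status questions.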

3 OpenAI GPT-4o Mini (Chat Model Node)

This node provides the language model that powers the agent’s reasoning. GPT-4o Mini is the sweet spot here — it’s fast, cheap (fractions of a cent per query), and accurate enough for database lookups and conversational responses.

  1. Add an OpenAI Chat Model node (@n8n/n8n-nodes-langchain.lmChatOpenAi).
  2. Connect it to the AI Language Model input on the Agent node (the bottom-left connector).
  3. Set the Model to gpt-4o-mini.
  4. Under Options, set Temperature to 0.3 — you want factual, consistent answers, not creative ones.
  5. Set Max Tokens to 2048 — enough for detailed responses without runaway costs.
  6. Select your OpenAI credential (or create a new one by pasting your API key).
📌

You can swap in gpt-4o for more complex reasoning, but for most database Q&A tasks, GPT-4o Mini handles the job well and costs about 10x less.

4 Conversation Memory (Window Buffer Memory Node)

Without memory, every message would be a brand-new conversation. The Window Buffer Memory node stores the last 12 messages so the agent understands context. When a user asks “What about New York?” right after asking about California orders, the agent knows they mean orders from New York.

  1. Add a Window Buffer Memory node.
  2. Connect it to the AI Memory input on the Agent node (the bottom-center connector).
  3. Set Session ID Type to From Input — this uses the Chat Trigger’s session ID automatically.
  4. Set Context Window Length to 12. This means the agent remembers the last 12 messages (6 pairs of user + assistant).
💡

Tip: If your users tend to ask long, multi-part questions, increase the window to 20. If you want to save on token costs, drop it to 6. The trade-off is context awareness vs. cost per conversation.

5 Query Baserow (Baserow Tool Node)

This is where the magic happens. The Baserow Tool node gives the AI Agent the ability to search, filter, and read rows from your Baserow table. The agent decides when to use it and what to search for — you just need to point it at the right table.

  1. Add a Baserow node (specifically the Baserow Tool variant for AI agents).
  2. Connect it to the AI Tool input on the Agent node (the bottom-right connector).
  3. Select your Baserow credential (API token — you’ll find this in your Baserow account settings).
  4. Set the Database ID — you can find this in your Baserow URL: https://baserow.io/database/YOUR_ID.
  5. Set the Table ID — click into any table and check the URL for the table number.
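
For reference, both IDs appear in the same Baserow URL. An illustrative example (the numbers are placeholders):

https://baserow.io/database/12345/table/67890
— here 12345 is the Database ID and 67890 is the Table ID.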

When the agent calls this tool, it gets back rows like this:

[
  {
    "id": 142,
    "order_id": "ORD-2026-0891",
    "customer_name": "James Carter",
    "customer_email": "james.carter@gmail.com",
    "customer_state": "CA",
    "order_status": "Pending",
    "total_amount": 249.99,
    "created_at": "2026-04-07"
  },
  {
    "id": 143,
    "order_id": "ORD-2026-0892",
    "customer_name": "Emily Rodriguez",
    "customer_email": "emily.rodriguez@outlook.com",
    "customer_state": "CA",
    "order_status": "Pending",
    "total_amount": 89.50,
    "created_at": "2026-04-08"
  }
]

The agent then reads this data, formats it, and responds to the user with something like: “There are 2 pending orders from California — one for James Carter ($249.99) and one for Emily Rodriguez ($89.50), totaling $339.49.”

The Data Structure

Your Baserow table is the foundation of this whole system. The chatbot can only answer questions about data that exists in your table, so the schema matters. Here’s an example for an order tracking table:

| Column | Type | Example | Description |
| --- | --- | --- | --- |
| order_id | Text | ORD-2026-0891 | Unique order identifier |
| customer_name | Text | James Carter | Full name of the customer |
| customer_email | Email | james.carter@gmail.com | Customer contact email |
| customer_state | Text | CA | Two-letter US state code |
| order_status | Single Select | Pending | Current status: Pending, Shipped, Delivered, Cancelled |
| total_amount | Number | 249.99 | Order total in USD |
| created_at | Date | 2026-04-07 | Date the order was placed |
📌

Column names should be descriptive and snake_case. The AI agent uses column names to understand what each field contains, so customer_state works much better than col_7. If your existing table has cryptic column names, consider renaming them — it makes a real difference in answer quality.

Here’s what the table looks like with a few sample rows:

| order_id | customer_name | customer_state | order_status | total_amount |
| --- | --- | --- | --- | --- |
| ORD-2026-0891 | James Carter | CA | Pending | $249.99 |
| ORD-2026-0887 | Sarah Thompson | TX | Shipped | $175.00 |
| ORD-2026-0879 | Michael Chen | NY | Delivered | $312.50 |

Full System Flow

┌─────────────────────────────────────────────────────────────────────────┐
│                                                                         │
│  USER types question          n8n CHAT WIDGET                           │
│  "Show pending CA orders"  →  [Chat Trigger]                            │
│                                     │                                   │
│                                     ▼                                   │
│                            [Database Query Agent]                       │
│                             GPT-4o Mini brain                           │
│                                │         │                              │
│                      ┌─────────┘         └──────────┐                   │
│                      ▼                              ▼                   │
│             [Conversation Memory]          [Query Baserow]              │
│              12-message window              Reads table rows            │
│                                                    │                    │
│                                                    ▼                    │
│                                          ┌─────────────────┐           │
│                                          │  BASEROW TABLE   │           │
│                                          │  Orders, Leads,  │           │
│                                          │  Inventory, etc. │           │
│                                          └─────────────────┘           │
│                                                    │                    │
│                      ▼                             │                    │
│           Agent formats answer  ◄──────────────────┘                    │
│                      │                                                  │
│                      ▼                                                  │
│  USER receives      "There are 2 pending orders                         │
│  formatted answer    from California totaling $339.49"                  │
│                                                                         │
└─────────────────────────────────────────────────────────────────────────┘
  

Testing Your Workflow

  1. Save your workflow and make sure all four nodes are connected (Chat Trigger → Agent, with the model, memory, and tool feeding into the Agent’s sub-connectors).
  2. Click Chat in the bottom-right of the n8n canvas to open the built-in chat panel.
  3. Type a simple question: How many rows are in the table? — This tests the basic connection.
  4. Ask a filtered question: Show me all orders from California — This tests the agent’s ability to apply filters.
  5. Ask a follow-up: Which of those are still pending? — This tests conversation memory.
  6. Ask something the database can’t answer: What's the weather in Chicago? — The agent should politely decline.

Troubleshooting

| Problem | Likely Cause | Fix |
| --- | --- | --- |
| Agent says “I don’t have access to any tools” | Baserow Tool node not connected to the Agent’s AI Tool input | Check the connection — it should link to the wrench/tool connector on the Agent node, not the main input |
| Agent returns empty results for valid queries | Wrong Database ID or Table ID in the Baserow node | Double-check both IDs against your Baserow URL — they’re numeric IDs, not names |
| “Authentication failed” error | Invalid or expired Baserow API token | Generate a new token in Baserow → Settings → API tokens, and update the credential in n8n |
| Follow-up questions don’t work | Memory node not connected or Session ID not set | Ensure the Window Buffer Memory connects to the AI Memory port, and Session ID Type is set to “From Input” |
| Responses are slow (10+ seconds) | Large table with thousands of rows | Add a view filter in Baserow to limit the rows the tool can access, or switch to a more targeted query setup |

Frequently Asked Questions

Can I use this with a free Baserow account?

Yes. Baserow’s free tier includes API access, which is all this workflow needs. The free plan supports up to 3,000 rows per table. If your dataset is larger, you’ll need a paid Baserow plan or a self-hosted instance.

How much does it cost to run per month?

Very little. GPT-4o Mini charges roughly $0.15 per million input tokens and $0.60 per million output tokens. A typical database query conversation uses about 2,000–4,000 tokens total. If your team runs 100 queries per day, you’re looking at maybe $2–5 per month in OpenAI costs. The n8n and Baserow free tiers cover everything else.
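
As a rough back-of-the-envelope check: 100 queries/day × 30 days × ~3,000 tokens ≈ 9 million tokens per month. Even if every token were billed at the higher output rate ($0.60 per million), that caps out around $5.40 — and since most tokens are cheaper input tokens, real-world bills land closer to the $2–3 mark.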

Can I connect multiple tables to the same chatbot?

Yes — add additional Baserow Tool nodes for each table and connect them all to the Agent’s AI Tool input. The agent will decide which table to query based on the user’s question. Just make sure your system prompt describes what each table contains so the agent picks the right one.

Can I embed the chat widget on my website?

n8n’s Chat Trigger generates a URL that you can embed as an iframe or integrate with n8n’s chat widget library. For production use, you might want to swap the Chat Trigger for a Webhook node and build a custom frontend — but for internal team use, the built-in widget works great.
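
If you go the widget route, n8n ships a small npm package for this. A minimal sketch, assuming the @n8n/chat package and the webhook URL your Chat Trigger generated:

import { createChat } from '@n8n/chat';
import '@n8n/chat/style.css';

// Point the widget at the public chat URL from your Chat Trigger node.
createChat({
  webhookUrl: 'https://YOUR-N8N-INSTANCE/webhook/YOUR-WEBHOOK-ID/chat',
});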

What if I want to use Google Sheets instead of Baserow?

Swap the Baserow Tool node for a Google Sheets Tool node (or use an HTTP Request tool with the Sheets API). The rest of the workflow stays the same. Baserow has an edge for structured queries because it supports filters natively, but Google Sheets works fine for smaller datasets.

Is my data sent to OpenAI?

Yes — the rows retrieved from Baserow are sent to OpenAI as part of the prompt so the model can formulate an answer. If you’re working with sensitive data, review OpenAI’s data usage policy or consider using a self-hosted LLM (like Ollama) instead of GPT-4o Mini.

Get the Baserow AI Chatbot Template

Skip the setup — get the pre-built workflow JSON, a step-by-step setup guide, and a credentials walkthrough so you’re live in under 5 minutes.

Get the Template →

Instant download · Works on n8n Cloud and self-hosted

What’s Next?

  • Add a Slack or Telegram interface — replace the Chat Trigger with a Slack or Telegram trigger so your team can query the database directly from their messaging app.
  • Connect multiple data sources — add Airtable, Google Sheets, or a Postgres database alongside Baserow to build a unified data assistant.
  • Add write capabilities — extend the agent with a Baserow “create row” tool so users can add or update records through the chat (e.g., “Mark order ORD-2026-0891 as shipped”).
  • Schedule automated reports — add a Schedule Trigger that runs a predefined query every morning and sends the results to your team channel.

How to Build an AI Candidate Screening Pipeline with n8n (LinkedIn + Gemini)

Recruiting teams spend hours on first-round screening—parsing LinkedIn profiles, cross-referencing job requirements, and writing candidate summaries. It’s critical work, but it’s repetitive and error-prone. What if you could automate the entire initial review, freeing your team to focus on real conversations with the best candidates?

This n8n workflow does exactly that: a recruiter sends a LinkedIn profile URL via Telegram, three AI agents powered by Google Gemini analyze the candidate against the job requirements, and a formatted assessment comes back within seconds. Everything is logged to Google Sheets for your records. Let’s build it.

Prefer to skip the setup? Grab the ready-made template → and be up and running in under 10 minutes.

What You’ll Build

  1. A Telegram bot receives a LinkedIn profile URL from a recruiter
  2. The workflow scrapes the candidate’s profile data using Apify
  3. It retrieves the job description from your Google Drive folder
  4. Three specialized AI agents evaluate the candidate: one scores JD match, one delivers a detailed analysis, and one synthesizes a recruiter-ready recommendation
  5. Results are stored as a row in Google Sheets for future reference
  6. A formatted summary is sent back to Telegram with the screening verdict

How It Works — The Big Picture

The workflow orchestrates a multi-stage evaluation: it gathers the candidate’s LinkedIn profile (scraped via Apify) and the job description (from Google Drive), processes both through three independent LLM agents, consolidates the results, and delivers them both to a persistent data store and back to the recruiter in real time.

┌──────────────────────────────────────────────────────────────────────────┐
│  AI CANDIDATE SCREENING PIPELINE                                        │
│                                                                          │
│  1. Telegram Trigger                                                     │
│         ↓                                                                │
│  2. Extract LinkedIn URL → 3. Apify Scraper (LinkedIn Profile)         │
│         ↓                                                                │
│  4. Poll Apify Status → 5. Get Apify Results                            │
│         ↓                                                                │
│  6. Google Drive: Fetch Job Description                                 │
│  7. Extract PDF Text                                                    │
│         ↓                                                                │
│  8–10. Three Parallel LLM Agents (Gemini 2.5 Pro)                       │
│        • Agent 1: JD Matching Score                                     │
│        • Agent 2: Detailed Candidate Analysis                           │
│        • Agent 3: Recruiter Recommendation                              │
│         ↓                                                                │
│  11. Merge Agent Results                                                │
│  12. Add to Google Sheets                                               │
│  13. Format & Send Telegram Summary                                     │
│         ↓                                                                │
│  14. Telegram Send (Final Message)                                      │
└──────────────────────────────────────────────────────────────────────────┘
  

What You’ll Need

  • n8n account (Cloud or self-hosted, version 1.0+)
  • Telegram Bot API—a bot token created via BotFather
  • Google Account with access to Google Drive, Google Sheets, and the Gemini API
  • Apify Account with API access and a LinkedIn Scraper actor already configured
  • Google Drive Folder containing job description PDFs (one per role)
  • Google Sheets Document where candidate results will be stored
  • LinkedIn URL(s) to test with—public profiles work best

Estimated build time: 45–60 minutes from scratch, or under 10 minutes with the template.

Part 1 — The Trigger and Data Collection

Step 1: Telegram Trigger and URL Extraction

The workflow starts when your recruiter sends a message with a LinkedIn profile link. The Telegram trigger node listens for incoming messages, and a Set node extracts the LinkedIn URL for processing.

Configuration: Set the Telegram Bot token in the credential field. The trigger fires every time a message arrives. A downstream Set node extracts the URL string from the message text using a simple expression.


{
  "message": "Please screen this candidate: https://www.linkedin.com/in/james-carter-52a1b3c/",
  "linkedinUrl": "https://www.linkedin.com/in/james-carter-52a1b3c/",
  "timestamp": "2026-04-08T10:15:00Z"
}
  
💡

Tip: Make sure your recruiter includes the full LinkedIn profile URL in their message. Private or incomplete URLs will cause the Apify scraper to fail silently. Train users to send the full URL like https://www.linkedin.com/in/username/.
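
If you prefer a Code node over a Set node for this extraction, here’s a minimal sketch (the message.text path follows Telegram’s payload shape — adjust it to what your trigger actually outputs):

// Pull the first LinkedIn profile URL out of the incoming Telegram message.
const text = $input.first().json.message?.text ?? '';
const match = text.match(/https:\/\/www\.linkedin\.com\/in\/[A-Za-z0-9\-_%]+\/?/);

// Fail loudly if no URL was found so downstream nodes don't run on bad input.
if (!match) {
  throw new Error('No LinkedIn profile URL found in: ' + text);
}

return [{ json: { linkedinUrl: match[0], message: text } }];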

Step 2: Apify LinkedIn Scraper Trigger and Polling

Once you have the URL, send it to Apify’s LinkedIn Profile Scraper actor. Apify will queue the job asynchronously, so you need to poll for results. The workflow launches the actor, then checks its status repeatedly until it’s done.

Configuration: In the Apify node, set the Actor ID to your LinkedIn Scraper actor, pass the LinkedIn URL as input, and call the actor. Store the Run ID for polling. Use a Wait node to space out polling calls (2–3 seconds apart, 30–40 attempts). Once the status shows “Succeeded,” fetch the results from the Apify output dataset.


{
  "runId": "YOUR_RUN_ID_FROM_APIFY",
  "status": "Succeeded",
  "profile": {
    "name": "James Carter",
    "headline": "Senior Software Engineer at TechCorp",
    "location": "Austin, TX, USA",
    "about": "10+ years building scalable systems. Expertise in cloud architecture and team leadership.",
    "experience": [
      {
        "title": "Senior Software Engineer",
        "company": "TechCorp Inc.",
        "duration": "2022–Present",
        "description": "Led platform modernization, reducing infrastructure costs by 35%."
      }
    ],
    "skills": ["Python", "AWS", "System Design", "Leadership", "Docker", "Kubernetes"],
    "endorsements": 247
  }
}
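
Behind the scenes this is a launch-and-poll pattern against Apify’s REST API. A sketch of the three HTTP Request calls involved (IDs and token are placeholders; note the API reports statuses in uppercase, e.g. SUCCEEDED):

// 1. Launch the actor run (the response includes the run id and defaultDatasetId):
//    POST https://api.apify.com/v2/acts/ACTOR_ID/runs?token=APIFY_TOKEN
// 2. Poll the run status every 2–3 seconds, with a Wait node between attempts:
//    GET  https://api.apify.com/v2/actor-runs/RUN_ID?token=APIFY_TOKEN
//    → loop until data.status === 'SUCCEEDED', or bail out after ~40 attempts
// 3. Fetch the scraped profile items once the run finishes:
//    GET  https://api.apify.com/v2/datasets/DATASET_ID/items?token=APIFY_TOKEN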
  
📌

Important: Apify’s LinkedIn scraper may hit rate limits if called too frequently. Keep polling intervals at 2–3 seconds and fail gracefully if a profile can’t be scraped (use an IF node to check the status).

Step 3: Fetch Job Description from Google Drive

Your Google Drive folder holds job descriptions as PDFs. In parallel with the Apify scrape, the workflow fetches the correct job description. A Set node stores the job title or folder ID for lookup, and a Google Drive node finds and downloads the PDF.

Configuration: Use Google Drive credentials (OAuth2). Set the operation to “Download File” and specify your folder structure. You may hard-code the folder ID or pass it dynamically based on recruiter input. Extract the PDF file ID and download the binary content.


{
  "jobDescriptionFile": {
    "id": "1a2b3c4d5e6f7g8h9i0j_JOB_DESC",
    "name": "Senior_Software_Engineer_2026.pdf",
    "mimeType": "application/pdf",
    "size": 45230,
    "downloadUrl": "https://drive.google.com/file/d/1a2b3c4d5e6f7g8h9i0j_JOB_DESC/view"
  }
}
  

Part 2 — Document Processing and AI Analysis

Step 4: Extract Text from PDF Job Description

PDF files need to be converted to plain text before the AI agents can analyze them. An n8n PDF Extract node (or a Code node using a library like pdf-parse) parses the PDF and outputs clean text.

Configuration: Feed the downloaded PDF binary into a PDF Extract node. Set it to extract all text. The output is clean, line-broken text suitable for LLM processing.


{
  "jobDescriptionText": "Senior Software Engineer - Full-Time, Austin, TX\n\nAbout the Role:\nWe're seeking a Senior Software Engineer to lead our platform modernization initiative...\n\nKey Responsibilities:\n- Design and implement scalable microservices\n- Mentor junior engineers\n- Collaborate with product and design teams\n\nRequired Skills:\n- 8+ years software engineering experience\n- Proficiency in Python, Go, or Rust\n- AWS or GCP certification preferred\n- Strong system design fundamentals\n\nCompensation:\n$180,000–$220,000 + equity"
}
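
If you take the Code-node route mentioned above, here’s a minimal sketch using the pdf-parse library — this assumes a self-hosted instance with external modules allowed (NODE_FUNCTION_ALLOW_EXTERNAL=pdf-parse) and an in-memory binary property named data:

// Parse the downloaded PDF binary into plain text for the LLM agents.
const pdf = require('pdf-parse');

// In-memory binary payloads live as base64 under item.binary.<property>.data.
const base64 = $input.first().binary.data.data;
const buffer = Buffer.from(base64, 'base64');

const parsed = await pdf(buffer);
return [{ json: { jobDescriptionText: parsed.text } }];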
  

Step 5–7: Three Parallel AI Agents (Gemini 2.5 Pro with LangChain)

This is where the intelligence happens. Three specialized LangChain agents, each powered by Google Gemini 2.5 Pro, evaluate the candidate from different angles. They run in parallel for speed, each receiving the same candidate profile and job description but with a different prompt.

Agent 1: JD Matching Agent
Purpose: Assign a match score (0–100%) and list which job requirements the candidate meets and which they lack.
Prompt: “You are a recruitment analyst. Compare this candidate’s profile to the job description. Score the match from 0–100%. List which required skills are present, which are missing, and which desired skills the candidate has. Be precise and numerical.”


{
  "matchScore": 78,
  "requiredSkillsMet": ["Python", "AWS", "System Design", "Leadership"],
  "requiredSkillsMissing": [],
  "desiredSkillsPresent": ["Docker", "Kubernetes"],
  "reasoning": "Strong match on core backend skills and architecture. Leadership experience aligns with mentoring expectations."
}
  

Agent 2: Detailed Analysis Agent
Purpose: Provide a deep-dive evaluation of the candidate’s background, strengths, gaps, and how they’d perform in the role.
Prompt: “You are a senior recruiter reviewing this candidate. Write a comprehensive 2–3 paragraph evaluation of their fit for the role. Consider their experience trajectory, demonstrated technical depth, leadership maturity, and any red flags or concerns. Be constructive but honest.”


{
  "analysis": "James Carter presents a strong profile for this role. His 10 years in software engineering, with the last 4 focused on platform modernization at TechCorp, directly mirror the responsibilities outlined. His experience leading infrastructure cost optimization demonstrates both technical depth and business acumen. However, his background is predominantly in established, large-scale systems; this role will require exposure to startup-pace decision-making. His skill set is very current—Docker, Kubernetes, and AWS are all heavily weighted in the job description. No significant gaps identified beyond the typical onboarding curve."
}
  

Agent 3: Recruiter Recommendation Agent
Purpose: Synthesize the other two analyses and produce a hiring recommendation for the recruiter (e.g., “Strong Yes,” “Yes with caveats,” “No”).
Prompt: “Based on the candidate profile, job description, and the above analyses, provide a short hiring recommendation. Choose from: ‘Strong Yes—move to phone screen,’ ‘Yes, with caveats,’ ‘Maybe—needs clarification on specific skills,’ ‘No—not a fit.’ Explain your recommendation in 1–2 sentences.”


{
  "recommendation": "Strong Yes—move to phone screen",
  "rationale": "Carter's technical skills and leadership experience are a strong fit. His platform modernization background directly aligns with the role's core responsibility. Recommend phone screen to assess cultural fit and career motivation."
}
  
💡

Tip: Parallel execution is critical for speed. All three agents should start at the same time (use a Merge node to combine their outputs). If one agent times out, the workflow won’t block the others—use error handlers to catch and log failures gracefully.

Part 3 — Results Storage and Final Output

Step 8: Merge Agent Results and Add to Google Sheets

Once all three agents finish, a Merge node combines their outputs into a single structured result. This consolidated data is then added as a new row to your Google Sheets document, creating a searchable archive of all screening decisions.

Configuration: Set up the Merge node to combine all agent outputs under a single object. In the Google Sheets node, configure the operation to “Append Row” into your spreadsheet. Map each agent result to a column: matchScore, analysis, recommendation, linkedinUrl, timestamp, and candidateName.


{
  "linkedinUrl": "https://www.linkedin.com/in/james-carter-52a1b3c/",
  "candidateName": "James Carter",
  "timestamp": "2026-04-08T10:15:00Z",
  "matchScore": 78,
  "requiredSkillsMet": "Python, AWS, System Design, Leadership",
  "requiredSkillsMissing": "None",
  "analysis": "James Carter presents a strong profile...",
  "recommendation": "Strong Yes—move to phone screen"
}
  

Step 9: Format and Send Telegram Summary

The final step sends a formatted message back to the recruiter via Telegram. The message includes the match score, the recommendation, and a brief summary for quick review.

Configuration: Use a Set node to format the output as a readable Telegram message (emoji, line breaks, bold text). Then use a Telegram Send node to deliver it to the recruiter’s chat ID (or the original chat where they sent the URL).


{
  "telegramMessage": "✅ SCREENING COMPLETE\\n\\nCandidate: James Carter\\nMatch Score: 78%\\nRecommendation: Strong Yes—move to phone screen\\n\\nProfile: Senior Software Engineer at TechCorp (10 yrs exp)\\nKey Fit: Python, AWS, System Design, Leadership all present.\\n\\nFull analysis saved to screening sheet.",
  "chatId": "YOUR_RECRUITER_CHAT_ID"
}
  
💡

Tip: Add a conditional branch here. If the recommendation is “No,” prefix the Telegram message with a ⛔ emoji. If it’s “Strong Yes,” use a 🚀 emoji. This gives the recruiter instant visual feedback before they even read the details.
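
A hedged Set-node expression for that prefix (assuming the merged output exposes the recommendation field shown in the earlier examples):

={{ $json.recommendation.startsWith('Strong Yes') ? '🚀 ' : $json.recommendation.startsWith('No') ? '⛔ ' : '✅ ' }}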

The Data Structure

All screening results are logged to a Google Sheets document. This becomes your searchable candidate database. Each row captures one screening event, with columns for the LinkedIn URL, candidate name, all three agent outputs, and the timestamp.

| Column | Type | Example | Description |
| --- | --- | --- | --- |
| Date | Date | 2026-04-08 | Screening date (auto-populated with the workflow timestamp) |
| Candidate Name | Text | James Carter | Full name from LinkedIn profile |
| LinkedIn URL | URL | https://www.linkedin.com/in/james-carter-52a1b3c/ | Link to original profile |
| Job Title Screened For | Text | Senior Software Engineer | Which job description was used |
| Match Score | Number | 78 | 0–100 from Agent 1 |
| Required Skills Met | Text | Python, AWS, System Design, Leadership | Comma-separated list from Agent 1 |
| Required Skills Missing | Text | (none) | Gaps the candidate should address |
| Detailed Analysis | Long Text | James Carter presents a strong profile… | Full paragraph from Agent 2 |
| Recommendation | Text | Strong Yes—move to phone screen | Decision from Agent 3 |
📌

Important: Set up the Google Sheets document with these column headers before importing the workflow. The column names must match exactly—the workflow expects Candidate Name, Match Score, etc. If you rename columns, update the field mappings in the Google Sheets node.

Full System Flow

Here’s the complete end-to-end journey, from recruiter message to final Telegram response:

┌─────────────────────────────────────────────────────────────────────────┐
│                    FULL AI SCREENING PIPELINE                          │
│                                                                         │
│  TRIGGER                                                                │
│  ┌──────────────────┐                                                   │
│  │ Telegram Message │ (Recruiter sends LinkedIn URL)                   │
│  └────────┬─────────┘                                                   │
│           │                                                             │
│  DATA GATHERING (Parallel)                                             │
│           ├──→ Apify LinkedIn Scraper ────→ Poll for Results           │
│           │                                                             │
│           └──→ Google Drive ────→ Download Job Description PDF         │
│                                           │                            │
│                                   Extract PDF Text                     │
│           │                                                             │
│           ↓ (Wait for both)                                            │
│           │                                                             │
│  AI ANALYSIS (3 Parallel Agents)                                       │
│           ├──→ Agent 1: JD Match Score & Skills Gap                    │
│           ├──→ Agent 2: Detailed Candidate Analysis                    │
│           └──→ Agent 3: Recruiter Recommendation                       │
│           │                                                             │
│           ↓ (Merge all agent outputs)                                  │
│           │                                                             │
│  PERSISTENCE & OUTPUT                                                  │
│           ├──→ Add Row to Google Sheets                                │
│           │                                                             │
│           └──→ Format & Send Telegram Summary                          │
│                                                                         │
│  ┌──────────────────────────────────────┐                              │
│  │ Recruiter Receives Summary in Telegram│ (with score + recommendation)
│  └──────────────────────────────────────┘                              │
│                                                                         │
│  ✅ Full screening complete in 30–60 seconds                           │
│                                                                         │
└─────────────────────────────────────────────────────────────────────────┘
  

Testing Your Workflow

Before letting your team loose on the workflow, run through this test plan to confirm everything is wired correctly.

  1. Send a test LinkedIn URL via Telegram: Use a public profile (e.g., your own LinkedIn or a known public figure). Send a message to your bot like: “Please screen: https://www.linkedin.com/in/sarah-thompson-engineering/”
  2. Monitor the n8n execution: Open the workflow’s execution history in n8n and watch for successful node completion. Check that Apify returns profile data, Google Drive successfully downloads the PDF, and all three agents produce output.
  3. Check Telegram for the response: Within 30–60 seconds, you should receive a formatted message with a match score and recommendation.
  4. Verify the Google Sheets row: Open your screening spreadsheet and confirm that a new row was added with all the candidate details and agent analysis.

Troubleshooting

| Problem | Likely Cause | Fix |
| --- | --- | --- |
| Telegram message not triggering workflow | Bot token incorrect or Telegram node not listening on the right chat | Re-check the bot token in the Telegram credential. Confirm you’re messaging the correct bot. |
| Apify scraper returns empty profile | LinkedIn profile is private or URL is malformed | Test with a public profile. Ensure the recruiter sends the full URL (https://www.linkedin.com/in/username/). |
| Google Drive node returns “File not found” | Job description PDF is not in the specified folder, or folder ID is wrong | Double-check the folder ID in the Google Drive config. Confirm the PDF exists and is accessible. |
| Gemini agents time out or return empty responses | API quota exceeded, malformed prompt, or credential not authenticated | Check the Google Cloud console for quota limits. Re-authenticate the Gemini credential. Simplify the prompt if needed. |
| Google Sheets append fails | Column names don’t match, sheet is read-only, or credentials lack write access | Verify column headers match exactly. Check sheet permissions. Re-authenticate the Google Sheets credential. |
| Telegram response is delayed (>2 minutes) | Apify polling is slow, or the Gemini API is slow | Reduce the polling interval slightly (1–2 sec). Check n8n logs for slow node execution. |

Frequently Asked Questions

Can I screen candidates for multiple jobs at once?

Yes. Instead of hard-coding a single job description, modify the workflow to accept a job title as an input parameter. Add a conditional step that looks up the corresponding PDF from Google Drive based on the title sent in the Telegram message. For example, if the recruiter sends “Screen for Senior Software Engineer,” the workflow finds and uses that specific job description.

What if Apify can’t scrape a LinkedIn profile?

Apify may fail on private profiles, suspended profiles, or if LinkedIn rate-limits the scraper. Add an error handler branch after the Apify polling step. If the status is “Failed,” send a message back to the recruiter explaining the issue and ask them to provide a public-facing profile link or a resume PDF instead. You can then use a PDF extractor node to parse the resume and proceed with the same AI agents.

How much does this cost to run?

Costs depend on your service usage. The Gemini API is priced per million tokens — for Gemini 2.5 Pro, on the order of $1.25 per million input tokens and $10 per million output tokens. A typical screening run uses about 5,000–8,000 tokens total, so the LLM cost is a cent or two per candidate. Apify charges based on actor runs (LinkedIn scraper ~$0.05–$0.15 per run). Google Sheets and Google Drive are included in your Google Workspace account (no additional charge). The Telegram bot is free. Total cost per screening: roughly $0.05–$0.20.

Can I customize the AI agents’ evaluation criteria?

Absolutely. Each agent’s instructions are defined in the LangChain prompt. Edit the prompt in each agent node to emphasize different criteria. For example, if your role prioritizes “leadership and mentoring ability,” adjust Agent 2’s prompt to focus on those traits. Or if you want more detail, ask the agents to return structured JSON with sub-scores instead of prose.

What if the recruiter needs to screen a candidate without a LinkedIn profile?

Create an alternative input path. After the initial Telegram message, add a conditional step: if the message contains a LinkedIn URL, proceed with Apify; if it contains a resume attachment or a Drive link, fetch and parse that instead. Both paths merge at the “AI Analysis” stage, so the agents evaluate the same data regardless of source.

How do I integrate this with my existing ATS (Applicant Tracking System)?

Most modern ATSs expose an API for candidate creation. After screening results are added to Google Sheets, add a conditional node that checks the recommendation. If it’s “Strong Yes,” make an HTTP POST request to your ATS’s API with the candidate details, match score, and recruiter notes. This creates a pre-filled candidate record that the recruiter can review and move forward in your hiring workflow.

Get the AI Candidate Screening Template

Stop screening manually. Import the complete 55-node workflow in under 10 minutes, configure your credentials, and let AI agents handle first-round reviews while you focus on real conversations.

Get the Template →

Instant download · Works on n8n Cloud and self-hosted · Setup guide included

What’s Next?

  • Add a second-stage review: Create a companion workflow that triggers when a recruiter flags a candidate for deeper evaluation. It can compile a detailed dossier from LinkedIn, recent GitHub contributions, and portfolio links.
  • Expand to video interviews: Integrate a scheduling tool (e.g., Calendly) to automatically send a booking link to “Strong Yes” candidates, streamlining the next step.
  • Multi-language support: Use Google Translate in parallel with Gemini to evaluate candidates from non-English profiles, opening your talent pool globally.
  • Scoring refinement: Once you have 20–30 screening results in Google Sheets, analyze which candidates actually performed well in interviews and phone screens. Use that data to fine-tune the agents’ prompts and weightings for even better predictions.

How to Build AI Lead Scoring with Salesforce, GPT-4o, and Slack in n8n

Your sales team is drowning in leads. Last week you got 247. Yesterday alone, 58. But here’s the real problem: they’re spending three hours manually qualifying each batch, leaving scores written in messy spreadsheets, and the best leads get worked last because nobody marked them as priority. By then, they’ve already gone cold.

What if your AI could instantly score every lead the moment it enters Salesforce? Not just a random 1-100 number, but a real assessment: Is this a hot prospect ready to buy? A warm lead worth nurturing? Or cold?

That’s exactly what we’re building today. In this guide, you’ll create a complete AI-powered lead-scoring workflow in n8n that connects Salesforce, GPT-4o, and Slack. You’ll master data masking—a privacy-first technique that keeps PII out of the AI’s hands. And you’ll have a system that runs automatically, scoring leads while your team sleeps.

Ready to skip the manual building and import a production-ready template? We’ve pre-built this entire workflow. Grab the template and get started in under 5 minutes.

What You’ll Build

By the end of this guide, you’ll have a fully automated lead-scoring system that:

  1. Triggers instantly when a new lead lands in Salesforce, no manual intervention needed
  2. Masks sensitive data (names, emails, phone numbers) before it ever leaves your system, keeping PII private
  3. Scores each lead with GPT-4o, assigning a score (1-100), tier (Hot/Warm/Cold), and actionable next steps
  4. Restores original data after scoring, so your Slack message has the real contact info intact
  5. Posts to Slack with a clean, formatted notification that your sales team acts on immediately

How It Works — The Big Picture

Before we dig into the build, let’s see how the pieces fit together:

┌──────────────┐         ┌──────────────┐         ┌──────────────┐
│ 1. SALESFORCE│         │ 2. LOOP/      │         │ 3. FETCH     │
│ TRIGGER      │────────▶│    SPLIT      │────────▶│ LEAD DETAILS │
│              │         │(Split Batches)│         │(HTTP Request)│
└──────────────┘         └──────────────┘         └──────────────┘
        │                        │                        │
        │                        │                        │
        │                        └────────────────────────┘
        │                                  │
        │                    ┌─────────────▼──────────────┐
        │                    │ 4. MASK SENSITIVE DATA     │
        │                    │ (Code node)                │
        │                    │ Strip email, phone, name   │
        │                    │ Store mapping: real→masked │
        │                    └─────────────┬──────────────┘
        │                                  │
        │                    ┌─────────────▼──────────────┐
        │                    │ 5. AI LEAD SCORING         │
        │                    │ (OpenAI GPT-4o)            │
        │                    │ Analyze masked lead        │
        │                    │ Score, tier, reasoning     │
        │                    └─────────────┬──────────────┘
        │                                  │
        │                    ┌─────────────▼──────────────┐
        │                    │ 6. UNMASK LEAD DATA        │
        │                    │ (Code node)                │
        │                    │ Restore email, phone, name │
        │                    │ From mapping: masked→real  │
        │                    └─────────────┬──────────────┘
        │                                  │
        │                    ┌─────────────▼──────────────┐
        │                    │ 7. POST TO SLACK           │
        │                    │ Format and send            │
        │                    │ scored lead notification   │
        │                    │ to sales channel           │
        │                    └────────────────────────────┘

The key innovation here is Steps 4 and 6: data masking and unmasking. Your lead’s real email and phone number never get sent to OpenAI. Instead, GPT-4o analyzes a sanitized version, then we restore the original data before Slack posts it. Privacy by design.
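
The template implements this in Code nodes; here’s a minimal sketch of the idea (the field names name, email, and phone are assumptions — map them to your actual Salesforce lead fields):

// MASK (step 4): swap PII for placeholder tokens and keep the reverse mapping.
const lead = $input.first().json;
const mapping = { '[NAME]': lead.name, '[EMAIL]': lead.email, '[PHONE]': lead.phone };
const masked = { ...lead, name: '[NAME]', email: '[EMAIL]', phone: '[PHONE]' };
return [{ json: { masked, mapping } }];

// UNMASK (step 6): after scoring, walk the mapping and swap each placeholder
// back for the stored original before the Slack message is built, e.g.:
//   let text = aiOutput;
//   for (const [token, real] of Object.entries(mapping)) {
//     text = text.split(token).join(real);
//   }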

How to Build an AI Contract Risk Analyzer with n8n (Two-Pass Analysis)

Learn how to build a two-pass AI contract analyzer with n8n that classifies documents, scores risk, and alerts your team via Slack and email.

Overview

This comprehensive guide shows you how to create an AI-powered contract analysis workflow using n8n, OpenAI, Supabase, Slack, and Gmail. The workflow performs a two-pass analysis to classify contracts and identify risks efficiently.

Key Features

  • Two-pass AI analysis for accurate risk scoring
  • Contract classification and metadata extraction
  • Automatic Slack and email alerts for high-risk contracts
  • Supabase integration for data persistence
  • Deduplication to avoid re-analyzing the same contracts
  • Error handling and admin notifications

How It Works

The workflow ingests contract PDFs, performs a quick classification pass to extract metadata, then conducts a deep risk analysis. Results are stored in Supabase and alerts are sent via Slack and email based on risk scores.
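
As one illustration of the deduplication step, a hedged Code-node sketch that fingerprints each contract by hashing its extracted text (the contractText field and a content_hash column in Supabase are assumptions; require() for built-in modules needs NODE_FUNCTION_ALLOW_BUILTIN on self-hosted n8n):

// Fingerprint the contract so previously analyzed documents can be skipped.
const crypto = require('crypto');

const item = $input.first().json;
const contentHash = crypto.createHash('sha256').update(item.contractText ?? '').digest('hex');

// A following Supabase "get rows" node filters on content_hash; an IF node
// then routes already-seen contracts around the two-pass analysis.
return [{ json: { ...item, contentHash } }];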

Get Started

Download the Complete Workflow

Includes setup guide, credentials guide, and the ready-to-import n8n JSON workflow.

How to Build an AI SEO Content Engine with n8n and Google Gemini

Learn how to build an automated content engine that generates SEO and GEO-optimized articles using n8n and Google Gemini, with smart distribution to WordPress, LinkedIn, and email.

What You’ll Build

This guide shows you how to create a fully automated AI content pipeline that:

  • Watches a Google Sheet for content ideas
  • Generates SEO/GEO-optimized articles with Google Gemini
  • Intelligently routes content to the right distribution channels
  • Publishes to WordPress, LinkedIn, email, and more

Getting Started

Access the complete workflow template and setup guides at: AI SEO Content Engine Template

How to Build an AI Chatbot with Long-Term Memory Using n8n and Google Docs

Ever built a chatbot that forgets everything the moment the conversation ends? You ask your AI assistant to remember your coffee order preference, and five minutes later it has no idea you’re an oat-milk double-shot latte person. It’s frustrating, and it makes the experience feel less intelligent than it should be.

Here’s the good news: you can fix this with n8n. This workflow gives your AI chatbot true persistent memory using Google Docs as a beautifully simple, completely free knowledge store. No database setup. No complex infrastructure. Just your chatbot learning about users, remembering details, and getting smarter with every conversation.

Ready to build this? You can grab the complete ready-to-import workflow with all the setup instructions at EasyWorkflows. Import in seconds and start building immediately.

What You’ll Build

This workflow creates an intelligent chatbot experience where:

  1. Users send messages via chat trigger (or Telegram)
  2. The workflow instantly retrieves all previous memories and notes about that user from Google Docs
  3. An AI agent (GPT-4o-mini) reads the incoming message alongside historical context
  4. The AI thinks about what to remember, what to save, and how to respond
  5. Important facts get saved to the “Long-Term Memories” doc for persistence
  6. Temporary notes get saved to the “Notes” doc for session context
  7. The response goes back to the user via Telegram, chat interface, or both

How It Works — The Big Picture

Let me show you the architecture before we dive into the build:

┌─────────────────────────────────────────────────────────────────┐
│                         USER MESSAGE                             │
│                  (Chat Trigger / Telegram)                       │
└────────────────────────────┬────────────────────────────────────┘
                             │
                ┌────────────┴────────────┐
                │                         │
      ┌─────────▼──────────┐   ┌─────────▼──────────┐
      │   Load Memories    │   │    Load Notes      │
      │  from Google Docs  │   │  from Google Docs  │
      │   (Memory Doc)     │   │   (Notes Doc)      │
      └─────────┬──────────┘   └─────────┬──────────┘
                │                         │
                └────────────┬────────────┘
                             │
                      ┌──────▼──────┐
                      │    Merge    │
                      │ Memories &  │
                      │    Notes    │
                      └──────┬──────┘
                             │
                      ┌──────▼──────┐
                      │  Combine    │
                      │  Context    │
                      └──────┬──────┘
                             │
        ┌────────────────────┼────────────────────┐
        │                    │                    │
   ┌────▼────┐      ┌────────▼────────┐     ┌────▼────┐
   │ Session │      │  AI Memory      │     │  GPT-   │
   │ Memory  │      │  Agent (with    │     │ 4o Mini │
   │ Buffer  │      │  System Prompt) │     │         │
   └────┬────┘      └────────┬────────┘     └────┬────┘
        │                    │                    │
        └────────────────────┼────────────────────┘
                             │
                ┌────────────┴────────────┐
                │                         │
      ┌─────────▼────────┐    ┌──────────▼──────────┐
      │  Save Memory to  │    │  Save Note to       │
      │   Google Docs    │    │   Google Docs       │
      │  (Triggered by   │    │  (Triggered by AI)  │
      │   AI decision)   │    │                     │
      └─────────┬────────┘    └──────────┬──────────┘
                │                        │
      ┌─────────▼──────────────────────────▼────────┐
      │              Response Output                │
      │      (Format Chat + Send Telegram)          │
      └──────────────────────────────────────────────┘

What You’ll Need

Before you start, gather these prerequisites:

  • n8n account (cloud.n8n.io or self-hosted) — free tier works fine
  • OpenAI API key with GPT-4o-mini access (~$0.15 per million input tokens — a typical conversation costs a fraction of a cent)
  • Google account with Google Docs access (free)
  • Telegram account (optional, but included in this workflow)
  • Telegram Bot Token from BotFather (optional, only if using Telegram output)
  • Basic n8n knowledge — familiarity with nodes, connections, and expressions is helpful but not required

Time estimate: 25–35 minutes for setup and first conversation.

Setting Up Your Google Docs

Google Docs is the memory backbone of this workflow. Think of it like a simple, searchable database that the AI can read and write to. Here’s why it works so well:

  • Free — You don’t pay per request or storage
  • Human-readable — You can open it anytime and read what the AI has learned
  • Searchable — the built-in Find function lets you look up anything the AI has stored at any time
  • OAuth2-compatible — n8n integrates seamlessly
  • Shareable — If you want to audit or share user data with a team member, it’s just a Google Doc link

Create Two Google Docs

Doc 1: “Long-Term Memories”

This stores facts that persist across conversations. Examples:

[04/08/2026] - Name: Sarah
[04/08/2026] - Preference: Oat milk lattes, no sugar
[04/08/2026] - Job: Product Manager at TechCorp
[04/08/2026] - Timezone: America/Chicago
[04/08/2026] - Goal: Learn n8n automation

Doc 2: “Notes”

This stores temporary reminders, action items, and session notes:

[04/08/2026 14:32:15] - User asked about workflow pricing
[04/08/2026 14:33:42] - Follow up: send template examples
[04/08/2026 14:35:08] - User mentioned bandwidth limits in n8n Cloud

You’ll get the document IDs after you create them. In Google Docs, the URL looks like:

https://docs.google.com/document/d/YOUR_GOOGLE_DOC_ID/edit

Copy that ID—you’ll paste it into the workflow nodes.
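
A quick illustrative snippet for grabbing that ID programmatically (plain JavaScript, nothing n8n-specific):

// Extract the document ID from a Google Docs URL.
const url = 'https://docs.google.com/document/d/YOUR_GOOGLE_DOC_ID/edit';
const docId = url.match(/\/document\/d\/([A-Za-z0-9_-]+)/)[1];
console.log(docId); // → YOUR_GOOGLE_DOC_ID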

Building the Workflow — Step by Step

Now let’s build this. I’ll walk through each node with configurations and tips.

1 Chat Trigger

What it does: Listens for incoming messages from the n8n chat interface or external webhooks. This is your entry point for user input.

How to configure:

  • In n8n, add a “Chat Trigger” node (@n8n/n8n-nodes-langchain.chatTrigger)
  • Set chatTriggerType to "webhook"
  • Save and deploy the workflow
  • The node will generate a webhook URL automatically

Data that flows out:

{
  "chatId": "user-session-123",
  "message": "Hey, can you remember I prefer oat milk lattes?",
  "sessionId": "session-456"
}

Tip: The Chat Trigger can receive messages from n8n’s web chat UI, Telegram, Slack, or any HTTP POST request. In this workflow, it fans out to two parallel Google Docs fetch operations.

2 Load Memories from Google Docs

What it does: Retrieves the Long-Term Memories document. This gives the AI context about the user’s past.

How to configure:

  • Add a “Google Docs” node
  • Authenticate with your Google account (OAuth2)
  • Set resource to "document"
  • Set documentId to YOUR_GOOGLE_DOC_ID_MEMORIES
  • Set option to "getText"

Data that flows out:

{
  "data": "[04/08/2026] - Name: Sarah\n[04/08/2026] - Preference: Oat milk lattes, no sugar\n[04/08/2026] - Job: Product Manager at TechCorp"
}

Tip: This node runs in parallel with the Notes fetch, so both operations happen simultaneously. The latency is usually under 500ms per fetch.

3 Load Notes from Google Docs

What it does: Retrieves the Notes document. This provides immediate session context and recent reminders.

How to configure:

  • Add another “Google Docs” node (separate from the Memories node)
  • Authenticate with the same Google account
  • Set resource to "document"
  • Set documentId to YOUR_GOOGLE_DOC_ID_NOTES
  • Set option to "getText"

Data that flows out:

{
  "data": "[04/08/2026 14:32:15] - Asked about workflow pricing\n[04/08/2026 14:33:42] - Follow up: send template examples"
}

Tip: If the Notes doc doesn’t exist yet or is empty, the node will return an empty string. That’s fine—the workflow handles it gracefully.

4 Merge Memories & Notes

What it does: Combines the two separate outputs into a single data structure. This merge prepares the data for aggregation.

How to configure:

  • Add a “Merge” node (n8n-nodes-base.merge)
  • Set mode to "combine"
  • Set combineBy to "id"
  • Connect Memories output to input 0
  • Connect Notes output to input 1

Data that flows out:

[
  { "json": { "data": "[memories...]", "type": "memories" } },
  { "json": { "data": "[notes...]", "type": "notes" } }
]

Tip: The Merge node in “combine” mode takes multiple inputs and creates an array. This makes it easy for the next node to iterate over both data sources.

5 Combine Context

What it does: Aggregates the memories and notes into a single context object that the AI agent will receive.

How to configure:

  • Add an “Aggregate” node (n8n-nodes-base.aggregate)
  • Set mode to "combine"
  • Leave other options as default

Data that flows out:

{
  "memory_context": "[all memories]",
  "notes_context": "[all notes]",
  "combined": true
}

Tip: Think of Aggregate as the “summary” step. It pools all items from the previous step so the AI Agent receives one clean input.

6 AI Memory Agent

What it does: This is the brain. The AI Agent reads the user’s message, your system prompt, and all the context from memories and notes. It decides what to respond, what to save, and which tools to call.

How to configure:

  • Add an “AI Tools Agent” node (@n8n/n8n-nodes-langchain.agent)
  • Set agentOptions.systemPrompt to the following:
You are an AI assistant with persistent memory capabilities. Your role is to:

1. MEMORY MANAGEMENT:
   - Extract and save important facts about the user for long-term recall
   - Format memories as: [DATE] - [TOPIC]: [DETAIL]
   - Save to 'Save Memory to Google Docs' when you learn something important

2. NOTE TAKING:
   - Capture actionable items, reminders, or temporary notes
   - Format notes as: [DATE] [TIME] - [NOTE]
   - Save to 'Save Note to Google Docs' when user mentions something to remember

3. CONTEXT AWARENESS:
   - Review loaded memories and notes before responding
   - Reference past conversations naturally
   - Acknowledge when you remember something about the user

4. RESPONSE GUIDELINES:
   - Be helpful, personable, and concise
   - If unsure, ask clarifying questions
   - Never make up information not in your context

Always use the provided tools to save important information.

Tool attachments: Connect the Session Memory Buffer, GPT-4o Mini LLM, and the two Save tools (Save Memory, Save Note). The agent will automatically call these tools based on its logic.

Data that flows out:

{
  "output": "Thanks Sarah! I've noted that you prefer oat milk lattes. I'll remember that for next time we chat. Is there anything else I can help with?",
  "tool_calls": [
    { "tool": "Save Memory to Google Docs", "input": { "topic": "Preference", "detail": "Oat milk lattes, no sugar" } }
  ]
}

Tip: The system prompt is where you define the AI’s personality and its rules for memory management. You can customize this heavily—make it more casual, more formal, more technical, whatever fits your use case.

7 GPT-4o Mini

What it does: The language model that powers the agent. GPT-4o-mini is fast, cheap (about $0.15 per million input tokens), and smart enough for most chatbot tasks.

How to configure:

  • Add an “OpenAI Chat Model” node (@n8n/n8n-nodes-langchain.lmChatOpenAi)
  • Authenticate with your OpenAI API key
  • Set model to "gpt-4o-mini"
  • Set temperature to 0.7 (keeps responses natural but consistent)

Data flow: The Agent sends prompts to this node, which returns generated text. The node is “read-only” from the Agent’s perspective—it doesn’t make decisions, just generates language.

Tip: If you want more creative responses, increase temperature to 0.9. If you want more deterministic responses, lower it to 0.3. For memory-critical tasks, I’d recommend 0.5–0.7.

8 Session Memory Buffer

What it does: Stores the last 10 messages in the conversation (configurable). This gives the AI short-term context within a single chat session, complementing the long-term Google Docs memory.

How to configure:

  • Add a “Memory Buffer Window” node (@n8n/n8n-nodes-langchain.memoryBufferWindow)
  • Set bufferSize to 10 (keeps the last 10 messages)

Data that flows out:

{
  "history": [
    { "role": "user", "content": "Hey, remember I prefer oat milk?" },
    { "role": "assistant", "content": "I'll remember that!" },
    ...
  ]
}

Tip: The Session Memory Buffer is separate from the long-term Google Docs memory. Use bufferSize 10–20 for natural conversations. If you set it higher (50+), you might hit token limits on OpenAI, increasing costs.
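
A rough way to project token usage before raising bufferSize: English text averages around four characters per token. This sketch uses that heuristic (it is an approximation, not a real tokenizer):

// Rough token estimate for a message buffer (~4 chars/token heuristic)
const history = [
  { role: "user", content: "Hey, remember I prefer oat milk?" },
  { role: "assistant", content: "I'll remember that!" }
];

const approxTokens = history.reduce(
  (sum, msg) => sum + Math.ceil(msg.content.length / 4), 0
);
console.log(approxTokens); // scale by your bufferSize to project a session's usage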

9 Save Memory to Google Docs

What it does: A Google Docs node that appends new long-term memories. When the AI Agent decides to save something important, it calls this tool.

How to configure:

  • Add a “Google Docs” node (different from the retrieval nodes)
  • Authenticate with your Google account
  • Set resource to "document"
  • Set documentId to YOUR_GOOGLE_DOC_ID_MEMORIES
  • Set option to "appendText"
  • Set text to: ={{ "[" + new Date().toLocaleDateString() + "] - " + $json.input.topic + ": " + $json.input.detail }}

Example append:

[04/08/2026] - Preference: Oat milk lattes, no sugar

Tip: The expression creates a timestamp automatically. Each memory is on a new line, making the doc easy to scan.
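
If you prefer sortable ISO dates over locale-dependent ones, n8n expressions expose Luxon's $now object. A hedged variant of the same expression (field names unchanged):

={{ "[" + $now.toFormat('yyyy-MM-dd') + "] - " + $json.input.topic + ": " + $json.input.detail }}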

10 Save Note to Google Docs

What it does: A Google Docs node for temporary notes. When the AI wants to save a reminder or action item, it calls this.

How to configure:

  • Add another “Google Docs” node (separate from Save Memory)
  • Authenticate with your Google account
  • Set resource to "document"
  • Set documentId to YOUR_GOOGLE_DOC_ID_NOTES
  • Set option to "appendText"
  • Set text to: ={{ "[" + new Date().toLocaleDateString() + " " + new Date().toLocaleTimeString() + "] - " + $json.input }}

Example append:

[04/08/2026 14:32:15] - User asked about Telegram integration

Tip: This node includes both date and time, since notes are usually time-sensitive. Notes doc grows fast—you might clean it out monthly or weekly depending on volume.
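
If you would rather automate that cleanup, a scheduled workflow with a Code node can drop stale entries before rewriting the doc. A minimal sketch, assuming the [MM/DD/YYYY HH:MM:SS] prefix shown above and a 30-day retention window:

// Keep only notes from the last 30 days (assumes the [MM/DD/YYYY ...] prefix)
const raw = $input.first().json.data ?? '';
const cutoff = Date.now() - 30 * 24 * 60 * 60 * 1000;

const fresh = raw.split('\n').filter(line => {
  const match = line.match(/^\[(\d{2})\/(\d{2})\/(\d{4})/);
  if (!match) return false;
  const [, mm, dd, yyyy] = match;
  return new Date(`${yyyy}-${mm}-${dd}`).getTime() >= cutoff;
});

return [{ json: { data: fresh.join('\n') } }];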

11 Send Telegram Reply

What it does: Sends the AI’s response to Telegram. Optional, but great for testing or if you want users to chat via Telegram instead of the n8n web UI.

How to configure:

  • Add an “HTTP Request” node (@n8n-nodes-base.httpRequest)
  • Set method to POST
  • Set url to: https://api.telegram.org/bot{{ $env.TELEGRAM_BOT_TOKEN }}/sendMessage
  • Add body parameters:
  • chat_id: YOUR_TELEGRAM_CHAT_ID
  • text: ={{ $json.output }}

Tip: Get your Telegram Bot Token from BotFather on Telegram. To find your Chat ID, message the bot once, then call the Bot API's getUpdates method and read the chat ID from the response. If you're not using Telegram, you can delete this node entirely; the workflow still works.
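
A minimal sketch of that lookup, runnable in an n8n Code node or any modern JavaScript runtime (the token is a placeholder):

// Read your chat ID from the Telegram Bot API after messaging the bot once
const token = 'YOUR_TELEGRAM_BOT_TOKEN'; // placeholder: use your real token
const res = await fetch(`https://api.telegram.org/bot${token}/getUpdates`);
const body = await res.json();

// Each update carries the sender's chat; take the most recent one
const chatId = body.result.at(-1)?.message?.chat?.id;
return [{ json: { chatId } }];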

12 Format Chat Response

What it does: Formats the AI’s output for the n8n chat interface. This is optional if you’re only using Telegram, but it’s useful for testing.

How to configure:

  • Add a “Set” node (@n8n-nodes-base.set)
  • Set mode to "Manual Mapping"
  • Add assignment: response = ={{ $json.output }}

Data that flows out:

{
  "response": "Thanks Sarah! I've noted that you prefer oat milk lattes. I'll remember that for next time."
}

Tip: This node is simple, but it’s important for keeping the chat interface happy. It ensures the response is in the expected format.

The Data Structure

Let’s look at how data flows through the system, using a real example with Sarah:

Initial state (empty memories and notes):

Google Docs - Long-Term Memories:
(empty)

Google Docs - Notes:
(empty)

Sarah sends her first message:

User: "Hi! I'm Sarah from Chicago. I work in product management and I love oat milk lattes."

The workflow retrieves (empty context):

memories_context: ""
notes_context: ""

AI Agent decides to save key facts:

Tool call: Save Memory to Google Docs
Input: { topic: "Name", detail: "Sarah" }
Input: { topic: "Timezone", detail: "America/Chicago" }
Input: { topic: "Job", detail: "Product Manager" }
Input: { topic: "Preference", detail: "Oat milk lattes" }

Google Docs after first conversation:

Google Docs - Long-Term Memories:
[04/08/2026] - Name: Sarah
[04/08/2026] - Timezone: America/Chicago
[04/08/2026] - Job: Product Manager
[04/08/2026] - Preference: Oat milk lattes

Google Docs - Notes:
[04/08/2026 14:35:22] - User wants to learn n8n automation

Sarah’s next message (tomorrow):

User: "Hey, can you help me with n8n workflows?"

The workflow now retrieves:

memories_context: "[04/08/2026] - Name: Sarah\n[04/08/2026] - Timezone: America/Chicago\n[04/08/2026] - Job: Product Manager\n[04/08/2026] - Preference: Oat milk lattes"

notes_context: "[04/08/2026 14:35:22] - User wants to learn n8n automation"

AI response:

Output: "Hey Sarah! Of course—I'd love to help with your n8n workflows. Given that you're a PM at TechCorp in Chicago, I'm guessing you're looking to automate some processes for your team. What kind of workflows are you thinking about?"

Notice how the AI naturally references Sarah by name, acknowledges her role, and connects her earlier interest in n8n to the current question. That’s persistent memory in action.

Full System Flow

Here’s the complete end-to-end journey:

┌─ Start: User sends message via Chat Trigger or Telegram ──┐
│                                                             │
├─ Parallel: Load Memories from Google Docs                 │
│ └─ Retrieve all historical facts about user                │
│                                                             │
├─ Parallel: Load Notes from Google Docs                    │
│ └─ Retrieve session context and recent reminders           │
│                                                             │
├─ Merge: Combine both memory sources                       │
│ └─ Creates array: [memories_obj, notes_obj]               │
│                                                             │
├─ Aggregate: Consolidate into single context               │
│ └─ Pools all items into one data structure                │
│                                                             │
├─ AI Memory Agent (receives):                              │
│ ├─ User message                                            │
│ ├─ All long-term memories                                 │
│ ├─ All notes from this session                            │
│ ├─ Last 10 messages from Session Memory Buffer            │
│ ├─ System prompt (memory management rules)                │
│ └─ Connection to GPT-4o-mini and tools                    │
│                                                             │
├─ AI Decision Making:                                      │
│ ├─ Read context and user message                          │
│ ├─ Generate response                                       │
│ ├─ Decide what facts to save (→ Save Memory tool)         │
│ └─ Decide what notes to capture (→ Save Note tool)        │
│                                                             │
├─ Parallel: Save Memory to Google Docs                     │
│ └─ Append new facts (if AI decided to save)               │
│                                                             │
├─ Parallel: Save Note to Google Docs                       │
│ └─ Append new notes (if AI decided to save)               │
│                                                             │
├─ Output to Chat:                                          │
│ ├─ Format Chat Response → n8n Web UI                     │
│ └─ Send Telegram Reply → User's Telegram                 │
│                                                             │
└─ Complete: User receives response & AI remembers ──────────┘

Testing Your Workflow

Now that your workflow is built, let’s test it thoroughly:

Test Plan

  1. Deploy the workflow and activate it in n8n
  2. Open the Chat Trigger and test with a simple message: "Hi, my name is Sarah and I like coffee."
  3. Check the Google Docs — The Memories doc should now have a new entry with Sarah’s name
  4. Send another message: "What do I like to drink?" The AI should reference the coffee preference from the previous message
  5. Test Telegram output (if enabled) by checking your Telegram chat — the response should arrive there too
  6. Clear the Notes doc and send a message with a reminder: "Remind me to follow up with the marketing team tomorrow." Check that the note appears in the Notes doc
  7. Simulate a delay — Wait 10 minutes, come back, and test that the AI still remembers Sarah. This confirms persistence

Troubleshooting Table

Issue | Likely Cause | Fix
Chat Trigger shows “Webhook not working” | Workflow not saved or deployed | Click Save, then Deploy. Check that the webhook URL is active in the node.
Google Docs nodes fail with auth error | OAuth2 credential not connected or expired | Re-authenticate the Google Docs credential. Grant permission for Docs read/write.
AI response is generic, not remembering user | Memories not being retrieved or system prompt unclear | Check that the Load Memories node is pulling data. Review the system prompt in the AI Agent node for clarity.
Memory saves fail silently (no error, but doc not updated) | Google Doc ID is incorrect or doc doesn’t exist | Verify the Document ID in the Save Memory and Save Note nodes. Double-check the URL in Google Docs.
Telegram message doesn’t send | Bot token invalid or chat ID missing | Confirm the bot token with BotFather. Test the chat ID manually using curl or Postman.
Workflow is slow (>5 second response) | Large memory doc or network latency | Archive old memories to a separate doc. Consider pagination in future versions.

Frequently Asked Questions

How much data can Google Docs store?

Google Docs caps a single document at roughly 1.02 million characters, which works out to about 150,000–200,000 words. For a personal chatbot writing one-line memories, that is still tens of thousands of entries, enough for years of regular use. For multi-user systems, consider archiving old memories after 6 months.
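
To sanity-check that against your own usage, the arithmetic is simple. A back-of-the-envelope sketch (all three numbers are assumptions):

// Rough capacity estimate; all three numbers are assumptions
const charLimit = 1_020_000;   // approximate Google Docs character cap
const avgMemoryLength = 60;    // chars per "[date] - Topic: Detail" line
const memoriesPerDay = 20;

const daysUntilFull = charLimit / (avgMemoryLength * memoriesPerDay);
console.log(Math.floor(daysUntilFull)); // ~850 days at this pace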

Why Google Docs instead of a real database?

Simplicity. No backend servers, no authentication headaches, no costs, and you can read/edit memories by hand. For production systems with thousands of users, you’d want PostgreSQL or MongoDB. For personal projects and small teams, Google Docs is perfect.

Can multiple users use the same workflow?

Yes, but you’d want to create separate memory docs per user (or per user-group). Otherwise, all memories get mixed together. A simple fix is to add a User ID prefix in each memory entry: [04/08/2026] [user-sarah] - Preference: Oat milk.
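
With that prefix in place, a small Code node between the Load Memories step and the agent could keep only the current user's lines. A minimal sketch, assuming the [user-...] tag format from the example:

// Filter a shared memories doc down to one user's entries
// Assumes lines tagged like: [04/08/2026] [user-sarah] - Preference: ...
const userId = 'user-sarah'; // in practice, derive this from the incoming message
const allLines = ($input.first().json.data ?? '').split('\n');

const userLines = allLines.filter(line => line.includes(`[${userId}]`));
return [{ json: { data: userLines.join('\n') } }];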

What if I want to use a different LLM instead of GPT-4o-mini?

Easy. Replace the “GPT-4o Mini” node with any other LLM node that n8n supports: Claude (Anthropic), Gemini (Google), Llama (Meta), or even local models. The system prompt stays the same. Just swap the node and re-authenticate.

How do I ensure memory data is private?

The memories are stored in your Google Docs, which means Google has access (per their privacy policy). If you need end-to-end encryption, deploy n8n self-hosted and use your own database with encryption at rest. For most use cases, treating Google Docs like email (encrypted in transit, encrypted at rest on Google’s servers) is sufficient.

What if the AI forgets something it should remember?

This usually means the system prompt isn’t clear enough about what to save. Revisit the “MEMORY MANAGEMENT” section of the system prompt and be more explicit. Example: add “Save names, preferences, job titles, and timezone. Save reminders and follow-ups as notes.” You can also manually add memories to the Google Doc yourself.

Ready to Deploy This Workflow?

Get the complete, ready-to-import JSON file with all the configurations, plus detailed setup guides for OpenAI, Google Docs, and Telegram. Import in seconds and start building.

Get the Workflow & Setup Guide

Includes step-by-step credential setup, testing checklist, and common customizations.

What’s Next?

You’ve built a powerful memory-driven chatbot. Here are four directions you can take it:

  1. Add Slack Integration — Instead of (or in addition to) Telegram, receive and respond to Slack DMs. Users stay in Slack, and memories persist across platforms.
  2. Multi-User Memory Isolation — Spin up separate memory docs per user, or implement a shared “team knowledge base” doc that all users contribute to. Great for support teams.
  3. Export Memory Analytics — Create a separate workflow that reads the memory doc weekly and generates a summary email. “Sarah asked 12 questions about n8n, preferred 5 templates, and wants to learn automation.”
  4. Archive Old Memories — Set up a scheduled n8n workflow that runs monthly, archives memories older than 6 months to a separate “Archive” doc, and clears the active memory doc. Keeps performance snappy.

How to Build an AI-Powered Zoho CRM Lead Assistant with n8n and OpenAI

Stop clicking through Zoho CRM forms every time a new lead comes in. What if you could just type “Add Sarah Thompson from Vertex Solutions, email sarah@vertexsolutions.com” and have it done instantly? This n8n workflow builds exactly that — an AI-powered CRM assistant that understands plain English, connects to your Zoho CRM through the Model Context Protocol (MCP), and handles lead creation, updates, lookups, and deletions on command. You talk to it like a teammate, and it handles the CRM busywork.

Prefer to skip the setup? Grab the ready-made template → and be up and running in under 10 minutes.

What You’ll Build

  1. You type a message into n8n’s built-in chat panel — something like “Create a lead for James Carter at NovaTech, email james.carter@novatech.io, source is web form.”
  2. An AI Agent powered by OpenAI reads your message, figures out which CRM action you need, and calls the right Zoho tool automatically.
  3. The lead is created (or updated, retrieved, or deleted) in your Zoho CRM — and the AI confirms what it did in plain English.
  4. The conversation has memory, so you can follow up naturally: “Now update his status to Contacted” without repeating details.

How It Works — The Big Picture

This workflow uses n8n’s Model Context Protocol (MCP) architecture to bridge an AI chatbot with your Zoho CRM. It has two connected sections running inside a single workflow: an MCP Server that exposes CRM operations as callable tools, and an AI Agent that interprets your messages and calls those tools.

┌──────────────────────────────────────────────────────────────────────────────┐
│  AI-POWERED ZOHO CRM LEAD MANAGER                                          │
│                                                                              │
│  SECTION A: MCP SERVER                                                       │
│  ┌─────────────────────┐                                                     │
│  │  MCP Server Trigger  │◄── Create Lead ── Delete Lead ── Get Lead          │
│  │  (exposes tools)     │◄── Get All Leads ── Update Lead                    │
│  └─────────┬───────────┘                                                     │
│            │ MCP endpoint                                                    │
│            ▼                                                                 │
│  SECTION B: AI AGENT                                                         │
│  ┌──────────────┐    ┌──────────────┐    ┌─────────────────┐                │
│  │ Chat Trigger  │───▶│  AI Agent     │◄───│ OpenAI GPT-4o   │                │
│  │ (user input)  │    │ (orchestrator)│◄───│ mini            │                │
│  └──────────────┘    │              │◄───│ Conversation    │                │
│                      │              │◄───│ Memory          │                │
│                      │              │◄───│ MCP Client      │───▶ Section A  │
│                      └──────────────┘    └─────────────────┘                │
└──────────────────────────────────────────────────────────────────────────────┘
  

What You’ll Need

  • n8n instance — Cloud or self-hosted, version 1.60 or later (MCP nodes require a recent version)
  • Zoho CRM account — Free tier works. You need API access via a Zoho Developer Console app.
  • OpenAI API key — A funded account with access to gpt-4o-mini (or any model you prefer)

Estimated build time: 30–40 minutes from scratch, or under 10 minutes with the template.

Part 1 — The MCP Server (Zoho CRM Tools)

The first section of this workflow creates a private API endpoint that exposes five Zoho CRM operations as tools. Any MCP-compatible client — including the AI Agent in Section B — can discover and call these tools automatically.

1 MCP Server Trigger

This node is the backbone of the MCP architecture. It creates a webhook endpoint that advertises all connected Zoho CRM tool nodes as callable functions. When an MCP client connects, it receives a manifest listing every available tool with its parameters — the client (your AI Agent) then knows exactly what it can do.

  1. Add an MCP Server Trigger node to your canvas (found under Langchain → Triggers).
  2. Leave the default path or set a custom one — n8n will generate the full Production URL for you.
  3. After saving the workflow, copy the Production URL from this node — you will need it for the MCP Client in Section B.
💡

Tip: The MCP Server Trigger only advertises tools that are connected to it. If you add a new Zoho operation later, make sure to wire it into this trigger node or the AI Agent won’t see it.

2 Create Lead in Zoho CRM (Zoho CRM Tool)

This tool node lets the AI create new leads. It uses n8n’s $fromAI() expression for every field, which means the AI Agent decides what values to fill based on the user’s message — you don’t hardcode anything.

  1. Add a Zoho CRM Tool node and set Resource to Lead, Operation to Create.
  2. In the Last Name field, enter: ={{ $fromAI('Last_Name', '', 'string') }}
  3. In the Company field, enter: ={{ $fromAI('Company', '', 'string') }}
  4. Expand Additional Fields and add: Email, Mobile, Website, First_Name, Lead_Source, Lead_Status — each using the same $fromAI() pattern.
  5. Select your Zoho OAuth2 API credential.
  6. Connect this node’s AI Tool output to the MCP Server Trigger’s AI Tool input.

When the AI Agent receives “Create a lead for Emily Rodriguez at BrightPath Marketing, email emily.rodriguez@brightpath.com,” it will fill in:

{
  "First_Name": "Emily",
  "Last_Name": "Rodriguez",
  "Company": "BrightPath Marketing",
  "Email": "emily.rodriguez@brightpath.com",
  "Lead_Source": "Chat",
  "Lead_Status": "Not Contacted"
}

3 Get Lead from Zoho CRM (Zoho CRM Tool)

Retrieves a single lead by its Zoho record ID. The AI uses this when you ask something like “Show me the details for lead 5765430000012345.”

  1. Add another Zoho CRM Tool node. Set Resource to Lead, Operation to Get.
  2. Set Lead ID to: ={{ $fromAI('Lead_ID', '', 'string') }}
  3. Connect to the MCP Server Trigger.

4 Get All Leads from Zoho CRM (Zoho CRM Tool)

Lists every lead in your CRM. Useful when you type “Show me all my leads” or “How many leads do we have?”

  1. Add a Zoho CRM Tool node. Set Resource to Lead, Operation to Get All.
  2. Toggle Return All to true so the AI gets the complete list.
  3. Connect to the MCP Server Trigger.
📌

If you have thousands of leads, returning all of them at once can be slow and expensive (OpenAI tokens). Consider toggling Return All off and setting a limit of 50–100 for large CRM databases.

5 Update Lead in Zoho CRM (Zoho CRM Tool)

Modifies an existing lead. The AI extracts the Lead ID and whichever fields need changing from your message.

  1. Add a Zoho CRM Tool node. Set Resource to Lead, Operation to Update.
  2. Set Lead ID to ={{ $fromAI('Lead_ID', '', 'string') }}
  3. Under Update Fields, add: Company, Last_Name, First_Name, Description, Lead_Source, Lead_Status — each with $fromAI() expressions.
  4. Connect to the MCP Server Trigger.

Example: “Update lead 5765430000012345 — change status to Qualified and add a note: Scheduled demo for April 15.”

{
  "Lead_ID": "5765430000012345",
  "Lead_Status": "Qualified",
  "Description": "Scheduled demo for April 15"
}

6 Delete Lead in Zoho CRM (Zoho CRM Tool)

Removes a lead permanently. The AI only calls this when you explicitly request deletion and provide an ID.

  1. Add a Zoho CRM Tool node. Set Resource to Lead, Operation to Delete.
  2. Set Lead ID to ={{ $fromAI('Lead_ID', '', 'string') }}
  3. Connect to the MCP Server Trigger.
💡

Tip: The AI Agent’s system prompt instructs it to confirm before destructive actions. If you want extra safety, you can add an IF node before the Delete tool that requires a confirmation flag.


Part 2 — The AI Agent (Chat Interface)

This section is the front door. It receives your messages, processes them through an AI model, and decides which Zoho CRM tool to call via the MCP connection.

7 When Chat Message Received (Chat Trigger)

This trigger opens n8n’s built-in chat panel, giving you a familiar messaging interface to interact with the AI Agent.

  1. Add a Chat Trigger node (Langchain → Triggers).
  2. Leave defaults — no special configuration needed.
  3. This node’s output connects to the AI Agent node.

8 AI Agent (Agent)

The orchestrator. It receives the user’s message, uses OpenAI to understand intent, selects the right Zoho tool via MCP, executes the action, and returns a natural-language summary.

  1. Add an AI Agent node (Langchain → Agents).
  2. In Options → System Message, enter:
You are an AI assistant that helps manage leads in Zoho CRM.
When the user asks to create, update, retrieve, or delete a lead,
use the appropriate Zoho CRM tool via the MCP connection.
Always confirm the action taken and summarize the result in natural language.
If the user asks to list leads, present them in a clean, readable format.
If a request is ambiguous, ask for clarification before proceeding.
  1. Connect the Chat Trigger’s output to this node’s input.
  2. This node has three sub-inputs: Language Model, Memory, and Tools. You will connect the next three nodes to these.

9 OpenAI Chat Model

The brain powering the AI Agent. We use gpt-4o-mini for a good balance of intelligence and cost — it handles tool selection and natural language reliably without running up a large bill.

  1. Add an OpenAI Chat Model node (Langchain → Language Models).
  2. Select gpt-4o-mini from the model dropdown (or use gpt-4o if you want stronger reasoning).
  3. Select your OpenAI API credential.
  4. Connect this node’s AI Language Model output to the AI Agent’s Language Model input.

10 Conversation Memory (Buffer Window)

Keeps the last few exchanges in context so you can have multi-turn conversations. Without this, the AI would forget everything after each message.

  1. Add a Memory Buffer Window node (Langchain → Memory).
  2. Leave defaults — the default window size of 5 messages works well for CRM tasks.
  3. Connect its AI Memory output to the AI Agent’s Memory input.

This is what makes exchanges like this possible:

You: "Create a lead for Michael Chen at Pinnacle Labs, email michael.chen@pinnaclelabs.com"
AI:  "Done! Lead created for Michael Chen at Pinnacle Labs."
You: "Great, now change his status to Qualified"
AI:  "Updated Michael Chen's lead status to Qualified."

11 MCP Client (Tool)

This is the bridge between the AI Agent and the MCP Server from Section A. It connects to the MCP Server Trigger’s endpoint and pulls in all available Zoho tools — the Agent can then call any of them.

  1. Add an MCP Client Tool node (Langchain → Tools).
  2. In the Endpoint URL field, paste the Production URL you copied from the MCP Server Trigger in Step 1.
  3. Connect its AI Tool output to the AI Agent’s Tools input.
💡

Tip: If your n8n instance is behind a firewall, make sure the MCP Server Trigger’s URL is reachable from the same instance. For self-hosted setups, http://localhost:5678/mcp/YOUR_PATH works. For n8n Cloud, use the full production URL.


The MCP Architecture — Why It Matters

You might wonder: why not connect the Zoho CRM nodes directly to the AI Agent as tools? You absolutely can, and for simpler setups that works fine. The MCP architecture adds a layer of separation that pays off in three ways:

  • Reusability — Once your Zoho MCP Server is running, any MCP-compatible client can use it. You could connect a second AI Agent from a different workflow, a Claude Desktop integration, or any tool that speaks MCP.
  • Modularity — You can add new Zoho operations (deals, contacts, tasks) to the MCP Server without touching the AI Agent. It discovers new tools automatically.
  • Scalability — In a team setting, one person maintains the CRM tools while another builds AI interfaces on top. Clean separation of concerns.

Full System Flow

┌─────────────────────────────────────────────────────────────────────────────┐
│                                                                             │
│   USER: "Create a lead for Sarah Thompson at Vertex Solutions"              │
│     │                                                                       │
│     ▼                                                                       │
│   [Chat Trigger] ──▶ [AI Agent] ──▶ interprets intent                      │
│                         │    ▲                                               │
│                         │    │ context from [Conversation Memory]            │
│                         │    │ reasoning from [OpenAI GPT-4o-mini]          │
│                         ▼                                                    │
│                    [MCP Client] ──▶ calls MCP Server endpoint               │
│                         │                                                    │
│                         ▼                                                    │
│                  [MCP Server Trigger] ──▶ selects tool                      │
│                         │                                                    │
│                         ▼                                                    │
│               [Create Lead in Zoho CRM]                                     │
│                         │                                                    │
│                         ▼                                                    │
│               ZOHO CRM: Lead created                                        │
│                         │                                                    │
│                         ▼                                                    │
│   AI: "Done! Created lead for Sarah Thompson at Vertex Solutions."          │
│                                                                             │
└─────────────────────────────────────────────────────────────────────────────┘
  

Testing Your Workflow

Before toggling Active, run through these checks:

  1. Open the chat panel — click the Chat icon in n8n’s sidebar (or execute the workflow manually).
  2. Create a test lead: type “Create a lead for Test User at Test Corp, email test@testcorp.com”
  3. Verify in Zoho CRM: log in to your Zoho CRM and confirm the lead appeared with the correct details.
  4. Test retrieval: copy the Lead ID from Zoho and type “Get details for lead [ID]”
  5. Test update: type “Update lead [ID] — set status to Contacted”
  6. Test listing: type “Show me all leads”
  7. Test memory: create a lead, then immediately ask “Now change their company to Acme Corp” — the AI should remember the lead from the previous message.
Problem | Likely Cause | Fix
AI says “I don’t have tools to do that” | MCP Client isn’t connected or URL is wrong | Re-copy the Production URL from the MCP Server Trigger and paste it into the MCP Client’s Endpoint URL field
Zoho returns “INVALID_TOKEN” | OAuth2 token expired | Go to Credentials → Zoho OAuth2 API → click Reconnect to refresh the token
Lead created with missing fields | AI couldn’t extract all values from your message | Be more explicit: include first name, last name, company, and email in your message
“Could not connect to MCP endpoint” | Workflow not activated or wrong URL | Make sure the workflow is saved and Active. For self-hosted, verify the URL starts with your instance domain
AI forgets previous context | Memory node not connected | Check that the Conversation Memory node is wired into the AI Agent’s Memory input

Frequently Asked Questions

Can I use a different AI model instead of OpenAI?

Yes. Swap the OpenAI Chat Model node for any Langchain-compatible model node in n8n — Anthropic Claude, Google Gemini, or a local model via Ollama. The MCP architecture is model-agnostic, so the Zoho CRM tools work the same regardless of which AI powers the agent.

Does this work with Zoho CRM’s free plan?

Zoho CRM’s free plan supports API access for up to three users, which is enough for this workflow. You’ll need to create a Server-based Client in the Zoho Developer Console to get your OAuth credentials. The only limitation is that some advanced CRM fields may not be available on the free tier.

Can I add more CRM operations — like managing deals or contacts?

Absolutely. Add new Zoho CRM Tool nodes for Deals, Contacts, Tasks, or any other Zoho CRM resource. Wire them into the MCP Server Trigger, and the AI Agent will discover them automatically on its next connection. No changes needed on the Agent side.

How much does it cost to run per message?

Using gpt-4o-mini, each CRM interaction costs roughly $0.001–$0.003 depending on the conversation length and the size of data returned from Zoho. Even heavy daily use (100+ messages) would stay well under $1/day. Switching to gpt-4o raises the cost roughly 15x but gives stronger reasoning.
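
To check those numbers against your own traffic, the math is a one-liner. A sketch with illustrative token counts (prices are gpt-4o-mini's current list prices and may change):

// Per-message cost estimate for gpt-4o-mini (token counts are illustrative)
const inputTokens = 1500;   // prompt + tool definitions + CRM data returned
const outputTokens = 200;   // the agent's reply
const inputPrice = 0.15 / 1_000_000;   // $ per input token
const outputPrice = 0.60 / 1_000_000;  // $ per output token

const cost = inputTokens * inputPrice + outputTokens * outputPrice;
console.log(cost); // ≈ $0.00035 per message at these volumes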

Is my CRM data secure? Does OpenAI see my lead information?

When using the OpenAI API (not ChatGPT), OpenAI does not train on your data. However, lead details are sent to OpenAI’s servers for processing. If this is a concern, you can swap in a self-hosted model via Ollama — the MCP setup works identically, and your data never leaves your infrastructure.

Can multiple team members use this at the same time?

The chat interface is per-session in n8n, so each user gets their own conversation thread and memory. Multiple people can use it concurrently — each session’s Conversation Memory buffer is independent. For team-wide access, consider triggering the AI Agent via a Telegram or Slack bot instead of the built-in chat.

Get the Zoho CRM AI Lead Assistant Template

Skip the 30-minute build — get the complete workflow JSON, a step-by-step setup guide, and a credentials walkthrough for every API key you need.

Get the Template →

Instant download · Works on n8n Cloud and self-hosted

What’s Next?

  • Add a Telegram or Slack trigger — manage your CRM leads from your phone without opening n8n at all.
  • Expand to Deals and Contacts — add more Zoho CRM Tool nodes to the MCP Server and manage your entire pipeline.
  • Integrate lead scoring — add a Code node that scores leads based on custom rules before the AI creates them.
  • Build a daily summary — add a Schedule Trigger that asks the AI Agent “Summarize all leads created today” and sends the result via email.

How to Build an Aave DeFi Portfolio Monitor with n8n and AI

Build an automated Aave V3 portfolio monitoring system using n8n, GPT-4o-mini, and the Moralis API. This guide walks you through creating a professional-grade workflow that analyzes your DeFi positions hourly and delivers insights via Telegram and email.

What You’ll Build: A scheduled workflow that reads your Ethereum wallet addresses from Google Sheets, queries the Moralis API for real-time Aave V3 data, uses AI to synthesize insights about health factors and liquidation risk, and automatically sends reports to your phone via Telegram and detailed records via Gmail.

Key Features:

  • Hourly automated execution with zero manual intervention
  • Multi-wallet support—monitor as many positions as you need
  • AI-powered analysis using GPT-4o-mini for intelligent insights
  • Real-time Aave V3 data including health factors and collateral breakdown
  • Dual delivery: Telegram for quick alerts, email for detailed records

Requirements: n8n account, Google Sheets API, OpenAI API key, Moralis API key, Telegram Bot, and Gmail account. No coding experience needed.

Get Started: Download the complete workflow template including setup guide and credentials walkthrough at: easyworkflows.net/downloads/aave-defi-portfolio-ai-agent-n8n/

How to Add LinkedIn Post Commenters to HubSpot CRM with n8n

You publish a LinkedIn post, it takes off, and suddenly forty people you’ve never met are commenting on it. Each one of them is a warm lead — someone who already cares about what you have to say. But by the time you’ve finished scrolling through the comments, copying names, and looking people up one by one, the momentum is gone. What if every single one of those commenters appeared in your CRM automatically, complete with their email, job title, and company — ready for your sales team to follow up?

That’s exactly what you’ll build in this guide. Using n8n, Apify, and HubSpot, you’ll create a workflow that scrapes commenters from any LinkedIn post, enriches their profiles with professional data, and pushes qualified contacts straight into your CRM. No manual data entry, no copy-pasting, no missed opportunities.

Prefer to skip the setup? Grab the ready-made template → and be up and running in under 10 minutes.

What You’ll Build

  1. You paste a LinkedIn post URL into a simple web form hosted by n8n.
  2. The workflow scrapes every comment on that post and extracts the commenter’s LinkedIn profile URL.
  3. Each commenter’s profile is enriched through Apify — pulling their email address, job title, company name, city, and country.
  4. If a valid email is found, the workflow creates or updates a contact in HubSpot CRM with all the enriched data.
  5. Commenters without a discoverable email are silently skipped, keeping your CRM clean.

How It Works — The Big Picture

The entire process runs through a single n8n workflow with seven core nodes. Here’s the high-level flow:

┌──────────────────────────────────────────────────────────────────────────────────────┐
│  LINKEDIN COMMENTERS → HUBSPOT CRM                                                  │
│                                                                                      │
│  [Form Trigger]                                                                      │
│       │                                                                              │
│       ▼                                                                              │
│  [Scrape Post Comments]  ──▶  [Loop Over Commenters]                                 │
│                                     │                                                │
│                                     ▼                                                │
│                              [Wait 3 sec]                                            │
│                                     │                                                │
│                                     ▼                                                │
│                              [Enrich Profile]                                        │
│                                     │                                                │
│                                     ▼                                                │
│                              [Extract Fields]                                        │
│                                     │                                                │
│                                     ▼                                                │
│                              [Has Valid Email?]                                      │
│                               ╱            ╲                                         │
│                           YES ╱              ╲ NO                                    │
│                             ╱                  ╲                                     │
│                [Create/Update           [Skip — No Email]                            │
│                 HubSpot Contact]              │                                      │
│                        │                      │                                      │
│                        └───── ◀── Loop ──◀────┘                                      │
└──────────────────────────────────────────────────────────────────────────────────────┘
  

What You’ll Need

  • n8n instance — self-hosted or n8n Cloud (all nodes are built-in, no community nodes required)
  • Apify account — the free tier gives you $5/month of compute, enough to process around 50–100 profiles per run.
  • HubSpot account — free CRM tier works perfectly. You’ll need a Private App Token with crm.objects.contacts.write scope.

Estimated build time: 30–40 minutes from scratch, or under 10 minutes with the template.

Building the Workflow Step by Step

1 Submit LinkedIn Post URL (Form Trigger)

The workflow starts with n8n’s built-in Form Trigger node. This creates a simple web form where you paste the LinkedIn post URL. When you submit the form, the workflow fires.

  1. Add a Form Trigger node to your canvas.
  2. Set the Form Title to LinkedIn Post Engagement Capture.
  3. Add one form field: label it LinkedIn Post URL, set it as required, and add a placeholder like https://www.linkedin.com/posts/username_topic-activity-123...
  4. Save the node. n8n will generate a unique form URL you can bookmark.

After submission, the data flowing out of this node looks like:

{
  "LinkedIn Post URL": "https://www.linkedin.com/posts/james-carter_ai-automation-activity-7291234567890-AbCd",
  "submittedAt": "2026-04-08T14:22:00.000Z"
}
💡

Tip: You can find the URL of any LinkedIn post by clicking the three dots on the post and selecting “Copy link to post.” Make sure you’re copying the full URL, not a shortened one.

2 Scrape Post Comments (HTTP Request → Apify)

This node sends the LinkedIn post URL to Apify’s LinkedIn Post Comments Scraper actor, which returns an array of all commenters with their profile URLs and comment text.

  1. Add an HTTP Request node and connect it to the Form Trigger.
  2. Set the Method to POST.
  3. Set the URL to: https://api.apify.com/v2/acts/curious_coder~linkedin-post-comments-scraper/run-sync-get-dataset-items?token={{ $credentials.httpHeaderAuth.value }}
  4. Under Body, select JSON and enter:
    {
      "postUrl": "{{ $json['LinkedIn Post URL'] }}",
      "maxComments": 100
    }
  5. Set the Timeout to 120000 ms (2 minutes) — scraping takes time.
  6. Authenticate with your Apify API token. Either add it as an HTTP Header Auth credential (header name: Authorization, value: Bearer YOUR_TOKEN), or replace the token expression in the URL with your actual token. You only need one of the two.

The response is an array of comment objects. Each item looks like:

{
  "profileUrl": "https://www.linkedin.com/in/emily-rodriguez-marketing",
  "commenterName": "Emily Rodriguez",
  "commentText": "This is such a great breakdown! We've been looking at something similar.",
  "timestamp": "2026-04-07T09:15:00.000Z"
}
💡

Tip: The maxComments parameter caps how many comments are scraped. For posts with hundreds of comments, start with 50 to test, then increase once you’ve confirmed the workflow runs smoothly.

3 Loop Over Commenters (Split In Batches)

The comment scraper returns all comments as an array. The Loop node processes them one at a time, which is important because the enrichment step hits an external API that has rate limits.

  1. Add a Split In Batches node and connect it to the HTTP Request node.
  2. Set Batch Size to 1 — we process one commenter per iteration.

The loop feeds each individual commenter object to the next node in sequence.

4 Wait Between Requests (Wait Node)

Adding a 3-second pause between enrichment requests prevents you from hitting Apify’s rate limits and keeps the workflow running reliably even with large comment batches.

  1. Add a Wait node after the loop’s “process” output.
  2. Set Amount to 3 and Unit to seconds.
📌

If you’re on Apify’s paid plan with higher rate limits, you can reduce this to 1 second or remove it entirely.
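
When sizing a run, note that the wait and enrichment latency dominate total time. A quick estimate (the average enrichment latency is an assumption within the 10–30 second range noted in the testing section below):

// Estimated total run time for a batch of commenters
const commenters = 100;
const waitSeconds = 3;      // the Wait node above
const enrichSeconds = 20;   // assumed average Apify enrichment latency

const totalMinutes = (commenters * (waitSeconds + enrichSeconds)) / 60;
console.log(totalMinutes.toFixed(1)); // ≈ 38 minutes for 100 profiles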

5 Enrich Profile (HTTP Request → Apify)

This is where the magic happens. For each commenter, we call Apify’s LinkedIn Profile Scraper to pull their full professional details — email, headline, company, location, and more.

  1. Add another HTTP Request node.
  2. Set the Method to POST.
  3. Set the URL to: https://api.apify.com/v2/acts/curious_coder~linkedin-profile-scraper/run-sync-get-dataset-items?token={{ $credentials.httpHeaderAuth.value }}
  4. Under Body, set JSON to:
    {
      "profileUrls": ["{{ $json.profileUrl }}"]
    }
  5. Set the Timeout to 120000 ms.
  6. Use the same Apify credential as Step 2.

The enriched profile data comes back looking something like this:

{
  "firstName": "Emily",
  "lastName": "Rodriguez",
  "email": "emily.rodriguez@techcorp.com",
  "headline": "VP of Marketing at TechCorp",
  "company": "TechCorp Inc.",
  "city": "Austin",
  "country": "US",
  "url": "https://www.linkedin.com/in/emily-rodriguez-marketing",
  "connections": 2847
}

6 Extract Profile Fields (Set Node)

The enrichment response contains dozens of fields. This Set node extracts only the ones you actually need for your CRM, creating a clean, standardized data object.

  1. Add a Set node and connect it to the Enrich Profile node.
  2. Switch to Manual mode.
  3. Map these fields:
    Output Field Expression
    email ={{ $json.email }}
    firstName ={{ $json.firstName }}
    lastName ={{ $json.lastName }}
    jobTitle ={{ $json.headline }}
    company ={{ $json.company }}
    city ={{ $json.city }}
    country ={{ $json.country }}
    linkedinUrl ={{ $json.url }}

After this node, each item has a clean, flat structure:

{
  "email": "emily.rodriguez@techcorp.com",
  "firstName": "Emily",
  "lastName": "Rodriguez",
  "jobTitle": "VP of Marketing at TechCorp",
  "company": "TechCorp Inc.",
  "city": "Austin",
  "country": "US",
  "linkedinUrl": "https://www.linkedin.com/in/emily-rodriguez-marketing"
}

7 Has Valid Email? (IF Node)

Not every LinkedIn profile has a publicly discoverable email address. This IF node checks whether the enrichment found a valid email before attempting to create a CRM contact.

  1. Add an IF node.
  2. Add two conditions (AND):
    • {{ $json.email }} exists
    • {{ $json.email }} is not empty

The true branch goes to HubSpot. The false branch goes to a No-Op node that loops back, silently skipping the commenter.

💡

Tip: Apify typically finds emails for 40–60% of LinkedIn profiles. If you need higher hit rates, consider adding a secondary enrichment service like Hunter.io or Apollo as a fallback before the IF node.
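
If you want validation stricter than "exists and is not empty", a small Code node before the IF check could blank out malformed addresses. A hedged sketch with a deliberately loose format test:

// Blank out obviously malformed emails so the IF node routes them to Skip
const item = $input.first().json;
const looksValid = /^[^\s@]+@[^\s@]+\.[^\s@]+$/.test(item.email ?? '');

return [{ json: { ...item, email: looksValid ? item.email : '' } }];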

8 Create or Update HubSpot Contact (HubSpot Node)

The final action node. It takes the enriched profile data and creates a new contact in HubSpot — or updates an existing one if a contact with the same email already exists.

  1. Add a HubSpot node.
  2. Set Authentication to App Token.
  3. Set the Email field to ={{ $json.email }}.
  4. Under Additional Fields, map:
    HubSpot Field Expression
    First Name ={{ $json.firstName }}
    Last Name ={{ $json.lastName }}
    Job Title ={{ $json.jobTitle }}
    Company Name ={{ $json.company }}
    City ={{ $json.city }}
    Country ={{ $json.country }}
    Website ={{ $json.linkedinUrl }}

Connect both the HubSpot node’s output and the “Skip — No Email” node back to the Loop node to continue processing the next commenter.

📌

The HubSpot node uses upsert behavior by default — if a contact with the same email already exists, it updates their fields instead of creating a duplicate. This keeps your CRM clean even if you run the workflow on multiple posts where the same people comment.

The Data Flow

Here’s what a contact record looks like as it moves through the system, from raw LinkedIn comment to polished CRM entry:

Stage | Data Available | Example
After Comment Scrape | Profile URL, Name, Comment Text | emily-rodriguez-marketing
After Enrichment | + Email, Headline, Company, City, Country | emily.rodriguez@techcorp.com, VP of Marketing
After Field Extraction | Clean 8-field object ready for CRM | Flat JSON with all mapped fields
In HubSpot | Full contact record with source tracking | Contact card with LinkedIn URL as website

Full System Flow

┌──────────────────────────────────────────────────────────────────────────────┐
│                                                                              │
│   USER                     n8n WORKFLOW                      HUBSPOT CRM     │
│                                                                              │
│   Pastes URL ──▶ [Form Trigger]                                              │
│                       │                                                      │
│                       ▼                                                      │
│                  [HTTP Request] ──▶ Apify Comments API                       │
│                       │                                                      │
│                       ▼                                                      │
│                  [Loop: 1 at a time]                                          │
│                       │                                                      │
│                       ▼                                                      │
│                  [Wait 3 sec]                                                │
│                       │                                                      │
│                       ▼                                                      │
│                  [HTTP Request] ──▶ Apify Profile API                        │
│                       │                                                      │
│                       ▼                                                      │
│                  [Set: Extract Fields]                                        │
│                       │                                                      │
│                       ▼                                                      │
│                  [IF: Email exists?]                                          │
│                    ╱        ╲                                                 │
│                YES            NO                                             │
│                 ╱                ╲                                            │
│   [HubSpot: Upsert]     [Skip: No-Op]                                       │
│         │                      │           ──▶  Contact created/updated      │
│         └──── Loop back ◀──────┘                                             │
│                                                                              │
└──────────────────────────────────────────────────────────────────────────────┘
  

Testing Your Workflow

  1. Find a test post. Use one of your own LinkedIn posts that has at least 5–10 comments. Avoid posts with hundreds of comments for your first test — keep it small.
  2. Open the form. Click “Test workflow” in n8n, then open the form URL in your browser. Paste the LinkedIn post URL and submit.
  3. Watch the execution. In n8n, you’ll see the workflow run node by node. The comment scraping takes 30–60 seconds. Each profile enrichment takes another 10–30 seconds.
  4. Check HubSpot. Open your HubSpot contacts list and look for the newly created records. Verify that the name, email, job title, and company are populated correctly.
  5. Review skipped contacts. Check the workflow execution log — any commenters without emails will show as passing through the “Skip — No Email” branch.
Problem | Likely Cause | Fix
No comments returned | Post URL is incorrect or post has no comments | Copy the URL directly from the post’s share menu; make sure the post is public
Enrichment returns empty data | Apify token is invalid or has run out of credits | Check your Apify dashboard for remaining credits and regenerate the token
HubSpot returns 401 error | App Token doesn’t have the right scopes | In HubSpot, edit your Private App and ensure crm.objects.contacts.write is enabled
Workflow times out | Too many comments + enrichment is slow | Reduce maxComments to 50, or increase the HTTP Request timeout to 180000 ms
Duplicate contacts in CRM | Email field is mapped incorrectly | Make sure the HubSpot node’s Email field uses the exact expression ={{ $json.email }}

Frequently Asked Questions

Does this work with n8n Cloud or only self-hosted?

It works with both. Every node in this workflow is a standard n8n built-in node — no community nodes required. The Apify calls are made through regular HTTP Request nodes, so there’s nothing extra to install.

How many comments can it handle per run?

The template is set to scrape up to 100 comments per post. You can increase this by changing the maxComments parameter, but keep in mind that each profile enrichment uses Apify compute credits. A batch of 100 profiles typically costs around $1–2 on the free tier.

What if a commenter is already in my HubSpot CRM?

The HubSpot node uses upsert logic — it matches on email address. If a contact with that email already exists, their record gets updated with the latest data instead of creating a duplicate. Your CRM stays clean no matter how many times you run it.

Can I use a different CRM instead of HubSpot?

Yes. Swap the HubSpot node for any CRM node that n8n supports — Salesforce, Pipedrive, Zoho CRM, or even a Google Sheets node if you want a lightweight approach. The enrichment pipeline stays the same; you just change the final destination.

Is scraping LinkedIn comments against their terms of service?

Apify handles the data collection through their compliant infrastructure. The data accessed is publicly visible comment information. That said, always review LinkedIn’s current terms and your local data protection regulations before using any automation at scale.

What percentage of profiles actually have an email?

Apify’s LinkedIn Profile Scraper typically discovers email addresses for 40–60% of profiles, depending on the industry and how complete people’s profiles are. B2B professionals in tech and marketing tend to have higher hit rates.

Get the LinkedIn Commenters to HubSpot CRM Template

Skip the 40-minute build. Get the pre-built workflow JSON, step-by-step setup guide, and credentials walkthrough — import it into n8n and start capturing leads in under 10 minutes.

Get the Template →

Instant download · Works on n8n Cloud and self-hosted

What’s Next?

  • Add a Slack notification — get a message in your team channel every time a new contact is added to HubSpot, with their name, company, and the post they commented on.
  • Tag contacts by post topic — use a Set node to add a custom HubSpot property that records which post the contact engaged with, so your sales team knows what they’re interested in.
  • Chain with an email sequence — connect HubSpot to your email tool (Mailchimp, SendGrid, or HubSpot’s own sequences) to automatically send a welcome email to new contacts.
  • Schedule it to run on multiple posts — replace the Form Trigger with a Schedule Trigger and a list of post URLs in a Google Sheet to process several posts on autopilot.

How to A/B Test AI Prompts with n8n, Supabase, and OpenAI

Most AI builders pick their chatbot’s system prompt based on gut feeling. They write something that sounds good, deploy it, and hope for the best. But what if you could actually test two prompt variants on real users and measure which one performs better? This n8n workflow does exactly that: it randomly assigns users to either a baseline or alternative system prompt, remembers their assignment, and lets you collect data on which version gets better results. No guessing. Just data-driven prompt optimization.

Prefer to skip the setup? Grab the ready-made template and import it into your n8n instance in minutes — get the A/B testing template here.


What You’ll Build

By the end of this guide, you’ll have a fully functional A/B testing system that:

  1. Accepts incoming chat messages with user session IDs
  2. Stores two distinct system prompt variants (baseline and alternative)
  3. Checks whether the session has been assigned to a test group before
  4. Automatically assigns new users to one of the two variants using a 50/50 random split
  5. Ensures returning users always see the same prompt variant they were originally assigned
  6. Passes the correct prompt to your AI agent (OpenAI GPT-4o-mini)
  7. Maintains full conversation history in PostgreSQL so the AI remembers previous messages
  8. Records session and test assignment data for later analysis

How It Works — The Big Picture

Here’s the flow from incoming message to AI response:

┌────────────────────────────────────────────────────────────────┐
│  A/B TEST AI PROMPTS                                           │
│                                                                │
│  [Chat Trigger] → [Define Test Prompts] → [Check Session]     │
│                                              ↓                 │
│                                    [Session Assigned?]         │
│                                     ↙ Yes        ↘ No         │
│                              [Select Prompt]  [Assign Random]  │
│                                     ↘             ↙            │
│                                  [Select Prompt]               │
│                                       ↓                        │
│                                  [AI Agent]                    │
│                              (OpenAI + Memory)                 │
└────────────────────────────────────────────────────────────────┘

The workflow listens for incoming chat messages, queries Supabase to see if the user’s session already exists, and branches based on the result. New users get randomly assigned to one of two prompt variants; returning users get their original variant. Both paths converge at a single AI Agent node that uses the correct system prompt and maintains conversation memory through PostgreSQL.


What You’ll Need

  • Supabase account (free tier is fine) — you’ll need a PostgreSQL database and the ability to run SQL queries
  • OpenAI API key with access to GPT-4o-mini (cost: typically less than $1 per 1M tokens)
  • n8n instance — either n8n Cloud (free or paid plans) or self-hosted
  • Basic familiarity with n8n — understanding nodes, inputs, and outputs will help

Build time: 25–35 minutes from scratch, under 10 minutes if you import the ready-made template.


Step-by-Step Setup

1 Set Up Your Chat Trigger

Start with a Chat Trigger node (or HTTP Request if you’re building a custom endpoint). This node receives incoming user messages along with a session ID. The session ID is crucial — it’s how you identify repeat users.

Your incoming payload should look like this:

{
  "sessionId": "sess_7f3a2b91",
  "userId": "user_4c2e9k10",
  "message": "Hello, what's your recommendation for a CRM?"
}
📝

Session ID strategy: If you’re embedding this in a web app, generate a unique session ID and store it in localStorage or a cookie. For API-driven usage, your backend can generate UUIDs or slugs.
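
For the web-app case, a minimal browser-side sketch (ab_session_id is an arbitrary key name):

// Reuse the stored session ID, or create and persist a new one
let sessionId = localStorage.getItem('ab_session_id');
if (!sessionId) {
  sessionId = 'sess_' + crypto.randomUUID();
  localStorage.setItem('ab_session_id', sessionId);
}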

2 Define Your Test Prompts

Add a Set node after the Chat Trigger and name it “Define Test Prompts” (matching the diagram); you’ll reference it by name later. This node stores both system prompt variants. Here’s an example with a customer support chatbot:

Baseline prompt (friendly):

"You are a helpful customer support agent for an e-commerce platform. Be warm, approachable, and conversational. Always put the customer's needs first. If you don't know something, admit it and offer to escalate."

Alternative prompt (professional):

"You are a professional customer support specialist. Provide concise, accurate answers. Use technical terminology where appropriate. Focus on efficiency and quick resolution. Maintain professional boundaries while remaining courteous."

Store these as variables in your Set node. For example:

{
  "baseline_prompt": "You are a helpful customer support agent...",
  "alternative_prompt": "You are a professional customer support specialist..."
}

You can customize these prompts however you want — adjust tone, instructions, constraints, anything. The point is to test meaningful variations.

3 Query Supabase for Existing Sessions

Add a Supabase node (Row → Get Many) to check whether this session has been assigned before. Name it “Check Session” (matching the diagram) and set up the query like this:

Table: split_test_sessions

Filter: session_id equals the incoming sessionId

This returns no rows if the session is new, or exactly one row if it already exists. Downstream nodes reference this output by node name; treat it as your session_lookup for the next step.
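
For a returning user, the lookup output looks something like this (values are illustrative):

{
  "id": 42,
  "session_id": "sess_7f3a2b91",
  "show_alternative": true,
  "created_at": "2026-04-08T14:32:05.000Z"
}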

4 Add a Conditional: Is the Session Already Assigned?

Use an IF node to check whether the session exists:

session_lookup.length > 0

If true (session exists), branch to “Select Active Prompt”. If false (new session), branch to “Assign Random Variant”.
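
Written as an n8n expression, that check might look like the sketch below. It assumes the “Check Session” node has “Always Output Data” enabled in its settings, so this IF node still runs when the lookup comes back empty:

{{ $('Check Session').all().some(item => item.json.session_id) }}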

5 Assign a Random Variant to New Users

In the “false” branch, add a Code node (the Function node in older n8n versions) that flips a 50/50 coin and passes the assignment downstream:

// Generate a 50/50 random boolean (true = show the alternative prompt)
const show_alternative = Math.random() < 0.5;

// Read the session ID straight from the trigger node (assumes it's named "Chat Trigger"),
// since the Supabase lookup's output no longer carries it
const session_id = $('Chat Trigger').first().json.sessionId;

// Code nodes must return an array of items, each wrapped in a `json` key
return [{
  json: {
    show_alternative,
    session_id,
    timestamp: new Date().toISOString(),
  },
}];

Follow this with a Supabase node (Row → Create) that saves the assignment to the database:

Table: split_test_sessions

Columns:

{
  "session_id": "{{ $json.session_id }}",
  "show_alternative": "{{ $json.show_alternative }}",
  "created_at": "{{ $json.timestamp }}"
}

Each value is an n8n expression that reads the matching field from the Code node’s output.

💡

Tip: Use Supabase's connection pooler for faster queries, especially if you're running high volume. It's in your project settings under "Database" → "Connection Pooling".

6 Select the Active Prompt

Both paths (existing and new sessions) converge at a Set node that picks the correct system prompt. It checks whether show_alternative is true and returns the matching prompt. As an n8n expression (assuming the step 2 Set node is named “Define Test Prompts”), the field looks like this:

{
  "system_prompt": "{{ $json.show_alternative ? $('Define Test Prompts').first().json.alternative_prompt : $('Define Test Prompts').first().json.baseline_prompt }}"
}

Make sure this node receives the show_alternative boolean from either the database query (existing session) or the assignment Code node (new session).

7 Configure the AI Agent with Memory

Add an AI Agent node with an OpenAI Chat Model attached as its language model. Set it up like this:

Model: gpt-4o-mini

System prompt: Use the system_prompt variable from the previous Set node

Chat memory: Attach a Postgres Chat Memory sub-node using your Supabase connection. Configure it with:

  • Connection: Your Supabase PostgreSQL credentials
  • Session ID: The incoming sessionId
  • Context window length: How many past messages to feed back to the model each turn

This ensures the AI remembers all previous messages in the session, maintaining context across turns.
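
In the agent’s System Message field, pull the prompt in with an expression (this assumes the step 6 Set node outputs a system_prompt field, as shown above):

{{ $json.system_prompt }}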


The Data Structure

You need a PostgreSQL table in Supabase to track session assignments. Here's the schema:

| Column Name      | Type                    | Description                                                      |
|------------------|-------------------------|------------------------------------------------------------------|
| id               | BIGINT (auto-increment) | Primary key, auto-generated                                      |
| session_id       | TEXT (unique)           | Unique identifier for the user session, e.g. "sess_7f3a2b91"     |
| show_alternative | BOOLEAN                 | true = user sees alternative prompt, false = user sees baseline  |
| created_at       | TIMESTAMP               | When the assignment was created, useful for sorting and analysis |

To create this table in Supabase, go to the SQL Editor and run:

CREATE TABLE split_test_sessions (
  id BIGSERIAL PRIMARY KEY,
  session_id TEXT NOT NULL UNIQUE,
  show_alternative BOOLEAN NOT NULL,
  created_at TIMESTAMP DEFAULT CURRENT_TIMESTAMP
);

The UNIQUE constraint automatically creates an index on session_id, so lookups for returning users are fast; no separate CREATE INDEX is needed.


Full System Flow with Data

Let's trace a complete example with realistic data:

1. User message arrives:

{
  "sessionId": "sess_7f3a2b91",
  "userId": "user_4c2e9k10",
  "message": "Hello, what CRM do you recommend for a 50-person startup?"
}

2. Query Supabase:

SELECT * FROM split_test_sessions WHERE session_id = 'sess_7f3a2b91';
-- Result: [] (empty, this is a new user)

3. Assign random variant:

Math.random() < 0.5  // Returns true, assign alternative
// Insert into Supabase:
{
  "session_id": "sess_7f3a2b91",
  "show_alternative": true,
  "created_at": "2026-04-08T14:32:05.000Z"
}

4. Select the correct prompt:

// show_alternative is true, so use:
system_prompt = "You are a professional customer support specialist..."

5. AI Agent responds:

The AI Agent receives the alternative system prompt and the user's message. With PostgreSQL memory enabled, it also pulls any previous messages from this session (none on the first message). It generates a response:

"For a 50-person startup, I'd recommend HubSpot or Pipedrive. Both scale efficiently and offer the customization you'll need. What's your primary use case — sales pipeline or customer support?"

6. On the next turn (same session):

{
  "sessionId": "sess_7f3a2b91",
  "userId": "user_4c2e9k10",
  "message": "Mainly sales pipeline. What about pricing?"
}

Query returns the existing row with show_alternative: true. The same professional prompt is used. Memory context includes the entire conversation.


Testing Your Workflow

Before running it live, test each step:

  1. Test the Chat Trigger: Send a sample message with a session ID through the webhook or chat interface. Check that the payload arrives correctly.
  2. Test Supabase connectivity: Run a simple query (e.g., SELECT * FROM split_test_sessions LIMIT 1;) to verify the connection works.
  3. Test the random assignment: Run the workflow 10 times with different session IDs and verify that Supabase records are created with a roughly 50/50 true/false split (see the simulation sketch after this list).
  4. Test the conditional logic: Create a session, run the workflow, then re-run with the same session ID. Verify that the second run retrieves the existing assignment instead of creating a new one.
  5. Test the AI Agent: Verify the AI receives the correct system prompt by checking the API logs or n8n execution history.
  6. Test memory persistence: Send multiple messages in the same session and confirm the AI remembers previous context.
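
For step 3 of this checklist, you can also sanity-check the coin flip itself outside n8n with a quick Node.js one-off (not part of the workflow):

// Simulate 10,000 assignments and print the split; expect a value near 50%
let alt = 0;
const runs = 10000;
for (let i = 0; i < runs; i++) {
  if (Math.random() < 0.5) alt++;
}
console.log(`alternative: ${((100 * alt) / runs).toFixed(1)}%`);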

Common Issues and Troubleshooting

| Issue | Likely Cause | Solution |
|-------|--------------|----------|
| Supabase query returns empty even for existing sessions | Session ID mismatch (case sensitivity, extra whitespace) | Normalize session IDs: trim whitespace, convert to lowercase |
| AI Agent fails with authentication error | Invalid OpenAI API key or quota exceeded | Check your API key in n8n credentials; verify billing is enabled on your OpenAI account |
| Duplicate session assignments (multiple true/false values) | Missing UNIQUE constraint on session_id | Add a UNIQUE constraint to split_test_sessions.session_id |
| Conversation memory not working | PostgreSQL connection not configured or memory table missing | Verify Supabase PostgreSQL credentials in n8n; ensure the memory table exists |
| Workflow executes but AI returns generic responses | System prompt not being passed correctly | Log the system_prompt value before the AI Agent node and verify it's not empty |
🔍

Debugging tip: Use n8n's Execute Workflow button and inspect the input/output of each node. The execution history shows exactly what data is flowing through your workflow.


Measuring Results

Now that your A/B test is running, how do you measure which prompt is better? A few approaches:

  • User feedback: Add a thumbs-up/thumbs-down button after each AI response and record votes in a feedback table, tagged with session_id and show_alternative.
  • Conversation length: Query Supabase to see average message count per session for each variant. Longer conversations might indicate more engaging prompts.
  • Resolution time: If this is customer support, track how many turns it takes to resolve issues per variant.
  • Manual review: Export a sample of responses from each variant and have a human evaluate quality, tone, and accuracy.
  • Custom metrics: Log additional data (response time, token usage, user satisfaction score) to your Supabase table for analysis.

Run each variant for at least 100-200 sessions before drawing conclusions. Statistical significance matters.
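
If you collect thumbs-up/down votes, a two-proportion z-test is a quick way to check whether the gap between variants is real or just noise. A minimal sketch in plain JavaScript, using made-up counts:

// Two-proportion z-test on thumbs-up rates for baseline (A) vs. alternative (B)
function zTest(upA, totalA, upB, totalB) {
  const pA = upA / totalA;
  const pB = upB / totalB;
  const pooled = (upA + upB) / (totalA + totalB);
  const se = Math.sqrt(pooled * (1 - pooled) * (1 / totalA + 1 / totalB));
  return (pB - pA) / se; // |z| > 1.96 means significant at the 95% level
}

// Hypothetical counts: 62/120 thumbs-up for baseline, 81/130 for alternative
console.log(zTest(62, 120, 81, 130).toFixed(2)); // 1.70, suggestive but not yet significant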


Frequently Asked Questions

Can I test more than two prompts?

Yes, absolutely. Instead of a boolean show_alternative column, use an integer or enum to represent three or more variants. Adjust the random assignment logic to distribute evenly (e.g., if 3 variants: Math.floor(Math.random() * 3)). Update the "Select Active Prompt" node to use a switch statement or nested ternary.
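
Here’s a sketch of the assignment logic for three variants, where experimental_prompt is a hypothetical third field added to the step 2 Set node:

// Pick one of N prompts by integer index instead of a boolean coin flip
const prompts = $('Define Test Prompts').first().json;
const variants = [
  prompts.baseline_prompt,
  prompts.alternative_prompt,
  prompts.experimental_prompt, // hypothetical third variant
];
const variant = Math.floor(Math.random() * variants.length);

// Store `variant` in an integer column in place of show_alternative
return [{
  json: {
    variant,
    system_prompt: variants[variant],
    session_id: $('Chat Trigger').first().json.sessionId,
  },
}];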

How do I measure which prompt performs better?

Add a feedback mechanism (thumbs-up/down buttons or a satisfaction rating) tied to each session. Store results in Supabase with the variant ID. Then query Supabase to calculate average scores per variant. You can also measure conversation length, resolution time, or cost per variant.

Does this work with models other than GPT-4o-mini?

Yes. The workflow is model-agnostic. You can use GPT-4, GPT-3.5 Turbo, Claude (via Anthropic API), or any LLM with an n8n integration. Just swap the model in the AI Agent node and ensure you have valid API credentials.

What happens if Supabase goes down?

If Supabase is unavailable, the workflow will fail at the session lookup step. To add resilience, set the Supabase nodes’ On Error setting to “Continue (using error output)” and route the error branch to a fallback that always uses the baseline prompt, so the chat keeps working even when the database is unreachable.

Can I use this for testing different temperatures or models?

Absolutely. Extend the workflow to test different model parameters. For example, add a temperature and model_name column to split_test_sessions. In the AI Agent node, dynamically set the temperature and model based on the session's assigned variant. This lets you A/B test creativity (high temperature) vs. consistency (low temperature).

Can I run multiple A/B tests simultaneously?

Yes. Use separate columns in split_test_sessions for each test (e.g., prompt_test, temperature_test, model_test). Each column holds the variant assignment for that specific test. The workflow then reads all relevant columns and applies them simultaneously to the AI Agent. This is called multivariate testing.
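
A row supporting two simultaneous tests might then look like this (column names and values are purely illustrative):

{
  "session_id": "sess_7f3a2b91",
  "prompt_test": "alternative",
  "temperature_test": "high"
}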


Get the A/B Prompt Testing Template

Stop guessing which prompt works best. Import this ready-made n8n workflow, connect your Supabase and OpenAI accounts, and start testing in minutes.

Get the Template →

Instant download · Works on n8n Cloud and self-hosted

What's Next: Extending the Workflow

Once you have the basic A/B test working, consider these enhancements:

  • Automatic winner selection: Set up a scheduled workflow that analyzes results every week and automatically switches all new users to the winning variant.
  • Progressive rollout: Instead of 50/50, shift traffic gradually (90/10, 80/20, etc.) as one variant proves better.
  • Segmented testing: Run different tests for different user segments (new vs. returning, by industry, by region).
  • Prompt versioning: Store all prompt versions in Supabase with timestamps so you can track which exact variant each user saw.
  • Multivariate testing: Test system prompt, temperature, and model all at once to find the optimal combination.
  • Cost tracking: Log token usage per variant to see if one prompt is more efficient.
