How to Build Voice AI Customer Support for WooCommerce with n8n and VAPI

Imagine a customer calls your store on a Friday evening asking about their order status—but your team has already left for the weekend. The phone rings unanswered. The customer leaves frustrated and takes their business elsewhere. This scenario plays out thousands of times every day in e-commerce. Now imagine if your WooCommerce store had a tireless AI agent that could answer customer calls 24/7, look up orders, provide tracking information, and answer FAQs using your own knowledge base—all through a simple phone call.

Prefer to skip the setup? Grab the ready-made template and be up and running in under 10 minutes.

In this guide, you’ll learn how to build exactly that: a voice AI customer support system that combines multiple cutting-edge AI technologies (VAPI for voice, GPT-4o-mini for intelligent conversations, Gemini for knowledge retrieval, and Qdrant for vector search) with n8n as the orchestration engine. By the end, you’ll have a fully functional voice assistant that handles post-sales support and product questions.

What You’ll Build

This n8n workflow creates a complete voice AI customer support solution built from three flows:

  1. Post-Sales Agent Flow — A GPT-4o-mini powered conversational agent that customers can call to ask about their orders. It verifies identity using email and order number, retrieves real-time order information from WooCommerce, fetches tracking details via a sub-workflow, and speaks the results back to the caller through VAPI.
  2. RAG Q&A Flow — A Retrieval-Augmented Generation system using Google Gemini 1.5 Flash that searches a Qdrant vector database for product information, policies, and FAQs, delivering contextual answers directly to customers asking product questions.
  3. Tracking Sub-Workflow — A helper workflow that queries the WooCommerce REST API using the YITH tracking plugin to extract shipping carrier URLs, tracking codes, and pickup dates.

The two customer-facing flows are triggered via webhooks from VAPI, your voice AI platform, while the tracking sub-workflow is called internally by the post-sales agent. Inbound calls reach VAPI through connected Twilio phone numbers.

The Big Picture: How It All Works Together

Here’s a visual representation of the system architecture:

┌──────────────────────────────────────────────────────────────┐
│                 CUSTOMER CALLS TWILIO NUMBER                 │
└──────────────────────────────┬───────────────────────────────┘
                               │
                               ▼
                   ┌───────────────────────┐
                   │     VAPI Voice AI     │
                   │   (Processes Speech)  │
                   └───────┬───────┬───────┘
                           │       │
               ┌───────────┘       └──────────┐
               ▼                              ▼
  ┌──────────────────────────┐   ┌──────────────────────────┐
  │  POST-SALES AGENT FLOW   │   │       RAG Q&A FLOW       │
  │     (Order Lookups)      │   │   (Product Questions)    │
  └────────────┬─────────────┘   └────────────┬─────────────┘
               │                              │
               ▼                              ▼
  ┌──────────────────────────┐   ┌──────────────────────────┐
  │  n8n Post-Sales Webhook  │   │     n8n RAG Webhook      │
  │                          │   │                          │
  │  1. GPT-4o-mini Agent    │   │  1. Question & Answer    │
  │  2. WooCommerce API Call │   │  2. Gemini 1.5 Flash     │
  │  3. Tracking Sub-Workflow│   │  3. Qdrant Vector DB     │
  │  4. Format Response      │   │  4. Return Response      │
  └────────────┬─────────────┘   └────────────┬─────────────┘
               │                              │
               └──────────────┬───────────────┘
                              ▼
                   ┌─────────────────────┐
                   │  VAPI Returns Audio │
                   │      to Caller      │
                   └─────────────────────┘

Prerequisites

Before you start building, make sure you have access to these services and accounts:

  • WooCommerce Store — A working WooCommerce installation with REST API enabled. We’ll authenticate using API keys (Consumer Key & Consumer Secret).
  • VAPI Account — Sign up at vapi.ai to create voice assistants. You’ll need to create tools that point to your n8n webhooks.
  • Twilio Account — Create a Twilio account to provision inbound phone numbers. VAPI integrates directly with Twilio.
  • OpenAI API Key — For GPT-4o-mini and OpenAI embeddings. Ensure you have billing enabled.
  • Google Gemini API Key — For the Gemini 1.5 Flash model used in the RAG flow.
  • Qdrant Vector Database — Either a cloud Qdrant instance or self-hosted. You’ll need the API key and collection name with pre-populated embeddings.
  • n8n Instance — Cloud or self-hosted n8n with execution enabled.
  • YITH WooCommerce Order Tracking Plugin — Free plugin to store tracking metadata. Install it on your WordPress site.

Building Flow 1: Post-Sales Agent

The post-sales agent handles customer calls asking about their orders. Here’s how to build it step-by-step:

Step 1: Create the VAPI Post-Sales Webhook

Start with a webhook node that receives requests from VAPI. This node waits for incoming POST requests containing customer data (email address and order number) sent by VAPI when a customer calls the dedicated phone number.

In n8n, add a Webhook node and configure it as follows:

  • Set HTTP Method to POST
  • Set Response Mode to responseNode (we’ll send the response later from a dedicated response node)
  • Copy the webhook URL to use in VAPI later
💡

Tip: The webhook path is auto-generated. Note it down—you’ll paste this URL into VAPI’s tool configuration.
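For orientation, the POST body VAPI sends to this webhook looks roughly like the following. The field names (email, n_order) match the tool parameters you'll define in VAPI later; the values here are illustrative, and n8n exposes the payload under $json.body:

```json
{
  "email": "customer@example.com",
  "n_order": "1234"
}
```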

Step 2: Add the GPT-4o-mini Agent Node

The agent is the brain of your post-sales support system. It’s a GPT-4o-mini powered node with tools that can look up orders, retrieve customer information, and get tracking details.

Add an Agent node (type: @n8n/n8n-nodes-langchain.agent) and configure:

  • Language Model: Connect your OpenAI credentials and select gpt-4o-mini
  • Input Text: Use an expression that combines ={{ $json.body.email }} and {{ $json.body.n_order }} so the agent receives the customer email and order number from the webhook payload
  • System Message: Paste a detailed prompt telling the agent its role, how to verify customer identity, and which tools to use (see the workflow for the full prompt)

The system message is critical. It instructs the agent to:

  • Always verify that the email matches the order number
  • Refuse to share order details if the email is incorrect
  • Use available tools (get_order, get_orders, get_user, get_tracking, Calculator) to answer questions
  • Provide clear, professional responses suitable for voice output
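As a starting point, a condensed system message along these lines captures the rules above. Treat this as a sketch to adapt — the workflow template ships with the full prompt, and the {email} / {n_order} placeholders stand for the values passed in from the webhook:

```
You are a post-sales support agent for an online store. The caller
provided email {email} and order number {n_order}.

Rules:
1. Before sharing any order details, use get_order to confirm the
   billing email on the order matches the email provided. If it does
   not match, politely refuse and ask for the correct email.
2. Use get_orders, get_user, get_tracking, and Calculator as needed.
3. Never reveal which email is associated with an order.
4. Answer in short, clear sentences suitable for text-to-speech.
```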

Step 3: Connect Tool Nodes for Order Lookup

The agent needs tools to retrieve information. Add four WooCommerce tool nodes:

get_order Node: Retrieves a single order by ID.

  • Resource: order
  • Operation: get
  • Order ID: Leave this for the agent to fill dynamically via $fromAI()
  • Connect WooCommerce credentials with your store URL

get_orders Node: Searches for multiple orders (e.g., by customer email).

  • Resource: order
  • Operation: getAll
  • Search: Leave for agent to fill via $fromAI()

get_user Node: Retrieves customer profile information by email.

  • Resource: customer
  • Operation: getAll
  • Filter by Email: ={{ $fromAI('Email', '', 'string') }}

Calculator Node: Allows the agent to perform math (e.g., calculate shipping costs, discounts). Use the built-in calculator tool from LangChain.

Connect all four of these nodes to the agent’s ai_tool output.

📌

Security Note: The agent’s system prompt includes strict identity verification logic. Always ensure that order details are only shared after email verification. Never allow the agent to bypass this check.

Step 4: Add the Tracking Sub-Workflow Tool

Instead of querying WooCommerce directly for tracking, we call a separate workflow using a tool workflow node. This keeps your main workflow clean and allows the tracking sub-workflow to be tested independently.

Add a Tool Workflow node (type: @n8n/n8n-nodes-langchain.toolWorkflow):

  • Name: get_tracking
  • Workflow ID: Select the tracking sub-workflow (you’ll create this in the next section)
  • Description: “Get tracking number for a specific order by providing the order number. The tool retrieves the unique tracking code that allows customers to monitor their shipment’s current status and location.”
  • Input Schema: Define one input parameter: order_number (string)

Connect this to the agent’s ai_tool output as well.

Step 5: Format and Return the Response

After the agent generates an answer, format it and send it back to VAPI.

Add a Set node to extract the agent’s output message:

  • Assignment: Set a field called message to ={{ $json.output }}

Then add a Respond to Webhook node:

  • This sends the JSON response back to VAPI with the agent’s message
  • VAPI will convert this text to speech and play it to the caller
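With the Set node above, the JSON body the Respond to Webhook node sends back to VAPI is simply a single message field (the text here is illustrative):

```json
{
  "message": "Your order was shipped on Tuesday and is currently in transit."
}
```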

Building Flow 2: RAG Q&A System

The second flow handles customer questions about products, policies, and FAQs by searching your knowledge base. Here’s how to build it:

Step 1: Create the RAG Webhook

Add another Webhook node for the RAG flow:

  • HTTP Method: POST
  • Response Mode: responseNode
  • Path: Give it a distinct path like rag-webhook

This webhook will receive search queries from VAPI when customers ask product questions.
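The request body mirrors the search tool parameter you'll define in VAPI later; for example (value illustrative):

```json
{
  "search": "Do you ship internationally?"
}
```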

Step 2: Add the Question & Answer Chain

Add a Chain Retrieval QA node (type: @n8n/n8n-nodes-langchain.chainRetrievalQa):

  • Input Text: ={{ $json.body.search }} to use the search query from VAPI
  • System Prompt Template: Set a prompt that instructs the model to use retrieved context to answer questions. Example: “You are an assistant for question-answering tasks. Use the following pieces of retrieved context to answer the question. If you don’t know the answer, just say that you don’t know.”
  • Prompt Type: define

This node will automatically search the vector store and synthesize an answer based on retrieved documents.

Step 3: Connect the Language Model (Gemini 1.5 Flash)

Add a Chat Google Gemini node (type: @n8n/n8n-nodes-langchain.lmChatGoogleGemini):

  • Model: models/gemini-1.5-flash (a fast, low-cost Google model well suited to short RAG answers)
  • Add your Google Gemini API credentials

Connect this to the Question & Answer Chain’s ai_languageModel input.

Step 4: Set Up Vector Store Retrieval

Add a Vector Store Retriever node (type: @n8n/n8n-nodes-langchain.retrieverVectorStore):

  • Top K: 5 (retrieve the 5 most relevant documents)

Connect this to the Question & Answer Chain’s ai_retriever input.

Step 5: Connect Qdrant Vector Store

Add a Qdrant Vector Store node (type: @n8n/n8n-nodes-langchain.vectorStoreQdrant):

  • Add your Qdrant API credentials
  • Collection: Select the collection name containing your product embeddings (e.g., “product-knowledge”)

Connect this to the Vector Store Retriever’s ai_vectorStore input.

Step 6: Add OpenAI Embeddings

Add an Embeddings OpenAI node (type: @n8n/n8n-nodes-langchain.embeddingsOpenAi):

  • Add your OpenAI API credentials
  • This generates embeddings for the search query to match against your Qdrant collection

Connect this to the Qdrant Vector Store’s ai_embedding input.

💡

Tip: Make sure your Qdrant collection was populated with embeddings from the same OpenAI model (text-embedding-3-small or similar). Mismatched embeddings will give poor search results.
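The reason for this requirement: vector search scores the query embedding against stored embeddings with a similarity measure (typically cosine), which is only defined for vectors of the same dimensionality and only meaningful when both come from the same embedding space. A minimal sketch in plain JavaScript:

```javascript
// Cosine similarity between two embedding vectors. Mixing models
// either changes the dimensionality (a hard error, as below) or
// silently produces scores that don't reflect semantic closeness.
function cosineSimilarity(a, b) {
  if (a.length !== b.length) {
    throw new Error(`dimension mismatch: ${a.length} vs ${b.length}`);
  }
  let dot = 0, normA = 0, normB = 0;
  for (let i = 0; i < a.length; i++) {
    dot += a[i] * b[i];
    normA += a[i] * a[i];
    normB += b[i] * b[i];
  }
  return dot / (Math.sqrt(normA) * Math.sqrt(normB));
}
```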

Step 7: Return RAG Response

Add a Respond to Webhook node to send the RAG answer back to VAPI. The Question & Answer Chain’s output will automatically include the synthesized answer in $json.output.

Building the Tracking Sub-Workflow

The tracking sub-workflow is a separate workflow that the main workflow calls. This keeps your logic modular and testable. Here’s how to build it:

Step 1: Add Workflow Trigger

Create a new workflow and add an Execute Workflow Trigger node (type: n8n-nodes-base.executeWorkflowTrigger):

  • Input Source: jsonExample
  • JSON Example: { "order_number": "order number" }

This allows the main workflow to pass an order number to this sub-workflow.

Step 2: Query WooCommerce REST API

Add an HTTP Request node:

  • Method: GET
  • URL: https://YOUR_STORE_URL/wp-json/wc/v3/orders/{{ $json.order_number }}
  • Authentication: Use HTTP Basic Auth with your WooCommerce REST API keys (Consumer Key as the username, Consumer Secret as the password)

This fetches the full order object from WooCommerce, including the meta_data field where YITH stores tracking information.
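Inside that response, the YITH tracking fields live as key/value pairs in the meta_data array. A trimmed, illustrative example (values hypothetical):

```json
{
  "id": 1234,
  "status": "completed",
  "meta_data": [
    { "key": "ywot_tracking_code", "value": "AB123456789" },
    { "key": "ywot_carrier_url", "value": "https://tracking.example.com/AB123456789" },
    { "key": "ywot_pick_up_date", "value": "2025-06-12" }
  ]
}
```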

Step 3: Extract Tracking Details

Add a Set node to parse the tracking metadata:

Assignments:

  • tracking_code: ={{ $json.meta_data.find(item => item.key === "ywot_tracking_code").value }}
  • carrier_url: ={{ $json.meta_data.find(item => item.key === "ywot_carrier_url").value }}
  • pick_up: ={{ $json.meta_data.find(item => item.key === "ywot_pick_up_date").value }}

These fields come from the YITH plugin’s meta data structure. The agent will use these fields to provide tracking links and pickup information to customers.
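The same lookups can be expressed as plain JavaScript with a guard, so a missing YITH key yields null instead of a thrown error. This is a defensive variant for illustration, not what the workflow ships with — the function name and the null fallback are assumptions:

```javascript
// Extract YITH tracking metadata from a WooCommerce order object.
// Returns null for any key that is absent, rather than throwing.
function extractTracking(order) {
  const meta = (key) => {
    const entry = (order.meta_data || []).find((m) => m.key === key);
    return entry ? entry.value : null;
  };
  return {
    tracking_code: meta('ywot_tracking_code'),
    carrier_url: meta('ywot_carrier_url'),
    pick_up: meta('ywot_pick_up_date'),
  };
}
```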

📌

Important: Make sure the YITH plugin is installed and configured on your WooCommerce store. If the meta_data fields don’t exist, this node will error. Test with a real order that has tracking information.

VAPI Configuration: Connecting Voice to Your Workflows

Now that your n8n workflows are built, connect them to VAPI so that incoming phone calls trigger the workflows.

Step 1: Create API Request Tools in VAPI

Log in to your VAPI dashboard and navigate to the Tools section.

Create Tool 1: WooCommerce Order Lookup

  • Tool Type: API Request
  • Name: Check Order Status
  • Request Method: POST
  • Request URL: Paste the webhook URL from your VAPI Post-Sales Webhook node
  • Request Body Format: JSON
  • Parameters:
    • Name: email, Type: string, Description: “Customer’s email address”
    • Name: n_order, Type: string, Description: “Order number to look up”

Create Tool 2: Product Knowledge RAG

  • Tool Type: API Request
  • Name: Search Product Knowledge
  • Request Method: POST
  • Request URL: Paste the webhook URL from your RAG Webhook node
  • Request Body Format: JSON
  • Parameters:
    • Name: search, Type: string, Description: “Search query for product information, policies, or FAQs”

Step 2: Create or Update Your Voice Assistant

In VAPI, go to the Assistants section and create a new assistant or edit an existing one:

  • Name: “WooCommerce Support Agent”
  • Model: Select your preferred model (GPT-4 Turbo, Claude, etc.)
  • System Prompt: Craft a prompt that tells the voice assistant to greet customers, explain the available tools, and guide them to either check their order status or ask product questions. Example:

    “You are a friendly customer support agent for an online store. When someone calls, greet them warmly and ask whether they want to (1) check on an existing order by providing their order number and email, or (2) ask a product question. Use the appropriate tool based on their request.”

  • Voice: Choose a voice under the Voice tab (e.g., “Alloy” for a neutral tone, “Juniper” for friendliness)
  • Tools: In the Tools tab, add the two tools you created above

Step 3: Set Up Twilio Phone Number

In your VAPI dashboard, go to Phone Numbers and either:

  • Create a new phone number with VAPI (it provisions through Twilio), or
  • Import an existing Twilio phone number into VAPI

Configure the phone number as “Inbound” and associate it with your WooCommerce Support Assistant.

Once configured, any call to this number will trigger your VAPI assistant, which will prompt the caller, and route requests to your n8n workflows via the webhooks.

💡

Tip: Test the end-to-end flow by calling the Twilio number from your phone. Listen for the greeting, and try asking about an order or a product question. Check your n8n execution logs to debug any issues.

Testing and Debugging

Once everything is wired up, test thoroughly before going live:

Test the Post-Sales Agent

  1. Create a test order in WooCommerce with a known order number and email address
  2. Call your Twilio number and ask to check the order status
  3. Provide the correct email and order number when prompted
  4. Verify that the agent retrieves and reads the order information correctly
  5. Try with an incorrect email to verify the security check works (agent should refuse)

Test the RAG Q&A

  1. Call the Twilio number and ask a product question (e.g., “What are your return policies?” or “Do you ship internationally?”)
  2. Verify that the assistant searches your Qdrant collection and returns relevant answers
  3. Check the n8n execution logs to see which documents were retrieved

Check n8n Execution Logs

In your n8n dashboard, navigate to the execution history of your workflows. Look for:

  • Successful webhook trigger (check the request body has the correct email and order number)
  • Agent node executing without errors
  • WooCommerce API calls returning data
  • Correct JSON response sent back to VAPI

If any step fails, the agent will explain the error to the caller, so listen carefully and check the logs.

Frequently Asked Questions

Can I use a different LLM instead of GPT-4o-mini for the post-sales agent?

Yes, absolutely. The workflow uses GPT-4o-mini because it’s fast and cost-effective, but n8n supports many LLMs: Claude (Anthropic), Gemini Pro, Mistral, Llama, and others. You can swap it out by changing the language model node’s credentials and model selection. Just ensure the model supports function calling (tool use) for the agent to work properly.

How do I populate my Qdrant collection with product knowledge?

You need to create a separate n8n workflow or script that:

  1. Fetches your product data (from WooCommerce, a CSV, or a documentation site)
  2. Chunks the data into smaller pieces (e.g., 500-1000 characters per chunk)
  3. Generates embeddings using OpenAI’s embedding model
  4. Uploads the chunks and embeddings to your Qdrant collection

The n8n marketplace and documentation have templates for this. Once set up, you can keep your Qdrant collection updated with new products automatically using a scheduled workflow.

What happens if a customer doesn’t provide a valid order number or email?

The agent is instructed to ask the customer to provide the correct information. If the email doesn’t match the order, it politely refuses to share details and asks them to provide the correct email. The agent never reveals what email is associated with an order—this is a critical security feature to prevent unauthorized access.

Can I customize the voice assistant’s personality or tone?

Yes! Both in n8n and in VAPI. In the agent’s system prompt (the Post-Sales Agent node), you can adjust the tone and instructions. In VAPI, you can choose different voices and customize the opening greeting. Together, these let you create a brand-aligned assistant—friendly, professional, playful, or formal.

Will this workflow handle multiple customers calling at the same time?

Yes! Both VAPI and n8n scale horizontally. When multiple customers call, VAPI handles multiple concurrent conversations independently, and each call triggers a separate n8n workflow execution. As long as your n8n instance is configured for concurrent executions (it is by default), you can handle multiple simultaneous callers. If you reach scale limits, you can upgrade your n8n or VAPI plan.

What’s Next: Advanced Enhancements

Your voice AI assistant is now live, but you can keep improving it. Here are some ideas:

  • Sentiment Analysis: Add a sentiment analysis node after the conversation to detect if the customer is frustrated. Route frustrated customers to a human agent queue.
  • Call Recording & Transcription: Enable VAPI’s call recording and use n8n to transcribe calls and log them to a database for training or compliance.
  • CRM Integration: After each call, push the conversation summary and customer info to your CRM (HubSpot, Salesforce, Pipedrive) automatically.
  • Multi-Language Support: Use VAPI’s language detection and route to different language-specific workflows in n8n.
  • Proactive Outbound Calls: Use n8n to trigger VAPI outbound calls for shipping notifications or follow-ups when orders ship.
  • Dynamic Pricing Lookup: Extend the agent to query real-time pricing or inventory from WooCommerce before answering product questions.
  • Live Agent Handoff: Add a “transfer to human agent” tool that routes the call to your support team in a queuing system like Zendesk or Twilio Flex.

Final Thoughts

Building a voice AI customer support system once seemed like a luxury for large enterprises, but with VAPI, n8n, and modern LLMs, it’s now accessible to any online store owner. Your WooCommerce customers can now call 24/7 and get instant support—no waiting, no frustration, just answers.

The workflows you’ve built today are production-ready, but they’re also highly customizable. Adjust the system prompts, add more tools, change the models, integrate other services—n8n’s visual builder makes all of it possible without writing code.

Start small: test with a single phone number and a small set of test orders. Once you’re confident, promote it to your customers via email, your website, and social media. The result? Better customer satisfaction, reduced support costs, and a competitive edge in your market.

🚀 Ready to Deploy This Workflow?

Get the complete, production-ready n8n workflow with all nodes pre-configured, detailed documentation, and setup guides.

Download the Workflow Template

Instant download · Works on n8n Cloud and self-hosted


How to Automate Zendesk Support with AI Using n8n and RAG

Your support team is drowning in repetitive tickets while customers wait hours for answers that already exist in your knowledge base. Password resets, shipping questions, “how do I cancel” — the same 20 questions eating up 80% of your agents’ time. What if every new Zendesk ticket got an accurate, knowledge-base-backed AI reply within seconds, and only the genuinely tricky ones reached a human?

That’s exactly what you’ll build in this guide. Using n8n, OpenAI, Supabase, and a RAG (Retrieval-Augmented Generation) pipeline, you’ll create a workflow that reads every incoming Zendesk ticket, searches your knowledge base for relevant answers, generates a professional reply, and posts it directly to the ticket — automatically. When the AI doesn’t have a confident answer, it escalates to your human team with a clear tag so nothing falls through the cracks.

Prefer to skip the setup? Grab the ready-made template and be up and running in under 10 minutes.

What You’ll Build

  1. A new ticket arrives in Zendesk — the workflow fires instantly via webhook.
  2. The AI agent searches your Supabase-hosted knowledge base using vector similarity (RAG) and drafts a response grounded in your actual documentation.
  3. If the knowledge base has a solid answer, the AI posts the reply directly to the Zendesk ticket and tags it ai_reply.
  4. If the AI can’t find relevant information, it tags the ticket human_requested so your team picks it up — no guessing, no hallucinations.
  5. Every conversation is stored in Postgres memory, so follow-up tickets from the same customer retain context.

How It Works — The Big Picture

The entire workflow is a single n8n pipeline with a smart routing branch at the end. Here’s the flow from ticket to resolution:

┌──────────────────────────────────────────────────────────────────┐
│  AI-POWERED ZENDESK SUPPORT WITH RAG                             │
│                                                                  │
│  [Zendesk Webhook] → [Extract Ticket Data] → [RAG AI Agent]      │
│                                                    │             │
│                                       ┌────────────┴───────────┐ │
│                                       │  Check If Escalation   │ │
│                                       │        Needed          │ │
│                                       └────┬─────────────┬─────┘ │
│                                        YES ↓             ↓ NO    │
│                                      [Tag: human_ [Post AI Reply]│
│                                       requested]         ↓       │
│                                                   [Tag: ai_reply]│
│                                                                  │
│  Sub-components of the RAG AI Agent:                             │
│  ┌────────────────────────────────────────────┐                  │
│  │ [OpenAI GPT-4o-mini]   [Postgres Memory]   │                  │
│  │ [Knowledge Base Tool → Supabase Vectors]   │                  │
│  │ [OpenAI Embeddings]                        │                  │
│  └────────────────────────────────────────────┘                  │
└──────────────────────────────────────────────────────────────────┘

What You’ll Need

  • n8n — self-hosted or n8n Cloud (any plan)
  • Zendesk — any plan with API access and trigger/webhook support
  • OpenAI API key — for GPT-4o-mini (chat) and text-embedding-3-small (embeddings)
  • Supabase account — free tier works; you’ll need a project with pgvector enabled for the vector store
  • PostgreSQL database — Supabase’s built-in Postgres works, or any external Postgres instance for conversation memory
  • Your knowledge base content — FAQ articles, product docs, troubleshooting guides already written and ready to embed

Estimated build time: 45–60 minutes from scratch, or under 10 minutes with the template.

Part 1 — Receiving and Parsing Zendesk Tickets

1 Receive New Ticket (Webhook)

This node listens for incoming HTTP POST requests from Zendesk. Every time a new ticket is created, Zendesk’s trigger sends the ticket data to this webhook URL.

  1. Add a Webhook node to your canvas.
  2. Set the HTTP Method to POST.
  3. Set the Path to zendesk-new-ticket (or any slug you prefer).
  4. Copy the generated webhook URL — you’ll paste this into Zendesk’s trigger configuration.

When a ticket arrives, the webhook receives a JSON payload like this:

{
  "body": {
    "ticket_id": "48291",
    "ticket_status": "new",
    "requester_name": "Sarah Thompson",
    "requester_email": "sarah.thompson@gmail.com",
    "subject": "Can't reset my password",
    "description": "Hi, I've been trying to reset my password for the last hour but the reset email never arrives. I've checked spam. Can you help?"
  }
}
💡

Tip: In Zendesk, go to Admin Center → Objects and rules → Triggers. Create a trigger that fires when “Ticket is Created” and set the action to “Notify active webhook” with your n8n webhook URL. Include ticket_id, subject, description, requester_name, requester_email, and ticket_status as JSON fields in the webhook body.

2 Extract Ticket Data (Set Node)

The raw webhook payload is nested inside body. This Set node extracts the fields you need into clean, top-level variables that the rest of the workflow can reference easily.

  1. Add a Set node after the Webhook.
  2. Create assignments for: ticket_id, ticket_status, requester_name, requester_email, subject, description, and timestamp.
  3. For description, use the expression {{ ($json.body.description || '').split('\n\n').slice(-1)[0].trim() }} — this strips quoted reply chains and keeps only the customer’s latest message.
  4. For timestamp, use {{ $now.format('yyyy-MM-dd HH:mm') }} so you have a record of when the workflow processed the ticket.
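The description expression from step 3, written out as plain JavaScript for clarity (the workflow itself uses the inline n8n expression, not a code node):

```javascript
// Keep only the customer's latest message: take the last blank-line
// separated block of the description and trim surrounding whitespace.
function latestMessage(description) {
  return (description || '').split('\n\n').slice(-1)[0].trim();
}
```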

After this node, the data flowing forward looks like:

{
  "ticket_id": "48291",
  "ticket_status": "new",
  "requester_name": "Sarah Thompson",
  "requester_email": "sarah.thompson@gmail.com",
  "subject": "Can't reset my password",
  "description": "I've been trying to reset my password for the last hour but the reset email never arrives. I've checked spam. Can you help?",
  "timestamp": "2026-04-10 14:23"
}

Part 2 — The RAG AI Agent

This is the brain of the workflow. The AI Agent node connects to three sub-components: a language model (GPT-4o-mini), a conversation memory (Postgres), and a knowledge base retrieval tool (Supabase vector store). Together, they form a RAG pipeline that grounds every response in your actual documentation.

3 Generate AI Response (AI Agent)

The Agent node receives the customer’s ticket and orchestrates the entire reasoning process. It decides whether to search the knowledge base, reads the results, and composes a professional reply.

  1. Add an AI Agent node.
  2. Set the Prompt Type to Define.
  3. Set the input text to:
    Customer: {{ $json.requester_name }}
    Subject: {{ $json.subject }}
    Message: {{ $json.description }}
  4. In the System Message, paste the following instructions that tell the agent how to behave:
You are a professional, empathetic customer support agent. Your job is to help
customers by answering their questions accurately using ONLY the information
from the retrieved knowledge base documents.

Rules:
1. Always be polite, professional, and concise.
2. If the knowledge base contains a clear answer, provide it with step-by-step
   instructions when appropriate.
3. If the knowledge base does NOT contain relevant information, respond with
   EXACTLY: "I will escalate this to our support team who will get back to
   you shortly."
4. Never make up information. Never guess. Only use verified KB content.
5. Sign off with: "Best regards, Support Team"
📌

The exact escalation phrase — “will escalate this” / “get back to you shortly” — is critical. The routing node downstream checks for these phrases to decide whether to post the reply or tag for human review. Don’t change this wording unless you also update the IF condition.

4 OpenAI Chat Model (Sub-node)

This is the language model that powers the agent’s reasoning. It connects to the Agent node’s ai_languageModel input.

  1. Add an OpenAI Chat Model node below the Agent.
  2. Select gpt-4o-mini as the model — it’s fast, cheap, and accurate enough for support responses.
  3. Set temperature to 0.3 — low enough to keep answers factual, high enough to sound natural.
  4. Set max tokens to 1024 — more than enough for a support reply.
  5. Connect your OpenAI API credential.
💡

Tip: If you need higher-quality reasoning for complex product questions, swap to gpt-4o. The cost goes up roughly 10x per token, but accuracy improves noticeably for technical troubleshooting.

5 Ticket Conversation Memory (Postgres)

This memory node stores the conversation history for each ticket ID in a Postgres table called zendesk_ticket_histories. If a customer sends a follow-up message on the same ticket, the AI remembers what was discussed before.

  1. Add a Postgres Chat Memory node.
  2. Set the table name to zendesk_ticket_histories.
  3. Set the Session Key to {{ $('Extract Ticket Data').item.json.ticket_id }} — this groups messages by ticket.
  4. Connect your Postgres credential (Supabase’s built-in Postgres works perfectly here).

The table is created automatically on first run. It stores each message exchange so the agent has full context for multi-message tickets.

6 Knowledge Base Retrieval Tool (Vector Store)

This is the RAG component. The tool searches your Supabase vector store for the most relevant knowledge base articles and feeds them to the AI agent as context.

  1. Add a Vector Store Tool node and name it “Retrieve Knowledge Base”.
  2. Set the tool name to knowledge_base.
  3. Set Top K to 5 — the agent will receive the 5 most relevant document chunks.
  4. Write a clear description: “Contains all company knowledge base articles, FAQs, product documentation, and troubleshooting guides.”

7 Supabase Vector Store + OpenAI Embeddings

The Vector Store Tool needs two sub-components: a vector store (where your documents live) and an embedding model (to convert the search query into a vector).

  1. Add a Supabase Vector Store node. Set the table to documents and the query function to match_documents.
  2. Add an OpenAI Embeddings node. Set the model to text-embedding-3-small with 1536 dimensions.
  3. Wire the Embeddings node into the Vector Store’s ai_embedding input.
  4. Wire the Vector Store into the Retrieve Knowledge Base tool’s ai_vectorStore input.
💡

Tip: To populate your Supabase vector store, create a separate n8n workflow that reads your KB articles (from Google Docs, Notion, or markdown files), splits them into chunks of ~500 tokens, generates embeddings with the same model (text-embedding-3-small), and inserts them into the documents table. This only needs to run once (or whenever you update your docs).

Part 3 — Smart Routing and Zendesk Actions

After the AI generates a response, the workflow needs to decide: was the response a confident answer, or did the AI punt because it couldn’t find relevant information?

8 Check If Escalation Needed (IF Node)

This IF node inspects the AI’s output text. If it contains the escalation phrases (“will escalate this” or “get back to you shortly”), the ticket goes to the human escalation path. Otherwise, it proceeds to auto-reply.

  1. Add an IF node after the AI Agent.
  2. Set the condition combinator to OR.
  3. Condition 1: {{ $json.output }} contains will escalate this
  4. Condition 2: {{ $json.output }} contains get back to you shortly
  5. Set case sensitivity to false.

The True branch (escalation needed) goes to the human tagging node. The False branch (AI answered) goes to the reply-posting node.
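The check mirrors this simple predicate. A Python sketch of the IF node's logic (case-insensitive OR over both phrases):

```python
ESCALATION_PHRASES = ("will escalate this", "get back to you shortly")

def needs_escalation(output: str) -> bool:
    """True branch: tag for a human. False branch: post the AI reply."""
    text = output.lower()
    return any(phrase in text for phrase in ESCALATION_PHRASES)
```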

9 Tag as Human Escalation (True Branch)

When the AI can’t find an answer, this HTTP Request node calls the Zendesk API to tag the ticket with human_requested and ai_escalated. Your support team can create a Zendesk view filtered by these tags to see exactly which tickets need human attention.

  1. Add an HTTP Request node on the True output.
  2. Set Method to PUT.
  3. Set URL to: https://YOUR_ZENDESK_DOMAIN.zendesk.com/api/v2/tickets/{{ $('Extract Ticket Data').first().json.ticket_id }}/tags.json
  4. Set the JSON body to: { "tags": ["human_requested", "ai_escalated"] }
  5. Set authentication to Predefined Credential Type → Zendesk API.

10 Post AI Reply to Ticket (False Branch)

When the AI has a confident answer, this node posts it as a public comment on the Zendesk ticket and sets the ticket status to “pending” (awaiting customer confirmation).

  1. Add an HTTP Request node on the False output.
  2. Set Method to PUT.
  3. Set URL to: https://YOUR_ZENDESK_DOMAIN.zendesk.com/api/v2/tickets/{{ $('Extract Ticket Data').first().json.ticket_id }}.json
  4. Set the JSON body to:
{
  "ticket": {
    "comment": {
      "body": "{{ $json.output }}",
      "public": true
    },
    "status": "pending"
  }
}

The "public": true flag means the customer sees this reply in their email and in the Zendesk portal. Setting status to "pending" tells Zendesk to wait for the customer’s next response.

11 Tag as AI Reply

After posting the reply, this final node tags the ticket with ai_reply and auto_resolved. This lets you track how many tickets the AI handles versus humans, and gives you data for measuring automation ROI.

  1. Add one more HTTP Request node after the reply node.
  2. Same pattern: PUT to the /tags.json endpoint.
  3. Body: { "tags": ["ai_reply", "auto_resolved"] }

The Data Structure

The workflow relies on two persistent data stores: the Supabase vector store for knowledge base content, and the Postgres table for conversation memory.

Supabase documents Table

| Column | Type | Example | Description |
| --- | --- | --- | --- |
| `id` | `bigint` | `1` | Auto-incrementing primary key |
| `content` | `text` | "To reset your password, go to Settings → Security → Change Password…" | The raw text chunk from your KB article |
| `metadata` | `jsonb` | `{"source": "password-reset-guide", "section": "steps"}` | Source tracking — which article and section this chunk came from |
| `embedding` | `vector(1536)` | `[0.0023, -0.0119, …]` | Vector embedding generated by text-embedding-3-small |

Postgres zendesk_ticket_histories Table

| Column | Type | Example | Description |
| --- | --- | --- | --- |
| `session_id` | `text` | `48291` | The Zendesk ticket ID — groups all messages for one ticket |
| `message` | `text` | `{"type":"human","content":"I can't reset my password..."}` | Individual message in the conversation |
| `created_at` | `timestamp` | `2026-04-10 14:23:00` | When the message was stored |
📌

The zendesk_ticket_histories table is created automatically by n8n’s Postgres Chat Memory node on first execution. You don’t need to create it manually. The documents table and its match_documents function must be set up in Supabase before the workflow will work — see the Credentials Guide for full setup steps.

Full System Flow

┌─────────────────────────────────────────────────────────────────────────────┐
│  END-TO-END: TICKET ARRIVES → RESOLUTION                                   │
│                                                                             │
│  Customer creates ticket in Zendesk                                        │
│       ↓                                                                     │
│  Zendesk Trigger fires webhook POST                                        │
│       ↓                                                                     │
│  [Receive New Ticket] — n8n webhook catches the request                    │
│       ↓                                                                     │
│  [Extract Ticket Data] — pull ticket_id, subject, description, etc.        │
│       ↓                                                                     │
│  [Generate AI Response] ← GPT-4o-mini + Postgres Memory                   │
│       │                  ← Knowledge Base Tool                              │
│       │                     ↑                                               │
│       │              [Supabase Vectors] ← [OpenAI Embeddings]              │
│       ↓                                                                     │
│  [Check If Escalation Needed]                                              │
│       │                                                                     │
│   YES ├──→ [Tag: human_requested] → Human agent picks up ticket            │
│       │                                                                     │
│    NO └──→ [Post AI Reply to Ticket] → Customer gets instant answer        │
│                    ↓                                                        │
│            [Tag: ai_reply] → Track AI resolution rate                      │
└─────────────────────────────────────────────────────────────────────────────┘
  

Testing Your Workflow

  1. Activate the workflow in n8n (toggle the Active switch).
  2. Create a test ticket in Zendesk with a question your knowledge base can answer — for example, “How do I reset my password?”
  3. Check the n8n execution log — you should see the webhook trigger, data extraction, AI agent processing, and the reply being posted.
  4. Open the Zendesk ticket — verify the AI’s reply appears as a public comment and the ticket has the ai_reply tag.
  5. Create a second test ticket with a question your KB does NOT cover — like “Can I get a refund for my order #12345?”
  6. Verify escalation — the ticket should have the human_requested tag and no AI reply posted.

Troubleshooting

| Problem | Likely Cause | Fix |
| --- | --- | --- |
| Webhook never fires | Zendesk trigger not configured or wrong URL | Double-check the webhook URL in Zendesk Admin → Triggers. Make sure the trigger condition is "Ticket is Created." |
| AI returns generic/wrong answers | Knowledge base not populated or embeddings mismatch | Verify your documents table has content. Ensure you used text-embedding-3-small with 1536 dimensions when inserting docs. |
| AI always escalates | Vector search returns no results | Check that the match_documents Postgres function exists in Supabase and the table name matches exactly. |
| Reply not appearing on ticket | Zendesk API auth error or wrong domain | Verify your Zendesk API credential has write access and the domain in the URL matches your subdomain (e.g., mycompany.zendesk.com). |
| "401 Unauthorized" on tag update | Zendesk credential lacks admin scope | The API token needs to be created by an admin user. Go to Admin → Channels → API and generate a new token. |

Frequently Asked Questions

How much does it cost to run this per ticket?

With GPT-4o-mini and text-embedding-3-small, the average cost is about $0.002–$0.005 per ticket (input + output tokens combined). If you process 1,000 tickets per month, that’s roughly $2–$5/month in OpenAI costs. Supabase’s free tier handles up to 500MB of vector data, which is plenty for most knowledge bases.
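As a back-of-envelope check on that range, here is the arithmetic for one ticket. The token counts and per-million prices below are illustrative assumptions; check current OpenAI pricing before relying on them:

```python
# Illustrative per-ticket cost math (not current list prices).
input_tokens = 10_000          # ticket text + retrieved KB chunks + system prompt
output_tokens = 1_000          # the AI's reply
price_in_per_million = 0.15    # assumed $/1M input tokens for gpt-4o-mini
price_out_per_million = 0.60   # assumed $/1M output tokens

cost = (input_tokens * price_in_per_million
        + output_tokens * price_out_per_million) / 1_000_000
```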

Can I use this with Zendesk’s free plan?

No — you need API access and the ability to create triggers/webhooks, which requires Zendesk Suite Team or higher. The trigger feature that sends webhook notifications on ticket creation is not available on the free/lite plans.

What happens if the AI gives a wrong answer?

Since the agent is configured with strict RAG-only instructions and a low temperature (0.3), hallucinations are rare. But they can happen. We recommend reviewing AI-tagged tickets weekly during the first month and adjusting your KB content where gaps appear. You can also lower the temperature to 0.1 for even more conservative responses.

Can I add more languages to the knowledge base?

Yes. OpenAI’s embedding model supports 100+ languages natively. If your KB articles are in Spanish or French, the vector search will still work — the embedding model handles cross-lingual similarity. Just make sure your system prompt tells the agent to reply in the customer’s language.

How do I update the knowledge base when we add new articles?

Create a second n8n workflow that watches your documentation source (Google Drive, Notion, or a webhook from your CMS). When a document changes, the workflow re-chunks it, generates new embeddings, and upserts them into the Supabase documents table. This keeps your AI current without manual intervention.

Does this work with Freshdesk, Intercom, or other helpdesks?

The core RAG pipeline (AI Agent + Supabase + Embeddings) works with any helpdesk that has a webhook or API. You’d swap the Zendesk-specific webhook trigger and HTTP Request nodes for the equivalent in your helpdesk. The logic is identical — only the API endpoints change.

Get the AI Zendesk Support Template

Skip the 60-minute build. Get the complete workflow JSON, Supabase setup instructions, and a step-by-step credentials guide — paste your API keys and go live in under 10 minutes.

Get the Template

Instant download · Works on n8n Cloud and self-hosted

What’s Next?

  • Add a satisfaction survey — after the AI replies, send a follow-up asking the customer to rate the response. Use the rating to fine-tune your KB.
  • Build a KB ingestion pipeline — automate the process of embedding new documentation into Supabase whenever you publish a new help article.
  • Add Slack notifications for escalations — when a ticket is tagged human_requested, send a Slack message to your support channel so agents can pick it up immediately.
  • Track AI resolution rate — build a simple dashboard (Google Sheets or Metabase) that counts ai_reply vs human_requested tags over time to measure your automation ROI.