How to Build a Slack Knowledge Base Bot with AI and n8n

Every team has that one Slack channel — the one where the same five questions get asked every single week. “Where’s the onboarding doc?” “What’s the refund policy?” “How do I reset my API key?” Your senior engineers are answering these instead of shipping. Your support lead is copy-pasting the same response for the third time today. With this n8n workflow, you’ll build an AI-powered Slack bot that reads incoming questions, searches your company knowledge base using semantic vector search, and posts precise answers back in the thread — automatically, in about 2 seconds.

Prefer to skip the setup? Grab the ready-made template → and be up and running in under 10 minutes.

What You’ll Build

  1. A Slack bot that listens for messages in any channel it’s invited to
  2. An OpenAI step that converts each incoming question into a 1,536-dimensional vector embedding
  3. A Pinecone semantic search that retrieves the most relevant chunks from your knowledge base
  4. A GPT-4o-mini step that reads the retrieved context and writes a clear, grounded answer
  5. An automatic thread reply in Slack so teammates get instant answers without leaving the channel

How It Works — The Big Picture

This workflow uses the RAG pattern — Retrieval-Augmented Generation. Instead of asking an AI to recall facts from its training data (which goes stale), you store your up-to-date company knowledge in Pinecone as vector embeddings. When a question arrives, the bot finds the semantically closest knowledge chunks and hands them to GPT-4o-mini as context. The result: factual, grounded answers drawn only from your approved content.
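
The retrieval half of RAG is easy to see in miniature. The sketch below uses toy 3-dimensional vectors in place of real 1,536-dimensional embeddings, but the ranking metric — cosine similarity — is the same one your Pinecone index is configured with:

```javascript
// Cosine similarity: 1.0 = same direction (identical meaning), 0 = unrelated.
function cosine(a, b) {
  let dot = 0, na = 0, nb = 0;
  for (let i = 0; i < a.length; i++) {
    dot += a[i] * b[i];
    na += a[i] * a[i];
    nb += b[i] * b[i];
  }
  return dot / (Math.sqrt(na) * Math.sqrt(nb));
}

// Toy 3-dim "embeddings" standing in for real 1,536-dim vectors.
const chunks = [
  { text: 'Refund policy: 30 days, full refund.', vector: [0.9, 0.1, 0.0] },
  { text: 'API keys are reset from the dashboard.', vector: [0.0, 0.2, 0.9] },
];
const question = [0.8, 0.2, 0.1]; // embedding of "What is the refund policy?"

// Rank chunks by similarity, highest first — what Pinecone does at scale.
const ranked = chunks
  .map(c => ({ ...c, score: cosine(question, c.vector) }))
  .sort((a, b) => b.score - a.score);

console.log(ranked[0].text); // the refund chunk wins
```

Pinecone performs this comparison across hundreds of thousands of vectors in milliseconds, which is why the bot can answer in a couple of seconds.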

┌─────────────────────────────────────────────────────────────────────┐
│  SLACK KNOWLEDGE BASE BOT                                           │
│                                                                     │
│  [Slack Trigger]                                                    │
│        ↓                                                            │
│  [Filter Bot Messages] ──(bot message)──→ (stop — no loop)         │
│        ↓ (user message)                                             │
│  [Generate Question Embedding]  (OpenAI text-embedding-3-small)    │
│        ↓ 1,536-dim vector                                           │
│  [Query Pinecone]  (top-5 semantic matches from knowledge base)     │
│        ↓ matched chunks + scores                                    │
│  [Extract Context]  (filter score > 0.7, join top 3 chunks)        │
│        ↓ formatted context string                                   │
│  [Generate AI Answer]  (GPT-4o-mini + RAG prompt)                  │
│        ↓ natural language answer                                    │
│  [Post Answer to Slack]  (reply in the original thread)            │
└─────────────────────────────────────────────────────────────────────┘

What You’ll Need

  • n8n — self-hosted (v1.0+) or n8n Cloud
  • Slack app with Events API enabled and a Bot User Token (starts with xoxb-)
  • OpenAI account — API key with access to text-embedding-3-small and gpt-4o-mini
  • Pinecone account — free Starter plan is plenty; create an index with dimension 1536 and metric cosine
  • A pre-populated Pinecone namespace called knowledge-base (the Credentials Guide includes a Python ingestion script)

Estimated build time: 45–60 minutes from scratch, or under 10 minutes with the template.

Building the Bot — Step by Step

1 Slack Trigger

This node opens a webhook endpoint that Slack’s Events API calls every time a message is posted in a channel your bot belongs to. It’s the entry point for everything.

  1. Add a Slack Trigger node to your canvas.
  2. Select your Slack Bot Token credential (or create one — see the Credentials Guide).
  3. Set the Trigger to Message.
  4. Copy the Webhook URL n8n displays. Paste it into your Slack app’s Event Subscriptions → Request URL field.
  5. In Slack app settings, subscribe to the message.channels and message.groups bot events, then reinstall the app to your workspace.

A message event payload looks like this:

{
  "type": "message",
  "text": "What is our refund policy for annual subscriptions?",
  "user": "U04ABCDEF12",
  "channel": "C06XYZABC99",
  "ts": "1743784201.000100",
  "subtype": null
}
💡 Tip: Slack sends events for bot messages too — including the bot’s own replies. Without the next filter node, every answer the bot posts would re-trigger the workflow and create an infinite loop.

2 Filter Bot Messages (IF)

This IF node stops the workflow from processing bot messages. It’s a one-condition check that routes user messages forward and drops everything else.

  1. Add an IF node connected to the Slack Trigger output.
  2. Set Value 1 to ={{ $json.subtype }}.
  3. Condition: Is Not Equal To. Set Value 2 to bot_message.
  4. Connect the True output to Step 3. Leave False unconnected.
💡 Tip: You can add a second condition here to limit the bot to a specific channel — filter $json.channel equals C06XYZABC99. This is useful if you want the bot active only in #ask-the-bot and not everywhere.
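
If you ever want the same guard in a Code node instead of an IF node, the logic is a few lines. This is a sketch, not part of the workflow; the channelAllowList parameter is a hypothetical extra restriction, and the bot_id check is a belt-and-braces addition (some bot messages carry bot_id rather than a subtype):

```javascript
// Mirrors the IF node: pass only messages that are not from a bot.
function shouldProcess(event, channelAllowList = null) {
  if (event.subtype === 'bot_message') return false; // drop bot replies — no loop
  if (event.bot_id) return false;                    // extra guard for bot_id-only events
  if (channelAllowList && !channelAllowList.includes(event.channel)) return false;
  return true;
}

console.log(shouldProcess({ subtype: null, channel: 'C06XYZABC99' }));          // true
console.log(shouldProcess({ subtype: 'bot_message', channel: 'C06XYZABC99' })); // false
```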

3 Generate Question Embedding (HTTP Request → OpenAI)

This node calls the OpenAI Embeddings API and converts the user’s question into a 1,536-dimensional vector — a list of numbers that captures the semantic meaning of the sentence. Pinecone will use this to find similar content.

  1. Add an HTTP Request node.
  2. Method: POST | URL: https://api.openai.com/v1/embeddings
  3. Authentication: Generic Credential Type → HTTP Header Auth. Create a credential with Name = Authorization and Value = Bearer YOUR_OPENAI_API_KEY.
  4. Body Content Type: JSON. Paste this body:
{
  "input": "={{ $('Slack Trigger').item.json.text }}",
  "model": "text-embedding-3-small"
}

The response contains the embedding inside data[0].embedding:

{
  "object": "list",
  "data": [
    {
      "object": "embedding",
      "index": 0,
      "embedding": [0.0023, -0.0189, 0.0341, "...1,533 more values..."]
    }
  ],
  "model": "text-embedding-3-small",
  "usage": { "prompt_tokens": 11, "total_tokens": 11 }
}
💡 Tip: text-embedding-3-small costs $0.02 per million tokens. A team of 50 people asking 200 questions a day will spend a fraction of a cent per day on embeddings. If you need higher search accuracy, switch to text-embedding-3-large (3,072 dimensions) — but update your Pinecone index dimension to match before doing so.
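
For reference, the same call outside n8n is a single fetch. A minimal sketch — embedQuestion and extractEmbedding are illustrative helpers, not part of the workflow, and you supply your own API key:

```javascript
// Pull the vector out of the Embeddings API response shape shown above.
function extractEmbedding(response) {
  return response.data[0].embedding; // 1,536 floats for text-embedding-3-small
}

// The request itself — identical to the HTTP Request node's configuration.
async function embedQuestion(text, apiKey) {
  const res = await fetch('https://api.openai.com/v1/embeddings', {
    method: 'POST',
    headers: {
      Authorization: `Bearer ${apiKey}`,
      'Content-Type': 'application/json',
    },
    body: JSON.stringify({ input: text, model: 'text-embedding-3-small' }),
  });
  return extractEmbedding(await res.json());
}
```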

4 Query Pinecone (HTTP Request)

This node sends the question vector to Pinecone and gets back the five most semantically similar knowledge chunks, each scored between 0 (irrelevant) and 1 (identical).

  1. Add another HTTP Request node.
  2. Method: POST | URL: https://YOUR_PINECONE_INDEX_HOST/query (replace with your index host from the Pinecone console — it looks like my-index-abc123.svc.us-east-1.pinecone.io)
  3. Add a header: Name = Api-Key, Value = YOUR_PINECONE_API_KEY.
  4. Body Content Type: JSON. Use this expression as the body:
{
  "vector": "={{ $json.data[0].embedding }}",
  "topK": 5,
  "includeMetadata": true,
  "namespace": "knowledge-base"
}

Pinecone responds with the top matches and their stored metadata:

{
  "matches": [
    {
      "id": "doc-refund-annual-001",
      "score": 0.921,
      "metadata": {
        "text": "Annual subscriptions may be refunded within 30 days of purchase for a full refund. After 30 days, refunds are prorated based on remaining months.",
        "source": "help-center/billing",
        "last_updated": "2026-03-01"
      }
    },
    {
      "id": "doc-refund-annual-002",
      "score": 0.874,
      "metadata": {
        "text": "To request a refund, email billing@acme-corp.com with your order number and reason for cancellation. Refunds are processed within 5 business days.",
        "source": "help-center/billing",
        "last_updated": "2026-03-01"
      }
    }
  ],
  "namespace": "knowledge-base"
}
📌 Your Pinecone index must be pre-populated before the bot can answer anything. Each vector record needs a text field in its metadata. The Credentials Guide PDF bundled with the template includes a ready-to-run Python ingestion script that embeds and uploads your documents in minutes.
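
The bundled script is Python, but the shape of each upsert is simple enough to sketch in JavaScript too. buildUpsertBody is a hypothetical helper that assembles the payload for Pinecone’s standard /vectors/upsert REST endpoint; the key point is that the raw text travels in metadata, because the Extract Context node reads metadata.text directly:

```javascript
// Shape one upsert request per batch of chunks. Each record carries the raw
// text in metadata — the Extract Context node depends on metadata.text.
function buildUpsertBody(records, namespace = 'knowledge-base') {
  return {
    namespace,
    vectors: records.map(r => ({
      id: r.id,
      values: r.embedding, // 1,536 floats from text-embedding-3-small
      metadata: { text: r.text, source: r.source ?? 'manual' },
    })),
  };
}

// Sending it is one POST to your index host (hypothetical host variable shown):
// await fetch(`https://${INDEX_HOST}/vectors/upsert`, {
//   method: 'POST',
//   headers: { 'Api-Key': PINECONE_API_KEY, 'Content-Type': 'application/json' },
//   body: JSON.stringify(buildUpsertBody(batch)),
// });
```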

5 Extract Context (Code)

This JavaScript node processes the Pinecone results: filters low-confidence matches, takes the top 3 chunks, formats them into a numbered context string, and bundles the data for the next node.

const matches = $input.item.json.matches || [];
const slackData = $('Slack Trigger').item.json;

if (matches.length === 0) {
  return [{
    json: {
      context: 'No relevant information found in the knowledge base.',
      question: slackData.text,
      channel: slackData.channel,
      thread_ts: slackData.ts
    }
  }];
}

const context = matches
  .filter(m => m.score > 0.7)
  .slice(0, 3)
  .map((m, i) => `[${i + 1}] ${m.metadata.text}`)
  .join('\n\n');

return [{
  json: {
    context: context || 'No highly relevant information found.',
    question: slackData.text,
    channel: slackData.channel,
    thread_ts: slackData.ts
  }
}];

After this node, the data is clean and ready for the AI:

{
  "context": "[1] Annual subscriptions may be refunded within 30 days...\n\n[2] To request a refund, email billing@acme-corp.com...",
  "question": "What is our refund policy for annual subscriptions?",
  "channel": "C06XYZABC99",
  "thread_ts": "1743784201.000100"
}
💡 Tip: The 0.7 score threshold is a good starting point. If the bot returns off-topic answers, raise it to 0.8. If it says “no information found” for questions you know are in the knowledge base, lower it to 0.65 or check that your Pinecone namespace name matches exactly.

6 Generate AI Answer (HTTP Request → OpenAI Chat)

This node sends the retrieved context and the original question to GPT-4o-mini. The system prompt instructs the model to answer strictly from the provided context — no hallucinating facts that aren’t in your knowledge base.

  1. Method: POST | URL: https://api.openai.com/v1/chat/completions
  2. Reuse your OpenAI HTTP Header Auth credential.
  3. Body (JSON):
{
  "model": "gpt-4o-mini",
  "messages": [
    {
      "role": "system",
      "content": "You are a helpful company knowledge base assistant. Answer questions using ONLY the provided context. If the context does not contain the answer, say so clearly and suggest the user contact the team directly. Keep answers concise and actionable."
    },
    {
      "role": "user",
      "content": "Context:\n={{ $json.context }}\n\nQuestion: ={{ $json.question }}"
    }
  ],
  "max_tokens": 500,
  "temperature": 0.2
}
💡 Tip: temperature: 0.2 keeps answers factual and consistent. For a knowledge base bot you want determinism, not creativity. max_tokens: 500 keeps responses Slack-readable — roughly 3–5 paragraphs maximum.

7 Post Answer to Slack

The final node takes GPT-4o-mini’s answer and posts it as a thread reply — so the answer lives directly under the original question rather than flooding the main channel.

  1. Add a Slack node. Resource: Message, Operation: Post.
  2. Channel: ={{ $('Extract Context').item.json.channel }}
  3. Text: ={{ $json.choices[0].message.content }}
  4. Under Other Options, set Thread Timestamp to ={{ $('Extract Context').item.json.thread_ts }}
  5. Use the same Slack credential as the Slack Trigger.
💡 Tip: Want to brand the reply? Change the text field to: 🤖 *Knowledge Base Bot:* {{ $json.choices[0].message.content }}. The asterisks render as bold in Slack, making it clear this is an automated response.

The Data Flow

Here’s how a single question moves through all seven nodes, from Slack message to bot reply:

| Stage | Data Present | Key Field |
|---|---|---|
| After Slack Trigger | Raw Slack event payload | text, channel, ts |
| After Filter | Same payload, confirmed user message | subtype is null |
| After OpenAI Embeddings | 1,536-number float array | data[0].embedding |
| After Pinecone Query | Top 5 knowledge chunks + similarity scores | matches[].score, matches[].metadata.text |
| After Extract Context | Formatted context + original question | context, question, thread_ts |
| After OpenAI Chat | Full Chat Completions response object | choices[0].message.content |
| Posted to Slack | Plain text answer in thread | Visible to all channel members instantly |

Pinecone Knowledge Base Schema

Every vector record you upsert into Pinecone must follow this structure. The text metadata field is required — the Extract Context node reads it directly.

| Field | Type | Example | Description |
|---|---|---|---|
| id | String | doc-refund-001 | Unique identifier for this knowledge chunk |
| values | Float[1536] | [0.023, -0.019, …] | Embedding from text-embedding-3-small |
| metadata.text | String | "Annual subscriptions are refunded within 30 days…" | The raw knowledge chunk — required |
| metadata.source | String | help-center/billing | Where this content came from (optional) |
| metadata.last_updated | String | 2026-03-01 | Last update date for freshness tracking (optional) |
📌 Keep each knowledge chunk between 100 and 500 words. Too short and the chunk loses context; too long and the embedding gets diluted. One concept per chunk is a good rule of thumb — for example, one chunk for the refund policy, a separate chunk for the cancellation process.
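
A chunker that follows that guideline might look like the sketch below — a simple word-count splitter that breaks on paragraph boundaries so one concept tends to stay in one chunk. Production splitters (and the bundled ingestion script) may use different heuristics:

```javascript
// Split a document into chunks of at most maxWords words, breaking on
// paragraph boundaries so related sentences stay together.
function chunkDocument(text, maxWords = 400) {
  const paragraphs = text.split(/\n\s*\n/);
  const chunks = [];
  let current = [];
  let count = 0;
  for (const p of paragraphs) {
    const words = p.trim().split(/\s+/).filter(Boolean);
    // Start a new chunk if adding this paragraph would exceed the limit.
    if (count + words.length > maxWords && current.length > 0) {
      chunks.push(current.join('\n\n'));
      current = [];
      count = 0;
    }
    current.push(p.trim());
    count += words.length;
  }
  if (current.length > 0) chunks.push(current.join('\n\n'));
  return chunks;
}
```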

Full System Flow

User posts question in Slack
          │
          ▼
┌─────────────────────┐
│    Slack Trigger    │  Receives message.channels event via webhook
└─────────────────────┘
          │
          ▼
┌─────────────────────┐
│ Filter Bot Messages │──── subtype = "bot_message"? ──→ STOP
└─────────────────────┘
          │ user message passes
          ▼
┌──────────────────────────┐
│ Generate Question        │  POST https://api.openai.com/v1/embeddings
│ Embedding (OpenAI)       │  model: text-embedding-3-small → 1,536-dim vector
└──────────────────────────┘
          │
          ▼
┌──────────────────────────┐
│ Query Pinecone           │  POST {index-host}/query
│ (vector search)          │  topK=5 · namespace: knowledge-base
└──────────────────────────┘
          │ top matches with metadata + scores
          ▼
┌──────────────────────────┐
│ Extract Context          │  Filter: score > 0.7
│ (Code node)              │  Join top 3 chunks into context string
└──────────────────────────┘
          │
          ▼
┌──────────────────────────┐
│ Generate AI Answer       │  POST https://api.openai.com/v1/chat/completions
│ (GPT-4o-mini)            │  RAG prompt · temperature: 0.2 · max_tokens: 500
└──────────────────────────┘
          │ grounded answer text
          ▼
┌──────────────────────────┐
│ Post Answer to Slack     │  Thread reply on original message
└──────────────────────────┘
          │
          ▼
Team member sees the answer in 2–3 seconds ✓

Testing Your Workflow

  1. Make sure the workflow is toggled to Active in n8n.
  2. Invite your bot to a test channel: type /invite @YourBotName.
  3. Post a question you know is covered in your knowledge base — for example: “What’s the refund policy for annual plans?”
  4. Within 2–3 seconds, a thread reply should appear with a grounded answer drawn from your Pinecone content.
  5. Post a question that’s definitely not in your knowledge base. The bot should say it doesn’t have that information and suggest contacting the team.
  6. Check the n8n Execution log to confirm all 7 nodes completed with green checkmarks and no errors.

| Problem | Likely Cause | Fix |
|---|---|---|
| Bot doesn’t respond at all | Workflow not Active, or Slack webhook URL mismatch | Toggle workflow to Active; verify webhook URL in Slack app settings matches n8n exactly |
| Bot replies to its own messages | Filter Bot Messages node misconfigured | Check the True output of the IF node connects to Step 3; False output should be unconnected |
| “No relevant information found” for everything | Pinecone index empty or wrong namespace | Run the ingestion script from the Credentials Guide; confirm namespace is exactly knowledge-base |
| Off-topic or wrong answers | Score threshold too low or chunks too large | Raise score threshold to 0.8 in the Extract Context node; re-chunk content into shorter segments |
| OpenAI 401 Unauthorized | API key missing or expired | Regenerate key at platform.openai.com and update the HTTP Header Auth credential in n8n |
| Slack “not_in_channel” error | Bot not invited to the channel | Run /invite @YourBotName in the channel before testing |

Frequently Asked Questions

Do I need to load content into Pinecone before the bot will work?

Yes — without content in Pinecone, every query returns empty results and the bot will say “no information found.” The template package includes a Python ingestion script (in the Credentials Guide PDF) that takes any plain text or Markdown file, splits it into chunks, generates embeddings, and uploads them to Pinecone. You can have a basic knowledge base loaded in 10–15 minutes.

How do I prevent the bot from answering questions it shouldn’t?

The system prompt in Step 6 instructs GPT-4o-mini to answer only from the provided context. If a question doesn’t match anything in Pinecone above the 0.7 threshold, the Extract Context node sends a “no information found” message as the context — and the AI is instructed to honestly say so and redirect to the team. You control what goes into Pinecone, so you control what the bot can answer.

How much does this cost to run per month?

For a team of 50 people asking roughly 200 questions a day: OpenAI embedding calls cost well under a cent per day, and GPT-4o-mini answers cost about $0.10/day. Pinecone’s free Starter plan handles up to 100,000 vectors — more than enough for a thorough company knowledge base. Total cost: roughly $3–4/month in API fees.

Can I restrict the bot to specific Slack channels?

Yes. In the Filter Bot Messages IF node (Step 2), add a second condition: $json.channel equals your target channel ID. The bot will only respond in that specific channel. You can find a channel’s ID by right-clicking it in Slack and selecting “Copy link” — the ID is the string starting with C at the end of the URL.
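
Slack channel links take the form https://yourworkspace.slack.com/archives/CHANNEL_ID, so pulling the ID out of a copied link is one regex. A small sketch (channelIdFromLink is an illustrative helper):

```javascript
// Extract the channel ID from a Slack "Copy link" URL; null if no match.
function channelIdFromLink(url) {
  const m = url.match(/\/archives\/(C[A-Z0-9]+)/);
  return m ? m[1] : null;
}

console.log(channelIdFromLink('https://acme.slack.com/archives/C06XYZABC99'));
// → C06XYZABC99
```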

How do I keep the knowledge base current as our docs change?

Build a second n8n workflow that watches for document updates — a Google Drive trigger that fires when a doc is modified, re-embeds the content, and upserts it to Pinecone by the same id. Since Pinecone’s upsert operation overwrites by ID, you won’t accumulate duplicates. You can also just re-run the ingestion script manually after major documentation updates.

Does this work on n8n Cloud, or only self-hosted?

It works on both. The workflow uses only HTTP Request nodes, a Code node, the Slack Trigger, and the Slack node — all available in n8n Cloud and every self-hosted version from 1.0 onwards. No custom nodes or community packages are required.

🚀 Get the Slack Knowledge Base Bot Template

You now know exactly how this workflow is built. The template gets you there in under 10 minutes: it includes the ready-to-import workflow JSON, a Setup Guide PDF with step-by-step activation instructions, and a Credentials Guide PDF with a working Python ingestion script to load your knowledge base into Pinecone.

Get the Template →

Instant download · Works on n8n Cloud and self-hosted

What’s Next?

  • Add answer feedback: Let users react 👍 or 👎 on bot replies, log reactions to Airtable, and use quality scores to identify gaps in your knowledge base.
  • Auto-ingest from Notion or Confluence: Build a companion n8n workflow that watches for document updates and automatically re-embeds modified pages into Pinecone.
  • Add a Slack slash command: Create a /ask command so users can query the bot privately in DMs without cluttering a shared channel.
  • Multi-namespace routing: Create separate Pinecone namespaces for HR, Engineering, and Sales — and route questions to the right namespace based on which Slack channel they came from.

How to Auto-Triage Support Emails to Slack with AI Using n8n

Every support inbox gets noisy fast. A billing question sits next to a frantic “our entire account is locked” message, and both look the same in a Gmail preview. By the time someone reads the urgent one, an hour has slipped by. This tutorial walks you through building an n8n workflow that reads every inbound support email, asks GPT-4o mini to classify it by priority and category, routes the result to the right Slack channel — and logs everything to a Google Sheet so nothing ever disappears.

Prefer to skip the setup? Grab the ready-made template → and be up and running in under 10 minutes.

What You’ll Build

  1. A Gmail trigger polls your inbox every minute for new unread emails.
  2. An OpenAI call analyzes the email and returns a priority level (High / Medium / Low), a category (Bug Report, Billing, Feature Request, etc.), a one-sentence summary, and the sender’s detected sentiment.
  3. A conditional branch routes High-priority tickets to a #support-urgent Slack channel with a bold alert, and everything else to #support as a standard notification.
  4. Every ticket — regardless of priority — is appended to a Google Sheet so you have a permanent, searchable log of all incoming requests.

How It Works — The Big Picture

The workflow runs as a polling loop. Think of it as a tireless assistant who checks email every sixty seconds, reads it with an AI brain, and instantly posts into the right Slack channel — then files the paperwork in a spreadsheet.

┌─────────────────────────────────────────────────────────────────────┐
│  GMAIL → SLACK AI SUPPORT TRIAGE                                    │
│                                                                     │
│  [Gmail Trigger]                                                    │
│       │  (new unread email)                                         │
│       ▼                                                             │
│  [Extract Email Fields]                                             │
│       │  (sender, subject, body snippet, timestamp)                 │
│       ▼                                                             │
│  [AI Classify & Summarize]  ←── OpenAI gpt-4o-mini                 │
│       │  (priority, category, summary, sentiment)                   │
│       ▼                                                             │
│  [Parse AI Response]                                                │
│       │                                                             │
│       ▼                                                             │
│  [Check Priority]                                                   │
│       │                        │                                    │
│    priority=High           priority≠High                            │
│       │                        │                                    │
│       ▼                        ▼                                    │
│  [Slack Urgent]          [Slack Standard]                           │
│  #support-urgent          #support                                  │
│       │                        │                                    │
│       └──────────┬─────────────┘                                    │
│                  ▼                                                  │
│         [Log to Google Sheets]                                      │
└─────────────────────────────────────────────────────────────────────┘

What You’ll Need

  • n8n — self-hosted or n8n Cloud (any version 1.0+)
  • Gmail account — connected via Google OAuth2 in n8n
  • OpenAI account — API key with access to gpt-4o-mini
  • Slack workspace — with two channels: #support-urgent and #support
  • Google Sheets — a spreadsheet with the columns described below

Estimated build time: 40–60 minutes from scratch, or under 10 minutes with the template.


Building the Workflow — Step by Step

1 Gmail Trigger — Watch for New Emails

The Gmail Trigger node polls your inbox every minute and fires once for each new unread email it finds. Connect your Gmail credential via Google OAuth2, set Poll Times to Every Minute, and set Read Status to Unread under Filters.

After a test run, the raw payload looks like this:

{
  "id": "18f4a3b2c1d0e9f8",
  "from": { "value": [{ "name": "Sarah Thompson", "address": "sarah.thompson@outlook.com" }] },
  "subject": "Can't log into my account — urgent!",
  "text": "Hi, I've been trying to log in for the past hour and keep getting an 'Invalid credentials' error...",
  "date": "2026-04-03T14:22:00.000Z"
}
💡 Tip: If you only want to monitor a specific support address, route emails to a Gmail label first, then filter by that label in the trigger node.

2 Extract Email Fields — Set Node

The Set node flattens the nested Gmail payload into a clean, flat object for the AI prompt. Configure these assignments:

| Field Name | Expression | Purpose |
|---|---|---|
| sender | ={{ $json.from?.value?.[0]?.address ?? $json.from }} | Sender’s email address |
| senderName | ={{ $json.from?.value?.[0]?.name ?? 'Unknown' }} | Sender’s display name |
| subject | ={{ $json.subject ?? '(No Subject)' }} | Email subject line |
| bodySnippet | ={{ ($json.text ?? $json.snippet ?? '').substring(0, 1200) }} | First 1,200 chars of body |
| receivedAt | ={{ $now.format('yyyy-MM-dd HH:mm') }} | Human-readable timestamp |
| messageId | ={{ $json.id }} | Gmail message ID |
📌 We cap the body at 1,200 characters to keep OpenAI token costs low. Increase this limit if your customers tend to write detailed reports.
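
If you prefer a Code node over a Set node, the same flattening can be written as below — a sketch mirroring the table’s expressions, with the timestamp simplified to UTC instead of Luxon’s $now.format:

```javascript
// Flatten the nested Gmail payload into the fields the AI prompt needs.
function extractEmailFields(msg, now = new Date()) {
  return {
    sender: msg.from?.value?.[0]?.address ?? msg.from,
    senderName: msg.from?.value?.[0]?.name ?? 'Unknown',
    subject: msg.subject ?? '(No Subject)',
    bodySnippet: (msg.text ?? msg.snippet ?? '').substring(0, 1200), // cap token cost
    receivedAt: now.toISOString().slice(0, 16).replace('T', ' '),    // yyyy-MM-dd HH:mm (UTC)
    messageId: msg.id,
  };
}
```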

3 AI Classify & Summarize — HTTP Request to OpenAI

An HTTP Request node sends the email to OpenAI’s Chat Completions API. Set Method to POST, URL to https://api.openai.com/v1/chat/completions, and Authentication to HTTP Header Auth with Authorization: Bearer YOUR_KEY. Use this JSON body:

{
  "model": "gpt-4o-mini",
  "temperature": 0,
  "response_format": { "type": "json_object" },
  "messages": [
    { "role": "system", "content": "You are a support triage assistant. Return JSON with: priority (High/Medium/Low), category (Bug Report/Billing/Feature Request/General Question/Account Issue), summary (max 120 chars), sentiment (Frustrated/Neutral/Positive). High priority = outage, data loss, billing error, account locked, urgent/ASAP." },
    { "role": "user", "content": "From: {{ $json.senderName }} <{{ $json.sender }}>\nSubject: {{ $json.subject }}\n\n{{ $json.bodySnippet }}" }
  ]
}
💡 Tip: Setting temperature: 0 makes classification deterministic. The response_format: json_object parameter guarantees the response is always valid JSON, preventing parse errors downstream.

4 Parse AI Response — Code Node

This Code node extracts the nested JSON, parses it, and merges AI classifications with the original email fields:

const aiRaw = items[0].json.choices?.[0]?.message?.content ?? '{}';
let ai;
try { ai = JSON.parse(aiRaw); }
catch (e) { ai = { priority: 'Low', category: 'General Question', summary: 'Parse error', sentiment: 'Neutral' }; }

const emailData = $('Extract Email Fields').item.json;
return [{ json: {
  sender: emailData.sender, senderName: emailData.senderName,
  subject: emailData.subject, bodySnippet: emailData.bodySnippet,
  receivedAt: emailData.receivedAt, messageId: emailData.messageId,
  priority: ai.priority ?? 'Low', category: ai.category ?? 'General Question',
  summary: ai.summary ?? emailData.subject, sentiment: ai.sentiment ?? 'Neutral'
}}];

The merged output object looks like this:

{
  "sender": "sarah.thompson@outlook.com",
  "senderName": "Sarah Thompson",
  "subject": "Can't log into my account — urgent!",
  "priority": "High",
  "category": "Account Issue",
  "summary": "User cannot log in due to 'Invalid credentials' error persisting for over an hour.",
  "sentiment": "Frustrated",
  "receivedAt": "2026-04-03 14:22",
  "messageId": "18f4a3b2c1d0e9f8"
}
💡 Tip: The try/catch block ensures the workflow never crashes if OpenAI is rate-limited or returns an unexpected response — it falls back to Low priority instead.

5 Check Priority — IF Node

The IF node checks whether priority equals "High" — the system prompt constrains the value to exactly High, Medium, or Low, so an exact string match is reliable. True branch → urgent Slack alert. False branch → standard notification. Left value: ={{ $json.priority }}, Operation: String equals, Right value: High.

📌 To add a Medium-priority route, swap the IF node for a Switch node with three outputs: High, Medium, and Low.
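
The routing decision itself is a simple mapping. If you ever outgrow the IF/Switch nodes, the same logic in a Code node might look like this sketch (channel names match this article; a dedicated medium channel is a hypothetical option):

```javascript
// Map AI priority to a destination Slack channel; anything unexpected
// falls back to the standard channel rather than failing.
function routeChannel(priority) {
  switch ((priority ?? '').toLowerCase()) {
    case 'high':   return '#support-urgent';
    case 'medium': return '#support';        // or a dedicated #support-medium
    case 'low':    return '#support';
    default:       return '#support';
  }
}
```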

6 Slack Urgent Alert

Posts to #support-urgent with Block Kit formatting. Set Resource to Message, Operation to Post, Message Type to Blocks. The card shows sender name, category, received timestamp, sentiment, subject, and AI summary — everything your on-call team needs at a glance.

💡 Tip: Add an Action block with a button linking directly to the email: https://mail.google.com/mail/u/0/#inbox/{{ $json.messageId }}

7 Slack Standard Alert

Identical configuration to Step 6, but posts to #support with a gentler 📬 New Support Ticket header and includes the priority level in the fields.

8 Log to Google Sheets

Both Slack branches converge here. The Google Sheets node appends one row per email. Set Operation to Append Row and map all nine columns. This creates a permanent, searchable history of every inbound request.


The Data Structure

Create a Google Sheet named Support Ticket Log with these columns in row 1. Names are case-sensitive and must match exactly.

| Column | Type | Example | Description |
|---|---|---|---|
| Received At | Text | 2026-04-03 14:22 | Timestamp when the email arrived |
| Sender | Text | sarah.thompson@outlook.com | Sender’s email address |
| Sender Name | Text | Sarah Thompson | Sender’s display name from Gmail |
| Subject | Text | Can’t log into my account — urgent! | Original email subject line |
| Priority | Text | High | AI-assigned priority: High / Medium / Low |
| Category | Text | Account Issue | AI-assigned ticket category |
| Sentiment | Text | Frustrated | Detected sender emotion |
| AI Summary | Text | User cannot log in due to… | One-sentence AI summary |
| Message ID | Text | 18f4a3b2c1d0e9f8 | Gmail message ID for deduplication |
📌 Column names are case-sensitive. Leaving out columns won’t break the workflow — n8n simply won’t write to unmapped columns.


Full System Flow

  INCOMING EMAIL
       │
       ▼
  ┌────────────────┐
  │  Gmail Trigger │  (polls every minute)
  └────────┬───────┘
           │ raw email payload
           ▼
  ┌───────────────────────┐
  │  Extract Email Fields │  (Set node)
  └──────────┬────────────┘
             │ {sender, subject, bodySnippet, receivedAt, messageId}
             ▼
  ┌──────────────────────────────┐
  │  AI Classify & Summarize     │  (HTTP → OpenAI gpt-4o-mini)
  └──────────┬───────────────────┘
             │ OpenAI JSON response
             ▼
  ┌──────────────────────┐
  │  Parse AI Response   │  (Code node)
  └──────────┬───────────┘
             │ {priority, category, summary, sentiment, + email fields}
             ▼
  ┌──────────────────────┐
  │    Check Priority    │  (IF node)
  └────┬─────────────────┘
       │                     │
  priority=High         priority≠High
       │                     │
       ▼                     ▼
  ┌────────────┐     ┌────────────────┐
  │ Slack      │     │ Slack          │
  │ #support-  │     │ #support       │
  │  urgent    │     │ (standard)     │
  └─────┬──────┘     └──────┬─────────┘
        │                   │
        └──────────┬────────┘
                   ▼
       ┌────────────────────┐
       │  Log to Google     │
       │  Sheets (append)   │
       └────────────────────┘

Testing Your Workflow

  1. Activate the workflow (toggle the Active switch in the top-right).
  2. Send a test email with subject “URGENT: payment failed — please help!” to the monitored inbox.
  3. Wait up to 60 seconds, then check #support-urgent in Slack — you should see a card with a bold header.
  4. Check your Google Sheet — a new row should appear with all nine columns filled.
  5. Send a second email with subject “Feature request: dark mode”. It should appear in #support as Low priority.
| Problem | Likely Cause | Fix |
|---|---|---|
| No Slack message received | Slack credential scope missing | Re-authorize Slack and ensure chat:write scope is granted |
| Google Sheets row not appended | Column name mismatch | Check that sheet column headers match exactly (case-sensitive) |
| OpenAI node returns 401 | API key format wrong | Verify the Authorization header value is Bearer sk-… |
| All emails classified as Low | AI parse error falling back | Inspect Code node output — check choices[0].message.content |
| Gmail trigger fires duplicates | Emails re-marked as unread | Enable the Mark as Read option on the Gmail Trigger node |

Frequently Asked Questions

Does this workflow automatically reply to the sender?

No — this workflow focuses purely on triage and notification. You can add a Gmail send node at the end to fire an auto-acknowledgment, wired in parallel with the Slack nodes so both happen simultaneously.

How much does the OpenAI API cost to run this?

GPT-4o-mini is very inexpensive — roughly $0.15 per million input tokens. For a typical support email (~200 tokens), you’re looking at fractions of a cent per email. Processing 1,000 emails per month costs less than $0.50 in API fees.

Can I add more priority levels or categories?

Absolutely. Edit the system prompt in the HTTP Request node to include any levels or categories you need. Just make sure your IF or Switch node branches match whatever the AI returns.

What happens if OpenAI is down or rate-limited?

The Code node has a try/catch block that falls back to Low priority if parsing fails. The email still gets logged to Google Sheets and posted to Slack — just without an AI classification.
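The template’s actual Code node isn’t reproduced in this article, but the fallback behavior it describes can be sketched like this (the field names match this pipeline; the JSON-extraction regex and fallback values are illustrative assumptions):

```javascript
// Sketch of the "Parse AI Response" Code node's try/catch fallback.
function parseAiResponse(rawContent) {
  const fallback = {
    priority: 'Low',
    category: 'Other',
    summary: 'AI classification unavailable',
    sentiment: 'Neutral',
  };
  try {
    // The model sometimes wraps JSON in extra text; grab the first
    // {...} block before parsing.
    const match = String(rawContent).match(/\{[\s\S]*\}/);
    const parsed = JSON.parse(match[0]);
    return {
      priority: parsed.priority || fallback.priority,
      category: parsed.category || fallback.category,
      summary: parsed.summary || fallback.summary,
      sentiment: parsed.sentiment || fallback.sentiment,
    };
  } catch (err) {
    // OpenAI down, rate-limited, or malformed output: default to Low
    // so the email is still posted to Slack and logged to Sheets.
    return fallback;
  }
}
```

Because the catch branch returns a complete object, the IF node downstream always has a priority value to branch on.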

Can I monitor multiple Gmail inboxes?

Yes. Add a second Gmail Trigger node connected to a different Gmail credential and wire it into the same Extract Email Fields node. Both triggers independently poll their inboxes and feed into the same triage pipeline.

Is my email content sent to OpenAI’s servers?

Yes — the subject and first 1,200 characters of the body are sent to OpenAI’s API. OpenAI’s standard API does not use submitted data to train models. If your emails contain sensitive data, consider a self-hosted local model like Ollama with Llama 3 via the HTTP Request node.


🚀 Get the Gmail → Slack AI Support Triage Template

Skip the build and get a ready-to-import n8n workflow JSON, a step-by-step Setup Guide PDF, and a Credentials Guide PDF — everything you need to be running in under 10 minutes.

Get the Template →

Instant download · Works on n8n Cloud and self-hosted

What’s Next?

  • Auto-reply on High priority: Add a Gmail send node that fires an immediate acknowledgment to the customer when their ticket lands as High priority.
  • Create tickets in a helpdesk: Replace or augment the Google Sheets node with a Zendesk, Freshdesk, or Linear node to create a real ticket with the AI-assigned priority already set.
  • Daily digest report: Add a second workflow that runs at 8 AM every morning, reads yesterday’s rows from the Google Sheet, and posts a summary to Slack.
  • Sentiment escalation: DM the support manager directly when sentiment is Frustrated AND priority is High.
n8n
Gmail
Slack
OpenAI
Google Sheets
Support Automation
AI Triage
automation

How to Auto-Summarize Gmail Emails with AI Using n8n

Your inbox is full — important emails are buried, and you’re spending hours skimming threads just to find the ones that actually need your attention. What if every new email was automatically summarized by AI, pinged to your Slack, and logged to a spreadsheet — without you lifting a finger? In this tutorial, you’ll build exactly that using n8n, OpenAI, Slack, and Google Sheets. No code required, and it runs 24/7 in the background while you focus on work that matters.

Prefer to skip the setup? Grab the ready-made template → and be up and running in under 10 minutes.

What You’ll Build

  1. A Gmail trigger polls your inbox every minute for new unread emails.
  2. The email content is sent to OpenAI’s GPT-4o-mini model, which generates a crisp 2–3 sentence summary.
  3. The summary is posted to a Slack channel (e.g., #email-digest) so you see it the moment it lands.
  4. The email metadata and AI summary are simultaneously appended to a Google Sheets log — your permanent, searchable email archive.
  5. You stay on top of every important email without ever opening your inbox to triage.

How It Works — The Big Picture

The workflow is a linear pipeline with one fan-out at the end. Gmail fires the trigger, two Set nodes shape the data, one HTTP Request calls OpenAI, and then both Slack and Google Sheets receive the result in parallel.

┌──────────────────────────────────────────────────────────────────────────┐
│  AUTO-SUMMARIZE GMAIL EMAILS WITH AI                                     │
│                                                                          │
│  [Gmail Trigger]                                                         │
│       │  new unread email detected                                       │
│       ▼                                                                  │
│  [Extract Email Fields]  ← pulls sender, subject, body text             │
│       │                                                                  │
│       ▼                                                                  │
│  [Summarize with OpenAI]  ← POST to GPT-4o-mini API                     │
│       │                                                                  │
│       ▼                                                                  │
│  [Prepare Notification Data]  ← merges summary + email metadata          │
│       │                                                                  │
│       ├──────────────────────────┐                                       │
│       ▼                          ▼                                       │
│  [Send Slack Notification]  [Log to Google Sheets]                       │
│   posts to #email-digest     appends row to "Email Log" sheet            │
└──────────────────────────────────────────────────────────────────────────┘

What You’ll Need

  • n8n — self-hosted or n8n Cloud (free tier works)
  • Gmail account — connected via Google OAuth2 in n8n
  • OpenAI account — API key from platform.openai.com (GPT-4o-mini costs fractions of a cent per email)
  • Slack workspace — with a channel like #email-digest and a Slack app connected in n8n
  • Google Sheets — a new spreadsheet with a tab named Email Log

Estimated build time: 35–50 minutes from scratch, or under 10 minutes with the ready-made template.

Building the Workflow — Step by Step

1 Gmail Trigger (gmailTrigger)

This is the entry point. The Gmail Trigger node polls your inbox on a schedule and fires whenever it finds a new unread email. Think of it as a quiet watchdog that checks your mailbox every 60 seconds.

How to configure it:

  1. In your n8n canvas, click + Add node and search for Gmail Trigger.
  2. Connect your Google account when prompted (you’ll be redirected to Google’s OAuth screen).
  3. Under Poll Times, set the interval to Every Minute — or adjust to every 5 minutes if you prefer less frequent checks.
  4. Under Filters → Read Status, choose Unread so already-read emails don’t get processed again.
  5. Leave Include Spam & Trash unchecked.

When this node fires, the output is a rich object containing the full email. Here’s what a sample output looks like:

{
  "id": "18e5a3b2c7d4e901",
  "subject": "Q2 Budget Review — Action Needed",
  "from": {
    "value": [{ "address": "sarah.thompson@acmecorp.com", "name": "Sarah Thompson" }]
  },
  "date": "2026-04-02T14:23:11.000Z",
  "text": "Hi James,\n\nI've attached the updated Q2 budget spreadsheet for your review..."
}

💡 Tip: If you only want to process emails from specific senders or with certain subjects, add a Label filter. Create a Gmail filter that auto-labels those emails, then reference that label ID here. This keeps your workflow laser-focused and avoids processing newsletters or automated notifications.

2 Extract Email Fields (Set)

The raw Gmail output has a lot of nested data. This Set node flattens it into clean, named fields that the rest of the workflow can reference easily.

How to configure it:

  1. Add a Set node after Gmail Trigger and set mode to Manual Mapping.
  2. Add the following fields using the expression editor:
  • sender: ={{ $json.from.value[0].address }} (sender’s email address)
  • senderName: ={{ $json.from.value[0].name || $json.from.value[0].address }} (display name, falls back to the email address)
  • subject: ={{ $json.subject }} (email subject line)
  • bodyText: ={{ ($json.text || '').substring(0, 4000) }} (plain text body, capped at 4,000 chars)
  • receivedAt: ={{ $json.date }} (timestamp the email was received)
  • messageId: ={{ $json.id }} (Gmail’s unique message ID)
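For reference, here is what those Set-node expressions compute, written as plain JavaScript (a sketch only — the real node evaluates each ={{ … }} expression against the incoming item’s $json):

```javascript
// Plain-JS equivalent of the "Extract Email Fields" Set node.
function extractEmailFields(raw) {
  const first = raw.from.value[0];
  return {
    sender: first.address,
    senderName: first.name || first.address,   // fall back to the address
    subject: raw.subject,
    bodyText: (raw.text || '').substring(0, 4000), // cap to control API cost
    receivedAt: raw.date,
    messageId: raw.id,
  };
}
```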

💡 Tip: The substring(0, 4000) cap on bodyText is intentional. Capping at 4,000 characters keeps your API costs minimal while still capturing the meaningful content of any real email.

3 Summarize with OpenAI (HTTP Request)

This is where the magic happens. You’ll make a direct API call to OpenAI’s Chat Completions endpoint using an HTTP Request node — transparent, flexible, and easy to customize.

How to configure it:

  1. Add an HTTP Request node. Set Method to POST and URL to https://api.openai.com/v1/chat/completions.
  2. Under Authentication, choose Generic Credential Type → HTTP Header Auth. Create a credential with Name: Authorization and Value: Bearer sk-YOUR_OPENAI_API_KEY.
  3. Set Body Content Type to JSON and use this body:
{
  "model": "gpt-4o-mini",
  "messages": [
    {
      "role": "system",
      "content": "You are a concise email assistant. Summarize the following email in exactly 2-3 sentences."
    },
    {
      "role": "user",
      "content": "From: {{ $json.senderName }} <{{ $json.sender }}>\nSubject: {{ $json.subject }}\n\n{{ $json.bodyText }}"
    }
  ],
  "max_tokens": 200,
  "temperature": 0.2
}

📌 Cost note: GPT-4o-mini costs roughly $0.15 per million input tokens. A typical email summary costs about $0.00009 — less than a tenth of a cent. Processing 1,000 emails a month costs under $0.10 total.
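The arithmetic behind that estimate, sketched out below. The $0.15-per-million input price comes from this article; the $0.60-per-million output price is an assumption — check OpenAI’s current pricing page before relying on it:

```javascript
// Back-of-envelope cost estimate for GPT-4o-mini summarization.
function estimateCostUSD(inputTokens, outputTokens) {
  const INPUT_PRICE = 0.15 / 1e6;   // USD per input token (from the article)
  const OUTPUT_PRICE = 0.60 / 1e6;  // USD per output token (assumed rate)
  return inputTokens * INPUT_PRICE + outputTokens * OUTPUT_PRICE;
}

// A typical run: ~300 input tokens (system prompt + email), ~60 output.
const perEmail = estimateCostUSD(300, 60);   // ≈ $0.00008
const perThousand = perEmail * 1000;         // ≈ $0.08 for 1,000 emails
```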

4 Prepare Notification Data (Set)

After the OpenAI call, the email fields from Step 2 are no longer in scope. This second Set node reassembles everything — it grabs the AI summary and re-references the email metadata using n8n’s node-reference syntax.

  • summary: ={{ $json.choices[0].message.content }}
  • sender: ={{ $('Extract Email Fields').item.json.sender }}
  • senderName: ={{ $('Extract Email Fields').item.json.senderName }}
  • subject: ={{ $('Extract Email Fields').item.json.subject }}
  • receivedAt: ={{ $('Extract Email Fields').item.json.receivedAt }}
  • messageId: ={{ $('Extract Email Fields').item.json.messageId }}

5 Send Slack Notification (Slack)

This node posts the email summary to a Slack channel so your team sees it in real time. Connect your Slack workspace via OAuth, set the channel to #email-digest, and use this message text:

📧 *New Email Summary*
*From:* {{ $json.senderName }} <{{ $json.sender }}>
*Subject:* {{ $json.subject }}
*Received:* {{ $json.receivedAt }}

*AI Summary:*
{{ $json.summary }}

6 Log to Google Sheets (Google Sheets)

The final node appends a new row to your Email Log spreadsheet every time an email is processed — your permanent, searchable archive.

Set Operation to Append or Update Row, select your spreadsheet, set the sheet name to Email Log, and map these columns:

  • Received At: ={{ $json.receivedAt }}
  • Sender Name: ={{ $json.senderName }}
  • Sender Email: ={{ $json.sender }}
  • Subject: ={{ $json.subject }}
  • AI Summary: ={{ $json.summary }}
  • Message ID: ={{ $json.messageId }}

The Data Structure (Google Sheets)

Your Email Log sheet must have these exact column headers in row 1. Column names are case-sensitive.

| Column | Type | Example Value | Description |
| --- | --- | --- | --- |
| Received At | DateTime | 2026-04-02T14:23:11.000Z | ISO timestamp when email arrived |
| Sender Name | Text | Sarah Thompson | Display name from the From header |
| Sender Email | Text | sarah.thompson@acmecorp.com | Sender’s email address |
| Subject | Text | Q2 Budget Review — Action Needed | Email subject line |
| AI Summary | Long Text | Sarah Thompson from Acme Corp sent… | 2–3 sentence AI-generated summary |
| Message ID | Text | 18e5a3b2c7d4e901 | Gmail’s unique internal message ID |

📌 Note: Column header names in Google Sheets are case-sensitive. If there’s a mismatch, data will go into a new column instead of the right one. Double-check spelling before your first test run.
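A quick way to catch that gotcha before your first run: compare the sheet’s header row against the exact names this workflow maps. This helper is illustrative, not part of the n8n template:

```javascript
// Case-sensitive sanity check for the Email Log header row.
const EXPECTED_HEADERS = [
  'Received At', 'Sender Name', 'Sender Email',
  'Subject', 'AI Summary', 'Message ID',
];

function missingHeaders(actualRow1) {
  // Exact match only — "received at" would NOT satisfy "Received At".
  return EXPECTED_HEADERS.filter(h => !actualRow1.includes(h));
}
```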

Full System Flow

┌─────────────────────────────────────────────────────────────────────────────────┐
│  FULL SYSTEM FLOW — Gmail AI Summarizer                                         │
│                                                                                 │
│  Gmail Inbox                                                                    │
│     │  (new unread email arrives)                                               │
│     ▼                                                                           │
│  [Gmail Trigger]  ──polls every 60s──►  raw email object (id, from, subject,   │
│                                          date, text)                            │
│     │                                                                           │
│     ▼                                                                           │
│  [Extract Email Fields]  ─────────────► { sender, senderName, subject,         │
│                                           bodyText (≤4000 chars), receivedAt,  │
│                                           messageId }                           │
│     │                                                                           │
│     ▼                                                                           │
│  [Summarize with OpenAI]                                                        │
│     POST https://api.openai.com/v1/chat/completions                             │
│     model: gpt-4o-mini  ──────────────► { choices[0].message.content: "..." }  │
│     │                                                                           │
│     ▼                                                                           │
│  [Prepare Notification Data]  ────────► { summary, sender, senderName,         │
│                                           subject, receivedAt, messageId }      │
│     │                                                                           │
│     ├───────────────────────────────────────────────┐                          │
│     ▼                                               ▼                           │
│  [Send Slack Notification]               [Log to Google Sheets]                 │
│   POST to #email-digest                   APPEND row to "Email Log" tab         │
│                                                                                 │
│  ✅ Done — email summarized, team notified, and permanently archived            │
└─────────────────────────────────────────────────────────────────────────────────┘

Testing Your Workflow

  1. Send a test email to yourself from another account. Use a realistic subject and write 3–4 sentences of body text.
  2. In the n8n canvas, click the Gmail Trigger node and press Fetch Test Event. Your test email should appear as the sample data.
  3. Click Execute from here to run the rest of the workflow with that email.
  4. Check your #email-digest Slack channel — the summary should appear within seconds.
  5. Open your Google Sheets Email Log — a new row should be appended with all six columns filled.
  6. Once confirmed, click Save then toggle the Active switch to start the live workflow.

Troubleshooting

| Problem | Likely Cause | Fix |
| --- | --- | --- |
| Gmail Trigger finds no emails | No unread emails, or wrong label filter | Send a fresh test email, then click “Fetch Test Event” again |
| OpenAI returns 401 error | API key incorrect or missing “Bearer ” prefix | Regenerate the key and ensure the value starts with “Bearer sk-…” |
| Slack message not appearing | Bot not added to the channel | Type /invite @YourBotName in the #email-digest channel |
| Google Sheets row goes to wrong column | Column header name mismatch | Compare sheet headers letter-for-letter with n8n field names |
| Same email processed multiple times | Email stays unread after processing | Add a Gmail node at the end to mark the email as read |

Frequently Asked Questions

Will this workflow read emails I’ve already seen?

Only if they’re still marked as unread. The Gmail Trigger is filtered to unread emails, so anything you’ve already opened and read won’t be reprocessed. If you want to back-process a batch of old emails, temporarily change the filter to “All Mail” for a one-time run, then switch back.

What if an email has no plain text body — only HTML?

The workflow uses $json.text (plain text) by default. If your email client sends HTML-only, update the bodyText expression to ={{ $json.text || $json.textHtml?.replace(/<[^>]+>/g, '') || '' }} — this strips HTML tags as a fallback.
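Here is that fallback as standalone JavaScript, so you can see what the expression does. The regex strip is crude (it ignores HTML entities and keeps <style> contents), but it is usually good enough as input for a summary:

```javascript
// Prefer the plain-text body; fall back to tag-stripped HTML.
function extractBody(email) {
  if (email.text) return email.text;
  if (email.textHtml) return email.textHtml.replace(/<[^>]+>/g, '');
  return '';
}
```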

Can I filter which emails get summarized — for example, only from my boss?

Yes. Create a Gmail filter that auto-labels emails from specific senders, then restrict the Gmail Trigger to only watch that label. Alternatively, add an IF node after “Extract Email Fields” and check if $json.sender matches a list of allowed addresses.
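The IF-node approach can be sketched like this (the addresses are placeholders; in n8n you would put the check inside an expression or a Code node):

```javascript
// Allow-list check for the "only summarize emails from these senders" filter.
const ALLOWED_SENDERS = ['boss@example.com', 'ceo@example.com'];

function shouldSummarize(sender) {
  // Normalize casing and whitespace before comparing.
  return ALLOWED_SENDERS.includes(String(sender || '').trim().toLowerCase());
}
```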

Is my email content sent to OpenAI? Is that safe?

Yes — the email body text is sent to OpenAI’s API. OpenAI’s API does not use your data to train models by default, per their API data usage policy. If your emails contain highly sensitive content, consider running an open-source LLM locally via n8n’s Ollama integration instead.

Can I use a different AI model instead of GPT-4o-mini?

Absolutely. To use GPT-4o, just change "model": "gpt-4o-mini" to "model": "gpt-4o" in the HTTP Request body — expect roughly 10× the cost. For a free option, replace the HTTP Request node with an Ollama node pointed at a local Llama 3 instance.

What happens if the workflow runs while n8n is offline?

n8n stores the last-polled email timestamp internally. When it comes back online, it will catch up on any emails that arrived while it was down. For production use on self-hosted n8n, consider setting up PM2 to auto-restart n8n if it crashes.

🚀 Get the Gmail AI Summarizer Template

Skip the setup and get the pre-built workflow JSON, a step-by-step Setup Guide PDF, and a Credentials Guide that walks you through connecting Gmail, OpenAI, Slack, and Google Sheets — all for less than the cost of a cup of coffee.

Get the Template →

Instant download · Works on n8n Cloud and self-hosted · $14.99

What’s Next?

  • Auto-reply to routine emails: Add a second AI call that drafts a reply, then use Gmail’s “Create Draft” action so you can review and send with one click.
  • Priority scoring: Add a second OpenAI call that rates each email 1–5 for urgency. Route high-priority emails to a dedicated #urgent-emails Slack channel.
  • Weekly digest email: Pair this workflow with a scheduled trigger that reads your Google Sheets log every Friday and emails a formatted weekly summary to your team.
  • Notion integration: Replace or supplement the Google Sheets node with a Notion node to create a card in your “Inbox” database for each processed email.
n8n
Gmail
OpenAI
Slack
Google Sheets
AI automation
email automation