How to Auto-Extract Invoice Data from PDF with AI Using n8n

Every month, the same grind: open an email, download the PDF, squint at the invoice number, manually type the amount into a spreadsheet, and pray you didn’t transpose a digit. If your business processes more than a handful of invoices per week, that manual routine is silently eating hours you could spend on work that actually matters. What if every PDF invoice that landed in your inbox was automatically read, parsed, and filed into a clean database — without you touching it once?

That’s exactly what you’ll build in this guide. Using n8n, OpenAI’s GPT-4o mini, and Airtable, you’ll create a workflow that watches your Gmail inbox for invoice emails, extracts text from attached PDFs, sends that text to AI for structured parsing, and logs every field — vendor name, invoice number, amounts, line items, due dates — into an Airtable base. The whole thing runs on autopilot every 30 minutes.

Prefer to skip the setup? Grab the ready-made template → and be up and running in under 10 minutes.

What You’ll Build

  1. A schedule trigger checks your Gmail inbox every 30 minutes for unread emails with PDF attachments matching invoice-related subjects.
  2. The workflow downloads the PDF, extracts raw text, and sends it to OpenAI with a carefully tuned prompt that returns structured JSON — vendor name, invoice number, dates, amounts, line items, and payment status.
  3. A Code node validates and formats the AI output, then pushes a clean record into your Airtable base with 13 fields covering every detail you’d normally type by hand.
  4. The source email is marked as read so it won’t be processed twice.

How It Works — The Big Picture

The workflow runs as a single linear pipeline with one conditional branch. Emails without PDF attachments are silently skipped, while PDFs flow through extraction, AI parsing, and database logging.

┌──────────────────────────────────────────────────────────────────────────┐
│  INVOICE DATA EXTRACTOR — AI POWERED                                     │
│                                                                          │
│  [Schedule]  →  [Gmail: Fetch]  →  [IF: Has PDF?]                       │
│                                         │ Yes          │ No              │
│                                         ↓              ↓                │
│                                  [Extract PDF]    [Skip]                │
│                                         ↓                               │
│                                  [OpenAI: Parse]                        │
│                                         ↓                               │
│                                  [Code: Format]                         │
│                                         ↓                               │
│                                  [Airtable: Save]                       │
│                                         ↓                               │
│                                  [Gmail: Mark Read]                     │
└──────────────────────────────────────────────────────────────────────────┘

What You’ll Need

  • n8n — self-hosted or cloud (version 1.0 or higher)
  • Gmail account — with OAuth2 credentials configured in n8n
  • OpenAI API key — with access to the gpt-4o-mini model (or gpt-4o for higher accuracy on complex invoices)
  • Airtable account — free tier works fine; you’ll need one base with a table matching the schema below

Estimated build time: 30–45 minutes from scratch, or under 10 minutes with the template.

Building the Workflow

1 Check Inbox Every 30 Minutes (Schedule Trigger)

This node kicks off the workflow on a regular interval. Open n8n, create a new workflow, and add a Schedule Trigger node. Set it to run every 30 minutes. You can adjust this later — every 15 minutes if you process high volumes, or once per hour if invoices trickle in slowly.

💡

Tip: During testing, switch this to “manual” execution so you can trigger runs on demand without waiting for the timer.

2 Fetch Unread Invoice Emails (Gmail)

Add a Gmail node and set the operation to Get Many. This node searches your inbox for unread emails that have PDF attachments and match invoice-related subject lines. Configure it as follows:

  • Limit: 10 (processes up to 10 emails per run)
  • Search query: has:attachment filename:pdf is:unread subject:(invoice OR receipt OR bill)
  • Download Attachments: Enabled, with prefix attachment_

The search query is the key here — it filters for only unread emails containing a PDF file whose subject mentions “invoice,” “receipt,” or “bill.” You can customize these keywords to match the vocabulary your vendors use.

{
  "id": "18e3a5b7c9d2f100",
  "subject": "Invoice #INV-2026-0847 from Acme Supplies",
  "from": { "emailAddress": "billing@acmesupplies.com" },
  "attachment_0": {
    "fileName": "INV-2026-0847.pdf",
    "mimeType": "application/pdf"
  }
}
💡

Tip: If you receive invoices from specific vendors, tighten the query with from:billing@vendor.com to avoid picking up random PDFs like flight confirmations.

3 Has PDF Attachment? (IF Node)

Add an IF node to verify that the attachment is actually a PDF. Check that attachment_0.mimeType equals application/pdf. Emails that pass go to the extraction step. Emails that fail (the attachment is an image, spreadsheet, or other non-PDF file) are routed to a No Operation node and silently skipped.

📌

This guard prevents errors downstream. Some emails match the search query but attach Excel files or images instead of PDFs — the IF node filters those out cleanly.

4 Extract Text from PDF (Extract From File)

Add an Extract From File node. Set the operation to PDF and the binary property name to attachment_0 (matching the prefix you set in the Gmail node). This node converts the PDF binary data into raw text that the AI can read.

After this node runs, the output looks like this:

{
  "data": "INVOICE\nAcme Supplies Inc.\n742 Evergreen Terrace\nSpringfield, IL 62704\n\nBill To: Carter & Associates LLC\nInvoice #: INV-2026-0847\nDate: March 28, 2026\nDue Date: April 27, 2026\n\nDescription          Qty    Price     Amount\nConsulting Services    40   $150.00   $6,000.00\nSoftware License        1   $499.00     $499.00\n\nSubtotal: $6,499.00\nTax (8.25%): $536.17\nTotal: $7,035.17\n\nPayment Terms: Net 30"
}
💡

Tip: The Extract From File node works well with text-based PDFs. If your invoices are scanned images (no selectable text), you’ll need to add an OCR step before this node — tools like Google Cloud Vision or AWS Textract can handle that.

5 Parse Invoice with AI (OpenAI)

This is where the magic happens. Add an OpenAI node configured for the Chat resource. Set the model to gpt-4o-mini (fast, cheap, and surprisingly accurate for structured extraction) and the temperature to 0.1 (low randomness = consistent output).

The prompt instructs the AI to return a strict JSON object with 12 fields covering every piece of data you’d want from an invoice. Here’s what the prompt asks for:

| Field | Type | Example |
|---|---|---|
| vendor_name | String | Acme Supplies Inc. |
| vendor_address | String | 742 Evergreen Terrace, Springfield, IL 62704 |
| invoice_number | String | INV-2026-0847 |
| invoice_date | Date | 2026-03-28 |
| due_date | Date | 2026-04-27 |
| subtotal | Number | 6499.00 |
| tax_amount | Number | 536.17 |
| total_amount | Number | 7035.17 |
| currency | String | USD |
| payment_status | String | unpaid |
| line_items | Array | [{description, qty, unit_price, amount}] |
| notes | String | Payment Terms: Net 30 |
💡

Tip: For invoices with complex layouts or non-English text, switch the model to gpt-4o. It costs a bit more per call but handles edge cases much better. At typical invoice lengths (under 1,000 tokens), even gpt-4o costs less than a penny per invoice.
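The template ships with the full prompt, but a minimal sketch might look like the following. The wording below is illustrative, not the template's actual prompt; the field names match the table above.

```javascript
// Illustrative prompt sketch — the template's actual prompt may differ.
const fields = [
  "vendor_name", "vendor_address", "invoice_number", "invoice_date",
  "due_date", "subtotal", "tax_amount", "total_amount",
  "currency", "payment_status", "line_items", "notes",
];

const prompt = `You are an invoice-parsing assistant.
Extract these fields from the invoice text below and return ONLY a valid JSON
object with exactly these keys: ${fields.join(", ")}.
Rules: use ISO dates (YYYY-MM-DD), plain numbers for amounts (no currency
symbols), and "unknown" for payment_status when it cannot be determined.

Invoice text:
{{ $json.data }}`;
```

In the OpenAI node itself you would paste the prompt text (with the {{ $json.data }} expression) directly into the message field — building it in code here is just for illustration.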

6 Format Extracted Data (Code Node)

The AI returns a JSON string inside its message content. This Code node does three things: extracts the JSON from the response (handling cases where the AI wraps it in markdown code blocks), validates each field with fallback defaults, and formats the line items array into a human-readable string for Airtable’s long text field.

The node also carries forward metadata from the original email — the subject line and sender address — so you always know where each invoice came from.

{
  "vendor_name": "Acme Supplies Inc.",
  "invoice_number": "INV-2026-0847",
  "invoice_date": "2026-03-28",
  "due_date": "2026-04-27",
  "subtotal": 6499.00,
  "tax_amount": 536.17,
  "total_amount": 7035.17,
  "currency": "USD",
  "payment_status": "unpaid",
  "line_items_text": "1. Consulting Services — Qty: 40 × $150 = $6000\n2. Software License — Qty: 1 × $499 = $499",
  "notes": "Payment Terms: Net 30",
  "source_email_from": "billing@acmesupplies.com",
  "processed_at": "2026-04-08T14:30:00.000Z"
}
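A sketch of Code-node logic that could produce that shape — this assumes the OpenAI node's reply sits in message.content and that the Gmail node is named "Fetch Unread Invoice Emails"; the template's actual script may differ:

```javascript
// Sketch of the formatting logic — field names match the output shown above.
function formatInvoice(aiContent, sourceFrom) {
  // 1. Strip markdown code fences the model sometimes wraps around its JSON
  const cleaned = aiContent.replace(/```(json)?/g, "").trim();

  let parsed;
  try {
    parsed = JSON.parse(cleaned);
  } catch (e) {
    throw new Error("AI returned malformed JSON: " + cleaned.slice(0, 200));
  }

  // 2. Validate each field with fallback defaults
  const num = (v) => (typeof v === "number" && Number.isFinite(v) ? v : 0);
  const str = (v) => (typeof v === "string" ? v : "");

  // 3. Format line items as one readable string for Airtable's long-text field
  const line_items_text = (parsed.line_items || [])
    .map((li, i) => `${i + 1}. ${li.description} — Qty: ${li.qty} × $${li.unit_price} = $${li.amount}`)
    .join("\n");

  return {
    vendor_name: str(parsed.vendor_name),
    invoice_number: str(parsed.invoice_number),
    invoice_date: str(parsed.invoice_date),
    due_date: str(parsed.due_date),
    subtotal: num(parsed.subtotal),
    tax_amount: num(parsed.tax_amount),
    total_amount: num(parsed.total_amount),
    currency: str(parsed.currency) || "USD",
    payment_status: str(parsed.payment_status) || "unknown",
    line_items_text,
    notes: str(parsed.notes),
    source_email_from: sourceFrom,
    processed_at: new Date().toISOString(),
  };
}

// Inside the n8n Code node, roughly:
// return [{ json: formatInvoice(
//   $input.first().json.message.content,
//   $('Fetch Unread Invoice Emails').item.json.from.emailAddress
// ) }];
```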

7 Save to Airtable

Add an Airtable node set to the Create operation. Point it at your base and table, then map each field from the Code node output to the corresponding Airtable column. The mapping uses n8n expressions like ={{ $json.vendor_name }} to pull values dynamically.

Every processed invoice becomes a new row in your Airtable base — fully structured, searchable, and ready for reporting.

📌

Make sure your Airtable column names match exactly — including capitalization and spaces. “Invoice Number” is not the same as “invoice_number” in Airtable.

8 Mark Email as Read (Gmail)

The final node marks the original email as read so it won’t be picked up again on the next run. It references the email ID from the “Fetch Unread Invoice Emails” node using the expression ={{ $('Fetch Unread Invoice Emails').item.json.id }}.

💡

Tip: Want extra safety? Add a Gmail “Add Label” step before this one to move processed emails into a “Processed Invoices” label. That gives you an audit trail even if something goes wrong with Airtable.

The Data Structure

Before activating the workflow, create an Airtable base called “Invoice Tracker” with a single table named “Invoices” using this schema:

| Column | Type | Example | Description |
|---|---|---|---|
| Vendor Name | Single line text | Acme Supplies Inc. | Company that issued the invoice |
| Invoice Number | Single line text | INV-2026-0847 | Unique invoice identifier |
| Invoice Date | Date | 2026-03-28 | Date the invoice was issued |
| Due Date | Date | 2026-04-27 | Payment deadline |
| Subtotal | Currency | $6,499.00 | Amount before tax |
| Tax | Currency | $536.17 | Tax amount |
| Total Amount | Currency | $7,035.17 | Final amount due |
| Currency | Single line text | USD | Currency code |
| Payment Status | Single select | unpaid | Options: paid, unpaid, unknown |
| Line Items | Long text | 1. Consulting — 40 × $150 | Formatted breakdown of charges |
| Notes | Long text | Net 30 | Payment terms, PO numbers, etc. |
| Source Email | Email | billing@acmesupplies.com | Sender of the original email |
| Processed At | Date | 2026-04-08 | Timestamp when the workflow ran |
📌

Column names must match the mapping in the Airtable node exactly. The easiest way: duplicate the table from the template’s included Airtable CSV, or create columns one by one following this schema.

Full System Flow

┌──────────────────────────────────────────────────────────────────────────────┐
│                                                                              │
│   GMAIL INBOX                          AIRTABLE                             │
│   ┌──────────────┐                     ┌──────────────────┐                 │
│   │ Unread email  │                     │ Invoice Tracker   │                │
│   │ + PDF attach  │                     │ ┌──────────────┐ │                │
│   └──────┬───────┘                     │ │ Vendor: Acme │ │                │
│          ↓                             │ │ #INV-0847    │ │                │
│   [Schedule: 30 min]                   │ │ $7,035.17    │ │                │
│          ↓                             │ └──────────────┘ │                │
│   [Gmail: Fetch unread]                 └────────▲─────────┘                │
│          ↓                                       │                          │
│   [IF: Has PDF?]──No──→ [Skip]                   │                          │
│          │ Yes                                   │                          │
│          ↓                                      │                          │
│   [Extract PDF Text]                            │                          │
│          ↓                                      │                          │
│   [OpenAI: Parse JSON]                          │                          │
│          ↓                                      │                          │
│   [Code: Format Data]──────────────────────────→┘                          │
│          ↓                                                                 │
│   [Gmail: Mark Read]                                                       │
│                                                                              │
└──────────────────────────────────────────────────────────────────────────────┘

Testing Your Workflow

  1. Send yourself a test email with a sample PDF invoice attached. Use a subject line like “Invoice #TEST-001 from Test Vendor” so it matches the search query.
  2. In n8n, click Test Workflow (the play button) to trigger a manual run.
  3. Watch the execution — each node should show a green checkmark. Click any node to inspect its output data.
  4. Check your Airtable base. You should see a new row with all 13 fields populated from the test invoice.
  5. Verify the original email is now marked as read in Gmail.

Troubleshooting

| Problem | Likely Cause | Fix |
|---|---|---|
| Gmail node returns 0 items | No unread emails match the search query | Send a test email with “invoice” in the subject and a PDF attached, then wait a moment for Gmail to index it |
| Extract From File returns empty text | The PDF is a scanned image, not text-based | Add an OCR step (Google Cloud Vision or AWS Textract) before the extraction node |
| OpenAI returns malformed JSON | Invoice text was too garbled or incomplete | Check the extracted text — if it’s unreadable, the source PDF may need OCR; also try switching to gpt-4o for better handling |
| Airtable node fails with “field not found” | Column name mismatch between n8n mapping and Airtable | Compare column names character by character — watch for extra spaces, capitalization differences, or missing columns |
| Same invoice processed twice | Mark as Read step failed or email wasn’t marked before the next scheduled run | Check that the Gmail credentials have write permissions; consider adding a label-based filter as a secondary guard |

Frequently Asked Questions

How much does each invoice cost to process with OpenAI?

Very little. A typical invoice is 300–800 tokens of input text. With GPT-4o mini at $0.15 per million input tokens, processing one invoice costs roughly $0.0001 — about a hundredth of a cent. Even at 1,000 invoices per month, your OpenAI bill would be under $0.50.
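As a quick sanity check on that arithmetic (the pricing figure is the one quoted above; verify current rates on OpenAI's pricing page):

```javascript
// GPT-4o mini input pricing as quoted above: $0.15 per 1M tokens.
const pricePerToken = 0.15 / 1_000_000;
const tokensPerInvoice = 800;           // upper end of a typical invoice
const costPerInvoice = tokensPerInvoice * pricePerToken;
const monthlyCost = 1000 * costPerInvoice;

console.log(costPerInvoice.toFixed(6)); // input-side cost per invoice, in dollars
console.log(monthlyCost.toFixed(2));    // input-side cost for 1,000 invoices/month
```

Output tokens cost extra, but the structured JSON reply is short, so the total stays well under the $0.50/month figure above.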

Does this work with scanned PDF invoices (images, not text)?

The Extract From File node handles text-based PDFs natively. For scanned invoices where the text is embedded in an image, you’ll need to add an OCR step before extraction. Google Cloud Vision, AWS Textract, or the n8n HTTP Request node calling an OCR API all work. The rest of the workflow stays the same.

Can I use Google Sheets instead of Airtable?

Yes. Replace the Airtable node with a Google Sheets “Append Row” node and map the same fields to spreadsheet columns. The rest of the workflow doesn’t need any changes. Airtable gives you richer field types (currency, single select, date pickers), but Sheets is perfectly functional.

What if I use Outlook instead of Gmail?

Swap the two Gmail nodes for Microsoft Outlook equivalents — one Microsoft Outlook node to fetch unread messages and another to mark them as read. The search query syntax differs slightly, but the concept is identical. Update the credential type in n8n accordingly.

Can the workflow handle invoices in languages other than English?

GPT-4o mini handles most major languages well. The prompt asks for specific JSON field names in English, but the AI will extract data from French, Spanish, German, and many other languages and map it into the English schema. For best results with non-Latin scripts, consider upgrading to GPT-4o.

What happens if two invoices arrive in the same email?

The current workflow processes the first PDF attachment per email. If a sender regularly attaches multiple PDFs to one email, you can modify the workflow to loop through all attachments by adding a “Split In Batches” node after the IF check and adjusting the binary property name dynamically.

Get the Invoice Data Extractor Template

Stop building from scratch — get the complete workflow JSON, Airtable schema, and step-by-step setup guides. Import it, add your credentials, and start processing invoices in under 10 minutes.

Get the Template →

Instant download · Works on n8n Cloud and self-hosted

What’s Next?

  • Add Slack notifications — send a message to your #accounting channel whenever a high-value invoice (over $5,000) is processed.
  • Auto-flag overdue invoices — add a second scheduled workflow that checks Airtable daily and flags rows where the due date has passed and payment status is still “unpaid.”
  • Build a payment approval flow — route invoices above a threshold to a manager via email for approval before marking them as “approved” in Airtable.
  • Connect to accounting software — push the structured data into QuickBooks, Xero, or FreshBooks via their APIs to eliminate double entry entirely.
n8n
PDF
OpenAI
Airtable
invoice processing
automation
AI
accounts payable

How to Build a Telegram Crypto Price Alert Bot with n8n

Missing a Bitcoin pump or getting wrecked by a sudden drop you didn’t see coming — that’s the hidden cost of watching charts manually. Most crypto traders either stare at their screen all day or miss the exact moment they needed to act. This tutorial shows you how to build an automated Telegram bot that watches your chosen coins every 15 minutes and pings you the instant prices break your personal alert thresholds — or when any coin moves more than 5% in 24 hours. No coding background required. Just n8n, a free CoinGecko API key, and a Telegram bot.

Prefer to skip the setup? Grab the ready-made template → and be up and running in under 10 minutes.

What You’ll Build

  1. A scheduled n8n workflow that fires every 15 minutes and fetches live prices for Bitcoin, Ethereum, Solana, BNB, and XRP from CoinGecko’s free API.
  2. A smart code node that checks each coin against your custom price thresholds (e.g., “alert me when BTC goes above $75,000 or below $55,000”) and flags any coin with a 24-hour move greater than 5%.
  3. A conditional gate that only fires when something actually matters — no noisy, irrelevant pings.
  4. A clean, formatted Telegram message delivered straight to your phone listing current prices plus a highlighted alert block showing exactly what triggered.

How It Works — The Big Picture

The workflow is a simple five-node pipeline. A schedule wakes it up every 15 minutes, a single HTTP call fetches all prices at once, a JavaScript code node crunches the numbers against your thresholds, and an IF gate decides whether your phone needs to know about it.

┌──────────────────────────────────────────────────────────────────────────┐
│  TELEGRAM CRYPTO PRICE ALERT BOT                                         │
│                                                                          │
│  [Schedule Trigger]  →  [Get Crypto Prices]  →  [Check Alert Thresholds] │
│     every 15 min          CoinGecko API           Code node (JS)         │
│                                                          │               │
│                                              ┌───────────┴───────────┐   │
│                                         hasAlerts?               no alert│
│                                              │                       │   │
│                                    [Send Alert to Telegram]   [Skip]     │
│                                       Telegram bot                       │
└──────────────────────────────────────────────────────────────────────────┘

What You’ll Need

  • n8n — self-hosted or n8n Cloud (free tier works fine for this workflow)
  • CoinGecko API — no account or API key required for the public endpoint used here (free rate limit: 10–30 calls/min, well within our 15-minute schedule)
  • Telegram account — you’ll create a bot via BotFather (free, takes 2 minutes)
  • Your Telegram chat ID — we’ll show you exactly how to find it below

Estimated build time: 25–35 minutes from scratch, or under 10 minutes with the ready-made template.

Part 1 — Building the Workflow

1 Schedule Trigger (n8n-nodes-base.scheduleTrigger)

This is the heartbeat of the workflow. It wakes up n8n every 15 minutes and kicks off the entire pipeline. Add a Schedule Trigger node and configure it like this:

  • Mode: Interval
  • Every: 15 Minutes

The trigger produces a single output item with a timestamp. That’s all the next node needs — it just needs to know it’s time to fetch prices.

💡

Tip: Prefer a 1-hour interval to reduce noise? Change the minutes value to 60. If you want minute-level granularity (great for active trading sessions), drop it to 5 — CoinGecko’s free tier handles this comfortably.

2 Get Crypto Prices (n8n-nodes-base.httpRequest)

Add an HTTP Request node. This single API call fetches prices and 24-hour change data for all five coins simultaneously — no need for separate calls per coin.

  • Method: GET
  • URL: https://api.coingecko.com/api/v3/simple/price
  • Query Parameters (add each as a separate row):
| Parameter | Value |
|---|---|
| ids | bitcoin,ethereum,solana,binancecoin,ripple |
| vs_currencies | usd |
| include_24hr_change | true |
| include_market_cap | true |

No authentication needed for this endpoint. The API returns a flat JSON object where each key is a coin ID:

{
  "bitcoin":     { "usd": 67543.21, "usd_24h_change": 2.45,  "usd_market_cap": 1330000000000 },
  "ethereum":    { "usd": 3421.56,  "usd_24h_change": -1.23, "usd_market_cap": 411000000000  },
  "solana":      { "usd": 178.92,   "usd_24h_change": 6.11,  "usd_market_cap": 83000000000   },
  "binancecoin": { "usd": 587.44,   "usd_24h_change": 0.87,  "usd_market_cap": 87000000000   },
  "ripple":      { "usd": 0.5821,   "usd_24h_change": -2.34, "usd_market_cap": 32000000000   }
}
💡

Tip: To add more coins, find their CoinGecko ID on the coin’s page (e.g., cardano, polkadot, avalanche-2) and append them comma-separated to the ids parameter. Also add matching entries to the thresholds and coinLabels objects in Step 3.

3 Check Alert Thresholds (n8n-nodes-base.code)

This is where the logic lives. Add a Code node (JavaScript mode) and paste the script below. It loops through every coin, checks prices against your thresholds, flags large 24-hour moves, and assembles a formatted Telegram message ready to send.

Here’s the config block at the top of the script — the only section you need to customize:

// ─── USER CONFIG ─────────────────────────────────────────────
const thresholds = {
  bitcoin:     { low: 55000,  high: 75000 },
  ethereum:    { low: 2800,   high: 4000  },
  solana:      { low: 120,    high: 220   },
  binancecoin: { low: 400,    high: 700   },
  ripple:      { low: 0.40,   high: 0.80  },
};

// Alert if any coin moves more than this % in 24 hours
const CHANGE_ALERT_PCT = 5;
// ─────────────────────────────────────────────────────────────

After running, the node outputs a single item with this shape:

{
  "hasAlerts":   true,
  "alertCount":  1,
  "message":     "📊 *Crypto Price Update — Apr 7, 02:30 PM ET*\n\n🟢 *BTC*: $67,543.21 (+2.45%)\n🔴 *ETH*: $3,421.56 (-1.23%)\n🟢 *SOL*: $178.92 (+6.11%)\n...\n\n🔔 *ALERTS*\n⚡ *Solana (SOL)* surged +6.11% in 24h — now $178.92",
  "timestamp":   "2026-04-07T18:30:00.000Z"
}
📌

Note: The full JavaScript code is included in the downloadable template JSON. When you import the workflow, the code node is pre-filled — you only need to update the threshold values in the USER CONFIG block at the top.
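The template's script is the source of truth, but its core loop might be sketched like this — assuming the CoinGecko response shape shown in Step 2, with coinLabels as a simple ID-to-ticker map (two coins shown for brevity):

```javascript
// Sketch of the threshold-check loop — the template's actual code may differ.
const thresholds = {
  bitcoin:  { low: 55000, high: 75000 },
  ethereum: { low: 2800,  high: 4000  },
};
const coinLabels = { bitcoin: "BTC", ethereum: "ETH" };
const CHANGE_ALERT_PCT = 5; // alert on any ±5% move in 24h

function checkAlerts(prices) {
  const alerts = [];
  const lines = [];
  for (const [id, t] of Object.entries(thresholds)) {
    const p = prices[id];
    if (!p) continue; // coin missing from the API response
    const label = coinLabels[id] || id;
    const usd = p.usd.toLocaleString("en-US");
    const chg = `${p.usd_24h_change >= 0 ? "+" : ""}${p.usd_24h_change.toFixed(2)}%`;
    lines.push(`${p.usd_24h_change >= 0 ? "🟢" : "🔴"} *${label}*: $${usd} (${chg})`);

    if (p.usd >= t.high) alerts.push(`🚀 *${label}* broke above $${t.high.toLocaleString("en-US")} — now $${usd}`);
    if (p.usd <= t.low)  alerts.push(`📉 *${label}* dropped below $${t.low.toLocaleString("en-US")} — now $${usd}`);
    if (Math.abs(p.usd_24h_change) >= CHANGE_ALERT_PCT)
      alerts.push(`⚡ *${label}* moved ${chg} in 24h — now $${usd}`);
  }
  const message = `📊 *Crypto Price Update*\n\n${lines.join("\n")}` +
    (alerts.length ? `\n\n🔔 *ALERTS*\n${alerts.join("\n")}` : "");
  return { hasAlerts: alerts.length > 0, alertCount: alerts.length, message };
}

// Inside the n8n Code node, roughly:
// return [{ json: checkAlerts($input.first().json) }];
```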

4 Has Alerts? (n8n-nodes-base.if)

Add an IF node to act as the gate. Configure one condition:

  • Value 1: {{ $json.hasAlerts }}
  • Operator: is true

The true branch connects to the Telegram node. The false branch connects to a No-Op (do nothing) node — this prevents n8n from throwing an “unconnected branch” warning while keeping the workflow clean.

💡

Tip: Want a periodic price summary even when no alerts fire? Connect the false branch to a second Telegram node that sends just the priceReport field instead of the full message. Great for a morning digest.

5 Send Alert to Telegram (n8n-nodes-base.telegram)

Add a Telegram node and connect it to the true output of the IF node. Before configuring it, you need a bot token and your chat ID.

Creating your Telegram bot:

  1. Open Telegram and search for @BotFather
  2. Send /newbot, give it a name (e.g., “My Crypto Alerts”) and a username ending in bot
  3. BotFather sends you a token like 7123456789:AAHxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxx — save it
  4. Start a conversation with your new bot (click “Start” or send any message)
  5. Visit https://api.telegram.org/bot<YOUR_TOKEN>/getUpdates in your browser — find your chat.id in the response (it’s a number like 123456789)

Configuring the Telegram node:

  • Credential: Create a new Telegram API credential → paste your bot token
  • Chat ID: YOUR_TELEGRAM_CHAT_ID (the number you found above)
  • Text: ={{ $json.message }}
  • Additional Fields → Parse Mode: Markdown

When an alert fires, the message delivered to your phone looks like this:

📊 Crypto Price Update — Apr 7, 02:30 PM ET

🟢 BTC: $67,543.21 (+2.45%)
🔴 ETH: $3,421.56 (-1.23%)
🟢 SOL: $178.92 (+6.11%)
🟢 BNB: $587.44 (+0.87%)
🔴 XRP: $0.58 (-2.34%)

🔔 ALERTS
⚡ Solana (SOL) surged +6.11% in 24h — now $178.92
💡

Tip: To send alerts to a Telegram group or channel instead of just yourself, add the bot as an admin to your group, then use the group’s chat ID (starts with a minus sign, e.g., -1001234567890) as the Chat ID value.

Full System Flow

Every 15 minutes
      │
      ▼
[Schedule Trigger]
      │
      ▼
[HTTP Request → CoinGecko API]
  GET /simple/price
  ids=bitcoin,ethereum,solana,binancecoin,ripple
  include_24hr_change=true
      │
      ▼ (raw price JSON)
[Check Alert Thresholds — Code Node]
  ├── Loop each coin
  ├── Compare price vs. low/high thresholds
  ├── Flag if |24h change| ≥ 5%
  ├── Build formatted Telegram message
  └── Output: { hasAlerts, message, ... }
      │
      ▼
[Has Alerts? — IF Node]
  hasAlerts === true?
      │                     │
     YES                   NO
      │                     │
      ▼                     ▼
[Send Alert to Telegram]  [No-Op — Skip]
  Parse Mode: Markdown
  Delivered to your phone

Testing Your Workflow

Before enabling the schedule, run the workflow manually once to confirm everything is wired up correctly:

  1. Open the workflow in the n8n editor and click Test Workflow (the play button).
  2. Click on the Get Crypto Prices node — you should see live price data in the output panel on the right.
  3. Click on Check Alert Thresholds — verify the message field contains a formatted price string and that hasAlerts shows the expected value.
  4. To force an alert for testing, temporarily set a threshold your current price has already crossed — e.g., set BTC high to 1000 so it always triggers. Run again, and your Telegram should receive a message within seconds. Reset the threshold afterward.
  5. Once confirmed, toggle the workflow Active switch. It will now run automatically every 15 minutes.

Troubleshooting

| Problem | Likely Cause | Fix |
|---|---|---|
| HTTP Request returns 429 | CoinGecko rate limit hit | Increase the schedule interval to 30 min, or sign up for a CoinGecko Demo API key and add it as a header |
| Telegram node says “Forbidden” | Bot hasn’t been started | Open Telegram, find your bot, and send it any message first — bots can’t initiate conversations |
| Message sends but formatting is broken | Parse Mode not set to Markdown | In the Telegram node → Additional Fields → set Parse Mode to Markdown |
| No alerts ever fire | Thresholds set outside realistic price ranges | Check the thresholds object — make sure low/high values bracket actual current prices |
| Workflow runs but nothing happens | IF node false branch reached (no alerts) | Expected behavior — set a temporary low threshold to trigger a test alert as described above |

Frequently Asked Questions

Does this cost anything to run?

No — CoinGecko’s public API used in this workflow is completely free with no API key required. The only costs are your n8n hosting (free on self-hosted, or the n8n Cloud free tier covers this comfortably) and your Telegram bot (always free). Total running cost: $0.

Can I track more than 5 coins?

Yes. Add more coin IDs to the ids query parameter in the HTTP Request node (comma-separated, using CoinGecko’s lowercase coin IDs like cardano, chainlink, or avalanche-2). Then add matching entries to the thresholds and coinLabels objects in the Code node. CoinGecko’s free tier supports up to ~250 coin IDs per request.

Can I send alerts to multiple Telegram chats or a group?

Absolutely. Duplicate the Telegram node and set a different Chat ID for each recipient — or add your bot to a Telegram group and use the group’s chat ID (a negative number like -1001234567890). You can find a group’s chat ID the same way as your personal one: visit the getUpdates URL after the bot receives a message in the group.

What happens if CoinGecko is down when the workflow runs?

The HTTP Request node will throw an error, which n8n logs in the execution history. The workflow won’t crash permanently — it simply skips that execution and runs again 15 minutes later. For higher reliability, enable the Retry on Fail option on the HTTP Request node (set to 2 retries, 5 second delay).

Can I get alerts for percentage-based price changes rather than fixed thresholds?

Yes — the Code node already handles this. The CHANGE_ALERT_PCT constant (default: 5) fires an alert whenever any coin moves more than 5% in 24 hours, in either direction. Adjust that number to any value you prefer.

Can I use this with a CoinGecko Pro API key for more features?

Yes. If you upgrade to CoinGecko’s Demo or Pro plan, you get a personal API key that unlocks higher rate limits and additional endpoints. Add it as an HTTP header named x-cg-demo-api-key in the HTTP Request node’s Headers section. The rest of the workflow stays identical.


🚀 Get the Telegram Crypto Price Alert Bot Template

Skip the build and get the pre-configured n8n workflow JSON, a step-by-step Setup Guide PDF, and a Credentials Guide PDF — everything you need to be live in under 10 minutes.

Get the Template →

Instant download · Works on n8n Cloud and self-hosted

What’s Next?

  • Add a daily digest: Connect the false branch of the IF node to a second Telegram send that fires a clean price summary every morning at 9 AM — regardless of alerts — using a separate Schedule Trigger.
  • Log alerts to Google Sheets: Add a Google Sheets node after the Telegram send to log every triggered alert with a timestamp. Track your alert history over time.
  • Add portfolio tracking: Extend the Code node to calculate the current USD value of your holdings (e.g., “1.5 BTC × $67,543 = $101,315”) and include it in the Telegram message.
  • Connect to a trading signal: Feed alert data into a webhook that triggers a paper trade or signals a human review — building toward a semi-automated trading assistant.
n8n
Telegram
CoinGecko
crypto
price alerts
automation
no-code
trading bots

How to Auto-Create HubSpot Contacts from Typeform Submissions with n8n

Every Typeform submission sitting in a spreadsheet instead of your CRM is a lead going cold. Someone just told you their name, email, company, and job title — and that data is gathering dust in a tab you’ll check “later.” Meanwhile, your sales team has no idea the lead exists. This workflow fixes that permanently. You’ll build an n8n automation that catches every Typeform submission the moment it arrives, maps the fields to HubSpot contact properties, checks whether the person already exists in your CRM, and either creates a new contact or updates the existing one — all without you lifting a finger.

Prefer to skip the setup? Grab the ready-made template → and be up and running in under 10 minutes.

What You’ll Build

  1. A visitor fills out your Typeform (contact form, demo request, lead magnet — whatever you use).
  2. n8n instantly receives the submission via webhook and extracts the relevant fields: email, first name, last name, company, phone, and job title.
  3. The workflow searches HubSpot to check if a contact with that email already exists.
  4. If the contact exists, their record gets updated with the latest information. If they’re new, a fresh contact is created in HubSpot with all the form data.
  5. Your sales team sees the new or updated contact in HubSpot immediately — no manual entry, no lag, no missed leads.

How It Works — The Big Picture

The entire workflow runs as a single chain with one decision branch. Here’s the full flow at a glance:

┌──────────────────────────────────────────────────────────────────────────────┐
│  AUTO-CREATE HUBSPOT CONTACTS FROM TYPEFORM                                 │
│                                                                              │
│  [Typeform Trigger] → [Map Fields] → [Check If Email Exists] → [Contact?]  │
│                                                          ↓ Yes      ↓ No    │
│                                              [Update Contact]  [Create New] │
└──────────────────────────────────────────────────────────────────────────────┘

What You’ll Need

  • n8n instance — self-hosted or n8n Cloud (any plan)
  • Typeform account — free tier works, but you need at least one published form
  • HubSpot account — free CRM tier is enough; you’ll need an OAuth2 app or private app token
  • 5 minutes if you use the template, or about 25–30 minutes building from scratch

Estimated build time: 25–30 minutes from scratch, or under 10 minutes with the template.

Building the Workflow — Step by Step

1 Typeform Trigger (Typeform Trigger Node)

This node listens for new Typeform submissions in real time. Every time someone hits “Submit” on your form, n8n receives the full response payload instantly.

  1. In n8n, add a new node and search for Typeform Trigger.
  2. Connect your Typeform credential (API token — see the Credentials Guide for how to get one).
  3. In the Form dropdown, select the form you want to connect.
  4. Click Listen for Test Event, then go to your Typeform and submit a test response.

After the test submission, you’ll see the raw data from Typeform:

{
  "What is your email address?": "james.carter@gmail.com",
  "What is your first name?": "James",
  "What is your last name?": "Carter",
  "What company do you work for?": "Ridgeline Analytics",
  "What is your phone number?": "(555) 867-5309",
  "What is your job title?": "VP of Marketing",
  "submittedAt": "2026-04-07T14:23:00.000Z"
}
💡

Tip: Typeform uses the full question text as the field key. If you rename a question later, update the field references in the Map Fields node too.

2 Map Fields (Set Node)

Typeform’s field names are long question strings. This Set node transforms them into clean field names that match HubSpot’s default properties.

| Output Field | Expression | Purpose |
|---|---|---|
| email | {{ $json['What is your email address?'] }} | Primary identifier for HubSpot |
| firstname | {{ $json['What is your first name?'] }} | Contact first name |
| lastname | {{ $json['What is your last name?'] }} | Contact last name |
| company | {{ $json['What company do you work for?'] }} | Company association |
| phone | {{ $json['What is your phone number?'] }} | Phone number |
| jobtitle | {{ $json['What is your job title?'] }} | Job title property |
| lead_source | Typeform (static) | Tracks lead origin |
💡

Tip: If your Typeform uses different question text, update the expressions to match. The output field names should stay the same — they match HubSpot’s defaults.
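
If you’d rather use a Code node than a Set node, the same mapping can be sketched in JavaScript. This is a hypothetical equivalent (the function name is ours); the output keys match HubSpot’s default property names from the table above:

```javascript
// Hypothetical Code-node equivalent of the Map Fields Set node.
// Maps Typeform's question-text keys to clean HubSpot property names,
// falling back to empty strings when a question was skipped.
function mapFields(answers) {
  return {
    email: answers['What is your email address?'] || '',
    firstname: answers['What is your first name?'] || '',
    lastname: answers['What is your last name?'] || '',
    company: answers['What company do you work for?'] || '',
    phone: answers['What is your phone number?'] || '',
    jobtitle: answers['What is your job title?'] || '',
    lead_source: 'Typeform', // static value tracking lead origin
  };
}

// In an n8n Code node you would return: [{ json: mapFields($json) }]
```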

3 Check If Email Exists (HubSpot Node — Search)

Before creating a contact, this node searches HubSpot by email address to check for duplicates.

  1. Add a HubSpot node. Set Resource to Contact, Operation to Search.
  2. Add a filter: Property = email, Operator = Equal, Value = {{ $json.email }}

4 Contact Exists? (IF Node)

Routes the flow: if {{ $json.total }} is greater than 0 → update the existing contact. Otherwise → create a new one.

📌

The IF node’s output 0 (true) goes to Update, output 1 (false) goes to Create. Don’t mix them up.

5 Update Existing Contact (HubSpot Node — Update)

Connected to the true branch. Uses {{ $('Check If Email Exists').item.json.results[0].id }} as the Contact ID, and pulls field values from the Map Fields node.

6 Create New Contact (HubSpot Node — Create)

Connected to the false branch. Sets the email and all additional fields from the Map Fields node to create a fresh HubSpot contact.
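
Taken together, steps 3 through 6 implement a simple upsert. Here’s a hypothetical sketch of the routing decision, given the search output shape from step 3 (total count plus a results array):

```javascript
// Hypothetical sketch of the Contact Exists? routing (step 4).
// `searchResult` mirrors the HubSpot Search output: { total, results: [...] }.
function routeContact(searchResult) {
  if (searchResult.total > 0) {
    // True branch: update the existing contact by its HubSpot record ID.
    return { action: 'update', contactId: searchResult.results[0].id };
  }
  // False branch: create a brand-new contact from the mapped fields.
  return { action: 'create', contactId: null };
}
```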

Full System Flow

┌────────────────────────────────────────────────────────────────────────────────────────┐
│  VISITOR SUBMITS TYPEFORM                                                              │
│       ↓                                                                                │
│  [Typeform Trigger] ─── receives webhook payload ──→ [Map Fields]                     │
│                                                          ↓                             │
│                                               [Check If Email Exists]                  │
│                                                          ↓                             │
│                                                  [Contact Exists?]                     │
│                                                    ↓ YES    ↓ NO                       │
│                                           [Update Contact] [Create Contact]            │
└────────────────────────────────────────────────────────────────────────────────────────┘

Testing Your Workflow

  1. Activate the workflow in n8n by toggling the Active switch.
  2. Submit a test form response in Typeform using a test email not in HubSpot yet.
  3. Check HubSpot — you should see a new contact with all fields populated.
  4. Submit again with the same email but different company name.
  5. Check HubSpot again — the contact should show the updated company.
| Problem | Likely Cause | Fix |
|---|---|---|
| Workflow never triggers | Webhook not registered | Deactivate and reactivate the workflow. Verify the webhook in Typeform settings. |
| HubSpot 401 error | Expired OAuth token | Re-authorize your HubSpot credential. Check that scopes include contacts read+write. |
| Empty fields in HubSpot | Question text mismatch | Compare Typeform question keys with Map Fields expressions exactly. |
| Duplicate contacts | IF node misconfigured | Verify the condition checks $json.total > 0. True → Update, False → Create. |

Frequently Asked Questions

Can I use this with HubSpot’s free CRM?

Yes. The free CRM includes full contact management and API access. Default fields like email, name, company, and phone are all available on the free plan.

What if the submission is missing some fields?

The workflow still runs — HubSpot leaves empty fields blank. It won’t overwrite existing data with empty values on updates.

Can I auto-add contacts to a HubSpot list or deal?

Yes — add a HubSpot node after Create New Contact set to “Add Contact to List” or “Create Deal”. The template covers core contact creation; lists and deals are easy extensions.

Does this work with Typeform’s free plan?

Yes, with the 10 responses/month limit. Webhooks work on all Typeform plans including free.

What if I use Google Forms or Tally instead?

Swap the Typeform Trigger for a Google Forms Trigger or Webhook node. Update the Map Fields expressions for the new payload. The HubSpot side stays identical.

Will this slow down my Typeform?

No. Typeform sends the webhook after submission is complete. The respondent sees the Thank You screen immediately. n8n processes asynchronously.

Get the HubSpot + Typeform Template

Skip the 30-minute build. Get the complete workflow JSON, setup guide, and credentials walkthrough — import into n8n and be live in under 10 minutes.

Get the Template →

Instant download · Works on n8n Cloud and self-hosted

What’s Next?

  • Auto-assign leads to sales reps — round-robin assignment based on territory or workload.
  • Send a Slack notification — ping #new-leads when a contact is created.
  • Enroll in a HubSpot sequence — automated welcome email within minutes.
  • Score leads with AI — use OpenAI to evaluate company and job title before CRM entry.
n8n
HubSpot
Typeform
CRM
lead capture
automation
no-code

How to Monitor RSS Feeds and Post AI Summaries to Discord with n8n

Your team is drowning in newsletters and blog feeds — important articles get missed and nobody has time to read everything. What if an AI bot automatically scanned your favorite RSS feeds every hour, wrote a crisp two-sentence summary of each new article, and posted it straight to your Discord channel? That’s exactly what you’ll build here: a fully automated n8n workflow that monitors RSS feeds, uses OpenAI to generate smart summaries, and delivers them to Discord — with a Google Sheet keeping track so nothing gets posted twice.

Prefer to skip the setup? Grab the ready-made template → and be up and running in under 10 minutes.

What You’ll Build

  1. A scheduled n8n workflow that wakes up every hour and fetches the latest items from any RSS feed you choose.
  2. A deduplication layer backed by Google Sheets so the same article is never posted twice, even across restarts.
  3. An OpenAI call that reads each article’s title and snippet and writes a punchy 2–3 sentence summary tailored for a Discord audience.
  4. An automatic Discord post that drops the summary, the article title, and the link into your chosen channel — formatted and ready to read.
  5. A log entry written back to Google Sheets so the system stays in sync across all future runs.

How It Works — The Big Picture

The workflow runs on a timer, pulls from your RSS feed, filters out anything already posted, summarizes the new items with AI, and delivers them to Discord — all without you touching a thing.

┌──────────────────────────────────────────────────────────────────┐
│  RSS FEED AI SUMMARIZER TO DISCORD                               │
│                                                                  │
│  [Schedule Trigger]                                              │
│       │  (every 1 hour)                                          │
│       ▼                                                          │
│  [RSS Feed Read]                                                 │
│       │  (fetch all items from feed)                             │
│       ▼                                                          │
│  [Get Tracked URLs] ◄── Google Sheets (Tracked tab)             │
│       │  (read previously posted URLs)                           │
│       ▼                                                          │
│  [Filter New Articles]                                           │
│       │  (Code node: remove duplicates, cap at 5 items)          │
│       ▼                                                          │
│  [Summarize with OpenAI]                                         │
│       │  (HTTP Request → OpenAI Chat Completions API)            │
│       ▼                                                          │
│  [Build Discord Message]                                         │
│       │  (Set node: format title + summary + link)               │
│       ▼                                                          │
│  [Post to Discord]                                               │
│       │  (HTTP Request → Discord Webhook URL)                    │
│       ▼                                                          │
│  [Log URL to Sheet]                                              │
│       (Append row: url, title, summary, posted_at)               │
└──────────────────────────────────────────────────────────────────┘

What You’ll Need

  • n8n — self-hosted (v1.0+) or n8n Cloud (any paid plan)
  • An RSS feed URL — any public feed (TechCrunch, Hacker News, a podcast, a company blog…)
  • An OpenAI account — API key from platform.openai.com; GPT-4o-mini usage is very cheap (roughly $0.003 per summary)
  • A Google account — for the tracking spreadsheet (Google Sheets OAuth2)
  • A Discord server — with permission to create a Webhook URL in the target channel

Estimated build time: 35–50 minutes from scratch, or under 10 minutes with the ready-made template.

Part 1 — The Workflow, Node by Node

1 Schedule Trigger (Schedule Trigger)

This is the engine that starts the whole thing. It fires the workflow automatically on a fixed interval so you never have to run it manually. Set Trigger Interval to Hours and the value to 1. The workflow will now run every hour.

💡

Tip: For high-volume feeds (like Hacker News), consider 30 minutes. For low-volume feeds (weekly newsletters), daily is plenty.

2 RSS Feed Read (RSS Feed Read)

Fetches the RSS feed and returns all available items as individual records — typically 10–25 per feed. Paste your RSS feed URL into the URL field. For example, TechCrunch’s feed is https://techcrunch.com/feed/.

{
  "title": "Anthropic Raises $4B Series E at $18B Valuation",
  "link": "https://techcrunch.com/2026/04/05/anthropic-series-e/",
  "pubDate": "Sat, 05 Apr 2026 14:22:00 +0000",
  "contentSnippet": "Anthropic, the AI safety company behind Claude, has raised $4 billion..."
}

3 Get Tracked URLs (Google Sheets)

Before processing anything, the workflow reads a Google Sheet to see which article URLs have already been posted to Discord. Connect your Google Sheets OAuth2 credential, set Document ID to your spreadsheet ID, sheet name to Tracked, and operation to Read.

📌

On the very first run, your sheet will be empty — that’s fine. The filter node handles an empty result gracefully and all items will be treated as new.

4 Filter New Articles (Code)

This JavaScript node cross-references the RSS items against the tracked URLs and returns only articles that haven’t been posted yet. It also caps the output at 5 items per run to prevent Discord channel flooding. Set the node to Run Once for All Items mode.

// Run Once for All Items mode: pull every RSS item and every tracked row.
const rssItems = $('RSS Feed Read').all();
const trackedItems = $('Get Tracked URLs').all();

// Build a Set of already-posted URLs (the sheet column may be `url` or `URL`).
const trackedUrls = new Set(
  trackedItems.map(item => (item.json.url || item.json.URL || '').trim()).filter(Boolean)
);

// Keep only items whose link hasn't been posted before.
const newItems = rssItems.filter(item => {
  const url = (item.json.link || item.json.url || '').trim();
  return url && !trackedUrls.has(url);
});

// Cap output at 5 items per run to avoid flooding the Discord channel.
return newItems.slice(0, 5).map(item => ({ json: item.json }));

5 Summarize with OpenAI (HTTP Request)

Sends each new article’s title and snippet to the OpenAI API and gets back a tight 2–3 sentence summary. Set method to POST, URL to https://api.openai.com/v1/chat/completions. Set authentication to HTTP Header Auth with header Authorization: Bearer YOUR_API_KEY.

{
  "model": "gpt-4o-mini",
  "messages": [
    {"role": "system", "content": "Summarize this article in 2-3 sentences for a Discord audience."},
    {"role": "user", "content": "Title: {{ $json.title }}\nContent: {{ $json.contentSnippet }}"}
  ],
  "max_tokens": 200
}
💡

Tip: GPT-4o-mini costs roughly $0.003 per summary — less than $0.10/day for continuous hourly operation.

6 Build Discord Message (Set)

This node assembles the final Discord message by combining the article title, AI summary, link, and timestamp. Add an assignment named discord_message:

📰 **{{ $('Filter New Articles').item.json.title }}**

{{ $json.choices[0].message.content }}

🔗 Read more: {{ $('Filter New Articles').item.json.link }}

_{{ $now.toFormat('MMMM d, yyyy · h:mm a') }} UTC_

7 Post to Discord (HTTP Request)

Sends the formatted message to your Discord channel using a Webhook URL. To create a webhook: go to your Discord server → right-click the channel → Edit ChannelIntegrationsWebhooksNew Webhook. Copy the URL and paste it as the node’s URL. Set method to POST with this JSON body:

{
  "content": "{{ $json.discord_message }}",
  "username": "RSS News Bot"
}
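
If the summary contains quotes or line breaks, hand-writing this JSON body can produce invalid JSON. As a sanity check, here is a minimal sketch (hypothetical helper name, Node 18+ assumed) of building the same payload safely with JSON.stringify:

```javascript
// Hypothetical helper: builds the Discord webhook payload as a JSON string.
// JSON.stringify escapes quotes and newlines, so multi-line summaries stay valid JSON.
function buildDiscordPayload(message) {
  return JSON.stringify({ content: message, username: 'RSS News Bot' });
}

// Usage sketch (Node 18+ built-in fetch; the webhook URL is a placeholder):
// await fetch(WEBHOOK_URL, {
//   method: 'POST',
//   headers: { 'Content-Type': 'application/json' },
//   body: buildDiscordPayload(discordMessage),
// });
```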

8 Log URL to Sheet (Google Sheets)

After posting, this node appends a row to the Google Sheet with the article URL, title, AI summary, and timestamp. Set operation to Append, use the same spreadsheet and Tracked sheet, and map columns: url, title, summary, posted_at.

The Data Structure

The Google Sheet is the workflow’s memory. Create a sheet named Tracked with these column headers in row 1:

| Column | Type | Example | Description |
|---|---|---|---|
| url | Text | https://techcrunch.com/2026/04/05/article/ | The canonical URL — this is the deduplication key |
| title | Text | Anthropic Raises $4B Series E | Article title for human review |
| summary | Text | Anthropic has secured $4B… | The AI-generated summary that was posted |
| posted_at | Text | 2026-04-05 14:22:00 | UTC timestamp of when it was posted |
📌

The column name url must be lowercase. If you rename it, update the Code node’s lookup logic to match.

Full System Flow

Schedule Trigger (every hour)
       │
       ▼
RSS Feed Read → [10-25 items: title, link, contentSnippet]
       │
       ▼
Get Tracked URLs (Google Sheets READ) → [list of previously posted URLs]
       │
       ▼
Filter New Articles (Code) → [0-5 new items only]
       │
       ▼  (for each new item)
Summarize with OpenAI → [2-3 sentence AI summary]
       │
       ▼
Build Discord Message → [formatted text with title + summary + link]
       │
       ▼
Post to Discord (Webhook) → [message appears in channel]
       │
       ▼
Log URL to Sheet (Google Sheets APPEND) → [new row: url|title|summary|time]
       │
       ↑ cycle repeats next hour

Testing Your Workflow

  1. Before activating, run the workflow manually using the Test workflow button in n8n’s top toolbar.
  2. After the RSS Feed Read executes, click it and confirm you see article items in the output panel.
  3. On the first run your sheet is empty — Filter New Articles should pass through up to 5 items.
  4. Check your Discord channel — formatted news messages should appear within seconds.
  5. Check your Google Sheet — the Tracked tab should have rows for each posted article.
  6. Run the workflow a second time manually. Filter should return 0 items — deduplication is working.
| Problem | Likely Cause | Fix |
|---|---|---|
| No items from RSS Feed Read | Invalid RSS URL or feed is down | Test the URL in a browser — it should return XML |
| All items filtered on first run | Sheet already has URLs from a previous test | Clear the data rows in your Tracked sheet and re-run |
| OpenAI returns 401 | API key missing or incorrectly configured | Check the Authorization header starts with “Bearer ” (with a space) |
| Discord message not appearing | Webhook URL invalid or deleted | Regenerate the webhook in Discord and update the URL in the node |
| Duplicate posts in Discord | Column name mismatch in Google Sheet | Ensure the sheet header is exactly url (lowercase) |

Frequently Asked Questions

Can I monitor multiple RSS feeds at once?

Yes — the simplest way is to add multiple RSS Feed Read nodes and use a Merge node to combine their outputs before the Filter node. The Google Sheet deduplicates across all feeds, since article URLs are globally unique.

Will it post to multiple Discord channels?

Add additional HTTP Request nodes after the Post to Discord node, each pointing to a different webhook URL. You can add an IF node before them to route different feed categories to different channels.

What happens if OpenAI is down or returns an error?

Enable Continue on Fail in the OpenAI node’s settings — the workflow will skip that item and continue. You can also set up an Error Trigger workflow in n8n to notify you via Telegram or email when something fails.

How do I change the AI summary style or tone?

Edit the system prompt inside the Summarize with OpenAI node. Change it to “summarize in one sentence with a market angle” or “write as a witty tech commentator” — the AI follows prompt instructions reliably for this task.

Will this work on n8n Cloud, or only self-hosted?

It works on both. The workflow uses only standard HTTP Requests and core n8n nodes — no special configuration is needed for cloud vs. self-hosted.

How much will the OpenAI API cost per month?

Running every hour on GPT-4o-mini at roughly $0.003 per summary: a typical feed producing a few dozen new articles a day costs under $0.10/day, or under $3/month. Even at the 5-articles-per-run cap (120 summaries/day), that works out to about $0.36/day.

🚀 Get the RSS Feed AI Summarizer Template

Skip the build — get the complete workflow JSON, a step-by-step Setup Guide, and a Credentials Guide covering every API key. Works on n8n Cloud and self-hosted.

Get the Template →

Instant download · Works on n8n Cloud and self-hosted

What’s Next?

  • Add Slack support: Duplicate the Discord posting step and route summaries to a Slack channel.
  • Filter by keyword: Add an IF node to only process articles containing keywords like “AI”, “funding”, or your company name.
  • Weekly digest: Change the Schedule Trigger to run weekly and bundle all new items into a single long-form Discord post.
  • Add sentiment scoring: Extend the OpenAI prompt to return a sentiment score, then route only positive articles to your channel.
n8n
RSS
Discord
OpenAI
Google Sheets
automation
news bot
AI summarization

How to Auto-Process Stripe Payments and Log to Google Sheets with n8n

Every Stripe payment you receive can be automatically logged to Google Sheets in real time — no manual exports, no missed transactions, no more downloading CSVs at the end of the month. With this n8n workflow, the moment a payment succeeds in Stripe, a new row appears in your spreadsheet with the customer name, email, amount, description, and payment method already filled in. If the payment is above a threshold you set, you’ll get an email alert too.

Prefer to skip the setup? Grab the ready-made template → and be up and running in under 10 minutes.

What You’ll Build

  1. A Stripe webhook listener that fires the moment a payment succeeds — no polling, no delays.
  2. A data formatting step that converts Stripe’s raw event data (amounts in cents, Unix timestamps) into clean, human-readable fields.
  3. An automatic Google Sheets append that logs every payment as a new row in a structured ledger.
  4. A conditional check that sends you an email notification for any payment over $500 so you never miss a big order.

How It Works — The Big Picture

n8n listens for Stripe’s payment_intent.succeeded webhook event. Each time a payment completes, the workflow wakes up, reformats the data into tidy fields, writes a row to your Google Sheet, then checks whether the amount qualifies as high-value — sending an alert email if so.

+----------------------------------------------------------------+
|  AUTO-PROCESS STRIPE PAYMENTS to GOOGLE SHEETS                 |
|                                                                |
|  [Stripe Trigger]                                              |
|        |  payment_intent.succeeded fires                       |
|        v                                                       |
|  [Format Payment Data]                                         |
|        |  cents-to-dollars, Unix-to-date, map fields           |
|        v                                                       |
|  [Log to Google Sheets]                                        |
|        |  append row: ID, date, name, email, amount            |
|        v                                                       |
|  [Check High Value Payment]                                    |
|        | amount >= $500?                                       |
|       YES -------------------------------------------+         |
|        |                                             v         |
|       NO (end)                        [Send High Value Alert]  |
|                                          email notification    |
+----------------------------------------------------------------+
  

What You’ll Need

  • An n8n instance — cloud or self-hosted (n8n.io; free tier works)
  • A Stripe account with at least one product or payment link set up
  • A Google account with Google Sheets access (free)
  • An SMTP account for alert emails — Gmail App Password, SendGrid, or any SMTP provider (optional but recommended)
  • A Google Sheet with the correct column headers (details in the Data Structure section below)

Estimated build time: 35–50 minutes from scratch, or under 10 minutes with the template.

Building the Workflow — Step by Step

1 Stripe Trigger (stripeTrigger)

This node creates a webhook endpoint in n8n that Stripe calls whenever a payment event fires. It’s the entry point for the entire workflow — everything else is downstream of this single trigger.

To configure it:

  1. In n8n, add a new node and search for Stripe Trigger.
  2. Connect your Stripe credential. If you don’t have one yet, click Create new, paste your Stripe Restricted API key (with webhook read permissions), and save.
  3. Under Events, select payment_intent.succeeded. This fires every time a payment is fully authorized and captured.
  4. Save the workflow. n8n will display a webhook URL — copy it.
  5. In your Stripe dashboard, go to Developers → Webhooks → Add endpoint, paste the n8n URL, and select the payment_intent.succeeded event.
💡

Tip: Use Stripe’s Send test webhook button right after adding the endpoint to confirm n8n receives it. You’ll see the raw payload in n8n’s execution log — this is what the next node will process.

A typical Stripe payment_intent.succeeded payload looks like this:

{
  "id": "pi_3QxT4kABCDEFGHIJ12345678",
  "object": "payment_intent",
  "amount": 4999,
  "currency": "usd",
  "status": "succeeded",
  "description": "Order #1042 — Premium Plan",
  "receipt_email": "emily.rodriguez@gmail.com",
  "billing_details": {
    "name": "Emily Rodriguez",
    "email": "emily.rodriguez@gmail.com"
  },
  "created": 1743865200,
  "payment_method_types": ["card"]
}

Notice that amount is in cents (4999 = $49.99) and created is a Unix timestamp. The next node handles the conversion.
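
The two conversions flagged here can be sketched in plain JavaScript. These are hypothetical helper names that mirror the expressions the Format node uses:

```javascript
// Hypothetical helpers mirroring the Format Payment Data expressions.

// Stripe amounts are integer cents; divide by 100 and keep two decimals.
function centsToDollars(amountCents) {
  return (amountCents / 100).toFixed(2); // 4999 -> "49.99"
}

// Stripe's `created` is a Unix timestamp in seconds; JS Date expects milliseconds.
function unixToDateString(createdSeconds) {
  return new Date(createdSeconds * 1000).toLocaleDateString('en-US');
}
```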

2 Format Payment Data (Set)

The raw Stripe payload is useful, but messy. This Set node maps the data into clean, consistently named fields that Google Sheets can receive directly — no JavaScript required inside the sheet.

Add a Set node connected to the Stripe Trigger, then add these fields in Manual mapping mode:

Each entry below lists the field name, its n8n expression, and an example result:

  • payment_id: ={{ $json.id }} → pi_3QxT4kABC…
  • customer_name: ={{ $json.billing_details?.name || 'Unknown' }} → Emily Rodriguez
  • customer_email: ={{ $json.receipt_email || 'N/A' }} → emily.rodriguez@gmail.com
  • amount_usd: ={{ ($json.amount / 100).toFixed(2) }} → 49.99
  • currency: ={{ $json.currency.toUpperCase() }} → USD
  • description: ={{ $json.description || 'Payment' }} → Order #1042 — Premium Plan
  • status: ={{ $json.status }} → succeeded
  • payment_date: ={{ new Date($json.created * 1000).toLocaleDateString('en-US') }} → 4/5/2026
  • payment_method: ={{ $json.payment_method_types?.[0] || 'card' }} → card
💡

Tip: The ?.name optional chaining prevents the workflow from erroring when billing_details is null — which happens with some payment types like bank transfers. The || 'Unknown' fallback ensures your sheet always gets a value.

After this node, data looks like this:

{
  "payment_id": "pi_3QxT4kABCDEFGHIJ12345678",
  "customer_name": "Emily Rodriguez",
  "customer_email": "emily.rodriguez@gmail.com",
  "amount_usd": "49.99",
  "currency": "USD",
  "description": "Order #1042 — Premium Plan",
  "status": "succeeded",
  "payment_date": "4/5/2026",
  "payment_method": "card"
}

3 Log to Google Sheets (googleSheets)

This node appends a new row to your Payments sheet every time a payment is processed. It uses the Append or Update operation — if a payment ID already exists (e.g., a duplicate webhook retry), it updates the existing row instead of creating a duplicate.

  1. Add a Google Sheets node and connect your Google OAuth2 credential.
  2. Set Operation to Append or Update.
  3. Paste your Spreadsheet ID (the long string in your Google Sheet’s URL).
  4. Set Sheet Name to Payments.
  5. Under Columns, switch to Define below and map each column header to its expression from the previous Set node (e.g., Payment ID={{ $json.payment_id }}).
  6. Set Matching Columns to Payment ID so duplicate webhook deliveries update the existing row instead of creating a new one.
📌

Note: The column headers in your Google Sheet must match exactly — including capitalization and spacing. If you name the column “payment id” (lowercase) but the node sends “Payment ID”, the data will land in the wrong column or create a new one.

4 Check High Value Payment (IF)

After logging, the workflow checks whether the payment amount is $500 or more. Payments that clear this threshold go down the true branch to trigger an email alert; all others exit quietly.

  1. Add an IF node connected to the Google Sheets node.
  2. Set the Value 1 expression to ={{ parseFloat($('Format Payment Data').item.json.amount_usd) }}.
  3. Set Operation to Greater than or equal to.
  4. Set Value 2 to 500.
💡

Tip: You can change the threshold to any amount that makes sense for your business. A SaaS charging $29/month might alert at $200; an agency billing clients might only care about payments above $2,000.
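
In plain JavaScript, the check amounts to this (a hypothetical sketch; the threshold constant is yours to tune):

```javascript
// Hypothetical sketch of the Check High Value Payment logic.
const THRESHOLD_USD = 500;

function isHighValue(amountUsd) {
  // amount_usd arrives as a string like "1250.00" from the Format node,
  // so parse it to a number before comparing.
  return parseFloat(amountUsd) >= THRESHOLD_USD;
}
```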

5 Send High Value Alert (emailSend)

When a payment exceeds the threshold, this node fires off an email with all the relevant details so you can follow up personally, flag the account, or just celebrate a big win.

  1. Add a Send Email node to the true output of the IF node.
  2. Connect your SMTP credential (Gmail App Password works great; see the Credentials Guide).
  3. Set From to your sender address and To to your notification email.
  4. Set Subject to: =High-Value Payment: ${{ $('Format Payment Data').item.json.amount_usd }} from {{ $('Format Payment Data').item.json.customer_name }}
  5. Add a plain-text or HTML body with all the payment fields from the Format node.

A sample alert email body:

High-Value Payment Received

Customer: James Carter
Email: james.carter@gmail.com
Amount: $1,250.00
Description: Order #2089 — Enterprise Annual License
Date: 4/5/2026
Payment ID: pi_3QxT4kABCDEFGHIJ12345678

The Data Structure

Your Google Sheet needs a tab named Payments with these column headers in row 1 (exact capitalization matters):

| Column Header | Type | Example | Description |
|---|---|---|---|
| Payment ID | Text | pi_3QxT4kABC… | Stripe’s unique payment intent ID — used to deduplicate retried webhooks |
| Date | Text | 4/5/2026 | US-formatted payment date derived from Stripe’s Unix timestamp |
| Customer Name | Text | Emily Rodriguez | Billing name from Stripe; falls back to “Unknown” if not provided |
| Customer Email | Text | emily.rodriguez@gmail.com | Receipt email address; falls back to “N/A” if absent |
| Amount (USD) | Number | 49.99 | Payment amount in dollars (converted from cents) |
| Currency | Text | USD | ISO currency code, uppercased |
| Description | Text | Order #1042 — Premium Plan | Stripe payment description or product name |
| Status | Text | succeeded | Payment status from Stripe (always “succeeded” for this trigger) |
| Payment Method | Text | card | Payment method type (card, bank_transfer, etc.) |

Here’s what a few rows look like once data starts flowing:

| Payment ID | Date | Customer Name | Amount (USD) | Status |
|---|---|---|---|---|
| pi_3QxT4k… | 4/5/2026 | Emily Rodriguez | 49.99 | succeeded |
| pi_3QxT7m… | 4/5/2026 | James Carter | 1250.00 | succeeded |
| pi_3QxV2n… | 4/5/2026 | Sarah Thompson | 29.00 | succeeded |
📌

Create the sheet and add the column headers before activating the workflow. n8n’s Google Sheets node expects the headers to already exist in row 1 — it doesn’t create them automatically.

Full System Flow

  Customer Pays via Stripe
          |
          v
  Stripe fires payment_intent.succeeded
          |
          v
  +----------------------------+
  |   n8n Stripe Trigger       |  <-- webhook endpoint registered in Stripe dashboard
  +------------+---------------+
               | raw Stripe payload (cents, Unix timestamps)
               v
  +----------------------------+
  |  Format Payment Data       |  <-- Set node: converts & maps fields
  +------------+---------------+
               | clean fields: name, email, $amount, date
               v
  +----------------------------+
  |  Log to Google Sheets      |  <-- appends or updates row by Payment ID
  +------------+---------------+
               | row written OK
               v
  +----------------------------+
  | Check High Value Payment   |  <-- IF: amount_usd >= 500?
  +------+-------------+-------+
        YES            NO
         |              |
         v             end
  +-----------------+
  | Send Alert Email|  <-- SMTP: notify you of big payment
  +-----------------+
  

Testing Your Workflow

Before going live, test with Stripe’s built-in test mode:

  1. In Stripe dashboard, switch to Test mode (toggle in the top-left).
  2. Go to Developers → Webhooks → select your endpoint → click Send test webhook → choose payment_intent.succeeded.
  3. Open n8n’s execution log — you should see the workflow trigger and all 5 nodes complete successfully.
  4. Check your Google Sheet — a new row should appear with the test payment data.
  5. To test the high-value alert, temporarily lower the threshold to $1 in the IF node, trigger another test, and check your inbox.
Troubleshooting

Problem | Likely Cause | Fix
Workflow doesn’t trigger at all | Webhook URL not registered in Stripe, or n8n not publicly reachable | Check Stripe → Webhooks; for self-hosted n8n, ensure your instance has a public URL (not localhost)
Google Sheets shows data in wrong columns | Column header mismatch | Check that headers in row 1 of your sheet exactly match the field names in the Set node (case-sensitive)
Amount appears as “NaN” or empty | $json.amount is undefined for some event types | Confirm the trigger is set to payment_intent.succeeded, not charge.succeeded
Duplicate rows in Google Sheets | Stripe retries the webhook if n8n doesn’t return 200 fast enough | Ensure Matching Columns is set to “Payment ID” in the Google Sheets node
High-value alert email not arriving | SMTP credential invalid, or spam filter | Test the SMTP connection in n8n; check the spam folder; use an App Password for Gmail

Frequently Asked Questions

Does this work with Stripe Checkout, Payment Links, and direct API charges?

Yes — the payment_intent.succeeded event fires for all of them. Whether the payment comes from a Stripe Checkout session, a Payment Link, a subscription renewal, or a manual API charge, a PaymentIntent is always created under the hood and this workflow will catch it.

What happens if Stripe sends the same webhook twice (retry)?

Stripe retries failed webhooks for up to 3 days. Because the Google Sheets node is configured with Append or Update and Payment ID as the matching column, a duplicate event will update the existing row instead of creating a second one. Your ledger stays clean.
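That dedup behavior can be simulated in a few lines, with a Map standing in for the sheet and the Payment ID as the match key (a sketch of the idea, not the Google Sheets node's actual implementation):

```javascript
const sheet = new Map();  // stands in for the Payments tab, keyed by Payment ID

// "Append or Update": overwrite the row if the key exists, append otherwise
function upsertRow(row) {
  sheet.set(row['Payment ID'], row);
}

upsertRow({ 'Payment ID': 'pi_123', 'Amount (USD)': 49.99 });
upsertRow({ 'Payment ID': 'pi_123', 'Amount (USD)': 49.99 });  // webhook retry
```

After both calls the map holds a single row: the retry overwrote the original instead of adding a duplicate.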

Can I log refunds or failed payments too?

Absolutely. In the Stripe Trigger node, add additional events: charge.refunded for refunds and payment_intent.payment_failed for failures. You can add an IF or Switch node after the trigger to route each event type to a different sheet tab.

Can I use this with n8n Cloud, or does it need to be self-hosted?

Either works. n8n Cloud instances have a public URL by default, so webhook registration in Stripe is straightforward. Self-hosted instances need a public URL too — if yours is behind a router, use a service like Cloudflare Tunnel or ngrok to expose it. Once the URL is registered, behavior is identical.

Can I send the alert to Slack instead of email?

Yes — just swap the Send Email node for a Slack node set to Post Message. Connect your Slack OAuth credential, pick your alert channel, and use the same expressions to build the message text. You can even have both: add both a Send Email and a Slack node to the true branch of the IF node.

What if a customer’s name or email is missing from the Stripe event?

The Set node uses JavaScript optional chaining (?.name) and fallback values (|| 'Unknown' and || 'N/A'). This means even if Stripe sends an event with no billing details, the workflow will still complete and log the row — just with placeholder values instead of blank cells.
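A minimal sketch of that fallback pattern, assuming a simplified charge-like object (real Stripe field paths may differ from these):

```javascript
// Hypothetical sparse event: Stripe sent no billing details or receipt email
const sparseCharge = { billing_details: null, receipt_email: null };

// Optional chaining avoids a crash on the null object;
// || supplies the placeholder when the value is missing
const customerName = sparseCharge.billing_details?.name || 'Unknown';
const customerEmail = sparseCharge.receipt_email || 'N/A';
```

The row still gets written, just with `Unknown` and `N/A` in place of blanks.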

🚀 Get the Stripe to Google Sheets Template

Skip the setup and get the complete n8n workflow JSON, a pre-configured Google Sheet template, a step-by-step Setup Guide PDF, and a Credentials Guide PDF — everything you need to go live in minutes.

Get the Template →

Instant download · Works on n8n Cloud and self-hosted

What’s Next?

  • Add a Slack notification for every payment (not just high-value) — great for small teams who want a #payments channel in real time.
  • Create a monthly revenue summary by adding a scheduled workflow that queries your Google Sheet, calculates MRR, and emails you a report on the 1st of each month.
  • Sync to a CRM — add a HubSpot or Airtable node after the Google Sheets step to create or update a deal every time a payment lands.
  • Handle subscriptions separately — listen for invoice.payment_succeeded to capture recurring Stripe subscription payments on their own sheet tab with subscriber lifetime value tracking.
n8n
Stripe
Google Sheets
payment automation
no-code
webhooks
automation

How to Build a Slack Knowledge Base Bot with AI and n8n

Every team has that one Slack channel — the one where the same five questions get asked every single week. “Where’s the onboarding doc?” “What’s the refund policy?” “How do I reset my API key?” Your senior engineers are answering these instead of shipping. Your support lead is copy-pasting the same response for the third time today. With this n8n workflow, you’ll build an AI-powered Slack bot that reads incoming questions, searches your company knowledge base using semantic vector search, and posts precise answers back in the thread — automatically, in about 2 seconds.

Prefer to skip the setup? Grab the ready-made template → and be up and running in under 10 minutes.

What You’ll Build

  1. A Slack bot that listens for messages in any channel it’s invited to
  2. An OpenAI step that converts each incoming question into a 1,536-dimensional vector embedding
  3. A Pinecone semantic search that retrieves the most relevant chunks from your knowledge base
  4. A GPT-4o-mini step that reads the retrieved context and writes a clear, grounded answer
  5. An automatic thread reply in Slack so teammates get instant answers without leaving the channel

How It Works — The Big Picture

This workflow uses the RAG pattern — Retrieval-Augmented Generation. Instead of asking an AI to recall facts from its training data (which goes stale), you store your up-to-date company knowledge in Pinecone as vector embeddings. When a question arrives, the bot finds the semantically closest knowledge chunks and hands them to GPT-4o-mini as context. The result: factual, grounded answers drawn only from your approved content.

┌─────────────────────────────────────────────────────────────────────┐
│  SLACK KNOWLEDGE BASE BOT                                           │
│                                                                     │
│  [Slack Trigger]                                                    │
│        ↓                                                            │
│  [Filter Bot Messages] ──(bot message)──→ (stop — no loop)         │
│        ↓ (user message)                                             │
│  [Generate Question Embedding]  (OpenAI text-embedding-3-small)    │
│        ↓ 1,536-dim vector                                           │
│  [Query Pinecone]  (top-5 semantic matches from knowledge base)     │
│        ↓ matched chunks + scores                                    │
│  [Extract Context]  (filter score > 0.7, join top 3 chunks)        │
│        ↓ formatted context string                                   │
│  [Generate AI Answer]  (GPT-4o-mini + RAG prompt)                  │
│        ↓ natural language answer                                    │
│  [Post Answer to Slack]  (reply in the original thread)            │
└─────────────────────────────────────────────────────────────────────┘

What You’ll Need

  • n8n — self-hosted (v1.0+) or n8n Cloud
  • Slack app with Events API enabled and a Bot User Token (starts with xoxb-)
  • OpenAI account — API key with access to text-embedding-3-small and gpt-4o-mini
  • Pinecone account — free Starter plan is plenty; create an index with dimension 1536 and metric cosine
  • A pre-populated Pinecone namespace called knowledge-base (the Credentials Guide includes a Python ingestion script)

Estimated build time: 45–60 minutes from scratch, or under 10 minutes with the template.

Building the Bot — Step by Step

1 Slack Trigger

This node opens a webhook endpoint that Slack’s Events API calls every time a message is posted in a channel your bot belongs to. It’s the entry point for everything.

  1. Add a Slack Trigger node to your canvas.
  2. Select your Slack Bot Token credential (or create one — see the Credentials Guide).
  3. Set the Trigger to Message.
  4. Copy the Webhook URL n8n displays. Paste it into your Slack app’s Event Subscriptions → Request URL field.
  5. In Slack app settings, enable the message.channels and message.groups event scopes, then reinstall the app to your workspace.

A message event payload looks like this:

{
  "type": "message",
  "text": "What is our refund policy for annual subscriptions?",
  "user": "U04ABCDEF12",
  "channel": "C06XYZABC99",
  "ts": "1743784201.000100",
  "subtype": null
}
💡

Tip: Slack sends events for bot messages too — including the bot’s own replies. Without the next filter node, every answer the bot posts would re-trigger the workflow and create an infinite loop.

2 Filter Bot Messages (IF)

This IF node stops the workflow from processing bot messages. It’s a one-condition check that routes user messages forward and drops everything else.

  1. Add an IF node connected to the Slack Trigger output.
  2. Set Value 1 to ={{ $json.subtype }}.
  3. Condition: Is Not Equal To, with Value 2 set to bot_message.
  4. Connect the True output to Step 3. Leave False unconnected.
💡

Tip: You can add a second condition here to limit the bot to a specific channel — filter $json.channel equals C06XYZABC99. This is useful if you want the bot active only in #ask-the-bot and not everywhere.
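Written out as plain JavaScript, the IF node's routing (including the optional channel restriction from the tip) looks roughly like this; the channel ID is a placeholder:

```javascript
const ALLOWED_CHANNEL = 'C06XYZABC99';  // placeholder channel ID

function shouldProcess(event) {
  if (event.subtype === 'bot_message') return false;  // blocks the bot's own replies
  return event.channel === ALLOWED_CHANNEL;           // optional channel restriction
}
```

Drop the channel check if the bot should answer in every channel it has been invited to; the `bot_message` check is the one that prevents the infinite loop.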

3 Generate Question Embedding (HTTP Request → OpenAI)

This node calls the OpenAI Embeddings API and converts the user’s question into a 1,536-dimensional vector — a list of numbers that captures the semantic meaning of the sentence. Pinecone will use this to find similar content.

  1. Add an HTTP Request node.
  2. Method: POST | URL: https://api.openai.com/v1/embeddings
  3. Authentication: Generic Credential Type → HTTP Header Auth. Create a credential with Name = Authorization and Value = Bearer YOUR_OPENAI_API_KEY.
  4. Body Content Type: JSON. Paste this body:
{
  "input": "={{ $('Slack Trigger').item.json.text }}",
  "model": "text-embedding-3-small"
}

The response contains the embedding inside data[0].embedding:

{
  "object": "list",
  "data": [
    {
      "object": "embedding",
      "index": 0,
      "embedding": [0.0023, -0.0189, 0.0341, "...1,533 more values..."]
    }
  ],
  "model": "text-embedding-3-small",
  "usage": { "prompt_tokens": 11, "total_tokens": 11 }
}
💡

Tip: text-embedding-3-small costs $0.02 per million tokens. A team of 50 people asking 200 questions a day will spend about half a cent on embeddings. If you need higher search accuracy, switch to text-embedding-3-large (3,072 dimensions) — but update your Pinecone index dimension to match before doing so.

4 Query Pinecone (HTTP Request)

This node sends the question vector to Pinecone and gets back the five most semantically similar knowledge chunks, each scored between 0 (irrelevant) and 1 (identical).

  1. Add another HTTP Request node.
  2. Method: POST | URL: https://YOUR_PINECONE_INDEX_HOST/query (replace with your index host from the Pinecone console — it looks like my-index-abc123.svc.us-east-1.pinecone.io)
  3. Add a header: Name = Api-Key, Value = YOUR_PINECONE_API_KEY.
  4. Body Content Type: JSON. Use this expression as the body:
{
  "vector": "={{ $json.data[0].embedding }}",
  "topK": 5,
  "includeMetadata": true,
  "namespace": "knowledge-base"
}

Pinecone responds with the top matches and their stored metadata:

{
  "matches": [
    {
      "id": "doc-refund-annual-001",
      "score": 0.921,
      "metadata": {
        "text": "Annual subscriptions may be refunded within 30 days of purchase for a full refund. After 30 days, refunds are prorated based on remaining months.",
        "source": "help-center/billing",
        "last_updated": "2026-03-01"
      }
    },
    {
      "id": "doc-refund-annual-002",
      "score": 0.874,
      "metadata": {
        "text": "To request a refund, email billing@acme-corp.com with your order number and reason for cancellation. Refunds are processed within 5 business days.",
        "source": "help-center/billing",
        "last_updated": "2026-03-01"
      }
    }
  ],
  "namespace": "knowledge-base"
}
📌

Your Pinecone index must be pre-populated before the bot can answer anything. Each vector record needs a text field in its metadata. The Credentials Guide PDF bundled with the template includes a ready-to-run Python ingestion script that embeds and uploads your documents in minutes.

5 Extract Context (Code)

This JavaScript node processes the Pinecone results: filters low-confidence matches, takes the top 3 chunks, formats them into a numbered context string, and bundles the data for the next node.

const matches = $input.item.json.matches || [];
const slackData = $('Slack Trigger').item.json;

if (matches.length === 0) {
  return [{
    json: {
      context: 'No relevant information found in the knowledge base.',
      question: slackData.text,
      channel: slackData.channel,
      thread_ts: slackData.ts
    }
  }];
}

const context = matches
  .filter(m => m.score > 0.7)
  .slice(0, 3)
  .map((m, i) => `[${i + 1}] ${m.metadata.text}`)
  .join('\n\n');

return [{
  json: {
    context: context || 'No highly relevant information found.',
    question: slackData.text,
    channel: slackData.channel,
    thread_ts: slackData.ts
  }
}];

After this node, the data is clean and ready for the AI:

{
  "context": "[1] Annual subscriptions may be refunded within 30 days...\n\n[2] To request a refund, email billing@acme-corp.com...",
  "question": "What is our refund policy for annual subscriptions?",
  "channel": "C06XYZABC99",
  "thread_ts": "1743784201.000100"
}
💡

Tip: The 0.7 score threshold is a good starting point. If the bot returns off-topic answers, raise it to 0.8. If it says “no information found” for questions you know are in the knowledge base, lower it to 0.65 or check that your Pinecone namespace name matches exactly.

6 Generate AI Answer (HTTP Request → OpenAI Chat)

This node sends the retrieved context and the original question to GPT-4o-mini. The system prompt instructs the model to answer strictly from the provided context — no hallucinating facts that aren’t in your knowledge base.

  1. Method: POST | URL: https://api.openai.com/v1/chat/completions
  2. Reuse your OpenAI HTTP Header Auth credential.
  3. Body (JSON):
{
  "model": "gpt-4o-mini",
  "messages": [
    {
      "role": "system",
      "content": "You are a helpful company knowledge base assistant. Answer questions using ONLY the provided context. If the context does not contain the answer, say so clearly and suggest the user contact the team directly. Keep answers concise and actionable."
    },
    {
      "role": "user",
      "content": "Context:\n={{ $json.context }}\n\nQuestion: ={{ $json.question }}"
    }
  ],
  "max_tokens": 500,
  "temperature": 0.2
}
💡

Tip: temperature: 0.2 keeps answers factual and consistent. For a knowledge base bot you want determinism, not creativity. max_tokens: 500 keeps responses Slack-readable — roughly 3–5 paragraphs maximum.

7 Post Answer to Slack

The final node takes GPT-4o-mini’s answer and posts it as a thread reply — so the answer lives directly under the original question rather than flooding the main channel.

  1. Add a Slack node. Resource: Message, Operation: Post.
  2. Channel: ={{ $('Extract Context').item.json.channel }}
  3. Text: ={{ $json.choices[0].message.content }}
  4. Under Other Options, set Thread Timestamp to ={{ $('Extract Context').item.json.thread_ts }}
  5. Use the same Slack credential as the Slack Trigger.
💡

Tip: Want to brand the reply? Change the text field to: 🤖 *Knowledge Base Bot:* {{ $json.choices[0].message.content }}. The asterisks render as bold in Slack, making it clear this is an automated response.

The Data Flow

Here’s how a single question moves through all seven nodes, from Slack message to bot reply:

Stage | Data Present | Key Field
After Slack Trigger | Raw Slack event payload | text, channel, ts
After Filter | Same payload, confirmed user message | subtype is null
After OpenAI Embeddings | 1,536-number float array | data[0].embedding
After Pinecone Query | Top 5 knowledge chunks + similarity scores | matches[].score, matches[].metadata.text
After Extract Context | Formatted context + original question | context, question, thread_ts
After OpenAI Chat | Full ChatGPT response object | choices[0].message.content
Posted to Slack | Plain text answer in thread | Visible to all channel members instantly

Pinecone Knowledge Base Schema

Every vector record you upsert into Pinecone must follow this structure. The text metadata field is required — the Extract Context node reads it directly.

Field | Type | Example | Description
id | String | doc-refund-001 | Unique identifier for this knowledge chunk
values | Float[1536] | [0.023, -0.019, …] | Embedding from text-embedding-3-small
metadata.text | String | "Annual subscriptions are refunded within 30 days…" | The raw knowledge chunk — required
metadata.source | String | help-center/billing | Where this content came from (optional)
metadata.last_updated | String | 2026-03-01 | Last update date for freshness tracking (optional)
📌

Keep each knowledge chunk between 100 and 500 words. Too short and the chunk loses context; too long and the embedding gets diluted. One concept per chunk is a good rule of thumb — for example, one chunk for the refund policy, a separate chunk for the cancellation process.
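A minimal word-count chunker illustrating that guideline might look like the following. This is not the bundled ingestion script, just a sketch of the splitting step:

```javascript
// Split plain text into chunks of at most maxWords words each.
// 400 stays safely inside the 100-500 word guideline.
function chunkText(text, maxWords = 400) {
  const words = text.split(/\s+/).filter(Boolean);
  const chunks = [];
  for (let i = 0; i < words.length; i += maxWords) {
    chunks.push(words.slice(i, i + maxWords).join(' '));
  }
  return chunks;
}
```

A real ingestion pipeline would usually split on paragraph or heading boundaries first so each chunk stays one coherent concept, then fall back to a word cap like this for oversized sections.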

Full System Flow

User posts question in Slack
          │
          ▼
┌─────────────────────┐
│    Slack Trigger    │  Receives message.channels event via webhook
└─────────────────────┘
          │
          ▼
┌─────────────────────┐
│ Filter Bot Messages │──── subtype = "bot_message"? ──→ STOP
└─────────────────────┘
          │ user message passes
          ▼
┌──────────────────────────┐
│ Generate Question        │  POST https://api.openai.com/v1/embeddings
│ Embedding (OpenAI)       │  model: text-embedding-3-small → 1,536-dim vector
└──────────────────────────┘
          │
          ▼
┌──────────────────────────┐
│ Query Pinecone           │  POST {index-host}/query
│ (vector search)          │  topK=5 · namespace: knowledge-base
└──────────────────────────┘
          │ top matches with metadata + scores
          ▼
┌──────────────────────────┐
│ Extract Context          │  Filter: score > 0.7
│ (Code node)              │  Join top 3 chunks into context string
└──────────────────────────┘
          │
          ▼
┌──────────────────────────┐
│ Generate AI Answer       │  POST https://api.openai.com/v1/chat/completions
│ (GPT-4o-mini)            │  RAG prompt · temperature: 0.2 · max_tokens: 500
└──────────────────────────┘
          │ grounded answer text
          ▼
┌──────────────────────────┐
│ Post Answer to Slack     │  Thread reply on original message
└──────────────────────────┘
          │
          ▼
Team member sees the answer in 2–3 seconds ✓

Testing Your Workflow

  1. Make sure the workflow is toggled to Active in n8n.
  2. Invite your bot to a test channel: type /invite @YourBotName.
  3. Post a question you know is covered in your knowledge base — for example: “What’s the refund policy for annual plans?”
  4. Within 2–3 seconds, a thread reply should appear with a grounded answer drawn from your Pinecone content.
  5. Post a question that’s definitely not in your knowledge base. The bot should say it doesn’t have that information and suggest contacting the team.
  6. Check the n8n Execution log to confirm all 7 nodes completed with green checkmarks and no errors.
Troubleshooting

Problem | Likely Cause | Fix
Bot doesn’t respond at all | Workflow not Active, or Slack webhook URL mismatch | Toggle the workflow to Active; verify the webhook URL in Slack app settings matches n8n exactly
Bot replies to its own messages | Filter Bot Messages node misconfigured | Check that the True output of the IF node connects to Step 3; the False output should be unconnected
“No relevant information found” for everything | Pinecone index empty or wrong namespace | Run the ingestion script from the Credentials Guide; confirm the namespace is exactly knowledge-base
Off-topic or wrong answers | Score threshold too low or chunks too large | Raise the score threshold to 0.8 in the Extract Context node; re-chunk content into shorter segments
OpenAI 401 Unauthorized | API key missing or expired | Regenerate the key at platform.openai.com and update the HTTP Header Auth credential in n8n
Slack “not_in_channel” error | Bot not invited to the channel | Run /invite @YourBotName in the channel before testing

Frequently Asked Questions

Do I need to load content into Pinecone before the bot will work?

Yes — without content in Pinecone, every query returns empty results and the bot will say “no information found.” The template package includes a Python ingestion script (in the Credentials Guide PDF) that takes any plain text or Markdown file, splits it into chunks, generates embeddings, and uploads them to Pinecone. You can have a basic knowledge base loaded in 10–15 minutes.

How do I prevent the bot from answering questions it shouldn’t?

The system prompt in Step 6 instructs GPT-4o-mini to answer only from the provided context. If a question doesn’t match anything in Pinecone above the 0.7 threshold, the Extract Context node sends a “no information found” message as the context — and the AI is instructed to honestly say so and redirect to the team. You control what goes into Pinecone, so you control what the bot can answer.

How much does this cost to run per month?

For a team of 50 people asking roughly 200 questions a day: OpenAI embedding calls cost about $0.005/day, and GPT-4o-mini answers cost about $0.10/day. Pinecone’s free Starter plan handles up to 100,000 vectors — more than enough for a thorough company knowledge base. Total cost: roughly $3–4/month in API fees.

Can I restrict the bot to specific Slack channels?

Yes. In the Filter Bot Messages IF node (Step 2), add a second condition: $json.channel equals your target channel ID. The bot will only respond in that specific channel. You can find a channel’s ID by right-clicking it in Slack and selecting “Copy link” — the ID is the string starting with C at the end of the URL.

How do I keep the knowledge base current as our docs change?

Build a second n8n workflow that watches for document updates — a Google Drive trigger that fires when a doc is modified, re-embeds the content, and upserts it to Pinecone by the same id. Since Pinecone’s upsert operation overwrites by ID, you won’t accumulate duplicates. You can also just re-run the ingestion script manually after major documentation updates.

Does this work on n8n Cloud, or only self-hosted?

It works on both. The workflow uses only HTTP Request nodes, a Code node, the Slack Trigger, and the Slack node — all available in n8n Cloud and every self-hosted version from 1.0 onwards. No custom nodes or community packages are required.

🚀 Get the Slack Knowledge Base Bot Template

You now know exactly how this workflow is built. The template gets you there in under 10 minutes: it includes the ready-to-import workflow JSON, a Setup Guide PDF with step-by-step activation instructions, and a Credentials Guide PDF with a working Python ingestion script to load your knowledge base into Pinecone.

Get the Template →

Instant download · Works on n8n Cloud and self-hosted

What’s Next?

  • Add answer feedback: Let users react 👍 or 👎 on bot replies, log reactions to Airtable, and use quality scores to identify gaps in your knowledge base.
  • Auto-ingest from Notion or Confluence: Build a companion n8n workflow that watches for document updates and automatically re-embeds modified pages into Pinecone.
  • Add a Slack slash command: Create a /ask command so users can query the bot privately in DMs without cluttering a shared channel.
  • Multi-namespace routing: Create separate Pinecone namespaces for HR, Engineering, and Sales — and route questions to the right namespace based on which Slack channel they came from.
n8n
Slack
OpenAI
Pinecone
RAG
AI automation
knowledge base
vector search

How to Auto-Sync Notion Databases to Airtable with n8n

Every edit you make in Notion can land in Airtable automatically — no copy-paste, no manual exports, no data drift. If your team lives in Notion but your operations dashboard runs in Airtable, you already know the pain: the two fall out of sync within hours, someone gets the wrong status, and suddenly half your workday disappears into reconciliation. This guide shows you how to build an n8n workflow that polls your Notion database every 15 minutes and upserts each record straight into Airtable — new items get created, updated items get refreshed, and nothing gets duplicated.

Prefer to skip the setup? Grab the ready-made template → and be up and running in under 10 minutes.

What You’ll Build

  1. A schedule-triggered n8n workflow fires every 15 minutes — no manual intervention needed.
  2. The workflow fetches every page from your chosen Notion database and filters it down to only those edited in the last 15 minutes, so you’re never re-processing stale data.
  3. A Code node extracts the Notion page properties (title, status, priority, email, due date, notes, and tags) into a clean, flat object.
  4. The Airtable node upserts each record using the Notion page ID as the unique key — if the row already exists it updates it; if not it creates a new one.
  5. Your Airtable base stays in near-real-time sync with Notion, with a direct link back to each Notion page for quick reference.

How It Works — The Big Picture

The workflow is a single linear pipeline: schedule → fetch → filter → transform → upsert. The magic is in the upsert step, which uses the Notion page ID as a match key so it can tell the difference between a record it’s seen before and a brand-new one.

┌──────────────────────────────────────────────────────────────────────┐
│  AUTO-SYNC NOTION DATABASE → AIRTABLE                                │
│                                                                      │
│  [Every 15 Min]                                                      │
│       │                                                              │
│       ▼                                                              │
│  [Fetch All Notion Pages]  ← polls your Notion database             │
│       │                                                              │
│       ▼                                                              │
│  [Filter: Modified in Last 15 Min]  ← skips unchanged records       │
│       │                                                              │
│       ▼                                                              │
│  [Extract & Map Fields]  ← Code node: flattens Notion properties    │
│       │                                                              │
│       ▼                                                              │
│  [Upsert in Airtable]  ← creates new rows OR updates existing ones  │
│                            matched by notion_page_id                 │
└──────────────────────────────────────────────────────────────────────┘

What You’ll Need

  • n8n — self-hosted (v1.0+) or an n8n Cloud account
  • Notion account — with a database you want to sync; free tier works
  • Notion Integration — a connected integration with read access to your database (takes about 2 minutes to set up at notion.so/my-integrations)
  • Airtable account — free tier works; you’ll need a base with a table that mirrors your Notion schema
  • Airtable Personal Access Token — generated from your Airtable account settings with data.records:write scope

Estimated build time: 30–45 minutes from scratch, or under 10 minutes with the ready-made template.

Building the Workflow — Step by Step

1 Every 15 Minutes (Schedule Trigger)

This node kicks off the entire workflow on a recurring schedule. It doesn’t pass any data downstream — it’s just the heartbeat.

In n8n, add a Schedule Trigger node and configure it like this:

  1. Set Trigger Interval to Minutes
  2. Set Minutes Between Triggers to 15
  3. Click Save
💡

Tip: If your Notion database is high-volume (hundreds of edits per day), consider shortening the interval to 5 minutes. For low-activity databases, 30 or 60 minutes is perfectly fine and reduces API calls.

2 Fetch All Notion Pages (Notion node)

This node connects to your Notion database and retrieves all pages inside it. We’ll filter them down in the next step — here we just get everything so nothing slips through the cracks.

  1. Add a Notion node and connect your Notion credential (or create one using your integration’s Internal Integration Secret)
  2. Set Resource to Database Page
  3. Set Operation to Get Many
  4. In the Database ID field, paste the ID of your Notion database. You can find it in the database URL: notion.so/workspace/THIS-LONG-ID?v=...
  5. Enable Return All so n8n fetches every page, not just the first 100

After this node runs, each output item represents one Notion page. The raw data looks something like this:

{
  "id": "a1b2c3d4-e5f6-7890-abcd-ef1234567890",
  "last_edited_time": "2026-04-04T09:47:00.000Z",
  "url": "https://www.notion.so/a1b2c3d4e5f67890abcdef1234567890",
  "properties": {
    "Name": { "title": [{ "plain_text": "Q2 Marketing Campaign" }] },
    "Status": { "select": { "name": "In Progress" } },
    "Priority": { "select": { "name": "High" } },
    "Email": { "email": "james.carter@gmail.com" },
    "Due Date": { "date": { "start": "2026-04-15" } },
    "Notes": { "rich_text": [{ "plain_text": "Coordinate with Emily on copy approval." }] },
    "Tags": { "multi_select": [{ "name": "Marketing" }, { "name": "Q2" }] }
  }
}
📌

The Notion node returns properties in a deeply nested format. Your property names (Name, Status, etc.) must match exactly what they’re called in your Notion database. If you’re using different column names, you’ll update the Code node in Step 4 to match.

3 Filter: Modified in Last 15 Min (Filter node)

Without this filter, the workflow would re-upsert every record in your Notion database on every run — wasteful and slow. This node passes only the pages that were edited since the last trigger fired.

  1. Add a Filter node
  2. Add a condition with Left Value set to the expression:
    ={{ new Date($json.last_edited_time).getTime() }}
  3. Set Operator to Greater Than or Equal
  4. Set Right Value to:
    ={{ Date.now() - 15 * 60 * 1000 }}
  5. Make sure Value Type is set to Number for both sides
💡

Tip: The expression Date.now() - 15 * 60 * 1000 computes the Unix timestamp for exactly 15 minutes ago. If your schedule trigger interval is different, replace 15 with your interval in minutes. Adding a small buffer (say, 16 minutes for a 15-minute schedule) ensures you never miss a record due to tiny timing drift.
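The same comparison, written out as plain JavaScript with the suggested one-minute buffer applied:

```javascript
const WINDOW_MS = 16 * 60 * 1000;  // 15-minute schedule plus a 1-minute buffer

// True if the Notion page was edited within the lookback window
function editedWithinWindow(page, now = Date.now()) {
  return new Date(page.last_edited_time).getTime() >= now - WINDOW_MS;
}
```

In the workflow this lives in the Filter node's expressions; the function form just makes the cutoff arithmetic easy to see and test.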

4 Extract & Map Fields (Code node)

Notion’s property format is deeply nested — a simple title field is buried inside properties.Name.title[0].plain_text. This Code node flattens everything into a clean, Airtable-ready object with one pass.

  1. Add a Code node
  2. Set Mode to Run Once for Each Item
  3. Paste in the following JavaScript:
// Extract Notion page properties into flat, Airtable-friendly fields
const page = $input.item.json;
const props = page.properties ?? {};

// Helper: safely read any Notion property type
const get = (key, type) => {
  const p = props[key];
  if (!p) return '';
  switch (type) {
    case 'title':        return p.title?.[0]?.plain_text ?? '';
    case 'rich_text':    return p.rich_text?.[0]?.plain_text ?? '';
    case 'select':       return p.select?.name ?? '';
    case 'multi_select': return p.multi_select?.map(s => s.name).join(', ') ?? '';
    case 'date':         return p.date?.start ?? '';
    case 'email':        return p.email ?? '';
    case 'number':       return p.number ?? 0;
    case 'checkbox':     return String(p.checkbox ?? false);
    case 'url':          return p.url ?? '';
    default:             return '';
  }
};

return {
  json: {
    notion_page_id: page.id,
    Name:           get('Name', 'title'),
    Status:         get('Status', 'select'),
    Priority:       get('Priority', 'select'),
    Email:          get('Email', 'email'),
    Due_Date:       get('Due Date', 'date'),
    Notes:          get('Notes', 'rich_text'),
    Tags:           get('Tags', 'multi_select'),
    Last_Edited:    page.last_edited_time,
    Notion_URL:     page.url,
  }
};

After this node runs, each item is a clean, flat object ready to be written to Airtable:

{
  "notion_page_id": "a1b2c3d4-e5f6-7890-abcd-ef1234567890",
  "Name": "Q2 Marketing Campaign",
  "Status": "In Progress",
  "Priority": "High",
  "Email": "james.carter@gmail.com",
  "Due_Date": "2026-04-15",
  "Notes": "Coordinate with Emily on copy approval.",
  "Tags": "Marketing, Q2",
  "Last_Edited": "2026-04-04T09:47:00.000Z",
  "Notion_URL": "https://www.notion.so/a1b2c3d4..."
}
💡

Tip: If your Notion database has different property names or types, update the get('PropertyName', 'type') calls to match. The helper function supports all common Notion property types — just change the key and type string. Add or remove fields as needed for your own schema.

5 Upsert in Airtable (Airtable node)

This is where the sync actually happens. The Airtable Upsert operation is the key: it checks whether a record matching your specified field already exists, updates it if so, and creates a new one if not — all in a single API call.

  1. Add an Airtable node and connect your Airtable credential (Personal Access Token)
  2. Set Operation to Upsert
  3. Set Base to your Airtable base (select from the dropdown or paste the Base ID)
  4. Set Table to your sync table
  5. Under Fields to Match On, enter notion_page_id — this is the unique key that tells Airtable whether to create or update
  6. In the field mapping section, map each field from the Code node output to the corresponding Airtable column
{
  "notion_page_id": "={{ $json.notion_page_id }}",
  "Name":           "={{ $json.Name }}",
  "Status":         "={{ $json.Status }}",
  "Priority":       "={{ $json.Priority }}",
  "Email":          "={{ $json.Email }}",
  "Due_Date":       "={{ $json.Due_Date }}",
  "Notes":          "={{ $json.Notes }}",
  "Tags":           "={{ $json.Tags }}",
  "Last_Edited":    "={{ $json.Last_Edited }}",
  "Notion_URL":     "={{ $json.Notion_URL }}"
}
📌

The Airtable upsert operation requires your Airtable table to have a text field named notion_page_id. If this field doesn’t exist, Airtable won’t know what to match on and will create duplicates. Create the field in Airtable before activating the workflow.
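Behind the scenes, n8n’s upsert maps to Airtable’s Web API. If you ever need to reproduce it with a raw HTTP Request node, a PATCH to https://api.airtable.com/v0/{baseId}/{tableName} with a body along these lines should behave the same (a sketch based on Airtable’s performUpsert option; verify against the current Web API docs, and treat the IDs and field values as placeholders):

```json
{
  "performUpsert": {
    "fieldsToMergeOn": ["notion_page_id"]
  },
  "records": [
    {
      "fields": {
        "notion_page_id": "a1b2c3d4-e5f6-7890-abcd-ef1234567890",
        "Name": "Q2 Marketing Campaign",
        "Status": "In Progress"
      }
    }
  ]
}
```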

The Airtable Table Schema

Your Airtable sync table needs to match the fields your Code node outputs. Here’s the recommended schema — create these columns in Airtable before running the workflow for the first time:

| Column Name | Airtable Field Type | Example Value | Purpose |
| --- | --- | --- | --- |
| notion_page_id | Single line text | a1b2c3d4-e5f6-... | Unique match key — never changes for a given Notion page |
| Name | Single line text | Q2 Marketing Campaign | The page title from Notion |
| Status | Single line text (or Single select) | In Progress | Current workflow status |
| Priority | Single line text (or Single select) | High | Task priority level |
| Email | Email | james.carter@gmail.com | Contact email from Notion |
| Due_Date | Date | 2026-04-15 | ISO date string from Notion’s date property |
| Notes | Long text | Coordinate with Emily… | Rich-text notes (plain text only — no formatting) |
| Tags | Single line text | Marketing, Q2 | Comma-separated list of Notion multi-select values |
| Last_Edited | Single line text | 2026-04-04T09:47:00Z | ISO timestamp of last Notion edit — useful for debugging |
| Notion_URL | URL | https://notion.so/… | Direct link back to the source Notion page |

Here’s what a couple of synced rows look like in practice:

| Name | Status | Priority | Email | Due_Date |
| --- | --- | --- | --- | --- |
| Q2 Marketing Campaign | In Progress | High | james.carter@gmail.com | 2026-04-15 |
| Vendor Contract Review | Done | Medium | emily.rodriguez@outlook.com | 2026-03-31 |
| Onboarding Flow Redesign | Not Started | Low | michael.chen@gmail.com | 2026-05-01 |
📌

Column names in Airtable are case-sensitive and must match exactly what the Code node outputs (e.g., Due_Date with a capital D and underscore). If you rename any column, update the Code node’s output keys to match.

Full System Flow

Here’s the complete data journey from Notion edit to Airtable row, end to end:

┌─────────────────────────────────────────────────────────────────────────┐
│  NOTION (source of truth)                                               │
│  User edits a page → last_edited_time updates                           │
└──────────────────────────────┬──────────────────────────────────────────┘
                               │  (every 15 minutes)
                               ▼
┌─────────────────────────────────────────────────────────────────────────┐
│  n8n WORKFLOW                                                           │
│                                                                         │
│  Schedule Trigger (every 15 min)                                        │
│       │                                                                 │
│       ▼                                                                 │
│  Notion node: GET all pages from database                               │
│       │  returns N page objects with nested properties                  │
│       ▼                                                                 │
│  Filter node: keep only pages where                                     │
│               last_edited_time >= now - 15 min                          │
│       │  passes M ≤ N recently changed pages                            │
│       ▼                                                                 │
│  Code node: flatten Notion props → clean JSON object                    │
│       │  { notion_page_id, Name, Status, Priority, … }                  │
│       ▼                                                                 │
│  Airtable Upsert: match on notion_page_id                               │
│       ├─ Record found?  → UPDATE existing row                           │
│       └─ Record missing? → CREATE new row                               │
│                                                                         │
└──────────────────────────────┬──────────────────────────────────────────┘
                               │
                               ▼
┌─────────────────────────────────────────────────────────────────────────┐
│  AIRTABLE (reporting / ops layer)                                       │
│  Rows stay in sync with Notion — max 15 minutes behind                 │
└─────────────────────────────────────────────────────────────────────────┘

Testing Your Workflow

  1. Make a test edit in Notion: Open your Notion database, pick any page, and change a field (e.g., update the Status to “In Review”). Save it — Notion updates last_edited_time automatically.
  2. Run the workflow manually: In n8n, click Test Workflow (the play button) on the Schedule Trigger node to fire it immediately without waiting 15 minutes.
  3. Check the Filter node output: The edited page should pass through. If the Filter passes 0 items, check that your system clock is correct and that the edited page’s last_edited_time is recent.
  4. Check Airtable: Within a few seconds, the corresponding row should appear or update in your Airtable table. Verify the field values match what you changed in Notion.
  5. Test the upsert (no-duplicate check): Run the workflow a second time immediately. The Airtable row should update in place — no duplicate rows should appear.
| Problem | Likely Cause | Fix |
| --- | --- | --- |
| Filter passes 0 items even after editing Notion | Edit was made more than 15 minutes ago | Edit the page again and immediately run the test, or temporarily widen the filter window to 60 minutes |
| Airtable creates duplicate rows | notion_page_id column missing in Airtable table, or the “Fields to Match On” setting is blank | Create the notion_page_id column in Airtable and confirm it’s set as the upsert match key |
| Code node fails with “Cannot read properties of undefined” | Your Notion database has a different property name or type than expected | Open the Notion node output in n8n and check the exact property names in properties, then update the Code node’s get() calls to match |
| Notion node returns 0 items | Integration not connected to the database | In Notion, open the database → click ⋯ → Connections → add your integration |
| Airtable “Invalid permissions” error | Personal Access Token missing data.records:write scope | Regenerate the token at airtable.com/account with the correct scopes checked |

Frequently Asked Questions

Does this sync work in both directions — Airtable to Notion as well?

The workflow described here is one-directional: Notion is the source of truth and Airtable is the destination. Building a reverse sync (Airtable → Notion) is possible but requires a second workflow that watches Airtable for changes via polling or a webhook. Combining both directions into a bidirectional sync also requires a “last-write-wins” or conflict resolution strategy to avoid infinite loops.
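At its core, a last-write-wins strategy is just a timestamp comparison. A hypothetical helper (the function and parameter names are illustrative, not part of this template) could look like:

```javascript
// Last-write-wins: whichever side was edited more recently wins
// for this record. Both arguments are ISO 8601 timestamps.
function pickWinner(notionEditedAt, airtableEditedAt) {
  const notionTs = new Date(notionEditedAt).getTime();
  const airtableTs = new Date(airtableEditedAt).getTime();
  // Ties go to Notion, since it is the source of truth in this setup.
  return notionTs >= airtableTs ? 'notion' : 'airtable';
}
```

A real bidirectional sync would also need to skip writes that would only echo the other side’s last update, or the two workflows will trigger each other forever.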

What happens if my Notion database has hundreds of pages?

The Notion node with Return All enabled will fetch every page regardless of count, which can be slow for very large databases (500+ pages). In that case, consider using Notion’s built-in filter inside the Notion node to retrieve only pages modified after a certain date — this offloads the filtering to Notion’s API and reduces the volume of data n8n has to process.
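If you go that route with a raw HTTP Request node, Notion’s database query endpoint (POST https://api.notion.com/v1/databases/{database_id}/query) accepts a timestamp filter; the cutoff date below is illustrative:

```json
{
  "filter": {
    "timestamp": "last_edited_time",
    "last_edited_time": {
      "on_or_after": "2026-04-04T09:00:00.000Z"
    }
  },
  "page_size": 100
}
```

With this body, Notion only returns pages edited on or after the cutoff, so the n8n Filter node has far less to discard.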

Can I sync multiple Notion databases to multiple Airtable tables?

Yes — you can either duplicate this workflow (one copy per database-table pair) or extend it with a Switch node to route different database IDs to different Airtable table IDs. The duplicate approach is simpler to maintain; the Switch approach reduces the number of active workflows in your n8n instance.

My Notion database has properties that aren’t in the Code node — how do I add them?

Add a new line to the return { json: { ... } } block using the get('YourPropertyName', 'type') helper. The supported types are: title, rich_text, select, multi_select, date, email, number, checkbox, and url. Then create a matching column in your Airtable table and add it to the Airtable node’s field mapping.

Will this workflow work on n8n Cloud or only on self-hosted?

It works on both. The Schedule Trigger, Notion node, Filter node, Code node, and Airtable node are all built into n8n — no extra packages or server access required. On n8n Cloud, just import the JSON template and connect your credentials.

How do I handle Notion relation or formula properties?

Relation properties return an array of page references (not plain text), and formula properties return a computed value in a nested formula object. Neither is handled by the default Code node in this template. You can extend the get() helper with a case 'relation' branch that maps the array to a comma-separated list of page IDs, or use a case 'formula' branch that reads p.formula.string (or .number, .boolean) depending on your formula’s return type.
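As a concrete sketch, those two extra branches could look like this (written as a standalone helper here for clarity; in the workflow you’d add the cases to the existing switch in Step 4, and the property shapes should be verified against Notion’s API reference):

```javascript
// Extra property types for the Step 4 helper.
// `props` is the same Notion page properties object. Sketch only.
const getExtra = (props, key, type) => {
  const p = props[key];
  if (!p) return '';
  switch (type) {
    case 'relation':
      // Array of page references → comma-separated related page IDs
      return (p.relation ?? []).map(r => r.id).join(', ');
    case 'formula': {
      // The formula result lives under p.formula.<return type>
      const f = p.formula ?? {};
      return f.string ?? f.number ?? f.boolean ?? f.date?.start ?? '';
    }
    default:
      return '';
  }
};
```

One caveat: with the `??` chain above, a formula that legitimately returns 0 or false still comes through, but if you need exact semantics, branch on `p.formula.type` instead of probing each field.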

🚀 Get the Notion → Airtable Sync Template

You now know exactly how this workflow operates. Skip the setup time and grab the ready-to-import template — it includes the workflow JSON, a step-by-step Setup Guide PDF, and a Credentials Guide that walks you through every API key you’ll need.

Get the Template →

Instant download · Works on n8n Cloud and self-hosted · $14.99

What’s Next?

  • Add a Slack notification: After the Airtable upsert, add a Slack node that posts a message to a channel whenever a high-priority record is synced — good for keeping the team aware of urgent changes without checking Airtable manually.
  • Filter by status: Add an IF node after the Code step to only sync records that are “In Progress” or “Done” — useful if you want Airtable to act as a completed-work log rather than a full mirror.
  • Reverse sync: Build a companion workflow that polls Airtable for changes and writes them back to Notion, creating a true bidirectional sync between both tools.
  • Add error alerting: Wrap the Airtable upsert in a try/catch (or use n8n’s Error Workflow feature) to send yourself an email or Slack DM if the sync fails — so you’re never left with stale data and no idea why.
n8n
Notion
Airtable
automation
database sync
no-code
productivity

How to Auto-Triage Support Emails to Slack with AI Using n8n

Every support inbox gets noisy fast. A billing question sits next to a frantic “our entire account is locked” message, and both look the same in a Gmail preview. By the time someone reads the urgent one, an hour has slipped by. This tutorial walks you through building an n8n workflow that reads every inbound support email, asks GPT-4o mini to classify it by priority and category, routes the result to the right Slack channel — and logs everything to a Google Sheet so nothing ever disappears.

Prefer to skip the setup? Grab the ready-made template → and be up and running in under 10 minutes.

What You’ll Build

  1. A Gmail trigger polls your inbox every minute for new unread emails.
  2. An OpenAI call analyzes the email and returns a priority level (High / Medium / Low), a category (Bug Report, Billing, Feature Request, etc.), a one-sentence summary, and the sender’s detected sentiment.
  3. A conditional branch routes High-priority tickets to a #support-urgent Slack channel with a bold alert, and everything else to #support as a standard notification.
  4. Every ticket — regardless of priority — is appended to a Google Sheet so you have a permanent, searchable log of all incoming requests.

How It Works — The Big Picture

The workflow runs as a polling loop. Think of it as a tireless assistant who checks email every sixty seconds, reads it with an AI brain, and instantly posts into the right Slack channel — then files the paperwork in a spreadsheet.

┌─────────────────────────────────────────────────────────────────────┐
│  GMAIL → SLACK AI SUPPORT TRIAGE                                    │
│                                                                     │
│  [Gmail Trigger]                                                    │
│       │  (new unread email)                                         │
│       ▼                                                             │
│  [Extract Email Fields]                                             │
│       │  (sender, subject, body snippet, timestamp)                 │
│       ▼                                                             │
│  [AI Classify & Summarize]  ←── OpenAI gpt-4o-mini                 │
│       │  (priority, category, summary, sentiment)                   │
│       ▼                                                             │
│  [Parse AI Response]                                                │
│       │                                                             │
│       ▼                                                             │
│  [Check Priority]                                                   │
│       ├────────────────────────┐                                    │
│    priority=High           priority≠High                            │
│       │                        │                                    │
│       ▼                        ▼                                    │
│  [Slack Urgent]          [Slack Standard]                           │
│  #support-urgent          #support                                  │
│       │                        │                                    │
│       └──────────┬─────────────┘                                    │
│                  ▼                                                  │
│         [Log to Google Sheets]                                      │
└─────────────────────────────────────────────────────────────────────┘

What You’ll Need

  • n8n — self-hosted or n8n Cloud (any version 1.0+)
  • Gmail account — connected via Google OAuth2 in n8n
  • OpenAI account — API key with access to gpt-4o-mini
  • Slack workspace — with two channels: #support-urgent and #support
  • Google Sheets — a spreadsheet with the columns described below

Estimated build time: 40–60 minutes from scratch, or under 10 minutes with the template.


Building the Workflow — Step by Step

1 Gmail Trigger — Watch for New Emails

The Gmail Trigger node polls your inbox every minute and fires once for each new unread email it finds. Connect your Gmail credential via Google OAuth2, set Poll Times to Every Minute, and set Read Status to Unread under Filters.

After a test run, the raw payload looks like this:

{
  "id": "18f4a3b2c1d0e9f8",
  "from": { "value": [{ "name": "Sarah Thompson", "address": "sarah.thompson@outlook.com" }] },
  "subject": "Can't log into my account — urgent!",
  "text": "Hi, I've been trying to log in for the past hour and keep getting an 'Invalid credentials' error...",
  "date": "2026-04-03T14:22:00.000Z"
}
💡

Tip: If you only want to monitor a specific support address, route emails to a Gmail label first, then filter by that label in the trigger node.

2 Extract Email Fields — Set Node

The Set node flattens the nested Gmail payload into a clean, flat object for the AI prompt. Configure these assignments:

| Field Name | Expression | Purpose |
| --- | --- | --- |
| sender | ={{ $json.from?.value?.[0]?.address ?? $json.from }} | Sender’s email address |
| senderName | ={{ $json.from?.value?.[0]?.name ?? 'Unknown' }} | Sender’s display name |
| subject | ={{ $json.subject ?? '(No Subject)' }} | Email subject line |
| bodySnippet | ={{ ($json.text ?? $json.snippet ?? '').substring(0, 1200) }} | First 1,200 chars of body |
| receivedAt | ={{ $now.format('yyyy-MM-dd HH:mm') }} | Human-readable timestamp |
| messageId | ={{ $json.id }} | Gmail message ID |
📌

We cap the body at 1,200 characters to keep OpenAI token costs low. Increase this limit if your customers tend to write detailed reports.

3 AI Classify & Summarize — HTTP Request to OpenAI

An HTTP Request node sends the email to OpenAI’s Chat Completions API. Set Method to POST, URL to https://api.openai.com/v1/chat/completions, and Authentication to HTTP Header Auth with Authorization: Bearer YOUR_KEY. Use this JSON body:

{
  "model": "gpt-4o-mini",
  "temperature": 0,
  "response_format": { "type": "json_object" },
  "messages": [
    { "role": "system", "content": "You are a support triage assistant. Return JSON with: priority (High/Medium/Low), category (Bug Report/Billing/Feature Request/General Question/Account Issue), summary (max 120 chars), sentiment (Frustrated/Neutral/Positive). High priority = outage, data loss, billing error, account locked, urgent/ASAP." },
    { "role": "user", "content": "From: {{ $json.senderName }} <{{ $json.sender }}>\nSubject: {{ $json.subject }}\n\n{{ $json.bodySnippet }}" }
  ]
}
💡

Tip: Setting temperature: 0 makes classification deterministic. The response_format: json_object parameter guarantees the response is always valid JSON, preventing parse errors downstream.

4 Parse AI Response — Code Node

This Code node extracts the nested JSON, parses it, and merges AI classifications with the original email fields:

const aiRaw = items[0].json.choices?.[0]?.message?.content ?? '{}';
let ai;
try { ai = JSON.parse(aiRaw); }
catch (e) { ai = { priority: 'Low', category: 'General Question', summary: 'Parse error', sentiment: 'Neutral' }; }

const emailData = $('Extract Email Fields').item.json;
return [{ json: {
  sender: emailData.sender,
  senderName: emailData.senderName,
  subject: emailData.subject,
  bodySnippet: emailData.bodySnippet,
  receivedAt: emailData.receivedAt,
  messageId: emailData.messageId,
  priority: ai.priority ?? 'Low',
  category: ai.category ?? 'General Question',
  summary: ai.summary ?? emailData.subject,
  sentiment: ai.sentiment ?? 'Neutral'
}}];

The merged output object looks like this:

{
  "sender": "sarah.thompson@outlook.com",
  "senderName": "Sarah Thompson",
  "subject": "Can't log into my account — urgent!",
  "priority": "High",
  "category": "Account Issue",
  "summary": "User cannot log in due to 'Invalid credentials' error persisting for over an hour.",
  "sentiment": "Frustrated",
  "receivedAt": "2026-04-03 14:22",
  "messageId": "18f4a3b2c1d0e9f8"
}
💡

Tip: The try/catch block ensures the workflow never crashes if OpenAI is rate-limited or returns an unexpected response — it falls back to Low priority instead.

5 Check Priority — IF Node

The IF node checks whether priority equals "High" (case-insensitive). True branch → urgent Slack alert. False branch → standard notification. Left value: ={{ $json.priority }}, Operation: String equals, Right value: High.

📌

To add a Medium-priority route, swap the IF node for a Switch node with three outputs: High, Medium, and Low.

6 Slack Urgent Alert

Posts to #support-urgent with Block Kit formatting. Set Resource to Message, Operation to Post, Message Type to Blocks. The card shows sender name, category, received timestamp, sentiment, subject, and AI summary — everything your on-call team needs at a glance.

💡

Tip: Add an Action block with a button linking directly to the email: https://mail.google.com/mail/u/0/#inbox/{{ $json.messageId }}
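The exact Block Kit layout is up to you. One possible Blocks payload for the urgent card (the header text and field labels are illustrative, and the {{ }} expressions assume the Step 4 output fields):

```json
[
  {
    "type": "header",
    "text": { "type": "plain_text", "text": "🚨 High-Priority Support Ticket", "emoji": true }
  },
  {
    "type": "section",
    "fields": [
      { "type": "mrkdwn", "text": "*From:*\n{{ $json.senderName }} ({{ $json.sender }})" },
      { "type": "mrkdwn", "text": "*Category:*\n{{ $json.category }}" },
      { "type": "mrkdwn", "text": "*Received:*\n{{ $json.receivedAt }}" },
      { "type": "mrkdwn", "text": "*Sentiment:*\n{{ $json.sentiment }}" }
    ]
  },
  {
    "type": "section",
    "text": { "type": "mrkdwn", "text": "*{{ $json.subject }}*\n{{ $json.summary }}" }
  }
]
```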

7 Slack Standard Alert

Identical configuration to Step 6, but posts to #support with a gentler 📬 New Support Ticket header and includes the priority level in the fields.

8 Log to Google Sheets

Both Slack branches converge here. The Google Sheets node appends one row per email. Set Operation to Append Row and map all nine columns. This creates a permanent, searchable history of every inbound request.


The Data Structure

Create a Google Sheet named Support Ticket Log with these columns in row 1. Names are case-sensitive and must match exactly.

| Column | Type | Example | Description |
| --- | --- | --- | --- |
| Received At | Text | 2026-04-03 14:22 | Timestamp when the email arrived |
| Sender | Text | sarah.thompson@outlook.com | Sender’s email address |
| Sender Name | Text | Sarah Thompson | Sender’s display name from Gmail |
| Subject | Text | Can’t log into my account — urgent! | Original email subject line |
| Priority | Text | High | AI-assigned priority: High / Medium / Low |
| Category | Text | Account Issue | AI-assigned ticket category |
| Sentiment | Text | Frustrated | Detected sender emotion |
| AI Summary | Text | User cannot log in due to… | One-sentence AI summary |
| Message ID | Text | 18f4a3b2c1d0e9f8 | Gmail message ID for deduplication |
📌

Column names are case-sensitive. Leaving out columns won’t break the workflow — n8n simply won’t write to unmapped columns.


Full System Flow

  INCOMING EMAIL
       │
       ▼
  ┌────────────────┐
  │  Gmail Trigger │  (polls every minute)
  └────────┬───────┘
           │ raw email payload
           ▼
  ┌──────────────────────┐
  │ Extract Email Fields │  (Set node)
  └──────────┬───────────┘
             │ {sender, subject, bodySnippet, receivedAt, messageId}
             ▼
  ┌──────────────────────────────┐
  │  AI Classify & Summarize     │  (HTTP → OpenAI gpt-4o-mini)
  └──────────┬───────────────────┘
             │ OpenAI JSON response
             ▼
  ┌──────────────────────┐
  │  Parse AI Response   │  (Code node)
  └──────────┬───────────┘
             │ {priority, category, summary, sentiment, + email fields}
             ▼
  ┌──────────────────────┐
  │    Check Priority    │  (IF node)
  └────┬─────────────────┘
       ├─────────────────────┐
       │                     │
  priority=High         priority≠High
       │                     │
       ▼                     ▼
  ┌────────────┐     ┌────────────────┐
  │ Slack      │     │ Slack          │
  │ #support-  │     │ #support       │
  │  urgent    │     │ (standard)     │
  └─────┬──────┘     └──────┬─────────┘
        │                   │
        └──────────┬────────┘
                   ▼
       ┌────────────────────┐
       │  Log to Google     │
       │  Sheets (append)   │
       └────────────────────┘

Testing Your Workflow

  1. Activate the workflow (toggle the Active switch in the top-right).
  2. Send a test email with subject “URGENT: payment failed — please help!” to the monitored inbox.
  3. Wait up to 60 seconds, then check #support-urgent in Slack — you should see a card with a bold header.
  4. Check your Google Sheet — a new row should appear with all nine columns filled.
  5. Send a second email with subject “Feature request: dark mode”. It should appear in #support as Low priority.
| Problem | Likely Cause | Fix |
| --- | --- | --- |
| No Slack message received | Slack credential scope missing | Re-authorize Slack and ensure chat:write scope is granted |
| Google Sheets row not appended | Column name mismatch | Check that sheet column headers match exactly (case-sensitive) |
| OpenAI node returns 401 | API key format wrong | Verify the Authorization header value is Bearer sk-… |
| All emails classified as Low | AI parse error falling back | Inspect Code node output — check choices[0].message.content |
| Gmail trigger fires duplicates | Emails re-marked as unread | Enable the Mark as Read option on the Gmail Trigger node |

Frequently Asked Questions

Does this workflow automatically reply to the sender?

No — this workflow focuses purely on triage and notification. You can add a Gmail send node at the end to fire an auto-acknowledgment, wired in parallel with the Slack nodes so both happen simultaneously.

How much does the OpenAI API cost to run this?

GPT-4o mini is very inexpensive — roughly $0.15 per million input tokens. For a typical support email (~200 tokens), you’re looking at fractions of a cent per email. Processing 1,000 emails per month costs less than $0.50 in API fees.
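A quick back-of-the-envelope check of that claim, using the input-token price only (output tokens cost more per token, but each classification response here is tiny):

```javascript
// Rough input-side cost of classifying emails with GPT-4o mini.
// $0.15 per 1M input tokens and ~200 tokens/email are the figures above.
const PRICE_PER_INPUT_TOKEN = 0.15 / 1_000_000;
const TOKENS_PER_EMAIL = 200;

const costPerEmail = TOKENS_PER_EMAIL * PRICE_PER_INPUT_TOKEN; // about $0.00003
const costPer1000Emails = costPerEmail * 1000;                 // about $0.03
```

Even after adding output tokens and the system prompt, 1,000 emails stays comfortably under the $0.50 figure quoted above.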

Can I add more priority levels or categories?

Absolutely. Edit the system prompt in the HTTP Request node to include any levels or categories you need. Just make sure your IF or Switch node branches match whatever the AI returns.

What happens if OpenAI is down or rate-limited?

The Code node has a try/catch block that falls back to Low priority if parsing fails. The email still gets logged to Google Sheets and posted to Slack — just without an AI classification.

Can I monitor multiple Gmail inboxes?

Yes. Add a second Gmail Trigger node connected to a different Gmail credential and wire it into the same Extract Email Fields node. Both triggers independently poll their inboxes and feed into the same triage pipeline.

Is my email content sent to OpenAI’s servers?

Yes — the subject and first 1,200 characters of the body are sent to OpenAI’s API. OpenAI’s standard API does not use submitted data to train models. If your emails contain sensitive data, consider a self-hosted local model like Ollama with Llama 3 via the HTTP Request node.


🚀 Get the Gmail → Slack AI Support Triage Template

Skip the build and get a ready-to-import n8n workflow JSON, a step-by-step Setup Guide PDF, and a Credentials Guide PDF — everything you need to be running in under 10 minutes.

Get the Template →

Instant download · Works on n8n Cloud and self-hosted

What’s Next?

  • Auto-reply on High priority: Add a Gmail send node that fires an immediate acknowledgment to the customer when their ticket lands as High priority.
  • Create tickets in a helpdesk: Replace or augment the Google Sheets node with a Zendesk, Freshdesk, or Linear node to create a real ticket with the AI-assigned priority already set.
  • Daily digest report: Add a second workflow that runs at 8 AM every morning, reads yesterday’s rows from the Google Sheet, and posts a summary to Slack.
  • Sentiment escalation: DM the support manager directly when sentiment is Frustrated AND priority is High.
n8n
Gmail
Slack
OpenAI
Google Sheets
Support Automation
AI Triage
automation

How to Auto-Log Gmail Emails to Google Sheets with n8n (AI Categorization)

Every day, important emails land in your inbox — and most of them vanish into the void with no record of who sent them, what they were about, or how you responded. If you manage a business inbox, you already know the pain: support requests get buried, sales leads go cold, and billing questions get missed entirely. What if every single email was automatically logged, categorized, and timestamped the moment it arrived? That’s exactly what this n8n workflow does — it watches your Gmail inbox, asks OpenAI to categorize each new message, and appends a clean row to a Google Sheets log in real time.

Prefer to skip the setup? Grab the ready-made template → and be up and running in under 10 minutes.

What You’ll Build

  1. A Gmail trigger that polls your inbox every minute for new unread messages.
  2. An OpenAI call that reads each email’s subject line and preview text, then returns one of six categories: Support, Sales, Billing, Newsletter, Spam, or Other.
  3. A formatting step that packages the sender info, subject, snippet, category, and timestamp into a clean row structure.
  4. A Google Sheets append that adds the row to your Email Log sheet — instantly and automatically.

How It Works — The Big Picture

The workflow is a simple four-node pipeline. Gmail fires whenever a new unread email arrives, OpenAI classifies it in under a second, a Set node formats the fields, and Google Sheets stores the result. Nothing is deleted, moved, or marked read — this is a pure logging workflow that runs silently in the background.

+------------------------------------------------------------------+
|  AUTO-LOG GMAIL TO GOOGLE SHEETS WITH AI CATEGORIZATION          |
|                                                                  |
|  [Gmail Trigger]  -->  [Classify Email with AI]                  |
|   New unread email     POST to OpenAI GPT-4o-mini                |
|   (polls every min)    Returns: Support / Sales / Billing /      |
|                        Newsletter / Spam / Other                 |
|                                 |                                |
|                                 v                                |
|                        [Format Row Data]                         |
|                         Set node -- maps fields                  |
|                                 |                                |
|                                 v                                |
|                        [Log to Google Sheets]                    |
|                         Appends row to "Email Log" sheet         |
+------------------------------------------------------------------+

What You’ll Need

  • An n8n instance (Cloud or self-hosted — version 1.0 or later)
  • A Gmail account with OAuth2 connected in n8n
  • An OpenAI account with an API key (GPT-4o-mini is very cheap — under $0.01 per 1,000 emails)
  • A Google Sheets account with OAuth2 connected in n8n
  • A Google Sheet set up with the correct column headers (details below)

Estimated build time: 30–45 minutes from scratch, or under 10 minutes with the template.

Building the Workflow — Step by Step

1 New Email Trigger (Gmail Trigger)

This node is the entry point — it polls your Gmail inbox every minute looking for new unread messages. When it finds one, it sends all of that email’s metadata downstream.

To configure it:

  1. Add a Gmail Trigger node to your canvas.
  2. Under Credential, connect your Gmail OAuth2 account (click “Create new” if you haven’t set one up yet — n8n will walk you through the Google authorization flow).
  3. Set Poll Times to Every Minute. You can slow this down to every 5 or 15 minutes if your inbox is high-volume.
  4. Under Filters → Read Status, select Unread so you only process new messages, not your entire archive.

When this node fires, it outputs a JSON object for each new unread email. Here’s what a typical message looks like at this stage:

{
  "id": "18f3a1c9b4d2e7a5",
  "threadId": "18f3a1c9b4d2e7a5",
  "subject": "Re: Invoice #4821 - Payment Confirmation",
  "snippet": "Hi Sarah, just confirming that we processed payment this morning...",
  "date": "Fri, 03 Apr 2026 09:14:22 -0500",
  "from": {
    "value": [
      {
        "address": "james.carter@outlook.com",
        "name": "James Carter"
      }
    ]
  }
}
💡

Tip: If you want to monitor a specific label rather than your whole inbox (e.g., a shared support@ alias forwarded to a label), use the Label Names filter to narrow the scope. This prevents the workflow from logging every newsletter and promotional email if you have those auto-labeled.

2 Classify Email with AI (HTTP Request to OpenAI)

This node sends the email’s subject and preview text to OpenAI’s Chat Completions API. A tightly-worded system prompt instructs GPT-4o-mini to reply with a single category word — nothing else.

To configure it:

  1. Add an HTTP Request node.
  2. Set Method to POST and URL to https://api.openai.com/v1/chat/completions.
  3. Under Authentication, choose Predefined Credential Type then OpenAI, and connect your OpenAI API credential.
  4. Enable Send Body and set Content Type to JSON.
  5. In the Body field, paste the expression from the template (it builds a messages array with the system prompt and the email content dynamically).

The system prompt used is:

You are an email classifier. Classify the email into exactly one of these
categories: Support, Sales, Billing, Newsletter, Spam, Other. Reply with
only the category name, nothing else.
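For reference, the JSON body the template's expression assembles looks roughly like this. The exact wiring comes from the template; `temperature` and `max_tokens` here are illustrative settings (a temperature of 0 and a small token cap suit a single-word reply), not necessarily what the template uses:

```json
{
  "model": "gpt-4o-mini",
  "messages": [
    {
      "role": "system",
      "content": "You are an email classifier. Classify the email into exactly one of these categories: Support, Sales, Billing, Newsletter, Spam, Other. Reply with only the category name, nothing else."
    },
    {
      "role": "user",
      "content": "Subject: Re: Invoice #4821 - Payment Confirmation\nFrom: James Carter <james.carter@outlook.com>\nPreview: Hi Sarah, just confirming that we processed payment this morning..."
    }
  ],
  "temperature": 0,
  "max_tokens": 5
}
```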

OpenAI returns a standard completion response. The category lives at choices[0].message.content:

{
  "choices": [
    {
      "message": {
        "role": "assistant",
        "content": "Billing"
      }
    }
  ],
  "model": "gpt-4o-mini",
  "usage": {
    "prompt_tokens": 82,
    "completion_tokens": 1,
    "total_tokens": 83
  }
}
💡

Tip: GPT-4o-mini is extremely cheap for single-word completions — roughly $0.00015 per email. Even if you receive 500 emails a day, this classification step costs less than $3 a month. You can swap in gpt-4o if you want higher accuracy on ambiguous messages, but in practice mini handles email classification very well.
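Even with a tight prompt, it's worth guarding against an off-script reply before the value reaches your sheet. A minimal sketch of that guard as plain JavaScript (the function name `normalizeCategory` is hypothetical; in n8n you could put this in a small Code node between steps 2 and 3):

```javascript
// Hypothetical guard: normalize the model's reply and fall back to
// "Other" if it returns anything outside the six expected categories.
const CATEGORIES = ['Support', 'Sales', 'Billing', 'Newsletter', 'Spam', 'Other'];

function normalizeCategory(raw) {
  const cleaned = String(raw || '').trim();
  // Case-insensitive match so "billing" or "BILLING" still maps correctly
  const match = CATEGORIES.find(c => c.toLowerCase() === cleaned.toLowerCase());
  return match || 'Other';
}

console.log(normalizeCategory('Billing'));       // "Billing"
console.log(normalizeCategory('  spam \n'));     // "Spam"
console.log(normalizeCategory('I think Sales')); // "Other"
```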

3 Format Row Data (Set Node)

This node pulls fields from both the Gmail Trigger (step 1) and the OpenAI response (step 2) and assembles them into a single, clean object that matches the Google Sheets column headers exactly.

To configure it:

  1. Add a Set node.
  2. Switch to Manual Mapping mode.
  3. Create nine fields with these exact names (they must match your sheet headers): Date Received, From Email, From Name, Subject, Snippet, Category, Message ID, Thread ID, Logged At.
  4. Map each field to the corresponding expression from the template.
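To make the mapping concrete, here is a plain-JavaScript sketch of what those nine expressions do, using the sample payloads shown in steps 1 and 2. The function name `buildRow` is hypothetical, and the template's actual n8n expressions may differ in detail:

```javascript
// Sketch of the Set node's field mappings as a plain function.
// `email` mimics the Gmail Trigger output, `ai` the OpenAI response;
// the keys match the Google Sheet headers exactly.
function buildRow(email, ai, loggedAt) {
  const sender = (email.from && email.from.value && email.from.value[0]) || {};
  return {
    'Date Received': email.date,
    'From Email': sender.address || '',
    'From Name': sender.name || '',
    'Subject': email.subject,
    'Snippet': email.snippet,
    'Category': ai.choices[0].message.content.trim(),
    'Message ID': email.id,
    'Thread ID': email.threadId,
    'Logged At': loggedAt,
  };
}
```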

After this node, the data looks like this — a flat object ready to insert into Sheets:

{
  "Date Received": "Fri, 03 Apr 2026 09:14:22 -0500",
  "From Email": "james.carter@outlook.com",
  "From Name": "James Carter",
  "Subject": "Re: Invoice #4821 - Payment Confirmation",
  "Snippet": "Hi Sarah, just confirming that we processed payment this morning...",
  "Category": "Billing",
  "Message ID": "18f3a1c9b4d2e7a5",
  "Thread ID": "18f3a1c9b4d2e7a5",
  "Logged At": "2026-04-03 09:14:31"
}
📌

The Logged At field uses $now.format('yyyy-MM-dd HH:mm:ss') — this records when n8n processed the email, which may be up to a minute after the email arrived. If you need the exact arrival time, use Date Received for sorting instead.

4 Log to Google Sheets (Append Row)

The final node appends the formatted row to your Email Log sheet. This is a one-way write — it never reads from or overwrites existing rows.

To configure it:

  1. Add a Google Sheets node.
  2. Set Operation to Append Row.
  3. Connect your Google Sheets OAuth2 credential.
  4. In the Document field, paste your spreadsheet ID (found in the URL: docs.google.com/spreadsheets/d/YOUR_ID/edit).
  5. Set Sheet Name to Email Log.
  6. Under Columns, switch to Define Below and map each column name to its corresponding expression.
💡

Tip: Make sure your Google Sheet already has a header row with the exact column names listed above before activating the workflow. If the sheet is empty when the first row comes in, Google Sheets may treat the data row as headers.

The Data Structure — Your Email Log Sheet

Create a new Google Sheet named Email Log and add these column headers in row 1 before activating the workflow. The column names must match exactly — including capitalization and spaces.

Column        | Type | Example                                   | Description
Date Received | Text | Fri, 03 Apr 2026 09:14:22 -0500           | Raw date header from the email (RFC 2822 format)
From Email    | Text | james.carter@outlook.com                  | Sender’s email address
From Name     | Text | James Carter                              | Sender’s display name (may be blank if not set)
Subject       | Text | Re: Invoice #4821 – Payment Confirmation  | Full email subject line
Snippet       | Text | Hi Sarah, just confirming payment…        | First ~100 characters of the email body (Gmail preview)
Category      | Text | Billing                                   | AI-assigned category from OpenAI
Message ID    | Text | 18f3a1c9b4d2e7a5                          | Unique Gmail message ID — useful for deduplication
Thread ID     | Text | 18f3a1c9b4d2e7a5                          | Gmail thread ID — groups related messages together
Logged At     | Text | 2026-04-03 09:14:31                       | Timestamp when n8n processed this email (in your n8n instance’s timezone)

Here’s what a few rows look like in practice:

Date Received             | From Email                | From Name       | Subject                    | Category
Fri, 03 Apr 2026 09:14:22 | james.carter@outlook.com  | James Carter    | Re: Invoice #4821          | Billing
Fri, 03 Apr 2026 10:02:45 | emily.rodriguez@gmail.com | Emily Rodriguez | Question about Pro plan    | Sales
Fri, 03 Apr 2026 11:33:08 | noreply@weeklydigest.com  | Weekly Digest   | Your top stories this week | Newsletter
Fri, 03 Apr 2026 14:17:52 | michael.chen@techco.com   | Michael Chen    | App is throwing 500 errors | Support
📌

Column names are case-sensitive. If your header says from email (lowercase) but n8n maps to From Email, the append will create a new column instead of filling the existing one. Double-check your headers before your first test run.
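As the table notes, Message ID is the natural deduplication key. The unread filter normally prevents double-processing, but if you ever re-run the workflow over older mail, a small Code node like this could drop rows that were already logged. This is a hypothetical sketch: it assumes a preceding node has read the existing Message IDs from the sheet into `loggedIds`:

```javascript
// Hypothetical dedup step: drop any email whose Gmail Message ID
// already appears in the sheet. `items` mimics $input.all(), and
// `loggedIds` would come from a prior "read sheet" node.
function dropAlreadyLogged(items, loggedIds) {
  const seen = new Set(loggedIds);
  return items.filter(item => !seen.has(item.json['Message ID']));
}
```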

Full System Flow

Here’s the end-to-end picture showing every data hop from Gmail inbox to spreadsheet row:

  GMAIL INBOX
       |
       |  New unread email arrives
       v
  +-----------------------------------+
  |  Gmail Trigger                    |
  |  Polls every 1 minute             |
  |  Filter: unread only              |
  |  Output: id, threadId, subject,   |
  |  snippet, date, from{}            |
  +----------------+------------------+
                   |
                   |  subject + from + snippet
                   v
  +-----------------------------------+
  |  Classify Email with AI           |
  |  POST to OpenAI GPT-4o-mini       |
  |  System: "Classify into:          |
  |  Support / Sales / Billing /      |
  |  Newsletter / Spam / Other"       |
  |  Output: choices[0].message       |
  |          .content = "Billing"     |
  +----------------+------------------+
                   |
                   |  category string + original fields
                   v
  +-----------------------------------+
  |  Format Row Data (Set Node)       |
  |  Maps 9 fields into flat object   |
  |  Adds "Logged At" timestamp       |
  +----------------+------------------+
                   |
                   |  clean row object
                   v
  +-----------------------------------+
  |  Log to Google Sheets             |
  |  Operation: Append Row            |
  |  Sheet: "Email Log"               |
  |  9 columns mapped                 |
  +-----------------------------------+
                   |
                   v
  GOOGLE SHEETS -- NEW ROW ADDED

Testing Your Workflow

Before activating, run a manual test to confirm every node is working correctly:

  1. Make sure you have at least one unread email in your Gmail inbox (send yourself a test message if needed).
  2. Click Test Workflow in n8n. The Gmail Trigger will fetch the most recent unread message.
  3. Verify the Classify Email with AI node output — you should see a choices[0].message.content field with one of the six category names.
  4. Check the Format Row Data node output — confirm all nine fields are populated correctly.
  5. Open your Google Sheet and confirm a new row appeared at the bottom of Email Log.
  6. Activate the workflow using the toggle in the top-right corner of the editor.
Troubleshooting

Problem | Likely Cause | Fix
Gmail Trigger fires but returns no items | No unread emails in inbox at test time | Send yourself a test email, mark it unread, then re-run
OpenAI returns a 401 error | API key not set or expired | Check your OpenAI credential in n8n Settings → Credentials
Google Sheets creates a new column instead of filling an existing one | Column name mismatch (case or spacing) | Compare your sheet header row to the field names in the Set node exactly
Category field shows the full OpenAI response object, not just the word | Wrong expression path | Use $json.choices[0].message.content.trim() in the Set node
Workflow runs on already-processed emails | Trigger not filtering by unread | Enable the Read Status: Unread filter in the Gmail Trigger

Frequently Asked Questions

Will this workflow mark my emails as read?

No — this workflow is read-only from Gmail’s perspective. It fetches email metadata but doesn’t modify, move, label, or mark any messages. Your inbox stays exactly as it is; the workflow just mirrors information into your spreadsheet.

Can I monitor multiple Gmail accounts?

Yes. Duplicate the workflow and connect a different Gmail credential in the trigger node of each copy. Each workflow operates independently and can log to the same sheet or separate sheets depending on your preference.

How accurate is the AI categorization?

GPT-4o-mini is very accurate for email classification — in testing it correctly categorizes over 95% of emails on the first try. The trickiest cases are emails that straddle categories (e.g., a sales email that’s also a billing question). You can improve accuracy by customizing the system prompt to add examples or additional categories specific to your business.

What happens if OpenAI is down or returns an error?

By default, n8n will mark that execution as failed and stop processing that email. The email won’t be logged to the sheet. To handle this gracefully, you can add an error branch after the HTTP Request node that falls back to writing “Uncategorized” — or enable n8n’s built-in Error Workflow setting to get notified when runs fail.
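If you go the error-branch route, the fallback logic is simple. A sketch of what that Code node might contain (the function name `safeCategory` is hypothetical; the shape of `response` follows the OpenAI payload shown in step 2):

```javascript
// Hypothetical fallback: if the OpenAI call failed or returned an
// unexpected shape, log the email anyway as "Uncategorized" instead
// of failing the whole execution.
function safeCategory(response) {
  try {
    const text = response.choices[0].message.content.trim();
    return text.length > 0 ? text : 'Uncategorized';
  } catch (e) {
    // Missing choices/message/content all land here
    return 'Uncategorized';
  }
}
```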

Can I add more categories beyond the default six?

Absolutely. Edit the system prompt in the HTTP Request node to include your custom categories (e.g., “Partnership”, “Press”, “Internal”). Keep the instruction to reply with only the category name, and update your Google Sheets dropdown validation to match if you’re using data validation on that column.

Does this work with Google Workspace accounts?

Yes. The Gmail OAuth2 connection in n8n supports both personal Gmail accounts and Google Workspace accounts. If you’re on a Workspace account, you may need your admin to approve the n8n OAuth app in the Google Admin Console — check with your IT team if you see a permissions error during setup.


🚀 Get the Gmail to Google Sheets AI Tagging Template

Skip the configuration work — get the ready-to-import n8n workflow JSON, a step-by-step Setup Guide PDF, and a Credentials Guide PDF that walks you through every API key you’ll need. Be up and running in under 10 minutes.

Get the Template →

Instant download · Works on n8n Cloud and self-hosted

What’s Next?

  • Auto-reply by category: Add a Gmail “Send Email” node after the Sheets append that fires a canned reply for Support emails, acknowledging receipt within seconds.
  • Slack alerts for high-priority categories: Route Sales and Support emails to a dedicated Slack channel so your team sees them instantly without checking their inbox.
  • Weekly summary report: Add a second n8n workflow on a schedule that reads the Email Log sheet, groups by Category, and emails you a Monday-morning breakdown.
  • CRM sync: For emails categorized as Sales, push the sender’s name and email to HubSpot or Pipedrive as a new contact or deal automatically.
n8n
Gmail
Google Sheets
OpenAI
email automation
AI categorization
workflow automation

How to Auto-Send Weekly Google Sheets Reports via Email with n8n

Every Monday morning, someone on your team is opening a spreadsheet, copying numbers into an email, formatting a table by hand, and hitting send — only to do it all over again next week. It’s tedious, error-prone, and completely unnecessary. With n8n, you can build a workflow that reads your Google Sheet, aggregates the data, builds a polished HTML email, and sends it automatically — without touching a single cell yourself.

In this guide you’ll build a five-node n8n workflow that fires every Monday at 9 AM, pulls rows from a Google Sheets sales tracker, calculates totals and averages, identifies the top performer, and emails a beautiful formatted report to your team. No code experience required — just follow the steps.

Prefer to skip the setup? Grab the ready-made template and be up and running in under 10 minutes.

What You’ll Build

  1. A Google Sheet acts as your data source — your team logs deals, tasks, or KPIs there throughout the week as usual.
  2. Every Monday at 9 AM, n8n automatically reads every row in the sheet.
  3. A Code node crunches the numbers: total sales, deal count, average deal size, and top performer.
  4. A Set node assembles a clean, branded HTML email with a summary card and a full data table.
  5. Gmail delivers the finished report to every recipient on your list — no human involved.

How It Works — The Big Picture

The workflow is a straight linear pipeline: trigger → read → process → format → send. There are no branches or conditionals, which makes it easy to understand and extend later.

+----------------------------------------------------------------------+
|  WEEKLY GOOGLE SHEETS REPORT -- n8n WORKFLOW                        |
|                                                                      |
|  [Schedule Trigger]  ->  [Read Google Sheet]  ->  [Aggregate Data]  |
|   Every Mon 9 AM          All rows read           Totals, avg,      |
|                            from "Weekly Sales"     top performer    |
|                                                         |           |
|                                              [Build Email Content]  |
|                                               HTML email assembled  |
|                                                         |           |
|                                              [Send Email Report]    |
|                                               Delivered via Gmail   |
+----------------------------------------------------------------------+

What You’ll Need

  • n8n — self-hosted or n8n Cloud (free trial available at n8n.io)
  • Google account — for Google Sheets and Gmail (both use OAuth2)
  • A Google Sheet — with columns: Date, Sales Rep, Client, Amount, Status

Estimated build time: 25–40 minutes from scratch, or under 10 minutes with the ready-made template.

Building the Workflow — Step by Step

1 Schedule Trigger

This is the starting gun. The Schedule Trigger fires the workflow on a cron schedule — no manual intervention needed. You’ll configure it to run every Monday at 9 AM.

  1. In your n8n canvas, click Add first step and search for Schedule Trigger.
  2. Under Trigger Rules, click Add Rule and set mode to Custom (Cron Expression).
  3. Enter the expression: 0 9 * * 1 — this means “9:00 AM every Monday”.
  4. Set the Timezone to your local timezone (e.g., America/New_York).
💡

Tip: Want a daily report instead of weekly? Change the cron to 0 9 * * *. For weekdays only, use 0 9 * * 1-5.

2 Read Google Sheet

This node connects to your Google account and reads every data row from the specified sheet. It outputs one n8n item per row.

  1. Add a Google Sheets node after the trigger.
  2. Under Credential, click Create new and complete the Google OAuth2 sign-in flow.
  3. Set Operation to Read Rows.
  4. Paste your Spreadsheet ID (found in the Google Sheets URL between /d/ and /edit).
  5. Set Sheet Name to Weekly Sales (or your actual tab name).
  6. Under Options, ensure First Row Contains Column Names is enabled.
Each row comes out as its own n8n item. A sample item looks like this:

{
  "Date": "03/31/2026",
  "Sales Rep": "James Carter",
  "Client": "Apex Digital LLC",
  "Amount": "4250.00",
  "Status": "Closed Won"
}
💡

Tip: Column names must match exactly — including capitalization — because the Code node references them by name.

3 Aggregate Data (Code node)

This Code node runs once across all items from the sheet. It calculates totals, finds the top performer, and builds the HTML table rows for the email body.

  1. Add a Code node and set Mode to Run Once for All Items.
  2. Paste the following JavaScript:
// Grab every item (one per sheet row) and unwrap the JSON payloads
const rows = $input.all();
const data = rows.map(r => r.json);

// Guard against an empty sheet so the report never renders blank
if (data.length === 0) {
  return [{ json: { error: 'No data found', dealCount: 0 } }];
}

// Core metrics: total revenue, deal count, average deal size
const totalSales = data.reduce((sum, r) => sum + parseFloat(r['Amount'] || 0), 0);
const count = data.length;
const avg = count > 0 ? (totalSales / count).toFixed(2) : '0.00';

// "Top performer" here means the row with the single largest deal,
// not the rep with the highest summed total
const topRep = data.reduce((best, r) => {
  return parseFloat(r['Amount'] || 0) > parseFloat(best['Amount'] || 0) ? r : best;
}, data[0]);

// Pre-render one <tr> per sheet row for the HTML email table
const tableRows = data.map(r => `
  <tr>
    <td style="padding:8px 12px;border-bottom:1px solid #e2e8f0;">${r['Date'] || ''}</td>
    <td style="padding:8px 12px;border-bottom:1px solid #e2e8f0;">${r['Sales Rep'] || ''}</td>
    <td style="padding:8px 12px;border-bottom:1px solid #e2e8f0;">${r['Client'] || ''}</td>
    <td style="padding:8px 12px;border-bottom:1px solid #e2e8f0;font-weight:600;">$${parseFloat(r['Amount'] || 0).toFixed(2)}</td>
    <td style="padding:8px 12px;border-bottom:1px solid #e2e8f0;">${r['Status'] || ''}</td>
  </tr>
`).join('');

// Human-readable date for the report header, e.g. "Monday, April 6, 2026"
const reportDate = new Date().toLocaleDateString('en-US', {
  weekday: 'long', year: 'numeric', month: 'long', day: 'numeric'
});

// Single output item carrying everything the email template needs
return [{
  json: { totalSales: totalSales.toFixed(2), dealCount: count, avgDeal: avg,
    topRep: topRep['Sales Rep'] || 'N/A',
    topRepAmount: parseFloat(topRep['Amount'] || 0).toFixed(2),
    tableRows, reportDate }
}];
💡

Tip: To filter only “Closed Won” deals, add const closedWon = data.filter(r => r['Status'] === 'Closed Won'); and use closedWon instead of data in the reduce calls.
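If the sheet contained only the single sample row shown earlier, the node's output would look like this (`tableRows` is abbreviated here, and `reportDate` reflects whichever day the workflow runs):

```json
{
  "totalSales": "4250.00",
  "dealCount": 1,
  "avgDeal": "4250.00",
  "topRep": "James Carter",
  "topRepAmount": "4250.00",
  "tableRows": "<tr>...one <tr> block per sheet row...</tr>",
  "reportDate": "Monday, April 6, 2026"
}
```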

4 Build Email Content (Set node)

The Set node assembles two fields: the email subject line and the full HTML email body, using n8n expressions to inject the aggregated values.

  1. Add a Set node and switch mode to Manual Mapping.
  2. Add a field named emailSubject and set its value (in Expression mode) to: Weekly Sales Report – {{ $now.format('MMMM d, yyyy') }}
  3. Add a field named emailBody and paste in your HTML email template using {{ $json.totalSales }} expressions.
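The guide doesn't reproduce the full template, but a minimal version you could paste into emailBody and extend might look like this. The field names match the Aggregate Data node's output; the styling is illustrative, and inline styles are used because most email clients ignore `<style>` blocks:

```html
<div style="font-family:Arial,sans-serif;max-width:640px;margin:0 auto;">
  <h2 style="color:#1a202c;">Weekly Sales Report</h2>
  <p style="color:#4a5568;">{{ $json.reportDate }}</p>

  <p>
    <strong>Total sales:</strong> ${{ $json.totalSales }} &middot;
    <strong>Deals:</strong> {{ $json.dealCount }} &middot;
    <strong>Avg deal:</strong> ${{ $json.avgDeal }} &middot;
    <strong>Top performer:</strong> {{ $json.topRep }} (${{ $json.topRepAmount }})
  </p>

  <table style="border-collapse:collapse;width:100%;font-size:14px;">
    <tr style="background:#f7fafc;text-align:left;">
      <th style="padding:8px 12px;">Date</th>
      <th style="padding:8px 12px;">Sales Rep</th>
      <th style="padding:8px 12px;">Client</th>
      <th style="padding:8px 12px;">Amount</th>
      <th style="padding:8px 12px;">Status</th>
    </tr>
    {{ $json.tableRows }}
  </table>
</div>
```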

5 Send Email Report (Gmail node)

The final node picks up the subject and body and delivers the email via your connected Gmail account.

  1. Add a Gmail node and connect your Gmail account via OAuth2.
  2. Set Operation to Send.
  3. In To, enter your recipient email(s), comma-separated.
  4. Set Subject to ={{ $json.emailSubject }} and Email Type to HTML.
  5. Set Message to ={{ $json.emailBody }}.

The Data Structure

Your Google Sheet needs a tab named Weekly Sales with these exact column headers:

Column    | Type                 | Example          | Description
Date      | Text / Date          | 03/31/2026       | When the deal was logged
Sales Rep | Text                 | James Carter     | Team member who owns this deal
Client    | Text                 | Apex Digital LLC | Company or contact name
Amount    | Number (no $ symbol) | 4250.00          | Dollar value of the deal
Status    | Text                 | Closed Won       | Current deal status
📌

Important: Store the Amount column as plain numbers without currency symbols or commas. parseFloat() will return NaN for values like $4,250.00, breaking your totals.
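If you can't control how amounts get entered, a defensive parse in the Aggregate Data node sidesteps the problem. A small helper you could swap in for the bare parseFloat calls (the name `toNumber` is hypothetical):

```javascript
// Strip currency symbols, commas, and whitespace before parsing,
// so "$4,250.00" and "4250.00" both come out as 4250.
function toNumber(value) {
  const n = parseFloat(String(value ?? '').replace(/[$,\s]/g, ''));
  return Number.isFinite(n) ? n : 0;
}
```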

Testing Your Workflow

  1. Make sure your Google Sheet has at least 2–3 rows of test data with all five columns filled in.
  2. Open the workflow in n8n and click Test workflow. This manually triggers the workflow regardless of the schedule.
  3. Watch each node light up green as it executes. Click nodes to inspect their output.
  4. Check your Gmail inbox — a formatted report email should arrive within a few seconds.
  5. Once confirmed working, click Save and toggle the workflow Active.
Troubleshooting

Problem | Likely Cause | Fix
Google Sheets node returns no rows | Wrong sheet name or spreadsheet ID | Copy the ID from the URL; ensure the tab name matches exactly (case-sensitive)
Totals show as 0 or NaN | Amount column contains $ symbols or commas | Remove formatting from the Amount column
Email not received | Gmail OAuth token expired or wrong recipient | Re-authorize the Gmail credential; double-check the To field
Workflow doesn’t fire on Monday | Timezone mismatch | Set timezone in the Schedule Trigger to match your local time
HTML renders as plain text | Gmail node Email Type set to Plain Text | Change Email Type to HTML in the Gmail node settings

Frequently Asked Questions

Can I send the report to multiple people?

Yes. In the Gmail node’s To field, enter comma-separated email addresses. You can also use the CC and BCC fields under Options.

What if my sheet has hundreds of rows?

The Google Sheets node reads all rows in a single API call, and the Code node processes them all at once. For very large sheets (5,000+ rows), add a date filter node after the Google Sheets node to limit processing to the current week’s data.

Can I use this for something other than sales data?

Absolutely. The workflow is a general-purpose “read sheet → aggregate → email” pipeline. Just rename the columns in your sheet and update the field references in the Code node. A project hours report might use Employee, Project, Hours, and Week columns instead.

Does the workflow clear the sheet data after sending?

No — by default it’s read-only and keeps all rows. If you want the report to cover only the previous week’s entries, add a date filter in the Code node, for example: const sevenDaysAgo = new Date(Date.now() - 7 * 24 * 60 * 60 * 1000); const thisWeek = data.filter(r => new Date(r['Date']) >= sevenDaysAgo); and then use thisWeek in place of data.

Is it safe to connect my Gmail account to n8n?

Yes — n8n uses Google’s official OAuth2 flow. Your credentials are stored encrypted in n8n’s database and never exposed to third parties. If you’re self-hosting n8n, they stay entirely on your own server. You can revoke access at any time from your Google Account security settings.


Get the Weekly Report Template

Skip the build — get the complete n8n workflow JSON, a Setup Guide PDF, and a Credentials Guide PDF. Everything you need to go from zero to automated reports in under 10 minutes.

Get the Template

Instant download · Works on n8n Cloud and self-hosted

What’s Next?

  • Add a Slack notification: Drop a Slack node after the Gmail node to post a summary card to your #sales channel at the same time the email goes out.
  • Store historical summaries: After the Aggregate Data node, add a Google Sheets node to write weekly totals to a separate “History” tab.
  • Conditional alerts: Add an IF node that sends a different email when total sales miss the week’s target.
  • Multi-team reports: Use a Switch node to route rows by region and send tailored reports to each team lead.
n8n
Google Sheets
Gmail
automation
weekly report
email automation
scheduled workflow