How to Build an AI Chatbot with Long-Term Memory Using n8n and Google Docs

Ever built a chatbot that forgets everything the moment the conversation ends? You ask your AI assistant to remember your coffee order preference, and five minutes later it has no idea you’re an oat-milk double-shot latte person. It’s frustrating, and it makes the experience feel less intelligent than it should be.

Here’s the good news: you can fix this with n8n. This workflow gives your AI chatbot true persistent memory using Google Docs as a beautifully simple, completely free knowledge store. No database setup. No complex infrastructure. Just your chatbot learning about users, remembering details, and getting smarter with every conversation.

Ready to build this? You can grab the complete ready-to-import workflow with all the setup instructions at EasyWorkflows. Import in seconds and start building immediately.

What You’ll Build

This workflow creates an intelligent chatbot experience where:

  1. Users send messages via chat trigger (or Telegram)
  2. The workflow instantly retrieves all previous memories and notes about that user from Google Docs
  3. An AI agent (GPT-4o-mini) reads the incoming message alongside historical context
  4. The AI thinks about what to remember, what to save, and how to respond
  5. Important facts get saved to the “Long-Term Memories” doc for persistence
  6. Temporary notes get saved to the “Notes” doc for session context
  7. The response goes back to the user via Telegram, chat interface, or both

How It Works — The Big Picture

Let me show you the architecture before we dive into the build:

┌─────────────────────────────────────────────────────────────────┐
│                         USER MESSAGE                             │
│                  (Chat Trigger / Telegram)                       │
└────────────────────────────┬────────────────────────────────────┘
                             │
                ┌────────────┴────────────┐
                │                         │
      ┌─────────▼──────────┐   ┌─────────▼──────────┐
      │   Load Memories    │   │    Load Notes      │
      │  from Google Docs  │   │  from Google Docs  │
      │   (Memory Doc)     │   │   (Notes Doc)      │
      └─────────┬──────────┘   └─────────┬──────────┘
                │                         │
                └────────────┬────────────┘
                             │
                      ┌──────▼──────┐
                      │    Merge    │
                      │ Memories &  │
                      │    Notes    │
                      └──────┬──────┘
                             │
                      ┌──────▼──────┐
                      │  Combine    │
                      │  Context    │
                      └──────┬──────┘
                             │
        ┌────────────────────┼────────────────────┐
        │                    │                    │
   ┌────▼────┐      ┌────────▼────────┐     ┌────▼────┐
   │ Session │      │  AI Memory      │     │  GPT-   │
   │ Memory  │      │  Agent (with    │     │ 4o Mini │
   │ Buffer  │      │  System Prompt) │     │         │
   └────┬────┘      └────────┬────────┘     └────┬────┘
        │                    │                    │
        └────────────────────┼────────────────────┘
                             │
                ┌────────────┴────────────┐
                │                         │
      ┌─────────▼────────┐    ┌──────────▼──────────┐
      │  Save Memory to  │    │   Save Note to      │
      │   Google Docs    │    │   Google Docs       │
      │  (Triggered by   │    │  (Triggered by AI)  │
      │   AI decision)   │    │                     │
      └─────────┬────────┘    └──────────┬──────────┘
                │                        │
      ┌─────────▼────────────────────────▼──────────┐
      │              Response Output                │
      │      (Format Chat + Send Telegram)          │
      └─────────────────────────────────────────────┘

What You’ll Need

Before you start, gather these prerequisites:

  • n8n account (n8n Cloud or self-hosted) — free tier works fine
  • OpenAI API key with GPT-4o-mini access (typical conversations cost well under a cent)
  • Google account with Google Docs access (free)
  • Telegram account (optional, but included in this workflow)
  • Telegram Bot Token from BotFather (optional, only if using Telegram output)
  • Basic n8n knowledge — familiarity with nodes, connections, and expressions is helpful but not required

Time estimate: 25–35 minutes for setup and first conversation.

Setting Up Your Google Docs

Google Docs is the memory backbone of this workflow. Think of it as a simple, searchable database the AI can read from and write to. Here’s why it works so well:

  • Free — You don’t pay per request or storage
  • Human-readable — You can open it anytime and read what the AI has learned
  • Searchable — The built-in Find function lets you locate any memory instantly
  • OAuth2-compatible — n8n integrates seamlessly
  • Shareable — If you want to audit or share user data with a team member, it’s just a Google Doc link

Create Two Google Docs

Doc 1: “Long-Term Memories”

This stores facts that persist across conversations. Examples:

[04/08/2026] - Name: Sarah
[04/08/2026] - Preference: Oat milk lattes, no sugar
[04/08/2026] - Job: Product Manager at TechCorp
[04/08/2026] - Timezone: America/Chicago
[04/08/2026] - Goal: Learn n8n automation

Doc 2: “Notes”

This stores temporary reminders, action items, and session notes:

[04/08/2026 14:32:15] - User asked about workflow pricing
[04/08/2026 14:33:42] - Follow up: send template examples
[04/08/2026 14:35:08] - User mentioned bandwidth limits in n8n Cloud

You’ll get the document IDs after you create them. In Google Docs, the URL looks like:

https://docs.google.com/document/d/YOUR_GOOGLE_DOC_ID/edit

Copy that ID—you’ll paste it into the workflow nodes.
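If you want to script this step, the ID can be pulled out of the URL programmatically. A minimal sketch (the function name `extractDocId` is mine, not part of the workflow):

```javascript
// Extract the document ID from a Google Docs URL.
// The ID is the path segment between /document/d/ and the next slash.
function extractDocId(url) {
  const match = url.match(/\/document\/d\/([a-zA-Z0-9_-]+)/);
  return match ? match[1] : null;
}

console.log(extractDocId(
  "https://docs.google.com/document/d/1AbC_def-123/edit"
)); // → 1AbC_def-123
```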

Building the Workflow — Step by Step

Now let’s build this. I’ll walk through each node with configurations and tips.

1 Chat Trigger

What it does: Listens for incoming messages from the n8n chat interface or external webhooks. This is your entry point for user input.

How to configure:

  • In n8n, add a “Chat Trigger” node (n8n-nodes-base.chatTrigger)
  • Set chatTriggerType to "webhook"
  • Save and deploy the workflow
  • The node will generate a webhook URL automatically

Data that flows out:

{
  "chatId": "user-session-123",
  "message": "Hey, can you remember I prefer oat milk lattes?",
  "sessionId": "session-456"
}

Tip: The Chat Trigger can receive messages from n8n’s web chat UI, Telegram, Slack, or any HTTP POST request. In this workflow, it fans out to two parallel Google Docs fetch operations.
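To exercise the trigger from outside the n8n UI, you can POST the same payload shape shown above to the generated webhook URL. A hedged sketch (the URL is a placeholder; requires Node 18+ for the built-in fetch):

```javascript
// Sketch: send a test message to the Chat Trigger webhook.
// WEBHOOK_URL is a placeholder — use the URL your deployed workflow generates.
const WEBHOOK_URL = "https://your-n8n-instance/webhook/chat";

const payload = {
  chatId: "user-session-123",
  message: "Hey, can you remember I prefer oat milk lattes?",
  sessionId: "session-456",
};

// Uncomment to actually send the request:
// fetch(WEBHOOK_URL, {
//   method: "POST",
//   headers: { "Content-Type": "application/json" },
//   body: JSON.stringify(payload),
// }).then((res) => res.json()).then(console.log);
```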

2 Load Memories from Google Docs

What it does: Retrieves the Long-Term Memories document. This gives the AI context about the user’s past.

How to configure:

  • Add a “Google Docs” node
  • Authenticate with your Google account (OAuth2)
  • Set resource to "document"
  • Set documentId to YOUR_GOOGLE_DOC_ID_MEMORIES
  • Set option to "getText"

Data that flows out:

{
  "data": "[04/08/2026] - Name: Sarah\n[04/08/2026] - Preference: Oat milk lattes, no sugar\n[04/08/2026] - Job: Product Manager at TechCorp"
}

Tip: This node runs in parallel with the Notes fetch, so both operations happen simultaneously. The latency is usually under 500ms per fetch.

3 Load Notes from Google Docs

What it does: Retrieves the Notes document. This provides immediate session context and recent reminders.

How to configure:

  • Add another “Google Docs” node (separate from the Memories node)
  • Authenticate with the same Google account
  • Set resource to "document"
  • Set documentId to YOUR_GOOGLE_DOC_ID_NOTES
  • Set option to "getText"

Data that flows out:

{
  "data": "[04/08/2026 14:32:15] - Asked about workflow pricing\n[04/08/2026 14:33:42] - Follow up: send template examples"
}

Tip: If the Notes doc doesn’t exist yet or is empty, the node will return an empty string. That’s fine—the workflow handles it gracefully.
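If you want to be explicit about that fallback, a small Code node could normalize empty context before it reaches the agent. A sketch (the helper `safeContext` is illustrative, not a node in this workflow):

```javascript
// Defensive fallback: treat a missing or empty doc as "no notes yet"
// so the AI prompt never receives undefined or an empty string.
function safeContext(raw, label) {
  const text = (raw || "").trim();
  return text.length > 0 ? text : `(no ${label} yet)`;
}

console.log(safeContext("", "notes"));         // → (no notes yet)
console.log(safeContext("[note A]", "notes")); // → [note A]
```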

4 Merge Memories & Notes

What it does: Combines the two separate outputs into a single data structure. This merge prepares the data for aggregation.

How to configure:

  • Add a “Merge” node (n8n-nodes-base.merge)
  • Set mode to "combine"
  • Set combineBy to "id"
  • Connect Memories output to input 0
  • Connect Notes output to input 1

Data that flows out:

[
  { "json": { "data": "[memories...]", "type": "memories" } },
  { "json": { "data": "[notes...]", "type": "notes" } }
]

Tip: The Merge node in “combine” mode takes multiple inputs and creates an array. This makes it easy for the next node to iterate over both data sources.

5 Combine Context

What it does: Aggregates the memories and notes into a single context object that the AI agent will receive.

How to configure:

  • Add an “Aggregate” node (n8n-nodes-base.aggregate)
  • Set mode to "combine"
  • Leave other options as default

Data that flows out:

{
  "memory_context": "[all memories]",
  "notes_context": "[all notes]",
  "combined": true
}

Tip: Think of Aggregate as the “summary” step. It pools all items from the previous step so the AI Agent receives one clean input.
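What Merge and Aggregate accomplish together can be sketched in plain code: two labeled items become one context object for the agent. The field names match the example outputs above; the `find` logic is my illustration of the pooling, not the nodes’ internals:

```javascript
// After Merge: an array of two labeled items (memories + notes).
const merged = [
  { json: { data: "[04/08/2026] - Name: Sarah", type: "memories" } },
  { json: { data: "[04/08/2026 14:32:15] - Asked about pricing", type: "notes" } },
];

// After Aggregate: one clean object the AI Agent receives.
const context = {
  memory_context: merged.find((i) => i.json.type === "memories").json.data,
  notes_context: merged.find((i) => i.json.type === "notes").json.data,
  combined: true,
};
```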

6 AI Memory Agent

What it does: This is the brain. The AI Agent reads the user’s message, your system prompt, and all the context from memories and notes. It decides what to respond, what to save, and which tools to call.

How to configure:

  • Add an “AI Tools Agent” node (@n8n/n8n-nodes-langchain.agent)
  • Set agentOptions.systemPrompt to the following:
You are an AI assistant with persistent memory capabilities. Your role is to:

1. MEMORY MANAGEMENT:
   - Extract and save important facts about the user for long-term recall
   - Format memories as: [DATE] - [TOPIC]: [DETAIL]
   - Save to 'Save Memory to Google Docs' when you learn something important

2. NOTE TAKING:
   - Capture actionable items, reminders, or temporary notes
   - Format notes as: [DATE] [TIME] - [NOTE]
   - Save to 'Save Note to Google Docs' when user mentions something to remember

3. CONTEXT AWARENESS:
   - Review loaded memories and notes before responding
   - Reference past conversations naturally
   - Acknowledge when you remember something about the user

4. RESPONSE GUIDELINES:
   - Be helpful, personable, and concise
   - If unsure, ask clarifying questions
   - Never make up information not in your context

Always use the provided tools to save important information.

Tool attachments: Connect the Session Memory Buffer, GPT-4o Mini LLM, and the two Save tools (Save Memory, Save Note). The agent will automatically call these tools based on its logic.

Data that flows out:

{
  "output": "Thanks Sarah! I've noted that you prefer oat milk lattes. I'll remember that for next time we chat. Is there anything else I can help with?",
  "tool_calls": [
    { "tool": "Save Memory to Google Docs", "input": { "topic": "Preference", "detail": "Oat milk lattes, no sugar" } }
  ]
}

Tip: The system prompt is where you define the AI’s personality and its rules for memory management. You can customize this heavily—make it more casual, more formal, more technical, whatever fits your use case.

7 GPT-4o Mini

What it does: The language model that powers the agent. GPT-4o-mini is fast, cheap (around $0.15 per million input tokens), and smart enough for most chatbot tasks.

How to configure:

  • Add an “OpenAI Chat Model” node (@n8n/n8n-nodes-langchain.lmChatOpenAi)
  • Authenticate with your OpenAI API key
  • Set model to "gpt-4o-mini"
  • Set temperature to 0.7 (keeps responses natural but consistent)

Data flow: The Agent sends prompts to this node, which returns generated text. The node is “read-only” from the Agent’s perspective—it doesn’t make decisions, just generates language.

Tip: If you want more creative responses, increase temperature to 0.9. If you want more deterministic responses, lower it to 0.3. For memory-critical tasks, I’d recommend 0.5–0.7.

8 Session Memory Buffer

What it does: Stores the last 10 messages in the conversation (configurable). This gives the AI short-term context within a single chat session, complementing the long-term Google Docs memory.

How to configure:

  • Add a “Memory Buffer Window” node (@n8n/n8n-nodes-langchain.memoryBufferWindow)
  • Set bufferSize to 10 (keeps the last 10 messages)

Data that flows out:

{
  "history": [
    { "role": "user", "content": "Hey, remember I prefer oat milk?" },
    { "role": "assistant", "content": "I'll remember that!" },
    ...
  ]
}

Tip: The Session Memory Buffer is separate from the long-term Google Docs memory. Use bufferSize 10–20 for natural conversations. If you set it higher (50+), you might hit token limits on OpenAI, increasing costs.
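The windowing behavior itself is simple to picture: keep only the last N messages so the prompt stays within the model’s token budget. A sketch of that idea (not the node’s actual implementation):

```javascript
// What a buffer window does: keep only the last N messages.
function windowed(history, bufferSize) {
  return history.slice(-bufferSize);
}

// Simulate a 25-message conversation.
const history = Array.from({ length: 25 }, (_, i) => ({
  role: i % 2 === 0 ? "user" : "assistant",
  content: `message ${i + 1}`,
}));

const recent = windowed(history, 10);
// recent now holds messages 16 through 25 only
```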

9 Save Memory to Google Docs

What it does: A Google Docs node that appends new long-term memories. When the AI Agent decides to save something important, it calls this tool.

How to configure:

  • Add a “Google Docs” node (different from the retrieval nodes)
  • Authenticate with your Google account
  • Set resource to "document"
  • Set documentId to YOUR_GOOGLE_DOC_ID_MEMORIES
  • Set option to "appendText"
  • Set text to: ={{ "[" + new Date().toLocaleDateString() + "] - " + $json.input.topic + ": " + $json.input.detail }}

Example append:

[04/08/2026] - Preference: Oat milk lattes, no sugar

Tip: The expression creates a timestamp automatically. Each memory is on a new line, making the doc easy to scan.
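The expression in the node is plain JavaScript, so you can test the format outside n8n. This standalone version (the function name `memoryLine` is mine) mirrors what the node evaluates:

```javascript
// Mirrors the Save Memory node's expression:
// "[" + date + "] - " + topic + ": " + detail
function memoryLine(topic, detail, date = new Date()) {
  return `[${date.toLocaleDateString()}] - ${topic}: ${detail}`;
}

// The exact date string depends on your locale settings.
console.log(memoryLine("Preference", "Oat milk lattes, no sugar"));
```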

10 Save Note to Google Docs

What it does: A Google Docs node for temporary notes. When the AI wants to save a reminder or action item, it calls this.

How to configure:

  • Add another “Google Docs” node (separate from Save Memory)
  • Authenticate with your Google account
  • Set resource to "document"
  • Set documentId to YOUR_GOOGLE_DOC_ID_NOTES
  • Set option to "appendText"
  • Set text to: ={{ "[" + new Date().toLocaleDateString() + " " + new Date().toLocaleTimeString() + "] - " + $json.input }}

Example append:

[04/08/2026 14:32:15] - User asked about Telegram integration

Tip: This node includes both date and time, since notes are usually time-sensitive. The Notes doc grows quickly, so plan to clear it out weekly or monthly depending on volume.
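That cleanup could itself be automated in a scheduled Code node. A sketch of the pruning logic, assuming each line starts with the timestamp stamp the Save Note expression produces (the function `pruneNotes` is illustrative):

```javascript
// Sketch: drop note lines older than maxAgeDays.
// Assumes lines begin with a "[M/D/YYYY HH:MM:SS]" stamp.
function pruneNotes(docText, maxAgeDays, now = new Date()) {
  const cutoff = now.getTime() - maxAgeDays * 24 * 60 * 60 * 1000;
  return docText
    .split("\n")
    .filter((line) => {
      const m = line.match(/^\[([^\]]+)\]/);
      if (!m) return true; // keep lines without a timestamp
      const stamp = new Date(m[1]);
      return isNaN(stamp) || stamp.getTime() >= cutoff;
    })
    .join("\n");
}
```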

11 Send Telegram Reply

What it does: Sends the AI’s response to Telegram. Optional, but great for testing or if you want users to chat via Telegram instead of the n8n web UI.

How to configure:

  • Add an “HTTP Request” node (n8n-nodes-base.httpRequest)
  • Set method to POST
  • Set url to: https://api.telegram.org/bot{{ $env.TELEGRAM_BOT_TOKEN }}/sendMessage
  • Add body parameters:
  • chat_id: YOUR_TELEGRAM_CHAT_ID
  • text: ={{ $json.output }}

Tip: Get your Telegram Bot Token from BotFather on Telegram. To find your Chat ID, message your bot and then call the Bot API’s getUpdates endpoint; the chat.id field appears in the response. If you’re not using Telegram, you can delete this node entirely and the workflow still works.
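For reference, here is the request the HTTP Request node assembles, written as a standalone sketch. The token and chat ID are placeholders you supply; `telegramRequest` is my helper name, not part of the workflow:

```javascript
// Sketch: build the Telegram sendMessage call the HTTP Request node makes.
function telegramRequest(botToken, chatId, text) {
  return {
    url: `https://api.telegram.org/bot${botToken}/sendMessage`,
    method: "POST",
    headers: { "Content-Type": "application/json" },
    body: JSON.stringify({ chat_id: chatId, text }),
  };
}

const req = telegramRequest("123:ABC", "987654321", "Hello from n8n!");
// req.url → https://api.telegram.org/bot123:ABC/sendMessage
```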

12 Format Chat Response

What it does: Formats the AI’s output for the n8n chat interface. This is optional if you’re only using Telegram, but it’s useful for testing.

How to configure:

  • Add a “Set” node (n8n-nodes-base.set)
  • Set mode to "map"
  • Add assignment: response = ={{ $json.output }}

Data that flows out:

{
  "response": "Thanks Sarah! I've noted that you prefer oat milk lattes. I'll remember that for next time."
}

Tip: This node is simple, but it’s important for keeping the chat interface happy. It ensures the response is in the expected format.

The Data Structure

Let’s look at how data flows through the system, using a real example with Sarah:

Initial state (empty memories and notes):

Google Docs - Long-Term Memories:
(empty)

Google Docs - Notes:
(empty)

Sarah sends her first message:

User: "Hi! I'm Sarah from Chicago. I work in product management and I love oat milk lattes."

The workflow retrieves (empty context):

memories_context: ""
notes_context: ""

AI Agent decides to save key facts:

Tool call: Save Memory to Google Docs
Input: { topic: "Name", detail: "Sarah" }
Input: { topic: "Timezone", detail: "America/Chicago" }
Input: { topic: "Job", detail: "Product Manager" }
Input: { topic: "Preference", detail: "Oat milk lattes" }

Google Docs after first conversation:

Google Docs - Long-Term Memories:
[04/08/2026] - Name: Sarah
[04/08/2026] - Timezone: America/Chicago
[04/08/2026] - Job: Product Manager
[04/08/2026] - Preference: Oat milk lattes

Google Docs - Notes:
[04/08/2026 14:35:22] - User wants to learn n8n automation

Sarah’s next message (tomorrow):

User: "Hey, can you help me with n8n workflows?"

The workflow now retrieves:

memories_context: "[04/08/2026] - Name: Sarah\n[04/08/2026] - Timezone: America/Chicago\n[04/08/2026] - Job: Product Manager\n[04/08/2026] - Preference: Oat milk lattes"

notes_context: "[04/08/2026 14:35:22] - User wants to learn n8n automation"

AI response:

Output: "Hey Sarah! Of course—I'd love to help with your n8n workflows. Given that you're a PM at TechCorp in Chicago, I'm guessing you're looking to automate some processes for your team. What kind of workflows are you thinking about?"

Notice how the AI naturally references Sarah by name, acknowledges her role, and connects her earlier interest in n8n to the current question. That’s persistent memory in action.

Full System Flow

Here’s the complete end-to-end journey:

┌─ Start: User sends message via Chat Trigger or Telegram
│
├─ Parallel: Load Memories from Google Docs
│  └─ Retrieve all historical facts about the user
│
├─ Parallel: Load Notes from Google Docs
│  └─ Retrieve session context and recent reminders
│
├─ Merge: Combine both memory sources
│  └─ Creates array: [memories_obj, notes_obj]
│
├─ Aggregate: Consolidate into single context
│  └─ Pools all items into one data structure
│
├─ AI Memory Agent (receives):
│  ├─ User message
│  ├─ All long-term memories
│  ├─ All notes from this session
│  ├─ Last 10 messages from Session Memory Buffer
│  ├─ System prompt (memory management rules)
│  └─ Connection to GPT-4o-mini and tools
│
├─ AI Decision Making:
│  ├─ Read context and user message
│  ├─ Generate response
│  ├─ Decide what facts to save (→ Save Memory tool)
│  └─ Decide what notes to capture (→ Save Note tool)
│
├─ Parallel: Save Memory to Google Docs
│  └─ Append new facts (if AI decided to save)
│
├─ Parallel: Save Note to Google Docs
│  └─ Append new notes (if AI decided to save)
│
├─ Output to Chat:
│  ├─ Format Chat Response → n8n Web UI
│  └─ Send Telegram Reply → User's Telegram
│
└─ Complete: User receives response & AI remembers

Testing Your Workflow

Now that your workflow is built, let’s test it thoroughly:

Test Plan

  1. Deploy the workflow and activate it in n8n
  2. Open the Chat Trigger and test with a simple message: "Hi, my name is Sarah and I like coffee."
  3. Check the Google Docs — The Memories doc should now have a new entry with Sarah’s name
  4. Send another message: "What do I like to drink?" The AI should reference the coffee preference from the previous message
  5. Test Telegram output (if enabled) by checking your Telegram chat — the response should arrive there too
  6. Clear the Notes doc and send a message with a reminder: "Remind me to follow up with the marketing team tomorrow." Check that the note appears in the Notes doc
  7. Simulate a delay — Wait 10 minutes, come back, and test that the AI still remembers Sarah. This confirms persistence

Troubleshooting Table

Issue: Chat Trigger shows “Webhook not working”
Likely cause: Workflow not saved or deployed
Fix: Click Save, then Deploy. Check that the webhook URL is active in the node.

Issue: Google Docs nodes fail with an auth error
Likely cause: OAuth2 credential not connected or expired
Fix: Re-authenticate the Google Docs credential. Grant permission for Docs read/write.

Issue: AI response is generic and doesn’t reference the user
Likely cause: Memories not being retrieved, or system prompt unclear
Fix: Check that the Load Memories node is pulling data. Review the system prompt in the AI Agent node for clarity.

Issue: Memory saves fail silently (no error, but doc not updated)
Likely cause: Google Doc ID is incorrect or the doc doesn’t exist
Fix: Verify the Document ID in the Save Memory and Save Note nodes. Double-check the URL in Google Docs.

Issue: Telegram message doesn’t send
Likely cause: Bot token invalid or chat ID missing
Fix: Confirm the bot token with BotFather. Test the chat ID manually using curl or Postman.

Issue: Workflow is slow (responses take more than about 5 seconds)
Likely cause: Large memory doc or network latency
Fix: Archive old memories to a separate doc. Consider pagination in future versions.

Frequently Asked Questions

How much data can Google Docs store?

Google Docs caps a single document at roughly 1.02 million characters (on the order of 150,000–200,000 words), not 50MB of text. For a personal chatbot, you’d still need years of daily conversations to hit that limit. For multi-user systems, consider archiving old memories after 6 months.

Why Google Docs instead of a real database?

Simplicity. No backend servers, no authentication headaches, no costs, and you can read/edit memories by hand. For production systems with thousands of users, you’d want PostgreSQL or MongoDB. For personal projects and small teams, Google Docs is perfect.

Can multiple users use the same workflow?

Yes, but you’d want to create separate memory docs per user (or per user-group). Otherwise, all memories get mixed together. A simple fix is to add a User ID prefix in each memory entry: [04/08/2026] [user-sarah] - Preference: Oat milk.
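That prefixing scheme is easy to sketch: tag each memory line with a user ID on write, and filter on read. The helper names (`taggedMemoryLine`, `memoriesFor`) are mine, for illustration:

```javascript
// Sketch: tag memories with a user ID so one shared doc can serve
// multiple users, then filter to one user's lines on retrieval.
function taggedMemoryLine(userId, topic, detail, date = new Date()) {
  return `[${date.toLocaleDateString()}] [user-${userId}] - ${topic}: ${detail}`;
}

function memoriesFor(docText, userId) {
  return docText
    .split("\n")
    .filter((line) => line.includes(`[user-${userId}]`))
    .join("\n");
}

const doc = [
  taggedMemoryLine("sarah", "Preference", "Oat milk"),
  taggedMemoryLine("ben", "Preference", "Black coffee"),
].join("\n");

// memoriesFor(doc, "sarah") returns only Sarah's line
```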

What if I want to use a different LLM instead of GPT-4o-mini?

Easy. Replace the “GPT-4o Mini” node with any other LLM node that n8n supports: Claude (Anthropic), Gemini (Google), Llama (Meta), or even local models. The system prompt stays the same. Just swap the node and re-authenticate.

How do I ensure memory data is private?

The memories are stored in your Google Docs, which means Google has access (per their privacy policy). If you need end-to-end encryption, deploy n8n self-hosted and use your own database with encryption at rest. For most use cases, treating Google Docs like email (encrypted in transit, encrypted at rest on Google’s servers) is sufficient.

What if the AI forgets something it should remember?

This usually means the system prompt isn’t clear enough about what to save. Revisit the “MEMORY MANAGEMENT” section of the system prompt and be more explicit. Example: add “Save names, preferences, job titles, and timezone. Save reminders and follow-ups as notes.” You can also manually add memories to the Google Doc yourself.

Ready to Deploy This Workflow?

Get the complete, ready-to-import JSON file with all the configurations, plus detailed setup guides for OpenAI, Google Docs, and Telegram. Import in seconds and start building.

Get the Workflow & Setup Guide

Includes step-by-step credential setup, testing checklist, and common customizations.

What’s Next?

You’ve built a powerful memory-driven chatbot. Here are four directions you can take it:

  1. Add Slack Integration — Instead of (or in addition to) Telegram, receive and respond to Slack DMs. Users stay in Slack, and memories persist across platforms.
  2. Multi-User Memory Isolation — Spin up separate memory docs per user, or implement a shared “team knowledge base” doc that all users contribute to. Great for support teams.
  3. Export Memory Analytics — Create a separate workflow that reads the memory doc weekly and generates a summary email. “Sarah asked 12 questions about n8n, preferred 5 templates, and wants to learn automation.”
  4. Archive Old Memories — Set up a scheduled n8n workflow that runs monthly, archives memories older than 6 months to a separate “Archive” doc, and clears the active memory doc. Keeps performance snappy.
Tags: n8n, AI Agent, Memory Management, Google Docs, GPT-4o, Chatbot, Telegram, LLM Integration, Automation