How to Add LinkedIn Post Commenters to HubSpot CRM with n8n (Auto-Enriched)

Every person who comments on your LinkedIn post is a warm lead — they’ve already seen your content and engaged with it. But manually copying names, finding emails, and entering contacts into your CRM? That’s an afternoon of soul-crushing data entry for every post you publish. This n8n workflow changes that: paste a LinkedIn post URL into a form, and within minutes every commenter is automatically enriched with professional data and waiting in your HubSpot CRM as a ready-to-contact lead.

Prefer to skip the setup? Grab the ready-made template → and be up and running in under 10 minutes.

What You’ll Build

  1. You paste a LinkedIn post URL into a simple web form and hit submit.
  2. n8n fetches every comment on that post via the ConnectSafely LinkedIn API — names, profile URLs, and comment text included.
  3. Each commenter’s LinkedIn profile is sent to an Apify actor that returns their full professional details: email address, job title, company name, location, and more.
  4. The workflow filters out any profile where no email was found, so your CRM stays clean.
  5. Verified contacts are automatically created (or updated, if they already exist) in HubSpot with all enriched fields pre-populated — first name, last name, email, title, company, city, and country.

How It Works — The Big Picture

The workflow runs in two logical phases: fetching the comment list, then looping through each commenter to enrich and sync. Here’s the full flow at a glance:

┌──────────────────────────────────────────────────────────────────────┐
│  LINKEDIN → HUBSPOT CRM PIPELINE                                     │
│                                                                      │
│  [Form Trigger]  →  [Fetch Post Comments]  →  [Split Array]          │
│   (Post URL)         (ConnectSafely)           (per commenter)       │
│                                                                      │
│                    ┌─── [Loop Over Items] ────────────────┐          │
│                    │         ↓                            │          │
│                    │  [Enrich with Apify]                 │          │
│                    │         ↓                            │          │
│                    │  [Check Email Exists]                │          │
│                    │    ↓ YES        ↓ NO                 │          │
│                    │  [Create/Update   [Skip]             │          │
│                    │   HubSpot]  ↓       ↓                │          │
│                    │       [Continue Loop] → next item    │          │
│                    └──────────────────────────────────────┘          │
└──────────────────────────────────────────────────────────────────────┘

What You’ll Need

  • n8n (self-hosted, version 1.0+) — the ConnectSafely node is a community node that requires self-hosted n8n
  • ConnectSafely LinkedIn API — a paid third-party service that safely retrieves LinkedIn data without violating platform ToS
  • Apify account — free tier works for small volumes; the specific actor used is the LinkedIn Profile Scraper (Actor ID: UMdANQyqx3b2JVuxg)
  • HubSpot account — any plan with API access; you’ll need a Private App token
  • A published LinkedIn post with existing comments to test against

Estimated build time: 45–60 minutes from scratch, or under 10 minutes with the template.

📌

Self-hosted n8n required: The ConnectSafely community node is not available on n8n Cloud. You’ll need a self-hosted n8n instance to use this workflow. If you’re on Cloud, reach out to us at support@easyworkflows.net — we can discuss an alternative approach using HTTP Request nodes.

Building the Workflow — Step by Step

1 Form Trigger — Enter Post URL (n8n-nodes-base.formTrigger)

This node creates a simple hosted web form that any team member can use to kick off the pipeline. No coding, no API calls — just paste a URL and click submit.

To configure it: open the node and set the Form Title to something descriptive like “LinkedIn Post Engagement Automation.” Add one field with the label LinkedIn Post URL, mark it as required, and add a placeholder showing the expected URL format. n8n will generate a unique public URL for the form — copy it and share it with your sales or marketing team.

When the form is submitted, this data object flows to the next node:

{
  "LinkedIn Post URL": "https://www.linkedin.com/posts/james-carter_ai-sales-automation-activity-7190000000000000000-XXXX"
}
💡

Tip: You can add a second field asking for a “Campaign Tag” or “Lead Source” label. Pass that value to HubSpot as a custom property so you know which LinkedIn post generated each contact.
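If you want to guard against malformed submissions, an optional Code node right after the trigger can validate the URL before any API credits are spent. This step is not part of the original workflow — just a sketch of one way to do the check:

```javascript
// Optional pre-flight check (not in the original workflow): reject anything
// that doesn't look like a LinkedIn post URL before calling downstream APIs.
function isLinkedInPostUrl(url) {
  // Post URLs follow the pattern https://www.linkedin.com/posts/<author>_<slug>...
  return /^https:\/\/(www\.)?linkedin\.com\/posts\/[^/\s]+/.test(String(url).trim());
}

console.log(isLinkedInPostUrl(
  "https://www.linkedin.com/posts/james-carter_ai-sales-automation-activity-7190000000000000000-XXXX"
)); // true
console.log(isLinkedInPostUrl("https://example.com/not-a-post")); // false
```

Wire the FALSE branch to a form response telling the submitter to check the URL format.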

2 Fetch Post Comments (ConnectSafely LinkedIn)

This community node uses the ConnectSafely API to retrieve all comments on the LinkedIn post. ConnectSafely is a compliant third-party LinkedIn data provider — it handles authentication and rate limiting so your account stays safe.

Configuration: set Post URL to ={{ $json['LinkedIn Post URL'] }} and set the operation to Get Post Comments. Attach your ConnectSafely API credential. The node returns an array of comment objects, each containing the commenter’s name, profile URL, comment text, and timestamp.

{
  "comments": [
    {
      "name": "Emily Rodriguez",
      "profileUrl": "https://www.linkedin.com/in/emily-rodriguez-sales",
      "commentText": "This is exactly what our team needs! Great post.",
      "timestamp": "2026-04-10T14:32:00Z"
    },
    {
      "name": "Michael Chen",
      "profileUrl": "https://www.linkedin.com/in/michael-chen-b2b",
      "commentText": "Saved this — we're evaluating tools like this right now.",
      "timestamp": "2026-04-10T15:10:00Z"
    }
  ]
}

3 Split Comments Array (splitOut)

The Split Out node takes the comments array from the previous step and breaks it into individual items — one n8n item per commenter. This is necessary so the loop in the next step can process each person separately.

Set Field To Split Out to comments. After this node, each item looks like a single commenter object.
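Under the hood, the transformation is simple: one item carrying an array becomes N items, one per array element. A rough JavaScript equivalent of what Split Out does (n8n stores each item’s data under a json key):

```javascript
// What Split Out does conceptually: fan one item's array field out into
// individual items, each wrapped under a `json` key as n8n expects.
const input = {
  comments: [
    { name: "Emily Rodriguez", profileUrl: "https://www.linkedin.com/in/emily-rodriguez-sales" },
    { name: "Michael Chen", profileUrl: "https://www.linkedin.com/in/michael-chen-b2b" },
  ],
};

const items = input.comments.map((comment) => ({ json: comment }));

console.log(items.length);        // 2
console.log(items[0].json.name);  // "Emily Rodriguez"
```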

4 Loop Over Items (splitInBatches)

The Split In Batches node processes commenters one at a time (or in small batches), which prevents you from hammering the Apify API with too many simultaneous requests. Leave the Batch Size at the default (1) for reliability, or increase it to 5 if you’re on a higher Apify plan and want faster processing.

This node has two outputs: output 0 fires when the loop is complete (all items processed), and output 1 fires for batches — connect output 1 to the Apify enrichment node.
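Conceptually, batching just chunks the item list. n8n handles this internally; the function below is only for intuition about what a batch size means:

```javascript
// Illustration only: how a list of commenters is cut into batches of a given size.
function toBatches(items, batchSize) {
  const batches = [];
  for (let i = 0; i < items.length; i += batchSize) {
    batches.push(items.slice(i, i + batchSize));
  }
  return batches;
}

const commenters = ["Emily", "Michael", "Priya", "Tom", "Ana"];
console.log(toBatches(commenters, 1).length); // 5 — one batch per commenter
console.log(toBatches(commenters, 2));        // [["Emily","Michael"],["Priya","Tom"],["Ana"]]
```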

5 Enrich Profile with Apify (Apify)

This is where the magic happens. For each commenter’s LinkedIn profile URL, this node runs an Apify actor that scrapes their public profile and returns rich contact data — including, when available, their work email address.

Configuration: set the Actor ID to UMdANQyqx3b2JVuxg (the ID at the end of the actor’s console URL, https://console.apify.com/actors/UMdANQyqx3b2JVuxg), set the operation to Run actor and get dataset, and in Custom Body enter:

={{ JSON.stringify({ "linkedin": $json.profileUrl }) }}

Attach your Apify API key credential. The actor typically takes 10–30 seconds per profile. The output contains detailed professional information mapped to numbered field keys:

{
  "02_First_name": "Emily",
  "03_Last_name": "Rodriguez",
  "04_Email": "emily.rodriguez@techcorp.com",
  "07_Title": "VP of Sales",
  "13C_Current_address": "1428 Elm St, Austin, TX 78701",
  "14_City": "Austin",
  "15_Country": "United States",
  "16_Company_name": "TechCorp Solutions"
}
💡

Tip: Email availability depends on LinkedIn profile privacy settings. Expect 30–60% of profiles to return an email. Use the Apify Console to inspect the full schema and add more fields.

6 Check Email Exists (IF)

Not every enriched profile will include an email address — some LinkedIn users keep theirs private. This IF node filters them out before they reach HubSpot, keeping your CRM free of incomplete records.

Configuration: add one condition — Value 1 = ={{ $json['04_Email'] }}, operator = Is not empty. Profiles with a valid email go to TRUE; those without go to FALSE (Continue Loop).
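The check itself is trivial — the expression passes only for a non-empty string. As plain JavaScript, the equivalent predicate looks like this (field key taken from the Apify sample above):

```javascript
// Equivalent of the IF node's "Is not empty" check on the enriched email field.
function hasEmail(profile) {
  const email = profile["04_Email"];
  return typeof email === "string" && email.trim() !== "";
}

console.log(hasEmail({ "04_Email": "emily.rodriguez@techcorp.com" })); // true
console.log(hasEmail({ "04_Email": "" }));  // false
console.log(hasEmail({}));                  // false
```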

7 Create or Update HubSpot Contact (HubSpot)

The final action node upserts the contact in HubSpot using the email address as the unique identifier. If the contact already exists, their record is updated with the latest data. New contacts are created fresh.

Set Authentication to App Token and attach your HubSpot Private App credential. Map the fields:

HubSpot Field | n8n Expression | Example Value
Email | ={{ $json['04_Email'] }} | emily.rodriguez@techcorp.com
First Name | ={{ $json['02_First_name'] }} | Emily
Last Name | ={{ $json['03_Last_name'] }} | Rodriguez
Job Title | ={{ $json['07_Title'] }} | VP of Sales
Company | ={{ $json['16_Company_name'] }} | TechCorp Solutions
City | ={{ $json['14_City'] }} | Austin
Country | ={{ $json['15_Country'] }} | United States
Street Address | ={{ $json['13C_Current_address'] }} | 1428 Elm St, Austin, TX 78701
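Under the hood, the node submits these values as contact properties keyed by HubSpot’s default internal property names (email, firstname, lastname, jobtitle, company, city, country, address). An illustrative sketch of that mapping, assuming the Apify field keys shown in the sample output earlier — the HubSpot node builds this payload for you:

```javascript
// Sketch of the enriched-profile → HubSpot property mapping (illustrative only).
function toHubSpotProperties(profile) {
  return {
    email: profile["04_Email"],
    firstname: profile["02_First_name"],
    lastname: profile["03_Last_name"],
    jobtitle: profile["07_Title"],
    company: profile["16_Company_name"],
    city: profile["14_City"],
    country: profile["15_Country"],
    address: profile["13C_Current_address"],
  };
}

const props = toHubSpotProperties({
  "02_First_name": "Emily",
  "03_Last_name": "Rodriguez",
  "04_Email": "emily.rodriguez@techcorp.com",
  "07_Title": "VP of Sales",
  "13C_Current_address": "1428 Elm St, Austin, TX 78701",
  "14_City": "Austin",
  "15_Country": "United States",
  "16_Company_name": "TechCorp Solutions",
});
console.log(props.email); // "emily.rodriguez@techcorp.com"
```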
💡

Tip: In HubSpot, create a custom contact property called “LinkedIn Source Post” and map it to the original post URL using ={{ $('Form Trigger - Enter Post URL').item.json['LinkedIn Post URL'] }}. This lets you track which post each lead came from directly in your CRM.

8 Continue Loop (No Operation)

This placeholder node serves as the merge point for both branches of the IF node — whether a contact was created in HubSpot (TRUE branch) or skipped due to missing email (FALSE branch), both paths converge here and loop back to Loop Over Items to process the next commenter. No configuration needed.

Full System Flow

Here’s the complete end-to-end picture — from a LinkedIn post URL to a fully enriched HubSpot contact record:

  User submits LinkedIn post URL via Form
             ↓
  [Form Trigger] receives URL
             ↓
  [ConnectSafely] fetches all commenters
  → Returns: [{name, profileUrl, comment}, ...]
             ↓
  [Split Out] → individual commenter items
             ↓
  ┌──────────── [Loop Over Items] ────────────┐
  │  For each commenter:                      │
  │         ↓                                 │
  │  [Apify Actor] enriches LinkedIn profile  │
  │  → Returns: email, name, title, company   │
  │         ↓                                 │
  │  [IF: Email Exists?]                      │
  │    ↓ YES                   ↓ NO           │
  │  [HubSpot: Create/Update]  [Skip]         │
  │         ↓                    ↓            │
  │       [Continue Loop] ←──────┘            │
  │             ↑ (next commenter)            │
  └───────────────────────────────────────────┘
             ↓ (all done)
        Workflow complete
        All reachable commenters → HubSpot CRM

Testing Your Workflow

  1. Activate the workflow and open the form URL provided by the Form Trigger node.
  2. Paste in a LinkedIn post URL that you own (one with at least 3–5 comments for a meaningful test).
  3. Submit the form and watch the execution in the n8n editor — you should see items flowing through each node.
  4. After the run completes, open HubSpot CRM → Contacts and confirm new contacts appear with all fields populated.
  5. For a commenter whose profile you know (yourself or a colleague), verify that the email and company fields are correct.

If something doesn’t work on the first run, check this troubleshooting table:

Problem | Likely Cause | Fix
ConnectSafely node shows authentication error | API credential not configured | Re-enter your ConnectSafely API key in the credential modal
Apify returns empty dataset | Actor still running; n8n timed out waiting | Increase the HTTP timeout in the Apify node settings to 120 seconds
HubSpot returns 409 Conflict | Contact already exists with the same email | Normal behavior — HubSpot updates the existing contact; no action needed
0% of profiles return an email | Testing with private LinkedIn accounts | Try public LinkedIn profiles; email availability varies by privacy settings
Loop never finishes | Continue Loop not connected back to Loop Over Items | Ensure both IF branches connect to Continue Loop, and Continue Loop connects back to the Loop Over Items input

Frequently Asked Questions

Does this workflow violate LinkedIn’s Terms of Service?

ConnectSafely is specifically designed to access LinkedIn data within LinkedIn’s permitted boundaries — it doesn’t use scraping bots or credential stuffing. That said, always review the current ToS for your specific use case. The Apify enrichment step retrieves publicly available profile data only. We recommend consulting your legal team for enterprise use.

What percentage of commenters will have an email address?

Typically 30–60%, depending on your audience. B2B professionals in sales, marketing, and tech tend to have higher email availability on LinkedIn. The IF node ensures only enriched contacts reach your CRM, so partial data never clutters your records.

Can I run this on n8n Cloud instead of self-hosted?

Not with the ConnectSafely community node, which only installs on self-hosted n8n. If you’re on Cloud, it’s possible to replicate the comment-fetching step using an HTTP Request node pointed at a LinkedIn-compliant API — reach out to support@easyworkflows.net for guidance on the alternative setup.

How long does the workflow take to run for a post with 100 comments?

About 17–50 minutes, depending on Apify’s queue. Each enrichment call takes 10–30 seconds per profile, and the loop processes them sequentially. For large volumes, consider running on a schedule or splitting the job into smaller batches using a Filter node to process only the newest commenters since the last run.
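The estimate is simple arithmetic — with sequential processing, total runtime is roughly the per-profile time multiplied by the comment count:

```javascript
// Back-of-envelope runtime estimate for sequential enrichment.
function estimateMinutes(commentCount, secondsPerProfile) {
  return (commentCount * secondsPerProfile) / 60;
}

console.log(estimateMinutes(100, 10)); // ~17 minutes at the fast end
console.log(estimateMinutes(100, 30)); // 50 minutes at the slow end
```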

Can I add commenters to a HubSpot list or pipeline stage automatically?

Yes — after the “Create or Update HubSpot Contact” node, add a HubSpot node set to “Add contact to list” or “Create deal” and reference the contact’s ID from the previous node’s output. This lets you instantly enroll new LinkedIn-sourced leads into a nurture sequence or sales pipeline.

What if I want to save commenters to Google Sheets instead of HubSpot?

Simply swap the HubSpot node for a Google Sheets “Append Row” node and map the same Apify fields to your sheet columns. The rest of the workflow stays identical. This makes it easy to adapt to any CRM or spreadsheet — just change the final destination node.


🚀 Get the LinkedIn → HubSpot CRM Template

Download the ready-to-import n8n workflow JSON, plus a step-by-step Setup Guide and a Credentials Guide covering ConnectSafely, Apify, and HubSpot — everything you need to go live today.

Get the Template →

Instant download · Works on self-hosted n8n · Includes PDF guides

What’s Next?

  • Auto-enroll in email sequence: After creating the HubSpot contact, trigger a HubSpot workflow to enroll them in a LinkedIn-specific nurture email sequence automatically.
  • Slack notification: Add a Slack node after HubSpot to alert your sales team in real time whenever a high-value contact (e.g., VP or Director title) is added.
  • Run on a schedule: Swap the Form Trigger for a Schedule trigger combined with a stored list of your recent post URLs to process new commenters every morning automatically.
  • Score leads by comment sentiment: Add an OpenAI node before the Apify step to analyze the comment text and assign a lead score — prioritize contacts who left buying-intent comments.
Tags: n8n · LinkedIn · HubSpot · Apify · CRM automation · lead generation · sales automation

How to Auto-Generate LinkedIn Posts from Your Blog with n8n and AI

You publish a blog post every week. It’s great content—researched, written, polished. But then you face a familiar problem: how do you turn that article into a compelling LinkedIn post? Do you manually rewrite it? Copy-paste? Start from scratch? You end up spending 20 minutes crafting something that captures the essence of your article, and you repeat this every single week.

What if that rewriting happened automatically?

This guide walks you through building a workflow that pulls your latest blog posts from Ghost CMS, feeds them to an AI agent powered by GPT-4o-mini, and saves LinkedIn-ready promotional posts to a Google Sheet—all on a schedule, no manual work required. Get the complete template below.

What You’ll Build

By the end of this tutorial, you’ll have a fully automated workflow that:

  1. Fetches your latest blog posts from Ghost CMS every Monday morning at 9am
  2. Cleans up the HTML content to extract just the text, removing all markup and formatting noise
  3. Sends each post to an AI agent (GPT-4o-mini) with a custom prompt to generate a professional LinkedIn promotional post
  4. Appends everything to a Google Sheet where you can review, refine, or directly copy the AI-generated post to LinkedIn
  5. Repeats automatically every week, giving you a constantly growing library of pre-written LinkedIn content

How It Works — The Big Picture

Here’s the workflow architecture at a glance:

┌─────────────────────┐
│ Schedule Trigger    │ (Every Monday 9am)
└──────────┬──────────┘
           │
           v
┌─────────────────────┐
│ Extract Blog Posts  │ (Ghost CMS - getAll, limit 3)
└──────────┬──────────┘
           │
           v
┌─────────────────────┐
│ Map Post Fields     │ (Set node - extract id, title, etc.)
└──────────┬──────────┘
           │
           v
┌─────────────────────┐
│ Process Each Post   │ (SplitInBatches - batch size 1)
└──────────┬──────────┘
           │
           v
┌─────────────────────┐
│ Strip HTML Tags     │ (Code node - JS to remove markup)
└──────────┬──────────┘
           │
           v
┌─────────────────────┐
│ Combine Post Data   │ (Merge node - SQL combine)
└──────────┬──────────┘
           │
           v
┌─────────────────────┐
│ Generate LinkedIn   │ (AI Agent - GPT-4o-mini)
│ Post                │
└──────────┬──────────┘
           │
           v
┌─────────────────────┐
│ Merge AI Output     │ (Merge node - combine with original)
└──────────┬──────────┘
           │
           v
┌─────────────────────┐
│ Save to Sheets      │ (Google Sheets - append rows)
└─────────────────────┘

Each blog post flows through this pipeline independently. The workflow extracts relevant data, cleans it, feeds it to AI, and stores the result in a structured spreadsheet for your review.

What You’ll Need

Before you start building, make sure you have:

  • n8n account (free tier works fine, or self-hosted)
  • Ghost CMS with at least 3 published blog posts
  • Ghost Admin API key (generate in Settings → Integrations)
  • OpenAI API key with access to GPT-4o or GPT-4o-mini
  • Google Sheets API credentials (or just use n8n’s built-in Google Sheets connector)
  • A Google Sheet ready to receive the data
  • Time commitment: about 45 minutes to build and test the entire workflow

Building the Workflow

Let’s build this step by step. I’ll walk you through each node, what it does, and how to configure it.

1 Schedule Trigger: Weekly Automation

Start with a Schedule node to run your workflow every Monday morning at 9am Eastern Time.

Configuration:

  • Choose Recurring as the trigger type
  • Set Trigger Type to Weekly
  • Select Monday (or your preferred day)
  • Set the time to 09:00:00 (9am)
  • Set timezone to America/New_York

This node outputs a single object with a timestamp. It doesn’t pass data forward—it just kicks off the workflow on schedule.

📌

Pro tip: If you want to test the workflow immediately without waiting for Monday, you can manually trigger it by clicking the “Execute Workflow” button in n8n’s editor. No need to change the schedule.

2 Extract Blog Posts: Pull from Ghost CMS

Next, add a Ghost node configured to fetch your latest blog posts.

Configuration:

  • Create a new Ghost connection using your Admin API key
  • Set the resource to Posts
  • Set the operation to Get All
  • Under “Options,” set Limit to 3 (fetch the 3 most recent posts)
  • Enable Include HTML so we capture the full content

Expected output (sample):

[
  {
    "id": "post_5a2k8x9m",
    "title": "5 Ways to Automate Your Marketing Funnel in 2026",
    "featured_image": "https://ghost.easyworkflows.net/content/images/2026/04/marketing-funnel.jpg",
    "excerpt": "Automation is no longer a luxury...",
    "html": "<h2>Automation is...</h2><p>...",
    "slug": "5-ways-automate-marketing-funnel-2026",
    "published_at": "2026-04-08T08:00:00Z"
  },
  ...
]

3 Map Post Fields: Extract What We Need

Now use a Set node to pluck out just the fields we care about. This keeps our data clean and reduces noise downstream.

Configuration:

  • Add a Set node after the Ghost node
  • In the “Set” section, map these fields from the Ghost posts:
    • id ← id
    • title ← title
    • featured_image ← featured_image
    • excerpt ← excerpt
    • content ← html
    • link ← construct it from the slug: https://yourblog.ghost.io/{{ $('Ghost').item.json.slug }}/

At this point, each post has a clean data structure with just what we need.
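The link is the only derived field — it’s just the slug interpolated into your blog’s base URL. As plain JavaScript (yourblog.ghost.io is a placeholder for your own Ghost domain):

```javascript
// Building the public post URL from the Ghost slug (placeholder domain).
const post = { slug: "5-ways-automate-marketing-funnel-2026" };
const link = `https://yourblog.ghost.io/${post.slug}/`;

console.log(link); // "https://yourblog.ghost.io/5-ways-automate-marketing-funnel-2026/"
```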

4 Process Each Post: Use SplitInBatches

Since we fetched multiple posts, we need to process them one at a time. A SplitInBatches node lets us handle each post independently before merging results back together.

Configuration:

  • Add a SplitInBatches node
  • Set Batch Size to 1
  • Set Options → Timeout to 120 seconds (gives AI time to respond)

This node splits the array of posts into single-item batches. Each batch loops through the remaining nodes.

5 Strip HTML Tags: Clean the Content

The Ghost CMS gives us HTML-rich content, but we want plain text for the AI. A Code node will strip all HTML tags and clean up whitespace.

Configuration:

  • Add a Code node (JavaScript)
  • Paste this function:
// Code node in “Run Once for Each Item” mode: read the HTML from the Set node
const htmlContent = $('Map Post Fields').item.json.content;

// Strip HTML tags
let cleanText = htmlContent.replace(/<[^>]+>/g, '');

// Decode common HTML entities (decode &amp; last to avoid double-decoding)
cleanText = cleanText
  .replace(/&lt;/g, '<')
  .replace(/&gt;/g, '>')
  .replace(/&quot;/g, '"')
  .replace(/&#39;/g, "'")
  .replace(/&amp;/g, '&');

// Collapse runs of whitespace into single spaces
cleanText = cleanText
  .replace(/\s+/g, ' ')
  .trim();

return { json: { clean_content: cleanText } };

Output example:

{
  "clean_content": "5 Ways to Automate Your Marketing Funnel in 2026 Automation is no longer a luxury. It's a necessity..."
}
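One practical safeguard before the AI step: very long articles can make the model slow to respond or cause timeouts (the troubleshooting table later in this guide suggests reducing content length). A hypothetical helper — not part of the original workflow — that caps the text sent to the model:

```javascript
// Optional guard (not in the original workflow): cap the plain text so very
// long articles don't cause slow or truncated model responses. The 8000-char
// limit is an arbitrary example value; tune it to your model and content.
function truncateForModel(text, maxChars = 8000) {
  return text.length <= maxChars ? text : text.slice(0, maxChars) + " …";
}

console.log(truncateForModel("short post").length);       // 10 — short text passes through
console.log(truncateForModel("x".repeat(10000)).length);  // 8002 — 8000 chars plus " …"
```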

6 Combine Post Data: Merge Original + Cleaned

We now have two pieces of data floating around: the original post fields and the cleaned content. A Merge node combines them back into a single, complete object.

Configuration:

  • Add a Merge node
  • Merge mode: Combine
  • Input 1: Output from “Map Post Fields” (original fields)
  • Input 2: Output from “Strip HTML Tags” (cleaned content)

Result:

{
  "id": "post_5a2k8x9m",
  "title": "5 Ways to Automate Your Marketing Funnel in 2026",
  "featured_image": "https://...",
  "excerpt": "Automation is no longer a luxury...",
  "content": "<h2>...</h2>...",
  "link": "https://yourblog.ghost.io/...",
  "clean_content": "5 Ways to Automate... [full plain text]"
}

7 Generate LinkedIn Post: AI Agent with GPT-4o-mini

Now the magic happens. We send the cleaned blog content to an AI Agent node powered by OpenAI, which generates a professional LinkedIn promotional post.

Configuration:

  • Add an AI Agent node
  • Model: gpt-4o-mini
  • Credentials: Connect your OpenAI API key
  • System Prompt: Copy and customize this:
    You are a LinkedIn content specialist. Your job is to transform blog articles into engaging, professional LinkedIn posts.
    
    Guidelines:
    - Keep it between 3-5 sentences
    - Use 1-2 relevant emojis (but not too many)
    - Include a call-to-action at the end (e.g., "Read the full article below" or "What's your experience?")
    - Maintain a professional but friendly tone
    - Focus on the key insight or takeaway from the blog post
    - Do NOT include hashtags
    
    Format your response as plain text only.
  • User Message: Set this to:
    Blog Title: {{ $('Combine Post Data').item.json.title }}
    
    Blog Content:
    {{ $('Combine Post Data').item.json.clean_content }}
    
    Generate a LinkedIn promotional post for this blog article.

Expected output:

"In 2026, your marketing stack is only as strong as your automation. We just published a deep dive into 5 game-changing automation strategies that cut manual work, reduce errors, and scale your growth.

Whether you're managing leads, nurturing prospects, or coordinating campaigns, automation does the heavy lifting. Curious how? Check out the full breakdown below. 📈

What automation tool has made the biggest impact for you?"
💡

Not getting the tone you want? Tweak the system prompt. Ask the AI to be more casual, more technical, more sales-focused, whatever fits your brand. The beauty of AI agents is they adapt to your instructions.

8 Merge AI Output: Combine Generated Post with Metadata

The AI agent returned a LinkedIn post, but we also want to keep the original blog metadata (title, link, featured image) so we can reference them in the Google Sheet.

Configuration:

  • Add another Merge node
  • Merge mode: Combine
  • Input 1: Output from “Combine Post Data” (all original + cleaned fields)
  • Input 2: Output from “Generate LinkedIn Post” (the AI-generated text)
  • In Input 2, set the field name to linkedin_post so the AI output is clearly labeled

Final merged object:

{
  "id": "post_5a2k8x9m",
  "title": "5 Ways to Automate Your Marketing Funnel in 2026",
  "featured_image": "https://...",
  "excerpt": "Automation is no longer a luxury...",
  "content": "<...>",
  "clean_content": "5 Ways to Automate... [plain text]",
  "link": "https://yourblog.ghost.io/...",
  "linkedin_post": "In 2026, your marketing stack is only as strong..."
}

9 Save to Google Sheets: Append the Results

Finally, append each row to your Google Sheet so you have a growing library of LinkedIn posts ready to go.

Configuration:

  • Add a Google Sheets node
  • Credentials: Authenticate with your Google account
  • Spreadsheet: Select or paste the ID of your sheet
  • Sheet: Choose the sheet tab (e.g., “Posts”)
  • Operation: Append Row
  • Columns to append: Map these fields:
    • id
    • title
    • featured_image
    • excerpt
    • link
    • clean_content
    • linkedin_post

Each week, new rows are added to the bottom of your sheet with the original blog data and the AI-generated post.
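If you ever need to reproduce the row shape outside n8n (say, when calling the Sheets API directly), the appended row is just the merged object’s values in header order. An illustrative sketch — the n8n node performs this mapping for you:

```javascript
// The appended row: one cell per header, in the same order as the sheet's
// header row, with blanks for any missing fields (illustrative only).
const headers = ["id", "title", "featured_image", "excerpt", "link", "clean_content", "linkedin_post"];

function toRow(post) {
  return headers.map((h) => post[h] ?? "");
}

const row = toRow({
  id: "post_5a2k8x9m",
  title: "5 Ways to Automate Your Marketing Funnel in 2026",
  link: "https://blog.easyworkflows.net/marketing-automation-2026/",
  linkedin_post: "In 2026, your marketing stack is only as strong...",
});
console.log(row.length); // 7 — one cell per header
```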

The Data Structure

Here’s exactly how your Google Sheet should be organized. Create these column headers in row 1:

id | title | featured_image | excerpt | link | clean_content | linkedin_post

Example row 1:
  id: post_5a2k8x9m
  title: 5 Ways to Automate Your Marketing Funnel in 2026
  featured_image: https://ghost.easyworkflows.net/content/images/2026/04/marketing-funnel.jpg
  excerpt: Automation is no longer a luxury. It’s a necessity…
  link: https://blog.easyworkflows.net/marketing-automation-2026/
  clean_content: 5 Ways to Automate Your Marketing Funnel in 2026 Automation is no longer a luxury…
  linkedin_post: In 2026, your marketing stack is only as strong as your automation. We just published a deep dive into 5 game-changing automation strategies…

Example row 2:
  id: post_3j9m2x5k
  title: Why n8n is Better Than Zapier for Complex Workflows
  featured_image: https://ghost.easyworkflows.net/content/images/2026/04/n8n-vs-zapier.jpg
  excerpt: When it comes to no-code automation, flexibility matters…
  link: https://blog.easyworkflows.net/n8n-vs-zapier-comparison/
  clean_content: Why n8n is Better Than Zapier for Complex Workflows When it comes to no-code automation, flexibility matters…
  linkedin_post: Ever hit a wall with Zapier because it can’t do exactly what you need? n8n is different. We compared side-by-side, and the results might surprise you. Read our full breakdown…

The featured_image column is great for visual reference. The link column lets you click straight to the blog post. And linkedin_post is what you’ll actually copy into LinkedIn when you’re ready to post.

Full System Flow

Here’s a more detailed view of the entire workflow end-to-end:

WORKFLOW: Auto-Generate LinkedIn Posts from Blog Content

┌──────────────────────────────────────────────────────┐
│ 1. SCHEDULE TRIGGER                                  │
│    Runs: Every Monday at 9:00 AM (America/New_York) │
└────────────────┬─────────────────────────────────────┘
                 │
                 v
┌──────────────────────────────────────────────────────┐
│ 2. GHOST CMS NODE                                    │
│    Fetches: Latest 3 published blog posts            │
│    Fields: id, title, excerpt, html, featured_image │
│    Output: Array of 3 post objects                   │
└────────────────┬─────────────────────────────────────┘
                 │
                 v
┌──────────────────────────────────────────────────────┐
│ 3. SET NODE (Map Fields)                             │
│    Extracts: id, title, featured_image, excerpt,    │
│             content (html), link (slug-based)        │
│    Output: Cleaned post object                       │
└────────────────┬─────────────────────────────────────┘
                 │
                 v
┌──────────────────────────────────────────────────────┐
│ 4. SPLIT IN BATCHES NODE                             │
│    Batch Size: 1                                     │
│    Processes: Each post individually in loop         │
│    Output: Single post object (batch)                │
└────────────────┬─────────────────────────────────────┘
                 │
                 v
┌──────────────────────────────────────────────────────┐
│ 5. CODE NODE (Strip HTML)                            │
│    Removes: All HTML tags                            │
│    Cleans: Whitespace, HTML entities                │
│    Output: { clean_content: "plain text..." }        │
└────────────────┬─────────────────────────────────────┘
                 │
                 v
┌──────────────────────────────────────────────────────┐
│ 6. MERGE NODE (Combine)                              │
│    Input 1: Original post fields                     │
│    Input 2: clean_content from Code node             │
│    Output: Single object with all fields             │
└────────────────┬─────────────────────────────────────┘
                 │
                 v
┌──────────────────────────────────────────────────────┐
│ 7. AI AGENT NODE (OpenAI GPT-4o-mini)                │
│    System Prompt: LinkedIn content specialist        │
│    Input: Blog title + clean_content                 │
│    Generates: Professional LinkedIn post (3-5 sent.) │
│    Output: { text: "In 2026, your marketing..." }    │
└────────────────┬─────────────────────────────────────┘
                 │
                 v
┌──────────────────────────────────────────────────────┐
│ 8. MERGE NODE (Combine with Metadata)                │
│    Input 1: All original fields + clean_content      │
│    Input 2: AI-generated linkedin_post               │
│    Output: Complete object ready for Sheets          │
└────────────────┬─────────────────────────────────────┘
                 │
                 v
┌──────────────────────────────────────────────────────┐
│ 9. GOOGLE SHEETS NODE (Append)                       │
│    Spreadsheet: "LinkedIn Auto-Posts"                │
│    Sheet Tab: "Posts"                                │
│    Appends: 1 row per blog post                      │
│    Columns: id, title, featured_image, excerpt,      │
│            link, clean_content, linkedin_post        │
└──────────────────────────────────────────────────────┘

RESULT: Every blog post → 1 professional LinkedIn post
        Stored in Google Sheet for review & scheduling

Testing Your Workflow

Before you set it to run on schedule, test it end-to-end. Here’s the checklist:

  1. Execute the workflow manually from the n8n editor (click the play button)
  2. Check the Ghost node output — do you see your 3 recent posts?
  3. Check the Set node output — are all the fields (title, link, content) correct?
  4. Check the Code node output — is the HTML stripped? Is clean_content plain text?
  5. Check the AI Agent output — is the generated LinkedIn post sensible? Does it match your tone?
  6. Check Google Sheets — did the row appear? Are all columns populated?
  7. Copy the LinkedIn post to LinkedIn — does it read well? Would you actually post it?
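If step 4 looks wrong, compare its output against a minimal version of the HTML-stripping logic. This is only a sketch of what a typical Code-node implementation does; the `clean_content` field name comes from the flow diagram above:

```javascript
// Minimal HTML-to-text pass for the Code node (step 4).
// A sketch, not the template's exact code.
function stripHtml(html) {
  return html
    .replace(/<style[\s\S]*?<\/style>/gi, "") // drop embedded styles
    .replace(/<script[\s\S]*?<\/script>/gi, "") // drop scripts
    .replace(/<[^>]+>/g, " ") // tags -> spaces
    .replace(/&nbsp;/g, " ")
    .replace(/\s+/g, " ") // collapse whitespace
    .trim();
}

const clean_content = stripHtml("<h1>Hello</h1><p>World &nbsp; again</p>");
console.log(clean_content);
// → Hello World again
```

If `clean_content` still contains angle brackets or entity codes after this pass, the Ghost node is likely emitting a field other than `html`.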

Troubleshooting table:

| Problem | Likely Cause | Solution |
| --- | --- | --- |
| Ghost node returns no posts | Admin API key invalid, or no published posts | Verify the API key in the Ghost integration; make sure you have 3+ published posts |
| AI Agent times out | Content too long, or the OpenAI API is slow | Reduce the blog excerpt length; raise the timeout in SplitInBatches to 180 s |
| Google Sheets append fails | Missing column headers or wrong Sheet ID | Manually create the header row in the Sheet; verify the Spreadsheet ID matches |
| AI-generated post is bland | System prompt too generic | Customize the system prompt with your brand voice, target audience, and examples |
| Duplicate rows in Sheet | Workflow executed multiple times | Check the n8n execution logs; deactivate the workflow until it's ready to run on a schedule |

Frequently Asked Questions

Can I use WordPress or RSS instead of Ghost CMS?

Absolutely. Replace the Ghost node with a WordPress node (if available in n8n) or use an RSS node to pull your latest posts. The rest of the workflow stays the same. You’ll just need to adjust the field mappings to match WordPress’s output format (e.g., post_content instead of html).

Can I use a different AI model instead of GPT-4o-mini?

Yes. You can swap in Claude 3.5 Sonnet, Gemini, or any other LLM supported by n8n. The workflow structure stays identical—just update the AI Agent node credentials and model selection. Different models may produce slightly different tones, so test and see which you prefer.

How do I customize the LinkedIn post style to match my brand voice?

Edit the System Prompt in the AI Agent node (Step 7). Add specific instructions about your brand voice, target audience, desired length, tone, and format. For example: “Use more technical language,” “Add industry jargon,” “Make it humorous,” “Include a specific CTA.” The AI will adapt accordingly.

Can I auto-post directly to LinkedIn instead of saving to a sheet first?

It's possible, with caveats. n8n does ship a LinkedIn node that can create posts, but it requires registering a LinkedIn developer app, and LinkedIn's API approval process is restrictive. Alternatively, integrate with a scheduling tool like Buffer or Later that connects to n8n. For most teams, keeping Google Sheets as a review layer is the safest approach; it gives you a chance to tweak the AI output before publishing.

How many blog posts can I process at once?

The workflow processes posts one at a time (batch size 1) to avoid rate-limiting and stay within API cost bounds. If you want to fetch more than 3 posts, increase the Limit in the Ghost node. Keep in mind: each post calls the OpenAI API, so processing 10 posts will cost more than processing 3. Start with 3 and scale as needed.

💡 Want to take this further? Try setting up a second workflow that watches your Google Sheet and automatically schedules posts to LinkedIn via Buffer on specific days. Or build a variation that emails the LinkedIn post to your team for approval before it hits the sheet. Check out our templates library for more advanced workflows.

Get the Complete Workflow Template

Don’t want to build it from scratch? We’ve packaged the entire workflow—all 9 nodes pre-configured and ready to import—so you can get up and running in minutes, not hours.

Download the Template

Includes step-by-step setup guide and troubleshooting tips.

What’s Next?

You now have a powerful foundation. Here are four natural extensions to consider:

  1. Multi-platform distribution — Fork the workflow to generate Twitter posts, email newsletters, or Slack announcements from the same blog content.
  2. A/B testing variants — Call the AI agent twice with different system prompts and store both versions in the sheet. See which one gets more engagement on LinkedIn, then refine your prompt based on the winner.
  3. Sentiment and keyword extraction — Add a node to analyze the blog post sentiment and extract key topics, storing them in the sheet for SEO and content tracking.
  4. Scheduled LinkedIn publishing — Integrate with Buffer or another scheduling tool to automatically queue the posts for posting at optimal times, bypassing the manual copy-paste step entirely.

The beauty of n8n is that each workflow is a building block. Start here, learn what works for your brand, and expand from there.


Questions? Run into issues? The n8n community forum is incredibly active, and our templates team is always available. Happy automating!

n8n
Ghost CMS
LinkedIn
OpenAI
Google Sheets
automation
content marketing

How to Build an AI Candidate Screening Pipeline with n8n (LinkedIn + Gemini)

Recruiting teams spend hours on first-round screening—parsing LinkedIn profiles, cross-referencing job requirements, and writing candidate summaries. It’s critical work, but it’s repetitive and error-prone. What if you could automate the entire initial review, freeing your team to focus on real conversations with the best candidates?

This n8n workflow does exactly that: a recruiter sends a LinkedIn profile URL via Telegram, three AI agents powered by Google Gemini analyze the candidate against the job requirements, and a formatted assessment comes back within seconds. Everything is logged to Google Sheets for your records. Let’s build it.

Prefer to skip the setup? Grab the ready-made template → and be up and running in under 10 minutes.

What You’ll Build

  1. A Telegram bot receives a LinkedIn profile URL from a recruiter
  2. The workflow scrapes the candidate’s profile data using Apify
  3. It retrieves the job description from your Google Drive folder
  4. Three specialized AI agents evaluate the candidate: one scores JD match, one delivers a detailed analysis, and one synthesizes a recruiter-ready recommendation
  5. Results are stored as a row in Google Sheets for future reference
  6. A formatted summary is sent back to Telegram with the screening verdict

How It Works — The Big Picture

The workflow orchestrates a multi-stage evaluation: it gathers data from three sources (LinkedIn, Google Drive, Apify), processes it through three independent LLM agents, consolidates the results, and delivers them both to a persistent data store and back to the recruiter in real time.

┌──────────────────────────────────────────────────────────────────────────┐
│  AI CANDIDATE SCREENING PIPELINE                                        │
│                                                                          │
│  1. Telegram Trigger                                                     │
│         ↓                                                                │
│  2. Extract LinkedIn URL → 3. Apify Scraper (LinkedIn Profile)         │
│         ↓                                                                │
│  4. Poll Apify Status → 5. Get Apify Results                            │
│         ↓                                                                │
│  6. Google Drive: Fetch Job Description                                 │
│  7. Extract PDF Text                                                    │
│         ↓                                                                │
│  8–10. Three Parallel LLM Agents (Gemini 2.5 Pro)                       │
│        • Agent 1: JD Matching Score                                     │
│        • Agent 2: Detailed Candidate Analysis                           │
│        • Agent 3: Recruiter Recommendation                              │
│         ↓                                                                │
│  11. Merge Agent Results                                                │
│  12. Add to Google Sheets                                               │
│  13. Format & Send Telegram Summary                                     │
│         ↓                                                                │
│  14. Telegram Send (Final Message)                                      │
└──────────────────────────────────────────────────────────────────────────┘
  

What You’ll Need

  • n8n account (Cloud or self-hosted, version 1.0+)
  • Telegram Bot API—a bot token created via BotFather
  • Google Account with access to Google Drive, Google Sheets, and the Gemini API
  • Apify Account with API access and a LinkedIn Scraper actor already configured
  • Google Drive Folder containing job description PDFs (one per role)
  • Google Sheets Document where candidate results will be stored
  • LinkedIn URL(s) to test with—public profiles work best

Estimated build time: 45–60 minutes from scratch, or under 10 minutes with the template.

Part 1 — The Trigger and Data Collection

Step 1: Telegram Trigger and URL Extraction

The workflow starts when your recruiter sends a message with a LinkedIn profile link. The Telegram trigger node listens for incoming messages, and a Set node extracts the LinkedIn URL for processing.

Configuration: Set the Telegram Bot token in the credential field. The trigger fires every time a message arrives. A downstream Set node extracts the URL string from the message text using a simple expression.


{
  "message": "Please screen this candidate: https://www.linkedin.com/in/james-carter-52a1b3c/",
  "linkedinUrl": "https://www.linkedin.com/in/james-carter-52a1b3c/",
  "timestamp": "2026-04-08T10:15:00Z"
}
  
💡 Tip: Make sure your recruiter includes the full LinkedIn profile URL in their message. Private or incomplete URLs will cause the Apify scraper to fail silently. Train users to send the full URL like https://www.linkedin.com/in/username/.
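The URL extraction can be sketched as a small helper. The function name is illustrative; in n8n this logic would live in a Set-node expression or a Code node:

```javascript
// Pull the first LinkedIn profile URL out of the incoming Telegram text.
// extractLinkedInUrl is a hypothetical helper name.
function extractLinkedInUrl(messageText) {
  const match = messageText.match(
    /https?:\/\/(?:www\.)?linkedin\.com\/in\/[A-Za-z0-9._%-]+\/?/
  );
  return match ? match[0] : null; // null lets a downstream IF node reject the message
}

const msg = "Please screen this candidate: https://www.linkedin.com/in/james-carter-52a1b3c/";
console.log(extractLinkedInUrl(msg));
// → https://www.linkedin.com/in/james-carter-52a1b3c/
```

Returning `null` instead of throwing makes it easy to branch to a "please resend the full URL" reply.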

Step 2: Apify LinkedIn Scraper Trigger and Polling

Once you have the URL, send it to Apify’s LinkedIn Profile Scraper actor. Apify will queue the job asynchronously, so you need to poll for results. The workflow launches the actor, then checks its status repeatedly until it’s done.

Configuration: In the Apify node, set the Actor ID to your LinkedIn Scraper actor, pass the LinkedIn URL as input, and call the actor. Store the Run ID for polling. Use a Wait node to space out polling calls (2–3 seconds apart, 30–40 attempts). Once the status shows “Succeeded,” fetch the results from the Apify output dataset.


{
  "runId": "YOUR_RUN_ID_FROM_APIFY",
  "status": "Succeeded",
  "profile": {
    "name": "James Carter",
    "headline": "Senior Software Engineer at TechCorp",
    "location": "Austin, TX, USA",
    "about": "10+ years building scalable systems. Expertise in cloud architecture and team leadership.",
    "experience": [
      {
        "title": "Senior Software Engineer",
        "company": "TechCorp Inc.",
        "duration": "2022–Present",
        "description": "Led platform modernization, reducing infrastructure costs by 35%."
      }
    ],
    "skills": ["Python", "AWS", "System Design", "Leadership", "Docker", "Kubernetes"],
    "endorsements": 247
  }
}
  
📌 Important: Apify’s LinkedIn scraper may hit rate limits if called too frequently. Keep polling intervals at 2–3 seconds and fail gracefully if a profile can’t be scraped (use an IF node to check the status).
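The poll-until-done loop that the Wait node implements can be sketched as follows. `checkStatus` stands in for the Apify "Get run" call; in the real workflow this is a node loop, not inline code:

```javascript
// Sketch of the Apify polling pattern: check run status every few
// seconds until it succeeds, fails, or we give up.
async function pollRun(checkStatus, { intervalMs = 2500, maxAttempts = 40 } = {}) {
  for (let attempt = 1; attempt <= maxAttempts; attempt++) {
    const status = await checkStatus();
    if (status === "SUCCEEDED") return { done: true, attempts: attempt };
    if (status === "FAILED" || status === "ABORTED") {
      throw new Error(`Apify run ended with status ${status}`);
    }
    await new Promise((r) => setTimeout(r, intervalMs)); // the Wait node's job
  }
  return { done: false, attempts: maxAttempts }; // timed out; handle gracefully
}

// Simulated run that succeeds on the third status check.
let calls = 0;
const fakeStatus = async () => (++calls < 3 ? "RUNNING" : "SUCCEEDED");
pollRun(fakeStatus, { intervalMs: 10 }).then((r) => console.log(r.done, r.attempts));
// → true 3
```

Capping `maxAttempts` is what keeps a stuck scrape from blocking the workflow forever.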

Step 3: Fetch Job Description from Google Drive

Your Google Drive folder holds job descriptions as PDFs. In parallel with the Apify scrape, the workflow fetches the correct job description. A Set node stores the job title or folder ID for lookup, and a Google Drive node finds and downloads the PDF.

Configuration: Use Google Drive credentials (OAuth2). Set the operation to “Download File” and specify your folder structure. You may hard-code the folder ID or pass it dynamically based on recruiter input. Extract the PDF file ID and download the binary content.


{
  "jobDescriptionFile": {
    "id": "1a2b3c4d5e6f7g8h9i0j_JOB_DESC",
    "name": "Senior_Software_Engineer_2026.pdf",
    "mimeType": "application/pdf",
    "size": 45230,
    "downloadUrl": "https://drive.google.com/file/d/1a2b3c4d5e6f7g8h9i0j_JOB_DESC/view"
  }
}
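If you route the lookup dynamically, a small matching step can pick the right PDF from a folder listing. This sketch assumes a `Title_Words_Year.pdf` naming convention, which is not part of the original workflow:

```javascript
// Pick the job-description PDF whose filename matches the requested title.
// The naming convention is an assumption; adapt to your folder.
function pickJobDescription(files, jobTitle) {
  const needle = jobTitle.toLowerCase().replace(/\s+/g, "_");
  return (
    files.find(
      (f) => f.mimeType === "application/pdf" && f.name.toLowerCase().includes(needle)
    ) ?? null
  );
}

const files = [
  { name: "Senior_Software_Engineer_2026.pdf", mimeType: "application/pdf" },
  { name: "Product_Manager_2026.pdf", mimeType: "application/pdf" },
];
console.log(pickJobDescription(files, "Senior Software Engineer").name);
// → Senior_Software_Engineer_2026.pdf
```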
  

Part 2 — Document Processing and AI Analysis

Step 4: Extract Text from PDF Job Description

PDF files need to be converted to plain text before the AI agents can analyze them. An n8n PDF-extraction node (Extract From File in recent versions), or a Code node using a text-extraction library such as pdf-parse, parses the PDF and outputs clean text.

Configuration: Feed the downloaded PDF binary into a PDF Extract node. Set it to extract all text. The output is clean, line-broken text suitable for LLM processing.


{
  "jobDescriptionText": "Senior Software Engineer - Full-Time, Austin, TX\n\nAbout the Role:\nWe're seeking a Senior Software Engineer to lead our platform modernization initiative...\n\nKey Responsibilities:\n- Design and implement scalable microservices\n- Mentor junior engineers\n- Collaborate with product and design teams\n\nRequired Skills:\n- 8+ years software engineering experience\n- Proficiency in Python, Go, or Rust\n- AWS or GCP certification preferred\n- Strong system design fundamentals\n\nCompensation:\n$180,000–$220,000 + equity"
}
  

Steps 5–7: Three Parallel AI Agents (Gemini 2.5 Pro with LangChain)

This is where the intelligence happens. Three specialized LangChain agents, each powered by Google Gemini 2.5 Pro, evaluate the candidate from different angles. They run in parallel for speed, each receiving the same candidate profile and job description but with a different prompt.

Agent 1: JD Matching Agent
Purpose: Assign a match score (0–100%) and list which job requirements the candidate meets and which they lack.
Prompt: “You are a recruitment analyst. Compare this candidate’s profile to the job description. Score the match from 0–100%. List which required skills are present, which are missing, and which desired skills the candidate has. Be precise and numerical.”


{
  "matchScore": 78,
  "requiredSkillsMet": ["Python", "AWS", "System Design", "Leadership"],
  "requiredSkillsMissing": [],
  "desiredSkillsPresent": ["Docker", "Kubernetes"],
  "reasoning": "Strong match on core backend skills and architecture. Leadership experience aligns with mentoring expectations."
}
  

Agent 2: Detailed Analysis Agent
Purpose: Provide a deep-dive evaluation of the candidate’s background, strengths, gaps, and how they’d perform in the role.
Prompt: “You are a senior recruiter reviewing this candidate. Write a comprehensive 2–3 paragraph evaluation of their fit for the role. Consider their experience trajectory, demonstrated technical depth, leadership maturity, and any red flags or concerns. Be constructive but honest.”


{
  "analysis": "James Carter presents a strong profile for this role. His 10 years in software engineering, with the last 4 focused on platform modernization at TechCorp, directly mirror the responsibilities outlined. His experience leading infrastructure cost optimization demonstrates both technical depth and business acumen. However, his background is predominantly in established, large-scale systems; this role will require exposure to startup-pace decision-making. His skill set is very current—Docker, Kubernetes, and AWS are all heavily weighted in the job description. No significant gaps identified beyond the typical onboarding curve."
}
  

Agent 3: Recruiter Recommendation Agent
Purpose: Synthesize the other two analyses and produce a hiring recommendation for the recruiter (e.g., “Strong Yes,” “Yes with caveats,” “No”).
Prompt: “Based on the candidate profile, job description, and the above analyses, provide a short hiring recommendation. Choose from: ‘Strong Yes—move to phone screen,’ ‘Yes, with caveats,’ ‘Maybe—needs clarification on specific skills,’ ‘No—not a fit.’ Explain your recommendation in 1–2 sentences.”


{
  "recommendation": "Strong Yes—move to phone screen",
  "rationale": "Carter's technical skills and leadership experience are a strong fit. His platform modernization background directly aligns with the role's core responsibility. Recommend phone screen to assess cultural fit and career motivation."
}
  
💡 Tip: Parallel execution is critical for speed. All three agents should start at the same time (use a Merge node to combine their outputs). If one agent times out, the workflow won’t block the others—use error handlers to catch and log failures gracefully.
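The example outputs above assume the agents return clean JSON. In practice, LLMs sometimes wrap JSON in prose or code fences, so a defensive parsing step is worth adding. This helper is a sketch, not part of the original workflow:

```javascript
// Defensive parse for agent output: tolerate ```json fences and stray prose.
function parseAgentJson(raw) {
  const fenced = raw.match(/```(?:json)?\s*([\s\S]*?)```/);
  const candidate = fenced ? fenced[1] : raw;
  const start = candidate.indexOf("{");
  const end = candidate.lastIndexOf("}");
  if (start === -1 || end === -1) throw new Error("No JSON object found in agent output");
  return JSON.parse(candidate.slice(start, end + 1));
}

console.log(parseAgentJson('Here you go:\n```json\n{"matchScore": 78}\n```').matchScore);
// → 78
```

Wiring this into a Code node between each agent and the Merge step turns a malformed reply into a catchable error instead of a silent bad row.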

Part 3 — Results Storage and Final Output

Step 8: Merge Agent Results and Add to Google Sheets

Once all three agents finish, a Merge node combines their outputs into a single structured result. This consolidated data is then added as a new row to your Google Sheets document, creating a searchable archive of all screening decisions.

Configuration: Set up the Merge node to combine all agent outputs under a single object. In the Google Sheets node, configure the operation to “Append Row” into your spreadsheet. Map each agent result to a column: matchScore, analysis, recommendation, linkedinUrl, timestamp, and candidateName.


{
  "linkedinUrl": "https://www.linkedin.com/in/james-carter-52a1b3c/",
  "candidateName": "James Carter",
  "timestamp": "2026-04-08T10:15:00Z",
  "matchScore": 78,
  "requiredSkillsMet": "Python, AWS, System Design, Leadership",
  "requiredSkillsMissing": "None",
  "analysis": "James Carter presents a strong profile...",
  "recommendation": "Strong Yes—move to phone screen"
}
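The merge-and-map step can be sketched as a single flattening function. The field names mirror the sheet columns; the helper itself is illustrative, since in n8n this is a Merge node plus field mappings:

```javascript
// Flatten the three agent outputs into one row object for "Append Row".
function toSheetRow(profile, jdMatch, analysis, recommendation) {
  return {
    linkedinUrl: profile.linkedinUrl,
    candidateName: profile.name,
    matchScore: jdMatch.matchScore,
    requiredSkillsMet: jdMatch.requiredSkillsMet.join(", "),
    requiredSkillsMissing: jdMatch.requiredSkillsMissing.length
      ? jdMatch.requiredSkillsMissing.join(", ")
      : "None", // keep the cell readable instead of empty
    analysis: analysis.analysis,
    recommendation: recommendation.recommendation,
    timestamp: new Date().toISOString(),
  };
}

const row = toSheetRow(
  { linkedinUrl: "https://www.linkedin.com/in/james-carter-52a1b3c/", name: "James Carter" },
  { matchScore: 78, requiredSkillsMet: ["Python", "AWS"], requiredSkillsMissing: [] },
  { analysis: "Strong profile..." },
  { recommendation: "Strong Yes—move to phone screen" }
);
console.log(row.requiredSkillsMissing);
// → None
```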
  

Step 9: Format and Send Telegram Summary

The final step sends a formatted message back to the recruiter via Telegram. The message includes the match score, the recommendation, and a brief summary for quick review.

Configuration: Use a Set node to format the output as a readable Telegram message (emoji, line breaks, bold text). Then use a Telegram Send node to deliver it to the recruiter’s chat ID (or the original chat where they sent the URL).


{
  "telegramMessage": "✅ SCREENING COMPLETE\\n\\nCandidate: James Carter\\nMatch Score: 78%\\nRecommendation: Strong Yes—move to phone screen\\n\\nProfile: Senior Software Engineer at TechCorp (10 yrs exp)\\nKey Fit: Python, AWS, System Design, Leadership all present.\\n\\nFull analysis saved to screening sheet.",
  "chatId": "YOUR_RECRUITER_CHAT_ID"
}
  
💡 Tip: Add a conditional branch here. If the recommendation is “No,” prefix the Telegram message with a ⛔ emoji. If it’s “Strong Yes,” use a 🚀 emoji. This gives the recruiter instant visual feedback before they even read the details.
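Putting the formatting and the conditional emoji together, the Set-node logic might look like this sketch (function and field names are assumptions):

```javascript
// Build the Telegram summary, picking an emoji from the recommendation.
function formatSummary(row) {
  const prefix = row.recommendation.startsWith("Strong Yes")
    ? "🚀"
    : row.recommendation.startsWith("No")
      ? "⛔"
      : "✅";
  return [
    `${prefix} SCREENING COMPLETE`,
    "",
    `Candidate: ${row.candidateName}`,
    `Match Score: ${row.matchScore}%`,
    `Recommendation: ${row.recommendation}`,
  ].join("\n");
}

console.log(
  formatSummary({
    candidateName: "James Carter",
    matchScore: 78,
    recommendation: "Strong Yes—move to phone screen",
  })
);
// first line printed: 🚀 SCREENING COMPLETE
```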

The Data Structure

All screening results are logged to a Google Sheets document. This becomes your searchable candidate database. Each row captures one screening event, with columns for the LinkedIn URL, candidate name, all three agent outputs, and the timestamp.

| Column | Type | Example | Description |
| --- | --- | --- | --- |
| Date | Date | 2026-04-08 | Screening date (auto-populated with the workflow timestamp) |
| Candidate Name | Text | James Carter | Full name from the LinkedIn profile |
| LinkedIn URL | URL | https://www.linkedin.com/in/james-carter-52a1b3c/ | Link to the original profile |
| Job Title Screened For | Text | Senior Software Engineer | Which job description was used |
| Match Score | Number | 78 | 0–100 from Agent 1 |
| Required Skills Met | Text | Python, AWS, System Design, Leadership | Comma-separated list from Agent 1 |
| Required Skills Missing | Text | (none) | Gaps the candidate should address |
| Detailed Analysis | Long Text | James Carter presents a strong profile… | Full paragraph from Agent 2 |
| Recommendation | Text | Strong Yes—move to phone screen | Decision from Agent 3 |
📌 Important: Set up the Google Sheets document with these column headers before importing the workflow. The column names must match exactly—the workflow expects Candidate Name, Match Score, etc. If you rename columns, update the field mappings in the Google Sheets node.

Full System Flow

Here’s the complete end-to-end journey, from recruiter message to final Telegram response:

┌─────────────────────────────────────────────────────────────────────────┐
│                    FULL AI SCREENING PIPELINE                          │
│                                                                         │
│  TRIGGER                                                                │
│  ┌──────────────────┐                                                   │
│  │ Telegram Message │ (Recruiter sends LinkedIn URL)                   │
│  └────────┬─────────┘                                                   │
│           │                                                             │
│  DATA GATHERING (Parallel)                                             │
│           ├──→ Apify LinkedIn Scraper ────→ Poll for Results           │
│           │                                                             │
│           └──→ Google Drive ────→ Download Job Description PDF         │
│                                           │                            │
│                                   Extract PDF Text                     │
│           │                                                             │
│           ↓ (Wait for both)                                            │
│           │                                                             │
│  AI ANALYSIS (3 Parallel Agents)                                       │
│           ├──→ Agent 1: JD Match Score & Skills Gap                    │
│           ├──→ Agent 2: Detailed Candidate Analysis                    │
│           └──→ Agent 3: Recruiter Recommendation                       │
│           │                                                             │
│           ↓ (Merge all agent outputs)                                  │
│           │                                                             │
│  PERSISTENCE & OUTPUT                                                  │
│           ├──→ Add Row to Google Sheets                                │
│           │                                                             │
│           └──→ Format & Send Telegram Summary                          │
│                                                                         │
│  ┌──────────────────────────────────────┐                              │
│  │ Recruiter Receives Summary in Telegram│ (with score + recommendation)
│  └──────────────────────────────────────┘                              │
│                                                                         │
│  ✅ Full screening complete in 30–60 seconds                           │
│                                                                         │
└─────────────────────────────────────────────────────────────────────────┘
  

Testing Your Workflow

Before letting your team loose on the workflow, run through this test plan to confirm everything is wired correctly.

  1. Send a test LinkedIn URL via Telegram: Use a public profile (e.g., your own LinkedIn or a known public figure). Send a message to your bot like: “Please screen: https://www.linkedin.com/in/sarah-thompson-engineering/”
  2. Monitor the n8n execution: Open the workflow’s execution history in n8n and watch for successful node completion. Check that Apify returns profile data, Google Drive successfully downloads the PDF, and all three agents produce output.
  3. Check Telegram for the response: Within 30–60 seconds, you should receive a formatted message with a match score and recommendation.
  4. Verify the Google Sheets row: Open your screening spreadsheet and confirm that a new row was added with all the candidate details and agent analysis.
Troubleshooting table:

| Problem | Likely Cause | Fix |
| --- | --- | --- |
| Telegram message not triggering the workflow | Bot token incorrect, or the Telegram node isn't listening on the right chat | Re-check the bot token in the Telegram credential; confirm you're messaging the correct bot |
| Apify scraper returns an empty profile | LinkedIn profile is private, or the URL is malformed | Test with a public profile; ensure the recruiter sends the full URL (https://www.linkedin.com/in/username/) |
| Google Drive node returns "File not found" | Job description PDF is not in the specified folder, or the folder ID is wrong | Double-check the folder ID in the Google Drive config; confirm the PDF exists and is accessible |
| Gemini agents time out or return empty responses | API quota exceeded, malformed prompt, or unauthenticated credential | Check the Google Cloud console for quota limits; re-authenticate the Gemini credential; simplify the prompt if needed |
| Google Sheets append fails | Column names don't match, the sheet is read-only, or credentials lack write access | Verify the column headers match exactly; check sheet permissions; re-authenticate the Google Sheets credential |
| Telegram response is delayed (>2 minutes) | Apify polling is slow, or the Gemini API is slow | Reduce the polling interval slightly (1–2 s); check the n8n logs for slow node execution |

Frequently Asked Questions

Can I screen candidates for multiple jobs at once?

Yes. Instead of hard-coding a single job description, modify the workflow to accept a job title as an input parameter. Add a conditional step that looks up the corresponding PDF from Google Drive based on the title sent in the Telegram message. For example, if the recruiter sends “Screen for Senior Software Engineer,” the workflow finds and uses that specific job description.

What if Apify can’t scrape a LinkedIn profile?

Apify may fail on private profiles, suspended profiles, or if LinkedIn rate-limits the scraper. Add an error handler branch after the Apify polling step. If the status is “Failed,” send a message back to the recruiter explaining the issue and ask them to provide a public-facing profile link or a resume PDF instead. You can then use a PDF extractor node to parse the resume and proceed with the same AI agents.

How much does this cost to run?

Costs depend on your service usage. The Gemini API bills per token, with rates that vary by model (check Google's current rate card); a typical screening run uses about 5,000–8,000 tokens, which generally works out to a few cents per candidate. Apify charges per actor run (the LinkedIn scraper is roughly $0.05–$0.15 per run). Google Sheets and Google Drive are included with your Google account at no extra charge, and the Telegram bot is free. All told, expect roughly $0.10–$0.30 per screening.

Can I customize the AI agents’ evaluation criteria?

Absolutely. Each agent’s instructions are defined in the LangChain prompt. Edit the prompt in each agent node to emphasize different criteria. For example, if your role prioritizes “leadership and mentoring ability,” adjust Agent 2’s prompt to focus on those traits. Or if you want more detail, ask the agents to return structured JSON with sub-scores instead of prose.

What if the recruiter needs to screen a candidate without a LinkedIn profile?

Create an alternative input path. After the initial Telegram message, add a conditional step: if the message contains a LinkedIn URL, proceed with Apify; if it contains a resume attachment or a Drive link, fetch and parse that instead. Both paths merge at the “AI Analysis” stage, so the agents evaluate the same data regardless of source.

How do I integrate this with my existing ATS (Applicant Tracking System)?

Most modern ATSs expose an API for candidate creation. After screening results are added to Google Sheets, add a conditional node that checks the recommendation. If it’s “Strong Yes,” make an HTTP POST request to your ATS’s API with the candidate details, match score, and recruiter notes. This creates a pre-filled candidate record that the recruiter can review and move forward in your hiring workflow.
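As a sketch, the payload-building step before that HTTP POST could look like this. Every field name here is hypothetical; map them to whatever your ATS's API actually expects:

```javascript
// Build the ATS request body for a "Strong Yes" candidate.
// All field names are illustrative, not any specific ATS's schema.
function buildAtsPayload(row) {
  if (!row.recommendation.startsWith("Strong Yes")) return null; // only push strong fits
  return {
    name: row.candidateName,
    source: "linkedin-screening",
    score: row.matchScore,
    notes: row.recommendation,
    profileUrl: row.linkedinUrl,
  };
}

console.log(buildAtsPayload({ recommendation: "No—not a fit" }));
// → null
```

The conditional node in n8n plays the role of the early `return null`: only "Strong Yes" rows ever reach the HTTP Request node.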

Get the AI Candidate Screening Template

Stop screening manually. Import the complete 55-node workflow in under 10 minutes, configure your credentials, and let AI agents handle first-round reviews while you focus on real conversations.

Get the Template →

Instant download · Works on n8n Cloud and self-hosted · Setup guide included

What’s Next?

  • Add a second-stage review: Create a companion workflow that triggers when a recruiter flags a candidate for deeper evaluation. It can compile a detailed dossier from LinkedIn, recent GitHub contributions, and portfolio links.
  • Expand to video interviews: Integrate a video interview scheduling tool (e.g., Calendly, Slack) to automatically send a booking link to “Strong Yes” candidates, streamlining the next step.
  • Multi-language support: Use Google Translate in parallel with Gemini to evaluate candidates from non-English profiles, opening your talent pool globally.
  • Scoring refinement: Once you have 20–30 screening results in Google Sheets, analyze which candidates actually performed well in interviews and phone screens. Use that data to fine-tune the agents’ prompts and weightings for even better predictions.
n8n
AI agents
LinkedIn automation
Google Gemini
Telegram
recruitment automation
workflow template

How to Add LinkedIn Post Commenters to HubSpot CRM with n8n

You publish a LinkedIn post, it takes off, and suddenly forty people you’ve never met are commenting on it. Each one of them is a warm lead — someone who already cares about what you have to say. But by the time you’ve finished scrolling through the comments, copying names, and looking people up one by one, the momentum is gone. What if every single one of those commenters appeared in your CRM automatically, complete with their email, job title, and company — ready for your sales team to follow up?

That’s exactly what you’ll build in this guide. Using n8n, Apify, and HubSpot, you’ll create a workflow that scrapes commenters from any LinkedIn post, enriches their profiles with professional data, and pushes qualified contacts straight into your CRM. No manual data entry, no copy-pasting, no missed opportunities.

Prefer to skip the setup? Grab the ready-made template → and be up and running in under 10 minutes.

What You’ll Build

  1. You paste a LinkedIn post URL into a simple web form hosted by n8n.
  2. The workflow scrapes every comment on that post and extracts the commenter’s LinkedIn profile URL.
  3. Each commenter’s profile is enriched through Apify — pulling their email address, job title, company name, city, and country.
  4. If a valid email is found, the workflow creates or updates a contact in HubSpot CRM with all the enriched data.
  5. Commenters without a discoverable email are silently skipped, keeping your CRM clean.

How It Works — The Big Picture

The entire process runs through a single n8n workflow with seven core nodes. Here’s the high-level flow:

┌──────────────────────────────────────────────────────────────────────┐
│  LINKEDIN COMMENTERS → HUBSPOT CRM                                   │
│                                                                      │
│  [Form Trigger]                                                      │
│       │                                                              │
│       ▼                                                              │
│  [Scrape Post Comments]  ──▶  [Loop Over Commenters]                 │
│                                     │                                │
│                                     ▼                                │
│                              [Wait 3 sec]                            │
│                                     │                                │
│                                     ▼                                │
│                              [Enrich Profile]                        │
│                                     │                                │
│                                     ▼                                │
│                              [Extract Fields]                        │
│                                     │                                │
│                                     ▼                                │
│                              [Has Valid Email?]                      │
│                               ╱            ╲                         │
│                          YES ╱              ╲ NO                     │
│                             ╱                ╲                       │
│              [Create/Update            [Skip — No Email]             │
│               HubSpot Contact]                │                      │
│                      │                        │                      │
│                      └────── ◀── Loop ──◀─────┘                      │
└──────────────────────────────────────────────────────────────────────┘

What You’ll Need

  • n8n instance — self-hosted or n8n Cloud (all nodes are built-in, no community nodes required)
  • Apify account — the free tier gives you $5/month of compute, enough to process around 50–100 profiles per run. Sign up here.
  • HubSpot account — free CRM tier works perfectly. You’ll need a Private App Token with crm.objects.contacts.write scope.

Estimated build time: 30–40 minutes from scratch, or under 10 minutes with the template.

Building the Workflow Step by Step

1 Submit LinkedIn Post URL (Form Trigger)

The workflow starts with n8n’s built-in Form Trigger node. This creates a simple web form where you paste the LinkedIn post URL. When you submit the form, the workflow fires.

  1. Add a Form Trigger node to your canvas.
  2. Set the Form Title to LinkedIn Post Engagement Capture.
  3. Add one form field: label it LinkedIn Post URL, set it as required, and add a placeholder like https://www.linkedin.com/posts/username_topic-activity-123...
  4. Save the node. n8n will generate a unique form URL you can bookmark.

After submission, the data flowing out of this node looks like:

{
  "LinkedIn Post URL": "https://www.linkedin.com/posts/james-carter_ai-automation-activity-7291234567890-AbCd",
  "submittedAt": "2026-04-08T14:22:00.000Z"
}
💡 Tip: You can find the URL of any LinkedIn post by clicking the three dots on the post and selecting “Copy link to post.” Make sure you’re copying the full URL, not a shortened one.
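
The form accepts any text, so a quick pre-check on the submitted value can save Apify credits on typos. Here is a minimal sketch you could drop into an n8n Code node right after the trigger; `isLinkedInPostUrl` and the accepted path prefixes are illustrative assumptions, not part of the template:

```javascript
// Hypothetical sanity check: reject submissions that don't look like
// LinkedIn post URLs before spending any scraping credits.
function isLinkedInPostUrl(url) {
  try {
    const u = new URL(url);
    return (
      u.hostname.endsWith("linkedin.com") &&
      (u.pathname.startsWith("/posts/") || u.pathname.startsWith("/feed/update/"))
    );
  } catch {
    return false; // not a parseable URL at all
  }
}

console.log(isLinkedInPostUrl(
  "https://www.linkedin.com/posts/james-carter_ai-automation-activity-7291234567890-AbCd"
)); // true
console.log(isLinkedInPostUrl("https://example.com/post/123")); // false
```

A check like this lets the workflow fail fast with a clear message instead of returning an empty comment array from the scraper.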

2 Scrape Post Comments (HTTP Request → Apify)

This node sends the LinkedIn post URL to Apify’s LinkedIn Post Comments Scraper actor, which returns an array of all commenters with their profile URLs and comment text.

  1. Add an HTTP Request node and connect it to the Form Trigger.
  2. Set the Method to POST.
  3. Set the URL to: https://api.apify.com/v2/acts/curious_coder~linkedin-post-comments-scraper/run-sync-get-dataset-items
  4. Under Body, select JSON and enter:
    {
      "postUrl": "{{ $json['LinkedIn Post URL'] }}",
      "maxComments": 100
    }
  5. Set the Timeout to 120000 ms (2 minutes) — scraping takes time.
  6. Under Credentials, add your Apify API token as an HTTP Header Auth credential (header name: Authorization, value: Bearer YOUR_TOKEN). Alternatively, append ?token=YOUR_APIFY_TOKEN to the URL as a query parameter; use one method or the other, not both.

The response is an array of comment objects. Each item looks like:

{
  "profileUrl": "https://www.linkedin.com/in/emily-rodriguez-marketing",
  "commenterName": "Emily Rodriguez",
  "commentText": "This is such a great breakdown! We've been looking at something similar.",
  "timestamp": "2026-04-07T09:15:00.000Z"
}
💡 Tip: The maxComments parameter caps how many comments are scraped. For posts with hundreds of comments, start with 50 to test, then increase once you’ve confirmed the workflow runs smoothly.
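
If you want to sanity-check the actor input outside n8n first, the request the node sends can be assembled in a few lines. A sketch using the actor ID from the steps above; `buildCommentScrapeRequest` is a hypothetical helper and APIFY_TOKEN a placeholder:

```javascript
// Assemble the run-sync call: actor ID and parameter names match the
// tutorial; the token is passed as a query parameter here for simplicity.
function buildCommentScrapeRequest(postUrl, token, maxComments = 100) {
  const actor = "curious_coder~linkedin-post-comments-scraper";
  return {
    method: "POST",
    url: `https://api.apify.com/v2/acts/${actor}/run-sync-get-dataset-items?token=${encodeURIComponent(token)}`,
    headers: { "Content-Type": "application/json" },
    body: JSON.stringify({ postUrl, maxComments }),
  };
}

const req = buildCommentScrapeRequest(
  "https://www.linkedin.com/posts/james-carter_ai-automation-activity-7291234567890-AbCd",
  "APIFY_TOKEN",
  50
);
console.log(JSON.parse(req.body).maxComments); // 50
```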

3 Loop Over Commenters (Split In Batches)

The comment scraper returns all comments as an array. The Loop node processes them one at a time, which is important because the enrichment step hits an external API that has rate limits.

  1. Add a Split In Batches node and connect it to the HTTP Request node.
  2. Set Batch Size to 1 — we process one commenter per iteration.

The loop feeds each individual commenter object to the next node in sequence.

4 Wait Between Requests (Wait Node)

Adding a 3-second pause between enrichment requests prevents you from hitting Apify’s rate limits and keeps the workflow running reliably even with large comment batches.

  1. Add a Wait node after the loop’s “process” output.
  2. Set Amount to 3 and Unit to seconds.
📌 Note: If you’re on Apify’s paid plan with higher rate limits, you can reduce this to 1 second or remove it entirely.
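
Conceptually, the loop plus the Wait node form a simple sequential rate limiter. Here is the same pattern in plain JavaScript, with `processSequentially` and `enrich` as illustrative stand-ins for the n8n nodes and the Apify call:

```javascript
// Process items one at a time with a pause between external calls,
// mirroring Split In Batches (size 1) followed by a Wait node.
const sleep = (ms) => new Promise((resolve) => setTimeout(resolve, ms));

async function processSequentially(items, enrich, delayMs = 3000) {
  const results = [];
  for (const item of items) {
    results.push(await enrich(item)); // one commenter per iteration
    await sleep(delayMs);             // stay under the API's rate limit
  }
  return results;
}

// Usage with a fake enricher and a short delay for demonstration:
processSequentially(
  [{ profileUrl: "a" }, { profileUrl: "b" }],
  async (c) => ({ ...c, enriched: true }),
  10
).then((out) => console.log(out.length)); // 2
```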

5 Enrich Profile (HTTP Request → Apify)

This is where the magic happens. For each commenter, we call Apify’s LinkedIn Profile Scraper to pull their full professional details — email, headline, company, location, and more.

  1. Add another HTTP Request node.
  2. Set the Method to POST.
  3. Set the URL to: https://api.apify.com/v2/acts/curious_coder~linkedin-profile-scraper/run-sync-get-dataset-items
  4. Under Body, set JSON to:
    {
      "profileUrls": ["{{ $json.profileUrl }}"]
    }
  5. Set the Timeout to 120000 ms.
  6. Use the same Apify credential as Step 2.

The enriched profile data comes back looking something like this:

{
  "firstName": "Emily",
  "lastName": "Rodriguez",
  "email": "emily.rodriguez@techcorp.com",
  "headline": "VP of Marketing at TechCorp",
  "company": "TechCorp Inc.",
  "city": "Austin",
  "country": "US",
  "url": "https://www.linkedin.com/in/emily-rodriguez-marketing",
  "connections": 2847
}

6 Extract Profile Fields (Set Node)

The enrichment response contains dozens of fields. This Set node extracts only the ones you actually need for your CRM, creating a clean, standardized data object.

  1. Add a Set node and connect it to the Enrich Profile node.
  2. Set the Mode to Manual Mapping.
  3. Map these fields:
    Output Field    Expression
    ─────────────   ──────────────────────
    email           ={{ $json.email }}
    firstName       ={{ $json.firstName }}
    lastName        ={{ $json.lastName }}
    jobTitle        ={{ $json.headline }}
    company         ={{ $json.company }}
    city            ={{ $json.city }}
    country         ={{ $json.country }}
    linkedinUrl     ={{ $json.url }}

After this node, each item has a clean, flat structure:

{
  "email": "emily.rodriguez@techcorp.com",
  "firstName": "Emily",
  "lastName": "Rodriguez",
  "jobTitle": "VP of Marketing at TechCorp",
  "company": "TechCorp Inc.",
  "city": "Austin",
  "country": "US",
  "linkedinUrl": "https://www.linkedin.com/in/emily-rodriguez-marketing"
}
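
In code, the Set node’s mapping amounts to picking eight keys and renaming two of them (headline becomes jobTitle, url becomes linkedinUrl). A sketch of that transformation; defaulting missing fields to empty strings is a design choice here, not n8n behavior:

```javascript
// Equivalent of the Set node's mapping: keep only the eight CRM fields
// out of the much larger enrichment response.
function extractProfileFields(raw) {
  return {
    email: raw.email ?? "",
    firstName: raw.firstName ?? "",
    lastName: raw.lastName ?? "",
    jobTitle: raw.headline ?? "",   // renamed: headline -> jobTitle
    company: raw.company ?? "",
    city: raw.city ?? "",
    country: raw.country ?? "",
    linkedinUrl: raw.url ?? "",     // renamed: url -> linkedinUrl
  };
}

const clean = extractProfileFields({
  firstName: "Emily",
  lastName: "Rodriguez",
  email: "emily.rodriguez@techcorp.com",
  headline: "VP of Marketing at TechCorp",
  company: "TechCorp Inc.",
  city: "Austin",
  country: "US",
  url: "https://www.linkedin.com/in/emily-rodriguez-marketing",
  connections: 2847, // dropped by the mapping
});
console.log(clean.jobTitle); // "VP of Marketing at TechCorp"
```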

7 Has Valid Email? (IF Node)

Not every LinkedIn profile has a publicly discoverable email address. This IF node checks whether the enrichment found a valid email before attempting to create a CRM contact.

  1. Add an IF node.
  2. Add two conditions (AND):
    • {{ $json.email }} exists
    • {{ $json.email }} is not empty

The true branch goes to HubSpot. The false branch goes to a No-Op node that loops back, silently skipping the commenter.

💡 Tip: Apify typically finds emails for 40–60% of LinkedIn profiles. If you need higher hit rates, consider adding a secondary enrichment service like Hunter.io or Apollo as a fallback before the IF node.
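
Expressed as a function, the IF node’s check looks like the predicate below. The regex goes a step further than the two n8n conditions by rejecting strings that don’t look like addresses; treat that part as an optional hardening, not something the template does:

```javascript
// The email must exist, be a non-empty string, and (stricter than the
// two n8n conditions) roughly resemble an address.
function hasValidEmail(contact) {
  const email = contact.email;
  if (typeof email !== "string" || email.trim() === "") return false;
  return /^[^\s@]+@[^\s@]+\.[^\s@]+$/.test(email);
}

console.log(hasValidEmail({ email: "emily.rodriguez@techcorp.com" })); // true
console.log(hasValidEmail({ email: "" }));  // false
console.log(hasValidEmail({}));             // false
```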

8 Create or Update HubSpot Contact (HubSpot Node)

The final action node. It takes the enriched profile data and creates a new contact in HubSpot — or updates an existing one if a contact with the same email already exists.

  1. Add a HubSpot node, set Resource to Contact, and set Operation to Create/Update.
  2. Set Authentication to App Token.
  3. Set the Email field to ={{ $json.email }}.
  4. Under Additional Fields, map:
    HubSpot Field   Expression
    ─────────────   ──────────────────────
    First Name      ={{ $json.firstName }}
    Last Name       ={{ $json.lastName }}
    Job Title       ={{ $json.jobTitle }}
    Company Name    ={{ $json.company }}
    City            ={{ $json.city }}
    Country         ={{ $json.country }}
    Website         ={{ $json.linkedinUrl }}

Connect both the HubSpot node’s output and the “Skip — No Email” node back to the Loop node to continue processing the next commenter.

📌 Note: The HubSpot node uses upsert behavior by default — if a contact with the same email already exists, it updates their fields instead of creating a duplicate. This keeps your CRM clean even if you run the workflow on multiple posts where the same people comment.
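
The upsert guarantee can be pictured with a tiny in-memory model: contacts keyed by email (lowercased here as a normalization choice), where a repeat email merges fields instead of adding a row. `upsertContacts` is a hypothetical sketch of the behavior, not the HubSpot API:

```javascript
// Upsert in miniature: a second run with the same commenter updates the
// existing record instead of creating a duplicate.
function upsertContacts(existing, incoming) {
  const byEmail = new Map(existing.map((c) => [c.email.toLowerCase(), c]));
  for (const c of incoming) {
    const key = c.email.toLowerCase();
    byEmail.set(key, { ...(byEmail.get(key) ?? {}), ...c }); // update or create
  }
  return [...byEmail.values()];
}

let crm = upsertContacts([], [{ email: "emily@techcorp.com", firstName: "Emily" }]);
crm = upsertContacts(crm, [{ email: "Emily@techcorp.com", jobTitle: "VP of Marketing" }]);
console.log(crm.length); // 1 (updated, not duplicated)
```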

The Data Flow

Here’s what a contact record looks like as it moves through the system, from raw LinkedIn comment to polished CRM entry:

Stage                   Data Available                              Example
─────────────────────   ─────────────────────────────────────────   ─────────────────────────────────────────────
After Comment Scrape    Profile URL, name, comment text             emily-rodriguez-marketing
After Enrichment        + email, headline, company, city, country   emily.rodriguez@techcorp.com, VP of Marketing
After Field Extraction  Clean 8-field object ready for CRM          Flat JSON with all mapped fields
In HubSpot              Full contact record with source tracking    Contact card with LinkedIn URL as website

Full System Flow

┌──────────────────────────────────────────────────────────────────────┐
│                                                                      │
│   USER                 n8n WORKFLOW                  HUBSPOT CRM     │
│                                                                      │
│   Pastes URL ──▶ [Form Trigger]                                      │
│                       │                                              │
│                       ▼                                              │
│                  [HTTP Request] ──▶ Apify Comments API               │
│                       │                                              │
│                       ▼                                              │
│                  [Loop: 1 at a time]                                 │
│                       │                                              │
│                       ▼                                              │
│                  [Wait 3 sec]                                        │
│                       │                                              │
│                       ▼                                              │
│                  [HTTP Request] ──▶ Apify Profile API                │
│                       │                                              │
│                       ▼                                              │
│                  [Set: Extract Fields]                               │
│                       │                                              │
│                       ▼                                              │
│                  [IF: Email exists?]                                 │
│                    ╱        ╲                                        │
│                 YES          NO                                      │
│                 ╱              ╲                                     │
│   [HubSpot: Upsert]      [Skip: No-Op]                               │
│         │                      │                                     │
│         │                      │   ──▶  Contact created/updated      │
│         └──── Loop back ◀──────┘                                     │
│                                                                      │
└──────────────────────────────────────────────────────────────────────┘

Testing Your Workflow

  1. Find a test post. Use one of your own LinkedIn posts that has at least 5–10 comments. Avoid posts with hundreds of comments for your first test — keep it small.
  2. Open the form. Click “Test workflow” in n8n, then open the form URL in your browser. Paste the LinkedIn post URL and submit.
  3. Watch the execution. In n8n, you’ll see the workflow run node by node. The comment scraping takes 30–60 seconds. Each profile enrichment takes another 10–30 seconds.
  4. Check HubSpot. Open your HubSpot contacts list and look for the newly created records. Verify that the name, email, job title, and company are populated correctly.
  5. Review skipped contacts. Check the workflow execution log — any commenters without emails will show as passing through the “Skip — No Email” branch.

Troubleshooting

  • No comments returned. Likely cause: the post URL is incorrect or the post has no comments. Fix: copy the URL directly from the post’s share menu and make sure the post is public.
  • Enrichment returns empty data. Likely cause: the Apify token is invalid or has run out of credits. Fix: check your Apify dashboard for remaining credits and regenerate the token.
  • HubSpot returns a 401 error. Likely cause: the App Token doesn’t have the right scopes. Fix: in HubSpot, edit your Private App and ensure crm.objects.contacts.write is enabled.
  • Workflow times out. Likely cause: too many comments combined with slow enrichment. Fix: reduce maxComments to 50, or increase the HTTP Request timeout to 180000 ms.
  • Duplicate contacts in CRM. Likely cause: the Email field is mapped incorrectly. Fix: make sure the HubSpot node’s Email field uses the exact expression ={{ $json.email }}.

Frequently Asked Questions

Does this work with n8n Cloud or only self-hosted?

It works with both. Every node in this workflow is a standard n8n built-in node — no community nodes required. The Apify calls are made through regular HTTP Request nodes, so there’s nothing extra to install.

How many comments can it handle per run?

The template is set to scrape up to 100 comments per post. You can increase this by changing the maxComments parameter, but keep in mind that each profile enrichment uses Apify compute credits. A batch of 100 profiles typically costs around $1–2 on the free tier.

What if a commenter is already in my HubSpot CRM?

The HubSpot node uses upsert logic — it matches on email address. If a contact with that email already exists, their record gets updated with the latest data instead of creating a duplicate. Your CRM stays clean no matter how many times you run it.

Can I use a different CRM instead of HubSpot?

Yes. Swap the HubSpot node for any CRM node that n8n supports — Salesforce, Pipedrive, Zoho CRM, or even a Google Sheets node if you want a lightweight approach. The enrichment pipeline stays the same; you just change the final destination.

Is scraping LinkedIn comments against their terms of service?

Apify handles the data collection through their compliant infrastructure. The data accessed is publicly visible comment information. That said, always review LinkedIn’s current terms and your local data protection regulations before using any automation at scale.

What percentage of profiles actually have an email?

Apify’s LinkedIn Profile Scraper typically discovers email addresses for 40–60% of profiles, depending on the industry and how complete people’s profiles are. B2B professionals in tech and marketing tend to have higher hit rates.

Get the LinkedIn Commenters to HubSpot CRM Template

Skip the 40-minute build. Get the pre-built workflow JSON, step-by-step setup guide, and credentials walkthrough — import it into n8n and start capturing leads in under 10 minutes.

Get the Template →

Instant download · Works on n8n Cloud and self-hosted

What’s Next?

  • Add a Slack notification — get a message in your team channel every time a new contact is added to HubSpot, with their name, company, and the post they commented on.
  • Tag contacts by post topic — use a Set node to add a custom HubSpot property that records which post the contact engaged with, so your sales team knows what they’re interested in.
  • Chain with an email sequence — connect HubSpot to your email tool (Mailchimp, SendGrid, or HubSpot’s own sequences) to automatically send a welcome email to new contacts.
  • Schedule it to run on multiple posts — replace the Form Trigger with a Schedule Trigger and a list of post URLs in a Google Sheet to process several posts on autopilot.

Tags: n8n · LinkedIn · HubSpot · Apify · CRM · Lead Generation · Contact Enrichment · Automation