Blog to Social Posts with AI — free n8n template (RSS + OpenAI + Slack)

Hey n8n community!

Sharing a workflow I built to solve a problem I kept running into: spending 2+ hours manually writing social media content every
time I published a blog post.

The workflow monitors your blog’s RSS feed, automatically fetches full article content when a new post is detected, sends it to
OpenAI, and delivers ready-to-review social posts directly to Slack. The free version covers Twitter and LinkedIn — the paid
version extends this to 5 platforms with content calendar logging and Buffer scheduling.


What It Does

Trigger: RSS feed polling (every 30 minutes by default)

Pipeline:

  1. RSS Trigger detects a new blog post
  2. Basic input validation (catches empty titles/URLs early)
  3. HTTP Request fetches the full page HTML
  4. Code node strips HTML and extracts article text (up to 3000 chars for OpenAI context)
  5. OpenAI GPT-4o-mini generates a Twitter thread + LinkedIn post, both platform-optimized with hashtags
  6. Slack delivers everything to your #content channel, formatted and ready to copy-paste

Cost: About $0.001-0.003 per blog post with GPT-4o-mini. Basically free.
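If you want to see what step 4 is doing without digging through the workflow JSON, the extraction logic boils down to roughly this (same regex approach as the Code node, pulled out as a plain function you can test outside n8n):

```javascript
// Prefer <article>/<main> content, then strip scripts, styles, and
// remaining tags — mirrors the workflow's Extract Article Text node.
function extractArticleText(html, maxChars = 3000) {
  let content = html;
  const articleMatch = html.match(/<article[^>]*>([\s\S]*?)<\/article>/i);
  const mainMatch = html.match(/<main[^>]*>([\s\S]*?)<\/main>/i);
  if (articleMatch) content = articleMatch[1];
  else if (mainMatch) content = mainMatch[1];
  return content
    .replace(/<script[\s\S]*?<\/script>/gi, '')
    .replace(/<style[\s\S]*?<\/style>/gi, '')
    .replace(/<[^>]+>/g, ' ')     // drop remaining tags
    .replace(/&nbsp;/g, ' ')
    .replace(/\s+/g, ' ')         // collapse whitespace
    .trim()
    .substring(0, maxChars);      // cap for the OpenAI context
}
```

Regex-based HTML stripping is crude (it will miss edge cases a real parser handles), but it keeps the Code node dependency-free.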


Setup (5 minutes)

Credentials you need:

  • OpenAI API key (for the Generate Social Posts node)
  • Slack bot token (for posting to your channel)

Steps:

  1. Import the JSON below into n8n
  2. Click the RSS Feed Trigger node → enter your blog’s RSS feed URL
  3. Click Generate Social Posts → create OpenAI credential → paste API key
  4. Click Slack node → create Slack credential → paste bot token
  5. Update the channel name to your preferred channel
  6. Click Test Workflow to verify
  7. Activate — it runs automatically from here

Workflow JSON

{
  "name": "Blog → Social Posts (AI) — Twitter + LinkedIn → Slack (Free Version)",
  "nodes": [
    {
      "parameters": {
        "feedUrl": "https://yourblog.com/feed",
        "options": {}
      },
      "id": "f2a1b3c4-1001-4e00-b001-000000000004",
      "name": "RSS Feed Trigger",
      "type": "n8n-nodes-base.rssFeedReadTrigger",
      "typeVersion": 1,
      "position": [280, 450],
      "polling": true
    },
    {
      "parameters": {
        "mode": "runOnceForEachItem",
        "jsCode": "const item = $input.item.json;\nconst title = item.title || '';\nconst link = item.link || item.url || item.guid || '';\nif (!title || !link) throw new Error('Missing title or link from RSS feed');\nreturn { json: { title: title.trim(), link: link.trim(), description: (item.description || item.summary || '').substring(0, 500) } };"
      },
      "id": "f2a1b3c4-1002-4e00-b002-000000000005",
      "name": "Validate Input",
      "type": "n8n-nodes-base.code",
      "typeVersion": 2,
      "position": [560, 450]
    },
    {
      "parameters": {
        "url": "={{ $json.link }}",
        "options": { "response": { "response": { "fullResponse": false, "responseFormat": "text" } }, "timeout": 15000 }
      },
      "id": "f2a1b3c4-1003-4e00-b003-000000000006",
      "name": "Fetch Blog Content",
      "type": "n8n-nodes-base.httpRequest",
      "typeVersion": 4.2,
      "position": [840, 450]
    },
    {
      "parameters": {
        "mode": "runOnceForEachItem",
        "jsCode": "const html = $input.item.json.data || $input.item.json.body || '';\nconst title = $items('Validate Input')[0].json.title;\nconst link = $items('Validate Input')[0].json.link;\nlet content = html;\nconst articleMatch = html.match(/<article[^>]*>([\\s\\S]*?)<\\/article>/i);\nconst mainMatch = html.match(/<main[^>]*>([\\s\\S]*?)<\\/main>/i);\nif (articleMatch) content = articleMatch[1];\nelse if (mainMatch) content = mainMatch[1];\ncontent = content.replace(/<script[\\s\\S]*?<\\/script>/gi, '').replace(/<style[\\s\\S]*?<\\/style>/gi, '').replace(/<[^>]+>/g, ' ').replace(/&nbsp;/g, ' ').replace(/\\s+/g, ' ').trim();\nreturn { json: { title, url: link, full_text: content.substring(0, 3000) } };"
      },
      "id": "f2a1b3c4-1004-4e00-b004-000000000007",
      "name": "Extract Article Text",
      "type": "n8n-nodes-base.code",
      "typeVersion": 2,
      "position": [1120, 450]
    },
    {
      "parameters": {
        "resource": "chat",
        "model": { "__rl": true, "mode": "list", "value": "gpt-4o-mini" },
        "messages": {
          "values": [
            { "content": "You are a social media content strategist. Repurpose blog posts into platform-optimized content.\n\nRules:\n1. Match the tone of the original post\n2. Twitter: max 280 chars per tweet, thread of 3-5 tweets\n3. LinkedIn: professional, 150-300 words, line breaks\n4. Include 5-8 relevant hashtags per platform\n5. Include the blog URL naturally\n\nRespond ONLY with valid JSON. No markdown.", "role": "system" },
            { "content": "=Repurpose this blog post:\n\nTitle: {{ $json.title }}\nURL: {{ $json.url }}\nContent:\n{{ $json.full_text }}\n\nGenerate JSON:\n{\"twitter_thread\":[\"tweet1\",\"tweet2\",\"tweet3\"],\"linkedin_post\":\"text\",\"hashtags\":{\"twitter\":[],\"linkedin\":[]}}", "role": "user" }
          ]
        },
        "options": { "temperature": 0.7, "maxTokens": 1500 }
      },
      "id": "f2a1b3c4-2001-4e00-c001-000000000008",
      "name": "Generate Social Posts",
      "type": "@n8n/n8n-nodes-langchain.openAi",
      "typeVersion": 1.8,
      "position": [1400, 450],
      "credentials": { "openAiApi": { "id": "1", "name": "OpenAI Account" } }
    },
    {
      "parameters": {
        "mode": "runOnceForEachItem",
        "jsCode": "const raw = $input.item.json.message?.content || '';\nlet parsed;\ntry { parsed = JSON.parse(raw.replace(/```json\\n?/g, '').replace(/```/g, '').trim()); } catch (e) { throw new Error('Failed to parse OpenAI response: ' + e.message); }\nconst title = $items('Extract Article Text')[0].json.title;\nconst url = $items('Extract Article Text')[0].json.url;\nif (!parsed.twitter_thread) parsed.twitter_thread = ['Check out: ' + title + ' ' + url];\nif (!parsed.linkedin_post) parsed.linkedin_post = title + '\\n\\nRead more: ' + url;\nreturn { json: { ...parsed, title, url } };"
      },
      "id": "f2a1b3c4-2002-4e00-c002-000000000009",
      "name": "Parse AI Response",
      "type": "n8n-nodes-base.code",
      "typeVersion": 2,
      "position": [1680, 450]
    },
    {
      "parameters": {
        "mode": "runOnceForEachItem",
        "jsCode": "const data = $input.item.json;\nconst tweets = Array.isArray(data.twitter_thread) ? data.twitter_thread.join('\\n—\\n') : data.twitter_thread;\nconst hashtags = data.hashtags || {};\nconst msg = `:memo: *New Social Posts Generated*\\n\\n*Blog:* <${data.url}|${data.title}>\\n\\n---\\n*Twitter Thread:*\\n${tweets}\\n` + (hashtags.twitter?.length ? `_Hashtags: ${hashtags.twitter.join(' ')}_\\n` : '') + `\\n---\\n*LinkedIn:*\\n${data.linkedin_post}\\n` + (hashtags.linkedin?.length ? `_Hashtags: ${hashtags.linkedin.join(' ')}_\\n` : '') + `\\n---\\n_Upgrade for 5 platforms + scheduling_`;\nreturn { json: { slack_message: msg } };"
      },
      "id": "f2a1b3c4-3001-4e00-d001-000000000010",
      "name": "Format Slack Message",
      "type": "n8n-nodes-base.code",
      "typeVersion": 2,
      "position": [1960, 450]
    },
    {
      "parameters": {
        "select": "channel",
        "channelId": { "__rl": true, "mode": "name", "value": "#content" },
        "text": "={{ $json.slack_message }}",
        "otherOptions": {}
      },
      "id": "f2a1b3c4-3002-4e00-d002-000000000011",
      "name": "Slack: Social Posts Ready",
      "type": "n8n-nodes-base.slack",
      "typeVersion": 2.4,
      "position": [2240, 450],
      "credentials": { "slackApi": { "id": "2", "name": "Slack Account" } }
    }
  ],
  "pinData": {},
  "connections": {
    "RSS Feed Trigger": { "main": [[{ "node": "Validate Input", "type": "main", "index": 0 }]] },
    "Validate Input": { "main": [[{ "node": "Fetch Blog Content", "type": "main", "index": 0 }]] },
    "Fetch Blog Content": { "main": [[{ "node": "Extract Article Text", "type": "main", "index": 0 }]] },
    "Extract Article Text": { "main": [[{ "node": "Generate Social Posts", "type": "main", "index": 0 }]] },
    "Generate Social Posts": { "main": [[{ "node": "Parse AI Response", "type": "main", "index": 0 }]] },
    "Parse AI Response": { "main": [[{ "node": "Format Slack Message", "type": "main", "index": 0 }]] },
    "Format Slack Message": { "main": [[{ "node": "Slack: Social Posts Ready", "type": "main", "index": 0 }]] }
  },
  "active": false,
  "settings": { "executionOrder": "v1" },
  "tags": []
}


Customization Ideas

  • Change the AI model: Swap gpt-4o-mini for gpt-4o in the OpenAI node for higher quality output (costs more but noticeably
    better)
  • Filter by category: Add an IF node after the RSS trigger to only process posts tagged with certain categories
  • Change the polling interval: The RSS trigger defaults to 30 minutes — you can set it to 15 minutes, 1 hour, or daily
  • Add more platforms: Edit the system prompt to request Instagram, Facebook, or newsletter snippet formats
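For the category filter idea, a small Code node (or the same logic in an IF node expression) could look something like this. Note: the `categories` field name and shape vary by feed, so check what your RSS trigger actually outputs first — this is a sketch, not a drop-in:

```javascript
// Only let posts through whose RSS categories match an allow-list.
// Some feeds emit categories as strings, others as { name: ... } objects.
const ALLOWED = ['automation', 'n8n'];

function matchesCategory(item, allowed = ALLOWED) {
  const cats = (item.categories || []).map(c =>
    (typeof c === 'string' ? c : c.name || '').toLowerCase()
  );
  return allowed.some(a => cats.includes(a.toLowerCase()));
}
```

In the workflow you would return the item only when `matchesCategory($input.item.json)` is true, and drop it otherwise.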

Full Version

I also have a full version with a few more features that weren’t worth cramming into a free template:

  • 5 platforms instead of 2 (adds Instagram caption, Facebook post, Newsletter snippet)
  • Duplicate detection — tracks processed URLs in workflow static data, never reprocesses the same post
  • AI fallback path — if OpenAI fails or returns invalid JSON, generates basic posts from the excerpt so the pipeline never
    breaks
  • Google Sheets logging — appends a content calendar row for every post (date, title, URL, all 5 posts, hashtags, image prompt)
  • Buffer integration — optionally queues Twitter and LinkedIn posts via Buffer API for scheduled publishing
  • Full error handling — all nodes have retry logic, and any failure sends a Slack alert to #alerts
  • Manual trigger — run it on demand for any URL, not just from RSS

Available at: https://flowyantra.gumroad.com/l/blog-to-social-ai

Also on GitHub for the free version: https://github.com/flowyantra/blog-to-social-ai-n8n


Hope this is useful — happy to answer questions or help if you hit issues with the setup!


Thanks for sharing this, @flowyantra!


this is slick. I've been meaning to build something similar but using local llms to avoid openai costs. what happens when gpt-4o-mini's output fails to parse? does it just post a fallback or does the whole workflow error out?


Hey, thanks! Appreciate that.

So honest answer — the free version just errors out if the JSON parse fails. It’ll throw and stop. Hasn’t happened much in my testing with 4o-mini but it’s not bulletproof.
The paid version has a fallback that retries with a simpler prompt, and worst case it’ll just pull the title + description into a basic post. So you always get something.
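If anyone wants the free version to degrade gracefully too, the worst-case fallback is roughly this (a hypothetical sketch of the idea — not the actual paid-version node):

```javascript
// If the AI response is unusable, build minimal posts from the RSS
// title + description alone, so the pipeline still delivers something.
function basicFallbackPosts(title, description, url) {
  const teaser = (description || '').substring(0, 200);
  return {
    twitter_thread: [`${title} ${url}`.substring(0, 280)], // single-tweet "thread"
    linkedin_post: `${title}\n\n${teaser}\n\nRead more: ${url}`,
  };
}
```

You would call this from the catch branch of the parse step instead of throwing.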
Re: local LLMs — I’ve been thinking about that too actually. Ollama + Llama 3 would work, you’d just swap the OpenAI node. Main headache is getting clean JSON out consistently — local models are way more hit or miss on structured output. Would probably need a more forgiving parser.
If you end up trying it lmk how it goes, curious to see what works!


yeah the structured output thing is the main headache with local models. I've had okay results using ollama’s json format option with llama 3.1 — still truncates occasionally but way less than without it. probably worth adding a fallback parser that just extracts whatever it can if the full json fails, as a safety net.
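For reference, swapping the OpenAI node for local Ollama is mostly just pointing an HTTP Request node at Ollama's generate endpoint with that format flag set. A sketch of the request shape (assumes a default local Ollama install on port 11434; the model name is whatever you've pulled):

```javascript
// Build the body for Ollama's /api/generate endpoint.
// format: 'json' constrains the model to emit valid JSON,
// which is the option discussed above.
function buildOllamaRequest(prompt, model = 'llama3.1') {
  return {
    url: 'http://localhost:11434/api/generate',
    body: { model, prompt, format: 'json', stream: false },
  };
}
```

In n8n you'd put the URL and JSON body into an HTTP Request node and read the generated text from the `response` field of the reply.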

nice, didn’t know the json format flag helped that much with llama 3.1 — might have to try that. For the truncation thing, a lazy fallback that just regex-grabs whatever fields it can from the broken json would probably work. not pretty but beats a dead workflow. Have you tried mistral or phi-3 for this kind of stuff? curious what else works for structured output.
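Something like this is what I mean by the lazy fallback — strict parse first, regex salvage second. Very much a sketch, not bulletproof:

```javascript
// Lenient salvage parser: try strict JSON first, then regex-grab the
// two fields we care about from truncated or broken model output.
function salvageParse(raw) {
  const cleaned = raw.replace(/`{3}json\n?/g, '').replace(/`{3}/g, '').trim();
  try { return JSON.parse(cleaned); } catch (_) { /* fall through to salvage */ }
  const result = {};
  // Grab linkedin_post only if its string value is fully closed.
  const li = cleaned.match(/"linkedin_post"\s*:\s*"((?:[^"\\]|\\.)*)"/);
  if (li) result.linkedin_post = li[1].replace(/\\n/g, '\n');
  // Grab whatever complete tweet strings exist, even if the array is cut off.
  const tw = cleaned.match(/"twitter_thread"\s*:\s*\[([\s\S]*?)(?:\]|$)/);
  if (tw) result.twitter_thread = [...tw[1].matchAll(/"((?:[^"\\]|\\.)*)"/g)].map(m => m[1]);
  return result;
}
```

Anything it can't recover just comes back missing, so the caller can still apply title/URL defaults.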

phi-3 mini surprised me honestly, way more consistent with structured output than llama in my testing. mistral 7b is solid too, iirc it handles json schema more reliably out of the box. I'd probably start with phi-3 for this kind of thing

Great tip, Benjamin — phi-3 mini for structured output is surprisingly solid, good to hear that confirmed. Mistral 7B’s JSON schema handling out of the box is a nice bonus too. Definitely going to start with phi-3 for this kind of thing. Appreciate the recommendation!

yeah, hope it works out! phi-3 also runs pretty lean so it won’t slow your pipeline down — good pick for something like this where you’re just doing structured extraction. curious to hear how the comparison goes if you end up benchmarking both.


I'm working on X posting workflows also, and trying to figure out settings… there's been noise in the “privacy-first” community regarding Microsoft (phi-3) and telemetry…

Runtime environment risk, evidently: if you use official Microsoft “optimization scripts” or “ONNX Runtime” packages, they have telemetry (data sent back to MS about how the tool is used).

A person could set the environment variable HF_HUB_DISABLE_TELEMETRY=1 or DISABLE_TELEMETRY=1 in their .env or Docker files. (And per search) the risk: Microsoft’s huggingface_hub scripts or the DirectML execution provider can attempt to “phone home” with usage stats.

  • Docker isolation: run your Phi-3 container with internal: true network settings so it can’t “phone home” to Microsoft’s telemetry servers.
  • No “remote code”: only use GGUF or Safetensors files. Avoid any model that requires trust_remote_code=True.

https://www.reddit.com/r/learnmachinelearning/comments/1pjxh66/local_llms_are_private_until_they_arent_the/

Also, I thought of accessing the MS Azure cloud MCP server for a tool, and then found this on a vulnerability (Mar 10): CVE-2026-26118: Azure MCP Server SSRF Vulnerability

good catch on the phi-3 telemetry — didn’t have that on my radar. the env var approach (HF_HUB_DISABLE_TELEMETRY=1) helps, but honestly running GGUF weights via ollama sidesteps most of this cleanly — no HuggingFace hub scripts, no DirectML, just loading the model file directly. if you’re already containerized, adding --network=none or docker internal networking is a solid extra layer on top. probably the safest combo for anything privacy-sensitive.


solid callout on the telemetry thing — hadn’t dug into that yet. For anyone running Ollama locally though, worth noting that Ollama uses GGUF model files by default and doesn’t load any Microsoft runtime packages or ONNX, so the telemetry concern is mostly relevant if you’re using HuggingFace transformers directly or the official Microsoft optimization stack. With Ollama + GGUF it’s basically just raw model weights running locally, no phone-home risk.
That said, the Docker isolation tip is good practice regardless — internal: true on the model container network is smart hygiene. And yeah the Azure MCP CVE is a separate but real concern if anyone’s mixing cloud MCP servers into their stack. Good to flag.
For the X posting workflow — are you triggering from RSS/webhook or doing it on a schedule? We’ve been building similar flows and the tricky part is rate limiting on the X API side.


yeah the lean footprint is exactly why I’m leaning phi-3 for this — don’t want the LLM step to become the bottleneck in the pipeline. Will definitely share results if I get around to a proper side-by-side with mistral.

“triggering from RSS/webhook or doing it on a schedule?” I'll be posting solutions to this whole question. X is such a perturbed, onerous animal regarding their data.

I have run workflows with:

#1: twitterapi.io Stream Rules, with 6 streams (there’s a char limit), each following maybe 12 user accounts, running at 2-minute intervals, pushing ANY tweet by any user to my webhook. (attached pic)

#2: the Advanced Search [here] REST endpoint on that same platform, and also over on GetXapi [here]. Cron poll, yes, maybe 5 min… trying to grab the newest tweets to respond to.

#3: make a LIST in the X account of accounts to monitor, then poll that LIST every 5 min for any new tweets [Tweet Timeline api].

What we have NOT done is use anything from X directly. The idea was: can any of this be done on a shoestring? All of these at such tight crons actually do rack up some costs, ~$20-$40/mo. Guess what is NO cost? The ‘playwright’ .js library. I created a public LIST on X, and a Playwright script on cron launches a headless browser (with a cookie you have to feed it) and scrapes the LIST for any new tweets. It’s amazing. Fragile? Not so far. The LIST, of course, has all the users I am monitoring for tweets. Thinking of vid tutorials on it.
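fwiw, the “any new tweets” part of a polling scraper like that usually comes down to diffing scraped IDs against ones you’ve already seen (n8n workflow static data is a handy place to keep them). A minimal sketch — names are mine, not from any of the workflows above:

```javascript
// Keep a set of seen tweet IDs and return only the new ones,
// plus the updated seen list to persist for the next poll.
function diffNewTweets(scraped, seenIds) {
  const seen = new Set(seenIds);
  const fresh = scraped.filter(t => !seen.has(t.id));
  fresh.forEach(t => seen.add(t.id));
  return { fresh, seenIds: [...seen] };
}
```

Each cron run you load `seenIds` from static data, call this with the freshly scraped tweets, act on `fresh`, and save the returned `seenIds` back.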
