I’m building a content automation system in n8n that handles frequent post generation across multiple social accounts. The system involves scraping, AI-generated scripts (Claude/GPT), and media tools like RunwayML, HeyGen, json2video, etc.
I’m evaluating three architecture options:
Single unified workflow
One workflow per account (for example, 50 workflows for 50 accounts)
Modular main flow with child subflows for sourcing, scripting, media, logging
Key needs:
Retry logic + fallback (e.g., Claude to GPT)
Failure in one run shouldn’t block others
Flexible frequency (e.g., every 15 mins)
What’s the most scalable and fault-tolerant setup you’ve seen for this kind of system in n8n?
Newer versions have backup GPT models, so that should cover your first point. You can also add retry logic or error paths to try other nodes when something errors. On the same point, if you want a run to continue on failure and move on, you can handle that easily on a node-by-node basis in each node's settings (i.e. "Continue on error" etc.).
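As a sketch of that retry-plus-fallback pattern (the function and provider names here are made up for illustration; in n8n you'd normally wire this with error outputs and a second model node, but the same logic can live in a Code node):

```javascript
// Retry-with-fallback sketch (hypothetical providers/names, not n8n API).
// Tries the primary generator a few times, then falls back to the secondary.
async function generateWithFallback(prompt, primary, fallback, maxRetries = 2) {
  for (let attempt = 0; attempt <= maxRetries; attempt++) {
    try {
      return { text: await primary(prompt), provider: 'primary' };
    } catch (err) {
      // Swallow the error and retry; after maxRetries we drop through.
    }
  }
  // All primary attempts failed: try the fallback once (e.g. Claude -> GPT).
  return { text: await fallback(prompt), provider: 'fallback' };
}
```

The key point is that a failure of the primary model never kills the run; it just changes which provider produced the output.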
In terms of workflow design, the main consideration is system spec: running everything in one workflow means big RAM needs, especially if you're not using filesystem mode. Splitting into separate workflows works, but they're then a pain to bulk update.
As for flexibility, you have schedulers and triggers, so that's very easy to manage.
I would probably keep a DB with all the accounts (emails), and set up a trigger that loops through them or checks the last update time. Then in the workflow, add a filter on which node posts (i.e. the one holding that user's credentials), filtering based on email, so it posts to the right profile. In essence, the workflow runs one profile per schedule or trigger, and it's easy to update the flow for everyone at once.
So, number one, you don't get massive RAM build-up from huge workflows: it runs once for each user, completes, then moves on to the next user.
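A sketch of that per-account routing step, as it might look in an n8n Code node (the field names, and the idea of tagging the item for a downstream Switch/IF node, are assumptions):

```javascript
// Pick the single account whose turn it is, based on last-posted time,
// then tag the item so a downstream Switch/IF node can route it to the
// node holding that account's credentials. Field names are hypothetical.
function pickNextAccount(accounts, now = Date.now()) {
  // Oldest lastPostedAt goes first, so every profile gets its turn.
  const sorted = [...accounts].sort((a, b) => a.lastPostedAt - b.lastPostedAt);
  const next = sorted[0];
  return { email: next.email, route: next.email, pickedAt: now };
}
```

Each run then only ever carries one account's data through the flow, which is what keeps memory flat.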
If you prefer, separate workflows can work too, and they're ideal if you're making things more customisable per account.
But yeah, just try to avoid long executions where you're processing hundreds of records at a time, use error management and "continue on error", and you should be fine.
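To keep executions short, you can split a big record set into small batches; in n8n this is essentially what the built-in Loop Over Items (Split In Batches) node does, but as a plain sketch (the chunk size is arbitrary):

```javascript
// Split a large list into fixed-size chunks so each run processes a small
// batch instead of hundreds of records in one long execution.
function chunk(items, size = 10) {
  const batches = [];
  for (let i = 0; i < items.length; i += size) {
    batches.push(items.slice(i, i + size));
  }
  return batches;
}
```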
this may help
Yes, we’re leaning toward a modular workflow architecture. Our current plan is to maintain a single workflow that fetches account data (from Airtable), including theme and content type. From there:
We run conditional logic to generate content using GPT/Claude
Route media tasks via HeyGen, RunwayML, Bannerbear, and others based on format
Implement error paths and fallback tools (for example, if Leonardo fails, fall back to Freepik)
And yes, we’re using “continue on error” in critical nodes to keep the workflow running smoothly across accounts
This lets us avoid RAM spikes, and keeps things maintainable with modular subflows.
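As a sketch of that conditional routing with fallbacks (the tool names match the ones above, but the mapping itself is a made-up example; in n8n this would be a Switch node plus error paths):

```javascript
// Map content format -> ordered list of media tools: the first entry is the
// primary, the rest are fallbacks if it errors. The mapping is illustrative.
const mediaRoutes = {
  avatarVideo: ['HeyGen', 'RunwayML'],
  image: ['Leonardo', 'Freepik'],
  banner: ['Bannerbear'],
};

// Given the format and the tools that already failed this run,
// return the next tool to try, or null when the route is exhausted.
function nextTool(format, failedTools = []) {
  const route = mediaRoutes[format] || [];
  return route.find((tool) => !failedTools.includes(tool)) || null;
}
```

A null result is the signal to log the failure and move on to the next account rather than blocking the run.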
Just wanted to confirm — am I on the right track with this setup?
Also, if you have a well-defined task, turning it into an "Execute Workflow" call can be very helpful. For example, keep the main process inside the main workflow, and then if you do a HeyGen step, you can just use Execute Workflow, passing in the right data. This saves on memory, and it also only counts as one execution, so processing runs more smoothly this way too.
So once the HeyGen sub-workflow has run, it just returns to the main workflow. It's another way to manage a complex workflow, especially when it's just an input/output type thing (store the file in OneDrive / S3 inside the sub-workflow, and just return the URL or path to the main workflow). Hope that makes sense, but also avoid processing large files where it's not needed.
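A sketch of that input/output sub-workflow contract: the sub-workflow does the heavy work, uploads the result to storage, and hands back only a small reference (the render/upload functions and field names are hypothetical):

```javascript
// Sub-workflow body: render the media, upload it, and return ONLY the URL,
// so the main workflow never holds the large binary in memory.
async function runMediaSubworkflow(input, render, upload) {
  const file = await render(input);           // large binary lives here only
  const url = await upload(file, input.key);  // e.g. S3 / OneDrive upload
  return { key: input.key, url };             // small payload back to main
}
```

The main workflow then works with `{ key, url }` items, which stay tiny no matter how big the rendered media is.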