I’m currently working on an n8n workflow where I want to use multiple AI agents (based on a local LLaMA implementation) to collaboratively generate an article.
The article should consist of the following parts:
- Headline
- Summary
- 4 detailed paragraphs
- Conclusion
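Concretely, the assembled result I'm aiming for would look something like this (just a sketch; the field names are my own convention, not anything n8n prescribes):

```javascript
// Target shape of the finished article (field names are my own convention).
const article = {
  headline: "…",
  summary: "…",
  paragraphs: ["…", "…", "…", "…"], // the 4 detailed paragraphs
  conclusion: "…",
};
```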
Here’s my approach:
- The first agent gets a topic suggestion from me and generates instructions for the subsequent agents.
- Each agent is responsible for writing a specific part of the article (e.g., the headline, summary, or paragraphs).
- The output of one agent should be passed to the next, so the article gradually builds up.
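What I expected each hand-off to do, sketched in plain JavaScript (in n8n this logic would sit in a Code node between the agents; all names here are my assumptions, not actual node fields):

```javascript
// Merge one agent's section into the article built up so far, instead of
// replacing the whole object (which is what seems to happen in my workflow).
function mergeSection(article, section, text) {
  // Spread copies the existing sections; the new section is added alongside.
  return { ...article, [section]: text };
}

// Simulated pipeline: each agent contributes one part.
let article = {};
article = mergeSection(article, "headline", "Local LLMs in n8n");
article = mergeSection(article, "summary", "Chaining agents step by step.");
// …paragraphs and conclusion would follow the same pattern.
```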
The Problem:
It’s not working as expected. Each agent seems to overwrite the previous output instead of adding to it. As a result, I only get the output of the last agent, rather than a fully assembled article.
My Questions:
- What am I doing wrong? How can I ensure that each agent takes the previous output, adds to it, and passes it on to the next agent?
- Is there a specific configuration or trick to make this work with a local LLaMA implementation in n8n?
Important: I want to strictly use local models (e.g., LLaMA) and avoid any cloud-based services like OpenAI.
If anyone has experience with such workflows or tips on how to solve this issue, I’d really appreciate your help!
Thanks in advance for your support!
Information on your n8n setup
- n8n version: 1.72.1
- Database (default: SQLite): Qdrant Vector Store
- n8n EXECUTIONS_PROCESS setting (default: own, main): ?
- Running n8n via (Docker, npm, n8n cloud, desktop app): VM
- Operating system: Debian