Question about AI strategy with multiple chained system messages

I’m curious how best to handle the following scenario.

I want to give an LLM one input, have it run through different steps, and feed the answers back into itself with different instructions while staying aware of the previous conversation context. For example:

  1. Generate context about brand XYZ.
  2. List all problems the brand solves with its product, and output them in a specific format.
  3. For each problem, score its relevance and output it in a specific format.

I want to generate 30+ problems, so I think it wouldn’t work in one go.
How would I approach this scenario?
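
To make it concrete, here is roughly the shape of the chain I have in mind, as a sketch (`callModel` is just a placeholder for whatever node or API ends up making the actual request, and the batch size of 10 is arbitrary):

```typescript
// Rough sketch of the chaining I have in mind. callModel is a placeholder,
// not a real n8n or OpenAI function.
async function callModel(system: string, user: string): Promise<string> {
  // stubbed out for the sketch
  return `response to: ${user}`;
}

async function analyseBrand(brand: string) {
  // Step 1: generate brand context once
  const context = await callModel(
    "You are a brand analyst.",
    `Generate background context about the brand ${brand}.`
  );

  // Step 2: list problems in batches, since 30+ probably won't fit in one go
  const problems: string[] = [];
  for (let batch = 0; batch < 3; batch++) {
    const reply = await callModel(
      `Context about ${brand}:\n${context}`,
      `List 10 more problems the brand solves, one per line.\nAlready listed:\n${problems.join("\n")}`
    );
    problems.push(...reply.split("\n").filter((line) => line.trim() !== ""));
  }

  // Step 3: score each problem separately, reusing the same context
  const scored: { problem: string; score: number }[] = [];
  for (const problem of problems) {
    const reply = await callModel(
      `Context about ${brand}:\n${context}\nReply with only a relevance score from 1 to 10.`,
      problem
    );
    scored.push({ problem, score: Number(reply) });
  }
  return scored;
}

analyseBrand("XYZ").then((result) => console.log(result));
```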

It looks like your topic is missing some important information. Could you provide the following if applicable?

  • n8n version:
  • Database (default: SQLite):
  • n8n EXECUTIONS_PROCESS setting (default: own, main):
  • Running n8n via (Docker, npm, n8n cloud, desktop app):
  • Operating system:

For anybody else finding this, I’m going to try whether just using the AI Agent node with a Window Buffer Memory and a unique session key per brand is enough!
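
As I understand it, the Window Buffer Memory behaves conceptually something like this sketch (not the node’s actual implementation, just the idea of keying a rolling message window by session):

```typescript
// Sketch: a window buffer memory keyed by session (here: the brand name).
// Only the last `windowSize` messages per session are kept and re-sent.
type Message = { role: "user" | "assistant"; content: string };

class WindowBufferMemory {
  private sessions = new Map<string, Message[]>();
  constructor(private windowSize = 10) {}

  add(sessionKey: string, message: Message): void {
    const history = this.sessions.get(sessionKey) ?? [];
    history.push(message);
    // keep only the most recent windowSize messages
    this.sessions.set(sessionKey, history.slice(-this.windowSize));
  }

  get(sessionKey: string): Message[] {
    return this.sessions.get(sessionKey) ?? [];
  }
}

// Usage: one session per brand, so each brand keeps its own context.
const memory = new WindowBufferMemory(10);
memory.add("brand-xyz", { role: "user", content: "I like flowers" });
console.log(memory.get("brand-xyz"));
```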


Hmmm, so I tried with the following workflow. In the first manual execution I use the first input, telling the model I like flowers. In the second manual execution I use the second input, asking it to repeat my exact first input, but it gives me back the system prompt instead, I think.

Do I need a permanent store for that sort of thing? I know somebody who created a persistent chat with ChatGPT via Make and saves the chat ID to go back to and continue messaging, I’m just not sure how I would do that within n8n…
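
One idea I might try is n8n’s workflow static data. A rough sketch of what I’d paste into a Code node (`$getWorkflowStaticData` and `$json` are provided by n8n inside the node); as far as I understand, static data only persists for production executions, not manual test runs, so this is untested for my manual experiments:

```typescript
// Code node sketch: keep per-brand chat history in workflow static data.
const staticData = $getWorkflowStaticData('global');

const sessionKey = $json.brand;                          // assuming each item carries a brand field
const histories = staticData.histories ?? {};
const history = histories[sessionKey] ?? [];

history.push({ role: 'user', content: $json.message });  // append the newest message
histories[sessionKey] = history;
staticData.histories = histories;

return [{ json: { sessionKey, history } }];
```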

Any thoughts, ideas, pointers, etc welcome @MutedJam !

I tried just using the AI Agent with a memory and a manual chat input, but it’s the same thing: it doesn’t react at all like I’d expect…

Now I’m just passing all my context into a Basic LLM node as the system prompt every time. This could get out of hand token-wise, but it’s probably still better than sending the whole conversation history every time. And since I want to score text and get numbers back, it’s actually fine to “only” have the system message with instructions as context.
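
For reference, the system message I rebuild on every call looks roughly like this (the exact wording and the 1–10 scale here are just illustrative):

```typescript
// Rough shape of the system prompt rebuilt per call; the model only ever sees
// this single system message plus the problem text to score.
function buildSystemPrompt(brandContext: string): string {
  return [
    "You score how relevant a problem is to the brand described below.",
    "Reply with a single integer from 1 (irrelevant) to 10 (core problem).",
    "Do not add any explanation.",
    "",
    "Brand context:",
    brandContext,
  ].join("\n");
}

// Example with made-up context
console.log(buildSystemPrompt("XYZ sells modular office furniture to startups."));
```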

Again, it’s not what I was looking for, but it works for now; I’m still keen on learning more about the truly conversational approach with persistently stored message history!

Hi @leprodude, sorry for the late reply. Tbh, I am not overly familiar with the inner workings of LangChain as I haven’t had time to fully explore it yet, but perhaps @oleg can clarify how exactly the AI Agent node works in your example case.

No worries @MutedJam, I saw you’re away! I still haven’t really solved it satisfactorily… It sounds like such an easy thing to do, tbh, and I’m probably just not getting something, but yeah…