I’m having a problem with the “Retrieve Documents” node (Vector Store Tool) in my n8n workflow for a RAG chatbot. Despite my attempts to modify the node description, the default system message persists:
“System: Use the following pieces of context to answer the users question. If you don’t know the answer, just say that you don’t know, don’t try to make up an answer.”
I tried to personalize this message by modifying the “description” field in the node’s parameters, but without success. The node seems to completely ignore my changes.
I’ve checked that the node is correctly connected to the rest of the workflow, including the OpenAI Chat Model node and the RAG AI Agent.
Has anyone encountered this problem before, or has any idea how to force the node to use a custom system message? Is there a way around this limitation?
Please copy your n8n workflow and paste it in a code block, between a pair of triple backticks — you can also do this by clicking </> (preformatted text) in the editor and pasting in your workflow:
```
<your workflow>
```
The same applies to any JSON output you would like to share with us.
Make sure that you have removed any sensitive information from your workflow and include dummy or pinned data with it!
Note that you can share your workflow here in the forum as per the tip above. It is safe — no credentials are shared. Also, access to your S3 bucket is forbidden (“AccessDenied”), so we cannot see the screenshot you linked.
This “OpenAi Chat Model12” is using a system message that I can’t change, causing the answer to be “I don’t know”, even though the vector store returned matching items.
This “OpenAi Chat Model12” is the one attached to the vector store:
Yes, that’s exactly my problem too! Thanks for bringing it up. I notice that the ‘OpenAI Chat Model12’ system message seems to limit responses, even if relevant items are returned from vector storage. This is making things run less smoothly. If any solutions are available, I’d love to hear from you.
Hello @Dorian_Marty !
I found a way to work around that: I query the Supabase vector store directly, and then I send just the returned items to my main AI agent to process:
I basically separated out the agent node. I query this Supabase node (you can use any other vector store, Pinecone for example), and then I send its output to the AI agent.
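To make the workaround above concrete, here is a minimal sketch of what the separated workflow could look like in n8n's exported JSON. This is illustrative only: the node type strings, parameter names (`mode`, `topK`, `systemMessage`), and expressions are assumptions based on typical n8n LangChain nodes and may differ in your n8n version — check the actual JSON your editor exports. The key idea is that the vector store node runs as a regular node (not as an agent tool), so its results flow through a normal `main` connection into an agent whose system message you fully control.

```json
{
  "nodes": [
    {
      "name": "Supabase Vector Store",
      "type": "@n8n/n8n-nodes-langchain.vectorStoreSupabase",
      "parameters": {
        "mode": "load",
        "prompt": "={{ $json.chatInput }}",
        "topK": 4
      }
    },
    {
      "name": "AI Agent",
      "type": "@n8n/n8n-nodes-langchain.agent",
      "parameters": {
        "options": {
          "systemMessage": "Answer using the retrieved context below. Cite the source document when possible.\n\nContext:\n{{ $json.document.pageContent }}"
        }
      }
    }
  ],
  "connections": {
    "Supabase Vector Store": {
      "main": [[{ "node": "AI Agent", "type": "main", "index": 0 }]]
    }
  }
}
```

Because the retrieved documents arrive as ordinary input items rather than through the Vector Store Tool, the hard-coded “Use the following pieces of context…” prompt never applies, and the `systemMessage` you set on the agent is the one actually used.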