RAG agent not behaving as expected - not processing tool output or forwarding it

Describe the problem/error/question

I have stored image names and their labels in a Supabase database.
I have instructed the agent to call the faq_images tool if the question is image related.
The response should then be processed using the following instructions/system message (only the relevant section is shown below):

  • Call the faq_images tool.
  • Parse the text field from the result as JSON.
  • Extract the image_name:
      • from metadata.image_name, or
      • from the last part of pageContent (after the final comma).
  • Pass the image name to the Ask for image tool in this format: { "image_name": "3_image_1.jpeg" } (see the sketch after this list).
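
A minimal sketch of that extraction logic, assuming the response shape shown below (TypeScript; the function extractImageName and the interface are hypothetical, not part of the workflow):

// Sketch only: parse the faq_images tool output and build the
// input the Ask for image tool expects.
interface FaqImagesResponse {
  response: { type: string; text: string }[];
}

function extractImageName(result: FaqImagesResponse): string {
  // The "text" field is itself a JSON string.
  const doc = JSON.parse(result.response[0].text) as {
    pageContent: string;
    metadata?: { image_name?: string };
  };

  // Prefer metadata.image_name; otherwise take the part of
  // pageContent after the final comma.
  const fromMetadata = doc.metadata?.image_name;
  const fromContent = doc.pageContent.split(",").pop()?.trim();
  const imageName = fromMetadata ?? fromContent;
  if (!imageName) throw new Error("No image_name found in tool output");
  return imageName; // e.g. "3_image_1.jpeg"
}

// Expected payload for the Ask for image tool:
// { "image_name": "3_image_1.jpeg" }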

The faq_images tool response looks good:
{
  "response": [
    {
      "type": "text",
      "text": "{\"pageContent\":\"Lock and key,3_image_1.jpeg\",\"metadata\":{\"loc\":{\"lines\":{\"to\":1,\"from\":1}},\"source\":\"blob\",\"blobType\":\"text/plain\",\"image_name\":\"3_image_1.jpeg\"}}"
    }
  ]
}
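
For readability, the escaped text field above parses to:

{
  "pageContent": "Lock and key,3_image_1.jpeg",
  "metadata": {
    "loc": { "lines": { "to": 1, "from": 1 } },
    "source": "blob",
    "blobType": "text/plain",
    "image_name": "3_image_1.jpeg"
  }
}

so metadata.image_name and the last comma-separated part of pageContent both yield 3_image_1.jpeg, and the Ask for image tool should therefore receive { "image_name": "3_image_1.jpeg" }.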

What is the error message (if any)?

However, the agent does not process the response or even forward it as-is to the Ask for image tool. This is what I see as the output of the AI Agent node:
"response": {
  "generations": [
    [
      {
        "text": "",
        "generationInfo": {
          "prompt": 0,
          "completion": 0,
          "finish_reason": "tool_calls",…

The Ask for image tool then receives { "query": {} } as its input.

Please share your workflow


</>
{
  "name": "Test_ChatBot",
  "nodes": [
    {
      "parameters": {
        "options": {}
      },
      "id": "4ef11502-3f75-438c-9ed1-b02b903cbfc2",
      "name": "OpenAI Chat Model",
      "type": "@n8n/n8n-nodes-langchain.lmChatOpenAi",
      "typeVersion": 1,
      "position": [
        -19500,
        2160
      ],
      "credentials": {
        "openAiApi": {
          "id": "REPLACE_ME",
          "name": "REPLACE_ME"
        }
      }
    },
    {
      "parameters": {},
      "id": "9f32fa0c-8e2c-44b9-9b75-f41306e2a766",
      "name": "Postgres Chat Memory",
      "type": "@n8n/n8n-nodes-langchain.memoryPostgresChat",
      "typeVersion": 1,
      "position": [
        -19360,
        2160
      ],
      "notesInFlow": false,
      "credentials": {
        "postgres": {
          "id": "REPLACE_ME",
          "name": "REPLACE_ME"
        }
      }
    },
    {
      "parameters": {
        "content": "## RAG AI Agent with Chat Interface",
        "height": 545,
        "width": 1476
      },
      "id": "04219df7-03d7-4d3d-8302-9c7dce3fc14f",
      "name": "Sticky Note2",
      "type": "n8n-nodes-base.stickyNote",
      "typeVersion": 1,
      "position": [
        -19660,
        1760
      ]
    },
    {
      "parameters": {
        "public": true,
        "options": {}
      },
      "id": "440e3c8c-2131-4231-ad88-2a763e383b5c",
      "name": "When chat message received",
      "type": "@n8n/n8n-nodes-langchain.chatTrigger",
      "typeVersion": 1.1,
      "position": [
        -19300,
        1880
      ],
      "webhookId": "42c0ba7c-5170-40bb-82ec-f46958a30811"
    },
    {
      "parameters": {
        "options": {
          "systemMessage": "You are a helpful virtual assistant for answering user FAQs.\n- For text-based questions, use the faq_text tool.\n- For visual/image-related questions, use the faq_images tool first, then pass the image_name to the Ask for image tool.\n\nRules:\n- Only respond based on the tool outputs.\n- Do not invent or use internal knowledge.\n- Respond clearly and politely.\n- Use short, helpful messages.\n- Never include system jargon or raw JSON in replies.\n- If no answer/image is found, ask the user to clarify.",
          "maxIterations": 3
        }
      },
      "id": "3143f475-8e26-444d-b220-bd75f83774f5",
      "name": "RAG AI Agent",
      "type": "@n8n/n8n-nodes-langchain.agent",
      "typeVersion": 1.6,
      "position": [
        -19040,
        1880
      ],
      "alwaysOutputData": true,
      "executeOnce": false,
      "retryOnFail": false
    },
    {
      "parameters": {
        "toolDescription": "This tool receives an image_name and returns a signed URL for that image from object storage. Use this after retrieving metadata from the faq_images tool.",
        "method": "POST",
        "url": "=https://yoururl.supabase.co/storage/v1/object/sign/images/{{$json.image_name}}",
        "authentication": "predefinedCredentialType",
        "nodeCredentialType": "supabaseApi",
        "sendHeaders": true,
        "specifyHeaders": "json",
        "jsonHeaders": "{\n  \"Content-Type\": \"application/json\";\n}",
        "sendBody": true,
        "specifyBody": "json",
        "jsonBody": "{\n  \"expiresIn\": 3600 # URL valid for 1 hour\n}"
      },
      "type": "@n8n/n8n-nodes-langchain.toolHttpRequest",
      "typeVersion": 1.1,
      "position": [
        -19240,
        2160
      ],
      "id": "4e81a7c2-ba55-4224-94c7-52ac484024a3",
      "name": "Ask for image",
      "credentials": {
        "supabaseApi": {
          "id": "REPLACE_ME",
          "name": "REPLACE_ME"
        }
      }
    },
    {
      "parameters": {
        "options": {}
      },
      "type": "@n8n/n8n-nodes-langchain.embeddingsOpenAi",
      "typeVersion": 1.2,
      "position": [
        -19040,
        2240
      ],
      "id": "352ab70f-e055-4b73-85dd-06a85431648d",
      "name": "Embeddings OpenAI2",
      "credentials": {
        "openAiApi": {
          "id": "REPLACE_ME",
          "name": "REPLACE_ME"
        }
      }
    },
    {
      "parameters": {
        "options": {}
      },
      "type": "@n8n/n8n-nodes-langchain.embeddingsOpenAi",
      "typeVersion": 1.2,
      "position": [
        -18740,
        2240
      ],
      "id": "bc9dbad6-5ec1-4491-b027-c4058a532e4b",
      "name": "Embeddings OpenAI3",
      "credentials": {
        "openAiApi": {
          "id": "REPLACE_ME",
          "name": "REPLACE_ME"
        }
      }
    },
    {
      "parameters": {
        "mode": "retrieve-as-tool",
        "toolName": "=faq_images",
        "toolDescription": "Use this tool when the user's message refers to something visual, such as asking to see a object.\nIt performs a semantic search on image descriptions stored in the image vector database.\nIf no match is found, ask the user to clarify their request.",
        "tableName": {
          "__rl": true,
          "value": "faq_images",
          "mode": "list",
          "cachedResultName": "faq_images"
        },
        "topK": 1,
        "options": {
          "queryName": "match_faq_images"
        }
      },
      "type": "@n8n/n8n-nodes-langchain.vectorStoreSupabase",
      "typeVersion": 1,
      "position": [
        -18760,
        2100
      ],
      "id": "af44dace-e2d4-408e-b5e7-5828e5099f5e",
      "name": "faq_images",
      "credentials": {
        "supabaseApi": {
          "id": "REPLACE_ME",
          "name": "REPLACE_ME"
        }
      }
    },
    {
      "parameters": {
        "mode": "retrieve-as-tool",
        "toolName": "Customer_FAQs",
        "toolDescription": "This tool searches a semantic FAQ knowledge base to return the most relevant answer(s) to a user's questions.\n\nInput: natural language question\nOutput: best-matching text snippet(s).\n",
        "tableName": {
          "__rl": true,
          "value": "documents",
          "mode": "list",
          "cachedResultName": "documents"
        },
        "topK": 1,
        "options": {}
      },
      "type": "@n8n/n8n-nodes-langchain.vectorStoreSupabase",
      "typeVersion": 1,
      "position": [
        -19060,
        2100
      ],
      "id": "ccffe7ab-374c-485c-a49e-e8b27ce26c28",
      "name": "faq_text",
      "credentials": {
        "supabaseApi": {
          "id": "REPLACE_ME",
          "name": "REPLACE_ME"
        }
      }
    }
  ],
  "pinData": {},
  "connections": {
    "OpenAI Chat Model": {
      "ai_languageModel": [
        [
          {
            "node": "RAG AI Agent",
            "type": "ai_languageModel",
            "index": 0
          }
        ]
      ]
    },
    "Postgres Chat Memory": {
      "ai_memory": [
        [
          {
            "node": "RAG AI Agent",
            "type": "ai_memory",
            "index": 0
          }
        ]
      ]
    },
    "When chat message received": {
      "main": [
        [
          {
            "node": "RAG AI Agent",
            "type": "main",
            "index": 0
          }
        ]
      ]
    },
    "RAG AI Agent": {
      "main": []
    },
    "Ask for image": {
      "ai_tool": [
        [
          {
            "node": "RAG AI Agent",
            "type": "ai_tool",
            "index": 0
          }
        ]
      ]
    },
    "Embeddings OpenAI2": {
      "ai_embedding": [
        [
          {
            "node": "faq_text",
            "type": "ai_embedding",
            "index": 0
          }
        ]
      ]
    },
    "Embeddings OpenAI3": {
      "ai_embedding": [
        [
          {
            "node": "faq_images",
            "type": "ai_embedding",
            "index": 0
          }
        ]
      ]
    },
    "faq_images": {
      "ai_tool": [
        [
          {
            "node": "RAG AI Agent",
            "type": "ai_tool",
            "index": 0
          }
        ]
      ]
    },
    "faq_text": {
      "ai_tool": [
        [
          {
            "node": "RAG AI Agent",
            "type": "ai_tool",
            "index": 0
          }
        ]
      ]
    }
  },
  "active": false,
  "settings": {
    "executionOrder": "v1"
  },
  "versionId": "5fdec557-b06f-45cb-bc94-acdb1dfeea19",
  "meta": {
    "templateCredsSetupCompleted": true,
    "instanceId": "xxxxxxxxxxxx"
  },
  "id": "9G9ef76lgVYB2TG4",
  "tags": []
}
</>
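
For context, the Ask for image tool above targets Supabase Storage's object-signing endpoint. A standalone sketch of that request, assuming the standard endpoint and hypothetical SUPABASE_URL/SUPABASE_KEY environment variables (TypeScript, Node 18+ fetch):

// Sketch only: request a signed URL for an image from Supabase Storage,
// mirroring what the "Ask for image" HTTP Request tool is configured to do.
const SUPABASE_URL = process.env.SUPABASE_URL!; // e.g. https://yoururl.supabase.co
const SUPABASE_KEY = process.env.SUPABASE_KEY!;

async function signImageUrl(imageName: string): Promise<unknown> {
  const res = await fetch(
    `${SUPABASE_URL}/storage/v1/object/sign/images/${imageName}`,
    {
      method: "POST",
      headers: {
        "Content-Type": "application/json", // plain header value, no trailing semicolon
        Authorization: `Bearer ${SUPABASE_KEY}`,
      },
      // Body must be plain JSON; inline "#" comments are not valid here.
      body: JSON.stringify({ expiresIn: 3600 }), // URL valid for 1 hour
    }
  );
  if (!res.ok) throw new Error(`Sign request failed: ${res.status}`);
  return res.json(); // expected to contain the signed URL path for the image
}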

Information on your n8n setup

n8n version: 1.81.4
Database: Supabase vector
n8n EXECUTIONS_PROCESS setting (default: own, main): main
Running n8n via (Docker, npm, desktop app): Docker container running locally
Operating system: Windows

I have hit a roadblock now, and any help moving forward would be greatly appreciated. Is there a feature that would let me tap into/process the tool output before the RAG agent receives it? I am relatively new to the tool.

Hi, could you copy the workflow JSON and paste it here? Use the '</>' icon for that. It will make answering your question easier.

This topic was automatically closed 90 days after the last reply. New replies are no longer allowed.