Hi. I'm using a 'Call Another Workflow' (CAW) tool to call a simple SQL query workflow. The sub workflow returns 50 items from the database, but the output of the CAW tool only shows the first item.
There are over 250 rows in the table I'm querying, so I've had the CAW tool fetch 50 rows at a time, using an offset to move on to the next 50, and so on. That gives me 7 runs on the output side of the CAW tool, but each run contains only the first of the 50 rows I can see returned inside the sub workflow.
I’m not super technical so I hope this makes sense.
Of course, I'd like every row to be returned so that the LLM has this data for further processing.
The only error I get is on the final query, which comes back empty because by then the offset has gone past the last row of the table.
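To make the paging concrete, here's a rough sketch of the SQL each CAW call ends up running (the table and column names come from the sub workflow below; the exact offset on the final run is my guess based on the table size):

```javascript
// Sketch of the paged queries the Build Query step generates, one per CAW call.
// With a bit over 250 rows and a page size of 50, the seventh page falls past
// the end of the table, which is the run that errors with no response.
const pageSize = 50;
for (let run = 1; run <= 7; run++) {
  const offset = (run - 1) * pageSize;
  console.log(`Run ${run}: SELECT * FROM Ingredients LIMIT ${pageSize} OFFSET ${offset};`);
}
// Run 1: OFFSET 0   -> sub workflow returns rows 1–50, but the CAW output shows only row 1
// Run 7: OFFSET 300 -> no rows returned
```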
Here is the ‘mother’ workflow:
{
  "nodes": [
    {
      "parameters": {
        "options": {
          "systemMessage": "You are a helpful meal planning assistant that enquires, writes and deletes from an SQL database.\nEnquiries will be received in natural language and must be formatted for each tool.\nYou have tools to access the tables. READ THE TOOL DESCRIPTIONS CAREFULLY for their usage with tables, the information required, and the columns and rows to interact with.\nAlways check the various tools for up to date info."
        }
      },
      "type": "@n8n/n8n-nodes-langchain.agent",
      "typeVersion": 1.7,
      "position": [
        360,
        -80
      ],
      "id": "cd1aa1ff-ef41-4e17-a3ac-c9de81d04ed7",
      "name": "AI Agent"
    },
    {
      "parameters": {
        "model": {
          "__rl": true,
          "value": "gpt-4o",
          "mode": "list",
          "cachedResultName": "gpt-4o"
        },
        "options": {}
      },
      "type": "@n8n/n8n-nodes-langchain.lmChatOpenAi",
      "typeVersion": 1.2,
      "position": [
        80,
        40
      ],
      "id": "30d0426b-11df-482f-a5a7-e6f15987ac81",
      "name": "OpenAI Chat Model",
      "credentials": {
        "openAiApi": {
          "id": "fhvtHn1tr37paLC9",
          "name": "OpenAi account"
        }
      }
    },
    {
      "parameters": {},
      "type": "@n8n/n8n-nodes-langchain.memoryBufferWindow",
      "typeVersion": 1.3,
      "position": [
        260,
        100
      ],
      "id": "5784da09-bd68-44ea-a822-9dbff4836f23",
      "name": "Window Buffer Memory"
    }
  ],
  "connections": {
    "AI Agent": {
      "main": []
    },
    "OpenAI Chat Model": {
      "ai_languageModel": [
        [
          {
            "node": "AI Agent",
            "type": "ai_languageModel",
            "index": 0
          }
        ]
      ]
    },
    "Window Buffer Memory": {
      "ai_memory": [
        [
          {
            "node": "AI Agent",
            "type": "ai_memory",
            "index": 0
          }
        ]
      ]
    }
  },
  "pinData": {},
  "meta": {
    "instanceId": "ff8c2d52c7ce62cfe84d973e6c623e66481d2ee10836dcbb74798e52a93f70d8"
  }
}
And here is the workflow being called:
{
  "nodes": [
    {
      "parameters": {
        "workflowInputs": {
          "values": [
            {
              "name": "offset",
              "type": "number"
            }
          ]
        }
      },
      "type": "n8n-nodes-base.executeWorkflowTrigger",
      "typeVersion": 1.1,
      "position": [
        0,
        0
      ],
      "id": "5f841349-7887-4bc4-aaaa-44bc8452d4d0",
      "name": "When Executed by Another Workflow"
    },
    {
      "parameters": {
        "jsCode": "// Get the input JSON data\nconst data = $json;\n\n// Use offset from the input, or default to 0 if not provided\nconst offset = data.offset || 0;\n\n// Build the SQL query with LIMIT 50 and the offset\nconst query = `\n  SELECT *\n  FROM Ingredients\n  LIMIT 50\n  OFFSET ${offset};\n`.trim();\n\n// Return the query so the MySQL node can execute it\nreturn [{ query }];\n"
      },
      "type": "n8n-nodes-base.code",
      "typeVersion": 2,
      "position": [
        220,
        0
      ],
      "id": "bd939548-3b44-4272-9ee9-996741e33e7b",
      "name": "Build Query"
    },
    {
      "parameters": {
        "operation": "executeQuery",
        "query": "{{ $json[\"query\"] }}\n",
        "options": {}
      },
      "type": "n8n-nodes-base.mySql",
      "typeVersion": 2.4,
      "position": [
        440,
        0
      ],
      "id": "f763c113-e6e3-4048-a4e8-d4e1e61c0b9d",
      "name": "MySQL",
      "credentials": {
        "mySql": {
          "id": "gcovIwAeiXua71UA",
          "name": "MySQL account"
        }
      }
    }
  ],
  "connections": {
    "When Executed by Another Workflow": {
      "main": [
        [
          {
            "node": "Build Query",
            "type": "main",
            "index": 0
          }
        ]
      ]
    },
    "Build Query": {
      "main": [
        [
          {
            "node": "MySQL",
            "type": "main",
            "index": 0
          }
        ]
      ]
    },
    "MySQL": {
      "main": []
    }
  },
  "pinData": {},
  "meta": {
    "templateCredsSetupCompleted": true,
    "instanceId": "ff8c2d52c7ce62cfe84d973e6c623e66481d2ee10836dcbb74798e52a93f70d8"
  }
}
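To save you unpicking the escaped jsCode string above, this is the JavaScript inside the Build Query node (the backticks around the SQL got mangled when I pasted, but the code is as follows):

```javascript
// Get the input JSON data
const data = $json;

// Use offset from the input, or default to 0 if not provided
const offset = data.offset || 0;

// Build the SQL query with LIMIT 50 and the offset
const query = `
  SELECT *
  FROM Ingredients
  LIMIT 50
  OFFSET ${offset};
`.trim();

// Return the query so the MySQL node can execute it
return [{ query }];
```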
Here is the first of the seven outputs from the mother agent:
[
  {
    "ingredient_id": 1,
    "name": "Chicken Breast",
    "default_unit": "grams",
    "is_pantry_item": 0,
    "created_at": "2025-03-23 19:22:40",
    "updated_at": "2025-03-23 19:22:40",
    "snoozed": 0,
    "snoozed_start": null,
    "snoozed_until": null,
    "department_id": 2
  }
]
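For comparison, what I was hoping the CAW tool would hand back for that run is the whole page of 50 rows, something along these lines (an abridged sketch; only the first and last rows are spelled out):

```javascript
// Roughly the shape I expected from run 1 of the CAW tool (abridged sketch,
// the real page would contain all 50 rows with their full columns)
const expectedRun1 = [
  { ingredient_id: 1, name: "Chicken Breast", /* ...other columns... */ },
  // ... rows 2–49 ...
  { ingredient_id: 50, name: "Pork Chops", /* ...other columns... */ },
];
```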
And here is the (very long) output from the sub workflow for the same run, truncated here to the last few rows:
    "ingredient_id": 48,
    "name": "Meatballs",
    "default_unit": "Qty or Grams",
    "is_pantry_item": 0,
    "created_at": "2025-03-23 19:55:04",
    "updated_at": "2025-03-23 19:55:04",
    "snoozed": 0,
    "snoozed_start": null,
    "snoozed_until": null,
    "department_id": 2
  },
  {
    "ingredient_id": 49,
    "name": "Sausages",
    "default_unit": "Qty or Pack",
    "is_pantry_item": 0,
    "created_at": "2025-03-23 19:55:23",
    "updated_at": "2025-03-23 19:55:23",
    "snoozed": 0,
    "snoozed_start": null,
    "snoozed_until": null,
    "department_id": 2
  },
  {
    "ingredient_id": 50,
    "name": "Pork Chops",
    "default_unit": "Qty or Grams",
    "is_pantry_item": 0,
    "created_at": "2025-03-23 19:55:47",
    "updated_at": "2025-03-23 19:55:47",
    "snoozed": 0,
    "snoozed_start": null,
    "snoozed_until": null,
    "department_id": 2
  }
]
Any help here would be great!
Information on my n8n setup
- **n8n version:** 1.81.3
- **Database (default: SQLite):** Not sure on this. Sorry.
- **n8n EXECUTIONS_PROCESS setting (default: own, main):** Not sure on this either. Sorry.
- **Running n8n via (Docker, npm, n8n cloud, desktop app):** Server hosted, in Docker
- **Operating system:** My server's operating system is Linux/Ubuntu