Sub-Workflow: Avoid Redundant API calls + Forward/Retrieve Data

Describe the problem/error/question

Hi, what’s the best way to avoid unnecessary API calls in sub-workflows (or any workflow, for that matter) and also be able to forward/retrieve the last output of the sub-workflow? Using the “chat with sheets” example from n8n, I think the sub-workflow is loading the sheet each time a chat is entered. I’d like it to load the sheet once, and I’d like to forward or retrieve the output from another workflow at any given time. I’m aware of the n8n get workflow nodes and webhooks/etc, but am looking for a best practice. Thanks!

What is the error message (if any)?

Please share your workflow

Share the output returned by the last node

Information on your n8n setup

  • n8n version:
  • Database (default: SQLite):
  • n8n EXECUTIONS_PROCESS setting (default: own, main):
  • Running n8n via (Docker, npm, n8n cloud, desktop app):
  • Operating system:

It looks like your topic is missing some important information. Could you provide the following, if applicable?

  • n8n version:
  • Database (default: SQLite):
  • n8n EXECUTIONS_PROCESS setting (default: own, main):
  • Running n8n via (Docker, npm, n8n cloud, desktop app):
  • Operating system:

Something like the below is what I’m aiming for, though I know I can’t use the Wait node like this; it’s just to make the point. Basically, the sub-workflow should have an initial state where the data is loaded on workflow activation. Then the data is processed as usual, via the Execute Workflow Trigger. How can I achieve this, or something similar, with the workflow + sub-workflow setup?

Nice observation!
Personally, I wouldn’t worry about the API limits for most scenarios. Are you somehow hitting the quota for this workflow?

In any case,

Adding chat memory to the agent would be the easiest way, I think, since tool results can be persisted as part of the conversation. If the agent feels it has enough information from a past message, it may decide not to use the tool again, saving on API requests.

Another approach would be to implement your own caching mechanism, using something light and fast like Redis.

It’s also good to note that you can set an expiry for the cached data, so you can control how long it is allowed to be stale. In the following example, I set the expiry to 5 minutes.
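As a rough illustration, this is the cache-aside logic the Redis get/set-with-expiry nodes would implement. It’s plain JavaScript so it runs standalone; `fetchSheet` is a made-up stand-in for the actual Google Sheets call, and the `Map` stands in for Redis:

```javascript
// Cache-aside with expiry: check the cache first, fall back to the real
// fetch on a miss, and store the result with a time-to-live (TTL).
const cache = new Map(); // key -> { value, expiresAt }; stand-in for Redis

function cachedFetch(key, fetchFn, ttlMs, now = Date.now) {
  const entry = cache.get(key);
  if (entry && entry.expiresAt > now()) {
    return entry.value; // cache hit: no API call made
  }
  const value = fetchFn(); // cache miss: call the API
  cache.set(key, { value, expiresAt: now() + ttlMs });
  return value;
}

// Hypothetical sheet fetch, with a counter so we can see how often it runs.
let apiCalls = 0;
const fetchSheet = () => { apiCalls++; return [{ row: 1 }]; };

// 5-minute expiry, matching the Redis TTL described above.
cachedFetch('sheet:main', fetchSheet, 5 * 60 * 1000);
cachedFetch('sheet:main', fetchSheet, 5 * 60 * 1000); // served from cache
console.log(apiCalls); // 1
```

In n8n terms, the `cache.get` and `cache.set` steps would each be a Redis node, with the TTL set via the node’s expiry option.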


Thank you Jim, I think I will find a use for both of the approaches you gave. However, my goal at the moment is to create a minimal example that only uses n8n nodes. I think it is just good practice to avoid unnecessary API calls. If I were coding, I would just store the data in a variable or read/write it from disk. However, each execution in n8n seems to rule out the variable option. That leaves replacing your Redis approach with an n8n node, but it is not clear to me which node I should use. Any ideas? Perhaps the in-memory vector store? That would not be as general as I would like, though, given it is designed for embeddings. Basically, what can we replace Redis with?
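To make concrete what I mean by “store it as a variable”: in ordinary code, a memoized loader like the sketch below would be enough, but because each n8n execution starts fresh, the variable is lost between runs, which is why an external store is needed at all. `loadSheet` is a hypothetical stand-in for the real API call:

```javascript
// Memoize-in-a-variable: fine within a single process, but n8n executions
// don't share state, so this cache resets on every new execution.
let sheetData = null; // in-memory cache; lost when the execution ends
let loadCount = 0;    // counts how many times the "API" is actually hit

function loadSheet() {
  // Hypothetical stand-in for the real sheet fetch.
  loadCount++;
  return [{ name: 'Alice' }, { name: 'Bob' }];
}

function getSheet() {
  if (sheetData === null) {
    sheetData = loadSheet(); // only called on first access
  }
  return sheetData;
}

getSheet();
getSheet();
console.log(loadCount); // 1 within this run; a new execution starts over
```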

You could try the “Chat Memory Manager” and “Window Buffer Memory” nodes (Advanced AI > Other AI nodes > Miscellaneous > Chat Memory Manager) and have them attached to your AI agent.

If you’re willing to go as extreme as using the in-memory vector store for this (which I wouldn’t recommend btw!), another possible approach is to just copy the spreadsheet contents into a code node and use that instead :thinking:
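For the copy-into-a-Code-node idea, the node body could look roughly like this. The rows below are made-up placeholders; n8n Code nodes output an array of `{ json }` items, and in an actual Code node you would end with `return items;`:

```javascript
// Hard-coded copy of the spreadsheet contents, one { json } object per row
// (the item shape n8n Code nodes work with). Rows here are placeholders.
const rows = [
  { name: 'Alice', city: 'Berlin' },
  { name: 'Bob', city: 'Lisbon' },
];

const items = rows.map((row) => ({ json: row }));
console.log(items.length); // 2
```

The obvious trade-off is that the data is frozen at the time you pasted it, so this only suits sheets that rarely change.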

Thanks again. This should also work for self-hosted setups.

This topic was automatically closed 7 days after the last reply. New replies are no longer allowed.