How to avoid using sub-workflows for simple LLM tool calls in n8n

Hi all — I’m working on an AI agent in n8n that routes incoming user queries to the appropriate tool based on intent. The architecture is pretty straightforward: the agent chooses between Tool A or Tool B, depending on the user’s input.

:wrench: Each Tool has a focused LLM model with its own specific prompt and behavior — for example:

  • Tool A identifies tone,
  • Tool B summarizes text.

I noticed n8n supports calling LLMs or tools inside sub-workflows, which is great for complex logic. But in my use case, these are just lightweight prompt wrappers — breaking each one out into a full sub-workflow feels like unnecessary overhead, especially when each sub-workflow is super specific to its parent workflow and only consists of a single node.

The problem:

I’d love to just pass inputs from the AI Agent directly to an inline “LLM call” with its specific prompt, without forcing a sub-workflow structure. But from what I can tell, n8n doesn’t allow basic Message Model (LLM) nodes to be triggered conditionally from the main workflow unless they’re wrapped in a sub-workflow. These tools are also far from set in stone: I’m constantly changing how they work to figure out the best setup. So having everything separated into different workflows, where changes constantly need to be made in both the parent and the sub-workflow, becomes hard to manage.

It seems n8n can route to tools that perform code or trigger complex chains via sub-workflows…
…but not simply route to lightweight LLM prompt templates without adding sub-workflow complexity.

My Question:

Is there a better way to structure this so I can keep these simple LLM prompt calls inline, without building a separate sub-workflow for each one? Maybe it’s not directly possible the way I’m thinking, but still possible via a workaround?

Here is an example workflow showing how I currently have it working with sub-workflows, which is not ideal.


Hey @joshkasaptriosoft ,

I think I understand the problem you are facing here.

Yes, there is a workaround available for this.

Just a simple question:
Are you using n8n Cloud or a self-hosted instance?
If your answer is self-hosted, then this is for you :backhand_index_pointing_down:

Thanks to @octionic for this workaround & solution :white_check_mark:.


Since yesterday it has even become a native feature. With prerelease 1.100.0 there is a new Model Switcher node that can be added between the Agent and multiple models.


Ohh…great, I didn’t know about this…awesome!

Thanks for the update @octionic .


The workflow you provided won’t quite work for me, because an agent is able to intelligently call the tools it needs based on the context of the message and its instructions.

If I have 20 tools, for example, I don’t want the AI to loop through asking the same question to 20 models when only the last model is relevant.

Furthermore, an agent has memory/context of the tool calls it has made and their responses, in order to better determine what it needs to call next. For example: tool 20, followed by tool 5 using info received from tool 20, followed by tool 1 using info returned from tools 20 and 5. But this is all dynamic, not hard-coded, so the AI can take whatever path it wants. I haven’t been able to figure out a good way to replicate this behavior without tool calls from an agent.

I should also clarify what I mean when I say multiple models for various tasks: I don’t really mean different base models. For example, I’m using gpt-o4-mini for all of them, but each has a different system prompt in order to elicit different behavior when called.
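To illustrate the pattern, one shared base model with a per-tool system prompt looks roughly like this outside of n8n (the tool names and prompt texts here are made up for the sketch; only the model name comes from this thread):

```python
# Sketch: one base model, many per-tool system prompts.
BASE_MODEL = "gpt-o4-mini"  # model name as written in the thread

# Hypothetical tool prompts -- each "tool" is just a prompt wrapper.
TOOL_PROMPTS = {
    "identify_tone": "You identify the emotional tone of the user's text.",
    "summarize": "You summarize the user's text in two sentences.",
}

def build_request(tool: str, user_text: str) -> dict:
    """Assemble a chat request for the given tool using the shared base model."""
    return {
        "model": BASE_MODEL,
        "messages": [
            {"role": "system", "content": TOOL_PROMPTS[tool]},
            {"role": "user", "content": user_text},
        ],
    }
```

Every tool call hits the same model; only the system message changes.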

This might work, but I can’t say for sure until I actually test it. I’m using Docker, and it seems the image isn’t updated yet.

You need to pull the “next” tag.
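For example, if you run n8n via Docker Compose, switching the service to the prerelease tag (assuming the standard `n8nio/n8n` image name) would look something like:

```yaml
services:
  n8n:
    image: n8nio/n8n:next   # prerelease tag, includes the Model Switcher node
    ports:
      - "5678:5678"
```

After changing the tag, `docker compose pull && docker compose up -d` recreates the container on the prerelease image.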