Model Selector node as a fallback mechanism when the primary LLM fails

I want to know how I can use the Model Selector node as a fallback mechanism in my workflow. I checked the docs and found this statement: “… This enables implementing fallback mechanisms for error handling or choosing the optimal model for specific tasks.” I take that to mean I could use it when my primary LLM returns an error.

I tried changing the “On Error” setting from “Stop” to “Continue” and getting the error message with the syntax {{ $json.execution.error.message }}, as in the workflow below, but it doesn’t work. So how should I use the Model Selector node as a fallback mechanism?

Great question, @ezraluandre. The Model Selector node doesn’t work the way you might expect. Rather than relying on the workflow’s execution error handling, the node catches LLM-related errors such as rate limiting, timeouts, or API issues and then automatically falls back to an alternative model.

How the Model Selector Actually Works
The Model Selector checks its rules while the AI Agent is executing, not after. When the primary model fails, the node catches the error internally and then switches to the appropriate model based on the rules you defined.

Issues with Your Current Setup

  1. {{ $json.execution.error.message }} won’t work - this syntax is for workflow-level errors, not LLM errors (see the comparison after this list)

  2. The AI Agent’s “On Error” setting - Setting this to “Continue” bypasses the Model Selector’s error handling

  3. Rule conditions - You need to reference the actual error from the LLM, not the workflow execution
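
For reference, here is a rough comparison of the two expressions. Treat it as an illustration of the distinction rather than official syntax documentation; exact behaviour can vary by n8n version:

  • {{ $json.execution.error.message }} - reads an error from workflow execution data (for example, the payload an Error Trigger workflow receives), so it has nothing to match during a normal run

  • {{ $error.message }} - reads the LLM error that the Model Selector evaluates its rule conditions against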

Here’s how to set it up properly:

1. AI Agent Settings:

  • Keep “On Error” set to “Stop and Return Error” (default)

  • This allows the Model Selector to catch LLM errors

2. Model Selector Rules:
Use the correct error reference syntax:

{{ $error.message }}

3. Example Rule Setup (a combined sketch follows the examples):

For rate limit errors:

  • Condition: {{ $error.message }} contains rate limit (or 429)

  • Model Index: 2 (your fallback model)

For general errors:

  • Condition: {{ $error }} is not empty

  • Model Index: 2 (your fallback model)
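
Put together, a minimal rule table for this setup might look like the sketch below. The field names follow the wording used above; the labels in the Model Selector node’s UI may differ slightly depending on your n8n version:

  • Rule 1 - Condition: {{ $error.message }} contains rate limit > Model Index: 2

  • Rule 2 - Condition: {{ $error }} is not empty > Model Index: 2

If rules are evaluated top to bottom, keep the more specific condition first so the catch-all only fires when nothing else matched.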

To test if it’s working:

  1. Use a model that will definitely fail (wrong API key, or a model that doesn’t exist)

  2. The Model Selector should automatically switch to your fallback

  3. Check the execution logs to see which model was actually used

Thanks, it works! But what if I want the second model to do the same? Let’s say I want the rules for both LLM 1 and LLM 2 to be about rate limits; do I also use the {{ $error.message }} syntax on the second rule?


Hi @ezraluandre. Yes, you can use the $error.message syntax there too, but it has some limits, so here is what I would recommend:

Set up rules with increasingly broad conditions (a worked example follows the list):

  • Rule 1: {{ $error.message }} contains 429 > Model 2

  • Rule 2: {{ $error.message }} contains rate limit > Model 3 (broader match, catches what rule 1 misses)

  • Rule 3 (default): {{ $error }} is not empty > Model 3 (catch-all)
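
As a concrete illustration of how the layering behaves (provider error texts vary, and whether contains is case-sensitive depends on your condition settings, so treat these strings as examples only):

  • 429 Too Many Requests > matches Rule 1 > Model 2

  • rate limit exceeded, retry later > misses Rule 1, matches Rule 2 > Model 3

  • request timed out > misses Rules 1 and 2, falls through to the catch-all Rule 3 > Model 3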


I just noticed that I use the {{ $error }} syntax and not {{ $error.message }}. When I use {{ $error.message }} with contains Rate Limit, it returns this error message:

No matching rule found

None of the defined rules matched the workflow data

Why does it fail when I add the message property?


@ezraluandre That is fine, using $error instead of $error.message in your Model Selector, because the error object does not always carry a message property.

It is most likely failing because the message property simply does not exist on the error object in your case (the error object itself being empty is possible but highly unlikely). When that property is missing, the contains condition has nothing to compare against, so no rule matches.
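
If you still want to match on the message text when it is present, one option is to fall back to the whole error when message is missing. This is only a sketch and assumes your n8n version’s expression engine accepts standard JavaScript such as optional chaining and JSON.stringify:

  • Condition: {{ $error?.message ?? JSON.stringify($error) }} contains rate limit

That way the rule can still fire when the provider returns a structured error without a message field.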

OK, that clarifies why I got that error message. Once again, thank you!


@ezraluandre Glad it helped! Consider marking that reply as the solution so future readers with the same question can find it.

Cheers!
