OpenRouter fallback model not used when free primary model limit hit

In a workflow with a couple of Basic LLM Chain nodes I’m using as the model an OpenRouter sub node with a free model selected, and a paid model as the fallback. But the workflow errors if it hits the per-minute limit for free models at OpenRouter (Error = “OpenAI: Rate limit reached”) and doesn’t use the fallback as I’d hoped it would. If this is how fallback models are expected to operate, is there another way to achieve my desired behaviour?

Locally hosted in Docker on a Windows PC, n8n v1.110.1


Hi @selbrae, as far as I know the OpenRouter node does not currently switch to the fallback model automatically when a rate limit is hit; the workflow errors out immediately. The solution is to use an Error Trigger node to catch the error and continue to a fallback workflow that calls the paid model. This approach is widely used to keep workflows running smoothly.
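The Error Trigger pattern boils down to "try the free model first, and only call the paid model when the free one raises a rate-limit error". A minimal sketch of that logic in Python (the model IDs, `call_model` function, and `RateLimitError` class are all illustrative stand-ins, not real OpenRouter API calls):

```python
class RateLimitError(Exception):
    """Stand-in for the 'OpenAI: Rate limit reached' error."""

def call_model(model: str, prompt: str) -> str:
    # Hypothetical stand-in for an OpenRouter chat-completion call.
    # Here we simulate the free tier having exhausted its per-minute quota.
    if model.endswith(":free"):
        raise RateLimitError("OpenAI: Rate limit reached")
    return f"[{model}] response to: {prompt}"

def call_with_fallback(primary: str, fallback: str, prompt: str) -> str:
    # Same shape as the Error Trigger workflow: attempt the free primary
    # model, and on a rate-limit error re-run the request on the paid one.
    try:
        return call_model(primary, prompt)
    except RateLimitError:
        return call_model(fallback, prompt)

result = call_with_fallback(
    "some-provider/some-model:free",   # hypothetical free primary model
    "some-provider/paid-model",        # hypothetical paid fallback model
    "hello",
)
print(result)
```

In n8n terms, the `except` branch corresponds to the error workflow started by the Error Trigger node, which re-sends the same input to a second LLM Chain configured with the paid model.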

Hope this helps, stay motivated, and good luck!


That worked, thank you so much.
This is the setup I ended up with.
