When running an agent with Mistral, the agent fails after about a minute with the message below. This happened after the latest update today.
I didn’t change the workflow, and I also tried a new workflow, a new agent, and a different deployment (local/cloud).
I just downgraded my local version to 2.4.8, and Mistral works with the agent again.
What is the error message (if any)?
Unexpected HTTP client error: TypeError: Failed to parse URL from [object Request]
This does look like a regression introduced after 2.4.8, and similar reports are starting to surface.
If you’re open to it, you could help strengthen the case by validating a couple of quick checks:
• Whether the Mistral Chat node works when used standalone (without the AI Agent)
• Whether the AI Agent works with another provider, to confirm the issue is Mistral-specific
If the results match what others are seeing this would be a good candidate to also report under Known Issues, referencing this thread and the GitHub issue.
That usually helps with visibility and makes it easier for the n8n team to pick it up. You can also tag one of the moderators or support members to help route it internally.
Hi @Captain-AI, welcome!
I had never used a Mistral model until now. I created an account, added billing, got the API keys, brought them into n8n, and created credentials. Everything up to that point seemed fine, but as soon as I attached the Mistral model to the LLM Chain, it gave me the same error. I’ve tried multiple times. I guess a temporary workaround is just to use another service provider. It’s also worth mentioning that when I accessed Mistral models via OpenRouter/Groq, everything worked really well, so I guess this is a genuine issue. Hope this helps! And please don’t mind the AI spammers; they don’t even touch grass when replying to questions with AI.
Sorry about that, my English isn’t perfect, and I didn’t phrase it clearly earlier.
What I meant was: “Whether the issue occurs specifically when using Mistral as the AI Agent’s model.”
Thanks for clarifying.
If you have a minute and feel like testing a bit more, a couple of quick checks could really help narrow this down for the team:
• Does the error happen right on the first model call, or only after the Agent tries to continue the conversation?
• Does it still happen if streaming is turned off, or only when streaming is enabled?
• And just to double-check: does it also fail when the Agent runs without any Memory node attached?
It’s not Mistral though, it’s the node (it works via HTTP Request).
I’m not using it as a chat, just for processing information.
Streaming is turned off.
I don’t have a Memory node attached.
I have also posted this on GitHub. Here you can copy the code for the two workflows:
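Since the HTTP Request route works, here is a rough sketch of calling Mistral’s chat completions endpoint directly as a stopgap (assumes Node 18+ with global `fetch`; the model name and the `MISTRAL_API_KEY` environment variable are placeholders you’d swap for your own):

```javascript
// Sketch of calling the Mistral API directly, bypassing the broken node.
// Assumes Node 18+ (global fetch) and a MISTRAL_API_KEY environment variable.

// Builds the JSON payload the chat completions endpoint expects.
function buildMistralPayload(prompt, model = "mistral-small-latest") {
  return {
    model,
    messages: [{ role: "user", content: prompt }],
  };
}

async function mistralChat(prompt) {
  const res = await fetch("https://api.mistral.ai/v1/chat/completions", {
    method: "POST",
    headers: {
      Authorization: `Bearer ${process.env.MISTRAL_API_KEY}`,
      "Content-Type": "application/json",
    },
    body: JSON.stringify(buildMistralPayload(prompt)),
  });
  if (!res.ok) throw new Error(`Mistral API error: ${res.status}`);
  const data = await res.json();
  // The assistant's reply lives in choices[0].message.content.
  return data.choices[0].message.content;
}
```

In n8n this maps onto an HTTP Request node: POST to that URL, a Bearer auth header with your Mistral key, and the same JSON body.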
@tamy.santos, do you by any chance know how I can downgrade my cloud workspace?
It processes 200 pages of PDFs for a client daily, and that has now come to a halt.
I’ve seen that work has started on this issue. At this point, the best approach is to wait for the analysis and, in the meantime, use one of the trade-offs below.
If you want to use Mistral, don’t use the AI Agent, or alternatively downgrade to n8n 2.4.8. If you want to use the AI Agent, avoid the Mistral Cloud Chat Model for now and use another compatible provider instead.
Hi, same problem after the update. I have more than 150 workflows in production, all using Mistral Chat, broken since the last update.
Changing model is not an option.
I have the same question.
The closest you can get is this pull request, which needs only one last check before it gets merged into the main version: https://github.com/n8n-io/n8n/pull/25342