Mistral Chat-Node failing with agent

Copied from GitHub issue #25318

Describe the problem/error/question

When running an agent with Mistral, the agent fails after about a minute with the message below. This started after the latest update today.
I didn’t change the workflow, and I tried a new workflow, a new agent, and a different deployment (local/cloud).
I just downgraded my local version to 2.4.8 and Mistral works with the agent again.

What is the error message (if any)?

Unexpected HTTP client error: TypeError: Failed to parse URL from [object Request]

Please share your workflow

Just the agent and the Mistral node.

Information on your n8n setup

  • n8n version: 2.6.3
  • Database (default: SQLite):
  • Running n8n via n8n cloud:
  • Operating system: macOS Tahoe

Hi @Captain-AI

Welcome to the n8n community!

This does look like a regression introduced after 2.4.8, and similar reports are starting to surface.

If you’re open to it, you could help strengthen the case by validating a couple of quick checks:

• Whether the Mistral Chat node works when used standalone (without the AI Agent)
• Whether the AI Agent works with another provider, to confirm the issue is Mistral-specific

If the results match what others are seeing, this would be a good candidate to report under Known Issues as well, referencing this thread and the GitHub issue.

That usually helps with visibility and makes it easier for the n8n team to pick it up. You can also tag one of the moderators or support members to help route it internally.
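For context on the error message itself: `Failed to parse URL from …` is the characteristic TypeError from Node’s built-in `fetch()` (undici) when it receives something that isn’t a valid URL and stringifies it before parsing. A `Request` object coerced to a string becomes the literal text `[object Request]`, which then fails URL parsing. This is only a sketch of that failure mode under the assumption that some wrapper in the chain coerces the `Request` to a string; it is not n8n’s actual code:

```javascript
// A Request instance has no custom toString(), so coercing it to a
// string yields the literal "[object Request]". If a fetch wrapper
// does that coercion before calling the real fetch(), URL parsing
// fails with the exact error from the bug report.
const req = new Request("https://api.mistral.ai/v1/chat/completions");

fetch(String(req)).catch((err) => {
  console.log(err.message);
  // "Failed to parse URL from [object Request]"
});
```

So the error points at how the request object is handed to the HTTP client, not at Mistral’s API itself, which matches the observation below that the API works fine via a plain HTTP request.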


Thanks @tamy.santos for helping streamline the process.

Mistral has no standalone chat node. It only works with the agent (see screenshot).
Yes, the agent works with OpenAI and Google successfully.

Hi @Captain-AI, welcome!
I had never used a Mistral model until now. I created an account, added billing, got the API keys, brought them into n8n, and created credentials. Everything up to that point seemed fine, but as soon as I attached the Mistral model to the LLM chain it gave me the same error. I’ve tried multiple times. I guess the temporary workaround is just to use another service provider. It’s also worth mentioning that when I accessed Mistral models via OpenRouter/Groq, everything worked really well, so I’d say this is a genuine issue. Hope this helps! And please don’t mind the AI spammers; they don’t even touch grass when replying to questions with AI.

@Captain-AI

Sorry about that, my English isn’t perfect, and I didn’t phrase it clearly earlier.
What I meant was: “Whether the issue occurs specifically when using Mistral as the AI Agent’s model.”
Thanks for clarifying.

If you have a minute and feel like testing a bit more, a couple of quick checks could really help narrow this down for the team:

  • Does the error happen right on the first model call, or only after the Agent tries to continue the conversation?
  • Does it still happen if streaming is turned off, or only when streaming is enabled?
  • And just to double-check: does it also fail when the Agent runs without any Memory node attached?

@tamy.santos
Thanks for following up.

  • Yes, it only occurs with Mistral
    • It’s not Mistral itself though, it’s the node (the API works via HTTP request)
  • I’m not using it as a chat, just for processing information
  • Streaming is turned off
  • I don’t have a Memory node attached.

I have also posted this on GitHub. There you can copy the code for the two workflows.

@tamy.santos do you know by any chance how I can downgrade my cloud workspace?
It processes 200 pages of PDFs for a client daily, and that has now come to a halt :grimacing:


@Captain-AI

If you’re the owner, you can change the Cloud version directly from the admin dashboard of your Cloud account.


@Captain-AI

I’ve seen that work has started on this issue. At this point, the best approach is to wait for the analysis and, in the meantime, follow one of the trade-offs below.

If you want to use Mistral, you should not use the AI Agent, or alternatively downgrade to n8n 2.4.8. If you want to use the AI Agent, you should not use the Mistral Cloud Chat Model for now and instead use another compatible provider.

At the moment, there is no third stable option.

Thanks for your support.

  • I am the owner, but I cannot downgrade the cloud version.
  • I cannot use another provider for privacy reasons. It’s running for an enterprise where decisions on processors take months.
  • I might just replace the Mistral nodes with HTTP Request nodes then.
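For anyone going the same route: replacing the node with an HTTP Request node boils down to sending the raw chat completions call yourself. Below is a minimal sketch of that request, assuming Mistral’s standard endpoint; the model name and prompt are illustrative, and the same URL, headers, and JSON body would be configured in the HTTP Request node:

```javascript
// Builds the raw request an HTTP Request node would send to Mistral's
// chat completions endpoint. Model name and prompt are illustrative.
function buildMistralRequest(apiKey, model, userText) {
  return {
    url: "https://api.mistral.ai/v1/chat/completions",
    options: {
      method: "POST",
      headers: {
        "Content-Type": "application/json",
        Authorization: `Bearer ${apiKey}`,
      },
      body: JSON.stringify({
        model,
        messages: [{ role: "user", content: userText }],
      }),
    },
  };
}

// Usage (requires a real API key):
// const { url, options } = buildMistralRequest(
//   process.env.MISTRAL_API_KEY, "mistral-small-latest", "Hello");
// const res = await fetch(url, options);
// const text = (await res.json()).choices[0].message.content;
```

The reply text then comes back under `choices[0].message.content`, which you can pick out with a Set/Code node downstream.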

unfortunately yep :confused:


Hi, same problem after the update. I have more than 150 workflows in production, all using Mistral Chat, broken since the last update.
Changing the model is not an option.


Hi,

Is there a timeline on fixing this?

I’m having the same question :slight_smile:
The closest thing to a timeline is this pull request, which needs only one last check before it is merged into the main version: https://github.com/n8n-io/n8n/pull/25342


It seems like a bug to me.

Anyway, until it gets fixed, the current workaround is to use OpenAI credentials and connect via the OpenAI node. It’s stable.

Base URL:

https://api.mistral.ai/v1
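This works because Mistral exposes an OpenAI-compatible API: the OpenAI node appends the standard OpenAI-style paths to whatever base URL the credential provides, so swapping in Mistral’s base URL routes the same request shape to Mistral. A rough sketch of that idea (the path-joining helper is illustrative, not n8n’s actual implementation):

```javascript
// Swapping only the base URL makes OpenAI-shaped requests hit Mistral.
// The path-joining helper is illustrative, not n8n's real code.
const MISTRAL_BASE_URL = "https://api.mistral.ai/v1";

function endpointFor(baseUrl, path) {
  // Normalize slashes so "…/v1" + "/chat/completions" joins cleanly.
  return baseUrl.replace(/\/+$/, "") + "/" + path.replace(/^\/+/, "");
}

console.log(endpointFor(MISTRAL_BASE_URL, "/chat/completions"));
// → https://api.mistral.ai/v1/chat/completions
```

With the base URL set this way, the API key in the OpenAI credential would need to be a Mistral key, and the model field a Mistral model name.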


Great news :sparkles: