MySQL AI agent tool

Hello, I’ve been spending some time lately playing with AI agents, but a very common issue I’m running into is with the MySQL Tool. I’m letting the AI Agent access my DB with the “Execute SQL” operation, but it often fails (changing the model sometimes helps, but it’s more of a coin flip). I’m wondering if it would be possible to configure the SQL node not to error out the whole scenario (stopping execution) whenever a query is bad. Instead of an “Error” I’d like to feed the error back to the AI Agent. Is there any work planned on adding this functionality to the MySQL tool?

Welcome to the community @robot_ron

You can configure the MySQL node to continue execution even when queries fail, which will pass the error information to the next node (your AI Agent) instead of stopping the workflow.

Hope this helps!

Hello, thank you for your answer. Unfortunately the feature you’ve mentioned is only available in the standalone node. I was referring to the tool connected to the AI Agent node, which is almost the same, except that it doesn’t include that feature :frowning:

Hi @robot_ron, welcome!

I think you can deal with this in a few different ways, but since you have to use the tool connected to the AI Agent, I would suggest adding other MySQL tools with hardcoded queries that let your AI Agent know about the schema of your database.

Pre-expose schema context via SQL queries that tell the AI agent about the tables and columns, since many of these failures (including the one you’re facing) happen because the model is guessing column names.

This mirrors how production-grade “tool-using” agents are typically implemented.

You can reduce errors significantly by:

Feeding the output of SHOW TABLES and DESCRIBE orders to the agent, or hardcoding schema documentation into the agent’s system prompt

for example:

The orders table has columns: id, reference, created_at, status

This won’t eliminate all errors, but it greatly reduces random failures and makes the agent more reliable. Additionally, you would need to tweak the system prompt a bit for smart tool calling.
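To make the schema-priming concrete, here’s a sketch of how a Code node could turn DESCRIBE output into a one-line schema summary for the system prompt. The table and column names are just illustrative, and the row shape assumes the Field/Type keys MySQL returns for DESCRIBE:

```javascript
// Sketch: build a compact schema line for the agent's system prompt
// from DESCRIBE-style rows. Table/column names are illustrative; the
// row shape (Field/Type) mirrors what MySQL returns for DESCRIBE.
const describeRows = [
  { Field: 'id', Type: 'int' },
  { Field: 'reference', Type: 'varchar(64)' },
  { Field: 'created_at', Type: 'datetime' },
  { Field: 'status', Type: 'varchar(32)' },
];

function schemaSummary(table, rows) {
  const cols = rows.map((r) => `${r.Field} (${r.Type})`).join(', ');
  return `The ${table} table has columns: ${cols}`;
}

const summary = schemaSummary('orders', describeRows);
```

Dropping a line like this into the system prompt is usually enough to stop the model guessing column names.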

I hope this solution helps you @robot_ron

What you need is a better design architecture, which I’ll explain below, such that your agents self-improve with mistakes. I’ll also offer a quick fix.

Quick Fix: “On Error” Node Setting

Every n8n node has a Settings tab (gear icon) with an “On Error” option. By default it’s set to “Stop Workflow” and that’s what’s killing the execution. Change it to:

“Continue Using Error Output”

This gives the MySQL node a second output branch (a red one) that emits the error message as data instead of crashing the workflow. You then route that red error output back into the AI Agent’s conversation loop so it can read the error and self-correct its query.

Wire it Properly: Agent Learns from its Mistakes

1. MySQL Node → Settings → On Error → “Continue Using Error Output”

2. Route the error output back to the agent. After the red (error) output, add a Set node that formats the error into a message like:

Your SQL query failed with this error: {{ $json.error.message }}
Please fix the query and try again. Remember to only use columns that exist in the schema.

Then loop that back into the AI Agent as a tool response.
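As a sketch of step 2 (the input shape here just mirrors the `{{ $json.error.message }}` expression above and may differ slightly by node version; the sample error text is illustrative), the formatting logic looks like:

```javascript
// Sketch of the error-formatting step: wrap the MySQL error into a
// corrective message the agent can act on. The { error: { message } }
// shape mirrors the expression above; the sample error is illustrative.
const item = { error: { message: "Unknown column 'order_reference' in 'field list'" } };

function buildRetryPrompt(errItem) {
  return [
    `Your SQL query failed with this error: ${errItem.error.message}`,
    'Please fix the query and try again. Remember to only use columns that exist in the schema.',
  ].join('\n');
}

const retryPrompt = buildRetryPrompt(item);
```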

3. Give the agent schema context upfront. The root cause here is that your AI hallucinated order_reference as a column name. Add a preliminary step (or put it in the agent’s system prompt) that runs:

SHOW COLUMNS FROM orders;

…and feeds the result to the agent. Something like:

“You have access to a MySQL database. Here are the available tables and their columns: [schema output]. Only use columns that exist in this schema. If a query fails, read the error and retry with a corrected query.”

4. Cap retries. Use a counter variable (via Set node + If node) to limit the agent to 2-3 retry attempts so a confused model doesn’t loop forever.
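The retry cap in step 4 can be sketched like this (in n8n it would be a Set node holding the counter plus an If node checking it; `MAX_RETRIES` and the field names here are illustrative):

```javascript
// Sketch of the retry cap: bump a counter on each failed attempt and
// stop routing errors back to the agent once the limit is exceeded.
const MAX_RETRIES = 3;

function nextAttempt(state) {
  const retryCount = (state.retryCount ?? 0) + 1;
  // retry stays true for attempts 1..MAX_RETRIES, then flips to false
  return { retryCount, retry: retryCount <= MAX_RETRIES };
}

let state = {};
state = nextAttempt(state); // attempt 1: retry allowed
state = nextAttempt(state); // attempt 2
state = nextAttempt(state); // attempt 3: last allowed retry
state = nextAttempt(state); // attempt 4: give up
```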

@robot_ron: you noted “swapping models” sometimes helps. That’s likely because you’ve packed the context window of the model you’re running, so when you switch to a different one (with a CLEAN/EMPTY CONTEXT WINDOW) it follows instructions again. As more and more context is added, the AI tries to keep track of EVERY PIECE OF IT, ultimately hitting something like brain fog or overload, at which point it gets confused and starts doing weird things, even hallucinating.

The schema context in the system prompt is what stabilizes this across models — without it, the model is guessing at column names based on the table name alone.

No custom code or feature request needed. This is all achievable with current n8n 2.x capabilities.

FYI-

KEY SUGGESTION: Look into the concept of “Context Rot” or “stuffing the context window.”

Here’s an example of what I mean by context rot: when you run, for instance, Claude Code in the CLI, it explicitly displays in your terminal the number of tokens the current context window has burnt through. Suppose the total context window (i.e., the MAX number of tokens the AI can hold in memory at any given time) maxes out at 200k. You should really never run an instance of Claude Code beyond 40% of the total available context window, because performance degrades rapidly once you cross that line (200k x 40% = 80k tokens). When running these systems by hand, you structure the design upfront so that the primary/orchestration agent never exceeds 40% of the available context, and you have it outsource all laborious tasks to subagents with clean context windows (akin to “swapping models” in this case).
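The arithmetic above as a tiny sketch (the 200k window and 40% threshold are the figures used in this example, not universal constants):

```javascript
// Worked example of the 40% rule: with a 200k-token window, the
// practical budget before degradation sets in is 80k tokens.
const contextWindow = 200_000;
const safeFraction = 0.4;
const tokenBudget = contextWindow * safeFraction; // 80,000 tokens

// Simple check you could apply to a running agent's token count
const overBudget = (usedTokens) => usedTokens > tokenBudget;
```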