AI slow to parse JSON

Hi there,

I have the following workflow:

I simply execute a SQL statement that returns some data, which I need to evaluate before I send it to the AI to work with.

Now, this is fine and works, as I can tell the AI to use the JSON from $('Microsoft SQL1').all() etc. However, when you do this, it is extremely slow (I'm using Gemini, but that should be a moot point).

To give you an idea, it takes around 40 seconds as the AI attempts to read/parse the JSON data and provide the answer, which leads to a horrible end-user experience. However, when I instead give the AI agent the SQL tool to re-execute the exact same query it just ran, it is way faster. I'm talking 40 seconds → 10 seconds, because it's not having to parse the lengthy JSON and build a nice response.

I have tried everything I can think of to get the first SQL node to spit out something fast, like a CSV string, for the AI to use and format, but I just can't get it working. I don't really want to leave it in this state of having to re-execute the same SQL I just executed in order to get sensible performance.

Does anyone know of a way I can get the AI to use the returned data quickly, much like the Microsoft SQL "tool" does, rather than the "node"?

Any help would be appreciated on this. It’s almost like I need the “Convert to CSV” node to have an option that states “output as string” or something instead so that I can pass that to the AI.
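In the meantime, a possible workaround (a sketch, not an official n8n feature) is a Code node between the SQL node and the AI node that flattens the items into a single CSV string. The `toCsvString` helper below is hypothetical; in n8n you would feed it the rows via `$input.all().map(item => item.json)` and then reference the result in the AI prompt as `{{ $json.csv }}`.

```javascript
// Hypothetical Code-node helper: turn an array of row objects into one CSV string.
// In an n8n Code node you would build `rows` from the SQL node's output:
//   const rows = $input.all().map(item => item.json);
function toCsvString(rows) {
  if (rows.length === 0) return '';
  const headers = Object.keys(rows[0]);
  // Quote fields containing commas, quotes, or newlines (RFC 4180 style)
  const escape = (v) => {
    const s = v === null || v === undefined ? '' : String(v);
    return /[",\n]/.test(s) ? '"' + s.replace(/"/g, '""') + '"' : s;
  };
  const lines = [headers.join(',')];
  for (const row of rows) {
    lines.push(headers.map((h) => escape(row[h])).join(','));
  }
  return lines.join('\n');
}

// Example rows standing in for the SQL node's output
const rows = [
  { id: 1, name: 'Alice', note: 'likes "quotes"' },
  { id: 2, name: 'Bob', note: 'a,b' },
];
const csv = toCsvString(rows);
// In a real Code node you would then `return [{ json: { csv } }];`
// so the AI node can read {{ $json.csv }} instead of raw JSON items.
```

A compact CSV string is usually far fewer tokens than the equivalent pretty-printed JSON, which is presumably why the tool-call path feels so much faster.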

Using n8n version 1.85.4

Anyone out there? :slight_smile:

Long story short: all I'm really asking is whether anyone out there knows how to convert JSON to CSV (or something equivalent) so that I can use it as a variable in the AI node.

Thanks

Daily cry for help :slight_smile:


Hi,

How much data is actually returned from the SQL server?
It might be n8n's JSON serialization/deserialization that takes a lot of the time.

The main idea is that if you use SQL as a tool, there might be no serialization within that part, only after.

Here are some troubleshooting steps I can think of:

  • Try to limit your SQL query (e.g. select only 10 rows) → still slow?
  • Try to pin the result from the SQL node (with the original query) → any difference?
  • Try to change the workflow data saving settings (the idea is to limit the time spent on storing the JSON):
    • Do not save intermediate / progress data
    • Do not save execution on success (I guess it won't affect the run)
    • Do not save execution on failure (I guess it won't affect the run)

reg.
j.

This topic was automatically closed 90 days after the last reply. New replies are no longer allowed.