Auto-Fixing Output Parser, am I doing it wrong?

I’m having a lot of trouble working with the auto-fixing output parser. It doesn’t seem to change a single thing in my workflow, so I’m not sure: maybe I have the wrong impression that it is supposed to fix the JSON when what OpenAI generates is invalid?

It looks like your topic is missing some important information. Could you provide the following if applicable:

  • n8n version:
  • Database (default: SQLite):
  • n8n EXECUTIONS_PROCESS setting (default: own, main):
  • Running n8n via (Docker, npm, n8n cloud, desktop app):
  • Operating system:

@oleg When you are back do you have any thoughts on this?

To confirm: I haven’t been able to get the auto-fixing module to work, even with the basic template here:

Without the JSON schema (and an input sample) I can’t help you with that. Most of the time it’s a JSON schema issue.

I can confirm there is no way to get the output parser to work with GPT4-preview.
My solution here works.

@n8nonmac here is the code you should use in a function node placed right after your LLM chain.

// Raw LLM answer from the previous node
const content = items[0].json.text;
// Isolate the JSON object by finding the first '{' and the last '}'
const jsonStringStart = content.indexOf('{');
const jsonStringEnd = content.lastIndexOf('}') + 1;
const jsonString = content.substring(jsonStringStart, jsonStringEnd);
// Parse the extracted substring and return it as the node's output
const jsonData = JSON.parse(jsonString);
return [{json: jsonData}];
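One caveat: if the model returns no braces at all, the snippet above throws on `JSON.parse`. A slightly more defensive variant of the same extraction (my own sketch, `extractJson` is a hypothetical helper name) returns an error item instead of failing the execution:

```javascript
// Defensive variant of the brace-based extraction: returns an error
// object instead of throwing when no JSON object can be found or parsed.
function extractJson(content) {
  const start = content.indexOf('{');
  const end = content.lastIndexOf('}');
  if (start === -1 || end === -1 || end < start) {
    return { error: 'No JSON object found in LLM output', raw: content };
  }
  try {
    return JSON.parse(content.substring(start, end + 1));
  } catch (e) {
    return { error: `Invalid JSON: ${e.message}`, raw: content };
  }
}

// In an n8n Function node you would end with:
// return [{ json: extractJson(items[0].json.text) }];
```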

You can message me on the Nocode France Slack if you run into trouble. :slight_smile:

I believe it’s linked in my workflow? You just don’t have the variables, but the prompt and the JSON are attached. Or am I missing something in the implementation itself, maybe?

Thanks!

Ideally, I’d like to know how the auto-fixing is supposed to work when properly configured!

The auto-fixing makes a second request to force the AI to output its previous answer in a certain format.
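That retry loop can be sketched roughly like this. This is my own minimal sketch of the idea, not n8n’s actual implementation; `callModel` is a hypothetical stand-in for whatever LLM call the node makes:

```javascript
// Sketch of an auto-fixing parse loop: try to parse the answer as-is,
// and only on failure make a second request asking the model to repair it.
async function autoFixParse(answer, callModel) {
  try {
    return JSON.parse(answer); // first attempt: parse the original answer
  } catch (err) {
    // Second request: feed the broken output and the parse error back
    // to the model and ask for corrected JSON only.
    const fixed = await callModel(
      `The following output is not valid JSON (${err.message}). ` +
      `Return ONLY the corrected JSON:\n${answer}`
    );
    return JSON.parse(fixed); // throws again if the fix is still broken
  }
}
```

If the first parse succeeds, no second request is made at all, which may be why a working chain shows no visible difference.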

I have tested again and again, and GPT4-preview is incapable of outputting plain JSON without prepending text and wrapping the answer in ``` fences. Thus any output parser relying on that model would not work.

The best alternative would be to fall back to GPT-3.5 for the parsing step, but that would be unreliable and incur unnecessary cost.

So the solution I found was to use a regex to remove these parts by matching where your JSON starts and ends.

Here (in the screenshots on my post) it seems that the result of the auto-fixing is

"sendToLLM": false

which would mean there was not even an attempt at fixing the output?

Seems there’s no consensus on this. Has anyone been able to make the auto-fixing module output an LLM answer?