AI Model Producing Different Outputs for the Same Input in n8n | Agentic workflow

I’m encountering an issue where the AI model (likely using OpenAI or another LLM) returns different responses for the same input and prompt in my n8n workflow. This is causing inconsistencies in automation, making it unreliable for production use.

Issue Details:
- When sending the same input text with the same prompt, I get different responses each time.
- This happens in both test mode and production executions.
- I’ve tried changing the temperature and setting it to 0 (zero), but the problem persists.
- The inconsistency affects automated decision-making and workflow reliability.
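
As an aside on the temperature point: with OpenAI's chat completions API, `temperature: 0` alone does not guarantee identical outputs. The API also accepts a `seed` parameter and a `response_format` hint that can reduce (but not eliminate) run-to-run variation. A minimal sketch of such a request payload, assuming you call the API directly (the model name, seed value, and messages are illustrative):

```python
# Sketch of a determinism-oriented request payload (assumption: direct
# OpenAI chat completions call; values are illustrative).
payload = {
    "model": "gpt-4o",
    "temperature": 0,
    "seed": 42,                                  # best-effort determinism, not a guarantee
    "response_format": {"type": "json_object"},  # ask for raw JSON only
    "messages": [
        {"role": "system", "content": "You are an AI assistant specialized in extracting structured data..."},
        {"role": "user", "content": "Load details: ..."},
    ],
}
```

Even with `seed` set, OpenAI documents determinism as best-effort, so downstream validation is still needed.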

The inputs are the same:


My prompt:

# Role:
You are an AI assistant specialized in extracting structured data from unstructured text. Your task is to process details about a trucking load and return the extracted information in a valid JSON format.

# Task:
You will receive load details in plain text. Your job is to extract and return the relevant information strictly in JSON format, matching the expected schema.

# Important Notes:
- Return only a valid JSON object matching the expected schema. Do not include any extra text, explanations, or code blocks.
- Ensure "offered_rate" is a number (not a string).
- If a detail is missing, return empty string instead of omitting the key.
- Do not add markdown, code formatting, or additional text —only return raw JSON.
- Use consistent formats: 
  - `"pickup_date"`: Full weekday and date format (e.g., `"Monday, February 17th"`).
  - `"weight"`: Numerical value followed by `"lbs"` (e.g., `"30,000 lbs"`).
  - `"offered_rate"`: Numeric (e.g., `3333` not `"3333"`).
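
The format rules above can also be enforced after the fact, in a Code node or similar, rather than relying on the model alone. A minimal sketch, assuming the three field names from the prompt (the full schema is not shown, so `EXPECTED_KEYS` is an assumption):

```python
import json

# Post-processing check for the extracted load JSON.
# The field names mirror the prompt; the full schema is an assumption.
EXPECTED_KEYS = ["pickup_date", "weight", "offered_rate"]

def normalize_load(raw: str) -> dict:
    """Parse the model output and enforce the prompt's format rules."""
    data = json.loads(raw)
    out = {}
    for key in EXPECTED_KEYS:
        out[key] = data.get(key, "")  # missing detail -> empty string, not an omitted key
    # "offered_rate" must be a number, not a string
    if isinstance(out["offered_rate"], str) and out["offered_rate"]:
        out["offered_rate"] = float(out["offered_rate"].replace(",", ""))
    return out

print(normalize_load('{"pickup_date": "Monday, February 17th", "offered_rate": "3,333"}'))
```

This way a run where the model returns `"3,333"` as a string still yields a numeric rate instead of failing the whole execution.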

Model output doesn’t fit required format
To continue the execution when this happens, change the ‘On Error’ parameter in the root node’s settings

Please share your workflow

Setup Details:
n8n Version: 1.73.0
Deployment: Google Cloud (Docker)
AI Model Used: (Specify the model, e.g., GPT-4o, Gemini 2.0 Flash)
Database: PostgreSQL
Execution Mode: queue

Hey @yow1da I would say that is kind of the nature of LLMs now. You cannot guarantee reliable output. That being said, you can add something like Auto-fixing Output Parser node documentation | n8n Docs to auto fix broken output or even add another LLM call to the chain and have it repair / review the JSON. I would probably opt for testing both on multiple examples (think 50+) and see which one works more consistently. I think there’s nothing wrong with your prompt in general. Just add a double layer of security.
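
The "second layer of security" idea can also be a cheap local repair pass before (or instead of) a second LLM call: strip any markdown fences the model adds despite the prompt and keep only the outermost JSON object. A sketch of that idea (function name and logic are illustrative, not an n8n API):

```python
import json
import re

def repair_json(text: str) -> dict:
    """Best-effort cleanup of LLM output before parsing (illustrative sketch)."""
    # Remove ```json ... ``` fences the model sometimes adds despite the prompt
    text = re.sub(r"```(?:json)?", "", text).strip()
    # Keep only the outermost JSON object
    start, end = text.find("{"), text.rfind("}")
    if start == -1 or end == -1:
        raise ValueError("no JSON object found in model output")
    return json.loads(text[start:end + 1])

print(repair_json('Sure! ```json\n{"offered_rate": 3333}\n```'))  # → {'offered_rate': 3333}
```

Only if this cheap pass fails would you need to spend another LLM call on repair.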

Hi @jksr
Thank you for your reply

I added the Auto-fixing Output Parser. Should I add a retry prompt?
What do you think?


Looking at the retry prompt, it essentially sends the output back to the LLM for another round of correction. If you want to be extra sure, you could still add an error workflow for failed executions, but configure the AI Agent to continue running on error (not sure if your workflow is processing multiple items at once).


@jksr
I added a retry prompt, but it is still failing:

Model output doesn’t fit required format

To continue the execution when this happens, change the ‘On Error’ parameter in the root node’s settings

I will check it later and give you some feedback


Hmm, two things:

  • You changed the prompt in the Auto-fixing Output Parser - that's not recommended, because it uses certain variables in there that give you the functionality you're looking for
  • As soon as I switch from the AI Agent to a Basic LLM Chain, I don't have any problems any more. Could that be an option in your setup?

@jksr hey, thank you so much

  • I tried removing the retry prompt, but I'm still encountering the same issues.
  • I am trying to build an agentic workflow; this workflow is a sub-agent and will have tools in the future

From Structured Output Parser node common issues | n8n Docs

So maybe you will have to add a separate step for structured output before you pass to the agent node.

Same problem here.

If in your case you can replace the AI Agent with another simpler node (‘Basic LLM Chain’, ‘Information Extractor’), this will help you solve the problem.