The idea is:
The idea is to implement OpenAI's Structured Outputs (https://platform.openai.com/docs/guides/structured-outputs) so that it can be used in the AI nodes when using OpenAI models. This is the ONLY really reliable way to get 100% valid JSON responses.
My use case:
When using models like gpt-4o-mini, at least 1 out of a hundred requests with JSON output is invalid, and you either have to use an auto-fixing output parser, which costs extra tokens and is very slow, or do some kind of ugly error handling. None of this would be necessary if you just implemented the Structured Outputs standard. I've been using it in Python for months now and have never had a wonky output in millions of requests.
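The Python code the poster uses isn't shown, but here is a minimal sketch of the request body Structured Outputs expects (the function name and example schema are illustrative, not from the post). The key part is `response_format` of type `json_schema` with `strict: true`, which constrains the model's decoding so the reply always matches the schema:

```python
import json


def build_structured_request(model: str, prompt: str, schema_name: str, schema: dict) -> dict:
    """Build a Chat Completions request body that enables Structured Outputs.

    With strict=True the reply is constrained to match the schema, so
    json.loads on the model output cannot fail on malformed JSON.
    """
    return {
        "model": model,
        "messages": [{"role": "user", "content": prompt}],
        "response_format": {
            "type": "json_schema",
            "json_schema": {
                "name": schema_name,
                "strict": True,  # the flag that enforces schema-constrained decoding
                "schema": schema,
            },
        },
    }


# Illustrative schema: strict mode requires every property to be listed in
# "required" and "additionalProperties" to be false.
person_schema = {
    "type": "object",
    "properties": {
        "name": {"type": "string"},
        "age": {"type": "integer"},
    },
    "required": ["name", "age"],
    "additionalProperties": False,
}

body = build_structured_request("gpt-4o-mini", "Extract the person.", "person", person_schema)
print(json.dumps(body, indent=2))
```

This body can be POSTed to `https://api.openai.com/v1/chat/completions` with an API key; the same structure is what an n8n AI node would need to emit under the hood.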
I think it would be beneficial to add this because:
It would really be a big step toward making n8n a platform for enterprise-grade AI solutions.
Any resources to support this?
Are you willing to work on this?
Sure, but you won't need me; it is super easy to implement.
No, this is a feature request for n8n, meaning I would love it if the n8n team integrated this into their platform.
With GPT Assistants, there is an option to set the output type to JSON or JSON schema. This lets you force the model to output pure JSON, validated on their side. It can be used via an API call to accomplish what you want.
Good idea, but with a direct API call I can also just specify the JSON output format in the request body and achieve the same thing. I have been using this method when I only needed one isolated LLM call.
My request is to implement it in the AI nodes so that it can be used much more easily, including for things like agents, which would be very hard to build with pure API calls in n8n.
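For readers who want the direct-call route described above: a sketch of the JSON body one might send to `https://api.openai.com/v1/chat/completions` from an HTTP Request node (the schema and field values are illustrative, not taken from the thread):

```json
{
  "model": "gpt-4o-mini",
  "messages": [
    { "role": "user", "content": "Extract the person from: Alice, 30." }
  ],
  "response_format": {
    "type": "json_schema",
    "json_schema": {
      "name": "person",
      "strict": true,
      "schema": {
        "type": "object",
        "properties": {
          "name": { "type": "string" },
          "age": { "type": "integer" }
        },
        "required": ["name", "age"],
        "additionalProperties": false
      }
    }
  }
}
```

The `message.content` in the response is then guaranteed to parse as JSON matching the schema, which is exactly what the AI nodes cannot guarantee today.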
Can you share an example of how to do this in n8n? Thanks!
Sure, here is a workflow that uses this method in the HTTP Request node:
A must-have! Long prompts give inconsistent output that cannot be parsed.