When using models like gpt-4o-mini, at least one out of a hundred requests with JSON output is invalid, so you either have to use an auto-fixing output formatter, which costs extra tokens and is very slow, or do some kind of ugly error handling. None of this would be necessary if the Structured Outputs standard were implemented. I've been using it in Python for months now and have never had a single malformed output across millions of requests.
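For context, this is roughly what a Structured Outputs request looks like against the OpenAI Chat Completions API. The schema and model name here are just illustrative; the key part is the `response_format` of type `json_schema` with `"strict": True`, which makes the API constrain and validate the output server-side:

```python
import json

# Illustrative schema; any JSON schema for your use case works.
invoice_schema = {
    "type": "object",
    "properties": {
        "customer": {"type": "string"},
        "total": {"type": "number"},
    },
    "required": ["customer", "total"],
    "additionalProperties": False,
}

# Chat Completions request body using Structured Outputs:
# with "strict": True the model is constrained to emit JSON that
# matches the schema exactly, so no client-side autofixing is needed.
body = {
    "model": "gpt-4o-mini",
    "messages": [{"role": "user", "content": "Extract the invoice data."}],
    "response_format": {
        "type": "json_schema",
        "json_schema": {
            "name": "invoice",
            "strict": True,
            "schema": invoice_schema,
        },
    },
}

print(json.dumps(body, indent=2))
```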
I think it would be beneficial to add this because:
It would really be a big step for n8n in providing a platform for enterprise-grade AI solutions.
Any resources to support this?
Are you willing to work on this?
Sure, but you won't need me; it is super easy to implement.
With GPT assistants, there is an option for JSON or JSON schema as the output type. This allows you to force the model to output pure JSON, validated on their side. This can be used with an API call to accomplish what you want.
Good idea, but by using an API call directly I can also just specify the JSON output format in the request body and achieve the same thing. I have been using this method whenever I only needed one isolated LLM call.
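As a sketch of that direct-call approach (endpoint and field names from the public OpenAI Chat Completions API; the schema here is illustrative, and the key is read from an environment variable as an assumption about your setup), a single isolated call could look like:

```python
import json
import os
import urllib.request

# Request body with an inline JSON-schema response format (illustrative schema).
body = {
    "model": "gpt-4o-mini",
    "messages": [{"role": "user", "content": "Extract the order as JSON."}],
    "response_format": {
        "type": "json_schema",
        "json_schema": {
            "name": "order",
            "strict": True,
            "schema": {
                "type": "object",
                "properties": {
                    "items": {"type": "array", "items": {"type": "string"}},
                },
                "required": ["items"],
                "additionalProperties": False,
            },
        },
    },
}

req = urllib.request.Request(
    "https://api.openai.com/v1/chat/completions",
    data=json.dumps(body).encode(),
    headers={
        "Content-Type": "application/json",
        "Authorization": f"Bearer {os.environ.get('OPENAI_API_KEY', '')}",
    },
)

# Uncomment with a valid API key; the returned message content is
# guaranteed to parse as JSON matching the schema above.
# with urllib.request.urlopen(req) as resp:
#     data = json.loads(resp.read())
#     print(json.loads(data["choices"][0]["message"]["content"]))
```

This works fine for one-off calls, but as noted below it does not scale to agent workflows inside n8n.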
My request would be to implement it in the AI nodes so that it can be used much more easily, and also for things like agents, which would be very hard to do with pure API calls in n8n.
This by itself is not really an issue, just a nice-to-have. At least for me, the nodes provided by n8n itself are preferable for the following reasons:
They support sending a list of messages; n8n-nodes-openai-structured-outputs only supports one message, where you have to construct a JSON with multiple messages yourself (of course this is not a blocker, but at least in my opinion the usability of the node provided by n8n is better)
I personally very much like that n8n can be extended by third-party nodes; I think this is a great feature. However, for the core functionality of my processes I would like to use as many nodes provided by n8n itself as possible, and I would rather use third-party nodes only for more exotic things.