This AI agent sometimes outputs the wrong format (even with output parsers). I’d like to re-run the agent if the output is false. However, nothing happens in that case.
Anyone have an idea how to approach this?
It looks like your topic is missing some important information. Could you provide the following, if applicable?
You could have a new AI agent process the false data with a new prompt.
That just moves a potential issue around … I need to re-run that same AI agent, over and over, until the IF node returns true …
But correct me if I’m wrong …
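Outside of n8n, the pattern being asked for here is a plain validate-and-retry loop. A minimal sketch, assuming hypothetical `callAgent()` and `isValid()` functions standing in for the AI Agent node and the IF check:

```javascript
// Hypothetical sketch: re-run the same agent until its output validates,
// with a cap on attempts so it can't loop forever.
async function runUntilValid(callAgent, isValid, maxAttempts = 5) {
  for (let attempt = 1; attempt <= maxAttempts; attempt++) {
    const output = await callAgent(); // stand-in for the AI Agent node
    if (isValid(output)) return output; // stand-in for the IF node check
  }
  throw new Error(`Agent output still invalid after ${maxAttempts} attempts`);
}
```

The attempt cap is the important part; without it a consistently misbehaving model would loop indefinitely.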
In your screenshot, what is the output of the “Check data consistency” IF node? I’m guessing it’s the response of the AI agent:
{ "json": { "output": "How can I assist you today?" } }
When the AI agent receives its own output, it’s probably not going to respond as expected. You can try to capture the original input again by using the “Edit Fields” node and send that back in instead.
Personally, I wouldn’t recommend this approach - you could loop forever!
If your output is inconsistent and using output parsers gives no improvement, then it’s likely down to the LLM you’re using.
I’d advise either trying a different model, or breaking the task into smaller, simpler steps.
The output is either ‘empty’ in one of the generated JSON fields, or it’s not. When it’s empty, it’s an error and the AI node should try again.
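That empty-field check could be done in a Code node before the IF node. A small sketch, where the function name and field names are made up for illustration:

```javascript
// Hypothetical validation helper for an n8n Code node: returns true when any
// expected field on the item's JSON is missing, null, or an empty string.
function hasEmptyField(json, requiredFields) {
  return requiredFields.some(
    (field) =>
      json[field] === undefined ||
      json[field] === null ||
      String(json[field]).trim() === ""
  );
}
```

The IF node can then branch on the boolean this sets, instead of inspecting each field in separate conditions.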
What you’re mentioning is correct, though. I’ve tried different models and got different results, with and without output parsing.
The next thing I tried was (indeed, like you said) chopping things into smaller pieces, making the LLM’s work simpler.
I’ve come to see that this seems to be the best approach, and it minimizes the error rate.
However, I’m still trying to implement a loop so that when the error check is true, these parts try again. I might try it with an incrementing counter, up to a certain maximum.
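The counter idea can be sketched by storing an attempt count on the item itself and routing back to the agent only while it’s under the limit. The property name `attempts` and the limit are assumptions for illustration:

```javascript
// Sketch of a bounded retry: keep an attempt counter on the item's JSON and
// stop looping once it exceeds a maximum. "attempts" is a made-up field name.
const MAX_ATTEMPTS = 3;

function shouldRetry(item) {
  const attempts = (item.json.attempts ?? 0) + 1;
  item.json.attempts = attempts; // persist the incremented counter on the item
  return attempts <= MAX_ATTEMPTS; // true → route back to the agent branch
}
```

After `MAX_ATTEMPTS` failed runs this returns false, so the workflow can fall through to an error-handling branch instead of looping again.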
Somehow, though, the AI Agent nodes seem buggy … I’ve tried so many models, setups and even servers (CPUs etc.), and still can’t get anything to run with relative stability … Maybe it’s me, it’s possible … Is anyone running AI nodes without major issues (and I mean in a pretty big workflow … not just 2 nodes … I’m talking a 25-node workflow or so) … with multiple agents?
This topic was automatically closed 90 days after the last reply. New replies are no longer allowed.