The idea is:
Enhance the AI Agent node. When a “Received tool input did not match expected schema” error occurs, instead of throwing an error and stopping the workflow, feed the error details (expected vs. actual) back to the AI model and let it retry. Only terminate when the AI falls into a loop of incorrect tool usage. This could be an optional configuration.
My use case:
The current AI Agent node terminates on tool input schema errors, which is problematic with long contexts, smaller models, or models prone to hallucinations. This leads to frequent workflow failures on complex tasks.
I think it would be beneficial to add this because:
It increases AI Agent robustness, improves success rates for complex tasks, optimizes AI model usage by allowing self-correction, and offers flexible configuration for diverse needs.
Any resources to support this?
The concept aligns with the ReAct agent pattern (acting, observing feedback, and adjusting) and with the error-handling/retry mechanisms in AI frameworks such as LangChain.
Are you willing to work on this?
I am willing to contribute to this feature if it’s deemed valuable for integration into n8n.