Custom API Call Support in Model Sub Node

Idea Overview

The introduction of AI nodes has been a significant advancement, yet critical features are still missing that limit their practical application. The addition of a simplified Chain (Cluster Node) is proposed, which would incorporate a basic model, memory functionality, and potentially an output parser. The current Agent Cluster, with its tool dependencies, introduces complexities such as pre-defined prompts (ReAct) and tool requirements that could be streamlined.

Furthermore, the proposal includes the development of a new Model Sub Node: an API calling Model. This sub node would leverage the OpenAI format, which is widely adopted by most LLM APIs, and extend compatibility to platforms like Together.ai (which offers almost every open-source model). The API call behind the scenes would be the same as the OpenAI Model Sub Node, but with additional parameters such as the base URL, the API key (or a predefined credential), the model, and the hyperparameters (temperature, top_p, etc.).
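To make the idea concrete, here is a minimal sketch of what the proposed sub node's request construction could look like. This is not existing n8n code; the function name, defaults, and the Together.ai endpoint/model used in the example are illustrative assumptions.

```python
def build_chat_request(base_url, api_key, model, messages,
                       temperature=0.7, top_p=1.0):
    """Assemble an OpenAI-format chat-completion request.

    Because most providers expose the same request shape, only the
    base URL, credential, and model name differ between them.
    """
    return {
        "url": base_url.rstrip("/") + "/chat/completions",
        "headers": {
            "Authorization": f"Bearer {api_key}",
            "Content-Type": "application/json",
        },
        "json": {
            "model": model,
            "messages": messages,
            # Hyperparameters exposed as node parameters:
            "temperature": temperature,
            "top_p": top_p,
        },
    }

# Example: pointing the same node at Together.ai (values are placeholders).
req = build_chat_request(
    base_url="https://api.together.xyz/v1",
    api_key="YOUR_API_KEY",
    model="mistralai/Mixtral-8x7B-Instruct-v0.1",
    messages=[{"role": "user", "content": "Hello"}],
)
```

The payload could then be sent with any HTTP client; the point is that swapping providers changes only the `base_url`, `api_key`, and `model` inputs, not the request shape.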

Use Case Enhancement

Implementing this feature would revolutionize how models are integrated and utilized within workflows in Cluster Nodes. It offers the possibility to dynamically select models (and even providers) by adjusting parameters such as URL, API key, and model selection. This flexibility could foster innovative applications, especially in multi-agent setups where AI entities interact with each other. The ability to switch models or providers on-the-fly would not only enhance workflow versatility but also open up new avenues for creative and complex AI-driven interactions.
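The on-the-fly switching described above could be sketched as a simple provider map consulted at runtime. The provider names, base URLs, and model IDs below are illustrative assumptions, not a fixed list.

```python
# Hypothetical provider registry; any OpenAI-compatible endpoint fits.
PROVIDERS = {
    "openai":   {"base_url": "https://api.openai.com/v1",
                 "model": "gpt-4o-mini"},
    "together": {"base_url": "https://api.together.xyz/v1",
                 "model": "meta-llama/Llama-3-8b-chat-hf"},
    "groq":     {"base_url": "https://api.groq.com/openai/v1",
                 "model": "llama3-8b-8192"},
}

def chat_payload(provider: str, prompt: str, temperature: float = 0.7) -> dict:
    """Build an OpenAI-format request for whichever provider is selected."""
    cfg = PROVIDERS[provider]
    return {
        "url": cfg["base_url"].rstrip("/") + "/chat/completions",
        "model": cfg["model"],
        "messages": [{"role": "user", "content": prompt}],
        "temperature": temperature,
    }

# Two "agents" in a multi-agent setup, each on a different provider,
# producing structurally identical requests:
planner = chat_payload("openai", "Propose a plan.")
critic = chat_payload("groq", "Critique the plan.")
```

Switching a model or provider mid-workflow then amounts to passing a different key (or expression) into the node, which is the flexibility the proposal is after.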

This proposal aims to bridge the current functionality gaps and unlock new potential for AI node utilization, making them more adaptable and powerful for a wide range of applications.

This is exactly what I am looking for.

I am currently using Groq, but unfortunately there is no way to use a custom model endpoint.