LangChain and OpenAI nodes should be able to run requests in parallel like the HTTP Request node

Hi

First, I was wondering whether the HTTP Request node could run requests in parallel, and the answer is YES (see my proof here: Clarify if HTTP Request runs in parallel or sequencially? - #3 by Valerian_Lebert)

Which brings me to my feature request: when building LangChain and AI workflows, I often have to run multiple AI queries (for example, to individually evaluate the results of a retrieval operation).

The current OpenAI and LangChain nodes do not seem to support that. As a workaround I tried running the same requests through an HTTP Request node, and it is much faster.
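To illustrate why the HTTP workaround is so much faster, here is a minimal TypeScript sketch (not n8n code) of the same idea: firing the queries in parallel batches instead of strictly one after another. It assumes Node 18+ for the global fetch and an OPENAI_API_KEY environment variable; the model name and batch size are just placeholders.

```ts
const API_URL = "https://api.openai.com/v1/chat/completions";

// One evaluation query against the OpenAI chat completions endpoint.
async function evaluateOne(prompt: string): Promise<string> {
  const res = await fetch(API_URL, {
    method: "POST",
    headers: {
      "Content-Type": "application/json",
      Authorization: `Bearer ${process.env.OPENAI_API_KEY}`,
    },
    body: JSON.stringify({
      model: "gpt-4o-mini", // placeholder, any chat model works
      messages: [{ role: "user", content: prompt }],
    }),
  });
  const data = await res.json();
  return data.choices[0].message.content;
}

// Run the queries in parallel batches instead of sequentially:
// with batchSize = 5, ten queries take ~2 round trips instead of 10.
async function evaluateAll(prompts: string[], batchSize = 5): Promise<string[]> {
  const results: string[] = [];
  for (let i = 0; i < prompts.length; i += batchSize) {
    const batch = prompts.slice(i, i + batchSize);
    results.push(...(await Promise.all(batch.map(evaluateOne))));
  }
  return results;
}
```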

I think it would be convenient to have the batching options on the AI nodes:

[Screenshot: the batching options in the HTTP Request node settings]

Good point! For performance I often recreate other node operations with the HTTP node just for the batching!

What I think would greatly help a lot of nodes is allowing the developers of declarative nodes to make batching configurable, and also letting them expose that setting to the user.
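To make that concrete, here is a hypothetical TypeScript helper (not n8n's actual internals) showing how little a reusable, user-configurable batching setting would need. The two knobs mirror the Items per Batch and Batch Interval options the HTTP Request node already exposes.

```ts
interface BatchingOptions {
  itemsPerBatch: number;   // how many requests run concurrently
  batchIntervalMs: number; // pause between batches, rate-limit friendly
}

// Generic batched runner a node developer could wrap around any
// per-item request function.
async function runBatched<TIn, TOut>(
  items: TIn[],
  request: (item: TIn) => Promise<TOut>,
  { itemsPerBatch, batchIntervalMs }: BatchingOptions,
): Promise<TOut[]> {
  const results: TOut[] = [];
  for (let i = 0; i < items.length; i += itemsPerBatch) {
    const batch = items.slice(i, i + itemsPerBatch);
    results.push(...(await Promise.all(batch.map(request))));
    // Respect the configured pause between batches (skip it after the last one).
    if (i + itemsPerBatch < items.length && batchIntervalMs > 0) {
      await new Promise((resolve) => setTimeout(resolve, batchIntervalMs));
    }
  }
  return results;
}
```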

Don't forget to drop a vote on the request :slight_smile:

Can we please, please have this?
We end up using the HTTP node for parallel requests, which defeats the purpose of using n8n when we have to configure the endpoints ourselves just for performance.

I'm out of votes :pensive:

FYI, I started on the work here but it is still WIP:


Niice!! Looking forward to it!

Yes! Excited about this

A new version of n8n has been released which includes GitHub PR 8885.

Good evening, is this adjustment now available in the community edition?
I just updated to 1.46.0, but I can't find this batching option anywhere in the OpenAI node.

Hey @juniorgregio,

The option is under the node settings for most of the LangChain nodes, but the newer OpenAI node doesn't have this yet.

@Jon can I call the GPT Assistant via n8n's LangChain node?

@Jon, how's it going? Can you help me with this version of the node? This batching feature interests me a lot, but I can't get it to work in the OpenAI node.

Hi @jan

I am not sure I understand: can we batch OpenAI or LangChain calls with this update? If yes, I didn't find how.

@juniorgregio & @Valerian_Lebert,

Most LangChain nodes support this option now; the OpenAI node, however, does not have it yet.

@Jon

On "Basic LLM", for example, I don't see the batching option. Or is it enabled by default? Which LangChain nodes do have a batching option, for example?

Hey @Valerian_Lebert,

It should only be available on nodes that make an HTTP request. For the nodes that have it, the option is under Settings.
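In the meantime, since the OpenAI node lacks the option, a Code node (mode: Run Once for All Items) can fan the items out in parallel itself. A rough sketch, assuming the documented Code node helpers ($input.all(), $env, this.helpers.httpRequest) are available in your instance and that each input item carries a json.prompt field (adjust the names to your workflow):

```ts
const items = $input.all();

// Fire one chat completion per incoming item, all at once.
const responses = await Promise.all(
  items.map((item) =>
    this.helpers.httpRequest({
      method: "POST",
      url: "https://api.openai.com/v1/chat/completions",
      headers: { Authorization: `Bearer ${$env.OPENAI_API_KEY}` },
      body: {
        model: "gpt-4o-mini", // placeholder model
        messages: [{ role: "user", content: item.json.prompt }],
      },
      json: true,
    }),
  ),
);

// Map each raw API response back onto an n8n item.
return responses.map((response) => ({ json: response }));
```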