LangChain and OpenAI nodes should be able to run requests in parallel like the HTTP Request node

Hi

First, I was wondering if the HTTP Request node could run requests in parallel, and the answer is YES (see my proof here: Clarify if HTTP Request runs in parallel or sequencially? - #3 by Valerian_Lebert)

Which brings me to my feature request: when building LangChain and AI workflows, I often have to run multiple AI queries (for example, to evaluate the results of a retrieval operation individually).

The current OpenAI and LangChain nodes do not seem to support this. As a workaround, I tried running the same requests through an HTTP Request node, and it is much faster.
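To illustrate the difference, here is a minimal TypeScript sketch of what the HTTP Request workaround effectively does: fire all the evaluation calls concurrently instead of one after another. The endpoint and payload follow the standard OpenAI chat completions API; the prompts, model choice, and the OPENAI_API_KEY environment variable are assumptions made up for this example:

```typescript
// Minimal sketch of why the HTTP Request workaround is faster:
// all evaluation prompts are fired concurrently with Promise.all,
// so total latency is roughly the slowest single call, not the sum.
// Assumes Node 18+ (built-in fetch) and OPENAI_API_KEY in the env.
const prompts = [
  'Rate retrieved chunk 1 for relevance to the query...',
  'Rate retrieved chunk 2 for relevance to the query...',
  'Rate retrieved chunk 3 for relevance to the query...',
];

async function evaluate(prompt: string): Promise<string> {
  const res = await fetch('https://api.openai.com/v1/chat/completions', {
    method: 'POST',
    headers: {
      'Content-Type': 'application/json',
      Authorization: `Bearer ${process.env.OPENAI_API_KEY}`,
    },
    body: JSON.stringify({
      model: 'gpt-3.5-turbo',
      messages: [{ role: 'user', content: prompt }],
    }),
  });
  const data = await res.json();
  return data.choices[0].message.content;
}

async function main() {
  // All requests are in flight at once, unlike the sequential
  // item-by-item execution of the current AI nodes.
  const results = await Promise.all(prompts.map(evaluate));
  console.log(results);
}

main();
```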

I think it would be convenient to have the batching options on the AI nodes:

[Screenshot of the proposed batching options]

Good point! For performance, I often recreate other node operations with the HTTP Request node just for the batching!

What I think would greatly help a lot of nodes is allowing the developers of declarative nodes to make batching configurable, and also letting them expose that setting to the user, as sketched below.
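To make that concrete, here is a hypothetical sketch of the kind of property a declarative-node author could declare and expose. INodeProperties is the real property type from the n8n-workflow package, but the field names and defaults below are illustrative assumptions modeled on the HTTP Request node's existing Batching option, not an existing API for AI nodes:

```typescript
// Hypothetical: a batching option a declarative-node author could
// add and expose to the user, modeled on the HTTP Request node's UI.
// Only INodeProperties itself is a real n8n-workflow type; the
// names and defaults below are illustrative, not an existing API.
import type { INodeProperties } from 'n8n-workflow';

export const batchingOption: INodeProperties = {
  displayName: 'Batching',
  name: 'batching',
  type: 'collection',
  placeholder: 'Add Batching',
  default: {},
  options: [
    {
      displayName: 'Items per Batch',
      name: 'batchSize',
      type: 'number',
      default: 5, // how many requests to run concurrently
      description: 'Number of requests to run in parallel',
    },
    {
      displayName: 'Batch Interval (ms)',
      name: 'batchInterval',
      type: 'number',
      default: 0, // pause between batches, useful for rate limits
      description: 'Time to wait between batches',
    },
  ],
};
```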

Don’t forget to drop a vote on the request :slight_smile:

Can we please, please have this?
We end up using the HTTP Request node for parallel requests, which defeats the purpose of using n8n when we have to configure the endpoints ourselves just for performance.

I’m out of votes :pensive:

FYI, I started on the work here, but it is still WIP:


Niice!! Looking forward to it!

Yes! Excited about this