Has anyone successfully implemented an end-to-end logic for OpenAI Batch API calls in n8n? Specifically, creating the batch in a workflow, calling the Batch API when it’s ready, and saving the results in order. Looking for insights!
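For context, the first step of such a workflow is building the batch input file. The OpenAI Batch API expects a JSONL file (one request per line) uploaded via `/v1/files` with `purpose="batch"` before creating the batch with `POST /v1/batches`. Here is a minimal sketch of building that JSONL body, e.g. inside an n8n Code node or before an HTTP Request node; the helper name, model, and `task-N` custom_id convention are illustrative, not from any official node:

```python
import json

def build_batch_jsonl(prompts, model="gpt-4o-mini"):
    """Build the JSONL body for an OpenAI Batch API input file.

    Each line is a self-contained request. The custom_id is what lets you
    match and reorder results later, since batch output order is not
    guaranteed to follow input order.
    """
    lines = []
    for i, prompt in enumerate(prompts):
        lines.append(json.dumps({
            "custom_id": f"task-{i}",   # illustrative naming scheme
            "method": "POST",
            "url": "/v1/chat/completions",
            "body": {
                "model": model,
                "messages": [{"role": "user", "content": prompt}],
            },
        }))
    return "\n".join(lines)
```

The resulting string would then be uploaded as the batch input file; the actual upload and batch creation are plain HTTP calls, so an HTTP Request node should cover them.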
n8n version: 1.69.2
Database (default: SQLite): SQLite
n8n EXECUTIONS_PROCESS setting (default: own, main): own, main
Running n8n via (Docker, npm, n8n cloud, desktop app): self-hosted on Google Cloud
I believe we don’t have the Batch API implemented by default yet, but you could create a workflow that creates the batch and calls OpenAI directly, as you say.
I’ll let someone else from the community share if they have implemented this already.
It would be great to have a default n8n node for that, which regularly polls the Batch API to check whether the batch has already completed or expired.
@Paul_Werner_ALMIG The best approach would be not to have separate logic, but rather to integrate it into all the other nodes, so that instead of an instant response I could always select a Batch call.
As of right now, a Batch setup would consist of at least two flows: one that starts the batch, and one that checks it for completion and then deals with the results. It would be great if that could be integrated into one.
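The two-flow setup described above can be sketched as pure helper functions: the checking flow (e.g. on a Schedule Trigger) decides what to do from the batch status, and the results flow restores input order via `custom_id`. The statuses below are the ones the OpenAI Batch API reports; the `task-N` custom_id convention and function names are illustrative assumptions:

```python
import json

def batch_next_step(status):
    """Decide what the checking workflow should do for a given batch status."""
    if status in ("validating", "in_progress", "finalizing"):
        return "wait"           # re-check later (Schedule Trigger / Wait node)
    if status == "completed":
        return "fetch_results"  # download output_file_id and process it
    return "handle_failure"     # failed / expired / cancelling / cancelled

def order_results(jsonl_text):
    """Re-order batch output lines by custom_id, since the Batch API does
    not guarantee output order. Assumes the illustrative 'task-N' scheme."""
    rows = [json.loads(line) for line in jsonl_text.splitlines() if line]
    return sorted(rows, key=lambda r: int(r["custom_id"].rsplit("-", 1)[1]))
```

Splitting the decision logic out like this keeps the "start" and "check" workflows thin: the check flow is just an HTTP Request to `GET /v1/batches/{id}` followed by a switch on `batch_next_step`.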
@n8n there are a lot of bulk tasks for which batch generation could be highly beneficial.