End-to-End Workflow for OpenAI Batch API Calls in n8n?

Has anyone successfully implemented end-to-end logic for OpenAI Batch API calls in n8n? Specifically: creating the batch in a workflow, calling the Batch API when it’s ready, and saving the results in order. Looking for insights!

  • n8n version: 1.69.2
  • Database (default: SQLite): SQLite
  • n8n EXECUTIONS_PROCESS setting (default: own, main): own, main
  • Running n8n via (Docker, npm, n8n cloud, desktop app): self-hosted in Google Cloud
  • Operating system: Windows 10

Hi @Kiremit,

I believe the Batch API isn’t implemented in the default OpenAI node yet, but as you say, you could build a workflow that creates the batch and calls the OpenAI API directly.
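As a rough sketch of that direct approach: the Batch API expects you to first upload a JSONL file (`POST /v1/files` with `purpose="batch"`), then create the batch referencing that file. In n8n this would be two HTTP Request nodes; the helper below just builds the JSON body for the batch-creation call (field names follow the public Batch API; the file ID is a placeholder).

```python
def build_batch_request(input_file_id: str,
                        endpoint: str = "/v1/chat/completions",
                        completion_window: str = "24h") -> dict:
    """Build the JSON body for POST https://api.openai.com/v1/batches.

    In n8n, this dict would be the JSON body of an HTTP Request node.
    `input_file_id` is the ID returned when uploading the JSONL file
    via POST /v1/files with purpose="batch".
    """
    return {
        "input_file_id": input_file_id,
        "endpoint": endpoint,
        "completion_window": completion_window,  # currently "24h" only
    }
```

The response to this call includes the batch `id`, which a second workflow can later poll.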

I’ll let someone else from the community share if they have implemented this already :slight_smile:

It could be done manually, yes, but the problem is more about the batch lifecycle itself and how to automate it.

Has anyone cracked this yet? @Kiremit

No. I guess part of it would be code, but the Batch API requires JSONL format as input, I believe.
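Right, each line of the input file is one JSON request object. A minimal sketch of turning a list of prompts into that JSONL format (the model name and `custom_id` values here are just placeholders; in n8n this could live in a Code node):

```python
import json

def to_batch_jsonl(prompts, model="gpt-4o-mini"):
    """Convert (custom_id, prompt) pairs into JSONL lines in the shape
    the OpenAI Batch API expects for /v1/chat/completions requests.
    The custom_id is echoed back in the output file, which is how you
    restore the original order when saving results."""
    lines = []
    for custom_id, prompt in prompts:
        lines.append(json.dumps({
            "custom_id": custom_id,
            "method": "POST",
            "url": "/v1/chat/completions",
            "body": {
                "model": model,
                "messages": [{"role": "user", "content": prompt}],
            },
        }))
    return "\n".join(lines)
```

Because batch results are not guaranteed to come back in input order, mapping `custom_id` back to your items is the key to "saving the results in order" from the original question.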

Would be great to have a default n8n node for that, which handles regular checks against the Batch API to see whether the batch has already completed or expired.

@Paul_Werner_ALMIG The best thing would be not to have separate logic but rather to integrate it into all the other nodes, so that instead of an instant response I could always select a Batch call.
As of right now, Batch would consist of at least two flows: one that starts the batch, and one that checks it on completion and then further processes the results. Would be great if that could be integrated into one.
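For the second (scheduled) flow, the core logic is just mapping the batch status from `GET /v1/batches/{id}` to a next step. A hedged sketch of that branching, using the status values the Batch API documents (the action names are made up; in n8n they would be branches of a Switch node after an HTTP Request node on a Schedule trigger):

```python
def next_action(status: str) -> str:
    """Decide what the checking workflow should do, given the batch
    status returned by GET /v1/batches/{id}. Documented statuses:
    validating, in_progress, finalizing, completed, failed, expired,
    cancelling, cancelled."""
    if status == "completed":
        # download output_file_id via GET /v1/files/{id}/content
        return "fetch_results"
    if status in ("failed", "expired", "cancelled"):
        return "handle_error"
    # validating / in_progress / finalizing / cancelling:
    # do nothing and re-check on the next schedule trigger
    return "check_again_later"
```

With a Schedule trigger re-running this check, the two flows stay decoupled but the logic stays in one place.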

@n8n There are a lot of bulk tasks for which batch generation could be highly beneficial.

@ddlawson no