N8n job performance going on 14 hours

I created a workflow with OpenAI and I'm training it on just 5,000 records that were converted to binary. The issue: it's been running for 14 hours. How can I speed up the training? I'm running the SaaS version logged into my account, but it's maxing out my memory at 95% of 32 GB of RAM.

Question: can I increase the processing power before running a workflow by assigning more GPUs, CPUs, and memory in the SaaS model? Or can I increase the processing speed while it's currently running?

1. Pre-split the Data

Break the 5,000 records into smaller batches (e.g., 500 records each) and train in multiple chunks. This avoids memory bottlenecks; see the splitting sketch below.
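As a rough illustration (the file name records.jsonl, the JSONL format, and the 500-record batch size are assumptions about your export, not details from your workflow), pre-splitting could look like this:

```python
# Minimal sketch: split a JSONL export of the records into 500-record chunks.
# "records.jsonl" and BATCH_SIZE are assumed names/values -- adjust to your data.
import json

BATCH_SIZE = 500

with open("records.jsonl", "r", encoding="utf-8") as f:
    records = [json.loads(line) for line in f]

for i in range(0, len(records), BATCH_SIZE):
    batch = records[i:i + BATCH_SIZE]
    out_path = f"records_batch_{i // BATCH_SIZE:02d}.jsonl"
    with open(out_path, "w", encoding="utf-8") as out:
        for record in batch:
            out.write(json.dumps(record) + "\n")
    print(f"wrote {len(batch)} records to {out_path}")
```

You can then process each chunk as a separate workflow execution (n8n's Loop Over Items / Split in Batches node does the same thing inside a workflow), so memory never has to hold all 5,000 records at once.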

2. Use a Dedicated Cloud Runtime

If your training script is custom, run it in an environment where you can control compute:

  • Google Colab Pro+
  • AWS EC2 (with GPUs)
  • Azure ML
  • Paperspace Gradient
  • RunPod.io or Lambda Labs (for OpenAI-compatible fine-tuning with your own infrastructure)

These let you provision powerful hardware (e.g., 4–8 CPUs, 1–4 GPUs, 64GB+ RAM).

3. Optimize the Training Script

  • Use streaming loaders instead of loading all 5,000 records into memory.
  • If using Python: use the datasets library (by Hugging Face) or torch.utils.data.DataLoader with num_workers > 0 (see the sketch after this list).
  • If binary data is required, check whether compression (e.g., gzip) reduces memory overhead; note that base64-encoding binary data increases its size by roughly 33%, so avoid it where you can.
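A minimal sketch of the streaming idea, assuming the records sit in a gzipped JSONL file (the path, batch size, and record layout are placeholders, not details from the original workflow):

```python
# Minimal sketch: stream records from a (gzipped) JSONL file instead of
# loading all 5,000 into memory at once.
import gzip
import json

from torch.utils.data import DataLoader, IterableDataset, get_worker_info


class StreamingJsonlDataset(IterableDataset):
    """Lazily yields one record at a time from a JSONL or gzipped JSONL file."""

    def __init__(self, path):
        self.path = path

    def __iter__(self):
        worker = get_worker_info()
        num_workers = worker.num_workers if worker else 1
        worker_id = worker.id if worker else 0
        opener = gzip.open if self.path.endswith(".gz") else open
        with opener(self.path, "rt", encoding="utf-8") as f:
            for line_no, line in enumerate(f):
                # Shard lines across DataLoader workers so records are not
                # duplicated when num_workers > 0.
                if line_no % num_workers == worker_id:
                    yield json.loads(line)


def keep_raw(batch):
    # Keep raw dicts per batch; replace with your own collate / tokenization.
    return batch


if __name__ == "__main__":
    loader = DataLoader(
        StreamingJsonlDataset("records.jsonl.gz"),  # hypothetical path
        batch_size=32,
        num_workers=2,        # decode lines in background worker processes
        collate_fn=keep_raw,
    )
    for batch in loader:
        pass  # feed each small batch to the training step, not the whole dataset
```

Because the dataset is an IterableDataset, each worker re-opens the file and skips lines belonging to other workers, so num_workers > 0 speeds up decoding without duplicating records or holding the full dataset in memory.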

I hope this message helps you.