Describe the problem/error/question: The OpenAI node stays in the "executing" state even though it has finished passing all the prompts to the LLM
What is the error message (if any)? No error messages
Please share your workflow
It’s a simple workflow that just parses a spreadsheet and passes the information to OpenAI for analysis. On smaller spreadsheets this does not happen, but in this case I passed over 900 results using about 1.2M tokens.
Share the output returned by the last node
The node actually returns its full output, but you have to stop the workflow to get the results. Just wondering if anyone has seen this before and if there are workarounds.
Running n8n via (Docker, npm, n8n cloud, desktop app):
Operating system:
I did not set this up, but I believe it is running in a container and we are hosting it locally in a lab to test some things out. It is also the free version; we are just doing some proof-of-concept work right now, so we don’t have the paid version yet.
I’ve seen this issue before during testing, but never during a production execution. Are you experiencing this while testing?
When this happens, I usually fix it with the following steps:
Refresh the browser cache using Ctrl + Shift + R.
If that doesn’t work, restart the n8n instance.
These steps consistently resolve the issue for me.
Another possible cause is that too much data is being loaded during testing. This data gets stored in your browser’s memory/cache, which can sometimes cause the tab to freeze or hang indefinitely.
To avoid that, here are two possible solutions:
Split your workflow into sub-workflows: Let the main workflow call smaller sub-workflows in production, so heavy data processing happens server-side, not in the browser (see the sketch after this list).
Test with smaller datasets: Use limited data during testing, and only process full datasets during production runs.
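As a rough illustration of the batching idea, here is a minimal Code node sketch. The batch size of 50 and the `rows` field name are just assumptions for this example; adjust them to your data and token budget. Placed before the OpenAI node (or before an Execute Workflow node that calls a sub-workflow), it groups the incoming spreadsheet rows so each downstream call only handles a small slice:

```javascript
// Hypothetical Code node placed before the OpenAI node.
// Groups the incoming spreadsheet rows into batches so each
// output item carries a manageable slice of the data.
const batchSize = 50; // assumption: tune to your token budget
const items = $input.all(); // all items passed into this Code node

const batches = [];
for (let i = 0; i < items.length; i += batchSize) {
  batches.push({
    json: {
      // "rows" is a made-up field name for this sketch
      rows: items.slice(i, i + batchSize).map((item) => item.json),
    },
  });
}

return batches; // one output item per batch
```

Each output item can then be handed to a sub-workflow or a Loop Over Items node, so the heavy processing stays on the server during production runs instead of piling up in the editor.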
Yes, I see this while testing the workflow. I am pretty new to n8n and not sure I know the difference between testing and production, as I don’t see a way to turn my workflow into a “production” workflow; I just have the option to test it. That being said, if I understand you correctly, a workflow in production is processed differently than one that is being tested? Does that mean a production workflow can potentially handle larger datasets? I am working with some rather large data, and it has been helpful to break it down into batches during the run.
You will see an execution list on the left. If the execution has a test icon, you ran the execution manually by clicking the buttons in the editor. If it doesn’t have the icon, it means the execution happened by itself, automatically, without human intervention in the editor.
Test executions (the ones you execute manually) store data in your browser cache while you are testing. Sometimes that data is too much to handle and your browser has a hard time trying to keep up.
Executions that happen automatically (when triggered on a schedule, called by a webhook request, or called by another workflow) happen entirely on the server side, so data is not stored in your browser and that problem doesn’t occur.
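If you want to force a server-side (production) run, one common pattern is to activate the workflow with a Webhook trigger and call it from outside the editor. A minimal sketch, assuming a Webhook node at the path "spreadsheet-analysis" on a default local install; the URL, path, and payload below are placeholders:

```javascript
// Hypothetical call to a production webhook trigger.
// Assumes the workflow is activated and starts with a Webhook node
// whose path is "spreadsheet-analysis"; n8n listens on port 5678 by default.
const response = await fetch("http://localhost:5678/webhook/spreadsheet-analysis", {
  method: "POST",
  headers: { "Content-Type": "application/json" },
  body: JSON.stringify({ fileUrl: "https://example.com/data.xlsx" }), // placeholder payload
});

console.log(await response.json());
```

Because the execution is started by the webhook rather than by the editor, the data stays on the server and the browser tab never has to hold the results.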