I'm currently running a workflow that extracts data from the web using HTTPS requests, then processes that data through 3 LLM nodes to produce the desired output. However, the LLMs crash and keep loading endlessly whenever they are given chunks of 500 items or more. I have used the Split In Batches node with batch sizes of 10, 50, and so on. It still choked, didn't complete the workflow, and didn't even produce an error message, so I'm assuming this is a memory issue. Is there any way of making this work without dividing the workflow into tiny chunks?
Is your use case passing 500 items at once for one agent output, or looking for multiple outputs?
A few thoughts.
You might be going over the token limit if you are passing it all at once.
You might be passing requests to the LLM too quickly. If that's the case, you might want to add a loop and a Wait node so that the workflow pauses for some time before passing more text.
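The loop + Wait pattern amounts to chunking the items yourself and pausing between chunks. A rough sketch of that logic in plain JavaScript (the chunk size, delay, and `callLlm` function are illustrative placeholders, not n8n APIs):

```javascript
// Split an array of items into fixed-size chunks.
function chunk(items, size) {
  const out = [];
  for (let i = 0; i < items.length; i += size) {
    out.push(items.slice(i, i + size));
  }
  return out;
}

const sleep = (ms) => new Promise((resolve) => setTimeout(resolve, ms));

// Process chunks sequentially, pausing between each so the LLM
// endpoint isn't hammered. callLlm is a hypothetical stand-in for
// whatever your LLM node does with a single item.
async function processInBatches(items, callLlm, size = 10, delayMs = 1000) {
  const results = [];
  for (const batch of chunk(items, size)) {
    for (const item of batch) {
      results.push(await callLlm(item));
    }
    await sleep(delayMs); // breathe between batches
  }
  return results;
}
```

In n8n itself the same shape is what Split In Batches followed by a Wait node gives you, without writing any code.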
As far as I understand, n8n passes the items through one at a time and they come out of the output parser one at a time. That said, yes, there are around 500 items that then get pushed on to the following node. But no, it's not a token problem, because the LLM only receives one item at a time, and the Loop Over Items (Split In Batches) node combined with a pause also did not fix the issue. It doesn't even record an error: it either glitches forever on a single node or reports "Problem with workflow" with no explanation given.
Click on Executions and find the one that errored out. Here's an image. In the agent step, you can see the logs. Go through the log and see if an error comes up.