Maybe someone could help me find a solution to this.
We are passing data to the Basic LLM Chain, which for now is just text. Everything works correctly. Now we have added extra data: a screenshot of the website (a binary image, for example: File Name: take, File Extension: jpeg, MIME Type: image/jpeg, File Size: 1.04 MB).
We want to pass this image to the AI scan as well, but the data is too big. I tried the Summarization Chain but could not get it working (errors like "Invalid prompt schema: Single '}' in the template" or "n8n may have run out of memory while running this execution. More context and tips on how to avoid this in the docs").
I tried splitting the data into chunks manually and passing them one by one to the LLM Chain with "Loop over Items", but I'm not sure that is the correct solution.
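For anyone trying the manual-chunking route: the idea is to split the text into fixed-size windows with a small overlap so no sentence is cut off between chunks, then feed each chunk to the LLM Chain in the loop. A minimal sketch (the size and overlap values are just illustrative assumptions, not n8n defaults):

```python
def chunk_text(text: str, max_chars: int = 4000, overlap: int = 200) -> list[str]:
    """Split text into windows of at most max_chars characters.

    Consecutive chunks share `overlap` characters so context spanning
    a chunk boundary is not lost. Values here are illustrative only;
    tune them to your model's context window.
    """
    chunks = []
    start = 0
    while start < len(text):
        end = min(start + max_chars, len(text))
        chunks.append(text[start:end])
        if end == len(text):
            break
        # Step back by `overlap` so the next chunk repeats the tail of this one.
        start = end - overlap
    return chunks

# Example: a 10,000-character input becomes three overlapping chunks.
pieces = chunk_text("a" * 10_000)
```

In n8n this maps to a Code node producing one item per chunk, which "Loop over Items" then sends to the LLM Chain one at a time.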
Maybe someone has done something like this and could guide me through the process.
If you're experiencing memory issues, the images you're trying to process may be too large for your current instance. Are you using cloud or a local setup? If the former, can you try running it locally and increasing the allocated memory?
Thanks @gualter! I decided to go with another approach: we scan the image with the OpenAI node first, then pass that description to the main prompt. It works like a charm!
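For reference, the two-step approach above corresponds, outside of n8n, to sending the screenshot to a vision-capable model first and feeding the resulting text into the main prompt. A sketch of the request payload the OpenAI node builds under the hood, using the Chat Completions image-input format (the model name and prompt text here are assumptions, not taken from the original workflow):

```python
import base64


def build_vision_request(image_bytes: bytes, prompt: str,
                         model: str = "gpt-4o-mini") -> dict:
    """Build a Chat Completions payload that asks a vision model to
    describe an image. The image is inlined as a base64 data URL, which
    is the format the Chat Completions API accepts for image input.
    """
    b64 = base64.b64encode(image_bytes).decode("ascii")
    return {
        "model": model,  # assumption: any vision-capable model works here
        "messages": [{
            "role": "user",
            "content": [
                {"type": "text", "text": prompt},
                {"type": "image_url",
                 "image_url": {"url": f"data:image/jpeg;base64,{b64}"}},
            ],
        }],
    }


# Step 1 produces a text description; step 2 then uses that description
# as plain-text context in the main LLM Chain prompt, so the large
# binary never reaches the text-only chain.
payload = build_vision_request(b"\xff\xd8\xff", "Describe this screenshot.")
```

The win is that only the short text description, not the ~1 MB image, flows through the rest of the workflow, which sidesteps the memory errors entirely.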