Llama is accessible, and if I put in the URL to an external resource it works, but if I pass an image as binary (following the manual), Llama refuses to see it.
text: Unfortunately, I'm a large language model, I don't have the ability to see or access images. The [img-0] notation suggests that you're trying to reference an image, but without more context or information about what the image is supposed to be, it's impossible for me to know what's on it.

If you'd like to describe the image or provide more context, I'd be happy to try and help!
It should have seen it.
Information on your n8n setup
- n8n version: 1.55.3
- Database (default: SQLite): PostgreSQL 16.4
- n8n EXECUTIONS_PROCESS setting (default: own, main): main
- Running n8n via (Docker, npm, n8n cloud, desktop app): Docker
Got the DNS issues sorted, but now I'm having trouble getting Ollama to work at all. On Monday I'll ask our AI team to walk me through it so I can work with the docs team to put together better documentation.
The issue you're seeing is due to llama3.1 not supporting image input; it's a text-only model. In the video I posted, I'm using a different model, llava-llama3, which does support image input. You could also use llava or any other vision model from Ollama; just make sure it has the Vision tag.
Using llava:7b I’m able to process images and get sensible output:
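If you want to sanity-check the model outside n8n, here's a minimal TypeScript sketch that calls Ollama's `/api/generate` endpoint directly (the localhost URL, file path, and prompt are placeholders). The key point is that binary image data has to reach a Vision-tagged model as base64 in the `images` array, not as a URL or raw bytes:

```ts
// Minimal sketch: send an image to a vision model via Ollama's REST API.
// Assumes Node 18+ (global fetch) and Ollama running on its default port.
import { readFileSync } from "node:fs";

async function describeImage(path: string): Promise<string> {
  // Ollama expects image bytes base64-encoded in the `images` array.
  const imageBase64 = readFileSync(path).toString("base64");

  const res = await fetch("http://localhost:11434/api/generate", {
    method: "POST",
    headers: { "Content-Type": "application/json" },
    body: JSON.stringify({
      model: "llava:7b",       // must be a model with the Vision tag
      prompt: "Describe this image.",
      images: [imageBase64],   // binary goes here as base64, not a URL
      stream: false,
    }),
  });

  const data = await res.json();
  return data.response;
}

describeImage("./example.png").then(console.log); // placeholder path
```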
Oleg, thanks for getting back to me; however, the issue is the same with HTML, PDF files, etc. It's not the vision part; something in your AI package doesn't see the attachment.
@ramblingnate Doesn't see what? Again, you can only pass text to text-only models. Trying to pass PDF, HTML, or any other document file won't work because those are binaries; you'd need to convert them to text first and then pass that text to the LLM. Here's an example with a PDF and llama3.1:
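As a rough sketch of that conversion step outside n8n (inside n8n, the Extract from File node does the extraction for you): here pdf-parse stands in for the extraction step, and the file path and prompt are placeholders.

```ts
// Sketch: PDF binary -> plain text -> text-only model.
// Assumes Node 18+ and the pdf-parse npm package.
import { readFileSync } from "node:fs";
import pdf from "pdf-parse";

async function summarizePdf(path: string): Promise<string> {
  // Extract the text first; the model never sees the PDF binary.
  const { text } = await pdf(readFileSync(path));

  const res = await fetch("http://localhost:11434/api/generate", {
    method: "POST",
    headers: { "Content-Type": "application/json" },
    body: JSON.stringify({
      model: "llama3.1",       // text-only model is fine once we have text
      prompt: `Summarize this document:\n\n${text}`,
      stream: false,
    }),
  });

  return (await res.json()).response;
}

summarizePdf("./example.pdf").then(console.log); // placeholder path
```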
What version of n8n are you running? My Basic LLM node looks a bit different. The model says there is no attached PDF, even though the file is attached via chat.
It finally worked, but the hosted chat doesn't produce any result in the chat, even though the execution completes inside n8n. What could be causing it?
It would really help if you could provide the workflow JSON; it's really hard to debug based on a few screenshots alone. Also, make sure you're using the latest versions of the nodes (you can check in the node settings), as some features might not be available in older versions.