New here. After hours of research, digging through questions, templates, and tests, I'm stumped by something I think should be basic.
In a self-hosted n8n installation, I'm receiving an attachment via a webhook, and I want to use a vision model in Ollama to describe the contents of that attachment.
I can't seem to pass the data to the model. I've tried turning the file into base64 and passing it in the user prompt, but the model doesn't detect the image.
Does anyone have a basic workflow example of llama3.2-vision receiving a prompt and image?
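To make it concrete, below is a rough sketch of what I'm trying from a Code node, calling the Ollama HTTP API directly instead of going through the Ollama node. The binary property name (`data`), the localhost URL, and the exact helper calls are assumptions about my own setup, and I may well be misusing the `images` field:

```typescript
// Rough sketch from an n8n Code node ("Run Once for All Items" mode).
// Assumptions: the webhook stores the attachment under the binary property "data",
// and Ollama is reachable at localhost:11434 (from inside Docker it may need
// host.docker.internal instead of localhost).

// Read the incoming attachment as a Buffer and re-encode it as plain base64
// (no "data:image/...;base64," prefix, which I understand Ollama does not want).
const buffer = await this.helpers.getBinaryDataBuffer(0, "data");
const base64Image = buffer.toString("base64");

// Call Ollama's generate endpoint directly, passing the image in the "images"
// array instead of embedding it in the prompt text.
const response = await this.helpers.httpRequest({
  method: "POST",
  url: "http://localhost:11434/api/generate",
  json: true,
  body: {
    model: "llama3.2-vision",
    prompt: "Describe the contents of this image.",
    images: [base64Image],
    stream: false,
  },
});

// Ollama returns the generated text in the "response" field.
return [{ json: { description: response.response } }];
```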
Here's the output from the last step:
Information on my n8n setup
- **n8n version:** latest (local)
- **n8n EXECUTIONS_PROCESS setting (default: own, main):**
- **Running n8n via (Docker, npm, n8n cloud, desktop app):** Docker
- **Operating system:** WSL (Windows Subsystem for Linux) with Docker containers