Format of an image passed to a vision model

Hi, I noticed that an image is passed through to a vision model (Basic LLM Chain + Ollama Chat model) as base64. This dramatically increases the number of tokens consumed.
Is there a way to pass an image as something other than base64-encoded text using n8n nodes?
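For context, Ollama's own `/api/chat` endpoint expects images as base64 strings inside the message's `images` field, so the base64 step is imposed by the Ollama API rather than by n8n alone. Below is a minimal sketch of what that request payload looks like; `llava` and the fake image bytes are just placeholders for illustration:

```python
import base64
import json

def build_vision_payload(image_bytes: bytes, prompt: str, model: str = "llava") -> dict:
    # Ollama's /api/chat takes images as base64 strings in the
    # message's "images" list; this is why the base64 text shows
    # up in the chain between n8n and the model.
    b64 = base64.b64encode(image_bytes).decode("ascii")
    return {
        "model": model,
        "messages": [
            {"role": "user", "content": prompt, "images": [b64]},
        ],
    }

# Placeholder bytes standing in for a real image file.
payload = build_vision_payload(b"\x89PNG...", "Describe this image.")
print(json.dumps(payload)[:60])
```

Note that the model does not tokenize the base64 string as ordinary text; the server decodes it and feeds the pixels to the vision encoder, so the wire-format size and the token count are separate questions.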

This topic was automatically closed 90 days after the last reply. New replies are no longer allowed.