How to use a local-AI instance and the models in n8n?

Describe the problem/error/question

I tried to connect my running LocalAI instance to my running n8n instance. Both run via Docker, and LocalAI itself is working fine with my Nextcloud.

What is the error message (if any)?

After setting up the credentials and trying different base URLs, I don't get the available models listed for selection (screenshot 1).

It works locally with Ollama via Docker on my rig without any issues, but that is Ollama, not localai.io.

To prove my LocalAI is working, I attached a screenshot of the working connection, including the list of models, from my Nextcloud settings.

I tried several different URLs but couldn't make it work.
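For anyone hitting the same issue: when both containers run under Docker Compose, `localhost` inside the n8n container refers to the n8n container itself, not to LocalAI, so the base URL has to use the Compose service name. A minimal sketch, assuming a LocalAI service named `localai` on its default port 8080 (the service name, image tags, and ports here are illustrative assumptions, not taken from my actual setup):

```yaml
services:
  n8n:
    image: docker.n8n.io/n8nio/n8n
    ports:
      - "5678:5678"

  localai:
    image: localai/localai:latest
    ports:
      - "8080:8080"

# Both services share Compose's default network, so in the n8n
# OpenAI-compatible credential the base URL would be
#   http://localai:8080/v1
# rather than http://localhost:8080/v1.
```

Since LocalAI exposes an OpenAI-compatible API, checking that `http://localai:8080/v1/models` is reachable from inside the n8n container should confirm the endpoint before testing the credential in n8n.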

Do you know where to find documentation or a helping hand? Thanks!

Please share your workflow

(Select the nodes on your canvas and use the keyboard shortcuts CMD+C/CTRL+C and CMD+V/CTRL+V to copy and paste the workflow.)

Share the output returned by the last node

See the first screenshot: the models are not listed.

Information on your n8n setup

  • n8n version: 1.54.4
  • Database (default: SQLite): PostgreSQL 14
  • n8n EXECUTIONS_PROCESS setting (default: own, main): own
  • Running n8n via (Docker, npm, n8n cloud, desktop app): Docker Compose
  • Operating system: Ubuntu 22.04.4 LTS

It looks like your topic is missing some important information. Could you provide the following, if applicable?

  • n8n version:
  • Database (default: SQLite):
  • n8n EXECUTIONS_PROCESS setting (default: own, main):
  • Running n8n via (Docker, npm, n8n cloud, desktop app):
  • Operating system:

I edited and added the missing information to my main post.

I switched over to Ollama and am using it now.


This topic was automatically closed 7 days after the last reply. New replies are no longer allowed.