Need assistance in connecting to on-prem LLM models

Describe the problem/error/question

Hey,

I am automating some of our workflows with AI assistance. We have a few models (OpenAI, Gemini, etc.) hosted on-prem for regulatory compliance.
I can connect to the OpenAI models using the Azure OpenAI node, but I would also like to test the Gemini models with the Gemini node.
Unfortunately, the credentials do not work, since the on-prem gateway uses a custom API header for authentication. Below is the curl request.


curl "https://genai-nexus.api.company.net/v1/models/gemini-2.0-flash-001:generateContent" \
    -H 'Content-Type: application/json' \
    -H "api-key: $NEXUS_API_KEY" \
    -X POST \
    -d '{
      "contents": [{
        "role": "user",
        "parts": [{"text": "Hi"}]
      }]
    }'

I know that I can use the HTTP Request node, but I would prefer the Gemini Chat Model node for better integration with the AI Agent node. Any suggestions or workarounds?

What is the error message (if any)?

Please share your workflow

Share the output returned by the last node

Information on your n8n setup

  • n8n version: 1.113.3
  • Database (default: SQLite):
  • n8n EXECUTIONS_PROCESS setting (default: own, main):
  • Running n8n via (Docker, npm, n8n cloud, desktop app): npm
  • Operating system: Kali Linux

I’m not sure this is possible with the current n8n setup. Generic credentials with custom headers only work with the HTTP Request node.
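For completeness, a generic Custom Auth credential along these lines would inject your header (the header name is taken from your curl example; the rest is a sketch) — but, as noted, it is only honored by the HTTP Request node, not by the Gemini Chat Model node:

```json
{
  "headers": {
    "api-key": "YOUR_NEXUS_API_KEY"
  }
}
```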

As a workaround: can you manipulate the headers of the outgoing requests at the reverse proxy / system level?
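To illustrate the proxy idea: a minimal local proxy sketch, assuming the on-prem gateway accepts the same request paths and bodies as the public Gemini API and differs only in the auth header. The endpoint and `api-key` header name come from your curl example; everything else (port, the `x-goog-api-key` header the Gemini node is expected to send) is an assumption to verify against your setup. Uses only the Python standard library:

```python
# Sketch of a local header-rewriting proxy (assumptions: the gateway only
# differs from the public Gemini API in its auth header, and the Gemini
# node sends a Google-style x-goog-api-key header).
import os
import urllib.request
from http.server import BaseHTTPRequestHandler, HTTPServer

UPSTREAM = "https://genai-nexus.api.company.net"  # on-prem gateway from the curl example

def rewrite_headers(headers: dict) -> dict:
    """Drop the Google-style auth header (and hop-by-hop fields) and
    inject the gateway's custom api-key header instead."""
    out = {k: v for k, v in headers.items()
           if k.lower() not in ("x-goog-api-key", "host", "content-length")}
    out["api-key"] = os.environ["NEXUS_API_KEY"]
    return out

class Proxy(BaseHTTPRequestHandler):
    def do_POST(self):
        body = self.rfile.read(int(self.headers.get("Content-Length", 0)))
        req = urllib.request.Request(
            UPSTREAM + self.path, data=body,
            headers=rewrite_headers(dict(self.headers)), method="POST")
        with urllib.request.urlopen(req) as resp:
            self.send_response(resp.status)
            self.send_header("Content-Type",
                             resp.headers.get("Content-Type", "application/json"))
            self.end_headers()
            self.wfile.write(resp.read())

def main():
    # Call main() to serve; the Gemini credential would then need its
    # host pointed at http://127.0.0.1:8080 (if the node allows a
    # custom host — check your credential options).
    HTTPServer(("127.0.0.1", 8080), Proxy).serve_forever()
```

This only helps if the Gemini credential lets you override its base host; otherwise the rewrite has to happen at a system-level proxy as suggested above.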
