How to connect a custom LLM (for example, an Oracle Cloud Infrastructure Generative AI model such as OCI Cohere Command R 08_2024 v1.7) to the AI Agent node (model, memory, tools)
Describe the problem/error/question
I’m trying to use an AI Agent node, but I’ve run into an issue with the LLM model selection – I can only choose from a pre-configured list of models. I’m wondering if there’s a way to integrate this agent with a model available in the cloud, specifically the OCI Cohere Command R 08_2024 v1.7 (on-demand) model, which is available in Oracle Cloud Infrastructure.
So far, I’ve managed to do this by making an HTTP call to OCI using Execute Command, passing input data from the chat trigger and receiving a response from the model. However, I can’t find a way to wire this into the AI Agent node so that it uses this cloud model instead of the pre-set ones.
Has anyone faced a similar issue or knows how to integrate an external AI model from Oracle Cloud or another cloud provider with this type of AI agent? I’d appreciate any insights or suggestions!
What is the error message (if any)?
I cannot connect the Cohere/Llama model available in OCI Cloud to the AI Agent node in n8n.
Please share your workflow
A simple AI agent that takes my chat message as input, passes it to the HTTP Request node, and returns a response.
However, I would like to connect the AI Agent node to a model available in Oracle Cloud Infrastructure, because at the moment only the built-in models on the list can be selected.
Share the output returned by the last node
Information on your n8n setup
- n8n version:
- Database (default: SQLite):
- n8n EXECUTIONS_PROCESS setting (default: own, main):
- Running n8n via (Docker, npm, n8n cloud, desktop app):
- Operating system:
Step-by-Step Solution:
1. Create OCI Generative AI Credentials
Why: n8n needs authenticated access to OCI’s Cohere API.
How:
- In n8n, go to Credentials → Add Credential and store your OCI API key (for example, as a Header Auth credential).
2. Build a Custom LLM Node for OCI Cohere
Option A: Use HTTP Request Node (Quick Fix)
- Add an HTTP Request Node: point it at the OCI Generative AI endpoint and attach your OCI credentials.
- Map Input/Output: feed the chat message into the request body and map the model’s reply back to the agent.
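To make Option A concrete, here is a rough sketch of the HTTP Request node’s settings expressed as a plain JavaScript object. The endpoint path, the generateText action, and the body field names are assumptions about the OCI API rather than verified values, and the region and key are placeholders.

```javascript
// Hypothetical HTTP Request node settings for the OCI call; the endpoint
// path, action name, and body fields are assumptions, not verified OCI API.
const httpRequestConfig = {
  method: 'POST',
  url: 'https://generativeai.oci.YOUR_REGION.oraclecloud.com/20231130/actions/generateText',
  headers: {
    Authorization: 'Bearer YOUR_API_KEY',
    'Content-Type': 'application/json',
  },
  // In n8n the prompt would come from an expression such as {{ $json.input }};
  // a literal string is used here so the object is self-contained.
  body: {
    prompt: 'Hello from n8n',
    maxTokens: 1000,
  },
};

console.log(JSON.stringify(httpRequestConfig.body));
```

In the node editor these map directly onto the HTTP Request node’s Method, URL, Headers, and Body fields.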
Option B: Create a Custom Node (Advanced)
- Fork the Cohere Node: Clone the n8n Cohere node and modify its API endpoints to point to OCI.
- Override Base URL:
// In your custom node’s code:
this.httpRequest.baseURL = 'https://generativeai.oci.[REGION].oraclecloud.com/20231130';
3. Integrate with AI Agent Node
- Set Up AI Agent:
- Add an AI Agent Node.
- Under Model, select Custom LLM.
- Link your OCI Cohere node (HTTP Request or custom) to the AI Agent’s Model input.
- Map Tools/Memory: connect any tool and memory nodes to the AI Agent’s corresponding inputs.
4. Full Workflow Example
AI Agent (Trigger)
│
▼
Function Node (Format input for OCI)
│
▼
HTTP Request Node (Call OCI Cohere)
│
▼
Function Node (Reformat output)
│
▼
AI Agent (Process response with tools/memory)
Key Configurations
- Authentication: Ensure OCI’s IAM policies allow your API key to access generative AI services.
- Error Handling: Add a Catch Node to handle OCI API rate limits (common in cloud providers).
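On the rate-limit point: the HTTP Request node has a built-in Retry On Fail setting, but if you want explicit control, a backoff loop like the sketch below is one way to think about it. It is plain JavaScript with the HTTP call passed in as a function, so the logic can be exercised without a network; in n8n it would live in a Code/Function node.

```javascript
// Sketch of retry-with-backoff for OCI 429 (rate limit) responses.
// `doRequest` stands in for the actual HTTP call so the logic is testable.
async function callWithRetry(doRequest, retries = 3, baseDelayMs = 500) {
  for (let attempt = 0; attempt <= retries; attempt++) {
    const res = await doRequest();
    if (res.status !== 429) return res; // success or a non-rate-limit error
    // Exponential backoff before the next attempt.
    await new Promise((resolve) => setTimeout(resolve, baseDelayMs * 2 ** attempt));
  }
  throw new Error('OCI rate limit: retries exhausted');
}
```

For most workflows the node-level Retry On Fail setting is simpler; a hand-rolled loop only makes sense if you need custom backoff behaviour.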
Troubleshooting
- Test API Calls: Use a Debug Node to inspect raw OCI responses.
- Check OCI Logs: Verify requests reach OCI via OCI Console → Logging.
Need Help? Share:
- The exact API endpoint/parameters for OCI’s Cohere Command R.
- A screenshot of your AI Agent node setup.
You’re bridging two powerful tools—let’s get them talking! 
Catch you later,
Dandy
Sorry, can you help me, please? I have the same problem, but I can’t find the Custom LLM node in my n8n. Can you send me an example, please?
Thanks!
Hey Nacho, let’s sort this out.
The Custom LLM isn’t a pre-built node in n8n; you have to simulate it using the HTTP Request node. Basically, you’re building your own model connection manually.
Here’s a quick example you can set up:
Step 1: Create an HTTP Request node like this
URL: https://generativeai.oci.[YOUR_REGION].oraclecloud.com/20231130/actions/generateText
Method: POST
Headers:
{
  "Authorization": "Bearer YOUR_API_KEY",
  "Content-Type": "application/json"
}
Body:
{
  "prompt": "{{ $json.input }}",
  "maxTokens": 1000
}
Step 2: Add a Function node before it with this:
return { input: $json.messages[0].content };
Step 3: Add another Function node after it with this:
return { completion: $json.generatedText };
Step 4: In your AI Agent node, choose “Custom LLM” and wire everything in this order:
AI Agent → Function (input) → HTTP Request → Function (output) → AI Agent
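The wiring above can be sketched end to end in plain JavaScript, with the HTTP Request step stubbed out so you can check the data flow before touching OCI. The `messages` and `generatedText` field names are assumptions carried over from the snippets above, not verified payload shapes.

```javascript
// Function node before the HTTP Request: pull out the chat message text.
function formatInput(json) {
  return { input: json.messages[0].content };
}

// Stub standing in for the HTTP Request node; a real call would POST the
// body to the OCI generateText endpoint and return its JSON response.
function callOciStub(body) {
  return { generatedText: `echo: ${body.input}` };
}

// Function node after the HTTP Request: map OCI's field back to `completion`.
function reformatOutput(json) {
  return { completion: json.generatedText };
}

const result = reformatOutput(
  callOciStub(formatInput({ messages: [{ content: 'hi' }] }))
);
console.log(result.completion); // → "echo: hi"
```

If the stubbed chain produces the shape your agent expects, swapping the stub for the real HTTP Request node is the only remaining step.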
Give it a try, and if anything gets stuck, hit me up and we’ll fix it.
Hi Dandy.
Thank you very much for your help; unfortunately I am new to n8n, I’m coming from make.com. Could you help me with a screenshot for clarification, please?
Thanks again for your help.
Regards
Sorry Dandy, can you help me, please?