I found a temporary solution that works and meets the need; hopefully the n8n developers will add native support for it. It uses the LangChain integration built into n8n, which, in short, is the foundation of n8n's AI nodes.
This code relates to a request made some time ago: select-a-query-provider-in-the-open-router-chat-model.
const { ChatOpenAI } = require('@langchain/openai');
// Node input parameters (variables)
// You can define these in the Code node's "Input Parameters" panel
const apiKey = 'api_key_openrouter'; // OpenRouter API key
const modelName = 'qwen/qwen3-next-80b-a3b-instruct'; // OpenRouter model name
const temperature = 0.2;
const providerOrder = 'alibaba,deepinfra/bf16,parasail/bf16'; // providers to route through, in order
// API configuration
const configuration = {
baseURL: 'https://openrouter.ai/api/v1', // OpenRouter endpoint
// fetchOptions can be omitted if no proxy is needed
};
// Configure provider routing if one was specified
const modelKwargs = {};
if (providerOrder) {
modelKwargs.provider = {
order: providerOrder.split(',').map(provider => provider.trim()),
};
}
// Create the chat model instance
const chatModel = new ChatOpenAI({
apiKey: apiKey,
model: modelName,
temperature: temperature,
timeout: 60000,
maxRetries: 2,
configuration,
modelKwargs,
});
// Return the model so the chain can use it
return chatModel;
