Querying Model Name from AI Agent subnode

Describe the problem/error/question

Hello n8n Community,

I am trying to extract the specific LLM model name used in my workflow for analytics purposes, but I am having difficulty accessing the data.

My Setup: I have an AI Agent node in my workflow. This agent node contains a model-selector sub-node, which in turn connects to the actual LLM model node (e.g., “OpenRouter Chat Model1”).

The Problem: I need to retrieve the value of model_name from the LLM response. As shown in the attached screenshot, this model_name is present in the output of the “OpenRouter Chat Model1” sub-node, located at item.json.generations[0].generationInfo.model_name.

The model name, including a reference to the sub-node, can also be found in the browser console.

Screenshot 2

I attempted to access this information using an expression like {{ $('Model Selector').item.json.options.model }}. However, this resulted in the error: “No path back to referenced node. There is no connection back to the node 'Model Selector', but it's used in an expression here. Please wire up the node (there can be other nodes in between).”

I understand that this error likely indicates that $(NodeName) expressions are designed to reference nodes upstream in the direct data-flow path, and that sub-nodes within a parent node do not expose their input/output in a way that allows direct referencing from arbitrary downstream nodes using this syntax. My intention with item.json.options.model was to access a parameter, but the model name is actually in the output.

My Goal: I need a reliable and correct n8n expression to extract the model_name (e.g., “shisa-ai/shisa-v2-llama3.3-70b:free”) so I can use it in subsequent nodes for logging or other processing.

What is the error message (if any)?

No path back to referenced node

There is no connection back to the node ‘Model Selector’, but it’s used in an expression here.

Please wire up the node (there can be other nodes in between).

Please share your workflow

Information on your n8n setup

  • n8n version: 1.100
  • Database (default: SQLite): PostgreSQL
  • Running n8n via (Docker, npm, n8n cloud, desktop app): Docker
  • Operating system: Alpine Linux

@EmeraldHerald Hi! You seem to have knowledge on similar topics. Can you please help me out?

Hey @Koen_7, hope all is good.

Please find my way of determining which LLM was triggered:

Code node output:

A slightly simplified version of the code:

// List the LLM node names (here each node is named after the model it runs)
const connected_models = [
  "shisa-ai/shisa-v2-llama3.3-70b:free",
  "tngtech/deepseek-r1t2-chimera:free"
];

// Keep only the nodes that actually ran in this execution
return {executed_llms: connected_models.filter(item => $(item).isExecuted)};
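Outside n8n the same filtering logic can be exercised with a mocked $() helper. In the sketch below, the $ function and the executedNodes set are illustrative stand-ins for n8n's runtime (inside a real Code node you would use n8n's own $() instead):

```javascript
// Stand-in for n8n's $() node-reference helper: a node "is executed"
// if its name appears in this (hypothetical) set.
const executedNodes = new Set(["shisa-ai/shisa-v2-llama3.3-70b:free"]);
const $ = (name) => ({ isExecuted: executedNodes.has(name) });

const connected_models = [
  "shisa-ai/shisa-v2-llama3.3-70b:free",
  "tngtech/deepseek-r1t2-chimera:free"
];

// Same expression as in the Code node above
const result = { executed_llms: connected_models.filter(item => $(item).isExecuted) };
console.log(result); // { executed_llms: [ 'shisa-ai/shisa-v2-llama3.3-70b:free' ] }
```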

If only one model is ever expected to be executed:

const connected_models = [
  "shisa-ai/shisa-v2-llama3.3-70b:free",
  "tngtech/deepseek-r1t2-chimera:free"
];

// If exactly one of the two always runs, a simple ternary is enough
return {executed_llm: $(connected_models[0]).isExecuted ? connected_models[0] : connected_models[1]};

If you want to keep the node names and the model names different:

// Map each node name to the model that node runs
const connected_models = {
  "OpenRouter Chat Model": "shisa-ai/shisa-v2-llama3.3-70b:free",
  "OpenRouter Chat Model1": "tngtech/deepseek-r1t2-chimera:free"
};

// Debug: log the first node name
console.log(Object.keys(connected_models)[0]);

// If the first node ran, return its model; otherwise return the second
return {
  executed_llm: $(Object.keys(connected_models)[0]).isExecuted ?
    Object.values(connected_models)[0] : Object.values(connected_models)[1]
};
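A possible generalization of this mapping approach (my own sketch, not from the thread): instead of only checking the first key, filter every node-name/model pair, so more than one executed model can be reported. The $() helper and the executedNodes set are again illustrative stand-ins for n8n's runtime:

```javascript
// Stand-in for n8n's $() node-reference helper (hypothetical mock)
const executedNodes = new Set(["OpenRouter Chat Model1"]);
const $ = (name) => ({ isExecuted: executedNodes.has(name) });

const connected_models = {
  "OpenRouter Chat Model": "shisa-ai/shisa-v2-llama3.3-70b:free",
  "OpenRouter Chat Model1": "tngtech/deepseek-r1t2-chimera:free"
};

// Keep the model for every node that actually ran
const executed = Object.entries(connected_models)
  .filter(([nodeName]) => $(nodeName).isExecuted)
  .map(([, modelName]) => modelName);

const result = { executed_llms: executed };
console.log(result); // { executed_llms: [ 'tngtech/deepseek-r1t2-chimera:free' ] }
```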

Thank you! I will try this when I'm back at my PC.

@jabbson do you also have a solution where I don't have to enter the possible model names manually up front?

No, I do not

@jabbson I’ve made a feature request for this case. In my opinion, this should be fairly simple to implement. If you want, you can upvote the request. Thanks for your help anyway!