How does the n8n Agent node define tools for LLMs like Ollama or OpenAI?

I’m trying to understand how the Agent node in n8n defines and passes tools to LLMs, whether they’re hosted locally via Ollama or accessed through OpenAI.

I’m working with an LLM hosted on Ollama, and I’m trying to define tools for it. Here’s an example of a tool definition when calling Ollama’s /api/chat endpoint directly:

curl http://localhost:11434/api/chat -d '{
  "model": "qwen:latest",
  "messages": [
    {
      "role": "user",
      "content": "What is the weather in Toronto?"
    }
  ],
  "tools": [
    {
      "type": "function",
      "function": {
        "name": "get_current_weather",
        "description": "Get the current weather for a specific location",
        "parameters": {
          "type": "object",
          "properties": {
            "location": {
              "type": "string",
              "description": "The city and province/state, e.g., Vaughan, ON"
            }
          },
          "required": ["location"]
        }
      }
    }
  ]
}'
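
For context, when the model decides to use the tool, Ollama returns the call in the message’s tool_calls array, roughly in this shape (abridged; the arguments value is just an illustration):

{
  "model": "qwen:latest",
  "message": {
    "role": "assistant",
    "content": "",
    "tool_calls": [
      {
        "function": {
          "name": "get_current_weather",
          "arguments": { "location": "Toronto, ON" }
        }
      }
    ]
  },
  "done": true
}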

However, I haven’t found a way to inspect or debug how tools are defined within the Agent node. The execution log doesn’t seem to include tool details like function names, descriptions, or parameters.
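
One workaround I’ve been considering is putting a small logging proxy between n8n and Ollama so I can capture the exact request body, including the tools array, that the Agent node sends. A rough sketch, assuming Ollama is on its default port 11434, socat is available, and the n8n Ollama credential’s Base URL can be pointed at the proxy:

socat -v TCP-LISTEN:11435,reuseaddr,fork TCP:localhost:11434

With the credential pointed at http://localhost:11435, the -v flag makes socat echo the forwarded HTTP traffic, including the JSON request body, to stderr. That feels like a detour, though, so I’d rather see the tool definitions directly in n8n.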

My question:
How can I view or debug how tools are defined and passed to the model by the Agent node in n8n? I’m specifically looking for visibility into the tool names, descriptions, and parameter schemas.