JSON value not passed properly to LLM in workflow

I am trying to connect to n8n workflow through Google Colab. This is my Colab code

import requests

# 1. Your n8n production URL
n8n_production_url = ""

# 2. Data to send in the JSON body
data = {
    "city": "Bali",
    "sessionID": "435j3h1467"
}

# 3. Execute the POST request
response = requests.post(n8n_production_url, json=data)

# 4. Print results
print(response.content)

In my AI Agent node, the prompt I gave is: “You are a travel assistant. You plan 4-day trip itineraries. I will provide a city name, and you will respond with a detailed 4-day itinerary for that city.”

I am building a simple agentic workflow where the user's input is a city name like Bali and the output is a travel itinerary.

The LLM response I am getting is: {"output": "Got it! Please tell me the city name you want the 4-day itinerary for."}

The workflow has the following nodes:

  1. Webhook - To take the response from Google Colab in POST method
  2. AI Agent - LLM, Tool, Memory

Let me know how to overcome this issue and get a proper response from LLM.

Hey @Kaushik_AI Welcome to the n8n community!

Can you please share your workflow JSON with us so that we can spot the issue? If you are getting a response from the AI Agent, your webhook is working as it should, so the problem remains with the AI Agent. I am assuming the prompt you are showcasing here is set as a SYSTEM PROMPT, not as the prompt to the AI Agent, since the prompt should be the query the user posted through the webhook. Let me know if this is correct and whether you can share your workflow. Hope this helps!

Your code is inside a text field. You must add all of your code into a code field. You can create one as depicted in this screenshot; if you hover your cursor over such a position, the ‘+’ will appear:

Hey :waving_hand: @Kaushik_AI

Your AI Agent isn’t receiving the city name. The issue is that the AI Agent doesn’t know where to read input from when using a Webhook trigger.

Quick fix:

  1. Open your AI Agent node

  2. Find the “Text” parameter (the input field)

  3. Set it to “Define below” instead of “Take from previous node”

  4. Switch to Expression mode and enter: {{ $json.city }}

This tells the AI Agent to read the city value from your webhook JSON.

Also add a Respond to Webhook node after the AI Agent to send the response back to Colab.
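As a sketch of what that expression reads (assuming the same JSON body as the Colab snippet above):

```python
import json

# Hypothetical payload mirroring the Colab snippet above;
# requests.post(url, json=data) serializes this dict into the webhook's JSON body.
data = {"city": "Bali", "sessionID": "435j3h1467"}
body = json.dumps(data)

# The AI Agent expression {{ $json.city }} resolves to the "city" value of that body.
print(json.loads(body)["city"])
```

Note: depending on your n8n version and Webhook node settings, the POST body may arrive nested under a `body` key, in which case the expression would be `{{ $json.body.city }}` instead.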

Let me know if this works! :rocket:

@Anshul_Namdev

Hey @Kaushik_AI, just as I said earlier, you have set “You are a travel assistant. You plan 4-day trip itineraries. I will provide a city name, and you will respond with a detailed 4-day itinerary for that city.” as the AI Agent's prompt, which means the AI always gets that as its input instead of what the user is requesting through the webhook. It should be a system prompt instead. I have made some fixes in your flow; now it should be ready:

I am sorry, I copied the wrong workflow earlier. Here is the right one, @Anshul_Namdev. But I will follow what you have modified.

No worries @Kaushik_AI , the issue is the same.

@Anshul_Namdev I am seeing a new error this time on the AI node:

No prompt specified

Expected to find the prompt in an input field called ‘guardrailsInput’ (this is what the guardrails node outputs). To use something else, change the ‘Prompt’ parameter

import requests

# 1. Your n8n production URL
n8n_production_url = ""

# 2. Data to send in the JSON body
query = {
    "prompt": "Bali",
    "sessionID":"h46767"
}

# 3. Execute the POST request
response = requests.post(n8n_production_url, json=query)

# 4. Print results
print(response)

@Kaushik_AI You should try the code below. It is not configured, but this is how you could call the webhook according to your shared setup:

import requests
import json

url = "http://localhost:5678/webhook-test/2b6d6491-0557-4dad-a125-22840e4aeac2"

try:
    response = requests.get(url)
    
    print(f"Status Code: {response.status_code}")
    print(f"Headers: {dict(response.headers)}")
    print(f"\nResponse Body:")
    
    try:
        print(json.dumps(response.json(), indent=2))
    except ValueError:
        print(response.text)
        
except requests.exceptions.ConnectionError:
    print(f"Error: Could not connect to {url}")
except Exception as e:
    print(f"Error: {e}")

So with this, are you calling the webhook correctly? That is:

https://YourCoolUrl.com/aKHkdjkjag?Prompt=Hey I want to visit Bali

With this, your webhook will get triggered and it will also return the data via the Respond to Webhook node. I have checked this in the scenario below, which is fairly simple, but that’s just one case; the AI Agent will just sit in between:

Let me know if you need more help, hope this helps!
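Rather than hand-writing that query string (spaces in the prompt make a raw URL invalid), it can be built with stdlib URL encoding. A minimal sketch, using the placeholder URL from above:

```python
from urllib.parse import urlencode

# Placeholder base URL from the example above; replace with your real webhook URL.
base = "https://YourCoolUrl.com/aKHkdjkjag"

# urlencode percent/plus-encodes the spaces, so the final URL is always valid.
url = f"{base}?{urlencode({'Prompt': 'Hey I want to visit Bali'})}"
print(url)
```

`requests.get(url)` would then trigger the webhook exactly as in the snippet above.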

The agent is working fine now. However, the response I am seeing in Google Colab is only <Response [200]> and not the LLM output.

The “Respond to Webhook” node is getting the right input from the AI Agent, but somehow this node is not sending the input it received back to my Google Colab.

Hey @Kaushik_AI You are very close! You are currently printing only the response object, since you used:

print(response)

To print the JSON, meaning the output of the AI in this case, you need something like this:

print(json.dumps(response.json(), indent=2))

Alternatively, you can also use:

print(response.json())

And with this you will be able to see your AI output inside Google Colab!
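For example, with a simulated response body (the itinerary text here is made up), the two print styles look like this:

```python
import json

# Simulated Respond to Webhook body (hypothetical itinerary content).
raw = '{"output": "Day 1: Arrive in Bali and check in..."}'
data = json.loads(raw)

print(json.dumps(data, indent=2))  # pretty-printed, like json.dumps(response.json(), indent=2)
print(data["output"])              # just the itinerary text
```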

Cheers! Let me know how this goes!


Thanks, it worked!!

Glad it worked! Kindly mark that as the solution so that others will also know what’s right.

This topic was automatically closed 7 days after the last reply. New replies are no longer allowed.