Why do longer prompts fail on Gemini 1.5 Pro via n8n, while short ones work fine?

I’m facing a strange issue while integrating the Gemini 1.5 Pro model for image analysis using n8n.

The flow works perfectly when I send a short prompt (a single line, say). But when I use a longer, structured prompt with multiple steps and detailed instructions, the HTTP Request node fails outright (it doesn't just return a bad response; the request itself errors).

I’ve already confirmed that:

  • The request is going to the correct endpoint:
    https://generativelanguage.googleapis.com/v1/models/gemini-1.5-pro:generateContent
  • The JSON format is valid:
    contents → role: "user" → parts → text
  • I’m using POST with Content-Type: application/json
  • The API key is working fine for other requests.
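One thing worth double-checking beyond the list above: how the prompt gets into the JSON body. A common cause of exactly this symptom (short prompts fine, long multi-line prompts failing) is interpolating the prompt string directly into a JSON template inside the node, where newlines and quotation marks silently break the JSON. A minimal sketch (the prompt text is a hypothetical stand-in):

```python
import json

# Hypothetical stand-in for a long, structured prompt
long_prompt = """Step 1: Describe the image.
Step 2: List any visible text, including "quoted" phrases.
Step 3: Summarize in one paragraph."""

# Interpolating the raw string into a JSON template breaks on
# newlines and embedded quotes:
broken_body = (
    '{"contents": [{"role": "user", "parts": [{"text": "%s"}]}]}'
    % long_prompt
)
try:
    json.loads(broken_body)
except json.JSONDecodeError as e:
    print("raw interpolation produced invalid JSON:", e)

# Building the body as a dict and serializing it escapes everything:
body = json.dumps({
    "contents": [
        {"role": "user", "parts": [{"text": long_prompt}]}
    ]
})
json.loads(body)  # round-trips cleanly
print("serialized body is valid JSON")
```

A one-line prompt happens to survive raw interpolation, which would explain why only the long prompts fail.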

Has anyone run into this? Is there a hidden payload size limit? Or could the model be rejecting the input for another reason?

Any insights would be greatly appreciated!

Check the token limits. Each LLM has different limits on input and output tokens, and exceeding them can make a request fail rather than return a useful error.
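The API also exposes a `countTokens` endpoint alongside `generateContent` for an exact count; as a quick offline sanity check, a rough heuristic of about 4 characters per token for English text is enough to see what ballpark a prompt is in (the heuristic and the sample prompt below are illustrative, not official numbers):

```python
def rough_token_estimate(text: str) -> int:
    """Very rough heuristic: ~4 characters per token for English text.

    Only useful for order-of-magnitude checks; use the API's
    countTokens endpoint for an exact figure.
    """
    return max(1, len(text) // 4)

# Hypothetical stand-in for a long, structured prompt
prompt = "Analyze the attached image and describe it step by step. " * 50
print(f"~{rough_token_estimate(prompt)} tokens")
```

If the estimate is in the hundreds or low thousands, the prompt length itself is almost certainly not the problem, which would point back at how the request body is being built.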