AI Agent does not receive document input from previous step (in active mode) but does when testing

Describe the problem/error/question

The node runs as it should when testing it. I copy/upload a PDF to the designated folder and the flow triggers. The file gets opened, and its contents are extracted to JSON. The relevant fields are handed over to the AI Agent prompt, which does its magic. The agent’s output is uploaded as a file via Nextcloud.

When I activate the workflow and copy a file into the folder, I receive a file in Nextcloud stating that the agent could not find the PDF that the prompt referred to.

When testing the workflow manually, however, it works. What am I missing?

What is the error message (if any)?

No error; the flow executes successfully. However, see above. The output file looks something like this:
No paper text was provided. Please supply the full text in order to receive a summarization according to the specified instructions.

Please share your workflow

Share the output returned by the last node

[
  {
    "output": "Title of the Paper:\n\n> No paper provided. … ."
  }
]

Information on your n8n setup

  • n8n version: 1.92.2
  • Database (default: SQLite): default
  • n8n EXECUTIONS_PROCESS setting (default: own, main): default
  • Running n8n via (Docker, npm, n8n cloud, desktop app): Docker
  • Operating system: Ubuntu 24.04

Did you check in the “Executions” tab what you get from each node?

Hi @anthony31 ,
yes, I did. The node “Extract from File” lists a JSON-formatted output of the PDF with all the relevant data (esp. {{ $json.text }} and {{ $json.metadata }}). These are handed over to the AI Agent. The agent’s output is formatted as instructed by the prompt (as Markdown) and states that no document was provided (i.e., empty content of $json.text / $json.metadata).
When editing and testing the workflow, it works.

I changed the AI Agent node to the OpenAI Message Model and added a “Convert to File” step. This achieves consistent results as intended. I could not, however, figure out why the AI Agent works in testing but not in an active workflow.

Things that I tried but did not help:

  • Changing the model (GPT-4.1, GPT-4o, GPT-4.1-mini)
  • Changing the file handling (Nextcloud upload, local save) → the problem, imho, lies in the step before that.

When I test this with a sample PDF I have, I notice the $json.text variable is an array. I’m wondering if this is causing the data to look like [object Object] when you run it normally, meaning the LLM received no real text to work with. Try joining the array with some newlines to force a single text string and see if that works better?

Review and summarize {{ $json.text.join('\n\n') }} {{ $json.metadata }}

I would maybe also enhance the prompt slightly, splitting the instruction from the data text and wrapping the text in quotes, just to be a little more explicit about your intent:

Review and summarize the following:

"{{ $json.text.join('\n\n') }}"

Also, if all you’re trying to do is summarize the content of the file, there is a special node for that.

Here I didn’t even have to do anything; it just worked.

You could of course change the prompts to your needs via the options.


Thanks for your input. I think you solved the issue!

  • I set the Join Pages option, which I thought would circumvent $json.text being output as an array. This did not solve the issue.
  • Your updated prompt, however, does solve it when I drop {{ $json.metadata }} and unset the Join Pages option. Both $json inputs together still seem to be a problem.
Review and summarize the following:

"{{ $json.text.join('\n\n') }}"

This topic was automatically closed 7 days after the last reply. New replies are no longer allowed.