Hello! I am trying to create a loop that generates some text using ChatGPT and then asks a person on Telegram if the answer is good. If it is, the workflow moves forward; if not, it goes back and generates the text again.
The problem is that the ChatGPT node gets the topic from a previous node. The first time it works well, but if you ask it to write the text again, when the flow goes back to the ChatGPT node it throws an error:
" Bad request - please check your parameters
Invalid value for ‘content’: expected a string, got null."
Why does the node not see the context variable the second time? If I click on the node, it is still green and I can see the text.
Hi @a8m, thanks for clearly explaining your use case — this is a common issue when loops are involved and nodes rely on prior context.
What’s likely happening:
When your workflow loops back to the ChatGPT node (in your case, “Research”), it’s not receiving the same input context as the first time. Even if the node appears green, it may not have the required content field in the current execution context, which leads to the error:
Bad request – please check your parameters
Invalid value for 'content': expected a string, got null.
Why this happens:
n8n does not retain previous node data across iterations unless it is explicitly passed again. This is especially noticeable in loops, where each path carries its own data.
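As a rough illustration, instead of relying on whatever item happens to arrive on the loop branch, the ChatGPT node's prompt can reference the node that originally produced the topic by name. Here "Set Topic" and the field name topic are just placeholders for whatever your workflow actually uses:

```
{{ $('Set Topic').item.json.topic }}
```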
How to fix it:
Make sure the content or prompt is always passed to the ChatGPT node:
Before re-entering the node, use a Set or Function node to reassign or restore the expected field.
Store the original value in a separate key (e.g., topic_original) so you can reuse it.
Then, before calling ChatGPT again, check whether content is missing and restore it from topic_original (see the Function node example right after this list).
Use a Merge node in “Multiplex” mode to combine original data with updated values in the loop.
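Here is a minimal sketch of what that Function node could look like. The field names content and topic_original are assumptions based on the description above, so adapt them to your own data:

```javascript
// Runs once for all items in a Function node.
// Saves the original prompt on the first pass and restores `content`
// if the loop comes back without it.
// Field names (content, topic_original) are placeholders from this thread.
return items.map((item) => {
  const data = item.json;

  // First pass: keep a copy of the original prompt.
  if (!data.topic_original && typeof data.content === 'string') {
    data.topic_original = data.content;
  }

  // Loop pass: content came back null, so restore it from the saved copy.
  if (!data.content) {
    data.content = data.topic_original || '';
  }

  return item;
});
```

Place this node right before the ChatGPT "Research" node so that both the first pass and every loop iteration flow through it.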
Optional for advanced cases: Store the value in Redis or a temporary DB if your loop becomes more complex and needs persistent memory across paths.
If you can share the JSON of your loop section (especially the part before and after the ChatGPT node), I’d be happy to give you a working example or fix it directly.
And are you referencing the last output back into the OpenAI model? You could maybe put a node between those two GPT nodes, like an Edit Fields node, loop back to there, update that node with the new output as well, and then reference that Edit Fields node in the Research node.
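For example, if that middle node were called "Edit Topic" (a placeholder name), the Research node's prompt could point at it explicitly instead of at whatever item arrives on the loop branch:

```
{{ $('Edit Topic').item.json.topic }}
```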