I'm using the latest version of n8n to implement conversational AI agents with tool-calling capabilities. Currently I use n8n only for integrations and workflows, and Dify for prompt engineering and building the AI itself.
With the growth and updates of the AI Agent node, I tried building the agent directly in n8n without Dify, using the same LLM, the same prompt, and the same settings (temperature, top-p, etc.). The responses in n8n are much, much worse: they aren't humanized, don't use emojis well, don't follow the step-by-step flow laid out in the prompt, and mix together many sentences from the prompt, among other issues with conversational quality.
I'd like to know whether there is a way to write a prompt specifically for n8n, or whether this is a real limitation. The agent in n8n seems extremely formal and inflexible, regardless of the temperature and top-p settings.
Information on your n8n setup
n8n version: 1.79.1
Database (default: SQLite): Postgres
Running n8n via (Docker, npm, n8n cloud, desktop app): Docker
I suspect that Dify may add extra system messages or hidden instructions that influence the response style. Try inspecting the API calls for differences:
If Dify is producing better responses, check the exact payload it sends to the LLM and compare it with what n8n sends.
Use n8n’s HTTP Request node to manually call the same LLM API with identical settings and compare outputs.
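To make that comparison concrete, here is a minimal sketch of how you could build the chat-completion payload yourself and send it via the HTTP Request node. The model id, settings, and prompt text below are placeholders I chose for illustration, not values from this thread:

```javascript
// Sketch: build the exact chat-completion payload the HTTP Request node
// would send, so you can diff it field by field against the payload
// captured from Dify's request logs. All values here are placeholders.
function buildChatPayload({ systemPrompt, userMessage, temperature, topP }) {
  return {
    model: 'gpt-4o', // placeholder model id
    temperature,
    top_p: topP,
    messages: [
      { role: 'system', content: systemPrompt },
      { role: 'user', content: userMessage },
    ],
  };
}

const payload = buildChatPayload({
  systemPrompt: 'You are a friendly assistant. Use emojis.',
  userMessage: 'Hello!',
  temperature: 0.7,
  topP: 1,
});

// Log it (or POST it with the HTTP Request node) and compare with Dify.
console.log(JSON.stringify(payload, null, 2));
```

If the two payloads differ only in the system message, that tells you the quality gap comes from hidden instructions rather than the model settings.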
You might also insert a Code (Function) node to pre-process the prompt before it reaches the AI Agent node.
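As a rough illustration of that pre-processing step, here is a sketch of logic a Code node could run before the agent. The `chatInput` field name and the style prefix are my assumptions, not something confirmed in this thread:

```javascript
// Sketch of Code-node logic: normalize the incoming message and prepend a
// style instruction before it reaches the AI Agent node. The field name
// `chatInput` is an assumption and may differ in your workflow.
function preprocessPrompt(items) {
  const stylePrefix =
    'Respond in a warm, informal tone and follow each step of the ' +
    'instructions in order.\n\n';
  return items.map((item) => ({
    json: {
      ...item.json,
      chatInput: stylePrefix + String(item.json.chatInput || '').trim(),
    },
  }));
}

// Inside an actual n8n Code node, the body would be roughly:
//   return preprocessPrompt($input.all());
console.log(preprocessPrompt([{ json: { chatInput: '  Hi there  ' } }]));
```

This keeps the style guidance in your own hands instead of relying entirely on whatever system prompt the agent node injects.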
I'll try that to identify the cause. I suspect the same thing: there must be some intermediate processing before the prompt is sent to the LLM, but I suspect that processing happens in n8n, not in Dify.
n8n is excellent at running tools, and that is probably due to some aggressive system prompt it injects itself, but it makes the model very blunt and lacking in personality.