Extremely Strange 🧐 adding AI node to Canvas with Telegram causes workflow to not trigger

I’m building out some automations for a client’s self-hosted n8n instance, and I’ve come across a super weird problem :face_with_monocle:.

For background, I’m an experienced workflow developer with ~50 production n8n workflows/agents under my belt, and I have a number of Telegram-based n8n AI agents that I use on my own self-hosted instances on a daily basis.

I have a very simple sample workflow that works: it receives a message from Telegram and replies with a Telegram message.

This is working as expected:

However, as soon as I add an AI Agent node into the mix, it errors out:

Every time I execute this workflow, it fails instantly, before the trigger node even finishes firing, with an Error: Cannot read properties of undefined (reading 'execute') error. See below:

I’m pretty perplexed about this one. Haven’t experienced something similar on any of the other n8n instances or Telegram bots I’ve worked on.


For debugging, I used an AI Agent node with just the "Chat Trigger", and it works as expected.

And the Telegram nodes with an OpenAI "Message a model" node also work fine.

So it’s something about how the AI Agent Node is interacting with the Telegram nodes.

The weirdest part is that the same error happens even if the AI Agent node is disconnected from the Telegram nodes and deactivated. For example, the workflow below fails in the same way.


Any help or insight would be greatly appreciated!!

Information on your n8n setup

  • n8n version: 1.102.3 (Latest)
  • Database: PostgreSQL
  • n8n EXECUTIONS_PROCESS setting (default: own, main): EXECUTIONS_MODE = queue
  • Running n8n via (Docker, npm, n8n cloud, desktop app): Docker on Railway
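
For anyone trying to reproduce this, the environment described above boils down to roughly the following entries (a sketch using the standard n8n variable names; the actual Railway/Docker config may differ):

    DB_TYPE=postgresdb
    EXECUTIONS_MODE=queue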

Hi @theJoshMuller, welcome to the community!
By the way…
[screenshot]
This causes the error: you're adding an extra {{ at the start of the expression. Remove the doubled curly braces, and maybe that helps!
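
To make that suggestion concrete, this is the kind of change being described (the field path is hypothetical, since the real expression is only visible in the screenshot):

    Invalid (doubled opening braces):
        {{ {{ $json.message.text }} }}

    Valid n8n expression:
        {{ $json.message.text }}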


Removing the invalid syntax from the AI Agent expression will definitely help. I just ran it successfully like this and received the "test" message back.

Thanks @jabbson and @cutecatcode. I hand-edited some of the code in those demos for client anonymity and messed up the syntax, I guess. But the problem still stands.

I copied @jabbson's corrected version, made sure the credentials were right, and still got the same error:

The thing that’s especially weird to me is that the logs aren’t even showing anything. The workflow is failing before the trigger can even fire, which is different from just a given node having bad syntax.

If the same workflow works for me (I run 1.102.3, by the way) and doesn't work for you, the workflow itself probably isn't the problem (it isn't inherently broken). Instead, this must be related to other differences between our environments, such as:

  • Node versions (n8n, AI Agent node, dependencies), which is unlikely since we are running the same version of n8n.
  • Node configuration details (API keys, for instance). Just to make sure, do you have credits on your OpenAI balance?
  • Workflow execution mode (your queue mode). I think I've read another topic where this was the cause of an issue that, if I remember correctly, manifested in the same or a similar way. See if temporarily changing the mode helps (a sketch of that change is below).
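
For reference, that temporary change would look roughly like this in the container's environment (standard n8n variable name; how you pass it depends on the Docker/Railway setup):

    # switch from queue mode back to the default single-instance mode for testing
    EXECUTIONS_MODE=regular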

Otherwise, I am at a loss here.

Yeah, I'm just using a clean Docker install, set up the same way I've done in the past, with the latest official version. So I'm also doubtful that this is the cause :thinking:

Node configuration details (API keys, for instance). Just to make sure, do you have credits on your OpenAI balance?

Yup. Like I said, when I use the OpenAI "Message a model" node, it works fine.

Workflow execution mode (your queue mode). I think I've read another topic where this was the cause of an issue that, if I remember correctly, manifested in the same or a similar way. See if temporarily changing the mode helps.

Interesting. Will try that now. My other instances are set up the same way, if I'm not mistaken, so if this is the solution, I'll maybe be even more confused than I am right now, haha.

Nope, still fails after setting EXECUTIONS_MODE to regular :face_with_monocle: :melting_face:

Me too!! :sweat_smile:

Sorry for the multiple posts.

I dug into the server's logs, and the error there is more detailed. Adding it here:

Worker started execution 68 (job 68)
Worker errored while running execution 68 (job 68)
Cannot read properties of undefined (reading 'execute') (execution 68)
Enqueued execution 68 (job 68)
Execution 68 (job 68) failed
TypeError: Cannot read properties of undefined (reading 'execute')
    at shouldAssignExecuteMethod (/usr/local/lib/node_modules/n8n/src/utils.ts:88:13)
    at NodeTypes.getByNameAndVersion (/usr/local/lib/node_modules/n8n/src/node-types.ts:57:32)
    at new Workflow (/usr/local/lib/node_modules/n8n/node_modules/.pnpm/n8n-workflow@file+packages+workflow/node_modules/n8n-workflow/src/workflow.ts:99:30)
    at JobProcessor.processJob (/usr/local/lib/node_modules/n8n/src/scaling/job-processor.ts:106:20)
    at processTicksAndRejections (node:internal/process/task_queues:105:5)
    at Queue.<anonymous> (/usr/local/lib/node_modules/n8n/src/scaling/scaling.service.ts:95:5)

 Problem with execution 68: Error: Cannot read properties of undefined (reading 'execute'). Aborting.

Error: Cannot read properties of undefined (reading 'execute') (execution 68)

OK, solved it. Putting the solution here in case anyone else has the same issue.

The fix for me was setting EXECUTIONS_PROCESS to main.
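
Concretely, that meant adding a single environment variable to the n8n container (shown here as a plain env entry; set it however you normally pass env vars to your Docker/Railway deployment):

    EXECUTIONS_PROCESS=main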


Gemini tells me:

This environment variable controls how the execution process is managed. It has a few settings, but the one that might help you is main.

By setting EXECUTIONS_PROCESS=main, you eliminate the inter-process communication and potential for serialization/deserialization issues that might be at the heart of this bug. It essentially makes your n8n instance run as a single, monolithic process.

The error seems to happen when the main process hands off the workflow to a worker process. In that handoff, the AI Agent node’s definition is somehow lost or corrupted because the Telegram Trigger is also present. By forcing the execution to stay in the main process, you skip this handoff entirely. The process that loaded the workflow from the database is the same one that will execute it, which should prevent the node definition from getting lost.
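
To make that concrete, here is a tiny illustrative sketch (not n8n's actual source, and the node-type identifier is only an assumption) of the failure mode the stack trace shows: a node-type lookup returns undefined in the worker process, and reading a property of that undefined value throws exactly this TypeError.

    // Illustrative sketch only; not n8n's real code.
    // Registry of node types loaded in this process:
    const nodeTypeRegistry = new Map<string, { execute?: () => void }>();

    // Assume the AI Agent node type was never loaded in the worker,
    // so the lookup comes back undefined:
    const nodeType = nodeTypeRegistry.get('@n8n/n8n-nodes-langchain.agent');

    // Reading a property of that undefined value is what blows up:
    // TypeError: Cannot read properties of undefined (reading 'execute')
    console.log(typeof nodeType!.execute);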


Interestingly, this env var was deprecated a while ago and shouldn't affect anything. The main mode is the default (source 1, source 2). The docs say to use queue mode if full execution isolation is needed.

Did you have it explicitly configured as something else?

Really? That’s super weird. :face_with_monocle:

Declaring it as main seems to be the only thing I changed between when it worked and when it didn’t work.

It was undeclared before.

This topic was automatically closed 7 days after the last reply. New replies are no longer allowed.