Variances in URL provided by $execution.resumeUrl

Describe the problem/error/question

I have a workflow which reads in a table (referred to as the “import_manager” table) and then dispatches this information to other workflows to perform an “initial import” of the data. I start by separately importing the “top-most” table (organizations); for the remaining tables, I loop through them in the order defined in the import_manager table.

In the first instance of Wait for webhook, the URL that is sent looks fine and is in the format https://my-actual-hostname/webhook-waiting/uid.
In the next instance, I instead get “http://localhost:5678/webhook-waiting/62820”.

This URL does not work for me, presumably because I am in an environment with worker instances, but that is just a guess. Specifically, I believe the “localhost” part of this URL is what is tripping things up.

My fallback option here would be to parse this URL and correct it, but ideally I would like it to be correct when it is sent.
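The fallback described above could be sketched in an n8n Code node like this. This is only a workaround sketch, not the real fix: `PUBLIC_BASE` is an assumed placeholder you would replace with your instance's actual external hostname.

```javascript
// Workaround sketch: rewrite the host portion of $execution.resumeUrl
// before handing it to the sub-workflow, keeping the /webhook-waiting/<uid>
// path intact. PUBLIC_BASE is an assumption -- use your real external URL.
const PUBLIC_BASE = 'https://n8n.example.com';

function fixResumeUrl(resumeUrl, publicBase) {
  const url = new URL(resumeUrl);
  const base = new URL(publicBase);
  url.protocol = base.protocol;   // e.g. 'https:'
  url.hostname = base.hostname;   // swap 'localhost' for the public host
  url.port = base.port;           // '' clears the explicit :5678 port
  return url.toString();
}

// Example with the broken URL from the second Wait:
const fixed = fixResumeUrl('http://localhost:5678/webhook-waiting/62820', PUBLIC_BASE);
// → 'https://n8n.example.com/webhook-waiting/62820'
```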

Any ideas?

This is the workflow that dispatches the jobs.

This is one of the workflows that receives the job and POSTs back to resume

Information on your n8n setup

  • n8n version: 0.221.2
  • Database (default: SQLite): Postgres
  • n8n EXECUTIONS_MODE setting (default: own, main): queue
  • Running n8n via (Docker, npm, n8n cloud, desktop app): Docker
  • Operating system: Unknown linux distro

Hi @mbowler, I am sorry you’re having trouble.

I don’t have access to your Supabase database or API keys, meaning I won’t be able to run your exact workflows. So I tried reproducing your problem with a simple workflow like this:

However even on subsequent Waits I am seeing the correct hostname being used by n8n:

I am using a slightly newer version of n8n though, so perhaps as a first step could you try upgrading to [email protected] and see if the behaviour persists?

Yes, that is probably a good start.
I also just noticed that my worker configs do not have WEBHOOK_URL set; it is only set on the primary instance. Could that be affecting this?

Ah yes, it didn’t occur to me to test such a setting when looking into this. I’d suspect this could be causing it, let me give this a go and get back to you 🙂

Yep, I was now able to confirm this. Without the workers knowing about WEBHOOK_URL I would see localhost:5678 from the second wait onwards for manually started executions (and from the first wait for production executions).

Once the workers had WEBHOOK_URL=https://n8n.example.com/ set, this is working as it should.
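For anyone hitting this in a Docker Compose setup, the fix above amounts to setting WEBHOOK_URL on the worker services as well as the main instance. A minimal sketch, assuming queue mode and placeholder service names and hostname:

```yaml
# Sketch only -- service names, image tag, and hostname are assumptions.
# The key point: WEBHOOK_URL must be set on BOTH main and worker services.
services:
  n8n-main:
    image: n8nio/n8n
    environment:
      - EXECUTIONS_MODE=queue
      - WEBHOOK_URL=https://n8n.example.com/
  n8n-worker:
    image: n8nio/n8n
    command: worker
    environment:
      - EXECUTIONS_MODE=queue
      - WEBHOOK_URL=https://n8n.example.com/   # previously missing on workers
```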

This topic was automatically closed 90 days after the last reply. New replies are no longer allowed.