Header Auth Failure with Crawl4AI on Local macOS Install

Describe the problem/error/question

Hey guys,
I followed an example for using Crawl4AI together with n8n, on a local Docker Desktop install on macOS Sonoma 14.7.4. I am stuck on the HTTP Request node, where I cannot authenticate to Crawl4AI. I tried several local addresses and also this YAML entry

- N8N_AUTH_EXCLUDE_ENDPOINTS=api

from this thread

but no luck. Also, when I try to enter Header Auth with
Name: Authorization
the Value changes from “Bearer 3JF8JD3ihdd” to
“__n8n_BLANK_VALUE_e5362baf-c777…”. I don’t know if this is to be expected.

I am out of ideas. Any input is highly appreciated! :slight_smile: If you need more information, please let me know. Thanks a lot for your time!

(I changed http to ttp to be able to post, due to the “5 links for new users” rule.)

What is the error message (if any)?

for ttp://192.168.65.1:11235/crawl and
ttp://localhost:11235/crawl:
The service refused the connection - perhaps it is offline

{ "body": { "urls": "ttps://ai.pydantic.dev/", "priority": "10" }, "headers": { "Authorization": "**hidden**", "accept": "application/json,text/html,application/xhtml+xml,application/xml,text/*;q=0.9, image/*;q=0.8, */*;q=0.7" }, "method": "POST", "uri": "http://192.168.65.1:11235/crawl", "gzip": true, "rejectUnauthorized": true, "followRedirect": true, "resolveWithFullResponse": true, "followAllRedirects": true, "timeout": 300000, "encoding": null, "json": false, "useStream": true }

for ttp://192.168.1.5:11235/crawl:
Authorization failed - please check your credentials
Invalid token

401 - "{\"detail\":\"Invalid token\"}"

Request

{ "body": { "urls": "https://ai.pydantic.dev/", "priority": "10" }, "headers": { "Authorization": "**hidden**", "accept": "application/json,text/html,application/xhtml+xml,application/xml,text/*;q=0.9, image/*;q=0.8, */*;q=0.7" }, "method": "POST", "uri": "ttp://192.168.1.5:11235/crawl", "gzip": true, "rejectUnauthorized": true, "followRedirect": true, "resolveWithFullResponse": true, "followAllRedirects": true, "timeout": 300000, "encoding": null, "json": false, "useStream": true }

Please share your workflow

Share the output returned by the last node

Node type

n8n-nodes-base.httpRequest

Node version

4.2 (Latest)

n8n version

1.83.2 (Self Hosted)

Time

25/03/2025, 13:33:32

Stack trace

NodeApiError: Authorization failed - please check your credentials
    at ExecuteContext.execute (/usr/local/lib/node_modules/n8n/node_modules/n8n-nodes-base/dist/nodes/HttpRequest/V3/HttpRequestV3.node.js:525:33)
    at processTicksAndRejections (node:internal/process/task_queues:95:5)
    at WorkflowExecute.runNode (/usr/local/lib/node_modules/n8n/node_modules/n8n-core/dist/execution-engine/workflow-execute.js:681:27)
    at /usr/local/lib/node_modules/n8n/node_modules/n8n-core/dist/execution-engine/workflow-execute.js:913:51
    at /usr/local/lib/node_modules/n8n/node_modules/n8n-core/dist/execution-engine/workflow-execute.js:1246:20

Information on your n8n setup

  • n8n version: 1.83.2
  • Database (default: SQLite):
  • n8n EXECUTIONS_PROCESS setting (default: own, main):
  • Running n8n via (Docker, npm, n8n cloud, desktop app): Docker Desktop
  • Operating system: macOS

Hi @mailfraud, let’s start by adding your workflow so we can view it. Use the ‘</>’ button and paste it in.

Then, as a first note: since you have both n8n and Crawl4AI in Docker, are they on the same Docker network?
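
If they are not, a minimal sketch of putting two already-running containers on a shared user-defined network looks roughly like this (the container names n8n and crawl4ai are placeholders; check docker ps for your real ones):

# create a shared user-defined network
docker network create my-network
# attach both running containers to it
docker network connect my-network n8n
docker network connect my-network crawl4ai
# verify: both containers should now be listed under "Containers"
docker network inspect my-network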

Hi @Sebas, thanks for your answer!

I corrected the workflow insert (see above). I also updated to the newest version in the meantime, but no change.

You are correct, they were not on the same network, I completely forgot! I created a network for both. Now

docker network inspect my-network

tells me that one container is at 172.21.0.2 and the other at 172.21.0.3. Is that correct?

No change in behavior so far though.

Yes, that is correct; they will each have their own IP.

Now the trick is: since they both live on the same network, you can reference the containers by name.
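
As a rough example (assuming the Crawl4AI container is simply named crawl4ai and the n8n container n8n; the real names show up in docker ps), the URL in the HTTP Request node would take the form http://crawl4ai:11235/crawl, and you can check reachability from inside the n8n container, where wget should be available in the Alpine-based image:

# any HTTP reply here (even 401 or 405) means the container is reachable
# by name; "connection refused" means it is still a network problem
docker exec n8n wget -O- http://crawl4ai:11235/crawl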

I am having the same issue as mailfraud, running n8n & crawl4ai locally in docker. I have put them both on the same network, but they still won’t connect. I am on Windows, but using the same workflow in n8n as shown above.
I have tried the following URLs, but the response is always “the service refused the connection - perhaps it is offline” (h removed from http):
ttp://localhost:11235/
ttp://localhost:11235/crawl/

I also tried using the name instead of “localhost” and get this error: “Method not allowed - please check you are using the right HTTP method”

@LemLearner Welcome on board! Hopefully we can find the problem.

@Sebas Great to hear! How do I reference them correctly?

I used http://crawl4ai_default:11235/crawl but that leads to: The connection cannot be established, this usually occurs due to an incorrect host (domain) value

and also http://crawl4ai:11235/crawl which says: Forbidden - perhaps check your credentials? and when I put in the Bearer token again: Authorization failed - please check your credentials Invalid token

Is that good news? Are we getting close?

I found the solution: when using

docker ps --format "table {{.ID}}\t{{.Names}}\t{{.Ports}}"

I get a list of the running containers, unlike with “docker network ls”, which only lists the existing networks. In that container list there is a container named

crawl4ai-crawl4ai-1

When using this one as the URL

http://crawl4ai-crawl4ai-1:11235/crawl

and entering the following under Header Auth > Value > Expression

Bearer tokenthatyoucreated

I do get a connection! :slight_smile: Bear in mind that a token created with a password manager and containing special characters also does not seem to work; I got an error message for that, so maybe start with a simple word and then ramp up the complexity if necessary.
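
For anyone who wants to double-check the token and endpoint outside of n8n first, a quick sketch of a test from the host (assuming the Crawl4AI container publishes port 11235 and the token is the one you configured for the server) could look like this:

# replace the token with your own; a 200 response means the token and
# endpoint are fine and any remaining problem is on the n8n side
curl -X POST http://localhost:11235/crawl \
  -H "Authorization: Bearer tokenthatyoucreated" \
  -H "Content-Type: application/json" \
  -d '{"urls": "https://ai.pydantic.dev/", "priority": 10}'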

Thanks a lot @Sebas for the right questions! :slight_smile:

