How to bypass Cloudflare security with HTTP Request

Describe the problem/error/question

I am trying to use the HTTP Request node, but the GET call fails due to Cloudflare protection.
How can I bypass this protection?

What is the error message (if any)?

Please share your workflow


Share the output returned by the last node

Information on your n8n setup

  • n8n version:
  • Database (default: SQLite):
  • n8n EXECUTIONS_PROCESS setting (default: own, main):
  • Running n8n via (Docker, npm, n8n cloud, desktop app):
  • Operating system:

Hey @zohar_Lerman, hope all is well.

You do realize that this is exactly why service owners use Cloudflare in the first place, right? :grinning_face_with_smiling_eyes: If they require human verification, they do not wish their services to be exposed to automated systems/crawlers.

I know, but I am sure the smart people here know how to bypass it :slight_smile:

It’s called bot detection, and if we use the HTTP Request node to crawl a website, then we are the bot.

That’s why certain services exist: they bypass bot detection and charge a fee for it.

So if you want to crawl a page protected by bot detection, consider a service like Firecrawl or Crawl4AI.

:white_check_mark: [SOLVED] Use a Dockerized Squid Proxy with n8n to Bypass Cloudflare in the HTTP Request Node

:warning: Problem

When using the HTTP Request node in n8n to access APIs protected by Cloudflare (e.g., AbuseIPDB, HaveIBeenPwned, etc.), requests often fail with:

403 Forbidden / Access Denied / Cloudflare DDoS Challenge

:light_bulb: Solution

You can route your requests through a lightweight, Dockerized Squid proxy server to bypass rate limiting and bot protection.


:wrench: Setup Instructions

1. Create squid.conf

http_port 3128

acl localnet src 172.0.0.0/8
acl localnet src 192.168.0.0/16
acl localnet src 10.0.0.0/8
acl localnet src 172.18.0.0/16
acl localhost src 127.0.0.1/32

acl SSL_ports port 443
acl Safe_ports port 80
acl Safe_ports port 443
acl Safe_ports port 1025-65535

acl CONNECT method CONNECT

http_access deny !Safe_ports
http_access deny CONNECT !SSL_ports

http_access allow localnet
http_access allow localhost

http_access deny all

icp_port 0
access_log /var/log/squid/access.log squid

Save it to:
:file_folder: /root/squid/squid.conf


2. Run Squid in Docker

docker run -d --name squid-proxy \
  -p 3128:3128 \
  -v /root/squid/squid.conf:/etc/squid/squid.conf:ro \
  sameersbn/squid

Confirm it’s running:

docker ps

3. Test Proxy

From the n8n container, test with curl (in my case I’m testing against the Cloudflare-protected AbuseIPDB):

curl -x http://<YOUR-HOST-IP>:3128 https://api.abuseipdb.com

You can also use this in your HTTP Request node by setting the Proxy field:

http://<YOUR-HOST-IP>:3128

:white_check_mark: Tips

  • Make sure your Docker subnet (e.g. 172.18.0.0/16) is included in the ACLs.
  • The host IP can be found via: ip addr show eth0
  • For debugging, check the Squid logs:
docker logs squid-proxy
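Putting the tips together, here is one way to derive the proxy URL for the HTTP Request node on the host (a sketch; `hostname -I` and the `127.0.0.1` fallback are my assumptions and work on most Linux hosts, so adjust for your machine):

```shell
# Grab the host's first IPv4 address; fall back to loopback if the
# command is unavailable or returns nothing (assumption for portability).
HOST_IP=$(hostname -I 2>/dev/null | awk '{print $1}')
HOST_IP=${HOST_IP:-127.0.0.1}

# Build the proxy URL to paste into the HTTP Request node's Proxy field.
PROXY_URL="http://${HOST_IP}:3128"
echo "$PROXY_URL"
```

This just automates the `ip addr show eth0` tip above; hard-coding the IP works equally well.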

:speech_balloon: Community Note

This helped me bypass Cloudflare API blocks on AbuseIPDB in my n8n workflow running on a Hostinger KVM. Let me know if you face any issues!

I found it via googling and a little bit of ChatGPT.

Out of curiosity, I tried routing requests through Squid. In my opinion, Squid alone is not sufficient to overcome comprehensive Cloudflare protection, so this topic caught my eye.

What I did:

  • I copied your squid.conf configuration
    (by the way, 172.18.0.0/16 is a subnet of 172.0.0.0/8, which makes the localnet source of 172.18.0.0/16… redundant?)
  • brought up the squid container within my compose infrastructure
  • ran tests:
    • from n8n cli [1]
    • from HTTP Request node with proxy config set [2]

[1]

/home/node # curl -I -x http://squid-proxy:3128 https://www.abuseipdb.com/
HTTP/1.1 200 Connection established
HTTP/2 403 
date: Mon, 04 Aug 2025 00:59:59 GMT
content-type: text/html; charset=UTF-8
x-frame-options: SAMEORIGIN
referrer-policy: same-origin
cache-control: private, max-age=0, no-store, no-cache, must-revalidate, post-check=0, pre-check=0
expires: Thu, 01 Jan 1970 00:00:01 GMT
report-to: {"endpoints":[{"url":"https:\/\/a.nel.cloudflare.com\/report\/v4?s=ZaKAwCgAmhPjBUenbLzjbY80qndVAsmjJHJthfoD2XH9XtFc4zFdVXIBtdB8RYIUcWtSvo2eupeZxt60yUm8H73DNtwAZmGXcKRTdsISylzZwEUvyWy32U3b2HARi%2F8CjZBm"}],"group":"cf-nel","max_age":604800}
nel: {"success_fraction":0,"report_to":"cf-nel","max_age":604800}
server: cloudflare
cf-ray: 969a1d808b1bac30-YYZ
alt-svc: h3=":443"; ma=86400
server-timing: cfL4;desc="?proto=TCP&rtt=25991&min_rtt=25092&rtt_var=10052&sent=7&recv=7&lost=0&retrans=0&sent_bytes=4497&recv_bytes=1781&delivery_rate=173122&cwnd=253&unsent_bytes=0&cid=50ee1d409c2c7953&ts=49&x=0"

[2]

For both I see requests hitting the squid container and making it into a log:

[1] 1754269869.794    204 172.18.0.4 TCP_TUNNEL/200 5263 CONNECT www.abuseipdb.com:443 - HIER_DIRECT/104.26.12.38 -
[2] 1754269884.381    146 172.18.0.4 TCP_TUNNEL/200 10561 CONNECT www.abuseipdb.com:443 - HIER_DIRECT/104.26.12.38 -

Why I think Squid may not be the best way to go about bypassing Cloudflare protection (not that anyone should, ahem):

  1. To start with, Squid doesn’t offer any IP rotation or residential proxies - the proxy still uses the same public IP as the host machine. If that IP is flagged or rate-limited, nothing changes.
  2. Squid provides no user-agent spoofing, JS challenge solving, or cookie handling - most serious Cloudflare protections (especially for sites behind “I’m Under Attack” mode or using the so-called Bot Fight Mode) require a browser-like client to solve JavaScript challenges and present valid tokens.
  3. If the website/API uses Cloudflare’s Bot Management, then simply going through Squid won’t bypass it - Cloudflare looks at TLS fingerprints, JA3 hashes, behaviour, etc.
  4. If you hit a CAPTCHA or JS challenge, curl (or n8n, for that matter) won’t solve it. Cloudflare will still block you.

I am really interested to know how, and most importantly why, it would work for you though. What do you think could be different in your setup that makes it work?

I think you’re scraping the AbuseIPDB website, but in my case, I was using their official API endpoint directly. (AbuseIPDB APIv2 Documentation)

So just to clarify:

  • You’re hitting:
    https://www.abuseipdb.com/check/... (web interface)
  • I used:
    https://api.abuseipdb.com/api/v2/check?ipAddress=113.30.176.33&maxAgeInDays=90 (API call)
    with header:
    Key: YOUR_API_KEY
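The API call above can be sketched as a curl command (the endpoint and the `Key` header come from the thread; the key itself is a placeholder you must replace with your own):

```shell
# Build the AbuseIPDB APIv2 check URL, parameters as used in this thread.
API_URL="https://api.abuseipdb.com/api/v2/check"
QUERY="ipAddress=113.30.176.33&maxAgeInDays=90"
REQUEST_URL="${API_URL}?${QUERY}"
echo "$REQUEST_URL"

# With a real key, the actual call would be (uncomment to run):
# curl -s "$REQUEST_URL" -H "Key: YOUR_API_KEY" -H "Accept: application/json"
```

Since this hits api.abuseipdb.com directly rather than the web interface, no proxy trickery is involved.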

If you’re scraping and looking to bypass Cloudflare protection, I’d suggest using ScraperAPI or a similar service.
I tested ScraperAPI and it worked fine — even with Squid proxy running.

Test setup:

  • In the proxy field, instead of using Docker’s internal hostname like squid-proxy,
    I pointed it to my public IP and Squid port. (In my case, hosted on a Hostinger KVM 2.)
  • ScraperAPI handled all JS/cookie challenges, and I got a valid HTML response.
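For reference, a minimal sketch of a ScraperAPI-style request, assuming their documented api.scraperapi.com endpoint with `api_key` and `url` query parameters (the key is a placeholder, and in real use the `url` value should be URL-encoded):

```shell
# Target page behind Cloudflare (example from this thread).
TARGET_URL="https://www.abuseipdb.com/check/113.30.176.33"

# ScraperAPI fetches the target in a browser-like context on their side,
# so JS challenges and cookies are handled for you.
SCRAPER_URL="http://api.scraperapi.com/?api_key=YOUR_SCRAPERAPI_KEY&url=${TARGET_URL}"
echo "$SCRAPER_URL"

# With a real key (uncomment to run):
# curl -s "$SCRAPER_URL" -o page.html
```

This is the piece Squid cannot provide on its own: the challenge-solving happens at the scraping service, not at the proxy.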


:white_check_mark: Conclusion:
If you’re making official API calls, use AbuseIPDB’s API directly — it’s clean and reliable.
If you’re scraping web pages behind Cloudflare, Squid alone isn’t enough — but combining it with ScraperAPI works well.

Fair enough. I changed my setup to be more similar to yours (even though the API endpoints are protected by the same Cloudflare as the web pages, and Cloudflare protection was supposedly bypassed by using Squid).

I changed the requests from the direct website URL to the API URL with the key in the header. I tried running the workflow without Squid and didn’t have a problem. By now I have sent over 200 requests for different IPs and got every single response back.

At this point I have everything working, with and without Squid, when I use the API instead of hitting the web pages directly.


I actually got stuck at this phase for the past 2 days. I came across your question while googling, and once I figured it out after trying everything, I came back to share the solution here.

Nice! Glad it worked for you too!!