I have a workflow that queries 3 Notion DBs for new pages every 2 minutes. It's been working for several months with no problems; a few days ago it started to fail with a rate limit error ("The service is receiving too many requests from you.
You have been rate limited. Please try again in a few minutes").
I disabled all other Notion integrations and left only n8n,
disabled all workflows in n8n, waited for an hour, then 24 hours,
updated n8n to the latest release (self-hosted 2.9.2),
restarted the Docker container several times,
and tried changing the 3 Notion triggers to execute on different minutes with cron (didn't help).
I managed several times to execute one of these nodes manually and it worked; I published the workflow, it worked once, and a minute later I see a new execution with "starting soon" & "error".
I don't think the problem is with Notion. I have a feeling that some schedule from a previous workflow is still running in the background somehow and hammering Notion, but I couldn't find anything in the logs.
Thanks
Output returned by the last node:
{
  "errorMessage": "The service is receiving too many requests from you",
  "errorDescription": "You have been rate limited. Please try again in a few minutes.",
  "errorDetails": {
    "rawErrorMessage": [
      "429 - {\"object\":\"error\",\"status\":429,\"code\":\"rate_limited\",\"message\":\"You have been rate limited. Please try again in a few minutes.\",\"request_id\":\"595e0071-a380-4c95-ad2e-90a8f4b6ee7a\"}"
    ],
    "httpCode": "429"
  },
  "n8nDetails": {
    "nodeName": "check Notion Read DB",
    "nodeType": "n8n-nodes-base.notionTrigger",
    "nodeVersion": 1,
    "time": "2/24/2026, 6:28:53 PM",
    "n8nVersion": "2.9.2 (Self Hosted)",
    "binaryDataMode": "filesystem",
    "stackTrace": [
      "NodeApiError: The service is receiving too many requests from you",
      "    at PollContext.requestWithAuthentication (/usr/local/lib/node_modules/n8n/node_modules/.pnpm/n8n-core@file+packages+core_@[email protected]_@opentelemetry+exporter-trace-otlp_9f358c3eeaef0d2736f54ac9757ada43/node_modules/n8n-core/src/execution-engine/node-execution-context/utils/request-helper-functions.ts:1550:10)",
      "    at processTicksAndRejections (node:internal/process/task_queues:103:5)",
      "    at PollContext.requestWithAuthentication (/usr/local/lib/node_modules/n8n/node_modules/.pnpm/n8n-core@file+packages+core_@[email protected]_@opentelemetry+exporter-trace-otlp_9f358c3eeaef0d2736f54ac9757ada43/node_modules/n8n-core/src/execution-engine/node-execution-context/utils/request-helper-functions.ts:1850:11)",
      "    at PollContext.notionApiRequest (/usr/local/lib/node_modules/n8n/node_modules/.pnpm/n8n-nodes-base@file+packages+nodes-base_@[email protected]_asn1.js@5_8da18263ca0574b0db58d4fefd8173ce/node_modules/n8n-nodes-base/nodes/Notion/shared/GenericFunctions.ts:74:11)",
      "    at PollContext.poll (/usr/local/lib/node_modules/n8n/node_modules/.pnpm/n8n-nodes-base@file+packages+nodes-base_@[email protected]_asn1.js@5_8da18263ca0574b0db58d4fefd8173ce/node_modules/n8n-nodes-base/nodes/Notion/NotionTrigger.node.ts:198:27)",
      "    at WorkflowExecute.executePollNode (/usr/local/lib/node_modules/n8n/node_modules/.pnpm/n8n-core@file+packages+core_@[email protected]_@opentelemetry+exporter-trace-otlp_9f358c3eeaef0d2736f54ac9757ada43/node_modules/n8n-core/src/execution-engine/workflow-execute.ts:1089:19)",
      "    at WorkflowExecute.runNode (/usr/local/lib/node_modules/n8n/node_modules/.pnpm/n8n-core@file+packages+core_@[email protected]_@opentelemetry+exporter-trace-otlp_9f358c3eeaef0d2736f54ac9757ada43/node_modules/n8n-core/src/execution-engine/workflow-execute.ts:1240:11)",
      "    at /usr/local/lib/node_modules/n8n/node_modules/.pnpm/n8n-core@file+packages+core_@[email protected]_@opentelemetry+exporter-trace-otlp_9f358c3eeaef0d2736f54ac9757ada43/node_modules/n8n-core/src/execution-engine/workflow-execute.ts:1659:27",
      "    at /usr/local/lib/node_modules/n8n/node_modules/.pnpm/n8n-core@file+packages+core_@[email protected]_@opentelemetry+exporter-trace-otlp_9f358c3eeaef0d2736f54ac9757ada43/node_modules/n8n-core/src/execution-engine/workflow-execute.ts:2302:11"
    ]
  }
}
Hitting rate limits is normal, but not in production. Consider adding Wait nodes of at least 0.5-1 s before calling the Notion tools, and then call the tool. If the rate limit issue persists, revoke your credentials in Notion and create fresh ones.
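For anyone handling this outside the built-in nodes (e.g. in a Code node or an external script), a minimal sketch of the wait-and-retry idea might look like the following. The `call` function is a hypothetical stand-in for whatever makes the Notion request; this is not n8n's internal retry logic, just an illustration of honoring `Retry-After` with exponential backoff:

```python
import random
import time

def backoff_delay(attempt, retry_after=None, base=0.5, cap=30.0):
    """Seconds to wait before retrying a 429: honor the server's
    Retry-After header if present, otherwise use exponential backoff
    with a little jitter."""
    if retry_after is not None:
        return float(retry_after)
    return min(cap, base * (2 ** attempt)) + random.uniform(0, 0.1)

def call_with_retry(call, max_attempts=5):
    """call() must return (status_code, retry_after_or_None, body)."""
    for attempt in range(max_attempts):
        status, retry_after, body = call()
        if status != 429:
            return body
        time.sleep(backoff_delay(attempt, retry_after))
    raise RuntimeError("still rate limited after %d attempts" % max_attempts)
```

The jitter keeps several workflows that start on the same cron tick from retrying in lockstep.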
The Notion trigger is the first node in my workflow; will adding a Wait before it have any effect?
I did, of course, create a new integration and a new key, and it didn't help.
Notion allows 3 calls per second. n8n is now the only integration and all other workflows are disabled: only one workflow with one Notion trigger (I disabled the other two for testing). It works for one run, and on the next trigger (3 minutes later) it fails, as if something is running behind the scenes that I cannot see.
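For scale, the polling load described here is nowhere near Notion's documented average limit of ~3 requests per second per integration. A back-of-the-envelope check, assuming one database query per poll:

```python
# Notion's documented average limit: ~3 requests/second per integration.
LIMIT_PER_SEC = 3

# Three triggers polling every 2 minutes, one database query each:
polls_per_sec = 3 / 120          # 0.025 req/s

# Headroom factor: how many times below the limit the polling sits.
headroom = LIMIT_PER_SEC / polls_per_sec

print(polls_per_sec, headroom)   # 0.025 120.0
```

So steady-state polling alone is ~120x below the limit; a 429 at this rate suggests either burst accounting on Notion's side or some other consumer sharing the same integration token.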
@hasamba If Notion is your trigger node and even that fails after the 2nd run, I guess the problem is related to Notion. Have you tried calling your API with a service like Postman? I mean just to test whether this behavior happens only in n8n or is a Notion-side problem.
I didn't try with Postman, but even with n8n it worked every now and then (can't tell when), and a few minutes later it didn't. I'm guessing it will work with Postman…
@hasamba Yeah, first get an idea of where the problem is: is it the Notion API or n8n? It sounds like n8n, but as you have suggested it is not, so consider testing it with a service like Postman or just use what I use daily:
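If Postman isn't handy, a small script can run the same database query the Notion trigger polls with. This is a sketch using the real Notion endpoint (`POST /v1/databases/{id}/query` with a `Notion-Version` header); the `NOTION_TOKEN` and `NOTION_DB_ID` environment variable names are just this example's convention, and nothing is sent unless they are set:

```python
import json
import os
import urllib.error
import urllib.request

def build_query_request(token, database_id, notion_version="2022-06-28"):
    """Build a one-page database query against the Notion API,
    equivalent to what the Notion trigger polls."""
    return urllib.request.Request(
        f"https://api.notion.com/v1/databases/{database_id}/query",
        data=json.dumps({"page_size": 1}).encode(),
        headers={
            "Authorization": f"Bearer {token}",
            "Notion-Version": notion_version,
            "Content-Type": "application/json",
        },
        method="POST",
    )

# Only fires a real request when credentials are provided via the
# environment; otherwise the script is inert.
if os.environ.get("NOTION_TOKEN") and os.environ.get("NOTION_DB_ID"):
    req = build_query_request(os.environ["NOTION_TOKEN"],
                              os.environ["NOTION_DB_ID"])
    try:
        with urllib.request.urlopen(req) as resp:
            print(resp.status)
    except urllib.error.HTTPError as e:
        # A 429 here, outside n8n, would point at Notion-side limiting.
        print(e.code, e.read().decode())
```

Running it on the same cadence as the trigger (every 2-3 minutes) would show whether the 429s follow the token or follow n8n.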
Hi @hasamba, welcome to the n8n community!
A workaround that might help is to duplicate the workflow, activate the new copy, and deactivate/delete the old one. This forces n8n to create a fresh scheduler reference and breaks the corrupted loop, if one exists.
I thought so too, but I'm guessing that if it were, I'd find other people complaining about it (which I didn't)… I'll try re-enabling my other Notion integrations to see if I have a problem with those.
I am having the same problem with Notion: a workflow that was rock solid now fails with 429 even when it's the first and only API request in more than 24 hours. There is no rhyme or reason to when it does and doesn't work. I assume it is in fact a Notion problem, but I sure can't figure out what. If I discover something, I'll let you know.
Just FYI: Notion says they are receiving similar reports from other customers. I have provided more details, but I don't know if/when to expect a fix. I would recommend reaching out to Notion support so that they have another case to examine; it's also possible that the fixes will be per-user rather than system-wide, so it would be a good idea to get yourself added to the list.
On the local n8n + webhooks limitation — there are actually a few ways to get webhooks working even when your instance is not publicly reachable:
1. Cloudflare Tunnel (free, most reliable)
Run cloudflared tunnel --url http://localhost:5678 and you get a public HTTPS URL that forwards to your local n8n. You can make it permanent with a named tunnel so the URL stays consistent between restarts. This is probably the cleanest option for a home setup.
2. ngrok
ngrok http 5678 gives you a public URL instantly. The free plan rotates URLs on restart, but the paid plan ($8/mo) gives you a stable subdomain. Works great for testing.
3. Tailscale + Funnel
If you use Tailscale, tailscale funnel 5678 exposes your local n8n at a stable *.ts.net URL publicly. Very low overhead.
4. Self-hosted reverse proxy on a cheap VPS
Run nginx or Caddy on a $5/mo VPS with port forwarding back to your home n8n via SSH tunnel. More complex but gives you full control.
For polling workflows like your Notion queries, the current approach (n8n polling every few minutes) is totally valid and may actually be more reliable for local setups since you do not need an always-on public URL. The webhook approach is better when you need near-real-time triggers and can tolerate the infrastructure overhead.
Given Notion confirmed they fixed the rate limit issue, I would probably stay with polling unless latency becomes a problem.