ScrapeNinja timing out after 1000ms

I’m using an HTTP Request node to call the ScrapeNinja /scrape endpoint. In
extractor.error I’m getting: ‘Error: Error: Script execution timed out after 1000ms’

It works for other pages, so I guess this particular one is timing out because the page is larger. Is there any way to increase that timeout via config?

Information on your n8n setup

  • n8n version: 1.79.3
  • Database (default: SQLite):
  • n8n EXECUTIONS_PROCESS setting (default: own, main):
  • Running n8n via (Docker, npm, n8n cloud, desktop app): cloud
  • Operating system:

It doesn’t show up by default, but you can add a Timeout option to the HTTP Request node (under Options at the bottom) to set how long n8n waits for the response.
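
For reference, this is roughly what that looks like if you export the workflow as JSON. This is a sketch from memory rather than an exact export, so verify the field names against your own workflow; the URL is just a placeholder for whatever ScrapeNinja endpoint you already call, and the timeout is in milliseconds:

```jsonc
{
  "parameters": {
    "method": "POST",
    // placeholder — use the ScrapeNinja URL you already have configured
    "url": "https://scrapeninja.p.rapidapi.com/scrape",
    "options": {
      // how long the HTTP Request node waits for a response, in ms
      "timeout": 30000
    }
  },
  "type": "n8n-nodes-base.httpRequest"
}
```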

That assumes the issue is n8n not waiting long enough for ScrapeNinja to respond. I’m not sure that’s the issue, though. It looks like ScrapeNinja is telling you that it timed out on something it was doing on its own end (presumably your extractor script, given the error shows up in extractor.error). If that’s the case, you might be able to send it a request parameter (if the service works that way) to allow more time.
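
If so, the fix would be a field in the JSON body you send to /scrape rather than an n8n setting. Purely as an illustration, it might look something like the sketch below; the extractorTimeout name is a guess, not a documented parameter, so check the ScrapeNinja docs for whether anything like it actually exists:

```jsonc
{
  // the page being scraped
  "url": "https://example.com/some-large-page",
  // your existing extractor function, passed as a string
  "extractor": "function extract(input, cheerio) { /* ... */ }",
  // HYPOTHETICAL field name — verify against ScrapeNinja's docs before relying on it
  "extractorTimeout": 10000
}
```

If ScrapeNinja doesn’t expose a parameter like that, slimming down what the extractor does for that page may be the only way to stay under its 1000ms limit.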

Also, by the way, it looks like there might be a ScrapeNinja node available soon. If you were running self-hosted, you could go ahead and try the community node, but that’s probably not an option on n8n Cloud quite yet.


Nice, I’ll have to try this :slight_smile: Thanks!