To check whether a specific website appears on Google for a large dataset of roughly 220,000 keywords/URLs safely in n8n, the best workflow architecture is to process the data in batches rather than all at once. Use the SplitInBatches node to divide your dataset into smaller chunks (e.g., 100-500 keywords at a time), then process each batch sequentially or with controlled parallelism. Trigger each subsequent batch via external storage or an event (e.g., once one batch completes, kick off the next). This multi-stage approach avoids "Maximum call stack size exceeded" errors and keeps resource usage manageable.
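As a rough illustration of the chunking logic, a Code node placed before the loop could tag each incoming item with a batch index so that a downstream SplitInBatches (Loop Over Items) node or an IF node can gate the flow. This is only a sketch; the batch size of 200 is an arbitrary example value, and SplitInBatches can of course do the batching natively:

```javascript
// n8n Code node, mode "Run Once for All Items".
// Sketch only: tags each keyword item with the batch it belongs to.
const BATCH_SIZE = 200; // illustrative value, tune to your instance

const items = $input.all();

return items.map((item, i) => ({
  json: {
    ...item.json,
    batchIndex: Math.floor(i / BATCH_SIZE), // 0, 1, 2, ...
  },
}));
```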
For querying Google search results, there is no official Google Search API offering free access to organic results anymore. Instead, you can:
- Use third-party APIs like SerpApi or Zenserp that provide Google Search results programmatically via REST. You send a keyword, retrieve the top search results, and parse whether your domain appears among them.
- In the API response, check the URL domains of the organic results to see if your website is listed (a parsing sketch follows this list).
- Integrate these API calls using n8n's HTTP Request node, handling pagination and rate limits carefully.
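For example, assuming a SerpApi-style response shape (an `organic_results` array whose entries carry `link` and `position` fields, plus a `search_parameters.q` echo of the query), a Code node could flag matches like this; `MY_DOMAIN` is a placeholder:

```javascript
// n8n Code node, mode "Run Once for All Items".
// Sketch: checks each SerpApi-style response for the target domain.
const MY_DOMAIN = 'example.com'; // placeholder - your domain here

return $input.all().map((item) => {
  const results = item.json.organic_results ?? [];

  // Match the domain itself or any of its subdomains.
  const match = results.find((r) => {
    try {
      const host = new URL(r.link).hostname.replace(/^www\./, '');
      return host === MY_DOMAIN || host.endsWith('.' + MY_DOMAIN);
    } catch {
      return false; // skip malformed URLs
    }
  });

  return {
    json: {
      keyword: item.json.search_parameters?.q,
      found: Boolean(match),
      position: match?.position ?? null,
      matchedUrl: match?.link ?? null,
    },
  };
});
```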
Combining these ideas, you can:
1. Store your 220,000 keywords in Google Sheets or Supabase.
2. Use a SplitInBatches node in n8n to fetch manageable keyword chunks.
3. For each keyword, call a Google Search API like SerpApi to get the top results (a request sketch follows this list).
4. Check if your domain is in those results and log/store the outcome.
5. Repeat until all keywords are processed.
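The per-keyword call in step 3 would normally live in an HTTP Request node with stored credentials; purely to show the request shape, here is the same call from a Code node using n8n's built-in `this.helpers.httpRequest` helper. The endpoint and query parameters follow SerpApi's Google engine; `YOUR_SERPAPI_KEY` and the `keyword` field name are placeholders:

```javascript
// n8n Code node, mode "Run Once for Each Item".
// Sketch: fetches SerpApi results for the current keyword.
const response = await this.helpers.httpRequest({
  method: 'GET',
  url: 'https://serpapi.com/search.json',
  qs: {
    engine: 'google',
    q: $json.keyword,            // current item's keyword field
    num: 10,                     // top 10 organic results
    api_key: 'YOUR_SERPAPI_KEY', // placeholder credential
  },
});

return { json: response };
```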
This design balances scale against resource constraints and keeps the Google lookups API-based, so you avoid scraping and terms-of-service issues. Implement delays or rate-limit handling between batches to stay within your API quota. Overall, the approach is practical, scalable, and integrates well into n8n automation environments.
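For the pacing, n8n's Wait node is the no-code option; the equivalent inside a Code node in the batch loop is a simple sleep. The 2,000 ms interval is an arbitrary example, to be tuned to your API plan:

```javascript
// n8n Code node, mode "Run Once for All Items", placed inside the loop.
// Sketch: pauses between batches to respect the search API's rate limit.
const DELAY_MS = 2000; // illustrative pause, tune to your quota

await new Promise((resolve) => setTimeout(resolve, DELAY_MS));

return $input.all(); // pass items through unchanged
```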
PS - If this answer satisfied you, please like it and mark it as the solution.