I’m building a complex community node (a web crawler) and want to split its complexity across two kinds of nodes: a crawler node and rendering-engine sub-nodes for the actual fetching (e.g., an HTTP node, a web scraping API, etc.).
I’ve already built a first working version, but it’s a single monolithic node: users wait 10 minutes while it runs, and then it outputs one massive JSON blob containing all the crawled pages.
Now, I want to improve the user experience. The ideal UI would be similar to n8n’s AI agent node and its “Tools” feature, which provides a great execution logs UI where users can click to see the chain of actions.
So, my question is:
Can I use the same pattern (“special inputs”) for a web-crawling node, or is the cluster-node concept so tightly coupled to AI agents that it’s difficult to repurpose for web crawling?
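For context, here is a minimal sketch of the kind of node description I have in mind. The types are stubbed locally so the snippet stands alone; in a real community node you would import `INodeTypeDescription` and `NodeConnectionType` from `n8n-workflow`. The `'ai_tool'` connection type is borrowed from the AI Agent / Tools pattern; as far as I can tell, n8n only defines connection types for its AI ecosystem, and I don’t know of a generic custom type, which is exactly what my question is about. The node and field names below are hypothetical.

```typescript
// Local stand-ins for n8n-workflow types, so this sketch is self-contained.
type ConnectionType = 'main' | 'ai_tool';

interface NodeInput {
  type: ConnectionType;
  displayName?: string;
  required?: boolean;
}

// Simplified stand-in for n8n's INodeTypeDescription.
interface NodeDescriptionSketch {
  displayName: string;
  name: string;
  inputs: Array<ConnectionType | NodeInput>;
  outputs: ConnectionType[];
}

const crawlerNodeDescription: NodeDescriptionSketch = {
  displayName: 'Web Crawler',        // hypothetical node
  name: 'webCrawler',
  // One regular data input, plus a "special input" slot where
  // rendering-engine sub-nodes (HTTP fetcher, scraping API, ...)
  // would attach -- mirroring how the AI Agent node accepts Tool
  // sub-nodes via the 'ai_tool' connection type.
  inputs: [
    'main',
    {
      type: 'ai_tool', // repurposed AI connection type; see question above
      displayName: 'Rendering Engine',
      required: true,
    },
  ],
  outputs: ['main'],
};

console.log(crawlerNodeDescription.inputs.length); // 2 input slots
```

The idea is that each rendering engine would be its own small sub-node, and the crawler node would call whichever one is plugged into its special input, the same way an AI Agent calls its Tools.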