Item linking in the Code node

Describe the issue/error/question

What is the error message (if any)?

Hello everyone, I am an n8n novice, and I ran into a small problem while setting up automatic parsing of an RSS link list. Please help.

There is a link I need to parse, https://cdn.jsdelivr.net/gh/feeddd/feeds/feeds_all_rss.txt, which contains many independent RSS links. I need to fetch this aggregated list first, extract each individual RSS link, and then use each link as input for the RSS Feed Read node to continue parsing.

The problem is that the content I parse from the aggregated link contains multiple links, but when it enters the Code node as a single item, the output is also a single item. I know the Code node has an item linking mechanism, but I don’t know how to make good use of it. Any advice would be appreciated, thank you.

Please share the workflow

Share the output returned by the last node

ERROR: Unknown top-level item key: 0 [item 0]

Information on your n8n setup

  • n8n version: 0.202.1
  • Database you’re using (default: SQLite): None
  • Running n8n with the execution process [own(default), main]: default
  • Running n8n via [Docker, npm, n8n.cloud, desktop app]: Docker

Hi @soapffz, welcome to the community :tada:

I am not fully sure where item linking would come into play here, to be honest. Do you simply want to split up the data from your HTTP request and then process each feed URL individually? This is a bit tricky because the RSS Read node unfortunately behaves atypically: it only runs for a single item, not for all items you send to it (unlike most other n8n nodes).

You could however use the Code node in “Run Once for All Items” mode and then loop through each result like so:
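Here is a minimal sketch of what such a “Split out feeds” Code node could look like. It assumes the HTTP Request node returns the raw text of the file in a field called `data`; adjust the field name to whatever your HTTP Request node actually outputs:

```javascript
// "Split out feeds" Code node, mode: Run Once for All Items
// Assumption: the HTTP Request node returns the raw txt content in the "data" field.
const text = $input.first().json.data;

// One feed URL per line; drop empty lines and anything that is not a URL.
const urls = text
  .split('\n')
  .map(line => line.trim())
  .filter(line => line.startsWith('http'));

// Wrap each URL in the { json: ... } item structure n8n expects.
const results = urls.map(url => ({ json: { url } }));

// Limited to 5 items to keep this example light on memory.
return results.slice(0, 5);

// Un-comment this (and remove the line above) to return all feeds:
// return results;
```

Note that each returned element needs to be wrapped in a `{ json: ... }` object; errors such as the “Unknown top-level item key” one you are seeing usually mean the returned data is not in this shape.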

Be careful when executing this, though: your txt file includes a huge number of items, which will presumably require a lot of RAM. To make the example easier to run I’ve limited the number of results returned by the Split out feeds node to 5 (simply un-comment the code line returning all items if needed).

If you run out of memory you might need to consider a different approach, perhaps processing the individual feeds in a sub-workflow without returning the results for each feed to the main workflow (so the memory required by each sub-workflow execution becomes available again once it finishes).
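As a rough sketch of that idea, the last Code node in such a sub-workflow could return only a tiny summary instead of the full feed contents (the field name below is just a placeholder, not anything the RSS node produces):

```javascript
// Final Code node of the sub-workflow, mode: Run Once for All Items
// Return only a small summary so the full feed data is not passed back to
// (and kept in memory by) the parent workflow.
return [
  {
    json: {
      itemCount: $input.all().length, // number of feed entries processed in this run
    },
  },
];
```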