Processing larger datasets in n8n can indeed eat up a lot more memory than the dataset size itself. The exact memory usage doesn't just depend on the data, but also on the type and number of nodes in your workflow, for example.
As such it’s very hard to say what exactly is killing your pod. If you suspect it could be k8s-specific behaviour, perhaps you want to simply test the problematic execution outside of your cluster, using a throwaway instance started with something like `docker run -it --rm --name n8n -p 5678:5678 --memory=15360m n8nio/n8n:1.17.1` (note that `--memory` needs a unit suffix such as `m`, otherwise Docker interprets the value as bytes).
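For readability, here is the same command as a multi-line sketch. The `15360m` (roughly 15 GiB) limit is an assumption meant to mirror a typical pod memory limit, so adjust it to whatever your pod actually gets:

```bash
# Throwaway n8n test instance with a hard memory cap,
# roughly mirroring the pod's memory limit (15360m is an assumed value)
docker run -it --rm \
  --name n8n \
  -p 5678:5678 \
  --memory=15360m \
  n8nio/n8n:1.17.1
```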
It might be worth keeping `docker stats` open in a second terminal in parallel to get an idea of the memory consumption during your workflow execution.
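For example, something like the below would show a live memory readout for just the test container (the container name `n8n` assumes the `--name n8n` from the command above):

```bash
# Watch memory usage of the n8n test container while the workflow runs
docker stats --format "table {{.Name}}\t{{.MemUsage}}\t{{.MemPerc}}" n8n
```

If the reported usage climbs towards the `--memory` limit right before the workflow fails, that would point at memory pressure rather than anything k8s-specific.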