Running out of memory

Hi. Is there any way to improve this workflow? I thought that by using the Split In Batches node, the JSON data from the HTTP Request node would automatically be discarded after writing it to NocoDB, but apparently not: my workflow stops midway with an "out of memory" error.

Here are my Docker environment variables:

    environment:
      - DB_TYPE=postgresdb
      - DB_POSTGRESDB_HOST=postgres
      - DB_POSTGRESDB_PORT=5432
      - DB_POSTGRESDB_DATABASE=${POSTGRES_DB}
      - DB_POSTGRESDB_USER=${POSTGRES_NON_ROOT_USER}
      - DB_POSTGRESDB_PASSWORD=${POSTGRES_NON_ROOT_PASSWORD}
      - N8N_DEFAULT_BINARY_DATA_MODE=filesystem
      - EXECUTIONS_PROCESS=main
      - NODE_OPTIONS=--max_old_space_size=8000
      - N8N_AVAILABLE_BINARY_DATA_MODES=filesystem

Here is my workflow.

Sub-workflows might be a way to do this, but I don't quite understand how to go about implementing it here.

No, nothing will be discarded until the workflow finishes. What you would have to do is run the loop part (Loop Over Items to NocoDB1) in a sub-workflow and make sure not to return any data to the main workflow. That will keep the memory consumption low.
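
As a minimal sketch of that idea (the node names and the exact split point are assumptions, not taken from your workflow): move the Loop Over Items, HTTP Request, and NocoDB1 nodes into a sub-workflow, call it from the main workflow with an Execute Workflow node, and end the sub-workflow with a Code node that drops the batch data before it is handed back:

    // Last node of the sub-workflow: a Code node in "Run Once for All Items" mode.
    // By this point the batch has already been written to NocoDB, so instead of
    // passing the fetched JSON along, return only a tiny placeholder item.
    return [{ json: { done: true } }];

That way the main workflow only ever accumulates one small placeholder item per batch instead of the full HTTP responses.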

Hi. Isn't the main issue the data being stored in the HTTP Request node?

What node can I use to clear the data and free the memory, while still feeding a new stream of data into the HTTP Request node instead of looping over the same 3 items over and over again?

Also, what would the end setup look like? This is what I'm currently trying to do, but I'm getting the error "Node nocodb doesnt exist".