Hi. Is there any way to improve this workflow? I thought that by using the Split In Batches node, the JSON data from the HTTP Request node would automatically be discarded after it is written to NocoDB, but apparently not: my workflow stops midway with a "ran out of memory" error.
Here are my Docker environment variables:
environment:
- DB_TYPE=postgresdb
- DB_POSTGRESDB_HOST=postgres
- DB_POSTGRESDB_PORT=5432
- DB_POSTGRESDB_DATABASE=${POSTGRES_DB}
- DB_POSTGRESDB_USER=${POSTGRES_NON_ROOT_USER}
- DB_POSTGRESDB_PASSWORD=${POSTGRES_NON_ROOT_PASSWORD}
- N8N_DEFAULT_BINARY_DATA_MODE=filesystem
- EXECUTIONS_PROCESS=main
- NODE_OPTIONS=--max_old_space_size=8000
- N8N_AVAILABLE_BINARY_DATA_MODES=filesystem
Here is my workflow.
Sub-workflows might be a way to do this, but I don't quite understand how to go about implementing them here.
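For reference, this is roughly what I imagine the sub-workflow would contain, based on what I've read about the Execute Workflow Trigger node. It's just a sketch: the URL expression, typeVersions, and NocoDB parameters are placeholders, not my actual settings.

{
  "name": "Write one batch to NocoDB (sub-workflow sketch)",
  "nodes": [
    {
      "parameters": {},
      "name": "Execute Workflow Trigger",
      "type": "n8n-nodes-base.executeWorkflowTrigger",
      "typeVersion": 1,
      "position": [0, 0]
    },
    {
      "parameters": {
        "url": "={{ $json.url }}"
      },
      "name": "HTTP Request",
      "type": "n8n-nodes-base.httpRequest",
      "typeVersion": 4,
      "position": [220, 0]
    },
    {
      "parameters": {
        "operation": "create"
      },
      "name": "NocoDB",
      "type": "n8n-nodes-base.nocoDb",
      "typeVersion": 3,
      "position": [440, 0]
    }
  ],
  "connections": {
    "Execute Workflow Trigger": {
      "main": [[{ "node": "HTTP Request", "type": "main", "index": 0 }]]
    },
    "HTTP Request": {
      "main": [[{ "node": "NocoDB", "type": "main", "index": 0 }]]
    }
  }
}

My understanding is that I would then call this from the main loop with an Execute Workflow node placed after the Split In Batches node, so that each batch runs in its own execution and its data can be released once that execution finishes. I'm not sure whether that is the right way to wire it up, though.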