Is this GPT-to-Shopify Metafield Workflow Reliable for High Volume (up to 1,000 Products)?

Hello,

I am currently building an automation workflow in n8n that connects Shopify and a GPT-based language model (via OpenRouter), and I would like to confirm whether this approach is technically reliable and scalable—particularly when processing a large number of products (up to 1,000).


Objective:

The goal is to automatically process all products from a Shopify store and extract structured information from the unstructured product description field (body_html) using GPT. The extracted values will then be written back to each product as Shopify Metafields.


The workflow does the following (rough sketches for steps 2, 5, and 6 follow the list):

  1. Retrieve product data from Shopify (including id and body_html).
  2. Clean the HTML content to obtain plain text (via a Function Node).
  3. Send each description to a GPT model (via the OpenRouter Chat Model integrated with n8n’s AI Agent).
  4. Use a fixed prompt to extract the following fields as JSON:
  • produktform
  • werbefläche
  • transportvolumen
  • einsatzbereiche
  5. Parse the JSON response with a JSON Parser Node.
  6. Use an HTTP Request node to write each extracted field back to the corresponding Shopify product as Metafields.
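For illustration, the cleaning in step 2 could look roughly like this in an n8n Code/Function node set to run once for all items; the regex-based stripping and the output field name description_text are just one possible approach:

```javascript
// Code node ("Run Once for All Items"): strip HTML tags and collapse whitespace
// so the model receives plain text instead of raw body_html.
return $input.all().map((item) => {
  const html = item.json.body_html || '';
  const text = html
    .replace(/<style[\s\S]*?<\/style>/gi, ' ')   // drop embedded CSS
    .replace(/<script[\s\S]*?<\/script>/gi, ' ') // drop embedded scripts
    .replace(/<[^>]+>/g, ' ')                    // strip remaining tags
    .replace(/&nbsp;/g, ' ')                     // common HTML entity
    .replace(/\s+/g, ' ')                        // collapse whitespace
    .trim();
  return { json: { id: item.json.id, description_text: text } };
});
```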
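Step 5 assumes the model returns clean JSON. Because the reply sometimes arrives wrapped in a markdown fence, a defensive variant of the parsing could run in a Code node; the input property output and the carried-over id are assumptions about how the data reaches that node:

```javascript
// Code node: defensively parse the model's answer into the four target fields.
// Assumes the AI Agent's text reply is available as item.json.output.
return $input.all().map((item) => {
  const raw = String(item.json.output || '');
  const cleaned = raw.replace(/`{3}(?:json)?/gi, '').trim(); // remove a possible code fence
  let parsed;
  try {
    parsed = JSON.parse(cleaned);
  } catch (e) {
    // Keep the product id so failed items can be logged or retried later.
    return { json: { id: item.json.id, parse_error: true, raw } };
  }
  return {
    json: {
      id: item.json.id,
      produktform: parsed.produktform || '',
      werbeflaeche: parsed['werbefläche'] || parsed.werbeflaeche || '',
      transportvolumen: parsed.transportvolumen || '',
      einsatzbereiche: parsed.einsatzbereiche || '',
    },
  };
});
```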
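For step 6, the Shopify REST Admin API creates product metafields with one POST per metafield to /admin/api/&lt;version&gt;/products/{product_id}/metafields.json and a body of the form {"metafield": {namespace, key, type, value}}. One way to prepare this is a Code node that expands each product into one item per field for the HTTP Request node; the namespace, keys, and metafield type below are examples:

```javascript
// Code node: expand each parsed product into one item per metafield,
// ready for the HTTP Request node to POST to the Shopify REST Admin API.
const fields = ['produktform', 'werbeflaeche', 'transportvolumen', 'einsatzbereiche'];

return $input.all().flatMap((item) =>
  fields
    .filter((key) => item.json[key])            // skip empty or missing values
    .map((key) => ({
      json: {
        product_id: item.json.id,
        metafield: {
          namespace: 'custom',                  // example namespace
          key,                                  // e.g. "produktform"
          value: String(item.json[key]),
          type: 'single_line_text_field',       // adjust per field if needed
        },
      },
    }))
);
```

The HTTP Request node can then build the URL from {{ $json.product_id }} and send the metafield object as the JSON body.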

Open Questions:

I would like to know if this setup is stable when scaled to high volumes (e.g., 500 to 1,000 products).

Environment:

  • Shopify REST API
  • OpenRouter Chat Model (GPT-compatible)
  • n8n Cloud

Hi @marcel-new

1,000 is a small number of rows, but n8n has memory restrictions.

I recommend doing it in phases. I mean:

  • create a workflow that runs every hour.
  • check how many rows from Shopify have already been processed (start from row 1 if it has never run before).
  • process 100 rows.
  • save the row to start from in the next execution (101, and so on); a sketch of this bookkeeping follows below.

This way you can:

  • limit memory usage
  • track progress
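
A minimal sketch of that bookkeeping, assuming a Code node right after the Schedule Trigger and n8n's workflow static data (which persists between production executions but not manual test runs); the batch size of 100 and the key name lastProcessedRow are examples:

```javascript
// Code node after the Schedule Trigger: read the last processed position,
// hand offset/limit to the downstream nodes, and advance the pointer.
const staticData = $getWorkflowStaticData('global');
const batchSize = 100;

const offset = staticData.lastProcessedRow || 0; // 0 on the very first run

// Advance the pointer for the next hourly execution. For more safety, move this
// update into a Code node at the end of the workflow so the offset only advances
// once the whole batch has been written back to Shopify.
staticData.lastProcessedRow = offset + batchSize;

return [{ json: { offset, limit: batchSize } }];
```

Instead of a numeric row pointer, you could also store the id of the last processed product and pass it as since_id when fetching the next batch from the Shopify REST API.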