Hello,
I am currently building an automation workflow in n8n that connects Shopify to a GPT-based language model (via OpenRouter), and I would like to confirm whether this approach is technically reliable and scalable, particularly when processing a large number of products (up to 1,000).
Objective:
The goal is to automatically process all products from a Shopify store and extract structured information from the unstructured product description field (`body_html`) using GPT. The extracted values will then be written back to each product as Shopify metafields.
The workflow does the following:
- Retrieve product data from Shopify (including `id` and `body_html`).
- Clean the HTML content to obtain plain text via a Function Node (a sketch of my node code follows the list).
- Send each description to a GPT model (via the OpenRouter Chat Model integrated with n8n’s AI Agent).
- Use a fixed prompt to extract the following fields as JSON (example prompt below):
  - `produktform`
  - `werbefläche`
  - `transportvolumen`
  - `einsatzbereiche`
- Parse the JSON response with a JSON Parser Node.
- Use an HTTP Request node to write each extracted field back to the corresponding Shopify product as metafields (sample request body below).
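For reference, here is roughly what the HTML-cleaning Function Node does. This is a minimal sketch: the regex-based stripping and the output field name `plainDescription` are my own choices, not anything prescribed by n8n or Shopify.

```js
// n8n Function Node: strip HTML from body_html and return plain text.
// Output field name "plainDescription" is an example of my own choosing.
return items.map(item => {
  const html = item.json.body_html || '';
  const text = html
    .replace(/<style[\s\S]*?<\/style>/gi, '')   // drop embedded CSS
    .replace(/<script[\s\S]*?<\/script>/gi, '')  // drop embedded scripts
    .replace(/<[^>]+>/g, ' ')                    // remove remaining tags
    .replace(/&nbsp;/g, ' ')                     // decode the most common entity
    .replace(/\s+/g, ' ')                        // collapse whitespace
    .trim();
  return { json: { id: item.json.id, plainDescription: text } };
});
```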
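The fixed extraction prompt currently looks roughly like this (the exact wording is illustrative; the `{{ $json.plainDescription }}` expression assumes the field name from the sketch above):

```text
Extract the following fields from the product description below.
Return ONLY a valid JSON object, using null for any field you cannot find:
{
  "produktform": string | null,
  "werbefläche": string | null,
  "transportvolumen": string | null,
  "einsatzbereiche": string | null
}

Product description:
{{ $json.plainDescription }}
```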
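For the write-back, the HTTP Request node sends one POST per metafield to the Shopify REST endpoint `POST /admin/api/{version}/products/{product_id}/metafields.json`. The namespace, key, and value below are example placeholders from my setup, not fixed names:

```json
{
  "metafield": {
    "namespace": "custom",
    "key": "produktform",
    "value": "example value extracted by GPT",
    "type": "single_line_text_field"
  }
}
```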
Open Questions:
I would like to know whether this setup remains stable and reliable when scaled to high volumes (e.g., 500 to 1,000 products).
Environment:
- Shopify REST API
- OpenRouter Chat Model (GPT-compatible)
- n8n Cloud