Data Tables + AI Agent: where/how to define JSON for reliable row updates?
Goal
I’m moving a simple “employee update” use case from Google Sheets to the (beta) Data Tables feature and would like best-practice guidance on where and how to define the JSON format so updates map cleanly to columns.
Context
- A chat user asks to add or change an employee record.
- With Google Sheets this works fine.
- I’m migrating to Data Tables and want the cleanest pattern to ensure the AI output maps to columns and the Upsert is deterministic.
Question
What’s the recommended approach in n8n for:
- Defining a stable JSON schema that the AI must output, and
- Mapping that JSON to Data Tables → Upsert (or Create/Update) so the right row is updated via a key (e.g., employee_id)?
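For concreteness, this is roughly the record shape I’d want the AI to emit (field names here are just illustrative, not my actual schema):

```json
{
  "employee_id": "E-1042",
  "first_name": "Jane",
  "last_name": "Doe",
  "department": "Finance",
  "status": "active"
}
```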
Hi @bedwards
To reliably update rows in n8n’s Data Tables using an AI Agent:
- Define a JSON schema: In your AI Agent node, clearly define a consistent JSON structure that the AI must output (for example, by attaching a Structured Output Parser). The schema should include a unique key, such as employee_id, plus every other field you want to update.
- Add a Data Table node: In your workflow, place a Data Table node after the AI Agent.
- Select the Upsert operation: Choose Upsert in the Data Table node. This will either update an existing row or create a new one.
- Set the matching key: In the Upsert operation’s filter/match conditions, map the unique key from your AI’s JSON output (e.g., {{ $json.employee_id }}) to the corresponding column in your Data Table. This ensures the correct row is always targeted for an update.
- Map the columns: Map the remaining fields from the AI’s JSON output to their respective columns in the “Columns to Add/Update” section.
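The steps above can be sketched with a minimal JSON Schema for the first step (field names are illustrative assumptions; employee_id is the required match key):

```json
{
  "type": "object",
  "properties": {
    "employee_id": { "type": "string" },
    "first_name": { "type": "string" },
    "last_name": { "type": "string" },
    "department": { "type": "string" },
    "status": { "type": "string", "enum": ["active", "inactive"] }
  },
  "required": ["employee_id"],
  "additionalProperties": false
}
```

Note that depending on how your AI Agent is configured, its structured output may be nested (e.g., under an `output` property), so verify the actual path in the expression editor before wiring up the Upsert filter.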
This makes your AI-driven updates deterministic: the employee_id reliably finds and updates the correct row, or creates a new one if it doesn’t exist.
If my reply is helpful, kindly like it and mark it as the accepted solution.
Thanks!