I’m integrating two platforms, and my dataset keeps getting broken along the way.
Is there a better method to use for this process?
I pull asset data from HaloPSA and Snipe-IT, then have to translate the HaloPSA data into Snipe-IT’s formats and fields.
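That translation step boils down to a field-by-field mapping, roughly like the sketch below (the HaloPSA field names are placeholders, not the real API fields; the Snipe-IT side follows its hardware fields):

```javascript
// Code node ("Run Once for All Items") -- rough shape of the translation step.
// device_name / serial_number / asset_tag are placeholder HaloPSA field names.
return $input.all().map(item => ({
  json: {
    name: item.json.device_name,      // Snipe-IT asset name
    serial: item.json.serial_number,  // serial stays in its own field
    asset_tag: item.json.asset_tag,
    // company_id and model_id still need to be resolved against Snipe-IT
  },
}));
```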
I have a couple of lookup tables for clients and devices that I use in the first part of my workflow.
When that’s done, I still have to translate the company and device IDs and verify that the model exists in the destination DB. I resolve the company and device IDs from the lookup tables, then run a web lookup against the destination to confirm the model exists there.
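Concretely, that second part looks something like the sketch below. The node names (‘Client Lookup’, ‘Get Snipe-IT Models’) and field names are placeholders, and here the model check is sketched as a membership test against a list of models pulled once up front rather than the per-asset web lookup I actually do:

```javascript
// Code node -- ID translation plus a model-exists flag (placeholder names throughout).
const companies = new Map(
  $('Client Lookup').all().map(r => [r.json.halo_client_id, r.json.snipe_company_id]),
);
const models = new Set(
  $('Get Snipe-IT Models').all().map(r => r.json.name),
);

return $input.all().map(item => ({
  json: {
    ...item.json,
    company_id: companies.get(item.json.client_id) ?? null,
    model_exists: models.has(item.json.model_name),
  },
}));
```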
All of this results in three datasets.
I have used a Code node to bring data forward, but that replaces the workflow data with the data I called forward. I’ve also used Set nodes to look up what I’ve already processed, but this seems fragile, and sometimes I end up with data in the wrong field. I’ve spent too much time working out why the computer name ended up in the serial number field. I’ll file a bug report, but I need to get this working now.
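To show what I mean by ‘bringing data forward’, here is a rough sketch of that kind of Code node (node and field names are placeholders); it joins the earlier node’s output back onto the current items by a shared key instead of relying on item order:

```javascript
// Code node -- pull fields from an earlier node forward by key, not by position.
// 'Translate Fields' is a placeholder for whatever the earlier node is called.
const earlier = new Map(
  $('Translate Fields').all().map(i => [i.json.asset_tag, i.json]),
);

return $input.all().map(item => ({
  json: {
    ...(earlier.get(item.json.asset_tag) ?? {}),  // earlier fields first…
    ...item.json,                                 // …current fields win on conflict
  },
}));
```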
My question is: is there a better method for this process? Should I create three parallel paths, or use sub-workflows? I feel I’d run into the same issue either way and end up using Code nodes to pull data around.
What is the error message (if any)?
This is a process question. There isn’t an error message.
The relevant pieces are the lookup table, the Code node that pulls data forward, and the query-model node.
Share the output returned by the last node
Information on your n8n setup
- n8n version: 1.100.1
- Database (default: SQLite): Postgres
- n8n EXECUTIONS_PROCESS setting (default: own, main): Own
- Running n8n via (Docker, npm, n8n cloud, desktop app): Docker on Digital Ocean
- Operating system: Ubuntu 22.04.5 LTS