Any limits for data to use Compare Datasets?

I want to use Compare Datasets to keep 2 DBs in sync, say once every 10 seconds.
But are there any limits on that? What if the DBs contain 100,000 records? Tens of millions of records?

Is there anything worth knowing to compare BIG datasets?

I think the server will struggle with this.
I have seen performance issues with the Merge node and (if I recall correctly) also with the Compare Datasets node.
For these kinds of workloads I would push a hash per record to Redis, for example, and then check the hashes in small batches.
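The hash-comparison idea above could be sketched like this (a minimal sketch in Python; a plain dict stands in for Redis, and the record field `id` and the batch size are assumptions, not anything from your actual schema):

```python
import hashlib
import json

# Stand-in for Redis: in a real setup you'd store these with
# redis-py (e.g. r.hset / r.hget on a Redis hash) instead of a dict.
stored_hashes = {}  # record id -> hash of the record's contents

def record_hash(record: dict) -> str:
    """Stable hash of a record (sorted keys so field order doesn't matter)."""
    payload = json.dumps(record, sort_keys=True, default=str)
    return hashlib.md5(payload.encode()).hexdigest()

def find_changed(records, batch_size=1000):
    """Yield only records whose hash differs from the stored one,
    processing in small batches instead of diffing everything at once."""
    for i in range(0, len(records), batch_size):
        for rec in records[i:i + batch_size]:
            h = record_hash(rec)
            if stored_hashes.get(rec["id"]) != h:
                stored_hashes[rec["id"]] = h  # remember after syncing
                yield rec

rows = [{"id": 1, "name": "a"}, {"id": 2, "name": "b"}]
changed = list(find_changed(rows))    # first run: every record looks new
unchanged = list(find_changed(rows))  # second run: hashes match, nothing to sync
```

This way each 10-second run only moves the records that actually changed, instead of pulling tens of millions of rows through a compare node every time.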

