I don’t understand why inserting a few thousand rows of data (6,400 rows) into Baserow takes so long. Is this normal, or is something wrong?
Configuration of the server running n8n and Baserow together:
32 GB RAM / 4 cores
Virtual machine
2+ GHz CPU frequency
Separate Docker environments
Can you help me with my datasets and updates? With this slowness it takes far too long to get any work done.
Thank you very much for your reply.
In my case, I compared the insertion speed of the Baserow node against calling the API directly.
The API is the faster of the two. To avoid node timeouts I had to split the insertions up, but even so it takes about 30 minutes to insert this data.
So I set up the insertion batches as shown in the figure.
Could you please describe the solution in a little more detail?
Ah sorry, you are missing one key piece of information about n8n nodes.
Most nodes run once per item that passes through them. So the Baserow node, and also the HTTP Request node, will execute once per record, which means 6,400 API requests in this case. The batch options only let the node wait for a given time after a given number of requests; it still processes every record one by one.
The Baserow API also has a bulk option which lets you send an array of records to be processed in a single request.
This is not yet implemented in the Baserow node, so you need to do it manually: group the records into batches and send each batch with the HTTP Request node.
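To illustrate the grouping, here is a minimal Python sketch of the same idea outside n8n. The endpoint path, the 200-row batch size, and the `table_id`/`token` values are assumptions based on Baserow's documented batch-create API; verify them against your instance's API docs. Inside n8n you would do the equivalent grouping in a Code node and let the HTTP Request node send each batch.

```python
import json
from urllib.request import Request, urlopen


def chunk(records, size=200):
    """Split records into batches (Baserow's batch endpoint caps items per request)."""
    return [records[i:i + size] for i in range(0, len(records), size)]


def insert_in_batches(records, table_id, token, base_url="https://your-baserow-host"):
    """Send one bulk request per batch instead of one request per row.

    The endpoint path and payload shape follow Baserow's batch-create API;
    double-check both against the docs for your Baserow version.
    """
    url = f"{base_url}/api/database/rows/table/{table_id}/batch/?user_field_names=true"
    for batch in chunk(records):
        req = Request(
            url,
            data=json.dumps({"items": batch}).encode("utf-8"),
            headers={
                "Authorization": f"Token {token}",  # hypothetical database token
                "Content-Type": "application/json",
            },
            method="POST",
        )
        urlopen(req)  # one request per batch of up to 200 rows
```

With this grouping, 6,400 rows become 32 requests of 200 rows each rather than 6,400 single-row requests, which is where the time savings come from.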