Load from Postgres, filter, add/transform some data in a Set node, clean HTML with a Markdown node, convert the file to CSV, and upload it to an S3 bucket.
Something is corrupting the CSV (a different service loads our data from S3). In 1.44.1 it's working perfectly; in 1.45.0 something happens in the file conversion, which leads to a lot of rows being skipped on load.
In spreadsheet tools like Numbers, Sheets, and Excel the file looks fine as well.
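If it helps to narrow things down, here is a minimal sketch (not from the workflow itself) that summarizes byte-level properties of a CSV export that commonly trip up strict loaders: a BOM, mixed line endings, or a change in quoting. Running it against the file produced by each version and comparing the output could show what changed between 1.44.1 and 1.45.0.

```python
# Diagnostic sketch: summarize CSV bytes for properties that often
# break downstream loaders (BOM, CRLF vs. LF line endings, quoting).
def summarize(data: bytes) -> dict:
    crlf = data.count(b"\r\n")
    return {
        "bytes": len(data),
        "bom": data.startswith(b"\xef\xbb\xbf"),
        "crlf_lines": crlf,
        "lf_only_lines": data.count(b"\n") - crlf,
        "double_quotes": data.count(b'"'),
    }

# In practice, read each exported file, e.g.:
#   summarize(open("export.csv", "rb").read())
# Inline sample data for illustration:
sample = b'id,name\r\n1,"Alice"\r\n2,"Bob"\r\n'
print(summarize(sample))
```

Comparing the two dictionaries side by side should make an encoding or line-ending change obvious without opening the files in a spreadsheet tool (which silently normalizes many of these differences).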
Hi @Kool_Baudrillard, thanks for reaching out - at what point in the process are the rows skipped? When they are initially loaded from PostgreSQL, or when you convert the file into CSV format?
It would help if you could provide a simple version of the workflow that demonstrates the behavior you’re describing.
The load by the other system happens after the file is stored in S3.
But the CSVs generated in 1.44.1 and 1.45.0 seem to differ. The whole flow runs fine in both versions as expected; the generated CSV is the issue. I'll check with the dev team tomorrow whether they can see what happens on their side.
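Since the symptom is skipped rows on load, one hypothetical check for the dev team: parse each export the way a strict loader might and flag rows whose column count differs from the header, since those are the rows a loader typically drops. This is only a sketch with made-up sample data, not the actual loader's logic.

```python
import csv
import io

# Count rows whose field count differs from the header row; strict
# loaders commonly skip exactly these rows.
def row_report(text: str) -> dict:
    rows = list(csv.reader(io.StringIO(text)))
    header_len = len(rows[0]) if rows else 0
    bad = [i for i, r in enumerate(rows[1:], start=2) if len(r) != header_len]
    return {"total_rows": len(rows), "mismatched_rows": bad}

# Illustration: an unquoted extra delimiter shifts the columns of row 4.
broken = 'id,comment\n1,ok\n2,"line one\nline two"\n3,a,b\n'
print(row_report(broken))
```

If the 1.45.0 file reports mismatched rows where the 1.44.1 file does not, the conversion step changed its quoting or escaping of embedded delimiters/newlines.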