ETL / ELT jobs

Can n8n do intermediate-level ETL / ELT jobs?

Briefly, I need to create a stock trade matching and ledger engine: matching open trades with closed trades based on a set of specific conditions, then moving matched trades into a new table and updating an account ledger. I’d probably want to do this in batches/intervals instead of kicking off a workflow for every record. I’m on Postgres and doing this with Python pandas now, but I’m curious whether n8n can speed up development.
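To give a flavour, here is a stripped-down sketch of the matching step as I do it in pandas today (table and column names are simplified stand-ins, and the real matching conditions are more involved than a FIFO symbol/quantity match):

```python
import pandas as pd

# Simplified stand-ins for the real Postgres tables
opens = pd.DataFrame({
    "trade_id": [1, 2, 3],
    "symbol": ["AAPL", "AAPL", "MSFT"],
    "qty": [100, 50, 200],
    "price": [150.0, 151.0, 300.0],
    "opened_at": pd.to_datetime(["2023-01-02", "2023-01-03", "2023-01-03"]),
})
closes = pd.DataFrame({
    "trade_id": [10, 11],
    "symbol": ["AAPL", "MSFT"],
    "qty": [100, 200],
    "price": [155.0, 310.0],
    "closed_at": pd.to_datetime(["2023-01-05", "2023-01-06"]),
})

# Match each close against the oldest open trade with the same
# symbol and quantity (FIFO); one match per key, for brevity
oldest_opens = (
    opens.sort_values("opened_at")
    .drop_duplicates(subset=["symbol", "qty"], keep="first")
)
matched = closes.merge(oldest_opens, on=["symbol", "qty"],
                       suffixes=("_close", "_open"))

# Realised P&L per matched pair feeds the account ledger update
matched["pnl"] = (matched["price_close"] - matched["price_open"]) * matched["qty"]
print(matched[["symbol", "qty", "pnl"]])
```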

Is this possible or does this fall outside the use case of n8n? Thanks

Hey @complex,

Welcome to the community :tada:

n8n is capable of ETL / ELT processes; after all, it is just reading data, tweaking it, and saving it somewhere else. The tricky part is how you do it and what formats you are dealing with.
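To make that concrete, the whole pattern in plain Python looks something like this (a sketch only, assuming psycopg2 and made-up table names; in n8n each step would typically be its own node: a Postgres node to read, a Code node to transform, a Postgres node to write):

```python
import psycopg2  # assuming Postgres, as in your current setup

# Read, tweak, save: the same shape an n8n workflow takes
conn = psycopg2.connect("dbname=trading")  # hypothetical DSN
with conn, conn.cursor() as cur:
    # Extract: pull a batch of rows (illustrative table/columns)
    cur.execute("SELECT id, qty, price FROM open_trades LIMIT 500")
    rows = cur.fetchall()

    # Transform: compute a derived value per row
    enriched = [(trade_id, qty * price) for trade_id, qty, price in rows]

    # Load: save the result somewhere else
    cur.executemany(
        "INSERT INTO trade_values (trade_id, notional) VALUES (%s, %s)",
        enriched,
    )
```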

As for speeding up development, that really depends on how handy you are with the Python you are already using and how quickly you pick up n8n concepts.

I would recommend running n8n and having a play to see if you can mimic what you have already built. That gives you something to compare against and should, in theory, help you get your head around things, since you already know what the outcome should be.


@Jon thanks! Do you happen to know of any example workflows or templates that do some basic/intermediate level transformation and computation?

I’d recommend checking the library: n8n workflow templates

Thanks @Joachim_Brindeau, I have looked through the templates but haven’t found anything more advanced than some basic join examples, JSON data conversion, etc. Not to say that n8n can’t do what I’m trying, but I haven’t seen any advanced use cases / templates yet. Sounds like I’ll just need to jump in and give it a shot.

Well, imo n8n can do anything.
And if it’s not out of the box, just ask GPT-4 for a custom JS function made for n8n. Good luck!
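As a flavour of what you’d ask for, here is a plain-Python stand-in for that kind of per-item function (field names are made up, and the real Code node wraps items in its own format, so treat this as a sketch only):

```python
def transform_items(items: list[dict]) -> list[dict]:
    """Per-item logic of the kind a custom Code/Function node would hold."""
    out = []
    for item in items:
        # Illustrative rule: flag rows whose open and close quantities agree
        matched = item.get("open_qty") == item.get("close_qty")
        out.append({**item, "matched": matched})
    return out

print(transform_items([{"open_qty": 100, "close_qty": 100}]))
```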

Hey @complex,

Sadly you have pretty much nailed it there. We don’t have more examples covering specific ETL jobs as they can vary so much between cases. The best option would be to create a small dataset, see how you get on and whether we are missing anything, and scale up from there.
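Even a few lines of the pandas/numpy stack you already use would do for that (a sketch with made-up columns):

```python
import numpy as np
import pandas as pd

# A tiny synthetic trade set to prototype the workflow against
# before pointing it at the real Postgres tables
rng = np.random.default_rng(seed=42)
sample = pd.DataFrame({
    "trade_id": range(1, 21),
    "symbol": rng.choice(["AAPL", "MSFT", "GOOG"], size=20),
    "qty": rng.integers(1, 10, size=20) * 10,
    "price": rng.uniform(100, 400, size=20).round(2),
})
sample.to_csv("sample_trades.csv", index=False)  # feed this into n8n
```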