Hi @nosiremosacasar
Software engineer with 25+ years of experience here. I’ve been in that place as well.
This question reminds me of an old story about Brazilian software developers.
Brazil used to have one of the largest concentrations of Clipper developers in the world, with the language being used for everything from small business accounting to banking front-ends.
During a presentation at the FENASOFT trade fair in Brazil in the early 1990s, Brian Russell, the creator of Clipper, after receiving massive criticism, had to clarify that Clipper was never intended to replace COBOL!
So, beyond the obvious ownership bias we feel toward our own code, the way I see this discussion is like trying to determine the “best” programming language. The undisputed answer is always: “You have to use the right tool for each job”.
As software engineers, our main concern tends to be with business rules. They must be strictly enforced. Errors must be caught and dealt with appropriately, to make sure the system is always in a consistent state. You made that concern clear.
And then there are the side issues we have to deal with: UX, authenticating users, calling APIs, OAuth2, callback endpoints, logging, object-relational mappers, clustering architecture, the list goes on… infrastructure in general. That tends to feel like a time-consuming pain for senior devs.
Enter automations: they make dealing with infrastructure a breeze. Just click here and there and everything is connected. Suddenly, you’re allowed to use a plethora of cloud services without the hassle of making them understand each other. In my experience, that means weeks or even months of saved work, depending on the number of integrations.
Thus, automations score big points over code on time-to-market. Sure, execution time takes a penalty, but that is the trade-off.
However, where automations tend to fall short is with complex business rules. That is visually noticeable in the number of branches in your workflow. And that, IMO, is what makes testing more difficult: edge cases can break your automation instantly, but…
You’re right that n8n lacks proper testing infrastructure and sophisticated retry logic. But I believe what it does offer is just good enough for the workflows you should be throwing at it.
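For illustration only, here is what a basic version of that “sophisticated retry logic” might look like in plain JavaScript, the kind of helper you would otherwise paste into a Code node. The function name and options are my own sketch, not an n8n API:

```javascript
// Sketch of a generic retry-with-exponential-backoff helper.
// retryWithBackoff and its options are illustrative, not part of n8n.

const sleep = (ms) => new Promise((resolve) => setTimeout(resolve, ms));

async function retryWithBackoff(fn, { retries = 3, baseMs = 500, factor = 2 } = {}) {
  let lastError;
  for (let attempt = 0; attempt <= retries; attempt++) {
    try {
      return await fn(); // succeed on any attempt and we're done
    } catch (err) {
      lastError = err;
      if (attempt === retries) break; // out of attempts
      // Wait 500ms, 1s, 2s, ... before the next try (jitter could be added)
      await sleep(baseMs * factor ** attempt);
    }
  }
  throw lastError; // surface the final failure to the caller
}
```

Once the transient failures of a flaky HTTP call are absorbed here, the workflow around it can stay simple.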
Think about it for a second. If you’re with a company that runs a 24/7 mission operations control center, then automations are really out of their league. But if you’re hosting your own software with some cloud provider, you probably have about as many recovery resources as an automation does. And guess what… your cloud provider may well notify you about the error using an automation.
It’s not about being “extremely simple input-output kinds of automations”, as you stated, but about how complex the transformations you need to make are and how variable your inputs are.
If your workflow requires transaction-level consistency guarantees, or needs complex state management across executions, that’s probably a sign that you’ve outgrown n8n for that use case.
Maybe that’s the real issue. If you’re fighting the automation and trying to take control of every little piece of code, the experience becomes distressing. And I’m telling you this because I’ve done that and felt the exact same way as you did.
So, my advice to anyone considering automations is to build simpler workflows. Ship fast, leave room for execution errors, log them somewhere with a dedicated error-handling workflow, make sure your workflows don’t simply break on edge cases. Get notified of problems once in a while and apply small upgrades. Enjoy the benefits of cloud services.
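As a rough sketch of the “log them somewhere” step: the function below formats an error event into a single alert line you could push to Slack, email, or a webhook from a dedicated error-handling workflow. The field names on `event` are illustrative assumptions on my part, not n8n’s actual error-trigger payload:

```javascript
// Sketch of the formatting step inside a dedicated error-handling workflow.
// The shape of `event` is an assumption for illustration.

function formatErrorAlert(event) {
  const {
    workflowName = 'unknown workflow', // which workflow failed
    nodeName = 'unknown node',         // where it failed
    errorMessage = 'no message',       // what went wrong
    timestamp = new Date().toISOString(),
  } = event;
  return `[${timestamp}] Workflow "${workflowName}" failed at node "${nodeName}": ${errorMessage}`;
}
```

The point is that the error handler itself stays trivial: one shared workflow that every other workflow reports into, so edge cases become notifications instead of silent breakage.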
For complex, mission-critical business logic with strict consistency requirements, code wins.
@damato