The implementation you have at the moment should be fine. The only caveat: if your workflows consume a lot of memory or data and get triggered simultaneously, you might run into slower executions or even OOM events, depending on how you're running n8n and your memory capacity. But this can also happen in large "single" workflows.
If that's the case, I'd suggest dividing each workflow's workload into smaller chunks, like sub-workflows, while still keeping them all in one "workflow". You can check out these docs to get a better idea of the best practices we recommend around this: Cloud data management | n8n Docs; Execution data | n8n Docs; Memory-related errors | n8n Docs
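To illustrate the chunking idea, here's a minimal sketch of batching items before handing them off, e.g. from a Code node to a sub-workflow. This is just a generic helper for illustration, not n8n's own API; inside n8n you'd usually reach for the built-in Loop Over Items (Split In Batches) node instead:

```javascript
// Hypothetical chunking helper: split a large set of items into
// smaller batches so each sub-workflow run processes less data at once.
// In a real n8n Code node you'd feed it $input.all() instead of a plain array.
function chunk(items, size) {
  const batches = [];
  for (let i = 0; i < items.length; i += size) {
    batches.push(items.slice(i, i + size));
  }
  return batches;
}

// Example: 10 items in batches of 3 -> 4 batches
const batches = chunk(Array.from({ length: 10 }, (_, i) => i), 3);
console.log(batches.length); // 4
```

Each batch can then be passed to an Execute Workflow node, so memory usage per execution stays bounded instead of peaking on the full dataset.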
It might also be easier to debug, test, and track performance when the workflows are separate, since they'll get individual IDs and it's easier to map out errors. The execution count is the same either way, but performance-wise you should be good.