Hey @Oluwafemi_Esther Welcome to the community!
Let me clarify: if your instance is down or restarting, executions missed during that time are not recoverable, since n8n is either still booting up or simply not running.
Since you say the cron only fails under high load, increasing your n8n instance's specs would likely fix that.
I would also suggest a more robust way of polling data (assuming you are polling for data changes):
Always use a timestamp that you control. That way, when a run is missed, the next run simply polls a larger window. For example, if the workflow runs every 5 minutes and one run was missed, have it check the last 10 minutes.
With the Data Tables feature you can store that timestamp inside n8n, which makes this easy to set up (other options exist, but Data Tables would be the simplest here).
You can also build in an overlap by default, e.g. always check the last 15 minutes when running every 5 minutes.
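As a minimal sketch of the idea above (the function name `pollWindow` and the constant are my own, not n8n APIs): compute the polling window from the last stored timestamp plus a fixed overlap, so a missed run or late-arriving data is still covered by the next run.

```javascript
// Sketch: derive a polling window from a persisted timestamp with overlap.
// "lastPollMs" would come from wherever you persist it (e.g. an n8n Data Table);
// here it is a plain variable for illustration.

const OVERLAP_MS = 10 * 60 * 1000; // always look 10 minutes further back

function pollWindow(lastPollMs, nowMs) {
  // Start from the last successful poll, minus the overlap, so a missed
  // run is still picked up by the run that follows it.
  const since = lastPollMs - OVERLAP_MS;
  return { since, until: nowMs };
}

// Example: last poll was 5 minutes ago, so with the 10-minute overlap
// the window covers the last 15 minutes.
const now = Date.now();
const { since, until } = pollWindow(now - 5 * 60 * 1000, now);
console.log((until - since) / 60000); // 15
```

After each successful run you would write `until` back to your Data Table as the new `lastPollMs`.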
There are plenty more options for this, but these are the easiest to implement.
Of course, if you are not polling at this rate but running something once a day, for example, it is still fixable, just not as easily as in the examples above.
For the restart issue specifically: cron triggers only fire while n8n is running, so if your container restarts during a scheduled window, that execution is lost. One workaround is an external cron service (like cron-job.org) that calls a Webhook trigger in your workflow instead of relying on the built-in cron node.
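If you run your own always-on host instead of a hosted service, the same idea is a plain crontab entry hitting the workflow's webhook. The URL below is a placeholder, use the Production URL shown on your workflow's Webhook trigger node:

```shell
# Config sketch: ping the workflow's production webhook every 5 minutes.
# -f fails on HTTP errors, -sS stays quiet except for real failures.
*/5 * * * * curl -fsS -X POST https://your-n8n-host/webhook/poll-changes
```

This moves the scheduling outside n8n, so a restart only delays the execution until the next external ping rather than losing it silently.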