N8n Cloud workspace DOWN since Sunday, no fix, no ETA

Hello,

This situation is beyond frustrating.

Our n8n Cloud workspace has been completely down since Sunday.

After a long wait, support finally said yesterday that the case had been escalated to Engineering due to maxed-out disk space.

But here we are:

  • 2+ days later → workspace still offline

  • Zero timeline → no ETA for resolution

  • No workaround → no way for us to continue working

Meanwhile, our business is frozen. Our production workflows are blocked, our teams cannot operate, and we are losing critical time and resources.

How can a paid production service be down for nearly 3 days without a fix, or at least a temporary solution (migration, disk expansion, backup access)?

Why is there no transparent communication or concrete plan of resolution being shared?

This level of downtime is unacceptable and is actively damaging trust in n8n Cloud as a reliable platform.

If this is how production incidents are handled, we need to seriously reconsider using n8n Cloud.
Is there anyone here from the n8n team or community who can help us so we can at least get our workflows running again?
Thanks
Workspace: otoqitest
Version: latest


I am having the same issue - I tried to log into my n8n Cloud instance this morning and it's giving me the following message: Workspace offline (503)


That is a bit concerning. I think scalability is always challenging. n8n recently had a surge in demand, and perhaps they are still struggling to cater to it. Hope things will be settled soon.


It is rather frustrating - I pay for the cloud version, but how long will this be down for?

@NXTLVLAI Our server has been back online for 5 minutes. It’s really frustrating because we know what the issue is, but not how to fix it. I hope this won’t happen again, and that it will be permanently resolved for you soon as well.
I also hope that support will improve, because issues like this may cause n8n to lose customers. Honestly, if it hadn’t been resolved today, we would have started looking for another solution or provider.

I really like n8n, but it’s frustrating when the platform goes down


I am sure you can claim a refund. I understand that it will not compensate for production downtime. But it is standard practice.


Agreed, the customer support really needs some improvement!


Do you have any idea how I can do that? My cloud instance has also been down for a long time; my customer is complaining constantly, but I can't do anything given the support response.

I am having the same issue - our business has just STOPPED.

I’m in the EXACT same boat!
How can anyone build a business on this?!
My instance has been down since Sunday. The AI email told me that this has been escalated, but I haven’t heard anything since. Outside of buying enterprise, is there a way to get better support? Can someone help me @support_support

Hi all, Mike from the n8n Cloud team here. First and foremost, my sincere apologies for your n8n Cloud instances having trouble. Following our recent rapid growth our response times have been a little longer than we’d like, but we do our best to help everyone’s instances stay up and operational.

As for what’s causing instance downtime, there’s no single root cause. Since n8n allows for powerful automations, the causes of temporary outages can vary, from ingesting too much data through a trigger node to quickly filling up disk space by storing gigabytes of binary data. Without going into details about individual instances, I want to let you know that all the currently identified issues (including those of the OP, I believe) have either been addressed or are in the process of being looked into.

As always, if there is anything specific, our support is more than happy to help you. Thank you for your patience, and sorry for the hiccups on your journey as you continue to build awesome automations 🙏
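For anyone reading this who runs a self-hosted n8n instance (this does not apply to Cloud, where you can't set environment variables yourself), the disk-space failure mode Mike describes can often be mitigated with n8n's documented settings for pruning execution data and keeping binary data out of the database. A hedged sketch; check the current n8n docs for exact names and defaults before relying on it:

```shell
# Automatically prune old execution data so the database doesn't grow unbounded.
export EXECUTIONS_DATA_PRUNE=true             # enable pruning
export EXECUTIONS_DATA_MAX_AGE=168            # delete executions older than 168 hours (7 days)
export EXECUTIONS_DATA_PRUNE_MAX_COUNT=10000  # cap the total number of stored executions

# Store binary data on the filesystem instead of in the database,
# so large payloads from workflows don't bloat the DB.
export N8N_DEFAULT_BINARY_DATA_MODE=filesystem

n8n start
```

On Cloud, the closest equivalents are workflow-level choices: avoid storing large binary payloads in executions you keep, and limit saved execution data in workflow settings.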


Small update: we just discovered that attempting to downgrade an instance from 1.110.0 to 1.109.x will fail, caused by a database schema change between the versions. Anyone affected by this can self-recover by upgrading the instance to 1.110.0 or newer. We’ve now also added a warning to make it explicit that updating to this version is a one-way street.
