502 Bad gateway - service failed to handle request

Describe the issue/error/question

I’m basically taking a Telegram RSS feed and translating one of the JSON fields with the DeepL API, then I merge the data back with the original and publish the result to WordPress, Telegram, or Baserow. It had been working great for four months, but these last few days this Bad Gateway error has been paralyzing the entire translation project: https://osintukraine.com

When it does operate like it used to (the last successful run was at 2 PM today), this is the output of the DeepL node after translation, before merging:


What is the error message (if any)?

Stack

NodeApiError: Bad gateway - the service failed to handle your request
    at Object.deepLApiRequest (/app/code/node_modules/n8n-nodes-base/dist/nodes/DeepL/GenericFunctions.js:31:15)
    at processTicksAndRejections (node:internal/process/task_queues:96:5)
    at async Object.execute (/app/code/node_modules/n8n-nodes-base/dist/nodes/DeepL/DeepL.node.js:109:42)
    at async Workflow.runNode (/app/code/node_modules/n8n-workflow/dist/src/Workflow.js:594:28)
    at async /app/code/node_modules/n8n-core/dist/src/WorkflowExecute.js:537:49


{"message":"502 - {\"code\":502,\"message\":\"Bad Gateway\"}","name":"Error","stack":"Error: Request failed with status code 502\n    at createError (/app/code/node_modules/axios/lib/core/createError.js:16:15)\n    at settle (/app/code/node_modules/axios/lib/core/settle.js:17:12)\n    at IncomingMessage.handleStreamEnd (/app/code/node_modules/axios/lib/adapters/http.js:269:11)\n    at IncomingMessage.emit (node:events:539:35)\n    at endReadableNT (node:internal/streams/readable:1345:12)\n    at processTicksAndRejections (node:internal/process/task_queues:83:21)"}


Please share the workflow

Share the output returned by the last node


Information on your n8n setup

  • n8n version: 0.183.0
  • Database you’re using (default: SQLite): postgresql
  • Running n8n with the execution process [own(default), main]: own
  • Running n8n via [Docker, npm, n8n.cloud, desktop app]: Docker

Hey @benb,

In the settings for the DeepL node, can you try setting the retry options to see if that changes anything? As it is a 502 error, I suspect that either the service is being overloaded with requests, or they use something like Cloudflare and that is limiting requests.
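For context, the retry option works roughly along these lines. This is a minimal sketch of retry-with-backoff for transient gateway errors, not the actual n8n implementation; the function and parameter names are illustrative:

```javascript
// Sketch: retry a request-like function with exponential backoff.
// Only transient gateway errors (502/503/504) are retried; anything
// else is re-thrown immediately. Names and delays are illustrative.
async function withRetry(fn, { retries = 3, baseDelayMs = 1000 } = {}) {
  let lastError;
  for (let attempt = 0; attempt <= retries; attempt++) {
    try {
      return await fn();
    } catch (err) {
      lastError = err;
      // Non-transient errors are not worth retrying.
      if (![502, 503, 504].includes(err.statusCode)) throw err;
      // Wait 1s, 2s, 4s, ... before the next attempt.
      await new Promise((r) => setTimeout(r, baseDelayMs * 2 ** attempt));
    }
  }
  throw lastError;
}
```

With a flaky upstream, a couple of spaced-out retries are often enough to get past a momentary 502.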

They are indeed using Cloudflare. I will give that a try and report back; at the same time I contacted DeepL support for further investigation. It seems they did suffer some disruption, but I haven’t been briefed on any details, so it’s hard to debug…

Hmm… Maybe I could use a Split In Batches + Wait node before the translation, so I space out the requests?
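That batching pattern could be sketched as follows; the batch size and delay are illustrative values, not n8n defaults:

```javascript
// Sketch of the Split In Batches + Wait pattern: process items in small
// chunks, pausing between chunks so the API isn't hammered.
function chunk(items, size) {
  const batches = [];
  for (let i = 0; i < items.length; i += size) {
    batches.push(items.slice(i, i + size));
  }
  return batches;
}

async function processInBatches(items, handler, { size = 5, delayMs = 2000 } = {}) {
  const results = [];
  for (const batch of chunk(items, size)) {
    results.push(...(await Promise.all(batch.map(handler))));
    // Space out the batches, as the Wait node would.
    await new Promise((r) => setTimeout(r, delayMs));
  }
  return results;
}
```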

Hey @benb,

That is not a bad idea, although if it fails you may hit the same issue. At least with a wait plus the retry in the node you have a few options covered. You could also have an error workflow or a failure branch that saves anything that still needs to be done to a database, so you can redo it in the future. Alternatively, if you save the execution logs, you should be able to rerun with the existing data if it fails.

It’s currently going through one of the translation workflows with publication to WP, and it seems the retry was enough to get it past the DeepL node. I’ll see if the other workflows behave the same. This may also be working again because DeepL added my server IP to their Cloudflare whitelist; I still have to confirm that, though.

I tested the Split In Batches + Wait loop to make requests much, much slower, and it does not change anything. Sometimes requests manage to go through, but most of the time they don’t, and there is no way to understand what’s actually happening because there is no more error information than what I managed to include here…

Hey @benb,

HTTP errors often don’t provide more detail. All we tend to do is return the error we get back from the service, which sometimes doesn’t help. That makes it tricky for us, as we don’t know either, and often there is very little we can do since we don’t control the other side.

I am surprised the loop didn’t help, but with a 502 there could be a few causes, some of which may be unrelated to what you are doing. It could be that someone else has overloaded the other side and it is slowing down requests.

Got a reply from DeepL devs :

Our developers came back to us about your case, and pinpointed the fact that the source text was inserted in the URL. As it had over 4000 characters, this is too long a request for the API.

Please take a few minutes to go through our API documentation, which suggests to put the source text into the HTTP body of a POST request. As you sent the same request repeatedly, the same error was shown each time.

From our side the error handling has just been improved, suggesting to put the text into the POST body. For more information check the section Translating large volumes of text from our API documentation.

Please let us know if you have any further issues once you’ve edited your request.


This is really odd, because my DeepL node is capped at 1200 characters.

So now I’m wondering: is the DeepL node following best practice for the DeepL API? Because if, as suggested, the content is in the URL when it should be in the body, maybe I should design the workflow in a different way? Or is it the DeepL node itself that needs an update?

That is interesting, and good to know that they were able to work it out. The main thing I can see there is: if you are capping it at 1200 characters, how are they getting 4000 through? :thinking:

Looking at the API docs, I would have expected a 414 HTTP response though, not a 502.

414 The request URL is too long. You can avoid this error by using a POST request instead of a GET request, and sending the parameters in the HTTP body.

Our node is sending the text in the URL, so a better long-term solution would probably be to change it to use the body instead. I will do some testing in the morning and see how it goes. In theory it should be a simple enough change and shouldn’t break anything.
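For reference, the difference between the two request styles could look like this. This is a sketch against DeepL’s documented v2 endpoint, not the node’s actual code:

```javascript
// The same translation request two ways. With GET, the source text ends
// up in the URL and can exceed server-side URL length limits; with POST,
// the URL stays short and the text travels in the request body.
// Endpoint and parameter names follow DeepL's v2 API docs.
const text = 'x'.repeat(5000); // stand-in for a long source text
const params = new URLSearchParams({ text, target_lang: 'EN' });

// GET: the whole text is encoded into the URL.
const getUrl = `https://api.deepl.com/v2/translate?${params}`;

// POST: short URL, text in the form-encoded body.
const postRequest = {
  method: 'POST',
  url: 'https://api.deepl.com/v2/translate',
  headers: { 'Content-Type': 'application/x-www-form-urlencoded' },
  body: params.toString(),
};
```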

I understand better now what’s happening, thanks for the information. I’ll share your comment with DeepL, if that’s okay with you?

Right now the node is not usable in my use case. It seems to work for small messages, but 1200 characters isn’t that much; in fact I would love to be able to use 4000. My Telegram translations are often cut off by the 1200-character slice, so if a long-term fix is made it would be really appreciated, as my project depends a lot on DeepL.

Right now automated translations are being done with Google, but they’re lower quality in terms of context.

I’ll keep you posted on DeepL’s feedback.
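As an aside, until the node supports longer texts, one workaround for the mid-message cuts might be to split the source on sentence boundaries before translating. This helper is purely hypothetical and not part of the DeepL node:

```javascript
// Hypothetical helper: split long source text into chunks of at most
// `limit` characters, breaking at sentence boundaries where possible so
// translations are not cut mid-sentence. A single sentence longer than
// `limit` still becomes one oversized chunk in this simple sketch.
function splitAtSentences(text, limit = 1200) {
  const sentences = text.match(/[^.!?]+[.!?]*\s*/g) || [text];
  const chunks = [];
  let current = '';
  for (const sentence of sentences) {
    if (current && (current + sentence).length > limit) {
      chunks.push(current.trim());
      current = '';
    }
    current += sentence;
  }
  if (current.trim()) chunks.push(current.trim());
  return chunks;
}
```

Each chunk could then be sent as a separate item through the DeepL node and reassembled afterwards.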

Hi @benb,

Sounds good to me. 1200 characters seems really low, and if it is a simple change then it seems worth doing.

Should I open a GitHub issue for this?

Hey @benb,

I don’t think a GitHub issue is needed, as I don’t see this as a bug: we are using the URL option, which is valid. It looks like we need to change this to the body to support more characters, which I would say is a feature request and something that would be handled here.

Alright, thanks very much!

It seems DeepL have changed the error message to handle this:

That is handy. Was that with your 1200-character limit? I am going to look into the node now.

Hey @benb,

Any chance you can DM me some text that failed, so I can play with it? As a quick test I managed to translate all 6726 characters of “Rapper’s Delight” from English to German, but it would be nice to run some of your data through it if you have anything you can share.

Hey @Jon

The source of my translation is this RSS feed: Militants via Observer on Inoreader. You can simply use the RSS Feed node as your source of translation, then select the “title” element as the source of translation (see my workflow in the OP).