Developing nodes: Send HTTP API response headers as error output

Hi!

I would like to contribute to the Slack node :blush:

These are almost my first steps into node development, so I would appreciate a little bit of help here :pray::innocent:

Let me explain a little bit about the use case I am trying to solve before getting to the question where I am blocked.

Slack API

Slack applies rate limits when interacting with its API. The interesting thing is that it includes the number of seconds you have to wait before retrying a request in the Retry-After HTTP response header. Example:

HTTP/1.1 429 Too Many Requests
Retry-After: 30
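
Just to illustrate how a client is expected to honour that header (a generic sketch outside n8n, nothing Slack-node specific):

// Generic sketch: back off for as long as Slack asks on a 429 response
const response = await fetch('https://slack.com/api/chat.postMessage', {
	method: 'POST',
	headers: {
		Authorization: `Bearer ${process.env.SLACK_TOKEN}`,
		'Content-Type': 'application/json; charset=utf-8',
	},
	body: JSON.stringify({ channel: '#random', text: 'Hello!' }),
});

if (response.status === 429) {
	// Retry-After holds the number of seconds to wait before trying again
	const retryAfterSeconds = Number(response.headers.get('retry-after') ?? 30);
	await new Promise((resolve) => setTimeout(resolve, retryAfterSeconds * 1000));
	// ...then retry the request
}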

Ideal rate limiting handling

We could avoid reaching the rate limit by forcing a fixed amount of wait time between requests.

The limitations with this approach are:

  1. We would be forced to be conservative in our estimations due to how Slack defines its rate limits, so we would end up waiting longer than needed
  2. We would not be taking into account other workflows executing in parallel at the same time, which also consume the Slack API

The ideal approach would be to wait only when the rate limit is actually reached, something that can be implemented in n8n.

We can behave like that because n8n already allows configuring the Slack node to continue even in case of error and return the error output (note the emphasis here).
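
If the Slack node exposed that header in its error output, the Wait node could be parameterized with an expression, something like the following as the “Wait Amount” (with the unit set to seconds). The field path is an assumption on my side, not what the Slack node currently returns:

{{ $json.error.headers['retry-after'] }}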

Slack node limitations

The problem is that, when the Slack API rate limit is reached, the Slack node does not expose the previously mentioned Retry-After HTTP header, so it cannot be used to parameterize the Wait node as an input value. Instead, it only returns the error message.

If we do not modify the Slack node “On Error” setting and leave the default behaviour (Stop Workflow), it shows a little more information.

Node implementation

Taking a look at the node implementation, I thought it could be as simple as adding something like the following code at this point:

else if (response.error === 'ratelimited') {
	const retryAfter = response.headers['retry-after'];

	throw new NodeOperationError(
		this.getNode(),
		'The service is receiving too many requests from you',
		{
			description: `You should wait ${retryAfter} seconds before making another request`,
			level: 'warning',
			messageMapping: { retryAfter },
		},
	);
}

However, after digging a little deeper, configuring n8n locally to reproduce the error, and testing it, that wild guess does not seem to make any sense :sweat_smile:

It seems that the this.helpers.requestWithAuthentication method call already handles the 429 response code and throws a NodeApiError, which does not allow handling the case from the node side.
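
For reference, the direction I was exploring looks roughly like the sketch below. It is untested: the wrapper name is made up, and the assumption that the original response headers are still reachable on the caught error (e.g. through error.cause) is exactly the part I could not confirm.

import { NodeApiError, NodeOperationError } from 'n8n-workflow';
import type { IExecuteFunctions } from 'n8n-workflow';

// Hypothetical wrapper inside the Slack node's GenericFunctions.ts
async function slackApiRequestWithRetryInfo(this: IExecuteFunctions, options: any) {
	try {
		return await this.helpers.requestWithAuthentication.call(this, 'slackApi', options);
	} catch (error) {
		if (error instanceof NodeApiError && error.httpCode === '429') {
			// Assumption: the original response headers are still attached to the caught error
			const retryAfter = (error.cause as any)?.response?.headers?.['retry-after'];

			throw new NodeOperationError(this.getNode(), 'The service is receiving too many requests from you', {
				description: `You should wait ${retryAfter ?? 'an unknown number of'} seconds before making another request`,
				level: 'warning',
			});
		}
		throw error;
	}
}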

Question

I am looking for a little bit of guidance here. I have configured the development environment (congrats, by the way, for having a very straightforward process and instructions) and tried to understand the paradigm behind the nodes' GenericFunctions.ts files and the difference between NodeOperationError and NodeApiError, but I feel like 5 minutes of someone pointing me in the right direction could save me some more hours of digging :grimacing:

Thanks!

Information on your n8n setup

  • n8n version: 1.41.0
  • Database: default
  • n8n EXECUTIONS_PROCESS setting: default
  • Running n8n via: pnpm
  • Operating system: macOS


So is handling the rate limiting the only thing you want to contribute?

There is an easier way to do that within the n8n node. Just go to the node settings, enable “Retry On Fail”, and set “Wait Between Tries” to 5000 ms.

This will be enough for 99% of the rate limits out there, and it won't slow you down at all until you actually hit the limit.

Hope that helps. You should still look at how you would fix it in the code to learn, but that likely wouldn’t get merged anyway.
If you want to contribute I would suggest talking to @Jon or someone first to make sure you aren’t wasting your time.

You can always make your own custom node to add functionality though


Hi @liam!

I hadn’t thought about the “Retry On Fail” approach, and it seems like the most pragmatic one, so I will go that way. Thanks for the idea!

Regarding contributing, I was trying to understand how n8n could expose error details to other nodes. I think the ideal scenario would be to expose them not just for this specific rate limiting use case, but in a broader way that could benefit any other node. However, I completely understand that this might be something you are not looking to implement, so if you think it wouldn’t get merged anyway, I agree there is no point in implementing it :blush:
