Exceeded maximum context length

Hi folks, I got stuck on this error when I ran the workflow. I was using the deepseek-chat model. How can I get this solved? Any suggestions? Thanks a lot!


Bad request - please check your parameters

This model's maximum context length is 65536 tokens. However, you requested 73149 tokens (73149 in the messages, 0 in the completion). Please reduce the length of the messages or completion.

```json
{
  "errorMessage": "Bad request - please check your parameters",
  "errorDescription": "This model's maximum context length is 65536 tokens. However, you requested 73149 tokens (73149 in the messages, 0 in the completion). Please reduce the length of the messages or completion.",
  "errorDetails": {},
  "n8nDetails": {
    "time": "08/03/2025, 13:16:23",
    "n8nVersion": "1.80.4 (Cloud)",
    "binaryDataMode": "filesystem",
    "cause": {
      "status": 400,
      "headers": {
        "access-control-allow-credentials": "true",
        "cf-cache-status": "DYNAMIC",
        "cf-ray": "91cfdc2e2a7dd36a-FRA",
        "connection": "keep-alive",
        "content-length": "289",
        "content-type": "application/json",
        "date": "Sat, 08 Mar 2025 05:16:23 GMT",
        "server": "cloudflare",
        "set-cookie": "HWWAFSESID=fb6339761dc74e03e0; path=/, HWWAFSESTIME=1741410980121; path=/, __cf_bm=3nqr4jbeN.eM3nnjVhW4u60DFTD7wySmMuhalM5f1rw-1741410983-1.0.1.1-XHzDcTrFlXgZV800U06PLxfMNE48YYeQE0zc4cQylSvGbHLXpd3mjAoYatMmt_B9aiZVcl8y.6coeQdtH4BOZj.wvEqNCoTYJ38ir2ym6Yk; path=/; expires=Sat, 08-Mar-25 05:46:23 GMT; domain=.deepseek.com; HttpOnly; Secure; SameSite=None",
        "strict-transport-security": "max-age=31536000; includeSubDomains; preload",
        "vary": "origin, access-control-request-method, access-control-request-headers",
        "x-content-type-options": "nosniff",
        "x-ds-trace-id": "d5606a5e1bd5d97ca4a235951f3581da"
      },
      "error": {
        "message": "This model's maximum context length is 65536 tokens. However, you requested 73149 tokens (73149 in the messages, 0 in the completion). Please reduce the length of the messages or completion.",
        "type": "invalid_request_error",
        "param": null,
        "code": "invalid_request_error"
      },
      "code": "invalid_request_error",
      "param": null,
      "type": "invalid_request_error",
      "attemptNumber": 1,
      "retriesLeft": 2
    }
  }
}
```

That means the input tokens you are sending with your request exceed DeepSeek's limit. If you can reduce the number of tokens somehow, that would help; otherwise you would have to switch to a model with a larger context window, such as GPT-4o, Gemini 2.0 Flash, or Claude 3.7.
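One common way to reduce the token count is to drop the oldest chat turns before each request. Here is a minimal sketch of that idea; it uses a rough chars/4 token estimate (the exact DeepSeek tokenizer isn't bundled here), and the budget numbers are illustrative, taken from the 65536-token limit in the error above.

```python
MAX_CONTEXT_TOKENS = 65536   # deepseek-chat limit from the error message
RESERVED_FOR_REPLY = 4096    # leave room for the completion

def estimate_tokens(text: str) -> int:
    # Very rough heuristic: ~4 characters per token for English text.
    return max(1, len(text) // 4)

def trim_messages(messages, budget=MAX_CONTEXT_TOKENS - RESERVED_FOR_REPLY):
    """Keep a leading system prompt (if any) and drop the oldest turns
    until the estimated total fits inside the budget."""
    kept = list(messages)

    def total(msgs):
        return sum(estimate_tokens(m["content"]) for m in msgs)

    # Preserve a leading system message while trimming from the front.
    start = 1 if kept and kept[0]["role"] == "system" else 0
    while total(kept) > budget and len(kept) > start + 1:
        kept.pop(start)
    return kept
```

In n8n you could put equivalent logic in a Code node before the model node, or lower the "context window length" on the chat memory node so fewer past turns are sent.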

Thank you! Does n8n support Grok (xAI) ?

I mean you can use any model that is supported by OpenRouter - so yes :slight_smile: Models: 'grok' | OpenRouter

I tried to create the credential for Grok (xAI). I used the endpoint https://api.x.ai as stated in the official Grok documentation, and I generated an API key for it, but it returns a "not found" error. How can this be solved?
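For what it's worth, xAI exposes an OpenAI-compatible API whose base URL includes a /v1 path (https://api.x.ai/v1); pointing a credential at the bare https://api.x.ai root is a plausible cause of a "not found" response. A minimal sketch of the request shape, with the model name "grok-2-latest" used purely as an illustration:

```python
import json
import urllib.request

BASE_URL = "https://api.x.ai/v1"  # note the /v1 suffix

def build_chat_request(api_key: str, model: str, messages):
    """Prepare (but do not send) an OpenAI-style chat completion request."""
    url = f"{BASE_URL}/chat/completions"
    body = json.dumps({"model": model, "messages": messages}).encode()
    return urllib.request.Request(
        url,
        data=body,
        headers={
            "Authorization": f"Bearer {api_key}",
            "Content-Type": "application/json",
        },
        method="POST",
    )

# To actually send it (requires a valid key):
# req = build_chat_request("XAI_API_KEY", "grok-2-latest",
#                          [{"role": "user", "content": "Hello"}])
# with urllib.request.urlopen(req) as resp:
#     print(json.load(resp))
```

In an n8n OpenAI-style credential this corresponds to setting the base URL to https://api.x.ai/v1 rather than https://api.x.ai.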

Could you share your workflow in these code blocks? It would be easier to check it that way:

copy the json of the workflow in here