AI Agent rate limit. Possible solutions?

Hey, please excuse my ignorance. I'm learning one step at a time and have basically zero knowledge.

I have an AI agent that keeps hitting its rate limit. Essentially, the agent uses player stats retrieved from Pinecone to come up with an optimal team. The index has ~700 player entries (from CSV/Excel) with a few stats each.

Forgive my ignorance, but how does this work, and how can I optimise or limit API calls?
I read that maybe I can have the sheets loaded into the workflow for easier retrieval?
Or maybe use a cache system for agent recall?
Or a wait-and-batch system?
I have no idea how to do any of the above.
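For what it's worth, the "cache system" idea can be sketched as a simple in-memory lookup that skips the (rate-limited) retrieval call when the same query comes in again. This is a hypothetical, language-agnostic sketch, not tied to n8n or Pinecone APIs — `cached_retrieve` and `fake_retrieve` are made-up names for illustration:

```python
# Minimal retrieval-cache sketch: identical queries are answered from the
# cache instead of repeating the expensive retrieval call.
cache = {}

def cached_retrieve(query, retrieve):
    if query not in cache:
        cache[query] = retrieve(query)  # only runs on a cache miss
    return cache[query]

calls = []
def fake_retrieve(q):
    # Stand-in for the real (rate-limited) retrieval call.
    calls.append(q)
    return f"stats for {q}"

cached_retrieve("round 1 forwards", fake_retrieve)
cached_retrieve("round 1 forwards", fake_retrieve)  # served from cache
print(len(calls))  # the underlying retrieval ran only once: 1
```

The same idea works at the workflow level: store previous answers keyed by the incoming question and only hit the agent on a miss.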

Error message at the AI Agent output:
Bad request - please check your parameters

This model’s maximum context length is 128000 tokens. However, your messages resulted in 317882 tokens (317807 in the messages, 75 in the functions). Please reduce the length of the messages or functions.
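Note that this is a context-length error, not a rate limit as such: the numbers line up with the `topK: 800` setting on the Pinecone tool below, since every matched chunk gets pasted into the prompt. A rough back-of-envelope, assuming a purely illustrative ~400 tokens per retrieved player chunk:

```python
# Why topK=800 can overflow a 128,000-token context window.
# tokens_per_chunk is an assumed, illustrative figure, not measured.
CONTEXT_LIMIT = 128_000
tokens_per_chunk = 400
top_k = 800

retrieved_tokens = top_k * tokens_per_chunk
print(retrieved_tokens)                   # 320000
print(retrieved_tokens > CONTEXT_LIMIT)   # True
```

320,000 estimated tokens is in the same ballpark as the 317,882 the error reports, which points at the retrieval size rather than the model or the agent itself.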

Any help or input will be well received :slight_smile:

{
  "nodes": [
    {
      "parameters": {
        "promptType": "define",
        "text": "={{ $json.text }}{{ $('Telegram Trigger').item.json.message.text }}",
        "options": {
          "systemMessage": "=# ROLE\nYou are a coach thats very accurate and descriptive.\nBuild the best team with the least amount of value\n\n# ADDITIONAL INFORMATION\nYou are currently chatting to {{ $('Telegram Trigger').item.json.message.chat.first_name }}\nThe current time is {{ $now }}\n\n\n\n\n"
        }
      },
      "type": "@n8n/n8n-nodes-langchain.agent",
      "typeVersion": 1.7,
      "position": [-320, -140],
      "id": "81ba6408-010f-40d3-9f8c-5a3a2e0606e4",
      "name": "AI Agent"
    },
    {
      "parameters": {
        "options": {}
      },
      "type": "@n8n/n8n-nodes-langchain.lmChatOpenAi",
      "typeVersion": 1.1,
      "position": [-480, 40],
      "id": "bee5b829-7434-469d-975f-ae8bf4c30ca2",
      "name": "OpenAI Chat Model",
      "credentials": {
        "openAiApi": {
          "id": "cNxdRpV7xFk9HiQA",
          "name": "OpenAi account"
        }
      }
    },
    {
      "parameters": {
        "sessionIdType": "customKey",
        "sessionKey": "{{ \"my_test_session\" }}",
        "contextWindowLength": 20
      },
      "type": "@n8n/n8n-nodes-langchain.memoryBufferWindow",
      "typeVersion": 1.3,
      "position": [-340, 40],
      "id": "5e23c236-b4b8-489a-85ae-9716e9f77af0",
      "name": "Window Buffer Memory"
    },
    {
      "parameters": {
        "options": {}
      },
      "type": "@n8n/n8n-nodes-langchain.toolSerpApi",
      "typeVersion": 1,
      "position": [0, 180],
      "id": "26020fd1-f1dd-4542-8b08-aa08db1cf56d",
      "name": "SerpAPI",
      "credentials": {
        "serpApi": {
          "id": "3EynmrJqgxK7vhVg",
          "name": "SerpAPI account"
        }
      }
    },
    {
      "parameters": {
        "mode": "retrieve-as-tool",
        "toolName": "aflplayerdata",
        "toolDescription": "Use this tool to retrieve player data for each round",
        "pineconeIndex": {
          "__rl": true,
          "value": "aflplayerdata",
          "mode": "list",
          "cachedResultName": "aflplayerdata"
        },
        "topK": 800,
        "options": {
          "pineconeNamespace": "aflplayerdata"
        }
      },
      "type": "@n8n/n8n-nodes-langchain.vectorStorePinecone",
      "typeVersion": 1,
      "position": [-360, 200],
      "id": "27ffb9db-a63e-498e-98f1-ec7a24820e6e",
      "name": "Pinecone Vector Store",
      "credentials": {
        "pineconeApi": {
          "id": "tTOxiTQ7QswlbGjR",
          "name": "PineconeApi account"
        }
      }
    },
    {
      "parameters": {
        "options": {}
      },
      "type": "@n8n/n8n-nodes-langchain.embeddingsOpenAi",
      "typeVersion": 1.2,
      "position": [-360, 400],
      "id": "2b6f0c55-6fc2-4d5a-860b-10c81cea8fd4",
      "name": "Embeddings OpenAI",
      "credentials": {
        "openAiApi": {
          "id": "cNxdRpV7xFk9HiQA",
          "name": "OpenAi account"
        }
      }
    }
  ],
  "connections": {
    "AI Agent": {
      "main": []
    },
    "OpenAI Chat Model": {
      "ai_languageModel": [
        [
          {
            "node": "AI Agent",
            "type": "ai_languageModel",
            "index": 0
          }
        ]
      ]
    },
    "Window Buffer Memory": {
      "ai_memory": [
        [
          {
            "node": "AI Agent",
            "type": "ai_memory",
            "index": 0
          }
        ]
      ]
    },
    "SerpAPI": {
      "ai_tool": [
        [
          {
            "node": "AI Agent",
            "type": "ai_tool",
            "index": 0
          }
        ]
      ]
    },
    "Pinecone Vector Store": {
      "ai_tool": [
        [
          {
            "node": "AI Agent",
            "type": "ai_tool",
            "index": 0
          }
        ]
      ]
    },
    "Embeddings OpenAI": {
      "ai_embedding": [
        [
          {
            "node": "Pinecone Vector Store",
            "type": "ai_embedding",
            "index": 0
          }
        ]
      ]
    }
  },
  "pinData": {},
  "meta": {
    "templateCredsSetupCompleted": true,
    "instanceId": "eb2c15c696c62a085738e894875b46152ece38a92529951a19427c0783ab12e1"
  }
}

It looks like your topic is missing some important information. Could you provide the following, if applicable?

  • n8n version:
  • Database (default: SQLite):
  • n8n EXECUTIONS_PROCESS setting (default: own, main):
  • Running n8n via (Docker, npm, n8n cloud, desktop app):
  • Operating system:

I think I solved it by reducing the Pinecone limit number. Not sure what it means, but it worked. :slight_smile:
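For anyone landing here later: the "limit number" is the `topK` parameter on the Pinecone Vector Store tool, which caps how many matched chunks are returned and pasted into the model's context. Lowering it from 800 to something small keeps the prompt under the context limit. A sketch of the changed parameter (40 is an illustrative value, not a recommendation):

```json
"parameters": {
  "mode": "retrieve-as-tool",
  "topK": 40
}
```

The trade-off is that the model only ever sees the `topK` best-matching players per query, so optimising across all ~700 entries may be better done outside the agent (e.g. pre-filtering the sheet in the workflow) rather than by cranking `topK` back up.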

This topic was automatically closed 90 days after the last reply. New replies are no longer allowed.