The model returned the following errors: Input is too long for requested model.

I am getting this error when the MCP Client node is trying to fetch some details and the output is too long, so Bedrock is unable to process the long input coming from the MCP client. Can you help me figure out what I can do? For example, sometimes it has to fetch pod logs, which are very large, and that makes Bedrock return the "input is too long" error.
If the model cannot process that much input, then the model cannot process that much input - that's a fact, and we cannot make this model accept more.
Your options are:
- don’t use a model that cannot handle that much input
- don’t send that much input to the model
When you say that sometimes the MCP client needs to capture large chunks of textual information, what do you need to do with it after it is obtained? What I am trying to figure out is which path is more applicable for you: giving up on that model, or preprocessing the information that comes back before you feed it to the model, so that it fits.
I need a solution for preprocessing the information before it reaches the model.
Ok, so the MCP client gets the logs from the MCP server. One way to solve the issue is to make the MCP server not return so much data, which is something you may or may not be able to change. If you can, changing this on the MCP server side would solve the issue - for example, make it return no more than X number of characters.
Another solution would be to use the Call n8n Workflow Tool, which calls a separate workflow. In that second workflow you would have a separate MCP Client node that gets the large output from the MCP server, and then you can do whatever you want with that info before returning it to the LLM in the main workflow.
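In that second workflow, a Code node placed right after the MCP Client node is usually the simplest place to shrink the output. A minimal sketch, assuming the raw tool output sits in a `content` field and that a plain character budget is enough (both the field name and the limit are assumptions, adjust them to whatever your MCP server actually returns):

```javascript
// Code node ("Run Once for All Items") placed after the MCP Client node.
// Assumption: each item carries the raw tool output in item.json.content.
const MAX_CHARS = 20000; // rough character budget so the result fits the model's context

return $input.all().map((item) => {
  const text = typeof item.json.content === 'string'
    ? item.json.content
    : JSON.stringify(item.json);

  return {
    json: {
      content: text.length > MAX_CHARS
        ? text.slice(0, MAX_CHARS) + '\n...[truncated]'
        : text,
    },
  };
});
```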
Thanks for the solution. I would like to use the second one, the Call n8n Workflow Tool. How can I summarise the output of the MCP client - can you also help me with that? And how will I receive the input in the second workflow?
How to summarize your logs? I don’t know - they are your logs.
If you want, you can take every second line, remove duplicates, drop certain records based on something in their content, or use any other way to reduce the text size. However, the size-reduction strategy is impossible for me to guess, because I don’t know your data.
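Purely as an illustration of those strategies, a Code node like the one below drops consecutive duplicate lines and keeps only lines matching a pattern; the `content` field name and the pattern are assumptions about your data, so treat this as a sketch rather than something to copy as-is:

```javascript
// Code node: reduce log text before it goes back to the LLM.
// Assumptions: the logs arrive as one string in item.json.content,
// and only lines mentioning errors/warnings are interesting.
const KEEP = /error|warn|fail|exception/i;

return $input.all().map((item) => {
  const lines = String(item.json.content ?? '').split('\n');

  const reduced = [];
  let previous = null;
  for (const line of lines) {
    const isDuplicate = line === previous;
    previous = line;                  // remember the raw previous line
    if (isDuplicate) continue;        // remove consecutive duplicates
    if (!KEEP.test(line)) continue;   // keep only relevant records
    reduced.push(line);
  }

  return { json: { content: reduced.join('\n') } };
});
```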
As to how to return the data to the main workflow - the output of the last node in the sub-workflow is what gets returned. See here for more info.