The error is most likely caused by the model you are using. Under the hood, the LLM follows the generated prompt and returns the matched results, which n8n then displays as the node's output. So the quick answer is to try a more powerful LLM.
Alternatively, if you find this happening often, it could be that you're trying to match too many things in one text classifier. My suggestion is to break the categories down by context, which makes each classifier simpler to reason about. In the following example, I added a simple split between existing and new customers.
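To illustrate the idea (not how n8n implements it internally, since the Text Classifier node does this via LLM prompts rather than code), here is a toy keyword-based sketch of the same two-stage split. All function names and keywords are made up for the example:

```python
# Toy sketch of splitting one large classifier into two smaller,
# context-specific stages: first route by customer type, then classify
# within that narrower context. Purely illustrative, not n8n internals.

def route_customer_type(text: str) -> str:
    """Stage 1: coarse split -- existing vs. new customer."""
    existing_markers = ("my account", "my order", "my subscription")
    return "existing" if any(m in text.lower() for m in existing_markers) else "new"

def classify_existing(text: str) -> str:
    """Stage 2a: categories that only apply to existing customers."""
    lowered = text.lower()
    if "refund" in lowered:
        return "refund_request"
    if "cancel" in lowered:
        return "cancellation"
    return "support"

def classify_new(text: str) -> str:
    """Stage 2b: categories that only apply to new customers."""
    lowered = text.lower()
    if "price" in lowered or "cost" in lowered:
        return "pricing_question"
    return "sales_enquiry"

def classify(text: str) -> str:
    """Chain the two stages, so each one only reasons about a small set."""
    if route_customer_type(text) == "existing":
        return classify_existing(text)
    return classify_new(text)
```

Because each stage only has to distinguish a handful of categories, the equivalent LLM prompt is shorter and less ambiguous, which is exactly why the split helps.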
Naturally, GPT-5 would be the obvious upgrade, but Claude 4.5 and Gemini 3.0 would be powerful options as well.
In theory, I don't think there is a maximum, as long as all your categories remain mutually exclusive, which is very difficult if not near impossible in practice. In any real user ("human") query, there is bound to be some degree of overlap between categories, which confuses the AI, so you can never be fully certain of the result.
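A tiny sketch of why exclusivity breaks down in practice. The categories and keywords below are invented for illustration; the point is just that a single query can legitimately match more than one label, which is the ambiguity the classifier has to guess its way through:

```python
# Illustrative only: overlapping category definitions mean some queries
# have no single "correct" label, no matter how good the model is.

CATEGORY_KEYWORDS = {
    "billing": {"invoice", "charge", "refund"},
    "cancellation": {"cancel", "refund", "close"},  # "refund" overlaps billing
    "technical": {"error", "bug", "crash"},
}

def matching_categories(query: str) -> list[str]:
    """Return every category whose keywords appear in the query."""
    words = set(query.lower().split())
    return [cat for cat, kws in CATEGORY_KEYWORDS.items() if words & kws]

# "can I cancel and get a refund" matches both billing and cancellation,
# so any single-label classifier is forced to pick one arbitrarily.
```

The more categories you add, the more of these overlap zones you create, which is why breaking the problem into smaller, context-scoped classifiers tends to be more reliable than one giant one.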