Currently, the Text Classifier node injects a large (~700-token) explanation of the expected JSON output into the prompt behind the scenes. This causes unnecessary token bloat, which increases API costs: a request that would otherwise use around 100 tokens can balloon to roughly 1,000. The custom prompt option does not change this. Ideally, we should be able to customize the entire prompt, or optionally remove the injected portion. Making it optional would keep the default behavior safe for basic use and for people who don't care about the overhead.
My use case:
Saving on API costs. The Text Classifier node uses 700+ additional tokens per call due to the injected prompt, while a similar workflow built with Message A Model stays under 200 tokens.
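To make the cost impact concrete, here is a minimal back-of-the-envelope sketch of what the injected prompt costs at volume. The per-1k-token price is a hypothetical placeholder (substitute your model's actual input rate); the 700-token overhead figure is taken from the observation above.

```python
# Rough estimate of the extra API cost caused by the injected prompt.
PRICE_PER_1K_INPUT_TOKENS = 0.01  # USD, hypothetical placeholder rate
INJECTED_OVERHEAD_TOKENS = 700    # extra tokens per classification call

def overhead_cost(calls: int) -> float:
    """Extra input-token cost attributable to the injected prompt."""
    return calls * INJECTED_OVERHEAD_TOKENS / 1000 * PRICE_PER_1K_INPUT_TOKENS

# At 10,000 classification calls, the overhead alone is:
print(f"${overhead_cost(10_000):.2f}")  # → $70.00
```

The point is that the overhead scales linearly with call volume, so for high-throughput classification workflows the injected prompt dominates the per-call cost.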
I think it would be beneficial to add this because: it cuts per-call token usage and API costs, and it gives advanced users full control over the prompt that is actually sent to the model.