The idea is:
- Logit bias (`logitBias`) is a valuable parameter that controls the weight placed on certain tokens in a model's output
- Currently, the OpenAI and Azure OpenAI nodes do not support logit bias as a parameter
- This should be supported, as it is extremely valuable for scenarios such as classification, decision-making, etc.
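As a sketch of what the parameter does at the API level: `logit_bias` is a map from token IDs to bias values, and a bias of 100 effectively forces the model to emit one of the biased tokens. The token IDs below are placeholders, not verified IDs; the real IDs depend on the model's tokenizer and should be looked up with a tool such as tiktoken.

```javascript
// Build a logit_bias map that pushes the model toward a fixed set of tokens.
// A bias of 100 effectively guarantees one of these tokens is emitted;
// a bias of -100 would effectively ban them.
function buildLogitBias(tokenIds, bias = 100) {
  const logitBias = {};
  for (const id of tokenIds) {
    logitBias[String(id)] = bias;
  }
  return logitBias;
}

// Placeholder token IDs standing in for "true" and "false" —
// look the real IDs up with a tokenizer for the model you are calling.
const requestBody = {
  model: 'gpt-4o-mini',
  messages: [
    { role: 'user', content: 'Should the bot reply? Answer true or false.' },
  ],
  max_tokens: 1, // the answer is a single token
  logit_bias: buildLogitBias([1904, 3934]),
};

console.log(JSON.stringify(requestBody.logit_bias)); // {"1904":100,"3934":100}
```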
My use case:
- I have a Slack bot that responds to threads when it is directly addressed—OR when its input is likely to be valuable
- Building a simple LLM chain with a logit bias of 100 on the true and false tokens improves accuracy significantly, uses fewer tokens, and improves response time (as the output is only one token)
- This also works extremely well for labeling things like emails (bias the tokens for labels 1, 2, 3, 4, 5, etc. to 100)
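The labeling case is the same pattern with one token per candidate label. A minimal sketch, where the single-digit token IDs are hypothetical placeholders to be replaced with the real IDs from the target model's tokenizer:

```javascript
// Map each candidate label to its (placeholder) token ID, then bias all of
// them to 100 so the model's single output token must be one of the labels.
const labelTokenIds = { '1': 16, '2': 17, '3': 18, '4': 19, '5': 20 };

const logitBias = Object.fromEntries(
  Object.values(labelTokenIds).map((id) => [String(id), 100])
);

console.log(logitBias); // { '16': 100, '17': 100, '18': 100, '19': 100, '20': 100 }
```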
I think it would be beneficial to add this because:
- See above
Any resources to support this?
- https://help.openai.com/en/articles/5247780-using-logit-bias-to-alter-token-probability-with-the-openai-api
- GitHub: mshumer/openai-logit-bias-classification-walkthrough (how to use logit bias with OpenAI models to create highly powerful classifiers in minutes)
Are you willing to work on this?
Yes—but maybe someone more familiar with the library can take this and run with it:
For the true / false classifier, I just used an extra LangChain Code node with the following:
```javascript
const { ChatOpenAI } = require('langchain/chat_models/openai');

// Grab the connected language model so its configuration can be reused
const llmInput = await this.getInputConnectionData('ai_languageModel', 0);

// Token IDs for the true/false tokens; biasing both to 100 restricts
// the output to one of these two tokens
const logitBias = {
  '1904': 100,
  '3934': 100,
};

// Re-create the model with the same settings plus the logit bias
const model = new ChatOpenAI({
  ...llmInput.lc_kwargs,
  logitBias,
});

return model;
```
If logit bias were exposed as a parameter with an input option in the dropdown, this workaround would no longer be needed and a lot of unnecessary complexity would disappear.