The idea is:
The `logprobs` parameter offers interpretability into an LLM's response, and it is exposed by most major LLM provider APIs (e.g., OpenAI's Chat Completions API).
My use case:
In my own use I need this parameter to evaluate the LLM's hallucination (low token log-probabilities can signal low-confidence output), and because this is a low-code workflow, it is hard for me to implement this feature on my own. A minimal sketch of what I'd like to be able to do is below.
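For illustration only, here is a rough sketch of the kind of hallucination check I have in mind, written directly against the OpenAI Python SDK (the model name and the confidence threshold are arbitrary examples, and the mean-logprob heuristic is just one possible signal, not this project's API):

```python
# Sketch: request token logprobs from the OpenAI Chat Completions API and
# use the mean token log-probability as a rough confidence signal.
import math
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

resp = client.chat.completions.create(
    model="gpt-4o-mini",  # illustrative model choice
    messages=[{"role": "user", "content": "What year did the Eiffel Tower open?"}],
    logprobs=True,        # ask the API to return per-token logprobs
    top_logprobs=3,       # also return the 3 most likely alternative tokens
)

tokens = resp.choices[0].logprobs.content
mean_logprob = sum(t.logprob for t in tokens) / len(tokens)

print(f"answer: {resp.choices[0].message.content!r}")
print(f"mean token logprob: {mean_logprob:.3f} "
      f"(perplexity ~ {math.exp(-mean_logprob):.2f})")

# A low average logprob can flag answers worth double-checking.
if mean_logprob < -1.0:   # threshold is an arbitrary example
    print("low confidence -> flag for possible hallucination")
```

Today I cannot do anything like this inside the workflow, because the chain nodes do not pass the `logprobs` request parameter through or surface the logprobs field from the response.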
I think it would be beneficial to add this because:
The project's basic LLM chain does not expose nearly as many request and response parameters as the LLM providers' SDKs list in their docs, and this limits how fully we can use our models inside the workflow.
Any resources to support this?
https://platform.openai.com/docs/api-reference/chat/create#chat-create-logprobs
Are you willing to work on this?
yeah