LLM hallucinating, need help

I'm currently running into LLM hallucination issues when working with food data. Specifically, I'm providing GPT-4 with a meal's calorie and macronutrient totals, along with a list of its ingredients. The goal is for the model to estimate the weight (in grams) of each ingredient so that the implied calorie and macronutrient breakdown (protein, carbohydrates, and lipids) stays within a 5% margin of error of the given totals (a sketch of the check I'm trying to satisfy is below). Despite trying various models and reasoning approaches, I'm not getting the accuracy I need. Can anyone help me address this challenge? I need a solution urgently.
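For context, the validation I'm aiming for looks roughly like this sketch. The ingredient names, per-100 g nutrient values, and estimated weights are hypothetical placeholders (in practice the per-100 g data would come from a nutrition database, and the weights would be parsed from the model's response):

```python
TOLERANCE = 0.05  # 5% margin of error

# Target totals for the meal (hypothetical example values)
target = {"kcal": 650, "protein_g": 40, "carbs_g": 70, "fat_g": 20}

# Per-100 g nutrient data for each ingredient (hypothetical values)
per_100g = {
    "chicken breast": {"kcal": 165, "protein_g": 31, "carbs_g": 0, "fat_g": 3.6},
    "white rice":     {"kcal": 130, "protein_g": 2.7, "carbs_g": 28, "fat_g": 0.3},
    "olive oil":      {"kcal": 884, "protein_g": 0, "carbs_g": 0, "fat_g": 100},
}

# Gram weights returned by the model, e.g. parsed from a JSON response
estimated_weights = {"chicken breast": 120, "white rice": 180, "olive oil": 10}

def totals(weights: dict, nutrients: dict) -> dict:
    """Sum the calories and macros implied by the estimated gram weights."""
    result = {"kcal": 0.0, "protein_g": 0.0, "carbs_g": 0.0, "fat_g": 0.0}
    for name, grams in weights.items():
        for key in result:
            result[key] += nutrients[name][key] * grams / 100.0
    return result

def within_tolerance(actual: dict, expected: dict, tol: float = TOLERANCE) -> bool:
    """True if every nonzero target value is matched within the relative tolerance."""
    return all(
        abs(actual[k] - expected[k]) <= tol * expected[k]
        for k in expected
        if expected[k] > 0
    )

computed = totals(estimated_weights, per_100g)
print(computed)
print("within 5%:", within_tolerance(computed, target))
```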

Can you provide your prompt?

Along with your workflow, so we can better understand your flow and setup…
