OpenAI is trying to cut down chatbot ‘hallucinations’

OpenAI engineers are working to improve the software, but until then, chatbot hallucinations are likely to keep occurring


Tech Desk June 02, 2023

One of the issues facing ChatGPT is chatbot hallucination: the bot's tendency to make up information and present it as fact.

Numerous complaints have been making the rounds on social media about chatbot hallucinations getting people into trouble. In one recent case, a New York City lawyer cited court cases suggested by ChatGPT that turned out never to have existed. The lawyer may now face sanctions for presenting false information as fact.


Recently, CNBC reported on research by OpenAI addressing this issue. The report states: "Even state-of-the-art models are prone to producing falsehoods — they exhibit a tendency to invent facts in moments of uncertainty. These hallucinations are particularly problematic in domains that require multi-step reasoning since a single logical error is enough to derail a much larger solution."

To counter such issues and make the chatbot more reliable, OpenAI engineers are currently focusing on improving its software. The new strategy, which OpenAI calls "process supervision", is to train AI models with a reward for each correct reasoning step on the way to an answer, rather than rewarding only a correct final conclusion ("outcome supervision").
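The toy Python sketch below illustrates the difference in where the reward signal attaches under the two schemes. The function names and step labels are hypothetical illustrations for this article, not OpenAI's actual training code.

```python
# Toy illustration of outcome- vs process-supervised rewards.
# Hypothetical names; not OpenAI's implementation.

from typing import List

def outcome_reward(steps: List[bool]) -> List[float]:
    """Outcome supervision: one reward for the whole chain,
    based only on whether the final step (the answer) is correct."""
    final_correct = steps[-1]
    return [1.0 if final_correct else 0.0] * len(steps)

def process_reward(steps: List[bool]) -> List[float]:
    """Process supervision: each reasoning step is rewarded
    individually, so an early logical error is penalised even
    when the final answer happens to look right."""
    return [1.0 if ok else 0.0 for ok in steps]

# A reasoning chain where step 2 contains a logical error
# but the final answer still checks out.
chain = [True, False, True, True]

print(outcome_reward(chain))  # [1.0, 1.0, 1.0, 1.0] -- error goes unnoticed
print(process_reward(chain))  # [1.0, 0.0, 1.0, 1.0] -- error flagged at step 2
```

This mirrors the point in the quoted research: under outcome supervision, a single logical error mid-chain can go unpenalised as long as the final answer looks right, whereas process supervision flags the error at the step where it occurs.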

While the improvements will take time, chatbot hallucinations are likely to keep occurring in the meantime.
