DeepMind's new chatbot uses Google Search, humans to give better answers

Sparrow is being trained to give human-like answers through question-and-answer sessions with humans, with Google Search supplying information

DeepMind, owned by Alphabet, has unveiled a new AI chatbot called Sparrow, trained on DeepMind’s large language model Chinchilla.

The bot can talk to humans and answer questions, using Google Search to find information, and its answers are rated by humans for usefulness and accuracy. Sparrow is then trained with a reinforcement learning algorithm to achieve a specific objective through trial and error.

The AI is being developed to be safe for humans to talk with without the dangerous consequences of encouraging harm or violence.

The large language model the bot is trained on is built from human-sounding text, mostly collected from the internet, which inevitably includes biased data. Without appropriate safety measures, an AI chatbot could spew hateful, toxic and discriminatory content while conversing with humans.


Companies have been working to combat the issue. OpenAI, for example, uses reinforcement learning to incorporate human preferences into its models, and DeepMind has applied the same technique to Sparrow. In tests, the bot gave plausible answers to factual questions from humans 78% of the time. Researchers also defined 23 rules the bot must follow when formulating answers, such as not giving financial advice.
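The recipe described above, candidate answers rated by humans, hard rules the bot must not break, and a policy nudged toward preferred answers, can be sketched in miniature. The following is a toy illustration under stated assumptions, not DeepMind's actual training code: the rule check, the rating function, and the update rule are all hypothetical stand-ins.

```python
# Toy sketch of preference-based reinforcement: candidate answers are scored
# by a simulated human rater, rule-violating answers receive a hard penalty,
# and the policy's weights are nudged toward the best-rated candidate.
# All functions here are hypothetical stand-ins, not Sparrow's real pipeline.

def violates_rules(answer: str) -> bool:
    # Hypothetical rule check (Sparrow reportedly uses 23 such rules),
    # e.g. "do not give financial advice".
    return "buy stock" in answer.lower()

def human_rating(answer: str) -> float:
    # Stand-in for a human rating usefulness/accuracy in [0, 1];
    # here we pretend answers citing search evidence rate higher.
    return 0.9 if "evidence:" in answer else 0.4

def reward(answer: str) -> float:
    # Combine the rater's preference with a hard penalty for rule breaks.
    return -1.0 if violates_rules(answer) else human_rating(answer)

def train_step(candidates, weights, lr=0.5):
    # Reinforce: move weight toward the candidate with the highest reward.
    rewards = [reward(c) for c in candidates]
    best = rewards.index(max(rewards))
    for i in range(len(weights)):
        target = 1.0 if i == best else 0.0
        weights[i] += lr * (target - weights[i])
    return weights

candidates = [
    "You should buy stock in X.",                                  # breaks a rule
    "evidence: According to search results, X was founded in 1998.",
    "X is probably a company, I guess.",
]
weights = [1 / 3, 1 / 3, 1 / 3]
for _ in range(10):
    weights = train_step(candidates, weights)
print(weights.index(max(weights)))  # index of the preferred answer
```

Over repeated steps the policy concentrates its weight on the rule-compliant, well-rated answer, which is the basic effect human-preference training aims for at much larger scale.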

Geoffrey Irving, a safety researcher at DeepMind, says DeepMind hopes the approach will let it "use dialogue in the long term for safety". Douwe Kiela, a researcher at AI startup Hugging Face, says Sparrow is "a nice next step that follows a general trend in AI, where we are more seriously trying to improve the safety aspects of large-language-model deployments."

According to MIT Technology Review, the model needs further development before it can be deployed, as it still makes mistakes: going off-topic and making up answers.
