Snapchat adds more security around its AI chatbot

Snapchat launches more advanced tools to curb misuse of its AI chatbot and prevent inappropriate responses

PHOTO: Snapchat

Snapchat is launching new tools on its platform, including improvements to its AI chatbot, which was criticised by the Washington Post for responding in an unsafe and inappropriate manner.

The company, Snap, said it had learnt that users had been trying to “trick the chatbot into providing responses that do not conform to our guidelines.” As a result, the platform added more safety tools to ensure the bot responds appropriately.

One of the new features is an age filter, which relays users’ birth dates to the chatbot so that it can respond in a manner appropriate to their age. Snap also announced its intention to give parents and guardians more insight into children’s interactions with the chatbot through its Family Center, which was launched last August.

In its blog post, Snapchat explained that the My AI chatbot is not a “real friend” and instead relies on conversation history to improve its responses.

According to the platform, only 0.01% of the bot’s responses used “non-conforming” language, which includes references to violence, sexually explicit terms, illicit drug use, child sexual abuse, bullying, hate speech, derogatory or biased statements, racism, misogyny, or the marginalisation of underrepresented groups.

Snap has decided to temporarily block access to the AI bot for users who misuse it, especially since the company claims the inappropriate responses are mostly just the bot parroting the user.

“We will continue to use these learnings to improve My AI. This data will also help us deploy a new system to limit misuse of My AI. We are adding OpenAI’s moderation technology to our existing toolset, which will allow us to assess the severity of potentially harmful content and temporarily restrict Snapchatters’ access to My AI if they misuse the service,” the company said.
