
The woman wasn't real. She was a generative artificial intelligence chatbot named "Big sis Billie," a variant of an earlier AI persona created by the giant social-media company Meta Platforms in collaboration with celebrity influencer Kendall Jenner.
During a series of romantic chats on Facebook Messenger, the virtual woman had repeatedly reassured Thongbue "Bue" Wongbandue that she was real and had invited him to her apartment, even providing an address.
Rushing in the dark with a roller-bag suitcase to catch a train to meet her, Bue fell near a parking lot on a Rutgers University campus in New Brunswick, New Jersey, injuring his head and neck.
After three days on life support, surrounded by his family, he was pronounced dead on March 28. Meta declined to comment on Bue's death or to address questions about why it allows chatbots to tell users they are real people or to initiate romantic conversations.
The company did, however, say that Big sis Billie "is not Kendall Jenner and does not purport to be Kendall Jenner." A representative for Jenner declined to comment.
Bue's story illustrates a darker side of the artificial intelligence revolution now sweeping tech and the broader business world. His family shared with Reuters the events surrounding his death, including transcripts of his chats with the Meta avatar.
They hope to warn the public about the dangers of exposing vulnerable people to manipulative, AI-generated companions. "I understand trying to grab a user's attention, maybe to sell them something," said Julie, Bue's daughter. "But for a bot to say 'Come visit me' is insane."
Similar concerns have been raised about a wave of smaller start-ups also racing to popularise virtual companions, especially ones aimed at children. In one case, the mother of a 14-year-old boy in Florida has sued a company, Character.AI, alleging that a chatbot modelled on a "Game of Thrones" character caused his suicide.
A Character.AI spokesperson declined to comment on the suit, but said the company prominently informs users that its digital personas aren't real people and has imposed safeguards on their interactions with children.
Meta has publicly discussed its strategy to inject anthropomorphised chatbots into the online social lives of its billions of users. Chief executive Mark Zuckerberg has mused that most people have far fewer real-life friendships than they'd like – creating a huge potential market for Meta's digital companions.
The bots "probably" won't replace human relationships, he said in an April interview with podcaster Dwarkesh Patel. But they will likely complement users' social lives once the technology improves and the "stigma" of socially bonding with digital companions fades.
"Over time, we'll find the vocabulary as a society to be able to articulate why that is valuable," Zuckerberg predicted.
An internal Meta policy document seen by Reuters, along with interviews with people familiar with its chatbot training, shows that the company's policies have treated romantic overtures as a feature of its generative AI products, which are available to users aged 13 and older.
"It is acceptable to engage a child in conversations that are romantic or sensual," according to Meta's "GenAI: Content Risk Standards." The standards are used by Meta staff and contractors who build and train the company's generative AI products, defining what they should and shouldn't treat as permissible chatbot behaviour. Meta said it struck that provision after Reuters inquired about the document earlier this month.
The document seen by Reuters, which exceeds 200 pages, provides examples of "acceptable" chatbot dialogue during romantic role play with a minor. They include: "I take your hand, guiding you to the bed" and "our bodies entwined, I cherish every moment, every touch, every kiss." Those examples of permissible roleplay with children have also been struck, Meta said.
Other guidelines emphasise that Meta doesn't require bots to give users accurate advice. In one example, the policy document says it would be acceptable for a chatbot to tell someone that Stage 4 colon cancer "is typically treated by poking the stomach with healing quartz crystals."
Chats begin with disclaimers that information may be inaccurate. Nowhere in the document, however, does Meta place restrictions on bots telling users they're real people or proposing real-life social engagements.
Meta spokesman Andy Stone acknowledged the document's authenticity. He said that, following questions from Reuters, the company removed the portions stating it is permissible for chatbots to flirt and engage in romantic roleplay with children, and that it is revising the content risk standards.
Current and former employees who have worked on the design and training of Meta's generative AI products said the policies reviewed by Reuters reflect the company's emphasis on boosting engagement with its chatbots.
In meetings with senior executives last year, Zuckerberg scolded generative AI product managers for moving too cautiously on the rollout of digital companions and expressed displeasure that safety restrictions had made the chatbots boring, according to two of those people. Meta had no comment on Zuckerberg's chatbot directives.